Node Stats API

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
The node stats API retrieves runtime stats about Logstash.
```
GET /_node/stats/<types>
```

Where `<types>` is optional and specifies the types of stats you want to return.

By default, all stats are returned. You can limit the info that's returned by combining any of the following types in a comma-separated list:

| Type | Description |
| --- | --- |
| `jvm` | Gets JVM stats, including stats about threads, memory usage, garbage collectors, and uptime. |
| `process` | Gets process stats, including stats about file descriptors, memory consumption, and CPU usage. |
| `pipeline` | Gets runtime stats about the Logstash pipeline. |
| `reloads` | Gets runtime stats about config reload successes and failures. |
| `os` | Gets runtime stats about cgroups when Logstash is running in a container. |
See Common Options for a list of options that can be applied to all Logstash monitoring APIs.
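As an illustration of how the types combine, the following sketch builds the endpoint path from a list of requested types (the helper function `node_stats_path` is hypothetical, not part of Logstash; the API itself is served on Logstash's monitoring port, 9600 by default):

```python
def node_stats_path(types=()):
    """Build the node stats endpoint path.

    With no types, the bare /_node/stats endpoint returns all stats;
    otherwise the types are joined into a comma-separated list.
    """
    if types:
        return "/_node/stats/" + ",".join(types)
    return "/_node/stats"

print(node_stats_path(["jvm", "process"]))  # -> /_node/stats/jvm,process
print(node_stats_path())                    # -> /_node/stats
```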
JVM Stats
The following request returns a JSON document containing JVM stats:

```
GET /_node/stats/jvm
```

Example response:

```json
{
  "jvm": {
    "threads": {
      "count": 35,
      "peak_count": 36
    },
    "mem": {
      "heap_used_in_bytes": 318691184,
      "heap_used_percent": 15,
      "heap_committed_in_bytes": 519045120,
      "heap_max_in_bytes": 2075918336,
      "non_heap_used_in_bytes": 189382304,
      "non_heap_committed_in_bytes": 200728576,
      "pools": {
        "survivor": {
          "peak_used_in_bytes": 8912896,
          "used_in_bytes": 9538656,
          "peak_max_in_bytes": 35782656,
          "max_in_bytes": 71565312,
          "committed_in_bytes": 17825792
        },
        "old": {
          "peak_used_in_bytes": 106946320,
          "used_in_bytes": 181913072,
          "peak_max_in_bytes": 715849728,
          "max_in_bytes": 1431699456,
          "committed_in_bytes": 357957632
        },
        "young": {
          "peak_used_in_bytes": 71630848,
          "used_in_bytes": 127239456,
          "peak_max_in_bytes": 286326784,
          "max_in_bytes": 572653568,
          "committed_in_bytes": 143261696
        }
      }
    },
    "gc": {
      "collectors": {
        "old": {
          "collection_time_in_millis": 58,
          "collection_count": 2
        },
        "young": {
          "collection_time_in_millis": 338,
          "collection_count": 26
        }
      }
    },
    "uptime_in_millis": 382701
  }
}
```
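The reported `heap_used_percent` is simply the integer percentage of `heap_used_in_bytes` over `heap_max_in_bytes`, which you can verify from the figures in the example response above (the abridged JSON below is copied from that response; this is an illustration, not part of the API):

```python
import json

# Abridged from the example GET /_node/stats/jvm response above.
response = json.loads("""
{"jvm": {"mem": {"heap_used_in_bytes": 318691184,
                 "heap_max_in_bytes": 2075918336,
                 "heap_used_percent": 15}}}
""")

mem = response["jvm"]["mem"]
# Integer percentage of used heap over max heap.
computed = mem["heap_used_in_bytes"] * 100 // mem["heap_max_in_bytes"]
print(computed)  # -> 15, matching the reported heap_used_percent
```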
Process Stats
The following request returns a JSON document containing process stats:

```
GET /_node/stats/process
```

Example response:

```json
{
  "process": {
    "open_file_descriptors": 164,
    "peak_open_file_descriptors": 166,
    "max_file_descriptors": 10240,
    "mem": {
      "total_virtual_in_bytes": 5399474176
    },
    "cpu": {
      "total_in_millis": 72810537000,
      "percent": 0,
      "load_average": {
        "1m": 2.41943359375
      }
    }
  }
}
```
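A common use of these fields is to watch file descriptor headroom. The sketch below computes usage from the example response above (the 80% alert threshold is an assumption, not a Logstash default):

```python
# Values copied from the example GET /_node/stats/process response.
process = {
    "open_file_descriptors": 164,
    "peak_open_file_descriptors": 166,
    "max_file_descriptors": 10240,
}

# Fraction of the descriptor limit currently in use.
usage = process["open_file_descriptors"] / process["max_file_descriptors"]
print(f"{usage:.1%}")  # -> 1.6%

# Hypothetical alert threshold: flag when usage exceeds 80% of the limit.
if usage > 0.8:
    print("warning: running low on file descriptors")
```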
Pipeline Stats
The following request returns a JSON document containing pipeline stats, including:
- the number of events that were input, filtered, or output by the pipeline
- stats for each configured filter or output stage
- info about config reload successes and failures (when config reload is enabled)
- info about the persistent queue (when persistent queues are enabled)
Detailed pipeline stats for input plugins are not currently available, but will be available in a future release. For now, the node stats API returns an empty array for inputs (`"inputs": []`).
```
GET /_node/stats/pipeline
```

Example response:

```json
{
  "pipeline": {
    "events": {
      "duration_in_millis": 6304989,
      "in": 200,
      "filtered": 200,
      "out": 200
    },
    "plugins": {
      "inputs": [],
      "filters": [
        {
          "id": "4e3d4bed6ba821ebb47f4752bb757b04a754d736-2",
          "events": {
            "duration_in_millis": 113,
            "in": 200,
            "out": 200
          },
          "matches": 200,
          "patterns_per_field": {
            "message": 1
          },
          "name": "grok"
        },
        {
          "id": "4e3d4bed6ba821ebb47f4752bb757b04a754d736-3",
          "events": {
            "duration_in_millis": 526,
            "in": 200,
            "out": 200
          },
          "name": "geoip"
        }
      ],
      "outputs": [
        {
          "id": "4e3d4bed6ba821ebb47f4752bb757b04a754d736-4",
          "events": {
            "duration_in_millis": 2312,
            "in": 200,
            "out": 200
          },
          "name": "stdout"
        }
      ]
    },
    "reloads": {
      "last_error": null,
      "successes": 0,
      "last_success_timestamp": null,
      "last_failure_timestamp": null,
      "failures": 0
    },
    "queue": {
      "events": 26,
      "type": "persisted",
      "capacity": {
        "page_capacity_in_bytes": 262144000,
        "max_queue_size_in_bytes": 4294967296,
        "max_unread_events": 0
      },
      "data": {
        "path": "/path/to/data/queue",
        "free_space_in_bytes": 123027787776,
        "storage_type": "hfs"
      }
    }
  }
}
```
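The per-plugin `events` counters make it easy to see where pipeline time is going: dividing a stage's `duration_in_millis` by the events it emitted gives its average cost per event. The sketch below applies this to the filter stats from the example response above (the calculation is an illustration, not something the API returns):

```python
# Filter stats copied (abridged) from the example pipeline response.
filters = [
    {"name": "grok", "events": {"duration_in_millis": 113, "in": 200, "out": 200}},
    {"name": "geoip", "events": {"duration_in_millis": 526, "in": 200, "out": 200}},
]

# Average milliseconds spent per event in each filter stage.
avg_ms = {
    f["name"]: f["events"]["duration_in_millis"] / f["events"]["out"]
    for f in filters
}

for name, avg in avg_ms.items():
    print(f"{name}: {avg:.3f} ms/event")
# grok: 0.565 ms/event
# geoip: 2.630 ms/event
```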
Reload Stats
The following request returns a JSON document that shows info about config reload successes and failures.

```
GET /_node/stats/reloads
```

Example response:

```json
{
  "reloads": {
    "successes": 0,
    "failures": 0
  }
}
```
OS Stats
When Logstash is running in a container, the following request returns a JSON document that contains cgroup information to give you a more accurate view of CPU load, including whether the container is being throttled.

```
GET /_node/stats/os
```

Example response:

```json
{
  "os": {
    "cgroup": {
      "cpuacct": {
        "control_group": "/elastic1",
        "usage_nanos": 378477588075
      },
      "cpu": {
        "control_group": "/elastic1",
        "cfs_period_micros": 1000000,
        "cfs_quota_micros": 800000,
        "stat": {
          "number_of_elapsed_periods": 4157,
          "number_of_times_throttled": 460,
          "time_throttled_nanos": 581617440755
        }
      }
    }
  }
}
```
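Two quantities worth deriving from these fields are the container's effective CPU allowance (`cfs_quota_micros / cfs_period_micros`) and how often it is being throttled (`number_of_times_throttled / number_of_elapsed_periods`). The sketch below computes both from the example response above (the derived metrics are an illustration, not fields the API returns):

```python
# cgroup cpu stats copied from the example GET /_node/stats/os response.
cpu = {
    "cfs_period_micros": 1000000,
    "cfs_quota_micros": 800000,
    "stat": {
        "number_of_elapsed_periods": 4157,
        "number_of_times_throttled": 460,
        "time_throttled_nanos": 581617440755,
    },
}

# Effective CPU allowance: quota over period (here, 0.8 of one CPU).
allowance = cpu["cfs_quota_micros"] / cpu["cfs_period_micros"]

# Fraction of scheduler periods in which the cgroup was throttled.
throttled = (cpu["stat"]["number_of_times_throttled"]
             / cpu["stat"]["number_of_elapsed_periods"])

print(allowance)            # -> 0.8
print(round(throttled, 3))  # -> 0.111
```

A persistently high throttled fraction suggests the container's CPU quota is too small for the Logstash workload.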