Running Elasticsearch during development
There are many ways to run Elasticsearch while you are developing.
By snapshot
This will run a snapshot of Elasticsearch that is usually built nightly. Snapshot status is available on the dashboard.
yarn es snapshot
By default, two users are added to Elasticsearch:
- A superuser with username elastic and password changeme, which can be used to log in to Kibana.
- A user with username kibana_system and password changeme. This account is used by the Kibana server to authenticate itself to Elasticsearch and to perform certain actions on behalf of the end user. These credentials should be specified in your kibana.yml as described in Configure security (a minimal example follows this list).
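For reference, a minimal kibana.yml sketch for those kibana_system credentials uses the standard elasticsearch.username and elasticsearch.password settings (adjust the values if you have changed the defaults):
elasticsearch.username: kibana_system
elasticsearch.password: changeme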
See all available options, such as how to specify a specific license, with the --help flag:
yarn es snapshot --help
--license trial will give you access to all capabilities.
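For example, a snapshot run with a trial license might look like:
yarn es snapshot --license trial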
Keeping data between snapshots
If you want to keep the data in your Elasticsearch instance between runs of this command, use the following command to keep your data folder outside the downloaded snapshot folder:
yarn es snapshot -E path.data=../data
By source
If you have the Elasticsearch repo checked out locally and wish to run against that, use source. By default, it will reference an Elasticsearch checkout that is a sibling to the Kibana directory, named elasticsearch. If you wish to use a checkout in another location, you can provide it by supplying --source-path:
yarn es source
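For example, assuming --source-path accepts the checkout location directly, pointing at a checkout elsewhere might look like the following (the placeholder follows the same convention as the archive example below):
yarn es source --source-path=<path_to_es_checkout>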
From an archive
Use this if you already have a distributable. For released versions, one can be obtained on the Elasticsearch downloads page.
yarn es archive <full_path_to_archive>
Each of these will run Elasticsearch with a basic license. Additional options are available; pass --help for more information.
From a remote host
You can save some system resources, and the effort of generating sample data, if you have a remote Elasticsearch cluster to connect to. (Elasticians: you do! Check with your team about where to find credentials.)
You’ll need to create a kibana.dev.yml (see Customizing config/kibana.dev.yml) and add the following to it:
elasticsearch.hosts:
  - {{ url }}
elasticsearch.username: {{ username }}
elasticsearch.password: {{ password }}
elasticsearch.ssl.verificationMode: none
Running remote clusters
Set up remote clusters for cross-cluster search (CCS) and cross-cluster replication (CCR).
Start your primary cluster by running:
yarn es snapshot -E path.data=../data_prod1
Start your remote cluster by running:
yarn es snapshot -E transport.port=9500 -E http.port=9201 -E path.data=../data_prod2
Once both clusters are running, start Kibana. Kibana will connect to the primary cluster.
Set up the remote cluster in Kibana from either the Management → Elasticsearch → Remote Clusters UI or by running the following script in Console.
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster_one": {
          "seeds": [ "localhost:9500" ]
        }
      }
    }
  }
}
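As an optional check that is not part of the original steps, you can confirm that the remote cluster is connected by calling the remote info API in Console:
GET _remote/info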
Follow the cross-cluster search instructions for setting up index patterns to search across clusters (Use data views with cross-cluster search).
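For example, cross-cluster requests reference remote indices with the <cluster_alias>:<index> pattern; a Console search against the remote cluster configured above might look like this (my-index is a hypothetical index name):
GET cluster_one:my-index/_search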