Commit af9e335a authored by Anthony Lapenna

ELK 5 with X-Pack support

parents b5a4deee 7eeb5703
**README.md**

@@ -4,6 +4,8 @@
Run the latest version of the ELK (Elasticsearch, Logstash, Kibana) stack with Docker and Docker-compose.
**Note**: This version has [X-Pack support](https://www.elastic.co/products/x-pack).
It will give you the ability to analyze any data set by using the searching/aggregation capabilities of Elasticsearch and the visualization power of Kibana.
Based on the official images:

@@ -20,27 +22,36 @@ Based on the official images:

2. Install [Docker-compose](http://docs.docker.com/compose/install/).
3. Clone this repository
## Increase max_map_count on your host (Linux)
You need to increase `max_map_count` on your Docker host:
```bash
$ sudo sysctl -w vm.max_map_count=262144
```
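The change above applies immediately but does not survive a reboot. A minimal sketch for making it persistent, assuming your distribution reads `/etc/sysctl.conf` at boot (some use `/etc/sysctl.d/*.conf` instead):

```bash
# Append the setting so it is re-applied at boot.
$ echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
# Reload settings from the file without rebooting.
$ sudo sysctl -p
```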
## SELinux

On distributions which have SELinux enabled out of the box, you will need to either re-context the files or set SELinux into Permissive mode in order for docker-elk to start properly. For example, on Red Hat and CentOS, the following will apply the proper context:
```bash
.-root@centos ~
-$ chcon -R system_u:object_r:admin_home_t:s0 docker-elk/
```
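Alternatively, if you prefer Permissive mode over re-contexting the files, a quick sketch (note this relaxes SELinux enforcement system-wide and does not persist across reboots):

```bash
# Check the current SELinux mode.
$ getenforce
# Switch enforcement off until the next reboot.
$ sudo setenforce 0
```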
## Windows

When cloning this repo on Windows with line ending conversion enabled (git option `core.autocrlf` set to `true`), the script `kibana/entrypoint.sh` will malfunction due to a corrupt shebang header, which must be terminated by `LF` only, not `CR+LF`:
```bash
...
Creating dockerelk_kibana_1
Attaching to dockerelk_elasticsearch_1, dockerelk_logstash_1, dockerelk_kibana_1
: No such file or directory/usr/bin/env: bash
```
So you have to either:

* disable line ending conversion *before* cloning the repository by setting `core.autocrlf` to `false`: `git config core.autocrlf false`, or
* convert the line endings in the script `kibana/entrypoint.sh` from `CR+LF` to `LF` (e.g. using Notepad++), as sketched below.
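For the second option, a minimal command-line sketch, assuming `dos2unix` or `sed` is available in your shell (e.g. Git Bash):

```bash
# Strip the trailing CR from every line of the entrypoint script.
$ dos2unix kibana/entrypoint.sh
# Equivalent with sed, if dos2unix is not installed:
$ sed -i 's/\r$//' kibana/entrypoint.sh
```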
@@ -67,17 +78,14 @@ Now that the stack is running, you'll want to inject logs in it. The shipped log

```bash
$ nc localhost 5000 < /path/to/logfile.log
```
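If you don't have a log file at hand, a single test event is enough to verify the pipeline end to end (a sketch; any line of text will do):

```bash
# Push one event into the Logstash TCP input on port 5000.
$ echo "hello ELK" | nc localhost 5000
```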
And then access the Kibana UI by hitting [http://localhost:5601](http://localhost:5601) with a web browser and use the following credentials to log in:

* user: *elastic*
* password: *changeme*

*NOTE*: You'll need to inject data into logstash before being able to create a logstash index in Kibana. Then all you should have to do is hit the create button.

See: https://www.elastic.co/guide/en/kibana/current/setup.html#connect
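To confirm that X-Pack security is active before opening Kibana, you can hit Elasticsearch directly with the same default credentials (a curl sketch; an unauthenticated request should return a 401):

```bash
# List indices as the built-in elastic user.
$ curl -u elastic:changeme http://localhost:9200/_cat/indices
```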
By default, the stack exposes the following ports:

* 5000: Logstash TCP input.
@@ -113,7 +121,7 @@ If you want to override the default configuration, add the *LS_HEAP_SIZE* enviro

```yml
logstash:
  build: logstash/
  command: -f /etc/logstash/conf.d/
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
```
@@ -140,12 +148,11 @@ Update the container in the `docker-compose.yml` to add the *LS_JAVA_OPTS* envir

```yml
logstash:
  build: logstash/
  command: -f /etc/logstash/conf.d/
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5000:5000"
  links:
    - elasticsearch
  environment:
```
@@ -163,9 +170,11 @@ Then, you'll need to map your configuration file inside the container in the `do

```yml
elasticsearch:
  build: elasticsearch/
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xms1g -Xmx1g"
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
```
@@ -178,6 +187,9 @@ elasticsearch:

```yml
  command: elasticsearch -Des.network.host=_non_loopback_ -Des.cluster.name=my-cluster
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xms1g -Xmx1g"
```
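You can check that the cluster name took effect by querying the Elasticsearch root endpoint, whose JSON banner includes a `cluster_name` field (a sketch, reusing the default X-Pack credentials):

```bash
# The response JSON includes a "cluster_name" field.
$ curl -u elastic:changeme http://localhost:9200/
```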
# Storage
@@ -191,9 +203,11 @@ In order to persist Elasticsearch data even after removing the Elasticsearch con

```yml
elasticsearch:
  build: elasticsearch/
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xms1g -Xmx1g"
  volumes:
    - /path/to/storage:/usr/share/elasticsearch/data
```
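Before starting the stack, make sure the bind-mounted directory exists on the host (`/path/to/storage` is the placeholder from the snippet above); a sketch:

```bash
# Create the host directory backing /usr/share/elasticsearch/data.
$ mkdir -p /path/to/storage
# If Elasticsearch then fails to start with permission errors, adjust the
# ownership/permissions of this directory so the container can write to it.
```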
**docker-compose.yml**
version: '2'

services:
  elasticsearch:
    build: elasticsearch/
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms1g -Xmx1g"
    networks:
      - docker_elk
  logstash:
    build: logstash/
    command: -f /etc/logstash/conf.d/
    volumes:
      - ./logstash/config:/etc/logstash/conf.d
    ports:
      - "5000:5000"
    networks:
      - docker_elk
  kibana:
    build: kibana/
    volumes:
      - ./kibana/config/:/opt/kibana/config/
    ports:
      - "5601:5601"
    networks:
      - docker_elk

networks:
  docker_elk:
    driver: bridge
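With this compose file in place, the whole stack can be built and started from the repository root (standard Docker-compose usage):

```bash
# Build the three images and start the stack in the background.
$ docker-compose up -d
# Tail the aggregated logs of all three services.
$ docker-compose logs -f
```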
**elasticsearch/Dockerfile**

FROM elasticsearch:5
ENV ES_JAVA_OPTS="-Des.path.conf=/etc/elasticsearch"
RUN elasticsearch-plugin install --batch x-pack
CMD ["-E", "network.host=0.0.0.0", "-E", "discovery.zen.minimum_master_nodes=1"]
**kibana/Dockerfile**

FROM kibana:5

RUN apt-get update && apt-get install -y netcat bzip2

COPY entrypoint.sh /tmp/entrypoint.sh
RUN chmod +x /tmp/entrypoint.sh

RUN kibana-plugin install x-pack

CMD ["/tmp/entrypoint.sh"]
**kibana/config/kibana.yml**

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# This setting specifies the IP address of the back end server.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This setting
# cannot end in a slash.
# server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
# server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
# server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://elasticsearch:9200"

# When this setting’s value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
# elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn’t already exist.
# kibana.index: ".kibana"

# The default application to load.
# kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
# elasticsearch.username: "user"
# elasticsearch.password: "pass"

# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
# files enable SSL for outgoing requests from the Kibana server to the browser.
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
# elasticsearch.ssl.ca: /path/to/your/CA.pem

# To disregard the validity of SSL certificates, change this setting’s value to false.
# elasticsearch.ssl.verify: true

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
# elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
# elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
# elasticsearch.requestHeadersWhitelist: [ authorization ]

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
# elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
# elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
# pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
# logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
# logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
# logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
# logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 10000.
# ops.interval: 10000
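Once Kibana is up, its status API makes a convenient smoke test for this configuration (a sketch; the endpoint reports per-plugin health, including X-Pack):

```bash
# Returns overall status plus per-plugin state as JSON.
$ curl -u elastic:changeme http://localhost:5601/api/status
```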
**logstash/Dockerfile**

FROM logstash:5

# Add your logstash plugins setup here
# Example: RUN logstash-plugin install logstash-filter-json
**logstash/config/logstash.conf**

@@ -9,5 +9,7 @@ input {
output {
	elasticsearch {
		hosts => "elasticsearch:9200"
		user => "elastic"
		password => "changeme"
	}
}
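The `input` section of this file sits above the hunk and is not shown here. Given that the README documents port 5000 as the Logstash TCP input, it presumably looks something like the following sketch (illustrative, not the file's verbatim content):

input {
	tcp {
		port => 5000
	}
}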