Commit 6cd82cf4 authored by Antoine Cotten, committed by Anthony Lapenna

Use new official images (#96)

parent cdf52bbf
README.md

@@ -8,9 +8,9 @@ It will give you the ability to analyze any data set by using the searching/aggr

 Based on the official images:
-* [elasticsearch](https://registry.hub.docker.com/_/elasticsearch/)
-* [logstash](https://registry.hub.docker.com/_/logstash/)
-* [kibana](https://registry.hub.docker.com/_/kibana/)
+* [elasticsearch](https://github.com/elastic/elasticsearch-docker)
+* [logstash](https://github.com/elastic/logstash-docker)
+* [kibana](https://github.com/elastic/kibana-docker)

 **Note**: Other branches in this project are available:
@@ -90,9 +90,27 @@ The Kibana default configuration is stored in `kibana/config/kibana.yml`.

 ## How can I tune Logstash configuration?
-The logstash configuration is stored in `logstash/config/logstash.conf`.
-The folder `logstash/config` is mapped onto the container `/etc/logstash/conf.d` so you
+The Logstash container is using the [shipped configuration](https://github.com/elastic/logstash-docker/blob/master/build/logstash/config/logstash.yml).
+If you want to override the default configuration, create a file `logstash/config/logstash.conf` and add your configuration in it.
+Then, you'll need to map your configuration file inside the container in the `docker-compose.yml`. Update the logstash container declaration to:
+```yml
+logstash:
+  build: logstash/
+  volumes:
+    - ./logstash/pipeline:/usr/share/logstash/pipeline
+    - ./logstash/config:/usr/share/logstash/config
+  ports:
+    - "5000:5000"
+  networks:
+    - docker_elk
+  depends_on:
+    - elasticsearch
+```
+In the above example the folder `logstash/config` is mapped onto the container `/usr/share/logstash/config` so you
 can create more than one file in that folder if you'd like to. However, you must be aware that config files will be read from the directory in alphabetical order.
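For illustration (not part of this commit): in the official Logstash 5.x image the `/usr/share/logstash/config` directory is where Logstash reads its `logstash.yml` settings file, so an override dropped into `logstash/config` might look like the sketch below. Treat the exact keys as assumptions to verify against your Logstash version.

```yml
# Hypothetical logstash/config/logstash.yml override (illustrative sketch, not from this commit).
# http.host exposes the Logstash monitoring API outside the container;
# path.config points Logstash at the mounted pipeline directory.
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
```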
 ## How can I specify the amount of memory used by Logstash?

@@ -104,9 +122,8 @@ If you want to override the default configuration, add the *LS_HEAP_SIZE* enviro

 ```yml
 logstash:
   build: logstash/
-  command: -f /etc/logstash/conf.d/
   volumes:
-    - ./logstash/config:/etc/logstash/conf.d
+    - ./logstash/pipeline:/usr/share/logstash/pipeline
   ports:
     - "5000:5000"
   networks:
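The excerpt above stops before the `environment` block that the README text refers to. Purely as a hedged sketch (the value is a placeholder, not taken from this commit), the declaration would look something like:

```yml
# Illustrative sketch only: declare the Logstash heap size via an environment
# variable on the logstash service (the "2048m" value is a placeholder).
environment:
  LS_HEAP_SIZE: "2048m"
```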
@@ -122,7 +139,7 @@ logstash:

 To add plugins to logstash you have to:

 1. Add a RUN statement to the `logstash/Dockerfile` (ex. `RUN logstash-plugin install logstash-filter-json`)
-2. Add the associated plugin code configuration to the `logstash/config/logstash.conf` file
+2. Add the associated plugin code configuration to the `logstash/pipeline/logstash.conf` file

 ## How can I enable a remote JMX connection to Logstash?

@@ -133,9 +150,8 @@ Update the container in the `docker-compose.yml` to add the *LS_JAVA_OPTS* envir

 ```yml
 logstash:
   build: logstash/
-  command: -f /etc/logstash/conf.d/
   volumes:
-    - ./logstash/config:/etc/logstash/conf.d
+    - ./logstash/pipeline:/usr/share/logstash/pipeline
   ports:
     - "5000:5000"
   networks:
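As with the memory example, the `environment` block itself sits outside the excerpt above. A hypothetical sketch using standard JVM JMX flags (port and security settings are placeholders to adapt, not taken from this commit):

```yml
# Hypothetical example: standard JVM flags enabling an unauthenticated remote
# JMX listener on port 18080 (placeholder values; secure this before real use).
environment:
  LS_JAVA_OPTS: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=18080 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
```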
@@ -148,7 +164,7 @@ logstash:

 ## How can I tune Elasticsearch configuration?
-The Elasticsearch container is using the shipped configuration and it is not exposed by default.
+The Elasticsearch container is using the [shipped configuration](https://github.com/elastic/elasticsearch-docker/blob/master/build/elasticsearch/elasticsearch.yml).
 If you want to override the default configuration, create a file `elasticsearch/config/elasticsearch.yml` and add your configuration in it.

@@ -168,17 +184,18 @@ elasticsearch:
     - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
 ```
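As an illustrative sketch of what such an override file could contain (the settings and values below are examples, not taken from this commit):

```yml
# Hypothetical elasticsearch/config/elasticsearch.yml override (example values).
# cluster.name renames the cluster; network.host binds Elasticsearch on all
# interfaces so it is reachable from the other containers.
cluster.name: my-cluster
network.host: "0.0.0.0"
```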
-You can also specify the options you want to override directly in the command field:
+You can also specify the options you want to override directly via environment variables:

 ```yml
 elasticsearch:
   build: elasticsearch/
-  command: elasticsearch -Des.network.host=_non_loopback_ -Des.cluster.name: my-cluster
   ports:
     - "9200:9200"
     - "9300:9300"
   environment:
     ES_JAVA_OPTS: "-Xms1g -Xmx1g"
+    network.host: "_non_loopback_"
+    cluster.name: "my-cluster"
   networks:
     - docker_elk
 ```
@@ -194,12 +211,13 @@ In order to persist Elasticsearch data even after removing the Elasticsearch con

 ```yml
 elasticsearch:
   build: elasticsearch/
-  command: elasticsearch -Des.network.host=_non_loopback_ -Des.cluster.name: my-cluster
   ports:
     - "9200:9200"
     - "9300:9300"
   environment:
     ES_JAVA_OPTS: "-Xms1g -Xmx1g"
+    network.host: "_non_loopback_"
+    cluster.name: "my-cluster"
   networks:
     - docker_elk
   volumes:
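To make the data-persistence excerpt above concrete: a hedged sketch of the kind of volume entry it leads up to. The host path is a placeholder; `/usr/share/elasticsearch/data` is the data directory used by the official Elasticsearch image.

```yml
# Illustrative sketch: bind-mount a host directory over the Elasticsearch data
# directory so indices survive container removal (host path is a placeholder).
volumes:
  - ./elasticsearch/data:/usr/share/elasticsearch/data
```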
docker-compose.yml

@@ -8,13 +8,19 @@ services:
       - "9300:9300"
     environment:
       ES_JAVA_OPTS: "-Xms1g -Xmx1g"
+      # disable X-Pack
+      # see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
+      # https://www.elastic.co/guide/en/x-pack/current/installing-xpack.html#xpack-enabling
+      xpack.security.enabled: "false"
+      xpack.monitoring.enabled: "false"
+      xpack.graph.enabled: "false"
+      xpack.watcher.enabled: "false"
     networks:
       - docker_elk
   logstash:
     build: logstash/
-    command: -f /etc/logstash/conf.d/
     volumes:
-      - ./logstash/config:/etc/logstash/conf.d
+      - ./logstash/pipeline:/usr/share/logstash/pipeline
     ports:
       - "5000:5000"
     networks:

@@ -24,7 +30,7 @@ services:
   kibana:
     build: kibana/
     volumes:
-      - ./kibana/config/:/etc/kibana/
+      - ./kibana/config/:/usr/share/kibana/config
     ports:
       - "5601:5601"
     networks:
elasticsearch/Dockerfile

-FROM elasticsearch:5
-ENV ES_JAVA_OPTS="-Des.path.conf=/etc/elasticsearch"
-CMD ["-E", "network.host=0.0.0.0", "-E", "discovery.zen.minimum_master_nodes=1"]
+# https://github.com/elastic/elasticsearch-docker
+FROM docker.elastic.co/elasticsearch/elasticsearch:5.2.1
kibana/Dockerfile

-FROM kibana:5
+# https://github.com/elastic/kibana-docker
+FROM docker.elastic.co/kibana/kibana:5.2.1
kibana/config/kibana.yml

-# Kibana is served by a back end server. This setting specifies the port to use.
-server.port: 5601
-
-# This setting specifies the IP address of the back end server.
-server.host: "0.0.0.0"
-
-# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This setting
-# cannot end in a slash.
-# server.basePath: ""
-
-# The maximum payload size in bytes for incoming server requests.
-# server.maxPayloadBytes: 1048576
-
-# The Kibana server's name. This is used for display purposes.
-# server.name: "your-hostname"
-
-# The URL of the Elasticsearch instance to use for all your queries.
-elasticsearch.url: "http://elasticsearch:9200"
-
-# When this setting’s value is true Kibana uses the hostname specified in the server.host
-# setting. When the value of this setting is false, Kibana uses the hostname of the host
-# that connects to this Kibana instance.
-# elasticsearch.preserveHost: true
-
-# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
-# dashboards. Kibana creates a new index if the index doesn’t already exist.
-# kibana.index: ".kibana"
-
-# The default application to load.
-# kibana.defaultAppId: "discover"
-
-# If your Elasticsearch is protected with basic authentication, these settings provide
-# the username and password that the Kibana server uses to perform maintenance on the Kibana
-# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
-# is proxied through the Kibana server.
-# elasticsearch.username: "user"
-# elasticsearch.password: "pass"
-
-# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
-# files enable SSL for outgoing requests from the Kibana server to the browser.
-# server.ssl.cert: /path/to/your/server.crt
-# server.ssl.key: /path/to/your/server.key
-
-# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
-# These files validate that your Elasticsearch backend uses the same key files.
-# elasticsearch.ssl.cert: /path/to/your/client.crt
-# elasticsearch.ssl.key: /path/to/your/client.key
-
-# Optional setting that enables you to specify a path to the PEM file for the certificate
-# authority for your Elasticsearch instance.
-# elasticsearch.ssl.ca: /path/to/your/CA.pem
-
-# To disregard the validity of SSL certificates, change this setting’s value to false.
-# elasticsearch.ssl.verify: true
-
-# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
-# the elasticsearch.requestTimeout setting.
-# elasticsearch.pingTimeout: 1500
-
-# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
-# must be a positive integer.
-# elasticsearch.requestTimeout: 30000
-
-# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
-# headers, set this value to [] (an empty list).
-# elasticsearch.requestHeadersWhitelist: [ authorization ]
-
-# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
-# elasticsearch.shardTimeout: 0
-
-# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
-# elasticsearch.startupTimeout: 5000
-
-# Specifies the path where Kibana creates the process ID file.
-# pid.file: /var/run/kibana.pid
-
-# Enables you specify a file where Kibana stores log output.
-# logging.dest: stdout
-
-# Set the value of this setting to true to suppress all logging output.
-# logging.silent: false
-
-# Set the value of this setting to true to suppress all logging output other than error messages.
-# logging.quiet: false
-
-# Set the value of this setting to true to log all events, including system usage information
-# and all requests.
-# logging.verbose: false
-
-# Set the interval in milliseconds to sample system and process performance
-# metrics. Minimum is 100ms. Defaults to 10000.
-# ops.interval: 10000
+---
+## Default Kibana configuration from kibana-docker.
+## from https://github.com/elastic/kibana-docker/blob/master/build/kibana/config/kibana.yml
+#
+server.name: kibana
+server.host: "0"
+elasticsearch.url: http://elasticsearch:9200
+elasticsearch.username: elastic
+elasticsearch.password: changeme
+xpack.monitoring.ui.container.elasticsearch.enabled: false
+
+## Disable X-Pack
+## see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
+## https://www.elastic.co/guide/en/x-pack/current/installing-xpack.html#xpack-enabling
+#
+xpack.security.enabled: false
+xpack.monitoring.enabled: false
+xpack.graph.enabled: false
+xpack.reporting.enabled: false
logstash/Dockerfile

-FROM logstash:5
+# https://github.com/elastic/logstash-docker
+FROM docker.elastic.co/logstash/logstash:5.2.1

 # Add your logstash plugins setup here
 # Example: RUN logstash-plugin install logstash-filter-json