Commit 3146c911 authored by floragunn GmbH, committed by Anthony Lapenna

Add Search Guard support (#85)

parent 45f2bbbb
# Docker ELK stack (elastic stack)

[![Join the chat at https://gitter.im/deviantony/docker-elk](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/deviantony/docker-elk?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

Run version 5.1.1 of the ELK (Elasticsearch, Logstash, Kibana) stack with Docker and Docker Compose.

It will give you the ability to analyze any data set by using the searching/aggregation capabilities of Elasticsearch and the visualization power of Kibana.
Based on the official images:

* [elasticsearch](https://registry.hub.docker.com/_/elasticsearch/)
* [logstash](https://registry.hub.docker.com/_/logstash/)
* [kibana](https://registry.hub.docker.com/_/kibana/)
**Note**: This version has [Search Guard support](https://github.com/floragunncom/search-guard).

The default Search Guard configuration in this repository is:

* Basic authentication required to access Elasticsearch/Kibana
* HTTPS disabled
* Hostname verification disabled
* Self-signed SSL certificate for the transport protocol (do not use in production)

Existing users:

* admin (password: admin): no restrictions for this user, can do everything
* logstash (password: logstash): CRUD permissions on the logstash-* indices
* kibanaro (password: kibanaro): Kibana user that can read every index
* kibanaserver (password: kibanaserver): user for the Kibana server (all permissions on the .kibana index)
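Once the stack is running and Search Guard has been initialized (see below), these credentials can be exercised with plain basic auth. The following is only a sketch: it assumes Elasticsearch's HTTP port 9200 is reachable on localhost, and the index/type names are placeholders.

```bash
# admin has no restrictions, e.g. it can read the cluster health
$ curl -u admin:admin 'http://localhost:9200/_cluster/health?pretty'

# logstash may create and write to logstash-* indices
$ curl -u logstash:logstash -XPOST 'http://localhost:9200/logstash-2017.01.01/logs' -d '{"message": "test"}'

# kibanaro is read-only, so the same write is rejected
$ curl -u kibanaro:kibanaro -XPOST 'http://localhost:9200/logstash-2017.01.01/logs' -d '{"message": "test"}'
```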
# Requirements
You can also choose to run it in background (detached mode):

```bash
$ docker-compose up -d
```
After Elasticsearch has started, Search Guard has to be initialized:
```bash
$ docker exec -it dockerelk_elasticsearch_1 /init_sg.sh
```
_This executes sgadmin and loads the configuration from elasticsearch/config/sg*.yml._
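Once the script has run, a quick smoke test (again assuming port 9200 is reachable on localhost) shows that authentication is now enforced:

```bash
# anonymous requests are rejected with 401 Unauthorized...
$ curl -i http://localhost:9200

# ...while authenticated requests go through
$ curl -u admin:admin http://localhost:9200
```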
Now that the stack is running, you'll want to inject logs into it. The shipped logstash configuration allows you to send content via tcp:
```bash
$ nc localhost 5000 < /path/to/logfile.log
```
And then access the Kibana UI by hitting [http://localhost:5601](http://localhost:5601) with a web browser and log in with:
* user: *kibanaro*
* password: *kibanaro*
*NOTE*: You'll need to inject data into logstash before being able to create a logstash index pattern in Kibana. Then all you should have to do is hit the create button.

See: https://www.elastic.co/guide/en/kibana/current/setup.html#connect
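For a quick end-to-end check you can, for instance, push a single line through the tcp input and then look it up with the read-only kibanaro user; this assumes ports 5000 and 9200 are reachable on localhost and that Logstash has had a few seconds to ship the event:

```bash
# send one test event through the logstash tcp input
$ echo 'hello docker-elk' | nc localhost 5000

# ...and search for it in Elasticsearch as the read-only kibanaro user
$ curl -u kibanaro:kibanaro 'http://localhost:9200/logstash-*/_search?q=hello&pretty'
```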
# elasticsearch Dockerfile: Elasticsearch 5.1.1 with the Search Guard plugin
FROM elasticsearch:5.1.1

COPY config/ /etc/elasticsearch

# Install the Search Guard plugin matching this Elasticsearch version
RUN elasticsearch-plugin install --batch com.floragunn:search-guard-5:5.1.1-10

# /init_sg.sh runs sgadmin to load the sg_*.yml configuration into the cluster
RUN printf "#!/bin/bash\n/usr/share/elasticsearch/plugins/search-guard-5/tools/sgadmin.sh -cd /etc/elasticsearch -ts /etc/elasticsearch/truststore.jks -ks /etc/elasticsearch/kirk-keystore.jks -nhnv -icl" > /init_sg.sh
RUN chmod +x /usr/share/elasticsearch/plugins/search-guard-5/tools/sgadmin.sh
RUN chmod +x /init_sg.sh

CMD ["-E", "path.conf=/etc/elasticsearch", "-E", "network.host=0.0.0.0", "-E", "discovery.zen.minimum_master_nodes=1"]
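# Search Guard settings added to elasticsearch.yml (in elasticsearch/config/):
# transport-layer TLS keystores and the DN of the admin certificate used by sgadmin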
searchguard.ssl.transport.keystore_filepath: node-0-keystore.jks
searchguard.ssl.transport.truststore_filepath: truststore.jks
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.authcz.admin_dn:
- "CN=kirk,OU=client,O=client,l=tEst,C=De"
ALL:
- "indices:*"
MANAGE:
- "indices:monitor/*"
- "indices:admin/*"
CREATE_INDEX:
- "indices:admin/create"
- "indices:admin/mapping/put"
MANAGE_ALIASES:
- "indices:admin/aliases*"
MONITOR:
- "indices:monitor/*"
DATA_ACCESS:
- "indices:data/*"
- "indices:admin/mapping/put"
WRITE:
- "indices:data/write*"
- "indices:admin/mapping/put"
READ:
- "indices:data/read*"
DELETE:
- "indices:data/write/delete*"
CRUD:
- READ
- WRITE
SEARCH:
- "indices:data/read/search*"
- "indices:data/read/msearch*"
- SUGGEST
SUGGEST:
- "indices:data/read/suggest*"
INDEX:
- "indices:data/write/index*"
- "indices:data/write/update*"
- "indices:admin/mapping/put"
# no bulk index
GET:
- "indices:data/read/get*"
- "indices:data/read/mget*"
# CLUSTER
CLUSTER_ALL:
- cluster:*
CLUSTER_MONITOR:
- cluster:monitor/*
CLUSTER_COMPOSITE_OPS_RO:
- "indices:data/read/mget"
- "indices:data/read/msearch"
- "indices:data/read/mtv"
- "indices:data/read/coordinate-msearch*"
- "indices:admin/aliases/exists*"
- "indices:admin/aliases/get*"
CLUSTER_COMPOSITE_OPS:
- "indices:data/write/bulk"
- "indices:admin/aliases*"
- CLUSTER_COMPOSITE_OPS_RO
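# sg_config.yml: HTTP basic authentication against the internal user database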
searchguard:
dynamic:
http:
xff:
enabled: false
authc:
basic_internal_auth_domain:
http_authenticator:
type: basic
authentication_backend:
type: intern
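# sg_internal_users.yml: internal users with bcrypt password hashes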
admin:
hash: $2a$12$VcCDgh2NDk07JGN0rjGbM.Ad41qVR/YFJcgHp0UGns5JDymv..TOG
#password is: admin
logstash:
hash: $2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2
#password is: logstash
kibanaserver:
hash: $2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H.
#password is: kibanaserver
kibanaro:
hash: $2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC
#password is: kibanaro
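# sg_roles.yml: roles granting cluster- and index-level permissions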
sg_all_access:
cluster:
- '*'
indices:
'*':
'*':
- '*'
sg_kibana:
cluster:
- CLUSTER_COMPOSITE_OPS_RO
- CLUSTER_MONITOR
indices:
'*':
'*':
- READ
- indices:admin/mappings/fields/get*
'?kibana':
'*':
- READ
- WRITE
- 'indices:admin/mappings/fields/get*'
- 'indices:admin/refresh*'
sg_kibana_server:
cluster:
- CLUSTER_MONITOR
- CLUSTER_COMPOSITE_OPS
indices:
'?kibana':
'*':
- ALL
sg_logstash:
cluster:
- indices:admin/template/get
- indices:admin/template/put
- CLUSTER_MONITOR
- CLUSTER_COMPOSITE_OPS
indices:
'logstash-*':
'*':
- CRUD
- CREATE_INDEX
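# sg_roles_mapping.yml: maps the internal users to the roles above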
sg_logstash:
users:
- logstash
sg_kibana_server:
users:
- kibanaserver
sg_kibana:
users:
- kibanaro
sg_all_access:
users:
- admin
# kibana Dockerfile: Kibana 5.1.1 with the Search Guard Kibana plugin
FROM kibana:5.1.1

RUN kibana-plugin install https://github.com/floragunncom/search-guard-kibana-plugin/releases/download/v5.1.1-alpha/searchguard-kibana-alpha-5.1.1.zip
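# Excerpts from kibana.yml (Kibana server configuration)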
searchguard.cookie.password: "123567818187654rwrwfsfshdhdhtegdhfzftdhncn"
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

elasticsearch.url: "http://elasticsearch:9200"

# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibanaserver"
elasticsearch.password: "kibanaserver"

# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
# files enable SSL for outgoing requests from the Kibana server to the browser.
# logstash Dockerfile
FROM logstash:5.1.1

# Add your logstash plugins setup here
# Example: RUN logstash-plugin install logstash-filter-json
# logstash pipeline configuration
input {
  ...
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    # authenticate against Elasticsearch as the logstash user defined in Search Guard
    user => "logstash"
    password => "logstash"
  }
}