Day 80 - ELK Stack

This commit is contained in:
michaelcade 2022-03-28 08:11:09 +01:00
parent ae46473156
commit 47bef4f851
66 changed files with 1809 additions and 10 deletions

Seven binary image files added (not shown).

View File

@ -0,0 +1,22 @@
ELASTIC_VERSION=8.1.0
## Passwords for stack users
#
# User 'elastic' (built-in)
#
# Superuser role, full access to cluster management and data indices.
# https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html
ELASTIC_PASSWORD='90DaysOfDevOps'
# User 'logstash_internal' (custom)
#
# The user Logstash uses to connect and send data to Elasticsearch.
# https://www.elastic.co/guide/en/logstash/current/ls-security.html
LOGSTASH_INTERNAL_PASSWORD='90DaysOfDevOps'
# User 'kibana_system' (built-in)
#
# The user Kibana uses to connect and communicate with Elasticsearch.
# https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html
KIBANA_SYSTEM_PASSWORD='90DaysOfDevOps'

View File

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2015 Anthony Lapenna
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@ -0,0 +1,438 @@
# Elastic stack (ELK) on Docker
[![Elastic Stack version](https://img.shields.io/badge/Elastic%20Stack-8.1.0-00bfb3?style=flat&logo=elastic-stack)](https://www.elastic.co/blog/category/releases)
[![Build Status](https://github.com/deviantony/docker-elk/workflows/CI/badge.svg?branch=main)](https://github.com/deviantony/docker-elk/actions?query=workflow%3ACI+branch%3Amain)
[![Join the chat at https://gitter.im/deviantony/docker-elk](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/deviantony/docker-elk?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
Run the latest version of the [Elastic stack][elk-stack] with Docker and Docker Compose.
It gives you the ability to analyze any data set by using the searching/aggregation capabilities of Elasticsearch and
the visualization power of Kibana.
![Animated demo](https://user-images.githubusercontent.com/3299086/155972072-0c89d6db-707a-47a1-818b-5f976565f95a.gif)
*:information_source: The Docker images backing this stack include [X-Pack][xpack] with [paid features][paid-features]
enabled by default (see [How to disable paid features](#how-to-disable-paid-features) to disable them). **The [trial
license][trial-license] is valid for 30 days**. After this license expires, you can continue using the free features
seamlessly, without losing any data.*
Based on the official Docker images from Elastic:
* [Elasticsearch](https://github.com/elastic/elasticsearch/tree/master/distribution/docker)
* [Logstash](https://github.com/elastic/logstash/tree/master/docker)
* [Kibana](https://github.com/elastic/kibana/tree/master/src/dev/build/tasks/os_packages/docker_generator)
Other available stack variants:
* [`tls`](https://github.com/deviantony/docker-elk/tree/tls): TLS encryption enabled in Elasticsearch
* [`searchguard`](https://github.com/deviantony/docker-elk/tree/searchguard): Search Guard support
---
## Philosophy
We aim at providing the simplest possible entry into the Elastic stack for anybody who feels like experimenting with
this powerful combo of technologies. This project's default configuration is purposely minimal and unopinionated. It
does not rely on any external dependency, and uses as little custom automation as necessary to get things up and
running.
Instead, we believe in good documentation so that you can use this repository as a template, tweak it, and make it _your
own_. [sherifabdlnaby/elastdocker][elastdocker] is one example among others of a project that builds upon this idea.
---
## Contents
1. [Requirements](#requirements)
* [Host setup](#host-setup)
* [Docker Desktop](#docker-desktop)
* [Windows](#windows)
* [macOS](#macos)
1. [Usage](#usage)
* [Bringing up the stack](#bringing-up-the-stack)
* [Initial setup](#initial-setup)
* [Setting up user authentication](#setting-up-user-authentication)
* [Injecting data](#injecting-data)
* [Cleanup](#cleanup)
* [Version selection](#version-selection)
1. [Configuration](#configuration)
* [How to configure Elasticsearch](#how-to-configure-elasticsearch)
* [How to configure Kibana](#how-to-configure-kibana)
* [How to configure Logstash](#how-to-configure-logstash)
* [How to disable paid features](#how-to-disable-paid-features)
* [How to scale out the Elasticsearch cluster](#how-to-scale-out-the-elasticsearch-cluster)
* [How to reset a password programmatically](#how-to-reset-a-password-programmatically)
1. [Extensibility](#extensibility)
* [How to add plugins](#how-to-add-plugins)
* [How to enable the provided extensions](#how-to-enable-the-provided-extensions)
1. [JVM tuning](#jvm-tuning)
* [How to specify the amount of memory used by a service](#how-to-specify-the-amount-of-memory-used-by-a-service)
* [How to enable a remote JMX connection to a service](#how-to-enable-a-remote-jmx-connection-to-a-service)
1. [Going further](#going-further)
* [Plugins and integrations](#plugins-and-integrations)
## Requirements
### Host setup
* [Docker Engine][docker-install] version **18.06.0** or newer
* [Docker Compose][compose-install] version **1.26.0** or newer (including [Compose V2][compose-v2])
* 1.5 GB of RAM
*:information_source: Especially on Linux, make sure your user has the [required permissions][linux-postinstall] to
interact with the Docker daemon.*
By default, the stack exposes the following ports:
* 5044: Logstash Beats input
* 5000: Logstash TCP input
* 9600: Logstash monitoring API
* 9200: Elasticsearch HTTP
* 9300: Elasticsearch TCP transport
* 5601: Kibana
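Once the stack is up, a quick way to confirm the main endpoints are reachable from the Docker host is to probe a couple of them; a minimal sketch, assuming the default port mappings and the default _changeme_ password:

```console
# Elasticsearch HTTP API (returns a JSON cluster banner)
$ curl -u elastic:changeme http://localhost:9200

# Kibana (any HTTP response, such as a redirect to the login page, means it is up)
$ curl -I http://localhost:5601
```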
**:warning: Elasticsearch's [bootstrap checks][bootstrap-checks] were purposely disabled to facilitate the setup of the
Elastic stack in development environments. For production setups, we recommend users to set up their host according to
the instructions from the Elasticsearch documentation: [Important System Configuration][es-sys-config].**
### Docker Desktop
#### Windows
If you are using the legacy Hyper-V mode of _Docker Desktop for Windows_, ensure [File Sharing][win-filesharing] is
enabled for the `C:` drive.
#### macOS
The default configuration of _Docker Desktop for Mac_ allows mounting files from `/Users/`, `/Volumes/`, `/private/`,
`/tmp` and `/var/folders` exclusively. Make sure the repository is cloned in one of those locations or follow the
instructions from the [documentation][mac-filesharing] to add more locations.
## Usage
**:warning: You must rebuild the stack images with `docker-compose build` whenever you switch branch or update the
[version](#version-selection) of an already existing stack.**
### Bringing up the stack
Clone this repository onto the Docker host that will run the stack, then start the stack's services locally using Docker
Compose:
```console
$ docker-compose up
```
*:information_source: You can also run all services in the background (detached mode) by appending the `-d` flag to the
above command.*
Give Kibana about a minute to initialize, then access the Kibana web UI by opening <http://localhost:5601> in a web
browser and use the following (default) credentials to log in:
* user: *elastic*
* password: *changeme*
*:information_source: Upon the initial startup, the `elastic`, `logstash_internal` and `kibana_system` Elasticsearch
users are initialized with the values of the passwords defined in the [`.env`](.env) file (_"changeme"_ by default). The
first one is the [built-in superuser][builtin-users], the other two are used by Kibana and Logstash respectively to
communicate with Elasticsearch. This task is only performed during the _initial_ startup of the stack. To change users'
passwords _after_ they have been initialized, please refer to the instructions in the next section.*
### Initial setup
#### Setting up user authentication
*:information_source: Refer to [Security settings in Elasticsearch][es-security] to disable authentication.*
**:warning: Starting with Elastic v8.0.0, it is no longer possible to run Kibana using the bootstrapped privileged
`elastic` user.**
The _"changeme"_ password set by default for all aforementioned users is **insecure**. For increased security, we will
reset the passwords of all aforementioned Elasticsearch users to random secrets.
1. Reset passwords for default users
The commands below reset the passwords of the `elastic`, `logstash_internal` and `kibana_system` users. Take note
of them.
```console
$ docker-compose exec elasticsearch bin/elasticsearch-reset-password --batch --user elastic
```
```console
$ docker-compose exec elasticsearch bin/elasticsearch-reset-password --batch --user logstash_internal
```
```console
$ docker-compose exec elasticsearch bin/elasticsearch-reset-password --batch --user kibana_system
```
If the need for it arises (e.g. if you want to [collect monitoring information][ls-monitoring] through Beats and
other components), feel free to repeat this operation at any time for the rest of the [built-in
users][builtin-users].
1. Replace usernames and passwords in configuration files
Replace the password of the `elastic` user inside the `.env` file with the password generated in the previous step.
Its value isn't used by any core component, but [extensions](#how-to-enable-the-provided-extensions) use it to
connect to Elasticsearch.
*:information_source: In case you don't plan on using any of the provided
[extensions](#how-to-enable-the-provided-extensions), or prefer to create your own roles and users to authenticate
these services, it is safe to remove the `ELASTIC_PASSWORD` entry from the `.env` file altogether after the stack
has been initialized.*
Replace the password of the `logstash_internal` user inside the `.env` file with the password generated in the
previous step. Its value is referenced inside the Logstash pipeline file (`logstash/pipeline/logstash.conf`).
Replace the password of the `kibana_system` user inside the `.env` file with the password generated in the previous
step. Its value is referenced inside the Kibana configuration file (`kibana/config/kibana.yml`).
See the [Configuration](#configuration) section below for more information about these configuration files.
1. Restart Logstash and Kibana to re-connect to Elasticsearch using the new passwords
```console
$ docker-compose up -d logstash kibana
```
*:information_source: Learn more about the security of the Elastic stack at [Secure the Elastic Stack][sec-cluster].*
#### Injecting data
Open the Kibana web UI by opening <http://localhost:5601> in a web browser and use the following credentials to log in:
* user: *elastic*
* password: *\<your generated elastic password>*
Now that the stack is fully configured, you can go ahead and inject some log entries. The shipped Logstash configuration
allows you to send content via TCP:
```console
# Using BSD netcat (Debian, Ubuntu, MacOS system, ...)
$ cat /path/to/logfile.log | nc -q0 localhost 5000
```
```console
# Using GNU netcat (CentOS, Fedora, MacOS Homebrew, ...)
$ cat /path/to/logfile.log | nc -c localhost 5000
```
You can also load the sample data provided by your Kibana installation.
### Cleanup
Elasticsearch data is persisted inside a volume by default.
In order to entirely shutdown the stack and remove all persisted data, use the following Docker Compose command:
```console
$ docker-compose down -v
```
### Version selection
This repository stays aligned with the latest version of the Elastic stack. The `main` branch tracks the current major
version (8.x).
To use a different version of the core Elastic components, simply change the version number inside the [`.env`](.env)
file. If you are upgrading an existing stack, remember to rebuild all container images using the `docker-compose build`
command.
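As an illustration, bumping the stack version would look something like this (the edit itself is manual; the output shown is just the current default):

```console
$ grep ELASTIC_VERSION .env
ELASTIC_VERSION=8.1.0
# edit .env to the desired version, then rebuild and recreate the containers
$ docker-compose build
$ docker-compose up -d
```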
**:warning: Always pay attention to the [official upgrade instructions][upgrade] for each individual component before
performing a stack upgrade.**
Older major versions are also supported on separate branches:
* [`release-7.x`](https://github.com/deviantony/docker-elk/tree/release-7.x): 7.x series
* [`release-6.x`](https://github.com/deviantony/docker-elk/tree/release-6.x): 6.x series (End-of-life)
* [`release-5.x`](https://github.com/deviantony/docker-elk/tree/release-5.x): 5.x series (End-of-life)
## Configuration
*:information_source: Configuration is not dynamically reloaded, you will need to restart individual components after
any configuration change.*
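For example, after editing a Logstash or Kibana configuration file you can restart just the affected service (a sketch using this repository's Compose service names):

```console
$ docker-compose restart logstash
$ docker-compose restart kibana
```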
### How to configure Elasticsearch
The Elasticsearch configuration is stored in [`elasticsearch/config/elasticsearch.yml`][config-es].
You can also specify the options you want to override by setting environment variables inside the Compose file:
```yml
elasticsearch:
environment:
network.host: _non_loopback_
cluster.name: my-cluster
```
Please refer to the following documentation page for more details about how to configure Elasticsearch inside Docker
containers: [Install Elasticsearch with Docker][es-docker].
### How to configure Kibana
The Kibana default configuration is stored in [`kibana/config/kibana.yml`][config-kbn].
You can also specify the options you want to override by setting environment variables inside the Compose file:
```yml
kibana:
environment:
SERVER_NAME: kibana.example.org
```
Please refer to the following documentation page for more details about how to configure Kibana inside Docker
containers: [Install Kibana with Docker][kbn-docker].
### How to configure Logstash
The Logstash configuration is stored in [`logstash/config/logstash.yml`][config-ls].
You can also specify the options you want to override by setting environment variables inside the Compose file:
```yml
logstash:
environment:
LOG_LEVEL: debug
```
Please refer to the following documentation page for more details about how to configure Logstash inside Docker
containers: [Configuring Logstash for Docker][ls-docker].
### How to disable paid features
Switch the value of Elasticsearch's `xpack.license.self_generated.type` setting from `trial` to `basic` (see [License
settings][trial-license]).
You can also cancel an ongoing trial before its expiry date — and thus revert to a basic license — either from the
[License Management][license-mngmt] panel of Kibana, or using Elasticsearch's [Licensing APIs][license-apis].
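As an aside, the current licence can also be inspected and downgraded from the command line; a hedged sketch using the Licensing APIs mentioned above:

```console
# Show the licence currently in effect
$ curl -u elastic:<your elastic password> 'http://localhost:9200/_license'

# Revert the trial to the free basic licence
$ curl -u elastic:<your elastic password> -XPOST 'http://localhost:9200/_license/start_basic?acknowledge=true'
```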
### How to scale out the Elasticsearch cluster
Follow the instructions from the Wiki: [Scaling out Elasticsearch](https://github.com/deviantony/docker-elk/wiki/Elasticsearch-cluster)
### How to reset a password programmatically
If for any reason you are unable to use Kibana to change the password of your users (including [built-in
users][builtin-users]), you can use the Elasticsearch API instead and achieve the same result.
In the example below, we reset the password of the `elastic` user (notice "/user/elastic" in the URL):
```console
$ curl -XPOST -D- 'http://localhost:9200/_security/user/elastic/_password' \
-H 'Content-Type: application/json' \
-u elastic:<your current elastic password> \
-d '{"password" : "<your new password>"}'
```
## Extensibility
### How to add plugins
To add plugins to any ELK component you have to:
1. Add a `RUN` statement to the corresponding `Dockerfile` (e.g. `RUN logstash-plugin install logstash-filter-json`)
1. Add the associated plugin code configuration to the service configuration (e.g. Logstash input/output)
1. Rebuild the images using the `docker-compose build` command
### How to enable the provided extensions
A few extensions are available inside the [`extensions`](extensions) directory. These extensions provide features which
are not part of the standard Elastic stack, but can be used to enrich it with extra integrations.
The documentation for these extensions is provided inside each individual subdirectory, on a per-extension basis. Some
of them require manual changes to the default ELK configuration.
## JVM tuning
### How to specify the amount of memory used by a service
By default, both Elasticsearch and Logstash start with [1/4 of the total host
memory](https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/parallel.html#default_heap_size) allocated to
the JVM Heap Size.
The startup scripts for Elasticsearch and Logstash can append extra JVM options from the value of an environment
variable, allowing the user to adjust the amount of memory that can be used by each component:
| Service | Environment variable |
|---------------|----------------------|
| Elasticsearch | ES_JAVA_OPTS |
| Logstash | LS_JAVA_OPTS |
To accommodate environments where memory is scarce (Docker for Mac has only 2 GB available by default), the Heap Size
allocation is capped by default to 256MB per service in the `docker-compose.yml` file. If you want to override the
default JVM configuration, edit the matching environment variable(s) in the `docker-compose.yml` file.
For example, to increase the maximum JVM Heap Size for Logstash:
```yml
logstash:
environment:
LS_JAVA_OPTS: -Xmx1g -Xms1g
```
### How to enable a remote JMX connection to a service
As for the Java Heap memory (see above), you can specify JVM options to enable JMX and map the JMX port on the Docker
host.
Update the `{ES,LS}_JAVA_OPTS` environment variable with the following content (I've mapped the JMX service on port
18080; you can change that). Do not forget to update the `-Djava.rmi.server.hostname` option with the IP address of your
Docker host (replace **DOCKER_HOST_IP**):
```yml
logstash:
environment:
LS_JAVA_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=18080 -Dcom.sun.management.jmxremote.rmi.port=18080 -Djava.rmi.server.hostname=DOCKER_HOST_IP -Dcom.sun.management.jmxremote.local.only=false
```
## Going further
### Plugins and integrations
See the following Wiki pages:
* [External applications](https://github.com/deviantony/docker-elk/wiki/External-applications)
* [Popular integrations](https://github.com/deviantony/docker-elk/wiki/Popular-integrations)
[elk-stack]: https://www.elastic.co/what-is/elk-stack
[xpack]: https://www.elastic.co/what-is/open-x-pack
[paid-features]: https://www.elastic.co/subscriptions
[es-security]: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html
[trial-license]: https://www.elastic.co/guide/en/elasticsearch/reference/current/license-settings.html
[license-mngmt]: https://www.elastic.co/guide/en/kibana/current/managing-licenses.html
[license-apis]: https://www.elastic.co/guide/en/elasticsearch/reference/current/licensing-apis.html
[elastdocker]: https://github.com/sherifabdlnaby/elastdocker
[docker-install]: https://docs.docker.com/get-docker/
[compose-install]: https://docs.docker.com/compose/install/
[compose-v2]: https://docs.docker.com/compose/cli-command/
[linux-postinstall]: https://docs.docker.com/engine/install/linux-postinstall/
[bootstrap-checks]: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
[es-sys-config]: https://www.elastic.co/guide/en/elasticsearch/reference/current/system-config.html
[win-filesharing]: https://docs.docker.com/desktop/windows/#file-sharing
[mac-filesharing]: https://docs.docker.com/desktop/mac/#file-sharing
[builtin-users]: https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html
[ls-monitoring]: https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
[sec-cluster]: https://www.elastic.co/guide/en/elasticsearch/reference/current/secure-cluster.html
[connect-kibana]: https://www.elastic.co/guide/en/kibana/current/connect-to-elasticsearch.html
[index-pattern]: https://www.elastic.co/guide/en/kibana/current/index-patterns.html
[config-es]: ./elasticsearch/config/elasticsearch.yml
[config-kbn]: ./kibana/config/kibana.yml
[config-ls]: ./logstash/config/logstash.yml
[es-docker]: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
[kbn-docker]: https://www.elastic.co/guide/en/kibana/current/docker.html
[ls-docker]: https://www.elastic.co/guide/en/logstash/current/docker-config.html
[upgrade]: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html

View File

@ -0,0 +1,93 @@
version: '3.7'
services:
# The 'setup' service runs a one-off script which initializes the
# 'logstash_internal' and 'kibana_system' users inside Elasticsearch with the
# values of the passwords defined in the '.env' file.
#
# This task is only performed during the *initial* startup of the stack. On all
# subsequent runs, the service simply returns immediately, without performing
# any modification to existing users.
setup:
build:
context: setup/
args:
ELASTIC_VERSION: ${ELASTIC_VERSION}
init: true
volumes:
- setup:/state:Z
environment:
ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
networks:
- elk
elasticsearch:
build:
context: elasticsearch/
args:
ELASTIC_VERSION: ${ELASTIC_VERSION}
volumes:
- ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro,z
- elasticsearch:/usr/share/elasticsearch/data:z
ports:
- "9200:9200"
- "9300:9300"
environment:
ES_JAVA_OPTS: -Xmx256m -Xms256m
# Bootstrap password.
# Used to initialize the keystore during the initial startup of
# Elasticsearch. Ignored on subsequent runs.
ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
# Use single node discovery in order to disable production mode and avoid bootstrap checks.
# see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
discovery.type: single-node
networks:
- elk
logstash:
build:
context: logstash/
args:
ELASTIC_VERSION: ${ELASTIC_VERSION}
volumes:
- ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro,Z
- ./logstash/pipeline:/usr/share/logstash/pipeline:ro,Z
ports:
- "5044:5044"
- "5000:5000/tcp"
- "5000:5000/udp"
- "9600:9600"
environment:
LS_JAVA_OPTS: -Xmx256m -Xms256m
LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
networks:
- elk
depends_on:
- elasticsearch
kibana:
build:
context: kibana/
args:
ELASTIC_VERSION: ${ELASTIC_VERSION}
volumes:
- ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro,Z
ports:
- "5601:5601"
environment:
KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
networks:
- elk
depends_on:
- elasticsearch
networks:
elk:
driver: bridge
volumes:
setup:
elasticsearch:

View File

@ -0,0 +1,6 @@
# Ignore Docker build files
Dockerfile
.dockerignore
# Ignore OS artifacts
**/.DS_Store

View File

@ -0,0 +1,7 @@
ARG ELASTIC_VERSION
# https://www.docker.elastic.co/
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}
# Add your elasticsearch plugins setup here
# Example: RUN elasticsearch-plugin install analysis-icu

View File

@ -0,0 +1,12 @@
---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: 0.0.0.0
## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html
#
xpack.license.self_generated.type: trial
xpack.security.enabled: true

View File

@ -0,0 +1,3 @@
# Extensions
Third-party extensions that enable extra integrations with the Elastic stack.

View File

@ -0,0 +1,6 @@
# Ignore Docker build files
Dockerfile
.dockerignore
# Ignore OS artifacts
**/.DS_Store

View File

@ -0,0 +1,3 @@
ARG ELASTIC_VERSION
FROM docker.elastic.co/apm/apm-server:${ELASTIC_VERSION}

View File

@ -0,0 +1,56 @@
# APM Server extension
The APM Server receives data from APM agents and transforms them into Elasticsearch documents that can be visualised in
Kibana.
## Usage
To include APM Server in the stack, run Docker Compose from the root of the repository with an additional command line
argument referencing the `apm-server-compose.yml` file:
```console
$ docker-compose -f docker-compose.yml -f extensions/apm-server/apm-server-compose.yml up
```
Meanwhile, you can navigate to the **APM** application in Kibana and follow the setup instructions to get started.
## Connecting an agent to APM Server
The most basic configuration to send traces to APM server is to specify the `SERVICE_NAME` and `SERVER_URL`. Here is an
example Python Flask configuration:
```python
import elasticapm
from elasticapm.contrib.flask import ElasticAPM
from flask import Flask
app = Flask(__name__)
app.config['ELASTIC_APM'] = {
# Set required service name. Allowed characters:
# a-z, A-Z, 0-9, -, _, and space
'SERVICE_NAME': 'PYTHON_FLASK_TEST_APP',
# Set custom APM Server URL (default: http://localhost:8200)
'SERVER_URL': 'http://apm-server:8200',
'DEBUG': True,
}
```
Configuration settings for each supported language are available in the APM documentation: [APM Agents][apm-agents].
## Checking connectivity and importing default APM dashboards
1. On the Kibana home page, click `Add APM` under the _Observability_ panel.
1. Click `Check APM Server status` to confirm the server is up and running.
1. Click `Check agent status` to verify your agent has registered properly.
1. Click `Load Kibana objects` to create an index pattern for APM.
1. Click `Launch APM` to be taken to the APM dashboard.
## See also
[Running APM Server on Docker][apm-docker]
[apm-agents]: https://www.elastic.co/guide/en/apm/guide/current/components.html
[apm-docker]: https://www.elastic.co/guide/en/apm/guide/current/running-on-docker.html

View File

@ -0,0 +1,22 @@
version: '3.7'
services:
apm-server:
build:
context: extensions/apm-server/
args:
ELASTIC_VERSION: ${ELASTIC_VERSION}
command:
# Disable strict permission checking on 'apm-server.yml' configuration file
# https://www.elastic.co/guide/en/beats/libbeat/current/config-file-permissions.html
- --strict.perms=false
volumes:
- ./extensions/apm-server/config/apm-server.yml:/usr/share/apm-server/apm-server.yml:ro,Z
ports:
- '8200:8200'
environment:
ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
networks:
- elk
depends_on:
- elasticsearch

View File

@ -0,0 +1,8 @@
apm-server:
host: 0.0.0.0:8200
output:
elasticsearch:
hosts: ['http://elasticsearch:9200']
username: elastic
password: ${ELASTIC_PASSWORD}

View File

@ -0,0 +1,6 @@
# Ignore Docker build files
Dockerfile
.dockerignore
# Ignore OS artifacts
**/.DS_Store

View File

@ -0,0 +1,17 @@
FROM bitnami/elasticsearch-curator:5.8.1
USER root
RUN install_packages cron && \
echo \
'* * * * *' \
root \
LC_ALL=C.UTF-8 LANG=C.UTF-8 \
/opt/bitnami/python/bin/curator \
--config=/usr/share/curator/config/curator.yml \
/usr/share/curator/config/delete_log_files_curator.yml \
'>/proc/1/fd/1' '2>/proc/1/fd/2' \
>>/etc/crontab
ENTRYPOINT ["cron"]
CMD ["-f", "-L8"]

View File

@ -0,0 +1,20 @@
# Curator
Elasticsearch Curator helps you curate or manage your indices.
## Usage
If you want to include the Curator extension, run Docker Compose from the root of the repository with an additional
command line argument referencing the `curator-compose.yml` file:
```bash
$ docker-compose -f docker-compose.yml -f extensions/curator/curator-compose.yml up
```
This sample setup demonstrates how to run `curator` every minute using `cron`.
All configuration files are available in the `config/` directory.
## Documentation
[Curator Reference](https://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html)

View File

@ -0,0 +1,12 @@
# Curator configuration
# https://www.elastic.co/guide/en/elasticsearch/client/curator/current/configfile.html
client:
hosts:
- elasticsearch
port: 9200
http_auth: 'elastic:changeme'
logging:
loglevel: INFO
logformat: default

View File

@ -0,0 +1,21 @@
actions:
1:
action: delete_indices
description: >-
Delete indices. Find which to delete by first limiting the list to
logstash- prefixed indices. Then further filter those to prevent deletion
of anything less than the number of days specified by unit_count.
Ignore the error if the filter does not result in an actionable list of
indices (ignore_empty_list) and exit cleanly.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: prefix
value: logstash-
- filtertype: age
source: creation_date
direction: older
unit: days
unit_count: 2

View File

@ -0,0 +1,14 @@
version: '3.7'
services:
curator:
build:
context: extensions/curator/
init: true
volumes:
- ./extensions/curator/config/curator.yml:/usr/share/curator/config/curator.yml:ro,Z
- ./extensions/curator/config/delete_log_files_curator.yml:/usr/share/curator/config/delete_log_files_curator.yml:ro,Z
networks:
- elk
depends_on:
- elasticsearch

View File

@ -0,0 +1,6 @@
# Ignore Docker build files
Dockerfile
.dockerignore
# Ignore OS artifacts
**/.DS_Store

View File

@ -0,0 +1,4 @@
ARG ELASTIC_VERSION
# https://www.docker.elastic.co/
FROM docker.elastic.co/enterprise-search/enterprise-search:${ELASTIC_VERSION}

View File

@ -0,0 +1,147 @@
# Enterprise Search extension
Elastic Enterprise Search is a suite of products for search applications backed by the Elastic Stack.
## Requirements
* 2 GB of free RAM, on top of the resources required by the other stack components and extensions.
Enterprise Search exposes the TCP port `3002` for its Web UI and API.
## Usage
### Generate an encryption key
Enterprise Search requires one or more [encryption keys][enterprisesearch-encryption] to be configured before the
initial startup. Failing to do so prevents the server from starting.
Encryption keys can contain any series of characters. Elastic recommends using 256-bit keys for optimal security.
Those encryption keys must be added manually to the [`config/enterprise-search.yml`][config-enterprisesearch] file. By
default, the list of encryption keys is empty and must be populated using one of the following formats:
```yaml
secret_management.encryption_keys:
- my_first_encryption_key
- my_second_encryption_key
- ...
```
```yaml
secret_management.encryption_keys: [my_first_encryption_key, my_second_encryption_key, ...]
```
> :information_source: To generate a strong encryption key, for example using the AES-256 cipher, you can use the
> OpenSSL utility or any other online/offline tool of your choice:
>
> ```console
> $ openssl enc -aes-256 -P
>
> enter aes-256-cbc encryption password: <a strong password>
> Verifying - enter aes-256-cbc encryption password: <repeat your strong password>
> ...
>
> key=<generated AES key>
> ```
### Enable Elasticsearch's API key service
Enterprise Search requires Elasticsearch's built-in [API key service][es-security] to be enabled in order to start.
Unless Elasticsearch is configured to enable TLS on the HTTP interface (disabled by default), this service is disabled
by default.
To enable it, modify the Elasticsearch configuration file in [`elasticsearch/config/elasticsearch.yml`][config-es] and
add the following setting:
```yaml
xpack.security.authc.api_key.enabled: true
```
### Configure the Enterprise Search host in Kibana
Kibana acts as the [management interface][enterprisesearch-ui] to Enterprise Search.
To enable the management experience for Enterprise Search, modify the Kibana configuration file in
[`kibana/config/kibana.yml`][config-kbn] and add the following setting:
```yaml
enterpriseSearch.host: http://enterprise-search:3002
```
### Start the server
To include Enterprise Search in the stack, run Docker Compose from the root of the repository with an additional command
line argument referencing the `enterprise-search-compose.yml` file:
```console
$ docker-compose -f docker-compose.yml -f extensions/enterprise-search/enterprise-search-compose.yml up
```
Allow a few minutes for the stack to start, then open your web browser at the address <http://localhost:3002> to see the
Enterprise Search home page.
Enterprise Search is configured on first boot with the following default credentials:
* user: *enterprise_search*
* password: *changeme*
## Security
The Enterprise Search password is defined inside the Compose file via the `ENT_SEARCH_DEFAULT_PASSWORD` environment
variable. We highly recommend choosing a more secure password than the default one for security reasons.
To do so, change the value of the `ENT_SEARCH_DEFAULT_PASSWORD` environment variable inside the Compose file **before the first
boot**:
```yaml
enterprise-search:
environment:
ENT_SEARCH_DEFAULT_PASSWORD: {{some strong password}}
```
> :warning: The default Enterprise Search password can only be set during the initial boot. Once the password is
> persisted in Elasticsearch, it can only be changed via the Elasticsearch API.
For more information, please refer to [User Management and Security][enterprisesearch-security].
## Configuring Enterprise Search
The Enterprise Search configuration is stored in [`config/enterprise-search.yml`][config-enterprisesearch]. You can
modify this file using the [Default Enterprise Search configuration][enterprisesearch-config] as a reference.
You can also specify the options you want to override by setting environment variables inside the Compose file:
```yaml
enterprise-search:
environment:
ent_search.auth.source: standard
worker.threads: '6'
```
Any change to the Enterprise Search configuration requires a restart of the Enterprise Search container:
```console
$ docker-compose -f docker-compose.yml -f extensions/enterprise-search/enterprise-search-compose.yml restart enterprise-search
```
Please refer to the following documentation page for more details about how to configure Enterprise Search inside a
Docker container: [Running Enterprise Search Using Docker][enterprisesearch-docker].
## See also
[Enterprise Search documentation][enterprisesearch-docs]
[config-enterprisesearch]: ./config/enterprise-search.yml
[enterprisesearch-encryption]: https://www.elastic.co/guide/en/enterprise-search/current/encryption-keys.html
[enterprisesearch-security]: https://www.elastic.co/guide/en/workplace-search/current/workplace-search-security.html
[enterprisesearch-config]: https://www.elastic.co/guide/en/enterprise-search/current/configuration.html
[enterprisesearch-docker]: https://www.elastic.co/guide/en/enterprise-search/current/docker.html
[enterprisesearch-docs]: https://www.elastic.co/guide/en/enterprise-search/current/index.html
[enterprisesearch-ui]: https://www.elastic.co/guide/en/enterprise-search/current/user-interfaces.html
[es-security]: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#api-key-service-settings
[config-es]: ../../elasticsearch/config/elasticsearch.yml
[config-kbn]: ../../kibana/config/kibana.yml

View File

@ -0,0 +1,28 @@
---
## Enterprise Search core configuration
## https://www.elastic.co/guide/en/enterprise-search/current/configuration.html
#
## --------------------- REQUIRED ---------------------
# Encryption keys to protect application secrets.
secret_management.encryption_keys:
# add encryption keys below
#- add encryption keys here
## ----------------------------------------------------
# IP address Enterprise Search listens on
ent_search.listen_host: 0.0.0.0
# URL at which users reach Enterprise Search / Kibana
ent_search.external_url: http://localhost:3002
kibana.host: http://localhost:5601
# Elasticsearch URL and credentials
elasticsearch.host: http://elasticsearch:9200
elasticsearch.username: elastic
elasticsearch.password: ${ELASTIC_PASSWORD}
# Allow Enterprise Search to modify Elasticsearch settings. Used to enable auto-creation of Elasticsearch indexes.
allow_es_settings_modification: true

View File

@ -0,0 +1,20 @@
version: '3.7'
services:
enterprise-search:
build:
context: extensions/enterprise-search/
args:
ELASTIC_VERSION: ${ELASTIC_VERSION}
volumes:
- ./extensions/enterprise-search/config/enterprise-search.yml:/usr/share/enterprise-search/config/enterprise-search.yml:ro,Z
environment:
JAVA_OPTS: -Xmx2g -Xms2g
ENT_SEARCH_DEFAULT_PASSWORD: 'changeme'
ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
ports:
- '3002:3002'
networks:
- elk
depends_on:
- elasticsearch

View File

@ -0,0 +1,6 @@
# Ignore Docker build files
Dockerfile
.dockerignore
# Ignore OS artifacts
**/.DS_Store

View File

@ -0,0 +1,3 @@
ARG ELASTIC_VERSION
FROM docker.elastic.co/beats/filebeat:${ELASTIC_VERSION}

View File

@ -0,0 +1,36 @@
# Filebeat
Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers,
Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to
Elasticsearch or Logstash for indexing.
## Usage
To include Filebeat in the stack, run Docker Compose from the root of the repository with an additional command line
argument referencing the `filebeat-compose.yml` file:
```console
$ docker-compose -f docker-compose.yml -f extensions/filebeat/filebeat-compose.yml up
```
## Configuring Filebeat
The Filebeat configuration is stored in [`config/filebeat.yml`](./config/filebeat.yml). You can modify this file with
the help of the [Configuration reference][filebeat-config].
Any change to the Filebeat configuration requires a restart of the Filebeat container:
```console
$ docker-compose -f docker-compose.yml -f extensions/filebeat/filebeat-compose.yml restart filebeat
```
Please refer to the following documentation page for more details about how to configure Filebeat inside a Docker
container: [Run Filebeat on Docker][filebeat-docker].
## See also
[Filebeat documentation][filebeat-doc]
[filebeat-config]: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-reference-yml.html
[filebeat-docker]: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html
[filebeat-doc]: https://www.elastic.co/guide/en/beats/filebeat/current/index.html

View File

@ -0,0 +1,30 @@
## Filebeat configuration
## https://github.com/elastic/beats/blob/master/deploy/docker/filebeat.docker.yml
#
filebeat.config:
modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false
filebeat.autodiscover:
providers:
# The Docker autodiscover provider automatically retrieves logs from Docker
# containers as they start and stop.
- type: docker
hints.enabled: true
processors:
- add_cloud_metadata: ~
output.elasticsearch:
hosts: ['http://elasticsearch:9200']
username: elastic
password: ${ELASTIC_PASSWORD}
## HTTP endpoint for health checking
## https://www.elastic.co/guide/en/beats/filebeat/current/http-endpoint.html
#
http.enabled: true
http.host: 0.0.0.0

View File

@ -0,0 +1,34 @@
version: '3.7'
services:
filebeat:
build:
context: extensions/filebeat/
args:
ELASTIC_VERSION: ${ELASTIC_VERSION}
# Run as 'root' instead of 'filebeat' (uid 1000) to allow reading
# 'docker.sock' and the host's filesystem.
user: root
command:
# Log to stderr.
- -e
# Disable config file permissions checks. Allows mounting
# 'config/filebeat.yml' even if it's not owned by root.
# see: https://www.elastic.co/guide/en/beats/libbeat/current/config-file-permissions.html
- --strict.perms=false
volumes:
- ./extensions/filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro,Z
- type: bind
source: /var/lib/docker/containers
target: /var/lib/docker/containers
read_only: true
- type: bind
source: /var/run/docker.sock
target: /var/run/docker.sock
read_only: true
environment:
ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
networks:
- elk
depends_on:
- elasticsearch

View File

@ -0,0 +1,6 @@
# Ignore Docker build files
Dockerfile
.dockerignore
# Ignore OS artifacts
**/.DS_Store

View File

@ -0,0 +1,5 @@
# uses ONBUILD instructions described here:
# https://github.com/gliderlabs/logspout/tree/master/custom
FROM gliderlabs/logspout:master
ENV SYSLOG_FORMAT rfc3164

View File

@ -0,0 +1,28 @@
# Logspout extension
Logspout collects all Docker logs using the Docker logs API, and forwards them to Logstash without any additional
configuration.
## Usage
If you want to include the Logspout extension, run Docker Compose from the root of the repository with an additional
command line argument referencing the `logspout-compose.yml` file:
```bash
$ docker-compose -f docker-compose.yml -f extensions/logspout/logspout-compose.yml up
```
In your Logstash pipeline configuration, enable the `udp` input and set the input codec to `json`:
```logstash
input {
udp {
port => 5000
codec => json
}
}
```
## Documentation
<https://github.com/looplab/logspout-logstash>

View File

@ -0,0 +1,13 @@
#!/bin/sh
# source: https://github.com/gliderlabs/logspout/blob/621524e/custom/build.sh
set -e
apk add --update go build-base git mercurial ca-certificates
cd /src
go build -ldflags "-X main.Version=$1" -o /bin/logspout
apk del go git mercurial build-base
rm -rf /root/go /var/cache/apk/*
# backwards compatibility
ln -fs /tmp/docker.sock /var/run/docker.sock

View File

@ -0,0 +1,19 @@
version: '3.7'
services:
logspout:
build:
context: extensions/logspout
volumes:
- type: bind
source: /var/run/docker.sock
target: /var/run/docker.sock
read_only: true
environment:
ROUTE_URIS: logstash://logstash:5000
LOGSTASH_TAGS: docker-elk
networks:
- elk
depends_on:
- logstash
restart: on-failure

View File

@ -0,0 +1,10 @@
package main
// installs the Logstash adapter for Logspout, and required dependencies
// https://github.com/looplab/logspout-logstash
import (
_ "github.com/gliderlabs/logspout/healthcheck"
_ "github.com/gliderlabs/logspout/transports/tcp"
_ "github.com/gliderlabs/logspout/transports/udp"
_ "github.com/looplab/logspout-logstash"
)

View File

@ -0,0 +1,6 @@
# Ignore Docker build files
Dockerfile
.dockerignore
# Ignore OS artifacts
**/.DS_Store

View File

@ -0,0 +1,3 @@
ARG ELASTIC_VERSION
FROM docker.elastic.co/beats/metricbeat:${ELASTIC_VERSION}

View File

@ -0,0 +1,36 @@
# Metricbeat
Metricbeat is a lightweight shipper that you can install on your servers to periodically collect metrics from the
operating system and from services running on the server. Metricbeat takes the metrics and statistics that it collects
and ships them to the output that you specify, such as Elasticsearch or Logstash.
## Usage
To include Metricbeat in the stack, run Docker Compose from the root of the repository with an additional command line
argument referencing the `metricbeat-compose.yml` file:
```console
$ docker-compose -f docker-compose.yml -f extensions/metricbeat/metricbeat-compose.yml up
```
## Configuring Metricbeat
The Metricbeat configuration is stored in [`config/metricbeat.yml`](./config/metricbeat.yml). You can modify this file
with the help of the [Configuration reference][metricbeat-config].
Any change to the Metricbeat configuration requires a restart of the Metricbeat container:
```console
$ docker-compose -f docker-compose.yml -f extensions/metricbeat/metricbeat-compose.yml restart metricbeat
```
Please refer to the following documentation page for more details about how to configure Metricbeat inside a
Docker container: [Run Metricbeat on Docker][metricbeat-docker].
## See also
[Metricbeat documentation][metricbeat-doc]
[metricbeat-config]: https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-reference-yml.html
[metricbeat-docker]: https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-docker.html
[metricbeat-doc]: https://www.elastic.co/guide/en/beats/metricbeat/current/index.html

View File

@ -0,0 +1,44 @@
## Metricbeat configuration
## https://github.com/elastic/beats/blob/master/deploy/docker/metricbeat.docker.yml
#
metricbeat.config:
modules:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
metricbeat.autodiscover:
providers:
- type: docker
hints.enabled: true
metricbeat.modules:
- module: docker
metricsets:
- container
- cpu
- diskio
- healthcheck
- info
#- image
- memory
- network
hosts: ['unix:///var/run/docker.sock']
period: 10s
enabled: true
processors:
- add_cloud_metadata: ~
output.elasticsearch:
hosts: ['http://elasticsearch:9200']
username: elastic
password: ${ELASTIC_PASSWORD}
## HTTP endpoint for health checking
## https://www.elastic.co/guide/en/beats/metricbeat/current/http-endpoint.html
#
http.enabled: true
http.host: 0.0.0.0

View File

@ -0,0 +1,45 @@
version: '3.7'
services:
metricbeat:
build:
context: extensions/metricbeat/
args:
ELASTIC_VERSION: ${ELASTIC_VERSION}
# Run as 'root' instead of 'metricbeat' (uid 1000) to allow reading
# 'docker.sock' and the host's filesystem.
user: root
command:
# Log to stderr.
- -e
# Disable config file permissions checks. Allows mounting
# 'config/metricbeat.yml' even if it's not owned by root.
# see: https://www.elastic.co/guide/en/beats/libbeat/current/config-file-permissions.html
- --strict.perms=false
# Mount point of the hosts filesystem. Required to monitor the host
# from within a container.
- --system.hostfs=/hostfs
volumes:
- ./extensions/metricbeat/config/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml:ro,Z
- type: bind
source: /
target: /hostfs
read_only: true
- type: bind
source: /sys/fs/cgroup
target: /hostfs/sys/fs/cgroup
read_only: true
- type: bind
source: /proc
target: /hostfs/proc
read_only: true
- type: bind
source: /var/run/docker.sock
target: /var/run/docker.sock
read_only: true
environment:
ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
networks:
- elk
depends_on:
- elasticsearch

View File

@ -0,0 +1,6 @@
# Ignore Docker build files
Dockerfile
.dockerignore
# Ignore OS artifacts
**/.DS_Store

View File

@ -0,0 +1,7 @@
ARG ELASTIC_VERSION
# https://www.docker.elastic.co/
FROM docker.elastic.co/kibana/kibana:${ELASTIC_VERSION}
# Add your kibana plugins setup here
# Example: RUN kibana-plugin install <name|url>

View File

@ -0,0 +1,13 @@
---
## Default Kibana configuration from Kibana base image.
## https://github.com/elastic/kibana/blob/master/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.ts
#
server.name: kibana
server.host: 0.0.0.0
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
## X-Pack security credentials
#
elasticsearch.username: kibana_system
elasticsearch.password: ${KIBANA_SYSTEM_PASSWORD}

View File

@ -0,0 +1,6 @@
# Ignore Docker build files
Dockerfile
.dockerignore
# Ignore OS artifacts
**/.DS_Store

View File

@ -0,0 +1,7 @@
ARG ELASTIC_VERSION
# https://www.docker.elastic.co/
FROM docker.elastic.co/logstash/logstash:${ELASTIC_VERSION}
# Add your logstash plugins setup here
# Example: RUN logstash-plugin install logstash-filter-json

View File

@ -0,0 +1,5 @@
---
## Default Logstash configuration from Logstash base image.
## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
#
http.host: "0.0.0.0"

View File

@ -0,0 +1,19 @@
input {
beats {
port => 5044
}
tcp {
port => 5000
}
}
## Add your filters / logstash plugins configuration here
output {
elasticsearch {
hosts => "elasticsearch:9200"
user => "logstash_internal"
password => "${LOGSTASH_INTERNAL_PASSWORD}"
}
}

View File

@ -0,0 +1,12 @@
# Ignore Docker build files
Dockerfile
.dockerignore
# Ignore OS artifacts
**/.DS_Store
# Ignore Git files
.gitignore
# Ignore setup state
state/

View File

@ -0,0 +1 @@
/state/

View File

@ -0,0 +1,17 @@
ARG ELASTIC_VERSION
# https://www.docker.elastic.co/
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}
USER root
COPY . /
RUN set -eux; \
mkdir /state; \
chown elasticsearch /state; \
chmod +x /entrypoint.sh
USER elasticsearch:root
ENTRYPOINT ["/entrypoint.sh"]

View File

@ -0,0 +1,85 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
source "$(dirname "${BASH_SOURCE[0]}")/helpers.sh"
# --------------------------------------------------------
# Users declarations
declare -A users_passwords
users_passwords=(
[logstash_internal]="${LOGSTASH_INTERNAL_PASSWORD:-}"
[kibana_system]="${KIBANA_SYSTEM_PASSWORD:-}"
)
declare -A users_roles
users_roles=(
[logstash_internal]='logstash_writer'
)
# --------------------------------------------------------
# Roles declarations
declare -A roles_files
roles_files=(
[logstash_writer]='logstash_writer.json'
)
# --------------------------------------------------------
echo "-------- $(date) --------"
state_file="$(dirname ${BASH_SOURCE[0]})/state/.done"
if [[ -e "$state_file" ]]; then
log "State file exists at '${state_file}', skipping setup"
exit 0
fi
log 'Waiting for availability of Elasticsearch'
wait_for_elasticsearch
sublog 'Elasticsearch is running'
for role in "${!roles_files[@]}"; do
log "Role '$role'"
declare body_file
body_file="$(dirname "${BASH_SOURCE[0]}")/roles/${roles_files[$role]:-}"
if [[ ! -f "${body_file:-}" ]]; then
sublog "No role body found at '${body_file}', skipping"
continue
fi
sublog 'Creating/updating'
ensure_role "$role" "$(<"${body_file}")"
done
for user in "${!users_passwords[@]}"; do
log "User '$user'"
if [[ -z "${users_passwords[$user]:-}" ]]; then
sublog 'No password defined, skipping'
continue
fi
declare -i user_exists=0
user_exists="$(check_user_exists "$user")"
if ((user_exists)); then
sublog 'User exists, setting password'
set_user_password "$user" "${users_passwords[$user]}"
else
if [[ -z "${users_roles[$user]:-}" ]]; then
err ' No role defined, skipping creation'
continue
fi
sublog 'User does not exist, creating'
create_user "$user" "${users_passwords[$user]}" "${users_roles[$user]}"
fi
done
mkdir -p "$(dirname "${state_file}")"
touch "$state_file"

View File

@ -0,0 +1,182 @@
#!/usr/bin/env bash
# Log a message.
function log {
echo "[+] $1"
}
# Log a message at a sub-level.
function sublog {
echo "$1"
}
# Log an error.
function err {
echo "[x] $1" >&2
}
# Poll the 'elasticsearch' service until it responds with HTTP code 200.
function wait_for_elasticsearch {
local elasticsearch_host="${ELASTICSEARCH_HOST:-elasticsearch}"
local -a args=( '-s' '-D-' '-m15' '-w' '%{http_code}' "http://${elasticsearch_host}:9200/" )
if [[ -n "${ELASTIC_PASSWORD:-}" ]]; then
args+=( '-u' "elastic:${ELASTIC_PASSWORD}" )
fi
local -i result=1
local output
# retry for max 300s (60*5s)
for _ in $(seq 1 60); do
output="$(curl "${args[@]}" || true)"
if [[ "${output: -3}" -eq 200 ]]; then
result=0
break
fi
sleep 5
done
if ((result)); then
echo -e "\n${output::-3}"
fi
return $result
}
# Verify that the given Elasticsearch user exists.
function check_user_exists {
local username=$1
local elasticsearch_host="${ELASTICSEARCH_HOST:-elasticsearch}"
local -a args=( '-s' '-D-' '-m15' '-w' '%{http_code}'
"http://${elasticsearch_host}:9200/_security/user/${username}"
)
if [[ -n "${ELASTIC_PASSWORD:-}" ]]; then
args+=( '-u' "elastic:${ELASTIC_PASSWORD}" )
fi
local -i result=1
local -i exists=0
local output
output="$(curl "${args[@]}")"
if [[ "${output: -3}" -eq 200 || "${output: -3}" -eq 404 ]]; then
result=0
fi
if [[ "${output: -3}" -eq 200 ]]; then
exists=1
fi
if ((result)); then
echo -e "\n${output::-3}"
else
echo "$exists"
fi
return $result
}
# Set password of a given Elasticsearch user.
function set_user_password {
local username=$1
local password=$2
local elasticsearch_host="${ELASTICSEARCH_HOST:-elasticsearch}"
local -a args=( '-s' '-D-' '-m15' '-w' '%{http_code}'
"http://${elasticsearch_host}:9200/_security/user/${username}/_password"
'-X' 'POST'
'-H' 'Content-Type: application/json'
'-d' "{\"password\" : \"${password}\"}"
)
if [[ -n "${ELASTIC_PASSWORD:-}" ]]; then
args+=( '-u' "elastic:${ELASTIC_PASSWORD}" )
fi
local -i result=1
local output
output="$(curl "${args[@]}")"
if [[ "${output: -3}" -eq 200 ]]; then
result=0
fi
if ((result)); then
echo -e "\n${output::-3}\n"
fi
return $result
}
# Create the given Elasticsearch user.
function create_user {
local username=$1
local password=$2
local role=$3
local elasticsearch_host="${ELASTICSEARCH_HOST:-elasticsearch}"
local -a args=( '-s' '-D-' '-m15' '-w' '%{http_code}'
"http://${elasticsearch_host}:9200/_security/user/${username}"
'-X' 'POST'
'-H' 'Content-Type: application/json'
'-d' "{\"password\":\"${password}\",\"roles\":[\"${role}\"]}"
)
if [[ -n "${ELASTIC_PASSWORD:-}" ]]; then
args+=( '-u' "elastic:${ELASTIC_PASSWORD}" )
fi
local -i result=1
local output
output="$(curl "${args[@]}")"
if [[ "${output: -3}" -eq 200 ]]; then
result=0
fi
if ((result)); then
echo -e "\n${output::-3}\n"
fi
return $result
}
# Ensure that the given Elasticsearch role is up-to-date, create it if required.
function ensure_role {
local name=$1
local body=$2
local elasticsearch_host="${ELASTICSEARCH_HOST:-elasticsearch}"
local -a args=( '-s' '-D-' '-m15' '-w' '%{http_code}'
"http://${elasticsearch_host}:9200/_security/role/${name}"
'-X' 'POST'
'-H' 'Content-Type: application/json'
'-d' "$body"
)
if [[ -n "${ELASTIC_PASSWORD:-}" ]]; then
args+=( '-u' "elastic:${ELASTIC_PASSWORD}" )
fi
local -i result=1
local output
output="$(curl "${args[@]}")"
if [[ "${output: -3}" -eq 200 ]]; then
result=0
fi
if ((result)); then
echo -e "\n${output::-3}\n"
fi
return $result
}

View File

@ -0,0 +1,33 @@
{
"cluster": [
"manage_index_templates",
"monitor",
"manage_ilm"
],
"indices": [
{
"names": [
"logs-generic-default",
"logstash-*",
"ecs-logstash-*"
],
"privileges": [
"write",
"create",
"create_index",
"manage",
"manage_ilm"
]
},
{
"names": [
"logstash",
"ecs-logstash"
],
"privileges": [
"write",
"manage"
]
}
]
}

View File

@ -2,10 +2,8 @@
Continuing on from the infrastructure monitoring challenges and solutions, log management is another puzzle piece in the overall observability jigsaw.
### Log Management & Aggregation
Let's talk about two core concepts. The first is log aggregation: a way of collecting and tagging application logs from many different services into a single dashboard that can easily be searched.
One of the first systems that has to be built out in an application performance management system is log aggregation. Application performance management is the part of the DevOps lifecycle where things have been built and deployed, and you need to make sure that they are continuously working, that they have enough resources allocated to them, and that errors aren't being shown to users. In most production deployments there are many related events that emit logs across services. At Google, a single search might hit ten different services before being returned to the user; if you got unexpected search results, that might mean a logic problem in any of the ten services, and log aggregation helps companies like Google diagnose problems in production. They've built a single dashboard where they can map every request to a unique ID, so if you search something, your search will get a unique ID, and then every time that search passes through a different service, that service will connect that ID to what it is currently doing.
@ -45,6 +43,7 @@ Examples of log management platforms there's
- Fluentd - popular open source choice
- Datadog - hosted offering, commonly used at larger enterprises
- LogDNA - hosted offering
- Splunk
Cloud providers also provide logging such as AWS CloudWatch Logs, Microsoft Azure Monitor and Google Cloud Logging.

View File

@ -1,17 +1,89 @@
## ELK Stack
In this session, we are going to get a little more hands-on with some of the options we have mentioned.
### ELK Stack
ELK Stack is the combination of 3 separate tools:
- [Elasticsearch](https://www.elastic.co/what-is/elasticsearch) is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured.
- [Logstash](https://www.elastic.co/logstash/) is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash."
- [Kibana](https://www.elastic.co/kibana/) is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your apps.
ELK stack lets us reliably and securely take data from any source, in any format, then search, analyze, and visualize it in real time.
On top of the above-mentioned components, you might also see Beats, which are lightweight agents installed on edge hosts to collect different types of data for forwarding into the stack.
- Logs: server logs that need to be analyzed are identified.
- Logstash: collects logs and event data, and also parses and transforms the data.
- Elasticsearch: the transformed data from Logstash is stored, searched, and indexed.
- Kibana: uses the Elasticsearch database to explore, visualize, and share the data.
![](https://www.guru99.com/images/tensorflow/082918_1504_ELKStackTut1.png)
[Picture taken from Guru99](https://www.guru99.com/elk-stack-tutorial.html)
A good resource explaining this is [The Complete Guide to the ELK Stack](https://logz.io/learn/complete-guide-elk-stack/)
With the addition of Beats, the ELK Stack is now also known as the Elastic Stack.
For the hands-on scenario, there are many places you can deploy the Elastic Stack, but we are going to use Docker Compose to deploy it locally on our system.
[Start the Elastic Stack with Docker Compose](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-stack-docker.html#get-started-docker-tls)
![](Images/Day80_Monitoring1.png)
You will find the original files and walkthrough that I used here: [deviantony/docker-elk](https://github.com/deviantony/docker-elk)
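If you want to follow along, a minimal sketch of getting the files onto a machine that already has Docker and Docker Compose installed (clone either my repository or the original one):

```console
$ git clone https://github.com/deviantony/docker-elk.git
$ cd docker-elk
```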
Now we can run `docker-compose up -d`; the first time this is run, the required images will be pulled.
![](Images/Day80_Monitoring2.png)
If you follow either this repository or the one that I used, you will have either the password "changeme" (upstream) or, in my repository, the password "90DaysOfDevOps". The username is "elastic".
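Before moving on, you can optionally check from the host that Elasticsearch is answering and that the credentials work; a quick sketch assuming the default 9200 port mapping and the password from my repository's `.env` file:

```console
$ curl -u elastic:90DaysOfDevOps http://localhost:9200
```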
After a few minutes, we can navigate to http://localhost:5601/, which is our Kibana server / Docker container.
![](Images/Day80_Monitoring3.png)
Your initial home screen is going to look something like this.
![](Images/Day80_Monitoring4.png)
Under the section titled "Get started by adding integrations" there is a "try sample data" option; click this and we can add one of the data sets shown below.
![](Images/Day80_Monitoring5.png)
I am going to select "Sample web logs", but this is really to get a look and feel for what data sets you can get into the ELK Stack.
When you have selected "Add Data", it takes a while to populate that data; you then have the "View Data" option with a drop-down list of the available ways to view that data.
![](Images/Day80_Monitoring6.png)
As it states on the dashboard view:
**Sample Logs Data**
*This dashboard contains sample data for you to play with. You can view it, search it, and interact with the visualizations. For more information about Kibana, check our docs.*
![](Images/Day80_Monitoring7.png)
This is using Kibana to visualise data that has been added into Elasticsearch via Logstash. This is not the only option, but I personally wanted to deploy and look at this one.
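If you would rather push your own data than use the sample set, the shipped Logstash pipeline also listens on TCP port 5000, so something along these lines (using BSD netcat, as in the upstream README) should get entries flowing into Elasticsearch:

```console
$ echo 'hello from 90DaysOfDevOps' | nc -q0 localhost 5000
```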
We are going to cover Grafana at some point, and you are going to see some data visualisation similarities between the two; you have also already seen Prometheus.
The key takeaway for me between the Elastic Stack and Prometheus + Grafana is that the Elastic Stack (or ELK Stack) is focused on logs, while Prometheus is focused on metrics.
I was reading this article from MetricFire [Prometheus vs. ELK](https://www.metricfire.com/blog/prometheus-vs-elk/) to get a better understanding of the different offerings.
## Resources
- [Understanding Logging: Containers & Microservices](https://www.youtube.com/watch?v=MMVdkzeQ848)
- [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/)
- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b)
- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0)

View File

@ -1,3 +1,13 @@
## Fluentd
### Fluentd
Another data collector that I wanted to explore as part of this observability section is [Fluentd](https://docs.fluentd.org/), an open-source unified logging layer.
Fluentd treats logs as JSON.
[Installing Fluentd](https://docs.fluentd.org/quickstart#step-1-installing-fluentd)
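As a rough sketch of what that quickstart covers when installing via the Ruby gem (the exact commands depend on how you choose to install Fluentd, so treat this as illustrative only):

```console
$ gem install fluentd --no-doc
$ fluentd --setup ./fluent
$ fluentd -c ./fluent/fluent.conf -vv
```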

View File

@ -0,0 +1,6 @@
### EFK Stack
In the previous section, we spoke about the ELK Stack, which uses Logstash as the log collector in the stack. In the EFK Stack, we swap that out for Fluentd.

View File

@ -0,0 +1,3 @@
## Data Visualisation
https://devops.com/making-data-work-data-visualization-techniques/

View File

@ -126,9 +126,9 @@ This will not cover all things DevOps but it will cover the areas that I feel wi
- [✔️] 📈 78 > [Hands-On Monitoring Tools](Days/day78.md)
- [✔️] 📈 79 > [The Big Picture: Log Management](Days/day79.md)
- [🚧] 📈 80 > [ELK Stack & Fluentd](Days/day80.md)
- [] 📈 81 > [Fluentd](Days/day81.md)
- [] 📈 82 > [EFK Stack](Days/day82.md)
- [] 📈 83 > [Data Visualisation](Days/day83.md)
### Store & Protect Your Data