Setting Up an ELK Cluster with Docker

Docker / Containers
2022-05-05
Tags: Docker

Elasticsearch, Kibana, and Logstash Versions

  • Elasticsearch: 7.8.1
  • Kibana: 7.8.1
  • Logstash: 7.8.1

Cluster Topology and Server Specs

  • Elasticsearch cluster: 3 nodes; Kibana: 1 node; Logstash: 1 node
  • Servers: 2 cores / 4 GB RAM, with a 100 GB SSD data disk

Installing the ELK Stack with Docker

I. Download and Install Docker

Uninstall old versions

sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine

Set up the repository

1. Install required dependencies

sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2

2. Add the stable repository

sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

Install Docker

1. List the installable versions, then install a specific one

yum list docker-ce --showduplicates | sort -r

sudo yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io

2. Or install the latest version directly

sudo yum install docker-ce docker-ce-cli containerd.io

3. Start Docker

sudo systemctl start docker

4. Run hello-world to verify that Docker installed successfully

sudo docker run hello-world

Uninstall Docker

1. Remove the Docker package

sudo yum remove docker-ce

2. Delete images, containers, volumes, and custom configuration files

sudo rm -rf /var/lib/docker

Configure a Docker registry mirror

1. Create or edit the /etc/docker/daemon.json file and choose a suitable mirror

vim /etc/docker/daemon.json

# docker-cn mirror
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}

# Tencent Cloud mirror
{
  "registry-mirrors": ["https://mirror.ccs.tencentyun.com"]
}

2. Run the following commands in order to restart the Docker service

sudo systemctl daemon-reload

sudo systemctl restart docker

3. Verify that the mirror is active

# Run docker info
docker info

# If the output contains the following, the mirror is configured correctly
Registry Mirrors:
  https://mirror.ccs.tencentyun.com

II. Install Elasticsearch

Install and run a single node

1. Pull the image

docker pull elasticsearch:7.8.1

2. Run a single node for the first time

docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.8.1

Map host directories

1. Create the /data directory and mount the SSD

mkdir /data
# Partition the disk (fdisk is interactive: create a new partition, then write)
fdisk -u /dev/vdb
# Format the new partition
mkfs.ext4 /dev/vdb1
# Back up fstab, add the mount entry, and mount
cp /etc/fstab /etc/fstab.bak
echo "/dev/vdb1 /data/ ext4 defaults 0 0" >> /etc/fstab
mount /dev/vdb1 /data

2. Create the Elasticsearch data and log directories (the per-node directories under data and logs are not created yet, so ES may fail to start with a permissions error; fix it by adding permissions with chmod)

mkdir -p /data/elasticsearch/data
mkdir -p /data/elasticsearch/logs

3. Grant read/write permissions on the Elasticsearch directories

chmod -R 777 /data/elasticsearch

Start three nodes with Docker Compose

1. Install docker-compose

# Download the current stable release of Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Make the binary executable
sudo chmod +x /usr/local/bin/docker-compose

# Create a symbolic link on the PATH
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

2. Create a docker-compose.yml file

version: '2.2'
services: 
  es01: 
    image: elasticsearch:7.8.1 
    container_name: es01 
    restart: always 
    environment: 
      - node.name=es01 
      - cluster.name=es-docker-cluster 
      - discovery.seed_hosts=es02,es03 
      - cluster.initial_master_nodes=es01,es02,es03 
      - bootstrap.memory_lock=true 
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m" 
    ulimits: 
      nofile: 
        soft: 650000 
        hard: 655360 
      as: 
        soft: -1 
        hard: -1 
      nproc: 
        soft: -1 
        hard: -1 
      fsize: 
        soft: -1 
        hard: -1 
      memlock: 
        soft: -1 
        hard: -1 
    volumes: 
      - ./data/es01:/usr/share/elasticsearch/data 
      - ./logs/es01:/usr/share/elasticsearch/logs 
    ports: 
      - 9201:9200 
    networks: 
      - elastic 
  es02: 
    image: elasticsearch:7.8.1 
    container_name: es02 
    restart: always 
    depends_on: 
      - es01 
    environment: 
      - node.name=es02 
      - cluster.name=es-docker-cluster 
      - discovery.seed_hosts=es01,es03 
      - cluster.initial_master_nodes=es01,es02,es03 
      - bootstrap.memory_lock=true 
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m" 
    ulimits: 
      nofile: 
        soft: 650000 
        hard: 655360 
      as: 
        soft: -1 
        hard: -1 
      nproc: 
        soft: -1 
        hard: -1 
      fsize: 
        soft: -1 
        hard: -1 
      memlock: 
        soft: -1 
        hard: -1 
    volumes: 
      - ./data/es02:/usr/share/elasticsearch/data 
      - ./logs/es02:/usr/share/elasticsearch/logs 
    ports: 
      - 9202:9200 
    networks: 
      - elastic 
  es03: 
    image: elasticsearch:7.8.1 
    container_name: es03 
    restart: always 
    depends_on: 
      - es01 
    environment: 
      - node.name=es03 
      - cluster.name=es-docker-cluster 
      - discovery.seed_hosts=es01,es02 
      - cluster.initial_master_nodes=es01,es02,es03 
      - bootstrap.memory_lock=true 
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m" 
    ulimits: 
      nofile: 
        soft: 650000 
        hard: 655360 
      as: 
        soft: -1 
        hard: -1 
      nproc: 
        soft: -1 
        hard: -1 
      fsize: 
        soft: -1 
        hard: -1 
      memlock: 
        soft: -1 
        hard: -1 
    volumes: 
      - ./data/es03:/usr/share/elasticsearch/data 
      - ./logs/es03:/usr/share/elasticsearch/logs 
    ports: 
      - 9203:9200 
    networks: 
      - elastic

networks: 
  elastic: 
    driver: bridge
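The compose file above bind-mounts ./data/esNN and ./logs/esNN from the host; if those directories are missing or not writable by the container's elasticsearch user (uid 1000), the nodes fail to start. The steps can be sketched as a small helper (`make_node_dirs` is an illustrative name, not part of any tool):

```shell
# Create the per-node data and log directories that the compose file
# bind-mounts, and open up permissions as done elsewhere in this guide.
# make_node_dirs is a hypothetical helper, not part of Docker or Elasticsearch.
make_node_dirs() {
  base="$1"
  for node in es01 es02 es03; do
    mkdir -p "$base/data/$node" "$base/logs/$node"
  done
  chmod -R 777 "$base/data" "$base/logs"
}
```

Run `make_node_dirs .` in the directory containing docker-compose.yml before `docker-compose up`.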

3. Set vm.max_map_count on the host

# Persist the setting
echo "vm.max_map_count = 655300" >>/etc/sysctl.conf

# Apply /etc/sysctl.conf immediately
sysctl -p

4. Start and test the cluster

# Make sure at least 4 GB of memory is allocated to Docker

# Start the cluster
docker-compose up

# Check whether the nodes are up (es01 publishes port 9201 on the host)
curl -X GET "localhost:9201/_cat/nodes?v&pretty"
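Beyond listing the nodes, the `_cluster/health` endpoint reports an overall status. A quick way to pull the status field out of that JSON response without extra tooling (a sketch; `es_status` is just an illustrative name):

```shell
# Read an Elasticsearch _cluster/health JSON response on stdin and print the
# "status" field: green/yellow means the cluster is usable, red means at
# least one primary shard is unassigned.
# es_status is a hypothetical helper, not part of Elasticsearch.
es_status() {
  grep -o '"status" *: *"[a-z]*"' | head -n 1 | grep -o '[a-z]*"$' | tr -d '"'
}
```

For example: `curl -s "localhost:9201/_cluster/health" | es_status` (9201 being the host port published for es01 in the compose file above).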

Encrypting communications between the Elasticsearch Docker containers

1. Create the configuration files

instances.yml identifies the instances that you need to create certificates for

instances: 
  - name: es01 
    dns: 
      - es01 
      - localhost 
    ip: 
      - 127.0.0.1

  - name: es02 
    dns: 
      - es02 
      - localhost 
    ip: 
      - 127.0.0.1

  - name: es03 
    dns: 
      - es03 
      - localhost 
    ip: 
      - 127.0.0.1

.env sets environment variables that are used in the Compose files

COMPOSE_PROJECT_NAME=es
CERTS_DIR=/usr/share/elasticsearch/config/certificates
VERSION=7.8.1

create-certs.yml is a Docker Compose file that starts a container to generate the Elasticsearch certificates

version: '2.2'

services: 
  create_certs: 
    image: elasticsearch:${VERSION} 
    container_name: create_certs 
    command: >
      bash -c '
        yum install -y -q -e 0 unzip;
        if [[ ! -f /certs/bundle.zip ]]; then
          bin/elasticsearch-certutil cert --silent --pem --in config/certificates/instances.yml --out /certs/bundle.zip;
          unzip /certs/bundle.zip -d /certs;
        fi;
        chown -R 1000:0 /certs
      '
    working_dir: /usr/share/elasticsearch 
    volumes: 
      - certs:/certs 
      - .:/usr/share/elasticsearch/config/certificates 
    networks: 
      - elastic

volumes: 
  certs: 
    driver: local

networks: 
  elastic: 
    driver: bridge

Modify the docker-compose.yml file

version: '2.2'
services: 
  es01: 
    image: elasticsearch:${VERSION} 
    container_name: es01 
    restart: always 
    environment: 
      - node.name=es01 
      - cluster.name=es-docker-cluster 
      - discovery.seed_hosts=es02,es03 
      - cluster.initial_master_nodes=es01,es02,es03 
      - bootstrap.memory_lock=true 
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m" 
      - xpack.security.enabled=true 
      - xpack.security.transport.ssl.enabled=true 
      - xpack.security.transport.ssl.verification_mode=certificate 
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt 
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es01/es01.crt 
      - xpack.security.transport.ssl.key=$CERTS_DIR/es01/es01.key 
    ulimits: 
      nofile: 
        soft: 650000 
        hard: 655360 
      as: 
        soft: -1 
        hard: -1 
      nproc: 
        soft: -1 
        hard: -1 
      fsize: 
        soft: -1 
        hard: -1 
      memlock: 
        soft: -1 
        hard: -1 
    volumes: 
      - ./data/es01:/usr/share/elasticsearch/data 
      - ./logs/es01:/usr/share/elasticsearch/logs 
      - certs:$CERTS_DIR 
    ports: 
      - 9201:9200 
    networks: 
      - elastic 
  es02: 
    image: elasticsearch:${VERSION} 
    container_name: es02 
    restart: always 
    depends_on: 
      - es01 
    environment: 
      - node.name=es02 
      - cluster.name=es-docker-cluster 
      - discovery.seed_hosts=es01,es03 
      - cluster.initial_master_nodes=es01,es02,es03 
      - bootstrap.memory_lock=true 
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m" 
      - xpack.security.enabled=true 
      - xpack.security.transport.ssl.enabled=true 
      - xpack.security.transport.ssl.verification_mode=certificate 
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt 
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es02/es02.crt 
      - xpack.security.transport.ssl.key=$CERTS_DIR/es02/es02.key 
    ulimits: 
      nofile: 
        soft: 650000 
        hard: 655360 
      as: 
        soft: -1 
        hard: -1 
      nproc: 
        soft: -1 
        hard: -1 
      fsize: 
        soft: -1 
        hard: -1 
      memlock: 
        soft: -1 
        hard: -1 
    volumes: 
      - ./data/es02:/usr/share/elasticsearch/data 
      - ./logs/es02:/usr/share/elasticsearch/logs 
      - certs:$CERTS_DIR 
    ports: 
      - 9202:9200 
    networks: 
      - elastic 
  es03: 
    image: elasticsearch:${VERSION} 
    container_name: es03 
    restart: always 
    depends_on: 
      - es01 
    environment: 
      - node.name=es03 
      - cluster.name=es-docker-cluster 
      - discovery.seed_hosts=es01,es02 
      - cluster.initial_master_nodes=es01,es02,es03 
      - bootstrap.memory_lock=true 
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m" 
      - xpack.security.enabled=true 
      - xpack.security.transport.ssl.enabled=true 
      - xpack.security.transport.ssl.verification_mode=certificate 
      - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt 
      - xpack.security.transport.ssl.certificate=$CERTS_DIR/es03/es03.crt 
      - xpack.security.transport.ssl.key=$CERTS_DIR/es03/es03.key 
    ulimits: 
      nofile: 
        soft: 650000 
        hard: 655360 
      as: 
        soft: -1 
        hard: -1 
      nproc: 
        soft: -1 
        hard: -1 
      fsize: 
        soft: -1 
        hard: -1 
      memlock: 
        soft: -1 
        hard: -1 
    volumes: 
      - ./data/es03:/usr/share/elasticsearch/data 
      - ./logs/es03:/usr/share/elasticsearch/logs 
      - certs:$CERTS_DIR 
    ports: 
      - 9203:9200 
    networks: 
      - elastic

volumes: 
  certs: 
    driver: local

networks: 
  elastic: 
    driver: bridge

2. Make sure at least 2 GB of memory is allocated to the Docker Engine

3. Generate the Elasticsearch certificates

docker-compose -f create-certs.yml run --rm create_certs

4. Build and start the Elasticsearch cluster

docker-compose up -d

5. Generate passwords for the built-in users with elasticsearch-setup-passwords

# Enter the es01 container
docker exec -it es01 /bin/bash

# Generate passwords for the built-in users
./bin/elasticsearch-setup-passwords auto
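`elasticsearch-setup-passwords auto` prints one line per built-in user in the form `PASSWORD <user> = <generated>`. If you capture that output to a file, a single user's password can be pulled out with awk (an illustrative sketch; `extract_password` is a made-up helper, not part of the Elasticsearch tooling):

```shell
# Extract the generated password for one built-in user from captured
# elasticsearch-setup-passwords output ("PASSWORD <user> = <password>" lines).
# extract_password is a hypothetical helper for illustration only.
extract_password() {
  awk -v u="$1" '$1 == "PASSWORD" && $2 == u { print $4 }'
}
```

For example: `extract_password elastic < setup-passwords.log`.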

III. Install Kibana

Install and run a single node

1. Pull the image

docker pull kibana:7.8.1

2. Run a single node for the first time

docker run --link YOUR_ELASTICSEARCH_CONTAINER_NAME_OR_ID:elasticsearch -p 5601:5601 kibana:7.8.1

Map host directories

1. Create the /data directory and mount the SSD

mkdir /data
fdisk -u /dev/vdb
mkfs.ext4  /dev/vdb1
cp /etc/fstab /etc/fstab.bak
echo "/dev/vdb1 /data/ ext4 defaults 0 0" >> /etc/fstab
mount /dev/vdb1 /data

2. Create the Kibana configuration directory

mkdir -p /data/kibana/config

3. Grant read/write permissions on the Kibana directory

chmod -R 777 /data/kibana

Start the node with Docker Compose

1. Install docker-compose

# Download the current stable release of Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Make the binary executable
sudo chmod +x /usr/local/bin/docker-compose

# Create a symbolic link on the PATH
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

2. Create a docker-compose.yml file

version: '2'
services: 
  kibana: 
    image: kibana:7.8.1 
    container_name: kibana 
    volumes: 
      - ./config/kibana.yml:/usr/share/kibana/config/kibana.yml 
    ports: 
      - 5601:5601

3. config/kibana.yml

# HTTP port
server.port: 5601

# HTTP bind address; 0.0.0.0 makes Kibana reachable from both internal and public IPs
server.host: "0.0.0.0"

# Elasticsearch node address (the host's internal IP, or 127.0.0.1)
elasticsearch.hosts: "http://127.0.0.1:9201"

# Elasticsearch username and password
elasticsearch.username: "elastic"
elasticsearch.password: "elastic_password"

# Kibana web UI locale (Simplified Chinese)
i18n.locale: "zh-CN"

4. Start and test Kibana

# Start Kibana
docker-compose up

# Check whether the node is up
curl -X GET "localhost:5601"

IV. Install Logstash

Install

1. Pull the image

docker pull logstash:7.8.1

Map host directories

1. Create the /data directory and mount the SSD

mkdir /data
fdisk -u /dev/vdb
mkfs.ext4  /dev/vdb1
cp /etc/fstab /etc/fstab.bak
echo "/dev/vdb1 /data/ ext4 defaults 0 0" >> /etc/fstab
mount /dev/vdb1 /data

2. Create the Logstash data and log directories

mkdir -p /data/logstash/config
mkdir -p /data/logstash/pipeline
mkdir -p /data/logstash/logs
mkdir -p /data/logstash/last-run-metadata
mkdir -p /data/logstash/java

3. Download the mysql-connector-java.jar driver compatible with your MySQL version

  • MySQL version: 5.7.20-log
  • Driver version: mysql-connector-java-5.1.48.tar.gz (any other recent 5.1.* release also works)
  • Official download: dev.mysql.com/downloads/connector/... (click "Looking for previous GA versions?" to find older releases)
  • Platform: choose the "Platform Independent" build

Upload the mysql-connector-java.jar driver

mv mysql-connector-java-5.1.48-bin.jar /data/logstash/java

4. Grant read/write permissions on the Logstash directories

chmod -R 777 /data/logstash

Start the node with Docker Compose

1. Install docker-compose

# Installation steps omitted (same as the Elasticsearch section)...

2. Create a docker-compose.yml file

version: '2'
services: 
  logstash: 
    image: logstash:7.8.1 
    container_name: logstash
    volumes:
      - ./config/:/usr/share/logstash/config/
      - ./pipeline/:/usr/share/logstash/pipeline/
      - ./logs/:/usr/share/logstash/logs/
      - ./last-run-metadata/:/usr/share/logstash/last-run-metadata/
      - ./java/:/usr/share/logstash/java/

3. config/logstash.yml

# Reload pipeline configuration automatically
config.reload.automatic: true
# How often to check for configuration changes
config.reload.interval: 3s

# Persistent queue
queue.type: persisted
# Durability: checkpoint after every written event
queue.checkpoint.writes: 1
# Dead letter queue
dead_letter_queue.enable: true

# Enable Logstash node monitoring
xpack.monitoring.enabled: true
# Elasticsearch username and password
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: elasticpassword
# Elasticsearch node addresses (the host's internal IP, or 127.0.0.1)
xpack.monitoring.elasticsearch.hosts: ["127.0.0.1:9201", "127.0.0.2:9202", "127.0.0.1:9203"]
# Discover the other nodes of the Elasticsearch cluster (must stay disabled when ports other than 9200 are used)
# xpack.monitoring.elasticsearch.sniffing: true
# How often to send monitoring data
xpack.monitoring.collection.interval: 10s
# Collect per-pipeline details for monitoring
xpack.monitoring.collection.pipeline.details.enabled: true

4. config/pipelines.yml

# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/*.conf"
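Since this guide stages a MySQL JDBC driver under /usr/share/logstash/java, the pipeline files matched above will typically use the jdbc input. A minimal sketch of such a pipeline/mysql.conf, assuming a hypothetical database `mydb`, table `orders`, and placeholder credentials and index name (adapt all of them):

```conf
# pipeline/mysql.conf -- illustrative only; the connection string, schedule,
# credentials, and index name are placeholders, not values from this guide.
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/java/mysql-connector-java-5.1.48-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/mydb"
    jdbc_user => "root"
    jdbc_password => "mysql_password"
    # Run every minute, tracking the last-seen id in the metadata directory
    # created earlier in this guide.
    schedule => "* * * * *"
    statement => "SELECT * FROM orders WHERE id > :sql_last_value"
    use_column_value => true
    tracking_column => "id"
    last_run_metadata_path => "/usr/share/logstash/last-run-metadata/orders"
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9201"]
    user => "elastic"
    password => "elastic_password"
    index => "orders"
  }
}
```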

5. config/log4j2.properties (for the latest version, refer to the official configuration; if this file is missing, no log files are written by default)

status = error
name = LogstashPropertiesConfig

appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]} %m%n

appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true

appender.rolling.type = RollingFile
appender.rolling.name = plain_rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]} %m%n
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30
appender.rolling.avoid_pipelined_filter.type = ScriptFilter
appender.rolling.avoid_pipelined_filter.script.type = Script
appender.rolling.avoid_pipelined_filter.script.name = filter_no_pipelined
appender.rolling.avoid_pipelined_filter.script.language = JavaScript
appender.rolling.avoid_pipelined_filter.script.value = ${sys:ls.pipeline.separate_logs} == false || !(logEvent.getContextData().containsKey("pipeline.id"))

appender.json_rolling.type = RollingFile
appender.json_rolling.name = json_rolling
appender.json_rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.json_rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.json_rolling.policies.type = Policies
appender.json_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.json_rolling.policies.time.interval = 1
appender.json_rolling.policies.time.modulate = true
appender.json_rolling.layout.type = JSONLayout
appender.json_rolling.layout.compact = true
appender.json_rolling.layout.eventEol = true
appender.json_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.json_rolling.policies.size.size = 100MB
appender.json_rolling.strategy.type = DefaultRolloverStrategy
appender.json_rolling.strategy.max = 30
appender.json_rolling.avoid_pipelined_filter.type = ScriptFilter
appender.json_rolling.avoid_pipelined_filter.script.type = Script
appender.json_rolling.avoid_pipelined_filter.script.name = filter_no_pipelined
appender.json_rolling.avoid_pipelined_filter.script.language = JavaScript
appender.json_rolling.avoid_pipelined_filter.script.value = ${sys:ls.pipeline.separate_logs} == false || !(logEvent.getContextData().containsKey("pipeline.id"))

appender.routing.type = Routing
appender.routing.name = pipeline_routing_appender
appender.routing.routes.type = Routes
appender.routing.routes.script.type = Script
appender.routing.routes.script.name = routing_script
appender.routing.routes.script.language = JavaScript
appender.routing.routes.script.value = logEvent.getContextData().containsKey("pipeline.id") ? logEvent.getContextData().getValue("pipeline.id") : "sink";
appender.routing.routes.route_pipelines.type = Route
appender.routing.routes.route_pipelines.rolling.type = RollingFile
appender.routing.routes.route_pipelines.rolling.name = appender-${ctx:pipeline.id}
appender.routing.routes.route_pipelines.rolling.fileName = ${sys:ls.logs}/pipeline_${ctx:pipeline.id}.log
appender.routing.routes.route_pipelines.rolling.filePattern = ${sys:ls.logs}/pipeline_${ctx:pipeline.id}.%i.log.gz
appender.routing.routes.route_pipelines.rolling.layout.type = PatternLayout
appender.routing.routes.route_pipelines.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.routing.routes.route_pipelines.rolling.policy.type = SizeBasedTriggeringPolicy
appender.routing.routes.route_pipelines.rolling.policy.size = 100MB
appender.routing.routes.route_pipelines.strategy.type = DefaultRolloverStrategy
appender.routing.routes.route_pipelines.strategy.max = 30
appender.routing.routes.route_sink.type = Route
appender.routing.routes.route_sink.key = sink
appender.routing.routes.route_sink.null.type = Null
appender.routing.routes.route_sink.null.name = drop-appender

rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
rootLogger.appenderRef.rolling.ref = ${sys:ls.log.format}_rolling
rootLogger.appenderRef.routing.ref = pipeline_routing_appender

# Slowlog

appender.console_slowlog.type = Console
appender.console_slowlog.name = plain_console_slowlog
appender.console_slowlog.layout.type = PatternLayout
appender.console_slowlog.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n

appender.json_console_slowlog.type = Console
appender.json_console_slowlog.name = json_console_slowlog
appender.json_console_slowlog.layout.type = JSONLayout
appender.json_console_slowlog.layout.compact = true
appender.json_console_slowlog.layout.eventEol = true

appender.rolling_slowlog.type = RollingFile
appender.rolling_slowlog.name = plain_rolling_slowlog
appender.rolling_slowlog.fileName = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}.log
appender.rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling_slowlog.policies.type = Policies
appender.rolling_slowlog.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling_slowlog.policies.time.interval = 1
appender.rolling_slowlog.policies.time.modulate = true
appender.rolling_slowlog.layout.type = PatternLayout
appender.rolling_slowlog.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.rolling_slowlog.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling_slowlog.policies.size.size = 100MB
appender.rolling_slowlog.strategy.type = DefaultRolloverStrategy
appender.rolling_slowlog.strategy.max = 30

appender.json_rolling_slowlog.type = RollingFile
appender.json_rolling_slowlog.name = json_rolling_slowlog
appender.json_rolling_slowlog.fileName = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}.log
appender.json_rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.json_rolling_slowlog.policies.type = Policies
appender.json_rolling_slowlog.policies.time.type = TimeBasedTriggeringPolicy
appender.json_rolling_slowlog.policies.time.interval = 1
appender.json_rolling_slowlog.policies.time.modulate = true
appender.json_rolling_slowlog.layout.type = JSONLayout
appender.json_rolling_slowlog.layout.compact = true
appender.json_rolling_slowlog.layout.eventEol = true
appender.json_rolling_slowlog.policies.size.type = SizeBasedTriggeringPolicy
appender.json_rolling_slowlog.policies.size.size = 100MB
appender.json_rolling_slowlog.strategy.type = DefaultRolloverStrategy
appender.json_rolling_slowlog.strategy.max = 30

logger.slowlog.name = slowlog
logger.slowlog.level = trace
logger.slowlog.appenderRef.console_slowlog.ref = ${sys:ls.log.format}_console_slowlog
logger.slowlog.appenderRef.rolling_slowlog.ref = ${sys:ls.log.format}_rolling_slowlog
logger.slowlog.additivity = false

logger.licensereader.name = logstash.licensechecker.licensereader
logger.licensereader.level = error

# Deprecation log
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_plain_rolling
appender.deprecation_rolling.fileName = ${sys:ls.logs}/logstash-deprecation.log
appender.deprecation_rolling.filePattern = ${sys:ls.logs}/logstash-deprecation-%d{yyyy-MM-dd}-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.deprecation_rolling.policies.time.interval = 1
appender.deprecation_rolling.policies.time.modulate = true
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]} %m%n
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 100MB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 30

logger.deprecation.name = org.logstash.deprecation, deprecation
logger.deprecation.level = WARN
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_plain_rolling
logger.deprecation.additivity = false

logger.deprecation_root.name = deprecation
logger.deprecation_root.level = WARN
logger.deprecation_root.appenderRef.deprecation_rolling.ref = deprecation_plain_rolling
logger.deprecation_root.additivity = false

6. Start and test the node

# Start the node
docker-compose up

# Check whether the node is up
tail -fn 100 logs/logstash-plain.log