Spring Cloud in Practice (8): ELK

Once services are scaled out, every troubleshooting session means checking logs across multiple machines, which is tedious; we need one place to view all the logs.

ELK is a centralized logging platform composed of Elasticsearch, Logstash, and Kibana: Elasticsearch stores the logs, Logstash collects them, and Kibana provides the visualization UI. Logstash is usually paired with Filebeat, which collects local logs and forwards them to Logstash.

Filebeat can also skip Logstash and ship logs straight to Elasticsearch for display, as sketched below.
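A minimal filebeat.yml for that direct path might look like the following — a sketch assembled from the settings used later in this post; the paths and hosts are placeholders to adjust for your environment:

filebeat:
  inputs:
    - type: log
      paths:
        - /app/logs/*.json   # hypothetical local log path

output:
  # write directly to Elasticsearch, no Logstash in between
  elasticsearch:
    hosts: ["elasticsearch:9200"]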

ELK

elasticsearch

Elasticsearch was already set up in an earlier post; here the compose file gets a small tweak so the container can be located by hostname.

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
    container_name: elasticsearch
    # add a hostname
    hostname: elasticsearch
    networks:
      - default
      - elk
    ports:
      - "9300:9300"
      - "9200:9200"
networks:
  elk:
    external: true
kibana

Modify kibana's docker-compose.yml in the same way:

version: '3'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:6.7.0
    container_name: kibana
    # add a hostname
    hostname: kibana
    networks:
      - default
      - elk
    ports:
      - "5601:5601"
    environment:
      SERVER_NAME: localhost
      SERVER_PORT: 5601
      # use the elasticsearch hostname instead of the previous IP
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
networks:
  elk:
    external: true
logstash

The docker-compose.yml is as follows:

version: '3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:6.7.1
    container_name: logstash
    hostname: logstash
    networks:
      - default
      - elk
    volumes:
      - ~/logs:/app/logs
      - ~/otherapps/logstash/test-pipeline.conf:/usr/share/logstash/pipeline/test-pipeline.conf
networks:
  elk:
    external: true

Also configure the pipeline:

input {
  file {
    path => "/app/logs/*.json"
    codec => "json"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "logstash"
  }
}
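One caveat when testing: Logstash's file input tails files from the end by default, so lines written before Logstash started are skipped. If existing content should be picked up on a test run, the input can be told to read from the beginning — an optional tweak, not part of the setup above:

input {
  file {
    path => "/app/logs/*.json"
    codec => "json"
    # read files from the start instead of tailing; useful for one-off tests
    start_position => "beginning"
  }
}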
JSON logs

Configure the producer and consumer projects to emit logs in JSON.

Spring Boot uses logback for logging by default, so we add the corresponding logstash encoder.

  • Add the dependency
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
  • Configure logback
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <!-- Example for logging into the build folder of your project -->
    <property name="LOG_FILE" value="~/logs/${springAppName}"/>

    <!-- You can override this to have a custom pattern -->
    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

    <!-- Appender to log to console -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <!-- Minimum logging level to be presented in the console logs -->
            <level>DEBUG</level>
        </filter>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- Appender to log to file -->
    <appender name="flatfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.gz</fileNamePattern>
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- Appender to log to file in a JSON format -->
    <appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}.json</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}.gz</fileNamePattern>
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "parent": "%X{X-B3-ParentSpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="console"/>
        <!-- uncomment this to have also JSON logs -->
        <appender-ref ref="logstash"/>
        <appender-ref ref="flatfile"/>
    </root>
</configuration>
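With this configuration in place, ordinary logger calls in the producer and consumer land in the ${LOG_FILE}.json file as one JSON object per line; the trace/span fields are populated from the MDC (by Spring Cloud Sleuth, if it is on the classpath). A minimal sketch with a hypothetical controller, just to show that no special logging API is involved:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical endpoint in the producer project; the class name is illustrative.
@RestController
public class HelloController {

    private static final Logger log = LoggerFactory.getLogger(HelloController.class);

    @GetMapping("/hello")
    public String hello() {
        // A plain SLF4J call; the "logstash" appender renders it as a JSON line.
        log.info("handling /hello request");
        return "hello";
    }
}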
Testing

Start everything up for a test. Under Kibana's Management section the logstash ES index should show up; add it as a Kibana index pattern and the logs can then be viewed.
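To confirm the index really exists before adding the pattern, a quick check against Elasticsearch helps, for example from Kibana's Dev Tools console (any HTTP client pointed at port 9200 works just as well); an index named logstash should appear in the listing:

GET _cat/indices?v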

ELK + filebeat

logstash

Modify pipeline.conf:

input {
  beats {
    port => 5045
    type => beats
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "logstash"
  }
}
filebeat

The docker-compose.yml is as follows:

version: '3'
services:
  filebeat:
    image: store/elastic/filebeat:6.7.1
    container_name: filebeat
    networks:
      - default
      - elk
    volumes:
      - ~/logs:/app/logs
      - ~/otherapps/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ~/otherapps/filebeat/registry:/usr/share/filebeat/registry

networks:
  elk:
    external: true

And filebeat.yml:

filebeat:
  inputs:
    - type: log
      paths:
        - /app/logs/*.json
      # decode each line as JSON and lift the keys to the top level
      json.keys_under_root: true
  registry_file: /usr/share/filebeat/registry/mark

output:
  logstash:
    hosts: ["logstash:5045"]

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
Testing

The logs show up the same way as in the plain ELK setup.

kibana logs

Kibana's menu bar has a Logs entry dedicated to an aggregated view of logs. To use it, make the following changes to the relevant components.

filebeat

Modify filebeat.yml; a module needs to be added.

filebeat:
  inputs:
    - type: log
      paths:
        - /app/logs/*.json
      json.keys_under_root: true
  modules:
    - module: logstash
  registry_file: /usr/share/filebeat/registry/mark

output:
  elasticsearch:
    hosts: ["elasticsearch:9200"]
  # logstash:
  #   hosts: ["logstash:5044"]

setup.kibana:
  host: "kibana:5601"

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
Testing

Start it up. Under Kibana's Management section the filebeat ES index should show up; add it as a Kibana index pattern to browse the data.

(Screenshot: kibana-es-filebeat)

The logs can then be viewed under Logs.

(Screenshot: kibana-logs)

