Collecting logs into Elasticsearch with Filebeat, filtering logs with Logstash
Source: Original
Date: 2024-09-02
Author: 脚本小站
Category: Cloud Native
Install Filebeat on Kubernetes:
wget https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz
tar -xf helm-v3.9.4-linux-amd64.tar.gz
cp ./linux-amd64/helm /usr/local/bin/helm
helm version
helm help

helm repo add elastic https://helm.elastic.co
helm pull elastic/filebeat --version 7.17.3
tar -xf filebeat-7.17.3.tgz
helm install filebeat . -n logging --create-namespace
helm upgrade filebeat . -n logging
helm uninstall filebeat -n logging
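Rather than editing the unpacked chart in place, the Filebeat configuration can be supplied through chart values. A minimal values.yaml sketch, assuming the elastic/filebeat 7.17.x chart's `filebeatConfig` key (verify against the chart's own values.yaml; the Elasticsearch address is the placeholder used later in this article):

```yaml
# values.yaml -- sketch; filebeatConfig is the key the elastic/filebeat
# chart renders into filebeat.yml (check the chart's default values.yaml)
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
    output.elasticsearch:
      hosts: ["172.17.68.100:9200"]   # placeholder address, replace with yours
```

Pass it at install time with `helm install filebeat . -n logging --create-namespace -f values.yaml`.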
Filebeat configuration reference:
filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
  processors:
    - add_kubernetes_metadata:
        default_indexers.enabled: true
        default_matchers.enabled: true
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
    - drop_event:
        when:
          or:
            - regexp:
                kubernetes.pod.name: "filebeat-*"
            - regexp:
                kubernetes.pod.name: "external-dns.*"
            - regexp:
                kubernetes.pod.name: "coredns-*"
    - drop_fields:
        fields:
          - log
          - input
          - container.*
          - kubernetes.labels
          - kubernetes.node
          - kubernetes.pod.id

output.elasticsearch:
  host: '${NODE_NAME}'
  hosts: ["172.17.68.100:9200"]
  indices:
    - index: "efk-test-kube-system-%{+yyyy.MM.dd}"
      when.contains:
        kubernetes.namespace: "kube-system"
    - index: "efk-test-test-%{+yyyy.MM.dd}"
      when.contains:
        kubernetes.namespace: "test"

# Disable index lifecycle management; if ILM is enabled,
# the indices settings above are ignored
setup.ilm.enabled: false
# Name of the index template
setup.template.name: "efk-test"
# Match pattern of the index template
setup.template.pattern: "efk-test-*"
# Overwrite an existing index template
setup.template.overwrite: false
# Shard and replica counts for the index
setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 0
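The `indices` / `when.contains` rules above route each event to an index by its namespace, with `%{+yyyy.MM.dd}` expanded to the event's UTC date. The routing can be sketched as a shell function (names and dates only; this is an illustration, not Filebeat itself):

```shell
# Sketch of the namespace -> index routing done by the indices section above
index_for_ns() {
  day="$(date -u +%Y.%m.%d)"   # %{+yyyy.MM.dd} uses the UTC date
  case "$1" in
    kube-system) echo "efk-test-kube-system-$day" ;;
    test)        echo "efk-test-test-$day" ;;
    *)           echo "(default index)" ;;  # no rule matches: Filebeat uses its default index
  esac
}

index_for_ns kube-system
index_for_ns test
```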
Filebeat configuration for collecting logs directly from a node:
filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /home/probe/code/finstepapi/finstep-probe-*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

# Custom output index name. Since version 7, a template must be defined
# first before a custom index name can be set in the output section.
setup.template.name: "finstepapi-monitor"
setup.template.pattern: "finstepapi-monitor-*"
setup.template.overwrite: true
setup.template.enabled: true
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["10.159.0.55:9200"]
  index: "finstepapi-monitor-%{+yyyy.MM.dd}"

## Output to the console for debugging
#output.console:
#  pretty: true
## Add the two lines below to output only the message field
#  codec.format:
#    string: '%{[message]}'

# Drop fields that are not needed
processors:
  - drop_fields:
      fields: ["log","host","input","agent","ecs"]
      ignore_missing: false
#  - add_host_metadata:
#      when.not.contains.tags: forwarded
#  - add_cloud_metadata: ~
#  - add_docker_metadata: ~
#  - add_kubernetes_metadata: ~
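The `%{+yyyy.MM.dd}` suffix in the output index is replaced with the event's UTC date, which is why the template pattern `finstepapi-monitor-*` is needed: it matches one new index per day. A quick shell check of what today's index name would look like:

```shell
# Today's daily index name, built the way Filebeat expands
# the %{+yyyy.MM.dd} pattern (UTC date)
echo "finstepapi-monitor-$(date -u +%Y.%m.%d)"
```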
Notes on creating a custom index name:
cnblogs.com/zyxnhr/p/12214706.html
Install Elasticsearch and Kibana:
mkdir -pv /data/elasticsearch/{logs,data}
chmod 777 -R /data/elasticsearch/logs /data/elasticsearch/data
docker run -d --name elasticsearch \
  -p 9200:9200 \
  -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
  -v /data/elasticsearch/logs:/usr/share/elasticsearch/logs \
  -v /data/elasticsearch/data:/usr/share/elasticsearch/data \
  elasticsearch:7.17.1

docker run \
  -d --name kibana \
  -p 5601:5601 \
  kibana:7.17.1
mkdir -pv /data/kibana/config
docker cp kibana:/usr/share/kibana/config /data/kibana/

vim /data/kibana/config/kibana.yml
server.host: "0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://localhost:9200" ]   # remember to change this to your Elasticsearch address
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"

docker stop kibana
docker rm kibana
docker run \
  -d --name kibana \
  -p 5601:5601 \
  -v /data/kibana/config:/usr/share/kibana/config \
  kibana:7.17.1
Filter logs with Logstash: with the configuration below, the Filebeat configuration does not need to change; just install it as-is.
input {
  beats {
    port => 5044
  }
}

output {
  stdout { codec => rubydebug }
}
#output {
#  elasticsearch {
#    hosts => ["http://10.159.0.19:9200"]
#    index => "%{[@metadata][index]}"
#  }
#}

filter {
#  mutate {
#    add_field => { "new_field" => "Hello, World!" }
#  }
  if [kubernetes][namespace] == "kube-system" and [kubernetes][deployment][name] == "ingress-nginx-controller" {
    mutate {
      add_field => { "[@metadata][index]" => "dev-%{[kubernetes][deployment][name]}-%{+YYYY.MM.dd}" }
      remove_field => ["[kubernetes][node]", "[kubernetes][namespace_labels]", "[kubernetes][labels]",
                       "[kubernetes][namespace]", "[kubernetes][deployment]", "[kubernetes][namespace_uid]",
                       "[kubernetes][pod][uid]", "[kubernetes][pod][ip]", "[kubernetes][replicaset]",
                       "[kubernetes][container]", "container", "agent", "tags", "ecs", "input",
                       "host", "stream", "log", "@version"]
    }
    json {
      source => "message"
    }
  } else if [kubernetes][namespace] != "kube-system" and [kubernetes][deployment][name] {
    mutate {
      add_field => { "[@metadata][index]" => "dev-%{[kubernetes][deployment][name]}-%{+YYYY.MM.dd}" }
      remove_field => ["[kubernetes][node]", "[kubernetes][namespace_labels]", "[kubernetes][labels]",
                       "[kubernetes][namespace]", "[kubernetes][deployment]", "[kubernetes][namespace_uid]",
                       "[kubernetes][pod][uid]", "[kubernetes][pod][ip]", "[kubernetes][replicaset]",
                       "[kubernetes][container]", "container", "agent", "tags", "ecs", "input",
                       "host", "stream", "log", "@version"]
    }
#    grok {
#      match => { "message" => "%{COMBINEDAPACHELOG}" }
#    }
  } else {
    mutate {
      add_field => { "[@metadata][index]" => "dev-other-%{+YYYY.MM.dd}" }
      remove_field => ["[kubernetes][labels]"]
    }
  }
}

#output {
#  elasticsearch {
#    hosts => ["http://10.159.0.19:9200"]
#    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
#    #user => "elastic"
#    #password => "changeme"
#  }
#}
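For the `beats` input above to receive anything, Filebeat has to ship events to Logstash on port 5044 rather than straight to Elasticsearch. If your filebeat.yml still has `output.elasticsearch`, a minimal replacement fragment would be (the address is a placeholder taken from the commented output block above; use your Logstash host):

```yaml
# filebeat.yml -- send events to the Logstash beats input on 5044
# (10.159.0.19 is a placeholder address)
output.logstash:
  hosts: ["10.159.0.19:5044"]
```

Filebeat allows only one output at a time, so the Elasticsearch output must be removed or commented out when this is added.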
Unneeded logs can be dropped entirely; this is also handy during debugging to filter out noisy events:
if [kubernetes][namespace] == "kube-system" {
  drop { }
}
Create the index:
View the logs:
Filter the logs:
Reference:
cnblogs.com/whtjyt/p/17829241.html