Collecting Kubernetes Logs with Filebeat


1. Overview of Filebeat

Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing.

Filebeat works as follows: when you start Filebeat, it starts one or more inputs that look in the locations you have specified for log data. For each log that Filebeat finds, it starts a harvester. Each harvester reads a single log for new content and sends the new log data to libbeat, which aggregates the events and sends the aggregated data to the output configured for Filebeat.
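
To make this flow concrete, here is a minimal standalone filebeat.yml sketch with one input and one output (the path and the Logstash address are hypothetical placeholders, not values from this deployment):

# Minimal filebeat.yml sketch: each file matched under paths gets its own
# harvester, and libbeat ships the events to the configured output.
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log                     # hypothetical path; point at your logs
output.logstash:
  hosts: ["logstash.example.com:5044"]   # hypothetical Logstash endpoint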

2. Run Filebeat on Kubernetes

Deploy Filebeat as a DaemonSet to ensure there is a running instance on each node of the cluster. The Docker log folder on the host (/var/lib/docker/containers) is mounted into the Filebeat container, so Filebeat starts collecting log files as soon as they appear in that folder.

The official manifest is used here for the deployment:

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.5/deploy/kubernetes/filebeat-kubernetes.yaml

3. Setup

By default, Filebeat sends events to an existing Elasticsearch deployment if one is present. To specify a different target, change the following parameters in the manifest file:

env:
- name: ELASTICSEARCH_HOST
  value: elasticsearch
- name: ELASTICSEARCH_PORT
  value: "9200"
- name: ELASTICSEARCH_USERNAME
  value: elastic
- name: ELASTICSEARCH_PASSWORD
  value: changeme
- name: ELASTIC_CLOUD_ID
  value:
- name: ELASTIC_CLOUD_AUTH
  value:
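
These variables are consumed in filebeat.yml through Filebeat's ${VAR:default} expansion; the (commented-out) Elasticsearch output in the manifest below shows how:

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}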

To output to Logstash instead:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:

    #cloud.id: ${ELASTIC_CLOUD_ID}
    #cloud.auth: ${ELASTIC_CLOUD_AUTH}

    #output.elasticsearch:
    #  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
    #  username: ${ELASTICSEARCH_USERNAME}
    #  password: ${ELASTICSEARCH_PASSWORD}
    output.logstash:
      hosts: ["192.168.0.104:5044"]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: log                              # set type to log
      paths:
        - /var/lib/docker/containers/*/*.log
      #fields:
      #  app: k8s
      #  type: docker-log
      fields_under_root: true
      json.keys_under_root: true
      json.overwrite_keys: true
      encoding: utf-8
      fields.sourceType: docker-log          # used in the index name format
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.5.4   # pull the image in advance
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # The data folder stores a registry of read status for all files,
      # so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]   # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
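
Since output.logstash above points at 192.168.0.104:5044, the Logstash side must be listening for Beats connections on that port. A minimal sketch of that input, assuming the stock beats input plugin (only the port is taken from the manifest):

input {
  beats {
    port => 5044   # must match the port in output.logstash
  }
}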

Create and run:
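
A typical sequence, assuming the manifest was saved as filebeat-kubernetes.yaml as downloaded above:

# Create the resources, then check the pods and tail the Filebeat startup log
kubectl apply -f filebeat-kubernetes.yaml
kubectl get pods -n kube-system -l k8s-app=filebeat
kubectl logs -n kube-system -l k8s-app=filebeat --tail=20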

If you can see Filebeat's startup log in the pod output, the startup was successful.

4. Troubleshooting

If the startup is not successful, check the Logstash log; it may report an error like the following:

[2019-12-20T19:53:14,049][ERROR][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"dev-%{[fields][sourceType]}-2019-12-20", :_type=>"doc", :routing=>nil}, #], :response=>{"_index"=>"dev-%{[fields][sourceType]}-2019-12-20", "_type"=>"doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name [dev-%{[fields][sourceType]}-2019-12-20], must be lowercase", "index_uuid"=>"_na_", "index"=>"dev-%{[fields][sourceType]}-2019-12-20"}}}

The reason is that the index name in the Logstash output must not contain uppercase characters: the %{[fields][sourceType]} placeholder failed to resolve, so its literal text, including the uppercase "T" in sourceType, ended up in the index name.

My original Logstash conf file:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => '%{platform}-%{[fields][sourceType]}-%{+YYYY-MM-dd}'
    template => "/opt/logstash-6.5.2/config/af-template.json"
    template_overwrite => true
  }
}

Modified:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "k8s-%{+YYYY.MM.dd}"
    template => "/opt/logstash-6.5.2/config/af-template.json"
    template_overwrite => true
  }
}
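
To confirm the fix, check that Elasticsearch is now creating the index under the lowercase name. One way to do this, assuming Elasticsearch is reachable on localhost:9200 as in the conf above:

# List the indices and look for the lowercase k8s-* index
curl -s 'http://localhost:9200/_cat/indices?v' | grep k8s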

A perfect ending, with no pitfalls!
