
EFK Log Collection system in K8s Cluster


The Kubernetes cluster itself does not provide a solution for log collection. Generally speaking, there are three main solutions for log collection:

1. Run an agent on each node to collect logs

Since this agent must run on every node, it is typically deployed with the DaemonSet controller.

This approach is only suitable for collecting application logs written to stdout and stderr.

Simply put, a log agent container runs on each node and collects the logs under that node's /var/log and /var/lib/docker/containers/ directories.

2. Include a sidecar container in each Pod to collect application logs

Running the log collection agent in a sidecar container consumes a lot of resources, because every Pod whose logs you want to collect needs its own agent container, and you cannot use the kubectl logs command to access these logs (see the sketch after this list).

3. Push the log information directly to the collection backend in the application.
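To make option 2 concrete, here is a minimal sketch of the sidecar pattern, assuming a hypothetical Pod named counter-with-sidecar (the Pod name, the file path /var/log/app.log and the busybox image are illustrative, not part of this tutorial): the application container writes to a file on a shared emptyDir volume, and the sidecar streams that file to its own stdout.

apiVersion: v1
kind: Pod
metadata:
  name: counter-with-sidecar
spec:
  containers:
  # application container: writes its log to a file instead of stdout
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)" >> /var/log/app.log; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  # sidecar container: tails the shared file so the log can be shipped or read with kubectl logs -c count-log
  - name: count-log
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}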

The most popular log collection solution in Kubernetes is the Elasticsearch, Fluentd and Kibana (EFK) stack, which is also the officially recommended one.

Elasticsearch is a real-time, distributed and scalable search engine that allows full-text, structured search. It is usually used to index and search large amounts of log data, as well as to search many different types of documents.

Create an Elasticsearch cluster

Generally, 3 Elasticsearch Pods are used to avoid the "split-brain" problem in a highly available multi-node cluster (discovery.zen.minimum_master_nodes is set to 2, i.e. n/2 + 1), and the StatefulSet controller is used to create the Elasticsearch Pods.

When creating the StatefulSet Pods, a StorageClass can be referenced in the volumeClaimTemplates so that the PV and PVC are generated automatically, which gives us data persistence; an nfs-client-provisioner has been prepared in advance.

1. Create a separate namespace

apiVersion: v1
kind: Namespace
metadata:
  name: logging
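Assuming the manifest above is saved as namespace.yaml (the filename is only illustrative), it can be applied and verified like this:

$ kubectl apply -f namespace.yaml
$ kubectl get ns logging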

2. Create a StorageClass, or use an existing one

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: es-data-db
provisioner: fuseim.pri/ifs  # this value needs to be consistent with the provisioner configuration of nfs-client-provisioner
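Saved as es-storageclass.yaml (an assumed filename), the StorageClass can be created and checked with:

$ kubectl apply -f es-storageclass.yaml
$ kubectl get storageclass es-data-db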

3. You need to create a headless service before creating a StatefulSet pod.

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node
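With the manifest saved as elasticsearch-svc.yaml (assumed filename), create the headless service and confirm that its clusterIP is None:

$ kubectl apply -f elasticsearch-svc.yaml
$ kubectl get svc elasticsearch -n logging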

4. Create elasticsearch statefulset pod

$ docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3

$ docker pull busybox

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.zen.ping.unicast.hosts
          value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
        - name: discovery.zen.minimum_master_nodes
          value: "2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: es-data-db
      resources:
        requests:
          storage: 100Gi
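Assuming the StatefulSet manifest is saved as elasticsearch-statefulset.yaml (illustrative name), apply it and watch the Pods come up one by one (a StatefulSet creates them in order):

$ kubectl apply -f elasticsearch-statefulset.yaml
$ kubectl get pods -n logging -l app=elasticsearch -w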

$ kubectl get pod -n logging
NAME           READY   STATUS    RESTARTS   AGE
es-cluster-0   1/1     Running   0          42s
es-cluster-1   1/1     Running   0          10m
es-cluster-2   1/1     Running   0          9m49s

Three directories are automatically generated on the NFS server, one for each of the three Pods to store its data:

$ cd /data/k8s
$ ls
logging-data-es-cluster-0-pvc-98c87fc5-c581-11e9-964d-000c29d8512b/
logging-data-es-cluster-1-pvc-07872570-c590-11e9-964d-000c29d8512b/
logging-data-es-cluster-2-pvc-27e15977-c590-11e9-964d-000c29d8512b/

Check the status of the es cluster

$ kubectl port-forward es-cluster-0 9200:9200 --namespace=logging

Execute in another window

$ curl http://localhost:9200/_cluster/state?pretty
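Besides the full cluster state, the standard _cluster/health endpoint gives a quicker summary; with 3 nodes and status green the cluster is healthy (the exact output depends on your environment):

$ curl http://localhost:9200/_cluster/health?pretty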

Create kibana with deployment controller

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  type: NodePort
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.4.3
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
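Assuming the Kibana manifests are saved as kibana.yaml (assumed filename), apply them and wait for the Deployment to become available:

$ kubectl apply -f kibana.yaml
$ kubectl rollout status deployment/kibana -n logging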

$ kubectl get svc -n logging | grep kibana
kibana   NodePort   10.111.239.0   5601:32081/TCP   114m

Visit kibana

http://192.168.1.243:32081

Install and configure Fluentd

1. Specify the Fluentd configuration file through the ConfigMap object

kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-config
  namespace: logging
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>
  containers.input.conf: |-
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      localtime
      tag raw.kubernetes.*
      format json
      read_from_head true
    </source>
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>
  system.input.conf: |-
    <source>
      @id journald-docker
      @type systemd
      filters [{ "_SYSTEMD_UNIT": "docker.service" }]
      <storage>
        @type local
        persistent true
      </storage>
      read_from_head true
      tag docker
    </source>
    <source>
      @id journald-kubelet
      @type systemd
      filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <storage>
        @type local
        persistent true
      </storage>
      read_from_head true
      tag kubelet
    </source>
  forward.input.conf: |-
    <source>
      @type forward
    </source>
  output.conf: |-
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      include_tag_key true
      host elasticsearch
      port 9200
      logstash_format true
      request_timeout 30s
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>

In the above configuration file, we configure collection from the Docker container log directory as well as the logs of the docker and kubelet services; the collected data is processed and then sent to the elasticsearch:9200 service.
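Assuming the ConfigMap above is saved as fluentd-configmap.yaml (illustrative name), create it first so the DaemonSet in the next step can mount it:

$ kubectl apply -f fluentd-configmap.yaml
$ kubectl get configmap fluentd-config -n logging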

2. Use DaemonSet to create fluentd pod

$ docker pull cnych/fluentd-elasticsearch:v2.0.4

$ docker info
Docker Root Dir: /var/lib/docker

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: logging
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    version: v2.0.4
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.0.4
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v2.0.4
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: cnych/fluentd-elasticsearch:v2.0.4
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-config
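With the manifests above saved as fluentd-daemonset.yaml (assumed filename), apply them and check that the DaemonSet schedules one Pod per labeled node:

$ kubectl apply -f fluentd-daemonset.yaml
$ kubectl get ds fluentd-es -n logging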

It collects logs under /var/log, /var/log/containers and /var/lib/docker/containers.

It also collects the logs of the docker and kubelet services from journald.

In order to flexibly control which nodes' logs are collected, we also add a nodeSelector attribute here:

nodeSelector:
  beta.kubernetes.io/fluentd-ds-ready: "true"

So label all nodes:

$ kubectl get node

$ kubectl label nodes server243.example.com beta.kubernetes.io/fluentd-ds-ready=true

$ kubectl get nodes --show-labels
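If you want to collect logs from every node rather than labeling them one by one, kubectl can label all nodes in a single command (use with care, since this also includes the master nodes):

$ kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready=true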

Since our cluster was built with kubeadm, the master node is tainted by default, so if you want to collect logs from the master node as well, you need to add a toleration:

tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule

$ kubectl get pod -n logging
NAME                     READY   STATUS    RESTARTS   AGE
es-cluster-0             1/1     Running   0          10h
es-cluster-1             1/1     Running   0          10h
es-cluster-2             1/1     Running   0          10h
fluentd-es-rf6p6         1/1     Running   0          9h
fluentd-es-s99r2         1/1     Running   0          9h
fluentd-es-snmtt         1/1     Running   0          9h
kibana-bd6f49775-qsxb2   1/1     Running   0          11h

3. Configure on kibana

http://192.168.1.243:32081

Create an index pattern: in the first step enter logstash-*, and in the second step select @timestamp as the time filter field.

4. Create a test pod and view the log on kibana

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
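Assuming the Pod manifest is saved as counter.yaml (illustrative name), create it and confirm that it is writing to stdout before searching in Kibana:

$ kubectl apply -f counter.yaml
$ kubectl logs counter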

Go back to the Kibana Dashboard page and enter kubernetes.pod_name:counter in the search bar on the Discover page.
