This article describes how to use EFK (Elasticsearch, Fluentd, Kibana) in Kubernetes. The editor finds it very practical and shares it here as a reference; follow along below.
One: preface
1. When we installed the Kubernetes cluster we downloaded the package https://dl.k8s.io/v1.8.5/kubernetes-client-linux-amd64.tar.gz. After unpacking, the directory cluster/addons contains the yaml files for the various add-ons, which in most cases can be used with only minor changes.
2. Building a Kubernetes cluster involves downloading many images. It is recommended to buy an Aliyun ECS server located in Hong Kong: after an image has been downloaded there, export it with docker save -o and import it on the cluster nodes with docker load, or push it to a personal image repository (a sketch of this flow follows the list).
3. Starting with version 1.8, the Kubernetes EFK manifests use a StatefulSet for elasticsearch-logging, but a bug prevents the elasticsearch-logging-0 pod from being created successfully. It is therefore recommended to use the pre-1.8 manifests, which use a ReplicationController.
4. To install EFK successfully, be sure to install kube-dns first, as described in the previous article (a quick check also follows the list).
5. The Elasticsearch and Kibana versions used during the EFK installation must be compatible. The images used here are as follows:
gcr.io/google_containers/elasticsearch:v2.4.1-2
gcr.io/google_containers/fluentd-elasticsearch:1.22
gcr.io/google_containers/kibana:v4.6.1-1
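As mentioned in point 2, images can be exported on the overseas host and imported on each cluster node. A minimal sketch of that flow, using the Elasticsearch image as an example (the tarball name is illustrative):

# on the host that can reach gcr.io (e.g. the Hong Kong ECS):
docker pull gcr.io/google_containers/elasticsearch:v2.4.1-2
docker save -o elasticsearch-v2.4.1-2.tar gcr.io/google_containers/elasticsearch:v2.4.1-2
# copy the tarball to each cluster node, then import it:
docker load -i elasticsearch-v2.4.1-2.tar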
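For point 4, a quick way to confirm kube-dns is healthy before applying the EFK manifests; this assumes the standard k8s-app=kube-dns label carried by the add-on manifests:

kubectl get pods -n kube-system -l k8s-app=kube-dns
# all listed pods should be Running before continuing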
Two: yaml files
efk-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efk
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: efk
subjects:
- kind: ServiceAccount
  name: efk
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
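Assuming the manifest is saved as efk-rbac.yaml, the account and binding can be created and inspected like this (a sketch, not part of the original add-on scripts):

kubectl create -f efk-rbac.yaml
kubectl get serviceaccount efk -n kube-system
kubectl get clusterrolebinding efk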
es-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: efk
      containers:
      - image: gcr.io/google_containers/elasticsearch:v2.4.1-2
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: es-persistent-storage
        emptyDir: {}
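Note that es-persistent-storage is an emptyDir despite its name, so Elasticsearch data is lost whenever a pod is deleted or rescheduled; for anything beyond a test cluster a real persistent volume would be safer. Once the controller is created, both replicas can be checked with the label selector from the manifest:

kubectl get pods -n kube-system -l k8s-app=elasticsearch-logging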
es-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
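The Service selects the Elasticsearch pods through the k8s-app label and forwards port 9200 to the named db port. A quick sanity check that it has picked up endpoints:

kubectl get svc elasticsearch-logging -n kube-system
kubectl get endpoints elasticsearch-logging -n kube-system
# the endpoints list should show port 9200 of both elasticsearch pods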
fluentd-es-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-es-v1.22
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v1.22
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v1.22
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573)
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: efk
      containers:
      - name: fluentd-es
        image: gcr.io/google_containers/fluentd-elasticsearch:1.22
        command:
        - '/bin/sh'
        - '-c'
        - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      tolerations:
      - key: "node.alpha.kubernetes.io/ismaster"
        effect: "NoSchedule"
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
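Note the nodeSelector in this DaemonSet: fluentd pods are only scheduled on nodes that carry the beta.kubernetes.io/fluentd-ds-ready=true label, so every node that should ship logs must be labeled first (<node-name> is a placeholder):

kubectl label node <node-name> beta.kubernetes.io/fluentd-ds-ready=true
# confirm one fluentd-es pod per labeled node:
kubectl get pods -n kube-system -l k8s-app=fluentd-es -o wide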
kibana-controller.yaml needs a special note here: the KIBANA_BASE_URL value below must be set to the empty string, because the default value causes problems with Kibana access.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      serviceAccountName: efk
      containers:
      - name: kibana-logging
        image: gcr.io/google_containers/kibana:v4.6.1-1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
        - name: "ELASTICSEARCH_URL"
          value: "http://elasticsearch-logging:9200"
        - name: "KIBANA_BASE_URL"
          value: ""
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
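Kibana 4.x optimizes its browser bundles on first start, which can take several minutes; the UI is unreachable until that finishes. Progress can be followed in the pod log (<kibana-pod-name> is a placeholder):

kubectl get pods -n kube-system -l k8s-app=kibana-logging
kubectl logs -f -n kube-system <kibana-pod-name>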
kibana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
Three: startup and verification
1. Create the resources
kubectl create -f .
2. Check the logs of the relevant pods with kubectl logs -f to confirm that they start normally. The kibana-logging-* pod takes some time to start.
3. Elasticsearch verification (a proxy can be created with kubectl proxy; a sketch follows step 4 below)
http://IP:PORT/_cat/nodes?v
host      ip        heap.percent ram.percent load node.role master name
10.1.88.4 10.1.88.4            9          87 0.45 d         m      elasticsearch-logging-v1-hnfv2
10.1.67.4 10.1.67.4            6          91 0.03 d         *      elasticsearch-logging-v1-zmtdl
http://IP:PORT/_cat/indices?v
health status index               pri rep docs.count docs.deleted store.size pri.store.size
green  open   logstash-2018.04.07   5   1        515            0      1.1mb        584.4kb
green  open   .kibana               1   1          2            0     22.2kb          9.7kb
green  open   logstash-2018.04.06   5   1      15364            0      7.3mb          3.6mb
4. Kibana verification (a port-forward sketch follows below)
http://IP:PORT/app/kibana#/discover?_g
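For step 3, a sketch of reaching Elasticsearch through the apiserver with kubectl proxy; the URL uses the service-proxy path convention of 1.7/1.8-era clusters (newer clusters use /api/v1/namespaces/.../services/.../proxy/ instead):

kubectl proxy --port=8001 &
curl 'http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cat/nodes?v'
curl 'http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cat/indices?v'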
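For step 4, since KIBANA_BASE_URL is empty, Kibana expects to be reached directly at its own root path rather than behind a proxy prefix. One simple way to get an IP:PORT for the URL above is a port-forward (<kibana-pod-name> is a placeholder):

kubectl port-forward -n kube-system <kibana-pod-name> 5601:5601
# then browse to http://127.0.0.1:5601/app/kibana#/discover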
Four: remarks
To build EFK successfully, pay attention to the following points:
1. Make sure kube-dns has been installed successfully
2. In this version, elasticsearch-logging uses a ReplicationController
3. The Elasticsearch and Kibana versions must be compatible
4. The KIBANA_BASE_URL value is set to ""
Thank you for reading! That concludes this article on how to use EFK in Kubernetes. I hope it has been helpful; if you found it useful, please share it with others.