How to deploy an EFK log system in Docker


This article explains how to deploy an EFK (Elasticsearch, Fluentd, Kibana) log system on a Kubernetes cluster. Many people run into questions here in day-to-day operations, so the steps below collect a simple, workable procedure. Hopefully it clears up the common doubts; follow along and try it yourself.

A complete K8s cluster should include the following six add-on components: kube-dns, an ingress controller, the metrics-server monitoring component, the dashboard, storage, and an EFK log system.

In production, the log system should ideally be deployed outside the K8s cluster, so that even if the entire cluster goes down we can still view the logs from before the outage in the external log system.

In addition, in production the log system's data should live on a dedicated persistent storage volume. In this test we turn persistence off for convenience.

1. Add the incubator repository (charts in this repository are development versions and may be unstable)

Visit https://hub.kubeapps.com/charts

[root@master ~]# helm repo list
NAME        URL
local       http://127.0.0.1:8879/charts
stable      https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
[root@master efk]# helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com
"incubator" has been added to your repositories
[root@master efk]# helm repo list
NAME        URL
local       http://127.0.0.1:8879/charts
stable      https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
incubator   https://kubernetes-charts-incubator.storage.googleapis.com
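After adding a repository it is usually worth refreshing the locally cached chart index; a small extra step not shown in the original transcript:

[root@master efk]# helm repo update    # re-downloads the index files for every configured repository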

2. Download the elasticsearch chart

[root@master efk]# helm fetch incubator/elasticsearch
[root@master efk]# ls
elasticsearch-1.10.2.tgz
[root@master efk]# tar -xvf elasticsearch-1.10.2.tgz

3. Disable the persistent storage volume (do not do this in production; we disable it here only for testing convenience)

[root@master efk]# vim elasticsearch/values.yaml

Change persistence: enabled: true to persistence: enabled: false. There are two places in the file that need this change.

With this change the chart no longer requests persistent volumes, and the pods fall back to local directories for the log data; the two affected blocks are sketched below.
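A minimal sketch of the two edited blocks in elasticsearch/values.yaml, assuming the chart's usual data/master layout (the exact key nesting can differ between chart versions):

data:
  persistence:
    enabled: false    # was true; data pods no longer claim a PersistentVolume
master:
  persistence:
    enabled: false    # was true; master pods also fall back to local storage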

4. Create a separate namespace

[root@master efk]# kubectl create namespace efk
namespace/efk created
[root@master efk]# kubectl get ns
NAME   STATUS   AGE
efk    Active   13s

5. Install elasticsearch in the efk namespace

[root@master efk]# helm install --name els1 --namespace=efk -f elasticsearch/values.yaml incubator/elasticsearch
NAME:   els1
LAST DEPLOYED: Thu Oct 18 01:59:15 2018
NAMESPACE: efk
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME                                        READY  STATUS   RESTARTS  AGE
els1-elasticsearch-client-58899f6794-gxn7x  0/1    Pending  0         0s
els1-elasticsearch-client-58899f6794-mmqq6  0/1    Pending  0         0s
els1-elasticsearch-data-0                   0/1    Pending  0         0s
els1-elasticsearch-master-0                 0/1    Pending  0         0s
==> v1/ConfigMap
NAME                DATA  AGE
els1-elasticsearch  4     1s
==> v1/Service
NAME                          TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
els1-elasticsearch-client     ClusterIP  10.103.147.142  <none>       9200/TCP  0s
els1-elasticsearch-discovery  ClusterIP  None            <none>       9300/TCP  0s
==> v1beta1/Deployment
NAME                       DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
els1-elasticsearch-client  2        0        0           0          0s
==> v1beta1/StatefulSet
NAME                       DESIRED  CURRENT  AGE
els1-elasticsearch-data    2        1        0s
els1-elasticsearch-master  3        1        0s
NOTES:
The elasticsearch cluster has been installed.
*** Please note that this chart has been deprecated and moved to stable.
    Going forward please use the stable version of this chart. ***
Elasticsearch can be accessed:
  * Within your cluster, at the following DNS name at port 9200:
    els1-elasticsearch-client.efk.svc
  * From outside the cluster, run these commands in the same shell:
    export POD_NAME=$(kubectl get pods --namespace efk -l "app=elasticsearch,component=client,release=els1" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
    kubectl port-forward --namespace efk $POD_NAME 9200

Note: --name els1 is the release name for the deployed chart; you can choose any name you like.

Above we installed elasticsearch online from the repository, using our edited values.yaml. Since we have already downloaded and unpacked the chart, we could also install it offline from the local directory, as follows:

[root@master efk]# ls
elasticsearch  elasticsearch-1.10.2.tgz
[root@master efk]# helm install --name els1 --namespace=efk ./elasticsearch

Note: ./elasticsearch is the directory into which the chart was unpacked.

After installation we can see the corresponding pods in the efk namespace. (When I first installed elasticsearch the pods would not start because the images could not be pulled from the elasticsearch site. I left it alone for two days, logged back in, and found the images had been downloaded on their own, which was rather amusing.)

[root@master efk]# kubectl get pods -n efk -o wide
NAME                                        READY  STATUS   RESTARTS  AGE  IP            NODE
els1-elasticsearch-client-78b54979c5-kzj7z  1/1    Running  2         1h   10.244.2.157  node2
els1-elasticsearch-client-78b54979c5-xn2gb  1/1    Running  1         1h   10.244.2.151  node2
els1-elasticsearch-data-0                   1/1    Running  0         1h   10.244.1.165  node1
els1-elasticsearch-data-1                   1/1    Running  0         1h   10.244.2.169  node2
els1-elasticsearch-master-0                 1/1    Running  0         1h   10.244.1.163  node1
els1-elasticsearch-master-1                 1/1    Running  0         1h   10.244.2.168  node2
els1-elasticsearch-master-2                 1/1    Running  0         57m  10.244.1.170  node1
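If the pods sit in Pending or ImagePullBackOff while the images are still being pulled, the pod events show what is going on; a quick sketch using one of the pod names above:

# print the events (including image-pull progress or errors) for one pod
kubectl describe pod els1-elasticsearch-master-0 -n efk
# or keep watching the pod list until everything is Running
kubectl get pods -n efk -w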

Check the installed release:

[root@master efk]# helm list
NAME  REVISION  UPDATED                   STATUS    CHART                 NAMESPACE
els1  1         Thu Oct 18 23:11:54 2018  DEPLOYED  elasticsearch-1.10.2  efk

View the status of els1:

[root@k8s-master1 ~]# helm status els1
  * Within your cluster, at the following DNS name at port 9200:
    els1-elasticsearch-client.efk.svc        # this is the service hostname of els1
  * From outside the cluster, run these commands in the same shell:
    export POD_NAME=$(kubectl get pods --namespace efk -l "app=elasticsearch,component=client,release=els1" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
    kubectl port-forward --namespace efk $POD_NAME 9200
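As a usage sketch (not part of the original output), once the port-forward above is running in one shell, Elasticsearch answers locally in another:

# the forwarded port maps 127.0.0.1:9200 to the client pod
curl http://127.0.0.1:9200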

cirros is a tiny image designed for testing virtual environments; it can quickly boot a KVM virtual machine, is only a few megabytes in total, and ships with a fairly complete set of tools. Here we just use it as a throwaway client pod.

Let's run cirros:

[root@k8s-master1 ~]# kubectl run cirror-$RANDOM --rm -it --image=cirros -- /bin/sh
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ #
/ # nslookup els1-elasticsearch-client.efk.svc
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      els1-elasticsearch-client.efk.svc
Address 1: 10.103.105.170 els1-elasticsearch-client.efk.svc.cluster.local

--rm: delete the pod automatically when we exit.

-it: attach an interactive terminal.

Above we can see the IP address resolved from the service name els1-elasticsearch-client.efk.svc.

Now let's hit http://els1-elasticsearch-client.efk.svc:9200:

/ # curl els1-elasticsearch-client.efk.svc:9200
curl: (6) Couldn't resolve host 'els1-elasticsearch-client.efk.svc'
/ #
/ # curl els1-elasticsearch-client.efk.svc.cluster.local:9200
{
  "name" : "els1-elasticsearch-client-b898c9d47-5gwzq",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "RFiD2ZGWSAqM2dF6wy24Vw",
  "version" : {
    "number" : "6.4.2",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "04711c2",
    "build_date" : "2018-09-26T13:34:09.098244Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Look at the available _cat endpoints:

/ # curl els1-elasticsearch-client.efk.svc.cluster.local:9200/_cat
=^.^=
/_cat/allocation
/_cat/shards
/_cat/shards/{index}
/_cat/master
/_cat/nodes
/_cat/tasks
/_cat/indices
/_cat/indices/{index}
/_cat/segments
/_cat/segments/{index}
/_cat/count
/_cat/count/{index}
/_cat/recovery
/_cat/recovery/{index}
/_cat/health
/_cat/pending_tasks
/_cat/aliases
/_cat/aliases/{alias}
/_cat/thread_pool
/_cat/thread_pool/{thread_pools}
/_cat/plugins
/_cat/fielddata
/_cat/fielddata/{fields}
/_cat/nodeattrs
/_cat/repositories
/_cat/snapshots/{repository}
/_cat/templates

See how many nodes there are:

/ # curl els1-elasticsearch-client.efk.svc.cluster.local:9200/_cat/nodes
10.244.2.104 23 95 0 0.00 0.02 0.05 di -  els1-elasticsearch-data-0
10.244.4.83  42 99 1 0.01 0.11 0.13 mi *  els1-elasticsearch-master-1
10.244.4.81  35 99 1 0.01 0.11 0.13 i  -  els1-elasticsearch-client-b898c9d47-5gwzq
10.244.4.84  31 99 1 0.01 0.11 0.13 mi -  els1-elasticsearch-master-2
10.244.2.105 35 95 0 0.00 0.02 0.05 i  -  els1-elasticsearch-client-b898c9d47-shqd2
10.244.4.85  18 99 1 0.01 0.11 0.13 di -  els1-elasticsearch-data-1
10.244.4.82  40 99 1 0.01 0.11 0.13 mi -  els1-elasticsearch-master-0
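Cluster health can be checked the same way through the _cat/health endpoint listed earlier; a typical check (not shown in the original), run from the same cirros pod:

/ # curl "els1-elasticsearch-client.efk.svc.cluster.local:9200/_cat/health?v"

The status column should read green once all master and data nodes have joined.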

6. Install fluentd in the efk namespace

[root@k8s-master1 ~]# helm fetch incubator/fluentd-elasticsearch
[root@k8s-master1 ~]# tar -xvf fluentd-elasticsearch-0.7.2.tgz
[root@k8s-master1 ~]# cd fluentd-elasticsearch
[root@k8s-master1 fluentd-elasticsearch]# vim values.yaml

1) Change host: 'elasticsearch-client' to host: 'els1-elasticsearch-client.efk.svc.cluster.local', which tells fluentd where to find our elasticsearch service.

2) Change the tolerations so that the fluentd pods can also be scheduled on the K8s master, which lets the master node's logs be collected. Change

tolerations: {}
  # - key: node-role.kubernetes.io/master
  #   operator: Exists
  #   effect: NoSchedule

to

tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

3) Change the annotations so that prometheus can scrape fluentd's metrics. Change

annotations: {}
  # prometheus.io/scrape: "true"
  # prometheus.io/port: "24231"

to

annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "24231"

and change

service: {}
  # type: ClusterIP
  # ports:
  #   - name: "monitor-agent"
  #     port: 24231

to

service:
  type: ClusterIP
  ports:
    - name: "monitor-agent"
      port: 24231

so that prometheus can reach fluentd's monitoring endpoint through a service on port 24231.

Start installing fluentd:

[root@k8s-master1 fluentd-elasticsearch]# helm install --name fluentd1 --namespace=efk -f values.yaml ./
[root@k8s-master1 fluentd-elasticsearch]# helm list
NAME      REVISION  UPDATED                   STATUS    CHART                        NAMESPACE
els1      1         Sun Nov  4 09:37:35 2018  DEPLOYED  elasticsearch-1.10.2         efk
fluentd1  1         Tue Nov  6 09:28:42 2018  DEPLOYED  fluentd-elasticsearch-0.7.2  efk
[root@k8s-master1 fluentd-elasticsearch]# kubectl get pods -n efk
NAME                                       READY  STATUS   RESTARTS  AGE
els1-elasticsearch-client-b898c9d47-5gwzq  1/1    Running  0         47h
els1-elasticsearch-client-b898c9d47-shqd2  1/1    Running  0         47h
els1-elasticsearch-data-0                  1/1    Running  0         47h
els1-elasticsearch-data-1                  1/1    Running  0         45h
els1-elasticsearch-master-0                1/1    Running  0         47h
els1-elasticsearch-master-1                1/1    Running  0         45h
els1-elasticsearch-master-2                1/1    Running  0         45h
fluentd1-fluentd-elasticsearch-9k456       1/1    Running  0         2m28s
fluentd1-fluentd-elasticsearch-dcnsc       1/1    Running  0         2m28s
fluentd1-fluentd-elasticsearch-p5h88       1/1    Running  0         2m28s
fluentd1-fluentd-elasticsearch-sdvn9       1/1    Running  0         2m28s
fluentd1-fluentd-elasticsearch-ztm9s       1/1    Running  0         2m28s
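Because we added the master toleration, one fluentd pod should also be scheduled on the master node; a quick way to confirm this (the NODE column values will of course differ per cluster):

# show which node each fluentd pod landed on
kubectl get pods -n efk -o wide | grep fluentd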

7. Install kibana in the efk namespace

Note that the kibana version must match the elasticsearch version, otherwise the two cannot work together.
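A quick way to confirm the elasticsearch version before choosing a kibana image is to read it from the root endpoint (run from inside the cluster, for example the cirros pod above). The image.tag key is where the stable/kibana chart usually pins the version, so treat the exact key as an assumption for your chart version:

# elasticsearch reports "number" : "6.4.2" in this setup
curl els1-elasticsearch-client.efk.svc.cluster.local:9200
# then pin a matching kibana image in kibana/values.yaml, e.g.:
#   image:
#     tag: "6.4.2"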

[root@k8s-master1 ~]# helm fetch stable/kibana
[root@k8s-master1 ~]# ls
kibana-0.2.2.tgz
[root@k8s-master1 ~]# tar -xvf kibana-0.2.2.tgz
[root@k8s-master1 ~]# cd kibana
[root@k8s-master1 kibana]# vim values.yaml

Modify ELASTICSEARCH_URL to point at the els1 service; its domain name can be found in the output of helm status els1:

[root@k8s-master1 ~]# helm status els1
  * Within your cluster, at the following DNS name at port 9200:
    els1-elasticsearch-client.efk.svc

In addition, change

service:
  type: ClusterIP
  externalPort: 443
  internalPort: 5601

in values.yaml to

service:
  type: NodePort
  externalPort: 443
  internalPort: 5601

so that kibana is exposed on a NodePort.

Start deploying kibana:

[root@k8s-master1 kibana]# helm install --name=kib1 --namespace=efk -f values.yaml ./
==> v1/Service
NAME         TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)        AGE
kib1-kibana  NodePort  10.108.188.4  <none>       443:31865/TCP  0s
[root@k8s-master1 kibana]# kubectl get svc -n efk
NAME                          TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)        AGE
els1-elasticsearch-client     ClusterIP  10.103.105.170  <none>       9200/TCP       2d22h
els1-elasticsearch-discovery  ClusterIP  None            <none>       9300/TCP       2d22h
kib1-kibana                   NodePort   10.108.188.4    <none>       443:31865/TCP  4m27s
[root@k8s-master1 kibana]# kubectl get pods -n efk
NAME                                       READY  STATUS   RESTARTS  AGE
els1-elasticsearch-client-b898c9d47-5gwzq  1/1    Running  0         2d22h
els1-elasticsearch-client-b898c9d47-shqd2  1/1    Running  0         2d22h
els1-elasticsearch-data-0                  1/1    Running  0         22h
els1-elasticsearch-data-1                  1/1    Running  0         22h
els1-elasticsearch-master-0                1/1    Running  0         2d22h
els1-elasticsearch-master-1                1/1    Running  0         2d19h
els1-elasticsearch-master-2                1/1    Running  0         2d19h
fluentd1-fluentd-elasticsearch-9k456       1/1    Running  0         22h
fluentd1-fluentd-elasticsearch-dcnsc       1/1    Running  0         22h
fluentd1-fluentd-elasticsearch-p5h88       1/1    Running  0         22h
fluentd1-fluentd-elasticsearch-sdvn9       1/1    Running  0         22h
fluentd1-fluentd-elasticsearch-ztm9s       1/1    Running  0         22h
kib1-kibana-68f9fbfd84-pt2dt               1/1    Running  0         9m59s

(If the kibana image cannot be pulled right away, just wait a while; it will come down eventually.)

Then open a browser and go to the node IP and NodePort:

https://172.16.22.201:31865

However, my page came up with an error. The fix is simply the following:

[root@k8s-master1 ~]# kubectl get pods -n efk | grep ela
els1-elasticsearch-client-b898c9d47-8pntr  1/1  Running  1  43h
els1-elasticsearch-client-b898c9d47-shqd2  1/1  Running  1  5d13h
els1-elasticsearch-data-0                  1/1  Running  0  117m
els1-elasticsearch-data-1                  1/1  Running  0  109m
els1-elasticsearch-master-0                1/1  Running  1  2d11h
els1-elasticsearch-master-1                1/1  Running  0  14h
els1-elasticsearch-master-2                1/1  Running  0  14h
[root@k8s-master1 ~]# kubectl exec -it els1-elasticsearch-client-b898c9d47-shqd2 -n efk -- /bin/bash

Then, inside the client pod, delete the .kibana index in elasticsearch:

[root@els1-elasticsearch-client-b898c9d47-shqd2 elasticsearch]# curl -XDELETE http://els1-elasticsearch-client.efk.svc:9200/.kibana

With that, our EFK log collection system is up and running.

This concludes the walkthrough of deploying the EFK log system. Hopefully it has resolved your doubts; combining theory with practice is the best way to learn, so go and try it yourself.
