
Collection of application container log files based on Docker


1 Background introduction

Middleware applications running in Docker containers benefit from how quickly containers can be created and destroyed, which brings great convenience and flexibility. However, whether a container is destroyed deliberately, hangs, or is brought down by a crash of the application inside it, the data it produced disappears with the end of its life cycle. This makes Docker very suitable for deploying stateless services, but not for stateful ones. The problem arises when an operations colleague needs to analyse an application failure: the stateful application logs inside the container have already been destroyed along with the container. With the logs gone, there is no material for fault analysis and localisation, and the operations team is left in a blind, passive position. On the other hand, in a large container cluster, persisting container application logs directly to local disk directories leaves the log directory structure in a mess and exposes the log files to being overwritten.

2 Log rollover requirements

1) New log content from one or more business applications inside the container is stored on a remote log server under a defined directory structure.

2) The logs are stored using the following directory structure:

/logs/app_id/service_id/container_id/app_name/xxx.log

3 Introduction to tools

1) Filebeat is a log file shipping tool. After the client is installed on a server, filebeat monitors the log directories or the specified log files, tracks and reads them (following changes in the files and continuously reading new content), and forwards the information to logstash for storage.

2) Logstash is a lightweight log collection and processing framework. It can conveniently collect scattered, diverse logs, process them with custom rules, and then forward them to a specified destination.

4 Logstash log server

The log server that runs logstash should meet the following configuration requirements.

System: CentOS 7.0 x86_64 or above
CPU: 4 cores
Memory: 16 GB
Storage: more than 500 GB of external storage

Logstash also has JDK requirements; it is recommended to run logstash on JDK 1.8.0 or later.

4.1 Install JDK software

Download the JDK 1.8 package directly from Oracle and install it:

tar xvf jdk1.8.0_131.tar.gz -C /usr

Then configure the JDK environment variables with vi /etc/profile:

JAVA_HOME=/usr/jdk1.8.0_131
CLASSPATH=.:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH

Run source /etc/profile to make the environment variables take effect.
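To confirm that the JDK is installed and the environment variables are in place, you can check the version; it should report 1.8.0_131:

java -version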

4.2 Install logstash software

Software download

https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.tar.gz

Software installation

Execute the command tar -xvf /opt/logstash-6.0.0.tar.gz to extract logstash.

Start logstash

/opt/logstash/bin/logstash -f /opt/logstash/logstash.conf

Create a new /logs directory on the log server to store the large volume of application container logs.
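The logstash.conf passed to logstash above is not shown in the article. Purely as a reference, the following is a minimal sketch of what it could contain, assuming a beats input on port 5044 (the port the filebeat configuration in section 5.4 points to) and a file output that lays the events out under /logs using the app_id, service_id and host_name fields attached by filebeat; the author's actual configuration may differ.

input {
  beats {
    # filebeat in the application containers ships events to this port
    port => 5044
  }
}

output {
  file {
    # write each event under /logs/<app_id>/<service_id>/<container hostname>/
    # ("app.log" is a placeholder file name used only in this sketch)
    path => "/logs/%{app_id}/%{service_id}/%{host_name}/app.log"
    codec => line { format => "%{message}" }
  }
}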

5 Filebeat software installation

Filebeat is packaged, together with the nginx and php-fpm software, directly into a new base image, so we need to know in advance which application log files have to be extracted. The application logs that need to be extracted from the container are the following:

Nginx log path inside the container:

/var/log/nginx

php-fpm log path inside the container:

/var/opt/remi/php70/log/php-fpm

Download filebeat software

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.0-linux-x86_64.tar.gz

5.1 Install nginx, php-fpm and Filebeat software with a Dockerfile script

The Dockerfile below standardizes the nginx and php-fpm application logs, maps the log files into the /logs directory, and installs the filebeat software at the same time.

FROM centos
MAINTAINER jaymarco@shsnc.com

# Install system libraries
RUN rpm -ivh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && \
    rpm -ivh http://rpms.remirepo.net/enterprise/remi-release-7.rpm && \
    yum install -y php70-php-gd.x86_64 php70-php-mcrypt.x86_64 php70-php-fpm.x86_64 php70-php-pecl-redis.x86_64 python-setuptools \
        php70-php-mbstring.x86_64 php70-php-snmp.x86_64 php70-php-pecl-zip.x86_64 php70-php-xml.x86_64 \
        php70-php-mysqlnd.x86_64 php70-php-pecl-mysql.x86_64 gcc gcc-c++ automake libtool make cmake openssl openssl-devel pcre-devel && \
    yum clean all

# Install nginx
RUN rpm -ivh http://nginx.org/packages/centos/7/x86_64/RPMS/nginx-1.10.3-1.el7.ngx.x86_64.rpm
COPY nginx.conf /etc/nginx/nginx.conf

# Set php www.conf config
RUN sed -e 's/127.0.0.1:9000/9000/' \
        -e '/allowed_clients/d' \
        -e '/catch_workers_output/s/^;//' \
        -e '/error_log/d' \
        -e 's/;listen.backlog = 511/listen.backlog = 1024/' \
        -e 's/pm.max_children = 50/pm.max_children = 300/' \
        -e 's/pm.start_servers = 5/pm.start_servers = 30/' \
        -e 's/pm.min_spare_servers = 5/pm.min_spare_servers = 30/' \
        -e 's/pm.max_spare_servers = 35/pm.max_spare_servers = 60/' \
        -e 's/;pm.max_requests = 500/pm.max_requests = 10240/' \
        -e 's/;request_slowlog_timeout = 0/request_slowlog_timeout = .../' \
        -e 's/.../.../' \
        -e 's/.../.../' \
        -i /etc/opt/remi/php70/php-fpm.d/www.conf && \
    sed -e 's/max_execution_time = 30/max_execution_time = 150/' \
        -e 's/max_input_time = 60/max_input_time = 300/' \
        -i /etc/opt/remi/php70/php.ini && \
    sed -e 's/daemonize = yes/daemonize = no/' \
        -e 's/;rlimit_files = 1024/rlimit_files = 65535/' \
        -i /etc/opt/remi/php70/php-fpm.conf && \
    cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
    echo 'Asia/Shanghai' > /etc/timezone

RUN easy_install supervisor && \
    mkdir -p /var/log/supervisor && \
    mkdir -p /var/run/sshd && \
    mkdir -p /var/run/supervisord

# Add supervisord conf
ADD supervisord.conf /etc/supervisord.conf

# Copy start script
ADD startserv.sh /startserv.sh
RUN chmod +x /startserv.sh

# Set port
EXPOSE 9000

# For collecting logs, install the filebeat plugin
RUN mkdir /logs
RUN ln -s /var/log/nginx /logs/
RUN ln -s /var/opt/remi/php70/log/php-fpm /logs
ADD filebeat-6.0.0-linux-x86_64.tar.gz /var/log/
RUN chmod +x /var/log/filebeat/filebeat

# Start web server
# ENTRYPOINT ["/var/log/filebeat/init.sh"]
CMD ["/startserv.sh"]

5.2 Nginx parameter configuration optimization

The following configuration tunes some performance-related parameters of the nginx service and is packaged into the base image.

user nginx;
worker_processes 2;
worker_cpu_affinity auto;
error_log /var/log/nginx/error.log error;
worker_rlimit_nofile 10240;
worker_priority -2;

events {
    use epoll;
    accept_mutex on;
    worker_connections 10240;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    'upstream_addr: "$upstream_addr" '
                    'upstream_cache_status: "$upstream_cache_status" '
                    'upstream_status: "$upstream_status"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    sendfile_max_chunk 512k;
    aio threads;
    directio 4m;
    keepalive_timeout 65;
    open_log_file_cache max=1000 inactive=20s valid=1m min_uses=2;

    gzip on;
    gzip_comp_level 4;
    gzip_disable "MSIE [1-6].";
    gzip_min_length 10k;
    gzip_http_version 1.0;
    gzip_types text/plain text/css text/xml text/javascript application/xml application/x-javascript application/xml+rss application/javascript application/json;
    gzip_vary on;

    client_max_body_size 2m;

    include /etc/nginx/conf.d/*.conf;
}

5.3 supervisord.conf configuration parameters

Only one foreground process is normally run in a container. Since several processes are needed here, we use the supervisord process management tool to start and monitor them. The filebeat startup command is added at the end of the configuration below.

[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)

[supervisord]
logfile=/tmp/supervisord.log ; (main logfile; default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation; default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups; default 10)
loglevel=info ; (loglevel; default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile; default supervisord.pid)
nodaemon=true ; (start in foreground if true; default false)
minfds=1024 ; (min. avail startup file descriptors; default 1024)
minprocs=200 ; (min. avail process descriptors; default 200)
user=root

; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket

[program:php-fpm]
command=/opt/remi/php70/root/usr/sbin/php-fpm -F

[program:nginx]
command=/usr/sbin/nginx -c /etc/nginx/nginx.conf

[program:filebeat]
command=/var/log/filebeat/filebeat -c /var/log/filebeat/filebeat.yml

5.4 startserv.sh startup script

The script below mainly generates the filebeat.yml file, so that the filebeat program loads the corresponding app_id, service_id, host_name and logstash parameter values and connects to the logstash service.

#!/bin/sh
ip=`ip a | grep -w inet | grep -v -w lo | awk '{print $2}' | awk -F/ '{print $1}'`
LOGS="/logs/"
#FILE=`ls -l $LOGS | awk '/^d/ {print $NF}'`
FILE=`ls $LOGS`
HOME="/var/log/filebeat"
BAK="$HOME/bak"
CONF="$HOME/filebeat.yml"
HOST_NAME=`hostname`

cp $BAK $CONF
for name in $FILE
do
    sed -i "/paths/a\- $LOGS$name/*.log" $CONF
done
sed -i "s/#APP_ID#/$APP_ID/g" $CONF
sed -i "s/#ip#/$ip/g" $CONF
sed -i "s/#SERVICE_ID#/$SERVICE_ID/g" $CONF
sed -i "s/#HOST_NAME#/$HOST_NAME/g" $CONF
sed -i "s/#LOGSTASH_HOST#/$LOGSTASH_HOST/g" $CONF

/usr/bin/supervisord -n -c /etc/supervisord.conf

filebeat.yml example:

filebeat:
  spool_size: 10240
  idle_timeout: "10s"
  prospectors:
    -
      paths:
        - /logs/php-fpm/*.log
        - /logs/nginx/*.log
      fields:
        app_id: "6db116df"
        service_id: "_6db116df_64a00233"
        host_name: "139b3e343614"
      fields_under_root: true
      tail_files: true
      document_type: "172.17.0.2"
processors:
- drop_fields:
    fields: ["input_type", "beat", "offset"]
output.logstash:
  hosts: ["XX.XX.XX.XX:5044"]
  worker: 2
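The $BAK template (/var/log/filebeat/bak) that startserv.sh copies is not shown in the article. Reconstructed from the rendered example above, it would presumably look roughly like the following sketch, with the #APP_ID#, #SERVICE_ID#, #HOST_NAME#, #ip# and #LOGSTASH_HOST# placeholders that the script replaces; treat it as an assumption rather than the original file.

filebeat:
  spool_size: 10240
  idle_timeout: "10s"
  prospectors:
    -
      # startserv.sh appends one "- /logs/<dir>/*.log" entry after this line
      # for every directory found under /logs
      paths:
      fields:
        app_id: "#APP_ID#"
        service_id: "#SERVICE_ID#"
        host_name: "#HOST_NAME#"
      fields_under_root: true
      tail_files: true
      document_type: "#ip#"
processors:
- drop_fields:
    fields: ["input_type", "beat", "offset"]
output.logstash:
  hosts: ["#LOGSTASH_HOST#:5044"]
  worker: 2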

Package the application image

docker build -t acitivty_front:6.0-201711221420 .

5.5 Launch the application container

The operations above have been packaged into a new application image, acitivty_front:6.0-201711221420. Next, start the application container with the docker run command.

docker run -itd -p 80:80 -e APP_ID=6db116df -e SERVICE_ID=_6db116df_64a00233 -e LOGSTASH_HOST= acitivty_front:6.0-201711221420
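The LOGSTASH_HOST value is left blank in the command above. Purely for illustration, assuming the logstash server from section 4 is reachable at 10.0.0.5 (a hypothetical address), the full command would look like this:

# 10.0.0.5 is a hypothetical logstash address used only for illustration
docker run -itd -p 80:80 \
    -e APP_ID=6db116df \
    -e SERVICE_ID=_6db116df_64a00233 \
    -e LOGSTASH_HOST=10.0.0.5 \
    acitivty_front:6.0-201711221420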

6 Extracted log results

After the application container is brought up, both the business application and the filebeat plug-in are started, and the container automatically collects and pushes the application logs to the log server. The following shows the application logs extracted from the container, as seen on the log server.

Logstash log directory structure on the log server.

Logstash on the log server receiving the logs.
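As an illustration only (not a capture from the original environment), combining the directory structure required in section 2 with the field values from the filebeat.yml example, the collected logs would end up on the log server under paths such as the following; the exact file names depend on the nginx and php-fpm logging configuration.

/logs/6db116df/_6db116df_64a00233/139b3e343614/nginx/access.log
/logs/6db116df/_6db116df_64a00233/139b3e343614/nginx/error.log
/logs/6db116df/_6db116df_64a00233/139b3e343614/php-fpm/error.log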
