How to implement PaaS with nginx + Docker

This article mainly introduces how to implement PaaS with nginx + Docker. Many people run into doubts about this in daily operation, so the editor has consulted various materials and put together a simple, easy-to-follow procedure. I hope it helps answer those doubts. Please follow along and study!

Prerequisites

The installation instructions have been updated. Global variables are centralized at the top of the script. Pay attention to the deps prepare section: bridge-utils is used to manage the openfaas0 virtual bridge created by CNI, and docker.io is installed because, on Ubuntu, it installs both containerd 1.3.3 and runc.

Since Docker 1.11, containers are no longer started directly by the Docker daemon; Docker is assembled from several components such as containerd and runc. If you search around you will find that docker-containerd is containerd and docker-runc is runc. containerd is the daemon that actually manages containers, and runc is what actually executes them. Why split Docker into pieces? To keep Docker from remaining a monolith, its implementation was broken into standardized modules, and standardization means a module can be replaced by another implementation. In practice this gives an LLVM-style, component-based architecture (it is hard to separate concerns later if the source is not split from the beginning). It is also about distribution: much like git, Docker is split into a client and a server, so the same architecture works both locally and against a remote daemon.
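A quick way to see this split on a host where docker.io is installed (a minimal sketch; exact binaries and output vary by system):

# the docker CLI, dockerd, containerd, ctr and runc ship as separate binaries
which dockerd containerd containerd-shim ctr runc
# dockerd and containerd run as separate daemons
ps -ef | grep -E 'dockerd|containerd' | grep -v grep
# ctr talks to containerd directly, bypassing the docker client/server pair entirely
ctr version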

And why docker.io instead of docker-ce? On some systems I found that installing docker-ce before containerd causes problems: "Cannot connect to the Docker daemon at ...". That Docker build is not compatible with the containerd already on the system, so install the Ubuntu-maintained docker.io package instead, which resolves its own dependencies on containerd and runc by default (we will talk about replacing and upgrading containerd later). We pick the version reported by apt-cache madison docker.io after adding the latest cn Ubuntu deb/deb-src sources: sudo apt install docker.io=19.03.6-0ubuntu1~18.04.1.
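Cleaned up, the version-pinning steps described above look roughly like this (the cn mirror sources themselves are added in the deps prepare section of the script below):

apt-get update
# list the docker.io versions the configured ubuntu mirrors offer
sudo apt-cache madison docker.io
# pin the version that pulls in containerd 1.3.3 and runc as dependencies
sudo apt install docker.io=19.03.6-0ubuntu1~18.04.1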

#!/bin/bash
## currently tested under ubuntu1804 64bit; easy to port to centos (replace apt-get and /etc/systemd/system)
## How to use this script on a cloud host:
## su root and then: ./panel.sh -d 'your domain to be binded' -m 'email you use to pass to certbot' -p 'your inital passwords'
## (email and passwords are not necessary; feed email only if you encounter the "too many requests of a given type" error)
## (no https/http prefix needed; the domain should already point to the right ip so that certbot can work later)
export DOMAIN_NAME=''
export EMAIL_NAME='some@some.com'
export PANEL_TYPE='0'
export PASS_INIT='5cTWUsD75ZgL3VJHdzpHLfcvJyOrUnza1jr6KXry5pXUUNmGtqmCZU4yGoc9yW4'
MIRROR_PATH="http://default-8g95m46n2bd18f80.service.tcloudbase.com/d/demos"
# the pai backend
SERVER_PATH=${MIRROR_PATH}/pai/pai-agent/stable/pai_agent_framework
PAI_MATE_SERVER_PATH=${MIRROR_PATH}/pai/pai-mate/stable/install
# the openfaas backend
OPENFAAS_PATH=${MIRROR_PATH}/faasd
# the code-server web ide
CODE_SERVER_PATH=${MIRROR_PATH}/codeserver
# install dir
INSTALL_DIR="/root/.local"
CONFIG_DIR="/root/.config"
# data dir, only for pai and common data
DATA_DIR="/data"

while [[ $# -ge 1 ]]; do
    case $1 in
        -d|--domain)
            shift
            DOMAIN_NAME="$1"
            shift
            ;;
        -m|--mail)
            shift
            EMAIL_NAME="$1"
            shift
            ;;
        -t|--paneltype)
            shift
            PANEL_TYPE="$1"
            shift
            ;;
        -p|--passinit)
            shift
            PASS_INIT="$1"
            shift
            ;;
        *)
            if [[ "$1" != 'error' ]]; then
                echo -ne "\nInvaild option: '$1'\n\n"
            fi
            echo -ne "Usage (args are self explained):\n\tbash $(basename $0)\t-d/--domain\n\t\t\t-m/--mail\n\t\t\t-t/--paneltype\n\t\t\t-p/--passinit\t\n"
            exit 1
            ;;
    esac
done

[[ "$EUID" -ne '0' ]] && echo "Error: This script must be run as root!" && exit 1

beginTime=$(date +%s)

# write log with time
writeProgressLog() {
    echo "[`date '+%Y-%m-%d %H:%M:%S'`] [$1] [$2]"
    echo "[`date '+%Y-%m-%d %H:%M:%S'`] [$1] [$2]" >> ${DATA_DIR}/h6/access.log
}

# update install progress
updateProgress() {
    progress=$1
    message=$2
    status=$3
    installType=$4
    # echo "=== $installType progress ==="
    echo "=== $installType progress ===" >> ${DATA_DIR}/h6/access.log
    writeProgressLog "installType" $installType
    writeProgressLog "progress" $progress
    writeProgressLog "status" $status
    echo $message >> ${DATA_DIR}/h6/access.log
    if [ $status == "0" ]; then
        code=0
        message="success"
    else
        code=1
        message="$installType error"
        # exit 1
    fi
    cat <<EOF > ${DATA_DIR}/h6/progress.json
{"code": $code, "message": "$message", "data": {"installType": "$installType", "progress": $progress}}
EOF
    if [ $status != "0" ]; then
        echo $message >> ${DATA_DIR}/h6/installErr.log
    fi
}

echo "=== begin... ==="
echo "PANEL_TYPE: ${PANEL_TYPE}"
echo "DOMAIN_NAME: ${DOMAIN_NAME}"
echo "SERVER_PATH: ${MIRROR_PATH}"
echo "OPENFAAS_PATH: ${OPENFAAS_PATH}"
echo "PAI_MATE_SERVER_PATH: ${PAI_MATE_SERVER_PATH}"
echo "CODE_SERVER_PATH: ${CODE_SERVER_PATH}"
echo "INSTALL_DIR: ${INSTALL_DIR}"

rm -rf ${DATA_DIR}/h6
mkdir -p ${DATA_DIR}/h6
rm -rf ${DATA_DIR}/h6/index.json
rm -rf ${DATA_DIR}/logs
mkdir -p ${DATA_DIR}/logs
mkdir -p ${INSTALL_DIR}/bin
mkdir -p ${CONFIG_DIR}

echo "=== deps prepare progress (this may take long...) ==="
msg=$(
    # begin
    if [ $PANEL_TYPE == "0" ]; then
        apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 3B4FE6ACC0B21F32
        echo deb http://cn.archive.ubuntu.com/ubuntu/ bionic main restricted universe multiverse >> /etc/apt/sources.list
        echo deb http://cn.archive.ubuntu.com/ubuntu/ bionic-security main restricted universe multiverse >> /etc/apt/sources.list
        echo deb http://cn.archive.ubuntu.com/ubuntu/ bionic-updates main restricted universe multiverse >> /etc/apt/sources.list
        echo deb http://cn.archive.ubuntu.com/ubuntu/ bionic-proposed main restricted universe multiverse >> /etc/apt/sources.list
        echo deb http://cn.archive.ubuntu.com/ubuntu/ bionic-backports main restricted universe multiverse >> /etc/apt/sources.list
        apt-get update
        # docker.io pulls in containerd and runc; bridge-utils is for the openfaas0 bridge
        apt-get install docker.io=19.03.6-0ubuntu1~18.04.1 --no-install-recommends bridge-utils -y
        apt-get install nginx python git python-certbot-nginx -y
    else
        apt-get update && apt-get install git nginx gcc python3.6 python3-pip python3-virtualenv python-certbot-nginx golang -y
    fi 2>&1
)
status=$?
updateProgress 30 "$msg" "$status" "deps prepare"

Basic component code: nginx front end and Docker back end

This part of the configuration is hard-coded and simply forwards requests, but the point is how to lay out the configuration according to your specific forwarding needs. The rule is: if the proxy server address (the one after proxy_pass) contains a URI, that URI replaces the part of the request URI matched by the location; if the proxy_pass address has no URI, the request URI is forwarded to the upstream unchanged.
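Taken from the configuration below, the two cases look like this (trailing URI versus none):

location /faasd/ {
    proxy_pass http://localhost:8080/ui/;   # has a URI: /faasd/foo is rewritten to /ui/foo upstream
}
location /pai/ {
    proxy_pass http://localhost:5523;       # no URI: /pai/foo is forwarded as /pai/foo unchanged
}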

confignginx() {
    echo "=== certbot renew+start+init progress ==="
    systemctl enable nginx.service
    systemctl start nginx
    # cp -f /lib/systemd/system/certbot.service /etc/systemd/system/certbot-renew.service
    # echo '[Install]' >> /etc/systemd/system/certbot-renew.service
    # echo 'WantedBy=multi-user.target' >> /etc/systemd/system/certbot-renew.service
    # cp -f /lib/systemd/system/certbot.timer /etc/systemd/system/certbot-renew.timer
    # sed -i "s/renew/renew --nginx/g" /etc/systemd/system/certbot-renew.service
    rm -rf /etc/systemd/system/certbot-renew.service
    cat <<'EOF' > /etc/systemd/system/certbot-renew.service
[Unit]
Description=Certbot
Documentation=file:///usr/share/doc/python-certbot-doc/html/index.html
Documentation=https://letsencrypt.readthedocs.io/en/latest/

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot -q renew
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF
    rm -rf /etc/systemd/system/certbot-renew.timer
    cat <<'EOF' > /etc/systemd/system/certbot-renew.timer
[Unit]
Description=Run certbot twice daily

[Timer]
OnCalendar=*-*-* 00,12:00:00
RandomizedDelaySec=43200
Persistent=true

[Install]
WantedBy=timers.target
EOF
    msg=$(
        # first time renew
        certbot certonly --quiet --standalone --agree-tos --non-interactive -m ${EMAIL_NAME} -d ${DOMAIN_NAME} --pre-hook "systemctl stop nginx"
        systemctl daemon-reload
        systemctl enable certbot-renew.service
        systemctl start certbot-renew.service
        systemctl start certbot-renew.timer 2>&1
    )
    status=$?
    updateProgress 40 "$msg" "$status" "certbot renew+start+init"

    echo "=== nginx reconfig progress ==="
    # add nginx conf; DOMAIN_NAME and PORT are placeholders replaced by sed below
    rm -rf /etc/nginx/conf.d/default.conf
    cat <<'EOF' > /etc/nginx/conf.d/default.conf
server {
    listen 443 http2 ssl;
    listen [::]:443 http2 ssl;
    server_name DOMAIN_NAME;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/DOMAIN_NAME/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/DOMAIN_NAME/privkey.pem;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;    # the original cipher list was lost in formatting; placeholder value
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass http://localhost:PORT;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
        proxy_set_header Accept-Encoding gzip;
    }
    location /pai/ {
        proxy_pass http://localhost:5523;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
        proxy_set_header Accept-Encoding gzip;
    }
    location /faasd/ {
        proxy_pass http://localhost:8080/ui/;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
        proxy_set_header Accept-Encoding gzip;
    }
    location /codeserver/ {
        proxy_pass http://localhost:5000/;
        proxy_redirect http:// https://;
        proxy_set_header Host $host:443/codeserver;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
        proxy_set_header Accept-Encoding gzip;
    }
}
server {
    listen 80;
    server_name DOMAIN_NAME;
    if ($host = DOMAIN_NAME) {
        return 301 https://$host$request_uri;
    }
    return 404;
}
EOF
    sed -i "s#DOMAIN_NAME#${DOMAIN_NAME}#g" /etc/nginx/conf.d/default.conf
    if [ $PANEL_TYPE == "0" ]; then
        sed -i "s#PORT#8080/functions/#g" /etc/nginx/conf.d/default.conf
    else
        sed -i "s#PORT#3000#g" /etc/nginx/conf.d/default.conf
    fi
    # restart nginx
    msg=$(
        # begin
        [[ $(systemctl is-active nginx.service) == "activating" ]] && systemctl reload nginx.service
        systemctl restart nginx 2>&1
    )
    status=$?
    updateProgress 50 "$msg" "$status" "nginx reconfig"
}
confignginx

So that the Docker side can be installed over an existing installation, the relevant configuration is logically wiped at the beginning of the script. The tricky part here is the tangled relationship between containerd, CNI and faasd:

Container Network Interface (CNI) is a container-network specification started by CoreOS and is the basis of Kubernetes network plug-ins. The basic idea is that when the container runtime creates a container, it first creates the network namespace, then calls a CNI plug-in to configure the network for that netns, and only then starts the processes in the container. CNI has since joined CNCF and become the network model CNCF promotes. CNI is responsible for all network-related operations when a container is created or deleted: it creates the rules needed for correct network connectivity from the container, but it is not responsible for setting up the underlying network media, such as creating bridges or distributing routes to connect containers on different hosts.
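For orientation only, a bridge-type CNI network definition looks roughly like the sketch below. The file name, network name and subnet are assumptions, not necessarily the literal values faasd writes; the openfaas0 bridge name and the 0.4.0 spec version are the ones referenced elsewhere in this article.

# illustrative sketch only: path, network name and subnet are assumptions
cat <<'EOF' > /etc/cni/net.d/10-openfaas.conflist
{
  "cniVersion": "0.4.0",
  "name": "openfaas-cni-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "openfaas0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.62.0.0/16" }
    }
  ]
}
EOF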

That bridging and routing work is done by faasd and friends. The Docker-side components, containerd + cni + ctr + runc, are configured and driven by faasd. Starting the freshly installed containerd + cni + ctr + runc on its own will not start CNI or bring up the network interface (starting containerd alone prints "cni conf not found" but still starts); you need the steps inside faasd to push the CNI and network-interface configuration down into those components. But this coupling is so tight that it becomes hard to clean containers up completely.
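Once faasd is up, you can verify this from the host (a minimal sketch, assuming bridge-utils from the deps prepare step):

# the CNI-created bridge only appears after faasd has driven containerd/cni
brctl show openfaas0
ip addr show dev openfaas0
# the services wired together by the script
systemctl status containerd faasd-provider faasd --no-pager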

For container cleanup, running ctr tasks kill && ctr tasks delete && ctr container delete and then checking manually with ps aux | grep shows that the shim task and the /proc/<pid>/ns entries in the host namespace are gone, yet residue is still left behind in some places. This is because the two are hard to separate: the task-associated container started via the shim and the state under /var/run/containerd cannot be fully cleaned, which makes it difficult to unplug or uninstall the configuration on its own and to start completely fresh in the next overlay installation.
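The cleanup "trilogy" for a single container looks like this (a sketch; "gateway" is just one of the service containers the script below tears down, and the commands run against containerd's default namespace, as the script does):

ctr tasks kill -s SIGKILL gateway   # stop the running task (the shim-managed process)
ctr tasks delete gateway            # remove the task record
ctr container delete gateway        # remove the container itself
ps aux | grep containerd-shim       # verify the shim process for it is gone
ls /var/run/containerd              # ...but state can still linger here (the residue discussed above)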

And this is actually caused by a bug, "containerd cannot properly do clean-up with shim process during start up" #3971 (https://github.com/containerd/containerd/issues/3971), which was only fixed in 1.4.0-beta (https://github.com/containerd/containerd/pull/4100/commits/488d6194f2080709d9667e00ff244fbdc7ff95b2). I tested it (cd /var/lib/faasd && faasd up), and the newer version does behave better: 1.3.3 complains that the id already exists and cannot rebuild the container, while 1.4.0 complains about files existing under /run/containerd. Neither fully solves the need to wipe everything and do a clean overlay install of containerd, which is why my script prints "containerd install+start progress (this may hang long and if you over install the script you may encount /run/containerd device busy error, for this case you need to reboot to fix after scripts finished)". In short, if you hit an error about things under /var/run that cannot be deleted, wait for the installer to finish running and then reboot.

So I chose containerd 1.4.0, which also solves the gateway-failure problem mentioned at the beginning. The CNI plugins used are still 0.8.5. I originally intended to use cri-containerd-cni-1.4.0-linux-amd64.tar.gz, but the CNI bundled in it is 0.7.1, which does not support the 0.4.0 spec version required by faasd.

As for uninstalling and removing CNI, that falls outside ctr's control. CNI leaves no handle on the host, unless you go to the trouble of mapping the process network namespaces back into the host directory, or of running the ip command inside the container network namespace to check that the interfaces are set up correctly. After deleting the containers with the ctr tasks kill && ctr tasks delete && ctr container delete trilogy above, you can see in ifconfig that the virtual network interfaces corresponding to the five tasks have been removed as well, so I did not dig further into CNI's uninstall logic.

configdocker() {
    [[ $(systemctl is-active faasd-provider) == "activating" ]] && systemctl stop faasd-provider
    [[ $(systemctl is-active faasd) == "activating" ]] && systemctl stop faasd
    [[ $(systemctl is-active containerd) == "activating" ]] && \
        ctr image remove docker.io/openfaas/basic-auth-plugin:0.18.18 docker.io/library/nats-streaming:0.11.2 docker.io/prom/prometheus:v2.14.0 docker.io/openfaas/gateway:0.18.18 docker.io/openfaas/queue-worker:0.11.2 && \
        for i in basic-auth-plugin nats prometheus gateway queue-worker; do
            # the cleanup "trilogy" discussed above
            ctr tasks kill -s SIGKILL $i
            ctr tasks delete $i
            ctr container delete $i
        done && \
        systemctl stop containerd && sleep 10
    ps -ef | grep containerd | awk '{print $2}' | xargs kill -9
    rm -rf /var/run/containerd /run/containerd
    # tear down the cni bridge created by faasd, if it exists
    [[ ! -z "$(brctl show | grep openfaas0)" ]] && ifconfig openfaas0 down && brctl delbr openfaas0
    rm -rf /etc/cni

    echo "=== cniplugins installonly ==="
    msg=$(
        # begin
        if [ ! -f "/tmp/cni-plugins-linux-amd64-v0.8.5.tar.gz" ]; then
            wget --no-check-certificate -qO- ${MIRROR_PATH}/docker/containernetworking/plugins/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tar.gz > /tmp/cni-plugins-linux-amd64-v0.8.5.tar.gz
        fi
        mkdir -p /opt/cni/bin
        tar -xf /tmp/cni-plugins-linux-amd64-v0.8.5.tar.gz -C /opt/cni/bin
        /sbin/sysctl -w net.ipv4.conf.all.forwarding=1 2>&1
    )
    status=$?
    updateProgress 50 "$msg" "$status" "cniplugins installonly"

    echo "=== containerd install+start progress (this may hang long and if you over install the script you may encount /run/containerd device busy error, for this case you need to reboot to fix after scripts finished) ==="
    msg=$(
        # begin
        # del original bins installed by the docker.io deb
        rm -rf /usr/bin/containerd* /usr/bin/ctr
        # replace with new bins
        if [ ! -f "/tmp/containerd-1.4.0-linux-amd64.tar.gz" ]; then
            wget --no-check-certificate -qO- ${MIRROR_PATH}/docker/containerd/v1.4.0/containerd-1.4.0-linux-amd64.tar.gz > /tmp/containerd-1.4.0-linux-amd64.tar.gz
        fi
        tar -xf /tmp/containerd-1.4.0-linux-amd64.tar.gz -C ${INSTALL_DIR}/bin/ --strip-components=1 && \
            ln -sf ${INSTALL_DIR}/bin/containerd* /usr/local/bin/ && \
            ln -sf ${INSTALL_DIR}/bin/ctr /usr/local/bin/ctr
        rm -rf /etc/systemd/system/containerd.service
        cat <<'EOF' > /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
#After=network.target containerd.socket containerd.service
#Requires=containerd.socket containerd.service

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
# change to mixed to let systemctl stop containerd kill the shims
#KillMode=mixed
Restart=always
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

[Install]
WantedBy=multi-user.target
EOF
        systemctl daemon-reload && systemctl enable containerd
        systemctl start containerd --no-pager 2>&1
    )
    status=$?
    updateProgress 50 "$msg" "$status" "containerd install+start"
}
configdocker

At this point, the study of "how to implement PaaS with nginx + Docker" is over. I hope it has resolved your doubts. Pairing theory with practice is the best way to learn, so go and try it out!
If you want to keep learning more related knowledge, please continue to follow the site; the editor will keep working hard to bring you more practical articles!
