How to deploy the Teprunner test platform to Docker on Linux

Many newcomers are unsure how to deploy the Teprunner test platform to Docker on a Linux system, so this article walks through the problem and its solution step by step. Hopefully it helps you get your own deployment running.
Running locally
We run the project locally by executing npm run serve in the Vue project and python manage.py runserver in the Django project, as shown in the diagram below:
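For reference, the two commands, each run from its own project directory:

npm run serve                 # front end, in the teprunner-frontend project
python manage.py runserver    # back end, in the teprunner-backend project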
The front end starts a local Node server and the back end starts a local Django server, on ports 8080 and 8000 respectively. Browsers enforce the same-origin policy: a request is only allowed when domain, port, and protocol all match; otherwise it is blocked as a cross-origin request. Since the front-end and back-end ports differ here, cross-origin blocking occurs and the front end cannot call the back end directly. The workaround is to configure devServer in vue.config.js:
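A minimal sketch of such a devServer proxy configuration, assuming the requests use an /api prefix as in the Nginx setup later in the article:

// vue.config.js
module.exports = {
  devServer: {
    port: 8080,
    proxy: {
      // forward /api requests to the local Django server
      '/api': {
        target: 'http://127.0.0.1:8000',
        changeOrigin: true,
      },
    },
  },
};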
This opens a proxy server on the Node side: when the front end calls the back end, the request first goes to the Node proxy server, which forwards it with the same parameters to the real back-end server and then returns the response to the front end. In this project the front-end requests are still sent to http://127.0.0.1:8080, so the browser does not block them, and the Node proxy server forwards them to the back end on port 8000.
Nginx deployment
Now that we understand how proxy forwarding works locally, let's look at the Nginx deployment. Nginx is itself a server, just like the Node server; you can think of it the way you would think of Apache or Tomcat. The Vue project is built with npm run build into static files in the dist directory, which are then served by Nginx. Combined with Docker, the picture looks like this:
Compared with running locally, the front end changes quite a bit under Nginx: (1) the dist static files are copied to the /usr/share/nginx/html directory; (2) for the / path, Nginx listens on port 80; (3) for the /api path, Nginx forwards the request to the back-end server's port, which is what a reverse proxy does. The back end is essentially unchanged; only the port differs slightly from the local setup, with port 80 used inside Docker.
The key here is to understand the relationship between the teprunner-frontend container, the teprunner-backend container, and the Linux host. If you are new to Docker, you have probably heard of virtual machines; conceptually Docker is similar, so you can think of these three as three hosts. The Linux host's IP is 172.16.25.131; its port 80 is mapped to port 80 of the teprunner-frontend container, and its port 8099 is mapped to port 80 of the teprunner-backend container, as shown by the two-way arrows at the bottom of the figure. If you open http://127.0.0.1 on the Linux host you get the login page, but the page cannot call the back end directly, because a request from a page served on port 80 to port 8099 is cross-origin. The solution is a reverse proxy inside the teprunner-frontend container: Nginx receives the request first and then forwards it to port 8099 on the Linux host.
Note that inside the teprunner-frontend container the /api proxy target cannot be http://127.0.0.1:8099, because port 8099 is not open inside that container; it is open on the Linux host, so the target has to be specified via the host's IP.
With the overall approach clear, let's get started.
Writing the deployment scripts
Front end
Open the teprunner-frontend folder and create a new deploy/nginx.conf file:
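A minimal sketch of what this nginx.conf contains, based on the behaviour described next (the exact directives in the original file may differ):

server {
    listen      80;
    server_name localhost;

    # "/" serves the built dist files, with index.html as the entry page
    location / {
        root  /usr/share/nginx/html;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    # "/api" is reverse-proxied to the back-end port published on the Linux host
    location /api {
        proxy_pass http://172.16.25.131:8099;
    }
}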
The / path serves files from /usr/share/nginx/html with index.html as the entry page, and /api is forwarded to http://172.16.25.131:8099. This file gets copied into the Docker image. Next, create a new Dockerfile:
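A sketch of this front-end Dockerfile, matching the description that follows (the destination path for nginx.conf is an assumption):

FROM nginx:latest

# the image's working directory; the dist files are copied here
WORKDIR /usr/share/nginx/html

# first argument: path in the Docker context; second argument: path in the image
COPY dist/ .
COPY deploy/nginx.conf /etc/nginx/conf.d/default.conf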
FROM defines the base image, which you can think of as the operating system; the front-end image is built on nginx. WORKDIR sets the image's current working directory, i.e. where subsequent COPY instructions land. The two COPY instructions copy the dist static files and the nginx.conf configuration file into the image: the first argument is a local path, resolved against the Docker context, and the second is a path inside the image, resolved against WORKDIR. Next, create a new build.sh file:
DockerContext specifies that the Docker context is the teprunner-frontend root directory. The shell script has two stages; the first stage compiles the code using node:
# Run the node:latest builder image to compile the front-end code.
#   --rm deletes the container after it runs; -v $(pwd)/../:/data/src mounts the
#   project root ($(pwd) is the current working directory) to /data/src;
#   -v /root/.npm/_logs:/root/.npm/_logs mounts the npm log directory;
#   -w /data/src/ sets the container's working directory;
#   /bin/sh -c executes the npm commands.
docker run --rm \
  -v $(pwd)/../:/data/src \
  -v /root/.npm/_logs:/root/.npm/_logs \
  -w /data/src/ \
  $BUILDER_IMAGE \
  /bin/sh -c "npm install && npm run build"
The second stage packages the result into a Docker image:
# docker build packages the image: -f gives the Dockerfile location,
# -t names the image package, and $DockerContext is the Docker context.
docker build -f $Dockerfile -t $PkgName $DockerContext
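The variables referenced above are presumably defined at the top of build.sh; a sketch, with values that are assumptions based on the surrounding description:

BUILDER_IMAGE="node:latest"      # image used for the compile stage
Dockerfile="$(pwd)/Dockerfile"   # location of the deploy/Dockerfile above
PkgName="teprunner-frontend"     # name for the built image
DockerContext="$(pwd)/.."        # the teprunner-frontend root directory, i.e. the Docker context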
Back end
The back end is similar. First, create a new deploy/Dockerfile:
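A sketch of what this back-end Dockerfile might contain, following the description below (the time zone value, requirements file name, and exact startup command are assumptions):

FROM python:3.8

# set the container's time zone
ENV TZ=Asia/Shanghai

WORKDIR /app/release

# copy the Django source files into the image directory
COPY . /app/release

# RUN executes at docker build time: install the dependency packages
RUN pip install -r requirements.txt

# CMD executes at docker run time: migrate the database, then start the server on port 80
CMD python manage.py migrate && python manage.py runserver 0.0.0.0:80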
The back-end image is built on python:3.8. It sets the time zone, uses COPY . to copy the Django source files straight into the image directory /app/release, and uses a RUN instruction to execute pip install for the dependency packages. CMD differs from RUN: RUN instructions execute during docker build, whereas CMD only executes during docker run, so it predefines the startup command.
The startup command is kept simple here: it runs the database migration (migrate) at startup, and the server uses the same database as the local setup.
Then create a new build.sh file:
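A sketch of the back-end build.sh, mirroring the front-end script (the variable values are assumptions):

Dockerfile="$(pwd)/Dockerfile"   # location of the deploy/Dockerfile above
PkgName="teprunner-backend"      # name for the built image
DockerContext="$(pwd)/.."        # the teprunner-backend root directory

docker build -f $Dockerfile -t $PkgName $DockerContext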
Python code does not need to be compiled, just package it into a Docker image.
Deploying to Docker on Ubuntu
Linux itself is a kernel with many distributions, such as CentOS and Ubuntu. This article uses Ubuntu for only one reason: it looks good.
My college roommate once impulsively replaced Windows with Ubuntu and spent days showing us how cool it was; after two or three days he found that Office didn't work and he couldn't play games, so he switched back. In everyday use Ubuntu is mostly for fun, unless you are doing Linux kernel development.
Download the software:
VMware cracked version
Ubuntu Desktop 20.04
The installation process is not covered here. Open Ubuntu in the virtual machine:
Open Terminal, enter su, enter password, and switch to root:
Switch to root with su whenever you find you are short on permissions.
Install curl:
apt-get install curl
Install docker:
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
Use ifconfig to query the virtual machine IP:
Package teprunner-frontend into a zip file, excluding the .git and node_modules folders; package teprunner-backend into a zip file, excluding the .git and __pycache__ folders. Copy both archives to the virtual machine's Documents folder and unzip them:
The advantage of Ubuntu Desktop is its graphical interface, which suits beginners like me. Use the command-line editor vi or the graphical editor gedit to change the /api forwarding address in teprunner-frontend/deploy/nginx.conf to your virtual machine's actual IP address:
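For example:

gedit teprunner-frontend/deploy/nginx.conf   # or: vi teprunner-frontend/deploy/nginx.conf
# update the /api forwarding address to http://<your VM IP>:8099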
Open two terminals, cd into teprunner-frontend/deploy and teprunner-backend/deploy respectively, and execute ./build.sh in each.
If you get an error mentioning ^M, it is because files edited on Windows use line endings that Linux does not expect. Install the conversion tool with apt-get install dos2unix and convert the files, e.g. dos2unix build.sh and dos2unix Dockerfile.
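That is:

apt-get install dos2unix   # install the conversion tool
dos2unix build.sh
dos2unix Dockerfile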
The first front-end build is slow because it has to download the node dependency packages and pull the nginx image; subsequent builds are much faster.
The first back-end build is slow because it has to pull the python image; subsequent builds are much faster. Once the builds finish, run docker images to see the packaged Docker images:
Start the front-end image:
docker run -p 80:80 teprunner-frontend
Start the back-end image:
docker run -p 8099:80 teprunner-backend
Once an image is started it becomes a Docker container, which you can think of as a virtual host. The -p option maps an Ubuntu port to a Docker port. Add the -d option to run the container in the background. Use docker ps -a to list containers, and docker kill CONTAINER or docker stop CONTAINER to stop one.
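Putting those options together (a usage sketch):

docker run -d -p 80:80 teprunner-frontend     # front end in the background
docker run -d -p 8099:80 teprunner-backend    # back end in the background
docker ps -a                                  # list containers
docker stop <CONTAINER ID>                    # stop a container by its ID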
Finally, open http://127.0.0.1 in the virtual machine to reach the login page. To access it from the host machine, replace 127.0.0.1 with the virtual machine's actual IP, e.g. http://172.16.25.131.
Along the way you can feel the charm of Docker as a landmark technology: without Docker we would have to install nginx, node, python, and other software on Ubuntu ourselves; with Docker we only install Docker, and everything else is built from Docker images. The test cases in the teprunner test platform are stored as code, which raises the question of where that code lives: for pytest to collect and execute it, it must exist as files. This deployment exposes an important point: if the back end writes the code straight to disk, the saved test-case code is wiped every time a new image is packaged and deployed. One solution is to use K8s; the other is to store the code in the database. The learning edition takes the second approach: the code is saved in the database and written out to files dynamically at execution time.
Finally, a few words about Docker and K8s. Docker belongs to Docker Inc., while K8s belongs to Google. Docker came from a small company that, at the start, did not account for container orchestration. In 2014 Google launched Kubernetes to solve the problem of orchestrating Docker containers at scale, and in 2016 Kubernetes released the unified CRI interface. Docker also released Docker Swarm in 2016, bringing its own multi-host, multi-container orchestration solution, but it could no longer stop K8s from winning the container orchestration war.
After reading the above, do you now know how to deploy the Teprunner test platform to Docker on a Linux system? Thanks for reading!