2025-01-16 Update From: SLTechnology News&Howtos (Shulou)
This article walks through how to deploy a Django application in Docker: the network architecture, the Dockerfiles for each service, and the docker-compose configuration that ties them together. I hope you get something out of it. Let's take a look.
1. Network architecture
I used Visio to roughly sketch my network architecture:
The containers I built:
Nginx container
Web server container
Redis container
Memcached container
MySQL container
It would be simpler to deploy everything into a single container, but splitting services across containers adds some complexity. First, consider the dependencies between containers: nginx depends on the web server (if the web server is down, nginx cannot work properly), the web server depends on the database, and so on. Second, data sharing between containers must be set up; for example, the web application's static resources must somehow be made available to nginx for the reverse proxy to serve them.
With these questions in mind, let's begin the deployment.
2. Environment:
Prepare the environment for docker.
Ubuntu 16.04 (host environment)
Docker 17.06.0
Docker-compose 1.14.0
Compose file version: version 3
An introduction to Docker and what it does can be found on the official website.
Pay attention to your Docker version and your Compose file version, because the syntax can differ slightly between versions. I ran into a pitfall when configuring shared data volumes: volumes_from was removed in version 3 of the Compose file format, so no matter how I configured it, it would not work. For details, see the official documentation.
1. Engineering structure
├── blog
│   ├── account
│   ├── blog
│   ├── dailyblog
│   ├── Dockerfile
│   ├── gunicorn.conf
│   ├── manage.py
│   ├── media
│   ├── requirements.txt
│   ├── start.sh
│   └── static
├── docker-compose.yml
└── nginx
    ├── Dockerfile
    └── nginx.conf
blog is my Django application and contains a Dockerfile; there is also a Dockerfile in the nginx directory. blog and nginx are each a service, and we create their images and containers through the docker-compose.yml configuration. That means you have to do three things:
Write dockerfile under each service (application)
Configure related services in the docker-compose.yml file
Execute the docker-compose commands build and up
2. Configuration of django application (blog package):
1) Dockerfile
```dockerfile
FROM ubuntu:16.04
# Update the software sources. This must be executed, or later commands
# may fail; -y skips the prompt and installs directly.
RUN apt-get -y update
RUN apt-get install -y python-dev python-pip
RUN apt-get install -y python-setuptools
# mysql-python needs this library installed first
RUN apt-get install -y libmysqlclient-dev
RUN mkdir /blog
# Set the working directory
WORKDIR /blog
# Add the current directory into the working directory
ADD . /blog
# Install any needed packages listed in requirements.txt
RUN pip install -r requirements.txt
# Externally exposed ports
EXPOSE 80 8080 8000 500
# Set an environment variable
ENV spider=/blog
```
I chose Ubuntu as my base image simply because I am most used to it.
2) The startup script start.sh
```bash
#!/bin/bash
# CMD executes only the last command, so chain them with &&
python manage.py collectstatic --noinput &&
python manage.py migrate &&
gunicorn blog.wsgi:application -c gunicorn.conf
```
On the first deployment you need to collect each app's static files into the project's static directory and create the database tables. The three commands above are joined with &&, which makes them behave as a single command.
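The short-circuit behavior of && can be seen with stand-in commands (true and false below are placeholders for the real manage.py calls, not part of the original script):

```shell
# true/false stand in for the real manage.py commands to show how
# && short-circuits: the right-hand command runs only when the
# left-hand command exits with status 0.
true && echo "second command runs"
if false && echo "never printed"; then :; fi  # guarded so the demo script continues
echo "reached the end"
```

So if collectstatic or migrate fails, gunicorn is never started, which is exactly what you want during deployment.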
In addition, the django application selects gunicorn as the web server, and the configuration file of gunicorn is as follows:
```python
workers = 4
bind = ['0.0.0.0:8000']
proc_name = 'blog'
pidfile = '/tmp/blog.pid'
worker_class = 'gevent'
max_requests = 6000
```
gunicorn is bound to 0.0.0.0:8000.
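A small sketch of why the bind address matters inside a container: 0.0.0.0 accepts connections on every network interface, whereas 127.0.0.1 would only accept connections from inside the container itself, making the forwarded port unreachable from nginx or the host.

```python
import socket

# Bind to 0.0.0.0 (all interfaces), as gunicorn does in the config above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("0.0.0.0", 0))       # port 0 lets the OS choose a free port
addr, port = s.getsockname()
print(addr)                  # -> 0.0.0.0
print(port > 0)              # -> True
s.close()
```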
3. Nginx configuration (nginx directory)
1) Dockerfile
```dockerfile
FROM nginx
# Exposed ports
EXPOSE 80 8000
RUN rm /etc/nginx/conf.d/default.conf
ADD nginx.conf /etc/nginx/conf.d/
RUN mkdir -p /usr/share/nginx/html/static
RUN mkdir -p /usr/share/nginx/html/media
```
For nginx you can use the official nginx image from the Docker repository as the base image, then add your own configuration file to the relevant directory. One thing to note: when I configured nginx directly on a host in the past, /etc/nginx/nginx.conf would usually include conf files from both /etc/nginx/conf.d/ and /etc/nginx/sites-enabled/, and I used to put my configuration in sites-enabled/. I did the same this time, but after I built and ran the container, nginx did not work properly. Inside the container I opened /etc/nginx/nginx.conf to see why my configuration was not loaded and, sure enough, it only includes conf files from /etc/nginx/conf.d/. Moving my configuration file there fixed it.
The static and media directories created above are where the web application's static files will be stored.
2) nginx.conf
```nginx
server {
    listen 80;
    server_name localhost;
    charset utf-8;
    error_log /tmp/nginx_error.log;
    access_log /tmp/nginx_access.log;

    location /media {
        alias /usr/share/nginx/html/media;
    }
    location /static {
        alias /usr/share/nginx/html/static;
    }
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://web:8000;
    }
}
```
With regard to nginx configuration, it is very important to note the following two points:
location
For static files, nginx serves them from /usr/share/nginx/html/, and the files in that directory are synchronized from the web container through volumes. The docker-compose configuration, covered below, is what makes this work.
proxy_pass
This is different from configuring nginx directly on the host: the host part cannot be a concrete IP address; it must be the service name. Here web is the service name of the web application defined in docker-compose. The docker-compose configuration is written later.
4. Docker-compose.yml configuration
```yaml
version: "3"
services:
  db:
    image: mysql
    environment:
      MYSQL_DATABASE: app_blog
      MYSQL_ROOT_PASSWORD: admin
    volumes:
      - /srv/db:/var/lib/mysql
    restart: always
  redis:
    image: redis
    restart: always
  memcached:
    image: memcached
    restart: always
  web:
    build: ./blog
    ports:
      - "8000:8000"
    volumes:
      - ./blog:/blog
      - /tmp/logs:/tmp
    command: bash start.sh
    links:
      - redis
      - memcached
      - db
    depends_on:
      - db
    restart: always
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - ./blog/static:/usr/share/nginx/html/static:ro
      - ./blog/media:/usr/share/nginx/html/media:ro
    links:
      - web
    depends_on:
      - web
    restart: always
```
This document is very important!
Five services are defined:
db: MySQL database
redis: cache, NoSQL database
memcached: cache
web: the web application
nginx: reverse proxy
The service names matter a great deal for communication between containers. Let's go through the services one by one.
1) db
Several aspects of the configuration:

The base image is pulled from the Docker repository (the image key).
Environment variables create the database (app_blog, the database Django uses when running migrate) and set the root user's password.
volumes: the data volume, for backup; /srv/db is a host directory and /var/lib/mysql is the directory inside the MySQL container.
restart defaults to no, meaning the container is never restarted automatically; set to always, it is restarted whenever it stops.
You also need the corresponding configuration in the Django application's settings.py, as follows:
```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'app_blog',
        'USER': 'root',
        'PASSWORD': 'admin',
        'PORT': 3306,
        'HOST': 'db',
    }
}
```
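As a variant (my suggestion, not what the original deployment does), the password can be read from an environment variable instead of being hardcoded in settings.py; docker-compose can inject the same MYSQL_ROOT_PASSWORD variable into the web container:

```python
import os

# Fall back to 'admin' for local runs where the variable is not set.
# The variable name MYSQL_ROOT_PASSWORD mirrors the db service's
# environment in docker-compose.yml; reusing it here is an assumption.
os.environ.setdefault("MYSQL_ROOT_PASSWORD", "admin")

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "app_blog",
        "USER": "root",
        "PASSWORD": os.environ["MYSQL_ROOT_PASSWORD"],
        "PORT": 3306,
        "HOST": "db",  # the docker-compose service name, not an IP
    }
}
```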
2) redis, memcached
These two are covered together because no extra configuration is needed; just use the images from the repository.
3) web application
Several aspects of configuration:
build: builds an image from the Dockerfile in ./blog.
ports: the format is host:container; like a NAT mapping, it forwards the internal port outward.
volumes: sets up data backup, which can also be described as synchronization; the web container's working directory /blog is mirrored to a directory on the host.
links: creates links to the services in other containers by service name. With these links, services can communicate through the service name; the proxy_pass in the nginx configuration above uses the web service name.
depends_on: has two effects. First, when starting the services, db is started before web. Second, running docker-compose up web will also create and start db.
4) nginx
build: builds an image from the Dockerfile in ./nginx.
ports: host:container format, as above; 80 is the default HTTP port.
links: already introduced above.
depends_on: also introduced above.
volumes: I think this is the most important part, so I will focus on it.
How to share data between the nginx container and the web container, that is, how to share the static files, really bothered me. First, following the official documentation, I configured a top-level volumes section with type and source under the service, but it did not work. Then I searched for a lot of information online, and everything used volumes_from for sharing between containers, which was removed in version 3 of the Compose file format, and I did not want to go back to the old version. It stumped me all of last Friday, until an article suddenly made it click; I should have thought of it earlier. The logic goes like this:
I had already set up volumes in the web application, so files in the container are synchronized to the host. The host can then act as a middleman: the nginx container synchronizes the static files from the same host directory. This is like celery's pattern, where a producer writes task messages to a message broker and a consumer takes them from the broker; here the web application plays the producer and nginx plays the consumer.
In this way, the problem will be easily solved!
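As an aside, a named volume is another way to share files between services in Compose file format version 3. This is only my sketch, not what the deployment above uses, and the volume name static_data is illustrative:

```yaml
version: "3"
services:
  web:
    build: ./blog
    volumes:
      - static_data:/blog/static                        # web writes collected static files here
  nginx:
    build: ./nginx
    volumes:
      - static_data:/usr/share/nginx/html/static:ro     # nginx reads the same volume
volumes:
  static_data:                                          # named volume managed by Docker
```

Named volumes shared across services are the version 3 idiom that replaces the removed volumes_from.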
So far, all deployment-related configurations have been written.
First execute:

```shell
docker-compose build
```

Then execute:

```shell
docker-compose up -d
```
Off topic: I ran the build command at 11:00 on Saturday night. It was downloading images, updating software sources, and slowly fetching resources, and I was too sleepy, so I went to bed. I even dreamed about Docker that night. I got up before 6 o'clock, went to the living room, and saw that the build had succeeded. I ran the up command, and the moment I opened the browser, typed localhost, and got a successful response, I felt a real sense of accomplishment!
A few random points of knowledge:
Delete all containers:

```shell
docker rm $(docker ps -a -q)
```

The -q option at the end is important: it makes docker ps print only the container IDs.
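The $(...) construction is command substitution: the shell replaces it with the inner command's output, which is how docker rm receives the list of IDs. A stand-in sketch (printf plays the role of docker ps -a -q, and the IDs are made up):

```shell
# $(...) expands to the inner command's stdout; each line becomes
# an argument to the outer command, exactly as with docker rm.
ids=$(printf '%s\n' abc123 def456)   # stand-in for: docker ps -a -q
echo $ids                            # -> abc123 def456
```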
Delete dangling (none) images:

```shell
docker rmi $(docker images -f "dangling=true" -q)
```
Updates: add apt-get update to the Dockerfile, otherwise the subsequent install commands will not execute properly.
command executes only the last command: three commands were written in the script, but only the last one ran, so the three commands were spliced together with &&.
Copying files between the host and a Docker container:

1. Copy a local file into a container:

```shell
docker cp /users/howey/documents/apache-maven-3.5.2/ 749056ea1637:/opt
# docker cp <local path> <container id or name>:<container path>
```

2. Copy files from a container to the local machine:

```shell
docker cp 749056ea1637:/users/howey/documents/apache-maven-3.5.2 /opt/
# docker cp <container id>:<container path> <local path>
```

That is all of "how to deploy Django applications in Docker". Thank you for reading!