How to create a Dockerfile for installing applications

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article introduces how to create a Dockerfile for installing an application, and more generally how to containerize a "traditional" application: what such applications look like, why containerizing them helps, and the concrete steps involved.

Concept

What is a "traditional" application?

There is no specific definition that can describe all traditional applications, but they have some common features:

Use the local file system for persistent storage, mixing data files with application files.

Run many services on the same server, such as a MySQL database, a Redis server, an nginx web server, a Ruby on Rails application, and a number of scheduled tasks.

Are installed and upgraded using a variety of scripts and manual steps (often with only crude documentation).

Store configuration in files, usually scattered across multiple locations and mixed in with application files.

Use the local file system for inter-process communication (for example, one process writes a file to disk and another reads it) rather than TCP/IP.

Are designed on the assumption that only one application runs on a single server.

Shortcomings of traditional applications

It is difficult to automate deployment.

If you need to run multiple different instances of the application, it is difficult for the instances to coexist on the same server.

If a server goes down, recovery takes a long time because of the manual processes involved.

Deploying a new version is largely (or entirely) a manual process and is difficult to roll back.

The test environment and the production environment are likely to differ significantly, so some production problems cannot be caught during testing.

It is difficult to scale out by adding new instances.

What is containerization?

"Containerizing" an application means making it run inside a Docker container or a similar technology that encapsulates the application together with its operating-system environment (a complete system image). Because a container gives the application an environment that looks like a complete system, it offers a way to modernize deployment with little or no modification to the application itself. It is also the foundation on which the application's architecture can continue to evolve toward being "cloud-friendly".

Benefits of containerization

Deployment is much easier: replace the entire old version directly with a new container image.

Automated deployment is also relatively easy and can even be driven entirely by CI (Continuous Integration).

Rolling back a failed deployment is as simple as switching back to the previous image.

Upgrading the application is easier because there are no "intermediate steps" that can partially fail and leave the deployment in an inconsistent state.

The same container image can be fully tested in different environments and then deployed directly to production. This guarantees that what was tested is exactly what runs in production.

The system is easier to recover from downtime because a new container containing the application can be quickly launched on the new hardware resource and attached to the same data source.

Developers can test new features locally in the form of containers in a more realistic environment.

Hardware is used more efficiently: multiple containerized applications can now run on a single host, which was not possible before.

Containerization is a solid foundation for supporting zero downtime upgrades, canary deployment, high availability, and scale-out.

Alternatives to containerization

Configuration management tools such as Puppet and Chef can solve some of the "traditional" problems, such as environment consistency. However, they cannot provide "atomic" deployment with a complete rollback of both the application and its environment, and a deployment scheme that cannot easily be rolled back remains risky while a deployment is in progress.

Virtual machine images are another way to achieve some of these capabilities, and in some cases a complete virtual machine is more appropriate than a container for "atomic" deployment. The main drawback of virtual machines is lower hardware efficiency: a virtual machine needs resources (CPU, memory, disk, and so on) reserved exclusively for it, whereas containers share the host's resources.

How to containerize

I. Preparatory work

List the location of the file system where the data is stored

Because a new version of the application is deployed by replacing the Docker image, any persistent data must be stored outside the container. If you are lucky, the application already writes all of its data to a few specific locations; more often, traditional applications scatter their data all over the disk, possibly mixed in with the application's own files. Docker's mountable storage volumes expose directories of the host's file system to the container at specific paths, so that data is retained across containers. In either case, we need to list every location where data is stored.

At this point you might consider modifying the application so that it writes all of its output data under a single directory, which would significantly simplify deployment of the containerized version. However, this is not required if the application is difficult to change.
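Once the data locations are known, they can be exposed to the container as volumes. A sketch of how the containerized application might eventually be started (the image name and the host and container paths are hypothetical, and running this requires a Docker daemon):

```shell
# Mount the host directories that hold persistent data into the container,
# so the data outlives any individual container.
docker run -d \
  --name myapp \
  -v /srv/myapp/data:/var/lib/myapp \
  -v /srv/myapp/logs:/var/log/myapp \
  myapp:latest
```

When a new image is deployed, the old container is removed and a new one is started with the same -v mounts, so it sees the same data.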

Find out the configuration data that will change with the deployment environment

To guarantee consistency, the same image is used in multiple environments (for example, testing and production), so list all configuration values that differ between environments and set them when the container starts. The program in the container can then read these values from environment variables or from configuration files.

You might also consider modifying the application to read its configuration directly from environment variables, which simplifies containerization. Again, if the application is not easy to modify, this is not strictly necessary.
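If the application can be adapted, reading configuration from environment variables with sensible defaults might look like the following sketch (DB_HOST, DB_PORT, and REDIS_URL are hypothetical variable names for illustration):

```shell
#!/usr/bin/env bash
# Sketch: read environment-specific configuration from environment variables,
# falling back to defaults suitable for local development.
DB_HOST="${DB_HOST:-localhost}"
DB_PORT="${DB_PORT:-5432}"
REDIS_URL="${REDIS_URL:-redis://localhost:6379}"

echo "database: ${DB_HOST}:${DB_PORT}"
echo "cache: ${REDIS_URL}"
```

In production the container would be started with, for example, docker run -e DB_HOST=db.internal, overriding the defaults without changing the image.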

Find services that are easy to move out

Our application may depend on other services running on the same machine. Services that are highly independent and communicate over TCP/IP can easily be moved out. For example, a MySQL or PostgreSQL database, or a Redis-style cache, running on the same machine is easy to move to another host. You may need to adjust the configuration so that the service's host name and port can be specified, rather than assuming the service runs on localhost.
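For example, once a database has been moved to another machine, the application's configuration must point at that host instead of localhost. A minimal sketch (the file name config.ini, the host name db.internal, and the key names are all assumptions for illustration):

```shell
# Create a hypothetical config file that assumes the database is local.
cat > config.ini <<'EOF'
db_host = localhost
db_port = 3306
EOF

# Point the application at the externalized database host instead.
sed 's/^db_host = localhost$/db_host = db.internal/' config.ini > config.ini.new
mv config.ini.new config.ini

grep '^db_host' config.ini
```

The same edit could of course be made by hand; the point is that "localhost" assumptions must be found and made configurable.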

II. Create a container image

Create a Dockerfile to install the application

If you already have an automated installation based on scripts or on a configuration management tool such as Chef or Puppet, the process is simple: pick a suitable base image, install all the dependencies, and run the automation scripts.

If the current installation process is manual, you will need to write some scripts. However, because the state of the base image is known, scripting here is easier than scripting against a physical system whose state may have drifted.

Any services identified earlier as candidates for moving out should not be installed by these scripts.
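A scripted installation might start out as simple as the sketch below. The directory layout and the APP_DIR default are assumptions; inside the image the script would typically install into /app, but the default here is a relative path so the sketch runs without root:

```shell
#!/usr/bin/env bash
# Sketch of a setup.sh that replaces a manual installation process.
set -e
APP_DIR="${APP_DIR:-./app}"

# Create the directory layout the application expects (names are illustrative).
mkdir -p "$APP_DIR/releases" "$APP_DIR/shared/log" "$APP_DIR/shared/tmp"

# Further steps would unpack the application, install language-level
# dependencies, precompile assets, and so on.
echo "installed into $APP_DIR"
```

Because the script always starts from the same known base image, it does not need the defensive checks that scripts for long-lived servers accumulate.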

Here is a simple example of Dockerfile:

# Based on the official Ubuntu 16.04 Docker image
FROM ubuntu:16.04

# Install the dependencies (list the Ubuntu packages the application
# needs after -y), then clean the apt cache to keep the image small
RUN apt-get update \
    && apt-get install -y \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Copy the application's files into the image
ADD . /app

# Run the installation script
RUN /app/setup.sh

# Change to the application's directory
WORKDIR /app

# Specify the application's startup script
CMD /app/start.sh

Make a startup script for configuration

If the application already reads its configuration from environment variables, this step can be skipped. If it reads environment-specific configuration values from a file instead, the startup script should read those values from environment variables and write them into the configuration file before launching the application.

Here is an example of a startup script:

#!/usr/bin/env bash
set -e

# Add the value of the environment variable $MYAPPCONFIG to the configuration file
cat >> /app/config.txt <<EOF
$MYAPPCONFIG
EOF
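The environment-to-configuration step can also be tried outside a container. A sketch using a local file in place of /app/config.txt, with an arbitrary example value for MYAPPCONFIG:

```shell
#!/usr/bin/env bash
set -e

# Start from a clean file for the demonstration.
rm -f ./config.txt

# Simulate the startup script: append the environment variable's value
# to the configuration file the application will read.
export MYAPPCONFIG="db_host=db.internal"
cat >> ./config.txt <<EOF
$MYAPPCONFIG
EOF

cat ./config.txt
```

Starting the container with a different -e MYAPPCONFIG=... value would produce a different configuration file from the same image.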
