This article introduces how to deploy Docker containers on EC2 servers using Amazon ECS (EC2 Container Service) on AWS. Many people have questions about how this works, so the simple, easy-to-follow walkthrough below should help answer them. Follow along and give it a try!
Deploy the first container
When you open the ECS service in the console for the first time, you are greeted by a simple wizard. Configuring ECS manually is not too onerous, but on a first run it is worth letting the wizard set everything up for you: an EC2 server, an appropriate security group, an Auto Scaling group, the correct AMI (with the ECS agent built in), and so on. This is the fastest way to get up and running and gain ECS experience.
Step 1: Define the task
First, as part of the wizard, we need to define the task. For this demonstration we will use the free NGINX Docker image (NGINX is an open source web server that the community has containerized and published to the official Docker Hub).
Specify a name for the task, for example nginx-task in this walkthrough.
Next, click Add Container Definition to define the nginx container. The main thing to watch here is the image name, which must match the name of the image published on Docker Hub (nginx). You can also specify a private image instead.
The memory field is the maximum amount of memory, in megabytes, to allocate to the running container. CPU units are an abstract measure: each CPU core corresponds to 1024 units, and this field sets how many of those units to assign to the container.
This information is useful because it gives ECS the flexibility to schedule containers intelligently: ECS monitors which instances have free resources and places containers accordingly, making efficient use of the servers.
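As a rough command-line equivalent of what the wizard produces, the task definition could be registered like this; the family name, container name, and resource values below are illustrative rather than taken from the wizard's output.

# Register a task definition for the nginx container (values illustrative)
aws ecs register-task-definition \
    --family nginx-task \
    --container-definitions '[
      {
        "name": "nginx-container",
        "image": "nginx",
        "memory": 128,
        "cpu": 256,
        "essential": true,
        "portMappings": [{ "containerPort": 80, "hostPort": 80 }]
      }
    ]'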
Step 2: Define the service
In the second step we define the service, which describes how many instances of this task should run in the cluster.
Select the checkbox to create the service, name it (nginx-service in this example), and set the number of tasks to run, three in this case. This means that once the service is running, three tasks are created; each is a separate instance of the task, and each runs an nginx container.
For more complex setups you can choose an Elastic Load Balancer (ELB), so that services are dynamically registered with the ELB once their tasks are instantiated. This is described in more detail later.
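For reference, a minimal command-line sketch of the same service creation might look like the following; the cluster name is an assumption, since the wizard chooses it for you.

# Create a service that keeps three copies of the nginx task running
aws ecs create-service \
    --cluster nginx-cluster \
    --service-name nginx-service \
    --task-definition nginx-task \
    --desired-count 3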
Step 3: Create an ECS cluster
Next we need a cluster of EC2 servers on which to run the containers. This demo uses three t2.micro instances, which means one task, and therefore one container, is placed on each of the three servers. We could also configure a cluster with more instances than tasks, or use the same servers to run different tasks, but we cannot place multiple instances of a given task on the same server.
Select your key pair, then click the button to create the IAM role. This IAM role is important: it is what allows the hosts in the cluster to talk to the central ECS service.
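Outside the wizard, the cluster itself is just a named resource that can be created from the command line; the name below is illustrative, and the EC2 hosts still need the IAM instance role so their ECS agents can register with it.

# Create an empty ECS cluster; instances join it via the ECS agent
aws ecs create-cluster --cluster-name nginx-cluster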
Step 4: Create the stack
The final step of the wizard shows a summary of the task, service, and cluster configuration.
The generated JSON is shown below the summary; it can also be used from the command line if you prefer working that way, or if you plan to create clusters automatically.
During creation you will see CloudFormation being used to build the stack, which may take two to three minutes.
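If you want to follow the stack's progress from the command line, something like the following works; the stack name is a placeholder, since the wizard picks the real one.

# Check the status of the CloudFormation stack built by the wizard
aws cloudformation describe-stacks --stack-name <wizard-stack-name> --query 'Stacks[0].StackStatus'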
Step 5: Review the stack and the NGINX service
If you now visit the EC2 console, you can see that the servers have been created and are running. The wizard has created hosts across Availability Zones to demonstrate the benefits for resilience.
Back in the ECS console we can check the service; what we want to see is that it is ready and has three tasks.
Remember that it takes a few minutes to create the instances, pull the container image from Docker Hub, and bring the service to an available state, so don't worry if the whole process seems a little slow.
Drilling into the details of a task within the service, we can see that the task is in the RUNNING state.
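The same check can be made from the command line; the cluster name is again an assumption.

# List the tasks in the cluster and confirm they report lastStatus RUNNING
aws ecs list-tasks --cluster nginx-cluster
aws ecs describe-tasks --cluster nginx-cluster --tasks <task-arn-from-previous-output>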
Expand nginx-container. Under the external link we can see an HTTP link pointing to the container within the task.
Click the link and you will see the default welcome page served by the NGINX container.
At this point we have deployed the NGINX container to ECS and can reach the NGINX service from a web browser, which is a good basis for a proof of concept.
Next steps
Having deployed a simple container, we now need some more advanced configuration in order to deploy an application to production.
ELB load balancing
In the example above we used a browser to connect directly to one of the three containers in order to reach NGINX. This is not robust: if the container goes down, or restarts on a different server, the static address we used is no longer valid.
We can register the service with an EC2 Elastic Load Balancer (ELB) to get a stable endpoint. However the underlying tasks start, stop, and move around the EC2 instance pool, the ELB stays up to date through the service and routes traffic to the correct addresses.
To configure load balancing, first create an ELB in the EC2 console, then recreate the service and attach the ELB during the service-creation step.
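As a rough command-line sketch of the same idea, a classic ELB can be created and then referenced when the service is created. All names, ports, and Availability Zones below are illustrative, and ecsServiceRole is assumed to be an IAM role that lets ECS register instances with the ELB.

# Create a classic ELB listening on port 80
aws elb create-load-balancer \
    --load-balancer-name nginx-elb \
    --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80 \
    --availability-zones us-east-1a us-east-1b us-east-1c

# Recreate the service, registering its containers with the ELB
aws ecs create-service \
    --cluster nginx-cluster \
    --service-name nginx-service \
    --task-definition nginx-task \
    --desired-count 3 \
    --role ecsServiceRole \
    --load-balancers loadBalancerName=nginx-elb,containerName=nginx-container,containerPort=80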
Auto Scaling
ECS also integrates with EC2 Auto Scaling, which is the preferred way to grow the cluster when load increases.
Auto Scaling works from metrics such as CPU, memory, and I/O, adding or removing nodes when configured thresholds are crossed.
A newly instantiated node automatically registers with the ECS cluster, making it eligible to host containers in future deployments.
This is very practical, but ECS does not currently provide a hook to scale the number of tasks when the cluster grows. We can still benefit once the new capacity has joined the cluster: new containers can be started through the GUI or the API, and the load is then spread across the larger cluster.
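A minimal sketch of both halves, assuming the Auto Scaling group name and the cluster and service names used earlier: first grow the group that backs the cluster, then raise the service's desired count yourself so the new capacity is actually used.

# Grow the Auto Scaling group behind the ECS cluster (group name illustrative)
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name ecs-cluster-asg \
    --min-size 3 --max-size 6 --desired-capacity 4

# ECS does not add tasks automatically when the cluster grows, so do it explicitly
aws ecs update-service --cluster nginx-cluster --service nginx-service --desired-count 4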
Container links
When defining containers in a task, you can use Docker's native container links to connect them to each other.
This removes the need for static port mappings or separate service discovery in a multi-container environment, making it easier to deploy distributed microservices.
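For example, a task whose web container links to a cache container might be defined along these lines; the names and images are illustrative.

# Task definition in which the web container links to the redis container
aws ecs register-task-definition \
    --family linked-task \
    --container-definitions '[
      { "name": "redis", "image": "redis", "memory": 128, "essential": true },
      { "name": "web", "image": "nginx", "memory": 128, "essential": true,
        "links": ["redis"],
        "portMappings": [{ "containerPort": 80, "hostPort": 80 }] }
    ]'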
AWS command line tool
Although the walkthrough above uses the console UI, ECS is fully integrated with the AWS command line interface.
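A few read-only commands that are useful for inspecting the setup created above (the cluster and service names are assumptions matching this walkthrough):

aws ecs list-clusters
aws ecs list-services --cluster nginx-cluster
aws ecs list-container-instances --cluster nginx-cluster
aws ecs describe-services --cluster nginx-cluster --services nginx-service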
Troubleshooting
If there is a problem, you can access the nodes of the cluster directly through SSH for debugging.
In order to log in to a node using SSH, you need to open port 22 in the security group, because nodes created through the wizard will not open this port by default.
After logging in to the server node, you can view the ECS agent's log files under /var/log/ecs.
You can also run standard Docker commands, such as docker images and docker ps, to see the status of the images and containers on the server.
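Putting that together, a typical debugging session might look like the following; the security group ID, key path, IP addresses, and hostnames are placeholders, and ec2-user is assumed because the ECS-optimized AMI is based on Amazon Linux.

# Open port 22 in the cluster's security group
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr <your-ip>/32

# SSH into one of the cluster nodes
ssh -i /path/to/keypair.pem ec2-user@<node-public-ip>

# On the node: inspect the ECS agent logs and container state
ls /var/log/ecs/
docker images
docker ps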
That concludes this walkthrough of how to deploy Docker containers to EC2 servers with Amazon ECS. I hope it has answered your questions; pairing the theory with hands-on practice is the best way to learn, so go and try it!