2025-01-16 Update From: SLTechnology News&Howtos
This article introduces the EC2 Container Service (ECS) that AWS provides for running Docker containers.
EC2 Container Service (ECS) is a new product released by Amazon Web Services (AWS).
The goal of ECS is to make Docker containers easier to run by providing a clustering and orchestration layer that controls how containers are deployed onto hosts and manages their lifecycle within the cluster after deployment.
ECS is an alternative to tools such as Docker Swarm, Kubernetes, and Mesos, which work at the same layer. The difference is that you must deploy and manage those tools yourself, while ECS is provided "as a service".
ECS is built on proprietary clustering technology rather than on an engine such as Docker Swarm, Kubernetes, or Mesos. In this respect it can be compared to Google Container Engine (GKE), which runs Kubernetes behind the scenes.
Why do we need container orchestration?
The container orchestration layer provided by ECS, Swarm, or Kubernetes plays an important role in the overall blueprint for deploying and running container-based applications.
First, we need containers to form clusters for scalability. As load grows, we add more containers, scaling horizontally to handle higher load in parallel across servers.
Second, we need container clusters for robustness and high availability. When a host or container fails, we want the container to be rebuilt, perhaps restarted on another healthy host, so that the system as a whole is not affected.
Finally, one of the important functions provided by orchestration tools is abstraction, which shields developers from the underlying implementation details. In the containerized world, we don't need to care about each individual host; we only care that the expected number of our containers is running, in the "right place". Orchestration and clustering tools do this for us, letting us deploy containers to the cluster easily while they compute the best placement to decide which hosts each container should run on.
Designing a robust, high-performance distributed cluster system is very difficult, so tools like Kubernetes and Swarm spare us from building clustering ourselves. ECS takes this one step further by handling the setup, operation, and management of the orchestration layer as a managed service. For this reason, ECS is a project that developers who run applications in containers should watch closely.
ECS architecture
ECS is not a black-box service: it runs on your own EC2 instances, which you can log in to over SSH and manage just like any other EC2 instance.
Every EC2 instance in the cluster runs an ECS agent, a lightweight process that connects the host to the central ECS service. The agent registers the host with the ECS service and handles all requests for container deployment and for lifecycle events such as starting and stopping containers. Incidentally, the ECS agent, implemented in Go, is already open source.
When creating a new server, we can either configure the ECS agent manually or use a pre-built AMI that ships with it already configured.
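When configuring the agent manually, it reads its settings from a file on the instance. A minimal sketch, assuming the documented agent configuration file path and the `ECS_CLUSTER` variable; the cluster name `my-cluster` is illustrative:

```ini
# /etc/ecs/ecs.config -- tells the ECS agent which cluster
# to register this instance with (defaults to "default")
ECS_CLUSTER=my-cluster
```

With this in place, the agent registers the host into `my-cluster` when it starts, and the instance then appears in that cluster's list of container instances.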
From Amazon CTO Werner Vogels's blog, we learn that the centralized service is logically split into cluster management and the scheduling of container deployments onto hosts. The reasoning behind this is to make container scheduling pluggable, so we can even use other schedulers, such as Mesos or a developer-defined scheduler. Documentation for custom schedulers was still under development at the time of writing, but that blog post and its accompanying source code are the best reference available so far.
The following diagram illustrates the logical hierarchy of an ECS cluster: container instances run multiple tasks, tasks contain multiple containers, a cluster of EC2 container instances can span multiple Availability Zones, and an Elastic Load Balancer can distribute load dynamically across tasks. Keeping this picture in mind will help organize the rest of the article.
Services and tasks
In ECS, a Docker workload is described as a task.
A task essentially defines one or more containers, including the name of the image you intend to run (as it appears on Docker Hub) and the port and volume mappings applied when the container instance starts.
When a task runs, its underlying containers are started; when all of the container processes exit, the task ends. Tasks can be short-lived or long-running, for example a brief event-driven data-processing job, or a web service process.
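The shape of such a task can be sketched as the payload that ECS's RegisterTaskDefinition API accepts. This is a minimal illustration, not the article's exact configuration; the family name `nginx-web` and the memory limit are assumptions. With boto3, the resulting dict could be passed as keyword arguments to `ecs.register_task_definition(**task_def)`:

```python
def nginx_task_definition(family="nginx-web", memory_mb=256):
    """Build a task definition dict for a single nginx container."""
    return {
        "family": family,  # logical name grouping revisions of this task
        "containerDefinitions": [
            {
                "name": "nginx",
                "image": "nginx:latest",  # pulled from Docker Hub by default
                "memory": memory_mb,      # hard memory limit in MiB
                "essential": True,        # the task stops if this container stops
                "portMappings": [
                    # expose the container's port 80 on the host
                    {"containerPort": 80, "hostPort": 80}
                ],
            }
        ],
    }

task_def = nginx_task_definition()
```

Because all containers listed under `containerDefinitions` belong to one task, they will be placed together on the same host, which is the co-location behavior described below.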
One thing to note about the architecture: all containers belonging to a given task run on the same host. If we want to co-locate containers, we group them under the same task; if we want a service to run across different hosts, we simply define multiple tasks. At first glance this may seem like a constraint, but in the end it gives us the same degree of control over container placement as Kubernetes pods.
To illustrate, the screenshot below shows a task definition with a single container hosting the nginx web service.
Besides tasks, the service is the second key concept in ECS. A service requests that a given number of instances of a task be kept running. For example, with the nginx task defined above, we could define a service requesting three or more instances of it, forming a cluster that serves the web traffic.
Services are how ECS provides resilience. Once a service starts, it monitors whether its tasks are alive and whether the desired number of instances and containers is running. If a task stops, becomes unresponsive, or otherwise fails, the service requests that replacement tasks be started and cleans up as needed.
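The request a service makes can be sketched as the payload for ECS's CreateService API. Again this is a minimal illustration under assumed names (`nginx-web` as both service name and task definition family, a cluster called `default`); with boto3 the dict could be passed to `ecs.create_service(**svc)`:

```python
def nginx_service_request(cluster="default", desired_count=3):
    """Build a CreateService request: keep desired_count nginx tasks running."""
    return {
        "cluster": cluster,
        "serviceName": "nginx-web",
        "taskDefinition": "nginx-web",   # family, or "family:revision" to pin one
        "desiredCount": desired_count,   # ECS restarts tasks to hold this number
    }

svc = nginx_service_request()
```

`desiredCount` is the knob the service scheduler defends: if a task dies, ECS launches a replacement so the running count returns to three.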
As shown in the screenshot below, an nginx service is defined with three tasks in the cluster, and each of those tasks is running.
This concludes our look at the EC2 Container Service that AWS provides for Docker. Pairing the theory above with hands-on practice is the best way to learn, so go and try it.
© 2024 shulou.com SLNews company. All rights reserved.