2025-01-17 Update From: SLTechnology News & Howtos > Servers
Shulou (Shulou.com) 06/03 Report --
docker run starts a container. The command to execute when the container starts can be specified in three ways:
(1) CMD command
(2) ENTRYPOINT command
(3) Specify in docker run command line
Whichever way the startup command is specified, we often need to enter a running container to do some work, such as viewing logs, debugging, or starting other processes. There are two ways to enter a container: attach and exec.
docker attach
First start a container that keeps running in the background:
docker run -d ubuntu /bin/bash -c "while true; do sleep 1; echo i_am_a_container; done"
Next, check the container's ID:
docker ps -a
CONTAINER ID is the container's identifier; it is the first 12 characters of the full container ID, also known as the short ID.
IMAGE is the base image the container was created from.
NAMES is the name of the container, which can be set explicitly with the --name parameter when starting the container.
docker attach lets you attach to the terminal of the container's startup command.
docker attach 1e5cc7e3b22b
Attach to the container's startup-command terminal through the short ID; you will then see the echo output printed once per second.
You can detach from the attach terminal with Ctrl+p followed by Ctrl+q.
docker exec
Enter the same container via docker exec
docker exec -it 1e5cc7e3b22b bash
Description:
1. -it opens a terminal in interactive mode and executes bash.
2. You can then run commands as on an ordinary Linux system and see the processes started in the container.
3. Type exit to leave the container.
The docker exec command format is as follows:
docker exec -it <container> bash|sh
attach vs exec
The main differences are as follows:
1. attach goes directly into the terminal of the container's startup command and does not start a new process.
2. exec opens a new terminal in the container and can start new processes.
3. If you want to view the output of the startup command directly in the terminal, use attach; in other cases, use exec.
Best practices for running containers
Containers can be roughly divided into two categories: service containers and tool containers
The former runs as a daemon and provides services to the outside world, such as web servers and databases. It is appropriate to start such containers in the background with -d; to troubleshoot, enter the container with exec -it.
The latter provides a temporary working environment and is usually started with run -it.
Tool containers mostly use base images such as busybox, debian, and ubuntu.
In summary:
(1) When the command specified by CMD, ENTRYPOINT, or the docker run command line finishes running, the container stops.
(2) Start a container in the background with the -d parameter.
(3) Enter a container and execute commands with exec -it.
Container lifecycle
stop/start/restart container
docker stop: stops a running container. A container is essentially a process on the Docker host, and this command sends a SIGTERM signal to that process. You can also stop a container immediately with docker kill, which sends SIGKILL.
docker start: starts a stopped container, preserving the parameters it was first started with.
docker restart: restarts a container, equivalent to stop followed by start.
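As a sketch, the lifecycle commands above can be exercised in sequence; the container name web is a hypothetical example:

```shell
# Hypothetical long-running container named "web"
docker run -d --name web ubuntu /bin/bash -c "while true; do sleep 1; done"

docker stop web      # sends SIGTERM; falls back to SIGKILL after a timeout
docker start web     # starts again with the parameters from the original run
docker restart web   # equivalent to stop followed by start
docker kill web      # sends SIGKILL for an immediate stop
```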
pause/unpause container
docker pause: pauses a container, which is useful, for example, when taking a snapshot of the container's file system.
docker unpause: resumes a paused container. A paused container consumes no CPU resources until it is resumed with docker unpause.
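A quick sketch of pause/unpause, again assuming a hypothetical container named web:

```shell
docker pause web     # freezes all processes in the container; no CPU is consumed
docker ps            # the STATUS column shows (Paused)
docker unpause web   # processes resume where they left off
```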
delete a container
docker rm: after using Docker for a while, a large number of exited containers may accumulate on the host. They still occupy the host's file system resources, and docker rm deletes them.
If you want to delete multiple containers at once, you can use the following command
docker rm -v $(docker ps -aq -f status=exited)
State mechanism for containers
The container's lifecycle state machine ties the commands above together: create puts a container in the created state, run/start make it running, pause moves it to paused, stop/kill move it to exited, and rm removes it.
resource constraints
A Docker host runs several containers, each of which needs CPU, memory, and IO resources. Docker provides mechanisms to prevent one container from consuming so many resources that it affects other containers or the host itself.
memory quota
Similar to the operating system, the memory available to containers consists of two parts: physical memory and swap. Docker controls the amount of container memory used through two sets of parameters:
(1) -m or --memory: sets the limit on memory usage.
(2) --memory-swap: sets the limit on memory plus swap usage.
For example:
docker run -m 200M --memory-swap=300M ubuntu
The container is allowed to use up to 200 MB of memory and 100 MB of swap, since --memory-swap sets the total of memory plus swap. By default, both parameters are -1, meaning there is no limit on the container's memory and swap usage.
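Because --memory-swap is the total of memory plus swap, the swap actually available follows by subtraction; a minimal sketch of that arithmetic:

```shell
# From: docker run -m 200M --memory-swap=300M ubuntu
memory=200                          # MB, set by -m
memory_swap=300                     # MB, set by --memory-swap (memory + swap total)
swap=$((memory_swap - memory))      # swap actually available to the container
echo "swap available: ${swap}M"     # prints: swap available: 100M
```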
For testing we use the progrium/stress image, which can run stress tests against containers.
docker run -it -m 200M --memory-swap=300M progrium/stress --vm 1 --vm-bytes 280M
Description:
1. --vm 1: start one memory worker thread.
2. --vm-bytes 280M: allocate 280 MB of memory per thread.
Process:
1. Allocate 280 MB of memory.
2. Release the 280 MB of memory.
3. Allocate 280 MB of memory again.
4. Release it again.
5. Repeat this cycle continuously.
CPU limits
By default, all containers can use the host's CPU resources equally, with no restrictions.
Docker can set the weight of CPU a container uses with -c or --cpu-shares; if not specified, the default value is 1024.
Unlike the memory quota, the CPU share set with -c is not an absolute amount of CPU resources but a relative weight. The CPU a container is ultimately allocated depends on the ratio of its share to the sum of all containers' shares. In other words, CPU shares set a container's relative priority for CPU.
docker run --name "container_A" -c 1024 ubuntu && docker run --name "container_B" -c 512 ubuntu
The CPU share of container_A is 1024, twice that of container_B. When both containers need CPU resources, the former gets twice as much CPU time as the latter.
Note that this weighted allocation only takes effect when CPU resources are scarce. If container_A is idle, container_B can still use all available CPU, so that CPU resources are fully utilized.
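Under contention, each container's CPU fraction is its share divided by the sum of all shares; a small sketch of that calculation:

```shell
# cpu-shares are relative weights: allocation = share / sum(all shares)
share_a=1024
share_b=512
total=$((share_a + share_b))
pct_a=$((100 * share_a / total))    # integer percent for container_A
pct_b=$((100 * share_b / total))    # integer percent for container_B
echo "container_A: ${pct_a}%  container_B: ${pct_b}%"   # prints: container_A: 66%  container_B: 33%
```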
Block IO bandwidth limitation
Block IO is another way to limit container resources. It refers to disk reads and writes. Docker can control the bandwidth of a container's disk reads and writes by setting weights and by limiting bps and iops.
IO weight
By default, all containers can read and write disks equally. You can change a container's block IO priority by setting the --blkio-weight parameter.
--blkio-weight is similar to --cpu-shares: it sets a relative weight, with a default of 500.
docker run -it --name container_A --blkio-weight 600 ubuntu && docker run -it --name container_B --blkio-weight 300 ubuntu
With these settings, container_A gets twice the disk read/write bandwidth of container_B.
Limit bps and iops
bps is bytes per second, the amount of data read or written per second.
iops is I/O operations per second, the number of I/O operations per second.
A container's bps and iops can be controlled with the following parameters:
--device-read-bps: limits the read bps of a device.
--device-write-bps: limits the write bps of a device.
--device-read-iops: limits the read iops of a device.
--device-write-iops: limits the write iops of a device.
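For example, write bandwidth to a device can be capped like this; the device path /dev/sda is an assumption for illustration:

```shell
# Limit writes to /dev/sda to 30 MB per second (hypothetical device path)
docker run -it --device-write-bps /dev/sda:30MB ubuntu
# Inside the container, the limit can be observed with a direct-IO write,
# since oflag=direct bypasses the page cache:
#   dd if=/dev/zero of=/tmp/test.out bs=1M count=100 oflag=direct
```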
cgroup and namespace
cgroup and namespace are the two most important technologies underpinning container infrastructure: cgroup implements resource limits, and namespace implements resource isolation.
cgroup
cgroup is short for Control Group. The Linux operating system uses cgroups to set quotas on the CPU, memory, and IO resources used by processes. The --cpu-shares, -m, and --device-write-bps options we saw earlier actually configure cgroups.
The cgroup configuration can be found under /sys/fs/cgroup.
docker run -it --cpu-shares 512 progrium/stress -c 1
Record the container ID. In the /sys/fs/cgroup/cpu/docker directory, Linux creates a cgroup directory for each container, named after the container's long ID:
The directory contains all cgroup configurations associated with the cpu, and the file cpu.shares holds the configuration of--cpu-shares, with a value of 512.
Similarly, /sys/fs/cgroup/memory/docker and /sys/fs/cgroup/blkio/docker hold the cgroup configurations for memory and block IO.
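As a sketch, the --cpu-shares value can be read back from the corresponding cgroup file; the path assumes the cgroup v1 layout described above:

```shell
# Start a container and capture its long ID (hypothetical example)
ID=$(docker run -d --cpu-shares 512 ubuntu sleep 3600)
# cpu.shares under the container's cgroup directory holds the configured weight
cat /sys/fs/cgroup/cpu/docker/"$ID"/cpu.shares   # expected: 512
```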
namespace
Within each container, we can see resources such as file systems, network cards, etc., which appear to be the container's own. For example, each container thinks it has a separate NIC, even if there is only one physical NIC on the host. This is a great way to make the container more like a standalone computer.
Linux implements this technique in the form of namespaces. Namespace manages globally unique resources in a host and allows each container to feel like it is the only one using it. In other words, namespaces isolate resources between containers.
Linux uses six namespaces for six kinds of resources: Mount, UTS, IPC, PID, Network, and User.
Mount namespace
Mount namespace makes it appear that the container owns the entire file system.
The container has its own / directory and can execute mount and umount commands. Of course, these actions only take effect inside the current container and do not affect the host or other containers.
UTS namespace
Simply put, the UTS namespace lets a container have its own hostname. By default, a container's hostname is its short ID; it can be set with the -h or --hostname parameter.
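A minimal sketch of the UTS namespace in action, setting the hostname with -h:

```shell
# Without -h, hostname would print the short container ID
docker run --rm -h my-container ubuntu hostname   # prints: my-container
```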
IPC namespace
IPC namespaces let containers have their own shared memory and semaphores for interprocess communication, without mixing them with the IPC of the host or other containers.
PID namespace
PID namespaces give each container its own independent set of process IDs.
Network namespace
Network namespace allows containers to have their own independent network cards, IP, routing and other resources. We'll discuss this in detail later in the networking chapter.
User namespace
User namespaces enable containers to manage their own users, and hosts cannot see users created in containers.
frequently used commands
The following are common container commands
create: create a container
run: create and run a container
pause: pause a container
unpause: resume a paused container
stop: send SIGTERM to stop a container
kill: send SIGKILL to stop a container quickly
start: start a stopped container
restart: restart a container
attach: attach to the terminal of the container's startup process
exec: start a new process in the container, usually with the -it parameters
logs: display the console output of the container's startup process; -f prints continuously
rm: remove the container from disk