Getting Started with Docker: Container Networking

Originally published at: arppinging.com

Contents:

1. Network namespaces
   1) The ip command
   2) Example
2. Network models
3. Common network operations in containers
   1) Specifying the network mode
   2) Specifying the container's DNS address and hosts entries
   3) Port mapping
4. Bridge configuration

1. Network namespaces

1) The ip command

First check whether the package that provides the ip command (iproute) is installed:

[root@node2 ~]# rpm -qa iproute
iproute-3.10.0-87.el7.x86_64
[root@node2 ~]#

1. The ip netns command

ip netns manages network namespaces. Run ip netns help to see its usage:

[root@node2 ~]# ip netns help
Usage: ip netns list
       ip netns add NAME
       ip netns set NAME NETNSID
       ip [-all] netns delete [NAME]
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...
       ip netns monitor
       ip netns list-id
[root@node2 ~]#

ip netns list: list namespaces
ip netns add NAME: add a namespace
ip netns set NAME NETNSID: assign an id to a namespace
ip netns exec NAME command: execute a command inside a namespace

2. The ip link command

The ip link command can be used to create virtual NIC (veth) pairs. If no NIC has been added to a namespace, only the lo interface exists in it.

[root@node2 ~]# ip link help
Usage: ip link add [link DEV] [ name ] NAME
                   [ txqueuelen PACKETS ]
                   [ address LLADDR ]
                   [ broadcast LLADDR ]
                   [ mtu MTU ]
                   [ numtxqueues QUEUE_COUNT ]
                   [ numrxqueues QUEUE_COUNT ]
                   type TYPE [ ARGS ]

       ip link delete { DEVICE | dev DEVICE | group DEVGROUP } type TYPE [ ARGS ]

       ip link set { DEVICE | dev DEVICE | group DEVGROUP }
                   [ { up | down } ]
                   [ type TYPE ARGS ]
                   [ arp { on | off } ]
                   [ dynamic { on | off } ]
                   [ multicast { on | off } ]
                   [ allmulticast { on | off } ]
                   [ promisc { on | off } ]
                   [ trailers { on | off } ]
                   [ txqueuelen PACKETS ]
                   [ name NEWNAME ]
                   [ address LLADDR ]
                   [ broadcast LLADDR ]
                   [ mtu MTU ]
                   [ netns { PID | NAME } ]
                   [ link-netnsid ID ]
                   [ alias NAME ]
                   [ vf NUM [ mac LLADDR ]
                            [ vlan VLANID [ qos VLAN-QOS ] ]
                            [ rate TXRATE ]
                            [ max_tx_rate TXRATE ]
                            [ min_tx_rate TXRATE ]
                            [ spoofchk { on | off } ]
                            [ query_rss { on | off } ]
                            [ state { auto | enable | disable } ] ]
                            [ trust { on | off } ] ]
                   [ master DEVICE ]
                   [ nomaster ]
                   [ addrgenmode { eui64 | none } ]
                   [ protodown { on | off } ]

       ip link show [ DEVICE | group GROUP ] [ up ] [ master DEV ] [ type TYPE ]

       ip link help [ TYPE ]

TYPE := { vlan | veth | vcan | dummy | ifb | macvlan | macvtap |
          bridge | bond | ipoib | ip6tnl | ipip | sit | vxlan |
          gre | gretap | ip6gre | ip6gretap | vti | nlmon |
          bond_slave | geneve | bridge_slave | macsec }
[root@node2 ~]#

ip link show: show all links
ip link add: create a virtual NIC (veth) pair
ip link set: change link settings (e.g. rename a link or move it into a namespace)

2) Example

1. Create two namespaces, R1 and R2:

[root@node2 ~]# ip netns add R1
[root@node2 ~]# ip netns add R2
[root@node2 ~]# ip netns list
R2
R1
[root@node2 ~]#

2. View the IP addresses in namespace R1 (plain ifconfig shows nothing because no interface is up; ifconfig -a also lists interfaces that are down):

[root@node2 ~]# ip netns exec R1 ifconfig
[root@node2 ~]# ip netns exec R1 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@node2 ~]#

3. Create a veth pair, veth2.1 and veth2.2:

[root@node2 ~]# ip link add name veth2.1 type veth peer name veth2.2
[root@node2 ~]# ip link show | grep veth
5: veth2.2@veth2.1: mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
6: veth2.1@veth2.2: mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
[root@node2 ~]#

4. Move veth2.1 into network namespace R1:

[root@node2 ~]# ip link set dev veth2.1 netns R1
[root@node2 ~]# ip netns exec R1 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth2.1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether c6:06:a4:0f:ba:91  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@node2 ~]#

5. Inside R1, rename veth2.1 to eth0:

[root@node2 ~]# ip netns exec R1 ip link set dev veth2.1 name eth0
[root@node2 ~]# ip netns exec R1 ifconfig -a
eth0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether c6:06:a4:0f:ba:91  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@node2 ~]#

6. Assign an IP address to eth0 in namespace R1 and bring it up:

[root@node2 ~]# ip netns exec R1 ifconfig eth0 192.168.0.1/24 up
[root@node2 ~]# ip netns exec R1 ifconfig
eth0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.0.1  netmask 255.255.255.0  broadcast 192.168.0.255
        ether c6:06:a4:0f:ba:91  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@node2 ~]#

7. On the host, assign an IP address to veth2.2 (the peer of veth2.1) and bring it up:

[root@node2 ~]# ip link show | grep veth
5: veth2.2@if6: mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
[root@node2 ~]# ifconfig veth2.2 192.168.0.2/24 up
[root@node2 ~]# ifconfig veth2.2
veth2.2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.2  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::c873:1fff:fe9e:90f6  prefixlen 64  scopeid 0x20<link>
        ether ca:73:1f:9e:90:f6  txqueuelen 1000  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26  bytes 3856 (3.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@node2 ~]#

8. From namespace R1, test whether the host-side address can be pinged:

[root@node2 ~]# ip netns exec R1 ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from 192.168.0.2: icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from 192.168.0.2: icmp_seq=3 ttl=64 time=0.039 ms
^C
--- 192.168.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.032/0.040/0.051/0.010 ms
[root@node2 ~]#
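Once the test is done, the experiment can be cleaned up by deleting the namespaces; a minimal sketch, assuming the names R1 and R2 created above:

# remove the namespaces created for the experiment
ip netns delete R1
ip netns delete R2
# confirm nothing is left behind
ip netns list
ip link show | grep veth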

2. Network models

1. Closed container: has only the lo interface.
2. Bridged container: the default mode; the container has a lo interface and an eth0 interface for communicating with the outside world.
3. Joined containers: two containers share namespaces such as net and ipc.

Creating joined containers:

[root@localhost ~]# docker run --name b1 -it --rm busybox
/ #

[root@localhost ~]# docker run --name b2 --network container:b1 -it --rm busybox
/ #

Looking at the network settings of b1 and b2, you will find that their IP address is the same; see the sketch below.
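A quick check, as a sketch, run in the two interactive shells opened above (busybox's httpd and wget applets are used here purely for illustration):

/ # ip addr show eth0             # in b1: note the MAC and IP address
/ # ip addr show eth0             # in b2: the very same eth0, MAC and IP appear
/ # echo hello > /tmp/index.html  # in b1: serve a test page on port 80
/ # httpd -h /tmp
/ # wget -qO- 127.0.0.1           # in b2: prints "hello", reachable over the shared lo interface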

3. Common network operations in containers

1) Specifying the network mode

The network mode is selected with the --network option.

[root@localhost ~]# docker network help
Usage:  docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.
[root@localhost ~]#

Run container t1 with its network mode explicitly set to bridge:

[root@localhost ~]# docker run --name t1 -it --network bridge --rm busybox
/ # ip add
1: lo: mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
27: eth0@if28: mtu 1500 qdisc noqueue
    link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #
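On the host, the bridge network that handed out 192.168.1.2 can be examined; a sketch (the exact subnet depends on the bip setting discussed in section 4):

docker network inspect bridge    # subnet, gateway and currently connected containers
ip link show master docker0      # veth peers plugged into the docker0 bridge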

2) Specifying the container's DNS address and hosts entries

View the hosts file of container t1:

/ # cat /etc/hosts
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
192.168.1.2     f2fb5f32bdb2
/ #

View the DNS server address of container t1:

/ # cat /etc/resolv.conf
nameserver 8.8.8.8
/ #

Specify the hostname, the DNS server and extra hosts entries when creating the container:

[root@localhost ~]# docker run --name t1 --hostname t1 --add-host www.arppinging.com:1.1.1.1 --dns 114.114.114.114 -it --network bridge --rm busybox
/ # cat /etc/resolv.conf
nameserver 114.114.114.114
/ # cat /etc/hosts
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
1.1.1.1         www.arppinging.com
192.168.1.2     t1
/ #
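To confirm the options took effect, name resolution can be tested inside the container; a sketch (busybox resolves names through /etc/hosts first, then the nameserver in /etc/resolv.conf):

/ # ping -c 1 www.arppinging.com    # resolved to 1.1.1.1 from the --add-host entry
/ # nslookup nginx.org              # queried against 114.114.114.114 from --dns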

3) Port mapping

If an application running inside a container needs to be reachable from outside, that can be done in one of the following ways:

1. Use host network mode
2. Port mapping

Run the container in host network mode:

[root@localhost ~]# docker run --name t1 -it -d --network host --rm nginx
524349e018aabe9702c3f033cdd28f92c8970d41632a90820356474dcf843e13
[root@localhost ~]#

Access the container's service from node2 (the default nginx welcome page is returned; the HTML markup is trimmed below):

[root@node2 ~]# curl 192.168.100.75
...
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
[root@node2 ~]#
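Since host mode means the container uses the host's own network namespace, nginx listens directly on the host's port 80. A sketch of verifying this on the docker host:

ss -tnlp | grep ':80 '                               # nginx bound to *:80 on the host
docker inspect -f '{{.HostConfig.NetworkMode}}' t1   # prints: host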

Port mapping

The -p option has several forms:

-p <containerPort>: maps the specified container port to a dynamic port on all addresses of the host

[root@localhost ~]# docker run --name t1 --hostname t1 -it --rm -d -p 80 nginx
a9ed176632769450e1a652ae45461680a3e48d9af6b91da2c2dfd20dfdb6f727

View the mapping:

[root@localhost ~]# docker port t1
80/tcp -> 0.0.0.0:32768
[root@localhost ~]#
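Behind the scenes the mapping is typically implemented as a DNAT rule that the docker daemon adds to the nat table; it can be inspected on the docker host, as a sketch:

iptables -t nat -nL DOCKER    # DNAT rule: tcp dpt:32768 -> <container ip>:80
docker port t1 80             # query the mapping for one specific container port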

Fetch the page from node2 (again the nginx welcome page, markup trimmed):

[root@node2 ~]# curl 192.168.100.75:32768
...
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
[root@node2 ~]#

-p <hostPort>:<containerPort>: maps the container port to the specified host port on all addresses of the host

[root@localhost ~]# docker run --name t1 --hostname t1 -it --rm -d -p 80:80 nginx
9083bc33157f01b3b2e0d4d3acd2da7fc2eba2d976f0d3cf2b99a987fef8a6df
[root@localhost ~]# docker port t1
80/tcp -> 0.0.0.0:80
[root@localhost ~]#

-p <hostIP>::<containerPort>: maps the specified container port to a dynamic port on the specified host address

[root@localhost ~]# docker run --name t1 --hostname t1 -it --rm -d -p 192.168.100.75::80 nginx
1fefd9bde32a157e24eb7838bd349d196f860f6017ba1154125e3a1b8893afce
[root@localhost ~]# docker port t1
80/tcp -> 192.168.100.75:32768
[root@localhost ~]#

-p <hostIP>:<hostPort>:<containerPort>: maps the specified container port to the specified port on the specified host address

[root@localhost ~]# docker run --name t1 --hostname t1 -it --rm -d -p 192.168.100.75:80:80 nginx
fbedd72124302f2b95de33d3799cf44a236e2c5e475358e868b114c8a0faa2e6
[root@localhost ~]# docker port t1
80/tcp -> 192.168.100.75:80
[root@localhost ~]#

4. Bridge configuration

Modify the IP address and other settings of the docker0 bridge.

Stop the docker service:

[root@localhost ~]# systemctl stop docker
[root@localhost ~]#

Edit the docker daemon configuration file /etc/docker/daemon.json:

{
  "bip": "192.168.1.1/24",
  "fixed-cidr": "10.20.0.0/16",
  "fixed-cidr-v6": "2001:db8::/64",
  "mtu": 1500,
  "default-gateway": "10.20.1.1",
  "default-gateway-v6": "2001:db8:abcd::89",
  "dns": ["10.20.1.2", "10.20.1.3"]
}

The key option is bip (bridge IP); it specifies the IP address of the docker0 bridge itself, and most of the other values can be derived from it.

Start the service:

[root@localhost ~]# systemctl start docker
[root@localhost ~]#
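After the restart, the new bip should be visible on docker0; a quick check (the full ip add listing further below confirms the same thing):

ip addr show docker0 | grep 'inet '             # expect 192.168.1.1/24
docker network inspect bridge | grep -i subnet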

Create a new bridge network:

[root@localhost ~]# docker network create -d bridge --subnet "10.1.1.0/24" --gateway "10.1.1.1" mybr0
75e5401680b9790d5fa91e688271a4f7722ed7e7cb5a0d6ef91a475d25dd0329
[root@localhost ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8247c91941d0        bridge              bridge              local
6b108679bb90        host                host                local
75e5401680b9        mybr0               bridge              local
fbeb24fe71fb        none                null                local

[root@localhost ~]# ip add
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1a:4a:16:01:69 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.75/24 brd 192.168.100.255 scope global dynamic eth0
       valid_lft 80748sec preferred_lft 80748sec
    inet6 fe80::46bb:80cd:da25:717/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:06:89:69 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:06:89:69 brd ff:ff:ff:ff:ff:ff
5: docker0: mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:33:82:61:44 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:33ff:fe82:6144/64 scope link
       valid_lft forever preferred_lft forever
22: br-75e5401680b9: mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:8f:cd:19:40 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.1/24 brd 10.1.1.255 scope global br-75e5401680b9
       valid_lft forever preferred_lft forever
[root@localhost ~]#
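The new bridge can be examined the same way as the default one; a sketch:

docker network inspect mybr0 | grep -iE 'subnet|gateway'
ip link show type bridge      # docker0 plus the new br-75e5401680b9 device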

Create container t1 and attach it to the mybr0 network:

[root@localhost ~]# docker run --name t1 -it --network mybr0 --rm busybox
/ # ip add
1: lo: mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
23: eth0@if24: mtu 1500 qdisc noqueue
    link/ether 02:42:0a:01:01:02 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.2/24 brd 10.1.1.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #

Create container t2 on the default network:

[root@localhost ~]# docker run --name t2 -it --rm busybox
/ # ip add
1: lo: mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
57: eth0@if58: mtu 1500 qdisc noqueue
    link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #

Can containers on the two different bridges communicate with each other?

First make sure that IP forwarding is enabled in the kernel:

[root@localhost ~]# cat /proc/sys/net/ipv4/ip_forward
1
[root@localhost ~]#
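Here forwarding is already on (the file reads 1; the docker daemon normally enables it when it starts). If it reads 0, it can be switched on and persisted, as a sketch:

sysctl -w net.ipv4.ip_forward=1                      # enable immediately
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf   # persist across reboots
sysctl -p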

Test by pinging t1 (10.1.1.2 on mybr0) from t2 (192.168.1.2 on the default bridge):

/ # ip add
1: lo: mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
57: eth0@if58: mtu 1500 qdisc noqueue
    link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 10.1.1.2
PING 10.1.1.2 (10.1.1.2): 56 data bytes
64 bytes from 10.1.1.2: seq=0 ttl=63 time=0.228 ms
64 bytes from 10.1.1.2: seq=1 ttl=63 time=0.185 ms
^C
--- 10.1.1.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.185/0.206/0.228 ms
/ #

If the ping does not work, check the firewall (iptables) rules and related settings, as sketched below.
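In particular, Docker installs isolation rules in the FORWARD chain that can drop traffic between containers on different bridges; a sketch of reviewing them (the chain is called DOCKER-ISOLATION on older releases and DOCKER-ISOLATION-STAGE-1/2 on newer ones):

iptables -vnL FORWARD
iptables -vnL DOCKER-ISOLATION    # or DOCKER-ISOLATION-STAGE-1 / DOCKER-ISOLATION-STAGE-2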
