Network Policy Based on Calico

2025-01-19 Update From: SLTechnology News&Howtos


I. Brief introduction

Calico is a pure layer-3 approach that provides multi-host communication for OpenStack virtual machines and Docker containers. Calico does not use overlay networks such as flannel or the libnetwork overlay network driver; it is a pure layer-3 solution that uses virtual routing instead of virtual switching, and each virtual router propagates reachability information (routes) to the rest of the data center via the BGP protocol.

Why did Calico networks choose BGP?

Reference address: https://blog.51cto.com/weidawei/2152319

We use BGP in Calico networks to announce routes to data endpoints for the following reasons:

(1) BGP is a simple routing protocol.

(2) It has established best practices in the current industry.

(3) It is the only protocol that can support the scale of Calico networks.

1. Principle

As shown in the following figure, a packet is routed from the source container to the source host, across the data center, then to the destination host, and finally into the destination container.

Throughout this process, routing and forwarding are always carried out according to iptables rules and the host routing table; there is no packet encapsulation or decapsulation, which makes it much faster than flannel.

2. Framework

Calico includes the following important components: Felix, etcd, BGP Client, and BGP Route Reflector. Each of these components is described below.

(1) Felix: mainly responsible for routing configuration and the configuration and distribution of ACL rules; it runs on every node.

(2) etcd: a distributed key-value store mainly responsible for the consistency of network metadata; it ensures the accuracy of the Calico network state and can be shared with Kubernetes.

(3) BGP Client (BIRD): mainly responsible for distributing the routes that Felix writes into the kernel to the rest of the Calico network, ensuring effective communication between workloads.

(4) BGP Route Reflector (BIRD): used in large-scale deployments; it abandons the full-mesh mode in which all nodes are interconnected and performs centralized route distribution through one or more BGP Route Reflectors.

II. Service building

1. Environmental preparation

1) system environment

Hostname   System       Host IP          Services
master01   CentOS 7.4   172.169.18.210   Docker 1.13.1, etcd 3.2.22, Calico 2.6.10
node01     CentOS 7.4   172.169.18.162   Docker 1.13.1, etcd 3.2.22, Calico 2.6.10
node02     CentOS 7.4   172.169.18.180   Docker 1.13.1, etcd 3.2.22, Calico 2.6.10

2) temporarily shut down the firewall and selinux services

3) configure hosts and add the following

# vim /etc/hosts

172.169.18.210 master01

172.169.18.162 node01

172.169.18.180 node02
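The three entries above can be appended in one step. A minimal sketch, writing to a scratch copy rather than the real /etc/hosts:

```shell
# Sketch: append the three hosts entries to a scratch copy of /etc/hosts
# (swap HOSTS_COPY for /etc/hosts when doing this for real)
HOSTS_COPY=$(mktemp)
cat >> "$HOSTS_COPY" <<'EOF'
172.169.18.210 master01
172.169.18.162 node01
172.169.18.180 node02
EOF
grep master01 "$HOSTS_COPY"
```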

2. Install the docker service (three nodes)

1) yum install docker1.13 version

# yum install docker -y

2) modify the NIC information of the default docker0 bridge

# vim /etc/docker/daemon.json

# append the following parameter

"bip": "172.169.10.1/24"

Other nodes:

Node01 bip: 172.169.20.1/24

Node02 bip: 172.169.30.1/24

Restart the docker service
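Each node gets its own bip value. A sketch of writing the file and checking it is well-formed JSON (using a temp file and python3 here; the real path is /etc/docker/daemon.json):

```shell
# Sketch: write daemon.json for one node and validate it as JSON
# BIP per node (assumed from above): master01 172.169.10.1/24,
# node01 172.169.20.1/24, node02 172.169.30.1/24
BIP="172.169.10.1/24"            # change per node
DAEMON_JSON=$(mktemp)            # real target: /etc/docker/daemon.json
cat > "$DAEMON_JSON" <<EOF
{
  "bip": "${BIP}"
}
EOF
python3 -m json.tool "$DAEMON_JSON" >/dev/null && echo "daemon.json is valid JSON"
```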

3. Install etcd service (etcd should be installed on all three nodes to form an etcd cluster environment)

1) yum install etcd3.2.22 version

# yum install etcd -y

2) modify etcd configuration file

# vim /etc/etcd/etcd.conf

# master01 node
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://127.0.0.1:4001"
ETCD_NAME="master01"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_CLUSTER="master01=http://172.169.18.210:2380,node01=http://172.169.18.162:2380,node02=http://172.169.18.180:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# node01 node
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://127.0.0.1:4001"
ETCD_NAME="node01"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_CLUSTER="master01=http://172.169.18.210:2380,node01=http://172.169.18.162:2380,node02=http://172.169.18.180:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# node02 node
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_NAME="node02"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_CLUSTER="master01=http://172.169.18.210:2380,node01=http://172.169.18.162:2380,node02=http://172.169.18.180:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
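Aside from minor data-directory differences, the three per-node files differ only in ETCD_NAME, so they can be generated in a loop. A sketch that writes normalized copies into a scratch directory (the real target is /etc/etcd/etcd.conf on each host):

```shell
# Sketch: generate one etcd.conf per node into a scratch directory
OUT=$(mktemp -d)
CLUSTER="master01=http://172.169.18.210:2380,node01=http://172.169.18.162:2380,node02=http://172.169.18.180:2380"
for name in master01 node01 node02; do
  cat > "$OUT/etcd.conf.$name" <<EOF
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://127.0.0.1:4001"
ETCD_NAME="$name"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_CLUSTER="$CLUSTER"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
done
ls "$OUT"
```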

Start etcd on all three nodes with the configuration above. After startup, the cluster enters leader election; if a large number of timeouts occur, check that the host firewall is off and that the hosts can reach each other on port 2380. Once the cluster is established, check its status with the following commands.

3) View the cluster status on any node

# etcdctl member list

When the master01 node is shut down, leadership automatically switches to node02.

4) check the health status of the cluster (it can be seen on any of the three nodes)

# etcdctl cluster-health

At this point, the etcd cluster has been built!
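`etcdctl member list` prints one line per member, so counting `name=` fields is a quick sanity check that all three joined and exactly one is leader. A sketch over sample output (the member IDs and line layout below are illustrative, not captured from a real cluster):

```shell
# Sketch: count members and leaders in (illustrative) `etcdctl member list` output
MEMBERS='8e9e05c52164694d: name=master01 peerURLs=http://172.169.18.210:2380 clientURLs=http://0.0.0.0:2379 isLeader=true
1a2b3c4d5e6f7a8b: name=node01 peerURLs=http://172.169.18.162:2380 clientURLs=http://0.0.0.0:2379 isLeader=false
9f8e7d6c5b4a3f2e: name=node02 peerURLs=http://172.169.18.180:2380 clientURLs=http://0.0.0.0:2379 isLeader=false'
echo "$MEMBERS" | grep -c 'name='          # member count
echo "$MEMBERS" | grep -c 'isLeader=true'  # leader count
```

On a live cluster you would pipe `etcdctl member list` itself instead of the sample variable.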

4. Build the Calico service (the Docker-integrated v2.6 series)

Note: the commands differ somewhat between old and new versions — pay attention!

Calico implements a Docker network plug-in that provides routing and advanced network policy for Docker containers.

Calico address: https://github.com/projectcalico/calicoctl/tags

1) download

# wget -O /usr/local/bin/calicoctl https://github.com/projectcalico/calicoctl/releases/download/v1.6.4/calicoctl

# chmod +x /usr/local/bin/calicoctl

2) configure calicoctl-etcdv2 datastores

Official configuration address: https://docs.projectcalico.org/v2.6/reference/calicoctl/setup/etcdv2

# mkdir -p /etc/calico

# vim /etc/calico/calicoctl.cfg

# cat /etc/calico/calicoctl.cfg
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: http://172.169.18.210:2379,http://172.169.18.162:2379,http://172.169.18.180:2379

3) help information

# calicoctl --help

Start the Calico service

In a Docker environment, the Calico service runs as a container using the host's network configuration. All containers configured to use Calico communicate with each other through the calico node service.

Calico communicates with other hosts or networks through its own container on each host, the calico-node container, which includes the BIRD routing daemon, the Felix agent, and so on.

4) download the node image of calico on all three nodes

(You can download the image on one node, export it with docker save, copy it to the other nodes, and import it there with docker load; this is usually faster than running docker pull on every node.)

[root@master01 ~] # docker pull calico/node:v2.6.10

[root@master01 ~] # docker pull calico/node-libnetwork

5.1) start the node service on three nodes (method 1)

# master01 node (specify the version with --node-image=calico/node:v2.6.10)

# calicoctl node run --node-image=calico/node:v2.6.10 --ip=172.169.18.210

# node01

# calicoctl node run --node-image=calico/node:v2.6.10 --ip=172.169.18.162

# node02

# calicoctl node run --node-image=calico/node:v2.6.10 --ip=172.169.18.180

5.2) start the node service on three nodes (method 2)

# create system service file

# EnvironmentFile: the environment variables referenced throughout ExecStart, set in the /etc/calico/calico.env file

# ExecStartPre: delete the calico-node container if it already exists in the environment

# ExecStart options: "--net" sets the network mode; "--privileged" runs in privileged mode; "--name" sets the container name; "calico/node:v2.6.10" specifies the image (the default is "quay.io/calico/node:v2.6.10")

# ExecStop: stop the container

# -v /var/run/docker.sock:/var/run/docker.sock: this mapping is not given in the official documentation;

# without it, creating a container fails with "docker: Error response from daemon: failed to create endpoint test1 on network net1: NetworkDriver.CreateEndpoint: Network 44322b3b9b8c5eface703e1dbeb7e3755f47ede1761a72ea4cb7cec6d31ad2e5 inspection error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"

# that is, containers cannot be created on a calico-type network because the docker service's socket cannot be reached; the calico service must be given the path to the docker sock. See: https://github.com/projectcalico/calico/issues/1303

# vim /usr/lib/systemd/system/calico.service

[Unit]

Description=calico-node

After=docker.service

Requires=docker.service

[Service]

EnvironmentFile=/etc/calico/calico.env

ExecStartPre=-/usr/bin/docker rm -f calico-node
ExecStart=/usr/bin/docker run --net=host --privileged \
 --name=calico-node \
 -e NODENAME=${CALICO_NODENAME} \
 -e IP=${CALICO_IP} \
 -e IP6=${CALICO_IP6} \
 -e CALICO_NETWORKING_BACKEND=${CALICO_NETWORKING_BACKEND} \
 -e AS=${CALICO_AS} \
 -e NO_DEFAULT_POOLS=${CALICO_NO_DEFAULT_POOLS} \
 -e CALICO_LIBNETWORK_ENABLED=${CALICO_LIBNETWORK_ENABLED} \
 -e ETCD_ENDPOINTS=${ETCD_ENDPOINTS} \
 -e ETCD_CA_CERT_FILE=${ETCD_CA_CERT_FILE} \
 -e ETCD_CERT_FILE=${ETCD_CERT_FILE} \
 -e ETCD_KEY_FILE=${ETCD_KEY_FILE} \
 -v /var/log/calico:/var/log/calico \
 -v /run/docker/plugins:/run/docker/plugins \
 -v /var/run/docker.sock:/var/run/docker.sock \
 -v /lib/modules:/lib/modules \
 -v /var/run/calico:/var/run/calico \
 calico/node:v2.6.10

ExecStop=-/usr/bin/docker stop calico-node

Restart=on-failure

StartLimitBurst=3

StartLimitInterval=60s

[Install]

WantedBy=multi-user.target

# calico.env variable file

# vim /etc/calico/calico.env
ETCD_ENDPOINTS=http://172.169.18.210:2379,http://172.169.18.162:2379,http://172.169.18.180:2379
# when ssl/tls is enabled, point these at the relevant files
ETCD_CA_CERT_FILE=""
ETCD_CERT_FILE=""
ETCD_KEY_FILE=""
# when left blank, the host's hostname is used as the identifier, so make sure hostnames are unique
CALICO_NODENAME=""
CALICO_NO_DEFAULT_POOLS=""
# next hop of the route; when left blank it is auto-selected from the host's current interfaces; "autodetect" forces detection at every startup
CALICO_IP="172.169.18.210"
CALICO_IP6=""
# AS number; inherited from the global default when left blank
CALICO_AS=""
# enable the libnetwork driver
CALICO_LIBNETWORK_ENABLED=true
# routing backend: "bird", "gobgp" or "none"; default is "bird" ("gobgp" has no ipip mode)
CALICO_NETWORKING_BACKEND=bird
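Since the unit file dereferences these variables in ExecStart, a quick check that the essential ones are non-empty before starting the service can save a failed boot. A sketch over a scratch copy (the real file is /etc/calico/calico.env; the variable list checked here is my selection):

```shell
# Sketch: sanity-check the variables calico.service expects before starting it
ENV_COPY=$(mktemp)
cat > "$ENV_COPY" <<'EOF'
ETCD_ENDPOINTS=http://172.169.18.210:2379,http://172.169.18.162:2379,http://172.169.18.180:2379
CALICO_NETWORKING_BACKEND=bird
CALICO_LIBNETWORK_ENABLED=true
EOF
. "$ENV_COPY"
for var in ETCD_ENDPOINTS CALICO_NETWORKING_BACKEND CALICO_LIBNETWORK_ENABLED; do
  eval val=\$$var
  [ -n "$val" ] && echo "$var ok" || echo "$var MISSING"
done
```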

# start the service

# systemctl daemon-reload

# systemctl enable calico

# systemctl restart calico

# systemctl status calico

6) View status

You can check that calico-node has started on all three nodes

# docker ps -a

# ps -ef | grep calico

View node status information (available on all three nodes)

# calicoctl node status

5. Use calicoctl to create ipPool

Official reference address: https://docs.projectcalico.org/v2.6/reference/calicoctl/resources/ippool

Before starting more containers, we need to configure an IP address pool with the ipip and nat-outgoing options, so that containers with a valid configuration can access the Internet. Run the following commands:

Check calico's existing ip pool first (viewable on any node)

# calicoctl get ipPool -o wide

Parameter explanation:

# after the calico service starts, there is by default a 192.168.0.0/16 ipv4 address pool and a /64 ipv6 address pool; subsequent network allocation draws from these pools

# NAT: whether addresses obtained by containers can be NATed out through the host

# IPIP: ipip is a compromise overlay mechanism for host networks that do not fully support bgp; it creates a "tunl0" virtual interface on the host. When set to false, routing runs in pure bgp mode. Theoretically, ipip mode has lower network throughput than pure bgp mode. When set to true, there are two sub-modes: "ipip always" (pure ipip) and "ipip cross-subnet" (mixed ipip/bgp); the latter means bgp is used for routing within the same subnet and ipip across subnets.
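The cross-subnet decision above boils down to a mask comparison between the source and destination host IPs. A standalone sketch of that check (the function names are mine, not calico's):

```shell
# Sketch: same subnet -> plain bgp routing; different subnet -> ipip encapsulation
ip2int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }
route_mode() {  # route_mode SRC_HOST_IP DST_HOST_IP PREFIX_LEN
  local a b mask
  a=$(ip2int "$1"); b=$(ip2int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  if [ $(( a & mask )) -eq $(( b & mask )) ]; then echo bgp; else echo ipip; fi
}
route_mode 172.169.18.210 172.169.18.162 24   # same /24 -> bgp
route_mode 172.169.18.210 10.20.30.40 24      # different subnet -> ipip
```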

1) it can be defined on any node; ipPool is a global resource.

[root@master01 ~]# vim new-pool-1.yaml
apiVersion: v1
kind: ipPool
metadata:
  cidr: 172.169.10.0/24
spec:
  ipip:
    enabled: true
    mode: cross-subnet
  nat-outgoing: true
  disabled: false

2) create

# calicoctl create-f new-pool-1.yaml

In the same way, you can continue to add the network segments of the next two nodes, as follows
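The three per-node pools differ only in their CIDR, so the manifests can be generated in a loop. A sketch writing them into a scratch directory (file names are mine; CIDRs assumed from the docker0 bip settings earlier):

```shell
# Sketch: generate an ipPool manifest per node subnet
POOLDIR=$(mktemp -d)
i=1
for cidr in 172.169.10.0/24 172.169.20.0/24 172.169.30.0/24; do
  cat > "$POOLDIR/new-pool-$i.yaml" <<EOF
apiVersion: v1
kind: ipPool
metadata:
  cidr: $cidr
spec:
  ipip:
    enabled: true
    mode: cross-subnet
  nat-outgoing: true
EOF
  i=$((i + 1))
done
ls "$POOLDIR"
# then, on any node: for f in "$POOLDIR"/*.yaml; do calicoctl create -f "$f"; done
```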

3) delete the default ip address pool

# calicoctl delete ipPool 192.168.0.0/16

View the docker three-node network

4) Container network configuration

# modify the docker configuration file (on all three nodes)

# vim /usr/lib/systemd/system/docker.service

Add the following parameters:

-H unix:///var/run/docker.sock -H 0.0.0.0:2375 --cluster-store etcd://172.169.18.210:2379,172.169.18.162:2379,172.169.18.180:2379 \

Restart the service

# systemctl daemon-reload

# systemctl restart docker

# create a network (you can create any node)

# docker network create --driver calico --ipam-driver calico-ipam net1

# docker network ls

5) create a container test

# docker run --name web1 --privileged -itd --net=net1 centos:7.4.1708 /usr/sbin/init

(1) # View container information

# docker inspect web1

(2) change of network nodes

(3) host node routing table

# ip route show

(4) container routing table

# the container gateway "169.254.1.1" is a reserved link-local ip address; packets are sent to the gateway through the cali0 port

# to simplify network configuration, calico sets every container's gateway to this fixed reserved local address; the routing rules inside all containers are identical and never need dynamic updates

# after determining the next hop, the container queries the mac address of "169.254.1.1"

# docker exec web1 ip route show

# docker exec web1 ping 114.114.114.114

The arp table can be queried with "ip neigh show" (it needs to be triggered first, e.g. by the ping above). The mac address recorded for "169.254.1.1" is "7a:b2:59:f7:0c:91"; looking closely, this is the mac address of the host-side interface "calic255b5bfca1".

# docker exec web1 ip neigh show

The host-side interface paired with the container has no ip address configured. Whatever address an arp request from the container asks for, the interface answers with its own mac address, i.e. "arp proxy".

# subsequent packets from the container keep the same destination IP, but their destination mac becomes the address of the corresponding host interface; in other words, all packets are sent to the host, which then forwards them by ip address. calico enables this feature by default, which can be confirmed as follows:

# cat /proc/sys/net/ipv4/conf/cali7d1d9f06ed8/proxy_arp

(5) create a container web2 in the node02 node

# docker run --name web2 --privileged -itd --net=net1 centos:7.4.1708 /usr/sbin/init

# docker exec -ti web2 /bin/bash

web1 and web2 can now reach each other across hosts.

Additional knowledge points: Flannel vs. Calico

                  Flannel                                   Calico
Overlay scheme    1. udp  2. vxlan                          ipip (usable when the host network does not fully support bgp)
Host-gw scheme    host-gw (hosts must be in one subnet)     bgp
Network policy    not supported                             supported
IPAM              decentralized ipam not supported          decentralized ipam not supported

Performance (theoretical):

1. Host-gw-style modes perform better than overlay modes, even approaching direct host-to-host communication.

2. Flannel host-gw mode is roughly equivalent to calico bgp mode.

3. Flannel overlay mode is equivalent to, or slightly worse than, calico ipip mode; the ipip header is smaller than vxlan's.

4. Flannel udp mode is the worst: udp encapsulation is done in linux user space, while vxlan and ipip are encapsulated in kernel space (ipip is less secure than vxlan).
