Installation and summary of a kubernetes + calico binary deployment


I have deployed kubernetes many times, yet every deployment still runs into problems of one kind or another: some come from changes in a new version, some are pitfalls I have already stepped in before. Because nothing was documented, the pits are still there, and so am I.

This article deploys everything from binaries, which makes it easier to locate and solve problems.

Environment description:

Role      Modules                                                              IP          Remarks

Master    kube-apiserver, kube-scheduler, kube-controller-manager, etcd        10.8.8.27

Node1     docker, kubelet, kube-proxy, calico-kube-controllers, calico-node    10.8.8.28

Node2     docker, kubelet, kube-proxy, calico-node                             10.8.8.29

Part I: kubernetes installation

1. Environment preparation

1.1 Install docker

According to the documentation on the kubernetes website, the version used here (k8s 1.13) only supports docker 1.11 and later. I installed docker 1.13 with the traditional yum method: yum install docker.

Configure an image-accelerating registry mirror to avoid timeouts when pulling images later.

Add the following to the /etc/docker/daemon.json file:

{
  "registry-mirrors": ["http://68e02ab9.m.daocloud.io"]
}

Enable docker at boot: systemctl enable docker

Then start docker: systemctl start docker
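Before moving on, it is worth checking which cgroup driver docker is using, since the kubelet's --cgroup-driver flag configured later has to match it. A quick check:

# confirm docker is running and note its cgroup driver
systemctl status docker --no-pager
docker info 2>/dev/null | grep -i "cgroup driver"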

1.2 Kubernetes binary file preparation

Download the version you need from the kubernetes website. When I built this, the current version was v1.13, so download the binary release files for v1.13 from: https://kubernetes.io/docs/setup/release/notes/#downloads-for-v1-13-0. Newer releases no longer ship the server binaries directly in the main package; you have to download them with the script included in the release.

After decompressing the downloaded archive, go to the cluster directory and execute the get-kube-binaries.sh script; the kubernetes binaries will be placed under the extraction path.

Binary file path: kubernetes/server/kubernetes/server/bin

Management tool path: kubernetes/client/bin
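As a rough sketch of these steps (the archive name follows the v1.13 release page and may differ for other versions):

# unpack the release archive and fetch the server/client binaries
tar -xzf kubernetes.tar.gz
cd kubernetes/cluster
./get-kube-binaries.sh
# when it finishes, the server binaries are under kubernetes/server/kubernetes/server/bin
# and the client tools under kubernetes/client/bin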

2. Configure kubernetes

2.1 create a certificate

At first I did not use secure authentication at all and exposed every API interface on the insecure port. Simply deploying kubernetes that way works, but all kinds of errors appear when deploying calico, which will be described in more detail later.

2.1.1 create a CA certificate for apiserver

Generate relevant certificates using openssl

# create the CA private key
openssl genrsa -out ca.key 2048
# generate the CA certificate
openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=k8s-master"
# create the apiserver private key
openssl genrsa -out server.key 2048
# generate the apiserver certificate signing request from the config and private key
openssl req -new -key server.key -out server.csr -subj "/CN=k8s-master" -config openssl.cnf
# sign the apiserver certificate with the CA
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 365 -extensions v3_req -extfile openssl.cnf

The configuration file (openssl.cnf) is as follows:

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.0.254.1
IP.2 = 10.8.8.27

Notes: IP.1 is the cluster IP of the apiserver (the first IP of the service-cluster-ip-range); IP.2 is the host address of the apiserver; the DNS entries are the names of the kube-apiserver virtual service.

The above operations will generate six files in the current directory: ca.crt ca.key ca.srl server.crt server.csr server.key
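One way to confirm that the SANs from openssl.cnf actually made it into the signed certificate:

# show the subject and the Subject Alternative Name extension of the apiserver certificate
openssl x509 -in server.crt -noout -subject
openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"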

2.1.2 Create certificates for kube-controller-manager and kube-scheduler

# generate the controller private key
openssl genrsa -out controller.key 2048
# generate the certificate signing request
openssl req -new -key controller.key -out controller.csr -subj "/CN=k8s-master"
# sign the controller certificate with the CA
openssl x509 -req -in controller.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out controller.crt -days 365

The above actions will generate: controller.crt controller.key controller.csr

2.1.3 create CA certificates for node1 and node2

The creation method is the same as in 2.1.2, except that -subj is replaced with the corresponding node name: "/CN=node1" for node1 and "/CN=node2" for node2. This eventually produces the files node1.crt node1.csr node1.key node2.crt node2.csr node2.key. A sketch for node1 follows.
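For reference, the node1 commands (node2 is identical with node2 substituted):

openssl genrsa -out node1.key 2048
openssl req -new -key node1.key -out node1.csr -subj "/CN=node1"
openssl x509 -req -in node1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out node1.crt -days 365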

Now that all the certificates have been configured, let's start configuring the relevant components of kubernetes

2.2 Startup scripts for configuring related components

2.2.1 Configure the etcd, kube-apiserver, kube-controller-manager and kube-scheduler services on master

1) etcd service

As the main database of the kubernetes cluster, etcd needs to be installed and started before the kubernetes services. Download the etcd binaries from github (https://github.com/etcd-io/etcd/releases) and copy the etcd and etcdctl executables to the /usr/local/bin directory.

Set up ETCD service files

Edit /lib/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

[Service]

Type=simple

WorkingDirectory=/var/lib/etcd/

EnvironmentFile=/etc/etcd/etcd.conf

ExecStart=/usr/local/bin/etcd

[Install]

WantedBy=multi-user.target

Add the service to boot:

systemctl daemon-reload
systemctl enable etcd.service

Configure etcd (stand-alone mode)

Edit /etc/etcd/etcd.conf

ETCD_NAME="etcd1"
ETCD_DATA_DIR="/export/data/etcd"
ETCD_LISTEN_PEER_URLS="http://10.8.8.27:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.8.8.27:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.8.8.27:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.8.8.27:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://10.8.8.27:2379"

Start etcd

systemctl start etcd.service
etcdctl --endpoints=http://10.8.8.27:2379 cluster-health   # check etcd startup status

2) kube-apiserver service

Copy the binaries kube-apiserver, kube-controller-manager, kubectl and kube-scheduler obtained in step 1.2 (directory: kubernetes/server/kubernetes/server/bin) to the /usr/local/bin directory.

Edit /lib/systemd/system/kube-api.service

[Unit]

Description=Kubernetes API Server

After=etcd.service

Wants=etcd.service

[Service]

EnvironmentFile=/etc/kubernetes/apiserver

ExecStart=/usr/local/bin/kube-apiserver $KUBE_API_ARGS

Restart=on-failure

Type=notify

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

Add the service to boot:

systemctl daemon-reload
systemctl enable kube-api.service

Configure kube-api

Edit /etc/kubernetes/apiserver

KUBE_API_ARGS="--storage-backend=etcd3 \
--etcd-servers=http://10.8.8.27:2379 \
--insecure-bind-address=0.0.0.0 \
--secure-port=6443 --insecure-port=8080 \
--service-cluster-ip-range=10.0.254.0/24 \
--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
--logtostderr=false --log-dir=/export/logs/kubernetes --v=2 \
--allow-privileged=true \
--tls-private-key-file=/etc/kubernetes/ssl/apiserver/server.key \
--client-ca-file=/etc/kubernetes/ssl/apiserver/ca.crt \
--service-account-key-file=/etc/kubernetes/ssl/apiserver/server.key \
--tls-cert-file=/etc/kubernetes/ssl/apiserver/server.crt"

--service-cluster-ip-range: the range of the cluster's virtual (service) IP addresses.

--admission-control: admission control settings for the kubernetes cluster; each control module takes effect in turn as a plug-in.

--allow-privileged: needed later for calico, because calico-node has to run in privileged mode on every node.

--client-ca-file, --tls-cert-file, --tls-private-key-file: the apiserver certificates created above.

--service-account-key-file: a few words on serviceaccounts here, because calico will later create a serviceaccount and the pods it starts use it constantly. A Service Account is also an account, but it is not for human users of the cluster; it is for processes running inside pods. Normally, to keep the kubernetes cluster secure, the API server authenticates its clients, and pods are authenticated through their serviceaccount. When a pod starts, the serviceaccount passed to it (default is used if none is specified) is materialized as the files ca.crt, namespace and token under /var/run/secrets/kubernetes.io/serviceaccount; these three files are the elements of API Server authentication. On the master you can list the created Service Accounts with kubectl get serviceaccount --all-namespaces; we will come back to Service Accounts later.
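As an illustration (the pod name mypod is a placeholder), the mounted serviceaccount files can be listed from inside any running pod:

# the serviceaccount secret is mounted into every pod; "mypod" is a placeholder name
kubectl exec mypod -- ls /var/run/secrets/kubernetes.io/serviceaccount
# expected output: ca.crt  namespace  token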

3) kube-controller-manager service

kube-controller-manager relies on the kube-apiserver service.

Edit /lib/systemd/system/kube-controller-manager.service

[Unit]

Description=Kubernetes Controller Manager

After=kube-api.service

Wants=kube-api.service

[Service]

EnvironmentFile=/etc/kubernetes/controller-manager

ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS

Restart=on-failure

# Type=notify

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

With Type=notify the service kept restarting for me, so it is commented out here.

Configure /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS="\
--kubeconfig=/etc/kubernetes/kubeconfig.yaml \
--master=https://10.8.8.27:6443 \
--logtostderr=false --log-dir=/export/logs/kubernetes --v=2 \
--service-account-private-key-file=/etc/kubernetes/ssl/apiserver/server.key \
--root-ca-file=/etc/kubernetes/ssl/apiserver/ca.crt"

The corresponding /etc/kubernetes/kubeconfig.yaml configuration is as follows:

apiVersion: v1
kind: Config
users:
- name: controllermanager
  user:
    client-certificate: /etc/kubernetes/ssl/kube-controller/controller.crt
    client-key: /etc/kubernetes/ssl/kube-controller/controller.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/kube-controller/ca.crt
contexts:
- context:
    cluster: local
    user: controllermanager
  name: my-context
current-context: my-context

A Config can describe multiple clusters; current-context selects the my-context context, which combines the user controllermanager with the cluster local.

Add the service to boot:

systemctl daemon-reload
systemctl enable kube-controller-manager.service

4) Configure the kube-scheduler service

kube-scheduler also relies on the kube-apiserver service.

Edit /lib/systemd/system/kube-scheduler.service

[Unit]

Description=Kubernetes Scheduler

After=kube-api.service

Wants=kube-api.service

[Service]

EnvironmentFile=/etc/kubernetes/scheduler

ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_ARGS

Restart=on-failure

# Type=notify

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

The default Type is used here as well.

Configure /etc/kubernetes/scheduler

KUBE_SCHEDULER_ARGS="\
--kubeconfig=/etc/kubernetes/kubeconfig.yaml \
--master=https://10.8.8.27:6443 \
--logtostderr=false --log-dir=/export/logs/kubernetes --v=2"

--kubeconfig: shares the kube-controller-manager configuration.

Add the service to boot:

systemctl daemon-reload
systemctl enable kube-scheduler.service

5) Start kube-apiserver, kube-controller-manager and kube-scheduler

systemctl restart kube-api.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
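A quick sanity check after starting the three services (when no kubeconfig is given, kubectl on the master falls back to the insecure local port 8080):

# confirm the units are running
systemctl status kube-api kube-controller-manager kube-scheduler --no-pager
# check control-plane component health
kubectl get componentstatuses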

2.2.2 configure kube-proxy and kubelet services on node1 and node2

Similarly, copy the previously obtained binaries kubelet and kube-proxy to /usr/local/bin on every node, and copy each node's certificates to the corresponding directories.

1) Configure the kubelet service

Edit /lib/systemd/system/kubelet.service

[Unit]

Description=Kubernetes Kubelet Server

After=docker.service

Requires=docker.service

[Service]

WorkingDirectory=/var/lib/kubelet

EnvironmentFile=/etc/kubernetes/kubelet

ExecStart=/usr/local/bin/kubelet $KUBE_KUBELET_ARGS

Restart=on-failure

Type=notify

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

Configure kubelet

Edit /etc/kubernetes/kubelet

KUBE_KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig.yaml \
--hostname-override=10.8.8.28 \
--logtostderr=false --log-dir=/export/logs/kubernetes --v=2 \
--fail-swap-on=false \
--cgroup-driver=systemd \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice"

The new version no longer supports --master; the apiserver address has to be placed in the file specified by --kubeconfig.

--pod-infra-container-image: this option specifies the infrastructure container image used in every pod, which manages the pod's network/ipc namespaces. You can point it at an image in your own registry (default: "k8s.gcr.io/pause:3.1"), or pull the image as follows to avoid the "No such image: k8s.gcr.io/pause:3.1" error:

docker pull ibmcom/pause:3.1
docker tag ibmcom/pause:3.1 k8s.gcr.io/pause:3.1

--cgroup-driver=systemd needs to be consistent with docker, otherwise the following error will occur:

Failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "system" is different from docker cgroup driver: "systemd"

Configure /etc/kubernetes/kubeconfig.yaml:

apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/node1.crt
    client-key: /etc/kubernetes/ssl/node1.key
clusters:
- cluster:
    server: https://10.8.8.27:6443
    certificate-authority: /etc/kubernetes/ssl/ca.crt
  name: local
contexts:
- context:
    cluster: local
    user: kubelet
  name: local
current-context: local

client-certificate, client-key: use the paths of each node's own certificate.

2) Configure the kube-proxy service

Edit /lib/systemd/system/kube-proxy.service

[Unit]

Description=Kubernetes Kube-Proxy Server

After=network.service

Requires=network.service

[Service]

EnvironmentFile=/etc/kubernetes/proxy

ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

Configure /etc/kubernetes/proxy

KUBE_PROXY_ARGS="--cluster-cidr=10.0.253.0/24 \
--master=https://10.8.8.27:6443 \
--logtostderr=true --log-dir=/export/logs/kubernetes --v=2 \
--hostname-override=10.8.8.28 \
--proxy-mode=iptables \
--kubeconfig=/etc/kubernetes/kubeconfig.yaml"

--proxy-mode: kube-proxy has three modes; the default is iptables. Newer versions also support ipvs, which was still in a testing phase at the time.

--kubeconfig: shared with kubelet.

3) Start kubelet and kube-proxy

systemctl daemon-reload
systemctl enable kube-proxy.service
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl start kube-proxy.service

With that, the kubernetes installation is complete.
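As a rough check that the nodes registered and that kube-proxy is programming iptables rules (the KUBE-SERVICES chain is the one kube-proxy maintains in iptables mode):

# on the master: both nodes should appear (Ready may take a moment)
kubectl get nodes -o wide
# on a node: list the NAT chain kube-proxy maintains
iptables -t nat -L KUBE-SERVICES -n | head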

Part II Calico installation

The official website provides a manifest that defines all the resources calico needs; calico-node and calico-kube-controllers can be created from it directly with kubectl.

Download address: https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/calico.yaml (replace v3.5 in the URL according to the version you need). A brief description of the configuration follows:

1. Configuration description

1) configmap object configuration

kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "http://10.8.8.27:2379"
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the Calico backend to use.
  calico_backend: "bird"
  # Configure the MTU to use
  veth_mtu: "1440"
  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

etcd_endpoints: replaces the __ETCD_ENDPOINTS__ placeholder in cni_network_config.

etcd_ca, etcd_cert, etcd_key: same as above, the corresponding placeholders are replaced.

cni_network_config: the init container of the calico-node pod, calico-cni, uses install-cni.sh to render this parameter into 10-calico.conflist and place it in the /etc/cni/net.d directory. For example, mine renders as follows:

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.0",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "etcd_endpoints": "http://10.8.8.27:2379",
      "etcd_key_file": "",
      "etcd_cert_file": "",
      "etcd_ca_cert_file": "",
      "mtu": 1440,
      "ipam": {
          "type": "calico-ipam"
      },
      "policy": {
          "type": "k8s"
      },
      "kubernetes": {
          "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    }
  ]
}

kubeconfig: this configuration is where the serviceaccount mentioned earlier comes into play. calico-cni's install-cni.sh script reads the contents of /var/run/secrets/kubernetes.io/serviceaccount and, based on the environment variables set by kubelet (KUBERNETES_SERVICE_PROTOCOL, KUBERNETES_SERVICE_HOST, KUBERNETES_SERVICE_PORT), generates the /etc/cni/net.d/calico-kubeconfig configuration. This configuration is later used to access kube-apiserver when calico-node and calico-kube-controllers start.

2) ServiceAccount object

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

This creates a ServiceAccount object named calico-node. A pod specifies which token to use by specifying its serviceaccount.

3) DaemonSet object

A calico-node pod is created on every node; the pod contains two containers, an init container (calico-cni) and the calico-node container that creates the routing and iptables information.

CALICO_IPV4POOL_CIDR: this parameter creates an IP pool in etcd.

CALICO_IPV4POOL_IPIP: chooses whether to enable IPIP (the default here is off); when it is off, routes are synchronized through the BGP protocol.

FELIX_IPINIPENABLED: false turns off IPIP on the felix side.

4) Deployment object

A calico-kube-controllers deployment is created. This container watches network policies, namespaces, pods, nodes, serviceaccounts and other kubernetes information and synchronizes it into calico's configuration store. The official documentation suggests one calico-kube-controllers replica per 200 nodes, and no more than 20 replicas in total.

2. Install calico components

2.1 install calico components

kubectl apply -f calico.yaml   # install the calico components

2.2 configure kubelet

If kubernetes is going to use calico, you must add the --network-plugin=cni parameter to the kubelet configuration so that kubelet ends up calling calico's plug-in through cni.go. After changing the configuration, restart kubelet, as sketched below.
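Concretely, on each node this means appending the flag to the kubelet arguments shown earlier and restarting:

# /etc/kubernetes/kubelet: add to KUBE_KUBELET_ARGS (other flags unchanged):
#   --network-plugin=cni \
systemctl daemon-reload
systemctl restart kubelet.service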

2.3 Start a test container; you should eventually see output like the following:

# kubectl get pods --all-namespaces -o wide
# kubectl get serviceaccount --all-namespaces
# kubectl get services --all-namespaces -o wide

Listing the route information on 10.8.8.28, I can see a route to 192.168.95.139; IPIP mode is enabled here.

In BGP mode the situation looks different: the routes run over the actual physical NIC.

The routing daemon is bird; you can see that a bird client is started locally to synchronize the BGP routes.
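Assuming standard tooling on the node, the routing state can be inspected like this:

# list the routes calico programmed (one per workload / pod network)
ip route
# confirm the bird daemon that calico-node runs for BGP is alive
ps -ef | grep '[b]ird'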

3. Test script

The whole installation took me a lot of time. Because I skipped security at the beginning, I ran into all kinds of errors and ended up reading the source code to track down the problems, and I wrote quite a few calico test scripts along the way. I have roughly sorted out the running logic of the kubernetes+calico components:

1) the IP configuration logic

2) the IP allocation logic

I did not look closely at calico's IP allocation logic this time, so I reused and completed the flow chart from an earlier source-code analysis.

Let's start with the test script.

1. Call calico script to apply for IP resources

This is a script that simulates cni.go. When calico executes, it reads a large number of K8s environment variables. The following is an example of environment settings:

#!/bin/bash
mkdir -p /var/run/netns/
[ -L /var/run/netns/default ] && rm -f /var/run/netns/default
ln -s /var/run/docker/netns/default /var/run/netns/default
export CNI_ARGS="IgnoreUnknown=1;K8S_POD_NAMESPACE=;K8S_POD_NAME="
# export CNI_ARGS='IgnoreUnknown=1'
export CNI_COMMAND="ADD"
export CNI_CONTAINERID="7dd0f5009d1e4e6d0289311755e7885b93a5a1aa7e34a066689860df5bf6d763"
export CNI_IFNAME="eth0"
export CNI_PATH="/opt/cni/bin"
export CNI_NETNS="/var/run/netns/default"

A note on the two parameters K8S_POD_NAMESPACE and K8S_POD_NAME (left empty in the script): if they are empty, the /opt/cni/bin/calico plugin does not need to authenticate against kube-apiserver, so no serviceaccount is required; if they are not empty, the serviceaccount, i.e. the /etc/cni/net.d/calico-kubeconfig configuration, is needed.

At the beginning, when the service account was not configured, the following error occurred when cni requested an IP:

Error adding default_nginx1/83637c6d9fa54573ba62538dcd6b7b5778a7beb4fc1299449e894e488fb3c116 to network calico/k8s-calico-network: invalid configuration: no configuration has been provided

This is actually the consequence of an empty result being returned.

The following is the content of the script (calicoctl.go); running go run calicoctl.go produces the desired result:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net"
	"os/exec"
)

type Interface struct {
	Name    string `json:"name"`
	Mac     string `json:"mac,omitempty"`
	Sandbox string `json:"sandbox,omitempty"`
}

type Result struct {
	CNIVersion string       `json:"cniVersion,omitempty"`
	Interfaces []*Interface `json:"interfaces,omitempty"`
	IPs        []*IPConfig  `json:"ips,omitempty"`
	// DNS types.DNS `json:"dns,omitempty"`
}

type IPConfig struct {
	// IP version, either "4" or "6"
	Version string
	// Index into the Result's Interfaces list
	Interface *int
	// Address string `json:"address,omitempty"`
	Address net.IPNet
	Gateway net.IP
}

// NewResult parses the JSON returned by the calico plugin and prints the
// version of the first allocated IP.
func NewResult(data []byte) {
	result := &Result{}
	if err := json.Unmarshal(data, result); err != nil {
		fmt.Println(err)
	}
	fmt.Println(result.IPs[0].Version)
}

func main() {
	stdout := &bytes.Buffer{}
	// the test config fed to the plugin on stdin (see /etc/kubernetes/calico_test.yaml below)
	stdinData, _ := ioutil.ReadFile("/etc/kubernetes/calico_test.yaml")
	c := exec.Cmd{
		Path:   "/opt/cni/bin/calico",
		Args:   []string{"/opt/cni/bin/calico"},
		Stdin:  bytes.NewBuffer(stdinData),
		Stdout: stdout,
	}
	_ = c.Run()
	result := stdout.Bytes()
	fmt.Println(string(result))
	NewResult(result)
}

The corresponding /etc/kubernetes/calico_test.yaml content is as follows; this file corresponds to the stdinData passed in by kubelet:

{
  "cniVersion": "0.3.0",
  "etcd_ca_cert_file": "",
  "etcd_cert_file": "",
  "etcd_endpoints": "http://10.8.8.27:2379",
  "etcd_key_file": "",
  "ipam": {
      "type": "calico-ipam"
  },
  "kubernetes": {
      "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
  },
  "log_level": "debug",
  "mtu": 1440,
  "name": "k8s-pod-network",
  "policy": {
      "type": "k8s"
  },
  "type": "calico"
}

2. Script to capture the parameters kubelet passes to the CNI plugin

Edit calico.go

package main

import (
	"io/ioutil"
	"log"
	"os"
)

func main() {
	file := "/export/logs/calico/request.log"
	logFile, err := os.OpenFile(file, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0766)
	if nil != err {
		panic(err)
	}
	loger := log.New(logFile, "calico", log.Ldate|log.Ltime|log.Lshortfile)
	loger.SetFlags(log.Ldate | log.Ltime | log.Lshortfile)
	// kubelet passes the CNI request on stdin; log it verbatim
	stdin := os.Stdin
	stdinData, err := ioutil.ReadAll(stdin)
	if err != nil {
		loger.Printf("error reading from stdin: %v", err)
	}
	loger.Println(string(stdinData))
}

Here the incoming request from kubelet is written to the /export/logs/calico/request.log file; you can also print it directly.

Compile it and place it in the /opt/cni/bin directory. Start a test container and you can see the log output.

3. Obtain ETCD key information

export ETCDCTL_API=3
etcdctl --endpoints=http://10.8.8.27:2379 get / --prefix --keys-only
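For example, to look only at the keys calico writes (calico v3 stores its data under the /calico prefix; adjust if your layout differs):

export ETCDCTL_API=3
etcdctl --endpoints=http://10.8.8.27:2379 get /calico --prefix --keys-only | head -n 20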

4. Error introduction

Some errors and their solutions have already been covered above; a few more are explained here:

X509: certificate is valid for 10.8.8.27, 10.0.254.1, not 10.254.0.1

This happens because kube-apiserver's virtual IP is not within the range configured by service-cluster-ip-range. The solution is to delete the kubernetes service and let it be recreated; after that the error no longer occurs.

Pod_controller.go:206: Failed to list * v1.Pod: Unauthorized

This happens because the service account is not configured correctly, so calico-kube-controllers fails authentication when it fetches the pod (pod_controller.go), namespace (namespace_controller.go), policy (policy_controller.go), node (node_controller.go) and serviceaccount (serviceaccount_controller.go) lists from kube-apiserver.
