Tungsten Fabric in Practice: Deploying with K8s


This article is a hands-on record of deploying Tungsten Fabric with K8s, analyzed and described from a practical point of view. I hope you get something out of it.

Tungsten Fabric (formerly known as OpenContrail) provides a controller that works with an orchestrator (OpenStack/K8s/vCenter); the vRouter it controls, deployed on each compute node/node, replaces the original linux-bridge/OVS for communication.

Preface

The best way to study an open source controller is to deploy one first.

I first went to TF's GitHub; whether tf-devstack or run.sh in tf-dev-env, everything got stuck.

I found the TF Chinese community, added them on WeChat, and was pulled into the TF discussion group.

With guidance from Wu and Yang, the experts in the group, I began deploying according to the following articles:

Part 1: deployment preparation and initial state

Part 2: creating a virtual network

Part 3: creating a security policy

Part 4: creating an isolated namespace

Practical record

Initial preparation

Create three CentOS 7.7 virtual machines.

pip acceleration based on aliyun

Each node sets up the pip mirror.
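For reference, a minimal sketch of such a mirror setup, assuming the common aliyun PyPI mirror (the exact file the author used isn't shown), goes in ~/.pip/pip.conf on each node:

[global]
; aliyun PyPI mirror (assumed; substitute your own accelerator)
index-url = https://mirrors.aliyun.com/pypi/simple/
trusted-host = mirrors.aliyun.com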

docker image acceleration based on aliyun

There are many tutorials online; the accelerator address below is masked with **.
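As a sketch of what such an accelerator looks like (the real address is masked here as well), the mirror goes in /etc/docker/daemon.json, followed by a docker restart:

{
    "registry-mirrors": ["https://****.mirror.aliyuncs.com"]
}

# systemctl daemon-reload && systemctl restart docker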

Some source files

Many of the required installation files are placed on the http://35.220.208.0/ server; adjust the commands to the actual links.

I hit the following error, but it doesn't seem to make any difference:

[root@localhost pkg_python]# easy_install --upgrade --dry-run pip
Searching for pip
Reading https://pypi.python.org/simple/pip/
Best match: pip 20.0.2
Downloading https://files.pythonhosted.org/packages/8e/76/66066b7bc71817238924c7e4b448abdb17eb0c92d645769c223f9ace478f/pip-20.0.2.tar.gz#sha256=7db0c8ea4c7ea51c8049640e8e6e7fde949de672bfa4949920675563a5a6967f
Processing pip-20.0.2.tar.gz
Writing /tmp/easy_install-bm8Ztx/pip-20.0.2/setup.cfg
Running pip-20.0.2/setup.py -n -q bdist_egg --dist-dir /tmp/easy_install-bm8Ztx/pip-20.0.2/egg-dist-tmp-32s9sn
/usr/lib64/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'project_urls'
  warnings.warn(msg)
/usr/lib64/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'python_requires'
  warnings.warn(msg)
warning: no files found matching 'docs/docutils.conf'
warning: no previously-included files found matching '.coveragerc'
warning: no previously-included files found matching '.mailmap'
warning: no previously-included files found matching '.appveyor.yml'
warning: no previously-included files found matching '.travis.yml'
warning: no previously-included files found matching '.readthedocs.yml'
warning: no previously-included files found matching '.pre-commit-config.yaml'
warning: no previously-included files found matching 'tox.ini'
warning: no previously-included files found matching 'noxfile.py'
warning: no files found matching 'Makefile' under directory 'docs'
warning: no files found matching '*.bat' under directory 'docs'
warning: no previously-included files found matching 'src/pip/_vendor/six'
warning: no previously-included files found matching 'src/pip/_vendor/six/moves'
warning: no previously-included files matching '*.pyi' found under directory 'src/pip/_vendor'
no previously-included directories found matching '.github'
no previously-included directories found matching '.azure-pipelines'
no previously-included directories found matching 'docs/build'
no previously-included directories found matching 'news'
no previously-included directories found matching 'tasks'
no previously-included directories found matching 'tests'
no previously-included directories found matching 'tools'
warning: install_lib: 'build/lib' does not exist -- no Python modules to install
[root@localhost pkg_python]#

Local registry

Run the registry container locally, and port 80 of the host is mapped to port 5000 of the container.

[root@deployer ~]# docker run -d -p 80:5000 --restart=always --name registry registry:2
0c17a03ebdffe3cea98d7cec42c268c1117241f236f9f2443bbb1b77d34b0082
[root@deployer ~]#
[root@deployer ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c17a03ebdff registry:2 "/entrypoint.sh /etc…" About an hour ago Up About an hour 0.0.0.0:80->5000/tcp registry
[root@deployer ~]#

Set up the yaml file

After getting contrail-ansible-deployer, go into the directory and modify instances.yaml:

[root@deployer inventory]# vim ../config/instances.yaml

provider_config:
  bms:
    ssh_pwd: Password
    ssh_user: root
    ssh_public_key: /root/.ssh/id_rsa.pub
    ssh_private_key: /root/.ssh/id_rsa
    domainsuffix: local
instances:
  bms1:
    provider: bms
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
      k8s_master:
      kubemanager:
    ip: 192.168.122.96
  bms2:
    provider: bms
    roles:
      vrouter:
      k8s_node:
    ip: 192.168.122.250
global_configuration:
  CONTAINER_REGISTRY: hub.juniper.net
contrail_configuration:
  CONTRAIL_VERSION: 1912-latest

CONTAINER_REGISTRY and the contrail version (1912-latest) must match the registry name and tag of the images pulled and retagged later.

Set up passwordless login

From the deployer, you need to be able to log in to the local machine and to master01/node01 without entering a password:

# ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master01
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node01
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node02

Ansible

Executing ansible on the deployer raises an error:

/ usr/lib/python2.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.24.3) or chardet (2.2.1) doesn't match a supported version!

RequestsDependencyWarning)

The solution is:

pip uninstall urllib3
pip uninstall chardet
pip install requests

Pull the images

The K8s images are no problem with aliyun acceleration.

The default contrail source, hub.juniper.net, requires a Juniper account, so it is replaced with opencontrailnightly.

Yang provided a script that pulls the images and pushes them to the local registry; the master/node machines can then pull directly from the deployer's registry.

If you are using the latest contrail-ansible-deployer code, you need to add one more image: contrail-provisioner.

Before running it, though, you need to add the local IP as an insecure-registry so images can be pulled over http instead of https.

One way is to modify /etc/docker/daemon.json (create it if it doesn't exist):

[root@node01 ~]# cat /etc/docker/daemon.json
{
    "insecure-registries": ["hub.juniper.net", "k8s.gcr.io"]
}
[root@node01 ~]#

Then:

[root@deployer ~]# systemctl daemon-reload
[root@deployer ~]# systemctl restart docker
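A quick sanity check that the setting took effect (standard docker CLI, not part of the original walkthrough): the configured addresses should show up under "Insecure Registries" in docker info.

[root@deployer ~]# docker info | grep -A 3 "Insecure Registries"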

The script is as follows, already modified to the deployer's IP.

# to prepare Kubernetes offline images, run the following script
#!/bin/bash
# Author: Alex Yang
set -e
REPOSITORIE="gcr.azk8s.cn/google_containers"
LOCAL_REPO="192.168.122.160"
IMAGES="kube-proxy:v1.12.9 kube-controller-manager:v1.12.9 kube-scheduler:v1.12.9 kube-apiserver:v1.12.9 coredns:1.2.2 coredns:1.2.6 pause:3.1 etcd:3.2.24 kubernetes-dashboard-amd64:v1.8.3"
for img in $IMAGES
do
    echo "==== Pulling image:" $img
    docker pull $REPOSITORIE/$img
    echo "==== Retag image [" $img "]"
    docker tag $REPOSITORIE/$img $LOCAL_REPO/$img
    echo "==== Pushing image:" $LOCAL_REPO/$img
    docker push $LOCAL_REPO/$img
    docker rmi $REPOSITORIE/$img
done

# to prepare TungstenFabric offline images, run the following script
#!/bin/bash
# Author: Alex Yang
set -e
REGISTRY_URL=opencontrailnightly
LOCAL_REGISTRY_URL=192.168.122.160
IMAGE_TAG=1912-latest
COMMON_IMAGES="contrail-node-init contrail-status contrail-nodemgr contrail-external-cassandra contrail-external-zookeeper contrail-external-kafka contrail-external-redis contrail-external-rabbitmq contrail-external-rsyslogd"
ANALYTICS_IMAGES="contrail-analytics-query-engine contrail-analytics-api contrail-analytics-collector contrail-analytics-snmp-collector contrail-analytics-snmp-topology contrail-analytics-alarm-gen"
CONTROL_IMAGES="contrail-controller-control-control contrail-controller-control-dns contrail-controller-control-named contrail-controller-config-api contrail-controller-config-devicemgr contrail-controller-config-schema contrail-controller-config-svcmonitor contrail-controller-config-stats contrail-controller-config-dnsmasq"
WEBUI_IMAGES="contrail-controller-webui-job contrail-controller-webui-web"
K8S_IMAGES="contrail-kubernetes-kube-manager contrail-kubernetes-cni-init"
VROUTER_IMAGES="contrail-vrouter-kernel-init contrail-vrouter-agent"
IMAGES="$COMMON_IMAGES $ANALYTICS_IMAGES $CONTROL_IMAGES $WEBUI_IMAGES $K8S_IMAGES $VROUTER_IMAGES"
for image in $IMAGES
do
    echo "==== Pulling image:" $image
    docker pull $REGISTRY_URL/$image:$IMAGE_TAG
    echo "==== Retag image [" $image "]"
    docker tag $REGISTRY_URL/$image:$IMAGE_TAG $LOCAL_REGISTRY_URL/$image:$IMAGE_TAG
    echo "==== Pushing image:" $LOCAL_REGISTRY_URL/$image:$IMAGE_TAG
    docker push $LOCAL_REGISTRY_URL/$image:$IMAGE_TAG
    docker rmi $REGISTRY_URL/$image:$IMAGE_TAG
done

View the list of images

[root@deployer ~]# docker image list

REPOSITORY TAG IMAGE ID CREATED SIZE

ubuntu latest 72300a873c2c 3 weeks ago 64.2MB

registry 2 708bc6af7e5e 7 weeks ago 25.8MB

registry latest 708bc6af7e5e 7 weeks ago 25.8MB

192.168.122.160/contrail-vrouter-kernel-init 1912-latest 92e9cce315a5 3 months ago 581MB

192.168.122.160/contrail-vrouter-agent 1912-latest e8d9457d740e 3 months ago 729MB

192.168.122.160/contrail-status 1912-latest d2264c6741a5 3 months ago 513MB

192.168.122.160/contrail-nodemgr 1912-latest c3428aa7e9b7 3 months ago 523MB

192.168.122.160/contrail-node-init 1912-latest c846ff071cc8 3 months ago 506MB

192.168.122.160/contrail-kubernetes-kube-manager 1912-latest 983a6307731b 3 months ago 517MB

192.168.122.160/contrail-kubernetes-cni-init 1912-latest 45c88538c834 3 months ago 525MB

192.168.122.160/contrail-external-zookeeper 1912-latest 6937c72b866c 3 months ago 290MB

192.168.122.160/contrail-external-rsyslogd 1912-latest 812ba27a4e08 3 months ago 304MB

192.168.122.160/contrail-external-redis 1912-latest 3dc79f0b6eb9 3 months ago 129MB

192.168.122.160/contrail-external-rabbitmq 1912-latest a98ac91667b2 3 months ago 256MB

192.168.122.160/contrail-external-kafka 1912-latest 7b5a2ce6a656 3 months ago 665MB

192.168.122.160/contrail-external-cassandra 1912-latest 20109c39696c 3 months ago 545MB

192.168.122.160/contrail-controller-webui-web 1912-latest 44054aa131c5 3 months ago 552MB

192.168.122.160/contrail-controller-webui-job 1912-latest 946e2bbd7451 3 months ago 552MB

192.168.122.160/contrail-controller-control-named 1912-latest 81ef8223a519 3 months ago 575MB

192.168.122.160/contrail-controller-control-dns 1912-latest 15c1ce0cf26e 3 months ago 575MB

192.168.122.160/contrail-controller-control-control 1912-latest ec195cc75705 3 months ago 594MB

192.168.122.160/contrail-controller-config-svcmonitor 1912-latest 3d53781422be 3 months ago 673MB

192.168.122.160/contrail-controller-config-stats 1912-latest 46bc77cf1c87 3 months ago 506MB

192.168.122.160/contrail-controller-config-schema 1912-latest 75acb8ed961f 3 months ago 673MB

192.168.122.160/contrail-controller-config-dnsmasq 1912-latest dc2980441d51 3 months ago 506MB

192.168.122.160/contrail-controller-config-devicemgr 1912-latest c08868a27a0a 3 months ago 772MB

192.168.122.160/contrail-controller-config-api 1912-latest f39ca251b475 3 months ago 706MB

192.168.122.160/contrail-analytics-snmp-topology 1912-latest 5ee37cbbd034 3 months ago 588MB

192.168.122.160/contrail-analytics-snmp-collector 1912-latest 29ae502fb74f 3 months ago 588MB

192.168.122.160/contrail-analytics-query-engine 1912-latest b5f937d6b6e3 3 months ago 588MB

192.168.122.160/contrail-analytics-collector 1912-latest ee1bdbcc460a 3 months ago 588MB

192.168.122.160/contrail-analytics-api 1912-latest ac5c8f7cef89 3 months ago 588MB

192.168.122.160/contrail-analytics-alarm-gen 1912-latest e155b24a0735 3 months ago 588MB

192.168.10.10/kube-proxy v1.12.9 295526df163c 9 months ago 95.7MB

192.168.122.160/kube-proxy v1.12.9 295526df163c 9 months ago 95.7MB

192.168.122.160/kube-controller-manager v1.12.9 f473e8452c8e 9 months ago 164MB

192.168.122.160/kube-apiserver v1.12.9 8ea704c2d4a7 9 months ago 194MB

192.168.122.160/kube-scheduler v1.12.9 c79506ccc1bc 9 months ago 58.4MB

192.168.122.160/coredns 1.2.6 f59dcacceff4 16 months ago 40MB

192.168.122.160/etcd 3.2.24 3cab8e1b9802 18 months ago 220MB

192.168.122.160/coredns 1.2.2 367cdc8433a4 18 months ago 39.2MB

192.168.122.160/kubernetes-dashboard-amd64 v1.8.3 0c60bcf89900 2 years ago 102MB

192.168.122.160/pause 3.1 da86e6ba6ca1 2 years ago 742kB

[root@deployer ~] #

View the images in the local registry

[root@deployer ~]# curl -X GET http://localhost/v2/_catalog | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1080  100  1080    0     0  18298      0 --:--:-- --:--:-- --:--:-- 18620

{
    "repositories": [
        "contrail-analytics-alarm-gen",
        "contrail-analytics-api",
        "contrail-analytics-collector",
        "contrail-analytics-query-engine",
        "contrail-analytics-snmp-collector",
        "contrail-analytics-snmp-topology",
        "contrail-controller-config-api",
        "contrail-controller-config-devicemgr",
        "contrail-controller-config-dnsmasq",
        "contrail-controller-config-schema",
        "contrail-controller-config-stats",
        "contrail-controller-config-svcmonitor",
        "contrail-controller-control-control",
        "contrail-controller-control-dns",
        "contrail-controller-control-named",
        "contrail-controller-webui-job",
        "contrail-controller-webui-web",
        "contrail-external-cassandra",
        "contrail-external-kafka",
        "contrail-external-rabbitmq",
        "contrail-external-redis",
        "contrail-external-rsyslogd",
        "contrail-external-zookeeper",
        "contrail-kubernetes-cni-init",
        "contrail-kubernetes-kube-manager",
        "contrail-node-init",
        "contrail-nodemgr",
        "contrail-status",
        "contrail-vrouter-agent",
        "contrail-vrouter-kernel-init",
        "coredns",
        "etcd",
        "kube-apiserver",
        "kube-controller-manager",
        "kube-proxy",
        "kube-scheduler",
        "kubernetes-dashboard-amd64",
        "pause"
    ]

}

[root@deployer ~]#
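Besides the catalog, the standard registry v2 API can also list the tags of a single image, which is handy for confirming the retagged version actually landed. For example (an extra check, not in the original notes; the output assumes only 1912-latest was pushed):

[root@deployer ~]# curl -X GET http://localhost/v2/contrail-status/tags/list
{"name":"contrail-status","tags":["1912-latest"]}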

On master01 and node01, the k8s/contrail images can now be pulled straight from the deployer; blazing fast! (Don't forget --insecure-registry=192.168.122.160.)

# to prepare Kubernetes offline images, run the following script
#!/bin/bash
# Author: Alex Yang
set -e
REPOSITORIE="k8s.gcr.io"
LOCAL_REPO="192.168.122.160"
IMAGES="kube-proxy:v1.12.9 kube-controller-manager:v1.12.9 kube-scheduler:v1.12.9 kube-apiserver:v1.12.9 coredns:1.2.2 coredns:1.2.6 pause:3.1 etcd:3.2.24 kubernetes-dashboard-amd64:v1.8.3"
for img in $IMAGES
do
    echo "==== Pulling image:" $img
    docker pull $LOCAL_REPO/$img
    echo "==== Retag image [" $img "]"
    docker tag $LOCAL_REPO/$img $REPOSITORIE/$img
    docker rmi $LOCAL_REPO/$img
done

# to prepare TungstenFabric offline images, run the following script
#!/bin/bash
# Author: Alex Yang
set -e
REPOSITORIE=hub.juniper.net
LOCAL_REPO="192.168.122.160"
IMAGE_TAG=1912-latest
COMMON_IMAGES="contrail-node-init contrail-status contrail-nodemgr contrail-external-cassandra contrail-external-zookeeper contrail-external-kafka contrail-external-redis contrail-external-rabbitmq contrail-external-rsyslogd"
ANALYTICS_IMAGES="contrail-analytics-query-engine contrail-analytics-api contrail-analytics-collector contrail-analytics-snmp-collector contrail-analytics-snmp-topology contrail-analytics-alarm-gen"
CONTROL_IMAGES="contrail-controller-control-control contrail-controller-control-dns contrail-controller-control-named contrail-controller-config-api contrail-controller-config-devicemgr contrail-controller-config-schema contrail-controller-config-svcmonitor contrail-controller-config-stats contrail-controller-config-dnsmasq"
WEBUI_IMAGES="contrail-controller-webui-job contrail-controller-webui-web"
K8S_IMAGES="contrail-kubernetes-kube-manager contrail-kubernetes-cni-init"
VROUTER_IMAGES="contrail-vrouter-kernel-init contrail-vrouter-agent"
IMAGES="$COMMON_IMAGES $ANALYTICS_IMAGES $CONTROL_IMAGES $WEBUI_IMAGES $K8S_IMAGES $VROUTER_IMAGES"
for img in $IMAGES
do
    echo "==== Pulling image:" $img
    docker pull $LOCAL_REPO/$img:$IMAGE_TAG
    echo "==== Retag image [" $img "]"
    docker tag $LOCAL_REPO/$img:$IMAGE_TAG $REPOSITORIE/$img:$IMAGE_TAG
    docker rmi $LOCAL_REPO/$img:$IMAGE_TAG
done

Open the web UI

The playbooks were executed on the deployer:

ansible-playbook -e orchestrator=kubernetes -i inventory/ playbooks/install_k8s.yml
ansible-playbook -e orchestrator=kubernetes -i inventory/ playbooks/install_contrail.yml
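Once both playbooks finish, a common sanity check (a suggested extra step, not from the original notes) is to run the contrail-status tool on the controller node and wait until every component reports active:

[root@master01 ~]# contrail-status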

The web UI is at port 8143 of master01 and lands on the monitor page by default.

Username/password: admin/contrail123; the domain field can be left blank. Finally, the WebUI!

You can switch to the config page

K8s status

Node

[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 6h5m v1.12.9
node01 Ready <none> 6h4m v1.12.9
[root@master01 ~]#

[root@master01 ~]# kubectl get namespaces
NAME STATUS AGE
contrail Active 80m
default Active 6h30m
kube-public Active 6h30m
kube-system Active 6h30m
[root@master01 ~]#

Pods

[root@master01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-85c98899b4-4dzzx 0/1 ImagePullBackOff 0 6h3m
coredns-85c98899b4-w4bcs 0/1 ImagePullBackOff 0 6h3m
etcd-master01 1/1 Running 5 28m
kube-apiserver-master01 1/1 Running 4 28m
kube-controller-manager-master01 1/1 Running 5 28m
kube-proxy-dmmlh 1/1 Running 5 6h3m
kube-proxy-ph9gx 1/1 Running 1 6h3m
kube-scheduler-master01 1/1 Running 5 28m
kubernetes-dashboard-76456c6d4b-x5lz4 0/1 ImagePullBackOff 0 6h3m

Continue troubleshooting

node01 cannot use the kubectl command

The symptom is as follows:

[root@node01 ~]# kubectl get pods -n kube-system -o wide
The connection to the server localhost:8080 was refused - did you specify the right host or port?

The solution can be found here:

https://blog.csdn.net/qq_24046745/article/details/94405188?depth_1-utm_source=distribute.pc_relevant.none-task&utm_source=distribute.pc_relevant.none-task

[root@node01 ~]# scp root@192.168.122.250:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf
[root@node01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@node01 ~]# source ~/.bash_profile
[root@node01 ~]# kubectl get pods -n kube-system -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-85c98899b4-4dzzx 0/1 ImagePullBackOff 0 5h55m 10.47.255.252 node01
coredns-85c98899b4-w4bcs 0/1 ImagePullBackOff 0 5h55m 10.47.255.251 node01
etcd-master01 1/1 Running 3 11m 192.168.122.96 master01
kube-apiserver-master01 1/1 Running 3 11m 192.168.122.96 master01
kube-controller-manager-master01 1/1 Running 3 11m 192.168.122.96 master01
kube-proxy-dmmlh 1/1 Running 3 5h55m 192.168.122.96 master01
kube-proxy-ph9gx 1/1 Running 1 5h54m 192.168.122.250 node01
kube-scheduler-master01 1/1 Running 3 11m 192.168.122.96 master01
kubernetes-dashboard-76456c6d4b-x5lz4 0/1 ImagePullBackOff 0 5h54m 192.168.122.250 node01
[root@node01 ~]#

ImagePullBackOff problem

First, take a look at the coredns pod description:

[root@master01 ~]# kubectl describe pod coredns-85c98899b4-4dzzx -n kube-system

Name: coredns-85c98899b4-4dzzx

Namespace: kube-system

...

Events:

Type Reason Age From Message

Warning FailedScheduling 75m (x281 over 4h50m) default-scheduler 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
Warning FailedCreatePodSandBox 71m kubelet, node01 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1af3fb24d906d5f82ad3bdcf6d65be328302d3c596e63fc79ed0c134390b4753" network for pod "coredns-85c98899b4-4dzzx": NetworkPlugin cni failed to set up pod "coredns-85c98899b4-4dzzx_kube-system" network: Failed in Poll VM-CFG. Error: Failed in PollVM. Error: Failed HTTP Get operation. Return code 404
Normal SandboxChanged 70m (x3 over 71m) kubelet, node01 Pod sandbox changed, it will be killed and re-created.
Normal Pulling 70m (x3 over 70m) kubelet, node01 pulling image "k8s.gcr.io/coredns:1.2.6"
Warning Failed 70m (x3 over 70m) kubelet, node01 Failed to pull image "k8s.gcr.io/coredns:1.2.6": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp 192.168.122.160:443: getsockopt: no route to host
Warning Failed 70m (x3 over 70m) kubelet, node01 Error: ErrImagePull
Warning Failed 6m52s (x282 over 70m) kubelet, node01 Error: ImagePullBackOff
Normal BackOff 103s (x305 over 70m) kubelet, node01 Back-off pulling image "k8s.gcr.io/coredns:1.2.6"
[root@master01 ~]#

It looks like the pods were started before insecure-registry was configured, so force the pods to restart.

[root@master01 ~]# kubectl get pod coredns-85c98899b4-4dzzx -n kube-system -o yaml | kubectl replace --force -f -
pod "coredns-85c98899b4-4dzzx" deleted
pod/coredns-85c98899b4-4dzzx replaced
[root@master01 ~]#

The pod still didn't come up; keep checking:

[root@master01 ~]# kubectl describe pod coredns-85c98899b4-4dzzx -n kube-system

Events:

Type Reason Age From Message

Normal Scheduled 6m29s default-scheduler Successfully assigned kube-system/coredns-85c98899b4-fnpd7 to master01
Warning FailedCreatePodSandBox 6m26s kubelet, master01 Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "3074c719934789cef519eeae16d2eca4e272fb6bda1b157cee1dbdf2f597a59f" network for pod "coredns-85c98899b4-fnpd7": NetworkPlugin cni failed to set up pod "coredns-85c98899b4-fnpd7_kube-system" network: failed to find plugin "contrail-k8s-cni" in path [/opt/cni/bin], failed to clean up sandbox container "3074c719934789cef519eeae16d2eca4e272fb6bda1b157cee1dbdf2f597a59f" network for pod "coredns-85c98899b4-fnpd7": NetworkPlugin cni failed to teardown pod "coredns-85c98899b4-fnpd7_kube-system" network: failed to find plugin "contrail-k8s-cni" in path [/opt/cni/bin]]
Normal SandboxChanged 76s (x25 over 6m25s) kubelet, master01 Pod sandbox changed, it will be killed and re-created.

contrail-k8s-cni is missing; copy one over from node01.

[root@master01 ~]# scp root@192.168.122.250:/opt/cni/bin/contrail-k8s-cni /opt/cni/bin/

Recreate the pod again:

[root@master01 ~]# kubectl get pod coredns-85c98899b4-fnpd7 -n kube-system -o yaml | kubectl replace --force -f -
pod "coredns-85c98899b4-fnpd7" deleted
pod/coredns-85c98899b4-fnpd7 replaced
[root@master01 ~]#

Unfortunately, there was still an error after the restart:

Events:

Type Reason Age From Message

Normal Scheduled 18m default-scheduler Successfully assigned kube-system/coredns-85c98899b4-8zq9h to master01

Warning FailedCreatePodSandBox 17m kubelet, master01 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ffe9745c42750850e44035ee6413bf573148759738fc6131ce970537e03a5d13" network for pod "coredns-85c98899b4-8zq9h": NetworkPlugin cni failed to set up pod "coredns-85c98899b4-8zq9h_kube-system" network: Failed in Poll VM-CFG. Error: Failed in PollVM. Error: Get http://127.0.0.1:9091/vm-cfg/9bf51269-675b-11ea-ac43-525400c1ec4f: dial tcp 127.0.0.1:9091: connect: connection refused

The next day, kubectl commands stopped working, whether on master01 or node01.

[root@master01 ~]# kubectl get nodes
The connection to the server 192.168.122.96:6443 was refused - did you specify the right host or port?
[root@master01 ~]#

Restarting kubelet repeatedly was useless; although it was running, it kept logging errors:

[root@master01 ~]# journalctl -xe -u kubelet
Mar 17 21:57:15 master01 kubelet[28722]: E0317 21:57:15.336303 28722 kubelet.go:2236] node "master01" not found
Mar 17 21:57:15 master01 kubelet[28722]: E0317 21:57:15.425393 28722 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://192.168.122.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dma
Mar 17 21:57:15 master01 kubelet[28722]: E0317 21:57:15.426388 28722 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://192.168.122.96:6443/api/v1/services?limit=500&resourceVersion=
Mar 17 21:57:15 master01 kubelet[28722]: E0317 21:57:15.436468 28722 kubelet.go:2236] node "master01" not found
Mar 17 21:57:15 master01 kubelet[28722]: E0317 21:57:15.536632 28722 kubelet.go:2236] node "master01" not found
Mar 17 21:57:15 master01 kubelet[28722]: E0317 21:57:15.636848 28722 kubelet.go:2236] node "master01" not found
Mar 17 21:57:15 master01 kubelet[28722]: E0317 21:57:15.636961 28722 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.122.96:6443/api/v1/pods?fieldSelector=spec.nodeNam
Mar 17 21:57:15 master01 kubelet[28722]: E0317 21:57:15.737070 28722 kubelet.go:2236] node "master01" not found
Mar 17 21:57:15 master01 kubelet[28722]: E0317 21:57:15.837781 28722 kubelet.go:2236] node "master01" not found

Searching shows that many people have hit this problem.

It is said to happen when kube-apiserver is not running, but in this environment kube-apiserver cannot be started as a service:

[root@master01 ~]# systemctl start kube-apiserver
Failed to start kube-apiserver.service: Unit not found.
[root@master01 ~]#
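That "Unit not found" is actually expected: in a kubeadm-style deployment, kube-apiserver is not a systemd service but a static pod that kubelet launches from a manifest under /etc/kubernetes/manifests/, so that directory and the container list are the places to look (general kubeadm behavior; these exact commands are not from the original notes):

[root@master01 ~]# ls /etc/kubernetes/manifests/
[root@master01 ~]# docker ps | grep kube-apiserver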

Calling the northbound interface

Refer to the API documentation here:

http://www.opencontrail.org/documentation/api/r5.0/#

For example, the easiest call is getting the virtual-networks list (using the simplest username/password authentication):

[root@master01 ~]# curl -X GET -u "admin:contrail123" -H "Content-Type: application/json; charset=UTF-8" http://192.168.122.96:8082/virtual-networks
{
    "virtual-networks": [
        {"href": "http://192.168.122.96:8082/virtual-network/99c4144d-a7b7-4fb1-833e-887f21144320", "fq_name": ["default-domain", "default-project", "default-virtual-network"], "uuid": "99c4144d-a7b7-4fb1-833e-887f21144320"},
        {"href": "http://192.168.122.96:8082/virtual-network/6e90abe8-91b6-48ad-99d2-fba6c9e29de4", "fq_name": ["default-domain", "k8s-default", "k8s-default-service-network"], "uuid": "6e90abe8-91b6-48ad-99d2-fba6c9e29de4"},
        {"href": "http://192.168.122.96:8082/virtual-network/ab12e6dc-be52-407d-8f1d-37e6d29df0b1", "fq_name": ["default-domain", "default-project", "ip-fabric"], "uuid": "ab12e6dc-be52-407d-8f1d-37e6d29df0b1"},
        {"href": "http://192.168.122.96:8082/virtual-network/915156f1-cec3-44eb-b15e-742452084d67", "fq_name": ["default-domain", "k8s-default", "k8s-default-pod-network"], "uuid": "915156f1-cec3-44eb-b15e-742452084d67"},
        {"href": "http://192.168.122.96:8082/virtual-network/64a648ee-3ba6-4348-a543-07de6f225486", "fq_name": ["default-domain", "default-project", "dci-network"], "uuid": "64a648ee-3ba6-4348-a543-07de6f225486"},
        {"href": "http://192.168.122.96:8082/virtual-network/82890bf9-a8e5-4c85-a32c-e307d9447a0a", "fq_name": ["default-domain", "default-project", "__link_local__"], "uuid": "82890bf9-a8e5-4c85-a32c-e307d9447a0a"}
    ]
}
[root@master01 ~]#
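Following any href in the listing returns the full object. For example, fetching the k8s-default-pod-network by the UUID shown above (same authentication; a sketch of the same API):

[root@master01 ~]# curl -u "admin:contrail123" http://192.168.122.96:8082/virtual-network/915156f1-cec3-44eb-b15e-742452084d67 | python -m json.tool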

Redeploy

Determined now to redeploy, this time as a 1-master/2-node K8s scenario, still using the previous deployer.

Record

[root@deployer contrail-ansible-deployer]# cat install_k8s_3node.log
...
PLAY RECAP *********************************************************************
192.168.122.116 : ok=31 changed=15 unreachable=0 failed=0
192.168.122.146 : ok=23 changed=8 unreachable=0 failed=0
192.168.122.204 : ok=23 changed=8 unreachable=0 failed=0
localhost : ok=62 changed=4 unreachable=0 failed=0

[root@deployer contrail-ansible-deployer]# cat install_contrail_3node.log
...
PLAY RECAP *********************************************************************
192.168.122.116 : ok=76 changed=45 unreachable=0 failed=0
192.168.122.146 : ok=37 changed=17 unreachable=0 failed=0
192.168.122.204 : ok=37 changed=17 unreachable=0 failed=0
localhost : ok=66 changed=4 unreachable=0 failed=0

The new master's status turns out to be NotReady; check kubelet:

[root@master02 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2020-03-18 16:04:35 +08; 32min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 18801 (kubelet)
    Tasks: 20
   Memory: 60.3M
   CGroup: /system.slice/kubelet.service
           └─18801 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni

Mar 18 16:36:51 master02 kubelet[18801]: W0318 16:36:51.929447 18801 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 18 16:36:51 master02 kubelet[18801]: E0318 16:36:51.929572 18801 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready...fig uninitialized
Mar 18 16:36:56 master02 kubelet[18801]: W0318 16:36:56.930736 18801 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d

The directory /etc/cni/net.d does not exist on the master, so copy it over from node02:

[root@master02 ~]# mkdir -p /etc/cni/net.d/
[root@master02 ~]# scp root@192.168.122.146:/etc/cni/net.d/10-contrail.conf /etc/cni/net.d/10-contrail.conf
[root@master02 ~]# systemctl restart kubelet

Problem solved:

[root@master02 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
localhost.localdomain Ready <none> 35m v1.12.9
master02 Ready master 35m v1.12.9
node03 Ready <none> 35m v1.12.9
[root@master02 ~]#

If you deploy two environments with one deployer, the browser complains when opening the web UI that the certificate contains the same serial number as another certificate.

The solution can be found here:

https://support.mozilla.org/en-US/kb/Certificate-contains-the-same-serial-number-as-another-certificate

Pod status is all normal:

[root@master02 ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-85c98899b4-4vgk4 1/1 Running 0 69m 10.47.255.252 node03
coredns-85c98899b4-thpz6 1/1 Running 0 69m 10.47.255.251 localhost.localdomain
etcd-master02 1/1 Running 0 55m 192.168.122.116 master02
kube-apiserver-master02 1/1 Running 0 55m 192.168.122.116 master02
kube-controller-manager-master02 1/1 Running 0 55m 192.168.122.116 master02
kube-proxy-6sp2n 1/1 Running 0 69m 192.168.122.116 master02
kube-proxy-8gpgd 1/1 Running 0 69m 192.168.122.204 node03
kube-proxy-wtvhd 1/1 Running 0 69m 192.168.122.146 localhost.localdomain
kube-scheduler-master02 1/1 Running 0 55m 192.168.122.116 master02
kubernetes-dashboard-76456c6d4b-9s6vc 1/1 Running 0 69m 192.168.122.204 node03
[root@master02 ~]#

That is the hands-on, K8s-based Tungsten Fabric deployment. If you run into similar problems, I hope the walkthrough above helps.
