This article explains how to use Ansible to deploy a Kubernetes cluster on OpenStack. The content is straightforward and easy to follow; please read along with the editor to study "how to deploy Kubernetes clusters to OpenStack with Ansible" in depth.
The first questions to answer are: what are Kubernetes and Ansible, and why choose them?
Kubernetes (K8s) is a platform for orchestrating and managing Docker containers through its API. Beyond basic orchestration, it continuously runs control loops that drive the cluster toward the desired state specified by the user. When using this platform, you group your application containers into a composite unit called a pod. A pod is a group of containers that share network and storage. When you create Docker containers, by default each container gets its own network namespace, that is, its own TCP/IP stack. Kubernetes uses Docker's --net=container:<id> setting to combine the network namespaces of all of a pod's containers. This setting allows one container to reuse the network stack of another container. K8s does this by creating a pod-level holding container with its own network stack, and all the pod's containers are configured to reuse the holding container's network namespace.
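To make the pod concept concrete, here is a minimal sketch of a pod definition; the names and images are illustrative and not part of the original setup. Because both containers share one network namespace, the web container can reach the cache container at localhost:6379.

apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache            # illustrative pod name
spec:
  containers:
  - name: web                     # application container
    image: nginx:1.9
    ports:
    - containerPort: 80
  - name: cache                   # shares the pod's network namespace with "web"
    image: redis:3.0
    ports:
    - containerPort: 6379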
At the pod level, Kubernetes provides a variety of services, such as scheduling, replication, self-healing, monitoring, naming/discovery, identity, and authentication/authorization. Kubernetes also has a pluggable design that allows developers to write their own modules and build services on top of the platform. As of this writing, Kubernetes is one of the most advanced open source platforms for running and managing Docker containers.
We chose Ansible because it is one of the most popular, most direct, and easiest-to-use automation platforms today. It is agentless: it logs in to systems over ssh and applies the policies you describe in a playbook file. These policies are modeled as task lists in YAML format. Without automation, these are the manual tasks an administrator would have to perform to deploy the infrastructure software.
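Purely as an illustration of the format (this is not one of the playbooks used later), a playbook is just a YAML file describing hosts and a list of tasks:

# illustrative playbook: install and start Docker on hosts in a "workers" group
- hosts: workers
  become: yes                     # escalate privileges with sudo on the targets
  tasks:
    - name: Install the Docker engine
      apt:
        name: docker.io
        state: present
        update_cache: yes
    - name: Ensure the Docker service is running
      service:
        name: docker
        state: started
        enabled: yes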
This blog post describes a Kubernetes cluster running on OpenStack virtual machines. A K8s cluster has a master node that runs the API server and a set of worker nodes that run the pod containers. The setup uses Ansible (>2.0), Ubuntu, and Neutron networking, and was tested against the OpenStack Kilo release. Ansible deploys the K8s software components, launches the virtual machines, classifies them into master and worker nodes, and then deploys the Kubernetes manifests. We use Neutron to provide network connectivity to both the OpenStack virtual machines and the K8s pod containers. All virtual machines in the test environment run the Ubuntu 14.04 server operating system.
The following figure shows the various software components that run in the cluster and how they interact. I will refer to this diagram to illustrate the automation, and it will make more sense as you read through this post.
Setup
This setup assumes that you already have an OpenStack cloud running the core services Nova, Neutron, Glance, and Keystone. You also need Ansible version > 2.x on a node with ssh credentials and network connectivity to the compute nodes and virtual machines. This Ansible node also needs to be able to reach the OpenStack APIs. I installed Ansible on my MacBook with these commands:
sudo easy_install pip
sudo pip install ansible
After you have installed Ansible, verify the version with the command "ansible --version". It should output a 2.x release.
Kubernetes Cluster Deployment
Automated cluster configuration is driven by three Ansible playbooks. You can pull the playbooks, templates, and code from https://github.com/naveenjoy/microservices. The three playbooks are:
launch-instances.yml - launches the Kubernetes cluster instances
deploy-docker.yml - deploys Docker on all of the cluster instances
deploy-kubernetes.yml - deploys the Kubernetes control and worker software components and brings up the cluster
All playbooks get their input variables from a file called settings.yml, referred to as the settings file. The node dictionary and its metadata (also known as tags) in the settings file specify the names of the nodes in the cluster; the tags are injected into the nodes when the instances are launched. These tags are used by the cloud inventory script (https://github.com/naveenjoy/microservices/blob/master/scripts/inventory.py) to classify nodes as masters and workers when running the playbooks. For example, a node whose ansible_host_groups tag equals k8s_master is classified as a master node, while a tag value of k8s_worker classifies it as a worker. The settings file also includes a dictionary called os_cloud_profile, which provides Ansible with the nova virtual machine launch settings; a rough sketch of such a settings file is shown below.
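This sketch is only meant to convey the shape of the file; apart from the ansible_host_groups tag and the os_cloud_profile dictionary mentioned above, the keys and values are illustrative, and the authoritative version is settings.yml in the repo.

# settings.yml (illustrative sketch, not the repo's exact contents)
nodes:
  master1:
    ansible_host_groups: k8s_master     # picked up by scripts/inventory.py
  worker1:
    ansible_host_groups: k8s_worker
  worker2:
    ansible_host_groups: k8s_worker
os_cloud_profile:                       # nova launch settings used by launch-instances.yml
  image_name: ubuntu-14.04-server
  flavor_name: m1.medium
  key_name: my-keypair
  network_name: private-net1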
To launch the instances, run the playbook as follows:

ansible-playbook -i hosts launch-instances.yml
If all goes well, you will see all of the Nova instances created on the OpenStack cloud. These instances provide the underlying infrastructure for the K8s cluster. After launching the instances, you run the remaining playbooks to deploy Docker and Kubernetes. While these playbooks run, the inventory.py script is used as a dynamic inventory to classify the nodes, so that the control and worker components are deployed to the correct virtual machines.
Run the playbooks as follows:

ansible-playbook -i scripts/inventory.py deploy-docker.yml
ansible-playbook -i scripts/inventory.py deploy-kubernetes.yml
The control plane of the K8s cluster, consisting of the API server, the scheduler, the etcd database, and the kube-controller-manager, is deployed through a master manifest file. The template file master-manifest.j2 can be found in the templates folder. The version of the K8s control plane software is determined by the settings file. The deploy-kubernetes.yml playbook first downloads and deploys the kubelet and kube-proxy binaries and starts both services on all nodes. The master-manifest template is then rendered into a config directory called /etc/kubernetes/manifests on the master node. The kubelet daemon watches this directory and launches all the Docker containers that provide the control plane services. When you run the docker ps command, you will see kube-apiserver, kube-controller-manager, etcd, and kube-scheduler running in their own containers on the master node.
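The repo's master-manifest.j2 is a Jinja2 template, but the general shape of a static manifest that the kubelet picks up from that directory is roughly the following; the image version, flags, and file paths are illustrative rather than the template's exact contents.

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
spec:
  hostNetwork: true                 # control plane containers use the node's network
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:v1.2.0   # illustrative version
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers=http://127.0.0.1:4001
    - --service-cluster-ip-range=10.254.0.0/16         # the service/cluster CIDR discussed later
    - --secure-port=443
    - --tls-cert-file=/srv/kubernetes/server.crt       # server cert generated by make-ca-cert.sh
    - --tls-private-key-file=/srv/kubernetes/server.key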
The API server is configured to serve the API over HTTPS. The certificates required for SSL are generated by running the make-ca-cert.sh script as one of the playbook tasks. This script generates the following certificates in the cert directory on each node; they are generated on every node because the Docker daemon also uses the same server certificate for its own TLS configuration. The cert file directory is also configurable in the settings file.
ca.pem - the self-signed CA certificate
server.crt / server.key - the signed kube server certificate and its key file. This cert is also used by the Docker daemon to secure client access.
cert.pem / key.pem - the signed client certificate and its key file, used by the kubectl and docker clients.
On the client, you can find these certs in the certs folder of the repo. A Docker environment file is also created on the client for each node, named after the node (for example master1.env). You can source this environment file and then run the Docker client against that node's Docker host; for example, to run Docker commands on a master node named master1, first execute "source master1.env" and then run the command. Similarly, for the kubectl client, a config file is created with the necessary credentials and the cluster master's IP address. This config file can be found at $HOME/.kube/config, which allows you to run kubectl commands against the cluster from your terminal window.
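Assuming the certificate names above, the generated kubeconfig would look roughly like the sketch below; the server address and file paths are illustrative.

# $HOME/.kube/config (illustrative sketch)
apiVersion: v1
kind: Config
clusters:
- name: k8s-cluster
  cluster:
    server: https://192.168.1.10:443          # master IP address (illustrative)
    certificate-authority: /path/to/certs/ca.pem
users:
- name: admin
  user:
    client-certificate: /path/to/certs/cert.pem
    client-key: /path/to/certs/key.pem
contexts:
- name: default
  context:
    cluster: k8s-cluster
    user: admin
current-context: default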
In this blog post, I describe how to use the OpenStack Neutron service to network the K8s pods. This is somewhat similar to the GCE setup. You could instead choose something like Flannel, which uses UDP encapsulation to create an overlay network for routing pods over the existing Neutron tenant network. Using Neutron for the pod network removes this overlay-over-an-overlay architecture for the containers.
It is important to note that each pod (that is, a group of containers) in K8s has its own IP address. This differs from the default networking model in Docker, where each container gets a private, host-local IP address. For K8s networking to work, pod IP addresses must be routable without NAT. This means two things:
a) When a pod container communicates with containers in other pods, the traffic must be routed directly, without NAT.
b) When a pod container communicates with the IP address of a virtual machine, the traffic must be routed directly, without NAT.
To achieve this, the first step is to replace the default Docker bridge named docker0 with a Linux bridge named cbr0 on each node. Across all nodes, an IP block, such as a /16, is assigned to the pod network. This block is subnetted, and a node-to-pod-CIDR mapping is created in the settings file. In the diagram above, I assigned 10.1.0.0/16 to the pod network and created the following mapping:
Node1: 10.1.1.1/24
Node2: 10.1.2.1/24
NodeN: 10.1.n.1/24
The create-bridge.sh script creates cbr0 and configures it with the pod subnet IP address using the mapping defined in the settings file.
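The script itself is in the repo; as an illustration only, the equivalent steps could be written as Ansible shell tasks like these. The cbr0 name and the 10.1.1.1/24 address for node1 come from the text above; the group name is illustrative, and the sketch is not idempotent the way the real script is.

# illustrative play, not the repo's create-bridge.sh
- hosts: k8s_nodes              # illustrative group name
  become: yes
  tasks:
    - name: Create the cbr0 Linux bridge
      shell: brctl addbr cbr0
      args:
        creates: /sys/class/net/cbr0      # skip if the bridge already exists
    - name: Assign this node's pod subnet address to cbr0 (node1 in the example mapping)
      shell: ip addr add 10.1.1.1/24 dev cbr0
    - name: Bring the bridge up
      shell: ip link set dev cbr0 up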
The second step is to configure the tenant router to route traffic to the pod subnets. For example, in the block diagram above, the tenant router must be configured with a route to the pod subnet 10.1.1.0/24 whose next hop is node #1's address on private-subnet#1. Similarly, a route must be added for every node in the cluster so that traffic destined for its pod subnet is routed to that node. The add_neutron_routes.py script completes this step.
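The script drives the Neutron API; conceptually, the same result can be achieved with the neutron CLI of that era, for example wrapped in an Ansible task like the sketch below. The router name, subnet, and next-hop address are illustrative.

# illustrative: add a static route on the tenant router for node1's pod subnet
- hosts: localhost              # runs the neutron CLI with your OpenStack credentials
  tasks:
    - name: Route pod subnet 10.1.1.0/24 via node1's private IP
      shell: >
        neutron router-update tenant-router
        --routes type=dict list=true
        destination=10.1.1.0/24,nexthop=192.168.1.11
      # --routes replaces the router's whole extra-routes list, so the real
      # script builds the complete list for all nodes at once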
The third step is to add an iptables rule that masquerades (SNATs) traffic going from the pod subnets to destinations outside the pod network. This is needed because the Neutron tenant router does not know that it has to SNAT traffic coming from the pod subnets.
The final step is to enable IP forwarding in the Linux kernel of each node so that packets can be routed into the bridged container network. These tasks are performed by the deploy-kubernetes.yml playbook.
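A rough sketch of what these last two steps amount to, written as Ansible tasks; the 10.1.0.0/16 pod network is the example value from above, and the plain iptables command is shown for clarity rather than the playbook's exact implementation.

# illustrative tasks for SNAT and IP forwarding
- hosts: k8s_nodes              # illustrative group name
  become: yes
  tasks:
    - name: Masquerade traffic leaving the pod network for outside destinations
      shell: iptables -t nat -A POSTROUTING -s 10.1.0.0/16 ! -d 10.1.0.0/16 -j MASQUERADE
      # note: appending blindly is not idempotent; re-runs add duplicate rules
    - name: Enable IP forwarding in the kernel
      sysctl:
        name: net.ipv4.ip_forward
        value: "1"
        state: present
        reload: yes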
The end result of running this playbook is that the neutron network is now programmed to route traffic from pods to the network.
Note: by default, as an anti-spoofing security measure, Neutron installs iptables firewall rules on the hypervisor to control traffic to and from each virtual machine port. Therefore, when routed traffic destined for the pod network arrives at a virtual machine's port, it is filtered by the hypervisor firewall. Fortunately, there is a Neutron extension called AllowedAddressPairs, available since the Havana release, that allows additional address blocks such as the pod subnets to pass through the hypervisor firewall.
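In practice, this means updating each node's Neutron port so its pod subnet is listed as an allowed address pair; with the neutron CLI of that era it looks roughly like the following (the port ID placeholder and CIDR are illustrative).

# illustrative: let node1's port carry traffic for its pod subnet
- hosts: localhost              # runs the neutron CLI with your OpenStack credentials
  tasks:
    - name: Add node1's pod subnet to the allowed address pairs of its port
      shell: >
        neutron port-update <node1-port-id>
        --allowed-address-pairs type=dict list=true
        ip_address=10.1.1.0/24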
Expose Pod Services
For practical purposes, each pod must be fronted by a service abstraction. The service provides a stable IP address for the applications running in the pod's containers. This is needed because a pod can be scheduled on any node and gets its IP address from that node's assigned node_pod_cidr range. Likewise, when you scale pods up or down to accommodate traffic changes, or when failed pods are recreated by the platform, their IP addresses change. From the client's point of view, the service abstraction ensures that the IP address stays fixed. It is important to note that the service CIDR, or cluster_cidr, only lives locally on each node and does not need to be routed by the Neutron tenant router. Service IP traffic is distributed to the backing pods by a K8s proxy function (kube-proxy), which is usually implemented with iptables on each node.
This stable service IP can be exposed outside the cluster using Kubernetes' NodePort capability. What NodePort does is use the worker node's IP address and a high TCP port, such as 31000, to expose the service IP address and port. So if you assign a floating IP to the node, the application serves traffic at that IP and the node port. If you use a Neutron load balancer, add the worker nodes as members and program the VIP to distribute traffic across the node ports. This approach is shown in the block diagram above.
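For example, a minimal NodePort service definition looks roughly like this; the name, selector, and port numbers are illustrative.

apiVersion: v1
kind: Service
metadata:
  name: web                    # illustrative service name
spec:
  type: NodePort
  selector:
    app: web                   # matches pods labeled app=web
  ports:
  - port: 80                   # stable service IP port
    targetPort: 80             # container port inside the pod
    nodePort: 31000            # high port exposed on every worker node's IP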
Service Discovery
Service discovery can be fully automated with the cluster DNS add-on service, which you can deploy using the skydns manifest and skydns service definitions. K8s automatically assigns a DNS name to each service defined in the cluster, so a program running in a pod can query the cluster DNS server to resolve service names and locations. The cluster DNS service supports A and SRV record lookups.
Thank you for reading. The above covers "how to deploy Kubernetes clusters to OpenStack with Ansible". After studying this article, I believe you have a deeper understanding of how to deploy Kubernetes clusters to OpenStack with Ansible, though the specifics still need to be verified in practice. The editor will continue to push more articles on related knowledge points for you; welcome to follow!