This article is from Rancher Labs
Introduction
Support for Windows reached GA in Kubernetes 1.14. That milestone reflects the efforts of a group of excellent engineers from Microsoft, Pivotal, VMware, Red Hat, and the now-defunct Apprenda. During my time at Apprenda, I contributed to the sig-windows community from time to time, and even now at Rancher Labs I keep an eye on its progress. So when the company decided to add Windows support to Rancher, I was extremely excited.
Rancher 2.3 was released earlier this month and is the first GA Kubernetes management platform to support Windows containers. It greatly reduces the complexity of running Windows containers in the enterprise and provides a quick path to modernizing legacy Windows-based applications, whether they run on-premises or in a multi-cloud environment. Rancher 2.3 can containerize these applications and turn them into efficient, secure, and portable multi-cloud applications, with no need to rewrite them.
In this article, we will configure a Rancher cluster on top of a Kubernetes cluster built with RKE, and then configure a cluster that supports both Linux and Windows containers. Once that is done, we will talk about OS targeting, since the Kubernetes scheduler needs to be told, via node labels and selectors, where Linux and Windows containers should land.
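To make that concrete, here is a hedged preview (not part of the original walkthrough) of pinning a workload to Windows nodes, expressed with the Terraform kubernetes provider; the deployment name and IIS image are illustrative assumptions:

resource "kubernetes_deployment" "iis-example" {
  metadata {
    name = "iis-example" # hypothetical name
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "iis-example"
      }
    }

    template {
      metadata {
        labels = {
          app = "iis-example"
        }
      }

      spec {
        # Steer the pod onto Windows nodes; with this selector set,
        # the scheduler will never place it on a Linux node.
        node_selector = {
          "kubernetes.io/os" = "windows"
        }

        container {
          name  = "iis"
          image = "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019"
        }
      }
    }
  }
}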
Our goal is to do all of this in a fully automated way. Since this support in Rancher 2.3 is not yet stable, this setup cannot be used in a production environment, but if you need to automate your infrastructure with Azure and Rancher, it is a good start for your team. And even if you don't use Azure, many of the concepts and much of the code in this example can be applied to other environments.
Considering Windows vs. Linux
Before we officially begin, there are a number of caveats and "gotchas" to be aware of. First, the most obvious: Windows is not Linux. Windows adds the subsystems required to support containerized applications and their networking; these are proprietary to the Windows operating system and are implemented by the Windows Host Network Service and the Windows Host Compute Service. There are significant differences in operating system and underlying container runtime configuration, troubleshooting, and operations. Moreover, Windows nodes are subject to Windows Server licensing, and Windows container images are subject to the supplemental license terms for Windows containers.
Windows OS version
The Windows OS version must be matched to a compatible container image version, a requirement unique to Windows. Hyper-V isolation could overcome this problem, but as of Kubernetes 1.16, Hyper-V isolation is not supported by Kubernetes. Therefore, Kubernetes and Rancher only work with Windows Server Core 1809 or Windows Server 2019 (build 17763) and later, running Docker EE-basic 18.09.
Persistence support and CSI plug-ins
As of Kubernetes 1.16, CSI plug-in support is in alpha. Windows nodes do, however, support a large number of in-tree and FlexVolume drivers.
CNI plug-in
Rancher's support is limited to the host-gateway (L2Bridge) and VXLAN (Overlay) network backends provided by flannel. In our scenario, we will use the default, VXLAN, because the host-gateway option requires User Defined Routes to be configured when the nodes are not all on the same network, and that configuration depends on the provider; VXLAN keeps things simple. According to the Kubernetes documentation, this is alpha-level support, and there is currently no open source Windows network plug-in that supports the Kubernetes network policy API.
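For reference, here is a hedged sketch of how the flannel VXLAN backend might be selected in an RKE cluster definition using the community RKE Terraform provider; it assumes the provider's network block mirrors RKE's cluster.yml options, so treat the option key as an assumption:

resource "rke_cluster" "cluster" {
  # node definitions elided for brevity

  network {
    plugin = "flannel"

    options = {
      # VXLAN works across networks without User Defined Routes
      # (key name assumed from RKE's cluster.yml schema)
      flannel_backend_type = "vxlan"
    }
  }
}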
Other restrictions
Please make sure you have read the Kubernetes documentation, because there are many features that Windows containers cannot implement, or that behave differently from the equivalent feature on Linux.
Kubernetes documentation:
https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#limitations
Infrastructure as code
Automation is the first step toward DevOps. We will automate the infrastructure of our Rancher cluster and the Azure nodes to be provisioned into that cluster.
Terraform
Terraform is an open source infrastructure-as-code orchestration tool that supports almost every cloud service provider on the market. Today we will use it to automate this configuration. Make sure the Terraform version you are running is at least 0.12; in this article, we use v0.12.9.
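You can have Terraform enforce that minimum itself; a minimal sketch (this block is not from the original repository):

terraform {
  # refuse to run with a CLI older than the 0.12 syntax
  # this example depends on
  required_version = ">= 0.12"
}

Checking the CLI confirms the running version: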
$ terraform version
Terraform v0.12.9

RKE Provider
The RKE provider for Terraform is a community project, not officially developed by Rancher, but it is used by many Rancher engineers, including me. Because this is a community provider rather than an official Terraform provider, you need to install the latest release into your Terraform plugin directory. For most Linux distributions, you can use the setup-rke-terraform-provider.sh script included in this article's repository.
Rancher Provider
The Rancher2 provider for Terraform is an officially supported Terraform provider that automates Rancher through the Rancher REST API. We will use it to create a Kubernetes cluster from virtual machines, which we create in Terraform using the Azure Resource Manager and Azure Active Directory providers.
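A hedged sketch of the provider configuration; the URL and token variable are placeholders, not values from the article:

provider "rancher2" {
  # hypothetical endpoint; point this at your Rancher server
  api_url   = "https://rancher.example.com/v3"
  # hypothetical variable holding a Rancher API bearer token
  token_key = var.rancher-api-token
}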
Format for this example
Each step of the Terraform run in this article is split into sub-modules. This improves the reliability of the modules and lets you reuse them later when building other automation.
Part 1: Setting up the Rancher cluster

Logging into Azure
The Azure Resource Manager and Azure Active Directory Terraform providers will use an active Azure CLI login to access Azure. They can use other authentication methods, but in this example I log in before running Terraform.
$ az login
Note, we have launched a browser for you to login. For old experience with device code, use "az login --use-device-code"
You have logged in. Now let us find all the subscriptions to which you have access...
[
  {
    "cloudName": "AzureCloud",
    "id": "14a619f7-a887-4635-8647-d8f46f92eaac",
    "isDefault": true,
    "name": "Rancher Labs Shared",
    "state": "Enabled",
    "tenantId": "abb5adde-bee8-4821-8b03-e63efdc7701c",
    "user": {
      "name": "jvb@rancher.com",
      "type": "user"
    }
  }
]

Setting up the Resource Group
An Azure Resource Group is the scope in which the Rancher cluster's nodes and other virtual hardware live. We will actually create two groups: one for the Rancher cluster and one for the Kubernetes cluster. That is done in the resource-group module.
https://github.com/JasonvanBrackel/cattle-drive/tree/master/terraform-module/resourcegroup-module
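Invoking the module once per group might look like the following hedged sketch; the module path and group names are assumptions based on the linked repository:

module "rancher-resource-group" {
  source     = "./terraform-module/resourcegroup-module" # assumed path
  group-name = "rancher-cluster-rg"                      # hypothetical name
  region     = var.region
}

module "k8s-resource-group" {
  source     = "./terraform-module/resourcegroup-module" # assumed path
  group-name = "k8s-cluster-rg"                          # hypothetical name
  region     = var.region
}

Inside the module, each group is a single resource: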
Resource "azurerm_resource_group"resource-group" {name = var.group-name location = var.region} set hardware
Virtual network
We will need a virtual network and a subnet. We will use the network-module to set each of them up in its respective resource group.
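The module boils down to something like this hedged sketch; the resource names and address ranges are assumptions, chosen to match the 10.0.1.0/24 subnet used later:

resource "azurerm_virtual_network" "vnet" {
  name                = "rke-vnet" # hypothetical name
  address_space       = ["10.0.0.0/16"]
  location            = var.resource-group.location
  resource_group_name = var.resource-group.name
}

resource "azurerm_subnet" "subnet" {
  name                 = "rke-subnet" # hypothetical name
  resource_group_name  = var.resource-group.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  # matches the CIDR used with cidrhost() during NIC creation
  address_prefix       = "10.0.1.0/24"
}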
We will use the node-module to set up each node. Since each node needs Docker installed, we run a cloud-init file that uses Rancher's install-docker script to provision Docker. The script detects the Linux distribution and installs Docker accordingly.
os_profile {
  computer_name  = "${local.prefix}-${count.index}-vm"
  admin_username = var.node-definition.admin-username
  custom_data    = templatefile("./cloud-init.template", {
    docker-version    = var.node-definition.docker-version,
    admin-username    = var.node-definition.admin-username,
    additionalCommand = "${var.commandToExecute} --address ${azurerm_public_ip.publicIp[count.index].ip_address} --internal-address ${azurerm_network_interface.nic[count.index].ip_configuration[0].private_ip_address}"
  })
}

#cloud-config
repo_update: true
repo_upgrade: all
runcmd:
  - [sh, -c, "curl https://releases.rancher.com/install-docker/${docker-version}.sh | sh && sudo usermod -aG docker ${admin-username}"]
  - [sh, -c, "${additionalCommand}"]
For these nodes, the additionalCommand block in the template is populated with sleep 0, but later the same hook will be used on Linux nodes to join Rancher-managed custom cluster nodes to the platform; see the sketch below.
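For illustration only, a hedged sketch of the kind of value commandToExecute might later carry; the server URL and token are placeholders, and the real agent command should be copied from the Rancher UI:

variable "commandToExecute" {
  # hypothetical default; real values come from Rancher's
  # "Add Cluster > Custom" screen
  default = "docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.0 --server https://rancher.example.com --token <registration-token> --worker"
}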
Setting up the nodes
Next, we will create several sets of nodes, one for each role: control plane, etcd, and worker. There are a few things to account for, because Azure handles its virtual networks in a particular way: it reserves the first few IPs of each subnet for its own use, which we must consider when assigning static IPs. That is the offset of 4 in the NIC creation below, and since we also manage the subnet's IP allocation ourselves, we apply the offset to every address.
Resource "azurerm_network_interface"nic" {count = var.node-count name = "${local.prefix}-${count.index}-nic" location = var.resource-group.location resource_group_name = var.resource-group.name ip_configuration {name = "${local.prefix}-ip-config-$ {count.index } "subnet_id = var.subnet-id private_ip_address_allocation =" static "private_ip_address = cidrhost (" 10.0.1.0 example 24 " Count.index + var.address-starting-index + 4) public_ip_address_id = azurerm_public_ ip.publicIp [count.index] .id}} Why not use dynamic allocation for private IP?
Azure's Terraform provider cannot report a dynamically assigned IP address until the node is created and fully provisioned. With static assignment, we can use the addresses while generating the RKE cluster. There are other ways around this, such as splitting the infrastructure provisioning into multiple runs, but for simplicity's sake the IP addresses are managed statically.
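To make the offset arithmetic concrete, a small sketch (the output name is illustrative):

output "first-node-private-ip" {
  # node 0 with a starting index of 0 skips Azure's reserved
  # addresses and lands on the fifth host address in the subnet
  value = cidrhost("10.0.1.0/24", 0 + 0 + 4) # => "10.0.1.4"
}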
Setting up the front-end load balancer
By default, the Rancher installer places an ingress controller on every worker node, which means we should load-balance traffic across all of the available worker nodes. We will also take advantage of Azure's ability to create a public DNS entry for the public IP, and use that entry for the cluster. This is done in the loadbalancer-module.
https://github.com/JasonvanBrackel/cattle-drive/tree/master/terraform-module/loadbalancer-module
Resource "azurerm_public_ip"frontendloadbalancer_publicip" {name = "rke-lb-publicip" location = var.resource-group.location resource_group_name = var.resource-group.name allocation_method = "Static" domain_name_label = replace (var.domain-name-label,. ","-")}
As an alternative, the repository contains code that uses Cloudflare DNS. We did not use that approach in this article, but you may want to give it a try. If you do, you will need to flush your DNS cache or add a hosts-file entry so that your local machine can reach Rancher when using the Rancher Terraform provider.
Provider "cloudflare" {email = "${var.cloudflare-email}" api_key = "${var.cloudflare-token}"} data "cloudflare_zones"zones" {filter {name = "${replace (var.domain-name," .com ") "")}. * "# Modify for other suffixes status =" active "paused = false}} # Add a record to the domainresource" cloudflare_record "" domain "{zone_id = data.cloudflare_zones.zones.zones [0] .id name = var.domain-name value = var.ip-address type =" A "ttl =" 120 "proxied =" false "}
Install Kubernetes using RKE
We will use the nodes created in Azure, Terraform's dynamic blocks, and the open source RKE Terraform provider to create an RKE cluster. A hedged skeleton of the enclosing resource appears below, followed by the article's control-plane node block.
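The dynamic nodes blocks live inside a single rke_cluster resource, one per role. This skeleton is not runnable as-is (the node blocks are elided), and the kubeconfig attribute name and local_file step are assumptions based on the community provider:

provider "rke" {}

resource "rke_cluster" "rancher-cluster" {
  # one dynamic "nodes" block per role (controlplane, etcd,
  # worker); the control-plane block is shown in full below
}

# persist the generated kubeconfig for use with kubectl and Helm
resource "local_file" "kube-config" {
  filename = "./kube_config_cluster.yml"
  content  = rke_cluster.rancher-cluster.kube_config_yaml # attribute name assumed
}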
dynamic "nodes" {
  for_each = module.rancher-control.nodes
  content {
    address          = module.rancher-control.publicIps[nodes.key].ip_address
    internal_address = module.rancher-control.privateIps[nodes.key].private_ip_address
    user             = module.rancher-control.node-definition.admin-username
    role             = ["controlplane"]
    ssh_key          = file(module.rancher-control.node-definition.ssh-keypath-private)
  }
}

Install Tiller using RKE
There are many ways to install Tiller. You can follow the method in Rancher's official documentation, but in this tutorial we will take advantage of RKE's add-ons feature.
Official documentation:
https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-init/
addons =
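The addons snippet is truncated in the source. For orientation, a hedged sketch of what such a block conventionally contains: the standard Tiller ServiceAccount and ClusterRoleBinding manifests embedded as a heredoc (this is the usual Tiller RBAC setup, not necessarily the article's exact block):

addons = <<EOL
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOL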