How do you use K3s and Raspberry Pi boards to build a lightweight Kubernetes bare metal cluster for production? Many newcomers are unclear about this, so this article explains the process in detail. Anyone with this need can follow along, and hopefully you will get something out of it.
Boogie Software is a well-known European financial technology company. For years it has provided banks with innovative services such as fintech, AI, high-performance big-data back ends, mobile applications, data analytics and UX to help them drive their digital transformation. With more than a decade of experience in this field, Boogie has become a leading provider of digital banking services.
Boogie Software's IT team uses Kubernetes and container technology in many customer banks' core banking digitization projects, so we are always thinking about how to run Kubernetes locally on more suitable hardware. In this article, I'll describe in detail how we built a lightweight bare metal cluster on Raspberry Pi boards to run applications and services on our corporate network.
We did this for two reasons: first, by building the cluster we get a platform on which to run applications and services reliably and flexibly within the company network; second, it is a good opportunity to learn more about Kubernetes, microservices and containers. If you want to build a similar system based on our experience, I suggest you know at least the basics of Docker containers, the key Kubernetes concepts (nodes, pods, services, deployments, etc.) and IP networking.
Hardware preparation
You need to prepare the following equipment:
At least one Raspberry Pi, model 2B, 3B or 3B+. You can even run some applications on a single board, but two or more boards are recommended to spread the load and add redundancy.
Power supplies and SD cards for the Raspberry Pis, an existing Ethernet switch or some free ports on one, and a few cables.
In our setup we currently have four Raspberry Pi 3 Model B+ boards, so the cluster has one master/server node and three agent nodes. A case for the Raspberry Pis is also a nice touch; a colleague of ours designed one with a 3D printer. The back of the case holds two cooling fans, and each board sits on a tray that can be hot-swapped for maintenance. Each tray also has positions at the front for an activity/heartbeat LED and a power switch, both wired to the board's GPIO header.
Software preparation
For the Kubernetes implementation we use K3s, a lightweight CNCF-certified Kubernetes distribution released by Rancher Labs. Although it is a young product, it is remarkably stable and easy to use, and it starts in seconds. What sets K3s apart from other lightweight Kubernetes distributions is that it is ready for production use, whereas projects such as microk8s or Minikube are not, and it is lightweight enough to run well on ARM-based hardware. With K3s, everything you need to install Kubernetes on any device is packed into a single 40MB binary.
K3s works well on almost any Linux distribution, so we decided to use Raspbian Stretch Lite as the base OS because we did not need any additional services or a desktop UI on the boards. K3s does need cgroups enabled in the Linux kernel, which on Raspbian can be done by adding the following parameters to /boot/cmdline.txt:
cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
Install K3s
One very friendly thing about K3s is its smooth installation process. Once your server hardware is ready, setup takes only a few minutes, because installing the server (master node) takes a single command:
curl -sfL https://get.k3s.io | sh -
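Before joining the other nodes, it can help to confirm that the server came up and to read the join token. This is a minimal sketch; the systemd unit name comes from the standard K3s install script and the token path is the one described below:

# Check that the K3s server service is running
sudo systemctl status k3s

# Print the token that the agent nodes will need in order to join
sudo cat /var/lib/rancher/k3s/server/node-token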
The same is true for the agent (worker) nodes:
curl -sfL https://get.k3s.io | K3S_TOKEN=<token_from_server> K3S_URL=https://<server_ip>:6443 sh -
Here token_from_server is the content of the file /var/lib/rancher/k3s/server/node-token on the server, and server_ip is the IP address of the server node. At this point our cluster is up and running, and we can start deploying workloads:
root@k3s-server:~# kubectl get nodes
NAME         STATUS   ROLES   AGE    VERSION
k3s-node1    Ready            40s    v1.13.4-k3s.1
k3s-server   Ready            108s   v1.13.4-k3s.1
To manage and monitor the cluster, we installed Kubernetes Dashboard, which provides a very convenient web interface for viewing the state of the whole system, performing administrative actions and accessing logs. Installing and running kubectl locally also helps, because it lets you manage the cluster from your own computer without having to ssh into it. To do this, just install kubectl and copy the cluster information from /etc/rancher/k3s/k3s.yaml on the server node into your local kubeconfig file (usually ${HOME}/.kube/config).
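A minimal sketch of that copy step, assuming SSH access to the master (the host name k3s-server is illustrative) and that the kubeconfig file is readable by the SSH user. K3s writes the kubeconfig with the server address set to 127.0.0.1, so it needs to be pointed at the master's real IP:

# Copy the K3s-generated kubeconfig from the master to your workstation
scp pi@k3s-server:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Replace the loopback address with the master node's IP (adjust to your network)
sed -i 's/127.0.0.1/<server_ip>/' ~/.kube/config

# Verify that kubectl can now reach the cluster
kubectl get nodes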
Expose services using load balancers
By default, applications deployed on a Kubernetes cluster are only reachable from inside the cluster (the default service type is ClusterIP). If you want to reach an application from outside the cluster, you have two options. You can configure the service with the NodePort type, which exposes the service on a static port on each node's IP, or you can use a load balancer (service type LoadBalancer). NodePort services are limited, however: they use their own dedicated port range, and applications can only be distinguished by port number. K3s has a simple load balancer built in, but because it uses the nodes' IP addresses, we would quickly run out of IP/port combinations and could not bind services to virtual IPs. For these reasons, we decided to deploy MetalLB, a load balancer implementation for bare metal clusters.
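For comparison, this is roughly what the NodePort alternative would look like; the application name and port numbers here are hypothetical and this is not part of our final setup:

apiVersion: v1
kind: Service
metadata:
  name: my-web-app-nodeport    # hypothetical name, for illustration only
spec:
  type: NodePort
  selector:
    app: my-web-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080            # must fall in the cluster's NodePort range (default 30000-32767)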
Installing MetalLB only requires applying a YAML manifest. The easiest way to run MetalLB in an existing network is to use the so-called layer 2 mode, in which the cluster nodes announce the virtual IPs of services on the local network via the ARP protocol. For this purpose, we reserved a small range of IP addresses from our internal network for cluster services. The MetalLB configuration looks as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: company-office
      protocol: layer2
      addresses:
      - 10.10.10.50-10.10.10.99
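Assuming the configuration above is saved to a local file (the file name here is only illustrative), it is applied like any other manifest:

kubectl apply -f metallb-config.yaml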
With this configuration, cluster services will be exposed on addresses in the range 10.10.10.50-10.10.10.99. To bind a service to a specific IP, you can use the loadBalancerIP parameter in the service manifest:
apiVersion: v1
kind: Service
metadata:
  name: my-web-app
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  loadBalancerIP: 10.10.10.51
  selector:
    app: my-web-app
  type: LoadBalancer
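Once the service is created, a quick way to confirm that MetalLB handed out the requested address is to check the EXTERNAL-IP column:

kubectl get svc my-web-app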
We faced a few challenges with load balancing. For example, Kubernetes does not allow a single load balancer service to use both TCP and UDP ports. To work around this, you can define two service instances, one for the TCP ports and one for the UDP ports. The downside is that, unless you enable IP address sharing, the two services have to run on different IP addresses. Also, since MetalLB is a young project it still has some minor issues, but we believe these will be resolved soon.
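A minimal sketch of that workaround using MetalLB's IP-sharing annotation (metallb.universe.tf/allow-shared-ip); the service names, sharing key and ports below are hypothetical:

# TCP half of the application, pinned to a shared address
apiVersion: v1
kind: Service
metadata:
  name: my-app-tcp                               # hypothetical name
  annotations:
    metallb.universe.tf/allow-shared-ip: "my-app"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.10.60
  selector:
    app: my-app
  ports:
  - name: tcp-port
    port: 5000
    protocol: TCP
---
# UDP half of the same application, sharing the same external IP
apiVersion: v1
kind: Service
metadata:
  name: my-app-udp                               # hypothetical name
  annotations:
    metallb.universe.tf/allow-shared-ip: "my-app"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.10.60
  selector:
    app: my-app
  ports:
  - name: udp-port
    port: 5000
    protocol: UDP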
Add storage
K3s does not have a built-in storage solution yet, so for pods to access persistent file storage we need to create one using a Kubernetes plug-in. Because one of Kubernetes' goals is to decouple applications from the infrastructure and make them portable, the storage abstraction layer is defined in Kubernetes using the concepts of PersistentVolume (PV) and PersistentVolumeClaim (PVC); a detailed explanation of these concepts can be found in our earlier article on the key concepts of Kubernetes storage. A PV is a storage resource, typically provisioned by an administrator, that is made available to applications. A PVC, in turn, describes an application's requirement for a certain type and amount of storage. When a PVC is created (usually as part of an application), it is bound to a PV if an available, unused PV satisfies the claim's requirements. Provisioning and maintaining all of this by hand is manual work, which is why dynamic volume provisioning exists.
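As a minimal sketch (the claim name and size are illustrative, not taken from our setup), a PVC that an application might include looks like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data            # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi             # amount of storage the application asks for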
We already had an NFS server in our infrastructure, so we decided to use it for the cluster's persistent file storage. In our case the easiest way to do this was with the NFS-Client Provisioner, which supports dynamic provisioning of PVs. The provisioner simply creates a new directory on the existing NFS share for each new PV (which the cluster maps to a PVC) and then mounts that PV directory into the container that uses it. This removes the need to configure NFS share volumes into individual pods; everything happens dynamically instead.
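As a sketch of how such a provisioner might be installed at the time, using a Helm 2 style command; the chart name, release name, NFS server address and export path below are assumptions, so check the provisioner's own documentation for your environment:

# Install the NFS client provisioner via Helm (chart and parameter names are assumptions)
helm install stable/nfs-client-provisioner \
  --name nfs-client \
  --set nfs.server=10.10.10.10 \
  --set nfs.path=/exports/k3s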
Cross-build container images for ARM
Obviously, to run application containers on ARM-based hardware such as the Raspberry Pi, you need to build the containers for the ARM architecture. You may run into a few pitfalls when building your own applications in ARM-architecture containers. First of all, the base image needs to be available for your target architecture. For the Raspberry Pi 3 you usually need arm32v7 base images, which are available in most Docker image repositories. So, when cross-building applications, make sure your Dockerfile contains the following line:
FROM arm32v7/alpine:latest
The second thing to note is that your host Docker needs to be able to run ARM binaries. If you run Docker on a Mac this is easy, because it has built-in support for it. If you are on Linux, you need to perform a few extra steps:
Add QEMU binaries to your base image
To run ARM binaries in Docker on Linux, the image needs to contain a QEMU binary. You can either choose a base image that already includes the qemu-arm-static binary, or copy it into the image during the build, for example by adding the following line to your Dockerfile:
COPY --from=biarms/qemu-bin /usr/bin/qemu-arm-static /usr/bin/qemu-arm-static
Security warning: note that downloading and running unknown containers is like downloading and running unknown .exe files from the internet. For anything beyond hobby projects, use scanned/audited images (such as Docker official images) or container images from trusted organizations and companies.
Then you need to register QEMU on the host OS that builds the Docker image. This can be done simply with:
docker run --rm --privileged multiarch/qemu-user-static:register --reset
You can add this command to your build script before building the actual image. To sum up, your Dockerfile.arm should look like this:
FROM arm32v7/alpine:latest
COPY --from=biarms/qemu-bin /usr/bin/qemu-arm-static /usr/bin/qemu-arm-static
# commands to build your app go here...
# e.g. RUN apk add --update <packages>
And your build/CI script should be:
docker run --rm --privileged multiarch/qemu-user-static:register --reset
docker build -t my-custom-image-arm . -f Dockerfile.arm
This will give you an ARM-architecture container image. If you are interested in the details, see:
https://www.ecliptik.com/Cross-Building-and-Running-Multi-Arch-Docker-Images/
Automatically build and push to the image registry
The final step is to automate the whole process so that container images are built automatically and pushed to an image registry, from which they can easily be deployed to our K3s cluster. Internally we use GitLab for source control and CI/CD, so we naturally wanted to run these builds there; it even includes a built-in container registry, so there is no need to set up a separate one.
GitLab has very good documentation on building Docker images (https://docs.gitlab.com/ee/ci/docker/using_docker_build.html), so we won't repeat it here. After configuring a GitLab Runner for docker builds, all that is left to do is to create the project's .gitlab-ci.yml file. In our example it looks like this:
image: docker:stable

stages:
  - build
  - release

variables:
  DOCKER_DRIVER: overlay2
  CONTAINER_TEST_IMAGE: ${CI_REGISTRY_IMAGE}/${CI_PROJECT_NAME}-arm:${CI_COMMIT_REF_SLUG}
  CONTAINER_RELEASE_IMAGE: ${CI_REGISTRY_IMAGE}/${CI_PROJECT_NAME}-arm:latest

before_script:
  - docker info
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

build_image:
  stage: build
  script:
    - docker pull $CONTAINER_RELEASE_IMAGE || true
    - docker run --rm --privileged multiarch/qemu-user-static:register --reset
    - docker build --cache-from $CONTAINER_RELEASE_IMAGE -t $CONTAINER_TEST_IMAGE . -f Dockerfile.arm
    - docker push $CONTAINER_TEST_IMAGE

release:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
Now that our images are in the container registry, we just need to deploy them to the cluster. To grant the cluster access to the registry, we created a deploy token in GitLab and added the token credentials to the cluster as a docker-registry secret:
kubectl create secret docker-registry deploycred --docker-server=<registry> --docker-username=<deploy_token_username> --docker-password=<deploy_token_password> --docker-email=<email>
You can then reference the deploy token secret in the PodSpec of your YAML manifests:
imagePullSecrets:
- name: deploycred
containers:
- name: myapp
  image: gitlab.mycompany.com:4567/my/project/my-app-arm:latest
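For context, a minimal sketch of a complete Deployment using that snippet; the deployment name, label and replica count are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                       # hypothetical deployment name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
      - name: deploycred            # the docker-registry secret created above
      containers:
      - name: myapp
        image: gitlab.mycompany.com:4567/my/project/my-app-arm:latest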
After completing all these steps, we finally have an automated CI/CD pipeline from source code to ARM container images in the private registry, ready to be deployed to the cluster.
All in all, setting up and running your own bare metal Kubernetes cluster turned out to be easier than expected, and K3s is indeed a wise choice for running containerized services in edge computing scenarios and on modest hardware.
One minor drawback is that K3s does not yet support high availability (a multi-master configuration). Although the single-master setup is already quite resilient, because services keep running on the agent nodes even if the master goes offline, we would like some redundancy for the master node. Apparently this feature is under development, but until it is available we recommend backing up the server node's configuration.
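A minimal sketch of such a backup, assuming the default K3s data directory (/var/lib/rancher/k3s) and a hypothetical destination host and path:

# Archive the K3s server state (run on the master node)
sudo tar czf /home/pi/k3s-server-backup-$(date +%F).tgz /var/lib/rancher/k3s/server

# Copy the archive off the node, e.g. to another machine on the network
scp /home/pi/k3s-server-backup-*.tgz backup@backup-host:/backups/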