This article describes what to do when a Kubernetes cluster running on Raspberry Pis goes down. The walkthrough is fairly detailed, so interested readers may find it a useful reference. I hope it helps you.
Last night, while I was running updates on my Raspberry Pis, all of the nodes suddenly stopped detecting their network interfaces and could not recover!
At that point I had two options, both time-consuming:
Method 1: get up early and rebuild everything manually, following my own earlier article, perhaps finding some room for improvement along the way. I have tried this myself many times: it is error-prone, mostly through typos or a skipped step or two, followed by half an hour of hunting down what went wrong, wiping everything, and starting over, again and again.
Method 2: get up early and script everything from scratch, so that the final solution can be executed once and reused forever, making rebuilding and provisioning the entire cluster as easy as possible. This means more downtime up front, but the benefits are long-term: I no longer have to worry about the cluster itself and can eventually treat it as a stable platform onto which I can migrate my entire home infrastructure.
And so the automation saga begins.
Before starting any work, I focused on the basics and set out a few principles to follow throughout the process. The code for each part of the environment is logically separated, so you, dear reader, can more easily change the code, or comment out an entire file to disable a piece of functionality here or there.
Principle 1: setting up a Kubernetes cluster on Raspberry Pis splits into three parts: preparing the storage cards, configuring the nodes at the system level, and finally provisioning the Kubernetes resources.
Principle 2: an NFS service runs on my old Intel NUC, which is connected to Drobo storage. It serves as permanent shared storage for all nodes.
Principle 3: the Raspberry Pi cluster runs in a VLAN on my home network, so securing it is not a priority; all services and nodes should be easy to reach without dealing with credentials.
To reproduce the result (or build something similar), you will need:
A Mac: when I find time to set things up in a Linux VM, I may add platform detection to the scripts
Ansible: I used version 2.10.6
Terraform: written against 0.13.4, but it also works with 0.14.8
Make: any version should do
Raspberry Pi cluster: step 1
A Raspberry Pi uses a memory card as its hard drive. This may not be the best choice, and it will never give you maximum read and write speeds, but it should be sufficient for a hobby project.
What does step 1 take care of?
Format the memory card
Split the memory card into two partitions: 1 GB + the rest of the card
Copy the Alpine Linux image onto the memory card
Create a system overlay
The system overlay is responsible for setting eth0 to promiscuous mode (required by MetalLB) and configuring passwordless SSH login.
Important: check the source of 001-prepare-card.sh and make sure that the memory card you just inserted really is /dev/disk5; otherwise you may lose data. A sketch of the flow follows below.
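To make the flow concrete, here is a minimal sketch of what such a card-preparation script might do on macOS. This is not the real 001-prepare-card.sh: the image filename and volume names are assumptions, and only the disk identifier /dev/disk5 comes from the article.

# Hedged sketch of step 1 on macOS -- not the actual 001-prepare-card.sh.
# Verify the identifier with `diskutil list` first; a wrong value wipes the wrong drive.
DISK=/dev/disk5
ALPINE_TAR=alpine-rpi-3.13.2-aarch64.tar.gz   # assumed image filename

diskutil unmountDisk "$DISK"
# Two partitions: a 1 GB boot partition plus the rest of the card.
diskutil partitionDisk "$DISK" 2 MBR FAT32 BOOT 1g FAT32 SYSTEM R
# Unpack the Alpine Linux image onto the boot partition.
tar -xzf "$ALPINE_TAR" -C /Volumes/BOOT
# The system overlay (eth0 promiscuous mode, passwordless SSH) is copied here too.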
Result: preparing six memory cards takes about a minute.
Raspberry Pi cluster: step 2
This is where it gets really interesting. I assume you have inserted the memory cards into your Raspberry Pis, connected all the cables (network and power), and let them boot. You now need to find the IP address of each device, either by attaching a screen to each one and running ifconfig eth0, or by logging in to your router and checking there. Adjust pi-hosts.txt with the appropriate values.
[masters]
pi0 ansible_host=192.168.50.132 # Pi0

[workers]
pi1 ansible_host=192.168.50.135 # Pi1
pi3 ansible_host=192.168.50.60  # Pi3
pi4 ansible_host=192.168.50.36  # Pi4
pi2 ansible_host=192.168.50.85  # Pi2
pi5 ansible_host=192.168.50.230 # Pi5
Important: take note of the pi0 hostname; it comes up again later.
Add the following to ~/.ssh/config to force root connections for all pi* hosts:
Host pi?
  User root
  Hostname %h.local
Now that the cards are prepared, the machines need to be made ready for Ansible. This is easily done with the host-preparation script from the repository, which will ssh to each server listed in pi-hosts.txt, configure chrony for NTP, and install a Python interpreter.
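As a hedged sketch (assuming Alpine on the Pis and the pi-hosts.txt format shown above), the preparation amounts to something like:

# Rough equivalent of the host-preparation step; the script in the repository differs.
for ip in $(grep ansible_host pi-hosts.txt | cut -d= -f2 | awk '{print $1}'); do
  ssh "root@$ip" 'apk add --no-cache chrony python3 && rc-update add chronyd && rc-service chronyd start'
done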
Important: you may need to check the rpi.yaml file and adjust the vars section as needed.
After this step succeeds, run Ansible with ansible-playbook rpi.yaml -f 10, which will proceed in the following order (a rough shell sketch follows each of the lists below):
Common actions:
Install the necessary software packages
Partition and format the RPI memory cards
Set the "larger" partition as the system disk
Add all fstab entries
Commit the changes
Restart the Raspberry Pis to boot from the "permanent" partition
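On each Pi, these common actions boil down to roughly the following; the device name, mount point, and the Alpine lbu step are assumptions, not the playbook's literal tasks.

# Hedged shell equivalent of the common actions (assumed device/mount names).
apk add --no-cache e2fsprogs util-linux        # necessary packages on Alpine
mkfs.ext4 /dev/mmcblk0p2                       # format the "larger" partition
echo '/dev/mmcblk0p2 /media/system ext4 defaults 0 1' >> /etc/fstab
lbu commit -d                                  # commit changes to the Alpine overlay
reboot                                         # come back up on the "permanent" partition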
KUBEMASTER:
Set up the master node with kubeadm
Store the token locally (in static/token_file)
Set up kubectl access for the root user on the Raspberry Pi
Save the Kubernetes configuration locally (in static/kubectl.conf)
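In shell terms, the master setup is roughly equivalent to the following; the pod CIDR is an assumption based on the flannel install in step 3.

# Sketch of the KUBEMASTER stage (pod CIDR assumed for flannel).
kubeadm init --pod-network-cidr=10.244.0.0/16
# Store a join token locally for the workers.
kubeadm token create --print-join-command > static/token_file
# Give root on the Pi kubectl access; the playbook also saves this
# config locally as static/kubectl.conf.
mkdir -p /root/.kube && cp /etc/kubernetes/admin.conf /root/.kube/config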
KUBEWORKER:
Copy the token to the worker nodes
Join each worker to the master using the token file
Copy kubectl.conf to the root user on each worker
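And the worker side, assuming the copied token file holds a complete kubeadm join command:

# Sketch of the KUBEWORKER stage on each worker node.
sh /tmp/token_file   # runs: kubeadm join <master>:6443 --token ... --discovery-token-ca-cert-hash ...
mkdir -p /root/.kube && cp /tmp/kubectl.conf /root/.kube/config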
Basic operations:
Remove the taint from the master so it can take part in workloads
Install py3-pip, PyYaml and helm on the nodes
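Untainting the master is the standard kubectl one-liner (using the pre-1.24 master taint name that kubeadm applied at the time):

# Allow the master to schedule regular workloads.
kubectl taint nodes --all node-role.kubernetes.io/master-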
If you are still reading (congratulations), you have just brought up a basic Kubernetes cluster. I believe this is about as simple as it gets: execute a few scripts and wait for them to finish, and it certainly beats the manual method.
Important: you can run the playbook as many times as you like; once a memory card is set up, it will not be reformatted.
Result: installing a basic 6-node Kubernetes cluster takes just a few minutes, depending on your network connection.
Raspberry Pi cluster: step 3
After the first two steps, the Raspberry Pi cluster should be ready for its first deployments. Only a few basic setup steps remain to make it run on demand, and, you guessed it, they are automated too, handled by Terraform.
First, let's look at the configuration:
# Variables used for barebone kubernetes setup
network_subnet = "192.168.50"
net_hosts = {
  adguard     = "240,249"
  traefik     = "234"
  torrent_rpc = "245"
}
nfs_storage = {
  general = "/media/nfs"
  torrent = "/mnt/drobo-storage/docker-volumes/torrent"
  adguard = "/mnt/drobo-storage/docker-volumes/adguard"
}
# ENV variable: TRAEFIK_API_KEY sets traefik_api_key
# ENV variable: GH_USER, GH_PAT for authentication with private containers
As you can see, I run the cluster on the 192.168.50.0/24 network, and by default MetalLB uses the addresses 200-250 as the cluster address pool. Since my home torrent server and Adguard DNS live there, I want them on specific addresses, along with the Traefik load balancer that serves the dashboards.
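For reference, the MetalLB pool Terraform renders would look something like this layer2 ConfigMap; this is the configuration format MetalLB used at the time, though the exact manifest in the repository may differ.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.50.200-192.168.50.250
EOF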
Important tips:
The nfs_* path values should match the settings you configured in step 2.
Make sure your Kubernetes configuration file ~/.kube/config is updated with the access details from static/kubernetes.conf; I use home-k8s as the context name (one way to do this is sketched below).
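One way to do that merge, assuming kubectl is installed locally:

# Merge the generated cluster config into the local kubeconfig.
KUBECONFIG=~/.kube/config:static/kubernetes.conf kubectl config view --flatten > /tmp/merged
mv /tmp/merged ~/.kube/config
kubectl config use-context home-k8s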
What does the Terraform step take care of?
The network
Install flannel and patch its configuration for host-gw
Install MetalLB and set its address pool to var.network_subnet addresses 200-250
Traefik
Install the Traefik proxy and expose it to the home network through the MetalLB load balancer. The Traefik dashboard itself is reachable at traefik.local.
Traefik dashboard running on the Raspberry Pi cluster
Adguard
Install the Adguard DNS service with NFS-backed persistent volume claims
Expose the dashboard through Traefik, and the service itself on a dedicated IP in the home network, as adguard.local
Adguard Home running on the Raspberry Pi cluster
Prometheus
Install and deploy the Prometheus monitoring stack and Grafana across all nodes. Update the Prometheus DaemonSet to remove the need for volume mounts. Grafana is exposed through Traefik at grafana.local; the default user/password combination is admin:admin. Grafana ships with the pre-installed plug-in devopsprodigy-kubegraf-app, which I think is the best plug-in for cluster monitoring.
Grafana dashboard running on the Raspberry Pi cluster
Kubernetes Dashboard
Install the Kubernetes Dashboard and expose it through Traefik at k8s.local
Kubernetes dashboard running on the Raspberry Pi cluster
Torrent
Install and deploy a torrent server (rTorrent) with the Flood web interface, exposing its dashboard as torrent.local. It uses a number of mounts for data and configuration. There is a reason replication is set to 1: rTorrent has a problem with its lock file, and because it uses a shared directory, it will not start if a leftover lock file is detected. (In my private configuration) rTorrent is set to listen on port 23340.
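Once everything is deployed, a quick reachability check across the exposed dashboards might look like this, assuming the *.local names resolve to the Traefik load-balancer IP (via Adguard DNS or /etc/hosts entries):

for host in traefik.local grafana.local adguard.local k8s.local torrent.local; do
  printf '%s: ' "$host"
  curl -s -o /dev/null -w '%{http_code}\n' "http://$host/"
done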
Backup
Since the Raspberry Pis run off memory cards, which may wear out under frequent reads and writes, I decided to add regular etcd backups to NFS. They run once a day as part of the Terraform-applied configuration; each backup weighs in at around 32 megabytes.
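A daily backup job along these lines could call etcdctl on the master; this is a hedged sketch using kubeadm's default certificate paths and the general NFS mount from the configuration above.

ETCDCTL_API=3 etcdctl snapshot save "/media/nfs/etcd-backup-$(date +%F).db" \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key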
To make operating it easier, I created a Makefile that should help with setup. You may need to set the following environment variables:
TRAEFIK_API_KEY  // Traefik API key
GH_USER          // Github user
GH_PAT           // Github Personal Access Token
Important: the Github credentials are not used yet, but I plan to add authentication soon to pull private images from GHCR.
ADDITIONAL_ARGS=-var 'traefik_api_key=$(TRAEFIK_API_KEY)' -var "github_user=$(GH_USER)" -var "github_pat=$(GH_TOKEN)"

apply:
	cd infrastructure; terraform apply $(ADDITIONAL_ARGS) -auto-approve -var-file=../variables.tfvars

plan:
	cd infrastructure; terraform plan $(ADDITIONAL_ARGS) -var-file=../variables.tfvars

destroy:
	cd infrastructure; terraform destroy $(ADDITIONAL_ARGS) -var-file=../variables.tfvars

destroy-target:
	cd infrastructure; terraform destroy $(ADDITIONAL_ARGS) -var-file=../variables.tfvars -target $(TARGET)

refresh:
	cd infrastructure; terraform refresh $(ADDITIONAL_ARGS) -var-file=../variables.tfvars

init:
	cd infrastructure; rm -fr .terraform; terraform init

import:
	cd infrastructure; terraform import $(ADDITIONAL_ARGS) -var-file=../variables.tfvars $(ARGS)

lint:
	terraform fmt -recursive infrastructure/
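A typical run, with the environment variables above exported first, would then be:

export TRAEFIK_API_KEY=<your-key>     # placeholder values, not real credentials
export GH_USER=<github-user>
export GH_PAT=<github-token>
make init     # terraform init inside infrastructure/
make plan     # review the planned changes
make apply    # build the cluster resources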
All the code is available in lukaszraczylo/rpi-home-cluster-setup on GitHub, and anyone is free to use and modify it. I have also published custom-built multi-architecture docker images (rTorrent and Flood) supporting ARM64 processors.
Automation saves time and energy, and it definitely gets the job done. I regularly tear down the entire cluster and rebuild it from scratch using the repository mentioned above to make sure it works as expected. I will keep both this article and the repository up to date.
That covers what to do when a Kubernetes cluster on Raspberry Pis goes down. I hope the content above is helpful and that you learned something from it. If you think the article is good, share it so more people can see it.