Shulou · SLTechnology News & Howtos · 05/31 report (updated 2025-01-19)
This article walks through building a complete GitOps setup around Argo CD. It is fairly detailed, so if you are interested, read on; I hope you find it helpful.
As Kubernetes continues to establish itself as the industry standard for container orchestration, finding effective ways to use declarative models for your applications and tools is the key to success. In this article, we will set up a K3s Kubernetes cluster in AWS, and then use Argo CD and Vault to implement secure GitOps.
Here are the components we will use:
AWS -- the cloud provider for our underlying infrastructure. It will manage the virtual machines and networking that Kubernetes needs, and allow Ingress traffic into the cluster from outside.
K3s -- a lightweight Kubernetes distribution developed by Rancher. It strips out many alpha features and cloud plugins, and can use a relational database (in this case, RDS) instead of etcd for backend storage.
Rancher -- an API-driven UI for easily managing your Kubernetes cluster.
Vault -- HashiCorp's secrets management implementation. I'll use Banzai Cloud's bank-vaults implementation, which injects secrets directly into pods using an admission webhook. This greatly reduces the need to store secrets in your Git repository.
Argo CD -- a GitOps tool that lets you maintain the state of your Kubernetes resources in Git. Argo CD automatically synchronizes your Kubernetes resources with the manifests in your Git repository, and reverts manual changes made to manifests inside the cluster. This enforces your declarative deployment model.
Cert-Manager with LetsEncrypt -- provides a way to automatically generate and renew certificates for Kubernetes Ingresses.
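To make the declarative model concrete before we start: an Argo CD Application is itself just a Kubernetes manifest that points at a path in a Git repository. A minimal sketch (the repoURL, path, and names below are placeholders, not the exact manifests used later in this article):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app            # hypothetical name
  namespace: kube-system    # this setup runs Argo CD in kube-system
spec:
  project: default
  source:
    repoURL: https://github.com/YOUR_USER/your-manifests.git  # placeholder fork URL
    targetRevision: main
    path: resources/apps
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual changes made in the cluster
```

Everything Argo CD deploys in this article, including Argo CD's own tooling, is ultimately expressed this way.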
Let's start with the AWS infrastructure.
Preparation in advance
You need the following CLIs installed on your system:
Terraform
kubectl
AWS CLI
You also need AWS administrator access and an access key. If you don't have an account, you can create one with a credit card.
Finally, you need a domain name that you can manage and update to point at the elastic load balancer (ELB) fronting your Kubernetes cluster. If you don't have one already, I recommend opening an account with NameCheap and buying a .dev domain. It's cheap and works well.
AWS Infrastructure
For our AWS infrastructure, we will use Terraform with an S3 backend to persist state. This gives us a way to declaratively define our infrastructure and change it repeatably whenever we need to. In the infrastructure repository, you will see a k3s/example.tfvars file. We need to update this file for our specific environment, setting the following values:
db_username -- the administrator username that will be applied to the RDS instance backing the Kubernetes datastore.
db_password -- the administrator password for the RDS user. Normally you would pass this inline with your terraform apply command, but for simplicity we will set it in the file.
public_ssh_key -- your public SSH key, used when you need to SSH into the Kubernetes EC2 instances.
keypair_name -- the name of the key pair to apply to your public_ssh_key.
key_s3_bucket_name -- the bucket that will store your kubeconfig file once the cluster is successfully created.
There are also some optional fields you can set if you want to change the cluster size or use specific CIDRs (Classless Inter-Domain Routing ranges), but by default you will get a 6-node (3 server, 3 agent) K3s cluster.
You will also need to create an S3 bucket to store your Terraform state, and change the bucket field in the k3s/backends/s3.tfvars and k3s/main.tf files to match it.
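Putting those together, an edited example.tfvars might look roughly like this (all values below are made-up placeholders; the exact variable names should match what the k3s module in the infrastructure repository actually declares):

```hcl
db_username        = "k3sadmin"
db_password        = "change-me-please"          # better passed inline to terraform apply
public_ssh_key     = "ssh-rsa AAAA... you@host"  # your real public key
keypair_name       = "k3s-keypair"
key_s3_bucket_name = "my-k3s-kubeconfig-bucket"  # receives k3s.yaml after creation
```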
Once we have updated all the fields and created the S3 state bucket, let's apply the Terraform. First, make sure you have an administrative IAM user in your AWS account and that your environment variables or AWS credentials file are set up correctly so you can talk to the AWS API. Then run the following:
cd k3s/
terraform init -backend-config=backends/s3.tfvars
terraform apply -var-file=example.tfvars
Once you run the above, Terraform will output the AWS state it expects after a successful apply. If everything looks right, enter yes. Provisioning the AWS resources then takes 5-10 minutes, mostly because of the RDS cluster.
Verify your Kubernetes cluster
After Terraform applies successfully (wait a few extra minutes to make sure K3s has finished deploying), grab the kubeconfig file from the S3 bucket with the following command (substituting the bucket name you set in example.tfvars):
aws s3 cp s3://YOUR_BUCKET_NAME/k3s.yaml ~/.kube/config
This should complete successfully, and you can now communicate with your cluster. Let's check the state of our nodes and make sure they are Ready before continuing:
$ kubectl get nodes
NAME                         STATUS   ROLES    AGE   VERSION
ip-10-0-1-208.ec2.internal   Ready    <none>   39m   v1.18.9+k3s1
ip-10-0-1-12.ec2.internal    Ready    master   39m   v1.18.9+k3s1
ip-10-0-1-191.ec2.internal   Ready    master   39m   v1.18.9+k3s1
ip-10-0-2-12.ec2.internal    Ready    master   39m   v1.18.9+k3s1
ip-10-0-2-204.ec2.internal   Ready    <none>   39m   v1.18.9+k3s1
ip-10-0-1-169.ec2.internal   Ready    <none>   39m   v1.18.9+k3s1
Let's also take a look at the status of Argo CD, which is automatically deployed through manifest:
$ kubectl get pods -n kube-system | grep argocd
helm-install-argocd-5jc9s                        0/1   Completed   1   40m
argocd-redis-774b4b475c-8v9s8                    1/1   Running     0   40m
argocd-dex-server-6ff57ff5fd-62v9b               1/1   Running     0   40m
argocd-server-5bf58444b4-mpvht                   1/1   Running     0   40m
argocd-repo-server-6d456ddf8f-h9gvd              1/1   Running     0   40m
argocd-application-controller-67c7856685-qm9hm   1/1   Running     0   40m
Now we can continue to configure the wildcard DNS for our ingress and certificate automation.
DNS configuration
For DNS, I got the atoy.dev domain through Namecheap, but you can use any DNS vendor you like. What we need to do is create a wildcard CNAME record that routes all requests to the AWS ELB managing the application ingress.
First, get your elastic load balancer hostname from the AWS console: navigate to the EC2 section and click Load Balancers in the left menu. You should see a newly created load balancer with a random-character name. If you check its tags, they should reference your new Kubernetes cluster.
Copy the DNS name from this entry. Then visit the Namecheap Advanced DNS page for your domain and create a CNAME record for *.demo.atoy.dev pointing at the DNS name you copied from AWS. Adjust the record name for your own provider and domain:
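In zone-file terms, the record being created looks like this (using the ELB hostname from this example; substitute your own, and note the TTL is an arbitrary choice):

```
*.demo.atoy.dev.  300  IN  CNAME  a4c6dfd75b47a4b1cb85fbccb390fe1f-529310843.us-east-1.elb.amazonaws.com.
```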
To verify it works, you can use nslookup (installing it if needed) to make sure the record resolves to the correct hostname:
$ nslookup test.demo.atoy.dev
Server:     71.252.0.12
Address:    71.252.0.12#53

Non-authoritative answer:
test.demo.atoy.dev    canonical name = a4c6dfd75b47a4b1cb85fbccb390fe1f-529310843.us-east-1.elb.amazonaws.com.
Name:    a4c6dfd75b47a4b1cb85fbccb390fe1f-529310843.us-east-1.elb.amazonaws.com
Address: 52.20.5.150
Name:    a4c6dfd75b47a4b1cb85fbccb390fe1f-529310843.us-east-1.elb.amazonaws.com
Address: 23.20.0.2
Now, on to the umbrella applications.
Argo CD and Umbrella applications
We already have Argo CD deployed, but now we will use Argo CD's app-of-apps deployment model to deploy the rest of our tooling. Since we are using GitOps, you need to fork the k8s-tools-app repository to your own GitHub account, and then make some changes to match your environment.
Globally find and replace https://github.com/atoy3731/k8s-tools-app.git with the git URL of your fork. This lets Argo CD pull from a repository you control. Also make sure your Git repository is public so that Argo CD can access it.
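If you prefer the command line to your editor's find-and-replace, something like this handles the swap in one shot (run from the root of your fork; YOUR_GITHUB_USER is a placeholder, and GNU sed syntax is assumed; on macOS use `sed -i ''`):

```shell
# Replace every reference to the upstream repo with your fork's URL.
grep -rl 'https://github.com/atoy3731/k8s-tools-app.git' . \
  | xargs sed -i 's|github.com/atoy3731/k8s-tools-app|github.com/YOUR_GITHUB_USER/k8s-tools-app|g'
```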
In resources/tools/resources/other-resources.yaml, change argoHost and issuerEmail to match your domain name and email address.
In resources/tools/resources/rancher.yaml, change the hostname and email to match your domain and email address.
In resources/apps/resources/hello-world.yaml, change the two references to app.demo.atoy.dev to match your domain name.
Once you have made these updates, commit and push your changes to your forked GitHub repository. Now you are ready to apply the umbrella application. In your local clone of the repository, run:
$ kubectl apply -f umbrella-tools.yaml
appproject.argoproj.io/tools created
application.argoproj.io/umbrella-tools created
Argo CD will now begin provisioning all the other tools the repository defines for your cluster. You can list the deployed applications with:
$ kubectl get applications -n kube-system
NAME                AGE
other-resources     56m
umbrella-tools      58m
rancher             57m
vault-impl          57m
vault-operator      58m
vault-webhook       57m
cert-manager        57m
cert-manager-crds   58m
You will have to wait about 5 minutes for everything to become ready and for LetsEncrypt to generate its staging certificates. Once things are working as expected, you should see two generated Ingress entries that you can access in your browser:
$ kubectl get ingress -A
NAMESPACE       NAME             CLASS    HOSTS                   ADDRESS                                                                   PORTS     AGE
cattle-system   rancher          <none>   rancher.demo.atoy.dev   a4c6dfd75b47a4b1cb85fbccb390fe1f-529310843.us-east-1.elb.amazonaws.com   80, 443   59m
kube-system     argocd-ingress   <none>   argo.demo.atoy.dev      a4c6dfd75b47a4b1cb85fbccb390fe1f-529310843.us-east-1.elb.amazonaws.com   80, 443   58m
Now you can browse Rancher through https://rancher.YOUR-DOMAIN and Argo CD through https://argo.YOUR-DOMAIN.
NOTE 1: to avoid hitting LetsEncrypt rate limits, we are using untrusted staging certificates. The consequence is that when you visit Argo, Rancher, or your hello-world application, your browser will show an SSL exception. In Chrome, type thisisunsafe while the exception page is focused and it will let you through. You can also update Cert-Manager's ClusterIssuer to use the production issuer for trusted certificates.
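For reference, switching to trusted certificates mostly means pointing a ClusterIssuer at LetsEncrypt's production ACME endpoint. A sketch (the issuer name, email, and secret name here are assumptions; the actual issuer this setup deploys lives in the tools repository):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod        # hypothetical name
spec:
  acme:
    # Production endpoint; the staging issuer points at
    # https://acme-staging-v02.api.letsencrypt.org/directory instead.
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com      # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: traefik      # K3s ships Traefik as the ingress controller
```

Production issuance is rate-limited per domain, so only switch once your hostnames are stable.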
NOTE 2: K3s ships with Traefik pre-installed as an ingress controller, and we use it directly for simplicity.
NOTE 3: after logging in to Rancher for the first time, you will be asked to set a password and confirm the URI used to access Rancher. The URI should be pre-filled in the form, so you can just click "Okay".
NOTE 4: Argo CD's initial login uses admin as the username and the argocd-server pod name as the password. You can get the server pod name (in this case, argocd-server-5bf58444b4-mpvht) with:
$ kubectl get pods -n kube-system | grep argocd-server
argocd-server-5bf58444b4-mpvht   1/1   Running   0   64m
You should now be able to access the Argo CD UI, log in, and look around.
Now that our tooling is deployed, let's store a secret in Vault for the hello-world application to pull.
Create a secret in Vault
To make this easier, there is a helper script in the tools repository. Run the following to get your Vault admin token and a port-forward command:
$ sh tools/vault-config.sh
Your Vault root token is: s.qEl4Ftr4DR61dmbH3umRaXP0

Run the following:
export VAULT_TOKEN=s.qEl4Ftr4DR61dmbH3umRaXP0
export VAULT_CACERT=/Users/adam.toy/.vault-ca.crt
kubectl port-forward -n vault service/vault 8200 &

You will then be able to access Vault in your browser at: https://localhost:8200
Run the exported commands, then navigate to https://localhost:8200 and log in with the root token above.
Once logged in, you should land on a secrets engine page. Click the secret/ entry, then click Create secret at the top right. We will create a demo secret, so add the following and click Save:
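Under the hood, the bank-vaults mutating webhook watches for environment variables whose values start with vault: and swaps in the real secret value when the pod starts. A hedged sketch of what the hello-world deployment's container spec might contain (the names, image, and secret path here are illustrative, not copied from the repository):

```yaml
# Fragment of a Deployment's pod template (illustrative).
spec:
  containers:
    - name: test-deployment
      image: your-registry/hello-world:latest   # placeholder image
      env:
        # The webhook intercepts this value and replaces it with the
        # secret stored at secret/demo in Vault, under the given key,
        # so the plaintext never lives in Git or in the manifest.
        - name: DEMO_SECRET
          value: "vault:secret/data/demo#DEMO_SECRET"
```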
Now we have the secret ready for the hello-world application.
Deploy the Hello World application
Now, back in our umbrella repository, let's deploy the hello-world application:
$ kubectl apply -f umbrella-apps.yaml
appproject.argoproj.io/apps created
application.argoproj.io/umbrella-apps created
Once created, go back to the Argo CD UI and you should see two new applications, umbrella-apps and demo-app. Click demo-app and wait for all of its resources to become healthy:
Once the status is healthy, you should be able to navigate to your application by visiting https://app.YOUR-DOMAIN.
Let's also verify that our Vault secret is being injected into the application pods. In the Argo CD UI's demo-app, click one of the application's pods, then click the Logs tab at the top. There should be two containers listed on the left; choose the test-deployment container. At the top of the logs, you should see your secret printed between two lines of equals signs:
Test GitOps
Now let's test Argo CD to make sure it automatically synchronizes when we make some changes in the repository.
In your tools repository, find the resources/apps/resources/hello-world.yaml file and change the value of replicaCount from 5 to 10. Commit and push the change to the main branch, then navigate back to demo-app in the Argo CD UI. When Argo CD hits its refresh interval, it will automatically start deploying the other five replicas of the application (if you don't want to wait, click the Refresh button on your umbrella-apps Argo application):
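The change itself is a one-line edit; sketched as commands (GNU sed shown; the file path comes from the tools repository above, and the commit message is arbitrary):

```shell
# Bump the hello-world replica count and push so Argo CD picks it up.
sed -i 's/replicaCount: 5/replicaCount: 10/' resources/apps/resources/hello-world.yaml
git add resources/apps/resources/hello-world.yaml
git commit -m "Scale hello-world to 10 replicas"
git push origin main
```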
Clean up
When you are ready to tear down your cluster, first go to the AWS console, open the EC2 service, and click Load Balancers. You will see an ELB that was created by the Kubernetes cloud provider rather than by Terraform, so Terraform cannot clean it up; you need to delete it yourself, along with the security group the ELB uses.
After cleaning up the ELB, run the following command and enter yes when prompted:
terraform destroy -var-file=example.tfvars

What's next?
We now have a good toolset for deploying applications with GitOps. So what's the next step? If you are up for a challenge, try deploying your own application alongside the hello-world app, or even implementing CI/CD by updating image tags in the application manifest repository: when you build a new application image, your pipeline updates the tag in the manifest repository, and Argo CD deploys the new version automatically.
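As a starting point for that CI/CD idea, the pipeline step that writes back to the manifest repository can be as small as this (entirely hypothetical: the image name, tag variable, and manifest path are placeholders you would adapt to your own setup):

```shell
# Hypothetical CI step: update the image tag in the manifest repo so
# Argo CD rolls out the freshly built version on its next sync.
NEW_TAG="${CI_COMMIT_SHA:-abc1234}"   # placeholder tag from your CI system
sed -i "s|\(image: registry.example.com/hello-world:\).*|\1${NEW_TAG}|" \
  resources/apps/resources/hello-world.yaml
git commit -am "ci: deploy hello-world ${NEW_TAG}"
git push origin main
```

Because Argo CD treats Git as the source of truth, this commit is the deployment; no kubectl access is needed from CI.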