This article explains how to deploy and manage Kubernetes on AWS with kOps. The approach is simple, fast, and practical, so let's walk through it.
Kubernetes is without doubt the leader in container orchestration, but it still has two notable pain points. The first is a steep learning curve: there is a lot to learn on the way from containers to K8s. The second is deployment: standing up a K8s cluster is not easy. You have several options:
minikube: installs a single-node K8s cluster in a local VM. It is good for local testing and learning, but not for real production or large-scale use.
Bare metal: you can install K8s yourself on bare metal (or virtual machines), but managing the physical resources, networking, and drivers on your own is troublesome. If you want to challenge yourself, refer to the official documentation.
Cloud-hosted solutions: all major cloud vendors offer managed K8s services. They have obvious advantages, but the masters are usually controlled by the cloud vendor, so users have less control over the cluster, and the service tends to tie your workloads to that vendor. For example:
Google GKE
Azure AKS
Amazon EKS
IBM Cloud Kubernetes Service
Alibaba Cloud Container Service for Kubernetes (ACK)
Finally, if we want to use the cloud but do not want to be locked into a single vendor, we can use tools like the following to deploy K8s on the cloud ourselves:
Kops
Kubespray
Today, let's look at deploying a K8s cluster on AWS using kops.
All operations below assume a Linux client; Mac and other platforms have equivalent steps you can look up yourself.
We also assume you have an AWS account with the necessary permissions.
Install the clients
Install kubectl. kubectl is the command-line client for K8s, which we will use to interact with the cluster that kops creates.
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Install kops
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops

Configure AWS resources
First, install the AWS CLI.
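If the AWS CLI is not installed yet, one common route on a Linux x86_64 client is AWS's bundled v2 installer (steps as published by AWS; adjust for your platform):

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version   # confirm the installation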
To do this, we need an AWS IAM user for kops to run as, with the following permissions:
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
Use the CLI to create the corresponding group, user, and access key:
aws iam create-group --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops
Record the SecretAccessKey and AccessKeyID returned by the last command, configure them in the AWS CLI, and export them as environment variables:
# configure the aws client to use your new IAM user
aws configure           # use your new access and secret key here
aws iam list-users      # you should see a list of all your IAM users here

# Because "aws configure" doesn't export these vars for kops to use, we export them now
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
Optionally, you can also configure DNS for the cluster; we skip that here and use a gossip-based cluster instead (which is why the cluster name below ends in .k8s.local).
kops stores the configuration of each K8s cluster (its state store) in AWS S3, so we create an S3 bucket to hold it.
export BUCKET=<your-unique-bucket-name>   # choose a globally unique bucket name
aws s3api create-bucket \
    --bucket $BUCKET \
    --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2
aws s3api put-bucket-versioning --bucket $BUCKET --versioning-configuration Status=Enabled
All right, we're ready to start creating K8s clusters.
Create a cluster
First, if your client does not have an ssh key, create one.
ssh-keygen -t rsa -C "your_email@example.com"
Configure environment variables that define the cluster name and the state-store URL:
export NAME=<your-cluster-name>.k8s.local
export KOPS_STATE_STORE=s3://$BUCKET
Before creating the cluster, check which availability zones (AZs) are available:
aws ec2 describe-availability-zones --region us-west-2
I am currently using the us-west-2 region, where the following three AZs are available:
{ "AvailabilityZones": [ { "State": "available", "Messages": [], "RegionName": "us-west-2", "ZoneName": "us-west-2a" }, { "State": "available", "Messages": [], "RegionName": "us-west-2", "ZoneName": "us-west-2b" }, { "State": "available", "Messages": [], "RegionName": "us-west-2", "ZoneName": "us-west-2c" } ]}
Then we choose to create it in us-west-2a
kops create cluster \
    --zones us-west-2a \
    ${NAME}
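kops typically picks up the public key at ~/.ssh/id_rsa.pub; if yours lives elsewhere, you can point at it explicitly with --ssh-public-key (a variant of the command above; the path is only an example):

kops create cluster \
    --zones us-west-2a \
    --ssh-public-key ~/.ssh/id_rsa.pub \
    ${NAME}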
Note that this step only generates the cluster configuration file and stores it in S3.
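To confirm that nothing has been provisioned yet and the spec only lives in the state store, you can list the bucket contents and the registered clusters:

aws s3 ls s3://$BUCKET --recursive   # the cluster spec appears under the cluster name
kops get cluster                     # lists clusters registered in $KOPS_STATE_STORE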
You can modify the configuration using the kops command:
kops edit cluster ${NAME}
If there is no problem with the configuration, you can deploy:
kops update cluster ${NAME} --yes
By default, kops creates all the corresponding AWS resources, including the VPC, subnets, EC2 instances, Auto Scaling Groups, ELBs, security groups, and more.
If you want to install into specific subnets, you can pass the subnet IDs when creating the cluster; HA configurations across AZs are also supported, as sketched below.
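As a minimal sketch of an HA variant under these assumptions (the zones are the three listed earlier, and the node count is arbitrary for illustration), the control plane can be spread across AZs at creation time:

kops create cluster \
    --zones us-west-2a,us-west-2b,us-west-2c \
    --master-zones us-west-2a,us-west-2b,us-west-2c \
    --node-count 3 \
    ${NAME}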
After the update finishes, the cluster takes a few minutes to come up. We can use kubectl to check its status (kops automatically writes the cluster's kubeconfig to ~/.kube/config as the default context):
kubectl cluster-info
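kops also ships a validation command that compares the expected and actual state of the cluster; it is a convenient way to know when the masters and nodes are ready (newer releases also accept a --wait flag, e.g. --wait 10m):

kops validate cluster   # lists the masters/nodes and reports whether the cluster is healthy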
It is also recommended to install the Kubernetes Dashboard so that you can manage the cluster from a UI; terminal die-hards can skip this.
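One common way to install the dashboard is to apply the manifest published by the kubernetes/dashboard project and reach it through the API-server proxy; the version in the URL below is an example, so check the project's releases for the current one:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl proxy   # the dashboard is then served through the local kube-apiserver proxy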
When the cluster is not needed, you can delete the cluster with kops:
kops delete cluster --name ${NAME} --yes

Expand and suspend clusters
A K8s cluster in the cloud can be scaled easily. If your cluster is running out of compute resources and you want to scale it out, there are two ways to do so.
The first is to modify the AWS Auto Scaling Groups directly. kops creates two Auto Scaling Groups on AWS, one for the nodes and one for the master; usually we only change the size of the nodes' group.
The default node count in kops is 2; set the value to whatever number you need (a CLI sketch follows).
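A hedged sketch of the direct-ASG route using the AWS CLI; the group name nodes.<cluster-name> is an assumption about the naming pattern, so list the groups first and substitute the real name:

# find the Auto Scaling Groups that kops created
aws autoscaling describe-auto-scaling-groups \
    --query "AutoScalingGroups[].AutoScalingGroupName"

# resize the nodes' group (the name below is a placeholder - use the one returned above)
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name nodes.<cluster-name> \
    --min-size 3 --max-size 3 --desired-capacity 3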
The other is to modify it through Kops.
kops edit ig nodes
Set both maxSize and minSize to desired values, then update
kops update cluster --yes
kops rolling-update cluster
Using rolling-update ensures that there is no interruption in the update process.
One might ask: can I suspend the cluster when it is not in use, so that I am not paying for AWS resources? Because the Auto Scaling Groups exist, directly stopping the EC2 instances does not work; the ASG will simply launch replacements. The solution is simple: set the corresponding Auto Scaling Group sizes to 0.
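A minimal sketch of suspending the cluster through kops, assuming the instance-group names nodes and master-us-west-2a (they match the kops get ig output shown further below; substitute your own):

kops edit ig nodes               # set minSize and maxSize to 0
kops edit ig master-us-west-2a   # set minSize and maxSize to 0
kops update cluster --yes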
You can make this change either directly on the Auto Scaling Groups in AWS or through kops.
Note that with kops you first need the following command to find the name of the instance group the master belongs to:
$ kops get ig
Using cluster from kubectl context: staging.cluster-name.com
NAME                ROLE    MACHINETYPE   MIN   MAX   SUBNETS
master-us-west-2a   Master  m3.medium     0     0     us-west-2a
nodes               Node    t2.medium     0     0     us-west-2a

At this point, you should have a deeper understanding of how to deploy and manage Kubernetes on AWS with kops. Now go and try it yourself!