This article introduces how to run distributed load testing with Locust in Kubernetes. It should serve as a useful reference; let's take a look.
One: preface
This article describes how to load test an application running in a Kubernetes cluster. The application under test is sample-webapp, a simple web application, and the testing tool is Locust.
Two: introduction to Locust
Locust is an open-source, scalable, distributed performance testing framework/tool written in Python.
In the Locust framework, test scenarios are described in plain Python scripts. For the most common HTTP(S) systems, Locust uses Python's requests library as the client, which keeps scripts simple, expressive, and readable. For systems that speak other protocols, Locust also provides extension points: as long as we can write the corresponding request client in Python, we can easily use Locust for stress testing. In this sense, Locust can be used to stress test virtually any type of system.
For simulating concurrency, Locust's advantage is that it abandons processes and threads and is entirely event-driven, using the non-blocking I/O and coroutines provided by gevent to issue concurrent requests at the network layer. Even a single load-generating machine can therefore produce thousands of concurrent requests, and combined with its support for distributed operation, Locust can in theory drive extremely high concurrency with relatively few load generators.
Example of Locust script:
from locust import HttpLocust, TaskSet, task

class WebsiteTasks(TaskSet):
    def on_start(self):
        self.client.post("/login", {
            "username": "test",
            "password": "123456"
        })

    @task(2)
    def index(self):
        self.client.get("/")

    @task(1)
    def about(self):
        self.client.get("/about/")

class WebsiteUser(HttpLocust):
    task_set = WebsiteTasks
    host = "http://debugtalk.com"
    min_wait = 1000
    max_wait = 5000
In this example, a test scenario for the http://debugtalk.com website is defined: the simulated user first logs in to the system, then randomly visits the home page (/) and the about page (/about/) with a request ratio of 2:1; during the test, the interval between two consecutive requests is a random value between 1 and 5 seconds.
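Note that this script uses the older Locust API (HttpLocust, task_set, min_wait/max_wait). On Locust 1.0 and later the same scenario would be written roughly as follows; this sketch assumes the newer HttpUser/between API and is provided only for comparison, it is not part of the original deployment:

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    host = "http://debugtalk.com"
    wait_time = between(1, 5)  # random 1-5 second pause between tasks

    def on_start(self):
        # runs once per simulated user before its tasks start
        self.client.post("/login", {"username": "test", "password": "123456"})

    @task(2)
    def index(self):
        self.client.get("/")

    @task(1)
    def about(self):
        self.client.get("/about/")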
Three: deploy the web application under test
sample-webapp-controller.yaml
kind: ReplicationController
apiVersion: v1
metadata:
  name: sample-webapp
  namespace: kube-system
  labels:
    name: sample-webapp
spec:
  selector:
    name: sample-webapp
  replicas: 1
  template:
    metadata:
      labels:
        name: sample-webapp
    spec:
      containers:
      - name: sample-webapp
        image: index.tenxcloud.com/jimmy/k8s-sample-webapp:latest
        ports:
        - containerPort: 8000
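Note: ReplicationController is a legacy resource type. On a current cluster you would typically run the same container as a Deployment instead; a minimal sketch of the equivalent manifest, assuming the apps/v1 API is available (this is not part of the original walkthrough):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-webapp
  namespace: kube-system
  labels:
    name: sample-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-webapp
  template:
    metadata:
      labels:
        name: sample-webapp
    spec:
      containers:
      - name: sample-webapp
        image: index.tenxcloud.com/jimmy/k8s-sample-webapp:latest
        ports:
        - containerPort: 8000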
sample-webapp-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: sample-webapp
  namespace: kube-system
  labels:
    name: sample-webapp
spec:
  ports:
  - port: 8000
  selector:
    name: sample-webapp
kubectl create -f sample-webapp-controller.yaml
kubectl create -f sample-webapp-service.yaml
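To confirm that the application came up, the pod and service can be checked with standard kubectl commands (label and namespace taken from the manifests above):

kubectl get pods -n kube-system -l name=sample-webapp
kubectl get svc sample-webapp -n kube-system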
Four: deploy Locust
locust-master-controller.yaml
kind: ReplicationController
apiVersion: v1
metadata:
  name: locust-master
  namespace: kube-system
  labels:
    name: locust
    role: master
spec:
  replicas: 1
  selector:
    name: locust
    role: master
  template:
    metadata:
      labels:
        name: locust
        role: master
    spec:
      containers:
      - name: locust
        image: index.tenxcloud.com/jimmy/locust-tasks:latest
        env:
        - name: LOCUST_MODE
          value: master
        - name: TARGET_HOST
          value: http://sample-webapp:8000
        ports:
        - name: loc-master-web
          containerPort: 8089
          protocol: TCP
        - name: loc-master-p1
          containerPort: 5557
          protocol: TCP
        - name: loc-master-p2
          containerPort: 5558
          protocol: TCP
locust-master-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: locust-master
  namespace: kube-system
  labels:
    name: locust
    role: master
spec:
  ports:
  - port: 8089
    targetPort: loc-master-web
    protocol: TCP
    name: loc-master-web
  - port: 5557
    targetPort: loc-master-p1
    protocol: TCP
    name: loc-master-p1
  - port: 5558
    targetPort: loc-master-p2
    protocol: TCP
    name: loc-master-p2
  selector:
    name: locust
    role: master
locust-worker-controller.yaml
kind: ReplicationController
apiVersion: v1
metadata:
  name: locust-worker
  namespace: kube-system
  labels:
    name: locust
    role: worker
spec:
  replicas: 1
  selector:
    name: locust
    role: worker
  template:
    metadata:
      labels:
        name: locust
        role: worker
    spec:
      containers:
      - name: locust
        image: index.tenxcloud.com/jimmy/locust-tasks:latest
        env:
        - name: LOCUST_MODE
          value: worker
        - name: LOCUST_MASTER
          value: locust-master
        - name: TARGET_HOST
          value: http://sample-webapp:8000
kubectl create -f locust-master-controller.yaml
kubectl create -f locust-master-service.yaml
kubectl create -f locust-worker-controller.yaml
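The worker controller starts with a single replica. To generate more load, the workers can be scaled out after creation; the replica count below is only an example:

kubectl scale rc locust-worker --replicas=4 -n kube-system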
Five: configure Traefik
Add the following rule to the rules section of the Traefik ingress.yaml:
- host: locust.donkey
  http:
    paths:
    - path: /
      backend:
        serviceName: locust-master
        servicePort: 8089
kubectl replace -f ingress.yaml
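For reference, a complete ingress.yaml containing this rule might look roughly like the following. This is only a sketch assuming the extensions/v1beta1 Ingress API in use at the time; the ingress name traefik-ingress is a placeholder and should match your existing Traefik ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress  # placeholder name, use your existing ingress name
  namespace: kube-system
spec:
  rules:
  - host: locust.donkey
    http:
      paths:
      - path: /
        backend:
          serviceName: locust-master
          servicePort: 8089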
Six: execute the test
Visit http://locust.donkey/ to set the test parameters (number of simulated users and hatch rate) and start the test.
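If the locust.donkey host entry has not been set up, port-forwarding to the master service is an alternative way to reach the web UI (a standard kubectl technique, not part of the original steps):

kubectl port-forward svc/locust-master 8089:8089 -n kube-system

Then open http://localhost:8089 in a browser.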
Thank you for reading. I hope this walkthrough of distributed load testing with Locust in Kubernetes has been helpful.