2025-01-15 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/03 Report --
I. Automated testing framework
To most testers, anything involving a "framework" feels mysterious and remote. It seems complicated because putting one into practice is complicated: every company, business line, and product line has different processes, which introduces many unstable factors when an automated testing framework runs its tests, so no single fixed framework fits everywhere. In reality, an automated testing framework is not a pattern but a collection of ideas and methods; put plainly, it is an architecture.
II. The ideas behind automated testing frameworks
To understand automated testing frameworks better, let's start with how automated testing has evolved. Anyone who has worked with automated testing for more than three years should recognize the following framework ideas:
Modularization thought
Test library thought
Data-driven thought
Keyword-driven thought
The items above represent only ideas about automated testing and cannot themselves be called frameworks. As noted earlier, framework = ideas + methods, and from these ideas the following five frameworks have evolved:
1. Modular test script framework
You create small, independent scripts that correspond to the modules, sections, and functions of the application under test. These small scripts can then be combined to build scripts for specific test cases.
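As a minimal sketch of the modular idea (all function names and return values below are invented for illustration), each small script covers one module of the application under test, and a test case is assembled by combining them:

```python
def open_login_page():
    """Module script: navigate to the login page."""
    return "login page opened"

def submit_credentials(name, password):
    """Module script: fill in and submit the login form."""
    return f"logged in as {name}"

def open_user_profile():
    """Module script: open the profile page of the current user."""
    return "profile page opened"

def test_view_profile_after_login():
    """A test case composed from the independent module scripts above."""
    return [
        open_login_page(),
        submit_credentials("user1", "123456"),
        open_user_profile(),
    ]
```

Because each module script is independent, a new test case (say, "edit profile after login") reuses the same login modules without duplicating them.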
2. Test library framework
It is similar to the modular test script framework and shares its advantages. The difference is that the test library framework decomposes the application under test into procedures and functions rather than scripts: it requires creating library files of functions that describe the modules, sections, and behavior of the application under test.
3. Keyword-driven or table-driven testing framework
This framework requires developing data tables and keywords. These tables and keywords are independent of the test automation tool that executes them and are used to "drive" the test script code against the application and data under test. Keyword-driven tests look much like manual test cases: the functions of the application under test and the execution steps of each test are written into a table.
This testing framework can generate a large number of test cases from very little code: the same code is reused while data tables supply the individual test cases.
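A hedged sketch of a keyword-driven engine (keyword names, step functions, and the table contents are all invented for illustration): each table row is a keyword plus its arguments, and a small engine maps keywords to functions:

```python
# Each keyword function receives a shared context dict plus its arguments.
def open_url(ctx, url):
    ctx["url"] = url

def input_text(ctx, field, value):
    ctx.setdefault("form", {})[field] = value

def click(ctx, button):
    ctx["clicked"] = button

# The keyword registry: table cells are looked up here.
KEYWORDS = {"open_url": open_url, "input_text": input_text, "click": click}

# A "manual-test-like" table: one row per execution step.
test_table = [
    ("open_url", ["www.xxx.com/login"]),
    ("input_text", ["name", "user1"]),
    ("input_text", ["password", "123456"]),
    ("click", ["login_button"]),
]

def run_table(table):
    """Execute each (keyword, args) row against the shared context."""
    ctx = {}
    for keyword, args in table:
        KEYWORDS[keyword](ctx, *args)
    return ctx
```

Since the table rows are plain data, they could just as well live in a spreadsheet maintained by someone who never touches the engine code.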
4. Data-driven testing framework
Here the test input and output data are read from data files (data pools, ODBC sources, CSV files, Excel files, JSON files, YAML files, ADO objects, etc.) and loaded into variables by scripts that were captured by a tool or written by hand. In this framework, variables store not only the input values but also the expected output values used for validation. Throughout the run, the test script reads the data file and records the test status and information. This resembles table-driven testing in that the test case is contained in a data file rather than in the script; the script is merely a "driver", a delivery mechanism for the data. Unlike table-driven testing, however, the navigation steps are not included in the table structure: in data-driven testing, the data file contains only test data.
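A minimal data-driven sketch (the column names, the in-memory stand-in for a CSV file, and the placeholder `login` function are all invented for illustration): both the inputs and the expected outputs live in the data rows, and one script body is reused for every row:

```python
import csv
import io

# Stands in for a data file such as login_cases.csv: inputs AND expected
# outputs are data, not code.
csv_data = io.StringIO(
    "name,password,expected_success\n"
    "user1,123456,true\n"
    "user1,wrong,false\n"
)

def login(name, password):
    """Placeholder for the real API call under test."""
    return password == "123456"

results = []
for row in csv.DictReader(csv_data):
    actual = login(row["name"], row["password"])
    expected = row["expected_success"] == "true"
    results.append(actual == expected)
```

Adding a new test case now means adding one CSV row, with no code change at all.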
5. Hybrid test automation framework
The most common implementation is a combination of all the techniques described above, taking advantage of their strengths and compensating for their weaknesses. Such hybrid frameworks evolve out of the others gradually, over time and across several projects.
III. Strategy for the interface automation testing framework
The framework is built for testers: other testers should only need to add test cases to it. So the design must be kept simple, meaning simple to operate, simple to maintain, and simple to extend.
When designing the framework, we must work from the business process rather than rely only on the technical implementation. In practice the technology is not the hard part; understanding and mastering the business process is.
The framework should encapsulate the basics into common components, for example wrapping GET requests, POST requests, and assertions into one shared base class.
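A hedged sketch of that shared base class (the class and method names are illustrative, not from a real framework): GET and POST requests plus common assertions live in one place so individual test cases stay short:

```python
import requests

class BaseApi:
    """Common base class wrapping HTTP verbs and shared assertions."""

    def __init__(self, base_url):
        self.base_url = base_url

    def get(self, path, **kwargs):
        # Delegate to requests.get, prefixing the shared base URL.
        return requests.get(self.base_url + path, **kwargs)

    def post(self, path, **kwargs):
        # Delegate to requests.post (kwargs may carry json=, headers=, ...).
        return requests.post(self.base_url + path, **kwargs)

    @staticmethod
    def assert_status(resp, expected=200):
        # Shared assertion: the status code matches the expectation.
        assert resp.status_code == expected, f"unexpected status {resp.status_code}"

    @staticmethod
    def assert_json_field(resp, key, expected):
        # Shared assertion: one field of the JSON body matches the expectation.
        assert resp.json()[key] == expected
```

A test case then reduces to a few lines: call `post(...)`, then the two assertion helpers.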
Test cases should be separated from the code for easier case management, so we choose the data-driven idea described above.

IV. Interface automation testing framework design

1. Before designing the framework, let's look at some mainstream interface automation tools and frameworks.
2. Comparing these tool stacks on learning cost, recording support, continuous integration, test case management, and performance-test extensibility: Java + TestNG + Maven has a high learning cost, Python + Requests a medium one, and Python + Robot Framework and HttpRunner low ones.
Based on these characteristics, Python + Requests and HttpRunner are the preferred options. Next, we analyze the use case execution flow of these two frameworks.
3. Use case execution parsing
Python's Requests library uses a unified interface for all HTTP request methods.
```python
requests.request(method, url, **kwargs)
```
Here kwargs can carry whatever information the HTTP request may need, such as headers, cookies, params, data, auth, and so on. So as long as you follow the parameter conventions of Requests, you can reuse the Requests parameter concepts in interface test cases. HttpRunner's processing logic, on the other hand, is very simple: it reads the parameters from the test case and passes them directly to Requests to initiate the request.
1) example of Requests API request
```python
def test_login(self):
    url = "www.xxx.com/api/users/login"
    data = {"name": "user1", "password": "123456"}
    resp = requests.post(url, json=data)
    self.assertEqual(200, resp.status_code)
    self.assertEqual(True, resp.json()["success"])
```
This use case issues an HTTP POST request, then checks whether the response code and body match expectations.
There are two problems with such a use case in a real project:
The use case pattern is basically fixed, so a large number of similar or duplicated use cases will accumulate, making case maintenance a real problem.
The use case is not separated from the execution code, and the parameter data is not separated either, which also makes maintenance difficult.
2) HttpRunner processes test cases in JSON/YAML format; the same use case, separated from code, is described as follows:
{"name": "test login", "request": {"url": "www.xxx.com/api/users/login", "method": "POST", "headers": {"content-type": "application/json"}, "json": {"name": "user1", "password": "123456"}}, "response": {"status_code": "headers": {"Content-Type": "application/json"}, "body": {"success": true, "msg": "user login successfully."}
3) HttpRunner use case execution engine
```python
def run_testcase(testcase):
    req_kwargs = testcase['request']
    try:
        url = req_kwargs.pop('url')
        method = req_kwargs.pop('method')
    except KeyError:
        raise exception.ParamsError("ParamsError")
    resp_obj = requests.request(url=url, method=method, **req_kwargs)
    diff_content = utils.diff_response(resp_obj, testcase['response'])
    success = False if diff_content else True
    return success, diff_content
```
4) The HTTP API request parameters are taken from the test case via testcase['request']:
{"url": "www.xxx.com/api/users/login", "method": "POST", "headers": {"content-type": "application/json"}, "json": {"name": "user1", "password": "123456"}}
5) Initiate an HTTP request:
```python
requests.request(url=url, method=method, **req_kwargs)
```
6) Test results, that is, assertions
6) Check the test result, that is, the assertion:

```python
utils.diff_response(resp_obj, testcase['response'])
```

V. Landing the interface automation testing framework
Following the principle of being easy to use and easy to maintain, we build the framework on HttpRunner.
1. Introduction to HttpRunner
Main features:
Integrates all the features of Requests, meeting the testing requirements of HTTP and HTTPS.
The test case is separated from the code, and the test scenario is described in the form of YAML/JSON to ensure the maintainability of the test case.
Test cases support parameterization and data-driven mechanisms
Supports interface recording and use case generation based on HAR files.
Combined with Locust framework, distributed performance testing can be implemented without additional work.
Execution is driven from the CLI, so it integrates well with continuous integration tools such as Jenkins.
The statistical report of test results is concise and clear, with detailed statistical information and log records.
It is extensible and can easily be expanded into a Web platform.
2. Environment preparation
Install Homebrew (the macOS package manager, similar to apt-get or yum).
Run in a terminal:

```shell
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```

Install pyenv and configure the environment variables. pyenv is a Python version manager that can manage multiple Python versions at the same time (HttpRunner is developed in Python and requires Python 3.6.0 or above):

```shell
brew install pyenv
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bash_profile
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bash_profile
echo 'eval "$(pyenv init -)"' >> ~/.bash_profile
exec $SHELL -l
```

Install Python 3.6:

```shell
pyenv install --list   # view the installable Python versions
pyenv install 3.6.0    # install version 3.6.0
pyenv rehash           # refresh pyenv
pyenv versions         # list installed versions; the one marked * is in use
pyenv global 3.6.0     # set the global version; the system switches to 3.6.0
```

Install HttpRunner and verify:

```shell
pip install httprunner
hrun -V    # if a version number such as 0.9.8 is printed, the installation succeeded
```
So far, the HttpRunner has been built.
3. Use case management
In HttpRunner, the most important feature of the test case engine is its support for describing use cases in YAML/JSON format.
Writing maintenance test cases in YAML/JSON format has obvious advantages:
Compared with the table form, it is far more flexible and carries richer information.
Compared with the code form, it removes unnecessary programming language syntax, keeps the use case purely descriptive, and improves maintainability.
Yaml format
Json format
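As a hedged sketch of the YAML form (the schema follows the HttpRunner 1.x/2.x style; the exact fields depend on the HttpRunner version in use, and the values mirror the earlier login example):

```yaml
# Illustrative HttpRunner-style use case; not from a real project.
- config:
    name: "user login tests"
    request:
      base_url: "www.xxx.com"

- test:
    name: "test login"
    request:
      url: /api/users/login
      method: POST
      headers:
        content-type: application/json
      json:
        name: user1
        password: "123456"
    validate:
      - eq: ["status_code", 200]
      - eq: ["content.success", true]
```

The JSON form carries exactly the same structure with JSON syntax; which to use is a matter of team preference.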
The following is an example from the R & D platform in Qilan Digital Platform 2.x (in JSON format).
Scenario: after a project space is created, we need to quickly set up a Demo example in it, that is, automatically create the various directories and tasks.
1) Identify the interfaces used by the business process, debug them with Postman or JMeter, and classify them:
Query class (Get request) API: query task directory, query resource group, query workflow, etc.
New class (Post request) API: new directory, new task, etc.
2) determine the interface order according to the business process
For example, to create a new task under a directory, first call the directory-creation API and then call the task-creation API.
3) Fill in the interface information in the JSON file according to the rules:
Interface Base_Url
Interface path
Interface request mode
API request parameters
Interface assertion
The parameters returned by the API (the parameters returned by the previous interface will be used when associating the interface)
The following are some use case examples
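As a hedged sketch of such a chained use case in HttpRunner's JSON format (JSON does not allow comments, so note here that every path, field name, and extraction expression below is invented for illustration; the `extract`/`validate` schema follows the HttpRunner 1.x style, where `$dir_id` reuses a value returned by the previous interface):

```json
[
    {
        "config": {
            "name": "create demo directories and tasks",
            "request": {"base_url": "www.xxx.com"}
        }
    },
    {
        "test": {
            "name": "create directory",
            "request": {
                "url": "/api/dirs",
                "method": "POST",
                "json": {"dir_name": "demo"}
            },
            "extract": [{"dir_id": "content.data.id"}],
            "validate": [{"eq": ["status_code", 200]}]
        }
    },
    {
        "test": {
            "name": "create task under the new directory",
            "request": {
                "url": "/api/tasks",
                "method": "POST",
                "json": {"dir_id": "$dir_id", "task_name": "demo_task"}
            },
            "validate": [{"eq": ["status_code", 200]}]
        }
    }
]
```

The `extract` step is what makes the interface association work: the directory ID returned by the first test is stored and substituted into the second request.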
4) After the use cases are written, execute the use case file; for example, if the JSON file is task.json:
```shell
hrun task.json
```
5) View the running results
A reports directory is generated automatically; inside it you will find timestamped HTML files (one HTML file is generated per execution).
Open the HTML file to view the results:
All passed
Partially passed
Click Log to view the detailed request and response information.
Click Traceback to view the error location details.
About the author: Luo Kong, 6 years of testing experience; formerly head of search testing at Micro Medical Group, responsible for server-side testing, interface automation, continuous integration testing, performance testing, and test development. Has participated in the China Mobile government and enterprise capability enhancement project, among others.