Key takeaways
1. Test automation is a well-documented, clearly defined way to run the same set of test scripts over and over again. However, those same scripts can also be put to more creative uses.
2. Although analytical thinking is hard to automate, we can certainly introduce a degree of randomness into our scripts.
3. The degree of "randomness" in testing can vary: from randomized inputs and parameters to fully random test cases.
4. Matching random steps with corresponding verifications is difficult, but different verification strategies can be used to ensure the application works as expected.
5. Random testing cannot replace subjective or traditional testing techniques, but it can give us extra confidence in the quality of the application during regression testing.
As Cem Kaner put it in one of his tutorials, exploratory testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of their work by treating test-related learning, test design, test execution, and the interpretation of test results as mutually supportive activities that run in parallel throughout the project.
In short, this definition from the well-known software quality and consumer advocate gives testers the freedom and the responsibility to test a project as they see fit. There is no longer a need to write down every specification step by step, for a simple reason: the essence of a creative process cannot be scripted, can it? In his TestBash 3 talk on decision-making in testing, Mark Tomlinson supported the idea of a subjective understanding of the system. Treated as the core of exploratory, risk-based, and session-based testing techniques (which we might collectively call subjective techniques), this understanding lets testers subjectively identify the important aspects of an application that may lead to failure.
Take a look at the well-known optical illusion of the spinning dancer: at different moments, our brain judges the dancer to be rotating in one direction or the other, left or right. Testing faces a similar situation: we may use different flows to achieve the same result, the same flow may lead to different but still expected results, or, well... any other outcome.
The subjective techniques applied throughout test execution can draw on both mature analytical thinking and the benefits of "randomness". The latter is the more interesting element here, and this article sets out to demystify "randomization" in automated testing.
To be clear, test automation is not a creative activity: it is a well-documented, clearly defined way of running the same set of test scripts over and over again. The question is, how can we use these test automation scripts more creatively?
Product quality changes over time
The product quality model and the recorded test scenarios can be captured by specific state machines and external characteristics. This is exactly what test automation loves: test automation is all about writing test scripts against a very specific set of test requirements.
This approach works well for functional regression testing of a clean, polished, brand-new release, fresh from the developers' hands. Let's call such a system Shiny.
But what does the system look like after a long, exhausting development timeline, with multiple releases, years of support, hundreds of bug fixes, feature requests, and so on?
From the user interface it may still look much like the old but serviceable system it once was, but beneath the surface often lies what is commonly called a "big ball of mud".
For such a system, even with automated scripts in place, how much of any given piece of functionality can still be tested to the same degree as when it first shipped to production? Maybe 30%, maybe 80%. And the rest of the functionality? We simply don't know.
Of course, the simple answer would be to review all the existing quality documentation, improve the existing scenarios, introduce new ones, and so on. But industry experience shows that test documentation for legacy systems tends to become obsolete, and although keeping it up to date is still important, this approach is not always feasible.
Create a well-defined architecture for the test automation solution
Consider an example of a simplified test automation solution with three layers, similar to the way business applications are built on UI, business logic, and database layers: UI/API mapping, business logic, and test scripts (a minimal sketch follows the list).
1. UI/API mapping represents the technical side of the solution: this layer is tightly bound, through the UI automation tool, to the UI of the system under automation, and its methods look like focus(), type_text(), and click_button().
2. Business logic is a library of business-operation keywords. A business operation is a step that can be performed in the application, such as login(), create_user(), or validate_user_created().
3. Test scripts execute a series of business steps chained together.
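To make the layering concrete, here is a minimal sketch in plain Java. All class names, method names, and locators are illustrative only, not taken from any specific project:

// Layer 1: UI/API mapping, thin wrappers around the automation tool.
class UiMapping {
    void typeText(String locator, String text) { /* tool-specific call */ }
    void clickButton(String locator) { /* tool-specific call */ }
}

// Layer 2: business logic, keywords built from layer-1 primitives.
class BusinessSteps {
    private final UiMapping ui = new UiMapping();

    void login(String user, String password) {
        ui.typeText("#user", user);
        ui.typeText("#password", password);
        ui.clickButton("#login");
    }
}

// Layer 3: a test script chaining business steps together.
class LoginTest {
    void run() {
        BusinessSteps steps = new BusinessSteps();
        steps.login("admin", "secret"); // illustrative credentials
    }
}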
A closer look at a single test
Consider a simple recorded test case: do this, verify this; do that, verify that; do so-and-so, verify so-and-so. A competent automation developer creates a series of methods along these lines:
do_that(), verify_that(), do_this(), verify_this(), do_bla().
A test script then calls these methods in a particular order:
my_specified_case_1() {
    do_that()
    verify_that()
    do_this()
    verify_this()
    do_bla()
    verify_that()
    verify_this()
}
Since such a script no longer finds any bugs, at some point our task becomes making it look for potential system problems.
Randomization method 1: naked randomness
From a business point of view, any step in the automation solution is valid, and exploratory testing gives us the freedom to perform any step at any point in time. Mashing these steps together is also very simple: we build "random" test cases out of the steps that have already been implemented.
Input: all the business methods in the solution, the number of test scripts to generate, and the number of steps in each generated script.
Output: scripts similar to the following:
my_random_case_1() {
    do_that()
    do_bla()
    verify_this()
}
Obviously, even though some of these test cases may run successfully (and some actually do), most will fail, because a large number of them attempt operations that are simply invalid: if you have not yet executed do_this(), then verify_this() will undoubtedly fail.
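For illustration, here is a minimal sketch of such a "naked" generator in Java, assuming the business methods have already been collected into a flat list of names; the class is hypothetical, not part of any real project:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class NakedRandomGenerator {
    private final Random random = new Random();

    // steps: names of all business methods in the solution
    // caseCount, stepsPerCase: the input parameters described above
    List<List<String>> generate(List<String> steps, int caseCount, int stepsPerCase) {
        List<List<String>> cases = new ArrayList<>();
        for (int i = 0; i < caseCount; i++) {
            List<String> testCase = new ArrayList<>();
            for (int j = 0; j < stepsPerCase; j++) {
                // any step can be chosen at any point; no preconditions apply
                testCase.add(steps.get(random.nextInt(steps.size())));
            }
            cases.add(testCase);
        }
        return cases;
    }
}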
Randomization method 2: randomness with preconditions
The idea of this approach is to add a step to the workflow only once its prerequisite steps are already included, which requires extending the code base so that the test case generator can understand and enforce the correct sequence. To do this, you can add attributes or annotations to the methods:
@Requires(do_this)
verify_this()
{ ... }
So we get:
my_random_case_2() {
    do_bla()
    do_this()
    verify_this() // can be added because the prerequisite step is already in the test
}
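In Java, the annotation and the generator-side check might look like the following sketch. Referencing the prerequisite by its method name as a String is an assumption made here for simplicity:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.List;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Requires {
    String value(); // name of the prerequisite business method
}

class PreconditionCheck {
    // Allow a step only if its prerequisite already appears in the test.
    static boolean canAdd(Method step, List<String> stepsSoFar) {
        Requires req = step.getAnnotation(Requires.class);
        return req == null || stepsSoFar.contains(req.value());
    }
}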
This is a more predictable approach. But what if do_this() and verify_this() need to be executed on the same Page1, while do_bla() has navigated to Page2?
We then face a new problem: the verification will fail because the control or context required for its execution cannot be found.
Randomization method 3: context awareness
The test case generator must be aware of the context in which steps execute (such as "pages" in web development). Here, too, the current context can be communicated to the generator through attributes or annotations:
@RequiresContext(pageThis)
verify_this()
{ ... }

@RequiresContext(pageThis)
do_this()
{ ... }

@RequiresContext(pageThis)
@MovesContextTo(pageThat)
do_bla()
{ ... }
With these annotations, do_this() and verify_this() will not be placed after a method that moves the context to pageThat, nor after any method whose context is pageThat.
So we can get a test script like this:
my_random_case_3() {
    do_this()
    do_bla()
    do_that()
}
Alternatively, the same goal can be achieved through method chaining. If each business method returns a page object representing the page displayed in the browser after the step executes, the test case generator can track which page is shown before and after each step, and can therefore determine the correct page on which to call the next step or verification method. This approach requires extra checks to verify that the flow is correct, but it can be implemented without annotations.
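Here is a sketch of this chaining convention, with hypothetical page classes:

class InboxPage {
    ComposePage compose() {
        // ... click "Compose" ...
        return new ComposePage(); // context moves to the compose page
    }

    InboxPage verifyMessageShown(String subject) {
        // ... assert the message is listed ...
        return this; // context unchanged
    }
}

class ComposePage {
    InboxPage send() {
        // ... click "Send" ...
        return new InboxPage(); // context moves back to the inbox
    }
}

The generator then simply restricts its next choice to the methods of whatever page class the previous step returned.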
Screening out the suitable test cases
The approaches introduced so far can already generate a considerable number of test cases.
The main problem is the verification itself: working out whether a failing test scenario is caused by a bug in the application, rather than by the logic of the automated test script, is also time-consuming.
One answer is to implement an "oracle" class that decides whether the obtained results are satisfactory or represent an error, with follow-up analysis performed when necessary. In this case, however, we chose a slightly different approach.
The following set of rules can be used to indicate that a failure is caused by a bug in the application:
1. An HTTP 500 error or a similar error page
2. JavaScript errors
3. An "unknown error" or similar error message caused by misuse
4. Exceptions and/or error records in the application log
5. Any other product-related errors that are found
With this approach, the state of the application is verified after each step, so the automatically generated script looks like this:
my_random_case_3() {
    do_this()
    validate_standard_rules()
    do_bla()
    validate_standard_rules()
    do_that()
    validate_standard_rules()
}
The validate_standard_rules() method searches for the problems listed above.
Note: combined with OOP, this approach becomes even more powerful and can detect real bugs. The generic checks, which look for "generic problems" such as JavaScript errors and application errors in the log, are implemented in the Page Object superclass; overriding this method in a particular page class lets you add page-specific sanity checks on top.
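As a sketch, such a superclass check might look like this in Java/Selenium. Note that reading browser logs requires a driver and logging configuration that support it, and the error-page markers below are illustrative assumptions, not part of the original project:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.logging.LogEntry;
import org.openqa.selenium.logging.LogType;

abstract class BasePage {
    protected final WebDriver driver;

    protected BasePage(WebDriver driver) {
        this.driver = driver;
    }

    // Generic checks; page classes override this to add page-specific ones.
    public void sanityCheck() {
        // Rules 1 and 3: error pages and "unknown error" messages
        // (crude marker matching, for illustration only).
        String source = driver.getPageSource();
        if (source.contains("Internal Server Error") || source.contains("Unknown error")) {
            throw new AssertionError("Error page detected at " + driver.getCurrentUrl());
        }
        // Rule 2: JavaScript errors reported by the browser.
        for (LogEntry entry : driver.manage().logs().get(LogType.BROWSER)) {
            if (entry.getLevel().intValue() >= java.util.logging.Level.SEVERE.intValue()) {
                throw new AssertionError("JavaScript error: " + entry.getMessage());
            }
        }
        // Rules 4 and 5 (application logs, other products) need
        // environment-specific hooks and are omitted here.
    }
}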
The experiment
For the experiment, we decided to use a publicly available mail system. Given the popularity of Gmail and Yahoo, it is highly likely that the bugs present in those systems have already been discovered, so we chose ProtonMail.
Adopting randomization
Assuming the automation solution is in place, we "adopted" an automated testing setup typical of a Shiny system: a standard Java/Selenium test project with several smoke tests implemented using the Page Object pattern. Following good practice, every business method returns either a new page object (the page displayed in the browser at the end of the business method) or the current page object if the page has not changed.
For automated exploratory testing, we added classes in the explr.core package, of which TestCaseGenerator and TestCaseExecutor are the interesting ones.
TestCaseGenerator
To generate a new "random" test case, call one of the two generateTestCase methods of the TestCaseGenerator class. Both methods take an integer parameter, the number of "step plus verification" pairs in the generated test case. The second method also takes an additional parameter specifying the verification strategy to use (the first method uses the default strategy, in this case USE_PAGE_SANITY_VERIFICATIONS).
A verification strategy defines how check steps are added to a test case. At present there are two options (a sketch of the resulting API surface follows the list):
1. USE_RANDOM_VERIFICATIONS: the first and obvious strategy, which reuses the existing verification methods from the page objects. Its weakness is a heavy dependence on context. Suppose, for example, that we randomly select a method that verifies the existence of a message with a particular subject. First, we must know which subject to look for; for this we introduced the @Default annotation and the DefaultTestData class. DefaultTestData holds generic test data that can be used in random testing, and @Default binds that data to a specific method parameter. Next, we must make sure that a message with this subject already exists before the verification runs (it could be created during setup or by any previous test step). For this, TestCaseGenerator can be told, through the @Depends annotation, to check for a call to a specific method and to add it right before the current step if it does not already appear. We also have to make sure the message is not deleted before the verification. We found that, for generated test cases, these dependency issues greatly reduce the degree of randomization, and the stability of this strategy did not meet our requirements.
2. USE_PAGE_SANITY_VERIFICATIONS: this strategy checks for obvious application failures, such as error pages, error messages, JavaScript errors, and errors in the application log. It is far less bound by dependencies, enables page-specific checks when needed, and proved flexible enough to find real bugs. We currently use it as the default verification strategy.
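Taken together, the generator's public surface as described above might look like this sketch; the exact signatures in the real project may differ:

enum VerificationStrategy {
    USE_RANDOM_VERIFICATIONS,
    USE_PAGE_SANITY_VERIFICATIONS
}

class TestCaseGenerator {
    // Uses the default strategy, USE_PAGE_SANITY_VERIFICATIONS.
    GeneratedTestCase generateTestCase(int stepVerificationPairs) {
        return generateTestCase(stepVerificationPairs,
                VerificationStrategy.USE_PAGE_SANITY_VERIFICATIONS);
    }

    GeneratedTestCase generateTestCase(int stepVerificationPairs,
                                       VerificationStrategy strategy) {
        // ... pick random steps and add verifications per the strategy ...
        return new GeneratedTestCase();
    }
}

class GeneratedTestCase { /* an ordered list of class-method pairs */ }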
The TestCaseGenerator class searches for page objects by class name: every class with the string "Page" in its name is treated as a page object, and all of its public methods are treated as business methods. Business methods whose names contain the string "Verify" are treated as verifications; all other methods are treated as test steps. The @IgnoreInRandomTesting annotation can be used to exclude particular utility methods, or entire page objects, from these lists (a sketch of this discovery step follows below).
Test cases are then generated by randomly selecting methods from the two resulting lists: one of test steps and one of verification steps (if the selected verification strategy requires them). Whenever a method is selected, its return value is inspected; if it is another page object, the next step is chosen from that object's methods (see the note above). To avoid endlessly cycling between two pages, there is a 10% chance of jumping to a completely random page. If a method declares dependencies with the @Depends annotation, they are resolved and added as needed.
Finally, to avoid calling a test method on any object other than the currently displayed page, each generated test case goes through an additional validation pass that adds any missing navigation calls.
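Here is a sketch of the discovery and classification step using plain Java reflection; the scanning of candidate classes is simplified, and the @IgnoreInRandomTesting annotation is re-declared so the sketch is self-contained:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

@Retention(RetentionPolicy.RUNTIME)
@interface IgnoreInRandomTesting { }

class MethodDiscovery {
    final List<Method> steps = new ArrayList<>();
    final List<Method> verifications = new ArrayList<>();

    void scan(List<Class<?>> candidates) {
        for (Class<?> cls : candidates) {
            if (!cls.getSimpleName().contains("Page")) continue; // page objects only
            if (cls.isAnnotationPresent(IgnoreInRandomTesting.class)) continue;
            for (Method m : cls.getDeclaredMethods()) {
                if (!Modifier.isPublic(m.getModifiers())) continue;
                if (m.isAnnotationPresent(IgnoreInRandomTesting.class)) continue;
                if (m.getName().contains("Verify")) {
                    verifications.add(m); // validation methods
                } else {
                    steps.add(m); // everything else is a test step
                }
            }
        }
    }
}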
TestCaseExecutor
Once generated, a test case is essentially a list of "class plus method" pairs that can be executed or saved in some form. Although it could be executed immediately at run time, saving it to a file is the better practice from the standpoint of debugging and later analysis.
Generated test cases can be executed in a variety of ways: TestCaseExecutor serves as the interface and SaveToFileExecutor as the implementation, which simply creates a .java file representing the generated test case. Surprisingly, this fairly simple solution fully met our needs: fast to implement, easy to analyze in depth, and transparent about how each case was generated. The drawback is that generated test cases must be compiled and run manually, but for the experiment this was not a big deal.
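As a sketch, the interface and the file-saving implementation might look like this; the template here is drastically simplified compared to the real one, and the file name is illustrative:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

interface TestCaseExecutor {
    void execute(List<String> classMethodPairs) throws IOException;
}

class SaveToFileExecutor implements TestCaseExecutor {
    @Override
    public void execute(List<String> classMethodPairs) throws IOException {
        StringBuilder body = new StringBuilder();
        for (String call : classMethodPairs) {
            body.append("        ").append(call).append(";\n");
        }
        // Wrap the generated calls in a compilable template; the real
        // template also adds imports, login, and the @Test annotation.
        String content = "public class GeneratedTest {\n"
                + "    public void test() {\n"
                + "        // *\n"
                + body
                + "        // *\n"
                + "    }\n"
                + "}\n";
        Files.writeString(Path.of("GeneratedTest.java"), content);
    }
}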
SaveToFileExecutor wraps the generated test case code in a template to produce a compilable file. A resulting test looks like this:
@Test(dataProvider = "WebDriverProvider")
public void test(WebDriver driver) {
    login(driver);
    // *
    ContactsPage contactspage = new ContactsPage(driver, true);
    InboxMailPage inboxmailpage = contactspage.inbox();
    inboxmailpage.sanityCheck();
    ComposeMailPage composemailpage = inboxmailpage.compose();
    composemailpage.sanityCheck();
    composemailpage.setTo("me@myself.com");
    composemailpage.send();
    inboxmailpage.sanityCheck();
    List list = inboxmailpage.findBySubject("Seen that?");
    inboxmailpage.sanityCheck();
    inboxmailpage.inbox();
    inboxmailpage.sanityCheck();
    DraftsMailPage draftsmailpage = inboxmailpage.drafts();
    draftsmailpage.sanityCheck();
    inboxmailpage.inbox();
    inboxmailpage.sanityCheck();
    inboxmailpage.sendNewMessageToMe();
    inboxmailpage.setMessagesStarred(true, "autotest", "Seen that?");
    inboxmailpage.sanityCheck();
    TrashMailPage trashmailpage = inboxmailpage.trash();
    trashmailpage.sanityCheck();
    // *
}
The code generated by SaveToFileExecutor sits between the // * comments; everything else is added by the template.
In terms of the operations performed, the generated cases are only moderately diverse, but that is easily remedied by adding more page objects containing more test steps.
After thousands of "random" tests, we found no major problems in ProtonMail (such as error pages), but the browser did report some JavaScript errors, which matter for a system that relies on JavaScript for mail encryption and decryption. Admittedly, we had no access to the server logs during the experiment, but from an experimental standpoint this is enough to show how much the method can add to our confidence in the quality of the system under test.
Of course, random testing cannot replace subjective or traditional testing techniques, but it can give us extra confidence in the quality of the application during regression testing.