

Basic theoretical concepts for software testing interviews

2025-04-06 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

The basic concept of testing

Testing is a critical process in the software life cycle: it is the stabilization stage before a product is released and delivered to end users.

1. Classification of tests:

By testing method, tests can be divided into manual testing and automated testing.

Manual testing: running the system by hand against pre-designed test cases, without any testing tools, and exercising each functional module.

Automated testing: using test tools to run tests automatically by writing test scripts and supplying test data. The most commonly used automated testing tools today are GUI-based, built on record-and-playback technology.

By level, testing can be divided into unit testing, integration testing, system testing and confirmation testing.

Unit testing: verifying the correctness of a program module, the smallest unit of software design. It generally covers logic checks, structure checks, interface checks, error handling, code comments, input checks and boundary-value checks.

Unit testing is based on the system's detailed design and is generally done by the project team's own developers.
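
As an illustration, here is a minimal unit test for a hypothetical `clamp` helper, written with Python's built-in `unittest`, covering the logic, boundary-value and error-handling checks mentioned above (the function and its contract are assumptions for the example):

```python
import unittest

def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampTest(unittest.TestCase):
    def test_inside_range(self):            # logic check
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_boundaries(self):              # boundary-value check
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)
        self.assertEqual(clamp(-1, 0, 10), 0)
        self.assertEqual(clamp(11, 0, 10), 10)

    def test_error_handling(self):          # error-handling check
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)

if __name__ == "__main__":
    unittest.main(argv=["clamp_test"], exit=False, verbosity=0)
```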

Integration testing: on top of unit testing, assembling all the modules according to the design and testing them together. It generally covers checks of logical relationships, data relationships, business relationships, inter-module interfaces and external interfaces.

System testing: an overall test of the system's function and performance after all unit and integration testing is complete.

Confirmation testing: simulating the business environment in which the user runs the system and using black-box methods to verify that the software meets the user requirements, or the functional and non-functional features specified in the software requirements specification.

By visibility into the implementation, testing can be divided into white-box, black-box and gray-box testing.

White-box testing: testing through the program's source code rather than through the user interface. It looks for defects in the internal code, such as errors in algorithms, overflows, paths and conditions, starting from the code itself, and then corrects them.

Black-box testing: testing the whole software, or a particular feature, strictly through its external behavior, without inspecting the program's source code or knowing how it was designed. Testers learn how the software works by feeding it input and observing the output; during testing the program is treated as a black box that cannot be opened.

Ignoring the program's internal structure and characteristics entirely, the tester works at the program's interfaces, checking only whether each function behaves according to the requirements specification and whether the program properly accepts input and produces correct output.

The main black-box testing methods are equivalence class partitioning, boundary value analysis, cause-effect graphing and error guessing.

Equivalence class partitioning:

It divides all possible input data, that is, the program's input domain, into several subsets, then selects a small number of representative values from each subset as test cases. This is an important and widely used black-box test case design method.

Partitioning equivalence classes: an equivalence class is a subset of the input domain in which every input is equally likely to expose the same errors in the program. It is reasonable to assume that testing a representative value of an equivalence class is equivalent to testing every other value of that class. All the input data can therefore be divided into equivalence classes, and by taking one value from each class as test input, good test results are obtained from a small amount of representative data. Equivalence classes come in two kinds: valid and invalid.
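
As a sketch, here is equivalence class partitioning applied to a hypothetical validator that accepts a month number from 1 to 12 (the function and the chosen classes are assumptions for illustration):

```python
# Hypothetical system under test: accepts a month number 1..12.
def accept_month(value):
    return isinstance(value, int) and 1 <= value <= 12

# Partition the input domain into equivalence classes and pick one
# representative from each, rather than testing every possible value.
equivalence_classes = {
    "valid: 1..12":         (6,    True),   # any in-range value stands for all
    "invalid: below 1":     (0,    False),  # stands for all values < 1
    "invalid: above 12":    (13,   False),  # stands for all values > 12
    "invalid: non-integer": ("6",  False),  # wrong data type
}

for name, (representative, expected) in equivalence_classes.items():
    actual = accept_month(representative)
    assert actual == expected, f"{name}: got {actual}"
```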

Valid equivalence class: a set of input data that is reasonable and meaningful with respect to the program's specification. Valid equivalence classes verify that the program delivers the function and performance stated in the specification.

Invalid equivalence class: the opposite of a valid equivalence class: input data that is unreasonable or meaningless with respect to the specification.

When designing test cases, consider both kinds of equivalence class at the same time: software must not only accept reasonable data but also withstand unexpected input. Testing both gives the software higher reliability.

Boundary value analysis:

Long testing experience shows that a large share of errors occur at the boundaries of the input and output ranges rather than in their interiors, so designing test cases for the various boundary situations uncovers more errors.
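
The idea can be sketched in Python: for a hypothetical requirement that age lie in [18, 60], boundary test cases are generated at each bound, just inside it, and just outside it (the function names are illustrative):

```python
def boundary_values(low, high):
    """Standard boundary-value candidates for an inclusive range [low, high]:
    each bound, just inside it, and just outside it."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# Hypothetical requirement: age must be in [18, 60].
def accept_age(age):
    return 18 <= age <= 60

cases = boundary_values(18, 60)          # [17, 18, 19, 59, 60, 61]
results = {age: accept_age(age) for age in cases}
assert results == {17: False, 18: True, 19: True,
                   59: True, 60: True, 61: False}
```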

Error guessing:

Guess, from experience and intuition, the various errors the program may contain, and design test cases that target them. The basic idea of error guessing: enumerate the possible errors and error-prone special cases in the program, and select test cases accordingly. For example, unit testing often yields a list of common module errors, and the defects found in previous product tests are a distilled form of experience. Likewise, inputs or outputs of 0, and forms submitted as spaces or with only one row, are all error-prone situations; examples of these cases can be selected as test cases.
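
Such an experience-based checklist can be turned into reusable test data; here is a sketch against a hypothetical `normalize_name` function (both the checklist and the function are assumptions for illustration):

```python
# Error-prone inputs collected from past defects, kept as a reusable checklist.
SUSPECT_INPUTS = [0, "", " ", "  ", "\n", None]

def normalize_name(raw):
    """Hypothetical function under test: return a trimmed, non-empty name, or None."""
    if not isinstance(raw, str):
        return None
    cleaned = raw.strip()
    return cleaned or None

for value in SUSPECT_INPUTS:
    # None of these error-prone values should yield a usable name,
    # and none should raise an unhandled exception.
    assert normalize_name(value) is None
```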

Gray-box testing:

Gray-box testing works through the user interface, just like black-box testing, but the tester already knows exactly how the software, or the source code of a given feature, is designed, and may even have read some of the source. This lets the tester genuinely target specific conditions and functions.

By software characteristic, testing is divided into functional testing and performance testing.

Functional testing: refers to the testing carried out to ensure the correctness, integrity and other features of the software system.

Performance testing: refers to the testing and analysis carried out to evaluate the performance status of the software system and to predict the performance trend of the software system.

2. Definition of BUG:

BUG: a computer bug is an error (error), flaw (flaw), mistake (mistake) or fault in a computer program that prevents the program from running correctly. Bugs result from negligence or mistakes in the source code or during the programming phase.

Defect: (defect) in software engineering (Software Engineering), an inconsistency between the software and its requirements, which usually means the software cannot correctly perform a required function; also called a bug.

Fault: (fault) an abnormal condition or defect in a component, device or subsystem, which often leads to a failure of the system.

Error: (error) an error means writing the wrong code, usually inadvertently. There are two main kinds. Syntax errors (syntax error) are easy to detect because the code cannot be parsed at compile time and will not compile. Logic errors (logical error) are much harder to find because they are bound up with how the code actually executes.
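
A minimal illustration of the difference: the buggy function below parses and runs without complaint, yet returns the wrong sum because of an off-by-one logic error:

```python
def sum_up_to_buggy(n):
    # Logic error: range(n) stops at n - 1, so n itself is never added.
    return sum(range(n))

def sum_up_to(n):
    # Correct: extend the range by one so n is included.
    return sum(range(n + 1))

assert sum_up_to_buggy(10) == 45   # runs fine, but the result is wrong
assert sum_up_to(10) == 55         # matches n * (n + 1) / 2
```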

3. Planning of project testing

The content of the project test:

The project test is divided into two parts: the project development phase test and the project completion acceptance test.

The testing contents in the development phase mainly include: module function testing, integration testing and document checking.

Module functional test: ensures that each functional module of the system runs correctly, that the data's input, processing and output (IPO) meet the system design, and that unit and module functions satisfy the requirements definition.

Integration testing: after the system's modules are assembled, verify that business functions complete correctly according to the business process requirements, and that data processing and output are correct.

Document inspection: during the development phase, submitted project documents and records (technical and management documents) are checked and verified against the project schedule and the "Project Document Test Specifications and Standards", so as to meet the requirements of the company's quality and project systems; for the key elements of technical documents, verify whether the pass criteria are met.

Completion acceptance testing mainly includes installation testing, functional verification, performance testing, requirements verification and document testing. It is a comprehensive inspection and verification of the project before completion, and can serve as the basis and release condition for completing the project.

Requirements testing: check whether the software product meets the functional requirements in the project's requirements specification, and check the requirements' completeness, consistency and currency. The test focuses on the completeness of the requirements.

Installation test: according to the installation steps in the installation document provided by the project, set up the system running environment and check whether the system installation process is correct. It may include the installation and configuration of database server, application server, control registration, client installation and configuration, and application software installation.

Functional verification: according to the requirements specification and system outline design, check the operability and correctness of each function (functional unit, functional module) item by item.

Document testing: document testing starts when the project is established and is essentially a document check, covering both conformance to standards and validity. The goal is to make project documents and records both standardized and meaningful, rather than producing useless files. Technical documents such as the requirements specification, outline design and detailed design are also evaluated during technical review. User documentation, such as the installation manual and user operation manual, is checked according to the document inspection specification.

Performance testing: strictly speaking, this part of the test derives from the user's specific requirements for software characteristics, plus baseline performance requirements the company's development department sets for the product. If users have specific non-functional, business-driven requirements for the software, these must be stated in the software requirements specification so that they are measurable and testable. For multi-user environments or data-volume and load testing, building a test environment by hand is difficult, so dedicated performance-testing tools can be combined with manual testing.

4. The basic process of project testing:

Project test start: after the project is established, create the project in the test configuration library.

Test plan: after the detailed design of the system, make the test plan and prepare the test resources.

Design test cases, mainly business-related test cases.

Carry out functional module testing: build the running or development environment and use the functional-module test table; developers update progress status in the table, and testers record test progress in it. Errors found form a test error list, with each error linked to a corresponding test record that describes it in detail; correction and verification information is also recorded there.

After an error is closed, the tester maintains the test record table and updates the test case base and the issue base as accumulated experience.

At the end of the project, the testers will conduct the acceptance test for the completion of the project and fill in the project test report. The test report can be used as an input artifact for user acceptance.

5. Functional testing methods and contents

Data input test: entering data into the system, or issuing database operation commands, generally tests how the system operates on the data in the database.

Data type testing: different database systems have different data-type requirements, and the data type of each field is specified when the database table is defined.

1. Testing steps and methods: on the system's data maintenance interface, when entering or modifying data, enter data types the system was not designed for and check whether the system accepts them. If not, check that the rejection matches the system's design: for example, the illegal content is cleared immediately, the input focus cannot move to the next input position, a system-defined prompt message appears, and no raw error message from the development tool is allowed to surface. If the system does accept and save the value, check whether the field type design of the database table is inconsistent with the requirement or the design, and note whether other modules have specific expectations when they read this data.
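
The idea of rejecting wrongly typed input before it reaches the database can be sketched with a hypothetical `validate_field` helper (the helper and its contract are assumptions, not part of any real framework):

```python
def validate_field(value, expected_type):
    """Reject input whose type does not match the table's column definition.
    The caller should clear the illegal content and show a prompt instead of
    letting a raw database or tool error surface to the user."""
    try:
        expected_type(value)          # e.g. int("abc") raises ValueError
        return True
    except (TypeError, ValueError):
        return False

assert validate_field("123", int) is True    # numeric text fits an INT column
assert validate_field("abc", int) is False   # wrong type: clear and prompt
assert validate_field("3.14", float) is True # numeric text fits a FLOAT column
assert validate_field(None, int) is False    # missing value rejected too
```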

Boundary value test: according to the requirements of the value range of the data, input the data that meet the value range, the upper and lower limits of the value range and the data that exceed the value range. Note that in addition to testing the range of data types of the database system itself, it is also necessary to design test cases according to some specific requirements in the design of the software system.

Data validity testing: beyond checking data types and value ranges enforced by the database itself, the tester should check the input's legitimacy from experience and from the specific requirements of the software system. For example: date validity (birth date, insured date, occurrence time, and the reasonable ordering of dates according to habit and business logic), and the reasonableness and legality of payments, proportions, rates, and the like.

Single and double quotes: do not overlook the errors and data problems that entering single and double quotes can cause. If a field's input box accepts data containing single or double quotation marks, later queries built as Select statements may break. In WEB-based systems in particular, a single quote in the input will typically produce a page error when the record is queried (the page fails to load, is not found, or links to the wrong object).
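
One standard defense worth verifying during this test is that the application builds its SQL with bound parameters rather than string concatenation; here is a sketch using Python's `sqlite3` (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name TEXT)")

# Input that mixes both quote styles -- the classic breaker of
# string-concatenated SQL like "... WHERE name = '" + value + "'".
tricky = "O'Brien said \"hello\""

# Bound parameters: the driver handles escaping, so the value round-trips intact.
conn.execute("INSERT INTO customer (name) VALUES (?)", (tricky,))
row = conn.execute("SELECT name FROM customer WHERE name = ?", (tricky,)).fetchone()
assert row[0] == tricky
```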

Null value testing: when testing a data entry or modification interface, if you enter nothing and the column is not declared NOT NULL, pay close attention to the consequences. The record saves normally, but the field in the table is empty, so every operation involving that field, such as querying (AND conditions) or calculation (sums, products), can produce data problems (a result of 0, or no record returned). The tester should first determine whether the system stores a true null or an empty string or blank characters. In addition, for fields allowed to be empty, check how they behave when used as key elements or titles in on-screen or printed reports.
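
The NULL-versus-empty-string distinction can be demonstrated with `sqlite3` (the table and values are illustrative; note that SQLite's `SUM` skips NULL and coerces non-numeric text to 0):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pay (name TEXT, amount REAL)")
db.executemany("INSERT INTO pay VALUES (?, ?)",
               [("a", 100.0), ("b", None), ("c", "")])

# NULL compares equal to nothing, not even itself, so equality
# filters silently drop the NULL row.
matched = db.execute("SELECT COUNT(*) FROM pay WHERE amount = amount").fetchone()[0]

# A true NULL and an empty string are different things in the table.
nulls = db.execute("SELECT COUNT(*) FROM pay WHERE amount IS NULL").fetchone()[0]

# SUM skips NULL, and SQLite treats the non-numeric empty string as 0,
# so the total quietly reflects only one of the three rows.
total = db.execute("SELECT SUM(amount) FROM pay").fetchone()[0]

assert matched == 2 and nulls == 1 and total == 100.0
```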

Spaces: in a data maintenance interface, when entering data, watch for spaces in the input. First check the system design: if spaces are allowed, verify conditional queries and data returned as call parameters; also check whether the program uses a function to strip spaces.

Data check inconsistency: for primary keys such as numbers or codes, and for fields used as query or call conditions, check whether the system validates their input and whether that validation is consistent with the requirements of the query or call, especially where the data structure design imposes no constraint and the program alone checks and controls the value.

Analysis: the main purpose of data input testing is to ensure the legality and rationality of the data entering the system. In my opinion, inspecting the data input process is very important: neglecting data validation during programming may seem to speed up development, but it brings unpredictable programming and maintenance work later.

2. Directory path testing: test the path requirements the system specifies: change paths and check whether the system still runs correctly and how it handles faults. Using the system design manual (detailed design), or familiarity with the source code, find the paths the system uses while running and the places where the user chooses a path, then deliberately vary them (choose the correct path, choose another path, enter a path that does not exist) and check the system's fault tolerance and flexibility regarding paths. As a rule, programs should avoid hard-coded absolute paths, should offer a dialog box for configuring paths, and should prompt the user when an illegal path is entered.
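
A sketch of the fault-tolerant path handling described above: a hypothetical `open_config` that reports a clear error for a nonexistent path instead of crashing:

```python
import os
import tempfile

def open_config(path):
    """Hypothetical loader: report a clear error for a bad path instead of crashing."""
    if not os.path.isfile(path):
        return None, f"Path not found: {path}"
    with open(path) as fh:
        return fh.read(), None

# Correct path: write a temporary config file and read it back.
with tempfile.NamedTemporaryFile("w", suffix=".cfg", delete=False) as fh:
    fh.write("key=value")
    good_path = fh.name

content, err = open_config(good_path)
assert content == "key=value" and err is None
os.remove(good_path)

# Nonexistent path: the system should degrade gracefully with a prompt.
content, err = open_config("/no/such/dir/app.cfg")
assert content is None and err.startswith("Path not found")
```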

3. Data operation test: including data operation test and user interface operation test.

Modify and add data: for new and modified data, check the following. On the interface: is the data list refreshed immediately after a successful add; is wrong input cleared after an input error; is the input focus controlled properly. In prompts: is there a "saved successfully" message, and if an error occurs, is the error message accurate and readable. On the data side: verify via SQL that the data was committed correctly.

Delete data: test whether the system shows a confirmation prompt when deleting records and whether batch deletion is possible. According to the system's detailed design, check whether related tables change accordingly, in business terms, when records in the master table are deleted.

Transaction commit and rollback: anyone familiar with C/S (client/server) development or database application systems knows the concept of a database transaction. For complex business logic, or wherever the business demands data consistency and integrity, data should be submitted inside a transaction so that, if the system or hardware fails unexpectedly, the work can be rolled back. Per the system's design requirements, unexpected faults can be simulated deliberately to test the system's data integrity and fault tolerance.
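
A sketch with Python's `sqlite3`, whose connection context manager commits on success and rolls back on an exception; the simulated fault stands in for an unexpected system failure (table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE detail (master_id INTEGER)")
conn.commit()

def save_record(conn, master_id, fail=False):
    """Insert into both tables atomically; optionally simulate a mid-transaction fault."""
    try:
        with conn:                       # commits on success, rolls back on error
            conn.execute("INSERT INTO master VALUES (?)", (master_id,))
            if fail:
                raise RuntimeError("simulated hardware fault")
            conn.execute("INSERT INTO detail VALUES (?)", (master_id,))
    except RuntimeError:
        pass                             # fault handled; transaction rolled back

save_record(conn, 1)             # normal case: both rows land
save_record(conn, 2, fail=True)  # fault case: neither row survives

masters = conn.execute("SELECT COUNT(*) FROM master").fetchone()[0]
details = conn.execute("SELECT COUNT(*) FROM detail").fetchone()[0]
assert masters == 1 and details == 1   # the failed transaction left no trace
```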

4. Toolbar and shortcut-key testing: when testing a functional interface, test the shortcut keys defined in the system menus and the tool buttons in the menu toolbar, mainly for validity and consistency. Validity: check whether each one works and whether the interface responds. Consistency: check whether the label or prompt matches the function actually performed.

5. Operation sequence test

Button sequence test: in a functional interface, click the function buttons out of the designed or customary order and observe how the system reacts; click a button many times in quick succession and observe the reaction. The purpose is to test the system's control, validation and fault tolerance.

Business logic order: deliberately operate outside the system's normal business logic and process flow, and test whether the system enforces the correct order of business steps.

6. Button enablement test: tests the "Enabled" attribute of buttons when their preconditions are not met or they have no practical meaning. For example, if a business step has not been processed, the button for the next step should be grayed out (unavailable); when browsing records one by one with the cursor on the last record, the "Next" and "Last record" buttons should be grayed out; and so on.

7. Simultaneous operation testing: for deleting, modifying and adding data, and for certain business functions, run the operation from multiple clients at the same time and observe how the system reacts.
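
The underlying hazard, a lost update from concurrent read-modify-write, can be sketched with threads standing in for clients; here a lock serializes the update so no increment is lost (the counter is an illustrative stand-in for a shared record):

```python
import threading

class Counter:
    """Illustrative stand-in for a shared record updated by several clients."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        # Serialize the read-modify-write so concurrent updates are not lost.
        with self._lock:
            self.value += 1

def client(counter, times):
    for _ in range(times):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=client, args=(counter, 1000)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# 8 clients x 1000 increments each: with the lock, every update survives.
assert counter.value == 8000
```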

8. Attachment stress test: for systems with send, upload, download or e-mail functions, select large files and test the interface behavior and stability of the system: check whether it crashes or hangs with no response for a long time.

9. Data output test:

Data processing output test: mainly tests data sorting and whether conditional queries output the correct data for the given input conditions or requirements.

Printout: test whether the printing function prints reports normally, and whether printing follows the configured print settings.

10. WEB testing: in WEB-based applications, for pages that submit forms, test how the system handles clicking the "Back" button repeatedly; for pages that save data, click "Save" repeatedly and test how the system handles it.
