
User testing and evaluation in practice

2025-01-28 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/01 Report--

The first event of the 2019 Industrial Information Security Skills Competition has come to a successful conclusion. Beyond ensuring the smooth and stable operation of the competition platform, the more important task in this supporting work was to think about user experience and to carry out user testing and evaluation of the platform. For a professional platform for talent evaluation, selection, and competition in industrial information security, a live competition is both the best touchstone for the product and the most realistic environment for testing it with users.

In product design and optimization, user testing and evaluation means inviting target users who meet the test requirements to complete specific or representative tasks, while the whole process of product use is observed and recorded, including how smoothly users proceed and how they feel. The point of testing is not to respond to users' complaints and requests directly, but to understand actual usage by observing users' behavior, operations, and emotions. User testing is sometimes called product usability testing. Usability is defined as "the effectiveness, efficiency, and satisfaction with which specific users achieve specific goals in a specific context of use." User testing is also used in competitive analysis and market research: we often apply user testing methods to other vendors' products to inform our own product design.

For the network security competition platforms on the market, the users' scenario and goal are to interact with competition questions, submit answers, and receive quantitative and qualitative evaluation results as contestants. Only when this premise is met can criteria such as effectiveness, efficiency, and satisfaction be used to evaluate the platform. On top of usability testing in our usability laboratory and by our product testing team, the platform went through several rounds of real user testing during the 2018 Industrial Information Security Skills Competition. In terms of usability, it already delivers question display and interaction, answer submission, and quantitative and qualitative evaluation results; that is, the product is already usable.

In supporting the 2019 Industrial Information Security Skills Competition, we further clarified the usability testing objectives for the platform: during the competition, what exactly is the product team trying to test, and what information do we want to learn? Following standard usability testing methods and procedures, usability testing generally covers three main aspects:

First: effectiveness. Effectiveness asks whether users can achieve their goals. In a network security competition, it means users can interact with the questions, submit answers, and receive quantitative and qualitative results. If users cannot submit answers, there is an effectiveness problem, and the platform has no value. Effectiveness problems are therefore the most important ones to solve.

Second: efficiency. Efficiency means users do not do useless work and can reach their goals by the fastest path. Again taking the network security competition as an example: if users must repeat operations during the competition to complete a task, there is an efficiency problem. A serious efficiency problem becomes an effectiveness problem.

Third: satisfaction. Satisfaction asks whether users have any unpleasant experiences once effectiveness and efficiency pose no major problems. It can involve many aspects and details, such as whether registration asks for too much information, or whether the system responds slowly, and so on.
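Each of the three dimensions above can be tied to a simple metric. As an illustration only (the session records and field names are hypothetical, not the platform's actual instrumentation), the sketch below computes task completion rate (effectiveness), mean time-on-task (efficiency), and mean satisfaction rating:

```python
from statistics import mean

# Hypothetical session records; field names are illustrative,
# not the platform's real schema.
sessions = [
    {"completed": True,  "seconds": 95,  "rating": 4},  # rating on a 1-5 scale
    {"completed": True,  "seconds": 140, "rating": 3},
    {"completed": False, "seconds": 300, "rating": 2},
]

# Effectiveness: share of users who achieved the task goal.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Efficiency: average time-on-task among users who completed it.
time_on_task = mean(s["seconds"] for s in sessions if s["completed"])

# Satisfaction: average self-reported rating.
satisfaction = mean(s["rating"] for s in sessions)

print(f"completion rate: {completion_rate:.0%}")
print(f"mean time-on-task: {time_on_task:.1f}s")
print(f"mean satisfaction: {satisfaction:.2f}/5")
```

A drop in completion rate signals an effectiveness problem; rising time-on-task with a stable completion rate signals an efficiency problem.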

Based on these three basic elements of usability, the usability testing of our competition platform needs to keep digging into effectiveness, efficiency, and satisfaction, taking the competition as the opportunity and the contestants as the test subjects. An evaluation based on real user data is relatively objective and persuasive. The purpose of user testing is to find problems, improve the design, and uncover latent requirements. Its value lies in four areas:

First, it uncovers latent demand. Latent demand is fundamental and determines the product's direction.

Second, it finds and solves problems in the product: whether the interface framework, logical structure, interaction, and so on are reasonable, and whether the user experience is good.

Third, it reduces product cycle cost. Testing and evaluating while building shortens the cycle cost of interaction design through fast iteration, and provides good input for future product iterations.

Fourth, it strengthens the persuasiveness of the design. User testing supplies fresh cases and fresh data to back up design decisions.

At the end of the competition, we collected sample user test data from contestants through telephone surveys and interviews, recorded and summarized the evaluation data, and finally produced the survey results we needed. Throughout, the basic user testing process needs careful design. Based on experience, we divide it into four parts:

First: clarify the goal of the test, then have the user researcher design the tasks and test scripts that participants will complete. Because this was a direct user test during the competition, the contestants were the test users we needed to recruit, with age and social experience as the recruitment criteria and screening conditions. The tasks designed for the test directly determine which user actions can be described, so settling the former makes the latter clearer.

Second: screen users among the contestants for data collection; the invited users must be target users who meet the test conditions. For this survey we selected all users who entered the scenario questions as subjects, conducted telephone interviews one by one according to the designed survey tasks and questions, and successfully collected all the test data.

Third: conduct the test. In routine product testing we use the usability laboratory and the product testing team; here we used the contestants' actual usage directly as the test environment and test subjects. During the test we observe and record participants' performance. In telephone interviews we usually pose questions, offer few suggestions, and ask follow-ups or guide users only when they do not know what to do.

Fourth: analyze and report. After the telephone interviews, we record and preprocess the collected data, carefully reread each participant's test record, mine it for usability problems, list all the problems, and sort them. We judge each problem's severity and organize the results into a readable test report.
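The last step, turning mined problems into a severity-ranked list, can be sketched as follows. The issues, the 1-4 severity scale, and the report counts below are illustrative assumptions, not the actual survey results:

```python
# Hypothetical usability issues mined from interview records.
# severity: 1 = cosmetic ... 4 = blocks the task (illustrative scale).
issues = [
    {"issue": "client download too slow",      "severity": 4, "reports": 9},
    {"issue": "answer format unclear",         "severity": 3, "reports": 7},
    {"issue": "registration asks too much",    "severity": 1, "reports": 2},
    {"issue": "target IP must be scanned for", "severity": 3, "reports": 8},
]

# Rank by severity first, then by how many participants reported the issue.
ranked = sorted(issues, key=lambda i: (i["severity"], i["reports"]), reverse=True)

for i in ranked:
    print(f'sev {i["severity"]} ({i["reports"]} reports): {i["issue"]}')
```

Sorting on a (severity, report count) key keeps the most blocking and most widely reported problems at the top of the report, which is what the fix plan is drawn from.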

The test report lists the collected problems, but they cannot be turned directly into product improvement tasks and optimization items; we have to keep analyzing them to extract more detail. For example, how do we judge whether a test user's reaction to a particular problem is rational or emotional? That goes back to the original intent and content of the test task we designed: the judgment has to start from the task content itself and find its answer there. Beyond judging whether an answer is rational or emotional, we also need to evaluate the collected data both quantitatively and qualitatively. Quantitative evaluation covers the measurable parts, such as clicks and utilization, which can be explained directly with data. Qualitative evaluation covers the non-measurable parts, such as fluency, comfort, and creativity; it can only express a degree and cannot be pinned down precisely with numbers. By combining value judgments with quantitative and qualitative analysis of the collected problems, we can filter out the usability problems of greatest concern and draw up a targeted plan to solve them one by one.
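As a toy illustration of the quantitative/qualitative split described above (the click counts and fluency labels are assumptions for the example, not the platform's real data):

```python
from statistics import mean

# Quantitative: measurable counts, explainable directly with data.
clicks_per_task = [12, 9, 15, 11]
avg_clicks = mean(clicks_per_task)

# Qualitative: "fluency" is only a degree; map ordinal labels to a rough
# scale so responses can at least be compared, not measured exactly.
fluency_scale = {"poor": 1, "fair": 2, "good": 3, "excellent": 4}
fluency_labels = ["good", "fair", "good", "excellent"]
avg_fluency = mean(fluency_scale[label] for label in fluency_labels)

print(f"avg clicks per task (quantitative): {avg_clicks}")
print(f"avg fluency degree (qualitative, ordinal): {avg_fluency}")
```

The click average is a true measurement; the fluency average is only an ordinal summary and should be read as "roughly good", not as a precise number.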

This user test survey helped us find problems that benefit product optimization and improvement. In terms of effectiveness, all the test users expressed concern about the network stability of the × × access scenario, and the download speed of the * * client was limited by each contestant's own network speed. Since this was the first time the online scenario access mode was adopted, test users reported that downloading the xxx client took a long time; with the clock running on the field, contestants with poor networks were slowed down by the download, delaying entry into the scenario and affecting interaction with the questions and submission of answers. From a usability standpoint this is a serious effectiveness problem, and it will be the first one solved in the next events. All the test users also raised network stability alongside other major effectiveness concerns.

In terms of efficiency, test users wanted improvements such as example answers for competition questions and direct provision of the * * target IP. Because answer formats were not uniform, they all hoped each question would come with an example answer, improving contestants' efficiency in finding and assembling answers and reducing repeated attempts and wasted time. In addition, since the * target is fixed, test users wanted the * target IP address provided directly to avoid unnecessary network scanning: scanning by all contestants consumes a great deal of bandwidth, causing network congestion and stutters during the access phase.

Records of various other small problems will also help us improve the user experience of subsequent versions. Overall, user testing and evaluation of the competition platform in a live competition helps us improve the product, optimize the experience, and find and solve the product's problems.
