Developer Tools Every Python Programmer Must Know

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

What developer tools must a Python programmer know? This article answers that question in detail, with analysis and examples, in the hope of giving readers a simple, practical starting point.

Python has evolved a broad ecosystem that makes life easier for Python programmers and spares them from reinventing the wheel. The same idea applies to the work of tool developers, even when the tools they build never appear in the final program. This article introduces the developer tools every Python programmer should know.

For developers, the most practical kind of help is help with documenting their code. The pydoc module can generate well-formatted documentation for any importable module, based on the docstrings in its source code. Python also includes two testing frameworks for automatically exercising code and verifying its correctness:

1) the doctest module, which extracts test cases from examples embedded in source code or in stand-alone files;

2) the unittest module, a full-featured automated testing framework with support for test fixtures, predefined test suites, and test discovery.
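The pydoc generation mentioned above can be sketched in a couple of lines; the choice of `pydoc.render_doc` and of `str.split` as the target here is purely illustrative:

```python
import pydoc

# Render plain-text documentation for any importable object,
# assembled from its signature and docstring.
text = pydoc.render_doc(str.split, renderer=pydoc.plaintext)
print(text.splitlines()[0])   # title line of the generated docs
```

The same machinery backs the command-line form `python -m pydoc <module>`.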

The trace module monitors how Python executes a program and generates a report showing how many times each line was executed. This information can be used to find execution paths that the automated test suite does not cover, or to study the program's call graph and thereby discover dependencies between modules. Writing and running tests will catch the problems in most programs, and Python makes debugging easier because, in most cases, it prints unhandled errors to the console in the form of a traceback. If the program is not running in a text console, the traceback can instead be directed to a log file or a message dialog. When the standard traceback does not provide enough information, the cgitb module shows the details of every stack level, along with source context and local variables. cgitb can also render this trace information as HTML, for reporting errors in web applications.
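The line-counting just described can be sketched as follows; the `fizz` function is only an illustration:

```python
import trace

def fizz(n):
    # Two branches; the call below exercises only one of them.
    if n % 3 == 0:
        return "fizz"
    return str(n)

# count=True collects per-line execution counts; trace=False
# suppresses the line-by-line echo while the code runs.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(fizz, 9)
results = tracer.results()
# results.counts maps (filename, lineno) -> execution count; lines
# absent from it were never run, i.e. not covered by this "test".
executed = sum(results.counts.values())
```

`results.write_results()` can also dump per-module coverage listings to disk.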

Once you have found the problem, you need an interactive debugger to step through the code, and the pdb module is well suited to the job. It can show the execution path the program took when the error occurred, and it lets you inspect and modify objects and evaluate code dynamically while debugging. After the program has been tested and debugged, the next step is performance. Developers can use the profile and timeit modules to measure a program's speed, find the slow spots, and then tune that part of the code in isolation. A Python program is executed by the interpreter, whose input is the bytecode-compiled version of the original source. This bytecode can be generated on the fly as the program runs, or when the program is packaged. The compileall module handles the packaging case: it exposes interfaces that installers and packaging tools can use to generate files containing module bytecode. In a development environment, compileall can also be used to verify that source files are free of syntax errors.
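The syntax-checking use of compileall might look like this minimal sketch; the throwaway directory and file are assumptions of the example:

```python
import compileall
import pathlib
import tempfile

# Byte-compile every .py file under a directory. The result is truthy
# only if all files compiled cleanly, so it doubles as a syntax check.
with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "good.py").write_text("x = 1\n")
    ok = compileall.compile_dir(d, quiet=1)
```

The command-line form `python -m compileall <dir>` does the same job during packaging.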

At the source-code level, the pyclbr module provides a class browser that makes it easy for text editors and other programs to scan Python source for interesting items such as functions and classes. Because the class browser works without importing the code, it avoids any potential side effects of execution.
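A quick sketch of the class browser, using the stdlib queue module as an arbitrary target:

```python
import pyclbr

# Scan the module's source for top-level classes and functions
# without importing (executing) it.
info = pyclbr.readmodule_ex('queue')
top_level = sorted(info)   # names found, e.g. the Queue class
```

Each entry is a pyclbr.Class or pyclbr.Function object carrying the file and line number where it is defined.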

Docstrings and the doctest Module

If the first statement of a function, class, or module is a string, that string is its docstring. Including docstrings is considered good programming practice because they provide information to Python development tools: the help() command detects docstrings, and Python IDEs detect them as well. Because programmers tend to read docstrings in an interactive shell, it is best to keep them short. For example:

```python
# mult.py
class Test:
    """
    >>> a = Test(5)
    >>> a.multiply_by_2()
    10
    """
    def __init__(self, number):
        self._number = number

    def multiply_by_2(self):
        return self._number * 2
```

When writing documentation, a common problem is keeping it in sync with the actual code. For example, a programmer may modify a function's implementation but forget to update the documentation. To solve this problem, we can use the doctest module, which collects docstrings, scans them for interactive examples, and executes those examples as tests. To use it, we usually create a separate module for testing. For example, if the Test class above lives in the file mult.py, you would create a new file testmult.py to test it, as shown below:

```python
# testmult.py
import mult, doctest

doctest.testmod(mult, verbose=True)

# Trying:
#     a = Test(5)
# Expecting nothing
# ok
# Trying:
#     a.multiply_by_2()
# Expecting:
#     10
# ok
# 3 items had no tests:
#     mult
#     mult.Test.__init__
#     mult.Test.multiply_by_2
# 1 items passed all tests:
#     2 tests in mult.Test
# 2 tests in 4 items.
# 2 passed and 0 failed.
# Test passed.
```

In this code, doctest.testmod(module) runs the tests for the given module and returns the number of failed tests and the total number of tests attempted. If all tests pass, no output is produced; otherwise you will see a failure report showing the difference between the expected and the actual value. To see verbose output for every test, use testmod(module, verbose=True).

If you do not want to create a separate test file, another option is to include the appropriate test code at the end of the file:

```python
if __name__ == '__main__':
    import doctest
    doctest.testmod()
```

To run such tests, invoke the doctest module with the -m option. By default there is no output when the tests pass; add the -v option to see the details.

$ python -m doctest -v mult.py

Unit Tests and the unittest Module

If we want to test the program more thoroughly, we can use the unittest module. Through unit testing, developers can write a series of independent test cases for each element of the program (for example, separate functions, methods, classes, and modules). When testing larger programs, these tests can be used as a cornerstone to verify the correctness of the program. As our programs get larger and larger, unit tests of different components can be combined into larger testing frameworks and testing tools. This can greatly simplify the work of software testing and provide convenience for finding and solving software problems.

```python
# splitter.py
import unittest

def split(line, types=None, delimiter=None):
    """Splits a line of text and optionally performs type conversion."""
    fields = line.split(delimiter)
    if types:
        fields = [ty(val) for ty, val in zip(types, fields)]
    return fields

class TestSplitFunction(unittest.TestCase):
    def setUp(self):
        # Perform set up actions (if any)
        pass

    def tearDown(self):
        # Perform clean-up actions (if any)
        pass

    def testsimplestring(self):
        r = split('GOOG 100 490.50')
        self.assertEqual(r, ['GOOG', '100', '490.50'])

    def testtypeconvert(self):
        r = split('GOOG 100 490.50', [str, int, float])
        self.assertEqual(r, ['GOOG', 100, 490.5])

    def testdelimiter(self):
        r = split('GOOG,100,490.50', delimiter=',')
        self.assertEqual(r, ['GOOG', '100', '490.50'])

# Run the unittests
if __name__ == '__main__':
    unittest.main()

# ...
# ----------------------------------------------------------------------
# Ran 3 tests in 0.001s
#
# OK
```

When using unit tests, we need to define a class that inherits from unittest.TestCase. Inside this class, each test is defined as a method whose name starts with test, for example 'testsimplestring', 'testtypeconvert', and so on (the only requirement is that the method name starts with test). Within each test, assertions are used to check different conditions.
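A few of the most common TestCase assertions, run programmatically here so the result can be inspected; the class and the individual checks are illustrative:

```python
import unittest

class TestAssertions(unittest.TestCase):
    def test_checks(self):
        self.assertEqual(2 + 2, 4)                  # exact equality
        self.assertTrue('py' in 'python')           # truthiness
        self.assertAlmostEqual(0.1 + 0.2, 0.3)      # float tolerance
        with self.assertRaises(ZeroDivisionError):  # expected exception
            1 / 0

# Build and run a suite without going through unittest.main()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAssertions)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

result.wasSuccessful() reports whether every assertion held, which is handy when driving tests from another program.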

A practical example: suppose a method in your program writes its output to sys.stdout, which usually means printing text to the screen. To test this code, you want to show that, given the right input, the right output is produced.

```python
# url.py
def urlprint(protocol, host, domain):
    url = '{}://{}.{}'.format(protocol, host, domain)
    print(url)
```

The built-in print function sends its output to sys.stdout by default. To verify that the output actually arrives there, you can simulate sys.stdout with a mock object and make assertions about what the program wrote. The patch() method in the unittest.mock module replaces an object only within the context of the running test and restores it to its original state as soon as the test completes. Here is the test code for the urlprint() method:

```python
# urltest.py
from io import StringIO
from unittest import TestCase
from unittest.mock import patch
import url

class TestURLPrint(TestCase):
    def test_url_gets_to_stdout(self):
        protocol = 'http'
        host = 'www'
        domain = 'example.com'
        expected_url = '{}://{}.{}\n'.format(protocol, host, domain)
        with patch('sys.stdout', new=StringIO()) as fake_out:
            url.urlprint(protocol, host, domain)
            self.assertEqual(fake_out.getvalue(), expected_url)
```

The urlprint() function takes three parameters, and the test first assigns a dummy value to each of them. The variable expected_url holds the desired output string. To run the test, we use unittest.mock.patch() as a context manager, replacing the standard output sys.stdout with a StringIO object, so that everything sent to standard output is captured by it. The variable fake_out is the mock object created in the process; it can be used inside the with block for a series of checks. When the with statement completes, patch() restores everything to its pre-test state, as if the test had never run, with no extra work required. Note, however, that for some Python C extensions this approach is useless, because those extensions bypass sys.stdout and write directly to standard output. The example only applies to pure-Python code (if you need to capture output from something like a C extension, you can do so by opening a temporary file and redirecting standard output to it).
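The temporary-file trick mentioned in passing can be sketched as follows; capture_fd_stdout is our own illustrative helper, not a standard-library function:

```python
import os
import sys
import tempfile

def capture_fd_stdout(fn):
    """Redirect file descriptor 1 itself, which also catches output
    written below the sys.stdout object (e.g. by C extensions)."""
    saved = os.dup(1)                  # keep a handle on the real stdout
    with tempfile.TemporaryFile(mode='w+') as tmp:
        os.dup2(tmp.fileno(), 1)       # point fd 1 at the temp file
        try:
            fn()
            sys.stdout.flush()         # push any buffered Python-side output
        finally:
            os.dup2(saved, 1)          # restore the real stdout
            os.close(saved)
        tmp.seek(0)
        return tmp.read()

# Simulate a C-level write that bypasses the sys.stdout object entirely.
captured = capture_fd_stdout(lambda: os.write(1, b"hello\n"))
```

Unlike the StringIO patch above, this captures anything written to file descriptor 1, at the cost of a little os-level bookkeeping.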

Python Debugger and pdb Module

Python includes a simple command-line debugger in the pdb module. The pdb module supports post-mortem debugging, inspection of stack frames, breakpoints, single-stepping of source lines, and code evaluation.

There are several functions that can call the debugger in a program or debug in an interactive Python terminal.

Of all the functions that start the debugger, set_trace() is probably the simplest and most practical. If you find a problem in a complex program, you can insert a set_trace() call into the code and run the program. When execution reaches set_trace(), the program pauses and jumps straight into the debugger, where you can begin inspecting the runtime environment. When you exit the debugger, program execution resumes automatically.

Suppose there is something wrong with your program and you want to find an easy way to debug it.

If your program crashes with an exception, running it with the command python3 -i someprogram.py is a good way to find out what went wrong. The -i option starts an interactive shell as soon as the program finishes, and in that shell you can explore exactly what caused the error. For example, given the following code:

```python
def function(n):
    return n + 10

function("Hello")
```

Running it with python3 -i produces the following output:

```
$ python3 -i sample.py
Traceback (most recent call last):
  File "sample.py", line 4, in <module>
    function("Hello")
  File "sample.py", line 2, in function
    return n + 10
TypeError: Can't convert 'int' object to str implicitly
>>>
```

If you don't find any obvious errors, you can further launch the Python debugger. For example:

```
>>> import pdb
>>> pdb.pm()
> sample.py(4)func()
-> return n + 10
(Pdb) w
  sample.py(6)<module>()
-> func('Hello')
> sample.py(4)func()
-> return n + 10
(Pdb) print n
'Hello'
(Pdb) q
>>>
```

If your code is in an environment where it is difficult to start an interactive shell (for example, in a server environment), you can add error handling code and output trace information yourself. For example:

```python
import traceback
import sys

try:
    func(arg)
except:
    print('**** AN ERROR OCCURRED ****')
    traceback.print_exc(file=sys.stderr)
```

If your program does not crash but behaves differently from what you expect, you can try adding print() calls at the suspicious spots. If you take this approach, a few related techniques are worth exploring. First, the function traceback.print_stack() prints the program's stack trace at the moment it is called. For example:

```
>>> def sample(n):
...     if n > 0:
...         sample(n-1)
...     else:
...         traceback.print_stack(file=sys.stderr)
...
>>> sample(5)
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in sample
  File "<stdin>", line 3, in sample
  File "<stdin>", line 3, in sample
  File "<stdin>", line 3, in sample
  File "<stdin>", line 3, in sample
  File "<stdin>", line 5, in sample
>>>
```

In addition, you can start the debugger manually using pdb.set_trace () anywhere in the program, like this:

```python
import pdb

def func(arg):
    ...
    pdb.set_trace()
    ...
```

This is a very practical technique for digging into large programs, because it lets you see the program's control flow and the arguments of a function clearly. For example, once the debugger starts, you can inspect variables with print, or view the stack trace with the w command.

Don't overcomplicate debugging. Often the trace information alone is enough to solve the simple errors (the actual error usually appears on the last line of the traceback). Inserting print() calls into the code is also an easy way to display debugging information during development (just remember to delete them afterwards). A common use of the debugger is to explore the values of variables inside a crashed function, so knowing how to enter the debugger after a crash is very useful. When the program's control flow is unclear, you can insert a pdb.set_trace() call to untangle a complex program: execution proceeds until it hits the set_trace() call, then jumps straight into the debugger, where you can explore further. If you use a Python IDE, it usually provides a pdb-based debugging interface; check the IDE's documentation for more information.

Here is a list of resources for getting started with the Python debugger:

Read Steve Ferg's article "Debugging in Python"

Watch Eric Holscher's screencast "Using pdb, the Python Debugger"

Read Ayman Hourieh's article "Python Debugging Techniques"

Read Python documentation for pdb-The Python Debugger

Read Chapter 9 of Karen Tracey's Django 1.1 Testing and Debugging, "When You Don't Even Know What to Log: Using Debuggers"

Program analysis

The profile and cProfile modules can both be used to analyze a program. They work the same way; the only difference is that cProfile is implemented as a C extension, which makes it much faster and therefore more popular. Both modules collect coverage information (for example, which functions were executed and how often) as well as performance data. The easiest way to profile a program is to run this command:

% python -m cProfile someprogram.py

Alternatively, you can use the run function in the profile module:

run(command [, filename])

This function executes the contents of command using the exec statement. filename is an optional file name for saving the results; if no filename is given, the output goes directly to standard output.

The following is the output report when the parser execution is complete:

```
126 function calls (6 primitive calls) in 5.130 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.030    0.030    5.070    5.070 <string>:1(?)
    121/1    5.020    0.041    5.020    5.020 book.py:11(process)
        1    0.020    0.020    5.040    5.040 book.py:5(?)
        2    0.000    0.000    0.000    0.000 exceptions.py:101(__init__)
        1    0.060    0.060    5.130    5.130 profile:0(execfile('book.py'))
        0    0.000             0.000          profile:0(profiler)
```

When the first column of the output contains two numbers (for example, 121/1), the latter is the number of primitive calls and the former the actual number of calls. For most applications, the report generated by this module is sufficient, for example when you simply want to see where your program's time is going. Then, if you want to save the data and analyze it later, you can use the pstats module.
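The pstats workflow of saving raw profile data and reloading it later can be sketched as follows; the busy function and the file name are illustrative:

```python
import cProfile
import os
import pstats
import tempfile

def busy():
    return sorted(range(10000), reverse=True)

with tempfile.TemporaryDirectory() as d:
    statfile = os.path.join(d, 'busy.prof')

    profiler = cProfile.Profile()
    profiler.runcall(busy)            # profile a single call
    profiler.dump_stats(statfile)     # persist the raw data

    stats = pstats.Stats(statfile)    # ...reload it any time later
    stats.strip_dirs().sort_stats('tottime')
    total = stats.total_calls
```

Once loaded, the Stats object can be re-sorted and filtered repeatedly (print_stats, print_callers, and so on) without re-running the program.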

Suppose you want to know exactly where and how much time your program takes.

If you only want to time your entire program, the Unix time command is enough. For example:

```
bash % time python3 someprogram.py
real    0m13.937s
user    0m12.162s
sys     0m0.098s
bash %
```
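For a single small snippet, the timeit module mentioned earlier is often the lightest-weight option; the snippet and iteration count here are arbitrary:

```python
import timeit

# Run the statement 1000 times and report the total elapsed seconds.
elapsed = timeit.timeit('"-".join(str(n) for n in range(100))',
                        number=1000)
```

timeit repeats the statement many times precisely so that per-call overhead averages out, which makes it better suited to micro-benchmarks than a single wall-clock measurement.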

Generally speaking, the profiling you need falls somewhere between the two extremes of timing the whole program and profiling every line. For example, you may already know that the program spends most of its time in certain functions. To profile a specific function, we can use a decorator, for example:

```python
import time
from functools import wraps

def timethis(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        r = func(*args, **kwargs)
        end = time.perf_counter()
        print('{}.{} : {}'.format(func.__module__, func.__name__, end - start))
        return r
    return wrapper
```

Using the decorator is simple: just place it above the definition of any function you want to time. For example:

```
>>> @timethis
... def countdown(n):
...     while n > 0:
...         n -= 1
...
>>> countdown(10000000)
__main__.countdown : 0.803001880645752
>>>
```

If you want to time a block of statements, you can define a context manager. For example:

```python
import time
from contextlib import contextmanager

@contextmanager
def timeblock(label):
    start = time.perf_counter()
    try:
        yield
    finally:
        end = time.perf_counter()
        print('{} : {}'.format(label, end - start))
```

Here is an example of how to use the context manager:

```
>>> with timeblock('Counting'):
...     n = 10000000
...     while n > 0:
...         n -= 1
...
Counting : 1.5551159381866455
>>>
```

That concludes this tour of the developer tools every Python programmer must know. I hope the material above is of some help; if questions remain, the documentation of the individual modules is the place to dig deeper.
