2025-04-05 Update From: SLTechnology News&Howtos > Development
This article covers the core coding techniques for writing faster Python. Many people run into these problems in real projects, so let's walk through how to handle them. I hope you read it carefully and get something out of it!
One. Memoize (cache) reused data
Never do a job a thousand times when you can do it once and save the result. If you have a frequently called function that returns predictable results, Python gives you the option to cache those results in memory. Subsequent calls with the same arguments then return almost immediately.
There are many ways to do this; my favorite is also nearly the least effort. Python's native functools library has the @functools.lru_cache decorator, which caches the n most recent calls to a function. This is convenient when the value being cached changes, but is relatively static within a particular window of time. A list of the most recently used items over the course of a day is a good example.
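A minimal sketch of the decorator in action, using the classic recursive Fibonacci function as a stand-in for any expensive, predictable computation:

```python
import functools

@functools.lru_cache(maxsize=128)
def fib(n):
    # Results of earlier calls are served from the cache, turning
    # an exponentially recursive computation into a linear one.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # -> 832040, computed almost instantly
```

Without the decorator, `fib(30)` makes over a million recursive calls; with it, each value of `n` is computed exactly once.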
Two. Move math to NumPy
If you are doing matrix- or array-based math and don't want the Python interpreter getting in the way, use NumPy. By doing its heavy lifting in a C library, NumPy provides faster array processing than native Python. It also stores numerical data more efficiently than Python's built-in data structures.
Relatively unexotic math can be sped up enormously by NumPy, too. The package provides replacements for many common Python math operations, such as min and max, that are many times faster than the Python originals.
Another benefit of NumPy is more efficient use of memory for large objects, such as lists with millions of items. Large objects like that in NumPy generally take up about one quarter of the memory they would if represented in conventional Python. Note that starting with the right data structure for the job is an optimization in itself.
Rewriting Python algorithms to use NumPy takes some work, since array objects need to be declared using NumPy's syntax. But NumPy uses Python's existing idioms (+, -, and so on) for the actual math operations, so switching to NumPy isn't too disorienting in the long run.
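A small sketch of the idea (assuming NumPy is installed): the arithmetic below runs as a compiled loop over the whole array, and each element occupies a fixed 8 bytes rather than a full Python int object.

```python
import numpy as np

# One vectorized expression replaces a million-iteration Python loop;
# the +, * idioms are the same ones plain Python uses.
a = np.arange(1_000_000, dtype=np.int64)
b = a * 2 + 1

print(b[:3])      # first few results
print(a.nbytes)   # 8 bytes per element -> 8,000,000 bytes total
```

The equivalent list comprehension, `[x * 2 + 1 for x in range(1_000_000)]`, both runs far slower and uses several times the memory.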
Three. Use C libraries
The strategy NumPy uses, wrapping a library written in C, is a good one to emulate. If an existing C library meets your needs, Python and its ecosystem provide several options for connecting to that library and exploiting its speed.
The most common way is Python's ctypes library. Because ctypes is broadly compatible with other Python applications (and runtimes), it's the best place to start, but it's far from the only game in town. The cffi project provides a more elegant interface to C. Cython (see below) can also be used to wrap external libraries, albeit at the cost of learning Cython's markup.
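A minimal ctypes sketch, calling a function straight out of the standard C library. It assumes a Unix-like system where `find_library("c")` can locate libc; on Windows the library name and lookup differ.

```python
import ctypes
import ctypes.util

# Load the standard C library (lookup assumes a Unix-like system).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the signature so ctypes converts arguments and results correctly.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # calls the compiled C function directly
```

The same pattern scales to your own shared libraries: point `CDLL` at the `.so`/`.dll`, declare `argtypes`/`restype`, and call.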
Four. Parallelize with multiprocessing
Traditional Python applications, those implemented in CPython, execute only one thread at a time in order to avoid the state problems that arise when using multiple threads. This is the infamous Global Interpreter Lock (GIL). There are good reasons for its existence, but that doesn't make it any more appealing.
The GIL has grown significantly more efficient over time (one more reason to run Python 3 instead of Python 2), but the core problem remains: a CPython application can be multithreaded, but CPython does not allow those threads to run in parallel across multiple cores.
To solve this, Python provides the multiprocessing module, which runs multiple instances of the Python interpreter on different cores. State can be shared through shared memory or a server process, and data can be passed between process instances through queues or pipes.
You still need to manage state between processes manually, and there is a fair amount of overhead in starting multiple Python instances and passing objects between them. But for long-running processes, the multiprocessing library is useful because it can benefit from parallelism across cores.
Also, Python modules and packages that use C libraries (such as NumPy) sidestep the GIL entirely. That's another reason they're recommended for speedups.
Five. Know what your libraries are doing
How convenient it is to simply type import xyz and take advantage of the work of countless other programmers! But you need to know that third-party libraries can change an application's performance, and not always for the better.
Sometimes this manifests in obvious ways, as when modules from a particular library constitute a bottleneck. (Again, profiling helps.) Sometimes it's less obvious. For example: Pyglet, a handy library for creating windowed graphical applications, automatically enables a debug mode that dramatically affects performance until it is explicitly disabled. You might never realize this unless you read the documentation. Read the docs and stay informed.
Six. Pay attention to the platform
Python runs cross-platform, but that doesn't mean the quirks of each operating system (Windows, Linux, macOS) are all abstracted away under Python. Most of the time this means knowing platform specifics such as path-naming conventions, for which there are helper functions.
When it comes to performance, though, understanding platform differences also matters. For example, on Windows, Python scripts that need timer precision finer than 15 milliseconds (say, for multimedia) will need to use Windows API calls to access high-resolution timers or to raise the timer resolution.
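For interval measurement specifically, the standard library already hides one such platform difference: `time.perf_counter()` uses the highest-resolution clock available on each OS (QueryPerformanceCounter on Windows), whereas `time.time()` is bound to the coarse system clock. A small sketch:

```python
import sys
import time

# time.time() can be as coarse as ~15.6 ms on Windows, while
# time.perf_counter() taps the platform's high-resolution timer.
start = time.perf_counter()
time.sleep(0.01)
elapsed = time.perf_counter() - start

print(f"slept for {elapsed * 1000:.3f} ms on {sys.platform}")
```

Actually *raising* the Windows timer resolution (e.g., so `sleep` wakes more precisely) still requires Windows API calls such as those in the winmm library, which is exactly the kind of platform-specific detail this tip is about.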
Seven. Go with PyPy
CPython is the most commonly used implementation of Python, and it prioritizes compatibility over raw speed. For programmers who want to put speed first, there is PyPy, a Python implementation equipped with a JIT compiler to accelerate code execution.
Because PyPy is designed as a drop-in replacement for CPython, it's one of the simplest ways to get a quick performance boost. Many common Python applications will run on PyPy exactly as-is. Generally, the more an application relies on "vanilla" Python, the more likely it is to run on PyPy without modification.
However, taking full advantage of PyPy may require testing and study. You'll find that long-running applications get the biggest performance benefits from PyPy, because the compiler analyzes the execution over time. For short scripts that run and exit, you're probably better off with CPython, since the performance gains won't overcome the overhead of the JIT.
Note that PyPy's support for Python 3 still lags several versions behind; it currently stands at Python 3.2.5. Code that uses late-breaking Python features, such as async and await coroutines, won't work. Finally, Python applications that use ctypes may not always behave as expected. If you're writing something that might run on both PyPy and CPython, it may make sense to handle use cases separately for each interpreter.
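Branching per interpreter can be done at runtime with the standard library. A minimal sketch (the `strategy` labels are illustrative, not an official API):

```python
import platform

impl = platform.python_implementation()  # 'CPython', 'PyPy', ...

if impl == "PyPy":
    # Favor long-lived pure-Python loops so the JIT can warm up;
    # be wary of ctypes-heavy code paths.
    strategy = "rely on the JIT"
else:
    # On CPython, lean on C extensions such as NumPy instead.
    strategy = "rely on C extensions"

print(impl, "->", strategy)
```

This keeps one codebase while letting each interpreter take the path it is fastest on.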
Other experiments in accelerating Python with a JIT are still bearing fruit. These include Pyjion, a Microsoft project that provides a JIT interface for CPython; Microsoft has supplied its own JIT as a proof of concept.
Eight. Upgrade to Python 3
If you are using Python 2.x and there is no overriding reason (such as an incompatible module) to stay with it, you should make the jump to Python 3.
Aside from Python 3 being the future of the language, many constructs and optimizations are available in Python 3 that aren't in Python 2.x. For instance, Python 3.5 makes asynchronous programming less thorny by making the async and await keywords part of the language's syntax. Python 3.2 brought a major upgrade to the Global Interpreter Lock that significantly improves how Python handles multiple threads.
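A minimal sketch of the Python 3.5+ syntax: three simulated I/O-bound tasks overlap on one event loop instead of running one after another.

```python
import asyncio

async def fetch(n):
    # 'await' suspends this coroutine without blocking the event
    # loop, so the three simulated requests run concurrently.
    await asyncio.sleep(0.01)
    return n * 10

async def main():
    # gather() schedules all three coroutines and preserves order.
    return await asyncio.gather(fetch(1), fetch(2), fetch(3))

results = asyncio.run(main())
print(results)  # [10, 20, 30]
```

The same logic in Python 2 would require callback-based frameworks or generator tricks; here the concurrency is plain language syntax.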
That's the end of "what are the core coding techniques for faster Python." Thank you for reading. If you want to learn more about the industry, you can follow the site; the editor will keep producing high-quality, practical articles for you!