How to Use Python Modules

This article introduces Python's module system and a number of commonly used standard-library modules (logging, json, pickle, time, datetime, random, os, re, subprocess, hashlib, shutil, tarfile, xml, configparser). It is intended as a practical reference; the sections below walk through each topic with short examples.
What is a module

A module is a collection of functionality (functions, classes, names) gathered in one place. Modules commonly take these forms (custom modules, third-party modules, built-in modules):

1. A .py file is a module: for a file named module.py, the file name is module.py and the module name is module.
2. A folder containing an __init__.py file is also a module (a package).
3. A C or C++ extension already compiled into a shared library or DLL.
4. A built-in module written in C and linked into the Python interpreter.

Why use modules

1. Using third-party or built-in modules is a form of reuse (taking what already exists); it greatly improves development efficiency.
2. Custom modules: write the common functionality your own program needs into a Python file of its own; the other parts of the program can then reference and reuse those functions by importing that module.

An important premise: a module is always imported by an executing file. When reasoning about imports, you must be clear about which file is the executing script and which is the imported module.

import

The first time import m1 is executed, three things happen:
1. A namespace for the module is created.
2. m1.py is executed, and every name produced during that execution is placed into the module's namespace.
3. A name m1 is created in the current executing file, referring to the module's namespace.

Usage: access a name such as func in m1's namespace as m1.func. The advantage is that it cannot conflict with names in the current namespace; the disadvantage is that every access needs the m1. prefix.

from ... import

The first time from m1 import func is executed, three things happen:
1. A namespace for the module is created.
2. m1.py is executed, and every name produced during that execution is placed into the module's namespace.
3. A name func is created directly in the current executing file, pointing directly at the function in the module's namespace.

Usage: func can be used directly, with no prefix. The advantage is that no prefix is needed; the disadvantage is that it can easily conflict with names in the current namespace, for example:

def func():
    pass
func()   # this local definition now shadows the imported func

Two ways to import a module

Absolute import: the search is based on the sys.path of the executing file. Such a file can be used both as an executing script and as an imported module, but the search path must always be understood relative to the executing file. Format: from folder import module_name.

Relative import: the search is based on the folder containing the current file. Such a file can only be imported as a module; it cannot be run as a script. Format: from . import module_name. Relative imports are generally used inside packages, and come with restrictions:
1. The relative reference cannot reach outside the package.
2. The file cannot be run directly as an executing script.

Module search path: memory (modules already imported) -> built-in modules -> the directories in sys.path. (A minimal import sketch follows.)
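To make the import mechanics above concrete, here is a minimal sketch; the module name m1 and the names inside it are placeholders chosen for illustration, not from any real library.

# m1.py -- a small custom module used only for illustration
x = 10

def func():
    print('func from m1')

# run.py -- the executing file
import m1                  # creates m1's namespace, runs m1.py once, binds the name m1 here
print(m1.x)                # every access goes through the m1. prefix
m1.func()

from m1 import func        # binds the name func directly in the current namespace
func()                     # no prefix needed, but the name can now collide with local names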
The logging module

import logging

Basic configuration is done with logging.basicConfig:
logging.basicConfig(
    filename='access.log',
    format='%(asctime)s - %(name)s - %(levelname)s - %(module)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S %p',
    level=10,
    # stream=True,
)
Log levels, from lowest to highest: debug -> info -> warning -> error -> critical. A record is emitted only if its level is at or above the configured level:
logging.debug('debugging information')        # 10
logging.info('normal information')            # 20
logging.warning('warning information')        # 30
logging.error('error message')                # 40
logging.critical('critical error message')    # 50
Problems when logging is used with no configuration at all (which basicConfig solves):
1. The log level cannot be specified.
2. The log format cannot be specified.
3. Output can only go to the screen, never to a file.
basicConfig solves those problems but introduces new ones:
1. The string encoding of the log file cannot be specified.
2. Output goes only to the file; it is no longer printed to the screen.
These are solved by working with the logging module's objects directly, as shown next (a workaround sketch also follows this list).
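As a side note (not part of the original walkthrough), both limitations can also be worked around by handing basicConfig a list of pre-built handlers; a minimal sketch, assuming Python 3.3+ for the handlers argument:

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s: %(message)s',
    handlers=[
        logging.FileHandler('access.log', encoding='utf-8'),  # file output with an explicit encoding
        logging.StreamHandler(),                              # and the screen as well
    ],
)
logging.info('written to access.log and printed to the screen')

The article's own solution, shown next, goes further by building loggers, handlers and formatters explicitly.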
import logging
The logging module contains four roles: logger, filter, formatter, handler
1. Logger: responsible for generating log information
logger1 = logging.getLogger('transaction log')
logger2 = logging.getLogger('user related')
2. Filter: responsible for filtering logs
3. Formatter: controls the log output format
formatter1 = logging.Formatter(
    fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    datefmt='%Y-%m-%d %X',
)
formatter2 = logging.Formatter(
    fmt='%(asctime)s - %(message)s',
    datefmt='%Y-%m-%d %X',
)
4. Handler: responsible for the output target of the log (a file, the screen, ...)
h2 = logging.FileHandler(filename='a1.log', encoding='utf-8')
h3 = logging.FileHandler(filename='a2.log', encoding='utf-8')
sm = logging.StreamHandler()
5. Bind logger object and handler object
logger1.addHandler(h2)
logger1.addHandler(h3)
logger1.addHandler(sm)
6. Bind handler object and formatter object
h2.setFormatter(formatter1)
h3.setFormatter(formatter1)
sm.setFormatter(formatter2)
7. Set the log level. The level can be set in two places, on the logger and on the handler; a record must pass the logger's level check first and then each handler's:
logger1.setLevel(10)
h2.setLevel(10)
h3.setLevel(10)
sm.setLevel(10)

logger1.info('Egon lent Li Jie 100W')

The final usage in practice is a settings file plus a small helper:

LOG_PATH = os.path.join(BASE_DIR, 'log', 'user.log')

standard_format = '[%(asctime)s][%(threadName)s:%(thread)d][task_id:%(name)s][%(filename)s:%(lineno)d]' \
                  '[%(levelname)s][%(message)s]'   # name is the name passed to getLogger
simple_format = '[%(levelname)s][%(asctime)s][%(filename)s:%(lineno)d] %(message)s'
id_simple_format = '[%(levelname)s][%(asctime)s] %(message)s'
# end of the log output format definitions

# create the log directory if it does not exist
if not os.path.isdir(DB_PATH):
    os.mkdir(DB_PATH)
# full path of the log file
# logfile_path = os.path.join(DB_PATH, 'log')

# logging configuration dictionary
LOGGING_DIC = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {'format': standard_format},
        'simple': {'format': simple_format},
    },
    'filters': {},
    'handlers': {
        # log printed to the terminal
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',            # print to the screen
            'formatter': 'simple',
        },
        # log written to a file, collecting INFO and above
        'default': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',  # save to a file
            'formatter': 'standard',
            'filename': LOG_PATH,                         # the log file
            'maxBytes': 1024 * 1024 * 5,                  # log size: 5 MB
            'backupCount': 5,
            'encoding': 'utf-8',                          # log file encoding; no more worries about garbled Chinese
        },
    },
    'loggers': {
        # the logger configuration obtained with logging.getLogger(__name__)
        '': {
            'handlers': ['default', 'console'],  # both handlers defined above: log data is written to the file and printed to the screen
            'level': 'DEBUG',
            'propagate': True,                   # pass records up to higher-level loggers
        },
    },
}

import logging.config

def get_logger(name):
    logging.config.dictConfig(settings.LOGGING_DIC)   # LOGGING_DIC lives in the settings module
    logger = logging.getLogger(name)                  # name usually identifies the calling file
    return logger

The json module

import json

# deserialization: load reads from an open file
with open('db1.json', 'rt', encoding='utf-8') as f:
    json.load(f)                       # load deserializes from a file

json.loads('{"name": "egon"}')         # loads deserializes from a string

# serialization: dump writes to an open file
with open('db.json', 'wt', encoding='utf-8') as f:
    l = [1, True, None]
    json.dump(l, f)                    # dump serializes to a file

x = json.dumps({'name': 'egon'})       # dumps serializes to a string

# deserialize what was written above
with open('db.json', 'rt', encoding='utf-8') as f:
    l = json.load(f)
    print(l)

The pickle module

import pickle

# deserialization
# 1. read the pickled bytes from a file
with open('db.pkl', 'rb') as f:
    pkl = f.read()
# 2. convert the bytes back into an in-memory data type
dic = pickle.loads(pkl)
print(dic['a'])

# steps 1 and 2 can be done in one step
with open('db.pkl', 'rb') as f:
    dic = pickle.load(f)
    print(dic['a'])

# serialization
dic = {'a': 1}
# 1. serialize
pkl = pickle.dumps(dic)
print(pkl, type(pkl))
# 2. write to a file
with open('db.pkl', 'wb') as f:
    f.write(pkl)

# steps 1 and 2 can be done in one step
with open('db.pkl', 'wb') as f:
    pickle.dump(dic, f)

Comparing json and pickle
What is serialization / deserialization
Serialization converts an in-memory data structure into an intermediate format that can be stored on disk or transmitted over a network.
Deserialization is the reverse: converting that intermediate format, read from disk or received from the network, back into an in-memory data structure.
Why serialize at all?
1. You can save the running state of the program
2. Cross-platform interaction of data
Json
Advantages:
Strong cross-platform support (JSON is a language-independent format)
Disadvantages:
Only supports/corresponds to the subset of Python's data types that have JSON equivalents (a contrast sketch follows this comparison)
Pickle
Advantages:
Supports/corresponds to all of Python's data types
Disadvantages:
Can only be recognized by python, not cross-platform
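A minimal sketch contrasting the two points above (the sample data is arbitrary): json rejects a type it has no equivalent for, while pickle accepts any Python object but produces bytes that only Python can read back.

import json
import pickle

data = {'name': 'egon', 'tags': {'python', 'linux'}}   # a set has no JSON equivalent

try:
    json.dumps(data)
except TypeError as e:
    print('json refused:', e)        # Object of type set is not JSON serializable

pkl = pickle.dumps(data)             # pickle handles the set without complaint...
print(type(pkl))                     # <class 'bytes'> ...but the result is opaque to other languages
print(pickle.loads(pkl) == data)     # True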
The time module. Time comes in three formats:

1. Timestamp

import time

start = time.time()
time.sleep(3)
stop = time.time()
print(stop - start)

Output: 3.000129222869873

2. Formatted string

print(time.strftime('%Y-%m-%d %X'))
print(time.strftime('%Y-%m-%d %H:%M:%S %p'))

Output:
2018-07-30 18:04:27
2018-07-30 18:04:27 PM

3. Structured time (a struct_time object)

t1 = time.localtime()
print(t1)
print(type(t1.tm_min))
print(t1.tm_mday)
t2 = time.gmtime()
print(t1)
print(t2)
Working with the time module alone is awkward:
1. Getting the formatted-string form of an arbitrary time is troublesome.
2. Converting between timestamps and formatted times is troublesome (see the sketch below).
3. Getting an earlier or later time is troublesome.
For these reasons the datetime module is used instead.
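For reference, here is roughly what those conversions look like with the time module alone (a sketch; the example date string is arbitrary):

import time

# formatted string -> struct_time -> timestamp
st = time.strptime('2018-07-30 18:04:27', '%Y-%m-%d %H:%M:%S')
ts = time.mktime(st)
print(ts)

# timestamp -> struct_time -> formatted string
print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(ts)))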
The datetime module

import datetime

print(datetime.datetime.now())                                # the current time
print(datetime.datetime.fromtimestamp(1231233213))            # timestamp -> datetime
print(datetime.datetime.now() + datetime.timedelta(days=3))   # the current time plus 3 days
print(datetime.datetime.now() + datetime.timedelta(days=-3))  # the current time minus 3 days
s = datetime.datetime.now()
print(s.replace(year=2020))                                   # modify the year

The random module

import random

print(random.random())        # float greater than 0 and less than 1
print(random.randint(1, 3))   # integer in [1, 3]: greater than or equal to 1 and less than or equal to 3
print(random.randrange(1, 3)) # integer in [1, 3): greater than or equal to 1 and less than 3
print(random.choice([1, '23', [4, 5]]))  # one element picked from the list
print(random.uniform(1, 3))   # float between 1 and 3, for example 1.927109612082716

item = [1, 3, 5, 7, 9]
random.shuffle(item)          # shuffles the order of item in place
print(item)

# generate a random verification code of letters and digits
def make_code(size=7):
    res = ''
    for i in range(size):                    # each round produces one random character
        s = chr(random.randint(65, 90))      # a random uppercase letter
        num = str(random.randint(0, 9))      # a random digit
        res += random.choice([s, num])
    return res

res = make_code()
print(res)

The os module

import os

os.getcwd()                      returns the current working directory, i.e. the directory the Python script runs in
os.chdir("dirname")              changes the current working directory; equivalent to cd in a shell
os.curdir                        the current directory: ('.')
os.pardir                        the string name of the parent of the current directory: ('..')
os.makedirs('dirname1/dirname2') creates a multi-level directory tree recursively
os.removedirs('dirname1')        if the directory is empty, deletes it, then moves up to the parent directory and, if that is also empty, deletes it too, and so on
os.mkdir('dirname')              creates a single-level directory; equivalent to mkdir dirname in a shell
os.rmdir('dirname')              deletes a single-level empty directory; raises an error if the directory is not empty; equivalent to rmdir dirname in a shell
os.listdir('dirname')            lists all files and subdirectories of the given directory, including hidden files, as a list
os.remove()                      deletes a file
os.rename("oldname", "newname")  renames a file or directory
os.stat('path/filename')         returns information about a file or directory
os.sep                           the OS-specific path separator: "\" on Windows, "/" on Linux
os.linesep                       the line terminator of the current platform: "\r\n" on Windows, "\n" on Linux
os.pathsep                       the string used to separate entries in path lists: ";" on Windows, ":" on Linux
os.name                          a string naming the current platform: 'nt' on Windows, 'posix' on Linux
os.system("bash command")        runs a shell command and shows its output directly
os.environ                       the system environment variables
os.path.abspath(path)            returns the normalized absolute path of path
os.path.split(path)              splits path into a (directory, filename) tuple
os.path.dirname(path)            returns the directory part of path, i.e. the first element of os.path.split(path)
os.path.basename(path)           returns the last component of path; returns an empty string if path ends with / or \, i.e. the second element of os.path.split(path)
os.path.exists(path)             returns True if path exists, otherwise False
os.path.isabs(path)              returns True if path is an absolute path
os.path.isfile(path)             returns True if path is an existing file, otherwise False
os.path.isdir(path)              returns True if path is an existing directory, otherwise False
os.path.join(path1[, path2[, ...]])  joins the given paths; parameters before the first absolute path are ignored
os.path.getatime(path)           returns the last access time of the file or directory path points to
os.path.getmtime(path)           returns the last modification time of the file or directory path points to
os.path.getsize(path)            returns the size of path

Printing a progress bar

import time

def make_progress(percent, width=50):
    if percent > 1:
        percent = 1
    show_str = ('[%%-%ds]' % width) % (int(percent * width) * '#')
    print('\r%s %s%%' % (show_str, int(percent * 100)), end='')

total_size = 1025
recv_size = 0
while recv_size < total_size:
    time.sleep(0.1)               # simulate the network delay of downloading 1024 bytes
    recv_size += 1024
    # call the progress-bar function to print the bar
    percent = recv_size / total_size
    make_progress(percent)
The re module

import re

pattern = re.compile('alex')
print(pattern.findall('alex is SB,alex is bigSB'))           # match every 'alex'
print(pattern.search('alex is SB,alex is bigSB').group())    # search scans the string for the first match; it returns None if there is none, otherwise group() gives the matched text
print(re.match('alex', '123alex is SB,alex is bigSB'))       # match only matches at the start of the string; it returns a match object if it does, otherwise None

The result is as follows:
['alex', 'alex']
alex
None

The subprocess module

import subprocess

obj = subprocess.Popen(
    'tasklist',
    shell=True,
    stdout=subprocess.PIPE,   # pipe for normal output
    stderr=subprocess.PIPE,   # pipe for error output
)
print(obj)
stdout_res = obj.stdout.read()    # read the data from the stdout pipe
print(stdout_res.decode('gbk'))
print(stdout_res)
stderr_res1 = obj.stderr.read()   # read the data from the stderr pipe
stderr_res2 = obj.stderr.read()   # the data in a pipe can only be read once
stderr_res3 = obj.stderr.read()
print(stderr_res1.decode('gbk'))
print(stderr_res1)
print(stderr_res2)
print(stderr_res3)

The hashlib (hash) module

What is a hash? A hash is an algorithm that takes some input and computes a hash value from it. If the hash algorithm is a factory, then the content passed to it is the raw material and the hash value it produces is the product.

Why use a hash algorithm? The hash value/product has three important properties:
1. The same input always produces the same hash value.
2. For a fixed hash algorithm, the hash value has a fixed length no matter how large the input is.
3. The original content cannot be recovered from the hash value.
Based on 1 and 2, hashes can be used to verify the consistency of downloaded files; based on 1 and 3, they can be used to store passwords in hashed form.

How to use it:

import hashlib

# 1. build the hash "factory"
m = hashlib.sha512('你'.encode('utf-8'))
# 2. feed in the raw material
m.update('好啊美sadfsadf丽asdfsafdasdasdfsafsdafasdfasdfsadfsadfsadfsadfasdff的张铭言'.encode('utf-8'))
# 3. produce the hash value
print(m.hexdigest())   # 2ff39b418bfc084c8f9a237d11b9da6d5c6c0fb6bebcde2ba43a433dc823966c

The shutil module (copying and archiving)

import shutil

with open('old.xml', 'r') as read_f, open('new.xml', 'w') as write_f:
    shutil.copyfileobj(read_f, write_f)

shutil.make_archive("data_bak", 'gztar', root_dir=r'D:\SH_fullstack_s2\day04')

import tarfile   # decompression

t = tarfile.open('data_bak.tar.gz', 'r')
t.extractall(r'D:\SH_fullstack_s2\day20\dir')
t.close()

The xml module

import xml.etree.ElementTree as ET

tree = ET.parse("a.xml")   # open an xml file
root = tree.getroot()      # getroot returns the root of the tree

# every tag has three features: the tag name, the tag attributes and the tag's text content
print(root.tag)      # tag name
print(root.attrib)   # tag attributes
print(root.text)     # text content

print(list(root.iter('year')))   # search the whole document and find all matches
for year in root.iter('year'):
    print(year.tag)
    print(year.attrib)
    print(year.text)
    print('=' * 100)

print(root.find('country').attrib)                              # search among root's children, return only the first match
print([country.attrib for country in root.findall('country')])  # search among root's children, return all matches

root.iter('year')        # search the whole document
root.findall('country')  # search among root's children, all matches
root.find('country')     # search among root's children, first match only

# 1. query: traverse the whole document
for country in root:
    print('============> country %s' % country.attrib)
    for item in country:
        print(item.tag)
        print(item.attrib)
        print(item.text)

# 2. modify
for year in root.iter('year'):
    print(year.tag)
    year.attrib = {'updated': 'yes'}
    year.text = str(int(year.text) + 1)
tree.write('a.xml')

# 3. add
for country in root:
    rank = country.find('rank')
    if int(rank.text) > 50:
        # print('qualifying country', country.attrib)
        tag = ET.Element('egon')
        tag.attrib = {'updated': 'yes'}
        tag.text = 'NB'
        country.append(tag)
tree.write('a.xml')

# 4. delete
for country in root:
    tag = country.find('egon')
    # print(tag, bool(tag))
    if tag is not None:
        print('==>')
        country.remove(tag)
tree.write('a.xml')

The configparser module

import configparser   # configuration-file parsing module

config = configparser.ConfigParser()
config.read('config.ini')   # typical file names: a.cfg, a.ini, a.cnf

print(config.sections())        # list the section titles
print(config.options('egon'))   # list the keys of the 'egon' section
print(config.items('egon'))     # list everything in the section as (key, value) tuples

res = config.get('egon', 'age')
res = config.getint('egon', 'age')
print(res, type(res))
res = config.getfloat('egon', 'salary')
print(res, type(res))
res = config.getboolean('egon', 'is_beautiful')
print(res, type(res))
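For the reads above to succeed, config.ini needs an [egon] section containing the keys being queried. The original article does not show the file; reconstructed from the output below, it would look roughly like this (the contents of the [alex] section are not shown in the article):

[egon]
pwd = '123'
age = 18
sex = 'male'
salary = 3.1
is_beautiful = True

[alex]
# contents not shown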
The output is as follows:

['egon', 'alex']
['pwd', 'age', 'sex', 'salary', 'is_beautiful']
[('pwd', "'123'"), ('age', '18'), ('sex', "'male'"), ('salary', '3.1'), ('is_beautiful', 'True')]
18 <class 'int'>
3.1 <class 'float'>
True <class 'bool'>

Thank you for reading this article carefully. I hope this walkthrough of how to use Python modules is helpful to you.