How to configure the logger of Python

2025-04-05 Update From: SLTechnology News&Howtos


This article walks through how to configure Python's logger, illustrated by a small job-cleanup script, along with several general guidelines for writing maintainable Python code.

Think about the data structures you pass between functions: do they require prior knowledge from the caller? Returning a bare tuple, for example, forces the user to know the order of its elements, which suggests the result should be encapsulated instead. Once the data structures are clearly defined, much of the rest of the design falls into place.
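A minimal sketch of this point, using a hypothetical job record (the `Job` name and its fields are illustrative, not from the original script):

```python
from collections import namedtuple

# Returning a bare tuple forces the caller to memorize element order:
def get_job_v1():
    return ("app_1", "etl_job", "alice")   # which field is which?

# A namedtuple (or a small class) makes the contract explicit:
Job = namedtuple("Job", ["app_id", "app_name", "user"])

def get_job_v2():
    return Job(app_id="app_1", app_name="etl_job", user="alice")

job = get_job_v2()
print(job.user)   # fields are accessed by name, not by position -> alice
```

A `namedtuple` keeps tuple-like behavior for existing callers while documenting the field order in one place.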

Decide how to operate the database (SQLAlchemy is worth learning, including both its Core and ORM APIs).
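A small SQLAlchemy Core sketch, using an in-memory SQLite database so it is self-contained (the table and data are illustrative):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")

with engine.connect() as conn:
    # text() wraps raw SQL for the Core API; the ORM layer builds on top of this.
    conn.execute(text("CREATE TABLE jobs (id INTEGER, name TEXT)"))
    conn.execute(text("INSERT INTO jobs VALUES (1, 'etl'), (2, 'report')"))
    rows = conn.execute(text("SELECT name FROM jobs ORDER BY id")).fetchall()

print([r[0] for r in rows])   # -> ['etl', 'report']
```

In production code, prefer bound parameters (e.g. `text("... WHERE id = :id")`) over string formatting to avoid SQL injection.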

Handle exceptions deliberately: catch each expected exception separately, so you know exactly what circumstances caused it; log what happened when it occurs; and if the situation is unrecoverable, re-raise the exception or raise an alarm.
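A short sketch of this pattern (the `load_job_list` helper is hypothetical, not part of the original script):

```python
import logging

logger = logging.getLogger("jobs")

def load_job_list(path):
    """Read one job name per line; log and re-raise if the file is unusable."""
    try:
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]
    except IOError as e:
        # Catch the specific exception you expect, so the log explains exactly
        # what happened; then re-raise because the caller cannot proceed.
        logger.error("Cannot read job list [%s]: %s", path, e)
        raise
```

A bare `except Exception` that swallows the error silently would hide the root cause; catching narrowly and re-raising keeps the failure visible.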

Every place that acquires a resource should check the result: (a) what happens if the resource is not obtained? (b) what happens if acquiring it raises an exception?

Every place that operates on a resource should check whether the operation succeeded.
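A sketch of these two checks together, loosely mirroring the YARN-apps fetch in the script below (the `fetch_running_apps` helper and its input shape are illustrative assumptions):

```python
import logging

logger = logging.getLogger("resources")

def fetch_running_apps(api_result):
    # (a) Check that the resource was actually obtained before using it.
    if api_result is None:
        logger.error("Got no response from the resource manager")
        return []
    apps = api_result.get("apps")
    if not apps:
        # (b) An empty result may be legal, but it deserves a log line
        #     so the operator can tell "no jobs" apart from "fetch failed".
        logger.warning("Resource manager returned no apps")
        return []
    return [(a["id"], a["name"]) for a in apps]
```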

Keep each function short, and split any function that grows too long (a common guideline is 20 to 30 lines per function, which is a pleasant size to work with in practice).
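A toy illustration of the splitting idea: one long fetch-filter-report function broken into single-purpose steps (all names here are invented for the example):

```python
def parse_jobs(lines):
    # One step: normalize raw input lines into job names.
    return [line.strip() for line in lines if line.strip()]

def filter_test_jobs(jobs):
    # One step: drop test jobs, matching the "test" prefix rule used below.
    return [j for j in jobs if not j.startswith("test")]

def summarize(jobs):
    # One step: format the result.
    return "%d job(s): %s" % (len(jobs), ", ".join(jobs))

def report(lines):
    # The top-level function reads like a table of contents.
    return summarize(filter_test_jobs(parse_jobs(lines)))

print(report(["etl\n", "test_a\n", "report\n"]))   # -> 2 job(s): etl, report
```

Each step is now independently testable, and none comes close to the 20-30 line budget.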

When writing a class, consider implementing the __str__ method, since that is what print uses for the object (if __str__ is not implemented, Python falls back to __repr__). If the object will be placed in a collection, implement __repr__ as well, because printing the collection calls __repr__ on each element.
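A minimal demonstration of the two methods (the `Job` class is illustrative):

```python
class Job(object):
    def __init__(self, app_id, app_name):
        self.app_id = app_id
        self.app_name = app_name

    def __repr__(self):
        # Called when the object sits inside a collection that gets printed.
        return "Job(app_id=%r, app_name=%r)" % (self.app_id, self.app_name)

    def __str__(self):
        # Called by print() on the object itself; falls back to __repr__ if absent.
        return "%s (%s)" % (self.app_name, self.app_id)

jobs = [Job("app_1", "etl"), Job("app_2", "report")]
print(jobs)      # list printing calls __repr__ on each element
print(jobs[0])   # printing a single object calls __str__ -> etl (app_1)
```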

If a resource is likely to change, extract it into its own function, so that subsequent callers do not have to be modified when it does.

Attached is a copy of the Python 2.7 code (some proprietary details have been masked):

```python
# -*- coding: utf-8 -*-
from sqlalchemy import create_engine
import logging
from logging.config import fileConfig
import requests
import Client  # private module

fileConfig("logging_config.ini")
logger = logging.getLogger("killduplicatedjob")

# Configuration can be placed in a separate module
DB_USER = "xxxxxxx"
DB_PASSWORD = "xxxxxxxx"
DB_PORT = 111111
DB_HOST_PORT = "xxxxxxxxxx"
DB_DATA_BASE = "xxxxxxxxxxx"
REST_API_URL = "http://sample.com"

engine = create_engine("mysql://%s:%s@%s:%s/%s" % (DB_USER, DB_PASSWORD, DB_HOST_PORT, DB_PORT, DB_DATA_BASE))

# This class is used when passing data between functions, so the caller does
# not need to know the order of the attributes. It could also be placed in a
# separate module.
class DuplicatedJobs(object):
    def __init__(self, app_id, app_name, user):
        self.app_id = app_id
        self.app_name = app_name
        self.user = user

    def __repr__(self):
        return '[appid:%s, app_name:%s, user:%s]' % (self.app_id, self.app_name, self.user)

def find_duplicated_jobs():
    logger.info("starting find duplicated jobs")
    (running_apps, app_name_to_user) = get_all_running_jobs()
    all_apps_on_yarn = get_apps_from_yarn_with_queue(get_resource_queue())
    duplicated_jobs = []
    for app in all_apps_on_yarn:
        (app_id, app_name) = app
        if app_id not in running_apps:
            if not app_name.startswith("test"):
                logger.info("find a duplicated job, prefixed_name [%s] with appid [%s]" % (app_name, app_id))
                user = app_name_to_user[app_name]
                duplicated_jobs.append(DuplicatedJobs(app_id, app_name, user))
            else:
                logger.info("Job [%s] is a test job, would not kill it" % app_name)
    logger.info("Find duplicated jobs [%s]" % duplicated_jobs)
    return duplicated_jobs

def get_apps_from_yarn_with_queue(queue):
    param = {"queue": queue}
    r = requests.get(REST_API_URL, params=param)
    apps_on_yarn = []
    try:
        jobs = r.json().get("apps")
        app_list = jobs.get("app")
        for app in app_list:
            app_id = app.get("id")
            name = app.get("name")
            apps_on_yarn.append((app_id, name))
    except Exception as e:  # ideally, catch each kind of exception separately and handle it differently
        logger.error("Get apps from Yarn Error, message [%s]" % e.message)
    logger.info("Fetch all apps from Yarn [%s]" % apps_on_yarn)
    return apps_on_yarn

def get_all_running_jobs():
    job_infos = get_result_from_mysql("select * from xxxx where xx=yy")
    app_ids = []
    app_name_to_user = {}
    for (topology_id, topology_name) in job_infos:
        status_set = get_result_from_mysql("select * from xxxx where xx=yy")
        application_id = status_set[0][0]
        if "" != application_id:
            configed_resource_queue = get_result_from_mysql("select * from xxxx where xx=yy")
            app_ids.append(application_id)
            app_name_to_user[topology_name] = configed_resource_queue[0][0].split(".")[1]
    logger.info("All running jobs appids [%s] topology_name2user [%s]" % (app_ids, app_name_to_user))
    return app_ids, app_name_to_user

def kill_duplicated_jobs(duplicated_jobs):
    for job in duplicated_jobs:
        app_id = job.app_id
        app_name = job.app_name
        user = job.user
        logger.info("try to kill job [%s] with appid [%s] for user [%s]" % (app_name, app_id, user))
        try:
            Client.kill_job(app_id, user)
            logger.info("Job [%s] with appid [%s] for user [%s] has been killed" % (app_name, app_id, user))
        except Exception as e:
            logger.error("Can't kill job [%s] with appid [%s] for user [%s]" % (app_name, app_id, user))

def get_result_from_mysql(sql):
    a = engine.execute(sql)
    return a.fetchall()

# The following resource may change and may contain site-specific logic,
# so it is extracted into its own function.
def get_resource_queue():
    return "xxxxxxxxxxxxx"

if __name__ == "__main__":
    kill_duplicated_jobs(find_duplicated_jobs())
```

The logger configuration file is shown below (the official documentation for Python's logging module is very well written; it is worth reading through once and practicing once):

```ini
[loggers]
keys=root, simpleLogger

[handlers]
keys=consoleHandler, logger_handler

[formatters]
keys=formatter

[logger_root]
level=WARN
handlers=consoleHandler

[logger_simpleLogger]
level=INFO
handlers=logger_handler
propagate=0
qualname=killduplicatedjob

[handler_consoleHandler]
class=StreamHandler
level=WARN
formatter=formatter
args=(sys.stdout,)

[handler_logger_handler]
class=logging.handlers.RotatingFileHandler
level=INFO
formatter=formatter
args=("kill_duplicated_streaming.log", "a", 52428800, 3)

[formatter_formatter]
format=%(asctime)s %(name)-12s %(levelname)-5s %(message)s
```

This concludes the walkthrough of configuring Python's logger. Combining the theory above with some hands-on practice is the best way to make it stick.
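A self-contained sketch of loading a config in this INI format with `fileConfig`. To keep the demo free of side effects, it writes the config to a temporary file and routes the named logger to the console instead of a rotating file (that substitution is an assumption of this example, not the article's setup):

```python
import logging
import os
import tempfile
from logging.config import fileConfig

# Minimal config in the same INI format as above; the file handler is replaced
# by the console handler so the demo leaves no log file behind.
CONFIG = """
[loggers]
keys=root, simpleLogger

[handlers]
keys=consoleHandler

[formatters]
keys=formatter

[logger_root]
level=WARN
handlers=consoleHandler

[logger_simpleLogger]
level=INFO
handlers=consoleHandler
propagate=0
qualname=killduplicatedjob

[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=formatter
args=(sys.stdout,)

[formatter_formatter]
format=%(asctime)s %(name)-12s %(levelname)-5s %(message)s
"""

with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(CONFIG)
    path = f.name

fileConfig(path)                                  # same call as in the script above
logger = logging.getLogger("killduplicatedjob")   # must match qualname in the config
logger.info("logger configured from %s", os.path.basename(path))
os.unlink(path)
```

Note that `qualname` is the name you pass to `logging.getLogger`, while the `[logger_simpleLogger]` section name is only used inside the config file.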
