I. Introduction
Themis is a database audit product developed by the DBA team at Yixin. It helps DBAs and developers quickly find database quality problems and improves work efficiency. The name comes from the goddess of justice and law in Greek mythology, implying that the platform judges database quality fairly and with a discerning eye.
The platform performs multi-dimensional audits of Oracle and MySQL databases (object structure, SQL text, execution plan and execution characteristics) and can be used to evaluate the design quality of object structures and the running efficiency of SQL. It helps DBAs and developers locate problems quickly and provides some auxiliary diagnostic capabilities to improve optimization efficiency. All operations are carried out through a WEB interface, which is simple and convenient. In addition, to better meet individual needs, the platform is extensible, and users can add their own rules as required.
1.1 Functional overview
The platform is positioned as an after-the-fact audit; a standalone optimization module is planned for Phase II. It can also be introduced at the project design stage to play a partial pre-audit role. All work is done through the WEB interface, and the main users are DBAs and developers with some database background.
Audits can be run against a database user, covering the data structure, SQL text, SQL execution characteristics, SQL execution plan and other dimensions.
Audit results are provided as a WEB page or an exported file.
The platform supports the mainstream Oracle and MySQL databases; other databases are planned for Phase II.
It aims to provide flexible customization capabilities to facilitate future expansion of functions.
1.2 Supported databases
MySQL (5.6 and above)
Oracle (10g and above)
1.3 Audit dimensions
Database structure (objects) => database objects such as tables, partitions, indexes, views and triggers.
SQL text (statements) => the text of the SQL statement itself.
SQL execution plan => the execution plan of the SQL inside the database.
SQL execution characteristics => the actual execution behavior of the statement on the database.
1.4 Implementation principle
The basic implementation principle of the whole platform is simple: the audit objects (four types are currently supported) are filtered through the rule set. Objects that match a rule are suspected of having problems, and the platform presents them together with related information for manual screening. The platform's power therefore depends mainly on the richness of its rule set. The platform also provides extensibility to make it easy to grow the rule set.
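To make this concrete, here is a minimal sketch of the filtering idea, with invented rule and object structures; it is illustrative only, not Themis's actual data model:

# Minimal sketch of rule-set filtering; structures are hypothetical.
def audit(objects, rules):
    """Return objects flagged by at least one rule, for manual screening."""
    findings = []
    for obj in objects:
        for rule in rules:
            if rule["predicate"](obj):  # rule matched: suspected problem
                findings.append({"object": obj["name"], "rule": rule["name"]})
    return findings

rules = [{"name": "table_without_primary_key",
          "predicate": lambda t: not t.get("has_pk", False)}]
tables = [{"name": "t1", "has_pk": False}, {"name": "t2", "has_pk": True}]
print(audit(tables, rules))  # flags t1 only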
1.5 Platform architecture
The boxes in the figure are the platform's main modules. Different background colors indicate different progress states. Dotted lines represent data flow and solid lines represent control flow. The core modules are:
Data acquisition module. Responsible for fetching the basic data needed for auditing from the data sources. Fetching from Oracle and MySQL is currently supported.
OBJ/SQL repository. The common storage part of the system, holding the collected data as well as the intermediate and result data produced during processing. Its core data is divided into object class and SQL class; physically, MongoDB is used.
Core management module. The dashed box on the right side of the figure contains two modules, SQL management and OBJ management, which handle the whole life-cycle management of objects. At present only a simple object filtering function has been implemented, so it still has a white background; the core functionality is not yet complete.
Audit rule and audit engine modules. These are the core components of Phase I of the platform. The audit rule module handles the definition and configuration of rules; the audit engine module executes the specific rules.
Optimization rule and optimization engine modules. These are the core components of Phase II of the platform. They have not been developed yet, hence the white background.
System management module. This part provides the platform's basic functions, such as task scheduling, space management, and audit report generation and export.
1.6 Operation flow
II. Environment Setup
This project uses MySQL, MongoDB and Redis; Python 2.6 and 2.7 are supported, but Python 3 is not.
MySQL stores the MySQL slow queries parsed by pt-query-digest; MongoDB stores the rules, the Oracle collection results, job executions and parsed result sets; Redis serves as the queue for the celery task scheduler.
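As a rough illustration of the Redis role, a celery app pointed at Redis as both broker and result backend looks like this (the app and task names here are invented; the platform's own workers are started with apps such as task_capture and task_other, as shown later):

# Illustrative celery app using Redis as broker and result backend.
from celery import Celery

app = Celery("demo",
             broker="redis://:password@127.0.0.1:6379/0",
             backend="redis://:password@127.0.0.1:6379/0")

@app.task
def ping():
    return "pong"

# Start a worker with: celery -A demo worker -l info
# then call ping.delay() from another process.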
For the MySQL data acquisition part, we use the pt-query-digest tool.
2.1 Dependency installation
Create a new user
To reduce future modifications to the supervisord.conf configuration file, we recommend installing everything under a single dedicated user.
adduser themis-test
su - themis-test
Subsequent operations are performed as the themis-test user by default; only the virtualenv installation requires switching to the root user.
Install cx_Oracle dependencies
Since the audit process needs to connect to Oracle databases, the cx_Oracle dependency must be installed first. Refer to: http://www.jianshu.com/p/pKz5K7
Install python dependencies
Install virtualenv first; refer to: https://pypi.python.org/simple/virtualenv/. Version 13.0.3 or later is recommended.
If it is not convenient to connect to the Internet, or you are on a company intranet, you can download the archive from https://pan.baidu.com/s/1o7AIWlG (extraction code: 3sy3).
The archive includes all the dependency packages that are needed.
Install virtualenv:
tar -zxvf virtualenv-13.0.3.tar.gz
cd virtualenv-13.0.3
python setup.py install
For more information about the use of virtualenv, please refer to https://virtualenv.pypa.io/en/stable/
Install additional dependencies
First initialize the virtual environment
virtualenv python-project --python=python2.7
source /home/themis-test/python-project/bin/activate
To explain the command above: the second argument, python-project, is the name of the virtual environment being created. The name can be chosen freely, but it is referenced in the supervisor configuration, so keeping the default is recommended; if you are familiar with Python, define it as you like. The --python option specifies the Python version to use; it can be omitted, in which case the system's default Python is used to build the virtual environment. When multiple Python versions are installed, use this option to pick one.
Next, activate the virtual environment with source. Package dependencies installed from now on will be placed under /home/themis-test/python-project/lib/python2.7/site-packages.
If you can connect to the Internet, go to the source code directory and use the following command
pip install -r requirement.txt
Install pyh separately; download address: https://github.com/hanxiaomax/pyh
unzip pyh-master.zip
cd pyh-master
python setup.py install
If Internet access is inconvenient in your LAN environment, use the archive provided on the network disk above.
pip install --no-index -f file:///home/themis-test/software -r requirement.txt
file:///home/themis-test/software is the location where the archive was extracted.
2.2 Configuration file overview
Taking the configuration file settings.py as an example, here are the settings that need to be provided:
# set oracle ipaddress, port, sid, account, password
# ipaddress:port -> key
ORACLE_ACCOUNT = {
    # oracle
    "127.0.0.1:1521": ["cedb", "system", "password"]
}
# set mysql ipaddress, port, account, password
MYSQL_ACCOUNT = {
    "127.0.0.1:3307": ["mysql", "user", "password"]
}
# mysql account and password for the pt-query-digest result store
PT_QUERY_USER = "user"
PT_QUERY_PORT = 3306
PT_QUERY_SERVER = "127.0.0.1"
PT_QUERY_PASSWD = "password"
PT_QUERY_DB = "slow_query_log"
# celery setting
REDIS_BROKER = 'redis://:password@127.0.0.1:6379/0'
# REDIS_BROKER = 'redis://:@127.0.0.1:6379/0'
REDIS_BACKEND = 'redis://:password@127.0.0.1:6379/0'
# REDIS_BACKEND = 'redis://:@127.0.0.1:6379/0'
CELERY_CONF = {"CELERYD_POOL_RESTARTS": True}
# mongo server settings
MONGO_SERVER = "127.0.0.1"
MONGO_PORT = 27017
MONGO_USER = "sqlreview"
# MONGO_PASSWORD = ""
MONGO_PASSWORD = "sqlreview"
MONGO_DB = "sqlreview"
# server port setting
SERVER_PORT = 7000
# capture time setting
CAPTURE_OBJ_HOUR = "18"
CAPTURE_OBJ_MINUTE = 15
CAPTURE_OTHER_HOUR = "18"
CAPTURE_OTHER_MINUTE = 30
ORACLE_ACCOUNT and MYSQL_ACCOUNT are the accounts and passwords for the target machines to be audited. They are mainly used by the data collection part, the object-class audit and the MySQL execution-plan-class audit, so the account needs relatively high privileges. For security, set up a dedicated account with restricted permissions in production environments, or add IP restrictions.
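For illustration, the "ip:port"-keyed account map can be consumed along these lines (a sketch, not the platform's actual code):

# Hypothetical helper: resolve credentials from the "ip:port" key.
ORACLE_ACCOUNT = {"127.0.0.1:1521": ["cedb", "system", "password"]}

def oracle_credentials(host, port):
    sid, user, password = ORACLE_ACCOUNT["%s:%s" % (host, port)]
    return sid, user, password

print(oracle_credentials("127.0.0.1", 1521))  # ('cedb', 'system', 'password')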
PT_QUERY_USER, PT_QUERY_PORT, PT_QUERY_SERVER, PT_QUERY_PASSWD and PT_QUERY_DB configure the MySQL database in which pt-query-digest stores the parsed slow SQL from the target machines.
REDIS_BROKER, REDIS_BACKEND, and CELERY_CONF are configuration options for the task scheduling tool celery.
MONGO_SERVER, MONGO_PORT, MONGO_USER, MONGO_PASSWORD and MONGO_DB are the configuration options for the MongoDB instance that stores the result sets.
SERVER_PORT is the port on which the web management side listens. Do not use ports 9000 and 5555, which are assigned to the file download server and the flower management tool.
CAPTURE_OBJ_HOUR, CAPTURE_OBJ_MINUTE, CAPTURE_OTHER_HOUR and CAPTURE_OTHER_MINUTE are the collection times for the Oracle data acquisition module. Set them according to your actual situation to avoid business peaks.
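For reference, hour/minute settings like these map naturally onto a daily celery beat entry; the sketch below shows the general shape (the task name is an assumption, not necessarily what Themis registers):

# Sketch: turning the capture-time settings into a daily beat schedule.
from celery.schedules import crontab

CAPTURE_OBJ_HOUR = "18"
CAPTURE_OBJ_MINUTE = 15

CELERYBEAT_SCHEDULE = {
    "capture-obj-daily": {
        "task": "task_capture.capture_obj",  # hypothetical task name
        "schedule": crontab(hour=int(CAPTURE_OBJ_HOUR),
                            minute=int(CAPTURE_OBJ_MINUTE)),
    },
}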
Please configure the file according to the relevant instructions
2.3 Rule import
Enter the source code directory and initialize the rules with the following command:
mongoimport -h 127.0.0.1 --port 27017 -u sqlreview -p password -d sqlreview -c rule --file script/rule.json

III. Data Acquisition
Data acquisition is divided into an Oracle part and a MySQL part. The Oracle part uses scripts we developed ourselves; the MySQL part uses the pt-query-digest tool.
The default frequency of data collection is once a day, which can be modified according to your own needs.
The Oracle part relies on celery task scheduling, which is hosted by supervisor; the pt-query-digest script can be added to crontab.
3.1 Oracle part: manual data acquisition
Collect Oracle OBJ information manually
Configure the data/capture_obj.json file
{"module": "capture", "type": "OBJ", "db_type": "O", "db_server": "127.0.0.1", "db_port": 1521, "capture_date": "2017-02-28"}
Only the db_server and db_port options need to be configured; for Oracle the port is required to be 1521. capture_date specifies the date of data collection; currently only collection by day is supported.
Execute the command:
python command.py -m capture_obj -c data/capture_obj.json
Collect other Oracle information manually, including plan, stat and text information.
Configure the data/capture_other.json file.
{"module": "capture", "type": "OTHER", "db_type": "O", "db_server": "127.0.0.1", "db_port": 1521, "capture_date": "2017-02-28"}
The configuration is the same as for obj above.
Execute the command:
python command.py -m capture_other -c data/capture_other.json
Manual data acquisition is generally only used for the first run; after that, acquisition is normally done automatically.
Automatic data acquisition
Configure ORACLE_ACCOUNT in the settings.py file; the account needs permission to query all tables, i.e. the select any table privilege.
ORACLE_ACCOUNT = {
    # oracle
    "127.0.0.1:1521": ["cedb", "system", "password"]
}
Configure scheduling time
# capture time setting
CAPTURE_OBJ_HOUR = "18"
CAPTURE_OBJ_MINUTE = 15
CAPTURE_OTHER_HOUR = "18"
CAPTURE_OTHER_MINUTE = 30
If you do not audit Oracle databases, this does not need to be configured.
3.2 MySQL part
pt-query-digest can either centralize the slow logs in one place for parsing and centralized storage, or be installed on each MySQL machine, with the parsed results pushed to the storage machine.
The second scheme is adopted in this platform.
Download pt-query-digest from https://www.percona.com/get/pt-query-digest and install it; if dependencies are missing, install it with yum.
Use script/pt_query_digest.sql to initialize the table structure rather than the tool's default table structure.
Configure the script/pt-query-digest.sh script on the target machine:
pt-query-digest --user=root --password=password --review h=127.0.0.1 --history h=127.0.0.1 --no-report --limit=0% --filter="\$event->{Bytes} = length(\$event->{arg}) and \$event->{hostname}='127.0.0.1:3306' and \$event->{client}=\$event->{ip}" slow.log
\$event->{hostname}='127.0.0.1:3306' is the IP address and port number of the machine on which the slow log is collected.
The main things to configure are the account, password, IP and port of the MySQL machine that stores the parsed results, and the location of the slow log.
Run the pt-query-digest.sh script to start collecting MySQL slow query data. The script can then be added to a scheduled task to collect at fixed intervals.
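To confirm that parsed slow-query data is actually arriving, a quick connectivity check against the result database can help. This sketch uses the PT_QUERY_* values from settings.py and just lists the tables, since the exact table names come from script/pt_query_digest.sql:

# Sanity-check sketch: is the pt-query-digest result store reachable,
# and which tables did script/pt_query_digest.sql create?
import MySQLdb

conn = MySQLdb.connect(host="127.0.0.1", port=3306, user="user",
                       passwd="password", db="slow_query_log")
cur = conn.cursor()
cur.execute("SHOW TABLES")
for (table,) in cur.fetchall():
    print(table)
conn.close()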
IV. Rule Parsing
Rule parsing is divided into four parts: object-class rule parsing, text-class rule parsing, execution-plan rule parsing, and statistics (execution-characteristic) rule parsing. Each part can be run manually or automatically.
4.1 Object-class rule parsing
Manually parse Oracle object-class information
Configure the data/analysis_o_obj.json file
{"module": "analysis", "type": "OBJ", "db_server": "127.0.0.1", "db_port": 1521, "username": "schema", "db_type": "O", "rule_type": "OBJ", "rule_status": "ON", "create_user": "system", "task_ip": "127.0.0.1", "task_port": 1521}
Configure the db_server, db_port, username, create_user and task_ip options, and leave the rest at their defaults. username is the user (schema) to be audited.
python command.py -m analysis_o_obj -c data/analysis_o_obj.json
Run the above command to start the obj analysis.
Manually parse MySQL object-class data
Configure the data/analysis_m_obj.json file
{"module": "mysql", "type": "OBJ", "db_server": "127.0.0.1", "db_port": 3306, "username": "schema", "db_type": "mysql", "rule_type": "OBJ", "rule_status": "ON", "create_user": "mysqluser", "task_ip": "127.0.0.1", "task_port": 3306}
Configure the db_server, db_port, username, create_user and task_ip options, and leave the rest at their defaults.
Run the command:
python command.py -m analysis_m_obj -c data/analysis_m_obj.json
Oracle and MySQL object-class rules do not rely on previously collected data; they connect directly to the database to query. Since this may take a long time on some databases, it is recommended to run them during the business trough.
4.2 Text-class rule parsing
Manually parse Oracle text-class rules
Configure the data/analysis_o_text.json file
{"module": "analysis", "type": "TEXT", "username": "schema", "create_user": "SYSTEM", "db_type": "O", "sid": "cedb", "rule_type": "TEXT", "rule_status": "ON", "hostname": "127.0.0.1", "task_ip": "127.0.0.1", "task_port": 1521, "startdate": "2017-02-23", "stopdate": "2017-02-23"}
Configure the sid, username, create_user, task_ip, hostname, startdate and stopdate options. Since data is collected by day, startdate and stopdate only support dates for now; hostname and task_ip can be the same; the rest can be left at their defaults.
Rule resolution can be done by executing the following command:
python command.py -m analysis_o_text -c data/analysis_o_text.json

Manually parse MySQL text-class rules
Configure the data/analysis_m_text.json file
{"module": "analysis", "type": "TEXT", "hostname_max": "127.0.0.1:3306", "username": "schema", "create_user": "mysqluser", "db_type": "mysql", "rule_type": "TEXT", "rule_status": "ON", "task_ip": "127.0.0.1", "task_port": 3306, "startdate": "2017-02-21 00:00:00", "stopdate": "2017-02-22 23:59:00"}
Configure the username, create_user, task_ip, task_port, hostname_max, startdate and stopdate options; hostname_max and task_ip can be consistent; the rest can be left at their defaults.
You can resolve the rules by running the following command:
python command.py -m analysis_m_text -c data/analysis_m_text.json
The username in the two steps above is the user (schema) to be audited.
4.3 Execution-plan-class rule parsing
Oracle plan rule parsing
Configure the data/analysis_o_plan.json file
{"module": "analysis", "type": "SQLPLAN", "capture_date": "2017-02-23", "username": "schema", "create_user": "SYSTEM", "sid": "cedb", "db_type": "O", "rule_type": "SQLPLAN", "rule_status": "ON", "task_ip": "127.0.0.1", "task_port": 1521}
The main parameters to configure are capture_date, username, create_user, sid, db_type, rule_type, task_ip and task_port. type takes one of four values: SQLPLAN, SQLSTAT, TEXT or OBJ. rule_type takes the same values as type, but one denotes the module type and the other the rule type. db_type takes one of two values, "O" and "mysql", representing Oracle and MySQL respectively. capture_date is the capture date of the data we want to audit.
python command.py -m analysis -c data/analysis_o_plan.json
Run the above command to generate the parsing result.
MySQL plan rule parsing
Configure the data/analysis_m_plan.json file
{"module": "analysis", "type": "SQLPLAN", "hostname_max": "127.0.0.1:3306", "db_server": "127.0.0.1", "db_port": 3306, "username": "schema", "db_type": "mysql", "rule_status": "ON", "create_user": "mysqluser", "task_ip": "127.0.0.1", "rule_type": "SQLPLAN", "task_port": 3306, "startdate": "2017-02-21 00:00:00", "stopdate": "2017-02-22 23:59:00"}
The meaning of type is the same as for Oracle above. hostname_max is the MySQL ip:port; each hostname_max represents one MySQL instance. Unlike Oracle, startdate and stopdate must include hours, minutes and seconds.
python command.py -m analysis -c data/analysis_m_plan.json
Run the above command to parse the MySQL plan rules.
4.4 Execution-characteristic-class rule parsing
Oracle stat rule parsing
Configure the data/analysis_o_stat.json file
{"module": "analysis", "type": "SQLSTAT", "capture_date": "2017-02-23", "username": "schema", "create_user": "SYSTEM", "sid": "cedb", "db_type": "O", "rule_type": "SQLSTAT", "rule_status": "ON", "task_ip": "127.0.0.1", "task_port": 1521}
Configure sid, username, create_user, task_ip, capture_date options, and leave the rest by default.
Run the command:
python command.py -m analysis_o_stat -c data/analysis_o_stat.json
This performs the analysis.
MySQL stat rule parsing
Configure the data/analysis_m_stat.json file
{"module": "analysis", "type": "SQLSTAT", "hostname_max": "127.0.0.1:3306", "db_server": "127.0.0.1", "db_port": 3306, "username": "schema", "db_type": "mysql", "rule_status": "ON", "create_user": "mysqluser", "task_ip": "127.0.0.1", "rule_type": "SQLSTAT", "task_port": 3306, "startdate": "2017-02-21 00:00:00", "stopdate": "2017-02-22 23:59:00"}
Configure the username, create_user, task_ip, task_port, hostname_max, startdate and stopdate options; hostname_max and task_ip can be consistent; the rest can be left at their defaults.
Run the command:
python command.py -m analysis_m_stat -c data/analysis_m_stat.json
This performs the analysis.
4.5 Automatic rule parsing
The manual rule parsing described above is useful for testing or in special cases; most of the time we use automatic rule parsing.
Automatic rule parsing is done with celery; for how to use celery, refer to http://docs.celeryproject.org/en/master/getting-started/first-steps-with-celery.html.
Here are some common commands about celery:
Start rule parsing:
celery -A task_other worker -E -Q sqlreview_analysis -l info
Start task export:
celery -A task_exports worker -E -l info
Enable obj information capture:
celery -A task_capture worker -E -Q sqlreview_obj -l debug -B -n celery-capture-obj
Open flower:
celery flower --address=0.0.0.0 --broker=redis://:password@127.0.0.1:6379/0
Enable plan, stat and text capture:
celery -A task_capture worker -E -Q sqlreview_other -l info -B -n celery-capture-other
Finally, rule parsing is added to supervisor hosting. Tasks are then generated through the web interface, scheduled by celery, and their execution status can be checked through flower.
Please refer to the configuration of supervisor for specific use.
V. Task Export
5.1 Manual task export
Configure the data/export.json file
{"module": "export", "type": "export", "task_uuid": "08d03ec6-f80a-11e6-adbc-005056a30561", "file_id": "08d03ec6-f80a-11e6-adbc-005056a30561"}
Configure the task_uuid and file_id options, which uniquely identify the task. They can be looked up in the job collection of the sqlreview database in MongoDB. Then run:
python command.py -m export -c data/export.json
This performs a manual task export and generates an offline HTML archive saved under task_export/downloads. It can be extracted directly, and the report viewed by opening it in a browser.
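If you need to find a task_uuid, a small pymongo query against the job collection lists recent task documents (a sketch; the exact field layout depends on the job documents themselves):

# Sketch: inspect the job collection to pick out task_uuid / file_id.
from pymongo import MongoClient

client = MongoClient("mongodb://sqlreview:sqlreview@127.0.0.1:27017/sqlreview")
db = client["sqlreview"]
for job in db.job.find().limit(5):
    print(job)  # look for the uuid fields referenced above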
5.2 Automatic task export
Automatic export is implemented by celery together with supervisor hosting; see the supervisor configuration for details.
VI. Web Management Console
6.1 Manually start the web console
Execute the following command
python command.py -m web -c data/web.json
Visit http://127.0.0.1:7000 to open the management console.
VII. Supervisor Configuration
7.1 supervisord.conf

; web management console
[program:themis-web]
command=/home/themis-test/python-project/bin/python command.py -m web -c data/web.json
autostart=true
redirect_stderr=true
stdout_logfile=tmp/themis_web.log
loglevel=info

; file download server
[program:themis-download]
command=/home/themis-test/python-project/bin/python task_export/file_download.py
autostart=true
redirect_stderr=true
stdout_logfile=tmp/themis_download.log
loglevel=info

; task export module
[program:themis-export]
command=/home/themis-test/python-project/bin/celery -A task_exports worker -E -l info
autostart=true
redirect_stderr=true
stdout_logfile=tmp/themis_export.log
loglevel=info

; rule parsing module
[program:themis-analysis]
command=/home/themis-test/python-project/bin/celery -A task_other worker -E -Q sqlreview_analysis -l info
autostart=true
redirect_stderr=true
stdout_logfile=tmp/themis_analysis.log
loglevel=info

; obj information capture module
[program:themis-capture-obj]
command=/home/themis-test/python-project/bin/celery -A task_capture worker -E -Q sqlreview_obj -l debug -B -n celery-capture-obj
autostart=true
redirect_stderr=true
stdout_logfile=tmp/themis_capture_obj.log
loglevel=info

; plan, stat, text information capture module
[program:themis-capture-other]
command=/home/themis-test/python-project/bin/celery -A task_capture worker -E -Q sqlreview_other -l info -B -n celery-capture-other
autostart=true
redirect_stderr=true
stdout_logfile=tmp/themis_capture_other.log
loglevel=info

; celery task management module (flower): to enable it, remove the leading ";"
; and configure the redis connection
;[program:themis-flower]
;command=/home/themis-test/python-project/bin/celery flower --address=0.0.0.0 --broker=redis://:password@127.0.0.1:6379/0
;autostart=true
;redirect_stderr=true
;stdout_logfile=tmp/themis_flower.log
;loglevel=info
Note: if the user created earlier is different, or a different directory is used, replace /home/themis-test/python-project/ in this file with your own path.
Commonly used supervisor commands:
Start supervisor:
supervisord -c script/supervisord.conf
Reload supervisor:
supervisorctl -u sqlreview -p sqlreview.themis reload
supervisorctl enters the supervisor management console; -u and -p specify the supervisorctl username and password, which are configured in supervisord.conf.
Reference: http://www.supervisord.org/
VIII. Description of Built-in Rules
The core of the platform is the rules. A rule is the definition and implementation of a set of filter conditions. The richness of the rule set represents the capabilities of the platform. The platform also provides extensibility, and users can define their own rules. From the perspective of classification, the rules can be roughly divided into several categories.
8.1 Rule classification
By database type, rules can be divided into Oracle and MySQL rules. Not all rules distinguish between databases; text-class rules, for example, do not. By complexity, rules can be divided into simple rules and complex rules. Simplicity or complexity here refers to the implementation of the rule audit: a simple rule can be described as a set of query conditions against MongoDB or a relational database, while a complex rule must be implemented externally by a program body. By audit object, rules can be divided into object class, text class, execution plan class and execution characteristic class.
8.2 Rule parameters
Rules can take parameters. For example, one of the execution plan rules is the large-table scan; here "large table" needs to be defined by parameters, which can be specified by physical size.
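As an illustration of what a parameterized rule might look like, here is a hypothetical rule document in the spirit of the large-table-scan example; the real schema is defined by script/rule.json, so all field names here are assumptions:

# Hypothetical shape of a parameterized "large table scan" rule.
big_table_scan_rule = {
    "rule_name": "BIG_TABLE_SCAN",          # assumed name
    "rule_type": "SQLPLAN",                 # an execution-plan-class rule
    "db_type": "O",
    "weight": 5,                            # points deducted per violation
    "max_score": 20,                        # cap on total deduction
    "parameters": {"table_size_mb": 1024},  # what counts as a "large" table
}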
8.3 Rule weight and threshold
The weight represents how many points are deducted per violation of a rule; it can be adjusted to suit your situation. The threshold represents the upper limit of deductions for a single rule; its main purpose is to prevent one heavily violated rule from drowning out all the others.
Rule weights and deductions accumulate into a total deduction, which the platform converts to a percentile score. In this way it provides a rough quantitative measure.
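The weight/threshold arithmetic can be summarized in a few lines (a sketch of the scheme described above, not Themis's exact code):

# Each violation deducts `weight` points, capped at the rule's threshold;
# summed deductions are then converted to a 100-point score.
def rule_deduction(violations, weight, threshold):
    return min(violations * weight, threshold)

deductions = [
    rule_deduction(violations=12, weight=2, threshold=10),  # 24 capped to 10
    rule_deduction(violations=1, weight=5, threshold=20),   # 5
]
score = max(0, 100 - sum(deductions))
print(score)  # 85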
8.4 Rules: object class (Oracle)
8.5 Rules: object class (MySQL)
8.6 Rules: execution plan class (Oracle)
8.7 Rules: execution plan class (MySQL)
8.8 Rules: execution characteristic class (Oracle)
8.9 Rules: execution characteristic class (MySQL)
8.10 Rules: text class
IX. Frequently Encountered Problems
Inconsistent host names can cause cx_Oracle errors.
Mismatched celery and flower versions can prevent flower from starting; upgrade flower to 0.8.1 or above.
MySQL 5.7 cannot initialize datetime columns with the default value (DEFAULT '0000-00-00 00:00:00').
MongoDB has a maximum document size limit, which can cause inserts to fail when generating results.
When fetching Oracle users, some systems create business users under USERS, so NOT IN ('USERS', 'SYSAUX') needs to be changed to NOT IN ('SYSAUX').
File locations: capture/sql.py, webui/utils/f_priv_db_user_list.py
In some cases python-devel needs to be installed; on CentOS use yum install python-devel.
For MySQLdb installation problems, refer to: http://blog.csdn.net/wklken/article/details/7271019
X. Exception Handling
If something goes wrong in the program, you can investigate by opening flower or by executing the code manually.
Flower can be enabled through the supervisor configuration:
; celery task management module (flower): remove the leading ";" to enable it,
; and configure the redis connection
;[program:themis-flower]
;command=/home/themis-test/python-project/bin/celery flower --address=0.0.0.0 --broker=redis://:password@127.0.0.1:6379/0
;autostart=true
;redirect_stderr=true
;stdout_logfile=tmp/themis_flower.log
;loglevel=info
You can also open it manually:
celery flower --address=0.0.0.0 --broker=redis://:password@127.0.0.1:6379/0
However, you need to configure the redis authentication option.
XI. Join the Development
If you have any questions, you can raise them directly at https://github.com/CreditEaseDBA/Themis/issues.
Source: Yixin Institute of Technology