SLTechnology News & Howtos (shulou.com) > Database, updated 2025-04-01
I started testing Zabbix with version 1.6, ran 1.8 in production, and have since run or tested every version after 1.6, including 1.9, 2.0, 2.2, 3.0, and 4.0. Having moved from SA to DBA, I have accumulated some database-level experience operating Zabbix, and I hope it helps readers.
1: MySQL version recommendation
Use MySQL 5.7 or later; its online DDL support makes rapid Zabbix upgrades much easier.
Connection method: have the Zabbix server and proxies connect to the MySQL database by domain name wherever possible, to simplify failover.
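For example, the connection settings in zabbix_server.conf might look like this (the host name `zabbix-db.example.com` is a placeholder; point it at your own DNS record):

```
# zabbix_server.conf: connect by DNS name, not IP, so failover only needs a DNS change
DBHost=zabbix-db.example.com
DBName=zabbix
DBUser=zabbix
```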
2: Zabbix database authorization
Read-write account, used by Zabbix itself:
GRANT ALL PRIVILEGES ON zabbix.* TO 'zabbix'@'1.1.1.1' IDENTIFIED BY 'zabbix';
Read-only account, used by secondary development that only reads the Zabbix database:
GRANT SELECT ON zabbix.* TO 'zabbix_ro'@'1.1.1.1' IDENTIFIED BY 'zabbixro';
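After creating the accounts, the effective privileges can be double-checked with SHOW GRANTS (the host 1.1.1.1 follows the examples above):

```sql
SHOW GRANTS FOR 'zabbix'@'1.1.1.1';
SHOW GRANTS FOR 'zabbix_ro'@'1.1.1.1';
```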
3: Important parameters to adjust in the MySQL configuration file
innodb_log_files_in_group = 16
innodb_log_file_size = 1G
innodb_file_per_table = 1
max_allowed_packet = 64M
back_log = 1024
max_connections = 2000
sync_binlog = 0
innodb_flush_log_at_trx_commit = 0
binlog_format = row
expire_logs_days = 3
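After restarting MySQL it is worth confirming that the values actually took effect, for example:

```sql
-- verify the running values after restart
SHOW VARIABLES LIKE 'innodb_log_file%';
SHOW VARIABLES LIKE 'max_connections';
SHOW VARIABLES LIKE 'sync_binlog';
```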
4: Use SQL statements skillfully to operate and maintain Zabbix
Use SQL statements to update monitoring items or triggers in bulk; this improves change efficiency and reduces the impact on the database:
/* examples */
UPDATE zabbix.items SET delay = 5 WHERE key_ = 'icmpping';
UPDATE zabbix.items t, zabbix.functions f SET f.parameter = '30' WHERE f.itemid = t.itemid AND t.key_ = 'agent.ping';
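Before running a mass UPDATE like the ones above, it is prudent to preview the affected rows with a SELECT that uses the same WHERE clause, for instance:

```sql
-- preview which items the UPDATE would touch before changing them
SELECT itemid, key_, delay FROM zabbix.items WHERE key_ = 'icmpping';
```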
5: Use read-write separation and load balancing skillfully
For secondary development, read from a separate replica database instead of the production one, to avoid impacting the online monitoring server (reading history through the API can generate a large volume of read-only queries, so a read-only API endpoint is needed). This requires read-write separation for the Zabbix database. Filter a few tables out of replication and enable writes to them on the replica; these tables mainly hold user login audit and session records, and without this, logging in through the replica may fail:
Replicate-wild-ignore-table = zabbix.auditlog
Replicate-wild-ignore-table = zabbix.sessions
Replicate-wild-ignore-table = zabbix.ids
Also, when the Zabbix web/API tier sits behind a load balancer, use the IP-hash method, because sessions are involved.
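With nginx as the load balancer, for instance, IP-hash session persistence could be configured like this (server names are placeholders):

```
upstream zabbix_web {
    ip_hash;                      # the same client IP always hits the same web node
    server web1.example.com:80;
    server web2.example.com:80;
}
```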
6: Zabbix upgrade considerations
Upgrading Zabbix means changing the tables of the Zabbix database, which may involve table structure changes, index changes, data reorganization, and other operations. Build a new database cluster as a replica of the online cluster, wait for data synchronization to complete, break the replication, and let the new zabbix_server program connect to the new database cluster to perform the upgrade. Once the upgrade is complete, switch the online Zabbix server and database domain names over to the new server and new database; done this way, normal monitoring is affected as little as possible.
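The "break the replication" step on the new cluster is a plain MySQL operation; once the replica has caught up it might look like this (MySQL 5.7 syntax):

```sql
-- on the new cluster, after it has caught up with the online master
STOP SLAVE;
RESET SLAVE ALL;   -- discard replication configuration so the new cluster is standalone
```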
7: Database backup and data cleanup
Back up the configuration data, monitoring data, and audit logs; the events need not be backed up, as the data volume is too large.
Deleting historical data: this covers detailed monitoring history and trend data, which can be dropped by partition. In addition, the events table should be cleaned regularly: purge by primary key, keep roughly 5 days of data, and defragment the table periodically, otherwise the monitoring screens will gradually slow down.
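A batched, primary-key-based purge of the events table could be sketched as follows (the batch size and 5-day retention are illustrative; repeat the DELETE until it affects 0 rows):

```sql
-- find the largest eventid older than roughly 5 days
SELECT MAX(eventid) INTO @max_old FROM zabbix.events
WHERE clock < UNIX_TIMESTAMP(NOW() - INTERVAL 5 DAY);
-- delete below it in batches to limit lock time; repeat until 0 rows affected
DELETE FROM zabbix.events WHERE eventid <= @max_old LIMIT 10000;
-- defragment afterwards (locks the table while it runs)
OPTIMIZE TABLE zabbix.events;
```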
8: Partitioning Zabbix server historical data and trend data
Download address of the partitioning SQL script:
https://www.zabbix.org/wiki/Docs/howto/mysql_partition
/* Preparation for history table partitioning. Only needed on Zabbix versions before 3.2 (e.g. 2.0/3.0); 3.2 does not need to execute these. */
ALTER TABLE history_text DROP PRIMARY KEY, ADD INDEX (id), DROP INDEX history_text_2, ADD INDEX history_text_2 (itemid, id);
ALTER TABLE history_log DROP PRIMARY KEY, ADD INDEX (id), DROP INDEX history_log_2, ADD INDEX history_log_2 (itemid, id);
/* Set HousekeepingFrequency=0 in the server configuration file to disable Zabbix's built-in housekeeping of historical data. */
/* Index the time field of the historical data tables, to make later secondary development that queries by time easier. */
ALTER TABLE history ADD INDEX clock (clock);
ALTER TABLE history_log ADD INDEX clock (clock);
ALTER TABLE history_str ADD INDEX clock (clock);
ALTER TABLE history_str_sync ADD INDEX clock (clock);
ALTER TABLE history_sync ADD INDEX clock (clock);
ALTER TABLE history_text ADD INDEX clock (clock);
ALTER TABLE history_uint ADD INDEX clock (clock);
ALTER TABLE history_uint_sync ADD INDEX clock (clock);
ALTER TABLE trends ADD INDEX clock (clock);
ALTER TABLE trends_uint ADD INDEX clock (clock);
ALTER TABLE events ADD INDEX objectid (objectid, value, clock);
/* Drop these five stored procedures if they already exist */
DROP PROCEDURE IF EXISTS partition_create;
DROP PROCEDURE IF EXISTS partition_drop;
DROP PROCEDURE IF EXISTS partition_maintenance;
DROP PROCEDURE IF EXISTS partition_maintenance_all;
DROP PROCEDURE IF EXISTS partition_verify;
/* Execute the maintenance stored procedure for the first time */
CALL partition_maintenance_all('zabbix');
Add a crontab job that executes the stored procedure once a day to create new partitions and drop old ones:
#!/bin/sh
/xxxxx/mysql/bin/mysql -uzabbix -pzabbixmonitor --socket=/xxxx/mysql/tmp/mysql.sock --database=zabbix -e "CALL partition_maintenance_all('zabbix');"
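The corresponding crontab entry could look like this (the script path and schedule are placeholders):

```
# run daily partition maintenance at 01:00
0 1 * * * /path/to/zabbix_partition.sh >> /var/log/zabbix_partition.log 2>&1
```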
Details of the stored procedure are as follows:
/* partition_create */
DELIMITER $$
CREATE PROCEDURE partition_create(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), PARTITIONNAME VARCHAR(64), CLOCK INT)
BEGIN
    /*
       SCHEMANAME = The DB schema in which to make changes
       TABLENAME = The table with partitions to potentially delete
       PARTITIONNAME = The name of the partition to create
    */
    /* Verify that the partition does not already exist */
    DECLARE RETROWS INT;
    SELECT COUNT(1) INTO RETROWS
    FROM information_schema.partitions
    WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_description >= CLOCK;

    IF RETROWS = 0 THEN
        /*
           1. Print a message indicating that a partition was created.
           2. Create the SQL to create the partition.
           3. Execute the SQL from #2.
        */
        SELECT CONCAT("partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")") AS msg;
        SET @SQL = CONCAT('ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));');
        PREPARE STMT FROM @SQL;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
    END IF;
END$$
DELIMITER ;
/* partition_drop */
DELIMITER $$
CREATE PROCEDURE partition_drop(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
    /*
       SCHEMANAME = The DB schema in which to make changes
       TABLENAME = The table with partitions to potentially delete
       DELETE_BELOW_PARTITION_DATE = Delete any partitions with names that are dates older than this one (yyyy-mm-dd)
    */
    DECLARE done INT DEFAULT FALSE;
    DECLARE drop_part_name VARCHAR(16);

    /*
       Get a list of all the partitions that are older than the date
       in DELETE_BELOW_PARTITION_DATE. All partitions are prefixed with
       a "p", so use SUBSTRING to get rid of that character.
    */
    DECLARE myCursor CURSOR FOR
        SELECT partition_name
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

    /*
       Create the basics for when we need to drop the partition. Also, create
       @drop_partitions to hold a comma-delimited list of all partitions that
       should be deleted.
    */
    SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
    SET @drop_partitions = "";

    /* Start looping through all the partitions that are too old. */
    OPEN myCursor;
    read_loop: LOOP
        FETCH myCursor INTO drop_part_name;
        IF done THEN
            LEAVE read_loop;
        END IF;
        SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
    END LOOP;
    IF @drop_partitions != "" THEN
        /*
           1. Build the SQL to drop all the necessary partitions.
           2. Run the SQL to drop the partitions.
           3. Print out the table partitions that were deleted.
        */
        SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
        PREPARE STMT FROM @full_sql;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;

        SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
    ELSE
        /*
           No partitions are being deleted, so print out "N/A" (Not applicable)
           to indicate that no changes were made.
        */
        SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
    END IF;
END$$
DELIMITER ;
/* partition_maintenance */
DELIMITER $$
CREATE PROCEDURE partition_maintenance(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
    DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
    DECLARE PARTITION_NAME VARCHAR(16);
    DECLARE LESS_THAN_TIMESTAMP INT;
    DECLARE CUR_TIME INT;

    CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
    SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));

    SET @__interval = 1;
    create_loop: LOOP
        IF @__interval > CREATE_NEXT_INTERVALS THEN
            LEAVE create_loop;
        END IF;

        SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
        SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
        CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
        SET @__interval = @__interval + 1;
    END LOOP;

    SET OLDER_THAN_PARTITION_DATE = DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
    CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);
END$$
DELIMITER ;
/* partition_verify */
DELIMITER $$
CREATE PROCEDURE partition_verify(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
    DECLARE PARTITION_NAME VARCHAR(16);
    DECLARE RETROWS INT(11);
    DECLARE FUTURE_TIMESTAMP TIMESTAMP;

    /*
     * Check if any partitions exist for the given SCHEMANAME.TABLENAME.
     */
    SELECT COUNT(1) INTO RETROWS
    FROM information_schema.partitions
    WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_name IS NULL;

    /*
     * If partitions do not exist, go ahead and partition the table
     */
    IF RETROWS = 1 THEN
        /*
         * Take the current date at 00:00:00 and add HOURLYINTERVAL to it. This is the timestamp below which we will store values.
         * We begin partitioning based on the beginning of a day. This is because we don't want to generate a random partition
         * that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
         * end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
         */
        SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
        SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');

        -- Create the partitioning query
        SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
        SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");

        -- Run the partitioning query
        PREPARE STMT FROM @__PARTITION_SQL;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
    END IF;
END$$
DELIMITER ;
/* partition_maintenance_all */
DELIMITER $$
CREATE PROCEDURE partition_maintenance_all(SCHEMA_NAME VARCHAR(32))
BEGIN
    CALL partition_maintenance(SCHEMA_NAME, 'history', 28, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_log', 28, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_str', 28, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_text', 28, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 28, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'trends', 730, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 730, 24, 14);
END$$
DELIMITER ;
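After the first CALL, the partitions that were created can be inspected through information_schema, for example:

```sql
-- list daily partitions on the history table (the 'p%Y%m%d%H00' names from partition_create)
SELECT partition_name, partition_description
FROM information_schema.partitions
WHERE table_schema = 'zabbix' AND table_name = 'history'
ORDER BY partition_description;
```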