SaltStack introduction
SaltStack is an open-source batch management tool with very powerful management capabilities; it can manage tens of thousands of servers at the same time. It is written in Python and provides an API.
SaltStack can run in four modes: Local, Master/Minion, Salt SSH, and Syndic.
SaltStack has three major functions: remote execution, configuration management (states), and cloud management.
SaltStack supports a variety of common operating systems, including Windows (which can only act as a minion).
SaltStack relies on ZeroMQ in a publish/subscribe pattern. The master listens on port 4505 (the publish port), to which all minions connect, while port 4506 on the master accepts data returned by the minions. Because the connections are persistent TCP connections, commands executed against a large number of servers from the master return very quickly.
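As a quick sanity check (assuming a default installation and that the ss utility is available), you can confirm that the master is listening on both ZeroMQ ports:
# ss -tnlp | grep -E '4505|4506'    # expect salt-master on both the publish (4505) and return (4506) ports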
Installation and configuration of SaltStack
SaltStack provides a wealth of installation methods, which can be found on the official website:
http://repo.saltstack.com/#rhel
Two CentOS 7 systems are used here, one as the master (node1) and the other as the minion (node2).
Install the SaltStack repository:
yum install -y https://repo.saltstack.com/yum/redhat/salt-repo-latest-1.el7.noarch.rpm
Install both the server and the client on the master side:
yum install -y salt-master salt-minion
Install the client on the minion side:
yum install -y salt-minion
Start master:
# systemctl start salt-master
On both the master and the minion, modify the minion configuration file to point to the master, then start the minion service:
# vim /etc/salt/minion
master: 172.16.10.60    # can be a hostname or an IP
#id:                    # if id is left commented out, it defaults to the hostname
# systemctl start salt-minion
After a successful start, an /etc/salt/minion_id file is generated that records the minion's ID:
# cat /etc/salt/minion_id
node2
A series of keys are generated under the pki directory:
[root@node1 salt]# tree pki
pki
├── master
│   ├── master.pem              # master private key
│   ├── master.pub              # master public key
│   ├── minions
│   ├── minions_autosign
│   ├── minions_denied
│   ├── minions_pre             # keys not yet accepted (unmanaged)
│   │   ├── node1
│   │   └── node2
│   └── minions_rejected
└── minion
    ├── minion.pem
    └── minion.pub
[root@node2 salt]# tree pki
pki
├── master
└── minion
    ├── minion.pem
    └── minion.pub
Comparing MD5 checksums shows that the node2 key received on the master is identical to node2's local key:
# note: after the key is accepted, it moves under the minions/ path
[root@node1 ~]# md5sum /etc/salt/pki/master/minions/node2
d9a0453d7d539dbcc36e1daea259aa10  /etc/salt/pki/master/minions/node2
[root@node2 ~]# md5sum /etc/salt/pki/minion/minion.pub
d9a0453d7d539dbcc36e1daea259aa10  /etc/salt/pki/minion/minion.pub
Use the salt-key command on the master to view the minion keys:
[root@node1 salt]# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
node1
node2
Rejected Keys:
Accept a specific minion host:
# salt-key -a node1    # wildcards such as node* can be used to accept every host whose name begins with node
The following keys are going to be accepted:
Unaccepted Keys:
node1
Proceed? [n/Y] Y
Key for minion node1 accepted.
Accept all minion hosts:
# salt-key -A
Run a test against all hosts to check whether they are online:
[root@node1 ~]# salt '*' test.ping
node2:
    True
node1:
    True
Run a command on all hosts; the following result shows that remote management is working:
[root@node1 ~]# salt '*' cmd.run 'date'
node1:
    Fri Nov  4 14:21:37 CST 2016
node2:
    Fri Nov  4 14:21:37 CST 2016
If you change a hostname during use (not recommended, as it causes many problems), set the hostname (id) explicitly in the minion configuration file, delete the old key for that host on the master, and restart the minion:
[root@node-1 ~]# salt-key -d old-key-name    # delete the specified key
[root@node-1 ~]# salt-key -D                 # delete all keys
SaltStack configuration management
When we need to configure and manage a large number of machines, we frequently modify all kinds of configuration files. SaltStack configuration management lets us deploy services more conveniently: different servers and different services are managed by editing .sls files.
SaltStack configuration management is written as YAML state files.
YAML syntax (a snippet combining all three rules follows the list):
1. Indentation: two spaces per level; the tab key cannot be used.
2. Colon: represents a key-value pair, key: value (note the space after the colon).
3. Dash: indicates a list item (with a space after the dash).
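A minimal illustrative snippet (the keys and values here are arbitrary, purely to show the syntax):
user: admin            # key: value, with a space after the colon
packages:              # two-space indentation, never tabs
  - httpd              # a dash plus a space marks each list item
  - httpd-devel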
Basic configuration
Edit the master configuration file to define the file roots:
# vim /etc/salt/master
file_roots:
  base:
    - /srv/salt    # root path for state files
The example in the configuration file also shows three environments, base, dev, and prod, representing the base, test, and production environments respectively; each environment can be given its own paths:
# Example:
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states
Restart after modifying the salt configuration file:
systemctl restart salt-master
Create the root path:
mkdir /srv/salt    # root path for salt
Execute the same command in batch
When you need to install and configure the same content on a large number of hosts, you can do so by editing a simple .sls file.
Create a directory under the base environment and write the configuration that needs to be applied:
mkdir /srv/salt/web
vim /srv/salt/web/apache.sls
# file contents:
apache-install:          # state ID, can be named freely
  pkg.installed:         # pkg module, calling the installed function
    - names:             # parameter of the installed function
      - httpd            # packages to be installed
      - httpd-devel
apache-service:          # state ID
  service.running:       # call the service.running function
    - name: httpd        # the service named httpd
    - enable: True       # enable start on boot
Execute the .sls file:
salt '*' state.sls web.apache    # the state.sls execution module applies web/apache.sls from the root directory
Apache will be installed and started on the two online hosts.
Looking at node2, a cached copy of apache.sls has been generated as well:
[root@node2 ~]# cat /var/cache/salt/minion/files/base/web/apache.sls
apache-install:
  pkg.installed:
    - names:
      - httpd
      - httpd-devel
apache-service:
  service.running:
    - name: httpd
    - enable: True
The minion copies the master's .sls file into a local cache and executes it locally.
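If that cache ever needs to be discarded (for example, after large changes on the master), saltutil provides a cache-clearing function; a quick sketch:
# salt '*' saltutil.clear_cache    # removes the minion-side cache; it is repopulated on the next run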
Assign different actions to different hosts
The top file is named in the master configuration file and must live in the base environment:
#state_top: top.sls    # the default; no change is required
Write a top.sls file at the base environment path, assigning different states to different minion hosts:
[root@node1 salt]# cat /srv/salt/top.sls
base:
  'node1':
    - web.apache
  'node2':
    - web.apache
Run the highstate command to put the top.sls file into effect:
salt '*' state.highstate test=True    # dry-runs the top file without modifying the minions
salt '*' state.highstate              # '*' targets all minions and applies the top file
SaltStack data system
SaltStack ships two data systems, Grains and Pillar. They collect information about hosts and then help users filter and locate, in a variety of ways, the hosts on which a salt command should run.
Grains introduction
Grains holds static data. When the minion starts, it collects local information such as the operating system version, kernel, and hardware; grains can also be customized. The data gathered at startup does not change while the minion is running; new values are only picked up after a restart or a synchronization.
The features of Grains can be used in several scenarios:
Asset management and information queries.
Target selection (salt -G 'key:value' cmd.run 'cmd').
Used in configuration management.
View Grains information:
salt 'node1' grains.ls      # lists the names of all grains
salt 'node1' grains.items   # displays all grains with their values
salt '*' grains.item fqdn   # returns the fqdn grain from all minions
Use grains to target the hosts that match a value, and run a salt command:
[root@node1 salt]# salt '*' grains.item os
node1:
    os:
        CentOS
node2:
    os:
        CentOS
[root@node1 salt]# salt -G 'os:CentOS' cmd.run 'date'    # the -G flag targets on grains
node1:
    Sun Nov  6 17:28:36 CST 2016
node2:
    Sun Nov  6 17:28:36 CST 2016
Custom Grains
There are two ways to customize Grains:
Add configuration in /etc/salt/minion
Write a new configuration file, grains, in the /etc/salt/ directory
On node2, edit /etc/salt/minion, uncomment the grains section, and add a roles entry:
grains:
  roles: apache
Restart salt-minion and get the value of roles on master:
[root@node2 ~]# systemctl restart salt-minion
[root@node1 ~]# salt '*' grains.item roles
node1:
    roles:
node2:
    roles:
        apache
Based on the grain match, you can then run commands against only the hosts that match:
[root@node1 ~]# salt -G 'roles:apache' cmd.run 'date'
node2:
    Sun Nov  6 17:51:07 CST 2016
Alternatively, write a new configuration file, grains, in the /etc/salt/ directory:
[root@node2 ~]# cat /etc/salt/grains
saltname: trying
[root@node2 ~]# systemctl restart salt-minion
# instead of restarting, a refresh command from the master has the same effect:
[root@node1 ~]# salt '*' saltutil.sync_grains
node1:
node2:
View the value on the master:
[root@node1 ~]# salt '*' grains.item saltname
node1:
    saltname:
node2:
    saltname:
        trying
Using Grains in top file
Use grains to match in top.sls:
# vim /srv/salt/top.sls
base:
  'node1':
    - web.apache
  'saltname:trying':    # this grain matches node2
    - match: grain      # match on grains (note: 'grain', without the s)
    - web.apache
Writing grains with a Python script
Grains written as Python scripts must be stored in the _grains directory of the base environment:
[root@node1 _grains]# cat /srv/salt/_grains/my_grains.py
def my_grains():
    grains = {}
    grains['iaas'] = 'openstack'
    grains['blog'] = 'trying'
    # return the grains dictionary
    return grains
Synchronize the script to the minion side:
[root@node1 _grains]# salt '*' saltutil.sync_grains    # specific hosts can also be targeted here
node1:
    - grains.my_grains
node2:
    - grains.my_grains
You can see the synchronized scripts and directories on the minion side:
[root@node2 ~]# tree /var/cache/salt/minion/
/var/cache/salt/minion/
├── accumulator
├── extmods
│   └── grains
│       ├── my_grains.py      # the synchronized script
│       └── my_grains.pyc
├── files
│   └── base
│       ├── _grains
│       │   └── my_grains.py  # the script file
│       ├── top.sls
│       └── web
│           └── apache.sls
├── highstate.cache.p
├── module_refresh
├── pkg_refresh
├── proc
└── sls.p
Check the result from the master; the grain has been synchronized to the minions:
[root@node1 _grains]# salt '*' grains.item blog
node1:
    blog:
        trying
node2:
    blog:
        trying
Priority of Grains
The above gives four ways to define the value of a grain. If the same name is defined in more than one place, the value is resolved by the following priority, from lowest to highest (a later definition overrides an earlier one):
1. Values that come with the system
2. Values written in the /etc/salt/grains file
3. Values defined in the minion configuration file
4. Values customized in the /srv/salt/_grains/my_grains.py script
Pillar introduction
Unlike grains, pillar data is dynamic, and specific data can be assigned to a particular minion. Only the targeted minion can see its own data, so pillar is commonly used for sensitive data.
Pillar's built-in system values are turned off by default:
[root@node1 salt]# salt '*' pillar.items
node1:
    ----------
node2:
    ----------
You can uncomment the option in /etc/salt/master and set it to True, although these built-in system values are rarely used:
pillar_opts: True
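After changing the option, restart the master and query again; the master's configuration values then show up as pillar data:
# systemctl restart salt-master
# salt '*' pillar.items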
Custom pillar
To customize pillar, you write an .sls file that defines the pillar data, using YAML to define multiple levels.
Modify the master configuration file and uncomment:
vim /etc/salt/master
pillar_roots:
  base:
    - /srv/pillar
Restart salt-master:
systemctl restart salt-master
Create a directory for pillar:
mkdir /srv/pillar
As with grains, create category directories and .sls files:
mkdir /srv/pillar/web
vim /srv/pillar/web/apache.sls
{% if grains['os'] == 'CentOS' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}
Write a top file in the pillar directory to target hosts, just as with grains:
# cat /srv/pillar/top.sls
base:
  'node2':        # only node2 is targeted
    - web.apache
Refresh to make the configuration effective:
# salt '*' saltutil.refresh_pillar
node2:
    True
node1:
    True
Check that the pillar item has taken effect:
# salt '*' pillar.items apache
node2:
    apache:
        httpd
node1:
    apache:
Use pillar matching to target the host node2, and run a command:
# salt -I 'apache:httpd' cmd.run 'hostname'
node2:
    node2
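Pillar values are most useful inside state files, where Jinja lets one state install the correct package everywhere. A minimal sketch reusing the apache pillar key defined above:
apache-install:
  pkg.installed:
    - name: {{ pillar['apache'] }}    # resolves to httpd on CentOS, apache2 on Debian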
Comparison between Grains and Pillar
Category   Type      Data collection        Application scenarios                                        Defined on
Grains     static    collected by minion    data queries, target selection, configuration management     minion
Pillar     dynamic   customized on master   target selection, configuration management, sensitive data   master
SaltStack remote execution
SaltStack command syntax: salt '*' cmd.run 'w'
Command: salt
Target: '*'
Module: cmd.run. SaltStack ships with 150+ modules, and you can also write custom modules (see the sys module query after this list).
Return: the result returned after execution.
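To see which execution modules a minion has loaded, or to read the documentation for a single function, query the built-in sys module:
# salt 'node1' sys.list_modules    # list all loaded execution modules
# salt 'node1' sys.doc cmd.run     # show the documentation for one function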
Target selection methods
All of the matching methods below can also be specified in the top file.
Use wildcards:
# salt 'node*' test.ping
# salt 'node[1|2]' test.ping
# salt 'node[1-2]' test.ping
# salt 'node[!2]' test.ping
Use a list (-L):
# salt -L 'node1,node2' test.ping
Use a regular expression (-E):
# salt -E 'node(1|2)*' test.ping
Use an IP or subnet (-S):
# salt -S '172.16.10.61' test.ping
# salt -S '172.16.10.0/24' test.ping
Use a node group (-N). Modify the configuration file /etc/salt/master and add a matching group to the nodegroups section:
nodegroups:
  web: 'L@node1,node2'
Restart salt-master:
systemctl restart salt-master
Match node-group:
[root@node1 ~]# salt -N web test.ping
node2:
    True
node1:
    True
Documentation with examples of all the matching methods:
https://www.unixhot.com/docs/saltstack/topics/targeting/index.html
https://www.unixhot.com/docs/saltstack/topics/targeting/compound.html
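The compound matcher (-C) combines several of the methods above in one expression; a quick sketch, using the prefixes described in the compound-matching documentation:
# salt -C 'G@os:CentOS and node*' test.ping    # grain match (G@) combined with a glob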
SaltStack built-in modules
https://docs.saltstack.com/en/latest/ref/modules/all/index.html#all-salt-modules
SaltStack integrates many modules, most of which are written in Python. Using these modules, you can easily return information or perform certain tasks.
Examples of some commonly used modules:
network module: network-related functions
https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.network.html#module-salt.modules.network
salt.modules.network.active_tcp indicates the path and location of the module function; salt.modules corresponds to where the salt modules are installed on the machine, by default /usr/lib/python2.7/site-packages/salt/modules.
When executing a command, you can call the function directly with the salt command.
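For example, two functions listed in the network module documentation above:
# salt '*' network.interfaces      # show each minion's network interfaces
# salt 'node1' network.active_tcp  # list active TCP connections on node1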
service module: functions related to the services on a host
https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.service.html#module-salt.modules.service
Some module functions also have standalone commands; the cp module, for example:
cp is a file-copy module, and salt-cp can be used directly on the system, such as:
salt-cp '*' /etc/hosts /tmp/    # copy the local hosts file to the /tmp directory of all minions
state: the state-execution module
https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html#module-salt.modules.state
SaltStack returners
Return data is produced on the minion side, and minions can write their return data directly to MySQL without going through the master.
To let minions return data directly to MySQL, install MySQL-python on all minions:
salt '*' state.single pkg.installed name=MySQL-python
To store the returned data in MySQL, install the database (here on the master) and create the table structure and grants:
https://docs.saltstack.com/en/latest/ref/returners/all/salt.returners.mysql.html
Execute the following command to create the database and authorize:
CREATE DATABASE `salt` DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
USE `salt`;
DROP TABLE IF EXISTS `jids`;
CREATE TABLE `jids` (
  `jid` varchar(255) NOT NULL,
  `load` mediumtext NOT NULL,
  UNIQUE KEY `jid` (`jid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE INDEX jid ON jids(jid) USING BTREE;
DROP TABLE IF EXISTS `salt_returns`;
CREATE TABLE `salt_returns` (
  `fun` varchar(50) NOT NULL,
  `jid` varchar(255) NOT NULL,
  `return` mediumtext NOT NULL,
  `id` varchar(255) NOT NULL,
  `success` varchar(10) NOT NULL,
  `full_ret` mediumtext NOT NULL,
  `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  KEY `id` (`id`),
  KEY `jid` (`jid`),
  KEY `fun` (`fun`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
DROP TABLE IF EXISTS `salt_events`;
CREATE TABLE `salt_events` (
  `id` BIGINT NOT NULL AUTO_INCREMENT,
  `tag` varchar(255) NOT NULL,
  `data` mediumtext NOT NULL,
  `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  `master_id` varchar(255) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `tag` (`tag`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
GRANT ALL ON salt.* TO salt@'%' IDENTIFIED BY 'salt';
> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| salt               |
+--------------------+
4 rows in set (0.00 sec)
> use salt;
Database changed
> show tables;
+----------------+
| Tables_in_salt |
+----------------+
| jids           |
| salt_events    |
| salt_returns   |
+----------------+
3 rows in set (0.00 sec)
MariaDB [salt]> select * from salt_returns;
Empty set (0.00 sec)
On every minion, modify the salt configuration file and append the following settings at the end:
mysql.host: '172.16.10.60'    # address of the mysql host
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
Restart the minion service:
systemctl restart salt-minion
Run a test command with the mysql returner:
salt '*' test.ping --return mysql
Look at the database again and find that the data has been written:
[root@node1 ~]# mysql -h 172.16.10.60 -usalt -p -e "use salt; select * from salt_returns\G"
Enter password:
*************************** 1. row ***************************
       fun: test.ping
       jid: 20161108135248194687
    return: true
        id: node2
   success: 1
  full_ret: {"fun_args": [], "jid": "20161108135248194687", "return": true, "retcode": 0, "success": true, "fun": "test.ping", "id": "node2"}
alter_time: 2016-11-08 13:52:48
*************************** 2. row ***************************
       fun: test.ping
       jid: 20161108135248194687
    return: true
        id: node1
   success: 1
  full_ret: {"fun_args": [], "jid": "20161108135248194687", "return": true, "retcode": 0, "success": true, "fun": "test.ping", "id": "node1"}
alter_time: 2016-11-08 13:52:48
Execute a command to write to the database:
salt '*' cmd.run 'df -h' --return mysql
Looking at the database again, you will find the command, its return value, and the time of the operation all recorded.
Writing a custom module
We can write custom modules to meet our own needs. For example, a module script can perform a complex operation, and salt can then call the module directly to achieve the function we need.
Modules are stored under the path /srv/salt/_modules:
mkdir /srv/salt/_modules
Write a module script that displays disk information:
[root@node1 /srv/salt/_modules]# vim /srv/salt/_modules/my_disk.py
def list():
    cmd = 'df -h'
    # __salt__ exposes the loaded execution modules to custom modules
    ret = __salt__['cmd.run'](cmd)
    return ret
Synchronize the module file to all minions:
# salt '*' saltutil.sync_modules
node1:
    - modules.my_disk
node2:
    - modules.my_disk
Run the my_disk.py module:
# salt '*' my_disk.list
node2:
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root  8.5G  1.4G  7.1G  17% /
    devtmpfs                 483M     0  483M   0% /dev
    tmpfs                    493M   12K  493M   1% /dev/shm
    tmpfs                    493M   20M  474M   4% /run
    tmpfs                    493M     0  493M   0% /sys/fs/cgroup
    /dev/sda1                497M  165M  332M  34% /boot
    tmpfs                     99M     0   99M   0% /run/user/0
node1:
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root  8.5G  1.6G  7.0G  18% /
    devtmpfs                 483M     0  483M   0% /dev
    tmpfs                    493M   28K  493M   1% /dev/shm
    tmpfs                    493M   20M  474M   4% /run
    tmpfs                    493M     0  493M   0% /sys/fs/cgroup
    /dev/sda1                497M  165M  332M  34% /boot
    tmpfs                     99M     0   99M   0% /run/user/0
You can see that salt returns each minion's result to the master, the same output as running df -h locally.