2025-01-18 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report--
Today I will talk about automation on Linux and the installation and use of SaltStack, a sharp tool for operations and maintenance. Many people may not know much about it, so I have summarized the following content to help you understand it better. I hope you get something out of this article.
(1) Introduction to automated operations and maintenance
(1.1) When we operate on a single machine, such as installing the system, installing the related software packages, and configuring the related services, management is easy because there is only one machine. In daily work, however, we often manage not a single server but hundreds or thousands. How can we manage them simply and efficiently? The conventional way is to log in to each server remotely and configure it, but that is tedious and inefficient. Instead, we should use automation tools to manage our servers on top of the skills we already have. With such tools, we only need to operate on one terminal, and all configuration operations are carried out automatically on the corresponding servers. With the help of automation tools, operations efficiency improves greatly. Common automation tools include Ansible, SaltStack, and Puppet.
(1.2) SaltStack is an infrastructure management tool for heterogeneous platforms, usually run on Linux. It is a batch-management tool written in Python that uses the lightweight ZeroMQ messaging library for communication. It is completely open source under the Apache 2.0 license and is similar in function to Puppet and Chef: it has a powerful remote execution engine as well as a powerful configuration management system, usually called the Salt State System. SaltStack uses a C/S (client/server) model: the server side is the salt master, the client side is the minion, and minions communicate with the master through the ZeroMQ message queue.
(2) Installing SaltStack
(2.1) We usually install SaltStack from the official repository at https://repo.saltstack.com/, but SaltStack packages are also available from the EPEL repository for the operating system, so it is very convenient to install SaltStack directly from the epel source.
(2.2) At present there are three machines in our environment, all running CentOS 7.4. The hostname of the master is vms11.rhce.cc with IP address 192.168.26.11; one minion is vms12.rhce.cc with IP address 192.168.26.12; the other minion is vms13.rhce.cc with IP address 192.168.26.13.
(2.3) First we go into the system and disable the screensaver, then set the IP address of the vms11 host to 192.168.26.11, configure the subnet mask, gateway, and DNS, and restart the network service. We then set the IP address of the vms12 host to 192.168.26.12 and of the vms13 host to 192.168.26.13, with the same related settings.
(2.4) We also need to set SELinux to Disabled on all three hosts (which takes effect after a reboot), set the firewall zone to trusted (that is, allow all packets through), and set the correct hostname on each machine. In addition, we add the IP address, long hostname, and short hostname of each machine to the /etc/hosts file on all three hosts, which takes the place of local DNS resolution.
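Based on the addresses above, the /etc/hosts entries on each of the three hosts would look roughly like this (a sketch; the short names follow the hostnames given earlier):

```
192.168.26.11   vms11.rhce.cc   vms11
192.168.26.12   vms12.rhce.cc   vms12
192.168.26.13   vms13.rhce.cc   vms13
```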
(2.5) We first configure the mount for the CD-ROM, then create an aa.repo repository file in the /etc/yum.repos.d/ directory, and then install the epel source (figure 2-9).
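A minimal aa.repo for the mounted CD might look like the following sketch; the repository id and the /mnt mount point are assumptions, so adjust them to match your actual mount:

```ini
[aa]
name=CentOS 7 DVD
baseurl=file:///mnt
enabled=1
gpgcheck=0
```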
(2.6) Then we install salt-master on the vms11 machine and salt-minion on the vms12 and vms13 machines, start the salt-master service on the vms11 host, and enable it to start at boot.
# systemctl list-unit-files | grep salt
# systemctl start salt-master
# systemctl enable salt-master
# yum install salt-minion -y
(2.7) Next we need to tell each client which server manages it. We first operate on the vms12 host: enter the /etc/salt directory and edit the minion file, setting the value of master to vms11. We then copy the minion file from vms12 to the vms13 host.
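Only one line in the /etc/salt/minion file needs to change; a sketch of the relevant setting:

```yaml
# /etc/salt/minion -- point this minion at the master host
master: vms11
```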
(2.8) Then we start the salt-minion service on the two minion hosts, vms12 and vms13, and enable it to start at boot. When the salt-minion service starts on a minion host, the minion actively registers with the master.
# systemctl list-unit-files | grep salt
# systemctl start salt-minion.service
# systemctl enable salt-minion.service
(2.9) After receiving the request, however, the master holds the minion's key in a pending state. We can check the pending requests from the vms12 and vms13 hosts on the vms11 host with the salt-key command below. We can also see the pki key information received from the vms12 and vms13 hosts in the /etc/salt/pki/master/minions_pre/ directory on vms11 (figure 2-18). Likewise, on a minion host such as vms12 we can see that two key files, minion.pem and minion.pub, have been generated in the /etc/salt/pki/minion/ directory; the public key is what gets sent to the master (figure 2-19).
# salt-key -L
# ls /etc/salt/pki/master/minions_pre/
# ls /etc/salt/pki/minion/
(2.10) If we want to accept a single node's key, we can specify that node so that the master can manage it (figure 2-20). If we want to accept all pending requests at once, we can simply use the "-A" parameter (figure 2-21). If we want to delete an unwanted node we can use the "-d" parameter (figure 2-22), and if we want to delete all nodes at once we can use the "-D" parameter. When we restart the service on a minion node, that node's request to join the master is submitted again (figure 2-23), and in this way we can manage the nodes day-to-day from the master host.
# salt-key -a vms12.rhce.cc --- accept a specified single node
# salt-key -A --- accept all pending nodes
# salt-key -d vms12.rhce.cc --- delete a specified single node
# salt-key -D --- delete all nodes
# systemctl restart salt-minion.service --- restart the service on the minion node
(3) Remote execution
(3.1) Remote execution means that we can define an operation on the master and have the relevant commands executed automatically on the corresponding minion nodes, so that we do not need to log in to each node to run them.
Format: salt '*' module.command
# salt '*' test.ping --- executed on all machines (figure 2-24)
# salt vms12.rhce.cc test.ping --- executed on a single machine (figure 2-24)
# salt '*' cmd.run 'ls' --- executed on all machines
# salt vms12.rhce.cc cmd.run 'ls' --- executed on a single machine (figure 2-25)
# salt 'vms13.rhce.cc' cmd.run 'hostname' --- query the hostname of the vms13 host (figure 2-25)
# salt '*' cmd.run 'ifconfig ens32' --- query the network information of all hosts (figure 2-26)
# salt '*' cmd.run 'yum install vsftpd -y; systemctl start vsftpd; systemctl enable vsftpd' --- install a software package on all machines (figures 2-27 and 2-28)
(4) Configuration management
(4.1) We can also install software packages through configuration management. To do so, we write a configuration file with the .sls suffix, for example aa.sls; this base name is the name we will execute later. An sls file cannot be written in just any directory; it must be placed in a designated directory. In the /etc/salt directory there is a master file; if we find the "file_roots" section in it, we can define a directory such as /srv/salt. In that case our sls files must be placed in the /srv/salt directory (or a subdirectory) to take effect.
(4.2) Then we create the /srv/salt directory on the master, restart the salt-master service, go into /srv/salt, and create an aa.sls file, in which we define the state name, the module name, the command name, and the package to install (figure 4-3). The YAML file can then direct the corresponding minion to install according to the characteristics of its system (figure 4-4).
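As a sketch of the structure just described (the state ID "install-httpd" is an arbitrary name chosen for illustration), an aa.sls that installs httpd could look like this:

```yaml
# /srv/salt/aa.sls
install-httpd:        # state ID (any unique name)
  pkg.installed:      # module name . function name
    - name: httpd     # the package to install
```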
# mkdir /srv/salt
# systemctl restart salt-master.service
(4.3) Then we apply the aa.sls configuration from the vms11 host (figure 4-5), and we can verify that the httpd service has been installed correctly on the vms13 host (figure 4-6).
# salt '*' state.sls aa
# rpm -qa | grep httpd
(4.4) This aa.sls file does not specify which machine to execute on. If we manage many machines that need different packages, we need many different sls files. In that case we should use a top.sls file; the purpose of top.sls is to set which sls file each minion executes. We first stop the httpd and vsftpd services on the minion hosts and uninstall the httpd and vsftpd packages (figure 4-8). Then, on the vms11 host, we create a top.sls file in the /srv/salt directory, create an xx directory, and create a bb.sls file in the xx directory; we can use the tree command to view the directory tree (figure 4-9). In the bb.sls file we install and configure the vsftpd service. Our requirement now is to install the httpd service on the vms12 host and the vsftpd service on the vms13 host, which we can set in the top.sls file (figures 4-10 and 4-11).
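Under the requirement above, the two files could be sketched as follows (state IDs are illustrative; the syntax follows standard Salt states):

```yaml
# /srv/salt/top.sls -- map each minion to its sls file
base:
  'vms12.rhce.cc':
    - aa        # apply aa.sls (httpd) to vms12
  'vms13.rhce.cc':
    - xx.bb     # apply xx/bb.sls (vsftpd) to vms13

# /srv/salt/xx/bb.sls -- install vsftpd and keep it running
install-vsftpd:
  pkg.installed:
    - name: vsftpd
  service.running:
    - name: vsftpd
    - enable: True
```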
# rpm -qa httpd
# systemctl stop httpd vsftpd.service
# rpm -e httpd vsftpd httpd-devel
(4.5) Then we execute the installation command directly on the master host. We find that the httpd service has been installed on the vms12 host but vsftpd has not, while the vms13 host has installed vsftpd but not httpd.
# salt '*' state.highstate
(5) Module management
(5.1) To find out exactly which modules can be executed on a system, we can check with sys.list_modules (figure 5-1); examples include the modules behind "test.ping", "cmd.run", "pkg.installed", and "service.running", which we used earlier. If we want to see all the commands of a module, we can press the Tab key twice after typing the module name, which lists all the commands in, for example, the pkg module (figure 5-2).
# salt 'vms12.rhce.cc' sys.list_modules
# salt 'vms12.rhce.cc' pkg.
(5.2) Of course, we can also get information about a module's functions through sys.list_functions (figure 5-3). If we want to see how a module's subcommands are used, we can use sys.doc to get the module's usage information (figure 5-4).
# salt 'vms12.rhce.cc' sys.list_functions pkg
# salt 'vms12.rhce.cc' sys.doc pkg
# salt 'vms12.rhce.cc' sys.list_functions service
(6) grains module
(6.1) The main function of grains is to obtain information about the system, which is stored mainly in the form of variables. Let's first take a look at the subcommands in grains:
# salt 'vms12.rhce.cc' sys.list_functions grains
# salt 'vms12.rhce.cc' grains.items --- get all available variables and their values (figures 6-2 to 6-4)
# salt 'vms12.rhce.cc' grains.ls --- get all available variables, without their values (figure 6-5)
(6.2) If we want to get the value of a single variable, we can use the grains.get command (figure 6-6). The point of obtaining these variables is that, when writing state files, we can make decisions based on information about the system and do different things on different machines, for example performing one operation on RedHat and another on Debian (figure 6-7). For the specific syntax we can refer to the official documentation at https://docs.saltstack.com/en/latest/contents.html.
# salt 'vms12.rhce.cc' grains.get os --- get the value of a single variable
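For example, a state file can branch on a grain using Jinja; this sketch picks the Apache package name by OS family (the state ID "apache-pkg" is illustrative):

```yaml
# choose the package name from the os_family grain
{% if grains['os_family'] == 'RedHat' %}
apache-pkg:
  pkg.installed:
    - name: httpd
{% elif grains['os_family'] == 'Debian' %}
apache-pkg:
  pkg.installed:
    - name: apache2
{% endif %}
```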
(7) pillar module
(7.1) Pillar is mainly used to define data on the master and make it available to minions. When we first query pillar.items we get no values back, because the pillar data is not enabled by default.
# salt 'vms12.rhce.cc' sys.list_functions pillar
# salt 'vms12.rhce.cc' pillar.items
(7.2) To enable the pillar-related data, we go to the /etc/salt/ directory, edit the master file, change the value of pillar_opts to True, uncomment the pillar_roots path information, create the corresponding /srv/pillar directory on the system, and finally restart the salt-master service. After that, pillar.items returns information (figure 7-7).
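After editing, the relevant lines in /etc/salt/master would look roughly like this (a sketch of the two settings just mentioned):

```yaml
# /etc/salt/master
pillar_opts: True      # expose master config data via pillar.items
pillar_roots:          # where pillar sls files live
  base:
    - /srv/pillar
```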
# mkdir /srv/pillar
# systemctl restart salt-master.service
# salt '*' pillar.items
# salt 'vms12.rhce.cc' sys.doc pillar --- get the relevant help information with this command
(8) salt-ssh
(8.1) When the master connects to a minion for management, it can connect directly without being asked for any password; this is because the master and minion communicate using certificates. On the vms12 host we can find the two key files, .pem and .pub, under the /etc/salt/pki directory, and on the vms11 host we can likewise find the two key files master.pem and master.pub under /etc/salt/pki.
(8.2) If we now do not want the master and minion to communicate via certificates, but want to establish the connection over ssh instead, we should use salt-ssh. We rebuild the three hosts vms11, vms12, and vms13, then install the relevant software packages, starting with the epel source.
The environment is built and configured in these steps: ① go to the /etc/sysconfig/network-scripts/ directory and configure the IP address, gateway, subnet mask, DNS, and other information; ② mount the CD-ROM image to the /mnt directory; ③ configure automatic mounting of the CD image at boot in the /etc/fstab file; ④ go to the /etc/yum.repos.d/ directory and create the aa.repo repository file; ⑤ edit the IP address, long hostname, and short hostname information in the /etc/hosts file; ⑥ disable the host's screensaver.
# yum install epel* -y
After the epel source is installed, we install salt-ssh and edit the roster file in the /etc/salt/ directory. In this configuration file, on the master host, we enter the IP address, username, password, port, and other information for connecting to the vms12 and vms13 hosts.
# yum install salt-ssh -y
# vim /etc/salt/roster
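The roster file uses a simple YAML format; a sketch for our two minions (the password shown is a placeholder):

```yaml
# /etc/salt/roster
vms12:
  host: 192.168.26.12
  user: root
  passwd: redhat      # placeholder -- use the real root password
  port: 22
vms13:
  host: 192.168.26.13
  user: root
  passwd: redhat
  port: 22
```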
(8.4) When we log in with ssh, the system normally asks us to confirm with yes/no, because host key information is saved in the known_hosts file in the /root/.ssh/ directory. To avoid this, we can edit the /etc/ssh/ssh_config configuration file, change the value of StrictHostKeyChecking to no, and save. Then, testing connectivity to the minions from the vms11 host, we find that we can connect normally.
# salt-ssh '*' test.ping
(8.5) Now we can also execute commands from the master host. For example, with salt-ssh we can remotely obtain the hostname: the "-r" parameter runs a raw shell command, and the "-i" parameter means that host-key confirmation prompts are answered automatically. Of course, managing over ssh gives lower performance than the certificate-based connection method, but security is correspondingly improved.
# salt-ssh '*' -r 'hostname'
# salt-ssh '*' -r 'df -hT'
After reading the above, do you have a better understanding of automation on Linux and the installation and use of SaltStack? If you want to learn more, please follow our channel, and thank you for your support.