ls
# list all files starting with a or o
[root@sh02-hap-bss-prod-consul03 ~]# ls
anaconda-ks.cfg  nss-pam-ldapd-0.9.8-1.gf.el7.x86_64.rpm  openldap-clients-2.4.44-21.el7_6.x86_64.rpm  original-ks.cfg  tools
[root@sh02-hap-bss-prod-consul03 ~]# ls [ao]*
anaconda-ks.cfg  openldap-clients-2.4.44-21.el7_6.x86_64.rpm  original-ks.cfg
# [0-9] matches any single digit
# [!0-9] matches anything that does not begin with a digit
[root@sh02-hap-bss-prod-consul03 ~]# ls
1  3  anaconda-ks.cfg  openldap-clients-2.4.44-21.el7_6.x86_64.rpm  tools
2  44  nss-pam-ldapd-0.9.8-1.gf.el7.x86_64.rpm  original-ks.cfg
[root@sh02-hap-bss-prod-consul03 ~]# ls [!0-9]*
anaconda-ks.cfg  nss-pam-ldapd-0.9.8-1.gf.el7.x86_64.rpm  openldap-clients-2.4.44-21.el7_6.x86_64.rpm  original-ks.cfg
tools:
libnss-cache  nsscache
[root@sh02-hap-bss-prod-consul03 ~]# ls [0-9]*
1  2  3  44
rm
# delete files whose names start with a digit
rm -f [0-9]*
# delete files whose names do not start with a digit
[root@sh02-hap-bss-prod-consul03 test]# ls
1  2  3  4  a  aa  b  bb
[root@sh02-hap-bss-prod-consul03 test]# rm -f [!0-9]*
[root@sh02-hap-bss-prod-consul03 test]# ls
1  2  3  4
echo
By default, echo appends a newline character to the end of its output. Use -n to suppress the trailing newline:
[root@host1 src]# echo abc ddd
abc ddd
[root@host1 src]# echo -n abc ddd
abc ddd[root@host1 src]#
echo -e
echo -e handles special characters: if the following escape sequences appear in the string, they are interpreted instead of being printed as ordinary text:
\a  alert (bell)
\b  backspace (erases the previous character)
\c  suppress the trailing newline
\f  form feed; new line, but the cursor stays in the same column
\n  newline; the cursor moves to the beginning of the next line
\r  carriage return; the cursor moves to the beginning of the line without a newline
\t  insert a tab
\v  vertical tab, similar to \f
\\  insert a literal \ character
\nnn  insert the ASCII character whose octal code is nnn
Here are some examples:
$ echo -e "a\bdddd"    # the preceding a is erased
dddd
$ echo -e "a\adddd"    # the output is accompanied by an alert sound
adddd
$ echo -e "a\ndddd"    # newline
a
dddd
Variable
String length: ${#var}
[root@host1 src]# echo ${NODE_HOME}
/usr/local/node
[root@host1 src]# echo ${#NODE_HOME}
15
# "/usr/local/node" has a length of 15 characters
Using the shell for mathematical calculation
When using let, you do not need to add $ before the variable name:
[root@host1 src]# nod1=3
[root@host1 src]# nod2=5
[root@host1 src]# abc=$[nod1+nod2]
[root@host1 src]# echo $abc
8
[root@host1 src]# let def=nod1+nod2
[root@host1 src]# echo $def
8
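Besides let and the $[ ] form, the POSIX $(( )) arithmetic expansion does the same job; a small sketch (variable names reused from above for illustration):
nod1=3; nod2=5
echo $((nod1 + nod2))    # 8; the $ before variable names is optional inside (( ))
((nod1++))               # statement form of shell arithmetic
echo $nod1               # 4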
bc
[root@host3 2056]# echo "4*0.56" | bc
2.24
[root@host3 2056]# no=54
[root@host3 2056]# res=`echo "$no*1.5" | bc`
[root@host3 2056]# echo $res
81.0
Additional parameters can be placed before the actual operation, separated by semicolons, and passed to bc through stdin.
For example, set the decimal precision
[root@host3 2056]# echo "scale=2;3/8" | bc
.37
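bc can also convert between number bases; a minimal sketch (the values are just illustrations):
echo "obase=2;100" | bc      # decimal 100 printed in binary: 1100100
echo "ibase=2;1100100" | bc  # binary read back in, printed in decimal: 100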
File descriptor
0 --- stdin (standard input)
1 --- stdout (standard output)
2 --- stderr (standard error)
When a command exits with an error, it returns a non-zero exit status; after successful execution it returns 0. The exit status can be obtained from $?: echo $?
ls > out.txt       # stdout goes to out.txt, errors still appear on the terminal
ls 2> out.txt      # stderr goes to out.txt, normal output still appears on the terminal
ls &> out.txt      # both stdout and stderr are redirected to out.txt
# the two can also be combined:
find /etc -name passwd > find.txt 2> find.err
# discard the errors and show only the correct results on the screen:
find /etc -name passwd 2> /dev/null
# discard all results:
find /etc -name passwd &> /dev/null
# stderr cannot travel through a pipe on its own; if needed, it must first be
# merged into stdout with 2>&1:
find /etc -name passwd 2>&1 | less
# for example, find /etc -name passwd | wc -l counts only the correct output
# lines, while find /etc -name passwd 2>&1 | wc -l counts the error lines too
/sbin/service vsftpd stop > /dev/null 2>&1
# stop the service: stdout is discarded, and stderr is sent to the same place
# as stdout, so everything ends up in /dev/null
Arrays and associative arrays
There are many ways to define an array; the most common is to list its values on a single line:
[root@host3 ~]# array_var=(1 2 3 4 5 6 6 6)
[root@host3 ~]# echo ${array_var[*]}     # print all values in the array, method 1
1 2 3 4 5 6 6 6
[root@host3 ~]# echo ${array_var[@]}     # print all values in the array, method 2
1 2 3 4 5 6 6 6
[root@host3 ~]# echo ${#array_var[*]}    # print the array length
8
Associative arrays are similar to dictionaries: they can use custom keys, and the array's indexes (keys) can be listed.
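The text stops short of an example; a minimal sketch for bash 4 or later (the array name and keys are made up for illustration):
declare -A fruit_color                      # declare an associative array
fruit_color=([apple]=red [banana]=yellow)   # custom string keys
echo ${fruit_color[apple]}                  # red
echo ${!fruit_color[@]}                     # list the keys (order may vary): apple banana
echo ${#fruit_color[@]}                     # number of elements: 2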
Get terminal information
tput sc    # save the cursor position
tput rc    # restore the cursor position
tput ed    # clear everything from the cursor to the end of the screen
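tput can also report the terminal's geometry; two more queries that are handy in scripts (illustrative):
tput cols     # number of columns in the current terminal
tput lines    # number of lines in the current terminal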
Generate delay in script
Countdown:
#!/bin/bash
echo -n Count:
tput sc
count=11
while true; do
    if [ $count -gt 0 ]; then
        let count--
        sleep 1
        tput rc
        tput ed
        echo -n $count
    else
        exit 0
    fi
done
# in this example, the variable count starts at 11 and decreases by 1 on each
# pass through the loop. tput sc saves the cursor position. On every iteration
# the new count value is printed in the terminal by first restoring the saved
# cursor position with tput rc. tput ed then clears everything from the cursor
# to the end of the screen, so the old value is wiped before the new one is
# written.
Functions and parameters
Define function
function fname() { statements; }
# or:
fname() { statements; }
To call a function, you only need to use its name:
fname    # execute the function
Parameters can be passed to a function and accessed inside it:
fname arg1 arg2
Various methods of accessing function parameters
fname() {
    echo $1, $2    # access parameter 1 and parameter 2
    echo "$@"      # print all parameters at once, as a list
    echo "$*"      # similar to $@, but the parameters are treated as a single entity
    return 0       # return value
}
# $# holds the number of arguments passed to the script or function
# $@ is used more often than $*, because $* treats all the parameters as a
# single string, so it is rarely used
Function recursion
Functions in bash also support recursion (a function can call itself), for example:
F() { echo $1; F hello; sleep 1; }
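Note that F above has no stopping condition. A recursive function normally needs a base case; a small sketch (the function name and starting value are illustrative):
countdown() {
    if [ $1 -le 0 ]; then        # base case: stop recursing
        echo done
        return
    fi
    echo $1
    countdown $(( $1 - 1 ))      # recurse with a smaller argument
}
countdown 3    # prints 3 2 1 done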
Fork bomb
:(){ :|: & };:
# this recursive function keeps calling itself, each time spawning new
# processes in the background with &, resulting in a denial-of-service attack.
# Because this dangerous snippet forks processes endlessly, it is known as a
# fork bomb.
[root@host3 ~]# :(){ :|: & };:
[1] 2526
[root@host3 ~]#    # shortly afterwards the system becomes unresponsive
This is not easy to parse, so let's change the layout:
:() {
    : | : &
};
:
To make it a little easier to understand, it would be like this:
bomb() { bomb | bomb & }; bomb
Because the function keyword can be omitted in the shell, those 13 characters define a function and then call it. The function's name is :. The core is :|:&, a recursive call of the function to itself, piping one invocation into another and using & to run each new call as a background process, so the number of processes grows geometrically. The final : after the semicolon invokes the function and detonates the bomb. Within a few seconds the system crashes because it cannot handle so many processes, and the only remedy is a reboot.
Prevention mode
Of course, the fork bomb is not that scary; you can write one in any language in a minute. For example, the Python version:
import os
while True:
    os.fork()
The essence of a fork bomb is to exhaust system resources by creating processes. On Linux, we can use the ulimit command to restrict this kind of user behavior. Run ulimit -a to see which limits can be set:
[root@host3 ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7675
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 655350
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 100
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
As you can see, the -u parameter limits the number of processes a user can create, so ulimit -u 100 allows the user to create at most 100 processes. This prevents the fork bomb. But it is not a complete fix: the setting is lost once the terminal is closed. For deeper protection, edit /etc/security/limits.conf and add the following lines to the file:
*    soft    nproc    100
*    hard    nproc    100
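With the limit in place, the bomb runs out of processes instead of taking the machine down. Roughly what to expect (the exact error text varies by shell and distribution):
$ ulimit -u 100       # cap this session at 100 user processes
$ :(){ :|: & };:
bash: fork: retry: Resource temporarily unavailable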
Read command return value (status)
$? gives the return value of the previously executed command.
The return value is called the exit status. It can be used to determine whether the command executed successfully: on success the exit status is 0, otherwise it is non-zero.
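A quick illustration (the paths are arbitrary):
$ ls /etc/passwd > /dev/null; echo $?       # success
0
$ ls /no/such/path 2> /dev/null; echo $?    # failure: non-zero
2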
Read the output of the command sequence into a variable
Use sub-shell to generate a separate process
A subshell is a separate process, and you can use the () operator to define one:
pwd; (cd /bin; ls); pwd
# when commands run in a subshell, they have no effect on the current shell;
# all changes are confined to the subshell. For example, when cd changes the
# subshell's current directory, the change is not reflected in the main shell
# environment.
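The section title promises reading a command sequence's output into a variable; a minimal sketch using command substitution around a subshell (the variable name is illustrative):
out=$(pwd; (cd /bin; ls | head -n 1); pwd)   # capture the whole sequence's output
echo "$out"                                  # quoting preserves the embedded newlines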
read
read is used to read text from the keyboard or from standard input, taking input from the user interactively.
The input libraries of most languages read input from the keyboard, but they mark the end of input only when the Enter key is pressed.
read provides a way to accomplish this task without pressing Enter.
# read n characters into a variable
read -n number_of_chars name
read -n 3 var
echo $var
# show a prompt before reading
read -p "Enter input:" var
# use a specific delimiter to mark the end of the input line
read -d ":" var
echo $var    # the input line ends at the first colon
Run the command until it is executed successfully
Define the function as follows:
repeat() { while true; do $@ && return; done }
# the repeat function contains an infinite loop that executes the command
# passed in as arguments (accessed via $@); if the command succeeds, the
# function returns and the loop exits
A faster approach:
On most modern systems, true is implemented as a binary file. This means that for each iteration of the while loop, the shell has to spawn a process. To avoid that, you can use the shell built-in command ":", which always returns exit code 0:
repeat() { while :; do $@ && return; done }
# less readable, but certainly faster than the previous version
Adding a delay
For example, suppose you need to download a file from the Internet that is temporarily unavailable but will be ready after a while. The method is as follows:
repeat wget -c http://abc.test.com/software.tar.gz
# in this form the loop retries continuously, sending a lot of requests that
# may burden the server. We can modify the function to add a short delay:
repeat() { while :; do $@ && return; sleep 30; done }
# now the command is retried every 30 seconds
Field delimiters and iterators
The internal field separator (IFS) is an important concept in shell scripting. It is the environment variable that stores the delimiter, the string that the current shell environment uses by default to split fields.
The default value of IFS is whitespace (newline, tab, or space); by default the shell splits on spaces.
[root@host3 ~]# data="abc eee ddd fff"
[root@host3 ~]# for item in $data; do echo ITEM: $item; done
ITEM: abc
ITEM: eee
ITEM: ddd
ITEM: fff
# execute:
list1="1 2 3 3 4 4"
for line in $list1
do
    echo $line
done
# output:
1
2
3
3
4
4
# execute:
for line in "1 2 3 3 4 4"    # if the list is enclosed in quotes, it is treated as a single string
do
    echo $line
done
# output:
1 2 3 3 4 4
Next we can change IFS to a comma:
# IFS has not been modified yet, so the data is printed as a single string
[root@host3 ~]# data1="eee,eee,111,222"
[root@host3 ~]# for item in $data1; do echo ITEM: $item; done
ITEM: eee,eee,111,222
[root@host3 ~]# oldIFS=$IFS    # back up the current IFS as oldIFS so it can be restored later
[root@host3 ~]# IFS=,          # change IFS to a comma; printing again shows the comma has become the delimiter
[root@host3 ~]# for item in $data1; do echo ITEM: $item; done
ITEM: eee
ITEM: eee
ITEM: 111
ITEM: 222
[root@host3 ~]# IFS=$oldIFS    # restore IFS to the original value
[root@host3 ~]# for item in $data1; do echo ITEM: $item; done
ITEM: eee,eee,111,222
Therefore, after modifying IFS, remember to restore it to its original value.
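An alternative that avoids the backup-and-restore dance is to confine the change to a subshell; a small sketch (reusing data1 from above):
( IFS=,; for item in $data1; do echo ITEM: $item; done )   # the IFS change dies with the subshell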
for loops
for var in list; do commands; done
# list can be a string or a sequence:
# {1..50} generates the numbers from 1 to 50
# {a..z}, {A..Z}, or {a..h} generate ranges of letters
for can also adopt the C-style loop:
for ((i=0; i<10; i++))
do
    commands;    # use $i
done

find -exec
# append the contents of all .conf files to all.txt; a single > is enough
# (instead of >>) because the whole find -exec run produces only one output
# stream; appending is only needed when multiple data streams write to one file
find /etc -name "*.conf" -exec cat {} \; > all.txt
# the following command copies .txt files older than 10 days to the OLD directory:
find /etc -type f -mtime +10 -name "*.txt" -exec cp {} OLD \;
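A tiny worked example combining the two loop forms (the arithmetic itself is just an illustration):
total=0; for i in {1..50}; do total=$((total+i)); done; echo $total    # 1275
total=0; for ((i=1; i<=50; i++)); do ((total+=i)); done; echo $total   # 1275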
Let find skip some directories
Sometimes, to improve performance, you need to make find skip certain directories, such as the .git directories in a git repository.
find /etc \( -name ".git" -prune \) -o \( -type f -print \)
# \( -name ".git" -prune \) does the excluding, while \( -type f -print \)
# specifies the action to perform

Playing with xargs
The xargs command reformats the data it receives from stdin and supplies it as arguments to another command.
As an alternative, xargs is similar to -exec in the find command.
Convert multi-line input to single-line output: by removing the newline characters and replacing them with spaces, xargs turns multiple lines into a single line:
[root@host3 ~]# cat 123.txt
1 2 3 4 5
6 7 8 9
10 11 12 13 14
[root@host3 ~]# cat 123.txt | xargs
1 2 3 4 5 6 7 8 9 10 11 12 13 14
Convert single-line input to multi-line output by specifying the maximum number of arguments per line with -n. Any text from stdin is split into arguments separated by spaces, the default delimiter:
[root@host3 ~]# cat 123.txt | xargs -n 3
1 2 3
4 5 6
7 8 9
10 11 12
13 14
[root@host3 ~]# echo 1 3 4 5 6 7 8 | xargs -n 3
1 3 4
5 6 7
8
Use the -d option to split the input with a custom delimiter:
[root@host3 ~]# echo "abcTdslfjTdshfsT1111Tfd222" | xargs -d T
abc dslfj dshfs 1111 fd222
# the letter T acts as the delimiter
# we can combine a custom delimiter with a per-line argument count:
[root@host3 ~]# echo "abcTdslfjTdshfsT1111Tfd222" | xargs -d T -n 2
abc dslfj
dshfs 1111
fd222
# output one argument per line:
[root@host3 ~]# echo "abcTdslfjTdshfsT1111Tfd222" | xargs -d T -n 1
abc
dslfj
dshfs
1111
fd222

Sub-shell
cmd0 | (cmd1; cmd2; cmd3) | cmd4
The part in the middle is a subshell; a cd inside it takes effect only within that subshell.
The difference between -print and -print0: -print appends a newline after each output item, while -print0 separates items with the NUL character ('\0') instead, which is why the names below appear run together.
[root@AaronWong shell_test]# find /home/AaronWong/ABC/ -type f -print
/home/AaronWong/ABC/libcvaux.so
/home/AaronWong/ABC/libgomp.so.1
/home/AaronWong/ABC/libcvaux.so.4
/home/AaronWong/ABC/libcv.so
/home/AaronWong/ABC/libhighgui.so.4
/home/AaronWong/ABC/libcxcore.so
/home/AaronWong/ABC/libhighgui.so
/home/AaronWong/ABC/libcxcore.so.4
/home/AaronWong/ABC/libcv.so.4
/home/AaronWong/ABC/libgomp.so
/home/AaronWong/ABC/libz.so
/home/AaronWong/ABC/libz.so.1
[root@AaronWong shell_test]# find /home/AaronWong/ABC/ -type f -print0
/home/AaronWong/ABC/libcvaux.so/home/AaronWong/ABC/libgomp.so.1/home/AaronWong/ABC/libcvaux.so.4/home/AaronWong/ABC/libcv.so/home/AaronWong/ABC/libhighgui.so.4/home/AaronWong/ABC/libcxcore.so/home/AaronWong/ABC/libhighgui.so/home/AaronWong/ABC/libcxcore.so.4/home/AaronWong/ABC/libcv.so.4/home/AaronWong/ABC/libgomp.so/home/AaronWong/ABC/libz.so/home/AaronWong/ABC/libz.so.1
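-print0 is normally paired with xargs -0, so that file names containing spaces or newlines survive the trip intact; a minimal sketch reusing the path above:
find /home/AaronWong/ABC/ -type f -print0 | xargs -0 md5sum   # NUL-separated names are reassembled safely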
tr

tr can only receive input through stdin (standard input), not through command-line arguments. The format of its invocation is:
tr [options] set1 set2
Convert tabs to spaces: tr '\t' ' ' < file.txt
[root@host3 ~]# cat -T 123.txt
1 2 3 4 5
6 7 8 9^I10 11 12 13 14
[root@host3 ~]# tr '\t' ' ' < 123.txt
1 2 3 4 5
6 7 8 9 10 11 12 13 14

Deleting characters with tr
tr has a -d option that removes from stdin every character in the specified set:
cat file.txt | tr -d '[set1]'    # only set1 is used; there is no set2
# delete digits
[root@host3 ~]# echo "Hello 123 world 456" | tr -d '0-9'
Hello  world
# delete letters
[root@host3 ~]# echo "Hello 123 world 456" | tr -d 'A-Za-z'
 123  456
# delete H
[root@host3 ~]# echo "Hello 123 world 456" | tr -d 'H'
ello 123 world 456

Sorting, unique and duplicates
sort helps us sort text files and stdin, and it usually works together with other commands to produce the required output. uniq is a command frequently used with sort; its job is to extract the unique lines from text or stdin.
# we can easily sort a group of files (e.g. file1.txt and file2.txt) like this:
[root@host3 ~]# sort /etc/passwd /etc/group
adm:x:3:4:adm:/var/adm:/sbin/nologin
adm:x:4:
apache:x:48:
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
audio:x:63:
bin:x:1:
bin:x:1:1:bin:/bin:/sbin/nologin
caddy:x:996:
caddy:x:997:996:Caddy web server:/var/lib/caddy:/sbin/nologin
...
# the sorted merge can also be redirected to a new file
sort /etc/passwd /etc/group > abc.txt
sort -n                    # sort numerically
sort -r                    # sort in reverse order
sort -M month.txt          # sort by month
sort -m sorted1 sorted2    # merge two files that are already sorted
# find the non-duplicated lines of the sorted files
sort file1.txt file2.txt | uniq
Check that the files have been sorted:
To check whether a file is sorted, use the following approach: if it is sorted, sort -C returns an exit status ($?) of 0, otherwise it returns non-zero.
#!/bin/bash
sort -C filename
if [ $? -eq 0 ]; then
    echo Sorted
else
    echo Unsorted
fi
The sort command has many options. If you use uniq, sort is all the more essential, because uniq requires its input data to be sorted.
sort can also accomplish more complex tasks:
# -k specifies the column to sort by, -r reverses the order, -n sorts numerically
sort -nrk 1 data.txt
sort -k 2 data.txt
uniq
uniq can only be used with sorted data as input.
[root@host3 ~]# cat data.txt
1010hellothis
3333
2189ababbba
333
7464dfddfdfd
333
# remove duplicates
[root@host3 ~]# sort data.txt | uniq
1010hellothis
2189ababbba
333
3333
7464dfddfdfd
# remove duplicates and count the occurrences of each line
[root@host3 ~]# sort data.txt | uniq -c
      1 1010hellothis
      1 2189ababbba
      2 333
      1 3333
      1 7464dfddfdfd
# show only the lines that are never repeated in the text
[root@host3 ~]# sort data.txt | uniq -u
1010hellothis
2189ababbba
3333
7464dfddfdfd
# show only the duplicated lines in the text
[root@host3 ~]# sort data.txt | uniq -d
333

Temporary file naming and random numbers
When writing shell scripts, we often need to store temporary data. The best place for it is /tmp (the contents of this directory are emptied when the system reboots). There are two ways to generate standard file names for temporary data:
[root@host3 ~]# file1=`mktemp`
[root@host3 ~]# echo $file1
/tmp/tmp.P9var0Jjdw
[root@host3 ~]# cd /tmp/
[root@host3 tmp]# ls
add_user_ldapsync.ldif     create_module_config.ldif.bak   globalconfig.ldif       overlay.ldif
create_module_config.ldif  databaseconfig_nosyncrepl.ldif  initial_structure.ldif  tmp.P9var0Jjdw
# a temporary file was created and its name printed
[root@host3 tmp]# dir1=`mktemp -d`
[root@host3 tmp]# echo $dir1
/tmp/tmp.UqEfHa389N
# a temporary directory was created and its name printed
[root@host3 tmp]# mktemp test1.XXX
test1.mBX
[root@host3 tmp]# mktemp test1.XXX
test1.wj1
# created from a template name: the XXX (uppercase X) is replaced with random
# characters, letters or digits. Note that mktemp only works properly if the
# template contains at least 3 X's.

Splitting files and data
Suppose a test file data.txt is 100KB in size; it can be split into multiple 10KB files:
[root@host3 src]# ls
nginx-1.14.2  nginx-1.14.2.tar.gz
[root@host3 src]# du -sh nginx-1.14.2.tar.gz
992K    nginx-1.14.2.tar.gz
[root@host3 src]# split -b 100k nginx-1.14.2.tar.gz
[root@host3 src]# ls
nginx-1.14.2  nginx-1.14.2.tar.gz  xaa  xab  xac  xad  xae  xaf  xag  xah  xai  xaj
[root@host3 src]# du -sh *
32M     nginx-1.14.2
992K    nginx-1.14.2.tar.gz
100K    xaa
100K    xab
100K    xac
100K    xad
100K    xae
100K    xaf
100K    xag
100K    xah
100K    xai
92K     xaj
# as above, the 992K nginx tar archive is split into 100K chunks; only the
# last one, at 92K, is smaller than 100K
As can be seen above, the default suffixes are alphabetic. To use numbers as suffixes instead, use the -d parameter; -a length specifies the suffix length:
[root@host3 src]# ls
nginx-1.14.2  nginx-1.14.2.tar.gz
[root@host3 src]# split -b 100k nginx-1.14.2.tar.gz -d -a 5
[root@host3 src]# ls
nginx-1.14.2  nginx-1.14.2.tar.gz  x00000  x00001  x00002  x00003  x00004  x00005  x00006  x00007  x00008  x00009
[root@host3 src]# du -sh *
32M     nginx-1.14.2
992K    nginx-1.14.2.tar.gz
100K    x00000
100K    x00001
100K    x00002
100K    x00003
100K    x00004
100K    x00005
100K    x00006
100K    x00007
100K    x00008
92K     x00009
Specify file name prefix
The files produced by split so far all have the name prefix x. We can supply our own prefix: the last argument of the split command is PREFIX.
[root@host3 src]# ls
nginx-1.14.2  nginx-1.14.2.tar.gz
[root@host3 src]# split -b 100k nginx-1.14.2.tar.gz -d -a 4 nginxfuck
[root@host3 src]# ls
nginx-1.14.2         nginxfuck0000  nginxfuck0002  nginxfuck0004  nginxfuck0006  nginxfuck0008
nginx-1.14.2.tar.gz  nginxfuck0001  nginxfuck0003  nginxfuck0005  nginxfuck0007  nginxfuck0009
# as above, the last parameter specifies the prefix
If you don't want to split by size, you can split by line count with -l:
[root@host3 test]# ls
data.txt
[root@host3 test]# wc -l data.txt
7474 data.txt
[root@host3 test]# split -l 1000 data.txt -d -a 4 conf
[root@host3 test]# ls
conf0000  conf0001  conf0002  conf0003  conf0004  conf0005  conf0006  conf0007  data.txt
[root@host3 test]# du -sh *
40K     conf0000
48K     conf0001
48K     conf0002
36K     conf0003
36K     conf0004
36K     conf0005
36K     conf0006
20K     conf0007
288K    data.txt
# a 7474-line file is split into chunks of 1000 lines each, with names that
# start with conf followed by a 4-digit suffix

csplit
csplit can split files according to specified conditions and string-matching options; it is a variant of the split tool.
split can only divide data by size or line count, while csplit divides by the characteristics of the file itself: the presence of a certain word or piece of text can serve as the condition for splitting the file.
[root@host3 test]# ls
data.txt
[root@host3 test]# cat data.txt
SERVER-1
[conection] 192.168.0.1 success
[conection] 192.168.0.2 failed
[conection] 192.168.0.3 success
[conection] 192.168.0.4 success
SERVER-2
[conection] 192.168.0.5 success
[conection] 192.168.0.5 failed
[conection] 192.168.0.5 success
[conection] 192.168.0.5 success
SERVER-3
[conection] 192.168.0.6 success
[conection] 192.168.0.7 failed
[conection] 192.168.0.8 success
[conection] 192.168.0.9 success
[root@host3 test]# csplit data.txt /SERVER/ -n 2 -s {*} -f server -b "%02d.log"
[root@host3 test]# rm server00.log
rm: remove regular empty file 'server00.log'? y
[root@host3 test]# ls
data.txt  server01.log  server02.log  server03.log
Details:
/SERVER/ matches the line where the split begins; /[REGEX]/ is the general form of this text pattern. It copies from the current line (the first line) up to, but not including, the matching line that contains "SERVER".
{*} repeats the split based on the matching line until the end of the file; a fixed number of splits can be specified as {integer}.
-s puts the command in silent mode, so it prints no messages.
-n specifies the number of digits used in the split file names.
-f specifies the file name prefix (server in the example above).
-b specifies the suffix format, e.g. "%02d.log", similar to printf in C.
Because the first file produced by the split has no content (the matched word is on the first line of the file), we delete server00.log.
Split the file name based on the extension
Some scripts perform various kinds of processing based on file names. We may need to change the name while keeping the extension, convert the file format (keep the name, change the extension), or extract part of the name. The shell's built-in features can split file names according to each of these cases.
With the help of the % symbol, you can easily extract the name part from the name.extension format:
[root@host3 ~]# file_jpg="test.jpg"
[root@host3 ~]# name=${file_jpg%.*}
[root@host3 ~]# echo $name
test
# the file name has been extracted
With the help of the # symbol, you can extract the extension part of the file name:
[root@host3 ~]# file_jpg="test.jpg"
[root@host3 ~]# exten=${file_jpg#*.}
[root@host3 ~]# echo $exten
jpg
# the extension has been extracted. Note that the pattern to extract the name
# was .* , while the pattern to extract the extension is *.
Explanation of the syntax above
Meaning of ${VAR%.*}:
Delete from $VAR the string matched by the wildcard to the right of %, with the wildcard matching from right to left, and assign the result to VAR. With VAR=test.jpg, the wildcard matches .jpg from right to left; deleting the match from $VAR gives test.
% is a non-greedy operation: it finds the shortest match of the wildcard from right to left. There is another operator, %%, which is similar to % but greedy, meaning it matches the longest string that satisfies the pattern. Take VAR=hack.fun.book.txt:
# using the % operator:
[root@host3 ~]# VAR=hack.fun.book.txt
[root@host3 ~]# echo ${VAR%.*}
hack.fun.book
# using the %% operator:
[root@host3 ~]# echo ${VAR%%.*}
hack
Similarly, the # operator also has a greedy counterpart, ##:
# using the # operator:
[root@host3 ~]# echo ${VAR#*.}
fun.book.txt
# using the ## operator:
[root@host3 ~]# echo ${VAR##*.}
txt

Batch rename and move
Combining find, rename, and mv, we can do a lot of things.
The easiest way to rename the image files in the current directory to a specific format is with the following script:
#!/bin/bash
count=1
for img in `find . -maxdepth 1 -type f \( -iname '*.png' -o -iname '*.jpg' \)`
do
    new=image-$count.${img##*.}
    echo "Rename $img to $new"
    mv $img $new
    let count++
done
Execute the above script
[root@host3 ~]# ls
aaaaaa.jpg  aaa.sh  abc.txt  all.txt  anaconda-ks.cfg  bbbb.jpg  bbb.sh  cccc.png  conf  dddd.png  rename.sh
[root@host3 ~]# sh rename.sh
Rename ./aaaaaa.jpg to image-1.jpg
Rename ./bbbb.jpg to image-2.jpg
Rename ./cccc.png to image-3.png
Rename ./dddd.png to image-4.png
[root@host3 ~]# ls
aaa.sh  abc.txt  all.txt  anaconda-ks.cfg  bbb.sh  conf  image-1.jpg  image-2.jpg  image-3.png  image-4.png  rename.sh

Interactive input automation
First write a script that reads interactive input
#!/bin/bash
# file name: test.sh
read -p "Enter number:" no
read -p "Enter name:" name
echo $no,$name
Automatically send input to the script as follows:
[root@host3 ~]# ./test.sh
Enter number:2
Enter name:rong
2,rong
[root@host3 ~]# echo -e "2\nrong\n" | ./test.sh
2,rong
# \n stands for the Enter key. We use echo -e to generate the input sequence;
# -e tells echo to interpret escape sequences. If there is a lot of input, a
# separate input file combined with the redirection operator works too:
[root@host3 ~]# echo -e "2\nrong\n" > input.data
[root@host3 ~]# cat input.data
2
rong
[root@host3 ~]# ./test.sh < input.data
2,rong
# this method imports the interactive input data from a file
If you are a reverse engineer, you may have dealt with buffer overflows. To exploit one, we need to redirect shellcode in hexadecimal form (for example, "\xeb\x1a\x5e\x31\xc0\x88\x46"). These characters cannot be typed directly, because the keyboard has no corresponding keys. So we should use:
echo -e "\xeb\x1a\x5e\x31\xc0\x88\x46"
Use this command to redirect the shellcode to the vulnerable executable. To handle dynamic input, supplying input as the running program asks for it, we use an excellent tool: expect.
The expect command can supply the appropriate input according to the program's input prompts.
Realizing Automation with expect
Most default Linux distributions do not include expect; you have to install it yourself: yum -y install expect
#!/usr/bin/expect
# file name: expect.sh
spawn ./test.sh
expect "Enter number:"
send "2\n"
expect "Enter name:"
send "rong\n"
expect eof
# execute:
[root@host3 ~]# ./expect.sh
spawn ./test.sh
Enter number:2
Enter name:rong
2,rong
# spawn specifies the command or script to be executed
# expect provides the message to wait for
# send is the message to send in response
# expect eof marks the end of the command interaction

Accelerating command execution with parallel processes
Take the md5sum command as an example: because of the computation involved, it is CPU-intensive. If checksums must be generated for multiple files, we can run them with the script below.
#!/bin/bash
PIDARRAY=()
for file in `find /etc/ -name "*.conf"`
do
    md5sum $file &
    PIDARRAY+=("$!")
done
wait ${PIDARRAY[@]}
# execute:
[root@host3 ~]# sh expect.sh
72688131394bcce818f818e2bae98846  /etc/modprobe.d/tuned.conf
77304062b81bc20cffce814ff6bf8ed5  /etc/dracut.conf
d0f5f705846350b43033834f51c9135c  /etc/prelink.conf.d/nss-softokn-prelink.conf
0335aabf8106f29f6857d74c98697542  /etc/prelink.conf.d/fipscheck.conf
0b501d6d547fa5bb989b9cb877fee8cb  /etc/modprobe.d/dccp-blacklist.conf
d779db0cc6135e09b4d146ca69d39c2b  /etc/rsyslog.d/listen.conf
4eaff8c463f8c4b6d68d7a7237ba862c  /etc/resolv.conf
321ec6fd36bce09ed68b854270b9136c  /etc/prelink.conf.d/grub2.conf
3a6a059e04b951923f6d83b7ed327e0e  /etc/depmod.d/dist.conf
7cb6c9cab8ec511882e0e05fceb87e45  /etc/systemd/bootchart.conf
2ad769b57d77224f7a460141e3f94258  /etc/systemd/coredump.conf
f55c94d000b5d62b5f06d38852977dd1  /etc/dbus-1/system.d/org.freedesktop.hostname1.conf
7e2c094c5009f9ec2748dce92f2209bd  /etc/dbus-1/system.d/org.freedesktop.import1.conf
5893ab03e7e96aa3759baceb4dd04190  /etc/dbus-1/system.d/org.freedesktop.locale1.conf
f0c4b315298d5d687e04183ca2e36079  /etc/dbus-1/system.d/org.freedesktop.login1.conf
# because multiple md5sum commands run at the same time, on a multi-core
# processor the results come back faster
How it works:
bash's & operator makes the shell put the command in the background and continue with the script. This means that once the loop ends, the script would exit while the md5sum commands are still running in the background. To avoid this, we capture each process's PID with $!, which in bash holds the PID of the most recent background process; we collect the PIDs in an array and then use the wait command to wait for those processes to finish.
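The $!/wait pattern in isolation (a toy sketch; sleep stands in for real work):
sleep 2 &          # put a job in the background
pid=$!             # $! holds the PID of the most recent background process
echo "waiting for $pid"
wait $pid          # block until that process finishes
echo "done"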
Intersection and difference of text files
The comm command can be used to compare two files
Intersection: print the lines the two files have in common.
Difference: print all the lines that the given files do not share.
Single-file difference: print the lines that appear in one file (say file a) but not in the other.
It is important to note that comm must be given sorted files as input.
[root@host3 ~]# cat a.txt
apple
orange
gold
silver
steel
iron
[root@host3 ~]# cat b.txt
orange
gold
cookies
carrot
[root@host3 ~]# sort a.txt -o A.txt
[root@host3 ~]# sort b.txt -o B.txt
[root@host3 ~]# comm A.txt B.txt
apple
	carrot
	cookies
		gold
iron
		orange
silver
steel
# the result has three columns: the first holds lines that exist only in
# A.txt, the second lines that exist only in B.txt, and the third lines
# present in both files, with the tab (\t) as the column delimiter.
# to get the intersection, delete the first and second columns and show only
# the third:
[root@host3 ~]# comm A.txt B.txt -1 -2
gold
orange
# print only the lines that differ:
[root@host3 ~]# comm A.txt B.txt -3
apple
	carrot
	cookies
iron
silver
steel
# for more readable results, strip the leading \t tabs:
[root@host3 ~]# comm A.txt B.txt -3 | sed 's/^\t//'
apple
carrot
cookies
iron
silver
steel

Create an unmodifiable file
Make a file unmodifiable: chattr +i file
[root@host3 ~]# chattr +i passwd
[root@host3 ~]# rm -rf passwd
rm: cannot remove 'passwd': Operation not permitted
[root@host3 ~]# chattr -i passwd
[root@host3 ~]# rm -rf passwd

grep
grep can search multiple files:
[root@host3 ~]# grep root /etc/passwd /etc/group
/etc/passwd:root:x:0:0:root:/root:/bin/bash
/etc/passwd:operator:x:11:0:operator:/root:/sbin/nologin
/etc/passwd:dockerroot:x:996:994:Docker User:/var/lib/docker:/sbin/nologin
/etc/group:root:x:0:
/etc/group:dockerroot:x:994:
The grep command interprets only certain special characters in match_text. To use regular expressions, you need to add the -E option, which enables extended regular expressions. Alternatively, use the egrep command, which enables them by default (no -E required).
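A quick comparison (the pattern is illustrative):
echo "abc123" | grep -E '[0-9]+'    # extended regex: + means "one or more"
echo "abc123" | egrep '[0-9]+'      # same effect; egrep is equivalent to grep -E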
# count the number of lines containing the matching string
[root@host3 ~]# grep -c root /etc/passwd
3
# print line numbers
[root@host3 ~]# grep -n root /etc/passwd
1:root:x:0:0:root:/root:/bin/bash
10:operator:x:11:0:operator:/root:/sbin/nologin
27:dockerroot:x:996:994:Docker User:/var/lib/docker:/sbin/nologin
# search multiple files and find out which files contain the match with -l
[root@host3 ~]# grep root /etc/passwd /etc/group -l
/etc/passwd
/etc/group
# -L does the opposite: it lists the file names that do not match
# ignore case: -i
# match multiple patterns: -e
grep -e "pattern1" -e "pattern2"    # matches lines containing pattern 1 or pattern 2
[root@host3 ~]# grep -e root -e docker /etc/passwd /etc/group
/etc/passwd:root:x:0:0:root:/root:/bin/bash
/etc/passwd:operator:x:11:0:operator:/root:/sbin/nologin
/etc/passwd:dockerroot:x:996:994:Docker User:/var/lib/docker:/sbin/nologin
/etc/group:root:x:0:
/etc/group:dockerroot:x:994:
/etc/group:docker:x:992:
# another way to specify multiple patterns is to read them from a file with -f;
# be careful not to leave a blank line at the end of the pattern file:
[root@host3 ~]# cat pat.file
root
docker
[root@host3 ~]# grep -f pat.file /etc/passwd /etc/group
/etc/passwd:root:x:0:0:root:/root:/bin/bash
/etc/passwd:operator:x:11:0:operator:/root:/sbin/nologin
/etc/passwd:dockerroot:x:996:994:Docker User:/var/lib/docker:/sbin/nologin
/etc/group:root:x:0:
/etc/group:dockerroot:x:994:
/etc/group:docker:x:992:
Specify or exclude certain files in grep search
grep can include or exclude certain files in a search. We specify the files to include or exclude with wildcards:
# recursively search all .c and .cpp files in the directory
grep root . -r --include *.{c,cpp}
[root@host3 ~]# grep root /etc/ -r -l --include *.conf    # -l lists only the file names
/etc/systemd/logind.conf
/etc/dbus-1/system.d/org.freedesktop.hostname1.conf
/etc/dbus-1/system.d/org.freedesktop.import1.conf
/etc/dbus-1/system.d/org.freedesktop.locale1.conf
/etc/dbus-1/system.d/org.freedesktop.login1.conf
/etc/dbus-1/system.d/org.freedesktop.machine1.conf
/etc/dbus-1/system.d/org.freedesktop.systemd1.conf
/etc/dbus-1/system.d/org.freedesktop.timedate1.conf
/etc/dbus-1/system.d/wpa_supplicant.conf
# exclude all README files from the search
grep root . -r --exclude "README"
# to exclude directories, use --exclude-dir; to read a list of files to
# exclude from a file, use --exclude-from FILE

cut (omitted)

sed
# remove blank lines
sed '/^$/d' file    # /pattern/d removes the lines matching the pattern
# replace directly in the file: replace all three-digit numbers in the file
# with a marker word
[root@host3 ~]# cat sed.data
11 abc 111 this 9 file contains 111 11 888 numbers 0000
[root@host3 ~]# sed -i 's/\b[0-9]\{3\}\b/NUMBER/g' sed.data
[root@host3 ~]# cat sed.data
11 abc NUMBER this 9 file contains NUMBER 11 NUMBER numbers 0000
# the command above replaces all three-digit numbers. The regular expression
# \b[0-9]\{3\}\b matches exactly 3 digits: [0-9] gives the digit range, \{3\}
# repeats the preceding match 3 times, and \b marks a word boundary (\ is the
# escape character).
sed -i.bak 's/abc/def/' file
# in this case sed not only performs the replacement in the file but also
# creates file.bak, which holds a copy of the original contents
The matched-string marker &
In sed, the & marker stands for the string matched by the pattern, so the matched content can be reused in the replacement:
[root@host3 ~]# echo this is my sister | sed 's/\w\+/<&>/g'
<this> <is> <my> <sister>    # wrap every word in angle brackets
[root@host3 ~]# echo this is my sister | sed 's/\w\+/[&]/g'
[this] [is] [my] [sister]    # wrap every word in square brackets
# the regular expression \w\+ matches each word, and we replace it with [&];
# & corresponds to the previously matched content
Quote
sed expressions are usually quoted with single quotes. But double quotes can also be used, and they come in handy when we want to use a variable in the sed expression:
[root@host3 ~]# text=hello
[root@host3 ~]# echo hello world | sed "s/$text/HELLO/"
HELLO world

awk
Special variables:
NR: the number of records; during execution it corresponds to the current line number.
NF: the number of fields; during execution it corresponds to the number of fields on the current line.
$0: the text content of the current line during execution.
Principles of use:
Make sure the entire awk command is enclosed in single quotes; make sure all quotation marks inside the command appear in pairs; make sure action statements are enclosed in curly braces and conditional statements in parentheses.
awk -F: '{print NR}' /etc/passwd    # print the line number of each line
awk -F: '{print NF}' /etc/passwd    # print the number of fields of each line
[root@host3 ~]# cat passwd
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
elk:x:1000:1000::/home/elk:/bin/bash
ntp:x:38:38::/etc/ntp:/sbin/nologin
saslauth:x:998:76:Saslauthd user:/run/saslauthd:/sbin/nologin
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
nscd:x:28:28:NSCD Daemon:/:/sbin/nologin
[root@host3 ~]# awk -F: '{print NR}' passwd
1
2
3
4
5
6
7
8
[root@host3 ~]# awk -F: '{print NF}' passwd
7
7
7
7
7
7
7
7
[root@host3 ~]# awk -F: 'END{print NF}' passwd
7
[root@host3 ~]# awk -F: 'END{print NR}' passwd
8
# using only the END statement: each time a line is read, awk updates NR to
# the current line number; on the last line NR holds that line's number, which
# is the number of lines in the file
An awk script is usually structured as:
awk 'BEGIN{ print "start" } pattern { commands } END{ print "end" }'
The awk program can be quoted with either single or double quotes:
awk 'BEGIN{ statements } { statements } END{ statements }'
An awk script usually consists of three parts: BEGIN, END, and a common statement block with pattern matching. All three parts are optional.
[root@host3 ~]# awk 'BEGIN{i=0} {i++} END{print i}' passwd
8
awk string concatenation:
[root@mgmt-k8smaster01 deployment]# docker images | grep veh
192.168.1.74:5000/veh/zuul              0.0.1-SNAPSHOT.34   41e9c323b825   26 hours ago   172MB
192.168.1.74:5000/veh/vehicleanalysis   0.0.1-SNAPSHOT.38   bca9981ac781   26 hours ago   210MB
192.168.1.74:5000/veh/masterveh         0.0.1-SNAPSHOT.88   265e448020f3   26 hours ago   209MB
192.168.1.74:5000/veh/obugateway        0.0.1-SNAPSHOT.18   a4b3309beccd   8 days ago     182MB
192.168.1.74:5000/veh/frontend          1.0.33              357b20afec08   11 days ago    131MB
192.168.1.74:5000/veh/rtkconsumer       0.0.1-SNAPSHOT.12   4c2e63b5b2f6   2 weeks ago    200MB
192.168.1.74:5000/veh/user              0.0.1-SNAPSHOT.14   015fc6516533   2 weeks ago    186MB
192.168.1.74:5000/veh/rtkgw             0.0.1-SNAPSHOT.12   a17a3eed4d28   2 months ago   173MB
192.168.1.74:5000/veh/websocket         0.0.1-SNAPSHOT.7    a1af778846e6   2 months ago   179MB
192.168.1.74:5000/veh/vehconsumer       0.0.1-SNAPSHOT.20   4a763860a5c5   2 months ago   200MB
192.168.1.74:5000/veh/dfconsumer        0.0.1-SNAPSHOT.41   2e3471d6ca27   2 months ago   200MB
192.168.1.74:5000/veh/auth              0.0.1-SNAPSHOT.4    be5c86dd285b   3 months ago   185MB
[root@mgmt-k8smaster01 deployment]# docker images | grep veh | awk '{a=$1; b=$2; c=(a":"b); print c}'
192.168.1.74:5000/veh/zuul:0.0.1-SNAPSHOT.34
192.168.1.74:5000/veh/vehicleanalysis:0.0.1-SNAPSHOT.38
192.168.1.74:5000/veh/masterveh:0.0.1-SNAPSHOT.88
192.168.1.74:5000/veh/obugateway:0.0.1-SNAPSHOT.18
192.168.1.74:5000/veh/frontend:1.0.33
192.168.1.74:5000/veh/rtkconsumer:0.0.1-SNAPSHOT.12
192.168.1.74:5000/veh/user:0.0.1-SNAPSHOT.14
192.168.1.74:5000/veh/rtkgw:0.0.1-SNAPSHOT.12
192.168.1.74:5000/veh/websocket:0.0.1-SNAPSHOT.7
192.168.1.74:5000/veh/vehconsumer:0.0.1-SNAPSHOT.20
192.168.1.74:5000/veh/dfconsumer:0.0.1-SNAPSHOT.41
192.168.1.74:5000/veh/auth:0.0.1-SNAPSHOT.4
Awk works as follows:
1. Execute the BEGIN { commands } statement block.
2. Execute the middle pattern { commands } block once for each line read, repeating until the whole input has been consumed.
3. When the end of the input stream is reached, execute the END { commands } statement block.
We can add up the values of the first field in each row, that is, compute the sum of a column:
[root@host3 ~]# cat sum.data
1 2 3 4 5 6
2 2 2 2 2
3 3 3 3 3
5 5 5 6 6 6
[root@host3 ~]# cat sum.data | awk 'BEGIN{sum=0} {print $1; sum+=$1} END{print sum}'
1
2
3
5
11
[root@host3 ~]# awk '{if($2==3) print $0}' sum.data
3 3 3 3 3
[root@host3 ~]# awk '{if($2==5) print $0}' sum.data
5 5 5 6 6 6
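A small sketch of one way to add 1 to every value (the input file nums.txt and its contents are hypothetical):
[root@host2 ~]# cat nums.txt
1 3 45
5 5 5
[root@host2 ~]# awk '{for(i=1; i<=NF; i++) $i+=1; print}' nums.txt
2 4 46
6 6 6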