This article is a tutorial on writing robust Bash scripts. Shell scripts break easily when something unexpected happens at runtime; the techniques below help make bash scripts more robust.
Use set -u
How many times has one of your scripts crashed because a variable was not initialized? For me, many times.
chroot=$1
...
rm -rf $chroot/usr/share/doc
If this code is run without an argument, it will not just delete the documentation inside the chroot, it will delete all the documentation on the system. What can be done about it? Fortunately, bash provides set -u, which makes bash exit automatically whenever an uninitialized variable is used.
You can also use the more readable set -o nounset.
The code is as follows:
david% bash /tmp/shrink-chroot.sh
$chroot=
david% bash -u /tmp/shrink-chroot.sh
/tmp/shrink-chroot.sh: line 3: $1: unbound variable
david%
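As a minimal sketch of what the fixed script might look like (the file name and chroot layout are just the ones from the transcript above, and the script body is hypothetical), the change is simply to enable nounset at the top:
#!/bin/bash
# shrink-chroot.sh (hypothetical sketch): abort before rm if $1 was not supplied
set -u    # or: set -o nounset
chroot=$1                        # with set -u, a missing argument stops the script here
rm -rf "$chroot/usr/share/doc"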
Use set -e
Every script you write should start with set -e. This tells bash to exit the script as soon as any statement returns a non-true value. The benefit of -e is that it stops errors snowballing into serious problems and catches them as early as possible. A more readable version: set -o errexit.
Using -e frees you from checking for errors yourself; if you forget to check, bash does it for you. The downside is that you cannot use $? to inspect a command's exit status, because bash never reaches the check when the return value is non-zero. Instead of:
command
if [ "$?" -ne 0 ]; then echo "command failed"; exit 1; fi
Can be replaced by:
command || { echo "command failed"; exit 1; }
Or use:
if ! command; then echo "command failed"; exit 1; fi
What if you have to run a command that returns a non-zero value, or you are simply not interested in its return value? You can use command || true, or temporarily turn off error checking around a block of code, although I recommend using this sparingly:
set +e
command1
command2
set -e
According to the documentation, bash by default returns the exit status of the last command in a pipeline, which may not be what you want. For example, false | true is considered successful. If you want such a pipeline to be treated as a failure, use set -o pipefail.
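A minimal sketch showing the difference, assuming set -e is already in effect:
set -e
false | true                 # the pipeline's status is that of "true", so the script keeps going
echo "still running"
set -o pipefail              # now any failing command makes the whole pipeline fail
false | true                 # returns non-zero, and set -e stops the script here
echo "never reached"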
Defensive programming: expect the unexpected
Your script may run under "unexpected" circumstances, for example files are missing or directories have not been created. You can do something to guard against these errors. For example, mkdir returns an error if the parent directory of the directory you are creating does not exist; adding the -p option makes mkdir create any missing parent directories first. Another example is rm: deleting a file that does not exist makes it "complain", and the script stops (because of the -e option). The -f option solves this and lets the script carry on when the file does not exist.
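A short sketch of both options (the paths are only examples):
mkdir -p /var/lib/myapp/cache/tmp            # creates missing parent directories, no error if it already exists
rm -f /var/lib/myapp/cache/tmp/stale.lock    # no complaint if the file does not exist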
Be ready to deal with spaces in file names
Some people put spaces in file names or command-line arguments, and you need to keep this in mind when writing scripts. Always remember to surround variables with quotation marks.
if [ $filename = "foo" ]
This dies when $filename contains a space. It can be fixed like this:
If ["$filename" = "foo"]
When using the $@ variable, you also need quotation marks, because otherwise two parameters separated by a space are interpreted as two separate words.
The code is as follows:
david% foo() { for i in $@; do echo $i; done; }; foo bar "baz quux"
bar
baz
quux
david% foo() { for i in "$@"; do echo $i; done; }; foo bar "baz quux"
bar
baz quux
I cannot think of a case where you could not use "$@", so when in doubt, using the quotation marks is never wrong.
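For instance, a minimal sketch of a wrapper script (the name and log path are hypothetical) that passes its arguments through untouched:
#!/bin/bash
# run-logged.sh (hypothetical): record the invocation, then run it
echo "running: $*" >> /tmp/run.log
exec "$@"        # "$@" forwards every argument exactly as received, spaces included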
If you use find together with xargs, you should use -print0 so that filenames are separated by a null character instead of a newline, and give xargs the matching -0 option.
The code is as follows:
david% touch "foo bar"
david% find | xargs ls
ls: ./foo: No such file or directory
ls: bar: No such file or directory
david% find -print0 | xargs -0 ls
./foo bar
Set traps
When a script dies, it can leave the file system in an unknown state: a lock file left behind, temporary files lying around, or one file updated while the next was never touched. It would be nice to handle these problems, whether that means removing the lock file or rolling back to a known state when the script runs into trouble. Fortunately, bash provides a way to run a command or function when it receives a UNIX signal, using the trap command.
trap command signal [signal ...]
Multiple signals can be listed (the full list is available with kill -l), but to clean up after ourselves we only need three of them: INT, TERM and EXIT. A trap can be restored to its default by using - as the command.
Signal  Description
INT     Interrupt - triggered when someone terminates the script with Ctrl-C
TERM    Terminate - triggered when someone kills the script process with kill
EXIT    Exit - a pseudo-signal, triggered when the script exits, either normally or because set -e forced an exit on an error
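As a minimal sketch (the temporary file and the work done with it are hypothetical), here is a trap that cleans up no matter how the script ends:
#!/bin/bash
set -e
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' INT TERM EXIT    # remove the temp file on Ctrl-C, kill, or normal exit
echo "working data" > "$tmpfile"
# ... do the real work with $tmpfile ...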
When using a lock file, you can write:
The code is as follows:
if [ ! -e $lockfile ]; then
    touch $lockfile
    critical-section
    rm $lockfile
else
    echo "critical-section is already running"
fi
What happens if you kill the script process when the most important part (critical-section) is running?
The lock file will be left there, and the script will never run again until it is deleted.
Solution:
The code is as follows:
if [ ! -e $lockfile ]; then
    trap "rm -f $lockfile; exit" INT TERM EXIT
    touch $lockfile
    critical-section
    rm $lockfile
    trap - INT TERM EXIT
else
    echo "critical-section is already running"
fi
Now when the process is killed, the lock file is removed as well. Note that the trap explicitly exits the script; without the exit, the script would carry on with the commands after the point where the signal arrived.
Race conditions
It is worth pointing out that the lock file example above has a race condition, between testing for the lock file and creating it. One possible solution is to use IO redirection and bash's noclobber mode, which only redirects to a file that does not already exist.
You can do this:
The code is as follows:
if ( set -o noclobber; echo "$$" > "$lockfile" ) 2> /dev/null
then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
    critical-section
    rm -f "$lockfile"
    trap - INT TERM EXIT
else
    echo "Failed to acquire lockfile: $lockfile"
    echo "held by $(cat $lockfile)"
fi
A harder problem is updating a lot of files and having the script fail gracefully when something goes wrong partway through: you want to be sure which files were updated correctly and which were never touched. For example, suppose you need a script to add users.
The code is as follows:
add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R
This script gets into trouble if the disk fills up or the process is killed halfway through. In that case you probably want the user's account not to exist at all and any of its files to be removed.
The code is as follows:
rollback() {
    del_from_passwd $user
    if [ -e /home/$user ]; then
        rm -rf /home/$user
    fi
    exit
}

trap rollback INT TERM EXIT
add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R
trap - INT TERM EXIT
At the end of the script, the trap has to be cleared with trap -, otherwise rollback would also run when the script exits normally, leaving the script having done nothing.
Keep it atomic
Again, suppose you need to update a lot of files in a directory at once, for example rewriting URLs from one domain to another across a website.
Maybe write:
The code is as follows:
for file in $(find /var/www -type f -name "*.html"); do
    perl -pi -e 's/www.example.net/www.example.com/' $file
done
If the script hits a problem halfway through the changes, part of the site will use www.example.com and the rest will still use www.example.net. A backup plus trap solution could be used, but the site's URLs would still be inconsistent while the upgrade is in progress.
Solution:
Turn the change into an atomic operation: first make a copy of the data, update the URLs in the copy, then replace the working version with the copy.
You need to make sure the copy and the working directory are on the same disk partition, so that you can take advantage of the fact that on Linux moving a directory only updates the inode that the directory entry points to.
The code is as follows:
cp -a /var/www /var/www-tmp
for file in $(find /var/www-tmp -type f -name "*.html"); do
    perl -pi -e 's/www.example.net/www.example.com/' $file
done
mv /var/www /var/www-old
mv /var/www-tmp /var/www
This means that if anything goes wrong during the update, the live system is not affected. The window in which the live system is affected shrinks to the two mv operations, which is very short because the file system only updates inodes rather than copying all the data.
Disadvantages:
Twice the disk space is needed, and processes that keep files open for a long time will keep using the old version until they are restarted, so it is recommended to restart such processes once the update is complete. This is not a problem for apache, because it reopens its files on every request. You can use the lsof command to see which files are currently open. An advantage is that you now have a backup from before the change, which comes in handy if you need to roll back.
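For example, a minimal sketch of checking the old tree from the example above (lsof's +D option searches a directory recursively):
lsof +D /var/www-old        # list processes that still hold files open under the replaced tree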
Thank you for reading. That covers the techniques for writing robust Bash scripts; how they apply to your own scripts still needs to be verified in practice.