Tips for Writing Robust Bash Shell Scripts

2025-02-27 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

This article introduces techniques for writing robust Bash shell scripts. Every pitfall below comes up in real-world scripts, so each technique is illustrated with a concrete example. I hope you read it carefully and get something out of it!

Use set -u

How many times have you crashed a script because you forgot to initialize a variable? For me, many times.

The code is as follows:

chroot=$1
...
rm -rf $chroot/usr/share/doc

If you run the above code without giving a parameter, you will not just delete the documentation inside the chroot, but all the documentation on the system. So what should you do? Fortunately, bash provides set -u, which makes bash exit automatically when you use an uninitialized variable. You can also use the more readable equivalent, set -o nounset.

The code is as follows:

david% bash /tmp/shrink-chroot.sh
$chroot=
david% bash -u /tmp/shrink-chroot.sh
/tmp/shrink-chroot.sh: line 3: $1: unbound variable
david%
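As a complement to set -u, bash's ${1:?message} expansion aborts with a message when a parameter is unset or empty. Below is a minimal sketch of the shrink-chroot idea; the path and function name are illustrative, and echo stands in for the real rm -rf:

```shell
#!/usr/bin/env bash
set -u   # exit when an unset variable is used

# ${1:?message} aborts with the message if $1 is unset or empty,
# so the cleanup below can never run against an empty $chroot.
shrink_chroot() {
    local chroot=${1:?usage: shrink_chroot CHROOT_DIR}
    echo "would clean: $chroot/usr/share/doc"
}

shrink_chroot /srv/jail                                # prints the doc path
( shrink_chroot ) 2>/dev/null || echo "refused: no chroot given"
```

The failing call runs in a subshell because an expansion error is fatal to a non-interactive shell; the parent script survives and reports the refusal.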

Use set -e

Every script you write should begin with set -e. This tells bash to exit as soon as any statement returns a non-true (non-zero) status. The advantage of -e is that it prevents small errors from snowballing into serious ones, and it catches failures as early as possible. A more readable equivalent: set -o errexit.

Using -e frees you from checking every command for errors: if you forget to check, bash does it for you. But you can no longer use $? to inspect a command's exit status, because bash exits before any non-zero value reaches you. If you need to handle a failure yourself, use a different construct:

The code is as follows:

command
if [ "$?" -ne 0 ]; then echo "command failed"; exit 1; fi

Can be replaced by:

The code is as follows:

command || { echo "command failed"; exit 1; }

Or use:

The code is as follows:

if ! command; then echo "command failed"; exit 1; fi

What if you must call a command that returns a non-zero status, or you simply are not interested in its return value? You can use command || true, or, for a longer stretch of code, temporarily turn off error checking (though I suggest you use this with caution):

The code is as follows:

set +e
command1
command2
set -e

Note that by default bash uses the exit status of the last command in a pipeline, which may not be what you want: false | true is considered successful. If you want such a pipeline to be treated as a failure, use set -o pipefail.
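Putting these options together gives a common "strict mode" preamble. The sketch below also demonstrates how pipefail changes the verdict on false | true:

```shell
#!/usr/bin/env bash
set -eu            # errexit + nounset
set -o pipefail    # a pipeline fails if ANY stage fails

# Without pipefail only the last command's status counts, so
# false | true is treated as success.
if ( set +o pipefail; false | true ); then
    echo "default: false | true reported success"
fi

# With pipefail, the failing first stage fails the whole pipeline.
if ! false | true; then
    echo "pipefail: false | true reported failure"
fi
```

Both messages print: the subshell temporarily disables pipefail to show the default behavior without disturbing the rest of the script.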

Program defensively: expect the unexpected

Your script may be run in an "unexpected" environment: files may be missing, directories may not have been created. You can guard against these errors. For example, mkdir returns an error if the parent directory does not exist; add the -p option and it creates any missing parent directories before creating the one you asked for. Another example is rm: deleting a file that does not exist makes it "complain", and your script stops working (because you used the -e option, right?). The -f option solves this, letting the script continue when the file is absent.
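A minimal sketch of both defensive idioms, using a throwaway directory from mktemp:

```shell
#!/usr/bin/env bash
set -eu

workdir=$(mktemp -d)   # scratch area for the demo

# mkdir -p creates any missing parent directories instead of failing.
mkdir -p "$workdir/a/b/c"

# rm -f succeeds even when the file does not exist, so under set -e
# the script keeps going instead of dying here.
rm -f "$workdir/no-such-file"

echo "still running"
rm -rf "$workdir"
```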

Be ready to deal with spaces in file names

Some people use spaces in file names or command-line arguments, and you need to keep this in mind when writing scripts: always remember to surround variables with quotation marks.

The code is as follows:

if [ $filename = "foo" ]

This dies when the $filename variable contains spaces. The fix:

The code is as follows:

if [ "$filename" = "foo" ]

When using the $@ variable, you also need quotation marks, because otherwise an argument containing spaces is split into separate words.

The code is as follows:

david% foo() { for i in $@; do echo $i; done }; foo bar "baz quux"
bar
baz
quux
david% foo() { for i in "$@"; do echo $i; done }; foo bar "baz quux"
bar
baz quux

I can't think of a case where you can't use "$@", so when in doubt, using the quotation marks is never wrong. If you use find together with xargs, use -print0 so that file names are separated by a null character instead of newlines.

The code is as follows:

david% touch "foo bar"
david% find | xargs ls
ls: ./foo: No such file or directory
ls: bar: No such file or directory
david% find -print0 | xargs -0 ls
./foo bar
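The same null-separator idea works without xargs: bash's read -r -d '' consumes one NUL-terminated name at a time. A sketch using a throwaway directory:

```shell
#!/usr/bin/env bash
set -eu

dir=$(mktemp -d)
touch "$dir/foo bar" "$dir/baz"

# -print0 emits NUL-terminated names; IFS= read -r -d '' consumes
# them one by one, so spaces (and even newlines) in names are safe.
find "$dir" -type f -print0 |
while IFS= read -r -d '' file; do
    echo "found: ${file##*/}"
done | sort

rm -rf "$dir"
```

Here the basenames print as "found: baz" and "found: foo bar"; the trailing sort is only there because find's output order is unspecified.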

Set traps

When your script dies, it may leave the filesystem in an unknown state: a stale lock file, leftover temporary files, or one file updated while the next was not. It is much better if you can fix these problems, whether that means deleting the lock file or rolling back to a known state when the script hits trouble. Fortunately, bash lets you run a command or function when it receives a UNIX signal, using the trap command.

The code is as follows:

trap command signal [signal ...]

You can list several signals (kill -l prints them all), but for cleaning up after ourselves we only need three of them: INT, TERM and EXIT. Traps can be restored to their original state with trap -.

Signal descriptions:

INT: Interrupt - triggered when someone terminates the script with Ctrl-C

TERM: Terminate - triggered when someone kills the script process with kill

EXIT: Exit - a pseudo-signal triggered when the script exits, either normally or because set -e aborted it after an error
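Besides lock files, an everyday use of these traps is cleaning up temporary files. A minimal sketch (mktemp generates the file name):

```shell
#!/usr/bin/env bash
set -eu

tmpfile=$(mktemp)

# Clean up on normal exit, on a set -e failure, and when the script
# is interrupted or killed. The exit in the handler matters for INT
# and TERM: without it, the script would carry on after cleanup.
trap 'rm -f "$tmpfile"; exit' INT TERM EXIT

echo "working data" > "$tmpfile"
# ... real work would happen here ...
echo "done"
```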

When you use a lock file, you can write:

The code is as follows:

if [ ! -e $lockfile ]; then
    touch $lockfile
    critical-section
    rm $lockfile
else
    echo "critical-section is already running"
fi

What happens if the script process is killed while the most important part (critical-section) is running? The lock file is left behind, and your script will refuse to run again until it is deleted. The solution:

The code is as follows:

if [ ! -e $lockfile ]; then
    trap "rm -f $lockfile; exit" INT TERM EXIT
    touch $lockfile
    critical-section
    rm $lockfile
    trap - INT TERM EXIT
else
    echo "critical-section is already running"
fi

Now the lock file is deleted when you kill the process, too. Note that the trap handler explicitly exits the script; otherwise, after the handler runs, the script would continue executing from where the signal interrupted it.

Race conditions

It is worth pointing out that the lock file example above contains a race condition: the window between testing for the lock file and creating it. One possible solution is to use IO redirection together with bash's noclobber mode, which refuses to redirect onto an existing file. We can write something like this:

The code is as follows:

if ( set -o noclobber; echo "$$" > "$lockfile" ) 2> /dev/null
then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
    critical-section
    rm -f "$lockfile"
    trap - INT TERM EXIT
else
    echo "Failed to acquire lockfile: $lockfile"
    echo "held by $(cat $lockfile)"
fi

A harder problem: you have to update many files, and when something goes wrong you want the script to fail as gracefully as possible. You want to be certain which files were updated correctly and which were never touched. For example, consider a script that adds a user:

The code is as follows:

add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R

This script gets into trouble when the disk fills up or the process is killed partway through. In that case, you probably want the user account not to exist at all, and its files to be deleted.

The code is as follows:

rollback() {
    del_from_passwd $user
    if [ -e /home/$user ]; then
        rm -rf /home/$user
    fi
    exit
}

trap rollback INT TERM EXIT
add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R
trap - INT TERM EXIT

At the end of the script you need trap - to disable the rollback handler; otherwise rollback would run when the script exits normally, and the script would appear to have done nothing.

Be atomic

Sometimes you need to update many files in a directory in one go, for example rewriting URLs to point at another domain. You might write:

The code is as follows:

for file in $(find /var/www -type f -name "*.html"); do
    perl -pi -e 's/www.example.net/www.example.com/' "$file"
done

If the script hits a problem halfway through, part of the site uses www.example.com and the rest still uses www.example.net. You could combine this with the backup-and-trap approach, but the site's URLs would still be inconsistent while the upgrade runs.

The solution is to make the change an atomic operation: first take a copy of the data, update the URLs in the copy, then swap the copy into place. You must make sure the copy and the working version live on the same disk partition, so that on Linux moving the directory merely updates the inode the directory entry points to, instead of copying data.

The code is as follows:

cp -a /var/www /var/www-tmp
for file in $(find /var/www-tmp -type f -name "*.html"); do
    perl -pi -e 's/www.example.net/www.example.com/' "$file"
done
mv /var/www /var/www-old
mv /var/www-tmp /var/www

This means that if anything goes wrong during the update, the live system is unaffected. The window in which the live system is touched shrinks to the two mv operations, which are almost instantaneous because the filesystem only updates inodes rather than copying all the data.
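The same rename trick works for a single file: write the new contents to a temporary file on the same filesystem, then mv it over the original in one step. A sketch (the target here is just a mktemp stand-in for a real config file):

```shell
#!/usr/bin/env bash
set -eu

target=$(mktemp)                 # stands in for the real file
echo "old contents" > "$target"

# Write the replacement next to the target (same filesystem), then
# rename. Readers see either the old version or the new one, never
# a half-written mixture.
tmp=$(mktemp "$target.XXXXXX")
echo "new contents" > "$tmp"
mv "$tmp" "$target"

cat "$target"                    # prints: new contents
rm -f "$target"
```

Creating the temporary file with a template derived from the target keeps both on the same partition, which is what makes the mv a rename rather than a copy.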

The disadvantage of this technique is that it needs twice the disk space, and processes that hold the old files open keep the old version until they are restarted, so it is advisable to restart such processes once the update is complete. This is not a problem for Apache, because it reopens the files on every request. You can inspect currently open files with the lsof command. A further advantage is that you now have a backup of the previous version, which comes in handy if you need to roll back.

That is the end of this tutorial on writing robust Bash shell scripts. Thank you for reading!
