2025-04-06 Update From: SLTechnology News&Howtos
Shulou(Shulou.com) 06/03 Report--
2016.11.17
2.1.3 Variables and quoting
Variable names are unmarked in assignments but prefixed with a dollar sign when their values are referenced. For example:
$ etcdir='/etc'
$ echo $etcdir
/etc
Do not put spaces around the = symbol, or the shell will mistake your variable name for a command name.
When referencing a variable, you can surround its name with curly braces to clarify to the parser and to human readers where the variable name stops and other text begins; for example, ${etcdir} instead of just $etcdir. The braces are not normally required, but they can be useful when you want to expand variables inside double-quoted strings. Often, you'll want the contents of a variable to be followed by literal letters or punctuation. For example:
$ echo "Saved ${rev}th version of mdadm.conf."
Saved 8th version of mdadm.conf.
There's no standard convention for the naming of shell variables, but all-caps names typically suggest environment variables or variables read from global configuration files. More often than not, local variables are all-lowercase with components separated by underscores. Variable names are case sensitive.
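These conventions can be sketched as follows (log_dir is a hypothetical variable name chosen for illustration):

```shell
# Lowercase, underscore-separated names for local variables;
# all-caps typically indicates an environment variable.
log_dir='/var/log'
echo "$log_dir"
# Names are case sensitive: LOG_DIR is a different (here, unset) variable.
echo "LOG_DIR is '${LOG_DIR}'"
```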
Environment variables are automatically imported into bash's variable namespace, so they can be set and read with the standard syntax. Use export varname to promote a shell variable to an environment variable. Commands for environment variables that you want to set up at login time should be included in your ~/.profile or ~/.bash_profile file. Other environment variables, such as PWD for the current working directory, are maintained automatically by the shell.
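A minimal sketch of the promotion step (the name backup_dir is a hypothetical example):

```shell
# A plain assignment creates a shell-local variable.
backup_dir='/var/backups'
# export promotes it to an environment variable, so child
# processes (here, a new sh) inherit it.
export backup_dir
sh -c 'echo "child sees: $backup_dir"'
```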
The shell treats strings enclosed in single and double quotes similarly, except that double-quoted strings are subject to globbing (the expansion of filename-matching metacharacters such as * and ?) and variable expansion. For example:
$ mylang="Pennsylvania Dutch"
$ echo "I speak ${mylang}."
I speak Pennsylvania Dutch.
$ echo 'I speak ${mylang}.'
I speak ${mylang}.
Back quotes, also known as back-ticks, are treated similarly to double quotes, but they have the additional effect of executing the contents of the string as a shell command and replacing the string with the command's output. For example:
$ echo "There are `wc -l < /etc/passwd` lines in the passwd file."
There are 28 lines in the passwd file.
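Most modern shells also support the $( ) form of command substitution, which is equivalent to backquotes but nests cleanly; a brief sketch:

```shell
# $( ) substitutes the command's output, just like backquotes.
# Redirecting with < keeps the filename out of wc's output.
count=$(wc -l < /etc/passwd)
echo "There are $count lines in the passwd file."
```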
2.1.4 Common filter commands
Any well-behaved command that reads STDIN and writes STDOUT can be used as a filter (that is, a component of a pipeline) to process data. In this section we briefly review some of the more widely used filter commands (including some used in passing above), but the list is practically endless. Filter commands are so team oriented that it's sometimes hard to show their use in isolation.
Most filter commands accept one or more filenames on the command line. Only if you fail to specify a file do they read their standard input.
cut: separate lines into fields
The cut command prints selected portions of its input lines. It's most commonly used to extract delimited fields, as in the example on page 32, but it can return segments defined by column boundaries as well. The default delimiter is <Tab>, but you can change delimiters with the -d option. The -f option specifies which fields to include in the output.
For an example of the use of cut, see the section on uniq, below.
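As a quick standalone sketch (the field number assumes the standard colon-separated /etc/passwd layout):

```shell
# Print the login name (field 1) of the first three passwd entries.
cut -d: -f1 /etc/passwd | head -3
```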
sort: sort lines
sort sorts its input lines. Simple, right? Well, maybe not: there are a few potential subtleties regarding the exact parts of each line that are sorted (the "keys") and the collation order to be imposed. Table 2.1 shows a few of the more common options, but check the man page for others.
2017.11.18
Table 2.1 Sort options
Opt  Meaning
-b   Ignore leading whitespace
-f   Case-insensitive sorting
-k   Specify the columns that form the sort key
-n   Compare fields as integer numbers
-r   Reverse sort order
-t   Set field separator (the default is whitespace)
-u   Output unique records only
The commands below illustrate the difference between numeric and dictionary sorting, which is the default. Both commands use the -t: and -k3,3 options to sort the /etc/group file by its third colon-separated field, the group ID. The first sorts numerically and the second alphabetically.
$ sort -t: -k3,3 -n /etc/group
root:x:0:
bin:x:1:daemon
daemon:x:2:
...
$ sort -t: -k3,3 /etc/group
root:x:0:
bin:x:1:daemon
users:x:100:
...
uniq: print unique lines
uniq is similar in spirit to sort -u, but it has some useful options that sort does not emulate: -c to count the number of instances of each line, -d to show only duplicated lines, and -u to show only nonduplicated lines. The input must be presorted, usually by being run through sort.
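A self-contained sketch of the -d option (the sample lines are arbitrary):

```shell
# uniq requires sorted input; -d prints each duplicated line once.
printf 'b\na\nb\na\nc\n' | sort | uniq -d
```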
For example, the command below shows that 20 users have /bin/bash as their login shell and that 12 have /bin/false. (The latter are either pseudo-users or users whose accounts have been disabled.)
$ cut -d: -f7 /etc/passwd | sort | uniq -c
20 /bin/bash
12 /bin/false
wc: count lines, words, and characters
Counting the number of lines, words, and characters in a file is another common operation, and the wc (word count) command is a convenient way of doing this. Run without options, it displays all three counts:
$ wc /etc/passwd
32 77 2003 /etc/passwd
In the context of scripting, it is more common to supply a -l, -w, or -c option to make wc's output consist of a single number. This form is most commonly seen inside backquotes so that the result can be saved or acted upon.
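A brief sketch of the backquote idiom (the variable name nlines is a hypothetical choice):

```shell
# Capture a single count for later use in a script.
nlines=`wc -l < /etc/passwd`
echo "passwd has $nlines entries."
```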
tee: copy input to two places
A command pipeline is typically linear, but it's often helpful to tap into the data stream and send a copy to a file or to the terminal window. You can do this with the tee command, which sends its standard input both to standard out and to a file that you specify on the command line. Think of it as a tee fixture in plumbing.
The device /dev/tty is a synonym for the current terminal. For example,
$ find / -name core | tee /dev/tty | wc -l
prints both the pathnames of files named core and a count of the number of core files that were found.
A common idiom is to terminate a pipeline that will take a long time to run with a tee command so that output goes both to a file and to the terminal window for inspection. You can preview the initial results to make sure everything is working as you expected, then leave while the command runs, knowing that the results will be saved.
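A runnable sketch of the idiom (corefiles.list is a hypothetical output file, and /tmp stands in for a long directory walk):

```shell
# Save the pipeline's results to a file while also counting them.
# 2>/dev/null discards permission-denied noise from find.
find /tmp -name core 2>/dev/null | tee corefiles.list | wc -l
```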
head and tail: read the beginning or end of a file
Reviewing lines from the beginning or end of a file is a common administrative operation. These commands display ten lines by default, but you can include a command-line option to specify how many lines you want to see.
For interactive use, head is more or less obsoleted by the less command, which paginates files for display. But head still finds plenty of use within scripts.
tail also has a nifty -f option that's particularly useful for sysadmins. Instead of exiting immediately after printing the requested number of lines, tail -f waits for new lines to be added to the end of the file and prints them as they appear, which is great for monitoring log files. Be aware, however, that the program writing the file may be buffering its output. Even if lines are being added at regular intervals from a logical perspective, they may only become visible in chunks of 1KiB or 4KiB.
Type <Control-C> to stop monitoring.
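A short sketch of the line-count options (tail -f is shown commented out because it runs until interrupted, and the log path is an assumption that varies by system):

```shell
# Show the first and last three lines of the passwd file.
head -3 /etc/passwd
tail -3 /etc/passwd
# To watch a log grow in real time (type <Control-C> to stop):
#   tail -f /var/log/syslog
```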