This article focuses on "what are the command line skills that improve development efficiency?" Interested readers may want to take a look: the methods introduced here are simple, fast, and practical.
Mac environment
Zsh
Oh-my-zsh
Plugin
Git
Autojump
Osx (man-preview / quick-look / pfd (print Finder directory) / cdf (cd Finder))
Common shortcut keys (bindkey)
Demo: syntax highlighting / git / smart completion / directory jumping (j, d).
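For reference, a minimal sketch of the relevant part of ~/.zshrc, assuming oh-my-zsh is installed under ~/.oh-my-zsh (the theme name and plugin list are just examples):
# ~/.zshrc (sketch)
export ZSH="$HOME/.oh-my-zsh"
ZSH_THEME="robbyrussell"      # any installed theme works
plugins=(git autojump osx)    # the plugins mentioned above
source $ZSH/oh-my-zsh.sh
bindkey -L | head             # list the current key bindings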
Shell basic command
which / whereis, and the commonly used whatis, man, --help
➜ .oh-my-zsh git:(master) $ whereis ls
/bin/ls
➜ .oh-my-zsh git:(master) $ which ls
ls: aliased to ls -G
Basic file directory operation
rm, mkdir, mv, cp, cd, ls, ln, file, stat, wc (-l/w/c), head, more, tail, cat...
And the sharpest tool of all, the pipe: |
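As a quick illustrative sketch of chaining commands with the pipe (the file name app.log here is hypothetical):
# count the .scala files under the current directory
find . -name "*.scala" | wc -l
# show the 5 longest lines of a (hypothetical) app.log
cat app.log | awk '{print length, $0}' | sort -nr | head -n 5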
Shell text processing
Below is a walkthrough of the common usage and parameters of 12 commands. You can use the table of contents on the right (my blog has one; the WeChat public account version does not) to jump straight to the command you want to look up.
find, grep, xargs, cut, paste, comm, join, sort, uniq, tr, sed, awk
Find
Common parameters
File name: -name; file type: -type; maximum search depth: -maxdepth
Time filters (create/access/modify): -[cam]time
Execute an action on each match: -exec
Example
find ./ -name "*.json"
find . -maxdepth 7 -name "*.json" -type f
find . -name "*.log.gz" -ctime +7 -size +1M -delete  # (atime/ctime/mtime)
find . -name "*.scala" -atime -7 -exec du -h {} \;
Grep
Common parameters
-v (invert-match)
-c (count)
-n (line-number)
-i (ignore-case)
-l, -L, -R (-r, --recursive), -e
Example
grep 'partner' ./*.scala -l
grep -e 'World' -e 'first' -i -R ./  # (-e: or)
Related commands: grep -z / zgrep / zcat xx | grep
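For instance, a minimal sketch of searching compressed logs directly (the file name app.log.gz is hypothetical):
# search a gzip-compressed log without extracting it first
zgrep 'Exception' app.log.gz
# equivalent form with zcat plus a plain grep
zcat app.log.gz | grep 'Exception'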
Xargs
Common parameters
-n (number of arguments per command line)
-I (variable substitution)
-d (delimiter); the macOS (BSD) version does not support it, note the difference from the GNU version.
Example
Echo "helloworldhellp" | cut-C1-10 cut-d,-f2-8 csu.db.export.csv
Cut
Common parameters
-b (bytes)
-c (character)
-f (fields), -d (delimiter); f ranges: n, n-, -m, n-m
Example
Echo "helloworldhellp" | cut-C1-10cut-d,-f2-8 csu.db.export.csv
Paste
Common parameters
-d delimiter
-s serial: merge each file's lines into one row (columns to rows)
Example
➜ Documents$ cat file1
1 11
2 22
3 33
4 44
➜ Documents$ cat file2
one 1
two 2
three 3
one1 4
➜ Documents$ paste -d, file1 file2
1 11,one 1
2 22,two 2
3 33,three 3
4 44,one1 4
➜ Documents$ paste -s -d: file1 file2
1 11:2 22:3 33:4 44
one 1:two 2:three 3:one1 4
Join
Similar to ... inner join ... on ... in SQL; -t specifies the delimiter, which defaults to space or tab.
➜ Documents$ cat j1
1 11
2 22
3 33
4 44
5 55
➜ Documents$ cat j2
one 1 0
one 2 1
two 4 2
three 5 3
one1 5 4
➜ Documents$ join -1 1 -2 3 j1 j2
1 11 one 2
2 22 two 4
3 33 three 5
4 44 one1 5
Comm
Common parameters
Usage: comm [-123i] file1 file2
Input must be sorted (dictionary order); output has 3 columns: lines only in file1 / only in file2 / in both
-1/-2/-3 suppress the corresponding column; -i ignores case
Example
➜ Documents$ seq 1 5 > file11
➜ Documents$ seq 2 6 > file22
➜ Documents$ cat file11
1
2
3
4
5
➜ Documents$ cat file22
2
3
4
5
6
➜ Documents$ comm file11 file22
1
		2
		3
		4
		5
	6
➜ Documents$ comm -1 file11 file22
	2
	3
	4
	5
6
➜ Documents$ comm -2 file11 file22
1
	2
	3
	4
	5
➜ Documents$ comm -23 file11 file22
1
Related command: diff (similar to git diff).
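For comparison, a quick sketch of what diff reports for the same file11/file22 used above (expected output shown as comments):
diff file11 file22
# 1d0
# < 1
# 5a5
# > 6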
Sort
Common parameters
-d, --dictionary-order
-n, --numeric-sort
-r, --reverse
-b, --ignore-leading-blanks
-k, --key
Example
➜ Documents$ cat file2
one 1
two 2
three 3
one1 4
➜ Documents$ sort file2
one 1
one1 4
three 3
two 2
➜ Documents$ sort -b -k2 -r file2
one1 4
three 3
two 2
one 1
Uniq
Common parameters
-c prefix each line with its number of occurrences
-d only print duplicated lines
-u only print lines that are not repeated
-f N skip the first N fields when comparing
Example
➜ Documents$ cat file4
11
22
33
11
11
➜ Documents$ sort file4 | uniq -c
   3 11
   1 22
   1 33
➜ Documents$ sort file4 | uniq -d
11
➜ Documents$ sort file4 | uniq -u
22
33
➜ Documents$ cat file3
one 1
two 1
three 3
one1 4
➜ Documents$ uniq -c -f 1 file3
   2 one 1
   1 three 3
   1 one1 4
Note: uniq only detects duplicates on adjacent lines, so it is usually used together with sort.
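A small sketch of why the sort matters, reusing file4 from above (expected output shown as comments):
# without sort: only adjacent duplicates collapse
uniq -c file4
#    1 11
#    1 22
#    1 33
#    2 11
# with sort: all duplicates become adjacent first
sort file4 | uniq -c
#    3 11
#    1 22
#    1 33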
Tr
Common parameters
-c complement (operate on the complement of the character set)
-d delete
-s squeeze adjacent repeated characters
Example
➜ Documents$ echo '1111234444533hello' | tr '[1-3]' '[a-c]'
aaaabc44445cchello
➜ Documents$ echo '1111234444533hello' | tr -d '[1-3]'
44445hello
➜ Documents$ echo '1111234444533hello' | tr -dc '[1-3]'
11112333
➜ Documents$ echo '1111234444533hello' | tr -s '[0-9]'
123453hello
➜ Documents$ echo 'helloworld' | tr '[:lower:]' '[:upper:]'
HELLOWORLD
Sed
Common parameters
d: delete (a sed command, e.g. '2,3d')
s: substitute; add the g flag for a global (all occurrences) replacement
-e: chain multiple editing commands
-i: edit the original file in place (on macOS pass a suffix argument, e.g. -i "" for no backup or -i ".bak" to keep one)
Example
➜ Documents$ cat file2
one 1
two 2
three 3
one1 4
➜ Documents$ sed "2,3d" file2
one 1
one1 4
➜ Documents$ sed '/one/d' file2
two 2
three 3
➜ Documents$ sed 's/one/1111/g' file2
1111 1
two 2
three 3
11111 4
# replace one with 1111 and delete the lines containing two
➜ Documents$ sed -e 's/one/1111/g' -e '/two/d' file2
1111 1
three 3
11111 4
# \(\) groups (escaped), \1 refers back to the first group
➜ Documents$ sed 's/\([0-9]\)/\1.html/g' file2
one 1.html
two 2.html
three 3.html
one1.html 4.html
# same as above, & refers to the whole matched text
➜ Documents$ sed 's/[0-9]/&.html/g' file2
one 1.html
two 2.html
three 3.html
one1.html 4.html
➜ Documents$ cat mobile.csv
"13090246026"
"18020278026"
"18520261021"
"13110221022"
➜ Documents$ sed 's/\([0-9]\{3\}\)[0-9]\{4\}/\1xxxx/g' mobile.csv
"130xxxx6026"
"180xxxx8026"
"185xxxx1021"
"131xxxx1022"
Awk
Basic parameters and syntax
NR: current line number; NF: number of fields in the line
$1, $2, $3, ...: the fields (columns) of the current line
-F fs: use fs as the field separator; fs can be a string or a regex
Syntax: awk 'BEGIN {commands} pattern {commands} END {commands}', the process is as follows:
Execute the BEGIN {commands} block first
Then, for each input line, run pattern {commands}; pattern can be a regex /re/, a relational expression, and so on
After all input has been processed, execute the END {commands} block
Example
➜ Documents$ cat file5
11 11 aa cc
22 22 bb
33 33 d
11 11
11 11
# line number, number of fields, column 3
➜ Documents$ awk '{print NR"("NF"):", $3}' file5
1(4): aa
2(3): bb
3(3): d
4(2):
5(2):
# split on a string separator, print columns 1 and 2
➜ Documents$ awk -F "xx" '{print $1, $2}' file5
# add a condition expression
➜ Documents$ awk '$1 >= 22 {print NR":", $3}' file5
2: bb
3: d
# sum 1 to 36, then only the odd numbers, then only the even numbers
➜ Documents$ seq 36 | awk 'BEGIN {sum=0; print "question:"} {print $1, "+"; sum+=$1} END {print "="; print sum}' | xargs | sed 's/+ =/=/'
question: 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 + 11 + 12 + 13 + 14 + 15 + 16 + 17 + 18 + 19 + 20 + 21 + 22 + 23 + 24 + 25 + 26 + 27 + 28 + 29 + 30 + 31 + 32 + 33 + 34 + 35 + 36 = 666
➜ Documents$ seq 36 | awk 'BEGIN {sum=0; print "question:"} $1 % 2 == 1 {print $1, "+"; sum+=$1} END {print "="; print sum}' | xargs | sed 's/+ =/=/'
question: 1 + 3 + 5 + 7 + 9 + 11 + 13 + 15 + 17 + 19 + 21 + 23 + 25 + 27 + 29 + 31 + 33 + 35 = 324
➜ Documents$ seq 36 | awk 'BEGIN {sum=0; print "question:"} $1 % 2 != 1 {print $1, "+"; sum+=$1} END {print "="; print sum}' | xargs | sed 's/+ =/=/'
question: 2 + 4 + 6 + 8 + 10 + 12 + 14 + 16 + 18 + 20 + 22 + 24 + 26 + 28 + 30 + 32 + 34 + 36 = 342
awk also supports more advanced syntax: for and while loops, a variety of built-in functions, and so on. awk is a powerful language in its own right, but mastering the basic usage above already goes a long way. A small sketch follows.
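As an illustrative sketch of that more advanced syntax (not from the original examples), a user-defined function and a for loop in a one-off awk program:
# sum of squares of 1..5 using a function and a for loop; prints "sum of squares: 55"
awk 'function sq(x) { return x * x } BEGIN { total = 0; for (i = 1; i <= 5; i++) total += sq(i); print "sum of squares:", total }'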
Practical application
Log statistical analysis
For example, given an nginx log file you can do a lot: find the slowest requests and optimize them, count the PV per hour, and so on.
➜ Documents$ head -n5 std.nginx.log
106.38.187.225 - - [20/Feb/2017:03:31:01 +0800] www.tanglei.name "GET /baike/208344.html HTTP/1.0" 301 486 "-" "Mozilla/5.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322) 360JK yunjiankong 975382" "106.38.187.225" 106.38.187.225 - 0.000
106.38.187.225 - - [20/Feb/2017:03:31:02 +0800] www.tanglei.name "GET /baike/208344.html HTTP/1.0" 301 486 "-" "Mozilla/5.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322) 360JK yunjiankong 975382" "106.38.187.225" 106.38.187.225 - 0.000
10.130.64.143 - - [20/Feb/2017:03:31:02 +0800] stdbaike.bdp.cc "POST /baike/wp-cron.php?doing_wp_cron=1487532662.2058920860290527343750 HTTP/1.1" 200 182 "-" "WordPress/4.5.6; http://www.tanglei.name/baike" "10.130.64.143" 10.130.64.143 0.205 0.205
10.130.64.143 - - [20/Feb/2017:03:31:02 +0800] www.tanglei.name "GET /external/api/login-status HTTP/1.0" 0.003 0.004
10.130.64.143 - - [20/Feb/2017:03:31:02 +0800] www.tanglei.name "GET /content_util/authorcontents?count=5&offset=0&israndom=1&author=9 HTTP/1.0" 200 11972 "-" "10.130.64.143" 0.013 0.013
Taking the nginx log above as an example, say you want the top 10 request paths that returned 404:
head -n 10000 std.nginx.log | awk '{print $8","$10}' | grep ',404' | sort | uniq -c | sort -nr -k1 | head -n 10
# or
head -n 10000 std.nginx.log | awk '$10 == 404 {print $8}' | sort | uniq -c | sort -nr -k1 | head -n 10
Of course, you may not get it right in one shot. It usually pays to run the pipeline on a small sample first to check that the logic is correct, and you can also cache intermediate results.
cat std.nginx.log | awk '{print $8","$10}' | grep ',404' > 404.log
sort 404.log | uniq -c | sort -nr -k1 | head -n 10
Another example: the number of requests per hour, request latency, and so on.
➜ Documents$ head -n 100000 std.nginx.log | awk -F: '{print $1 $2}' | cut -f3 -d/ | uniq -c
 8237 201703
15051 201704
16083 201705
18561 201706
22723 201707
19345 201708
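And a sketch of the "slowest requests" idea mentioned earlier, assuming the request time is the last field of each log line (as in the sample above) and the path is field 8:
# top 10 slowest requests: print request time (last field) and path, sort numerically, descending
head -n 100000 std.nginx.log | awk '{print $NF, $8}' | sort -nr | head -n 10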
Other real-world cases: IP blocking, and so on.
Case: db data revision
Background: because of a bug in a service, image paths inserted into the db were wrong. Paths like https://www.tanglei.name/upload/photos/129630//internal-public/shangtongdai/2017-02-19-abcdefg-eb85-4c24-883e-hijklmn.jpg needed to be replaced with http://www.tanglei.me/internal-public/shangtongdai/2017-02-19-abcdefg-eb85-4c24-883e-hijklmn.jpg (sensitive data has been substituted for security reasons). A db such as MySQL does not seem to support regex replacement directly, so writing a single SQL statement to do the replacement is not easy (and even if it were supported, changing the data in place is risky; it is better to make a backup first so you have a way back).
Exporting the data and then writing a script in something like Python is of course also a solution, but with the command-line approach above the whole thing takes only tens of seconds.
Steps:
Prepare the data: run the query below and export the result as customers.csv.
select id, photo_url_1, photo_url_2, photo_url_3 from somedb.sometable where photo_url_1 like 'https://www.tanglei.name/upload/photos/%//internal-public/%' or photo_url_2 like 'https://www.tanglei.name/upload/photos/%//internal-public/%' or photo_url_3 like 'https://www.tanglei.name/upload/photos/%//internal-public/%';
As a rule, before replacing in the original file with sed, first test that the substitution behaves as expected.
# first test whether the substitution is OK
head -n 5 customers.csv | sed 's|https://www.tanglei.name/upload/photos/[0-9]\{1,\}/|http://www.tanglei.me|g'
# then replace in the original file directly (use sed -i ".bak" instead to keep a backup of the original)
sed -i "" 's|https://www.tanglei.name/upload/photos/[0-9]\{1,\}/|http://www.tanglei.me|g' customers.csv
Assemble the SQL, then execute it.
awk -F, '{print "update sometable set photo_url_1 = "$2", photo_url_2 = "$3", photo_url_3 = "$4" where id = "$1";"}' customers.csv > customer.sql
# then execute the generated SQL
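Executing the generated file is then a one-liner; a sketch assuming a local mysql client (the user name and database name are placeholders):
# run the generated statements against the database (back it up first)
mysql -u someuser -p somedb < customer.sql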
Other
Play framework session
The old way required starting a Play console, which is slow; the new way solves it directly on the command line.
Sbt "project site" consoleQuick import play.api.libs._val sec = "secret...secret" var uid= "10086" Crypto.sign (s "uid=$uid", sec.getBytes ("UTF-8")) + s "- uid=$uid" ➜Documents$ ~ / stdcookie.sh 97522 918xxxxdf64abcfcxxxxc465xx7554dxxxx21e-uid=97522➜ Documents$ cat ~ / stdcookie.sh cookie.bins cannot remove this line uid=$1 hash= `echo-n "uid=$uid" | openssl dgst-sha1-hmac "secret...secret" `echo "$cookie"
Word-frequency statistics for an article: the following example counts the most frequently used words in the original text of Trump's inaugural speech.
➜ Documents$ head -n3 chuanpu.txt
Chief Justice Roberts, President Carter, President Clinton, President Bush, President Obama, fellow Americans and people of the world, thank you.
We, the citizens of America, are now joined in a great national effort to rebuild our country and restore its promise for all of our people.
Together we will determine the course of America and the world for many, many years to come.
➜ Documents$ cat chuanpu.txt | tr -dc 'a-zA-Z \n' | xargs -n1 | sort | uniq -c | sort -nr -k1 | head -n 20
  65 the
  63 and
  48 of
  46 our
  42 will
  37 to
  21 We
  20 is
  18 we
  17 America
  15 a
  14 all
  13 in
  13 for
  13 be
  13 are
  10 your
  10 not
  10 And
  10 American
Random strings: for example, when signing up for a new website, randomly generate a password.
➜ Documents$ cat /dev/urandom | LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 5
cpBnvC0niwTybSSJhUUiZwIz6ykJxBvu
VDP56NlHnugAt2yDySAB9HU2Nd0LlYCW
0WEDzpjPop32T5STvR6K6SfZMyT6KvAI
a9xBwBat7tJVaad279fOPdA9fEuDEqUd
hTLrOiTH5FNP2nU3uflsjPUXJmfleI5c
➜ Documents$ cat /dev/urandom | head -c32 | base64
WoCqUye9mSXI/WhHODHDjzLaSb09xrOtbrJagG7Kfqc=
Image processing: compressing and batch-resizing images with sips.
➜ linux-shell-more-effiency$ sips -g all which-whereis.png
/Users/tanglei/Documents/linux-shell-more-effiency/which-whereis.png
  pixelWidth: 280
  pixelHeight: 81
  typeIdentifier: public.png
  format: png
  formatOptions: default
  dpiWidth: 72.000
  dpiHeight: 72.000
  samplesPerPixel: 4
  bitsPerSample: 8
  hasAlpha: yes
  space: RGB
  profile: DELL U2412M
➜ linux-shell-more-effiency$ sips -Z 250 which-whereis.png
/Users/tanglei/Documents/linux-shell-more-effiency/which-whereis.png
  /Users/tanglei/Documents/linux-shell-more-effiency/which-whereis.png
➜ linux-shell-more-effiency$ sips -g all which-whereis.png
/Users/tanglei/Documents/linux-shell-more-effiency/which-whereis.png
  pixelWidth: 250
  pixelHeight: 72
  typeIdentifier: public.png
  format: png
  formatOptions: default
  dpiWidth: 72.000
  dpiHeight: 72.000
  samplesPerPixel: 4
  bitsPerSample: 8
  hasAlpha: yes
  space: RGB
  profile: DELL U2412M
➜ linux-shell-more-effiency$ sips -z 100 30 which-whereis.png
/Users/tanglei/Documents/linux-shell-more-effiency/which-whereis.png
  /Users/tanglei/Documents/linux-shell-more-effiency/which-whereis.png
➜ linux-shell-more-effiency$ sips -g pixelWidth -g pixelHeight which-whereis.png
/Users/tanglei/Documents/linux-shell-more-effiency/which-whereis.png
  pixelWidth: 30
  pixelHeight: 100
Processing JSON on the command line: JSON is everywhere, so you often need to handle JSON data. Highly recommended: jq, "a lightweight and flexible command-line JSON processor" [1].
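A minimal jq sketch (the JSON payload here is made up; expected output shown as comments):
echo '{"name": "tanglei", "posts": [{"title": "shell tips", "views": 100}]}' | jq '.posts[0].title'
# "shell tips"
echo '{"name": "tanglei", "posts": [{"title": "shell tips", "views": 100}]}' | jq -r '.name'
# tanglei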
At this point, I believe you have a deeper understanding of "what are the command line skills that improve development efficiency?" You might as well try them out in practice.