How to use the wget download tool on a Linux system
How do you use the wget download tool on a Linux system? This article gives a detailed analysis and answer to that question, hoping to help readers who want to solve this problem find a simple, workable approach.
Wget is an indispensable file-download tool on Linux systems. It supports the HTTP, HTTPS and FTP protocols and can work through HTTP proxies, which makes it a very powerful tool.
1. Command format: wget [parameters] [URL]
2. Command function: download resources from the network. If no directory is specified, downloads go to the current directory by default. Although wget is powerful, it is also fairly simple to use:
1) Resumable downloads. This was the biggest selling point of NetAnts and FlashGet back in the day; wget supports it too, so users with an unreliable network connection can rest easy.
2) Both FTP and HTTP downloads. Although most software can be fetched over HTTP, downloading over FTP is still sometimes necessary.
3) Proxy server support. Systems with strict security requirements usually do not expose themselves directly to the Internet, so proxy support is a must-have for a download tool (a proxy example follows this list).
4) Simple, convenient configuration. Users accustomed to graphical interfaces may not be used to the command line, but the command line actually has the advantage here: it takes far fewer mouse clicks, and there is no worrying about having clicked the wrong thing.
5) Small and completely free. The small size hardly matters now that hard drives are so big; being completely free does matter, because much of the so-called free software on the Internet comes bundled with advertisements nobody wants.
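As a quick illustration of the proxy support in point 3, a minimal sketch, assuming a local proxy listening on 127.0.0.1:8080 (replace with your own) and reusing the sample archive URL from the examples below:
wget -e "http_proxy=http://127.0.0.1:8080" http://www.linuxidc.com/linuxidc.zip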
3. Command parameters:
Startup parameters:
-V, --version display the version of wget and exit
-h, --help print syntax help
-b, --background go to background after startup
-e, --execute=COMMAND execute a command in `.wgetrc' format; see /etc/wgetrc or ~/.wgetrc for the wgetrc format
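For example, -e lets you pass any wgetrc-style setting for a single run; a minimal sketch, assuming you want to ignore robots.txt for this one download (the URL is the sample archive used in the examples below):
wget -e robots=off http://www.linuxidc.com/linuxidc.zip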
Logging and input file parameters:
-o, --output-file=FILE write log messages to FILE
-a, --append-output=FILE append log messages to FILE
-d, --debug print debug output
-q, --quiet quiet mode (no output)
-v, --verbose verbose mode (this is the default)
-nv, --non-verbose turn off verbose mode, without being quiet
-i, --input-file=FILE download the URLs listed in FILE
-F, --force-html treat the input file as HTML
-B, --base=URL prepend URL to the relative links in the file given by the -F -i parameters
--sslcertfile=FILE optional client certificate
--sslcertkey=KEYFILE optional KEYFILE for the client certificate
--egd-file=FILE file name of the EGD socket
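Combining a few of these, a sketch that downloads a list of URLs with terse output appended to a log file (filelist.txt and download.log are the placeholder names also used in the examples below):
wget -nv -a download.log -i filelist.txt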
Download parameters:
--bind-address=ADDRESS bind to ADDRESS (hostname or IP) on the local host, useful when the machine has more than one IP or name
-t, --tries=NUMBER set the maximum number of connection attempts (0 means unlimited)
-O, --output-document=FILE write the document to FILE
-nc, --no-clobber don't overwrite existing files or use .# suffixes
-c, --continue resume getting a partially-downloaded file
--progress=TYPE select the progress bar style
-N, --timestamping don't re-download a file unless it is newer than the local copy
-S, --server-response print the server response
--spider don't download anything
-T, --timeout=SECONDS set the response timeout in seconds
-w, --wait=SECONDS wait SECONDS between attempts
--waitretry=SECONDS wait 1...SECONDS between retries of a retrieval
--random-wait wait 0...2*WAIT seconds between downloads
-Y, --proxy=on/off turn the proxy on or off
-Q, --quota=NUMBER set the download quota to NUMBER
--limit-rate=RATE limit the download rate to RATE
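A sketch combining several download parameters: resume a partial download, retry without limit, and cap the rate (the URL is the article's sample archive):
wget -c -t 0 --limit-rate=300k http://www.linuxidc.com/linuxidc.zip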
Directory parameters:
-nd, --no-directories don't create directories
-x, --force-directories force directory creation
-nH, --no-host-directories don't create host directories
-P, --directory-prefix=PREFIX save files to the directory PREFIX/...
--cut-dirs=NUMBER ignore NUMBER levels of remote directories
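For example, a sketch that recreates the remote directory tree under ./mirror while skipping the host name and the first remote path component (the URL path is a made-up placeholder):
wget -x -nH --cut-dirs=1 -P ./mirror http://www.linuxidc.com/pub/linuxidc.zip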
HTTP option parameters:
--http-user=USER set the HTTP user name to USER
--http-passwd=PASS set the HTTP password to PASS
-C, --cache=on/off allow/disallow server-side data caching (normally allowed)
-E, --html-extension save all text/html documents with the .html extension
--ignore-length ignore the `Content-Length' header field
--header=STRING insert the string STRING among the headers
--proxy-user=USER set the proxy user name to USER
--proxy-passwd=PASS set the proxy password to PASS
--referer=URL include a `Referer: URL' header in the HTTP request
-s, --save-headers save the HTTP headers to the file
-U, --user-agent=AGENT identify as AGENT instead of Wget/VERSION
--no-http-keep-alive disable HTTP keep-alive (persistent connections)
--cookies=off don't use cookies
--load-cookies=FILE load cookies from FILE before the session
--save-cookies=FILE save cookies to FILE at the end of the session
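For example, a sketch that adds a custom header and a Referer to the request (the header value and URLs are placeholders):
wget --header="Accept-Language: en-US" --referer=http://www.linuxidc.com/ http://www.linuxidc.com/linuxidc.zip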
FTP option parameters:
-nr, --dont-remove-listing don't remove the `.listing' files
-g, --glob=on/off turn file name globbing on or off
--passive-ftp use the passive transfer mode (default)
--active-ftp use the active transfer mode
--retr-symlinks when recursing, retrieve the files symlinks point to (not directories)
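For example, globbing lets one FTP URL match several files; a sketch, assuming a hypothetical anonymous FTP server:
wget -g on "ftp://ftp.example.com/pub/*.zip"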
Recursive download parameters:
-r, --recursive recursive download -- use with caution!
-l, --level=NUMBER maximum recursion depth (inf or 0 for unlimited)
--delete-after delete files locally after they have been downloaded
-k, --convert-links convert non-relative links to relative ones
-K, --backup-converted back up file X as X.orig before converting it
-m, --mirror equivalent to -r -N -l inf -nr
-p, --page-requisites download all files needed to display the HTML page (images, etc.)
Accept/reject parameters for recursive downloads:
-A, --accept=LIST comma-separated list of accepted extensions
-R, --reject=LIST comma-separated list of rejected extensions
-D, --domains=LIST comma-separated list of accepted domains
--exclude-domains=LIST comma-separated list of rejected domains
--follow-ftp follow FTP links found in HTML documents
--follow-tags=LIST comma-separated list of HTML tags to follow
-G, --ignore-tags=LIST comma-separated list of HTML tags to ignore
-H, --span-hosts go to foreign hosts when recursing
-L, --relative follow relative links only
-I, --include-directories=LIST list of allowed directories
-X, --exclude-directories=LIST list of excluded directories
-np, --no-parent don't ascend to the parent directory
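Putting the recursive parameters together, a sketch that crawls two levels deep, converts links for local viewing, fetches page requisites, and never ascends past the start directory (the URL is a placeholder):
wget -r -l 2 -k -p -np http://www.linuxidc.com/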
wget -S --spider url downloads nothing and only shows the check process.
4. Usage examples:
Example 1: download a single file with wget
Command:
wget http://www.linuxidc.com/linuxidc.zip
Description:
This example downloads a file from the network and saves it in the current directory. During the download a progress bar is displayed, showing the percentage completed, the bytes downloaded so far, the current download speed, and the remaining time.
Example 2: download with wget -O and save under a different file name
Command:
wget -O wordpress.zip http://www.linuxidc.com/download.aspx?id=1080
Description:
By default, wget names the downloaded file after the last part of the URL (everything after the final "/"), which usually produces a wrong file name for dynamically generated links.
Wrong: the following command saves the file under the name download.aspx?id=1080:
wget http://www.linuxidc.com/download.aspx?id=1080
Even though the downloaded file is a zip archive, it is still saved as download.aspx?id=1080.
Correct: to solve this problem, use the -O parameter to specify a file name:
wget -O wordpress.zip http://www.linuxidc.com/download.aspx?id=1080
Example 3: limit the download speed with wget --limit-rate
Command:
wget --limit-rate=300k http://www.linuxidc.com/linuxidc.zip
Description:
When you run wget, it uses all available bandwidth by default. A speed limit is useful when you are downloading a large file and still need bandwidth for other downloads.
Example 4: resume an interrupted download with wget -c
Command:
wget -c http://www.linuxidc.com/linuxidc.zip
Description:
Restarting an interrupted transfer with wget -c is very helpful when the download of a large file is suddenly cut off by a network problem: we can continue where it stopped instead of downloading the whole file again. Use the -c parameter whenever you need to resume an interrupted download.
Example 5: download in the background with wget -b
Command:
wget -b http://www.linuxidc.com/linuxidc.zip
Description:
For very large files, we can use the -b parameter to download in the background:
wget -b http://www.linuxidc.com/linuxidc.zip
Continuing in background, pid 1840.
Output will be written to `wget-log'.
You can check the progress of the download using the following command:
tail -f wget-log
Example 6: disguise the user agent for the download
Command:
wget --user-agent="Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.204 Safari/534.16" http://www.linuxidc.com/linuxidc.zip
Description:
Some websites reject download requests when the user-agent string does not look like a browser. You can disguise yours with the --user-agent parameter.
Example 7: test a download link with wget --spider
Command:
wget --spider URL
Description:
When you plan to download on a schedule, you should test in advance whether the download link is still valid. Add the --spider parameter to check:
wget --spider URL
If the download link is correct, it will display:
wget --spider URL
Spider mode enabled. Check if remote file exists.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Remote file exists and could contain further links,
but recursion is disabled -- not retrieving.
This ensures that the download will go through at the scheduled time. If you give it a broken link, the following error is displayed:
wget --spider url
Spider mode enabled. Check if remote file exists.
HTTP request sent, awaiting response... 404 Not Found
Remote file does not exist -- broken link!!!
You can use the --spider parameter in the following situations:
Checking a link before a scheduled download
Checking at intervals whether a website is available
Checking the pages of a website for dead links
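As a sketch of the first case, a crontab entry could verify the link and, only if it is alive, fetch it (the schedule, target directory, and URL are placeholders):
0 2 * * * wget --spider http://www.linuxidc.com/linuxidc.zip && wget -c -P /data/downloads http://www.linuxidc.com/linuxidc.zip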
Example 8: increase the number of retries with wget --tries
Command:
wget --tries=40 URL
Description:
A download may fail when there are network problems or when the file is very large. By default, wget retries 20 times to download the file; if necessary, you can increase the number of retries with --tries.
Example 9: download multiple files with wget -i
Command:
wget -i filelist.txt
Description:
First, save the download links to a file:
cat > filelist.txt
url1
url2
url3
url4
Then download them using this file together with the -i parameter.
Example 10: mirror a website with wget --mirror
Command:
wget --mirror -p --convert-links -P ./LOCAL URL
Description:
Download an entire website locally.
--mirror: turn on options suitable for mirroring
-p: download all files needed to display the HTML pages properly
--convert-links: after the download, convert the links for local viewing
-P ./LOCAL: save all files and directories into the specified local directory LOCAL
Example 11: filter out a file format with wget --reject
Command: wget --reject=gif url
Description:
If you want to download a website but not its gif images, you can use the command above.
Example 12: write download messages to a log file with wget -o
Command:
wget -o download.log URL
Description:
If you do not want the download messages shown on the terminal but recorded in a log file instead, you can use the -o parameter.
Example 13: limit the total download size with wget -Q
Command:
wget -Q5m -i filelist.txt
Description:
Use this when you want wget to exit once the total downloaded data exceeds 5MB. Note: this parameter has no effect when downloading a single file; it only takes effect for recursive downloads or downloads from an input list.
Example 14: download files of a given format with wget -r -A
Command:
wget -r -A.pdf url
Description:
You can use this feature in the following situations:
Download all the pictures on a website
Download all the videos on a website
Download all the PDF files on a website
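For the picture case, a sketch restricting the recursion to common image extensions (the extension list and URL are placeholders):
wget -r -A .jpg,.png,.gif http://www.linuxidc.com/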
Example 15: download over FTP with wget
Command:
wget ftp-url
wget --ftp-user=USERNAME --ftp-password=PASSWORD url
Description:
You can use wget to download from ftp links.
Anonymous ftp download with wget:
wget ftp-url
Authenticated ftp download with a username and password:
wget --ftp-user=USERNAME --ftp-password=PASSWORD url
Remarks: compile and install
Use the following commands to compile and install:
# tar zxvf wget-1.9.1.tar.gz
# cd wget-1.9.1
# ./configure
# make
# make install
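After installation, you can confirm that the new binary works by printing its version:
# wget -V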
This is the answer to the question of how to use the wget download tool on a Linux system. I hope the content above has been of some help to you.