2025-01-31 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article introduces how to use curl to access the Internet from the command line. Many people have questions about this in daily work, so we have gathered simple, practical methods below. We hope it helps resolve your doubts; follow along and study!
Download our curl cheat sheet. curl is a fast and effective way to get the information you need from the Internet without using a graphical interface.
curl is often described as a non-interactive web browser, which means it can fetch information from the Internet and either display it in your terminal or save it to a file. On the face of it, this is what web browsers such as Firefox or Chromium do, except that they render the information by default, while curl downloads and displays the raw data. In fact, the curl command can do much more: it can transfer data to and from a server using a variety of protocols, including HTTP, FTP, SFTP, IMAP, POP3, LDAP, SMB, SMTP, and more. It is a useful tool for the average end user, a convenience for system administrators, and a quality assurance tool for microservice and cloud developers.
curl is designed to work without user interaction, so unlike with Firefox, you have to think about your interaction with online data from beginning to end. For example, to view a web page in Firefox, you launch a Firefox window, enter the website you want to visit in the address bar or a search engine, then navigate to the site and click the page you want to view.
The same is true for curl, except that you do everything at once: you provide the Internet address you want to access when you start curl, and you tell it whether to show the data in the terminal or save it to a file. Things get a little more complicated when you have to interact with a website or API that requires authentication, but once you learn the curl command's syntax, it becomes second nature. To help you master it, we have collected the relevant syntax in a convenient cheat sheet.
Download files using curl
You can download a file with the curl command by providing a link to a specific URL. If the URL you provide defaults to index.html, that page is downloaded, and the downloaded file is displayed on your terminal screen. You can pipe the data to less, tail, or any other command:
$ curl "http://example.com" | tail -n 4
Example Domain
This domain is for use in illustrative examples in documents. You may use this domain in literature without prior coordination or asking for permission.
More information...
Because some URLs contain special characters that the shell would otherwise interpret, it is safest to enclose the URL in quotation marks.
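To see why quoting matters, here is a small, self-contained sketch. The URL and the count_args helper are made up for this demonstration: without the quotes, the shell would treat the & as "run this command in the background" and the ? as a wildcard, so curl would never see the whole address.

```shell
# A tiny helper (made up for this demo) that reports how many
# arguments it received:
count_args() { echo "$#"; }

# Quoted, the entire URL (metacharacters and all) arrives as one argument:
count_args 'https://example.com/search?q=linux&page=2'   # prints 1
```

The same quoting habit applies to every curl example in this article.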
Some files do not display well in a terminal. You can use the --remote-name option to save the file under the name it has on the server:

$ curl --remote-name "https://example.com/linux-distro.iso"
$ ls
linux-distro.iso
Alternatively, you can use the --output option to name your download whatever you want:

$ curl "http://example.com/foo.html" --output bar.html

List contents of remote directories with curl
Because curl is not interactive, it is difficult to browse a page for downloadable elements. If the remote server you are connecting to allows it, you can use curl to list the contents of a directory:

$ curl --list-only "https://example.com/foo/"

Continue an interrupted download
If you are downloading a very large file, you may find that you sometimes have to interrupt the download. curl is smart enough to determine where the download left off and continue from there. That means the next time you have a problem downloading a 4GB Linux distribution ISO, you never have to start over. The syntax for --continue-at is a little unusual: if you know the byte offset where the download was interrupted, you can provide it to curl; otherwise, you can use a lone dash (-) to tell curl to detect it automatically:
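If you do want the explicit byte offset, one way to obtain it is from the size of the partial file already on disk. This is a minimal sketch, assuming a partial download named linux-distro.iso and GNU stat; the curl line itself is commented out because it requires network access:

```shell
# Size of the partial file in bytes (0 if it does not exist yet).
# Note: 'stat --format=%s' is GNU coreutils; on macOS use 'stat -f%z'.
offset=$(stat --format=%s linux-distro.iso 2>/dev/null || echo 0)
echo "resuming from byte $offset"

# The actual transfer would then be (needs network, so commented out here):
# curl --continue-at "$offset" --remote-name "https://example.com/linux-distro.iso"
```

In most cases the automatic form shown below is simpler, since curl computes the offset for you.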
$ curl --remote-name --continue-at - "https://example.com/linux-distro.iso"

Download a sequence of files
If you need to download multiple files rather than one large file, curl can help with that, too. Assuming you know the location and filename pattern of the files you want, you can use curl's sequence notation: square brackets marking the start and end of a range of integers. For the output filename, use #1 to refer to the first variable:
$ curl "https://example.com/file_[1-4].webp" --output "file_#1.webp"
If you need another variable to represent another sequence, refer to each variable in the order it appears in the command. For example, in this command, #1 refers to the directories images_000 through images_009, and #2 refers to the files file_1.webp through file_4.webp:
$ curl "https://example.com/images_00[0-9]/file_[1-4].webp" --output "file_#1-#2.webp"

Download all PNG files from a site
You can also do some basic web scraping using only curl and grep to find what you want to download. For example, suppose you need to download all the images associated with a page you are archiving. First, download the page that references the images. Pipe the page to grep to search for the image type you want (PNG in this example). Finally, create a while loop to construct the download URL and save each file to your computer:
$ curl https://example.com |\
grep --only-matching 'src="[^"]*.[png]"' |\
cut -d\" -f2 |\
while read i; do \
curl https://example.com/"${i}" -o "${i##*/}"; \
done
This is just an example, but it shows how flexible curl can be when combined with Unix pipes and some basic and ingenious parsing.
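The parsing step can be exercised locally, with no network access, using a made-up HTML snippet in place of a downloaded page. This sketch uses a slightly tightened version of the grep pattern above (escaping the dot in .png) together with the same cut command:

```shell
# Sample HTML standing in for a downloaded page (made up for this demo):
html='<img src="images/a.png"><img src="images/b.png"><img src="logo.jpg">'

# Extract only the PNG paths: grep keeps each src="....png" match,
# and cut pulls out the value between the double quotes.
pngs=$(printf '%s\n' "$html" |
  grep --only-matching 'src="[^"]*\.png"' |
  cut -d\" -f2)

printf '%s\n' "$pngs"
# prints:
# images/a.png
# images/b.png
```

Note that logo.jpg is filtered out, since only the .png references match the pattern.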
Get HTML headers
The protocols used for data exchange embed a great deal of metadata in the packets computers send. HTTP headers are components of the initial portion of the data. When you have trouble connecting to a website, it can be helpful to examine these headers (especially the response code):
$ curl --head "https://example.com"
HTTP/2 200
accept-ranges: bytes
age: 485487
cache-control: max-age=604800
content-type: text/html; charset=UTF-8
date: Sun, 26 Apr 2020 09:02:09 GMT
etag: "3147526947"
expires: Sun, 03 May 2020 09:02:09 GMT
last-modified: Thu, 17 Oct 2019 07:18:26 GMT
server: ECS (sjc/4E76)
x-cache: HIT
content-length: 1256

Fail fast
A 200 response is the usual HTTP indicator of success, and it is what you normally expect when you connect to a server. The famous 404 response indicates that a page could not be found, while 500 means the server encountered an error while processing the request.
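In a script, you often want to branch on that response code. The status line below is a hard-coded sample for illustration; in practice you could take it from curl --head output, or use curl's --write-out '%{http_code}' option to print just the numeric code.

```shell
# Hard-coded sample status line (stands in for real curl --head output):
status_line='HTTP/2 404'

# Keep only the text after the last space -- the numeric code:
code=${status_line##* }

case "$code" in
  2??) result="success" ;;
  3??) result="redirect" ;;
  4??) result="client error" ;;
  5??) result="server error" ;;
esac

echo "$code: $result"    # prints: 404: client error
```

Branching like this lets a monitoring or deployment script react differently to missing pages versus server failures.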
To see what errors occur during negotiation, add the --show-error option:

$ curl --head --show-error "http://opensource.ga"
These problems can be difficult to fix unless you have access to the server you are contacting, but curl generally tries its best to connect to the address you specify. Sometimes, when testing over a network, endless retries just waste time, so you can use the --fail-early option to force curl to exit quickly upon failure:
$ curl --fail-early "http://opensource.ga"

Follow redirects indicated by a 3xx response
The 300 series of responses is more flexible. Specifically, a 301 response means that a URL has been permanently moved to another location. It is a common way for webmasters to relocate content while leaving a "trail" so that visitors to the old address can still find it. By default, curl does not follow a 301 redirect, but you can use the --location option to continue on to the target the 301 response points to:
$ curl "https://iana.org" | grep title
301 Moved Permanently
$ curl --location "https://iana.org"
Internet Assigned Numbers Authority

Expand a shortened URL
The --location option is also very useful when you want to examine a shortened URL before visiting it. Short URLs are handy for social networks with character limits (although this may not be an issue if you use a modern, open source social network) and for print media, where users can't copy a long address. However, they can also be risky, because their destination is hidden by nature. By combining the --head option, to view only the HTTP headers, with the --location option, to see the final destination of a URL, you can peek at a shortened URL without loading the full resource:
$ curl --head --location ""

Download our curl cheat sheet
Once you start thinking about exploring the web with a single command, curl becomes a fast and efficient way to get the information you need from the Internet without the hassle of a graphical interface. To help you build it into your workflow, we have created a curl cheat sheet that contains common curl uses and syntax, including an overview of using it to query an API.
At this point, the study of "how to use curl to access the Internet from the command line" is over. We hope it has resolved your doubts; pairing theory with practice is the best way to learn, so go try it!
© 2024 shulou.com SLNews company. All rights reserved.