2025-02-25 Update From: SLTechnology News&Howtos > Servers
Shulou (Shulou.com) 06/01 Report
This article shows how to deploy a Scrapy crawler with Docker. The content is straightforward and easy to follow; I hope it clears up any doubts you have about the process. Let's study "how Docker deploys Scrapy" together.
What if we want to deploy a crawler on 10 Ubuntu machines? Doing it the traditional way is painful: unless you record every step and repeat the steps in exactly the same order on every host, something will differ, and downloading each piece of software again on each machine takes time. This is where Docker comes in. Docker packages the operating system, the required applications, and their settings into an image, which can then be run anywhere. Unlike a virtual machine, a container does not need its own virtualized hardware; it shares the host's kernel, which makes it much lighter.
1. Deployment steps
1.1 Upload the local Scrapy crawler code, except settings.py, to a git server.
1.2 Write a Dockerfile that copies settings.py and requirements.txt into the image, and build the image.
Dockerfile content:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y git
RUN apt-get install -y nano
RUN apt-get install -y redis-server
RUN apt-get -y dist-upgrade
RUN apt-get install -y openssh-server
RUN apt-get install -y python3.5 python3-pip
RUN apt-get install -y zlib1g-dev libffi-dev libssl-dev
RUN apt-get install -y libxml2-dev libxslt1-dev
RUN mkdir /code
WORKDIR /code
ADD ./requirements.txt /code/
ADD ./settings.py /code/
RUN mkdir /code/myspider
RUN pip3 install -r requirements.txt
VOLUME ["/data"]
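The Dockerfile above works, but each RUN creates a separate image layer. A slightly tighter variant is sketched below, under the same Ubuntu base assumption; this is not the author's exact file, just one way to combine the apt-get steps and make better use of the build cache:

```dockerfile
FROM ubuntu
# Install all system packages in one layer so apt-get update and install stay in sync.
RUN apt-get update && apt-get install -y \
    git nano redis-server openssh-server \
    python3.5 python3-pip \
    zlib1g-dev libffi-dev libssl-dev \
    libxml2-dev libxslt1-dev
RUN mkdir /code
WORKDIR /code
# Copy only the dependency list first, so later code changes do not invalidate the pip layer.
ADD ./requirements.txt /code/
RUN pip3 install -r requirements.txt
ADD ./settings.py /code/
RUN mkdir /code/myspider
VOLUME ["/data"]
```

With this ordering, editing settings.py no longer forces pip to reinstall every dependency on rebuild.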
requirements.txt content:
BeautifulSoup4
scrapy
setuptools
scrapy_redis
redis
sqlalchemy
pymysql
pillow
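requirements.txt pulls in scrapy_redis, which suggests the crawler shares its request queue through the Redis server installed in the image. A typical settings.py fragment for such a setup might look like the following; the scheduler and dupefilter classes are the ones scrapy_redis provides, but the concrete values here are assumptions, not the author's file:

```python
# Hypothetical settings.py fragment for a scrapy_redis based crawler.
# The Redis URL is an assumption (the image installs redis-server locally).
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER_PERSIST = True          # keep the request queue between runs
REDIS_URL = "redis://127.0.0.1:6379"
```

Because settings.py carries per-deployment values like REDIS_URL, it makes sense that step 1.1 keeps it out of git and step 1.2 bakes it into the image instead.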
The build context directory holds the Dockerfile together with requirements.txt and settings.py. Build the image from that directory:
docker build -t fox6419/scrapy:scrapyTag .
Here fox6419 is the Docker Hub username and scrapyTag is the tag.
After the build succeeds, run docker images to see the image locally.
1.3 Upload the packaged image to Docker Hub.
The push command has this general form:
docker push username/repository:tag
On my side (after logging in with docker login), that is:
docker push fox6419/scrapy:scrapyTag
1.4 Create an Ubuntu 16.04 host with Docker installed, for example a DigitalOcean droplet created from its Docker one-click application.
1.5 Log in to Docker on the host, pull the image from step 1.3, and run it:
docker run -it fox6419/scrapy:scrapyTag /bin/bash
1.6 Inside the container, git clone the crawler code from step 1.1, copy the settings.py baked into the image into the crawler directory, and then run scrapy crawl xxx.
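Step 1.6 can be sketched as the following shell session inside the container. The repository URL, directory names, and spider name are placeholders, not values from the article:

```shell
# Inside the running container (all names below are hypothetical):
cd /code/myspider
git clone https://your-git-server/your-crawler.git .   # crawler code from step 1.1
cp /code/settings.py ./your_crawler/settings.py        # settings baked into the image
scrapy crawl your_spider                               # start the crawler
```

Because /data was declared as a VOLUME in the Dockerfile, anything the crawler writes there can be persisted or mounted from the host.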
That covers the whole of "how Docker deploys Scrapy". Thank you for reading, and I hope the content helps you.