
Setting Scrapy start_urls from a Script

Published 2019-05-25 04:11

Question:

I have a working Scrapy spider and I'm able to run it from a separate script following the example here. I have also created a wxPython GUI for my script that simply contains a multi-line TextCtrl for users to enter a list of URLs to scrape and a button to submit. Currently start_urls is hardcoded in my spider. How can I pass the URLs entered in the TextCtrl to the start_urls list in my spider? Thanks in advance for the help!
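A minimal sketch of the glue between the GUI and the spider, combining the ideas from the answers below. The names `url_input`, `MySpider`, and the `myproject.spiders` module path are hypothetical placeholders, not from the original post: the button handler reads the TextCtrl's value, splits it into a clean list of URLs, and hands that list to `CrawlerProcess.crawl`, whose keyword arguments are forwarded to the spider.

```python
def parse_urls(text):
    """Turn a multi-line TextCtrl value into a clean list of URLs,
    dropping blank lines and surrounding whitespace."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def on_submit(self, event):
    # Deferred imports so this sketch's parsing helper can be used
    # even where Scrapy is not installed.
    from scrapy.crawler import CrawlerProcess
    from myproject.spiders.my_spider import MySpider  # hypothetical spider

    # self.url_input is the multi-line wx.TextCtrl (hypothetical name)
    urls = parse_urls(self.url_input.GetValue())

    process = CrawlerProcess()
    # Keyword arguments to crawl() reach the spider's __init__,
    # so start_urls is set per run instead of being hardcoded.
    process.crawl(MySpider, start_urls=urls)
    process.start()  # blocks until the crawl finishes
```

Note that `process.start()` blocks and the Twisted reactor cannot be restarted, so this pattern suits one crawl per program run; a long-lived GUI would need to run the crawl in a separate process.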

Answer 1:

Just set start_urls on your Spider instance:

spider = FollowAllSpider(domain=domain)
# Override the hardcoded list before the crawl starts:
spider.start_urls = ['http://google.com']


Answer 2:

alecxe's answer doesn't work for me. My solution works with Scrapy==1.0.3:

from scrapy.crawler import CrawlerProcess
from tutorial.spiders.some_spider import SomeSpider

process = CrawlerProcess()

# Keyword arguments to crawl() are passed through to the spider,
# so start_urls is set per run instead of being hardcoded:
process.crawl(SomeSpider, start_urls=["http://www.example.com"])
process.start()

It might help someone in the future.
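This approach works because keyword arguments passed to `process.crawl` are forwarded to the spider's constructor, and the default `scrapy.Spider.__init__` copies them onto the instance, shadowing any class-level attribute. A pure-Python sketch of that mechanism (an illustration of the idea, not Scrapy's actual source):

```python
class Spider:
    # Class-level default, like a spider with hardcoded start_urls.
    start_urls = []

    def __init__(self, **kwargs):
        # Each constructor keyword becomes an instance attribute,
        # which is why start_urls=... overrides the class default.
        self.__dict__.update(kwargs)

spider = Spider(start_urls=["http://www.example.com"])
print(spider.start_urls)  # the kwarg shadows the empty class default
```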