Issue
When running Scrapy from my own script that loads URLs from a database and follows all internal links on those websites, I run into a problem. I need to know which start_url is currently being used, because I have to keep results consistent with a SQL database. The issue: when Scrapy takes its list of links to follow from the built-in start_urls list and one of those websites redirects immediately, I lose track of the origin. Once the crawler is following the internal links it finds, I can later only determine the URL currently being visited, not the start_url Scrapy started out from.
Other answers on the web are wrong, cover different use cases, or are deprecated, as there appears to have been a change in Scrapy's code last year.
MWE:
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.crawler import CrawlerProcess


class CustomerSpider(CrawlSpider):
    name = "my_crawler"
    rules = [Rule(LinkExtractor(unique=True), callback="parse_obj"), ]

    def parse_obj(self, response):
        print(response.url)  # find current start_url and do something


a = CustomerSpider
# I want to re-identify upb.de in the crawling process in process.crawl(a),
# but it is redirected immediately. I have to hand over the start_urls this
# way, as I use the class CustomerSpider in another class.
a.start_urls = ["https://upb.de", "https://spiegel.de"]
a.allowed_domains = ["upb.de", "spiegel.de"]

process = CrawlerProcess()
process.crawl(a)
process.start()
The MWE above shows Scrapy (my crawler) receiving a list of URLs the way I have to provide them. An example of a redirecting URL is https://upb.de, which immediately redirects to https://uni-paderborn.de.
I am looking for an elegant way of handling this, because I want to keep using Scrapy's features such as parallel crawling, so I do not want to pull in something like the requests library on the side. I want to find the start_url that Scrapy is currently using internally (inside the Scrapy library). I appreciate your help.
Solution
Ideally, you would set a meta property on the original request, and reference it later in the callback. Unfortunately, CrawlSpider doesn't support passing meta through a Rule (see #929).
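For reference, the meta mechanism itself is straightforward when you build the Request yourself: whatever dictionary you attach as meta comes back on response.meta in the callback. A minimal sketch of just that round trip (the spider name and URL here are placeholders, not from the original question; the full spider below applies the same idea):

from scrapy import Request, Spider


class MetaSketchSpider(Spider):
    # Placeholder spider illustrating only the meta round trip.
    name = 'meta_sketch'

    def start_requests(self):
        # Attach arbitrary data to the request; Scrapy carries it along.
        yield Request('https://example.com', meta={'db_key': 42}, callback=self.parse)

    def parse(self, response):
        # The same dict is available on the response of that request.
        print(response.meta['db_key'])  # -> 42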
You're better off building your own spider instead of subclassing CrawlSpider. Start by passing your start_urls in as a parameter to process.crawl, which makes them available as a property on the instance. Within the start_requests method, yield a new Request for each url, including the database key as a meta value.
When parse receives the response from loading your url, run a LinkExtractor on it, and yield a request for each extracted link to scrape it individually. Here, you can again pass meta, propagating your original database key down the chain.
The code looks like this:
from scrapy.spiders import Spider
from scrapy import Request
from scrapy.linkextractors import LinkExtractor
from scrapy.crawler import CrawlerProcess


class CustomerSpider(Spider):
    name = 'my_crawler'

    def start_requests(self):
        for url in self.root_urls:
            yield Request(url, meta={'root_url': url})

    def parse(self, response):
        links = LinkExtractor(unique=True).extract_links(response)

        for link in links:
            yield Request(
                link.url, callback=self.process_link, meta=response.meta)

    def process_link(self, response):
        print({
            'root_url': response.meta['root_url'],
            'resolved_url': response.url
        })


a = CustomerSpider
a.allowed_domains = ['upb.de', 'spiegel.de']

process = CrawlerProcess()
process.crawl(a, root_urls=['https://upb.de', 'https://spiegel.de'])
process.start()
# {'root_url': 'https://spiegel.de', 'resolved_url': 'http://www.spiegel.de/video/'}
# {'root_url': 'https://spiegel.de', 'resolved_url': 'http://www.spiegel.de/netzwelt/netzpolitik/'}
# {'root_url': 'https://spiegel.de', 'resolved_url': 'http://www.spiegel.de/thema/buchrezensionen/'}
Answered By - jschnurr