Issue
If, during crawling, Scrapy is unable to find a node to extract data from, will it throw an exception and close the scraper, or just store a null value for that item's attribute?
I am asking because I was wondering whether I should check for an element's existence with if statements before extracting.
Solution
You can use a try statement to handle exceptions, so if Scrapy doesn't find what you need it will move along. This also works in Selenium.
If you don't handle exceptions in your code, the script will stop. In Scrapy's case, though, if other requests were already made, only the callback where the exception was raised will stop; the rest will keep going.
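As a minimal sketch of that approach (the spider name, site, and CSS selectors here are illustrative assumptions, not from the question): Scrapy's .get() returns None for a missing node rather than raising, so the exception usually appears only when you chain a call onto the result; wrapping the extraction in try/except lets the callback store a null value and carry on.

import scrapy


class QuotesSpider(scrapy.Spider):
    # Hypothetical spider for illustration; the site and the
    # selectors (div.quote, small.author) are assumptions.
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        for quote in response.css("div.quote"):
            try:
                # .get() returns None when the node is missing, so the
                # AttributeError only fires on the chained .strip().
                author = quote.css("small.author::text").get().strip()
            except AttributeError:
                # Node not found: store a null value and move along
                # instead of letting the exception kill this callback.
                author = None
            yield {
                "text": quote.css("span.text::text").get(),
                "author": author,
            }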
Answered By - Rafael Almeida