Issue
How can I clear the Scrapyd jobs list? Whenever I start a spider I end up with a lot of jobs for it, and I want to kill them all. After reading the documentation I wrote the following code, which I run in a loop:
import ast
import os
from pprint import pprint

cd = os.system('curl http://localhost:6800/schedule.json -d project=default -d spider=google > kill_job.text')
file = open('kill_job.text', 'r')
a = ast.literal_eval(file.read())
kill = 'curl http://localhost:6800/cancel.json -d project=default -d job={}'.format(a['jobid'])
pprint(kill)
cd = os.system(kill)
But it doesn't seem to work. How can I kill all the jobs? Even when I terminate the Scrapy process manually, all the jobs come back on the next start. I also found https://github.com/DormyMo/SpiderKeeper for project management. Does anybody know how to include an existing project?
Solution
I still do not know what was wrong with my first example, but I fixed the problem with this:
import ast
import os

os.system('curl "http://localhost:6800/listjobs.json?project=projectname" > kill_job.text')
with open('kill_job.text', 'r') as file:
    a = ast.literal_eval(file.read())
# listjobs.json returns "pending", "running" and "finished" lists;
# only pending and running jobs can still be cancelled.
for job in a.get('pending', []) + a.get('running', []):
    kill = 'curl http://localhost:6800/cancel.json -d project=projectname -d job={}'.format(job['id'])
    os.system(kill)
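The same cancel loop can be done without shelling out to curl at all, using only the standard library. This is a sketch, not part of the original answer; it assumes Scrapyd is running on localhost:6800 and that the project is named "projectname" (swap in your own host and project name):

```python
import json
from urllib import parse, request

def jobs_to_cancel(listing):
    """Given a parsed listjobs.json response, return the ids of jobs
    that can still be cancelled (pending and running; finished cannot)."""
    return [job['id'] for job in listing.get('pending', []) + listing.get('running', [])]

def cancel_all(base='http://localhost:6800', project='projectname'):
    # Fetch the job listing for the project.
    url = '{}/listjobs.json?{}'.format(base, parse.urlencode({'project': project}))
    with request.urlopen(url) as resp:
        listing = json.load(resp)
    # POST a cancel request for each pending or running job.
    for job_id in jobs_to_cancel(listing):
        data = parse.urlencode({'project': project, 'job': job_id}).encode()
        request.urlopen('{}/cancel.json'.format(base), data=data)
```

Separating the id-collection step into `jobs_to_cancel` also avoids relying on the ordering of the response dict's values, which is what made the `b[3]` indexing fragile.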
Answered By - kolas