python – How do I integrate Flask and Scrapy?
I am using Scrapy to crawl data, and I want to use the Flask web framework to show the results on a web page. But I don't know how to call my spiders from the Flask application. I tried using CrawlerProcess to call my spiders, but I got an error like this:
```
Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/Users/Rabbit/PycharmProjects/Flask_template/FlaskTemplate.py", line 102, in index
    process = CrawlerProcess()
  File "/Library/Python/2.7/site-packages/scrapy/crawler.py", line 210, in __init__
    install_shutdown_handlers(self._signal_shutdown)
  File "/Library/Python/2.7/site-packages/scrapy/utils/ossignal.py", line 21, in install_shutdown_handlers
    reactor._handleSignals()
  File "/Library/Python/2.7/site-packages/twisted/internet/posixbase.py", line 295, in _handleSignals
    _SignalReactorMixin._handleSignals(self)
  File "/Library/Python/2.7/site-packages/twisted/internet/base.py", line 1154, in _handleSignals
    signal.signal(signal.SIGINT, self.sigInt)
ValueError: signal only works in main thread
```
My Scrapy code looks like this:
```python
from scrapy.item import Item, Field
from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.http import Request


class EPGD(Item):
    genID = Field()
    genID_url = Field()
    taxID = Field()
    taxID_url = Field()
    familyID = Field()
    familyID_url = Field()
    chromosome = Field()
    symbol = Field()
    description = Field()


class EPGD_spider(Spider):
    name = "EPGD"
    allowed_domains = ["epgd.biosino.org"]
    term = "man"
    start_urls = ["http://epgd.biosino.org/EPGD/search/textsearch.jsp?textquery=" + term + "&submit=Feeling+Lucky"]

    db = DB_Con()  # the asker's own MongoDB helper
    collection = db.getcollection(name, term)

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//tr[@class="odd"]|//tr[@class="even"]')
        url_list = []
        base_url = "http://epgd.biosino.org/EPGD"

        for site in sites:
            item = EPGD()
            item['genID'] = map(unicode.strip, site.xpath('td[1]/a/text()').extract())
            item['genID_url'] = base_url + map(unicode.strip, site.xpath('td[1]/a/@href').extract())[0][2:]
            item['taxID'] = map(unicode.strip, site.xpath('td[2]/a/text()').extract())
            item['taxID_url'] = map(unicode.strip, site.xpath('td[2]/a/@href').extract())
            item['familyID'] = map(unicode.strip, site.xpath('td[3]/a/text()').extract())
            item['familyID_url'] = base_url + map(unicode.strip, site.xpath('td[3]/a/@href').extract())[0][2:]
            item['chromosome'] = map(unicode.strip, site.xpath('td[4]/text()').extract())
            item['symbol'] = map(unicode.strip, site.xpath('td[5]/text()').extract())
            item['description'] = map(unicode.strip, site.xpath('td[6]/text()').extract())
            self.collection.update({"genID": item['genID']}, dict(item), upsert=True)
            yield item

        sel_tmp = Selector(response)
        link = sel_tmp.xpath('//span[@id="quickPage"]')
        for site in link:
            url_list.append(site.xpath('a/@href').extract())

        for i in range(len(url_list[0])):
            if cmp(url_list[0][i], "#") == 0:
                if i + 1 < len(url_list[0]):
                    print url_list[0][i + 1]
                    actual_url = "http://epgd.biosino.org/EPGD/search/" + url_list[0][i + 1]
                    yield Request(actual_url, callback=self.parse)
                    break
                else:
                    print "The index is out of range!"
```
My Flask code is as follows:
```python
from flask import Flask, request, redirect, url_for
from scrapy.crawler import CrawlerProcess

app = Flask(__name__)


@app.route('/', methods=['GET', 'POST'])
def index():
    process = CrawlerProcess()
    process.crawl(EPGD_spider)
    return redirect(url_for('epgd'))  # endpoint name is the view function's name


@app.route('/details', methods=['GET'])
def epgd():
    if request.method == 'GET':
        results = db['EPGD_test'].find()  # db and toJson are the asker's own helpers
        json_results = []
        for result in results:
            json_results.append(result)
        return toJson(json_results)
```
How can I call my Scrapy spiders when using the Flask web framework?

Solution:
Adding an HTTP server in front of your spiders is not that easy (your traceback shows why: CrawlerProcess installs SIGINT/SIGTERM shutdown handlers when it is constructed, and Python only allows installing signal handlers from the main thread, while your view function runs in a different thread). There are a couple of options.
1. Python subprocess

If you are really limited to Flask and cannot use anything else, the only way to integrate Scrapy with Flask is by launching an external process for every spider crawl, as another answer recommends (note that your subprocess needs to be spawned in the proper Scrapy project directory).

The directory structure for all of the examples should look like this (I am using the dirbot test project):
```
> tree -L 1

├── dirbot
├── README.rst
├── scrapy.cfg
├── server.py
└── setup.py
```
Here is a code sample that launches Scrapy in a new process:
```python
# server.py
import subprocess

from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello_world():
    """
    Run spider in another process and store items in a file. Simply issue command:

        > scrapy crawl dmoz -o "output.json"

    wait for this command to finish, and read output.json back to the client.
    """
    spider_name = "dmoz"
    subprocess.check_output(['scrapy', 'crawl', spider_name, "-o", "output.json"])
    with open("output.json") as items_file:
        return items_file.read()


if __name__ == '__main__':
    app.run(debug=True)
```

Save the above as server.py and visit localhost:5000; you should see the scraped items.
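If your spider takes parameters (like the term in the question's EPGD_spider), the same subprocess approach can forward them with Scrapy's standard `-a name=value` spider-argument flag. Below is a minimal sketch, assuming a spider that reads self.term; the /search route and the term argument name are illustrative, not part of dirbot:

```python
# server.py (parameterized variant) -- a sketch, not part of the original answer
import os
import subprocess

from flask import Flask, request

app = Flask(__name__)


@app.route('/search')
def search():
    # read the search term from the query string, e.g. /search?term=man
    term = request.args.get('term', 'man')
    # remove stale output first: "-o" appends to an existing feed file,
    # which would produce invalid JSON on the second request
    if os.path.exists('output.json'):
        os.remove('output.json')
    # "-a term=..." becomes self.term inside the spider (assumed here)
    subprocess.check_output(
        ['scrapy', 'crawl', 'dmoz', '-a', 'term=' + term, '-o', 'output.json'])
    with open('output.json') as items_file:
        return items_file.read()


if __name__ == '__main__':
    app.run(debug=True)
```

Keep in mind that check_output blocks the Flask worker until the whole crawl finishes, so every request waits for the full crawl; that is the main drawback of this option.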
2. Twisted-Klein + Scrapy

A better approach is to use one of the existing projects that integrate Twisted with Werkzeug and expose an API similar to Flask, e.g. Twisted-Klein. Twisted-Klein lets you run your spiders asynchronously, in the same process as your web server. It is better in that it doesn't block on every request, and it allows you to simply return a Scrapy/Twisted Deferred from an HTTP route handler.

The following snippet integrates Twisted-Klein with Scrapy. Note that you need to create your own subclass of CrawlerRunner so that the crawler collects the items and returns them to the caller. This option is a bit more advanced: the Scrapy spiders run in the same process as the Python server, and the items are not stored in a file but kept in memory (so there is no disk writing/reading as in the previous example). Most importantly, it is asynchronous, and it all runs in one Twisted reactor.
```python
# server.py
import json

from klein import route, run
from scrapy import signals
from scrapy.crawler import CrawlerRunner

from dirbot.spiders.dmoz import DmozSpider


class MyCrawlerRunner(CrawlerRunner):
    """
    Crawler object that collects items and returns output after finishing crawl.
    """
    def crawl(self, crawler_or_spidercls, *args, **kwargs):
        # keep all items scraped
        self.items = []

        # create crawler (same as in base CrawlerProcess)
        crawler = self.create_crawler(crawler_or_spidercls)

        # handle each item scraped
        crawler.signals.connect(self.item_scraped, signals.item_scraped)

        # create twisted.Deferred launching crawl
        dfd = self._crawl(crawler, *args, **kwargs)

        # add callback - when crawl is done, call return_items
        dfd.addCallback(self.return_items)
        return dfd

    def item_scraped(self, item, response, spider):
        self.items.append(item)

    def return_items(self, result):
        return self.items


def return_spider_output(output):
    """
    :param output: items scraped by CrawlerRunner
    :return: json with list of items
    """
    # this just turns items into dictionaries;
    # you may want to use Scrapy's JSON serializer here
    return json.dumps([dict(item) for item in output])


@route("/")
def schedule(request):
    runner = MyCrawlerRunner()
    spider = DmozSpider()
    deferred = runner.crawl(spider)
    deferred.addCallback(return_spider_output)
    return deferred


run("localhost", 8080)
```
Save the above in a file server.py and put it in your Scrapy project directory. Now open localhost:8080; it will launch the dmoz spider and return the scraped items to the browser as JSON.
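One thing the snippet above does not cover is crawl failures: if the Deferred fails, Klein renders a bare traceback page. Below is a small sketch of error handling, assuming the MyCrawlerRunner, DmozSpider and return_spider_output definitions from the snippet above; addErrback is standard Twisted Deferred API, and the JSON error shape is illustrative:

```python
import json

from klein import route, run


@route("/")
def schedule(request):
    runner = MyCrawlerRunner()
    deferred = runner.crawl(DmozSpider())
    deferred.addCallback(return_spider_output)

    def on_error(failure):
        # turn the Twisted Failure into a JSON error response
        request.setResponseCode(500)
        return json.dumps({"error": str(failure.value)})

    deferred.addErrback(on_error)
    return deferred


run("localhost", 8080)
```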
3. ScrapyRT
Some problems appear when you try to add an HTTP app in front of your spiders: for example, you sometimes need to handle spider logs (you may need them in some cases), you need to handle spider exceptions somehow, and so on. There are projects that allow you to add an HTTP API to your spiders in an easier way, e.g. ScrapyRT. This is an app that adds an HTTP server to your Scrapy spiders and handles all of those problems for you (logging, spider errors, etc.).

So after installing ScrapyRT you only need to run:

```
> scrapyrt
```

in your Scrapy project directory, and it will start an HTTP server listening for requests. You then visit http://localhost:9080/crawl.json?spider_name=dmoz&url=http://alfa.com and it should launch your spider crawling the given url.
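From there the endpoint can be consumed like any JSON API. A minimal sketch using the requests library, assuming ScrapyRT's default response layout (a JSON object whose items key holds the scraped items):

```python
import requests

# GET http://localhost:9080/crawl.json?spider_name=dmoz&url=http://alfa.com
resp = requests.get(
    "http://localhost:9080/crawl.json",
    params={"spider_name": "dmoz", "url": "http://alfa.com"},
)
data = resp.json()

# "items" is assumed here to be ScrapyRT's key for the scraped items
for item in data.get("items", []):
    print(item)
```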
Disclaimer: I am one of the authors of ScrapyRT.