python – Running Scrapy, but it errors: No module named _util
I have installed Scrapy and can import it in Python, so everything looks fine. But when I try an example from the tutorial at http://scrapy-chs.readthedocs.io/zh_CN/0.24/intro/tutorial.html, it raises an error.

I run `scrapy crawl swspider` and get:
> 2018-05-14 14:24:16 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: tutorial)
> 2018-05-14 14:24:16 [scrapy.utils.log] INFO: Versions: lxml 3.2.1.0, libxml2 2.9.1, cssselect 1.0.3, parsel 1.4.0, w3lib 1.19.0, Twisted 18.4.0, Python 2.7.5 (default, Nov 20 2015, 02:00:19) - [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)], pyOpenSSL 0.13.1 (OpenSSL 1.0.1e-fips 11 Feb 2013), cryptography 0.8.2, Platform Linux-3.10.0-327.el7.x86_64-x86_64-with-centos-7.2.1511-Core
> 2018-05-14 14:24:16 [scrapy.crawler] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'tutorial'}
> Traceback (most recent call last):
>   File "/disk1/wulixin/install/bin/scrapy", line 11, in <module>
>     sys.exit(execute())
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/cmdline.py", line 150, in execute
>     _run_print_help(parser, _run_command, cmd, args, opts)
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/cmdline.py", line 90, in _run_print_help
>     func(*a, **kw)
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/cmdline.py", line 157, in _run_command
>     cmd.run(args, opts)
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
>     self.crawler_process.crawl(spname, **opts.spargs)
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/crawler.py", line 170, in crawl
>     crawler = self.create_crawler(crawler_or_spidercls)
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/crawler.py", line 198, in create_crawler
>     return self._create_crawler(crawler_or_spidercls)
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/crawler.py", line 203, in _create_crawler
>     return Crawler(spidercls, self.settings)
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/crawler.py", line 55, in __init__
>     self.extensions = ExtensionManager.from_crawler(self)
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/middleware.py", line 58, in from_crawler
>     return cls.from_settings(crawler.settings, crawler)
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/middleware.py", line 34, in from_settings
>     mwcls = load_object(clspath)
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/utils/misc.py", line 44, in load_object
>     mod = import_module(module)
>   File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
>     __import__(name)
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/extensions/memusage.py", line 16, in <module>
>     from scrapy.mail import MailSender
>   File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/mail.py", line 25, in <module>
>     from twisted.internet import defer, reactor, ssl
>   File "/disk1/wulixin/install/lib64/python2.7/site-packages/twisted/internet/ssl.py", line 230, in <module>
>     from twisted.internet._sslverify import (
>   File "/disk1/wulixin/install/lib64/python2.7/site-packages/twisted/internet/_sslverify.py", line 15, in <module>
>     from OpenSSL._util import lib as pyOpenSSLlib
> ImportError: No module named _util
Solution:

The log shows pyOpenSSL 0.13.1, which is too old for Twisted 18.4.0: Twisted's `_sslverify` imports `OpenSSL._util`, a module that old pyOpenSSL releases do not have. You need to upgrade pyOpenSSL. Note that the original suggestion combined `sudo` with `--user`, which would install into root's home directory; with `--user` you should run pip as your own user:

pip install --user --upgrade pyopenssl
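After upgrading, you can verify that the missing module is now importable. This is a minimal diagnostic sketch (the helper name `has_openssl_util` is made up for illustration); it assumes `OpenSSL._util` is present in the cffi-based pyOpenSSL releases (roughly 0.14 and later, when pyOpenSSL was rewritten on top of `cryptography`):

```python
import importlib.util


def has_openssl_util():
    """Return True if OpenSSL._util can be found by the import system.

    Twisted's _sslverify does `from OpenSSL._util import lib`, which is
    exactly the import that fails in the traceback above when pyOpenSSL
    is too old (or not installed at all).
    """
    try:
        # find_spec locates the module without fully importing it
        return importlib.util.find_spec("OpenSSL._util") is not None
    except ImportError:
        # The parent package OpenSSL itself is not installed
        return False


print(has_openssl_util())
```

If this prints `False` after the upgrade, check which Python environment pip installed into; the traceback shows packages under `/disk1/wulixin/install/lib/python2.7/site-packages`, so the upgrade must land in that same environment.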