urls = []
# Collect the URLs
for human in human_list:
    url = human.find('a')['href']
    urls.append('https://www.wikidata.org' + url)

# Fetch each page and extract its name and description
def parser(url):
    req = requests.get(url)
    # Parse the fetched text into HTML with BeautifulSoup
    soup = BeautifulSoup(req.text, "lxml")
    # Extract the name and description
    name = soup.find('span', class_="wikibase-title-label")
    desc = soup.find('span', class_="wikibase-descriptionview-text")
    if name is not None and desc is not None:
        print('%-40s,\t%s' % (name.text, desc.text))
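The driver loop that produces the timing output below is not shown above. A minimal sketch of the synchronous driver, assuming urls has been built as above, would be:

import time

t1 = time.time()
for url in urls:
    parser(url)  # fetch and parse the pages one at a time
t2 = time.time()
print('Synchronous method, total time elapsed: %s' % (t2 - t1))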
##################################################
George Washington , first President of the United States
Douglas Adams , British author and humorist (1952–2001)
......
Willoughby Newton , Politician from Virginia, USA
Mack Wilberg , American conductor
Synchronous method, total time elapsed: 724.9654655456543
##################################################
With the synchronous approach, the total run time is about 725 seconds, i.e. more than 12 minutes.
The synchronous method is conceptually simple and easy to implement, but it is inefficient and slow. So let's try concurrency instead.
urls = []
# Collect the URLs
for human in human_list:
    url = human.find('a')['href']
    urls.append('https://www.wikidata.org' + url)

# Fetch each page and extract its name and description
def parser(url):
    req = requests.get(url)
    # Parse the fetched text into HTML with BeautifulSoup
    soup = BeautifulSoup(req.text, "lxml")
    # Extract the name and description
    name = soup.find('span', class_="wikibase-title-label")
    desc = soup.find('span', class_="wikibase-descriptionview-text")
    if name is not None and desc is not None:
        print('%-40s,\t%s' % (name.text, desc.text))
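The URL collection and parser are the same as in the synchronous version; only the driver changes. The multithreaded driver is not shown here, but a minimal sketch using concurrent.futures (the worker count of 20 is an assumption, not taken from the original code) would be:

import time
from concurrent.futures import ThreadPoolExecutor, wait, ALL_COMPLETED

t1 = time.time()
with ThreadPoolExecutor(max_workers=20) as executor:  # 20 threads is an assumed setting
    futures = [executor.submit(parser, url) for url in urls]
    wait(futures, return_when=ALL_COMPLETED)  # block until every page has been processed
t2 = time.time()
print('Concurrent method, total time elapsed: %s' % (t2 - t1))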
##################################################
Larry Sanger , American former professor, co-founder of Wikipedia, founder of Citizendium and other projects
Ken Jennings , American game show contestant and writer
......
Antoine de Saint-Exupery , French writer and aviator
Michael Jackson , American singer, songwriter and dancer
Concurrent method, total time elapsed: 226.7499692440033
##################################################
With multithreading, the scraper finishes in about 227 seconds, roughly one third of the synchronous run time, a clear speedup. However, the pages are processed in no particular order, and thread switching carries noticeable overhead: the more threads, the larger the overhead.
For a speed comparison between multithreading and the synchronous approach, see the article "Python爬虫之多线程下载豆瓣Top250电影图片" (downloading the Douban Top 250 movie posters with a multithreaded Python crawler).
urls = []
# Collect the URLs
for human in human_list:
    url = human.find('a')['href']
    urls.append('https://www.wikidata.org' + url)

# Issue an asynchronous HTTP request
async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

# Parse a page
async def parser(html):
    # Parse the fetched text into HTML with BeautifulSoup
    soup = BeautifulSoup(html, "lxml")
    # Extract the name and description
    name = soup.find('span', class_="wikibase-title-label")
    desc = soup.find('span', class_="wikibase-descriptionview-text")
    if name is not None and desc is not None:
        print('%-40s,\t%s' % (name.text, desc.text))

# Download a page, then extract its name and description
async def download(url):
    async with aiohttp.ClientSession() as session:
        try:
            html = await fetch(session, url)
            await parser(html)
        except Exception as err:
            print(err)

# Drive the asynchronous I/O with the asyncio module
loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(download(url)) for url in urls]
tasks = asyncio.gather(*tasks)
loop.run_until_complete(tasks)
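Note that on Python 3.7+ the same driver can be written with asyncio.run, and calling asyncio.get_event_loop() outside a running loop is deprecated in recent Python versions. An equivalent, assuming the same download coroutine and urls list, is:

async def main():
    # Run every download concurrently and wait for all of them to finish
    await asyncio.gather(*(download(url) for url in urls))

asyncio.run(main())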
##################################################
Frédéric Taddeï , French journalist and TV host
Gabriel Gonzáles Videla , Chilean politician
......
Denmark , sovereign state and Scandinavian country in northern Europe
Usain Bolt , Jamaican sprinter and soccer player
Asynchronous method, total time elapsed: 126.9002583026886
##################################################
The asynchronous approach combines asynchrony and concurrency, so the clear speedup is no surprise: about one sixth of the synchronous run time. It is efficient, but it requires learning asynchronous programming, which takes some time to master.
For a speed comparison between the asynchronous and synchronous approaches, see the article "利用aiohttp实现异步爬虫" (building an asynchronous crawler with aiohttp).
If 127 seconds still feels slow, try the following asynchronous variant. The only difference from the previous asynchronous code is that regular expressions replace BeautifulSoup for extracting the content from each page:
import requests
from bs4 import BeautifulSoup
import time
import aiohttp
import asyncio
import re
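The full listing is not reproduced here; the key change is the parser. A minimal sketch of a regex-based parser, written against the two span classes used earlier (the patterns are hypothetical, not the article's exact expressions), could be:

# Sketch of a regex-based parser (hypothetical patterns): skip building a DOM
# and pull the label and description straight out of the raw HTML.
NAME_RE = re.compile(r'<span class="wikibase-title-label"[^>]*>(.*?)</span>')
DESC_RE = re.compile(r'<span class="wikibase-descriptionview-text"[^>]*>(.*?)</span>')

async def parser(html):
    name = NAME_RE.search(html)
    desc = DESC_RE.search(html)
    if name is not None and desc is not None:
        print('%-40s,\t%s' % (name.group(1), desc.group(1)))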
##################################################
Dejen Gebremeskel , Ethiopian long-distance runner
Erik Kynard , American high jumper
......
Buzz Aldrin , American astronaut
Egon Krenz , former General Secretary of the Socialist Unity Party of East Germany
Asynchronous method (regular expressions), total time elapsed: 16.521944999694824
##################################################
16.5 seconds, only about one forty-third of the synchronous run time, is an astonishing speed (thanks to the reader who contributed this experiment). My own asynchronous implementation used BeautifulSoup to parse the pages and took 127 seconds; I did not expect regular expressions to make such a dramatic difference. Clearly, convenient as BeautifulSoup is for parsing pages, it is still the bottleneck in the asynchronous approach. The downside of the regex approach is that when the content to be scraped gets more complicated, ordinary regular expressions are no longer up to the job and another solution is needed.
In items.py of the Scrapy project (wikiDataScrapy), define the fields to be extracted:

class WikidatascrapyItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    desc = scrapy.Field()
Then create wikiSpider.py under the spiders folder, with the following code:
import scrapy.cmdline
from wikiDataScrapy.items import WikidatascrapyItem
import requests
from bs4 import BeautifulSoup