BeautifulSoup Example (Site-Level Crawler) (Non-AJAX Requests)

二叉树上的我

2020-02-27
Straight to the code (a lightly reworked version of my old crawler for this very forum) ;)
PS: an AJAX request loads content dynamically after the initial page load, so this code cannot scrape dynamically loaded pages (see the sketch after the main code).
Python:
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin


headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36'
}
params = {
    '_v': '5.12.0'
}
base_url = 'http://www.lolichan.vip/'

with open('./萌新交流社论坛爬虫.text', 'w', encoding='utf-8') as fp:
    # Fetch the forum index and collect the board links.
    response = requests.get(url=base_url, params=params, headers=headers).text
    soup = BeautifulSoup(response, 'lxml')
    class_list = soup.select('.node-title')
    for li in class_list:
        try:
            detail_url = urljoin(base_url, li.a['href'])
            detail_page_text = requests.get(url=detail_url, params=params, headers=headers).text
            detail_soup = BeautifulSoup(detail_page_text, 'lxml')
            # Each .structItem-title holds one thread's title link.
            page_list = detail_soup.select('.structItem-title')
            print('Fetched board link')
        except (requests.RequestException, AttributeError, TypeError, KeyError):
            continue
        for i in page_list:
            try:
                page_title = i.a.string
                page_url = urljoin(base_url, i.a['href'])
                page_text = requests.get(url=page_url, headers=headers).text
                page_soup = BeautifulSoup(page_text, 'lxml')
                # .bbWrapper is the post body container in XenForo.
                div_tag = page_soup.find('div', class_='bbWrapper')
                content = div_tag.text
                fp.write(page_title + ':' + content + '\n')
                print('Scraped thread page')
            except (requests.RequestException, AttributeError, TypeError, KeyError):
                continue
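About the AJAX limitation mentioned above: content loaded dynamically never appears in the initial HTML that requests receives, so the parser finds nothing. One common workaround is to call the underlying data endpoint directly, which you can spot in the browser's developer tools (Network tab, XHR/Fetch filter) while the page updates. A minimal sketch; the endpoint URL, parameters, and JSON keys below are hypothetical placeholders:
Python:
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36'
}
# Hypothetical endpoint -- substitute the real XHR request the page makes.
ajax_url = 'http://www.lolichan.vip/example-ajax-endpoint'
resp = requests.get(ajax_url, params={'page': 2}, headers=headers)
data = resp.json()  # such endpoints usually return JSON rather than HTML
for item in data.get('items', []):  # 'items' is an assumed key
    print(item)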
One last thing: this forum really ought to add a robots.txt. Otherwise, when somebody's crawler scrapes it clean, there are no grounds to complain.
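On the crawler side, the standard-library urllib.robotparser can check a site's robots.txt before fetching. A minimal sketch, assuming the forum publishes one (the path being checked is made up for illustration):
Python:
from urllib.robotparser import RobotFileParser

rp = RobotFileParser('http://www.lolichan.vip/robots.txt')
rp.read()  # download and parse robots.txt; if it does not exist, everything is allowed
# Check an example path before requesting it.
if rp.can_fetch('MyCrawler/1.0', 'http://www.lolichan.vip/forums/'):
    print('robots.txt allows this URL')
else:
    print('robots.txt disallows this URL')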
[Demo video]
CC BY-NC 4.0
 