Master Python's Hardest Concept in Two Sentences: Metaclasses (Part 4)


Because IntegerField('id') is an instance of a subclass of Field, the metaclass's __new__ fires automatically as the class body is processed: it stores IntegerField('id') into __mappings__ and removes the original key-value pair.
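As a quick refresher, here is a minimal sketch of that metaclass, assuming the Field subclasses (StringField, IntegerField) and their __str__ methods from the earlier installments of this series:
Python

# A minimal ModelMetaclass sketch, assuming Field and its subclasses
# from the earlier parts of this series.
class ModelMetaclass(type):
    def __new__(cls, name, bases, attrs):
        # Leave the base Model class itself untouched.
        if name == 'Model':
            return type.__new__(cls, name, bases, attrs)
        print('Found model: %s' % name)
        mappings = dict()
        for k, v in attrs.items():
            if isinstance(v, Field):
                # Every Field instance is collected...
                print('Found mapping: %s ==> %s' % (k, v))
                mappings[k] = v
        for k in mappings.keys():
            # ...and the original class attribute is removed.
            attrs.pop(k)
        attrs['__mappings__'] = mappings
        return type.__new__(cls, name, bases, attrs)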
Two begets three, three begets all things
When you create an instance and then call its save() method:
Python
 
u = User(id=12345, name='Batman', email='batman@nasa.org', password='iamback')
u.save()
At this point, the "two begets three" step completes first:

  1. First, Model.__setattr__ is called, loading the key-value pairs into the instance's own storage.
  2. Then the metaclass's "gift" comes into play: ModelMetaclass.__new__ (which already ran when the User class was defined) has automatically moved every Field instance on the class into __mappings__, reachable from the instance as u.__mappings__ (see the sketch just after this list).
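For reference, here is a minimal sketch of the Model base class that these two steps rely on, assuming the dict-based Model built in the earlier installments:
Python

# A minimal Model sketch, assuming ModelMetaclass from above.
class Model(dict, metaclass=ModelMetaclass):
    def __init__(self, **kw):
        super(Model, self).__init__()
        for k, v in kw.items():
            # Goes through Model.__setattr__, as step 1 describes.
            setattr(self, k, v)

    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError("'Model' object has no attribute '%s'" % key)

    def __setattr__(self, key, value):
        # u.x = y is stored as a dict entry.
        self[key] = value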
Next comes the "three begets all things" step:
u.save() simulates writing to a database. Here we merely iterate over __mappings__, assemble a mock SQL statement, and print it; in a real application the SQL statement would be sent to an actual database.
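A save() method consistent with this description might look like the sketch below; note that it is reconstructed from the output printed underneath, not copied from the original source:
Python

# Inside class Model -- a sketch of save(), reconstructed from the
# output shown below.
def save(self):
    fields = []
    args = []
    for k, v in self.__mappings__.items():
        fields.append(v.name)
        args.append(getattr(self, k, None))
    sql = 'insert into {} ({}) values ({})'.format(
        self.__class__.__name__,
        ','.join(fields),
        ','.join(str(a) for a in args))
    print('SQL: ' + sql)
    print('ARGS: ' + str(args))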
The output is:
Python
 
Found model: User
Found mapping: name ==> <StringField:username>
Found mapping: password ==> <StringField:password>
Found mapping: id ==> <IntegerField:id>
Found mapping: email ==> <StringField:email>
SQL: insert into User (username,password,id,email) values (Batman,iamback,12345,batman@nasa.org)
ARGS: ['Batman', 'iamback', 12345, 'batman@nasa.org']
Young creator, you have now walked with me through the grand journey by which the "Dao" evolves into "all things"; this is also the core principle behind Django's Model module. Next, join me for a more entertaining bit of crawler practice (yes, you now qualify as a junior hacker): scraping web proxies!
Challenge 2: scraping web proxies
Warm-up: crawl a page for fun
Make sure the requests and pyquery packages are installed (e.g. via pip install requests pyquery).
Python
 
# File: get_page.py
import requests

base_headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36',
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Accept-Language': 'zh-CN,zh;q=0.8'
}

def get_page(url):
    # Copy the shared headers so callers can modify them safely.
    headers = dict(base_headers)
    print('Getting', url)
    try:
        r = requests.get(url, headers=headers)
        print('Getting result', url, r.status_code)
        if r.status_code == 200:
            return r.text
    except requests.ConnectionError:
        # Network failure: fall through and return None.
        print('Crawling Failed', url)
    return None
With this, we can use the requests package to pull down Baidu's page source.
Give it a try: fetch Baidu
Paste this snippet at the end of get_page.py; delete it once you are done testing.
Python
if __name__ == '__main__':
    rs = get_page('https://www.baidu.com')
    print('result:\r\n', rs)
Give it a try: fetch some proxies
Paste this snippet at the end of get_page.py; delete it once you are done testing.
Python
 
if __name__ == '__main__':
    from pyquery import PyQuery as pq
    start_url = 'http://www.proxy360.cn/Region/China'
    print('Crawling', start_url)
    html = get_page(start_url)
    if html:
        doc = pq(html)
        # Each proxy entry sits in a div named list_proxy_ip.
        lines = doc('div[name="list_proxy_ip"]').items()
        for line in lines:
            # The first two cells hold the IP and the port.
            ip = line.find('.tbBottomLine:nth-child(1)').text()
            port = line.find('.tbBottomLine:nth-child(2)').text()
            print(ip + ':' + port)
Now to the main event: using a metaclass to batch-crawl proxies.
Batch-crawling proxies
Python
 
from get_page import get_page
from pyquery import PyQuery as pq

# The Dao begets one: create the metaclass that extracts proxies.
class ProxyMetaclass(type):
    """
    Metaclass that adds two attributes to the ProxyGetter class:
    __CrawlFunc__ and __CrawlFuncCount__, i.e. the list of crawler
    functions and how many of them there are.
    """
    def __new__(cls, name, bases, attrs):
        count = 0
        attrs['__CrawlFunc__'] = []
        attrs['__CrawlName__'] = []
        for k, v in attrs.items():
            if 'crawl_' in k:
                # Register every crawl_* method by name and by function.
                attrs['__CrawlName__'].append(k)
                attrs['__CrawlFunc__'].append(v)
                count += 1
        for k in attrs['__CrawlName__']:
            # Remove the original methods; they remain reachable
            # through __CrawlFunc__.
            attrs.pop(k)
        attrs['__CrawlFuncCount__'] = count
        return type.__new__(cls, name, bases, attrs)

# One begets two: create the proxy-getter class.
class ProxyGetter(object, metaclass=ProxyMetaclass):
    def get_raw_proxies(self, site):
        proxies = []
        print('Site', site)
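To see what the metaclass actually does, here is a small, hypothetical demonstration (DemoGetter and crawl_example are made-up names, not part of this article's code): any method whose name contains crawl_ is registered at class-creation time and then removed from the class body.
Python

# Hypothetical demo class -- not part of the original article's code.
class DemoGetter(object, metaclass=ProxyMetaclass):
    def crawl_example(self):
        yield '127.0.0.1:8080'

print(DemoGetter.__CrawlName__)              # ['crawl_example']
print(DemoGetter.__CrawlFuncCount__)         # 1
print(hasattr(DemoGetter, 'crawl_example'))  # False: popped by the metaclass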

