Scraping 天天基金 fund data into Excel (detailed fund info)

船长_Kevin, 2022-01-13

A young man from Quanzhou recently trended on social media after spending over 10,000 yuan on 1,314 different funds at 10 yuan each: "I bought for days on end; it's the first time spending money has made my hand cramp."

There is a famous saying in investing: don't put all your eggs in one basket. But have you ever seen someone with more baskets than eggs?

So how do you become a fund collector on that scale?

Naturally, it starts with gathering and filtering fund information.

The fund screening page on 天天基金 is a good place to start.

The code is as follows:

# -*- coding: utf-8 -*-
import requests
import re
import random
from urllib.parse import urlencode, urlparse
import pandas as pd  # used to build the result table
# Pool of User-Agent strings; one is picked at random so requests look less uniform
my_headers = [
    "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/537.75.14",
    "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Win64; x64; Trident/6.0)",
    'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11',
    'Opera/9.25 (Windows NT 5.1; U; en)',
    'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)',
    'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)',
    'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12',
    'Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/1.2.9',
    "Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.7 (KHTML, like Gecko) Ubuntu/11.04 Chromium/16.0.912.77 Chrome/16.0.912.77 Safari/535.7",
    "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:10.0) Gecko/20100101 Firefox/10.0 "
]
headers = {'User-Agent': random.choice(my_headers)}

def get_page(url):  # download a page and return the decoded HTML
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        return response.content.decode("utf-8")  # decode explicitly to avoid mojibake
    else:
        return 'Fetch failed!'

def parse_html(html_content):
    # Capture groups: name, fund code, unit NAV, daily change, fund type,
    # management company, AUM ('---' or a figure in 100M CNY), fund manager.
    # The Chinese literals must match the page markup exactly.
    pattern = re.compile(r'.*?fname fl.*?>(.*?)\D(\d+)\D</a>.*?单位净值.*?>(.*?)</span>.*?<span.*?>(.*?)</span>.*?基金类型:(.*?)</li>.*?管&nbsp;理&nbsp;人:.*?>(.*?)</a>.*?规&nbsp;&nbsp;&nbsp;&nbsp;模</a>:((---)|(.*?)亿元).*?基金经理:.*?>(.*?)</a>', re.S)
    return re.findall(pattern, html_content)

def parse_html1(html):
    # Extract the total page count ("allPages") from the API response
    pattern = re.compile(r'.*?allPages.*?(\d+)', re.S)
    return re.findall(pattern, html)
ex_name = input('Table name: ') + '.csv'
url = input('Enter URL: ')
url_parse_fragment = urlparse(url).fragment            # the part after '#'
url_parse_fragment_l = url_parse_fragment.split(";")   # ';'-separated filter tokens
new_dict = {}
for data in url_parse_fragment_l:
    # Each token is a two-character key followed by its value, e.g. 'ft22'
    parts = re.split(r'(\w{2})', data, 1)
    new_dict[parts[1]] = parts[2]
base_url = 'http://fund.eastmoney.com/data/FundGuideapi.aspx?'
new_url = base_url + urlencode(new_dict)
html = get_page(new_url)
result2 = parse_html1(html)
all_page = int(result2[0])                             # total number of result pages
shuju = pd.DataFrame([], columns=['Name', 'Unit NAV', 'Change', 'Fund type',
                                  'Manager', 'AUM (100M CNY)', 'Fund manager'])
for i in range(all_page):
    new_dict['pi'] = str(i + 1)            # 'pi' is the page-index parameter
    new_url = base_url + urlencode(new_dict)
    html_content = get_page(new_url)
    for item in parse_html(html_content):
        daima = 'Code:' + item[1]          # fund code, used as the row index
        shuju.loc[daima, 'Name'] = item[0]
        shuju.loc[daima, 'Unit NAV'] = item[2]
        shuju.loc[daima, 'Change'] = item[3]
        shuju.loc[daima, 'Fund type'] = item[4]
        shuju.loc[daima, 'Manager'] = item[5]
        shuju.loc[daima, 'AUM (100M CNY)'] = item[7] + item[8]  # one alternative is always empty
        shuju.loc[daima, 'Fund manager'] = item[9]
shuju.to_csv(ex_name, encoding='utf-8-sig')  # utf-8-sig adds a BOM so Excel renders non-ASCII text correctly
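Since the goal is to get the data into Excel, note that pandas can also write a native .xlsx workbook directly; this one-liner assumes the optional openpyxl package is installed (pip install openpyxl):

# Optional: write a real Excel workbook instead of a CSV (assumes openpyxl is installed)
shuju.to_excel(ex_name.replace('.csv', '.xlsx'))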

On 天天基金's fund screening page, first pick your filter criteria, then copy the resulting URL from the address bar; the script turns that URL's fragment into query parameters for the FundGuideapi endpoint, as sketched below.
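For illustration only (the path and fragment in this URL are made up; real ones depend on the filters you pick), this is the conversion the script performs:

from urllib.parse import urlparse, urlencode
import re

url = 'http://fund.eastmoney.com/daogou/#ft22;tc05;pi1'  # hypothetical copied URL
params = {}
for token in urlparse(url).fragment.split(';'):
    _, key, value = re.split(r'(\w{2})', token, 1)  # first two characters are the key
    params[key] = value
print('http://fund.eastmoney.com/data/FundGuideapi.aspx?' + urlencode(params))
# -> http://fund.eastmoney.com/data/FundGuideapi.aspx?ft=22&tc=05&pi=1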


Run the script, enter a table name and the copied URL, and you get the result below.

This retrieves all 89 funds matching the "互联网服务" (Internet services) filter.

Download link for the finished program: https://pan.baidu.com/s/1xB0Rna5M4nhcyx7ZsVGmvA (extraction code: ix33)

Of course, with further optimization the code can retrieve even more detailed information per fund, for example along these lines:
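A rough sketch of the idea: issue one extra request per fund code against that fund's detail page and scrape additional fields. Both the URL pattern and the regex below are assumptions for illustration only; verify them against the live page markup before relying on them.

def get_fund_detail(code):
    # Assumed detail-page URL pattern (illustrative only)
    detail_url = 'http://fund.eastmoney.com/%s.html' % code
    detail_html = get_page(detail_url)
    # Hypothetical extra field (e.g. inception date); this markup is assumed
    m = re.search('成 立 日</span>:(.*?)<', detail_html)
    return m.group(1).strip() if m else '---'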

The improved code and finished script will be packaged and released in due course.
