First, English word frequency counting. This needs no jieba Chinese segmentation library; just normalize the case and strip the special punctuation, then use a dictionary:
the split function tokenizes the text, and the dictionary's get method accumulates the word → count mapping. A minimal sketch follows.
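Here is that counting idiom on a throwaway string (the sample sentence and punctuation list are made up for illustration; the full program appears later):

text = "To be, or not to be: that is the question."
text = text.lower()                          # normalize case
for ch in ',.:;!?':                          # replace punctuation with spaces
    text = text.replace(ch, ' ')
counts = {}
for word in text.split():                    # split on whitespace
    counts[word] = counts.get(word, 0) + 1   # get returns 0 for unseen words
print(counts["to"])                          # prints 2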
Next, Chinese word-segmentation statistics on 《三国演义》 (Romance of the Three Kingdoms), to find which characters appear most often. The code is given after the notes below.
Notes on the jieba library: use open to open the txt file and read its text, then segment it with jieba's lcut method (jieba offers three segmentation modes; see the short demo just below).
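A quick demo of the three modes, on an arbitrary sample sentence:

import jieba
s = "中华人民共和国是一个伟大的国家"
print(jieba.lcut(s))                 # precise mode: cuts the text into non-overlapping words
print(jieba.lcut(s, cut_all=True))   # full mode: lists every possible word, with overlaps
print(jieba.lcut_for_search(s))      # search-engine mode: precise mode, then re-cuts long words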
Then use a dictionary whose entries map name → occurrence count: a for ... in ... loop over the segmented text fills the dictionary, and another loop over the dictionary prints the results.
A dictionary cannot be sorted directly, so convert it to a list and call sort, passing a lambda as the key so the entries are ordered by value (the occurrence count); then print the results.
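A sketch of the dict → list → sort step (the counts here are made-up placeholders):

counts = {"孔明": 3, "曹操": 5, "张飞": 1}
items = list(counts.items())                  # list of (word, count) tuples
items.sort(key=lambda x: x[1], reverse=True)  # sort by the count, largest first
for word, count in items:
    print(word, count)                        # 曹操 5 / 孔明 3 / 张飞 1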
This yields word frequencies, but many of the top entries are not character names, so the results need further refinement.
Build a set of non-name words and add the high-ranking non-names to it; if a word is in the set, either skip it during counting or delete it from the results.
Then merge the different titles of the same person, using branch statements that add each alias to a single canonical name's count... (a table-driven alternative is sketched below).
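As an alternative to the chain of if/elif branches used in V2 below, the alias merging can be table-driven; this alias map is illustrative and deliberately incomplete:

aliases = {"诸葛亮": "孔明", "孔明曰": "孔明",
           "关公": "关羽", "云长": "关羽",
           "玄德": "刘备", "玄德曰": "刘备",
           "孟德": "曹操", "丞相": "曹操"}
word = "云长"
rword = aliases.get(word, word)   # map an alias to its canonical name, else keep the word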
Hamlet word frequency count (with the original Hamlet text)
#CalHamletV1.py
def getText():
    txt = open("hamlet.txt", "r").read()
    txt = txt.lower()
    for ch in '!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~':
        txt = txt.replace(ch, " ")   # replace the special characters in the text with spaces
    return txt

hamletTxt = getText()
words = hamletTxt.split()
counts = {}
for word in words:
    counts[word] = counts.get(word, 0) + 1
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(10):
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
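One caveat: open("hamlet.txt", "r") uses the platform's default encoding and the file is never closed. A slightly safer variant of getText, assuming the file is UTF-8 encoded:

def getText():
    with open("hamlet.txt", "r", encoding="utf-8") as f:   # the with block closes the file
        txt = f.read().lower()
    for ch in '!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~':
        txt = txt.replace(ch, " ")
    return txt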
Character appearance count for 《三国演义》, part 1 (with the original 《三国演义》 text)
#CalThreeKingdomsV1.py
import jieba
txt = open("threekingdoms.txt", "r", encoding='utf-8').read()
words = jieba.lcut(txt)
counts = {}
for word in words:
    if len(word) == 1:   # skip single-character tokens (mostly function words)
        continue
    else:
        counts[word] = counts.get(word, 0) + 1
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(15):
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
Character appearance count for 《三国演义》, part 2 (with the original 《三国演义》 text)
#CalThreeKingdomsV2.py
import jieba
excludes = {"将军", "却说", "荆州", "二人", "不可", "不能", "如此"}
txt = open("threekingdoms.txt", "r", encoding='utf-8').read()
words = jieba.lcut(txt)
counts = {}
for word in words:
    if len(word) == 1:
        continue
    elif word == "诸葛亮" or word == "孔明曰":
        rword = "孔明"
    elif word == "关公" or word == "云长":
        rword = "关羽"
    elif word == "玄德" or word == "玄德曰":
        rword = "刘备"
    elif word == "孟德" or word == "丞相":
        rword = "曹操"
    else:
        rword = word
    counts[rword] = counts.get(rword, 0) + 1
for word in excludes:
    del counts[word]
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(10):
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
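One fragility worth noting in V2: del counts[word] raises a KeyError if an excluded word never appeared after segmentation. pop with a default is a safer drop-in for that loop:

for word in excludes:
    counts.pop(word, None)   # remove the entry if present; no error when it is missing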