English word-frequency counting breaks down into two main steps: normalizing the raw text (lowercasing it and replacing punctuation with spaces), then counting each word's occurrences and sorting by frequency.
```python
# CalHamletV1.py
def getText():
    txt = open("hamlet.txt", "r").read()
    txt = txt.lower()
    # Replace common punctuation with spaces so split() isolates words
    for ch in '!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~':
        txt = txt.replace(ch, " ")
    return txt

hamletTxt = getText()
words = hamletTxt.split()
counts = {}
for word in words:
    counts[word] = counts.get(word, 0) + 1
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(10):
    word, count = items[i]
    print(f"{word:<10}{count:5}")
```

The script prints the ten most frequent words in the English text together with their counts.
Chinese word-frequency counting mainly involves the following steps: segmenting the text into words with jieba, counting each word's occurrences, and sorting by frequency.
To improve the quality of the statistics and reduce interference from irrelevant words, the counting can be refined further: single-character tokens are skipped, different names for the same character are merged, and common noise words are excluded. Here is the optimized code:
```python
# CalThreeKingdomsV2.py
import jieba

# Frequent words that are not character names and should be dropped
excludes = {"将军", "却说", "荆州", "二人", "不可", "不能", "如此"}
txt = open("threekingdoms.txt", "r", encoding='utf-8').read()
words = jieba.lcut(txt)
counts = {}
for word in words:
    if len(word) == 1:
        # Skip single characters; they are rarely meaningful words
        continue
    # Merge alternative names for the same character
    elif word == "诸葛亮" or word == "孔明曰":
        rword = "孔明"
    elif word == "关公" or word == "云长":
        rword = "关羽"
    elif word == "玄德" or word == "玄德曰":
        rword = "刘备"
    elif word == "孟德" or word == "丞相":
        rword = "曹操"
    else:
        rword = word
    counts[rword] = counts.get(rword, 0) + 1
for word in excludes:
    # pop with a default avoids a KeyError if the word never appeared
    counts.pop(word, None)
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(10):
    word, count = items[i]
    print(f"{word:<10}{count:5}")
```

The script prints the ten most frequent words in the Chinese text together with their counts.
Applying the word-frequency methods above to vocabulary study for the postgraduate entrance English exam can help candidates quickly identify high-frequency key words. By analyzing the common words in a body of text, candidates can review and practice in a more targeted way.
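As a minimal sketch of that idea, the snippet below ranks high-frequency vocabulary in an exam-style passage using `collections.Counter`. The sample text and the small stopword set are illustrative assumptions, not material from the article:

```python
# Sketch: ranking high-frequency vocabulary in an exam-style passage.
# STOPWORDS and the sample text are hypothetical examples.
from collections import Counter
import string

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "that"}

def top_words(text, n=5):
    """Lowercase, strip punctuation, drop stopwords, return the n most common words."""
    for ch in string.punctuation:
        text = text.replace(ch, " ")
    words = [w for w in text.lower().split() if w not in STOPWORDS]
    return Counter(words).most_common(n)

sample = ("The economy of attention is central to the modern economy; "
          "attention shapes the economy and the market of attention.")
for word, count in top_words(sample, 3):
    print(f"{word:<10}{count:5}")
```

With a real corpus of past exam reading passages, the same function would surface the vocabulary most worth memorizing first.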