Python Machine Learning from Beginner to Advanced: Fast Text Processing (with Code)

Python Machine Learning: Text Processing

  • 🌸Personal homepage: JoJo的数据分析历险记
  • 📝About me: a senior statistics undergraduate, admitted through recommendation to continue with a statistics master's at a top-3 statistics program
  • 💌If this article helps you, feel free to follow, like, save, and subscribe to the column

🍁1. Cleaning Text

Perform some basic cleaning on unstructured text data with Python's built-in string methods (a small split sketch appears at the end of this section):

  • strip
  • split
  • replace
# Create text data
text_data = ['   Interrobang. By Aishwarya Henriette   ',
             'Parking And goding. by karl fautier',
             '   Today is the night. by jarek prakash    ']
# Strip whitespace from both ends of each string
stripwhitespace = [string.strip() for string in text_data]
stripwhitespace
['Interrobang. By Aishwarya Henriette',
 'Parking And goding. by karl fautier',
 'Today is the night. by jarek prakash']
# Remove periods
remove_periods = [string.replace('.','') for string in text_data]
remove_periods
['   Interrobang By Aishwarya Henriette   ',
 'Parking And goding by karl fautier',
 '   Today is the night by jarek prakash    ']
# Define a function that converts a string to uppercase
def capitalizer(string):
    return string.upper()
[capitalizer(string) for string in remove_periods]
['   INTERROBANG BY AISHWARYA HENRIETTE   ',
 'PARKING AND GODING BY KARL FAUTIER',
 '   TODAY IS THE NIGHT BY JAREK PRAKASH    ']
# Use a regular expression to replace every letter with 'x'
import re
def replace_letters_with_x(string):
    return re.sub(r'[a-zA-Z]','x',string)
[replace_letters_with_x(string) for string in remove_periods]
['   xxxxxxxxxxx xx xxxxxxxxx xxxxxxxxx   ',
 'xxxxxxx xxx xxxxxx xx xxxx xxxxxxx',
 '   xxxxx xx xxx xxxxx xx xxxxx xxxxxxx    ']
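
The bullet list at the start of this section also mentions split, which the examples above do not show. Here is a minimal sketch (my own addition, reusing the stripwhitespace list created earlier) that breaks each cleaned string into words:

# Split each string into a list of words on whitespace
[string.split() for string in stripwhitespace]
[['Interrobang.', 'By', 'Aishwarya', 'Henriette'],
 ['Parking', 'And', 'goding.', 'by', 'karl', 'fautier'],
 ['Today', 'is', 'the', 'night.', 'by', 'jarek', 'prakash']]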

🍂2. Parsing and Cleaning HTML

# Use Beautiful Soup to parse the HTML
from bs4 import BeautifulSoup
# Create some HTML
html = """
        <div class='full_name'><span style='font-weight:bold'>
        Masege Azra"
    
    """
# Create a soup object and find the div tag
soup = BeautifulSoup(html, 'lxml')
soup.find('div')
<div class="full_name"><span style="font-weight:bold">
        Masege Azra"
    
    </span></div>
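
In practice we usually want the text inside the tag rather than the tag itself. A minimal sketch (my own addition, reusing the soup object above) that pulls out just the name by the div's class and strips the surrounding whitespace:

# Find the div by its class attribute, keep only the text content, and strip whitespace
soup.find('div', {'class': 'full_name'}).text.strip()
'Masege Azra"'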

🍃3. Removing Punctuation

import unicodedata
import sys
text_data = ['Hi!!!! I. love. This. Song....',
             '10000% Agree!!!! #LoveIT',
             'Right??!!']
# Build a dictionary that maps every Unicode punctuation code point to None
punctuation = dict.fromkeys(i for i in range(sys.maxunicode) if unicodedata.category(chr(i)).startswith('P'))
# Remove the punctuation from each string
[string.translate(punctuation) for string in text_data]
['Hi I love This Song', '10000 Agree LoveIT', 'Right']
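
The translate approach above removes every Unicode punctuation character in one pass. If only ASCII punctuation matters, a simpler sketch (my own addition, a different technique using the standard string module rather than the method above) gives the same result on this data:

# Import the string module under an alias to avoid clashing with the 'string' variables used in this article
import string as string_module
# Build a translation table that deletes only ASCII punctuation characters
ascii_table = str.maketrans('', '', string_module.punctuation)
[s.translate(ascii_table) for s in text_data]
['Hi I love This Song', '10000 Agree LoveIT', 'Right']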

🌍4. Text Tokenization

Here is a quick look at the jieba library. jieba is designed mainly for Chinese word segmentation, but lcut also accepts English text, where it simply splits on whitespace (note that the spaces themselves appear as tokens in the output below).


import jieba
# Create an English sentence
string = 'The science of study is the technology of tomorrow'
seg = jieba.lcut(string)
print(seg)
['The', ' ', 'science', ' ', 'of', ' ', 'study', ' ', 'is', ' ', 'the', ' ', 'technology', ' ', 'of', ' ', 'tomorrow']
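
Since jieba is built for Chinese, a more representative minimal sketch (my own addition, using the example sentence from jieba's own documentation) segments a Chinese sentence into words:

# Segment a Chinese sentence in jieba's default (accurate) mode
seg_zh = jieba.lcut('我来到北京清华大学')
print(seg_zh)
['我', '来到', '北京', '清华大学']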

Of course, this article only covers some of the most basic text-processing methods used in data cleaning. Follow-up posts will introduce the mainstream NLP methods in use today, along with code.

That concludes this chapter. If the article helped you, please like, save, comment, and follow for support!
