PySpark Data Computation

code_balance 2023-11-09 · 48 reads

# Imports
from pyspark import SparkConf, SparkContext

# Create the SparkConf object
conf = SparkConf().setMaster("local[*]").setAppName("test_spark")

# Build the SparkContext from the SparkConf
# (the SparkContext is the entry point to all PySpark functionality)
sc = SparkContext(conf=conf)

# Create RDDs from different Python containers
rdd1 = sc.parallelize([1, 2, 3, 4, 5, 6])           # list
rdd2 = sc.parallelize((1, 2, 3, 4, 5, 6))           # tuple
rdd3 = sc.parallelize({1, 2, 3, 4, 5, 6})           # set
rdd4 = sc.parallelize("asdfghjkl")                  # string
rdd5 = sc.parallelize({"key1": 666, "key2": 999})   # dict
rdd6 = sc.textFile("D:/title.txt")  # read an RDD from a file path

print(rdd1.collect())
print(rdd2.collect())
print(rdd3.collect())
print(rdd4.collect())  # a string is split into individual characters
print(rdd5.collect())  # only the dict's keys are kept
print(rdd6.collect())

# Stop the PySpark program
sc.stop()
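
Once an RDD exists, the actual data computation is done with transformations such as map and filter, followed by an action such as collect or reduce that triggers execution. The sketch below is a minimal illustration of that pattern, assuming the same local SparkContext setup as above; the sample data and lambda functions are purely illustrative.

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[*]").setAppName("test_spark_compute")
sc = SparkContext(conf=conf)

rdd = sc.parallelize([1, 2, 3, 4, 5, 6])

# map: apply a function to every element
doubled = rdd.map(lambda x: x * 2)
print(doubled.collect())  # [2, 4, 6, 8, 10, 12]

# filter: keep only the elements matching a predicate
evens = rdd.filter(lambda x: x % 2 == 0)
print(evens.collect())  # [2, 4, 6]

# reduce: fold all elements into a single value
total = rdd.reduce(lambda a, b: a + b)
print(total)  # 21

sc.stop()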


