
Installing Scala and Spark

Python芸芸 2022-03-25

Extract the Scala archive

tar -xvf scala.tar.gz -C /etc/hadoop

Rename the extracted directory

mv scala~ scala

Configure the environment variables

vim /etc/profile

export SCALA_HOME=/etc/hadoop/scala

export PATH=$PATH:$SCALA_HOME/bin

Reload the profile

source /etc/profile

Check the version

scala -version
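If the version prints correctly, a quick one-liner (the expression here is arbitrary, just a sanity check that the compiler actually runs) can confirm the installation:

scala -e 'println(1 + 1)'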

 

Extract the Spark archive

tar -xvf spark.tar.gz -C /etc/hadoop

Rename the extracted directory

mv spark~ spark

Configure the environment variables

vim /etc/profile

export SPARK_HOME=/etc/hadoop/spark

export PATH=$PATH:$SPARK_HOME/bin

Reload the profile

source /etc/profile
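With the profile reloaded, the Spark launcher scripts should now be on the PATH; spark-submit --version prints the build banner and makes a quick sanity check:

spark-submit --version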

In the $SPARK_HOME/conf/ directory, copy spark-env.sh.template to spark-env.sh

cp spark-env.sh.template spark-env.sh

Edit $SPARK_HOME/conf/spark-env.sh and add the following:

vim spark-env.sh

export SCALA_HOME=/etc/hadoop/scala

export JAVA_HOME=/etc/hadoop/java

export HADOOP_HOME=/etc/hadoop/hadoop

export SPARK_WORKER_MEMORY=1g

export HADOOP_CONF_DIR=/etc/hadoop/hadoop/etc/hadoop

export SPARK_MASTER_IP=192.168.3.120

export SPARK_HOME=/etc/hadoop/spark
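Note: SPARK_MASTER_IP is the Spark 1.x name for this setting; on Spark 2.x and later the variable is SPARK_MASTER_HOST, so depending on your tarball the line may instead need to be:

export SPARK_MASTER_HOST=192.168.3.120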

 

Still in $SPARK_HOME/conf/, copy slaves.template to slaves and list the worker hostnames:

cp slaves.template slaves

vim slaves

hadoop01 

hadoop02  

hadoop03
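Both start-all.sh and the scp steps below assume passwordless SSH from hadoop01 to each worker. A minimal setup sketch, assuming the same user exists on every node:

ssh-keygen -t rsa        # run once on hadoop01, accept the defaults
ssh-copy-id hadoop02
ssh-copy-id hadoop03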

Copy the configured environment to the hadoop02 and hadoop03 worker nodes.

scp -r /etc/hadoop/scala hadoop02:/etc/hadoop

scp -r /etc/hadoop/scala hadoop03:/etc/hadoop

scp -r /etc/hadoop/spark hadoop02:/etc/hadoop

scp -r /etc/hadoop/spark hadoop03:/etc/hadoop

scp -r /etc/profile hadoop02:/etc/profile

scp -r /etc/profile hadoop03:/etc/profile
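Equivalently, the copies can be done in one loop over the worker hostnames (same paths as above):

for h in hadoop02 hadoop03; do
  scp -r /etc/hadoop/scala /etc/hadoop/spark "$h":/etc/hadoop
  scp /etc/profile "$h":/etc/profile
done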

 

Reload the environment configuration on every node: source /etc/profile.

Start Hadoop first.
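The exact Hadoop startup commands depend on your layout; with HADOOP_HOME set as above, a typical sequence is:

$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh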

Then start Spark on hadoop01:

cd /etc/hadoop/spark

ls

cd sbin

./start-all.sh
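To confirm the cluster came up, jps on each node should list the Spark daemons (a Master process on hadoop01, a Worker on each slave); the master web UI is also served on port 8080 by default:

jps    # expect Master on hadoop01, Worker on hadoop02/hadoop03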

cd ../bin

./spark-shell
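Once the Scala prompt appears, any small job confirms the cluster works. A non-interactive smoke test can also be piped in; the master URL below assumes the standalone master configured above on its default port 7077:

echo 'println(sc.parallelize(1 to 100).sum)' | ./spark-shell --master spark://192.168.3.120:7077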
