Hadoop and Spark Commands
Hadoop start/stop commands
Start and stop the HDFS services
start-dfs.sh
stop-dfs.sh
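Once HDFS is up, a quick health check (assuming the Hadoop binaries are on the PATH, as the start scripts above imply) is:
hdfs dfsadmin -report
This lists the live DataNodes and their capacity.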
Start and stop the YARN services
start-yarn.sh
stop-yarn.sh
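Likewise, once YARN is up you can list the registered NodeManagers:
yarn node -list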
Start and stop both at once
start-all.sh
stop-all.sh
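To see at a glance which daemons are running on a node, use jps (ships with the JDK); depending on the node's role you should see processes such as NameNode, DataNode, ResourceManager, and NodeManager:
jps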
Spark deployment modes
Local mode
First start the HDFS service
start-all.sh
or
start-dfs.sh
Start the interactive shell
/export/server/spark-3.0.1-bin-hadoop2.7/bin/spark-shell
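Without a --master argument, spark-shell runs in local mode with one thread per core (local[*]), unless a master is set in spark-defaults.conf. To cap the number of worker threads, pass local[N] explicitly, e.g. two threads:
/export/server/spark-3.0.1-bin-hadoop2.7/bin/spark-shell --master local[2]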
Standalone cluster mode
First start Hadoop's HDFS service
start-all.sh
or
start-dfs.sh
Start the Spark standalone cluster
/export/server/spark-3.0.1-bin-hadoop2.7/sbin/start-all.sh
Start the interactive shell against the cluster Master
/export/server/spark-3.0.1-bin-hadoop2.7/bin/spark-shell --master spark://node1:7077
Running the two commands above is all that is needed for normal use.
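Batch jobs can be submitted against the same Master URL. A minimal smoke test using the SparkPi example bundled with Spark (the jar name assumes the stock Spark 3.0.1 / Scala 2.12 build):
/export/server/spark-3.0.1-bin-hadoop2.7/bin/spark-submit \
  --master spark://node1:7077 \
  --class org.apache.spark.examples.SparkPi \
  /export/server/spark-3.0.1-bin-hadoop2.7/examples/jars/spark-examples_2.12-3.0.1.jar 10
The Master web UI (http://node1:8080 by default) shows the registered Workers and running applications.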
Start and stop the Master separately on the master node:
/export/server/spark-3.0.1-bin-hadoop2.7/sbin/start-master.sh
/export/server/spark-3.0.1-bin-hadoop2.7/sbin/stop-master.sh
Start and stop all Workers at once (a Worker runs on each host listed in the slaves configuration file; these scripts are run from the master node and reach the Workers over SSH):
/export/server/spark-3.0.1-bin-hadoop2.7/sbin/start-slaves.sh
/export/server/spark-3.0.1-bin-hadoop2.7/sbin/stop-slaves.sh
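To start or stop a Worker on only the current node rather than on every host in slaves, Spark 3.0.x also ships start-slave.sh and stop-slave.sh; start-slave.sh takes the Master URL:
/export/server/spark-3.0.1-bin-hadoop2.7/sbin/start-slave.sh spark://node1:7077
/export/server/spark-3.0.1-bin-hadoop2.7/sbin/stop-slave.sh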
Stop all Spark services from the master node
/export/server/spark-3.0.1-bin-hadoop2.7/sbin/stop-all.sh
Spark on YARN mode
- Start HDFS and YARN
start-dfs.sh
start-yarn.sh
or
start-all.sh
- Start the MapReduce HistoryServer service; run this command on node1:
mr-jobhistory-daemon.sh start historyserver
- Start the Spark HistoryServer service; run this command on node1:
/export/server/spark-3.0.1-bin-hadoop2.7/sbin/start-history-server.sh
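With HDFS, YARN, and the history servers running, applications can be submitted in yarn mode, either interactively or as a batch job (the jar name again assumes the stock Spark 3.0.1 / Scala 2.12 build):
/export/server/spark-3.0.1-bin-hadoop2.7/bin/spark-shell --master yarn
/export/server/spark-3.0.1-bin-hadoop2.7/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  /export/server/spark-3.0.1-bin-hadoop2.7/examples/jars/spark-examples_2.12-3.0.1.jar 10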