1. Preparation
- Start the Hadoop cluster
[amelia@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
-help: print the usage of a command
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -help rm
- Create the /sanguo directory
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -mkdir /sanguo
- Check that the /sanguo directory now exists in HDFS, as shown below
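For example, listing the HDFS root should now show the new directory:
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -ls /
The output should include an entry for /sanguo.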

2. Upload
-moveFromLocal: cut and paste a file from the local filesystem to HDFS
[amelia@hadoop102 hadoop-2.7.2]$ vim shuguo.txt
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -moveFromLocal ./shuguo.txt /sanguo
Afterwards you will find that the file has been moved into the /sanguo directory and removed locally; see the check below.
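For example, to confirm both sides of the move (the local copy should be gone, the HDFS copy present):
[amelia@hadoop102 hadoop-2.7.2]$ ls ./shuguo.txt
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -ls /sanguo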
-copyFromLocal: copy a file from the local filesystem to an HDFS path
[amelia@hadoop102 hadoop-2.7.2]$ vim weiguo.txt
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -copyFromLocal ./weiguo.txt /sanguo
Afterwards you will find that the file has been copied into the /sanguo directory (the local copy is kept).
-put: equivalent to copyFromLocal; put is the more common form in production
[amelia@hadoop102 hadoop-2.7.2]$ vim wuguo.txt
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -put ./wuguo.txt /sanguo
Afterwards you will find that the file has been copied into the /sanguo directory.
-appendToFile: append a local file to the end of an existing HDFS file
[amelia@hadoop102 hadoop-2.7.2]$ vim liubei.txt
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -appendToFile liubei.txt /sanguo/shuguo.txt
Afterwards you will find that the contents of liubei.txt have been appended to /sanguo/shuguo.txt; see the check below.
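For example, printing the file should show the appended text at the end:
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -cat /sanguo/shuguo.txt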
3. Download
-copyToLocal: copy a file from HDFS to the local filesystem
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -copyToLocal /sanguo/shuguo.txt ./
Afterwards you will find that the file has been copied into the local hadoop-2.7.2 directory.
-get: equivalent to copyToLocal; get is the more common form in production
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -get /sanguo/shuguo.txt ./shuguo2.txt
Afterwards you will find that the file has been copied into the local hadoop-2.7.2 directory under the new name shuguo2.txt; see the check below.
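For example:
[amelia@hadoop102 hadoop-2.7.2]$ ls -l ./shuguo2.txt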
4. Direct HDFS operations
-ls: list directory contents
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -ls /sanguo
-cat: display the contents of a file
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -cat /sanguo/shuguo.txt
-chgrp, -chmod, -chown: change a file's group, permissions, or owner
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -chown amelia:amelia /sanguo/shuguo.txt
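In the same vein, a -chmod example (octal modes work as in Linux):
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -chmod 666 /sanguo/shuguo.txt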
-mkdir: create a directory
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -mkdir /jinguo
-cp: copy from one HDFS path to another
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -cp /sanguo/shuguo.txt /jinguo
-mv: move files within HDFS
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -mv /sanguo/weiguo.txt /jinguo
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -mv /sanguo/wuguo.txt /jinguo
-tail: display the last 1KB of a file
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -tail /jinguo/shuguo.txt
-rm: delete a file or directory
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -rm /sanguo/shuguo.txt
-rm -r: recursively delete a directory and all of its contents
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -rm -r /sanguo
Caution: be very careful with delete commands!
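Note: if the cluster has the HDFS trash enabled (fs.trash.interval > 0 in core-site.xml), -rm moves files into the user's .Trash directory rather than deleting them immediately; -skipTrash bypasses that. A sketch with a placeholder path:
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -rm -r -skipTrash /some/dir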
-du: report the size of files and directories
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -du -s -h /jinguo
27 /jinguo
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -du -h /jinguo
14 /jinguo/shuguo.txt
7 /jinguo/weiguo.txt
6 /jinguo/wuguo.txt
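The first column is the file size in bytes; the per-file sizes add up to the -s summary: 14 + 7 + 6 = 27.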
-setrep: set the replication factor of a file in HDFS
[amelia@hadoop102 hadoop-2.7.2]$ hadoop fs -setrep 10 /jinguo/shuguo.txt
Replication 10 set: /jinguo/shuguo.txt
The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. With only 3 machines at the moment there can be at most 3 replicas; the count will only reach 10 once the cluster grows to 10 nodes. See the check below.
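For example, fsck can show the gap between the recorded target and the actual replica count; on a 3-node cluster it should report the block as under-replicated (target 10, 3 found):
[amelia@hadoop102 hadoop-2.7.2]$ hdfs fsck /jinguo/shuguo.txt -files -blocks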