HDFS Shell Operations

Keywords: Hadoop, Linux

1. Basic syntax

bin/hadoop fs <specific command>  or  bin/hdfs dfs <specific command>
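
Both forms are equivalent when operating on HDFS: hadoop fs is the generic file system client, while hdfs dfs is specific to HDFS. For example, these two commands produce the same listing:

[andy@xiaoai01 hadoop-2.7.2]$ bin/hadoop fs -ls /
[andy@xiaoai01 hadoop-2.7.2]$ bin/hdfs dfs -ls /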

2. Full command list

[andy@xiaoai01 hadoop-2.7.2]$ bin/hadoop fs

[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] <localsrc> ... <dst>]
[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] <path> ...]
[-cp [-f] [-p] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] <path> ...]
[-expunge]
[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getmerge [-nl] <src> <localdst>]
[-help [cmd ...]]
[-ls [-d] [-h] [-R] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touchz <path> ...]
[-usage [cmd ...]]

3. Common commands in practice

(0) Start the Hadoop cluster (needed for the tests that follow)

[andy@xiaoai01 hadoop-2.7.2]$ sbin/start-dfs.sh
[andy@xiaoai01 hadoop-2.7.2]$ sbin/start-yarn.sh
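
To confirm the daemons started, jps (shipped with the JDK) lists the running Java processes. On a single node running both HDFS and YARN you would expect to see NameNode, DataNode, ResourceManager, and NodeManager (plus SecondaryNameNode, depending on your configuration):

[andy@xiaoai01 hadoop-2.7.2]$ jps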

(1) -help: print the usage and options of a command

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -help rm
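
If you only want the one-line syntax rather than the full explanation, -usage (also in the listing above) is more concise:

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -usage rm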

(2) -ls: list directory contents

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -ls /
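
-ls also accepts the -R, -h, and -d flags from the listing above; for example, to list the whole namespace recursively:

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -ls -R /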

(3) -mkdir: create a directory on HDFS (-p creates parent directories as needed)

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -mkdir -p /sanguo/shuguo

(4) -moveFromLocal: move a file from the local file system to HDFS (the local copy is removed)

[andy@xiaoai01 hadoop-2.7.2]$ touch kongming.txt
[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -moveFromLocal ./kongming.txt /sanguo/shuguo
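
Unlike a copy, the local kongming.txt no longer exists after the move; listing the target directory shows it arrived:

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -ls /sanguo/shuguo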

(5) -appendToFile: append a local file to the end of an existing HDFS file

[andy@xiaoai01 hadoop-2.7.2]$ vi liubei.txt
# contents of liubei.txt:
san gu mao lu
[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -appendToFile liubei.txt /sanguo/shuguo/kongming.txt

(6) -cat: display file contents

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -cat /sanguo/shuguo/kongming.txt

(7) -chgrp, -chmod, -chown: same usage as in the Linux file system; change a file's group, permissions, and owner

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -chmod 666 /sanguo/shuguo/kongming.txt
[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -chown andy:andy /sanguo/shuguo/kongming.txt
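
-chgrp follows the same pattern; a sketch, assuming a group named andy exists on the cluster:

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -chgrp -R andy /sanguo/shuguo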

(8) -copyFromLocal: copy files from the local file system to HDFS

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -copyFromLocal README.txt /
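
-copyFromLocal fails if the destination file already exists; -f (listed in section 2) overwrites it:

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -copyFromLocal -f README.txt /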

(9) -copyToLocal: copy files from HDFS to the local file system

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -copyToLocal /sanguo/shuguo/kongming.txt ./

(10) -cp: copy from one HDFS path to another

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -cp /sanguo/shuguo/kongming.txt /zhuge.txt

(11) -mv: move files within HDFS

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -mv /zhuge.txt /sanguo/shuguo/

(12) -get: equivalent to -copyToLocal; download files from HDFS to local

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -get /sanguo/shuguo/kongming.txt ./

(13) -getmerge: download multiple files merged into a single local file. For example, suppose the HDFS directory /user/andy/test contains log.1, log.2, and log.3 (a setup sketch follows the command):

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -getmerge /user/andy/test/* ./zaiyiqi.txt
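
To reproduce this, a minimal setup sketch (the file contents are made up for illustration):

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -mkdir -p /user/andy/test
[andy@xiaoai01 hadoop-2.7.2]$ echo "log one" > log.1
[andy@xiaoai01 hadoop-2.7.2]$ echo "log two" > log.2
[andy@xiaoai01 hadoop-2.7.2]$ echo "log three" > log.3
[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -put log.1 log.2 log.3 /user/andy/test/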

(14) -put: equivalent to -copyFromLocal; upload files from local to HDFS

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -put ./zaiyiqi.txt /user/andy/test/

(15) -tail: display the last kilobyte of a file

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -tail /sanguo/shuguo/kongming.txt
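
With -f, -tail keeps running and prints new data as it is appended, like Linux tail -f:

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -tail -f /sanguo/shuguo/kongming.txt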

(16) -rm: delete files or directories

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -rm /user/andy/test/jinlian2.txt
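
The example above removes a single file; deleting a directory tree needs -r, and -skipTrash (both shown in the listing in section 2) bypasses the trash if it is enabled. The path below is hypothetical:

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -rm -r -skipTrash /user/andy/scratch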

(17) -rmdir: delete an empty directory

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -mkdir /test
[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -rmdir /test
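
-rmdir fails if the directory is not empty; --ignore-fail-on-non-empty (from the listing above) suppresses that error:

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -rmdir --ignore-fail-on-non-empty /test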

(18) -du: show the size of files and directories

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -du -s -h /user/andy/test
2.7 K  /user/andy/test
[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -du -h /user/andy/test
1.3 K  /user/andy/test/README.txt
15     /user/andy/test/jinlian.txt
1.4 K  /user/andy/test/zaiyiqi.txt

(19) -setrep: set the replication factor of a file in HDFS

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -setrep 10 /sanguo/shuguo/kongming.txt

The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. With only three DataNodes there can be at most three replicas; the count reaches 10 only after the cluster grows to at least 10 nodes.
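
To check the replication factor recorded for a file, -stat with the %r format can be used:

[andy@xiaoai01 hadoop-2.7.2]$ hadoop fs -stat %r /sanguo/shuguo/kongming.txt

This reports the factor stored in the NameNode metadata (10 here), not the number of replicas that physically exist.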
