When HDFS reports under- or over-replicated blocks, you can use the metasave option of dfsadmin to investigate.
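Before generating the report, you can confirm the cluster's replication state with fsck, whose summary includes under-replicated and corrupt block counts (the grep filter below is just one way to narrow the output):
$ sudo -u hdfs hdfs fsck / | grep -iE 'under.replicated|corrupt'
Note that dfsadmin writes the metasave report into the directory given by hadoop.log.dir on the NameNode, which is why we cd into /var/log/hadoop-hdfs below: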
$ sudo -u hdfs hdfs dfsadmin -metasave metasave-report.txt
$ cd /var/log/hadoop-hdfs
$ cat metasave-report.txt
Metasave: Blocks waiting for replication: 125
/mapred/tmp/hdfs/.staging/job_1395354337974_1132/job.jar: blk_1073994598_253980 (replicas: l: 6 d: 0 c: 0 e: 0) 11.10.1.9:50010 : 11.12.14.30:50010 : 1.10.14.8:50010 : 1.12.10.2:50010 : 1.20.104.94:50010 : 1.10.14.2:50010 :
Each block will have a line like the one above; the fields in parentheses describe its replication:
l: live replicas
d: decommissioned replicas
c: corrupt replicas
e: excess replicas
This information can be used to determine the next action (such as removing the file if its blocks are corrupt). In the line above, the block has 6 live replicas, followed by the address (IP:port) of each DataNode that stores one.
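If the report (or fsck) shows corrupt blocks, the commands below are a rough sketch of common follow-ups; /path/to/file is a placeholder for the affected file:
$ sudo -u hdfs hdfs fsck / -list-corruptfileblocks
$ sudo -u hdfs hdfs fsck /path/to/file -move      # move the corrupt file to /lost+found
$ sudo -u hdfs hdfs fsck /path/to/file -delete    # or delete it outright
$ sudo -u hdfs hdfs dfs -setrep -w 3 /path/to/file  # raise the file's replication factor and wait for it
For plain under-replication, the NameNode will usually re-replicate the blocks on its own over time; manual intervention is mostly needed for corrupt blocks with no live replicas.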