2019-02-25 10:04:29,090 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Caught exception while scanning /home/warehouse/hadoop-2.7.1/tmp/dfs/data/current. Will throw later.
ExitCodeException exitCode=1: du: cannot access ‘/home/warehouse/hadoop-2.7.1/tmp/dfs/data/current/BP-916304360-173.18.16.132-1547187332871/current/finalized/subdir0/subdir0/blk_1073741825_1001.meta’: Structure needs cleaning
du: cannot access ‘/home/warehouse/hadoop-2.7.1/tmp/dfs/data/current/BP-916304360-173.18.16.132-1547187332871/current/finalized/subdir0/subdir0/blk_1073741825’: Structure needs cleaning
du: cannot access ‘/home/warehouse/hadoop-2.7.1/tmp/dfs/data/current/BP-916304360-173.18.16.132-1547187332871/current/finalized/subdir0/subdir0/blk_1073741826_1002.meta’: Structure needs cleaning
du: cannot access ‘/home/warehouse/hadoop-2.7.1/tmp/dfs/data/current/BP-916304360-173.18.16.132-1547187332871/current/finalized/subdir0/subdir0/blk_1073741826’: Structure needs cleaning
du: cannot access ‘/home/warehouse/hadoop-2.7.1/tmp/dfs/data/current/BP-916304360-173.18.16.132-1547187332871/current/finalized/subdir0/subdir0/blk_1073741891’: Structure needs cleaning

2019-02-25 10:04:29,093 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory [DISK]file:/home/warehouse/hadoop-2.7.1/tmp/dfs/data/ has already been used.
2019-02-25 10:04:29,109 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-916304360-173.18.16.132-1547187332871
2019-02-25 10:04:29,110 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to analyze storage directories for block pool BP-916304360-173.18.16.132-1547187332871
java.io.IOException: BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used block storage: /home/warehouse/hadoop-2.7.1/tmp/dfs/data/current/BP-916304360-173.18.16.132-1547187332871
at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:210)
at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:242)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:394)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:476)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1361)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1326)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:316)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:801)
at java.lang.Thread.run(Thread.java:748)
2019-02-25 10:04:29,111 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage for block pool: BP-916304360-173.18.16.132-1547187332871 : BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used block storage: /home/warehouse/hadoop-2.7.1/tmp/dfs/data/current/BP-916304360-173.18.16.132-1547187332871
2019-02-25 10:04:29,111 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to slave2/173.18.16.133:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:477)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1361)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1326)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:316)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:801)
at java.lang.Thread.run(Thread.java:748)
2019-02-25 10:04:29,111 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to slave2/173.18.16.133:9000
2019-02-25 10:04:34,097 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory [DISK]file:/home/warehouse/hadoop-2.7.1/tmp/dfs/data/ has already been used.
2019-02-25 10:04:34,113 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-916304360-173.18.16.132-1547187332871
2019-02-25 10:04:34,113 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to analyze storage directories for block pool BP-916304360-173.18.16.132-1547187332871
java.io.IOException: BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used block storage: /home/warehouse/hadoop-2.7.1/tmp/dfs/data/current/BP-916304360-173.18.16.132-1547187332871
at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:210)
at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:242)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:394)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:476)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1361)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1326)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:316)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:801)
at java.lang.Thread.run(Thread.java:748)
2019-02-25 10:04:34,113 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage for block pool: BP-916304360-173.18.16.132-1547187332871 : BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used block storage: /home/warehouse/hadoop-2.7.1/tmp/dfs/data/current/BP-916304360-173.18.16.132-1547187332871
2019-02-25 10:04:34,113 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to master/173.18.16.132:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:477)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1361)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1326)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:316)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:801)
at java.lang.Thread.run(Thread.java:748)
2019-02-25 10:04:34,113 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to master/173.18.16.132:9000
2019-02-25 10:04:34,113 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)
2019-02-25 10:04:36,114 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2019-02-25 10:04:36,116 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2019-02-25 10:04:36,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at master/173.18.16.132
************************************************************/
Relevant configuration (core-site.xml):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/warehouse/hadoop-2.7.1/tmp</value>
</property>

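In Hadoop 2.x, dfs.datanode.data.dir defaults to file://${hadoop.tmp.dir}/dfs/data, which is why the DataNode block files in the log above live under this tmp directory.
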
Problem description: After a power outage in the machine room, one of the DataNodes in the Hadoop cluster failed to start.
Cause: /home/warehouse/hadoop-2.7.1/tmp/dfs/data/current/BP-916304360-173.18.16.132-1547187332871/current/finalized/subdir0/subdir0 is corrupted and cannot be deleted.
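The "Structure needs cleaning" messages from du are the EUCLEAN error that ext filesystems return when they detect on-disk corruption, so the damage is at the filesystem level rather than inside HDFS. A quick check along these lines can confirm that (a rough sketch; the grep pattern assumes the data directory sits on an ext4 volume):

# Look for filesystem corruption reports in the kernel log (assumes ext4)
dmesg | grep -i 'ext4\|structure needs cleaning'
# Accessing the damaged subdirectory directly should reproduce the error
ls -l /home/warehouse/hadoop-2.7.1/tmp/dfs/data/current/BP-916304360-173.18.16.132-1547187332871/current/finalized/subdir0/subdir0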

Solution: Rename the data directory under /home/warehouse/hadoop-2.7.1/tmp/dfs/ to any other name, then restart the Hadoop cluster. On startup the DataNode recreates the data directory automatically and HDFS re-replicates the blocks to it. If the data volume is large this can take a long time; progress can be followed in hadoop-root-datanode-master.log, and query performance will be degraded while the replication is running. A rough outline of these steps is sketched below.
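The following shell sketch illustrates the steps (the new directory name data_bad, the sbin stop/start scripts, and the logs path are assumptions based on a default Hadoop 2.7.1 layout; adjust to your installation):

# Rename the damaged data directory instead of deleting it (any new name works)
mv /home/warehouse/hadoop-2.7.1/tmp/dfs/data /home/warehouse/hadoop-2.7.1/tmp/dfs/data_bad

# Restart HDFS so the DataNode recreates an empty data directory and re-replicates blocks
/home/warehouse/hadoop-2.7.1/sbin/stop-dfs.sh
/home/warehouse/hadoop-2.7.1/sbin/start-dfs.sh

# Follow the re-replication progress in the DataNode log
tail -f /home/warehouse/hadoop-2.7.1/logs/hadoop-root-datanode-master.log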
(Another option is to repair the /home disk with a filesystem check; after the repair, the data under /home will be lost and the node will need to be re-added to the cluster. See the sketch below.)
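A sketch of that alternative (the device name /dev/sdb1 is a placeholder; use whatever device actually backs /home, and stop the Hadoop services on the node before unmounting):

# Identify the device backing /home, then unmount and repair it
df /home
umount /home
fsck -y /dev/sdb1    # /dev/sdb1 is a placeholder device name
mount /home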
