I can hardly believe myself sometimes...
Today the same error from before showed up once again:
hadoop@hapmaster:~/hadoop-2.3.0/sbin$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)
hadoop@hapmaster:~/hadoop-2.3.0/sbin$
Yesterday I hit this error because running hdfs namenode -format several times left the namespaceIDs out of sync; deleting the dfs.data.dir directory configured on each datanode fixed it.
This time, though, the datanodes were already running:
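For reference, roughly the steps behind yesterday's fix, as a sketch only: the /home/hadoop/dfs/data path is just an example stand-in for whatever dfs.data.dir points to in hdfs-site.xml, and reformatting wipes all HDFS data.
stop-dfs.sh                          # stop HDFS daemons first
rm -rf /home/hadoop/dfs/data/*       # on every datanode: clear the stale block storage (path is an assumption)
hdfs namenode -format                # on the namenode: reformat, destroying existing HDFS data
start-dfs.sh                         # restart; datanodes now register with the new namespaceID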
hadoop@hapmaster:~/hadoop-2.3.0/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hapmaster]
hapmaster: namenode running as process 15828. Stop it first.
hapslave4: datanode running as process 8675. Stop it first.
hapslave3: datanode running as process 8854. Stop it first.
hapslave1: datanode running as process 9104. Stop it first.
hapslave2: datanode running as process 8986. Stop it first.
Starting secondary namenodes [hapmaster]
hapmaster: secondarynamenode running as process 16151. Stop it first.
starting yarn daemons
resourcemanager running as process 16309. Stop it first.
hapslave4: nodemanager running as process 8895. Stop it first.
hapslave1: nodemanager running as process 9324. Stop it first.
hapslave3: nodemanager running as process 9079. Stop it first.
hapslave2: nodemanager running as process 9202. Stop it first.
Could it be some kind of network problem?
A quick ping test showed that the namenode could reach the datanodes, but not the other way around. It turned out the IPs were not on the same subnet; after correcting them and restarting, everything worked.
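Roughly the checks that exposed the problem, as a sketch; the hostnames match the cluster above, but the exact commands and which node you run them from are up to you.
ping -c 3 hapslave1      # from the namenode: succeeded
ping -c 3 hapmaster      # from a datanode: failed, a one-way connectivity hint
ip addr show             # compare addresses on each node; mine had drifted onto different subnets
cat /etc/hosts           # every hostname should map to an address on the same subnet
Once all nodes were back on one subnet and the daemons restarted, the datanodes registered and dfsadmin -report showed the real capacity.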