Scenario:
NameNode node: master
DataNode nodes: slave1, slave2
1. When putting a file into HDFS, the following error was reported, although the operation itself actually completed successfully:
[hdfs@master ~]$ hadoop fs -put /file1.tgz /tmp
17/07/25 00:43:29 INFO hdfs.DFSClient: Exception in createBlockOutputStream
org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: Got access token error, status message , ack with firstBadLink as 10.1.3.35:50010
at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1393)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1295)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:463)
17/07/25 00:43:29 INFO hdfs.DFSClient: Abandoning BP-1644766071-10.1.3.39-1499963302012:blk_1073741943_1124
17/07/25 00:43:29 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[10.1.3.35:50010,DS-19e6d399-729d-4f98-9bda-11fb76f1a164,DISK]
2. Troubleshooting showed that the firewall was already disabled, HDFS reported a healthy status, and the service checks all passed. The root cause turned out to be that the clock on slave2 had drifted out of sync with the other nodes: HDFS block access tokens are time-bounded, so a DataNode whose clock is far off will reject otherwise valid tokens. After the clock was synchronized, the put completed without errors.
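A minimal sketch of the checks and the fix, assuming the nodes are reachable by the hostnames above, ntpdate is installed, and an NTP source is available at the placeholder address ntp1.example.com (substitute chrony or your own time server as appropriate):

# Compare the current time on every node; a skew of more than a few minutes is suspect
for host in master slave1 slave2; do ssh $host date; done

# Confirm the firewall really is off and HDFS reports all DataNodes as live
systemctl status firewalld
hdfs dfsadmin -report | grep -E 'Live datanodes|Name:'

# On the drifting node (slave2 here), step the clock against an NTP server
# (ntp1.example.com is a placeholder; point this at your own NTP source)
ssh slave2 'ntpdate ntp1.example.com && hwclock --systohc'

# Retry the upload
hadoop fs -put /file1.tgz /tmp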
slave2 log:
---------------------
[Reprinted]
Author: leelongzaitianya
Original: https://blog.csdn.net/leelongzaitianya/article/details/79866795