I am running into an error while appending to a file on HDFS (Cloudera 2.0.0-cdh4.2.0). The use case that causes the error is:
- Create a file on the file system (DistributedFileSystem). OK
- Append to the earlier created file. ERROR

FSDataOutputStream stream = fs.append(filePath);
stream.write(fileContents);

Then this error is thrown:
Exception in thread "main" java.io.IOException: Failed to add a datanode.
User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[host1:50010, host2:50010], original=[host1:50010, host2:50010])
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:792)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:852)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:958)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:469)
Some related HDFS configs:
dfs.replication is set to 2
dfs.client.block.write.replace-datanode-on-failure.enable is set to true
dfs.client.block.write.replace-datanode-on-failure.policy is set to DEFAULT
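For reference, this is a sketch of how those client-side properties would appear in the configuration (e.g. hdfs-site.xml); the values mirror the settings listed above:

```xml
<!-- Client-side settings controlling datanode replacement
     when a node in the write pipeline fails -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>DEFAULT</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```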
Any ideas? Thanks!