
I am getting an error while appending to a file on HDFS (Cloudera 2.0.0-cdh4.2.0). The use case that causes the error is:

  • Create a file on the file system (DistributedFileSystem). OK
  • Append to the earlier created file. ERROR

    FSDataOutputStream stream = fs.append(filePath); // fs is a DistributedFileSystem instance
    stream.write(fileContents);

    Then this error is thrown:

    Exception in thread "main" java.io.IOException: Failed to add a datanode.
    User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[host1:50010, host2:50010], original=[host1:50010, host2:50010])
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:792)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:852)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:958)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:469)
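For reference, here is a minimal, self-contained sketch of the failing sequence (the path, contents, and class name are hypothetical; it assumes the client picks up the cluster address from the usual configuration files on the classpath):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendRepro {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path filePath = new Path("/tmp/append-test.txt"); // hypothetical path

            // Step 1: create the file -- this succeeds
            FSDataOutputStream create = fs.create(filePath);
            create.write("first line\n".getBytes("UTF-8"));
            create.close();

            // Step 2: append to the same file -- this throws the IOException above
            FSDataOutputStream append = fs.append(filePath);
            append.write("second line\n".getBytes("UTF-8"));
            append.close();
        }
    }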

Some related HDFS configs:

dfs.replication set to 2

dfs.client.block.write.replace-datanode-on-failure.enable set to true

dfs.client.block.write.replace-datanode-on-failure.policy set to DEFAULT
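Note that the exception message itself points at a client-side escape hatch: the replace-datanode-on-failure behaviour can be relaxed in the client Configuration. A sketch of what that might look like (the class name is hypothetical; NEVER disables a pipeline-recovery safety check, so this is a workaround for small clusters, not a fix):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class RelaxedAppendClient {
        public static FileSystem open() throws Exception {
            Configuration conf = new Configuration();
            // With only two DataNodes and replication 2, there is no spare node to
            // swap into the write pipeline, so the DEFAULT policy fails. NEVER tells
            // the client not to look for a replacement datanode at all.
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
            return FileSystem.get(conf);
        }
    }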

Any ideas? Thanks!

1 Answer


The problem was solved by running the following on the file system:

hadoop dfs -setrep -R -w 2 /

Old files on the file system had their replication factor set to 3. Setting dfs.replication to 2 in hdfs-site.xml does not solve the problem, because that config applies only to newly created files, not to already existing ones.

So, if you remove machines from the cluster, check that the replication factor of existing files does not exceed the number of remaining DataNodes.
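To verify which files are affected, the per-file replication factor can be read back through the same API (a sketch; the path is hypothetical, FileStatus.getReplication() reports what the NameNode has recorded, and the second column of hadoop fs -ls shows the same number):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CheckReplication {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path filePath = new Path("/tmp/append-test.txt"); // hypothetical path
            short replication = fs.getFileStatus(filePath).getReplication();
            System.out.println(filePath + " has replication factor " + replication);
        }
    }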

