
I used the following code to delete a file on the HDFS filesystem:

    conf = new org.apache.hadoop.conf.Configuration();
    // TODO: Change IP
    conf.set("fs.defaultFS", "hdfs://aaa.bbb.com:1234/user/hdfs");
    conf.set("hadoop.job.ugi", "hdfs");
    conf.set("fs.hdfs.impl", 
        org.apache.hadoop.hdfs.DistributedFileSystem.class.getName()
    );
    conf.set("fs.file.impl",
        org.apache.hadoop.fs.LocalFileSystem.class.getName()
    );
    fs = FileSystem.get(conf);
    fs.delete(new Path("/user/hdfs/file.copy"), true);

I created a user called "xyz" on my local machine, and to my amazement I was able to delete the file (file.copy) on the HDFS filesystem behind the given namenode, even though its owner was xyz. Does that mean anyone with access to the namenode URL could delete any file simply by creating an "hdfs" or "root" user locally?

I understand that the Java API has a way to authenticate users using Kerberos, so I believe something is wrong with the configuration of our Hadoop system. Could somebody help me set up security properly? I believe a remote user should have to provide some key or key file to authenticate itself; a matching username alone shouldn't suffice!

PS: I am using Cloudera 5.3.1

1 Answer

Yes, if you don't have Kerberos authentication enabled on your cluster then you really have no authentication at all. If you care about your data you absolutely should enable Kerberos authentication.
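To illustrate what changes on the client side once Kerberos is enabled, here is a minimal sketch of the delete from the question, rewritten to log in from a keytab before touching the filesystem. The principal name and keytab path are placeholders (not values from the question), and the cluster-side setup (`hadoop.security.authentication=kerberos` in core-site.xml, plus KDC configuration) is assumed to be in place:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureHdfsDelete {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://aaa.bbb.com:1234");
        // Tell the client library the cluster requires Kerberos authentication.
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Authenticate with a keytab instead of trusting the local username.
        // Without a valid ticket for this principal, the NameNode rejects the RPC.
        // Both arguments below are placeholder values for illustration.
        UserGroupInformation.loginUserFromKeytab(
            "someuser@EXAMPLE.COM",                   // placeholder principal
            "/etc/security/keytabs/someuser.keytab"); // placeholder keytab path

        FileSystem fs = FileSystem.get(conf);
        fs.delete(new Path("/user/hdfs/file.copy"), true);
        fs.close();
    }
}
```

With this in place, whether the delete succeeds is decided by the authenticated principal's HDFS permissions, not by whatever username exists on the calling machine.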
