I used the following code to delete a file on the HDFS filesystem:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
// TODO: Change IP
conf.set("fs.defaultFS", "hdfs://aaa.bbb.com:1234/user/hdfs");
conf.set("hadoop.job.ugi", "hdfs");
conf.set("fs.hdfs.impl",
    org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
conf.set("fs.file.impl",
    org.apache.hadoop.fs.LocalFileSystem.class.getName());

FileSystem fs = FileSystem.get(conf);
fs.delete(new Path("/user/hdfs/file.copy"), true); // true = recursive
I created a user called "xyz" on my local machine, and to my amazement I was able to delete the file (file.copy), whose owner was xyz, on the HDFS filesystem behind the given namenode. Does that mean anyone with access to the namenode URL could delete any file simply by creating an hdfs or root user locally?
I understand that the Java API has a way to authenticate users with Kerberos, so I believe something is wrong with our configuration of the Hadoop system. Could somebody help me set up security properly? I believe a remote user should have to provide a key or keytab file to authenticate itself; the same username alone shouldn't be enough!
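For what it's worth, what you are seeing is Hadoop's default "simple" authentication, where the client-supplied username is trusted as-is. Once the cluster is switched to Kerberos (hadoop.security.authentication set to kerberos in core-site.xml), a client has to log in with a keytab before touching the FileSystem. A minimal sketch of such a client, where the principal name and keytab path are placeholders I made up, not values from your setup:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureDelete {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://aaa.bbb.com:1234");
        // Tell the client to use Kerberos instead of simple auth.
        conf.set("hadoop.security.authentication", "kerberos");

        UserGroupInformation.setConfiguration(conf);
        // Placeholder principal and keytab; without a valid ticket
        // the namenode will reject the connection.
        UserGroupInformation.loginUserFromKeytab(
            "xyz@EXAMPLE.COM",
            "/etc/security/keytabs/xyz.keytab");

        FileSystem fs = FileSystem.get(conf);
        fs.delete(new Path("/user/hdfs/file.copy"), true);
    }
}
```

With Kerberos enabled cluster-wide, a request that merely claims to be "hdfs" fails with an authentication error rather than being trusted.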
PS: I am using Cloudera 5.3.1