
Hadoop: Mounting HDFS on client

+2 votes
1,338 views

Is there a way to mount HDFS directly on Linux and Windows clients? I believe I read that there are some limitations, but that there is possibly a FUSE solution. Any information on this (with links to a how-to) would be greatly appreciated.

posted Dec 20, 2013 by Deepankar Dubey


2 Answers

+2 votes

Mounting on a Windows client is a bit different. Here are the steps to mount the export on Windows:

  1. Enable the NFS client on the Windows client system.
    Step 1. Enable the File Services role. Go to Server Manager -> Add Roles -> File Services
    Step 2. Install Services for Network File System. Go to File Services -> Add Role Services

  2. Mount the export. You can use either the mount command or the "net use" command:
    c:\> mount \\192.168.111.158\! *
    c:\> net use Z: \\192.168.111.158\!
    (Note the space after Z:. The other important thing is the ! mark; it does the trick. Without !, the Windows NFS client doesn't work with the root export "/".)

There is a user mapping issue: Windows users must be mapped to Linux users. Since the NFS gateway doesn't recognize Windows users, it maps them to the Linux user "nobody".

For a data I/O test, you can create a directory on HDFS and give everyone access permission (see the sketch below). In practice, most administrators use a Windows AD server to manage the user mapping between Windows and Linux systems, so user mapping is not an issue in those environments.
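
A minimal sketch of such a test directory, assuming the path /tmp/nfstest is free to use:

$ hdfs dfs -mkdir -p /tmp/nfstest    # create a test directory on HDFS
$ hdfs dfs -chmod 777 /tmp/nfstest   # open it to all users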

answer Dec 21, 2013 by anonymous
Thanks, I will give this a try.
Also, I see you are with Hortonworks. Is there a way to use the 2.0 Sandbox to map any of vg_sandbox to another Linux box, or to the Windows server running the VirtualBox that hosts the Sandbox?
+1 vote

You can use the HDFS NFS gateway to mount HDFS. The limitation is that random write is not supported.
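
On a Linux client, a minimal mount sketch looks like this (the gateway hostname and mount point are illustrative; the options follow the NFS gateway user guide):

$ sudo mkdir -p /mnt/hdfs
$ sudo mount -t nfs -o vers=3,proto=tcp,nolock <nfs_gateway_host>:/ /mnt/hdfs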

JIRA HDFS-5347 added a user guide: https://issues.apache.org/jira/browse/HDFS-5347
There is an HTML attachment on that JIRA: HdfsNfsGateway.new.html
If you are using branch 2.3, 2.4 or trunk, "mvn site" will generate the user guide too.

answer Dec 21, 2013 by Satish Mishra
Similar Questions
+1 vote

To run a job, we use the command
$ hadoop jar example.jar inputpath outputpath
If the job takes too long and we want to stop it midway, which command is used? Or is there any other way to do that?
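
A sketch of the usual approach, killing the job by its ID (the ID below is illustrative):

$ hadoop job -list                          # find the running job's ID
$ hadoop job -kill job_201312211234_0001    # kill it
(on YARN, the equivalent is: yarn application -kill <application_id>)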

0 votes

The reason behind this is that I want a custom user who can create anything on the entire HDFS file system (/).
I tried a couple of links; however, none of them were useful. Is there any way I can do that by adding/modifying some property tags?
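
A sketch of one common approach: add the custom user to the HDFS superuser group configured by dfs.permissions.superusergroup in hdfs-site.xml (the group and user names below are illustrative):

$ hdfs getconf -confKey dfs.permissions.superusergroup   # defaults to "supergroup"
$ sudo usermod -a -G supergroup myuser                   # run on the NameNode host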

+2 votes

Our YARN application would benefit from maximal bandwidth on HDFS reads, but I'm unclear on how short-circuit reads are enabled. Are they on by default?

Can our application check programmatically to see if the short-circuit read is enabled?
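
A sketch of a quick check, querying the client configuration for the short-circuit property (dfs.client.read.shortcircuit defaults to false):

$ hdfs getconf -confKey dfs.client.read.shortcircuit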

+2 votes
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxMinReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
 // Running statistics across all keys seen by this reducer.
 int max_sum = 0;
 int min_sum = Integer.MAX_VALUE;
 int mean = 0;
 int count = 0;
 Text max_occured_key = new Text();
 Text min_occured_key = new Text();
 Text mean_key = new Text("Mean : ");
 Text count_key = new Text("Count : ");

 @Override
 public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
       int sum = 0;
       for (IntWritable value : values) {
             sum += value.get();
             count++;
       }

       // Strict comparisons keep only one key per extreme; on a tie
       // (e.g. "high" and "low" both occurring 3 times) the later key
       // is dropped. Tracking all tied keys would need a collection.
       if (sum < min_sum) {
             min_sum = sum;
             min_occured_key.set(key);
       }

       if (sum > max_sum) {
             max_sum = sum;
             max_occured_key.set(key);
       }
 }

 @Override
 protected void cleanup(Context context) throws IOException, InterruptedException {
       // Compute the mean once, after all keys have been processed. The
       // parentheses matter: the original max_sum + min_sum / count
       // evaluated as max_sum + (min_sum / count).
       if (count > 0) {
             mean = (max_sum + min_sum) / count;
       }
       context.write(max_occured_key, new IntWritable(max_sum));
       context.write(min_occured_key, new IntWritable(min_sum));
       context.write(mean_key, new IntWritable(mean));
       context.write(count_key, new IntWritable(count));
 }
}

Here I am writing the minimum, maximum and mean of the word counts.

My input file:

high low medium high low high low large small medium

The expected output is:

high - 3 (maximum)

low - 3 (maximum)

large - 1 (minimum)

small - 1 (minimum)

but I am not getting the above output... can anyone please help me?

...