How to find the min, max, and mean of word counts from a text file in Hadoop MapReduce?

+2 votes
1,458 views
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxMinReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    int max_sum = 0;
    int min_sum = Integer.MAX_VALUE;
    int total_sum = 0;   // sum of all per-word counts, used for the mean
    int count = 0;       // total number of word occurrences (all values)
    int key_count = 0;   // number of distinct words (keys)
    Text max_occured_key = new Text();
    Text min_occured_key = new Text();
    Text mean_key = new Text("Mean : ");
    Text count_key = new Text("Count : ");

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
            count++;
        }
        total_sum += sum;
        key_count++;   // one distinct key per reduce() call

        if (sum < min_sum) {
            min_sum = sum;
            min_occured_key.set(key);
        }
        if (sum > max_sum) {
            max_sum = sum;
            max_occured_key.set(key);
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // mean of the per-word counts: total of all counts / number of distinct words
        int mean = (key_count == 0) ? 0 : total_sum / key_count;
        context.write(max_occured_key, new IntWritable(max_sum));
        context.write(min_occured_key, new IntWritable(min_sum));
        context.write(mean_key, new IntWritable(mean));
        context.write(count_key, new IntWritable(count));
    }
}

Here I am writing the minimum, maximum, and mean of the word counts.
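For context, this reducer assumes the standard word-count mapper that emits a (word, 1) pair per token. A minimal sketch of that assumed mapper (the class name TokenMapper is just a placeholder, not from my job):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Emit (token, 1) for every whitespace-separated token in the line.
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}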

My input file :

high low medium high low high low large small medium

Expected output is:

high - 3 (maximum)
low - 3 (maximum)
large - 1 (minimum)
small - 1 (minimum)

But I am not getting the above output. Can anyone please help me?

posted Oct 16, 2015 by Sathish

Let me understand: if your input is

high low medium high low high low large small medium

then the max is 3 ("high" and "low" each appear three times), the min is 1 ("large" and "small" each appear once), and the mean is 2 (10 total occurrences across 5 distinct words). But your logic does not seem to do that. Please cross-check the reduce function.
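To make those numbers concrete, here is the same arithmetic as a plain-Java sketch (standalone, not part of the job; the counts are hard-coded from your sample input):

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ExpectedStats {
    public static void main(String[] args) {
        // Per-word counts from the input: high=3, low=3, medium=2, large=1, small=1
        List<Integer> counts = Arrays.asList(3, 3, 2, 1, 1);
        int max = Collections.max(counts);                              // 3
        int min = Collections.min(counts);                              // 1
        int total = counts.stream().mapToInt(Integer::intValue).sum();  // 10
        int mean = total / counts.size();                               // 10 / 5 = 2
        System.out.printf("max=%d min=%d mean=%d%n", max, min, mean);
    }
}

Your reducer should reproduce exactly these values.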
You may not be handling the duplicate min/max key case. Check the following link:
http://stackoverflow.com/questions/32964067/hadoop-word-count-and-get-the-minimum-occured-word

Look at the second part of the answer :)
I tried in different ways, but I couldn't resolve it. Actually, I am new to Hadoop. Can you please write the code?
I am just pasting the code from the other site (the link I shared), which handles duplicate keys for the min case only; using that, you can write the max as well as the mean.

import java.io.IOException;
import java.util.ArrayList;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    int min_sum = Integer.MAX_VALUE;
    ArrayList<String> al = new ArrayList<String>();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }

        if (sum < min_sum) {
            min_sum = sum;
            al.clear();                  // a new minimum invalidates all earlier keys
            al.add(key.toString());      // copy the key: Hadoop reuses the Text object
        } else if (sum == min_sum) {
            al.add(key.toString());      // tie: keep every key with the minimum count
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        for (String value : al) {
            context.write(new Text(value), new IntWritable(min_sum));
        }
    }
}
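For illustration, the max side under the same pattern might look like this (untested sketch; the class name MaxListReducer is a placeholder):

import java.io.IOException;
import java.util.ArrayList;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxListReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    int max_sum = 0;
    ArrayList<String> al = new ArrayList<String>();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        if (sum > max_sum) {
            max_sum = sum;
            al.clear();                  // a new maximum invalidates all earlier keys
            al.add(key.toString());      // copy the key: Hadoop reuses the Text object
        } else if (sum == max_sum) {
            al.add(key.toString());      // tie: keep every key with the maximum count
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        for (String value : al) {
            context.write(new Text(value), new IntWritable(max_sum));
        }
    }
}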
Actually, that question was asked by me on Stack Overflow (http://stackoverflow.com/questions/32964067/hadoop-word-count-and-get-the-minimum-occured-word), and no one resolved it.
This part of the code should work:

           if (sum < min_sum) {
               min_sum = sum;
               al.clear();
               al.add(key.toString());
           } else if (sum == min_sum) {
               al.add(key.toString());
           }
I already tried the above code; it doesn't work.
Unfortunately, I don't have access to the test platform here, so you may need to debug at your end. The crux is this code: clear the old list and then add the key when the sum is less than min_sum, and just add the key when the sum is the same.
Once you resolve it, can you please share the code?
Sure I will :)
Can anyone give me the solution?

Similar Questions
+1 vote

To run a job we use the command
$ hadoop jar example.jar inputpath outputpath
If the job is taking too long and we want to stop it midway, which command should be used? Or is there any other way to do that?
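For reference, one known way is the job CLI (a sketch; <job_id> is whatever ID the submission prints):

$ hadoop job -list            # list running jobs and their IDs
$ hadoop job -kill <job_id>   # stop the job midway

On newer releases the same subcommands are available as "mapred job -list" and "mapred job -kill <job_id>".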

+3 votes

Date date;
long start, end; // for recording the start and end time of the job

date = new Date(); start = date.getTime(); // start the timer

job.waitForCompletion(true);

date = new Date(); end = date.getTime(); // end the timer
log.info("Total Time (in milliseconds) = " + (end - start));
log.info("Total Time (in seconds) = " + (end - start) * 0.001F);

I am not sure this is the correct way to measure it. Is there any other method or API to find the execution time of a MapReduce job?
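For comparison, an equivalent sketch using System.currentTimeMillis() directly (same wall-clock approach, without the Date objects):

long start = System.currentTimeMillis();
boolean success = job.waitForCompletion(true);   // blocks until the job finishes
long end = System.currentTimeMillis();
log.info("Total Time (in milliseconds) = " + (end - start));
log.info("Total Time (in seconds) = " + (end - start) / 1000.0);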

+1 vote

In the XML configuration files of Hadoop 2.x, "mapreduce.input.fileinputformat.split.minsize" is given and can be set, but how is "mapreduce.input.fileinputformat.split.maxsize" set in the XML file? I also need to set it in my MapReduce code.
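For illustration, a sketch of setting it from driver code under the Hadoop 2.x mapreduce API (the 64 MB value is just an example):

// In the driver (imports: org.apache.hadoop.conf.Configuration,
// org.apache.hadoop.mapreduce.Job,
// org.apache.hadoop.mapreduce.lib.input.FileInputFormat):
Configuration conf = new Configuration();
conf.setLong("mapreduce.input.fileinputformat.split.maxsize", 64L * 1024 * 1024); // 64 MB
Job job = Job.getInstance(conf, "example");

// Or via the FileInputFormat helper, which sets the same property:
FileInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024);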

+1 vote

A MapReduce job can be run as a jar file from the terminal or directly from the Eclipse IDE. When a job is run as a jar file from the terminal, it uses multiple JVMs and all the resources of the cluster. Does the same thing happen when we run it from the IDE? I have run a job both ways, and it takes less time in the IDE than as a jar file on the terminal.

+2 votes

Let us change the default block size to 32 MB and the replication factor to 1, and let the Hadoop cluster consist of 4 DNs with an input data size of 192 MB. Now I want to place the data on the DNs as follows: DN1 and DN2 hold 2 blocks (32+32 = 64 MB) each, and DN3 and DN4 hold 1 block (32 MB) each. Is this possible? How can it be accomplished?
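For the block size and replication part, these are the standard hdfs-site.xml entries (a sketch; which specific DN receives which block is decided by HDFS's placement policy, not by these settings):

<property>
  <name>dfs.blocksize</name>
  <value>33554432</value>   <!-- 32 MB = 32 * 1024 * 1024 bytes -->
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>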

...