Hadoop YARN 2.2.0 Streaming Memory Limitation?

+1 vote
629 views

We are currently facing a frustrating Hadoop streaming memory problem. Our setup:

  • our compute nodes have about 7 GB of RAM
  • Hadoop streaming starts a bash script which uses about 4 GB of RAM
  • therefore it is only possible to run one, and only one, task per node

Out of the box, each Hadoop instance starts about 7 containers with the default settings. Each Hadoop task forks a bash script that needs about 4 GB of RAM; the first fork works, but all following ones fail because they run out of memory. So what we are looking for is a way to limit the number of containers to only one. This is what we found on the internet:

  • Set yarn.scheduler.maximum-allocation-mb and mapreduce.map.memory.mb to values such that there is at most one container. This means mapreduce.map.memory.mb must be more than half of the maximum memory (otherwise there will be multiple containers).

Done right, this gives us one container per node, but it produces a new problem: since our Java process is now using at least half of the maximum memory, the child (bash) process we fork inherits the parent memory footprint, and because the memory used by the parent was more than half of the total memory, we run out of memory again. If we lower the map memory, Hadoop will allocate 2 containers per node, which will run out of memory too.
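
To make that rule concrete, on a node with about 7 GB of RAM the settings would look roughly like this (illustrative values, not a verified recipe):

yarn.nodemanager.resource.memory-mb = 6144
yarn.scheduler.maximum-allocation-mb = 6144
mapreduce.map.memory.mb = 4096

Since 4096 is more than half of 6144, the scheduler can fit only one map container per node.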

Since this problem is a blocker in our current project, we are evaluating adapting the source code to solve this issue, as a last resort. Any ideas on this are very much welcome.

posted Feb 24, 2014 by Jagan Mishra

Can you try setting yarn.nodemanager.resource.memory-mb (the amount of physical memory, in MB, that can be allocated for containers) to, say, 1024, and also setting mapreduce.map.memory.mb to 1024?
Thanks for the input. Unfortunately it doesn't solve our problem. If we set the properties:
yarn.nodemanager.resource.memory-mb = 1024
mapreduce.map.memory.mb = 1024
then no containers are spawned and no jobs are started.

If I set:
yarn.nodemanager.resource.memory-mb = 2048
mapreduce.map.memory.mb = 2048

then there is one container and one mapper, but the bash process can't be started by Hadoop streaming.
The logs say:
ContainersMonitorImpl: Memory usage of ProcessTree 7655 for container-id container_1393326502216_0001_01_000001: 164.7 MB of 2 GB physical memory used; 1.5 GB of 4.2 GB virtual memory used
but there is no sign of why our bash script isn't started.

1 Answer

+1 vote

Please try with mapreduce.map.memory.mb = 5124

answer Feb 24, 2014 by Garima Jain
Thanks a lot for your input. We got it to run correctly; it is not exactly the solution you proposed, but it's close:

The main error we made is that on a YARN controller node the memory footprint must be set differently than on a Hadoop worker node. The following rule of thumb seems to apply in our setup:
master: mapreduce.map.memory.mb = 1/3 of yarn.nodemanager.resource.memory-mb
worker: mapreduce.map.memory.mb = 1/2 of yarn.nodemanager.resource.memory-mb

For both cases we set mapreduce.map.child.java.opts = "-Xmx1024m", or about 1/4 of the total memory.

The reason for this behaviour is that the YARN controller node spawns 2 subprocesses, while the workers spawn only 1:

  • on the master: MRAppMaster and YarnChild (which spawns the mapper)
  • on the workers: YarnChild (which spawns the mapper)
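
To put illustrative numbers on the rule of thumb (assuming, say, yarn.nodemanager.resource.memory-mb = 6144 on every node; not necessarily our exact values):

master: mapreduce.map.memory.mb = 2048 (one third of 6144)
worker: mapreduce.map.memory.mb = 3072 (half of 6144)
both: mapreduce.map.child.java.opts = -Xmx1024m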

Now everything works smoothly.
Similar Questions
+2 votes

I am using containerLaunchContext.setCommands() to add different commands that I want to run on the container, but only the first command is getting executed. Is there something else I need to do?

List<String> commands = new ArrayList<String>();
commands.add(cmd1);
commands.add(cmd2);

I can see only cmd1 is getting executed.
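
One workaround that is commonly suggested (not verified on this exact version) is to chain everything into a single shell command, since as far as I can tell the commands in the list end up joined onto one line of the container launch script; cmd1, cmd2 and containerLaunchContext are the same as in the snippet above:

// Sketch: join the commands with && so both run as one launch-script line.
List<String> commands = new ArrayList<String>();
commands.add(cmd1 + " && " + cmd2);
containerLaunchContext.setCommands(commands);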

+1 vote

I have a job running very slowly. When I examine the cluster, I find my hdfs user using 170 MB of swap via the top command; that user runs the datanode daemon. The ps output shows the following info: there are two -Xmx values, and I do not know which one is the real value, 1000m or 10240m.

# ps -ef|grep 2853
root      2095  1937  0 15:06 pts/4    00:00:00 grep 2853
hdfs      2853     1  5 Nov07 ?        1-22:34:22 /usr/java/jdk1.7.0_45/bin/java -Dproc_datanode -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-ch14.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Xmx10240m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/hadoop-hdfs/gc-ch14-datanode.log -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
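
As far as I know, when HotSpot sees the same option twice the last occurrence wins, so the effective heap limit here should be 10240m; a small check like the following (a hypothetical helper, not part of Hadoop) makes it easy to verify:

// XmxCheck.java - run with: java -Xmx1000m -Xmx10240m XmxCheck
public class XmxCheck {
    public static void main(String[] args) {
        // Reports roughly the heap limit the running JVM actually applied.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Effective max heap: ~" + maxMb + " MB");
    }
}
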
+1 vote

I am using the YARN fair scheduler to allow a group of users to share a cluster equally for running Spark jobs. It works great, but when a large rebalance happens, Spark sometimes can't keep up, and the job fails.

Is there any way to control the rate at which YARN preempts resources? I'd love to limit the killing of containers to a slower pace, so Spark has a chance to keep up.
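
The fair scheduler settings usually pointed at for this (I am not certain they fully solve rate limiting; values are illustrative) are, in yarn-site.xml:

yarn.scheduler.fair.preemption = true
yarn.scheduler.fair.waitTimeBeforeKill = 60000 (ms to wait after marking a container for preemption before actually killing it)
yarn.scheduler.fair.preemptionInterval = 15000 (ms between preemption checks)

plus, in the fair-scheduler allocation file, minSharePreemptionTimeout and fairSharePreemptionTimeout (seconds a queue must be under its share before preemption starts).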

+1 vote

How does a job work in YARN/MapReduce, i.e., what is the navigation path?

Please check whether my understanding is right:

When the application or job starts, the client communicates with the NameNode, and the application manager is started on a node (DataNode). The application manager communicates with the resource manager (on the NameNode) to get resources. The resources are assigned to containers. The job runs in a container, which is a JVM.

+1 vote

How can I track a job failure on a node or a list of nodes using the YARN APIs? I can get the list of long-running jobs using the YARN client API, but I need to go further to the AM, NM, and task attempts for map or reduce.
Say I have a job running for a long time (about 4 hours), which might be caused by some task failures.

Please provide the sequence of APIs, or any reference.
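
Not a definitive recipe, but a minimal sketch using the Hadoop 2.x YarnClient API: list the applications, flag the long-running ones, and print their diagnostics and AM tracking URL (the tracking URL leads to the AM web UI, where failed task attempts and the nodes they ran on are listed). The 4-hour threshold just mirrors the question:

import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.api.records.YarnApplicationState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class LongRunningJobCheck {
    public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();

        long thresholdMs = 4L * 60 * 60 * 1000;  // "about 4 hours"
        for (ApplicationReport report : yarnClient.getApplications()) {
            if (report.getYarnApplicationState() != YarnApplicationState.RUNNING) {
                continue;  // only look at applications that are still running
            }
            long elapsedMs = System.currentTimeMillis() - report.getStartTime();
            if (elapsedMs > thresholdMs) {
                // Diagnostics and the AM tracking URL are the entry points for
                // digging into task attempts and the nodes they ran on.
                System.out.println(report.getApplicationId()
                        + " running for " + (elapsedMs / 60000) + " min");
                System.out.println("  diagnostics:  " + report.getDiagnostics());
                System.out.println("  tracking URL: " + report.getTrackingUrl());
            }
        }
        yarnClient.stop();
    }
}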

...