
I'm facing an Out of Memory error while running a MapReduce program. If I keep 260 files in one folder and give it as input to the MapReduce program, it shows a Java heap space Out of Memory error. If I give only 100 files as input, it runs fine. How can I limit the MapReduce program to take only 100 files (~50 MB) at a time? Can anyone please suggest a fix for this issue?

Number of files: 318; number of blocks: 1 (block size: 128 MB); Hadoop is running on a 32-bit system.

My stack trace:
===============
    15/05/05 11:52:47 INFO input.FileInputFormat: Total input paths to process : 318
    15/05/05 11:52:47 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 52027734
    15/05/05 11:52:47 INFO mapreduce.JobSubmitter: number of splits:1
    15/05/05 11:52:47 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local634564612_0001
    15/05/05 11:52:47 WARN conf.Configuration: file:/app/hadoop/tmp/mapred/staging/raghuveer634564612/.staging/job_local634564612_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
    15/05/05 11:52:47 WARN conf.Configuration: file:/app/hadoop/tmp/mapred/staging/raghuveer634564612/.staging/job_local634564612_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
    15/05/05 11:52:48 WARN conf.Configuration: file:/var/hadoop/mapreduce/localRunner/raghuveer/job_local634564612_0001/job_local634564612_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
    15/05/05 11:52:48 WARN conf.Configuration: file:/var/hadoop/mapreduce/localRunner/raghuveer/job_local634564612_0001/job_local634564612_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
    15/05/05 11:52:48 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
    15/05/05 11:52:48 INFO mapreduce.Job: Running job: job_local634564612_0001
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: OutputCommitter set in config null
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: Waiting for map tasks
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: Starting task: attempt_local634564612_0001_m_000000_0
    15/05/05 11:52:48 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
    15/05/05 11:52:48 INFO mapred.MapTask: Processing split: Paths:/user/usr/local/upload/20120713T07-45-42.682358000Z_79.150.138.86-1412.c2s_ndttrace:0+78550,/user/usr/local/upload/20120713T07-45-43.356723000Z_151.40.240.66-53426.c2s_ndttrace:0+32768,/user/usr/local/upload/20120713T07-45-43.718556000Z_85.26.235.102-25300.c2s_ndttrace:0+10130,/user/usr/local/upload
         .....
         .....
         .....
/20120713T08-33-41.259331000Z_84.122.129.103-61321.c2s_ndttrace:0+19148,/user/usr/local/upload/20120713T08-33-54.972649000Z_86.69.144.214-49599.c2s_ndttrace:0+63014,/user/usr/local/upload/20120713T08-33-56.162340000Z_41.143.91.156-50785.c2s_ndttrace:0+13658,/user/usr/local/upload/20120713T08-33-59.768261000Z_31.187.12.141-50274.c2s_ndttrace:0+126542,/user/usr/local/upload/20120713T08-34-03.950055000Z_78.119.172.109-51495.c2s_ndttrace:0+92676,/user/usr/local/upload/20120713T08-34-08.378534000Z_87.7.113.115-62238.c2s_ndttrace:0+49410,/user/usr/local/upload/20120713T08-34-26.258570000Z_151.13.227.66-33198.c2s_ndttrace:0+2666092
    15/05/05 11:52:49 INFO mapreduce.Job: Job job_local634564612_0001 running in uber mode : false
    15/05/05 11:52:49 INFO mapreduce.Job:  map 0% reduce 0%
    15/05/05 11:52:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
    15/05/05 11:52:53 INFO mapred.MapTask: (EQUATOR) 0 kvi 78643196(314572784)
    15/05/05 11:52:53 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 300
    15/05/05 11:52:53 INFO mapred.MapTask: soft limit at 251658240
    15/05/05 11:52:53 INFO mapred.MapTask: bufstart = 0; bufvoid = 314572800
    15/05/05 11:52:53 INFO mapred.MapTask: kvstart = 78643196; length = 19660800
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (82) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (82) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:55 INFO mapred.MapTask: Starting flush of map output
    15/05/05 11:52:55 INFO mapred.MapTask: Spilling map output
    15/05/05 11:52:55 INFO mapred.MapTask: bufstart = 0; bufend = 105296; bufvoid = 314572800
    15/05/05 11:52:55 INFO mapred.MapTask: kvstart = 78643196(314572784); kvend = 78637988(314551952); length = 5209/19660800
    15/05/05 11:52:55 INFO mapred.LocalJobRunner: map > map
    15/05/05 11:52:55 INFO mapred.MapTask: Finished spill 0
    15/05/05 11:52:55 INFO mapred.LocalJobRunner: map task executor complete.
    15/05/05 11:52:55 WARN mapred.LocalJobRunner: job_local634564612_0001
    java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
    Caused by: java.lang.OutOfMemoryError: Java heap space
        at net.ripe.hadoop.pcap.PcapReader.nextPacket(PcapReader.java:208)
        at net.ripe.hadoop.pcap.PcapReader.access$0(PcapReader.java:173)
        at net.ripe.hadoop.pcap.PcapReader$PacketIterator.fetchNext(PcapReader.java:554)
        at net.ripe.hadoop.pcap.PcapReader$PacketIterator.hasNext(PcapReader.java:559)
        at net.ripe.hadoop.pcap.io.reader.PcapRecordReader.nextKeyValue(PcapRecordReader.java:57)
        at net.ripe.hadoop.pcap.io.reader.CombineBinaryRecordReader.nextKeyValue(CombineBinaryRecordReader.java:42)
        at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:69)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    15/05/05 11:52:56 INFO mapreduce.Job: Job job_local634564612_0001 failed with state FAILED due to: NA
    15/05/05 11:52:56 INFO mapreduce.Job: Counters: 25
        File System Counters
            FILE: Number of bytes read=29002348
            FILE: Number of bytes written=29450636
            FILE: Number of read operations=0
            FILE: Number of large read operations=0
            FILE: Number of write operations=0
            HDFS: Number of bytes read=103142
            HDFS: Number of bytes written=0
            HDFS: Number of read operations=6
            HDFS: Number of large read operations=0
            HDFS: Number of write operations=1
        Map-Reduce Framework
            Map input records=1303
            Map output records=1303
            Map output bytes=105296
            Map output materialized bytes=0
            Input split bytes=38078
            Combine input records=0
            Spilled Records=0
            Failed Shuffles=0
            Merged Map outputs=0
            GC time elapsed (ms)=593
            CPU time spent (ms)=0
            Physical memory (bytes) snapshot=0
            Virtual memory (bytes) snapshot=0
            Total committed heap usage (bytes)=1745092608
        File Input Format Counters 
            Bytes Read=0
  • Can you share the stack trace? Commented May 18, 2015 at 6:06
  • Which version/distribution are you using? Commented May 18, 2015 at 6:08
  • I'm using Hadoop 2.4.1. Commented May 18, 2015 at 6:12
  • Please suggest a solution for this issue. Commented May 18, 2015 at 8:10

1 Answer


STEP 1:

Add this line to the .bashrc file found in your Hadoop home directory:

export JVM_ARGS="-Xms1024m -Xmx1024m"

This sets the Java heap size to 1024 MB (the default is 128). If you run Hadoop jobs from a terminal, reload the file as the hadoop user:

source ~/.bashrc

If you still get the error, try step 2.

STEP 2:

Add this line to the hadoop-env.sh file:

export HADOOP_CLIENT_OPTS="-Xmx1024m $HADOOP_CLIENT_OPTS"

Note that the posted trace shows the job running under LocalJobRunner (job_local…), meaning the map task runs inside the client JVM, so HADOOP_CLIENT_OPTS is the setting most likely to take effect here. If there is still no luck, try step 3.

STEP 3:

Add this property to the mapred-site.xml file:

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
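
The comments mention Hadoop 2.4.1; there, mapred.child.java.opts is a deprecated (though still honored) name. The per-task equivalents with the newer property names would be:

  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx1024m</value>
  </property>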

All of these steps increase the default Java heap size, each at a different level: the shell environment, the Hadoop client JVM, and the spawned task JVMs.
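
Separately, the question asks how to limit the job to roughly 100 files (~50 MB) at a time. With a CombineFileInputFormat-based job, the amount of data packed into one combined split can be capped through mapreduce.input.fileinputformat.split.maxsize. Below is a minimal driver sketch (the class name and I/O handling are illustrative, not taken from the original post; keep whatever CombineFileInputFormat subclass the job already uses):

    // Sketch of a driver change, not the poster's actual code: it caps how
    // much data CombineFileInputFormat packs into one split (~50 MB), so the
    // 318 files are spread over several mappers instead of one.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class PcapDriver {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "pcap-analysis");
            job.setJarByClass(PcapDriver.class);

            // Keep the job's existing CombineFileInputFormat subclass here
            // (hypothetical name, shown for illustration):
            // job.setInputFormatClass(YourCombinePcapInputFormat.class);

            // Cap each combined split at ~50 MB. This sets
            // mapreduce.input.fileinputformat.split.maxsize, which
            // CombineFileInputFormat honors when building splits.
            FileInputFormat.setMaxInputSplitSize(job, 50L * 1024 * 1024);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

With a ~50 MB cap, the 318 input files should be spread across several splits instead of the single split visible in the log (number of splits:1), so each mapper sees roughly the 100-file workload that already runs fine. If the driver uses ToolRunner, the same cap can be passed on the command line as -D mapreduce.input.fileinputformat.split.maxsize=52428800.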


3 Comments

Thanks for your reply. I tried the steps mentioned above. Actually, I have two folders in HDFS: one is upload (98 MB) and the other is download (48 MB). When I give the upload folder as input to the MapReduce program, it runs fine; when I give the download folder as input, I get the Java heap space Out of Memory error. I don't understand why this happens. Please help me out.
What does the download folder contain?
Both the upload and download folders contain mlab .c2s and .s2c trace files, respectively.
