How to reduce Perl memory usage?

+1 vote

I would like to use Perl on an embedded device that has only 64 MB of RAM.
Are there any tricks to reduce the memory usage of the perl interpreter?

posted Oct 13, 2013 by Sanketi Garg


2 Answers

+1 vote

You can try building the perl interpreter/runtime library with some aspects of the core language compiled out (Unicode support, threads, etc.); see the README and INSTALL files in the core distribution. Otherwise, there are several ways to reduce the memory consumption of a running perl process: avoid memory leaks (for example, those caused by circular references), prefer packed strings over arrays and arrays over hashes, and rewrite memory-intensive hot spots in specialized Perl/XS code.
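To illustrate the packed-string point: storing many small numbers as raw bytes in a single string avoids the per-element scalar overhead of a Perl array. A minimal sketch (variable names and sizes are illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A Perl array keeps a full scalar (SV) per element, typically tens of
# bytes each; a packed string stores the same integers as raw bytes.
my @nums = (1 .. 1000);

# 'l*' packs 32-bit signed integers: 4 bytes each, ~4 KB total.
my $packed = pack('l*', @nums);

# Random access without unpacking the whole string:
my $tenth = unpack('l', substr($packed, 9 * 4, 4));
print "10th element: $tenth\n";    # prints 10

# Unpack everything back only when needed:
my @back = unpack('l*', $packed);
print "round-trip ok\n" if "@back" eq "@nums";
```

The trade-off is CPU for memory: each access costs a substr/unpack, so this pays off mainly for large, rarely-traversed data.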

answer Oct 13, 2013 by Seema Siddique
+1 vote

That 64 MB looks like enough room. Compile a perl without threads and without debugging support. There is also miniperl, which is used while building perl itself, but it is limited more in features than in size.

Also check the malloc options.
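For the build itself, something along these lines can work. The -U/-D flags below are standard Configure options in the perl source distribution, but verify them against the INSTALL file of your perl version before relying on them:

```shell
# Sketch of a size-conscious build from the perl source tree.
# -des accepts defaults non-interactively; -U turns an option off.

# No ithread support, optimize for size, use the system malloc
# (usemymalloc selects perl's bundled allocator instead).
sh Configure -des \
    -Uusethreads -Uuseithreads -Uusemultiplicity \
    -Doptimize='-Os' \
    -Uusemymalloc

make
make test
```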

answer Oct 13, 2013 by Jagan Mishra
Similar Questions
+1 vote

I have a job running very slowly. When I examine the cluster, I find my hdfs user using 170m of swap via the top command; that user runs the datanode daemon. The ps output below shows two -Xmx values, and I do not know which one is real, 1000m or 10240m:

# ps -ef|grep 2853
root      2095  1937  0 15:06 pts/4    00:00:00 grep 2853
hdfs      2853     1  5 Nov07 ?        1-22:34:22 /usr/java/jdk1.7.0_45/bin/java -Dproc_datanode -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-ch14.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -server -Xmx10240m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/hadoop-hdfs/gc-ch14-datanode.log,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
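For what it's worth, with a HotSpot JVM the later occurrence of -Xmx on the command line normally wins, so the daemon above should be running with 10240m. One way to verify on the machine itself (assuming a reasonably recent HotSpot java) is to ask the JVM for its final flag values:

```shell
# Reproduce the duplicate -Xmx and ask HotSpot which heap limit it
# actually settled on (requires a HotSpot JVM with -XX:+PrintFlagsFinal).
java -Xmx1000m -Xmx10240m -XX:+PrintFlagsFinal -version 2>/dev/null \
    | grep -i maxheapsize
# MaxHeapSize should report the later value, 10240m (10737418240 bytes).
```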
0 votes

I am looking at some code someone wrote and I have difficulty understanding the map usage. Can somebody please explain how the following code works?

print OUT2 join( ',', map { $_ =~ s/"/'/g; "\"$_\"" } @data )
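A cleaned-up, runnable sketch of what that map does (the sample data is made up): map evaluates its block once per element of @data, with $_ aliased to the element, so the s/// edits @data in place; the block's last expression becomes the corresponding element of the list that map returns.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @data = ( 'plain', 'say "hi"', 'a,b' );

my @quoted = map {
    $_ =~ s/"/'/g;    # replace every double quote with a single quote
                      # ($_ is an alias, so this also modifies @data)
    "\"$_\"";         # wrap the modified value in double quotes;
                      # this last expression is the returned element
} @data;

print join( ',', @quoted ), "\n";
# prints: "plain","say 'hi'","a,b"
```

Note the side effect on @data is often unintended; copying first (`my $v = $_;`) inside the block avoids it.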
+3 votes

I have a few questions about cache memory.

1) My understanding is that there are 3 types of cache memory:
1. Within RAM
2. Within the CPU (L1, L2, L3)
3. Separate hardware, which is costlier than all the others.

Please correct me if I am wrong.

2) Who stores the data in the cache? I mean, can we write a program that uses only cache memory? If yes, how? If no, who manages it?
For example:
Is cache memory within RAM managed by the kernel (memory management unit)?
Is cache memory within the CPU managed by the CPU itself?

3) Why is cache memory faster than RAM?