
I am trying to find the maximum memory that I can allocate on the stack, in global (static) storage, and on the heap in C++. I am trying this program on a Linux system with 32 GB of RAM, and on my Mac with 2 GB of RAM.

/* test to determine the maximum memory that could be allocated for static, heap and stack memory */

#include <iostream>
using namespace std;

// static/global
long double a[200000000];

int main()
{
    // stack
    long double b[999999999];

    // heap
    long double *c = new long double[3999999999];

    cout << "Sizeof(long double) = " << sizeof(long double) << " bytes\n";
    cout << "Allocated Global (Static) size of a = " << (double)((sizeof(a))/(double)(1024*1024*1024)) << " Gbytes \n";
    cout << "Allocated Stack size of b = " << (double)((sizeof(b))/(double)(1024*1024*1024)) << " Gbytes \n";
    cout << "Allocated Heap Size of c = " << (double)((3999999999 * sizeof(long double))/(double)(1024*1024*1024)) << " Gbytes \n";

    delete[] c;
    return 0;
}

Results (on both):

Sizeof(long double) = 16 bytes
Allocated Global (Static) size of a = 2.98023 Gbytes 
Allocated Stack size of b = 14.9012 Gbytes 
Allocated Heap Size of c = 59.6046 Gbytes

I am using GCC 4.2.1. My question is:

Why does my program run? I expected that since the stack was exhausted (16 MB on Linux, 8 MB on Mac), the program would throw an error. I have looked at several of the many questions asked on this topic, but I couldn't solve my problem from the answers given there.

  • Where did you get those 16MB/8MB figures from? Those look like default thread stack sizes, not maximum process stack sizes. Commented Sep 13, 2012 at 17:55
  • I know some OSes can extend the stack until it runs out of virtual memory; maybe both of yours can? Or maybe the compiler moved the stuff you thought was on the stack into a global? (Recursive functions would prevent this optimization.) Commented Sep 13, 2012 at 17:55
  • ulimit -a gives stack size (kbytes, -s) 8192 on Mac, and on Linux it is stack size (kbytes, -s) 10240. Sorry, it is 10 MB I think (not 16); I will edit. Commented Sep 13, 2012 at 18:10
  • No, someone else was using the machine, so it is 16 MB indeed. Thank you. Commented Sep 13, 2012 at 18:38

2 Answers


On some systems you can allocate any amount of memory that fits in the address space. The problems begin when you start actually using that memory.

What happens is that the OS reserves a virtual address range for the process, without mapping it to anything physical, or even checking that there's enough physical memory (including swap) to back that address range up. The mapping only happens in a page-by-page fashion, when the process tries to access newly allocated pages. This is called memory overcommitment.

Try accessing every sysconf(_SC_PAGESIZE)th byte of your huge arrays and see what happens.


2 Comments

I thought that this is the case with heap allocations only, and not stack.
@Sayan: Your OS may not know whether a memory allocation is intended for stack or heap usage.

Linux overcommits, meaning that it can allow a process more memory than is available on the system, but it is not until that memory is actually used by the process that actual memory (physical main memory or swap space on disk) is allocated for the process. My guess would be that Mac OS X works in a similar way.
