
I'm trying to find out how much memory an array uses inside of the JVM. I've set up a program for that purpose, which is giving me odd results.

protected static long openMem(){
    System.gc();
    System.runFinalization();
    return Runtime.getRuntime().freeMemory();
}

public static double listSize(int limit){
    long start= openMem();
    Object[] o= new Object[limit];
    for(int i= 0; i<limit; i++ ){
        o[i]= null;
    }
    long end= openMem();
    o= null;
    return (start-end);
}

public static void list(int i){
    for(int y= 0; y<50; y++ ){
        double d= Quantify.listSize(i);
        System.out.println(i+" = "+d+" bytes");
    }
}

public static void main(String ... args){
    list(1);
    list(2);
    list(3);
    list(100);
}

When I run this, I get two different byte-sizes for each size of array, like:

  • 1 = 24.0 bytes
  • 1 = 208.0 bytes
  • 1 = 24.0 bytes
  • 1 = 208.0 bytes
  • 1 = 208.0 bytes
  • 1 = 208.0 bytes
  • 1 = 208.0 bytes
  • 1 = 24.0 bytes

So an array of 1 element only ever returns "24 bytes" or "208 bytes", and the same pattern holds for all others:

1 = 24.0 bytes
1 = 208.0 bytes
2 = 24.0 bytes
2 = 208.0 bytes
3 = 32.0 bytes
3 = 216.0 bytes
100 = 416.0 bytes
100 = 600.0 bytes

I'm trying to figure out why that is. Does anyone here (a) already know the answer, or (b) know how to find it?

1 Comment
I'd never recommend measuring anything via freemem deltas. Create some typed array, for example MyClass[], and run jmap -histo:live <pid>. Memory is not deterministic at any rate, and the Java runtime has its own caches, objects and threads, so you can't really quantify anything via a delta of free memory; you may get a ballpark number for the currently used or free memory, but trying to measure a single object (like the array) is a futile effort. An array consumes the object header + 4 bytes for the length + sun.misc.Unsafe.ADDRESS_SIZE * array.length. Commented Nov 11, 2014 at 21:40
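
To follow the jmap route suggested in this comment, a minimal sketch could look like the following; HistoDemo and MyClass are placeholder names of my own, and jmap -histo:live is run from a second terminal while the program sleeps:

class HistoDemo {
    // MyClass is a hypothetical placeholder type; any concrete type will do.
    static class MyClass {}

    // Keep the array in a static field so it stays strongly reachable
    // while jmap inspects the live heap.
    static final MyClass[] ARRAY = new MyClass[100000];

    public static void main(String... args) throws InterruptedException {
        System.out.println("Attach with: jmap -histo:live <pid>");
        Thread.sleep(60000); // window in which to run jmap from another terminal
        System.out.println(ARRAY.length); // keep ARRAY reachable
    }
}

In the histogram output, the object-array line for MyClass shows the instance count and the total bytes the array occupies.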

1 Answer


Measuring heap occupancy on the JVM is even trickier than measuring performance. For one, there are thread-local allocation buffers (TLABs): chunks of heap handed out to a thread in one go, regardless of the size of the object being allocated. You should disable them for this kind of measurement with -XX:-UseTLAB. Beyond that, your code gets some things right but others only almost right: I would suggest running two GCs instead of one; there is no need to run finalization; and the GC should run before allocation and again after deallocation, whereas you run it only before each measurement. You also need to use totalMemory - freeMemory, otherwise you are vulnerable to heap resizing.

All in all, try measuring with the code below; it gives me reliable results.

class Quantify {
  // Average over 20 arrays to smooth out per-allocation noise.
  static final Object[][] arrays = new Object[20][];

  static long takenMem(){
    final Runtime rt = Runtime.getRuntime();
    // total - free, so heap resizing doesn't distort the result
    return rt.totalMemory() - rt.freeMemory();
  }

  static long arraySize(int size){
    System.gc(); System.gc();   // two GCs before allocating
    long start = takenMem();
    for (int i = 0; i < arrays.length; i++) arrays[i] = new Object[size];
    final long end = takenMem();
    for (int i = 0; i < arrays.length; i++) arrays[i] = null;
    System.gc(); System.gc();   // clean up after deallocation
    return (end - start) / arrays.length;
  }

  public static void main(String... args) {
    for (int i = 1; i <= 20; i++) System.out.println(i+": "+arraySize(i));
  }
}
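
Note that the -XX:-UseTLAB flag mentioned above only takes effect if the program is started with it, e.g. java -XX:-UseTLAB Quantify (assuming the class is compiled as shown).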

I get this output:

1: 24
2: 24
3: 32
4: 32
5: 40
6: 40
7: 48
8: 48
9: 56
10: 56
11: 64
12: 64
13: 72
14: 72
15: 80
16: 80
17: 88
18: 88
19: 96
20: 96

This is consistent with the real situation: the minimum allocation is 24 bytes due to header overhead, and sizes grow in steps of 8 because of memory alignment (this is typical for a 64-bit JVM).
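
As a rough cross-check of these numbers, here is a small arithmetic sketch. It assumes a 64-bit HotSpot JVM with compressed oops, i.e. a 12-byte object header plus a 4-byte array length field, 4 bytes per reference, and padding to a multiple of 8; ArraySizeEstimate is just an illustrative name:

class ArraySizeEstimate {
  // 12-byte object header + 4-byte length field, then 4 bytes per
  // (compressed) reference, rounded up to a multiple of 8
  static long estimate(int length) {
    long raw = 12 + 4 + 4L * length;
    return (raw + 7) / 8 * 8;
  }

  public static void main(String... args) {
    // prints 1: 24, 2: 24, 3: 32, ... matching the measured table above
    for (int i = 1; i <= 20; i++) System.out.println(i + ": " + estimate(i));
  }
}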


5 Comments

A much simpler way to measure is jmap -histo.
I once got burnt by such metrics in VisualVM (which probably relies on jmap or something similar internally): it failed to take compressed oops into account. That tells me its reports do not come from direct measurement, but from some hardcoded sizeof data that may be out of sync with reality.
jmap and jstack use the debugging interface to connect to the JVM; VisualVM probably uses the same interface (I'm not a great fan of VisualVM since it can't really be used in production). However, all the snapshots are performed in the host process, so if compressed oops were not taken into account, it was probably a bug. Overall I highly recommend "mastering" jmap, as it has been an indispensable tool for profiling production servers for memory allocation, usage, and leaks.
Btw, there are two different modes: one is the histogram and the other is the memory dump. The histogram is the one performed in the host process; the memory dump, rightfully, would use an estimated size.
@bestsss Yes, we too use jmap for production profiling. We did manage to use VisualVM there as well, but it was a royal pain (setting up SSH tunnels for all the ports, which aren't exactly documented and some even change from run to run). But looking at your production server through the eyes of VisualGC is a whole new experience :)
