As 64-bit machines become more common, the problems we need to solve also evolve. In this post I’d like to talk about what moving from 32-bit to 64-bit means for the GC and for your application’s memory usage.
One big limitation of 32-bit is the virtual memory address space – as a user mode process you get 2GB, and if the process is large address aware you get 3GB. A few years ago these seemed like giant numbers, but as more and more people have started using the .NET Framework, I’ve seen managed heap sizes go up at quite a high rate. I remember when I first started working on the GC (which was late 2004, I think) we were talking about hundreds of MBs of heap – 300MB seemed like a lot. Today I am seeing managed heaps that are easily GBs in size – and yes, some of them (and more and more of them) are on 64-bit – 2 or 3GB is just not enough anymore.
And along with this, we are shifting to solving a different set of problems. In CLR 2.0 we concentrated heavily on using the VM space efficiently. We tried very hard to reduce fragmentation on the managed heap, so when you get hold of a chunk of virtual memory you can make very efficient use of it. That way people don’t run into problems where they have N managed heap segments and are running out of VM, yet many of those segments are mostly empty (meaning they have a lot of free space on them).
Then you switch to 64-bit. Now suddenly you don’t need to worry about VM anymore – you get plenty there. Practically unlimited for many applications (of course it’s still limited – for example, if you are running out of physical memory to even allocate the data structures for virtual pages, then you still can’t reserve those pages). What kind of differences will you see in your managed memory usage?
First of all, your process consumes more memory – I am sure all of you are already aware of this. The pointer size is doubled on 64-bit, so if you don’t change anything at all, your managed heap (which undoubtedly contains references) is now bigger. Of course, being able to manipulate memory in QWORDs instead of DWORDs can also be beneficial – our measurements show that the raw allocation speed is slightly higher on 64-bit than on 32-bit, which can be attributed to this.
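If you want to see the effect of the bigger pointers for yourself, here’s a small sketch (my own illustration, not product code – the class names are made up) that prints the pointer size and roughly measures what a batch of reference-heavy objects costs. Run it once as a 32-bit process and once as a 64-bit process and compare the numbers:

```csharp
using System;

class PointerSizeDemo
{
    // A node holding two references; the reference fields alone take
    // 8 bytes on 32-bit and 16 bytes on 64-bit (plus a larger object header).
    class Node
    {
        public Node Left;
        public Node Right;
    }

    static void Main()
    {
        // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process.
        Console.WriteLine("Pointer size: {0} bytes", IntPtr.Size);

        long before = GC.GetTotalMemory(true);

        Node[] nodes = new Node[100000];
        for (int i = 0; i < nodes.Length; i++)
        {
            nodes[i] = new Node();
        }

        long after = GC.GetTotalMemory(true);

        // Expect a noticeably larger delta in the 64-bit process.
        Console.WriteLine("Approx. bytes used by the nodes: {0}", after - before);
        GC.KeepAlive(nodes);
    }
}
```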
There are other factors that could make your process consume more memory – for example the module size is bigger (mscorwks.dll is about 5MB on x86, 10MB on x64 and 20MB on ia64), instructions are bigger on 64-bit and what have you.
Another thing you may notice – if you have looked at the performance counters under .NET CLR Memory – is that you are now doing a lot fewer GCs on 64-bit than you used to see on 32-bit.
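If you’d rather watch this from code than from perfmon, here’s a rough sketch that reads the collection counters for the current process. It assumes the standard “.NET CLR Memory” category and that the counter instance name is simply your process name (with multiple processes of the same name the instance naming is more involved):

```csharp
using System;
using System.Diagnostics;

class GCCounterDemo
{
    static void Main()
    {
        // The instance name for the ".NET CLR Memory" counters is
        // normally the process name without the .exe extension.
        string instance = Process.GetCurrentProcess().ProcessName;

        using (PerformanceCounter gen0 =
                   new PerformanceCounter(".NET CLR Memory", "# Gen 0 Collections", instance))
        using (PerformanceCounter gen1 =
                   new PerformanceCounter(".NET CLR Memory", "# Gen 1 Collections", instance))
        using (PerformanceCounter gen2 =
                   new PerformanceCounter(".NET CLR Memory", "# Gen 2 Collections", instance))
        {
            Console.WriteLine("Gen0: {0}  Gen1: {1}  Gen2: {2}",
                gen0.RawValue, gen1.RawValue, gen2.RawValue);
        }
    }
}
```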
Curious minds might have already noticed one thing – the managed heap segments are much bigger on 64-bit. If you run !SOS.eeheap -gc you will now see way bigger segments.
Why did we make the segment size so much bigger on 64-bit? Well, remember in Using GC Efficiently Part 2 we talked about how we have a budget for gen0, and when you’ve allocated more than this budget a GC is triggered. A bigger budget means you need to do fewer GCs, which means your code gets more of a chance to run. From this perspective you should get a performance gain when you move to 64-bit – I want to emphasize the “this perspective” part, because in general things tend to run slower on 64-bit, and the perf benefit you get from the GC may very well be obscured by other perf degradations. In reality, many people are not expecting a perf gain when they move to 64-bit – rather, they are happy with being able to use more memory to handle more workload.
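A simple way to observe the effect of the bigger gen0 budget is to count how many gen0 collections a fixed amount of allocation triggers. Something like the sketch below (again, just my own illustration – the exact counts depend on your machine, GC flavor and heap configuration), run once as 32-bit and once as 64-bit:

```csharp
using System;

class Gen0BudgetDemo
{
    static void Main()
    {
        int gen0Before = GC.CollectionCount(0);

        // Allocate a fixed amount of short-lived data.
        long total = 0;
        for (int i = 0; i < 1000000; i++)
        {
            byte[] temp = new byte[100];
            total += temp.Length; // keep the allocation observable
        }

        int gen0After = GC.CollectionCount(0);

        // With the bigger gen0 budget on 64-bit, the same allocation
        // pattern typically triggers fewer gen0 collections.
        Console.WriteLine("Allocated {0} bytes, gen0 collections: {1}",
            total, gen0After - gen0Before);
    }
}
```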
Of course we also don’t want to wait too long before we collect – we strive for the right balance between memory (how much memory your app consumes) and CPU (how often user threads get to run).