We open sourced our new GC Perf Infrastructure! It’s now part of the dotnet performance repo. I’ve been meaning to write about it because some curious minds had been asking when they could use it ever since I blogged about it last time, but I didn’t get around to it till now.
Recently, Nick from Stack Overflow tweeted about his experience of using the .NET Core GC configs – he seemed quite happy with them (minus the fact that they are not well documented, which is something I’m talking to our doc folks about).
Years ago I wrote a document on making finalization scanning concurrent. At the time there was an internal team that was using finalization as a way to resurrect objects and put them back in their cache. While we’ve always advised folks that finalization is for releasing native resources, I couldn’t fault this team for using it the way they did.
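To illustrate what that pattern looks like (this is my own minimal sketch, not that team’s actual code), a finalizer can “resurrect” a dying object by rooting it in a static cache again and re-registering it for finalization:

```csharp
using System;
using System.Collections.Concurrent;

// A sketch of resurrection-based caching: instead of letting the GC reclaim the
// object, the finalizer puts it back into a static cache so it can be reused.
class PooledBuffer
{
    static readonly ConcurrentBag<PooledBuffer> s_cache = new ConcurrentBag<PooledBuffer>();

    public byte[] Data { get; } = new byte[4096];

    public static PooledBuffer Rent() =>
        s_cache.TryTake(out var buffer) ? buffer : new PooledBuffer();

    ~PooledBuffer()
    {
        // Resurrect: make the object reachable again by rooting it in the cache...
        s_cache.Add(this);
        // ...and re-register so the finalizer runs again the next time the object
        // becomes unreachable without being returned explicitly.
        GC.ReRegisterForFinalize(this);
    }
}
```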
In this blog entry and some future ones I will be showing off functionality that our new GC perf infrastructure provides. Andy and I have been working on it (he did all the work; I merely played the consultant role). We will be open sourcing it soon, and I wanted to give you some examples of using it so you can add them to your repertoire of perf analysis techniques when it’s available.
I’ve been talking about doing managed heap performance analysis with ETW events for ages because ETW is just such a powerful tool. It has a well-defined format, and many components, from kernel-mode to user-mode ones, emit ETW events, which means you can build tools that know how to parse the event format and correlate events from different sources.
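To make that concrete, here is a minimal sketch (my own example, assuming the TraceEvent library’s ETWTraceEventSource and its Clr parser, not code from the post) that replays an .etl file and prints an approximation of each GC pause from the CLR suspend/restart events:

```csharp
using System;
using Microsoft.Diagnostics.Tracing; // from the Microsoft.Diagnostics.Tracing.TraceEvent NuGet package

class GcPauses
{
    static void Main(string[] args)
    {
        // Open an ETW trace file collected with e.g. PerfView or dotnet-trace.
        using (var source = new ETWTraceEventSource(args[0]))
        {
            double suspendStartMSec = 0;

            // The CLR provider emits suspend/restart events around each GC pause.
            source.Clr.GCSuspendEEStart += data => suspendStartMSec = data.TimeStampRelativeMSec;
            source.Clr.GCRestartEEStop += data =>
                Console.WriteLine($"Pause of ~{data.TimeStampRelativeMSec - suspendStartMSec:F3} ms " +
                                  $"ended at {data.TimeStampRelativeMSec:F3} ms");

            source.Process(); // replay every event in the file, invoking the callbacks above
        }
    }
}
```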
If you are running Windows on a machine with more than 64 CPUs, you’ll need to use a feature called CPU groups for your process to be able to use more than 64 of them. At some point in the far distant past,
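For reference (this is my own hedged example, not part of the excerpt; the property names come from the .NET runtime GC configuration docs, and the .NET Framework equivalent is the GCCpuGroup element in app.config), enabling CPU-group awareness for the GC on .NET Core can be done through runtimeconfig.json, or via the COMPlus_GCCpuGroup / DOTNET_GCCpuGroup environment variable:

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true,
      "System.GC.CpuGroup": true
    }
  }
}
```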
I’ve checked in two configs for specifying a hard limit on GC heap memory usage, so I wanted to describe them and how they are intended to be used. Feedback would be greatly appreciated.
In order to talk about the new configs, it’s important to understand what makes up the memory usage of a process.
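The two configs are GCHeapHardLimit (an absolute limit in bytes) and GCHeapHardLimitPercent (a percentage of the container memory limit or the physical memory). As a hedged sketch using the runtimeconfig.json property names from the current GC configuration docs (not text from this excerpt), capping the GC heap at 200 MB would look roughly like this:

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.HeapHardLimit": 209715200
    }
  }
}
```

The same settings can also be supplied through the COMPlus_GCHeapHardLimit / COMPlus_GCHeapHardLimitPercent environment variables, and the percentage form is usually the more convenient one inside containers.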
This week I was able to get some time to work on the container scenarios with low memory limits. As many of you have expressed your dissatisfaction on GitHub with how Server GC behaves when a low memory limit is specified on containers,
A customer who had just experienced a server outage asked us for help, as they thought it was due to some very long GC pauses. I thought this diagnostics exercise might be useful for other folks as well, so I am sharing it with you.
A long time ago I wrote about using Workstation GC for server applications when you have many instances of your server app running on the same machine. By default, Server GC treats the process as owning the machine, so it uses all CPUs to do the GC work.
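As a quick reference (my own sketch, not from the original post), switching such a process to Workstation GC on .NET Core is a one-line change in runtimeconfig.json, typically generated from the ServerGarbageCollection MSBuild property; the .NET Framework equivalent is gcServer enabled="false" in app.config:

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": false
    }
  }
}
```

With many instances packed onto one machine, this keeps each process from assuming it owns all the CPUs for its GC work.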