So, what's new in the CLR 4.0 GC?
Published 2008-11-19. https://devblogs.microsoft.com/dotnet/so-whats-new-in-the-clr-4-0-gc/

PDC 2008 happened not long ago, so I get to write another "what's new in the GC" blog entry. For quite a while now I've been working on a new concurrent GC that replaces the existing one. This new concurrent GC is called "background GC".

First of all, let me apologize for not having written anything for so long. It's been quite busy working on the new GC and other things.

Let me refresh your memory on concurrent GC. Concurrent GC has existed since CLR V1.0. For a blocking GC, i.e., a non-concurrent GC, we always suspend managed threads, do the GC work, then resume the managed threads. Concurrent GC, on the other hand, runs concurrently with the managed threads to the following extent:

- It allows you to allocate while a concurrent GC is in progress. However, you can only allocate so much: for small objects, you can allocate at most up to the end of the ephemeral segment. Remember that if we don't do an ephemeral GC, the total space occupied by the ephemeral generations can grow as big as a full segment allows, so as soon as you reach the end of the segment, managed threads that need to make small object allocations are suspended until the concurrent GC finishes.

- It still needs to stop managed threads a couple of times during a concurrent GC. We suspend managed threads twice to do some phases of the GC, and those phases can take a while to finish.

We only do concurrent GCs for full GCs. A full GC can be either a concurrent GC or a blocking GC. Ephemeral GCs (i.e., gen0 or gen1 GCs) are always blocking.

Concurrent GC is only available with workstation GC. With server GC, all GCs are blocking.

Concurrent GC is done on a dedicated GC thread.
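For reference, the GC flavor is chosen per process in the application's configuration file. A minimal sketch, using the standard `gcConcurrent` and `gcServer` runtime settings (concurrent GC is the default for workstation GC; the file name is hypothetical):

```xml
<!-- myapp.exe.config: selects the GC flavor for this process -->
<configuration>
  <runtime>
    <!-- default is true; set to false to always use blocking full GCs -->
    <gcConcurrent enabled="true"/>
    <!-- set to true to opt into server GC (all GCs are blocking) -->
    <gcServer enabled="false"/>
  </runtime>
</configuration>
```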
That dedicated GC thread times out if no concurrent GC has happened for a while, and it is recreated the next time we need to do a concurrent GC.

When program activity (including making allocations and modifying references) is not very high and the heap is not very large, concurrent GC works well: the latency caused by the GC is reasonable. But as people write larger applications with larger heaps that handle more stressful situations, the latency can become unacceptable.

Background GC is an evolution of concurrent GC. Its significance is that we can do ephemeral GCs while a background GC is in progress, if needed. As with concurrent GC, background GC applies only to full GCs, ephemeral GCs are always done as blocking GCs, and a background GC is also done on its own dedicated GC thread. The ephemeral GCs done while a background GC is in progress are called foreground GCs.

So when a background GC is in progress and you've allocated enough in gen0, we trigger a gen0 GC (which may stay a gen0 GC or get elevated to a gen1 GC, depending on the GC's internal tuning). The background GC thread checks at frequent safe points (i.e., points where we can allow a foreground GC to happen) to see whether there is a request for a foreground GC. If so, it suspends itself so the foreground GC can run.
After the foreground GC finishes, the background GC thread and the user threads resume their work.

Not only does this allow us to get rid of dead objects in the young generations, it also lifts the restriction of having to stay within the ephemeral segment: if we need to expand the heap while a background GC is going on, we can do so in a gen1 GC.

We also made some performance improvements in background GC: it does more of its work concurrently, so the time we need to suspend managed threads is shorter.

We are not offering background GC for server GC in V4.0. It is under consideration: we recognize how important it is for server applications (which usually have much larger heaps than client apps) to benefit from smaller latency, but the work did not fit in our V4.0 timeframe. For server applications, for now, I would recommend looking at the full GC notification feature we added in .NET 3.5 SP1. It's explained here: http://msdn.microsoft.com/en-us/library/cc713687.aspx. Basically, you register to be notified when a full GC is approaching and when it has finished. This allows you to do software load balancing between different server instances: when a full GC is about to happen in one server instance, you can redirect new requests to other instances.
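To make that notification flow concrete, here is a minimal sketch using the full GC notification API from .NET 3.5 SP1 (`GC.RegisterForFullGCNotification`, `GC.WaitForFullGCApproach`, `GC.WaitForFullGCComplete`). The `RedirectRequests`/`ResumeRequests` hooks are hypothetical placeholders for your own load-balancing logic:

```csharp
using System;
using System.Threading;

class FullGCNotificationSketch
{
    // Hypothetical hooks: wire these up to your own load balancer.
    static void RedirectRequests() { Console.WriteLine("redirecting new requests"); }
    static void ResumeRequests()   { Console.WriteLine("accepting requests again"); }

    static void Main()
    {
        // Thresholds (1 to 99): larger values ask for earlier notification.
        GC.RegisterForFullGCNotification(10, 10);

        Thread monitor = new Thread(() =>
        {
            while (true)
            {
                // Blocks until the runtime decides a full GC is approaching.
                if (GC.WaitForFullGCApproach() == GCNotificationStatus.Succeeded)
                    RedirectRequests();

                // Blocks until that full GC has completed.
                if (GC.WaitForFullGCComplete() == GCNotificationStatus.Succeeded)
                    ResumeRequests();
            }
        });
        monitor.IsBackground = true;
        monitor.Start();

        // ... server work happens on other threads ...

        // When you no longer want notifications:
        GC.CancelFullGCNotification();
    }
}
```

Note that these notifications are only raised when concurrent GC is turned off (`<gcConcurrent enabled="false"/>` in the configuration file), which is why the feature pairs naturally with server scenarios.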