{"id":9631,"date":"2007-03-16T07:37:57","date_gmt":"2007-03-16T07:37:57","guid":{"rendered":"https:\/\/blogs.msdn.microsoft.com\/bharry\/2007\/03\/16\/orcas-dogfood-upgrade-cpu-utilization\/"},"modified":"2018-08-14T00:34:14","modified_gmt":"2018-08-14T00:34:14","slug":"orcas-dogfood-upgrade-cpu-utilization","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/bharry\/orcas-dogfood-upgrade-cpu-utilization\/","title":{"rendered":"Orcas Dogfood Upgrade &#8211; CPU Utilization"},"content":{"rendered":"<p>I think we&#8217;ve got enough data now that we can put a stake in the ground about where we stand on CPU utilization improvements.&nbsp; We&#8217;ve still got a bit more tuning and improvements to make but it&#8217;s probably within 10% of where it will turn out.\nWe&#8217;ve made less progress investigating the regressions this week than I expected &#8211; too many other things going on.&nbsp; Given that, I expect it will be another couple of weeks before we put it to bed.&nbsp; That said, we did identify a significant issue in one of the usage patterns of QueryItems.&nbsp; Although it was not a regression to start with, I expect it to go green once we apply the patch.&nbsp; We have also fixed GetBuildUri.&nbsp; It didn&#8217;t show up in the last post because there were no occurrences in the sample that I used to generate it, but previous samplings showed a significant regression.&nbsp; Some progress &#8211; but not as much as I&#8217;d hoped.\nOn to the CPU utilization&#8230;\nBecause no two time periods are quite the same, any comparison is a little like apples to oranges.&nbsp; The technique I have used is to average the CPU utilization from the week before the dogfood upgrade and from this week.&nbsp; I then took this week&#8217;s CPU utilization and &#8220;normalized&#8221; it.&nbsp; That means dividing it by the average number of requests per hour this week and multiplying by the average number of requests per hour in the earlier week.&nbsp; 
This is the best attempt I can think of to make oranges look like apples.&nbsp; So looking at this for the data tier (which as you will recall has always been our bottleneck), we get:\nEffective CPU utilization this week:\n<strong>20.85% * 134,454 \/ 180,020 = 15.57%<\/strong>\nThe previous week&#8217;s average CPU utilization was 28.82%.\nSo comparing them:\n<strong>15.57% \/ 28.82% = 0.5404<\/strong>\nIn other words, overall Orcas uses about 46% fewer CPU cycles on the data tier to do the same amount of work as TFS 2005.&nbsp; We&#8217;re pretty psyched about that.\nDoing the same analysis for the application tier yields an effective CPU utilization of <strong>14.85%<\/strong> compared to 24.90%, meaning <strong>the application tier uses about 40% fewer CPU cycles for the same work<\/strong>.\nYou&#8217;ll remember that in our configuration (and in our general recommendation) the application tier has half the number of cores that our data tier has (4 for the AT and 8 for the DT).&nbsp; And still the AT CPU utilization is less than the DT CPU utilization.&nbsp; I had been a bit worried that all of the improvements in DT efficiency would mean we needed to change our guidance and start recommending balanced AT\/DT pairs, but given what I see now, we are good to stick with our current guidance.\nWe are expecting to get the I\/O analysis tonight so I&#8217;ll write about that as soon as I can.&nbsp; It may very well be mid next week before I get to it because I&#8217;m traveling to San Francisco to give a talk at SD West on Monday.\nThanks,<\/p>\n<p>Brian<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I think we&#8217;ve got enough data now that we can put a stake in the ground about where we stand on CPU utilization improvements.&nbsp; We&#8217;ve still got a bit more tuning and improvements to make but it&#8217;s probably within 10% of where it will turn out. 
We&#8217;ve made less progress investigating the regressions this week [&hellip;]<\/p>\n","protected":false},"author":244,"featured_media":14617,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[5,3],"class_list":["post-9631","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","tag-tfs","tag-tfs-dogfood-statistics"],"acf":[],"blog_post_summary":"<p>I think we&#8217;ve got enough data now that we can put a stake in the ground about where we stand on CPU utilization improvements.&nbsp; We&#8217;ve still got a bit more tuning and improvements to make but it&#8217;s probably within 10% of where it will turn out. We&#8217;ve made less progress investigating the regressions this week [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/bharry\/wp-json\/wp\/v2\/posts\/9631","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/bharry\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/bharry\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/bharry\/wp-json\/wp\/v2\/users\/244"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/bharry\/wp-json\/wp\/v2\/comments?post=9631"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/bharry\/wp-json\/wp\/v2\/posts\/9631\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/bharry\/wp-json\/wp\/v2\/media\/14617"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/bharry\/wp-json\/wp\/v2\/media?parent=9631"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/bharry\/wp-json\/wp\/v2\/categories?post=9631"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/
bharry\/wp-json\/wp\/v2\/tags?post=9631"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}