Managing Quality (part 6) – Community Feedback

Brian Harry

I touched on one aspect of customer involvement in our quality efforts in my last post about Dr. Watson.  There are a variety of other metrics we track and measure ourselves against that are more overtly community-oriented.  Before I drill into the metrics we track, let's talk a bit about what aspects of community we think about.

Pre-releases – We have a variety of forms in which we provide pre-release software.  We've done Betas for a long time, and in recent years we started doing Community Technology Previews (CTPs).  We target a CTP every month or two, and they are just “points in time”.  We don’t do any significant stabilization; we just take a timely “self-test” build (see earlier posts in this series for a definition) and release it.  Betas, on the other hand, involve a prolonged stabilization period (generally months) and include test passes, bug fixing, release criteria evaluation, an escrow period, etc.  We want to make sure that when we release a Beta it’s a pretty solid build.  We have historically done 2 and sometimes 3 Betas for a major release.

When designing the schedule for the Orcas release, we thought long and hard about the role of CTPs vs Betas.  We decided to think of CTPs more formally as a part of our feedback cycle for the release.  We debated doing away with Beta 1 and having only one Beta for the product cycle.  Historically, Beta 1 was about getting feedback on whether or not we had built the right thing, and Beta 2 was about getting feedback on stability, configurations, etc.  The thinking was that the CTPs could replace much of the value we get from a Beta 1.  As a result (as you will eventually see), we downplayed the role of Beta 1 by making the Beta 1 -> Beta 2 delta much shorter than we have done in the past 10 years.  Unfortunately, we have not seen the uptake of CTPs and volume of feedback we had hoped for.  Don’t get me wrong – we’ve gotten some great feedback, but it has primarily come from bleeding-edge people, and broad uptake has been lacking.  As a result, we’ve been scrambling the last few months to adjust our Beta 1 and Beta 2 plans to react to this reality.  Among other things, we’ve extended the period between Beta 1 and Beta 2 a few weeks beyond what we had originally planned.  Once we start to see the Beta 1 feedback, we’ll understand better where we are.

Specs – In the last year or so, we started publishing specs on the web early in the product cycle.  This gives us a structured way to share our thinking and get very early feedback.  Overall, I think we could do a bit better at being both more timely and more thorough in the specs we publish, but I think it’s heading in the right direction.

Forums – Forums have become a big part of the way we interact with customers/community.  The primary goal with forums is to provide a way for customers to ask questions and share unstructured ideas with us and each other.

Connect – Connect provides a way for customers to provide structured feedback.  We use it to get feedback on specs, bug reports, and feature ideas.  There’s a pretty direct path from Connect into our internal work item tracking databases.  It’s worth talking a bit about how we think about the feedback that comes from Connect.  We divide it into 2 rough categories – what we call “fixable bugs” and “other feedback”.  Fixable bugs are defects in the product that it is possible to address.  Things that don’t fall into this category include bugs that can’t be fixed due to compatibility constraints, suggestions for new features, feedback that someone doesn’t like the design of a feature, etc.  These fall into “other feedback”.  We metric fixable bugs carefully, and other feedback less so.  Overall we do a very good job responding to fixable bugs, and we have room for improvement in prioritizing direct customer suggestions into our feature list.  We’ve been talking for the past couple of months about how to improve that.

Blogs – And, of course, there are blogs like this one.  Our goal with blogs is to share ideas: to help you understand how we think about things, and vice versa.

Community Metrics

Here’s a chart we use to track community activity on our forums:

We measure overall forum health by looking at time to answer.  We track both the 2-day and 7-day answer rates (the percentage of questions that are answered within those respective windows).  Our 2-day answer rate goal is 60% and our 7-day goal is 80%.  Here’s a chart we use showing the 7-day rates.  As you can see, we haven’t been doing so well lately.  It was much greener in the fall, but the push to get Orcas Beta 1 done has really detracted from forum efforts.  We’ve recently taken steps (bringing on some more people) to help with the load.
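If you’re curious how a rate like that works out, here’s a minimal sketch in Python of computing the answer rates.  The field names (asked_at, first_answer_at) and the sample questions are made up for illustration; this isn’t the actual tooling behind our reports.

```python
from datetime import datetime, timedelta

# Illustrative only: hypothetical question records, not our real forum data.
# Each question records when it was asked and when (if ever) it got its first answer.
questions = [
    {"asked_at": datetime(2007, 3, 1, 9, 0),  "first_answer_at": datetime(2007, 3, 1, 15, 30)},
    {"asked_at": datetime(2007, 3, 2, 11, 0), "first_answer_at": datetime(2007, 3, 8, 10, 0)},
    {"asked_at": datetime(2007, 3, 3, 8, 0),  "first_answer_at": None},  # never answered
]

def answer_rate(questions, window_days):
    """Percentage of questions whose first answer arrived within window_days."""
    window = timedelta(days=window_days)
    answered = sum(
        1 for q in questions
        if q["first_answer_at"] is not None
        and q["first_answer_at"] - q["asked_at"] <= window
    )
    return 100.0 * answered / len(questions)

print(f"2-day answer rate: {answer_rate(questions, 2):.0f}% (goal: 60%)")
print(f"7-day answer rate: {answer_rate(questions, 7):.0f}% (goal: 80%)")
```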

And here is the list of the top 10 people providing answers on the forums (across all of VS, during March) – with James Manning from our very own TFS team:

For Connect we track 3 metrics:

% Fixed / % Fixable – The goal is 90% or higher.  As we are currently in a bug-fixing period, I expect this number will get better soon.  (There’s a rough sketch of how this can be computed after the list.)

First response – We always try to read every issue and respond with a thank you and some indication of what we plan to do with the feedback.

Stale bugs – These are bugs that haven’t been resolved for a long time (I forget the threshold).  I know in the case of TFS the high count is due to a synchronization problem between our TFS database and the Connect database.  We are working on getting that fixed.
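To make the first and last of these concrete, here’s a rough sketch of how the % Fixed / % Fixable and stale-bug numbers could be computed.  The record fields and the 90-day staleness threshold are made up for the example; as I said, I don’t remember the real threshold, and this isn’t our actual Connect tooling.

```python
from datetime import datetime, timedelta

# Hypothetical threshold; the real one isn't stated in this post.
STALE_THRESHOLD_DAYS = 90

# Illustrative records only, not the actual Connect schema.
bugs = [
    {"fixable": True,  "fixed": True,  "opened": datetime(2007, 1, 5),  "resolved": datetime(2007, 2, 1)},
    {"fixable": True,  "fixed": False, "opened": datetime(2006, 10, 1), "resolved": None},
    {"fixable": False, "fixed": False, "opened": datetime(2007, 3, 1),  "resolved": None},  # "other feedback"
]

def percent_fixed_of_fixable(bugs):
    """Of the fixable bugs, what share have been fixed."""
    fixable = [b for b in bugs if b["fixable"]]
    if not fixable:
        return 100.0
    return 100.0 * sum(b["fixed"] for b in fixable) / len(fixable)

def stale_bugs(bugs, today):
    """Bugs still unresolved after more than STALE_THRESHOLD_DAYS days."""
    cutoff = today - timedelta(days=STALE_THRESHOLD_DAYS)
    return [b for b in bugs if b["resolved"] is None and b["opened"] < cutoff]

today = datetime(2007, 4, 1)
print(f"% Fixed / % Fixable: {percent_fixed_of_fixable(bugs):.0f}% (goal: 90%+)")
print(f"Stale bugs: {len(stale_bugs(bugs, today))}")
```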

And lastly, for blogs we track activity on all of our blogs and stack-rank them by popularity.  I wouldn’t say that we use that in any particularly actionable way.  We send it around every month to the whole division, and I’m sure individuals take pride or chagrin at how their ranking changes over time.  FWIW – mine is about #60, so keep on reading and keep me near the top of the list 🙂

Here’s the top 20, just in case you are curious 🙂

Conclusion

I suppose the main takeaway from this post is that community interaction and feedback, both giving and getting, are a very important part of what we do.  We have a variety of programs and we metric them to make sure we are doing well.  What I’ve talked about here are the very broad programs (and even then I may have missed one or two).  I haven’t talked about all of the narrower ways we interact with customers: early adopter programs, MVPs, Software Design Reviews, Advisory Councils, customer visits, and all other manner of trying to understand how we can build products that better serve our customers.

Thanks for listening,

Brian
