September 20th, 2007

Visual C++ Libraries Development Regression Tests

Hi, my name is Pat Brenner and I’m a software design engineer on the Visual C++ libraries team.  I’d like to spend some time talking about our process for preventing regressions in our libraries code.

When I joined the libraries team about a year ago, I was told about the set of sniff tests we needed to run before checking in new features or bug fixes.  As it turned out, I worked on some of the new MFC code for Visual C++ 2008, so I didn’t need to run these tests for my first five or six months on the team.  When it came time to start running them, because I was fixing bugs for Visual C++ 2008, I found that some of the tests did not pass for me, and that I was not getting consistent results.  The rest of the development team and I got by with this until Beta 1 or thereabouts, because our QA team performs scheduled runs of all their tests, including the sniff tests, and we were able to rely on their results.  But then I found that, in addition to the sniff tests, we also had a large set of regression tests, written by the development team, which were largely being ignored.

So I decided to take some time early this summer and accomplish one goal: get the sniff tests and the regression tests running well enough that I could schedule my development machine to build the libraries and run both test suites overnight, and have results waiting in the morning.

I first needed to get the sniff tests running and passing consistently on my machine.  So I worked with QA to stabilize the broken tests, remove the obsolete ones, and clear out any other blocking problems so that the sniff tests would run from start to finish and pass 100%.  This took several weeks (as a background task), and we found and fixed several bugs in the tests themselves (mostly platform issues where, for instance, a test would pass on Windows XP but fail on Windows Vista).
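
To give a flavor of those platform fixes, here’s a minimal hypothetical sketch (not one of our actual tests) of making a version-dependent check explicit instead of hard-coding the Windows XP behavior:

```cpp
#include <windows.h>
#include <stdio.h>

// Hypothetical helper: Windows Vista reports a major version of 6,
// while Windows XP reports 5.
bool IsWindowsVistaOrLater()
{
    OSVERSIONINFO osvi = { sizeof(OSVERSIONINFO) };
    if (!GetVersionEx(&osvi))
        return false;
    return osvi.dwMajorVersion >= 6;
}

int main()
{
    // A test whose expected result differs by OS can branch on the version
    // instead of baking in the XP behavior (which is what broke on Vista).
    if (IsWindowsVistaOrLater())
        printf("checking Vista-era expectations\n");
    else
        printf("checking XP-era expectations\n");
    return 0;
}
```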

Then, with help from a member of our QA team, I ported our regression tests over to a QA test harness similar to the one that runs our sniff tests.  This meant that our regression tests now ran in many more configurations (static library vs. DLL, ANSI vs. Unicode, Debug vs. Release) than before.  This flushed out a number of test bugs, as well as several actual bugs in the libraries code, all of which we have fixed for Visual C++ 2008.
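
The harness itself is internal, but the idea is simple: each test source gets compiled once per combination, and the code can tell which configuration it is in through the compiler’s predefined macros.  Here is a rough sketch (the printed strings are just illustrative):

```cpp
#include <tchar.h>
#include <stdio.h>

// Sketch of how one test source "sees" the build matrix: the compiler
// defines these macros depending on how the harness builds the test.
int _tmain()
{
#ifdef _UNICODE
    _tprintf(_T("character set: Unicode\n"));
#else
    _tprintf(_T("character set: ANSI/MBCS\n"));
#endif

#ifdef _DEBUG
    _tprintf(_T("build: Debug\n"));
#else
    _tprintf(_T("build: Release\n"));
#endif

#ifdef _DLL
    _tprintf(_T("CRT linkage: DLL (/MD or /MDd)\n"));
#else
    _tprintf(_T("CRT linkage: static (/MT or /MTd)\n"));
#endif
    return 0;
}
```

Multiplying two character sets by two build flavors by two linkage models means eight builds of every test, which is why so many latent problems surfaced once the tests moved to the harness.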

Our policy now is that for almost every bug we fix, we write a regression test that fails without the fix and passes with it.  This helps us keep our regression rate down when fixing bugs, especially in cases where a particular method has already been changed several different times.  With the test harness in place, adding a regression test is very easy.
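
To show the shape of such a test, here is a minimal sketch; the bug number, function name, and pass/fail convention are made up for illustration, not taken from our harness:

```cpp
#include <stdio.h>
#include <string.h>

// Hypothetical regression test: encodes the exact repro from a bug report
// and returns 0 on pass, nonzero on failure, so the harness can flag it.
static int TestBugNNNNN_BufferNullTermination()
{
    char buf[8];
    memset(buf, 'x', sizeof(buf));          // poison the buffer first
    strncpy(buf, "abc", sizeof(buf) - 1);   // the code path under test
    buf[sizeof(buf) - 1] = '\0';
    return strcmp(buf, "abc") == 0 ? 0 : 1; // fails without the fix
}

int main()
{
    int failures = TestBugNNNNN_BufferNullTermination();
    printf(failures == 0 ? "PASS\n" : "FAIL\n");
    return failures;
}
```

The test is tiny and targeted: it pins down the exact scenario from the bug report, so if a later change to the same code reintroduces the problem, the next nightly run flags it immediately.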

The set of sniff tests (owned by our QA team) consists of 6542 tests, of which over 6000 exercise the C++ runtimes; there are about 75 sniff tests for ATL and about 400 for MFC.  The set of development regression tests (owned by the development team) consists of another 2536 tests, of which about 1500 cover the C++ runtimes and about 1000 cover ATL and MFC.  I have all 9000+ of these tests running nightly on my development machine, which is really convenient when it comes time to fix a bug.  I can make the fix, check that it does indeed fix the bug, and then let my overnight process build it and fully test it.  When I arrive in the morning, I check my test results, and if everything passed, I know my fix has not caused any problems.  I’ve also written scripts that the rest of my team can use, so they can get the same productivity boost I’ve been enjoying from the automation of these tests.

Thanks, and I welcome any questions you might have.

Pat Brenner

Visual C++ Libraries Team
