Just wanted to introduce myself. I’m John Dixon, one of the SDET Leads on VWD. I wanted to share some of the recent work the Venus QA team has done to make our lives easier. I’ll start with a little bit of history. I hope you find it interesting!
**Old School:**
In our last release (VS2005), QA would get a new build and, based on build verification tests (BVTs), decide to take or reject it and perform additional qualification, such as running nightly tests – a manual process of creating a ‘run’ in our lab and waiting for the results. Based on those results we could write new tests against the new build and the features made available by the dev work performed since the last build we took. This generally worked, but it caused a lot of downtime: we would typically need to reimage our machines, install the latest build, get our automation tools and libraries set up, and only then start automating. It took anywhere from 2-4 hours to get going, which hindered productivity. Quite regularly we would decide to hang onto a known-good build and just write tests against it. That worked, but it added risk: the new tests might or might not work in the latest build, since UI changes or just product flow changes are common during a development lifecycle. In this release we wanted to see how we could eliminate this downtime, reduce risk, and streamline these manual tasks. Some of our SDETs were able to leverage some key features within Visual Studio – and other Microsoft products – and fix this problem. I’ll try to share that with you folks.
**New School:**
We looked at the problem(s), specifically the manual tasks, and as we excel at automation we tried to figure out how to automate them.
The first task was our nightly run. Our run has over 1000 specific tests (note: this is just for Visual Web Developer features – Visual Studio as a whole has many, many more). To accomplish a run of this size we need about 10 lab machines dedicated for 8 or so hours, and we like to have those machines around during the work day so we can address any failures in the tests, the product, or the run itself. Having robust OMs available in our Run Managers allowed us to script this. What we have today is a system that, every night, reimages 10 machines, installs the latest product, and runs the tests – all whilst we are asleep. We wake to find the results, which we can choose to analyze, or not. Historically a run owner did all of this manually – now an automated process does the work. This was a great start – we call it the rolling nightly. Being an unattended, zero-person-cost process, it freed up a large portion of one of our engineers’ time, a huge win.
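The real rolling nightly is driven through our Run Manager OMs, but the shape of the orchestration is simple. Here is a rough, hypothetical sketch in Python – the machine names, drop-share path, and command strings are all made up for illustration and are not our actual tooling:

```python
# Hypothetical sketch of the "rolling nightly" orchestration.
# Machine names, the build share, and the command strings below are
# invented for illustration; the real system drives lab Run Manager OMs.

LAB_MACHINES = [f"VWDLAB{i:02d}" for i in range(1, 11)]  # ~10 dedicated lab boxes

def nightly_plan(build_share, machines):
    """Return the ordered (machine, command) steps for one unattended run."""
    steps = []
    for machine in machines:
        # 1. Reimage the box back to a clean OS.
        steps.append((machine, "reimage --os Win2003"))
        # 2. Install the latest VS build from the drop share.
        steps.append((machine, f"install {build_share}"))
        # 3. Kick off the test run; results are collected centrally overnight.
        steps.append((machine, "runtests --suite vwd-nightly"))
    return steps

plan = nightly_plan(r"\\drops\vs2005\latest", LAB_MACHINES)
print(f"{len(plan)} steps across {len(LAB_MACHINES)} machines")
```

The point is only that each night is the same fixed sequence per machine – reimage, install, run – so once it is scripted, nobody has to babysit it.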
The second issue we wanted to tackle was eliminating the downtime the team spent reimaging\reinstalling their own machines before they could automate, or analyze the previous night’s run results against the same build the run used. We tackled this one with Microsoft Virtual Server, which has turned out to be a really cool product. What we have is this: after the run is kicked off, all subscribed virtual machines are regenerated with the exact same build the rolling nightly run itself is based upon. The net result is that when I come in on any given morning I can look at the run results, see new failures, and if something is afoot, bring the test onto my local machine, within a constant Visual Studio 2005, and debug it against the new Virtual Server that has the exact same development build of Visual Studio in which the test failed during the run. No more do I have to accost one of the lab machines, try to debug the failure on an older build – or waste 2-4 hours reimaging\reinstalling to repro the failure. We did not finish there. To accomplish the machine-to-machine debugging we utilize the ‘Remote Debugging’ capabilities within Visual Studio. Typically this means we open a VS solution, set the project properties to debug against a specific target (the Virtual Server in this case), go to the Virtual Server, and invoke the remote debugger assistant. To simplify this, one of our SDETs used the Add-in model OM within Visual Studio to write a tool that configures the remote debug project properties automatically. Here is the UI for the Add-in:
So the process is simplified to opening the project and just hitting F5. Since we scripted the rolling Virtual Server to automagically install Visual Studio AND spawn the remote debugging assistant, our test just runs against our latest build. If there is a bug I can wait one day and just check the next rolling nightly’s passing result, which targets the new build, or simply reopen the test, hit F5, and watch the test pass (or fail) in the newly created Virtual Server on my machine. No more config, reimaging, installation, or wasted time!
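The actual add-in sets these properties through the Visual Studio Add-in object model from managed code. As a rough illustration of the kind of thing it automates, here is a hedged Python sketch that rewrites a project’s per-user settings fragment to point debugging at a VM; the element names (`RemoteDebugEnabled`, `RemoteDebugMachine`) and file layout are assumptions for illustration, not the add-in’s real code:

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch only: the real add-in uses the VS Add-in/DTE OM.
# The property element names below are assumptions for illustration.

def point_project_at_vm(user_settings_xml, vm_name):
    """Rewrite a per-user project settings fragment so F5 debugs against vm_name."""
    root = ET.fromstring(user_settings_xml)
    group = root.find("PropertyGroup")
    if group is None:
        group = ET.SubElement(root, "PropertyGroup")
    # Upsert the two remote-debug properties.
    for tag, value in (("RemoteDebugEnabled", "true"),
                       ("RemoteDebugMachine", vm_name)):
        elem = group.find(tag)
        if elem is None:
            elem = ET.SubElement(group, tag)
        elem.text = value
    return ET.tostring(root, encoding="unicode")

settings = "<Project><PropertyGroup></PropertyGroup></Project>"
updated = point_project_at_vm(settings, "VWD-VPC-01")
print(updated)
```

Because the nightly regenerates the VM under a predictable name, a tool like this only needs to run once per project and F5 stays pointed at whatever build the VM received overnight.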
Lastly, one small downside to the process is that a beefy machine is needed to pull off the overall effect. Running two copies of Windows Server 2003 on one machine requires horsepower, memory, and IO throughput. To solve this we decided on 2GB of memory – 1GB dedicated to the host machine and 1GB for the Virtual Server. This helped, but IO was still a bottleneck. Adding a second dedicated drive for the Virtual Server, on a separate IDE channel, helped too. Better, but still kinda sluggish. Next we moved to two SATA drives in a striped RAID – again, a nice increase in IO for the Virtual Server-specific drive. Then we looked at the processor. In steps the almighty dual core – handily, one core for the main OS and one for the Virtual Server. w00t – smoking fast. Lastly, we use two monitors, side by side multi-mon: one for VS2005 and the other to see what is happening on the Virtual Server.
So now we have it – instant runs every night at zero cost in people resources, and instant builds on everyone’s machines, every day. So far we have not evangelized this heavily to other MS teams as we are working out the last of the kinks, but the design scales well and hopefully will benefit other teams – and potentially other testing groups outside of Microsoft who have faced the same problem.
I hope you found the insight into our world interesting. Comments, further optimizations or just questions are always welcome!