In his latest post, Premier App Dev Manager Kevin Rabun shares his experience with Test Driven Development (TDD). He shares key insights from treating TDD as a tool rather than as a philosophy.
Test Driven Development (TDD) is a topic of controversy, enthusiasm, and strong opinions across software development companies, projects, and teams. This post is not about why you MUST use TDD or NEVER use TDD. Instead, I’m going to share a positive experience in using TDD as a tool instead of a philosophy.
I have been working on a small project in my free time in which I’m the only developer. When I started the project, I wanted to write the entire application in a test-driven, test-first manner. I wrote my failing test, then made the test pass, and as I saw opportunities to refactor, I took the time to reduce complexity, separate concerns, and reorganize as needed. I was in a red-green-refactor rhythm, and it was enjoyable to watch the test count go up and my test coverage sit at 100%… but then reality set in. Once my application reached Minimum Viable Product (MVP) and I was ready to start consuming real-world inputs and generating usable output, I ran into bugs. At that point I realized that I like using TDD as a tool (after all, it had gotten me this far) but not as a philosophy. I wanted to get actionable data into the hands of users quickly, so instead of writing a test to expose each bug, then fixing the bug, then refactoring as needed, I just fixed the bug. I ended up uncovering a number of issues with real-world data that I hadn’t accounted for during my initial test-driven implementation, and I needed to explore how to resolve them through experimentation.
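For readers unfamiliar with the rhythm, the red-green-refactor cycle described above can be sketched in a few lines of Python. The `slugify` function here is a hypothetical example chosen for illustration, not code from the author's project:

```python
import re

# Red: in TDD, test_slugify would be written first, against a slugify
# that does not exist yet, so the first run fails.

# Green: write the simplest implementation that makes the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

def test_slugify():
    assert slugify("Hello World") == "hello-world"

test_slugify()  # green

# Refactor: with a passing test as a safety net, restructure the code,
# here to also strip punctuation, then re-run the same test.
def slugify(title):
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # still green after the refactor
```

Each loop through the cycle is small: one failing test, the minimal code to pass it, then cleanup under the protection of the tests already written.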
I don’t like writing tests during the experimental and exploratory phases of software development, because I’m not yet sure what tests to write. When TDD is a tool instead of a philosophy, I can leave the structured world of TDD for the exploratory world of experimental software development. After my experimental code yielded the data my users needed, I left the code in place. That’s right: I didn’t rip it out, write tests, put the code back, and continue to refactor. Instead, I used Analyze Code Coverage in Visual Studio 2017 to show me where I was missing test coverage, and I added tests as I had time. I know I could have followed all the practices of TDD, changed the way I implemented this project, and been more disciplined in my approach. Instead, I chose to use TDD when it felt natural, abandon it when it felt labored, and add testing as needed when development was complete. Oh, and I enjoyed the work!
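The author used Visual Studio 2017’s built-in Analyze Code Coverage for this after-the-fact check. As a rough illustration of the same idea in another ecosystem, Python’s standard-library `trace` module can count which lines a test run actually executes; the `is_even` function and the single test call below are hypothetical:

```python
import trace

def is_even(n):
    if n % 2 == 0:
        return True
    return False  # never executed if tests only pass in even numbers

# Count executed lines while running the "test suite" (one call here).
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(is_even, 4)

# counts maps (filename, line_number) -> execution count; lines of
# is_even missing from it were never run, flagging where tests are
# worth adding, much like a coverage report highlights untested code.
counts = tracer.results().counts
```

Dedicated tools (coverage.py in Python, or the Visual Studio feature the post names) produce friendlier reports, but the principle is the same: measure what the tests touch, then fill the gaps.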
As software developers, we should all remember that our job is to deliver value with confidence, not to practice TDD. Test coverage is a useful metric for finding gaps in automated testing, but I believe how coverage is achieved is less important than the coverage itself. Teams must find a practical process that delivers developer confidence and customer value. If your team is successfully delivering customer value with TDD, then use it. If your team does better by writing code first and tests afterward, then do that instead. And nothing is stopping you from doing both, depending on the situation. Automated testing is essential in the fast-paced, agile, DevOps world we live in, but we must all be careful to distinguish practical approaches that deliver customer value and engage developers from blind adherence to one-size-fits-all philosophies.