
An interview with James Grenning, Part 2

Jack Ganssle, May 04, 2010

James Grenning (www.renaissancesoftware.net), whose book Test Driven Development in C will be out in the fall, graciously agreed to be interviewed about test-driven development (TDD). The first part of our talk ran last month at www.embedded.com/224200702, where you can also see reader comments.

Jack: How do you know if your testing is adequate? TDD people (heck, practically everyone in this industry) don't seem to use MC/DC, npath, or cyclomatic complexity to prove they have run at least the minimum number of tests required to ensure the system has been adequately verified.

James Grenning, Founder, Renaissance Software

James: You are right; TDD practitioners do not generally measure these things. There is nothing said in TDD about these metrics. It certainly does not prohibit them. You know, we have not really defined TDD yet, so here goes. This is the TDD micro cycle:

• Write a small test for code behavior that does not exist

• Watch the test fail, maybe not even compile

• Write the code to make the test pass

• Refactor any messes made in the process of getting the code to pass

• Continue until you run out of test cases
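To make the cycle concrete, here is a minimal sketch in C. It is only an illustration: the LedDriver names, the fake LED register, and the hand-rolled CHECK_EQUAL macro are hypothetical stand-ins for what a real harness such as Unity or CppUTest would provide.

/* tdd_cycle_sketch.c - illustration only; the names are hypothetical.
   Build and run: cc tdd_cycle_sketch.c -o tdd_cycle_sketch && ./tdd_cycle_sketch */
#include <stdio.h>
#include <stdint.h>

static int failures;

#define CHECK_EQUAL(expected, actual)                                   \
    do {                                                                \
        if ((expected) != (actual)) {                                   \
            printf("FAIL %s:%d expected 0x%x, got 0x%x\n", __FILE__,    \
                   __LINE__, (unsigned)(expected), (unsigned)(actual)); \
            failures++;                                                 \
        }                                                               \
    } while (0)

/* Production code, written only after the tests below were seen to fail. */
static uint16_t *leds_address;

void LedDriver_Create(uint16_t *address)
{
    leds_address = address;
    *leds_address = 0;             /* driven by test 1: all LEDs off after Create */
}

void LedDriver_TurnOn(int led)
{
    *leds_address |= (uint16_t)(1u << (led - 1));    /* driven by test 2 */
}

/* Tests, written first, one small behavior at a time. */
int main(void)
{
    uint16_t virtual_leds = 0xFFFF;    /* fake memory-mapped LED register */

    /* Test 1: all LEDs are off after the driver is created. */
    LedDriver_Create(&virtual_leds);
    CHECK_EQUAL(0, virtual_leds);

    /* Test 2: turning on LED 1 sets bit 0. */
    LedDriver_TurnOn(1);
    CHECK_EQUAL(1, virtual_leds);

    if (failures)
        printf("%d failure(s)\n", failures);
    else
        printf("OK\n");
    return failures != 0;
}

Each test here was written before the code it exercises, was watched to fail, and then drove the few lines needed to make it pass.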

Maybe you can see that TDD would do very well with these metrics. Coverage, whether measured by line or by path, will be very high.

One reason these metrics are not the focus is that there are some problems with them. It is possible to get a lot of code coverage and still not know if your code operates properly. Imagine a test case that fully executes some hunk of code but never checks the direct or indirect outputs of that highly covered code. Sure, it was all executed, but did it behave correctly? The metrics won't tell you.
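As an illustration of that point, consider the hypothetical clamp() function below. Both tests execute every line of it, so a line-coverage tool reports the same high number for each, yet only the second test notices the deliberately planted bug.

/* coverage_vs_checking.c - illustration only; clamp() and its bug are hypothetical. */
#include <assert.h>

static int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return lo;      /* BUG: should return hi */
    return value;
}

static void test_clamp_covered_but_unchecked(void)
{
    clamp(-5, 0, 10);   /* executes the lower-bound branch  */
    clamp(50, 0, 10);   /* executes the buggy upper branch  */
    clamp(5, 0, 10);    /* executes the pass-through branch */
    /* Every line of clamp() ran, but nothing was checked, so the bug hides. */
}

static void test_clamp_checked(void)
{
    assert(clamp(-5, 0, 10) == 0);
    assert(clamp(50, 0, 10) == 10);  /* this check fails and exposes the bug */
    assert(clamp(5, 0, 10) == 5);
}

int main(void)
{
    test_clamp_covered_but_unchecked();
    test_clamp_checked();
    return 0;
}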

Even though code coverage is not the goal of TDD, it can be complementary. New code developed with TDD should have very high code coverage, along with meaningful checks that confirm the code is behaving correctly. Some practitioners do a periodic review of code coverage, looking for code that slipped through the TDD process. I've found this to be useful, especially when a team is learning TDD.

There has been some research on TDD's impact on cyclomatic complexity. TDD's emphasis on testability, modularity, and readability leads to shorter functions. Generally, code produced with TDD shows reduced cyclomatic complexity. If you Google for "TDD cyclomatic complexity," you can find articles supporting this conclusion.
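As a hypothetical illustration of that tendency, compare a monolithic packet handler with the same logic grown test-first as small, separately testable functions. The names and the tiny frame format are invented for the example; the point is simply that the decision logic ends up spread across short functions instead of piled into one.

/* packet_shapes.c - illustration only; the frame format is made up. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Before: one function holds every decision. */
bool handle_packet_monolithic(const uint8_t *buf, size_t len)
{
    if (buf == NULL || len < 4)
        return false;
    if (buf[0] != 0xA5)             /* start-of-frame marker */
        return false;
    uint8_t sum = 0;
    for (size_t i = 0; i < len - 1; i++)
        sum += buf[i];
    return sum == buf[len - 1];
}

/* After: the same checks, extracted one test at a time into small
   functions that can each be read and exercised in isolation. */
static bool frame_is_long_enough(size_t len)     { return len >= 4; }
static bool frame_has_marker(const uint8_t *buf) { return buf[0] == 0xA5; }

static bool checksum_is_valid(const uint8_t *buf, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len - 1; i++)
        sum += buf[i];
    return sum == buf[len - 1];
}

bool handle_packet(const uint8_t *buf, size_t len)
{
    return buf != NULL
        && frame_is_long_enough(len)
        && frame_has_marker(buf)
        && checksum_is_valid(buf, len);
}

int main(void)
{
    const uint8_t good[] = { 0xA5, 0x01, 0x02, 0xA8 };   /* 0xA5+0x01+0x02 = 0xA8 */
    return handle_packet(good, sizeof good) == handle_packet_monolithic(good, sizeof good) ? 0 : 1;
}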

Jack: Who tests the tests?

James: In part, the production code tests the test code. Bob Martin wrote a blog post a few years ago describing how TDD is like double-entry accounting. Every entry is both a debit and a credit, and the accounts have to end up balanced or something is wrong. If there is a test failure, it could be due to a mistake in the test or in the production code. Copy and paste of test cases is the biggest source of wrong test cases that I have seen, but it's not a big deal because the feedback comes just seconds after the mistake, making it easy to find.

Also, the second step in the TDD micro cycle helps get a test case right in the first place. In that step, we watch the new test case fail prior to implementing the new behavior. Only after seeing that the test case can detect the wrong result do we make the code behave as specified by the test case. So at first a wrong implementation tests the test case; after that, the production code tests the test case.
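Here is a hypothetical illustration of why that step matters. The second test below was copied from the first and never updated, so it passes even against completely unimplemented stubs; expecting to see it fail before writing the real code is what exposes a test like this. The CircularBuffer names and stubs are invented for the example.

/* test_the_test.c - illustration only; the CircularBuffer API is hypothetical. */
#include <assert.h>
#include <stdbool.h>

/* Deliberately unimplemented stubs: every query still answers "empty". */
void CircularBuffer_Put(int value) { (void)value; }
bool CircularBuffer_IsEmpty(void)  { return true; }

static void test_buffer_is_empty_after_creation(void)
{
    assert(CircularBuffer_IsEmpty());          /* passes, as intended */
}

static void test_buffer_is_not_empty_after_put(void)
{
    CircularBuffer_Put(42);
    /* Copy-paste mistake: this should assert !CircularBuffer_IsEmpty().
       It passes against the stub above, so it can never fail and proves
       nothing. Watching for the expected failure catches it immediately. */
    assert(CircularBuffer_IsEmpty());
}

int main(void)
{
    test_buffer_is_empty_after_creation();
    test_buffer_is_not_empty_after_put();
    return 0;
}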

Another safeguard is to have others look at the tests. That could be through pair programming or test reviews. Actually, on some teams we've decided that doing test reviews is more important than reviewing production code. The tests are a great place to review interface and behavior, two critical aspects of design.
