Is test-driven development bullshit?

I have never worked at a company that used it, but when I tried to use it on my own, I ran into various problems. One was that your tests are supposed to test features in isolation. How do you test that you successfully added an object to a database without using your own function to read that object back from the database? TDD tutorials usually use bullshit calculator examples, which are great if you want to write tests for functions that just return some output for a given input, but not that useful in the above scenario. Also, it is difficult to isolate static functions from the functions that use them. You pretty much have to wrap what would otherwise be your static functions in some sort of singleton and inject a mock version into the code under test, which gets very cumbersome. Are these solved problems that I don’t know about?


Test-driven development generally assumes that each individual subsystem can be tested in isolation from the system itself. This means that adding records to and deleting records from a database should behave the same whether the database layer is connected to the main program or not.

Obviously it’s going to rely on your access, set, and manipulation functions. Yes, it’s possible for something to go wrong that normal tests likely won’t detect, but generally testing your code is going to catch more errors than, well, not testing it.

The only thing test-driven dev doesn’t really deal with well is GUI-based environments, because it’s usually rather hard to automate the testing of interrupt-based listener events.

Usually there’s more than one way to verify an operation. But this is the basic problem with unit testing: you may only be testing for things that already work, and the tests aren’t very complex. One alternative is random testing, which produces an unpredictable set of operations whose cumulative result can be compared to a reference, either locally or in a parallel but different system.
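A minimal sketch of that random/differential idea, assuming a hypothetical `KVStore` class as the system under test and a plain dict as the parallel reference model (both names are made up for illustration):

```python
import random

class KVStore:
    """Toy system under test: a key-value store standing in for a DB layer."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def delete(self, key):
        self._data.pop(key, None)
    def get(self, key):
        return self._data.get(key)

def random_differential_test(seed, n_ops=1000):
    """Feed the same random operation stream to the store and to a plain
    dict used as the reference model, then compare cumulative results."""
    rng = random.Random(seed)
    store, model = KVStore(), {}
    for _ in range(n_ops):
        op = rng.choice(["put", "delete", "get"])
        key = rng.randrange(10)
        if op == "put":
            value = rng.randrange(100)
            store.put(key, value)
            model[key] = value
        elif op == "delete":
            store.delete(key)
            model.pop(key, None)
        else:
            # spot-check along the way
            assert store.get(key) == model.get(key)
    # the cumulative result must match the reference model
    assert all(store.get(k) == v for k, v in model.items())
    return True

assert random_differential_test(seed=42)
```

Because the operation stream is seeded, any failure is reproducible by re-running with the same seed.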

It is not bullshit. It is also not universally applicable.

I would say that GUI and I/O (including databases) are difficult to fit into a test harness. If you structure things for testing from the outset, you will probably define a DatabaseAccessor class that implements the DatabaseSupplier interface, and make a MockAccessor class that also implements the interface but uses a temporary (or imaginary!) database to support your tests.
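In Python terms, that structure might look like the sketch below (the `DatabaseSupplier`/`MockAccessor` names come from the paragraph above; `register_user` is a hypothetical piece of code under test). It also answers the original question: you verify the write by inspecting the mock directly, not by calling your own read function:

```python
from abc import ABC, abstractmethod

class DatabaseSupplier(ABC):
    """The interface the rest of the code depends on."""
    @abstractmethod
    def save(self, key, record): ...
    @abstractmethod
    def load(self, key): ...

class MockAccessor(DatabaseSupplier):
    """Test double backed by a plain dict -- the 'imaginary' database."""
    def __init__(self):
        self.saved = {}
    def save(self, key, record):
        self.saved[key] = record
    def load(self, key):
        return self.saved.get(key)

def register_user(db: DatabaseSupplier, user_id, name):
    """Hypothetical code under test: depends only on the interface."""
    db.save(user_id, {"name": name})

# The unit test inspects the mock's state directly, so the check does not
# depend on the very read function you are also trying to test:
db = MockAccessor()
register_user(db, 7, "Ada")
assert db.saved[7] == {"name": "Ada"}
```

A real `DatabaseAccessor` would implement the same interface against the actual database, and production code would never know the difference.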

Yesterday I was adding a feature for a device driver. The functionality will involve 4 different DLLs and at least 3 threads. If I tried to create unit tests for all that, I’d spend all week on this little feature.

But I did write tests for the low-level byte encoder and then wrote the code to satisfy the tests. And I used the same TDD approach to add the logic in the device emulator. The rest of the testing will be done by hand, I’m afraid.
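Test-first on a small encoder looks something like this sketch; `encode_length_prefixed` is a made-up stand-in for the low-level byte encoder mentioned above, chosen only to show the tests-before-code shape:

```python
# Step 1: write the tests first, pinning down the intended behavior.
def test_encode():
    assert encode_length_prefixed(b"") == b"\x00"
    assert encode_length_prefixed(b"hi") == b"\x02hi"

# Step 2: write just enough code to satisfy the tests.
def encode_length_prefixed(payload: bytes) -> bytes:
    """Hypothetical encoder: a single length byte followed by the payload."""
    if len(payload) > 255:
        raise ValueError("payload too long for a one-byte length prefix")
    return bytes([len(payload)]) + payload

test_encode()
```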

At least the unit tests I wrote will still be there as a safety net when we make more changes later on. And when somebody else inherits this code, the test code can show them how it’s supposed to be used.

To be successful, TDD requires tools, and there are lots and lots of tools available. For example, there are tools to make mock versions of ancillary objects so you don’t need working versions of other classes to test the one you are working on. There are tools to quickly build and then tear down temporary databases so you can test a single DB operation in isolation. There are also tools to test web-page content so you can automate some aspects of UI testing.
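Python ships one such mocking tool in the standard library, `unittest.mock`. A small sketch, where the `Signup`/`mailer` collaborators are invented purely for illustration:

```python
from unittest.mock import MagicMock

class Signup:
    """Hypothetical service that depends on a mailer collaborator."""
    def __init__(self, mailer):
        self.mailer = mailer
    def register(self, email):
        self.mailer.send(email, "Welcome!")

# No working mailer class is needed to test Signup:
mailer = MagicMock()
Signup(mailer).register("ada@example.com")

# The mock records every call, so the test can verify the interaction:
mailer.send.assert_called_once_with("ada@example.com", "Welcome!")
```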

In addition to tools you need to adopt test-friendly architectures. When designing object interactions and writing code you need to be aware how design decisions affect testability.

Also because it can’t catch bugs like, “the font on this button label is supposed to be System, not Consolas.”

I’ve worked on three TDD projects that had to do database stuff. The way we did testing was to create a database wrapper interface. We then made a mock database wrapper for unit testing, which we could load with data rows to return, and which would create a log of all queries run. Unit tests would then pass the mock wrapper to the objects we were testing, register the data rows to return, perform the actions we wanted to test, and then read the log to make sure the right things were done in the right order.
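A compact sketch of that pattern, with hypothetical names (`MockDBWrapper`, `rename_user`) standing in for the real classes described above:

```python
class MockDBWrapper:
    """Test double: returns pre-registered rows and logs every query run."""
    def __init__(self):
        self._rows = {}
        self.log = []
    def register(self, query, rows):
        self._rows[query] = rows
    def execute(self, query, params=()):
        self.log.append((query, params))
        return self._rows.get(query, [])

def rename_user(db, user_id, new_name):
    """Hypothetical code under test: checks existence, then updates."""
    rows = db.execute("SELECT id FROM users WHERE id = ?", (user_id,))
    if rows:
        db.execute("UPDATE users SET name = ? WHERE id = ?",
                   (new_name, user_id))

# Pass the mock to the code under test, register rows, act, then check
# that the right queries ran in the right order:
db = MockDBWrapper()
db.register("SELECT id FROM users WHERE id = ?", [(3,)])
rename_user(db, 3, "Grace")
assert [q for q, _ in db.log] == [
    "SELECT id FROM users WHERE id = ?",
    "UPDATE users SET name = ? WHERE id = ?",
]
```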

Testing the real wrapper class was done by creating a test database, initializing tables as needed, and then running the data retrieval / manipulation functions of the wrapper class, after which we would check those tables to make sure the right data had gotten read / written / updated / deleted as expected.
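With SQLite, the "create a test database, run the wrapper, check the tables" cycle can be done entirely in memory. A minimal sketch, assuming a made-up `SQLiteWrapper` in place of the project's real wrapper class:

```python
import sqlite3

class SQLiteWrapper:
    """A minimal real wrapper, exercised against a throwaway database."""
    def __init__(self, conn):
        self.conn = conn
    def insert_user(self, name):
        cur = self.conn.execute(
            "INSERT INTO users (name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid
    def get_user(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None

# Create the test database and initialize tables:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Run the wrapper's manipulation function...
db = SQLiteWrapper(conn)
uid = db.insert_user("Ada")

# ...then check the table directly, not through the wrapper's own reader:
assert conn.execute("SELECT name FROM users").fetchall() == [("Ada",)]
assert db.get_user(uid) == "Ada"
conn.close()
```

The in-memory database vanishes when the connection closes, so tear-down is automatic and every test starts clean.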

For one of the projects, we made a generic database wrapper class that could write arbitrary objects to a database, based upon a compile-time object description. We tested it for objects with one field each, across each allowed field type, and we tested it for objects with multiple fields: mostly 2 or 3 fields, but also one big hairy complex object. That last test took a while to write, but we feel it suitably exercised the different interacting corner cases.
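A rough sketch of a description-driven wrapper like that. Python has no compile-time descriptions, so a runtime field map stands in, and all names (`USER_DESC`, `create_table`, `write_object`) are invented for illustration:

```python
import sqlite3

# Hypothetical "object description": table name plus field name -> SQL type.
USER_DESC = ("users", {"name": "TEXT", "age": "INTEGER"})

def create_table(conn, desc):
    """Generate a table schema from the object description."""
    table, fields = desc
    cols = ", ".join(f"{name} {typ}" for name, typ in fields.items())
    conn.execute(f"CREATE TABLE {table} (id INTEGER PRIMARY KEY, {cols})")

def write_object(conn, desc, obj):
    """Write an arbitrary object (here a dict) using only the description."""
    table, fields = desc
    names = list(fields)
    placeholders = ", ".join("?" for _ in names)
    conn.execute(
        f"INSERT INTO {table} ({', '.join(names)}) VALUES ({placeholders})",
        tuple(obj[n] for n in names))
    conn.commit()

# One-field and multi-field objects can then be tested the same way:
conn = sqlite3.connect(":memory:")
create_table(conn, USER_DESC)
write_object(conn, USER_DESC, {"name": "Ada", "age": 36})
assert conn.execute("SELECT name, age FROM users").fetchone() == ("Ada", 36)
```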

Yup, that’s generally how it’s supposed to be done.

An alternative is to just forget isolation, and test with a real DB. It’s not unit testing, strictly speaking, but it can be useful.