Software testing can range from nonexistent to so overdone you can’t produce a finished product. That said, here’s the way I’ve handled it in the past.
RULE 1: Programmers testing their own code is inadequate. It won’t catch misunderstandings in the specs, and they’ll obviously only test with the input data they anticipated.
My first step was always to have programmers test each other’s modules. The more of this I did, the more the programmers turned it into a game (“I’ll find more bugs in your code than you’ll find in mine”), and that competitiveness made their testing more thorough.
Next step: I’d have whoever wrote the original spec give it a go-through to see if what they got matched what they thought they were asking for. This would catch totally different things than other programmers would test for.
Once they’d fixed whatever those two steps found, I’d do an alpha test cycle. Since we’d often go for months between these, I’d frequently hire a few college students majoring in whatever the software did and have them go through the documentation and check every feature. They were explicitly instructed not to follow the rules. If the documentation said to enter a number from 1 to 100, I’d have them enter a few random numbers in the range, but also try entering 0, 1, 100, 101, -1, @, x, and 1023498743289164321804683926, just to see if the code crashed and burned.
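If you wanted to automate that kind of abuse today, a minimal sketch might look like the following. It assumes a hypothetical parse_quantity function (the module path is made up for illustration) that is supposed to accept an integer from 1 to 100 and raise ValueError for anything else:

```python
import pytest

# Hypothetical function under test: accepts an integer 1-100, raises ValueError otherwise.
from myapp.input import parse_quantity  # assumed module and function name

# In-range values, including the boundaries themselves.
@pytest.mark.parametrize("value", ["1", "37", "100"])
def test_accepts_values_in_range(value):
    assert 1 <= parse_quantity(value) <= 100

# Out-of-range and malformed inputs: the "don't follow the rules" cases.
@pytest.mark.parametrize(
    "value",
    ["0", "101", "-1", "@", "x", "1023498743289164321804683926", ""],
)
def test_rejects_values_outside_range(value):
    with pytest.raises(ValueError):
        parse_quantity(value)
```

The point is the same as with the students: hit the boundaries exactly, then deliberately step outside them.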
I’d also have the person responsible for the product documentation (hopefully someone with some writing skill) go through at this point to make sure the error messages were clear, the prompts made sense, and everything was spelled right.
Once I was comfortable that catastrophic crashes (those that destroyed data) were not too terribly likely, we’d bring in customers for beta testing. Unlike all previous tests, these were real-world: using the product for what it was designed for, not throwing artificial data at it to try to make it choke.
One of the most useful testing processes we ever added was when we had a bit of spare time and I had a couple of guys build a set of macros that drove the software. The macros were very long and very complex, and basically started from scratch and used every function the software had to construct a big file (this was a graphics system and it was building a huge database). Every time we made a change, no matter how small, we’d run it through the macros and compare the final result to the “gold brick” final result. This caught a huge number of tiny errors. Of course, those macros had to be changed (and a new gold brick generated) every time we made major changes.
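The modern name for that “gold brick” comparison is a golden-master (or regression) test. A minimal sketch of the idea, assuming a hypothetical graphics_tool executable that can replay a macro script and write its output to a file (all of the paths and the --replay/--out flags are made up for illustration):

```python
import hashlib
import subprocess
from pathlib import Path

# Hypothetical paths: a scripted "macro" of commands, and the saved
# gold-brick output produced by a known-good build.
MACRO_SCRIPT = Path("tests/macros/full_build.mac")
GOLD_BRICK = Path("tests/golden/full_build.out")
FRESH_OUTPUT = Path("build/full_build.out")

def checksum(path: Path) -> str:
    """Hash a file so large outputs can be compared cheaply."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def test_full_macro_matches_gold_brick():
    # Replay the macro that exercises every function in the product.
    subprocess.run(
        ["./graphics_tool", "--replay", str(MACRO_SCRIPT), "--out", str(FRESH_OUTPUT)],
        check=True,
    )
    # Any behavioral change, however tiny, shows up as a mismatch.
    assert checksum(FRESH_OUTPUT) == checksum(GOLD_BRICK)
```

As in the original setup, the gold brick has to be regenerated deliberately whenever a major change is supposed to alter the output; otherwise every mismatch is treated as a bug.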