Educate me on software testing practices!

To make a long story short, our current testing process at work isn’t working. Management has asked us code monkeys for our opinions on what we should change to make it better. The thing is, a) this is my first job out of school, and b) I’m not too familiar with testing procedures since I didn’t go the Comp Sci route, so I’m looking to my fellow Geek Dopers for help.

Some background: when I started here, testing procedure was non-existent. The client emailed us, we did the work, we emailed the client that it was done. About a year ago we got a manager and a bug tracking app, but the app basically just gives us a more organized place to keep track of things. The manager doesn’t review the bugs properly, either: she either bulk-verifies them or just takes a surface look. I think management’s view is that everything should work the first time and that the person who made the change should be able to find and fix all the bugs.

How can we improve our testing process? And how can we get across to management that everything needs a rigorous testing process, and that ‘we’re doing everything twice’ (i.e., more than one person looking at the same thing) is not a bad thing?

In my experience, formal test plans should be written from the requirements and specs by the business analysts and customers, not the programmers who obviously coded what they thought was right. The test plan needs to be detailed and include every possible logic branch. Programmers should never do the QA testing (as opposed to unit and integration testing) themselves but some places aren’t big enough to separate duties that way. In that case I think that the best thing to do is write your test plan from the reqs or specs before you code it. In a small application this is no big deal, and in a large one it’s a major phase of the project.
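
To make “every possible logic branch” a bit more concrete, here’s roughly what I mean, sketched as parameterized tests in Python (the shipping-fee spec and the use of pytest are just made-up examples for illustration, not anything from your shop):

```python
import pytest

# Hypothetical spec: orders under $50 pay $5.00 shipping, $50 to $99.99 pay
# $2.00, and $100 and up ship free. Negative totals are invalid.
def shipping_fee(order_total):
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    if order_total < 50:
        return 5.00
    if order_total < 100:
        return 2.00
    return 0.00

# One case per logic branch in the spec, boundaries included.
@pytest.mark.parametrize("total, expected", [
    (0, 5.00),       # smallest valid order
    (49.99, 5.00),   # just under the first boundary
    (50, 2.00),      # first boundary
    (99.99, 2.00),   # just under the second boundary
    (100, 0.00),     # second boundary
    (250, 0.00),     # well above it
])
def test_shipping_fee_branches(total, expected):
    assert shipping_fee(total) == expected

def test_negative_total_is_rejected():
    with pytest.raises(ValueError):
        shipping_fee(-1)
```

The point is that every branch and boundary in the spec shows up as its own test case, written from the spec rather than from the code, so nothing gets checked only in the developer’s head.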

Without writing a book on how to conduct software testing (there are already plenty out there, hint, hint), here are some basic tips:

You definitely need more than one level of testing.

Developers should test as they go, then they should test again.

Analysts should create test scripts and then testers should test the apps against the test scripts.

Finally, before the app is officially delivered the client should conduct user acceptance testing.
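
To give a feel for what those analyst-written test scripts can look like, here’s a tiny illustrative sketch (the login flow and the format are made up, not any standard): each step pairs an action with the result the spec says should happen, and the tester just walks through it and signs off.

```python
# Illustrative only: a test script the analysts might hand to the testers.
LOGIN_TEST_SCRIPT = [
    {"step": 1, "action": "Open the login page",
     "expected": "Username and password fields are shown"},
    {"step": 2, "action": "Submit a valid username with the wrong password",
     "expected": "Error message shown, no session created"},
    {"step": 3, "action": "Submit a valid username and password",
     "expected": "User lands on the dashboard"},
    {"step": 4, "action": "Click 'Log out'",
     "expected": "Session ends and the login page is shown again"},
]

def print_script(script):
    """Print the script as a checklist the tester can follow and sign off."""
    for item in script:
        print(f"Step {item['step']}: {item['action']}")
        print(f"  Expected: {item['expected']}")
        print("  Pass/Fail: ____   Tester: ____   Date: ____")

if __name__ == "__main__":
    print_script(LOGIN_TEST_SCRIPT)
```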

As for your bug tracking app, there are several good ones available, but without the right process around them you’re not going to see any benefit. I was responsible for issue tracking on one of our major projects. We used a home-grown bug tracking app, but what really mattered was the process.

Several types of users were responsible for reporting bugs: developers, analysts/testers, content editors, and client users. They all reported into the same system. It’s important that everyone is using the same method to report problems.

Once the issues were reported, I was responsible for issue triage. Issues were assigned a priority based on severity of the issue or the number of users affected. On a daily basis, these issues were assigned to a lead for fixes. If it was a code problem, the issue was assigned to the lead developer. If it was a content issue, it was assigned to the lead editor, and so on based on the type of issue.

The lead assigned the issue down to a specific employee to fix, and the status was updated. The issue was then sent back into the queue for testing. Only after the issue was re-tested by one of the analysts/testers and confirmed to be fixed was the status updated to “resolved.”
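
If it helps to picture it, the lifecycle of each issue was basically a little state machine. Here’s a rough sketch in Python (the state names are my own labels for what I described above, not anything official):

```python
from enum import Enum

class Status(Enum):
    REPORTED = "reported"    # anyone (dev, tester, editor, client) files it
    TRIAGED = "triaged"      # priority set, routed to the right lead
    ASSIGNED = "assigned"    # lead handed it to a specific person to fix
    FIXED = "fixed"          # back in the queue for re-testing
    RESOLVED = "resolved"    # an analyst/tester confirmed the fix

# Only these transitions are legal; anything else is a process violation.
ALLOWED = {
    Status.REPORTED: {Status.TRIAGED},
    Status.TRIAGED: {Status.ASSIGNED},
    Status.ASSIGNED: {Status.FIXED},
    Status.FIXED: {Status.RESOLVED, Status.ASSIGNED},  # re-test can bounce it back
    Status.RESOLVED: set(),
}

def advance(current, new):
    """Move an issue to a new status, enforcing the workflow."""
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition: {current.value} -> {new.value}")
    return new
```

The important rule is the second-to-last transition: nothing reaches “resolved” without going back through a tester first.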

On a weekly basis a meeting was held with the leads and the project manager. All high and medium priority issues were reviewed. If something was still on the list from last week, there had better be a good explanation as to why. High priority issues were expected to be addressed and resolved ASAP.

It was a good system: we managed to address everything in fairly short order and ensure that fixes were tested appropriately.

If the problem is that your management doesn’t see the value in testing, here’s a great article from Joel on Software you might want to share with them: Top Five (Wrong) Reasons You Don’t Have Testers.

Software testing can range from nonexistent to so overdone you can’t produce a finished product. That said, here’s the way I’ve handled it in the past.

RULE 1: Programmers testing their own code is inadequate. It won’t catch misunderstandings in the specs, and they’ll obviously only test with the input data they anticipated.

My first step was always to have programmers test each other’s modules. The more of this I did, the more the programmers turned it into a game (“I’ll find more bugs in your code than you’ll find in mine”), and that competitiveness made their testing more thorough.

Next step: I’d have whoever wrote the original spec give it a go-through to see if what they got matched what they thought they were asking for. This would catch totally different things than other programmers would test for.

Once they’d fixed whatever those two steps found, I’d do an alpha test cycle. Since we’d often go for months without one of these, I’d frequently hire a few college students majoring in whatever the software did and have them go through the documentation and check every feature. They were explicitly instructed to not follow the rules. If the documentation said to enter a number from 1 to 100, I’d have them enter a few random numbers in the range, but also try entering 0, 1, 100, 101, -1, @, x, and 1023498743289164321804683926, just to see if the code crashed and burned.
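
These days you can automate a lot of that “break the rules on purpose” testing. Here’s a quick sketch of the same idea in Python with pytest (the parse_percentage function is hypothetical, just standing in for a “1 to 100” input field):

```python
import pytest

# Hypothetical validator for a field documented as "enter a number from 1 to 100".
def parse_percentage(text):
    value = int(text)          # raises ValueError on '@', 'x', and friends
    if not 1 <= value <= 100:
        raise ValueError(f"{value} is out of range")
    return value

# A few values that follow the rules...
@pytest.mark.parametrize("text", ["1", "37", "100"])
def test_valid_inputs(text):
    assert parse_percentage(text) == int(text)

# ...and the same garbage the students were told to type in.
@pytest.mark.parametrize("text",
    ["0", "101", "-1", "@", "x", "1023498743289164321804683926"])
def test_garbage_is_rejected(text):
    with pytest.raises(ValueError):
        parse_percentage(text)
```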

I’d also have the person responsible for the product documentation (hopefully someone with some writing skill) go through it at this point to make sure the error messages are clear, the prompts make sense, and everything is spelled right.

Once I was comfortable that catastrophic crashes (those that destroyed data) were not too terribly likely, we’d bring in customers for beta testing. Unlike all previous tests, these were real-world, using the product for what it’s designed for, not throwing it artificial data to try and make it choke.

One of the most useful testing processes we ever added came about when we had a bit of spare time and I had a couple of guys build a set of macros that drove the software. The macros were very long and very complex, and basically started from scratch and used every function the software had to construct a big file (this was a graphics system and it was building a huge database). Every time we made a change, no matter how small, we’d run it through the macros and compare the final result to the “gold brick” final result. This caught a huge number of tiny errors. Of course, those macros had to be changed (and a new gold brick generated) every time we made major changes.
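
For what it’s worth, that “gold brick” approach is what people now call golden-file or snapshot regression testing, and it’s easy to wire up. A rough sketch, assuming some hypothetical build_output() driver that exercises the whole system the way our macros did:

```python
import json
from pathlib import Path

GOLDEN = Path("tests/golden_output.json")

def build_output():
    """Hypothetical driver: exercise every feature and return the final result."""
    # In the real thing this would be the long macro run that builds the big file.
    return {"shapes": 1234, "layers": 7, "checksum": "abc123"}

def test_matches_golden_result():
    result = build_output()
    if not GOLDEN.exists():
        # First run: record the gold brick. Regenerate it deliberately whenever
        # a change is *supposed* to alter the output.
        GOLDEN.parent.mkdir(parents=True, exist_ok=True)
        GOLDEN.write_text(json.dumps(result, indent=2, sort_keys=True))
    expected = json.loads(GOLDEN.read_text())
    assert result == expected
```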

I was making up a hilariously snarky post by quoting previous people, with responses like ‘Test cases? What are these…test cases? I would like to subscribe to your newsletter!’ but then I got depressed.

We follow nothing like this (although I should stress we are a VERY small team).

It can be hard to distill best practices down to a very small environment, but good testing procedures can still be hammered out, even if people have to wear more than one hat, using some of the things that invisible wombat mentioned. One simple thing you can do is have someone like another programmer do a blind test. Don’t tell them a thing about how it’s supposed to work - just let them loose to do their best to break it. They’ll put in all sorts of garbage data that you aren’t expecting, and they’ll have a good general idea of what kinds of things cause edits to fail. That takes care of a lot of the problems that come from only testing for positives instead of negatives. If you have at least one other programmer working with you, cross-testing each other’s code is good.

If you’re currently going straight from “I tested it” to telling the customer it’s done, you can add a more formal ‘End User QA Testing’ stage that makes it clear the app is still in testing rather than ‘complete but with a bunch of bugs’. That tends to cut down on the complaints about putting buggy stuff into production, because the user has signed off on his or her testing and given you the go-ahead.

This is probably the most efficient thing you can do: automate as many tests as you can, and have the developers run the tests before committing changes to the main repo. This includes not just unit tests and other “API” tests, but also automated tests of the front end. You’ll still need testers, but it’s amazing how often you can catch bugs by writing good tests. Also see Test-driven development - Wikipedia
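
One cheap way to make “run the tests before committing” actually happen is a pre-commit hook. A minimal sketch (this assumes pytest is your test runner; drop it in .git/hooks/pre-commit and make it executable):

```python
#!/usr/bin/env python3
# Sketch of a .git/hooks/pre-commit hook: block the commit if the suite fails.
import subprocess
import sys

result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
if result.returncode != 0:
    print("Tests failed; commit aborted. Fix them (or use --no-verify if you must).")
sys.exit(result.returncode)
```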

Take XJETGIRLX’s advice and check out the Joel link. In fact there are many things out there he has written that are at least partly relevant to your situation, and he’s got 2 or 3 paperbacks out that are nice collections of articles - I’m reading “Joel on Software” now and bought “More Joel on Software” last weekend. I don’t know much about how professional programming shops operate, and at least at my level his writings look quite useful.

I’ll never forget when a group of my programmers proudly presented a product they felt was finished. I invited the sales manager in to take a look at it. He walked up to the computer and banged randomly on the keyboard, pressing return every so often. It took about 15 seconds to crash the program. Those were some dejected programmers.

It probably wasn’t necessary by that point, but I made them all sit and listen to my input validation lecture anyway (my personal philosophy is that things should be coded so that the user can’t enter invalid information, and then it should be tested for validity anyway).

We called it “idiot proof,” but programmers, being logical people, have a hard time conceiving just how idiotic people can be when it comes to learning software. At times they lacked the imagination to anticipate the creative “I wonder what will happen if I do this?” mindset.
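
If you want a cheap automated version of the sales manager banging on the keyboard, a tiny fuzz loop does roughly the same job. A sketch, where handle_input() is a hypothetical stand-in for whatever your app does with a line of user input, and the only contract being tested is “never blow up on garbage”:

```python
import random
import string

def handle_input(text):
    """Hypothetical stand-in for whatever the app does with a line of input.
    The contract under test is simply 'never blow up on garbage'."""
    return text.strip().lower()

def test_survives_random_keyboard_mashing():
    rng = random.Random(1234)  # fixed seed so any failure is reproducible
    for _ in range(1000):
        garbage = "".join(rng.choice(string.printable)
                          for _ in range(rng.randint(0, 80)))
        handle_input(garbage)  # the only assertion: no exception escapes
```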