Poll: Do you recognize the term "Fuzz Testing", and are you a Software Developer

I was talking with a software developer today, and I brought up Fuzz Testing. He’d never heard of it. I thought it was a well-known term. A friend who’s a developer didn’t recognize it either. I guess it’s not commonly known, but can you give me the Straight Dope? Poll:

  • Developer who recognized “Fuzz Testing”
  • Developer who did not recognize “Fuzz Testing”
  • Non-developer who recognized “Fuzz Testing”
  • Non-developer who did not recognize “Fuzz Testing”
0 voters

Hugely obsolete now, but instantly recognizable as 2005-era buzzword terminology.

I’m not a developer but I’m a longtime technical business analyst and part-time tester. I didn’t recognize the term but after googling for a definition I certainly know the method, just not by that name.

Am a SWE, know the term, don’t use it super often, but have it available when needed.

I also didn’t know the term. But I do know about deliberately trying to break things to test for bugs.

I’m not in the field myself, but I do watch videos and read articles on such stuff. And the above is usually what they call it. Sometimes, they even just call it “bug testing,” with the method being clear from context.

Fuzz testing is a subset of that. The idea is that you can expose bugs not just manually, but by sending random garbage as input to the code. That takes much less manual labor, and in principle you can fully automate the process of detecting when malformed inputs break things. The longer you let it run, the more bugs it finds.
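To make that concrete, here's a minimal sketch of "dumb" fuzzing: throw random byte strings at a parser and keep any input that raises something other than the expected rejection. The `parse_length_prefixed` target is a made-up toy, not from any post above.

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser under test: first byte is a payload length."""
    if not data:
        raise ValueError("empty input")
    n = data[0]
    payload = data[1:1 + n]
    if len(payload) != n:
        # A real parser missing this check might read garbage instead.
        raise ValueError("truncated payload")
    return payload

def fuzz(trials: int = 10_000, seed: int = 0) -> list[bytes]:
    """Throw random byte strings at the parser; collect crashing inputs."""
    rng = random.Random(seed)
    crashers = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parse_length_prefixed(data)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception:
            crashers.append(data)  # anything else is a bug worth saving
    return crashers
```

Since the loop only needs to detect "unexpected exception," the whole thing can run unattended; the longer it runs, the more of the input space it samples.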

In practice, there’s some hand-holding, because you might not explore the space of malformed inputs very efficiently without some domain knowledge. Say you’re trying to break a JPEG decoder: you might not catch many errors by inputting purely random files. But fuzzing specific parts, like random width/height values, data lengths, and so on, is likely to catch bugs faster.
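A sketch of that structure-aware approach: start from one valid seed file and mutate only the interesting fields. Everything here (the fake header layout, the field offsets) is hypothetical, just to show the shape of a field-targeted mutator versus purely random files.

```python
import random

# Hypothetical seed: 2-byte magic, then big-endian 16-bit width and
# height fields, then padding. Not a real JPEG layout.
SEED = (bytes.fromhex("ffd8")
        + (100).to_bytes(2, "big")
        + (100).to_bytes(2, "big")
        + b"\x00" * 16)

def mutate_dimensions(seed: bytes, rng: random.Random) -> bytes:
    """Keep the file valid-looking but stuff edge-case values into the
    width (bytes 2-3) and height (bytes 4-5) fields."""
    data = bytearray(seed)
    for offset in (2, 4):
        value = rng.choice([0, 1, 0x7FFF, 0x8000, 0xFFFF,
                            rng.randrange(0x10000)])
        data[offset:offset + 2] = value.to_bytes(2, "big")
    return bytes(data)

rng = random.Random(1)
corpus = [mutate_dimensions(SEED, rng) for _ in range(5)]
```

Because every generated file still starts with valid magic bytes, the decoder gets past its first sanity checks and the fuzzer actually reaches the dimension-handling code.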

It’s not a replacement for manual testing, but can be a good adjunct. And if you don’t do it, hackers will (if it’s a potential security vulnerability).

I only know of it from this project:

Android has a tool called Monkey (the UI/Application Exerciser Monkey) that ‘tests’ an app by pressing and swiping the touchscreen randomly to look for crashes.

There used to be (or maybe still is) a term in GIS called Fuzzy Creep/Tolerance:

  • 1. [spatial analysis] The distance within which coordinates of nearby features are adjusted to coincide with each other when topology is being constructed or polygon overlay is performed. Nodes and vertices within the fuzzy tolerance are merged into a single coordinate location. Fuzzy tolerance is a very small distance, usually from 1/1,000,000 to 1/10,000 times the width of the coverage extent, and is generally used to correct inexact intersections.

Never been a problem. At least for me.

I feel like there are two kinds of testing here:

  • “Chaos Monkey” type testing, where you’re generating random noise and making sure nothing crashes/leaks, but the test cases are mostly expected not to “work” in the sense of doing anything useful.
  • “Property” type testing, where you are constructing real test cases that might actually show up in production, and the randomness ensures you are covering a good sample of those test cases, instead of just trying a few like you would in a unit test.

One thing I’ve seen in a couple of frameworks of the latter type is that if the framework finds an error case, it will try to find a simpler error case for easier debugging.
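That "find a simpler error case" step is usually called shrinking. A hand-rolled sketch of the greedy version (real frameworks like Hypothesis do this much more cleverly); the `still_fails` predicate stands in for re-running the failing test:

```python
def shrink(failing: list[int], still_fails) -> list[int]:
    """Greedy shrinker: repeatedly try dropping elements, then halving
    values toward zero, keeping any change that still reproduces the bug."""
    case = list(failing)
    changed = True
    while changed:
        changed = False
        # First try removing each element.
        for i in range(len(case)):
            candidate = case[:i] + case[i + 1:]
            if still_fails(candidate):
                case, changed = candidate, True
                break
        else:
            # No removal worked; try halving each value.
            for i, v in enumerate(case):
                if v and still_fails(case[:i] + [v // 2] + case[i + 1:]):
                    case[i] = v // 2
                    changed = True
                    break
    return case

# Suppose the code under test breaks on any list containing a value >= 100.
bug = lambda xs: any(x >= 100 for x in xs)
print(shrink([3, 250, 7, 180], bug))  # prints [180]
```

Greedy shrinking isn't guaranteed to find the absolute minimum (here it stops at `[180]` rather than `[100]` because halving overshoots), but a one-element case is already far easier to debug than the original.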

I’ve never heard it called by that term, but I’ve certainly done it. I’ve run servers completely dry of entropy throwing random garbage at cellular network nodes.

Same here. I created heritable classes to provide random test data for high- and low-level application testing. My recollection is there was more support for unit testing at the time, which was mostly implemented the wrong way: developers wrote the unit tests for their own code in an isolated environment, so the tests revealed nothing and allowed problems to propagate through future development.

I retired a few years ago and never heard of it.
I was a business-oriented Programmer/Analyst.

Wouldn’t automating that process be just as hard as writing the original program bug-free to begin with?

Depends on what you’re testing I would think.

For simple stuff like EDI testing it is a good idea that should work well as it could be used again and again.

Probably true for alarm testing and logging for companies like AT&T.

A lot of online inquiries also seem suitable for this.

SW engineer for the last 40 years, not familiar with the term.

But I know to always mount a scratch monkey.

I’m a recently retired software engineer. I’ve written a gazillion unit tests, built integration test frameworks, used GitHub and cloud services for development, done full stack web development, etc. C/C++/C#, Java, all the latest frameworks up to about 5 years ago.

I’ve never heard the term. Of course the concept has been around forever, especially in security testing. Most of the time our tests push out to the edge cases, but not truly random. Things like if a function takes a string as input, you try zero length strings, huge strings, etc. You test for buffer overflow issues, commands hidden in strings depending on what’s done with them, that kind of stuff.
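The edge-case style described above is easy to picture in code. A small sketch (the `truncate_label` function and the invariant are made up for illustration): rather than random inputs, you hand-pick the usual boundary suspects for any string-taking function and check an invariant against each.

```python
def truncate_label(s: str, limit: int = 32) -> str:
    """Function under test: clip a string to `limit` chars, marking the cut."""
    if len(s) <= limit:
        return s
    return s[: limit - 1] + "…"

# The usual boundary suspects for a string parameter: empty, one char,
# lengths straddling the limit, huge, embedded NULs, injection-looking
# strings, non-ASCII text.
cases = ["", "a", "x" * 31, "x" * 32, "x" * 33, "x" * 10_000,
         "\x00embedded-nul", "'; DROP TABLE users; --", "日本語" * 20]
for s in cases:
    out = truncate_label(s)
    assert len(out) <= 32, f"length invariant broken for {s!r}"
```

Fuzzing essentially automates generating that `cases` list, which is exactly where it complements hand-written edge-case tests: it finds the boundary you didn't think to list.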

Writing good tests is a bit of an art. There are lots of garbage unit tests out there. ‘Fuzz testing’ sounds like a good tool to overcome developer bias in testing code.

Not necessarily.

A lot of fuzzers these days use profile-driven fuzzing - so that, for example, the fuzzer provides an input to the module under test and then checks to see what part of that module’s code executed for that input. It then takes that information and varies the input and over many iterations builds an understanding of how inputs exercise each basic block of the module’s code - some even take into account the values stored in variables. Over time, it tries to maximize the amount of code covered by the fuzz testing, and can point to parts of code it hasn’t been able to reach, so that a tester can guide the process a little to find how to cover what the fuzzer hasn’t.
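The coverage-feedback loop described above can be sketched in a few lines. This is a toy, not how AFL or libFuzzer are implemented: the `parse` target reports which of its (pretend) basic blocks ran, and the fuzzer keeps any mutated input that reaches code nothing in the corpus has reached before.

```python
import random

def parse(data: bytes) -> set[str]:
    """Toy target: returns the set of basic blocks it executed."""
    blocks = {"entry"}
    if data[:1] == b"F":
        blocks.add("magic")
        if data[1:2] == b"U":
            blocks.add("version")
            if len(data) > 8:
                blocks.add("deep")  # pretend a bug hides here
    return blocks

def coverage_guided_fuzz(iterations: int = 2000, seed: int = 0) -> set[str]:
    rng = random.Random(seed)
    corpus = [b""]                 # inputs worth mutating further
    seen_blocks: set[str] = set()
    for _ in range(iterations):
        child = bytearray(rng.choice(corpus))
        op = rng.randrange(3)      # mutate: flip, insert, or delete a byte
        if op == 0 and child:
            child[rng.randrange(len(child))] = rng.randrange(256)
        elif op == 1:
            child.insert(rng.randrange(len(child) + 1), rng.randrange(256))
        elif op == 2 and child:
            del child[rng.randrange(len(child))]
        blocks = parse(bytes(child))
        if not blocks <= seen_blocks:   # new coverage: keep this input
            seen_blocks |= blocks
            corpus.append(bytes(child))
    return seen_blocks
```

The key trick is that an input reaching the `magic` block gets saved and mutated further, so the fuzzer climbs toward `deep` one branch at a time instead of needing to guess the whole prefix at random.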

You’d like to think so, but no. Software is hard, and we’re still bad at it.

Also, the people writing the tests are not necessarily the developers. This is most obvious in the case of hackers, who have an interest in exploiting vulnerabilities. Developers have an interest in writing bug-free code, but they’re subject to different priorities. They don’t win a million dollars if they fix some subtle bug that allows stealing a bunch of passwords or whatever.

Less maliciously, fuzz testing is used by security researchers to find bugs before hackers do. The old Heartbleed bug (in the OpenSSL library) was found via fuzz testing.

Fuzz testing is a form of black box testing. I.e., you don’t need to know anything about the internals: you just feed in inputs and see what happens. So it doesn’t require the same degree of domain expertise as the original software development did.

And there are various generic fuzz testing frameworks, so a lot of the work has already been done and packaged up. There’s still some work involved in connecting it to the target code, but much less than building it all from scratch.

I’m a little surprised that fewer than half of the developers here have heard of it. It’s not that common a general practice in my experience, but I’d hope most developers follow security news just to keep abreast of things, and it’s very commonplace there.

Sam_Stone makes a very good point that fuzz testing is a way of overcoming developer bias. We can be very blind to the defects in our own code (otherwise we’d have fixed them already).

Towards the end, my company demanded that developers be their own QA. I pushed back on that quite hard, and lost. We were always told to write our own unit tests, and towards the end even write the test plans and waste expensive computer engineering time setting up test machines and building docker images and integration tests.

But having developers write their own test plans and execute them was a ridiculous idea. If the developer didn’t notice the design flaw in their code, why would you expect them to know to test for it? And Quality Assurance is its own professional field, and it shouldn’t be assumed that developers are all capable of decent QA - especially on their own code.

Fuzz testing at least adds an element of randomness to the testing. But the fact that so many developers have never heard of it backs up the idea that not all developers are educated in the latest QA things.