“Look … the damn machine said to type in password … so I typed in ‘password’ … what the hell else was I supposed to do?” – Non-techie Neighbor
Speak friend and enter.
That is definitely a problem. It’s known in some circles as a PEBCAK error. In others as an ID 10-t error. In yet others, it’s called LMGTFY.
Er, that’s not what Agile is about. Maybe that’s the way some places use it, but there’s nothing inherently “faster” or more error-prone in Agile than in any other methodology.
This. Don’t pin it on developers; screw that. Developers have next to no say about how much time they get to spend on security while writing software. It’s all a business decision, handed down from the top, and yeah, most of the time, “the top” wants new features and shit they can sell, not super-secure software that they can’t leverage for more money and that has absolutely no whizzy cool shit to sell to consumers.
It’s a balance. Go count the number of threads where people say “Hey, I need a piece of software that lets me balance my checkbook, retouch photos, vacuum my house, and whip up a mean martini at the end of the day. Oh, and it has to be free.”
Like TriPolar said, in the end, the consumer is the main driver behind shitty software. The minute everyone is content to wait and pay more cash for software is when a lot of security issues will go away. Until then, organizations do what they can to make things safe. Either way, it’s not the developers’ fault.
Too many to list.
But there’s this one as an example.
Phones, yes, but what about cars?
One thing that confuses me is the brute-force password attack. If your server containing very important data can be breached this way, why would you not spend a few extra dollars to install a fake mirror that lets the attackers in, looks almost exactly like the main server (with lots of convincing fake data), but leads nowhere? Make sure the attacker wastes a lot of time figuring out that they’re at a dead end. I think more major systems need to do this. It might even be practical to implement on the same machine as the main server – if the system is well-designed.
How can this be implemented? If the live server needs to be web-accessible, how does it determine what is legitimate traffic versus what isn’t?
They’re called honeypots, and are a real thing, generally used on sensitive installations and by security researchers studying new attack techniques and malware.
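For the curious, the simplest kind is just a fake login service that accepts connections, logs whatever the attacker types, and never lets anyone in. Here’s a minimal sketch (the port number and banner are made up for illustration; real honeypots like Cowrie emulate entire shells and filesystems):

```typescript
// Minimal low-interaction honeypot sketch: a fake "telnet" login prompt
// that logs every credential attempt and always rejects it.
import * as net from "node:net";

const server = net.createServer((socket) => {
  const peer = `${socket.remoteAddress}:${socket.remotePort}`;
  socket.write("Ubuntu 14.04 LTS\r\nlogin: ");

  socket.on("data", (chunk) => {
    // Everything the attacker types gets recorded for later analysis.
    console.log(`[honeypot] ${peer} sent: ${chunk.toString().trim()}`);
    socket.write("Login incorrect\r\nlogin: ");
  });

  socket.on("error", () => socket.destroy());
});

server.listen(2323, () => console.log("honeypot listening on :2323"));
```

Since nothing legitimate ever talks to it, *any* traffic it sees is suspicious by definition – which also answers the “how does it tell legitimate traffic from attacks” question.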
I’m just going to point out that biological organisms have been developing immune systems for probably a few hundred million years now and yet we still get colds. It’s not an exact analogy, but I think it helps make my point. Security in complex systems is hard and it’s a continual battle between the attacker and the defender.
Complexity is a big part of it. When I do a seemingly simple thing like scroll a page on this forum, how many layers of code are there between me and the bare metal? I have no idea, but each layer introduces more possibilities for error or security holes. When I write code for a website, I’m depending on a large number of unknown others who’ve written code for browsers, parsers, frameworks, libraries, servers, operating systems, firmware, and God knows what else, to have gotten everything right and secure; and let me tell you, I’ve seen a lot of source over the years and there’s a lot of bad code out there. I’d even say that it’s all bad except of course that I’ve written some of it and that stuff is perfect.
(I wish I could actually claim that.) I think, for example, that at least half of the web developers out there don’t understand the proper usage of IDs versus classes and seem to think that they can be used interchangeably.
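For anyone wondering what I mean: an ID is supposed to identify exactly one element on the page, while a class marks any number of them. A quick sketch of the distinction from the scripting side (the names here are made up):

```typescript
// An id must be unique on the page, so the DOM API hands back one element (or null).
const banner = document.getElementById("site-banner");

// A class is meant to be shared, so you get back a whole collection.
const cards = document.querySelectorAll<HTMLElement>(".product-card");
cards.forEach((card) => card.classList.add("on-sale"));

// Treating them interchangeably is how you end up with five elements
// sharing one id, and code that silently acts on only the first of them.
```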
Open source seems to be a big circle jerk of creating as many different frameworks and parsers and templates and CMSes as possible, putting as much distance as possible between coders and the actual logic and code. All of which introduces more complexity, more layers, and more room for error. I understand the concept of reusability and it makes sense, but like anything it has its drawbacks.
Others have mentioned the rapid development cycles. I’ve had employers annoyed at me because I have an old-school attitude of doing it right the first time and making it flexible and maintainable, rather than shoving shit through the pipeline as fast as possible.
I asked about serious breaches of security.
The example you’ve given is an alert about a possible vulnerability in older versions of Android. It was discovered by software engineers and was made public so that users could take heed and upgrade accordingly, and other software engineers can understand the defect and help make this and other systems yet more secure in future.
If this is an example of bad software engineering, I wish more industries were bad.
The downside of “Faster, cheaper, more flexible!”.
Way back when, the procedural languages (the only kind that existed) required the coder to know the physical attributes of each file. Even the “database” had specific fields on specific segments. Sequential files could support multiple record types, but the coder needed to know how to figure out which kind of record each one was.
Now, file access is more or less magic; the software can make it look like everything can be anything, and a single line of code (really a macro or subroutine under the hood) does what two pages of COBOL used to do.
But the coder is as far removed from the actual data stores and flows as that cute one-liner is from the two pages of procedural code.
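To put that in concrete terms, here’s roughly what the modern equivalent of those two pages looks like (a sketch; `customers.json` is a made-up file):

```typescript
import * as fs from "node:fs";

// One line of "magic": the runtime handles the buffering, decoding, and
// record structure that a COBOL coder once had to spell out by hand.
const customers = JSON.parse(fs.readFileSync("customers.json", "utf8"));
```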
Just as we all threw out the incandescent bulbs and replaced them with CFLs, and now get to throw out the CFLs and install LEDs, we will end up replacing all those massive procedural programs with variants of C and Basic and Java and PERL and gawd-knows-what-else, only to replace those with a new set of languages based on a new O/S - one which FORCES the coders to know what the bigger picture is.
There was an old graph - basically a parabola in the positive quadrant, starting at (0,0), where:
X - point in the development cycle
Y - cost to fix a bug
The later a bug is found, the more it costs to fix.
And the coders found a whole bunch of “System Design” problems - bugs from the earliest phase, discovered way down the curve. I was once given specs for a financial system. It called for a massive “Breach of Fiduciary Duty”. Another one was COBOL calling PL/I modules - it doesn’t work. COBOL and PL/I were written by different people without regard for each other, and PL/I used memory in an entirely different way than COBOL. When the PL/I routine finished, the COBOL program would crash: the call scrambled the COBOL program’s Working Storage AND Linkage sections.
I kinda wonder if that doesn’t happen with some of these “Newer! Faster! Better!” languages.
A classic example: Dungeons of Daggorath. A, I mean, THE first-person, uh, stabber in EIGHT FUCKING K! A glorious example of elegant code.
A classic. I linked my boss.
I think there’s a fundamental misunderstanding of what software is, and how bugs and vulnerabilities can manifest.
Many in this thread seem to be assuming that software “just would be” secure if only corners weren’t cut, or programmers didn’t make mistakes.
Instead it’s more like if developers only implemented the explicit requirements of their systems they would be riddled with bugs and backdoors. Developing robust software often involves anticipating situations and inputs very different from the typical use case, and thinking of all the ways someone might try to circumvent a system.
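A concrete (hypothetical) example of what “anticipating inputs very different from the typical use case” looks like: a file-download handler that works perfectly for every normal filename, and is also a classic backdoor unless somebody thought to check for path traversal. A sketch:

```typescript
import * as path from "node:path";

// Hypothetical download handler. The explicit requirement is just
// "return the requested file" -- and a naive version happily serves
// ../../etc/passwd. The robustness is in the check nobody asked for.
// (Note: this sketch still doesn't handle symlinks; there's always
// another case to anticipate.)
function resolveDownload(baseDir: string, requested: string): string {
  const resolved = path.resolve(baseDir, requested);
  if (!resolved.startsWith(path.resolve(baseDir) + path.sep)) {
    throw new Error(`rejected path traversal attempt: ${requested}`);
  }
  return resolved;
}
```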
It’s not easy. And it’s a journey: we’ll know more about making apps secure tomorrow than we do today.
Nevertheless, the fact of the matter is it’s basically a success story. It would be interesting to calculate how many separate programs ran flawlessly, and securely exchanged data, to get you through your day today.
Even in terms of bugs, testing is kind of tough because you never really know when to stop. I know the various ways of doing test generation, and how to track the bugs found, but in hardware testing (for manufacturing defects) we have very good fault coverage metrics that have let us go from chips with thousands of transistors to billions with very low escape rates. Design verification testing is more like software testing, but we have a more constrained environment and can throw millions of random tests at simulated chips. And we still need a few revisions of processors, though most ASICs work the first time.
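The closest software analogue to throwing millions of random vectors at a simulated chip is randomized testing against an invariant. A toy sketch (the function under test is made up):

```typescript
// Randomized testing sketch: hammer a function with random inputs and
// check an invariant, instead of hand-writing a fixed set of cases.
function sortedCopy(xs: number[]): number[] {
  return [...xs].sort((a, b) => a - b); // the "device under test"
}

for (let trial = 0; trial < 1_000_000; trial++) {
  const input = Array.from({ length: Math.floor(Math.random() * 20) },
                           () => Math.floor(Math.random() * 100));
  const output = sortedCopy(input);
  // Invariant: every element is <= the one after it.
  for (let i = 1; i < output.length; i++) {
    if (output[i - 1] > output[i]) {
      throw new Error(`unsorted output for input [${input}]`);
    }
  }
}
console.log("1,000,000 random trials passed -- but that's still not a coverage metric");
```

A million passing trials still tells you nothing like a fault coverage number, which is exactly the “when do you stop” problem.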
Where do the bugs happen? Software isn’t written from scratch to run directly on the metal anymore. Instead it’s built on libraries provided by the operating system vendor and others.
If I link to Apple’s provided library, and that library has a flaw I had no way of catching, is it my fault, or Apple’s, or the upstream provider’s? I suppose the fault is ultimately mine for linking to a library that I didn’t write myself, but if no one could trust any library, we’d still be writing assembly language (or even machine language directly!) on dumb processors.
All true, all lamentable, but all necessary.
This is true of modern civilization in general. When you have a bowl of cereal in the morning, the fact that you aren’t poisoning yourself depends on an unknown number of strangers, of unknown competence, from the dairy farms and grain fields, to the processing plants and supermarkets (and these days every step of that also depends on code and all of the people involved with that). Such is modern life.
If applications and websites were ever finished products that worked perfectly from day one until their retirement, we’d have an unemployment crisis (not that we make it like that on purpose for job security :D).
I’ve been in the software development business for almost 40 years. Something I learned way back when is still as true today as ever: The only bug-free software is software that doesn’t do anything.
Testing? We don’t need testing. Just program it perfectly the first time / That’s all useless overhead / We need to get this out the door now / Our users will test it / We didn’t budget for any of that / We didn’t plan ahead enough for that / There isn’t a good return on paying for testing / insert any other idiotic managerial statement here.
I’m a software tester. I would pin a good chunk of the blame for things like this on management, not developers. I’ve seen so many problems like these that could have been prevented with adequate testing, but too many places just want to get things out the door so they can get $ and don’t give a crap what happens after that.