"software developers"

That’s not an excuse for buggy software. It’s actually the other way around – that kind of functional layering on services and libraries is the key to reliable software, because it provides much greater opportunities for thorough testing of common underlying functionality. Big problems arise, however, when those underlying frameworks and OS services are themselves buggy. This has been Microsoft’s legacy to the industry, making it impossible for even the best application software in the world to be reliable on their platforms. It was almost comically bad in the consumer line prior to Windows XP, when Windows was essentially a GUI hack sitting on top of DOS.

True, but every development organization has institutional norms about the kinds of bugs that will be tolerated. It’s probably fair to say that no organization has ever released major software without some known bugs, unless their testing process is so incredibly awful that it pretty much guarantees that they’ve released software with massive quantities of unknown bugs! But an organization with a culture of software quality will categorize problem types and will never release software with known problems above a set severity level, typically the kind of threshold that a user would regard as a “minor” problem with an easy workaround. Others are happy to release software where important functions just don’t work, or the whole thing just randomly crashes.

You are always going to have unknown bugs, since the developers and testers share some unwritten assumptions on the environment and use of the software which are sure to be incomplete.
35 years ago people wrote manufacturing tests for hardware by hand, and were sure they had exercised it every possible way. When we got fault simulators, we found that those hand-written tests had caught only 80–90% of the defects in the model – and hardware defect models are far simpler than the kinds of defects you’re going to find in software. So I don’t have a lot of confidence that software testing is going to find everything, no matter how diligently it’s done. Not to mention that your environment today isn’t that of five years from now.
I heard from Dell that the reason software is preloaded on PCs is not for user convenience, but because there are so many possible configurations of software and hardware that the only way to system test them is to load and run. And find some small percent of bad configurations before they get shipped.

If only Microsoft actually charged consumers money for their operating systems!
Or inserted **advert generators** into security updates, no doubt to satisfy the average Windows user’s insatiable hunger for more and more advertising. That sort of person is certainly willing to pay much more for better software. Then they could finally pay developers to make slicker stuff.
Prolly not safer though. This is Microsoft.

Holy crap, man, must you be so vulgar?

Hey - I mentioned PL/I - what more do I have to do?

I’ve seen a large multi-national corp declare a product they knew was buggy as hell “Installed in Production” - now pay us the balance!

That was their first “mega” product - the next got them sued into oblivion.

The multi-national parent just folded the operation.

Large companies did not do well when buying IT contract houses.
The contract business is “immediate need last week”*. Mega corps have 15 layers of management to get anything approved.

* Contractors were expensive - it was when the project was six months behind schedule, over budget, and a VP’s ass was turning green that money was suddenly available for hired guns.

I’ve done some contract work recently, out of necessity, but it’s not something I’d choose to do. They expect you to understand their systems and code from day one, and to fix their problems yesterday. I ended up feeling stupid and unprofessional until I realized that they had totally unrealistic expectations.

In my experience (corporate IT), the biggest threat is that management gets consumed with meeting business-set deadlines and keeping their project flow going smoothly, and as a result rushes testing, understaffs projects, or commits a whole multitude of other sins just so that the business doesn’t scream when it’s not on time.

Apparently they’re more willing to have it on time and fix some bugs, than to get it right, but late. Kind of perplexing to me, but I’m not making those decisions.

Apple took the lead on that; remember the “genie” window minimize/maximize effect in early versions of OS X?

you’ve never had a post or page load here sit for a while, then eventually give you a VBulletin database error?

smartphones are a poor target for hackers. except for photos, they don’t store much valuable information on the device. most breaches and hacking are done for money now, so attackers go after stuff like credit card numbers and other PII stored by corporations. and even the stuff that home PC users have to worry about (botnets, surreptitious Bitcoin mining, ransomware) isn’t worth targeting, because it relies on not being detected by the user, and a botnet client or Bitcoin miner would immediately start gobbling up your data and battery.

although it’s worth noting that a common source of security exploits on smartphones is users “rooting” or “jailbreaking” their own devices.

I don’t recall seeing that, but if I did, that would be proof that programmers suck?

That of the thousands of times I use this software to communicate with people around the world, a rare error message that is unlikely to be a bug per se (e.g., the server could not process the request at that moment, and the only real issue is that the error message is unclear) is sufficient grounds for a pitting?

But not for a copy of “The Mythical Man Month” obviously.