Why do Bill Gates and Microsoft get away with writing inferior software? Why do they have such a monopoly if their software is so buggy?
And if Microsoft is a monopoly, should the Government have split the company up? Or is this another case of judges being bought off?
Because their competitors’ software is just as buggy.
Because Microsoft’s software is used by so many more people than their competitors’, buyers hold it to a higher standard. A bug which would be “par for the course” in a non-Microsoft product causes a public outcry if it’s in a Microsoft product. Furthermore, because Microsoft software is so ubiquitous, far more people are exercising it, so more bugs get found than would ever surface in a less widely used product.
Try using something written by the competition, and you’ll see what I mean. Linux, for example, is not the “be all and end all” of operating systems that its rah-rah worshippers would like you to believe. Linux is closer to utter chaos. You need to be a programmer just to use it, and no two Linux installations will have remotely the same tool set. The Macintosh O/S, for all its groundbreaking firsts, still has nowhere near the peripheral hardware flexibility that Windows 2.1 did in 1989 – and didn’t have virtual memory capability until two years after Windows had it.
True, but now Windows is making a bug that has been used by computer criminals a “feature” in XP. Still, as long as people write software, there will always be bugs.
People seem to have odd ideas about designing operating systems and other sophisticated software. They seem to think it’s easy, and that bugs happen because programmers don’t care.
As a software engineer (by function if not title), let me say that I wish that were the case.
Y’see, designing software is really hard, and the discipline of software engineering is very young. We’ve only been doing it for about 50 years now, and even though we’ve come a long way, we’re nowhere near where the older engineering disciplines are.
Now you may ask, “What makes software engineering so hard? Engineers design bridges, cars, and computer hardware, and they all work.” That’s true. But all other forms of engineering have something going for them that we don’t: physics.
You only need a few laws of physics to build a really good bridge. Cars have moving parts, so they require more calculations, but they still have their basis in physical reality, and therefore must obey the laws of physics. Same thing with electrical devices.
People who design software have nothing like physics to fall back on. The situation is analogous to having a city where the designer of each building, road, or car gets to define their own laws of physics. It ain’t pretty.
Bugs aren’t always easy to find. Sometimes they only show up when 500 different things have the exact right value out to 500 decimal places. Sometimes they only show up in the interactions between two different pieces of software.
Some bugs can be localized, and fixed (e.g., most infinite loops). Other bugs are emergent properties, the sort of things you only get with sufficient complexity.
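To make that concrete with a toy sketch (Python, with a made-up function name): here’s code that is correct for the inputs you’d naturally test and silently wrong for a few “magic” values, because binary floating point can’t represent some decimal numbers exactly.

```python
def balance_after_deposits(deposits):
    """Sum a list of deposits -- looks obviously correct."""
    total = 0.0
    for d in deposits:
        total += d
    return total

# Passes the obvious tests...
print(balance_after_deposits([1.0, 2.0]) == 3.0)   # True
# ...but fails when the values happen to have no exact binary
# representation, like 0.1 and 0.2:
print(balance_after_deposits([0.1, 0.2]) == 0.3)   # False
```

A tester who never happens to feed it 0.1 and 0.2 will ship it believing it works.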
Add to this the fact that most software is by necessity designed by groups of people, each with their own conceptions and style, and you can see how much worse it can get.
Add to that the fact that each individual software engineer can only see one part of the overall design. That’s bad, but it’s something we have to do.
The physical engineers have simulators that they can use to find exactly when their designs will fail. Not only do we not have such simulators, but some of the ones we need the most are honest-to-God impossible to make. That’s right, impossible. With the most rigorous and formal definition of correctness and specification of your program, there is no other program that can always decide whether your program is correct.
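That impossibility claim is Turing’s halting problem. Here’s a minimal Python sketch of the diagonal argument (the names are mine, purely illustrative): hand me any claimed “does this program halt?” decider, and I can build a program it must get wrong.

```python
def make_spoiler(halts):
    """Given a claimed decider halts(f) -> bool, build a function
    that does the opposite of whatever the decider predicts."""
    def spoiler():
        if halts(spoiler):
            while True:   # decider said "halts" -> loop forever
                pass
        return "halted"   # decider said "loops forever" -> halt
    return spoiler

# Take one concrete (and therefore wrong) decider, which always
# answers "never halts":
naive_halts = lambda f: False
s = make_spoiler(naive_halts)
print(s())  # prints "halted" -- so naive_halts was wrong about s
```

Whatever decider you plug in, its spoiler defeats it, which is why a universal correctness checker can’t exist.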
Speaking of the formal specifications and concepts of correctness, did I mention that they’re very difficult to understand? We’re talking graduate level math here. That doesn’t help either.
Add to that the fact that programs have the exact same representation as data in memory (“memory” here includes any storage medium), and the fact that some programs will accidentally overwrite parts of other programs, and you can see how things can fail over time.
Another problem is that programming is, for all intents and purposes, an art form. I don’t mean that it’s beautiful; I mean that it requires a lot of creativity to do well. No one can be extremely creative all day every day, and software engineers are no exception. We make mistakes, we don’t do things in the best way possible, and we don’t always catch each other’s mistakes, because we might not even be able to recognize them.
With all that, is it any surprise that software has bugs?
Oh, and that was post number 5[sup]4[/sup] for me. Now that’s a fortuitous coincidence.
I have a few things to add to ultrafilter’s points.
First, computer hardware, which is what an operating system must accommodate, changes quickly. An operating system must be able to provide functionality for hardware that hasn’t even been thought of yet.
Second, to actually market software, it must not only be well designed from a technical standpoint, but must also appeal to a wide variety of users. Windows is used not only by Microsoft Certified Systems Engineers, but also by their grandmothers. Microsoft writes the only software that really seems to appeal to users across the entire spectrum of computer users.
Third, to fit in the features that are required to sell the software, the entire OS becomes mind-bogglingly complex, to the point that the code cannot be understood by any one person. However, there has to be some way of making sure that every piece is designed to work with every other piece, in every circumstance.
Sure, Linux is probably a more reliable operating system. Keep in mind, though, that even at a much lower price than Windows, not nearly as many people buy it. Also, Linux traces its heritage back to the 1960’s, and has been designed to comply with various published standards that computer scientists have agreed upon. Windows is designed to comply with what the market wants, a harder, but more financially rewarding, task.
Yes, I have issues with the Microsoft software that I use. Certain choices they have made seem stupid to me (such as the backspace key doing something different if you have clicked in the reply window or not). Their friggin’ C++ compiler seems to just like screwing with me. Generally, I relish any opportunity to use a Unix-derived system, but always come back to Windows.
Ok, I have to chime in and say Amen! Some bugs can be maddeningly hard to find.
I once had a customer who was reporting a frequently recurring bug that we couldn’t duplicate. This went on for several weeks. At the customer site, the bug was causing a program halt once a week or so. In our shop, we couldn’t make it fail for love or money. Since I was the main author of the code, it was up to me to find it and fix it.
When we finally tracked it down, it turned out that the magic combination was that the bug only happened (and I’m dead serious) on Wednesdays, and only if the workstation ID initiating the job started with the letter “E”. (Any IBM midrange geeks want to know the ugly details, let me know.)
Normal troubleshooting just doesn’t start you out looking at that particular combination of variables!
To waterj2’s and ultrafilter’s comments, I would add a couple of points.
First, the market doesn’t demand perfect anything. Refrigerators break down, cars have recalls, pens leak in your pocket. At a time when the Macintosh was demonstrably more stable and easier to use than Windows, people paid less for Windows because they thought they were getting almost as good a product at a better price. In general, people are very willing to sacrifice the last 5% of quality for a price savings, so there’s little pressure on software companies to make bug-free software.
Second, there’s pressure on the developers to get the product to a point where it’s shippable. Tell a software executive that you want six more months to iron out the last few bugs, and he’ll tell you to ship it now and release a patch later.
Lastly, an operating system like Windows is literally millions of man-hours of effort by thousands of people, all of them working together towards a goal that’s not perfectly defined, either as a product or a process. Just like the Voyager spacecraft was 16 inches off course as it passed Jupiter because of an accumulation of infinitesimal errors and vagaries in the calculations of NASA scientists, an operating system is a massive undertaking that reveals tiny but chaotic results that don’t prevent the OS from working, but do crash Internet Explorer now and then.
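You can watch that kind of error accumulation happen inside software itself. Rounding error in floating-point arithmetic piles up the same way; here’s a quick Python demonstration:

```python
# Each addition of 0.1 rounds a tiny bit, because 0.1 has no exact
# binary representation; a million additions let the error pile up.
total = 0.0
for _ in range(1_000_000):
    total += 0.1

print(total)               # close to, but not exactly, 100000.0
print(total == 100000.0)   # False
```

Each individual rounding error is far too small to notice; a million of them together produce a visibly wrong answer.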
How did Microsoft get to be a monopoly? Because Bill Gates and his cohorts realized that, as a business, software is about selling a lot of it, not making it perfect, at a time when other businesses were celebrating the fact that “the geeks” were in charge. Engineering perfection is rarely good for business; price point is. Other companies were concentrating on the ascendancy of the programmer; Microsoft was concentrating on becoming the de facto standard in key markets.
They also figured out that they could improve their product at their leisure from a position of strength in the market. Windows 2000 is easily as good as Macintosh anything, and as stable for general use as any Unix. That it came years later than those other products is irrelevant; the market’s already warmed up for it.
Another issue with PCs is that the software and hardware are designed by different companies. Sun and Apple are more vertically integrated. They make both the hardware and software and can control the configurations of machines.
Microsoft spends an incredible amount of money testing their software on various hardware platforms, but can never test it on all the possible combinations. That said, I still hate Microsoft. There is no excuse for their poor human factors design. Also, dealing with them is like scuba diving in the Bahamas.
I take it by “human factors” you mean user interface, right? Assuming that’s what you mean, I’ll halfway agree with you–it could be a lot better, but it could easily be worse.
Since when is scuba diving in the Bahamas a bad thing?
Keep in mind that Microsoft has been providing people with software they like for many years now. Windows is a pretty stable platform for the vast majority of PC users. The most stressful thing most people do to their systems is play games. The only people I hear piss and moan about Microsoft products being crappy are computer nerds and Mac users. And oddly enough most computer nerds I know use Microsoft products.
Sorry, bad joke. There were two recent shark attacks in the Bahamas.
The newest of ALL computer disciplines is User Interface Design. It’s a poorly understood area of development for many software engineers, because they 1) are not taught it in school, and 2) don’t expect the same simplicity and graceful error handling and recovery that mom & pop at home do. That said, Windows has gradually improved its user interface based on intensive research. The goals are an intuitive interface, graceful error recovery, customizable interfaces, and contextual learning. These are sophisticated ideas, and I’d put MS’s work in this area above just about any other company’s. It is very difficult to design an interface that will satisfy all users, but in the modern Windows desktop, there’s very little that can’t be customized to suit your personal tastes.
An interesting book on the subject is “About Face: the Essentials of User Interface Design” by Alan Cooper (1995). Among other books, it’s required reading for any software developer at my company who will be designing for clients. Although it is a bit dated, the concepts are still quite relevant.
I also heartily recommend the 3-book series by Edward Tufte: “Envisioning Information”, “The Visual Display of Quantitative Information”, and “Visual Explanations”. They sound like the most boring books in existence. On the contrary, they are exciting and absorbing discussions of the ways information can be presented, with wonderful anecdotes and thoughtful explanations.
For those who are interested in UI issues, I suggest looking for books by Dr. Ben Shneiderman, a professor at the University of Maryland. Dr. Shneiderman has been credited as the inventor of the hyperlink, and is quite active in UI research–in fact, he heads a lab where they research this stuff.
I attended a talk by Dr. Shneiderman recently, and I saw some of the stuff he had researched in the recent past. Except for a couple pieces of software, the UIs weren’t all that different from Microsoft’s. So even if Microsoft’s UIs are horrible, they’re not that bad compared to what else is out there.
The couple pieces of software that were significantly different were digital photo albums. If I had seen these in a movie, I would’ve denied that anything like them existed yet. So go check out that link above.
On slashdot the OP would be moderated as flamebait or troll, but I’m going to presume you meant your question as a serious inquiry. For the OP to hold, we must first accept that Microsoft puts out inferior software. In many cases, this is true.
The Windows 9x kernel is outdated crap. I am not going to rehash the whole “why Windows sucks” line of thought unless someone asks me to do so specifically, simply because it’s already been done. A lot. Check out any number of websites devoted to the subject.

If you are unwilling to do that, and you use Windows, ask yourself how often it crashes. For many people it crashes daily, especially under heavy use. For others it crashes weekly. Some people reboot to prevent slowdown from memory leakage and to ward off crashes. Guess what? There is no need to do that. My Linux box never crashes. Period. Full stop. Not once, not ever. I only reboot to update to the newest kernel.

One more quick example: Windows rot. This is a strange phenomenon in which Windows gradually deteriorates after several months of use (it can range from 3 to 12, from what I have heard and experienced – I used Windows once too), especially under heavy use, and especially after installing (and/or uninstalling) many programs. You should never have to deal with this. You won’t in Linux.
I disagree, and I think objective evidence is on my side. *nix/BSD variants are not “just as buggy.” Let’s take the example of Microsoft IIS (Internet Information Server) vs. Apache (the most popular open source server). Perhaps you recall Code Red and its progeny? Yeah, they exploited a buffer overrun vulnerability in Microsoft’s IIS. I do not blame Microsoft for that. That would be verging on ridiculous – all software can have bugs, sometimes very dangerous exploitable ones, even in sacred *nix-land (cough sendmail cough). However, the number of bugs discovered in IIS vs. the number in Apache is truly staggering. (Check out the Bugtraq Stats – 54 IIS vulnerabilities, 0 Apache. That is only counting 1999 and 2000; the .ida buffer overrun does not look to be taken into account, nor do [obviously] exploits before 1999.)
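For readers wondering what a “buffer overrun” actually is: the program copies attacker-supplied input into a fixed-size buffer without checking the length, so the excess spills into whatever sits next to it in memory. Here’s a toy Python simulation (the real Code Red overrun clobbered native memory in C code; the names and layout here are invented purely for illustration):

```python
# Simulate a C-style stack frame as flat memory: an 8-byte input
# buffer sitting directly in front of a saved "return address".
BUF_SIZE = 8
frame = bytearray(b"\x00" * BUF_SIZE + b"RETADDR!")

def unsafe_copy(data):
    # The bug: copy however many bytes arrive, with no check
    # against BUF_SIZE.
    for i, b in enumerate(data):
        frame[i] = b

unsafe_copy(b"hello")                  # fine: fits in the buffer
unsafe_copy(b"A" * 8 + b"EVILCODE")    # overrun: spills past the buffer
print(bytes(frame[BUF_SIZE:]))         # the "return address" now holds
                                       # attacker-chosen bytes
```

In the real attack, the overwritten return address redirected the CPU into the attacker’s payload; the fix is simply to refuse input longer than the buffer.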
Sorry, but no. Apache dominates the webserver market (59% market share for Apache, 26% for IIS, according to Netcraft), but many, many more bugs are found and exploited in IIS than in Apache. Quite frankly, I would not do business with any company that would store information I gave them on a server running IIS. I wouldn’t want this to happen to me.
I could take other examples, such as the numerous flaws in Outlook Express, or flaws in Internet Explorer in conjunction with ActiveX. ActiveX causes a lot of problems. You can find many websites that go on and on in explicit detail about bugs in Microsoft software. I don’t really intend to rehash that either. I do recommend the book Hacking Exposed, because it will give you a good perspective on the whole issue of security flaws in general by examining specific flaws in software ranging from Microsoft to Unix and touching on many things in between.
I think this comes from a…a position of nescience. Linux is a very elegant operating system. The Linux kernel is solid as a rock, and not nearly as kludged as, oh, say the Windows 9x kernel. I can go on and on about the virtues of the Linux kernel, but I suppose I do not need to evangelize here. The GNU tools that come in any standard distribution are wonderful examples of compactness and modularity, and can easily be strung together on the command line or in scripts to do things that would take much longer and be much harder to do in the Windows GUI. But I digress somewhat.
I personally think that it is a given that Microsoft produces some poor software, software that is inferior to what their competitors put out. The obvious reason that they can do this is that they have a monopoly. The more interesting question is how they got to that point, and that is what I think the OP may be hinting at with “Why do they have such a monopoly if their software is so buggy?”
As if this post were not long enough already, I shall proceed to recount SDP’s Short History of Microsoft, or How the Beast of Redmond became the Terror of the Northwest that We Know it as Today™.
Microsoft got to where they are today through an odd combination of genius, luck, skill, and business savvy. Most of the following information up until about 1984 comes from the excellent book Fire in the Valley, which describes the early years of the computer revolution. If anyone would like page numbered cites for any of this, please let me know and I will provide them.
First, the genius part. Bill Gates and Paul Allen, the founders of Microsoft, were computer geniuses. When Gates was 13, he, Allen, and some friends of theirs from school would go after school to a company they called “C Cubed” (Computer Center Corporation). C Cubed had a DEC TOPS-10 timesharing computer, and they did not have to pay DEC for it as long as they could find bugs in it. They found bug after bug after bug. Eventually DEC gave up and acknowledged that they were always going to find more bugs. Gates then became a hacker – he knocked out CDC’s Cybernet, which “they claimed was entirely reliable at all times.” If you knew Bill Gates at this age, it would be like knowing a Major League Baseball player in Little League. A player like, oh, say, Babe Ruth.
In 1975, the Altair microcomputer was introduced in a Popular Electronics cover story. At the time, Gates was at Harvard and Allen was working in Boston. In high school they had written a fairly sophisticated program called “Traf-O-Data” that…basically went nowhere. No one wanted it. They were more successful with their next project. They wrote a BASIC (a high level programming language) for the Altair. They did it in a very gutsy manner – they called MITS’ president Ed Roberts and told him they already had written it…when in fact they didn’t even have an Altair, much less a BASIC for it! Undaunted, within 6 weeks they wrote it on a computer emulating the Altair. That was the beginning of Microsoft.
Throughout the 70s they continued to improve their BASIC and port it to various other platforms. It became a standard language. They also brought languages like Fortran and Cobol to the microcomputer. They designed a special chip (the SoftCard) for the Apple II that allowed their software to be run on that computer. They were successful – in 1980 they had $8 million in annual sales and employed 32 people. However, the whole microcomputer industry, though it had grown by leaps and bounds with the introduction of the Apple II, was still not a huge industry. People wondered why IBM, which was the computer company (in the 60s they controlled 2/3 of the computer market) had not yet entered the fray. In 1980 they did, and so made microcomputers mainstream.
This is where the luck factor started to come in for Microsoft. IBM wanted MS BASIC, and approached Gates about it. This alone would have been big. Microsoft contracted with IBM to port their BASIC to what would become known as the IBM PC. It was to use Intel’s new chip, which was vastly superior to anything that had been seen previously (16 bit vs. 8 bit). Most people still use a chip derived from this one today (check out The Future of the x86 for details about its history and future). At this time, Microsoft did not have an operating system. Gary Kildall had written CP/M, which was the de facto standard OS at the time. IBM approached Kildall about using CP/M, but no deal was made. According to Gates “Kildall was out flying” the day IBM came to visit. Kildall says that that ain’t true, but in any case, IBM came back to Gates and made a deal with him for an operating system. Microsoft did not happen to have one at this time, so they simply bought one: they bought SCP-DOS (also known as Q-DOS – quick and dirty operating system) from Seattle Computer Products. This operating system was later revealed to have characteristics that were disturbingly similar/basically the same as CP/M.
IBM then made a fateful decision – the Purple Book. This made available all the specifications for the IBM PC and its BIOS. Up until this time, most computer companies used proprietary hardware and discouraged others from making hardware that integrated with their own. IBM was especially known for this. This was a complete about-face. Quite simply, it created a standard. Combined with IBM’s domination of the industry, an Open Source approach to their hardware made it so that anyone could develop for it. And Billy G.'s operating system was part of the IBM PC package.
The IBM PC spread like wildfire, and became a standard, and the DOS that Microsoft had bought and made their own went with it. MS-DOS gained a huge user base, and programs were written for it that people found essential, thus tying them into Microsoft even further. This may have all hinged on Gary Kildall going flying when IBM came to ask about CP/M. Once Microsoft became entrenched, it was hard to get rid of them.
Other companies “cloned” the IBM PC, and MS-DOS usually went with it. The skill element began to come into play – MS-DOS was a decent OS. It did what people needed it to do.
Business savvy was very important too. “In 1986, the company went public, and Gates became a 31-year old billionaire. The next year, the first version of Windows was introduced, and by 1993 a million copies per month were being sold” (Brief History of Microsoft). Windows improved throughout the 80s and 90s, until it got to where it is today. There has been something of a snowball effect: people use PCs, Windows ships standard on most PCs. Ergo, people use Windows. People want Windows games and applications, and so people write them. People become more and more strongly bound to Windows. Microsoft aggressively fights competitor ingression – e.g., DR-DOS (Mackido History has good information on this, and there is some other good stuff on that page too) and Netscape (check out the earlier decision against MS that I linked to for information on Netscape, or just look around on the web).
Microsoft mounts an aggressive campaign of FUD (Fear, Uncertainty, and Doubt) against its competitors. For example, Linux is “cancer” and “Pacman-like.” (source) They market themselves very well. They make sure their OS gets on everyone’s PC that they can. With Windows XP, they try to bind you even more tightly to them by completely integrating many features and with aggressive copy protection schemes. We’ll see how it works.
Microsoft has entrenched themselves, and developers, users, and PC companies only further that entrenchment. It’s all been a giant snowball since about 1980. That is why they can put out buggy software – everyone uses it, many people always have used it, most people don’t realize that there are better options…but developers don’t develop for those other options as much as they develop for MS…because users don’t use those other options…but sometimes they don’t use other options because of lack of software…etc., etc., ad infinitum. Microsoft is entrenched solidly and will be here to stay for the foreseeable future in the desktop PC market.
Whew. That was damn long. I will not comment on part 2 of the OP (i.e., the post that came after the OP), other than to say that the idea that judges are being bought off is ludicrous and that an accusation (or implication) of that sort should be backed up with substantial concrete evidence.
Actually, I think Microsoft is generally pretty good at interface design, all things considered. There are a number of stupidities, such as the action of backspace in IE, and the way multiple rows of tabs work, but considering how much Microsoft has to set standards, how ubiquitous their software is, and how many different ability groups use their products, I’d rate Microsoft at about a B to a B+.
For some really bad examples of interface design, including some examples from Microsoft, see the Interface Hall of Shame.
Also, I’d point out that in Unix, I could never predict which side of a window a scrollbar would be on, even for similar programs (e.g. emacs and xemacs).
The basic look and feel is fine, and is improving. But there are a number of things that, in my understanding of UI design (admittedly a few years old), are just plain wrong. For example:
There are lots of keystrokes that will do bizarre and unexpected things to your document if you type them accidentally. It is just too easy to hit ctrl when you mean shift. Most of those functions would be much better off in menus.
The new method of putting only recently used commands in menus violates basic principles of UI design which say that over time you should develop muscle memory of where menu items are. You can’t if the menu changes every time.
Rarely done actions, like changing the location of the command bars, can be triggered by just dragging the bars around. It is disconcerting to be off by just a pixel and suddenly have your command bar floating in space.
I like Alan Kay’s motto: “Simple things should be simple, complex things should be possible.”
DanBlather: Those are some very specific problems with Microsoft UIs, and all of them should be fixed. However, those are the most glaring examples I can think of. It could be significantly worse.
One of my problems with MS is that they never figured out, in the Windows 9x OS series, how to truly expand and enhance the resource tables (they just split them across three files – User, GDI, and Kernel). Also, there is little provision against program leaks driving resource use to 100% in one part. This limited resource handling, more or less a throwback to Win 3.1, causes more crashes on Windows 9x systems than anything else. They also decided to do away with backward compatibility with DOS/Win 3.x altogether in the new Windows XP, which is very annoying.