Why do GUIs become slow, choppy, or freeze when devices or network connections are slow?

As stated: why do GUIs become slow, choppy, or freeze when devices or network connections are slow?

Is this because of poor programming? What does the state of the file menu in Internet Explorer have to do with the speed of my network connection? Why does Roxio 6 freeze up when the CD in my drive is difficult to read? Is it that difficult for programmers to make the interface stand on its own two feet?

This is something that has been bugging me for years :smack: The whole ideology behind the spinning hourglass just peeves me off. If what I am currently doing in a program does not use up absolutely 100% of my computer’s potential, why am I limited to waiting on that one task?

Perhaps this is more of a ‘philosophy of programming’ question than a factual one, but I think it’s a good question nonetheless.

  1. If it’s a client / server application, the software program at your desk and a program on the server are constantly interacting with each other over the network. If the network gets hosed, it might hose the program as well. Client / server applications are usually business applications like you might use in a bank, a restaurant, etc.

  2. I don’t want to sound mean, but I have no idea what you are talking about with the I.E. File menu and connectivity speed. You might have to explain that in more detail.

  3. Well, the one program you are using isn’t the only thing that is using resources; there is no such thing as just one task. Lots of things are using resources. Even if you have Outlook minimized and aren’t using it, it is still using a lot of resources.

  4. Why is there crummy software, especially with regard to processing? The people who make software decisions are often more interested in new features that might sell the software than in boring old processing speed. The book Rapid Development has an interesting chapter on this: different teams were each given a different primary goal (processing speed, features, etc.), and every team met its primary goal. The lesson: developers do what they are told. Usually, however, developers are told vague things like, “Make it fast and good.” Some of your examples probably have more to do with error handling than processing: the program hits a problem it doesn’t know how to handle, and then it spins around in circles. The logic here is the same; new features are often more attractive than error handling. Keep in mind that consumers are somewhat guilty here too. As long as people are willing to buy crummy software, people are going to make it.

  5. For further reading, you might be interested in Alan Cooper’s The Inmates Are Running the Asylum. It discusses interface design and decision making at software companies.
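To make points 1 and 4 concrete: the usual culprit behind a frozen interface is a synchronous (“blocking”) call made on the same thread that runs the GUI’s event loop. Here’s a minimal sketch in Python; the `time.sleep` just stands in for a server or device that has stopped answering, and all the names are illustrative, not any real GUI toolkit’s API:

```python
import threading
import time

def slow_network_read():
    time.sleep(0.2)  # stands in for a stalled network or device read

# Blocking style: the event loop calls the read directly, so nothing
# else (repaints, menu clicks, the File menu) can run until it returns.
start = time.monotonic()
slow_network_read()
blocked = time.monotonic() - start   # the interface was frozen this long

# Non-blocking style: hand the read to a worker thread and get back
# to the event loop immediately.
worker = threading.Thread(target=slow_network_read)
start = time.monotonic()
worker.start()
responsive = time.monotonic() - start  # back almost instantly
worker.join()

print(blocked, responsive)
```

The second style is more work for the programmer (the worker has to report its result back safely), which is one reason so many programs take the lazy first route.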

The following contains generalizations so gross that they make first grade poopie jokes flee in disgust.

Often the way old hardware handled certain tasks causes today’s problems. Newer software and OSs often just tack new features or subroutines into older code designs, rather than doing important redesigns of the basic architecture.

Since you mention MS, I find that Win2K (which was a product of the NT ‘ground up’ redesign) has only a tiny fraction of the delays that the Win95/98/Me family did, even though Me is newer. In fact, these days, I do most of my home computing on an old Celeron 466 in my bedroom, rather than (e.g.) the 2+ GHz P4 in my home office, and I find that the difference in delays in surfing, word processing or spreadsheets isn’t too significant under Win2K. If a task genuinely maxes out my CPU on the C466, it’s simply impractical to do on that machine at all (e.g. video processing). However, for 95% of my work, the ancient CPU chugs along so nicely that I haven’t felt any need to bother swapping it with any of the newer, more powerful, and less used computers in my house.

Back in the Bad Old Days, NICs had very little buffer. They constantly checked the network for new bits, but they couldn’t remember too many bits before packaging them up and sending them to the CPU. Most of the bits they saw on a burdened network were not meant for them, but they couldn’t know that until they checked them out. Meanwhile the CPU had to keep checking on those addlepated NICs and getting bytes before the NIC overflowed and forgot them.

A lot of that checking is pretty useless now, when CPU and bus speeds are so high that a 10/100 Mbit NIC is like a normally slow-drawling Texas grandma with a stroke. A good new NIC with the newest drivers can often dramatically speed up a computer that gets bogged down when the network is slow - cheap fix. Read the reviews for various models under your OS.

The same type of problem arises with many peripherals. Nowadays, these peripherals often have cheap built-in chips that take care of a lot of these tasks on their own, but equally often, manufacturers cut corners, and expect today’s powerful CPUs to do work that the device once did (thereby saving several pennies per device). Winmodems (cheap common modems that plug into a slot and run on Windows only) are classic examples: they actually burden your CPU hundreds of times more than the older modems did, because they make the computer do most of the work. If you use a modem, get one that plugs into a serial port; they’re better in MANY ways.

In office settings, often the software, data, or workfiles are actually located on a central server, in which case you really can’t hope for much improvement unless you configure your computer to do things like cache them locally (which you should do anyway). I know people who still run Win95, and I’m amazed how often I have to reset their machine’s settings to what were once fancy “network server” values - the OS assumes an early Pentium-era CPU and limited memory. Sure it “knows” what you really have, but it can’t believe it. It won’t automatically make best use of all your memory unless you order it to.

Incidentally, you’d be surprised how little CPU it takes to run a small animated “waiting” icon. You want your CPU to poll your mouse and update the screen several times a second anyway, and the graphics chip does much of the work. It’s about as easy to display a slightly different pre-defined image at that point on the screen as it is to display the same image you did last time. If you’re finding this annoying, you’re probably also encountering the alternative: an unresponsive computer. Basically, the OS divides its time into slices that are, say, 1 msec long. That’s nothing for you, but it’s enough for thousands or millions of computer instructions on modern CPUs.
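To put a toy model on the slice idea, here’s a round-robin sketch in Python (nothing here is a real scheduler; the four-character spinner and the names are just illustrative). The cheap “waiting” animation advances one frame each slice, regardless of what real work happens in the other slices:

```python
import itertools

def spinner():
    # the cheap "waiting" icon: advance one frame per slice
    for frame in itertools.cycle("|/-\\"):
        yield frame

def busy_work(n):
    # some real job, done a little at a time
    total = 0
    for i in range(n):
        total += i
        yield total

# round-robin: each task gets one "slice" per turn
anim, work = spinner(), busy_work(5)
frames = []
for _ in range(5):
    frames.append(next(anim))   # the icon always gets its (tiny) slice
    next(work)                  # the real job gets its slice too

print(frames)  # → ['|', '/', '-', '\\', '|']
```

The animation cost per slice is trivial - one lookup and one draw - which is why the hourglass itself is never what’s slowing you down.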

When it’s the turn of the ‘household tasks’ slice, most of that time goes to waste (there just isn’t that much to do), but you have to return to that task often anyway. Having File Explorer open means you get back to the “important” jobs less often - it stands in line for its slice, and wastes it, if you’re not actively doing something. If you have an actual device access (to a HDD or CD), then it can really chew up time, because the physical movements of the mechanical drive take longer than 1 msec, so the task starts demanding extra slices. If you have a dialogue box open, it pretty much counts as an access to a really slow device: you. If you don’t actually need that dialogue box (etc.), close it. You’re only torturing yourself.
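A rough way to see why a mechanical access “chews up” slices - this is a sketch, with a simulated ~10 msec seek standing in for the real drive, and the 1 msec quantum taken from the description above:

```python
import time

QUANTUM = 0.001  # a 1 msec slice, as described above

def cheap_task():
    pass  # e.g. redraw a cursor: far less than one slice

def disk_seek():
    time.sleep(0.01)  # a ~10 msec mechanical seek, simulated

slices_used = {}
for task in (cheap_task, disk_seek):
    start = time.monotonic()
    task()
    elapsed = time.monotonic() - start
    slices_used[task.__name__] = max(1, round(elapsed / QUANTUM))

print(slices_used)  # the seek demands on the order of ten slices
```

One seek eats as many slices as thousands of cursor redraws, which is why a hard-to-read CD can make the whole interface feel like it’s wading through mud.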