And some software (Netscape’s browser) was changed from a bar graph that actually moved when data did, to one that moved as an animation independently of what was going on behind the scenes. So it no longer was useful as a progress indicator, but I guess unsophisticated users were a lot less confused. :rolleyes:
You forgot the case of adjusting the formatting info for two similar paragraphs, and you still haven’t explained why this is worth doing given the low cost of transmitting this data.
It’s been my experience that writing code and designing software are two very different skills.
There are a few different reasons for this:
-If the indicator is animated but hangs (as happens sometimes), the now-static display is an obvious cue that something is wrong. But if the indicator is static, there’s no obvious cue when it hangs (since the indicator is static even when it’s working). So to properly use a static indicator, you’d have to build in functionality to pop up another indicator when the system enters a non-good state, and ensure that the visual design of the progress dialog accounts for the bad-state condition even though that state will occur relatively seldom.
-A graphic display is an easy visual at-a-glance cue as to what’s going on, so you don’t have to stop and read the text to see what’s happening. For example, let’s say that I kick off a long file transfer and then wander off to my coworker’s desk to waste time while waiting for the transfer to finish. It’s easy for me to glance across the room and see whether the animation is still moving, but if it was a text display I’d have to walk back to my desk in order to read the text.
-Use of graphics here is another opportunity to reinforce the visual identity of the product.
-Designers think it looks spiffy. Heck, there was one defragmentation program that included an animated progress indicator, and the online help freely admitted that it was to give you something to watch while the process churned.
Not necessarily. Progress indicators are common code, so there’s not much overhead once they’re implemented the first time.
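The point about progress indicators being "common code" can be sketched concretely: once a reusable indicator exists, each long-running task only has to report how far along it is. This is a minimal hypothetical text-mode sketch, not any particular toolkit's API:

```python
import sys

class ProgressBar:
    """Minimal reusable progress indicator (hypothetical sketch)."""
    def __init__(self, total, width=40):
        self.total = total
        self.width = width

    def update(self, done):
        filled = int(self.width * done / self.total)
        bar = "#" * filled + "-" * (self.width - filled)
        # \r rewrites the same line in place, giving the "animation"
        sys.stdout.write(f"\r[{bar}] {100 * done // self.total}%")
        sys.stdout.flush()

# Once written, every long-running task reuses the same class;
# the task itself only reports how far along it is.
bar = ProgressBar(total=200)
for i in range(1, 201):
    bar.update(i)
```

In a GUI toolkit the same division of labor holds: the indicator widget is written once, and the per-task overhead is just the periodic `update` call.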
It’s also possible that if the program didn’t spend so much time on frou-frou graphics it would finish sooner and you wouldn’t have the time to wander off.
The reason is simple: people HATE having to think, so they love software that does the thinking for them. The more thinking software has to do for them, the more complicated it gets. I spend a huge amount of time building “wizards”, WYSIWYG editors, etc. And I see a lot of products gleefully advertising “no need to learn code” or “build webpages without having to learn HTML.” Eventually it will be “communicate without having to learn grammar or spelling,” or “generate theories without bothering to learn math.”
Instead of “Think Different” it’s “Stop thinking.”
Eventually, our brains will be utterly atrophied, and our software will rule us. Hey, Word is already finishing our sentences.
It’s inevitable.
As a programmer who has a lot of problems keeping up with current events, I would say it’s because processing power, memory, and storage have become so cheap.
Cheaper hardware? Bigger programs. Better hardware, more code.
Programming is to hardware as a dog is to chasing its tail.
Programmers, since time=money, tend to work on the fastest, biggest capacity computers so their development cycle is short. Unfortunately, the average user for their software rarely has such a dream machine.
It is my opinion that programmers should be forced to use the slowest, smallest machine around instead. This way, they would feel the pain they are inflicting on their customers and would tend to take steps to optimize the code.
Also, the programs they write are typically compiled for the fastest, newest machine & opsys. Rarely do they absolutely need the features in XP vs. 98, for example, but since that is the platform they wrote it for, that may be the only platform it runs on.
There’s an awful lot of baggage to a 16.7-million-color GUI when a simple text-based display would be adequate for many tasks.
Yes and no.
The program or OS puts out a “call” that says “A file is being transferred” or “Here’s something to be printed” or whatever.
Some other module sees the “file transfer” call and invokes the fluttering folder animation. That call could just as easily be picked up by a different module that plays a tune, or something that puts the words “Your file is being transferred” on screen. The thing that picks up the file transfer call and does something with it is what’s going to be big or not - the flag itself is small.
This modular breakdown of function can make the system more efficient as well - Remember that “Here’s something to print” flag I mentioned above? In Windows, stuff to be printed goes from the application to a “miniport” module that acts as an intermediary. Back in the “old days” different apps talked to different printers in different ways, and app developers had to write driver routines for specific printers. Now, the app developer only has to learn how to make the app talk to the Windows miniport. On the other side of the miniport, printer manufacturers only need to know how to talk to the miniport, rather than a multitude of applications.
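The decoupling described above (a small “flag” raised by the sender, handled by whichever module is registered) is essentially an event-dispatch pattern. Here's a generic sketch of the idea, not Windows' actual spooler or shell API; the event name and handlers are made up for illustration:

```python
# Registry mapping event names ("flags") to handler modules.
handlers = {}

def on(event, handler):
    """Register a module to pick up a given flag."""
    handlers.setdefault(event, []).append(handler)

def emit(event, **details):
    """Raise the small flag; whoever is registered does the heavy work."""
    for handler in handlers.get(event, []):
        handler(**details)

# Different modules can pick up the same small flag:
on("file-transfer", lambda filename: print(f"(animation) fluttering folder for {filename}"))
on("file-transfer", lambda filename: print(f"Your file {filename} is being transferred"))

emit("file-transfer", filename="report.doc")
```

Swapping the fluttering-folder animation for a plain text message is just a matter of registering a different handler; the code that raises the flag never changes, which is the efficiency point being made.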
This is the point of automation. The computer and software should do everything that makes economic sense.
Why in the world would you not maximize that ability?
I sometimes wonder just how small most of today’s software could be made if there were some incentive. If they suddenly stopped producing newer hardware and programmers had to somehow make do with what exists, they probably could get several orders of magnitude of performance increase by simply tightening up their code.
I remember the final years of the C-64 era, when fans tried to squeeze more and more performance out of the same hardware. At the end there were multimedia hacks that really took the available resources to their limits through clever computing.
The phenomenon could also be seen in older video game consoles that stayed the same for years. If you compare the very first games published for a new console with the last games released before its makers abandoned it, you would often see a difference far greater than the one between different generations of consoles. This was especially visible within titles of a single series.
Nowadays there are so few static systems that programmers rarely have the incentive to dig into the system and figure out how to get something more out of it. They can just take advantage of constant hardware improvements instead.
Many programmers are stuck with older computers. At many companies, the speed and capacity of a computer is inversely proportional to the user’s needs. I’ve never had a “dream machine” at work.
Yep, that’s all we do all day, think of new ways to make life miserable for our customers. In the real world, software requirements usually include a target platform. This is often what is being currently sold as a low-end PC. It may vary depending on the targeted market and any special considerations. The software is designed and written to run at an acceptable speed on that platform. For PC software, I think it is reasonable to assume a 5 year replacement cycle on customer hardware. We shouldn’t be expected to support every boat anchor that is in the customer’s inventory.
Support of older hardware and software increases the costs of development and testing. At some point, you have to draw a line and tell the customer to upgrade their systems.
It sounds simple, but who is going to pay for it? The testing schedule and budget is going to be huge if the software has to be tested on every major release of Windows 9X/ME, NT, 2000 and XP. It also makes life more difficult for the developers.
I’ve tried that. Give the users a text-based program and they react like someone left a dead skunk on their desk. The command line is dead, as are text-based user interfaces. Users expect a GUI that follows the operating system’s guidelines for a user interface.
You don’t get several orders of magnitude of performance increase by “simply tightening up the code”, unless the code is pathologically written. That sort of improvement, if at all possible, would require a major redesign and new algorithms.
What makes you think a lot of code out there is not pathologically written?
I do agree, though, that most of the improvement would come from a major redesign.
Once on PBS they had the story of Netscape, and one of the things that struck me was the story of how this young kid was hired and he was able to redesign the “layout engine” (or whatever they call the part of the code that handles what the browser displays).
They said that it was a mess, and that caused Netscape to display pages much slower than it should (and, most likely, much slower than the competition).
The guy redesigned it from scratch and, not only made the code smaller, the loading time for each page was reduced drastically.
I do agree with a lot of people here that getting computers to do seemingly simple things takes a lot of man-hours.
On the flip side though, we should not deny that there are simply some very bad programmers out there that write crappy code, and who hide behind the “it’s so complex, you don’t understand” excuse, to explain their crappy code.
Crappy code not only makes files larger and programs slower, it is usually much more difficult to understand, debug, and maintain.
A few points:
- that ghastly flying paper thing when copying files is part of Windows, using anything else actually makes the EXE larger (not that I worry as a sensible progress indicator is far more useful than that bit of eye candy)
Some programmers prefer to use older machines, even for coding.
Others have older machines for testing.
With expensive Apps the software costs many times the price of the hardware, in those cases specifying hardware upgrades (new kit) is totally reasonable.
I’ve had a fair few systems in the past that had dedicated machines.
Huge Apps for mundane mass market use are generally a sign of poor programmers.
I don’t think I said I would not want to maximize that ability, I make a really good living creating tools that automate procedures and insulate people from the complex reality of what they are working with. Other people make tools that isolate ME from the complex reality I live with. I was just pointing out why software is getting so big.
That is also a sign of a low budget, small team, short timeframe, and customers’ unwillingness to pay extra for well written code. Doing something right takes time and effort, two resources rapidly depleting in our society.
Small team is good
- but I agree with the rest
I shudder to think what shoddy code has really been written by people who live in the 3rd World - either physically or mentally.
This doesn’t jibe with my experience, and doesn’t explain why I run across new software that won’t run, or won’t run well, on a 2-year-old machine.
Doesn’t jibe with my experience. I didn’t say the “make life miserable” was intentional. It’s just a byproduct of pressure on development speed and corner cutting. From what software I see being distributed, it doesn’t look like the writers took any care at all to cover the field; they just wrote it for the newest machine, perhaps the one they had around. A little more care might show them that the API they used wasn’t available to owners of the previous opsys, and not required to perform their simple functions, so it would be best to leave it out.
At that point unless you want to be a shill for hardware vendors, you alienate more customers than you might think; customers who don’t understand. I support a diverse end-user community of people for whom the most common opsys is Win98 with 64MB of RAM (yes, Win98, yes, the majority of people), which the industry treats as unworthy of consideration. And no, it’s not poverty that keeps them from upgrading.
Writing for the oldest system would make it more likely to run on the newest. The reverse isn’t true.
And many of my customers wonder why an accounting program needs to have graphics at all, especially when I tell them that it makes the computer slower. Once the thrill of looking at pretty pictures wears off and they want to get some work done, it’s just extra baggage.
And I use the command line every day.
Like I said earlier, Quick Books Pro has neither bells nor whistles, yet it’s GARGANTUAN for what it does. Maybe writing tight code is expensive but so is the typing, the mere putting of code into the programmers’ computers, and if they write a million lines of code when ten thousand will get the job done that is a waste.
Look. This software is just another basic accounting system. People have been writing these for decades and the only real difference between Quick Books and something written in interpreted BASIC thirty years ago is the GUI. The speed is even comparable if the old one were running on a hard drive.
On the question of formatting information for different paragraphs, here’s an idea… Why not only put in the formatting information when the user actually changes the formatting? If I sit down at my computer and write a few paragraphs, in most cases, I’m never going to touch the formatting buttons at all. So the computer should just start with whatever its default formatting is, and when I get to the end of a paragraph, just keep on going, because I haven’t told it to change anything. If I do tell it to change anything, well, then, it can just insert the changes when I make them. That’s even simpler than re-doing all the formatting every paragraph, and it’s more efficient as well.
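That "only record the changes" idea is essentially run- or delta-based formatting storage. Here's a hypothetical sketch (the class and property names are made up, not any real word processor's format): text typed under the default formatting produces zero formatting entries, and only an explicit change adds one:

```python
# Defaults apply implicitly to all text; nothing is stored for them.
DEFAULT = {"font": "Times", "size": 12, "bold": False}

class Document:
    def __init__(self):
        self.text = ""
        self.format_changes = []   # (position, changed_properties) pairs

    def type_text(self, s):
        self.text += s             # typing records no formatting at all

    def set_format(self, **props):
        # Record only the delta, effective from the current end of text.
        self.format_changes.append((len(self.text), props))

doc = Document()
doc.type_text("First paragraph.\n\nSecond paragraph.\n\n")  # zero format entries
doc.set_format(bold=True)
doc.type_text("A bold heading.\n")
print(doc.format_changes)   # a single entry, for the one change the user made
```

To render, a reader walks the text and applies each recorded delta on top of `DEFAULT` when it reaches that position, so unchanged paragraphs cost nothing, exactly as the post proposes.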