I’m not quite sure how to articulate this question, so bear with me. It seems kind of silly to me that computers are still built around the idea of keeping information in permanent storage on a hard disk, then loading that information into temporary memory - RAM - in order to operate and do things.
I guess it just seems kind of archaic to be using the same system that’s been around for 20+ years now: if the power goes out, you lose everything. Whenever a computer is restarted or powered up, everything has to be re-loaded from the disk into RAM. Even launching a program requires loading that program from the disk into RAM.
Why are things still done this way - is it still the best way, or just the cheapest? What’s “Flash memory,” and why aren’t computers built around it now? Why aren’t computers built so that I could just turn off my system at any given time and then turn it back on with this webpage still up and this message still halfway written?
Non-volatile RAM, a.k.a. Flash memory, is inherently somewhat slow (slower than a hard disk for real-world read-write access) and considerably more expensive than a hard disk: you get at least one or two orders of magnitude less memory per dollar. Slower and more expensive does not typically win the day in PC land.
Yes, you can turn off your computer and have it come back in the exact same state. This is done using Suspend or Hibernate in XP. Both your hardware and your OS must support these modes for it to work.
The kind of memory used for RAM is extremely expensive, and therefore not feasible for use as storage space. Storage memory, such as that used in your hard disk, is much cheaper.
Flash memory is a type of non-volatile memory, which means that it retains data without needing to be powered. Most new handheld computers use Flash memory, as do USB memory sticks, MP3 players, etc.
Let me know if you need a more detailed explanation of any point.
It’s fast. No, really, a lot faster than any permanent storage system we have so far. Flash memory (what’s used on digital camera memory cards and USB drives) is certainly faster than a hard disk, but still not as fast as RAM. And to say it’s the same technology we used 20 years ago isn’t accurate. It’s much faster, larger, and more efficient than before.
Flash memory is relatively fast to read, but writing is slow due to the way its write process is implemented; for real-world read-write performance it would be significantly outpaced by a 7200 RPM hard disk with a standard 2-8 MB cache.
In addition to being slower, Flash memory also supports a limited number of write cycles. Limited enough that Flash-based file systems are specially designed to spread out the writes evenly over the entire memory range, and limited enough that using it for purposes which require constant re-writes (e.g. using Flash memory for swap space) is strongly discouraged. If you used Flash memory in place of RAM, I suspect that it would start failing before the end of the day.
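That “spread out the writes evenly” trick is called wear leveling. Here’s a toy sketch in Python of the basic idea, assuming a simplified model where each overwrite costs one erase of the old physical block; real flash translation layers are far more involved (garbage collection, static wear leveling, bad-block management), so treat the names and numbers here as made up for illustration:

```python
import random

class WearLevelingFTL:
    """Toy flash translation layer: each logical-block write is steered
    to the least-worn free physical block, spreading erases evenly."""

    def __init__(self, n_physical, n_logical):
        assert n_logical < n_physical      # need spare blocks to rotate through
        self.erase_counts = [0] * n_physical
        self.mapping = {}                  # logical block -> physical block
        self.free = set(range(n_physical))

    def write(self, logical):
        # Steer the new data to the least-worn free physical block.
        target = min(self.free, key=lambda p: self.erase_counts[p])
        self.free.remove(target)
        old = self.mapping.get(logical)
        if old is not None:
            # The stale copy must be erased before reuse; erasing adds wear.
            self.erase_counts[old] += 1
            self.free.add(old)
        self.mapping[logical] = target

random.seed(0)
ftl = WearLevelingFTL(n_physical=16, n_logical=8)
for _ in range(10_000):
    ftl.write(random.randrange(8))         # hammer just 8 logical blocks
spread = max(ftl.erase_counts) - min(ftl.erase_counts)
```

Even though only 8 logical blocks are ever written, the erases end up spread almost perfectly evenly across all 16 physical blocks; without the leveling, a handful of blocks would absorb all ~10,000 erases and hit their write-cycle limit far sooner.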
If you ever invent a type of memory that is cheap to produce, not too energy-hungry, as reliable as current storage technologies, feasible to use for amounts of storage in the terabyte range, AND supports a near-infinite number of read and write cycles, you won’t have to worry about money ever again. Right now, using dynamic RAM for working memory, Flash memory for relatively small amounts of permanent storage, and spinning magnetic platters for large storage is the best compromise we’ve been able to come up with.
You can buy PCs that use flash for permanent storage and RAM for processing. Industrial PCs that handle high vibration, and small handheld PCs, are two examples. You can see the former at cyberresearch.com.
We have been making the distinction between working store and mass storage since the very beginning, back to the very first stored-program computers in the late 1940s. (The electromechanical computing devices used in WWII just barely miss the cut-off point I’m using here.) The Manchester Baby is an example of the minimum system required: it used Williams (cathode-ray) tubes for working store, and its immediate successors added paper tape and drums for mass storage, well before solid-state transistors (invented at Bell Labs in 1947) were used in computer design.
My point is, the fundamental design has been around for as long as computers themselves, and it has survived decades of fundamental changes for a good reason: fast storage is expensive. It relies on very advanced design processes to pack components in as densely as possible, something that has been true even in the days when components were glass tubes. It’s expensive in terms of money because it’s expensive in terms of the mental labor required to design a new process.
(As a final aside, the fastest storage in a modern general-purpose computer isn’t RAM, it’s on-chip store like registers or cache. And registers are more expensive than RAM for the same reason RAM is more expensive than disk drives: It takes mental effort to jam more transistors on a CPU without ruining performance and reliability characteristics.)
Just an observation - the core memory used in mainframe and minicomputers before semiconductor memory was non-volatile, i.e. its contents remained after the computer was turned off. However, it was MUCH slower than today’s semiconductor memory.
Economics
There will always (most likely?) be a hierarchy of storage.
Current hierarchy:
Register (fast, expensive)
L1 Cache
L2 Cache
L3 Cache
General Purpose RAM
ROM
Hard Disk
Tape/CD/DVD
Micro-fiche/Paper (slow and cheap)
All of your data does not have the same latency requirements. For example, the backup of the database from the end of the 1st quarter of 2003 might never need to be retrieved, and for that reason is stored on tape. On the other hand, key data from the currently running program might need to be retrieved every cycle, so it’s stored in a register.
If you are storing all of your data at the same level in the hierarchy, then you are not optimizing dollars and performance.
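The trade-off above can be sketched in a few lines of Python: given a latency requirement, pick the cheapest tier that still meets it. The latency and cost figures here are hypothetical order-of-magnitude placeholders, not measurements, and the tier names just echo the list above:

```python
# Hypothetical, order-of-magnitude figures for illustration only.
TIERS = [
    # (name, access latency in seconds, cost in dollars per megabyte)
    ("register",  1e-9, 1e3),
    ("L2 cache",  5e-9, 1e2),
    ("RAM",       1e-7, 1e-1),
    ("hard disk", 1e-2, 1e-3),
    ("tape",      6e1,  1e-5),
]

def cheapest_tier(max_latency):
    """Return the cheapest tier whose latency meets the requirement,
    or None if no tier is fast enough."""
    fast_enough = [t for t in TIERS if t[1] <= max_latency]
    if not fast_enough:
        return None
    return min(fast_enough, key=lambda t: t[2])[0]

# The quarterly backup tolerates minutes of latency, so it lands on tape;
# data needed every cycle can only live in a register.
```

With these numbers, `cheapest_tier(120)` picks tape and `cheapest_tier(2e-9)` picks a register, which is exactly the point: paying register prices for backup data, or tape latency for hot data, is optimizing neither dollars nor performance.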