Putting here instead of FQ because I know the answer - or knew it after I researched it.
Whether von Neumann invented the stored program concept is a bit controversial, but this thread isn’t about that. As a computer architect who learned to program on a machine not a lot more advanced than EDSAC (first version of the LGP-30 was released in 1956) I thought I knew the answer to this question. Especially since my first PhD adviser was a student of von Neumann’s who also worked on the IAS machine.
I discovered I was wrong when researching the chapter on computers for our new book. Plugged in Marketplace here.
What’s your first impression - no fair researching it!
Guess: Because typing a program in each time you want it to run is really tedious? It was for me at least.
Or, if I’m misunderstanding what “stored program” means in this context, because loading it off a tape is really slow. Something else I know from experience.
My impression, keeping with your rules and not researching anything, is the same as my answer in the “what if the internet had never been developed?” thread: the question was moot because, with the ubiquity of global communications and the development of standardized protocol stacks like TCP/IP and OSI, an “information highway” was inevitable. It was just a question of what form it would take and who would control it.
Similarly, my impression is that a stored-program computer was inevitable because that’s fundamentally what a digital computer is – a computer as we know it is a compute engine that executes instructions – otherwise what you have is just an elaborate calculator, essentially a digital version of early analog computers. The paradigm is exactly analogous to how the human mind works, which is easily conceptualized as hardware executing software.
Is there some different or more fundamental explanation?
I believe it was so early automated looms could produce consistent patterns in cloth. Perhaps the idea was used again in other types of machines.
Unless the meaning of stored program excludes serial input from a tape-like device, which allows little functional complexity. So either you have a stored program, or every function requires a unique design. I have no idea what other reason there could be.
Okay, I wasn’t clear enough. By stored program I mean executing a program from within the computer’s memory. ENIAC, Colossus and the looms all ran the program directly from patch boards or punch cards, without it being read into memory.
As to why, I meant the immediate reason. In retrospect executing a program from memory seems the most obvious idea around. I’m asking for the reason the architects of EDSAC added this feature.
I had to load programs on my first computer from paper tape. Fortunately with only 4K of memory, programs couldn’t be too long.
See above for how programs were stored before the stored program capability was introduced. When I was an undergrad the index for the MIT Science Fiction Society was stored on punch cards. When it was time to make a new copy, I took it to the computer center where there was a card sorter controlled by a wired-up board, and a printer, ditto. Never got near a computer, only these peripherals. Which of course were dumb, it being 1970.
My oldest brother (a physicist) had a program on punch cards. He tripped and the cards got spilled.
He said he cried when it happened. The work to get it back together again was substantial.
ETA: I forget… it might have been my brother watching another guy do the same. It’s been a while since I heard the story, and my brother has been dead for nearly ten years, so I can’t ask him.
Yeah, debugging seems like a fairly big advantage. Although people used to patch and repunch tapes and cards, so maybe not; then again, patching and repunching a punch card allows substitution of a character value, but it doesn’t so easily allow insertion of additional code.
My guess would be: execution speed of code that is not stored in memory is limited to the speed at which the punch cards or paper tape (or whatever) can be read; this can be fast, but it is a physical process and so it has mechanical limits, beyond which you risk destroying the media.
Imagine a database lookup where the program has to advance the tape to an indexed point to retrieve a piece of data; the bigger the database, the longer the wait for retrieval.
Once the program is moved from fixed media like cards or tape into live storage (call it RAM for simplicity), it becomes just another form of data.
Which means you can process it. Which is what enables practical assemblers and compilers.
A secondary reason is speed and simplicity of hardware. The same circuits that, e.g., load an accumulator with a word of data from memory can also load the current instruction register/decoder with a word of instruction from the same memory.
Sorta ironically today we’re going the other way with various efforts to segregate code and data as a security measure against malware.
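The two points above (program-as-data and one set of fetch circuitry serving both roles) can be sketched as a toy von Neumann machine. The instruction encoding here (opcode * 100 + address, packed into a plain integer) is invented for illustration; it is not EDSAC’s actual order code.

```python
# Toy von Neumann machine: instructions and data share ONE memory array,
# so the same "read a word" operation serves the instruction decoder and
# the accumulator alike. Invented encoding: word = opcode * 100 + address.
LOAD, ADD, STORE, HALT = 1, 2, 3, 4

def word(op, addr):
    """Pack an instruction into an ordinary integer word."""
    return op * 100 + addr

MEM = [0] * 32                 # one memory for instructions AND data

# Program at addresses 0-3: acc = mem[10] + mem[11]; mem[12] = acc
MEM[0] = word(LOAD, 10)
MEM[1] = word(ADD, 11)
MEM[2] = word(STORE, 12)
MEM[3] = word(HALT, 0)

# Data lives at addresses 10-11, in the very same array.
MEM[10], MEM[11] = 7, 35

def run(mem):
    pc, acc = 0, 0
    while True:
        op, addr = divmod(mem[pc], 100)   # same read used for data below
        pc += 1
        if op == LOAD:
            acc = mem[addr]
        elif op == ADD:
            acc += mem[addr]
        elif op == STORE:
            mem[addr] = acc
        elif op == HALT:
            return acc

result = run(MEM)   # result == 42, and MEM[12] == 42
```

Because the program is just integers in the array, anything that can write integers into memory (an assembler, a compiler, a loader) can produce a runnable program, which is the point about assemblers and compilers above.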
Maybe that’s what the OP is talking about. That was the feature the von Neumann model introduced over earlier models, which kept instructions in a separate memory that was not as easy to access and modify as the memory dedicated to data.
Note that the universal Turing machine is a stored-program computer and predates ENIAC by almost 10 years. While most computer developers in the late 40s might not have heard of it, Turing certainly had, and he was a major player in the development of actual computers.
The big thing about the “von Neumann” (a.k.a. Mauchly-Eckert) machine model was the abstraction of components: memory, CPU, input, output.
To me, it all comes down to flexible loops: being able to cycle through instructions in a much wider variety of ways than punch cards and plugboards allow.
Once you had stored program memory, the next big leap was subroutines. IIRC, the very first one wasn’t numeric like a sine function, but for sorting.
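For what it’s worth, EDSAC’s subroutine linkage (David Wheeler’s “Wheeler jump”) depended on instructions living in writable memory: a return jump was planted into the subroutine itself. A toy sketch in that spirit follows, using an invented encoding (opcode * 100 + address); the mechanics differ from the real EDSAC convention, where the subroutine built its own return jump from the accumulator.

```python
# Wheeler-jump-style subroutine linkage, toy version: the caller writes a
# "jump back to me" instruction into the subroutine's exit slot before
# calling it. Encoding (word = opcode * 100 + address) is invented.
LOAD, ADD, STORE, JMP, HALT = 1, 2, 3, 4, 5

def word(op, addr):
    return op * 100 + addr

MEM = [0] * 32

# Caller at addresses 0-5.
MEM[0] = word(LOAD, 7)    # acc = precomputed "JMP 4" return instruction
MEM[1] = word(STORE, 12)  # plant it in the subroutine's exit slot
MEM[2] = word(LOAD, 8)    # acc = argument
MEM[3] = word(JMP, 10)    # "call" the subroutine
MEM[4] = word(STORE, 9)   # return point: save the result
MEM[5] = word(HALT, 0)

MEM[7] = word(JMP, 4)     # the return jump, stored as plain data
MEM[8] = 8                # the argument

# Subroutine at 10-12: doubles the accumulator, then exits via slot 12.
MEM[10] = word(STORE, 19) # scratch = acc
MEM[11] = word(ADD, 19)   # acc = 2 * acc
                          # MEM[12] is the exit slot the caller fills in

def run(mem):
    pc, acc = 0, 0
    while True:
        op, addr = divmod(mem[pc], 100)
        pc += 1
        if op == LOAD:
            acc = mem[addr]
        elif op == ADD:
            acc += mem[addr]
        elif op == STORE:
            mem[addr] = acc
        elif op == JMP:
            pc = addr
        elif op == HALT:
            return acc

run(MEM)   # afterwards MEM[9] == 16, the doubled argument
```

None of this is possible when the program lives on a plugboard or a card deck: there is nowhere writable to plant the return jump.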
Right. The Harvard architecture stored instructions in a store distinct from the data store. Instructions being executed couldn’t access themselves as data operands.
Von Neumann’s design integrated both, so that self-reference was possible (instructions accessing instructions as operands). That enabled self-modifying code and “data as instructions”: structuring stored data so that it could be treated as valid strings of operations.
But if there’s a deeper “why”, I was never taught it.
My first comment was, as requested by the OP, done without any research. Having now done a bit of research, I see that early EDSAC programs often took advantage of the ability to be self-modifying. However, that seems like a very poor reason for designing a stored-program computer since in general (except for unique special cases) self-modifying code is regarded as very bad programming practice (imagine trying to debug a self-modifying program!).
I’m curious about what @Voyager will give as his reasons since the only thing I can come up with is that EDSAC was built as a stored-program computer because the designers recognized the power and versatility of the von Neumann architecture where instructions and data reside in the same memory, the basic power of this paradigm being not that programs can modify themselves, but that they can modify the data they’re working on.
Storing a program facilitates:
Archiving it for future use.
Swapping it with another program.
Making a copy of it.
Inspecting it while it’s not running.