That’s not a usage of “stored program” that I’ve encountered before. A store is generally external to the processor. A cache is in an intermediate position. Once it’s in the processor it’s internal.
Is he talking about anything different from that? Memory is functionally distinct from the processor. Even registers can be external to the processor.
I don’t see any reasons given in this thread that the OP would not have understood prior to his research. I’m imagining the reason could be something along the lines of a dead moth covering a photo-optical sensor rather than the obvious practical reasons.
It’s not clear to me, hence my comment.
“Stored program” is a (fairly obsolete) term of art which indicates the executable machine code program is stored in the same memory space as the data the program processes.
It’s obsolete because it’s been the overwhelming default for computer architectures since the 1950s.
The alternatives are the Harvard architecture, where the executable is stored, but in a memory space distinct from the data; and the older non-stored-program approach, where the executable is not stored in an online memory device at all, but accessed one instruction at a time from mass storage.
“Stored” in the sense of “fixed in some form of persistent offline storage” is not relevant; it’s assumed in all three cases that instructions start out in a tangible medium. It’s how those instructions are treated at runtime that makes the difference.
“Stored program architecture” or “stored program computer” being the more complete terms of art. And, as you say, largely of historical interest since that design pretty well took over the world for general purpose computers.
There are certainly specialty exceptions such as microcontrollers or embedded systems where the code is burnt into some sort of ROM distinct from the RAM where live modifiable data lives.
Some possible exceptions but even where code is stored in ROM it will be accessible by address just like data memory.
I think they’re talking about something more classical, like the IBM 360, where you have your instructions and data in memory (though not in the same segments), and as the instructions execute they interact with registers and store results back into memory.
We did some sketchy stuff in college that took advantage of this sort of thing when we were doing assembly code ‘races’ to see who could write the fastest program to accomplish a given task. One thing I did was conditionally change an instruction further down in the program based on the result of a comparison earlier in the code. This was possible because the instructions and data were basically the same thing: you could just rewrite a particular memory location on the fly if you so chose, and that might be data, or it might be an instruction; it didn’t really matter.
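Something like this toy sketch, in Python rather than any real assembler (the mini instruction set is invented purely for illustration): instructions and data sit in one flat list, and the running program overwrites a later slot depending on a test.

```python
# Toy illustration: one flat "memory" holds both instructions and data,
# so a running program can overwrite a later instruction based on a test.
# The instruction set here is invented for the sketch, not a real machine's.

memory = [
    ("LOAD", 6),               # 0: acc = memory[6]  (a data value)
    ("SKIP_IF_ZERO",),         # 1: if acc == 0, skip the next instruction
    ("PATCH", 4, ("ADD", 7)),  # 2: rewrite slot 4 with a different instruction
    ("NOP",),                  # 3:
    ("SUB", 7),                # 4: will be patched to ADD if acc != 0
    ("HALT",),                 # 5:
    3,                         # 6: data
    10,                        # 7: data
]

acc, pc = 0, 0
while True:
    op, *args = memory[pc]
    pc += 1
    if op == "LOAD":
        acc = memory[args[0]]
    elif op == "SKIP_IF_ZERO":
        if acc == 0:
            pc += 1
    elif op == "PATCH":        # self-modification: code rewriting code
        memory[args[0]] = args[1]
    elif op == "ADD":
        acc += memory[args[0]]
    elif op == "SUB":
        acc -= memory[args[0]]
    elif op == "NOP":
        pass
    elif op == "HALT":
        break

print(acc)   # 13: the SUB at slot 4 was rewritten into an ADD at run time
```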
Not researched but here is my guess. Multiple users could use the program without exchanging physical material. Very useful if people from different sites (although given the time maybe different floors or different buildings on campus) wanted to use the same program.
But why do I suspect it was something like, “Hey, I bet you a beer you can’t store a program on a computer.” “You’re on.”
This is also used in simple debugging techniques to trace code operation by inserting a call or interrupt at the address to create a break. Don’t know if that was being considered early in the game.
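For what it’s worth, that is roughly how a software breakpoint still works. A hedged sketch on the same sort of invented one-list machine as above (not any real debugger’s implementation): save the instruction at the target address, overwrite it with a trap, and put the original back when the trap fires.

```python
# Sketch of a software breakpoint on a toy one-list machine (invented for
# illustration): patch a TRAP over the target instruction, then restore it
# once the trap fires so execution can continue normally.

program = [("ADD", 1), ("ADD", 2), ("ADD", 3), ("HALT",)]

saved = {}

def set_breakpoint(addr):
    saved[addr] = program[addr]       # remember the real instruction
    program[addr] = ("TRAP", addr)    # patch a trap in its place

set_breakpoint(2)

acc, pc = 0, 0
while True:
    op, *args = program[pc]
    if op == "TRAP":
        print(f"break at {args[0]}: acc={acc}")
        program[args[0]] = saved.pop(args[0])   # restore and re-execute
        continue
    pc += 1
    if op == "ADD":
        acc += args[0]
    elif op == "HALT":
        break

print(acc)   # 6
```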
There had to be some point where the use of a large data store was first conceived, instead of computers merely performing dedicated functions, much like analog devices, to produce limited output in human-readable form.
Nah, my suspicion is that since computing hardware was so absurdly expensive and labor intensive back when all this was getting started, the stored program concept was a way to minimize the amount of total hardware needed and maximize the versatility of what there was.
Missed edit window:
There had to be some point where the use of a large data store was first conceived, instead of computers merely performing dedicated functions, much like analog devices, to produce limited output in human-readable form. Before then, if complex procedural processing was being considered, the major need for memory would have been for code, not data, and that may have been the original reason for the stored program concept, rather than the practical reasons for using data memory for code. I don’t know the timeline of these developments, though.
The von Neumann architecture was implemented in mainstream commercial computers long, long before the System/360. In my personal timeline, the System/360 is practically a modern computer. The IBM 700/7000 series of mainframes, kicked off with the 701 in 1952, was a von Neumann conformant stored-program computer, as were all its successors in the series. Ironically, the IBM 650, a business computer introduced a year later, was not. The issue wasn’t instruction execution from its drum memory rather than from RAM, since that can do everything we can do today, just slower; the issue was that instructions and data were segregated on the drum, so AIUI this was more like the Harvard architecture.
Back in my early college years, one of the tricks with the PDP-10 (aka DECsystem 10) was that if you had a small loop that was executed a large number of times and was only a few instructions long, you could store it in the general-purpose registers, of which the PDP-10 had 16, which could function as accumulators, index registers, or just general memory locations. This sped up the program because the registers had much faster access time than main memory, which at that time was based on core. I still remember when high-speed solid-state memory – at that time referred to as “MOS memory” – was available as an option for the PDP-11; because of its high cost it was usually used just for the lower memory addresses where the OS ran.
You mean like some sort of purpose-built system that has actual hardware to accomplish specific functions, versus some sort of general-purpose computer that could handle any task (within reason) because you just vary the program rather than the hardware?
While I don’t remember it very well, I think you’re looking for Alan Turing’s work on computing, Turing machines, and Turing completeness. If I remember right (and understood it in the first place), that’s the theoretical underpinning of what you’re asking about.
Well yes. But to clarify my intended point:
The memory capacity requirements of complex processing could be based on a perceived need for more code storage than for pure data. In addition to pure code sequences this might include memory for stacking return addresses and state data that exceeds the size of pure input and output data. It’s possible the von Neumann innovation was to use the larger amount of code memory necessary for complex procedural processing to store that pure data as well, instead of the other way round as modern computer users might see it.
IOW, maybe the need for code storage was the original priority for greater amounts of memory, then later the benefit of more memory available for data was understood.
yeah, I’m gonna agree with this. I know absolutely nothing about computer architecture.* But all the answers in this thread have been full of techy stuff, with nice, logical historical explanations, and, well, that’s no fun, is it?
So the correct (and fun) answer to the OP probably involves beer. Or maybe a story about a freshman student who didn’t understand the assignment, so naively tried to do something he wasn’t supposed to know about.
===
*(But, hey, I still know how to write a program in Basic:
10 print “Hello World”, etc. In 1989, I wrote a program in Basic for my job, and I used it every day for 35 years, till I retired.)
When John McCarthy first envisioned the lisp programming language, it had a different syntax: lisp programs looked like programs in other languages instead of like lisp lists. McCarthy created a version of lisp where lisp programs and lisp data were in the same format so that he could prove lisp was as powerful as a Turing machine: by writing a lisp interpreter in lisp, he created a universal lisp program, just as you can create a universal Turing machine.
It wasn’t until someone suggested hand-implementing the lisp interpreter from the proof that lisp got the weird format that we now know and love.
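A rough Python sketch of the code-is-data idea (not McCarthy’s actual Lisp; the operators and the little program are made up for the example): programs are ordinary nested lists, so the evaluator is itself just another program that walks list-shaped data, which is the property that makes an interpreter-in-the-language possible.

```python
# A sketch of code-as-data: a "program" is an ordinary nested list, and the
# evaluator is just a function that walks that list. (Toy example only.)

def evaluate(expr, env):
    if isinstance(expr, str):            # a variable name
        return env[expr]
    if not isinstance(expr, list):       # a literal number
        return expr
    op, *args = expr
    if op == "if":
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    if op == "+":
        return sum(evaluate(a, env) for a in args)
    if op == "*":
        result = 1
        for a in args:
            result *= evaluate(a, env)
        return result
    if op == ">":
        return evaluate(args[0], env) > evaluate(args[1], env)
    raise ValueError(f"unknown operator {op!r}")

# The program is plain data: you could build or rewrite it with list operations.
program = ["if", [">", "x", 0], ["*", "x", 2], ["+", "x", 100]]
print(evaluate(program, {"x": 7}))    # 14
print(evaluate(program, {"x": -3}))   # 97
```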
It occurs to me that Von Neumann could have done the same thing and put programs in main memory just to make the universal Von Neumann machine proof work.
The Turing machine was abstract; though its program was stored on tape, it wasn’t actually built back then. Turing himself worked on decryption algorithms and the Bombe. Colossus, the real computer, was designed by a guy from the General Post Office named Thomas Flowers.
Colossus had precious little influence on the development of computers, alas, because the British demolished it and heavily classified all information about it.
This is the closest so far, I think.
Before I researched this, I assumed the reason was to enable the writing of assemblers, perhaps because my adviser wrote one for the IAS machine (which von Neumann thought was a waste of time) and I wrote one for my LGP-21 when I was in high school. Alas my adviser’s was not the first.
But I was wrong. The real reason can be seen if you consider how you would do something like add the elements of an array. In a modern computer you have an index register which holds a memory address. If you want to add the elements of the array you write a loop adding the contents to some other register, incrementing the index register, and ending when the index register’s value hits the end of the array.
ENIAC didn’t have an index register. To do this you had to write n add instructions, one for each distinct array element. Clearly painful. But if you held the program in memory (caches were far in the future, as was separating program space from data space) you could basically add one to the address field of the ADD instruction and put that in a loop. Much less of that valuable memory used.
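Here’s a toy sketch of that trick (an invented mini machine in Python, not ENIAC or any real instruction set): with no index register, the loop sums an array by repeatedly bumping the address field of the ADD instruction itself, which only works because the instruction sits in the same memory the program can write.

```python
# Toy sketch of the trick described above (invented mini machine):
# with no index register, the loop sums an array by repeatedly bumping the
# address field of the ADD instruction -- possible only because the
# instruction lives in the same writable memory as the data.

N = 5
memory = [
    ("ADD", 8),            # 0: acc += memory[8]   <- this address gets bumped
    ("BUMP_ADDR", 0),      # 1: add 1 to the address field of instruction 0
    ("DEC_JNZ", 7, 0),     # 2: memory[7] -= 1; jump to 0 if not yet zero
    ("HALT",),             # 3:
    0, 0, 0,               # 4-6: padding
    N,                     # 7: loop counter
    10, 20, 30, 40, 50,    # 8-12: the array to sum
]

acc, pc = 0, 0
while True:
    op, *args = memory[pc]
    pc += 1
    if op == "ADD":
        acc += memory[args[0]]
    elif op == "BUMP_ADDR":              # self-modification: rewrite the ADD
        target_op, target_addr = memory[args[0]]
        memory[args[0]] = (target_op, target_addr + 1)
    elif op == "DEC_JNZ":
        memory[args[0]] -= 1
        if memory[args[0]] != 0:
            pc = args[1]
    elif op == "HALT":
        break

print(acc)   # 150 -- with an index register you'd increment a register instead
```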
Once you had the capability, code to write programs into storage soon followed, but that wasn’t the motivator.
I don’t remember if I pulled this trick on my LGP-21, which had no index register, but I did have cause to modify code in memory, much like the story of Mel, a real programmer.
The LGP-21 was the transistorized version of the LGP-30 in the story. I checked, and it did have an instruction, Y (store address), which specifically stored the value in the Accumulator (the only programmer-accessible register) into the address part of a word in memory.
Thanks for all the guesses! And my book is my cite, though I didn’t put this into the chapter since it is meant for the general public and my editor gave me some more pages for it to explain computers from the bottom up. I even got some truth tables in.
There was data storage from the beginning, though it was very expensive, and before the invention of core memory it consisted of blips on a scope (the Williams tube) and acoustical delay lines, one of which I built for my first logic lab. I don’t think there was a second when they didn’t want more memory. My LGP-21 had 4 K of memory, and I had to modify the code of my tic-tac-toe program while it was running to get it to fit.
Not a problem when there was only one of these in the entire world. I’m not sure which computer got multiple copies - maybe Univac - but it was well after stored programs took over.
Before this, the boot program for the PDP-11 we used to teach assembler was resident in ROM, so if it crashed all you had to do was restart the machine with a 0 in the program counter. The one we used for research forced you to toggle in the boot code.
So the address space of the ROM was shared with RAM.
Except if the machine was microprogrammed with the microcode sitting in a different ROM with a different address space.
At that point not from mass storage (what a concept!) but from plugboards. I’m not aware of any computer that executes code from mass storage.
Stored as in stored into memory. Caches should be invisible to the programmer, and occupy a position on the memory hierarchy that allows instructions in the working set to be placed where access time is lower. But caches were far in the future in the time of EDSAC.
I think it is hard for anyone who learned how to program, even in assembler, in the past 50 years to wrap their head around how primitive things were then. Learning on a computer whose instruction set was already 12 years old in 1968 clearly warped me, and no doubt led to me working on microcode. And you can’t modify microinstructions! Though that would be interesting …