What other types of memory access are there besides RAM, and lookup tables?

A computer’s RAM is referred to as “Random Access” because you can access any chunk at any time. I know there’s another special type of memory that’s used for memory address translation or lookup tables*, but I’m not sure if it’s considered “random access” or not.

So, what other flavors of memory used today are there that aren’t random access? I remember magnetic tapes were used in the very early days (and for backups until a decade or so ago) and they’re definitely not random access. Is there anything else?
*learned about this when I was reading about the differences between DRAM (main memory) and SRAM (cache). I remember it being ungodly complex (even compared to SRAM) at the physical level.

It is “random” as compared to linear formats like magnetic or paper tape, hard drives or CD-ROM.

It is called “random access” because the access time is not tied to the physical location of data.

Consider a hard drive spinning at 7200 RPM: even if you ignore the head movement time, one revolution takes 60/7200 of a second, so it can only access the same location every ~8.3 milliseconds.
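The arithmetic behind that figure is quick to check (just the math, not any particular drive’s datasheet):

```python
# Rotational latency of a 7200 RPM drive: in the worst case, the sector
# you want just passed under the head and you wait a full revolution.
rpm = 7200
revs_per_second = rpm / 60                    # 120 revolutions per second
full_revolution_ms = 1000 / revs_per_second   # period of one revolution
print(f"one revolution: {full_revolution_ms:.2f} ms")                  # ~8.33 ms
print(f"average rotational latency: {full_revolution_ms / 2:.2f} ms")  # ~4.17 ms
```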

Perhaps you are thinking of Content Addressable Memory or CAM. They are commonly used inside microprocessors, at least.

CAMs are found in the memory libraries of all semiconductor houses I know of. They are a pain to test.

As for antique memories, the first project in my undergrad digital design lab was a driver for an acoustic memory, which worked by converting the signal to sound and zapping it down an acoustic delay line, a big cable. This was obsolete in 1971 when I did it. I suspect they had a bunch of them lying around to torture us with.
Here is a wiki article that seems to be about it - didn’t read it.

I think the accent on “random access” was because some early computers used drum memory, where the computer’s working memory was physically sequential / cyclical.

FIFO (First-in first-out) memories were sometimes used to simplify the interface when reading from or writing to an external device. These are a type of shift register.

Associative memories (lookup tables) can be emulated in constant time on random-access memories using hash tables. Other conceptual memory structures can also be implemented with software on RAM.
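As a sketch of that emulation, here is an associative lookup built on Python’s hash table (the TLB-style names are invented for illustration; a hardware CAM would compare the key against all entries in parallel instead):

```python
# Emulating content-addressable lookup in software: a hash table maps
# "content" keys to stored values in (amortized) constant time on RAM.
tlb = {}  # hypothetical TLB-style mapping: virtual page -> physical frame

def store(virtual_page, physical_frame):
    tlb[virtual_page] = physical_frame

def lookup(virtual_page):
    # A real CAM checks every entry simultaneously in hardware;
    # a hash table gets the same O(1) average behavior in software.
    return tlb.get(virtual_page)   # None on a "miss"

store(0x4000, 0x91)
print(lookup(0x4000))   # 145 (0x91)
print(lookup(0x5000))   # None -- a miss
```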

Sparse distributed memory is a complicated type of associative memory introduced, with some excitement, three decades ago. Have there been any hardware realizations?

While main memory is usually DRAM and cache is SRAM, this isn’t always the case. There have been many systems produced over the years that use SRAM for their main memory. DRAM is usually used for main memory because it’s cheaper, even when you factor in the more expensive DRAM controller (which needs to be more complex due to dynamic refresh). SRAM is more expensive, but it is also faster and generally requires less power.

There is also NVRAM (non-volatile RAM), which retains its contents even after power is removed, and of course ROM (read-only memory). And, humorously, there is write-only memory (WOM), which is a funny way of referring to memory that you can no longer read due to the hardware failing. :slight_smile: Signetics actually published a data sheet for a WOM chip back in the 1970s as a joke.

And with a bit of googling, I managed to find the WOM data sheet:

As far as other types of memory go, there is also FIFO, which is First In First Out. You wouldn’t use this for system memory, but it is still used on things like network interfaces where you can only pull the first message out of the network chip’s queue (many chips set up a ring buffer in RAM these days, but FIFOs are still used). Similar to WOM, you can also have FINO memory, which is first in, never out. The Signetics WOM data sheet mentions that the chip has asynchronous FINO buffers. :slight_smile:
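A FIFO of the kind a network chip exposes can be sketched as a small queue (assumed details, not modeled on any particular chip):

```python
from collections import deque

class Fifo:
    """Fixed-capacity first-in-first-out queue, like a NIC receive FIFO."""
    def __init__(self, capacity):
        # With maxlen set, the oldest entry is silently dropped when full.
        self.buf = deque(maxlen=capacity)

    def push(self, frame):
        self.buf.append(frame)

    def pop(self):
        # You can only take the *oldest* item; there is no random access.
        return self.buf.popleft() if self.buf else None

rx = Fifo(4)
for frame in ("pkt1", "pkt2", "pkt3"):
    rx.push(frame)
print(rx.pop())   # pkt1 -- arrival order is preserved
```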

You can also have dedicated hardware stacks. Most processors these days have a stack of some sort, but most often it is implemented in system RAM. Some microcontrollers have a dedicated hardware stack, and modern PC processors have a floating point stack since they are backwards compatible with the original 8086, which used a separate coprocessor chip for floating point operations. The way you do floating point math on a PC is to first load values onto the floating point stack and then do operations on those stack registers. When you are done with your math, you store the floating point value somewhere (typically back to a variable location in RAM). Google “x86 floating point stack” if you want more gory details.
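The load/operate/store pattern can be mimicked in a few lines (a toy model loosely named after x87 mnemonics, not the real register semantics):

```python
# Toy model of stack-based floating point: operands are pushed, and
# arithmetic implicitly works on the top of the stack, roughly like
# an x87 FLD / FADDP / FSTP sequence.
fp_stack = []

def fld(value):
    """Push a value onto the FP stack (like x87 FLD)."""
    fp_stack.append(value)

def faddp():
    """Pop two operands, push their sum (like x87 FADDP)."""
    b, a = fp_stack.pop(), fp_stack.pop()
    fp_stack.append(a + b)

def fstp():
    """Pop the result back out to "memory" (like x87 FSTP)."""
    return fp_stack.pop()

fld(2.5)
fld(4.0)
faddp()
print(fstp())   # 6.5
```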

Similar to FIFO, you can also have LIFO, which is last in, first out. Access works like a stack: you push items onto it and pop off the last item you pushed.
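The difference is only which end you remove from; the same insertions come back out in a different order:

```python
from collections import deque

fifo, lifo = deque(), []
for x in (1, 2, 3):        # identical push order into both structures
    fifo.append(x)
    lifo.append(x)

print(fifo.popleft())   # FIFO pops the oldest item -> 1
print(lifo.pop())       # LIFO pops the newest item -> 3
```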

You also have SAM, or Sequential Access Memory. You can think of this as kinda like how a disk drive works: you can only read one track’s data sequentially, so if you want just one part of it, you need to read the entire track at least up to the point where the data you want is located. Solid state drives are random access internally, but the computer’s access to them treats them much like SAM, since they present a disk-style block interface addressed by sectors.

You can also have paged memory. This will often be mapped into a particular memory space, and there will be a page control register that controls which page is currently accessible. Instead of being like system RAM where you can access all of the RAM at once, with paged memory, you select a page to map it into the usable area, then access that page as if it were RAM. Early PCs had this type of memory (called Expanded Memory back in the DOS days) and many microcontrollers still use this type of architecture.
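Bank switching of this sort can be modeled with a page control register (a sketch with made-up sizes, not EMS or any specific microcontroller):

```python
class PagedMemory:
    """A 4 KB window into a larger memory, selected by a page register."""
    PAGE_SIZE = 4096

    def __init__(self, num_pages):
        self.pages = [bytearray(self.PAGE_SIZE) for _ in range(num_pages)]
        self.page_reg = 0          # which page the window currently shows

    def select(self, page):
        """Write the page control register."""
        self.page_reg = page

    def read(self, offset):
        # The CPU can only ever see the currently mapped page.
        return self.pages[self.page_reg][offset]

    def write(self, offset, value):
        self.pages[self.page_reg][offset] = value

mem = PagedMemory(num_pages=16)    # 64 KB total, 4 KB visible at a time
mem.select(3)
mem.write(0x10, 0xAB)
mem.select(0)
print(hex(mem.read(0x10)))   # 0x0 -- different page, different data
mem.select(3)
print(hex(mem.read(0x10)))   # 0xab
```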

Are we talking about types of hardware, or ways of accessing that hardware?

A bit of both. SRAM and DRAM distinguish how the basic bits are stored - and can be contrasted with acoustic delay lines, drum, core, bubble, and so on. But you can’t divide the controller from the memory device. DRAM requires that the data is written back once read, as does core; SRAM doesn’t. The manner in which you drive the memory accesses is tied to the underlying storage, but also to the abstraction you are building.
Content addressable memory uses what is essentially static RAM to store the contents, but the controller logic exceeds the memory in size and complexity.

RAM chips have a whole world of protocols that must be obeyed in order to use them. You used to have a separate memory controller that handled that, and it was only relatively recently that the controller came onto the same die as the CPU.

If you have a FIFO memory, the memory controller can make all sorts of optimisations, since it knows ahead of time the locations in the memory chips (row, column, etc.) that will take the next write, and also what the value of the data that will pop out is. So the controller can be heavily optimised, and the overall memory system may run significantly faster than a simple block of RAM that the CPU generates addresses for. (That said, dedicated DSP architectures will typically include a modulo arithmetic addressing mode; they may also provide bit-reversal addressing modes, which support FFT calculations.)
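That modulo addressing mode amounts to the hardware doing the wrap-around of a ring buffer for free; in software you do the wrap by hand (a sketch with an arbitrary buffer length):

```python
# A DSP's modulo addressing mode wraps the address pointer inside a
# fixed-length buffer automatically; here the wrap is explicit.
BUF_LEN = 8
delay_line = [0.0] * BUF_LEN
ptr = 0

def push_sample(x):
    global ptr
    delay_line[ptr] = x
    ptr = (ptr + 1) % BUF_LEN   # the hardware mode makes this "+1" wrap

for i in range(10):             # write 10 samples into an 8-entry buffer
    push_sample(float(i))
print(delay_line)   # the two oldest slots were overwritten by 8.0 and 9.0
```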
Another memory mode that comes to mind is vector. On, say, an old Cray machine, the memory architecture was designed to allow for very efficient transfer of vectors of data in one operation - filling a vector register with data. The memory controller could manage gather, scatter, and strided memory accesses to assemble the vector. Memory had many channels, and for the time, the machine had insane performance.
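Strided and gather accesses look like this in software (a sketch; the point of the Cray design was that the memory system itself did this in one operation):

```python
memory = list(range(100))            # stand-in for main memory

# Strided load: every 10th word, e.g. one column of a 10x10 matrix
# stored row-major, starting at offset 3.
stride_load = memory[3::10]

# Gather: fetch from an arbitrary list of addresses into one vector.
addresses = [42, 7, 99, 7]
gather_load = [memory[a] for a in addresses]

print(stride_load)    # [3, 13, 23, 33, 43, 53, 63, 73, 83, 93]
print(gather_load)    # [42, 7, 99, 7]
```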
You also get tagged memory. This is more a mix of additional bits in memory and a CPU ISA that uses those extra bits for all sorts of interesting things (instruction versus data, floating point value, integer value, pointer value, empty/full, locked).

As an aside, this is because DRAM stores a bit using one transistor plus a capacitor, while SRAM requires four or six transistors per bit. Both are volatile (they lose their contents when powered off), but because those tiny capacitors discharge quickly, DRAM needs to be constantly “refreshed,” while SRAM will maintain its state as long as power is applied.

Still being used. Tape is cheap and efficient for offline storage.

The original Univac I used paper tape for I/O, magnetic tape for permanent storage, and an acoustic delay line for working memory. The entire working memory was only 1K of 72-bit double words (the smallest addressable units), each 36-bit word consisting of 6 “bytes” of 6 bits each. Of the 1000 double words, only 100 were available for read or write at any time; the other 900 were in transit through the 100 mercury-filled delay-line tanks, traveling as acoustic signals with transducers at each end. Among other things, programmers sometimes tried to write “minimum-latency” code, using tables of execution times for the various primitive instructions and trying to choose a storage address in such a way as to minimize the amount of time waiting for that address to be writable - and similarly for memory reads. I knew a few of the programmers and it was no fun. All this went out the window when the first assemblers came around, since now all memory allocation was relative and the programmer no longer specified absolute addresses. Although nowadays assembly language is considered the most primitive kind of programming, doing it on the bare metal was even worse.
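Minimum-latency coding amounts to a bit of modular arithmetic: pick the operand address that will be passing the read/write point just as the current instruction finishes. Here is a toy model with invented timings (not the real Univac instruction tables):

```python
# Toy model of minimum-latency coding on a cyclic memory.
# Assumed numbers: 100 addresses cycle past every 1000 microseconds,
# so a given address is accessible once every 10 us within the cycle.
CYCLE_US = 1000
N_ADDRESSES = 100
US_PER_ADDR = CYCLE_US // N_ADDRESSES

def best_address(now_us, exec_time_us):
    """Address that becomes accessible soonest after the instruction ends."""
    done = now_us + exec_time_us
    # The address passing the head at time t is (t / US_PER_ADDR) mod N.
    return (done // US_PER_ADDR) % N_ADDRESSES

print(best_address(now_us=0, exec_time_us=57))   # 5 -- next address up
```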

IBM switched the cache on their Power CPUs to eDRAM a while ago. If I remember correctly, the reason was its higher density (at the expense of speed), since they needed to keep many cores fed with data. Without the switch and the increased cache sizes it allowed, the cores would be underutilized due to cache bottlenecks.

It might help to give a kind of taxonomy, since a lot of different ways of looking at memories are being mixed together here.
At the basic level there is the memory cell. DRAM, SRAM and varieties of ROMs are all different here, as are magnetic memory cells or things like fuses which are used to burn in information about a device, like the ID.
Then there are memories distinguished by hardware placed around them, like CAMs, Register Files and FIFOs. There are often hooks in these things that allow you to test the memories like any other memory, but the rest of the system sees them with their implemented functionality.
Caches are in this category now, but in the old days a lot of cache management was done in software.
Finally there are memories which are defined by software or microcode, like stacks.

And since we are discussing weird memories, we can’t forget the Williams Tube. Never saw one of these things outside a museum, but there was an article about them in Astounding in the 1950s.

I’m doing “embedded” devices. For the stuff I’m working on, we still use serial-access memory.

I set a start address, then clock out subsequent data bytes in series. I have also worked with memory that could only ever be read from the start, then serially, but that was only 256 bits.

Regarding DRAM and SSRAM: note that although they are RAM in the sense that you can randomly address any memory line, doing so is slow. They’re actually accessed much the same way as the stuff I’m using: you set up a start address, then clock out subsequent lines. The setup is much slower than the subsequent access. With DRAM and SSRAM the work-around is that you can set up the next address in advance, while you are still clocking out data, so if you’ve got something else going on you don’t necessarily have to just wait for the setup time.
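A toy cost model makes the effect concrete (the cycle counts are invented, just to show the ratio between streaming and truly random access):

```python
# Toy cost model: setting up an address (opening a row) is slow;
# streaming consecutive words afterwards is fast. Invented cycle counts.
SETUP_CYCLES = 15      # address/row setup
BURST_CYCLES = 1       # each subsequent sequential word

def cost(words, sequential):
    if sequential:
        # One setup, then stream every word out back-to-back.
        return SETUP_CYCLES + words * BURST_CYCLES
    # Pay the full setup again for every single access.
    return words * (SETUP_CYCLES + BURST_CYCLES)

print(cost(64, sequential=True))    # 79 cycles
print(cost(64, sequential=False))   # 1024 cycles -- "random" access is slow
```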

Quibble: Absolute addressing is and was still used in assembly language. Relocatable code is also used (even in assembly language), but more particularly when using a high-level language, and/or an operating system, and/or large programs on hardware that supports relocatable code without significant penalties.

Where I work, all of the assembly language is using absolute addressing.

I think Hari might be fooled by what looks like relative addressing in assembly code, though the assembler constructs an absolute address from the constant. The LGP-21 I learned on, which was from 1962, had absolute addressing only. Real relative addressing with index registers came much later.
The first assembler I know of was for the IAS machine, written by my first PhD adviser who was a student of von Neumann’s.

A lot of great answers here. Thank you, all.

I was asking about ways of accessing the hardware.

The dram vs sram tangent came up because I mentioned that’s where I’d heard about content-addressable memory, the only other type of memory access I’d heard of. (Thanks, Voyager, for providing the name).

Nitpick: In the early 1970s the “World’s fastest NMOS RAM” was a pseudo-static device with, IIRC, 3 transistors per bit. Chips came in two sizes: 768-bit and 1K-bit. (Yes, that’s kilobit with a K.)

There’s much potential for terminological ambiguity. In some architectures almost every address is “relative” (and there may be multiple layers of “relativity”). In assembly language, addresses are almost always specified symbolically, and are then converted to relative addresses. Whatever changes are made to addresses at assembly or linkage time would also be applied during the compilation of higher-level code. IIRC, on S370 the term “absolute address” was applied to addresses which had not only been converted via DAT etc., but had prefix-register substitution where appropriate. (And even these might not be physical addresses, cf. reconfiguration panel.)

And since nostalgia for computers which used drum for main memory has come up in this thread, I’ll link to the famous tale of Mel, a Real Programmer, who wrote a blackjack program for such a machine.

That machine was an LGP-30, the vacuum tube version of the LGP-21 I mentioned. (The LGP-21 came after the LGP-30 despite its lower number.) It also had a drum memory (4K of 32-bit words), and we had a wheel giving the best next address. It did not use ASCII coding or the traditional hex digits A, B, C, D, E, F either. There were 6-bit codes for the letters, chosen so that if you took the code for the letter “B” and chopped off the 2 least significant bits, you’d get the opcode for the Bring instruction. And yes, there were only 16 instructions.

Good times! There is an LGP-30 in the Computer Museum in Mountain View, CA by the way.