Layman's answer: How does memory work in a computer?

In the good old days, computers used “core memory”. “Core” because the storage elements were little magnetic donuts (ferrite cores) smaller than beads. Factories in the Far East paid workers to thread micro-thin wires through the donut holes. The cores were arranged in an array: a 1 Kbit (1024-bit) memory, for example, would be a 32-by-32 array of tiny bead-sized toruses (tori?). There would be a wire through each bead on the vertical and another on the horizontal, so each bead was addressable by one vertical and one horizontal wire. There was also one “sense” wire threaded through all of the magnetic beads.
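
To make the wire addressing concrete, here is a tiny Python sketch; the function name and the row-major layout are my own illustration, not anything standard. It decodes a bit address in a 1024-bit plane into the vertical/horizontal wire pair that selects one core:

```python
# Hypothetical sketch: decode a bit address (0-1023) into the pair of
# wires whose intersection selects one core in a 32 x 32 plane.
PLANE_SIZE = 32  # 32 x 32 = 1024 cores

def address_to_wires(addr):
    """Return (vertical, horizontal) wire numbers for a bit address."""
    if not 0 <= addr < PLANE_SIZE * PLANE_SIZE:
        raise ValueError("address out of range")
    return addr % PLANE_SIZE, addr // PLANE_SIZE

# Bit 100, for instance, lands where vertical wire 4 crosses
# horizontal wire 3 -- the same core used in the example below.
print(address_to_wires(100))  # (4, 3)
```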

If the donut was magnetized one way it was a “1”, and the other direction a “0”. Each of the horizontal and vertical array wires carried half the current necessary to flip a donut's direction of magnetization, so only the core at the intersection of the two energized wires saw enough current to flip. To read, say, core (4, 3), the computer would energize vertical wire 4 and horizontal wire 3 to drive that core toward “0”, and the sense wire would detect a pulse if the magnetic field actually flipped. If it flipped, the bit had been a “1”; if not, it was a “0”. Reads were therefore destructive: a core that had held a “1” was now a “0”, so to keep the memory value intact the wires were then energized the opposite way to flip it back to “1” for future reads.
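
In software terms, a core read amounts to “drive the core toward 0, see whether anything flipped, and if it did, put the 1 back.” Here is a rough Python model of that behavior; the CorePlane class and its method names are my own invention for illustration, not any real controller's interface:

```python
class CorePlane:
    """Toy model of a 32 x 32 core plane with destructive reads."""

    def __init__(self, size=32):
        self.size = size
        self.cores = [[0] * size for _ in range(size)]  # magnetization states

    def write(self, x, y, bit):
        # Writing is the simple case: drive the two selected wires
        # (half current each) to force the core to the wanted state.
        self.cores[y][x] = bit

    def read(self, x, y):
        # Drive the core toward 0. If it was a 1, the field flips and
        # the sense wire sees a pulse; that pulse *is* the read-out.
        sensed_pulse = self.cores[y][x] == 1
        self.cores[y][x] = 0          # the read destroyed the value...
        if sensed_pulse:
            self.write(x, y, 1)       # ...so restore the 1 we just erased
        return 1 if sensed_pulse else 0

plane = CorePlane()
plane.write(4, 3, 1)
print(plane.read(4, 3))  # 1, and the value survives the read
print(plane.read(4, 3))  # still 1, thanks to the write-back
```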

Obviously, writing was a lot simpler: just energize the wires. Core was expensive and time-consuming to manufacture. One fellow I worked with said he learned IBM 360 assembler code because the computer he worked with had only 40K (!) bytes of RAM, so complex COBOL programs often were too big for it. (There was a COBOL feature to load separate overlays for different parts of the program.) Core was also much slower than modern electronic memory. The term “core memory” still turns up from time to time, but nowadays it just means the electronic RAM that a computer works with.

As should be apparent by now, there is a hierarchy of memory in any computer system, made even more complicated if you consider changes in technology.
At the very lowest level, processors have fuses that store information such as the device ID, the voltage that optimizes speed and power, when it was made, the lot and wafer IDs, and the X-Y location on the wafer. Then there is the ROM already mentioned. There are tons of single-bit memory elements called flip-flops, which are built using feedback among the handful of gates that make each one up (see the sketch after this paragraph). The next level up is small, regular memory structures such as register files. These are also built of transistors, but their regular structure lets them be made more compact than random logic. Then there are on-chip caches, both data and instruction, which hold frequently used instructions and data so they can be accessed more quickly; these are usually static RAM (SRAM), which takes more transistors per bit but is much faster than dynamic RAM. Then we have off-chip RAM, and then off-chip permanent storage, formerly disk, now semiconductor.
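
To give a feel for what “feedback in a few gates” means, here is a small Python sketch of a set-reset latch made of two cross-coupled NOR gates. Real latches settle through analog circuit dynamics; the loop below just iterates the two gate equations until the outputs stop changing, which is a simplification of my own:

```python
def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q=0, qn=1):
    """Cross-coupled NOR latch: each gate's output feeds the other's input."""
    for _ in range(4):                  # iterate until the feedback settles
        q_new = nor(r, qn)
        qn_new = nor(s, q)
        if (q_new, qn_new) == (q, qn):
            break
        q, qn = q_new, qn_new
    return q, qn

q, qn = sr_latch(s=1, r=0)              # set: q becomes 1
q, qn = sr_latch(s=0, r=0, q=q, qn=qn)  # hold: the feedback remembers the 1
print(q)  # 1: the bit is stored purely in the gate feedback loop
```
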
You don’t even need semiconductors. One of the very first memory elements was the Williams tube, which stored data as charge spots on the face of a CRT. My first logic lab was building a memory from an acoustic delay line: you converted the data into sound waves, pumped them through a long tube, got them back at the far end, converted them back to bits, and then read or recirculated them.
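
A delay line behaves like a circulating shift register: bits march through the medium in order, and the electronics re-inject each bit at the far end unless you overwrite it as it emerges. Here is a rough Python model of that idea, with a deque standing in for the acoustic tube (the class and names are mine, for illustration only):

```python
from collections import deque

class DelayLineMemory:
    """Toy recirculating memory: bits circulate, and you can only touch
    the bit that happens to be emerging from the 'tube' right now."""

    def __init__(self, n_bits=16):
        self.line = deque([0] * n_bits)   # bits in flight through the tube

    def tick(self, write=None):
        """One bit emerges; read it, optionally replace it, recirculate it."""
        bit = self.line.popleft()         # bit arrives at the receiver
        if write is not None:
            bit = write                   # overwrite instead of recirculating
        self.line.append(bit)             # pump it back into the tube
        return bit

mem = DelayLineMemory(4)
mem.tick(write=1)        # store a 1 in the slot currently emerging
for _ in range(3):       # wait for the rest of the loop to go by
    mem.tick()
print(mem.tick())        # 1: our bit has come around again
```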