In the good old days, computers used “core memory”. “Core” because it was made of little magnetic donuts (ferrite cores) smaller than beads. Factories in the Far East paid workers to thread micro-thin wires through the donut holes. The cores were arranged in an array - for example, a 1 Kbit (1024-bit) memory would be an array of tiny bead-sized toruses (tori?), 32 by 32. There would be a wire through each bead on the vertical and another on the horizontal, so each bead was addressable by one vertical and one horizontal wire. There was also one “sense” wire threaded through all of the magnetic beads.
If the donut was magnetized one way it was a “1”, and the other direction a “0”. Each of the horizontal and vertical array wires carried half the current necessary to “flip” the donut’s direction of magnetization. So to address, say, core 4,3, the computer would energize vertical wire 4 and horizontal wire 3 to flip the magnetization to “0”, and the sense wire would detect the impulse if the magnetic field flipped. If it flipped, the bit was a “1”; if not, it was a “0”. Reading was therefore destructive, so to keep the memory value intact, whenever a “1” was detected (the core now holding a “0”), the wires were energized the opposite way to flip it back to “1” for future reads.
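If it helps to see the destructive-read-plus-write-back cycle spelled out, here is a tiny toy simulation in Python. It is only a sketch of the logic described above (the CorePlane class, method names, and 32x32 size are my own invention for illustration), not a model of the actual electronics:

```python
class CorePlane:
    """Toy model of one coincident-current core plane."""

    def __init__(self, rows=32, cols=32):
        # Each "core" stores its magnetization state: True = "1", False = "0".
        self.cores = [[False] * cols for _ in range(rows)]

    def write(self, x, y, bit):
        # Writing drives full coincident current in whichever direction
        # sets the desired magnetization.
        self.cores[y][x] = bool(bit)

    def read(self, x, y):
        # Destructive read: drive the addressed core toward "0".
        # The sense wire sees a pulse only if the core actually flipped,
        # i.e. only if it was holding a "1".
        was_one = self.cores[y][x]
        self.cores[y][x] = False          # the core is now "0" regardless
        if was_one:
            self.write(x, y, True)        # write-back so the value survives
        return 1 if was_one else 0


plane = CorePlane()
plane.write(4, 3, 1)
print(plane.read(4, 3))   # -> 1 (and the core is restored to "1")
print(plane.read(4, 3))   # -> 1 again, thanks to the write-back
```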
Obviously, write was a lot simpler - just energize the wires. Core was expensive and time-consuming to manufacture. One fellow I worked with said he learned IBM 360 assembler because the computer he worked with had only 40K (!) bytes of RAM, so complex COBOL programs often were too big for it. (There was a COBOL feature to load separate overlays for different parts of the program.) Core was also much slower than modern electronic memory. The term “core memory” still comes up from time to time, but nowadays it means the electronic RAM that a computer works with.