Short answer: Yes, indexes are magic.
The very simplest thing you could do with an index is store a copy of that column in alphabetical order (just that column, not the whole of each row), along with a pointer to the place in the full file where the rest of the row can be found. That is just how the index in a book works: a list of items, alphabetized so you can quickly find what you’re looking for, with page numbers pointing to the full text for each item.
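Here is a minimal sketch of that idea in Python. The data is made up, and the "pointer" is just a row number (a real database would store a byte offset into a file), but the shape is the same: the index holds only the sorted column plus pointers back to the full rows.

```python
import bisect

# The full "table", in no particular order. Hypothetical sample data.
rows = [
    ("Zelda", "555-0199"),
    ("Abraham", "555-0101"),
    ("Mona", "555-0150"),
]

# The index: just the name column, sorted, each entry carrying a
# pointer (here, the row number) back to the full row.
index = sorted((name, i) for i, (name, _) in enumerate(rows))

def lookup(name):
    """Binary-search the sorted index, then follow the pointer to the row."""
    pos = bisect.bisect_left(index, (name, 0))
    if pos < len(index) and index[pos][0] == name:
        return rows[index[pos][1]]
    return None

print(lookup("Mona"))  # ('Mona', '555-0150')
```

Note that the table itself never gets reordered; only the small index is kept sorted, which is why a table can have many indexes on different columns at once.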
Note that you can find items in the alphabetized index quickly and easily. A HUGE example would be a phone book, with everybody’s names alphabetized. If phone books were listed in phone number order, and you wanted to find Abraham Pleisczkowicz, good luck. You’d have to read the list item by item until you find it. That’s the “full scan” method. For big tables: SLOW.
But note how quickly you can find the name when the list is in alpha order. You don’t need to do a full scan. In computer databases, the equivalent thing can be done to find records quickly with an index.
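You can see the difference by counting comparisons. This sketch (with invented names) searches the same sorted list both ways, once item by item and once by repeatedly halving the search range, which is what "searching an alphabetized list" amounts to:

```python
# 100,000 made-up names, already in sorted order.
names = sorted(f"Person{i:05d}" for i in range(100_000))

def full_scan(target):
    """The 'read item by item' method: up to n comparisons."""
    steps = 0
    for name in names:
        steps += 1
        if name == target:
            return steps
    return steps

def binary_search(target):
    """The alphabetized method: about log2(n) comparisons."""
    lo, hi, steps = 0, len(names) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if names[mid] == target:
            return steps
        elif names[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

print(full_scan("Person99999"))      # 100000 comparisons
print(binary_search("Person99999"))  # 17 comparisons
```

For the worst-case name, the full scan takes 100,000 comparisons while the binary search takes about 17. Doubling the table adds one comparison to the binary search and 100,000 to the scan, which is why indexes matter more and more as tables grow.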
And that’s just the easy way. There are more sophisticated structures, like hash tables (“hashing”) and balanced search trees such as the B-tree (which, despite the name, is not a binary tree). This gets heavy. We’re talking about heavy-duty research by Computer Science graduate students, professors, and other PhDs here. Major software corporations like Microsoft or Oracle hire squadrons of PhDs just to spend their lives developing better and better sorting, searching, and indexing algorithms. It’s a whole field of endeavor.
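To give a taste of the hashing approach: Python’s built-in dict is a hash table, so it makes the point directly. One hash computation jumps straight to the row in roughly constant time; the trade-off (versus the sorted index above) is that a hash index can’t answer range queries like “all names starting with P”. Sample data is invented.

```python
# The full table, unordered. Hypothetical sample data.
rows = [
    ("Abraham", "555-0101"),
    ("Mona", "555-0150"),
    ("Zelda", "555-0199"),
]

# A toy hash index: name -> row number, backed by a hash table (dict).
hash_index = {name: i for i, (name, _) in enumerate(rows)}

def lookup(name):
    """One hash lookup, then follow the pointer; no scanning, no sorting."""
    i = hash_index.get(name)
    return rows[i] if i is not None else None

print(lookup("Zelda"))  # ('Zelda', '555-0199')
```

Real database hash and B-tree indexes are far more elaborate (they must survive inserts, deletes, and crashes, and live on disk), but the core idea is this simple.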
Not only do you want the index column to be easily searchable, but for really BIG databases, you also want to do it with the fewest possible physical disk reads. So there are whole areas of research into ways to compress indexes so you can get the most data into each sector, while still preserving whatever organization you need to search fast. People write their doctoral theses on these things, and then there are whole books full of mathematical algorithms that rival Quantum Physics in their complexity.