I think another important idea when learning how computers work is the way each layer is built on top of the one below it. Essentially, once a certain layer is working, you can more or less forget about it. Software is simply too big these days to write in machine language. We are at the point where time to code matters more than time to execute. You could write a modern OS in assembly language, but it would take forever and a day. The time it takes to develop and maintain software matters more now than raw machine speed. In the end, from an economic standpoint, which is cheaper: paying a programmer to spend all those man-hours hand-optimizing, when the result would save only a few machine-hours once implemented?
So you end up with layers. When you publish a post on the internet, you're relying on a program, probably written in a high-level language, that interacts with the operating system, which in turn relies on architectural features such as the instruction set. Programs don't interact with the hardware directly anymore; the OS does it on the program's behalf. This is slower, but it's far easier to let the OS developers build a single interface and let everyone else plug their pieces into it.
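To make the stack concrete, here is a minimal C++ sketch (the function name `save_post` and the file path are my own invention) of what "producing a post" might bottom out in: a standard-library call that the OS eventually turns into driver and hardware work.

```cpp
#include <fstream>
#include <string>

// Each call here rides on the layer below it:
//   std::ofstream (C++ standard library)
//     -> the OS file API (e.g. write() on Linux, WriteFile() on Windows)
//       -> the filesystem driver
//         -> the disk controller hardware.
// The program never touches the hardware; the OS acts on its behalf.
bool save_post(const std::string& path, const std::string& text) {
    std::ofstream out(path);  // asks the OS to open/create the file
    out << text;              // buffered by the library, flushed via OS calls
    return out.good();        // true if every layer below reported success
}
```

A caller just writes `save_post("post.txt", "hello, layers")` and never thinks about sectors or controllers.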
Hardware manufacturers write drivers that talk to the hardware and present a known, fairly standard interface to the OS. Software interacts the same way, through a standardized set of calls (like the Windows API). Otherwise, Firefox would need to talk to your sound card directly to play a sound file. Old DOS games did exactly this: you had to pick your sound card model before the game could play audio. I'm pretty sure this is what DirectX was all about.
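The driver idea can be sketched in C++ as one abstract interface with vendor-specific implementations behind it (all names here are invented for illustration; real driver interfaces are far larger):

```cpp
#include <iostream>
#include <string>
#include <vector>

// The OS defines ONE abstract interface...
struct AudioDriver {
    virtual ~AudioDriver() = default;
    virtual std::string name() const = 0;
    virtual void play(const std::vector<short>& samples) = 0;
};

// ...and each hardware vendor supplies its own implementation of it.
struct VendorACard : AudioDriver {
    std::string name() const override { return "VendorA"; }
    void play(const std::vector<short>& samples) override {
        std::cout << name() << " playing " << samples.size() << " samples\n";
    }
};

// An application (think Firefox) codes against the interface only,
// never against a specific card -- unlike the old DOS games.
void play_sound(AudioDriver& driver, const std::vector<short>& samples) {
    driver.play(samples);
}
```

Swap in a different card's driver and `play_sound` doesn't change at all; that is the whole point of the standard interface.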
The OS handles all the interaction with the processor for these programs, which were written by people who will probably never write an operating system. There is some help coming up from below, too: processors include features that make the OS designer's job a bit easier. There is, for example, a cache designed exclusively to hold virtual-memory page-table entries (the TLB), which is much faster than fetching them from memory.
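As a rough illustration of what that cache buys you, here is a toy C++ model of address translation (the structure is a simplified invention of mine; real hardware does this in dedicated circuitry):

```cpp
#include <cstdint>
#include <unordered_map>

// Toy model: a virtual address splits into a page number and an offset.
// The page table maps virtual pages to physical frames; the "TLB" caches
// recent lookups so most translations skip the slow walk through memory.
constexpr std::uint64_t PAGE_SIZE = 4096;

struct Mmu {
    std::unordered_map<std::uint64_t, std::uint64_t> page_table; // vpage -> frame
    std::unordered_map<std::uint64_t, std::uint64_t> tlb;        // cached subset
    int slow_walks = 0;                                          // page-table hits

    std::uint64_t translate(std::uint64_t vaddr) {
        std::uint64_t vpage  = vaddr / PAGE_SIZE;
        std::uint64_t offset = vaddr % PAGE_SIZE;
        auto hit = tlb.find(vpage);
        if (hit == tlb.end()) {  // TLB miss: do the slow page-table walk
            ++slow_walks;
            hit = tlb.emplace(vpage, page_table.at(vpage)).first;
        }
        return hit->second * PAGE_SIZE + offset;  // physical address
    }
};
```

Two accesses to the same page cost only one slow walk; the second one hits the TLB.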
Anyway, I know very little about OS design, yet I can still write a program in C++ that does something useful. Later on, someone might rely on my code to do something they need. Maybe it would be more efficient if they wrote everything from scratch and optimized it for their exact needs, but if my program is already there, they just use it like any other pre-existing resource. You don't need to know how to do everything below you in order to provide a base for the people above you. Sometime in the future, someone else might use their program that used mine, and so on.
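That kind of layered reuse looks like this in miniature (the function names are my own; the point is that each layer builds on the one below without reading its internals):

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// I have no idea how std::sort is implemented, but I can build on it...
std::vector<int> top_n(std::vector<int> values, std::size_t n) {
    std::sort(values.begin(), values.end(), std::greater<int>());
    if (values.size() > n) values.resize(n);
    return values;
}

// ...and someone else can build on top_n without ever reading its body.
int best_score(const std::vector<int>& scores) {
    return top_n(scores, 1).at(0);
}
```

Each layer only needs to trust the one directly beneath it.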
You can make or find tools that do the dirty work for you, and once they're complete you don't need to know about their internals anymore. Going through all of these layers is less efficient for the machine, but far more efficient in terms of human comprehension. The history of computing has followed this model: as time has gone by, we've built layer upon layer to get to where we are today.
Originally, computers had to be fed programs in binary, on punch cards. As mentioned previously, assembly language is simply a way of writing processor instructions in something more readable. From there, people built more abstract languages, which in turn allowed them to build more complex operating systems.
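For a taste of how direct that mapping is, here is a toy C++ "disassembler" covering three real single-byte x86 opcodes (the decoding logic is vastly simplified; real instructions are variable-length and take operands):

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Assembly mnemonics are just readable names for instruction bytes.
// These three opcodes really are single-byte x86 instructions.
const std::map<std::uint8_t, std::string> MNEMONICS = {
    {0x90, "nop"},  // do nothing
    {0xC3, "ret"},  // return from procedure
    {0xF4, "hlt"},  // halt the processor
};

std::vector<std::string> disassemble(const std::vector<std::uint8_t>& bytes) {
    std::vector<std::string> out;
    for (std::uint8_t b : bytes) {
        auto it = MNEMONICS.find(b);
        out.push_back(it != MNEMONICS.end() ? it->second : "??");
    }
    return out;
}
```

Feed it the bytes `0x90 0xC3` and you get back `nop ret`; an assembler is essentially this table run in the other direction.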