How familiar with the hardware aspects of a computer system are programmers?

I know that building a computer from scrap parts really isn’t all that difficult. But can the majority of computer programmers do it?

Since programmers are mainly concerned with the software aspects of a computer system, I wonder whether they are familiar enough with the hardware that they could build a computer from scratch.

Is it possible that a software programmer wouldn’t be aware of the various electronic circuitry of a computer such that they couldn’t build a computer from scratch?

Now I sort of know I’m asking two different questions here.

One concerns PCs. Can ALL programmers determine the hardware of a PC system and put the various circuits together to actually build a standard PC? If not all, then which ones (in particular) can?

The other question concerns non-embedded (or programmable) computer systems in general. How much do programmers know about the generic hardware of these types of computer systems? Is there ever any need to know?
I find it difficult to imagine someone programming instructions into a machine without knowing at least some hardware components. I don’t know why this is, but there you go.
So, to recap, how much do programmers know about the hardware?

Okay, looks like I might be the first programmer to reply.

I know a lot about computer hardware, in terms of capabilities and theory. I learned some stuff in university about how the different parts of a desktop (or similar machine) talk to each other, the basics of beginning to design a microprocessor, and similar stuff. I know a fair bit about networking hardware, which has been useful as a web/multitier systems programmer (and useful as a general computer user as well, of course).

As far as practical know-how of putting stuff together goes, I’m probably on par with a very computer-literate layperson or user. I know how to plug a whole bunch of different kinds of things into any computer, swap peripheral bays on a Dell notebook, and I have some experience taking my old desktop tower apart to connect different IDE devices (secondary hard drive, CD burner drive). I haven’t dared try that with the brand-new Medion, and got a USB secondary hard drive instead of an internal IDE one. (Also because the USB drive just seemed all kinds of cool, even if it was more $$$.)

In general, no, there isn’t much need for a programmer to learn the practical details of putting computers together, and there’s so much to learn in the software field that I’d rather be reading online about what you can do with the new version of C#.NET or how to build a hash index on SQL Server, say, than how to hook up a new sound card. Mileage may vary with other programmers.

Most programmers with an undergraduate degree in computer science have to take some computer architecture and hardware classes. So they should be reasonably familiar with the components of a computer. How much a given programmer knows about hardware will vary with their background and inclination.

To go any further than that, what do you mean by build a computer from scratch? Take a handful of chips and a CPU and build something that does something useful? Well, we actually had to breadboard a computer back in college, although it was nothing fancy. If you mean buy a motherboard and CPU from a catalog and stick them in a case, any reasonably competent person with a screwdriver can do that.

However it is true that nowadays, most programmers need to know very little about the actual hardware. Unless you are writing very high performance software (e.g. games or real time applications), compilers will do a very good job of optimizing for a particular architecture. (The folks who write the compilers, of course, have to understand the given machine architecture.)

Most of the programmers I know who aren’t doing hardware-related programming have only the most basic understanding of how hardware works. It’s occasionally important to understand a little bit more about how a CPU operates, but beyond that you’d have to be dealing with something specialized to require more detailed knowledge.

In general, unless you are programming to access or control a specific device, a programmer doesn’t need to know any more about hardware than any competent user. As a generalization, I would say that programmers probably know more than average, but because they tend to be more technically minded, not because of any need to know.

I’ve worked with programmers for years now, and from my personal experience the ones I’ve been around don’t know too much about PC hardware. They are familiar with the concepts but have little practical skill in that area. That said, most of them could probably build a PC with a little help; it’s not really all that difficult, you just have to do things in a certain order.

This is a pretty subjective question, because the background of programmers varies from person to person. Some programmers work at much higher application-building levels, using tools like .NET and other integrated development environments to do their major work, and others are more experienced in lower-level programming, perhaps working with system calls or even the computer’s instruction set itself (i.e. usually using the native assembly language).

How much “from scratch” do you mean? Are we talking about purchasing a motherboard, case, processor, fans, etc etc and putting all the parts together? Or are we talking about actually creating the motherboard and all the circuitry itself from raw materials?

I’m a programmer, and personally I could do the former, but the latter would probably require specialized tools I don’t have, and further research on my part. You see, my specialty is computer science, not electrical engineering. So while I can understand the circuit designs, even perhaps down to the AND / OR gates, flip-flops, registers, and so forth, I wouldn’t feel comfortable with the physical tasks of soldering all the pieces together, the electrical aspects (building a power supply from scratch), etc. So how much “from scratch” are we really talking about?

Generally speaking, I would say that a good programmer who has actually gotten a good background in Computer Science (and not just, say, a .NET certification) is likely to have a decent understanding of how a computer works at least to the level of understanding its hardware components, how they fit together, and probably a good bit about the instruction sets of computers, how assembly languages work, registers and memory, devices and interrupts, and a variety of other hardware aspects. Some may even have learned about integrated circuit design (I had a course in Digital Networks, myself, when working on my CS degree).

Programmers who learned just a specialized high-end language or environment or two, and who primarily build things at that application level, are probably less likely to know the deep details, although I think a good programmer of any stripe would have the ability to figure out how to put a computer together from pre-fabricated parts (motherboard, power supply, processor, etc.) easily, just by reading the manual that comes with a motherboard. Not that all would know this already, but it wouldn’t be hard to train oneself to do it.

But overall, it varies from programmer to programmer. Different people work at different levels of abstraction in their day-to-day programming tasks. Some write device drivers and must be intimately familiar with certain aspects of hardware. Others write web applications and only need to work with high-level concepts, and usually would not need to interact with the hardware directly. Others work on operating systems and are constantly interacting with the computer’s native instruction set. This is why much of what is built in computer systems (at least well-designed systems) is set up in layers of abstraction, so that a person working at one specific level only needs to focus on that level. (A better programmer will understand more than just one level, though).

I’d say that programmers who have to work at the operating system or device driver level will generally have a better deep understanding of the hardware than the high-end application developer. But in computer science, there’s always something else to learn. I seriously doubt any one person knows it ALL – but that’s no reason I shouldn’t strive for that!

I haven’t time to answer all of the questions (there are already good replies anyhoo).

Some of my colleagues (one guy in particular) would have very little idea what goes on inside the box. If you are an applications programmer you don’t have to know. These days there is often a ‘virtual machine’ layer involved anyway, so the code you write is always at a remove from the real hardware.

It really depends on how deep into hardware you are talking about. I doubt if most programmers are familiar with how a memory register or gate works at the transistor level, but really there isn’t any need for most software or hardware people to know that much. The military liked training techs at that level, at least when I was in: teaching component-level theory, then moving us to “computer” trainers that had a few bytes of memory and used paper tape for I/O. It was interesting to learn, but I’m not sure how much it has contributed to my IS career.

I don’t think it is all that useful for someone to know everything from low-level component hardware to application programming. It’s one thing to understand a simple computer like an Apple II, but I don’t think more than a small percentage of programming is done in assembly anymore. C can get down to quite a low level without being completely married to specific hardware, aside from word sizes. The systems I work on now are massively parallel database servers with the smallest increment at about one terabyte of storage. In the end they are just Intel boxes running NT or Unix, not so exotic. It’s no longer useful for me to know what is happening at the register level, so I stopped caring when I upgraded from my last 8088 box.

Right now you’re getting a lot of replies from programmers 'cause we’re all slacking off at work. :wink:

I’ll throw in another ‘it depends’. While I’m familiar with the operation of hardware only a bit less than Monstre appears to be, and have assembled computers from parts often, you really don’t need to know much about the hardware. If you’re doing low-level programming you may need to know the system’s endianness, bit lengths, and so on, but higher-level languages abstract all this away.
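To make the endianness point concrete: even from a high-level language you can see and control byte order when a binary format demands it. A minimal sketch in Python (standard library only; the “little on x86” remark is the usual case, not a guarantee):

```python
import struct
import sys

# The host's native byte order, as the interpreter reports it.
print(sys.byteorder)  # typically 'little' on x86 machines

value = 0x01020304

# Pack the same 32-bit integer explicitly big- and little-endian;
# low-level code has to agree with the wire/file format, not the host.
big = struct.pack(">I", value)
little = struct.pack("<I", value)

assert big == b"\x01\x02\x03\x04"
assert little == b"\x04\x03\x02\x01"
```

The explicit `>` and `<` format prefixes are exactly the kind of detail that disappears once you move up a layer of abstraction.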

You may be interested to know that when I was in University, the top guy in our programming contest class (went to the world final at least 3 years in a row, has more problems solved at the Valladolid problem archive than most entire countries put together, etc) had no idea how his laptop worked, or even the basics of electricity.

Well, I can reply because I teach programming but my classes don’t meet on Wednesdays. I’m at home sitting on the couch grading Java tests at the moment… (tests balanced on my left knee and laptop on my right…) :wink:

It’s not slacking; it’s thinking.

Anyway, let me toss in my two cents to the anecdote heap. In college, I designed a NAND gate from transistors (on paper, of course) and wrote at least one non-trivial program in machine code. It was interesting at the time, but it’s not something I’ll probably ever use again. But within spitting distance of me are a couple of guys who do moderate hardware-level programming.

I spent the semester writing some device drivers and an OS, so I do my share of assembly hacking, but if you handed me a case, a motherboard, a processor, and so forth and told me to synthesize a computer from those raw materials, I’d be utterly hopeless. I know how pipelining works, and how not to screw it up; I can build any gate out of NAND gates, and build those out of wire; I know far more about the x86 segmentation and virtual memory systems than anyone should have to know (hate that architecture so much, but that’s another thread). Those skills aren’t of any use when it comes to actually building the box. I know about (some of the) computer components, but only have a general idea of how they fit together in the real world. Even stuff like adding slave drives is beyond me.

[sub]I’m not slacking, I’m freeing my subconscious to work on the problem.[/sub]

That describes me accurately. I can get by, but only barely. And I hate peripherals. Especially printers. Printers are of the devil.

No joke. Whenever someone asks me about a printer issue, I tell them to pour a gallon of water on it and go get a new one.

Note that some computer science programs evolved out of the university’s EE department, while others split off from the Math department. While the difference in genesis may not be as significant now, in years past it meant that some programmers got force fed a lot of hardware dependent courseware while others were suffering through formal language theory and analysis of algorithms.

Which, in grad school, led to some of us thinking that the “build a multiplexor strictly out of NAND gates” question on the qualifying exams was a gimmee while the guys with math backgrounds got TKO’ed. But then, of course, the guys with EE backgrounds got slapped around on the theory part of the exam.
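That NAND question really is a gimmee once you’ve seen the trick, and it translates straight into software. A minimal sketch (mine, not the actual exam answer) that models each gate as a Python function and builds a 2-to-1 multiplexer strictly out of NANDs:

```python
def nand(a, b):
    # The one primitive gate: output is 0 only when both inputs are 1.
    return 0 if (a and b) else 1

def mux2(sel, a, b):
    # 2-to-1 multiplexer from four NAND gates:
    # output = a when sel == 0, b when sel == 1.
    not_sel = nand(sel, sel)        # NOT built from NAND
    pick_a = nand(a, not_sel)       # low exactly when a is selected and high
    pick_b = nand(b, sel)           # low exactly when b is selected and high
    return nand(pick_a, pick_b)     # NAND of the two acts as the final OR

# Exhaustive check of all eight input combinations against the spec.
for sel in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            assert mux2(sel, a, b) == (b if sel else a)
```

Four gates total; the same NAND-is-universal argument extends to any combinational circuit, which is presumably why the examiners liked it.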

If I wanted to learn about the hardware, I would have majored in Electrical Engineering. :wink:

To add my voice to the choir: most application programmers have a home network and tend to spend a bit more money on hardware than the average Joe, so you expect them to have at least the hardware understanding of a reasonably sophisticated home user, but not necessarily more than that. And it’s quite possible to be a competent application developer without ever having touched a screwdriver in your life.

Also, note that it is quite common to meet professional (as in, getting paid for it) programmers who do not actually have any formal training in their chosen field, other than a few weeks of technology-specific training. Now, that doesn’t automatically mean they’re not good at their jobs, but… Let’s just say you shouldn’t make too many assumptions about what subjects they’ve been exposed to during their education, because their formal education may very well have been in biology or English literature.

I feel comfortable enough with abstract circuit design, like building a full-adder on paper out of a given set of gates or screwing around in Tkgate, but I don’t own a soldering iron and I’ve never breadboarded anything.
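Paper exercises like that full-adder map directly onto code, too. A minimal sketch using the standard decomposition (two XORs for the sum, two ANDs and an OR for the carry), with Python’s bitwise operators standing in for the gates:

```python
def full_adder(a, b, cin):
    # Sum bit is the XOR of all three inputs; carry-out goes high
    # when at least two of the inputs are high.
    s1 = a ^ b                      # first XOR gate
    total = s1 ^ cin                # second XOR gate
    carry = (a & b) | (s1 & cin)    # two AND gates feeding an OR
    return total, carry

# Spot checks: 1 + 1 + 1 = 3, i.e. sum bit 1 with carry-out 1.
assert full_adder(1, 1, 1) == (1, 1)
assert full_adder(1, 0, 0) == (1, 0)
assert full_adder(1, 1, 0) == (0, 1)
```

Chain the carry-out of one of these into the carry-in of the next and you have a ripple-carry adder, which is about as far as most paper exercises go.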

Building a modern PC is different from soldering discrete components onto a backplane. The most advanced tool you need is a screwdriver and some of the parts only fit one way. (Old saying: “Beware programmers carrying screwdrivers.” ;)) I can do things like adding RAM and slave drives (even if I have to set jumpers or flip DIP switches) and different cards without any problem, and I intend to build a very nice PC out of parts once the one I have bites the dust. (It is a Compaq so it already bites, but so far it’s a nice enough Slackware machine I can’t justify the expense of a new box.)

Building a PC out of parts is more like playing with Legos. You don’t need much technical skill to create something worth having if you know where to buy parts. It doesn’t imply a high degree of technical knowledge.

Getting back on track, most programmers these days are hacking COBOL (ugh), Perl, C#, Java, or something similarly abstracted from the machine. Someone who creates Perl applications doesn’t need to know a lot about hardware or even things like OS design or low-level network protocol. He can rely on great gobs of pre-written code to figure all of that out for him, and do it efficiently enough he can devote brain cycles to writing a good regex or getting process synchronization right.

Programming is all about abstraction. Most programs lie atop great mountains of other code that simply exists to provide a nice, clean abstraction to the average application program. Very few programmers actually work at a level low enough they’d need to care about which opcodes the machine understands, or how to get the disk drive to spin up, find a specific sector, and read it in, or how to craft a packet bit-by-bit. Programmers don’t like wasting their time solving solved problems.

I may be more sensitive to this, as I work in embedded software design/code/test, but I agree informal reuse is quite common. It reminds me of a familiar joke about complicated compiler *.MAK files; one guy apparently figured out how these work in 1985, and we’ve just been copying and editing his file ever since.

Intentional design for reuse, unfortunately, is somewhat less common. Because of this, I’m wary of a programmer who doesn’t know what happens to his semaphore/message/interrupt/flag/packet once it leaves the layer he’s familiar with; he or she is just asking for a future late-night debugging session when it comes time for the maintenance phase, and that’s when they’ll discover the person who worked on that borrowed section of code left the company three months ago…

Even in a well-managed SW environment, this is true only to a certain point; it’s more of an ideal than a reality. I agree the guy who wants to rewrite, say, a USB driver from scratch is wasting his time. But to blithely assume you’ll never have to deal with problems in re-used code–SW, firmware, or at the HW level–is naive; even in cases where I’ve paid for rather standard code to be integrated into a product, the team has discovered issues unknown to the vendor. The famous Pentium floating-point divide bug is an excellent example of how HW issues even from reputable vendors can affect SW developers.