Again, Sam, we seem to be speaking about different problems. Your assembly line analogy and your repeated references to the element of human interaction both point to a focus on the problem of accurately registering the vote which a person intended to make upon entering the system. I agree that there is an irreducible element of error in this which cannot be eliminated.
However, I am talking about minimizing error in the accounting of ballots actually marked. This is a far simpler problem. In particular, by eliminating a mechanical interface in the production of a marked ballot, we eliminate a major source of noise. We also guarantee a discrete and easily differentiated data set.
Bit switches on modern computer systems are quite rare, and parity checking algorithms reduce the danger even further. Calibration and testing of results are well-understood problems for discrete data sets: boundary conditions are easily identifiable and accuracy is assured through predefined test data. Network communications algorithms have given us a very powerful toolset for detecting and correcting sequencing errors, bit switches, etc. during data transmission. Some simple care in problem design will additionally ensure that no single bit switch, even if undetected, can change a vote from one valid recipient to another.
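To make that last point concrete, here is a minimal sketch in Python. The candidate names and code words are purely illustrative, not taken from any real system: each valid recipient gets a code word carrying an even-parity bit, and a design-time check confirms that every pair of valid code words differs in at least two bits, so a single undetected bit switch can only produce an invalid word, never a vote for a different valid recipient.

from itertools import combinations

def with_parity(nibble):
    # Append an even-parity bit to a 4-bit value.
    parity = bin(nibble).count("1") % 2
    return (nibble << 1) | parity

# Hypothetical code words for four valid recipients.
CANDIDATE_CODES = {
    "Candidate A": with_parity(0b0001),
    "Candidate B": with_parity(0b0010),
    "Candidate C": with_parity(0b0100),
    "Candidate D": with_parity(0b1000),
}

def hamming(a, b):
    # Number of differing bits between two code words.
    return bin(a ^ b).count("1")

# Design-time check: no two valid codes are within one bit switch of each other.
for (n1, c1), (n2, c2) in combinations(CANDIDATE_CODES.items(), 2):
    assert hamming(c1, c2) >= 2, (n1, n2)

def decode(word):
    # Return the recipient for a received code word, or None if the word
    # is invalid (e.g. corrupted by a single bit switch).
    for name, code in CANDIDATE_CODES.items():
        if code == word:
            return name
    return None

# Flipping any single bit of a valid code yields an invalid word,
# not a vote for someone else.
code_a = CANDIDATE_CODES["Candidate A"]
for bit in range(5):
    assert decode(code_a ^ (1 << bit)) is None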
Of course, in any system it is necessary to understand where errors can originate in order to reduce them. This is, however, a very restricted problem set. The entry parameters are small and the data manipulations are minimal (trivial, even). The primary element of difficulty is one of scale, and the scale in this case brings with it no additional physical constraints or problems. As such, any modular solution will be able to scale upward with minimal risk of introducing error (errors existing in the module will, of course, be magnified accordingly).
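As a sketch of what I mean by a modular solution, consider one self-contained tally module per machine, with no dependence on any other machine. The names here (TallyModule, record_vote, results) are my own invention for illustration; the point is only that the entry parameters are a small fixed set and the manipulation is bare counting plus a digest the central repository can later verify.

import hashlib
import json

class TallyModule:
    """Counts votes for a fixed list of valid recipients on one machine."""

    def __init__(self, machine_id, candidates):
        self.machine_id = machine_id
        self.counts = {name: 0 for name in candidates}

    def record_vote(self, candidate):
        # Reject anything outside the small, predefined entry parameters.
        if candidate not in self.counts:
            raise ValueError("invalid candidate: %r" % candidate)
        self.counts[candidate] += 1

    def results(self):
        # A canonical serialization plus a digest, so the central
        # repository can verify that the record arrived intact.
        payload = json.dumps(
            {"machine": self.machine_id, "counts": self.counts},
            sort_keys=True,
        )
        digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        return payload, digest

# Example: one isolated module for one machine.
m = TallyModule("machine-042", ["Candidate A", "Candidate B"])
m.record_vote("Candidate A")
payload, digest = m.results()

Replicating that module across a thousand machines adds nothing new to the module itself; whatever error it contains is simply repeated, which is the magnification I mentioned.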
Remember, there is no requirement that the multiple independent machines have any communication or interaction with each other. The only increase in complexity as the problem scales upward occurs in the collection of results into a central repository. That is basically a problem of verification and addition. Binary addition is tight. Verification of data transfer is well understood and has been solved to a high degree of certainty in a number of contexts.
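Continuing the same hypothetical sketch, the central collection step consumes the (payload, digest) pairs produced by the TallyModule above: recompute each digest before trusting the payload, then add. The addition itself is trivial; the work is in the verification.

import hashlib
import json

def collect(results):
    # results: iterable of (payload, digest) pairs from the per-machine modules.
    totals = {}
    for payload, digest in results:
        # Verification: recompute the digest before trusting the payload.
        if hashlib.sha256(payload.encode("utf-8")).hexdigest() != digest:
            raise ValueError("corrupted transmission from a machine")
        record = json.loads(payload)
        for candidate, count in record["counts"].items():
            totals[candidate] = totals.get(candidate, 0) + count
    return totals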
Price? There is always a reasonable limit, but this is a problem affecting the core principle of our republic. I think it deserves a better answer than the one we are presently using.