Important consideration here. What do you mean by “Computer”?
Hardware will rarely if ever make an actual error flipping 1s and 0s unless there’s some physical damage. Cosmic rays aside, this is pretty straightforward. But a computer is many layers of software on top of that hardware: the machine code is written by humans (mostly), and the APIs and OS lie on top of that, also built by humans. Then there are more APIs, plus plug-ins and drivers, and eventually application code built on top of all of it. Even a simple system like a cash register or calculator has layers in which humans wrote the code and wrote the tests for the code. Even when a cash register arrives at the correct amount, there’s no certainty that that amount wasn’t changed when it was converted to LEDs on a screen, to ink on paper, or in the data transfer to the credit card processing system.
In some cases it’s not really a fallacy…there actually is no true Scotsman here. Humans are involved in every step of the process and the hardware is just copper and silicon until a human dirties it up with their flawed instructions.
Errors of mathematics are what I make when I try to do math that has those funny symbols. In the foreseeable future AI will be making errors of mathematics that can’t really be attributed to humans. Some code is near that already: it goes past the initial AI programming, creates new processes on its own, and is capable of checking its own work for mistakes, yet will still fail at that occasionally because of self-generated errors of mathematics. Possibly we may be there now, within some limitations. I don’t know if any advanced AI systems are capable of self-verification at a high enough level to be considered divorced from initial human errors, but we’re close. And I will point out those are software errors; aside from general dysfunctionality, the AI should not have issues with hardware oddities.
There are also some problems where the only known methods for solving them (whether implemented directly by humans or by machines) involve starting with an approximation, and then using that starting approximation to make successively better approximations, or otherwise making an approximation that is deemed “good enough” for some purposes. Usually, these approximations are, in fact, good enough, but anyone making use of such calculations must take care to ensure that they’re good enough.
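As a toy illustration of the successive-approximation idea (my own sketch, not tied to any particular real-world system): Newton’s method for square roots starts from a rough guess and keeps refining it until the change falls below whatever tolerance the caller has decided is “good enough”.

```python
# Successive approximation: Newton's method for sqrt(a).
# Stop when the step between iterates is below a chosen tolerance --
# deciding what counts as "good enough" is up to the caller.
def newton_sqrt(a, tolerance=1e-12, max_iterations=100):
    x = a if a > 1 else 1.0          # crude starting approximation
    for _ in range(max_iterations):
        better = 0.5 * (x + a / x)   # refine the approximation
        if abs(better - x) < tolerance:
            return better
        x = better
    return x                         # ran out of iterations; maybe not good enough

print(newton_sqrt(2.0))                   # 1.414213562373095...
print(newton_sqrt(2.0, tolerance=1e-2))   # cruder, but perhaps acceptable
```

Nothing here is an “error” in the machine: the tolerance is a human decision, and whether the result is good enough depends entirely on what it will be used for.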
There’s also a very large category of problems where the goal is to find some sort of optimal solution: The point where the value of some function is at a maximum, or at a minimum. Computers are very good at finding a local maximum or minimum, but you can never be sure that there’s not some even better maximum or minimum somewhere else. It’s like starting on slanted ground and always moving in the uphill direction, until you reach the top of a hill or mountain, and then you can go no higher, but there might be a higher hill on the opposite side of that valley that you never even explored.
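Here’s that picture in miniature (a made-up landscape, purely illustrative): a simple hill-climber that always steps uphill will happily stop on the first peak it reaches, even when a taller one exists.

```python
import math

# A made-up landscape with two peaks: a small hill near x=1
# and a taller one near x=4.
def height(x):
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def hill_climb(x, step=0.01):
    # Keep moving in whichever direction is uphill; stop at a peak.
    while True:
        if height(x + step) > height(x):
            x += step
        elif height(x - step) > height(x):
            x -= step
        else:
            return x

print(round(hill_climb(0.0), 2))  # ~1.0 -- stuck on the smaller hill
print(round(hill_climb(3.0), 2))  # ~4.0 -- found the taller one
```

Both answers are “correct” local maxima; the climber that started at 0 simply never saw the valley it would have had to cross.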
Nope. I’m at a conference and have been listening to a bunch of talks on silent data corruption. This can come from a variety of sources - cosmic rays flipping a bit, power droops inside a processor (from various causes) leading to the wrong bit being captured, and a host of others. The design is fine, so no human error.
Well, kinda. A group in my department worked on something called interval arithmetic, where you explicitly give the range of values for which a calculation is valid, due to the limited precision of the floating point unit. They found that a program the Army Corps of Engineers used to figure out where in the Mississippi to dredge, and how much, gave pretty much random results. The individual calculations were fine; there were just too many of them and the numbers were too big. Not sure that really counts as human error.
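To give a flavour of the idea (a toy sketch of interval arithmetic, nothing to do with the actual Corps of Engineers program, and far cruder than a real interval package): each value carries a [low, high] range, and every operation widens the range enough to cover worst-case rounding. Do enough operations and the range stops pinning down a single answer.

```python
import math

# Toy interval arithmetic: every value is a [lo, hi] range, and each
# addition nudges the bounds outward by one ulp to stay conservative
# about rounding error.
class Interval:
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, hi if hi is not None else lo

    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

total = Interval(0.0)
for _ in range(100_000):
    total = total + Interval(0.1)   # 0.1 is only approximated in binary
print(total)   # the bounds have drifted apart; the answer is only known to lie in this range
```

Each individual operation is fine; it is the sheer number of them that erodes the certainty.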
I just heard someone from Facebook saying that they are seeing tons of these errors. They are getting worse as feature sizes shrink. A cash register is unlikely to see any; a supercomputer, maybe. High reliability computers use various methods to detect and correct these, especially errors coming from memories. I’ve seen tons. In one case the error rate was correlated with the altitude of the server, which was something we collected.
There are usually algorithms to find optimal solutions, but since these problems tend to be NP-hard we can’t afford to use them and polynomial (or even linear) time heuristics work pretty well. If this is an error, it is a marketing error, not a programming one.
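For instance (a made-up five-city travelling-salesman toy, just to illustrate the trade-off): the nearest-neighbour heuristic runs in a blink but is not guaranteed to find the shortest tour, while brute force is exact but exponential.

```python
# A made-up five-city travelling-salesman toy. Nearest-neighbour is fast;
# brute force over all permutations is exact but O(n!).
from itertools import permutations
from math import dist

cities = [(0, 0), (3, 0), (3, 1), (0, 1), (1.2, 0.4)]

def tour_length(order):
    return sum(dist(order[i], order[(i + 1) % len(order)])
               for i in range(len(order)))

def nearest_neighbour(start=0):
    unvisited = set(range(len(cities))) - {start}
    tour = [cities[start]]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(tour[-1], cities[j]))
        tour.append(cities[nxt])
        unvisited.remove(nxt)
    return tour

best = min(permutations(cities), key=tour_length)   # exact, exponential
print(round(tour_length(nearest_neighbour()), 2))   # ~8.35: decent and fast
print(round(tour_length(best), 2))                  # ~8.11: the true optimum
```

Nobody would call the 8.35 a bug; the program did exactly what it was designed to do.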
The computer computes precisely what you tell it, which is the entire point. If your algorithm is incorrect, or has an unacceptably large error, that is your fault. (As noted, even adding numbers is not trivial if you have never programmed a computer.)
One would note the distinction between mathematics and arithmetic. Mostly computers do arithmetic.
To summarise some of the above. It will be extremely rare for an arithmetic error to occur. Flipped bits or failing hardware are the usual problems.
There are times when computers arguably do do actual mathematics. When they manage symbolic mathematical representations. Programs like Mathematica do real mathematics. Arguably compilers handle symbolic representations, and their optimisers include significant manipulation of abstract representations of code. Optimisers make mistakes. Insidiously a previously working program may become incorrect when recompiled with a new compiler that introduces an optimisation bug. Proper testing regimes should catch stuff, but it is hard to be really sure.
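For a taste of the symbolic side (using Python’s sympy library here purely as a stand-in for Mathematica-style systems): the computer manipulates the symbols themselves rather than crunching floating point approximations.

```python
import sympy as sp

x = sp.Symbol('x')
expr = sp.sin(x) ** 2 + sp.cos(x) ** 2
print(sp.simplify(expr))          # 1 -- an exact identity, not a rounding
print(sp.diff(sp.exp(x**2), x))   # 2*x*exp(x**2)
print(sp.integrate(1 / x, x))     # log(x)
```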
Whilst a compiler bug is arguably a human mistake, the compiler sits on the side of the fence that even developers consider to be the computer system.
Another area for angst is numerical instability. Solving a problem can often involve successive approximations. The general idea of hill climbing is a big part of this. You get algorithms like conjugate gradient (which is just a fancy way of talking about climbing in a multidimensional terrain). But computers can’t represent numbers precisely. The tiny, tiny errors can, under the wrong conditions, blow up to the point that either the algorithm won’t converge or it delivers total junk as the answer.
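A classic small example of those tiny errors blowing up (the textbook quadratic formula, my own sketch): subtracting two nearly equal numbers destroys almost all of the significant digits.

```python
import math

# Solve x^2 - 1e8*x + 1 = 0. The exact roots are ~1e8 and ~1e-8.
# The textbook formula computes the small root as a difference of two
# nearly equal numbers, and the cancellation wrecks the result.
a, b, c = 1.0, -1e8, 1.0

naive_small = (-b - math.sqrt(b*b - 4*a*c)) / (2*a)
stable_small = (2*c) / (-b + math.sqrt(b*b - 4*a*c))  # algebraically equivalent, rearranged

print(naive_small)    # ~7.45e-09: wrong -- the true root is 1e-08
print(stable_small)   # 1e-08: essentially exact
```

Both lines do exactly the arithmetic they were told to do; the difference is entirely in how the formula was arranged.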
In the sciences there are lots of very complex systems used for all manner of numerical analysis.
Well known areas like computational fluid dynamics, and a whole slew of codes that try to calculate chemical processes, from ab-initio quantum chemistry packages like Gaussian, to molecular dynamics and protein folding. Use of any of these sorts of systems requires an understanding of how they work, as they are quite happy to deliver totally meaningless results.
Basic computer science courses include numerical methods as a subject. Students need to understand at least the core ideas of the sources of errors and mitigation strategies.
Don’t know what you are saying here. Sounds like you think application developers feel free to blame the computer for the errors created by compiler developers. I mentioned before that a system that can verify and test its own internally developed functions might be culpable, since it takes over the role of software development, but very little software has reached that level yet.
Both you and Chronos mentioned successive approximations but the only error I see would be human failure to specify the accuracy of the results.
I think this covers a bunch of good points – particularly what is meant by computer.
I will add a case where a computer (HW and SW) will make math errors:
For sufficiently complex algorithms there is a design trade-off between accuracy and efficiency. GPUs can do up to trillions of calculations per second. One way they can achieve these rates is to not care too much about the exact order of the calculations. Since the order of calculations will affect the result when using floating point numbers, it is possible to feed the same inputs into the same algorithm and get different results. This non-determinism can be viewed as either a design flaw (bug) or an optimization.
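A minimal demonstration of that order dependence (plain CPU Python, no GPU required, so just a sketch of the effect): add the same numbers in two different orders and get two different totals.

```python
# Floating point addition is not associative: the same three numbers,
# the same "add them all up" algorithm, two different orders, two answers.
values = [1.0, 1e16, -1e16]

left_to_right = sum(values)            # (1.0 + 1e16) loses the 1.0 entirely
right_to_left = sum(reversed(values))  # (-1e16 + 1e16) cancels first, keeps the 1.0

print(left_to_right)    # 0.0
print(right_to_left)    # 1.0
```

A GPU summing millions of terms in whatever order its threads happen to finish runs into the same issue, just spread across many more digits and many more runs.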
In the machine learning field, there has been a lot of effort by HW and SW manufacturers to allow users to require determinism. These slides from NVIDIA walk through some of the cases they had to solve to achieve determinism:
I guess it depends on your definition of ‘algorithm’. IMO an algorithm has a strict order of operations and departure from that, due to an optimizer for instance, does not amount to the same algorithm as the original. Algorithms that have random factors can be non-deterministic in actuality but can still be repeatable if the same set of random input is used.
I can agree that an algorithm that has a strict order of operations is not the same exact algorithm as one that does not have a strict order – even if everything else about the two algorithms is identical. But the algorithm that does not have a strict order of operations is still an algorithm, a non-deterministic one, but still an algorithm. I consider this non-deterministic algorithm to, by design, make mistakes in math.
In many (maybe all) cases the non-determinism is introduced by software, low-level library software. The authors of the two algorithms, strict order and non-strict order, will write nearly identical code. From their perspective they are writing the same algorithm.
There is thus far no such thing as a system that can internally test and verify its own functions. I doubt this will ever be a reality anyway. It is just too complex a problem. Developers create test suites, and there is an entire art in how to do this. Critical systems undergo a whole range of testing regimes, and any change to the system is required to get past these tests, and any change should include tests to check that the change works, for both good and bad situations. In reality there is no such thing as an error free system. There is a general adage that once you reach a certain point the number of bugs becomes constant. Even fixing known bugs tends to introduce subtle changes that lead to a constancy of bugs. And systems are forever undergoing changes. Needs change, the operating environment changes, and so on.
As a developer I have been bitten by compiler bugs on a few occasions. I have been bitten by errors in the operating system as well. I spent an entire week chasing down a bug that turned out to be an error in the operating system kernel’s implementation of page fault exceptions. Computers are in and of themselves a human construct, so if we take the attitude that any error in construction of the system is a human error, the OP’s question becomes vacuous. I am taking the attitude that the part of the computer software system responsible for compilation and execution of a program as defined by the programmer, and outside the control of the programmer, counts as a computer problem and not a human one.
It is deeper than that. Just specifying accuracy of results implies that there is a stable relationship between your specification and the way the algorithm operates. This isn’t necessarily so. Systems can be, and often are, ill-conditioned. This is a big part of numerical analysis. The popular face of these problems is chaos theory, but the problem is much more insidious. A simple successive approximation algorithm might take smaller and smaller steps towards a solution, with smaller and smaller error. That assumes a nicely behaved system being solved. In reality there are many systems that simply can’t be solved this way. They either won’t converge, or they go unstable, or get a simply wrong answer. Moreover, sometimes the answer is correct, but might not be a physical solution. For instance, a huge set of problems require solving a large matrix, and negative eigenvalues can mean the solution is numerically correct but does not represent a physical system. Numerical solution of partial differential equations is grist for the mill for many scientists and engineers. Such systems require a clear understanding of the problem being solved. Just assuming the computer gets an answer within the bounds set is going to lead to tears.
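A small illustration of ill-conditioning (the classic Hilbert matrix, assuming numpy is available): every individual operation is carried out to roughly 16 digits, yet the recovered solution is nowhere near the true one, and the solver never complains.

```python
import numpy as np

# The 12x12 Hilbert matrix is notoriously ill-conditioned. Build a system
# whose true solution is all ones, then ask the solver to recover it.
n = 12
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = H @ x_true

x_solved = np.linalg.solve(H, b)    # completes without any error or warning

print(np.linalg.cond(H))                  # on the order of 1e16: enormous
print(np.max(np.abs(x_solved - x_true)))  # many orders of magnitude above machine epsilon
```

The arithmetic inside the solver is fine; it is the problem itself that amplifies tiny rounding errors into a junk answer.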
I hate floating point operations. Most of what I work with is integer, or known decimal info (like dollars and cents). There’s truly no need for floating point data in this case, though I admit I’m not sure whether a database column with x.y digits (x to the left, y to the right of the decimal) is actually treated as floating point or not.
And rounding errors will happen if multiplication or division is involved. You might have a bunch of things that, say, add up to .994 - which for dollars would be rounded to 0.99. If you have 10 such figures, the total if not rounded would be .994 + .994…, or 9.94. If you round BEFORE adding them, you’ve got .99 + .99 + …, or 9.90.
That’s not so much an error, as a condition that has to be either handled, or accepted.
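That .994 example in runnable form (a small sketch using Python’s decimal module, so binary floating point doesn’t muddy the waters):

```python
from decimal import Decimal, ROUND_HALF_UP

# Ten line items of $0.994 each, held as exact decimals.
items = [Decimal("0.994")] * 10

round_after = sum(items).quantize(Decimal("0.01"), ROUND_HALF_UP)
round_before = sum(i.quantize(Decimal("0.01"), ROUND_HALF_UP) for i in items)

print(round_after)    # 9.94  -- round the total once
print(round_before)   # 9.90  -- round each item first
```

Neither total is “wrong”; the four-cent gap is purely a consequence of where the rounding rule was applied.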
30+ years ago, I was working on a project where we got massive amounts of bank account data (an S&L closing).
The total amount we were told the data should add to was in the billions (yes, Billions) of dollars. When I ran it through our program to sum up all the individual transactions, the total was off by something like 5 to 10 cents (may have even been less than 5 cents).
We had to get approval from the higher-ups overseeing the process, that this level of error was OK.
Accountant-types don’t like it when numbers don’t QUITE add up (and I’m there with them… enough accountant in my blood that it makes me twitchy too).
As far as “mistakes” e.g. someone not trusting the result of a cash register’s change due: I won’t rule it out, but more likely causes are operator error (clerk rang something up wrong, or entered the “amount tendered” wrong), or the machine gave the right number but the clerk counted the change out wrong. All things I’m sure I did when I did cashier work back when we still used sticks and stones to tally things up :).
I don’t rule out things like compiler bugs - there are so many interoperating components in a computer, that even if each one works correctly, they may trip over each other. A lot of BSODs likely result from things like that. That’s not really a mathematical error as the OP posits though, in my opinion.
As a former compiler engineer, I’m not sure how I feel about this. Certainly, compiler and linker systems were considered part of the development environment and something that engineers needed to be aware of.
It’s a hard question. The line lives in different places depending on where you sit. Back a million years ago we had a near direct line to the compiler writers at DEC. (One was even a graduate of our department.) But in the real world, compilers are a black box. There is nothing I can usefully do to mitigate a compiler bug other than apply a workaround in my code, submit a bug report, and hope. Where you draw the line from there to the runtime, OS, kernel, microcode, RTL design, or layout of silicon is up for debate. I’m asking the bits to flip one way, and something is making them flip another.
So I’m afraid I’m classing you as part of the problem.
Gosh, isn’t the world small? We had both VMS and Ultrix systems for a while. Then a gap, and a time with Alpha based machines. Lordy that was a long time ago. Those Alpha systems were mighty fine things. We ran Unix on those, so maybe I crossed swords with your compilers