Do computer programmers need to be good at math?

I tend to break programming into four phases.

  1. Systems Analysis. Figuring out the problem you are trying to solve and whether a computer will help. This can be simple or very difficult. It needs good analytic skills and sometimes good people skills bordering on psychoanalysis. A lot of programmers dislike this and will try to skip straight to phase 2.

  2. Coding. A lot of people think this is the main part of programming, but in reality it’s about 10% of the normal timeline. I suggest keeping the code straightforward and as simple as possible. If you don’t, you will regret it when you try to modify the code in a couple of years. Good variable names and well-designed data structures are also important.

  3. Testing and Validation. I don’t think I’ve ever met a programmer who liked testing, and most hate writing test plans. It’s always the part that gets short shrift when deadlines loom.

  4. Debugging. This is really different from testing. It is more like puzzle solving.

But if the math isn’t at the surface, isn’t that just another way of saying that using the language doesn’t require much mathematical skill?

Captain Ridley’s Shooting Party writes:

> You have an irritating habit of skimming posts.

Could you give me some examples of other threads where I have just skimmed posts? If you have no such examples, how is this a habit?

As Chronos points out, the math that you cite is fairly deep into the structure of the languages. Do most programmers using those languages use that structure? How is this relevant to the question of whether most programmers need that much math?

No, not at all. Look, the end goal is the same: we all want to write correct software, for some value of “correct”. With a strong type system, we can use the type system as a crutch to enforce contracts and invariants at compile time. The stronger and more expressive the type system, the more of the contracts and invariants to be checked can be pushed into the type system and established statically. It just so happens that, using various metamathematical results, we can look at these type systems and say “aha: what we’re really doing when programming in language X is just finding a derivation in some logic”. This logic is the image of X’s type system under the Curry-Howard correspondence. In the case of languages like Java, C, and so forth, the logic in question may be a mathematical abomination, devoid of all the useful properties one would usually expect from a well-behaved logic, but it’s still a logic.
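To make the “pushing invariants into types” idea concrete, here’s a minimal Python sketch (all names are mine, and Python enforces this at construction time and via an external checker like mypy, not at compile time the way a Haskell or Coq compiler would): a list that is non-empty by construction, so functions taking it never need an empty-list case.

```python
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class NonEmpty:
    """A list guaranteed non-empty by construction (a 'smart constructor')."""
    head: int
    tail: List[int]

    @staticmethod
    def from_list(xs: List[int]) -> "NonEmpty":
        if not xs:
            raise ValueError("empty list")
        return NonEmpty(xs[0], xs[1:])


def maximum(xs: NonEmpty) -> int:
    # No empty-list error case to handle: the type rules it out.
    return max([xs.head] + xs.tail)
```

The point is that once the invariant lives in the type, every caller of `maximum` has the “list is non-empty” proof obligation discharged for them.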

Yet, without an (extremely) strong and expressive type system, if you want to write correct software, these invariants still need to be preserved and the requisite properties need to be shown to hold, so the programmer has to do it “by hand” so to speak, juggling every detail in his brain. The correctness criterion for quicksort does not change just because you decide to program your sorting function in C instead of in the internal language of Coq or Matita. There’s no getting away from the fact that writing correct software requires one to think in exactly the same way as a mathematician does when approaching a proof. Strongly typed languages just make that more explicit and make you do the reasoning explicitly: they bring the reasoning “to the surface”.
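The quicksort correctness criterion mentioned above has two halves: the output is ordered, and it is a permutation of the input. In a language without a strong type system you can at least state those invariants as runtime checks, as in this illustrative Python sketch (function names are mine):

```python
from collections import Counter
from typing import List


def quicksort(xs: List[int]) -> List[int]:
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]    # everything below the pivot
    right = [x for x in rest if x >= pivot]  # everything at or above it
    return quicksort(left) + [pivot] + quicksort(right)


def checked_quicksort(xs: List[int]) -> List[int]:
    ys = quicksort(xs)
    # The two halves of the correctness criterion, checked dynamically:
    assert all(a <= b for a, b in zip(ys, ys[1:]))  # output is ordered
    assert Counter(ys) == Counter(xs)               # output is a permutation of input
    return ys
```

Assertions only check the runs you happen to execute, whereas a proof (or a sufficiently rich type) covers all inputs, which is exactly the “by hand” gap the post describes.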

Which is why I said I’m wary of people who claim to be writing bug-free (or nearly bug-free) software who are not thinking mathematically about their programming.

For a start, I remember you derailing this thread, for no apparent reason, over a mistaken definition (one contrary to Wikipedia’s own) of what constitutes an “editor”.

The BBC article that you quoted:

indicates that the term “editor” is used in different ways by Felipe Ortega (whose research was being reported on) and the Wikimedia Foundation. That would indicate that the term isn’t particularly easy to pin down. I don’t see that I derailed the thread. It ended because no one had anything to add. In any case, this is one example that’s more than a year old. What else do you have?

But that’s not what the original question is about. The OP wants to know if the average programmer needs a background in higher math, not if they can reason about program correctness.

[moderating]
Wendell Wagner and Capt. Ridley’s Shooting Party, please take your little feud out of GQ and stick to the subject. Thank you.
[/moderating]

I strongly agree with those who point out that programming isn’t one single monolithic profession. Today, I make most of my income as a freelance writer, and I know great writers who have never written a word of fiction (and, thus, never developed a plot), great writers who couldn’t produce a decent index if their life depended on it, and great writers who couldn’t write a tutorial for a pencil sharpener.

I started programming back in the dark ages (Ones and zeros? I wrote a Fast Fourier Transform program using only rocks and sticks!) as an operating systems programmer, and then moved into CAD (computer-aided design) application code. That job was wall-to-wall math. I had to write a window-intersection subroutine for vectors in assembly language as part of the job interview – really!
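I don’t know what the interview routine looked like, but the classic trick for testing a vector against a window is the outcode (Cohen-Sutherland) trivial-reject test; here’s a rough Python sketch of that idea (all names and the window convention are my own):

```python
# Region outcodes for a point relative to a clip window [xmin, xmax] x [ymin, ymax].
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8


def outcode(x, y, xmin, ymin, xmax, ymax):
    code = 0
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code


def may_intersect(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Trivially reject a segment lying entirely off one side of the window."""
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    c2 = outcode(x2, y2, xmin, ymin, xmax, ymax)
    # Shared set bit => both endpoints are beyond the same edge: no intersection.
    return (c1 & c2) == 0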
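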

When I taught beginning programming courses, college-level algebra was a prerequisite. Students without some background in Boolean logic, sets, and basic statistics had a lot of trouble in my classes.

On the other hand, when I hired programmers for user interface design, I looked for strong artistic, psychology, and language skills.

In today’s world, many of the people with “programming” job titles do little that would have been recognizable as computer programming ten years ago. I see people whose only coding skills are HTML and CSS that call themselves programmers. They do no algorithmic work, no efficiency analysis, and barely understand what QC and debugging even are. These folks need no math beyond adding up cell widths in a table and subtracting the cell padding.

Forget math. In my humble opinion, the overwhelming majority of grunt programmers are bad at programming. And I’ve hired a lot of programmers (and fired a few, too).

No offense intended, but I really don’t understand how you can get a “Senior Systems Analyst” title without strong math skills. I had that title, and the job involved a lot of algorithm development, efficiency analysis, and statistical analysis. I had to work out interfaces and modularize the design to hand off to programmers–calculating the right places to split the design and figuring out how to balance the workload. With only high school math, you couldn’t have survived the gig for a week.

And OBO (off by one) errors are the most common programming error. (Did I start that loop at one or zero? Should I have used < or <= for the terminal condition?)

You still haven’t explained why this Curry-Howard correspondence business is relevant to the work of most programmers. Perhaps a knowledge of it may be important to understanding how programming works at some deep level, but so what? Most programmers get by without thinking about it. The OP was asking if computer programmers need to be good at math. On average, they’re pretty good at math but aren’t necessarily great at it. It’s arguable that you’re the one derailing the thread by bringing up the Curry-Howard correspondence.

Sorry, I posted before I saw the post from Gary “Wombat” Robinson.

Of those that actually make it past the compiler, at least. I find that syntactical things like missing semicolons are much more common, but the compiler usually complains about those.

Actually, he asked about mathematical know-how. It’s undoubtedly true that anybody who asked whether you need “mathematical know-how” to write a correct proof would get an answer in the affirmative. Maths, as you know, is more than just a collection of facts, but a way of thinking. What I’m saying is that this way of thinking is identical to the way of thinking required for writing software.

Nobody claimed any programmers have to explicitly invoke the Curry-Howard correspondence. It’s just a foundational framework that explains why the idea of being able to write software and reason about programs necessarily entails you “doing math”.

I guess I can see how you could read it that way, but anyone who’s spent any significant time in the industry would not; after all, most programmers have never written a proof in their lives. That’s not an insult to you–this is one of those things that’s really obvious or totally surprising depending on your background.

Good point. I meant errors that make it into the executable program.

ETA: To me, it’s not really a program until it compiles :wink:

Lots of good answers already. I’d like to add that before there were college degrees for a Bachelors Degree In Computer Science, computer engineers got degrees in mathmatics. Over time the requirements for higher math have been reduced. When I graduated I had to take 2 years of calculus and other maths. Today, at the same school, only 1 year of calculus is required for a degree in computer science. I have a minor in math cause, hey, one more class and I’ll have a minor. Statistics was very educational, especially when applied to gambling and figuring out the house’s advantage (the house always wins, the odds are always in their favor and they have more money than you).

As mentioned above, depending on the area of expertise you go into, math may or may not be crucial to your job. I spend a lot of time on user interface and data management. Every once in awhile I’ll write up an algorithim. A lot of my time is spent documenting the software and configuration management.

So that is my answer, historically math majors (including physics) were the first computer science people, not so much today.

Oh yeah, also, early on, the computer was used to solve math problems for the war and such. A word processor doesn’t require as much math as say, how much fissionable material is needed for a nuclear bomb. A word processor does do math, spacing of the characters, table layouts, the level of math for that program doesn’t seem as difficult to me as simulation software. Most of the time the computer is waiting for you to press a button.

In a business setting, I think it’s pretty common to have Senior Systems Analysts without any advanced math skills - which may not be the same as strong math skills, hard to say because I don’t see any definitions for “strong math skills”.

In the business world you generally need to know and use regularly the following short list:
Logic, Add, Subtract, Multiply, Divide, Round, Modulus, Set Union, Set Intersect and maybe a few others.
But, whether you call them math skills or programming skills or whatever - there does seems to be a particular brain type that is well suited to this type of work, regardless of what kind of education they have had. You can always tell when you are working with these types of people, as you work through something on the whiteboard or just talking about a solution to a problem - they get it and they see the additional implications quickly, even without programming or systems training. They can abstract the problem, apply the solutions logically, etc.

I’m not sure that’s really the case in web development anymore. The term “programmer” is rarely used now, in my experience. Some designers are very good at client side code, JS debugging and the like, but I never hear them describe themselves as programmers. Server side and general application programmers usually call themselves developers or software engineers. HTML+CSS is just an expected skill, I don’t know anyone who does only that (designers market their design skills, not code).

That said I’m pretty far removed from the corporate world, maybe things are different there.