Computer languages and mathematics are two different things. In a computer language like C++ you have something like this:
a=a+1
This of course makes no sense as an algebra equation, but in a computer language it means: calculate everything to the right of the equals sign (a+1) and store the result in the variable on the left side of the equals sign (a).
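Here is a minimal C sketch of that idea (the variable name a and its starting value are just placeholders for illustration):

#include <stdio.h>

int main(void) {
    int a = 5;
    a = a + 1;          /* evaluate the right side (5 + 1), then store it back in a */
    printf("%d\n", a);  /* prints 6 */
    return 0;
}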
Because you are constantly taking variables and modifying them, the C programming language allows you to use a shorthand notation.
a+=1 translates to a=a+1
a*=7 translates to a=a*7
and similarly for most other operators in C.
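A short sketch showing a few of these compound assignment operators in action (the starting values are arbitrary):

#include <stdio.h>

int main(void) {
    int a = 3;
    a += 1;             /* same as a = a + 1; a is now 4 */
    a *= 7;             /* same as a = a * 7; a is now 28 */
    a -= 8;             /* same as a = a - 8; a is now 20 */
    printf("%d\n", a);  /* prints 20 */
    return 0;
}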
In programming, the most common increment or decrement is by 1, which lets you loop through arrays and the like. Therefore C allows you to take another shortcut, which is to use ++ or -- to indicate +=1 and -=1 respectively (saves you one more character when you are typing).
So, instead of typing a=a+1 you simply type a++.
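That ++ is what usually drives a loop index. A small sketch (the array contents are made up for illustration):

#include <stdio.h>

int main(void) {
    int values[5] = {10, 20, 30, 40, 50};
    int sum = 0;
    for (int i = 0; i < 5; i++) {  /* i++ bumps the index by 1 each pass */
        sum += values[i];
    }
    printf("%d\n", sum);           /* prints 150 */
    return 0;
}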
Intel processors actually have a dedicated increment instruction (INC), separate from the general ADD, so at the machine-code level incrementing by one can be a different instruction than adding an arbitrary value (even though in C, a=a+1 and a++ mean the same thing and the compiler is free to emit either one).
In computers, the processor basically needs ADD, SUBTRACT, SHIFT, AND, OR, NOT, XOR, and I think that's about it. From these a CPU can do just about any discrete math function. Note that these could be reduced further (SUBTRACT is just a combination of ADD and NOT, for example), but these are all very simple functions to implement in hardware. Computers multiply and divide using combinations of adds/subtracts and shifts.
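As a rough illustration of the multiply-by-shifts-and-adds idea, here is a sketch of unsigned multiplication built only from shifts, adds, and AND. This is just the principle, not how a real CPU is wired:

#include <stdio.h>

/* Multiply two unsigned numbers using only shifts, adds, and AND. */
unsigned int shift_add_multiply(unsigned int x, unsigned int y) {
    unsigned int result = 0;
    while (y != 0) {
        if (y & 1) {      /* if the lowest bit of y is set... */
            result += x;  /* ...add the current shifted copy of x */
        }
        x <<= 1;          /* shift x left: x doubles each round */
        y >>= 1;          /* shift y right to examine the next bit */
    }
    return result;
}

int main(void) {
    printf("%u\n", shift_add_multiply(6, 7));  /* prints 42 */
    return 0;
}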
One thing programmers quickly learn the hard way is that the math done in computers does not equal the math done on paper. In a computer, if you take 14, divide it by 50, then multiply the result by 50, you will likely get zero (14 divided by 50 is 0 with some remainder, which gets lost because the computer is working in integers, and 0 multiplied by 50 is still 0). So be careful comparing computer math to real math!
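A quick sketch of that pitfall in C, using ints (floating-point types behave differently):

#include <stdio.h>

int main(void) {
    int a = 14;
    int b = (a / 50) * 50;    /* 14 / 50 is 0 in integer math, so b is 0 */
    int c = (a * 50) / 50;    /* multiplying first keeps the value: c is 14 */
    printf("%d %d\n", b, c);  /* prints 0 14 */
    return 0;
}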