For natural logarithms (logarithms in any other base can then be calculated using log[sub]y[/sub](x) = ln(x)/ln(y)):
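(As a quick sanity check of the change-of-base identity, here's a one-liner sketch in Python; the name `log_base` is just mine:)

```python
import math

def log_base(x, y):
    """Logarithm of x in base y, via log_y(x) = ln(x)/ln(y)."""
    return math.log(x) / math.log(y)

print(log_base(8, 2))    # close to 3
print(log_base(100, 10)) # close to 2
```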
Well, there are a number of ways, but one is by using the fact that ln(x) = 2 * (b[sup]1[/sup]/1 + b[sup]3[/sup]/3 + b[sup]5[/sup]/5 + b[sup]7[/sup]/7 + …), where b = (x - 1)/(x + 1), along with the fact that the remainder when truncating this series is always less than 1/(1 - b[sup]2[/sup]) times the (magnitude of the) first omitted term. Thus, the series produces accurate digits at a rate of at least 2 * log[sub]10[/sub](b[sup]-1[/sup]) digits per term.
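(If you want to see this in action, here's a rough Python sketch of the series, using the remainder bound above as the stopping rule; names and the tolerance are my own choices:)

```python
import math

def ln_series(x, tol=1e-12):
    """ln(x) via ln(x) = 2*(b/1 + b^3/3 + b^5/5 + ...), b = (x-1)/(x+1).
    Stops once the remainder bound, 1/(1 - b^2) times the next term,
    guarantees the error is below tol. Assumes x > 0."""
    b = (x - 1) / (x + 1)
    b2 = b * b
    bound_factor = 1 / (1 - b2)
    total = 0.0
    power = b  # holds b^(2k+1)
    k = 0
    while True:
        total += power / (2 * k + 1)
        power *= b2
        k += 1
        next_term = power / (2 * k + 1)
        if 2 * bound_factor * abs(next_term) < tol:
            return 2 * total

print(ln_series(2.0), math.log(2.0))
```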
For example, for x = 2, this tells us that ln(2) = 2 * (3[sup]-1[/sup]/1 + 3[sup]-3[/sup]/3 + 3[sup]-5[/sup]/5 + 3[sup]-7[/sup]/7 + …), with a remainder at any moment less than 9/8 times the next term, and thus producing accurate digits at a rate of at least 2 * log[sub]10[/sub](3) (a shade over 0.95) digits per term.
If we use 10 terms (up through 2 * 3[sup]-19[/sup]/19), we find that ln(2) = 0.69314718054etc. + a remainder of less than 10[sup]-11[/sup]. Thus, we obtain 10 accurate digits in 10 terms. Of course, the rational number arithmetic here is excessively tedious to carry out by hand; thank god for computers.
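(Computers indeed: the exact rational arithmetic is a few lines of Python with the standard `fractions` module. A sketch of the 10-term partial sum:)

```python
from fractions import Fraction
import math

# Exact partial sum of the first 10 terms: 2*(3^-1/1 + 3^-3/3 + ... + 3^-19/19)
s = 2 * sum(Fraction(1, (2 * k + 1) * 3 ** (2 * k + 1)) for k in range(10))
print(float(s))                 # 0.69314718054...
print(math.log(2) - float(s))   # remainder, on the order of 1e-11
```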
[In case you are curious, as I assume you are, the series for ln given above comes from integrating both sides of the geometric series identity 1/(1 + b) + 1/(1 - b) = 2/(1 - b[sup]2[/sup]) = 2 * (b[sup]0[/sup] + b[sup]2[/sup] + b[sup]4[/sup] + b[sup]6[/sup] + …) from 0 up to b, and the upper bound on the remainder comes from comparison with the same geometric series.]
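(Written out, that derivation is the standard one:)

```latex
\frac{1}{1+t} + \frac{1}{1-t} = \frac{2}{1-t^{2}} = 2\left(1 + t^{2} + t^{4} + t^{6} + \cdots\right), \qquad |t| < 1
```

Integrating each side from t = 0 to t = b gives ln(1 + b) - ln(1 - b) = ln((1 + b)/(1 - b)) = 2 * (b/1 + b[sup]3[/sup]/3 + b[sup]5[/sup]/5 + …), and with b = (x - 1)/(x + 1) the left-hand side is exactly ln(x).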
I should also note that this means the series converges faster the larger b[sup]-1[/sup] is (equivalently, the closer x is to 1).
Thus, for truly optimizing efficiency, one can often use the trick of rewriting x as a product of powers of numbers close to 1, and finding ln(x) as the corresponding linear combination of natural logarithms of numbers close to 1, for which the series produces digits at a faster rate.
For example, 2 = (7/5)[sup]2[/sup] * (100/98), so ln(2) = 2 * ln(7/5) + ln(100/98), which means ln(2) = 2P(6) + P(99) [where P(d) = 2 * (d[sup]-1[/sup]/1 + d[sup]-3[/sup]/3 + d[sup]-5[/sup]/5 + …) = ln((d + 1)/(d - 1)), producing at least 2 * log[sub]10[/sub](d) newly accurate digits per term].
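(A quick Python sketch of this, with a truncated P; the term counts here are my own choices:)

```python
import math

def P(d, terms):
    """Truncation of P(d) = 2*(d^-1/1 + d^-3/3 + ...) = ln((d+1)/(d-1))."""
    return 2 * sum(1 / ((2 * k + 1) * d ** (2 * k + 1)) for k in range(terms))

# ln(2) = 2*P(6) + P(99): P(6) gains ~1.56 digits/term, P(99) ~4 digits/term,
# so far fewer terms are needed than with P(3) alone
approx = 2 * P(6, 10) + P(99, 4)
print(abs(approx - math.log(2)))  # tiny; limited by double precision
```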
Or, if you’re more hardcore, you can use the fact that 2 = (27/25)[sup]9[/sup] * (4802/4800)[sup]-1[/sup] * (8750/8748)[sup]4[/sup], so ln(2) = 9P(26) - P(4801) + 4P(8749). And 2 = (252/250)[sup]72[/sup] * (450/448)[sup]27[/sup] * (4802/4800)[sup]-19[/sup] * (8750/8748)[sup]31[/sup], so ln(2) = 72P(251) + 27P(449) − 19P(4801) + 31P(8749). Decompositions like this are the key to pushing computation of the decimal expansion of ln(2) to record lengths (though the efficiency gains don’t really justify the complication for anything other than record-setting: the last combination produces just under 1.2 digits of precision per term (counting adding a new term to each of its four series as adding four new terms altogether), which is just under 0.25 digits per term faster than our original straightforward calculation of ln(2) as P(3); it produces 10 digits in 8 terms rather than 10 terms, which is no big whoop. But when you’re calculating millions of digits, the difference becomes rather more pronounced).
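(Both claims are easy to machine-check; here's a sketch that verifies the first product decomposition exactly with rational arithmetic and evaluates the four-series combination in floating point:)

```python
from fractions import Fraction
import math

# Check the first decomposition exactly:
prod = Fraction(27, 25) ** 9 * Fraction(4802, 4800) ** -1 * Fraction(8750, 8748) ** 4
print(prod == 2)  # True

def P(d, terms):
    """Truncation of P(d) = 2*(d^-1/1 + d^-3/3 + ...) = ln((d+1)/(d-1))."""
    return 2 * sum(1 / ((2 * k + 1) * d ** (2 * k + 1)) for k in range(terms))

# ln(2) = 72*P(251) + 27*P(449) - 19*P(4801) + 31*P(8749)
approx = 72 * P(251, 4) + 27 * P(449, 4) - 19 * P(4801, 3) + 31 * P(8749, 3)
print(abs(approx - math.log(2)))  # limited only by double precision
```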
Finally, I should also note that our series and remainder bound work for complex arguments as well, from which the inverse trigonometric functions can be computed. (For example, the arctangent of x can be computed as the imaginary component of the natural logarithm of 1 + ix.)
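(That last fact is a one-liner with Python's standard `cmath` module; the function name is mine:)

```python
import cmath
import math

def arctan_via_log(x):
    """arctan(x) as the imaginary part of ln(1 + i*x)."""
    return cmath.log(1 + 1j * x).imag

print(arctan_via_log(1.0))  # pi/4
print(arctan_via_log(0.5), math.atan(0.5))  # the two should agree
```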