For those of you who aren’t familiar with it, BC is an arbitrary precision calculator. I’m working on something that requires the y[sup]x[/sup] function, but BC only has support for integer exponents.
In other words, I can do 4.5[sup]3[/sup], but not 4[sup]3.5[/sup]. I tried using the exponent/logarithm rules and came up with this:
e[sup]y*ln(x)[/sup], and it’s close, but too inaccurate. With scale=20, I got 31.99999999999999999948, and it just gets worse as the scale gets smaller (at scale=3, I got 31.976).
So my problem is that I need an algorithm to calculate exponentiation with fractional exponents, so that I can write a function to do it for me. Here's what bc gives me to work with:
+,
-,
*,
/,
^ (any base to an integer exponent),
sqrt,
e[sup]x[/sup] and ln(x) (though as I noted above, this combination isn’t terribly accurate)
sin,
cosine,
tan[sup]-1[/sup],
and j(n,x) (the Bessel function, whatever that is)
Along with any operation or function that can be derived from those.
There's (conditional and unconditional) looping and branching too, so it has a mostly complete set of programming abilities.
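For the record, the naive version I have now, which is what produces those strings of 9s, is just this (a sketch; p() is my own name for it, and e() and l() are the exponential and natural log from bc -l):

/* sketch: fractional exponents via x^y = e^(y*ln(x)); x must be positive,
   and accuracy is the whole problem */
define p(x, y) {
    return (e(y * l(x)))
}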
That’s precisely what I did before (I have bc aliased to ‘bc -l /usr/lib/functions.bc’), and I was hoping to do away with the error.
By the way, I meant 2.5 when I typed 3.5. Sorry. Not that it really makes a difference.
Lance Turbo, I tried the series summation, and it’s slightly less accurate than the method above, and just over 100 times slower.
erislover:
my results working both ways:
4^3*e(.5*l(4))
127.99999999999999999872
e(3.5*l(4))
127.99999999999999999736
So it’s a little bit closer. calculating just the fractional part of the exponent [ e(.5*l(4)) ] gives me 1.99999999999999999998.
I tried looking for the definition of the ln(x) function in bc’s documentation, but haven’t found it yet. But the exp(x) function is listed, and it uses a scale of 20. Maybe if I were to increase that…
Nope. After about 30 decimal places it goes from …999… to way out in left field.
I suppose I’ll just end up doing it the e(x*l(y)) way, then putting it through a rounding routine that I wrote. But I was hoping for a little more elegant way to do it that’s more accurate and reasonably fast.
Sure, but what if the exponent is 1.73? or 2.38116124? or [symbol]p[/symbol]?
It’s not a dumb question at all. It’s just that I’m looking for a general solution, not a specific one. If I just wanted the answer to one particular expression, I’d just use a calculator.
Your question is exactly why I used 4[sup]2.5[/sup] and 4[sup]3.5[/sup]: because 4 is a perfect square, and 2.5 and 3.5 involve square roots. So that way, it’s easy to do in my head to check whatever algorithm I try (not to mention my actual coding of the function). If I put in 4[sup]2.5[/sup] and get back 17 or 63, then I know I did something wrong.
I could have just as easily used 32[sup]2.2[/sup] (which, with the method above, gives a result of 2047.99999999999999995995 instead of 2048. :mad: unless I round. I wrote a function to round to however many decimal places, but didn’t want to have to use it for this).
If you need to be able to handle arbitrary exponents, then you may be out of luck. The only other scheme I can think of (besides the Taylor expansion, which Lance Turbo already suggested) is the following: implement a “n-th root” function in BC using Newton’s method, where n is an integer. Then for a given y, try to express y as a sum of fractions: e.g. 1.73 = 1+(7/10)+(3/100) or 1+(3/4)-(1/50). Then x^y = x * (x^7)^(1/10) * (x^3)^(1/100), or x * (x^3)^(1/4) / x^(1/50), which you can calculate using basic arithmetic operators and the “n-th root” function.
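For what it's worth, the n-th root part might look something like this in bc (only a sketch; root() is my own name, nothing built in, and I haven't checked the convergence carefully):

/* rough sketch: integer n-th root of a (a > 0, n a positive integer) by
   Newton's method; iterate x <- ((n-1)*x + a/x^(n-1))/n until it settles */
define root(a, n) {
    auto x, y, i
    x = a / n + 1                          /* crude positive starting guess */
    for (i = 0; i < 200; i = i + 1) {
        y = ((n - 1) * x + a / x ^ (n - 1)) / n
        if (y == x) break
        x = y
    }
    return (x)
}

With that, 4^3.5 would be 4^3 * root(4, 2), and the 1.73 example is x * root(x^7, 10) * root(x^3, 100).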
But to be honest, there's no guarantee this will produce better accuracy than exp(y*log(x)). If it doesn't, then all I can suggest is to either use a tool more sophisticated than BC, or wait for someone who knows more about numerical algorithms to wander by this thread…
Well, I'm not a numerical algorithms person either, but would you be able to pull the integer part out and do the Taylor expansion for just the fractional part? You'd have to carry out a multiplication of the integer part and the fractional part, and you're bound to lose something there due to rounding error, but if you can handle that, the Taylor expansion will eventually converge to as many digits as you have the patience to calculate, as MathGeek said, and it ought to converge faster than just expanding the entire thing.
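I'm picturing something like this (a very rough sketch in bc; fpow() and the variable names are mine, it still leans on l(x) for the fractional piece, and I haven't worked out the error behavior):

/* rough sketch: split y into integer and fractional parts, use bc's exact
   integer power for the integer part, and sum the exponential series
   e^(f*ln(x)) = sum of u^k/k! for the small fractional remainder */
define fpow(x, y) {
    auto s, i, f, u, t, total, k
    s = scale
    scale = 0
    i = y / 1                    /* integer part (truncates toward zero)    */
    scale = s
    f = y - i                    /* fractional part                         */
    u = f * l(x)                 /* x must be positive                      */
    t = 1
    total = 1
    for (k = 1; t != 0; k = k + 1) {
        t = t * u / k            /* next series term; hits 0 at this scale  */
        total = total + t
    }
    return (x^i * total)
}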
Ah, I mistakenly assumed that the reason you used those two was because those were characteristic of the actual calculations you had to do, and I tried to extrapolate from two data points. Very well, I have no solutions better than the ones you’ve already gotten.
Ummm. Why is an error of 10^-18 not good enough? For anything where you need better than that kind of precision, you'll want to be doing some heavy numerical analysis on it, and you'll probably want to use a different programming language for it (I wouldn't know about that part, though, as I'm not familiar with BC).
On a hardware note, the error may simply be the binary calculations. Note that binary runs 2^n, 2^(n-1), …, 2^2, 2^1, 2^0, 2^-1, 2^-2, …, 2^-n. The calculations involved in fractional exponents (ln and e and whatnot) involve fairly complex manipulation of irrational numbers, and expressing those accurately in binary is extremely difficult at best (be happy that integers work, for those of you who remember Intel's problems 10 years ago). Granted, the hardware and low-level math operations do their own rounding, but unless you make your equation precise to the extent of the math operation's bit range (probably 64 or 128 bits), you will need to incorporate your own rounding method.
The difficulty seems to be that the log and e^x functions aren't really giving you the precision you're asking for. You need to work in a higher precision and round off at the end. Here's one way, although it's a bit convoluted:
bc -l
n = scale
scale = 23
n
20
w = 4^3*e(.5*l(4))
w
127.99999999999999999999936
x = w * 10^n + 0.5
x
12800000000000000000000.43600000000000000000000
scale = 0
y = x/1
y
12800000000000000000000
scale = n
z = y / 10^n
z
128.00000000000000000000
I’m with kitarak on this. You are never going to get perfect accuracy for this type of calculation, but 18 decimal places is way more than enough for any practical application. You mention rounding as if it’s some horrible kludge, but it’s the obvious thing to do in this situation. Round your answer to 17 decimals and you will get rid of all those 9’s while still having an insane degree of accuracy.
That’s actually almost exactly the rounding function that I use. (except that I made it a function: round(number,places) )
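It's basically this (a sketch, not my exact code):

/* sketch: round v to n decimal places with the same shift / add-a-half /
   truncate trick as above (positive v assumed) */
define round(v, n) {
    auto s, w
    s = scale
    scale = 0
    w = (v * 10^n + 0.5) / 1     /* shift left n places, add half, truncate */
    scale = n
    w = w / 10^n                 /* shift back down                         */
    scale = s
    return (w)
}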
I’m not strictly limited to bc, though I’d hoped to use all free software for the things I do. I do realize that the binary-decimal thing is where a lot of the precision is going…I’m just bugged by the fact that a $5 calculator figures 12 places and displays the result rounded to the 10-digit display automatically.
18 places of precision is more than enough, but the reasons I was hoping for better are (among others):
bc is slow - It’s all implemented in software, and hogs memory and processor resources. So putting the answer through a separate routine is less efficient than using a routine that gives a better answer straight out.
precision isn't just 18 places: if you do the calculation with scale at 3, your error is on the order of 10[sup]-2[/sup] instead of 10[sup]-18[/sup]. The error always seems to begin, at best, within 2 places of the scale (and it gets worse when scale > 30).
elegance. I just want a better way.
Plus, there's also this: I can see at a glance that it's very close to the right answer when the result should be an integer like 32 or 128. It's not so simple if I were to calculate 2.3[sup]3/7[/sup] or [symbol]p[/symbol][sup]e[/sup]. See what I mean? It happens to get very close to the right answer when I'm working with the square root of a power of a perfect square, but that's an easy question. How do I know that I have even 3 or 4 places of accuracy with irrationals and transcendentals? Or even long rationals?
To be honest, I don’t know if that last is a valid argument or not. But it bothers me all the same.
Mainly, this is a self-enrichment thing, not school or work. I’m messing with numbers and programming because those are 2 of the things I like the best, and I’m trying to learn. So if nothing else, you guys have already been very helpful from that angle.
I guess my questions are pretty much answered, but it’s an interesting discussion. So I hope it doesn’t die. I’m getting a lot out of it so far.
If the documentation of bc doesn't give algorithms and associated error bounds, couldn't you look at the source code? More generally, you could try using interval arithmetic. Rather than returning a value whose exact precision is unknown, it returns an interval in which the exact result is guaranteed to lie. So, if the width of the result interval is 10[sup]-4[/sup], you can be certain of four-place accuracy if you quote the midpoint of the interval as the “answer.” I don't use bc, so I couldn't say whether interval arithmetic would work in it; it requires user control over the rounding of intermediate results.
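To make that concrete with the one operation where bc's rounding direction is actually known (a rough sketch only; I have no idea how you'd bound bc's own e() and l() this way): division in bc truncates toward zero, so for positive operands the true quotient lies between the value bc returns and that value plus one unit in the last place.

/* rough sketch: a single division that also records an enclosing interval;
   relies on bc truncating division toward zero (positive operands assumed);
   the globals ilo and ihi and the name idiv are all mine */
define idiv(a, b) {
    auto q
    q = a / b
    ilo = q                      /* the true quotient is at least this ...  */
    ihi = q + 10 ^ (-scale)      /* ... and strictly less than this         */
    return (q)
}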
Sorry, I misread your OP a bit. Didn’t mean to restate in my post what you had already said; notation and numbers threw me off. I’m no mathematician.
If you do not need to use bc you can always use a simple C program; that will eliminate the speed problems and rounding error (though maybe the C standard functions just round for you – I don’t know). I am not sure what you’re using it for, but you could either do the whole thing in C or tie disparate branches together with a shell script. This appears to return correct results:
/* exp.c */
/* gcc -lm -o exp exp.c */
/* ./exp base_float exp_float */
/* displays base_float to the exp_float */
/* example: ./exp 4.0 3.5 */
/* gives 128.000000 */
#include <stdio.h>
#include <math.h>
int main(int argc, char *argv[])
{
    float base, power, answer;
    base = power = answer = 0.0;
    sscanf(argv[1], "%f", &base);
    sscanf(argv[2], "%f", &power);
    answer = pow(base, power);
    printf("%f\n", answer);
    return 0; /* You could return the answer if you wanted. */
}
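If you want more digits out of it, pow() works in double anyway, so you could declare the three variables as double, read them with "%lf", and print with something like "%.15f".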