Have you ever heard that it's wrong (or poor form) to leave a radical in the denominator? Why? What's the problem with that? (Seeking factual answers, not opinions.)
The worst you could say about it is that, before calculators, it was poor form. If you imagine dividing 1 by sqrt(2), which is approximately 1.414, you have a miserable long-division problem. If, on the other hand, you realize it is the same as sqrt(2)/2, you can see immediately that it's around 0.707. That, AFAIK, is the entire reason.
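Spelled out, the trick is just multiplying top and bottom by sqrt(2):

1/sqrt(2) = (1 * sqrt(2)) / (sqrt(2) * sqrt(2)) = sqrt(2)/2 ≈ 1.414/2 ≈ 0.707

Halving 1.414 is trivial to do by hand; long division by 1.41421356… is not.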
Many math curricula still stress the importance of rationalizing the denominator for this reason. If the answer is 1/sqrt(2), you may be perfectly correct, but your teacher will take a point or two off for not writing it in “proper” form.
Personally, I think they just enforce these standards of conformity so they don't have to think as hard about whether your answer is actually the same as theirs.
It is a common exercise in basic algebra to rationalize the denominator. One benefit is to arrive at a more standard form for an expression. Makes grading easier, I guess. There's really no computational reason for it: with a calculator, sqrt(2)/2 is no easier to compute than 1/sqrt(2).
I remember there being some situations in algebra where it's more convenient to rationalize the numerator instead, but darned if I can remember what they were.
I vaguely remember rationalizing numerators in calculus… limits perhaps?
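One example that fits both recollections: the difference quotient for sqrt(x). Multiplying by the conjugate over itself rationalizes the numerator, which is exactly what cancels the h that makes the limit look like 0/0:

(sqrt(x + h) - sqrt(x))/h = (x + h - x) / (h * (sqrt(x + h) + sqrt(x))) = 1/(sqrt(x + h) + sqrt(x))

and that goes to 1/(2 * sqrt(x)) as h goes to 0.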
That was certainly what I was taught at school. And when you’re using log tables to do all of your calculations (as I was), you quickly realise why it makes sense - as Hari Seldon notes.
I was told by my high school math teacher that the reason they stress rationalizing problems like:
1/(1-sqrt(2))
was more or less just practice for identifying conjugates and multiplying by them before we got to actual, important complex conjugates.
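For that example, multiplying by the conjugate over itself gives:

1/(1 - sqrt(2)) = (1 + sqrt(2)) / ((1 - sqrt(2))(1 + sqrt(2))) = (1 + sqrt(2))/(1 - 2) = -1 - sqrt(2)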
“Rationalizing the denominator” gives you a “canonical” version of the number, which makes it easier to verify when two such numbers are the same. This “canonical” version has the advantage of reflecting how the number exists in an extension of the integers or rationals.
The situation is similar with complex numbers. Complex numbers can be written in all sorts of ways, but each has a unique "canonical form" a + bi, where a and b are real numbers, reflecting the fact that the complex numbers are just the real numbers with a square root of -1 adjoined to them. For example, by a process very much like rationalizing denominators, you can show that 2/(1 - i) = 1 + i.
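Spelled out, that computation is the same conjugate trick:

2/(1 - i) = 2(1 + i) / ((1 - i)(1 + i)) = 2(1 + i)/(1 - i^2) = 2(1 + i)/2 = 1 + i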
All the answers so far have been informative, but let me just stress: there's nothing wrong with radicals in the denominator. Nothing wrong with them at all.
My own impression in grading rooms for college mathematics exams has been that, whatever the merits of forcing a canonical form, they're marginal (yeah, you could do it for rationals alone, sure, but eventually you move on to contexts with so many functions around, satisfying so many identities, that you could hardly hope to settle on a convenient canonical form for every expression built out of them). The contortions students go through to avoid leaving perfectly good radicals in the denominators where they most naturally show up, just because someone once drilled them to be superstitiously afraid of such things, only end up being more annoying than helpful to us in the grading process.
Not to mention, it introduces more chances for minor mistakes, which can result in the wrong answer! Most of my mistakes in Calculus came from the algebra after I did the integral correctly.
I teach high school algebra, and rationalizing denominators is part of the standard curriculum. I teach it merely because it's expected that I will, and because it comes up occasionally on standardized tests. Frankly, it's an archaic leftover from a time before easy calculation, as is, for that matter, simplifying radicals altogether. I suppose there is something to be said for it developing general numerical manipulation skills.
I also teach calculus, Amblydoper, and I generally don’t care about their algebraic simplification unless it helps solve the problem. Specifically, my AP Calculus students are always happy to hear that they don’t generally have to simplify, and shouldn’t, for exactly that reason.
Exactly. This is an interesting topic in computer algebra (e.g. how should a computer recognize that sqrt(2)/2 is equal to 1/sqrt(2)?).
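As a minimal sketch of what that looks like in practice (using SymPy here purely as an illustration; the thread doesn't mention any particular system):

    from sympy import sqrt, simplify, radsimp

    a = 1 / sqrt(2)
    b = sqrt(2) / 2
    # One standard approach: subtract the two forms and simplify;
    # reaching zero certifies they are the same number.
    print(simplify(a - b) == 0)          # True
    # radsimp() rationalizes denominators, producing the "canonical" form.
    print(radsimp(1 / (1 - sqrt(2))))    # -sqrt(2) - 1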
The semi-official Ask Dr. Math stance is that rationalizing the denominator is up to you. There’s no compelling argument for enforcing a ‘canonical’ form.
For all the good math teachers in the country, there are still several bone-headed ones that couldn’t recognize decent math if it was staring them in the face. They tend to perpetuate “rules” like rationalizing the denominator with almost religious fervor.
To be fair, it may not be up to them. The department or district (or even the state standardized test) may impose standards on them that they really can’t avoid without risking their job.
I agree with what's been said: that having a radical in the denominator is not the inherent evil that some algebra books/teachers make it out to be; and that it made a lot more sense back in the days before calculators. If you didn't have a calculator (but did, perhaps, have access to a table of square roots), it would be a lot easier to divide 1.41421356… by 2 by hand (to whatever degree of precision you need) than to divide 1 by 1.41421356…
If I had to justify the practice, I’d say that, in general (with some exceptions), you want your denominators to be as simple as possible, even if that means making the numerator a little more complicated. The denominators are what you have to mess with if you need to find a common denominator, or something like that. And sometimes, the process of rationalizing the denominator really does make an expression noticeably simpler. Example:
10/sqrt(2) = 10sqrt(2)/2 = 5sqrt(2) (which no longer has a denominator at all).
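And the common-denominator point plays out the same way: once the radical is cleared, the denominators are plain integers, e.g.

1/2 + 1/sqrt(2) = 1/2 + sqrt(2)/2 = (1 + sqrt(2))/2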
It’s also, as Tyrrell McAllister notes, good preparation for dealing with complex numbers. If a numerical expression has sqrt(-1) in its denominator, you’ll want to rationalize that denominator to get the number into standard form.
In some cases, maybe.
But then I wouldn't expect them to write to the Ask Dr Math site demanding (yes, DEMANDING!) that certain answers in the archives be changed to "follow the rules".
Most math teachers are ok and have a tough job with little recognition for good work. But, as in all walks of life, there are some complete idiots. And they happen to believe there’s some kind of higher math power that has given them THE rules for math.
Oh, as for the issue of canonical form making grading simpler: what I usually do when grading (if I don't immediately recognize the forms as equivalent) is just find the decimal value of the answer. If there are variables in it, then I pull some values out of my hat for them. For instance, I might arbitrarily set x = 1 and y = 2, and note that the answer then works out to be 1.7392468 using the answer key. If plugging x = 1 and y = 2 into the student's form of the answer still gives 1.7392468, then I assume that the student did in fact get the right answer in a different form, since the chances are negligible that their expression would have the same value at that point if it weren't an equivalent form.
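As a rough sketch of that trick in code (the helper name and the sample expressions here are made up for illustration):

    import math
    import random

    def probably_equal(f, g, trials=5, tol=1e-9):
        # Compare two candidate answers at a few random points.
        # A single mismatch proves they differ; agreement at several
        # random points makes equivalence overwhelmingly likely,
        # though it is not a formal proof.
        for _ in range(trials):
            x, y = random.uniform(1, 10), random.uniform(1, 10)
            if abs(f(x, y) - g(x, y)) > tol:
                return False
        return True

    # The answer key's form vs. a student's unrationalized form:
    key = lambda x, y: x * math.sqrt(2) / 2 + y
    student = lambda x, y: x / math.sqrt(2) + y
    print(probably_equal(key, student))   # True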
Just to satisfy my own perverse sense of humor, is there somewhere we can read such demands online?
Sure, but I also don’t think there’s anything wrong with complex denominators, for all the same reasons…