The original equation is a quadratic, so it has two roots. You transformed it into a cubic (x[sup]3[/sup] - 1 = 0), which has three roots. The roots of x[sup]2[/sup] + x + 1 = 0 are the other two cube roots of 1. You just happened to pick the one solution to the new equation that doesn’t satisfy the original.
I know that the math is incorrect and am familiar with the quadratic formula, but where is it written that making a substitution that transforms a quadratic into a cubic is wrong? I know that it is wrong, and I know that I introduced a false root, but I’m looking for a law, theorem, etc.
Some things that look like legitimate Algebra techniques aren’t.
Frex. Let’s take the equation x = 3. Rewrite it as x - 3 = 0.
Now multiply both sides by x. We get x[sup]2[/sup] - 3x = 0. Our original root, 3, still works. But we’ve introduced a new one with our shady Algebra: x = 0. x = 0 is fine for the new equation, but it is not a valid root of the original, nor should it be. (Notice that you can’t even undo the step by dividing both sides by x, since that division is illegal when x = 0.)
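The multiply-by-x example is easy to check numerically. Here’s a quick Python sketch (the function names are mine, purely for illustration):

```python
# Original equation: x - 3 = 0.  Multiplying both sides by x gives
# x**2 - 3*x = 0, which keeps the old root but picks up a new one.
def original(x):
    return x - 3

def inflated(x):
    return x**2 - 3*x

for x in (0, 3):
    print(x, original(x) == 0, inflated(x) == 0)
# prints:
# 0 False True   <- x = 0 solves only the inflated equation
# 3 True True    <- x = 3 solves both
```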
You transformed the equation into a higher-order equation, got one of its solutions, and tried to apply it to the original, which doesn’t work. To put it another way… x = 3 has one solution. No matter what you transform that equation into, the original still has exactly one solution.
And as a general rule of thumb, you do not substitute an equation into itself.
Neh. Forgot something. To clarify the above: a polynomial equation has a number of solutions equal to its order, which is the highest exponent present (counting repeated roots and allowing complex ones). By changing the order of the equation, you add or remove solutions.
I take it that you are suggesting that the FTOA says that there are exactly two roots to the initial equation, and that the addition of the third, the false root, would violate the pigeonhole principle. I agree.
It seems that I recall an explanation that involved rings or fields. Any ideas?
I agree with everything you’re saying, but can you point to a theorem or postulate that supports your “rule of thumb”? I’m not debating what you’re saying — er, rather, I’m not claiming it’s wrong; of course it’s right. I’m just looking for the seed of truth that will fill this little void in my knowledge.
Fact : A polynomial equation has a number of solutions equal to its order (counting multiplicity, over the complex numbers).
Fact : Substituting an equation into itself can change its order.
Fact : Getting three solutions from a third-order equation and expecting all three to work in the second-order original isn’t going to be rewarding for you.
The whole goal of substitution is to gain more information. Substituting an equation into itself can’t introduce more information than exists in the original equation.
Are there any real solutions to the original equation? The discriminant (the bit in the quadratic formula under the radical sign) seems to be negative. Maybe the problem arises because you are treating complex numbers as real numbers?
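For the record, the discriminant really is negative. A short Python check using the standard cmath module confirms it and produces the two complex roots:

```python
import cmath

# x**2 + x + 1 = 0  ->  a = b = c = 1
a, b, c = 1, 1, 1
disc = b * b - 4 * a * c        # discriminant: -3, so no real roots
r1 = (-b + cmath.sqrt(disc)) / (2 * a)
r2 = (-b - cmath.sqrt(disc)) / (2 * a)
print(disc)                     # -3
print(r1)                       # (-0.5+0.8660254037844386j)
print(r2)                       # (-0.5-0.8660254037844386j)
```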
Fellas, I’m just trying to see if there is a simple explanation for all of this. A simple postulate that says don’t do this. If you don’t know of one that’s fine. Chalking it up to being a corollary is fine though.
All of your algebraic manipulations are correct from
(eq.1) x[sup]2[/sup] + x + 1 = 0
to
(eq.2) x[sup]3[/sup] = 1 .
That is, you’ve shown that any root of (1) is a root of (2). Note that you have not shown the converse of this; some of your algebraic manipulations are one-way. (Valid substitutions, as you have found out, can add solutions to the equation you’re making the substitution in. The system of equations still has the same solution set, of course; you can’t discard the original equation just because you have used it in the substitution.) Essentially, in this case, you have multiplied the original equation by (x-1), thus introducing the new root x=1 (though the derivation has been written to obscure this fact).
At this point, a better way of writing what you know is that you have a system of two equations: your original equation (1) and a new equation (2) which is guaranteed to contain all of the solutions of (1), possibly with some extra roots.
The only actual errors in your proof are the assumption that x[sup]3[/sup]=1 => x=1 and the assumption that this root must be a root of the original equation. As others have mentioned, (2) has three roots, not one, so all you can say is that any solution of (1) must be one of {1, w, w[sup]2[/sup]} (where w is a primitive cube root of 1). You know that the roots of the original equation (1) are all in this set (since you showed that any root of (1) is a root of (2)), but you don’t know which, if any, of these are roots of (1). Since some of your algebra may have introduced new roots, you still have to check them in (1).
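A quick numeric check bears this out. In the Python sketch below, w is exp(2πi/3), one of the primitive cube roots of 1; all three candidates satisfy (2), but only two of them satisfy (1):

```python
import cmath

# w = exp(2*pi*i/3) is a primitive cube root of 1; the three cube
# roots of 1 are then 1, w, and w**2.
w = cmath.exp(2j * cmath.pi / 3)
for x in (1, w, w**2):
    satisfies_2 = abs(x**3 - 1) < 1e-9        # derived equation (2)
    satisfies_1 = abs(x**2 + x + 1) < 1e-9    # original equation (1)
    print(satisfies_2, satisfies_1)
# prints:
# True False   <- x = 1 is the extraneous root
# True True
# True True
```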
x^2 + x + 1 = 0 has two roots: (-1 + sqrt(3)i)/2 and (-1 - sqrt(3)i)/2
x^3 - 1 = 0 has three roots: (-1 + sqrt(3)i)/2, (-1 - sqrt(3)i)/2, and 1
x^3 - 1 is, in fact, (x^2 + x + 1) * (x - 1)
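That factorization can be spot-checked in a couple of lines of Python. (Two cubics that agree at four distinct points are the same polynomial, so a few sample values settle the identity.)

```python
# Spot-check the identity x**3 - 1 == (x**2 + x + 1) * (x - 1).
# The sample values are chosen so the arithmetic is exact in floats.
for x in (-2.0, 0.0, 1.5, 3.0):
    assert x**3 - 1 == (x**2 + x + 1) * (x - 1)
print("identity holds")
```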
Picking 1 as a root and then trying to substitute it back into the first equation will produce a ridiculous result. That’s like saying:
x + 1 = 0
(x - 1) * (x + 1) = 0 – for which 1 is a root, so
x = 1 -> (x + 1) = 0 -> (1 + 1) = 0 -> 2 = 0
More to the point, though, x^3 - 1 equals x^2 + x + 1 only when x = (-1 + sqrt(3)i)/2, x = (-1 - sqrt(3)i)/2, or x = 2. For any other value of x, they are not equal. For x = 1, their difference, (x^2 + x + 1) - (x^3 - 1), is 3.
The axiom you are asking for is something like, “If a is not equal to b, then setting a = b and solving will give a contradiction.”
No you don’t. If a=b, then you can substitute b for a anywhere you like and still get a true equation. You don’t have to substitute everywhere. (As I said above, it may have more roots than before, but all of the original roots will still be roots.)
Well, it’s true that the roots of x[sup]2[/sup]+x+1=0 are also roots of x[sup]4[/sup]+x[sup]2[/sup]+1=0; but there are two more roots as well. (The four roots of this equation are the sixth roots of 1 other than 1 and -1.)
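That claim about the sixth roots is easy to verify numerically. A Python sketch:

```python
import cmath

# The six sixth roots of 1 are exp(2*pi*i*k/6) for k = 0..5;
# k = 0 gives 1 and k = 3 gives -1.
quartic_roots = []
for k in range(6):
    x = cmath.exp(2j * cmath.pi * k / 6)
    if abs(x**4 + x**2 + 1) < 1e-9:
        quartic_roots.append(k)

print(quartic_roots)    # [1, 2, 4, 5] -- every sixth root except 1 and -1
```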