Rabin–Miller is a probabilistic test of whether a number is prime, but there are also a few deterministic, polynomial-time primality-testing algorithms (AKS, for instance).
Since primality testing can be done in polynomial time, it drops out of the overall time complexity in big O notation.
GNFS is sub-exponential for integer factorization, so while integer factorization is in NP, it is also not the most expensive operation.
Eratosthenes/wheel sieve algorithms only run in O(N) if they fit in memory. The sieve of Eratosthenes is pseudo-polynomial: it needs O(N) bits of memory, and since computers don’t have 2[sup]1024[/sup] bits of memory, the time complexity goes way up. This is the largest time-complexity factor, and thus why I said it was exponential.
If encryption keys were smaller than 10[sup]13[/sup] this would be different.
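To make the scaling concrete, here’s a minimal sieve of Eratosthenes sketch in Python (my own illustration, not anyone’s production code). Both its memory and its running time grow with the numeric value N, which is exponential in the bit length of N:

```python
def sieve(n):
    """Sieve of Eratosthenes: all primes up to n.

    Note the O(n) bit array: both time and memory scale with the
    numeric VALUE n, i.e. exponentially in the number of bits of n.
    """
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # cross off every multiple of p, starting at p*p
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]
```

For a 1024-bit key, the `is_prime` table alone would need about 2[sup]1024[/sup] entries, which is the memory wall being described above.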
But finding the next prime is still polynomial time, even without using a memory-hog sieve. You just keep on trying candidates, and polynomial-time prime-testing each of them, until you find one that works. Primes are dense enough that you won’t need an exponentially-large number of trials.
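A sketch of that procedure in Python, using a Miller–Rabin test (the probabilistic variant; any polynomial-time primality test would do here):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller–Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

def next_prime(n):
    """Try successive candidates until one passes the primality test."""
    candidate = n + 1
    while not is_probable_prime(candidate):
        candidate += 1
    return candidate
```

Because the average gap between primes near n is only about ln(n), the loop runs a polynomial number of times in the bit length of n (on average).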
Yeah, I was somewhat surprised by it when I learned it as an undergrad math major. The reason why it isn’t used in elementary school is that the real work the teachers want you to do is to factor a number. The excuse of finding the GCD of two numbers is just a justification for it. In any math problem you actually have to do in school, you’re not going to be given 3-digit numbers whose GCD you need to find in order to solve some other problem, so efficiency isn’t a big concern. But if you can keep the kids busy factoring 3-digit numbers because they think it’s important to find their GCD, then you have a motivational tool to keep them practicing factoring numbers, which becomes helpful when you start factoring more complex things like polynomials. Not that you’ll be factoring many polynomials with 3-digit coefficients, but the idea is that factoring things is something mathematicians do, and it’s not always as easy as 21 = 3 * 7. When I heard in Algebra I that we would be doing “factoring” later, I was so used to factoring integers that I wondered what the big deal was supposed to be.
Mind blown. That makes total sense. And I imagine most students won’t return to the topic of greatest common factors until getting into number theory or something in college, so for people like me who don’t major in math there’s no point where the algorithm is handy.
(I do periodically work with high school students on GCF problems, funnily enough – the topic sometimes appears on the SAT and ACT, where “check your multiple choice answers from biggest to smallest and stop when you find a common factor” is the algorithm of choice.)
One reason that the “difficult” method is used is that, while time consuming, it is easy to understand for a grade school student, and a grade school teacher (such teachers often being quite poor at understanding math concepts, indeed often math-phobic). Thus, while Euclid’s method might be less time consuming, it’s not something that a non-mathematician would find easy to explain, let alone understand at the age of 8, which is when common factors are usually explored as part of the process of understanding multiplication.
I dunno; I’ve seen it in sixth-grade classrooms, and I’m pretty sure it was even an honors class. By that point, they should be more than capable of handling it, and it’s an excellent opportunity to do math (something that doesn’t come up often in elementary-school “math” classes), to show how and why it works.
I’m pretty sure the first time I saw the Euclidean algorithm was in a Number Theory class, where I learned it in the context of learning the procedure for solving Diophantine equations of the form Am + Bn = C, which I thought was really cool. But I agree with glowacks about why it’s probably not taught to youngsters.
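For the curious, a minimal Python sketch of that procedure, using the usual recursive formulation of the extended Euclidean algorithm (the function names here are my own):

```python
def extended_gcd(a, b):
    """Extended Euclidean algorithm: return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_diophantine(A, B, C):
    """One integer solution (m, n) of A*m + B*n == C, or None if none exists."""
    g, x, y = extended_gcd(A, B)
    if C % g != 0:
        return None  # solvable if and only if gcd(A, B) divides C
    k = C // g
    return x * k, y * k
```

From one solution, all others follow by shifting m by B/g and n by -A/g.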
I don’t recall ever actually needing to find a greatest common divisor for money, but it comes up often in recreational math exercises.
I recall one on-line contest to express a certain mapping y = f(x) with only four operators. I showed a mapping that used only two operators and works for any finite map from/to natural numbers!
f(x) = Q mod P[sub]x[/sub]
where P[sub]x[/sub] is the x’th prime and Q is a (very large) integer found via an extended Euclidean algorithm. (The moderator had misgivings about the P[sub]x[/sub] operator, but he did publish one of my 4-operator solutions.)
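Here’s a rough Python sketch of the idea, using a hypothetical three-entry map and the first three primes; the extended Euclidean algorithm supplies the modular inverses needed to build Q (this is just the Chinese Remainder Theorem):

```python
def ext_gcd(a, b):
    """Extended Euclidean algorithm: return (g, x, y) with a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt_combine(residues, moduli):
    """Build Q with Q % moduli[i] == residues[i] (moduli pairwise coprime)."""
    Q, M = 0, 1
    for r, m in zip(residues, moduli):
        _, inv, _ = ext_gcd(M, m)      # inverse of M modulo m
        t = ((r - Q) * inv) % m        # shift Q by M*t to hit residue r mod m
        Q += M * t
        M *= m
    return Q

# hypothetical finite map f(1)=1, f(2)=2, f(3)=4, encoded against the first
# three primes 2, 3, 5 (each value must be smaller than its prime)
primes = [2, 3, 5]
values = [1, 2, 4]
Q = crt_combine(values, primes)
# now f(x) = Q mod P_x recovers the table
```

For a map with many entries, Q grows roughly as the product of the primes used, which is why it ends up being a very large integer.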
Brute force is an algorithm. Or rather, the term refers to any type of algorithm that involves naïvely and laboriously iterating over a large search space.
I clicked this link and was rather surprised to hear of a (year-old?) D-Wave 2000Q computer with 2000 qubits commercially available for $15 million. (Last I’d read, such computers had only a handful of qubits.)
How many qubits will such a computer need before it will outperform conventional computers per-dollar? Is 2000 enough for some algorithms?
The trouble with the D-Wave machines is that it isn’t even clear they are real quantum computers. Another problem is that quantum computers are so wildly different to conventional computers in concept and operation that it makes little to no sense to compare them for most tasks. About the only sensible use that seems to be viable is to use quantum computer systems to simulate other interesting quantum systems.
The D-Wave machines basically do a form of annealing. You connect up a mesh of qubits with certain constraints, and those connections represent constraints in your problem space. The machine architecture is limited by the physical ability to create interconnections, so you have limited capability to create connections between arbitrary qubits, which constrains the problems you can represent (although you can burn more qubits to get the connections you want). As they get more and more qubits working, they might be able to represent truly interesting problems. But whether they can get the machine to deliver answers faster than a conventional machine is another matter. I remain very doubtful that they can get over the signal-to-noise issues well enough to realise any inherent speedups. But we will see.
Your posts are really, really hard to parse. For one thing, your treatment of standard algorithm analysis is very misleading.
In particular, the input size, i.e., number of bits, is what counts in terms of talking about complexity. Esp. if using terms like NP and EXPTIME. The value of a number does not apply in this context. Furthermore, no matter which one you use you definitely do not get “requires EXPTIME” for any simple version of the problem. E.g., a simple sieve method is quadratic if you’re talking about the value. Which we shouldn’t be doing.
Your posts are quite garbled. Please re-explain using standard terminology with cites. Esp. about this “EXPTIME” claim.
And don’t bother dumbing it down for me. I am friends with several of the key people involved here. I was hanging out with Gary Miller when the whole Rabin fiasco happened. I was in the back seat of a car with R and A of RSA a week after their big breakthrough. I knew Shor before he got famous. So bring out the big guns.
Agreed, but it should be noted that sixth-grade math teachers are often certified in the subject (usually under a “middle-school math/science” certification), whereas from K to 4th, the teacher is usually just certified under a generalized elementary school classification. Having spent a couple years substituting at all grade levels before undertaking to become a real teacher, I watched math be taught at all different levels, and the difference between elementary teachers and middle-school teachers was night and day. I know of elementary teachers who literally didn’t understand that subtraction was just addition done in the inverse direction. That sort of teacher is flummoxed by anything more complicated than double-digit multiplication (God help whoever they try to teach long division to!).
It certainly would be nice if younger students saw the how and why of math at an earlier age, and as you say, the Euclid algorithm does help them see that.
To be clear, what I saw in sixth-grade classrooms was students being taught to find the GCF by finding all of the factors, at an age where they should be learning Euclid’s algorithm.
Either the goal in finding GCF is because it’s useful, or the goal is to teach principles of mathematics. If it’s because it’s useful, then you teach Euclid’s algorithm because it’s much easier than the other way. If it’s to teach principles of mathematics, then you teach Euclid’s algorithm because it’s a better illustration of mathematical principles.
Neither one of those statements is obvious to me. Can you justify them?
For small numbers, like gcd(9,21), the Euclidean algorithm seems like overkill; while for large numbers, it involves finding the remainder when one large number is divided by another, which is nontrivial.
And the “other way” involves prime factorization, breaking numbers down into their constituent parts, which makes it easy to see what a number’s factors/divisors are and why. What mathematical principles does the Euclidean algorithm teach?
I don’t recall how I computed GCD in school, but it was used only for doing LCM to add fractions and they were always obvious for the small numbers used. But the Euclidean algorithm could certainly be taught to 6th graders since it is obvious that n and m have the same GCD as n and m (mod n).
But I have a question concerning the claim that finding the next prime is not known to be in poly time. Is there any known bound on prime gaps better than Chebyshev’s, which states, IIRC, that there is always a prime between n and 2n? Is there anything like: there’s always a prime between n and n + C*log(n) for some constant C? Otherwise the simple approach of trying all the numbers > n till you find a prime could well take super-polynomial time.
When we speak of efficiencies of algorithms, we’re usually referring to the average case, not the worst case (hence, for instance, why Quicksort is considered n log(n), even though its worst case is n^2). And the average spacing between primes is given by the Prime Number Theorem.
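As a quick sanity check (a rough empirical sketch, not a proof), one can sieve a range and compare the average gap against ln(n):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if flags[p]:
            flags[p * p :: p] = [False] * len(flags[p * p :: p])
    return [i for i, f in enumerate(flags) if f]

# average gap between consecutive primes in the window [100000, 200000]
primes = primes_up_to(200_000)
window = [p for p in primes if p >= 100_000]
gaps = [b - a for a, b in zip(window, window[1:])]
average_gap = sum(gaps) / len(gaps)

# Prime Number Theorem heuristic: the average gap near n is about ln(n),
# so ln(150000) ~ 11.9 for the middle of this window
predicted = math.log(150_000)
```

The measured average lands within a hair of the ln(n) prediction, which is why an expected polynomial number of trials suffices in the next-prime search described earlier.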
If the Euclidean algorithm is overkill, then the full factorization method is even more so. Something like gcd(9,21) is easy with Euclid’s algorithm: The biggest multiple of 9 smaller than 21 is 18, and 21 - 18 = 3, and we’re done. Now, granted, it’s also easy to see that 3 is the largest shared element of {1,3,9} and {1,3,7,21}, but I’d say that, even in that simple case, Euclid’s algorithm is slightly easier.
But now consider, say, 84 and 91. Using Euclid’s algorithm, I’d start by saying that the remainder is 7, and then I just need to check one of those numbers for divisibility by 7, and then I’m done. The only part that takes any effort at all is the divisibility by 7, and if I used the other method, I’d have to (eventually) check both numbers for that (after I had already tried both for 2, 3, and 5).
And for any numbers for which you’d need a calculator for the remainders, it’s going to be a real headache to do the factorization. Rolling a die here for some random three-digit numbers, to be sure I’m not cherry-picking: Let’s find the GCF of 988 and 270. First let’s use the full-factorization method.
988 is a multiple of 2: It’s 2*494. 494 is still a multiple of 2: It’s 2*247. 247… is not a multiple of 3. It’s… not a multiple of 5. It’s… not a multiple of 7. It’s… not a multiple of 11. It’s… OK, it’s a multiple of 13, so 988 = 2*2*13*19, and I know that 19 is prime, so I’m done.
Now 270 is a multiple of 2: It’s 2*135. 135 is a multiple of 3: It’s 3*45. 45 is still a multiple of 3: It’s 3*15. 15 is still a multiple of 3: It’s 3*5, and I know that 5 is prime. So 270 = 2*3*3*3*5. And now I can compare the two lists, and see that 2 is the GCF.
By contrast, using Euclid’s method: 270 fits 3 times into 988, and 988 - 810 = 178. 178 fits once into 270, and 270 - 178 = 92. 92 fits once into 178, and 178 - 92 = 86. 86 fits once into 92, and 92 - 86 = 6. 6 fits 14 times into 86, and 86 - 84 = 2, and 2 is a factor of 6, so I’m done.
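For what it’s worth, that whole procedure is just a short loop; a minimal Python sketch:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a
```

Each step shrinks the numbers at least geometrically, which is why the hand computation above finishes in a handful of lines even for three-digit inputs.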
As an aside, suppose that I had made some mistakes. After all, 6*14 = 84 is probably beyond where a student has their multiplication table memorized. Suppose I had just said “I don’t know how many times 6 fits into 86, but it’s definitely at least 10”. Then I’d take 86 - 60 = 26, and now I’m looking at 26 and 6. OK, that one I can do: 6 fits into 26 four times, and 26 - 24 = 2. It took me one extra step, but it still gets me to the right answer. On the other hand, the most common mistakes in the factorization method won’t give you the right answer.