Pollard’s rho method is an algorithm for factoring integers, much faster than trial division for larger numbers.
Let $n$ be a composite number and let $d$ be a non-trivial factor of $n$. The actual values of the factors are unknown at this stage, and Pollard’s rho method provides a way to find them. We do know, however, that $d$ is not larger than $\sqrt{n}$. In fact, we know at least one of the factors must satisfy $d \le \sqrt{n}$, so we assume this condition.
So, how does this help? If you start picking numbers at random (keeping your numbers greater than or equal to zero and strictly less than $n$), then the only time you will get $a \equiv b \pmod{n}$ is when $a$ and $b$ are identical. However, since $d$ is smaller than $n$, there is a good chance that $a \equiv b \pmod{d}$ sometimes when $a \neq b$.
Well, if $a \equiv b \pmod{d}$, that means that $a - b$ is a multiple of $d$. Since $n$ is also a multiple of $d$, the greatest common divisor of $a - b$ and $n$ is a positive integer multiple of $d$. We can keep picking numbers randomly until the greatest common divisor of $n$ and the difference of two of our random numbers is greater than one. Then, we can divide $n$ by whatever this greatest common divisor turned out to be. In doing so, we have broken $n$ down into two factors. If we suspect that the factors may be composite, we can continue trying to break them down further by running the algorithm again on each of them.
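As a toy illustration of this gcd observation (the numbers here are my own, not from the article): suppose $n = 91 = 7 \cdot 13$, and two of our random picks happen to be congruent modulo the unknown factor $7$.

```python
from math import gcd

n = 91        # composite; its factors 7 and 13 are "unknown" to us
a, b = 10, 3  # two picks with a ≡ b (mod 7), even though a ≠ b (mod 91)

# Since 7 divides both a - b and n, the gcd exposes the hidden factor.
g = gcd(a - b, n)
print(g)      # 7, a non-trivial factor of 91
```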
The amazing thing here is that through all of this, we just knew there had to be some divisor of $n$. We were able to use properties of that divisor to our advantage before we even knew what the divisor was!
This is at the heart of Pollard’s rho method. Pick a random number $a_1$. Pick another random number $a_2$. See if the greatest common divisor of $a_2 - a_1$ and $n$ is greater than one. If not, pick another random number $a_3$. Now, check the greatest common divisor of $a_3 - a_2$ and $n$. If that is not greater than one, check the greatest common divisor of $a_3 - a_1$ and $n$. If that doesn’t work, pick another random number $a_4$. Check $\gcd(a_4 - a_3, n)$, $\gcd(a_4 - a_2, n)$, and $\gcd(a_4 - a_1, n)$. Continue in this way until you find a factor.
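This brute-force version can be sketched as follows (a sketch of my own, with an arbitrary test number; it is not the article’s worked example):

```python
import random
from math import gcd

def naive_pollard(n, seed=0):
    """Pick random numbers below n and gcd-check the difference of
    every pair, returning a non-trivial factor of composite n.
    The seed is fixed only so the sketch is reproducible."""
    rng = random.Random(seed)
    picks = []
    while True:
        a = rng.randrange(n)
        for b in picks:          # by the k-th pick, up to k-1 gcd checks
            g = gcd(a - b, n)
            if 1 < g < n:        # g == n would mean a duplicate pick mod n
                return g
        picks.append(a)

factor = naive_pollard(8051)     # 8051 = 83 * 97
print(factor, 8051 // factor)
```

The inner loop over all earlier picks is exactly the buildup of gcd checks discussed next.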
As you can see from the above paragraph, this could get quite cumbersome quite quickly. By the $k$-th iteration, you will have to do $k - 1$ greatest common divisor checks. Fortunately, there is a way around that. By structuring the way in which you pick “random” numbers, you can avoid this buildup.
We use an appropriate polynomial $f$ to generate pseudorandom numbers. Because we’re only concerned with numbers from zero up to (but not including) $n$, we will take all of the values of $f$ modulo $n$. We start with some $a_1$. We then pick our numbers by taking $a_{k+1} \equiv f(a_k) \pmod{n}$.
Now, say for example we get to some point where $a_j \equiv a_k \pmod{d}$ with $j < k$. Then, because of the way that modular arithmetic works, $f(a_j)$ will be congruent to $f(a_k)$ modulo $d$. So, once we hit upon $a_j$ and $a_k$, then each element in the sequence starting with $a_j$ will be congruent modulo $d$ to the corresponding element in the sequence starting at $a_k$. Thus, once the sequence gets to $a_k$ it has looped back upon itself to match up with $a_j$ (when considering them modulo $d$).
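To see the looping concretely, here is a small made-up example (my own numbers, not the article’s): take $n = 1037 = 17 \cdot 61$ and $f(x) = x^2 + 1$ starting from $a_1 = 2$. Reducing the sequence modulo the hidden factor $17$ reveals the repetition long before anything repeats modulo $n$.

```python
n = 1037                       # 17 * 61; pretend the factors are unknown
d = 17                         # the hidden factor, used here only to show the loop
f = lambda x: (x * x + 1) % n  # pseudorandom recursion a_{k+1} = f(a_k) mod n

seq = [2]                      # a_1 = 2
for _ in range(12):
    seq.append(f(seq[-1]))

print(seq)                     # looks random modulo n
print([a % d for a in seq])    # modulo 17: 2, 5, 9, 14, 10, 16, 2, 5, ... (period 6)
```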
This looping is what gives the method its name. If you go back through (once you determine $d$) and look at the sequence of random numbers that you used (looking at them modulo $d$), you will see that they start off just going along by themselves for a bit. Then, they start to come back upon themselves. They don’t typically loop the whole way back to the first number of your sequence. So, they have a bit of a tail and a loop, just like the Greek letter rho ($\rho$).
Before we see why that looping helps, we will first speak to why it has to happen. When we consider a number modulo $d$, we are only considering the numbers greater than or equal to zero and strictly less than $d$. This is a finite set of numbers. Your random sequence cannot possibly go on for more than $d$ steps without having some number repeat modulo $d$. And, if the function $f$ is well chosen, it will probably loop back a great deal sooner.
The looping helps because it means that we can avoid the buildup of greatest common divisor checks that each new random number would otherwise require. In fact, it makes it so that we only need to do one greatest common divisor check for every second random number that we pick.
Now, why is that? Let’s assume that the loop is of length $\ell$ and starts at the $j$-th random number. Say that we are on the $k$-th element of our random sequence. Furthermore, say that $k$ is greater than or equal to $j$ and that $\ell$ divides $k$. Because $k$ is greater than or equal to $j$, we know $a_k$ is inside the looping part of the sequence. We also know that if $\ell$ divides $k$, then $\ell$ also divides $2k$. What this means is that $a_{2k}$ and $a_k$ will be congruent modulo $d$ because they correspond to the same point on the loop. Because they are congruent modulo $d$, their difference is a multiple of $d$. So, if we check the greatest common divisor of $a_{2k} - a_k$ with $n$ every time we reach an even index, we will find some factor of $n$ without having to do $k - 1$ greatest common divisor calculations every time we come up with a new random number. Instead, we only have to do one greatest common divisor calculation for every second random number.
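Putting the pieces together, here is a minimal sketch of the method with this two-speed (Floyd-style) cycle check. The particular choices $f(x) = x^2 + 1$, starting value $2$, and test number $8051 = 83 \cdot 97$ are my own illustration, not the article’s worked example.

```python
from math import gcd

def pollard_rho(n, c=1, a1=2):
    """Return a non-trivial factor of composite n, or None on failure.

    `slow` visits a_k while `fast` visits a_{2k}, so only one gcd
    check is needed per pair of new sequence elements."""
    f = lambda x: (x * x + c) % n
    slow = fast = a1
    while True:
        slow = f(slow)           # advance one step: a_k
        fast = f(f(fast))        # advance two steps: a_{2k}
        g = gcd(fast - slow, n)
        if g == n:
            return None          # sequence closed up modulo n; retry with another c
        if g > 1:
            return g             # a non-trivial factor of n

print(pollard_rho(8051))         # finds 97, since 8051 = 83 * 97
```

On failure (`None`), one simply restarts with a different constant $c$ or starting value.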
The only open question is what to use for a polynomial $f$ to get some random numbers which don’t repeat too quickly modulo $d$. Since we don’t usually know much about $d$, we really can’t tailor the polynomial too much. A typical choice of polynomial is

$$f(x) = x^2 + c,$$
where $c$ is some constant which isn’t congruent to $0$ or $-2$ modulo $n$. If you don’t place those restrictions on $c$, then you can end up degenerating into the constant sequence $a_k, a_k, a_k, \ldots$ as soon as you hit upon some $a_k$ which is a fixed point of $f$ modulo $n$ (for $c = 0$ these are $0$ and $1$; for $c = -2$ they are $2$ and $-1$).
Let’s use the algorithm now to factor our number . We will use the sequence with . [I also tried it with the very basic polynomial , but that one went 80 rounds before stopping, so I didn’t include the table here.]
and so we have discovered the factor .
Let’s try to factor again with a different random number scheme. We will use the sequence with .
Again, the factor shows up.
Pollard’s rho method can also be applied to finite groups other than the integers modulo $n$, providing one of the best known methods for computing discrete logarithms in arbitrary groups.
Date of creation: 2013-03-22 12:34:58
Last modified on: 2013-03-22 12:34:58
Last modified by: yark (2760)