Primality Testing
Introduction
Primality testing of a number is perhaps the most common problem concerning number theory that topcoders deal with. A prime number is a natural number which has exactly two distinct natural number divisors: 1 and itself. Some basic algorithms and details regarding primality testing and factorization can be found here.
The problem of detecting whether a given number is a prime number has been studied extensively but nonetheless, it turns out that all the deterministic algorithms for this problem are too slow to be used in real life situations and the better ones amongst them are tedious to code. But, there are some probabilistic methods which are very fast and very easy to code. Moreover, the probability of getting a wrong result with these algorithms is so low that it can be neglected in normal situations.
This article discusses some of the popular probabilistic methods: the Fermat test, the Miller-Rabin test and the Solovay-Strassen test.
Modular Exponentiation
All the algorithms which we are going to discuss require you to efficiently compute (a^b)%c ( where a,b,c are non-negative integers and c > 0 ). A straightforward way to do this is to iteratively multiply the result by 'a' and take the remainder modulo 'c' at each step.
/* a function to compute (a^b)%c */
int modulo(int a,int b,int c){
// res is kept as long long because intermediate results might overflow in "int"
long long res = 1;
for(int i=0;i<b;i++){
res = (res*a)%c;
}
return res%c;
}
This loop runs b times, which is far too slow when b is large. We can instead compute (a^b)%c using only O(log(b)) multiplications by repeatedly squaring the base:
a^b = (a^2)^(b/2)            if b is even and b > 0
a^b = a*((a^2)^((b-1)/2))    if b is odd
a^b = 1                      if b = 0
This idea can be implemented very easily as shown below:
/* This function calculates (a^b)%c */
int modulo(int a,int b,int c){
long long x=1,y=a; // long long is taken to avoid overflow of intermediate results
while(b > 0){
if(b%2 == 1){
x=(x*y)%c;
}
y = (y*y)%c; // squaring the base
b /= 2;
}
return x%c;
}
Notice that after i iterations, b becomes b/(2^i) ( rounded down ), and y becomes (a^(2^i))%c. Multiplying x with y is equivalent to adding 2^i to the overall exponent accumulated in x. We do this if the ith bit from the right in the binary representation of b is 1. Let us take an example by computing (7^107)%9. If we use the above code, the variables after each iteration of the loop would look like this: ( a = 7, c = 9 )
iteration    b      x    y
0            107    1    7
1            53     7    4
2            26     1    7
3            13     1    4
4            6      4    7
5            3      4    4
6            1      7    7
7            0      4    4
Now b becomes 0 and the return value of the function is 4. Hence (7^107)%9 = 4.
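As a quick sanity check, the function above can be called directly; the small test harness below is only an illustrative sketch ( main and the include are not part of the original article ):
#include <cstdio>

/* assumes the fast modulo() function defined above is in the same file */
int main(){
    printf("%d\n", modulo(7,107,9)); // prints 4, matching the table above
    return 0;
}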
The modulo function above only works when a, b and c fit in the range of type "int"; for larger values the intermediate results overflow "long long". To write a function for numbers up to 10^18, we need to compute (a*b)%c even when the product a*b is larger than what a long long can hold. We can use a similar idea to do that:
a*b = (2*a)*(b/2)           if b is even and b > 0
a*b = a + (2*a)*((b-1)/2)   if b is odd
a*b = 0                     if b = 0
Here is some code which uses the idea described above ( you can notice that it is the same structure as the exponentiation code, with just a couple of lines changed ):
/* this function calculates (a*b)%c taking into account that a*b might overflow */
long long mulmod(long long a,long long b,long long c){
long long x = 0,y=a%c;
while(b > 0){
if(b%2 == 1){
x = (x+y)%c;
}
y = (y*2)%c;
b /= 2;
}
return x%c;
}
We could replace x=(x*y)%c with x = mulmod(x,y,c) and y = (y*y)%c with y = mulmod(y,y,c) in the original function for calculating (a^b)%c. This requires that 2*c fits in the range of long long. For numbers larger than this, we could write our own BigInt class ( Java has a built-in one ) with addition, multiplication and modulus operations and use that.
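Putting the two pieces together, a 64-bit-safe exponentiation routine might look like the sketch below. The name modulo64 is illustrative ( not from the article ), and it assumes the mulmod() function defined above:
/* sketch: (a^b)%c for values up to about 10^18, assuming mulmod() from above */
long long modulo64(long long a,long long b,long long c){
    long long x=1,y=a%c;
    while(b > 0){
        if(b%2 == 1){
            x = mulmod(x,y,c); // multiply without overflowing long long
        }
        y = mulmod(y,y,c); // square the base without overflowing long long
        b /= 2;
    }
    return x%c;
}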
This method for exponentiation can be further improved using Montgomery multiplication. The Montgomery multiplication algorithm is a quick way to compute (a*b)%c, but since it requires some pre-processing, it does not help much if you are only going to compute one modular multiplication. While doing exponentiation, however, the pre-processing for 'c' needs to be done just once, which makes it a better choice when very high speed is required. You can read about it at the links mentioned in the reference section.
A similar technique can be used to compute (A^b)%c in O(n^3 * log(b)) time, where A is a square matrix of size n x n. All we need to do in this case is carry out all the operations as matrix operations. Matrix exponentiation is a very handy tool for your algorithm library and you will see problems involving it every now and then.
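For illustration, a minimal matrix-exponentiation sketch using the same square-and-multiply loop is given below; the names Matrix, multiply and matpow are assumptions for this example, and the entries are kept reduced modulo c ( so the intermediate products fit in long long for c up to about 10^9 ):
#include <vector>
using std::vector;

typedef vector< vector<long long> > Matrix;

/* (A*B)%c for two n x n matrices */
Matrix multiply(const Matrix &A,const Matrix &B,long long c){
    int n = A.size();
    Matrix C(n, vector<long long>(n,0));
    for(int i=0;i<n;i++)
        for(int k=0;k<n;k++)
            for(int j=0;j<n;j++)
                C[i][j] = (C[i][j] + A[i][k]*B[k][j]) % c;
    return C;
}

/* (A^b)%c by repeated squaring: O(n^3 * log(b)) work overall */
Matrix matpow(Matrix A,long long b,long long c){
    int n = A.size();
    Matrix R(n, vector<long long>(n,0));
    for(int i=0;i<n;i++) R[i][i] = 1; // identity matrix, i.e. A^0
    while(b > 0){
        if(b%2 == 1) R = multiply(R,A,c);
        A = multiply(A,A,c);
        b /= 2;
    }
    return R;
}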
Fermat Primality Test
Fermat's Little Theorem
According to Fermat's Little Theorem, if p is a prime number and a is a positive integer less than p, then
a^p = a ( mod p )
or alternatively:
a^(p-1) = 1 ( mod p )
Algorithm of the test
If p is the number which we want to test for primality, then we could randomly choose a, such that a < p, and then calculate (a^(p-1))%p. If the result is not 1, then by Fermat's Little Theorem p cannot be prime. What if that is not the case? We can choose another a and then do the same test again. We could stop after some number of iterations, and if the result is 1 in each of them, then we can state with very high probability that p is prime. The more iterations we do, the higher is the probability that our result is correct. You can notice that if the method returns composite, then the number is surely composite; otherwise it is probably prime.
Given below is a simple function implementing Fermat's primality test:
/* Fermat's test for checking primality; the more iterations, the higher the accuracy */
bool Fermat(long long p,int iterations){
if(p == 1){ // 1 isn't prime
return false;
}
for(int i=0;i<iterations;i++){
long long a = rand()%(p-1)+1; // choose a random integer between 1 and p-1 ( inclusive )
if(modulo(a,p-1,p) != 1){ // modulo is the modular exponentiation function from above
return false; // p is definitely composite
}
}
return true; // p is probably prime
}
Note that Fermat's test can be fooled: there exist composite numbers, the Carmichael numbers, for which a^(p-1) = 1 ( mod p ) holds for every a coprime to p. This is one reason to prefer the Miller-Rabin test described next.
Miller-Rabin Primality Test
The test relies on two facts:
1. Fermat's Little Theorem: if p is prime, then a^(p-1) = 1 ( mod p ) for every a with 1 <= a < p.
2. If p is prime and x^2 = 1 ( mod p ), then x = 1 or x = -1 ( mod p ), because p must divide (x-1)*(x+1).
Now write p-1 as (2^d)*s, where s is odd and d >= 0. If p is prime, then either a^s = 1 ( mod p ), in which case repeated squaring from a^s will always yield 1, so (a^(p-1))%p will be 1; or a^(s*(2^r)) = -1 ( mod p ) for some r such that 0 <= r < d, as repeated squaring from it will eventually yield 1 and finally a^(p-1) = 1 ( mod p ). If neither of these holds, then a^(p-1) cannot be 1 ( mod p ) when p is prime ( otherwise there would be a contradiction with fact #2 ), so p must be composite.
Algorithm
Let p be the given number which we have to test for primality. First we rewrite p-1 as (2^d)*s, where s is odd. Now we pick some a in the range [1,p-1] and check whether a^s = 1 ( mod p ) or a^(s*(2^r)) = -1 ( mod p ) for some r with 0 <= r < d. If both of these fail, then p is definitely composite. Otherwise p is probably prime. We can choose another a and repeat the same test. We can stop after some fixed number of iterations and claim that either p is definitely composite, or it is probably prime.
A small procedure realizing the above algorithm is given below:
/* Miller-Rabin primality test, iteration signifies the accuracy of the test */
bool Miller(long long p,int iteration){
if(p<2){
return false;
}
if(p!=2 && p%2==0){
return false;
}
long long s=p-1;
while(s%2==0){
s/=2;
}
for(int i=0;i<iteration;i++){
long long a=rand()%(p-1)+1,temp=s;
long long mod=modulo(a,temp,p);
while(temp!=p-1 && mod!=1 && mod!=p-1){
mod=mulmod(mod,mod,p); // keep squaring, using mulmod to avoid overflow
temp *= 2;
}
if(mod!=p-1 && temp%2==0){
return false; // p is definitely composite
}
}
return true; // p is probably prime
}
Solovay-Strassen Primality Test
This test is based on Euler's criterion: if p is an odd prime and a is an integer with gcd(a,p) = 1, then
a^((p-1)/2) = (a/p) ( mod p )
where (a/p) is the Legendre symbol ( +1 if a is a quadratic residue modulo p, -1 otherwise ). The Jacobi symbol (a/n) generalizes the Legendre symbol to any odd n > 0, and it can be computed without factoring n using the following properties:
(i) (a/n) = (b/n) if a = b ( mod n )
(ii) (0/n) = 0 and (1/n) = 1
(iii) (2a/n) = (a/n) if n = 1 or 7 ( mod 8 ), and (2a/n) = -(a/n) if n = 3 or 5 ( mod 8 )
(iv) if a and n are both odd, then (a/n) = (n/a), unless both a and n are 3 ( mod 4 ), in which case (a/n) = -(n/a)
To test p, we pick a random a in the range [1,p-1], compute the Jacobi symbol (a/p) and compare it with (a^((p-1)/2))%p. If the Jacobi symbol is 0 or the two values differ, p is definitely composite; otherwise p is probably prime, and we can repeat the test with another a. The function below computes the Jacobi symbol:
/* this function calculates the Jacobi symbol (a/n) for n > 0 and n odd */
int calculateJacobian(long long a,long long n){
if(!a) return 0; // (0/n) = 0
int ans=1;
long long temp;
if(a<0){
a=-a; // (a/n) = (-a/n)*(-1/n)
if(n%4==3) ans=-ans; // (-1/n) = -1 if n = 3 ( mod 4 )
}
if(a==1) return ans; // (1/n) = 1
while(a){
if(a<0){
a=-a; // (a/n) = (-a/n)*(-1/n)
if(n%4==3) ans=-ans; // (-1/n) = -1 if n = 3 ( mod 4 )
}
while(a%2==0){
a=a/2; // Property (iii)
if(n%8==3||n%8==5) ans=-ans;
}
swap(a,n); // Property (iv)
if(a%4==3 && n%4==3) ans=-ans; // Property (iv)
a=a%n; // because (a/p) = (a%p / p ) and a%pi = (a%n)%pi if n % pi = 0
if(a>n/2) a=a-n;
}
if(n==1) return ans;
return 0;
}
/* Iterations determine the accuracy of the test */
bool Solovoy(long long p,int iteration){
if(p<2) return false;
if(p!=2 && p%2==0) return false;
for(int i=0;i<iteration;i++){
long long a=rand()%(p-1)+1;
long long jacobian=(p+calculateJacobian(a,p))%p; // map a Jacobi value of -1 to p-1
long long mod=modulo(a,(p-1)/2,p);
if(!jacobian || mod!=jacobian){
return false; // p is definitely composite
}
}
return true; // p is probably prime
}
In all of the functions above, the division by 2 can be replaced by the right shift ">>= 1", "%2" can be replaced by "&1" and "*= 2" can be replaced by "<<= 1". Inline assembly can also be used to optimize them further.
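As a hedged illustration of those substitutions, here is the same modular exponentiation loop rewritten with bit operations ( modulo_bits is an illustrative name; the behaviour is identical to the earlier version ):
/* modular exponentiation using shifts and masks instead of /2 and %2 */
long long modulo_bits(long long a,long long b,long long c){
    long long x=1,y=a%c;
    while(b > 0){
        if(b & 1){ // same as b % 2 == 1
            x = (x*y)%c;
        }
        y = (y*y)%c;
        b >>= 1; // same as b /= 2
    }
    return x%c;
}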

