Use the smallest-divisor/1 function to find the smallest divisor of each of the following numbers: 199, 1999, 19999.
Most Lisp implementations include a primitive called
runtime that returns an integer that specifies the amount of time the system has been running (measured, for example, in microseconds). Erlang has something similar, though a bit more specific:
timer:tc/1-3 are functions that execute a given function, returning a tuple of the elapsed time in microseconds and the result of the function call.
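For example, a single call can be timed from the REPL like this (a hypothetical session; it assumes the chapter's prime?/1 predicate is loaded, and the elapsed time will vary from run to run):

```lisp
;; timer:tc/2 takes a fun and an argument list and returns a tuple of
;; the form #(Microseconds Result).
(timer:tc #'prime?/1 '(199))
;; => #(Elapsed true), where Elapsed depends on the machine
```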
Write a timed-prime-test function that, when called with an integer n, prints n and times the function it uses to check if n is prime. If n is prime, the function prints three asterisks followed by the amount of time used in performing the test. If n is not prime, it simply prints n:
(defun timed-prime-test (n)
  (let ((`#(,elapsed-time ,value) (timer:tc #'prime?/1 `(,n))))
    (report-prime n elapsed-time value)))

(defun report-prime
  ((n elapsed-time 'true)
   (io:format "~p *** ~p~n" `(,n ,elapsed-time)))
  ((n elapsed-time 'false)
   (io:format "~p~n" `(,n))))
Using these functions, write a function
search-for-primes that checks the primality of consecutive odd integers in a specified range. Use your function to find the three smallest primes larger than 1000; larger than 10,000; larger than 100,000; larger than 1,000,000. Note the time needed to test each prime. Since the testing algorithm has order of growth of Θ(√n), you should expect that testing for primes around 10,000 should take about √10 times as long as testing for primes around 1000. Do your timing data bear this out? How well do the data for 100,000 and 1,000,000 support the Θ(√n) prediction? Is your result compatible with the notion that programs on your machine run in time proportional to the number of steps required for the computation?
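One possible shape for such a function (a sketch, not the book's solution; it assumes the timed-prime-test/1 defined above):

```lisp
;; Walk the integers from start to end, timing only the odd ones.
(defun search-for-primes (start end)
  (cond ((> start end) 'done)
        ((== 0 (rem start 2)) (search-for-primes (+ start 1) end))
        ('true (progn
                 (timed-prime-test start)
                 (search-for-primes (+ start 2) end)))))
```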
The smallest-divisor/1 function shown at the start of the last section does lots of needless testing: after it checks to see if the number is divisible by 2, there is no point in checking to see if it is divisible by any larger even numbers. This suggests that the values used for
test-divisor should not be 2, 3, 4, 5, 6, ..., but rather 2, 3, 5, 7, 9, .... To implement this change, define a function
next/1 that returns 3 if its input is equal to 2 and otherwise returns its input plus 2. Modify the
smallest-divisor function to use
(next test-divisor) instead of
(+ test-divisor 1). With
timed-prime-test/1 incorporating this modified version of
smallest-divisor/1, run the test for each of the 12 primes found in exercise 1.22. Since this modification halves the number of test steps, you should expect it to run about twice as fast. Is this expectation confirmed? If not, what is the observed ratio of the speeds of the two algorithms, and how do you explain the fact that it is different from 2?
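The next/1 function described above is tiny; one way to write it, using LFE's pattern-matching clause style:

```lisp
;; 2 maps to 3; every other input maps to itself plus 2,
;; so only odd divisors are tried after 2.
(defun next
  ((2) 3)
  ((n) (+ n 2)))
```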
Modify the timed-prime-test/1 function of exercise 1.22 to use
fast-prime?/2 (the Fermat method), and test each of the 12 primes you found in that exercise. Since the Fermat test has Θ(log n) growth, how would you expect the time to test primes near 1,000,000 to compare with the time needed to test primes near 1000? Do your data bear this out? Can you explain any discrepancy you find?
Alyssa P. Hacker complains that we went to a lot of extra work in writing
expmod/3. After all, she says, since we already know how to compute exponentials, we could have simply written
(defun expmod (base exp m)
  (rem (trunc (fast-expt base exp)) m))
Is she correct? Would this function serve as well for our fast prime tester? Explain.
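One observation relevant in Erlang/LFE: integers are arbitrary-precision, so Alyssa's version gives correct answers, but it builds an enormous intermediate number before rem is ever applied, whereas expmod/3 keeps every intermediate result below m. A rough probe (assuming the book's fast-expt/2 is loaded):

```lisp
;; Count the decimal digits of the intermediate value Alyssa's version
;; would build for a Fermat-style test near 2000.
(length (integer_to_list (fast-expt 1999 1999)))
;; thousands of digits before rem ever runs
```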
Louis Reasoner is having great difficulty doing exercise 1.24. His
fast-prime?/2 test seems to run more slowly than his
prime?/1 test. Louis calls his friend Eva Lu Ator over to help. When they examine Louis's code, Eva finds that he has rewritten the
expmod/3 function to use an explicit multiplication, rather than calling square:
(defun expmod (base exp m)
  (cond ((== exp 0) 1)
        ((even? exp)
         (rem (* (expmod base (/ exp 2) m)
                 (expmod base (/ exp 2) m))
              m))
        ('true
         (rem (* base (expmod base (- exp 1) m))
              m))))
"I don't see what difference that could make," says Louis. "I do," says Eva. "By writing the function like that, you have transformed the Θ(log n) process into a Θ(n) process." Explain.
Demonstrate that the Carmichael numbers listed in the Carmichael footnote really do fool the Fermat test. That is, write a function that takes an integer n and tests whether a^n is congruent to a modulo n for every a < n, and try your function on the given Carmichael numbers.
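One possible sketch of such a tester (names are mine, not the book's; it assumes the chapter's expmod/3):

```lisp
;; fermat-fooled?/1 returns 'true when every a < n satisfies
;; a^n ≡ a (mod n), i.e. when n passes the exhaustive Fermat check.
(defun fermat-fooled? (n)
  (check-all n 1))

(defun check-all
  ((n a) (when (== a n)) 'true)
  ((n a) (if (== (expmod a n n) a)
           (check-all n (+ a 1))
           'false)))
```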
One variant of the Fermat test that cannot be fooled is called the Miller-Rabin test (Miller 1976; Rabin 1980). This starts from an alternate form of Fermat's Little Theorem, which states that if n is a prime number and a is any positive integer less than n, then a raised to the (n - 1)st power is congruent to 1 modulo n. To test the primality of a number n by the Miller-Rabin test, we pick a random number a < n and raise a to the (n - 1)st power modulo n using the
expmod/3 function. However, whenever we perform the squaring step in
expmod/3, we check to see if we have discovered a "nontrivial square root of 1 modulo n," that is, a number not equal to 1 or n - 1 whose square is equal to 1 modulo n. It is possible to prove that if such a nontrivial square root of 1 exists, then n is not prime. It is also possible to prove that if n is an odd number that is not prime, then, for at least half the numbers a < n, computing a^(n-1) in this way will reveal a nontrivial square root of 1 modulo n. (This is why the Miller-Rabin test cannot be fooled.) Modify the
expmod/3 function to signal if it discovers a nontrivial square root of 1, and use this to implement the Miller-Rabin test with a function analogous to
fermat-test/1. Check your function by testing various known primes and non-primes. Hint: One convenient way to make
expmod/3 signal is to have it return 0.
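One way to realize the hint (a sketch under my own helper names, not the book's solution; even? is assumed from earlier in the chapter, and integer division is written with Erlang's div): square through a helper that collapses to 0 on a nontrivial square root, and let the 0 propagate up through the recursion.

```lisp
;; checked-square/2 returns 0 when x is a nontrivial square root of 1
;; modulo m; otherwise it returns x*x mod m. A 0 from a recursive call
;; stays 0 through every later multiplication, so it reaches the top.
(defun checked-square (x m)
  (let ((sq (rem (* x x) m)))
    (if (andalso (== sq 1) (/= x 1) (/= x (- m 1)))
      0
      sq)))

(defun expmod (base exp m)
  (cond ((== exp 0) 1)
        ((even? exp) (checked-square (expmod base (div exp 2) m) m))
        ('true (rem (* base (expmod base (- exp 1) m)) m))))
```

A Miller-Rabin round analogous to fermat-test/1 would then pick a random a < n and check that (expmod a (- n 1) n) is exactly 1.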