smartin
FWIW, the algorithm you offered is (in my testing) the best performer of
all the solutions offered in this thread so far.
My *ware notes: 1.8 GHz dual-core Intel, 1 GB RAM, XP Pro SP3, Excel 2003.
I set up one loop to time Totient(N) (where each call to Totient()
requires essentially N calls to GCD) for 1 <= N <= 10000. Yours competes
nicely with the recursive function, but Timer's coarse resolution
quantizes the readings and makes the individual results hard to compare.
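For context, my test Totient() is the naive counting version, which is why
each call boils down to roughly N calls to GCD. A rough sketch of what I
mean (the names are mine; the GCD under test plugs in where shown):

Function Totient(ByVal N As Long) As Long
    ' Count the k in 1..N that are coprime to N; each call to
    ' Totient(N) therefore makes about N calls to GCD.
    Dim k As Long, cnt As Long
    For k = 1 To N
        If GCD(k, N) = 1 Then cnt = cnt + 1
    Next k
    Totient = cnt
End Function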
So next, I set up three scenarios in a new loop:
- Call Totient(100) 10,000 times
- Call Totient(1000) 1,000 times
- Call Totient(10000) 100 times
Thus each scenario works out to 1,000,000 calls to GCD (via 10,000, 1,000,
and 100 calls to Totient(), respectively).
Each scenario was run 10 times in succession, and an average timing in
seconds was determined.
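The harness itself is nothing fancy; it looks something like this (the
function name is just for illustration), with one call running a single
scenario and returning the elapsed seconds from Timer:

Function TimeScenario(ByVal N As Long, ByVal calls As Long) As Double
    ' Time one scenario, e.g. TimeScenario(1000, 1000) for the
    ' "Totient(1000) 1,000 times" case. Timer returns seconds since
    ' midnight, so the difference is the elapsed wall-clock time.
    Dim i As Long, t As Single, dummy As Long
    t = Timer
    For i = 1 To calls
        dummy = Totient(N)
    Next i
    TimeScenario = Timer - t
End Function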
Your algorithm gave the following average timings for these three scenarios:
0.50703125 0.4171875 0.49375
Next best was the recursive GCD function (both GCD shapes are sketched
just after these timings):
0.86484375 1.1015625 1.3039063
Then, Dana D's:
19.542971 22.956252 28.414845
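For anyone reading along later, the two fastest entries are both
variations on Euclid's algorithm. Roughly speaking (my names, and the
actual postings may differ in detail), the runner-up is the textbook
recursion and the winner replaces it with a plain loop:

Function GCDRecursive(ByVal a As Long, ByVal b As Long) As Long
    ' Textbook Euclid: gcd(a, b) = gcd(b, a Mod b), stopping at b = 0.
    If b = 0 Then
        GCDRecursive = a
    Else
        GCDRecursive = GCDRecursive(b, a Mod b)
    End If
End Function

Function GCDIterative(ByVal a As Long, ByVal b As Long) As Long
    ' Same arithmetic, but a Do loop avoids the per-call overhead
    ' of the recursion.
    Dim t As Long
    Do While b <> 0
        t = a Mod b
        a = b
        b = t
    Loop
    GCDIterative = a
End Function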
Even after commenting out the Debug.Print statements in ATP's native GCD
function, timings were in the 80s. I am still mystified why ATP's
performance on my machine is so poor.
Clearly, your algorithm scales well. It really starts to pull away for
the killer values of N:
- Call Totient(100000) 10 times
- Call Totient(1000000) 1 time
With these timings:
0.5671875 0.6515625
Compared to the recursive function:
1.4382813 1.6546876
An observation about my timings with Dana D's algorithm... the 10
timings always start smaller and grow, whereas the other algorithms tend
to produce timings that hover about the mean. I wonder if this is a
consequence of using Variants?
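By "using Variants" I mean declarations without an As clause, which
default to Variant and force VBA to resolve the arithmetic at run time;
the faster entries declare their working variables As Long. Just to
illustrate the distinction (and this is only my guess at the cause):

Sub VariantVsLong()
    ' Illustration only: v1/v2 are Variants (no As clause), n1/n2 are
    ' native Longs. Both give the same answer; the Variants carry
    ' extra type-checking and coercion overhead on every operation.
    Dim v1, v2
    Dim n1 As Long, n2 As Long
    v1 = 123456: v2 = 789
    n1 = 123456: n2 = 789
    Debug.Print v1 Mod v2, n1 Mod n2
End Sub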
Again, thanks to all who participated in this thread. This has been most
educational for me.