2003-04-16, 15:11  #12 
2·5^{2}·73 Posts 
Paul, my email is not working too well today for some reason, but everything is running OK so far :?

2003-04-17, 20:16  #13 
Sep 2002
Database er0rr
5×787 Posts 
Thanks for helping out, Ed :D
All blocks for 'n' below 355,000 have been taken. There are blocks available up to 400,000 for anyone to test. Anything higher is in the sieve, which should be up to divisors of 2.4 trillion by the end of the weekend. 
2003-04-19, 21:40  #14 
Aug 2002
3^{2}×929 Posts 
I just ran a block on my 1.5GHz Celery... It took around 5 days... It was very well behaved and put a minimal memory hit on my computer...
Paul is very easy to work with... If I had a real computer and not a laptop I would seriously consider running this full time... I try to run through at least one "block" of whatever we have added to the "other projects" forums just to make sure there aren't any weird issues to deal with... Anyways, it was fun and educational! Thanks Paul! (YGM!) 
2003-04-20, 09:50  #15 
Sep 2002
Database er0rr
F5F_{16} Posts 
Thanks for your help Mike. :)
I have sieved up to 2.4 trillion; at that depth it was eliminating a candidate on average every 2413 seconds. I will immediately do a further 1.5 trillion on the 400,000-1,000,000 range and then review the sieve timings. Here are some LLR timings on my 1GHz Athlon:

3*2^400011-1  Time : 2364.714 sec.
3*2^500006-1  Time : 3861.034 sec.
3*2^600023-1  Time : 5595.876 sec.
3*2^700003-1  Time : 11707.696 sec.
3*2^800023-1  Time : 13288.771 sec.
3*2^900000-1  Time : 18232.217 sec.
3*2^999988-1  Time : 23783.626 sec.

What do you consider the ideal cutoff time for the sieve?
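The usual rule of thumb behind this question (an assumption on my part, not something Paul states here) is to keep sieving while removing a candidate is cheaper than LLR-testing one. A minimal sketch using the Athlon timings above:

```python
# Sketch: decide whether to keep sieving, assuming the common rule of thumb
# that sieving pays off while one sieve removal is cheaper than one LLR test.
# Timings are the 1GHz Athlon figures quoted above (seconds per LLR test).

llr_times = {          # n -> seconds to LLR-test 3*2^n-1
    400011: 2364.714,
    500006: 3861.034,
    600023: 5595.876,
    700003: 11707.696,
    800023: 13288.771,
    900000: 18232.217,
    999988: 23783.626,
}

def keep_sieving(sec_per_removal, llr_times):
    """Keep sieving while a removal is cheaper than the average LLR test."""
    avg_llr = sum(llr_times.values()) / len(llr_times)
    return sec_per_removal < avg_llr

print(keep_sieving(2413, llr_times))  # at p = 2.4 trillion: True, keep sieving
```

At 2413 seconds per removal the sieve is still well below the average test time (roughly 11,000 seconds on these figures), so it clearly pays to continue.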
2003-04-20, 12:50  #16 
2^{2}·43·53 Posts 
Sieving ideas
I thought it might be nice to start a thread of sieving tips for new users.
I'll start with my amateur techniques. First, start with an estimated time for the project at hand.

A. I start a NewPGen file (I'm not sure of the optimum size of exponent range); I like to sieve a range larger than 3,000.
B. Then, after a short time, stop the file and test my particular computer with the largest/last number in the file. (A couple of tests is nice, but not practical with larger exponents.)
C. Record the seconds it takes to test with LLR. **Important, as computers vary extremely.** I never sieve longer than the recorded length, and sometimes only 80% of the recorded length, depending on the size of the file.
D. Larger files can be broken up into smaller ones and re-sieved, not exceeding the recorded length (by choosing the option to sieve until the rate of k/n is __ seconds). I THINK this helps in theory, since larger files exclude many composites but then become cumbersome due to the bias of the recorded length. In theory one could continue breaking up the file from a larger one. We need some kind of derivative equation???
E. Fixed-n searches can be improved by excluding prime k, as they are not as likely to produce a prime.

Please correct me if I am wrong anywhere here! Please suggest any tips you have picked up along the way..... Shane F. :) :D :D ;) 
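Steps B-D above amount to a simple stopping rule; a minimal sketch (the 80% safety factor is Shane's conservative variant, not a derived constant):

```python
# Sketch of Shane's rule: use the recorded LLR time of the largest/last
# exponent in the file as the sieving cutoff, optionally scaled by a
# safety factor (Shane sometimes uses 80% of the recorded length).

def sieve_cutoff(recorded_llr_sec, safety=0.8):
    """Threshold: sieve until NewPGen's seconds-per-removal exceeds this."""
    return recorded_llr_sec * safety

def should_stop(sec_per_removal, recorded_llr_sec, safety=0.8):
    """True once a removal costs more than the (scaled) recorded test time."""
    return sec_per_removal >= sieve_cutoff(recorded_llr_sec, safety)

# Example: the largest candidate in the file takes ~23784 s to LLR-test.
print(should_stop(2835, 23783.626))   # False: removals still cheap, keep going
```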
2003-04-20, 15:40  #17  
Sep 2002
Database er0rr
5·787 Posts 
Quote:

To get the best performance, it is recommended that you use a wide range of n values (at least 3,000), as then NewPGen can use an algorithm that has a speed that is proportional to the square root of the range of n values being sieved (so a range that is 4 times as wide only takes twice as long to sieve).
So my question remains: what do you consider the ideal cutoff time for the sieve?


2003-04-22, 11:57  #18 
1010001_{2} Posts 
It has been explained to me like this:
"If p divides k, then p does not divide k*2^n-1. This means that N=k*2^n-1 is prime with probability k/phi(k) * 1/ln(N) instead of probability 1/ln(N). For k<100 with k*2^n-1 prime, this moves the probability that k is prime from 1 in 4 to 1 in 7; for k<1000, the probability of k prime moves from 1 in 6 to 1 in 11; for k<10000, the probability moves from 1 in 8 to 1 in 16." I understand that especially highly composite k eliminate those possible factors of k*2^n-1. But maybe this is a catch-22, depending on the particular k in question? I would like to know the reason why three was chosen for k in this project. Shouldn't we look for the shortest distance, to find the most primes per CPU cycle? Surely k=3 is not it??? I may join with one of my computers for the hell of it though, as I have 15 GHz at my disposal now. So you see that k=3 is not likely to be practical for such a search. I am confident however that k has a frequent form (a shortest distance, if you will), and would make for the ultimate collaborative effort to rival that of George's W.I.M.P.S! 
2003-04-22, 12:58  #19 
2^{8}·5·7 Posts 
:D

2003-04-22, 15:46  #20 
"Sander"
Oct 2002
52.345322,5.52471
29·41 Posts 
Why break up the file?
It's much more efficient to sieve the whole range at once, since sieving time is proportional to the square root of the range. Sieving a range 4 times longer will cost you only twice the time to sieve, and thus you will be removing more candidates in a given time. 
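Under that square-root model, splitting the file is always a net loss in sieving time; a toy calculation makes the penalty concrete:

```python
# Why sieving the whole range at once wins under the sqrt(range) model:
# sieving `pieces` equal chunks separately costs pieces * sqrt(1/pieces)
# times as long as one pass over the full range.
import math

def split_penalty(pieces):
    """Total sieve time for `pieces` equal chunks, relative to one pass."""
    return pieces * math.sqrt(1 / pieces)

print(split_penalty(2))  # ~1.414: splitting in two costs ~41% more sieving
print(split_penalty(4))  # 2.0: four chunks double the total sieving time
```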
2003-04-22, 22:17  #21 
1110000110010_{2} Posts 
I see; maybe the file should have started out bigger than originally intended. There is a cutoff point somewhere. If you sieve the entire file until a removal costs ~23783 sec, then every composite the sieve grabs at that rate in the 400011-500006 part of the file cancels some of the efficiency mentioned, since LLR-testing such a candidate only takes ~3000 seconds at most: roughly 10 times cheaper, rather than the 2x of the square-root ratio.
This is a great question! :oops: 
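The trade-off described here can be put in one line: a removal is only "worth" the LLR time of the candidate it removes. A minimal sketch using the Athlon timings quoted earlier in the thread:

```python
# Sketch of the trade-off above: a sieve removal saves exactly one LLR test,
# so removals at the low end of a wide file (tests of ~2365-3861 s) repay
# far less sieve time than removals at the top (~23784 s).
# Figures are the 1GHz Athlon timings quoted earlier in the thread.

def removal_payoff(llr_sec_of_removed_n, sec_per_removal):
    """Ratio of LLR time saved to sieve time spent on one removal."""
    return llr_sec_of_removed_n / sec_per_removal

print(removal_payoff(23783.626, 2835))  # top of file: well above 1, worth it
print(removal_payoff(2364.714, 2835))   # bottom of file: below 1, a net loss
```

This is why a single cutoff time for a very wide n-range is a compromise: the right cutoff for the top of the file over-sieves the bottom.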
2003-04-23, 00:31  #22 
Sep 2002
Database er0rr
5×787 Posts 
:( Thanks for joining our search, Shane.
There are 17 blocks being tested 8) 5 blocks are available for 'n' below 400,000. At 'p' = 3.5 trillion, NewPGen is eliminating a candidate every 47:15 minutes (2835 seconds). 