jespersh (40d): Looking forward to hearing about your Nobel Prize in mathematics and compute.
@jespersh lol. A socially crippled 90 IQ guy with a 9th grade education winning a Nobel Prize.
Stranger shit has happened.
And just like that, it's all blown on lottery tickets and vodka.
I would take up my favorite hobbies.
Giving hitchhikers free rides. Living out of my car and sleeping on people's sofas.
Hazarth (40d): That 100% accuracy is going to cost you big time in performance.
@Hazarth actually I don't think it will. I have an internal function called 'calcY' that calculates whether the input is a probable prime (it generates the union of the primes and the Carmichaels). The outer function filters out the Carmichaels, but I digress.
Anyway, I realized calcY would take ages on some inputs, particularly larger ones, because it uses two internal variables, [i] and [f], to do the calculations, and f is derived from [i]. But the size of [i] was dependent on the input.
And while going over the set of inputs that seemed to take forever, I saw a pattern in the actual numbers and realized I could replace them with a carefully chosen constant (1.892789) and a few logs to perform a *very* close estimate of what [i]'s value should be in *most* cases. And it worked. The pattern turned out not to be ephemeral (normally when I see something like that it turns out to be pareidolia).
The actual estimator is
Anyway, it stopped working at 32 digits; I realized I needed better accuracy, so I implemented Decimal support.
Testing at 132 bits (40 digits) instead of 26 digits like before: no problems so far, and I'm getting *far fewer* false negatives.
Still returning within roughly 1/100th to 2/100ths of a second, so either it's not input-dependent, or that term grows fairly slowly in big-O terms.
Still need to correct the false negative rate for smaller-valued primes, but from what I can see, I think it's an issue of the estimator of i *overestimating*, or in some cases going negative.
All the false negatives seem to be for primes whose units digit is 1 or 7, though I'll have to double check. Might just need a modification of the estimator, or a separate estimator for them.
I'm also wondering how many bits my current estimator is good for before it diverges.
Gonna keep going but everything is looking solid.
Keep going, we believe in you :)
The [i] values being generated included errant values from the failed tests (because there are multiple), and after logging only the *final* values of i for the passing tests (and significant debugging), I managed to regularize the values enormously. They now form a *smooth* ramp with no integer gaps.
Which means I can put a growth rate and several other measures on the dataset and work toward ramping up the speed of the calculations significantly.
For example, this is the set of i values generated when determining if n is prime, for primes 9978 to 10,000.
97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100.
There's some variability in the group sizes, but it looks good.
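Just to sanity-check the grouping mechanically (an illustrative script over the values quoted above, nothing more):

```python
from itertools import groupby

# the final i values quoted above, copied from the post
raw = (
    "97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, 97, "
    "98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, 98, "
    "98, 98, 98, 98, 98, "
    "99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, "
    "99, 99, 99, 99, 99, 99, "
    "100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, "
    "100, 100, 100, 100, 100, 100, 100, 100"
)
vals = [int(t) for t in raw.split(", ")]

# "smooth ramp with no integer gaps": non-decreasing, steps of at most 1
assert all(0 <= b - a <= 1 for a, b in zip(vals, vals[1:]))

# group sizes per i value -- the variability mentioned above
runs = [(k, sum(1 for _ in g)) for k, g in groupby(vals)]
print(runs)  # [(97, 17), (98, 22), (99, 23), (100, 21)]
```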
TeachMeCode (38d): You're smarter than a 90 IQ, they tested you wrong!
"It didn't when he messaged me.
It does now."
that's The Cool Moment™ right there :)
Wisecrack (35d): Rewriting the function to get the derivative of the growth rate of the final i values for the prime generator, so I can write a function that outputs the expected i value for any input n.
A curious effect is that this turns the function into a prime indexing function.
A given i value is of course assigned to multiple primes, but by shifting the mantissa right, I can convert fractions of an index into integers anyway, meaning that even though the number of primes per index tends to grow (in 'clusters'), effectively there's only one index per prime.
Now I just have to work out the actual derivatives and a few other small details, but in practice, the hardest part is done.
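For a rough sense of what an index-to-prime map looks like, here's the textbook estimate that the n-th prime is about n(ln n + ln ln n) (this is classical number theory, not my estimator):

```python
import math

def nth_prime_estimate(n):
    """Textbook estimate: p_n ~ n * (ln n + ln ln n), decent for n >= 6."""
    return n * (math.log(n) + math.log(math.log(n)))

# well-known nth primes for comparison: p_100, p_1000, p_10000
for n, p in [(100, 541), (1000, 7919), (10000, 104729)]:
    est = nth_prime_estimate(n)
    print(n, p, round(est), f"{abs(est - p) / p:.1%}")
```

The relative error shrinks as n grows, which is the same sort of "diverges slowly, if at all" behavior I'm hoping to pin down for my own i estimator.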
I haven't abandoned the exploratory work on reversing Karatsuba's method either, nor the initial investigation into proving Collatz, or my much earlier work on certain sets of algebraic identities derived from semiprimes.
RebondJosh (34d): It's a good message and you get to think
nanobot (25d): I don't get 90% of the stuff written here. So all I can say is "good luck mate and keep us updated".
@nanobot thanks man. Basically what's happening is we're generating prime numbers. Normally you'd do this with a sieve: in the naive case you'd generate 2, 3, 4, 5, 6, 7, 8, 9, etc., then mark off multiples of 2, multiples of 3, and so on, and what's left are your primes.
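That naive sieve looks something like this (just an illustration of the baseline, not my actual code):

```python
def sieve(limit):
    """Sieve of Eratosthenes: cross off multiples; what's left are primes."""
    flags = bytearray([1]) * (limit + 1)
    flags[0] = flags[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            # zero out every multiple of p starting at p*p
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    return [n for n, f in enumerate(flags) if f]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```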
What my code does instead is use a function called calcY() to return a number that MAY be a prime OR something called a pseudo-Carmichael number. What those are isn't important, just that 1. both sets are generated in the process, and 2. Carmichaels are *trivial* to check for, even compared to primes. So instead of doing a sieve of primes and multiples, we solve the easier problem of filtering out the Carmichaels, leaving only primes.
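For a flavor of why Carmichaels matter (this is the classical picture, not my calcY, and my "pseudo Carmichaels" aren't necessarily the same thing): a Carmichael number like 561 fools the Fermat test for every coprime base, but Korselt's criterion makes it trivial to recognize once its factors are known:

```python
from math import gcd

def fermat_probable_prime(n, a):
    """Fermat test: a^(n-1) == 1 (mod n) for primes -- and for impostors."""
    return pow(a, n - 1, n) == 1

# 561 = 3 * 11 * 17 is the smallest Carmichael number: composite,
# yet it passes the Fermat test for every base coprime to it.
assert all(fermat_probable_prime(561, a)
           for a in range(2, 561) if gcd(a, 561) == 1)

def is_carmichael(n, factors):
    """Korselt: n squarefree and (p - 1) | (n - 1) for each prime p | n."""
    prod = 1
    for p in factors:
        prod *= p
    return prod == n and all((n - 1) % (p - 1) == 0 for p in factors)

print(is_carmichael(561, [3, 11, 17]))  # True
```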
While all this is being done, calcY() has an inner loop to find a possible match or hit. That's the 'i' value I'm returning, and I'm only logging it if the number is confirmed prime after the fact.
Well, it turns out this index value grows smoothly and predictably, meaning that by reversing the function, I can enter some integer and get a prime out as a consequence.
Meaning it acts as a prime indexing function.
Knowing the growth rate, and being able to take a prime number and figure out its index, means calcY() doesn't have to search its inner loop. The result is that, if *given* an index and a potential prime, the function can confirm primality deterministically, and faster than Miller-Rabin, which would beat the state of the art even in unoptimized code.
Knowing the growth rate of the primes vs. their indexes gives the function the properties needed to confirm primality fast, which is why it's important to know the growth rate of the index values for primes.
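For the baseline I'm comparing against, here's a standard deterministic Miller-Rabin (the textbook version with the usual fixed witness set, not my code):

```python
def is_prime(n):
    """Deterministic Miller-Rabin; the first 12 prime witnesses
    are known to be exact for all n < 3.3 * 10**24."""
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a is a witness to n being composite
    return True

print(is_prime(561), is_prime(7919))  # False True
```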
I don't know if it's the first of its kind or not. I thought I was hot shit when I got the prime number generator to work, till I found out it was a well-known and solved problem.
So we'll see how it turns out.
God damn spellcheck fucking up my entire comment on mobile.
Like six times.
"Primarily" incorrectly put in place of "primality", "I'm" instead of "i", etc.
Fucking horseshit Google Android bullshit. Companies full of fucktards that can't do a god damn thing right.