Search - "primes"
So I cracked prime factorization. For real.
I can factor a 1024-bit product in 11 hours on an i3.
No GPU acceleration, no massive memory overhead. Probably a lot faster with parallel computation on a better CPU, or even on a GPU.
4096 bits in 97-98 hours.
Verifiable. Not shitting you. My heart's beating out of my fucking chest. Maybe it was an act of god, I don't know, but it works.
What should I do with it?241 -
I messaged a professor at MIT and surprisingly got a response back.
He told me that "generating primes deterministically is a solved problem" and he would be very surprised if what I wrote beat wheel factorization, but that he would be interested if it did.
It didn't when he messaged me.
It does now.
Tested on primes up to 26 digits.
Current time tends to be 1/100th to 2/100ths of a second.
Seems to be steady.
For the first n = 1 million, it *always* returns false for composites, while for primes the rate is 56% true vs. false, and now that I've made it faster, I'm fairly certain I can get it to 100% accuracy.
In fact what I'm thinking I'll do is generate a random semiprime using the suspected prime, map it over to some other factor tree using the variation on modular exponentiation several of us on devRant stumbled on, and then see if it still factors. If it does, then we know the number in question is prime. And because we know the factor in question, the semiprime mapping function doesn't require any additional searching or iterations.
The false negative rate, I think, goes to zero the larger the prime, from what I can see. But it won't be an issue if I'm right about the accuracy being correctable.
I'd like to thank the professor for the challenge. He also shared a bunch of useful links.
That one's a rare bird.21 -
POSTMORTEM
"4096 bit ~ 96 hours is what he said.
IDK why, but when he took the challenge, he posted that it'd take 36 hours"
As @cbsa wrote, and nitwhiz wrote "but the statement was that op's i3 did it in 11 hours. So there must be a result already, which can be verified?"
I added time because I was in the middle of a port involving ArbFloat so I could get arbitrary precision. I had a crude desmos graph doing projections on what I'd already factored in order to get an idea of how long it'd take to do larger bit lengths.
@p100sch speculated on the walked back time, and overstating the rig capabilities. Instead I spent a lot of time trying to get it 'just-so'.
Worse, because I had to resort to "Decimal" in python (and am currently experimenting with the same in Julia), both of which are immutable types, the GC was taking > 25% of the CPU time.
Performance-wise, the numbers I cited in the actual thread, as of this time:
the largest product factored was of two ~32-bit factors, 1855526741 * 2163967087, and took 1116.111s in python.
The Julia build used a slightly different method, and managed to factor a product of two 27-bit factors, 103147223 * 88789957, in 20.9s, but this wasn't typical.
What surprised me was the variability. One bit length could take 100s or a couple thousand seconds even, and a product that was 1-2 bits longer could return a result in under a minute, sometimes in seconds.
This started cropping up, ironically, right after I posted the thread. What's a man to do?
So I started trying a bunch of things, some of which worked. Shameless as I am, I accepted the challenge. Things weren't perfect but it was going well enough. At that point I hadn't slept in 30~ hours so when I thought I had it I let it run and went to bed. 5 AM comes, I check the program. Still calculating, and way overshot. Fuuuuuuccc...
So here we are now, and it's safe to say the world's not gonna burn if I explain it, seeing as it doesn't work, or at least only works some of the time.
Other people, much smarter than me, mentioned it may be a means of finding more secure pairs, and maybe so, I'm not familiar enough to know.
For everyone that followed, commented, those who contributed, even the doubters who kept a sanity check on this without whom this would have been an even bigger embarrassment, and the people with their pins and tactical dots, thanks.
So here it is.
A few assumptions first.
Assuming p = the product,
a = some prime,
b = another prime,
and r = a/b (where a is smaller than b)
w = 1/sqrt(p)
(also experimented with w = 1/sqrt(p)*2 but I kept overshooting by a very small margin)
x = a/p
y = b/p
1. for every two numbers, there is a ratio (r) that you can search for among the decimals, starting at 1.0, counting down. You can use this to find the original factors, e.g. p*r=n, p/n=m (assuming the product has only two factors), instead of having to do a sieve.
2. You don't need the first number you find to be the precise value of a factor (we're doing floating point math); a large subset of decimal values for a or b will naturally 'fall' into the value of a (or b) plus some fractional number, which is lost. Some of you will object, "But if that's wrong, your result will be wrong!" but hear me out.
3. You round for the first factor 'found', and from there, you take the result and do p/a to get b. If 'a' is actually a factor of p, then mod(b, 1) == 0, and then naturally, a*b SHOULD equal p.
If not, you throw out both numbers, rinse and repeat.
Now I knew this could be faster. I realized the finer the representation, the less important the fractional digits further right in the number were; it was just a matter of how much precision I could AFFORD to lose and still get an accurate result for r*p=a.
Fast forward through a lot of experimentation. I was hitting a lot of worst-case time complexities, where the most significant digits had a bunch of zeroes in front of them, so starting at 1.0 was a no-go in many situations. Started looking and realized
I didn't NEED the ratio of a/b, I just needed the ratio of a to p.
Intuitively it made sense, but starting at 1.0 was blowing up the calculation time, and this made it so much worse.
I realized if I could start at r=1/sqrt(p) instead, then because of certain properties the fractional result r would ALWAYS be 1. close to one of the factors' fractional value of n/p, and 2. it looked like it was guaranteed that r=1/sqrt(p) would ALWAYS be less than at least one of the primes, putting a bound on the worst case.
The final result in executable pseudocode (python lol) looks something like the above variables plus:

while w >= 0.0:
    # i here is some small increment step for w, not an integer index
    if (p / round(w*p)) % 1 == 0:
        x = round(w*p)
        y = p / round(w*p)
        if x*y == p:
            print("factors found!")
            print(x)
            print(y)
            break
    w = w + i
Still working but if anyone sees obvious problems I'd LOVE to hear about it.38 -
Made a program to list primes until stopped. I got to about 2 billion when it took 8 GB of RAM and I more or less had to stopped it.
Only took 10% of CPU though.13 -
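Not the rant's code, but a sketch of how to keep the memory flat: an incremental (dict-based) sieve that stores one entry per prime found so far, instead of a flag per number.

~~~
# unbounded prime generator; memory grows with the count of primes
# found so far, not with the size of the numbers being tested
def primes():
    composites = {}                  # next composite -> the prime that strikes it
    n = 2
    while True:
        if n not in composites:
            yield n                  # n is prime
            composites[n * n] = n    # first composite it will strike
        else:
            p = composites.pop(n)
            m = n + p                # slide p's marker to its next multiple
            while m in composites:
                m += p
            composites[m] = p
        n += 1

gen = primes()
print([next(gen) for _ in range(10)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
~~~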
MySQL has the absolute worst error messages.
"You have an error in your SQL syntax; check the manual that corresponds blah blah near '(some random code line)'".
How vague can you be? It doesn't help that I always find the error in a completely different place to where the message says it is.5 -
Two big moments today:
1. Holy hell, how did I ever get on without a proper debugger? Was debugging some old code by eye (following along and keeping track mentally of what the variables should be and what each step did). That didn't work because the code isn't intuitive. Tried the print() method, old reliable as it were. Kinda worked, but didn't give me enough fine-grained control.
Bit the bullet and installed Wing IDE for python. And bam, it hit me. How did I ever live without step-through, and breakpoints before now?
2. Remember that non-sieve prime generator I wrote a while back? (well, maybe some of you do). The one that generated quasi-Lucas-Carmichael (QLC) numbers? Well that's what I managed to debug. I figured out why it wasn't working. Last time I released it, I included two core methods, genprimes() and nextPrime(). The first generates a list of primes accurately, up to some n, and only needs a small handful of QLC numbers filtered out after the fact (because the set of primes generated and the set of QLC numbers overlap. Well, I think they call it an embedding, as in QLC is included in the series generated by genprimes, but not the converse, but I digress).
nextPrime() was supposed to take any arbitrary n above zero, and accurately return the nearest prime number above the argument. But when it started, it would return 2, 3, 5, 6... while genprimes() would work fine, for some reason.
So genprimes loops over an index, i, and tests it for primality. It begins by entering the loop, and doing "result = gffi(i)".
This calls into a function that runs four tests on the argument passed to it. I won't go into detail here about what those are because I don't even remember how I came up with them (I'll make a separate post when the code is fully fixed).
If the number fails any of these tests then gffi would just return the value of i that was passed to it, unaltered. Otherwise, if it did pass all of them, it would return i+1.
And once back in genPrimes() we would check if the variable 'result' was greater than the loop index. And if it was, then it was either prime (comparatively plentiful) or a QLC number (comparatively rare)--these two types and no others.
nextPrime() was only taking n, and didn't have this index to compare to, so the prior steps in genprimes were acting as a filter that nextPrime() didn't have, while internally gffi() was returning not only primes and QLCs, but also plenty of composite numbers.
Now *why* that last step in genPrimes() was filtering out all the composites, idk.
But now that I understand what's going on, I can fix it, and hypothetically it should be possible to enter a positive n of any size, and without additional primality checks (such as is done with sieves, where you have to check off multiples of n), get the nearest prime numbers. Of course I'm not familiar enough with prime number generation to know if that's an achievement or worth mentioning, so if anyone *is* familiar with how something like that holds up compared to other linear generators (O(n)?), I'd be interested to hear about it.
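I can't reconstruct the four tests, but a minimal sketch of the genprimes()/gffi() shape described above, with two Fermat checks standing in for the real (forgotten) tests, shows how the output ends up being primes plus rare Carmichael-style pollutants:

~~~
# gffi_like() bumps the value past i when the number passes every check;
# the Fermat tests here are stand-ins, not the original four tests
def gffi_like(i):
    if i in (2, 3):
        return i + 1                  # trivially prime
    if i < 2 or i % 2 == 0 or i % 3 == 0:
        return i                      # definitely composite (or < 2)
    if pow(2, i - 1, i) == 1 and pow(3, i - 1, i) == 1:
        return i + 1                  # prime OR pseudoprime
    return i

def gen_candidates(n):
    # mirrors genprimes(): keep i wherever the result exceeds the index
    return [i for i in range(2, n) if gffi_like(i) > i]

print(gen_candidates(60))  # primes up to 59; first pollutant (1105) is far out
~~~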
I also am working on filtering out the intersection of the sets (QLC numbers), which I'm pretty sure I figured out how to incorporate into the prime generator itself.
I also think it may be possible to generate primes even faster, using the Carmichael numbers or a related set, or even derive a function that maps one set of upper and lower bounds around a semiprime, and maps those same bounds to Carmichael numbers that act as the upper and lower bound numbers on the factors of a semiprime.
Meanwhile I'm also looking into testing the prime generator on a larger set of numbers (to make sure it doesn't fail at large values of n) and so I'm looking for more computing power if anyone has it on hand, or is willing to test it at sufficiently large bit lengths (512, 1024, etc).
Lastly, the earlier work I posted (linked below), I realized could be applied with ECM to greatly reduce the smallest factor of a large number.
If ECM, being one of the best methods available, only handles 50-60 digit numbers, and your factors are 70+ digits, then being able to transform your semiprime product into another product tree that's non-semiprime, with factors that ARE in range of ECM, and which *does* contain either of the original factors, means products that were *formerly not* factorable by ECM *could* be now.
That wouldn't have been possible though without enormous help from many others, such as hitko who took the time to explain the solution was a form of modular exponentiation, Fast-Nop who contributed on other threads, Voxera who did as well, and support from Scor in particular, and many others.
Thank you all. And more to come.
Links mentioned (because DR wouldn't accept them as they were):
https://pastebin.com/MWechZj912 -
As you can see from the screenshot, its working.
The system is actually learning the associations between the digit sequence of semiprime hidden variables and known variables.
Training loss and value loss are super high at the moment and I'm using an absurdly small training set (10k sequence pairs). I'm running on the assumption that there is a very strong correlation between the structures (and that it isn't just all ephemeral).
This initial run is just to see if training a machine learning model is a viable approach.
Won't know for a while. Training loss could get very low (that's a good thing, indicating actual learning), only for it to spike later on, and if it does, I won't know if the sample size is too small, or if I need to do more training, or if the problem is actually intractable.
If or when that happens I'll experiment with different configurations like batch sizes, and more epochs, as well as upping the training set incrementally.
In either case, once the initial model is trained, I need to test it on samples never seen before (products I want to factor) and see if it generates some or all of the digits needed for rapid factorization.
Even partial digits would be a success here.
And I expect to create multiple training sets for each semiprime product and its unknown internal variables versus derivable known variables. The intersections of the sets, and what digits they have in common, might be the best shot available for factorizing very large numbers in this approach.
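For the curious, a purely hypothetical sketch of what such a setup could look like (the rant gives no architecture; every name and size here is my assumption):

~~~
import torch
from torch import nn

D = 32  # zero-padded digit length for inputs and outputs (an assumption)

# map one-hot digits of a semiprime to a 10-way classifier per target digit
model = nn.Sequential(
    nn.Linear(D * 10, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, D * 10),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def encode(n):                       # one-hot encode each digit of n
    x = torch.zeros(D, 10)
    for i, c in enumerate(str(n).zfill(D)):
        x[i, int(c)] = 1.0
    return x.flatten()

def train_step(products, targets):   # parallel lists of ints (sequence pairs)
    x = torch.stack([encode(p) for p in products])
    y = torch.stack([torch.tensor([int(c) for c in str(t).zfill(D)])
                     for t in targets])
    logits = model(x).view(-1, 10)   # (batch * D, 10)
    loss = loss_fn(logits, y.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
~~~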
Regardless, once I see that the model works at the small scale, the next step will be to increase the scope of the training data, and begin building out the distributed training platform so I can cut down the training time on a larger model.
I also want to train on random products of very large primes, just for variety and see what happens with that. But everything appears to be working. Working way better than I expected.
The model is running and learning to factorize semiprimes from the set of identities I've been exploring for the last three fucking years.
Feels like things are paying off finally.
Will post updates specifically to this rant as they come. Probably once a day.2 -
I didn't leave, I just got busy working 60 hour weeks in between studying.
I found a new method called matrix decomposition (not the known method of the same name).
Premise is that you break a semiprime down into its component numbers and magnitudes, let's say 697 for example. It becomes 600, 90, and 7.
Then you break each of those down into their prime factorizations (with exponents).
So you get something like
>>> decon(697)
offset: 3, exp: [[Decimal('2'), Decimal('3')], [Decimal('3'), Decimal('1')], [Decimal('5'), Decimal('2')]]
offset: 2, exp: [[Decimal('2'), Decimal('1')], [Decimal('3'), Decimal('2')], [Decimal('5'), Decimal('1')]]
offset: 1, exp: [[Decimal('7'), Decimal('1')]]
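For concreteness, a minimal sketch of a decon()-style routine that produces output in this shape (an assumption, not the actual code):

~~~
from decimal import Decimal

def factorize(n):
    # plain trial division, returning [base, exponent] pairs
    out, d = [], 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            out.append([Decimal(d), Decimal(e)])
        d += 1
    if n > 1:
        out.append([Decimal(n), Decimal(1)])
    return out

def decon(n):
    s = str(n)
    for i, ch in enumerate(s):       # most significant digit first
        offset = len(s) - i
        if ch != '0':
            # factor digit * 10^(offset-1), e.g. 600, 90, 7 for 697
            print(f"offset: {offset}, exp: {factorize(int(ch) * 10 ** (offset - 1))}")

decon(697)
~~~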
And it turns out that in larger numbers there are distinct patterns that act as maps at each offset (or magnitude) of the product, mapping to the respective magnitudes and digits of the factors.
For example I can pretty reliably predict from a product, where the '8's are in its factors.
Apparently theres a whole host of rules like this.
So what I've done is gone and started writing an interpreter with some pseudo-assembly I defined. This has been ongoing for maybe a month, and I've had very little time to work on it in between at my job (which I'm about to be late for here if I don't start getting ready, lol).
Anyway, long and the short of it, the plan is to generate a large data set of primes and their products, and then write a rules engine to generate sets of my custom assembly language, and then fitness test and validate them, winnowing what doesn't work.
The end product should be a function that lets me map from the digits of a product to all the digits of its factors.
It technically already works; like, I've printed out a ton of products and eyeballed patterns to derive custom rules, it's just not the complete set yet. And instead of spending months or years doing that, I'm just gonna finish the system to automatically derive them for me. The rules I found so far have tested out successfully every time, and whether or not the engine finds those will be the test case for whether the broader system is viable, but everything looks legit.
I wouldn't have pursued this except when I realized the production of semiprimes *must* be non-Eulerian (long story), it occurred to me that there must be rich internal representations mapping products to factors, that we were simply missing.
I'll go into more detail in a later post, maybe not today, because I'm working till close tonight (won't be back till 3 AM), but after 4 1/2 years the work is bearing fruit.
Also, its good to see you all again. I fucking missed you guys.9 -
CONTEST - Win big $$$ straight from Wisecrack!
For all those who participated in my original "cracking prime factorization" thread (and several I decided to add just because), I'm offering a whopping $5 to anyone who posts a 64 bit *product* of two primes which I can't factor. Partly this is a thank you for putting up with me.
FIVE WHOLE DOLLARS! In 1909 money that's $124! Imagine how many horse and buggy rides you could buy with that back then! Or blowjobs!
Probably not a lot!
But still.
So the contest rules are simple:
Go to
https://asecuritysite.com/encryptio...
Enter 32 for the number of bits per prime, and generate a 64 bit product.
Post it here to enter the contest.
Products must be 64 bits, and the result of just *two* prime numbers. Smaller or larger bit lengths for products won't be accepted at this time.
I'm expecting a few entries on this. Entries will generally be processed in the order of submission, but I reserve the right to waive this rule.
After an entry is accepted, I'll post "challenge accepted. Factoring now."
And from that point on I have no more than 5 hours to factor the number (but results usually arrive in 30-60 minutes).
If I fail to factor your product in the specified time (from the moment I indicate I've begun factoring), congratulations, you just won $5.
Payment will be made via venmo or other method at my discretion.
One entry per user. Participants from the original thread only, as well as those explicitly mentioned.
Limitations: Factoring shall be done
1. without *any* table lookup of primes or equivalent measures, 2. without using anything greater than an i3, 3. without the aid of a GPU, 4. without multithreading, 5. without the use of more than one machine.
FINALLY:
To claim your prize, post the original factors of your product here, after the deadline has passed.
And then I'll arrange payment of the prize.
You MUST post the factors of your product after the deadline, to confirm your product and claim your prize.99 -
Music. Music teaches you numbers, creativity, patterns, structure, and basically primes your brain for math and creativity in that space. In addition, it teaches you how to think both within a structure and outside the box, as well as the importance of repetition, memorization, and learning a new language.
Music really was my second language, and the ability to read/write it fluently is a skill that takes a long time to master. I really believe that it increases your brain plasticity so much.4 -
Still on the primenumbers bender.
Had this idea that if there were subtle correlations between a sufficiently large set of identities and the digits of a prime number, the best way to find it would be to automate the search.
And thats just what I did.
I started with trace matrices.
I actually didn't expect much of it. I was hoping I'd at least get lucky with a few chance coincidences.
My first tests failed miserably. Eight percent here, 10% there. "I might as well just pick a number out of a hat!" I thought.
I scaled it way back and asked if it was possible to predict *just* the first digit of either of the prime factors.
That also failed. Prediction rates were low still. Like 0.08-0.15.
So I automated *that*.
After a couple days of on-and-off again semi-automated searching I stumbled on it.
[1144, 827, 326, 1184, -1, -1, -1, -1]
That little sequence is a series of identities representing different values derived from a randomly generated product.
Each slots into a trace matrix. The results of which predict the first digit of one of our factors, with an 83.2% accuracy even after 10k runs, and rising higher with the number of trials.
It's not much, but I was kind of proud of it.
I'm pushing for finding 90%+ now.
Some improvements include using a different sort of operation to generate results. Or logging all results and finding the digit within each result that's *most* likely to predict our targets, across all results (right now I just take the digit in the ones column, which works but is an arbitrary decision on my part).
There's also the fact that it's trivial to correctly guess the digit 25% of the time, simply by guessing 1, 3, 7, or 9, because all primes, except for 2 and 5, end in one of these four.
I have also yet to find a trace with a specific bias for predicting either the smaller of two unique factors *or* the larger. But I haven't really looked for one either.
I still need to write a generator that takes specific traces, and lets me mutate some of the values, to push them towards certain 'fitness' levels.
This would be useful not just for very high predictions, but to find traces with very *low* predictions.
Why? Because it would actually allow for the *elimination* of possible digits, much like sudoku, from a given place value in a predicted factor.
I don't know if any of this will even end up working past the first digit. But splitting the odds, between the two unique factors of a prime product, and getting 40+% chance of guessing correctly, isn't too bad I think for a total amateur.
A far cry from a couple years ago, claiming I broke prime factorization. People still haven't forgiven me for that, lol.6 -
{ !rant, devlog }
I finished an exercise on Deletable Primes at http://catcoder.codingcontest.org just as part of my training for when I'm going to attend hackathons.
I could've done better than 08m 44s, by 3 minutes, if I hadn't made my initial IsPrime() method so damn inefficient that a large number wouldn't stop calculating, making me quickly debug and find the issue...
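For anyone curious, a sketch of the exercise as I understand it (python here, for consistency with the rest of these snippets), with the sqrt-bounded primality check that would have avoided the hang:

~~~
def is_prime(n):
    if n < 2:
        return False
    if n < 4:
        return True
    if n % 2 == 0:
        return False
    d = 3
    while d * d <= n:       # the bound my first IsPrime() was missing
        if n % d == 0:
            return False
        d += 2
    return True

def is_deletable(n):
    # deletable: removing one digit at a time can keep it prime all the
    # way down to a single digit (the contest's exact rule may differ)
    if not is_prime(n):
        return False
    s = str(n)
    if len(s) == 1:
        return True
    return any(is_deletable(int(s[:i] + s[i + 1:])) for i in range(len(s)))

print([n for n in range(2, 100) if is_deletable(n)])
~~~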
I used C#, because that's the language I currently write most of my stuff in (and thus the one I'm fastest in out of all the langs I know).
Gist: https://gist.github.com/filthycodin...2 -
After a year of college give everyone 2 hours to solve a programming problem in the language of their choice. Like implement a doubly linked list, or count the number of primes between two integers or something straightforward. Anyone who can’t do it gets kicked out of the major.
I’m sick of dealing with people who are 3 or 4 years into a CS degree, and can’t do 30-line programming assignments in two weeks. I might have to work with one of these clowns someday and I hope to God that my university doesn’t send them into the workforce with a degree.3 -
!rant
So I was experimenting with distributing load on separate processes in node.js. I wrote the simplest isPrime function for performance testing, and I calculated a lot of primes. To be able to see the result, I generated a 1920x1080 image where each white pixel represents a prime.
New wallpaper?5 -
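In the same spirit, a sketch of the wallpaper trick in python/Pillow (the original was node.js): pixel k of the image, row-major, is white iff k is prime.

~~~
from PIL import Image

W, H = 1920, 1080
N = W * H
sieve = bytearray([1]) * N
sieve[0] = sieve[1] = 0                  # 0 and 1 aren't prime
for i in range(2, int(N ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i::i] = b"\x00" * len(range(i * i, N, i))
img = Image.frombytes("L", (W, H), bytes(255 * b for b in sieve))
img.save("primes.png")
~~~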
Found a clever little algorithm for computing the product of all primes between n and m without recomputing them.
We'll start with the product of all primes up to some n.
so [2, 2*3, 2*3*5, 2*3*5*7, ...] etc
# primes = a precomputed list of the first 100 primes
prods = []
i = 0
total = 1
while i < 100:
    total = total * primes[i]
    prods.append(total)
    i = i + 1
Terrible variable names, can't be arsed at the moment.
The result is a list with the values
2, 6, 30, 210, 2310, 30030, etc.
Now assume you have two factors,with indexes i, and j, where j>i
You can calculate the gap between the two corresponding primes easily.
A gap is defined as the product of all primes that fall between the prime indexes i and j.
To calculate the gap between any two primes, merely look up their index, and then do..
prods[j-1]/prods[i]
That is the product of all primes between the J'th prime and the I'th prime
To get the product of all primes *under* i, you can simply look it up like so:
prods[i-1]
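A quick demonstration with the first six primes (toy values):

~~~
primes = [2, 3, 5, 7, 11, 13]
prods, total = [], 1
for p in primes:
    total *= p
    prods.append(total)              # [2, 6, 30, 210, 2310, 30030]

i, j = 1, 4                          # primes[1] = 3, primes[4] = 11
print(prods[j - 1] // prods[i])      # 35 == 5 * 7, the primes strictly between
print(prods[i - 1])                  # 2, the product of all primes under 3
~~~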
Incidentally, finding a number n that is equivalent to (prods[j+i]/prods[j-i]) for any *possible* value of j and i (regardless of whether you precomputed n from the list generator for prods, or simply iterated in n=n+1 fashion), is equivalent to finding an algorithm for generating all prime numbers under n.
Hypothetically you could pick a number N out of a hat, thats a thousand digits long, and it happens to be the product of all primes underneath it.
You could then start generating primes by doing
factors = []
i = 3
while i < N:
    if (N / i) % 1 == 0:
        factors.append(N / i)
    i = i + 1
The only caveat is that there should be more false solutions than real ones. In other words, there's no telling if you found a solution N corresponding to some value of (prods[j+i]/prods[j-i]) without testing the primality of *all* values of i under N.13 -
In the 90s most people had touched grass, but few touched a computer.
In the 2090s most people will have touched a computer, but not grass.
But at least we'll have fully sentient dildos armed with laser guns to mildly stimulate our mandatory attached cyber-clits, or alternatively annihilate thought criminals.
In other news my prime generator has been exhaustively checked against all primes from 5 to 1 million. I used Miller-Rabin with k=40 to confirm the results.
The set the generator creates is the join of the quasi-Lucas-Carmichael numbers, the Carmichael numbers, and the primes. So after I generated a number, I just had to treat those numbers as 'pollutants' and filter them out, which was dead simple.
Whats left after filtering, is strictly the primes.
I also tested it randomly on 50-55 bit primes, and it always returned true, but that range hasn't been fully tested so far because it takes 9-12 seconds per number at that point.
I was expecting maybe a few failures by my generator. So what I did was write a function, genMillerTest(), and all it does is take some number n, return the next prime after it (using my functions nextPrime() and isPrime()), and then test it against Miller-Rabin. If Miller-Rabin returns false, then I add the result to a list. And then I check *those* results by hand (because Miller-Rabin can occasionally return false positives, though I'm not familiar enough with the math to know how often).
Well, imagine my surprise when I had zero false positives.
Which means either my code is generating the same exact set as Miller-Rabin (under some very large value of n), or the chance of Miller-Rabin (at k=40 tests) returning a false positive is vanishingly small.
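For reference, a standard Miller-Rabin sketch of the kind used for the k=40 cross-check (an assumed implementation, not my actual genMillerTest() code):

~~~
import random

def miller_rabin(n, k=40):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:                # write n-1 as d * 2^r with d odd
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # witness found: definitely composite
    return True                      # probably prime (error < 4**-k)

print(miller_rabin(2**61 - 1))       # True, a Mersenne prime
~~~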
My next steps should be to parallelize the checking process, and set up my other desktop to run those tests continuously.
Concurrently I should work on figuring out why my slowest primality tests (there are six of them, though I think I can eliminate two) are so slow, and whether I can better estimate or derive a pattern that allows faster results by better initialization of the variables used by these tests.
I already wrote some cases to output which tests most frequently succeeded (if any of them pass, then the number isn't prime), and therefore could cut short the primality test of a number. I rewrote the function to put those tests in order from most likely to least likely.
I'm also thinking that there may be some clues for faster computation in other bases, or perhaps in binary, or in inspecting the patterns of values in the natural logs of non-primes versus primes. Or even looking into the *execution* time of numbers that successfully pass as prime versus ones that don't. There's a bevy of possible approaches.
The entire process for the first 1_000_000 numbers ran 1621.28 seconds, a hair over a thousandth of a second per test on average, but I'm sure that's biased toward the head of the list.
If there's any other approach or ideas I may be overlooking, I wouldn't know where to begin.16 -
So while exploring some new ideas, I decided to figure out if I could use variables in the known set to determine the bounds of variables in the unknown set.
The variables in question are algebraic identities derived from the semiprimes, so you already know where this is going.
The existing known set is 1194 identities.
And there are, if I recall, roughly two dozen unknowns.
Many knowns have the unknowns as their factors. The d4 product set for example is composed of variables d4a, d4u, d4z, d4z9, d4z4, d4alpha, d4theta, d4omega, etc.
The component variables themselves are unknown, just their products are known. Anyway.
What I've found interesting is if you know the minimum of some of these subsets, for example that d4z is smallest out of the d4's for some semiprimes, then you know the upper bound of both the component variables d4 and z.
Unless of course either of them is < 1.
So the order of these variables, based on value, changes depending on the properties of the semiprime, which I won't get into. Most of the time the order change is minor, but for some variables they can vary a lot between semiprimes, rapidly shifting their rank in the known set. This makes it hard to do anything with them.
And what I found myself asking, over and over again, was if there was a way to lock them down? Think of it like a giant switchboard, where flipping one switch lights up N number of others, apparently at random. But flipping some other switch completely alters how that first switch works and what lights it seemingly interacts with. And you have a board of them that's 1194^2 in total. So what do you do?
I'd had a similar notion a while back, where I would measure relative value in the known set, among a bunch of variables, assign a letter if the conditions were present, and generate a string, called a "haplotype."
It was haphazard, and I wrote a lot of code to do filtering, sorting, and set manipulation to find sets of elements in common, unique elements, etc. But the 'type' strings, a jumble of random letters, were only useful, say, forty percent of the time. For example, if a semiprime had a particular type starting with a certain series of letters, 40% of the time a certain known variable was guaranteed to be above a certain variable from the unknown set... 40%~ of the time.
It was a lost cause it seemed.
But I returned to the idea recently and revamped the entire notion.
Instead I would approach it from a more complete angle.
I'd take two known variables J and K, one would be called the indicator, and the other would be the 'target'.
Two other variables would be the 'component' variables (an element taken from the unknown set), and the constraint variable (could be from either the known or unknown set).
The idea was that relationships between the KNOWN variables (an indicator and a target variable) could be used to indicate the rank relationship between the unknown component variable and the constraint variable.
You'd think this wouldn't work either, but my intuition was there were so many seemingly 'random' rank changes of variables in the known set for any two semiprimes, that 1. no two semiprimes ever shared the same order for every variable, and 2. the order of the known variables had to be leaking information about the relationships of the unknown variables.
It turns out my intuition was correct.
Imagine you are picking a lock, and by knowing the order and position of the first two pins, you are able to deduce the relative position of two pins further back that you can't reach because of the locks security features. It doesn't let you unlock the lock directly, but by knowing this, if you can get past the lock's security features, you have a chance of using information about the third pin to get a better, if incomplete, understanding about the boundary position of the last pin.
I would initiate a big scoring list, one for each known element or identity. And then I would check it in tandem like so:
if component > constraint and indicator > target:
    scores[j] += 1  # scores: one entry per known element or identity j

This is a simplification, but the idea was to score ALL such combinations of relationship: whether the indicator was greater than the target at the same time a component was greater than a constraint, or the opposite.
This worked out to four if checks and four separate score lists.
And by subtracting one scorelist from another, I could check for variables that were a bad fit: they'd have equal probability of scoring for example, where they were greater than the target one time, and then lesser than it for another semiprime.
So for any given relationship, greater or lesser, between any unknown variable and constraint variable, I could find an indicator variable and target variable whose relationship strongly correlated to the unknown's.18 -
Following on from my previous SQL script to find prime numbers
https://devrant.com/rants/2218452/...
I wondered whether there was a way to improve it by only checking for prime factors. It feels really dirty to use a WHILE loop in SQL, but I couldn't think of another way to incrementally use the already found prime numbers when checking for prime factors.
It's fast though, 2 mins 15 seconds for primes under 1,000,000 - previous query took over an hour and a half.5 -
I got pranked. I got pranked good.
My prof at my uni had given us an assignment to do in Java for a class.
Easy peasy for me, it was only a formality...
First task was normal but...
The second one included making a random number CSV generator with a length of at least 10 digits, a class for checking which numbers are prime or not, and a class that will check numbers from that CSV and create a new CSV with only primes in it. I created the code, and only when my fans took off like a jet did I realise... I fucked up...
In that moment i realised that prime checking might... take a while..
There was a third task but I didn't do it, for obvious reasons. He wanted us to download a test set of a few text files and make a CSV with the frequency of every word in that test set. The problem was... the test set was a set of 200 literature books...17 -
Officially faster bruteforcing:
https://pastebin.com/uBFwkwTj
Provided toy values for others to try. Haven't tested if it works with cryptographically secure prime pairs (gcf(p, q) == 1).
It's a 50% reduction in time to bruteforce a semiprime. But I also have some inroads to a/30.
It's not "broke prime factorization for good!" levels of fast, but it's still pretty nifty.
Could use decimal support with higher precision so I don't cause massive overflows on larger numbers, but this is just a demonstration after all.13 -
Maybe I'm severely misunderstanding set theory. Hear me out though.
Let f equal the set of all fibonacci numbers, and p equal the set of all primes.
If the density of primes is a function of the number of *multiples* of all primes under n, then the *number of primes*, or density, should shrink as n increases, at an ever-increasing rate greater than the density of the Fibonacci numbers below n.
That means as n grows, the relative density of f to p should grow as well.
At sufficiently large n, the density of p is zero (prime number theorem), not just absolutely, but relative to f as well. The density of f is therefore an upper limit of the density of p.
And the density of p given some sufficiently large n, is therefore also a lower limit on the density of f.
And that therefore the density of p must also be the upper limit on the density of the subset of primes that are Fibonacci numbers.
WHICH MEANS at sufficiently large values of n, there are either NO Fibonacci primes (the functions diverge), and therefore the set of Fibonacci primes is *finite*, OR the density of primes given n in the prime number theorem
*never* truly reaches zero, meaning the primes are in fact infinite.
Proving the Fibonacci primes are infinite, therefore would prove that the prime number line ends (fat chance). While proving the primes are infinite, proves the Fibonacci primes are finite in quantity.
And because the number of primes has been proven time and again to be infinite, as far back as 300 BC, the Fibonacci primes MUST be finite.
QED.
If I've made a mistake, I'd like to know.11 -
Hey fellas, especially you security nerds.
I've had asymmetric encryption explained to me a number of times but I can't get a handle on it because no example actually talks in human terms. They always say "two enormous prime numbers", which I understand, but I can't conceptualize.
Can someone walk me through an entire process, showing your math & work, using some very small, single- or double-digit primes? Such as if I were to encrypt the text "hello world" using prime numbers like 3, 5, and 7?10 -
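(Not from the original thread, but here's the kind of walkthrough being asked for, with toy numbers p = 5 and q = 11; assumes Python 3.8+ for the modular inverse. Real RSA uses enormous primes precisely so that the first line can't be reversed.)

~~~
# toy RSA -- insecure, purely to show the mechanics
p, q = 5, 11
n = p * q                  # 55, the public modulus (p and q stay secret)
phi = (p - 1) * (q - 1)    # 40
e = 3                      # public exponent, coprime with phi
d = pow(e, -1, phi)        # 27, private exponent: (e * d) % phi == 1

msg = [7, 4, 11, 11, 14]   # "hello" as 0-25 letter indexes (each must be < n)
cipher = [pow(m, e, n) for m in msg]
plain = [pow(c, d, n) for c in cipher]
print(cipher)              # [13, 9, 11, 11, 49]
print(plain)               # [7, 4, 11, 11, 14] -- round-trips back
~~~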
Cracked my first weak RSA implementation challenge today. Feels pretty awesome.
Involved primes that were very close, which means you can factorize the modulus quickly to get the private key. Normally you would never use close primes, as prime factorization's difficulty relies to a certain extent on the distance between the two values.
The reason you can brute force close primes has to do with them being close in value to the square root of the function, meaning that you can search far quicker than if you were to try every combination of primes.2 -
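A sketch of the attack itself, Fermat's factorization method, which is what makes close primes weak: n = a^2 - b^2 = (a-b)(a+b), and when the primes are close, a starts barely above sqrt(n).

~~~
import math

def fermat_factor(n):
    # walk a upward from ceil(sqrt(n)) until a^2 - n is a perfect square
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

print(fermat_factor(10403))  # (101, 103) -- close primes fall out instantly
~~~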
So I promised a post after work last night, discussing the new factorization technique.
As before, I use a method called decon() that takes any number, like 697 for example, and first breaks it down into the respective digits and magnitudes.
697 becomes -> 600, 90, and 7.
It then factors *those* to give a decomposition matrix that looks something like the following when printed out:
offset: 3, exp: [[Decimal('2'), Decimal('3')], [Decimal('3'), Decimal('1')], [Decimal('5'), Decimal('2')]]
offset: 2, exp: [[Decimal('2'), Decimal('1')], [Decimal('3'), Decimal('2')], [Decimal('5'), Decimal('1')]]
offset: 1, exp: [[Decimal('7'), Decimal('1')]]
Each entry is a pair of numbers representing a prime base and an exponent.
Now the idea was that, in theory, at each magnitude of a product, we could actually search through the *range* of the product of these exponents.
So for offset three (600) here, we're looking at
2^3 * 3^1 * 5^2.
But actually we're searching
2^3 * 3^1 * 5^2
2^3 * 3^1 * 5^1
2^3 * 3^1 * 5^0
2^3 * 3^0 * 5^2
2^3 * 3^0 * 5^1
etc..
On the basis that whatever it generates may be the digits of another magnitude in one of our target product's factors.
And the first optimization or filter we can apply is to notice that assuming our factors pq=n,
and where p <= q, it will always be more efficient to search for the digits of p (because it's under n^0.5, the square root), than for the larger factor q.
So by implication we can filter out any product of this exponent search that is greater than the square root of n.
Writing this code was a bit of a headache because I had to deal with potentially very large lists of bases and exponents, so I couldn't just use loops within loops.
Instead I resorted to writing a three state state machine that 'counted down' across these exponents, and it just works.
And now, in practice this doesn't immediately give us anything useful. And I had hoped this would at least give us *upper bounds* to start our search from, for any particular digit of a product's factors at a given magnitude. So the 12th digit (or pick a magnitude out of a hat) of an example product might give us an upper bound on the 2's exponent for that same digit in our lowest factor q of n.
It didn't work out that way. Sometimes there would be 'inversions', where the exponent of a factor on a magnitude of n, would be *lower* than the exponent of that factor on the same digit of q.
But when I started tearing into examples and generating test data I started to see certain patterns emerge, and immediately I found a way to not just pin down these inversions, but get *tight* bounds on the 2's exponents in the corresponding digit for our product's factor itself. It was like the complications I initially saw actually became a means to *tighten* the bounds.
For example, for one particular semiprime n=pq, this was some of the data:
n - offset: 6, exp: [[Decimal('2'), Decimal('5')], [Decimal('5'), Decimal('5')]]
q - offset: 6, exp: [[Decimal('2'), Decimal('6')], [Decimal('3'), Decimal('1')], [Decimal('5'), Decimal('5')]]
It's almost like the base 3 exponent in [n:7] gives away the presence of 3^1 in [q:6], even though there's no subsequent presence of 3^n in [n:6] itself.
And I found this rule held each time I tested it.
Other rules, not so much, and other rules still would fail in the presence of yet other rules, almost like a giant switchboard.
I immediately realized the implications: rules had precedence, acted predictable when in isolated instances, and changed in specific instances in combination with other rules.
This was ripe for a decision tree generated through random search.
Another product n=pq, with more data:
q(4)
offset: 4, exp: [[Decimal('2'), Decimal('4')], [Decimal('5'), Decimal('3')]]
n(4)
offset: 4, exp: [[Decimal('2'), Decimal('3')], [Decimal('3'), Decimal('2')], [Decimal('5'), Decimal('3')]]
Suggesting that a nontrivial base 3 exponent (**2 rather than **1) means the exponent on the 2 in the relevant digit of [n] is one less than the base 2 exponent at the same digit of [q].
And so it was clear from the get go that this approach held promise.
From there I discovered a bunch more rules and made some observations.
The bulk of the patterns, regardless of how large the product grows, should be present in the smaller bases (some bound of primes, say the first dozen), because the exponents in the factorization of any magnitude of a number overwhelmingly lean toward the lower prime bases.
It was as if the entire vulnerability had been hiding in plain sight for four+ years, and we'd been approaching factorization all wrong from the beginning, by trying to factor a number, and all its digits at all its magnitudes, all at once, when, like addition or multiplication, factorization could be done piecemeal if we knew the patterns to look for.7 -
Today was a rather funny day in school. School starts for me at 13:40 because our timetable planners are so qualified for this job.
First 2hrs: Physics, fine its good
Second 2hrs: Discrete Maths (however you want to call it)
Goal is to write a text (30 pages, 10, etc., all those standard settings). Teacher prefers LaTeX over Word, but we can do it in Word if we want. We could choose a topic; I took primes because it looked the best. I decided to use LaTeX because I'm a fetishist and it simply looks better in the end. A classmate was arguing with our teacher about IDEs: Texmaker vs Kile. And I'm like "I use vim". So my teacher is like kk
Later that class, when we actually started doing stuff, I started an ssh session to my server, because I don't know any good C++ compilers for Windows and I'm too lazy to get a portable version of Cygwin (or whatever it's called). So in my server I open vim and start coding my tool for Fermat primes (Fermatsche Primzahlen, too lazy to actually translate). And this teacher seriously is the best teacher I ever met in my life. Usually teachers are like "dude r u hakin' the school server?" and I'm like bruh it's just vim, and I'm doing it this way because I cannot code on your PC since I can't install a compiler. And this teacher is like "oh hey, you actually use vi, all the cool kids used it in 2000. I first thought u were kidding and stuff..." And we continued talking about more stuff like that, and I have to say this is the first teacher that actually understands me. Phew
Now I'm going to continue writing my 30 pages piece of trash latex doc and hope it'll end good1 -
ok, fuck people. i mean the people who talk about these things like they're a big deal. you don't need to take a course in html/css to build a website, you need documentation.
people act like programming languages are a whole separate literacy. they're not. it is not a big deal, nor an accomplishment of any significance, to learn any language to a basic extent. variables, control flow, functions and scope should not be considered challenging topics, and people should stop bragging about them. i'm pretty sure this is because programming is new. as people, i think when something is new we tend to think of it as more complex and harder to understand. basic programming is not that.
ok that was a tangent from my real point. college is a scam. anyone can learn anything from books and the internet. any time you want to learn about something, go to google, and search "${my topic} site:*.github.io" and you'll have a page about that topic written by someone who is knowledgeable and passionate about the topic. colleges don't teach people how to think like these books/websites do. and i'm fucking sick of people who'd rather see a degree than a portfolio. fuck them shits bro. i can distinguish my smart friends because my smart friends speak logically and enjoy becoming smarter. i would take the kid who watches aerodynamics videos on youtube and then built a plane over a kid who studied and got a five on his ap physics exam. watching then doing is better learning than watching and repeating. after all, creativity is not at all measured in our grades, and i'd like to argue that sometimes intelligence isn't even measured. i mean, people can say they're good at math, but the kids who talk about fibonacci numbers and why there can never be two primes more than 7 (if i remember properly) integers apart, or the ones who prove cryptographic algorithms. i guess what i'm trying to say is the dumb kids aren't dumb and the smart kids aren't smart (well, not that), but kids who are passionate and just do something instead of waiting for their degree to do the same thing are the best and brightest. i forgot what i was talking about. sorry it is almost 2 am and i am intoxicated, and i don't believe i got my point across very well either.7 -
Is there any exact way to get the product of all primes under n multiplied together, without explicitly knowing what those primes are?
Lets call this number V.
Because hypothetically, if we calculate from the *base* of V, then we can derive easy divisibility rules for V-1 and V+1, as laid out
here:
https://notaboutapples.wordpress.com/...
And then, unless I've misunderstood something, the problem of factorization has been changed from division into an addition and subtraction problem.12 -
Another GeeksForGeeks rant
Wisecrack got me a bit interested in primes (just a passing interest). I looked up their python implementation of "Sieve of Atkin". Wow, is it bad.
First of all, they use PascalCase instead of underscore_notation, so that's points off right there.
Their function takes a limit as a parameter (pretty obviously).
Their program breaks if you pass a prime number as the limit. That's right, if you give it a 2, it breaks. Pretty pathetic.
Reading the comments, their Java implementation is wrong too.
For fuck's sake guys, if you're going to have an algorithm blog, at least write good algorithms.6 -
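For contrast, a version that doesn't choke on prime limits (a sketch of the standard algorithm, not GfG's code):

~~~
def sieve_of_atkin(limit):
    # returns all primes <= limit; handles limit == 2 and other primes fine
    sieve = [False] * (limit + 1)
    r = int(limit ** 0.5) + 1
    for x in range(1, r):
        for y in range(1, r):
            n = 4 * x * x + y * y
            if n <= limit and n % 12 in (1, 5):
                sieve[n] = not sieve[n]
            n = 3 * x * x + y * y
            if n <= limit and n % 12 == 7:
                sieve[n] = not sieve[n]
            n = 3 * x * x - y * y
            if x > y and n <= limit and n % 12 == 11:
                sieve[n] = not sieve[n]
    for i in range(5, r):                       # strike out squarefull numbers
        if sieve[i]:
            for k in range(i * i, limit + 1, i * i):
                sieve[k] = False
    return [p for p in (2, 3) if p <= limit] + \
           [i for i in range(5, limit + 1) if sieve[i]]

print(sieve_of_atkin(2))   # [2] -- the case their version chokes on
print(sieve_of_atkin(31))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
~~~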
Finally.... Spent over an hour trying to optimize ~5 lines of code... Guess it was a review on how to use primes for hashing, but....
Root cause was I just needed a slightly faster hashing function...
1 test failed from a timeout of maybe 1 sec. Test shows passed, then in details shows Timeout...
https://hackerrank.com/challenges/... -
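For anyone hitting the same wall, the usual "primes for hashing" trick is a polynomial rolling hash with a prime base and a prime modulus (a sketch; I don't remember exactly what the challenge wanted):

~~~
def poly_hash(s, base=31, mod=(1 << 61) - 1):   # both base and mod are prime
    h = 0
    for ch in s:
        h = (h * base + ord(ch)) % mod
    return h

print(poly_hash("devrant"))
~~~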
Looking for someone to test a new factorization script I wrote.
https://pastebin.com/Td2XTKe6
Tested against a set of products from all primes under 1000. Worked even on numbers up to 87954921289
Worked for about 66% of the products tested.
Obviously I'm cheating a little bit because I'm checking for four conditions n%a == 0, n%a == 1, n%b == 0, and n%b == 1
It appears it is possible to generate the series from just the product, and then factor each result. The last factor in each set of factors becomes x, and we do p%x and check for zero.
if it works, we've found our answer.
Kind of wonky, but basically what it's doing is taking p, tacking a 0 onto the right, and then tacking p onto the right of *that*.
So if you had a product like
314
The starting number we look at is
3140314
The middle digit becomes i, and the unit digit becomes j.
Don't know why it works more often than not, and don't know if it would really be any faster.
Just think it's cool.9 -
Accurately estimates the number of primes under k,
from k=29 to k=232 (within +/- 1..2):
ceil(k-((phi/(1/((log(phi**(k-6), phi))/1))))/2)
Played with an alife I made.
And built a system to explore long chains of polynomials where the exponents were prime.
You can look at it if you like here:
https://pastebin.com/3trWAU7v
Don't blame me if your console explodes though!12 -
Up all damn night making the script work.
Wrote a non-sieve prime generator.
Thing kept outputting one or two numbers that weren't prime, related to something called carmichael numbers.
In any case, got it to work; god damn was it a slog though.
Generates next and previous primes pretty reliably regardless of the size of the number
(I haven't gone over 31 bits because I haven't had a chance to implement decimal for this).
Don't know if the sieve is the only reliable way to do it. This seems to do it without a hitch, and doesn't seem to use a lot of memory. You don't have to constantly return to a lookup table of small factors or their multiples either.
Technically it generates the primes out of the integers, and not the other way around.
It takes 0.01-0.02 seconds per prime up to around the 100 million mark, and then it gets into the 0.15-1 second range per generation.
At around primes of a couple billion, it's averaging about 1 second per bit to calculate 1. whether the number is prime or not, and 2. what the next or previous prime is. Although I'm sure there's some optimization or improvement possible here.
Seems reliable but obviously I don't have the resources to check it beyond the first 20k primes I confirmed.
From what I can see it didn't drop any primes, and it didn't include any errant non-primes.
Codes here:
https://pastebin.com/raw/57j3mHsN
Your go-tos should be nextPrime(), lastPrime(), isPrime(), genPrimes() (up to but not including some N), and genNPrimes(), which generates x primes for you.
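Hypothetical usage, with the return values I'd expect from the description (signatures assumed from the pastebin, not re-verified):

~~~
print(nextPrime(100))    # -> 101
print(lastPrime(100))    # -> 97
print(isPrime(101))      # -> True
print(genPrimes(20))     # primes up to but not including 20
print(genNPrimes(5))     # the first 5 primes
~~~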
Speed limit definitely seems to top out at 1 second per bit for a prime once the code is in the billions, but I don't know if thats the ceiling, again, because decimal needs implemented.
I think the core method, in calcY (terrible name, I know), could probably be optimized in some clever way if it's given an adjacent prime and the parameters that were used. There's probably some pattern I'm not seeing, but eh.
I'm also wondering if I can't use those fancy aberrations, 'Carmichael numbers' or whatever the hell they are, to calculate some sort of offset, and by doing so, figure out a given prime's index.
And all my brain says is "sleep"
But family wants me to hang out, and I have to go talk a manager at home depot into an interview, because wanting to program for a living, and actually getting someone to give you the time of day are two different things.1 -
For any product of two non-trivial primes, it is *always* possible to get the quotient of its factors b/a derived solely from the product of those factors, *without* first factoring the product (p).
Fight me.3 -
I find it very interesting how many types of primes there are.
This kind of prime number, I think, is very nice!
What types of primes do you like?
https://sololearn.com/learn/12365/...6 -
Math question time!
Okay so I had this idea and I'm looking for anyone who has a better grasp of math than me.
What if instead of searching for prime factors we searched for a number above p?
One with a certain special property. BEAR WITH ME. I know I make these posts a lot and I'm a bit of a shitposter, but I'm being genuine here.
Take this cherry picked number, 697 for example.
It's factors are 17, and 41. It's trivial but just for demonstration.
If we represented its factors as a bit string, where each bit represents the index that factor occurs at in a list of primes, it looks like this
1000001000000
When converted back to an integer that number becomes 4160, which we will call f.
And if we do 4160/(2**n) repeatedly until the result returns a fractional component, then n in this case will be 7.
And 7 is the index (counting from 1) of our lowest factor, 17 (let's call it A, and our highest factor we'll call B), in our primes list.
So the problem is changed from finding a factorization of p, to finding an algorithm that allows you to convert p into f. Once you have f it's a matter of converting it to binary, looking up the indexes of all bits set to 1, and finding the values of those indexes in the list of primes.
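A small sketch of the factor-bitmask encoding just described, round-tripping the example (the hard part, going from p to f directly, is exactly what's still open):

~~~
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]

def to_f(factors):
    # set bit i for each factor sitting at index i in the primes list
    f = 0
    for fac in factors:
        f |= 1 << primes.index(fac)
    return f

def from_f(f):
    return [p for i, p in enumerate(primes) if f >> i & 1]

f = to_f([17, 41])
print(f)          # 4160
print(bin(f))     # 0b1000001000000
print(from_f(f))  # [17, 41]
~~~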
I'm working on doing that and if anyone has any insights I'm all ears.9 -
https://i.postimg.cc/4ycRFNZf/...
The factorization shit I'm always ranting about? I decided for once to explain it visually in this handy dandy little infographic.
We're essentially transforming the product from an unsmooth set of potential factors in its factor tree, to a factorization tree that guarantees, first, that the set of potential factors are all 2, 3, 5, and a or b of p, and second, that all the factors are *smooth integers* of a or b.
This is basically what Adi Shamir was trying to do with TWINKLE and TWIRL, despite checking a hundred thousand+ potential primes.
I did it in four.7 -
Wherein I disprove Goldbach's Conjecture (in one specific case)
Goldbach conjecture:
every even number is the sum of two primes
lets call the primes p and q
lets call our even number p+q=n
we can go further by establishing two additional variables
u=p-1, v=q-1
therefore every even number is the sum of u+v+2, according to Goldbach's own reasoning.
in the simplest case...
p=2, q=2, p+q=4
u=1, v=1, u+v+2 = 4
We can therefore make a further conjecture: in the simplest case, every sum of two primes, less 2, is the sum of two composites. This likely has connections to the abc conjecture for a variety of reasons. But leaving ancillary discussion aside for a moment...
We can generalize to a statement that every even number is the sum of two odd numbers. And every odd number greater than 1 is the sum of an odd number and an even number.
Finding an even number that is not the sum of (p-1)+(q-1) would therefore be equivalent to disproving the goldbach conjecture. Likewise proving every even number is the sum of (p-1)+(q-1) would be the equivalent of proving it.
Proving all even numbers greater than 2 are the sum of two composites + 2 would be proof of goldbachs conjecture, and finding any example or an equation that proves an example exists such that *some* subset of even numbers are NOT the sum of two composites +2, would disprove the conjecture.
Lets start with a simple example:
2+2=4
because 4-2=2, and two is not the sum of two composite numbers, Goldbach's conjecture must ipso facto be false.
QED
If I've wildly misapprehended the math, please, somebody who is better at it, correct me.
Honestly if this is actually anything, I'd be floored to discover no one has stumbled on this line of reasoning before.8 -
So I made a couple slight modifications to the formula in the previous post and got some pretty cool results.
The original post is here:
https://devrant.com/rants/5632235/...
The default transformation from p, to the new product (call it p2) leads to *very* large products (even for products of the first 100 primes).
Take for example
a = 6229, b = 10477, p = a*b = 65261233
While the new product the formula generates, has a factor tree that contains our factor (a), the product is huge.
How huge?
6489397687944607231601420206388875594346703505936926682969449167115933666916914363806993605...
So huge I put the whole number in a pastebin here:
https://pastebin.com/1bC5kqGH
Now, that number DOES contain our example factor 6229. I demonstrated that in the prior post.
But first, it's huge, 2972 digits long, and second, many of its factors are huge too.
Right from the get-go I had a hunch, and did (p2 mod p), and the result was surprisingly small, much closer to the original product. Then just to see what happens, I subtracted this result from the original product.
The modification looks like this:
(p-(((abs(((((p)-(9**i)-9)+1))-((((9**i)-(p)-9)-2)))-p+1)-p)%p))
The result is '49856916'
That's within the ballpark of our original product.
And then I factored it.
1, 2, 3, 4, 6, 12, 23, 29, 46, 58, 69, 87, 92, 116, 138, 174, 276, 348, 667, 1334, 2001, 2668, 4002, 6229, 8004, 12458, 18687, 24916, 37374, 74748, 143267, 180641, 286534, 361282, 429801, 541923, 573068, 722564, 859602, 1083846, 1719204, 2167692, 4154743, 8309486, 12464229, 16618972, 24928458, 49856916
Well damn. It's not a-smooth or b-smooth (where 'smoothness' is defined as 'all factors are beneath some number n')
but this is far more approachable than just factoring the original product.
It still requires a value of i equal to
i = floor(a/2)
But the results are actually factorable now if this works for other products.
I rewrote the script and tested on a couple million products and added decimal support, and I'm happy to report it works.
Script is posted here if you want to test it yourself:
https://pastebin.com/RNu1iiQ8
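In miniature, the core check looks like this, using the example values from above (a sketch; the pastebin has the full version):

~~~
a, b = 6229, 10477
p = a * b
i = a // 2    # floor(a/2) = 3114, so 9**i is the ~2972-digit monster
r = (p-(((abs(((((p)-(9**i)-9)+1))-((((9**i)-(p)-9)-2)))-p+1)-p)%p))
print(r)      # 49856916, per the run described above
print(r % a)  # 0 -- the original factor 6229 divides it
print(r // a) # 8004, one of the cofactors in the list above
~~~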
What I'll do next is probably add some basic factorization of trivial primes (say the first 100), and then figure out the average number of factors in each derived product.
I'm also still working on getting to values of i < a/2, but only having sporadic success.
It also means *very* large numbers (either a subset of them or universally) with *lots* of factors may be reducible to unique products with just two non-trivial factors, but thats a big question mark for now.
@scor if you want to take a look.5 -
!rant
Just posted some Swift code to a server; in a few hours I should have all the prime numbers from 1 to a billion!4 -
Let some number P be the product of two factors, A and B
Let the iterations of A*1..2..3..N up to B+1 be a directed graph.
Would this graph be Eulerian?
If so, then it should be possible to use the BEST algorithm to count the number of Eulerian circuits, yielding B, no?
Edit: this is supported by the following text:
An arborescence can equivalently be defined as a rooted digraph in which the path from the root to any other vertex is unique.
* * *
Where the product of any two primes is unique, the path to it across our graph here, is also unique. -
Here's a fairly useless but interesting tidbit:
if i = n, then with
r = (abs(((((p)-(9**i)-9)+1))-((((9**i)-(p)-9)-2)))-p+1+1)
r%a will (almost*) always return 0 when n = floor(a/2), for the lowest non-trivial factor a of a two-factor product p.
That's not really the interesting bit though. The interesting bit is that the result r will always be some product with a *larger* factor tree that includes the factor A of p, but not p's other, larger factor B.
So, useless from what I can see. But it's an interesting function on its own, simply because of what it does.
I wrote a script to test it. For all two-factor products of the first 1000 primes (with no repeating combinations, so if we calculated, say, 23*31, we skip 31*23), only 3262 products failed this little formula, out of half a million.
All others reliably returned 0 for the following..
~~~
i = a // 2  # floor(a/2)
r = (abs(((((p)-(9**i)-9)+1))-((((9**i)-(p)-9)-2)))-p+1+1)
r%a
~~~
The distribution of failures was *very* early on in the set of factors, and once fixed at the value of 3262, stopped increasing for the rest of the run.
I didn't calculate if some primes were more likely to cause a product to fail or not. Nor the factor trees, nor if the factor trees had any factors in common between products, or anything of that nature.
All in all I count this as a worthwhile experiment.
If you want to run the code yourself, I posted it to pastebin here:
https://pastebin.com/Q4LFKBjB
edit:
Tried Wolfram Alpha just to see what it says, but apparently not much. Wish it could tell me more.40