Search - "free gpu"
-
"Thank you for choosing Microsoft!"
No, Microsoft, I really didn't choose you. This crappy hardware made you inevitable, not a choice.
And like hell do I want to run your crappy shit OS. I tried to reset my PC and got all my programs removed (because that's obviously where the errors are, not in the OS, right? Certified motherfuckers). Yet the problem still wasn't resolved even after the reset. So now I'm installing Windows fresh again, because "I chose this".
Give me a break, Microshaft. If it wasn't for your crappy OS, I would've gone to sleep hours ago. And disabling your shitty telemetry is what brought this upon me: it locked me out of Insider updates, just because I added a registry key and disabled a service. Just how much data collection are you going to force on your "nothing to hide, nothing to fear" users, Microsoft?
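For context, this is roughly the registry key + service combo in question - a minimal sketch, assuming the usual AllowTelemetry policy value and the DiagTrack service (Windows only, needs an admin shell):

import subprocess
import winreg

# Set the AllowTelemetry policy value to 0 ("Security" level).
key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Policies\Microsoft\Windows\DataCollection",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "AllowTelemetry", 0, winreg.REG_DWORD, 0)
winreg.CloseKey(key)

# Stop and disable the "Connected User Experiences and Telemetry" service.
subprocess.run(["sc", "stop", "DiagTrack"], check=False)
subprocess.run(["sc", "config", "DiagTrack", "start=", "disabled"], check=False)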
Honestly, at this point I think Microsoft under Ballmer might've been better. Because while Linux was apparently "cancer" back then, at least this shitty data collection in exchange for "a free OS" wasn't a thing yet.
My mother still runs Vista, an OS that reached EOL a few months ago. Last time she visited me I recommended she switch to Windows 7, because it looks the same but performs better and is still supported. She refused, because it might break her configuration. Granted, that install is probably full of malware, but at this point I'm glad she refused.
Even Windows 7 has telemetry forcibly enabled at this point. Vista may be unsupported, but at least it didn't fall victim to the current status quo - data mining on every Microshaft OS that's still supported.
Microsoft may have been shady ever since they pressured manufacturers into defaulting to their OS, and GPU manufacturers have probably been lobbied into supporting Windows exclusively as well. But this data mining shit? Not even the Ballmer era was as horrible as this. My mother may not realize it, but she unknowingly avoided it.
-
I wrote something to make QEMU VMs with GPU passthrough easy, thought I might share.
Feel free to look and uhh help me improve it, I really have no idea how some things in QEMU work. PRs and stuff appreciated.
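For anyone curious what the script wraps, here's a hedged sketch of a bare-bones passthrough launch - it assumes the IOMMU is enabled and the GPU is already bound to vfio-pci at 0000:01:00.0 (adjust addresses and disk image for your box):

import subprocess

cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",
    "-machine", "q35",
    "-cpu", "host,kvm=off",   # hide KVM so NVIDIA drivers don't throw Code 43
    "-m", "8G",
    "-device", "vfio-pci,host=0000:01:00.0,multifunction=on",  # the GPU itself
    "-device", "vfio-pci,host=0000:01:00.1",                   # its HDMI audio function
    "-drive", "file=vm.qcow2,if=virtio",
]
subprocess.run(cmd, check=True)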
https://github.com/sr229/nya
-
Oi mates!
Little #ad (not annoying, don't worry - it's a cool project)
Just wanted to let y'all know about the awesome project from Stanford University called Folding@Home!
Basically, you donate CPU/GPU power and they use it to research cancer, Alzheimer's, etc.
All you need to do is install some software on your server/computer.
Then the software downloads so-called "Work Units" (no big bandwidth required - really small packets) and simulates/calculates some stuff. Afterwards the client sends the results back to their servers.
This way they are able to create a "supercomputer" that is spread all over the world.
You don't need to pay anything except maybe a slightly increased electricity bill (but you can change some settings to use only a small part of the CPU/GPU and therefore create less heat).
Of course the program only uses the CPU/GPU power that's not required by any other software on the computer. I can literally play games while the client is running. No performance decrease.
That's a short intro by me. I suggest you visit their website and maybe even start folding yourself!
> https://foldingathome.com
Also, @cr78, @kescherRant and I are in a team together. If you want to join our team as well, just use our Team ID:
235222
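If you run the client on a headless box, you can pass that straight on the command line - a hedged sketch (double-check the option names against FAHClient --help on your version; the username here is just a placeholder):

import subprocess

subprocess.run([
    "FAHClient",
    "--user=Anonymous",   # placeholder - pick your own name
    "--team=235222",      # our Team ID from above
    "--power=light",      # throttle it so games stay smooth
])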
Teams?
Yup, there's this little stats site (https://stats.foldingathome.com) where all teams can compete against each other. Nothing big.
I hope I convinced at least some of you!
Feel free to ask questions in the comments!
See ya.
-
After a lot of work I figured out how to build the graph component of my LLM. Figured out the basic architecture, how to connect it in, and how to train it. The design and how-to are 100% done.
Ironically, generating the embeddings is slower than I expect the training itself to take.
A few extensions of the design will also allow bootstrapped and transfer learning, and, as a reach, unsupervised learning, but I still need to work out the fine details on that.
Right now, because of the design of the embeddings (different from standard transformers in a key aspect), they're slow. Like 10 tokens per minute on an i5 (Python, no multithreading, no optimization at all, no training on GPU). I've come up with a modification that takes the token embeddings and turns them into hash keys, which should be significantly faster for a variety of reasons. Essentially, I generate a tree of all weights, where the parent nodes are the mean of their immediate child nodes, split the tree on lesser-than/greater-than values, and then convert the node values to keys in a hashmap to make lookup very fast.
Weight comparison can be done either directly through tree traversal, or using normalized Hamming distance between parent/child weight keys and the lookup weight.
That last bit is designed already and just needs to be implemented, but it's completely doable.
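A rough sketch of the idea, simplified down to scalar weights and a power-of-two weight count (not the real implementation, just the shape of it):

def build_tree(weights):
    # bottom-up: sort, pair neighbours; each parent's value is the mean
    # of its two immediate children
    level = [(w, None, None) for w in sorted(weights)]
    while len(level) > 1:
        level = [((level[i][0] + level[i + 1][0]) / 2, level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def key_of(w, node):
    # walk the tree on lesser-than/greater-than against each node's mean,
    # packing one bit per level -- this is the hash key
    bits = 0
    while node[1] is not None:
        go_right = w >= node[0]
        bits = (bits << 1) | int(go_right)
        node = node[2] if go_right else node[1]
    return bits

def leaf_keys(root):
    # exact key for every stored weight: its root-to-leaf path bits
    out, stack = {}, [(root, 0)]
    while stack:
        node, bits = stack.pop()
        if node[1] is None:
            out[bits] = node[0]
        else:
            stack.append((node[1], bits << 1))
            stack.append((node[2], (bits << 1) | 1))
    return out

ws = [-0.5, -0.1, 0.05, 0.12, 0.33, 0.9, 1.4, 2.0]
root = build_tree(ws)
table = leaf_keys(root)
print(table[key_of(0.11, root)])  # probes near 0.12; a miss can fall back
                                  # to Hamming distance between keys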
The design itself is 100% attention-free, incidentally.
I'm outlining the step-by-step, only the essentials, to train a word-boundary detector, a noun detector, and a verb detector, as I'd already considered before. But now I'm actually able to implement it.
The hard part was figuring out the *graph* part of the model, not the NN part (if you can even call it an NN - it doesn't fit the definition, but I don't know what else to call it): determining what the design would look like, the necessary graph token types, what function they should have, *how* they use the context, how that's calculated, how loss is to be calculated, and how to train it.
I'm happy to report all that is now settled.
I'm hoping to get more work done on it on my day off, but that's seven days away - 9-10 hour shifts, working fucking Burger King - and all I want to do is program.
And all because no one takes me seriously due to not having a degree.
Fucking aye. What is life.
If I had a laptop, and insurance and taxes weren't a thing, I'd go live in my car and code in a fucking McDonald's or a park all day, and not have to give a shit about any of these other externalities, like earning minimum wage and paying 25% of it in rent each month and 20% in taxes and other government bullshit.
-
doing things right seems to be a waste of time
especially considering how fast things change beneath you
for years I've said I should just stop overoptimizing, but I've yet to fucking try it. I said this to myself 5 hours ago, and what am I doing? I spent the last 5 hours trying to overoptimize for a theoretical scenario that I won't ever even be in, because I haven't even decided what I want to immediately do
how about immediately doing something, Jesus fuck
next Tuesday this system will have to be rewritten again anyway
how many fucking asset loading systems do I want under my belt
just load the fucking assets
maybe decide on which assets you'll be using first, so you don't overoptimize for "WELL MAYBE I'LL LOAD THEM INDIVIDUALLY AND PUT THEM IN A NICE IMAGE IN RAM FIRST BEFORE I SEND IT OFF TO THE GPU". omg just shut up. just because you can doesn't mean you should. it doesn't help that everyone using this thing keeps insisting you do it this way. but you don't fucking need to.
actually you know what, I blame them. they kept confusing me with "yOu ShOulD do It ThiS WAY". next thing I know, I'm walking through every conceivable way to do asset loading so I can decide how to load assets, and I end up architecting the perfect system. I didn't want to be on this path. but they told me to be on this path. I blame them
take-away: if you can make it work at all, just use that unless it breaks something. fuck how or for "what" the dumb system is designed. people don't stick to their "designs" anyway -- it's idealism. like free stuff, it sounds great in theory, but in reality it just shits up everything because it's unrealistic. the entire "system" it ever needed to be is the sketch below
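a dumb cache, stdlib only (load_asset is just a hypothetical name):

from pathlib import Path

_cache = {}

def load_asset(path):
    # read it once, keep it, hand it back. no atlases, no pipelines, no "design"
    if path not in _cache:
        _cache[path] = Path(path).read_bytes()
    return _cache[path]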
-
So I'm planning to build a PC, which I'll mainly use for dev and gaming in my free time. My main components will be:
CPU: INTEL 8700K
GPU: GTX 1080 (MSI or Gigabyte?)
SSD: 860 EVO
RAM: 16GB 3200MHz
MOTHERBOARD: should I go with MSI or Gigabyte - which one is better?
PSU: 650W or 700W Deepcool?
Also, for the CPU cooler, do I get water cooling or a standard CPU fan?
P.S.: I plan to overclock the CPU and GPU at some point.
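On the PSU question, a rough power budget - hedged, ballpark board-power figures, not measurements:

parts = {
    "i7-8700K OC": 150,        # 95 W stock TDP, overclocking pushes well past it
    "GTX 1080 OC": 220,        # 180 W stock board power plus OC headroom
    "mobo/RAM/SSD/fans": 75,
}
load = sum(parts.values())                                   # ~445 W
print(f"load ~{load} W, +40% headroom ~{load * 1.4:.0f} W")  # ~620 W

So a quality 650W unit already covers it; 700W just buys extra margin (or room for that second GPU).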
Also, what's your opinion on RGB lighting for the GPU and motherboard? And is there a point in getting a mobo with SLI support (is it worth buying a second GPU at some point, or better to upgrade the existing one)?
-
Anybody got any tips on how to not get kicked out of Google Colab notebooks? Everything I've found is from like 2020ish.
-
Hi,
So I have been using Colab for the past 2 years. I liked how, without much setup, you could use kernels with a GPU or TPU after some configuration.
But recently I can't train any model. It always ends in a runtime error or "runtime disconnected", not to mention they have limited the total hours of usage per day.
I know you are providing everything for free, but this is just annoying. I don't mind if Google wants to start a subscription plan for Colab... it's much better for fast prototyping than getting a cloud server from Google or AWS or anything of the sort.
I have been trying to train a model on only 3 gigs of data and I can't finish it; the moment I change tabs, it shows "Runtime Disconnected". DAMN it.
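If you're stuck with it, checkpointing to Drive at least makes the disconnects survivable - a minimal sketch assuming PyTorch, where the model and batches are stand-ins for your real ones:

import os
import torch
from torch import nn, optim
from google.colab import drive  # only available inside Colab

drive.mount("/content/drive")
ckpt = "/content/drive/MyDrive/ckpt.pt"

model = nn.Linear(10, 1)               # stand-in for your real model
opt = optim.Adam(model.parameters())
step = 0

if os.path.exists(ckpt):               # resume after a disconnect
    state = torch.load(ckpt)
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["opt"])
    step = state["step"]

while step < 10_000:
    x = torch.randn(32, 10)            # stand-in for a real batch
    loss = model(x).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    step += 1
    if step % 500 == 0:                # a kick only costs the last 500 steps
        torch.save({"model": model.state_dict(),
                    "opt": opt.state_dict(),
                    "step": step}, ckpt)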
Sadly, I am going to try not to use Colab from now on.
But yeah, I am frustrated with Colab and their services.