Search - "size matters"
-
Debugger... Chicken, I guess? Got it from one of my students last year. Adapter for scale. It's not the size that matters...
-
Approximately 15 years ago, at school, I had a friend who learned to code in Delphi. One day he came to me and said:
Dude! I made a program that creates a 2GB file!!! (15 years ago, 2GB was a lot of disk space)
- What is in this file? - I asked.
- Nothing! It's just 2GB!!!
- It's fucking AWESOME!!!
-
Things said at work that would be misunderstood when taken out of context:
Yesterday-
client: "I don't like the D"
Boss: "well what if it's a little d"
Client: "I don't think the size of the D matters, do you think people make decisions on the size of a D"
Me: *trying so hard not to laugh that I spit coffee everywhere*
Today-
Boss: "are you working on that sex padding?"
Me: *trying so hard not to laugh that I spit coffee everywhere*
-
So recently I had an argument with gamers about how much memory a graphics card needs. The guy suggested the 8GB model of... idk, I forgot the model of GPU already, some Nvidia crap.
I argued against that: why does memory size matter so much? I know that it takes bandwidth to generate and store a frame, and I know exactly how much size and bandwidth that is. It's a fairly simple calculation - you take your horizontal and vertical resolution (e.g. 2560x1080, which I'll go with for the rest of the rant), times the number of subpixels (red, green and blue), times the bit depth (i.e. the number of values a subpixel's brightness can take, usually 8 bits, i.e. 0-255).
The calculation would thus look like this.
2560*1080*3*8 = the resulting size in bits. You can omit the last 8 to get the size in bytes, but only for an 8-bit display.
The resulting number you get is exactly 8100 KiB or roughly 8MB to store a frame. There is no more to storing a frame than that. Your GPU renders the frame (might need some memory for that but not 1000x the amount of the frame itself, that's ridiculous), stores it into a memory area known as a framebuffer, for the display to eventually actually take it to put it on the screen.
Assuming that the refresh rate for the display is 60Hz, and that you didn't overbuild your graphics card to display a bazillion lost frames for that, you need to display 60 frames a second at 8MB each. Now that is significant. You need 8x60MB/s for that, which is 480MB/s. For higher framerate (that's hopefully coupled with a display capable of driving that) you need higher bandwidth, and for higher resolution and/or higher bit depth, you'd need more memory to fit your frame. But it's not a lot, certainly not 8GB of video memory.
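If you want to sanity-check that, here's the same arithmetic as a few throwaway lines of Python (just the example numbers from above, nothing GPU-specific):
```python
# Back-of-the-envelope framebuffer math, same figures as above:
# 2560x1080, 3 subpixels, 8 bits per subpixel, 60 Hz refresh.

def framebuffer_bytes(width, height, subpixels=3, bits_per_subpixel=8):
    """Size of one uncompressed frame in bytes."""
    return width * height * subpixels * bits_per_subpixel // 8

frame = framebuffer_bytes(2560, 1080)
refresh_hz = 60

print(f"one frame : {frame / 1024:.0f} KiB (~{frame / 1e6:.1f} MB)")
print(f"bandwidth : {frame * refresh_hz / 1e6:.0f} MB/s at {refresh_hz} Hz")
# one frame : 8100 KiB (~8.3 MB)
# bandwidth : 498 MB/s at 60 Hz (the ~480 MB/s figure rounds a frame down to 8 MB)
```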
Question time for gamers: suppose you run your fancy game off an iGPU in a laptop or whatever, with 8GB of memory in the whole system you're resorting to running that filthy iGPU in. Are you actually using all that shared general-purpose RAM for frames and "there's more to it" juicy game data? Where does the rest of the operating system's memory fit in such a case? Ahhh.. yeah, it doesn't. The iGPU magically doesn't use all that 8GB of memory you've just told me the dGPU totally needs.
I compared it to displaying regular frames, yes. After all that's what a game mostly is, a lot of potentially rapidly changing frames. I took the entire bandwidth and size of any unique frame into account, whereas the display of regular system tasks *could* potentially get away with less, since most of the frame is unchanging most of the time. I did not make that assumption. And rapidly changing frames is also why the bitrate on e.g. screen recordings matters so much. Lower bitrate means that you will be compromising quality in rapidly changing scenes. I've been bit by that before. For those cases it's better to have a huge source file recorded at a bitrate that allows for all these rapidly changing frames, then reduce the final size in post-processing.
I've even proven that driving a 2560x1080 display doesn't take oodles of memory because I actually set the timings for such a display in order for a Raspberry Pi to be able to drive it at that resolution. Conveniently the memory split for the overall system and the GPU respectively is also tunable, and the total shared memory is a relatively meager 1GB. I used to set it at 256MB because just like the aforementioned gamers, I thought that a display would require that much memory. After running into issues that were driver-related (seems like the VideoCore driver in Raspbian buster is kinda fuckulated atm, while it works fine in stretch) I ended up tweaking that a bit, to see what ended up working. 64MB memory to drive a 2560x1080 display? You got it! Because a single frame is only 8MB in size, and 64MB of video memory can easily fit that and a few spares just in case.
I must've sucked all that data out of my ass though - I've only seen people build GPUs out of discrete components, and only gone down to the realm of manually setting display timings myself.
Interesting build log / documentary style video on building a GPU on your own: https://youtube.com/watch/...
Have fun!
-
A long long time ago (2007 I think) I worked for a company that made landing sites. So basically an email campaign would go out, users would be sent to a 1-page website with a form to capture their data, ready to be spammed even more. You know how it was back then.
So I worked with a guy we had just hired. I didn't do the hiring, but his CV checked out, so I gave him one of my tasks. Now, most pages were made with JS and HTML, with a PHP backend (called with Ajax). This guy didn't know PHP, so I was like, all good, ASP works too; at the end of the day we don't judge, we do like 2 or 3 of these a day and never look at them again. So he goes off and does his thing.
3 weeks later, the customer calls up to tell me they still haven't received their landing page. Ok, so he probably forgot to email the customer, np; I tell him to double check he has emailed the customer. Another week goes by and the customer calls back, same problem. At this point I'm getting worried, because we're days away from the deadline and it was originally my task.
So I go back to the guy and tell him I want that landing page so I can send it myself, half thinking to myself that we have a freeloader - that guy who comes into companies for 3 weeks, doesn't work, but still cashes his pay. But no, this was much worse.
So he tells me he hasn't finished yet. I ask him why, what's the blocker? You had 4 weeks to tell me you were blocked and couldn't progress. And his answer was simply: I wasn't blocked, I've been working on it this whole time. So I tell him to zip his project up and email it to me. We didn't do SVN or git back then; it simply wasn't worth it. So he comes back to me and says the email server is telling him attachments can't be bigger than 50MB. At this point I'm thinking he didn't properly size the art or something, so I give him a flash drive to put it on.
When I then open the flash drive, the archive is 300MB. Thinking to myself, the images weren't even that big to begin with.
So I open it up, and I don't even find any images, just a single ASP page. About 500MB. When I opened that up and it finally loaded, I saw the most horrendous things ever.
The first 500 lines were just initializing empty vars. Then there was some code that created an empty form with an onChange event that submits the form. After that... it was just non-stop nested ifs. No loops, no while, for, foreach, NO elseifs, just nested ifs, for every possible combination of states the form could be in. About 5000 of them, in a single file. To make matters worse, all the form (and page) layout was hardcoded in the ifs: inline CSS, base64-encoded images, nothing dynamic at all; based on the length of the form he changed the layout, added more backgrounds, etc. He cut the images up for every possible size of the page and included them in the code.
I showed it to my boss, and he fired the guy on the spot. I redid the work from scratch in under 4 hours and sent it to the client; they had no amendments to make, happy as Larry. Wish I had kept the code somewhere.
Moral of the story: always do a coding test in interviews, even a small one, just as a sanity check.
-
The next step for improving large language models (if not diffusion) is hot-encoding.
The idea is pretty straightforward:
Generate many prompts, or take many prompts as a training and validation set. Do partial inference, and find the intersection of best overall performance with least computation.
Then save the state of the network during partial inference, and use that for all subsequent inferences. Sort of like LoRA, but for inference instead of fine-tuning.
Inference, after all, is what matters. And there has to be some subset of prompt-based initializations of a network that perform, regardless of the prompt, (generally) as well as a full inference pass.
Likewise with diffusion, there likely exist some priors (based on the training data) that speed up reconstruction or lower the network loss, allowing us to substitute a 'snapshot' that has the correct distribution without necessarily performing a full generation.
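Rough sketch of the mechanics I have in mind, with a toy stack of dense layers standing in for a real network - the layer count, the split point and the scoring below are all made up for illustration, not a working recipe:
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a network: a stack of dense layers. DIM, N_LAYERS and SPLIT
# are arbitrary; a real setup would be a transformer and its hidden/KV state.
DIM, N_LAYERS, SPLIT = 32, 8, 4
weights = [rng.standard_normal((DIM, DIM)) / DIM**0.5 for _ in range(N_LAYERS)]

def run_layers(x, start, stop):
    for w in weights[start:stop]:
        x = np.tanh(x @ w)
    return x

def full_inference(prompt):
    return run_layers(prompt, 0, N_LAYERS)

# 1. Partial inference on a calibration set of prompts; save the state after
#    the first SPLIT layers for each one.
calibration = [rng.standard_normal(DIM) for _ in range(64)]
snapshots = [run_layers(p, 0, SPLIT) for p in calibration]

# 2. Score each snapshot by how close "resume from snapshot" lands to a full
#    forward pass on held-out validation prompts (lower is better).
validation = [rng.standard_normal(DIM) for _ in range(32)]
targets = [full_inference(p) for p in validation]

def score(snapshot):
    out = run_layers(snapshot, SPLIT, N_LAYERS)   # prompt-independent by construction
    return np.mean([np.linalg.norm(out - t) for t in targets])

best = min(snapshots, key=score)

# 3. "Hot" inference: skip the first SPLIT layers entirely and resume from the
#    cached state, whatever the prompt is.
def hot_inference(_prompt):
    return run_layers(best, SPLIT, N_LAYERS)
```
Whether a single cached state can actually stand in for the skipped prefix on arbitrary prompts is exactly the open question; the snippet only shows the bookkeeping.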
Another idea I had was 'semantic centering' instead of regional image labelling. The idea is to find some patch of an object within an image and ask, of all the patches that belong to an object, which one best describes it - if it were a dog, which patch of the image is "most dog-like", etc. I could see it as being much closer to how the human brain quickly identifies objects via shortcuts. The size of such patches could be adjusted to minimize the cross-entropy of classification relative to the tested size of each patch (pixel-sized patches, for example, might lead to too high a training loss). Of course it might allow us to do a scattershot 'at a glance' lookup of potential image contents; even if you get multiple categories for a single pixel, it greatly narrows the total span of categories you need to do subsequent searches for.
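Again, just a toy sketch of what the "most dog-like patch" search could look like mechanically - the image, the classifier and the patch sizes here are all placeholders:
```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((64, 64, 3))      # placeholder for a real image

def fake_classifier(patch):
    # Placeholder: a real model would return class probabilities for the patch.
    m = patch.mean()
    return {"dog": m, "cat": 1.0 - m}

def most_object_like_patches(img, patch_size, classify):
    """For each class, find the patch position that scores highest for it."""
    h, w, _ = img.shape
    best = {}                                    # class -> (score, (y, x))
    step = max(1, patch_size // 2)               # 50% overlap between patches
    for y in range(0, h - patch_size + 1, step):
        for x in range(0, w - patch_size + 1, step):
            probs = classify(img[y:y + patch_size, x:x + patch_size])
            for cls, p in probs.items():
                if p > best.get(cls, (-1.0, None))[0]:
                    best[cls] = (p, (y, x))
    return best

# Sweep the patch size; a real version would pick the size that minimizes
# classification loss rather than just printing the candidates.
for size in (8, 16, 32):
    print(size, most_object_like_patches(image, size, fake_classifier))
```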
In other news I'm starting a new ML blackbook for various ideas. Old one is mostly outdated now, and I think I scanned it (and since buried it somewhere amongst my ten thousand other files like a digital hoarder) and lost it.
I have some other 'low-hanging fruit' type ideas for improving existing and emerging models but I'll save those for another time.
-
does recursion have any practical use outside of being a cute/elegant solution under constraints where stack overflow isn't a concern due to small input size, and leetcode?
im having trouble thinking of anywhere you could justify using recursion in industry outside of leetcoding people
i assume the iterative approach would be preferred in scenarios where scaling matters
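fwiw it shows up wherever the data itself is recursive and the depth is bounded by structure rather than input size - directory trees, nested JSON/configs, ASTs. throwaway Python sketch (made-up data, obviously):
```python
# Summing sizes in a nested structure (think: parsed JSON, an AST, a file tree).
# Recursion depth is bounded by how deeply the data nests, not by how big it is,
# so the stack limit is rarely the thing that bites you here.
def total_size(node):
    if isinstance(node, int):        # leaf: a size in bytes
        return node
    return sum(total_size(child) for child in node.values())

tree = {"src": {"main.py": 4096, "util.py": 1024}, "README.md": 512}
print(total_size(tree))              # 5632
```
the explicit-stack / iterative version is the usual escape hatch when depth genuinely isn't bounded (user-supplied JSON, say), but for structurally shallow data the recursive form is what people actually ship.
-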
My 11k LOC frontend codebase with webpack compiles into a 1.2 MB minified bundle file... RIP mobile users
-
Well, the good thing about last week is that I helped my company get through their hurdles of getting their backend to work with their mobile apps. It was over the weekend, but hey, it gets me paid.
I just hope the PM will cut me some slack for not doing git commits properly. After all, we're not big in terms of company size, and if the PM is too anal about it, we can't move fast enough. As long as the PRs are reviewed and the web app is confirmed to work, nothing else matters.