Search - "low bandwidth"
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare against. It would take around a second on average to look up each literal, for a service that needs to handle high load at low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would look it up in 1ms on average, with the worst possible case being very rare and not far from that.
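(The rant doesn't say what the data actually looked like, so purely as an illustration of the general shape of such a fix: if the column held "prefix%"-style LIKE patterns, you could bucket them by their fixed leading character so each lookup only tests a handful of candidates instead of scanning a million rows.)

```python
# Illustrative only: bucket the stored LIKE patterns by their first literal character
# so a lookup tests a few candidates instead of every row.
import re
from collections import defaultdict

def like_to_regex(pattern):
    """Translate a SQL LIKE pattern ('%' = any run, '_' = any char) to an anchored regex."""
    parts = []
    for ch in pattern:
        if ch == '%':
            parts.append('.*')
        elif ch == '_':
            parts.append('.')
        else:
            parts.append(re.escape(ch))
    return re.compile('^' + ''.join(parts) + '$')

def build_index(patterns):
    """Group compiled patterns by their first literal character ('' = wildcard-leading)."""
    index = defaultdict(list)
    for p in patterns:
        key = '' if not p or p[0] in '%_' else p[0]
        index[key].append(like_to_regex(p))
    return index

def lookup(literal, index):
    """Return True if any stored pattern matches the literal."""
    candidates = index.get(literal[:1], []) + index.get('', [])
    return any(rx.match(literal) for rx in candidates)

index = build_index(["foo%", "ba_", "%z"])
assert lookup("food", index) and lookup("bar", index) and not lookup("qux", index)
```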
In another case there was a lookup over arbitrary time spans that most people would not bother to cache, because the input parameters are too short-lived and variable to make a difference. I replaced the 50,000+ line application acting as a middleman between the application and the database with 500 lines of code that did the lookup faster and made a reasonable caching strategy possible. This dropped resource consumption by a factor of ten at least: misses were cheaper, and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
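(Again, just the shape of the caching idea rather than the real 500 lines: quantise the requested span so those short-lived, variable parameters collapse onto shared cache entries. `fetch_from_db` and the 60-second bucket are made up for the example.)

```python
# Illustrative only: round span boundaries to coarse buckets so near-identical
# requests become cache hits; misses just fall through to the real lookup.
from functools import lru_cache

BUCKET = 60  # seconds; purely an example value

def fetch_from_db(start, end):
    """Hypothetical stand-in for the real (expensive) time-span lookup."""
    return [("row", start, end)]

def _quantise(ts):
    return int(ts) // BUCKET * BUCKET

@lru_cache(maxsize=4096)
def _cached(start_bucket, end_bucket):
    return fetch_from_db(start_bucket, end_bucket)

def lookup_span(start_ts, end_ts):
    # Fetch the slightly widened, bucket-aligned span; a real version would trim
    # the result back down to the exact span requested.
    return _cached(_quantise(start_ts), _quantise(end_ts) + BUCKET)
```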
Another system would constantly download a huge data set for every point of sale, then parse and apply it. It had to reflect changes quickly, but would download the whole dataset each time, containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C (much faster than the traditional interpreted language), then use a custom data-differential format, a TCP data-streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. The protocol also used versioning for catch-up and combined differentials for a further reduction in size. It went from being 30 seconds to a few minutes behind to being able to keep within a second of changes. It had also been using so much bandwidth that it would reach the limit on ADSL connections and then get throttled. I looked at the traffic stats afterwards and it had dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off, because that's what it looked like. It could now happily run over GPRS or 56K.
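(The real wire format isn't described beyond that list, so here's a toy Python version of the same shape: compute a row-level delta, binary-serialise it, LZMA-compress it, and tag it with a version number so a point of sale that fell behind can catch up by applying the deltas it missed.)

```python
# Illustrative only: rows keyed by primary key; a "delta" is just the upserted rows plus
# the keys that disappeared, pickled and LZMA-compressed, tagged with a version number.
import lzma
import pickle

def make_delta(old_rows, new_rows):
    upserts = {k: v for k, v in new_rows.items() if old_rows.get(k) != v}
    deletes = [k for k in old_rows if k not in new_rows]
    return {"upserts": upserts, "deletes": deletes}

def encode(version, delta):
    return lzma.compress(pickle.dumps((version, delta)))

def apply_delta(rows, payload):
    version, delta = pickle.loads(lzma.decompress(payload))
    rows.update(delta["upserts"])
    for k in delta["deletes"]:
        rows.pop(k, None)
    return version  # the client stores this so it knows which deltas it still needs
```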
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries: someone had written terrible SQL, then to "optimise" it ran it in the background with all possible variable values and stored the results of the joins and aggregates in new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately, operated on the original tables, and took things down from 30GB (and rapidly climbing) to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations, and the existing solution ran in factorial time. I worked out how to optimise it to run in n*n, which, believe it or not, made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
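(The actual maths isn't given, so purely as a toy illustration of how a factorial-time enumeration can collapse once you exploit structure: if the quantity you need is a sum over all permutations, symmetry often does the work for you, because each value lands in each position exactly (n-1)! times.)

```python
# Not the rant's actual problem - just a toy showing a factorial-time enumeration
# collapsing to a closed form. Summing w[i]*x[perm[i]] over every permutation naively
# costs O(n!*n); by symmetry each x[j] occupies each position exactly (n-1)! times.
from itertools import permutations
from math import factorial

def brute_force(w, x):
    return sum(sum(wi * x[p[i]] for i, wi in enumerate(w))
               for p in permutations(range(len(x))))

def closed_form(w, x):
    n = len(x)
    return factorial(n - 1) * sum(w) * sum(x)

w, x = [1, 2, 3, 4], [5, 6, 7, 8]
assert brute_force(w, x) == closed_form(w, x)
```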
I built my own frontend systems (admittedly rushed) that do what Angular/React/Vue aim for, but with higher (maximum) performance, including an in-memory database to back the UI. It had layered, event-driven indexes, could handle referential integrity (an overlay on the database only revealing items with valid integrity), and could process reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
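(A very rough toy of one piece of that description, assuming nothing about the real code: a store that pushes change events to listeners, plus an overlay index that only reveals rows whose foreign key resolves. The custom AVL tree and the ordered/partial indexes are left out.)

```python
# Toy version: a store that emits change events, and an overlay "index" that only
# reveals child rows whose parent actually exists. Deletes and reordering are omitted.

class Store:
    def __init__(self):
        self.rows = {}        # id -> row dict
        self.listeners = []   # callbacks fired on every change event

    def put(self, row_id, row):
        self.rows[row_id] = row
        for fn in self.listeners:
            fn("put", row_id, row)

class FkOverlay:
    """Layered index: exposes only rows whose 'parent_id' resolves in the parent store."""
    def __init__(self, child, parent):
        self.visible = set()
        self.child, self.parent = child, parent
        child.listeners.append(self._on_event)

    def _on_event(self, op, row_id, row):
        if op == "put" and row.get("parent_id") in self.parent.rows:
            self.visible.add(row_id)

    def __iter__(self):
        return (self.child.rows[i] for i in self.visible)

# Usage: rows only show up in the overlay once their parent is present.
parents, children = Store(), Store()
overlay = FkOverlay(children, parents)
parents.put(1, {"name": "order"})
children.put(10, {"parent_id": 1, "name": "line item"})   # visible
children.put(11, {"parent_id": 2, "name": "orphan"})      # hidden
assert [r["name"] for r in overlay] == ["line item"]
```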
So many times I've optimised things on automatic, just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick. -
I HATE YOU STREAMING SERVICES! FUCK YOU!
Here's the setup:
I work in a rather small office where we are like 7 people (including me). Now, there's one person in charge of putting music through the speakers (obviously, not everyone enjoys the same kind of music).
Well, we have a hell of a small pipe (1.5 Mbps tops). Now, add to that that every single fucker here uses "Spotify" and is streaming their music...
WHY!!!!
Good side: I have my earphones and ~30GB of my music on my phone, so it's not an issue for me. Also, I'm kind of an audiophile, so Spotify quality sucks.
Bad side: I can't even fucking load Google because those fuckers are eating the bandwidth. -
Me: The web app is downloading a lot of static content while loading the page, which makes it very slow in low-bandwidth locations. Can you ensure compression is enabled while serving static files?
UI Developer: Sure, I'll look into that. Btw, I have a question regarding that.
Me: Yes, pls.
UI Developer: Once the compressed static files are downloaded to the browser, should I write a separate module to uncompress them?
Me: :-(Strategic Facepalm)
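(For anyone wondering the same thing: no. `Content-Encoding: gzip` tells the browser to decompress it itself. A minimal standard-library sketch follows, with `app.js` as a placeholder file name; a real server would also check the request's Accept-Encoding first.)

```python
# Illustrative only: serve a gzip-compressed static file. The Content-Encoding header
# is the whole trick - the browser undoes the compression transparently.
import gzip
from http.server import BaseHTTPRequestHandler, HTTPServer

class GzipStaticHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with open("app.js", "rb") as f:            # hypothetical static bundle
            body = gzip.compress(f.read())
        self.send_response(200)
        self.send_header("Content-Type", "application/javascript")
        self.send_header("Content-Encoding", "gzip")   # browser handles decompression
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), GzipStaticHandler).serve_forever()
```

-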
Set up an account on cPanel to show the client their final product. Set the bandwidth limit to a low number so that the client can only view the website once and so that if something is not right, I can blame the bandwidth error.
After the client reported an issue, I fixed it before resetting the bandwidth limit.
So, I'm looking into something and end up on Stack Overflow. Someone posted the question:
"Does minified javascript improve performance?"
This question was old as shit, all the way from 07/25/09, and about an Adobe Air application. (Remember that? Me neither...) It had a great, accepted, and still accurate answer, posted the same day the question was asked. Now, fast forward 8 years, and on 12/08/17 (a mere 7 months ago...) the following answer was posted. I don't know what they were thinking, but here it is, complete and unabridged, with my comments in square brackets:
"I'd like to post this as a separate answer as it somewhat contrasts the accepted one: [Somewhat contrasts? More like completely contradicts...]
Yes, it does make a performance difference as it reduces parsing time - and that's often the critical thing. For me, it was even just simply linear in the size and I could get it from 12s to 4s parse time by minifying from 3MB to 1MB. [First off, your parse time should NEVER be THE critical thing, but secondly, and more importantly, WHO THE FUCK HAS 1MB OF MINIFIED JS ON A PAGE!!!]
It's not a big app either, it just has a couple of reasonable dependencies. [THERE IS ABSOFUCKINGLUTELY NOTHING REASONABLE ABOUT ANYTHING HE JUST SAID! What dependancies is he using?! You could use minified and not even gzipped jQuery, AngularJS, Vue, Ember, React, AND Dojo libraries on the SAME PAGE, AND have 118k of application code, AND STILL NOT HAVE HIT 1MB QUITE YET!!!]
So the moral of the story here is: Yes, minifying is important for performance - and not because of bandwidth, but because of parsing. [JavaScript should NEVER take longer to parse than to download, even on a low-powered device...]"
So, yeah, I'm at a loss as to what this guy was thinking, but the thought that people like this exist, and that my browser might one day be subjected to their horrific nightmare of code, terrifies me... -
Ideas I've had over the years that could pan out and be useful:
SMS-DB: Stands for SMS-Data Burst. Used to allow those with low cell signal or no data plan to transfer data between a phone and some client via the standard SMS text space. Would be slow, but would act kinda like dial-up over SMS (as mobile lines are compressed on all service levels, even LTE, so traditional dial-up wouldn't work!) I have a general idea on how packets would be laid out, but that's about it so far...
everything2PNG: Allows one to transpose any file's data into a PNG at 3 bytes per pixel (full-colour RGB), which allows for a "compression" of sorts (about 91-93% in preliminary tests) AND allows further, more efficient compression of the resulting file. (Plus... it's just kinda cool to see files transposed as PNGs.) I actually have a simple transposer to go to PNG, but can't yet go back (a rough sketch of the idea follows this list). Large files (around 600MB) use upwards of 4GB with efficient paging and other optimizations via NumPy so far, so it's not *viable* yet, but it's coming along nicely.
RPi-GPIO Interconnection Bus: A master/slave or round-robin method to allow Raspberry Pis to communicate using GPIO, which can help free up network bandwidth in RPi cloud computing clusters. At most, this'd allow for 4 bits used for pushing to the GPIO "bus" and 4 bits used for pulling from it. At least 8 pins are usually unused, so either 3 or 4 pins for upload, 3 or 4 for download, and potentially 1 or 2 for commands and general non-data communication, etc. I made a version of this concept using round robin for a client, but it was horribly slow. (I also don't have distribution rights for the code, so I'm working from scratch.) Definitely doable (a sender-side sketch also follows this list).
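(A rough sketch of the everything2PNG idea using Pillow, not the actual transposer; the length header is just one possible way to make the reverse step work: pack the bytes three per RGB pixel into a square image and record the original length so the padding can be stripped again.)

```python
# Illustrative only: any bytes -> square RGB PNG and back, 3 payload bytes per pixel,
# with an 8-byte big-endian length header so trailing padding can be removed.
import struct
from math import ceil, sqrt
from PIL import Image

def to_png(data: bytes, out_path: str):
    payload = struct.pack(">Q", len(data)) + data        # length header + raw bytes
    pixels = ceil(len(payload) / 3)
    side = ceil(sqrt(pixels))                            # smallest square that fits
    payload += b"\x00" * (side * side * 3 - len(payload))  # pad to fill the image
    Image.frombytes("RGB", (side, side), payload).save(out_path)

def from_png(path: str) -> bytes:
    raw = Image.open(path).convert("RGB").tobytes()
    (length,) = struct.unpack(">Q", raw[:8])
    return raw[8:8 + length]
```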
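(A rough sketch of the sender half of the GPIO bus idea using the RPi.GPIO library; pin numbers are placeholders, and the master/slave arbitration, command pins and receive side are all left out. Each byte goes out as two 4-bit nibbles, with a strobe pin telling the peer when the data lines are valid.)

```python
# Illustrative only: push bytes out over 4 GPIO data lines plus a strobe line.
import RPi.GPIO as GPIO

DATA_PINS = [17, 27, 22, 23]   # placeholder BCM pins, least significant bit first
STROBE = 24                    # placeholder BCM pin

def setup():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(DATA_PINS + [STROBE], GPIO.OUT, initial=GPIO.LOW)

def send_nibble(nibble):
    for bit, pin in enumerate(DATA_PINS):
        GPIO.output(pin, (nibble >> bit) & 1)
    GPIO.output(STROBE, GPIO.HIGH)   # peer samples the data lines on this edge
    GPIO.output(STROBE, GPIO.LOW)

def send_byte(byte):
    send_nibble(byte & 0x0F)   # low nibble first
    send_nibble(byte >> 4)
```

-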
I feel like I am being forced to own a shitty module in our codebase.
It was developed by previous owners and they made a Frankenstein monster out of it: it's one part of the codebase that is very large, doesn't follow the code standards, makes complex kinds of API calls, and uses very niche components. It gets bugs once in a while, BUT IT WORKS.
It fuckin' works and is one of the important steps before a customer purchases a company product, so it's kinda part of the revenue-generation flow.
But this module was never part of the codebase we would usually touch. It was owned by another team; they would add enhancements and new features to it and fix the bugs.
When I joined the team, I was once asked to help those guys as a "resource" because they wanted to get something shipped and were low on bandwidth. So I just worked on one of the screens, added a small bugfix, and voila, task done, and I was back to other parts of the app.
But now, out of nowhere, they've decided to pass ownership to our team. They gave a small KT which didn't really explain much of the actual codebase, just its business functionality (and that, too, poorly). And my TL is saying that I should own it because "I worked on that module before".
I don't know how to deal with this Frankenstein monster. Earlier a bug came in and I was at my wits' end trying to understand why. Their logging is weird and doesn't explain much; their backend devs help by providing AWS logs, but those aren't very helpful either.
The best I could do was declare that their technical approach is wrong and we should modify it, but that idea was quickly squashed.
It's quite possible that the company isn't going to change this module or add any new features to it. But every time a bug comes in, I'll be getting frustrated looking at their Frankenstein monster.