Search - "low latency"
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to do the equivalent of `"literal" LIKE column` with a million rows to compare against. It took around a second on average to look up each literal, for a service that needs to handle high load at low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that does the lookup in 1ms on average, with the worst possible case being very rare and not far off that.
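The rant doesn't show the implementation, but one plausible shape for that kind of reverse-`LIKE` lookup is to pre-index the stored patterns by the literal text before their first wildcard, so each query only tests a handful of candidates instead of a million rows. A minimal sketch, using `fnmatch`-style `*`/`?` wildcards in place of SQL's `%`/`_`; the patterns and prefix length are made up:

```python
from collections import defaultdict
from fnmatch import fnmatchcase

class PatternIndex:
    """Match a literal string against many stored wildcard patterns.

    Patterns are bucketed by their literal prefix (the text before the
    first wildcard), so a lookup only tests the few patterns whose
    prefix is itself a prefix of the query string.
    """

    def __init__(self, patterns, prefix_len=3):
        self.prefix_len = prefix_len
        self.buckets = defaultdict(list)
        for p in patterns:
            # Literal prefix: everything before the first wildcard.
            cut = min((p.find(c) for c in "*?" if c in p), default=len(p))
            self.buckets[p[:min(cut, prefix_len)]].append(p)

    def lookup(self, literal):
        # Candidate buckets: every prefix of the literal up to prefix_len,
        # plus the empty key for patterns that start with a wildcard.
        hits = []
        for i in range(min(len(literal), self.prefix_len) + 1):
            for p in self.buckets.get(literal[:i], ()):
                if fnmatchcase(literal, p):
                    hits.append(p)
        return hits

idx = PatternIndex(["foo*", "foob?r", "*bar"])
print(idx.lookup("foobar"))   # ['*bar', 'foo*', 'foob?r']
```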
In another case there was a lookup over arbitrary time spans that most people would not bother to cache, because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middle man between the application and database with 500 lines of code that did the lookup faster and implemented a reasonable caching strategy. This dropped resource consumption by at least a factor of ten: misses were cheaper and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
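The rant doesn't describe the caching strategy, but one standard way to make "too short-lived and variable" time-span parameters cacheable is to snap requested spans to bucket boundaries so the same cache keys recur, then trim the result back down. A hypothetical sketch — the bucket size and `fetch_from_db` stand in for the real system:

```python
from functools import lru_cache

BUCKET = 300  # seconds; snap spans to 5-minute boundaries (assumed)

def fetch_from_db(start, end):
    # Stand-in for the real (expensive) database lookup.
    return [(t, f"row@{t}") for t in range(start, end, 60)]

@lru_cache(maxsize=4096)
def fetch_bucketed(bstart, bend):
    return fetch_from_db(bstart, bend)

def lookup_span(start, end):
    # Widen to bucket boundaries so near-identical queries share keys...
    bstart = start - start % BUCKET
    bend = end + (-end % BUCKET)
    rows = fetch_bucketed(bstart, bend)
    # ...then trim back to the span the caller actually asked for.
    return [r for r in rows if start <= r[0] < end]

print(len(lookup_span(1000, 1900)))  # miss: hits the "database"
print(len(lookup_span(1010, 1890)))  # hit: same buckets, served from cache
```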
Another system would constantly download a huge data set for every point of sale, then parse and apply it. It had to reflect changes quickly, but it downloaded the whole dataset each time, containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C (much faster than the traditional interpreted language), then use a custom data-differential format, a TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. The protocol also used versioning for catch-up and could combine differentials for a further reduction in size. It went from being 30 seconds to a few minutes behind to being able to keep within a second of changes. It had also been using so much bandwidth that it would hit the cap on ADSL connections and get throttled. Looking at the traffic stats afterwards, it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines; from the drop in the graphs you'd think all the machines had been turned off. It could now happily run over GPRS or 56K.
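None of the custom protocol survives in the rant, but the core trick — ship only row-level differentials, binary-serialise, LZMA-compress, and tag each message with a version for catch-up — can be sketched briefly. Everything below (the dict-of-rows model, pickle as the serialiser) is an assumption standing in for the custom formats:

```python
import lzma
import pickle

def make_diff(old, new):
    """Row-level diff between two dataset versions (dicts keyed by row id)."""
    upserts = {k: v for k, v in new.items() if old.get(k) != v}
    deletes = [k for k in old if k not in new]
    return {"upserts": upserts, "deletes": deletes}

def encode(version, diff):
    # Binary serialisation + LZMA, with a version tag for catch-up.
    return lzma.compress(pickle.dumps((version, diff)))

def apply_diff(dataset, blob, expected_version):
    version, diff = pickle.loads(lzma.decompress(blob))
    assert version == expected_version, "gap detected: request a catch-up"
    dataset.update(diff["upserts"])
    for k in diff["deletes"]:
        dataset.pop(k, None)
    return dataset

v1 = {1: ("cola", 120), 2: ("crisps", 80)}
v2 = {1: ("cola", 125), 3: ("gum", 60)}

blob = encode(2, make_diff(v1, v2))
print(apply_diff(dict(v1), blob, expected_version=2))
# {1: ('cola', 125), 3: ('gum', 60)}
```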
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries: someone had written terrible SQL, then "optimised" it by running it in the background with every possible combination of variable values and storing the results of the joins and aggregates in new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB and rapidly climbing to a couple of GB.
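As a hypothetical illustration of that query-generation fix (all table and column names invented): instead of reading a pre-materialised result table per parameter combination, generate one parameterised aggregate over the original tables at request time:

```python
def build_sales_query(group_by, filters):
    """Generate one parameterised aggregate over the original tables,
    instead of reading a pre-materialised result table per combination."""
    allowed = {"region", "product", "day"}          # whitelist identifiers
    assert set(group_by) <= allowed and set(filters) <= allowed
    cols = ", ".join(group_by)
    where = " AND ".join(f"{c} = ?" for c in filters) or "1 = 1"
    sql = (
        f"SELECT {cols}, SUM(amount) AS total "
        f"FROM sales JOIN products USING (product_id) "
        f"WHERE {where} GROUP BY {cols}"
    )
    return sql, [filters[c] for c in filters]

sql, params = build_sales_query(["region", "day"], {"product": "cola"})
print(sql)    # one query over the live tables; no snapshot tables needed
print(params) # ['cola']
```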
Another time a piece of mathematics had to consider all possible permutations and the existing solution ran in factorial time. I worked out how to optimise it to run in n*n time which, believe it or not, made a world of difference. It went from hardly handling anything to handling anything thrown at it. No more trying to get people to "freeze the system now".
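The rant doesn't say what the maths was, so here's an illustrative stand-in for the general pattern: an ordering problem that looks like "enumerate all n! permutations" collapsing to an O(n²) pairwise-exchange pass. Minimising total weighted completion time is a classic example:

```python
from itertools import permutations

jobs = [(3, 1), (1, 4), (2, 2)]  # (processing_time, weight) — made-up data

def cost(order):
    t = total = 0
    for p, w in order:
        t += p
        total += w * t          # weighted completion time
    return total

# Factorial: try every one of the n! orderings.
best_bruteforce = min(permutations(jobs), key=cost)

# O(n^2): pairwise exchange — put i before j when p_i*w_j <= p_j*w_i.
order = list(jobs)
for i in range(len(order)):
    for j in range(len(order) - 1):
        (p1, w1), (p2, w2) = order[j], order[j + 1]
        if p1 * w2 > p2 * w1:
            order[j], order[j + 1] = order[j + 1], order[j]

print(cost(best_bruteforce) == cost(order))  # True
```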
I built my own frontend systems (admittedly rushed) that do what angular/react/vue aim for but with higher (maximum) performance, including an in-memory database to back the UI. It had layered event-driven indexes and could handle referential integrity (an overlay on the database revealing only items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
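The custom AVL tree isn't shown; as a minimal sketch of the layering idea, here's an ordered index with an integrity overlay that reveals only rows whose references resolve. A plain sorted list stands in for the AVL tree, and all field names are invented:

```python
import bisect

class OrderedIndex:
    """Event-driven ordered index; a sorted list stands in for the AVL tree."""

    def __init__(self, key):
        self.key = key
        self.rows = []            # kept sorted by key

    def upsert(self, row):
        self.delete(row["id"])
        keys = [self.key(r) for r in self.rows]
        self.rows.insert(bisect.bisect_left(keys, self.key(row)), row)

    def delete(self, row_id):
        self.rows = [r for r in self.rows if r["id"] != row_id]

class IntegrityOverlay:
    """Partial index layered on a base index: only reveals rows whose
    foreign key resolves, without copying any data."""

    def __init__(self, base, parents):
        self.base, self.parents = base, parents

    def __iter__(self):
        return (r for r in self.base.rows if r["parent"] in self.parents)

idx = OrderedIndex(key=lambda r: r["pos"])
idx.upsert({"id": 1, "pos": 5, "parent": "a"})
idx.upsert({"id": 2, "pos": 2, "parent": "ghost"})   # dangling reference
view = IntegrityOverlay(idx, parents={"a"})
print([r["id"] for r in view])   # [1] — row 2 hidden until its parent exists
```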
So many times have I optimised things on automatic, just cleaning up code as normal. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
Multi-continent low-latency auto-scaling eventually-consistent kubernetes-orchestrated and spark-powered multi-cloud data-platform.
(Note to self: why do jargon words always come in twos?)
But seriously, the engine ELTs naval and logistical data from every continent and ocean, and feeds a global analytics platform for less than 0.25 USD per ingested GB across all systems.
And sometimes the pods are even onboard ships en route! Edge computing, y'all!
Tech project I'm most proud of.
-
What would it take to connect two Raspberry Pis directly via their Ethernet ports? I want to make a low-latency network connection between them for RetroPie netplay.
I have a background in Python and some Linux, but I'm not well versed in Pis.
I imagine it would be limited to 100Mb/s if I used Raspberry Pi Zeros with adapters. And I would probably need a router, since neither is set up to hand out addresses with the default configuration?
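For what it's worth, a direct cable between two Pis should work without a router — modern Pi Ethernet (and most USB adapters) handle crossover automatically, so giving each end a static IP on the same subnet is enough, and on Pi Zeros the USB 2.0 bus would indeed keep things in the ~100Mb/s class. Since Python is familiar ground, here's a small UDP round-trip latency check to run between them (addresses and port are assumptions):

```python
# Run as "server" on one Pi and "client <server-ip>" on the other,
# after giving both static IPs (e.g. 192.168.10.1 and 192.168.10.2).
import socket
import sys
import time

PORT = 9999

def server():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", PORT))
    while True:                      # echo every datagram straight back
        data, addr = s.recvfrom(64)
        s.sendto(data, addr)

def client(host, count=100):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(1.0)                # a lost reply raises socket.timeout
    rtts = []
    for i in range(count):
        t0 = time.perf_counter()
        s.sendto(str(i).encode(), (host, PORT))
        s.recvfrom(64)
        rtts.append((time.perf_counter() - t0) * 1000)
    print(f"min/avg/max RTT: {min(rtts):.3f}/"
          f"{sum(rtts) / len(rtts):.3f}/{max(rtts):.3f} ms")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[1])
```
-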
Hooray! Voice is now working on localhost! Now to find a high-latency, low-reliability connection to stress-test the thing. Do you reckon sending the packets 3 times to echo.websocket.org is unreliable enough?
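One alternative to leaning on a third-party echo server: fake the bad link locally by wrapping the send path in random drops and jitter, keeping the send-everything-3-times redundancy. A toy asyncio sketch — the loss and jitter figures are made up:

```python
import asyncio
import random

LOSS = 0.3      # drop 30% of packets (made-up figure)
JITTER = 0.25   # up to 250 ms of added delay

async def lossy_send(queue, packet):
    """Deliver packet to queue the way a bad network would."""
    if random.random() < LOSS:
        return                                      # silently dropped
    await asyncio.sleep(random.uniform(0, JITTER))  # out-of-order arrival
    await queue.put(packet)

async def main():
    link = asyncio.Queue()
    # Send each voice packet 3 times, as in the redundancy idea above.
    sends = [asyncio.create_task(lossy_send(link, seq))
             for seq in range(5) for _ in range(3)]
    await asyncio.gather(*sends)
    received = set()
    while not link.empty():
        received.add(link.get_nowait())
    print(f"received {sorted(received)} of {list(range(5))}")

asyncio.run(main())
```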
-
I have a USB 3.0 hub that mostly works. However, sometimes it freaks out and starts disconnecting the things attached to it. It also causes my gaming mouse, which updates 1000 times per second, to operate wrongly. Yes, it was a cheap USB hub to begin with. I'm using a laptop and I want a decent hub to use with my gaming peripherals if possible. I have an old Belkin USB 2.0 hub I am going to try. But I really want a decent USB 3.0 hub. I need something that is not a cheap POS made by a no-name brand like most Amazon products. I want something good that I won't regret getting later. It also needs to have been tested with a 1ms-update-rate device like my gaming mouse.
Does such an animal exist?
-
When you go for "Oh, they do it cheap", don't expect results...
Changed my PC build around 2 years ago.
Went from Core i7 / Nvidia to Ryzen 9 / AMD
Welp, AMD is totally unstable.
I've invested $5k so I'm gonna ride it out, but NEVER, EVER EVER again am I buying an AMD CPU or GPU.
Shit is unstable as fuck. I have latency issues, CPU issues and video issues almost every week.
With the Intel/Nvidia combo I had before, I had issues maybe once every 3-4 months.
So yeah, buy low-cost AMD and you pay the price later in usability. Fuck them.
-
Java vs C++ for low latency... Thoughts?
I guess my immediate other question is: why aren't OSs written in Java?
https://stackoverflow.blog/2021/02/...