Search - "wk191"
-
A guy on another team who is regarded by non-programmers as a genius wrote a python script that goes out to thousands of our appliances, collects information, compiles it, and presents it in a kinda sorta readable, but completely non-transferable format. It takes about 25 minutes to run, and he runs it himself every morning. He comes in early to run it before his team's standup.
I wanted to use that data for apps I wrote, but his impossible format made that impractical, so I took apart his code, rewrote it in perl, replaced all the outrageous hard-coded root passwords with public keys, and added concurrency features. My script dumps the data into a memory-resident backend, and my filterable, sortable, taggable web "frontend" (very generous nomenclature) presents the data in html, csv, and json. Compared to the genius's 25-minute script that he runs himself in the morning, mine runs in about 45 seconds, and runs automatically in cron every two hours.
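Not the author's script, just a minimal Python sketch of the same pattern (key-based SSH, concurrent collection, machine-readable output); the host names and the uptime command are placeholders:

import concurrent.futures
import json
import subprocess

HOSTS = ["appliance-%03d" % i for i in range(1, 4)]  # hypothetical host names

def collect(host):
    # Key-based SSH (BatchMode refuses password prompts), one read-only command per host.
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", host, "uptime"],
        capture_output=True, text=True, timeout=30,
    )
    return {"host": host, "ok": result.returncode == 0, "data": result.stdout.strip()}

if __name__ == "__main__":
    # Query the appliances concurrently instead of one after another.
    with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(collect, HOSTS))
    # Machine-readable output so other apps can reuse the data (csv/html work the same way).
    print(json.dumps(results, indent=2))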
Optimized! -
Had to write and optimize a C++ program that finds, for 1000x1000x1000 grid points, the 100 nearest points for each (with an additional factor to make it more complicated).
It had to run in under 18 minutes to pass. No matter what I did I couldn't get it fast enough. I tried kd-trees, caching of certain points, optimizing distance calculations by omitting any irrelevant factor, saving points' calculated squares, etc. When I was down to 20 minutes, I realized that my makefile had an error and ignored the -O3 flag...
Well, it actually ran in 5 minutes with -O3.
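For illustration only (the original was C++): a rough Python sketch of the nearest-neighbour part with a kd-tree, where a small random point cloud stands in for the real grid.

import numpy as np
from scipy.spatial import cKDTree

# Small random point cloud standing in for the real 1000x1000x1000 grid.
rng = np.random.default_rng(0)
points = rng.random((10_000, 3))

# Build the kd-tree once, then query the 100 nearest neighbours of every point.
tree = cKDTree(points)
dists, idxs = tree.query(points, k=100)  # idxs[i] = indices of the 100 nearest points to points[i]

# When comparing distances by hand, squared distances are enough; skipping the
# square root is the kind of "omit any irrelevant factor" trick mentioned above.
sq_dists = np.sum((points[idxs[0]] - points[0]) ** 2, axis=1)
-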
1. Scripting out a team. I've built a collection of bash scripts to do what one of our teams does. Except the script does it in 30 minutes and always does it right, where that team used to take 4 to 10 hours and almost always missed something along the way.
2. Automating 70-80% of our BAU tasks with a single >4k LOC bash script. Integrations with ServiceNow, lots of internal portals, predefined huge sets of commands to run on separate servers or lists of servers, all sorts of diagnostics, scheduling hw maintenance for DC folks, chasing for approvals, tracking CHNG/CTSK tickets in a graphical chart so we would not miss any of them, and lots, lots more.
Finally we were able to afford time to make some coffee/tea.
These are the BAU optimizations I'm proudest of. And they have made a significant impact on how our teams operate.
Whoever recognizes both company values in the tags and knows which company that is - are they still using 'S' in the unix team? :) -
After a long time just reading your posts, here's my first post:
Just for clarification: I'm studying electrical engineering in Germany. During your time at university, you have to work half a year as an intern to get some practical experience. So I'm in a position where I mainly have to say "yes" to work that is given to me. Also, I'm working with a lot of PLC programmers, so I'm nearly the only one in the department who programs non-PLC stuff.
But now it's time for my rant (and also my most satisfying optimization ever). In the job interview for the internship, my task at the company was described as C# programmer. I had only programmed C and Python before, but C# looked interesting, so I learned C# from the ground up in the summer before the internship. I quite liked it and I was really happy on my first day of work. Then I was greeted with this message: "I know you are hired as a C# programmer, but could you please look into this VBA program? It takes 55 seconds until it finishes its task and that's too slow."

So I (mildly angry because I had to do VBA and not C#) started the program and it was really horribly slow (it just created a table with certain contents from a very big imported symbol file). I then opened up the source code and immediately saw bad code. The guy who wrote it had basically just clicked the macro recording button and used the recorded mouse clicks in the source code. The code was like: click on cell A1 -> copy cell A1 -> move to sheet XY -> click on cell A2 -> paste copied stuff and so on...

I had never 'programmed' in VBA before, so I used my knowledge of 'real' programming languages for this task. After using some arrays and for loops, which did not iterate over all the 1,000,000 unused cells after the last used one, the program finished the new table in only 3 seconds! Everybody was quite impressed, which led to much more VBA optimization... That was clearly not my goal haha :)
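The original was VBA; this is only a loose Python/openpyxl sketch of the same principle (read the used range into memory once, transform it, write it back in one pass), with made-up file and sheet names:

from openpyxl import load_workbook

# Hypothetical workbook and sheet names; the point is the access pattern, not the data.
wb = load_workbook("symbols.xlsx")
src = wb["Symbols"]
dst = wb.create_sheet("Table")

# Pull the used range into plain Python lists in one go: no cell-by-cell clicking,
# and empty filler rows after the last real entry are simply filtered out.
rows = [row for row in src.iter_rows(values_only=True) if any(cell is not None for cell in row)]

# Transform in memory, then write each result row once.
for row in rows:
    dst.append(row)

wb.save("symbols_table.xlsx")
-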
Last year at work we started migrating our backend from PHP on a dedicated server to Node.js on AWS Lambda functions.
We went from 10 second calls to 70ms calls...
At this point our frontend is not even ready for this kind of speed 😅 -
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. It would take around a second on average per literal to look up, for a service that needs to handle high load at low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would look it up in 1ms on average, with the worst possible case being very rare and not too far from that.
In another case there was a lookup of arbitrary time spans that most people would not bother to cache because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middleman between the application and the database with 500 lines of code that did the lookup faster and implemented a reasonable caching strategy. This dropped resource consumption by a factor of ten at least. Misses were cheaper and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects in the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would download a huge dataset for every point of sale constantly, then parse and apply it. It had to reflect changes quickly but would download the whole dataset each time, containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C (much faster than the traditional interpreted language), then use a custom data differential format, a TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. The protocol also used versioning for catch-up and differential combination for an additional reduction in size. It went from being 30 seconds to a few minutes behind to being able to keep within a second of changes. It was also using so much bandwidth that it would reach the limit on ADSL connections and then get throttled. I looked at the traffic stats afterwards and it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off; that's what it looked like. It could now happily run over GPRS or 56K.
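The actual stack isn't shown here; below is just a tiny Python sketch of the "ship a compressed diff instead of the full dataset" idea, with a naive row-level diff standing in for the custom differential format:

import json
import lzma

def diff(old_rows, new_rows):
    # Naive row-level differential: only changed/added rows and deleted keys are sent.
    changed = {k: v for k, v in new_rows.items() if old_rows.get(k) != v}
    deleted = [k for k in old_rows if k not in new_rows]
    return {"changed": changed, "deleted": deleted}

def encode(delta):
    # Serialise and LZMA-compress the delta before piping it to the points of sale.
    return lzma.compress(json.dumps(delta).encode("utf-8"))

def apply_delta(rows, payload):
    delta = json.loads(lzma.decompress(payload).decode("utf-8"))
    rows.update(delta["changed"])
    for k in delta["deleted"]:
        rows.pop(k, None)
    return rows

# Usage: full dataset once, tiny compressed deltas afterwards.
old = {"sku1": {"price": 10}, "sku2": {"price": 20}}
new = {"sku1": {"price": 12}, "sku2": {"price": 20}, "sku3": {"price": 5}}
payload = encode(diff(old, new))
assert apply_delta(dict(old), payload) == new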
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries. Someone had written terrible SQL, then to 'optimise' it ran it in the background with all possible variable values and stored the results of the joins and aggregates in new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB (and rapidly climbing) to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations and the existing solution ran in factorial time. I worked out how to optimise it to run in n*n, which, believe it or not, made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what angular/react/vue aim for, but with higher (maximum) performance, including an in-memory database to back the UI that had layered, event-driven indexes and could handle referential integrity (an overlay on the database only revealing items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on autopilot, just cleaning up code as normal. Hundreds, thousands of optimisations. It's what makes my clock tick. -
Finished porting clientX to Linux, including dev laptops and desktops. The best part was getting to personally enjoy the moment I got to delete all IIS/Windows integration logic from the .Net core services.
Commit message:
"Windows... where we're going we don't need Windows. #CLD-15"2 -
I took a web scraper that took several hours, made it multithreaded and got it down to about 15 minutes. Pretty satisfying.
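The scraper itself isn't shown; a bare-bones Python sketch of the same move (wrap the per-URL work in a thread pool, since scraping is I/O-bound), with placeholder URLs:

import concurrent.futures
import urllib.request

URLS = ["https://example.com/"] * 5  # placeholder URL list

def fetch(url):
    # One unit of the original sequential work: download and measure a page.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return url, len(resp.read())

# The whole optimisation: run the I/O-bound fetches in parallel threads.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    for url, size in pool.map(fetch, URLS):
        print(url, size)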
-
A little late but whatever.
About half a year ago, I started working on setting up self-hosted (slippy) maps. For one, because of privacy reasons; for two, because it'd be under my own control and I could, with enough knowledge, be entirely in control of how it all works.
While the process has been going on for hours every day for about half a year (with regular exceptions), I'll briefly lay out what I've accomplished.
I started with the OpenMapTiles project and tried to implement it myself. This went well but there were two major pitfalls:
1. It was based on a Postgres database. This is fine, but when you want to have the entire world... the queries took insanely long (minutes, at lower zoom levels) and quite intimate Postgres/tooling knowledge was required, which I don't have.
2. Due to the long queries and such, the performance was so bad that the maps could take minutes to render, and if you'd want that in production... yeah, no.
After quite some time I finally let that idea sail and started looking into the MBTiles solution: generating SQLite databases of GeoJSON features. Very fast data serving, but the rendering can take quite some time.
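For context: an MBTiles file is just SQLite with a tiles table, so serving a tile is a single indexed lookup. A minimal Python sketch (the filename is made up; note the TMS row flip relative to the usual z/x/y URLs):

import sqlite3

def read_tile(mbtiles_path, z, x, y):
    # MBTiles stores rows in the TMS scheme, so the y coordinate from a
    # typical z/x/y URL has to be flipped.
    tms_y = (2 ** z - 1) - y
    con = sqlite3.connect(mbtiles_path)
    try:
        row = con.execute(
            "SELECT tile_data FROM tiles "
            "WHERE zoom_level=? AND tile_column=? AND tile_row=?",
            (z, x, tms_y),
        ).fetchone()
        return row[0] if row else None
    finally:
        con.close()

# Hypothetical usage: blob = read_tile("roads.mbtiles", 2, 1, 1)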
After some more months, I finally got the hang of it to the point that I automated 50-70 percent of the entire process. The one problem? It takes a shitload of resources and time to generate a worldwide mbtiles database.
After endless trial and error, I figured out that one can divide a 'render' (an MBTiles aka SQLite database) into multiple layers (one for building data, one for water, one for roads and so on), so I started doing renders that way.
Result? Styling became way easier and more logical, and one could pick specific data to display; only want to display the roads? It's way simpler this way. (Not impossible otherwise, but figuring out how that works... good luck.)
Started rendering all the countries, continents and such this way, and while this seemed like a great idea, the entire world is at 3-4 percent after about a month. And while 40-70 percent generates 10 times as fast, that's still way too slow.
Then, I figured out that you can fetch data per individual layer/source. Thus, I could render every layer separately which is way faster.
Tried that with a few very tiny datasets and bam, it works. (And still very fast).
So, now, I'm generating all layers per continent. I want to do it world based but figured out that that's just not manageable with my resources/budget.
On top of that, I'm working on an API which will have exactly the features I want/need! -
Simulating a couple million Elasticsearch searches and aggregations in 2h instead of 3 months... :)
-
Recently had to implement some microbenchmarks for a project. The first time I ran them, I thought they had gotten stuck. It turned out they hadn't, but the processing took forever.
Some smaller benchmarks revealed that the runtime was not scaling linearly with the input size, as one would expect for the problem. It turned out that the code was iterating over the input to find the corresponding entries in the output.
We changed the processing so that it creates the output in the same order as the input and just compares entries at the same position. With that, we were able to cut the runtime from a few hours to a few seconds.
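Roughly that change, sketched in Python with made-up records: a linear scan per entry versus a positional comparison.

# Before (quadratic): for every input entry, search the whole output for its match.
def compare_by_search(inputs, outputs):
    return [next(o for o in outputs if o["id"] == i["id"]) == i for i in inputs]

# After (linear): produce the output in input order, then compare positionally.
def compare_in_order(inputs, outputs):
    return [i == o for i, o in zip(inputs, outputs)]

inputs = [{"id": n, "value": n * n} for n in range(5)]
outputs = [dict(entry) for entry in inputs]  # stand-in for the benchmark results
assert compare_in_order(inputs, outputs) == [True] * 5
-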
Once found a bug in an outdated SDK we were using that made an endpoint take 10 seconds. After passing one extra param to the function from that SDK, it went down to 30ms.
-
Last year, we had a computer architecture class where we studied processor architectures like RISC, CISC, SIMD... The teacher was a nice person but didn't have much knowledge of the field. I read some of the Patterson & Hennessy book (Computer Organization and Design, iirc) and learned how to use OpenMP and MPI, and then in the last lab we were required to optimize matrix multiplication using 4 threads in OpenMP. The best students got a 4x speedup at best; meanwhile, I got a 16x speedup and showed the teacher how fast it was. She was really impressed lol
-
Our publish workflow had too many manual steps in it. Wrote a tool to publish a new version with one button click.
My colleagues thanked me with tears in their eyes 😄 -
A few weeks back we ported our PHP REST API into a couple of Go microservices.
Incredibly _satisfying_ job.
Requests went from 20+ seconds to ~100-300ms.
There was still one bottle-neck, though, because we had to use most of the old cluster-fork of a database (because no way I'll be able to fix all that in a week).
And ooh, next we're thinking of switching to gRPC. Man, we have the best jobs. -
I refactored a bash script written so badly that it had fewer lines of code even after I added a few new features.
Link: https://github.com/dasgeekchannel/... -
Not dev-related, but it's always the everyday schedule optimisations that are satisfying. If I need to be in various places throughout the day, change between different types of transportation, etc., I always think about the optimal route and time-planning such that I have the least overhead, visit the same place as few times as possible (usually my home, since I live far from most of my daily activities), take the shortest routes and am on time.
The same applies to taking public transportation in my hometown. There is no clear schedule ("arrival can vary between 10-15min", no app available to tell you about it in real time), yet by living there for so long, I figured out when certain buses/trams leave based on the ones that are already passing me and the time of the day. This way, I know which buses/trams to take and change and get where I need to be, without even having an app or a clear schedule (of course, unexpected things like buses catching on fire can always happen) -
The company I worked for had to do deletion runs of customer data (files and database records) every year, mainly for legal reasons. Two months before the next run they found out that the next year would bring multiple times the amount of objects, because a decade ago they had introduced a new solution whose data would be eligible for deletion for the first time.
The existing process was not able to cope with those amounts of objects and froze to death gobbling up every bit of RAM on the testing system. So my task was to rewrite the existing code and optimize API calls, and somehow I ended up multithreading the whole process. It worked and is most probably still in production today. 💨 -
Optimizing my RESTful API plugin, used in a CMS to make it headless, was extremely satisfying.
Thanks to caching and algorithm improvements, I brought the response times down by a factor of 150.
The caching also includes dependency tags, so it will automatically invalidate on changes to the dependent element.
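The plugin itself isn't quoted here; a toy Python sketch of tag-based invalidation, where each cache entry records the elements it depends on and changing an element evicts every entry tagged with it:

class TaggedCache:
    # Toy cache where each entry is tagged with the elements it depends on.

    def __init__(self):
        self._values = {}        # cache key -> cached response
        self._keys_by_tag = {}   # element tag -> set of cache keys using it

    def set(self, key, value, tags):
        self._values[key] = value
        for tag in tags:
            self._keys_by_tag.setdefault(tag, set()).add(key)

    def get(self, key):
        return self._values.get(key)

    def invalidate_tag(self, tag):
        # Called when an element changes: evict every entry that depends on it.
        for key in self._keys_by_tag.pop(tag, set()):
            self._values.pop(key, None)

cache = TaggedCache()
cache.set("/api/pages/home", {"title": "Home"}, tags=["page:home", "menu:main"])
cache.invalidate_tag("page:home")      # editing the page drops the cached response
assert cache.get("/api/pages/home") is None
-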
I wrote a whole article about it, and oh wow, it still exists. It was probably the first optimization I ever did in my life, and it was while I was learning SQL.
And writing an edu-tainment article aimed at total laymen as well as beginners was also fun.
http://swczdev.blogspot.com/2010/...
Sadly, Czech only. But... the English auto-translation actually looks readable:
https://translate.google.com/transl...
Long story short, though: a 4- or 5-table join going from 7 seconds before optimization to 0.08 seconds after. Both versions were written by me; the optimized one was written without any reading on how to optimize SQL, based purely on me actually stopping to think about how I could reduce the DB load with the little I knew about how SQL servers work.
The optimization made it about 99.9999422% more efficient, based on my improvised efficiency metric of how many rows the query retrieves and produces versus how many are thrown away at the end by the WHERE part of the query.
And that was also the day when my question of "what is there even to optimize in SQL?" was answered... by myself. -
Optimization concepts/patterns or instances?
For a pattern, it's gotta be any time I can take an O(n^2) and turn it into O(n), or literally anything better than O(n^2).
An instance would probably be the time we took an API method that returned a JSON list of dictionaries, CSV-style, and changed it into a dictionary with the uid as the key and the other info as key-value pairs in a sub-dictionary. So instead of:
[
  {
    "Name": name,
    "Info": info
  }
]
We now return:
{
  name: {
    "Info": info
  }
}
Which can, if done right, make your lookups O(1), which I love.
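A hedged Python sketch of that reshaping (the field names are just the ones from the example above):

rows = [
    {"Name": "alpha", "Info": {"role": "admin"}},
    {"Name": "beta", "Info": {"role": "user"}},
]

# Re-key the list by the unique name so lookups become dict access, i.e. O(1).
by_name = {row["Name"]: {"Info": row["Info"]} for row in rows}

assert by_name["alpha"]["Info"]["role"] == "admin"
-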
Fixing some terribly rushed code from a group assignment at the last minute. What could've taken hours to compute finished in under a second afterwards.
-
A program which was supposed to save a report from the DB to a file was running for a couple of minutes:
1) Despite the prefetch precompiler option, nothing was actually prefetched, because for every next line the fetch cursor was reopened with the condition WHERE some_val > prev_val.
2) Allocated an array of host variables to fetch 100 rows per call to the DB client API.
The program runs in under 3 seconds now.
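The original used embedded SQL with host-variable arrays; here is only a loose Python DB-API equivalent of the same fix (one open cursor, batched fetches instead of reopening the cursor per row), using an in-memory SQLite table as a stand-in:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE report (some_val INTEGER, line TEXT)")
con.executemany(
    "INSERT INTO report VALUES (?, ?)",
    [(i, "line %d" % i) for i in range(1000)],
)

# One open cursor, batched fetches, instead of reopening the cursor per row
# with "WHERE some_val > prev_val".
cur = con.execute("SELECT some_val, line FROM report ORDER BY some_val")
with open("report.txt", "w") as out:
    while True:
        rows = cur.fetchmany(100)   # 100 rows per round trip
        if not rows:
            break
        for _, line in rows:
            out.write(line + "\n")
-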
I recoded a REST endpoint that transfers large amounts of data from our db using a streaming response so it doesn't crash the server...
Pretty easy... Mostly just needed someone who knew wtf it was, or had a bit of curiosity and asked questions... rather than just keeping on doing what everyone else is doing...
Who hasn't seen logs updating in near real time in TeamCity, Jenkins... for the last 5yrs+... No one else ever wondered how it's done?
So yes, solving a production issue with old technology and being called a genius... I guess that's pretty satisfying?
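The original stack isn't named; a minimal Flask sketch of the idea (yield rows from a generator so the response streams instead of being built up in memory), with a placeholder data source:

from flask import Flask, Response

app = Flask(__name__)

def rows():
    # Placeholder for a server-side DB cursor; yields one CSV line at a time.
    for i in range(1_000_000):
        yield "%d,record-%d\n" % (i, i)

@app.route("/export")
def export():
    # Streaming response: the server never holds the full payload in memory.
    return Response(rows(), mimetype="text/csv")

if __name__ == "__main__":
    app.run()
-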
Most satisfying was reducing the time my CI/CD takes to build, test, verify compliance and deploy virtually anything I want to less than 10 minutes. From code to a running appliance, fully configured, and being absolutely certain it will work without any other modification. It used to be an hour.
Achieved this with lots of caching and parallel test runs.
The downside is that my development server is now being worked to the bone...