Search - "wk102"
-
A friend of mine wanted me to scale a 64x64px PNG image to the size of a Facebook cover without quality loss.
-
Biggest scaling challenge?
The imaginary scaling issues from clients.
Client: How do you cope with data that's a billion times bigger than our current data set? Can you handle that? How much longer will it take to access some data then?
I could then give a speech about optimizing internal data structures and access algorithms that work with O(log n) complexity, but that wouldn't help; non-tech people won't understand it.
And telling someone the system will be outdated and hopefully replaced by the time that amount of data is reached would be misinterpreted as "our system cannot handle it".
So the usual answer is: "No problem, our algorithms are optimized so they can handle any amount of data."
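A minimal sketch of the logarithmic-access point (Python, purely illustrative, not from the rant): with a sorted index, a lookup costs roughly log2(n) comparisons, so a billion records still means only about 30 probes.

import bisect

def lookup(sorted_keys, key):
    # Binary search: O(log n) probes instead of scanning every record.
    i = bisect.bisect_left(sorted_keys, key)
    if i < len(sorted_keys) and sorted_keys[i] == key:
        return i
    return None

keys = list(range(0, 2_000_000, 2))  # pretend index of a million records
print(lookup(keys, 424242))          # hit, ~20 comparisons
print(lookup(keys, 424243))          # miss, still ~20 comparisons
-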
Biggest scaling challenge I've faced?
Around 2006~2007 the business was in double-digit growth thanks to the eCommerce boom and we were struggling to keep up with the demand.
Upper IT management was more hardware-focused and always threw more hardware at the problem. At its worst, we had over 25 web servers (back then, those physical tall-rectangle boxes.. no rack system yet), each with a corresponding SQL server (replicated from our main SQL server).
Then business boomed again and projections called for 40 servers (20 web servers, 20 SQL servers) over the next 5 years. Hardware + software costs (they were going to have to tear down a wall in order to expand the server room) were going to be in the $$ millions.
Even though we were making money, the folks spending it didn't seem to care, but I knew this trajectory was not sustainable, so I started utilizing (this was 2007) WCF services and Microsoft's caching framework Velocity. Started out small with product lookup data (description, price, the simple stuff), and within a month I was able to demonstrate that the web site could scale on less than half of our current hardware infrastructure.
After many political battles (I've ranted about a few of those), the $$ won, and even with the current load we were able to scale back to 5 web servers and 2 SQL servers. When the business grew by double digits again, and again... we were still on the same hardware almost 5 years later. We only had to add another service server when the international side of the business started taking off.
The challenge wasn't the scaling itself; the challenge was dealing with individuals who resisted change.
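The caching approach described above (Velocity sitting in front of product lookups) is essentially a read-through cache. A rough sketch of the pattern in Python, purely illustrative; the class, key names and TTL are assumptions, not the original WCF/Velocity code:

import time

class ReadThroughCache:
    def __init__(self, load_from_db, ttl_seconds=300):
        self._load = load_from_db     # fall back to the database on a miss
        self._ttl = ttl_seconds
        self._store = {}              # key -> (expires_at, value)

    def get(self, key):
        hit = self._store.get(key)
        if hit and hit[0] > time.time():
            return hit[1]             # served from cache, no DB round trip
        value = self._load(key)       # miss: one database hit, then cached
        self._store[key] = (time.time() + self._ttl, value)
        return value

# Hypothetical usage: cache product description/price lookups for 5 minutes.
products = ReadThroughCache(lambda sku: {"sku": sku, "price": 9.99})
print(products.get("ABC-123"))

Every request after the first is served from memory, which is how a handful of web servers can do the work of dozens once the hot lookups stop hammering SQL.
-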
Scaling badly written code that was never designed to scale.
Ended up rewriting the entire shit from scratch. -
1. Fucking MySQL database clusters.
There's nothing fun about MySQL clusters. Sometimes they start producing deadlock errors for no apparent reason... well, there's probably a reason, but it's never a transparent, easy-to-find one.
What was even less fun is that those errors took down a Sentry server. When your error-log server goes down because it's effectively DDoSed by your own database error messages, it's time to rethink your setup.
2. Wiring up a large factory with $2 Arduino clones, each with a $2 ESP8266 WiFi chip, with various sensors for measuring the flow of chemical solutions (I wanted cheap real-time monitoring as an early warning system next to periodic sampling).
The scaling issue was getting over 500 streaming WiFi signals to work in a 55°C, moist, slightly corrosive atmosphere with concrete and steel everywhere, and getting it all into a single InfluxDB instance for analysis.
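For a sense of scale, each node's reading can reach InfluxDB as a single HTTP POST of line protocol. A minimal sketch of the collector side in Python, assuming InfluxDB 1.x and a database called "factory"; the measurement and tag names are made up, not from the rant:

import requests

def write_flow_reading(sensor_id, litres_per_min, influx_url="http://influx.local:8086"):
    # InfluxDB 1.x line protocol: measurement,tag=value field=value
    line = f"chem_flow,sensor={sensor_id} l_per_min={litres_per_min}"
    resp = requests.post(f"{influx_url}/write",
                         params={"db": "factory"},
                         data=line, timeout=2)
    resp.raise_for_status()

write_flow_reading("line-07", 12.4)

With 500 sensors streaming, batching several newline-separated points per POST keeps the single instance from drowning in tiny writes.
-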
Had to implement an API reporting on exabytes of data, scaling up eventually to a zettabyte.
I'd never even heard of these words until I started on the project. -
The biggest scaling challenge...
Aha, when I joined my first (startup) company as an IT guy, they had 2 rooms in a small corner of a commercial building.
When I left the company after 2 years, they had two floors of that building with 40 rooms, 5 different websites running in AWS, managed G Suite, and a lot more.
So yeah, keeping up with all of that was my biggest challenge. -
Inherited a simple marketplace website that matches job seekers and hospitals in healthcare. Typically, all you need for this sort of thing is a web server and a database with search.
But the precious devs decided to go microservices, in a container-per-service and DB-per-service fashion. They ended up with over 50 Docker containers and 50-ish databases. It was a nightmare to scale or maintain!
With 50 databases for a simple web application that clearly needs to share data, integration testing was impossible, data loss became common and very hard to pin down, debugging was a nightmare, and changing a service's schema was dangerous because dependencies were all tangled up.
The obvious thing was to scale down the infrastructure so we could scale up properly, in a resource-driven manner, rather than following the trend.
We made plans, but the CTO seemed worried about yet another architectural change, so he invested in more infrastructure services (Kubernetes, Zipkin, Prometheus, etc.) without any idea what problems those services would solve. -
Installing my company's microsystems architecture to run locally is a PITA because it is 60 GB of Docker containers. With my 256 GB MacBook, that's a scaling problem for the years to come.
-
Not planning for it at all, then seeing the service gain traction and traffic explode through the roof.
Not gonna make that mistake again! -
Laravel app, monolith structure, with 50+ models and 48 controllers (+20 reports); had 2 months to make it module-based, as they will add 10 or so more "modules" to it...
-
Started off with a prototype with about 20 backend data points per hour and 10 concurrent front-end users. Total dataset in the 10,000s.
Now about 5 backend data points per second and 500 concurrent front-end users. Total dataset in excess of 20,000,000. -
So I was assigned to improve an existing internal CMS application: they wanted the ability to add extra form applications and restrict them based on which department people belong to, as well as some other improvements like speed, since they mentioned it was slow in some instances.
What I found was that the original developer had decided not to use any existing framework and got creative by writing his own MVC framework. With about 300 users on this system, no caching of queries or views, not even PHP OpCache, and quite a few security holes, I was damn surprised this thing was running at all. I asked the original developer why he didn't use an open-source framework and he said he thought he'd create something and be the next Facebook.
It was a mammoth task to "improve" this system, but the main thing was that I took custody of the project and prevented him from making an even bigger mess of it. -
Upscaling a prod database which was running on an 8-year-old Dell desktop used as a server. It had about 2MB of RAM and an Intel Core 2 processor...
This was the day I learned a lot about querying the database as efficiently as humanly possible. -
So far? Probably Chelsea FC's raffle system, which I inherited after a dev left and left it in a right fucking mess.
-
As a student I got involved in a project written by another student who no longer worked at the company by the time I started. It was a web-based application written in plain JS and PHP. His attempt was really terrible and I had to rewrite almost everything. To sum it up, I deleted about 4k lines of code and replaced them with about 2k of my own. My version is scalable and it is much easier to build new functions and modules.
-
When I go to luxury restaurants and get one poot of food alongside two teaspoons of good-enough wine.
-
Going from working on small-scale games to a full-blown VR 4-person MO game, the jump from one to the other is pretty big. I seem to manage somehow though :D Taking it all one step at a time: making sure I don't repeat code in places that could share it, cleaning up classes so they're easier to work with when debugging, and building nice inspector tools so the people who create art/particles and such don't have a hard time with my weird naming conventions.
I could go on and on really xD I've learnt so much and I'm still learning, and I really have nothing to rant about these days, so I've gone back to lurk mode lol -
So far? Not realizing my load balancer was not set up for sticky sessions... and since this load balancer only existed in prod, not in QA or CAT, I found out the night we went to install it into prod... -
-
docker service scale serviceID=5
If only scaling my bank balance were as simple as Docker Swarm scaling! -
So we were making an Android application for our college festival... We decided to use Firebase as our primary service for "everything", and Firestore as the database, since it wouldn't require many web API calls and, mainly, it had a free plan.
We thought we would never hit the limits of the free plan... Needless to say, we started hitting the daily limits about a week before the fest. And it became more of an issue a few days before the fest, when we started hitting the limits within the first 4 hours of the day!!!
But we were lucky enough that the app held up on the day of the festival, lucky enough!! -
Scaled custom help desk software across 5 school districts. Way harder than it sounds when you realize that we needed a tunnel to get an external site working, and complex routing to get the servers to communicate with one another without exposing one district's network to the others. And I also made it auto-deploy on a successful CI test. The only thing that really worked perfectly on the first try was the database (CockroachDB). Everything else was a complete mess of DNS and routing rules.
-
When I was little I wanted to play my train game, but my sister had already occupied the PC by watching some Disney movie.
-
A fjord in the south of Greenland last summer. (Hiking rather than climbing, but it works for the purpose of the joke.) What was worse was coming back down again; it was raining.
-
Slowly increasing the users of our system from 5 to 15. The DAL is fucking garbage, so it gets slower and slower...
-
Scaling a 3D STL file so the tolerances stay the same but the model is scaled to about 300%... Fucking hard...
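The uniform scale itself is trivial; the pain the rant points at is that every fit-critical dimension (hole diameters, clearances, wall thicknesses) scales with it and has to be corrected afterwards. A rough sketch of the trivial part only, assuming the numpy-stl Python package and a hypothetical model.stl:

from stl import mesh  # numpy-stl package

m = mesh.Mesh.from_file("model.stl")  # hypothetical input file
m.vectors *= 3.0                      # uniform 300% scale of every triangle vertex
# Note: holes, clearances and wall thicknesses are now 3x as well, so any
# tolerance-critical feature still has to be re-modelled or offset by hand.
m.save("model_300pct.stl")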
-
Setting up an active/DR site that is not allowed to subscribe to any "cloud" services to facilitate scaling/auto failover. I've resorted to DNS-based failover, which updates the IP attached to the host and re-propagates the DNS records; it took 2 minutes to come back online... This would have been better if we were allowed to use cloud-based load balancers.
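For illustration, that kind of DNS-based failover boils down to a health-check loop that rewrites an A record with a short TTL. A minimal sketch in Python using dnspython; the zone, record name, IPs and the unauthenticated update are all assumptions, not the actual setup:

import time
import requests
import dns.update
import dns.query

PRIMARY, STANDBY = "203.0.113.10", "203.0.113.20"  # hypothetical site IPs

def healthy(ip):
    try:
        return requests.get(f"http://{ip}/health", timeout=3).ok
    except requests.RequestException:
        return False

def point_record_at(ip):
    # Dynamic DNS update: replace the A record for app.example.com (TTL 120s).
    update = dns.update.Update("example.com")
    update.replace("app", 120, "A", ip)
    dns.query.tcp(update, "198.51.100.53")  # authoritative nameserver

while True:
    point_record_at(PRIMARY if healthy(PRIMARY) else STANDBY)
    time.sleep(30)

Even with a low TTL, resolvers hang on to the old record for a while, which is where the "2 minutes to come back online" comes from.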
-
The radio tower climb on Zwift was pretty tough to scale.
Sorry for that one.
Anyone else use Zwift on here?