Search - "wk148"
-
When we found out MySQL utf8 isn't actually UTF-8..... it's a limited subset (utf8mb3) that only stores up to three bytes per character, so anything outside the Basic Multilingual Plane gets rejected..... and there is a separate "utf8mb4" that is actual UTF-8.
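The pain is easy to see without touching a database: real UTF-8 needs up to four bytes per character, and MySQL's legacy three-byte "utf8" can't hold the fourth. A minimal sketch in plain Python:

```python
# Why MySQL's legacy "utf8" (utf8mb3) rejects some text:
# it stores at most 3 bytes per character, while real UTF-8 needs up to 4.
for ch in ("é", "€", "💩"):
    size = len(ch.encode("utf-8"))
    verdict = "fits in utf8mb3" if size <= 3 else "needs utf8mb4"
    print(f"{ch!r}: {size} bytes -> {verdict}")
```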
-
A certain, reasonably sized company had a large in-house payment system handling all their client purchases, developed many, many years ago. All the devs who built it had left, and as it "just worked", nobody had seen fit to get anyone to update or maintain it since.
That was all fine until it suddenly (and completely) stopped working one sunny afternoon.
After paying a small fortune for one of the original devs to come back and look at it, it turned out the payment API it was built on had been retired. Deprecation warnings had been sent out 18 months prior, but they had simply been ignored: the secretary receiving them after the devs left had no idea what they meant.
-
That moment when you're asked to shift the entire project from Arduino to Raspberry Pi because your professor likes jazzed-up crap.
Arduino is so freaking reliable. A gem in a box.
Raspberry Pi is flashy, hollow shiz.
-
My tech debt meltdown is happening right now. We are releasing our huge microservice-based product next week with no automated testing of any sort. Our front-end clients are relatively DRY. No tests and DRY = can't change anything = hacks on top of hacks.
Why? The team lead won't listen to me and has beaten me down to the point where I don't care anymore. If it's broken, fuck it.
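The usual way out of that corner is a characterization test: record what the code does today so regressions at least become visible before anything gets refactored. A tiny pytest-style sketch, with a made-up `calculate_total` standing in for the real, untested code:

```python
# Characterization test sketch: pin down current behaviour of legacy code
# before changing it. `calculate_total` is a hypothetical stand-in.
def calculate_total(items, discount=0.0):
    # Pretend this is the gnarly legacy function nobody dares touch.
    return sum(price * qty for price, qty in items) * (1 - discount)

def test_calculate_total_keeps_current_behaviour():
    # Expected values recorded from the current implementation, not a spec.
    assert calculate_total([(10.0, 2), (5.0, 1)]) == 25.0
    assert calculate_total([(10.0, 2)], discount=0.5) == 10.0
```
-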
The switch from "Wild West" to ITIL uncovered so much bull crap. 20% of the people were doing 80% of the work, and some things were only being kept alive by sheer will. Once Changes and Service Requests were required, it became obvious how awful the environment truly was and how few people in the company knew how anything worked.
-
I worked for a company that was in entertainment news. Specifically rock music.
On the terrible night of the Bataclan terror attacks in Paris a few years ago, our site was one of the first to run the story (the main attack happened at a rock concert). Anyway, the tech debt that we'd been complaining about for months reared its head. The site got so much traffic that it was just fucked all night. We literally couldn't get the databases back up for about 7 straight hours.
-
Been bug-hunting for the last week or so on this import job that is suddenly running so slowly that it takes more than 24h and restarts on top of itself.
It used to run in anywhere from 4 to 8 hours, which was bad enough. You see, it ran on a timer scheduled by our main site, so our deployment window was determined by when this job finished, and if it didn't finish during work hours, then no deployment that day.
So we got the idea to move it to a separate service to eliminate that deployment window bottleneck. And now, seemingly unrelated, it is just running slow as shit.
There is a lot of bad design in the code, and we know we want to build a completely new solution. But we also absolutely need this import to run every day until a better solution can take over.
We've taken care of some of the most obvious problems that could cause the poor performance, but it's unclear whether that will be enough. And with a runtime of about a day and wild variation even in the most atomic partial imports, it's extremely tedious to test as well...
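When a job degrades like this for no obvious reason, the cheapest first step is usually to time and log each partial import so the slow step identifies itself instead of forcing another 24h rerun. A rough sketch (the step names are invented):

```python
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("import-job")

@contextmanager
def timed(step: str):
    # Log how long one partial import takes, even if it raises.
    start = time.monotonic()
    try:
        yield
    finally:
        log.info("%s took %.1fs", step, time.monotonic() - start)

# Hypothetical partial imports; the real job would call its own steps here.
def import_customers(): time.sleep(0.1)
def import_orders(): time.sleep(0.2)

with timed("customers"):
    import_customers()
with timed("orders"):
    import_orders()
```
-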
We rewrote the whole thing, except for iFraming some old pages in. We had to; the system was fucking awful and couldn't cope with any of the new mission-critical requirements.
Client didn't understand the scope. Our project leader somehow snuck it in and we worked on it for months. We were sure we'd be kicked off the whole project... Somehow things didn't crash and burn. How it didn't blow up defies rational thought and the laws of physics. The new system worked, the client was happy, and boss made a lot of money.
The lead dev worked weekends for what felt like an eternity; it really was his baby, and no one else at our company could have done it. It's where I finally learned how to do things the proper way: DDD, unit testing and TDD, architecture, building strong components on the front end, you name it. Before that I had a great nose for code smells and how not to do stuff, but now I got to see a proper system for the first time. It was glorious.
Then the lead dev left and the system degraded quite a bit because the new team didn't keep to the architectural patterns or general best practices. But we had a good run.
-
After reading Clean Code principles:
Before you rename your method, remember to use the refactor option
😌
-
For me, it was when I was on a team doing government work. We had an entire team devoted to deployments etc., which were handled via Ansible.
Ansible was fairly new at the time (~2015, they had just been bought by Red Hat), but the team was definitely doing a great job picking it up and creating install playbooks for _every_ piece of our distributed infrastructure (load balancers, application servers, queues, databases, everything).
I luckily left before stuff got too hairy, but last I heard they are more than 6 months behind schedule. They STILL can't get a reproducible install process with the Ansible playbooks! And it's all due to tech debt, i.e. not giving any time to fix things, so it's just band-aid after band-aid.
It's really sad to hear, because the system itself was pretty cool, completely horizontally scalable and definitely miles ahead of the program they've been using for the last 20 years.
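One cheap guard against that kind of drift is an idempotency smoke test: run the playbook twice and fail if the second pass reports any changes, since a reproducible install should converge. A rough sketch; the playbook and inventory names below are placeholders:

```python
# Idempotency smoke test sketch: a reproducible install playbook should
# report changed=0 on its second run. Paths below are placeholders.
import re
import subprocess

def run_playbook(playbook="site.yml", inventory="hosts.ini"):
    result = subprocess.run(
        ["ansible-playbook", "-i", inventory, playbook],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

run_playbook()            # first pass brings hosts to the desired state
recap = run_playbook()    # second pass should change nothing

changed = [int(n) for n in re.findall(r"changed=(\d+)", recap)]
assert changed and all(n == 0 for n in changed), f"not idempotent: {changed}"
```
-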
Give me a full 1-2 weeks of work on the project, and then I relate it to how big the project seems in comparison to previous projects 😄
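In code form, that relative-sizing gut feeling looks roughly like this (all names and numbers are invented for illustration):

```python
# Toy sketch of estimating by comparison with finished projects.
previous_projects_weeks = {
    "shop-backend": 12,   # actual time spent
    "admin-portal": 6,
}

# After 1-2 weeks of real work, judge how big the new project feels
# relative to something already shipped.
reference = "shop-backend"
feels_like = 1.5          # "about 1.5x the size of shop-backend"

estimate = previous_projects_weeks[reference] * feels_like
print(f"Rough estimate: {estimate:.0f} weeks")
```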