Search - "#git #github #fail"
-
--- GitHub 24-hour outage post mortem ---
As many of you will remember, GitHub fell over earlier this month and cracked its head on the counter top on the way down. For more or less a full 24 hours the repo-wrangling behemoth was presenting inconsistent data to users, with slow response times and failing requests during common user actions such as reporting issues and questioning your career choice in code reviews.
It's been revealed in a post-mortem of the incident (link at the end of the article) that DB replication was the root cause of the chaos, triggered when a failing 100G network link was replaced during routine maintenance. I don't pretend to be a rockstar-ninja-wizard DBA, but after speaking with colleagues who went a shade whiter when the term "replication" was used - it's hard to predict where a design decision will bite back and leave you untangling the web of lies and misinformation reported by the databases for weeks if not months after everything's gone a tad sideways.
When the link was yanked out of the east coast DC undergoing maintenance, GitHub's "Orchestrator" software did exactly what it was meant to do: it hit the "ohshi" button and failed over to another DC that wasn't reporting any issues. The hitch in the master plan was that when connectivity came back up, Orchestrator was unable to (un)fail-over back to the east coast DC because each cluster now contained data the other didn't have.
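For the morbidly curious, here's roughly how you'd see that kind of divergence yourself, assuming MySQL with GTID-based replication (the setup Orchestrator manages) - the hostnames and the GTID set below are made up for illustration:

$ mysql -h mysql-east-01 -e "SELECT @@GLOBAL.gtid_executed\G"   # transactions applied on the east cluster
$ mysql -h mysql-west-01 -e "SELECT @@GLOBAL.gtid_executed\G"   # transactions applied on the west cluster
# if neither set is a subset of the other, both clusters took writes the other never saw:
$ mysql -h mysql-west-01 -e "SELECT GTID_SUBSET('<east gtid set>', @@GLOBAL.gtid_executed)"   # 0 = east has writes the west is missing

When that last check fails in both directions, there's no clean cluster to fail back to - hence the weeks of untangling.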
At this point it's reasonable to assume that pants were turning funny colours - monitoring systems across the board started squealing, firing off messages demanding engineers rouse from the land of nod and snap back to a reality that was a bit more "on fire" than usual. A quick call to Orchestrator's API returned a result set that only contained database servers from the west coast - none of the east coast servers had responded.
Come 11pm UTC (about 10 minutes after the initial pant re-colouring) engineers realised they were well and truly backed into a corner: the site was flipped into "Yellow" status and internal mechanisms for deployments were locked out. 5 minutes later an Incident Co-ordinator was dragged from their lair by the status change and almost immediately flipped the site into "Red" status, a move I can only hope was accompanied by all the lights going red and klaxons sounding.
Even more engineers were roused from their slumber to help with the recovery effort. By this point hair was turning grey in real time - the fail-over DB cluster had been processing user data for nearly 40 minutes, and every second that passed made the inevitable untangling process exponentially more difficult. Not long after this GitHub made the call to pause webhooks and GitHub Pages builds in an attempt to prevent further data loss, causing disruption to those of us using GitHub as a way of kicking off our deployment processes (myself included - I had to SSH in and run a git pull myself like some kind of savage).
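If your deployments also hang off webhooks, savage mode is mercifully short - a rough sketch, with a made-up host and path:

$ ssh deploy@myserver.example.com 'cd /srv/myapp && git pull --ff-only origin master'   # --ff-only bails out instead of merging surprises

Not elegant, but it beats waiting for a many-hour backlog of webhooks to drain.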
Glossing over several more "and then things were still broken" sections of the post mortem: clever engineers with their heads screwed on the right way successfully executed what I can only imagine was a large, complex and risky plan to untangle the mess and restore functionality. GitHub was picked up off the kitchen floor and promptly placed in a comfy chair with a sweet tea to recover. The enormous backlog of webhooks and Pages builds was caught up with, and everything was more or less back to normal.
It goes to show that even the best laid plan rarely survives first contact with the enemy - in this case, a failing 100G network link somewhere inside an east coast data center.
Link to the post mortem: https://blog.github.com/2018-10-30-...
-
Well... I once accidentally deleted a classmate's entire assignment. Basically we were working together on one and we had the code on GitHub; I had named the repo after the module code.
He was having some weird git issues and I thought it would be easier to just delete and re-clone on his machine. You can probably see where this is going.
Me: rm -rf <DIR NAME> Enter
Him: wait, which folder did you just delete?
Turns out he had the repo cloned inside another directory with the EXACT SAME NAME, which also contained his previous assignment - the only copy of it in the entire universe (it was a group project and they did it all on his laptop with no source control, which I found hilarious).
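The moral: before nuking a directory you believe is just a clone, let git confirm it. A sketch of the 10-second check we should have done (directory name as in the story):

$ cd <DIR NAME>
$ git rev-parse --show-toplevel   # prints the root of the repo you're actually standing in
$ git remote get-url origin       # confirms it's the clone you think it is
$ cd .. && rm -rf <DIR NAME>      # only now, and only after reading the output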
It wasn't so bad since that assignment had already been submitted and graded, but a bit of a fail on both our parts. -
*cracks knuckles*
Boy was I happy to see this when I opened devRant up.
So for starters, more group projects are necessary. Many reasons why. To begin with, they allow for more complex programs than getting some input and printing some shit out. They also develop interpersonal skills (I hate people too, but when you go out to look for work you'll be with them, so better get used to it soon). If a platform like GitHub is used, it's easy to track what each person in the group actually did, so it should be fairly easy to discourage lazy asses.
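Checking who pulled their weight takes seconds once the project lives in git - a quick sketch (the author name and file path are placeholders):

$ git shortlog -sn --all                      # commit count per author, across all branches
$ git log --author="lazy.teammate" --oneline  # everything (or nothing) one person committed
$ git blame src/Main.java                     # line-by-line authorship of a single file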
Beyond that, stop giving us half-completed assignments and asking us to fill in a function/method. Yes, it will take longer. But one doesn't learn to program by doing the minimum required work; you've got to crash and burn a lot in order to git gud. So ffs, let us do all the work. We're like AI, we learn through reinforcement learning.
Stop giving us a spec to follow. We'll do plenty of that in the future; right now we need to make mistakes, not be held by the hand all the way. Let us do dumb shit so you can fail us and tell us our code is repulsive and that this other way was better. Explain why. That's how people learn - not by being told what each function should return, what can and can't be used, etc. And if you can't come up with a scenario in which what you're teaching is useful, then maybe you're not teaching us the right material.
I'll leave it at that for today... But I'll be back 😈 -
FUCK YOU GITKRAKEN
After all the suggestions in https://devrant.com/rants/1540091 I decided to give Gitkraken a try.
Here's the shitty experience you can expect:
1) It doesn't even ask you where to install it. Turns out, it spontaneously installs itself in "%LOCALAPPDATA%\gitkraken" - who the fuck installs software there??
2) It is "seamlessly integrated with GitLab", except the first time you open it you can only log in with your GitKraken or GitHub account, and NOT with a GitLab one. Just brilliant
3) After logging in, it spontaneously changes your global git username and email config, because fuck you that's why
4) If you have a repo on AWS CodeCommit with a remote that looks like "ssh://git-codecommit.us-east-2.amazonaws.com/...", *after the first push* it will spontaneously change it to "<user>@git-codecommit.us-east-2.amazonaws.com/bla/bla", causing future actions to fail. Because FUCK YOU, THAT'S WHY. (Undoing points 3 and 4 is sketched below.)
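For anyone bitten by points 3 and 4, a rough cleanup sketch - the name, email and repo path below are examples, not what GitKraken writes:

$ git config --global user.name    # inspect what GitKraken left behind
$ git config --global user.email
$ git config --global user.name "Your Actual Name"    # put your identity back
$ git config --global user.email "you@example.com"
$ git remote get-url origin        # inspect the rewritten CodeCommit remote
$ git remote set-url origin ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/myrepo   # restore the ssh:// form; repo path is an example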
And they expect people to pay for this shit, just to be able to manage more than one account at a time (and some "additional features" that are not even listed on the site)?
FUCK OFF, AND FUCK YOU FOR WASTING MY FUCKING TIME, HOW ABOUT I CHANGE YOUR FUCKING SETTINGS TO FUCK YOU -
When it's 10pm, you need to put your code on master, and GitHub is down. Oh wait, it's Friday night.
-
At school, during my first Java project, we had to make a simulation of a parking garage and what effects price changes would have, in order to find the optimal business model for some company.
At the project kick-off:
School: "We will be checking your code for plagiarism. If you use code from the internet, even if it's 2 lines, you need to mention the source. Otherwise you will fail this course."
We go on to do the project.
A friend of mine who was in another class sees a group presenting a 2-day-old version of my team's application. There's literally a credits button that displays the names of the people who worked on it in a popup.
Me: mentions to a teacher that my project was stolen.
They literally didn't even change the name - they pulled the entire repository from GitHub and handed it in.
The fucking teacher doesn't even check the code / git logs after I mentioned that the entire codebase was stolen from a public GitHub repository.
There was an endless mountain of proof to support my claim, such as our team members' names hard-coded in the code they handed in and about 500 commits from our accounts.
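Any of these one-liners would have settled it in under a minute - a sketch of what the teacher could have run (the repo URL and names are placeholders):

$ git clone https://github.com/our-team/parking-sim.git && cd parking-sim
$ git log --reverse --format='%ad %an <%ae> %s' | head   # the earliest commits: our names, our emails, our dates
$ git shortlog -sn                                       # ~500 commits, all from our accounts
$ grep -rn "TEAM MEMBER NAME" src/                       # the hard-coded credits, still present in their hand-in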
I will from now on NEVER EVER mention sources when I hand in code at school.