Search - "only utc"
-
--- GitHub 24-hour outage post mortem ---
As many of you will remember, Github fell over earlier this month and cracked its head on the counter top on the way down. For more or less a full 24 hours the repo-wrangling behemoth was serving inconsistent data to users, with slow response times and failing requests during common user actions such as reporting issues and questioning your career choice in code reviews.
It's been revealed in a post-mortem of the incident (link at the end of the article) that DB replication was the root cause of the chaos, after a failing 100G network link was replaced during routine maintenance. I don't pretend to be a rockstar-ninja-wizard DBA, but after speaking with colleagues who went a shade whiter when the term "replication" was used - it's hard to predict where a design decision will bite back and leave you untangling the web of lies and misinformation reported by the databases for weeks if not months after everything's gone a tad sideways.
When the link was yanked out of the east coast DC undergoing maintenance, Github's "Orchestrator" software did exactly what it was meant to do: it hit the "ohshi" button and failed over to another DC that wasn't reporting any issues. The hitch in the master plan was that when connectivity came back up at the east coast DC, Orchestrator was unable to fail back, because each cluster now contained data the other didn't have.
At this point it's reasonable to assume that pants were turning funny colours. Monitoring systems across the board started squealing, firing off messages to engineers demanding they rouse from the land of nod and snap back to a reality that was a bit more "on fire" than usual. A quick call to Orchestrator's API returned a result set that only contained database servers from the west coast - none of the east coast servers had responded.
Come 11pm UTC (about 10 minutes after the initial pant re-colouring) engineers realised they were well and truly backed into a corner. The site was flipped into "Yellow" status and internal mechanisms for deployments were locked out. Five minutes later an Incident Co-ordinator was dragged from their lair by the status change and almost immediately flipped the site into "Red" status, a move I can only hope was accompanied by all the lights going red and klaxons sounding.
Even more engineers were roused from their slumber to help with the recovery effort. By this point hair was turning grey in real time - the fail-over DB cluster had been processing user data for nearly 40 minutes, and every second that passed made the inevitable untangling process exponentially more difficult. Not long after this Github made the call to pause webhooks and Github Pages builds in an attempt to prevent further data loss, causing disruption to those of us using Github as a way of kicking off our deployment processes (myself included - I had to SSH in and run a git pull myself like some kind of savage).
Glossing over several more "and then things were still broken" sections of the post mortem: clever engineers with their heads screwed on the right way successfully executed what I can only imagine was a large, complex and risky plan to untangle the mess and restore functionality. Github was picked up off the kitchen floor and promptly placed in a comfy chair with a sweet tea to recover. The enormous backlog of webhooks and Pages builds was caught up with and everything was more or less back to normal.
It goes to show that even the best laid plan rarely survives first contact with the enemy - in this case, a failing 100G network link somewhere inside an east coast data center.
Link to the post mortem: https://blog.github.com/2018-10-30-...
-
this just happened a few seconds ago and I am just laughing at the pathetic site that is Facebook. xD
4 years ago:
So I was quite a noobie gamer/hacker (sort of) back then and I had a habit of having multiple gmail/fb accounts just for gaming - accounts through which I could log in all at once in the same poker room, so 4/5 players in the game were me, or just some extra accounts for Clash of Clans donations.
I had 7-8 accounts back then. One had a name that translated to "may the dead remain in peace"@yahoomail.com. It was linked to fb using the same initials. After some time, only this and 2 of my main accs were all I cared about. Even today when I feel like playing, I sometimes use those accs.
2 years ago:
My dad is a simple man and was quite naive about modern tech; he used to hang around with physical-button Nokia phones. But we had a business change - my father was now a partner in a restaurant, where his daily work involved a lot of sitting around and casual work. So he bought a smartphone to pass the time.
He now wanted to download apps, and wanted me to teach him. I tried a lot to get him his own acc, but he couldn't remember his login credentials.
So in the end I added one of my own fake IDs (maythedead...) so he could install from the Play Store, watch vids on YouTube and whatever.
The Actual Adventure starts now
Today, 1 hour ago:
I had completely forgotten about this incident, since my parents are now quite modern in terms of tech.
But today, out of nowhere, I received an email that someone had JUST CHANGED MY FB PASSWORD FOR ONE OF MY FAKE ACCS!?!??
What the hell. I know it was just a useless acc and I never even check my fb from any acc these days, but if someone could log into that acc, it's not very difficult to track my main accs, IDs, etc. So I immediately opened the fb security portal, and that's where the stupidity starts:
1) To recover your account they FUCKIN ASK FOR A PHYSICAL ID. Yeah, no email, no security question - you have to scan your driving license or passport to get back into your account. And where would I get a license for some person named "may the dead remain in peace"? I simply went back.
2) Tried another hack that I thought would work. Closed the fb help page, opened fb again, tried to log in with my old credentials; it said "old password has been changed, please enter new password". I clicked forgot password and they sent an OTP. I thought yes, I won, because the number and recovery mail ID were mine, so I received it.
When I entered the OTP, I was first sent to a password change page (woohoo, I really won! :)) but then it sent me again to the same fuckin physical ID verification page. FFFFFFFFFuck
3) I was sad and terrified that I got hacked. But 10 mins later a mail comes: "Your Facebook password was reset using the email address on Tuesday, April 10, 2018 at 8:24pm (UTC+05:30)."
I tried clicking the links attached, hoping that the password I changed (point <2>) had actually done something to the account. NADA, the account still needs a physical license to open :/
4) Lost, I just logged in to my main account and looked up my lost fake account. The fun part: my account has the display pic of my father?!!?!
So apparently my father wanted to try Facebook. He used the fake account I gave him to create one, fb showed him that this ID already had an fb account attached to it, and he accidentally changed my password. MY FATHER WAS THE HACKER THE WHOLE TIME xD
But the response from fb? "Well sir, if you want your virtually shitty account back, you first will have to provide us with all details of your bank transactions or your voter ID card, maybe Trump will like it" -
It sucks when your clients live in a different timezone and they start working and complaining about things just when you want to leave the office.
-
Good UTC night everyone (23:59)
which is the only way to say that I am going to stop devranting now without knowing where you live. -
These fucking calendar e-mails. Fuck. Right. Off.
They never display correctly in all clients.
If the meeting is at 1pm, just fucking say that. I don't want to pick through a load of shit only to find that the calendar app lied about UTC.
Just missed a sodding call due to this crap.
-
DAYLIGHT SAVING!!
Up to this point, I was indifferent to whether it should be kept or not. My sleep schedule is fucked and non-routine anyway, so one hour plus or minus doesn't make any difference.
I ran into a meeting scheduling problem where I have 2 timestamps in variable timezones and want to calculate the time difference. Both can have DST active. There is no algorithmic way to figure it out. I checked SO and pytz and it's just a list of hardcoded dates for when DST starts and ends. WHATHTEHELL.jpg
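For illustration, a minimal sketch (assuming Python 3.9+ with zoneinfo; the zone names and times here are made up) of the usual workaround - normalise both timestamps to UTC and let the bundled tz database handle the DST rules:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Two hypothetical meeting times, both in zones that observe DST.
ny = datetime(2018, 11, 3, 9, 0, tzinfo=ZoneInfo("America/New_York"))
berlin = datetime(2018, 11, 3, 9, 0, tzinfo=ZoneInfo("Europe/Berlin"))

# Converting both to UTC (or just subtracting aware datetimes) gives the
# real difference; the DST switchover rules come from the tz database -
# exactly that "list of hardcoded dates".
diff = ny.astimezone(ZoneInfo("UTC")) - berlin.astimezone(ZoneInfo("UTC"))
print(diff)  # 5:00:00 on this date; a week earlier, before EU clocks changed, it's 6:00:00
```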
Not only should we abolish DST, we should force the whole world onto UTC/Zulu. And those who refuse to adapt to UTC will be forced to work with plain integer epoch dates.
-
I had an interesting mystery the other day. I work in the UK, but I'm working remotely from the US for a while. First day, I made some changes, ran the tests and they failed. Weird part was the failing test was for a component I hadn't touched. I took a closer look, and realized it was a date off by several hours. The test was checking that a passed in date appears in the output. But it was creating the date by parsing a string. The library I was using defaults to local time, but the component uses UTC. So, I had inadvertently created a unit test that only passes when run from UTC. But I had never noticed before because my work is in that timezone. Yikes!
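A minimal repro of the same trap, with made-up names and the stdlib instead of whatever date library the original test used - the assertion only holds when the machine running it happens to be on UTC, unless the expected date is pinned to UTC explicitly:

```python
from datetime import datetime, timezone

def render_timestamp(dt: datetime) -> str:
    # Component under test: always renders in UTC.
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M")

# Buggy expectation: a string parsed without an offset is naive, and
# astimezone() treats naive values as *local* time, so this only passes
# when the test machine itself runs on UTC.
naive = datetime.fromisoformat("2018-11-01 15:30")
# assert render_timestamp(naive) == "2018-11-01 15:30"

# Fixed expectation: state the timezone explicitly so the test passes
# regardless of where it runs.
aware = datetime(2018, 11, 1, 15, 30, tzinfo=timezone.utc)
assert render_timestamp(aware) == "2018-11-01 15:30"
```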
-
Bug report: date_created is wrong by one day, but only in the afternoon. Gaaah! Why did nobody save the user's offset...
Or better yet, just normalize to UTC!?!?
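Something like this (a sketch with a hypothetical user zone, not the actual codebase) is all it takes - store an unambiguous UTC instant and only decide "which day" in the viewer's timezone:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# On write: store a timezone-aware UTC instant, never a bare local date.
date_created = datetime.now(timezone.utc)

# On read: convert to the viewer's zone (hypothetical here), so the
# calendar day it falls on is decided per user, not per server.
user_zone = ZoneInfo("Pacific/Auckland")
print(date_created.astimezone(user_zone).date())
```
-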
iAPPLIED CS UNIVERSITY, DAY 1 (2018-09-24)
11:00 UTC+3: Arrived at the secretary's office to complete my registration. I met quite a few people; I forgot some of their names. I spent some time over there, so I took the 13:00 class instead of the 11:00 one. It's still early, so we pick whichever one we want.
13:00: Procedural Programming at the computer lab. The computers were running Windows 8.1! 😱 I might connect to my laptop via RDP. It would be very cool. The course was about C, but the first session was just an introduction. We are going to use Code::Blocks. We were also shown the (HTTP-only) web platform where we log in with our passwords and submit our assignments. The professor was very nice, but this day at least was very boring. I was watching CodeMinkey cartoons, trying to solve AdLitterams.
18:00: Back for Applied Mathematics I, at the same computer lab. No lesson happened, because we have to learn the theory stuff first (every Friday I think). Back home.
Tomorrow is going to be a hard day... :wq
-
What the crap? Get this: I'm setting the timezone to UTC in PHP. All fine and dandy, except it gives me the wrong FUCKING date and time, like what is wrong with you xD I changed nothing since yesterday and yesterday it gave me the correct time xD
-
Quick Plesk config question...
Been getting open_basedir warnings in the WordPress logs, and frankly they're flooding the log right now. Sample below:
[24-Feb-2019 07:05:19 UTC] PHP Warning: file_exists(): open_basedir restriction in effect. File(/var/www/vhosts/webspacedomain.com/SiteInstallDirectory/wp-content/db.php) is not within the allowed path(s): (/var/www/vhosts/webspacedomain.com/:/tmp/) in /var/www/vhosts/webspacedomain.com/SiteInstallDirectory/wp-includes/load.php on line 397
Checking the settings for open_basedir in the domain's PHP settings, it's currently set to the following default value:
{WEBSPACEROOT}{/}{:}{TMP}{/}
By my read, that **should** be granting permission to the directory. I just checked it against the setting on the dev server (which doesn't report this error), and it's configured in the same manner. The only difference between the Dev environment and this one is that the Dev one lives in vhosts/webspacedomain.net/DEV instead of just vhosts/webspacedomain.net
Is there something I'm missing here?