One Thursday at noon,
Operations manager: (looking at his phone) What the..... something is wrong, I'm getting a bunch of emails about orders getting confirmed.
Colleague dev: (checks the main inbox that gets a copy of every email sent/received) Holy shit, all of our clients are getting confirmation emails for orders that were already cancelled/incomplete.
Me: immediately contacting Bluehost support, asking them to take the server down just so we can stop it. 600+ emails had already been sent and people kept getting them.
*calls the head of IT to explain the situation, since he's not in the office atm*

CEO: WTF is happening with my business? Is it a hacker?

*so we assume an intrusion: somebody messed with the site using a script or something*

All of us (devs) sit down with the code looking for vulnerabilities, trying to figure out how somebody was able to do that.

*After an hour*

So we went through almost every function in the code that could possibly cause this, but couldn't find anything that would break it.

Head, asking the ops manager: When did you actually start getting these?
Ops: Right after 12 pm.

*another hour passes*

Head: (checking the logs) So the site got updated right after the last commit too? And.... and..... WTF, who the hell wrote this shit in the last commit?
*This fuckin' query is missing the damn WHERE clause* 🤬

Me: me 😰

*long pause, everyone looking at me, and I couldn't look at anyone*
The shame... how could I do that?

Head: So it's you, not an intruder 😡
Further investigating: What the holy mother of #_/&;=568, why doesn't the cronjob check how old the order is? Why why why.

(So basically this is what happened: because of that query, all cancelled/incomplete orders got updated, damage already done. Making it worse, the cronjob then ran on all of them, sending clients emails, and that function updated some other values too. In short, the whole DB was fucked up.)
And now they knew who did it as well.
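(For anyone wondering what a missing WHERE clause actually does: here's a minimal sketch in Python with SQLite. The real schema and query from the story aren't shown anywhere, so the `orders` table and column names here are purely hypothetical.)

```python
import sqlite3

# Hypothetical orders table; the real schema isn't in the story.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("cancelled",), ("incomplete",), ("confirmed",)])

# The bug: an UPDATE with no WHERE clause touches EVERY row,
# so cancelled and incomplete orders get marked "confirmed" too.
conn.execute("UPDATE orders SET status = 'confirmed'")
print([row[0] for row in conn.execute("SELECT status FROM orders")])
# -> ['confirmed', 'confirmed', 'confirmed']

# What it should have been: scoped to a single order.
# conn.execute("UPDATE orders SET status = 'confirmed' WHERE id = ?", (order_id,))
```

And once every row said "confirmed", the cronjob happily emailed all of them, which is why an age check there (e.g. only processing orders updated in the last day) would have limited the blast radius.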

*The head, after some time cooling down, asked me for a solution to the mess I created.*

Me: I took a backup just a couple of days before; I can restore that with a script and fix the most recent 2 days manually. (The operations manager was already calling people and apologising on our behalf.)

Head: Okay, do it now.

Me: *in a panic* wrote a script to restore the records (checking what I wrote 100000000 times now), ran it... tested... all working... data restored.
After that I wrote an apology email; because of me the staff had to work a lot, and everything got so hectic just because of me.
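(Roughly what such a restore script can look like: a hedged sketch assuming SQLite and a hypothetical `orders`/`status` schema. It copies statuses back from the backup DB for every order that exists in the backup; anything newer than the backup, like the most recent 2 days in the story, still needs manual fixing. All file and table names are assumptions.)

```python
import sqlite3

def restore_statuses(live_path: str, backup_path: str) -> int:
    """Copy order statuses from a backup DB back into the live DB.

    Only orders present in the backup are touched; newer orders
    are left alone and must be repaired manually.
    Returns the number of rows restored.
    """
    live = sqlite3.connect(live_path)
    live.execute("ATTACH DATABASE ? AS backup", (backup_path,))
    cur = live.execute(
        """
        UPDATE orders
        SET status = (SELECT b.status FROM backup.orders AS b
                      WHERE b.id = orders.id)
        WHERE id IN (SELECT id FROM backup.orders)
        """
    )
    live.commit()
    restored = cur.rowcount
    live.close()
    return restored
```

Note the WHERE clause this time: it deliberately restricts the restore to rows the backup actually knows about.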

*At the end of the day the CEO, the head, and the staff accepted my apology and asked me to be careful next time. It actually taught me a lesson, and I always, always try to be more careful now, especially with queries. People are really good here, so that's how it goes* 🙂

  • Well done! You messed up and you fixed it. That's the kind of employee I know I would keep :) Mistakes can happen to anyone, but when shit hits the fan, not everybody can cope.
  • Make sure you guys have a proper post-mortem. Discuss how to prevent this in the future. Implement daily backups, or maybe even integrate a backup step into your deploy routine. Have a replica of the live DB for testing purposes and use MailHog to catch emails from that staging environment. Make sure all the crons are running there too, and so on.