Search - "rate limiting"
-
Hacking/attack experiences...
I'm, for obvious reasons, only going to talk about the attacks I went through and the *legal* ones I did 😅 😜
Let's first get some things clear / fun facts:
I've been doing offensive security since I was 14-15. Defensive since the age of 16-17. I'm getting close to 23 now, for the record.
First system ever hacked (Metasploit exploit): Windows XP.
(To be clear, at home through a pentesting environment, all legal)
Easiest system ever hacked: Windows XP yet again.
Time it took me to crack/hack into today's OSes (remote + local exploits; I don't remember which ones I used, by the way):
Windows XP: five seconds (damn, those Metasploit exploits are powerful)
Windows Vista: Few minutes.
Windows 7: Few minutes.
Windows 10: Few minutes.
OSX (in general): 1 hour (finding a good exploit took some time; got to root level easily afterwards. No, I don't remember how/what exactly, it was years and years ago)
Linux (Ubuntu): A month approx. Ended up using a Java applet through Firefox when that was still a thing. Literally had to click it manually xD
Linux (RHEL-based systems): Still not exploited. SELinux is powerful, motherfucker.
Keep in mind that I had a great pentesting setup back then 😊. I don't have that setup, nor do any of that anymore, since I love defensive security more nowadays and simply don't have the time.
Dealing with attacks and getting hacked.
Keep in mind that I manage around 20 servers (including VPSes and dedis), so I get the usual amount of SSH brute force attacks (thanks for keeping me safe, CSF!), which is about 40-50K every hour. Those IPs automatically get blocked after three failed attempts within 5 minutes. No root login allowed + RSA key login with freaking strong passphrases.
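For anyone copying this setup, the relevant sshd_config lines are roughly these (a minimal sketch; exact file location and defaults vary by distro, and the per-IP banning itself is CSF's job):

```
# /etc/ssh/sshd_config
PermitRootLogin no          # no root login allowed
PasswordAuthentication no   # key-based login only
PubkeyAuthentication yes
MaxAuthTries 3              # per-connection attempt cap; IP bans are handled by CSF
```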
linu.xxx/much-security.nl - All kinds of attacks: application attacks, brute force, and sometimes DDoS, though that is mostly mitigated at provider level, to name a few. So, except for my own tests and a few DDoSes on both those domains, nothing really threatening (as in, nothing seems to have fucked anything up yet).
How did I discover that two of my servers were hacked through brute forcers while no brute force protection was in place yet? I installed a barebones Ubuntu server onto both; they only come with system-default applications. I tried installing Nginx the next day, but port 80 was already in use. I always run 'pidof apache2' to make sure it isn't running, and thought I'd run that for fun even though I knew I hadn't installed it and it didn't come with the distro. It was actually running. Checked the auth logs and saw successful root logins - fuck me - reinstalled the servers and installed Fail2Ban. It bans any IP address which had three failed SSH logins within 5 minutes:
Enabled Fail2Ban -> checked iptables (iptables -L) literally two seconds later: 100+ banned IP addresses - holy fuck, no wonder I got hacked!
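For reference, a jail.local along these lines implements exactly that three-failures-in-5-minutes rule (a sketch; times are in seconds, and newer Fail2Ban versions also accept forms like "5m"):

```
# /etc/fail2ban/jail.local - ban after three failed SSH logins within 5 minutes
[sshd]
enabled  = true
port     = ssh
maxretry = 3
findtime = 300
bantime  = 3600
```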
There's one other type of attack I get regularly, but as long as it doesn't get much worse, I'll deal with it later :)
Dealing with different kinds of attacks:
Web app attacks: extensively testing everything for security vulns before releasing it into the open.
Network attacks: Nginx rate limiting / CSF rate limiting against, for example, SYN DDoS attacks (see the sketch after this list).
System attacks: anti brute force software (Fail2Ban or CSF), anti-rootkit software, AppArmor or (which I prefer) SELinux - which actually catches quite a few web app attacks as well - and REGULARLY UPDATING THE SERVERS/SOFTWARE.
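To make the network item concrete, here's roughly what the Nginx side of that rate limiting looks like (a sketch with placeholder numbers; note that true SYN floods are absorbed at kernel/firewall level - SYN cookies, CSF - while Nginx's limit_req handles HTTP request floods):

```
# goes in the http {} context: one shared-memory state zone, keyed per client IP
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location / {
        # allow short bursts, reject the rest (503 by default)
        limit_req zone=perip burst=20 nodelay;
        root /srv/web;
    }
}
```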
So yah, hereby :P
-
!security
(Less a rant; more just annoyance)
The codebase at work has a public-facing admin login page. It isn't linked anywhere, so you must know the URL to log in. It doesn't rate-limit you or prevent attempts after `n` failures.
The passwords aren't stored in cleartext, thankfully. But reality isn't too much better: they're salted with an arbitrary string and MD5'd. The salt is pretty easy to guess. It's literally the company name + "Admin" 🙄
Admin passwords are also stored (hashed) in the seeds.rb file; fortunately on a private repo. (Depressingly, the database creds are stored in plain text in their own config file, but that's another project for another day.)
I'm going to rip out all of the authentication cruft and replace it with a proper bcrypt approach, temporary lockouts, and rate limiting - and maybe some client-side hashing too, for added transport security.
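Since it's a Rails codebase, the idiomatic fix would be the bcrypt gem (or has_secure_password); the sketch below shows the same shape in Node, just to illustrate the approach - hash with a real work factor, compare on login, count failures toward a temporary lockout. All names and numbers here are made up:

```
const bcrypt = require('bcrypt'); // npm install bcrypt

const failures = new Map(); // naive in-memory lockout counters, keyed by username

async function hashPassword(password) {
  // cost factor 12; bcrypt generates and embeds a per-password salt itself
  return bcrypt.hash(password, 12);
}

async function verifyLogin(user, password) {
  const f = failures.get(user.name) || { count: 0, until: 0 };
  if (Date.now() < f.until) return false; // temporarily locked out

  const ok = await bcrypt.compare(password, user.passwordHash);
  if (!ok) {
    f.count += 1;
    if (f.count >= 5) { f.count = 0; f.until = Date.now() + 15 * 60 * 1000; } // 15 min lockout
    failures.set(user.name, f);
  } else {
    failures.delete(user.name); // reset on success
  }
  return ok;
}
```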
But it's Friday, so I must unfortunately wait. :<
-
I think I will ship a free open-source messenger with end-to-end encryption soon.
With zero maintenance cost, it’ll be awesome to watch it grow and become popular or remain unknown and become an everlasting portfolio project.
So I created a Heroku account with a free NodeJS dyno ($0/mo), set up UptimeRobot so the dyno doesn't fall asleep ($0/mo), plugged in MongoDB (around 700MB for free) and Redis for API rate limiting (30MB of RAM for free - enough if I purge the whole database every three seconds, and there'll only be API hit counters in it), and set up GitHub auto deployment.
So, the backend will be in NodeJS: cryptico will manage the private/public key stuff, Express will be responsible for the API, and I also decided to plug in Helmet and Sqreen, just to be sure.
Actual data will be stored in Mongo; rate-limit counters in Redis.
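The Redis part is just a fixed-window counter; a minimal sketch of that middleware (ioredis assumed, limits are placeholders):

```
const Redis = require('ioredis');
const redis = new Redis(process.env.REDIS_URL);

const WINDOW = 60;  // seconds
const LIMIT  = 100; // requests per window per IP

async function rateLimit(req, res, next) {
  const key = `hits:${req.ip}`;
  const hits = await redis.incr(key);              // atomic per-IP counter
  if (hits === 1) await redis.expire(key, WINDOW); // start the window on first hit
  if (hits > LIMIT) return res.status(429).send('Too many requests');
  next();
}

// app.use(rateLimit);
```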
The frontend will probably be implemented in React, hosted for free on GitHub Pages. I can also attach a custom domain there; let's see if I can attach it to Freenom garbage.
So, here we go, starting up a modern nosql-nodejs-react application completely for free.
If it blasts off, I'm moving to Clojure + Cassandra for the backend.
And the last thing: it'll be end-to-end encrypted. That means if it blasts off, it will probably attract the evil Russian government. They'll want me to give them the keys. It'll be impossible, you know. But they don't accept that answer. So if I suddenly stop posting here, please tell my girl that I love her and that I'm probably dead or captured.
-
I've found and fixed every kind of "bad bug" I can think of over my career, from allowing negative financial transfers to weird platform-specific behaviour. Here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years and by some miracle the system still worked and had services running on 500+ POS servers in large retail stores, uploading thousands of transactions each second. To stay ahead of any trouble from this increased traffic, we decided to add a load balancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever). We made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going berserk - calls were coming in about store servers irreparably crashing all over the country, taking all the tills offline and forcing stores to close their doors midday. It was bad, and we couldn't conceive how it could possibly be us or our software to blame.
Turns out we made the local service write any web service errors to a log file upon failure, for debugging purposes, before retrying - a perfectly sensible thing to do, if I hadn't forgotten to check the size of or clear the log file. In about 15 minutes of downtime, each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
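The missing guard was a one-liner in spirit: cap the log before appending. A hypothetical Node-flavored sketch of the idea (the real thing was a Windows POS service; all names here are made up):

```
const fs = require('fs');
const LOG = 'service-errors.log';
const MAX = 10 * 1024 * 1024; // 10 MB cap

function logError(msg) {
  try {
    // truncate (or rotate) once the log hits the cap instead of growing forever
    if (fs.existsSync(LOG) && fs.statSync(LOG).size > MAX) {
      fs.truncateSync(LOG, 0);
    }
    fs.appendFileSync(LOG, `${new Date().toISOString()} ${msg}\n`);
  } catch (_) {
    // never let logging itself take the service down
  }
}
```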
#2 - Hardest to find
This was a true "Nessie" bug... We had a single codebase powering a few hundred sites. Every now and then the web server would spontaneously die and vomit a bunch of SQL statements and sensitive data back to the user, causing huge concern, but I could never remotely replicate the behaviour - until 4 years later, when it happened to one of our support staff and I could pull out their network & session info.
Turns out, years back when the server was first set up, each domain was added as an individual "Site" in IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often two users of different sites would end up sharing a session ID, causing the server to promptly implode on itself.
#3 - Most elegant fix
Same bastard IIS server as #2. The codebase was the most unsecure, unstable travesty I've ever worked with - SQL injection vulns in EVERY URL, SQL statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending bluepill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi-un-compromised state, which was an acceptable solution for the business's uptime... until we were DDoSed for 5 days straight.
My hands were tied and there was no way to mitigate it except stopping individual sites as they came under attack and starting them again after it subsided... (for some reason they seemed to be targeting by domain instead of IP). After 3 days of doing this manually, I was given the go-ahead to use any resources necessary to make it stop, and especially since it was IIS6, I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 VM running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they got back a Linux user agent). To this day I marvel at this miracle $5 fix.
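For the curious, that $5 box was running something in this spirit (a reconstructed sketch, not the original config; the fail2ban plugin is omitted and all numbers are placeholders):

```
# http {} context
proxy_cache_path /var/cache/nginx keys_zone=edge:10m max_size=1g inactive=10m;
limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

server {
    listen 80;
    location / {
        limit_req zone=perip burst=10 nodelay;        # shed the flood
        proxy_cache edge;
        proxy_cache_valid 200 5m;                     # serve hot pages from cache
        proxy_cache_use_stale error timeout updating; # keep serving if the backend chokes
        proxy_pass http://10.0.0.2;                   # the poor IIS6 box behind it
    }
}
```
-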
---WiFi Vision: X-Ray Vision using ambient WiFi signals now possible---
“X-Ray Vision” using WiFi signals isn’t new, though previous methods required knowledge of specific WiFi transmitter placements and a connection to the network in question. These limitations made WiFi vision an unlikely security breach, until now.
Cybersecurity researchers at the University of California and University of Chicago have succeeded in detecting the presence and movement of human targets using only ambient WiFi signals and a smartphone.
The researchers designed and implemented a 2-step attack: the 1st step uses statistical data mining of standard off-the-shelf smartphone WiFi measurements to “sniff” out WiFi transmitter placements. The 2nd step involves placing a WiFi sniffer to continuously monitor WiFi transmissions.
Three proposed defenses against the WiFi vision attack are geofencing, WiFi rate limiting, and signal obfuscation.
Geofencing, or reducing the spatial range of WiFi devices, is a great defense against the attack. For all its advantages, however, geofencing is impractical and unlikely to be adopted by most, as even the simplest geofencing tactic would heavily degrade WiFi connectivity.
WiFi rate limiting is effective against the 2nd-step attack, but not against the 1st-step attack. This is a simple defense to implement, but because of the ubiquity of IoT devices it is unlikely to be widely adopted, as it would reduce the usability of such devices.
Signal obfuscation adds noise to WiFi signals, effectively neutralizing the attack. This is the most user-friendly of all the proposed defenses, with minimal impact on user WiFi devices. The biggest drawback of this tactic is the increased WiFi bandwidth consumption; still, compared to the downsides of the other defenses, signal obfuscation remains the most likely to be widely adopted and optimized against this kind of attack.
For more info, please see the journal article linked below.
https://arxiv.org/pdf/...
-
We got DDoS attacked by some spam bot crawler thing.
Higher ups called a meeting so that one of our seniors could present ways to mitigate these attacks.
- If a custom, "obscure" header is missing (from API endpoints), send back a basic HTTP challenge. Deny all credentials.
- Some basic implementation of rate limiting on the web server
We can't implement DDoS protection at the network level because "we don't even have the new load balancer yet and we've been waiting on that for what... Two years now?" (See: spineless managers don't make the lazy network guys do anything)
So now we implement security through obscurity and DDoS protection... using the very same machines that are supposed to be protected from DDoS attacks.
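The first bullet boils down to a couple of lines on the web server; an Nginx-style sketch (the header name and upstream are made-up placeholders):

```
location /api/ {
    # challenge anything missing the agreed "obscure" header, deny all credentials
    if ($http_x_internal_key = "") {
        return 401;  # a real Basic challenge would also send a WWW-Authenticate header
    }
    proxy_pass http://backend;
}
```
-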
It only took us 5 hours to debug. It's fine. I'm not mad that it was just rate limiting. I'm fine. We're fine. It's fine.
-
Hey there!
So during my internship I learned a lot about Linux, Docker and servers, and I recently switched from shared hosting to my own VPS. On this VPS I currently have one Nginx server running that serves a static ReactJS application. This is temporary: I SFTP-ed the build files to the server and added a config file for SSL, ciphers and dhparams. I plan to change it later to a Next.js application with a CI/CD pipeline etc. I also added a 'runuser' that owns the /srv/web directory in which the webserver files are located. SSH has passwords disabled and my private keys have passphrases.
Now that it's been running for a few days, I've noticed a lot of requests from botnets trying to access phpMyAdmin and admin panels on my server, which gave me quite a scare. Luckily my website does not have a backend, and I would never expose phpMyAdmin like that if it did.
Now my question is:
Do you guys know any good articles, or have tips and tricks, for securing my server and future projects? Are there any good practices that I should absolutely read and follow? (Like not exposing server details, PHP version, rate limiting, etc.) I really want to move forward with my quest for knowledge and feel like I should have a good basis when it comes to managing a server, especially with the current privacy laws in place.
Thanks in advance for enduring my rant and infodump 😅
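Not a full answer, but a few of the basics from that question expressed as Nginx directives (a starting point, not an exhaustive hardening guide):

```
# e.g. /etc/nginx/conf.d/hardening.conf
server_tokens off;                 # don't advertise the Nginx version
ssl_protocols TLSv1.2 TLSv1.3;     # drop legacy TLS
add_header X-Content-Type-Options nosniff always;
add_header X-Frame-Options DENY always;
add_header Strict-Transport-Security "max-age=31536000" always;
# and once PHP enters the picture: expose_php = Off in php.ini hides the PHP version
```
-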
Forgot to change the rate-limiting code in my API after development. No unit tests... because who really needs those, right? 🤦♂️🙅♂️🤷♂️ lolololol
Long story short, the API eventually went to production and stopped working almost immediately. Rate limiting was set to 2000 requests per 1-hour period. Not my finest moment... fml 🤦♂️
-
I get an email about an hour before I get into work: our website is 502'ing and our company email addresses are all spammed! I log in to the server and test whether static files (served separately from the site) work (they do). This means my upstream proxied PHP-FPM process was fucked. I killed the daemon, checked the web root for sanity, and ran it again. Then I set up rate limiting. Who knew such a site would get hit?
Some fucking script kiddie set up a proxy, ran Scrapy behind it, and crawled our site for DDoS-able URLs - even out of forms. I say script kiddie because no real hacker would hit this site (it's minor tourism in New Jersey), and the crawler was too advanced for Joe Schmoe to write. You're no match for well-tuned rate limiting, asshole!
-
I want the devRant feed to be filled with quality rants too, but 1 rant/hour is a bit too little!
Like 5 or so would be better?
-
I've been wondering about renting a new VPS to get all my websites sorted out again. I am tired of shared hosting, and I am able to manage a VPS myself, as I have in the past.
With so many great people here, I tried to put together some of the best practices and resources on how to handle the setup and configuration of a new machine, and I hope this post may help someone while we gather the best know-how in the comments. Don't be scared by the lengthy post, please.
The following tips are mainly from @Condor, @Noob and @Linuxxx, and some others were gathered from the webz. Thanks to @Linux for recommending the Vultr VPS. I would appreciate further feedback from the community on how to improve this and/or change anything that seems incorrect or should be done a better way.
1. Clean install CentOS 7 or Ubuntu (I am used to both; do you recommend others? Why?)
2. Install existing updates
3. Disable root login
4. Disable password for ssh
5. RSA key login with strong passwords/passphrases
6. Set correct locale and correct timezone (if different from default)
7. Close all ports
8. Disable and delete unneeded services
9. Install CSF
10. Install knockd (is it worth it at all? Isn't it security through obscurity?)
11. Install Fail2Ban (worth to install side by side with CSF? If not, why?)
12. Install ufw firewall (or keep with CSF/Fail2Ban? Why?)
13. Install rkhunter
14. Install anti-rootkit software (side by side with rkhunter?) (SELinux or AppArmor? Why?)
15. Enable Nginx/CSF rate limiting against SYN attacks
16. For a server to be public, is an IDS / IPS recommended? If so, which and why?
17. Log Injection Attacks in Application Layer - I should keep an eye on them. Is there any tool to help scanning?
If I want to have a server that serves multiple websites, would you add/change anything to the following?
18. Install Docker and manage separate instances with a Dockerfile-powered base image running the following? Or should I keep everything in one main installation?
19. Install Nginx
20. Install PHP-FPM
21. Install PHP7
22. Install Memcached
23. Install MariaDB
24. Install phpMyAdmin (On a specific port? Any recommendations here?)
I am sorry if this is somewhat lengthy, but I hope it may improve over time and become a good starting guide for a new server setup (and eventually a repo). Feel free to contribute in the comments.
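To make items 18-24 concrete, here's a minimal docker-compose sketch of that stack (image tags, credentials and paths are placeholders, not recommendations):

```
version: "3.8"
services:
  nginx:
    image: nginx:stable
    ports: ["80:80", "443:443"]
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./srv/web:/srv/web:ro
    depends_on: [php]
  php:
    image: php:8-fpm
    volumes:
      - ./srv/web:/srv/web
  memcached:
    image: memcached:alpine
  mariadb:
    image: mariadb:10
    environment:
      MARIADB_ROOT_PASSWORD: change-me
    volumes:
      - db-data:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin:latest
    ports: ["127.0.0.1:8080:80"]  # localhost only; reach it over an SSH tunnel
    environment:
      PMA_HOST: mariadb
volumes:
  db-data:
```
-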
That's top-notch design.
All actions happening on the page go to one endpoint. Removing old trusted computers, changing the password, changing 2FA - you name it.
Now, if you want to remove all old trusted devices, you cannot remove them all at once; there is no button for it. So you click one after the other. And then it stops working. OK, then do the normal password rotation. Hmm, the button shows a loading spinner and then nothing happens.
Looking into the browser console:
- All requests go to /myaccount/security/graphql
- All requests get a 429 Too many requests
- Even if you just click a panel, it tracks the action to the graphql endpoint. Or at least it tries to, because even that gets shot down with a 429
Pretty dumb, eh? Must be some small shitty website. It's not. It's fucking PayPal.
-
I first try to figure out why I really want to build this and (if the project is intended that way) why someone would use it.
Then I strip the idea down to its bare minimums so I know what I should build for it to be of any value.
And then I start building until I no longer think it's worth working on the project.
For instance:
I am kind of surprised that, in a world where cloud and APIs are becoming more and more central, there isn't really a commonly accepted, flexible API management platform.
There are some cloud-based platforms out there that can be configured through some interface, but why is it like that? Surely you aren't going to deploy multiple versions of your core with different platforms, right?
That's where my latest project comes in. I want to create an on-prem API management platform which you configure to work with your API during development. Then you can deploy it to any infrastructure alongside your core API.
This way you:
- are not bound to a specific cloud
- don't have to worry about security and firewalls
- get user management and rate limiting for free
I will probably create a collab for this once the platform is mature enough.
-
Bullshittery continues. This time around it's absolutely innocent: clamav is the root cause. For once not an incompetent idiot, but a piece of software. IDK if that makes me happy or upset.
So our email server that I configured and took care of died. RIP. Damn, better put it back together ASAP. So I'm under pressure, while still pissed at everything I ranted about before (actually, my last two rants were throttled - all of that happened within the past 60 minutes, but devRant rate limiting...), and I start auditing logs. You can imagine, we kinda need it NOW, and it's the second time this month that clamav has pulled stunts and the MTA (properly) refuses to work without antivirus. So, pressurized, I look at the logs to see what the fuck went wrong.
clamav daemonize() failed - cannot allocate memory
Hmm. Interesting, but sounds like bullshit. I know the server is quite micro, because they wanted to save on costs as much as possible, but it has well over half a gig of free RAM just before it crashes (like 800MB) with that message. Is it allocating almost a gig in one call or what? I looked carefully at trusty htop while it was starting, and indeed, it suddenly just dies with quite a bit of RAM free - almost as much as it already weighs. And I remember booting it up when I was configuring it, and it had a fair bit of headroom.
Google, help me, friend... Okay, great: so apparently at some point clamav loads the virus DB into RAM (dafuq?) and then forks, which causes a spike of 2x the RAM usage that is then immediately freed up.
Great, that sounds like a great design decision... At least now I know I can just slap on a swap file, restart it, and call it a day.
It worked; the swap file is almost empty (15 megs used, 900 megs of RAM free, whatever).
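For reference, the whole fix is the standard swap-file dance (sizes are placeholders):

```
fallocate -l 1G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
```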
That leaves me wondering: who figured it was a good idea to load the DB into RAM? It pretty much means clamav will eat a bit more RAM with each virus DB update, and that millisecond "double RAM" spike will confuse innocent people who just wanted to run clamav - it worked for a *long period of time* and now crashes without warning, without any changes to the configuration.
Maybe there is a logical explanation. I want to know it.
-
Hi fellow devRanters, I need some advice on how to detect web traffic coming from bad/malicious bots and block them.
I have the ELK (Elastic) stack set up to capture the logs from the sites, and I have already blocked the ones that are obviously bad (bad user agents, IP addresses known for spamming, etc). I know you can tell by looking at how fast/frequently they crawl the site, but how would I know I'm blocking the ones causing the malicious, non-human traffic? I am not sure if I should block access from other countries, because I think the bots are local.
I am lost; I don't know what else I can do - I can't use rate limiting on the sites, and I can't sign up for a paid service because management wants everything for the price of peanuts.
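Since the logs are already in Elastic, surfacing the noisiest clients is one aggregation away - something like the query below (the index pattern and field names such as clientip depend on your mapping); then eyeball the top talkers' user agents and request patterns before blocking anything:

```
GET weblogs-*/_search
{
  "size": 0,
  "query": { "range": { "@timestamp": { "gte": "now-1h" } } },
  "aggs": {
    "top_clients": {
      "terms": { "field": "clientip.keyword", "size": 20 }
    }
  }
}
```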
Rant:
Someone asked why I can't just read through the logs (from several mid-to-large-scale websites) and pick out the baddies.
*facepalm* Here are the gigabytes of log files.