Search - "sql attack"
-
A month ago...
Me: when are you going to complete the report?
Friend: we can do it in minutes
Me: you can't just Ctrl+C and Ctrl+V, there's a plagiarism check
Friend: we have spin bot
Me: then do it now. What if something goes wrong? You can join me.
Friend: just chill
Now ...
Me: done with report
Friend: feeding it to spin bot!
Feeds text related to database security....
Spin bot:
Garbage collector == city worker
SQL statements == SQL explanation
SQL queries == SQL interrogation
SQL injection == SQL infusion
Attack == assault
Malicious == noxious
Data integrity == information uprightness
Sensitive == touchy
.....
Me: told you so...
**spin bot == article rewriter
-
Over my career I've found and fixed just about every kind of "bad bug" I can think of, from allowing negative financial transfers to weird platform-specific behaviour. Here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years and by some miracle the system still worked, with services running on 500+ POS servers in large retail stores uploading thousands of transactions each second. To stay ahead of any trouble from the increased traffic, we decided to add a load balancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever). We made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going berserk - calls were coming in about store servers irreparably crashing all over the country, taking all the tills offline and forcing stores to close their doors midday. It was bad, and we couldn't conceive how we or our software could possibly be to blame.
Turns out we had made the local service write any web service errors to a log file upon failure for debugging purposes before retrying - a perfectly sensible thing to do, if I hadn't forgotten to cap the size of the log file or ever clear it. In about 15 minutes of downtime, each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
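The fix is embarrassingly small in hindsight. A rough sketch of the idea in Python (paths and limits made up, and the real service obviously wasn't written like this): cap the error log so a retry loop can never eat the disk.

# Size-capped error logging: a rotating handler keeps at most
# maxBytes * (backupCount + 1) bytes on disk, however long the outage lasts.
import logging
import time
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("pos-uploader")          # hypothetical name
logger.setLevel(logging.ERROR)
logger.addHandler(RotatingFileHandler(
    "upload_errors.log",                            # hypothetical path
    maxBytes=5 * 1024 * 1024,                       # 5 MB per file
    backupCount=2,                                  # two rolled-over copies, then the oldest is dropped
))

def upload_with_retry(batch, send):
    while True:
        try:
            return send(batch)
        except Exception:
            # Logged, bounded, retried - the log can no longer fill the drive.
            logger.exception("upload failed, will retry")
            time.sleep(30)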
#2 - Hardest to find
This was a true "Nessie" bug... We had a single codebase powering a few hundred sites. Every now and then the web server would spontaneously die and vomit a bunch of SQL statements and sensitive data back to the user, causing huge concern, but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff and I could pull out their network & session info.
Turns out that years back, when the server was first set up, each domain was added as an individual "Site" on IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often 2 users of different sites would end up sharing a session id, causing the server to promptly implode on itself.
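If you want to see why that's fatal, here's a tiny Python sketch of the failure mode (all names made up, the real thing was IIS session state on a shared path): one shared session store, many sites, and nothing tying a session id to the site that issued it.

# One shared session store keyed only by session id - which is exactly what a
# shared session path gives you. Tiny id space just to make the collision obvious.
import random

shared_store = {}

def new_session(site, user):
    sid = random.randrange(100)      # stand-in for ids that can collide across sites
    shared_store[sid] = {"site": site, "user": user, "basket": ["not for other sites"]}
    return sid

def load_session(sid):
    # Nothing checks that the session was issued by the site now reading it.
    return shared_store.get(sid)

sid = new_session("site-a.example", "alice")
# A visitor on a completely different site turns up with the same id:
print("site-b.example sees:", load_session(sid))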
#3 - Most elegant fix
Same bastard IIS server as #2. The codebase was the most insecure, unstable travesty I've ever worked with - SQL injection vulns in EVERY URL, SQL statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending bluepill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi un-compromised state, which was an acceptable solution for the business for uptime... until we were DDoSed for 5 days straight.
My hands were tied and there was no way to mitigate it except stopping individual sites as they came under attack and starting them again after it subsided (for some reason the attacks seemed to target by domain instead of IP). After 3 days of doing this manually I was given the go-ahead to use any resources necessary to make it stop, and since it was IIS 6 I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 VM running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they were now getting Linux-looking responses back). To this day I marvel at this miracle $5 fix.
-
Running a wild UPDATE statement against an inventory database. It was chaotic and didn't have backups... yeah I know 😅 the backup service had died god knows when and no one noticed. I was only new at the time, so #notMyFault
UPDATE stock_on_hand = 0; WHERE id IN(1,2,3,4,5,88,972,7388);
# rows affected 1,234,567,890
> I think I almost died inside.
Oh the fun that mess was to clean up.
The positive outcome of this was that we had backups working again not long after, and the inventory counts were accurate after that stock take.
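(The statement above is paraphrased, by the way - the killer was presumably that stray semicolon ending the UPDATE before the WHERE, so every row matched. The guard that would have saved me is tiny; a sketch with Python's sqlite3 standing in for the real inventory DB, table and ids made up: run the UPDATE in a transaction and refuse to commit if the row count isn't what you expected.)

# Run the UPDATE in a transaction, check the rowcount, and only commit when it
# matches the number of ids you meant to touch. (sqlite3 stand-in; names made up.)
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (id INTEGER PRIMARY KEY, stock_on_hand INTEGER)")
conn.executemany("INSERT INTO stock VALUES (?, 7)", [(i,) for i in range(1, 10001)])

target_ids = [1, 2, 3, 4, 5, 88, 972, 7388]
placeholders = ",".join("?" for _ in target_ids)

try:
    with conn:  # transaction: commits on success, rolls back on exception
        cur = conn.execute(
            f"UPDATE stock SET stock_on_hand = 0 WHERE id IN ({placeholders})",
            target_ids,
        )
        if cur.rowcount != len(target_ids):
            # A missing (or semicolon-terminated) WHERE shows up here as a huge rowcount.
            raise RuntimeError(f"expected {len(target_ids)} rows, got {cur.rowcount}")
except RuntimeError as err:
    print("rolled back:", err)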
-
A great and very common web attack is known as 'SQL injection'.
So if I am using MongoDB, does that become 'NoSQL injection'?
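More or less, yes. Mongo won't parse your strings into SQL, but if client-supplied structures go straight into a query filter you get operator injection instead. A rough Python sketch (hypothetical names and payloads):

# "NoSQL injection": the attacker injects query operators instead of quoted SQL.
import json

def build_login_filter(raw_body):
    body = json.loads(raw_body)
    # Client-controlled values dropped straight into the filter...
    return {"username": body["username"], "password": body["password"]}

# Normal login:
print(build_login_filter('{"username": "alice", "password": "hunter2"}'))

# Attack: "password" arrives as an operator object, so the filter matches any
# document whose password is not the empty string - i.e. it bypasses the check.
print(build_login_filter('{"username": "alice", "password": {"$ne": ""}}'))

# One fix: refuse anything that isn't a plain string before it reaches the query.
def safe_login_filter(raw_body):
    body = json.loads(raw_body)
    if not all(isinstance(body.get(k), str) for k in ("username", "password")):
        raise ValueError("login fields must be plain strings")
    return {"username": body["username"], "password": body["password"]}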
-
I just found a vulnerability in my company's software.
Anyone who can edit a specific config file could implant some SQL there, which would later be executed by another (unknowing) user from within the software.
The software in question is B2B and has a server-client model, but with the client connecting directly to the database for most operations - what you can do is supposed to be regulated by the software. With this cute little exploit I managed to drop a table from my test environment - or worse: I could manipulate data, so that by the time you realize it, it's too late to simply restore a DB backup because there might have been small changes going on for who knows how long. If someone were to use this maliciously, the damages could easily be several million Euros for some of our customers (think a few hundred thousand orders per day being deleted or changed).
It could also potentially be used for data exfiltration by changing protection flags, though if we're talking industry espionage they would probably find other ways and exploit the OS or DB directly, given that this attack requires specific knowledge of the software. Also we don't promise to safely store your crabby patty recipe (or other super secret secrets).
The good thing is that an attack would only be possible for someone with both write access to that file and insider knowledge (though the latter can be gained fairly easily by a user of the software with some knowledge of SQL).
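To make the pattern concrete without giving anything real away, it boils down to something like this (Python/sqlite3 stand-in, every name invented): a config value becomes part of the statement text instead of being bound as data.

# Second-order SQL injection via a config file: whoever can edit the value
# effectively writes SQL that another user's client will execute later.
import configparser
import sqlite3

cfg = configparser.ConfigParser()
cfg.read_string(
    "[filters]\n"
    "order_status = open'; DROP TABLE orders; --"
)
status = cfg["filters"]["order_status"]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")

# Vulnerable pattern: the config value is spliced into the statement text.
dangerous_sql = f"SELECT id FROM orders WHERE status = '{status}'"
print(dangerous_sql)   # ... WHERE status = 'open'; DROP TABLE orders; --'

# Safer: the value is bound as data and can never become SQL.
rows = conn.execute("SELECT id FROM orders WHERE status = ?", (status,)).fetchall()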
Well, so much for logging off early on Friday.
-
I ran an UPDATE without a WHERE on the MySQL console on the production database today.
CLASSIC
Just because I needed to fix the database after a bug fix on the backend of the application.
I thought I had written a good SQL statement after executing it on my local machine, and then everything went bad.
Luckily it was only one column with some cached statistics data, and I checked that it was not important data before I actually started fixing stuff, but still...
Almost got a heart attack afterwards.
Made a script to fix this column and it took me only 15 minutes but still...
The bug was caused in part by having no unit tests, and by the fact that after 3 years of development the application has grown from a simple one for a single customer, with document volumes around 50k, to over 40 customers and volumes over 2 million per month (don't know how many pages each) - all in just one year after we completed all the needed features.
I have daily backups and logs of every API operation, but still.
I think this got too far for one backend developer.
I got scared that I will lose money, because I am a contractor and the only backend developer working on it.
I am so tired of this right now I think I need a break from work.
Responsibility is killing me so hard right now.
It will take a week to get back to normal.
-
We had a test in class where one of the questions was "What is SQL injection?" and I wrote what it was and even gave a bang-on simple example showing how you could end up with a truncate statement on your customer DB. The last part of it was:
"This will be the SQL that gets executed:
INSERT INTO Customers (Name) VALUES (''); TRUNCATE Customers; --');"
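For anyone grading along at home: the whole "attack" is just string concatenation, and the standard fix is to bind the value instead of pasting it in. A quick Python/sqlite3 sketch (names invented):

# How the quoted statement comes to exist, and why parameter binding kills it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (Name TEXT)")

user_input = "'); TRUNCATE Customers; --"   # the "name" an attacker submits

# Vulnerable: user input is pasted into the statement text.
injected = f"INSERT INTO Customers (Name) VALUES ('{user_input}');"
print(injected)
# -> INSERT INTO Customers (Name) VALUES (''); TRUNCATE Customers; --');

# Safe: the same input is stored as a literal name and nothing more.
conn.execute("INSERT INTO Customers (Name) VALUES (?)", (user_input,))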
When I got it back after we had a session of "grade each other's work", I got the comment: "What makes this an attack against a database?"
I mean, I'm not sure what I could have written. That it truncates the database? And, correct me if I'm wrong, but if a user truncates your DB, is that not an attack?
-
When did we decide that managing users through a cloud REST architecture is more secure than having them in an underlying DB?
Because I can't put my finger on exactly why... but I don't like it and I think it's probably less secure... and it just spawned from the need to make user management a subscription-based service, like fucking everything? When a simple MySQL or Postgres and some bcrypt somewhere would be both more secure and infinitely cheaper?
I'm more used to consuming REST APIs than writing them. Can any of you REST peeps help me understand how a REST API could be made as secure as a SQL DB connection for user management?
What do you think the attack vectors are for REST API user management? Like... what's the SQL injection of a REST API? Pack some extra JSON somewhere or something?
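For comparison, the boring baseline I mean - parameterized SQL plus bcrypt - really is only a handful of lines. A sketch (sqlite3 stand-in, assumes the third-party bcrypt package; table and column names invented):

# Local user management: hashed passwords at rest, parameterized queries only.
import sqlite3
import bcrypt  # third-party: pip install bcrypt

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT PRIMARY KEY, pw_hash BLOB)")

def register(username, password):
    pw_hash = bcrypt.hashpw(password.encode(), bcrypt.gensalt())
    conn.execute("INSERT INTO users VALUES (?, ?)", (username, pw_hash))

def login(username, password):
    row = conn.execute(
        "SELECT pw_hash FROM users WHERE username = ?", (username,)
    ).fetchone()
    return row is not None and bcrypt.checkpw(password.encode(), row[0])

register("alice", "correct horse battery staple")
print(login("alice", "correct horse battery staple"))  # True
print(login("alice", "' OR '1'='1"))                   # False - it's just data here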
If I could at least have faith that my shit's not gonna get hacked because I have to use a 3rd-party REST service to manage the users of my own fucking app, I could maybe sleep tonight.
-
Learned HTML in 7th grade from my sister's web designing book (she was doing some course at an institute), then in 10th class met C++ and SQL because they were in the course. In my first year of college I met PHP because my roommate was searching how to do a phishing attack, then met C++ again, and after that Java, JavaScript, Android development and many more, all because of the various projects I did... glad that I took those projects. 😊