Comments
-
There are all sorts of ways of doing that sort of thing, depending on your requirements. Try looking into Selenium for UI automation, and JMeter for API request stress tests.
-
Russian403 8y There are ways to do this with Python: you can basically start as many threads as your computer can handle (and most PCs can handle loads) and have them all send their requests. This is just a really rough sketch.
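A minimal sketch of that threads-firing-requests approach, using only the standard library (the target URL and thread count here are placeholders, not anything from the thread):

```python
import threading
import urllib.request

TARGET = "http://localhost:8000/"  # placeholder endpoint; point at your own service
NUM_THREADS = 50                   # raise until your machine complains

done = []  # list.append is safe to call from multiple threads

def send_request():
    try:
        with urllib.request.urlopen(TARGET, timeout=5) as resp:
            resp.read()
    except Exception:
        pass  # with no server listening, expect connection errors
    finally:
        done.append(1)

threads = [threading.Thread(target=send_request) for _ in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(done)} of {NUM_THREADS} requests attempted")
```

Each thread spends almost all its time waiting on the network, which is why you can run far more of them than you have cores.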
-
donuts23672 8y @Zaphod65 I'm using JMeter, but how can you realistically simulate 10k concurrent requests from 1 PC?
-
gberginc449 8y Try running the load on your PC and monitor the services as well as your CPU. If the services you are benchmarking are flooded before your CPU is fully utilised, you've found the upper bound.
However, your CPU will probably be the limiting factor here, but hopefully your boss is gonna understand the fucking problem and get you additional resources.
If you only need to test simple HTTP APIs/requests, you may try the wrk tool: https://github.com/wg/wrk
-
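To make that "run the load on your own box and watch what saturates first" idea concrete, here's a stdlib-only Python sketch that stands up a throwaway local HTTP service and measures requests per second against it (the handler, worker count, and request count are illustrative, not from this thread):

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Throwaway local service standing in for the real API under test.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

REQS_PER_WORKER = 50
WORKERS = 8

def worker(results):
    for _ in range(REQS_PER_WORKER):
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
            resp.read()
    results.append(REQS_PER_WORKER)

results = []
start = time.perf_counter()
threads = [threading.Thread(target=worker, args=(results,)) for _ in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
total = sum(results)
print(f"{total} requests in {elapsed:.2f}s -> {total / elapsed:.0f} req/s")
server.shutdown()
```

Point the workers at your real API instead of the throwaway server, then watch CPU on both ends while you scale WORKERS up: whichever side maxes out first is your bottleneck.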
donuts23672 8y @gberginc Thanks. My problem is this: say 1 request takes 100 ms for the server to return. If 1 core can only make 5 requests in that time, wouldn't the load limit = # cores * 5?
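The "# cores * 5" arithmetic only holds if each request pins a core for its full 100 ms. In practice the client is mostly waiting on the network, so one core can keep hundreds of requests in flight at once. A quick sketch that simulates 100 ms responses with asyncio (the latency and concurrency numbers are made up for illustration):

```python
import asyncio
import time

LATENCY = 0.1       # simulated 100 ms server response time
CONCURRENCY = 200   # far more in-flight requests than cores

async def fake_request():
    await asyncio.sleep(LATENCY)  # stands in for waiting on the network

async def main():
    start = time.perf_counter()
    await asyncio.gather(*(fake_request() for _ in range(CONCURRENCY)))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
# 200 overlapping 100 ms waits finish in roughly 0.1 s on one core,
# not 200 * 0.1 s = 20 s, because they all wait concurrently.
print(f"{CONCURRENCY} requests in {elapsed:.2f}s")
```

So the per-core ceiling is set by how many sockets and events you can juggle, not by requests-per-100ms-per-core.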
-
One of the previous devs at my work recommended using Gatling for stress testing sites. I think the name was Gatling; if not, tag me and I'll find the correct name.
-
gberginc449 8y @billgates Yes, you are most probably going to hit a problem with just a single PC. Unless your servers/services are very bad, you will not be able to replicate the real workloads.
Since you have to implement the benchmark (or use one of the existing ones), I would simply set it up on your machine and then show the concrete results demonstrating the problem.
Here's a quick video showing what I meant before: https://youtube.com/watch/....
Server on the left side (1 CPU), workload VM (4 cores) on the right. Increasing the workload saturates the workload cores before the server's core, which clearly shows I'd need to distribute the workload.
btw, make sure you set the ulimit properly. Otherwise the system will block your requests due to too many open files (sockets).
Good luck!
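That ulimit point matters because every in-flight request holds a socket, i.e. one open file descriptor. Besides the shell's `ulimit -n`, you can check and raise the limit from the load-test script itself via the Unix-only `resource` module (the 10,000 target below is just an example):

```python
import resource

# The soft NOFILE limit caps how many sockets (concurrent requests) you can hold open.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

# Raise the soft limit toward the hard limit; an unprivileged process may
# raise soft up to hard, but cannot exceed it.
target = 10_000 if hard == resource.RLIM_INFINITY else min(hard, 10_000)
if soft < target:
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
    print(f"raised soft limit to {target}")
```

Without this, a big thread or connection pool dies with "Too many open files" long before the server under test is actually stressed.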
OK semi rant... Would like suggestions
Boss wants me to figure out some way to find the maximum load/users our servers/API/database can handle before it freezes or crashes **under normal usage**.
HOW THE FUCK AM I SUPPOSED TO DO THAT WITH 1 PC? The question seems to me to mean how big a DDoS can it handle?
I'm not sure if this is vague requirements, if they don't know what they're talking about, or if they think I can shit gold... for nothing... or if I'm missing something (I'm thinking: how many concurrent requests can a single machine even make, with 4 CPUs?).
"Oh, just spin up some cloud servers."
Uh, well, I'm a developer. I've never used Chef or Puppet, and our cloud sucks; it's like a web GUI. Not only do I have to create the instances manually, I'd also have to upload the testing programs to each one manually... and set up the envs needed to run them.
Docker you say? There's no Docker here... Prebuilt VM images? Not supported.
And it's due in 2 weeks...