Search - "dev timeout"
-
In my office, when a dev gets too talkative he/she gets boxed in and pretty much told to shut the hell up. 😂 #DevTimeout #LessFlapsMoreTaps
-
Yesterday the web site started logging an exception, "A task was canceled", when making an HTTP call using the .NET HttpClient class (the site calling a REST service).
Emails went back and forth: blaming the database, blaming the network, then a senior web developer blamed the logging (the system I'm responsible for).
Under the hood, the logger sends the exception data to another REST service (which sends emails, generates reports, etc.), so I had to quickly redirect the discussion: if we're seeing the exception email, the logging didn't cause the exception, it's just reporting it. Felt a little sad having to explain that to other IT professionals, but everyone seemed to agree and focused on the server resources.
Last night I got a call about the exceptions occurring again in much larger numbers (from 100 to over 5,000 within a few minutes). I log in, add myself to the large Skype group chat going on, just in time to catch the same senior web developer say…
“Here is the APM data that shows logging is causing the http tasks to get canceled.”
FRACK!
Me: "No, that data just shows the logging HTTP traffic for the exception. The exception occurs before any logging is executed. The task is either being canceled due to a network timeout, or IIS is running out of threads. The web site is failing to execute the HTTP call to the REST service."
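The causality argument is easy to see in code. Here's a minimal sketch (in Python with requests, not the .NET HttpClient from the rant; the endpoints and the log_exception() reporting helper are hypothetical) showing that the timeout exception is raised first and the logging call only ever runs afterwards:

```python
import requests

LOG_SERVICE = "http://logging.internal/api/log"  # hypothetical reporting endpoint

def log_exception(exc: Exception) -> None:
    # The logger is just another HTTP call. It runs *after* the original
    # exception has already been raised, so it can't be the cause.
    requests.post(LOG_SERVICE, json={"error": str(exc)}, timeout=5)

def call_sale_service() -> dict:
    try:
        # If the network stalls, or the server has no threads left to answer,
        # this raises before any logging code is ever reached.
        resp = requests.get("http://rest.internal/api/saleitem", timeout=10)
        resp.raise_for_status()
        return resp.json()
    except requests.exceptions.Timeout as exc:
        log_exception(exc)  # an effect of the failure, not its cause
        raise
```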
Several other devs, DBAs, and network admins agree.
The errors only lasted a couple of minutes (exactly 2 minutes, which seemed odd), so everyone agrees to dig into the data further in the morning.
This morning I log in to my computer to discover the errors occurred again at 6:20 AM, along with an email from the senior web developer saying we (my manager, her manager, network admins, DBAs, etc.) need to discuss changes to the logging system to prevent this problem from negatively affecting the customer experience... blah blah blah.
FRACKing female dog!
Good news is we never had the meeting. When the senior web dev manager came in, he cancelled the meeting.
Turned out to be a hiccup in a domain controller causing the servers to lose their connections to each other for 2 minutes (1-minute timeout, 1 minute to fully re-sync). That explained the exact two-minute burst of errors (and it was proven via Wireshark).
People and their petty office politics piss me off.
-
Still dealing with the web department and their finger-pointing after several thousand logged errors.
SeniorWebDev: “Looks like there were 250 database timeout errors at 11:02AM. DBAs might want to take a look.”
I look at the actual exceptions being logged (the bulk of the over 1,600 logged errors)…
"Object reference not set to an instance of an object."
Then I looked at the email timestamp… 11:00 AM. We received the email notification *before* the database timeout errors occurred.
I gathered some facts: when the exceptions started, when they ended, and the stack trace, which I used to find the code not checking for null (maybe 10 minutes of junior-dev detective work). Sent the data to the 'powers that be' and carried on with my daily tasks.
I attached what I found (not the actual code; it was changed to protect the innocent).
Couple of hours later another WebDev replied…
WebDev: “These errors look like a database connectivity issue between the web site and the saleitem data service. Appears the logging framework doesn’t allow us to log any information about the database connection.”
FRACK!!... that fracking lying piece of frack! Our team is responsible for the logging framework. I was typing up my response (having to calm down) when, about a minute later, the head DBA replied…
DBA: "Do you have any evidence of this? Our logs show no connectivity issues. The logging framework does have the ability to log an extensive amount of data regarding the database transaction: database name, server, login, command text, and parameter values. Everything we need to troubleshoot. This is the link to the documentation… If you implement the one line of code to gather the data, it will go a long way in helping us debug performance and connectivity issues. Thank you."
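A sketch of what that "one line of code" might look like. The table, server names and log fields here are placeholders, and the cursor usage assumes a DB-API driver like pymysql or psycopg2; the point is simply attaching the full DB context to the log entry when a call fails:

```python
import logging

log = logging.getLogger("saleitem")

def fetch_sale_item(conn, item_id: int):
    sql = "SELECT * FROM SaleItem WHERE id = %s"
    params = (item_id,)
    try:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchone()
    except Exception:
        # The "one line": hand the DBAs everything they asked for --
        # command text, parameter values, database and server.
        log.exception(
            "db call failed",
            extra={"command_text": sql, "parameters": params,
                   "database": "SalesDb", "server": "db01"},  # placeholder names
        )
        raise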
DBA sends me a skype message “You’re welcome :)”
Ahh… nice to see someone else fed up with their lying bull… stuff. -
Want to make someone's life a misery? Here's how.
Don't base your tech stack on any prior knowledge or what's relevant to the problem.
Instead, design it around all the latest trends and badges you want to put on your resume, because they're frequent keywords in job postings.
Once your data goes in, you'll never get it out again. At best you'll be teased with little crumbs of data but never the whole.
I know, here's a genius idea: instead of putting data into a normal database and then using a cache, let's put it all into the cache. And by the way, it's a volatile cache.
Here's an idea. For something as simple as a single log, let's make it use a queue that goes into a queue that goes into another queue that goes into another queue, all of which are black boxes. No rhyme or reason; queues are all the rage.
Have you tried: let's use a new-fangled tangle, trust me it's safe, INSERT BIG NAME HERE uses it.
Finally it all gets flushed down into this subterranean cunt of a sewerage system and good luck getting it all out again. It's like hell except it's all shitty instead of all fiery.
All I want is to export one table, a simple log table with a few GB of data, to CSV or, heck, whatever generic format it supports. That's it.
So I run the export-table-to-file command and off it goes, only for timeout errors to start piling up less than a minute later until it aborts. WTF. So then I set the most obvious timeout setting in the client: no change. Then another timeout setting on the client: no change. Then I try to put it in the client configuration file: no change. Then I set the timeout on the export query: no change. Then finally I bump the timeouts in the server config: no change. Then I find someone has downloaded it from both Tucows and apt, but they're using the Tucows version, so its real config is in /dev/database.xml (don't even ask). I increase that from seconds to a minute; it's still timing out after a minute.
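The fallback I'd normally reach for at this point: page the export yourself in bounded chunks so no single statement lives long enough to hit any of those timeouts. A sketch, assuming a generic Python DB-API driver and a made-up access_log table keyed by an auto-increment id (in this case even that was moot, as the next paragraph explains):

```python
import csv

CHUNK = 50_000  # rows per query; keep each statement well under any timeout

def export_table(conn, path: str) -> None:
    last_id = 0
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "ts", "line"])
        while True:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT id, ts, line FROM access_log "
                    "WHERE id > %s ORDER BY id LIMIT %s",
                    (last_id, CHUNK),
                )
                rows = cur.fetchall()
            if not rows:
                break
            writer.writerows(rows)
            last_id = rows[-1][0]  # resume point; no long-running cursor held open
```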
In the end I have to make my own exporter, and that involves working out how to parse non-standard binary-formatted data structures. It's the umpteenth time I've had to do this.
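That parsing work tends to look the same every time. A sketch of its usual shape, assuming a hypothetical record layout (little-endian u32 length plus u32 unix timestamp, then the payload), not any particular product's real format:

```python
import struct
from typing import Iterator

def read_records(path: str) -> Iterator[tuple[int, bytes]]:
    """Yield (timestamp, payload) pairs from a length-prefixed binary log."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)  # u32 length + u32 unix timestamp, little-endian
            if len(header) < 8:
                break  # clean EOF or truncated header
            length, ts = struct.unpack("<II", header)
            payload = f.read(length)
            if len(payload) < length:
                break  # truncated trailing record
            yield ts, payload
```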
These aren't no-name solutions, and that really terrifies me. All this is doing is taking some access logs, storing them in one place, then indexing them by timestamp. These things are all meant to be blazing fast, but grep is often faster. How the hell does such a trivial thing turn into a series of one nightmare after another? Things that should take a few minutes take days of screwing around. I don't have access logs any more because I can't access them anymore.
The terror of this isn't that it's so awful; it's that all the little kiddies doing all this jazz for the first time, using these shit-wipe buzzword-driven approaches, have no fucking clue it's not meant to be this difficult. I'm replacing entire tens-of-thousands-to-million-line enterprise systems with a few hundred lines of code that are faster, more reliable and better in virtually every measurable way, time and time again.
This is constant. It's not one offender, not one project, not one company, not one developer; it's the industry standard. It's all over open source software and all over dev shops. Everything is becoming exponentially more bloated and difficult than it needs to be. I'm seeing people spin up a hundred cloud instances for workloads that would be happy on one box after anywhere from a few minutes to a week of optimisation effort. Queries that are O(N*N) and would take only a few minutes to turn into O(log N) lookups, but instead people rent a huge, expensive SQL cluster that also takes a ton of time to maintain and configure, which isn't going to be done right either.
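A sketch of how small that kind of fix usually is, with made-up data shapes: the nested-loop version is O(N*N); building a dict index first makes each lookup O(1) (a sorted or B-tree index gives the O(log N) per lookup mentioned above):

```python
# O(N*N): for every request, scan every user record.
def enrich_slow(requests, users):
    out = []
    for r in requests:
        for u in users:
            if u["id"] == r["user_id"]:
                out.append({**r, "name": u["name"]})
                break
    return out

# O(N) total: build an index once, then each lookup is constant time.
def enrich_fast(requests, users):
    by_id = {u["id"]: u for u in users}
    return [
        {**r, "name": by_id[r["user_id"]]["name"]}
        for r in requests
        if r["user_id"] in by_id
    ]
```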
I think most people are bullshitting when they say they have impostor syndrome, but when the trend in technology is to make every fucking little trivial thing a thousand times more complex than it has to be, I can see how they'd feel that way. There's so bloody much you have to do that you shouldn't need to do these days that you either can't get anything done right, or the smallest thing takes an age.
I have no idea why some people put up with some of these appliances. If you bought a dish washer that made washing dishes even harder than it was before you'd return it to the store.
Every time I see the terms enterprise, fast, big data, scalable, cloud or anything of the like, I bang my head on the table. One of these days I'm going to lose my fucking tits.
-
In fact I'm a sinful dev, so much so that I can't easily decide which of my sins is worst. From indenting with tabs, or using nano instead of vim/emacs, to hardcoding database credentials on the server, to the many hacks and workarounds I've used as actual "fixes" when the deadline was upon me and I'd tried all I could. But it has always led only to my own regret. For instance, my latest sin was that I preferred Debian over Arch and used proprietary graphics drivers to speed up my new setup. But I ended up with a curse from St. Ignucius. (check my last rant)
But my worst sin probably goes back to when I was "printf-debugging" some issue with a GSM controller on a Raspberry Pi. I forgot to remove one little print line and deployed the new "fixed" version. I didn't follow that project for a month or so after that, until the client posted the device back and said that "it just doesn't work anymore". It turned out Raspbian didn't boot because the SD card was corrupted. I dd'ed through the card and noticed billions of lines of "DEBUG:: reading stream from 192.some.shitty.ip", taking up almost all of the 32G SD card. The moment I remembered the cursed line I had added a month earlier, I declared the SD card dead with no hesitation, dunce-commented the line (so history would remember), implemented a timeout for the thread containing it, set up a journald unit for my service and removed the redirection of process output to a log file, found a new SD card and installed everything again, and finally posted the new "fix" back to the client.
Moral: never comfort yourself about the sins you've committed in the past, kids; they will certainly come back to you. Also, don't do I/O (especially writing to a file on an SD card with an ext filesystem) in a potentially infinite loop with no timeout.
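A sketch of the cheap insurance that would have saved that SD card, using Python's stdlib RotatingFileHandler so debug spam can never grow past a fixed budget (paths and size numbers are just illustrative):

```python
import logging
import socket
from logging.handlers import RotatingFileHandler

# Cap the damage: at most 5 files x ~1 MB, no matter how hot the loop gets.
handler = RotatingFileHandler(
    "/var/log/gsm-controller.log", maxBytes=1_000_000, backupCount=5
)
logging.basicConfig(level=logging.DEBUG, handlers=[handler])
log = logging.getLogger("gsm")

def read_stream(sock: socket.socket) -> None:
    sock.settimeout(10)  # the missing timeout from the rant
    while True:
        try:
            data = sock.recv(4096)
        except socket.timeout:
            log.warning("stream read timed out; bailing out instead of spinning")
            break
        if not data:
            break  # peer closed the connection
        log.debug("read %d bytes from stream", len(data))
```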
P.S. I'd posted my last rant just before the new week rant last night. I really liked the St. Ignucius meme, so I decided to create a new one. He's very adorable :)
-
FUCK APPLICATION LEVEL FIREWALLS!
So I came online today and thought: alright, let's open the shitty Outlook webmail client. Holy crap... that's way too much mail. Many of them are missed Teams messages. So I open up Teams, and holy crap: like every third dev in my company sent me a message screaming "GitLab is not working!!!".
Yesterday I updated it, so I immediately go into panic mode: what shitty hack have I done?!
So yeah, GitLab seems to be working just fine, everything speedy and responsive, so I call one of my fellow devs and ask him what's wrong. And he's like: oh yeah, there's an LDAP error saying timeout or something.
I try to log in with Active Directory. Works like a charm. Try another account: same problem?!
I Google the problem, search the GitLab issue tracker. Nope, there's no open bug or anything like it.
So alright, let's call the network guy. "Yo, can you check if anything LDAP-like is getting blocked on its way to the GitLab server?" And he's like: oh yeah, damn, almost every damn request is getting blocked. Ah wait, there was a firewall update yesterday too. Yeah, LDAP is no longer LDAP. BLOCK THAT SHIT!
After 10 minutes of figuring out what shitty traffic type the firewall detects and what needs to be whitelisted to make it fucking work again, it seems to work.
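For what it's worth, the 30-second check that shortcuts this kind of blame game: confirm from the GitLab host whether the LDAP port even answers within a deadline. A sketch (host and port are placeholders; a hang here, rather than a refusal, smells like a firewall drop):

```python
import socket

LDAP_HOST = "dc01.corp.example"  # placeholder domain controller
LDAP_PORT = 389                  # 636 for LDAPS

def ldap_reachable(timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((LDAP_HOST, LDAP_PORT), timeout=timeout):
            return True  # TCP handshake completed; the firewall lets it through
    except OSError as exc:
        print(f"LDAP unreachable: {exc}")
        return False

if __name__ == "__main__":
    print("ok" if ldap_reachable() else "blocked or down")
```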
But ha, no: there's another update rolling out, so same shit 15 minutes later.
Now it seems to work, and I have to inform every damn fucking developer that it works again. And yeah, alright, you sent a mail, but fuck it, I will call you anyway! So yeah, just answering calls, mails and chat messages. Like, why the fuck can't you read your mails like a damn normal person?!
-
Wanted to try new alerting based on a new Prometheus metric we added. To trigger the alert, we killed the dev-stage DB of the service. The alert didn't fire. The reason: with the DB killed, the metrics endpoint suddenly needs exactly 60 seconds to respond, while the Prometheus scrape timeout is 20 seconds.
And to top it off, this behavior happens in every service we've developed (that has a DB).
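A sketch of one way out, assuming prometheus_client and a hypothetical check_db() helper: probe the DB from a background thread with its own short timeout and let the scrape just read a cached gauge, so /metrics answers instantly even while the DB hangs:

```python
import threading
import time

from prometheus_client import Gauge, start_http_server

db_up = Gauge("db_up", "1 if the service can reach its database")

def check_db() -> bool:
    ...  # hypothetical: open a connection with a ~2s timeout, return True on success

def probe_loop() -> None:
    while True:
        try:
            db_up.set(1 if check_db() else 0)
        except Exception:
            db_up.set(0)
        time.sleep(15)  # off the scrape path, well under the 20s scrape timeout

if __name__ == "__main__":
    threading.Thread(target=probe_loop, daemon=True).start()
    start_http_server(8000)  # /metrics now only reads in-memory gauges
    while True:
        time.sleep(60)
```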
Well, at least the new alerting already helped find a bug.
-
!dev; New Yorkers @dfox @trogus
I'm planning to go up the One World Observatory tomorrow morning, but I'm not sure which ticket to buy, particularly the iPad option with streaming video. It says you can watch the videos later, although apparently they also expire. So I'm wondering if it's worth it... (I'm also thinking: since it's streaming and I can share it with anyone, it means I can probably download it too... the techie way.)
Also, can I bring a USB charger? My plan is to sort of just sit up there in the morning, then head to Timeout Market for lunch-dinner.
-
PHP dev help/advice needed!
We have problems with MySQL. We're still stuck on MariaDB. I'm using indexes (the correct ones) and we still have scaling problems. We have a few tables with over 100 million rows. One of them is read every morning with a subselect that counts unique rows, and it fails every time because of a timeout/lock. The temp table size was increased, which helped for a little while, but as the table grows the problem reappears. I'm reading from a slave server that was purposely created for reads only, yet we still have problems. We're on managed dedicated servers, and the host isn't willing to optimise the database configs for our needs. What are the easiest options for scaling at this point? Going to a fully dedicated server and PerconaDB? NoSQL? Sharding the server? Anyone got any good blog posts or something to read about this? Your own experience?
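One low-effort pattern that often fixes exactly this morning-job failure (hedged: the schema and column names below are made up, since the rant doesn't show them): maintain an incremental daily summary table, so the morning read touches a few thousand pre-aggregated rows instead of a COUNT(DISTINCT ...) over 100M. A sketch driving it from Python with MariaDB-compatible SQL:

```python
# Roll up one bounded day-slice at a time (e.g. from cron just after midnight),
# instead of one huge subselect over the whole table every morning.

DDL = """
CREATE TABLE IF NOT EXISTS daily_uniques (
    day DATE NOT NULL,
    metric VARCHAR(64) NOT NULL,
    uniques INT UNSIGNED NOT NULL,
    PRIMARY KEY (day, metric)
)
"""

ROLLUP = """
INSERT INTO daily_uniques (day, metric, uniques)
SELECT DATE(created_at), 'visitors', COUNT(DISTINCT visitor_id)
FROM big_log
WHERE created_at >= %s AND created_at < %s
GROUP BY DATE(created_at)
ON DUPLICATE KEY UPDATE uniques = VALUES(uniques)
"""

def rollup_day(conn, day_start, day_end):
    # Scans only one day's slice of the big table; reruns are idempotent.
    with conn.cursor() as cur:
        cur.execute(DDL)
        cur.execute(ROLLUP, (day_start, day_end))
    conn.commit()
```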