Search - "unexpected result"
-
!rant & story_time
This happened at the startup I was working for around 2011. I was a junior Android dev, working on a very popular app.
During experiments for a new feature, I discovered that the system AlarmManager had a serious bug - you could set a repeating alarm with interval=0ms. If your app takes more than 1 ms to handle the Intent, the AlarmManager starts to fill up the Intent queue, with unexpected results for the OS: it slows down, then reboots when it runs out of RAM. Why? My guess was that because the AlarmManager was part of the OS, the backlog made the system process run out of RAM, crashing it and the whole system with it. The real kicker was that even after a reboot, the AlarmManager still had Intents queued, causing the device to bootloop for a while until the queue was cleared. My boss decided to report the problem to Google, as this was an issue in the OS. I built an example app that caused the crash 10-30 seconds after starting, and submitted it to Google. Google responded later that day with "not an issue, no one will ever do this".
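For context, triggering it took a single call. A minimal sketch of what the repro app boiled down to (all names invented, and this reflects ~2011-era behaviour - as far as I know, newer Android versions clamp absurdly short repeating intervals):

```java
// Hypothetical repro, roughly what the example app did (all names invented).
// UpdateReceiver is assumed to be a plain BroadcastReceiver doing >1 ms of work.
import android.app.AlarmManager;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.os.SystemClock;

public class RepeatingAlarmRepro {

    static void scheduleBrokenAlarm(Context context) {
        AlarmManager alarmManager =
                (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);

        Intent intent = new Intent(context, UpdateReceiver.class);
        PendingIntent operation =
                PendingIntent.getBroadcast(context, 0, intent, 0);

        long intervalMillis = 0; // the "impossible" value nobody would ever use

        // A repeating alarm with a 0 ms interval: the system queues Intents
        // faster than the receiver can drain them, until RAM runs out.
        alarmManager.setRepeating(
                AlarmManager.ELAPSED_REALTIME_WAKEUP,
                SystemClock.elapsedRealtime(),
                intervalMillis,
                operation);
    }
}
```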
Well... At this point I decided to review the auto-update feature in our app, to make sure this would not happen to us. We had just released a new feature where a user could set an update schedule option in the app settings - you could set up a daily, weekly, or hourly update for the app. After reviewing it, it looked good, and the issue was not triggered in the manual QA I did. So it was all good, and we released an updated version to the store.
After we did an update-install, we discovered that there was a problem reading the previous version's SharedPrefs value for the update schedule setting, and the value defaulted to 0...
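Presumably the broken read looked something like this (the preference key and class names are invented); the clamp at the end is the kind of defensive check that would have saved us:

```java
// Presumed shape of the bug (key and class names invented): after the
// update-install the old schedule value couldn't be read, so getLong()
// fell back to its default.
import android.content.Context;
import android.content.SharedPreferences;

public class UpdateScheduler {

    private static final String KEY_UPDATE_INTERVAL_MS = "update_interval_ms";
    private static final long MIN_INTERVAL_MS = 60L * 60L * 1000L; // one hour

    static long readUpdateInterval(Context context) {
        SharedPreferences prefs =
                context.getSharedPreferences("settings", Context.MODE_PRIVATE);

        // Old value unreadable after the update -> defaults to 0,
        // which then went straight into setRepeating().
        long intervalMillis = prefs.getLong(KEY_UPDATE_INTERVAL_MS, 0);

        // The defensive fix: never feed a stored value to the AlarmManager
        // without clamping it to a sane minimum first.
        return Math.max(intervalMillis, MIN_INTERVAL_MS);
    }
}
```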
The result: our app caused all our users to go into a bootloop, and because the alarm was set again when the device booted up, the bootloop could only be solved by a factory reset, or by removing our app before the device rebooted and then waiting out a few reboot cycles.
We lost 50 places in the market, and it took us 6 months to get back to where we were.
It was not my fault, but it sucked big time!
-
Me: *spends 5 hours screwing around with recursion and performing operations in reverse order*
Unit tests: *pass*
Me: Wow. Okay, that’s interesting.
*run tests again*
Me: Right, well, that’s just dandy. Now, how did I get here and how do I document this...
TL;DR I spent 5 hours fucking about and accidentally came up with a working solution that I can’t explain
EDIT: RIP wrong category
-
When I got my current job, I started working on an Android app that a coworker who had left the company had been building.
The app was about 40% done and barely usable; it lacked a lot of features and had no multithreading, so with a huge amount of data it would crash (Android doesn't let you freeze the main thread for more than a few seconds before it decides the app is not responding).
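The usual fix for that part is the one sketched below (class and method names invented): push the heavy work onto a background thread and only hop back to the main thread to render.

```java
// Sketch of the standard pattern (names invented): heavy parsing on a
// background thread, then post back to the main thread to render, so the UI
// thread never blocks long enough to trigger an "Application Not Responding".
import android.os.Handler;
import android.os.Looper;

public class DataLoader {

    private final Handler mainHandler = new Handler(Looper.getMainLooper());

    void loadLargeDataset(final Runnable renderOnUiThread) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                parseHugeAmountOfData();            // slow work, off the UI thread
                mainHandler.post(renderOnUiThread); // back on the UI thread
            }
        }).start();
    }

    private void parseHugeAmountOfData() {
        // placeholder for the expensive work that used to run on the main thread
    }
}
```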
After a week or two the work left to do was still huge, but one day one of my coworkers came in and asked me if I could release a beta for a client that same day... Unexpected deadline.
I spent 8 hours fixing as many bugs as possible and adding multithreading to the weakest parts.
I did it, but it was so stressful and the result wasn't even great. In fact, I finished the stable version 7 months later.
-
I had the idea that part of the problem with NN and ML research is that we all use the same standard loss and nonlinear functions. In theory most NN architectures are universal approximators. But there's a big gap between symbolic and numeric computation.
But some of our bigger leaps in improvement weren't just from new architectures, but from entirely new approaches to how data is transformed and how we calculate loss, for example KL divergence.
And it occurred to me that all we really need is training/test/validation data, and with the right approach we can let the system discover not only the architecture (that's been done before), but also the nonlinear and loss functions themselves, and see what pops out the other side as a result.
If a network can instrument its own code, as it were, maybe it'd find new and useful nonlinear functions and losses. Networks wouldn't just specify a conv layer here or a maxpool there, but derive implementations of these all on their own.
More importantly, with a little pruning we could even use successful examples for bootstrapping smaller, more efficient algorithms, all within the graph itself, and use genetic algorithms to mix and match nodes at training time to discover what works or doesn't, or do training, testing, and validation in batches, to anneal a network in the correct direction.
By generating variations of successful nodes and graphs, and using substitution, we can use comparison to minimize error (for some measure of error over accuracy and precision) and select the best graph variations, without strictly having to do much point mutation within any given node, minimizing deleterious effects - sort of like how gene expression leads to unexpected but fitness-improving results for an entire organism, while point mutations typically cause disease.
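To make the variation/substitution idea concrete, here's a toy sketch of just that loop over candidate nonlinearities; the validationError callback (train a small fixed net with the candidate, return held-out loss) is assumed rather than implemented, and all names are invented:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;
import java.util.function.DoubleUnaryOperator;
import java.util.function.ToDoubleFunction;

public class NodeEvolution {

    // Toy sketch of the variation/substitution/selection loop: evolve candidate
    // nonlinearities and keep whichever score best on held-out data.
    static DoubleUnaryOperator evolveActivation(
            ToDoubleFunction<DoubleUnaryOperator> validationError, // assumed callback
            int generations, int populationSize, long seed) {

        Random rng = new Random(seed);

        // Seed the population with a few standard nonlinearities.
        List<DoubleUnaryOperator> population = new ArrayList<>();
        population.add(x -> Math.max(0.0, x));               // ReLU
        population.add(Math::tanh);                          // tanh
        population.add(x -> 1.0 / (1.0 + Math.exp(-x)));     // sigmoid

        for (int g = 0; g < generations; g++) {
            List<DoubleUnaryOperator> candidates = new ArrayList<>(population);

            for (DoubleUnaryOperator parent : population) {
                // Variation: a mutated copy of the node (random gain and bias).
                double gain = 0.5 + rng.nextDouble();
                double bias = rng.nextGaussian() * 0.1;
                candidates.add(x -> parent.applyAsDouble(gain * x + bias));

                // Substitution: splice this node with another one from the population.
                DoubleUnaryOperator other =
                        population.get(rng.nextInt(population.size()));
                candidates.add(x -> parent.applyAsDouble(other.applyAsDouble(x)));
            }

            // Selection: rank by validation error and keep only the best nodes.
            candidates.sort(Comparator.comparingDouble(validationError));
            population = new ArrayList<>(
                    candidates.subList(0, Math.min(populationSize, candidates.size())));
        }
        return population.get(0); // best-performing nonlinearity found
    }
}
```

The same loop shape would apply to loss nodes or whole subgraphs; only the evaluation callback changes.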
It might seem like this wouldn't work out of the gate, just on the basis of intuition, but I think the benefit of working through node substitutions or entire subgraph substitutions is that we can check test/validation loss before training is even complete.
If we train a network to specify a known loss, we can even have it evaluate the networks themselves, and run variations on our network's loss node to find better losses during training time, and at some point let nodes refer to these same loss-calculation graphs within themselves, switching between them dynamically via variation and substitution.
I could even envision probabilistic lists of jump addresses, or mappings of value ranges to jump addresses, or having await()-style opcodes on some nodes that, upon being encountered, queue up ticks from upstream nodes whose calculations the await()ed node relies on, to do things like emergent convolution.
I've written all the classes and started on the interpreter itself; just a few things need to be fleshed out now.
Here's my shitty little partial sketch of the opcodes and ideas.
https://pastebin.com/5yDTaApS
I think I'll teach it to do convolution, color recognition, maybe try MNIST, or teach it step by step how to do sequence masking and prediction, dunno yet.
-
So this month I had to deliver two major features which required unexpected refactors, and I had to handle unexpected edge cases all over the place. Since I work in another timezone and time was of the essence, I was kinda working around the clock to complete the refactors as fast as possible because it was "important and critical". I have 7 other devs on my team, but only half of the team are actually competent and even fewer are motivated to push through. Most of the team prefer to sit on low-hanging-fruit tasks and can't even get that fucking right.
So that resulted in me doing at least 100 hours of overtime this month. Best part: all I got for pulling it off was a thank-you Slack message from the teamlead, and I got assigned even more work: to lead a new initiative which seems to be an even bigger clusterfuck...
So today I had a sit-down with my manager, asked for 3 paid days off, and told him that I did 50-60 hours of overtime. He okayed it as long as my teamlead was happy.
So I created a chat, added the manager and teamlead to it, and explained my situation: that I'm feeling burned out, that I need 3 days off, and that combined with the weekend that should allow me to finally relax.
My fucking teamlead told me that these days are mine and he can't take them away from me. But then he started guilt-tripping me that no one else will be working on the new initiative during those days, so we will have a very tight timeframe to deliver this (only until August).
Instead of showing at least a drop of empathy, that fucker tried to guilt-trip me for taking days off for fucking unpaid overtime. What a motherfucker. Best part is I've talked with the manager and we actually have until the end of August to deliver the new initiative, so fucker teamlead is gaslighting me with a false sense of urgency.
I guess a hard lesson learned here. I've been waiting for my fucking raise to be approved for the past 6 weeks (asked for a 43% bump, which is on the way since I got very strong positive feedback).
So I'm done. I proved myself and will get the salary I could only dream about a few months ago. Not putting in any overtime anymore. If something is very urgent, borrow fucking decent devs from another team. Or replace half of our useless team with just one new decent dev. I bet our productivity would increase by at least 50%.
It's not my fucking fault that 2-3 people are pulling the weight of an 8-person team. It's not my responsibility to mentor retards while crunching under immense pressure just because the current processes are dysfunctional. Fuck it. Hard lesson learned. If you want overtime, compensate with extra days off or pay. I'm putting my 7-8 hours in daily and I'm not responding to your bullshit Slack messages or emails after work. I don't give a fuck that you work in another timezone and my late responses might result in stuff getting postponed by a few days or a week. Figure it out.
-
Task: Research how to refactor our grid component to be usable with Redux
Result: The next sprint will be used to replace Angular 5 with React
What happened? Not that I'm unhappy with the decision, but this was quite an unexpected result.