
So this is kinda shocking, but also expected, and deep, with layers:

layer 1:
I just realised that AI + junior dev + 10% of a senior dev = 1 senior dev. This doesn't quite sit well technically, but for certain managers this logic works, and I got to see it working.

So I got cancer and took a 2-month sabbatical. I am a dev with 6+ years of experience, and before I left, I was making PRs that added features requiring 3-4 screens, numerous pieces of logic, and multiple APIs, and that would have significant impact. Basically 3-4 days' worth of work, all done solely by me to perfection, which comes with years of experience with the nitty-gritties of Android.

And just a month ago, our team was joined by a fresh college passout who had done a basic Flutter course, had zero knowledge of native Android, and was making terrible screens using XML and ViewBinding as part of his initial training.

Now that I'm back, I see a weird dynamic in the group: he is always sitting next to another SE1 and is working on a task of similar intensity to the ones I would do. He asked for an estimate of 5 days(!) and managed to create all the screens, APIs, logic etc. in that time.

1. During this time, he was at our seats every 10 minutes, showing what he had made, asking for next steps, and then going back to his desk.

2. At his desk, he would open ChatGPT, paste all his code there, get a response, feed it into Gemini in Android Studio, put the result back into AS, fix the red lines again using Gemini, run it, and come back to us to check if it was correct.

3. And somehow his code did end up working.

4. I reviewed his PR, and apart from some basic fixes everything seemed fine. His code didn't consider various edge cases, but I said fuck it, it's the responsibility of the dev and QA to identify those cases (my PRs are essentially reviewed the same way; that's how I learnt to write quality code that won't blow up on an input of "abc" instead of "123", see the sketch after this list).

5. But then his code got merged into the temp branch from which we were to cut the QA build, and it crashed 3 separate screens unrelated to his feature but related to the shitshow he had made of the data layer.

6. He and his SE1 senior then fixed that shit again, and the feature got merged, reviewed by the QAs, fixed for more bugs, and finally merged into our codebase.

7. However, all of this (the stuff before QA review) happened within those 5 days, so the managers thought the task was done by this junior trainee in just 5 days.
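
Side note on point 4, the sketch I promised: a minimal, hypothetical Kotlin example (my own illustration, not code from our repo) of the kind of edge-case handling that survives "abc" where it expects "123":

    // Hypothetical example: parse user input defensively instead of crashing.
    fun parseQuantity(raw: String): Int? {
        // toIntOrNull returns null on "abc" instead of throwing,
        // so the caller can show a validation error rather than crash.
        return raw.trim().toIntOrNull()?.takeIf { it >= 0 }
    }

    fun main() {
        println(parseQuantity("123")) // 123
        println(parseQuantity("abc")) // null: handled gracefully, no crash
    }

Trivial on its own, but this null-instead-of-crash habit is exactly the kind of thing his code was missing.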

Thus trainee + AI + 10-30 mins per hour of an SE1's day (~3 hrs total) = 1 feature.
Now, my salary = 2x the trainee's.
If I am laid off and a 10% increment is given to that SE1, the total cost saved by the company is around 40% of my salary (rough numbers below).
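
To make that ~40% concrete, here's the back-of-envelope version with made-up numbers (same illustrative ballpark as my comment in the thread below, not real payroll figures):

    my salary                  = 10 units/month
    trainee's salary           =  5 (half of mine, per the 2x above)
    SE1's 10% increment        ~ +1 (assuming the SE1 makes around 8-10)
    company pays after layoff  =  5 + 1 = 6 for the same feature work
    total saved                = 10 - 6 = 4, i.e. ~40% of my salary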

And this blows my mind, coz ever since I came back, I've been getting menial tasks while freshers are being given large-scale tasks.

layer 2: is it good for the company?

I might sound biased, but the company will soon need to figure out whether it can afford to trade the reliability of experienced devs for this weird "hack the system with AI" style of development.

Even we seasoned devs use AI, but we review the output ourselves and think through the edge cases before putting it in front of stakeholders with a "yes sir, done!"

Additionally, I don't think pasting confidential code from the codebase into Gemini and ChatGPT will always be considered okay. It's like no one cares about the data right now, but if those companies ever tried to compete with us or something, we'd be digging our own grave.

layer 3: is it impacting the users (i.e. the devs)?

Well, I am scared that they might think of me as a burden and fire me in favour of a junior trainee, so yeah, it's highly impacting me.

But what about that SE1 who is helping this trainee guy: is this part of his job role now? Is it part of every senior dev's role to train trainees via AI bots?

And what about the trainee himself? Is it really beneficial for him to learn Android development like that?

---------
I personally have always valued folks who can write efficient code. I don't care about their DS/algo knowledge, or whether they deeply understand the workings of the APIs and the core code underneath. Writing efficient, easy-to-understand, reliable, quality code was enough for me to hire you, and vice versa.

But AI is changing things for the worse, and I think we will see an even bigger increase in DS/algo questions and the other shitty ways FAANG-like companies separate the cream of devs from the rest. And this will be coming from every startup/MNC/small-scale company, not just FAANG.

Comments
  • Hmm, you said the new guy was consulting with the senior devs all the time...

    What about the time lost by the devs having to babysit him?

    Also consider that the guy is unlikely to improve much using this technique. About 10 years down the line, when it's his time to be the senior and babysit another AI kiddie, he might not be able to.

    I think there's a hidden tech and knowledge debt associated with the AI rush, but it will only become visible years down the line, when it's likely too late for a whole generation of devs.
  • @Hazarth About the consulting dev: that's where I counted the 10% salary increase. You kill off a guy taking, say, $10/month, bump this guy's package from $8 to $10 or $12/month, and he will be more than happy to waste time on AI kiddies.

    And yes, the upcoming generation is looking at a bigger problem if all they want to rely on is AI. But isn't AI getting better and more reliable at an equivalent speed? Like I mentioned in the story, his work had lots of bugs, but the files and changes were 80% correct.

    Today these AI kids are doing something similar to what lazy chefs at cheap restaurants do: take last night's food, add some sauce and cheese, give it a slight fry in oil, and voila, you've got yourself a fresh breakfast dish. This principle works with food once or twice, but after a while it's all just a dirty mess.
  • The question that matters the most:

    > But isn't AI getting better and more reliable at an equivalent speed?

    If we're talking about LLMs, then no. Rather, the word "stagnation" comes to mind. Just forget about the idiotically, prohibitively expensive compute costs for a second: they're still probabilistic bullshit machines, and generating exponentially more data to use for training is a bit of a problem, as we *kinda* need humans to do that, what with the whole synthetic data sets leading to model degradation.

    And that last part is a good wink to your lazy chef parallel/analogy/whatever in the fuck, so I rest my case without further explanation.

    Now, could this change with some fancy-pants quantum googloid black magic on anabolic steroids shit? I couldn't tell ya for sure, but I'ma say 'no' to that one too just to be a dick about it, OK. "It'll keep improving" is what they always say, met with farting noises and butt soup.

    Walruses.
  • @dotenvironment Yeah, I second what Liebranca said.

    Most of us have been saying it for quite some time: the tech that LLMs run on is not scalable. It took a while, but now even the industry has noticed, and I think there have been some papers on this. It seems the trend right now is to try to shrink or redesign the models rather than grow them, because of diminishing returns.

    It's not impossible for a completely new tech to emerge in the next couple of years, and AI could see another massive boost, but then again it could also take half a century. Don't forget that neural nets as a concept are over 80 years old now... maybe the next leap will be 80 years from "Attention Is All You Need"... and if that's the case, then the current generation of devs is screwed... we'll see though.

    Either way, we should all learn to use it to our advantage and let the industry drive itself into the ground if it wants.