2
nitnip
9d

So I'm expected to solo develop one fullstack project, support 2 guys with the backend of a second project, mentor the backend intern as he solo develops a third project (that has a horrendously poorly defined scope), fix bugs with a fourth project, figure out what's wrong with the legacy spaghetti code on an even older fifth project that has a version of the framework so old it not only isn't supported, but isn't even well documented, add a couple of features to a sixth project in two days, conduct the technical interview for the new interns or hires, code review their shit if the company decides to send them a test, handle the deployment of our projects to AWS, and be the acting tech lead on a team that has close to no time to write unit tests?
Starting to feel a bit hopeless.

Comments
  • 1
It's good that you've written this out already. I also thought that I had a lot to do, until I put it in a list. I plan everything in Gitea projects and have my own scrum board.

    If you have many tasks, timebox everything upfront: two hours for x, three hours for y, and so on for the whole week. Anything you fail to finish, you can talk about with your boss. There are only so many hours in a week or sprint. But timebox it with no exceptions. Only meetings can fuck up this method.
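    The timeboxing idea above can be sketched in a few lines (task names and hour counts are made up for illustration): allocate fixed hours per task against the week's budget, and whatever doesn't fit gets flagged for the conversation with your boss.

```python
# Toy sketch of timeboxing: fixed hours per task, hard weekly budget.
# Tasks that don't fit go in the "overflow" list to discuss with the boss.
WEEK_HOURS = 40

def timebox(tasks: dict[str, int], budget: int = WEEK_HOURS):
    planned, overflow = {}, []
    remaining = budget
    for name, hours in tasks.items():
        if hours <= remaining:
            planned[name] = hours   # fits in the budget
            remaining -= hours
        else:
            overflow.append(name)   # doesn't fit: escalate, don't absorb
    return planned, overflow
```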
  • 1
    @retoor context switching is the method I use... an hour (max) for a task and then switch to another
  • 1
    @AceDev yeah exactly, I call it timeboxing. Switching literally every hour takes some administration overhead tho. How's your LLM adventure? Made smth nice? Continued on that site? I have a complete LLM infrastructure - at the bottom, NOTHING WORKS :P Refactored everything at once because it was oh so easy (Python) and kept writing and writing until NOTHING worked anymore :P Got a bit cocky :P
  • 1
    I'm still working on deploying my own LLM, while trying to enhance the abilities of adrit (using the Gemini API).
    Wanna add a PDF processing feature.
  • 1
    @AceDev what do you use to deploy your own LLM? Ollama is the easiest in my opinion, and it has a nice REST API and clients for JS / Python. Running in less than 30 minutes, literally.

    My friend pointed this one out to me: https://replicate.com/

    Not free, but very impressive pricing.

    I host my own models and they 'work', but they're nowhere near as speedy as what those people do. Hardware specs are described there.
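    For reference, the Ollama REST API mentioned above can be called with nothing but the standard library. This is a minimal sketch, assuming Ollama is running locally (`ollama serve`) on its default port 11434 and a model like "llama3" has already been pulled; the model name is just an example.

```python
# Minimal sketch of a non-streaming call to a local Ollama server.
# Assumes `ollama serve` is running on the default port with a model pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def build_payload(model: str, prompt: str) -> dict:
    # "stream": False asks Ollama for a single JSON object
    # instead of a stream of partial responses.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # the generated text is in the "response" field
        return json.loads(resp.read())["response"]
```

    Usage would be something like `generate("llama3", "Summarize this PDF text: ...")`, once the server is up.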
  • 1
    @retoor thanks, I'll definitely check it out asap.
    And I actually want to invest in hardware because I care about both quality & speed.
  • 1
    @AceDev I tried to find the article from Ollama itself saying it runs best on Apple M4 hardware for you, but I can't find it anymore. I was so sure I'd seen it somewhere.
  • 1
    @retoor I'll search
  • 1
    @retoor Meetings and bosses going "Oh yeah no, do this thing now." followed by a "Why isn't the thing you were doing before done?"
  • 0
    @nitnip @b2plane green dots unite! I think @b2plane is non-binary. Can't choose its gender.