2
lorentz
7h

You know how LLMs always imitate expertise and understanding? That isn't specific to English; it happens in code as well. It's harder for them to get away with it there, because even the best guess isn't likely to be approximately correct, but they still try, and it still sometimes works.

One of our cucumber scenario tests now contains a deadlock somehow. We run them one at a time. I need some rakia.
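For anyone wondering how a test can deadlock when scenarios run one at a time: it doesn't need a second scenario, just a helper that re-takes a lock its own caller already holds. A minimal sketch below, with hypothetical names (`refresh_cache`, `warm_cache`, `CACHE_LOCK` are made up for illustration, not from the actual project); MRI at least detects this case and raises instead of hanging forever.

```ruby
# Hypothetical step-definition helpers: refresh_cache takes a lock,
# then calls warm_cache, which tries to take the SAME lock again.
# Even with scenarios run serially, the single thread deadlocks on
# itself; MRI's Mutex detects the recursive lock and raises ThreadError.
CACHE_LOCK = Mutex.new

def warm_cache
  CACHE_LOCK.synchronize do
    # ... prime some entries ...
  end
end

def refresh_cache
  CACHE_LOCK.synchronize do
    # ... rebuild state ...
    warm_cache # oops: re-acquires CACHE_LOCK on the same thread
  end
end

begin
  refresh_cache
rescue ThreadError => e
  puts "self-deadlock: #{e.message}"
end
```

If the wait is on a condition variable or an external resource instead of a plain Mutex, Ruby can't detect it and the scenario just hangs, which is presumably what the timeout-less run looks like.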

Comments
  • 2
    How the fuck am I even gonna present this to my PO, who's the sponsor for AI adoption? I want both cucumber and LLMs gone from the project, but maybe it's better to just relay problems as I find them. Can I trust an AI fanboy to recognize the pattern by himself?
  • 0
    I think you're just supposed to give them busy work