
Got to talking with someone in our company about AI-generated code. I said we still have to audit the code, understand how it works, and make sure there aren't any nefarious libraries or code in what is produced. Like what we "should" be doing when we find libraries on the web. I explained how people will purposely create libraries that are spoofs of other libraries but have malicious code embedded in them. It doesn't take much to imagine someone using a sketchy AI to push this kinda code.
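
To make that concrete: the spoofed-library trick is usually typosquatting, where the malicious package name is one or two characters off from the real one. As a minimal sketch (Python, with a hardcoded stand-in for a real registry index; none of this is a real tool), this is the kind of near-miss name check a CI step could run over a dependency list:

```python
# Minimal sketch, not a real tool: flag dependency names that are
# suspiciously close to, but not exactly, a well-known package name.
# POPULAR is a made-up stand-in; a real check would use a registry index.
import difflib

POPULAR = {"requests", "numpy", "pandas", "urllib3", "cryptography"}

def looks_like_typosquat(name: str, cutoff: float = 0.85) -> bool:
    """True if `name` closely resembles, but is not, a popular package."""
    if name in POPULAR:
        return False  # exact match to a known package: fine
    return bool(difflib.get_close_matches(name, list(POPULAR), n=1, cutoff=cutoff))

for dep in ["requests", "requets", "crpytography", "somelib"]:
    print(dep, "->", "suspicious" if looks_like_typosquat(dep) else "ok")
```

Edit distance won't catch every spoof, but it's the shape of the check, and it's cheap enough to run on every build.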

How do you reasonably fight this if we start relying more and more on AI-generated code? So I suggested we need an AI to review the AI-generated code. Then we need an AI to review the AI that reviews the AI-generated code. Then...

Comments
  • responsibility

    if a person doesn't know what the code does and pushes it, you fire them

    you can use it to make stuff but it is no substitute for a human. if the person doesn't understand that, reroll
  • I have started to sound like a broken record when I say this: Fuck AI. Fuck Sam Altman. Fuck OpenAI. Fuck Sora.
  • It may be acceptable to grab an AI-generated snippet of code if we aren’t sure of the best way to implement a particular thing.
    However, we should still be writing the tests that verify its behaviour.
    I think that might be something people are missing in the AI discussion. Just because you didn’t write the code doesn’t take away your responsibility to provide regression tests. And that extends to security/performance review as well. A sketch of what that looks like follows below.
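
    A minimal sketch of that test-the-snippet idea (the slugify helper below stands in for whatever snippet the AI produced; all names here are made up for illustration):

    ```python
    import re
    import unittest

    # Hypothetical AI-generated snippet: we didn't write it, so we pin
    # down its behaviour with our own regression tests before trusting it.
    def slugify(text: str) -> str:
        text = text.strip().lower()
        return re.sub(r"[^a-z0-9]+", "-", text).strip("-")

    class TestSlugify(unittest.TestCase):
        def test_basic(self):
            self.assertEqual(slugify("Hello, World!"), "hello-world")

        def test_collapses_whitespace_and_symbols(self):
            self.assertEqual(slugify("  a  b  "), "a-b")

        def test_empty_input(self):
            self.assertEqual(slugify(""), "")

    if __name__ == "__main__":
        unittest.main()
    ```

    The point isn't this particular helper; it's that the tests are ours, written against the behaviour we actually need, whoever or whatever wrote the implementation.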