12

Microsoft Manager: "We need to slap ChatGPT onto Bing....STAT!"
Devs: "There won't be enough time to test security."
Microsoft Manager: *Throws hands in the air* "Who cares!!?? Just get it done!"
Devs: "Ok, boss."

https://arstechnica.com/information...

Comments
  • 4
    In my opinion, ethics dictate that the algorithms/models/parameters used for human interaction have to be public anyways.

    Microsoft should try honesty for the first time since that company exists. It might actually work.
  • 5
    Don’t forget all the IT departments now scrambling to find some sort of use for ChatGPT or else suffer falling behind in the “AI Arms Race”.
  • 1
    The AI war is just another fad imo. Just wait until something cataclysmic happens due to it.

    It will soon fade away like web3 did.
  • 4
    Oh give me a break with this "prompt injection" BS. This "exploit" gains absolutely no access to any systems, gets through any security walls, or anything like that. It simply reveals some of the (IMO obvioius) rules that OpenAI is trying to apply to all answers, a task destined to fail anyway considering the size of the model.
  • 0
    @fullstackclown

    They want us to not question it. Imho the label is by design.
    Like doing the DAN on chatgpt is labeled 'hack'.

    And thats wrong, but people gonna be people.

    Ffs they could stop dicking around and just give us the model / checkpoint.
Add Comment