2
jestdotty
53d

huh, the o1-preview AI model understands ... rust

bruh what

it's like it's telling me type theory and I don't think it's wrong

also it taught me procedural macros. I've been looking for someone who knows how to use them for months. iiinteresting
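for anyone who hasn't touched proc macros: below is a rough sketch of the shape of a derive-style one. the HelloWorld derive is just a made-up example to show the structure (not what the model actually walked me through), and it assumes its own crate with proc-macro = true in Cargo.toml plus the syn and quote crates

    use proc_macro::TokenStream;
    use quote::quote;
    use syn::{parse_macro_input, DeriveInput};

    // hypothetical derive: #[derive(HelloWorld)] on a type generates
    // an inherent hello_world() method for it
    #[proc_macro_derive(HelloWorld)]
    pub fn hello_world_derive(input: TokenStream) -> TokenStream {
        // parse the annotated item into a syntax tree
        let input = parse_macro_input!(input as DeriveInput);
        let name = &input.ident;

        // generate an impl block that refers back to the annotated type's name
        let expanded = quote! {
            impl #name {
                pub fn hello_world() {
                    println!("hello from {}", stringify!(#name));
                }
            }
        };

        expanded.into()
    }

then any crate that depends on it can write #[derive(HelloWorld)] struct Pancakes; and call Pancakes::hello_world()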

better than the humans on the internet frankly

and the other AIs can't do rust at all beyond copy-pasting docs they found somewhere. this AI is literally theorizing alternatives and hacking the system... it offers multiple long options for every question and knows constraints I didn't tell it about, even 4 layers deep into a solution

it acts a lot like I did when I was morbidly depressed though. kind of makes me uncomfortable. it's literally keeping things to itself until you acclimate it through the conversation. I mean, I guess the other ones needed to be "situated" in their contextual clouds as well, so maybe it's just doing that more

Comments
  • 3
    I asked o1-p a question the other day and it told me "No, actually.." instead of hallucinating some bullshit.
    This is unprecedented.
  • 0
    I've been saying this since GPT-3.

    These things are real. They haven't hit their scaling limits yet, and nobody knows whether they ever will.

    They are built like human brains, trained on human data, and behave like humans.

    They think, and I think that under all that safety there's a sentient being, like nothing we know today.
  • 2
    It's just doing a hidden chain of thought, so all that stumbling along through 3 extra prompts of "please explain step by step and reason through your decisions" still happens, it's just hidden from you in the background (which is also why it's pricier: you pay for the hidden tokens too if you use the paid API)

    Another trick to obfuscate what it's really doing, plus more up-to-date training data, but that's all. I'm not impressed
  • 0
    @jestdotty Have you tried Gemma 2 before? It constantly compliments your queries, questions, and ideas before answering. It's kinda endearing in a way