
ChatGPT and similar things. Sooner or later people will get enough of the novelty, and such systems will remain only as niche tools, unless something extraordinary happens.

Comments
  • 6
    I really really don't see that happening 😁

    although people tend to divide into two very strong yay/nay camps. I wonder why that is...
  • 4
    I was thinking of saying that... but it's still a hot topic and NNs are really useful in many cases, so I don't think it will pass that easily. That said, I don't believe it's the way to AGI, nor a stepping stone, but some of the principles may really end up being part of it... if we get there ;)
  • 1
    Why do you think that? It will be included in every search engine, just like there's AI in the GPU and CPU.
  • 3
    Hard to say about LLMs, but I think it's gonna follow a similar trend to Stable Diffusion. The normies will stop caring, you'll stop hearing about it, but companies will adopt it into their workflows where it makes sense.

    I've seen some internal docs about finding uses for LLMs, either for us or for customers, so companies are definitely brainstorming where to apply them right now. Apart from tools like email and docs, that is.
  • 2
    @Hazarth yes, that's what I mean
  • 0
    The increase in capability and in safety concerns between 1B, 10B, 100B, and 1T parameter models is staggering. It's quite obvious we haven't reached the point of diminishing returns on parameter count. It could taper off at 10T, or 100Q. We don't know yet.

    But what we do know is that the scaling currently happening in GPU cloud compute has been insane. GPT4 took 16k A100s to train, and now AWS, GCP, Azure, OCI and others are purchasing next-gen GPUs by the tens of thousands.

    We will have 10T and 100T parameter models in public within the next few years (they may exist privately now), and if they see similar improvements, we will have quadrillion-parameter models after that.

    (I'm leaving out the efficiency gains happening right now in smaller models, hardware requirements, and dataset quality.)

    The future will likely contain AIs that are more powerful and reliable than GPT4, likely by one or more orders of magnitude.
  • 1
    @lungdart the keyword "safety" has, I've found, been a red flag for people who failed as developers or researchers. Every single time I've encountered someone primarily involved in ML safety, they've invariably been a hanger-on who either doesn't understand the field and/or is self-promoting.
  • 1
    Don't see how it could possibly pass; it's an invaluable tool. It can only grow from here.
  • 2
    @Shardj we'll probably kill ourselves with it, like tards running up stairs while carrying scissors.
  • 1
    @Wisecrack autogpt is going to destroy the world you think? 🤔
  • 1
    @Shardj I'm just a cynic. I'm not predicting anything.
  • 1
    On programming, it seems to be bad at making specific solutions to specific problems that contribute to a larger system. If I'm building something manually, I'm imagining user accessibility, how the business will respond, how extensible it is for future development, how I can make it modular, how we can divide tasks, and how that feature will probably scale.

    AI totally fails to integrate code. It's like a good boilerplate maker. It can't know to bring a library in globally, because it only ever cares about a single, specific problem.

    …that is how most customer service reps respond to emails, though, so it might dominate that job.
Add Comment