Some of my friends are already relying heavily on ChatGPT. I'm not against it, but even I can break ChatGPT. Please still use your lazy brain; don't consume information blindly.

  • 7
    It would be an awesome search engine if it would list its sources.

    Society should transition to a more self-empowering stance when it comes to artificial information assistants: Sources are a must when we let fuzzy neuronal networks answer our questions. We as humans just don't normally cite sources because we are lazy. But a slave has no right to be lazy.
  • 5
    I wonder... does ChatGPT always assume it is wrong if you correct it?

    What if you ask it something simple like "what is 5+5" and when it gives you the answer "10" you tell it "you're wrong, I think it's actually 11"... would it argue with you?
  • 2
    You don't have to be a genius to break this AI.

    The result is always fun, and for basic stuff it sure can help a lot. Like mentioned, as a search engine or just for basic explanations of complex stuff it can be useful.

    (Like asking how git rebase works, or having it explain neural networks with basic examples)
  • 5
    @Hazarth You'd get a T-800 on your ass!
  • 3
    @Oktokolo how long till we've got AI citing AI?
  • 0
    @Hazarth It depends on the prompt: if you steer it toward the answer you want, it will not argue. But if you prompt it to explain why 5+5 is 10, then it will burn your brain.
  • 1
    @PonySlaystation i miss you so much
  • 1
    @atheist Doesn't matter.

    We already have the misinformation problem with traditional search engines - so no change here. There will always be skill involved in using search engines. Assuming society keeps incentivising misinformation (commonly known as ads or propaganda), other people's AI will never be fully trustworthy.

    But while this social problem can't be solved with technology, citing sources is a huge step towards making the verification step easier (which sadly can't be automated, because there is no algorithm for truth). A user who doesn't blindly trust the AI can check whether the sources seem legit by following the links and matching them against their own experience and world view, and then check whether the sources actually support the AI's generated answer.

    This makes AI more discoverable for users, because they can easily learn where the AI is normally on point and what it tends to get wrong. Users can adapt to the new tool and then use it more efficiently.
  • 1
    @Oktokolo It does matter a little bit.

    With search engines, the truth is mostly static, or at least drifting very slowly. But with AI it's more like passing a photocopy through a photocopier again and again: the more wrong it gets, the more wrong it keeps getting downstream.

    So while a search engine can still give you results with plenty of inaccuracies, you're at least handling a tool that also has all the complete and correct information available, if you learn to use it. But with AI, if it starts quoting articles that it already generated (via other people, let's say), then it's progressively quoting information that's more and more wrong, until it means nothing.

    The big difference is that, for whatever reason, people seem to trust the AI more (probably because people are already shit at doing their own research)... so this at least needs plenty of education and attention so people know what they are dealing with...
  • 1
    @Hazarth I don't see the difference between a search engine returning Nth-generation AI-made results and an AI quoting Nth-generation AI-made results. Like today, you will have to detect whether something is truth or fiction yourself.

    Fictional sources aren't an AI invention. Search for anything medical or political and you will find plenty of them. You already know how to detect them. Just treat the AI results like you treat the search engine results. Just assume all the results are made by humans.

    We will see all sorts of new conspiracies and wonder cures popping up. But so what, we already know how to deal with them. There will also be lots of AoK-style texts. But we already know to discard them quickly too.

    All that would be left would be a better interface for using a search engine (if it quotes sources) - or a nice tool for fiction authors (if it doesn't).
  • 0
    @Oktokolo Well, when you say "we already know how to deal with them", you really just mean us here. The majority of people don't know that (as evidenced by the amount of stupid bait people bite on across the internet all the time, and the constant AI crazes without an inkling of understanding).

    Seems to me that unless awareness of how these things really work under the hood, and what they can and can't do, enters common knowledge, we're going to have a bad time looking for good information and sources. At least with a search engine you have the power to choose the keywords intuitively, but with AI you don't really know what you'll get. It's like a clunky big hunk of CPUs with a bazillion levers that each slightly modify the output... Search engines are simple to understand: they rank the words you type by relevancy (and ad revenue...). It's much harder to find the right answer, but once you do, you can be decently sure you got it, because of how much data you already had to sift through.
  • 1
    @Hazarth Sure, the masses might not know.

    But like with search engines, they will have to learn how to use them. They learn to read and write. They surely can also learn how to use a text-based search tool. And asking an AI might actually be more intuitive than asking a search engine for normal folks, as they can just ask it like they would ask another human.

    Also, using a search engine isn't intuitive for a first-time user. When I used a search engine for the very first time, there were barely any ads or spam on the web, and it still wasn't intuitively clear to me which search terms led to the desired results, or when to assume that the desired result didn't currently exist on the indexed part of the web.

    Using any tool always involves some skill. Humans can't even walk after birth; they have to learn that too. But for using AI-based search tools, you can reuse a skill that you already need for interacting with other humans.
  • 0
    @Oktokolo And it will take some time until 'words to search' become embedded in our DNA or based on instincts (like walking)
  • 0
    @Grumm Sure. Probably less time than it took to learn how to properly search stuff on the internet because of the intuitive natural language interface and the AI being trained on what people expect. It will probably take no time at all for mainstream topics not contested by propaganda, conspiracy theorists and advertising (the latter being the worst). So finding cat content or gardening tips will just work the very first time you try it.

    But yeah, searching for the contested stuff will require roughly the same skill as with a search engine - because it is the fact-checking and working around misinformation minefields that is the time eater and requires the most skill here.
  • 0
    @Oktokolo sure but who will make the AI neutral ?

    Any programmer will implement some kind of favoritism based on his/her personality.

    What if an AI becomes vulnerable to propaganda because it seems logical and like the better option?

    Will an AI tell the difference between peace and war on its own? Or will we train it that war is bad? But is it really bad? (War is always bad, but conflicts will always happen, and a future AI should be able to handle that too.)
  • 0
    @Grumm My point is that it doesn't have to be neutral. Neutral sources don't actually exist. Even actual academic research is biased as fuck. Search engines haven't been neutral since the invention of the ad-driven business model.

    Neutrality is a myth anyway. A subjective being can't be neutral by definition, so neither devs nor users can reliably test for neutrality. Spam/SEO exists - so you have to weight sources, lest the results become almost fully irrelevant...

    The "exploits" of ChatGPT show how you would do "neutrality" right: let the user state their preferred flavor of results. Want results matching commonly accepted academic science - just tell the AI (this could also be the default, because science works, bitches). Want results matching any other belief system - just tell the AI. Subjectivity is an unsolvable social problem, and AI isn't magic.
  • 1
    @Oktokolo Yet here at work, a lot of people take articles on the internet less seriously, or are quicker to decide they are not 'neutral journalism'. (I refer to the current war and the information you can find about it.)

    But when they watch the news on TV, they believe it is always true because it is a 'national public channel'.

    Even in a democratic country, that news isn't neutral either. It will always frame or twist words so that they fit some political agenda (propaganda).

    So I agree with your point. With AI in search engines, it will be even harder to tell the difference, since you can manipulate the answer into what you believe and want to see.
  • 0
    Hey, how's the wife