
The hype around Artificial Intelligence and neural nets makes me sicker by the day.

We all know that the potential power of AI gives stock prices a bump and bolsters investor confidence. But too many companies are reluctant to address its very real limits. It has evidently become taboo to discuss AI’s shortcomings and the limitations of machine learning, neural nets, and deep learning. However, if we want to deploy these technologies strategically in enterprises, we really need to talk about their weaknesses.

AI lacks common sense. AI may be able to recognize that a photo contains a man on a horse, but it probably won’t appreciate that the figures are actually a bronze sculpture of a man on a horse, not an actual man on an actual horse.

Let's consider the lesson offered by Margaret Mitchell, a research scientist at Google. Mitchell helps develop computers that can communicate about what they see and understand. As she feeds images and data to AIs, she asks them questions about what they “see.” In one case, Mitchell fed an AI lots of input about fun things and activities. When Mitchell showed the AI an image of a koala bear, it said, “Cute creature!” But when she showed the AI a picture of a house violently burning down, the AI exclaimed, “That’s awesome!”

The AI selected this response due to the orange and red colors it scanned in the photo; these fiery tones were frequently associated with positive responses in the AI’s input data set. It’s stories like these that demonstrate AI’s inevitable gaps, blind spots, and complete lack of common sense.
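
To make that failure mode concrete, here is a minimal sketch (not Mitchell’s actual system; every feature and data point below is invented for illustration) of how a spurious correlation like “warm colors mean positive” gets baked into a model:

```python
# Toy demonstration of a spurious correlation, sketched with
# scikit-learn. In this invented training set, warm colors only ever
# co-occur with positive labels, so the model learns
# "red/orange => positive" rather than anything about content.
from sklearn.linear_model import LogisticRegression

# Features: [redness, blueness]; labels: 1 = positive, 0 = negative
X_train = [
    [0.9, 0.1],  # sunset photo          -> labeled positive
    [0.8, 0.2],  # autumn leaves         -> labeled positive
    [0.7, 0.1],  # campfire marshmallows -> labeled positive
    [0.1, 0.9],  # rainy street          -> labeled negative
    [0.2, 0.8],  # grey office block     -> labeled negative
]
y_train = [1, 1, 1, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# A burning house is dominated by the same fiery tones...
burning_house = [[0.95, 0.05]]
print(model.predict(burning_house))  # -> [1]: "That's awesome!"
```

The model never learned “house fire = bad” because nothing in its inputs encoded that; all it ever saw was colors.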

AI is data-hungry and brittle. Neural nets require far too much data to match human intellects. In most cases, they require thousands or millions of examples to learn from. Worse still, each time you need one to recognize a new type of item, you have to start training from scratch.

Algorithmic problem-solving is also severely hampered by the quality of the data it’s fed. If an AI hasn’t been explicitly told how to answer a question, it can’t reason it out, and it cannot respond to an unexpected change it hasn’t been programmed to anticipate.
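
A small sketch of that brittleness: a standard softmax classifier has no built-in way to say “I don’t know,” so it spreads 100% of its confidence over the classes it knows, even for input it has never seen. The weights below are made up purely for illustration:

```python
# Toy softmax classifier confronted with out-of-distribution input.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

classes = ["cat", "dog", "horse"]
W = np.array([[ 1.2, -0.3,  0.5],
              [-0.8,  0.9,  0.1],
              [ 0.2,  0.4, -1.1]])  # hypothetical trained weights

x = np.random.default_rng(0).normal(size=3)  # pure noise, not an animal
probs = softmax(W @ x)
print(dict(zip(classes, probs.round(2))))
# The classifier still commits to one of its known labels; there is
# no output for "this is outside anything I was trained on".
```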

Today’s business world is filled with disruptions and events—from physical to economic to political—and these disruptions require interpretation and flexibility. Algorithms alone cannot handle that.

"AI lacks intuition". Humans use intuition to navigate the physical world. When you pivot and swing to hit a tennis ball or step off a sidewalk to cross the street, you do so without a thought—things that would require a robot so much processing power that it’s almost inconceivable that we would engineer them.

Algorithms get trapped in local optima. When assigned a task, a computer program may find solutions that are close by in the search space (known as a local optimum) but fail to find the best of all possible solutions. Finding the best global solution would require understanding the context, and how that context changes, or thinking creatively about the problem and potential solutions. Humans can do that. They can connect seemingly disparate concepts and come up with out-of-the-box thinking that solves problems in novel ways. AI cannot.
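
A toy example makes the trap easy to see: plain gradient descent on a function with two valleys settles into whichever valley it starts near, even when the other one is deeper. (The function and learning rate here are chosen purely for illustration.)

```python
# Gradient descent stuck in a local minimum of f(x) = x^4 - 3x^2 + x.
def f(x):
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

x = 1.5                    # start near the shallow valley
for _ in range(200):
    x -= 0.01 * grad(x)    # always step downhill, never look around

print(f"stuck at x = {x:.3f}, f(x) = {f(x):.3f}")
# -> settles near x = 1.13 (a local minimum). The global minimum sits
#    near x = -1.30 with a much lower f(x); the search never finds it.
```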

"AI can’t explain itself". AI may come up with the right answers, but even researchers who train AI systems often do not understand how an algorithm reached a specific conclusion. This is very problematic when AI is used in the context of medical diagnoses, for example, or in any environment where decisions have non-trivial consequences. What the algorithm has “learned” remains a mystery to everyone. Even if the AI is right, people will not trust its analytical output.

Artificial Intelligence offers tremendous opportunities and capabilities, but it can’t see the world as we humans do. What we need to do is work on its weaknesses and get them sorted out, rather than hype it up with make-believe while ignoring limitations that are in plain sight.

Ref: https://thriveglobal.com/stories/...

Comments
  • 2
    Feels more like an essay than a rant, but many points are definitely accurate.

    I can’t wait until “AI” can actually apply what it has learned across topics. That’s the real key to making it intelligent.

    In the very long view, AI is scary. In the immediate, it’s infantile, and finding it scary is (almost) laughable. But as with anything, it can and will be used for evil by nefarious (and simply stupid) individuals, even in its current state.
  • 0
    STOP SAYING "AI"
  • 1
    Honestly, the only people for whom this is a problem are those who've never studied modern AI (that is, a good chunk of the people with opinions on AI on the internet) and think it's something like human intelligence. It's not. Modern AI doesn't even care about human intelligence; that's a goal of only a small subunit of AI research. That, or you expect AI's machine-learning background to work wonders that are information-theoretically impossible.

    It's a tool. It's not hard to understand the basics, there are literally free courses for it everywhere and the foundational principles are actually quite simple. Go learn.
  • 1
    You made some excellent points.
    Let's be real: AI (including AGI) will never be able to match humans in natural things like common sense, intuition, consciousness and such... At least until some of those human "functions" can be modelled by mathematical or logical functions; until then, it won't be possible to build such concepts into AI models, let alone algorithms.
    I mean, until someone finds a way to get DNNs to match the capabilities of neural networks (read: real, biological neural networks), it will be unrealistic to think that AI could achieve what humans can do.

    Also, nowadays models like GPT-3 can (more reliably) answer questions that other DL/NLP alternatives wouldn't be able to.

    As for your point on interpretability: indeed, a fair number of models aren't easy to explain, especially the ones (notably DNNs) that are black boxes. But nowadays, in sectors like Healthcare and Finance where interpretability is a must, it's pretty much a solved problem, as models like decision trees are applicable.
  • 0
    ... And for the black box models, there are a handful of tools that enable people to see an interpretation of the predictions. So it's not as bad as you think.

    And you seem to forget that many people are scared of AI (mostly due to ignorance or misunderstanding).
  • 0
    @Nanos 😂