(Warning: kinda long && somewhat of a political rant)

Every time I tell someone I work with AI, the first thing to come out of their mouth is "oh but AI is going to take over the world!"


It was only somewhat recently that AI got decent at recognizing what's in a picture after training on over 3 million images, and even then it's not that great at it. People always say "AI is just if-else" ironically, but that isn't really far from the truth: we just multiply an input by weights and check the output.
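That "multiply an input by weights" claim is barely an exaggeration. A minimal sketch of a single artificial neuron (all weights and inputs here are made-up numbers, purely for illustration):

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum, then a threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # step activation: literally an if-else

# Example with invented values: 0.5*0.8 + (-1.0)*0.3 + 2.0*0.1 - 0.1 = 0.2 > 0
print(neuron([0.5, -1.0, 2.0], [0.8, 0.3, 0.1], bias=-0.1))  # -> 1
```

Real networks stack millions of these and learn the weights from data, but each unit really is just multiply, sum, compare.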

It isn't some magical sauce; it's not being born and then exploring a problem, it's just glorified probability prediction. Even in "unsupervised" learning, the domain set is provided; in "reinforcement learning", which has gotten super popular lately, we just have the computer figure out which policy is optimal and apply it to an environment. It's a glorified decision tree (and technically tree models like XGBoost outperform neural networks and deep learning on a large number of problems), and it isn't going to "decide" to take over the planet.
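Even the "computer decides which policy is optimal" part is less magic than it sounds. A toy sketch of tabular Q-learning (the environment, rewards, and hyperparameters are all invented for illustration): the "decision" is just the argmax over a table of running averages.

```python
import random

def reward(action):
    """Toy bandit environment: action 1 pays more on average (made-up numbers)."""
    return random.gauss(1.0 if action == 1 else 0.2, 0.1)

random.seed(0)
q = [0.0, 0.0]             # estimated value of each action
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

for _ in range(500):
    # explore randomly sometimes, otherwise exploit the current best estimate
    a = random.randrange(2) if random.random() < epsilon else q.index(max(q))
    q[a] += alpha * (reward(a) - q[a])  # nudge estimate toward observed reward

policy = q.index(max(q))  # the "optimal policy" is just a table lookup
```

After enough iterations, `policy` settles on action 1 simply because its running reward estimate is higher; no intent, no understanding, just bookkeeping.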

Honestly all of this is just born out of Elon Musk fans who take his word as truth and have been led to believe that AI is going to take over the world. There are a billion reasons why it can't! And to top it off this takes away a lot of public attention from VERY concerning ethical issues with AI.

Am I the only one who saw Google Duplex being unveiled and immediately thought "fraud"? Forget phone scammers: if you trained Duplex on the mannerisms of, say, a famous politician's voice, you could impersonate them in an audio clip (or even a video clip with deepfakes). Or take the widespread use of object detection and facial recognition in surveillance systems deployed by the DoD. Or the use of AI combined with location tracking and browsing analytics for targeted marketing.

The list of ethics breaches is endless, and I find it super suspicious that those profiting the most off of unethical AI are all too eager to shift public concern to some science-fiction, Terminator-style takeover that, even if it were possible, would be a long way out and is not any sort of priority issue right now.

  • 2
    artificial intelligence is ingenious statistics
  • 2
    I agree with you; the idea of AI taking over the world through some machine revolution is an idiotic concern and childish at best (although I appreciate some of the movies that came out of it).

    But I can't help but think about what the real "AI takeover" could be, and I don't imagine Terminator or 2001: A Space Odyssey; I think about jobs. Ever since the industrial revolution we've been replacing low-level grunt work with automation. This decrease in jobs, often filled by people who don't have the option to do better, combined with a constant increase in population, will brew a lot of conflict.

    I think AI might boost that.
  • 0
    @JKyll true, but I'm a believer in automation; if something can be made more efficient, I think it always should be. The job market should evolve and grow around technology and the industry, not the other way around.

    We live in a capitalist society, so the market will always tend towards the most efficient, cost-beneficial solution, usually regardless of diversity, ethics, or any externalities unless they directly affect $$$. Plus, automation doesn't actually mean fewer jobs; the number of jobs usually increases, but the available jobs shift from unskilled to skilled. At an individual level it may seem unfair, but in reality it forces the population to become more skilled or become unemployed; usually the former is chosen, and as a result a large portion of the population moves from low-pay unskilled work to higher-pay skilled work. At the end of the day it raises the standard of living for everyone; it just takes some time and effort for the market to adjust.
  • 0
    @woodworks I understand your vision, but it's a tad utopian. I strongly believe capitalism is the best economic system we've come up with, but I'd also suggest checking up on the capabilities, skills, and mean IQ of the people in those jobs; most won't or (mostly) can't do high-skilled, specialized jobs.

    What will happen when you force them out of their jobs and leave them unemployed, unskilled, and hungry? Population increase is a serious problem, and until that gets under control, full automation should be done in a controlled way. I'm not saying it should stop <you can't>, but left uncontrolled along with uncontrolled population growth, it will brew large and nasty conflicts.
  • 1
    @JKyll perhaps as a solution, companies that choose to automate away their workforce should also have to contribute <proportional sum> to free education for those who lost their job/job opportunity as a result.

    Let the corporations pay for college! People get jobs and an education, and companies get [skilled] workers, win-win!
  • 1
    @woodworks I actually like the idea. The real solution to these issues, though, doesn't reside in the technical aspects of the matter. It's political, and AI could bring a lot of good to the world if our institutions and politics weren't so dysfunctional.
  • 0
    Fuck Musk and his fanboys.
    We are so far away from when AI can take over even a single town.
  • 1
    Dude, when we talk about "AI taking over the world", we're talking about the point where it has evolved to simulate a human brain at 100x the speed. We will certainly do this once we understand the brain well enough. Then, if we give the AI enough control, which someone is likely to do, it could lead to a great disaster. Hopefully we learn from our first mistake. I have worked with AI and know what it involves: weights, conditionals, and trees. But so does a brain.
  • 0
    @keefoo artificial neurons are basically much simpler versions of synapses, and our brain has WAY more of them, in a more complex arrangement than is ever going to be possible via deep learning (not to mention probably way more computing power). Even industry leaders like Andrew Ng agree that we're rapidly approaching the point of diminishing returns with neural networks and deep learning (if we haven't already).

    Maybe in the far, far future computers will come up to par with brains; in that case it's basically a human brain, but without a body or any control, limited to a virtual space that can be turned off at any time.

    If it's started with a malicious purpose, executed with a malicious purpose, and kept alive to achieve that malicious purpose, then sure, it could possibly pose some threat maybe 100 years off; but in that case it's not AI taking over, it's just a weapon--the one running it is in control.
  • 0
    I love how short sighted you are.
  • 1
    I don't think deep neural networks as we know them today are going to be enough. I think that if we knew how the brain worked we could, just off the top of my head, make a component full of transistors (synapses) which, when they form certain patterns, activates other parts of the AI brain. Kind of like how a pattern of lit neurons activates feelings in our own brain, and, with a big/strong enough pattern, memories.

    The tricky part would be figuring out how to strengthen the synapses and communicate with the other parts of the AI brain as efficiently as our own brain does. Anyway, I think we're on the same track here.

    I'm firmly convinced that one day we will need strong regulations around AIs, and that we should be wary, even today. We never know; at some point the advancement of this technology might accelerate beyond our wildest expectations.