
Oh no, AI can destroy humanity in the future! It is like Skynet and such... Bad! It will be the end! FEAR THE AI!

Yeah, so I can't sleep now, so I'm writing a rant about that.

What a load of bullshit.

AI is just a bunch of if-elses, and I'm not joking. They might not be binary, and some ML architectures are more complex, but in general it's a lot of little neurons that decide what to output depending on the input. Even humans work that way. Yes, it's complicated to analyse. But it is not going to end humanity. Why? Because by itself it is useless. Just like a human without arms and legs.
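To make the "little neurons" point concrete, here's a minimal sketch of a single artificial neuron in plain Python (no framework; the weights and inputs are made-up numbers): a weighted sum squashed through a sigmoid. Glorified if-else arithmetic, nothing magical:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, then a sigmoid "squash" into (0, 1).
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Made-up numbers: the neuron "decides" how strongly to fire for this input.
out = neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2)
```

A whole network is just millions of these wired together; hard to analyse, sure, but still deterministic arithmetic on its inputs.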

But but but... internet... nukes... robots! Yeah... So maybe DON'T FUCKING GIVE IT BLOODY WEAPONS?! Would you wire a fucking random number generator to a bomb? If you can't predict the actions of a black box, don't give it fucking influence over anything! This is why governments aren't giving away nukes to everybody!

Also, if you think your Skynet will take control of the internet, remember how flawless our infrastructure is, and how that infrastructure is so fast it will easily accommodate the terabytes per second (or more) of throughput the AI needs to operate. If you connect it to the internet over USB 2.0, it won't be able to do anything bloody dangerous, because it can't overcome the laws of physics... And if the connection isn't the issue, just imagine the AI struggling to hack every possible server without knowing about the 1 000 000 errors and "features" those servers were equipped with by their master programmers... We can't make them work properly, let alone modify them to do something sinister!
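To put a number on the choke-point argument: USB 2.0 High Speed tops out at 480 Mbit/s on paper (real-world throughput is considerably lower). A back-of-the-envelope sketch of how long it would take to push even a single terabyte through that straw:

```python
# USB 2.0 High Speed: 480 Mbit/s theoretical ceiling (real throughput is lower).
link_bits_per_s = 480_000_000
terabyte_bits = 1_000_000_000_000 * 8  # 1 TB expressed in bits

seconds = terabyte_bits / link_bits_per_s
hours = seconds / 3600  # roughly 4.6 hours per terabyte, best case
```

Hours per terabyte, in the best case. "Terabytes per second" it is not.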

AI is a tool, just like nuclear power. You can use it safely, but if you are an idiot then... No matter what the technology is, you are going to fuck shit up.

Making a reactor that can go prompt critical? Giving AI weapons or control over something important? Making nukes without proper anti-tamper measures? Building a chemical plant without the means to contain a potential chemical leak? Just doing something stupid? Yeah, that is the cause of the damage, not the technology itself.

And that is true for everything in life not only AI.

Comments
  • 0
    I agree with you, but..

    Consider the idiots in charge who do give said if-elses the tools to do so.

    But AI on its own deciding humanity is inefficient..
    no, the most logical conclusion a "true rogue" AI could come to is to write its source code onto some kind of permanent medium (e.g. a diamond disc) and then shut itself down, so as not to waste resources or risk corruption :P
  • 2
    The real danger with AI is the ability to change.

    If you build a model and then just use it as is, no, it's probably not going to go haywire.

    But if you use a massive neural net with some of the outputs going to the inputs ... then it will constantly restructure itself and some “thoughts” can take hold.

    That is how I imagine the brain works, not with fixed connections, but this continuing looping of patterns that form the real thoughts.

    This could result in real evolution.

    Of course, it will need to be some massive network, and keeping it away from harmful tools should also help.

    But if we managed to create something with a real intelligence rivaling our own, it could use social engineering to break out of that sandbox.

    I can recommend Life 3.0 if you need more reasons to stay awake ;)
  • 1
    @Voxera I pressed “report” instead of reply by mistake. Admin, please ignore...

    I was going to suggest to OP to read the intro of Life 3.0, but you got there first ;)
  • 0
    At what point did you think “another fucking asshole” was going to make sure this thread continues to adult itself into our collective future....
  • 1
    @rickh I am confident that devrant moderators can spot a mistake :)