Be sure to check out the Rust Embedded book: https://rust-embedded.github.io/boo...
There are Hardware Abstraction Layer (HAL) crates for a few architectures, so it might even be possible to avoid unsafe completely.
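For example, here's a minimal sketch of what HAL-based code can look like (assuming the embedded-hal 1.0 traits; Blinker is a made-up type for illustration, not from any real crate):

// Generic over any pin implementing the HAL trait, so the same code
// works on every chip that has a HAL crate: no unsafe, no raw
// register writes.
use embedded_hal::digital::OutputPin;

struct Blinker<P: OutputPin> {
    led: P,
}

impl<P: OutputPin> Blinker<P> {
    fn new(led: P) -> Self {
        Blinker { led }
    }

    // Errors surface through the pin's own associated error type
    // instead of silently poking memory-mapped registers.
    fn on(&mut self) -> Result<(), P::Error> {
        self.led.set_high()
    }

    fn off(&mut self) -> Result<(), P::Error> {
        self.led.set_low()
    }
}

-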
I'd suggest doing coding challenges first for learning.
Advent of Code, HackerRank, CodeWars, ... something like that.
I found them perfect for learning new languages.
@Geoxion
Even if you end up not liking it, learning Rust will make you a better C++ programmer. At least that was the case for me.
My advice: avoid unsafe like the plague. Not because it's hard (after all, unsafe Rust is basically just C++) but because it provides an easy escape hatch that could keep you from adjusting your thinking.
Old saying: "A language that doesn't change the way you think about problems is not worth your time."
When writing Rust, try to think about data and how it should move through your app.
When you find yourself jumping through hoops, rethink your approach.
Rust might seem very similar to other languages you know. The differences are subtle. It's very easy to skim over the ownership rules and pretend you fully understood them.
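To make the ownership point concrete, a tiny sketch (plain Rust; the function names are made up):

fn consume(s: String) {
    // Takes ownership; `s` is dropped when this scope ends.
    println!("owned: {s}");
}

fn inspect(s: &str) {
    // Only borrows; the caller keeps ownership.
    println!("borrowed: {s}");
}

fn main() {
    let greeting = String::from("hello");
    inspect(&greeting); // fine: just a borrow
    consume(greeting); // `greeting` is moved into the function
    // println!("{greeting}"); // compile error: use of moved value
}

It looks almost like the C++ equivalent, but the move in the last call is enforced by the compiler rather than being a convention.

-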
@endor Well, people will most likely connect it to the internet, at which point the possibilities are pretty much endless.
A GAI in a box is basically just a box. So you need it out in the world for it to be able to do something useful.
Also, whoever gets a GAI first can expect serious benefits from that. And the ones who take their time worrying about security probably won't be the first.
So the situation in itself is quite worrying.
I am still optimistic though. We have really smart people working on this stuff.
-
And now here's the kicker: self-preservation is a convergent instrumental goal. Whatever your goals are, you can't achieve them when you're dead, most of the time ;)
So a General Artificial Intelligence (GAI) will most likely try to prevent you from turning it off.
Goal preservation is another convergent instrumental goal. If you want to go to Paris and I offer you brain surgery that would make you stop wanting to go to Paris and want to play Candy Crush instead, you'd probably refuse, because the surgery would result in you not going to Paris, and going to Paris is all you care about right now.
So a GAI will most likely try to prevent you from changing its goals, which in turn prevents you from repairing it if you notice it's doing bad stuff.
As for why it might decide to do bad stuff ... basically, what you say is not always what you want...
You should watch Robert Miles' videos.
-
You should have a look at the YouTube channel of Robert Miles. He's an AI researcher at Nottingham and explains it way better than I can.
TLDR:
Terminal goals are things you just want. E.g. I want to travel to Paris and I don't need a reason for that.
Instrumental goals are intermediate goals on the way to terminal goals. E.g. I am looking for a train station because that will take me to Paris, not because I want to travel in a train per se.
(For the argument it's irrelevant if getting to Paris is just an instrumental goal for another, unknown terminal goal.)
Convergent instrumental goals are goals that are instrumental for a wide spectrum of terminal goals. E.g. whatever my goals are, money is probably gonna help me achieve them. So although humans have vastly different goals, you can make predictions about their behaviour by assuming they're gonna want money.