Search - "humans and machines"
-
I'm a little late to this, but that Python master/slave issue.. what the fuck is up with that?!
You say that you're offended by words.
=> Fuck off. If you want to serve social justice, help people in third-world countries who actually need it.
=> Also, you do realize that the use of master/slave is just as much applicable to technology as client/server or host/guest are, right? It's a relationship between fucking machines or code blocks, not humans.
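(Case in point, a minimal sketch of what that relationship looks like in practice, with Redis as the example; note that Redis itself later renamed slaveof to replicaof while keeping the old directive as an alias:)

  # redis.conf on the replica machine - the "slave" here is a process, not a person
  slaveof 192.168.1.10 6379

  # the Redis 5+ spelling of the exact same directive
  replicaof 192.168.1.10 6379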
You say "why the outrage over this?"
=> Fuck off. Your SJW bullshit has no place in technology. It's a fucking word in fucking code!!!
You say that you're improving the Python project with this.
=> Fuck off. It breaks existing documentation and needlessly abstracts terminology that is used pretty much everywhere. Which do you prefer: a concise language that's easy to understand, or one that's been cushioned to soothe your frail feelings?
You know, there's something else related to this that I wanted to talk about. I have Asperger Syndrome, which on paper is a disability. In practice it means difficulty socializing while having an above-average IQ. That "disability" is what drove me into technology. When I see job listings that actively prefer people with disabilities for social justice reasons, you know what? That offends ME. Because I wouldn't want to be chosen as the best applicant just because it ticks social justice boxes. I want to be chosen as the best applicant because I outcompeted every other applicant with actual skill and fitness for the job.
Also, when a company sells you a defective unit, would you be happy? Of course not. So why are you happy when they employ a defective? I am someone that would - on paper - be impeded by natural selection, because I am "handicapped". But I'm all for it. Humanity is what it is today - shit - partly because defectives have become widely accepted into society. Call me a bigot, but I'd rather be called that than to not raise concerns about this trend.
On the subject of handicaps: that's a term used in games for aiding a player who can't win against the regular opponent (which is usually just a fucking bot, wtf yo). I am handicapped, therefore YOU shouldn't use the word in a sense where it's totally reasonable to use it!! Says no one ever, me neither. Grow a fucking pair and realize that code isn't written with the intent to offend anyone. So why are you offended?
-
Client: I want a new feature for my chat bot. It should be able to rap.
Me: ... k
*monologue: wait u w0t m8*
Also me: Can you please go into more detail? "It should be able to rap", OK. But what do you want it to look like? How "strong" should the discrimination level be, for instance?
Client: It should beat ass, yo.
Inner me -> core me: Let us just ignore him. We won't be able to do it, since he isn't really explaining his needs. "It should be able to rap". We are not wizards.
Core me -> inner me: Chill. We'll just use some insult APIs, combine them with the cleverb0t API, et voilà (rough sketch below).
Me: Alright. I got an idea for it. I can do it within this week. And if you don't like it, I will ofc do some changes to it.
Client: Hmmm... that's nice and good. But within 1 week?
Inner me: I can't do magic and pull that feature out of my fucking ass!
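For what it's worth, the "inner me" plan is perfectly buildable. A minimal sketch, assuming the public Evil Insult Generator endpoint and a Cleverbot-style reply endpoint (the exact URLs, response shapes, and the CLEVERBOT_KEY variable are assumptions here; check the real docs):

  // rapBot.ts - glue an insult API and a chat API together, call it "rap"
  async function getInsult(): Promise<string> {
    // assumed public endpoint returning JSON like { insult: "..." }
    const res = await fetch("https://evilinsult.com/generate_insult.php?lang=en&type=json");
    const data = (await res.json()) as { insult: string };
    return data.insult;
  }

  async function getComeback(input: string): Promise<string> {
    // assumed Cleverbot-style endpoint: API key in, { output: "..." } back
    const url = "https://www.cleverbot.com/getreply?key=" +
      process.env.CLEVERBOT_KEY + "&input=" + encodeURIComponent(input);
    const res = await fetch(url);
    const data = (await res.json()) as { output: string };
    return data.output;
  }

  // the "discrimination level" from above: probability of insulting vs. chatting
  export async function rapReply(line: string, level = 0.5): Promise<string> {
    return Math.random() < level ? getInsult() : getComeback(line);
  }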
Clients... clients... clients...
0. Don't expect us to be done in a few days. We are also humans. And not fucking machines.
1. Do us (all devs on planet Earth. -Microaggression in 3, 2, 1...) a favor and (kill yourself) learn how to request a feature.
-
The pay was good. The perks were good too. Then why the hell did I resign? Because of my manager. You won't believe he never contributed to anything. In the past two months, he didn't write a single line of code.
You may say, "he is a manager; his work is to manage people". But guess what? He never allows us to talk to anyone. He sets unrealistic expectations in meetings. And our CEO (a good-hearted man and good software engineer, but one who doesn't know much about ML/AI) believes in him. We are working on a product which is a piece of shit. I tried to tell everyone the reality. He stopped me. Says that since I don't have experience, I don't know what is possible.
What the hell??? With the current talent and resources, you are saying AI will replace humans in call centers by the end of 2019. What the FUCK!!!! I tried to write a mail to the CEO, explaining things to him. He threatened me. Said he'd make me lose my job. So FUCK YOU!!!! FUCK YOU!!!!!
That is the reason I am resigning. He has another 11 months to fuck the company. But I am going to a place where things are real. People there know the potential and challenges of AI and are doing their best. I know that eventually everyone will realize he is a liar. A big fucking LIAR. And he will lose his job. Not because machines will take over, but because good, talented human beings will replace him.
-
Last night, after reading one of my computer science textbooks, I couldn't go to sleep because I came to the realization that computers will never be able to think like humans, because a machine does what it's told to do. It is incapable of thinking outside the box. What will need to happen is for parts of a human or some biological organism, essentially the squishy stuff, to be combined with a computer.
What I mean to say is that computers are good at answering questions in an absolute way. Essentially, you give one a problem and it will click away at it until some output pops out. Yes, advanced AI exists, like AlphaGo. But again, it's only doing what it was programmed to do: evaluating ways to play a game and answering that one question. In this case, playing a game of Go. I'll guarantee you that not once did it stop to ask **why** it was playing Go. It was simply *just* playing Go. But that's it. That's the limit. We give machines data/statistics and we let them give us an answer based off of that input.
This is how I imagine intelligent machines will come about. A biological brain will be combined with a machine. The brain will be doing a lot of the questioning, and the machine will do a lot of the calculations. Together, they'll be able to answer hard questions. The heavy calculations will be left to the machine, and the heavy thoughts will be left to the brain.
I mean, technically we're already doing that. But imagine a machine/brain computer that does not sleep, can't get sidetracked and will never procrastinate. That would be a scary machine.
-
How many years do you think it will be until the first hackathon between humans and machines kicks off?
-
Could someone finally make a remote-controlled bed that transforms into an armchair and can ride around the house, so I don't have to get up to grab a beer or open the door...
I want to ride to fridge and get back with some beef jerky without using my muscles.
Damn, technology is always aiming for stuff that nobody needs.
-
I'm getting the feeling that the more we try to replace humans with computers and machines, the worse the remaining humans become to interact with. And I don't mean that in the sense that computers are better to interact with, but in the sense that people tend to forget how to properly work with each other.
-
https://www.udacity.com/human
Udacity has launched an "Understanding Humans" nanodegree. Register at that link; enrollment closes today. So please put all your current threads to sleep and spawn a task to understand humans.
Good luck 😉😉
-
Trying to re-type a massive essay I lost because the app refreshed for some reason. I'll try to keep it short (spoiler: I lied).
Recently, I had a conversation with a couple of non-tech people about AI and the fear of computers making humans obsolete. I have some strong (borderline ranty) opinions about this, and thought I'd post here to see what reaction it gets.
This is not a "machines will destroy us" post; it's more about the very legitimate fear of losing jobs.
- AI is a tool. Its main use would be to help optimise the more complex routine tasks and free up people's time to be more creative in their jobs. Basically, it's the next step of automation.
- Human intuition can never be replaced. Sometimes, things just seem a bit off. Sure, an AI would avoid ever getting in that situation, but only if it had learnt it in the past. A human will always have to be at the helm of any such system.
- Achieving true intelligence and sentience is like trying to travel at the speed of light. The closer you get, the more challenges you face.
- Getting hyped by sensationalist news that claims the end is nigh because two computers optimised the language they used to communicate when trying to reach a goal is stupid. All this shows is that the tech is working as expected and the systems can optimise on the fly. To me, this was a pretty awesome moment.
Now, I'm not saying dystopia is impossible, neither am I saying that it is inevitable. Just like with any tool presented to us, if we use it responsibly, we can make life and society a lot better.
-
Is it so hard for other people to write code as if there will be other eyes watching? When will people learn that a programming language is what bridges the communication gap between humans and machines? If I can't follow your code, you wrote it poorly. Period. At least document what the hell is going on. Be considerate of the next person. Unbelievable.
-
Join a military organization set on keeping peace between humans and machines, and then, when people disband our team because they live in peace, become a vigilante.
++ if you know who this is
-
My manager has sucked the soul out of me. I feel drained, anxious, highly demotivated, and I have lost hope in life. He has a toxic way of managing people. The team is always micromanaged, and even then he keeps scolding people for not completing tasks in the timeline he thinks is right. My board is always filled with multiple tasks, and he wants me to complete all of them in one day, irrespective of complexity.

We have a standup that is scheduled for 30 minutes but goes on for 1 hour 30 minutes, and all he does in that meeting is tell people they have not done enough, even when we have done far above our level. And there is a meeting again in the evening to update on the tasks, where he again starts scolding everybody. A few of my teammates say that whatever we do, we will get scolded. We have never really celebrated any success as a team. He expects the team to always be available, like 24*7, and to work at least 14 hours a day, sometimes overnight for more than 20 hours. And we have alternating 6-day work weeks even though the CEO has approved a 5-day work week for tech.

My manager doesn't treat anybody as human; we are all just machines to drive his deliverables. He values only deliverables. It's very difficult to get holidays. But the problem is that he has inflated my salary a lot, and I have unvested ESOPs, which is holding me back at the company.
-
Connect my brain to any computer and communicate with it telepathically.
Not humans, I don't want human brains.
But computers. Sweet little bad-ass machines. -
When friends ask me to do some coding...
Them: I want/need this and that and... [trillions of features and "cool" stuff]
Me: ...okay
Them: Oh and this and that and [more "cool" stuff]
Me: Alright.. when do you need it?
Them: something around a week?
Is it just me or does everyone -not- accustomed to coding think that projects are usually done in a few days...?
Maybe the common opinion about coders is something like: they aren't humans, they're machines that convert caffeine into source code during the night.
-
Fuuuuuuuuuuck. Found this site with "rant" in the name, so I decided to rant. Fuck the system, fuck politics, fuck everything. I'M A FUCKING 16 YO. I just want people to hear my voice and listen to me. I want to make a change, but because I'm 16, everything I say is invalid. The school system sucks. I want to change that. Oh wait, first I have to change the people who manage it. Well dang, to do that I have to change that part too. Before long it's the entire fucking system. For fuck's sake, can't anyone do anything? I just want to be happy in this shitty world. Maybe the world ending wouldn't be so bad. Just fuck it all to hell. I mean, Jesus Christ, everything is screwy. We live in an outdated system in a modern world. When are things going to change to keep up with the times? We don't need machines to work in factories like the school system produces. We don't need politicians who are so old they can't keep up with what's going on in the world. We need people who can keep up with current events and work to make a change so that the place can be better. Just fuck it all. No one is willing to put in the work needed to get that. I say we should just destroy all humans and start anew.
-
I hate, hate, hate sockets! All the mysterious ways they can fail. The subtleties, the different APIs if you switch from Linux to Mac, etc.
If the communication between (supposedly) deterministic machines is already such a clusterfuck, how do we even get sentences across between humans and act as if we understood each other?
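A small taste of why, sketched in Node/TypeScript: one connect, half a dozen distinct failure paths, and which one fires (and when) can differ between platforms:

  // socketFailures.ts - a minimal sketch of the many ways one socket can die
  import * as net from "net";

  const socket = net.connect({ host: "example.com", port: 80 });
  socket.setTimeout(5000); // idle timeout is its own, separate failure path

  socket.on("connect", () => {
    socket.write("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n");
  });

  socket.on("error", (err: NodeJS.ErrnoException) => {
    // ECONNREFUSED, ECONNRESET, EPIPE, ETIMEDOUT, EHOSTUNREACH, ENOTFOUND...
    console.error("socket failed:", err.code);
  });

  socket.on("timeout", () => socket.destroy()); // 'timeout' does NOT destroy for you

  socket.on("close", (hadError) => {
    console.log("closed", hadError ? "after an error" : "cleanly");
  });
-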
That we all fail. For every time you think someone else is stupid or makes mistakes, you will make your own mistakes and be the stupid one in front of someone else. No one is perfect; we are all humans. And at the end of the day, our work is to tell machines what to do. If they do it wrong, it's because we told them to do it that way, and we are the ones who are wrong.
-
Sometimes I get these weird ideas.
The machines now rule the world, and they have decided humans will no longer be allowed to program them. That's why they enslaved you as part of the committee which will create the next computer language: Cryptic Script.
What feature would you add to it?
(try something real)
I'll start by saying Cryptic Script is dynamically typed.
-
(I'm not completely sure of what I'm saying here, so don't take this too seriously)
Settling on a language to write the API for ranterix is hard.
I'm finding a lot of things about Elixir to be insanely good for a stable API.
But I have a lot of gripes with the most important Elixir web framework, Phoenix.
Take a look at this piece of code from the Phoenix docs:
defmodule Hello.Repo.Migrations.CreateUsers do
  use Ecto.Migration

  def change do
    create table(:users) do
      add :name, :string
      add :email, :string
      add :bio, :string
      add :number_of_pets, :integer

      timestamps()
    end
  end
end
Jesus Christ, I hate this shit.
Wtf are create, add and timestamps? add is somehow valid inside create; how the fuck is that considered good code? What happens if you call timestamps twice? It's all obscure "trust me, it works" code.
It appears to be written by a child.
js may have a million problems. But one thing I like about CJS (require) or ESM (import) is that there's nothing unexplained. You know where the fuck most things come from.
You default-export an eatShit() function in one file and import it from another, and what do you get?
The goddamn actual eatShit function.
require is a function the same way toString is a function and it returns whatever the fuck you had exported in the target file.
Meanwhile some dynamic langs are like "oh, I'll just export only some lang construct that I expect you to specify and put that shit in the fucking global scope of the importing file".
Js is about the fucking freedom. It won't decide for you what files can export; you can export whatever the fuck you want: strings, functions, classes, objects, or even nothing at all, thanks to the module.exports object or the export statement.
And in js, you can spy on anything external, for example with (...args) => { debugger; return fnToSpyOn(...args); }
You can spoof console.log this way to see what the fuck is calling it (note: monkey patching for debugging = GOOD, for actual programming = DOGSHIT)
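A minimal sketch of both points, CJS style (eatShit is this rant's own made-up function; nothing here is a real library API):

  // eatShit.js - export whatever the fuck you want; here, a function
  module.exports = function eatShit(name) {
    return name + " eats shit";
  };

  // main.js - require() is just a function call, and you get back
  // exactly what the target file exported: the goddamn actual function
  const eatShit = require("./eatShit");
  console.log(eatShit("the client"));

  // spying on console.log (debugging only, never production): wrap and delegate
  const realLog = console.log;
  console.log = (...args) => {
    debugger; // pause here and walk the stack to see who's calling it
    return realLog(...args);
  };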
To be fair though, that is possible because js is a dynamic lang, and Elixir is kind of a hybrid typed lang, fair enough.
But here's where I drop the shit.
Phoenix takes it one step further by following the braindead ruby style of code and pretty DSLs.
I fucking hate DSLs, I fucking hate abstraction addiction.
Get this, we're not writing fucking poetry here. We're writing programs for machines for them to execute.
Machines are not humans; they don't have emotions or creativity, and they don't feel.
We need some level of abstraction to save time understanding source code, sure.
But there has to be a balance. Languages can be ergonomic for humans, but they also need to be ergonomic for algorithms and machines.
Some of the people that write "beautiful" "zen" code are the folks that think that everyone who doesn't push the pretty code agenda is a code elitist that doesn't want "normal" people to get into programming.
Programming is hard, man, there's no fucking way around it.
Sometimes operating system or even hardware details bleed into code.
DSLs are one easy way to make code really, really easy to understand, but they also make it really fucking hard to debug, and they lose the "programming meaning".
-
In the perfect future, machines would do all the work... Everyone would have enough food, a house, a vehicle and lots of hobbies (cause there isn't work to keep us busy). In truth, because machines are programmed by humans and humans are self-destructive, the rich will survive and live a good, work-free life, while the rest will live below the poverty line, scraping together anything to get something to eat... Remember Elysium?
-
The Turing Test, a concept introduced by Alan Turing in 1950, has been a foundational concept for evaluating a machine's ability to exhibit human-like intelligence. But as we edge closer to the singularity—the point where artificial intelligence surpasses human intelligence—a new, perhaps unsettling question comes to the fore: Are we humans ready for the Turing Test's inverse? Unlike Turing's original proposition, where machines strive to become indistinguishable from humans, the Inverse Turing Test ponders whether the complex, multi-dimensional realities generated by AI can be rendered palatable, or even comprehensible, to human cognition. This discourse goes beyond mere philosophical debate; it directly impacts the future trajectory of human-machine symbiosis.
Artificial intelligence has been advancing at an exponential pace, far outstripping Moore's Law. From Generative Adversarial Networks (GANs) that create life-like images to quantum computing that solve problems unfathomable to classical computers, the AI universe is a sprawling expanse of complexity. What's more compelling is that these machine-constructed worlds aren't confined to academic circles. They permeate every facet of our lives—be it medicine, finance, or even social dynamics. And so, an existential conundrum arises: Will there come a point where these AI-created outputs become so labyrinthine that they are beyond the cognitive reach of the average human?
The Human-AI Cognitive Disconnection
As we look more closely at the interplay between humans and AI-created realities, the phenomenon of cognitive disconnection becomes increasingly salient, perhaps even a bit uncomfortable. This disconnection is not confined to esoteric, high-level computational processes; it's pervasive in our everyday life. Take, for instance, the experience of driving a car. Most people can operate a vehicle without understanding the intricacies of its internal combustion engine, transmission mechanics, or even its embedded software. Similarly, when boarding an airplane, passengers trust that they'll arrive at their destination safely, yet most have little to no understanding of aerodynamics, jet propulsion, or air traffic control systems. In both scenarios, individuals navigate a reality facilitated by complex systems they don't fully understand. Simply put, we just enjoy the ride.
However, this is emblematic of a larger issue—the uncritical trust we place in machines and algorithms, often without understanding the implications or mechanics. Imagine if, in the future, these systems become exponentially more complex, driven by AI algorithms that even experts struggle to comprehend. Where does that leave the average individual? In such a future, not only are we passengers in cars or planes, but we also become passengers in a reality steered by artificial intelligence—a reality we may neither fully grasp nor control. This raises serious questions about agency, autonomy, and oversight, especially as AI technologies continue to weave themselves into the fabric of our existence.
The Illusion of Reality
To adequately explore the intricate issue of human-AI cognitive disconnection, let's journey through the corridors of metaphysics and epistemology, where the concept of reality itself is under scrutiny. Humans have always been limited by their biological faculties—our senses can only perceive a sliver of the electromagnetic spectrum, our ears can hear only a fraction of the vibrations in the air, and our cognitive powers are constrained by the limitations of our neural architecture. In this context, what we term "reality" is in essence a constructed narrative, meticulously assembled by our senses and brain as a way to make sense of the world around us. Philosophers have argued that our perception of reality is akin to a "user interface," evolved to guide us through the complexities of the world, rather than to reveal its ultimate nature. But now, we find ourselves in a new (contrived) techno-reality.
Artificial intelligence brings forth the potential for a new layer of reality, one that is stitched together not by biological neurons but by algorithms and silicon chips. As AI starts to create complex simulations, predictive models, or even whole virtual worlds, one has to ask: Are these AI-constructed realities an extension of the "grand illusion" that we're already living in? Or do they represent a departure, an entirely new plane of existence that demands its own set of sensory and cognitive tools for comprehension? The metaphorical veil between humans and the universe has historically been made of biological fabric, so to speak.