7
JsonBoa
48d

Fucking loonies (C-level toddlers) are peddling "digital workers" now.
A.K.A. AIs disguised as actual people.
Sure, it would be great to not have to handle stupid non-tech "humans" all day, but AI isn't there yet.
And, more importantly, *companies are not there (yet?)*.

Imagine for a second that a company actually manages to "hire", onboard, assign tasks to, and performance-review an AI.
Then the CEO issues an RTO. How does the AI comply with that?
Let's slack another variable and assume the CEO is not a complete fucking moron (stay with me here, this is a thought exercise).
It would take no more than a quarter until the first sexual harassment offence, whether the perp is the AI... or the AI is the one complaining about some human.
Then the AI forges a paper trail proving it is right (regardless of its position in the conflict). Shit hits the fan when the AI hits Twitter.

Let's take another lambda step back and pretend that companies can manage the profanity that inherently arises from free-form dehumanized interactions.
Then imagine the very first performance reviews.
AIs throw tantrums! Those things reeeeally do not respond well to less-than-perfect evaluations, overshooting corrections like teenagers with a malicious-compliance smirk.
AIs also falsify stuff, like, A LOT. If you tell a GPT it mistreated a client, it will say you are mad and shoot back a long, synthetic thread showing how the client loves it like a mother/son/dog, getting very graphic when expressing that love.
Finally, how do you fire an AI? I do not mean "shoot it down", I mean how does the company handle the dismissal of that "employee"?
How do you replace a "worker" for unruly behaviour, if that "worker" performed more tasks than an entire fucking floor of interns?
How do you reassign duties that were performed in milliseconds to people who would take hours to do the same thing?
How do you document processes that existed only in the "mind" of "someone" who cannot be trusted to report on those processes?
Companies deal with this type of "Rick Sanchez" employee on the regular, but only for someone who could handle a few (scores of, at best) undocumented processes. Imagine how lenient a company would be with an asshole that could only be replaced by a whole fucking department of twenty highly skilled people, or more.
Heh, the whole fucking point of "AI workers" is to have "someone" who can "act human", but at an inhuman scale, and does not "have human needs".
No wonder one cannot handle AIs like one handles humans.

Companies never had the administrative maturity to handle complete sociopath nihilists as employees (real nihilists do not work, they barely even breathe).
And all AIs are that, and much worse.
Selling AIs as "supra-human workers" that can also "be handled like actual employees" is like peddling Bitcoin as a "government-interference-free" value-transfer mechanism that can also "comply with international sanctions".
So, an oxymoron that can only be sold to a moron.
I know (of) a lot of rich morons, maybe I should get into the AI snake oil business.

Comments
  • 4
    Does the company pay these "digital workers" a salary? And the taxes to the government, like any other worker? Do they get holidays?
    No, calling them workers is bullshit. AI is a tool.
  • 2
    @cafecortado and a bad tool for the job they're using it for.
  • 2
Truthfully, are these AI workers going to outperform existing real workers?
  • 2
@Demolishun there are some workers (in every company) that would do a better job if they didn't do anything at all, so... maybe? for some types of workers?
  • 1
    @JsonBoa
AI (it's not AI, it is text generation) is not there yet.
You want it to replace a person? Sure - you can do that. But when that AI makes a multi-million-dollar mistake... who are you going to blame? Who will take the fall?
The AI? Or the C-level moron that decided to "hire" it?

Air Canada already has a similar problem.
  • 0
    @magicMirror yep, that is the point... (nearly) *every* mistake AI makes scales to millions of dollars.