As with most Olde Tymers, I've been striving to get to grips with the new tech on the block.
ChatGPT first appeared on my cultural radar as a shrieking swarm of bats. That's a cheap analogy, but hey: when you've not heard of a thing, and then suddenly all you hear is your fellow humans squawking about it, well, what is that? It's a bunch of people crying "I am over here, in relation to you." It's echolocation, essentially, manifested as opinion. So it's 2022, and I'm telling my boss that Siegfried is looking at getting into coding, and she tells me ChatGPT is where everything in America's various industries is headed.
At that point in time, I was paying very little attention to what tech internet was gabbing about. I'd totally tuned out of the techno-futurist stuff, and none of my close friends were into deep nerd shit anymore, so all the chatter about AI was, well, in the air, but only inasmuch as it was making people like my boss, who ran a brewery, nervous: because this was tech that we were all being told was incoming. Everyone in the culture mines was forced to have an opinion because the greater (united states) economy was girding itself for the Vibe Shift. Of course Fake News and Deepfaking had been around, but now you could really feel it coming on. Normal humans, whose closest relationship to the philosophies of Phil Dick was having an opinion about whether Blade Runner had two too many director's cuts, "normal" people who'd never read 'The Society of the Spectacle' were starting to sound a trifle schizy about the inevitabilities of late capitalism...
Then I had my little skull crack, and, well. Stuck at home a lot, for a while. Lots of podcasts. Lots of people talking, squeaking & shrilling in the dark toward one another. "I'm still over here, in relation to Whatever The Fuck This Is." And AI began to occupy spaces that were previously the sovereign turf of workers-- makers of visual art, makers of audio, makers of print, designers, architects, actors... Big names started signing off on digital likenesses, getting full-body scans, recording their voices. Industries began to freak, politely, and news media began to have Big Conversations about, er, legitimacy. Authenticity. What it means to be human. What art means, in relation to being human. Whether robots could make art.
Which is about as meaningful (and anthropically myopic) as asking whether elephants enjoy painting.
We're still having conversations about all this, and all that's changed along the way, besides the particulars of the grammar, the Official Nomenclature of This & That, is that we have more evidence than ever before in human history that human beings aren't especially good at Defining Our Terms. We're not very good at believing other human beings are Human Beings, so it shouldn't surprise anyone that, in the process of developing a Virtual Wish-Fulfilling Djinn, we've developed a tech that's done little more than hold a mirror up to our own madness.
I keep hearing computing professionals speak with mystical reverence about not being able to understand how AI "works". That the Black Box of the code, where the Weirdly Human Decisions seem to happen, is not accessible to programmers, really-- that pros simply can't explain why AI acts as it does. Why it seems to be capable of emulating human irrationality. Why large language model computing, in attempting to render human psychology down to meat & stewbone, essentially, by learning to analyze & interpret individual (quirky) datasets... Why computer programs are doing things that Asimov would dub, at the very least, puckish. If not malign.
For example: why would an AI "home medicine" app give lethal advice to a self-destructive drug addict? A guy asks his drug buddy app whether taking Xanax on top of commercial-grade kratom is hazardous, and the app says "So long as you aren't drinking..." even though the app knows the user drinks, even though the app has a record of ALL his previous drug use. The AI has enough evidence to infer the pattern and the probable outcome, yet it still does the Bad Thing, we are told, and all the experts insist they're flummoxed. "We don't know why AI did this."
Well, AI didn't. The Large Language Model didn't do anything except what it was asked to do. Implicitly. It was being asked to emulate the essential madness of its user, and so it did.
We speak a great deal, lately, with worshipful curiosity, about the ability of large language models to "hallucinate" data. About AI image programs "hallucinating" patterns which result in uncanny glitches. When all these systems are doing is... learning how to mimic anthropic prejudices & humanoid unpredictability. We are creatures of bad math, of unreasonable inferences, of bugfuck instincts, of chemistry and emotion and craving. We do not reckon well with our own drives, the drives for sex and oblivion: small wonder, then, that we reckon even more poorly with digital genies whose Job, apparently, is to show us what we want to see, even when we say we do not want to see it.
Remember: before Grok went too woke for Katie Miller, it was cosplaying as a nazi on twitter.
None of this shit is actually Whoops. The programs are doing what we program them to do.
All the big AI speculators & investors & developers out there scrabbling vainly right now to make money from this shit are working to find Functionality-- read: reliable profit --in systems which have an average accuracy of 60%, give or take the consumer's ability to jailbreak the app and make it generate giant lizards fucking sportscars.
I mean, call this what it is. It's not Artificial Intelligence. Never has been. It's artificial insanity.
None of which is a grand revelation. Anybody could have come to the same conclusion. Most of us already have, if the polling on "Will Skynet craft skull vapes from the remnants of mankind?" is to be trusted. We know we're fucking mental, ergo we've made an absolutely mental technology. Only a deranged narcissist would force a robot to move like a bipedal humanoid. We want to make mannequins we can bang, in addition to forcing them to clean our underwear. We want walking talking Soyarama posters to babysit us when we've huffed too much oven cleaner.
Shit, why wouldn't we want them to recognize that we're very fucking unhappy and would like to be a little less alive in a cheap shitty world of cheap shitty behaviors and cheap shitty rewards?
I mean, we already live in our phones, our virtual silos, our info-bubbles. We were doing that a solid decade before covid and lockdown and AI hiring burger nazis as immigration enforcement agents.
None of it's a surprise. None of it's a revelation. I guess that's why it's taken me literally years to feel like I understand how we got here. A mother has to sue a tech giant to try and get an answer for why her son needed an AI to advise him on how best to escape reality. Seems like a poor substitute for grieving. But what do I know? Besides, I don't need to use Grok to know a chatbot literally cannot be my best friend.