I have a lot of thoughts about so-called “artificial intelligence” in the form of large language models. Sometimes I like them. Other times I don’t. But it’s also pretty disingenuous to pretend it’s so simple.
Even though I’ve written on the topic before, here’s everything I have to say (right now) about the machine learning systems and large language models that marketing has taught us to call “artificial intelligence.”
AI is Copyright Infringement…
I’ve always advocated for crediting artists for their work. Once, I wrote for a local arts organization and was told (verbally, not in writing, unfortunately) that I’d get a writing credit; when the issue came out, only the photographers were credited, not me. (They remembered the next issue, though, so progress, amirite?)
But even before that, I volunteered for a startup arts movement and designed their magazine with the intent of making sure all photographers, writers, and other contributors were acknowledged. (Even if that caused friction when one of the other designers didn’t like how I revised his work to meet a deadline.)
While I definitely understand the notion behind sharing information as broadly as possible – and fully support the open source movement – I’m fully aware we don’t live in some sort of cyber-communist utopia. Taking people’s words and images, without their consent, and profiting from those words and images, is theft. And that’s the only way this technology works: by scraping the public internet for content, copying it (even if temporarily), and then charging people to use it. If the benefits were spread evenly and fairly, that’d be different, but that’s not our current reality.
…But You Can’t Copyright a Recipe
The waters get murky when we talk about code. While I’d never plagiarize someone else’s creative work, I regularly visit communities dedicated to sharing snippets of code. And these aren’t shadowy corners of the “dark web”; these are foundational websites for programmers and web developers. Think of GitHub, Stack Overflow, CodePen. Heck, I’m writing this with WordPress, which is free, open source software with thousands of contributors over a couple dozen years.
Programs aren’t creative writing like fiction or poetry or gonzo journalism: they’re recipes. Instructions, even. Seat a dozen code monkeys in separate rooms with the same problem to solve, and you’d end up with twelve nearly identical copies of two or three different text files.
I’m not saying there’s no ingenuity involved, or even creativity. (After all, sometimes the best solution requires an artist’s mind.) I’m just saying that code snippets aren’t the end product; they’re tools needed to reach the desired result.
In that sense, why not use AI? Heck, earlier today I asked GitHub Copilot, “Write plain javascript to lazy-load a Vimeo iframe,” and got working code that accomplished my goal. While I’d never read someone’s poem, write my own using their themes, and refuse to give them credit (even though I was once accused of doing so), I’ll happily copy/paste from the WordPress Stack Exchange and even bill the client. In my mind, that’s no different than a chef using the French bread recipe from a bag of King Arthur flour, baking a few loaves, and selling the resulting sandwiches.
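For the curious, the working code looked roughly like the sketch below. To be clear, this is my reconstruction of the standard IntersectionObserver approach rather than Copilot’s exact output, and the `.vimeo-placeholder` class and `data-vimeo-id` attribute are names I made up for illustration:

```js
// Lazy-load a Vimeo embed: only inject the iframe once its
// placeholder scrolls into view.
// Assumes markup like: <div class="vimeo-placeholder" data-vimeo-id="123456789"></div>
const placeholder = document.querySelector('.vimeo-placeholder');

if (placeholder) {
  const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach((entry) => {
      if (!entry.isIntersecting) return;

      const iframe = document.createElement('iframe');
      iframe.src = `https://player.vimeo.com/video/${entry.target.dataset.vimeoId}`;
      iframe.width = '640';
      iframe.height = '360';
      iframe.allow = 'autoplay; fullscreen; picture-in-picture';
      iframe.setAttribute('allowfullscreen', '');

      entry.target.replaceWith(iframe); // placeholder out, player in
      obs.disconnect(); // one video, so stop watching once it loads
    });
  });

  observer.observe(placeholder);
}
```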
AI is Fun…
Seriously, I have a blast reading my child stories generated from prompts like, “An episode of Are You Afraid of the Dark? set in a haunted candy store.” I’ll even generate “illustrations” with a locally installed copy of Stable Diffusion. Heck, I’ve even taken selfies and put my face on cowboys and human-cat hybrids.
…And So Are Explosives
Thing is, this whole situation where Twitter became so overrun with AI-generated, nonconsensual pornography that it just returned “No results found” when you searched “Taylor Swift AI” is – to put it lightly – a quick lesson in why regulations exist.
Truth is, when I first went looking for image generation models, I quickly learned that one of the largest repositories had entire sections of its site dedicated to celebrities. You can literally search “Taylor Swift,” disable “Hide NSFW Models,” and make your own smut in a matter of minutes.
It’s honestly far too easy to violate someone’s privacy, and there are currently no repercussions for doing so. While the Tay-Tay drama on Twitter unfolded, the trending searches and hashtags were filled with workarounds to find exactly what Twitter was blocking, like using a lowercase “L” in place of the final letter of #TaylorSwiftAi.
I’m not going to lie and say that I’ve never crossed state lines with a trunk full of bottle rockets. But I can say, with confidence, that if I could find those big ol’ mortar shells sold from tents in Wal-Mart parking lots, I’d be typing with fewer than ten digits.
AI Can Save Lives…
There are currently multiple apps that let hospital staff talk into their phones and receive a SOAP report (Subjective, Objective, Assessment, Plan), shaving several minutes off paperwork – minutes healthcare professionals can then spend with a patient. Likewise, some hospitals are experimenting with AI that listens to a patient’s conversation with their provider (ugh, I hate that word) and generates progress reports.
…Like a Fox
Self-driving cars, using machine learning, are killing people.
This shouldn’t be a surprise. People love to pretend that computers and AI and robots are completely unbiased, perfectly reasonable, infallible decision makers, and that’s a load of – you’ll have to pardon my French – poppycock.
We’ve known this for years, if not decades. Isaac Asimov’s I, Robot wasn’t about a perfect world where the “Three Laws of Robotics” keep everyone safe and happy; it was about how easily even the simplest of systems can fail, especially when we forgo empathetic human decision making. (“But it’s so simple! After all, if there are only three rules, what could go wrong?”)
The second my doctor makes a diagnosis with AI is the same moment I’m registering for the class action. Sorry, but I’d rather take my chances with Dr. WebMD than ChatGPT, DDS.
AI Isn’t AI
Just because it can write grammatical, original sentences doesn’t mean it can think.
While ChatGPT and Bard and even Microsoft’s racist Twitter chatbot sound like they’re communicating with you, they aren’t. Yes, they can remember information over the course of several text exchanges. But they aren’t actually learning from you. They are, instead, drawing on millions of pieces of data they digested a year ago and, based on recognized patterns, repeating whatever an incredibly complicated algorithm wants them to say. That’s only different from a traditionally trained program in the amount of information given to it; it still can’t think outside the boundaries of that information. This is not HAL 9000; this is a Speak & Spell.
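If that sounds abstract, here is a toy version of the idea. The sketch below is nowhere close to how a real large language model works, but it shows the principle at cartoon scale: the program only ever emits rearrangements of what it was fed, no matter how chatty the output sounds.

```js
// A cartoon-scale "language model": a bigram table built from one
// training sentence. It can only emit words it has already seen,
// in orders it has already seen. Pattern matching, not thinking.
const training = 'the cat sat on the mat and the dog sat on the rug';
const words = training.split(' ');

// Map each word to every word that has followed it in the training text.
const followers = {};
for (let i = 0; i < words.length - 1; i++) {
  (followers[words[i]] ??= []).push(words[i + 1]);
}

// "Generate" text by repeatedly picking a random observed follower.
function generate(start, maxWords) {
  const out = [start];
  while (out.length < maxWords) {
    const options = followers[out[out.length - 1]];
    if (!options) break; // nothing ever followed this word; we're stuck
    out.push(options[Math.floor(Math.random() * options.length)]);
  }
  return out.join(' ');
}

console.log(generate('the', 8)); // e.g. "the cat sat on the dog sat on"
```

Scale that table up to billions of parameters and the output gets uncannily fluent, but the boundary stays the same: nothing comes out that wasn’t, in some statistical sense, already in.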
We are nowhere near The Matrix or Cyberdyne or whatever Harrison Ford is in Blade Runner. We aren’t dealing with machines that can think on their own, just machines that can think really, really fast and have a buttload of reference material to play around with. They still, ultimately, need us to give them a prompt. If this were actual AI, it would be giving us a prompt instead.
What do you think?