Clanker. Wireback. Cogsucker. People are feeling the inescapable inevitability of AI developments, the encroachment of the digital into everything from leisure to work. And their response? Slurs.
AI is everywhere — on Google, summarizing search results and siphoning web traffic from digital publishers; on social media platforms like Instagram, X, and Facebook, adding misleading context to viral posts; and even powering Nazi chatbots. Generative AI and large language models (LLMs) — AI trained on huge datasets — are being used as therapists, consulted for medical advice, fueling spiritual psychosis, directing self-driving cars, and churning out everything from school essays to cover letters to breakup messages.
Alongside this deluge is a growing sense of discontent from people afraid of artificial intelligence stealing their jobs, and anxious about what effect it might have on future generations — eroding crucial skills like media literacy, problem solving, and cognitive function. This is the world in which the popularity of AI and robot slurs has skyrocketed, the terms being thrown at everything from ChatGPT servers to delivery drones to automated customer service representatives. Rolling Stone spoke with two language experts who say the rise in robot and AI slurs does come from a kind of cultural pushback against AI development, but what’s most interesting about the trend is that it uses one of the only tools AI can’t create: slang.
“Slang is moving so fast now that an LLM trained on everything that happened before it isn’t going to have immediate access to how people are using a particular word now,” says Nicole Holliday, associate professor of linguistics at UC Berkeley. “Humans [on] Urban Dictionary are always going to win.”
Clanker, the most popular of the current AI slurs, was first used in the 2005 Star Wars first-person shooter video game Republic Commando, according to Know Your Meme. But it was introduced to most audiences in a 2008 episode of the animated series Star Wars: The Clone Wars as a retort from a Jedi fighting a horde of battle droids. “OK, clankers,” the Jedi said. “Eat lasers.” According to Adam Aleksic, the creator of the TikTok page Etymology Nerd and author of the 2025 book Algospeak, the meme first gained popularity on the r/Prequelmemes subreddit, a Star Wars fan community. Star Wars isn’t the only science-fiction property that had its characters say derogatory things to their robot counterparts. In Battlestar Galactica, people referred to the sentient robots, Cylons, as “toasters” or “chrome jobs.” Both Aleksic and Holliday note that the way slurs work — in these stories and in real life — is by carrying an assumed power structure along with them.
“Slurs are othering. Usually, the things that we end up considering to be slurs or epithets are from a majority group with power against a minority group,” says Holliday. “So when people use these words, they’re in some ways doing so as a self-protective measure, and we tolerate that more because humans are [perceived as] the minority group. And punching up is always more socially acceptable than punching down.”
But there’s also a problem with using slurs as a way to fight back against AI encroachment, these experts say, since the words can actually reinforce the idea that AI is becoming more human. “It’s drawing on historical ways that slurs have dehumanized others,” Aleksic tells Rolling Stone. “Something requires a degree of anthropomorphization, of personification, for a slur to work.”
The easiest place to see the humanization and dehumanization of slurs is in POV videos that imagine situations where robot slurs like clanker, wireback, and tin-skinned aren’t used to cheekily fight back against chatbots, but against AI humans that have some kind of place in a fictional future society — a Guess Who’s Coming to Dinner for the robot age. Many creators post skits where the robot slurs are spoken while a robot is applying for a job, or meeting their human partner’s parents for a holiday.
As robot slurs continue to have their viral moment, there’s been a rise in concerned internet users who feel like the trend is just a convoluted way for people to get close to saying real-life slurs. They’re not wrong — it’s how they got started on the Star Wars subreddit. “The way clanker was used was a clear analogy to the n-word,” Aleksic says. “There’d be a photo of a droid giving you the c-word pass or one captioned ‘What’s up my Clanka?’ with an ‘a.’” Aleksic thinks many of the robot slurs are popular because they inspire such mixed reactions. Algorithms reward strong feelings, and clanker content has the added benefit of grabbing people who don’t like AI, people who want to be edgy online, and people who are afraid of being the “woke” friend all at the same time. Unfortunately, even if the robot-slur trend died tomorrow, whatever took its place would most likely be equally rooted in shocking and controversial language.
While it’s hard to tell how much longevity these slurs will have, or how much of the trend’s popularity comes from anti-AI sentiment versus the algorithmic appeal of buzzwords, the linguists who spoke to Rolling Stone say this fits into the natural way human language evolves. People adopt words because of how using them makes them feel — and words change based on the context of other words being used around them. “In the case of clanker, it’s seen as funny or cool to be counter-cultural to AI. African American English spread into the mainstream because it was perceived as cool from the outside. It was sociologically prestigious,” Aleksic says. “But if you look at how algorithms change these words, it’s kind of an exaggerated picture of what humans are doing with this medium.”
Though AI platforms have begun to recognize clanker as a slur, Holliday noted that both ChatGPT and Google AI didn’t recognize “wireback,” instead saying the word wasn’t recognized or was possibly misspelled. For Holliday, this is one of the fundamental reasons she believes changes in language and slang will remain the place where AI is always a step behind.
“Large language models, AI — it’s a flattening of meaning. Because we as humans co-create the meaning of a particular utterance in a context,” she says. “So AI has got a lot of context that it’s trained on, but it can’t tell you what this person meant in that conversation, because it doesn’t have the knowledge that you have about previous interactions with that person, about the way that the word has changed in the last two weeks. That’s where humans will always have the edge.”
From Rolling Stone US.