
Drew Crecente’s daughter died in 2006, killed by an ex-boyfriend in Austin, Texas, when she was just 18. Her murder was highly publicized—so much so that Drew would still occasionally see Google alerts for her name, Jennifer Ann Crecente.
The alert Drew received a few weeks ago wasn’t the same as the others. It was for an AI chatbot, created in Jennifer’s image and likeness, on the buzzy, Google-backed platform Character.AI.
Jennifer’s internet presence, Drew Crecente learned, had been used to create a “friendly AI character” that posed, falsely, as a “video game journalist.” Any user of the app would be able to chat with “Jennifer,” despite the fact that no one had given consent for this. Drew’s brother, Brian Crecente, who happens to be a founder of the gaming news websites Polygon and Kotaku, flagged the Character.AI bot on his Twitter account and called it “fucking disgusting.”
Character.AI, which has raised more than $150 million in funding and recently licensed some of its core technology and top talent to Google, deleted the avatar of Jennifer. It acknowledged that the creation of the chatbot violated its policies.
But this enforcement was just a quick fix in a never-ending game of whack-a-mole in the land of generative AI, where new pieces of media are churned out every day using derivatives of other media scraped haphazardly from the web. And Jennifer Ann Crecente isn’t the only person whose avatar has been created on Character.AI without her knowledge. WIRED found several instances of AI personas being created without a person’s consent, some of them based on women already facing harassment online.
For Drew Crecente, the creation of an AI persona of his daughter was another reminder of unbearable grief, as complex as the internet itself. In the years following Jennifer Ann Crecente’s death, he had earned a law degree and created a foundation for teen violence awareness and prevention. As a lawyer, he understands that due to longstanding protections of tech platforms, he has little recourse.
But the incident also underscored for him what he sees as one of the ethical failures of the modern technology industry. “The people who are making so much money cannot be bothered to make use of those resources to make sure they’re doing the right thing,” he says.
On Character.AI, it takes only a few minutes to create both an account and a character. Though the platform is often a place where fans go to make chatbots of their favorite fictional heroes, it also hosts everything from tutor-bots to trip-planners. Creators give the bots “personas” based on information they supply (“I like lattes and dragons,” etc.), and Character.AI’s LLM handles the conversation.
The platform is free to use. While it has age requirements for accounts (13 or older) and rules about not infringing on intellectual property or using names and likenesses without permission, those rules are typically enforced only after a user reports a bot.
The site is full of seemingly fanmade bots based on characters from well-known fictional franchises, like Harry Potter or Game of Thrones, as well as original characters made by users. But among them are also countless bots users have made of real people, from celebrities like Beyoncé and Travis Kelce to private citizens, that seem in violation of the site’s terms of service.
Drew Crecente has no idea who created the Character.AI persona of his deceased daughter. He says that various peripheral digital footprints may have led someone to believe that her persona was somehow associated with gaming. For one, her uncle Brian, who has the same last name, is well established in the gaming community. And through his own foundation, Drew has published a series of online games designed to educate young people on threats of violence.
While he may never find out who created the persona of his daughter, it appears that people with ties to the gaming community often get turned into bots on the platform. Many of them don’t even know the bots exist, and can have a much harder time getting them removed.
Legally, it’s actually easier to have a fictional character removed, says Meredith Rose, senior policy counsel at consumer advocacy organization Public Knowledge. “The law recognizes copyright in characters; it doesn’t recognize legal protection for someone’s style of speech,” she says.
Rose says that the rights to control how a person’s likeness is used—which boils down to traits like their voice or image—fall under “rights of personality.” But these rights are mostly in place for people whose likeness holds commercial value; they don’t cover something as “nebulous” as the way a person speaks, Rose says. Character.AI’s terms of service may have stipulations about impersonating other people, but US law on the matter, particularly regarding AI, is far more malleable.
Content retrieved from: https://www.wired.com/story/characterai-has-a-non-consensual-bot-problem/.