Why Celebrities, Actors, Writers, and Artists Fear AI

Artificial intelligence can steal your likeness, mannerisms, voice, and creative work. Can anything be done about it?

by Leslie Alan Horvitz

Anyone with online access can easily tap into the powerful world of artificial intelligence (AI). With Google’s AI chatbot, Gemini, or Microsoft’s Copilot, people can supplement or replace traditional web searches. OpenAI’s ChatGPT—the generative AI that has become all the rage—can draft a sci-fi novel, write computer code, or even suggest a diagnosis for a patient’s condition, all in mere minutes in response to a human prompt.


Using a text-to-image program like DALL-E, a person can create an image of, say, a unicorn walking along a busy city street. If they don’t like the result, a follow-up prompt will tweak it or add another pictorial element.
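
In practice, that workflow is only a few lines of code. Here is a minimal sketch assuming OpenAI’s Python SDK (the v1.x library); the model name, parameters, and prompt are illustrative and may differ from what a given service actually offers:

    # A hedged sketch: generate an image from a text prompt.
    # Assumes the openai v1.x Python SDK and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    result = client.images.generate(
        model="dall-e-3",  # illustrative model name
        prompt="A unicorn walking along a busy city street at rush hour",
        size="1024x1024",
        n=1,
    )
    print(result.data[0].url)  # URL of the generated image

A dissatisfied user simply calls the same function again with a revised prompt—an interaction loop rather than a single command.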

But who owns this computer-generated content? Answering that question becomes tricky when the prompt includes the likeness or voice of someone other than the user. Regulators, legislators, and the courts are grappling with questions about the use and application of AI, but they have yet to catch up, particularly on the issue of copyright.

“There’s a video out there promoting some dental plan with an AI version of me,” the actor Tom Hanks lamented in October 2023. “I have nothing to do with it.” He isn’t the only one facing these issues. Actress Scarlett Johansson also found that her voice and likeness were used in a 22-second online ad on X (formerly known as Twitter).


Don’t be taken in by singer Taylor Swift “endorsing” and giving away free Le Creuset Dutch ovens to Swifties—her fans. While Swift has said that she likes Le Creuset cookware, she isn’t doing ads for the brand. This and many other AI-generated fake ads use celebrity likenesses and voices to scam people. These include country singer Luke Combs’ promotion of weight loss gummies, journalist Gayle King’s video about weight loss products, and another fake video featuring the influencer Jimmy Donaldson (known to his followers as MrBeast).

A casual listener might have mistaken the song “Heart on My Sleeve” for a duet between the famous rap artist Drake and the equally famous singer The Weeknd. But the song, released in 2023 and credited to Ghostwriter, was never composed or sung by Drake or The Weeknd. There have been several other instances of singers’ voices being generated with AI: an AI-generated version of Johnny Cash singing a Taylor Swift song, for example, went viral online in 2023.


These cases raise questions about who rightfully owns such works, given that they are produced, in whole or in part, by AI. And what rights do Tom Hanks, Scarlett Johansson, Taylor Swift, and Drake have over their likenesses and voices when they are used without permission? Do they have any rights at all?

Fighting Back

Musicians and their publishers have several ways to fight back against such AI-generated content. A singer whose voice has been cloned could invoke the right of publicity (considered a facet of the right to privacy). However, this right is recognized only in certain states—notably New York and California, where many major entertainment companies are based.

According to an article in The Verge, Drake and The Weeknd could sue Ghostwriter (once his identity was exposed) under the same right that Vanna White, longtime co-host of the TV game show Wheel of Fortune, relied on when she sued Samsung in 1992 over an advertisement featuring a metallic android styled to look like her.

The Copyright Act

The U.S. Copyright Office has adopted an official policy declaring that it will “register an original work of authorship, provided that the work was created by a human being.” Can AI content be considered the creation of a human being? In one sense it can—a human supplies the prompt—yet the program generates content that no human being directly authored, leaving the question largely unanswered. Congress needs to address this dilemma.

The Copyright Act affords copyright protection to “original works of authorship.” The Constitution, which empowered Congress to enact the Copyright Act and establish the Copyright Office, is silent on whether an author must be human.


The concept of transformation can be inferred from the Copyright Act, though it is not explicitly stated in the Copyright Office’s criteria for judging whether a work infringes on another party’s rights. Applied to AI, the idea is that a story or an image generated by AI may be so unique and distinctive—so transformative—that no objective observer could mistake it for the original work(s) it drew on.

So far, no one in authority has provided satisfactory answers about what regulatory frameworks are required to ensure AI’s “ethical” use. Government officials and agencies don’t appear to have kept up with technological advances. Kevin Roose, tech correspondent for the New York Times, argued on the podcast Hard Fork that existing copyright law is ill-suited to AI. “[I]t feels bizarre… that when we talk about these AI models, we’re citing case law from 30, 40, 50 years ago,” said Roose. “[I]t… feels… like we don’t quite have the legal frameworks that we would need because what’s happening under the hood of these AI models is actually quite different from other kinds of technologies.”

But what is happening under the hood of these AI models? No one is sure about that either. What the software does with the data (text, images, music, and code) fed into the system is largely opaque, even to the engineers who built it.

Scraping the Web to Build LLMs

Two aspects of AI concern creatives working across various fields, from books to art to music. The first is the “training” of these AI models. Large language models (LLMs), for instance, are “trained” by exposing the software to staggering amounts of text—books, essays, poems, blogs, etc.—from which it learns statistical patterns, essentially how to predict the next word in a sequence. Some of this content is collected—or scraped—from the internet. The tech companies maintain that the doctrine of fair use covers this practice.
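
To make “training” concrete, here is a toy sketch of the core mechanism—next-token prediction—written in Python with PyTorch. It is a deliberate simplification: production LLMs use transformer architectures with billions of parameters trained on trillions of tokens, and every name and number below is invented for illustration:

    # A toy illustration of next-token-prediction training, the core of how
    # LLMs learn from text. The "model" here is a stand-in, and the token
    # batch is random rather than actual scraped text.
    import torch
    import torch.nn as nn

    vocab_size, embed_dim = 1000, 64
    model = nn.Sequential(
        nn.Embedding(vocab_size, embed_dim),  # token IDs -> vectors
        nn.Linear(embed_dim, vocab_size),     # stand-in for transformer layers
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Pretend this batch of token IDs came from books, blogs, or scraped pages.
    tokens = torch.randint(0, vocab_size, (8, 128))
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one: predict the next token

    logits = model(inputs)  # shape: (batch, sequence, vocabulary)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()         # nudge the weights toward the training data
    optimizer.step()

The legal controversy turns on that last step: the model’s weights end up encoding patterns drawn from whatever text—copyrighted or not—fills the training batches.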

OpenAI, for instance, argues that the training process creates “a useful generative AI system” and contends that fair use is applicable because the content it uses is intended exclusively to train its programs and is not shared with the public. According to OpenAI, creating tools like its groundbreaking chatbot, ChatGPT, would be impossible without access to copyrighted material.


The AI company further states that it needs to use copyrighted materials to produce a relevant system: “Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens,” according to a January 2024 Guardian article.

Getty Images, the image licensing service, has taken a dim view of this defense. It filed a lawsuit against Stability AI, the developer of Stable Diffusion, alleging that the company had copied its images without permission, violating Getty Images’ copyright and trademark rights.

In its suit, Getty stated: “Stability AI has copied at least 12 million copyrighted images from Getty Images’ websites… to train its Stable Diffusion model.” In Getty’s view, this is a case of infringement—not fair use.

The second aspect of AI that worries artists and others is the prospect that the content AI produces in response to users’ prompts infringes on copyrighted work, or on an individual’s right to market and profit from their own likeness and voice.

Also, in cases where users download infringing content, who is charged for the infringement? In the case of Napster, the now-defunct file-sharing service, it was the users who ended up bearing legal penalties for downloading music illegally.

Will AI Make Writers and Artists Obsolete?

The Authors Guild and noted authors such as Paul Tremblay, Michael Chabon, and Sarah Silverman have filed multiple lawsuits against OpenAI and Meta (the parent company of Facebook), claiming that the “training process for AI programs infringed their copyrights in written and visual works,” according to a September 2023 report published by the Congressional Research Service. Meanwhile, e-books probably produced by AI (with little or no human authorial involvement) have begun to appear on Amazon.

AI researcher Melanie Mitchell discovered, to her dismay, that a book with the same title as hers—Artificial Intelligence: A Guide for Thinking Humans, published in 2019—was being marketed on Amazon but was only 45 pages long, poorly written (though it contained some of Mitchell’s original ideas), and authored by one “Shumaila Majid,” according to a January 2024 Wired article.


Artists, too, have responded with alarm to AI’s encroachment. Yet the practice of using original works by artists for training AI programs is widespread and ongoing. In December 2023, a database of artists whose works were used to train Midjourney, an AI image generator, was leaked online.

The database listed more than 16,000 artists, including many well-known ones like Keith Haring, Salvador Dalí, David Hockney, and Yayoi Kusama. Artists have protested in various ways: rallying around the hashtag “No to AI art” on social media, adopting a tool that “poisons” image-generating software, and filing several lawsuits accusing AI companies of infringing on intellectual property rights.

“Generative AI is hurting artists everywhere by stealing not only from our pre-existing work to build its libraries without consent, but our jobs too, and it doesn’t even do it authentically or well,” artist Brooke Peachley said during an interview with Hyperallergic.

The use of AI was one of the major points of contention in the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) strike from July to November 2023. SAG-AFTRA represents about 160,000 performers. AI was also a sticking point in reaching a new deal for the Writers Guild of America (WGA), which represents screenwriters.

For several months in 2023, the two unions’ strikes overlapped, all but shutting down movie, TV, and streaming productions.

“Human creators are the foundation of the creative industries, and we must ensure that they are respected and paid for their work,” SAG-AFTRA said in a March 2023 statement. “Governments should not create new copyright or other IP [intellectual property] exemptions that allow AI developers to exploit creative works, or professional voices and likenesses, without permission or compensation. Trustworthiness and transparency are essential to the success of AI.”


In its official statement, the WGA declared: “GAI [generative artificial intelligence] cannot be a ‘writer’ or ‘professional writer’ as defined in the MBA [Minimum Basic Agreement] because it is not a person, and therefore materials produced by GAI should not be considered literary material under any MBA.” The MBA is the collective bargaining agreement with the movie and TV studios.

When the WGA contract was negotiated and the strike ended in September 2023, the movie studios agreed that AI-generated content couldn’t be treated as source material. This meant that a studio executive couldn’t use ChatGPT to develop a story, ask writers to turn it into a script, and then claim rights to the original story.

In the agreement, the WGA also “reserves the right to assert that exploitation of writers’ material to train AI is prohibited by MBA or other law,” according to a September 2023 article in The Verge.

Shortly after the WGA settled, the actors worked out their own agreement and ended their walkout. SAG-AFTRA subsequently signed a deal allowing the digital replication of members’ voices for video games and other forms of entertainment, provided the companies first secure consent and guarantee minimum payments.

Congress Dithers, States Act

To solve some of the challenges presented by the increasing use of AI, Congress could update copyright laws by clarifying whether AI-generated works are copyrightable, determining who should be considered the author of such works, and deciding whether or not the process of training generative AI programs constitutes fair use.

By mid-2024, Congress had made little significant progress in enacting legislation to regulate AI. According to the nonprofit Brennan Center for Justice, several bills introduced in the 118th Congress (2023-2024) focused on high-risk AI systems: they would require purveyors of these systems to assess the technology, impose transparency requirements, create a new regulatory authority to oversee AI (or assign the role to an existing agency), and offer consumers some protection through liability measures. Despite sharp polarization between Democrats and Republicans, there is bipartisan agreement that regulation of AI is needed.


In 2023, two leaders of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, Richard Blumenthal (D-CT) and Josh Hawley (R-MO), who are otherwise politically opposed, “released a blueprint for real, enforceable AI protections,” according to Time magazine. The document called for “the creation of an independent oversight agency that AI companies would have to register with” and proposed that AI companies should bear legal liability “when their models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms,” the article states.

Meanwhile, individual states are not waiting for Congress to act. In 2023, California and Illinois passed laws allowing people to sue AI companies that create images using their likenesses. Texas and Minnesota have gone further, making the creation of such images a crime punishable by fines and prison time.

The obstacles to enacting effective regulations are formidable despite general agreement that AI should be safe, effective, trustworthy, and non-discriminatory. AI legislation must also consider the environmental costs of training large models and address surveillance, privacy, national security, and misinformation issues. Then there is the question of which federal agency would be responsible for implementing the rules, which would involve “tough judgment calls and complex tradeoffs,” according to Daniel Ho, a professor who oversees an artificial intelligence lab at Stanford University and a member of the White House’s National AI Advisory Committee. “That’s what makes it very hard,” he added in the Time article.

Journalists, especially those working for small-town and regional papers, don’t have the luxury of waiting for states, much less Congress, to implement effective regulations to protect their work. The same holds for reporters employed by local radio and TV stations. Their jobs are already at risk. Cost-cutting media moguls tend to look at AI as a convenient replacement for reporters: feed the software the facts (the score of a high school football game, the highlights of a city council or school board meeting), then prompt it to produce a publishable account—without a human reporter being involved.
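
That workflow is already trivial to automate. Here is a hedged sketch, again assuming OpenAI’s Python SDK; the model name, the game facts, and the prompt wording are all invented for illustration:

    # A hedged sketch of "facts in, publishable copy out."
    # Assumes the openai v1.x Python SDK; model and prompt are illustrative.
    from openai import OpenAI

    client = OpenAI()
    facts = "Lincoln High 24, Westfield 21; winning field goal with 4 seconds left."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a local sports reporter."},
            {"role": "user", "content": f"Write a 150-word game recap: {facts}"},
        ],
    )
    print(response.choices[0].message.content)  # copy ready to publish—no reporter involved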

AI as Co-Creator

The breathtaking pace of technological advances will likely lead to changes in artificial intelligence that we can’t yet imagine. As a writer, I believe that despite all the problems (the AI-generated books on Amazon, for instance, which deceive customers into purchasing them rather than the originals), AI is less a threat than a potential tool. It can save a writer time—especially with research—but it is not destined to replace creative writers altogether.

A 2023 study by Murray Shanahan, professor of computing at Imperial College London, and cultural historian Catherine Clarke of the University of London supports this position.

“Large language models like ChatGPT can produce some pretty exciting material—but only through sustained engagement with a human, who is shaping sophisticated prompts and giving nuanced feedback,” said Clarke in a January 2024 Nautilus article. “Developing sophisticated and productive prompts relies on human expertise and craft.”

The authors see AI tools as “co-creators” for writers, “amplifying rather than replacing human creativity,” stated the article. The report also pointed out that mathematicians are still in business even after the introduction of calculators; calculators simply made their lives easier. Similarly, using AI may change how we regard creativity.

Source: Independent Media Institute

Leslie Alan Horvitz is an author and journalist specializing in science, and a contributor to the Observatory. His nonfiction books include Eureka: Scientific Breakthroughs That Changed the World, Understanding Depression with Dr. Raymond DePaulo of Johns Hopkins University, and The Essential Book of Weather Lore. His articles have been published by Travel and Leisure, Scholastic, Washington Times, and Insight on the News, among others. Leslie has served on the board of Art Omi and is a member of PEN America. He is based in New York City. Find him online at lesliehorvitz.com.