Jamie Lee Curtis went straight to the top, and it worked.
The legendary star of the Halloween movie franchise managed to get an AI-generated ad showing her likeness removed after sending a public message to Mark Zuckerberg, CEO of Meta. The ad had used Curtis’ likeness to promote “some bullshit that I didn’t authorize, agree to or endorse,” Curtis wrote in the message to Zuckerberg, whose company owns Facebook, Instagram and Threads, among other properties.
The fake AI commercial used footage from an actual interview Curtis did with MSNBC’s Stephanie Ruhle about the deadly Pacific Palisades fires earlier this year. The ad, which had been circulating on Instagram for months, showed a likeness of Curtis advertising a dental product, the Los Angeles Times reported.
In an Instagram post directly addressing Zuckerberg on Monday, Curtis said, “If I have a brand, besides being an actor and author and advocate, it is that I am known for telling the truth and saying it like it is and for having integrity and this (MIS)use of my images (taken from an interview I did with @stephruhle during the fires) with new, fake words put in my mouth, diminishes my opportunities to actually speak my truth.”
Representatives for Meta and Curtis did not immediately respond to a request for comment.
Variety reported that Meta spokesperson Andy Stone said the ads violated Meta’s policies “and have been removed.”
Curtis celebrated via Instagram: “IT WORKED! YAY INTERNET! SHAME HAS IT’S VALUE! THANKS ALL WHO CHIMED IN AND HELPED RECTIFY!”
Taylor Swift, Tom Hanks have been victims
The Curtis incident is only the latest example of the growing problem of AI being used to hijack celebrities’ likenesses for various purposes. Earlier this year, award-winning actor Scarlett Johansson warned about the “immediate future of humanity at large” after a deepfake video using her likeness went viral.
Deepfakes are created using audio and visual samples of real people to create realistic-looking photos and videos. This particular video used the likenesses of Johansson and other Jewish celebrities, such as Jerry Seinfeld and Drake, protesting Kanye West for trying to sell T-shirts featuring the swastika symbol on his Yeezy website.
Another deepfake falsely showed superstar quarterback Patrick Mahomes trashing himself after a blowout defeat in the 2025 Super Bowl. One from 2023 falsely showed actor Tom Hanks promoting a dental plan. During the 2024 presidential election campaign, superstar singer Taylor Swift chastised Donald Trump after he posted deepfake images that purportedly showed Swift supporting his run for office.
‘Seeing is not always believing’
Alon Yamin, CEO of plagiarism detection platform Copyleaks, said preventing fake AI videos and deepfakes is close to impossible.
“Right now, there’s no foolproof way to prevent someone from creating a deepfake using your likeness, celebrity or not,” he said. “The technology is widely accessible, and with just a few photos or clips, anyone can become the target of synthetic media. That said, stronger laws around likeness rights and digital identity protection, paired with proactive detection tools, are essential first steps toward reining this in.”
Yamin added that the potential dangers of such AI trickery can extend far beyond deceptive advertising.
“The danger isn’t just reputational damage; it’s the erosion of public trust at scale,” he said. “Deepfakes can be used to scam, manipulate elections, incite violence, or spread misinformation with frightening realism. As the technology improves, the line between real and fake becomes harder to detect with the naked eye. We’re looking at a future where seeing is not always believing.”
Katelyn Chedraoui, CNET’s AI reporter, said that “generative AI has massively accelerated the production and accessibility of deepfake video tech. When it comes to using celebrities’ — or anyone’s — likeness, consent is key. But it’s all too common to see people abuse this technology, whether for profit or other purposes.”