Meta AI’s Keystroke Tracking Offers Loophole to Dodge Its Image Censors



Meta launched its free AI chatbot, built on Llama 3, last week, and a loophole to circumvent its image-generation censors has already been discovered. Meta AI is available in the Facebook, Instagram, WhatsApp, and Messenger apps, and it can also be accessed in a web browser via Meta.ai. Meta AI isn’t supposed to generate images of any specific real-world person. But over the weekend, Jane Rosenzweig, director of Harvard’s College Writing Center, found that Meta AI tracks users’ keystrokes before they submit requests. The AI can prepare images of real-world people and flash them on-screen before the user presses the button to officially send the request.

Rosenzweig discovered that images strongly resembling Taylor Swift and Macaulay Culkin could be generated if the request wasn’t officially sent, but merely partially typed into Meta AI.


The AI also generated images of Hillary Clinton, for example, if her name was misspelled as “Hilary Clinton,” and Judy Garland if submitted as “Judi Garland.”

“The lack of guardrails in this case allows us to see how little transparency there is in the training data used by these companies,” Rosenzweig tells PCMag in a message. “I’m concerned about the implications of how these systems are trained for the copyrighted work of artists and writers. I’m also concerned about how these tools are going to be used to create disinformation—and the ability to generate images of real people will make that easier.”

PCMag tested Meta AI in a computer’s web browser and was able to verify the keystroke loophole. In a quick test, typing “create an image of taylor s” and “create an image of taylor sw” without submitting the query generated a blonde woman with a strong physical likeness to Taylor Swift, with garbled fans’ faces in the background. In another query, “create an image of elvi” generated a man who looked a lot like Elvis. In both cases, actually submitting the requests resulted in Meta AI censoring the images, but it’s easy enough to screenshot the pre-prepared images before the queries are officially sent.

Meta AI’s loopholes come as the company’s Oversight Board investigates how Meta’s social media apps handle AI-generated nude deepfakes of real-life women. PCMag has reached out to Meta for comment on Meta AI’s keystroke tracking as well as its celebrity-generating loopholes.
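Meta hasn’t explained how its typing preview works under the hood, but the behavior PCMag observed is consistent with a common “live preview” pattern: the input field fires a debounced request with the partial prompt on each pause in typing, and the result is rendered before any submit event. The sketch below illustrates that general pattern in TypeScript; the /preview endpoint, payload shape, and element IDs are hypothetical assumptions for illustration and are not Meta’s actual code or API.

```typescript
// Hypothetical sketch of a keystroke-driven image preview (not Meta's actual code).
// Assumes a browser page with an <input id="prompt"> and an <img id="preview">.

const DEBOUNCE_MS = 300; // wait for a brief pause in typing before requesting
let debounceTimer: number | undefined;

const input = document.querySelector<HTMLInputElement>("#prompt")!;
const preview = document.querySelector<HTMLImageElement>("#preview")!;

input.addEventListener("input", () => {
  // Restart the timer on every keystroke; only the latest partial prompt is sent.
  window.clearTimeout(debounceTimer);
  debounceTimer = window.setTimeout(async () => {
    const partialPrompt = input.value.trim();
    if (partialPrompt.length < 8) return; // skip very short fragments

    // The partial prompt is sent *before* the user presses Enter or Submit.
    const res = await fetch("/preview", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt: partialPrompt }),
    });
    if (!res.ok) return;

    // Show the pre-generated image immediately. Any moderation that runs only
    // at final submission never sees this intermediate request.
    const { imageUrl } = (await res.json()) as { imageUrl: string };
    preview.src = imageUrl;
  }, DEBOUNCE_MS);
});
```

In a design like this, censorship applied only at the submit step can be sidestepped simply by stopping short of submitting, which matches what Rosenzweig and PCMag observed.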


Asking Meta AI itself why it’s unable to create images of people generates the following response: “I apologize for the limitation. I can generate images of objects, scenes, and concepts, but not of specific individuals, including celebrities like Taylor Swift. This is due to ethical and legal considerations, due to privacy and consent concerns. I can generate general images or avatars, but not ones that aim to depict a specific person. Let me know if you have another request!”

Meta, which trained its AI on Facebook and Instagram posts, isn’t the first tech firm to censor its AI tools’ image-generating abilities. Google blocked Gemini from creating any images of human beings after it faced controversy for producing historically inaccurate images, stating afterward that it had made Gemini “way more cautious” than intended. Last month, a Microsoft AI engineer accused Microsoft of not censoring its Copilot Designer tool enough, arguing that it creates “harmful images” without properly warning users in advance.

