Meta Finds Deceptive Content Likely Generated by AI on Facebook and Instagram

Technology



Meta said on Wednesday it had found content "likely generated by artificial intelligence" used deceptively on its Facebook and Instagram platforms, including comments praising Israel's handling of the war in Gaza posted below posts from global news organizations and US lawmakers.

The social media company, in a quarterly security report, said the accounts posed as Jewish students, African-Americans and other concerned citizens, targeting audiences in the United States and Canada. Meta attributed the campaign to Tel Aviv-based political marketing firm STOIC.

STOIC did not immediately respond to a request for comment on the allegations.

Why it matters

While Meta has found basic AI-generated profile photos in influence operations since 2019, the report is the first to reveal the use of text-based generative AI technology since it emerged in late 2022.

Researchers have worried that generative artificial intelligence, which can quickly and cheaply produce human-like text, images and audio, could lead to more effective disinformation campaigns and influence elections.

On a press call, Meta security executives said they had taken down the Israeli campaign early and did not think novel AI technologies had hindered their ability to disrupt influence networks, which are coordinated attempts to push messages.

Executives said they had not seen these networks deploy AI-generated images of politicians realistic enough to be mistaken for real photos.

Key quote

"There are several examples across these networks of how they're using likely generative AI tools to create content. Maybe it gives them the ability to do it faster or to do it with more volume. But it hasn't really affected our ability to detect them," said Meta's head of threat research, Mike Dvilyanski.

By the numbers

The report highlighted six covert influence operations that Meta disrupted in the first quarter.

In addition to the STOIC network, Meta shut down an Iran-based network focused on the Israel-Hamas conflict, although it did not identify any use of generative AI in that campaign.

Context

Meta and other tech giants have discussed how to address the potential misuse of new AI technologies, especially in elections.

Researchers have found examples of image generators from companies like OpenAI and Microsoft producing photos with voting-related misinformation, even though those companies have policies against such content.

Companies have emphasized digital tagging systems to mark AI-generated content at the point of creation, although the tools don't work on text and researchers have doubts about their effectiveness.

What's next

Meta faces key tests of its defenses with elections in the European Union in early June and in the United States in November.

© Thomson Reuters 2024





