The labelling will be rolled out on Threads, Facebook, and Instagram.
Meta already labels AI images produced by its own tools. It says it hopes the new technology it is developing will build “momentum” for the industry to tackle AI fakery.
However, an AI expert told the BBC that such tools are “easily evadable”.
In a blog post, Meta senior executive Sir Nick Clegg said the company plans to expand its labelling of AI fakes “in the coming months”.
“Easily evadable”
However, Professor Soheil Feizi of the University of Maryland’s Reliable AI Lab said such a system could be simple to circumvent.
“They may be able to train their detector to be able to flag some images specifically generated by some specific models,” he told the BBC.
“But those detectors can have a high false-positive rate, and can be easily circumvented by applying simple light processing on top of the images.
“So I don’t think that it’s possible for a broad range of applications.”
Although much of the concern about AI fakes centres on video and audio, Meta has said that its tool does not cover those types of media.
Instead, the company says it is asking users to label their own posts containing AI-generated audio and video, and it “may apply penalties if they fail to do so”.
Sir Nick also acknowledged that text produced by tools such as ChatGPT would be hard to detect.
“That ship has sailed,” he told Reuters.
“Incoherent” media policy
On Monday, Meta’s Oversight Board criticized the company’s policy on manipulated media, describing it as “incoherent, lacking in persuasive justification, and inappropriately focused on how content has been created”.
The Oversight Board is funded by Meta but operates independently of the company.
The criticism was prompted by a ruling on a video of US President Joe Biden. The video showed the president with his granddaughter, but had been edited to give the false impression that he was touching her inappropriately.
The video was not removed because it did not violate Meta’s manipulated media policy: it had not been manipulated with AI, and it depicted Mr. Biden doing something he did not do, rather than saying something he did not say.
The Board agreed that the video did not breach Meta’s current rules on fake media, but recommended that those rules be updated.
According to Reuters, Sir Nick generally agreed with the decision.
The current Meta policy, he said, “is just simply not fit for purpose in an environment where you’re going to have way more synthetic content and hybrid content than before.”
In January, the company introduced a rule requiring political advertisements to disclose when they use digitally altered content.