Remarks made by a senior policy official, who did not go into detail on the penalties
Meta Platforms may penalize users who fail to label AI-generated audio and visual content posted on its platforms, the company's top policy executive said on Tuesday.
The company’s head of global affairs, Nick Clegg, made the remarks in a Reuters interview.
Clegg acknowledged that methods for marking audio and video content were more complex and still under development, but he expressed confidence that tech companies could already reliably label photos generated by artificial intelligence.
Although the technology is still in its infancy, especially for audio and video, he said the aim is to build enough interest and momentum for the rest of the industry to follow suit.
Meanwhile, Clegg said Meta will begin requiring users to label their own altered audio and video content, and that they may face consequences if they do not. He did not elaborate on what those consequences would be.
The remarks followed Clegg's announcement in a blog post that, in the coming months, Meta would begin identifying and labeling images created by other companies' artificial intelligence services, using sets of invisible markers embedded in the files.
By applying the labels to any content bearing the markers that is uploaded to its Facebook, Instagram, and Threads services, Meta intends to signal to users that the images, which often resemble real photos, are in fact digital creations.