Deepfakes continue to be one of the great challenges for social platforms in 2020. Over the last year we have witnessed the birth, or rather the popularization, of this technology.
It allows a video or audio clip to be manipulated so that, for example, a world-famous figure such as Barack Obama or Mark Zuckerberg appears to say things they never said.
It is a challenge because deepfakes are becoming more realistic, and distinguishing them from a real video can be difficult. Faced with this problem, companies such as Facebook and Adobe have gotten to work.
And now Zuckerberg's company has confirmed how it will act on this type of content: by deleting it, as long as certain conditions are met.
As explained by Monika Bickert, vice president of global policy management at Facebook, "although these videos are still rare on the Internet, they present a significant challenge to our industry and society as their use increases."
Facebook says its approach to combating deepfakes involves several players, ranging from academia and government to industry.
Facebook's Criteria for Deleting Deepfakes
How will the company act when it detects a deepfake? It will delete the content when it meets these criteria:
- "It has been edited or synthesized, beyond adjustments for clarity or quality, in ways that are not apparent to an average person and would likely mislead someone into thinking that the subject of the video said words they did not actually say.
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic."
In other words, if a video or audio clip (a deepfake) has been generated by artificial intelligence, or has been edited in a way that could mislead an "average person", the company will remove it from the platform.
However, these measures do not extend to parody or satirical deepfakes, nor to videos that have merely been edited to omit words or change their order (something that, it should be said, could also be used to create manipulated content).
If the content violates Facebook's Community Standards, whether fake or not, it will be removed from the platform regardless.
That said, these measures will not completely rid the platform of fake news, another area that has not been free of controversy. Facebook is sticking to its position: if one of its 50 fact-checking partners flags a piece of news as false or partially false, it will "significantly reduce its distribution in News Feed and reject it if it is being run as an ad."
Likewise, people who share it or have already shared it will receive warnings alerting them that the content is false.
The Deepfake Detection Challenge
Recently, Facebook, together with partners such as Microsoft and various universities, launched the Deepfake Detection Challenge, an initiative that aims to "produce technology that everyone can use to better detect when artificial intelligence has been used to produce deepfakes to deceive the viewer". Facebook is backing the challenge with 10 million dollars.
Deepfakes and China
A more radical example is China. Since January 1, deepfakes have been illegal there, and distributing them is a criminal offense that can land users in prison. Companies may distribute such content only if it is "clearly" indicated that it was created using artificial intelligence.
California has taken a similar line, prohibiting the dissemination of manipulated videos that discredit political candidates during the 60 days prior to an election.