Issues in Deepfake Legislation and Regulation

In April 2018, a video of former United States (US) President Barack Obama was released in which Obama supposedly referred to Trump using profane language.[1] It was later revealed that the video had been manipulated and was a product of what is now referred to as “deepfake technology.” Deepfake technology pertains to realistic fake pictures and videos generated by artificial intelligence. Products of deepfake technology are essentially manipulated and inherently deceptive. Over the years, the use of deepfake technology has become rampant on media platforms, and its emergence has alarmed governments and companies alike. In fact, on January 7 of this year, Facebook announced that it would remove videos modified by artificial intelligence.[2] Twitter followed suit, stating that significantly altered or fabricated content would be removed.[3]


Likewise, different jurisdictions have started to combat the dangers that come with deepfake technology. In June of last year, Yvette Clarke of the US House of Representatives proposed the DEEPFAKES Accountability Act.[4] Among other things, the bill would require people uploading deepfake media to disclose that such media has been altered, and would criminalize failure to disclose. While that bill is pending, the state of California has already passed a law which allows California residents to sue if their image is used for sexually explicit content.[5] Other countries, such as China and the UK, have also recognized the need to address deepfake technology and have set up advisory bodies to conduct research before regulations are enacted.[6]


In the Philippines, legislation has started to materialize as well. On November 16, 2019, Senate President Pro Tempore Ralph Recto filed Senate Resolution No. 188, which seeks a congressional probe into deepfakes.[7] According to the senator, “[h]yper-realistic fabrication of audios and videos has the potential to cast doubt and erode the decades-old understanding of truth, history, information and reality,” and the government should strengthen the mechanisms for implementing cybercrime and data privacy laws.


Nevertheless, even if countries such as ours succeed in passing laws and regulations concerning deepfake technology, enforcement can be difficult. Studies[8] show that distinguishing between real and fake is becoming more and more of a challenge, especially as people get better at using the technology. If that is the case, how will law enforcers and the courts decide which media is deepfake and which is not? Who, even, will decide whether something is a product of deepfake technology? Will it be, like obscenity, something left to judicial determination?


Moreover, if our courts are to start trying cases involving deepfakes, it will trigger the application of the Rules on Evidence and the Rules on Electronic Evidence.[9] Several questions can arise: May experts be involved, and to what extent? Who, in the first place, are the experts in this field anyway? Likewise, and beyond the realm of possible deepfake cases, how will a court treat supposed deepfake evidence presented to it? Knowing what we know about deepfakes, it is not impossible for people to obtain crime footage and alter it for presentation to the courts. There will always be a question, then, of how deepfake evidence can be regulated and inspected in such a way as to place its authenticity beyond reasonable doubt.


Currently, the Rules on Electronic Evidence govern the admission and authentication of electronic documents and other types of electronic evidence, including audio, photographic, and video evidence of events and transactions. Such evidence is admissible as long as it is “shown, presented, or displayed to the court and shall be identified, explained, or authenticated by the person who made the recording or by some other person competent to testify on the accuracy thereof.”[10] On one hand, it can be argued that the current rules are sufficient, such that the reliability and authenticity of the evidence can be proved or disproved in accordance with the Rules on Evidence—that is, there is likewise a process of authentication before electronic evidence can be admitted. Besides, the evidentiary weight of electronic data can still vary depending on the factors enumerated under Rule 7, Section 1,[11] which include the reliability of the manner or method of its generation, storage, and communication; the reliability of the manner in which it was identified; and the integrity of the information and communication system used to record and store it, among other things.


On the other hand, it can be countered that a stricter procedure should be laid down, given the highly deceptive nature of deepfake content. It may be suggested that, aside from authentication by the person who made the recording, the testimony of an expert be required. These are preliminary matters that deepfake legislation, should one be passed, must resolve as early as now.


Besides the problems of enforcement and detection, another, more important legal question might also come into play: Will a law banning deepfake technology affect the constitutionally granted freedom of expression? It has been recognized in our country that “freedom of expression…extends protection to nearly all forms of communication. It protects speech, print and assembly regarding secular as well as political causes, and is not confined to any particular field of human interest.”[12] Thus, will legislation banning deepfake content be considered a prior restraint or a type of censorship prohibited by the Constitution? Will criminalizing the publication of deepfake content be considered a subsequent punishment, which is likewise prohibited by the Bill of Rights? Indeed, it can be argued, as some people definitely will, that deepfake content can fall under the classification of “unprotected speech,” like slander, libel, lewd and obscene speech, and fighting words, which are all outside constitutional protection. However, this argument would require an analysis of the content itself, and that is a matter for the courts to decide.


In this era of fake news and misinformation, where one can easily be misled by a poorly photoshopped picture, it is frightening to imagine what a deepfake video can do and how many people it can deceive. Despite the looming difficulties, then, Congress should begin crafting legislation to address the issues raised by deepfakes. Otherwise, this technology may eventually be used as a means to less than noble ends. In the first place, what does deepfake technology really achieve? Are there legitimate purposes to it at all?

[1] https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth

[2] https://www.bbc.com/news/technology-51018758

[3] https://www.theverge.com/2020/2/4/21122661/twitter-deepfake-manipulated-media-policy-rollout-date

[4] https://techcrunch.com/2019/06/13/deepfakes-accountability-act-would-impose-unenforceable-rules-but-its-a-start/

[5] https://www.theguardian.com/us-news/2019/oct/07/california-makes-deepfake-videos-illegal-but-law-may-be-hard-to-enforce

[6] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/831179/Snapshot_Paper_-_Deepfakes_and_Audiovisual_Disinformation.pdf ; https://www.theverge.com/2019/11/29/20988363/china-deepfakes-ban-internet-rules-fake-news-disclosure-virtual-reality

[7] https://www.cnnphilippines.com/news/2019/11/16/Senate-probe-deepfakes-AI.html

[8] https://www.brookings.edu/research/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/

[9] A.M. No. 01-7-01-SC (2001)

[10] Id. at Rule 11, Section 1.

[11] Id. at Rule 7, Section 1.

[12] Chavez v. Gonzales, G.R. No. 168338, February 15, 2008.
