{
	"id": "c25428c6-09fe-465b-aade-8640159a9bb5",
	"created_at": "2026-04-29T02:20:31.444199Z",
	"updated_at": "2026-04-29T10:18:54.051146Z",
	"deleted_at": null,
	"sha1_hash": "3db47f32300b6b62b4c9711441f2a0004420c99b",
	"title": "",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "2024-01-31T14:24:36Z",
	"file_modification_date": "2024-01-31T14:24:38Z",
	"file_size": 2283999,
	"plain_text": "Facing reality?\r\nLaw enforcement\r\nand the challenge\r\nof deepfakes\r\nAn Observatory Report from the Europol Innovation Lab\n\nFACING REALITY? LAW ENFORCEMENT AND THE CHALLENGE OF DEEPFAKES\r\nAn Observatory Report from the Europol Innovation Lab\r\n \r\nNeither the European Union Agency for Law Enforcement Cooperation nor any person acting\r\non behalf of the agency is responsible for the use that might be made of the following information.\r\nLuxembourg: Publications Office of the European Union, 2022\r\nPDF | ISBN 978-92-95236-23-3 | DOI: 10.2813/158794 | QL-02-24-129-EN-N\r\n© European Union Agency for Law Enforcement Cooperation, 2022\r\nReproduction is authorised provided the source is acknowledged.\r\nFor any use or reproduction of photos or other material that is not under the copyright of\r\nthe European Union Agency for Law Enforcement Cooperation, permission must be sought\r\ndirectly from the copyright holders.\r\nWhile best efforts have been made to trace and acknowledge all copyright holders, Europol\r\nwould like to apologise should there have been any errors or omissions. Please do contact us\r\nif you possess any further information relating to the images published or their rights holder.\r\nCite this publication: Europol (2022), Facing reality? Law enforcement and the challenge\r\nof deepfakes, an observatory report from the Europol Innovation Lab, Publications Office\r\nof the European Union, Luxembourg.\r\nThis version published in January 2024 replaces the previous one. 
Updates were made\r\nin the chapter Understanding deepfakes.\r\nThis publication and more information on Europol are available on the Internet.\r\nwww.europol.europa.eu\n\nContents\r\n4 Introduction\r\n5 Understanding deepfakes\r\n7 The technology behind deepfakes\r\nDeep learning\r\nGenerative Adversarial Networks (GAN)\r\n10 Deepfake technology’s impact on crime\r\nDisinformation\r\nDocument fraud\r\nDeepfake as a service\r\n14 Deepfake technology’s impact on law enforcement\r\nImpact on police work\r\nImpact on the legal process\r\nNew capacities needed\r\n16 Deepfake detection\r\nManual detection\r\nAutomated detection\r\nPreventive measures\r\n19 How are other actors responding to deepfakes?\r\nTechnology companies\r\nEuropean Union\r\n21 Conclusion\n\nIntroduction\r\nToday, threat actors are using disinformation campaigns and deepfake content to misinform the public about events, to influence politics and elections, to contribute to fraud, and to manipulate shareholders in a corporate context. Many organisations have now begun to see deepfakes as an even bigger potential risk than identity theft (for which deepfakes can also be used), especially now that most interactions have moved online since the COVID-19 pandemic. This concern is echoed by a recent report by University College London (UCL) that ranks deepfake technology as one of the biggest threats faced by society today.1\r\nThis poses a risk to EU citizens. Europol, as the criminal information hub for law enforcement organisations, will continue to play its part in supporting law enforcement authorities in the EU Member States to counter this threat.\r\nThis report presents the first published analysis of the Europol Innovation Lab’s Observatory function, focusing on deepfakes, the technology behind them and their potential impact on law enforcement and EU citizens. Deepfake technology uses Artificial Intelligence to generate or manipulate audio and audio-visual content. 
Deepfake technology can produce content that convincingly shows people saying or doing things they never did, or create personas that never existed in the first place.\r\nTo date, the Europol Innovation Lab has organised three strategic foresight activities with EU Member State law enforcement agencies and other experts. During strategic foresight activities conducted by the Europol Innovation Lab, over 80 law enforcement experts identified and analysed the trends and technologies they believed would impact their work until 2030. These sessions showed that one of the most worrying technological trends is the evolution and detection of deepfakes, as well as the need to address disinformation more generally. Those workshops provided the initial input for this report. Furthermore, the findings are the result of extensive desk research supported by research provided by partner organisations, expert consultation and the strategic foresight activities conducted by the Europol Innovation Lab.\r\nStrategic foresight and scenario methods offer a way to understand and prepare for the potential impact of new technologies on law enforcement. The Europol Innovation Lab’s Observatory function monitors technological developments that are relevant for law enforcement and reports on the risks, threats and opportunities of these emerging technologies.\r\n1  UCL – London’s Global University, ‘‘Deepfakes’ ranked as most serious AI crime threat’, https://www.ucl.ac.uk/news/2020/aug/deepfakes-ranked-most-serious-ai-crime-threat.\n\nUnderstanding deepfakes\r\nDisinformation is being spread with the intention to deceive. 
Tools of\r\ndisinformation campaigns can include deepfakes, falsified photos,\r\ncounterfeit websites and other information taken out of context to\r\ndeceive the audience.2\r\nIn the original, strict sense, deepfakes are a type of synthetic media\r\nmostly disseminated with malicious intent, although they are now\r\noften used for positive applications too.3\r\n Synthetic media refers\r\nto media generated or manipulated using artificial intelligence\r\n(AI). In most cases, synthetic media is generated for gaming, to\r\nimprove services or to improve the quality of life, but the increase\r\nin synthetic media and improved technology has given rise to\r\ndisinformation possibilities, including deepfakes.\r\nDeepfakes were examined and discussed at great length in one\r\nof the Europol Innovation Lab’s strategic foresight activities.\r\nLaw enforcement experts who participated in these activities\r\nexpressed concern about the consequences of disinformation,\r\nfake news and social media on political and social discourse.\r\nThese trends are expected to become more pronounced as the\r\nsupporting technologies, such as deepfakes, are becoming more\r\nsophisticated. Their impact on privacy and personal security\r\nwill doubtless result in new categories of crime that will have to\r\nbe policed. Participants were especially concerned about the\r\nweaponisation of social media and the impact of misinformation\r\non public discourse and social cohesion.\r\nOn a daily basis, people trust their own perception to guide them\r\nand tell them what is real and what is not. This applies not only\r\nto people in their private lives, but also law enforcement officers\r\ntrying to do their jobs. First-hand accounts are valued higher than\r\nsecond-hand versions of an event. Auditory and visual recordings\r\nof an event are often treated as a truthful account of an event.\r\nPhotographs and videos are important intelligence for police work\r\nand evidence in court. 
But what if these media can be generated artificially, adapted to show events that never took place, to misrepresent events, or to distort the truth?\r\nFor instance, prior to the invasion of Ukraine by Russia in 2022, the United States revealed a Russian plot to use a deepfake video to justify an invasion of Ukraine.4 After the invasion happened, officials of the Ukrainian government warned that Russia might spread deepfakes showing the Ukrainian president Volodymyr Zelenskyy surrendering.5 This fear appears to have become reality after hackers made a Ukrainian news website show a video of president Zelenskyy telling his soldiers to surrender.6 At the time of writing, much is still unclear about the video and it has not been verified to be a real deepfake or another kind of fake, but it does show how (deep)fakes are being used for disinformation purposes.\r\nExamples like the one above show that this type of disinformation can be dangerous. Its aim is to intensify existing conflicts and debates, undermine trust in state-run institutions and stir up anger and emotions in general. The erosion of trust is likely to make the business of policing harder.\r\nThis challenge to policing is coupled with a public that seems relatively uninformed about the dangers of deepfakes. Despite their increasing prevalence at the time, research in 2019 showed that almost 72% of respondents in a UK survey were unaware of deepfakes and their impact.7 This is particularly worrying, as people might be unable to identify deepfakes (videos, photos, audio) since they are not aware of the existence of such virtual forgeries or how they work. The lack of understanding of the basics of this technology presents various challenges, some of which are relevant for law enforcement (such as disinformation and document fraud). Even more worryingly, results from recent experiments have shown that increasing awareness of deepfakes may not improve people’s chances of detecting them.8 Researchers are therefore expecting criminals to increase their use of deepfakes in the coming years.9 This shows it is vital to understand the deepfake threat and prepare ourselves.\r\n2  Die Bundesregierung, ‘What is disinformation?’, accessed 15 March 2022, https://www.bundesregierung.de/breg-de/themen/umgang-mit-desinformation/disinformation-definition-1911048.\r\n3  ENLETS, ‘SYNTHETIC REALITY \u0026 DEEP FAKES: IMPACT ON POLICE WORK’, 2021, accessed on 15 March 2022, https://enlets.eu/wp-content/uploads/2021/11/Final-Synthetic-Reality-Deep-fakes-Impact-on-Police-Work-04.11.21.pdf.\r\n4  CBS News, ‘U.S. reveals Russian plot to use fake video as pretense for Ukraine invasion’, 2022, accessed on 10 March 2022, https://www.cbsnews.com/news/russia-disinformation-video-ukraine-invasion-united-states/.\r\n5  Metro, ‘Ukraine warns Russia may deploy deepfakes of Volodmyr Zelensky surrendering’, 2022, accessed on 15 March 2022, https://metro.co.uk/2022/03/04/ukraine-warns-russia-may-deploy-deepfakes-of-zelensky-surrendering-16217350.\r\n6  National Public Radio, ‘Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn’, 2022, accessed on 17 March 2022, https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia.\r\n7  iProov, ‘Almost Three-Quarters of UK Public Unaware of Deepfake Threat, New Research’, 2019, accessed 15 March 2022, https://www.iproov.com/press/uk-public-deepfake-threat.\r\n8 Köbis, N.C. 
et al., ‘Fooled twice: People cannot detect deepfakes but think they can’, iScience, 24(11), 2021, accessed 15 March 2022, https://doi.org/10.1016/j.isci.2021.103364.\r\n9  Recorded Future, Insikt Group, ‘The Business of Fraud: Deepfakes, Fraud’s Next Frontier’, 2021.\n\nThe technology behind deepfakes\r\nDeepfake technology uses the power of deep learning to generate or manipulate audio and audio-visual content. Employed properly, these models can produce content that convincingly shows people saying or doing things they never did, or create people that never existed in the first place. The rise of the application of AI to generate deepfakes is already having, and will have, further implications for the way people treat recorded media. Here we discuss two core advancements behind deepfake technology, namely deep learning and generative adversarial networks, and how 5G technology may further enable the use of deepfakes.\r\nDeep learning\r\nMachine learning is an application of AI where computers automatically improve through the use of data. Deep learning is a kind of machine learning in which a computer analyses datasets to look for patterns with the help of neural networks. These neural networks mimic the way our brains work to learn more effectively from the data provided.10 Deep learning technology, paired with the availability of large databases with material to train the generative models on, has allowed for rapid improvement of deepfake technology.\r\nThe availability of data is therefore essential for a good deepfake system; it needs examples to learn what the result has to look like. It will try to discover patterns in the available data and thus extract what features are important and how these relate to each other. That will allow it to construct a complete and convincing picture. Depending on the quality of the available data and the factors the algorithm uses, the result may be more or less realistic.\r\nToday, large datasets with labelled visual material are becoming freely available on the internet. These datasets are essential for the training of the machine learning algorithms needed to produce deepfakes. Creators of deepfakes can use these freely available datasets on the internet and avoid the time-consuming work of creating datasets themselves.\r\n10 Codecademy, ‘What Is Deep Learning?’, 2021, accessed on 10 March 2022, https://www.codecademy.com/resources/blog/what-is-deep-learning/.\n\nIn one example from 2018, filmmaker Jordan Peele and BuzzFeed CEO Jonah Peretti created a deepfake video to warn the public about disinformation, specifically regarding the public’s perception of political leaders. Peele and Peretti used free tools with the help of editing experts to overlay Peele’s voice and mouth over a pre-existing video of Barack Obama. In the video, Obama allegedly said, “We are entering an era in which our enemies can make it look like anyone is saying anything, at any point in time. Even if they would never say those things.”11\r\nSource: Suwajanakorn, S. 
et al., 2017, ‘Synthesizing Obama: learning lip sync from audio’, ACM Transactions on Graphics, 36(4), accessed on 15 March 2022, https://dl.acm.org/doi/10.1145/3072959.3073640.\n\nGenerative Adversarial Networks (GAN)\r\nA great leap in the quality and accessibility of deepfake technology was made by the adaptation of generative adversarial networks (GANs), as proposed in 2014 by Ian Goodfellow et al.12 A GAN works with two competing models: a generative and a discriminating model. The generative model creates content based on the available training data, trying to capture the data as closely as possible, to create content that most closely mimics the examples in the training data. A discriminative model then tests the results of the generative model by assessing the probability that the tested sample comes from the dataset rather than from the generative model.\r\nWith the results from these tests, the models continuously improve until the generated content is just as likely to come from the generative model as from the training data. This powerful method both simplifies the learning process, making it more accessible, and improves the outcome by incorporating a mechanism designed to minimise the chance that its product could be distinguished from authentic content.\r\nWhen a new feature that may help discriminate between synthetic and authentic content is discovered, it allows for an easy incorporation of that feature. For example, people’s eyes would not blink in early deepfake videos, making them relatively easy to detect.13 Even though the training data for deepfake models included many pictures of people, these people generally did not blink in pictures. Adding more videos with people blinking to the database allowed both models to work together to produce people with blinking eyes, making the result more realistic and consequently harder to differentiate from authentic content.\r\n11 Ars Electronica, ‘Obama Deep Fake’, 2018, accessed on 10 March 2022, https://ars.electronica.art/center/en/obama-deep-fake/.\r\n12 Goodfellow, I. et al., ‘Generative Adversarial Nets’, Proceedings of the International Conference on Neural Information Processing Systems (NIPS 2014), 2014, pp. 2672–2680.\r\nTrained models may be applied in various ways for video and image deepfakes:\r\nFace swap\r\nReplacing the face of the person in the video with that of another person;\r\nAttribute editing\r\nChanging characteristics of the person in the video, e.g. the style or colour of the hair;\r\nFace re-enactment\r\nTransferring the facial expressions from the face of one person onto the person in the target video;\r\nFully synthetic material\r\nReal material is used to train what people look like, but the resulting picture is entirely made up. See for example https://www.thispersondoesnotexist.com and https://generated.photos\r\nOptimising these factors will improve the outcome. The more extensive the database and the more complex the algorithm becomes, the more computing power is necessary. Generating quality data requires a large volume and diversity of data, with enough examples of similar but slightly different representations of the same characteristics to work. For example, if a database mostly contains pictures of white men with black hair, it will not perform well at creating Asian women with blonde hair. As an increasing number and volume of databases are available, the quality and quantity of training data increases. 
The growing supply of training data has allowed the models generating deepfakes to increase in sophistication.\r\nParticipants in the Innovation Lab’s foresight activities noted how the roll-out of 5G would enhance connectivity and communication within law enforcement agencies (LEAs) and would strengthen the privacy and security of organisations and individuals alike. However, they noted that those same benefits would be leveraged by criminals to perpetrate their crimes. The additional bandwidth offered by new communication technologies, such as 5G, enables users to utilise the power of cloud computing to manipulate video streams in real time. Deepfake technologies can therefore be applied in videoconferencing settings, live-streaming video services and television.\r\n13 GIZMODO, ‘Most Deepfake Videos Have One Glaring Flaw’, 2018, accessed on 10 March 2022, https://gizmodo.com/most-deepfake-videos-have-one-glaring-flaw-1826869949.\n\nParticipants in the foresight activities cited several trends that European LEAs should be sensitive to. Of note is crime as a service (CaaS), with criminals selling access to the tools, technologies and knowledge to facilitate cyber and cyber-enabled crime. CaaS is expected to evolve in parallel with current technologies, resulting in the automation of crimes such as hacking, as well as in adversarial machine learning and deepfakes. Indeed, participants flagged the tendency of criminal actors to become early adopters of new technologies. As a result, they are always one step ahead of law enforcement in their implementation, use and adaptation of these technologies.\r\nThe growing availability of disinformation and deepfakes will have a profound impact on the way people perceive authority and information media. With the increasing volume of deepfakes, trust in authorities and official facts is undermined. 
Experts fear this may\r\nlead to a situation where citizens no longer have a shared reality, or\r\ncould create societal confusion about which information sources\r\nare reliable; a situation sometimes referred to as ‘information\r\napocalypse’ or ‘reality apathy’.14\r\nThis makes it essential to be aware of this manipulation and\r\nbe prepared to deal with the phenomenon, so as to distinguish\r\nbetween benign and malicious use of this technology.\r\nThe ‘Malicious Uses and Abuses of Artificial Intelligence’ report\r\nby Europol, TrendMicro and UNICRI15 included a case study on\r\nthis topic.\r\nThe report also shows that deepfake technology can facilitate\r\nvarious criminal activities, including:\r\n14  The Guardian, 2018, accessed on 10 March 2022, ‘An information apocalypse is coming.\r\nHow can we protect ourselves?’, https://www.theguardian.com/commentisfree/2018/\r\nmar/16/an-information-apocalypse-is-coming-how-can-we-protect-ourselves.\r\n15 Europol, ‘Malicious Uses and Abuses of Artificial Intelligence’, 2020, accessed on 10 March\r\n2022, https://www.europol.europa.eu/publications-events/publications/malicious-uses-and-abuses-of-artificial-intelligence.\r\n16 KYC stands for Know Your Customer and refers to the processes for identity verification and\r\nfraud risk assessment used by institutions.\r\n• harassing or humiliating\r\nindividuals online;\r\n• perpetrating extortion\r\nand fraud;\r\n• facilitating document fraud;\r\n• falsifying online identities and\r\nfooling ‘know your customer’\r\nmechanisms16;\r\n• non-consensual pornography;\r\n• online child sexual\r\nexploitation;\r\n• falsifying or manipulating\r\nelectronic evidence for\r\ncriminal justice investigations;\r\n• disrupting financial markets;\r\n• distributing disinformation\r\nand manipulating public\r\nopinion;\r\n• supporting the narratives of\r\nextremist or terrorist groups;\r\n• stoking social unrest and\r\npolitical polarisation.\r\nDeepfake\r\ntechnology’s\r\nimpact on 
crime\n\nDisinformation\r\nDisinformation campaigns are operations to deliberately spread false information in order to deceive.17 One major concern about this use is the ease of creating a fake emergency alert that warns of an impending attack. Another concern is the disruption of elections or other aspects of politics by releasing a fake audio or video recording of a candidate or other political figure. To illustrate this, a video created for the 2019 general election in the UK showed the candidates Boris Johnson and Jeremy Corbyn endorsing each other.18 If this kind of manipulation successfully deceives a large enough part of the populace, this could have a serious impact on the outcome of an election.\r\nBusinesses are also at risk of being targets of disinformation, as deepfakes can be used to generate false information that could fool the public. For example, a threat actor could create a deepfake that makes it appear that a company’s executive engaged in a controversial or illegal act. Certain deepfakes could be used for false advertising and disinformation, which could lead to bad publicity for a targeted company. 
Such applications of deepfakes could impact areas like the stock market and company value, as the public (stakeholders and shareholders, as well as consumers) may believe the deepfake and start selling their stocks or boycotting the company.\r\nOne example that shows the potential for criminal activities supported by deepfakes is the case where criminals used deepfake audio to impersonate the CEO of a company to make an employee transfer USD 35 million.19 In this section of the report, we will look more closely at four of the criminal uses of deepfakes that participants in the foresight activities identified.\r\nNon-consensual pornography\r\nIn a December 2020 study, Sensity, an Amsterdam-based company that detects and tracks deepfakes online, found 85 047 deepfake videos on popular streaming websites, with the number doubling every 6 months.20 In a previous September 2019 study, Sensity discovered that 96 % of the fake videos involved non-consensual pornography. To create this material, a perpetrator overlays a victim’s face onto the body of a pornography actor, making it appear that the victim is engaging in the act. 
In many situations, the victims of pornographic deepfakes are celebrities or high-profile individuals.\r\n17 Merriam-Webster, ‘Disinformation’, accessed on 10 March 2022, https://www.merriam-webster.com/dictionary/disinformation.\r\n18 BBC News, ‘The fake video where Johnson and Corbyn endorse each other’, 2019, accessed on 10 March 2022, https://www.bbc.com/news/av/technology-50381728.\r\n19 Forbes, ‘Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find’, 2021, accessed on 16 March 2022, https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions.\r\n20 Sensity, ‘How to Detect a Deepfake Online: Image Forensics and Analysis of Deepfake Videos’, 2021, accessed on 10 March 2022, https://sensity.ai/blog/deepfake-detection/how-to-detect-a-deepfake/.\n\nThese videos are popular, having received approximately 134 million views at the time21, and there are several pornographic sites that specifically produce pornographic celebrity deepfakes. Perpetrators often act anonymously, making crime attribution more difficult.\r\nDocument fraud\r\nPassports are becoming increasingly hard to forge with modern fraud prevention measures. Synthetic media and digitally manipulated facial images present a new approach for document fraud. Using different methods and tools, it is possible to combine, or morph, the faces of the person the passport actually belongs to and the person(s) wanting to obtain a passport illegally. 
This method may increase the chance that the photo in a forged document passes any identity checks, including those using automated means (facial recognition systems).22\r\n[Image caption: The face in the middle is an example of a digitally manipulated facial image made using this ‘morphing’ method from the two other images shown. The images on the left and right are from The SiblingsDB, which contains different datasets depicting images of individuals related by sibling relationships. The subjects are voluntary students and employees of the Politecnico di Torino and their siblings, in the age range between 13 and 50.23]\r\n21 Government Technology, ‘Deepfakes Are on the Rise — How Should Government Respond?’, 2020, accessed on 10 March 2022, https://www.govtech.com/policy/deepfakes-are-on-the-rise-how-should-government-respond.html.\r\n22 Robertson, D.J., Mungall, A., Watson, D.G. et al., ‘Detecting morphed passport photos: a training and individual differences approach’, Cognitive Research 3, 27, 2018, accessed on 16 August 2021, https://doi.org/10.1186/s41235-018-0113-8; MIT Technology Review, ‘The hack that could make face recognition think someone else is you’, 2020, accessed on 10 March 2022, https://www.technologyreview.com/2020/08/05/1006008/ai-face-recognition-hack-misidentifies-person; Pikoulis, E.-V. et al., ‘Face Morphing, a Modern Threat to Border Security: Recent Advances and Open Challenges’, Applied Sciences, 2021, accessed 17 February 2022, https://www.mdpi.com/2076-3417/11/7/3207.\r\n23 Vieira, T.F., Bottino, A., Laurentini, A., De Simone, M., ‘Detecting Siblings in Image Pairs’, The Visual Computer, vol. 30, issue 12, 2014, pp. 1333-1345, doi: 10.1007/s00371-013-0884-3.\n\nThis kind of approach to fraud can be applied to any other type of digital identity check that requires visual authentication. 
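At its crudest, the ‘morphing’ of two facial photos described above amounts to a weighted pixel-wise average of two aligned images. Real morphing attacks first align facial landmarks and retouch the result; this minimal sketch (the function name and alpha parameter are illustrative assumptions, not from the report) omits those steps.

```python
import numpy as np

def morph(face_a, face_b, alpha=0.5):
    # Naive pixel-space blend of two aligned images of identical shape.
    # Pixel values are assumed to be floats in [0, 1]; alpha weights
    # how strongly the morph resembles face_a.
    face_a = np.asarray(face_a, dtype=float)
    face_b = np.asarray(face_b, dtype=float)
    if face_a.shape != face_b.shape:
        raise ValueError('images must be aligned and equally sized')
    return alpha * face_a + (1.0 - alpha) * face_b

# Two stand-in 4x4 grayscale 'faces': a uniformly dark and a uniformly
# light image; the 50/50 morph sits exactly between them.
dark = np.full((4, 4), 0.2)
light = np.full((4, 4), 0.8)
blend = morph(dark, light)   # every pixel is 0.5
```

Because the blend sits between both contributors, a check that only compares a live face against the document photo may score it as an acceptable match to either person, which is the risk the passage above describes.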
Such an attack greatly undermines identity verification procedures, since there is no reliable way to detect this kind of attack.24, 25, 26, 27\r\nDocument fraud is a facilitator of other crimes like illegal immigration, trafficking in human beings, the selling of various illegal goods, and terrorism, as perpetrators often use fake IDs to travel to their target locations. Deepfake technology might amplify the risk of advanced document fraud by organised crime groups.\r\nIn practice, the robustness of any identification process will depend on the process as a whole, and not only on its visual step(s). However, a higher-quality synthetic image will make a forged document more likely to pass the check of a visual identification step in the process. In general, the prospect of a successful document fraud attempt depends on the quality and context of the deepfake used. The quality of the deepfake is largely dependent on available data and processing power, which is beyond the control of the identification process. The context in which the deepfake is applied, however, is partially determined by the process, which provides opportunities to limit the success of an attack that relies on a good deepfake alone.\r\nDeepfake as a service\r\nJust like many other new technologies, deepfakes are still used mainly by proficient engineers and research parties. However, deepfake capabilities are becoming more accessible to the masses through deepfake apps and websites. There are special marketplaces on which users or potential buyers can post requests for deepfake videos (for example, requests for non-consensual pornography). The increased demand for deepfakes has also led to the creation of several companies that deliver deepfakes as a product or even online service. 
Recorded Future has reported\r\na threat actor’s willingness to pay USD 16 000 for this kind\r\nof service.28\r\nSince deepfakes are based on advanced AI and machine learning\r\ntechnologies, a high level of expertise is required to put the\r\ntechnology together. Accordingly, there are not as many threat\r\nactors with the skillset to develop them on their own as there are\r\n24 University of Lincoln ScienceDaily, ‘Two fraudsters, one passport: Computers more accurate\r\nthan humans at detecting fraudulent identity photos,’ 2019, accessed on 20 July, 2020, at\r\nwww.sciencedaily.com/releases/2019/08/190801104038.htm.\r\n25 Naser Damer, PhD. (n.d.). Fraunhofer IGD, ‘Face morphing: a new threat?’ accessed on 20\r\nJuly 2020, at https://www.igd.fraunhofer.de/en/press/annual-reports/2018/face-morphing-a-new-threat.\r\n26 David J. Robertson, et al. ‘Detecting morphed passport photos: a training and individual\r\ndifferences approach,’ Springer Nature, 2018, accessed on 20 July 2020, at https://\r\ncognitiveresearchjournal.springeropen.com/articles/10.1186/s41235-018-0113-8 .\r\n27 Robin S.S. Kramer, et al., ‘Face morphing attacks: Investigating detection with humans and\r\nComputers, Springer Nature, 2019, accessed on 20 July 2020, at https://link.springer.com/\r\narticle/10.1186/s41235-019-0181-4.\r\n28 Biometric update, ‘Dark news from dark web: deepfakers are getting their act together’, 2021,\r\naccessed on 16 March 2022, https://www.biometricupdate.com/202105/dark-news-from-dark-web-deepfakers-are-getting-their-act-together.\r\n13\n\nwho would be interested in deepfakes as a service. Those who\r\nknow how to leverage sophisticated AI can perform the service\r\nfor others, enabling threat actors to manipulate a person’s face\r\nand/or voice without understanding the intricacies behind how it\r\nworks. 
Then they can conduct advanced social engineering attacks\r\non unsuspecting victims, with the aim of making a sizable profit.\r\nPlatforms offering these kinds of services have already started\r\nto emerge.29\r\nDeepfake technology’s impact on law enforcement\r\nLaw enforcement agencies will be adversely impacted by the rise\r\nof synthetic media and deepfakes. While they may provide some\r\nopportunities to benefit society, this report focuses on the malicious\r\nuse of deepfakes. Adverse effects not only include the criminal uses\r\ndescribed in the previous chapter, but also the more general impact\r\nof deepfakes on society. During foresight activities conducted by\r\nEuropol, participants discussed how certain technologies could\r\nimpact law enforcement. In relation to deepfakes, law enforcement\r\nagencies may even be forced into action, possibly the wrong action,\r\nby misinformation.\r\nImpact on police work\r\nAltered material on social media about events such as\r\ndemonstrations may lead to police taking action\r\nwhere it is not necessary, or in the wrong place. In police\r\ninvestigations, law enforcement may pursue the wrong suspect\r\nwhen a deepfake version of the suspect fleeing\r\na crime scene goes viral on social media, thereby giving the\r\nsuspect the opportunity to get away.\r\nUsing deepfakes, people could falsely portray police officers\r\ncommitting transgressions in order to discredit the police or\r\neven incite violence against officers. At a time when distrust in\r\nauthorities is growing, deepfakes and manipulated footage\r\nmay be used to negatively affect public opinion. The impact of\r\nsuch images and footage is not to be underestimated, especially\r\nwhen this is combined with doxxing (exposing the identity of)\r\nthe officers supposedly involved.\r\nImpact on the legal process\r\nIn court, audio-visual evidence is usually trusted to be an authentic\r\nrepresentation of events. 
Whether the file is extracted from the\r\nphone of a suspect, downloaded from social media, or received\r\nfrom the CCTV system of a shop near the crime scene, the\r\nauthenticity of the scene depicted is not usually questioned.\r\nWith the rise of deepfakes, it will become increasingly important\r\n29 Europol, ‘Malicious Uses and Abuses of Artificial Intelligence’, 2020, accessed on 10 March\r\n2022, https://www.europol.europa.eu/publications-events/publications/malicious-uses-and-abuses-of-artificial-intelligence.\n\nto scrutinise such content and verify whether it is real or somehow\r\nartificially manipulated or generated.\r\nCross-checking footage will become even more important. It calls\r\nfor a thorough vetting of digital evidence, with specific attention to\r\nshowing that it can be trusted as authentic. A consistent and transparent\r\nchain of custody of digital evidence, to prove that no one could have\r\ndoctored the evidence during the investigation, is essential. For instance,\r\nas part of a child custody case, the mother of a child tried to\r\nconvince the court that her husband behaved violently.\r\nShe manipulated an audio recording of the man to make it sound as if\r\nhe was making threats. Although this was not a real deepfake,\r\nit raises questions and concerns.30 What if the manipulated recording\r\nhad not been proven fake?\r\nWith lighter-weight neural network structures and advances in\r\nhardware, training and generation time will be significantly reduced.\r\nIn the near future, deepfake software will likely be able to generate\r\nfull-body deepfakes, real-time impersonations, and the seamless\r\nremoval of elements within videos. 
The most recent algorithms can\r\ndeliver increasingly high levels of realism and run in near real time.\r\nNew capacities needed\r\nClaims as to the use of deepfake material will require further law\r\nenforcement assessment, leading to new cases and new types\r\nof work. This will result in an increased workload and a push for\r\nlaw enforcement officers to develop new skills. Fake evidence has\r\nalways existed and law enforcement agencies have procedures\r\nin place to assess the value of evidence. These procedures\r\nwere developed for the types of forgeries already known and will\r\nhave to be updated continuously with the rise of deepfakes. Law\r\nenforcement agencies will need to not only upskill their workforce\r\nto detect deepfakes, but also invest in their technical capabilities\r\nin order to address the upcoming challenges effectively while\r\nrespecting fundamental rights.\r\nLaw enforcement agencies must consider this issue from multiple\r\nperspectives when creating, storing, protecting and analysing\r\naudio-visual material. Specifically, they should:\r\n• make use of tested and proven methods when making audio-visual recordings, e.g. certify a particular set-up for use in court; and\r\n• employ technical and organisational safeguards against\r\ntampering, in order to be able to prove the authenticity of\r\nthe footage.\r\nLooking beyond law enforcement, general prevention strategies\r\nmay be considered to make it harder to use deepfake technology\r\n30 European Parliamentary Research Service, ‘Tackling deepfakes in European policy’,\r\n2021, accessed on 15 March 2022, https://www.europarl.europa.eu/RegData/etudes/\r\nSTUD/2021/690039/EPRS_STU(2021)690039_EN.pdf.\n\non audio-visual material. For example, technical solutions could\r\nbe implemented to make deepfakes easier to spot or to increase\r\nmarkers of authenticity. 
The Content Authenticity Initiative31 is an\r\nexample of efforts to provide a standard for content authenticity\r\nand provenance.\r\nParticipants of the Innovation Lab’s foresight activities anticipated\r\nnew forms of crime, together with the resulting challenges in terms\r\nof data collection, criminal attribution and the heightened anonymity\r\nof the perpetrators, for example when deepfakes are created for criminal\r\npurposes. Criminals are likely to adopt new modi operandi that\r\nLEAs will be unable to identify or counter. The failure to legislate\r\nfor these technologies will further stymie the investigative abilities\r\nof LEAs.\r\nMitigating these risks requires greater research and funding. Law\r\nenforcement professionals will need to anticipate possible crime\r\nscenarios such as those discussed in this report, and build out their\r\ninvestigative abilities accordingly. Furthermore, they should work\r\nwith relevant stakeholders to ensure that the appropriate legislation\r\nis in place. Greater awareness building and transparency vis-à-vis\r\nthe public is also needed to ensure the roll-out of these technologies\r\nis not hamstrung by concerns over privacy and data protection.\r\nDeepfake detection\r\nLaw enforcement has always had to deal with fake evidence\r\nand therefore is in a good position to adapt to the presence\r\nof deepfakes. In order to handle the material LEAs encounter\r\nappropriately, it is important to account for the possibility of\r\nsynthetic content with malicious intent. Here we discuss some of\r\nthe ways this synthetic content can be uncovered, and preventive\r\nmeasures that can be taken against this threat.\r\nManual detection\r\nIt is still possible for the vast majority of deepfake content to be\r\nmanually detected by looking for inconsistencies. This is a labour-intensive task, which can only be done for a very limited number\r\nof files, and requires appropriate training to become familiar with all the\r\nrelevant signs. 
Moreover, this process is further complicated by\r\nthe human predisposition to believe audio-visual content and work\r\nfrom a truth-default perspective.32 That introduces the possibility of\r\nmistakes, both in selecting the files that need to be inspected and\r\nin the inspection itself.\r\n31 Content Authenticity Initiative, accessed on 10 March 2022, https://contentauthenticity.org.\r\n32 Levine, T.R., ‘Truth-Default Theory (TDT): A Theory of Human Deception and Deception\r\nDetection’, Journal of Language and Social Psychology, 2014, pp. 378-392, https://www.\r\nresearchgate.net/publication/273593306_Truth-Default_Theory_TDT_A_Theory_of_\r\nHuman_Deception_and_Deception_Detection.\n\nThe models generating deepfakes might produce believable images,\r\nbut these may still contain imperfections upon closer examination.\r\nA few examples include:\r\n• blurring around the edges of the face;\r\n• lack of blinking;\r\n• inconsistent light reflection in the eyes;\r\n• inconsistencies in the hair, vein patterns, scars etc.;\r\n• inconsistencies in the background, in subject as well as focus,\r\ndepth etc.33\r\nAutomated detection\r\nIdeally, a system would scan any digital content and automatically\r\nreport on its authenticity. Such a system will most likely never be\r\nperfect, but with the increased sophistication of deepfake technology, a\r\nhigh degree of certainty from such a system could be worth more\r\nthan manual inspection. 
There have already been efforts to\r\ncreate this kind of software by organisations such as Facebook34\r\nand security firm McAfee.35 Detection software will look for signs of\r\nmanipulation and help the reviewer decide on authenticity, supported by\r\nan explainable AI report on these signs.\r\nAs deepfake creation tools need training data to know what a real\r\nperson looks like, most deepfake detection models are trained using\r\ndatabases of deepfake images. The learned signs of manipulation\r\nare thus based on data from known deepfakes, making it difficult to\r\nknow how successful a model will be at detecting deepfakes generated\r\nby unknown or updated models. Moreover, a deepfake GAN can\r\nbe updated to account for the signs detected by known detection\r\nmodels, so that it avoids producing these signs\r\nand thus goes undetected.\r\nSome examples36 of detection technologies that have been\r\ndeveloped in recent years are:\r\nBiological signals\r\nThis approach tries to detect deepfakes based on imperfections in\r\nthe natural changes in skin colour that arise from the flow of blood\r\nthrough the face.37\r\n33 Venema, A. E., \u0026 Geradts, Z. J., ‘Digital Forensics, Deepfakes and the Legal Process’, 2020,\r\nTheSciTechLawyer, 16(4), pp. 14-23.\r\n34 Michigan State University, MSU, ‘Facebook develop research model to fight deepfakes’, 2021,\r\naccessed on 10 March 2022, https://msutoday.msu.edu/news/2021/deepfake-detection.\r\n35 McAfee, ‘The Deepfakes Lab: Detecting \u0026 Defending Against Deepfakes with Advanced AI’,\r\n2020, accessed on 10 March 2022, https://www.mcafee.com/blogs/enterprise/security-operations/the-deepfakes-lab-detecting-defending-against-deepfakes-with-advanced-ai.\r\n36 AIM, ‘Top AI-Based Tools \u0026 Techniques For Deepfake Detection’, 2020, accessed on 24\r\nSeptember 2021, https://analyticsindiamag.com/top-ai-based-tools-techniques-for-deepfake-detection.\r\n37 U. A. Ciftci, I. Demir and L. 
Yin, “FakeCatcher: Detection of Synthetic Portrait Videos using\r\nBiological Signals,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, doi:\r\n10.1109/TPAMI.2020.3009287.\n\nPhoneme-viseme mismatches\r\nFor some words, the dynamics of the mouth (the viseme) are\r\ninconsistent with the pronunciation of the phoneme. Deepfake models\r\nmay not correctly combine viseme and phoneme in these cases.38\r\nFacial movements\r\nThis approach uses correlations between facial movements\r\nand head movements to extract a characteristic movement of\r\nan individual to distinguish between real and manipulated or\r\nimpersonated content.39\r\nRecurrent Convolutional Models\r\nVideos consist of frames, which are essentially a sequence of images.\r\nThis approach looks for inconsistencies between these frames\r\nwith deep learning models.\r\nHowever, there are also challenges facing deepfake detection\r\ntechnology.\r\n• Detection algorithms are trained on specific datasets. A slight\r\nalteration of the method used to generate the deepfake may\r\ntherefore prevent detection.\r\n• An update to the discriminative model of a GAN to account for\r\nspecific artefacts detected by these systems will fool\r\nthe detection software.\r\n• Videos may be compressed or reduced in size, which reduces\r\npixel information and introduces artefacts, making it\r\nharder to detect the inconsistencies the system looks for.\r\n• It has been shown that databases may be manipulated to\r\nmisclassify images with certain identifiers by adding an identifier\r\nto a small part of the dataset (e.g. applying a trigger to 5% of the\r\nimages resulted in the misclassification of fake images with the\r\ntrigger as real).40\r\n• Increased image forensics and deepfake detection capabilities\r\ndrive the increased quality of deepfake videos. 
GANs can catch\r\nup relatively easily; by updating the discriminator to evade\r\nthe detector, the feedback loop of the GAN\r\nwill work to produce a deepfake that can fool\r\nthe detector.41\r\n38 Agarwal, S. et al., ‘Detecting Deep-Fake Videos from Phoneme-Viseme Mismatches’, 2020\r\nIEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020,\r\naccessed on 10 March 2022, https://www.ohadf.com/papers/AgarwalFaridFriedAgrawala_\r\nCVPRW2020.pdf.\r\n39 Agarwal, S. et al., ‘Protecting world leaders against deep fakes’, Proceedings of the IEEE/\r\nCVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp.\r\n38-45, 2019, accessed on 10 March 2022, http://www.hao-li.com/publications/papers/\r\ncvpr2019workshopsPWLADF.pdf.\r\n40 Cao, X. and Gong, N.Z., ‘Understanding the Security of Deepfake Detection’, ArXiv, 2021,\r\naccessed on 18 October 2021, https://arxiv.org/abs/2107.02045.\r\n41 Wired, ‘Deepfakes Aren’t Very Good. Nor Are the Tools to Detect Them’, 2020, accessed on\r\n15 March 2022, https://www.wired.com/story/deepfakes-not-very-good-nor-tools-detect.\n\nPreventive measures\r\nOrganisations that rely on some kind of authorisation by face\r\nor voice biometrics should assess the authorisation process as\r\na whole. Increasing the robustness of this process is currently\r\nconsidered a better course of action than solely implementing\r\nspecific deepfake detection systems. Common checks are:\r\n• using audio-visual authorisation rather than just audio;\r\n• demanding a live video connection;\r\n• requiring random complicated acts to be performed live in front\r\nof the camera, e.g. 
moving the hands across the face.\r\nHow are other actors responding to deepfakes?\r\nIn order to address the challenges posed by deepfake technology,\r\nit is important to look at how other actors, including\r\nthe online platforms where most deepfakes are likely to be\r\nshared, are addressing this threat. This is also influenced by the\r\ncurrent legislative framework, which can require mandatory\r\nmeasures or call for voluntary ones. This chapter presents some\r\nexamples of key online service providers and companies and their\r\nanti-deepfake measures, and then examines the EU\r\nregulatory framework in this area.\r\nTechnology companies\r\nEarly in 2020, Meta (formerly Facebook) announced a new policy\r\nbanning deepfakes from its platforms.42 Meta said it would\r\nremove AI-edited content that would likely mislead people, but made\r\nit clear that satire or parodies using the same technology would still\r\nbe permissible on the platforms. In order for law enforcement to\r\nassess and address the impact of deepfakes on its work, it needs to\r\nbe aware of the policies technology companies have put in place, as\r\nit is likely that potential evidence or malicious content will be shared\r\nvia these platforms. 
How technology companies such as Twitter\r\nand Meta regulate deepfake technology will have an extensive\r\nimpact on how people engage with and react to deepfakes.\r\nExamples of company policies:\r\n• Meta (which owns Facebook and Instagram) aims to remove\r\ndeepfakes, or otherwise edited media, where “manipulation\r\nisn’t apparent and could mislead, particularly in the case of\r\nvideo content.” 43\r\n• TikTok bans “Digital Forgeries (Synthetic Media or Manipulated\r\nMedia) that mislead users by distorting the truth of events\r\n42 Becoming Human: Artificial Intelligence Magazine, ‘A Look at Deepfakes in 2020’, 2020,\r\naccessed on 15 March 2022, https://becominghuman.ai/a-look-at-deepfakes-in-2020-\r\n13d3fe2b6ef7.\r\n43 Meta, ‘Manipulated media’, accessed on 10 March 2022, https://transparency.fb.com/en-gb/\r\npolicies/community-standards/manipulated-media/.\n\nand cause significant harm to the subject of the video, other\r\npersons, or society.” 44\r\n• Reddit “does not allow content that impersonates individuals\r\nor entities in a misleading or deceptive manner.” This explicitly\r\nincludes deepfakes “presented to mislead, or falsely attributed\r\nto an individual or entity.” 45\r\n• YouTube bans manipulated media under\r\nthe spam, deceptive practices and scams policies of its\r\ncommunity guidelines.46\r\nMany of the policies use ‘intent’ as their barometer for deciding\r\nwhether or not to remove a deepfake. However, defining ‘intent’\r\nmight prove challenging and highly subjective, since it is based\r\non the assessment of individual actors. Nonetheless, it seems\r\nthat online platforms could play a pivotal role in helping victims of\r\ndeepfake technology to identify the perpetrator, but how this works\r\nin practice remains to be seen. 
Moreover, technology providers also\r\nhave responsibilities in safeguarding the positive and legal use of their\r\ntechnologies and in cooperating with law enforcement.\r\nIn addition to these policies, various technology companies are\r\nworking on deepfake detection technologies. Developing detection\r\ntechnologies became a priority during the COVID-19 pandemic, and\r\nhas gained new attention during the current conflict between Russia\r\nand Ukraine.\r\n• Meta said it had developed an AI tool that detects deepfakes\r\nby reverse engineering a single AI-generated image to track\r\nits origin.47\r\n• Google has released a large dataset of visual deepfakes that has\r\nbeen incorporated into the FaceForensics benchmark.48\r\n• Microsoft has launched the Microsoft Video Authenticator,\r\nwhich can analyse a still photo or video to provide a percentage\r\nchance of whether the media has been artificially manipulated.49\r\n44 TikTok, ‘Community Guidelines’, accessed on 10 March 2022, https://newsroom.tiktok.com/\r\nen-us/combating-misinformation-and-election-interference-on-tiktok.\r\n45 Reddit, ‘Updates to Our Policy Around Impersonation’, 2020, accessed on 10 March 2022,\r\nhttps://www.reddit.com/r/redditsecurity/comments/emd7yx/updates_to_our_policy_\r\naround_impersonation.\r\n46 Google Support, ‘Misinformation policies’, accessed on 10 March 2022, https://support.\r\ngoogle.com/youtube/answer/10834785.\r\n47 Politico, ‘POLITICO AI: Decoded: Big Tech on the AI Act — AI inventors — Deepfakes’, 2021,\r\naccessed on 10 March 2022, https://www.politico.eu/newsletter/ai-decoded/politico-ai-decoded-big-tech-on-the-ai-act-ai-inventors-deepfakes.\r\n48 Google AI Blog, ‘Contributing Data to Deepfake Detection Research’, 2019, accessed on 10\r\nMarch 2022, https://ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.\r\nhtml.\r\n49 Microsoft, ‘New Steps to Combat Disinformation’, 2020, accessed on 10 March 
2022,\r\nhttps://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator.\n\nEuropean Union\r\nRegarding legal trends, participants of the foresight activities\r\nnoted that at both the national and regional level, European law is\r\nstruggling to keep pace with the evolution of technology and\r\nthe changing definitions of crime. Participants flagged the need\r\nto establish new regulatory frameworks. These should be sensitive\r\nto contemporary law enforcement challenges (particularly in\r\nthe digital realm), as well as to changing ethical norms. Some\r\nparticipants anticipated greater regulation of the digital sphere\r\nin the coming decade.\r\nThe COVID-19 crisis brought more discussion around the regulation of\r\ndisinformation and deepfake detection tools, but also an increased\r\nuse of video conferencing tools with adjustable backgrounds and\r\nother filters, bringing manipulated digital realities into our daily lives.\r\nThe European Parliament report, ‘Tackling Deepfakes in European\r\nPolicy’, explains this and shows that the regulatory landscape in the\r\nEuropean Union related to deepfakes “comprises a complex web of\r\nconstitutional norms, as well as hard and soft regulations on both\r\nthe EU and the Member State level”.50\r\nThe most relevant regulatory framework for law enforcement in\r\nthe area of deepfakes will be the AI regulatory framework – which\r\nis still at proposal level and not yet applicable – proposed by the\r\nEuropean Commission. The framework takes a risk-based approach\r\nto the regulation of AI and its applications. Deepfakes are explicitly\r\ncovered by the passage about “AI systems used to generate or\r\nmanipulate image, audio or video content”, and have to adhere to\r\ncertain minimum requirements. 
These minimum requirements include\r\nmarking content as deepfake to make clear that users are dealing\r\nwith manipulated footage.51\r\nDeepfake detection software used by law enforcement authorities\r\nfalls into the category of ‘high-risk’, as it is considered to pose a threat\r\nto the rights and freedoms of individuals. Detection software used\r\nby law enforcement under the AI regulatory framework would only\r\nbe permitted under strict safeguards, such as the employment of\r\nrisk-management systems and appropriate data governance and\r\nmanagement practices.52\r\n50 European Parliamentary Research Service, ‘Tackling deepfakes in European policy’, 2021,\r\naccessed on 10 March 2022, https://www.europarl.europa.eu/RegData/etudes/\r\nSTUD/2021/690039/EPRS_STU(2021)690039_EN.pdf.\r\n51 European Parliamentary Research Service, ‘Tackling deepfakes in European policy’, 2021,\r\naccessed on 10 March 2022, https://www.europarl.europa.eu/RegData/etudes/\r\nSTUD/2021/690039/EPRS_STU(2021)690039_EN.pdf.\r\n52 European Parliamentary Research Service, ‘Tackling deepfakes in European policy’, 2021,\r\naccessed on 10 March 2022, https://www.europarl.europa.eu/RegData/etudes/\r\nSTUD/2021/690039/EPRS_STU(2021)690039_EN.pdf.\n\nConclusion\r\nAs this report shows, in order to effectively address the threats\r\nposed by deepfake technology, legislation and regulation need to\r\ntake into account law enforcement needs. Within the regulatory\r\nframework, law enforcement, online service providers and\r\nother organisations need to develop their policies and invest in\r\ndetection as well as prevention technology. Policymakers and law\r\nenforcement agencies need to evaluate their current policies\r\nand practices, and adapt them to be prepared for the new reality\r\nof deepfakes.\r\nThe strategic foresight activities conducted by the Europol\r\nInnovation Lab identified a series of challenges that LEAs will have\r\nto contend with in the decade ahead. 
In particular, they identified\r\nrisks associated with digital transformation, the adoption and\r\ndeployment of new technologies, the abuse of emerging technology\r\nby criminals, accommodating new ways of working, and maintaining\r\ntrust in the face of an increase in disinformation.\r\nIn the months and years ahead, it is highly likely that threat actors\r\nwill make increasing use of deepfake technology to facilitate various\r\ncriminal acts and conduct disinformation campaigns to influence or\r\ndistort public opinion. Advances in machine learning and artificial\r\nintelligence will continue to enhance the capabilities of the software\r\nused to create deepfakes. According to experts, GANs, the availability\r\nof public datasets and increased computing power will be the main\r\ndrivers of deepfake development in the future, making deepfakes more\r\ndifficult to distinguish from authentic content.\r\nThe increase in the use of deepfakes will require legislation to set\r\nguidelines and enforce compliance. Additionally, social networks\r\nand other online service providers should play a greater role in\r\nidentifying and removing deepfake content from their platforms.\r\nAs the public becomes more educated about deepfakes, there will be\r\nincreasing concern worldwide about their impact on individuals,\r\ncommunities, and democracies.\r\nIn the EU there are various policies and regulatory attempts to\r\naddress deepfakes. However, law enforcement’s use of technology\r\nto detect deepfakes is considered ‘high-risk’, according to some\r\nproposals. Therefore, it will be very important to clarify which\r\npractices should be prohibited under the AI regulatory framework.\r\nIn order to address the challenges posed by deepfakes, law\r\nenforcement agencies need to prepare and train for deepfake\r\ndetection and ensure e-evidence integrity, developing their\r\ncapacities as described in this report. 
The regulatory framework\r\nshould also support law enforcement preparedness efforts.\r\nThe Europol Innovation Lab is continuously monitoring the\r\ndevelopment of disruptive technologies such as deepfakes.\n\nAbout the Europol Innovation Lab\r\nTechnology has a major impact on the nature of crime. Criminals quickly integrate\r\nnew technologies into their modus operandi, or build brand-new business models\r\naround them. At the same time, emerging technologies create opportunities for\r\nlaw enforcement to counter these new criminal threats. Thanks to technological\r\ninnovation, law enforcement authorities can now access an increased number\r\nof suitable tools to fight crime. When exploring these new tools, respect for\r\nfundamental rights must remain a key consideration.\r\nIn October 2019, the Ministers of the Justice and Home Affairs Council called\r\nfor the creation of an Innovation Lab within Europol, which would develop a\r\ncentralised capability for strategic foresight on disruptive technologies to inform\r\nEU policing strategies.\r\nStrategic foresight and scenario methods offer a way to understand and prepare\r\nfor the potential impact of new technologies on law enforcement. The Europol\r\nInnovation Lab’s Observatory function monitors technological developments\r\nthat are relevant for law enforcement and reports on the risks, threats and\r\nopportunities of these emerging technologies. To date, the Europol Innovation\r\nLab has organised three strategic foresight activities with EU Member State law\r\nenforcement agencies and other experts.\r\nwww.europol.europa.eu",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"origins": [
		"pdf"
	],
	"references": [
		"https://www.europol.europa.eu/cms/sites/default/files/documents/Europol_Innovation_Lab_Facing_Reality_Law_Enforcement_And_The_Challenge_Of_Deepfakes.pdf"
	],
	"report_names": [
		"Europol_Innovation_Lab_Facing_Reality_Law_Enforcement_And_The_Challenge_Of_Deepfakes.pdf"
	],
	"threat_actors": [],
	"ts_created_at": 1777429231,
	"ts_updated_at": 1777457934,
	"ts_creation_date": 1706711076,
	"ts_modification_date": 1706711078,
	"files": {
		"pdf": "https://archive.orkl.eu/3db47f32300b6b62b4c9711441f2a0004420c99b.pdf",
		"text": "https://archive.orkl.eu/3db47f32300b6b62b4c9711441f2a0004420c99b.txt",
		"img": "https://archive.orkl.eu/3db47f32300b6b62b4c9711441f2a0004420c99b.jpg"
	}
}