The widespread availability of artificial intelligence (AI) tools has enabled the growing use of “deepfakes,” in which a person’s voice and likeness are replicated so seamlessly that the impersonation is nearly impossible to detect with the naked eye (or ear).

These deepfakes pose substantial new risks for commercial organizations. For example, deepfakes can be used to threaten an organization’s brand, impersonate its leaders and financial officers, and gain access to its networks, communications, and sensitive information.

In 2023, the National Security Agency (NSA), Federal Bureau of Investigation (FBI), and Cybersecurity and Infrastructure Security Agency (CISA) released a Cybersecurity Information Sheet (the “Joint CSI”) entitled “Contextualizing Deepfake Threats to Organizations,” which outlines the risks deepfakes pose to organizations and recommends steps that organizations, including national critical infrastructure companies (such as financial services, energy, healthcare, and manufacturing organizations), can take to protect themselves. Loosely defining deepfakes as “multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence),” the Joint CSI cautioned that the “market is now flooded with free, easily accessible tools” such that “fakes can be produced in a fraction of the time with limited or no technical expertise.” Deepfake perpetrators may thus range from amateur mischief makers to savvy, experienced cybercriminals.

Recognizing these emerging threats, certain states have already enacted laws, or introduced bills, aimed at regulating the use of deepfakes in the lead-up to an election or in connection with the spread of nonconsensual intimate images. At the federal level, a bill regulating deepfake pornography has been advanced, and the Federal Communications Commission has proposed rules regulating deepfakes in political advertising. These measures, however, are limited in scope and application and do not address the myriad other ways deepfakes can wreak havoc on an organization.

The Joint CSI highlighted two recent examples of reported deepfake threats. In one instance, an unknown actor used synthetic visual and audio media to impersonate a company’s CEO, inviting a product line manager to an interactive call via WhatsApp and mimicking the CEO’s voice on the call. In another, a threat actor impersonated a company executive’s voice on WhatsApp and suggested a Teams meeting; on the call, the screen appeared to show the executive while the threat actor attempted to trick an employee into sending a wire transfer. In a similar case reported by CNN after the Joint CSI was published, a finance worker at a multinational firm was tricked into paying out $25 million to fraudsters who had used deepfake technology to impersonate the company’s CFO.

These examples demonstrate how deepfake technology opens new avenues for malicious actors to exploit organizations. It is reasonable to expect that, as these technologies become more readily accessible, organizations will increasingly be targeted with deepfakes to commit fraud, launch “denial of service” attacks that prevent access to their services or products, or damage their reputation and products. Such attacks will likely target the organization’s executive and financial teams.

Executive teams should therefore prepare for deepfakes just as they would for any other cybersecurity or fraud attack, including through monitoring, workforce training, and the implementation of incident response plans. In April 2024, the National Institute of Standards and Technology (NIST) published draft guidance, NIST AI 100-4, entitled “Reducing Risks Posed by Synthetic Content.” The draft guidance highlights that synthetic content, such as deepfakes, can “produce concentrated fraud and social engineering, and impose financial costs on victims of these schemes,” and it sets forth steps that can be taken to mitigate the risk of such an attack. Although still in draft form, the guidance provides a helpful summary of the current state of the technology available to protect and defend against deepfakes, including synthetic image, video, and audio detection tools. Organizations should also consider implementing a strategy to protect the authenticity and integrity of their own content (e.g., images, text, audio, video), such as digital watermarking and fingerprinting/cryptographic hashing of the files and their underlying metadata. Among other things, as NIST points out, these technologies may offer the significant benefit of enabling organizations to quickly debunk claims that synthetically generated content is authentic.
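To make the fingerprinting idea concrete, below is a minimal sketch, in Python using only the standard library, of how an organization might record cryptographic hashes of its official media files and later check a circulating file against them. The function names, file layout, and manifest format are illustrative assumptions, not part of the NIST guidance.

```python
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 digest of the file's raw bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large video/audio files don't exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(media_dir: Path, manifest_path: Path) -> None:
    """Record a fingerprint for every official media file in media_dir."""
    manifest = {p.name: fingerprint(p)
                for p in sorted(media_dir.iterdir()) if p.is_file()}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_file(path: Path, manifest_path: Path) -> bool:
    """Check a circulating file against its recorded fingerprint.
    A mismatch means the content differs from what the organization published."""
    manifest = json.loads(manifest_path.read_text())
    recorded = manifest.get(path.name)
    return recorded is not None and recorded == fingerprint(path)
```

Note that in practice the manifest itself must be protected (for example, digitally signed or stored in a tamper-evident system); otherwise an attacker who alters a file could simply alter its recorded hash as well.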

Similarly, the Joint CSI advises that organizations consider implementing a number of existing technologies (including commercially available tools) to detect deepfakes and determine media provenance, including real-time verification capabilities, passive detection techniques, and protection of high-priority officers and their communications.
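The Joint CSI does not prescribe a particular implementation, but one simple form of real-time verification is a challenge-response check built on a secret shared out of band between an executive and staff. The sketch below, using Python’s standard hmac and secrets modules, illustrates the general pattern under that assumption; it is not a technique drawn from the CSI itself.

```python
import hashlib
import hmac
import secrets

# Assumption: the executive and the finance team share a secret that was
# exchanged out of band (e.g., in person), never over the channel being
# verified. The value below is an illustrative placeholder.
SHARED_SECRET = b"exchanged-in-person-not-over-the-wire"

def issue_challenge() -> str:
    """The employee sends a fresh random nonce to the purported executive."""
    return secrets.token_hex(16)

def respond_to_challenge(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """The real executive computes an HMAC over the challenge."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify_response(challenge: str, response: str,
                    secret: bytes = SHARED_SECRET) -> bool:
    """A deepfaked caller cannot produce a valid response without the secret.
    compare_digest performs a constant-time comparison."""
    expected = respond_to_challenge(challenge, secret)
    return hmac.compare_digest(expected, response)
```

Even a low-tech variant of the same idea, such as a pre-agreed rotating passphrase, can defeat a voice clone that otherwise sounds indistinguishable from the real person.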

In addition to the technology solutions that can assist an organization in detecting and debunking deepfake attacks, it is important for an organization to prepare for the reputational impact a deepfake may have on its key managers and products, even if the attack is ultimately debunked. In our view, it is essential that an organization have a public relations and communications plan for responding to deepfakes. Such a plan would involve not only communications with affected stakeholders, perhaps including existing and potential customers, shareholders, and employees, but also the initial reporting of the incident to relevant law enforcement authorities. Indeed, we strongly recommend that the response be planned ahead of time and practiced by the organization’s response team.

A good communications plan can help limit confusion, both public and internal, in the attack’s aftermath, when questions and conjecture arise as to whether the deepfake content is genuine. It can also improve responsiveness across the organization by sharing action plans, updating stakeholders, and providing transparency throughout the response. The plan should identify those authorized to speak about the incident, the range of potential communication channels, the schedule of communications, and procedures for notifying external organizations (e.g., partners, customers, and consumers) that are directly involved in or affected by the incident.

Finally, preparing for deepfake attacks and instituting mitigation measures may also be required under cybersecurity and privacy laws and regulations that obligate organizations to safeguard protected information.

Organizations should ensure, therefore, that legal and operational strategies and plans are in place and tested to respond to a variety of deepfake techniques.
