Will a growing number of deepfakes — deceptive images, videos, and audio often created by AI and circulated online — start making us doubt our own eyes?
The most recent Indiana Jones movie shows actor Harrison Ford de-aged by 40-plus years. The movie makers used artificial intelligence to comb through all of the decades-old footage of the actor and create a younger Ford.
This technology is called deepfake, and it is not just catching on in entertainment; it is also a growing concern in cybersecurity. Recently, more than $240,000 was stolen from a British energy company by someone pretending to be an executive. That might not seem out of the ordinary, except that the voice on the phone did not belong to the executive at all: thieves used voice-mimicking software to imitate the real executive, and they got away with it.
Deepfake technology uses artificial intelligence (AI) to generate or manipulate digital media, particularly video and audio content, in a way that is difficult for viewers to distinguish from authentic, original material. Machine learning algorithms synthesize new content from existing data, such as images or videos of real people.
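To make the idea concrete, here is a minimal, purely conceptual sketch of the classic face-swap approach: a shared encoder learns a common "face code," while one decoder per identity learns to reconstruct that person. Everything here is illustrative; the tiny network sizes, the PyTorch framing, and the random tensors standing in for face images are assumptions rather than any particular tool's implementation.

```python
# Conceptual sketch only, not a working deepfake system: a shared encoder with
# one decoder per identity. Random tensors stand in for aligned face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, 128),  # shared latent "face code"
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()  # learns to reconstruct person A
decoder_b = Decoder()  # learns to reconstruct person B

faces_a = torch.rand(8, 3, 64, 64)  # placeholder for person A's face crops
faces_b = torch.rand(8, 3, 64, 64)  # placeholder for person B's face crops

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(3):  # a real system trains for many thousands of steps
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's face, then decode it with person B's decoder,
# producing B's likeness driven by A's expression and pose.
swapped = decoder_b(encoder(faces_a))
```

Real systems train on thousands of aligned face crops for many hours, which is part of why convincing deepfakes still take effort to produce.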
Deepfake technology has the potential to be used for both positive and negative purposes. On the positive side, it could create more realistic visual effects in movies or generate realistic simulations for training. On the negative side, it could be used to spread false information or manipulate public opinion by creating fake videos of people saying or doing things they never actually said or did, including fabricated footage of politicians or other public figures intended to discredit them.
What are some other benefits of deepfake technology?
There are many potential benefits of deepfake technology, including:
- Educational applications — Deepfake technology could be used to create educational videos or simulations that are more engaging and interactive for students.
- Improved visual effects — Deepfake technology could be used to create more realistic visual effects in movies, television shows, and other forms of media. This could lead to a more immersive and engaging viewing experience for audiences.
- Enhanced simulations — Deepfake technology could be used to create realistic simulations for training purposes in a variety of industries, such as aviation, military, and healthcare. This could help to prepare professionals for real-life scenarios and improve their decision-making skills.
- Increased accessibility — Deepfake technology could be used to create subtitles, translations, or dubbed audio for video content, making it more accessible to people who are hearing impaired or who speak other languages.
What are some downsides of deepfake technology?
Not surprisingly, there are several potential downsides to deepfake technology, including:
- Misinformation and propaganda — Deepfake technology could be used to spread false information or propaganda by creating fake videos or audio recordings of people saying or doing things that they never actually said or did. This could have serious consequences, such as undermining public trust in institutions, sowing political discord, or even inciting violence.
- Privacy violations — Deepfake technology could be used to create fake videos or audio recordings of people without their consent, potentially violating their privacy.
- Personal harm — Deepfake technology could be used to create fake videos or audio recordings of people that are embarrassing, offensive, or damaging to their reputation. This could lead to personal harm or distress for the individuals depicted in the fake content.
- Legal issues — Deepfake technology could create legal issues related to intellectual property, copyright, and defamation. For example, if a deepfake video is used to defame someone or to falsely attribute a statement to them, it could lead to legal action.
- Ethical concerns — There are also ethical concerns about the use of deepfake technology, particularly with respect to consent and transparency. It is important to ensure that people are aware when they are interacting with deepfake content and that they have given their consent for their images or voices to be used in this way.
How to guard against deepfake technology
Even when someone fabricates a persuasive fake video, software engineers, governments, and journalists can often still determine whether it is real, says Renée DiResta, a disinformation expert at the Stanford Internet Observatory. There are usually tells that tip off a careful observer that a deepfake is at work, such as a detail that does not look or sound quite right.
For example, in a deepfake video of Ukrainian President Volodymyr Zelensky, he appeared to call on his forces to surrender to Russia. His oversize head and peculiar accent gave the video away as a deepfake, and it was eventually removed from social media. Unfortunately, as deepfake technology improves, these tells will become harder to spot; the hope is that detection tools will evolve along with it.
Despite the lack of mature detection tools, here are some suggestions that may help people and institutions guard against deepfake technology:
Be skeptical of media — Be critical of the media you consume, and verify the authenticity of video or audio content before you act on it or share it. Look for signs that the content may be a deepfake, such as unnatural movements or distortions in the video or audio.
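One practical first step when a clip seems off, alongside a careful visual check, is to inspect its technical metadata. The sketch below assumes FFmpeg's ffprobe tool is installed and on your PATH, and the filename suspect.mp4 is a placeholder; odd or missing metadata is not proof of a deepfake, only a reason to dig further.

```python
# Quick sanity check: dump a clip's container and stream metadata via ffprobe.
import json
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "-show_streams", "suspect.mp4"],
    capture_output=True, text=True, check=True,
)
info = json.loads(result.stdout)

print("Container:", info["format"].get("format_name"))
print("Duration (s):", info["format"].get("duration"))
print("Encoder tag:", info["format"].get("tags", {}).get("encoder", "none"))
for stream in info["streams"]:
    # Re-encoded or stripped streams can hint that a clip has been manipulated.
    print(stream["codec_type"], stream.get("codec_name"))
```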
Get serious about identity verification — Users need to exercise due diligence in verifying that someone is who they claim to be, especially before acting on unusual requests to transfer money or share credentials.
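A familiar voice or face is no longer proof of identity on its own, so it helps to pair it with a factor that software can check. Below is a minimal sketch of a time-based one-time-password check using the pyotp library; the enrollment flow and console prompts are simplified assumptions, not a production design.

```python
# Minimal second-factor sketch: even if an attacker clones a voice, they still
# need the current one-time code from the legitimate person's authenticator.
import pyotp

# In practice the secret is generated once, stored securely server-side, and
# enrolled in the user's authenticator app (for example via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Enroll this secret in an authenticator app:", secret)
code = input("Enter the 6-digit code from the app: ")

if totp.verify(code):
    print("Code accepted: proceed with the sensitive request.")
else:
    print("Code rejected: do not act on the request.")
```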
If available, use deepfake detection tools — Detection tools are still maturing, but multiple companies are working on solutions. For instance, Intel has introduced a real-time deepfake detector that judges whether the subject of a video is a real person by looking for the subtle "blood flow" signals visible in a genuine human face.
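Whatever tool you adopt, it usually slots into the same kind of pipeline: locate faces in each frame and hand the crops to a trained classifier. The sketch below shows that plumbing with OpenCV; the fake_probability function is a deliberately empty placeholder standing in for whichever detector you license or build, and suspect.mp4 is again a hypothetical filename.

```python
# Plumbing sketch for a detection workflow: extract face crops from a video
# and score each one. The scoring function is a stub, not a real detector.
import cv2

def fake_probability(face_crop) -> float:
    """Placeholder for a trained detector; always returns 0.5 here, so nothing
    is flagged until a real model is wired in."""
    return 0.5

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture("suspect.mp4")  # placeholder filename
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        score = fake_probability(frame[y:y + h, x:x + w])
        if score > 0.8:
            print(f"Frame {frame_idx}: face flagged as likely synthetic ({score:.2f})")
    frame_idx += 1

cap.release()
```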
Educate users — Familiarize your users with the types of deepfake content that are out there, and teach them how to be skeptical about media.
Government regulation — Regulatory or legal measures should be put in place to address the negative impacts of deepfake technology. Unfortunately, given the complex and evolving nature of this technology, many governments around the world are still struggling to determine how best to protect their citizens. Some governments, however, have already started to consider legislation that would prohibit the use of deepfake technology for malicious purposes.
Adopt a zero-trust security model — Zero trust is a new way to look at computer security. It works on the assumption that your networks are already breached, your computers are already compromised, and all users are potential risks: trust nothing and no one, and always verify.
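As a toy illustration of the "always verify" principle, the sketch below re-checks a signed token on every request instead of trusting a session or a network location. It uses only the Python standard library, and the shared key, user names, and request handler are all hypothetical; a real zero-trust rollout also covers device posture, least privilege, and continuous monitoring.

```python
# Toy "never trust, always verify" example: every request is re-authenticated,
# regardless of where it appears to come from.
import hashlib
import hmac

SHARED_KEY = b"rotate-me-regularly"  # illustrative secret only

def sign(user: str) -> str:
    return hmac.new(SHARED_KEY, user.encode(), hashlib.sha256).hexdigest()

def handle_request(user: str, token: str) -> str:
    # Verify on every call: no request is trusted just because it worked
    # before or because it originates from the "internal" network.
    if not hmac.compare_digest(token, sign(user)):
        return "403 Forbidden: identity could not be verified"
    return f"200 OK: hello, {user}"

print(handle_request("alice", sign("alice")))   # verified request
print(handle_request("alice", "forged-token"))  # rejected request
```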
Confirm and deploy basic security measures — Basic cybersecurity best practices play a vital role in minimizing the risk of a deepfake-enabled attack. Some critical actions you can take include: i) making regular backups to protect your data; ii) using strong passwords and changing them frequently; and iii) continuing to secure your systems and educate your users.
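As a small example of the first of those basics, the sketch below writes a timestamped archive of a data directory using Python's standard library. The "data" and "backups" paths are placeholders; real backups should also be encrypted, tested periodically, and kept off-site or at least off-network.

```python
# Minimal timestamped backup of a directory using only the standard library.
import shutil
from datetime import datetime
from pathlib import Path

source = Path("data")        # directory to protect (placeholder path)
backup_dir = Path("backups")  # where archives are written (placeholder path)
backup_dir.mkdir(exist_ok=True)

stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
archive = shutil.make_archive(str(backup_dir / f"data-{stamp}"), "gztar", source)
print("Backup written to", archive)
```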
What is the future of deepfake technology?
Right now, deepfake technology is still relatively immature, and much of its output can be recognized as fake. It is maturing quickly, however, and becoming increasingly difficult to detect.
While many technology companies have launched initiatives to combat deepfakes, it will be a constant struggle until detection finally outpaces deepfake creators, who more often than not find new ways to stay ahead of detection methods and cover their tracks.