- Originally Published on March 21, 2024
Is Deepfake Porn Illegal? What to Do If You Are a Victim
A deepfake is a video or sound recording that substitutes someone’s face or voice with another’s in a convincingly realistic manner. The technology raises pressing legal concerns, especially when used in pornographic content. Initially a concern for celebrities, the misuse of deepfake technology has now extended to the general public, with everyday people having their faces substituted over those in explicit or graphic content, prompting many to question, “Is deepfake porn illegal?”
The legality of deepfake pornography is complex and varies significantly by jurisdiction. There is no federal law in the U.S. currently addressing the issue. However, several states have made it illegal to create or distribute deepfakes under certain conditions, such as when they are used to create non-consensual pornography, influence elections, or violate intellectual property rights.
At Minc Law, we have extensive experience navigating the complexities of online content removal and the misuse of digital content, including the disturbing rise of deepfake porn. In this article, we will explore the legality of deepfake pornography and offer guidance for individuals who find themselves victimized by this disturbing digital phenomenon.
Legality of Deepfakes
Artificial intelligence (AI) and deepfake technology have transformed the digital realm significantly, ushering in a new era where seeing is no longer believing. Deepfakes, highly realistic forgeries created using AI, challenge our ability to discern what is real from what is fabricated.
The term “deepfake” combines “deep learning,” a subset of AI used to analyze images and videos, with “fake,” indicating the non-authentic nature of these creations. These technologies, particularly Generative Adversarial Networks (GANs) and other machine learning (ML) techniques, enable the creation of video, audio, or image replicas that are difficult to distinguish from genuine content.
The process behind deepfakes involves sophisticated manipulation of digital content. AI software analyzes and maps images or video frames, identifying similarities and shared features between subjects. This allows the software to reconstruct one individual’s facial features and impose them on another, matching each movement with precision. The result is content that can mislead viewers into believing the fabricated result is real.
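For readers curious about the mechanics, the sketch below illustrates the adversarial idea behind GANs in a few lines of Python (using PyTorch): a generator learns to produce fakes while a discriminator learns to catch them, and each pushes the other to improve. It is a deliberately tiny, hypothetical example for intuition only; the network sizes, names, and data are placeholders, and real deepfake pipelines involve far more machinery (face detection, alignment, encoders, and massive training datasets).

```python
# A deliberately tiny GAN training loop (PyTorch), for intuition only.
# Real deepfake pipelines add face detection, alignment, much larger
# networks, and enormous training datasets.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # hypothetical sizes

# Generator: turns random noise into a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an input looks.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, LATENT_DIM))

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = (
        loss_fn(discriminator(real_images), torch.ones(batch, 1))
        + loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    )
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()

# Example usage with random stand-in "images":
train_step(torch.randn(16, IMG_DIM))
```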
This technological prowess, while impressive, has unfortunately been exploited for malicious purposes, including the creation of non-consensual pornography and misinformation. Despite the rise of deepfakes, comprehensive federal regulation remains lacking in many countries, including the United States, where legislation has been enacted primarily at the state level. Some jurisdictions have started to address the legal challenges posed by deepfakes, particularly non-consensual pornography, highlighting the urgent need for legal frameworks that can keep pace with the rapid advancement of AI technology.
Are Deepfakes Legal?
The legal status of deepfakes is an evolving issue. Currently, there is no comprehensive federal legislation in the United States that directly addresses the creation and distribution of deepfakes. The legality of these AI-generated forgeries varies from one state to another, with some states imposing restrictions on their creation and distribution, particularly when they are used for harmful purposes like non-consensual pornography or to influence elections.
Deepfakes operate in a legal gray area. They are not inherently illegal; however, they can become unlawful if they infringe on intellectual property (IP) rights, violate personal rights through the creation of non-consensual pornography, disseminate misinformation, or pose a threat to national security. The specificity of laws concerning deepfakes depends heavily on the jurisdiction. This fragmentation at the state level points to a patchwork approach to regulation, which can be challenging to navigate.
A significant concern is that more than 90% of deepfake content is associated with pornographic material, often created without consent. As deepfake technology advances, it will likely be used in other unlawful ways, like extortion and digital harassment.
Recent legislative efforts, such as the proposed 2024 Defiance Act, indicate a growing recognition of the need for more robust federal oversight. This legislation, inspired in part by the misuse of deepfake technology to create non-consensual sexual images of public figures, specifically Taylor Swift, aims to establish more precise guidelines and penalties for the misuse of deepfakes.
The intersection of deepfakes with copyright and fair use laws further complicates their legal status. As deepfakes become more sophisticated and widespread, the urgency for comprehensive federal legislation to regulate this rapidly evolving technology becomes increasingly apparent. The lack of uniform laws poses a challenge not only to legal professionals and lawmakers but also to individuals impacted by the malicious use of deepfakes.
Are Deepfakes Illegal to Watch?
Watching deepfakes is not illegal in itself, except in cases where the content involves unlawful material, such as child pornography. Existing legislation primarily targets the creation and distribution of deepfakes, especially when these actions involve non-consensual pornography.
The distinction here is critical: while consuming deepfake content does not typically expose the viewer to liability, producing or disseminating such content without the consent of the subjects depicted can carry legal consequences.
Deepfakes & Laws in the U.S.
As deepfakes draw increasing attention, a growing number of states have enacted laws to regulate them in the context of nonconsensual pornography and election-related deepfakes.
States like California, New York, and Illinois have positioned themselves at the forefront of addressing deepfakes by allowing individuals to sue creators of deepfakes in civil court. This provides victims with a means to seek damages for the harm caused by non-consensual or defamatory deepfake content.
Additional states that have laws on their books specifically targeting deepfakes include:
- Georgia,
- Hawaii,
- Minnesota,
- Virginia, and
- Washington.
Finally, deepfake legislation has also been proposed in Louisiana, Illinois, Massachusetts, and New Jersey.
Deepfake Laws at the Federal Level
As of the date of publication, there are no federal laws in the U.S. criminalizing deepfakes.
However, the push for federal legislation or further restrictions is gaining momentum among lawmakers who recognize the need for unified regulations. The absence of federal regulation creates a patchwork of state laws, which, while beneficial, may not be sufficient to address the national and international implications of deepfake technology.
For example, the Defiance Act has been proposed at the federal level to hold accountable those responsible for creating and distributing non-consensual, sexually explicit deepfake images and videos. As proposed, it would also allow individuals depicted in nude or sexually explicit deepfakes to pursue civil penalties against those who produce these forgeries with intent to harm, as well as anyone who receives the material knowing it was created without consent.
Further, the Federal Communications Commission (FCC) has issued a declaratory ruling confirming that calls using artificial intelligence-generated voices, such as those placed for telemarketing or advertising, are subject to the Telephone Consumer Protection Act.
Deepfake Legality Across the Globe
Countries around the world are grappling with the legal challenges posed by deepfakes. Legislation varies widely, reflecting different cultural, legal, and political concerns.
China has positioned itself as a leader in regulating deepfakes with legislation that mandates user consent for the production of deepfakes and requires that content generated using AI be marked as such.
Singapore has taken a different approach by implementing the Protection from Online Falsehoods and Manipulation Act, which targets false statements of fact on the internet. While not specifically aimed at deepfakes, this law can be applied to them, reflecting Singapore’s commitment to combating misinformation.
In the United Kingdom, the sharing of deepfake pornography has been made illegal under the Online Safety Act.
Additionally, India has issued an advisory directing social media platforms to guard against deepfakes that violate the country’s IT rules (although deepfakes are not illegal there per se).
As deepfake technology continues to evolve, the international community may need to consider more unified strategies to address its widespread implications.
Is Deepfake Porn Illegal?
The emergence of AI-generated nude images has introduced complex challenges. The central question is whether a nude image must be “real” for a victim to seek legal redress. AI can now create or manipulate images to produce convincing “nudes” of real individuals who never consented to or participated in the creation of the content. Although these images are not authentic, their potential use for revenge porn is a very real concern.
In the United States, laws relevant to deepfake pornography are varied and complex. States like California have laws specifically targeting deepfake pornography, but few other states have specific laws on the issue. The absence of current legislation does not necessarily leave victims without recourse, but it does require a more creative legal approach from attorneys.
For instance, Ohio’s legal framework did not initially contemplate AI-generated nudity. Yet Ohio’s revenge porn statutes may still apply to deepfakes, as they criminalize the distribution of images to embarrass or harass another person. Depending on their language and existing legal precedent, some laws already on the books may be interpreted as criminalizing deepfake porn.
Virginia’s comprehensive approach to revenge porn also criminalizes deepfake porn. Its revenge porn statute specifies that malicious dissemination of a video or picture “created by any means whatsoever” that depicts another person nude is a crime. This definition is broad enough to include deepfakes, even if that was not the original intent of the law.
Moreover, the treatment of child pornography in the context of deepfakes is unequivocal: it is illegal, reflecting the stance that any sexualized depiction of an identifiable minor causes harm, regardless of whether the image is real or virtual.
Victims of deepfake pornography may also explore legal avenues such as IP rights, invasion of privacy, or defamation claims, depending on the specifics of their case.
Immediate Steps to Take If You Are the Victim of Deepfake Porn
Discovering that you are the victim of deepfake pornography can be an incredibly distressing experience, leaving you feeling violated and unsure where to turn. The immediate aftermath is a critical time for action, both to protect your personal and digital reputation and to explore your legal options.
Document & Preserve Evidence
If you find yourself the target of deepfake pornography, resist the urge to erase all traces of the content. While your initial reaction might be to delete everything, doing so could undermine your ability to seek justice later. Instead, it is critical to document and preserve evidence of the offense.
Take screenshots of the deepfakes, save any related images, and archive correspondence with the offender or platforms hosting the content. Amassing a solid collection of evidence strengthens your position if you decide to pursue legal action.
It is important to note a crucial exception: if the images or videos depict minors, including yourself, do not download or save the content. Possession of such material can be illegal, regardless of the context. In these cases, consult with an attorney immediately to understand the appropriate steps to take while ensuring compliance with the law.
Locate the Content & Determine Its Scope
Finding deepfake content of yourself can be challenging, especially if you are unaware of its existence. Much like battling revenge porn, you will want to start with tools like reverse image search and Google Alerts. Sites like PimEyes can also help you see where and how your images are being used.
If you want to separate the deepfakes from authentic content, look for details that AI tends to get wrong:
- Facial inconsistencies: Look for irregularities in lighting, hairlines, eyes, and ears. Most AI-generated content struggles to sync lip movements or blinking accurately, so the subject may look unnatural when they move.
- Proportions and details: Look at the lengths of arms and legs, the appearance of hands, and any background anomalies. These elements often contain errors in deepfake content.
- Naming the victim: Often, deepfakes will feature the victim’s real name (which is easy to find with a search). Individuals who create and share deepfakes tend to do this to draw attention to the content.
It can be exceptionally difficult to distinguish real from deepfake content if a significant amount of genuine content already exists online. In those cases, focus on the finer details that AI tends to get wrong.
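To make the idea of checking finer details more concrete, here is a rough, hypothetical Python sketch (using OpenCV) that estimates how often eyes are detectable across a video’s frames; a long clip in which the eyes never appear to close can be one weak hint of manipulation, since early deepfakes often struggled with blinking. This is an illustration only, it assumes OpenCV is installed, the file name is a placeholder, and it is in no way a reliable deepfake detector.

```python
# Rough heuristic sketch: estimate how often eyes are detected across video
# frames. Unnaturally constant "eyes open" detections over a long clip can be
# one weak signal worth a closer manual look. NOT a reliable deepfake detector.
import cv2  # pip install opencv-python

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_visibility_ratio(video_path: str) -> float:
    """Fraction of face-containing frames in which both eyes are also detected."""
    cap = cv2.VideoCapture(video_path)
    face_frames = eye_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        face_frames += 1
        x, y, w, h = faces[0]
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) >= 2:
            eye_frames += 1
    cap.release()
    return eye_frames / face_frames if face_frames else 0.0

# Example usage: a ratio near 1.0 over a long clip means the eyes never seem
# to close (no blinking), which may merit a closer manual review.
print(eye_visibility_ratio("suspect_clip.mp4"))  # placeholder file name
```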
Report the Content to the Platform or Website
After identifying where the deepfakes are posted and preserving evidence, contact the platform or website hosting the content. Most sites have Terms of Service (ToS) that prohibit revenge porn and will likely remove the content if you report it. However, it is crucial to stay cautious during this process. If a website asks you to provide proof of identity or demands payment for the removal of photos, it could be a scam.
While every site has its own set of rules, most ToS prohibit content that violates community standards, like spam, nudity, hate speech, violence, harassment, false light, and impersonation. If the deepfake violates these guidelines, you can report it for removal. The reporting process is different for each platform, from simply flagging content to more detailed submissions. Regardless, the final decision on whether content violates the site’s ToS lies with the platform.
For instance, Twitter has specific rules concerning deepfakes, banning the digital manipulation of someone’s face onto another person’s nude body. Their policy requires that the individual in the deepfake or their representative contact the platform before action is taken. Twitter also has a synthetic and manipulated media policy that could apply to deepfakes depending on the context and potential implications.
Seek Support From Family, Friends, or Professionals
Finding explicit deepfakes of yourself can lead to feelings of anxiety, isolation, or depression, but you are not alone in this situation. Sharing your experience with trusted friends and family can help you process those emotions, and that support network can provide a sense of community during challenging times.
Also, consider reaching out to professional help centers or mental health hotlines. Organizations like Thorn, Cyber Civil Rights Initiative, StopNCII, and the Revenge Porn Helpline are specifically designed to assist individuals dealing with online abuse, including deepfake victimization.
If you find yourself grappling with severe mental health issues or thoughts of self-harm, it is imperative to seek immediate help. Remember, your safety and well-being should come first, and support is available. The 988 Suicide & Crisis Lifeline offers confidential, judgment-free assistance 24/7.
Removal Strategies For Victims of Deepfake Porn
Navigating the aftermath of deepfake pornography requires a proactive approach. Swift and decisive action can help mitigate the impact of the content and protect your digital identity.
Reach Out to the Site Owners or Publishers
Contacting the owners of websites hosting the deepfake pornography can be an effective strategy, especially if you are working with a lawyer. While you may not always have a clear legal claim, involving an attorney to make a formal removal request can significantly enhance the seriousness of your appeal.
An attorney’s involvement signals a readiness to pursue further legal action if necessary, which can motivate website operators to comply to avoid potential legal repercussions or negative publicity.
Legal professionals understand the nuances of law and negotiation, making them well-equipped to communicate the urgency and legitimacy of your request. Their expertise often leads to a more prompt and favorable response from site owners, who may prefer to resolve the issue quietly rather than face legal challenges or public scrutiny.
Report to Law Enforcement
Deepfake porn, especially if it can be considered revenge porn, should be reported to law enforcement. While local police may have limited resources or expertise with cases involving deepfakes, they should still be informed, particularly if the perpetrator is within their jurisdiction.
For more sophisticated cases involving deepfake technology, the FBI should be notified. The FBI’s Internet Crime Complaint Center (IC3) is a dedicated platform for reporting cybercrimes, including those involving deepfake porn.
Keep all relevant information, including usernames, email addresses, websites or names of platforms used, and any photos or videos. Then report the information to:
- FBI’s Internet Crime Complaint Center at www.ic3.gov
- FBI Field Office at www.fbi.gov/contact-us/field-offices or 1-800-CALL-FBI (225-5324)
- National Center for Missing and Exploited Children at 1-800-THE-LOST or report.cybertip.org
Reporting to these agencies not only aids in your own case but also contributes to the broader fight against the misuse of deepfake technology, potentially helping to prevent future abuse.
Consult With an Experienced Content Removal Attorney
An attorney with experience handling revenge porn and content removal can help you address the issue effectively.
These attorneys can help you remove the explicit content, preserve crucial evidence, and explore your removal and legal options, such as sending a DMCA takedown notice, submitting nonconsensual porn removal requests, de-indexing content from search engines, and filing suit.
Send a DMCA Takedown Notice
Disclaimer: Because deepfake pornography occupies a nuanced legal area, sending a DMCA takedown notice may not be appropriate for your situation, so it is important to consult an experienced attorney first.
One removal method to try is a DMCA takedown notice. This may be an effective first step if the deepfake content uses a non-adult image of yours without consent. However, the intersection of deepfakes and copyright law remains unsettled; the core legal question is whether the manipulated content qualifies as fair use or copyright infringement, with high-profile deepfakes often treated as fair use.
The DMCA provides a mechanism for removing infringing content, but it is not entirely clear whether deepfake images fall under this protection. Despite these uncertainties, victims can still file a DMCA takedown notice. Victims can also reach out to the hosting site directly through its designated abuse or DMCA contact email.
Nonconsensual Porn Removal Requests
Revenge porn notices, or nonconsensual porn removal requests, are another viable strategy. They are especially useful in scenarios where the content does not clearly infringe copyright but is undeniably published without consent.
Major search engines, including Google and Bing, offer users the option to report nonconsensual porn. Google, for instance, has a dedicated revenge porn portal where individuals can request the removal of the explicit content from search results. This does not erase the content from the website it originates from but prevents it from appearing in search results, reducing its visibility.
Yet, it is important to prioritize the removal of the content directly from the source website whenever possible. Be prepared for the possibility that you might need to submit your request more than once, as the moderation teams reviewing these submissions are thorough and may require additional information.
Request De-Indexing From Search Engines
You can also request that search engines like Google de-index the offending images or videos. This process ensures that a general search will not display the harmful content. Google, along with other search engines, offers specific portals for individuals to request the removal of content from their indexing system.
Google’s removal portal outlines the types of content eligible for de-indexing, including:
- Images or videos that show a person nude, in a sexual act, or in an intimate state,
- Content that falsely portrays an individual in a sexual act or an intimate state, and
- Content that incorrectly associates an individual with pornography.
File a Lawsuit to Obtain a Court Order Compelling Removal
Another avenue for victims is to obtain a court order, a directive from a judge with terms that must be followed, such as the removal of content. To get a court order, you must file a lawsuit against the individuals or entities responsible for the deepfake’s publication.
This legal action can be based on various grounds, depending on jurisdiction, including violations of revenge porn statutes, deepfake laws, harassment or threat statutes, or privacy-related torts.
Support & Recovery From Deepfake Porn
Being the subject of deepfake porn can lead to profound emotional distress. The experience can be deeply traumatizing, but there is a broad spectrum of support networks and resources available. Remember that you are not alone in this ordeal, and help is out there.
Seeking Professional Help & Support
Revenge pornography and deepfakes have become such a pervasive issue that numerous organizations and resources offer support, often at no cost, to those impacted. These groups provide a range of services, from legal assistance to emotional support. Here are some key organizations dedicated to helping individuals targeted by nonconsensual explicit content:
- RAINN,
- Cyber Civil Rights Initiative (CCRI),
- StopNCII.org,
- NCMEC,
- 988 Suicide & Crisis Lifeline (dial 988),
- CCRI Helpline: 1-844-878-2274,
- Revenge Porn Helpline.
It can also help to secure long-term mental health support. A therapist, psychologist, or psychiatrist can offer valuable help with challenges like depression and anxiety that may arise from the trauma of deepfake victimization.
How to Protect Yourself Against Deepfakes
Deepfakes are a significant privacy and security threat, but there are some practical steps to bolster your defenses against the misuse of your image:
- Limit What You Share Online: The less personal information and images you share, the fewer opportunities there are for perpetrators to create deepfakes with your likeness. Be mindful of what you post on social media and other platforms.
- Enforce Privacy Restrictions: Adjust the privacy settings on your social media accounts to ensure that only people you trust can view your posts.
- Use Strong Passwords: Secure your online accounts with strong, unique passwords and enable two-factor authentication (2FA) wherever possible. This makes it harder for others to gain access to your personal information (a short example follows this list).
- Be Cautious of Phishing: Stay vigilant against phishing emails or messages that try to trick you into revealing personal information or sending compromising material. Perpetrators may use deepfake technology to create convincing scenarios or threats designed to deceive you.
- Educate Yourself and Others: Awareness is a powerful tool. Educate yourself about the nature of deepfakes, how they are created, and their potential misuse. Share this knowledge with friends and family to create a more informed and cautious online community.
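As a small illustration of the “strong, unique password” advice above, here is a minimal Python sketch using the standard library’s secrets module. In practice, a reputable password manager will generate and store passwords like these for you; this is just to show what “strong and unique” means concretely, and the length chosen is an arbitrary assumption.

```python
# Minimal sketch: generate a strong, random password with Python's standard
# library. A reputable password manager does this (and stores the result)
# for you; this only illustrates the idea.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 20-character password per account
```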
These strategies can reduce the risk of becoming a victim of deepfake-related crimes. Staying informed and proactive about digital privacy and security is a key part of the fight against cybercrimes.
Digital Monitoring
If you have been targeted by cybercriminals or are at risk, it is a good idea to monitor your online presence. Many perpetrators share explicit content on multiple platforms, so if you have found deepfake porn on one site, there is a chance it could be posted elsewhere.
By monitoring your online presence, you can promptly identify any threats and respond effectively. Here are some tips for monitoring your digital footprint:
- Use Online Reputation Monitoring Tools: These tools can be invaluable in detecting threats in real-time, allowing you to take swift action.
- Search for Your Name: Many sextortionists and perpetrators use the victim’s name to draw attention to unauthorized content. Tools like Google Alerts can notify you whenever your name appears online, helping you track and address new threats (see the monitoring sketch after this list).
- Consider Professional Services: For comprehensive monitoring, consider using paid services like our Digital Risk Protection (DRP) service. DRP combines various tools and techniques to detect, assess, and neutralize digital threats, ensuring your online reputation is being proactively protected.
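If you use Google Alerts, one of its delivery options is an RSS feed, which you can poll automatically. The sketch below (Python, using the feedparser library) shows one hypothetical way to surface new alert results on a schedule; the feed URL is a placeholder, and this is only an illustration, not a substitute for a dedicated monitoring service.

```python
# Minimal sketch: poll a Google Alert that you have configured to deliver to
# an RSS feed (an option in the Google Alerts settings) and print new results.
# The feed URL below is a placeholder; use the one Google provides you.
import time
import feedparser  # pip install feedparser

ALERT_FEED_URL = "https://www.google.com/alerts/feeds/YOUR_FEED_ID"  # placeholder
seen_links = set()

def check_alert_feed() -> None:
    feed = feedparser.parse(ALERT_FEED_URL)
    for entry in feed.entries:
        if entry.link not in seen_links:
            seen_links.add(entry.link)
            print(f"New mention: {entry.title} -> {entry.link}")

if __name__ == "__main__":
    while True:
        check_alert_feed()
        time.sleep(60 * 60)  # check once an hour
```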
Implementing a robust digital monitoring strategy can significantly reduce the impact of deepfakes and sextortion on your personal and professional life. By staying ahead of potential threats, you can ensure your digital identity remains secure.
★★★★★
“Attorney Dorrian Horsey at Minc Law represented me in a content removal effort and was successful. She was very open with me about the process, and helped me understand the approach that she took. She was great to work with and very supportive of my effort. Thank you!” – Steven S.
August 11, 2023
If you would like to explore your options to address deepfake pornography, please reach out to schedule your initial consultation by calling us at (216) 373-7706 or filling out our online contact form.
This page has been peer-reviewed, fact-checked, and edited by qualified attorneys to ensure substantive accuracy and coverage.