How the Trump Administration May Affect AI Policy on Intellectual Property and Deepfakes
As President-elect Donald Trump prepares for his second term, his administration is poised to influence the future of policy surrounding artificial intelligence (AI) and intellectual property (IP), including regulations relating to deepfakes.
This transition will likely mark a shift from the Biden administration’s focus on regulatory oversight and equity-driven initiatives to an agenda emphasizing innovation and market-driven strategies. Drawing on insights from President-elect Trump’s previous term and recent policy statements, this Update explores the implications of his anticipated priorities for stakeholders at the intersection of AI and IP. This Update is part of an ongoing Perkins Coie series about what to expect from the second Trump administration, which includes our prior Updates on the potential impacts of the new administration on the FCC, FTC, immigration, retailers, and general AI policy.
AI Oversight: Deregulation in Focus
As discussed in our recent Update, the Trump administration is expected to repeal the Biden administration’s executive order on AI and to focus on promoting and facilitating AI innovation while deemphasizing regulation. This aligns with President-elect Trump’s broader economic agenda and his expressed desire to emphasize private-sector leadership and reduce regulatory barriers in order to foster U.S. dominance in AI development (and, in particular, to ensure supremacy over China).
While the Trump administration’s anticipated emphasis on deregulation will influence a wide range of AI policies, here we focus specifically on its implications for IP law and deepfake regulations—two critical areas where this administration’s policies could significantly shape innovation and governance.
Intellectual Property in the Spotlight
Copyright
The new administration’s anticipated approach to AI could significantly affect key questions arising under copyright law with respect to AI. One of the more pressing questions in this area is whether the use of copyrighted materials for training AI models constitutes fair use. This issue is at the center of numerous lawsuits against various AI tool providers, as vast amounts of data are required to train AI models. Determining whether a license is needed for this purpose—and if so, whether it is readily available at a reasonable cost—could significantly influence the growth of AI innovation. Consequently, the new administration might seek to implement measures to facilitate the use of copyrighted materials for training purposes by AI developers in order to reduce barriers that could hinder American leadership in AI.
Another key issue that could be affected by the new administration’s desire to promote AI innovation is the copyrightability of works generated using AI. U.S. copyright law requires at least some human authorship for a work to be protected, as confirmed in the recent Thaler v. Perlmutter case. However, the nature and extent of the human authorship required are not clear, as no cases have yet ruled on that issue in the context of AI. Nonetheless, in several recent decisions on registration,1 the U.S. Copyright Office has taken a very narrow view of protectability and has broadly refused protection for AI-generated portions of works based on a lack of human authorship. The Copyright Office also issued guidance last year on the protectability of AI-generated works that similarly takes a narrow view of the human authorship requirement and specifically takes the position that the human input involved in creating prompts that generate AI output is not sufficient (regardless of how detailed the prompts may be). This leaves creators using AI tools without clear guidance on copyright protection for any portion of their works generated using AI. Since some other jurisdictions, including the UK, Hong Kong, Ireland, India, and South Africa, already protect or are expected to protect AI-generated works, the Trump administration might encourage changes to copyright law or urge the Copyright Office to adopt a more expansive interpretation of the human authorship requirement as a way to support AI innovation and maintain the United States’ leadership position in the field.
The U.S. Copyright Office has been actively addressing the implications of AI on copyright law. After hosting public listening sessions and webinars and soliciting comments through a notice of inquiry (which received over 10,000 comments), the Copyright Office is preparing a Report on Copyright and Artificial Intelligence, which is expected to be issued in three parts. In July 2024, it released Part 1, which focuses on digital replicas and deepfakes. The forthcoming parts are expected to analyze the copyrightability of generative AI output and the legal implications of training AI models on copyrighted works, including issues such as liability, licensing, and fair use. The Copyright Office has indicated that these latter two parts will be issued before the end of 2024, prior to the change in administration, so it is unclear to what extent the incoming administration could influence their substance. However, it is possible that the Trump administration might deprioritize any recommendations from the reports that could be seen as hindering innovation.
The Double-Edged Sword of Deepfakes
Deepfake technology, which uses AI to create hyperrealistic images, audio, or video, presents both opportunities and risks. On one hand, it offers vast potential for innovation in creative industries, education, and entertainment, as well as for cost savings that can make it easier for smaller companies to compete. On the other hand, it can be used for deceptive purposes and can result in misinformation, political manipulation, and privacy violations. To the extent deepfakes mimic someone’s voice or likeness without authorization, right of publicity laws (or common law publicity or misappropriation rights) in many states may provide protection. However, the scope and nature of these rights vary from state to state (including what is covered, what exceptions apply, and whether the rights survive a person’s death), and some states do not recognize the right of publicity at all. While a number of laws specifically addressing deepfakes have been proposed at both the state and federal levels, no federal deepfake law has passed yet, and at the state level, most of the deepfake laws on the books thus far focus on election issues or revenge porn (although there are a few exceptions).2 Some of the bills pending before Congress, such as the NO FAKES Act and the No AI FRAUD Act, go beyond concerns about elections and revenge porn to cover unauthorized uses of digital depictions more broadly (functioning more like a national right of publicity). However, several organizations (including the ACLU, the Electronic Frontier Foundation, and the Center for Democracy & Technology) have expressed concerns that these bills are overbroad and do not adequately protect First Amendment rights. Provisions in some of these bills that would hold platforms liable for hosting digital replicas have also raised concerns in the tech industry.
It is difficult to predict how a Trump administration may seek to address deepfake issues. President-elect Trump has previously voiced concerns about the dangers of deepfakes (e.g., citing scenarios where falsified media could lead to geopolitical crises or erode public trust in institutions), yet during the 2024 campaign he shared AI-generated images falsely depicting a Taylor Swift endorsement, framing them as satire. Given President-elect Trump’s general disposition toward deregulation, his administration may be unwilling to support the currently proposed deepfake legislation, especially anything that seeks to impose liability on AI-platform providers for deepfakes created through their platforms. Instead, it may favor industry-led self-regulation over federal mandates, or at least focus on regulation aimed at bad actors rather than AI platforms. Proponents of this approach argue that self-regulation allows for faster innovation and adaptation, while critics caution that it may leave significant gaps in enforcement, particularly against malicious uses like disinformation campaigns or digital blackmail. Additionally, while creative unions and professionals are concerned about the impact of digital replicas on creative jobs, the Trump administration may focus more on the benefits of this technology, such as increased efficiency and lower barriers to entry, and may conclude that those benefits offset concerns about job displacement.
Shifting Patent Policy at the U.S. Patent and Trademark Office (USPTO)
Leadership changes, including the appointment of a new USPTO director (following the recent resignation of Kathi Vidal), will likely shape the direction of patent policy. President-elect Trump’s previous appointee, Andrei Iancu, prioritized reducing regulatory barriers, enhancing operational efficiency, and implementing pro-patent-owner policies.
A similar appointee could push for relaxed patent eligibility standards under Section 101, particularly for AI and software innovations, further aligning with President-elect Trump’s broader deregulatory agenda. Additionally, legislative initiatives such as the PREVAIL Act and the Patent Eligibility Restoration Act could gain traction with Republican support in Congress, making patents easier to obtain and enforce. While these moves could incentivize innovation, critics argue they may also encourage patent trolling and increase litigation risk.
Conclusion
The Trump administration’s anticipated prioritization of deregulation and private-sector innovation, together with its focus on maintaining U.S. dominance in AI, may help stimulate rapid technological advancement in AI development and is likely to shape the administration’s approach to IP issues relating to AI. However, it is difficult to predict what this might look like, as there is limited information available regarding the specifics of his policies in this area. Differing views among some of President-elect Trump’s closest advisors could also affect how the new administration approaches AI and IP policy. Elon Musk, who is expected to play an important role in shaping the administration’s approach to AI, has been outspoken about AI’s existential risks and may be more willing to encourage President-elect Trump to support at least certain forms of regulation. Vice President-elect J.D. Vance, on the other hand, has dismissed these concerns as a ploy to usher in regulations that would favor large tech companies and make it harder for startups to compete; he could provide an alternative voice reinforcing President-elect Trump’s inclination to minimize regulation in this area (particularly if doing so would benefit smaller developers over big tech). How these differing views might ultimately shape AI and IP policy during President-elect Trump’s second term remains to be seen.