As general interest and investment in AI have accelerated since the initial public launch of ChatGPT, the U.S. federal government has increased both its spending in the area[i] and the speed with which it adopts guidelines on the use of AI more generally.[ii] This tracks similar actions outside the U.S.[iii] and anticipates corresponding initiatives at the state and municipal levels.[iv]
And while some government contractors are already growing their AI capabilities to capture AI-specific contracts,[v] we expect the industry’s focus on AI to intensify, such that the incoming raft of regulations will be front of mind throughout the sector. This emphasis will also extend to mergers and acquisitions, where a target’s use of AI presents several unique risks and considerations in an acquiror’s due diligence process (or a target’s preparation for a sale process),[vi] on which this note elaborates below.
- Policies. From a general compliance perspective, government contractors that utilize AI (which we refer to as AI government contractors, even if their usage is relatively ancillary to their business) must create internal policies that meet regulatory compliance obligations while striking an appropriate balance between unencumbered AI functionality and the corresponding risks. These policies need to be customized based on the target’s utilization of AI (e.g., using AI in delivering services, training AI models, building AI tools, fine-tuning third-party tools, etc.).[vii] Acquirors should therefore consider the extent to which policies reflect the input of IT experts with technical AI knowledge, legal and compliance personnel who have a more direct understanding of the regulatory risks, and senior management who have considered the policies’ consistency with the target’s overall strategy. Additionally, acquirors should determine whether a target’s AI solutions are designed to implement standards for responsible AI.[viii] At a minimum, they should meet the policy standards adopted by the National Institute of Standards and Technology (“NIST”) that inform federal government contracting IT compliance requirements, and which are often followed by state and municipal governmental agencies,[ix] along with the standards promulgated by agency-level AI Governance Boards.[x] As most AI government contractors will be aware, NIST published in 2023 (and has since updated) its AI Risk Management Framework for safely developing and deploying AI.[xi][xii] NIST also released four draft publications intended to help improve the safety, security and trustworthiness of AI systems, and launched a challenge series to support development of methods to distinguish between content produced by humans and content produced by AI.[xiii]
- Regulatory Compliance.
- Data Privacy. The significant amount of data required to train AI models introduces compliance obligations under generally applicable state and local data privacy laws – such as the CCPA in California and its analogs in Colorado and Virginia – and presents unique risks to AI government contractors with access to sensitive information. Acquirors should therefore evaluate how a target has excluded protected data from its training protocols (a minimal sketch of such a filter appears below). Another important issue can arise when a target uses customer data that was collected under a privacy policy that did not contemplate using the data for AI training.[xiv] While the logical fix is to change the terms or privacy policy, the FTC has warned that this too can be a problem.[xv] These are not trivial issues. The FTC has imposed “algorithmic disgorgement” as a remedy for such misuse of data for training AI,[xvi] which requires destruction of the data, the AI model and any algorithms created. Given the costs of training AI and building AI applications, this remedy can significantly impact any potential investment. We also anticipate greater federal scrutiny and additional legislation in the data privacy space specifically directed toward AI, and note, in particular, that the pending American Privacy Rights Act would require AI government contractors to conduct annual impact assessments on their AI algorithms.[xvii]
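To illustrate the kind of control diligence might look for, here is a minimal sketch of a pre-training filter that drops records lacking AI-training consent or flagged as sensitive. The field names (consent_ai_training, contains_sensitive_data) are hypothetical placeholders, not any particular company’s schema.

```python
from typing import Iterable, Iterator

def filter_training_records(records: Iterable[dict]) -> Iterator[dict]:
    """Yield only records eligible for AI training under the (hypothetical)
    consent and sensitivity flags attached to each record."""
    for record in records:
        # Exclude data collected under a privacy policy that did not
        # contemplate AI training (the issue the FTC has flagged).
        if not record.get("consent_ai_training", False):
            continue
        # Exclude records flagged as containing sensitive information.
        if record.get("contains_sensitive_data", False):
            continue
        yield record

# Example: only record 1 survives the filter.
eligible = list(filter_training_records([
    {"id": 1, "consent_ai_training": True, "contains_sensitive_data": False},
    {"id": 2, "consent_ai_training": False, "contains_sensitive_data": False},
]))
assert [r["id"] for r in eligible] == [1]
```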
- Bias Mitigation. AI is known to produce biased or discriminatory results. This can be due to biased training data, biased algorithms or the way the AI is used (e.g., to adversely impact a protected group). Since AI relies on past data sets to generate predictive outputs, the outputs can be biased if those data sets are not carefully curated to avoid bias. When government contractors use AI algorithms for certain applications, for example in hiring or procurement, they risk skewing outcomes toward employees and vendors who resemble the demographic composition of the past data sets, which may result in discriminatory outcomes. To protect against this and related situations, the White House has directed federal agencies to consider opportunities to remediate discrimination, including “algorithmic discrimination,”[xviii] and the federal agency charged with enforcing procurement compliance is already asking government contractors whether they are using AI or similar systems in their hiring processes.[xix] The White House Executive Order on AI also addresses numerous other equity and civil rights issues and mandates certain actions to ensure that AI advances equity and civil rights.[xx] The American Privacy Rights Act mentioned above would also require companies to provide individuals notice and an opportunity to opt out of having an AI algorithm make a consequential decision about housing, employment, consumer finance, students or other covered matters. Other governmental agencies,[xxi] including several at the state level, have enacted or are considering regulations to mitigate the potential harmful impacts of AI in hiring. Acquirors should therefore evaluate to what extent a target uses AI in its hiring and procurement practices, and whether it has appropriately remediated or mitigated bias in the associated AI algorithms (a simple illustrative screen appears below).
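One long-standing screen for disparate impact in hiring is the EEOC’s “four-fifths” rule: a group selection rate below 80% of the highest group’s rate is treated as evidence of adverse impact. The sketch below applies that screen to hypothetical selection counts; a real audit would use the target’s actual applicant-flow data and appropriate statistical testing.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate (the classic adverse-impact screen)."""
    top = max(rates.values())
    return {group: rate >= 0.8 * top for group, rate in rates.items()}

# Hypothetical counts: group_b's rate (0.30) is below 0.8 * 0.50 = 0.40.
rates = {
    "group_a": selection_rate(50, 100),
    "group_b": selection_rate(30, 100),
}
print(four_fifths_check(rates))  # {'group_a': True, 'group_b': False}
```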
- Cross-Border. AI government contractors operating outside the US encounter a substantially more complex landscape. On February 2, 2024, the European Union adopted the EU AI Act, which limits commercialization based on a given tool’s risk profile.[xxii] Other jurisdictions outside the US have also taken a variety of approaches to regulating AI,[xxiii] and we expect that countries will continue to adapt their legal frameworks to reflect the evolution of AI technology. Acquirors should determine which regulatory authorities have jurisdiction over a target and identify any key jurisdiction-specific risks.
- Intellectual Property. Perhaps most importantly, acquirors of AI government contractors should consider the manner in which a target (i) develops its algorithms using intellectual property and (ii) protects both its algorithms and their output as intellectual property.
- Copyright. Acquirors should consider whether a target acquired and maintained the right to obtain and use the data used to train its AI model. Training on copyrighted material may infringe others’ copyrights.[xxiv] Additionally, the outputs of generative AI are typically not copyrightable. Thus, it is important to determine whether any significant content, for which copyright protection normally would be desirable, was AI-generated. In fact, if a company files a registration for a work that was in any part AI-generated, that fact must be disclosed to the Copyright Office or the copyright registration can be invalidated.[xxv] If a copyright registration is invalid, the company cannot sue infringers of that work.
- Open Source. The use of AI code generators in the software development process raises issues similar to those raised by the incorporation of open-source software. That is, the AI model may have been trained on open-source software. If developers do not know when the output is based on open source (which is often the case), they do not know whether incorporating the output into their purportedly proprietary software will subject that software to open-source license terms. Even if the target has not fed open-source code into the AI generator, using generative AI in the software development process increases the risk that the target has not complied with the applicable license terms of the software it uses in the development lifecycle.[xxvi] Taken together, these complications demonstrate that managing open-source risk alongside AI is trickier than managing typical open-source usage. Acquirors should therefore be cognizant of these open-source risks, and technical diligence (such as the snippet-matching sketch below) should be considered to support their legal review.
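As a rough illustration of what such technical diligence can involve, the sketch below flags AI-generated code that shares lines with a known open-source corpus so it can be routed for license review. Real scans use dedicated software-composition-analysis tools; the tiny inline “corpus” here is a stand-in.

```python
import hashlib

def normalized_hashes(code: str) -> set[str]:
    """Hash each non-trivial source line, whitespace stripped."""
    return {
        hashlib.sha256(line.strip().encode()).hexdigest()
        for line in code.splitlines()
        if len(line.strip()) > 20  # skip short lines like lone braces
    }

def overlap_ratio(generated: str, oss_corpus: str) -> float:
    """Fraction of the generated code's lines that also appear in the corpus."""
    gen, oss = normalized_hashes(generated), normalized_hashes(oss_corpus)
    return len(gen & oss) / len(gen) if gen else 0.0

generated = "def read_config(path):\n    with open(path) as f:\n        return f.read()"
corpus = "with open(path) as f:\n    return f.read()"
print(overlap_ratio(generated, corpus))  # 0.5 -> route for license review
```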
- Trade Secrets. Acquirors should consider reviewing both employee policies and the technical guardrails in place to ensure that employees are not, even unwittingly, exposing trade secrets by putting confidential information into public generative AI chatbots (e.g., the public version of ChatGPT), which could destroy such information’s status as a trade secret.[xxvii] Moreover, as AI vendors increasingly struggle to find non-copyrighted material on which to train their AI models (the so-called “data cliff”), those vendors are changing terms of service to expand their rights to train their AI models on user-generated content, which warrants heightened caution about whether trade secrets are being entered into these AI tools.[xxviii] A minimal sketch of one such guardrail follows.
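By way of illustration, a simple guardrail screens outbound prompts for confidentiality markers before they reach a public chatbot. The patterns below are invented for the example; production deployments typically rely on dedicated data-loss-prevention tooling.

```python
import re

# Hypothetical confidentiality markers; a real deployment would use
# organization-specific patterns and DLP tooling.
BLOCKED_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\btrade secret\b", re.IGNORECASE),
    re.compile(r"\b[A-Z]{2,}-\d{4,}\b"),  # e.g., internal document IDs
]

def prompt_is_safe(prompt: str) -> bool:
    """Return True only if no confidentiality marker appears in the prompt."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

assert prompt_is_safe("Summarize this public press release.")
assert not prompt_is_safe("Draft a memo on our CONFIDENTIAL pricing model.")
```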
- Licenses. Inbound licenses (with vendors) and outbound licenses (with customers) will establish which party owns the associated “work product,” which, for an AI government contractor, could include enhancements to the AI model made during the course of performing a government contract. This is particularly significant given that the government customer often owns the work product. On the vendor side, acquirors will want to identify the scope of the vendor’s licenses (e.g., whether a license to train AI models on certain data allows the target to sell or copyright the AI model’s output) to confirm that the target is able to continue servicing the beneficiaries of the model.
- Patents. The US Patent and Trademark Office has published guidelines on the patentability of inventions created with the assistance of AI.[xxix] To the extent that a target expects to derive future value from patenting its AI-assisted research and development, acquirors should evaluate the target’s processes against the USPTO guidelines to determine whether the target can obtain patents on those AI-assisted inventions.[xxx]
- IP Indemnity. Many pending lawsuits allege IP infringement by AI tools. Given the infringement risks, IP indemnities for use of AI tools are important when analyzing targets’ vendor contracts, and the indemnification provisions of different tools vary significantly. With some tools, the user indemnifies the AI tool provider if output infringes; with others, the tool provider indemnifies the user. However, even in the latter scenario, the scope is limited and, unbeknownst to many users, certain preconditions must be met for the indemnity to apply; companies that are unaware of those preconditions often fail to fulfill them. For example, users may have to adopt specific “guardrails and content filters” built into the AI tool for the indemnity to apply.[xxxi] Acquirors should therefore be cognizant of these risks as they conduct their diligence review.
- Reliability. Beyond the basic IP issues, AI presents technical predictability and reliability challenges (e.g., “hallucination”) that can create additional legal risks.[xxxii] In the government contracting space, including in defense and healthcare, inappropriate reliance on AI could have especially significant consequences. Acquirors should therefore understand the target’s guardrails, filtering, data sources, and other tools used to confirm the reliability of the target’s AI outputs (a minimal grounding check is sketched below). A corollary to reliability is “transparency” – the extent to which it can be explained how an AI model arrived at an output – which an acquiror will also want to understand so as to better maintain reliability on a go-forward basis.
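One simple reliability guardrail is a grounding check: before an AI output is relied upon, any citations it contains are verified against an authoritative index. The index, citation format and entries below are hypothetical, chosen only to illustrate the pattern.

```python
import re

# Hypothetical authoritative index of citable sources.
AUTHORITATIVE_INDEX = {"FAR 52.204-21", "NIST AI RMF 1.0"}

CITATION_PATTERN = re.compile(r"\[cite:([^\]]+)\]")

def unverified_citations(output: str) -> list[str]:
    """Return any citation in the AI output that is absent from the index
    (a possible hallucination requiring human review)."""
    return [c for c in CITATION_PATTERN.findall(output)
            if c not in AUTHORITATIVE_INDEX]

draft = ("Contractors must flow down basic safeguarding [cite:FAR 52.204-21] "
         "and comply with [cite:FAR 99.999-99].")
print(unverified_citations(draft))  # ['FAR 99.999-99'] -> flag for review
```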
- Employees. Given the intense interest in deploying AI solutions, we expect that acquisitions principally motivated by a desire to hire AI experts will accelerate.[xxxiii] Such acquirors (or other acquirors of AI government contractors) should give consideration to who among the employee base is indispensable to the AI solutions, and consider how best to incentivize such employees to keep innovating under new ownership. Previously, acquirors would also consider requiring such employees to enter into noncompetition agreements. However, given the FTC’s Final Rule that would ban employers from imposing non-competes in many situations, subject to pending litigation,[xxxiv] acquirors may have to rely on new or existing confidentiality obligations.[xxxv]
- Purchase Agreement Protections.
As we have seen, acquiring AI government contractors presents unique legal risks that should be considered when drafting representations and warranties in a purchase agreement, and developing a recourse structure more generally.[xxxvi] While the aforementioned risks are nominally covered by generic intellectual property, data privacy and compliance with laws representations and warranties, their more standard formulations can be refined to more precisely address the AI implications discussed above.[xxxvii] That said, given the rapidly evolving AI regulatory landscape, we expect that parties will have to regularly develop tailored recourse structures (e.g., through special indemnities and appropriately calibrated escrows) aligned with their respective risk allocation preferences.
As governments and the government contracting ecosystem become increasingly focused on AI, firms considering M&A in the space would be well advised to assess how they can most effectively negotiate this rapidly evolving landscape in the course of their transactions.
FOOTNOTES
[i] In the five fiscal years ending August 31, 2023, the potential value of AI awards by the U.S. federal government was $4.2 billion, with over 95% of those awards coming from the Department of Defense. The evolution of artificial intelligence (AI) spending by the U.S. government | Brookings; Lots of talk about AI, but are agencies spending money on it? (federalnewsnetwork.com). A bipartisan group of senators recently proposed spending $32 billion on public and private research and development for AI. Senators Propose $32 Billion in Annual A.I. Spending but Defer Regulation – The New York Times (nytimes.com).
[ii] The White House Executive Order issued on October 30, 2023, mandated that an interagency council issue guidance to agencies “to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government.” On April 29, 2024, the White House announced that federal agencies had completed all key AI actions within its 180-day plan, which include practical applications of AI such as streamlining energy permitting processes and a pilot program to streamline domestic visa renewals. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence | The White House; Biden-Harris Administration Announces Key AI Actions 180 Days Following President Biden’s Landmark Executive Order | The White House.
[iii] Since the start of the war in Ukraine, European militaries have increased their spending and specifically earmarked funds for military AI, which they had previously been reluctant to do. For instance, NATO announced a $1 billion innovation fund (which will presumably include AI) and the German military set aside $500 million for research and artificial intelligence. Why business is booming for military AI startups | MIT Technology Review. More generally, analysts expect a CAGR in excess of 40% for AI over the next decade, with the generative AI market exceeding $1.3 trillion by 2032. Generative AI to Become a $1.3 Trillion Market by 2032, Research Finds | Press | Bloomberg LP.
[iv] State-Resource-List-on-AI-Oct-2023.pdf (nga.org)
[v] For example, FedRAMP, the federal program that leads cloud authorization, provided guidance that chat interfaces, code-generation and debugging tools, and prompt-based image generators will be the AI priorities for federal procurement. Emerging Technology Prioritization Framework | FedRAMP.gov; see also Emerging AI Landscape: FedRAMP Publishes Draft Emerging Technology Prioritization Framework in Response to Executive Order on Artificial Intelligence | AI Law and Policy.
[vi] This note generally takes the acquiror’s perspective, but prospective sellers will also want to consider the extent to which they are prepared to meet an acquiror’s diligence expectations.
[vii] Why Companies Need AI Legal Training and Must Develop AI Policies.
[viii] Responsible AI – Everyone is Talking About it But What Is It?
[ix] A Look at Local Government Cybersecurity in 2020 | icma.org; How state governments are addressing cybersecurity | Brookings
[x] OMB Releases Implementation Guidance Following President Biden’s Executive Order on Artificial Intelligence | OMB | The White House; Key takeaways from the Biden administration executive order on AI | EY – US.
[xi] AI Risk Management Framework | NIST; The White House Executive Order on AI and its Impact on Government Contractors (acc.com)
[xii] Artificial Intelligence Risk Management Framework (AI RMF 1.0) (nist.gov).
[xiii] See NIST Updates AI RMF as Mandated by the White House Executive Order on AI.
[xiv] See Training AI Models – Just Because It’s “Your” Data Doesn’t Mean You Can Use It.
[xv] See FTC Warns About Changing Terms of Service or Privacy Policy to Train AI on Previously Collected Data.
[xvi] See Legal Issues When Training AI On Previously Collected Data.
[xviii] Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through The Federal Government | The White House.
[xix] Figure F-3: Combined Scheduling Letter and Itemized Listing | U.S. Department of Labor (dol.gov).
[xx] See Equity and Civil Rights Issues in the White House Executive Order on AI.
[xxi] The Equal Employment Opportunity Commission has indicated a focus on identifying bias in automated hiring tools. Strategic Enforcement Plan Fiscal Years 2024 – 2028 | U.S. Equal Employment Opportunity Commission (eeoc.gov). The Department of Labor published a best-practice guide for government contractors and subcontractors to clarify their legal obligations. Artificial Intelligence and Equal Employment Opportunity for Federal Contractors | U.S. Department of Labor (dol.gov)
[xxii] The EU AI Act categorizes AI uses into four levels of risk: harmful AI practices (e.g., social scoring by governments), high-risk AI systems (e.g., law enforcement tools), limited-risk systems (e.g., consumer-facing chatbots), and low-risk systems (e.g., recommendation engines). Artificial intelligence act (europa.eu).
[xxiii] global_ai_law_policy_tracker.pdf (iapp.org).
[xxiv] Legal Considerations When Using Consumer Data To Train AI – Law360; How Tech Giants Cut Corners to Harvest Data for A.I. – The New York Times (nytimes.com)
[xxv] See Copyright Office Guidance on AI.
[xxvi] See Solving Open Source Problems With AI Code Generators – Legal Issues and Solutions.
[xxvii] Mind Your Audience: Disclosure of Confidential Information to AI Programs Can Give Rise to Trade Secret Misappropriation Claims | AI Law and Policy.
[xxviii] AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive | Federal Trade Commission (ftc.gov).
[xxix] Federal Register :: Inventorship Guidance for AI-Assisted Inventions.
[xxx] PowerPoint Presentation (uspto.gov).
[xxxi] Microsoft to Indemnify Users of Copilot AI Software – Leveraging Indemnity to Help Manage Generative AI Legal Risk.
[xxxii] Lawyers have already been apprised of the reliability risks of AI, as judges have imposed penalties on lawyers who relied on AI that produced “hallucinated” (i.e., non-existent) case citations in legal briefs submitted to the court. New York lawyers sanctioned for using fake ChatGPT cases in legal brief | Reuters.
[xxxiii] Microsoft’s AI talent raid will test regulators and VCs.
[xxxiv] Not So “Final”? Texas Federal Court Enjoins Enforcement of FTC’s Noncompete Ban, Leaving Future of Commission’s Rule in Doubt | Healthcare Law Blog (sheppardhealthlaw.com).
[xxxv] FTC Votes to Ban Noncompete Agreements | Antitrust Law Blog.
[xxxvi] By way of example, see M&A Transactions: Drafting AI Representations and Warranties for Non-AI Companies.
[xxxvii] Acquirors might consider certain AI representations and warranties in the intellectual property and privacy domains of an M&A transaction. M&A Transactions: Drafting AI Representations and Warranties for Non-AI Companies | Sheppard Mullin Richter & Hampton LLP – JDSupra.