AI Law and Policy

Legal Considerations Involving Artificial Intelligence

A pending lawsuit raises an interesting copyright infringement question – does scraping an AI-generated database of job listings constitute copyright infringement?

Jobiak, the plaintiff in Jobiak v. Botmakers, is an AI-based recruitment platform that offers a service for quickly and directly publishing job postings online and leverages machine learning technology to optimize third-party job descriptions in real time and generate an automated database of its job postings. Jobiak alleges copyright infringement (among other claims) because Botmakers allegedly scraped Jobiak’s proprietary database and subsequently incorporated its contents directly into its own job listings.
Continue Reading Court to Decide Whether AI-scraped Job Database Is Subject to Copyright Protection and Is Infringed?

Is your M&A target a manufacturing company with automated production, a consumer products business with online sales and marketing, or an education company that creates content for students? The increasing use and development of artificial intelligence (“AI”) systems and products, particularly generative AI, has created risks for businesses using such tools. AI plays a role in many industries and businesses whose products and services are not themselves AI. In the context of an M&A transaction, it is important to identify and allocate responsibility for these risks. Risks of AI may include: infringement (including through use of training data as well as outputs),
Continue Reading M&A Transactions: Drafting AI Representations and Warranties for Non-AI Companies

California is among a handful of states seeking to regulate the use of artificial intelligence (“AI”) in connection with utilization review in the managed care space. SB 1120, sponsored by the California Medical Association, would require algorithms, AI and other software tools used for utilization review to comply with specified requirements. We continue to keep up to date on AI-related law, policy and guidance. The Sheppard Mullin Healthcare Team has written on AI-related topics this year and those articles are listed here: i) AI Related Developments, ii) FTC’s 2024 PrivacyCon Part 1, and iii) FTC’s 2024 PrivacyCon Part
Continue Reading The Intersection of Artificial Intelligence and Utilization Review

According to published reports, George Carlin’s estate settled right of publicity and copyright claims relating to an AI-scripted comedy special that used a “sound-alike” of George Carlin to perform the generated script. The special – “I’m Glad I’m Dead” – sought to reflect how Carlin would have commented on current events since his death in 2008. While most of the settlement terms are confidential, the settlement is significant as one of the first resolutions of a case involving these issues. According to plaintiff’s lawyer, the defendants agreed to permanently remove the comedy special and to never repost it on any platform.
Continue Reading George Carlin Was Funny – Copying His Likeness AIn’t – Estate Settles AI-based Right of Publicity and Copyright Claims

The Organisation for Economic Co-operation and Development (OECD), which works on establishing evidence-based international standards and develops advice on public policies, has issued updated recommendations (“Recommendation”) on responsible AI to reflect technological and policy developments, including with respect to generative AI, and to further facilitate its implementation.
Continue Reading OECD Updates Guidance on Responsible AI

The development of AI continues to advance at a blistering pace, increasing the need for companies to employ AI governance and adopt policies for the responsible development and deployment of AI. While the term “responsible AI” is frequently used, it is rarely well understood and often complex. Fortunately, a growing body of resources is becoming available to help companies understand and implement responsible AI. Two of the more recent resources are a set of publications by NIST (the National Institute of Standards and Technology) and Microsoft. These publications provide examples of efforts by these institutions to develop best practices for responsible
Continue Reading Responsible AI – Everyone is Talking About it But What Is It?

We have now reached the 180-day mark since the White House Executive Order (EO) on the Safe, Secure and Trustworthy Development of AI and we are seeing a flurry of mandated actions being completed. See here for a summary of recent actions. One of the mandated actions was for the National Institute of Standards and Technology (NIST) to update its January 2023 AI Risk Management Framework (AI RMF 1.0), which it has now done. To this end, NIST released four draft publications intended to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems and launched a challenge
Continue Reading NIST Updates AI RMF as Mandated by the White House Executive Order on AI

This is the second post in a two-part series on PrivacyCon’s key-takeaways for healthcare organizations. The first post focused on healthcare privacy issues.[1] This post focuses on insights and considerations relating to the use of Artificial Intelligence (“AI”) in healthcare. In the AI segment of the event, the Federal Trade Commission (“FTC”) covered: (1) privacy themes; (2) considerations for Large Language Models (“LLMs”); and (3) AI functionality.
Continue Reading Artificial Intelligence Highlights from FTC’s 2024 PrivacyCon

Massachusetts Attorney General Andrea Campbell issued an advisory (“Advisory”) warning developers, suppliers, and users of artificial intelligence and algorithmic decision-making systems (collectively, “AI”) about their respective obligations under the Massachusetts Consumer Protection Act, Anti-Discrimination Law, Data Security Law and related regulations. There is not much surprising here, as the Advisory addresses many of the same issues raised in the White House Executive Order and Federal Trade Commission (FTC) guidance. It is helpful, however, in clarifying, for consumers, developers, suppliers, and users of AI systems, specific aspects of existing state laws and regulations that apply to AI and that these
Continue Reading Massachusetts AG Says Consumer Protection, Civil Rights, and Data Privacy Laws Apply to Artificial Intelligence

Colorado is the latest state to introduce a bill focused on consumer protection issues when companies develop AI tools. The bill imposes obligations on developers and deployers of AI systems. Additionally, the bill provides an affirmative defense for a developer or deployer of a high-risk system or generative system involved in a potential violation if the developer or deployer: i) has implemented and maintained a program that complies with a nationally or internationally recognized risk management framework for artificial intelligence systems that the bill or the attorney general designates; and ii) takes specified measures to discover
Continue Reading Colorado Introduces an AI Consumer Protection Bill

The USPTO issued guidance on February 6, 2024 that clarified existing rules and policies and discussed how to apply them when AI is used in the drafting of submissions to the Patent Trial and Appeal Board (PTAB) and Trademark Trial and Appeal Board (TTAB). As a follow-up, the USPTO has now published additional guidance in the Federal Register on some important issues that patent and trademark professionals, innovators, and entrepreneurs must navigate while using artificial intelligence (AI) in matters before the USPTO. The guidance recognizes that practitioners use AI to prepare and prosecute patent and trademark applications. It reminds
Continue Reading USPTO Issues Additional Guidance on Use of AI Tools in Connection with USPTO Matters

The NY State Bar Association (NYSBA) Task Force on Artificial Intelligence has issued a nearly 80-page report (Report) and recommendations on the legal, social and ethical impact of artificial intelligence (AI) and generative AI on the legal profession. This detailed Report also reviews AI-based software, generative AI technology and other machine learning tools that may enhance the profession but also pose risks, given individual attorneys’ limited understanding of new, unfamiliar technology, as well as courts’ concerns about the integrity of the judicial process. It also makes recommendations for NYSBA adoption, including proposed guidelines for responsible AI use. This Report
Continue Reading NY State Bar Association Joins Florida and California on AI Ethics Guidance – Suggests Some Surprising Implications

AI tools such as ChatGPT and Otter are becoming common programs that employees use to help streamline business tasks. Otter, for example, is an AI Meeting Assistant that automatically transcribes and summarizes meetings in real time, records audio, captures slides, extracts action items, and generates content such as e-mails and status updates. While tools like Otter may provide quick answers or help synthesize a large volume of information, employers and employees alike should be mindful of the types of information fed to (and possibly stored in) AI programs. The use of an AI tool to, for example, record a
Continue Reading Mind Your Audience: Disclosure of Confidential Information to AI Programs Can Give Rise to Trade Secret Misappropriation Claims

The AI landscape is rapidly changing. To keep you up to date on fast-breaking changes in the AI space, we will be providing weekly updates summarizing significant news and legal developments, ranging from AI lawsuits and enforcement actions to legislation and regulations. Below are some highlights of key developments and articles you can view to learn more.
Continue Reading AI Legal Updates

The SEC has charged two investment advisers with making false and misleading statements about their use of artificial intelligence (AI), and has settled those claims. The SEC found that Delphia (USA) Inc. and Global Predictions Inc. marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not. SEC Chair Gensler noted that when new technologies come along, they create buzz from investors as well as false claims by those purporting to use those new technologies. He admonished investment advisers not to mislead the public by saying they are using an AI model when
Continue Reading SEC Cracks Down on Over-Hyped AI Claims – Director Says This is Just the Beginning

In a prior article, Training AI Models – Just Because It’s “Your” Data Doesn’t Mean You Can Use It, we addressed how many companies are sitting on a trove of customer data and are realizing that this data can be valuable to train AI models. We noted, however, that the use of customer data in a manner that exceeds or otherwise is not permitted by the privacy policy in effect at the time the data was collected could be problematic. As companies think through these issues, some have updated (or will update) their Terms of Service (TOS) and/or privacy policy
Continue Reading FTC Warns About Changing Terms of Service or Privacy Policy to Train AI on Previously Collected Data