DID AI DO THAT? USE, REGULATION, AND RISKS FOR THE LEGAL PROFESSION
Author: Alicia MacRae, Senior Editor
“Give credit to whom credit is due.” -Samuel Adams.[i]
From profit-sharing to prison time, society honors attribution. The new iterations of artificial intelligence (“AI”) make this increasingly difficult to accomplish when our eyes and ears, once held to be reliable filters of truth, can be deceived.[ii] The stakes are high as businesses and their lawyers face growing risks of data breaches, intellectual property theft, and regulatory violations. This blog explores those risks, identifies practical strategies for risk management, and compares emerging regulatory frameworks in the United States and Europe.
AI is now a fundamental part of operations, with 82% of companies using or exploring AI use in various aspects of their business, from customer service chatbots to automated contract drafting.[iii] Its rapid adoption accelerates risks.[iv] As society grapples with AI-related legal liability, businesses and employees must understand the risks of AI use to protect themselves from exposure.
What’s At Stake
AI flaws are surfacing. In 2023, a Chevrolet dealership’s AI chatbot was tricked by a savvy user into agreeing to sell a 2024 Tahoe worth $76,000 for only $1.[v] In 2024, Air Canada was ordered to honor its customer service bot’s erroneous promise of a post-flight bereavement refund that did not exist in the company’s policies.[vi] In 2023, Samsung employees accidentally disclosed proprietary data by entering it into ChatGPT, prompting the company to ban AI use.[vii] Similarly, in 2024, Otter.ai, an AI tool for recording and transcribing meetings, kept transcribing investors’ conversation after their meeting had ended, without their knowledge, causing a deal to collapse.[viii] These AI flaws have consequences.
As businesses increasingly rely on AI tools, they also face significant risks, including data privacy breaches, regulatory compliance challenges, intellectual property concerns, and unintended legal liabilities.[ix] Employees commonly rely on tools like ChatGPT, Claude, Grammarly, Perplexity, or Gamma, without considering “terminal ingestion”—the irrevocable absorption of data into generative AI models.[x] This makes it impossible to fully claw back or delete any data put into those models.[xi]
Yet many companies still lack adequate AI policies or fail to enforce them effectively.[xii] The 2023 Cisco Data Privacy Benchmark Study highlighted the accidental disclosure of confidential information, finding that such breaches are common in business settings.[xiii] In February 2024, market research reported that 31% of employees acknowledged having entered sensitive data into these tools.[xiv] To guide company AI use effectively, attorneys need a solid understanding of how a company is using AI and a comprehensive review of the company’s existing policies and practices.[xv] Lawyers are challenged to keep abreast of available AI tools, actual client AI use, and evolving regulations in order to provide informed and effective advice.
Risk Management Simplified: Permission, Education, Tracking
The legal profession can lead the charge for compliance despite dealing with a patchwork of regulation.[xvi] To avoid a clash between varied AI regulations and clients’ practical use of AI, educated risk managers should guide the careful navigation of privacy concerns and ethical risks.[xvii] Prevention begins with identifying risks and realities early, creating standard usage procedures, monitoring adherence, and revising policies to reflect change.
Permission. By providing employees with company-approved AI tools, a company limits its risk to the known characteristics of those platforms. A procedure for requesting tools not on the approved list lets the business research privacy loopholes or issues associated with an alternative tool before an employee uses it.
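A request procedure like this can be sketched in code. The following is a purely illustrative sketch, not any company’s actual system; the tool names, approved list, and review step are all hypothetical.

```python
# Hypothetical gatekeeping for AI tool requests: approved tools pass,
# unlisted tools are queued for vetting before any employee use.

APPROVED_TOOLS = {"ChatGPT Enterprise", "Grammarly Business"}  # example entries

def request_tool_use(tool_name: str, pending_reviews: set) -> str:
    """Route a tool request: approved tools pass, others go to review."""
    if tool_name in APPROVED_TOOLS:
        return "approved"
    # An unlisted tool triggers a vetting step (privacy terms, data
    # retention, training-data ingestion) before approval.
    pending_reviews.add(tool_name)
    return "pending review"

pending = set()
print(request_tool_use("ChatGPT Enterprise", pending))  # approved
print(request_tool_use("Gamma", pending))               # pending review
```

The point of the sketch is the default: any tool not already vetted is blocked until someone researches it, rather than allowed until someone objects.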
Education. Define what interactions are appropriate. Explaining employees’ responsibility for the output of their AI use, the need to identify and correct bias or hallucinations, and the company’s requirement of human review for all AI output helps educate and encourage conscientious usage. Employees remain gatekeepers of intellectual property.
Tracking. Employees should log AI use, and existing cybersecurity procedures can aid oversight. Flagging marks specific content as suspicious and can help identify AI-enhanced outputs, while tagging marks data expressly prohibited from AI use. The company’s IT department should monitor access to AI tools, allowing regular review of inputs and outputs for quality control. By permitting only vetted tools that conform to the regulatory landscape and monitoring their usage, compliance can be maintained.
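The logging, tagging, and flagging described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not a recommended implementation: the tag names, record fields, and in-memory log are invented for illustration.

```python
# Illustrative sketch of an AI-use log: each use is recorded, and uses
# involving prohibited data tags are flagged for compliance review.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical tags marking data barred from entry into AI tools.
PROHIBITED_TAGS = {"client-confidential", "trade-secret"}

@dataclass
class AIUsageRecord:
    employee: str
    tool: str
    input_tags: set                 # tags attached to the data entered
    flagged: bool = False           # marked for IT/compliance review
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def log_use(log: list, employee: str, tool: str, tags: set) -> AIUsageRecord:
    """Record a use; flag it if any prohibited tag is present."""
    record = AIUsageRecord(employee, tool, tags,
                           flagged=bool(tags & PROHIBITED_TAGS))
    log.append(record)
    return record

usage_log = []
ok = log_use(usage_log, "a.smith", "ChatGPT Enterprise", {"marketing-copy"})
bad = log_use(usage_log, "b.jones", "ChatGPT Enterprise", {"client-confidential"})
print(ok.flagged, bad.flagged)  # False True
```

In practice this role is played by data-loss-prevention and monitoring tools a company already runs; the sketch only shows how tagging feeds flagging.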
The Legal Landscape: Europe v. United States
Businesses face inconsistent global regulatory standards.[xviii] The EU’s AI Act imposes a comprehensive framework, relying on a risk-based approach to govern AI systems.[xix] For example, high-risk systems, such as those used in critical infrastructure or in healthcare scenarios like robot-assisted surgery, are subject to the strictest obligations.[xx] The United States’ regulatory approach remains fragmented.[xxi] Agencies like the Federal Trade Commission address AI misuse through consumer protection laws,[xxii] while states like California have adopted their own frameworks targeting data privacy and AI governance.[xxiii] Regulations remain complex and inconsistently applied.[xxiv]
Call to Lead
Technical expertise is not required to ensure AI is deployed responsibly.[xxv] Strategies like those suggested here or by the American Bar Association, the Federal Trade Commission, and industry leaders should include solid policy development, human oversight, and consistent enforcement.[xxvi] Through AI policies, negotiated vendor agreements, and alignment with regulations, legal counsel bridges the gap between emerging laws and real-world risks while also understanding AI’s capabilities, limitations, and implications. Corporate counsel who fail to address AI risks today may find clients exposed to liabilities tomorrow. Before asking, “Did AI do that?” ensure you and your clients fully grasp the consequences of the answer.
[i] Samuel Adams Quotes, AZ QUOTES, https://www.azquotes.com/quote/1270230 (last visited Apr. 29, 2025).
[ii] DEPT. HOMELAND SEC., INCREASING THREAT OF DEEPFAKE IDENTITIES, https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf (last visited Apr. 28, 2025).
[iii] Anthony Cardillo, How Many Companies Use AI? (New Data), EXPLODING TOPICS (Aug. 21, 2024), https://explodingtopics.com/blog/companies-using-ai.
[iv] Logan Kolas, The Mess States Are Making of AI Regulation, GOVERNING (July 29, 2024), https://www.governing.com/policy/the-mess-states-are-making-of-ai-regulation.
[v] Niamh Ancell, Chevrolet dealership duped by hacker into selling $70k car at criminally low price, CYBERNEWS (Aug. 30, 2024, 8:10 AM), https://cybernews.com/ai-news/chevrolet-dealership-chatbot-hack/.
[vi] Nick Robertson, Air Canada Must Pay Refund Promised By AI Chatbot, Tribunal Rules, THE HILL (Feb. 18, 2024), https://thehill.com/business/4476307-air-canada-must-pay-refund-promised-by-ai-chatbot-tribunal-rules.
[vii] Siladitya Ray, Samsung Bans ChatGPT Among Employees After Sensitive Code Leak, FORBES (May 2, 2023, 7:31 AM), https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak.
[viii] Tatum Hunter & Danielle Abril, AI Assistants Are Blabbing Our Embarrassing Work Secrets, WASH. POST (Oct. 2, 2024), https://www.washingtonpost.com/business/2024/10/02/ai-assistant-transcription-work-secrets-meetings/.
[ix] How Do Businesses Use Artificial Intelligence?, WHARTON SCH., U. PENN. (Jan. 19, 2022), https://online.wharton.upenn.edu/blog/how-do-businesses-use-artificial-intelligence.
[x] Stephen Pastis, A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data, FORTUNE (Aug. 30, 2023, 12:43 PM), https://fortune.com/europe/2023/08/30/researchers-impossible-remove-private-user-data-delete-trained-ai-models/; see also Matt Burgess & Reece Rogers, How to Stop Your Data From Being Used to Train AI, WIRED (Oct. 12, 2024, 9:38 AM), https://www.wired.com/story/how-to-stop-your-data-from-being-used-to-train-ai/.
[xi] Id.; see OpenAI Privacy Policy: 2023, OPENAI (2023), https://openai.com/policies/privacy-policy (“This Privacy Policy does not apply to content that we process on behalf of customers of our business offerings, such as our API. Our use of that data is governed by our customer agreements covering access to and use of those offerings.”); see also Bernard Marr, The Employees Secretly Using AI At Work, FORBES (Sept. 5, 2024, 1:08 PM), https://www.forbes.com/sites/bernardmarr/2024/09/05/the-employees-secretly-leveraging-ai-at-work/.
[xii] Sarah Lynch, More Employees Are Using AI, but Often Don’t Have Guidance: New Reports Show that AI is Becoming More Commonplace at Work, INC. (Sept. 25, 2024), https://www.inc.com/sarah-lynch/more-employees-using-ai-often-dont-have-guidance.html.
[xiii] Privacy’s Growing Importance and Impact: Cisco 2023 Data Privacy Benchmark Study, CISCO SECURE, https://www.cisco.com/c/dam/en_us/about/doing_business/trust-center/docs/cisco-privacy-benchmark-study-2023.pdf.
[xiv] Eileen Yu, Employees Input Sensitive Data Into Generative AI Tools Despite the Risks, ZDNET (Feb. 22, 2024, 2:19 AM), https://www.zdnet.com/article/employees-input-sensitive-data-into-generative-ai-tools-despite-the-risks/.
[xv] Andrew Perlman, Generative AI in the Legal Profession: The Implications of ChatGPT for Legal Services and Society, HARV. CTR. ON LEGAL PRO. (Mar./Apr. 2023), https://clp.law.harvard.edu/knowledge-hub/magazine/issues/generative-ai-in-the-legal-profession/the-implications-of-chatgpt-for-legal-services-and-society/.
[xvi] Logan Kolas, supra note iv.
[xvii] Jennifer King & Caroline Meinhardt, Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World, STANFORD U. HUM.-CENTERED A.I. (Feb. 2024), https://hai.stanford.edu/sites/default/files/2024-02/White-Paper-Rethinking-Privacy-AI-Era.pdf.
[xviii] Adam Thierer, The Pacing Problem and the Future of Technology Regulation: Why Policymakers Must Adapt to a World That’s Constantly Innovating, MERCATUS CTR. GEO. MASON U. (Aug. 8, 2018), https://www.mercatus.org/economic-insights/expert-commentary/pacing-problem-and-future-technology-regulation.
[xix] EU AI Act: First Regulation On Artificial Intelligence, EUR. PARL. (June 18, 2024, 4:29 PM), https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence#ai-act-different-rules-for-different-risk-levels-0.
[xx] Id.
[xxi] Tammy Whitehouse, How AI Governance Can Adapt To A Fragmented Regulatory Landscape, WALL ST. J. (Nov. 22, 2024, 3:00 PM), https://deloitte.wsj.com/riskandcompliance/how-ai-governance-can-adapt-to-a-fragmented-regulatory-landscape-26566914.
[xxii] Press Release, FTC Announces Crackdown on Deceptive AI Claims and Schemes, FED. TRADE COMM’N (Sept. 25, 2024), https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes.
[xxiii] C. Kibby & Richard Sentinella, New Laws in California Look to the Future of Privacy and AI, INT’L ASS’N PRIV. PRO. (Nov. 27, 2024), https://iapp.org/news/a/checking-in-on-proposed-california-privacy-and-ai-legislation.
[xxiv] US State-by-State AI Legislation Snapshot, BCLP LLP. (Jun. 7, 2024), https://www.bclplaw.com/en-US/events-insights-news/us-state-by-state-artificial-intelligence-legislation-snapshot.html.
[xxv] Brittany Kauffman, The Implications of Generative AI: From the Delivery of Legal Services to the Delivery of Justice, INST. FOR ADVANCEMENT OF AMER. L. SYSTEM (Mar. 29, 2024), https://iaals.du.edu/blog/implications-generative-ai-delivery-legal-services-delivery-justice.
[xxvi] Elisa Jillson, Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI, FED. TRADE COMM’N (Apr. 19, 2021), https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.