AI DEREGULATION AND THE RISK OF BIAS
Author: Savannah Schoen, Associate Editor
I. Introduction
On January 20, 2025, President Trump repealed former President Biden’s 2023 Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” and, three days later, replaced it with Executive Order 14179.[i] Executive Order 14110 encouraged the Consumer Financial Protection Bureau and the Federal Housing Finance Agency to monitor for lending bias in Artificial Intelligence (“AI”) use. It also pressed those regulators to evaluate the algorithms used in lending to ensure they do not unfairly disadvantage protected groups.[ii] Trump’s January 23, 2025, Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” moves away from this approach, prioritizing deregulation and speed.[iii]
Analyzing 2019 mortgage data, the nonprofit newsroom The Markup found that lenders denied loans to people of color at higher rates than to white applicants with comparable financial characteristics, a disparity that existed even without AI.[iv] Algorithms trained on data reflecting such historically discriminatory lending decisions can reproduce these disparities through seemingly neutral factors.[v] For example, due to historical housing segregation, zip codes often serve as a proxy for race.[vi] While zip codes may appear neutral in lending, their historical foundation means they are not always so.[vii] An algorithm drawing on zip-code data does not explicitly consider race, yet it is likely to reproduce the same historically discriminatory effect.[viii]
Trump’s January Executive Order signals the AI industry to dismantle existing anti-discrimination guardrails in order to prioritize speed and development.[ix] When lenders, encouraged by the Trump administration, pursue aggressive AI development without sufficient human oversight or regulation, these systems can magnify existing disparities.[x] As a result, minority applicants will likely face heightened risks of unfair treatment.
II. Executive Order 14179 and State Law
On July 23, 2025, the Trump Administration released Winning the Race: America’s AI Action Plan (“Action Plan”), fulfilling the deregulation promise made in Executive Order 14179.[xi] The Action Plan emphasizes that “AI is far too important to smother in bureaucracy” and that federal AI-related funding will not be allocated to states with “burdensome AI regulations.”[xii] Through the Action Plan, the Trump Administration may be warning states that aggressive AI regulation could cost them federal financial support.[xiii] However, the Action Plan does not define “burdensome,” leaving the term vague and states subject to the Administration’s discretion.[xiv]
In 2025, all fifty states introduced AI legislation, and thirty-eight had adopted or enacted regulatory measures by the end of the year. However, some states have already begun pulling back these regulations.[xv] For example, in 2024, Colorado became the first state to enact a comprehensive state regulatory framework for AI.[xvi] Set to take effect in 2026, the law emphasized preventing algorithmic discrimination, specifically in hiring, banking, and housing.[xvii] However, after President Trump issued Executive Order 14179, followed by the Action Plan, Colorado paused implementation, citing the need for more time before the law takes effect.[xviii] While Colorado began as the pioneer of state AI regulation, Executive Order 14179 has prompted a delay until January 2027, leaving open the question of how the state will ultimately respond.[xix] Meanwhile, the fate of discriminatory algorithms remains unaddressed; without proper regulation and oversight, they may grow increasingly biased over time.[xx]
III. Bias in the Algorithm
Regulating the underlying AI algorithm is essential because AI systems are not programmed to perform a task; instead, they learn how to perform it.[xxi] An AI system learns by assessing and training on data.[xxii] One of the most common reasons an algorithm develops bias is that its training data was itself biased.[xxiii] In lending, for example, an algorithm trained on historical lending data shaped by unequal access to credit may continue that pattern of unequal access.[xxiv] As a result, the algorithm may effectively discriminate by disproportionately penalizing certain consumers, particularly minority groups.[xxv]
AI heightens the risk of reinforcing historical discrimination because it can process large amounts of data and generate outputs in ways humans may not anticipate.[xxvi] Because AI can weigh numerous variables, it can detect embedded social patterns, such as the zip-code example above, and perpetuate them.[xxvii] This is why proponents of AI regulation have emphasized the risks of the new push for deregulation and highlighted the need for increased human oversight.[xxviii] The Colorado legislature cited “ample evidence” of AI algorithms producing “deeply biased” outputs, warranting heightened oversight and regulation in areas such as banking.[xxix] However, that increased regulation may not materialize given the Action Plan’s strong push toward deregulation and its vague terms, which leave states dependent on the Trump administration’s discretion for AI-related financial support.[xxx]
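The mechanism described in this section can be illustrated with a deliberately simplified sketch. All data and names below are hypothetical: the "model" merely learns historical approval rates per zip code, yet because past decisions in zip "A" (imagined here as a historically redlined area) were biased, it denies a new applicant from that area despite finances identical to an approved applicant from zip "B." Race is never an input, but the proxy carries the bias forward.

```python
# Hypothetical historical records: (zip_code, credit_score, approved).
# Applicants have identical credit profiles, but zip "A" was historically
# approved far less often than zip "B" due to discriminatory lending.
history = [
    ("A", 700, False), ("A", 700, False), ("A", 700, True),
    ("B", 700, True),  ("B", 700, True),  ("B", 700, False),
]

def train(records):
    """'Learn' an approval rate per zip code from historical outcomes."""
    counts = {}
    for zip_code, _score, approved in records:
        n, k = counts.get(zip_code, (0, 0))
        counts[zip_code] = (n + 1, k + (1 if approved else 0))
    return {z: k / n for z, (n, k) in counts.items()}

def predict(model, zip_code):
    """Approve only if the learned approval rate for that zip exceeds 50%."""
    return model[zip_code] > 0.5

model = train(history)
# Two applicants with identical finances receive different outcomes,
# based solely on where they live -- the bias in the data, reproduced.
print(predict(model, "A"))  # False: denied (rate 1/3)
print(predict(model, "B"))  # True: approved (rate 2/3)
```

A real underwriting model would be far more complex, but the failure mode is the same: any model fit to outcomes that encode discrimination will reproduce that discrimination through whatever correlated features it is given.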
IV. Conclusion
While developments in AI and the implementation of AI processes promise more efficient lending decisions, the potential for discriminatory bias cannot be ignored. With President Trump’s Executive Order pushing for speed and deregulation, it falls to lenders and industry participants to scrutinize the data used by their AI systems.[xxxi] The future of fair lending depends on trustworthy AI innovation paired with strong regulatory oversight.
[i] Donovan Estrada, Artificial Authority: Federalism, Preemption, and the Constitutional Structure of AI Regulation, 53 RULREC 36, 40 (2025); see also Exec. Order No. 14,179, 90 Fed. Reg. 8741 (Jan. 23, 2025).
[ii] Exec. Order No. 14,110, 88 Fed. Reg. 75191 (Oct. 30, 2023), revoked by Exec. Order No. 14,148, 90 Fed. Reg. 8237 (Jan. 20, 2025).
[iii] Supra note i.
[iv] Emmanuel Martinez & Lauren Kirchner, The Secret Bias Hidden in Mortgage-Approval Algorithms, The Markup (Aug. 25, 2021), https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms.
[v] Korin Munsterman, When Algorithms Judge Your Credit: Understanding AI Bias in Lending Decisions, Accessible Law, Univ. of N. Tex. at Dallas C.L. (Spring 2025), https://www.accessiblelaw.untdallas.edu/post/when-algorithms-judge-your-credit-understanding-ai-bias-in-lending-decisions.
[vi] Id.
[vii] See id.
[viii] See id.
[ix] See NYU Stern Ctr. for Bus. & Hum. Rts., Trump’s Executive Order on AI Creates a Dangerous Regulatory Vacuum (Dec. 16, 2025), https://bhr.stern.nyu.edu/quick-take/trumps-executive-order-on-ai-creates-a-dangerous-regulatory-vacuum/.
[x] See Sadie Cavazos, The Impact of Artificial Intelligence on Lending: A New Form of Redlining?, 11 TXAMJPL 311, 343-46 (2025).
[xi] See Winning the Race: America’s AI Action Plan, White House Office of Sci. & Tech. Pol’y (July 23, 2025), https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
[xii] Id. at 3.
[xiii] Donovan Estrada, Artificial Authority: Federalism, Preemption, and the Constitutional Structure of AI Regulation, 53 RULREC 36, 49 (2025); supra note xi.
[xiv] Supra note xiii.
[xv] National Conference of State Legislatures, Artificial Intelligence 2025 Legislation (updated July 10, 2025), https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation (last visited Feb. 12, 2026).
[xvi] Sara Wilson, Colorado Becomes First State with Sweeping Artificial Intelligence Regulations, Colorado Newsline (May 20, 2024), https://coloradonewsline.com/briefs/colorado-first-state-artificial-intelligence-regulations/.
[xvii] Colo. Rev. Stat. § 6-1-1701 (2024).
[xviii] Letter from Jared Polis et al., Governor, State of Colo., to Colo. Gen. Assembly (May 5, 2025); see also Marianne Goodland & Colorado Politics, Gov. Jared Polis, Democrats Press for Delay in Implementation of AI Law, Colo. Politics (May 5, 2025), https://www.coloradopolitics.com/2025/05/05/gov-jared-polis-democrats-press-for-delay-in-implementation-of-ai-law-846dfdaf-9175-5ce8-92ca-fb586da7bc20/.
[xix] Id.
[xx] See supra note v.
[xxi] Michael Griffith, AI Lending and the ECOA: Avoiding Accidental Discrimination, 27 NCBNKI 349, 357 (2023).
[xxii] Id.
[xxiii] Id. at 363.
[xxiv] Supra note v.
[xxv] Id.
[xxvi] Supra note xxi.
[xxvii] Id.
[xxviii] Donovan Estrada, Artificial Authority: Federalism, Preemption, and the Constitutional Structure of AI Regulation, 53 RULREC 36, 86 (2025).
[xxix] Goodland & Colorado Politics, supra note xviii; supra note xxviii.
[xxx] Supra note xi.
[xxxi] Exec. Order No. 14,179, 90 Fed. Reg. 8741 (Jan. 23, 2025); see supra note xi.