WHEN ARTIFICIAL INTELLIGENCE ALGORITHMS DECIDE FAMILY FATE: AI IN CHILD WELFARE AND CUSTODY DETERMINATIONS

Author: Valerie Benton, Senior Editor

In 2016, the social services agency for Allegheny County, Pennsylvania, which includes Pittsburgh, began using artificial intelligence (AI) to help determine whether it should investigate a parent for child neglect.[i] Since then, at least eleven states have adopted similar protocols, and jurisdictions in almost half of the states have considered using predictive analytics in child welfare decisions.[ii] Across the country, AI systems increasingly shape fundamental decisions about American families—often without those involved understanding how these digital judges reach their conclusions.

Family courts nationwide are rapidly adopting AI tools in three primary areas. Risk assessment algorithms, like the Allegheny Family Screening Tool used in Pittsburgh, analyze multiple variables—from prior Child Protective Services reports to parental mental health treatment and criminal history—to predict future child abuse likelihood.[iii] Custody agreement systems evaluate parental indicators such as personal schedules, geographic distances, emotional disputes, parenting philosophies, and the needs of the child to optimize parenting plans, facilitate communication, and predict judicial outcomes.[iv] Case management algorithms triage cases to allocate judicial resources effectively,[v] influencing which families receive expedited hearings and which families have to wait.

This technological revolution in family law raises novel constitutional issues that courts are only starting to address. As AI has become more prevalent in child welfare recommendations and has inched its way into custody determinations,[vi] we must examine whether our legal framework adequately protects families’ fundamental rights while encouraging beneficial innovation.

Can an AI Provide Due Process?

Integrating AI into family court could collide with the due process protection of parental authority. For over a century, the Supreme Court has repeatedly recognized that parents possess a fundamental liberty interest in the care, custody, and control of their children.[vii] When algorithms influence decisions that burden these rights, they trigger the highest level of constitutional scrutiny—strict scrutiny. This standard requires the government to prove that its use of the algorithm serves a compelling state interest and is the least restrictive means of achieving it.

Procedural due process requires that families receive adequate notice and meaningful opportunity to be heard before government action affects their rights.[viii] The proprietary nature of most family court AI systems enables companies to claim trade secret protections and refuse to disclose algorithmic operations or weighting factors. This opacity fundamentally conflicts with due process requirements.[ix]

Making matters worse, because many AI systems operate as “black boxes,” their developers cannot fully explain specific decisions.[x] When custody algorithms recommend removing children from parents, how can families meaningfully challenge recommendations that neither they, nor opposing counsel, nor the judge truly understands? Parents cannot effectively dispute custody determinations if they do not know which factors the algorithm weighted most heavily. Judicial review becomes meaningless if judges cannot evaluate algorithmic reasoning either. Worse still, family court judges who lack technical expertise may defer to flawed recommendations simply because they appear scientific and objective.[xi]

Substantive due process implications are even more troubling. Instead of nuanced personal interviews with the parties involved, AI systems make custody recommendations based on administrative and public-records data such as employment history, criminal records, zip code, school enrollment, and healthcare utilization.[xii] As a result, they may effectively penalize families for socioeconomic status rather than actual parenting ability. Such formulaic decision-making may not satisfy strict scrutiny when fundamental rights are at stake.[xiii]

Perhaps most concerning is AI’s potential to perpetuate existing systemic inequalities. Machine learning algorithms trained on historical data inevitably absorb the biases embedded in that data.[xiv] Because past child welfare investigations disproportionately targeted minority and low-income families, AI systems trained on those records replicate the pattern.[xv] Early studies show significantly higher AI-recommended intervention rates for Black and Hispanic families than for White families with similar risk profiles, raising Equal Protection concerns.[xvi] Even though AI systems may not explicitly consider race or income, they can and do rely on closely correlated factors, effectively discriminating while maintaining apparent neutrality.
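To make that kind of disparity measurable rather than rhetorical, an auditor would compare recommendation rates across groups while holding assessed risk constant. The Python sketch below illustrates the idea with entirely hypothetical data and generic group labels; it is not the methodology of any actual screening tool, only a minimal example of the kind of within-risk-band comparison the studies cited above perform.

```python
# Illustrative sketch only: hypothetical data, not drawn from any real
# screening tool. Compares AI-recommended intervention rates across
# groups within the same assessed risk band.
from collections import defaultdict

# Each record: (group, risk_band, ai_recommended_intervention)
cases = [
    ("Group 1", "low", True), ("Group 1", "low", True), ("Group 1", "low", False),
    ("Group 2", "low", False), ("Group 2", "low", False), ("Group 2", "low", True),
    ("Group 1", "high", True), ("Group 1", "high", True),
    ("Group 2", "high", True), ("Group 2", "high", False),
]

totals = defaultdict(int)   # cases per (risk_band, group)
flagged = defaultdict(int)  # recommended interventions per (risk_band, group)
for group, band, recommended in cases:
    totals[(band, group)] += 1
    if recommended:
        flagged[(band, group)] += 1

# If rates diverge sharply between groups in the same risk band, the
# tool is treating similarly situated families differently.
for (band, group), n in sorted(totals.items()):
    rate = flagged[(band, group)] / n
    print(f"risk={band:<4} group={group}: intervention rate {rate:.0%} ({n} cases)")
```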

However, states are starting to address AI transparency in government decision-making. For example, Illinois’s Artificial Intelligence Video Interview Act requires employers to disclose to job candidates when AI analyzes their video interviews.[xvii] California’s proposed Artificial Intelligence Bill of Rights would mandate bias testing and transparency reporting for government AI systems.[xviii] Both offer accountability principles that policymakers could adapt when drafting family court-specific rules for AI implementation.

Professional ethics rules are also evolving to address attorney-specific responsibilities when AI influences case outcomes. The American Bar Association suggests lawyers need to understand algorithmic recommendations to be able to challenge them,[xix] creating new competency requirements for family law practitioners. The Conference of Chief Justices has called for uniform standards for AI, though it has not yet developed guidelines specific to family law.[xx]

Protecting Families from AI Abuse

Developing appropriate legal frameworks requires balancing technological benefits with constitutional protections through several key principles:

·      Mandatory transparency: Families must be notified that an AI system is evaluating their data, told how the system works, and given a clear explanation of the factors that drive the decision and their relative weight.

·      Meaningful human oversight: Trained professionals must review algorithmic recommendations with the ability to override unjust outcomes. Algorithms should assist, but never replace, human judgment in matters regarding fundamental rights.[xxi]

·      Regular auditing and bias testing: Mandatory assessments must evaluate both accuracy and fairness, with particular attention to disparate impact on protected groups; a minimal sketch of one such disparate-impact check follows this list.
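To illustrate what such a bias test might look for, the sketch below applies the “four-fifths” adverse-impact heuristic borrowed from employment-testing law to hypothetical flag rates. The group labels, rates, and the 1.25 review threshold are assumptions for illustration only; a real audit would need to control for case mix and apply proper statistical testing.

```python
# Illustrative sketch only: hypothetical rates and group labels.
# Flags groups whose AI-recommended intervention rate exceeds the
# least-flagged group's rate by more than the four-fifths margin.

def adverse_impact_ratios(flag_rates: dict[str, float]) -> dict[str, float]:
    """Return each group's flag rate divided by the least-flagged group's rate."""
    reference = min(flag_rates.values())
    return {group: rate / reference for group, rate in flag_rates.items()}

if __name__ == "__main__":
    # Hypothetical rates at which an algorithm recommends investigation.
    rates = {"Group 1": 0.18, "Group 2": 0.31, "Group 3": 0.27}
    for group, ratio in adverse_impact_ratios(rates).items():
        # A ratio above 1.25 mirrors the reference group falling below
        # four-fifths of this group's rate, a common signal for review.
        status = "flag for review" if ratio > 1.25 else "within heuristic"
        print(f"{group}: {ratio:.2f}x the least-flagged group ({status})")
```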

AI integration into family law represents both opportunity and constitutional risk. While AI can help overwhelmed courts manage caseloads efficiently, these systems threaten fundamental family rights when deployed without adequate safeguards. Families must be able to meaningfully challenge algorithmic recommendations through rights to an explanation, disclosure of the factors behind a decision, and human review.

Courts must ensure that AI serves justice rather than supplanting it. Family law’s future will inevitably include artificial intelligence—our challenge is ensuring that this future remains grounded in constitutional principle rather than algorithmic efficiency.


[i] Allegheny Cnty. Dep’t of Hum. Servs., Office of Analytics, Tech. & Planning, Developing Predictive Risk Models to Support Child Maltreatment Hotline Screening Decisions (2019), https://www.alleghenycountyanalytics.us/2019/05/01/developing-predictive-risk-models-support-child-maltreatment-hotline-screening-decisions/ [https://perma.cc/H9RM-QBZ6].

[ii] Anjana Samant et al., Family Surveillance by Algorithm: The Rapidly Spreading Tools That Few Have Heard Of, Am. C.L. Union (Sept. 29, 2021), https://www.aclu.org/wp-content/uploads/document/2021.09.28_Family_Surveillance_by_Algorithm.pdf [https://perma.cc/GT3M-QRV8].

[iii] Supra note i.

[iv] The platforms CoParenter and OurFamilyWizard are two such examples. They manage individualized visitation schedules, track shared expenses, and facilitate communication between parents. See CoParenter, https://coparenter.com [https://perma.cc/R5G4-LAX9] (last visited Oct. 9, 2025); OurFamilyWizard, https://www.ourfamilywizard.com [https://perma.cc/8FWJ-GUWP] (last visited Oct. 9, 2025).

[v] Leveraging AI to Reshape the Future of Courts, Nat’l Ctr. for State Cts., https://www.ncsc.org/resources-courts/leveraging-ai-reshape-future-courts#:~:text=Document%20handling:%20AI%20can%20automate,based%20on%20expertise%20and%20experience [https://perma.cc/Z9WU-FFT4] (last visited Oct. 9, 2025).

[vi] Matthew Trail, Algorithmic Decision-Making in Child Welfare Cases and Its Legal and Ethical Challenges, Am. Bar Ass’n Litig. Sec. (Feb. 6, 2024), https://www.americanbar.org/groups/litigation/resources/newsletters/childrens-rights/winter2024-algorithmic-decision-making-in-child-welfare-cases/.

[vii] Meyer v. Nebraska, 262 U.S. 390, 399 (1923); Pierce v. Society of Sisters, 268 U.S. 510, 534-35 (1925); Prince v. Massachusetts, 321 U.S. 158, 166 (1944); Stanley v. Illinois, 405 U.S. 645, 651 (1972); Quilloin v. Walcott, 434 U.S. 246, 255 (1978); Parham v. J.R., 442 U.S. 584, 602 (1979); Santosky v. Kramer, 455 U.S. 745, 753 (1982); Washington v. Glucksberg, 521 U.S. 702, 720 (1997); Troxel v. Granville, 530 U.S. 57, 65 (2000).

[viii] State v. Leah B. (In re Int. of Jordon B.), 316 Neb. 974 (2024); In re L.C.P., 456 P.3d 1142 (Okla. Civ. App. 2019).

[ix] John Villasenor & Virginia Foggo, Algorithms and Sentencing: What Does Due Process Require?, Brookings Inst. (Mar. 21, 2019), https://www.brookings.edu/articles/algorithms-and-sentencing-what-does-due-process-require/#:~:text=Due%20process%20is%20a%20core,know%20what%20those%20scores%20are [https://perma.cc/FA3G-K3KC]; but see Royal Brush Mfg. v. United States, 75 F.4th 1250 (Fed. Cir. 2023) (holding that the Trade Secrets Act permits the release of information if required by constitutional due process).

[x] Neil Savage, Breaking into the Black Box of Artificial Intelligence, Nature (Mar. 29, 2022), https://doi.org/10.1038/d41586-022-00858-1 [https://perma.cc/Q92V-K7UD].

[xi] Supra note vi; but see Gary E. Marchant, AI in Robes: Courts, Judges, and Artificial Intelligence, 50 Ohio N.U. L. Rev. 473 (2024).

[xii] Supra note vii; Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671 (2016); Christine Grillo, How Data Analysis Confirmed the Bias in a Family Screening Tool, Hum. Rts. Data Analysis Grp. (June 22, 2023), https://hrdag.org/2023/06/22/afst/#:~:text=Some%20of%20the%20factors%20that,and%20algorithms%20in%20other%20locations [https://perma.cc/X4HC-E3UX]; Lama H. Nazer et al., Bias in Artificial Intelligence Algorithms and Recommendations for Mitigation, 2 PLOS Digit. Health e0000278 (2023).

[xiii] Devansh Saxena & Shion Guha, Algorithmic Harms in Child Welfare: Uncertainties in Practice, Organization, and Street-level Decision-making, 1 ACM J. Responsible Computing 1 (2024), https://doi.org/10.1145/3616473. See also Sandra G. Mayson, Bias In, Bias Out, 128 Yale L.J. 2122 (2019).

[xiv] Devansh Saxena et al., How to Train a (Bad) Algorithmic Caseworker: A Quantitative Deconstruction of Risk Assessments in Child Welfare, in Proceedings of the CHI Conference on Human Factors in Computing Systems Extended Abstracts 1 (2022).

[xv] Supra note xiii.

[xvi] Logan Stapleton et al., Extended Analysis of “How Child Welfare Workers Reduce Racial Disparities in Algorithmic Decisions” (Apr. 29, 2022), https://arxiv.org/pdf/2204.13872 [https://perma.cc/T8JG-UNH4].

[xvii] 820 Ill. Comp. Stat. 42/1-99 (2019), https://www.ilga.gov/Legislation/publicacts/view/101-0260.

[xviii] S.B. 420, 2025-26 Leg., Reg. Sess. (Cal. 2025) (passed the California Senate on June 3, 2025, and currently pending in the Assembly).

[xix] ABA Comm. on Ethics & Pro. Resp., Formal Op. 512 (July 2024).

[xx] Supra note v.

[xxi] Anna Kawakami et al., Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support, in Proceedings of the CHI Conference on Human Factors in Computing Systems (Apr. 28, 2022), https://dl.acm.org/doi/pdf/10.1145/3491102.3517439.

 
