Between Promise and Peril: The AI Paradox in Family Violence Response

By Dr. Alexia Maddox, Senior Lecturer in Pedagogy and Education Futures, School of Education, La Trobe University

When Smart Systems Meet Human Stakes

The integration of artificial intelligence into our legal system presents a profound paradox. The same AI tools promising unprecedented efficiency in predicting and preventing family violence can simultaneously amplify existing biases and create dangerous blind spots.

This tension between technological promise and human care, support and protection isn’t theoretical; it is playing out in real time across legal systems worldwide. Through my involvement in last year’s AuDIITA Symposium, and specifically its theme on AI and family violence, I took part in discussions that highlighted the high-stakes applications of AI in family violence response. I found that the question isn’t whether AI can help, but how we can ensure it enhances rather than replaces human judgment in these critical contexts.

The Capabilities and the Gaps

Recent advances in AI for family violence response show remarkable technical promise:

  • Researchers have achieved over 75% accuracy in distinguishing between lethal and non-lethal violence cases using AI analysis of legal documents
  • Machine learning systems can identify patterns in administrative data that might predict escalation before it occurs
  • Natural language processing tools can potentially identify family violence disclosures on social media platforms (see the sketch after this list)
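
To ground that last point, here is a minimal sketch of the kind of text classifier such research typically builds on. It is illustrative only: the example texts, labels and model choice are my own assumptions, not the systems or datasets behind the figures above.

```python
# Illustrative only: a minimal text-classification baseline of the kind used
# to flag possible family violence disclosures in text. The labels, example
# texts and model choice are assumptions, not any study's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = possible disclosure, 0 = unrelated.
# Real research relies on far larger, carefully governed datasets.
texts = [
    "He checks my phone and won't let me see my family",
    "Great night out with friends at the football",
    "I'm scared to go home tonight, he was shouting again",
    "Looking for recommendations for a good plumber",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: simple, transparent and easy to
# inspect, which matters when outputs feed into human decisions.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Probabilities, not verdicts: scores like these should be treated as one
# input among many, never as a definitive judgment.
print(model.predict_proba(["She said she can't leave the house without permission"]))
```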

But these impressive capabilities obscure a troubling implementation gap. What happens when these systems encounter the messy reality of human services?

The VioGén Warning

Spain’s VioGén system offers a sobering case study. Hailed as a world-leading predictive tool for family violence risk, the system nonetheless failed with tragic consequences: at least 247 women were killed after being assessed, many of them after being classified as “low” or “negligible” risk.

The system’s failures stemmed from multiple factors:

  • Victims were often too afraid or ashamed to provide complete information
  • Police accepted algorithmic recommendations 95% of the time despite lacking resources for proper investigation
  • The algorithm potentially missed crucial contextual factors that human experts might have caught
  • Most critically, the system’s presence seemed to reduce human agency in decision-making, with police and judges deferring to its risk scores even when other evidence suggested danger

Research revealed that women born outside Spain were five times more likely to be killed after filing family violence complaints than Spanish-born women. This suggests the system inadequately accounted for the unique vulnerabilities of immigrant women, particularly those facing linguistic barriers or fears of deportation.

The Cultural Blind Spot

This pattern of leaving vulnerable populations behind reflects a broader challenge in technology development. Research on technology-facilitated abuse has consistently shown how digital tools can disproportionately impact culturally and linguistically diverse women, who often face a complex double-bind:

  • More reliant on technology to maintain vital connections with family overseas
  • Simultaneously at increased risk of technological abuse through those same channels
  • Often experiencing unique forms of technology-facilitated abuse, such as threats to expose culturally sensitive information

For AI risk assessment to work, it must explicitly account for how indicators of abuse and coercive control manifest differently across cultural contexts. Yet research shows even state-of-the-art systems struggle with this nuance, achieving only 76% accuracy in identifying family violence reports that use indirect or culturally specific language.

Beyond Algorithms: The Human Element

What does this mean for the future of AI in family violence response? My research suggests three critical principles must guide implementation:

1. Augment, Don’t Replace

AI systems must be designed to enhance professional judgment rather than constrain it or encourage over-reliance in the name of efficiency. This means creating systems that:

  • Provide transparent reasoning for risk assessments
  • Allow professionals to override algorithmic recommendations based on contextual factors
  • Present information as supportive evidence rather than definitive judgment (a brief sketch of this design in code follows below)
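
To make these design choices concrete, here is a minimal sketch written under my own assumptions: the class names and fields are illustrative and do not describe any deployed system. The point is structural: the model’s score arrives with its reasoning attached, and nothing is finalised until a named professional accepts or overrides it.

```python
# Illustrative sketch only: a data structure that treats an algorithmic risk
# score as supportive evidence requiring a human decision. Names and fields
# are assumptions, not the design of any deployed system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskAssessment:
    score: float                      # model output, e.g. 0.0-1.0
    contributing_factors: list[str]   # transparent reasoning shown to the professional
    model_version: str

@dataclass
class ProfessionalDecision:
    assessment: RiskAssessment
    accepted: bool                          # the professional may override the model
    override_reason: Optional[str] = None   # contextual factors the model missed
    reviewer: str = ""

def record_decision(assessment: RiskAssessment, accepted: bool,
                    reviewer: str, override_reason: Optional[str] = None) -> ProfessionalDecision:
    """The system never finalises a risk rating on its own: a named professional
    must accept or override it, and overrides must be explained."""
    if not accepted and not override_reason:
        raise ValueError("An override must document the contextual factors relied on.")
    return ProfessionalDecision(assessment, accepted, override_reason, reviewer)
```

Requiring an explained override is deliberate: it keeps the professional’s contextual knowledge on the record rather than letting the score stand in for it.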

2. Design for Inclusivity from the Start

AI systems must explicitly account for diversity in how family violence manifests across different communities:

  • Include diverse data sources and perspectives in development
  • Build systems capable of recognising cultural variations in disclosure patterns
  • Ensure technology respects various epistemologies, including Indigenous perspectives

3. Maintain Robust Accountability

Implementation frameworks must preserve professional autonomy and expertise:

  • Ensure adequate resourcing for human assessment alongside technological tools
  • Create clear guidelines for when algorithmic recommendations should be questioned
  • Maintain transparent review processes to identify and address algorithmic bias (one simple form of such a review is sketched below)
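
As one concrete example of that last point, a transparent review can be as simple as regularly comparing error rates across groups, the kind of disparity the VioGén research surfaced. The sketch below assumes recorded predictions and outcomes with a group attribute; the field names and sample data are hypothetical.

```python
# Illustrative sketch of one review step: comparing false-negative rates
# (dangerous cases rated low risk) across groups. Field names and sample
# data are assumptions; real audits would use governed case records.
from collections import defaultdict

def false_negative_rates(cases):
    """cases: dicts with 'group', 'predicted_low_risk' (bool)
    and 'serious_harm_occurred' (bool)."""
    harmed = defaultdict(int)
    missed = defaultdict(int)
    for c in cases:
        if c["serious_harm_occurred"]:
            harmed[c["group"]] += 1
            if c["predicted_low_risk"]:
                missed[c["group"]] += 1
    return {g: missed[g] / harmed[g] for g in harmed if harmed[g]}

# Hypothetical example: a much higher miss rate for one group is exactly the
# kind of disparity (as in the VioGén findings above) a review should surface.
sample = [
    {"group": "born overseas", "predicted_low_risk": True, "serious_harm_occurred": True},
    {"group": "born overseas", "predicted_low_risk": False, "serious_harm_occurred": True},
    {"group": "born locally", "predicted_low_risk": False, "serious_harm_occurred": True},
]
print(false_negative_rates(sample))  # e.g. {'born overseas': 0.5, 'born locally': 0.0}
```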

Victoria’s Balanced Approach

In Victoria and across Australia, there is encouraging evidence of a balanced approach to AI in legal contexts. While embracing technological advancements, Victorian courts have shown appropriate caution around AI use in evidence and maintained strict oversight to ensure the integrity of legal proceedings.

This approach—maintaining human oversight while allowing limited AI use in lower-risk contexts—aligns with what research suggests is crucial for successful integration: preserving professional judgment and accountability, particularly in cases involving vulnerable individuals.

The Path Forward

As we navigate the next wave of technological transformation in legal practice, we face a critical choice. We can allow AI to become a “black box of justice” that undermines transparency and human agency, or we can harness its potential while maintaining the essential human elements that make our legal system work.

Success will require not just technological sophistication but careful attention to institutional dynamics, professional practice patterns, and the complex social contexts in which these technologies operate. Most critically, it demands recognition that in high-stakes human service contexts, technology must serve human needs and judgment rather than constrain them.

The AI paradox in law is that the very tools promising to make our systems more efficient also risk making them less just. By centering human dignity and professional judgment as we develop these systems, we can navigate between the promise and the peril to create a future where technology truly serves justice.


Dr. Alexia Maddox will be presenting on “The AI Paradox in Law: When Smart Systems Meet Human Stakes – Navigating the Promise and Perils of Legal AI through 2030” at the upcoming 2030: The Future of Technology & the Legal Industry Forum on March 19, 2025, at the Grand Hyatt Melbourne.
