Justice at Scale: Ethical Pitfalls of Using AI in Legal Decision-Making
Details
Can we trust an algorithm to weigh freedom, risk, and justice? From predictive policing and risk assessments in bail hearings to AI-drafted contracts, machine learning is reshaping our courts and raising tough questions about fairness, transparency, and accountability.
Join us for a lively, accessible exploration of:
- Bias baked in: How do historical crime data and skewed datasets warp AI judgments?
- Opacity vs. due process: What does it mean when a “black-box” model influences your fate?
- Who’s responsible: When AI gets it wrong, do judges, developers, or vendors bear the ethical (or legal) liability?
- Safeguards & oversight: Which policies and technical audits can help ensure AI serves justice, not injustice?
Whether you’re a lawyer curious about tech, a developer exploring legal AI, a policy wonk, or an engaged citizen, you’ll find plenty to debate. Bring a case study you’ve read, a question you’ve pondered, or simply your passion for fair systems.
Why Attend?
- Network with local technologists, legal professionals, and ethics thinkers
- Dive into real-world examples and the latest research
- Walk away with practical insights and further reading to help shape responsible AI in law
