
Tech Ethics Bristol Event #3: AI Fairness and Explainability

Hosted By
Karin R.

Details


We are excited to announce our third Tech Ethics Bristol lunchtime event.
Join us on Friday 11th June to explore "AI Fairness and Explainability".

We will be hosting two online sessions featuring three speakers, each an expert in their field.

Session 1: "Fairness considerations in Machine Learning"

Machine Learning is increasingly pervasive in our day-to-day lives, whether it is simply suggesting your weekly Spotify mix or provisioning medical care. Ensuring ML-based algorithms are fair and unbiased with respect to certain sensitive variables is an essential consideration in the development and deployment of such products. In this talk, we will discuss the different ways in which we can begin to qualitatively and quantitatively understand and measure bias, and some of the ethical considerations around developing such life-affecting technologies. We will show how some very commonly used bias mitigation techniques that ignore cause-and-effect relationships in the data can actually increase bias in hidden ways. Finally, we will close by discussing some ways forward.
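To give a flavour of what "quantitatively measuring bias" can mean, here is a minimal sketch of one common fairness metric, the demographic parity difference (the gap in positive-prediction rates between groups of a sensitive variable). This is just an illustrative example; the talk may cover different metrics, and the function name and data are made up for this sketch.

```python
def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: list of sensitive-variable labels (e.g. "A"/"B"), one per prediction
    """
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)


# Hypothetical example: group A receives positive predictions 75% of the
# time, group B only 25% of the time.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 would indicate the model assigns positive outcomes at equal rates across groups; as the abstract notes, naively optimising such a metric without considering cause and effect can itself introduce hidden bias.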

Speakers:
Chris Lucas, Senior Research Engineer at Babylon Health
Sina Salek, Data Scientist at Axiom Data.

Session 2:

Ethical AI in Context: Explainability as a Relational Practice

Explainable AI (XAI) is increasingly positioned as a technical solution to a variety of ethical challenges of automated decision making – from identifying data bias to enhancing trust and complying with regulation. In contrast, our case study at an insurance company shows that XAI goes beyond models and data: explanations were generated by a variety of actors within and beyond the technical teams, and different actors held different knowledge and expectations of what needed explaining and why. We argue for the need to widen the horizon of explainable AI from normative principles and technical solutions to social practices that take into account wider organizational and community contexts. An expanded definition of XAI will allow us to employ participatory approaches that integrate the lived experiences of the people subject to automated decision making, and facilitate richer and more inclusive discussions on what makes AI fair and ethical.

Speaker:

Marisela Gutierrez is a Senior Research Associate at the Bristol Digital Futures Institute of the University of Bristol.

Our schedule is as follows:

🔓 12.20pm - CrowdCast room opens

👋 12.30pm - Event starts with a welcome from your meet-up organisers, Karin & Alex. We will give you an overview of Tech Ethics Bristol and our mission.

📒 12.35pm - Session 1: "Fairness considerations in Machine Learning"

💚 1.05pm - Session 2: Ethical AI in Context: Explainability as a Relational Practice

🗓️ 1.30pm - Round-up, other community notices and next event announcement.

Crowdcast Event link to attend here - https://www.crowdcast.io/e/tech-ethics-bristol-2

The event will be recorded and will be available along with slides to view shortly afterwards.

Brought to you by:
Collective Intelligence - https://www.collective-intelligence.co.uk/

Sponsored by:
ADLIB recruitment - www.adlib-recruitment.co.uk
