
What we’re about
Regensburg is full of data science expertise, both in industry and in academia. Our aim is to bring together people who share an interest in this area and to offer an environment for networking in an informal setting. We continue to host speakers from a range of backgrounds offering insights into a wide spectrum of data science: from enterprise search to music recommendation, from automatic fact-checking to avoiding harms and biases, from generative approaches to automatic question answering. And that is not even everything. Other topics include large language models, industry use cases of natural language processing, and the list goes on and on ... We have speakers from industry (e.g. Bloomberg, Netflix, Amazon, Spotify, Deloitte ...) and universities (CMU, Queen Mary, Essex, Regensburg ...). Want to present? Drop us a message. For more details on the organising team, see: https://ai.ur.de/
Upcoming events (1)
Gianluca Demartini (U Queensland): Bias in Humans and AI – What To Do About It?
University of Regensburg, Regensburg
Ladies and gentlemen,
It gives us great pleasure to announce our next Data Science @ Regensburg Meetup. You cannot believe how happy we are to get Gianluca Demartini into town. He is a world authority on bias and bias management (if you have not read any of his work, do start with his Communications of the ACM opinion piece). Note that he will only be in town for a few hours, so make sure you pop by, as his next gig might be in Boston, Jakarta or London.
Looking forward to seeing you all,
Udo, David & Bernd

Speaker:
Gianluca Demartini (University of Queensland)

Title:
Bias in Humans and AI – What To Do About It?

Abstract:
The rise in popularity of general-purpose large language models (LLMs) raises questions around bias and fairness. Do these models reflect the biases and stereotypes present in the data they have been pre-trained on? What should we do about that? In this talk, reviewing recent research we conducted at The University of Queensland, we will discuss issues of bias in human data, using gender bias in Wikipedia as an example, and issues of bias in AI, using political bias in LLMs as an example. We will then discuss how to explore and manage such bias in data and in LLMs, how these models can be used for sensitive tasks, and how users tend to trust and over-rely on AI agents, even for high-risk tasks.

Bio:
Gianluca Demartini is a Professor in Data Science and an ARC Future Fellow at the School of Electrical Engineering and Computer Science at the University of Queensland, Australia. His main research interests in Data Science include Information Retrieval, the Semantic Web, and Responsible Artificial Intelligence. His research is currently funded by the Australian Research Council, the Swiss National Science Foundation, Meta, Google, and the Wikimedia Foundation. He has received multiple Best Paper awards at Artificial Intelligence and Information Retrieval conferences, and he has published more than 200 scientific papers at major computer science venues such as the ACM Web Conference, ACM SIGIR, the VLDB Journal, ISWC, and ACM CHI.