The first SEA of the year hosts two amazing speakers, followed by drinks!
**17:00 - Alexandra Olteanu (Microsoft Research, New York)**
Discovering Problematic Suggestions in Predictive Text Applications and Beyond
The increasing interest in fairness, accountability, and transparency in computational systems has led to a growing set of tools for measuring and correcting the biases and problematic scenarios we already know to look for. However, techniques for preempting future issues that may not yet be on product teams' radar are far less well developed or understood. Addressing this gap requires deep dives into specific application areas.
I will focus on a commonly encountered prototypical application scenario: given a set of actions undertaken by a user, a system generates suggestions for the next user action to help them complete their task efficiently. Common applications include predictive text systems that aim to alleviate writing burden through phrase or full-sentence suggestions, such as web search and email response composition suggestions. While there is evidence that users want and benefit from such assistive systems, proactively identifying problematic scenarios has remained a lingering issue, in part due to the lack of metrics, processes, and frameworks that can aid in their discovery and quantification. I will discuss the range of errors a system can make in open-domain scenarios like search and writing tasks, highlighting application-specific examples, frameworks, and steps that system designers can take to reduce design-induced blind spots and preempt problematic scenarios.
Bio: Alexandra Olteanu is an interdisciplinary researcher whose work examines how data, methodological, and ethical limitations delimit what we can learn from online social traces, and how we can make the systems that leverage such data safer, fairer, and generally less biased. The problems she tackles are often motivated by existing societal challenges such as hate speech, racial discrimination, climate change, and disaster relief. Alexandra is currently affiliated with the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research (NYC & Montreal). Prior to joining the FATE group, she was a Social Good Fellow at the IBM T.J. Watson Research Center, NY. Her work has won two best paper awards (WISE 2014, EuroSys SNS workshop 2012) and has been featured in the UN OCHA's "World Humanitarian Data and Trends" and in popular media outlets including Forbes, The Washington Post, VentureBeat, and ZDNet. Alexandra holds a PhD in Computer Science from École Polytechnique Fédérale de Lausanne (EPFL), Switzerland.
**17:30 - Svitlana Vakulenko (ILPS, Amsterdam)**
Message Passing for Complex Question Answering over Knowledge Graphs
Question answering over knowledge graphs (KGQA) has evolved from simple single-fact questions to complex questions that require graph traversal and aggregation. We propose a novel approach for complex KGQA that uses unsupervised message passing, which propagates confidence scores obtained by parsing an input question and matching terms in the knowledge graph to a set of possible answers. First, we identify entity, relationship, and class names mentioned in a natural language question, and map these to their counterparts in the graph. Then, the confidence scores of these mappings propagate through the graph structure to locate the answer entities. Finally, these are aggregated depending on the identified question type. This approach can be efficiently implemented as a series of sparse matrix multiplications mimicking joins over small local subgraphs.
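The propagation step described in the abstract can be illustrated with a toy sketch: confidence scores over entities are propagated along a matched relation with a single sparse matrix-vector product, then the top-scoring entity is taken as the answer. This is a minimal illustration, not the authors' implementation; the tiny graph, entity indices, and confidence values below are all hypothetical.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical toy knowledge graph with 4 entities (indices 0..3).
# One sparse adjacency matrix per relation type; here "capital_of"
# has edges 0 -> 1 and 2 -> 3.
n = 4
capital_of = csr_matrix(([1.0, 1.0], ([0, 2], [1, 3])), shape=(n, n))

# Confidence scores obtained by matching question terms to entities,
# e.g. the question mentions entity 0 with confidence 0.9 (assumed value).
entity_conf = np.zeros(n)
entity_conf[0] = 0.9

# Confidence of matching the question's relation phrase to "capital_of".
rel_conf = 0.8

# Message passing: propagate entity confidence along the matched relation
# with a sparse matrix-vector product (a join over the local subgraph).
# (A.T @ conf)[j] sums confidence flowing into entity j over edges i -> j.
answer_scores = rel_conf * (capital_of.T @ entity_conf)

# Aggregate: for this simple question type, take the top-scoring entity.
answer = int(np.argmax(answer_scores))
print(answer, answer_scores[answer])  # entity 1 with score 0.72
```

In the full approach, one such sparse product per matched relation runs over small local subgraphs, and the final aggregation depends on the identified question type (e.g. counting versus retrieving entities).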
Bio: Svitlana Vakulenko is a postdoctoral researcher in the Information and Language Processing Systems (ILPS) group at the Informatics Institute of the University of Amsterdam. Her primary research interests are in Natural Language Understanding applied to Information Retrieval, in particular Conversational Search.