
Collaborative Searching and Generative Modeling

Hosted by Pengjie R.

Details

This Friday we'll have two talks followed by drinks.

16:00 Claudia Hauff (TU Delft) Collaborative Search Revisited: System Design, Group Size Effects and Search as Learning

16:30 Rianne van den Berg (Google Brain) Sinkhorn Autoencoders

========================

16:00 Claudia Hauff (TU Delft) Collaborative Search Revisited: System Design, Group Size Effects and Search as Learning

While today's web search engines are designed for single-user search,
research over the years has shown that complex information
needs -- which are exploratory, open-ended and multi-faceted -- can be
addressed more efficiently and effectively by searching in
collaboration (typically in groups of two). Collaborative search
(and sensemaking) research has investigated techniques, algorithms and
interface affordances to gain insights and improve the collaborative
search process. In this talk I will cover our recent research efforts
in collaborative search, specifically (1) the design and
implementation of an open-source collaborative search engine called
SearchX, (2) the impact of group size on search effectiveness and (3)
the learning gains users achieve in a collaborative search setting
compared to single-user search variants.

========================

16:30 Rianne van den Berg (Google Brain) Sinkhorn Autoencoders

In this talk I will discuss our recent paper on generative modeling with optimal transport. Optimal transport offers an alternative to maximum likelihood for learning generative autoencoding models. We show that minimizing the p-Wasserstein distance between the generator and the true data distribution is equivalent to the unconstrained min-min optimization of the p-Wasserstein distance between the encoder aggregated posterior and the prior in latent space, plus a reconstruction error. Moreover, we prove that optimizing the encoder over any class of universal approximators, such as deterministic neural networks, is enough to come arbitrarily close to the optimum. We therefore advertise this framework, which holds for any metric space and prior, as a sweet spot of current generative autoencoding objectives. We then introduce the Sinkhorn autoencoder (SAE), which approximates and minimizes the p-Wasserstein distance in latent space via backpropagation through the Sinkhorn algorithm. SAE works directly on samples, i.e. it models the aggregated posterior as an implicit distribution, with no need for a reparameterization trick for gradient estimation. SAE is thus able to work with different metric spaces and priors with minimal adaptations. We demonstrate the flexibility of SAE on latent spaces with different geometries and priors and compare with other methods on benchmark data sets.
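
For intuition, below is a minimal NumPy sketch of the entropy-regularized Sinkhorn iterations the abstract refers to, estimating a p-Wasserstein distance between two sets of samples. This is not the authors' implementation: the regularization strength eps, the iteration count, and the toy z_enc/z_pri arrays are illustrative assumptions, and in the SAE setting the same iterations would run inside an autodiff framework so that gradients flow back to the encoder.

    import numpy as np

    def sinkhorn_distance(x, y, p=2, eps=1.0, n_iters=100):
        # Entropy-regularized p-Wasserstein distance between two empirical
        # distributions given as sample arrays x (n, d) and y (m, d).
        n, m = len(x), len(y)
        # Pairwise transport costs C[i, j] = ||x_i - y_j||_p^p.
        C = np.sum(np.abs(x[:, None, :] - y[None, :, :]) ** p, axis=-1)
        K = np.exp(-C / eps)                    # Gibbs kernel
        a = np.full(n, 1.0 / n)                 # uniform weights on samples
        b = np.full(m, 1.0 / m)
        u, v = np.ones(n), np.ones(m)
        for _ in range(n_iters):                # Sinkhorn fixed-point updates
            u = a / (K @ v)
            v = b / (K.T @ u)
        P = u[:, None] * K * v[None, :]         # approximate transport plan
        return np.sum(P * C) ** (1.0 / p)       # approximate W_p

    # Example: compare encoder outputs against samples from a Gaussian prior.
    rng = np.random.default_rng(0)
    z_enc = rng.normal(0.5, 1.0, size=(64, 8))  # stand-in for aggregated posterior samples
    z_pri = rng.normal(0.0, 1.0, size=(64, 8))  # samples from the prior
    print(sinkhorn_distance(z_enc, z_pri))

Note that these plain updates can underflow for small eps; practical implementations typically use log-domain updates for numerical stability.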

SEA: Search Engines Amsterdam
Room C1.112
Science Park 904 · Amsterdam