Details

Exposing meaningful interactive controls for generative and creative tasks with machine learning approaches is challenging: 1) supervised approaches require explicit labels for the control of interest, which can be hard or expensive to collect, or even difficult to define (like 'style'); 2) unsupervised or weakly-supervised approaches avoid the need to collect labels, but this makes the learning problem more difficult. We will present methods that structure the learning problem to expose meaningful controls, and demonstrate this across two domains: handwriting - a deeply human and personal form of expression - represented as stroke sequences; and images of objects, via implicit and explicit 2D and 3D representation learning, moving us closer to 'in the wild' reconstruction. Finally, we will discuss how self-supervision can be a key component in modeling and structuring these problems to learn useful controls.

The talk is based on the papers:

Generating Handwriting via Decoupled Style Descriptors (ECCV 2020)
Project page: http://dsd.cs.brown.edu/
git: https://github.com/brownvc/decoupled-style-descriptors

Unsupervised Attention-guided Image to Image Translation (NeurIPS 2018)
arxiv: https://arxiv.org/abs/1806.02311
git: https://github.com/AlamiMejjati/Unsupervised-Attention-guided-Image-to-Image-Translation

Generating Object Stamps
arxiv: https://arxiv.org/abs/2001.02595

Presenter bios:

Dr. James Tompkin is an assistant professor of Computer Science at Brown University. His research at the intersection of computer vision, computer graphics, and human-computer interaction helps develop new visual computing tools and experiences. His doctoral work at University College London on large-scale video processing and exploration techniques led to creative exhibition work in the Museum of the Moving Image in New York City. Postdoctoral work at Max-Planck-Institute for Informatics and Harvard University helped create new methods to edit content within images and videos. Recent research has developed new machine learning techniques for view synthesis for VR, image editing and generation, and style and content separation.
His web page: https://jamestompkin.com/

Atsunobu Kotani (Atsu) recently graduated from Brown University and will be starting his PhD in EECS at UC Berkeley this fall.
He is interested in style learning, particularly in artistic domains such as painting, sculpture, and calligraphy. He also works with robots to investigate collaborative art production as well as art conservation.
His web page: http://www.atsunobukotani.com/

** ** Please register through the Zoom link right after your RSVP. We will send the links to the Zoom event via email only to those who have registered through Zoom. ** **

-------------------------
Find us at:

All lectures are uploaded to our YouTube channel ➜ https://www.youtube.com/channel/UCHObHaxTXKFyI_EI8HiQ5xw

Newsletter for updates about more events ➜ http://eepurl.com/gJ1t-D

Sub-reddit for discussions ➜ https://www.reddit.com/r/2D3DAI/

Discord server for, well, discord ➜ https://discord.gg/MZuWSjF

Blog ➜ https://2d3d.ai

AI Consultancy ➜ https://abelians.com
