The Right to Causal eXplanations, and Tensor Factor Analysis
The Right to Causal eXplanations, and Tensor Factor Analysis by M. Alex O. Vasilescu, PhD.
Developing causal explanations for correct results, or for failures, from mathematical equations and data is important for building trustworthy artificial intelligence and retaining public trust. Causal explanations are germane to the "right to an explanation" statute, i.e., to data-driven decisions such as those that rely on images. Computer graphics and computer vision problems, also known as forward and inverse imaging problems, have been cast as causal inference questions consistent with Donald Rubin's quantitative definition of causality, where "A causes B" means "the effect of A is B," a measurable and experimentally repeatable quantity. Computer graphics may be viewed as addressing questions analogous to forward causal inference, which asks the "what if" question and estimates the change in effects given a delta change in a causal factor. Computer vision may be viewed as addressing questions analogous to inverse causal inference, which asks the "why" question. We define inverse causal inference as the estimation of causes given an estimated forward causal model and a set of observations that constrain the solution set. This approach was demonstrated in the context of face verification by computing a set of causal explanations that employ a quantifiable definition of causality. Tensor factor analysis is a data-agnostic framework and is well suited to data-starved domains.
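To make the idea of tensor factor analysis concrete, here is a minimal sketch of a higher-order SVD (HOSVD), the multilinear decomposition underlying work such as TensorFaces: a data tensor is factored into one orthonormal factor matrix per mode (e.g., people, views, pixels) plus a core tensor that governs their interaction. The dimensions, variable names, and the "people x views x pixels" interpretation below are illustrative assumptions, not the speaker's actual data or code.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product: multiply matrix M into tensor T along axis `mode`."""
    rest = [s for i, s in enumerate(T.shape) if i != mode]
    Tm = (M @ unfold(T, mode)).reshape([M.shape[0]] + rest)
    return np.moveaxis(Tm, 0, mode)

def hosvd(T):
    """HOSVD: one orthonormal factor matrix per mode, plus a core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
         for n in range(T.ndim)]
    core = T
    for n, Un in enumerate(U):
        core = mode_dot(core, Un.T, n)  # project onto each mode's basis
    return core, U

# Synthetic "people x views x pixels" data tensor (sizes are hypothetical).
rng = np.random.default_rng(0)
D = rng.standard_normal((5, 4, 20))

core, U = hosvd(D)

# Reconstruct: D ~= core x_1 U[0] x_2 U[1] x_3 U[2]
R = core
for n, Un in enumerate(U):
    R = mode_dot(R, Un, n)
print(np.allclose(R, D))  # full-rank HOSVD reconstructs exactly -> True
```

Each factor matrix spans the variation attributable to one causal factor of data formation; truncating them gives a compressed multilinear model, which is one reason such decompositions remain usable in data-starved settings.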
About the speaker:
M. Alex O. Vasilescu, PhD (www.cs.ucla.edu/~maov), received her education at the Massachusetts Institute of Technology and the University of Toronto. She was a research scientist at the MIT Media Lab and at New York University’s Courant Institute of Mathematical Sciences.
In the early 2000s, Dr. Vasilescu introduced the tensor algebraic framework to computer vision, computer graphics, and machine learning, generalizing concepts from linear algebra to tensor algebra. She addressed causal inference questions by framing computer graphics and computer vision as multilinear problems, yielding powerful statistical methods that demonstrably disentangle the causal factors of data formation. Causal inference in a tensor framework facilitates the analysis, recognition, synthesis, and interpretability of sensory data. The development of the tensor framework was spearheaded by seminal papers such as Human Motion Signatures (2001), TensorFaces (2002), TensorTextures (2003), MICA (2005), and Multilinear Projection for Recognition (2007, 2011).
Dr. Vasilescu’s work was featured on the cover of Computer World Canada (now IT World Canada) and in articles in the New York Times and the Washington Times, among other outlets. MIT's Technology Review named her a TR100 honoree, and the National Academy of Sciences awarded her, as a co-recipient, a Keck Futures Initiative grant.