Tsvi Achler: What is the brain doing different from machine learning algorithms?

Tsvi Achler has a unique background, focusing on the neural mechanisms of recognition from a multidisciplinary perspective. He has done extensive work in theory and simulations, human cognitive experiments, animal neurophysiology experiments, and clinical training. He also has an applied engineering background: he received bachelor's degrees in Electrical Engineering and Computer Science from UC Berkeley, earned advanced degrees in Neuroscience (PhD) and Medicine (MD) from the University of Illinois at Urbana-Champaign, and has worked as a postdoc in Computer Science, at Los Alamos National Labs, and at IBM Research. He now heads his own startup, Optimizing Mind, whose goal is to provide the next generation of machine learning algorithms.
In his own words, below is an abstract of the talk Tsvi will give on Dec 9:
"The origin of phenomena observed in brain studies such as oscillations and a speed-accuracy tradeoff remain unclear. It also remains unclear how the brain can be computationally flexible (quickly learn, modify, and use new patterns as it encounters them from the environment), and recall (reason with or describe recognizable patterns from memory). I study the brain from multidisciplinary perspectives looking for a single, compact network that can display these phenomena and perform flexible recognition.
Virtually all popular models of the brain and algorithms of machine learning remain “feedforward,” even though it has been clear since the early days that this may limit flexibility (and is not optimal for recall, symbolic reasoning, or analysis). Feedforward methods use optimized weights to perform recognition: “uniqueness information” is encoded into the weights based on frequency of occurrence in the training set, which requires optimizing the weights over the whole training set.
Instead, I suggest uniqueness is estimated during recognition, by performing optimization on the current pattern being recognized. This is NOT optimization to learn weights; it is optimization to perform recognition. Consequently, only simple Hebbian-like relational learning is required during learning, without any uniqueness information. The weights are no longer “feedforward,” but learning is more flexible and can be much faster (>>100x), especially for big data, since it does not require elaborate rehearsal. From a phenomenological perspective, the optimization during recognition displays general properties observed in brain and cognitive experiments, predicting oscillations, initial bursting with unrecognized patterns, and a speed-accuracy tradeoff.
I will compare computational and cognitive properties of both approaches and discuss the state of new research initiatives."
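
To make the contrast concrete before the talk, here is a toy sketch of the feedforward side. This is my own illustration, not Tsvi's code: the point is that the weights are optimized over the entire training set (which is where the frequency-based "uniqueness" information gets baked in), and recognition is then a single fixed pass.

import numpy as np

def train_feedforward(X, Y):
    # X: (n_samples, n_features) inputs; Y: (n_samples, n_classes) one-hot labels.
    # Least-squares fit minimizing ||X W - Y||^2 over ALL training data at once;
    # adding a new class later means re-optimizing over everything.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def recognize_feedforward(W, x):
    # Recognition is a single fixed pass; no optimization happens at test time.
    return x @ W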
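Continuing the same sketch, here is the alternative he describes: learning becomes a cheap Hebbian-like association (here, simply storing the new pattern), while the optimization moves to recognition time, iteratively solving for activations that explain the current input. The gradient-descent update below is my own stand-in, not Tsvi's actual dynamics.

def learn_hebbian(patterns, x):
    # Learning is simple association: store the new pattern. No rehearsal or
    # re-optimization over earlier data, so new patterns are added cheaply.
    patterns.append(np.asarray(x, dtype=float))
    return patterns

def recognize_by_optimization(patterns, x, steps=100, lr=0.1):
    # Optimization during recognition: adjust activations y so the stored
    # patterns reconstruct the input, minimizing ||x - P.T @ y||^2.
    P = np.stack(patterns)            # (n_patterns, n_features)
    y = np.zeros(len(patterns))
    for _ in range(steps):
        residual = x - P.T @ y        # part of the input still unexplained
        y += lr * (P @ residual)      # gradient step on reconstruction error
        y = np.maximum(y, 0.0)        # keep activations non-negative
    return y                          # activation per stored pattern

In this toy version, patterns that uniquely explain the input end up with high activations, even though no uniqueness information was ever stored in the weights.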
Really looking forward to Tsvi's talk! Please try to join us!
Cheers,
Dev
