Deep learning (DL) has had unprecedented success and has revolutionised artificial intelligence (AI). Despite this success, however, DL methods have a serious Achilles heel: they are universally unstable and thus exhibit highly non-human-like behaviour. This has serious consequences, and Science recently published a paper warning of the potentially fatal consequences. The question is: why do AI algorithms based on deep learning become unstable and perform so differently from humans? Current mathematical theory of neural networks cannot explain this. We will demonstrate the reason for this discrepancy: neural networks do not learn the structures that humans learn, but completely different structures. These different (false) structures correlate well with the original structures that humans learn, hence the success; however, they are completely unstable, yielding non-human performance.
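As a toy illustration of this kind of instability (a hypothetical sketch, not material from the talk): a classifier that places large weight on an incidental feature can have its output flipped by a tiny perturbation along that feature, even when the human-meaningful feature is unchanged.

```python
# Hypothetical sketch: a linear classifier whose decision relies on a
# direction with large weight can be flipped by a tiny perturbation
# along that direction -- the "false structure" correlates with the
# true label on clean data but is unstable.

def classify(w, x):
    """Return +1 or -1 according to the sign of the dot product w.x."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else -1

# The "meaningful" feature is x[0], but the classifier also puts
# large weight on an incidental coordinate x[1].
w = [1.0, 50.0]

x = [1.0, 0.0]        # clearly "positive" to a human (x[0] = 1)

# A perturbation of size 0.03 in the incidental coordinate flips the
# label, although the human-meaningful feature x[0] is unchanged.
x_adv = [1.0, -0.03]

print(classify(w, x), classify(w, x_adv))  # prints: 1 -1
```

The point of the sketch is that on unperturbed inputs the incidental coordinate is harmless (it is zero), so accuracy looks excellent; the instability only appears under small, targeted perturbations.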
The talk will be given by Dr. Anders Hansen (UiO/Cambridge). Dr. Anders Hansen is head of the group in Applied Functional and Harmonic Analysis within the Cambridge Centre of Analysis at DAMTP. He is also Professor II at the Institute of Mathematics, UiO.