Do We Need to Talk about AI Safety? - Jan Romportl
Not many people discussed AI Safety, say, 10 years ago. It was maybe Eliezer Yudkowsky and some people around the Future of Humanity Institute at Oxford University, such as Nick Bostrom or Anders Sandberg. Now, AI Safety is becoming yet another new sexy topic. This is partly because it attracts all the doomsday enthusiasts and many other people who feel eligible to edify AI discussions but don’t know any ML, maths, computer science or the like. But more importantly, recent advances in ML, combined with huge computational power and ubiquitous data streams, have brought about results that have sparked AI Safety interest even in the core AI research community. There are indeed obvious narrow AI safety issues, such as hacking CNNs with adversarial examples, or trolley-problem-style ethical issues concerning autonomous cars. There are also more complex problems covering autonomous warfare and our (meaning us as AI/ML researchers and developers) involvement in it, as nicely illustrated by several Google employees who recently quit their jobs to protest Project Maven. But on top of all these, some people from the AI community (though definitely not all) see the global existential risks posed by AGI and Superintelligence. My talk will try to briefly map this AI Safety landscape, address the question of whether AGI risks are real or just chimerical, and then, most importantly, draw the audience into a discussion about our responsibility as the ML community.
Jan Romportl is Chief Science Officer at AI Startup Incubator and Data Science Advisor at O2 Czech Republic. Before joining the startup scene, he was Chief Data Scientist at O2, where he helped build a data science team strongly focused on machine learning from telco big data. He also has more than 10 years of academic research and teaching experience in AI, man-machine interaction, speech technologies, cognitive science and philosophy. Jan also cooperates with the National Institute of Mental Health and the Center for Theoretical Study at Charles University; he focuses on AI safety issues and organises Prague’s AI Safety Meetup group.
- Networking in Bitcoin Coffee
Machine Learning Meetups (MLMU) is an independent platform for people interested in Machine Learning, Information Retrieval, Natural Language Processing, Computer Vision, Pattern Recognition, Data Journalism, Artificial Intelligence, Agent Systems and all related topics. MLMU is a regular community meeting usually consisting of a talk, a discussion and subsequent networking. At the end of 2016, MLMU also spread to Brno and Bratislava. Later on, Košice joined the MLMU family.
Support Machine Learning Meetups by a donation!
* all donations are used to cover the expenses of MLMU