René Peinl: Large language models on the way to general intelligence?


Dear all,
ChatGPT? GPT-4? Sydney AI? AGI? Heller Hans? Have you lost track? Well, then it's time to get together for another Data Science Meetup.
This time we will again welcome an alumnus who will be able to tell us what a world beyond the University of Regensburg might look like. René has been working with language models for many years and he will give us his take on recent developments and the bigger picture (an area very much in flux!) To demonstrate the power of such models he simply needed to take the (German) abstract of his talk and feed in through DeepL ...
Looking forward to welcoming you soon!
Udo, David and Bernd
P.S.: Last time it worked really well to announce the talks well ahead of time, so let's do the same again ...
P.P.S.: We plan to organise this as a hybrid event if there is enough interest.
P.P.P.S: Another date for your diary at the same location two weeks later: https://www.wids-regensburg.de
P.P.P.P.S: There might even be another Meetup on 17 May, stay tuned ...
Details:
Speaker:
René Peinl (Hof University)
Title:
Large language models on the way to general intelligence?
Abstract:
Machine understanding of text has been getting better and better in recent years. From simple tasks like marking the correct answer to a question in a short given text, or classifying a product review as positive or negative, the models have been trained for ever more abstract and complex capabilities. Only four years passed between BERT and ChatGPT, and great progress has also been made in image and audio understanding, so that in the near future multimodal models such as Microsoft's Kosmos-1 or Google's PaLM-E will replace the specialists that can only process text. What "ingredients" are still missing for general AI, and are we as a society already prepared for it? How can we ensure "alignment", i.e. the congruence of AI with human ideals and goals, if we as humanity don't even agree among ourselves?
In this talk, you will learn more about the short but impressive history of AI development from Transformers and BERT to LaMDA and ChatGPT, so that you can judge for yourself how intelligently language models behave and find out how you can use these models in your own company (from the cloud or locally on your own computer).
Short Bio:
René Peinl (https://www.linkedin.com/in/renepeinl/) studied business information systems at the University of Regensburg until 2000, before gaining international experience as a technical consultant in the EMEA eProcurement Competence Center of Compaq Computer. He earned a PhD in knowledge management at the Martin Luther University of Halle-Wittenberg and afterwards led a worldwide rollout of Microsoft SharePoint at Rehau AG. He then joined IPI GmbH, a small SharePoint consultancy, as a senior consultant for systems integration and knowledge management, before becoming Professor of Web Architecture at Hof University in 2010, where he started research on systems integration at the associated Institute of Information Systems (iisys). In 2018, he started the project "smart speaker without cloud" and shifted his research interests from enterprise information systems, team collaboration and IoT towards deep learning for voice assistants, i.e. speech recognition, speech synthesis and NLU. Since 2021, he has held one of the 100 AI professorships funded by the Bavarian state, with the teaching area "resource-efficient AI for natural language understanding". Since 2020, he has been the scientific director of the Institute of Information Systems at Hof University.