Munich🥨NLP - Pre ACL Event; Accepted Paper Discussion [ONLINE]

Hosted By
Munich N. and Daryna D.

Details

The Pre ACL Event is dedicated to presenting and discussing research in Natural Language Processing (NLP) ahead of the main Association for Computational Linguistics (ACL) conference. Here is a summary of the event:

1. Objective: The primary goal of this event is to highlight research papers that have been accepted at ACL. It offers a stage for researchers to present their work and engage in discussions about the latest progress in the field of NLP.
2. Moderator: Daryna will moderate the session, guiding the presentations and discussions throughout the event.
3. Presenters and Topics:

  • Dr. Michael A. Hedderich: Focuses on human-centric AI and NLP, with a particular emphasis on understanding and controlling generative AI models and supporting the users who work with them.
  • Dr. Zhijing Jin: Her work revolves around causal inference in NLP and the safety of AI systems. Her presentations cover topics such as content moderation on social media and the interplay between reasoning and memorization in language models.
  • Dr. Lukas Edman: Specializes in character-level NLP and pretraining for low-resource languages. His recent work involves benchmarking how well large language models (LLMs) understand their own tokens and addressing positional effects in these models.

4. Format: The event is structured around presentations of research papers, followed by interactive discussions. The topics covered range from generative AI models and content moderation to understanding language models at a character level.

5. Audience: The event is aimed at researchers, practitioners, and enthusiasts in the field of NLP and AI. It is particularly appealing to those interested in the latest research and developments in these areas.

Papers discussed:

  • What's the Difference? Supporting Users in Identifying the Effects of Prompt and Model Changes Through Token Patterns
    Michael A. Hedderich, Anyi Wang, Raoyuan Zhao, Florian Eichin, Jonas Fischer, Barbara Plank
    https://arxiv.org/abs/2504.15815
  • Revealing Hidden Mechanisms of Cross-Country Content Moderation with Natural Language Processing
    Neemesh Yadav, Jiarui Liu, Francesco Ortu, Roya Ensafi, Zhijing Jin, Rada Mihalcea
    https://arxiv.org/abs/2503.05280
  • The Reasoning-Memorization Interplay in Language Models Is Mediated by a Single Direction
    Yihuai Hong, Dian Zhou, Meng Cao, Lei Yu, Zhijing Jin
    https://arxiv.org/abs/2503.23084
  • EXECUTE: A Multilingual Benchmark for LLM Token Understanding
    Lukas Edman, Helmut Schmid, Alexander Fraser
    https://arxiv.org/abs/2505.17784
  • Extending Context Window of Large Language Models via Positional Interpolation
    Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian
    https://arxiv.org/abs/2306.15595

Dr. Michael A. Hedderich
Michael A. Hedderich is a Junior Research Group Leader at LMU, associated with the MaiNLP group and the Munich Center for Machine Learning. He is also a member of the ELLIS Society. His research spans the intersection of machine learning, NLP, and human-computer interaction, focusing on human supervision in AI systems and controlling global model behaviors. Michael is passionate about building bridges from AI to other fields and has interests ranging from archeology to medical research. His work includes developing technical and foundational methods, human-centered approaches, and interdisciplinary applications.

Dr. Zhijing Jin
Zhijing Jin is an incoming Assistant Professor at the University of Toronto and currently a postdoc at the Max Planck Institute. She will also be a faculty member at the Vector Institute and an ELLIS advisor. Her research areas include Large Language Models (LLMs), NLP, and Causal Inference, with a particular focus on causal reasoning with LLMs, multi-agent LLMs, and moral reasoning in LLMs. Zhijing's work also explores mechanistic interpretability and adversarial robustness, contributing to AI Safety and AI for Science. She is a recipient of Rising Star awards, Best Paper Awards, and several fellowships from organizations like Open Philanthropy and the Future of Life Institute.


Dr. Lukas Edman
Lukas Edman is currently a post-doc at TUM, focusing on character-level NLP, low-resource pretraining, and machine translation. His recent work involves benchmarking LLMs’ understanding of their tokens and developing efficiency methods for training large models. Lukas is interested in the similarities and differences between machine learning and human learning, and he aims to incorporate brain-like learning strategies into ML models. He has participated in the BabyLM Challenge and continues to explore ways to make training large models feasible for broader audiences beyond big tech companies.


Dr. Daryna Dementieva
Daryna Dementieva is a postdoctoral researcher in Alexander Fraser's NLP Research Group at the Technical University of Munich. She earned her PhD from the Skolkovo Institute of Science and Technology. Her research focuses on applying Large Language Models (LLMs) for social good (NLP4SG), as well as interpretability and efficiency in NLP solutions. Key topics include fake news detection using multilingual evidence, text style transfer for detoxification, and explainable AI for NLP. Daryna is also dedicated to advancing Ukrainian NLP to combat fake news and hate speech in the Ukrainian language context.

Online event
FREE