News

  • 25 Apr 2025: The program for the workshop is now available.
  • 10 Apr 2025: The deadline for the camera-ready version of accepted papers has been extended to 1 May 2025
  • 28 Mar 2025: We are excited to announce Sandip Sen as a keynote speaker for ALA 2025.
  • 12 Mar 2025: We are excited to announce Eugene Vinitsky as a keynote speaker for ALA 2025.
  • 26 Feb 2025: We are excited to announce Roxana Rădulescu as a keynote speaker for ALA 2025.
  • 24 Feb 2025: The ALA 2025 submission deadline has been further extended to 1 March 2025, 23:59 AOE.
  • 30 Jan 2025: The ALA 2025 submission deadline has been extended to 25 Feb 2025, 23:59 AOE.
  • 24 Jan 2025: Added the OpenReview link to the submission details!
  • 6 Dec 2024: ALA 2025 Website goes live!

ALA 2025 - Workshop at AAMAS 2025

Adaptive and Learning Agents (ALA) encompasses diverse fields such as Computer Science, Software Engineering, and Biology, as well as the Cognitive and Social Sciences. The ALA workshop focuses on agents and multi-agent systems which employ learning or adaptation.

The goal of this workshop is to increase awareness of and interest in adaptive agent research, encourage collaboration and give a representative overview of current research in the area of adaptive and learning agents and multi-agent systems. It aims at bringing together not only scientists from different areas of computer science (e.g. agent architectures, reinforcement learning, evolutionary algorithms) but also from different fields studying similar concepts (e.g. game theory, bio-inspired control, mechanism design).

The workshop will serve as an inclusive forum for the discussion of ongoing or completed work covering both theoretical and practical aspects of adaptive and learning agents and multi-agent systems.

This workshop will focus on all aspects of adaptive and learning agents and multi-agent systems, with a particular emphasis on how to modify established learning techniques and/or create new learning paradigms to address the many challenges presented by complex real-world problems. The topics of interest include but are not limited to:

  • Novel combinations of reinforcement and supervised learning approaches
  • Integrated learning approaches using reasoning modules like negotiation, trust, coordination, etc.
  • Supervised and semi-supervised multi-agent learning
  • Reinforcement learning (single- and multi-agent)
  • Novel deep learning approaches for adaptive single- and multi-agent systems
  • Multi-objective optimisation in single- and multi-agent systems
  • Planning (single- and multi-agent)
  • Reasoning (single- and multi-agent)
  • Distributed learning
  • Adaptation and learning in dynamic environments
  • Evolution of agents in complex environments
  • Co-evolution of agents in a multi-agent setting
  • Cooperative exploration and learning to cooperate and collaborate
  • Learning trust and reputation
  • Communication restrictions and their impact on multi-agent coordination
  • Design of reward structure and fitness measures for coordination
  • Scaling learning techniques to large systems of learning and adaptive agents
  • Emergent behaviour in adaptive multi-agent systems
  • Game theoretical analysis of adaptive multi-agent systems
  • Neuro-control in multi-agent systems
  • Bio-inspired multi-agent systems
  • Human-in-the-loop learning systems
  • Applications of adaptive and learning agents and multi-agent systems to real world complex systems

Important Dates

  • Submission Deadline: 1 March 2025, 23:59 AOE (extended from the earlier deadlines of 4 February and 25 February 2025)
  • Notification of acceptance: 2 April 2025 (extended from 10 March 2025)
  • Camera-ready copies: 1 May 2025 (extended from 30 March 2025)
  • Workshop: 19 - 20 May 2025

Submission Details

Papers can be submitted through OpenReview.

We invite submissions of original work, up to 8 pages in length (excluding references), in the ACM proceedings format (i.e. following the AAMAS formatting instructions). This includes work that has been accepted as a poster/extended abstract at AAMAS 2025. Additionally, we welcome submissions of preliminary results (work-in-progress) as well as visionary outlook papers that lay out directions for future research in a specific area, both up to 6 pages in length; shorter papers are very much welcome and will not be judged differently. Finally, we also accept recently published journal papers in the form of a 2-page abstract.

Furthermore, for submissions that were rejected or accepted as extended abstracts at AAMAS, authors must also append the reviews they received and a PDF diff.

All submissions will be peer-reviewed (double-blind). Accepted work will be allocated time for a poster and possibly an oral presentation during the workshop. In line with AAMAS, the workshop will be held fully in person.

When preparing your submission for ALA 2025, please remove the AAMAS citation information and running headers, and replace the AAMAS copyright block in the main.tex file from the AAMAS template with the following:

    \setcopyright{none}
    \acmConference[ALA '25]{Proc.\@ of the Adaptive and Learning Agents Workshop (ALA 2025)}{May 19 -- 20, 2025}{Detroit, Michigan, USA, ala-workshop.github.io}{Avalos, Aydeniz, M\"uller, Mohammedalamen (eds.)}
    \copyrightyear{2025}
    \acmYear{2025}
    \acmDOI{}
    \acmPrice{}
    \acmISBN{}
    \settopmatter{printacmref=false}

For the camera-ready paper, make sure to submit the deanonymized version with the copyright block replaced as above.

Program

All times are presented in local Detroit time.

Monday May 19

Welcome & Opening Remarks
09:00-10:00 Session I - Chair: TBD
09:00-10:00 Invited Talk:
Eugene Vinitsky
10:00-10:45 Coffee Break
10:45-12:30 Session II - Chair: TBD
10:45-11:00 Long Talk: Umer Siddique, Peilang Li, Yongcan Cao
Learning Fair Pareto-Optimal Policies in Multi-Objective Reinforcement Learning
11:00-11:15 Long Talk: Patrick Benjamin, Alessandro Abate
Networked Communication for Mean-Field Games with Function Approximation and Empirical Mean-Field Estimation
11:15-11:30 Long Talk: Rory Lipkis, Adrian Agogino
Failure Analysis of Autonomous Systems with RL-Guided MCMC Sampling
11:30-11:45 Long Talk: Bhavini Jeloka, Yue Guan, Panagiotis Tsiotras
Learning Large-Scale Competitive Team Behaviors with Mean-Field Interactions
11:45-12:30 Short Talks, 5 minutes each in order:
  • Zeki Doruk Erden, Boi Faltings
    Agential AI for Integrated Continual Learning, Deliberative Behavior, and Comprehensible Models
  • Augusto Antônio Fontanive Leal, Mateus Begnini Melchiades, Gabriel de Oliveira Ramos
    A Flexible Approach to Deliberation Cost in the Option-Critic Architecture
  • Matteo Ceriscioli, Karthika Mohan
    Causal Discovery via Adaptive Agents in Multi-Agent and Sequential Decision Tasks
  • Everardo Gonzalez, Siddarth Iyer, Kagan Tumer
    Influence Based Reward Shaping Without a Heuristic
  • Yoann Poupart, Aurélie Beynier, Nicolas Maudet
    Perspectives for Direct Interpretability in Multi-Agent Deep Reinforcement Learning
  • Pedro Sequeira, Vidyasagar Sadhu, Melinda Gervasio
    ToMCAT: Theory-of-Mind for Cooperative Agents in Teams via Multiagent Diffusion Policies
  • Lukas Schäfer, Logan Jones, Anssi Kanervisto, Yuhan Cao, Tabish Rashid, Raluca Georgescu, David Bignell, Siddhartha Sen, Andrea Treviño Gavito, Sam Devlin
    Visual Encoders for Imitation Learning in Modern Video Games
  • Kevin A. Wang, Jerry Xia, Stephen Chung, Amy Greenwald
    Dynamic Thinker: Optimizing Decision-Time Planning with Costly Compute
  • Zeki Doruk Erden, Donia Gasmi, Boi Faltings
    Continual Reinforcement Learning via Autoencoder-Driven Task and New Environment Recognition
12:30-14:00 Lunch Break
14:00-16:00 Session III & Poster Session - Chair: TBD
14:00-14:15 Long Talk: Jingjing Feng, Lucheng Wang, Alona Tenytska, Bei Peng
Sample-Efficient Preference-Based Reinforcement Learning Using Diffusion Models
14:15-14:30 Long Talk: Alicia P. Wolfe, Oliver Diamond, Brigitte Goeler-Slough, Remi Feuerman, Magdalena Kisielinska, Victoria Manfredi
Multicopy Reinforcement Learning Agents
14:30-15:45 Poster Session A
15:45-16:30 Coffee Break
16:30-17:30 Session IV - Chair: TBD
16:30-17:30 Invited Talk:
Sandip Sen

Tuesday May 20

09:00-10:00 Session V - Chair: TBD
09:00-10:00 Invited Talk:
Roxana Rădulescu
10:00-10:45 Coffee Break
10:45-12:30 Session VI - Chair: TBD
10:45-11:00 Long Talk: Tao Li, Juan Guevara, Xinhong Xie, Quanyan Zhu
Self-Confirming Transformer for Belief-Conditioned Adaptation in Offline Multi-Agent Reinforcement Learning
11:00-11:15 Long Talk: Willem Röpke, Raphaël Avalos, Roxana Rădulescu, Ann Nowe, Diederik M Roijers, Florent Delgrange
Integrating RL and Planning through Optimal Transport World Models
11:15-11:30 Long Talk: Yen Ru Lai, Fu-Chieh Chang, Pei-Yuan Wu
Leveraging Unlabeled Data Sharing through Kernel Function Approximation in Offline Reinforcement Learning
11:30-12:30 Short Talks, 5 minutes each in order:
  • Narjes Nourzad, Jared Coleman, Zhongyuan Zhao, Bhaskar Krishnamachari, Gunjan Verma, Santiago Segarra
    Actor-Twin Framework for Task Graph Scheduling
  • Thibault Roux, Filipo Studzinski Perotto, Gauthier Picard
    Towards Scalable Collision Avoidance in Dense Airspaces with Deep Multi-Agent Reinforcement Learning
  • Gabriel Romio, Mateus Begnini Melchiades, Gabriel de Oliveira Ramos
    Improving Option Learning with Hindsight Experience Replay
  • Mateus Begnini Melchiades, Gabriel de Oliveira Ramos, Bruno Castro da Silva
    Dynamic Option Creation in Option-Critic Reinforcement Learning
  • Florian Grötschla, Joël Mathys, Loïc Holbein, Roger Wattenhofer
    Reinforcement Learning for Locally Checkable Labeling Problems
  • Bernhard Hilpert, Muhan Hou, Kim Baraka, Joost Broekens
    Can You See How I learn? Human Observers' Inferences about Reinforcement Learning Agents' Learning Processes
  • Fares Chouaki, Aurélie Beynier, Nicolas Maudet, Paolo Viappiani
    Fairness in Cooperative Multiagent Multiobjective Reinforcement Learning using the Expected Scalarized Return
  • Andries Rosseau, Paolo Turrini, Marjon Blondeel, Ann Nowe
    Scaling Marginal Cost Tolling to Address Heterogeneity under Imperfect Information in Routing Games
  • Dimitris Michailidis, Sennay Ghebreab, Fernando P. Santos
    Understanding Fairness in Congestion Games with Learning Agents
  • Marc Saideh, Jamont, Laurent Vercouter
    Adaptive Authentication Factor Selection in the Internet of Things: A Trust-Based Multi-Objective Optimization Approach
12:30-14:00 Lunch Break
14:00-15:45 Session VII & Poster Session - Chair: TBD
14:00-14:15 Long Talk: Arnau Mayoral-Macau, Manel Rodriguez-Soto, Enrico Marchesini, Maite López-Sánchez, Marti Sanchez-Fibla, Alessandro Farinelli, Juan Antonio Rodriguez Aguilar
Designing ethical environments using multi-agent reinforcement learning
14:15-14:30 Long Talk: Daniel Melcer, Stavros Tripakis, Christopher Amato
Learned Shields for Multi-Agent Reinforcement Learning
14:30-15:45 Poster Session B
15:45-16:30 Coffee Break
16:30-17:30 Panel Session
17:30 Awards & Closing Remarks

Poster Session A - Monday May 19 14:30-15:45

All papers presented on day 1.

Poster Session B - Tuesday May 20 14:30-15:45

All papers presented on day 2.

Accepted Papers

Paper # | Authors | Title
1 | Arnau Mayoral-Macau, Manel Rodriguez-Soto, Enrico Marchesini, Maite López-Sánchez, Marti Sanchez-Fibla, Alessandro Farinelli, Juan Antonio Rodriguez Aguilar | Designing ethical environments using multi-agent reinforcement learning
2 | Jingjing Feng, Lucheng Wang, Alona Tenytska, Bei Peng | Sample-Efficient Preference-Based Reinforcement Learning Using Diffusion Models
4 | Patrick Benjamin, Alessandro Abate | Networked Communication for Mean-Field Games with Function Approximation and Empirical Mean-Field Estimation
5 | Matteo Ceriscioli, Karthika Mohan | Causal Discovery via Adaptive Agents in Multi-Agent and Sequential Decision Tasks
6 | Yoann Poupart, Aurélie Beynier, Nicolas Maudet | Perspectives for Direct Interpretability in Multi-Agent Deep Reinforcement Learning
7 | Gabriel Romio, Mateus Begnini Melchiades, Gabriel de Oliveira Ramos | Improving Option Learning with Hindsight Experience Replay
10 | Pedro Sequeira, Vidyasagar Sadhu, Melinda Gervasio | ToMCAT: Theory-of-Mind for Cooperative Agents in Teams via Multiagent Diffusion Policies
11 | Yen Ru Lai, Fu-Chieh Chang, Pei-Yuan Wu | Leveraging Unlabeled Data Sharing through Kernel Function Approximation in Offline Reinforcement Learning
12 | Thibault Roux, Filipo Studzinski Perotto, Gauthier Picard | Towards Scalable Collision Avoidance in Dense Airspaces with Deep Multi-Agent Reinforcement Learning
13 | Dimitris Michailidis, Sennay Ghebreab, Fernando P. Santos | Understanding Fairness in Congestion Games with Learning Agents
14 | Lukas Schäfer, Logan Jones, Anssi Kanervisto, Yuhan Cao, Tabish Rashid, Raluca Georgescu, David Bignell, Siddhartha Sen, Andrea Treviño Gavito, Sam Devlin | Visual Encoders for Imitation Learning in Modern Video Games
15 | Tao Li, Juan Guevara, Xinhong Xie, Quanyan Zhu | Self-Confirming Transformer for Belief-Conditioned Adaptation in Offline Multi-Agent Reinforcement Learning
16 | Bhavini Jeloka, Yue Guan, Panagiotis Tsiotras | Learning Large-Scale Competitive Team Behaviors with Mean-Field Interactions
17 | Narjes Nourzad, Jared Coleman, Zhongyuan Zhao, Bhaskar Krishnamachari, Gunjan Verma, Santiago Segarra | Actor-Twin Framework for Task Graph Scheduling
20 | Florian Grötschla, Joël Mathys, Loïc Holbein, Roger Wattenhofer | Reinforcement Learning for Locally Checkable Labeling Problems
22 | Bernhard Hilpert, Muhan Hou, Kim Baraka, Joost Broekens | Can You See How I learn? Human Observers' Inferences about Reinforcement Learning Agents' Learning Processes
23 | Marc Saideh, Jamont, Laurent Vercouter | Adaptive Authentication Factor Selection in the Internet of Things: A Trust-Based Multi-Objective Optimization Approach
24 | Zeki Doruk Erden, Boi Faltings | Agential AI for Integrated Continual Learning, Deliberative Behavior, and Comprehensible Models
25 | Zeki Doruk Erden, Donia Gasmi, Boi Faltings | Continual Reinforcement Learning via Autoencoder-Driven Task and New Environment Recognition
26 | Mateus Begnini Melchiades, Gabriel de Oliveira Ramos, Bruno Castro da Silva | Dynamic Option Creation in Option-Critic Reinforcement Learning
28 | Augusto Antônio Fontanive Leal, Mateus Begnini Melchiades, Gabriel de Oliveira Ramos | A Flexible Approach to Deliberation Cost in the Option-Critic Architecture
29 | Andries Rosseau, Paolo Turrini, Marjon Blondeel, Ann Nowe | Scaling Marginal Cost Tolling to Address Heterogeneity under Imperfect Information in Routing Games
31 | Everardo Gonzalez, Siddarth Iyer, Kagan Tumer | Influence Based Reward Shaping Without a Heuristic
33 | Fares Chouaki, Aurélie Beynier, Nicolas Maudet, Paolo Viappiani | Fairness in Cooperative Multiagent Multiobjective Reinforcement Learning using the Expected Scalarized Return
39 | Willem Röpke, Raphaël Avalos, Roxana Rădulescu, Ann Nowe, Diederik M Roijers, Florent Delgrange | Integrating RL and Planning through Optimal Transport World Models
40 | Alicia P. Wolfe, Oliver Diamond, Brigitte Goeler-Slough, Remi Feuerman, Magdalena Kisielinska, Victoria Manfredi | Multicopy Reinforcement Learning Agents
42 | Daniel Melcer, Stavros Tripakis, Christopher Amato | Learned Shields for Multi-Agent Reinforcement Learning
43 | Umer Siddique, Peilang Li, Yongcan Cao | Learning Fair Pareto-Optimal Policies in Multi-Objective Reinforcement Learning
44 | Kevin A. Wang, Jerry Xia, Stephen Chung, Amy Greenwald | Dynamic Thinker: Optimizing Decision-Time Planning with Costly Compute
45 | Rory Lipkis, Adrian Agogino | Failure Analysis of Autonomous Systems with RL-Guided MCMC Sampling

Invited Talks

Roxana Rădulescu

Affiliation: Utrecht University

Website: https://www.uu.nl/staff/RTRadulescu

Title: TBA

Bio: Roxana Rădulescu is an Assistant Professor in AI and Data Science at the Department of Information and Computing Sciences, Utrecht University. Previously, she was an FWO Postdoctoral Fellow at the Artificial Intelligence Lab, Vrije Universiteit Brussel, Belgium. Her research focuses on the development of multi-agent decision-making systems in which each agent is driven by different objectives and goals, under the paradigm of multi-objective multi-agent reinforcement learning.

Eugene Vinitsky

Affiliation: New York University

Website: https://engineering.nyu.edu/faculty/eugene-vinitsky

Title: TBA

Bio: Eugene Vinitsky is a Professor of Civil and Urban Engineering at NYU and a member of the C2SMARTER consortium. His primary research interest is figuring out how to make developing multi-agent controllers, planners, and intelligence as easy as possible by developing new learning algorithms, software, and tools. He looks for applications of these techniques in civil engineering problems and autonomy. He received his PhD in controls engineering from UC Berkeley.

Sandip Sen

Affiliation: University of Tulsa

Website: https://utulsa.edu/people/sandip-sen/

Title: TBA

Bio: Sandip Sen is a professor in the Tandy School of Computer Science with primary research interests in artificial intelligence, intelligent agents, machine learning, and evolutionary computation. He completed his Ph.D. on intelligent, distributed scheduling from the University of Michigan in December, 1993. He has authored approximately 300 papers in workshops, conferences, and journals in several areas of artificial intelligence.

Previous Editions

This workshop is a continuation of the long-running AAMAS series of workshops on adaptive agents, now in its sixteenth year.

Program Committee

  • Erman Acar, University of Amsterdam
  • Adrian Agogino, University of Texas, Austin
  • Lucas N. Alegre, Institute of Informatics - Federal University of Rio Grande do Sul
  • Nitay Alon, Hebrew University of Jerusalem
  • Philipp Altmann, LMU Munich
  • Hicham Azmani, Vrije Universiteit Brussel
  • Jérôme Botoko Ekila, Vrije Universiteit Brussel
  • Jacob Brue, University of Tulsa
  • Mustafa Mert Çelikok, Delft University of Technology
  • Fu-Chieh Chang, National Taiwan University
  • Alexandra Cimpean, Vrije Universiteit Brussel
  • Kyle Crandall, US Naval Research Lab
  • Vinicius Renan de Carvalho, Universidade de São Paulo
  • Gabriel de Oliveira Ramos, Universidade Vale do Rio dos Sinos
  • Florent Delgrange, Vrije Universiteit Brussel
  • Gaurav Dixit, Oregon State University
  • Elias Fernández Domingos, Vrije Universiteit Brussel
  • Simone Drago, Polytechnic Institute of Milan
  • Flint Xiaofeng Fan, ETH Zurich
  • Florian Felten, ETH Zurich
  • Rolando Fernandez, University of Texas at Austin
  • Timothy Flavin, University of Tulsa
  • Julian Garcia, Monash University
  • Everardo Gonzalez, Oregon State University
  • Davide Grossi, University of Groningen
  • Brent Harrison, University of Kentucky
  • Fredrik Heintz, Linköping University
  • Bernhard Hilpert, Leiden University
  • Athirai Aravazhi Irissappane, Amazon
  • Whiyoung Jung, LG AI Research
  • Michael Kaisers, Google
  • Thommen George Karimpanal, Deakin University
  • Sammie Katt, Aalto University
  • Guangliang Li, Ocean University of China
  • Woohyung Lim, LG AI Research
  • Robert Loftin, University of Sheffield
  • Junlin Lu, National University of Ireland, Galway
  • Joël Mathys, ETH Zurich
  • David Milec, Czech Technical University in Prague
  • Nicole Orzan, University of Groningen
  • Bei Peng, University of Liverpool
  • Ram Rachum, Tufts University
  • Roxana Rădulescu, Utrecht University
  • Carrie Rebhuhn, The MITRE Corporation
  • Mathieu Reymond, Mila - Quebec Artificial Intelligence Institute
  • Juan Antonio Rodriguez Aguilar, Spanish National Research Council
  • Manel Rodriguez-Soto, Artificial Intelligence Research Institute, Spanish National Research Council
  • Diederik Roijers, University of Amsterdam
  • Willem Röpke, Vrije Universiteit Brussel
  • Andries Rosseau, Vrije Universiteit Brussel
  • Vidyasagar Sadhu, SRI International
  • Fernando P. Santos, University of Amsterdam
  • Lukas Schäfer, Microsoft
  • Sandip Sen, University of Tulsa
  • Pedro Sequeira, SRI International
  • William Tomlinson, University of California, Irvine
  • Paolo Turrini, University of Warwick
  • Pascal R. Van der Vaart, Delft University of Technology
  • Garrett Warnell, University of Texas, Austin
  • Connor Yates, Oregon State University
  • Neil Yorke-Smith, Delft University of Technology

Organization

This year's workshop is organised by:
  • Raphael Avalos (Vrije Universiteit Brussel, BE)
  • A. Alp Aydeniz (Oregon State University, US)
  • Henrik Müller (Leibniz University Hannover, DE)
  • Montaser Mohammedalamen (University of Alberta, CA)
Senior Steering Committee Members:
  • Enda Howley (University of Galway, IE)
  • Daniel Kudenko (Leibniz University Hannover, DE)
  • Patrick Mannion (University of Galway, IE)
  • Ann Nowé (Vrije Universiteit Brussel, BE)
  • Sandip Sen (University of Tulsa, US)
  • Peter Stone (University of Texas at Austin, US)
  • Matthew Taylor (University of Alberta, CA)
  • Kagan Tumer (Oregon State University, US)
  • Karl Tuyls (University of Liverpool, UK)

Contact

If you have any questions about the ALA workshop, please contact the organizers at:
ala.workshop.aamas AT gmail.com

For more general news, discussion, collaboration, and networking opportunities with others interested in adaptive learning agents, please join our LinkedIn group.