News

  • 28 Mar 2025: We are excited to announce Sandip Sen as a keynote speaker for ALA 2025.
  • 12 Mar 2025: We are excited to announce Eugene Vinitsky as a keynote speaker for ALA 2025.
  • 26 Feb 2025: We are excited to announce Roxana Rădulescu as a keynote speaker for ALA 2025.
  • 24 Feb 2025: The ALA 2025 submission deadline has been further extended to 1 March 2025 23:59 AOE.
  • 30 Jan 2025: The ALA 2025 submission deadline has been extended to 25 Feb 2025 23:59 AOE.
  • 24 Jan 2025: Added the OpenReview link to the submission details!
  • 6 Dec 2024: ALA 2025 Website goes live!

ALA 2025 - Workshop at AAMAS 2025

Adaptive and Learning Agents (ALA) spans diverse fields such as Computer Science, Software Engineering, and Biology, as well as the Cognitive and Social Sciences. The ALA workshop focuses on agents and multiagent systems which employ learning or adaptation.

The goal of this workshop is to increase awareness of and interest in adaptive agent research, to encourage collaboration, and to give a representative overview of current research in the area of adaptive and learning agents and multi-agent systems. It aims to bring together not only scientists from different areas of computer science (e.g. agent architectures, reinforcement learning, evolutionary algorithms) but also researchers from other fields studying similar concepts (e.g. game theory, bio-inspired control, mechanism design).

The workshop will serve as an inclusive forum for the discussion of ongoing or completed work covering both theoretical and practical aspects of adaptive and learning agents and multi-agent systems.

This workshop will focus on all aspects of adaptive and learning agents and multi-agent systems, with a particular emphasis on how to modify established learning techniques and/or create new learning paradigms to address the many challenges presented by complex real-world problems. The topics of interest include but are not limited to:

  • Novel combinations of reinforcement and supervised learning approaches
  • Integrated learning approaches using reasoning modules like negotiation, trust, coordination, etc.
  • Supervised and semi-supervised multi-agent learning
  • Reinforcement learning (single- and multi-agent)
  • Novel deep learning approaches for adaptive single- and multi-agent systems
  • Multi-objective optimisation in single- and multi-agent systems
  • Planning (single- and multi-agent)
  • Reasoning (single- and multi-agent)
  • Distributed learning
  • Adaptation and learning in dynamic environments
  • Evolution of agents in complex environments
  • Co-evolution of agents in a multi-agent setting
  • Cooperative exploration and learning to cooperate and collaborate
  • Learning trust and reputation
  • Communication restrictions and their impact on multi-agent coordination
  • Design of reward structure and fitness measures for coordination
  • Scaling learning techniques to large systems of learning and adaptive agents
  • Emergent behaviour in adaptive multi-agent systems
  • Game theoretical analysis of adaptive multi-agent systems
  • Neuro-control in multi-agent systems
  • Bio-inspired multi-agent systems
  • Human-in-the-loop learning systems
  • Applications of adaptive and learning agents and multi-agent systems to real world complex systems

Important Dates

  • Submission deadline: 1 March 2025 23:59 AOE (extended from 4 February 2025, then 25 February 2025)
  • Notification of acceptance: 2 April 2025 (previously 10 March 2025)
  • Camera-ready copies: 21 April 2025 (previously 30 March 2025)
  • Workshop: 19 - 20 May 2025

Submission Details

Papers can be submitted through OpenReview.

We invite submissions of original work, up to 8 pages in length (excluding references), in the ACM proceedings format (i.e. following the AAMAS formatting instructions); in keeping with previous ALA guidelines, this includes work that has been accepted as a poster/extended abstract at AAMAS 2025. Additionally, we welcome submissions of preliminary results (work-in-progress) as well as visionary outlook papers that lay out directions for future research in a specific area, both up to 6 pages in length; shorter papers are very much welcome and will not be judged differently. Finally, we also accept recently published journal papers in the form of a 2-page abstract.

Furthermore, for submissions that were rejected or accepted as extended abstracts at AAMAS, authors must also append the reviews they received and a pdf diff against the AAMAS submission.

All submissions will be peer-reviewed (double-blind). Accepted work will be allocated time for a poster and possibly an oral presentation during the workshop. In line with AAMAS, the workshop will be fully in-person.

When preparing your submission for ALA 2025, please remove the AAMAS citation information and running headers, and replace the AAMAS copyright block in the main.tex file of the AAMAS template with the following:

    \setcopyright{none}
    \acmConference[ALA '25]{Proc.\@ of the Adaptive and Learning Agents Workshop (ALA 2025)}{May 19 -- 20, 2025}{Detroit, Michigan, USA, ala-workshop.github.io}{Avalos, Aydeniz, M\"uller, Mohammedalamen (eds.)}
    \copyrightyear{2025}
    \acmYear{2025}
    \acmDOI{}
    \acmPrice{}
    \acmISBN{}
    \settopmatter{printacmref=false}

For the camera-ready version, make sure to submit the deanonymized paper with the copyright block above in place.

Accepted Papers

Paper # | Authors | Title
1 | Arnau Mayoral-Macau, Manel Rodriguez-Soto, Enrico Marchesini, Maite López-Sánchez, Marti Sanchez-Fibla, Alessandro Farinelli, Juan Antonio Rodriguez Aguilar | Designing ethical environments using multi-agent reinforcement learning
2 | Jingjing Feng, Lucheng Wang, Alona Tenytska, Bei Peng | Sample-Efficient Preference-Based Reinforcement Learning Using Diffusion Models
4 | Patrick Benjamin, Alessandro Abate | Networked Communication for Mean-Field Games with Function Approximation and Empirical Mean-Field Estimation
5 | Matteo Ceriscioli, Karthika Mohan | Causal Discovery via Adaptive Agents in Multi-Agent and Sequential Decision Tasks
6 | Yoann Poupart, Aurélie Beynier, Nicolas Maudet | Perspectives for Direct Interpretability in Multi-Agent Deep Reinforcement Learning
7 | Gabriel Romio, Mateus Begnini Melchiades, Gabriel de Oliveira Ramos | Improving Option Learning with Hindsight Experience Replay
10 | Pedro Sequeira, Vidyasagar Sadhu, Melinda Gervasio | ToMCAT: Theory-of-Mind for Cooperative Agents in Teams via Multiagent Diffusion Policies
11 | Yen Ru Lai, Fu-Chieh Chang, Pei-Yuan Wu | Leveraging Unlabeled Data Sharing through Kernel Function Approximation in Offline Reinforcement Learning
12 | Thibault Roux, Filipo Studzinski Perotto, Gauthier Picard | Towards Scalable Collision Avoidance in Dense Airspaces with Deep Multi-Agent Reinforcement Learning
13 | Dimitris Michailidis, Sennay Ghebreab, Fernando P. Santos | Understanding Fairness in Congestion Games with Learning Agents
14 | Lukas Schäfer, Logan Jones, Anssi Kanervisto, Yuhan Cao, Tabish Rashid, Raluca Georgescu, David Bignell, Siddhartha Sen, Andrea Treviño Gavito, Sam Devlin | Visual Encoders for Imitation Learning in Modern Video Games
15 | Tao Li, Juan Guevara, Xinhong Xie, Quanyan Zhu | Self-Confirming Transformer for Belief-Conditioned Adaptation in Offline Multi-Agent Reinforcement Learning
16 | Bhavini Jeloka, Yue Guan, Panagiotis Tsiotras | Learning Large-Scale Competitive Team Behaviors with Mean-Field Interactions
17 | Narjes Nourzad, Jared Coleman, Zhongyuan Zhao, Bhaskar Krishnamachari, Gunjan Verma, Santiago Segarra | Actor-Twin Framework for Task Graph Scheduling
20 | Florian Grötschla, Joël Mathys, Loïc Holbein, Roger Wattenhofer | Reinforcement Learning for Locally Checkable Labeling Problems
22 | Bernhard Hilpert, Muhan Hou, Kim Baraka, Joost Broekens | Can You See How I Learn? Human Observers' Inferences about Reinforcement Learning Agents' Learning Processes
23 | Marc Saideh, Jamont, Laurent Vercouter | Adaptive Authentication Factor Selection in the Internet of Things: A Trust-Based Multi-Objective Optimization Approach
24 | Zeki Doruk Erden, Boi Faltings | Agential AI for Integrated Continual Learning, Deliberative Behavior, and Comprehensible Models
25 | Zeki Doruk Erden, Donia Gasmi, Boi Faltings | Continual Reinforcement Learning via Autoencoder-Driven Task and New Environment Recognition
26 | Mateus Begnini Melchiades, Gabriel de Oliveira Ramos, Bruno Castro da Silva | Dynamic Option Creation in Option-Critic Reinforcement Learning
28 | Augusto Antônio Fontanive Leal, Mateus Begnini Melchiades, Gabriel de Oliveira Ramos | A Flexible Approach to Deliberation Cost in the Option-Critic Architecture
29 | Andries Rosseau, Paolo Turrini, Marjon Blondeel, Ann Nowé | Scaling Marginal Cost Tolling to Address Heterogeneity under Imperfect Information in Routing Games
31 | Everardo Gonzalez, Siddarth Iyer, Kagan Tumer | Influence Based Reward Shaping Without a Heuristic
33 | Fares Chouaki, Aurélie Beynier, Nicolas Maudet, Paolo Viappiani | Fairness in Cooperative Multiagent Multiobjective Reinforcement Learning using the Expected Scalarized Return
39 | Willem Röpke, Raphaël Avalos, Roxana Rădulescu, Ann Nowé, Diederik M Roijers, Florent Delgrange | Integrating RL and Planning through Optimal Transport World Models
40 | Alicia P. Wolfe, Oliver Diamond, Brigitte Goeler-Slough, Remi Feuerman, Magdalena Kisielinska, Victoria Manfredi | Multicopy Reinforcement Learning Agents
42 | Daniel Melcer, Stavros Tripakis, Christopher Amato | Learned Shields for Multi-Agent Reinforcement Learning
43 | Umer Siddique, Peilang Li, Yongcan Cao | Learning Fair Pareto-Optimal Policies in Multi-Objective Reinforcement Learning
44 | Kevin A. Wang, Jerry Xia, Stephen Chung, Amy Greenwald | Dynamic Thinker: Optimizing Decision-Time Planning with Costly Compute
45 | Rory Lipkis, Adrian Agogino | Failure Analysis of Autonomous Systems with RL-Guided MCMC Sampling

Invited Talks

Roxana Rădulescu

Affiliation: Utrecht University

Website: https://www.uu.nl/staff/RTRadulescu

Title: TBA

Bio: Roxana Rădulescu is an Assistant Professor in AI and Data Science at the Department of Information and Computing Sciences, Utrecht University. Previously, she was an FWO Postdoctoral Fellow at the Artificial Intelligence Lab, Vrije Universiteit Brussel, Belgium. Her research focuses on the development of multi-agent decision-making systems in which each agent is driven by different objectives and goals, under the paradigm of multi-objective multi-agent reinforcement learning.

Eugene Vinitsky

Affiliation: New York University

Website: https://engineering.nyu.edu/faculty/eugene-vinitsky

Title: TBA

Bio: Eugene Vinitsky is a Professor of Civil and Urban Engineering at NYU and a member of the C2SMARTER consortium. His primary research interest is making the development of multi-agent controllers, planners, and intelligence as easy as possible through new learning algorithms, software, and tools. He looks for applications of these techniques in civil engineering problems and autonomy. He received his PhD in controls engineering from UC Berkeley.

Sandip Sen

Affiliation: University of Tulsa

Website: https://utulsa.edu/people/sandip-sen/

Title: TBA

Bio: Sandip Sen is a professor in the Tandy School of Computer Science with primary research interests in artificial intelligence, intelligent agents, machine learning, and evolutionary computation. He completed his Ph.D. on intelligent, distributed scheduling at the University of Michigan in December 1993. He has authored approximately 300 papers in workshops, conferences, and journals across several areas of artificial intelligence.

Previous Editions

This workshop is a continuation of the long-running AAMAS series of workshops on adaptive agents, now in its sixteenth year.

Organization

This year's workshop is organised by:
  • Raphael Avalos (Vrije Universiteit Brussel, BE)
  • A. Alp Aydeniz (Oregon State University, US)
  • Henrik Müller (Leibniz University Hannover, DE)
  • Montaser Mohammedalamen (University of Alberta, CA)
Senior Steering Committee Members:
  • Enda Howley (University of Galway, IE)
  • Daniel Kudenko (Leibniz University Hannover, DE)
  • Patrick Mannion (University of Galway, IE)
  • Ann Nowé (Vrije Universiteit Brussel, BE)
  • Sandip Sen (University of Tulsa, US)
  • Peter Stone (University of Texas at Austin, US)
  • Matthew Taylor (University of Alberta, CA)
  • Kagan Tumer (Oregon State University, US)
  • Karl Tuyls (University of Liverpool, UK)

Contact

If you have any questions about the ALA workshop, please contact the organizers at:
ala.workshop.aamas AT gmail.com

For more general news, discussion, collaboration, and networking opportunities with others interested in adaptive and learning agents, please join our LinkedIn group.