Call for Papers
Submission Portal: OpenReview.net - GMS (ICLR '22)
Submission Deadline: February 26, 2022 (midnight, anywhere on Earth)
Submission Deadline: March 2nd, 2022 (11:59am UK Time)
Page Limit: 8 pages, unlimited references
LaTeX Style Files: Download
Slack channel: Join
We invite submissions to the first Gamification and Multiagent Solutions workshop, hosted at ICLR 2022.
Many of life’s intelligent systems are multiagent in nature: from market economies to ant colonies, from forest ecosystems to decentralized energy grids, intelligence is often a property of the whole, not of its parts. These real-world examples suggest a deeper mathematical principle of intelligence, one grounded in games and multiagent interactions. However, modern machine learning primarily takes an optimization-first, single-agent approach.
What do multiagent systems have to offer in the way of solutions? Generative adversarial networks reformulate learning a generative model as a two-player, zero-sum game. Similarly, EigenGame reformulates top-k singular value decomposition / principal component analysis as a k-player, general-sum game. What other learning problems can we “Gamify” by casting them as games among interacting agents? What might we learn from reformulating machine learning from the ground up with a multiagent approach in mind?
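To make the EigenGame example concrete, here is an illustrative NumPy sketch (not the authors' implementation; the function name and hyperparameters are ours) of sequential EigenGame-style updates: player i performs projected gradient ascent on the utility u_i = v_iᵀMv_i − Σ_{j<i} ⟨v_i, Mv_j⟩² / ⟨v_j, Mv_j⟩, whose strict Nash equilibrium is the top-k eigenvectors of M.

```python
import numpy as np

def eigengame(M, k, steps=2000, lr=0.01, seed=0):
    """Sketch of EigenGame-style updates for a symmetric PSD matrix M.

    Each player i ascends u_i = v_i' M v_i - sum_{j<i} <v_i, M v_j>^2 / <v_j, M v_j>,
    with updates projected back onto the unit sphere. At the Nash equilibrium
    the players' vectors are the top-k eigenvectors of M.
    """
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    V = rng.normal(size=(n, k))
    V /= np.linalg.norm(V, axis=0)  # each player starts on the unit sphere
    for _ in range(steps):
        for i in range(k):
            # gradient of the reward term
            grad = 2.0 * (M @ V[:, i])
            # penalty: stay M-orthogonal to "parent" players j < i
            for j in range(i):
                Mvj = M @ V[:, j]
                grad -= 2.0 * (V[:, i] @ Mvj) / (V[:, j] @ Mvj) * Mvj
            # Riemannian projection onto the sphere's tangent space, then step
            grad -= (grad @ V[:, i]) * V[:, i]
            V[:, i] = V[:, i] + lr * grad
            V[:, i] /= np.linalg.norm(V[:, i])
    return V
```

Note the design: no player minimizes a shared loss; each greedily improves its own utility, yet the fixed point of the game is exactly the PCA solution.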
Multiagent designs are typically distributed and decentralized, which leads to robust, parallelizable learning algorithms. Interactions between multiple agents also drive the creation of curricula that challenge learning agents to improve their generalization performance.
We want to bring together a community of experts to explore:
What makes a problem amenable to a multiagent approach?
Which natural multiagent systems exhibit intelligent behaviors we can reuse in artificial agents?
How do we shepherd systems of adaptive agents to useful equilibria?
Can we develop novel multiagent solutions to machine learning problems?
In which cases are multiagent approaches crucial to advancing the state of the art?
What new solutions will we find at the fixed points, equilibria, or attractors of our games that were not at the bottom of our loss functions?
By exploring this direction we might gain a fresh perspective on machine learning, and unearth a new and exciting direction to build multiagent solutions.
We welcome both theoretical and experimental submissions along the above directions.
Authors should submit full papers electronically in PDF format at OpenReview.net.
Formatting Guidelines: Please format papers according to the updated ICLR style file (Download).
Paper Length: Papers can be up to 8 pages long in ICLR format. Shorter submissions are appreciated, and we encourage authors to submit preliminary results and ideas. Additional pages may be used for references.
Supplemental material can be appended at the end of the paper. However, reviewers are instructed to make their evaluations based on the main submission, and are not obligated to consult the supplemental material.
Parallel Submissions: We encourage concurrent submission of papers submitted to our workshop to other workshops at ICLR 2022. To widen participation and encourage discussion, there will be no formal publication of workshop proceedings. We will, however, post the accepted papers online for the benefit of workshop participants. Submission of preliminary work, and of papers submitted or in preparation for submission to other major venues in the field, is therefore encouraged.
Past Submissions: We will not accept direct submissions of previously published work; however, we expect surveys of collections of previously published work that is less well known to the ICLR community to be of value to workshop attendees. We will consider any submission that aims to present a synopsis of previous research, both to prevent “reinventing the wheel” and to re-inspire future extensions of classic approaches. We will prioritize surveys of work prior to the current era of modern AI (e.g., pre-2011). Papers accepted to ICLR 2022 are not considered previously published work and are eligible for submission to this workshop, as they have not yet had long exposure to the community.
We invite papers on the wide range of topics that fit the mission of the workshop: a dynamical-systems / multiagent view of machine learning algorithms. Submissions may draw on any of the following topics, but are certainly not restricted to this set. Please find a list of example papers at the end of this page.
Multiagent Reinforcement Learning
Learning in Games (e.g., solution concepts and equilibria)
Distributed computation (e.g. in distributed systems, or neural computation)
Cyber-physical and other human-in-the-loop formulations
Questions and Discussions: Please join the following Slack workspace if you have any questions regarding the workshop and the submission process: https://join.slack.com/t/gamificationmas/shared_invite/zt-10ekqbjyp-A40B1RKwQsLtlXldbpD7Iw
At least one author from each submission is expected to serve as a reviewer.
The Cooperative AI Foundation will award two $500 prizes to two accepted papers.
Submission Deadline: February 26, 2022 (AoE).
Submission Deadline: March 2nd, 2022 (11:59am UK Time).
Acceptance Notification: March 26, 2022.
Camera Ready: April 16, 2022 (AoE).
Workshop: April 29, 2022.
Classical Machine Learning as a Game
EigenGame: PCA as a Nash Equilibrium ~ Gemp, McWilliams, Vernade, Graepel [ICLR '21]
EigenGame Unloaded: When Playing Games is Better than Optimizing ~ Gemp, McWilliams, Vernade, Graepel [arXiv '21]
A Non-Negative Matrix Factorization Game ~ Singh [arXiv '21]
Contrastive Divergence Learning is a Time Reversal Adversarial Game ~ Yair, Michaeli [ICLR '21]
Reinforcement Learning as a Game
Decentralized Reinforcement Learning: Global Decision-Making via Local Transactions ~ Chang, Kaushik, Weinberg, Griffiths, Levine [blog post, ICML '20]
Environment Shift Games: Are Multiple Agents the Solution, and not the Problem, to Non-Stationarity? ~ May, Oliehoek [AAMAS '21]
Language as a Game
How to Reach Linguistic Consensus: A Proof of Convergence for the Naming Game ~ De Vylder & Tuyls [Journal of Theoretical Biology '06]
Modeling the Cultural Evolution of Language ~ Steels [Physics of Life Reviews '11]
Multiagent or Equilibrium Views on Deep Learning
Deep Neural Networks Are Congestion Games: From Loss Landscape to Wardrop Equilibrium and Beyond ~ Vesseron, Redko, Laclau [arXiv '21]
Efficient Collective Learning by Ensembles of Altruistic Diversifying Neural Networks ~ Brazowski & Schneidman [#tweeprint '20, arXiv]
Deep Equilibrium Models ~ Bai, Kolter, Koltun [NeurIPS '19]
The Multiagent Brain
The Generative Adversarial Brain ~ Gershman [Frontiers in AI '19]
An Optimal Brain can be Composed of Conflicting Agents ~ Livnat, Pippenger [PNAS '06]
Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation ~ Scellier & Bengio [Frontiers in Comp Neuro '17]
Desirable Solution Concepts and Stability
Neural Lyapunov Redesign ~ Mehrjou, Ghavamzadeh, Schölkopf [L4DC '21]
Gradient descent GAN Optimization is Locally Stable ~ Nagarajan & Kolter [NeurIPS '17]
Beyond Local Nash Equilibria for Adversarial Networks ~ Oliehoek, Savani, Gallego, van der Pol, Groß [Springer '18]
Empirical Game Theory and Self-Play
DO-GAN: A Double-Oracle Framework for Generative Adversarial Networks ~ Aung, Wang, Yu, An, Jayavelu, Li [arXiv '21]
Multi-Agent Training beyond Zero-Sum with Correlated Equilibrium Meta-Solvers ~ Marris, Muller, Lanctot, Tuyls, Graepel [ICML '21]
A Sharp Analysis of Model-based RL with Self Play ~ Liu, Yu, Bai, Jin [ICML '21]
Min-Max Optimal / Adversarial Guarantees
Learning to Hash Robustly, with Guarantees ~ Andoni & Beaglehole [arXiv '20]
Generalizing to Unseen Domains via Distribution Matching ~ Albuquerque, Monteiro, Darvishi, Falk, Mitliagkas [arXiv '21]
Online Adversarial Attacks ~ Mladenovich, Bose, Berard et al [arXiv '21]
Synthetic Data Generators: Sequential and Private ~ Bousquet, Livni, Moran [arXiv '20]
A neural architecture for designing truthful and efficient auctions ~ Tacchetti et al. [arXiv '19]
The AI Economist: Optimal Economic Policy Design via Two-level Deep Reinforcement Learning ~ Zheng et al. [arXiv '21]
Should I tear down this wall? Optimizing social metrics by evaluating novel actions ~ Kramar et al. [COINE '20]
Optimal Auctions through Deep Learning ~ Duetting et al. [ICML '20]