Accepted Papers

ID Authors Title
2 Yifan He, Abdallah Saffidine and Michael Thielscher Solving Two-player Games with QBF Solvers in General Game Playing
25 Ahad N. Zehmakan, Xiaotian Zhou and Zhongzhi Zhang Viral Marketing in Social Networks with Competing Products
29 Zhaolin Xue, Lihua Zhang and Zhiyan Dong Successively Pruned Q-Learning: Using Self Q-function to Reduce the Overestimation
33 Hideaki Takahashi and Alex Fukunaga On the Transit Obfuscation Problem
38 Liangda Fang, Meihong Yang, Dingliang Cheng, Yunlai Hao, Quanlong Guan and Liping Xiong Generalized Strategy Synthesis of Infinite-State Impartial Combinatorial Games via Exact Binary Classification
42 Stanisław Kaźmierowski and Marcin Dziubiński Efficient Method for Finding Optimal Strategies in Chopstick Auctions with Uniform Objects Values
54 Chaya Levinger, Noam Hazon, Sofia Simola and Amos Azaria Coalition Formation with Bounded Coalition Size
65 Zhenglong Li, Vincent Tam and Kwan L. Yeung Developing A Multi-Agent and Self-Adaptive Framework with Deep Reinforcement Learning for Dynamic Portfolio Risk Management
68 Bram Grooten, Tristan Tomilin, Gautham Vasan, Matthew E. Taylor, A. Rupam Mahmood, Meng Fang, Mykola Pechenizkiy and Decebal Constantin Mocanu MaDi: Learning to Mask Distractions for Generalization in Visual Deep Reinforcement Learning
70 Ayush Chopra, Arnau Quera-Bofarull, Nurullah Giray Kuru, Michael Wooldridge and Ramesh Raskar Private Agent-based Modeling
71 Ayush Chopra, Jayakumar Subramanian, Balaji Krishnamurthy and Ramesh Raskar flame: a Framework for Learning in Agent-based Models
76 Keisuke Okumura Engineering LaCAM*: Towards Real-Time, Large-Scale, and Near-Optimal Multi-Agent Pathfinding
79 Qirui Mi, Siyu Xia, Yan Song, Haifeng Zhang, Shenghao Zhu and Jun Wang TaxAI: A Dynamic Economic Simulator and Benchmark for Multi-Agent Reinforcement Learning
81 Chenmin Wang, Peng Li, Yulong Zeng and Xuepeng Fan Optimal Flash Loan Fee Function with Respect to Leverage Strategies
84 Jaël Champagne Gareau, Marc-André Lavoie, Guillaume Gosset and Éric Beaudry Cooperative Electric Vehicles Planning
86 Xingzhou Lou, Junge Zhang, Ziyan Wang, Kaiqi Huang and Yali Du Safe Reinforcement Learning with Free-form Natural Language Constraints and Pre-Trained Language Models
87 Fumiyasu Makinoshima, Tetsuro Takahashi and Yusuke Oishi Bayesian Behavioural Model Estimation for Live Crowd Simulation
88 Raven Beutner and Bernd Finkbeiner Hyper Strategy Logic
93 Vishwa Prakash H.V. and Prajakta Nimbhorkar Weighted Proportional Allocations of Indivisible Goods and Chores: Insights via Matchings
95 Paul Barde, Jakob Foerster, Derek Nowrouzezahrai and Amy Zhang A Model-Based Solution to the Offline Multi-Agent Reinforcement Learning Coordination Problem
98 Adway Mitra and Palash Dey Evaluating District-based Election Surveys with Synthetic Dirichlet Likelihood
99 Argyrios Deligkas, Eduard Eiben and Tiger-Lily Goldsmith The Parameterized Complexity of Welfare Guarantees in Schelling Segregation
100 Yunhao Yang, Cyrus Neary and Ufuk Topcu Multimodal Pretrained Models for Verifiable Sequential Decision-Making: Planning, Grounding, and Perception
104 Halvard Hummel and Ayumi Igarashi Keeping the Harmony Between Neighbors: Local Fairness in Graph Fair Division
116 Jasmina Gajcin and Ivana Dusparic RACCER: Towards Reachable and Certain Counterfactual Explanations for Reinforcement Learning
122 Sung-Ho Cho, Kei Kimura, Kiki Liu, Kwei-Guu Liu, Zhengjie Liu, Zhaohong Sun, Kentaro Yahiro and Makoto Yokoo Fairness and efficiency trade-off in two-sided matching
123 Jean Springsteen, William Yeoh and Dino Christenson Social Media Algorithmic Filtering with Partisan Polarization
128 Jinyi Liu, Yi Ma, Jianye Hao, Yujing Hu, Yan Zheng, Tangjie Lv and Changjie Fan A Trajectory Perspective on the Role of Data Sampling Techniques in Offline Reinforcement Learning
130 Siddharth Barman, Debajyoti Kar and Shraddha Pathak Parameterized Guarantees for Almost Envy-Free Allocations
137 Sven Gronauer, Tom Haider, Felippe Schmoeller da Roza and Klaus Diepold Reinforcement Learning with Ensemble Model Predictive Safety Certification
139 Natasa Bolic, Tommaso Cesari and Roberto Colomboni An Online Learning Theory of Brokerage
144 Salil Gokhale, Samarth Singla, Shivika Narang and Rohit Vaish Capacity Modification in the Stable Matching Problem
145 Martin Bullinger, Rohith Reddy Gangam and Parnian Shahkar Robust Popular Matchings
153 Hangyu Mao, Rui Zhao, Ziyue Li, Zhiwei Xu, Hao Chen, Yiqun Chen, Bin Zhang, Zhen Xiao, Junge Zhang and Jiangjin Yin PDiT: Interleaving Perception and Decision-making Transformers for Deep Reinforcement Learning
161 Alexander Lam, Haris Aziz, Bo Li, Fahimeh Ramezani and Toby Walsh Proportional Fairness in Obnoxious Facility Location
169 Laurent Gourves and Gianpiero Monaco Nash Stability in Hedonic Skill Games
176 Aamal Hussain, Dan Leonte, Francesco Belardinelli and Georgios Piliouras On the Stability of Learning in Network Games with Many Players
178 Rangeet Bhattacharyya, Parvik Dave, Palash Dey and Swaprava Nath Optimal Referral Auction Design
179 Jiajun Chai, Yuqian Fu, Dongbin Zhao and Yuanheng Zhu Aligning Credit for Multi-Agent Cooperation via Model-based Counterfactual Imagination
181 Matthias Köppe, Martin Koutecký, Krzysztof Sornat and Nimrod Talmon Fine-Grained Liquid Democracy for Cumulative Ballots
186 Siqi Liu, Luke Marris, Marc Lanctot, Georgios Piliouras, Joel Leibo and Nicolas Heess Neural Population Learning beyond Symmetric Zero-Sum Games
188 Michael Oesterle, Tim Grams, Christian Bartelt and Heiner Stuckenschmidt RAISE the Bar: Restriction of Action Spaces for Improved Social Welfare and Equity in Traffic Management
204 Georgios Amanatidis, Aris Filos-Ratsikas, Philip Lazos, Evangelos Markakis and Georgios Papasotiropoulos On the Potential and Limitations of Proxy Voting: Delegation with Incomplete Votes
207 Sheelabhadra Dey, James Ault and Guni Sharon Continual Optimistic Initialization for Value-Based Reinforcement Learning
208 Ying Wang, Houyu Zhou and Minming Li Positive Intra-Group Externalities in Facility Location
211 Tatsuya Iwase, Aurélie Beynier, Nicolas Bredeche, Nicolas Maudet and Jason Marden Is Limited Information Enough? An Approximate Multi-agent Coverage Control in Non-Convex Discrete Environments
214 Jijia Liu, Chao Yu, Jiaxuan Gao, Yuqing Xie, Qingmin Liao, Yi Wu and Yu Wang LLM-Powered Hierarchical Language Agent for Real-time Human-AI Coordination
216 Zihao Li, Shengxin Liu, Xinhang Lu, Biaoshuai Tao and Yichen Tao A Complete Landscape for the Price of Envy-Freeness
223 Jonas Karge, Juliette-Michelle Burkhardt, Sebastian Rudolph and Dominik Rusovac To Lead or to be Led: A Generalized Condorcet Jury Theorem under Dependence
227 Matteo Castiglioni, Alberto Latino, Alberto Marchesi, Giulia Romano, Nicola Gatti and Chokha Palayamkottai Finding Effective Ad Allocations: How to Exploit User History
229 Davide Dell’Anna, Pradeep K. Murukannaiah, Bernd Dudzik, Davide Grossi, Catholijn M. Jonker, Catharine Oertel and Pinar Yolum Toward a Quality Model for Hybrid Intelligence Teams
232 Qihui Feng and Gerhard Lakemeyer Probabilistic Multi-agent Only-Believing
234 Mikayel Samvelyan, Davide Paglieri, Minqi Jiang, Jack Parker-Holder and Tim Rocktäschel Multi-Agent Diagnostics for Robustness via Illuminated Diversity
236 Michael Bernreiter, Jan Maly, Oliviero Nardi and Stefan Woltran Combining Voting and Abstract Argumentation to Understand Online Discussions
237 Mengwei Xu, Louise Dennis and Mustafa A. Mustafa Safeguard Privacy for Minimal Data Collection with Trustworthy Autonomous Agents
242 Jiaming Lu, Jingqing Ruan, Haoyuan Jiang, Ziyue Li, Hangyu Mao and Rui Zhao DuaLight: Enhancing Traffic Signal Control by Leveraging Scenario-Specific and Scenario-Shared Knowledge
245 Tobias Friedrich, Andreas Göbel, Nicolas Klodt, Martin S. Krejca and Marcus Pappik From Market Saturation to Social Reinforcement: Understanding the Impact of Non-Linearity in Information Diffusion Models
246 Giorgio Angelotti, Caroline Ponzoni Carvalho Chanel, Adam Henrique Moreira Pinto, Christophe Lounis, Corentin Chauffaut and Nicolas Drougard Offline Risk-sensitive RL with Partial Observability to Enhance Performance in Human-Robot Teaming
247 Filip Úradník, David Sychrovský, Jakub Černý and Martin Černý Reducing Optimism Bias in Incomplete Cooperative Games
250 Ioannis Caragiannis, Kristoffer Arnsfelt Hansen and Nidhi Rathi On the complexity of Pareto-optimal and envy-free lotteries
251 Daxin Liu and Vaishak Belle Progression with probabilities in the situation calculus: representation and succinctness
252 Rati Devidze, Parameswaran Kamalaruban and Adish Singla Informativeness of Reward Functions in Reinforcement Learning
253 Jannis Weil, Zhenghua Bao, Osama Abboud and Tobias Meuser Towards Generalizability of Multi-Agent Reinforcement Learning in Graphs with Recurrent Message Passing
254 Vitaliy Dolgorukov, Rustam Galimullin and Maksim Gladyshev Dynamic Epistemic Logic of Resource Bounded Information Mining Agents
265 Tesfay Zemuy Gebrekidan, Sebastian Stein and Timothy Norman Deep Reinforcement Learning with Coalition Action Selection for Online Combinatorial Resource Allocation with Arbitrary Action Space
267 Chaeeun Han, Jose Paolo Talusan, Dan Freudberg, Ayan Mukhopadhyay, Abhishek Dubey and Aron Laszka Forecasting and Mitigating Disruptions in Public Bus Transit Services
271 Daniel Bairamian, Philippe Marcotte, Joshua Romoff, Gabriel Robert and Derek Nowrouzezahrai Minimax Exploiter: A Data Efficient Approach for Competitive Self-Play
274 Alexandre Ichida, Felipe Meneguzzi and Rafael Cardoso BDI Agents in Natural Language Environments
275 Yongzhao Wang and Michael Wellman Generalized Response Objectives for Strategy Exploration in Empirical Game-Theoretic Analysis
279 David Hyland, Julian Gutierrez, Krishna Shankaranarayanan and Michael Wooldridge Rational Verification with Quantitative Probabilistic Goals
289 Zhiqiang Zhuang, Kewen Wang, Zhe Wang, Junhu Wang and Yinong Yang Maximising the Influence of Temporary Participants in Opinion Formation
292 Qidong Liu, Chaoyue Liu, Shaoyao Niu, Cheng Long, Jie Zhang and Mingliang Xu 2D-Ptr: 2D Array Pointer Network for Solving the Heterogeneous Capacitated Vehicle Routing Problem
293 Junqi Jiang, Francesco Leofante, Antonio Rago and Francesca Toni Recourse under Model Multiplicity via Argumentative Ensembling
294 Aleksei Kondratev and Egor Ianovski The Proportional Veto Principle in Preference Aggregation
295 Pragnya Alatur, Giorgia Ramponi, Niao He and Andreas Krause Provably Learning Nash Policies in Constrained Markov Potential Games
300 Chen Cheng and Jinglai Li ODEs learn to walk: ODE-Net based data-driven modeling for crowd dynamics
304 Subham Pokhriyal, Shweta Jain, Ganesh Ghalme, Swapnil Dhamal and Sujit Gujar Simultaneously Achieving Group Exposure Fairness and Within-Group Meritocracy in Stochastic Bandits
306 Sankarshan Damle, Manisha Padala and Sujit Gujar Designing Redistribution Mechanisms for Reducing Transaction Fees in Blockchains
310 Joel Dyer, Arnau Quera-Bofarull, Nicholas Bishop, J. Doyne Farmer, Anisoara Calinescu and Michael Wooldridge Population synthesis as scenario generation for simulation-based planning under uncertainty
311 Davide Catta, Jean Leneutre, Vadim Malvone and Aniello Murano Obstruction Alternating-time Temporal Logic: a Strategic Logic to Reason about Dynamic Models
318 Xinyu Tang, Hongtao Lv, Yingjie Gao, Fan Wu, Lei Liu and Lizhen Cui Towards Efficient Auction Design with ROI Constraints
319 Yudong Hu, Congying Han, Tiande Guo and Hao Xiao Applying Opponent Modeling for Automatic Bidding in Online Repeated Auctions
320 Haozhe Ma, Thanh Vinh Vo and Tze-Yun Leong Mixed-Initiative Bayesian Sub-Goal Optimization in Hierarchical Reinforcement Learning
325 Sanket Shah, Arun Suggala, Milind Tambe and Aparna Taneja Efficient Public Health Intervention Planning Using Decomposition-Based Decision-focused Learning
326 Qian Lin, Chao Yu, Zongkai Liu and Zifan Wu Policy-regularized Offline Multi-objective Reinforcement Learning
346 Matej Jusup, Barna Pásztor, Tadeusz Janik, Kenan Zhang, Francesco Corman, Andreas Krause and Ilija Bogunovic Safe Model-Based Multi-Agent Mean-Field Reinforcement Learning
347 Sangwon Seo and Vaibhav V Unhelkar IDIL: Imitation Learning of Intent-Driven Expert Behavior
350 Chikadibia Ihejimba and Rym Z. Wenkstern A Cloud-Based Microservices Solution for Multi-Agent Traffic Control Systems
363 Sebastian Rodriguez, John Thangarajah and Andrew Davey Design Patterns for Explainable Agents (XAg)
365 Ahad N. Zehmakan Majority-based Preference Diffusion on Social Networks
367 Tong Niu, Weihao Zhang and Rong Zhao Solution-oriented Agent-based Models Generation with Verifier-assisted Iterative In-context Learning
369 Saaduddin Mahmud, Marcell Vazquez-Chanlatte, Stefan Witwicki and Shlomo Zilberstein Explaining the Behavior of POMDP-based Agents Through the Impact of Counterfactual Information
380 Benjamin Patrick Evans and Sumitra Ganesh Learning and calibrating heterogeneous bounded rational market behaviour with multi-agent reinforcement learning
384 Ninell Oldenburg and Tan Zhi-Xuan Learning and Sustaining Shared Normative Systems via Bayesian Rule Induction in Markov Games
387 Kefan Su, Siyuan Zhou, Jiechuan Jiang, Gan Chuang, Xiangjun Wang and Zongqing Lu Multi-Agent Alternate Q-Learning
388 Marc Serramia, Natalia Criado and Michael Luck Multi-user norm consensus
390 Nico Potyka, Yuqicheng Zhu, Yunjie He, Evgeny Kharlamov and Steffen Staab Robust Knowledge Extraction from Large Language Models using Social Choice Theory
397 Shaojie Bai, Dongxia Wang, Tim Muller, Peng Cheng and Jiming Chen Stability of Weighted Majority Voting under Estimated Weights
399 Yixuan Li, Weiyi Xu, Yanchen Deng, Weiwei Wu and Wanyuan Wang Factor Graph Neural Network Meets Max-Sum: A Real-Time Route Planning Algorithm for Massive-Scale Trips
401 Yuhui Chen, Haoran Li and Dongbin Zhao Boosting Continuous Control with Consistency Policy
405 Haruyuki Nakagawa, Yoshitaka Miyatani and Asako Kanezaki Linking Vision and Multi-Agent Communication through Visible Light Communication using Event Cameras
409 Gergely Csáji A Simple 1.5-approximation Algorithm for a Wide Range of Maximum Size Stable Matching Problems
413 Soumyabrata Pal, Milind Tambe, Arun Suggala, Karthikeyan Shanmugam and Aparna Taneja Improving Mobile Maternal and Child Health Care Programs: Collaborative Bandits for Time slot selection
414 Sz-Ting Tzeng, Nirav Ajmeri and Munindar P. Singh Norm Enforcement with a Soft Touch: Faster Emergence, Happier Agents
415 Cheuk Chi Kitty Fung, Qizhen Zhang, Chris Lu, Jia Wan, Timon Willi and Jakob Foerster Analysing the Sample Complexity of Opponent Shaping
420 Ziqi Liu and Laurence Liu GraphSAID: Graph Sampling via Attention based Integer Programming Method
423 Zhaoxing Yang, Haiming Jin, Yao Tang and Guiyun Fan Risk-Aware Constrained Reinforcement Learning with Non-Stationary Policies
427 Amy Fang and Hadas Kress-Gazit High-Level, Collaborative Task Planning Grammar and Execution for Heterogeneous Agents
434 Zewen Yang, Xiaobing Dai, Akshat Dubey, Sandra Bütow, Sandra Hirche and Georges Hattab Whom to Trust? Elective Learning for Distributed Gaussian Process Regression
436 Dayang Liang, Yaru Zhang and Yunlong Liu Episodic Reinforcement Learning with Expanded State-reward Space
437 Akbir Khan, Timon Willi, Newton Kwan, Andrea Tacchetti, Chris Lu, Edward Grefenstette, Tim Rocktäschel and Jakob Nicolaus Foerster Scaling Opponent Shaping to High Dimensional Games
453 Benjamin Newman, Chris Paxton, Kris Kitani and Henny Admoni Bootstrapping Linear Models for Fast Online Adaptation in Human-Agent Collaboration
454 Gennaro Auricchio, Jie Zhang and Mengxiao Zhang Extended Ranking Mechanisms for the $m$-Capacitated Facility Location Problem in Bayesian Mechanism Design
458 Kalle Kujanpää, Amin Babadi, Yi Zhao, Juho Kannala, Alexander Ilin and Joni Pajarinen Continuous Monte Carlo Graph Search
464 Ankang Sun and Bo Li Allocating contiguous blocks of indivisible chores fairly revisited
468 Eric Roslin Wete Poaka, Joel Greenyer, Daniel Kudenko and Wolfgang Nejdl Multi-Robot Motion and Task Planning in Automotive Production Using Controller-based Safe Reinforcement Learning
470 Hung Le, Kien Do, Dung Nguyen and Svetha Venkatesh Beyond Surprise: Improving Exploration Through Surprise Novelty
477 Linas Nasvytis, Kai Sandbrink, Jakob Foerster, Tim Franzmeyer and Christian Schroeder de Witt Rethinking Out-of-Distribution Detection for Reinforcement Learning: Advancing Methods for Evaluation and Detection
481 Shuwa Miura and Shlomo Zilberstein Observer-Aware Planning with Implicit and Explicit Communication
485 Hairi, Zifan Zhang and Jia Liu Sample and Communication Efficient Fully Decentralized MARL Policy Evaluation via a New Approach: Local TD update
488 Zhicheng Zhang, Yancheng Liang, Yi Wu and Fei Fang MESA: Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure
489 Taha Eghtesad, Sirui Li, Yevgeniy Vorobeychik and Aron Laszka Multi-Agent Reinforcement Learning for Assessing False-Data Injection Attacks on Transportation Networks
494 Matheus Aparecido Do Carmo Alves, Amokh Varma, Yehia Elkhatib and Leandro Soriano Marcolino It Is Among Us: Identifying Adversaries in Ad-hoc Domains Using Q-valued Bayesian Estimations
498 Lu Li, Jiafei Lyu, Guozheng Ma, Zilin Wang, Zhenjie Yang, Xiu Li and Zhiheng Li Normalization Enhances Generalization in Visual Reinforcement Learning
500 Yaoxin Ge, Yao Zhang, Dengji Zhao, Zhihao Gavin Tang, Hu Fu and Pinyan Lu Incentives for Early Arrival in Cooperative Games
503 Xinran Li and Jun Zhang Context-aware Communication For Multi-agent Reinforcement Learning
505 Weiqin Chen, James Onyejizu, Long Vu, Lan Hoang, Dharmashankar Subramanian, Koushik Kar, Sandipan Mishra and Santiago Paternain Adaptive Primal-Dual Method for Safe Reinforcement Learning
507 Simone Parisi, Montaser Mohammedalamen, Alireza Kazemipour, Matthew Taylor and Michael Bowling Monitored Markov Decision Processes
513 Yu He, Alexander Lam and Minming Li Facility Location Games with Scaling Effects
518 Nikhil Singh and Indranil Saha Frugal Actor-Critic: Sample Efficient Off-Policy Deep Reinforcement Learning Using Unique Experiences
523 Mingyue Zhang, Nianyu Li, Jialong Li, Jiachun Liao and Jiamou Liu Memory-Based Resilient Control  Against Non-cooperation in Multi-agent Flocking
533 Cong Guan, Ruiqi Xue, Ziqian Zhang, Lihe Li, Yichen Li, Lei Yuan and Yang Yu Cost-aware Offline Safe Meta Reinforcement Learning with Robust In-Distribution Online Task Adaptation
537 Gauri Gupta, Ritvik Kapila, Ayush Chopra and Ramesh Raskar First 100 days of pandemic; an interplay of pharmaceutical, behavioral and digital interventions – A study using agent based modeling
545 Aditya Shinde and Prashant Doshi Modeling Cognitive Biases in Decision-Theoretic Planning for Active Cyber Deception
547 Pooja Kulkarni, Rucha Kulkarni and Ruta Mehta Approximating APS Under Submodular and XOS Valuations with Binary Marginals
558 Daniel Koyfman, Shahaf Shperberg, Dor Atzmon and Ariel Felner Minimizing State Exploration While Searching Graphs with Unknown Obstacles
560 Shahaf Shperberg, Bo Liu and Peter Stone Relaxed Exploration Constrained Reinforcement Learning
562 Otto Kuusela and Debraj Roy Higher order reasoning under intent uncertainty reinforces the Hobbesian Trap
564 Mattia Chiari, Alfonso Emilio Gerevini, Andrea Loreggia, Luca Putelli and Ivan Serina Fast and Slow Goal Recognition
571 Nusrath Jahan and Johnathan Mell Unraveling the Tapestry of Deception and Personality: A Deep Dive into Multi-Issue Human-Agent Negotiation Dynamics
575 Thomas Archbold, Bart de Keijzer and Carmine Ventre Willy Wonka Mechanisms
576 Gabriel Ballot, Vadim Malvone, Jean Leneutre and Youssef Laarouchi Strategic reasoning under capacity-constrained agents
588 Andreas Sauter, Nicolò Botteghi, Erman Acar and Aske Plaat CORE: Towards Scalable and Efficient Causal Discovery with Reinforcement Learning
589 Nicole Orzan, Erman Acar, Davide Grossi and Roxana Rădulescu Emergent Cooperation under Uncertain Incentive Alignment
592 Alba Aguilera, Nieves Montes, Georgina Curto, Carles Sierra and Nardine Osman Can poverty be reduced by acting on discrimination? An agent-based model for policy making
600 Nicos Protopapas, Vahid Yazdanpanah, Enrico Gerding and Sebastian Stein Online Decentralised mechanisms for dynamic ridesharing
601 Bo Li, Ankang Sun and Shiji Xing Bounding the Incentive Ratio of the Probabilistic Serial Rule
608 Ian Gemp, Marc Lanctot, Luke Marris, Yiran Mao, Edgar Duéñez-Guzmán, Sarah Perrin, Andras Gyorgy, Romuald Elie, Georgios Piliouras, Michael Kaisers, Daniel Hennes, Kalesha Bullard, Kate Larson and Yoram Bachrach Approximating the Core via Iterative Coalition Sampling
617 Marc Serramia, Maite Lopez-Sanchez, Juan Antonio Rodriguez Aguilar and Stefano Moretti Value alignment in participatory budgeting
626 Ruifeng Chen, Xu-Hui Liu, Tian-Shuo Liu, Shengyi Jiang, Feng Xu and Yang Yu Foresight Distribution Adjustment for Off-policy Reinforcement Learning
638 Moritz Graf, Thorsten Engesser and Bernhard Nebel Symbolic Computation of Sequential Equilibria
640 Yashovardhan S. Chati, Ramasubramanian Suriyanarayanan and Arunchandar Vasan Think Global, Act Local – Agent-Based Inline Recovery for Airline Operations
650 Yongxin Xu, Shangshang Wang, Hengquan Guo, Xin Liu and Ziyu Shao Learning to Schedule Online Tasks with Bandit Feedback
652 Pengdeng Li, Shuxin Li, Xinrun Wang, Jakub Cerny, Youzhi Zhang, Stephen McAleer, Hau Chan and Bo An Grasper: A Generalist Pursuer for Pursuit-Evasion Problems
653 Dmitry Chistikov, Luisa Fernanda Estrada Plata, Mike Paterson and Paolo Turrini Learning a Social Network by Influencing Opinions
654 Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen and Stefano V. Albrecht Causal Explanations for Sequential Decision-Making in Multi-Agent Systems
655 Hao Guo, Zhen Wang, Junliang Xing, Pin Tao and Yuanchun Shi Cooperation and Coordination in Heterogeneous Populations with Interaction Diversity
657 Tianyi Hu, Zhiqiang Pu, Xiaolin Ai, Tenghai Qiu and Jianqiang Yi Measuring Policy Distance for Multi-Agent Reinforcement Learning
660 Francis Rhys Ward, Matt MacDermott, Francesco Belardinelli, Francesca Toni and Tom Everitt The Reasons that Agents Act: Intention and Instrumental Goals
669 Yibin Yang, Mingfeng Fan, Chengyang He, Jianqiang Wang, Heye Huang and Guillaume Sartoretti Attention-based Priority Learning for Limited Time Multi-Agent Path Finding
671 Yaoxin Wu, Mingfeng Fan, Zhiguang Cao, Ruobin Gao, Yaqing Hou and Guillaume Sartoretti Collaborative Deep Reinforcement Learning for Solving Multi-Objective Vehicle Routing Problems
682 Francesco Belardinelli, Wojtek Jamroga, Munyque Mittelmann and Aniello Murano Verification of Stochastic Multi-Agent Systems with Forgetful Strategies
686 Wojtek Jamroga, Munyque Mittelmann, Aniello Murano and Giuseppe Perelli Playing Quantitative Games Against an Authority: On the Module Checking Problem
687 Nardine Osman and Mark d’Inverno A Computational Framework of Human Values
695 Nemanja Antonic, Raina Zakir, Marco Dorigo and Andreagiovanni Reina Collective robustness of heterogeneous decision-makers against stubborn individuals
704 Chao Chen, Dawei Wang, Feng Mao, Jiacheng Xu, Zongzhang Zhang and Yang Yu Deep Anomaly Detection via Active Anomaly Search
710 Xiaoqiang Wu, Qingling Zhu, Qiuzhen Lin, Weineng Chen and Jianqiang Li Adaptive Evolutionary Reinforcement Learning Algorithm with Early Termination Strategy
715 Chin-Wing Leung and Paolo Turrini Learning Partner Selection Rules that Sustain Cooperation in Social Dilemmas with the Option of Opting Out
716 Robert Loftin, Mustafa Mert Çelikok, Herke van Hoof, Samuel Kaski and Frans Oliehoek Uncoupled Learning of Differential Stackelberg Equilibria with Commitments
718 Panagiotis Lymperopoulos and Matthias Scheutz Oh, Now I See What You Want: Learning Agent Models with Internal States from Observations
723 Evan Albers, Mohammad Irfan and Matthew Bosch Beliefs, Shocks, and the Emergence of Roles in Asset Markets: An Agent-Based Modeling Approach
730 Xinpeng Lu, Song Heng, Huailing Ma and Junwu Zhu A Task-Driven Multi-UAV Coalition Formation Mechanism
733 Said Jabbour, Yue Ma and Badran Raddaoui Towards a Principle-based Framework for Repair Selection in Inconsistent Knowledge Bases
735 Farnoud Ghasemi and Rafał Kucharski Modelling the Rise and Fall of Two-sided Markets
741 Kipp Freud, Nathan Lepora, Matt Jones and Cian O’Donnell BrainSLAM: SLAM on Neural Population Activity Data
747 Grant Forbes, Nitish Gupta, Leonardo Villalobos-Arias, David Roberts, Colin Potts and Arnav Jhala Potential-Based Reward Shaping for Intrinsic Motivation
755 Daniel Garces and Stephanie Gil Surge Routing: Event-informed Multiagent Reinforcement Learning for Autonomous Rideshare
756 Yucheng Yang, Tianyi Zhou, Lei Han, Meng Fang and Mykola Pechenizkiy Automatic Curriculum for Unsupervised Reinforcement Learning
760 Baiting Luo, Yunuo Zhang, Abhishek Dubey and Ayan Mukhopadhyay Act as You Learn: Adaptive Decision-Making in Non-Stationary Markov Decision Processes
765 Arti Bandhana, Tomáš Kroupa and Sebastian Garcia Trust in Shapley: A Cooperative Quest for Global Trust in P2P Network
770 James Bailey and Craig Tovey Impact of Tie-Breaking on the Manipulability of Elections
781 Oz Kilic and Alan Tsang Catfished! Impacts of Strategic Misrepresentation in Online Dating
782 Elliot Fosong, Muhammad Arrasy Rahman, Ignacio Carlucho and Stefano Albrecht Learning Complex Teamwork Tasks using a Given Sub-task Decomposition
791 Turgay Caglar and Sarath Sreedharan HELP! Providing Proactive Support in the Presence of Knowledge Asymmetry
797 Shivakumar Mahesh, Anshuka Rangi, Haifeng Xu and Long Tran-Thanh Attacking Multi-Player Bandits and How to Robustify Them
802 Mathieu Reymond, Eugenio Bargiacchi, Diederik M. Roijers and Ann Nowé Interactively learning the user’s utility for best-arm identification in multi-objective multi-armed bandits
813 Jamison Weber, Dhanush Giriyan, Devendra Parkar, Dimitri Bertsekas and Andrea Richa Distributed Online Rollout for Multivehicle Routing in Unmapped Environments
826 Łukasz Janeczko, Jérôme Lang, Grzegorz Lisowski and Stanisław Szufa Discovering Consistent Subelections
836 Hannes Eriksson, Tommy Tram, Debabrota Basu, Mina Alibeigi and Christos Dimitrakakis Reinforcement Learning in the Wild with Maximum Likelihood-based Model Transfer
839 Alessandro Carminati, Davide Azzalini, Simone Vantini and Francesco Amigoni A Distributed Approach for Fault Detection in Swarms of Robots
848 Swapna Thorve, Henning Mortveit, Anil Kumar Vullikanti, Madhav Marathe and Samarth Swarup Assessing fairness of residential dynamic pricing for electricity using active learning with agent-based simulation
849 Zakaria Mehrab, Logan Stundal, Samarth Swarup, Srinivasan Venaktramanan, Bryan Lewis, Henning S. Mortveit, Christopher L. Barrett, Abhishek Pandey, Chad R. Wells, Alison P. Galvani, Burton H. Singer, David A. Leblang, Rita R. Colwell and Madhav Marathe Network Agency: An Agent-based Model of Forced Migration from Ukraine
851 Haoxiang Ma, Chongyang Shi, Shuo Han, Michael Dorothy and Jie Fu Covert Planning against Imperfect Observers
858 Stefan Sarkadi and Peter Lewis The Triangles of Dishonesty: Modelling the Evolution of Lies, Bullshit, and Deception in Agent Societies
862 Abhijin Adiga, Yohai Trabelsi, Tanvir Ferdousi, Madhav Marathe, S. S. Ravi, Samarth Swarup, Anil Kumar Vullikanti, Mandy Wilson, Sarit Kraus, Reetwika Basu, Supriya Savalkar, Matthew Yourek, Michael Brady, Kirti Rajagopalan and Jonathan Yoder Value-based Resource Matching with Fairness Criteria: Application to Agricultural Water Trading
879 Clarissa Costen, Anna Gautier, Nick Hawes and Bruno Lacerda Multi-Robot Allocation of Assistance from a Shared Uncertain Operator
889 Soroush Ebadian, Aris Filos-Ratsikas, Mohamad Latifian and Nisarg Shah Computational Aspects of Distortion
893 Michela Meister and Jon Kleinberg Containing the spread of a contagion on a tree
899 Benedetta Flammini, Davide Azzalini and Francesco Amigoni Preventing Deadlocks for Multi-Agent Pickup and Delivery in Dynamic Environments
909 Chin-Wing Leung, Shuyue Hu and Ho-fung Leung The Stochastic Evolutionary Dynamics of Softmax Policy Gradient in Games
920 Jack Dippel, Max Dupre la Tour, April Niu, Sanjukta Roy and Adrian Vetta Gerrymandering Planar Graphs
923 Shivam Goel, Yichen Wei, Panagiotis Lymperopoulos, Klára Churá, Matthias Scheutz and Jivko Sinapov NovelGym: A Flexible Ecosystem for Hybrid Planning and Learning Agents Designed for Open Worlds
926 Eura Shin, Siddharth Swaroop, Weiwei Pan, Susan Murphy and Finale Doshi-Velez Reinforcement Learning Interventions on Boundedly Rational Human Agents in Frictionful Tasks
927 Yan Song, Jiang He, Haifeng Zhang, Zheng Tian, Weinan Zhang and Jun Wang Boosting Studies of Multi-Agent Reinforcement Learning on Google Research Football Environment: the Past, Present, and Future
929 Zhaobin Mo, Yongjie Fu and Xuan Di PI-NeuGODE: Physics-Informed Graph Neural Ordinary Differential Equations for Spatiotemporal Trajectory Prediction
933 Ahmad Esmaeili, Zahra Ghorrati and Eric Matson Holonic Learning: A Flexible Agent-based Distributed Machine Learning Framework
934 Tran Cao Son, Loc Pham and Enrico Pontelli On Dealing with False Beliefs and Maintaining KD45_n Property
949 Vade Shah and Jason Marden Battlefield transfers in coalitional Blotto games
952 Antigoni Polychroniadou, Gabriele Ciprianni, Richard Hua and Tucker Balch Atlas-X Equity Financing: Unlocking New Methods to Securely Obfuscate Axe Inventory Data Based on Differential Privacy
953 Thomy Phan, Joseph Driscoll, Justin Romberg and Sven Koenig Confidence-Based Curriculum Learning for Multi-Agent Path Finding
958 Aravind Venugopal, Stephanie Milani, Fei Fang and Balaraman Ravindran MABL: Bi-Level Latent-Variable World Model for Sample-Efficient Multi-Agent Reinforcement Learning
971 Sami Abuhaimed and Sandip Sen Team Performance and User Satisfaction in Mixed Human-Agent Teams
977 Yash Shukla, Wenchang Gao, Vasanth Sarathy, Alvaro Velasquez, Robert Wright and Jivko Sinapov LgTS: Dynamic Task Sampling using LLM-generated sub-goals for Reinforcement Learning Agents
986 Hadi Hosseini, Andrew McGregor, Rik Sengupta, Rohit Vaish and Vignesh Viswanathan Tight Approximations for Graphical House Allocation
988 Arambam James Singh and Arvind Easwaran PAS: Probably Approximate Safety Verification of Reinforcement Learning Policy Using Scenario Optimization
991 Nathaniel Sauerberg and Caspar Oesterheld Computing Optimal Commitments to Strategies and Outcome-Conditional Utility Transfers
995 Manisha Natarajan, Chunyue Xue, Sanne van Waveren, Karen Feigh and Matthew Gombolay Mixed-Initiative Human-Robot Teaming under Suboptimality with Online Bayesian Adaptation
1004 Chenyuan Zhang, Charles Kemp and Nir Lipovetzky Human Goal Recognition as Bayesian Inference: Investigating the Impact of Actions, Timing, and Goal Solvability
1011 Tan Zhi-Xuan, Lance Ying, Vikash Mansinghka and Joshua Tenenbaum Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse Planning
1021 Linh Le Pham Van, Hung Tran-The and Sunil Gupta Policy Learning for Off-Dynamics RL with Deficient Support
1025 Xiaoliang Wu, Qilong Feng, Ziyun Huang, Jinhui Xu and Jianxin Wang New Algorithms for Distributed Fair k-Center Clustering: Almost Accurate as Sequential Algorithms
1037 Chengxing Jia, Fuxiang Zhang, Yi-Chen Li, Chenxiao Gao, Xu-Hui Liu, Lei Yuan, Zongzhang Zhang and Yang Yu Disentangling Policy from Offline Task Representation Learning via Adversarial Data Augmentation
1042 Jiazhu Fang and Wenjing Liu Facility Location Games with Fractional preferences and Limited Resources
1045 Batuhan Yardim, Artur Goldman and Niao He When is Mean-Field Reinforcement Learning Tractable and Relevant?
1050 Grzegorz Pierczyński and Stanisław Szufa Single-Winner Voting with Alliances: Avoiding the Spoiler Effect
1057 Raven Beutner, Bernd Finkbeiner, Hadar Frenkel and Niklas Metzger Monitoring Second-Order Hyperproperties
1061 Nasik Muhammad Nafi, Raja Farrukh Ali, William Hsu, Kevin Duong and Mason Vick Policy Optimization using Horizon Regularized Advantage to Improve Generalization in Reinforcement Learning
1069 Danai Vachtsevanou, Bruno de Lima, Andrei Ciortea, Jomi Fred Hubner, Simon Mayer and Jérémy Lemée Enabling BDI Agents to Reason on a Dynamic Action Repertoire in Hypermedia Environments
1076 Vittorio Bilo, Michele Flammini, Gianpiero Monaco, Luca Moscardelli and Cosimo Vinci On Green Sustainability of Resource Selection Games with Equitable Cost-Sharing
1121 Philip Jordan, Florian Grötschla, Fan Flint Xiaofeng and Roger Wattenhofer Decentralized Federated Policy Gradient with Byzantine Fault-Tolerance and Provably Fast Convergence