IFAAMAS Influential Paper Award

The IFAAMAS Influential Paper Award seeks to recognise publications that have made influential and long-lasting contributions to the field. Candidates for this award are papers that have proved a key result, led to the development of a new subfield, demonstrated a significant new application or system, or simply presented a new way of thinking about a topic that has proved influential.
This year’s award committee selected two papers (listed in no particular order) to be recognised with an IFAAMAS Influential Paper Award.
1. Negotiation decision functions for autonomous agents. Peyman Faratin, Carles Sierra, Nicholas R. Jennings. Robotics and Autonomous Systems 24(3–4), pages 159–182 (1998)
 
Citation:
“The article has been fundamental to the field of agent negotiation in the multiagent research community. With this seminal article, Faratin, Sierra, and Jennings made a highly influential contribution to agent research [1]. As of today, the article has received 1681 citations. Together with Rosenschein and Zlotkin’s “Rules of Encounter,” it set the foundations for the field of automated negotiation and underpins most of the current research on the topic. The research issues posed in [1] continue to guide research on agent negotiation; indeed, the aims of the “Automated Negotiating Agents Competition,” run in conjunction with the AAMAS conference since 2010, were already outlined in [1]. The recent interest in Diplomacy and the Hanabi challenge has revived interest in agent negotiation, which is expected to play a fundamental role in cooperative artificial intelligence in the future.”
 
2. Learning to cooperate via policy search. Leonid Peshkin, Kee-Eung Kim, Nicolas Meuleau, Leslie Pack Kaelbling. Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 489–496 (2000)
 
Citation:
“This paper makes a simple but critical observation: in decentralized (i.e., Dec-POMDP) settings, the ‘policy gradient’ is decentralizable. That is, when taking the normal (i.e., centralized) policy gradient and inspecting what information is needed to update the parameters of some agent i, it turns out that this gradient does not depend on any information from other agents. This is important because it implies that agents can implement decentralized learning (needing to observe only the team reward) with guarantees of converging to a local optimum. This stands in stark contrast to value-based methods, such as Q-learning based on individual information, for which no such results are known. It also provides an explanation for the great success that actor-critic methods have enjoyed in recent years, and the result has been a key building block in many methods.”
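
The observation behind this citation can be sketched in modern notation (an illustrative summary, not necessarily the paper’s original formulation): when the joint policy factorises into per-agent policies, \pi_\theta(a_t \mid o_t) = \prod_i \pi_{\theta_i}(a_t^i \mid o_t^i), the log of the joint policy decomposes into a sum, so the REINFORCE-style gradient with respect to agent i’s parameters reduces to

\nabla_{\theta_i} J(\theta) = \mathbb{E}_{\tau}\!\left[ R(\tau) \sum_t \nabla_{\theta_i} \log \pi_{\theta_i}\!\left(a_t^i \mid o_t^i\right) \right],

which involves only agent i’s own observations and actions together with the common team return R(\tau), and can therefore be estimated locally by each agent.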