Merge pull request #131 from StanfordASL/djalota/update_avg_dec26_2024
updated aamas paper
djalota authored Dec 26, 2024
2 parents ac744e3 + 6d26a59 commit 867e923
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions _bibliography/ASL_Bib.bib
@@ -5230,12 +5230,12 @@ @inproceedings{BigazziEtAl2024
 @inproceedings{BerriaudElokdaEtAl2024,
   author = {Berriaud, D. and Elokda, E. and Jalota, D. and Frazzoli, E. and Pavone, M. and Dorfler, F.},
   title = {To Spend or to Gain: Online Learning in Repeated Karma Auctions},
-  booktitle = proc_WINE,
+  booktitle = proc_AAMAS,
   year = {2024},
   abstract = {Recent years have seen a surge of artificial currency-based mechanisms in contexts where monetary instruments are deemed unfair or inappropriate, e.g., for traffic congestion management or allocation of food donations. Yet the applicability of these mechanisms remains limited, since it is challenging for users to learn how to bid an artificial currency that has no value outside the mechanism. Indeed, users must learn the value of the currency as well as how to optimally spend it in a coupled manner. In this paper, we study learning to bid in two prominent classes of artificial currency auctions: those in which currency is issued at the beginning of a finite period only to be spent over the period; and those where in addition to the initial endowment currency is transferred among users by redistributing payments in each time step. In the latter class the currency has been referred to as karma, since users do not only spend karma to acquire public resources but also gain karma for yielding them. In both classes, we propose a simple learning strategy, called adaptive karma pacing strategy, and show that a) it is asymptotically optimal for a single agent bidding against a stationary competition; b) it leads to convergent learning dynamics when all agents adopt it; and c) it constitutes an approximate Nash equilibrium as the number of agents grows. This requires a novel analysis in comparison to adaptive pacing strategies in monetary auctions, since we depart from the classical assumption that the currency has known value outside the auctions. The analysis is further complicated by the possibility to both spend and gain currency in auctions with redistribution.},
-  address = {Edinburgh, United Kingdom},
-  month = jul,
-  keywords = {sub},
+  address = {Detroit, Michigan},
+  month = may,
+  keywords = {press},
   owner = {devanshjalota},
   timestamp = {2024-03-01},
   url = {https://arxiv.org/abs/2403.04057}
