Simplifying liquidation auctions #53
Comments
Finally, there is an alternative tree design to the AVL that I suggested, which should be lighter-weight in terms of implementation; it might be worth considering. In this design, your queue is a forest. When a tree at the back of the forest has the same height as the one next to it, you merge them, and do so recursively, like propagating a carry. It's a lot easier to implement than AVLs and thus potentially a lot lighter, though it does keep all of the logic of following pointers.
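To make the suggestion concrete, here is a minimal sketch of such a forest queue (illustrative types and names, not taken from the Checker code base; it assumes complete binary trees, with the newest trees at the front of the list):

```ocaml
type 'a tree =
  | Leaf of 'a
  | Node of int * 'a tree * 'a tree   (* height, left, right *)

let height = function
  | Leaf _ -> 0
  | Node (h, _, _) -> h

(* Two trees of equal height merge into one tree one level taller. *)
let merge l r = Node (height l + 1, l, r)

(* The queue is a forest; the newest (smallest) trees sit at the head. *)
type 'a forest = 'a tree list

(* Push one element: add a new leaf, then propagate "carries" while the two
   front trees have equal height, exactly like incrementing a binary counter. *)
let push (x : 'a) (f : 'a forest) : 'a forest =
  let rec fixup = function
    | a :: b :: rest when height a = height b -> fixup (merge b a :: rest)
    | forest -> forest
  in
  fixup (Leaf x :: f)
```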
Yes, definitely. Then they'd be batched up to make lots.
Yes, that was our thinking.
There might only be, say, 10 lots, each containing a couple of nats and a tez total, in which case it would probably be fine. You'd have to cancel all the liquidations in a lot to empty the lot out, but yes, that could happen if you could effectively fill a lot with a single burrow. Fine with disallowing cancellations tbh, the notes above were about how we might still be able to support them.
Yes, I think slices within a lot would not be stored in an O(n) structure.
It's an interesting possibility.
We'll take a look at this because it's a cool design, but I'm not confident that using it as a straight AVL replacement with all the other functionality intact would be a size win: the ability to simplify a lot of the code around auctions is a significant benefit. Ideally we'd find a design that minimises pointer fiddling.
Right now the liquidation auction process is very complex and results in a lot of generated Michelson, largely because of the AVL implementation. The complexity is considered a risk in terms of lurking bugs, as well as contract size and operational cost on-chain. In this issue we outline a different approach to structuring the liquidation auctions which allows for a simpler implementation. Feedback is requested, in particular on market effects we may not have foreseen.
Limited size of liquidated collateral chunks
When the liquidation of a burrow is requested, limit the amount liquidated to a moderate but fairly high amount, e.g. 1000xtz. This implies multiple liquidation requests might be necessary to bring a large burrow back into an adequately collateralised state, and also that multiple 1xtz liquidation rewards might be paid out in the process. The goal of this change is to remove the need for splitting liquidated collateral chunks across auctions.
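A minimal sketch of the capping rule, assuming a 1000xtz cap expressed in mutez; the constant and function names below are hypothetical, not Checker's actual API:

```ocaml
(* Cap on the collateral liquidated per request: 1000 xtz in mutez.
   Hypothetical constant; the real value would be a system parameter. *)
let max_chunk_mutez = 1_000_000_000L

(* Amount to liquidate in a single request, given how much collateral would
   need to be sold overall; a large burrow simply takes several requests. *)
let chunk_to_liquidate ~(needed_mutez : int64) : int64 =
  min needed_mutez max_chunk_mutez
```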
More relaxed batching into lots
As collateral chunks are queued for auction, they will simply be assigned to the next queued auction lot. If the lot is already larger than a certain threshold, e.g. 10,000xtz, then a fresh lot will be created in the queue. (Due to the limited chunk size above, the lots will generally still remain around 10,000xtz.) The goal here is to avoid splitting/re-splitting chunks into lots at a later time. An assumption is that the primary purpose of the original batching mechanism was to avoid auctioning many small amounts individually, which could overwhelm the system.
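A rough sketch of this relaxed batching rule, with illustrative types and an assumed threshold constant (amounts in mutez); a chunk always lands in the newest queued lot, and a fresh lot is opened only once that lot has reached the threshold:

```ocaml
(* Illustrative types; not the actual Checker representation. *)
type chunk = { burrow_id : int; amount_mutez : int64 }
type lot = { chunks : chunk list; total_mutez : int64 }

(* 10,000 xtz threshold, as an assumed constant. *)
let lot_threshold_mutez = 10_000_000_000L

(* The queue of lots, oldest first.  A chunk always joins the newest lot;
   a fresh lot is opened only once that lot has crossed the threshold, so
   lots stay at roughly the threshold plus at most one chunk. *)
let enqueue_chunk (c : chunk) (lots : lot list) : lot list =
  match List.rev lots with
  | newest :: rest when newest.total_mutez < lot_threshold_mutez ->
      let newest' =
        { chunks = c :: newest.chunks;
          total_mutez = Int64.add newest.total_mutez c.amount_mutez } in
      List.rev (newest' :: rest)
  | rev_lots ->
      (* Either the queue is empty or the newest lot is already full:
         start a fresh lot containing just this chunk. *)
      List.rev ({ chunks = [c]; total_mutez = c.amount_mutez } :: rev_lots)
```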
Limit auction depth by limiting queued lots
When too many ~10,000xtz lots are queued, refuse to allow further liquidations until lots have been cleared. (There is already a simple limit based on AVL tree height, beyond which further liquidation of burrows is blocked.) Setting such a limit to a fairly low number allows us to use a regular Michelson list to store lots and then iterate over them when we need to, which simplifies the implementation.
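The limit itself could be as simple as the following sketch; max_queued_lots is an assumed, untuned constant:

```ocaml
(* Assumed, untuned bound on the number of queued lots. *)
let max_queued_lots = 10

(* Further liquidation requests are refused once the queue is full. *)
let can_request_liquidation (queued_lots : 'a list) : bool =
  List.length queued_lots < max_queued_lots
```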
Ability to allow cancellation via simple removal from queued lots
Cancellation of chunks will not be allowed when a chunk is in a currently-open auction, but if a chunk is queued in a lot for a future auction, then cancellation could be permitted, via simple removal. This would mean that in the presence of many cancellations, upcoming lots could potentially shrink well below 10,000xtz. An auction could potentially combine multiple queued lots if they have shrunk significantly, but it would be preferable to avoid this complication. (We assume that lots would not become so small that their size would skew the auction price.)
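A sketch of cancellation as simple removal, reusing the illustrative chunk/lot shapes from the batching sketch above plus an assumed per-chunk identifier; a chunk is only removable while its lot is still queued:

```ocaml
(* Same illustrative shapes as above, with an assumed per-chunk identifier. *)
type chunk = { chunk_id : int; burrow_id : int; amount_mutez : int64 }
type lot = { chunks : chunk list; total_mutez : int64 }

(* Remove a queued chunk from its (still-queued) lot and adjust the total.
   Chunks whose lot is already in an open auction are never passed here. *)
let cancel_chunk (id : int) (l : lot) : lot =
  match List.partition (fun c -> c.chunk_id = id) l.chunks with
  | [c], rest ->
      { chunks = rest; total_mutez = Int64.sub l.total_mutez c.amount_mutez }
  | _ -> l  (* id not present: leave the lot unchanged *)
```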
Reconciling burrow state
Currently each burrow tracks how much of its collateral is at auction, so that the expected sale proceeds can be used to determine whether the burrow can be subjected to further liquidation. This means burrow state needs to be reconciled with auction state after an auction finishes.
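Purely as an illustration of that bookkeeping (the field names below are assumptions, not Checker's actual burrow record):

```ocaml
(* Illustrative burrow record; field names are assumptions. *)
type burrow = {
  collateral_mutez : int64;             (* collateral still held in the burrow *)
  collateral_at_auction_mutez : int64;  (* collateral queued or up for auction *)
  outstanding_kit : int64;              (* kit debt, in its smallest unit *)
}

(* Once a slice of [sold_mutez] collateral has been sold for [repaid_kit],
   the burrow's view is brought back in line with the auction outcome. *)
let reconcile_slice ~(sold_mutez : int64) ~(repaid_kit : int64) (b : burrow) : burrow =
  { b with
    collateral_at_auction_mutez = Int64.sub b.collateral_at_auction_mutez sold_mutez;
    outstanding_kit = Int64.sub b.outstanding_kit repaid_kit }
```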
In the existing design, "slices"/"chunks" of liquidation are "touched" to reconcile them against their origin burrows and to transfer the won collateral to the winner. This "touching" happens via a checker entrypoint (which is a very low-level operation based on off-chain analysis) or by checker itself routinely touching some of the oldest liquidation slices.
This means a new storage layout must allow efficient iteration of chunks within an auctioned lot and lookup/modification/deletion using IDs stored in burrows. We have some thoughts about (relatively simple) data structures for this, that we will note here in due course.
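Purely as an illustration of the kind of structure that would meet these requirements (not the design alluded to above; all names are assumptions), one option is to keep chunks in a single map keyed by id while each lot carries only the ids of its chunks:

```ocaml
module ChunkMap = Map.Make (Int)

type chunk = { burrow_id : int; amount_mutez : int64 }
type lot = { chunk_ids : int list; total_mutez : int64 }

type auction_state = {
  chunks : chunk ChunkMap.t;  (* chunk id -> chunk; burrows store these ids *)
  lots : lot list;            (* small, bounded queue; iterated when needed *)
}

(* Iterate the chunks belonging to one lot. *)
let chunks_of_lot (st : auction_state) (l : lot) : chunk list =
  List.filter_map (fun id -> ChunkMap.find_opt id st.chunks) l.chunk_ids

(* Delete a chunk by the id a burrow holds (e.g. on cancellation),
   keeping the affected lot's total consistent. *)
let delete_chunk (id : int) (st : auction_state) : auction_state =
  match ChunkMap.find_opt id st.chunks with
  | None -> st
  | Some c ->
      let fix_lot l =
        if List.mem id l.chunk_ids then
          { chunk_ids = List.filter (fun i -> i <> id) l.chunk_ids;
            total_mutez = Int64.sub l.total_mutez c.amount_mutez }
        else l
      in
      { chunks = ChunkMap.remove id st.chunks;
        lots = List.map fix_lot st.lots }
```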