DataCap should be used based on bytes being stored, not padded piece size #1419
Comments
This is a market-actor design issue, and would require changing how we handle DataCap in markets: we'd have to allow clients to specify how many bytes of verified data should be assigned to a given piece. Right now the market actor doesn't know about the raw data size; it only cares about whole pieces. (This will require a FIP and likely a bunch of non-trivial actor changes.)
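For concreteness, here is a minimal Go sketch of the shape of the problem: the deal proposal the market actor sees carries only the padded piece size and a verified flag, so a change along these lines would mean adding something like a raw-byte field. The struct below is abridged and approximate, not the actual specs-actors definition, and the `VerifiedDataSize` field name is hypothetical.

```go
package market

import (
	addr "github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/ipfs/go-cid"
)

// Abridged, approximate sketch of a market deal proposal as it works today.
type DealProposal struct {
	PieceCID     cid.Cid
	PieceSize    abi.PaddedPieceSize // whole, power-of-two padded piece
	VerifiedDeal bool                // today, DataCap is debited for all of PieceSize
	Client       addr.Address
	Provider     addr.Address
	StartEpoch   abi.ChainEpoch
	EndEpoch     abi.ChainEpoch

	// Hypothetical addition suggested above: the number of raw bytes of
	// verified data in the piece, so DataCap could be debited for this
	// amount instead of for the whole padded PieceSize.
	// VerifiedDataSize abi.UnpaddedPieceSize
}
```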
I feel like if DataCap is not supposed to be sacred, this doesn't really matter, but yeah, having this will make things more rigorous.
The inefficient usage of DataCap reflects inefficient usage of the underlying storage. Irrespective of FIL+ verified status, Filecoin right now requires pieces to be sized in powers of two. If a deal uses less than that, the padded leftover is unavailable to the miner to store other deals. I know it's a bit of a stretch right now to imagine the supply of storage being scarce, but if it were, then the fact that the client has to pay for the whole padded piece, and consume DataCap for the whole padded piece, reflects the underlying storage economics. A client wishing to economise on use of DataCap (and paid-for storage) is incentivised to pack the data more tightly into power-of-two deals.

I'm not clear on the dynamics that cause DataCap to be scarce. I don't think this is actionable within the actors right now; it would need a FIP to determine whether we want to change it, and then to lay out how. I suggest opening a discussion in the FIPs repo instead.
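To make the padding overhead concrete, here is a small, self-contained Go sketch (not actor or lotus code) that estimates the padded piece size for a given raw payload, assuming the standard Fr32 expansion of 128/127 and rounding up to the next power of two, and shows how much DataCap the current rules debit versus the bytes actually stored.

```go
package main

import (
	"fmt"
	"math/bits"
)

// paddedPieceSize returns the padded piece size (in bytes) that a payload of
// rawBytes would occupy: Fr32 padding expands data by 128/127, and the result
// is rounded up to the next power of two, the smallest valid padded piece size
// being 128 bytes. This is an illustrative sketch, not the actual actor code.
func paddedPieceSize(rawBytes uint64) uint64 {
	// Fr32 expansion: every 127 bytes of payload become 128 bytes on disk.
	fr32 := (rawBytes*128 + 126) / 127 // round up
	if fr32 < 128 {
		return 128
	}
	// Round up to the next power of two.
	if fr32&(fr32-1) == 0 {
		return fr32
	}
	return 1 << uint(bits.Len64(fr32))
}

func main() {
	const GiB = 1 << 30
	raw := uint64(10 * GiB)
	padded := paddedPieceSize(raw)
	// Under the current rules, DataCap is debited for the whole padded piece.
	fmt.Printf("raw payload:      %d bytes (%.1f GiB)\n", raw, float64(raw)/GiB)
	fmt.Printf("padded piece:     %d bytes (%.1f GiB)\n", padded, float64(padded)/GiB)
	fmt.Printf("DataCap overhead: %.1f%%\n", 100*float64(padded-raw)/float64(raw))
}
```

For a 10 GiB payload this yields a 16 GiB padded piece, i.e. roughly 60% more DataCap consumed than bytes stored.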
DataCap, as a resource, is used to incentivize useful utilization of the supply available in the network. Miners are incentivized to take deals that come with DataCap since that provides a substantial boost to their earnings for the useful storage they provide. By having DataCap be consumed based on the whole piece size rather than the raw bytes, we miss out on maximizing the leverage given to clients to make deals on the network. This is in addition to clients needing to learn and optimize for deal packing, which introduces additional complexity and worse UX.

Feedback received through user testing showed that it was confusing when there was a substantial disparity between the amount of data a client attempted to store and the amount of DataCap that ended up being used once the piece was padded. This also ends up unfairly rewarding miners whose rewards are inflated by sub-optimally packed sectors.
@anorth quick question for you - is the ask to take this into a FIP because the current actors implementation would not be able to support this?
Correct, see
ACK - this makes sense, thanks for confirming @jennijuju. FIP it is.
Basic Information
For clients on the network today who get DataCap and use it in deals, DataCap is used based on the padded piece size, rather than the raw byte size of the data.
Describe the problem
This is relatively unintuitive for users, who end up spending DataCap faster than they expect, leading to inefficient usage of the DataCap they have been allocated. DataCap should instead be consumed according to the amount of data a client is looking to store.
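As a worked example (assuming the standard ~128/127 Fr32 expansion and power-of-two padded piece sizes): a client storing 10 GiB of raw data ends up with 10 GiB × 128/127 ≈ 10.08 GiB of padded data, which is rounded up to a 16 GiB piece, so 16 GiB of DataCap is debited for 10 GiB actually stored, about 60% more than the bytes themselves.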