refactor: KOS and preprocessing traits #155
Merged
This PR adds traits for the preprocessing model and also updates KOS to implement random OT for any type `T` where `Standard: Distribution<T>`.
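As a rough illustration of what that bound enables, here is a minimal sketch (assuming `rand` 0.8). The `RandomOTSender` trait and `SketchSender` type are hypothetical, and deriving messages from a local RNG is only a stand-in; the real KOS sender derives its random messages from the OT-extension transcript.

```rust
use rand::distributions::{Distribution, Standard};
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};

/// Hypothetical random-OT sender trait; the trait added in this PR may differ.
trait RandomOTSender<T> {
    /// Returns `count` pairs of random messages.
    fn send_random(&mut self, count: usize) -> Vec<[T; 2]>;
}

/// Stand-in sender used only to show the generic bound.
struct SketchSender {
    rng: StdRng,
}

impl<T> RandomOTSender<T> for SketchSender
where
    Standard: Distribution<T>,
{
    fn send_random(&mut self, count: usize) -> Vec<[T; 2]> {
        (0..count)
            .map(|_| [self.rng.gen::<T>(), self.rng.gen::<T>()])
            .collect()
    }
}

fn main() {
    let mut sender = SketchSender {
        rng: StdRng::seed_from_u64(0),
    };
    // Works for any `T` with `Standard: Distribution<T>`, e.g. u128 or u64.
    let _blocks: Vec<[u128; 2]> = sender.send_random(4);
    let _words: Vec<[u64; 2]> = sender.send_random(4);
}
```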
### `Allocate` trait rationale

It is difficult to properly model preprocessing in the case where a functionality is shared by multiple others. One approach would be to update the `preprocess` function so that it takes a `count` argument and then use some sort of async `Barrier` for synchronization. That approach doesn't work when multiplexing isn't available (only 1 context).

The approach I settled on was to decouple the capacity-reservation step from the execution of the preprocessing. The implemented model has 2 steps:
1. `Allocate` is called by protocols, which allocate what they need from the sub-protocol. This step does not require synchronization.
2. `Preprocess` is called, which actually executes all of the preprocessing batched together.

Modeling it this way works for both cases where a functionality is "shared" or owned directly by another protocol.
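For concreteness, here is a minimal sketch of what the two-step model could look like as traits. The names `Allocate` and `Preprocess` come from this PR, but the method signatures, the error type, and the synchronous form (no context argument, no async) are assumptions for illustration only.

```rust
/// Hypothetical error type, for illustration only.
#[derive(Debug)]
pub struct PreprocessError;

/// Step 1: callers reserve the capacity they will need from a sub-protocol.
/// This only records a count, so no synchronization between callers is needed.
pub trait Allocate {
    fn alloc(&mut self, count: usize);
}

/// Step 2: the owner runs the preprocessing once, batching everything that
/// was previously allocated.
pub trait Preprocess: Allocate {
    fn preprocess(&mut self) -> Result<(), PreprocessError>;
}

/// Example: two functionalities sharing one sub-protocol each allocate what
/// they need, then a single `preprocess` call services both reservations.
pub fn setup<P: Preprocess>(shared: &mut P) -> Result<(), PreprocessError> {
    shared.alloc(1024); // e.g. OTs needed by functionality A
    shared.alloc(256);  // e.g. OTs needed by functionality B
    shared.preprocess() // one batched execution covering both
}
```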
Side note: Once we have SoftSpoken, which can be extended multiple times, we should be able to just share the base OT and perform OT extensions in parallel instead of this batching. This will remove the need for shared-state synchronization and be more performant in terms of latency and compute.