vatWarehouse transcript replay must commit changes, and coordinate with crank success/fail rewind #2422
warner added a commit that referenced this issue on Jun 26, 2021:
This enhances SwingSet to have a "Vat Warehouse" which limits the number of "paged-in" vats to some maximum (currently 50). The idea is to conserve system RAM by allowing idle vats to remain "paged-out", which consumes only space on disk, until someone sends a message to them. The vat is then paged in, by creating a new xsnap process and reloading the necessary vat state. This reload process is greatly accelerated by loading a heap snapshot, if one is available. We only need to replay the suffix of the transcript that was recorded after the snapshot was taken, rather than the full (huge) transcript. Heap snapshots are stored in a new "snap store" component.

For each vat, the warehouse saves a heap snapshot after a configurable number of deliveries (default 200). In addition, it saves an initial snapshot after just a few deliveries (default 2), because all contract vats start out with a large delivery that provides the contract bundle to evaluate. By taking a snapshot quickly, we can avoid the time needed to re-evaluate that large bundle on almost all process restarts. This algorithm is a best guess: we'll refine it as we gather more data about the tradeoff between work now (the time it takes to create and write a snapshot), the storage space consumed by those snapshots, and work later (replaying more transcript). We estimate that a typical contract snapshot consumes about 300kB (compressed).

closes #2273
closes #2277
refs #2422
refs #2138 (might close it)

* refactor(replay): hoist handle declaration
* chore(xsnap): clarify names of snapStore temp files for debugging
* feat(swingset): initializeSwingset snapshots XS supervisor
  - solo: add xsnap, tmp dependencies
  - cosmic-swingset: declare dependencies on xsnap, tmp
  - snapshotSupervisor()
  - vk.saveSnapshot(), vk.getLastSnapshot()
  - test: mock vatKeeper needs getLastSnapshot()
  - test(snapstore): update snapshot hash
  - makeSnapstore in solo, cosmic-swingset
  - chore(solo): create xs-snapshots directory
  - more getVatKeeper -> provideVatKeeper
  - startPos arg for replayTranscript()
  - typecheck shows vatAdminRootKref could be missing
  - test pre-SES snapshot size
  - hoist snapSize to test title
  - clarify SES vs. pre-SES XS workers
  - factor bootWorker out of bootSESWorker
  - hoist Kb, relativeSize for sharing between tests

  misc:
  - WIP: restore from snapshot
  - hard-code remote style
* fix(swingset): don't leak xs-worker in initializeSwingset
  When taking a snapshot of the supervisor in initializeSwingset, we neglected to `.close()` it. Lack of a name hindered diagnosis, so let's fix that while we're at it.
* feat(swingset): save snapshot periodically after deliveries
  - vk.saveSnapShot() handles snapshotInterval
  - annotate type of kvStore in makeVatKeeper
  - move getLastSnapshot up for earlier use
  - refactor: rename snapshotDetail to lastSnapshot
  - factor out getTranscriptEnd
  - vatWarehouse.maybeSaveSnapshot()
  - saveSnapshot: don't require snapStore; fix startPos type
  - provide snapstore to vatKeeper via kernelKeeper
  - buildKernel: get snapstore out of hostStorage
  - chore: don't try to snapshot a terminated vat
* feat(swingset): load vats from snapshots
  - don't `setBundle` when loading from snapshot
  - provide startPos to replayTranscript()
  - test reloading a vat
* refactor(vatWarehouse): factor out, test LRU logic
* fix(vat-warehouse): remove vatID from LRU when evicting
* chore(vatKeeper): prune debug logging in saveSnapshot (FIXUP)
* feat(swingset): log bringing vats online (esp from snapshot)
  - manager.replayTranscript returns number of entries replayed
* chore: resolve "skip crank buffering?" issue
  after discussion with CM: maybeSaveSnapshot() happens before commitCrank(), so nothing special is needed here
* chore: prune makeSnapshot arg from evict()
  Not only is this option not implemented now, but CM's analysis shows that adding it would likely be harmful.
* test(swingset): teardown snap-store
* chore(swingset): initial sketch of snapshot reload test
* refactor: let itemCount be not-optional in StreamPosition
* feat: snapshot early then infrequently
  - refactor: move snapshot decision from vk.saveSnapshot() up to vw.maybeSaveSnapshot
* test: provide getLastSnapshot to mock vatKeeper
* chore: vattp: turn off managerType local work-around
* chore: vat-warehouse: initial snapshot after 2 deliveries
  integration testing shows this is closer to ideal
* chore: prune deterministic snapshot assertion
  oops. rebase problem.
* chore: fix test-snapstore ld.asset
  rebase / merge problem?!
* chore: never mind supervisorHash optimization
  With snapshotInitial at 2, there is little reason to snapshot after loading the supervisor bundles. The code doesn't carry its own weight. Plus, it seems to introduce a strange bug with marshal or something...

  ```
  test/test-home.js:37

  36:   const { board } = E.get(home);
  37:   await t.throwsAsync(
  38:     () => E(board).getValue('148'),

  getting a value for a fake id throws

  Returned promise rejected with unexpected exception:

  Error {
    message: 'Remotable (a string) is already frozen',
  }
  ```

* docs(swingset): document lastSnapshot kernel DB key
* refactor: capitalize makeSnapStore consistently
* refactor: replayTranscript caller is responsible for getLastSnapshot()
* test(swingset): consistent vat-warehouse test naming
* refactor(swingset): compute transcriptSnapshotStats in vatKeeper
  In an attempt to avoid reading the lastSnapshot DB key when the t.endPosition key was enough information to decide whether to take a snapshot, the vatWarehouse was peeking into the vatKeeper's business. Let's go with code clarity over (un-measured) performance.
* chore: use harden, not freeze; clarify lru
* chore: use distinct fixture directories to avoid collision
  The "temporary" snapstore directories used by two different tests began to overlap when the tests were moved into the same parent dir, and one test was deleting the directory while the other was still using it (as well as mingling files at runtime), causing an xsnap process to die with an IO error if the tests were run in parallel. This changes the two tests to use distinct directories.

  In the long run, we should either have them use `mktmp` to build a randomly-named known-unique directory, or establish a convention where tempdir names match the name of the test file and case using them, to avoid collisions as we add more tests.

Co-authored-by: Brian Warner <[email protected]>
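To make the paging behaviour concrete, here is a minimal sketch of an LRU vat warehouse along the lines the commit message describes: at most 50 vats stay online, and paging a vat in restores its latest heap snapshot (if any) and replays only the transcript suffix recorded after it. This is illustrative only, not SwingSet's actual implementation; `startWorker`, `loadSnapshot`, `replayTranscript`, and `evictWorker` are assumed hook functions.

```js
// Illustrative sketch only; the real vatWarehouse in SwingSet is wired up
// differently. The hooks passed in here are stand-ins, not SwingSet APIs.
const MAX_VATS_ONLINE = 50; // the "paged-in" limit mentioned above

function makeVatWarehouse({ startWorker, loadSnapshot, replayTranscript, evictWorker }) {
  // A Map preserves insertion order, so the first entry is the least recently used.
  const online = new Map(); // vatID -> worker

  async function ensureVatOnline(vatID) {
    const existing = online.get(vatID);
    if (existing) {
      // refresh this vat's LRU position
      online.delete(vatID);
      online.set(vatID, existing);
      return existing;
    }
    // at the limit: page out the least recently used vat
    if (online.size >= MAX_VATS_ONLINE) {
      const [lruVatID, lruWorker] = online.entries().next().value;
      await evictWorker(lruVatID, lruWorker);
      online.delete(lruVatID);
    }
    // page in: start a fresh xsnap process, restore the latest heap snapshot
    // if one exists, then replay only the transcript suffix recorded after it
    const worker = await startWorker(vatID);
    const lastSnapshot = await loadSnapshot(vatID, worker); // undefined if none
    const startPos = lastSnapshot ? lastSnapshot.endPosition : 0;
    await replayTranscript(vatID, worker, startPos);
    online.set(vatID, worker);
    return worker;
  }

  return { ensureVatOnline };
}
```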
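Similarly, the "snapshot early, then infrequently" policy amounts to a small decision function. The 2 and 200 defaults are the ones quoted above; the function shape and parameter names here are hypothetical.

```js
// Hypothetical decision helper; parameter names are invented for illustration.
const snapshotInitial = 2; // snapshot soon after the large contract-bundle delivery
const snapshotInterval = 200; // then snapshot every N deliveries

function shouldSnapshot({ totalDeliveries, deliveriesSinceSnapshot, hasSnapshot }) {
  if (!hasSnapshot) {
    // take the first snapshot quickly so restarts can skip re-evaluating the bundle
    return totalDeliveries >= snapshotInitial;
  }
  return deliveriesSinceSnapshot >= snapshotInterval;
}
```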
What is the Problem Being Solved?
I wanted to capture an issue with the new #2277 "Vat Warehouse" that came up in conversation with @dckc today. It won't be a problem right now, but there are a couple of different lines of development that are destined to collide in an interesting way.
A few of the complications:

- `syscall.drop()`: replaying a transcript will re-emit the vat's `syscall.drop`s back up to the kernel. If a `drop` happens during replay for a vref that is not in the c-list, we ignore it: we observed and processed this one in the previous run.
- `crankNum` is a database entry, which we increment just before (or after? on which side of the circular fencepost do you stand?) the crank is processed, and we should increment it exactly once, no matter how the crank fares.

Now that I write it up, I see that maybe it wouldn't be fatal to unwind the page-in `syscall.drop` consequences (in response to a failed crank) too: if the vat is then terminated, we'll be dropping all its imports, so any refcounting consequences will be repeated as part of the vat-termination logic. If we allow the vat to live (so that we might attempt the delivery again in the future, e.g. with a different Meter), then when we re-load it, we'll have another opportunity to observe the drop.

But in general, we need to be very deliberate about the transaction windows, and be clear about what state changes are and are not subject to rewind when a crank fails.
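To make the transaction-window concern concrete, the sketch below shows one way the coordination could look, assuming a crank buffer that either commits or rewinds kernel-state changes, a crankNum counter advanced outside that rewindable window, and a replay path that consults the c-list before forwarding a drop. All names here (`startCrank`, `incrementCrankNum`, `isInCList`, `processDrop`) are hypothetical, not the kernel's actual API.

```js
// Hedged sketch of the transaction windows; helper names are hypothetical.
async function runCrank(kernelKeeper, vat, delivery) {
  // crankNum must advance exactly once per crank, whether the crank commits
  // or is rewound, so record it outside the rewindable window.
  kernelKeeper.incrementCrankNum();

  const crankBuffer = kernelKeeper.startCrank(); // buffers kernel-state writes
  try {
    await vat.deliver(delivery); // may emit syscalls, including syscall.drop()
    crankBuffer.commit(); // keep c-list and refcount changes
  } catch (err) {
    crankBuffer.abort(); // rewind everything buffered during the crank
    // Whether the vat is then terminated or retried later, the dropped-import
    // consequences are re-derived: termination drops all imports, and a later
    // page-in replays the drop.
  }
}

// During transcript replay (paging a vat back in), a drop for a vref that is
// no longer in the c-list was already processed in an earlier run, so it is
// not forwarded to the kernel again.
function replayDrop(vatKeeper, kernel, vref) {
  if (!vatKeeper.isInCList(vref)) {
    return; // already observed and processed before this replay
  }
  kernel.processDrop(vref);
}
```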