Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

design contract behavior upgrade semantics / API / UX #3272

Closed
warner opened this issue Jun 9, 2021 · 30 comments
Closed

design contract behavior upgrade semantics / API / UX #3272

warner opened this issue Jun 9, 2021 · 30 comments
Assignees
Labels
devex developer experience enhancement New feature or request Governance Governance needs-design SwingSet package: SwingSet Zoe package: Zoe
Milestone

Comments

@warner
Copy link
Member

warner commented Jun 9, 2021

What is the Problem Being Solved?

We've been talking about how the "please change the behavior of a contract" process ought to look. For this ticket, we're limiting the issue to contracts (not arbitrary vats), and to changes that were mostly anticipated ahead of time (think plugins or configuration settings, not spontaneous complete code overhauls). Examples of the updates that might use this process:

  • the price-selection rule for an auction: the highest bid always wins, but do they pay the first price, or second, or something else
  • a payout rule for interest on loaned tokens
  • governance policies themselves: minimum quorum threshold, veto rules, majority-wins, IRV, Condorcet, etc

So many of our deeper upgrade tickets (#1692, #1479, #3062, #2285, #1691) are not applicable.

Instead, the problem statement is to implement something to support the following sequence:

  • the first version of the contract anticipates a future change to its behavior
  • there is a facet to execute the change, closely-held by some governance mechanism
  • somebody implements the new behavior and submits it in a proposal to the governance mechanism
  • the governance mechanism chooses to adopt the proposal
  • the upgrade facet is invoked with the new behavior
  • the contract starts using the new behavior, rather than the old behavior

The governance mechanism is out of scope for this ticket: we assume there is some form of voting involved, but for our purposes here we can pretend that the proposer has direct access to the upgrade facet (modulo some important questions about how the new behavior is represented: we can't really just pass it a new function object). Our focus is on:

  • how does the proposer express the new behavior?
  • how does the proposer name/reference that behavior and send it to the chain-side upgrade facet?
  • how can the governance mechanism name/reference the proposed behavior so that voting members can know what they're voting for/against?
  • how does the upgrade facet accept and execute the new behavior?

Levels of Upgrade

It's probably useful to sketch out a spectrum of upgrades.

Parameter Change

The simplest kind of upgrade is a parameter change, like changing a fee, or interest rate, or payout period. We just want to change:

const payout = balance * 0.05;

to:

const payout = balance * 0.04;

The obvious approach is to use a variable:

let rate = 0.05;
function upgradeFacet(newRate) {
  rate = newRate;
}

const payout = balance * rate;

The proposer of the new behavior can simply send a number as the newRate, and voters can be shown this number.

await E(chainSideGovernanceThing).propose(0.04);

This works equally well for one-of-N selections from a set of pre-established options.

Simple Function/Expression

The next most sophisticated is to accept an arbitrary JavaScript expression and evaluate it in a confined scope. This allows more complex behavioral changes to be made, especially if the scope (or the way its result is used) is sufficiently rich.

let computeRate;
function upgradeFacet(newCode) {
  computeRate = eval(newCode); // indirect eval, no endowments
}

const payout = computeRate(balance, date, previousPayouts, etc);

The initial version of the function might only perform a simple multiplication, but later versions could incorporate the additional inputs as well.

This adds two new ways for the chain-side code to interact with the new behavior:

  • endowments provided in the global scope of the evaluated code
  • the interaction pattern used with the result of evaluation

The simplest pattern is for the code to expect an empty global environment (i.e. just the normal JavaScript globals like Array and Map), and always evaluate to a function, and for anything of interest to be provided as an argument to that function. But with sufficient coordination between the on-chain evaluation side and the proposer's side, more complex interactions are an option.

The main limitation of this approach is that the behavior must be expressed as a single evaluable string. In the early days of SES (and for a number of modern test cases), we used a stringified function. This allows us to write the behavior in a natural way (as a JS function, in a .js file, with our normal code editor and syntax-highlighting/analysis support), but then using toString or a template literal (in an engine that retains function source code, like Node.js) converts this into data that can be transmitted or compared by voters:

function newComputeRate(balance, date, previousPayouts, etc) {
  return balance * 0.04;
}

await E(chainSideGovernanceThing).propose(`${newComputeRate}`);

This works fine for simple functions, but breaks down when the behavior is complex enough to warrant standard software engineering conveniences like modules and mutiple files (even refactoring the behavior into multiple functions is troublesome). For example, we define our vats as a file with a buildRootObject export, but that file is allowed to (and almost always must) import other libraries. This requires more complex tools than func.toString().

Source Bundles

To support multi-file source files, we currently use rollup (and will soon switch to endo tools) to convert this single starting file into a "bundle" that includes the transitive closure of imported modules. The bundle is defined as an opaque object whose structure is known to a matching importBundle function. The opaque object is guaranteed to be serializable (with JSON.stringify), so it can be treated as data and transferred to the chain without complication.

The bundling process necessarily introduces the notion of a "graph exit": the edge at which we stop copying module source code into the bundle, and instead expect that the target environment will provide the necessary names. Alternatively, the bundling process may simply throw an error if you attempt to import something that is out-of-bounds. For our @agoric/bundle-source tool, we declare that all pure-source packages are eligible for bundling, but Node.js builtins (like fs or process) and packages with binary components are off-limits.

It also necessarily references the filesystem on the proposer's computer, and the state of their node_modules/ directory hierarchy.

The proposer writes their proposed behavior as a module, which can import other modules as they see fit:

// entrypoint.js
import { mathStuff } from 'npm_library';
import { myHelper } from './helper.js';
export function computeRate(args..) {
...
}

Note that the import statement is effectively mapping a local name (mathStuff, myHelper) to a well-known set of behavior (./helper.js, npm_library). The local name appears in the computeRate function. The well-known behavior is always defined by the proposer, either directly (helper.js, merely split into a separate file for ergonomics), or through a package registry and within a set of conventions. Anyone reading entrypoint.js and seeing npm_library will assume it behaves in a specific way because they've seen that library on NPM before, however the actual behavior is entirely specified by the proposer based upon what their local node_modules has a the time of bundling (the difference between these two being an exciting opportunity for supply-chain attacks).

To actually make the proposal, they write some additional code that points at an entry-point module, and builds a bundle around it:

const newBundle = await bundleSource('../path/to/entrypoint.js');
await E(chainSideGovernanceThing).propose(newBundle);

On the chain side, the contract must plan to accept a bundle, and use importBundle to turn it back into behavior:

let computeRate;
function upgradeFacet(bundle) {
  const namespace = await importBundle(bundle);
  computeRate = namespace.computeRate;
}

The chain-side importBundle call can be used to pass endowments into the code's global scope, just like eval did. It can impose other things on the new code, like inescapableTransforms or inescapableGlobalLexicals. And it can extract more things from the new code, because the bundle is evaluated into a complete module namespace (with multiple exports and maybe a default export), not just a single value. However the graph exits remain limited: the new code's only inputs come from the global namespace (lexicals and globals) and the arguments passed into any invocations of members of its exported namespace. In particular, all the imports get exactly the same code that was present on the proposer's machine (i.e. their node_modules/ directories, populated by yarn or npm from some yarn.lock and/or package.json data).

To display the bundle to voters in a meaningful way, we'd need some tool that can parse the bundle object, but displays the individual modules rather than evaluating them into behavior. We expect most of the modules to be identical to some common library source (e.g. "mathStuff version 4" is the same for everybody), so this tool should also have a way to map the module sources to strong identifiers (hashes), and to translate these identifiers through petname tables for each voter. So voter Alice should get a report with something like:

new behavior: (contents of `entrypoint.js`)
imports "`mathStuff`" from a module you know as "standard math stuff v4"

and hopefully that will let her amortize the review effort.

Note the growing name-vs-use distinction here: if the proposer were going to execute the new behavior themselves, they'd import { computeRate } from './entrypoint.js', but since they really want to send this behavior to the chain, they must call bundleSource to get something that can be serialized and transmitted. In the simple parameter approach, this was trivial: numbers and strings are treated the same way in both cases. For the single-function, the behavior-to-data conversion function was basically func.toString(), and the matching data-to-behavior conversion on the chain side was eval. For a source bundle, it's bundleSource on one side, and importBundle on the other.

Package/Module Graph With Configurable Exits

The next most sophisticated behavior definition I can think of is a full module graph, with some number of exits left unfilled.

From the proposer's point of view, they'll use the same source code layout as the previous scheme (where they deliver a source bundle). The local filesystem and node_modules/ defines most of the behavior, but there would be some notion of a "named exit": the proposer's source code says import { foo } from 'bar';, but the actual chain-side behavior they get would depend upon something on the chain, not just what they have locally. When entrypoint.js says:

// entrypoint.js
import { mathStuff } from 'npm_library';
import { currentChainPolicy } from 'chain-parameters'; // new
import { myHelper } from './helper.js';
export function computeRate(args..) {
...
}

the currentChainPolicy is a local name for the imported behavior, as before, but now chain-parameters is from a new kind of namespace. Instead of referring to something on the proposer's local disk (like ./helper.js), or some well-known registry name (like npm_library, grounded by something on the local disk), this chain-parameters is part of a contract between the new code and the environment it will live inside on the chain. The chain-side contract is responsible for providing something to satisfy the chain-parameters import, and what it gets is not known at the time of the proposal upload.

The chain side code would need some sort of importBundle(exits={ 'chain-parameters': xyz }) option to provide something to connect to the named exit. The bundleSource() call would need a way to know which imports are supposed to come from local disk, and which are supposed to be fulfilled by the chain-side execution environment. The Endo/Compartment "static module records" and LavaMoat manifest are highly relevant, as is the general notion of "dependency injection".

This effectively adds another pathway for the chain-side environment to pass stuff into the new code: globals/lexical endowments, function arguments, and now import things. In some way this is more ergonomic: it gets tiresome to pass a whole bunch of helper functions in as arguments, and the code is harder to test locally if they're provided as globals. If you could do import { policy } from './chain-parameters';, and provide a real (dummy/testing) file locally, but you knew it would be replaced by a real (chain-specific) namespace object if/when the proposal was accepted, that might make unit tests easier to write.

On the other hand, someone reading the code would find it awfully hard to know which things are being defined by the proposer, and which are merely being named by the proposer (but will really be defined by the chain-side execution environment). We can imagine some sort of special naming convention like import { policy } from '%%chain/parameters', or a special comment following the import, or even a source-to-source transform that enables a new syntax (import-from-chain { policy } from 'chain-parameters') or import { policy} from 'parameters' on 'chain' or something). Some of these approaches make local execution easier, some make it more difficult.

I think @erights is kind of keen on using module imports as a part of the programming interface, both for "standard behavior that I'm naming but I don't expect to vary much" and for specific configuration that the target environment provides for the proposer's code. I'm less convinced that this would be easy for the proposer (or the voters) to reason about correctly, especially because dependency injection requires operating at a meta-level beyond both the .js files on disk and the usual node_modules/ layout, which does not have well-established practices yet. We're inventing the future here, for sure, but the farther ahead we race, the more work it takes to bring our friends along.

On the other hand, I've previously advocated for using import statements as requests for authority, and for the module loading environment to be making the decisions about how to satisfy those imports, so perhaps there's more overlap here than I realized.

Proposer / Voter / Chain Expressions

In each of these levels of expressivity, we need to define APIs or experiences for three audiences:

  • proposer:
    • how does the proposer write their new behavior? can they use functions? multiple functions? multiple files? other people's files (from NPM)?
    • when someone reads the proposer's source files, what assumptions can/should they make about imported files? how much behavior is completely specified by the proposer's source, how much can be influenced by less-visible things like the state of their node_modules/ directory or the state of a registry at some particular moment, and how much depends upon the state of the chain when/if their proposal is enacted?
    • how do they write tests of their new behavior? ideally with code that looks a lot like the chain-side code that will use it, and which does import or something standard to execute it
    • how do they point at the new behavior for upload? how do they point at the governance mechanism to which they'll be making the proposal?
  • voters:
    • how do voters query the governance mechanism to learn about the proposal?
    • how do they get the source code or other expression of behavior that they'll be voting about?
    • what mechanical assistance can we give them to analyze that behavior? can they recognize shared modules and amortize their auditing efforts? can they communicate with each other about the code they're examining (are there strong names for the individual pieces, e.g. git commit hashes, single file hashes)?
    • how do they express their vote? a transaction of some sort? how does that name the option they're voting for?
  • chain-side

Avoiding Monolithic Bundles

Our current approach converts a bunch of files on the proposer's disk into a single large bundle object. A simple demo contract serializes into about 700 kB of data. Our most complicated contract is probably the core Treasury contract, and its bundle serializes into about a megabyte of data. This is larger than we'd like, and the relatively small difference between the two demonstrates that the vat majority of the bundle is shared code.

Obviously we'd like that code to only be touched once. New contracts should only need to upload the novel parts, and the on-chain code that references behavior should be able to use short references, not large blobs.

The first part of this is for the bundling step to generate a collection of pieces (modules or packages), instead of a single large bundle. The pieces should be identified by their hash, and some sort of small manifest (whose size is O(number of modules) rather than O(size of modules)) can remember how they fit together. Most of these pieces should be common libraries, identical to the corresponding pieces of other contracts.

Then the uploading step should be an interactive conversation between the uploader and the chain, wherein it figures out what pieces the chain doesn't currently hold, and only upload the new ones. This "upload component" step should use a different kind of transaction (#3269) than normal messages. Once all the pieces are in place, the manifest (or a hash of it) can be used in lieu of the monolithic bundle.

Somehow, the proposer should be able to express two things in the same source file. The first is the identity of the behavior they're proposing: we use import from 'filename' statements to execute behavior, and bundleSource(filename) to convert behavior into a bundle object directly, but I want something that looks like the import but yields a handle rather than an executable namespace or function. The second is the await E(chainThing).propose(behavior) call, which wants the behavior as an object of some sort. My ideal proposer.js deploy script looks something like:

import { newBehavior } from './proposed-behavior.js';

await E(governance).propose(newBehavior);

but I think we'll need some magic to let propose see a manifest or module identifier instead of a namespace or function object. Maybe some cooperation between the local module loader (which remembers the identity and provenance of each imported thing) and the marshalling code that is responsible for turning E() method arguments into serialized data. Maybe a new passStyleOf() === 'module'. Maybe currentModuleLoader.describe(newBehavior) === { source: URL, imports: { mathStuff: .., currentChainPolicy: ..., myHelper: .. } }. cc @kriskowal for ideas that mesh with the Endo archive format and tools.

On the chain side, once all the source components are installed (and added to some sort of #46 / #512 generic "blobstore" or more-specialized "modulestore"), there should be some special operation that accepts a hash and returns a "modulecap" (or "packagecap" or "graphcap" or "entrypointcap"): something tracked in the c-lists and being exactly as specific as the newBehavior named in the proposer's script. The governance mechanism receives this modulecap, rather than full source code (or even a hash). Voting talks about this modulecap. The execution of the vote should then look something like:

let computeRate;
function upgradeFacet(modulecap) {
  computeRate = await vatPowers.importBehavior(modulecap);
}

which somehow talks to the platform in some magic way, to access the table of components, evaluate the code that needs to be evaluated, sharing as much as possible with other contracts or instances, and yields the same sort of namespace or function object that importBundle did, but not touching any large strings in the process.

I'm thinking that passing around modulecaps instead of a hash can reduce a window of uncertainty: the hash refers to a specific piece of source code, but does the chain actually have that source available? And the source of everything it references? The components (modules/packages) must be installed with transactions, so the consensus state includes exact knowledge of what source is or is not available at any given time. We can make a special device or kernel interaction that takes a hash and either returns a modulecap or throws a "source component missing" error. But if we can do this query just once, early, then subsequent code no longer needs to worry about whether e.g. the vote will be enactable or not. As long as the modulecap is held alive in the c-lists, all the necessary source code should remain available in the kernel module/package source tables. If voting/etc referred to a hash until the final importBehavior call, it would be less clear that upgradeFacet would actually work or not.

Voters need to know what they're voting about. I'm thinking that the governance mechanism could post the source hash to some easy-to-read table (like we've discussed for non-transaction reads of chain state, especially purse balances), or perhaps a metercap (and some translation mechanism then looks up the source that backs it). Some sort of block explorer would be responsible for taking the consensus chain state and this identifier, and publishing the source graph being proposed. Local clients should be able to do the same: agoric governance show-proposal 123 could open a local browser with a display. Some sort of "proposal dissector" agent could show which parts of the source graph are novel, and which match known-audited components. This is where my Jetpack "community auditing site" thing would fit in.

@warner warner added enhancement New feature or request devex developer experience Governance Governance labels Jun 9, 2021
@dckc
Copy link
Member

dckc commented Jun 9, 2021

resource modules: please, no

I think @erights is kind of keen on using module imports ... for specific configuration that the target environment provides for the proposer's code.

Please, no.

This reminds me of an importable path.separator that's / on posix an \ on windows... and caused no end of trouble in python unit testing. Such platform stuff is a source of non-determinism and should be passed around explicitly like any other powerful capability. OCap discipline means anything globally accessible is immutable data, and that means design-time constant, not just runtime constant.

I'm less convinced that this would be easy for the proposer (or the voters) to reason about correctly,

quite.

On the other hand, I've previously advocated for using import statements as requests for authority

Again, please, no.

quoted behavior

I want something that looks like the import but yields a handle rather than an executable namespace or function.

I don't see why. bundleSource seems necessary and sufficient.

Avoiding large bundles seems orthogonal (starting with just Zoe, as in #2391, seems like a good 80% solution to start with).

And the large string device service seems like just another vat. Maybe importBundle wants to take access to it as an argument?

@zarutian
Copy link
Contributor

zarutian commented Jun 9, 2021

Reminds me of an old discussion on the three kinds of require() in CommonJS style modules (ESM with import and export statements did not exists at the time).
The three kinds were:

  1. a require() that got you the spefic powerless module named by the module specifier given. Can be petrified into a blobhash of that modules source code.
  2. a require() that named a module interface&contract shape expected by the requiring() code. (The path.separator in @dckc comment above)
  3. a require() that is actually askForPower() with module-like interface and can ask a power box (is: ?kyngikassi?) for the power/authority requested.

I am on the not so humble opinion that these three should be explictly seperate.

@kriskowal
Copy link
Member

I believe it will also be table stakes for the governor to be able to validate the new behavior by running an ineffectual shadow of the new behavior until it demonstrates that it can be relied upon for upgrade.

@kriskowal
Copy link
Member

@zarutian

I am on the not so humble opinion that these three should be explictly seperate.

I agree on principle, though my agreement has no teeth. Every module system we’re considering supports “exits” to “built-in” modules that may be used to inject powerful or parameterized dependencies.

If I may name your categories:

  1. powerless
  2. parameterized
  3. powerful

While I’d advise most applications to lean hard on 1, we’ve found it necessary to use 2 for testing and 3 for supporting-but-confining the Node.js legacy of powerful builtins. With care, all three can be used safely, though powerless modules are clearly safest.

@dckc
Copy link
Member

dckc commented Jun 9, 2021

... supporting-but-confining the Node.js legacy of powerful builtins ...

right; but that's clearly out of scope here, right, @kriskowal ?

@katelynsills katelynsills added the Zoe package: Zoe label Nov 5, 2021
@dtribble dtribble added the MN-1 label Jan 20, 2022
@Tartuffo Tartuffo added MN-1 needs-design SwingSet package: SwingSet and removed MN-1 labels Jan 20, 2022
@Tartuffo
Copy link
Contributor

Tartuffo commented Jan 26, 2022

@warner , Should we take Zoe off this?

@erights
Copy link
Member

erights commented Jan 27, 2022

The "Zoe" label? It should stay.

@Tartuffo Tartuffo removed the MN-1 label Feb 7, 2022
@Tartuffo
Copy link
Contributor

Tartuffo commented Feb 9, 2022

@warner Can you assign an estimate?

@warner
Copy link
Member Author

warner commented Feb 9, 2022

I think we're converging on this. The ticket originally explored a variety of methods for changing the behavior of a contract. At this point I think we're only going to implement two: change a parameter, and replace the entire contract bundle. The latter involves "durable collections", "baggage", and a kernel-provided upgrade API.

I'm going to use this ticket as the umbrella.

@Tartuffo
Copy link
Contributor

Tartuffo commented Feb 9, 2022

Add an issue for automated testing of upgrading an important contract. And an upgrade demo.

@warner
Copy link
Member Author

warner commented Feb 10, 2022

Not sure how to break this out from the rest, but a sequence we discussed at today's kernel meeting was:

  • cosmic-swingset remembers a list of "approved but not yet installed bundleIDs for contracts"
    • for MN-1, a governance action is required to add something to this list
  • after that happens, an installBundle txn can be submitted (and paid for) which will pass the check
    • cosmic-swingset calls controller.validateAndInstallBundle()
    • now the bundle is available for zoe
  • now something else causes zoe.install(bundleID) to be called

@michaelfig pointed out that the zoe action is also gated by governance, and it'd be nice to have just a single governance action. He asked for a way that zoe.install(bundleID) could be given a promise that fires after the bundle is installed, so it could be run early and just wait until the bundle is ready. If I can come up with a scheme to do that, then the sequence becomes:

  • a governance action passes, adding bundleID to the list and calling zoe.install(bundleID) (which waits)
  • after that, someone else comes along and does the installBundle txn
    • during DeliverTx, the controller.validateAndInstallBundle() both adds the kvStore key and pushes something onto the run-queue
    • during FinishBlock, controller.run() delivers that something, eventually the "bundle is available" promise fires, zoe wakes up and performs the install
    • if we make zoe.install return a suitable promise, then the governance action could go ahead and instantiate the contract too

@michaelfig
Copy link
Member

@michaelfig pointed out that the zoe action is also gated by governance, and it'd be nice to have just a single governance action. He asked for a way that zoe.install(bundleID) could be given a promise that fires after the bundle is installed, so it could be run early and just wait until the bundle is ready.

Equivalently, Zoe wouldn't need to accept a promise, but somehow her caller would need to be notified when the bundlecap is ready.

@Tartuffo Tartuffo modified the milestones: RUN Protocol RC0, Mainnet 1 Apr 5, 2022
@warner
Copy link
Member Author

warner commented Apr 13, 2022

Here's a design for ZCF and contracts to enable them to be upgraded with the new #1848 kernel vat-upgrade API.

Old Behavior

Currently, Zoe creates a new ZCF vat and sends it an executeContract message, with the contract bundlecap and a number of per-instance Zoe objects as arguments. ZCF reacts to this by creating some local per-instance objects (some of which close over the Zoe objects), evaluating the contract source code, then invoking the contract's one-and-only export (a function named start()). start gets the local per-instance things that ZCF created, as well as some things from zoe. start returns some per-instance contract facets. Once start has returned, the contract (and ZCF around it) are ready for business.

bundle install _ upgrade

New Behavior

In the new scheme, we need to carve out the portions of ZCF which perform per-version behavior definition from the parts that do per-instance start behavior:

bundle install _ upgrade - Frame 1

creation of version 1

When the contract vat is first created (the green box at the top), we use vatParameters to deliver the bundlecap of the contract's initial version. The vat is defined by the ZCF bundle, which is evaluated and buildRootObject is called. During this call, ZCF needs to perform the following steps:

  • examine baggage to see if Handles need to be created for the durable Kinds that will be used
    • if they are in baggage, use those, else create them and store them in baggage for future versions
  • call defineDurableKind for every Kind ZCF expects to use
  • examine baggage.get('contractBaggage'), create it if necessary
  • use vatParameters.contractBundleCap to obtain the contract source bundle, and evaluate it
    • that bundle should export two functions: initVersion and start
  • call contract.initVersion(), passing it a subset of the baggage (e.g. baggage.get('contractBaggage')

The contract's initVersion must do:

  • examine contractBaggage for Handles for the contract's Kinds, create+stash the handles if necessary
  • call defineDurableKind for each Kind the contract expects to use

That's it: neither ZCF nor the contract will do any per-instance startup work, like creating the publicFacet.

At this point, the vat is committed to hosting a specific contract installation (the source code of the contract is fixed), but it has not yet differentiated into a specific instance. If/when the zygote feature is ready, this is an appropriate point to fork the vat image. The "zygote clone: instance 2" box on the right represents a clone of the original image, created to support the second instance instead of the first.

Lacking that feature, Zoe will perform this creation step for each instance, and proceed to start() immediately with the same vat.

start()

After the new vat is configured for the contract, we need to specialize it to become a particular instance of that contract. Here, Zoe sends a message to the new vat named startZCF, with arguments pointing to instance-specific objects within Zoe that were created for the benefit of this new instance. ZCF receives these and creates local per-instance objects of its own. Then ZCF calls the contract's start() function with those objects as arguments.

The contract's start() function needs to create objects like the public facet. It gives these back to ZCF, which gives them back to Zoe. These objects form the exterior interface to the contract, and are long-term obligations (both of this version and of all future versions).

start() should only be called once in the lifetime of a particular instance. It does not get called again after each upgrade.

At this point the contract vat is ready for business. It can receive messages as soon as Zoe reveals the public facets to the world.
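The once-only nature of start() can be sketched like this (hedged: the names and the baggage flag are hypothetical, not the actual ZCF implementation):

```javascript
// Sketch of the once-only start rule. ZCF records in its baggage that the
// instance has started, so an upgraded incarnation will not run start() again.
const makeStartZCF = (contract, zcfBaggage) => zoeInstanceStuff => {
  if (zcfBaggage.has('started')) {
    throw Error('start() is called at most once per instance');
  }
  zcfBaggage.set('started', true);
  // the facets created here are long-term obligations of every future version
  return contract.start(zoeInstanceStuff);
};

const zcfBaggage = new Map();
const contract = { start: () => ({ publicFacet: {}, creatorFacet: {} }) };
const startZCF = makeStartZCF(contract, zcfBaggage);
const facets = startZCF({}); // first (and only) call succeeds
```

After an upgrade, the 'started' flag survives in baggage, so the new incarnation only reconnects behavior and never re-runs the instance-creation step.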

version 2

When Zoe decides the instance should be upgraded, it collects a ZCF bundlecap (perhaps the same as used in version 1), and a new contract bundlecap (for version 2). It then instructs the kernel to upgrade the contract vat, using the ZCF bundlecap as the new vat bundle, and putting the contract bundlecap into vatParameters.

The kernel shuts down the old vat, deleting its non-durable state, and spins up a new worker with the (maybe new, probably not) ZCF bundle. This bundle gets buildRootObject called as before, and it executes the same steps:

  • baggage will have Handles already for the Kinds created by version 1: use those
    • if version 2 creates new Kinds, those new Handles must be created and stashed here
  • call defineDurableKind for every Kind ZCF expects to use
    • this gets the version 2 behavior
  • retrieve contractBaggage from baggage, it will already exist
  • use vatParameters.contractBundleCap to obtain the (new) contract source bundle, and evaluate it
  • call contract.initVersion(), passing it a subset of the baggage (e.g. baggage.get('contractBaggage'))
    • this new initVersion will fetch its Handles from contractBaggage and then call defineDurableKind with all the new version-2 behavior

This new vat does not call startZCF or the contract's start function. The vat has already allocated IDs for the public/etc facets and exported those objects: they are still in the vat's c-list, and other (downstream) vats still hold Presences for them. (Most notably, Zoe still has Presences). All those previous exports are obligations of the vat, and each version must honor them. Their new behavior is provided during initVersion, but their identity was established back in the singular start() call that happened during version-1.

Beyond providing vatParameters, Zoe does not interact with the version-2 vat before regular user traffic arrives: the vat must become completely prepared during the call to buildRootObject, therefore the contract must be completely prepared during the call to initVersion.

@mhofman
Copy link
Member

mhofman commented Apr 13, 2022

I haven't yet read @warner's new comment above, but here is what has been on my mind since we started discussing the baggage approach.

Basically an instance is a further specialization, in the Russian-doll nesting that is the baggage:

  • You can create a vat, and load ZCF, but forgo a contract bundle in the params to generate a ZCF zygote.
  • Later you could deliver a "loadContract" to the root object. That stores the contract bundleCap in the ZCF part of the baggage, evaluates the contract bundle, and calls its init with the contract's baggage. Now you have a contract zygote.
  • When instantiating a contract, you deliver a "start" to the contract's root. That optionally stores the zoe instances in the baggage, or uses them to create a singleton instance of the durable facets it returns to Zoe.
  • During the execution the contract may store more things in the baggage.
  • On upgrade, the contract's init is called during buildRootObject. It is in charge of doing any post-start setup that may be needed, based on data the contract instance stored in the baggage. No other method needs to be called.
  • When upgrading, the ZCF layer needs new data to override data that could have been previously saved in the baggage: a new contract bundlecap (aka contract-specific upgrade data). Right now the approach seems to be not to create a ZCF zygote or remember the contract for performing ZCF-only upgrades, so we simply always provide the contract bundlecap in the vat params for ZCF to unconditionally evaluate. That sure simplifies things a bit for that layer.
  • It seems we haven't identified a need for "instance specific upgrade data". If we did, vat parameters could also include such data, and during buildRootObject when ZCF calls the contract's init, it could pass those instance params along with the baggage. The contract's init would then be able to realize it has new data to use instead of what was saved in the baggage. This data could for example come from the mechanism that triggered the update, or from the Manchurian mechanism.

TL;DR

  • We have an onion with the following layers: ZCF, installation, instance.
  • Each layer has access to something it may remember from a previous incarnation (baggage), and something new (vat params).
  • Each layer can be created sequentially in separate deliveries, but each layer becomes responsible to remember if another layer was added on top, and synchronously restore that layer when it itself gets restored.

@warner
Copy link
Member Author

warner commented Apr 14, 2022

One idea from today's discussion (to address @dtribble 's concern about the complexity of contracts exporting multiple functions): have the contract's init() function accept Handles and define+return Kinds, but not instantiate any of them. ZCF (which calls init() upon every version's buildRootObject) would gather the kind constructor functions returned by init(), but would only call them when Zoe tells ZCF that it's starting the first version.

// contract version 1
export const init = harden((handles, contractBaggage) => {
    // define internal kinds, using provideHandle(contractBaggage)
    // ..
    
    // now define the kinds that ZCF wants
    const makePublicFacet = defineDurableKind(handles.publicFacetKindHandle,
        (zoeStuff, zcfStuff) => { .. },
        behavior,
    );
    const makeCreatorFacet = defineDurableKind(handles.creatorFacetKindHandle,
        (zoeStuff, zcfStuff) => { .. },
        behavior,
    );
    
    return { makePublicFacet, makeCreatorFacet };
});

(@mhofman made the astute suggestion that the contract should really use defineDurableKindMulti() and return the public and creator facets as two facets of the same underlying object)

When version 1 is launched, ZCF finds the contractBundlecap in vatParameters, evaluates the contract, calls this init(), and stashes makePublicFacet/makeCreatorFacet.

A moment later, Zoe sends a startZCF() to the new vat, with per-instance Zoe things. ZCF creates the per-instance ZCF things, then invokes the stashed makePublicFacet/makeCreatorFacet to get the per-instance contract objects and return them to Zoe.

Later, when we upgrade to version 2, ZCF wakes up in the new vat, finds the new contractBundlecap in vatParameters and the old Handles in baggage, evaluates the contract, calls init() again (so the contract can redefine the Kind behavior), and this time it ignores the makePublicFacet/makeCreatorFacet (since it isn't going to call them). Zoe does not send a start (that only happens for version 1). The contract upholds its obligation to provide behavior for the previously-exported public/creator facets.

The contract can define their Kind's behavior by closing over stuff from their baggage, rather than storing things in .state, but that won't allow version-1 to close over zoe/zcf stuff (because that isn't provided until startZCF() happens).

@mhofman
Copy link
Member

mhofman commented Apr 14, 2022

So during the meeting I came around to another realization: there are 2 parts to the contract. The first part is static initialization (per installation) shared by all contract instances. The other is per instance initialization. The instance initialization simply further refines the static initialization of the installation.

Furthermore, ZCF can hide the awkwardness of having the singleton contract facets needing to be durable objects. Contract code should not have to jump through hoops and have to define any singleton durable kinds.

My proposal is:

  • The contract must expose a per instance init, which is the equivalent of the current start, and has the following signature:
    type Init = (
      instanceBaggage: MapStore,
      instanceParams: { zcf: ZCF; privateArgs?: Record<string, unknown> }
    ) => {
      creatorFacet?: Record<string, Callable>;
      publicFacet?: Record<string, Callable>;
      creatorInvitation?: DurableHandle;
    };
    • init would be called for every version, initial and upgrade/restart. It must return synchronously
      • In the initial call, the instanceBaggage is empty
      • It's responsible to rewire any instance specific kinds, if any.
    • The creatorFacet and publicFacet must be regular hardened records of functions, which can close over zcf and the params, and use any other global state.
      • On the initial call, ZCF will enumerate the properties of the facet records, automatically build the durable kind, create and return the singleton instances.
      • For upgrade/restart calls, ZCF will also enumerate the properties of the facet records to rewire the durable kind, but will not recreate new instances.
      • The kind behavior simply "proxies" to the closed over heap records returned by init, which isn't a problem since there is only ever a single instance of the kind.
    • ZCF has to remember, in its own section of the baggage, whether startContract had previously been called and save the Zoe objects passed as parameters.
      • If it's just an installation Zygote, buildRootObject does not call the contract's init, and will let startContract take care of that.
      • If ZCF detects it's an upgrade/restart, it rebuilds the start params from its own baggage, and calls the contract's init during buildRootObject.
      • The zcf instance passed to the contract code in either case does not technically have to be durable, since it's simply closed over.
  • The contract can expose an optional, shared installationInit, which can be used to setup parameter-less shared kinds that every instance would have. It has the following signature:
    type InstallationInit = (
      installationBaggage: MapStore,
      maybeSomeInstallationParam?: Record<string, unknown>
    ) => void;
    • installationInit is also called for every version, and must return synchronously.
    • Unlike init, it does not receive any per instance Zoe/ZCF references, and has its own separate baggage.
    • Its purpose is to setup top level, shared behavior, if the contract wants to optimize startup time and simplify its init
    • In the case of a restart/upgrade, it's called immediately preceding init
    • Technically there is nothing preventing an instance from reaching into the installation baggage, so installationInit could have its logic impacted by modifications that were done by the instance. We could prevent that and enforce a clearer separation of concerns by making the installationBaggage deeply immutable after installationInit has returned.
    • An alternative design may be to make installationBaggage a global, and expect the top level code in the contract to rewire kinds. While it may look more convenient for the contract, it's less obvious that the contract must rewire installation kinds synchronously at evaluation time.
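For concreteness, a contract written against the proposed per-instance `Init` signature might look like this. Everything below is illustrative: the fee logic and names are invented, and `harden` is stubbed with Object.freeze so the sketch runs standalone.

```javascript
const harden = Object.freeze; // stand-in for the SES harden

// Illustrative contract using the proposed per-instance init. The facets are
// plain hardened records of functions closing over zcf and the baggage; ZCF
// would wrap them in durable kinds and export singleton instances.
const init = (instanceBaggage, { zcf, privateArgs }) => {
  if (!instanceBaggage.has('feeBasisPoints')) {
    instanceBaggage.set('feeBasisPoints', 30n);
  }
  const publicFacet = harden({
    getFee: () => instanceBaggage.get('feeBasisPoints'),
  });
  const creatorFacet = harden({
    setFee: bp => instanceBaggage.set('feeBasisPoints', bp),
  });
  return harden({ creatorFacet, publicFacet });
};
```

On a restart/upgrade, init runs again with the same instanceBaggage, rebuilds fresh facet records, and ZCF rewires the existing durable kinds to them instead of creating new instances.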

@warner
Copy link
Member Author

warner commented Apr 20, 2022

Interesting..

Your scheme hides the Handles for the two primary entry points (creator/private facets), along with the calls to defineDurableKind for those. But the contract author needs to provide all other Handles, and make their own defineDurableKind calls for everything else they do.

I will admit that it reduces the number of layers of indirection somewhat.

I'd worry a little bit that we're confusing contract developers by asking them to sometimes use (and understand) the full defineDurableKind, for their own kinds, but then in other cases (public/creator facets) provide merely the behavior argument. Instead of having them use defineDurableKind uniformly everywhere.

Furthermore, ZCF can hide the awkwardness of having the singleton contract facets needing to be durable objects. Contract code should not have to jump through hoops and have to define any singleton durable kinds.

Well, those facets must be durable objects, even if the contract isn't calling defineDurableKind or handling the Handle itself. I kinda think that would be more evident to contract authors if they were.

On the initial call, ZCF will enumerate the properties of the facet records, automatically build the durable kind, create and return the singleton instances

That denies the contract the ability to use the other two arguments of defineKind (the initialize and finish ones).. but I guess that isn't a big deal because ZCF is the one calling the constructor, and it's not providing any additional arguments anyways (so initialize doesn't have any more to work with than your exported init function), and it's only ever being called once (so finish couldn't really do much, like registering instances in some table). It means the contract code never learns its creatorFacet or publicFacet (which it didn't learn in my proposal either).. I dunno if that's important or not.

Overall, if the contract must do thing X on each version, and only do thing Y the first time around, I'd rather have them export two functions, so there's a clear directive to "do thing X when function 1 is invoked, do thing Y when function 2 is invoked".

Neither of our schemes achieve this; even my original version asks the contract's second export (initVersion) to implement a provideHandles which must probe contractBaggage to decide whether to allocate new Handles or not.

There's obviously some complexity tradeoffs here, but it's not clear to me which side is simpler. Having more exported functions is one kind of complexity, but requiring contract code be more stateful and have more internal conditionals is a different kind of complexity. And slicing the API down a wiggly path to minimize both would push us into an API that's harder to explain ("why do I give you behavior? when/how will it get used?"), which is a third kind of complexity.

@mhofman
Copy link
Member

mhofman commented Apr 20, 2022

provide merely the behavior argument

To be pedantic, the contract would be providing a normal object with methods. We could require it to be Far if we wanted to make it feel more normal. Basically in the contract's view, they wouldn't know these facets are durable, because they are not. They are just used by a durable export. We just require the "ephemeral" facets to be recreated for every start.

It means the contract code never learns its creatorFacet or publicFacet

You're correct. This is one place where our approaches diverge. In my approach the facet that the contract code knows is actually never exported to the outside world. If by some means the exported facet came in as an argument, the contract code wouldn't be able to recognize it. In your approach, the contract code can technically use a finisher to get access to the facets the one time they are created, store them, and compare them later. Aka the finisher basically becomes a hidden singleton init.

Overall, if the contract must do thing X on each version, and only do thing Y the first time around

Well in my approach I don't believe they have to do anything different the first time around (besides creating handles and establishing durable kinds from scratch, but that's already the case)

@warner
Copy link
Member Author

warner commented Apr 20, 2022

provide merely the behavior argument

To be pedantic, the contract would be providing a normal object with methods. We could require it to be Far if we wanted to to make it feel more normal.

Ehh.. I think that'd be more confusing, because applying Far implies that the 2nd-arg/return-value has a meaningful identity, and here it doesn't: if we create a publicFacet at all, it will be a different object than what the contract provided. If they need to do anything with the public facet (register it with the Board, maybe?), we'll have to give them another init-time API to give them control after we've finished creating their publicFacet.

To teach users the relationship between the return value of this init and how virtual/durable objects work, I'd describe it as behavior.

Basically in the contract's view, they wouldn't know these facets are durable, because they are not. They are just used by a durable export. We just require the "ephemeral" facets to be recreated for every start.

Hm, that reminds me, we have to nail down some terminology for the benefit of users. It's the identity of an object that is durable, along with its state. The behavior is different for each version of the contract. Each facet is a durable object, because you can export it to other vats, and they can keep referring to it (or sending message to it) after you've upgraded.

So the record-of-functions returned by your init aren't facets, because they aren't the same object that ZCF will get out of the one makePublicFacet() call it makes, so they don't have a vref identity, can't be used in E() or stores, etc. And they aren't durable, because they aren't facets.

Aka the finisher basically becomes a hidden singleton init.

That's a good way to look at it. In any place where we let contract code do makeDurableKind itself, it can also build a wrapper function that does anything finish could do (and more). If our API does less than that (i.e. both our proposals), then contracts may want to lean on finish to do other stuff that we didn't give them a chance to do themselves.

@warner
Copy link
Member Author

warner commented Apr 21, 2022

In today's meeting, we sketched out a new API:

  • Each version of a contract is responsible for exporting a single function named defineInstallationKinds:
export function defineInstallationKinds(installationBaggage) {
  const installationHandles = provideFrom(installationBaggage);
  const makeFoo = defineDurableKind(installationHandles.foo, ..);
  const defineInstanceKinds = ...;
  return defineInstanceKinds;
}
  • This function is invoked exactly once for each version of the contract. It does not get access to any per-instance objects (so it probably shouldn't send any messages). It must (synchronously) return another function, typically named defineInstanceKinds.
  • To build the zygote vat image, we'll create a new ZCF vat with the contract's version-1 bundlecap in vatParameters. ZCF will define its own Kinds, evaluate the contract, invoke defineInstallationKinds with a portion of baggage, and hold on to the returned function. Then we'll freeze/snapshot/stop-interacting-with the vat and call it the v1 zygote.

Later, when we want a new instance, we'll clone the zygote, and send it zcf.startContract(terms, privateArgs)

  • That will carve out a smaller portion of the baggage (instanceBaggage) and invoke the returned+stashed defineInstanceKinds:
function defineInstanceKinds(instanceBaggage, zcf, privateArgs) {
  const instanceHandles = provideFrom(instanceBaggage);
  const makeBar = defineDurableKind(instanceHandles.bar, ..);
  const makeInstanceKit = ..;
  return makeInstanceKit;
}
  • Since this is the first version of the contract, ZCF will invoke the returned makeInstanceKit, which is responsible for returning { creatorFacet, publicFacet }.
  • ZCF does the remaining per-instance work and contacts Zoe with the details.

Later, when an upgrade happens:

  • Zoe is instructed to update the instance, and is possibly given new terms and privateArgs to use
  • Zoe uses the stashed vat admin node to perform the upgrade, passing the same ZCF bundlecap as before, but with vatParameters that includes the v2 contract bundlecap, and the new terms/privateArgs
  • The contract vat is upgraded
  • within the new copy of ZCF, buildRootObject is invoked
  • ZCF extracts its own handles from baggage and performs defineDurableKind for all of its internal Kinds
  • ZCF pulls the new v2 contract bundlecap from vatParameters and evaluates it, getting a v2 version of defineInstallationKinds
  • ZCF extracts installationBaggage and invokes defineInstallationKinds(installationBaggage), getting a v2 version of defineInstanceKinds
  • ZCF extracts instanceBaggage, combines it with a copy of the durable zcf object (from baggage) and either stashed or updated copies of terms and privateArgs, and then invokes defineInstanceKinds(instanceBaggage, zcf, privateArgs)
  • ZCF ignores the return value of defineInstanceKinds: it is not going to make a new instance, that was done exactly once during v1

At this point, all Kinds have been reconnected, and the v2 vat is ready for business.

Some properties:

  • For each vat, defineInstallationKinds (for all versions) is always followed by exactly one call to the defineInstanceKinds it returned. However that call might arrive in a future clone of the vat.
  • The Handles used for publicFacet and creatorFacet can be made in either defineInstallationKinds or defineInstanceKinds, but they must use the "provide" pattern:
function provideHandle(name, iface, baggage) {
  if (!baggage.has(name)) {
    baggage.set(name, makeKindHandle(iface));
  }
  return baggage.get(name);
}
  • these functions must also provide Handles for any internal Kinds
  • defineInstallationKinds should do as much work as can be done without access to per-instance objects (like zcf), so the zygote can amortize as much work as possible.
  • makeInstanceKit() should return whatever contracts' start() methods currently return
    • @erights is looking into removing creatorInvitation (in favor of just creatorFacet), which might allow:
function defineInstanceKinds(instanceBaggage, zcf, privateArgs) {
  ...
  const makeInstanceKit = defineDurableKind(...);
  return makeInstanceKit;
}

@mhofman notes that it'd be nice to somehow freeze installationBaggage, to discourage its use for per-instance state, however we concluded that the consequences of defineInstanceKinds or makeInstanceKit mutating installationBaggage were minor: some work would be done in each instance, that could have been amortized into the zygote instead.

@warner
Copy link
Member Author

warner commented Apr 26, 2022

Some notes from @dtribble:

  • the names of the two functions need to be more distinct: the discriminator can't be in the middle of the name
    • maybe use setupInstallation and setupInstance, or initInstallation/initInstance
  • for the transition period, we should retain the ability to handle start -based contracts (although not upgradable, of course)
  • in general, we need to make your first contract easy to write, and we should be able to teach with examples that focus on the contract operation, not the upgrade-specific parts
    • on the other hand, as @erights pointed out, apparent simplicity is not helpful when it omits something you really need to understand to build something properly (e.g. when you omit error handling from examples, you teach everybody to write fragile/vulnerable code)
  • new contract versions should get new terms, so we need to pass that through vatParameters for the second and succeeding versions

We talked a bunch about "record upgrade": in addition to version-2 providing new behavior, we'll (eventually) need a way to let version-2 upgrade each data record to some new format. A simple example might be a contract that records a token balance in each virtual/durable object, and a version bump that allows more precision. The version-1 state might contain { BLD } (integral number of BLD tokens), while the version-2 state contains { uBLD } (micro BLD). The conversion is simple: newState = { uBLD: 1_000_000n * oldState.BLD }. But knowing when and if to do the transformation is the interesting part. Given the high overhead of our execution engine and the fairly small budget for time in each block, we probably need a maximally lazy approach: defer doing any conversion on a given object until it is paged into memory for the first time.

Decades of ORM systems have had facilities for this. When version-2 is defined, in addition to providing the new behavior, the caller must also provide a function that converts a version-1 record into a version-2 record. This implies that we're telling the infrastructure about version numbers too. The API might look like this:

const makeFooV1 = defineDurableKind(handle, initV1, behaviorV1, { version: 1 });
...
const upgradeV1ToV2 = oldState => ({ uBLD: oldState.BLD * 1_000_000n });
const makeFooV2 = defineDurableKind(handle, initV2, behaviorV2, { version: 2, upgradeTo: { 2: upgradeV1ToV2 } });

and the virtual object manager would be responsible for knowing the version of each record, and calling the chain of upgraders whenever we unserialize a record whose version is older than the current one.
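A minimal sketch of that lazy, chained upgrade at unserialization time (the storage shape and helper are invented; this is not the real virtual object manager):

```javascript
// Hypothetical per-version upgraders: upgraders[n] converts a version-n
// state record into a version-(n+1) record.
const upgraders = {
  1: oldState => ({ uBLD: oldState.BLD * 1_000_000n }),
};
const currentVersion = 2;

// When a stored record is paged in, walk the upgrader chain from its
// recorded version up to the current one.
const unserialize = stored => {
  let { version, state } = stored;
  while (version < currentVersion) {
    state = upgraders[version](state);
    version += 1;
  }
  return state;
};

console.log(unserialize({ version: 1, state: { BLD: 3n } })); // { uBLD: 3000000n }
```

Records already at the current version pass through untouched, so conversion cost is only paid for old records, and only when they are actually used.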

To support this in the future, we might want to add some features now:

  • including an initial version number in the serialized state record
    • currently we serialize { propName1: prop1capdata, ... }
    • we might use [0, { propName1: prop1capdata, ...}], where 0 means version-0
    • or we might skip this and use value.startsWith('{') to indicate that the data is version-0 (i.e. even the version number is added lazily)
  • require a version option now
  • maintain a DB key which counts objects of different versions
    • that might let userspace authors remove ancient upgradeTo: upgraders from their Kind definitions, but still be able to throw an error during upgrade if there are records that would be left behind by their omission
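The lazy version-tagging idea above can be sketched like this (the on-disk shapes come from the bullets; the reader helper is hypothetical):

```javascript
// Legacy (version-0) records are bare JSON objects, so they start with '{';
// newer records are tagged arrays of the form [version, record].
const readStateRecord = serialized => {
  if (serialized.startsWith('{')) {
    return { version: 0, record: JSON.parse(serialized) };
  }
  const [version, record] = JSON.parse(serialized);
  return { version, record };
};

console.log(readStateRecord('{"prop":1}').version); // 0
console.log(readStateRecord('[2,{"prop":1}]').version); // 2
```

This lets the version number itself be added lazily: untouched version-0 records never need rewriting just to gain a tag.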

@FUDCo points out that we probably don't know enough about schema upgrade to anticipate the use cases very well yet, so it might be good to defer adding the feature until we get more experience with merely updating the behavior.

One question was how this record upgrade might overlap with other sweeps we must do through all data. During upgrade, we delete all non-durable objects, which (through dropped references) might allow us to delete some number of durable objects: those which were only kept alive by one of the old merely-virtuals. One way to do this is with a mark-and-sweep GC pass that visits all the durable objects. If it made sense, we could combine this with the record upgrade step: as we visit each reachable object, we apply the upgrader function (this sounds pretty expensive). Or, if we wind up with some sort of incremental GC (after upgrade, each time we deliver something to the vat, we prune a little bit more), we could also perform a bit of preemptive record upgrade.

@erights and I talked about better ways to delete the old objects. We're both really worried about the cost of examining (and possibly deleting) a large number of DB keys, especially if it must be done atomically in a single crank. We're thinking:

  • we must delete c-list entries for all non-durables during upgrade, since they must not be reachable by the outside world after upgrade, and messages to/about them might arrive right away
  • we ought to reject all outstanding promises during upgrade too, since for e.g. Notifiers, their subscribing clients won't request a new getUpdateSince promise until they see the previous one reject
  • we'd like to change the DB key structure for virtual objects to
    • 1: give us a simple test of whether a key is for a virtual object vs a durable object (which doesn't depend upon the RAM state from the previous vat)
    • 2: let us determine whether that key's kindID is for a virtual (non-durable) Kind defined during the current version, or from some previous one (e.g. we compare it against a copy of the nextKindID counter recorded during startVat)
    • 3: let us slowly crawl through all DB keys, recording our progress in RAM somewhere
  • with those changes, we could have each BOYD crawl through the DB a bit further, comparing keys against Kinds, and when it encounters a DB key for an old non-durable object, it 1: deletes the key, 2: decrefs any durables or imports that were referenced
    • we'd limit this to e.g. visiting 100 entries or deleting 50, per each BOYD

That would let us spread the deletion work out over an arbitrary amount of time, to not overload any single crank, while still maintaining accurate refcounts to imports and durables (so it wouldn't interfere with a mark-and-sweep we might add some day). If the process didn't complete before a second upgrade happened, that's ok, the process can start again after the next upgrade without losing progress. It would delay dropping imports for some extended amount of time, however.
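One way to picture the budgeted crawl (purely illustrative: real keys live in the kernel DB and progress would be tracked across BOYD deliveries, not in an array):

```javascript
// Hypothetical budgeted cleanup crawler. Each BOYD call resumes from a
// cursor and deletes at most `budget` stale keys before yielding.
const makeCleanupCrawler = (keys, isStale, budget = 50) => {
  let cursor = 0; // progress marker, kept in RAM between calls
  return () => {
    let deleted = 0;
    while (cursor < keys.length && deleted < budget) {
      if (isStale(keys[cursor])) {
        keys.splice(cursor, 1); // delete the key (and decref its referents)
        deleted += 1;
      } else {
        cursor += 1; // durable entry: skip past it
      }
    }
    return { done: cursor >= keys.length, deleted };
  };
};

// demo: two stale merely-virtual keys, one durable key survives
const dbKeys = ['vo.v1', 'vo.d1', 'vo.v2'];
const crawl = makeCleanupCrawler(dbKeys, key => key.includes('.v'), 2);
console.log(crawl()); // { done: true, deleted: 2 }
```

Because the cursor only needs to reach the end eventually, the work spreads over as many cranks as the budget dictates, and restarting from zero after a second upgrade is safe.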

@mhofman
Copy link
Member

mhofman commented Apr 27, 2022

Besides better names for the functions, were there any other changes to the flow of a Zoe contract upgrade?

Do we have an issue to track durable object schema upgrades in general? While I think it's something we'll need to figure out, I agree with @FUDCo that we don't know enough yet about the use cases to design something here. In particular, I'd be very skeptical of adopting a versioned ORM approach (I'm of the opinion of letting the contract optionally provide a single "migrateAncestorState" and letting it deal with its own versioning if it wants). I do agree that a maximally lazy migration would be correct, however. Anyway, all this is orthogonal to Zoe contract upgrades, and applies to all vat upgrades, which is why I'd prefer virtual object schema upgrades be discussed in a separate issue.

@warner
Copy link
Member Author

warner commented Apr 28, 2022

Oh bother, importBundle is async, and ZCF needs to import the contract bundle during the (sync) upgrade phase.

@warner
Copy link
Member Author

warner commented May 2, 2022

async importBundle is resolved, buildRootObject can now return a Promise, as long as it resolves promptly (by end-of-crank).

@erights
Copy link
Member

erights commented May 2, 2022

... as long as it settles ... ?

@warner
Copy link
Member Author

warner commented May 3, 2022

If it rejects, that's just as vat-fatal as not settling in time (and is indistinguishable from throwing an exception during buildRootObject, which seems like the most likely error case anyways). So I should have said that it can return a Promise that fulfills by end-of-crank.

@warner
Copy link
Member Author

warner commented May 3, 2022

https://gist.github.com/warner/95594e60b194673420bd515ab8d3662c has a sketch of what buildRootObject and a contract file would need to look like.

@warner
Copy link
Member Author

warner commented May 11, 2022

This design is complete. @erights and @Chris-Hibbert are working on implementing it in ZCF, from which we might learn about changes to make.

I'm looking to narrow this ticket down to something concrete, like "Zoe and ZCF have enough code to support a contract upgrade, and expose a facet to governance to drive it".

I don't know what other Zoe/governance -side changes need to be made to drive this, or if we have a ticket for it. We also need cosmic-swingset -side support for getting the bundle installed in the first place, but this ticket assumes the bundleID string (hash) is valid: either it's already installed, or it will be installed soon (at least soon enough for the caller of zoe's upgrade facet to be happy).

@dckc
Copy link
Member

dckc commented Jul 13, 2022

@Chris-Hibbert presented upgrade of covered call... (IOU pointer to the exact code).

The API looked a lot like #5708

@Chris-Hibbert took the ball on a couple things:

  • a few names in the API
  • connecting durable collections to baggage

@dckc
Copy link
Member

dckc commented Jul 20, 2022

recent contract upgrade demo shows this is pretty well done.

@dckc dckc closed this as completed Jul 20, 2022