design contract behavior upgrade semantics / API / UX #3272
resource modules: please, no
Please, no. This reminds me of an importable
quite.
Again, please, no. quoted behavior
I don't see why. Avoiding large bundles seems orthogonal (starting with just Zoe, as in #2391, seems like a good 80% solution to start with). And the large string device service seems like just another vat. Maybe
Reminds me of an old discussion on the three kinds of
I am of the not-so-humble opinion that these three should be explicitly separate.
I believe it will also be table stakes for the governor to be able to validate the new behavior by running an ineffectual shadow of the new behavior until it demonstrates that it can be relied upon for upgrade.
I agree in principle, though my agreement has no teeth. Every module system we’re considering supports “exits” to “built-in” modules that may be used to inject powerful or parameterized dependencies. If I may name your categories:
While I’d advise most applications to lean hard on 1, we’ve found it necessary to use 2 for testing and 3 for supporting-but-confining the Node.js legacy of powerful builtins. With care, all three can be used safely, though powerless modules are clearly safest.
Right; but that's clearly out of scope here, right, @kriskowal?
@warner, should we take Zoe off this?
The "Zoe" label? It should stay.
@warner Can you assign an estimate?
I think we're converging on this. The ticket originally explored a variety of methods for changing the behavior of a contract. At this point I think we're only going to implement two: change a parameter, and replace the entire contract bundle. The latter involves "durable collections", "baggage", and a kernel-provided upgrade API. I'm going to use this ticket as the umbrella.
Add an issue for automated testing of upgrading an important contract. And an upgrade demo.
Not sure how to break this out from the rest, but a sequence we discussed at today's kernel meeting was:
@michaelfig pointed out that the zoe action is also gated by governance, and it'd be nice to have just a single governance action. He asked for a way that
Equivalently, Zoe wouldn't need to accept a promise, but somehow her caller would need to be notified when the bundlecap is ready.
Here's a design for ZCF and contracts to enable them to be upgraded with the new #1848 kernel vat-upgrade API.
Old Behavior
Currently, Zoe creates a new ZCF vat and sends it an
New Behavior
In the new scheme, we need to carve out the portions of ZCF which perform per-version behavior definition from the parts that do per-instance creation of
version 1
When the contract vat is first created (the green box at the top), we use
The contract's
That's it: neither ZCF nor the contract will do any per-instance startup work, like creating the
At this point, the vat is committed to hosting a specific contract installation (the source code of the contract is fixed), but it has not yet differentiated into a specific instance. If/when the zygote feature is ready, this is an appropriate point to fork the vat image. The "zygote clone: instance 2" box on the right represents a clone of the original image, created to support the second instance instead of the first. Lacking that feature, Zoe will perform this creation step for each instance, and proceed to
start()
After the new vat is configured for the contract, we need to specialize it to become a particular instance of that contract. Here, Zoe sends a message to the new vat named
The contract's
At this point the contract vat is ready for business. It can receive messages as soon as Zoe reveals the public facets to the world.
version 2
When Zoe decides the instance should be upgraded, it collects a ZCF bundlecap (perhaps the same as used in version 1), and a new contract bundlecap (for version 2). It then instructs the kernel to upgrade the contract vat, using the ZCF bundlecap as the new vat bundle, and putting the contract bundlecap into
The kernel shuts down the old vat, deleting its non-durable state, and spins up a new worker with the (maybe new, probably not) ZCF bundle. This bundle gets
This new vat does not call
Beyond providing
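A rough sketch of how the ZCF side of this flow might look (the `contractBundlecap` vat parameter, the `importBundleFromCap` helper, and the `start` method are assumptions drawn from the description above, not a settled API):

```js
// vat-zcf.js -- hedged sketch of the per-version vs. per-instance split
export async function buildRootObject(vatPowers, vatParameters, baggage) {
  // per-version work, re-run on first boot and after every upgrade:
  // evaluate the contract bundle and let it (re-)define its durable kinds
  const { contractBundlecap } = vatParameters; // assumed parameter name
  const contractNS = await importBundleFromCap(contractBundlecap); // hypothetical helper
  const kindMakers = contractNS.defineKinds(baggage); // hypothetical contract export

  return harden({
    // per-instance work: Zoe sends this once to specialize the vat into a
    // particular instance; an upgraded vat never receives it again
    start: (zcfStuff, privateArgs) =>
      kindMakers.makeInstanceKit(zcfStuff, privateArgs),
  });
}
```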
I haven't yet read @warner's new comment above, but here is what has been on my mind since we started discussing the baggage approach. Basically an instance is a further specialization, in the russian dolls that is the baggage:
TL;DR
One idea from today's discussion (to address @dtribble's concern about the complexity of contracts exporting multiple functions): have the contract's
```js
// contract version 1
export const init = harden((handles, contractBaggage) => {
// define internal kinds, using provideHandle(contractBaggage)
// ..
// now define the kinds that ZCF wants
const makePublicFacet = defineDurableKind(handles.publicFacetKindHandle,
(zoeStuff, zcfStuff) => { .. },
behavior,
);
const makeCreatorFacet = defineDurableKind(handles.creatorFacetKindHandle,
(zoeStuff, zcfStuff) => { .. },
behavior,
);
return { makePublicFacet, makeCreatorFacet };
});
```

(@mhofman made the astute suggestion that the contract should really use
When version 1 is launched, ZCF finds the contractBundlecap in
A moment later, Zoe sends a
Later, when we upgrade to version 2, ZCF wakes up in the new vat, finds the new contractBundlecap in
The contract can define their Kind's
So during the meeting I came around to another realization: there are 2 parts to the contract. The first part is static initialization (per installation) shared by all contract instances. The other is per instance initialization. The instance initialization simply further refines the static initialization of the installation. Furthermore, ZCF can hide the awkwardness of having the singleton contract facets needing to be durable objects. Contract code should not have to jump through hoops and have to define any singleton durable kinds. My proposal is:
Interesting... Your scheme hides the
I will admit that it reduces the number of layers of indirection somewhat. I'd worry a little bit that we're confusing contract developers by asking them to sometimes use (and understand) the full
Well, those facets must be durable objects, even if the contract isn't calling
That denies the contract the ability to use the other two arguments of
Overall, if the contract must do thing X on each version, and only do thing Y the first time around, I'd rather have them export two functions, so there's a clear directive to "do thing X when function 1 is invoked, do thing Y when function 2 is invoked". Neither of our schemes achieves this; even my original version asks the contract's second export (
There's obviously some complexity tradeoffs here, but it's not clear to me which side is simpler. Having more exported functions is one kind of complexity, but requiring contract code to be more stateful and have more internal conditionals is a different kind of complexity. And slicing the API down a wiggly path to minimize both would push us into an API that's harder to explain ("why do I give you
To be pedantic, the contract would be providing a normal object with methods. We could require it to be
You're correct. This is one place where our approaches diverge. In my approach the facet that the contract code knows is actually never exported to the outside world. If by some means the exported facet came in as an argument, the contract code wouldn't be able to recognize it. In your approach, the contract code can technically use a finisher to get access to the facets the one time they are created, store them, and compare them later. Aka the finisher basically becomes a hidden singleton init.
Well, in my approach I don't believe they have to do anything different the first time around (besides creating handles and establishing durable kinds from scratch, but that's already the case).
Ehh... I think that'd be more confusing, because applying
To teach users the relationship between the return value of this
Hm, that reminds me, we have to nail down some terminology for the benefit of users. It's the identity of an object that is durable, along with its state. The behavior is different for each version of the contract. Each facet is a durable object, because you can export it to other vats, and they can keep referring to it (or sending messages to it) after you've upgraded. So the record-of-functions returned by your
That's a good way to look at it. In any place where we let contract code do |
In today's meeting, we sketched out a new API:
```js
export function defineInstallationKinds(installationBaggage) {
  const installationHandles = provideFrom(installationBaggage);
  const makeFoo = defineDurableKind(installationHandles.foo, ..);
  const defineInstanceKinds = ...;
  return defineInstanceKinds;
}
```
Later, when we want a new instance, we'll clone the zygote, and send it
```js
function defineInstanceKinds(instanceBaggage, zcf, privateArgs) {
  const instanceHandles = provideFrom(instanceBaggage);
  const makeBar = defineDurableKind(instanceHandles.bar, ..);
  const makeInstanceKit = ..;
  return makeInstanceKit;
}
```
Later, when an upgrade happens:
At this point, all Kinds have been reconnected, and the v2 vat is ready for business. Some properties:
```js
function provideHandle(name, iface, baggage) {
  if (!baggage.has(name)) {
    baggage.set(name, makeKindHandle(iface));
  }
  return baggage.get(name);
}
```
```js
function defineInstanceKinds(instanceBaggage, zcf, privateArgs) {
  ...
  const makeInstanceKit = defineDurableKind(...);
  return makeInstanceKit;
}
```
@mhofman notes that it'd be nice to somehow freeze
Some notes from @dtribble:
We talked a bunch about "record upgrade": in addition to version-2 providing new behavior, we'll (eventually) need a way to let version-2 upgrade each data record to some new format. A simple example might be a contract that records a token balance in each virtual/durable object, and a version bump that allows more precision. The version-1 state might contain
Decades of ORM systems have had facilities for this. When version-2 is defined, in addition to providing the new behavior, the caller must also provide a function that converts a version-1 record into a version-2 record. This implies that we're telling the infrastructure about version numbers too. The API might look like this:
```js
const makeFooV1 = defineDurableKind(handle, initV1, behaviorV1, { version: 1 });
...
const upgradeV1ToV2 = oldState => ({ uBLD: oldState.BLD * 1_000_000n });
const makeFooV2 = defineDurableKind(handle, initV2, behaviorV2, { version: 2, upgradeTo: { 2: upgradeV1ToV2 } });
```
and the virtual object manager would be responsible for knowing the version of each record, and calling the chain of upgraders whenever we unserialize a record whose version is older than the current one.
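For concreteness, the upgrader-chain step might look like this sketch (the record layout and the shape of the upgrader table are assumptions):

```js
// hedged sketch: applied by the virtual object manager whenever it
// unserializes a record whose version is older than the Kind's current one
function upgradeRecord(record, upgraders, currentVersion) {
  let { version, state } = record;
  while (version < currentVersion) {
    const upgrade = upgraders[version]; // keyed by source version here
    if (!upgrade) {
      throw Error(`no upgrader registered for version ${version}`);
    }
    state = upgrade(state); // convert to the next version's schema
    version += 1;
  }
  return { version, state };
}
```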
To support this in the future, we might want to add some features now:
@FUDCo points out that we probably don't know enough about schema upgrade to anticipate the use cases very well yet, so it might be good to defer adding the feature until we get more experience with merely updating the behavior.
One question was how this record upgrade might overlap with other sweeps we must do through all data. During upgrade, we delete all non-durable objects, which (through dropped references) might allow us to delete some number of durable objects: those which were only kept alive by one of the old merely-virtuals. One way to do this is with a mark-and-sweep GC pass that visits all the durable objects. If it made sense, we could combine this with the record upgrade step: as we visit each reachable object, we apply the upgrader function. (This sounds pretty expensive.) Or, if we wind up with some sort of incremental GC (after upgrade, each time we deliver something to the vat, we prune a little bit more), we could also perform a bit of preemptive record upgrade.
@erights and I talked about better ways to delete the old objects. We're both really worried about the cost of examining (and possibly deleting) a large number of DB keys, especially if it must be done atomically in a single crank. We're thinking:
That would let us spread the deletion work out over an arbitrary amount of time, to not overload any single crank, while still maintaining accurate refcounts to imports and durables (so it wouldn't interfere with a mark-and-sweep we might add some day). If the process didn't complete before a second upgrade happened, that's OK: the process can start again after the next upgrade without losing progress. It would delay dropping imports for some extended amount of time, however.
Besides better names for the functions, were there any other changes to the flow of a Zoe contract upgrade? Do we have an issue to track durable object schema upgrades in general? While I think it's something we'll need to figure out, I agree with @FUDCo that we don't know enough yet about the use cases to design something here. In particular, I'd be very skeptical of adopting a versioned ORM approach (I'm of the opinion that we should let the contract optionally provide a single "migrateAncestorState" function and deal with its own versioning if it wants). I do agree that a maximally lazy migration would be correct, however. Anyway, all this is orthogonal to Zoe contract upgrades and applies to all vat upgrades, which is why I'd prefer virtual object schema upgrades be discussed in a separate issue.
Oh bother, |
async |
... as long as it settles ... ?
If it rejects, that's just as vat-fatal as not settling in time (and is indistinguishable from throwing an exception during
https://gist.github.com/warner/95594e60b194673420bd515ab8d3662c has a sketch of what
This design is complete. @erights and @Chris-Hibbert are working on implementing it in ZCF, from which we might learn about changes to make. I'm looking to narrow this ticket down to something concrete, like "Zoe and ZCF have enough code to support a contract upgrade, and expose a facet to governance to drive it". I don't know what other Zoe/governance-side changes need to be made to drive this, or if we have a ticket for it. We also need cosmic-swingset-side support for getting the bundle installed in the first place, but this ticket assumes the
@Chris-Hibbert presented upgrade of covered call... (IOU pointer to the exact code). The API looked a lot like #5708.
@Chris-Hibbert took the ball on a couple of things:
recent contract upgrade demo shows this is pretty well done. |
What is the Problem Being Solved?
We've been talking about how the "please change the behavior of a contract" process ought to look. For this ticket, we're limiting the issue to contracts (not arbitrary vats), and to changes that were mostly anticipated ahead of time (think plugins or configuration settings, not spontaneous complete code overhauls). Examples of the updates that might use this process:
So, many of our deeper upgrade tickets (#1692, #1479, #3062, #2285, #1691) are not applicable.
Instead, the problem statement is to implement something to support the following sequence:
The governance mechanism is out of scope for this ticket: we assume there is some form of voting involved, but for our purposes here we can pretend that the proposer has direct access to the upgrade facet (modulo some important questions about how the new behavior is represented: we can't really just pass it a new function object). Our focus is on:
Levels of Upgrade
It's probably useful to sketch out a spectrum of upgrades.
Parameter Change
The simplest kind of upgrade is a parameter change, like changing a fee, or interest rate, or payout period. We just want to change something like a hardcoded `rate = 2n` into `rate = 3n`. The obvious approach is to use a variable:
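A minimal sketch (the `setRate` facet and its governance wiring are hypothetical):

```js
// keep the parameter in a variable instead of a hardcoded constant
let currentRate = 250n; // e.g. basis points

const computeFee = amount => (amount * currentRate) / 10_000n;

// a governance-gated facet swaps in the new number
const paramFacet = harden({
  setRate: newRate => {
    currentRate = newRate;
  },
});
```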
The proposer of the new behavior can simply send a number as the `newRate`, and voters can be shown this number. This works equally well for one-of-N selections from a set of pre-established options.
Simple Function/Expression
The next most sophisticated is to accept an arbitrary JavaScript expression and evaluate it in a confined scope. This allows more complex behavioral changes to be made, especially if the scope (or the way its result is used) is sufficiently rich.
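For instance, a sketch using a SES `Compartment` as the confined scope (the expression and endowments are illustrative):

```js
import 'ses';
lockdown();

// the proposed behavior arrives as a string...
const src = `(baseRate, utilization, poolSize) => baseRate * 2n`;

// ...and is evaluated in a Compartment, so it sees only the standard
// globals plus whatever endowments we choose to provide (here: none)
const compartment = new Compartment();
const computeRate = harden(compartment.evaluate(src));
```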
The initial version of the function might only perform a simple multiplication, but later versions could incorporate the additional inputs as well.
This adds two new ways for the chain-side code to interact with the new behavior:
The simplest pattern is for the code to expect an empty global environment (i.e. just the normal JavaScript globals like `Array` and `Map`), and always evaluate to a function, and for anything of interest to be provided as an argument to that function. But with sufficient coordination between the on-chain evaluation side and the proposer's side, more complex interactions are an option.
The main limitation of this approach is that the behavior must be expressed as a single evaluable string. In the early days of SES (and for a number of modern test cases), we used a stringified function. This allows us to write the behavior in a natural way (as a JS `function`, in a `.js` file, with our normal code editor and syntax-highlighting/analysis support), but then using `toString` or a template literal (in an engine that retains function source code, like Node.js) converts this into data that can be transmitted or compared by voters:
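A sketch of that workflow (the function body is illustrative):

```js
// behavior.js -- written, edited, and reviewed as ordinary source code
const computeRate = (baseRate, utilization, poolSize) => baseRate * 2n;

// deploy side: recover the source text (Node.js retains it) so the
// function can be transmitted as data and compared by voters
const newBehaviorSrc = `(${computeRate})`; // or computeRate.toString()
```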
This works fine for simple functions, but breaks down when the behavior is complex enough to warrant standard software engineering conveniences like modules and multiple files (even refactoring the behavior into multiple functions is troublesome). For example, we define our vats as a file with a `buildRootObject` export, but that file is allowed to (and almost always must) `import` other libraries. This requires more complex tools than `func.toString()`.
Source Bundles
To support multi-file source files, we currently use `rollup` (and will soon switch to `endo` tools) to convert this single starting file into a "bundle" that includes the transitive closure of imported modules. The bundle is defined as an opaque object whose structure is known to a matching `importBundle` function. The opaque object is guaranteed to be serializable (with `JSON.stringify`), so it can be treated as data and transferred to the chain without complication.
The bundling process necessarily introduces the notion of a "graph exit": the edge at which we stop copying module source code into the bundle, and instead expect that the target environment will provide the necessary names. Alternatively, the bundling process may simply throw an error if you attempt to import something that is out-of-bounds. For our `@agoric/bundle-source` tool, we declare that all pure-source packages are eligible for bundling, but Node.js builtins (like `fs` or `process`) and packages with binary components are off-limits.
It also necessarily references the filesystem on the proposer's computer, and the state of their `node_modules/` directory hierarchy.
The proposer writes their proposed behavior as a module, which can import other modules as they see fit:
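Something like this sketch (a reconstruction; pairing `mathStuff` with the registry package and `myHelper` with the local file is an assumption):

```js
// entrypoint.js
import { mathStuff } from 'npm_library'; // resolved from node_modules/
import { myHelper } from './helper.js'; // local file, split out for ergonomics

export const computeRate = (baseRate, utilization) =>
  myHelper(mathStuff(baseRate, utilization));
```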
Note that the
import
statement is effectively mapping a local name (mathStuff
,myHelper
) to a well-known set of behavior (./helper.js
,npm_library
). The local name appears in thecomputeRate
function. The well-known behavior is always defined by the proposer, either directly (helper.js
, merely split into a separate file for ergonomics), or through a package registry and within a set of conventions. Anyone readingentrypoint.js
and seeingnpm_library
will assume it behaves in a specific way because they've seen that library on NPM before, however the actual behavior is entirely specified by the proposer based upon what their localnode_modules
has a the time of bundling (the difference between these two being an exciting opportunity for supply-chain attacks).To actually make the proposal, they write some additional code that points at an entry-point module, and builds a bundle around it:
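A minimal deploy-side sketch, assuming the `@agoric/bundle-source` API (the file path is illustrative):

```js
import bundleSource from '@agoric/bundle-source';

// gather entrypoint.js plus the transitive closure of its imports into one
// opaque, JSON-serializable object
const bundle = await bundleSource('./entrypoint.js');
// `bundle` is now plain data that can ride inside a proposal message
```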
On the chain side, the contract must plan to accept a bundle, and use `importBundle` to turn it back into behavior:
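A chain-side sketch, assuming the matching `importBundle` API (the endowments shown are hypothetical):

```js
import { importBundle } from '@agoric/import-bundle';

// evaluate the bundle into a module namespace, granting it only the
// globals we explicitly endow
const ns = await importBundle(bundle, {
  endowments: { console },
});
const { computeRate } = ns;
```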
The chain-side `importBundle` call can be used to pass endowments into the code's global scope, just like `eval` did. It can impose other things on the new code, like `inescapableTransforms` or `inescapableGlobalLexicals`. And it can extract more things from the new code, because the bundle is evaluated into a complete module namespace (with multiple exports and maybe a `default` export), not just a single value. However, the graph exits remain limited: the new code's only inputs come from the global namespace (lexicals and globals) and the arguments passed into any invocations of members of its exported namespace. In particular, all the `import`s get exactly the same code that was present on the proposer's machine (i.e. their `node_modules/` directories, populated by `yarn` or `npm` from some `yarn.lock` and/or `package.json` data).
To display the bundle to voters in a meaningful way, we'd need some tool that can parse the bundle object but displays the individual modules rather than evaluating them into behavior. We expect most of the modules to be identical to some common library source (e.g. "`mathStuff` version 4" is the same for everybody), so this tool should also have a way to map the module sources to strong identifiers (hashes), and to translate these identifiers through petname tables for each voter. So voter Alice should get a report with something like the illustration below, and hopefully that will let her amortize the review effort.
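An illustration of such a report (petnames and layout are hypothetical):

```
entrypoint.js  novel source, 24 lines         -- needs review
./helper.js    novel source, 8 lines          -- needs review
npm_library    matches "mathStuff version 4"  -- previously audited
```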
Note the growing name-vs-use distinction here: if the proposer were going to execute the new behavior themselves, they'd `import { computeRate } from './entrypoint.js'`, but since they really want to send this behavior to the chain, they must call `bundleSource` to get something that can be serialized and transmitted. In the simple parameter approach, this was trivial: numbers and strings are treated the same way in both cases. For the single-function approach, the behavior-to-data conversion was basically `func.toString()`, and the matching data-to-behavior conversion on the chain side was `eval`. For a source bundle, it's `bundleSource` on one side, and `importBundle` on the other.
Package/Module Graph With Configurable Exits
The next most sophisticated behavior definition I can think of is a full module graph, with some number of exits left unfilled.
From the proposer's point of view, they'll use the same source code layout as the previous scheme (where they deliver a source bundle). The local filesystem and `node_modules/` define most of the behavior, but there would be some notion of a "named exit": the proposer's source code says `import { foo } from 'bar';`, but the actual chain-side behavior they get would depend upon something on the chain, not just what they have locally. When `entrypoint.js` says:
says:the
currentChainPolicy
is a local name for the imported behavior, as before, but nowchain-parameters
is from a new kind of namespace. Instead of referring to something on the proposer's local disk (like./helper.js
), or some well-known registry name (likenpm_library
, grounded by something on the local disk), thischain-parameters
is part of a contract between the new code and the environment it will live inside on the chain. The chain-side contract is responsible for providing something to satisfy thechain-parameters
import, and what it gets is not known at the time of the proposal upload.The chain side code would need some sort of
importBundle(exits={ 'chain-parameters': xyz })
option to provide something to connect to the named exit. ThebundleSource()
call would need a way to know whichimport
s are supposed to come from local disk, and which are supposed to be fulfilled by the chain-side execution environment. The Endo/Compartment
"static module records" and LavaMoat manifest are highly relevant, as is the general notion of "dependency injection".This effectively adds another pathway for the chain-side environment to pass stuff into the new code: globals/lexical endowments, function arguments, and now
import
things. In some way this is more ergonomic: it gets tiresome to pass a whole bunch of helper functions in as arguments, and the code is harder to test locally if they're provided as globals. If you could doimport { policy } from './chain-parameters';
, and provide a real (dummy/testing) file locally, but you knew it would be replaced by a real (chain-specific) namespace object if/when the proposal was accepted, that might make unit tests easier to write.On the other hand, someone reading the code would find it awfully hard to know which things are being defined by the proposer, and which are merely being named by the proposer (but will really be defined by the chain-side execution environment). We can imagine some sort of special naming convention like
import { policy } from '%%chain/parameters'
, or a special comment following the import, or even a source-to-source transform that enables a new syntax (import-from-chain { policy } from 'chain-parameters'
) orimport { policy} from 'parameters' on 'chain'
or something). Some of these approaches make local execution easier, some make it more difficult.I think @erights is kind of keen on using module imports as a part of the programming interface, both for "standard behavior that I'm naming but I don't expect to vary much" and for specific configuration that the target environment provides for the proposer's code. I'm less convinced that this would be easy for the proposer (or the voters) to reason about correctly, especially because dependency injection requires operating at a meta-level beyond both the
.js
files on disk and the usualnode_modules/
layout, which does not have well-established practices yet. We're inventing the future here, for sure, but the farther ahead we race, the more work it takes to bring our friends along.On the other hand, I've previously advocated for using import statements as requests for authority, and for the module loading environment to be making the decisions about how to satisfy those imports, so perhaps there's more overlap here than I realized.
Proposer / Voter / Chain Expressions
In each of these levels of expressivity, we need to define APIs or experiences for three audiences:
node_modules/
directory or the state of a registry at some particular moment, and how much depends upon the state of the chain when/if their proposal is enacted?import
or something standard to execute itAvoiding Monolithic Bundles
Our current approach converts a bunch of files on the proposer's disk into a single large bundle object. A simple demo contract serializes into about 700 kB of data. Our most complicated contract is probably the core Treasury contract, and its bundle serializes into about a megabyte of data. This is larger than we'd like, and the relatively small difference between the two demonstrates that the vat majority of the bundle is shared code.
Obviously we'd like that code to only be touched once. New contracts should only need to upload the novel parts, and the on-chain code that references behavior should be able to use short references, not large blobs.
The first part of this is for the bundling step to generate a collection of pieces (modules or packages), instead of a single large bundle. The pieces should be identified by their hash, and some sort of small manifest (whose size is O(number of modules) rather than O(size of modules)) can remember how they fit together. Most of these pieces should be common libraries, identical to the corresponding pieces of other contracts.
Then the uploading step should be an interactive conversation between the uploader and the chain, wherein it figures out what pieces the chain doesn't currently hold, and only upload the new ones. This "upload component" step should use a different kind of transaction (#3269) than normal messages. Once all the pieces are in place, the manifest (or a hash of it) can be used in lieu of the monolithic bundle.
Somehow, the proposer should be able to express two things in the same source file. The first is the identity of the behavior they're proposing: we use `import from 'filename'` statements to execute behavior, and `bundleSource(filename)` to convert behavior into a bundle object directly, but I want something that looks like the `import` but yields a handle rather than an executable namespace or function. The second is the `await E(chainThing).propose(behavior)` call, which wants the behavior as an object of some sort. My ideal `proposer.js` deploy script looks something like:
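Perhaps along these lines (a sketch of the wished-for API; the handle-yielding import and the `propose` method are aspirations, not an existing mechanism):

```js
// proposer.js
// wished-for: an import-like form that yields a *handle* to the module
// graph instead of evaluating it into a namespace
import newBehavior from './entrypoint.js';

// marshalling would then serialize `newBehavior` as a manifest of module
// hashes, never as one big string of source code
await E(chainThing).propose(newBehavior);
```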
but I think we'll need some magic to let `propose` see a manifest or module identifier instead of a namespace or function object. Maybe some cooperation between the local module loader (which remembers the identity and provenance of each imported thing) and the marshalling code that is responsible for turning `E()` method arguments into serialized data. Maybe a new `passStyleOf() === 'module'`. Maybe `currentModuleLoader.describe(newBehavior) === { source: URL, imports: { mathStuff: .., currentChainPolicy: ..., myHelper: .. } }`. cc @kriskowal for ideas that mesh with the Endo archive format and tools.
On the chain side, once all the source components are installed (and added to some sort of #46 / #512 generic "blobstore" or more-specialized "modulestore"), there should be some special operation that accepts a hash and returns a "modulecap" (or "packagecap" or "graphcap" or "entrypointcap"): something tracked in the c-lists and being exactly as specific as the `newBehavior`
named in the proposer's script. The governance mechanism receives this modulecap, rather than full source code (or even a hash). Voting talks about this modulecap. The execution of the vote should then look something like:
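Perhaps something like this sketch (`importBehavior` and the `upgradeFacet` method are hypothetical names drawn from the surrounding text):

```js
// runs after the vote passes: resolve the modulecap into behavior without
// ever handling the full source as one large string
const behaviorNS = await importBehavior(modulecap);
await E(upgradeFacet).setBehavior(behaviorNS.computeRate);
```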
which somehow talks to the platform in some magic way, to access the table of components, evaluate the code that needs to be evaluated, sharing as much as possible with other contracts or instances, and yields the same sort of namespace or function object that `importBundle` did, but not touching any large strings in the process.
I'm thinking that passing around modulecaps instead of a hash can reduce a window of uncertainty: the hash refers to a specific piece of source code, but does the chain actually have that source available? And the source of everything it references? The components (modules/packages) must be installed with transactions, so the consensus state includes exact knowledge of what source is or is not available at any given time. We can make a special device or kernel interaction that takes a hash and either returns a modulecap or throws a "source component missing" error. But if we can do this query just once, early, then subsequent code no longer needs to worry about whether e.g. the vote will be enactable or not. As long as the modulecap is held alive in the c-lists, all the necessary source code should remain available in the kernel module/package source tables. If voting/etc referred to a hash until the final `importBehavior` call, it would be less clear whether `upgradeFacet` would actually work or not.
Voters need to know what they're voting about. I'm thinking that the governance mechanism could post the source hash to some easy-to-read table (like we've discussed for non-transaction reads of chain state, especially purse balances), or perhaps a metercap (and some translation mechanism then looks up the source that backs it). Some sort of block explorer would be responsible for taking the consensus chain state and this identifier, and publishing the source graph being proposed. Local clients should be able to do the same: `agoric governance show-proposal 123` could open a local browser with a display. Some sort of "proposal dissector" agent could show which parts of the source graph are novel, and which match known-audited components. This is where my Jetpack "community auditing site" thing would fit in.