WIP feat: add r/demo/dynamic_call_closure #938
base: master
Conversation
Signed-off-by: Manfred Touron <[email protected]>
One major problem with the pattern of setting a caller on an anonymous closure is that the capability given to the anonymous closure is too big (it's basically all capabilities) and there's no way to restrict what the closure does, so in effect it's almost like there's no type safety. Well, there is type safety, but there's no usage safety. Another way to say it is that there is no way to statically analyze the code and see that the usage is safe if this functionality is exposed in any way as a library (for example), because the logic of the closure isn't known.

Another point that was mentioned in the previous call is that it violates the principle of least surprise. Some users (like me) would expect that the closure is run as the realm that it's declared in, and never anything else, even if the closure is passed to a third party. Also, I would not expect an anonymous closure to have any functional difference from a named function.

The mental model that I've had is like this: each realm (sort of) has its own goroutine, and inter-realm function calls are (sort of) like passing messages via Go channels, AND it's as if there is automatic synchronization so that all function calls in a realm are executed in a serialized manner. Of course this doesn't make sense in a single-threaded/single-routine environment like we have now, where each tx is executed in sequence, but the point is that most developers then won't have to worry about complex concurrency issues. Example:

```go
import "somelibrary"

var x int

var inc = func() {
	temp := x + 1
	x = temp
}

var dec = func() {
	temp := x - 1
	x = temp
}

func init() {
	somelibrary.AddClosure(dec)
}

func PublicFunction() {
	inc()
}
```

If we support concurrency AND allow this (I'm calling it one-store-one-goroutine-per-realm and not one-machine-one-goroutine-per-realm, because while it is possible to implement this with one machine per realm, that would mean the machine's stack has striations from multiple concurrent txs, which is going to be a nightmare to debug, so it makes more sense to have multiple machine instances that share one realm, but only one machine running at a time per realm; and either way, implementing PrevRealm or GetFrames is going to get a little more complicated), the one-store-one-goroutine-per-realm model still doesn't address how to deal with conflicts/deadlocks. The oldschool SQL way (like the serializable isolation model) is to abort one transaction when there is a conflict, but this leads to contention and performance degradation that I don't think is appropriate (without further restrictions) in the blockchain context; I think there might be other workarounds possible. The challenge is to see if there are different designs, other than the "racy bare metal" Go concurrency model (as in, race conflicts aren't statically analyzable; AFAIK it is a hard problem, and -race is empirical), that can perhaps leverage the unique tx-based execution model of the blockchain to make race/concurrency bugs less of a problem for smart contract developers.

One idea that comes to mind... maybe it's possible to run things in parallel by default with multiple machines, each realm running in its own machine-and-goroutine as per the mental model, but instead of dealing with contention (at the coarse function level, not the granular race level) all the time, perhaps once a contention is detected, those two realms can be forever grouped together to always run in the same goroutine, so that there is some automatic partitioning. Then it would be possible to support "automatic" concurrency of a massive number of smart contracts. We can also consider supporting special background daemon gnoroutines that are always de-prioritized and aborted in case of contention. And just in case there is starvation, each block can reserve a few cycles to run background daemon gnoroutines after all other txs have completed for the block.

Related point... it would be nice to not have to implement any race detection in the gno VM, to keep it as simple as possible. It might seem that function-level synchronization is significantly less powerful than full-on concurrency (with or without race detection), but web2 was built on SQL, and SQL concurrency is handled by various isolation levels per "transaction"; function-level synchronization for inter-realm calls is somewhat similar, and arguably more intuitive. Will think harder and add more.
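To make the mental model above a bit more concrete, here is a minimal Go (not Gno) sketch of the "one goroutine per realm, inter-realm calls as messages" idea; the `realm` type, `newRealm`, and `Call` are hypothetical names invented for this illustration, not anything that exists in the gno VM.

```go
package main

import "fmt"

// call is a hypothetical message asking a realm to run fn on the realm's own
// goroutine; done is closed once the call has finished.
type call struct {
	fn   func()
	done chan struct{}
}

// realm is a hypothetical stand-in for a Gno realm: it owns its state and
// processes incoming calls one at a time, so access to x is serialized.
type realm struct {
	calls chan call
	x     int // realm-local state
}

func newRealm() *realm {
	r := &realm{calls: make(chan call)}
	go func() {
		for c := range r.calls {
			c.fn() // calls run strictly one after another
			close(c.done)
		}
	}()
	return r
}

// Call mimics a synchronous inter-realm call: it sends a message to the target
// realm and blocks until that realm's goroutine has executed fn.
func (r *realm) Call(fn func()) {
	c := call{fn: fn, done: make(chan struct{})}
	r.calls <- c
	<-c.done
}

func main() {
	counter := newRealm()
	// Even if many callers invoked this concurrently, the increments would be
	// serialized by the counter realm's single goroutine.
	for i := 0; i < 3; i++ {
		counter.Call(func() { counter.x++ })
	}
	fmt.Println(counter.x) // prints 3
}
```

The point of the sketch is only that serializing calls per realm gives the no-data-race behavior the comment describes, without smart contract developers ever touching channels themselves.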
Thank you for clarifying and providing new directions! I agree that it's important to keep the current behavior where anonymous functions stay bound to the realm they are defined in. To address this specific PR example, I have a workaround in mind: I will use the grc20 interface instead of foo20 so that any future token is supported, along with the LocalBanker we discussed. I will push the fix soon in this PR, which will demonstrate a secure way for people interested in creating DEXes, DAOs, and MultiSigs without requiring changes to the language or VM. H/t to @tbruyelle for useful DMs. I anticipate that the topic of inter-contract functionality will come up again, and I will make sure to provide more details and link back to your comment.
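For what it's worth, the general shape of that workaround (depending on a token interface rather than on foo20 concretely) could look roughly like the sketch below; the `Token` interface, `Register`, and `Pay` are hypothetical simplifications made up for this illustration, not the actual grc20 or LocalBanker API.

```go
package dex

import (
	"errors"
	"std"
)

// Token is a hypothetical, trimmed-down grc20-style interface; the real
// interface has more methods and its signatures may differ.
type Token interface {
	BalanceOf(owner std.Address) uint64
	Transfer(to std.Address, amount uint64) error
}

// tokens holds every token registered with this realm, keyed by symbol.
var tokens = map[string]Token{}

// Register accepts any current or future token that satisfies the interface,
// so this realm never needs to import a specific token package.
func Register(symbol string, t Token) {
	tokens[symbol] = t
}

// Pay transfers amount of the registered token to the recipient; the token's
// own realm enforces its rules, so no over-privileged closure changes hands.
func Pay(symbol string, to std.Address, amount uint64) error {
	t, ok := tokens[symbol]
	if !ok {
		return errors.New("unknown token: " + symbol)
	}
	return t.Transfer(to, amount)
}
```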
Actually, not working as (I) expected...