From 35263b7e27189067a8ced2dcec05cf87470e485a Mon Sep 17 00:00:00 2001
From: Satya Vusirikala
Date: Wed, 23 Aug 2023 12:51:26 -0700
Subject: [PATCH] Merge main into aggregators_v2 branch (#9736)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* [storage] support sharding in fast_sync
  handle sharded kv batches
  handle sharded merkle tree
  save progress
* [TS SDK] Support network and API endpoints on `AptosConfig` (#9549)
* support network and api urls on AptosConfig
* devnet as default network
* Remove tutorial links from descriptions
  Tutorials are linked on the tutorial title and again on the tutorial description right next to the title. A small nitpick, but it is slightly confusing when you first arrive at the dev docs, since you expect that second link to be something different.
* add profiling crate with cpu and memory profiling
  fix comments and warnings
  fix comments
  resolve comments
  add crate to cargo.toml
  add additional functions for different cases of profiling
  fix cargo.toml
* remove unnecessary dependency
* fix lint errors
* fix lint errors
* fix lint error in cargo.toml
* [testsuite][pangu][pte] Creating a Pangu Command to Create a Transaction Emitter (#9495)
* [testsuite][pangu][pte] Initial work for the transaction emitter.
* [testsuite][pangu][pte] Finished the transaction emitter command.
* Update testsuite/test_framework/kubernetes.py
  good catch!
  Co-authored-by: Balaji Arun
* [testsuite][pangu][pte] Addressed Balaji's comments.
* [testsuite][pangu][pte] Hotfix for Forge compatibility.
* [testsuite][pangu][pte] Another hotfix.
* [testsuite][pangu][pte] Added support for dry runs, and fixed a bug in get testnet
* [testsuite][pangu][pte] Minor changes.
* Update README.md
---------
Co-authored-by: Olsen Budanur
Co-authored-by: Balaji Arun
* replay-verify: log skipped version when it happens
* [dashboards] sync grafana dashboards (#9440)
  Co-authored-by: rustielin
* [forge stable] Fix HAProxy test - increase latency limits (#9575)
* [indexer grpc] reduce the data service streaming channel size (#9590)
* [refactoring] Remove unused genesis check from StateView (#9589)
  `is_genesis()` was defined in `StateView` but never used apart from a single test with `MockVM`. But the test always set it to false... Removing it to have better view interfaces.
* [TS SDK] Use `account_transactions` and `account_transaction_aggregate` queries (#9403)
* improve indexer client to support tokenv2 sorting and new queries
* use account_transactions query
* Optional argument on is not positional
* [Dev Docs] Add small note about target-version in backup docs.
* swich to gcp
* add semgrep for github workflows (#9522)
  add semgrep for GitHub workflows
* [GHA] Run smoke and performance tests when appropriate.
* [Spec] Ensures of transaction_validation (#9461)
* hp_trans_valid
* hp_trans_validation
* pre-pr
* fix trim
---------
Co-authored-by: chan-bing
* [GHA] Small tweaks to jobs.
* [TS SDK] filter amount > 0 on getTokenOwnersData and include all query fields (#9593)
* filter amount > 0 on getTokenOwnersData
* export all query fields
* update changelog
* [Python SDK] Add a token_transfer function to the python SDK (#9422)
* Adding transfer_token function to aptos_token_client.py and adding documentation tags for the new your_first_nft tutorial.
* Fixed strange formatting
* Created a transfer_object function in RestClient class and altered the transfer_token function in the AptosTokenClient to just call transfer_object. Left it in for ease of use
* [deps] Update clap dependency to be consistent across crates (#9605)
* Fix node compatibility test CI (#9609)
* [gas] fix abstract gas cost for storage (#9501)
* [forge] Add latency for different workloads (#9415)
* remove redundant test (#9613)
* [dag] Handle certified node in DAG Driver (#9312)
* [dag] Handle certified node in dag driver
* [dag] Trigger new round on handler start
* [dag] Use only the aptos_time_service crate
* [NFT Metadata Crawler] Fix error for defaulting to original raw_image_uri (#9611)
* Fix error for defaulting to original raw_image_uri
* move db commit outside
* [ts-sdk example] Adding an example for rotation offer capability and signer offer capability with signed structs (#9425)
* Adding the offer capability example. Uses signed structs in typescript.
* remove unnecessary aptosClient from getAccount call
  Co-authored-by: Maayan
* Fixing unnecessary network URL code and potential misalignment between the network/faucet URL
* Shortening # hyphens in output string
* Moving the chainId to the bottom of the struct list since it's potentially undefined, and explaining in the sign struct function that the proof bytes must be in that specific order.
* Cleaning up the code and making it more readable
* Formatting
* Use a much cleaner and reusable serializable class instead of a struct
---------
Co-authored-by: Maayan
* bump version to 1.18.0 (#9607)
* [compiler-v2] Use livevar analysis to optimize file format generation (#9361)
* [compiler-v2] Use livevar analysis to optimize file format generation
  This connects the live-variable analysis which is already present in the Move prover to the v2 bytecode pipeline. This information is then used in the file-format generator to make better decisions about when to move or copy values. Livevar is a standard compiler-construction data flow analysis. It computes the set of variables which are alive (being used in subsequent reachable code) before and after each program point. The implementation from the prover uses our dataflow analysis framework and is [here](https://github.com/aptos-labs/aptos-core/blob/206f529c0c9d8488e27d2e50297178f0caf429a5/third_party/move/move-prover/bytecode/src/livevar_analysis.rs#L381). In this PR we create the first processing step in the bytecode pipeline with the new module `pipeline/livevar_analysis_step.rs` which forwards logical work to the existing analysis.
* Addressing reviewer comments.
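
The following is a minimal, self-contained sketch of the idea described in the item above, not the prover's actual livevar implementation: a backward pass computes which variables are still used later, and an operand whose variable is dead afterwards can be moved rather than copied. All types and names here are illustrative only.

```rust
use std::collections::HashSet;

/// A toy three-address instruction: `dst := op(srcs)`.
struct Instr {
    dst: Option<&'static str>,
    srcs: Vec<&'static str>,
}

fn main() {
    // x := a + b; y := x + a; return y
    let code = vec![
        Instr { dst: Some("x"), srcs: vec!["a", "b"] },
        Instr { dst: Some("y"), srcs: vec!["x", "a"] },
        Instr { dst: None, srcs: vec!["y"] },
    ];

    // Backward pass: live-before(i) = (live-after(i) - defs(i)) ∪ uses(i),
    // where live-after(i) is the live-before set of the next instruction.
    let mut live: HashSet<&str> = HashSet::new();
    let mut live_after: Vec<HashSet<&str>> = vec![HashSet::new(); code.len()];
    for (i, instr) in code.iter().enumerate().rev() {
        live_after[i] = live.clone();
        if let Some(d) = instr.dst {
            live.remove(d);
        }
        for s in &instr.srcs {
            live.insert(s);
        }
    }

    // A source operand that is not live after its instruction can be moved;
    // otherwise it must be copied so later uses still see the value.
    for (i, instr) in code.iter().enumerate() {
        for s in &instr.srcs {
            let kind = if live_after[i].contains(s) { "copy" } else { "move" };
            println!("instr {i}: {kind} {s}");
        }
    }
}
```

Running this on the three toy instructions prints one move/copy decision per operand (e.g. `a` is copied at instruction 0 because it is used again at instruction 1, while `b` can be moved).
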
* [storage] update resource requirement
* [TS SDK v2] Add Hex and HexInput types (#9595)
  Co-authored-by: maayan
* [cleanup] Remove foreign contracts (#9623)
* [Spec] Ensures of reconfiguration.move (#9383)
* init
* hp
* hp reconfig
* init
* fix comment
* fix comment
---------
Co-authored-by: chan-bing
* Revert "Revert "[NFT Metadata Crawler] Dockerize Parser (#9541)"" (#9622)
  This reverts commit aa81b95e7c4bf7abc83e28b341ae6e8fdfdf7317.
* Clearer comments in `resource_account.move` (#9555)
* Explicitize that the txn_seq_num param on the epilogue is unused
  With this we can go through all historical txns to make sure.
* [dag] Integrate Order Rule with Dag Driver (#9452)
* [gas] convert output to InternalGas cost (#9603)
* [move-ir-compiler] add Nop token support (#9599)
* [dag] Integrate Dag Fetcher with Dag Driver (#9453)
* [dag] Integrate Dag Fetcher with Dag Driver
* [dag] Handle fetch response in dag handler
* [TS SDK v2] Add AccountAddress (#9564)
* [CLI] Disallow "rotating" to same private key (#9546)
* trivial: fix gas meter mutability
* Remove unused deps for CLI (#9531)
* typo in functions.md
* typo spec-lang.md typo
* Update package-upgrades.md space added
* Update developer-docs-site/docs/move/book/package-upgrades.md
* Update unit-testing.md typo
* [GHA] Only run all perf tests on schedule.
* [TS SDK example] Add a working example of converting a MultiEd25519 account to a MultiSig account (#9516)
* An example of converting a MultiEd25519 account to a MultiSig account
  Cleaned up the file even more, added assertions and print statements at the end
  Removing incorrect addresses
  Formatting and clarifying the steps.
  Changing network to DEVNET from local
  Removing confusing comment from prior version and adding comment about making sure secondsTilExpiration is set accordingly
  Fixing file name
  Fixing variable name
  Making header comments less long
* Adding the ability to add metadata to the multisig account, since we were using empty arrays before. Asserted that the values on-chain match the input values at the end
* Using a much cleaner and reusable serializable class for the signed message
* Removing the unnecessary multisig as sender option
* Update e2e flow description at top of file
* Resolving rebase conflict and simplifying the signature construction
* Clarify the process of creating a MultiEd25519 account and that public keys are different from addresses
* Improve clarity of comments on token.move
* [NFT Metadata Crawler] Fix raw image uri parse error (#9628)
* Fix raw_image_uri parse bug
* upsert
* update compatibility test base
* [Dev Docs] Add note about indexer bootstrap support.
* Update developer-docs-site/docs/nodes/indexer-fullnode.md
* [dashboards] sync grafana dashboards
* remove payer from StateValueMetadata
* Track slot allocation fee on state metadata
* enable state value metadata tracking for devnet
* [compiler v2] Fix bugs around compilation of vector code (#9636)
* [compiler v2] Fix bugs around compilation of vector code
  This fixes a few bugs and closes #9629:
  - Usage of precompiled stdlib in transactional tests: the v2 compiler does not currently (and probably never will) support precompiled modules, so this is worked around
  - The stack-based bytecode generator created the wrong target on generic function calls
  - Needed to make the implicit conversion from `&mut` to `&` explicit in generated bytecode via the Freeze operation
  Added some additional tests while debugging this.
* Adding a new option `--print-bytecode` which can be provided to the `//# publish` and `//# run` transactional test runner commands. This is applied (for now) to the `sorter` test case only. Also introduced logic to map well-known vector functions to the associated builtin opcodes.
* [TS SDK v2] Reject invalid SHORT form for special addresses in fromString (#9640)
* [TS SDK v2] Reject invalid SHORT form for special addresses in fromString
* Update account_address.ts
* Update account_address.ts
* [TS SDK v2] Add equality methods for Hex and AccountAddress (#9641)
* Handle empty network address in CLI (#9576)
* [CLI][e2e] Tests create-resource-account and derive-resource-account-address (#8757)
* CLI e2e for resource account
* add account list to test resource account
* [prover] lock version dependency for z3 and boogie (#8718) (#9524)
  Co-authored-by: Aalok Thakkar
* Update workflows (#9650)
* Update semgrep.yaml to also run daily
* update semgrep rule
* fix workflows
* Update .github/workflows/semgrep.yaml
  Co-authored-by: Balaji Arun
---------
Co-authored-by: Balaji Arun
* [dag] bootstrap logic (#9455)
* add guard field to struct
* temp
* add executor benchmark profiling
* small fixes
* lint fix
* lint fix
* lint fix
* lint fix
* lint fix
* update cargo.lock
* [Spec] Ensures of resources.move (#9382)
* init
* fix md
* fix comment
* [Spec] Ensures of transaction_fee (#9460)
* hp trans_fee
* pre-pr
* add the condition that proposer == @vm_reserved
* fix comment
---------
Co-authored-by: chan-bing
* [TS SDK v2] Remove redundant code in AccountAddress.fromString (#9662)
* [docs] Rename Aptos Move CLI -> Aptos CLI (#9655)
* Thoroughly fix Issue 8875: Error out in compilation when encountering an unknown attribute (#9229)
  Fix issue-8875 thoroughly: warn on unknown attributes, add a --skip-attribute-check flag. Omit the warning on aptos_std library files, to avoid warning on some existing code. A few tangential fixes:
* [third party/.../test, compiler] Fix Move.toml files to point to local files instead of the network to avoid tests using the old stdlib. Oddly, this requires fixes to the capitalization of std. Fix the resolution warning to make it clear that std needs to be uncapitalized.
* [compiler] Hack to move-compiler to avoid warning about unknown attributes in aptos_std=0x1. This avoids surprising warnings on currently deployed library code.
* [compiler] test cases and stdlib source code updates for the attributes checks PR
* [aptos stdlib] Now that we have verified that the move-compiler Aptos stdlib hack works, fix stdlib to use #[test_only] on test helper functions instead of #[testonly]. Leave struct attributes since it's a deployment problem to make them disappear.
* [Storage Service] Add type support for subscription requests.
* [Storage Service] Add subscription stream support.
* [Storage Service] Adopt subscription metadata and loop when serving subscriptions.
* Make AccountAddress from_str conform to AIP-40, add from_str_strict (#9186)
* [TS SDK v2] Remove redundant code in AccountAddress.fromString
* Make AccountAddress from_str conform to AIP-40
* Use thiserror for AccountAddressParseError
* [Executor-benchmark] Make connected grps of txns fully conflicting grps (#9647)
  Fully conflicting groups are a more meaningful pattern for testing out the sharded executor because they cannot be executed in parallel (which is what we intend to benchmark).
  Cmd: cargo run -p aptos-executor-benchmark -- --block-size 100 --connected-tx-grps 5 --shuffle-connected-txns run-executor --main-signer-accounts 1000 --data-dir /tmp/some-db --checkpoint-dir /tmp/some-checkpoint --blocks 2
  For a good random workload we would want 'num_accounts_per_grp' to be much greater than 'num_txns_per_grp'. The command enforces 'num_signer_accounts' > '2 * num_txns_per_grp'.
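
To make the "fully conflicting" property in the item above concrete, here is a rough, self-contained sketch with toy types (not the benchmark's real transaction generator): every transfer in a group is routed through the group's first account, so any two transfers in the same group touch a common account and cannot commute, while different groups use disjoint accounts and remain independent.

```rust
/// Toy transfer: both accounts are read and written, so two transfers
/// conflict exactly when they share an account.
#[derive(Debug)]
struct Transfer {
    from: usize,
    to: usize,
}

/// Split the signer accounts into disjoint per-group slices and, inside each
/// slice, route every transfer to the first account (the "hub"). All
/// transfers within a group then conflict pairwise and must run sequentially;
/// transfers from different groups never conflict.
fn fully_conflicting_groups(
    num_signer_accounts: usize,
    num_groups: usize,
    txns_per_group: usize,
) -> Vec<Vec<Transfer>> {
    // Loosely mirrors the command's guard: num_signer_accounts > 2 * num_txns_per_grp.
    assert!(num_signer_accounts > 2 * txns_per_group);
    let accounts_per_group = num_signer_accounts / num_groups;
    assert!(accounts_per_group > txns_per_group, "groups must use disjoint accounts");

    (0..num_groups)
        .map(|g| {
            let hub = g * accounts_per_group; // hub account of this group
            (1..=txns_per_group)
                .map(|i| Transfer { from: hub + i, to: hub })
                .collect()
        })
        .collect()
}

fn main() {
    for (g, group) in fully_conflicting_groups(1000, 5, 20).iter().enumerate() {
        println!("group {g}: {} conflicting transfers, e.g. {:?}", group.len(), group[0]);
    }
}
```
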
* [NFT Metadata Crawler] Stop parsing completely if token_uri has already been parsed (#9649)
* stop parsing completely if token_uri has already been parsed, add logs, add more panics (failed to write to gcs, failed to commit postgres tx)
* add comment to clarify
* error -> warn
* rename
* Fixed signer to be formatted properly (#8866)
* Fixed signer to be formatted properly
  - fixed the issue in `string_utils`
  - replaced `testonly` with `test_only`
* Add a feature flag
* [Quorum Store] A couple metrics to observe backpressure better (#9239)
  ### Description
  A couple metrics to observe backpressure better
* [Execution] Do not hold the Arc of committed block in execution. (#9674)
* [NFT Metadata Crawler] Increase HTTP request for large files (#9677)
* retry time increase
* add to constants file
* remove unneeded comment
* Revert "swich to gcp"
  This reverts commit 8f28f93c26fcb1c9ab1322fab4c00e0198ffbb67.
* [NFT Metadata Crawler] Maintain image aspect ratio on resize (#9688)
* maintain image aspect ratio on resize
* fix lint
* [Spec] update spec for vesting.move (#9115)
* new vesting
* new vesting
* init
* fix comment
* fix md
* add comment
* add head
* add func total_accumulated_rewards time
* rm boogie
* fixed timeout
* close timeout function's verification
* rust lint fix
* del head
---------
Co-authored-by: chan-bing
* [event_v2] module event extension, attribute and extended check
* [eventv2] make v0/v1 to v1/v2
* [event_v2] test
* [events v2] Sample of how to check for event type attributes in the extended checker
* [eventv2] add module publishing validation
* [eventv2] add module event feature
* [eventv2] tests and fixes
* [eventv2] address comments and forbid script emitting module events
* [eventv2] add script event verifier to forbid event emitting
* add #[event] to known attributes
* trivial: add required feature to dependency to fix tests
  `e2e-move-tests` when run alone doesn't pass because `pub const GAS_UNIT_PRICE: u64 = 0;` is not in effect
* Revert "add #[event] to known attributes"
  This reverts commit 5af30fb2adaa4247a711b2cbf75d81caa5412249.
* Revert "[eventv2] add script event verifier to forbid event emitting"
  This reverts commit 6a30bcdd429eca16668b599e674a493efd619cb7.
* Revert "[eventv2] address comments and forbid script emitting module events"
  This reverts commit e4a9c2cf41aeae5ae526f940f400909751bdc34f.
* Revert "[eventv2] tests and fixes"
  This reverts commit 4040b3d6b390ed75d344008f2dcd4b51c02bce13.
* Revert "[eventv2] add module event feature"
  This reverts commit 5a12afcb2e270217994ca5eafd309594f0a37dce.
* Revert "[eventv2] add module publishing validation"
  This reverts commit 7543f6e02fc0f2ecaf30c0830b4ce1d425ee4be5.
* Revert "[events v2] Sample of how to check for event type attributes in the extended checker"
  This reverts commit 72cb7b4c7b9ed192962b3adfa6ad1459331afbf7.
* Revert "[event_v2] test"
  This reverts commit c93e61d61c9fd4884fcf0cd18ccc4f92cad9273a.
* Revert "[eventv2] make v0/v1 to v1/v2"
  This reverts commit 0ba554c725562a50952d0dc27b81149c74f97757.
* Revert "[event_v2] module event extension, attribute and extended check"
  This reverts commit ed99bf02ec7e3578ad4669ea36b3515a0ff91ac6.
* [compiler v2] Move the stackless bytecode crate out of prover (#9698)
  This moves the `move-stackless-bytecode` crate out of its current location in the `move-prover` into the `move-model`. This is pure relocation. Subsequent PRs will split off prover-specific code and move it back to the prover tree.
* [gas][docs] example guides (#9633)
* [refactoring] Relax trait bounds, reduce dependence on `StateView` (#9644)
  The API only needs a move resolver, not a state view. Decoupling `MoveResolverExt` into `AptosMoveResolver + StateView` so that any future changes are not visible on the API side.
  1. Move API checks for resource groups where they should be.
  2. Make trait bounds for config not require state view. Right now state view is used all over the place although it seems this can be avoided.
  3. Reduce the usage of `MoveResolverExt`: e.g. the session only needs to talk to the resolver.
  4. State value metadata has its own resolver. This makes storage accesses almost uniform (still need to fix `read_write_set`), which should be implemented in the storage adapter.
  5. Some minor typo fixes and dead code removal.
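
As a schematic illustration of the trait split described in the item above (the traits and methods below are simplified stand-ins, not the actual aptos-vm or state-view definitions), read-only API code can be bounded on just the resolver trait while execution code keeps the combined bound:

```rust
// Stand-in for the resolver side (what the API layer actually needs).
trait ResourceResolver {
    fn get_resource(&self, addr: &str, ty: &str) -> Option<Vec<u8>>;
}

// Stand-in for raw state access (what execution and storage code need).
trait StateView {
    fn get_state_value(&self, key: &str) -> Option<Vec<u8>>;
}

// Before the split, one "ext" trait bundled both capabilities, so even
// read-only API code had to name a state-view bound. After the split,
// each caller asks only for the capability it uses.
fn render_resource<R: ResourceResolver>(resolver: &R, addr: &str, ty: &str) -> String {
    match resolver.get_resource(addr, ty) {
        Some(bytes) => format!("{ty} @ {addr}: {} bytes", bytes.len()),
        None => format!("{ty} @ {addr}: not found"),
    }
}

fn execute_block<S: ResourceResolver + StateView>(state: &S) -> usize {
    // Execution still sees both views through the combined bound.
    state.get_state_value("epoch").map_or(0, |v| v.len())
        + state.get_resource("0x1", "Account").map_or(0, |v| v.len())
}

// One backing store can implement both traits.
struct InMemoryState;

impl ResourceResolver for InMemoryState {
    fn get_resource(&self, _addr: &str, _ty: &str) -> Option<Vec<u8>> {
        Some(vec![0u8; 8])
    }
}

impl StateView for InMemoryState {
    fn get_state_value(&self, _key: &str) -> Option<Vec<u8>> {
        Some(vec![0u8; 4])
    }
}

fn main() {
    let state = InMemoryState;
    println!("{}", render_resource(&state, "0x1", "Account"));
    println!("total bytes touched: {}", execute_block(&state));
}
```
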
* [dag] network sender implementation (#9456)
* [Storage Service] Add subscription stream support.
* [Storage Service] Add tests for subscription streams.
* [Storage Service] Add tests for stream looping.
* Add get_state_value_u128 method to TStateView (#9097)
* [Spec] Ensures for voting (#9462)
* new_voting
* run lint
* little change
* fix md
* fix schema name
* fix error in simplemap
---------
Co-authored-by: chan-bing
* [scripts] fixes to arch linux (pacman)
* Archlinux / python / pip insist that you use the package manager solutions, not pip, for python packages
* The `libudev-dev` package seems to exist in the default system install, and under a wildly different name.
* [Block Executor] Refactor view & output traits & mvhashmap (#9659)
* [MVHashMap] convenient (shared) view state, aggregator v1 port
* [Block executor] new traits for processing the output
* [typo*] aptos-token and wallets (#9661)
* Update aptos-token.md typo "maxium"/maximum
* [typo*] Update wallets.md dot added
* [compiler v2] Stackless Bytecode Refactoring (#9713)
  This is the 2nd step in the refactoring started in #9698. This splits off prover-specific parts from the crate `move-model/bytecode` into `move-prover/bytecode-pipeline`. Shareable dataflow analysis and transformation processors, like livevar and reaching definitions, stay. The tests have been split as well, and a common testing driver has been moved into its own test utility crate. Github does not nicely show diffs like this, but this is functionally a no-op which only moves code around.
* [DAG] Adding randomness field in dag node (#9687)
* [NFT Metadata Crawler] Fix double slash URI parsing error, add ACKing PubSub message on receive (#9706)
* fix double slash uri parsing error, acking on message receive
* edit var names
* fix test
* [api] Add consistent API testing (#9577)
* [api] Add consistent API testing (#9146)
  This commit introduces consistent API testing where we run a consistent load through the network mimicking various user flows. See proposal for more high level information.
  Changes:
  Created the aptos-api-tester crate which runs 4 user flows (account creation, coin transfer, NFT transfer, module publishing) and outputs results to stdout.
  Added token-client to the SDK with the methods needed to run Your First NFT, mirroring the typescript SDK and the rust coin client by using existing builders.
  Added PartialEq to aptos_rest_client::Account for testing.
  Added the crate to the tools docker image to run with GCP Cloud Run.
* [api] Add metrics to api tester (#9307)
  This commit introduces logging tools on top of the consistent API testing introduced here. See proposal for more high level information.
  Changes:
  Added a metrics pusher to aptos-api-tester with 4 histogram metrics for success, error, fail rates, and latency.
  Added logs to tests instead of printing to stdout.
* [api] Add threads and timestamps to api tests (#9349)
  This commit builds threads and run ID on top of the consistent API testing introduced here. See proposal for more high level information.
  Changes:
  Refactored tests into a dedicated file tests.rs.
  Switched to creating accounts for every test instead of reusing the same one, and created pre-test setup routines in testsetups.rs.
  Added run ID start_time to metrics.
* [api] Add support for strictness level for consistency on API tests (#9444)
  Description
  This commit builds threads and run ID on top of the consistent API testing introduced here. See proposal for more high level information. One problem we saw in the dashboard is that the problems we observed were regarding eventual consistency. We added support for adjusting the strictness level for such errors.
  Changes:
  Refactor of tests into a tests module and decomposing into steps (setting up for the next PR, individual step timing).
  Persistent checks for information retrieval.
  Added token support for the Testnet faucet.
* [api] Add individual step timing to api tester (#9502)
  This commit builds individual step timing on top of the consistent API testing introduced here. See proposal for more high level information. This addition allows us to dive deeper into what the issue is should we detect that any flow is consistently slower.
  Changes:
  Add a timer macro in macros.rs.
  Add helpers for emitting metrics and refactor process_result.
  Put publish_module in its own thread by creating a tokio runtime.
  Add sleeps between persistent checks.
  Reduce API client sleep time to 100ms from 500ms.
  Make test setups persistent.
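
As a rough idea of what per-step timing can look like, here is a generic, std-only sketch; the `time_step!` macro and the step names are hypothetical, not the actual contents of macros.rs, and a real harness would report into a histogram metric rather than stdout.

```rust
use std::time::Instant;

/// Run a step, report how long it took, and pass the result through.
macro_rules! time_step {
    ($name:expr, $body:expr) => {{
        let start = Instant::now();
        let result = $body;
        println!("step {:<20} took {:>6} ms", $name, start.elapsed().as_millis());
        result
    }};
}

fn main() {
    let account = time_step!("create_account", {
        // stand-in for the real setup call
        std::thread::sleep(std::time::Duration::from_millis(25));
        "0xA550C18"
    });

    let balance = time_step!("check_balance", {
        std::thread::sleep(std::time::Duration::from_millis(10));
        100u64
    });

    println!("{account} has balance {balance}");
}
```
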
* [api] Add view function testing to the API tester (#9658)
  This commit adds a new test which tests a simple view function.
* [dag] e2e integration test (#9457)
* [event_v2] module event extension, attribute and extended check
* [eventv2] make v0/v1 to v1/v2
* [event_v2] test
* [events v2] Sample of how to check for event type attributes in the extended checker
* [eventv2] add module publishing validation
* [eventv2] add module event feature
* [eventv2] tests and fixes
* [eventv2] address comments and forbid script emitting module events
* [eventv2] add script event verifier to forbid event emitting
* add #[event] to known attributes
* [eventv2] api test
* Update staking-pool-operations.md (#8685)
* Update staking-pool-operations.md
  corrected the CLI command to create staking_contract; previously the command created a direct pool type
* Update staking-pool-operations.md
  add additional 8 0s
* Update Docker images (#9699)
  Co-authored-by: gedigi
* [smart_vector] add destroy function for droppable T (#9434)
* [CLI] Add support for setting faucet auth token (#9715)
* [CLI] Add support for setting faucet auth token
* Use FaucetClient
* Unify thread name, and combine all threads in the same thread pool in the flame graph produced by cpu_profiler. (#9721)
* Feat/pypi ci (#9512)
* add python sdk publish ci
* remove check version
* [framework] Reset the release yaml for upcoming 1.7 release
  Test Plan: ran yq / generate proposals locally
  ```
  target/performance/aptos-release-builder generate-proposals --release-config aptos-move/aptos-release-builder/data/release.yaml --output-dir output
  ```
---------
Co-authored-by: Bo Wu
Co-authored-by: Maayan
Co-authored-by: Álvaro Lillo Igualada
Co-authored-by: yunuseozer
Co-authored-by: Oğuzhan (Olsen) Budanur <74462406+olsenbudanur@users.noreply.github.com>
Co-authored-by: Olsen Budanur
Co-authored-by: Balaji Arun
Co-authored-by: aldenhu
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: rustielin
Co-authored-by: igor-aptos <110557261+igor-aptos@users.noreply.github.com>
Co-authored-by: larry-aptos <112209412+larry-aptos@users.noreply.github.com>
Co-authored-by: George Mitenkov
Co-authored-by: Josh Lind
Co-authored-by: Gerardo Di Giacomo
Co-authored-by: Zorrot Chen
Co-authored-by: chan-bing
Co-authored-by: Matt <90358481+xbtmatt@users.noreply.github.com>
Co-authored-by: Greg Nazario
Co-authored-by: Daniel Porteous (dport)
Co-authored-by: Victor Gao <10379359+vgao1996@users.noreply.github.com>
Co-authored-by: Justin Chang <37165464+just-in-chang@users.noreply.github.com>
Co-authored-by: Wolfgang Grieskamp
Co-authored-by: maayan
Co-authored-by: Alin Tomescu
Co-authored-by: William Law
Co-authored-by: Vladislav ~ cryptomolot <88001005+cryptomolot@users.noreply.github.com>
Co-authored-by: David Wolinsky
Co-authored-by: JasperTimm
Co-authored-by: Jin <128556004+0xjinn@users.noreply.github.com>
Co-authored-by: aalok-t <140445856+aalok-t@users.noreply.github.com>
Co-authored-by: Aalok Thakkar
Co-authored-by: Brian R.
Murphy <132495859+brmataptos@users.noreply.github.com> Co-authored-by: Manu Dhundi Co-authored-by: Junkil Park Co-authored-by: Brian (Sunghoon) Cho Co-authored-by: Guoteng Rao <3603304+grao1991@users.noreply.github.com> Co-authored-by: Bo Wu Co-authored-by: Aaron Gao Co-authored-by: Andrei Tonkikh Co-authored-by: Rati Gelashvili Co-authored-by: Daniel Xiang <66756900+danielxiangzl@users.noreply.github.com> Co-authored-by: Nurullah Giray Kuru Co-authored-by: michelle-aptos <120680608+michelle-aptos@users.noreply.github.com> Co-authored-by: gedigi Co-authored-by: Fenix Co-authored-by: Perry Randall --- .dockerignore | 2 + .../file-change-determinator/action.yaml | 1 + .github/actions/general-lints/action.yaml | 2 + .../pull-request-target-code-checkout.yaml | 68 + .github/workflows/cli-e2e-tests.yaml | 19 +- .github/workflows/docker-build-test.yaml | 28 +- .github/workflows/execution-performance.yaml | 14 +- .github/workflows/forge-stable.yaml | 14 +- .../indexer-grpc-integration-tests.yaml | 1 - .github/workflows/lint-test.yaml | 8 + .../node-api-compatibility-tests.yaml | 41 +- .github/workflows/python-sdk-publish.yaml | 26 + .github/workflows/semgrep.yaml | 26 + .../workflow-run-execution-performance.yaml | 30 +- .github/workflows/workflow-run-forge.yaml | 27 +- Cargo.lock | 441 +++++-- Cargo.toml | 17 +- api/Cargo.toml | 2 + ..._by_invalid_address_missing_0x_prefix.json | 15 - ...ntry_function_argument_address_string.json | 2 +- ...id_entry_function_argument_u64_string.json | 2 +- ...test_missing_entry_function_arguments.json | 2 +- ...get_account_module_by_invalid_address.json | 2 +- ...t_account_resource_by_invalid_address.json | 9 +- api/src/context.rs | 21 +- api/src/tests/accounts_test.rs | 13 - api/src/tests/events_test.rs | 46 +- api/src/tests/invalid_post_request_test.rs | 2 +- api/src/tests/modules.rs | 5 +- api/src/tests/state_test.rs | 4 +- api/test-context/src/test_context.rs | 4 +- api/types/Cargo.toml | 1 + api/types/src/address.rs | 21 +- api/types/src/transaction.rs | 36 +- .../aptos-aggregator/src/delta_change_set.rs | 9 +- aptos-move/aptos-debugger/src/lib.rs | 2 +- aptos-move/aptos-gas-calibration/README.md | 81 +- aptos-move/aptos-gas-calibration/src/main.rs | 8 +- aptos-move/aptos-gas-calibration/src/solve.rs | 47 +- aptos-move/aptos-gas-meter/src/meter.rs | 8 +- aptos-move/aptos-gas-meter/src/traits.rs | 53 +- aptos-move/aptos-gas-profiling/Cargo.toml | 1 + .../aptos-gas-profiling/src/profiler.rs | 20 +- .../src/gas_schedule/transaction.rs | 32 +- .../aptos-memory-usage-tracker/src/lib.rs | 4 +- .../aptos-release-builder/data/release.yaml | 51 +- .../src/components/feature_flags.rs | 6 + .../src/aptos_test_harness.rs | 16 +- .../aptos-validator-interface/src/lib.rs | 4 - .../samples/add-numbers/Move.toml | 3 +- aptos-move/aptos-vm-types/src/change_set.rs | 67 +- aptos-move/aptos-vm-types/src/output.rs | 8 +- aptos-move/aptos-vm-types/src/storage.rs | 33 +- .../src/tests/test_change_set.rs | 12 +- .../aptos-vm-types/src/tests/test_output.rs | 11 +- aptos-move/aptos-vm-types/src/tests/utils.rs | 10 +- aptos-move/aptos-vm/Cargo.toml | 1 - aptos-move/aptos-vm/src/adapter_common.rs | 8 +- aptos-move/aptos-vm/src/aptos_vm.rs | 125 +- aptos-move/aptos-vm/src/aptos_vm_impl.rs | 91 +- aptos-move/aptos-vm/src/block_executor/mod.rs | 43 +- .../aptos-vm/src/block_executor/vm_wrapper.rs | 10 +- aptos-move/aptos-vm/src/data_cache.rs | 55 +- aptos-move/aptos-vm/src/foreign_contracts.rs | 11 - aptos-move/aptos-vm/src/lib.rs | 3 - aptos-move/aptos-vm/src/move_vm_ext/mod.rs | 2 +- 
.../aptos-vm/src/move_vm_ext/resolver.rs | 48 +- .../src/move_vm_ext/respawned_session.rs | 16 +- .../aptos-vm/src/move_vm_ext/session.rs | 99 +- aptos-move/aptos-vm/src/move_vm_ext/vm.rs | 20 +- aptos-move/aptos-vm/src/natives.rs | 3 + .../cross_shard_state_view.rs | 4 - .../src/sharded_block_executor/test_utils.rs | 6 +- .../aptos-vm/src/verifier/event_validation.rs | 184 +++ aptos-move/aptos-vm/src/verifier/mod.rs | 1 + aptos-move/block-executor/src/executor.rs | 117 +- .../src/proptest_types/baseline.rs | 37 +- .../src/proptest_types/bencher.rs | 2 +- .../src/proptest_types/tests.rs | 2 +- .../src/proptest_types/types.rs | 37 +- aptos-move/block-executor/src/task.rs | 22 +- .../src/txn_last_input_output.rs | 90 +- aptos-move/block-executor/src/view.rs | 136 +- aptos-move/e2e-move-tests/Cargo.toml | 2 +- aptos-move/e2e-move-tests/src/harness.rs | 10 + .../e2e-move-tests/src/tests/attributes.rs | 23 + aptos-move/e2e-move-tests/src/tests/mod.rs | 1 + .../e2e-move-tests/src/tests/module_event.rs | 116 ++ .../src/tests/state_metadata.rs | 26 +- ..._tests__create_account__create_account.exp | 4 +- ...__tests__data_store__borrow_after_move.exp | 6 +- ...__tests__data_store__change_after_move.exp | 6 +- ...s__data_store__move_from_across_blocks.exp | 10 +- ...s__module_publishing__duplicate_module.exp | 2 +- ...e_publishing__layout_compatible_module.exp | 2 +- ...incompatible_module_with_changed_field.exp | 2 +- ...out_incompatible_module_with_new_field.exp | 2 +- ...incompatible_module_with_removed_field.exp | 2 +- ...ncompatible_module_with_removed_struct.exp | 2 +- ..._publishing__linking_compatible_module.exp | 2 +- ...g_incompatible_module_with_added_param.exp | 2 +- ...incompatible_module_with_changed_param.exp | 2 +- ...ncompatible_module_with_removed_pub_fn.exp | 2 +- ...lishing__test_publishing_allow_modules.exp | 2 +- ..._test_publishing_modules_proper_sender.exp | 2 +- ...ests__verify_txn__test_open_publishing.exp | 2 +- aptos-move/e2e-tests/src/data_store.rs | 4 - aptos-move/e2e-tests/src/executor.rs | 42 +- aptos-move/e2e-tests/src/on_chain_configs.rs | 2 +- aptos-move/e2e-testsuite/Cargo.toml | 2 +- .../src/tests/failed_transaction_tests.rs | 11 +- .../src/tests/on_chain_configs.rs | 6 +- .../e2e-testsuite/src/tests/peer_to_peer.rs | 4 +- aptos-move/e2e-testsuite/src/tests/scripts.rs | 85 +- aptos-move/framework/Cargo.toml | 3 +- .../framework/aptos-framework/doc/account.md | 1 + .../framework/aptos-framework/doc/event.md | 165 ++- .../aptos-framework/doc/reconfiguration.md | 19 +- .../aptos-framework/doc/resource_account.md | 29 +- .../aptos-framework/doc/transaction_fee.md | 64 +- .../doc/transaction_validation.md | 155 ++- .../framework/aptos-framework/doc/vesting.md | 284 ++++- .../framework/aptos-framework/doc/voting.md | 203 ++- .../aptos-framework/sources/account.spec.move | 2 + .../aptos-framework/sources/event.move | 31 + .../aptos-framework/sources/event.spec.move | 10 + .../sources/reconfiguration.spec.move | 34 +- .../sources/resource_account.move | 30 +- .../sources/resource_account.spec.move | 9 +- .../sources/transaction_fee.spec.move | 79 +- .../sources/transaction_validation.spec.move | 97 +- .../aptos-framework/sources/vesting.spec.move | 237 +++- .../aptos-framework/sources/voting.spec.move | 176 ++- .../framework/aptos-stdlib/doc/big_vector.md | 33 + .../framework/aptos-stdlib/doc/math128.md | 34 - .../framework/aptos-stdlib/doc/math64.md | 34 - .../framework/aptos-stdlib/doc/math_fixed.md | 34 - .../aptos-stdlib/doc/math_fixed64.md | 34 - 
.../aptos-stdlib/doc/smart_vector.md | 55 + .../aptos-stdlib/doc/string_utils.md | 57 +- .../sources/data_structures/big_vector.move | 20 +- .../sources/data_structures/smart_vector.move | 21 +- .../framework/aptos-stdlib/sources/debug.move | 36 +- .../aptos-stdlib/sources/math128.move | 2 +- .../aptos-stdlib/sources/math64.move | 2 +- .../aptos-stdlib/sources/math_fixed.move | 2 +- .../aptos-stdlib/sources/math_fixed64.move | 2 +- .../aptos-stdlib/sources/string_utils.move | 2 +- .../aptos-token-objects/doc/token.md | 4 +- .../aptos-token-objects/sources/token.move | 4 +- .../cached-packages/generated/head.mrb | Bin 596184 -> 0 bytes .../framework/move-stdlib/doc/features.md | 134 +- .../move-stdlib/sources/configs/features.move | 59 +- .../sources/configs/features.spec.move | 4 + aptos-move/framework/src/aptos.rs | 7 +- aptos-move/framework/src/built_package.rs | 16 + aptos-move/framework/src/extended_checks.rs | 147 ++- aptos-move/framework/src/module_metadata.rs | 15 + aptos-move/framework/src/natives/event.rs | 191 ++- .../framework/src/natives/string_utils.rs | 22 +- aptos-move/framework/src/prover.rs | 16 +- .../framework/tests/move_prover_tests.rs | 12 +- aptos-move/framework/tests/move_unit_test.rs | 3 +- aptos-move/move-examples/Cargo.toml | 1 + aptos-move/move-examples/event/Move.toml | 12 + .../move-examples/event/sources/event.move | 76 ++ .../move-examples/tests/move_prover_tests.rs | 2 + .../move-examples/tests/move_unit_tests.rs | 2 + aptos-move/mvhashmap/src/lib.rs | 29 +- aptos-move/mvhashmap/src/unit_tests/mod.rs | 24 +- .../src/unit_tests/proptest_types.rs | 10 +- aptos-move/mvhashmap/src/versioned_data.rs | 11 +- aptos-move/mvhashmap/src/versioned_modules.rs | 15 +- aptos-move/vm-genesis/src/genesis_context.rs | 4 - aptos-move/vm-genesis/src/lib.rs | 20 +- .../src/admin_script_builder.rs | 5 +- .../src/writeset_builder.rs | 2 +- config/src/config/state_sync_config.rs | 9 + consensus/src/consensusdb/consensusdb_test.rs | 12 +- consensus/src/dag/anchor_election.rs | 2 +- consensus/src/dag/bootstrap.rs | 117 ++ consensus/src/dag/dag_driver.rs | 87 +- consensus/src/dag/dag_fetcher.rs | 93 +- consensus/src/dag/dag_handler.rs | 76 +- consensus/src/dag/dag_network.rs | 9 +- consensus/src/dag/mod.rs | 7 +- consensus/src/dag/order_rule.rs | 4 +- .../{reliable_broadcast.rs => rb_handler.rs} | 51 +- consensus/src/dag/tests/dag_driver_tests.rs | 129 ++ consensus/src/dag/tests/dag_network_test.rs | 17 +- consensus/src/dag/tests/helpers.rs | 22 +- consensus/src/dag/tests/integration_tests.rs | 236 ++++ consensus/src/dag/tests/mod.rs | 4 +- consensus/src/dag/tests/order_rule_tests.rs | 4 +- ...broadcast_tests.rs => rb_handler_tests.rs} | 39 +- consensus/src/dag/tests/types_test.rs | 18 +- consensus/src/dag/types.rs | 39 +- .../src/liveness/leader_reputation_test.rs | 2 +- consensus/src/network.rs | 78 +- consensus/src/quorum_store/batch_generator.rs | 6 +- consensus/src/quorum_store/counters.rs | 18 + crates/aptos-api-tester/Cargo.toml | 34 + crates/aptos-api-tester/src/consts.rs | 68 + crates/aptos-api-tester/src/counters.rs | 75 ++ crates/aptos-api-tester/src/macros.rs | 18 + crates/aptos-api-tester/src/main.rs | 97 ++ .../aptos-api-tester/src/persistent_check.rs | 226 ++++ crates/aptos-api-tester/src/strings.rs | 59 + .../src/tests/coin_transfer.rs | 265 ++++ crates/aptos-api-tester/src/tests/mod.rs | 7 + .../aptos-api-tester/src/tests/new_account.rs | 147 +++ .../src/tests/publish_module.rs | 385 ++++++ .../src/tests/tokenv1_transfer.rs | 576 +++++++++ 
.../src/tests/view_function.rs | 177 +++ crates/aptos-api-tester/src/tokenv1_client.rs | 460 +++++++ crates/aptos-api-tester/src/utils.rs | 310 +++++ crates/aptos-profiler/Cargo.toml | 28 + crates/aptos-profiler/src/cpu_profiler.rs | 96 ++ crates/aptos-profiler/src/jeprof.py | 24 + crates/aptos-profiler/src/lib.rs | 97 ++ crates/aptos-profiler/src/memory_profiler.rs | 130 ++ crates/aptos-profiler/src/utils.rs | 17 + crates/aptos-rest-client/src/faucet.rs | 60 +- crates/aptos-rest-client/src/lib.rs | 6 +- crates/aptos-rest-client/src/types.rs | 2 +- crates/aptos-rosetta/src/types/objects.rs | 14 +- crates/aptos/Cargo.toml | 8 +- crates/aptos/e2e/cases/account.py | 73 ++ crates/aptos/e2e/main.py | 2 + crates/aptos/src/account/fund.rs | 15 +- crates/aptos/src/account/key_rotation.rs | 6 + crates/aptos/src/common/init.rs | 31 +- crates/aptos/src/common/types.rs | 40 +- crates/aptos/src/common/utils.rs | 40 +- crates/aptos/src/governance/mod.rs | 1 + crates/aptos/src/move_tool/coverage.rs | 3 + crates/aptos/src/move_tool/mod.rs | 24 +- crates/aptos/src/move_tool/show.rs | 1 + .../src/node/analyze/analyze_validators.rs | 8 +- crates/aptos/src/node/mod.rs | 50 +- crates/aptos/src/test/mod.rs | 4 +- crates/reliable-broadcast/src/lib.rs | 9 +- crates/reliable-broadcast/src/tests.rs | 2 +- dashboards/end-to-end-txn-latency.json | 28 +- dashboards/end-to-end-txn-latency.json.gz | Bin 5175 -> 5187 bytes dashboards/execution.json | 657 ++++++++-- dashboards/execution.json.gz | Bin 7381 -> 8162 bytes dashboards/overview.json | 167 ++- dashboards/overview.json.gz | Bin 10160 -> 10431 bytes dashboards/storage-backup-and-restore.json | 758 +++++------ dashboards/storage-backup-and-restore.json.gz | Bin 6588 -> 6448 bytes dashboards/system.json | 104 +- dashboards/system.json.gz | Bin 2961 -> 3167 bytes .../docs/move/book/functions.md | 2 +- .../docs/move/book/package-upgrades.md | 2 +- .../docs/move/book/unit-testing.md | 4 +- .../docs/move/move-on-aptos/cli.md | 4 +- .../docs/move/prover/spec-lang.md | 2 +- .../docs/nodes/full-node/aptos-db-restore.md | 18 + .../docs/nodes/indexer-fullnode.md | 6 + .../operator/staking-pool-operations.md | 10 +- .../docs/standards/aptos-token.md | 2 +- developer-docs-site/docs/standards/wallets.md | 2 +- developer-docs-site/docs/tutorials/index.md | 10 +- docker/builder/build-tools.sh | 4 + docker/builder/docker-bake-rust-all.hcl | 14 +- .../builder/nft-metadata-crawler.Dockerfile | 22 + docker/builder/tools.Dockerfile | 7 + .../indexer-grpc-data-service/src/main.rs | 3 + .../indexer-grpc-data-service/src/service.rs | 18 +- .../src/tests/proto_converter_tests.rs | 3 + .../models/nft_metadata_crawler_uris_query.rs | 4 + .../src/utils/constants.rs | 6 + .../src/utils/database.rs | 1 + .../src/utils/gcs.rs | 7 +- .../src/utils/image_optimizer.rs | 46 +- .../src/utils/json_parser.rs | 24 +- .../src/utils/uri_parser.rs | 12 +- .../nft-metadata-crawler-parser/src/worker.rs | 259 ++-- .../sdk/aptos_sdk/aptos_token_client.py | 12 +- .../python/sdk/aptos_sdk/async_client.py | 21 + ecosystem/python/sdk/examples/aptos-token.py | 13 +- ecosystem/typescript/sdk/CHANGELOG.md | 8 + .../multi_ed25519_to_multisig.ts | 380 ++++++ .../typescript-esm/offer_capabilities.ts | 163 +++ .../sdk/examples/typescript-esm/package.json | 4 +- ecosystem/typescript/sdk/package.json | 2 +- .../sdk/src/indexer/generated/operations.ts | 38 +- .../sdk/src/indexer/generated/queries.ts | 192 ++- .../sdk/src/indexer/generated/types.ts | 719 +++++++++-- ...urrentTokenOwnershipFieldsFragment.graphql | 46 +- 
.../queries/getAccountCoinsData.graphql | 13 +- .../getAccountTransactionsCount.graphql | 2 +- .../getAccountTransactionsData.graphql | 20 +- .../indexer/queries/getCollectionData.graphql | 9 +- .../getCollectionsWithOwnedTokens.graphql | 21 +- .../queries/getNumberOfDelegators.graphql | 1 + .../queries/getTokenActivities.graphql | 17 +- .../queries/getTokenCurrentOwnerData.graphql | 3 +- .../src/indexer/queries/getTokenData.graphql | 27 +- .../queries/getTokenOwnersData.graphql | 3 +- .../queries/getUserTransactions.graphql | 14 +- .../tokenActivitiesFieldsFragment.graphql | 17 + .../typescript/sdk/src/providers/indexer.ts | 40 +- .../sdk/src/tests/e2e/client.test.ts | 19 - .../sdk/src/tests/e2e/indexer.test.ts | 26 +- ecosystem/typescript/sdk/src/version.ts | 2 +- ecosystem/typescript/sdk_v2/package.json | 5 +- .../typescript/sdk_v2/src/api/aptos_config.ts | 14 + .../sdk_v2/src/core/account_address.ts | 365 ++++++ .../typescript/sdk_v2/src/core/common.ts | 40 + ecosystem/typescript/sdk_v2/src/core/hex.ts | 177 +++ ecosystem/typescript/sdk_v2/src/core/index.ts | 6 + .../typescript/sdk_v2/src/types/index.ts | 1 + .../sdk_v2/src/utils/api-endpoints.ts | 27 + .../typescript/sdk_v2/src/utils/const.ts | 3 + .../sdk_v2/tests/unit/account_address.test.ts | 358 ++++++ .../sdk_v2/tests/unit/aptos_config.test.ts | 59 + .../typescript/sdk_v2/tests/unit/hex.test.ts | 98 ++ execution/executor-benchmark/Cargo.toml | 1 + execution/executor-benchmark/src/lib.rs | 11 +- execution/executor-benchmark/src/main.rs | 40 + .../executor-benchmark/src/native_executor.rs | 4 +- .../src/transaction_generator.rs | 97 +- execution/executor-service/src/test_utils.rs | 2 +- .../src/integration_test_impl.rs | 33 +- .../src/parsed_transaction_output.rs | 4 +- execution/executor/src/block_executor.rs | 4 +- execution/executor/src/chunk_executor.rs | 4 + .../executor/src/components/chunk_output.rs | 20 +- .../executor/src/mock_vm/mock_vm_test.rs | 4 - execution/executor/src/mock_vm/mod.rs | 25 +- .../executor/tests/db_bootstrapper_test.rs | 4 +- .../tests/storage_integration_test.rs | 4 +- scripts/dev_setup.sh | 11 +- .../consensus-notifications/src/lib.rs | 2 +- .../event-notifications/src/lib.rs | 43 +- .../event-notifications/src/tests.rs | 2 +- .../src/tests/storage_synchronizer.rs | 4 +- .../state-sync-driver/src/tests/utils.rs | 2 +- .../storage-service/server/src/handler.rs | 96 +- state-sync/storage-service/server/src/lib.rs | 176 ++- .../storage-service/server/src/logging.rs | 3 + .../storage-service/server/src/metrics.rs | 33 + .../server/src/optimistic_fetch.rs | 47 +- .../server/src/subscription.rs | 991 +++++++++++++++ .../storage-service/server/src/tests/mock.rs | 13 +- .../storage-service/server/src/tests/mod.rs | 4 + .../src/tests/new_transaction_outputs.rs | 24 +- .../server/src/tests/new_transactions.rs | 26 +- .../src/tests/new_transactions_or_outputs.rs | 30 +- .../server/src/tests/optimistic_fetch.rs | 44 +- .../server/src/tests/request_moderator.rs | 18 +- .../server/src/tests/storage_summary.rs | 6 +- .../tests/subscribe_transaction_outputs.rs | 597 +++++++++ .../src/tests/subscribe_transactions.rs | 701 ++++++++++ .../subscribe_transactions_or_outputs.rs | 941 ++++++++++++++ .../server/src/tests/subscription.rs | 1125 +++++++++++++++++ .../storage-service/server/src/tests/utils.rs | 643 ++++++++-- .../storage-service/server/src/utils.rs | 185 ++- .../storage-service/types/src/requests.rs | 58 +- .../storage-service/types/src/responses.rs | 36 +- state-sync/storage-service/types/src/tests.rs | 125 +- 
storage/aptosdb/src/backup/restore_utils.rs | 62 +- storage/aptosdb/src/event_store/mod.rs | 38 +- storage/aptosdb/src/event_store/test.rs | 21 +- storage/aptosdb/src/ledger_db.rs | 46 +- storage/aptosdb/src/lib.rs | 69 +- storage/aptosdb/src/schema/db_metadata/mod.rs | 1 + storage/aptosdb/src/state_kv_db.rs | 9 +- storage/aptosdb/src/state_merkle_db.rs | 157 ++- storage/aptosdb/src/state_restore/mod.rs | 2 + storage/aptosdb/src/state_store/mod.rs | 44 +- storage/aptosdb/src/test_helper.rs | 33 +- storage/db-tool/src/tests.rs | 143 ++- storage/jellyfish-merkle/src/restore/mod.rs | 11 +- .../state-view/src/in_memory_state_view.rs | 4 - storage/state-view/src/lib.rs | 16 +- .../src/async_proof_fetcher.rs | 2 +- .../src/cached_state_view.rs | 8 - storage/storage-interface/src/state_view.rs | 4 - testsuite/forge-cli/src/main.rs | 148 ++- testsuite/forge/src/success_criteria.rs | 22 +- .../generate-format/tests/staged/api.yaml | 17 +- .../generate-format/tests/staged/aptos.yaml | 17 +- .../tests/staged/consensus.yaml | 17 +- testsuite/pangu.py | 1 + testsuite/pangu_lib/README.md | 216 +++- .../pangu_lib/testnet_commands/commands.py | 34 +- .../pangu_lib/testnet_commands/get_testnet.py | 5 + .../testnet_commands/transaction_emitter.py | 120 ++ testsuite/pangu_lib/util.py | 2 + testsuite/single_node_performance.py | 16 +- testsuite/test_framework/kubernetes.py | 152 +++ .../testcases/src/load_vs_perf_benchmark.rs | 142 ++- third_party/move/benchmarks/src/move_vm.rs | 1 + .../diem-framework/crates/cli/Cargo.toml | 2 +- .../move/evm/extract-ethereum-abi/Cargo.toml | 2 +- third_party/move/evm/move-to-yul/Cargo.toml | 4 +- third_party/move/evm/move-to-yul/src/lib.rs | 14 +- .../move-to-yul/tests/dispatcher_testsuite.rs | 17 +- .../move/evm/move-to-yul/tests/testsuite.rs | 14 +- .../async/move-async-vm/src/async_vm.rs | 11 +- .../async/move-async-vm/tests/testsuite.rs | 8 +- third_party/move/move-analyzer/Cargo.toml | 2 +- .../src/dependencies.rs | 2 +- third_party/move/move-compiler-v2/Cargo.toml | 5 +- .../src/bytecode_generator.rs | 42 +- .../function_generator.rs | 193 ++- .../file_format_generator/module_generator.rs | 38 + third_party/move/move-compiler-v2/src/lib.rs | 11 +- .../move/move-compiler-v2/src/options.rs | 18 +- .../pipeline/livevar_analysis_processor.rs | 49 + .../move/move-compiler-v2/src/pipeline/mod.rs | 4 + .../reference_conversion.exp | 46 + .../reference_conversion.move | 15 + .../tests/file-format-generator/assign.exp | 71 +- .../tests/file-format-generator/borrow.exp | 140 +- .../tests/file-format-generator/fields.exp | 165 ++- .../file-format-generator/generic_call.exp | 58 + .../file-format-generator/generic_call.move | 9 + .../tests/file-format-generator/globals.exp | 81 +- .../tests/file-format-generator/if_else.exp | 116 +- .../tests/file-format-generator/loop.exp | 239 +++- .../tests/file-format-generator/operators.exp | 578 ++++++++- .../file-format-generator/operators.move | 4 +- .../file-format-generator/pack_unpack.exp | 46 +- .../tests/file-format-generator/vector.exp | 20 + .../move/move-compiler-v2/tests/testsuite.rs | 47 +- .../tests/control_flow/sorter.exp | 584 +++++++++ .../tests/control_flow/sorter.move | 73 ++ .../tests/evaluation_order/arg_order.exp | 3 + .../tests/evaluation_order/arg_order.move | 18 + third_party/move/move-compiler/Cargo.toml | 2 +- .../src/attr_derivation/async_deriver.rs | 29 +- .../src/attr_derivation/evm_deriver.rs | 61 + .../move-compiler/src/attr_derivation/mod.rs | 68 +- .../move/move-compiler/src/bin/move-build.rs | 17 +- 
.../move/move-compiler/src/bin/move-check.rs | 16 +- .../src/command_line/compiler.rs | 37 +- .../move-compiler/src/command_line/mod.rs | 2 + .../move-compiler/src/diagnostics/codes.rs | 2 + .../move/move-compiler/src/expansion/ast.rs | 7 +- .../move-compiler/src/expansion/translate.rs | 74 +- third_party/move/move-compiler/src/lib.rs | 2 +- .../move/move-compiler/src/shared/mod.rs | 177 ++- .../src/unit_test/filter_test_members.rs | 4 +- .../src/unit_test/plan_builder.rs | 2 +- .../src/verification/ast_filter.rs | 4 +- .../parser/aptos_stdlib_attributes.exp | 24 + .../parser/aptos_stdlib_attributes.move | 10 + .../parser/aptos_stdlib_attributes2.move | 6 + .../move_check/parser/attribute_placement.exp | 84 ++ .../move_check/parser/attribute_variants.exp | 60 + .../parser/duplicate_attributes.exp | 18 + .../tests/move_check/parser/testonly.exp | 12 + .../tests/move_check/parser/testonly.move | 19 + .../aptos_stdlib_attributes.exp | 24 + .../aptos_stdlib_attributes.move | 10 + .../aptos_stdlib_attributes2.move | 6 + .../attribute_no_closing_bracket.exp | 8 + .../attribute_no_closing_bracket.move | 5 + .../attribute_placement.move | 46 + .../attribute_variants.move | 6 + .../duplicate_attributes.exp | 24 + .../duplicate_attributes.move | 7 + .../extra_attributes.move | 26 + .../extra_attributes2.move | 23 + .../skip_attribute_checks/testonly.move | 9 + .../tests/move_check/typing/assign_tuple.exp | 9 + .../tests/move_check/typing/assign_tuple.move | 15 + .../tests/move_check/typing/tuple.move | 15 + .../unit_test/extra_attributes.move | 1 - .../unit_test/extra_attributes2.move | 23 + .../tests/move_check_testsuite.rs | 19 +- third_party/move/move-core/types/Cargo.toml | 1 + .../move-core/types/src/account_address.rs | 264 +++- .../move/move-core/types/src/effects.rs | 4 +- .../move/move-core/types/src/vm_status.rs | 2 +- third_party/move/move-ir-compiler/Cargo.toml | 2 +- .../move-ir-to-bytecode/src/compiler.rs | 4 + .../move-ir-to-bytecode/syntax/src/lexer.rs | 2 + .../move-ir-to-bytecode/syntax/src/syntax.rs | 10 +- .../bytecode-generation/statements/nop.exp | 19 + .../bytecode-generation/statements/nop.mvir | 14 + third_party/move/move-ir/types/src/ast.rs | 3 + .../move-model/bytecode-test-utils/Cargo.toml | 20 + .../move-model/bytecode-test-utils/src/lib.rs | 93 ++ .../bytecode/Cargo.toml | 5 +- .../bytecode/src/annotations.rs | 0 .../bytecode/src/borrow_analysis.rs | 0 .../bytecode/src/compositional_analysis.rs | 0 .../bytecode/src/dataflow_analysis.rs | 0 .../bytecode/src/dataflow_domains.rs | 0 .../bytecode/src/debug_instrumentation.rs | 0 .../bytecode/src/function_data_builder.rs | 0 .../bytecode/src/function_target.rs | 0 .../bytecode/src/function_target_pipeline.rs | 5 - .../bytecode/src/graph.rs | 0 .../bytecode/src/lib.rs | 39 +- .../bytecode/src/livevar_analysis.rs | 13 + .../bytecode/src/reaching_def_analysis.rs | 0 .../bytecode/src/stackless_bytecode.rs | 0 .../src/stackless_bytecode_generator.rs | 0 .../src/stackless_control_flow_graph.rs | 0 .../bytecode/src/usage_analysis.rs | 0 .../bytecode/tests/borrow/basic_test.exp | 140 +- .../bytecode/tests/borrow/basic_test.move | 0 .../bytecode/tests/borrow/function_call.exp | 38 +- .../bytecode/tests/borrow/function_call.move | 0 .../bytecode/tests/borrow/hyper_edge.exp | 18 +- .../bytecode/tests/borrow/hyper_edge.move | 0 .../tests/borrow_strong/basic_test.exp | 194 +-- .../tests/borrow_strong/basic_test.move | 0 .../bytecode/tests/borrow_strong/mut_ref.exp | 104 +- .../bytecode/tests/borrow_strong/mut_ref.move | 0 
.../regression_generic_and_native_type.exp | 0 .../regression_generic_and_native_type.move | 0 .../bytecode/tests/from_move/smoke_test.exp | 0 .../bytecode/tests/from_move/smoke_test.move | 0 .../bytecode/tests/from_move/specs-in-fun.exp | 0 .../tests/from_move/specs-in-fun.move | 0 .../tests/from_move/vector_instructions.exp | 0 .../tests/from_move/vector_instructions.move | 0 .../bytecode/tests/livevar/basic_test.exp | 191 +-- .../bytecode/tests/livevar/basic_test.move | 0 .../tests/reaching_def/basic_test.exp | 4 +- .../tests/reaching_def/basic_test.move | 0 .../tests/reaching_def/test_branching.exp | 9 +- .../tests/reaching_def/test_branching.move | 0 .../move-model/bytecode/tests/testsuite.rs | 66 + .../bytecode/tests/usage_analysis/test.exp | 0 .../bytecode/tests/usage_analysis/test.move | 0 third_party/move/move-model/src/lib.rs | 32 +- third_party/move/move-model/src/model.rs | 42 +- .../move/move-model/tests/testsuite.rs | 10 +- third_party/move/move-prover/Cargo.toml | 5 +- .../move-prover/boogie-backend/Cargo.toml | 3 +- .../boogie-backend/src/bytecode_translator.rs | 8 +- .../move-prover/boogie-backend/src/lib.rs | 2 +- .../move-prover/boogie-backend/src/options.rs | 84 +- .../boogie-backend/src/spec_translator.rs | 2 +- .../move-prover/bytecode-pipeline/Cargo.toml | 47 + .../src/clean_and_optimize.rs | 16 +- .../src/data_invariant_instrumentation.rs | 14 +- .../src/eliminate_imm_refs.rs | 12 +- .../src/global_invariant_analysis.rs | 16 +- .../src/global_invariant_instrumentation.rs | 16 +- .../global_invariant_instrumentation_v2.rs | 21 +- .../src/inconsistency_check.rs | 6 +- .../move-prover/bytecode-pipeline/src/lib.rs | 24 + .../src/loop_analysis.rs | 18 +- .../src/memory_instrumentation.rs | 16 +- .../src/mono_analysis.rs | 12 +- .../src/mut_ref_instrumentation.rs | 6 +- .../src/mutation_tester.rs | 12 +- .../src/number_operation.rs | 2 +- .../src/number_operation_analysis.rs | 22 +- .../src/options.rs | 0 .../src/packed_types_analysis.rs | 14 +- .../src/pipeline_factory.rs | 27 +- .../src/spec_instrumentation.rs | 26 +- .../src/verification_analysis.rs | 12 +- .../src/verification_analysis_v2.rs | 14 +- .../src/well_formed_instrumentation.rs | 14 +- .../data_invariant_instrumentation/borrow.exp | 0 .../borrow.move | 0 .../data_invariant_instrumentation/pack.exp | 0 .../data_invariant_instrumentation/pack.move | 0 .../data_invariant_instrumentation/params.exp | 0 .../params.move | 0 .../data_invariant_instrumentation/vector.exp | 0 .../vector.move | 0 .../tests/eliminate_imm_refs/basic_test.exp | 0 .../tests/eliminate_imm_refs/basic_test.move | 0 .../disable_in_body.exp | 0 .../disable_in_body.move | 0 .../global_invariant_analysis/mutual_inst.exp | 0 .../mutual_inst.move | 0 .../uninst_type_param_in_inv.exp | 0 .../uninst_type_param_in_inv.move | 0 .../borrow.exp | 0 .../borrow.move | 0 .../global_invariant_instrumentation/move.exp | 0 .../move.move | 0 .../update.exp | 0 .../update.move | 0 .../tests/memory_instr/basic_test.exp | 0 .../tests/memory_instr/basic_test.move | 0 .../tests/memory_instr/mut_ref.exp | 0 .../tests/memory_instr/mut_ref.move | 0 .../tests/mono_analysis/test.exp | 0 .../tests/mono_analysis/test.move | 0 .../mut_ref_instrumentation/basic_test.exp | 0 .../mut_ref_instrumentation/basic_test.move | 0 .../tests/spec_instrumentation/fun_spec.exp | 0 .../tests/spec_instrumentation/fun_spec.move | 0 .../tests/spec_instrumentation/generics.exp | 0 .../tests/spec_instrumentation/generics.move | 0 .../tests/spec_instrumentation/modifies.exp | 0 
.../tests/spec_instrumentation/modifies.move | 0 .../spec_instrumentation/opaque_call.exp | 0 .../spec_instrumentation/opaque_call.move | 0 .../tests/testsuite.rs | 137 +- .../verification_analysis/inv_relevance.exp | 0 .../verification_analysis/inv_relevance.move | 0 .../verification_analysis/inv_suspension.exp | 0 .../verification_analysis/inv_suspension.move | 0 third_party/move/move-prover/lab/Cargo.toml | 5 +- .../move/move-prover/lab/src/benchmark.rs | 8 +- .../move-docgen/tests/testsuite.rs | 2 + third_party/move/move-prover/src/cli.rs | 16 +- third_party/move/move-prover/src/lib.rs | 10 +- .../move/move-prover/tools/check_pr.sh | 2 +- .../nursery/tests/event_tests.move | 107 -- .../move/move-stdlib/src/natives/event.rs | 14 +- .../move-vm/integration-tests/src/compiler.rs | 12 +- .../src/tests/bad_storage_tests.rs | 2 +- .../src/tests/exec_func_effects_tests.rs | 10 +- .../src/tests/loader_tests.rs | 2 +- .../src/tests/mutated_accounts_tests.rs | 2 +- .../move/move-vm/runtime/src/data_cache.rs | 42 +- .../move-vm/runtime/src/native_functions.rs | 23 +- .../move/move-vm/runtime/src/session.rs | 30 +- .../move-vm/types/src/values/values_impl.rs | 1 - .../testing-infra/test-generation/Cargo.toml | 2 +- .../testing-infra/test-generation/src/lib.rs | 8 +- .../transactional-test-runner/Cargo.toml | 2 +- .../src/framework.rs | 133 +- .../transactional-test-runner/src/tasks.rs | 4 + .../src/vm_test_harness.rs | 21 +- .../tools/move-bytecode-viewer/Cargo.toml | 2 +- third_party/move/tools/move-cli/Cargo.toml | 2 +- .../move-cli/src/sandbox/commands/doctor.rs | 12 - .../move-cli/src/sandbox/commands/publish.rs | 3 +- .../move-cli/src/sandbox/commands/run.rs | 6 +- .../move-cli/src/sandbox/commands/view.rs | 9 - .../tools/move-cli/src/sandbox/utils/mod.rs | 22 +- .../src/sandbox/utils/on_disk_state_view.rs | 71 +- .../build_tests/dependency_chain/args.exp | 6 + .../tests/build_tests/dev_address/args.exp | 6 + .../build_tests/empty_module_no_deps/args.exp | 6 + .../include_exclude_stdlib/args.exp | 12 + .../build_tests/unbound_address/args.exp | 2 +- .../cross_process_tests/Package1/Move.toml | 4 +- .../cross_process_tests/Package2/Move.toml | 4 +- .../no_git_remote_package/Move.toml | 2 +- .../upload_tests/valid_package1/Move.toml | 2 +- .../upload_tests/valid_package2/Move.toml | 2 +- .../upload_tests/valid_package3/Move.toml | 2 +- .../move/tools/move-coverage/Cargo.toml | 2 +- .../move/tools/move-disassembler/Cargo.toml | 2 +- .../move/tools/move-explain/Cargo.toml | 2 +- .../move/tools/move-package/Cargo.toml | 2 +- .../src/compilation/compiled_package.rs | 28 +- .../src/compilation/model_builder.rs | 10 +- .../move/tools/move-package/src/lib.rs | 13 +- .../src/resolution/resolution_graph.rs | 2 +- .../compilation/basic_no_deps/Move.exp | 2 + .../basic_no_deps_address_assigned/Move.exp | 2 + .../Move.exp | 2 + .../basic_no_deps_test_mode/Move.exp | 2 + .../Move.exp | 2 + .../diamond_problem_no_conflict/Move.exp | 2 + .../compilation/multiple_deps_rename/Move.exp | 2 + .../multiple_deps_rename_one/Move.exp | 2 + .../test_sources/compilation/one_dep/Move.exp | 2 + .../one_dep_assigned_address/Move.exp | 2 + .../compilation/one_dep_renamed/Move.exp | 2 + .../compilation/one_dep_with_scripts/Move.exp | 2 + .../compilation/test_symlinks/Move.exp | 2 + .../invalid_identifier_package_name/Move.exp | 2 + .../parsing/minimal_manifest/Move.exp | 2 + .../resolution/basic_no_deps/Move.exp | 2 + .../basic_no_deps_address_assigned/Move.exp | 2 + .../Move.exp | 2 +- .../Move.exp | 2 + 
.../resolution/dep_good_digest/Move.exp | 2 + .../Move.exp | 2 + .../diamond_problem_no_conflict/Move.exp | 2 + .../resolution/multiple_deps_rename/Move.exp | 2 + .../test_sources/resolution/one_dep/Move.exp | 2 + .../one_dep_assigned_address/Move.exp | 2 + .../one_dep_multiple_of_same_name/Move.exp | 2 + .../one_dep_reassigned_address/Move.exp | 2 + .../Move.exp | 2 + .../Package1/Move.toml | 4 +- .../Package2/Move.toml | 4 +- .../move/tools/move-unit-test/Cargo.toml | 2 +- .../move/tools/move-unit-test/src/lib.rs | 16 +- .../tools/move-unit-test/src/test_runner.rs | 2 +- types/src/contract_event.rs | 207 ++- types/src/on_chain_config/aptos_features.rs | 8 +- types/src/proof/unit_tests/proof_test.rs | 2 +- types/src/proptest_types.rs | 23 +- types/src/state_store/state_value.rs | 15 +- types/src/unit_tests/contract_event_test.rs | 13 +- types/src/write_set.rs | 1 + 698 files changed, 25277 insertions(+), 5116 deletions(-) create mode 100644 .github/linters/semgrep/pull-request-target-code-checkout.yaml create mode 100644 .github/workflows/python-sdk-publish.yaml create mode 100644 .github/workflows/semgrep.yaml delete mode 100644 api/goldens/aptos_api__tests__accounts_test__test_get_account_resources_by_invalid_address_missing_0x_prefix.json delete mode 100644 aptos-move/aptos-vm/src/foreign_contracts.rs create mode 100644 aptos-move/aptos-vm/src/verifier/event_validation.rs create mode 100644 aptos-move/e2e-move-tests/src/tests/module_event.rs delete mode 100644 aptos-move/framework/cached-packages/generated/head.mrb create mode 100644 aptos-move/move-examples/event/Move.toml create mode 100644 aptos-move/move-examples/event/sources/event.move create mode 100644 consensus/src/dag/bootstrap.rs rename consensus/src/dag/{reliable_broadcast.rs => rb_handler.rs} (79%) create mode 100644 consensus/src/dag/tests/dag_driver_tests.rs create mode 100644 consensus/src/dag/tests/integration_tests.rs rename consensus/src/dag/tests/{reliable_broadcast_tests.rs => rb_handler_tests.rs} (78%) create mode 100644 crates/aptos-api-tester/Cargo.toml create mode 100644 crates/aptos-api-tester/src/consts.rs create mode 100644 crates/aptos-api-tester/src/counters.rs create mode 100644 crates/aptos-api-tester/src/macros.rs create mode 100644 crates/aptos-api-tester/src/main.rs create mode 100644 crates/aptos-api-tester/src/persistent_check.rs create mode 100644 crates/aptos-api-tester/src/strings.rs create mode 100644 crates/aptos-api-tester/src/tests/coin_transfer.rs create mode 100644 crates/aptos-api-tester/src/tests/mod.rs create mode 100644 crates/aptos-api-tester/src/tests/new_account.rs create mode 100644 crates/aptos-api-tester/src/tests/publish_module.rs create mode 100644 crates/aptos-api-tester/src/tests/tokenv1_transfer.rs create mode 100644 crates/aptos-api-tester/src/tests/view_function.rs create mode 100644 crates/aptos-api-tester/src/tokenv1_client.rs create mode 100644 crates/aptos-api-tester/src/utils.rs create mode 100644 crates/aptos-profiler/Cargo.toml create mode 100644 crates/aptos-profiler/src/cpu_profiler.rs create mode 100644 crates/aptos-profiler/src/jeprof.py create mode 100644 crates/aptos-profiler/src/lib.rs create mode 100644 crates/aptos-profiler/src/memory_profiler.rs create mode 100644 crates/aptos-profiler/src/utils.rs create mode 100644 docker/builder/nft-metadata-crawler.Dockerfile create mode 100644 ecosystem/typescript/sdk/examples/typescript-esm/multi_ed25519_to_multisig.ts create mode 100644 ecosystem/typescript/sdk/examples/typescript-esm/offer_capabilities.ts create 
mode 100644 ecosystem/typescript/sdk/src/indexer/queries/tokenActivitiesFieldsFragment.graphql create mode 100644 ecosystem/typescript/sdk_v2/src/core/account_address.ts create mode 100644 ecosystem/typescript/sdk_v2/src/core/common.ts create mode 100644 ecosystem/typescript/sdk_v2/src/core/hex.ts create mode 100644 ecosystem/typescript/sdk_v2/src/core/index.ts create mode 100644 ecosystem/typescript/sdk_v2/src/utils/api-endpoints.ts create mode 100644 ecosystem/typescript/sdk_v2/src/utils/const.ts create mode 100644 ecosystem/typescript/sdk_v2/tests/unit/account_address.test.ts create mode 100644 ecosystem/typescript/sdk_v2/tests/unit/aptos_config.test.ts create mode 100644 ecosystem/typescript/sdk_v2/tests/unit/hex.test.ts create mode 100644 state-sync/storage-service/server/src/subscription.rs create mode 100644 state-sync/storage-service/server/src/tests/subscribe_transaction_outputs.rs create mode 100644 state-sync/storage-service/server/src/tests/subscribe_transactions.rs create mode 100644 state-sync/storage-service/server/src/tests/subscribe_transactions_or_outputs.rs create mode 100644 state-sync/storage-service/server/src/tests/subscription.rs create mode 100644 testsuite/pangu_lib/testnet_commands/transaction_emitter.py create mode 100644 third_party/move/move-compiler-v2/src/pipeline/livevar_analysis_processor.rs create mode 100644 third_party/move/move-compiler-v2/src/pipeline/mod.rs create mode 100644 third_party/move/move-compiler-v2/tests/bytecode-generator/reference_conversion.exp create mode 100644 third_party/move/move-compiler-v2/tests/bytecode-generator/reference_conversion.move create mode 100644 third_party/move/move-compiler-v2/tests/file-format-generator/generic_call.exp create mode 100644 third_party/move/move-compiler-v2/tests/file-format-generator/generic_call.move create mode 100644 third_party/move/move-compiler-v2/transactional-tests/tests/control_flow/sorter.exp create mode 100644 third_party/move/move-compiler-v2/transactional-tests/tests/control_flow/sorter.move create mode 100644 third_party/move/move-compiler-v2/transactional-tests/tests/evaluation_order/arg_order.exp create mode 100644 third_party/move/move-compiler-v2/transactional-tests/tests/evaluation_order/arg_order.move create mode 100644 third_party/move/move-compiler/tests/move_check/parser/aptos_stdlib_attributes.exp create mode 100644 third_party/move/move-compiler/tests/move_check/parser/aptos_stdlib_attributes.move create mode 100644 third_party/move/move-compiler/tests/move_check/parser/aptos_stdlib_attributes2.move create mode 100644 third_party/move/move-compiler/tests/move_check/parser/attribute_placement.exp create mode 100644 third_party/move/move-compiler/tests/move_check/parser/attribute_variants.exp create mode 100644 third_party/move/move-compiler/tests/move_check/parser/testonly.exp create mode 100644 third_party/move/move-compiler/tests/move_check/parser/testonly.move create mode 100644 third_party/move/move-compiler/tests/move_check/skip_attribute_checks/aptos_stdlib_attributes.exp create mode 100644 third_party/move/move-compiler/tests/move_check/skip_attribute_checks/aptos_stdlib_attributes.move create mode 100644 third_party/move/move-compiler/tests/move_check/skip_attribute_checks/aptos_stdlib_attributes2.move create mode 100644 third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_no_closing_bracket.exp create mode 100644 third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_no_closing_bracket.move create mode 100644 
third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_placement.move create mode 100644 third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_variants.move create mode 100644 third_party/move/move-compiler/tests/move_check/skip_attribute_checks/duplicate_attributes.exp create mode 100644 third_party/move/move-compiler/tests/move_check/skip_attribute_checks/duplicate_attributes.move create mode 100644 third_party/move/move-compiler/tests/move_check/skip_attribute_checks/extra_attributes.move create mode 100644 third_party/move/move-compiler/tests/move_check/skip_attribute_checks/extra_attributes2.move create mode 100644 third_party/move/move-compiler/tests/move_check/skip_attribute_checks/testonly.move create mode 100644 third_party/move/move-compiler/tests/move_check/typing/assign_tuple.exp create mode 100644 third_party/move/move-compiler/tests/move_check/typing/assign_tuple.move create mode 100644 third_party/move/move-compiler/tests/move_check/typing/tuple.move create mode 100644 third_party/move/move-compiler/tests/move_check/unit_test/extra_attributes2.move create mode 100644 third_party/move/move-ir-compiler/transactional-tests/tests/bytecode-generation/statements/nop.exp create mode 100644 third_party/move/move-ir-compiler/transactional-tests/tests/bytecode-generation/statements/nop.mvir create mode 100644 third_party/move/move-model/bytecode-test-utils/Cargo.toml create mode 100644 third_party/move/move-model/bytecode-test-utils/src/lib.rs rename third_party/move/{move-prover => move-model}/bytecode/Cargo.toml (89%) rename third_party/move/{move-prover => move-model}/bytecode/src/annotations.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/borrow_analysis.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/compositional_analysis.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/dataflow_analysis.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/dataflow_domains.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/debug_instrumentation.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/function_data_builder.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/function_target.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/function_target_pipeline.rs (99%) rename third_party/move/{move-prover => move-model}/bytecode/src/graph.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/lib.rs (63%) rename third_party/move/{move-prover => move-model}/bytecode/src/livevar_analysis.rs (97%) rename third_party/move/{move-prover => move-model}/bytecode/src/reaching_def_analysis.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/stackless_bytecode.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/stackless_bytecode_generator.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/stackless_control_flow_graph.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/src/usage_analysis.rs (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/borrow/basic_test.exp (92%) rename third_party/move/{move-prover => move-model}/bytecode/tests/borrow/basic_test.move (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/borrow/function_call.exp (84%) rename third_party/move/{move-prover => 
move-model}/bytecode/tests/borrow/function_call.move (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/borrow/hyper_edge.exp (92%) rename third_party/move/{move-prover => move-model}/bytecode/tests/borrow/hyper_edge.move (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/borrow_strong/basic_test.exp (93%) rename third_party/move/{move-prover => move-model}/bytecode/tests/borrow_strong/basic_test.move (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/borrow_strong/mut_ref.exp (93%) rename third_party/move/{move-prover => move-model}/bytecode/tests/borrow_strong/mut_ref.move (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/from_move/regression_generic_and_native_type.exp (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/from_move/regression_generic_and_native_type.move (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/from_move/smoke_test.exp (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/from_move/smoke_test.move (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/from_move/specs-in-fun.exp (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/from_move/specs-in-fun.move (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/from_move/vector_instructions.exp (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/from_move/vector_instructions.move (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/livevar/basic_test.exp (53%) rename third_party/move/{move-prover => move-model}/bytecode/tests/livevar/basic_test.move (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/reaching_def/basic_test.exp (94%) rename third_party/move/{move-prover => move-model}/bytecode/tests/reaching_def/basic_test.move (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/reaching_def/test_branching.exp (91%) rename third_party/move/{move-prover => move-model}/bytecode/tests/reaching_def/test_branching.move (100%) create mode 100644 third_party/move/move-model/bytecode/tests/testsuite.rs rename third_party/move/{move-prover => move-model}/bytecode/tests/usage_analysis/test.exp (100%) rename third_party/move/{move-prover => move-model}/bytecode/tests/usage_analysis/test.move (100%) create mode 100644 third_party/move/move-prover/bytecode-pipeline/Cargo.toml rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/clean_and_optimize.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/data_invariant_instrumentation.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/eliminate_imm_refs.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/global_invariant_analysis.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/global_invariant_instrumentation.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/global_invariant_instrumentation_v2.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/inconsistency_check.rs (98%) create mode 100644 third_party/move/move-prover/bytecode-pipeline/src/lib.rs rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/loop_analysis.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/memory_instrumentation.rs (99%) rename 
third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/mono_analysis.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/mut_ref_instrumentation.rs (98%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/mutation_tester.rs (98%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/number_operation.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/number_operation_analysis.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/options.rs (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/packed_types_analysis.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/pipeline_factory.rs (90%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/spec_instrumentation.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/verification_analysis.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/verification_analysis_v2.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/src/well_formed_instrumentation.rs (99%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/data_invariant_instrumentation/borrow.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/data_invariant_instrumentation/borrow.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/data_invariant_instrumentation/pack.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/data_invariant_instrumentation/pack.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/data_invariant_instrumentation/params.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/data_invariant_instrumentation/params.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/data_invariant_instrumentation/vector.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/data_invariant_instrumentation/vector.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/eliminate_imm_refs/basic_test.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/eliminate_imm_refs/basic_test.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/global_invariant_analysis/disable_in_body.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/global_invariant_analysis/disable_in_body.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/global_invariant_analysis/mutual_inst.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/global_invariant_analysis/mutual_inst.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/global_invariant_analysis/uninst_type_param_in_inv.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/global_invariant_analysis/uninst_type_param_in_inv.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/global_invariant_instrumentation/borrow.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/global_invariant_instrumentation/borrow.move (100%) rename third_party/move/move-prover/{bytecode => 
bytecode-pipeline}/tests/global_invariant_instrumentation/move.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/global_invariant_instrumentation/move.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/global_invariant_instrumentation/update.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/global_invariant_instrumentation/update.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/memory_instr/basic_test.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/memory_instr/basic_test.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/memory_instr/mut_ref.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/memory_instr/mut_ref.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/mono_analysis/test.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/mono_analysis/test.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/mut_ref_instrumentation/basic_test.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/mut_ref_instrumentation/basic_test.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/spec_instrumentation/fun_spec.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/spec_instrumentation/fun_spec.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/spec_instrumentation/generics.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/spec_instrumentation/generics.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/spec_instrumentation/modifies.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/spec_instrumentation/modifies.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/spec_instrumentation/opaque_call.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/spec_instrumentation/opaque_call.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/testsuite.rs (60%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/verification_analysis/inv_relevance.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/verification_analysis/inv_relevance.move (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/verification_analysis/inv_suspension.exp (100%) rename third_party/move/move-prover/{bytecode => bytecode-pipeline}/tests/verification_analysis/inv_suspension.move (100%) delete mode 100644 third_party/move/move-stdlib/nursery/tests/event_tests.move diff --git a/.dockerignore b/.dockerignore index f2da8e28e015f..9cd118c7665ab 100644 --- a/.dockerignore +++ b/.dockerignore @@ -22,11 +22,13 @@ !aptos-move/aptos-release-builder/data/release.yaml !aptos-move/aptos-release-builder/data/proposals/* !aptos-move/framework/ +!aptos-move/move-examples/hello_blockchain/ !crates/aptos/src/move_tool/*.bpl !crates/aptos-faucet/doc/ !api/doc/ !crates/indexer/migrations/**/*.sql !ecosystem/indexer-grpc/indexer-grpc-parser/migrations/**/*.sql +!ecosystem/nft-metadata-crawler-parser/migrations/**/*.sql !terraform/helm/aptos-node/ !terraform/helm/genesis/ !testsuite/forge/src/backend/k8s/ 
diff --git a/.github/actions/file-change-determinator/action.yaml b/.github/actions/file-change-determinator/action.yaml index 8fdb84cde2a87..a8bd8f83ef25c 100644 --- a/.github/actions/file-change-determinator/action.yaml +++ b/.github/actions/file-change-determinator/action.yaml @@ -13,4 +13,5 @@ runs: continue-on-error: true # Avoid skipping any checks if this job fails (see: https://github.com/fkirc/skip-duplicate-actions/issues/301) uses: fkirc/skip-duplicate-actions@v5 with: + skip_after_successful_duplicate: false # Don't skip if the action is a duplicate (this may cause false positives) paths_ignore: '["**/*.md", "developer-docs-site/**"]' diff --git a/.github/actions/general-lints/action.yaml b/.github/actions/general-lints/action.yaml index c602262c60a45..4479b48f500cc 100644 --- a/.github/actions/general-lints/action.yaml +++ b/.github/actions/general-lints/action.yaml @@ -10,6 +10,8 @@ runs: steps: # Checkout the repository - uses: actions/checkout@v3 + with: + fetch-depth: 0 # get all the history because cargo xtest --change-since origin/main requires it. # Install shellcheck and run it on the dev_setup.sh script - name: Run shell lints diff --git a/.github/linters/semgrep/pull-request-target-code-checkout.yaml b/.github/linters/semgrep/pull-request-target-code-checkout.yaml new file mode 100644 index 0000000000000..a6186a753ab37 --- /dev/null +++ b/.github/linters/semgrep/pull-request-target-code-checkout.yaml @@ -0,0 +1,68 @@ +rules: + - id: pull-request-target-code-checkout + languages: + - yaml + message: This GitHub Actions workflow file uses `pull_request_target` and checks + out code from the incoming pull request. When using `pull_request_target`, + the Action runs in the context of the target repository, which includes + access to all repository secrets. Please ensure you have `permission-check` + enabled for the jobs that check out code. Please see + https://securitylab.github.com/research/github-actions-preventing-pwn-requests/ + for additional mitigations. + metadata: + category: security + owasp: + - A01:2021 - Broken Access Control + cwe: + - "CWE-913: Improper Control of Dynamically-Managed Code Resources" + references: + - https://securitylab.github.com/research/github-actions-preventing-pwn-requests/ + - https://github.com/justinsteven/advisories/blob/master/2021_github_actions_checkspelling_token_leak_via_advice_symlink.md + technology: + - github-actions + subcategory: + - audit + likelihood: MEDIUM + impact: LOW + confidence: MEDIUM + license: Commons Clause License Condition v1.0[LGPL-2.1-only] + vulnerability_class: + - Code Injection + patterns: + - pattern-either: + - pattern-inside: | + on: + ... + pull_request_target: ... + ... + ... + - pattern-inside: | + on: [..., pull_request_target, ...] + ... + - pattern-inside: | + on: pull_request_target + ... + - pattern-inside: | + jobs: + ... + $JOBNAME: + ... + - pattern-not-inside: | + needs: [..., permission-check, ...] + ... + - pattern-not-inside: | + needs: + ... + - permission-check + ... + ... + - pattern-not-inside: | + needs: [permission-check] + ... + - pattern: | + ... 
+ uses: "$ACTION" + - metavariable-regex: + metavariable: $ACTION + regex: actions/checkout@.* + severity: WARNING diff --git a/.github/workflows/cli-e2e-tests.yaml b/.github/workflows/cli-e2e-tests.yaml index 20c50ef37302e..aaa8f215fafdf 100644 --- a/.github/workflows/cli-e2e-tests.yaml +++ b/.github/workflows/cli-e2e-tests.yaml @@ -11,7 +11,14 @@ on: required: true type: string description: Use this to override the git SHA1, branch name (e.g. devnet) or tag + SKIP_JOB: + required: false + default: false + type: boolean + description: Set to true to skip this job. Useful for PRs that don't require this workflow. +# TODO: should we migrate this to a composite action, so that we can skip it +# at the call site, and don't need to wrap each step in an if statement? jobs: # Run the Aptos CLI examples. We run the CLI on this commit / PR against a # local testnet using the devnet, testnet, and mainnet branches. This way @@ -24,10 +31,12 @@ jobs: id-token: write steps: - uses: actions/checkout@v3 + if: ${{ !inputs.SKIP_JOB }} with: ref: ${{ inputs.GIT_SHA }} - uses: aptos-labs/aptos-core/.github/actions/docker-setup@main + if: ${{ !inputs.SKIP_JOB }} with: GCP_WORKLOAD_IDENTITY_PROVIDER: ${{ secrets.GCP_WORKLOAD_IDENTITY_PROVIDER }} GCP_SERVICE_ACCOUNT_EMAIL: ${{ secrets.GCP_SERVICE_ACCOUNT_EMAIL }} @@ -37,11 +46,13 @@ jobs: GIT_CREDENTIALS: ${{ secrets.GIT_CREDENTIALS }} - uses: ./.github/actions/python-setup + if: ${{ !inputs.SKIP_JOB }} with: pyproject_directory: crates/aptos/e2e # Run CLI tests against local testnet built from devnet branch. - uses: nick-fields/retry@7f8f3d9f0f62fe5925341be21c2e8314fd4f7c7c # pin@v2 + if: ${{ !inputs.SKIP_JOB }} name: devnet-tests with: max_attempts: 5 @@ -50,6 +61,7 @@ jobs: # Run CLI tests against local testnet built from testnet branch. - uses: nick-fields/retry@7f8f3d9f0f62fe5925341be21c2e8314fd4f7c7c # pin@v2 + if: ${{ !inputs.SKIP_JOB }} name: testnet-tests with: max_attempts: 5 @@ -58,6 +70,7 @@ jobs: # Run CLI tests against local testnet built from mainnet branch. - uses: nick-fields/retry@7f8f3d9f0f62fe5925341be21c2e8314fd4f7c7c # pin@v2 + if: ${{ !inputs.SKIP_JOB }} name: mainnet-tests with: max_attempts: 5 @@ -65,6 +78,10 @@ jobs: command: cd ./crates/aptos/e2e && poetry run python main.py -d --base-network mainnet --image-repo-with-project ${{ secrets.GCP_DOCKER_ARTIFACT_REPO }} --test-cli-tag ${{ inputs.GIT_SHA }} --working-directory ${{ runner.temp }}/aptos-e2e-tests-mainnet - name: Print local testnet logs on failure - if: ${{ failure() }} + if: ${{ !inputs.SKIP_JOB && failure() }} working-directory: docker/compose/validator-testnet run: docker logs aptos-tools-devnet && docker logs aptos-tools-testnet && docker logs aptos-tools-mainnet + + # Print out whether the job was skipped. + - run: echo "Skipping CLI E2E tests!" 
+ if: ${{ inputs.SKIP_JOB }} diff --git a/.github/workflows/docker-build-test.yaml b/.github/workflows/docker-build-test.yaml index 28a752a883f63..0d373bc32a036 100644 --- a/.github/workflows/docker-build-test.yaml +++ b/.github/workflows/docker-build-test.yaml @@ -109,6 +109,18 @@ jobs: targetCacheId: ${{ env.TARGET_CACHE_ID }} targetRegistry: ${{ env.TARGET_REGISTRY }} + # This job determines which files were changed + file_change_determinator: + needs: [permission-check] + runs-on: ubuntu-latest + outputs: + only_docs_changed: ${{ steps.determine_file_changes.outputs.only_docs_changed }} + steps: + - uses: actions/checkout@v3 + - name: Run the file change determinator + id: determine_file_changes + uses: ./.github/actions/file-change-determinator + # This is a PR required job. rust-images: needs: [permission-check, determine-docker-build-metadata] @@ -185,7 +197,7 @@ jobs: # This is a PR required job. node-api-compatibility-tests: - needs: [permission-check, rust-images, determine-docker-build-metadata] # runs with the default release docker build variant "rust-images" + needs: [permission-check, rust-images, determine-docker-build-metadata, file_change_determinator] # runs with the default release docker build variant "rust-images" if: | ( github.event_name == 'push' || @@ -198,10 +210,11 @@ jobs: secrets: inherit with: GIT_SHA: ${{ needs.determine-docker-build-metadata.outputs.gitSha }} + SKIP_JOB: ${{ needs.file_change_determinator.outputs.only_docs_changed == 'true' }} # This is a PR required job. cli-e2e-tests: - needs: [permission-check, rust-images, determine-docker-build-metadata] # runs with the default release docker build variant "rust-images" + needs: [permission-check, rust-images, determine-docker-build-metadata, file_change_determinator] # runs with the default release docker build variant "rust-images" if: | ( github.event_name == 'push' || @@ -214,14 +227,13 @@ jobs: secrets: inherit with: GIT_SHA: ${{ needs.determine-docker-build-metadata.outputs.gitSha }} + SKIP_JOB: ${{ needs.file_change_determinator.outputs.only_docs_changed == 'true' }} indexer-grpc-e2e-tests: needs: [permission-check, rust-images, determine-docker-build-metadata] # runs with the default release docker build variant "rust-images" if: | - (github.event_name == 'push' && github.ref_name != 'main') || github.event_name == 'workflow_dispatch' || contains(github.event.pull_request.labels.*.name, 'CICD:run-e2e-tests') || - github.event.pull_request.auto_merge != null || contains(github.event.pull_request.body, '#e2e') uses: aptos-labs/aptos-core/.github/workflows/docker-indexer-grpc-test.yaml@main secrets: inherit @@ -238,6 +250,7 @@ jobs: - rust-images-failpoints - rust-images-performance - rust-images-consensus-only-perf-test + - file_change_determinator if: | !failure() && !cancelled() && needs.permission-check.result == 'success' && ( (github.event_name == 'push' && github.ref_name != 'main') || @@ -258,6 +271,7 @@ jobs: # test lifecycle is separate from that of GHA. This protects us from the case where many Forge tests are triggered # by this GHA. If there is a Forge namespace collision, Forge will pre-empt the existing test running in the namespace. FORGE_NAMESPACE: forge-e2e-${{ needs.determine-docker-build-metadata.outputs.targetCacheId }} + SKIP_JOB: ${{ needs.file_change_determinator.outputs.only_docs_changed == 'true' }} # Run e2e compat test against testnet branch. This is a PR required job. 
forge-compat-test: @@ -269,6 +283,7 @@ jobs: - rust-images-failpoints - rust-images-performance - rust-images-consensus-only-perf-test + - file_change_determinator if: | !failure() && !cancelled() && needs.permission-check.result == 'success' && ( (github.event_name == 'push' && github.ref_name != 'main') || @@ -282,10 +297,11 @@ jobs: with: GIT_SHA: ${{ needs.determine-docker-build-metadata.outputs.gitSha }} FORGE_TEST_SUITE: compat - IMAGE_TAG: aptos-node-v1.5.1 # test against a previous testnet release + IMAGE_TAG: aptos-node-v1.6.2 # test against a previous testnet release FORGE_RUNNER_DURATION_SECS: 300 COMMENT_HEADER: forge-compat FORGE_NAMESPACE: forge-compat-${{ needs.determine-docker-build-metadata.outputs.targetCacheId }} + SKIP_JOB: ${{ needs.file_change_determinator.outputs.only_docs_changed == 'true' }} # Run forge framework upgradability test. This is a PR required job. forge-framework-upgrade-test: @@ -297,6 +313,7 @@ jobs: - rust-images-failpoints - rust-images-performance - rust-images-consensus-only-perf-test + - file_change_determinator if: | !failure() && !cancelled() && needs.permission-check.result == 'success' && ( (github.event_name == 'push' && github.ref_name != 'main') || @@ -314,6 +331,7 @@ jobs: FORGE_RUNNER_DURATION_SECS: 300 COMMENT_HEADER: forge-framework-upgrade FORGE_NAMESPACE: forge-framework-upgrade-${{ needs.determine-docker-build-metadata.outputs.targetCacheId }} + SKIP_JOB: ${{ needs.file_change_determinator.outputs.only_docs_changed == 'true' }} forge-consensus-only-perf-test: needs: diff --git a/.github/workflows/execution-performance.yaml b/.github/workflows/execution-performance.yaml index 02c7fc4c94c0f..5283bff7c4dd6 100644 --- a/.github/workflows/execution-performance.yaml +++ b/.github/workflows/execution-performance.yaml @@ -2,13 +2,23 @@ name: "execution-performance" on: workflow_dispatch: pull_request: + types: [labeled, opened, synchronize, reopened, auto_merge_enabled] schedule: - - cron: "0 12 * * *" # This runs every day at 12pm UTC. 
+ - cron: "0 */4 * * *" # This runs every four hours jobs: execution-performance: - uses: aptos-labs/aptos-core/.github/workflows/workflow-run-execution-performance.yaml@main + if: | # Only run on each PR once an appropriate event occurs + ( + github.event_name == 'workflow_dispatch' || + github.event_name == 'schedule' || + contains(github.event.pull_request.labels.*.name, 'CICD:run-e2e-tests') || + github.event.pull_request.auto_merge != null) || + contains(github.event.pull_request.body, '#e2e' + ) + uses: ./.github/workflows/workflow-run-execution-performance.yaml secrets: inherit with: GIT_SHA: ${{ github.event.pull_request.head.sha || github.sha }} RUNNER_NAME: executor-benchmark-runner + RUN_ONLY_SINGLE_NODE_PERF: ${{ github.event_name != 'schedule' }} # Run all tests on the scheduled cadence diff --git a/.github/workflows/forge-stable.yaml b/.github/workflows/forge-stable.yaml index e6faf65e62f67..9328faa1cbf91 100644 --- a/.github/workflows/forge-stable.yaml +++ b/.github/workflows/forge-stable.yaml @@ -127,11 +127,23 @@ jobs: FORGE_TEST_SUITE: realistic_env_load_sweep POST_TO_SLACK: true - run-forge-realistic-env-graceful-overload: + run-forge-realistic-env-workload-sweep: if: ${{ github.event_name != 'pull_request' && always() }} needs: [determine-test-metadata, run-forge-realistic-env-load-sweep] # Only run after the previous job completes uses: aptos-labs/aptos-core/.github/workflows/workflow-run-forge.yaml@main secrets: inherit + with: + IMAGE_TAG: ${{ needs.determine-test-metadata.outputs.IMAGE_TAG }} + FORGE_NAMESPACE: forge-realistic-env-workload-sweep-${{ needs.determine-test-metadata.outputs.IMAGE_TAG }} + FORGE_RUNNER_DURATION_SECS: 1600 # Run for 26 minutes (4 tests, each for 400 seconds) + FORGE_TEST_SUITE: realistic_env_workload_sweep + POST_TO_SLACK: true + + run-forge-realistic-env-graceful-overload: + if: ${{ github.event_name != 'pull_request' && always() }} + needs: [determine-test-metadata, run-forge-realistic-env-workload-sweep] # Only run after the previous job completes + uses: aptos-labs/aptos-core/.github/workflows/workflow-run-forge.yaml@main + secrets: inherit with: IMAGE_TAG: ${{ needs.determine-test-metadata.outputs.IMAGE_TAG }} FORGE_NAMESPACE: forge-realistic-env-graceful-overload-${{ needs.determine-test-metadata.outputs.IMAGE_TAG }} diff --git a/.github/workflows/indexer-grpc-integration-tests.yaml b/.github/workflows/indexer-grpc-integration-tests.yaml index 4dbad1259f90d..5cd1929856fc5 100644 --- a/.github/workflows/indexer-grpc-integration-tests.yaml +++ b/.github/workflows/indexer-grpc-integration-tests.yaml @@ -19,7 +19,6 @@ concurrency: jobs: permission-check: - if: contains(github.event.pull_request.labels.*.name, 'CICD:non-required-tests')) runs-on: ubuntu-latest steps: - name: Check repository permission for user which triggered workflow diff --git a/.github/workflows/lint-test.yaml b/.github/workflows/lint-test.yaml index 5040a0e1eaa9b..26f98ba3b862e 100644 --- a/.github/workflows/lint-test.yaml +++ b/.github/workflows/lint-test.yaml @@ -97,6 +97,14 @@ jobs: # Run all rust smoke tests. This is a PR required job. 
rust-smoke-tests: needs: file_change_determinator + if: | # Only run on each PR once an appropriate event occurs + ( + github.event_name == 'workflow_dispatch' || + github.event_name == 'push' || + contains(github.event.pull_request.labels.*.name, 'CICD:run-e2e-tests') || + github.event.pull_request.auto_merge != null) || + contains(github.event.pull_request.body, '#e2e' + ) runs-on: high-perf-docker steps: - uses: actions/checkout@v3 diff --git a/.github/workflows/node-api-compatibility-tests.yaml b/.github/workflows/node-api-compatibility-tests.yaml index 5eccb18865862..9e86168fd973a 100644 --- a/.github/workflows/node-api-compatibility-tests.yaml +++ b/.github/workflows/node-api-compatibility-tests.yaml @@ -24,6 +24,11 @@ on: required: true type: string description: Use this to override the git SHA1, branch name (e.g. devnet) or tag to release the SDK from + SKIP_JOB: + required: false + default: false + type: boolean + description: Set to true to skip this job. Useful for PRs that don't require this workflow. env: # This is the docker image tag that will be used for the SDK release. @@ -31,6 +36,8 @@ env: IMAGE_TAG: ${{ inputs.GIT_SHA || 'devnet' }} # default to "devnet" tag when not running on workflow_call GIT_SHA: ${{ inputs.GIT_SHA || github.event.pull_request.head.sha || github.sha }} # default to PR branch sha when not running on workflow_call +# TODO: should we migrate this to a composite action, so that we can skip it +# at the call site, and don't need to wrap each step in an if statement? jobs: # Confirm that the generated client within the TS SDK has been re-generated # if there are any changes that would affect it within the PR / commit. If @@ -42,10 +49,12 @@ jobs: id-token: write steps: - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # pin@v3 + if: ${{ !inputs.SKIP_JOB }} with: ref: ${{ env.GIT_SHA }} - uses: aptos-labs/aptos-core/.github/actions/docker-setup@main + if: ${{ !inputs.SKIP_JOB }} with: GCP_WORKLOAD_IDENTITY_PROVIDER: ${{ secrets.GCP_WORKLOAD_IDENTITY_PROVIDER }} GCP_SERVICE_ACCOUNT_EMAIL: ${{ secrets.GCP_SERVICE_ACCOUNT_EMAIL }} @@ -55,6 +64,7 @@ jobs: GIT_CREDENTIALS: ${{ secrets.GIT_CREDENTIALS }} - uses: actions/setup-node@969bd2663942d722d85b6a8626225850c2f7be4b # pin@v3 + if: ${{ !inputs.SKIP_JOB }} with: node-version-file: .node-version registry-url: "https://registry.npmjs.org" @@ -62,6 +72,7 @@ jobs: # Self hosted runners don't have pnpm preinstalled. # https://github.com/actions/setup-node/issues/182 - uses: pnpm/action-setup@537643d491d20c2712d11533497cb47b2d0eb9d5 # pin https://github.com/pnpm/action-setup/releases/tag/v2.2.3 + if: ${{ !inputs.SKIP_JOB }} # When using high-perf-docker, the CI is actually run with two containers # in a k8s pod, one for docker commands run in the CI steps (docker), and @@ -69,9 +80,11 @@ jobs: # mounts, ${{ runner.temp }} is one of them. Writing the specs here ensures # the docker run step writes to a same place that the runner can read from. - run: mkdir -p ${{ runner.temp }}/specs + if: ${{ !inputs.SKIP_JOB }} # Build the API specs. 
- uses: nick-fields/retry@7f8f3d9f0f62fe5925341be21c2e8314fd4f7c7c # pin@v2 + if: ${{ !inputs.SKIP_JOB }} name: generate-yaml-spec with: max_attempts: 3 @@ -79,6 +92,7 @@ command: docker run --rm --mount=type=bind,source=${{ runner.temp }}/specs,target=/specs ${{ secrets.GCP_DOCKER_ARTIFACT_REPO }}/tools:${IMAGE_TAG} aptos-openapi-spec-generator -f yaml -o /specs/spec.yaml - uses: nick-fields/retry@7f8f3d9f0f62fe5925341be21c2e8314fd4f7c7c # pin@v2 + if: ${{ !inputs.SKIP_JOB }} name: generate-json-spec with: max_attempts: 3 @@ -86,18 +100,29 @@ command: docker run --rm --mount=type=bind,source=${{ runner.temp }}/specs,target=/specs ${{ secrets.GCP_DOCKER_ARTIFACT_REPO }}/tools:${IMAGE_TAG} aptos-openapi-spec-generator -f json -o /specs/spec.json # Confirm that the specs we built here are the same as those checked in. - - run: echo "If this step fails, run the following commands locally to fix it:" - - run: echo "cargo run -p aptos-openapi-spec-generator -- -f yaml -o api/doc/spec.yaml" - - run: echo "cargo run -p aptos-openapi-spec-generator -- -f json -o api/doc/spec.json" - - run: git diff --no-index --ignore-space-at-eol --ignore-blank-lines ${{ runner.temp }}/specs/spec.yaml api/doc/spec.yaml - - run: git diff --no-index --ignore-space-at-eol --ignore-blank-lines ${{ runner.temp }}/specs/spec.json api/doc/spec.json + - run: | + echo "If this step fails, run the following commands locally to fix it:" + echo "cargo run -p aptos-openapi-spec-generator -- -f yaml -o api/doc/spec.yaml" + echo "cargo run -p aptos-openapi-spec-generator -- -f json -o api/doc/spec.json" + git diff --no-index --ignore-space-at-eol --ignore-blank-lines ${{ runner.temp }}/specs/spec.yaml api/doc/spec.yaml + git diff --no-index --ignore-space-at-eol --ignore-blank-lines ${{ runner.temp }}/specs/spec.json api/doc/spec.json + if: ${{ !inputs.SKIP_JOB }} # Run package install. If install fails, it probably means the lockfile # was not included in the commit. - run: cd ./ecosystem/typescript/sdk && pnpm install --frozen-lockfile + if: ${{ !inputs.SKIP_JOB }} # Ensure any changes to the generated client were checked in. - run: cd ./ecosystem/typescript/sdk && pnpm generate-client -o /tmp/generated_client - - run: echo "If this step fails, run the following command locally to fix it:" - - run: echo "cd ecosystem/typescript/sdk && pnpm generate-client" - - run: git diff --no-index --ignore-space-at-eol --ignore-blank-lines ./ecosystem/typescript/sdk/src/generated/ /tmp/generated_client/ + if: ${{ !inputs.SKIP_JOB }} + + - run: | echo "If this step fails, run the following command locally to fix it:" echo "cd ecosystem/typescript/sdk && pnpm generate-client" git diff --no-index --ignore-space-at-eol --ignore-blank-lines ./ecosystem/typescript/sdk/src/generated/ /tmp/generated_client/ + if: ${{ !inputs.SKIP_JOB }} + + # Print out whether the job was skipped. + - run: echo "Skipping node API compatibility tests!"
+ if: ${{ inputs.SKIP_JOB }} diff --git a/.github/workflows/python-sdk-publish.yaml b/.github/workflows/python-sdk-publish.yaml new file mode 100644 index 0000000000000..6629fbc142176 --- /dev/null +++ b/.github/workflows/python-sdk-publish.yaml @@ -0,0 +1,26 @@ +name: "Run Python SDK Publish" + +on: + workflow_dispatch: + +jobs: + release: + name: Release + runs-on: ubuntu-latest + steps: + - name: Checkout code + uses: actions/checkout@v3 + + - uses: ./.github/actions/python-setup + with: + pyproject_directory: ./ecosystem/python/sdk + + - name: Build project for distribution + run: poetry build + working-directory: ./ecosystem/python/sdk + + - name: Publish to PyPI + env: + POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_TOKEN }} + run: poetry publish + working-directory: ./ecosystem/python/sdk diff --git a/.github/workflows/semgrep.yaml b/.github/workflows/semgrep.yaml new file mode 100644 index 0000000000000..9505c7b3b2b9c --- /dev/null +++ b/.github/workflows/semgrep.yaml @@ -0,0 +1,26 @@ +name: Semgrep + +on: + workflow_dispatch: + pull_request: + types: [labeled, opened, synchronize, reopened, auto_merge_enabled] + schedule: + - cron: '0 * * * *' + +jobs: + semgrep: + name: semgrep/ci + runs-on: ubuntu-latest + + container: + image: returntocorp/semgrep + + # Skip any PR created by dependabot to avoid permission issues: + if: (github.actor != 'dependabot[bot]') + + steps: + - uses: actions/checkout@v3 + - run: semgrep ci + env: + SEMGREP_RULES: >- + ./.github/linters/semgrep/pull-request-target-code-checkout.yaml diff --git a/.github/workflows/workflow-run-execution-performance.yaml b/.github/workflows/workflow-run-execution-performance.yaml index 89e299f657661..406f49953b167 100644 --- a/.github/workflows/workflow-run-execution-performance.yaml +++ b/.github/workflows/workflow-run-execution-performance.yaml @@ -12,6 +12,11 @@ on: required: false default: executor-benchmark-runner type: string + RUN_ONLY_SINGLE_NODE_PERF: + required: false + default: false + type: boolean + description: Only run the single node performance tests # This allows the workflow to be triggered manually from the Github UI or CLI # NOTE: because the "number" type is not supported, we default to 720 minute timeout workflow_dispatch: @@ -27,6 +32,11 @@ on: options: - executor-benchmark-runner description: The name of the runner to use for the test. 
+ RUN_ONLY_SINGLE_NODE_PERF: + required: false + default: false + type: boolean + description: Only run the single node performance tests jobs: # This job determines which files were changed @@ -49,20 +59,20 @@ jobs: - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # pin@v3 with: ref: ${{ inputs.GIT_SHA }} - if: needs.file_change_determinator.outputs.only_docs_changed != 'true' + if: ${{ needs.file_change_determinator.outputs.only_docs_changed != 'true' && !inputs.RUN_ONLY_SINGLE_NODE_PERF }} - uses: aptos-labs/aptos-core/.github/actions/rust-setup@main with: GIT_CREDENTIALS: ${{ secrets.GIT_CREDENTIALS }} - if: needs.file_change_determinator.outputs.only_docs_changed != 'true' + if: ${{ needs.file_change_determinator.outputs.only_docs_changed != 'true' && !inputs.RUN_ONLY_SINGLE_NODE_PERF }} - name: Run sequential execution benchmark in performance build mode shell: bash run: testsuite/sequential_execution_performance.py - if: needs.file_change_determinator.outputs.only_docs_changed != 'true' + if: ${{ needs.file_change_determinator.outputs.only_docs_changed != 'true' && !inputs.RUN_ONLY_SINGLE_NODE_PERF }} - - run: echo "Skipping sequential execution performance! Unrelated changes detected." - if: needs.file_change_determinator.outputs.only_docs_changed == 'true' + - run: echo "Skipping sequential execution performance!" + if: ${{ needs.file_change_determinator.outputs.only_docs_changed == 'true' || inputs.RUN_ONLY_SINGLE_NODE_PERF }} # Run parallel execution performance tests parallel-execution-performance: @@ -73,20 +83,20 @@ jobs: - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # pin@v3 with: ref: ${{ inputs.GIT_SHA }} - if: needs.file_change_determinator.outputs.only_docs_changed != 'true' + if: ${{ needs.file_change_determinator.outputs.only_docs_changed != 'true' && !inputs.RUN_ONLY_SINGLE_NODE_PERF }} - uses: aptos-labs/aptos-core/.github/actions/rust-setup@main with: GIT_CREDENTIALS: ${{ secrets.GIT_CREDENTIALS }} - if: needs.file_change_determinator.outputs.only_docs_changed != 'true' + if: ${{ needs.file_change_determinator.outputs.only_docs_changed != 'true' && !inputs.RUN_ONLY_SINGLE_NODE_PERF }} - name: Run parallel execution benchmark in performance build mode shell: bash run: testsuite/parallel_execution_performance.py - if: needs.file_change_determinator.outputs.only_docs_changed != 'true' + if: ${{ needs.file_change_determinator.outputs.only_docs_changed != 'true' && !inputs.RUN_ONLY_SINGLE_NODE_PERF }} - - run: echo "Skipping parallel execution performance! Unrelated changes detected." - if: needs.file_change_determinator.outputs.only_docs_changed == 'true' + - run: echo "Skipping parallel execution performance!" + if: ${{ needs.file_change_determinator.outputs.only_docs_changed == 'true' || inputs.RUN_ONLY_SINGLE_NODE_PERF }} # Run single node execution performance tests single-node-performance: diff --git a/.github/workflows/workflow-run-forge.yaml b/.github/workflows/workflow-run-forge.yaml index 5712cca393090..338d7c6a2a5a2 100644 --- a/.github/workflows/workflow-run-forge.yaml +++ b/.github/workflows/workflow-run-forge.yaml @@ -67,6 +67,11 @@ on: default: forge description: A unique ID for Forge sticky comment on your PR. See https://github.com/marocchino/sticky-pull-request-comment#keep-more-than-one-comment + SKIP_JOB: + required: false + default: false + type: boolean + description: Set to true to skip this job. Useful for PRs that don't require this workflow. 
env: AWS_ACCOUNT_NUM: ${{ secrets.ENV_ECR_AWS_ACCOUNT_NUM }} @@ -96,32 +101,39 @@ env: VERBOSE: true COMMENT_ON_PR: ${{ inputs.COMMENT_ON_PR }} +# TODO: should we migrate this to a composite action, so that we can skip it +# at the call site, and don't need to wrap each step in an if statement? jobs: forge: runs-on: ubuntu-latest timeout-minutes: ${{ inputs.TIMEOUT_MINUTES }} steps: - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # pin@v3 + if: ${{ !inputs.SKIP_JOB }} with: ref: ${{ inputs.GIT_SHA }} # get the last 10 commits if GIT_SHA is not specified fetch-depth: inputs.GIT_SHA != null && 0 || 10 - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # pin@v4 + if: ${{ !inputs.SKIP_JOB }} - name: Install python deps run: pip3 install click==8.1.3 psutil==5.9.1 + if: ${{ !inputs.SKIP_JOB }} # Calculate the auth duration based on the test duration # If the test duration is less than the default 90 minutes, use the default # otherwise add 30 minutes to the length of the Forge test run - name: Calculate Forge Auth Duration + if: ${{ !inputs.SKIP_JOB }} id: calculate-auth-duration run: | auth_duration=$(( $FORGE_RUNNER_DURATION_SECS > 5400 ? $FORGE_RUNNER_DURATION_SECS + 30 * 60 : 5400 )) echo "auth_duration=${auth_duration}" >> $GITHUB_OUTPUT - uses: aptos-labs/aptos-core/.github/actions/docker-setup@main + if: ${{ !inputs.SKIP_JOB }} id: docker-setup with: GCP_WORKLOAD_IDENTITY_PROVIDER: ${{ secrets.GCP_WORKLOAD_IDENTITY_PROVIDER }} @@ -140,28 +152,32 @@ jobs: GCP_AUTH_DURATION: ${{ steps.calculate-auth-duration.outputs.auth_duration }} - name: "Install GCloud SDK" + if: ${{ !inputs.SKIP_JOB }} uses: "google-github-actions/setup-gcloud@62d4898025f6041e16b1068643bfc5a696863587" # pin@v1 with: version: ">= 418.0.0" install_components: "kubectl,gke-gcloud-auth-plugin" - name: "Export GCloud auth token" + if: ${{ !inputs.SKIP_JOB }} id: gcloud-auth run: echo "CLOUDSDK_AUTH_ACCESS_TOKEN=${{ steps.docker-setup.outputs.CLOUDSDK_AUTH_ACCESS_TOKEN }}" >> $GITHUB_ENV shell: bash - name: "Setup GCloud project" + if: ${{ !inputs.SKIP_JOB }} shell: bash run: gcloud config set project aptos-forge-gcp-0 - name: Run pre-Forge checks + if: ${{ !inputs.SKIP_JOB }} shell: bash env: FORGE_RUNNER_MODE: pre-forge run: testsuite/run_forge.sh - name: Post pre-Forge comment - if: env.COMMENT_ON_PR == 'true' && github.event.number != null + if: ${{ !inputs.SKIP_JOB && env.COMMENT_ON_PR == 'true' && github.event.number != null }} uses: marocchino/sticky-pull-request-comment@39c5b5dc7717447d0cba270cd115037d32d28443 # pin@39c5b5dc7717447d0cba270cd115037d32d2844 with: header: ${{ env.COMMENT_HEADER }} @@ -170,12 +186,13 @@ jobs: path: ${{ env.FORGE_PRE_COMMENT }} - name: Run Forge + if: ${{ !inputs.SKIP_JOB }} shell: bash run: testsuite/run_forge.sh - name: Post forge result comment # Post a Github comment if the run has not been cancelled and if we're running on a PR - if: env.COMMENT_ON_PR == 'true' && github.event.number != null && !cancelled() + if: ${{ !inputs.SKIP_JOB && env.COMMENT_ON_PR == 'true' && github.event.number != null && !cancelled() }} uses: marocchino/sticky-pull-request-comment@39c5b5dc7717447d0cba270cd115037d32d28443 # pin@39c5b5dc7717447d0cba270cd115037d32d2844 with: header: ${{ env.COMMENT_HEADER }} @@ -185,7 +202,7 @@ jobs: - name: Post to a Slack channel on failure # Post a Slack comment if the run has not been cancelled and the envs are set - if: env.POST_TO_SLACK == 'true' && failure() + if: ${{ !inputs.SKIP_JOB && env.POST_TO_SLACK == 'true' && failure() }} 
id: slack uses: slackapi/slack-github-action@936158bbe252e9a6062e793ea4609642c966e302 # pin@v1.21.0 with: @@ -196,3 +213,7 @@ jobs: } env: SLACK_WEBHOOK_URL: ${{ secrets.FORGE_SLACK_WEBHOOK_URL }} + + # Print out whether the job was skipped. + - run: echo "Skipping forge test!" + if: ${{ inputs.SKIP_JOB }} diff --git a/Cargo.lock b/Cargo.lock index bbde6c249b314..286fb6c9d754a 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -101,6 +101,15 @@ dependencies = [ "memchr", ] +[[package]] +name = "aho-corasick" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6748e8def348ed4d14996fa801f4122cd763fff530258cdc03f64b25f89d3a5a" +dependencies = [ + "memchr", +] + [[package]] name = "aliasable" version = "0.1.3" @@ -211,7 +220,6 @@ dependencies = [ "aptos-cli-common", "aptos-config", "aptos-crypto", - "aptos-db-tool", "aptos-debugger", "aptos-faucet-core", "aptos-framework", @@ -237,7 +245,7 @@ dependencies = [ "base64 0.13.0", "bcs 0.1.4", "chrono", - "clap 4.3.5", + "clap 4.3.21", "clap_complete", "codespan-reporting", "dirs", @@ -253,11 +261,8 @@ dependencies = [ "move-core-types", "move-coverage", "move-disassembler", - "move-ir-compiler", "move-ir-types", "move-package", - "move-prover", - "move-prover-boogie-backend", "move-symbol-pool", "move-unit-test", "move-vm-runtime", @@ -274,7 +279,6 @@ dependencies = [ "termcolor", "thiserror", "tokio", - "tokio-util 0.7.3", "toml 0.7.4", "walkdir", ] @@ -343,6 +347,7 @@ dependencies = [ "aptos-state-view", "aptos-storage-interface", "aptos-types", + "aptos-utils", "aptos-vm", "async-trait", "bcs 0.1.4", @@ -414,6 +419,31 @@ dependencies = [ "warp-reverse-proxy", ] +[[package]] +name = "aptos-api-tester" +version = "0.1.0" +dependencies = [ + "anyhow", + "aptos-api-types", + "aptos-cached-packages", + "aptos-framework", + "aptos-logger", + "aptos-network", + "aptos-push-metrics", + "aptos-rest-client", + "aptos-sdk", + "aptos-types", + "futures", + "move-core-types", + "once_cell", + "prometheus", + "rand 0.7.3", + "serde 1.0.149", + "serde_json", + "tokio", + "url", +] + [[package]] name = "aptos-api-types" version = "0.0.1" @@ -434,6 +464,7 @@ dependencies = [ "move-binary-format", "move-core-types", "move-resource-viewer", + "once_cell", "poem", "poem-openapi", "poem-openapi-derive", @@ -466,7 +497,7 @@ dependencies = [ "async-trait", "bcs 0.1.4", "bytes", - "clap 4.3.5", + "clap 4.3.21", "csv", "futures", "itertools", @@ -566,7 +597,7 @@ dependencies = [ "aptos-metrics-core", "aptos-types", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "dashmap", "itertools", "move-core-types", @@ -622,7 +653,7 @@ name = "aptos-cli-common" version = "1.0.0" dependencies = [ "anstyle", - "clap 4.3.5", + "clap 4.3.21", "clap_complete", ] @@ -939,7 +970,7 @@ dependencies = [ "bcs 0.1.4", "byteorder", "claims", - "clap 4.3.5", + "clap 4.3.21", "dashmap", "itertools", "lru 0.7.8", @@ -973,7 +1004,7 @@ dependencies = [ "aptos-types", "aptos-vm", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", ] [[package]] @@ -1024,7 +1055,7 @@ dependencies = [ "aptos-temppath", "aptos-types", "async-trait", - "clap 4.3.5", + "clap 4.3.21", "itertools", "owo-colors", "tokio", @@ -1051,7 +1082,7 @@ dependencies = [ "aptos-vm-logging", "aptos-vm-types", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "move-binary-format", "move-cli", "move-compiler", @@ -1172,6 +1203,7 @@ dependencies = [ "aptos-logger", "aptos-metrics-core", "aptos-node-resource-metrics", + "aptos-profiler", "aptos-push-metrics", "aptos-sdk", "aptos-state-view", @@ -1183,7 
+1215,7 @@ dependencies = [ "async-trait", "bcs 0.1.4", "chrono", - "clap 4.3.5", + "clap 4.3.21", "indicatif 0.15.0", "itertools", "jemallocator", @@ -1215,7 +1247,7 @@ dependencies = [ "aptos-types", "aptos-vm", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "crossbeam-channel", "itertools", "num_cpus", @@ -1284,7 +1316,7 @@ dependencies = [ "aptos-faucet-core", "aptos-logger", "aptos-sdk", - "clap 4.3.5", + "clap 4.3.21", "tokio", ] @@ -1300,7 +1332,7 @@ dependencies = [ "aptos-sdk", "async-trait", "captcha", - "clap 4.3.5", + "clap 4.3.21", "deadpool-redis", "enum_dispatch", "futures", @@ -1342,7 +1374,7 @@ dependencies = [ "anyhow", "aptos-faucet-core", "aptos-logger", - "clap 4.3.5", + "clap 4.3.21", "tokio", ] @@ -1354,7 +1386,7 @@ dependencies = [ "aptos-logger", "aptos-node-checker", "aptos-sdk", - "clap 4.3.5", + "clap 4.3.21", "env_logger", "futures", "gcp-bigquery-client", @@ -1391,7 +1423,7 @@ dependencies = [ "aptos-transaction-generator-lib", "async-trait", "chrono", - "clap 4.3.5", + "clap 4.3.21", "either", "futures", "hex", @@ -1434,7 +1466,7 @@ dependencies = [ "aptos-testcases", "async-trait", "chrono", - "clap 4.3.5", + "clap 4.3.21", "futures", "jemallocator", "rand 0.7.3", @@ -1476,7 +1508,7 @@ dependencies = [ "bulletproofs", "byteorder", "claims", - "clap 4.3.5", + "clap 4.3.21", "codespan-reporting", "curve25519-dalek-ng", "either", @@ -1497,6 +1529,7 @@ dependencies = [ "move-package", "move-prover", "move-prover-boogie-backend", + "move-prover-bytecode-pipeline", "move-stackless-bytecode", "move-unit-test", "move-vm-runtime", @@ -1556,7 +1589,7 @@ dependencies = [ "aptos-vault-client", "bcs 0.1.4", "byteorder", - "clap 4.3.5", + "clap 4.3.21", "datatest-stable", "hex", "move-binary-format", @@ -1597,7 +1630,7 @@ dependencies = [ "aptos-types", "aptos-vm-types", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "float-cmp", "move-binary-format", "move-bytecode-source-map", @@ -1633,6 +1666,7 @@ dependencies = [ "aptos-gas-meter", "aptos-package-builder", "aptos-types", + "aptos-vm-types", "inferno", "move-binary-format", "move-core-types", @@ -1664,7 +1698,7 @@ dependencies = [ "aptos-package-builder", "aptos-types", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "move-core-types", "move-model", "tempfile", @@ -1736,7 +1770,7 @@ dependencies = [ "bcs 0.1.4", "bigdecimal", "chrono", - "clap 4.3.5", + "clap 4.3.21", "diesel", "diesel_migrations", "field_count", @@ -1769,7 +1803,7 @@ dependencies = [ "async-trait", "backoff", "base64 0.13.0", - "clap 4.3.5", + "clap 4.3.21", "futures", "futures-core", "once_cell", @@ -1798,7 +1832,7 @@ dependencies = [ "aptos-runtimes", "async-trait", "base64 0.13.0", - "clap 4.3.5", + "clap 4.3.21", "cloud-storage", "futures", "once_cell", @@ -1825,7 +1859,7 @@ dependencies = [ "aptos-moving-average", "aptos-runtimes", "async-trait", - "clap 4.3.5", + "clap 4.3.21", "cloud-storage", "futures-util", "once_cell", @@ -1911,7 +1945,7 @@ dependencies = [ "async-trait", "backoff", "base64 0.13.0", - "clap 4.3.5", + "clap 4.3.21", "futures", "futures-core", "futures-util", @@ -1948,7 +1982,7 @@ dependencies = [ "bcs 0.1.4", "bigdecimal", "chrono", - "clap 4.3.5", + "clap 4.3.21", "diesel", "diesel_migrations", "field_count", @@ -1979,7 +2013,7 @@ dependencies = [ "backtrace", "base64 0.13.0", "chrono", - "clap 4.3.5", + "clap 4.3.21", "futures", "hostname", "once_cell", @@ -2003,7 +2037,7 @@ dependencies = [ "aptos-runtimes", "async-trait", "backtrace", - "clap 4.3.5", + "clap 4.3.21", "futures", "prometheus", "serde 1.0.149", @@ -2027,7 
+2061,7 @@ dependencies = [ "backoff", "backtrace", "base64 0.13.0", - "clap 4.3.5", + "clap 4.3.21", "cloud-storage", "futures", "futures-core", @@ -2299,10 +2333,11 @@ dependencies = [ name = "aptos-move-examples" version = "0.1.0" dependencies = [ + "aptos-framework", "aptos-gas-schedule", "aptos-types", "aptos-vm", - "clap 4.3.5", + "clap 4.3.21", "move-cli", "move-package", "move-prover", @@ -2480,7 +2515,7 @@ dependencies = [ "aptos-logger", "aptos-network", "aptos-types", - "clap 4.3.5", + "clap 4.3.21", "futures", "serde 1.0.149", "tokio", @@ -2524,7 +2559,7 @@ dependencies = [ "backoff", "base64 0.13.0", "chrono", - "clap 4.3.5", + "clap 4.3.21", "crossbeam-channel", "csv", "diesel", @@ -2595,7 +2630,7 @@ dependencies = [ "aptos-types", "aptos-vm", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "fail 0.5.0", "futures", "hex", @@ -2625,7 +2660,7 @@ dependencies = [ "aptos-sdk", "aptos-transaction-emitter-lib", "async-trait", - "clap 4.3.5", + "clap 4.3.21", "const_format", "env_logger", "futures", @@ -2699,7 +2734,7 @@ dependencies = [ "aptos-mempool", "aptos-storage-interface", "aptos-types", - "clap 4.3.5", + "clap 4.3.21", ] [[package]] @@ -2789,6 +2824,18 @@ dependencies = [ "thiserror", ] +[[package]] +name = "aptos-profiler" +version = "0.1.0" +dependencies = [ + "anyhow", + "backtrace", + "jemalloc-sys", + "jemallocator", + "pprof", + "regex", +] + [[package]] name = "aptos-proptest-helpers" version = "0.1.0" @@ -2855,7 +2902,7 @@ dependencies = [ "aptos-types", "aptos-vm-genesis", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "futures", "git2 0.16.1", "handlebars", @@ -2916,7 +2963,7 @@ dependencies = [ "aptos-types", "bcs 0.1.4", "bytes", - "clap 4.3.5", + "clap 4.3.21", "futures", "hex", "move-binary-format", @@ -2963,7 +3010,7 @@ dependencies = [ "aptos-types", "aptos-warp-webserver", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "futures", "hex", "itertools", @@ -2988,7 +3035,7 @@ dependencies = [ "aptos-logger", "aptos-rosetta", "aptos-types", - "clap 4.3.5", + "clap 4.3.21", "serde 1.0.149", "serde_json", "tokio", @@ -3096,7 +3143,7 @@ dependencies = [ "aptos-framework", "aptos-types", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "heck 0.3.3", "move-core-types", "once_cell", @@ -3418,7 +3465,7 @@ dependencies = [ "base64 0.13.0", "bcs 0.1.4", "chrono", - "clap 4.3.5", + "clap 4.3.21", "debug-ignore", "flate2", "futures", @@ -3512,7 +3559,7 @@ dependencies = [ "aptos-types", "aptos-vm", "aptos-vm-logging", - "clap 4.3.5", + "clap 4.3.21", "criterion", "criterion-cpu-time", "num_cpus", @@ -3530,7 +3577,7 @@ dependencies = [ "aptos-logger", "aptos-sdk", "aptos-transaction-emitter-lib", - "clap 4.3.5", + "clap 4.3.21", "futures", "rand 0.7.3", "tokio", @@ -3552,7 +3599,7 @@ dependencies = [ "aptos-sdk", "aptos-transaction-generator-lib", "async-trait", - "clap 4.3.5", + "clap 4.3.21", "futures", "itertools", "once_cell", @@ -3574,7 +3621,7 @@ dependencies = [ "aptos-logger", "aptos-sdk", "async-trait", - "clap 4.3.5", + "clap 4.3.21", "move-binary-format", "once_cell", "rand 0.7.3", @@ -3599,7 +3646,7 @@ dependencies = [ "aptos-vm", "aptos-vm-genesis", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "datatest-stable", "hex", "move-binary-format", @@ -3752,7 +3799,7 @@ dependencies = [ "aptos-language-e2e-tests", "aptos-types", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "move-binary-format", "move-bytecode-source-map", "move-core-types", @@ -3812,7 +3859,7 @@ dependencies = [ "aptos-vm", "aptos-vm-genesis", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "glob", 
"move-binary-format", "move-core-types", @@ -4738,7 +4785,7 @@ checksum = "ba3569f383e8f1598449f1a423e72e99569137b47740b1da11ef19af3d5c3223" dependencies = [ "lazy_static 1.4.0", "memchr", - "regex-automata", + "regex-automata 0.1.10", ] [[package]] @@ -5022,24 +5069,23 @@ dependencies = [ [[package]] name = "clap" -version = "4.3.5" +version = "4.3.21" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2686c4115cb0810d9a984776e197823d08ec94f176549a89a9efded477c456dc" +checksum = "c27cdf28c0f604ba3f512b0c9a409f8de8513e4816705deb0498b627e7c3a3fd" dependencies = [ "clap_builder", - "clap_derive 4.3.2", + "clap_derive 4.3.12", "once_cell", ] [[package]] name = "clap_builder" -version = "4.3.5" +version = "4.3.21" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2e53afce1efce6ed1f633cf0e57612fe51db54a1ee4fd8f8503d078fe02d69ae" +checksum = "08a9f1ab5e9f01a9b81f202e8562eb9a10de70abf9eaeac1be465c28b75aa4aa" dependencies = [ "anstream", "anstyle", - "bitflags 1.3.2", "clap_lex 0.5.0", "strsim 0.10.0", ] @@ -5050,7 +5096,7 @@ version = "4.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7f6b5c519bab3ea61843a7923d074b04245624bb84a64a8c150f5deb014e388b" dependencies = [ - "clap 4.3.5", + "clap 4.3.21", ] [[package]] @@ -5068,9 +5114,9 @@ dependencies = [ [[package]] name = "clap_derive" -version = "4.3.2" +version = "4.3.12" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b8cd2b2a819ad6eec39e8f1d6b53001af1e5469f8c177579cdaeb313115b825f" +checksum = "54a9bb5758fc5dfe728d1019941681eccaf0cf8a4189b692a0ee2f2ecf90a050" dependencies = [ "heck 0.4.0", "proc-macro2 1.0.64", @@ -5352,6 +5398,15 @@ version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5827cebf4670468b8772dd191856768aedcb1b0278a04f989f7766351917b9dc" +[[package]] +name = "cpp_demangle" +version = "0.4.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ee34052ee3d93d6d8f3e6f81d85c47921f6653a19a7b70e939e3e602d893a674" +dependencies = [ + "cfg-if", +] + [[package]] name = "cpufeatures" version = "0.2.4" @@ -5768,6 +5823,15 @@ dependencies = [ "serde 1.0.149", ] +[[package]] +name = "debugid" +version = "0.8.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bef552e6f588e446098f6ba40d89ac146c8c7b64aade83c051ee00bb5d2bc18d" +dependencies = [ + "uuid", +] + [[package]] name = "der" version = "0.5.1" @@ -6452,6 +6516,18 @@ version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "31a7a908b8f32538a2143e59a6e4e2508988832d5d4d6f7c156b3cbc762643a5" +[[package]] +name = "findshlibs" +version = "0.10.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "40b9e59cd0f7e0806cca4be089683ecb6434e602038df21fe6bf6711b2f07f64" +dependencies = [ + "cc", + "lazy_static 1.4.0", + "libc", + "winapi 0.3.9", +] + [[package]] name = "fixed-hash" version = "0.7.0" @@ -6740,7 +6816,7 @@ dependencies = [ "aptos-network", "aptos-types", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "move-core-types", "rand 0.7.3", "serde 1.0.149", @@ -6881,7 +6957,7 @@ version = "0.4.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0a1e17342619edbc21a964c2afbeb6c820c6a2560032872f397bb97ea127bd0a" dependencies = [ - "aho-corasick", + "aho-corasick 0.7.18", "bstr", "fnv", "log", @@ -7340,14 +7416,14 @@ 
checksum = "c4a1e36c821dbe04574f602848a19f742f4fb3c98d40449f11bcad18d6b17421" [[package]] name = "httpmock" -version = "0.6.6" +version = "0.6.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c159c4fc205e6c1a9b325cb7ec135d13b5f47188ce175dabb76ec847f331d9bd" +checksum = "4b02e044d3b4c2f94936fb05f9649efa658ca788f44eb6b87554e2033fc8ce93" dependencies = [ "assert-json-diff", "async-object-pool", "async-trait", - "base64 0.13.0", + "base64 0.21.2", "basic-cookies", "crossbeam-utils", "form_urlencoded", @@ -7671,7 +7747,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2fb7c1b80a1dfa604bb4a649a5c5aeef3d913f7c520cb42b40e534e8a61bcdfc" dependencies = [ "ahash 0.8.3", - "clap 4.3.5", + "clap 4.3.21", "crossbeam-channel", "crossbeam-utils", "dashmap", @@ -8016,7 +8092,7 @@ dependencies = [ "petgraph 0.6.2", "pico-args", "regex", - "regex-syntax", + "regex-syntax 0.6.27", "string_cache", "term", "tiny-keccak", @@ -8150,9 +8226,9 @@ dependencies = [ [[package]] name = "libc" -version = "0.2.140" +version = "0.2.147" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "99227334921fae1a979cf0bfdfcc6b3e5ce376ef57e16fb6fb3ea2ed6095f80c" +checksum = "b4668fb0ea861c1df094127ac5f1da3409a82116a4ba74fca2e58ef927159bb3" [[package]] name = "libfuzzer-sys" @@ -8329,7 +8405,7 @@ name = "listener" version = "0.1.0" dependencies = [ "bytes", - "clap 4.3.5", + "clap 4.3.21", "tokio", ] @@ -8456,7 +8532,7 @@ version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8263075bb86c5a1b1427b5ae862e8889656f126e9f77c484496e8b47cf5c5558" dependencies = [ - "regex-automata", + "regex-automata 0.1.10", ] [[package]] @@ -8493,6 +8569,15 @@ version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2dffe52ecf27772e601905b7522cb4ef790d2cc203488bbd0e2fe85fcb74566d" +[[package]] +name = "memmap2" +version = "0.5.10" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "83faa42c0a078c393f6b29d5db232d8be22776a891f8f56e5284faee4a20b327" +dependencies = [ + "libc", +] + [[package]] name = "memoffset" version = "0.6.5" @@ -8586,14 +8671,14 @@ dependencies = [ [[package]] name = "mio" -version = "0.8.4" +version = "0.8.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "57ee1c23c7c63b0c9250c339ffdc69255f110b298b901b9f6c82547b7b87caaf" +checksum = "927a765cd3fc26206e66b296465fa9d3e5ab003e651c1b3c060e7956d96b19d2" dependencies = [ "libc", "log", "wasi 0.11.0+wasi-snapshot-preview1", - "windows-sys 0.36.1", + "windows-sys 0.48.0", ] [[package]] @@ -8649,7 +8734,7 @@ dependencies = [ "anyhow", "aptos-framework", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "move-binary-format", ] @@ -8685,7 +8770,7 @@ name = "move-analyzer" version = "1.0.0" dependencies = [ "anyhow", - "clap 4.3.5", + "clap 4.3.21", "codespan-reporting", "crossbeam", "derivative", @@ -8795,7 +8880,7 @@ name = "move-bytecode-viewer" version = "0.1.0" dependencies = [ "anyhow", - "clap 4.3.5", + "clap 4.3.21", "crossterm 0.26.1", "move-binary-format", "move-bytecode-source-map", @@ -8812,7 +8897,7 @@ version = "0.1.0" dependencies = [ "anyhow", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "codespan-reporting", "colored", "datatest-stable", @@ -8874,7 +8959,7 @@ version = "0.0.1" dependencies = [ "anyhow", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "codespan-reporting", "datatest-stable", "difference", @@ 
-8912,7 +8997,7 @@ version = "0.1.0" dependencies = [ "anyhow", "bcs 0.1.4", - "clap 3.2.23", + "clap 4.3.21", "codespan", "codespan-reporting", "datatest-stable", @@ -8920,6 +9005,7 @@ dependencies = [ "itertools", "move-binary-format", "move-command-line-common", + "move-compiler", "move-core-types", "move-disassembler", "move-ir-types", @@ -8960,6 +9046,7 @@ dependencies = [ "serde 1.0.149", "serde_bytes", "serde_json", + "thiserror", "uint", ] @@ -8969,7 +9056,7 @@ version = "0.1.0" dependencies = [ "anyhow", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "codespan", "colored", "move-binary-format", @@ -8987,7 +9074,7 @@ name = "move-disassembler" version = "0.1.0" dependencies = [ "anyhow", - "clap 4.3.5", + "clap 4.3.21", "colored", "move-binary-format", "move-bytecode-source-map", @@ -9054,7 +9141,7 @@ name = "move-explain" version = "0.1.0" dependencies = [ "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "move-command-line-common", "move-core-types", ] @@ -9065,7 +9152,7 @@ version = "0.1.0" dependencies = [ "anyhow", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "move-binary-format", "move-bytecode-source-map", "move-bytecode-verifier", @@ -9162,7 +9249,7 @@ version = "0.1.0" dependencies = [ "anyhow", "bcs 0.1.4", - "clap 4.3.5", + "clap 4.3.21", "colored", "datatest-stable", "dirs-next", @@ -9203,7 +9290,7 @@ dependencies = [ "anyhow", "async-trait", "atty", - "clap 4.3.5", + "clap 4.3.21", "codespan", "codespan-reporting", "datatest-stable", @@ -9221,6 +9308,7 @@ dependencies = [ "move-ir-types", "move-model", "move-prover-boogie-backend", + "move-prover-bytecode-pipeline", "move-prover-test-utils", "move-stackless-bytecode", "num 0.4.0", @@ -9253,6 +9341,7 @@ dependencies = [ "move-compiler", "move-core-types", "move-model", + "move-prover-bytecode-pipeline", "move-stackless-bytecode", "num 0.4.0", "once_cell", @@ -9265,6 +9354,40 @@ dependencies = [ "tokio", ] +[[package]] +name = "move-prover-bytecode-pipeline" +version = "0.1.0" +dependencies = [ + "anyhow", + "async-trait", + "atty", + "clap 4.3.21", + "codespan", + "codespan-reporting", + "datatest-stable", + "futures", + "hex", + "itertools", + "log", + "move-binary-format", + "move-core-types", + "move-model", + "move-stackless-bytecode", + "move-stackless-bytecode-test-utils", + "num 0.4.0", + "once_cell", + "pretty", + "rand 0.8.5", + "serde 1.0.149", + "serde_json", + "shell-words", + "simplelog", + "tempfile", + "tokio", + "toml 0.5.9", + "walkdir", +] + [[package]] name = "move-prover-test-utils" version = "0.1.0" @@ -9309,8 +9432,7 @@ dependencies = [ "move-core-types", "move-ir-to-bytecode", "move-model", - "move-prover-test-utils", - "move-stdlib", + "move-stackless-bytecode-test-utils", "num 0.4.0", "once_cell", "paste", @@ -9318,6 +9440,20 @@ dependencies = [ "serde 1.0.149", ] +[[package]] +name = "move-stackless-bytecode-test-utils" +version = "0.1.0" +dependencies = [ + "anyhow", + "codespan-reporting", + "move-command-line-common", + "move-compiler", + "move-model", + "move-prover-test-utils", + "move-stackless-bytecode", + "move-stdlib", +] + [[package]] name = "move-stdlib" version = "0.1.1" @@ -9382,7 +9518,7 @@ version = "0.1.0" dependencies = [ "anyhow", "atty", - "clap 4.3.5", + "clap 4.3.21", "codespan", "codespan-reporting", "datatest-stable", @@ -9424,7 +9560,7 @@ name = "move-transactional-test-runner" version = "0.1.0" dependencies = [ "anyhow", - "clap 4.3.5", + "clap 4.3.21", "colored", "datatest-stable", "difference", @@ -9459,7 +9595,7 @@ version = "0.1.0" dependencies = [ "anyhow", 
"better_any", - "clap 4.3.5", + "clap 4.3.21", "codespan-reporting", "colored", "datatest-stable", @@ -9663,6 +9799,18 @@ version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e4a24736216ec316047a1fc4252e27dabb04218aa4a3f37c6e7ddbf1f9782b54" +[[package]] +name = "nix" +version = "0.26.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bfdda3d196821d6af13126e40375cdf7da646a96114af134d5f417a9a1dc8e1a" +dependencies = [ + "bitflags 1.3.2", + "cfg-if", + "libc", + "static_assertions", +] + [[package]] name = "no-std-compat" version = "0.4.1" @@ -10676,6 +10824,27 @@ version = "0.3.19" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "26f6a7b87c2e435a3241addceeeff740ff8b7e76b74c13bf9acb17fa454ea00b" +[[package]] +name = "pprof" +version = "0.11.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "196ded5d4be535690899a4631cc9f18cdc41b7ebf24a79400f46f48e49a11059" +dependencies = [ + "backtrace", + "cfg-if", + "findshlibs", + "inferno", + "libc", + "log", + "nix", + "once_cell", + "parking_lot 0.12.1", + "smallvec", + "symbolic-demangle", + "tempfile", + "thiserror", +] + [[package]] name = "ppv-lite86" version = "0.2.16" @@ -10935,7 +11104,7 @@ dependencies = [ "rand 0.8.5", "rand_chacha 0.3.1", "rand_xorshift", - "regex-syntax", + "regex-syntax 0.6.27", "rusty-fork", "tempfile", ] @@ -10989,7 +11158,7 @@ version = "0.1.0" dependencies = [ "anyhow", "chrono", - "clap 4.3.5", + "clap 4.3.21", "codespan-reporting", "hex", "itertools", @@ -10998,6 +11167,7 @@ dependencies = [ "move-model", "move-prover", "move-prover-boogie-backend", + "move-prover-bytecode-pipeline", "move-stackless-bytecode", "num 0.4.0", "plotters", @@ -11368,13 +11538,14 @@ dependencies = [ [[package]] name = "regex" -version = "1.6.0" +version = "1.9.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4c4eb3267174b8c6c2f654116623910a0fef09c4753f8dd83db29c48a0df988b" +checksum = "81bc1d4caf89fac26a70747fe603c130093b53c773888797a6329091246d651a" dependencies = [ - "aho-corasick", + "aho-corasick 1.0.4", "memchr", - "regex-syntax", + "regex-automata 0.3.6", + "regex-syntax 0.7.4", ] [[package]] @@ -11383,7 +11554,18 @@ version = "0.1.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6c230d73fb8d8c1b9c0b3135c5142a8acee3a0558fb8db5cf1cb65f8d7862132" dependencies = [ - "regex-syntax", + "regex-syntax 0.6.27", +] + +[[package]] +name = "regex-automata" +version = "0.3.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fed1ceff11a1dddaee50c9dc8e4938bd106e9d89ae372f192311e7da498e3b69" +dependencies = [ + "aho-corasick 1.0.4", + "memchr", + "regex-syntax 0.7.4", ] [[package]] @@ -11392,6 +11574,12 @@ version = "0.6.27" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a3f87b73ce11b1619a3c6332f45341e0047173771e8b8b73f87bfeefb7b56244" +[[package]] +name = "regex-syntax" +version = "0.7.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e5ea92a5b6195c6ef2a0295ea818b312502c6fc94dde986c5553242e18fd4ce2" + [[package]] name = "remove_dir_all" version = "0.5.3" @@ -11934,7 +12122,7 @@ name = "sender" version = "0.1.0" dependencies = [ "bytes", - "clap 4.3.5", + "clap 4.3.21", "event-listener", "quanta", "tokio", @@ -12671,6 +12859,29 @@ version = "2.5.0" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "734676eb262c623cec13c3155096e08d1f8f29adce39ba17948b18dad1e54142" +[[package]] +name = "symbolic-common" +version = "10.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1b55cdc318ede251d0957f07afe5fed912119b8c1bc5a7804151826db999e737" +dependencies = [ + "debugid", + "memmap2", + "stable_deref_trait", + "uuid", +] + +[[package]] +name = "symbolic-demangle" +version = "10.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "79be897be8a483a81fff6a3a4e195b4ac838ef73ca42d348b3f722da9902e489" +dependencies = [ + "cpp_demangle", + "rustc-demangle", + "symbolic-common", +] + [[package]] name = "syn" version = "0.15.44" @@ -12863,7 +13074,7 @@ dependencies = [ name = "test-generation" version = "0.1.0" dependencies = [ - "clap 4.3.5", + "clap 4.3.21", "crossbeam-channel", "getrandom 0.2.7", "hex", @@ -12922,22 +13133,22 @@ checksum = "222a222a5bfe1bba4a77b45ec488a741b3cb8872e5e499451fd7d0129c9c7c3d" [[package]] name = "thiserror" -version = "1.0.37" +version = "1.0.45" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "10deb33631e3c9018b9baf9dcbbc4f737320d2b576bac10f6aefa048fa407e3e" +checksum = "dedd246497092a89beedfe2c9f176d44c1b672ea6090edc20544ade01fbb7ea0" dependencies = [ "thiserror-impl", ] [[package]] name = "thiserror-impl" -version = "1.0.37" +version = "1.0.45" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "982d17546b47146b28f7c22e3d08465f6b8903d0ea13c1660d9d84a6e7adcdbb" +checksum = "7d7b1fadccbbc7e19ea64708629f9d8dccd007c260d66485f20a6d41bc1cf4b3" dependencies = [ "proc-macro2 1.0.64", "quote 1.0.29", - "syn 1.0.105", + "syn 2.0.25", ] [[package]] @@ -13074,14 +13285,14 @@ checksum = "cda74da7e1a664f795bb1f8a87ec406fb89a02522cf6e50620d016add6dbbf5c" [[package]] name = "tokio" -version = "1.21.2" +version = "1.29.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a9e03c497dc955702ba729190dc4aac6f2a0ce97f913e5b1b5912fc5039d9099" +checksum = "532826ff75199d5833b9d2c5fe410f29235e25704ee5f0ef599fb51c21f4a4da" dependencies = [ "autocfg", + "backtrace", "bytes", "libc", - "memchr", "mio", "num_cpus", "parking_lot 0.12.1", @@ -13090,7 +13301,7 @@ dependencies = [ "socket2", "tokio-macros", "tracing", - "winapi 0.3.9", + "windows-sys 0.48.0", ] [[package]] @@ -13105,13 +13316,13 @@ dependencies = [ [[package]] name = "tokio-macros" -version = "1.8.0" +version = "2.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9724f9a975fb987ef7a3cd9be0350edcbe130698af5b8f7a631e23d42d052484" +checksum = "630bdcf245f78637c13ec01ffae6187cca34625e8c63150d424b59e55af2675e" dependencies = [ "proc-macro2 1.0.64", "quote 1.0.29", - "syn 1.0.105", + "syn 2.0.25", ] [[package]] diff --git a/Cargo.toml b/Cargo.toml index d089a2d61002b..c01ae7b99e382 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -48,6 +48,7 @@ members = [ "consensus/consensus-types", "consensus/safety-rules", "crates/aptos", + "crates/aptos-api-tester", "crates/aptos-bitvec", "crates/aptos-build-info", "crates/aptos-compression", @@ -71,6 +72,7 @@ members = [ "crates/aptos-network-checker", "crates/aptos-node-identity", "crates/aptos-openapi", + "crates/aptos-profiler", "crates/aptos-proptest-helpers", "crates/aptos-push-metrics", "crates/aptos-rate-limiter", @@ -193,9 +195,11 @@ members = [ "third_party/move/move-ir-compiler/transactional-tests", 
"third_party/move/move-ir/types", "third_party/move/move-model", + "third_party/move/move-model/bytecode", + "third_party/move/move-model/bytecode-test-utils", "third_party/move/move-prover", "third_party/move/move-prover/boogie-backend", - "third_party/move/move-prover/bytecode", + "third_party/move/move-prover/bytecode-pipeline", "third_party/move/move-prover/lab", "third_party/move/move-prover/move-abigen", "third_party/move/move-prover/move-docgen", @@ -358,6 +362,7 @@ aptos-openapi = { path = "crates/aptos-openapi" } aptos-package-builder = { path = "aptos-move/package-builder" } aptos-peer-monitoring-service-client = { path = "network/peer-monitoring-service/client" } aptos-peer-monitoring-service-server = { path = "network/peer-monitoring-service/server" } +aptos-profiler = { path = "crates/aptos-profiler" } aptos-peer-monitoring-service-types = { path = "network/peer-monitoring-service/types" } aptos-proptest-helpers = { path = "crates/aptos-proptest-helpers" } aptos-protos = { path = "crates/aptos-protos" } @@ -446,7 +451,7 @@ chrono = { version = "0.4.19", features = ["clock", "serde"] } cfg_block = "0.1.1" cfg-if = "1.0.0" claims = "0.7" -clap = { version = "4.3.5", features = ["derive", "unstable-styles"] } +clap = { version = "4.3.9", features = ["derive", "unstable-styles"] } clap_complete = "4.3.1" cloud-storage = { version = "0.11.1", features = ["global-client"] } codespan-reporting = "0.11.1" @@ -500,7 +505,7 @@ heck = "0.3.2" hex = "0.4.3" hkdf = "0.10.0" hostname = "0.3.1" -httpmock = "0.6" +httpmock = "0.6.8" hyper = { version = "0.14.18", features = ["full"] } hyper-tls = "0.5.0" image = "0.24.5" @@ -561,7 +566,7 @@ random_word = "0.3.0" rayon = "1.5.2" redis = { version = "0.22.3", features = ["tokio-comp", "script", "connection-manager"] } redis-test = { version = "0.1.1", features = ["aio"] } -regex = "1.5.5" +regex = "1.9.3" reqwest = { version = "0.11.11", features = [ "blocking", "cookies", @@ -657,7 +662,9 @@ move-model = { path = "third_party/move/move-model" } move-package = { path = "third_party/move/tools/move-package" } move-prover = { path = "third_party/move/move-prover" } move-prover-boogie-backend = { path = "third_party/move/move-prover/boogie-backend" } -move-stackless-bytecode = { path = "third_party/move/move-prover/bytecode" } +move-prover-bytecode-pipeline = { path = "third_party/move/move-prover/bytecode-pipeline" } +move-stackless-bytecode = { path = "third_party/move/move-model/bytecode" } +move-stackless-bytecode-test-utils = { path = "third_party/move/move-model/bytecode-test-utils" } aptos-move-stdlib = { path = "aptos-move/framework/move-stdlib" } aptos-table-natives = { path = "aptos-move/framework/table-natives" } move-prover-test-utils = { path = "third_party/move/move-prover/test-utils" } diff --git a/api/Cargo.toml b/api/Cargo.toml index 4ba6c1b724c21..7c433b2641e74 100644 --- a/api/Cargo.toml +++ b/api/Cargo.toml @@ -18,6 +18,7 @@ aptos-api-types = { workspace = true } aptos-build-info = { workspace = true } aptos-config = { workspace = true } aptos-crypto = { workspace = true } +aptos-framework = { workspace = true } aptos-gas-schedule = { workspace = true } aptos-logger = { workspace = true } aptos-mempool = { workspace = true } @@ -26,6 +27,7 @@ aptos-runtimes = { workspace = true } aptos-state-view = { workspace = true } aptos-storage-interface = { workspace = true } aptos-types = { workspace = true } +aptos-utils = { workspace = true } aptos-vm = { workspace = true } async-trait = { workspace = true } bcs = { workspace = true } 
diff --git a/api/goldens/aptos_api__tests__accounts_test__test_get_account_resources_by_invalid_address_missing_0x_prefix.json b/api/goldens/aptos_api__tests__accounts_test__test_get_account_resources_by_invalid_address_missing_0x_prefix.json deleted file mode 100644 index 35dff7fc6566e..0000000000000 --- a/api/goldens/aptos_api__tests__accounts_test__test_get_account_resources_by_invalid_address_missing_0x_prefix.json +++ /dev/null @@ -1,15 +0,0 @@ -{ - "message": "failed to parse path `address`: failed to parse \"string(Address)\": invalid account address \"1\"", - "error_code": "web_framework_error", - "vm_error_code": null -} -{ - "message": "failed to parse path `address`: failed to parse \"string(Address)\": invalid account address \"0xzz\"", - "error_code": "web_framework_error", - "vm_error_code": null -} -{ - "message": "failed to parse path `address`: failed to parse \"string(Address)\": invalid account address \"01\"", - "error_code": "web_framework_error", - "vm_error_code": null -} diff --git a/api/goldens/aptos_api__tests__invalid_post_request_test__test_invalid_entry_function_argument_address_string.json b/api/goldens/aptos_api__tests__invalid_post_request_test__test_invalid_entry_function_argument_address_string.json index ab77d1b219b22..008cb331b9ded 100644 --- a/api/goldens/aptos_api__tests__invalid_post_request_test__test_invalid_entry_function_argument_address_string.json +++ b/api/goldens/aptos_api__tests__invalid_post_request_test__test_invalid_entry_function_argument_address_string.json @@ -1,5 +1,5 @@ { - "message": "The given transaction is invalid: Failed to parse transaction payload: parse arguments[0] failed, expect string
<address>, caused by error: invalid account address \"invalid\"", + "message": "The given transaction is invalid: Failed to parse transaction payload: parse arguments[0] failed, expect string<address>
, caused by error: Invalid account address: Hex characters are invalid: Invalid character 'i' at position 57", "error_code": "invalid_input", "vm_error_code": null } diff --git a/api/goldens/aptos_api__tests__invalid_post_request_test__test_invalid_entry_function_argument_u64_string.json b/api/goldens/aptos_api__tests__invalid_post_request_test__test_invalid_entry_function_argument_u64_string.json index ab77d1b219b22..008cb331b9ded 100644 --- a/api/goldens/aptos_api__tests__invalid_post_request_test__test_invalid_entry_function_argument_u64_string.json +++ b/api/goldens/aptos_api__tests__invalid_post_request_test__test_invalid_entry_function_argument_u64_string.json @@ -1,5 +1,5 @@ { - "message": "The given transaction is invalid: Failed to parse transaction payload: parse arguments[0] failed, expect string
<address>, caused by error: invalid account address \"invalid\"", + "message": "The given transaction is invalid: Failed to parse transaction payload: parse arguments[0] failed, expect string<address>
, caused by error: Invalid account address: Hex characters are invalid: Invalid character 'i' at position 57", "error_code": "invalid_input", "vm_error_code": null } diff --git a/api/goldens/aptos_api__tests__invalid_post_request_test__test_missing_entry_function_arguments.json b/api/goldens/aptos_api__tests__invalid_post_request_test__test_missing_entry_function_arguments.json index c65162f36d08b..34bf02cdb264e 100644 --- a/api/goldens/aptos_api__tests__invalid_post_request_test__test_missing_entry_function_arguments.json +++ b/api/goldens/aptos_api__tests__invalid_post_request_test__test_missing_entry_function_arguments.json @@ -1,5 +1,5 @@ { - "message": "The given transaction is invalid: Failed to parse transaction payload: parse arguments[0] failed, expect string
<address>, caused by error: invalid account address \"0\"", + "message": "The given transaction is invalid: Failed to parse transaction payload: expected 1 arguments [string<address>
], but got 0 ([])", "error_code": "invalid_input", "vm_error_code": null } diff --git a/api/goldens/aptos_api__tests__state_test__test_get_account_module_by_invalid_address.json b/api/goldens/aptos_api__tests__state_test__test_get_account_module_by_invalid_address.json index d1c2d1a5d9aae..2c4c41d51e19e 100644 --- a/api/goldens/aptos_api__tests__state_test__test_get_account_module_by_invalid_address.json +++ b/api/goldens/aptos_api__tests__state_test__test_get_account_module_by_invalid_address.json @@ -1,5 +1,5 @@ { - "message": "failed to parse path `address`: failed to parse \"string(Address)\": invalid account address \"1\"", + "message": "failed to parse path `address`: failed to parse \"string(Address)\": Invalid account address: Hex characters are invalid: Invalid character 'x' at position 61", "error_code": "web_framework_error", "vm_error_code": null } diff --git a/api/goldens/aptos_api__tests__state_test__test_get_account_resource_by_invalid_address.json b/api/goldens/aptos_api__tests__state_test__test_get_account_resource_by_invalid_address.json index 35dff7fc6566e..697e8e0518a6e 100644 --- a/api/goldens/aptos_api__tests__state_test__test_get_account_resource_by_invalid_address.json +++ b/api/goldens/aptos_api__tests__state_test__test_get_account_resource_by_invalid_address.json @@ -1,15 +1,10 @@ { - "message": "failed to parse path `address`: failed to parse \"string(Address)\": invalid account address \"1\"", + "message": "failed to parse path `address`: failed to parse \"string(Address)\": Invalid account address: Hex characters are invalid: Invalid character 'x' at position 62", "error_code": "web_framework_error", "vm_error_code": null } { - "message": "failed to parse path `address`: failed to parse \"string(Address)\": invalid account address \"0xzz\"", - "error_code": "web_framework_error", - "vm_error_code": null -} -{ - "message": "failed to parse path `address`: failed to parse \"string(Address)\": invalid account address \"01\"", + "message": "failed to parse path `address`: failed to parse \"string(Address)\": Invalid account address: Hex characters are invalid: Invalid character 'z' at position 62", "error_code": "web_framework_error", "vm_error_code": null } diff --git a/api/src/context.rs b/api/src/context.rs index ad0954bb2577c..c46b16ba0c4b0 100644 --- a/api/src/context.rs +++ b/api/src/context.rs @@ -43,9 +43,10 @@ use aptos_types::{ }, transaction::{SignedTransaction, TransactionWithProof, Version}, }; +use aptos_utils::aptos_try; use aptos_vm::{ data_cache::{AsMoveResolver, StorageAdapter}, - move_vm_ext::MoveResolverExt, + move_vm_ext::AptosMoveResolver, }; use futures::{channel::oneshot, SinkExt}; use move_core_types::language_storage::{ModuleId, StructTag}; @@ -347,7 +348,23 @@ impl Context { let kvs = kvs .into_iter() .map(|(key, value)| { - if state_view.as_move_resolver().is_resource_group(&key) { + let is_resource_group = + |resolver: &dyn AptosMoveResolver, struct_tag: &StructTag| -> bool { + aptos_try!({ + let md = aptos_framework::get_metadata( + &resolver.get_module_metadata(&struct_tag.module_id()), + )?; + md.struct_attributes + .get(struct_tag.name.as_ident_str().as_str())? 
+ .iter() + .find(|attr| attr.is_resource_group())?; + Some(()) + }) + .is_some() + }; + + let resolver = state_view.as_move_resolver(); + if is_resource_group(&resolver, &key) { // An error here means a storage invariant has been violated bcs::from_bytes::(&value) .map(|map| { diff --git a/api/src/tests/accounts_test.rs b/api/src/tests/accounts_test.rs index b958178fd209e..273da75126400 100644 --- a/api/src/tests/accounts_test.rs +++ b/api/src/tests/accounts_test.rs @@ -31,19 +31,6 @@ async fn test_get_account_resources_by_address_0x0() { context.check_golden_output(resp); } -#[tokio::test(flavor = "multi_thread", worker_threads = 2)] -async fn test_get_account_resources_by_invalid_address_missing_0x_prefix() { - let mut context = new_test_context(current_function_name!()); - let invalid_addresses = vec!["1", "0xzz", "01"]; - for invalid_address in &invalid_addresses { - let resp = context - .expect_status_code(400) - .get(&account_resources(invalid_address)) - .await; - context.check_golden_output(resp); - } -} - #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_get_account_resources_by_valid_account_address() { let context = new_test_context(current_function_name!()); diff --git a/api/src/tests/events_test.rs b/api/src/tests/events_test.rs index e60132c8919b4..f612846cba677 100644 --- a/api/src/tests/events_test.rs +++ b/api/src/tests/events_test.rs @@ -3,8 +3,10 @@ // SPDX-License-Identifier: Apache-2.0 use super::new_test_context; -use aptos_api_test_context::current_function_name; +use aptos_api_test_context::{current_function_name, TestContext}; use percent_encoding::{utf8_percent_encode, NON_ALPHANUMERIC}; +use serde_json::json; +use std::path::PathBuf; static ACCOUNT_ADDRESS: &str = "0xa550c18"; static CREATION_NUMBER: &str = "0"; @@ -136,6 +138,48 @@ async fn test_get_events_by_invalid_account_event_handle_field_type() { context.check_golden_output(resp); } +#[tokio::test(flavor = "multi_thread", worker_threads = 2)] +async fn test_module_events() { + let mut context = new_test_context(current_function_name!()); + + // Prepare accounts + let mut user = context.create_account().await; + + let user_addr = user.address(); + // Publish packages + let named_addresses = vec![("event".to_string(), user_addr)]; + let txn = futures::executor::block_on(async move { + let path = PathBuf::from(std::env!("CARGO_MANIFEST_DIR")) + .join("../aptos-move/move-examples/event"); + TestContext::build_package(path, named_addresses) + }); + context.publish_package(&mut user, txn).await; + + context + .api_execute_entry_function( + &mut user, + &format!("0x{}::event::emit", user_addr), + json!([]), + json!(["7"]), + ) + .await; + + let resp = context + .get(format!("/accounts/{}/transactions", user.address()).as_str()) + .await; + let txn = &resp.as_array().unwrap()[1]; + let resp = context + .get(format!("/transactions/by_hash/{}", txn["hash"].as_str().unwrap()).as_str()) + .await; + + let events = resp["events"].as_array().unwrap(); + assert_eq!(events.len(), 7); + // All events are module events + assert!(events.iter().all(|c| c.get("guid").map_or(false, |d| d + .get("account_address") + .map_or(false, |t| t.as_str().unwrap() == "0x0")))); +} + // until we have generics in the genesis #[ignore] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] diff --git a/api/src/tests/invalid_post_request_test.rs b/api/src/tests/invalid_post_request_test.rs index f4b8179926a1c..c4d849da813a9 100644 --- a/api/src/tests/invalid_post_request_test.rs +++ 
b/api/src/tests/invalid_post_request_test.rs @@ -49,7 +49,7 @@ async fn test_invalid_entry_function_argument_address_string() { #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_missing_entry_function_arguments() { let mut req = signing_message_request(); - req["payload"]["arguments"] = json!(["0"]); + req["payload"]["arguments"] = json!([]); response_error_msg(req, current_function_name!()).await; } diff --git a/api/src/tests/modules.rs b/api/src/tests/modules.rs index f3095777d9b8d..e451ff9718585 100644 --- a/api/src/tests/modules.rs +++ b/api/src/tests/modules.rs @@ -8,10 +8,7 @@ use std::path::PathBuf; #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_abi() { let mut context = new_test_context(current_function_name!()); - let mut root_account = context.root_account().await; - let mut account = context.gen_account(); - let txn = context.create_user_account_by(&mut root_account, &account); - context.commit_block(&vec![txn]).await; + let mut account = context.create_account().await; // Publish packages let named_addresses = vec![("abi".to_string(), account.address())]; diff --git a/api/src/tests/state_test.rs b/api/src/tests/state_test.rs index b307da6d34ee0..61013cfadf7ee 100644 --- a/api/src/tests/state_test.rs +++ b/api/src/tests/state_test.rs @@ -23,7 +23,7 @@ async fn test_get_account_resource() { #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_get_account_resource_by_invalid_address() { let mut context = new_test_context(current_function_name!()); - let invalid_addresses = vec!["1", "0xzz", "01"]; + let invalid_addresses = vec!["00x1", "0xzz"]; for invalid_address in &invalid_addresses { let resp = context .expect_status_code(400) @@ -107,7 +107,7 @@ async fn test_get_account_module_by_invalid_address() { let mut context = new_test_context(current_function_name!()); let resp = context .expect_status_code(400) - .get(&get_account_module("1", "guid")) + .get(&get_account_module("xyz", "guid")) .await; context.check_golden_output(resp); } diff --git a/api/test-context/src/test_context.rs b/api/test-context/src/test_context.rs index 79aa182543cc6..751ddab19f827 100644 --- a/api/test-context/src/test_context.rs +++ b/api/test-context/src/test_context.rs @@ -51,7 +51,7 @@ use std::{boxed::Box, iter::once, net::SocketAddr, path::PathBuf, sync::Arc, tim use warp::{http::header::CONTENT_TYPE, Filter, Rejection, Reply}; use warp_reverse_proxy::reverse_proxy_filter; -const TRANSFER_AMOUNT: u64 = 10_000_000; +const TRANSFER_AMOUNT: u64 = 200_000_000; #[derive(Clone, Debug)] pub enum ApiSpecificConfig { @@ -768,7 +768,7 @@ impl TestContext { let mut request = json!({ "sender": account.address(), "sequence_number": account.sequence_number().to_string(), - "gas_unit_price": "0", + "gas_unit_price": "100", "max_gas_amount": "1000000", "expiration_timestamp_secs": "16373698888888", "payload": payload, diff --git a/api/types/Cargo.toml b/api/types/Cargo.toml index 4bcb52735c4a4..7154f0cee7f58 100644 --- a/api/types/Cargo.toml +++ b/api/types/Cargo.toml @@ -29,6 +29,7 @@ indoc = { workspace = true } move-binary-format = { workspace = true } move-core-types = { workspace = true } move-resource-viewer = { workspace = true } +once_cell = { workspace = true } poem = { workspace = true } poem-openapi = { workspace = true } poem-openapi-derive = { workspace = true } diff --git a/api/types/src/address.rs b/api/types/src/address.rs index 2b436045377ef..7e3756717f17a 100644 --- a/api/types/src/address.rs +++ b/api/types/src/address.rs @@ 
-49,12 +49,8 @@ impl FromStr for Address { type Err = anyhow::Error; fn from_str(s: &str) -> anyhow::Result { - let mut ret = AccountAddress::from_hex_literal(s); - if ret.is_err() { - ret = AccountAddress::from_hex(s) - } - Ok(Self(ret.map_err(|_| { - anyhow::format_err!("invalid account address {:?}", s) + Ok(Self(AccountAddress::from_str(s).map_err(|e| { + anyhow::format_err!("Invalid account address: {:#}", e) })?)) } } @@ -112,12 +108,15 @@ mod tests { assert_eq!(address.parse::
().unwrap().to_string(), "0x1"); } - let invalid_addresses = vec!["invalid", "00x1", "x1", "01", "1"]; + let invalid_addresses = vec!["invalid", "00x1", "x1"]; for address in invalid_addresses { - assert_eq!( - format!("invalid account address {:?}", address), - address.parse::
().unwrap_err().to_string() - ); + assert!(address + .parse::
() + .unwrap_err() + .to_string() + .starts_with( + "Invalid account address: Hex characters are invalid: Invalid character", + )); } } diff --git a/api/types/src/transaction.rs b/api/types/src/transaction.rs index 0fb461406e14b..c5eb163d405c5 100755 --- a/api/types/src/transaction.rs +++ b/api/types/src/transaction.rs @@ -21,6 +21,7 @@ use aptos_types::{ Script, SignedTransaction, TransactionOutput, TransactionWithProof, }, }; +use once_cell::sync::Lazy; use poem_openapi::{Object, Union}; use serde::{Deserialize, Serialize}; use std::{ @@ -31,6 +32,12 @@ use std::{ time::{SystemTime, UNIX_EPOCH}, }; +static DUMMY_GUID: Lazy = Lazy::new(|| EventGuid { + creation_number: U64::from(0u64), + account_address: Address::from(AccountAddress::ZERO), +}); +static DUMMY_SEQUENCE_NUMBER: Lazy = Lazy::new(|| U64::from(0)); + // Warning: Do not add a docstring to a field that uses a type in `derives.rs`, // it will result in a change to the type representation. Read more about this // issue here: https://github.com/poem-web/poem/issues/385. @@ -529,10 +536,16 @@ pub struct Event { impl From<(&ContractEvent, serde_json::Value)> for Event { fn from((event, data): (&ContractEvent, serde_json::Value)) -> Self { match event { - ContractEvent::V0(v0) => Self { - guid: (*v0.key()).into(), - sequence_number: v0.sequence_number().into(), - typ: v0.type_tag().clone().into(), + ContractEvent::V1(v1) => Self { + guid: (*v1.key()).into(), + sequence_number: v1.sequence_number().into(), + typ: v1.type_tag().clone().into(), + data, + }, + ContractEvent::V2(v2) => Self { + guid: *DUMMY_GUID, + sequence_number: *DUMMY_SEQUENCE_NUMBER, + typ: v2.type_tag().clone().into(), data, }, } @@ -557,11 +570,18 @@ pub struct VersionedEvent { impl From<(&EventWithVersion, serde_json::Value)> for VersionedEvent { fn from((event, data): (&EventWithVersion, serde_json::Value)) -> Self { match &event.event { - ContractEvent::V0(v0) => Self { + ContractEvent::V1(v1) => Self { + version: event.transaction_version.into(), + guid: (*v1.key()).into(), + sequence_number: v1.sequence_number().into(), + typ: v1.type_tag().clone().into(), + data, + }, + ContractEvent::V2(v2) => Self { version: event.transaction_version.into(), - guid: (*v0.key()).into(), - sequence_number: v0.sequence_number().into(), - typ: v0.type_tag().clone().into(), + guid: *DUMMY_GUID, + sequence_number: *DUMMY_SEQUENCE_NUMBER, + typ: v2.type_tag().clone().into(), data, }, } diff --git a/aptos-move/aptos-aggregator/src/delta_change_set.rs b/aptos-move/aptos-aggregator/src/delta_change_set.rs index b8ea475926c3b..d752d15b3aa06 100644 --- a/aptos-move/aptos-aggregator/src/delta_change_set.rs +++ b/aptos-move/aptos-aggregator/src/delta_change_set.rs @@ -187,13 +187,12 @@ impl DeltaOp { ) -> anyhow::Result { // In case storage fails to fetch the value, return immediately. let maybe_value = state_view - .get_state_value_bytes(state_key) + .get_state_value_u128(state_key) .map_err(|e| VMStatus::error(StatusCode::STORAGE_ERROR, Some(e.to_string())))?; // Otherwise we have to apply delta to the storage value. 
match maybe_value { - Some(bytes) => { - let base = deserialize(&bytes); + Some(base) => { self.apply_to(base) .map_err(|partial_error| { // If delta application fails, transform partial VM @@ -567,10 +566,6 @@ mod test { ))) } - fn is_genesis(&self) -> bool { - unreachable!() - } - fn get_usage(&self) -> anyhow::Result { unreachable!() } diff --git a/aptos-move/aptos-debugger/src/lib.rs b/aptos-move/aptos-debugger/src/lib.rs index 64351304e9815..2e91026a82af7 100644 --- a/aptos-move/aptos-debugger/src/lib.rs +++ b/aptos-move/aptos-debugger/src/lib.rs @@ -252,5 +252,5 @@ fn is_reconfiguration(vm_output: &TransactionOutput) -> bool { vm_output .events() .iter() - .any(|event| *event.key() == new_epoch_event_key) + .any(|event| event.event_key() == Some(&new_epoch_event_key)) } diff --git a/aptos-move/aptos-gas-calibration/README.md b/aptos-move/aptos-gas-calibration/README.md index 45753ca9168fb..e1d34e932bd50 100644 --- a/aptos-move/aptos-gas-calibration/README.md +++ b/aptos-move/aptos-gas-calibration/README.md @@ -39,14 +39,14 @@ cargo run -p aptos -- move init --name MY-PROJECT Calibration Functions need to be marked with `entry` and have a prefix of `calibrate_`. For example, the following functions would work: ```Move -//// acceptable formats +//// VALID public entry fun calibrate() {} public entry fun calibrate_another_txn() {} public entry fun calibrate123() {} -//// inacceptable formats +//// INVALID public fun calibrate() {} public fun test_my_txn() {} @@ -56,23 +56,74 @@ public entry fun calibrate_addition(_x: u64, _y: u64) {} If the Calibration Function is expected to error, please denote it with the postfix `_should_error`. +``` +//// VALID +public entry fun calibrate_my_test_should_error() {} + +public entry fun calibrate_should_error() {} + +//// INVALID +public entry fun should_error_calibrate() {} + +public entry fun calibrate_test_error() {} + +public entry fun calibrate_test_should_error_() {} +``` +Note: These are still valid Calibration Functions that will still run, but would not error. + ## Usage ```bash -cargo run -- --help +cargo run --release -- --help Automated Gas Calibration to calibrate Move bytecode and Native Functions Usage: aptos-gas-calibration [OPTIONS] Options: - -p, --pattern Specific tests to run that match a pattern [default: ] - -i, --iterations Number of iterations to run each Calibration Function [default: 20] - -h, --help Print help + -p, --pattern Specific tests to run that match a pattern [default: ""] + -i, --iterations Number of iterations to run each Calibration Function [default: 20] + -m, --max_execution_time Maximum execution time in milliseconds [default: 300] + -h, --help Print help ``` ## Examples -There are examples of how to write Calibration Functions under `/samples_ir` and `/samples`. There will be more examples in the future as more Users write Move Samples and add it to the calibration set. +There are examples of how to write Calibration Functions under `/samples_ir` and `/samples`. There will be more examples in the future as more Users write Move Samples and add it to the calibration set. 
+
+Here is an example written in the Move source language:
+```Move
+public fun calibrate_blake2b_256_impl(num_iterations: u64) {
+    let i = 0;
+    let msg = b"abcdefghijkl";
+    while (i < num_iterations) {
+        // This is what I want to calibrate:
+        aptos_hash::blake2b_256(msg);
+        aptos_hash::blake2b_256(msg);
+        aptos_hash::blake2b_256(msg);
+        aptos_hash::blake2b_256(msg);
+        aptos_hash::blake2b_256(msg);
+        aptos_hash::blake2b_256(msg);
+        aptos_hash::blake2b_256(msg);
+        aptos_hash::blake2b_256(msg);
+        aptos_hash::blake2b_256(msg);
+        aptos_hash::blake2b_256(msg);
+        i = i + 1;
+    };
+}
+
+public entry fun calibrate_blake2b_256_x500() {
+    calibrate_blake2b_256_impl(50);
+}
+
+public entry fun calibrate_blake2b_256_x1000() {
+    calibrate_blake2b_256_impl(100);
+}
+
+public entry fun calibrate_blake2b_256_x5000() {
+    calibrate_blake2b_256_impl(500);
+}
+```
+As you can see in this example, we want the code being calibrated to run for long enough and to be called enough times, which is why the Native Function is invoked 10 times per loop iteration. Furthermore, we want to sample several different call counts (discussed in more detail below), which is why we have data points at 500, 1000, and 5000 calls. This lets the system record an accurate gas usage for the instructions being called while finding the best line of fit.
 ## FAQ
@@ -90,4 +141,18 @@ For every Calibration Function, the Abstract Gas Usage and running time are dete
 If the matrix that represents this system is not invertible, then we report the undetermined gas parameters, or the linearly dependent combinations of gas parameters. The exact math can be found under `/src/math.rs`.
-Otherwise, the User can expect to see all the values, expressed as Gas Usage per Microsecond, along with the running times and any outliers.
\ No newline at end of file
+Otherwise, the User can expect to see all the values, along with the running times and any outliers.
+
+### I see "linearly dependent variables" instead of the gas costs, what do I do?
+
+If you happen to see something like:
+
+```
+linearly dependent variables are:
+
+- gas parameter: HASH_BLAKE2B_256_BASE
+- gas parameter: HASH_BLAKE2B_256_PER_BYTE
+```
+
+There are a few reasons why this can happen. The first is that you may have too few Calibration Functions for the gas parameters you are trying to calculate; in this example, there should be at least two Calibration Functions, since there are two gas parameters. Another is that too many of the Calibration Functions are linearly dependent, so try writing them with different input sizes and varying numbers of iterations.
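For instance, pairing the `calibrate_blake2b_256_*` functions above with variants that hash a longer message gives the solver independent equations for `HASH_BLAKE2B_256_BASE` and `HASH_BLAKE2B_256_PER_BYTE`. The sketch below is illustrative only: the named address, module name, byte strings, and call counts are placeholders, not part of the existing calibration set.

```Move
// Illustrative sketch only: `calibration` is a placeholder named address.
module calibration::blake2b_inputs {
    use aptos_std::aptos_hash;

    // Short input: the fitted running time is dominated by the per-call base cost.
    public entry fun calibrate_blake2b_256_short_x1000() {
        let i = 0;
        while (i < 1000) {
            aptos_hash::blake2b_256(b"ab");
            i = i + 1;
        };
    }

    // Longer input, same call count: the extra time relative to the short variant
    // isolates the per-byte cost, so the two parameters are no longer dependent.
    public entry fun calibrate_blake2b_256_long_x1000() {
        let i = 0;
        while (i < 1000) {
            aptos_hash::blake2b_256(b"abcdefghijklmnopqrstuvwxyz0123456789");
            i = i + 1;
        };
    }
}
```

Because the two entry functions differ only in the number of bytes hashed, their measured times differ only through the per-byte term, which keeps the least-squares system full rank.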
+ diff --git a/aptos-move/aptos-gas-calibration/src/main.rs b/aptos-move/aptos-gas-calibration/src/main.rs index de1db4800cbc0..bef5bb950d553 100644 --- a/aptos-move/aptos-gas-calibration/src/main.rs +++ b/aptos-move/aptos-gas-calibration/src/main.rs @@ -17,13 +17,17 @@ use std::collections::BTreeMap; /// Automated Gas Calibration to calibrate Move bytecode and Native Functions #[derive(Parser, Debug)] struct Args { - /// Specific tests to run that match a pattern + /// Specific Calibration Function tests to run that match a given pattern #[clap(short, long, default_value = "")] pattern: String, /// Number of iterations to run each Calibration Function #[clap(short, long, default_value_t = 20)] iterations: u64, + + /// Maximum execution time in milliseconds + #[clap(short, long, default_value_t = 300)] + max_execution_time: u64, } fn main() { @@ -31,6 +35,7 @@ fn main() { let args = Args::parse(); let pattern = &args.pattern; let iterations = args.iterations; + let max_execution_time = args.max_execution_time; println!( "Running each Calibration Function for {} iterations\n", @@ -76,5 +81,6 @@ fn main() { &mut coeff_matrix, &mut const_matrix, measurements.equation_names, + max_execution_time, ); } diff --git a/aptos-move/aptos-gas-calibration/src/solve.rs b/aptos-move/aptos-gas-calibration/src/solve.rs index eed37720c28c8..0cdc85ff63ab3 100644 --- a/aptos-move/aptos-gas-calibration/src/solve.rs +++ b/aptos-move/aptos-gas-calibration/src/solve.rs @@ -9,9 +9,12 @@ use crate::{ }, math_interface::generic_map, }; +use aptos_gas_schedule::{InitialGasSchedule, TransactionGasParameters}; use nalgebra::DMatrix; use std::collections::BTreeMap; +const MILLISECONDS_TO_MICROSECONDS: u64 = 1000; + /// wrapper function to build a coefficient matrix /// /// ### Arguments @@ -49,11 +52,13 @@ pub fn build_constant_matrix(input: Vec, nrows: usize, ncols: usize) -> DM /// * `input` - Collection of like-terms /// * `coeff_matrix` - Coefficient Matrix /// * `const_matrix` - Constant Matrix +/// * `max_execution_time` - Configurable flag for max execution time of txn pub fn least_squares( input: Vec>, coeff_matrix: &mut DMatrix, const_matrix: &mut DMatrix, equation_names: Vec, + max_execution_time: u64, ) { let lss = compute_least_square_solutions(coeff_matrix, const_matrix); if let Ok(answer) = lss { @@ -62,15 +67,6 @@ pub fn least_squares( let map = generic_map(input.clone()); let keys: Vec = map.keys().map(|key| key.to_string()).collect(); - let nrows = x_hat.nrows(); - let ncols = x_hat.ncols(); - println!("where the gas parameter values are (microsecond per instruction):\n"); - for i in 0..nrows { - for j in 0..ncols { - println!("{} {}", x_hat[(i, j)], keys[i]); - } - } - // TODO: error handling with division zero that bubbles up let computed_time_and_outliers = get_computed_time_and_outliers(&mut x_hat, coeff_matrix, const_matrix) @@ -79,6 +75,8 @@ pub fn least_squares( report_computed_times(&equation_names, &computed_time_and_outliers); report_outliers(&equation_names, &computed_time_and_outliers); + + convert_to_internal_gas_cost(&mut x_hat, max_execution_time, keys); } else { report_undetermined_gas_params(input, coeff_matrix, const_matrix); } @@ -168,3 +166,34 @@ fn report_undetermined_gas_params( }, } } + +/// convert gas usage per instruction to gas cost (InternalGas) +/// +/// ### Arguments +/// +/// * `x_hat` - Least Squares Solution +/// * `max_execution_time` - Configurable flag for max execution time of txn +/// * `gas_params` - A vector representing all gas parameter names in the system +fn 
convert_to_internal_gas_cost( + x_hat: &mut DMatrix, + max_execution_time: u64, + gas_params: Vec, +) { + let max_execution_gas = u64::from(TransactionGasParameters::initial().max_execution_gas); + let one_microsec_per_internal_gas = + (max_execution_gas / max_execution_time) / MILLISECONDS_TO_MICROSECONDS; + + println!( + "\ninternal gas cost ({} InternalGas per 1µ):\n", + one_microsec_per_internal_gas + ); + + let nrows = x_hat.nrows(); + let ncols = x_hat.ncols(); + for i in 0..nrows { + for j in 0..ncols { + let internal_gas_cost = x_hat[(i, j)] * one_microsec_per_internal_gas as f64; + println!("{} = {}", gas_params[i], internal_gas_cost); + } + } +} diff --git a/aptos-move/aptos-gas-meter/src/meter.rs b/aptos-move/aptos-gas-meter/src/meter.rs index b72d2b1640651..c6c94c946b108 100644 --- a/aptos-move/aptos-gas-meter/src/meter.rs +++ b/aptos-move/aptos-gas-meter/src/meter.rs @@ -495,8 +495,12 @@ where .map_err(|e| e.finish(Location::Undefined)) } - fn storage_fee_per_write(&self, key: &StateKey, op: &WriteOp) -> Fee { - self.vm_gas_params().txn.storage_fee_per_write(key, op) + fn storage_fee_for_state_slot(&self, op: &WriteOp) -> Fee { + self.vm_gas_params().txn.storage_fee_for_slot(op) + } + + fn storage_fee_for_state_bytes(&self, key: &StateKey, op: &WriteOp) -> Fee { + self.vm_gas_params().txn.storage_fee_for_bytes(key, op) } fn storage_fee_per_event(&self, event: &ContractEvent) -> Fee { diff --git a/aptos-move/aptos-gas-meter/src/traits.rs b/aptos-move/aptos-gas-meter/src/traits.rs index 280365138eeda..1fffc94de454c 100644 --- a/aptos-move/aptos-gas-meter/src/traits.rs +++ b/aptos-move/aptos-gas-meter/src/traits.rs @@ -6,7 +6,7 @@ use aptos_gas_schedule::VMGasParameters; use aptos_types::{ contract_event::ContractEvent, state_store::state_key::StateKey, write_set::WriteOp, }; -use aptos_vm_types::storage::StorageGasParameters; +use aptos_vm_types::{change_set::VMChangeSet, storage::StorageGasParameters}; use move_binary_format::errors::{Location, PartialVMResult, VMResult}; use move_core_types::gas_algebra::{InternalGas, InternalGasUnit, NumBytes}; use move_vm_types::gas::GasMeter as MoveGasMeter; @@ -103,8 +103,11 @@ pub trait AptosGasMeter: MoveGasMeter { /// storage costs. fn charge_io_gas_for_write(&mut self, key: &StateKey, op: &WriteOp) -> VMResult<()>; - /// Calculates the storage fee for a write operation. - fn storage_fee_per_write(&self, key: &StateKey, op: &WriteOp) -> Fee; + /// Calculates the storage fee for a state slot allocation. + fn storage_fee_for_state_slot(&self, op: &WriteOp) -> Fee; + + /// Calculates the storage fee for state bytes. + fn storage_fee_for_state_bytes(&self, key: &StateKey, op: &WriteOp) -> Fee; /// Calculates the storage fee for an event. fn storage_fee_per_event(&self, event: &ContractEvent) -> Fee; @@ -116,16 +119,16 @@ pub trait AptosGasMeter: MoveGasMeter { fn storage_fee_for_transaction_storage(&self, txn_size: NumBytes) -> Fee; /// Charges the storage fees for writes, events & txn storage in a lump sum, minimizing the - /// loss of precision. + /// loss of precision. Refundable portion of the charge is recorded on the WriteOp itself, + /// due to which mutable references are required on the parameter list wherever proper. /// /// The contract requires that this function behaves in a way that is consistent to /// the ones defining the costs. /// Due to this reason, you should normally not override the default implementation, /// unless you are doing something special, such as injecting additional logging logic. 
- fn charge_storage_fee_for_all<'a>( + fn process_storage_fee_for_all( &mut self, - write_ops: impl IntoIterator, - events: impl IntoIterator, + change_set: &mut VMChangeSet, txn_size: NumBytes, gas_unit_price: FeePerGasUnit, ) -> VMResult<()> { @@ -142,10 +145,16 @@ pub trait AptosGasMeter: MoveGasMeter { } // Calculate the storage fees. - let write_fee = write_ops.into_iter().fold(Fee::new(0), |acc, (key, op)| { - acc + self.storage_fee_per_write(key, op) - }); - let event_fee = events.into_iter().fold(Fee::new(0), |acc, event| { + let write_fee = change_set + .write_set_iter_mut() + .fold(Fee::new(0), |acc, (key, op)| { + let slot_fee = self.storage_fee_for_state_slot(op); + let bytes_fee = self.storage_fee_for_state_bytes(key, op); + Self::maybe_record_storage_deposit(op, slot_fee); + + acc + slot_fee + bytes_fee + }); + let event_fee = change_set.events().iter().fold(Fee::new(0), |acc, event| { acc + self.storage_fee_per_event(event) }); let event_discount = self.storage_discount_for_events(event_fee); @@ -161,6 +170,28 @@ pub trait AptosGasMeter: MoveGasMeter { Ok(()) } + // The slot fee is refundable, we record it on the WriteOp itself and it'll end up in + // the state DB. + fn maybe_record_storage_deposit(write_op: &mut WriteOp, slot_fee: Fee) { + use WriteOp::*; + + match write_op { + CreationWithMetadata { + ref mut metadata, + data: _, + } => { + if !slot_fee.is_zero() { + metadata.set_deposit(slot_fee.into()) + } + }, + Creation(..) + | Modification(..) + | Deletion + | ModificationWithMetadata { .. } + | DeletionWithMetadata { .. } => {}, + } + } + // Below are getters reexported from the gas algebra. // Gas meter instances should not reimplement these themselves. diff --git a/aptos-move/aptos-gas-profiling/Cargo.toml b/aptos-move/aptos-gas-profiling/Cargo.toml index d8230d9e2a590..f4ad7a668e274 100644 --- a/aptos-move/aptos-gas-profiling/Cargo.toml +++ b/aptos-move/aptos-gas-profiling/Cargo.toml @@ -22,6 +22,7 @@ aptos-gas-algebra = { workspace = true } aptos-gas-meter = { workspace = true } aptos-package-builder = { workspace = true } aptos-types = { workspace = true } +aptos-vm-types = { workspace = true } move-binary-format = { workspace = true } move-core-types = { workspace = true } diff --git a/aptos-move/aptos-gas-profiling/src/profiler.rs b/aptos-move/aptos-gas-profiling/src/profiler.rs index ecf2ba83030ca..9a7f97a920849 100644 --- a/aptos-move/aptos-gas-profiling/src/profiler.rs +++ b/aptos-move/aptos-gas-profiling/src/profiler.rs @@ -10,6 +10,7 @@ use aptos_gas_meter::AptosGasMeter; use aptos_types::{ contract_event::ContractEvent, state_store::state_key::StateKey, write_set::WriteOp, }; +use aptos_vm_types::change_set::VMChangeSet; use move_binary_format::{ errors::{Location, PartialVMResult, VMResult}, file_format::CodeOffset, @@ -479,7 +480,9 @@ where delegate! 
{ fn algebra(&self) -> &Self::Algebra; - fn storage_fee_per_write(&self, key: &StateKey, op: &WriteOp) -> Fee; + fn storage_fee_for_state_slot(&self, op: &WriteOp) -> Fee; + + fn storage_fee_for_state_bytes(&self, key: &StateKey, op: &WriteOp) -> Fee; fn storage_fee_per_event(&self, event: &ContractEvent) -> Fee; @@ -511,10 +514,9 @@ where res } - fn charge_storage_fee_for_all<'a>( + fn process_storage_fee_for_all( &mut self, - write_ops: impl IntoIterator, - events: impl IntoIterator, + change_set: &mut VMChangeSet, txn_size: NumBytes, gas_unit_price: FeePerGasUnit, ) -> VMResult<()> { @@ -533,8 +535,12 @@ where // Writes let mut write_fee = Fee::new(0); let mut write_set_storage = vec![]; - for (key, op) in write_ops.into_iter() { - let fee = self.storage_fee_per_write(key, op); + for (key, op) in change_set.write_set_iter_mut() { + let slot_fee = self.storage_fee_for_state_slot(op); + let bytes_fee = self.storage_fee_for_state_bytes(key, op); + Self::maybe_record_storage_deposit(op, slot_fee); + + let fee = slot_fee + bytes_fee; write_set_storage.push(WriteStorage { key: key.clone(), op_type: write_op_type(op), @@ -546,7 +552,7 @@ where // Events let mut event_fee = Fee::new(0); let mut event_fees = vec![]; - for event in events { + for event in change_set.events().iter() { let fee = self.storage_fee_per_event(event); event_fees.push(EventStorage { ty: event.type_tag().clone(), diff --git a/aptos-move/aptos-gas-schedule/src/gas_schedule/transaction.rs b/aptos-move/aptos-gas-schedule/src/gas_schedule/transaction.rs index 95b0be6b56794..aa6e53f0cfe38 100644 --- a/aptos-move/aptos-gas-schedule/src/gas_schedule/transaction.rs +++ b/aptos-move/aptos-gas-schedule/src/gas_schedule/transaction.rs @@ -189,27 +189,31 @@ impl TransactionGasParameters { } } - /// New formula to charge storage fee for a write, measured in APT. - pub fn storage_fee_per_write(&self, key: &StateKey, op: &WriteOp) -> Fee { + pub fn storage_fee_for_slot(&self, op: &WriteOp) -> Fee { use WriteOp::*; - let excess_fee = |key: &StateKey, data: &[u8]| -> Fee { - let size = NumBytes::new(key.size() as u64) + NumBytes::new(data.len() as u64); - match size.checked_sub(self.free_write_bytes_quota) { - Some(excess) => excess * self.storage_fee_per_excess_state_byte, - None => 0.into(), - } - }; - match op { - Creation(data) | CreationWithMetadata { data, .. } => { - self.storage_fee_per_state_slot_create * NumSlots::new(1) + excess_fee(key, data) + Creation(..) | CreationWithMetadata { .. } => { + self.storage_fee_per_state_slot_create * NumSlots::new(1) }, - Modification(data) | ModificationWithMetadata { data, .. } => excess_fee(key, data), - Deletion | DeletionWithMetadata { .. } => 0.into(), + Modification(..) + | ModificationWithMetadata { .. } + | Deletion + | DeletionWithMetadata { .. } => 0.into(), } } + pub fn storage_fee_for_bytes(&self, key: &StateKey, op: &WriteOp) -> Fee { + if let Some(data) = op.bytes() { + let size = NumBytes::new(key.size() as u64) + NumBytes::new(data.len() as u64); + if let Some(excess) = size.checked_sub(self.free_write_bytes_quota) { + return excess * self.storage_fee_per_excess_state_byte; + } + } + + 0.into() + } + /// New formula to charge storage fee for an event, measured in APT. 
pub fn storage_fee_per_event(&self, event: &ContractEvent) -> Fee { NumBytes::new(event.size() as u64) * self.storage_fee_per_event_byte diff --git a/aptos-move/aptos-memory-usage-tracker/src/lib.rs b/aptos-move/aptos-memory-usage-tracker/src/lib.rs index cd0ba35fa2302..4e690f7fa6347 100644 --- a/aptos-move/aptos-memory-usage-tracker/src/lib.rs +++ b/aptos-move/aptos-memory-usage-tracker/src/lib.rs @@ -463,7 +463,9 @@ where delegate! { fn algebra(&self) -> &Self::Algebra; - fn storage_fee_per_write(&self, key: &StateKey, op: &WriteOp) -> Fee; + fn storage_fee_for_state_slot(&self, op: &WriteOp) -> Fee; + + fn storage_fee_for_state_bytes(&self, key: &StateKey, op: &WriteOp) -> Fee; fn storage_fee_per_event(&self, event: &ContractEvent) -> Fee; diff --git a/aptos-move/aptos-release-builder/data/release.yaml b/aptos-move/aptos-release-builder/data/release.yaml index 0c2f7f88da18c..d086df8e6528d 100644 --- a/aptos-move/aptos-release-builder/data/release.yaml +++ b/aptos-move/aptos-release-builder/data/release.yaml @@ -1,28 +1,17 @@ --- remote_endpoint: ~ -name: "v1.6" +name: "v1.7" proposals: - name: upgrade_framework metadata: - title: "Multi-step proposal to upgrade mainnet framework to v1.6" - description: "This includes changes outlined in https://github.com/aptos-labs/aptos-core/releases/aptos-node-v1.6" + title: "Multi-step proposal to upgrade mainnet framework to v1.7" + description: "This includes changes outlined in https://github.com/aptos-labs/aptos-core/releases/aptos-node-v1.7" execution_mode: MultiStep update_sequence: - DefaultGas - Framework: bytecode_version: 6 git_hash: ~ - - name: enable_fee_payer - metadata: - title: "Enable fee payer" - description: "AIP-39: Support secondary fee payer to pay gas cost on behalf of the sender." - source_code_url: "https://github.com/aptos-labs/aptos-core/pull/8904" - discussion_url: "https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-39.md" - execution_mode: MultiStep - update_sequence: - - FeatureFlag: - enabled: - - gas_payer_enabled - name: enable_block_gas_limit metadata: title: "Enable Block Gas Limit" @@ -35,37 +24,3 @@ proposals: transaction_shuffler_type: deprecated_sender_aware_v1: 32 block_gas_limit : 35000 - - name: enable_aptos_unique_identifiers - metadata: - title: "Enable Aptos Unique Identifiers" - description: "AIP-36: Support for aptos unique identifiers (generated using native function generate_unique_address)" - discussion_url: "https://github.com/aptos-foundation/AIPs/issues/154" - execution_mode: MultiStep - update_sequence: - - FeatureFlag: - enabled: - - aptos_unique_identifiers - - name: enable_partial_voting - metadata: - title: "Enable partial voting" - description: "AIP-28: Partial voting for on chain governance." 
- source_code_url: "https://github.com/aptos-labs/aptos-core/pull/8090" - discussion_url: "https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-28.md" - execution_mode: MultiStep - update_sequence: - - RawScript: aptos-move/aptos-release-builder/data/proposals/aip_28_initialization.move - - FeatureFlag: - enabled: - - partial_governance_voting - - delegation_pool_partial_governance_voting - - name: bulletproofs_natives - metadata: - title: "Bulletproofs natives" - description: "AIP-xx: TBD" - source_code_url: "https://github.com/aptos-labs/aptos-core/pull/3444" - discussion_url: "https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-xx.md" - execution_mode: MultiStep - update_sequence: - - FeatureFlag: - enabled: - - aptos_unique_identifiers \ No newline at end of file diff --git a/aptos-move/aptos-release-builder/src/components/feature_flags.rs b/aptos-move/aptos-release-builder/src/components/feature_flags.rs index 60fcb66a0c0d0..7c5b16caa43bf 100644 --- a/aptos-move/aptos-release-builder/src/components/feature_flags.rs +++ b/aptos-move/aptos-release-builder/src/components/feature_flags.rs @@ -45,6 +45,8 @@ pub enum FeatureFlag { GasPayerEnabled, AptosUniqueIdentifiers, BulletproofsNatives, + SignerNativeFormatFix, + ModuleEvent, } fn generate_features_blob(writer: &CodeWriter, data: &[u64]) { @@ -168,6 +170,8 @@ impl From<FeatureFlag> for AptosFeatureFlag { FeatureFlag::GasPayerEnabled => AptosFeatureFlag::GAS_PAYER_ENABLED, FeatureFlag::AptosUniqueIdentifiers => AptosFeatureFlag::APTOS_UNIQUE_IDENTIFIERS, FeatureFlag::BulletproofsNatives => AptosFeatureFlag::BULLETPROOFS_NATIVES, + FeatureFlag::SignerNativeFormatFix => AptosFeatureFlag::SIGNER_NATIVE_FORMAT_FIX, + FeatureFlag::ModuleEvent => AptosFeatureFlag::MODULE_EVENT, } } } @@ -214,6 +218,8 @@ impl From<AptosFeatureFlag> for FeatureFlag { AptosFeatureFlag::GAS_PAYER_ENABLED => FeatureFlag::GasPayerEnabled, AptosFeatureFlag::APTOS_UNIQUE_IDENTIFIERS => FeatureFlag::AptosUniqueIdentifiers, AptosFeatureFlag::BULLETPROOFS_NATIVES => FeatureFlag::BulletproofsNatives, + AptosFeatureFlag::SIGNER_NATIVE_FORMAT_FIX => FeatureFlag::SignerNativeFormatFix, + AptosFeatureFlag::MODULE_EVENT => FeatureFlag::ModuleEvent, } } } diff --git a/aptos-move/aptos-transactional-test-harness/src/aptos_test_harness.rs b/aptos-move/aptos-transactional-test-harness/src/aptos_test_harness.rs index 17582020a415f..87715d43dfe1a 100644 --- a/aptos-move/aptos-transactional-test-harness/src/aptos_test_harness.rs +++ b/aptos-move/aptos-transactional-test-harness/src/aptos_test_harness.rs @@ -51,7 +51,7 @@ use move_transactional_test_runner::{ use move_vm_runtime::session::SerializedReturnValues; use once_cell::sync::Lazy; use std::{ - collections::{BTreeMap, HashMap}, + collections::{BTreeMap, BTreeSet, HashMap}, convert::TryFrom, fmt, path::Path, @@ -294,6 +294,7 @@ static PRECOMPILED_APTOS_FRAMEWORK: Lazy<FullyCompiledProgram> = Lazy::new(|| { deps, None, move_compiler::Flags::empty().set_sources_shadow_deps(false), + aptos_framework::extended_checks::get_all_attribute_names(), ) .unwrap(); match program_res { @@ -551,6 +552,10 @@ impl<'a> MoveTestAdapter<'a> for AptosTestAdapter<'a> { self.default_syntax } + fn known_attributes(&self) -> &BTreeSet<String> { + aptos_framework::extended_checks::get_all_attribute_names() + } + fn init( default_syntax: SyntaxChoice, _comparison_mode: bool, @@ -929,8 +934,13 @@ struct PrettyEvent<'a>(&'a ContractEvent); impl<'a> fmt::Display for PrettyEvent<'a> { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { writeln!(f, "{{")?; -
writeln!(f, " key: {}", self.0.key())?; - writeln!(f, " seq_num: {}", self.0.sequence_number())?; + match self.0 { + ContractEvent::V1(v1) => { + writeln!(f, " key: {}", v1.key())?; + writeln!(f, " seq_num: {}", v1.sequence_number())?; + }, + ContractEvent::V2(_v2) => (), + } writeln!(f, " type: {}", self.0.type_tag())?; writeln!(f, " data: {:?}", hex::encode(self.0.event_data()))?; write!(f, "}}") diff --git a/aptos-move/aptos-validator-interface/src/lib.rs b/aptos-move/aptos-validator-interface/src/lib.rs index df7c056973eeb..aa2f9ecd65825 100644 --- a/aptos-move/aptos-validator-interface/src/lib.rs +++ b/aptos-move/aptos-validator-interface/src/lib.rs @@ -192,10 +192,6 @@ impl TStateView for DebuggerStateView { self.get_state_value_internal(state_key, self.version) } - fn is_genesis(&self) -> bool { - false - } - fn get_usage(&self) -> Result { unimplemented!() } diff --git a/aptos-move/aptos-vm-benchmarks/samples/add-numbers/Move.toml b/aptos-move/aptos-vm-benchmarks/samples/add-numbers/Move.toml index add784d359c40..2cf87321920de 100644 --- a/aptos-move/aptos-vm-benchmarks/samples/add-numbers/Move.toml +++ b/aptos-move/aptos-vm-benchmarks/samples/add-numbers/Move.toml @@ -2,6 +2,5 @@ name = 'add-numbers' version = '1.0.0' [dependencies.AptosFramework] -git = 'https://github.com/aptos-labs/aptos-core.git' -rev = 'main' +local = '../../../..' subdir = 'aptos-move/framework/aptos-framework' diff --git a/aptos-move/aptos-vm-types/src/change_set.rs b/aptos-move/aptos-vm-types/src/change_set.rs index f7c1eab09f5c2..dce0a7d183af8 100644 --- a/aptos-move/aptos-vm-types/src/change_set.rs +++ b/aptos-move/aptos-vm-types/src/change_set.rs @@ -13,8 +13,8 @@ use aptos_types::{ use move_binary_format::errors::Location; use move_core_types::vm_status::{err_msg, StatusCode, VMStatus}; use std::collections::{ - btree_map::Entry::{Occupied, Vacant}, - BTreeMap, + hash_map::Entry::{Occupied, Vacant}, + HashMap, }; /// A change set produced by the VM. @@ -23,10 +23,10 @@ use std::collections::{ /// VM. For storage backends, use `ChangeSet`. #[derive(Debug, Clone, Eq, PartialEq)] pub struct VMChangeSet { - resource_write_set: BTreeMap, - module_write_set: BTreeMap, - aggregator_write_set: BTreeMap, - aggregator_delta_set: BTreeMap, + resource_write_set: HashMap, + module_write_set: HashMap, + aggregator_write_set: HashMap, + aggregator_delta_set: HashMap, events: Vec, } @@ -49,19 +49,19 @@ macro_rules! squash_writes_pair { impl VMChangeSet { pub fn empty() -> Self { Self { - resource_write_set: BTreeMap::new(), - module_write_set: BTreeMap::new(), - aggregator_write_set: BTreeMap::new(), - aggregator_delta_set: BTreeMap::new(), + resource_write_set: HashMap::new(), + module_write_set: HashMap::new(), + aggregator_write_set: HashMap::new(), + aggregator_delta_set: HashMap::new(), events: vec![], } } pub fn new( - resource_write_set: BTreeMap, - module_write_set: BTreeMap, - aggregator_write_set: BTreeMap, - aggregator_delta_set: BTreeMap, + resource_write_set: HashMap, + module_write_set: HashMap, + aggregator_write_set: HashMap, + aggregator_delta_set: HashMap, events: Vec, checker: &dyn CheckChangeSet, ) -> anyhow::Result { @@ -92,8 +92,8 @@ impl VMChangeSet { // There should be no aggregator writes if we have a change set from // storage. 
- let mut resource_write_set = BTreeMap::new(); - let mut module_write_set = BTreeMap::new(); + let mut resource_write_set = HashMap::new(); + let mut module_write_set = HashMap::new(); for (state_key, write_op) in write_set { if matches!(state_key.inner(), StateKeyInner::AccessPath(ap) if ap.is_code()) { @@ -109,8 +109,8 @@ impl VMChangeSet { let change_set = Self { resource_write_set, module_write_set, - aggregator_write_set: BTreeMap::new(), - aggregator_delta_set: BTreeMap::new(), + aggregator_write_set: HashMap::new(), + aggregator_delta_set: HashMap::new(), events, }; checker.check_change_set(&change_set)?; @@ -159,11 +159,18 @@ impl VMChangeSet { .chain(self.aggregator_write_set.iter()) } - pub fn resource_write_set(&self) -> &BTreeMap { + pub fn write_set_iter_mut(&mut self) -> impl Iterator { + self.resource_write_set + .iter_mut() + .chain(self.module_write_set.iter_mut()) + .chain(self.aggregator_write_set.iter_mut()) + } + + pub fn resource_write_set(&self) -> &HashMap { &self.resource_write_set } - pub fn module_write_set(&self) -> &BTreeMap { + pub fn module_write_set(&self) -> &HashMap { &self.module_write_set } @@ -176,11 +183,11 @@ impl VMChangeSet { .extend(additional_aggregator_writes) } - pub fn aggregator_write_set(&self) -> &BTreeMap { + pub fn aggregator_v1_write_set(&self) -> &HashMap { &self.aggregator_write_set } - pub fn aggregator_delta_set(&self) -> &BTreeMap { + pub fn aggregator_v1_delta_set(&self) -> &HashMap { &self.aggregator_delta_set } @@ -209,23 +216,23 @@ impl VMChangeSet { aggregator_delta_set .into_iter() .map(into_write) - .collect::, VMStatus>>()?; + .collect::, VMStatus>>()?; aggregator_write_set.extend(materialized_aggregator_delta_set.into_iter()); Ok(Self { resource_write_set, module_write_set, aggregator_write_set, - aggregator_delta_set: BTreeMap::new(), + aggregator_delta_set: HashMap::new(), events, }) } fn squash_additional_aggregator_changes( - aggregator_write_set: &mut BTreeMap, - aggregator_delta_set: &mut BTreeMap, - additional_aggregator_write_set: BTreeMap, - additional_aggregator_delta_set: BTreeMap, + aggregator_write_set: &mut HashMap, + aggregator_delta_set: &mut HashMap, + additional_aggregator_write_set: HashMap, + additional_aggregator_delta_set: HashMap, ) -> anyhow::Result<(), VMStatus> { use WriteOp::*; @@ -305,8 +312,8 @@ impl VMChangeSet { } fn squash_additional_writes( - write_set: &mut BTreeMap, - additional_write_set: BTreeMap, + write_set: &mut HashMap, + additional_write_set: HashMap, ) -> anyhow::Result<(), VMStatus> { for (key, additional_write_op) in additional_write_set.into_iter() { match write_set.entry(key) { diff --git a/aptos-move/aptos-vm-types/src/output.rs b/aptos-move/aptos-vm-types/src/output.rs index fd9a2f45ff307..94b4ca07ff3e4 100644 --- a/aptos-move/aptos-vm-types/src/output.rs +++ b/aptos-move/aptos-vm-types/src/output.rs @@ -74,7 +74,7 @@ impl VMOutput { // First, check if output of transaction should be discarded or delta // change set is empty. In both cases, we do not need to apply any // deltas and can return immediately. - if self.status().is_discarded() || self.change_set().aggregator_delta_set().is_empty() { + if self.status().is_discarded() || self.change_set().aggregator_v1_delta_set().is_empty() { return Ok(self); } @@ -96,7 +96,7 @@ impl VMOutput { debug_assert!( materialized_output .change_set() - .aggregator_delta_set() + .aggregator_v1_delta_set() .is_empty(), "Aggregator deltas must be empty after materialization." 
); @@ -114,12 +114,12 @@ impl VMOutput { // We should have a materialized delta for every delta in the output. assert_eq!( materialized_deltas.len(), - self.change_set().aggregator_delta_set().len() + self.change_set().aggregator_v1_delta_set().len() ); debug_assert!( materialized_deltas .iter() - .all(|(k, _)| self.change_set().aggregator_delta_set().contains_key(k)), + .all(|(k, _)| self.change_set().aggregator_v1_delta_set().contains_key(k)), "Materialized aggregator writes contain a key which does not exist in delta set." ); self.change_set diff --git a/aptos-move/aptos-vm-types/src/storage.rs b/aptos-move/aptos-vm-types/src/storage.rs index cf0bdade7e105..c76aee28b6169 100644 --- a/aptos-move/aptos-vm-types/src/storage.rs +++ b/aptos-move/aptos-vm-types/src/storage.rs @@ -96,10 +96,6 @@ pub struct StoragePricingV2 { } impl StoragePricingV2 { - pub fn zeros() -> Self { - Self::new_without_storage_curves(LATEST_GAS_FEATURE_VERSION, &AptosGasParameters::zeros()) - } - pub fn new_with_storage_curves( feature_version: u64, storage_gas_schedule: &StorageGasSchedule, @@ -117,22 +113,6 @@ impl StoragePricingV2 { } } - pub fn new_without_storage_curves( - feature_version: u64, - gas_params: &AptosGasParameters, - ) -> Self { - Self { - feature_version, - free_write_bytes_quota: Self::get_free_write_bytes_quota(feature_version, gas_params), - per_item_read: gas_params.vm.txn.storage_io_per_state_slot_read, - per_item_create: gas_params.vm.txn.storage_io_per_state_slot_write, - per_item_write: gas_params.vm.txn.storage_io_per_state_slot_write, - per_byte_read: gas_params.vm.txn.storage_io_per_state_byte_read, - per_byte_create: gas_params.vm.txn.storage_io_per_state_byte_write, - per_byte_write: gas_params.vm.txn.storage_io_per_state_byte_write, - } - } - fn get_free_write_bytes_quota( feature_version: u64, gas_params: &AptosGasParameters, @@ -253,10 +233,10 @@ impl StoragePricing { gas_params, )), }, - 10.. => V2(StoragePricingV2::new_without_storage_curves( + 10.. => V3(StoragePricingV3 { feature_version, - gas_params, - )), + free_write_bytes_quota: gas_params.vm.txn.free_write_bytes_quota, + }), } } @@ -424,9 +404,12 @@ impl StorageGasParameters { } } - pub fn free_and_unlimited() -> Self { + pub fn unlimited(free_write_bytes_quota: NumBytes) -> Self { Self { - pricing: StoragePricing::V2(StoragePricingV2::zeros()), + pricing: StoragePricing::V3(StoragePricingV3 { + feature_version: LATEST_GAS_FEATURE_VERSION, + free_write_bytes_quota, + }), change_set_configs: ChangeSetConfigs::unlimited_at_gas_feature_version( LATEST_GAS_FEATURE_VERSION, ), diff --git a/aptos-move/aptos-vm-types/src/tests/test_change_set.rs b/aptos-move/aptos-vm-types/src/tests/test_change_set.rs index 13217c62bcb5e..d0dcc527ea6a7 100644 --- a/aptos-move/aptos-vm-types/src/tests/test_change_set.rs +++ b/aptos-move/aptos-vm-types/src/tests/test_change_set.rs @@ -20,7 +20,7 @@ use move_core_types::{ language_storage::{ModuleId, StructTag}, vm_status::{StatusCode, VMStatus}, }; -use std::collections::BTreeMap; +use std::collections::HashMap; /// Testcases: /// ```text @@ -89,7 +89,7 @@ macro_rules! write_set_2 { macro_rules! 
expected_write_set { ($d:ident) => { - BTreeMap::from([ + HashMap::from([ mock_create(format!("0{}", $d), 0), mock_modify(format!("1{}", $d), 1), mock_delete(format!("2{}", $d)), @@ -164,23 +164,23 @@ fn test_successful_squash() { &expected_write_set!(descriptor) ); - let expected_aggregator_write_set = BTreeMap::from([ + let expected_aggregator_write_set = HashMap::from([ mock_create("18a", 136), mock_modify("19a", 138), mock_modify("22a", 122), mock_delete("23a"), ]); - let expected_aggregator_delta_set = BTreeMap::from([ + let expected_aggregator_delta_set = HashMap::from([ mock_add("15a", 15), mock_add("16a", 116), mock_add("17a", 134), ]); assert_eq!( - change_set.aggregator_write_set(), + change_set.aggregator_v1_write_set(), &expected_aggregator_write_set ); assert_eq!( - change_set.aggregator_delta_set(), + change_set.aggregator_v1_delta_set(), &expected_aggregator_delta_set ); } diff --git a/aptos-move/aptos-vm-types/src/tests/test_output.rs b/aptos-move/aptos-vm-types/src/tests/test_output.rs index ef994b0341d2a..c30b443473e23 100644 --- a/aptos-move/aptos-vm-types/src/tests/test_output.rs +++ b/aptos-move/aptos-vm-types/src/tests/test_output.rs @@ -12,7 +12,7 @@ use aptos_types::{ }; use claims::{assert_err, assert_matches, assert_ok}; use move_core_types::vm_status::{AbortLocation, VMStatus}; -use std::collections::BTreeMap; +use std::collections::{BTreeMap, HashMap}; fn assert_eq_outputs(vm_output: &VMOutput, txn_output: TransactionOutput) { let vm_output_writes = &vm_output @@ -77,8 +77,7 @@ fn test_ok_output_equality_with_deltas() { .clone() .into_transaction_output_with_materialized_deltas(vec![mock_modify("3", 400)]); - let expected_aggregator_write_set = - BTreeMap::from([mock_modify("2", 2), mock_modify("3", 400)]); + let expected_aggregator_write_set = HashMap::from([mock_modify("2", 2), mock_modify("3", 400)]); assert_eq!( materialized_vm_output.change_set().resource_write_set(), vm_output.change_set().resource_write_set() @@ -88,12 +87,14 @@ fn test_ok_output_equality_with_deltas() { vm_output.change_set().module_write_set() ); assert_eq!( - materialized_vm_output.change_set().aggregator_write_set(), + materialized_vm_output + .change_set() + .aggregator_v1_write_set(), &expected_aggregator_write_set ); assert!(materialized_vm_output .change_set() - .aggregator_delta_set() + .aggregator_v1_delta_set() .is_empty()); assert_eq!( vm_output.fee_statement(), diff --git a/aptos-move/aptos-vm-types/src/tests/utils.rs b/aptos-move/aptos-vm-types/src/tests/utils.rs index a6dfd66e960be..cd3c1f8f46de1 100644 --- a/aptos-move/aptos-vm-types/src/tests/utils.rs +++ b/aptos-move/aptos-vm-types/src/tests/utils.rs @@ -10,7 +10,7 @@ use aptos_types::{ write_set::WriteOp, }; use move_core_types::vm_status::VMStatus; -use std::collections::BTreeMap; +use std::collections::HashMap; pub(crate) struct MockChangeSetChecker; @@ -57,10 +57,10 @@ pub(crate) fn build_change_set( aggregator_delta_set: impl IntoIterator, ) -> VMChangeSet { VMChangeSet::new( - BTreeMap::from_iter(resource_write_set), - BTreeMap::from_iter(module_write_set), - BTreeMap::from_iter(aggregator_write_set), - BTreeMap::from_iter(aggregator_delta_set), + HashMap::from_iter(resource_write_set), + HashMap::from_iter(module_write_set), + HashMap::from_iter(aggregator_write_set), + HashMap::from_iter(aggregator_delta_set), vec![], &MockChangeSetChecker, ) diff --git a/aptos-move/aptos-vm/Cargo.toml b/aptos-move/aptos-vm/Cargo.toml index 8fab58c6640b4..c75aade943088 100644 --- a/aptos-move/aptos-vm/Cargo.toml +++ 
b/aptos-move/aptos-vm/Cargo.toml @@ -66,7 +66,6 @@ proptest = { workspace = true } [features] default = [] -mirai-contracts = [] fuzzing = ["move-core-types/fuzzing", "move-binary-format/fuzzing", "move-vm-types/fuzzing", "aptos-framework/fuzzing"] failpoints = ["fail/failpoints", "move-vm-runtime/failpoints"] testing = ["move-unit-test", "aptos-framework/testing"] diff --git a/aptos-move/aptos-vm/src/adapter_common.rs b/aptos-move/aptos-vm/src/adapter_common.rs index 44a9e2c328e1c..218bb050c8867 100644 --- a/aptos-move/aptos-vm/src/adapter_common.rs +++ b/aptos-move/aptos-vm/src/adapter_common.rs @@ -2,7 +2,7 @@ // Parts of the project are originally copyright © Meta Platforms, Inc. // SPDX-License-Identifier: Apache-2.0 -use crate::move_vm_ext::{MoveResolverExt, SessionExt, SessionId}; +use crate::move_vm_ext::{AptosMoveResolver, MoveResolverExt, SessionExt, SessionId}; use anyhow::Result; use aptos_types::{ block_metadata::BlockMetadata, @@ -38,7 +38,7 @@ pub(crate) trait VMAdapter { fn run_prologue( &self, session: &mut SessionExt, - storage: &impl MoveResolverExt, + resolver: &impl AptosMoveResolver, transaction: &SignatureCheckedTransaction, log_context: &AdapterLogSchema, ) -> Result<(), VMStatus>; @@ -57,14 +57,14 @@ pub(crate) trait VMAdapter { fn validate_signature_checked_transaction( &self, session: &mut SessionExt, - storage: &impl MoveResolverExt, + resolver: &impl AptosMoveResolver, transaction: &SignatureCheckedTransaction, allow_too_new: bool, log_context: &AdapterLogSchema, ) -> Result<(), VMStatus> { self.check_transaction_format(transaction)?; - let prologue_status = self.run_prologue(session, storage, transaction, log_context); + let prologue_status = self.run_prologue(session, resolver, transaction, log_context); match prologue_status { Err(err) if !allow_too_new || err.status_code() != StatusCode::SEQUENCE_NUMBER_TOO_NEW => diff --git a/aptos-move/aptos-vm/src/aptos_vm.rs b/aptos-move/aptos-vm/src/aptos_vm.rs index a06266d8b54a9..e57087d990fef 100644 --- a/aptos-move/aptos-vm/src/aptos_vm.rs +++ b/aptos-move/aptos-vm/src/aptos_vm.rs @@ -11,7 +11,7 @@ use crate::{ counters::*, data_cache::StorageAdapter, errors::expect_only_successful_execution, - move_vm_ext::{MoveResolverExt, RespawnedSession, SessionExt, SessionId}, + move_vm_ext::{AptosMoveResolver, MoveResolverExt, RespawnedSession, SessionExt, SessionId}, sharded_block_executor::{executor_client::ExecutorClient, ShardedBlockExecutor}, system_module_names::*, transaction_metadata::TransactionMetadata, @@ -33,8 +33,7 @@ use aptos_types::{ block_executor::partitioner::PartitionedTransactions, block_metadata::BlockMetadata, fee_statement::FeeStatement, - on_chain_config::{new_epoch_event_key, FeatureFlag, TimedFeatureOverride}, - state_store::state_key::StateKey, + on_chain_config::{new_epoch_event_key, ConfigStorage, FeatureFlag, TimedFeatureOverride}, transaction::{ EntryFunction, ExecutionError, ExecutionStatus, ModuleBundle, Multisig, MultisigTransactionPayload, SignatureCheckedTransaction, SignedTransaction, Transaction, @@ -42,7 +41,6 @@ use aptos_types::{ WriteSetPayload, }, vm_status::{AbortLocation, StatusCode, VMStatus}, - write_set::WriteOp, }; use aptos_utils::{aptos_try, return_on_failure}; use aptos_vm_logging::{log_schema::AdapterLogSchema, speculative_error, speculative_log}; @@ -74,7 +72,6 @@ use once_cell::sync::{Lazy, OnceCell}; use std::{ cmp::{max, min}, collections::{BTreeMap, BTreeSet}, - convert::{AsMut, AsRef}, marker::Sync, sync::{ atomic::{AtomicBool, Ordering}, @@ -93,7 +90,7 @@ pub 
static RAYON_EXEC_POOL: Lazy<Arc<rayon::ThreadPool>> = Lazy::new(|| { Arc::new( rayon::ThreadPoolBuilder::new() .num_threads(num_cpus::get()) - .thread_name(|index| format!("par_exec_{}", index)) + .thread_name(|index| format!("par_exec-{}", index)) .build() .unwrap(), ) @@ -119,16 +116,21 @@ macro_rules! unwrap_or_discard { } impl AptosVM { - pub fn new(state: &impl StateView) -> Self { - Self(AptosVMImpl::new(state)) + pub fn new(config_storage: &impl ConfigStorage) -> Self { + Self(AptosVMImpl::new(config_storage)) } - pub fn new_for_validation(state: &impl StateView) -> Self { + pub fn new_from_state_view(state_view: &impl StateView) -> Self { + let config_storage = StorageAdapter::new(state_view); + Self(AptosVMImpl::new(&config_storage)) + } + + pub fn new_for_validation(state_view: &impl StateView) -> Self { info!( - AdapterLogSchema::new(state.id(), 0), + AdapterLogSchema::new(state_view.id(), 0), "Adapter created for Validation" ); - Self::new(state) + Self::new_from_state_view(state_view) } /// Sets execution concurrency level when invoked the first time. @@ -224,7 +226,7 @@ impl AptosVM { pub fn load_module( &self, module_id: &ModuleId, - resolver: &impl MoveResolverExt, + resolver: &impl AptosMoveResolver, ) -> VMResult<Arc<CompiledModule>> { self.0.load_module(module_id, resolver) } @@ -234,7 +236,7 @@ impl AptosVM { pub fn failed_transaction_cleanup( &self, error_code: VMStatus, - gas_meter: &mut impl AptosGasMeter, + gas_meter: &impl AptosGasMeter, txn_data: &TransactionMetadata, resolver: &impl MoveResolverExt, log_context: &AdapterLogSchema, @@ -251,7 +253,7 @@ impl AptosVM { .1 } - pub fn as_move_resolver<'a, S: StateView>(&self, state_view: &'a S) -> StorageAdapter<'a, S> { + pub fn as_move_resolver<'a, S>(&self, state_view: &'a S) -> StorageAdapter<'a, S> { StorageAdapter::new_with_cached_config( state_view, self.0.get_gas_feature_version(), @@ -279,7 +281,7 @@ impl AptosVM { fn failed_transaction_cleanup_and_keep_vm_status( &self, error_code: VMStatus, - gas_meter: &mut impl AptosGasMeter, + gas_meter: &impl AptosGasMeter, txn_data: &TransactionMetadata, resolver: &impl MoveResolverExt, log_context: &AdapterLogSchema, @@ -312,6 +314,7 @@ impl AptosVM { }, _ => status, }; + let fee_statement = AptosVM::fee_statement_from_gas_meter(txn_data, gas_meter); // The transaction should be charged for gas, so run the epilogue to do that.
// This is running in a new session that drops any side effects from the // attempted transaction (e.g., spending funds that were needed to pay for gas), @@ -326,7 +329,6 @@ impl AptosVM { ) { return discard_error_vm_status(e); } - let fee_statement = AptosVM::fee_statement_from_gas_meter(txn_data, gas_meter); let txn_output = get_transaction_output( &mut (), session, @@ -347,17 +349,17 @@ impl AptosVM { fn success_transaction_cleanup( &self, mut respawned_session: RespawnedSession, - gas_meter: &mut impl AptosGasMeter, + gas_meter: &impl AptosGasMeter, txn_data: &TransactionMetadata, log_context: &AdapterLogSchema, change_set_configs: &ChangeSetConfigs, ) -> Result<(VMStatus, VMOutput), VMStatus> { + let fee_statement = AptosVM::fee_statement_from_gas_meter(txn_data, gas_meter); respawned_session.execute(|session| { self.0 .run_success_epilogue(session, gas_meter.balance(), txn_data, log_context) })?; let change_set = respawned_session.finish(change_set_configs)?; - let fee_statement = AptosVM::fee_statement_from_gas_meter(txn_data, gas_meter); let output = VMOutput::new( change_set, fee_statement, @@ -426,6 +428,12 @@ impl AptosVM { TransactionPayload::Script(script) => { let loaded_func = session.load_script(script.code(), script.ty_args().to_vec())?; + // Gerardo: consolidate the extended validation to verifier. + verifier::event_validation::verify_no_event_emission_in_script( + script.code(), + session.get_vm_config().max_binary_format_version, + )?; + let args = verifier::transaction_arg_validation::validate_combine_signer_and_txn_args( &mut session, @@ -491,15 +499,14 @@ impl AptosVM { change_set_configs: &ChangeSetConfigs, txn_data: &TransactionMetadata, ) -> Result, VMStatus> { - let change_set = session.finish(&mut (), change_set_configs)?; + let mut change_set = session.finish(&mut (), change_set_configs)?; for (key, op) in change_set.write_set_iter() { gas_meter.charge_io_gas_for_write(key, op)?; } - gas_meter.charge_storage_fee_for_all( - change_set.write_set_iter(), - change_set.events(), + gas_meter.process_storage_fee_for_all( + &mut change_set, txn_data.transaction_size, txn_data.gas_unit_price, )?; @@ -677,8 +684,8 @@ impl AptosVM { cleanup_args: Vec>, change_set_configs: &ChangeSetConfigs, ) -> Result, VMStatus> { - // Charge gas for writeset before we do cleanup. This ensures we don't charge gas for - // cleanup writeset changes, which is consistent with outer-level success cleanup + // Charge gas for write set before we do cleanup. This ensures we don't charge gas for + // cleanup write set changes, which is consistent with outer-level success cleanup // flow. We also wouldn't need to worry that we run out of gas when doing cleanup. let mut respawned_session = self.charge_change_set_and_respawn_session( session, @@ -995,6 +1002,7 @@ impl AptosVM { .map_err(|err| Self::metadata_validation_error(&err.to_string()))?; } verifier::resource_groups::validate_resource_groups(session, modules)?; + verifier::event_validation::validate_module_events(session, modules)?; if !expected_modules.is_empty() { return Err(Self::metadata_validation_error( @@ -1164,7 +1172,7 @@ impl AptosVM { F: FnOnce(u64, VMGasParameters, StorageGasParameters, Gas) -> Result, { // TODO(Gas): revisit this. - let vm = AptosVM::new(state_view); + let vm = AptosVM::new_from_state_view(state_view); // TODO(Gas): avoid creating txn metadata twice. 
let balance = TransactionMetadata::new(txn).max_gas_amount(); @@ -1186,10 +1194,10 @@ impl AptosVM { Ok((status, output, gas_meter)) } - fn execute_writeset( + fn execute_write_set( &self, - resolver: &impl MoveResolverExt, - writeset_payload: &WriteSetPayload, + resolver: &impl AptosMoveResolver, + write_set_payload: &WriteSetPayload, txn_sender: Option, session_id: SessionId, ) -> Result { @@ -1197,7 +1205,7 @@ impl AptosVM { let change_set_configs = ChangeSetConfigs::unlimited_at_gas_feature_version(self.0.get_gas_feature_version()); - match writeset_payload { + match write_set_payload { WriteSetPayload::Direct(change_set) => { VMChangeSet::try_from_storage_change_set(change_set.clone(), &change_set_configs) }, @@ -1232,14 +1240,19 @@ impl AptosVM { } } - fn read_writeset<'a>( + fn read_change_set( &self, state_view: &impl StateView, - write_set: impl IntoIterator, + change_set: &VMChangeSet, ) -> Result<(), VMStatus> { + assert!( + change_set.aggregator_v1_write_set().is_empty(), + "Waypoint change set should not have any aggregator writes." + ); + // All Move executions satisfy the read-before-write property. Thus we need to read each // access path that the write set is going to update. - for (state_key, _) in write_set.into_iter() { + for (state_key, _) in change_set.write_set_iter() { state_view .get_state_value_bytes(state_key) .map_err(|_| VMStatus::error(StatusCode::STORAGE_ERROR, None))?; @@ -1254,11 +1267,11 @@ impl AptosVM { let has_new_block_event = change_set .events() .iter() - .any(|e| *e.key() == new_block_event_key()); + .any(|e| e.event_key() == Some(&new_block_event_key())); let has_new_epoch_event = change_set .events() .iter() - .any(|e| *e.key() == new_epoch_event_key()); + .any(|e| e.event_key() == Some(&new_epoch_event_key())); if has_new_block_event && has_new_epoch_event { Ok(()) } else { @@ -1273,24 +1286,20 @@ impl AptosVM { pub(crate) fn process_waypoint_change_set( &self, resolver: &impl MoveResolverExt, - writeset_payload: WriteSetPayload, + write_set_payload: WriteSetPayload, log_context: &AdapterLogSchema, ) -> Result<(VMStatus, VMOutput), VMStatus> { // TODO: user specified genesis id to distinguish different genesis write sets let genesis_id = HashValue::zero(); - let change_set = self.execute_writeset( + let change_set = self.execute_write_set( resolver, - &writeset_payload, + &write_set_payload, Some(aptos_types::account_config::reserved_vm_address()), SessionId::genesis(genesis_id), )?; Self::validate_waypoint_change_set(&change_set, log_context)?; - self.read_writeset(resolver, change_set.write_set_iter())?; - assert!( - change_set.aggregator_write_set().is_empty(), - "Waypoint change set should not have any aggregator writes." 
- ); + self.read_change_set(resolver, &change_set)?; SYSTEM_TRANSACTIONS_EXECUTED.inc(); @@ -1300,7 +1309,7 @@ impl AptosVM { pub(crate) fn process_block_prologue( &self, - resolver: &impl MoveResolverExt, + resolver: &impl AptosMoveResolver, block_metadata: BlockMetadata, log_context: &AdapterLogSchema, ) -> Result<(VMStatus, VMOutput), VMStatus> { @@ -1354,7 +1363,7 @@ impl AptosVM { txn: &SignedTransaction, state_view: &impl StateView, ) -> (VMStatus, TransactionOutput) { - let vm = AptosVM::new(state_view); + let vm = AptosVM::new_from_state_view(state_view); let simulation_vm = AptosSimulationVM(vm); let log_context = AdapterLogSchema::new(state_view.id(), 0); @@ -1379,7 +1388,7 @@ impl AptosVM { arguments: Vec>, gas_budget: u64, ) -> Result>> { - let vm = AptosVM::new(state_view); + let vm = AptosVM::new_from_state_view(state_view); let log_context = AdapterLogSchema::new(state_view.id(), 0); let mut gas_meter = MemoryTrackedGasMeter::new(StandardGasMeter::new(StandardGasAlgebra::new( @@ -1421,7 +1430,7 @@ impl AptosVM { fn run_prologue_with_payload( &self, session: &mut SessionExt, - resolver: &impl MoveResolverExt, + resolver: &impl AptosMoveResolver, payload: &TransactionPayload, txn_data: &TransactionMetadata, log_context: &AdapterLogSchema, @@ -1599,7 +1608,7 @@ impl VMValidator for AptosVM { impl VMAdapter for AptosVM { fn new_session<'r>( &self, - resolver: &'r impl MoveResolverExt, + resolver: &'r impl AptosMoveResolver, session_id: SessionId, ) -> SessionExt<'r, '_> { self.0.new_session(resolver, session_id) @@ -1623,7 +1632,7 @@ impl VMAdapter for AptosVM { fn run_prologue( &self, session: &mut SessionExt, - resolver: &impl MoveResolverExt, + resolver: &impl AptosMoveResolver, transaction: &SignatureCheckedTransaction, log_context: &AdapterLogSchema, ) -> Result<(), VMStatus> { @@ -1644,7 +1653,7 @@ impl VMAdapter for AptosVM { .change_set() .events() .iter() - .any(|event| *event.key() == new_epoch_event_key) + .any(|event| event.event_key() == Some(&new_epoch_event_key)) } fn execute_single_transaction( @@ -1757,23 +1766,11 @@ impl VMAdapter for AptosVM { } } -impl AsRef for AptosVM { - fn as_ref(&self) -> &AptosVMImpl { - &self.0 - } -} - -impl AsMut for AptosVM { - fn as_mut(&mut self) -> &mut AptosVMImpl { - &mut self.0 - } -} - impl AptosSimulationVM { fn validate_simulated_transaction( &self, session: &mut SessionExt, - resolver: &impl MoveResolverExt, + resolver: &impl AptosMoveResolver, transaction: &SignedTransaction, txn_data: &TransactionMetadata, log_context: &AdapterLogSchema, @@ -1869,7 +1866,7 @@ impl AptosSimulationVM { self.0.success_transaction_cleanup( respawned_session, - &mut gas_meter, + &gas_meter, &txn_data, log_context, &storage_gas_params.change_set_configs, @@ -1918,7 +1915,7 @@ impl AptosSimulationVM { } else { let (vm_status, output) = self.0.failed_transaction_cleanup_and_keep_vm_status( err, - &mut gas_meter, + &gas_meter, &txn_data, resolver, log_context, diff --git a/aptos-move/aptos-vm/src/aptos_vm_impl.rs b/aptos-move/aptos-vm/src/aptos_vm_impl.rs index 7d7305f04a742..a7dc98ed7bfed 100644 --- a/aptos-move/aptos-vm/src/aptos_vm_impl.rs +++ b/aptos-move/aptos-vm/src/aptos_vm_impl.rs @@ -4,9 +4,8 @@ use crate::{ access_path_cache::AccessPathCache, - data_cache::StorageAdapter, errors::{convert_epilogue_error, convert_prologue_error, expect_only_successful_execution}, - move_vm_ext::{MoveResolverExt, MoveVmExt, SessionExt, SessionId}, + move_vm_ext::{AptosMoveResolver, MoveVmExt, SessionExt, SessionId}, 
system_module_names::{MULTISIG_ACCOUNT_MODULE, VALIDATE_MULTISIG_TRANSACTION}, transaction_metadata::TransactionMetadata, transaction_validation::APTOS_TRANSACTION_VALIDATION, @@ -17,14 +16,14 @@ use aptos_gas_schedule::{ AptosGasParameters, FromOnChainGasSchedule, MiscGasParameters, NativeGasParameters, }; use aptos_logger::{enabled, prelude::*, Level}; -use aptos_state_view::StateView; +use aptos_state_view::StateViewId; use aptos_types::{ account_config::CORE_CODE_ADDRESS, chain_id::ChainId, fee_statement::FeeStatement, on_chain_config::{ - ApprovedExecutionHashes, ConfigurationResource, FeatureFlag, Features, GasSchedule, - GasScheduleV2, OnChainConfig, TimedFeatures, Version, + ApprovedExecutionHashes, ConfigStorage, ConfigurationResource, FeatureFlag, Features, + GasSchedule, GasScheduleV2, OnChainConfig, TimedFeatures, Version, }, transaction::{AbortInfo, ExecutionStatus, Multisig, TransactionStatus}, vm_status::{StatusCode, VMStatus}, @@ -57,8 +56,10 @@ pub struct AptosVMImpl { features: Features, } -pub fn gas_config(storage: &impl MoveResolverExt) -> (Result, u64) { - match GasScheduleV2::fetch_config(storage) { +pub fn gas_config( + config_storage: &impl ConfigStorage, +) -> (Result, u64) { + match GasScheduleV2::fetch_config(config_storage) { Some(gas_schedule) => { let feature_version = gas_schedule.feature_version; let map = gas_schedule.to_btree_map(); @@ -67,7 +68,7 @@ pub fn gas_config(storage: &impl MoveResolverExt) -> (Result match GasSchedule::fetch_config(storage) { + None => match GasSchedule::fetch_config(config_storage) { Some(gas_schedule) => { let map = gas_schedule.to_btree_map(); (AptosGasParameters::from_on_chain_gas_schedule(&map, 0), 0) @@ -79,36 +80,42 @@ pub fn gas_config(storage: &impl MoveResolverExt) -> (Result Self { - let storage = StorageAdapter::new(state); - + pub fn new(config_storage: &impl ConfigStorage) -> Self { // Get the gas parameters - let (mut gas_params, gas_feature_version) = gas_config(&storage); + let (mut gas_params, gas_feature_version) = gas_config(config_storage); let storage_gas_params = match &mut gas_params { Ok(gas_params) => { let storage_gas_params = - StorageGasParameters::new(gas_feature_version, gas_params, &storage); - - if let StoragePricing::V2(pricing) = &storage_gas_params.pricing { - // Overwrite table io gas parameters with global io pricing. - let g = &mut gas_params.natives.table; - match gas_feature_version { - 0..=1 => (), - 2..=6 => { + StorageGasParameters::new(gas_feature_version, gas_params, config_storage); + + // Overwrite table io gas parameters with global io pricing. + let g = &mut gas_params.natives.table; + match gas_feature_version { + 0..=1 => (), + 2..=6 => { + if let StoragePricing::V2(pricing) = &storage_gas_params.pricing { g.common_load_base_legacy = pricing.per_item_read * NumArgs::new(1); g.common_load_base_new = 0.into(); g.common_load_per_byte = pricing.per_byte_read; g.common_load_failure = 0.into(); - }, - 7.. => { + } + } + 7..=9 => { + if let StoragePricing::V2(pricing) = &storage_gas_params.pricing { g.common_load_base_legacy = 0.into(); g.common_load_base_new = pricing.per_item_read * NumArgs::new(1); g.common_load_per_byte = pricing.per_byte_read; g.common_load_failure = 0.into(); - }, + } } - } + 10.. 
=> { + g.common_load_base_legacy = 0.into(); + g.common_load_base_new = gas_params.vm.txn.storage_io_per_state_slot_read * NumArgs::new(1); + g.common_load_per_byte = gas_params.vm.txn.storage_io_per_state_byte_read; + g.common_load_failure = 0.into(); + } + }; Ok(storage_gas_params) }, Err(err) => Err(format!("Failed to initialize storage gas params due to failure to load main gas parameters: {}", err)), @@ -123,12 +130,12 @@ impl AptosVMImpl { Err(_) => (NativeGasParameters::zeros(), MiscGasParameters::zeros()), }; - let features = Features::fetch_config(&storage).unwrap_or_default(); + let features = Features::fetch_config(config_storage).unwrap_or_default(); // If no chain ID is in storage, we assume we are in a testing environment and use ChainId::TESTING - let chain_id = ChainId::fetch_config(&storage).unwrap_or_else(ChainId::test); + let chain_id = ChainId::fetch_config(config_storage).unwrap_or_else(ChainId::test); - let timestamp = ConfigurationResource::fetch_config(&storage) + let timestamp = ConfigurationResource::fetch_config(config_storage) .map(|config| config.last_reconfiguration_time()) .unwrap_or(0); @@ -147,7 +154,7 @@ impl AptosVMImpl { ) .expect("should be able to create Move VM; check if there are duplicated natives"); - let version = Version::fetch_config(&storage); + let version = Version::fetch_config(config_storage); Self { move_vm, @@ -207,7 +214,7 @@ impl AptosVMImpl { pub fn check_gas( &self, - resolver: &impl MoveResolverExt, + resolver: &impl AptosMoveResolver, txn_data: &TransactionMetadata, log_context: &AdapterLogSchema, ) -> Result<(), VMStatus> { @@ -492,15 +499,17 @@ impl AptosVMImpl { .or_else(|err| convert_prologue_error(err, log_context)) } - fn run_epiloque( + fn run_epilogue( &self, session: &mut SessionExt, gas_remaining: Gas, txn_data: &TransactionMetadata, ) -> VMResult<()> { - let txn_sequence_number = txn_data.sequence_number(); let txn_gas_price = txn_data.gas_unit_price(); let txn_max_gas_units = txn_data.max_gas_amount(); + // TODO(aldenhu): repurpose this to be the amount of the storage fee refund. + let unused = 0; + // We can unconditionally do this as this condition can only be true if the prologue // accepted it, in which case the gas payer feature is enabled. 
if let Some(fee_payer) = txn_data.fee_payer() { @@ -511,7 +520,7 @@ impl AptosVMImpl { serialize_values(&vec![ MoveValue::Signer(txn_data.sender), MoveValue::Address(fee_payer), - MoveValue::U64(txn_sequence_number), + MoveValue::U64(unused), MoveValue::U64(txn_gas_price.into()), MoveValue::U64(txn_max_gas_units.into()), MoveValue::U64(gas_remaining.into()), @@ -526,7 +535,7 @@ impl AptosVMImpl { vec![], serialize_values(&vec![ MoveValue::Signer(txn_data.sender), - MoveValue::U64(txn_sequence_number), + MoveValue::U64(unused), MoveValue::U64(txn_gas_price.into()), MoveValue::U64(txn_max_gas_units.into()), MoveValue::U64(gas_remaining.into()), @@ -554,7 +563,7 @@ impl AptosVMImpl { )) }); - self.run_epiloque(session, gas_remaining, txn_data) + self.run_epilogue(session, gas_remaining, txn_data) .or_else(|err| convert_epilogue_error(err, log_context)) } @@ -567,7 +576,7 @@ impl AptosVMImpl { txn_data: &TransactionMetadata, log_context: &AdapterLogSchema, ) -> Result<(), VMStatus> { - self.run_epiloque(session, gas_remaining, txn_data) + self.run_epilogue(session, gas_remaining, txn_data) .or_else(|e| { expect_only_successful_execution( e, @@ -602,7 +611,7 @@ impl AptosVMImpl { pub fn new_session<'r>( &self, - resolver: &'r impl MoveResolverExt, + resolver: &'r impl AptosMoveResolver, session_id: SessionId, ) -> SessionExt<'r, '_> { self.move_vm.new_session(resolver, session_id) @@ -611,7 +620,7 @@ impl AptosVMImpl { pub fn load_module( &self, module_id: &ModuleId, - resolver: &impl MoveResolverExt, + resolver: &impl AptosMoveResolver, ) -> VMResult> { self.move_vm.load_module(module_id, resolver) } @@ -632,11 +641,9 @@ impl<'a> AptosVMInternals<'a> { } /// Returns the internal gas schedule if it has been loaded, or an error if it hasn't. - pub fn gas_params( - self, - log_context: &AdapterLogSchema, - ) -> Result<&'a AptosGasParameters, VMStatus> { - self.0.get_gas_parameters(log_context) + pub fn gas_params(self) -> Result<&'a AptosGasParameters, VMStatus> { + let log_context = AdapterLogSchema::new(StateViewId::Miscellaneous, 0); + self.0.get_gas_parameters(&log_context) } /// Returns the version of Move Runtime. diff --git a/aptos-move/aptos-vm/src/block_executor/mod.rs b/aptos-move/aptos-vm/src/block_executor/mod.rs index b83affe60cd50..0e87e89c26569 100644 --- a/aptos-move/aptos-vm/src/block_executor/mod.rs +++ b/aptos-move/aptos-vm/src/block_executor/mod.rs @@ -38,7 +38,7 @@ use aptos_vm_types::output::VMOutput; use move_core_types::vm_status::VMStatus; use once_cell::sync::OnceCell; use rayon::{prelude::*, ThreadPool}; -use std::sync::Arc; +use std::{collections::HashMap, sync::Arc}; impl BlockExecutorTransaction for PreprocessedTransaction { type Event = ContractEvent; @@ -86,31 +86,54 @@ impl BlockExecutorTransactionOutput for AptosTransactionOutput { Self::new(VMOutput::empty_with_status(TransactionStatus::Retry)) } + // TODO: get rid of the cloning data-structures in the following APIs. + /// Should never be called after incorporate_delta_writes, as it /// will consume vm_output to prepare an output with deltas. - fn get_writes(&self) -> Vec<(StateKey, WriteOp)> { + fn resource_write_set(&self) -> HashMap { self.vm_output .lock() .as_ref() .expect("Output to be set to get writes") .change_set() - .write_set_iter() - .map(|(key, op)| (key.clone(), op.clone())) - .collect() + .resource_write_set() + .clone() + } + + /// Should never be called after incorporate_delta_writes, as it + /// will consume vm_output to prepare an output with deltas. 
+ fn module_write_set(&self) -> HashMap<StateKey, WriteOp> { + self.vm_output + .lock() + .as_ref() + .expect("Output to be set to get writes") + .change_set() + .module_write_set() + .clone() } /// Should never be called after incorporate_delta_writes, as it /// will consume vm_output to prepare an output with deltas. - fn get_deltas(&self) -> Vec<(StateKey, DeltaOp)> { + fn aggregator_v1_write_set(&self) -> HashMap<StateKey, WriteOp> { + self.vm_output + .lock() + .as_ref() + .expect("Output to be set to get writes") + .change_set() + .aggregator_v1_write_set() + .clone() + } + + /// Should never be called after incorporate_delta_writes, as it + /// will consume vm_output to prepare an output with deltas. + fn aggregator_v1_delta_set(&self) -> HashMap<StateKey, DeltaOp> { self.vm_output .lock() .as_ref() .expect("Output to be set to get deltas") .change_set() - .aggregator_delta_set() - .iter() - .map(|(key, op)| (key.clone(), *op)) - .collect() + .aggregator_v1_delta_set() + .clone() } /// Should never be called after incorporate_delta_writes, as it diff --git a/aptos-move/aptos-vm/src/block_executor/vm_wrapper.rs b/aptos-move/aptos-vm/src/block_executor/vm_wrapper.rs index 97298ff64fe75..94a89c92e69e9 100644 --- a/aptos-move/aptos-vm/src/block_executor/vm_wrapper.rs +++ b/aptos-move/aptos-vm/src/block_executor/vm_wrapper.rs @@ -6,6 +6,7 @@ use crate::{ adapter_common::{PreprocessedTransaction, VMAdapter}, aptos_vm::AptosVM, block_executor::AptosTransactionOutput, + data_cache::StorageAdapter, }; use aptos_block_executor::task::{ExecutionStatus, ExecutorTask}; use aptos_logger::{enabled, Level}; @@ -30,7 +31,12 @@ impl<'a, S: 'a + StateView + Sync> ExecutorTask for AptosExecutorTask<'a, S> { type Txn = PreprocessedTransaction; fn init(argument: &'a S) -> Self { - let vm = AptosVM::new(argument); + // AptosVM has to be initialized using configs from storage. + // Using adapter allows us to fetch those. + // TODO: with new adapter we can relax trait bounds on S and avoid + // creating `StorageAdapter` here. + let config_storage = StorageAdapter::new(argument); + let vm = AptosVM::new(&config_storage); // Loading `0x1::account` and its transitive dependency into the code cache. // @@ -69,7 +75,7 @@ impl<'a, S: 'a + StateView + Sync> ExecutorTask for AptosExecutorTask<'a, S> { { Ok((vm_status, mut vm_output, sender)) => { if materialize_deltas { - // TODO: Integrate delta application failure. + // TODO: Integrate aggregator v2.
vm_output = vm_output .try_materialize(view) .expect("Delta materialization failed"); diff --git a/aptos-move/aptos-vm/src/data_cache.rs b/aptos-move/aptos-vm/src/data_cache.rs index 64e581448136c..b6326040d6dbe 100644 --- a/aptos-move/aptos-vm/src/data_cache.rs +++ b/aptos-move/aptos-vm/src/data_cache.rs @@ -5,7 +5,7 @@ use crate::{ aptos_vm_impl::gas_config, - move_vm_ext::{get_max_binary_format_version, MoveResolverExt}, + move_vm_ext::{get_max_binary_format_version, AptosMoveResolver, StateValueMetadataResolver}, }; #[allow(unused_imports)] use anyhow::{bail, Error}; @@ -13,12 +13,16 @@ use aptos_aggregator::{ aggregator_extension::AggregatorID, delta_change_set::deserialize, resolver::AggregatorResolver, }; use aptos_framework::natives::state_storage::StateStorageUsageResolver; -use aptos_state_view::StateView; +use aptos_state_view::{StateView, TStateView}; use aptos_table_natives::{TableHandle, TableResolver}; use aptos_types::{ access_path::AccessPath, on_chain_config::{ConfigStorage, Features, OnChainConfig}, - state_store::{state_key::StateKey, state_storage_usage::StateStorageUsage}, + state_store::{ + state_key::StateKey, + state_storage_usage::StateStorageUsage, + state_value::{StateValue, StateValueMetadata}, + }, }; use move_binary_format::{errors::*, CompiledModule}; use move_core_types::{ @@ -29,7 +33,7 @@ use move_core_types::{ value::MoveTypeLayout, vm_status::StatusCode, }; -use std::{cell::RefCell, collections::BTreeMap, ops::Deref}; +use std::{cell::RefCell, collections::BTreeMap}; pub(crate) fn get_resource_group_from_metadata( struct_tag: &StructTag, @@ -52,7 +56,7 @@ pub struct StorageAdapter<'a, S> { RefCell>>>>, } -impl<'a, S: StateView> StorageAdapter<'a, S> { +impl<'a, S> StorageAdapter<'a, S> { pub fn new_with_cached_config( state_store: &'a S, gas_feature_version: u64, @@ -70,7 +74,9 @@ impl<'a, S: StateView> StorageAdapter<'a, S> { s.max_binary_format_version = get_max_binary_format_version(features, gas_feature_version); s } +} +impl<'a, S: StateView> StorageAdapter<'a, S> { pub fn new(state_store: &'a S) -> Self { let mut s = Self { state_store, @@ -138,7 +144,7 @@ impl<'a, S: StateView> StorageAdapter<'a, S> { } } -impl<'a, S: StateView> MoveResolverExt for StorageAdapter<'a, S> { +impl<'a, S: StateView> AptosMoveResolver for StorageAdapter<'a, S> { fn get_resource_group_data( &self, address: &AccountAddress, @@ -259,15 +265,7 @@ impl<'a, S: StateView> ConfigStorage for StorageAdapter<'a, S> { impl<'a, S: StateView> StateStorageUsageResolver for StorageAdapter<'a, S> { fn get_state_storage_usage(&self) -> Result { - self.get_usage() - } -} - -impl<'a, S> Deref for StorageAdapter<'a, S> { - type Target = S; - - fn deref(&self) -> &Self::Target { - self.state_store + self.state_store.get_usage() } } @@ -280,3 +278,30 @@ impl AsMoveResolver for S { StorageAdapter::new(self) } } + +impl<'a, S: StateView> StateValueMetadataResolver for StorageAdapter<'a, S> { + fn get_state_value_metadata( + &self, + state_key: &StateKey, + ) -> anyhow::Result>> { + let maybe_state_value = self.state_store.get_state_value(state_key)?; + Ok(maybe_state_value.map(StateValue::into_metadata)) + } +} + +// We need to implement StateView for adapter because: +// 1. When processing write set payload, storage is accessed +// directly. +// 2. When stacking Storage adapters on top of each other, e.g. +// in epilogue. 
+impl<'a, S: StateView> TStateView for StorageAdapter<'a, S> { + type Key = StateKey; + + fn get_state_value(&self, state_key: &Self::Key) -> anyhow::Result> { + self.state_store.get_state_value(state_key) + } + + fn get_usage(&self) -> anyhow::Result { + self.state_store.get_usage() + } +} diff --git a/aptos-move/aptos-vm/src/foreign_contracts.rs b/aptos-move/aptos-vm/src/foreign_contracts.rs deleted file mode 100644 index deb39ea38755f..0000000000000 --- a/aptos-move/aptos-vm/src/foreign_contracts.rs +++ /dev/null @@ -1,11 +0,0 @@ -// Copyright © Aptos Foundation -// Parts of the project are originally copyright © Meta Platforms, Inc. -// SPDX-License-Identifier: Apache-2.0 - -//! This file contains models of the vm crate's dependencies for use with MIRAI. - -pub mod types { - pub mod transaction { - pub const MAX_TRANSACTION_SIZE_IN_BYTES: usize = 4096; - } -} diff --git a/aptos-move/aptos-vm/src/lib.rs b/aptos-move/aptos-vm/src/lib.rs index 1f28bf71717c9..e792f56d275db 100644 --- a/aptos-move/aptos-vm/src/lib.rs +++ b/aptos-move/aptos-vm/src/lib.rs @@ -109,9 +109,6 @@ mod access_path_cache; pub mod counters; pub mod data_cache; -#[cfg(feature = "mirai-contracts")] -pub mod foreign_contracts; - mod adapter_common; pub mod aptos_vm; mod aptos_vm_impl; diff --git a/aptos-move/aptos-vm/src/move_vm_ext/mod.rs b/aptos-move/aptos-vm/src/move_vm_ext/mod.rs index a386a2f4ec780..e2d55a41b8cbd 100644 --- a/aptos-move/aptos-vm/src/move_vm_ext/mod.rs +++ b/aptos-move/aptos-vm/src/move_vm_ext/mod.rs @@ -9,7 +9,7 @@ mod session; mod vm; pub use crate::move_vm_ext::{ - resolver::MoveResolverExt, + resolver::{AptosMoveResolver, MoveResolverExt, StateValueMetadataResolver}, respawned_session::RespawnedSession, session::{SessionExt, SessionId}, vm::{get_max_binary_format_version, verifier_config, MoveVmExt}, diff --git a/aptos-move/aptos-vm/src/move_vm_ext/resolver.rs b/aptos-move/aptos-vm/src/move_vm_ext/resolver.rs index 810a7447618f5..679ffe016fe65 100644 --- a/aptos-move/aptos-vm/src/move_vm_ext/resolver.rs +++ b/aptos-move/aptos-vm/src/move_vm_ext/resolver.rs @@ -5,21 +5,42 @@ use aptos_aggregator::resolver::AggregatorResolver; use aptos_framework::natives::state_storage::StateStorageUsageResolver; use aptos_state_view::StateView; use aptos_table_natives::TableResolver; -use aptos_types::on_chain_config::ConfigStorage; -use aptos_utils::aptos_try; +use aptos_types::{ + on_chain_config::ConfigStorage, + state_store::{state_key::StateKey, state_value::StateValueMetadata}, +}; use move_binary_format::errors::VMResult; use move_core_types::{ account_address::AccountAddress, language_storage::StructTag, resolver::MoveResolver, }; use std::collections::BTreeMap; -pub trait MoveResolverExt: +/// Allows to query storage metadata in the VM session. Needed for storage refunds. +pub trait StateValueMetadataResolver { + /// Returns metadata for a given state value: + /// - None if state value does not exist, + /// - Some(None) if state value has no metadata, + /// - Some(Some(..)) otherwise. + // TODO: Nested options are ugly, refactor. + fn get_state_value_metadata( + &self, + state_key: &StateKey, + ) -> anyhow::Result>>; +} + +/// A general resolver used by AptosVM. Allows to implement custom hooks on +/// top of storage, e.g. get resources from resource groups, etc. 
+pub trait AptosMoveResolver: MoveResolver + + TableResolver + AggregatorResolver + + StateStorageUsageResolver + + + StateValueMetadataResolver + ConfigStorage - + StateView + { fn get_resource_group_data( &self, @@ -36,18 +57,9 @@ pub trait MoveResolverExt: fn release_resource_group_cache( &self, ) -> BTreeMap>>>; - - // Move to API does not belong here - fn is_resource_group(&self, struct_tag: &StructTag) -> bool { - aptos_try!({ - let md = - aptos_framework::get_metadata(&self.get_module_metadata(&struct_tag.module_id()))?; - md.struct_attributes - .get(struct_tag.name.as_ident_str().as_str())? - .iter() - .find(|attr| attr.is_resource_group())?; - Some(()) - }) - .is_some() - } } + +// TODO: Remove dependency on StateView. +pub trait MoveResolverExt: AptosMoveResolver + StateView {} + +impl MoveResolverExt for T {} diff --git a/aptos-move/aptos-vm/src/move_vm_ext/respawned_session.rs b/aptos-move/aptos-vm/src/move_vm_ext/respawned_session.rs index 72950c39bab40..a6d4e447e767b 100644 --- a/aptos-move/aptos-vm/src/move_vm_ext/respawned_session.rs +++ b/aptos-move/aptos-vm/src/move_vm_ext/respawned_session.rs @@ -100,7 +100,7 @@ impl<'r> TStateView for ChangeSetStateView<'r> { fn get_state_value(&self, state_key: &Self::Key) -> Result> { // TODO: `get_state_value` should differentiate between different write types. - match self.change_set.aggregator_delta_set().get(state_key) { + match self.change_set.aggregator_v1_delta_set().get(state_key) { Some(delta_op) => Ok(delta_op .try_into_write_op(self.base, state_key)? .as_state_value()), @@ -118,10 +118,6 @@ impl<'r> TStateView for ChangeSetStateView<'r> { } } - fn is_genesis(&self) -> bool { - unreachable!("Unexpected access to is_genesis()") - } - fn get_usage(&self) -> Result { bail!("Unexpected access to get_usage()") } @@ -134,7 +130,7 @@ mod test { use aptos_language_e2e_tests::data_store::FakeDataStore; use aptos_types::write_set::WriteOp; use aptos_vm_types::check_change_set::CheckChangeSet; - use std::collections::BTreeMap; + use std::collections::HashMap; /// A mock for testing. Always succeeds on checking a change set. 
struct NoOpChangeSetChecker; @@ -171,23 +167,23 @@ mod test { base_view.set_legacy(key("aggregator_both"), serialize(&60)); base_view.set_legacy(key("aggregator_delta_set"), serialize(&70)); - let resource_write_set = BTreeMap::from([ + let resource_write_set = HashMap::from([ (key("resource_both"), write(80)), (key("resource_write_set"), write(90)), ]); - let module_write_set = BTreeMap::from([ + let module_write_set = HashMap::from([ (key("module_both"), write(100)), (key("module_write_set"), write(110)), ]); - let aggregator_write_set = BTreeMap::from([ + let aggregator_write_set = HashMap::from([ (key("aggregator_both"), write(120)), (key("aggregator_write_set"), write(130)), ]); let aggregator_delta_set = - BTreeMap::from([(key("aggregator_delta_set"), delta_add(1, 1000))]); + HashMap::from([(key("aggregator_delta_set"), delta_add(1, 1000))]); let change_set = VMChangeSet::new( resource_write_set, diff --git a/aptos-move/aptos-vm/src/move_vm_ext/session.rs b/aptos-move/aptos-vm/src/move_vm_ext/session.rs index 4471bbbccdafd..fa3124b284818 100644 --- a/aptos-move/aptos-vm/src/move_vm_ext/session.rs +++ b/aptos-move/aptos-vm/src/move_vm_ext/session.rs @@ -3,7 +3,7 @@ use crate::{ access_path_cache::AccessPathCache, data_cache::get_resource_group_from_metadata, - move_vm_ext::MoveResolverExt, transaction_metadata::TransactionMetadata, + move_vm_ext::AptosMoveResolver, transaction_metadata::TransactionMetadata, }; use aptos_aggregator::{aggregator_extension::AggregatorID, delta_change_set::serialize}; use aptos_crypto::{hash::CryptoHash, HashValue}; @@ -11,6 +11,7 @@ use aptos_crypto_derive::{BCSCryptoHash, CryptoHasher}; use aptos_framework::natives::{ aggregator_natives::{AggregatorChange, AggregatorChangeSet, NativeAggregatorContext}, code::{NativeCodeContext, PublishRequest}, + event::NativeEventContext, }; use aptos_table_natives::{NativeTableContext, TableChangeSet}; use aptos_types::{ @@ -25,9 +26,7 @@ use aptos_vm_types::{change_set::VMChangeSet, storage::ChangeSetConfigs}; use move_binary_format::errors::{Location, PartialVMError, VMResult}; use move_core_types::{ account_address::AccountAddress, - effects::{ - AccountChangeSet, ChangeSet as MoveChangeSet, Event as MoveEvent, Op as MoveStorageOp, - }, + effects::{AccountChangeSet, ChangeSet as MoveChangeSet, Op as MoveStorageOp}, language_storage::{ModuleId, StructTag}, vm_status::{err_msg, StatusCode, VMStatus}, }; @@ -35,7 +34,7 @@ use move_vm_runtime::{move_vm::MoveVM, session::Session}; use serde::{Deserialize, Serialize}; use std::{ borrow::BorrowMut, - collections::BTreeMap, + collections::{BTreeMap, HashMap}, ops::{Deref, DerefMut}, sync::Arc, }; @@ -123,35 +122,23 @@ impl SessionId { pub fn as_uuid(&self) -> HashValue { self.hash() } - - pub fn sender(&self) -> Option { - match self { - SessionId::Txn { sender, .. } - | SessionId::Prologue { sender, .. } - | SessionId::Epilogue { sender, .. } => Some(*sender), - SessionId::BlockMeta { .. } | SessionId::Genesis { .. 
} | SessionId::Void => None, - } - } } pub struct SessionExt<'r, 'l> { inner: Session<'r, 'l>, - remote: &'r dyn MoveResolverExt, - new_slot_payer: Option, + remote: &'r dyn AptosMoveResolver, features: Arc, } impl<'r, 'l> SessionExt<'r, 'l> { pub fn new( inner: Session<'r, 'l>, - remote: &'r dyn MoveResolverExt, - new_slot_payer: Option, + remote: &'r dyn AptosMoveResolver, features: Arc, ) -> Self { Self { inner, remote, - new_slot_payer, features, } } @@ -162,7 +149,7 @@ impl<'r, 'l> SessionExt<'r, 'l> { configs: &ChangeSetConfigs, ) -> VMResult { let move_vm = self.inner.get_move_vm(); - let (change_set, events, mut extensions) = self.inner.finish_with_extensions()?; + let (change_set, mut extensions) = self.inner.finish_with_extensions()?; let (change_set, resource_group_change_set) = Self::split_and_merge_resource_groups(move_vm, self.remote, change_set)?; @@ -176,9 +163,11 @@ impl<'r, 'l> SessionExt<'r, 'l> { let aggregator_context: NativeAggregatorContext = extensions.remove(); let aggregator_change_set = aggregator_context.into_change_set(); + let event_context: NativeEventContext = extensions.remove(); + let events = event_context.into_events(); + let change_set = Self::convert_change_set( self.remote, - self.new_slot_payer, self.features.is_storage_slot_metadata_enabled(), current_time.as_ref(), change_set, @@ -219,7 +208,7 @@ impl<'r, 'l> SessionExt<'r, 'l> { /// * Otherwise delete fn split_and_merge_resource_groups( runtime: &MoveVM, - remote: &dyn MoveResolverExt, + remote: &dyn AptosMoveResolver, change_set: MoveChangeSet, ) -> VMResult<(MoveChangeSet, MoveChangeSet)> { // The use of this implies that we could theoretically call unwrap with no consequences, @@ -305,13 +294,12 @@ impl<'r, 'l> SessionExt<'r, 'l> { } pub fn convert_change_set( - remote: &dyn MoveResolverExt, - new_slot_payer: Option, + remote: &dyn AptosMoveResolver, is_storage_slot_metadata_enabled: bool, current_time: Option<&CurrentTimeMicroseconds>, change_set: MoveChangeSet, resource_group_change_set: MoveChangeSet, - events: Vec, + events: Vec, table_change_set: TableChangeSet, aggregator_change_set: AggregatorChangeSet, ap_cache: &mut C, @@ -319,10 +307,10 @@ impl<'r, 'l> SessionExt<'r, 'l> { ) -> Result { let mut new_slot_metadata: Option = None; if is_storage_slot_metadata_enabled { - if let Some(payer) = new_slot_payer { - if let Some(current_time) = current_time { - new_slot_metadata = Some(StateValueMetadata::new(payer, 0, current_time)); - } + if let Some(current_time) = current_time { + // The deposit on the metadata is a placeholder (0), it will be updated later when + // storage fee is charged. 
+ new_slot_metadata = Some(StateValueMetadata::new(0, current_time)); } } let woc = WriteOpConverter { @@ -330,10 +318,10 @@ impl<'r, 'l> SessionExt<'r, 'l> { new_slot_metadata, }; - let mut resource_write_set = BTreeMap::new(); - let mut module_write_set = BTreeMap::new(); - let mut aggregator_write_set = BTreeMap::new(); - let mut aggregator_delta_set = BTreeMap::new(); + let mut resource_write_set = HashMap::new(); + let mut module_write_set = HashMap::new(); + let mut aggregator_write_set = HashMap::new(); + let mut aggregator_delta_set = HashMap::new(); for (addr, account_changeset) in change_set.into_inner() { let (modules, resources) = account_changeset.into_inner(); @@ -402,14 +390,6 @@ impl<'r, 'l> SessionExt<'r, 'l> { } } - let events = events - .into_iter() - .map(|(guid, seq_num, ty_tag, blob)| { - let key = bcs::from_bytes(guid.as_slice()) - .map_err(|_| VMStatus::error(StatusCode::EVENT_KEY_MISMATCH, None))?; - Ok(ContractEvent::new(key, seq_num, ty_tag, blob)) - }) - .collect::, VMStatus>>()?; VMChangeSet::new( resource_write_set, module_write_set, @@ -436,7 +416,7 @@ impl<'r, 'l> DerefMut for SessionExt<'r, 'l> { } struct WriteOpConverter<'r> { - remote: &'r dyn MoveResolverExt, + remote: &'r dyn AptosMoveResolver, new_slot_metadata: Option, } @@ -450,14 +430,17 @@ impl<'r> WriteOpConverter<'r> { use MoveStorageOp::*; use WriteOp::*; - let existing_value_opt = self.remote.get_state_value(state_key).map_err(|_| { - VMStatus::error( - StatusCode::STORAGE_ERROR, - err_msg("Storage read failed when converting change set."), - ) - })?; - - let write_op = match (existing_value_opt, move_storage_op) { + let maybe_existing_metadata = + self.remote + .get_state_value_metadata(state_key) + .map_err(|_| { + VMStatus::error( + StatusCode::STORAGE_ERROR, + err_msg("Storage read failed when converting change set."), + ) + })?; + + let write_op = match (maybe_existing_metadata, move_storage_op) { (None, Modify(_) | Delete) => { return Err(VMStatus::error( // Possible under speculative execution, returning storage error waiting for re-execution @@ -485,16 +468,16 @@ impl<'r> WriteOpConverter<'r> { metadata: metadata.clone(), }, }, - (Some(existing_value), Modify(data)) => { + (Some(existing_metadata), Modify(data)) => { // Inherit metadata even if the feature flags is turned off, for compatibility. - match existing_value.into_metadata() { + match existing_metadata { None => Modification(data), Some(metadata) => ModificationWithMetadata { data, metadata }, } }, - (Some(existing_value), Delete) => { + (Some(existing_metadata), Delete) => { // Inherit metadata even if the feature flags is turned off, for compatibility. - match existing_value.into_metadata() { + match existing_metadata { None => Deletion, Some(metadata) => DeletionWithMetadata { metadata }, } @@ -508,13 +491,13 @@ impl<'r> WriteOpConverter<'r> { state_key: &StateKey, value: u128, ) -> Result { - let existing_value_opt = self + let maybe_existing_metadata = self .remote - .get_state_value(state_key) + .get_state_value_metadata(state_key) .map_err(|_| VMStatus::error(StatusCode::STORAGE_ERROR, None))?; let data = serialize(&value); - let op = match existing_value_opt { + let op = match maybe_existing_metadata { None => { match &self.new_slot_metadata { // n.b. Aggregator writes historically did not distinguish Create vs Modify. 
@@ -525,7 +508,7 @@ impl<'r> WriteOpConverter<'r> { }, } }, - Some(existing_value) => match existing_value.into_metadata() { + Some(existing_metadata) => match existing_metadata { None => WriteOp::Modification(data), Some(metadata) => WriteOp::ModificationWithMetadata { data, metadata }, }, diff --git a/aptos-move/aptos-vm/src/move_vm_ext/vm.rs b/aptos-move/aptos-vm/src/move_vm_ext/vm.rs index 0a737f0621671..6165e75489e82 100644 --- a/aptos-move/aptos-vm/src/move_vm_ext/vm.rs +++ b/aptos-move/aptos-vm/src/move_vm_ext/vm.rs @@ -2,13 +2,14 @@ // SPDX-License-Identifier: Apache-2.0 use crate::{ - move_vm_ext::{MoveResolverExt, SessionExt, SessionId}, + move_vm_ext::{AptosMoveResolver, SessionExt, SessionId}, natives::aptos_natives_with_builder, }; use aptos_framework::natives::{ aggregator_natives::NativeAggregatorContext, code::NativeCodeContext, cryptography::{algebra::AlgebraContext, ristretto255_point::NativeRistrettoPointContext}, + event::NativeEventContext, state_storage::NativeStateStorageContext, transaction_context::NativeTransactionContext, }; @@ -144,9 +145,9 @@ impl MoveVmExt { ) } - pub fn new_session<'r, S: MoveResolverExt>( + pub fn new_session<'r, S: AptosMoveResolver>( &self, - remote: &'r S, + resolver: &'r S, session_id: SessionId, ) -> SessionExt<'r, '_> { let mut extensions = NativeContextExtensions::default(); @@ -156,12 +157,11 @@ impl MoveVmExt { .try_into() .expect("HashValue should convert to [u8; 32]"); - extensions.add(NativeTableContext::new(txn_hash, remote)); + extensions.add(NativeTableContext::new(txn_hash, resolver)); extensions.add(NativeRistrettoPointContext::new()); extensions.add(AlgebraContext::new()); - extensions.add(NativeAggregatorContext::new(txn_hash, remote)); + extensions.add(NativeAggregatorContext::new(txn_hash, resolver)); - let sender_opt = session_id.sender(); let script_hash = match session_id { SessionId::Txn { sender: _, @@ -187,16 +187,16 @@ impl MoveVmExt { self.chain_id, )); extensions.add(NativeCodeContext::default()); - extensions.add(NativeStateStorageContext::new(remote)); + extensions.add(NativeStateStorageContext::new(resolver)); + extensions.add(NativeEventContext::default()); // The VM code loader has bugs around module upgrade. After a module upgrade, the internal // cache needs to be flushed to work around those bugs. 
self.inner.flush_loader_cache_if_invalidated(); SessionExt::new( - self.inner.new_session_with_extensions(remote, extensions), - remote, - sender_opt, + self.inner.new_session_with_extensions(resolver, extensions), + resolver, self.features.clone(), ) } diff --git a/aptos-move/aptos-vm/src/natives.rs b/aptos-move/aptos-vm/src/natives.rs index e6bf832d41290..c84517220b30d 100644 --- a/aptos-move/aptos-vm/src/natives.rs +++ b/aptos-move/aptos-vm/src/natives.rs @@ -8,6 +8,8 @@ use anyhow::Error; use aptos_aggregator::{aggregator_extension::AggregatorID, resolver::AggregatorResolver}; #[cfg(feature = "testing")] use aptos_framework::natives::cryptography::algebra::AlgebraContext; +#[cfg(feature = "testing")] +use aptos_framework::natives::event::NativeEventContext; use aptos_gas_schedule::{MiscGasParameters, NativeGasParameters, LATEST_GAS_FEATURE_VERSION}; use aptos_native_interface::SafeNativeBuilder; #[cfg(feature = "testing")] @@ -141,4 +143,5 @@ fn unit_test_extensions_hook(exts: &mut NativeContextExtensions) { exts.add(NativeAggregatorContext::new([0; 32], &*DUMMY_RESOLVER)); exts.add(NativeRistrettoPointContext::new()); exts.add(AlgebraContext::new()); + exts.add(NativeEventContext::default()); } diff --git a/aptos-move/aptos-vm/src/sharded_block_executor/cross_shard_state_view.rs b/aptos-move/aptos-vm/src/sharded_block_executor/cross_shard_state_view.rs index 5c254b4610373..c2d25a4618d7b 100644 --- a/aptos-move/aptos-vm/src/sharded_block_executor/cross_shard_state_view.rs +++ b/aptos-move/aptos-vm/src/sharded_block_executor/cross_shard_state_view.rs @@ -131,10 +131,6 @@ impl<'a, S: StateView + Sync + Send> TStateView for CrossShardStateView<'a, S> { self.base_view.get_state_value(state_key) } - fn is_genesis(&self) -> bool { - unimplemented!("is_genesis is not implemented for InMemoryStateView") - } - fn get_usage(&self) -> Result { Ok(StateStorageUsage::new_untracked()) } diff --git a/aptos-move/aptos-vm/src/sharded_block_executor/test_utils.rs b/aptos-move/aptos-vm/src/sharded_block_executor/test_utils.rs index 2c9b646726739..e66e272007870 100644 --- a/aptos-move/aptos-vm/src/sharded_block_executor/test_utils.rs +++ b/aptos-move/aptos-vm/src/sharded_block_executor/test_utils.rs @@ -130,7 +130,7 @@ pub fn test_sharded_block_executor_no_conflict> .unwrap(); let unsharded_txn_output = AptosVM::execute_block( transactions.into_iter().map(|t| t.into_txn()).collect(), - &executor.data_store(), + executor.data_store(), None, ) .unwrap(); @@ -187,7 +187,7 @@ pub fn sharded_block_executor_with_conflict>( .unwrap(); let unsharded_txn_output = - AptosVM::execute_block(execution_ordered_txns, &executor.data_store(), None).unwrap(); + AptosVM::execute_block(execution_ordered_txns, executor.data_store(), None).unwrap(); compare_txn_outputs(unsharded_txn_output, sharded_txn_output); } @@ -245,6 +245,6 @@ pub fn sharded_block_executor_with_random_transfers Result<(), VMError> { + Err(metadata_validation_error(msg)) +} + +fn metadata_validation_error(msg: &str) -> VMError { + PartialVMError::new(StatusCode::EVENT_METADATA_VALIDATION_ERROR) + .with_message(format!("metadata and code bundle mismatch: {}", msg)) + .finish(Location::Undefined) +} + +/// Validate event metadata on modules one by one: +/// * Extract the event metadata +/// * Verify all changes are compatible upgrades (existing event attributes cannot be removed) +pub(crate) fn validate_module_events( + session: &mut SessionExt, + modules: &[CompiledModule], +) -> VMResult<()> { + for module in modules { + let mut new_event_structs = + if 
let Some(metadata) = aptos_framework::get_metadata_from_compiled_module(module) { + extract_event_metadata(&metadata)? + } else { + HashSet::new() + }; + + // Check all the emit calls have the correct struct with event attribute. + validate_emit_calls(&new_event_structs, module)?; + + let original_event_structs = + extract_event_metadata_from_module(session, &module.self_id())?; + + for member in original_event_structs { + // Fail if we see a removal of an event attribute. + if !new_event_structs.remove(&member) { + metadata_validation_err("Invalid change in event attributes")?; + } + } + } + Ok(()) +} + +/// Validate all the `0x1::event::emit` calls have the struct defined in the same module with event +/// attribute. +pub(crate) fn validate_emit_calls( + event_structs: &HashSet, + module: &CompiledModule, +) -> VMResult<()> { + for fun in module.function_defs() { + if let Some(code_unit) = &fun.code { + for bc in &code_unit.code { + if let Bytecode::CallGeneric(index) = bc { + let func_instantiation = &module.function_instantiation_at(*index); + let func_handle = module.function_handle_at(func_instantiation.handle); + let module_handle = module.module_handle_at(func_handle.module); + let module_addr = module.address_identifier_at(module_handle.address); + let module_name = module.identifier_at(module_handle.name); + let func_name = module.identifier_at(func_handle.name); + if module_addr != &AccountAddress::ONE + || module_name.as_str() != EVENT_MODULE_NAME + || func_name.as_str() != EVENT_EMIT_FUNCTION_NAME + { + continue; + } + let param = module + .signature_at(func_instantiation.type_parameters) + .0 + .first() + .ok_or_else(|| { + metadata_validation_error( + "Missing parameter for 0x1::event::emit function", + ) + })?; + match param { + StructInstantiation(index, _) | Struct(index) => { + let struct_handle = &module.struct_handle_at(*index); + let struct_name = module.identifier_at(struct_handle.name); + if struct_handle.module != module.self_handle_idx() { + metadata_validation_err(format!("{} passed to 0x1::event::emit function is not defined in the same module", struct_name).as_str()) + } else if !event_structs.contains(struct_name.as_str()) { + metadata_validation_err(format!("Missing #[event] attribute on {}. 
The #[event] attribute is required for all structs passed into 0x1::event::emit.", struct_name).as_str()) + } else { + Ok(()) + } + }, + _ => metadata_validation_err( + "Passed in a non-struct parameter into 0x1::event::emit.", + ), + }?; + } + } + } + } + Ok(()) +} + +/// Given a module id extract all event metadata +pub(crate) fn extract_event_metadata_from_module( + session: &mut SessionExt, + module_id: &ModuleId, +) -> VMResult> { + let metadata = session.load_module(module_id).map(|module| { + CompiledModule::deserialize(&module) + .map(|module| aptos_framework::get_metadata_from_compiled_module(&module)) + }); + + if let Ok(Ok(Some(metadata))) = metadata { + extract_event_metadata(&metadata) + } else { + Ok(HashSet::new()) + } +} + +/// Given a module id extract all event metadata +pub(crate) fn extract_event_metadata( + metadata: &RuntimeModuleMetadataV1, +) -> VMResult> { + let mut event_structs = HashSet::new(); + for (struct_, attrs) in &metadata.struct_attributes { + for attr in attrs { + if attr.is_event() && !event_structs.insert(struct_.clone()) { + metadata_validation_err("Found duplicate event attribute")?; + } + } + } + Ok(event_structs) +} + +pub(crate) fn verify_no_event_emission_in_script( + script_code: &[u8], + max_binary_format_version: u32, +) -> VMResult<()> { + let script = match CompiledScript::deserialize_with_max_version( + script_code, + max_binary_format_version, + ) { + Ok(script) => script, + Err(err) => { + let msg = format!("[VM] deserializer for script returned error: {:?}", err); + return Err(PartialVMError::new(StatusCode::CODE_DESERIALIZATION_ERROR) + .with_message(msg) + .finish(Location::Script)); + }, + }; + for bc in &script.code().code { + if let Bytecode::CallGeneric(index) = bc { + let func_instantiation = &script.function_instantiation_at(*index); + let func_handle = script.function_handle_at(func_instantiation.handle); + let module_handle = script.module_handle_at(func_handle.module); + let module_addr = script.address_identifier_at(module_handle.address); + let module_name = script.identifier_at(module_handle.name); + let func_name = script.identifier_at(func_handle.name); + if module_addr == &AccountAddress::ONE + && module_name.as_str() == EVENT_MODULE_NAME + && func_name.as_str() == EVENT_EMIT_FUNCTION_NAME + { + return Err(PartialVMError::new(StatusCode::INVALID_OPERATION_IN_SCRIPT) + .finish(Location::Script)); + } + } + } + Ok(()) +} diff --git a/aptos-move/aptos-vm/src/verifier/mod.rs b/aptos-move/aptos-vm/src/verifier/mod.rs index c1a3d37bacffe..5f2c3fb22dd4b 100644 --- a/aptos-move/aptos-vm/src/verifier/mod.rs +++ b/aptos-move/aptos-vm/src/verifier/mod.rs @@ -1,5 +1,6 @@ // Copyright © Aptos Foundation // SPDX-License-Identifier: Apache-2.0 +pub(crate) mod event_validation; pub(crate) mod module_init; pub(crate) mod resource_groups; pub mod transaction_arg_validation; diff --git a/aptos-move/block-executor/src/executor.rs b/aptos-move/block-executor/src/executor.rs index 58bf62685bb35..1d2824ac03a1e 100644 --- a/aptos-move/block-executor/src/executor.rs +++ b/aptos-move/block-executor/src/executor.rs @@ -13,9 +13,9 @@ use crate::{ task::{ExecutionStatus, ExecutorTask, Transaction, TransactionOutput}, txn_commit_hook::TransactionCommitHook, txn_last_input_output::TxnLastInputOutput, - view::{LatestView, MVHashMapView}, + view::{LatestView, ParallelState, SequentialState, ViewState}, }; -use aptos_aggregator::delta_change_set::{deserialize, serialize}; +use aptos_aggregator::delta_change_set::serialize; use aptos_logger::{debug, 
info}; use aptos_mvhashmap::{ types::{MVDataError, MVDataOutput, TxnIndex, Version}, @@ -28,8 +28,10 @@ use aptos_vm_logging::{clear_speculative_txn_logs, init_speculative_logs}; use num_cpus; use rayon::ThreadPool; use std::{ + collections::HashMap, marker::PhantomData, sync::{ + atomic::AtomicU32, mpsc, mpsc::{Receiver, Sender}, Arc, @@ -109,7 +111,6 @@ where } fn execute( - &self, version: Version, signature_verified_block: &[T], last_input_output: &TxnLastInputOutput, @@ -117,37 +118,46 @@ where scheduler: &Scheduler, executor: &E, base_view: &S, + latest_view: ParallelState, ) -> SchedulerTask { let _timer = TASK_EXECUTE_SECONDS.start_timer(); let (idx_to_execute, incarnation) = version; let txn = &signature_verified_block[idx_to_execute as usize]; - let speculative_view = MVHashMapView::new(versioned_cache, scheduler); - // VM execution. - let execute_result = executor.execute_transaction( - &LatestView::::new_mv_view(base_view, &speculative_view, idx_to_execute), - txn, - idx_to_execute, - false, - ); - let mut prev_modified_keys = last_input_output.modified_keys(idx_to_execute); + let sync_view = LatestView::new(base_view, ViewState::Sync(latest_view), idx_to_execute); + let execute_result = executor.execute_transaction(&sync_view, txn, idx_to_execute, false); + + let mut prev_modified_keys = last_input_output + .modified_keys(idx_to_execute) + .map_or(HashMap::new(), |keys| keys.collect()); // For tracking whether the recent execution wrote outside of the previous write/delta set. let mut updates_outside = false; let mut apply_updates = |output: &E::Output| { // First, apply writes. let write_version = (idx_to_execute, incarnation); - for (k, v) in output.get_writes().into_iter() { - if !prev_modified_keys.remove(&k) { + for (k, v) in output + .resource_write_set() + .into_iter() + .chain(output.aggregator_v1_write_set().into_iter()) + { + if prev_modified_keys.remove(&k).is_none() { updates_outside = true; } - versioned_cache.write(k, write_version, v); + versioned_cache.data().write(k, write_version, v); + } + + for (k, v) in output.module_write_set().into_iter() { + if prev_modified_keys.remove(&k).is_none() { + updates_outside = true; + } + versioned_cache.modules().write(k, idx_to_execute, v); } // Then, apply deltas. - for (k, d) in output.get_deltas().into_iter() { - if !prev_modified_keys.remove(&k) { + for (k, d) in output.aggregator_v1_delta_set().into_iter() { + if prev_modified_keys.remove(&k).is_none() { updates_outside = true; } versioned_cache.add_delta(k, idx_to_execute, d); @@ -176,12 +186,16 @@ where }; // Remove entries from previous write/delta set that were not overwritten. - for k in prev_modified_keys { - versioned_cache.delete(&k, idx_to_execute); + for (k, is_module) in prev_modified_keys { + if is_module { + versioned_cache.modules().delete(&k, idx_to_execute); + } else { + versioned_cache.data().delete(&k, idx_to_execute); + } } if last_input_output - .record(idx_to_execute, speculative_view.take_reads(), result) + .record(idx_to_execute, sync_view.take_reads(), result) .is_err() { // When there is module publishing r/w intersection, can early halt BlockSTM to @@ -193,7 +207,6 @@ where } fn validate( - &self, version_to_validate: Version, validation_wave: Wave, last_input_output: &TxnLastInputOutput, @@ -237,8 +250,14 @@ where clear_speculative_txn_logs(idx_to_validate as usize); // Not valid and successfully aborted, mark the latest write/delta sets as estimates. 
- for k in last_input_output.modified_keys(idx_to_validate) { - versioned_cache.mark_estimate(&k, idx_to_validate); + if let Some(keys) = last_input_output.modified_keys(idx_to_validate) { + for (k, is_module_path) in keys { + if is_module_path { + versioned_cache.modules().mark_estimate(&k, idx_to_validate); + } else { + versioned_cache.data().mark_estimate(&k, idx_to_validate); + } + } } scheduler.finish_abort(idx_to_validate, incarnation) @@ -347,10 +366,10 @@ where last_input_output: &TxnLastInputOutput, base_view: &S, ) { - let (num_deltas, delta_keys) = last_input_output.delta_keys(txn_idx); + let delta_keys = last_input_output.delta_keys(txn_idx); let _events = last_input_output.events(txn_idx); - let mut delta_writes = Vec::with_capacity(num_deltas); - for k in delta_keys { + let mut delta_writes = Vec::with_capacity(delta_keys.len()); + for k in delta_keys.into_iter() { // Note that delta materialization happens concurrently, but under concurrent // commit_hooks (which may be dispatched by the coordinator), threads may end up // contending on delta materialization of the same aggregator. However, the @@ -364,11 +383,12 @@ where let committed_delta = versioned_cache .materialize_delta(&k, txn_idx) .unwrap_or_else(|op| { + // TODO: this logic should improve with the new AGGR data structure + // TODO: and the ugly base_view parameter will also disappear. let storage_value = base_view - .get_state_value_bytes(&k) - .expect("No base value for committed delta in storage") - .map(|bytes| deserialize(&bytes)) - .expect("Cannot deserialize base value for committed delta"); + .get_state_value_u128(&k) + .expect("Error reading the base value for committed delta in storage") + .expect("No base value for committed delta in storage"); versioned_cache.set_aggregator_base_value(&k, storage_value); op.apply_to(storage_value) @@ -376,10 +396,7 @@ where }); // Must contain committed value as we set the base value above. - delta_writes.push(( - k.clone(), - WriteOp::Modification(serialize(&committed_delta)), - )); + delta_writes.push((k, WriteOp::Modification(serialize(&committed_delta)))); } last_input_output.record_delta_writes(txn_idx, delta_writes); if let Some(txn_commit_listener) = &self.transaction_commit_hook { @@ -404,7 +421,9 @@ where last_input_output: &TxnLastInputOutput, versioned_cache: &MVHashMap, scheduler: &Scheduler, + // TODO: should not need to pass base view. base_view: &S, + shared_counter: &AtomicU32, role: CommitRole, ) { // Make executor for each task. TODO: fast concurrent executor. 
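The executor changes above replace the single get_writes()/get_deltas() accessors with per-kind sets (resource, module, and aggregator v1 writes and deltas) and route each kind to its own multi-version map, while previously modified keys now carry an is-module flag so stale entries are deleted from, or marked as estimates in, the matching map. A minimal, self-contained sketch of that routing idea follows; ToyOutput and route_writes are hypothetical names, and plain HashMaps stand in for the real TransactionOutput/MVHashMap APIs.

use std::collections::HashMap;

// Hypothetical stand-in for a transaction output whose writes are already split by kind.
struct ToyOutput {
    resource_writes: HashMap<String, Vec<u8>>,
    module_writes: HashMap<String, Vec<u8>>,
}

// Apply each write to the map that owns its kind and return (key, is_module) pairs,
// mirroring how modified_keys() lets a later pass clean up the correct map.
fn route_writes(
    output: &ToyOutput,
    data_map: &mut HashMap<String, Vec<u8>>,
    module_map: &mut HashMap<String, Vec<u8>>,
) -> HashMap<String, bool> {
    let mut modified = HashMap::new();
    for (key, value) in &output.resource_writes {
        data_map.insert(key.clone(), value.clone());
        modified.insert(key.clone(), false);
    }
    for (key, value) in &output.module_writes {
        module_map.insert(key.clone(), value.clone());
        modified.insert(key.clone(), true);
    }
    modified
}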
@@ -448,7 +467,7 @@ where } scheduler_task = match scheduler_task { - SchedulerTask::ValidationTask(version_to_validate, wave) => self.validate( + SchedulerTask::ValidationTask(version_to_validate, wave) => Self::validate( version_to_validate, wave, last_input_output, @@ -456,7 +475,7 @@ where scheduler, ), SchedulerTask::ExecutionTask(version_to_execute, ExecutionTaskType::Execution) => { - self.execute( + Self::execute( version_to_execute, block, last_input_output, @@ -464,6 +483,7 @@ where scheduler, &executor, base_view, + ParallelState::new(versioned_cache, scheduler, shared_counter), ) }, SchedulerTask::ExecutionTask(_, ExecutionTaskType::Wakeup(condvar)) => { @@ -509,6 +529,7 @@ where assert!(self.concurrency_level > 1, "Must use sequential execution"); let versioned_cache = MVHashMap::new(); + let shared_counter = AtomicU32::new(0); if signature_verified_block.is_empty() { return Ok(vec![]); @@ -544,6 +565,7 @@ where &versioned_cache, &scheduler, base_view, + &shared_counter, role, ); }); @@ -603,6 +625,7 @@ where let init_timer = VM_INIT_SECONDS.start_timer(); let executor = E::init(executor_arguments); drop(init_timer); + let data_map = UnsyncMap::new(); let mut ret = Vec::with_capacity(num_txns); @@ -610,24 +633,32 @@ where let mut accumulated_fee_statement = FeeStatement::zero(); for (idx, txn) in signature_verified_block.iter().enumerate() { - let res = executor.execute_transaction( - &LatestView::::new_btree_view(base_view, &data_map, idx as TxnIndex), - txn, + let unsync_view = LatestView::::new( + base_view, + ViewState::Unsync(SequentialState { + unsync_map: &data_map, + _counter: &0, + }), idx as TxnIndex, - true, ); + let res = executor.execute_transaction(&unsync_view, txn, idx as TxnIndex, true); let must_skip = matches!(res, ExecutionStatus::SkipRest(_)); match res { ExecutionStatus::Success(output) | ExecutionStatus::SkipRest(output) => { assert_eq!( - output.get_deltas().len(), + output.aggregator_v1_delta_set().len(), 0, "Sequential execution must materialize deltas" ); // Apply the writes. - for (ap, write_op) in output.get_writes().into_iter() { - data_map.write(ap, write_op); + for (key, write_op) in output + .resource_write_set() + .into_iter() + .chain(output.aggregator_v1_write_set().into_iter()) + .chain(output.module_write_set().into_iter()) + { + data_map.write(key, write_op); } // Calculating the accumulated gas costs of the committed txns. let fee_statement = output.fee_statement(); diff --git a/aptos-move/block-executor/src/proptest_types/baseline.rs b/aptos-move/block-executor/src/proptest_types/baseline.rs index f65fe0943789e..094914545e70a 100644 --- a/aptos-move/block-executor/src/proptest_types/baseline.rs +++ b/aptos-move/block-executor/src/proptest_types/baseline.rs @@ -78,16 +78,18 @@ enum BaselineStatus { /// /// For both read_values and resolved_deltas the keys are not included because they are /// in the same order as the reads and deltas in the Transaction::Write. -pub(crate) struct BaselineOutput { +pub(crate) struct BaselineOutput { status: BaselineStatus, read_values: Vec>, ()>>, - resolved_deltas: Vec, ()>>, + resolved_deltas: Vec, ()>>, } -impl BaselineOutput { +impl + BaselineOutput +{ /// Must be invoked after parallel execution to have incarnation information set and /// work with dynamic read/writes. - pub(crate) fn generate( + pub(crate) fn generate( txns: &[MockTransaction], maybe_block_gas_limit: Option, ) -> Self { @@ -107,7 +109,7 @@ impl BaselineOutput { // In executor, SkipRest skips from the next index. 
Test assumes it's an empty // transaction, so create a successful empty reads and deltas. read_values.push(Ok(vec![])); - resolved_deltas.push(Ok(vec![])); + resolved_deltas.push(Ok(HashMap::new())); status = BaselineStatus::SkipRest; break; @@ -167,8 +169,8 @@ impl BaselineOutput { .map(|(k, v)| { // In this case transaction did not fail due to delta application // errors, and thus we should update written_ and resolved_ worlds. - current_world.insert(k, BaselineValue::Aggregator(v)); - v + current_world.insert(k.clone(), BaselineValue::Aggregator(v)); + (k, v) }) .collect())); @@ -207,7 +209,7 @@ impl BaselineOutput { // Used for testing, hence the function asserts the correctness conditions within // itself to be easily traceable in case of an error. - pub(crate) fn assert_output( + pub(crate) fn assert_output( &self, results: &BlockExecutorResult>, usize>, ) { @@ -232,20 +234,17 @@ impl BaselineOutput { baseline_read.assert_read_result(result_read) }); - resolved_deltas + let baseline_deltas = resolved_deltas .as_ref() - .expect("Aggregator failures not yet tested") + .expect("Aggregator failures not yet tested"); + output + .materialized_delta_writes + .get() + .expect("Delta writes must be set") .iter() - .zip( - output - .materialized_delta_writes - .get() - .expect("Delta writes must be set") - .iter(), - ) - .for_each(|(baseline_delta_write, (_, result_delta_write))| { + .for_each(|(k, result_delta_write)| { assert_eq!( - *baseline_delta_write, + *baseline_deltas.get(k).expect("Baseline must contain delta"), AggregatorValue::from_write(result_delta_write) .expect("Delta to a non-existent aggregator") .into() diff --git a/aptos-move/block-executor/src/proptest_types/bencher.rs b/aptos-move/block-executor/src/proptest_types/bencher.rs index efe5a228fad14..abe983b8d9564 100644 --- a/aptos-move/block-executor/src/proptest_types/bencher.rs +++ b/aptos-move/block-executor/src/proptest_types/bencher.rs @@ -40,7 +40,7 @@ pub(crate) struct BencherState< Vec: From, { transactions: Vec, ValueType, E>>, - baseline_output: BaselineOutput>, + baseline_output: BaselineOutput, ValueType>, } impl Bencher diff --git a/aptos-move/block-executor/src/proptest_types/tests.rs b/aptos-move/block-executor/src/proptest_types/tests.rs index 775a361a10343..c2561abeb098c 100644 --- a/aptos-move/block-executor/src/proptest_types/tests.rs +++ b/aptos-move/block-executor/src/proptest_types/tests.rs @@ -339,7 +339,7 @@ fn module_publishing_fallback_with_block_gas_limit( vec![], vec![], 2, - (false, true), + (true, false), maybe_block_gas_limit, ); run_transactions::<[u8; 32], [u8; 32], MockEvent>( diff --git a/aptos-move/block-executor/src/proptest_types/types.rs b/aptos-move/block-executor/src/proptest_types/types.rs index 356e994289d60..ea80c7d74cf1d 100644 --- a/aptos-move/block-executor/src/proptest_types/types.rs +++ b/aptos-move/block-executor/src/proptest_types/types.rs @@ -25,7 +25,7 @@ use once_cell::sync::OnceCell; use proptest::{arbitrary::Arbitrary, collection::vec, prelude::*, proptest, sample::Index}; use proptest_derive::Arbitrary; use std::{ - collections::{hash_map::DefaultHasher, BTreeSet}, + collections::{hash_map::DefaultHasher, BTreeSet, HashMap}, convert::TryInto, fmt::Debug, hash::{Hash, Hasher}, @@ -63,10 +63,6 @@ where StateViewId::Miscellaneous } - fn is_genesis(&self) -> bool { - unreachable!(); - } - fn get_usage(&self) -> anyhow::Result { unreachable!(); } @@ -92,10 +88,6 @@ where StateViewId::Miscellaneous } - fn is_genesis(&self) -> bool { - unreachable!(); - } - fn get_usage(&self) 
-> anyhow::Result { unreachable!(); } @@ -563,7 +555,6 @@ where #[derive(Debug)] pub(crate) struct MockOutput { - // TODO: Split writes into resources & modules. pub(crate) writes: Vec<(K, V)>, pub(crate) deltas: Vec<(K, DeltaOp)>, pub(crate) events: Vec, @@ -580,12 +571,30 @@ where { type Txn = MockTransaction; - fn get_writes(&self) -> Vec<(K, V)> { - self.writes.clone() + fn resource_write_set(&self) -> HashMap { + self.writes + .iter() + .filter(|(k, _)| k.module_path().is_none()) + .cloned() + .collect() + } + + fn module_write_set(&self) -> HashMap { + self.writes + .iter() + .filter(|(k, _)| k.module_path().is_some()) + .cloned() + .collect() + } + + // Aggregator v1 writes are included in resource_write_set for tests (writes are produced + // for all keys including ones for v1_aggregators without distinguishing). + fn aggregator_v1_write_set(&self) -> HashMap { + HashMap::new() } - fn get_deltas(&self) -> Vec<(K, DeltaOp)> { - self.deltas.clone() + fn aggregator_v1_delta_set(&self) -> HashMap { + self.deltas.iter().cloned().collect() } fn get_events(&self) -> Vec { diff --git a/aptos-move/block-executor/src/task.rs b/aptos-move/block-executor/src/task.rs index 5799165593429..228eab71cacb1 100644 --- a/aptos-move/block-executor/src/task.rs +++ b/aptos-move/block-executor/src/task.rs @@ -11,7 +11,7 @@ use aptos_types::{ fee_statement::FeeStatement, write_set::{TransactionWrite, WriteOp}, }; -use std::{fmt::Debug, hash::Hash}; +use std::{collections::HashMap, fmt::Debug, hash::Hash}; /// The execution result of a transaction #[derive(Debug)] @@ -74,16 +74,22 @@ pub trait TransactionOutput: Send + Sync + Debug { /// Type of transaction and its associated key and value. type Txn: Transaction; - /// Get the writes of a transaction from its output. - fn get_writes( + /// Get the writes of a transaction from its output, separately for resources, modules and + /// aggregator_v1. + fn resource_write_set( &self, - ) -> Vec<( - ::Key, - ::Value, - )>; + ) -> HashMap<::Key, ::Value>; + + fn module_write_set( + &self, + ) -> HashMap<::Key, ::Value>; + + fn aggregator_v1_write_set( + &self, + ) -> HashMap<::Key, ::Value>; /// Get the aggregator deltas of a transaction from its output. - fn get_deltas(&self) -> Vec<(::Key, DeltaOp)>; + fn aggregator_v1_delta_set(&self) -> HashMap<::Key, DeltaOp>; /// Get the events of a transaction from its output. fn get_events(&self) -> Vec<::Event>; diff --git a/aptos-move/block-executor/src/txn_last_input_output.rs b/aptos-move/block-executor/src/txn_last_input_output.rs index 20ad591e47e50..11717bdd137be 100644 --- a/aptos-move/block-executor/src/txn_last_input_output.rs +++ b/aptos-move/block-executor/src/txn_last_input_output.rs @@ -15,7 +15,6 @@ use arc_swap::ArcSwapOption; use crossbeam::utils::CachePadded; use dashmap::DashSet; use std::{ - collections::HashSet, fmt::Debug, iter::{empty, Iterator}, sync::{ @@ -31,7 +30,6 @@ type TxnInput = Vec>; pub(crate) struct TxnOutput { output_status: ExecutionStatus>, } -type KeySet = HashSet<<::Txn as Transaction>::Key>; impl TxnOutput { pub fn from_output_status(output_status: ExecutionStatus>) -> Self { @@ -56,6 +54,8 @@ enum ReadKind { Storage, /// Read triggered a delta application failure. DeltaApplicationFailure, + /// Module read. TODO: Design a better representation once more meaningfully separated. 
+ Module, } #[derive(Clone)] @@ -87,6 +87,13 @@ impl ReadDescriptor { } } + pub fn from_module(access_path: K) -> Self { + Self { + access_path, + kind: ReadKind::Module, + } + } + pub fn from_delta_application_failure(access_path: K) -> Self { Self { access_path, @@ -115,7 +122,8 @@ impl ReadDescriptor { // Does the read descriptor describe a read from storage. pub fn validate_storage(&self) -> bool { - self.kind == ReadKind::Storage + // Module reading supported from storage version only at the moment. + self.kind == ReadKind::Storage || self.kind == ReadKind::Module } // Does the read descriptor describe to a read with a delta application failure. @@ -187,13 +195,18 @@ impl TxnLastInputO input: Vec>, output: ExecutionStatus>, ) -> anyhow::Result<()> { - let read_modules: Vec = - input.iter().filter_map(|desc| desc.module_path()).collect(); + let read_modules: Vec = input + .iter() + .filter_map(|desc| { + matches!(desc.kind, ReadKind::Module) + .then(|| desc.module_path().expect("Module path guaranteed to exist")) + }) + .collect(); let written_modules: Vec = match &output { ExecutionStatus::Success(output) | ExecutionStatus::SkipRest(output) => output - .get_writes() - .into_iter() - .filter_map(|(k, _)| k.module_path()) + .module_write_set() + .keys() + .map(|k| k.module_path().expect("Module path guaranteed to exist")) .collect(), ExecutionStatus::Abort(_) => Vec::new(), }; @@ -264,50 +277,41 @@ impl TxnLastInputO self.outputs[txn_idx as usize].load_full() } - // Extracts a set of paths written or updated during execution from transaction - // output: (modified by writes, modified by deltas). - pub(crate) fn modified_keys(&self, txn_idx: TxnIndex) -> KeySet { - match &self.outputs[txn_idx as usize].load_full() { - None => HashSet::new(), - Some(txn_output) => match &txn_output.output_status { - ExecutionStatus::Success(t) | ExecutionStatus::SkipRest(t) => t - .get_writes() - .into_iter() - .map(|(k, _)| k) - .chain(t.get_deltas().into_iter().map(|(k, _)| k)) - .collect(), - ExecutionStatus::Abort(_) => HashSet::new(), - }, - } + // Extracts a set of paths (keys) written or updated during execution from transaction + // output, .1 for each item is false for non-module paths and true for module paths. 
+ pub(crate) fn modified_keys( + &self, + txn_idx: TxnIndex, + ) -> Option::Txn as Transaction>::Key, bool)>> + { + self.outputs[txn_idx as usize] + .load_full() + .and_then(|txn_output| match &txn_output.output_status { + ExecutionStatus::Success(t) | ExecutionStatus::SkipRest(t) => Some( + t.resource_write_set() + .into_keys() + .chain(t.aggregator_v1_write_set().into_keys()) + .chain(t.aggregator_v1_delta_set().into_keys()) + .map(|k| (k, false)) + .chain(t.module_write_set().into_keys().map(|k| (k, true))), + ), + ExecutionStatus::Abort(_) => None, + }) } pub(crate) fn delta_keys( &self, txn_idx: TxnIndex, - ) -> ( - usize, - Box::Txn as Transaction>::Key>>, - ) { - let ret: ( - usize, - Box::Txn as Transaction>::Key>>, - ) = self.outputs[txn_idx as usize].load().as_ref().map_or( - ( - 0, - Box::new(empty::<<::Txn as Transaction>::Key>()), - ), + ) -> Vec<<::Txn as Transaction>::Key> { + self.outputs[txn_idx as usize].load().as_ref().map_or( + vec![], |txn_output| match &txn_output.output_status { ExecutionStatus::Success(t) | ExecutionStatus::SkipRest(t) => { - let deltas = t.get_deltas(); - (deltas.len(), Box::new(deltas.into_iter().map(|(k, _)| k))) + t.aggregator_v1_delta_set().into_keys().collect() }, - ExecutionStatus::Abort(_) => ( - 0, - Box::new(empty::<<::Txn as Transaction>::Key>()), - ), + ExecutionStatus::Abort(_) => vec![], }, - ); - ret + ) } pub(crate) fn events( diff --git a/aptos-move/block-executor/src/view.rs b/aptos-move/block-executor/src/view.rs index 970a495c27a9e..9ab6f3abdfb85 100644 --- a/aptos-move/block-executor/src/view.rs +++ b/aptos-move/block-executor/src/view.rs @@ -8,7 +8,7 @@ use crate::{ txn_last_input_output::ReadDescriptor, }; use anyhow::Result; -use aptos_aggregator::delta_change_set::{deserialize, serialize}; +use aptos_aggregator::delta_change_set::serialize; use aptos_logger::error; use aptos_mvhashmap::{ types::{MVDataError, MVDataOutput, MVModulesError, MVModulesOutput, TxnIndex}, @@ -23,20 +23,11 @@ use aptos_types::{ write_set::TransactionWrite, }; use aptos_vm_logging::{log_schema::AdapterLogSchema, prelude::*}; -use std::{cell::RefCell, fmt::Debug, hash::Hash, sync::Arc}; - -/// A struct that is always used by a single thread performing an execution task. The struct is -/// passed to the VM and acts as a proxy to resolve reads first in the shared multi-version -/// data-structure. It also allows the caller to track the read-set and any dependencies. -/// -/// TODO(issue 10177): MvHashMapView currently needs to be sync due to trait bounds, but should -/// not be. In this case, the read_dependency member can have a RefCell type and the -/// captured_reads member can have RefCell>> type. -pub(crate) struct MVHashMapView<'a, K, V: TransactionWrite, X: Executable> { - versioned_map: &'a MVHashMap, - scheduler: &'a Scheduler, - captured_reads: RefCell>>, -} +use std::{ + cell::RefCell, + fmt::Debug, + sync::{atomic::AtomicU32, Arc}, +}; /// A struct which describes the result of the read from the proxy. The client /// can interpret these types to further resolve the reads. 
@@ -54,48 +45,48 @@ pub(crate) enum ReadResult { None, } -impl< - 'a, - K: ModulePath + PartialOrd + Ord + Send + Clone + Debug + Hash + Eq, - V: TransactionWrite + Send + Sync, - X: Executable, - > MVHashMapView<'a, K, V, X> -{ - pub(crate) fn new(versioned_map: &'a MVHashMap, scheduler: &'a Scheduler) -> Self { +pub(crate) struct ParallelState<'a, T: Transaction, X: Executable> { + versioned_map: &'a MVHashMap, + scheduler: &'a Scheduler, + _counter: &'a AtomicU32, + captured_reads: RefCell>>, +} + +impl<'a, T: Transaction, X: Executable> ParallelState<'a, T, X> { + pub(crate) fn new( + shared_map: &'a MVHashMap, + shared_scheduler: &'a Scheduler, + shared_counter: &'a AtomicU32, + ) -> Self { Self { - versioned_map, - scheduler, + versioned_map: shared_map, + scheduler: shared_scheduler, + _counter: shared_counter, captured_reads: RefCell::new(Vec::new()), } } - /// Drains the captured reads. - pub(crate) fn take_reads(&self) -> Vec> { - self.captured_reads.take() - } - // TODO: Actually fill in the logic to record fetched executables, etc. fn fetch_module( &self, - key: &K, + key: &T::Key, txn_idx: TxnIndex, - ) -> anyhow::Result, MVModulesError> { - // Add a fake read from storage to register in reads for now in order - // for the read / write path intersection fallback for modules to still work. + ) -> anyhow::Result, MVModulesError> { + // Register a fake read for the read / write path intersection fallback for modules. self.captured_reads .borrow_mut() - .push(ReadDescriptor::from_storage(key.clone())); + .push(ReadDescriptor::from_module(key.clone())); self.versioned_map.fetch_module(key, txn_idx) } - fn set_aggregator_base_value(&self, key: &K, value: u128) { + fn set_aggregator_base_value(&self, key: &T::Key, value: u128) { self.versioned_map.set_aggregator_base_value(key, value); } /// Captures a read from the VM execution, but not unresolved deltas, as in this case it is the /// callers responsibility to set the aggregator's base value and call fetch_data again. - fn fetch_data(&self, key: &K, txn_idx: TxnIndex) -> ReadResult { + fn fetch_data(&self, key: &T::Key, txn_idx: TxnIndex) -> ReadResult { use MVDataError::*; use MVDataOutput::*; @@ -169,39 +160,47 @@ impl< } } -enum ViewMapKind<'a, T: Transaction, X: Executable> { - MultiVersion(&'a MVHashMapView<'a, T::Key, T::Value, X>), - Unsync(&'a UnsyncMap), +pub(crate) struct SequentialState<'a, T: Transaction, X: Executable> { + pub(crate) unsync_map: &'a UnsyncMap, + pub(crate) _counter: &'a u32, } +pub(crate) enum ViewState<'a, T: Transaction, X: Executable> { + Sync(ParallelState<'a, T, X>), + Unsync(SequentialState<'a, T, X>), +} + +/// A struct that represents a single block execution worker thread's view into the state, +/// some of which (in Sync case) might be shared with other workers / threads. By implementing +/// all necessary traits, LatestView is provided to the VM and used to intercept the reads. +/// In the Sync case, also records captured reads for later validation. latest_txn_idx +/// must be set according to the latest transaction that the worker was / is executing. 
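// The doc comment above introduces LatestView. Below is a minimal, self-contained sketch of
// the Sync/Unsync dispatch it performs; ToyParallel, ToySequential, ToyViewState, and toy_read
// are hypothetical stand-ins, not the real ParallelState/SequentialState/LatestView APIs.
use std::collections::HashMap;

struct ToyParallel {
    shared: HashMap<String, u64>, // stands in for the shared multi-version map
}

struct ToySequential {
    unsync: HashMap<String, u64>, // stands in for the single-threaded UnsyncMap
}

enum ToyViewState {
    Sync(ToyParallel),
    Unsync(ToySequential),
}

// Reads resolve against whichever state the view wraps, mirroring the match on
// ViewState in get_state_value below.
fn toy_read(state: &ToyViewState, key: &str) -> Option<u64> {
    match state {
        ToyViewState::Sync(p) => p.shared.get(key).copied(),
        ToyViewState::Unsync(s) => s.unsync.get(key).copied(),
    }
}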
pub(crate) struct LatestView<'a, T: Transaction, S: TStateView, X: Executable> { base_view: &'a S, - latest_view: ViewMapKind<'a, T, X>, + latest_view: ViewState<'a, T, X>, txn_idx: TxnIndex, } impl<'a, T: Transaction, S: TStateView, X: Executable> LatestView<'a, T, S, X> { - pub(crate) fn new_mv_view( + pub(crate) fn new( base_view: &'a S, - map: &'a MVHashMapView<'a, T::Key, T::Value, X>, + latest_view: ViewState<'a, T, X>, txn_idx: TxnIndex, - ) -> LatestView<'a, T, S, X> { - LatestView { + ) -> Self { + Self { base_view, - latest_view: ViewMapKind::MultiVersion(map), + latest_view, txn_idx, } } - pub(crate) fn new_btree_view( - base_view: &'a S, - map: &'a UnsyncMap, - txn_idx: TxnIndex, - ) -> LatestView<'a, T, S, X> { - LatestView { - base_view, - latest_view: ViewMapKind::Unsync(map), - txn_idx, + /// Drains the captured reads. + pub(crate) fn take_reads(&self) -> Vec> { + match &self.latest_view { + ViewState::Sync(state) => state.captured_reads.take(), + ViewState::Unsync(_) => { + unreachable!("Take reads called in sequential setting (not captured)") + }, } } @@ -228,13 +227,13 @@ impl<'a, T: Transaction, S: TStateView, X: Executable> TStateView type Key = T::Key; fn get_state_value(&self, state_key: &T::Key) -> anyhow::Result> { - match self.latest_view { - ViewMapKind::MultiVersion(map) => match state_key.module_path() { + match &self.latest_view { + ViewState::Sync(state) => match state_key.module_path() { Some(_) => { use MVModulesError::*; use MVModulesOutput::*; - match map.fetch_module(state_key, self.txn_idx) { + match state.fetch_module(state_key, self.txn_idx) { Ok(Executable(_)) => unreachable!("Versioned executable not implemented"), Ok(Module((v, _))) => Ok(v.as_state_value()), Err(Dependency(_)) => { @@ -246,20 +245,19 @@ impl<'a, T: Transaction, S: TStateView, X: Executable> TStateView } }, None => { - let mut mv_value = map.fetch_data(state_key, self.txn_idx); + let mut mv_value = state.fetch_data(state_key, self.txn_idx); if matches!(mv_value, ReadResult::Unresolved) { - let from_storage = - self.base_view.get_state_value_bytes(state_key)?.map_or( - Err(VMStatus::error(StatusCode::STORAGE_ERROR, None)), - |bytes| Ok(deserialize(&bytes)), - )?; + let from_storage = self + .base_view + .get_state_value_u128(state_key)? + .ok_or(VMStatus::error(StatusCode::STORAGE_ERROR, None))?; // Store base value in the versioned data-structure directly, so subsequent // reads can be resolved to U128 directly without storage calls. 
- map.set_aggregator_base_value(state_key, from_storage); + state.set_aggregator_base_value(state_key, from_storage); - mv_value = map.fetch_data(state_key, self.txn_idx); + mv_value = state.fetch_data(state_key, self.txn_idx); } match mv_value { @@ -281,7 +279,7 @@ impl<'a, T: Transaction, S: TStateView, X: Executable> TStateView } }, }, - ViewMapKind::Unsync(map) => map.fetch_data(state_key).map_or_else( + ViewState::Unsync(state) => state.unsync_map.fetch_data(state_key).map_or_else( || self.get_base_value(state_key), |v| Ok(v.as_state_value()), ), @@ -292,10 +290,6 @@ impl<'a, T: Transaction, S: TStateView, X: Executable> TStateView self.base_view.id() } - fn is_genesis(&self) -> bool { - self.base_view.is_genesis() - } - fn get_usage(&self) -> Result { self.base_view.get_usage() } diff --git a/aptos-move/e2e-move-tests/Cargo.toml b/aptos-move/e2e-move-tests/Cargo.toml index 72027aec38675..790f939863d1b 100644 --- a/aptos-move/e2e-move-tests/Cargo.toml +++ b/aptos-move/e2e-move-tests/Cargo.toml @@ -21,7 +21,7 @@ aptos-crypto = { workspace = true } aptos-framework = { workspace = true } aptos-gas-algebra = { workspace = true } aptos-gas-profiling = { workspace = true } -aptos-gas-schedule = { workspace = true } +aptos-gas-schedule = { workspace = true, features = ["testing"] } aptos-keygen = { workspace = true } aptos-language-e2e-tests = { workspace = true } aptos-logger = { workspace = true } diff --git a/aptos-move/e2e-move-tests/src/harness.rs b/aptos-move/e2e-move-tests/src/harness.rs index 766cb87298e1d..762eac18e539b 100644 --- a/aptos-move/e2e-move-tests/src/harness.rs +++ b/aptos-move/e2e-move-tests/src/harness.rs @@ -30,6 +30,7 @@ use aptos_types::{ TransactionPayload, TransactionStatus, }, }; +use aptos_vm::AptosVM; use move_core_types::{ language_storage::{StructTag, TypeTag}, move_resource::MoveStructType, @@ -168,6 +169,7 @@ impl MoveHarness { let output = self.executor.execute_transaction(txn); if matches!(output.status(), TransactionStatus::Keep(_)) { self.executor.apply_write_set(output.write_set()); + self.executor.append_events(output.events().to_vec()); } output } @@ -431,6 +433,10 @@ impl MoveHarness { .run_block_with_metadata(proposer, failed_proposer_indices, txns) } + pub fn get_events(&self) -> &[ContractEvent] { + self.executor.get_events() + } + pub fn read_state_value(&self, state_key: &StateKey) -> Option { self.executor.read_state_value(state_key) } @@ -603,6 +609,10 @@ impl MoveHarness { ); } + pub fn new_vm(&self) -> AptosVM { + AptosVM::new_from_state_view(self.executor.data_store()) + } + pub fn set_default_gas_unit_price(&mut self, gas_unit_price: u64) { self.default_gas_unit_price = gas_unit_price; } diff --git a/aptos-move/e2e-move-tests/src/tests/attributes.rs b/aptos-move/e2e-move-tests/src/tests/attributes.rs index a5ebaffe12337..5d6015b81ec81 100644 --- a/aptos-move/e2e-move-tests/src/tests/attributes.rs +++ b/aptos-move/e2e-move-tests/src/tests/attributes.rs @@ -204,6 +204,29 @@ fn verify_resource_groups_fail_when_not_enabled() { assert_vm_status!(result, StatusCode::CONSTRAINT_NOT_SATISFIED); } +#[test] +fn verify_module_events_fail_when_not_enabled() { + let mut h = MoveHarness::new_with_features(vec![], vec![FeatureFlag::MODULE_EVENT]); + let account = h.new_account_at(AccountAddress::from_hex_literal("0xf00d").unwrap()); + let source = r#" + module 0xf00d::M { + struct Event { } + } + "#; + let fake_attribute = FakeKnownAttribute { + kind: 4, + args: vec![], + }; + let (code, metadata) = + build_package_and_insert_attribute(source, 
Some(("Event", fake_attribute)), None); + let result = h.run_transaction_payload( + &account, + aptos_stdlib::code_publish_package_txn(metadata, code), + ); + + assert_vm_status!(result, StatusCode::CONSTRAINT_NOT_SATISFIED); +} + fn build_package_and_insert_attribute( source: &str, struct_attr: Option<(&str, FakeKnownAttribute)>, diff --git a/aptos-move/e2e-move-tests/src/tests/mod.rs b/aptos-move/e2e-move-tests/src/tests/mod.rs index 4becf482a3a59..ee3306c8af60e 100644 --- a/aptos-move/e2e-move-tests/src/tests/mod.rs +++ b/aptos-move/e2e-move-tests/src/tests/mod.rs @@ -22,6 +22,7 @@ mod memory_quota; mod metadata; mod mint_nft; mod missing_gas_parameter; +mod module_event; mod new_integer_types; mod nft_dao; mod offer_rotation_capability; diff --git a/aptos-move/e2e-move-tests/src/tests/module_event.rs b/aptos-move/e2e-move-tests/src/tests/module_event.rs new file mode 100644 index 0000000000000..b30ddc7ad1572 --- /dev/null +++ b/aptos-move/e2e-move-tests/src/tests/module_event.rs @@ -0,0 +1,116 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +use crate::{assert_success, assert_vm_status, tests::common, MoveHarness}; +use aptos_package_builder::PackageBuilder; +use aptos_types::{account_address::AccountAddress, on_chain_config::FeatureFlag}; +use move_core_types::{language_storage::TypeTag, vm_status::StatusCode}; +use serde::{Deserialize, Serialize}; +use std::str::FromStr; + +#[derive(Debug, Serialize, Deserialize, Eq, PartialEq)] +struct Field { + field: bool, +} + +#[derive(Debug, Serialize, Deserialize, Eq, PartialEq)] +struct MyEvent { + seq: u64, + field: Field, + bytes: Vec, +} + +#[test] +fn test_module_event_enabled() { + let mut h = MoveHarness::new_with_features(vec![FeatureFlag::MODULE_EVENT], vec![]); + + let addr = AccountAddress::from_hex_literal("0xcafe").unwrap(); + let account = h.new_account_at(addr); + + let mut build_options = aptos_framework::BuildOptions::default(); + build_options + .named_addresses + .insert("event".to_string(), addr); + + let result = h.publish_package_with_options( + &account, + &common::test_dir_path("../../../move-examples/event"), + build_options.clone(), + ); + assert_success!(result); + h.run_entry_function( + &account, + str::parse("0xcafe::event::emit").unwrap(), + vec![], + vec![bcs::to_bytes(&10u64).unwrap()], + ); + let events = h.get_events(); + assert_eq!(events.len(), 10); + for (i, event) in events.iter().enumerate().take(10) { + let module_event = event.v2().unwrap(); + assert_eq!( + module_event.type_tag(), + &TypeTag::from_str("0xcafe::event::MyEvent").unwrap() + ); + assert_eq!( + bcs::from_bytes::(module_event.event_data()).unwrap(), + MyEvent { + seq: i as u64, + field: Field { field: false }, + bytes: vec![], + } + ); + } +} + +#[test] +fn verify_module_event_upgrades() { + let mut h = MoveHarness::new_with_features(vec![FeatureFlag::MODULE_EVENT], vec![]); + let account = h.new_account_at(AccountAddress::from_hex_literal("0xf00d").unwrap()); + + // Initial code + let source = r#" + module 0xf00d::M { + #[event] + struct Event1 { } + + struct Event2 { } + } + "#; + let mut builder = PackageBuilder::new("Package"); + builder.add_source("m.move", source); + let path = builder.write_to_temp().unwrap(); + let result = h.publish_package(&account, path.path()); + assert_success!(result); + + // Compatible upgrade -- add event attribute. 
+ let source = r#" + module 0xf00d::M { + #[event] + struct Event1 { } + + #[event] + struct Event2 { } + } + "#; + let mut builder = PackageBuilder::new("Package"); + builder.add_source("m.move", source); + let path = builder.write_to_temp().unwrap(); + let result = h.publish_package(&account, path.path()); + assert_success!(result); + + // Incompatible upgrades -- remove existing event attribute + let source = r#" + module 0xf00d::M { + struct Event1 { } + + #[event] + struct Event2 { } + } + "#; + let mut builder = PackageBuilder::new("Package"); + builder.add_source("m.move", source); + let path = builder.write_to_temp().unwrap(); + let result = h.publish_package(&account, path.path()); + assert_vm_status!(result, StatusCode::EVENT_METADATA_VALIDATION_ERROR); +} diff --git a/aptos-move/e2e-move-tests/src/tests/state_metadata.rs b/aptos-move/e2e-move-tests/src/tests/state_metadata.rs index 688a1e3d14f03..5771c0e277fa2 100644 --- a/aptos-move/e2e-move-tests/src/tests/state_metadata.rs +++ b/aptos-move/e2e-move-tests/src/tests/state_metadata.rs @@ -9,7 +9,7 @@ use aptos_types::{ use move_core_types::{account_address::AccountAddress, parser::parse_struct_tag}; #[test] -fn test_track_slot_payer() { +fn test_metadata_tracking() { let mut harness = MoveHarness::new(); harness.new_epoch(); // so that timestamp is not 0 (rather, 7200000001) let timestamp = CurrentTimeMicroseconds { @@ -25,30 +25,42 @@ fn test_track_slot_payer() { // create and fund account1 let account1 = harness.new_account_at(address1); - // Disable storage slot payer tracking + // Disable storage slot metadata tracking harness.enable_features(vec![], vec![FeatureFlag::STORAGE_SLOT_METADATA]); // Create and fund account2 harness.run_transaction_payload( &account1, aptos_cached_packages::aptos_stdlib::aptos_account_transfer(address2, 100), ); - // Observe that the payer is not tracked for address2 resources + // Observe that metadata is not tracked for address2 resources assert_eq!( harness.read_resource_metadata(&address2, coin_store.clone()), Some(None), ); - // Enable storage slot payer tracking + // Enable storage slot metadata tracking harness.enable_features(vec![FeatureFlag::STORAGE_SLOT_METADATA], vec![]); // Create and fund account3 harness.run_transaction_payload( &account1, aptos_cached_packages::aptos_stdlib::aptos_account_transfer(address3, 100), ); - // Observe that the payer is tracked for address3 resources + + let slot_fee = harness + .new_vm() + .internals() + .gas_params() + .unwrap() + .vm + .txn + .storage_fee_per_state_slot_create + .into(); + assert!(slot_fee > 0); + + // Observe that metadata is tracked for address3 resources assert_eq!( harness.read_resource_metadata(&address3, coin_store.clone()), - Some(Some(StateValueMetadata::new(address1, 0, ×tamp))), + Some(Some(StateValueMetadata::new(slot_fee, ×tamp,))), ); // Bump the timestamp and modify the resources, observe that metadata doesn't change. 
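The test above now derives the expected deposit from the configured storage slot fee: with slot metadata tracking disabled no metadata is attached, and with it enabled the new slot records a deposit plus its creation timestamp (the payer field is gone). This matches the V0 { deposit, creation_time_usecs } shape printed in the golden files that follow. A small illustrative sketch of that shape, using a toy struct rather than the real aptos_types definition:

// Toy illustration only; field names mirror the V0 metadata printed in the goldens, but this
// is not the real StateValueMetadata type, which the test constructs via
// StateValueMetadata::new(slot_fee, &timestamp).
#[derive(Debug, PartialEq, Eq)]
struct ToyMetadata {
    deposit: u64,
    creation_time_usecs: u64,
}

fn new_slot_metadata(tracking_enabled: bool, slot_fee: u64, now_usecs: u64) -> Option<ToyMetadata> {
    // Tracking off: no metadata attached (None), as in the Some(None) assertion above.
    // Tracking on: record the deposit and creation time, as in the Some(Some(..)) assertion.
    tracking_enabled.then(|| ToyMetadata {
        deposit: slot_fee,
        creation_time_usecs: now_usecs,
    })
}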
@@ -67,6 +79,6 @@ fn test_track_slot_payer() { ); assert_eq!( harness.read_resource_metadata(&address3, coin_store), - Some(Some(StateValueMetadata::new(address1, 0, ×tamp))), + Some(Some(StateValueMetadata::new(slot_fee, ×tamp))), ); } diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__create_account__create_account.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__create_account__create_account.exp index 17341c0156fdb..88ab5fdb5ad5d 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__create_account__create_account.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__create_account__create_account.exp @@ -16,13 +16,13 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 01000000000000000000000000000000000000000000000000000000000000000104636f696e09436f696e53746f7265010700000000000000000000000000000000000000000000000000000000000000010a6170746f735f636f696e094170746f73436f696e00 }, ), hash: OnceCell(Uninit), - }: Creation(00000000000000000000000000000000000200000000000000f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100000000000000000300000000000000f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1), + }: CreationWithMetadata(00000000000000000000000000000000000200000000000000f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100000000000000000300000000000000f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, ), hash: OnceCell(Uninit), - }: Creation(20f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe10000000000000000040000000000000001000000000000000000000000000000f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100000000000000000100000000000000f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe10000), + }: CreationWithMetadata(20f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe10000000000000000040000000000000001000000000000000000000000000000f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100000000000000000100000000000000f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe10000, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), }, }, ), diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__data_store__borrow_after_move.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__data_store__borrow_after_move.exp index 57f85b57237ed..6336865d9e8bf 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__data_store__borrow_after_move.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__data_store__borrow_after_move.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: 
Creation(a11ceb0b06000000090100040204040308190521140735420877400ab701050cbc014f0d8b020200000101000208000003000100000402010000050001000006000100010800040001060c0002060c03010608000105010708000103014d067369676e657202543109626f72726f775f7431096368616e67655f74310972656d6f76655f74310a7075626c6973685f743101760a616464726573735f6f66f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100000000000000000000000000000000000000000000000000000000000000010002010703000100010003050b0011042b000c0102010100010005090b0011042a000c020b010b020f001502020100010006060b0011042c0013000c01020301000001050b0006030000000000000012002d0002000000), + }: CreationWithMetadata(a11ceb0b06000000090100040204040308190521140735420877400ab701050cbc014f0d8b020200000101000208000003000100000402010000050001000006000100010800040001060c0002060c03010608000105010708000103014d067369676e657202543109626f72726f775f7431096368616e67655f74310972656d6f76655f74310a7075626c6973685f743101760a616464726573735f6f66f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100000000000000000000000000000000000000000000000000000000000000010002010703000100010003050b0011042b000c0102010100010005090b0011042a000c020b010b020f001502020100010006060b0011042c0013000c01020301000001050b0006030000000000000012002d0002000000, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, @@ -76,7 +76,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 01f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d02543100 }, ), hash: OnceCell(Uninit), - }: Creation(0300000000000000), + }: CreationWithMetadata(0300000000000000, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), }, }, ), @@ -132,7 +132,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 01f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d02543100 }, ), hash: OnceCell(Uninit), - }: Deletion, + }: DeletionWithMetadata(metadata:V0 { deposit: 0, creation_time_usecs: 0 }), }, }, ), diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__data_store__change_after_move.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__data_store__change_after_move.exp index 07efb5e5c47cb..1320a5304c72f 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__data_store__change_after_move.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__data_store__change_after_move.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b06000000090100040204040308190521140735420877400ab701050cbc014f0d8b020200000101000208000003000100000402010000050001000006000100010800040001060c0002060c03010608000105010708000103014d067369676e657202543109626f72726f775f7431096368616e67655f74310972656d6f76655f74310a7075626c6973685f743101760a616464726573735f6f66f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100000000000000000000000000000000000000000000000000000000000000010002010703000100010003050b0011042b000c0102010100010005090b0011042a000c020b010b020f001502020100010006060b0011042c0013000c01020301000001050b0006030000000000000012002d0002000000), + }: 
CreationWithMetadata(a11ceb0b06000000090100040204040308190521140735420877400ab701050cbc014f0d8b020200000101000208000003000100000402010000050001000006000100010800040001060c0002060c03010608000105010708000103014d067369676e657202543109626f72726f775f7431096368616e67655f74310972656d6f76655f74310a7075626c6973685f743101760a616464726573735f6f66f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100000000000000000000000000000000000000000000000000000000000000010002010703000100010003050b0011042b000c0102010100010005090b0011042a000c020b010b020f001502020100010006060b0011042c0013000c01020301000001050b0006030000000000000012002d0002000000, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, @@ -76,7 +76,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 01f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d02543100 }, ), hash: OnceCell(Uninit), - }: Creation(0300000000000000), + }: CreationWithMetadata(0300000000000000, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), }, }, ), @@ -132,7 +132,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 01f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d02543100 }, ), hash: OnceCell(Uninit), - }: Deletion, + }: DeletionWithMetadata(metadata:V0 { deposit: 0, creation_time_usecs: 0 }), }, }, ), diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__data_store__move_from_across_blocks.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__data_store__move_from_across_blocks.exp index 1bec780feaaed..c0848aa6bd9ee 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__data_store__move_from_across_blocks.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__data_store__move_from_across_blocks.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b06000000090100040204040308190521140735420877400ab701050cbc014f0d8b020200000101000208000003000100000402010000050001000006000100010800040001060c0002060c03010608000105010708000103014d067369676e657202543109626f72726f775f7431096368616e67655f74310972656d6f76655f74310a7075626c6973685f743101760a616464726573735f6f66f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100000000000000000000000000000000000000000000000000000000000000010002010703000100010003050b0011042b000c0102010100010005090b0011042a000c020b010b020f001502020100010006060b0011042c0013000c01020301000001050b0006030000000000000012002d0002000000), + }: 
CreationWithMetadata(a11ceb0b06000000090100040204040308190521140735420877400ab701050cbc014f0d8b020200000101000208000003000100000402010000050001000006000100010800040001060c0002060c03010608000105010708000103014d067369676e657202543109626f72726f775f7431096368616e67655f74310972656d6f76655f74310a7075626c6973685f743101760a616464726573735f6f66f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100000000000000000000000000000000000000000000000000000000000000010002010703000100010003050b0011042b000c0102010100010005090b0011042a000c020b010b020f001502020100010006060b0011042c0013000c01020301000001050b0006030000000000000012002d0002000000, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, @@ -76,7 +76,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 01f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d02543100 }, ), hash: OnceCell(Uninit), - }: Creation(0300000000000000), + }: CreationWithMetadata(0300000000000000, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), }, }, ), @@ -132,7 +132,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 01f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d02543100 }, ), hash: OnceCell(Uninit), - }: Deletion, + }: DeletionWithMetadata(metadata:V0 { deposit: 0, creation_time_usecs: 0 }), }, }, ), @@ -221,7 +221,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 01f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d02543100 }, ), hash: OnceCell(Uninit), - }: Creation(0300000000000000), + }: CreationWithMetadata(0300000000000000, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), }, }, ), @@ -252,7 +252,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 01f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d02543100 }, ), hash: OnceCell(Uninit), - }: Deletion, + }: DeletionWithMetadata(metadata:V0 { deposit: 0, creation_time_usecs: 0 }), }, }, ), diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__duplicate_module.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__duplicate_module.exp index 805da66d61fab..5cfc652b1e1df 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__duplicate_module.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__duplicate_module.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b0600000008010002020204030605050b01070c060812200a32050c3707000000010000000200000000014d01540166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100020102030001000000010200), + }: CreationWithMetadata(a11ceb0b0600000008010002020204030605050b01070c060812200a32050c3707000000010000000200000000014d01540166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100020102030001000000010200, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: 
f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_compatible_module.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_compatible_module.exp index e706de32425ce..8ec55ffb0ba0b 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_compatible_module.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_compatible_module.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b06000000030100020702020804200000014df5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100), + }: CreationWithMetadata(a11ceb0b06000000030100020702020804200000014df5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_changed_field.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_changed_field.exp index 8548cf8bda8f5..4ffc1e5aa5bb0 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_changed_field.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_changed_field.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b0600000005010002020204070606080c200a2c05000000010000014d01540166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1000201020300), + }: CreationWithMetadata(a11ceb0b0600000005010002020204070606080c200a2c05000000010000014d01540166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1000201020300, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_new_field.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_new_field.exp index 8548cf8bda8f5..4ffc1e5aa5bb0 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_new_field.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_new_field.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 
00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b0600000005010002020204070606080c200a2c05000000010000014d01540166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1000201020300), + }: CreationWithMetadata(a11ceb0b0600000005010002020204070606080c200a2c05000000010000014d01540166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1000201020300, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_removed_field.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_removed_field.exp index 8548cf8bda8f5..4ffc1e5aa5bb0 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_removed_field.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_removed_field.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b0600000005010002020204070606080c200a2c05000000010000014d01540166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1000201020300), + }: CreationWithMetadata(a11ceb0b0600000005010002020204070606080c200a2c05000000010000014d01540166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1000201020300, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_removed_struct.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_removed_struct.exp index 8548cf8bda8f5..4ffc1e5aa5bb0 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_removed_struct.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__layout_incompatible_module_with_removed_struct.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b0600000005010002020204070606080c200a2c05000000010000014d01540166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1000201020300), + }: CreationWithMetadata(a11ceb0b0600000005010002020204070606080c200a2c05000000010000014d01540166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1000201020300, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 
010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_compatible_module.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_compatible_module.exp index e706de32425ce..8ec55ffb0ba0b 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_compatible_module.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_compatible_module.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b06000000030100020702020804200000014df5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100), + }: CreationWithMetadata(a11ceb0b06000000030100020702020804200000014df5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_incompatible_module_with_added_param.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_incompatible_module_with_added_param.exp index 161b4225a8fb9..5c081cb72f02a 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_incompatible_module_with_added_param.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_incompatible_module_with_added_param.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b0600000006010002030205050701070804080c200c2c070000000100000000014d0166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe10001000000010200), + }: CreationWithMetadata(a11ceb0b0600000006010002030205050701070804080c200c2c070000000100000000014d0166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe10001000000010200, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_incompatible_module_with_changed_param.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_incompatible_module_with_changed_param.exp index 2ade87987f340..c8464f739bfe8 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_incompatible_module_with_changed_param.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_incompatible_module_with_changed_param.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 
00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b0600000006010002030205050703070a04080e200c2e0700000001000100010300014d0166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe10001000001010200), + }: CreationWithMetadata(a11ceb0b0600000006010002030205050703070a04080e200c2e0700000001000100010300014d0166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe10001000001010200, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_incompatible_module_with_removed_pub_fn.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_incompatible_module_with_removed_pub_fn.exp index 161b4225a8fb9..5c081cb72f02a 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_incompatible_module_with_removed_pub_fn.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__linking_incompatible_module_with_removed_pub_fn.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b0600000006010002030205050701070804080c200c2c070000000100000000014d0166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe10001000000010200), + }: CreationWithMetadata(a11ceb0b0600000006010002030205050701070804080c200c2c070000000100000000014d0166f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe10001000000010200, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__test_publishing_allow_modules.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__test_publishing_allow_modules.exp index bcf854c1f7b71..492d89157f63e 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__test_publishing_allow_modules.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__test_publishing_allow_modules.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b06000000030100020702020804200000014df5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100), + }: CreationWithMetadata(a11ceb0b06000000030100020702020804200000014df5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe100, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, diff --git 
a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__test_publishing_modules_proper_sender.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__test_publishing_modules_proper_sender.exp index db6061ec50a87..23faec141728d 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__test_publishing_modules_proper_sender.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__module_publishing__test_publishing_modules_proper_sender.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: 000000000000000000000000000000000000000000000000000000000a550c18, path: 00000000000000000000000000000000000000000000000000000000000a550c18014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b06000000030100020702020804200000014d000000000000000000000000000000000000000000000000000000000a550c1800), + }: CreationWithMetadata(a11ceb0b06000000030100020702020804200000014d000000000000000000000000000000000000000000000000000000000a550c1800, metadata:V0 { deposit: 0, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: 000000000000000000000000000000000000000000000000000000000a550c18, path: 010000000000000000000000000000000000000000000000000000000000000001076163636f756e74074163636f756e7400 }, diff --git a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__verify_txn__test_open_publishing.exp b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__verify_txn__test_open_publishing.exp index 1598d1d9730bd..110382581671e 100644 --- a/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__verify_txn__test_open_publishing.exp +++ b/aptos-move/e2e-tests/goldens/language_e2e_testsuite__tests__verify_txn__test_open_publishing.exp @@ -10,7 +10,7 @@ Ok( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 00f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1014d }, ), hash: OnceCell(Uninit), - }: Creation(a11ceb0b060000000601000203020a050c0607120a081c200c3c23000000010001000002000100020303010300014d036d61780373756df5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe10001000002080a000a012403060a01020a00020101000001060a000a01160c020a020200), + }: CreationWithMetadata(a11ceb0b060000000601000203020a050c0607120a081c200c3c23000000010001000002000100020303010300014d036d61780373756df5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe10001000002080a000a012403060a01020a00020101000001060a000a01160c020a020200, metadata:V0 { deposit: 50000, creation_time_usecs: 0 }), StateKey { inner: AccessPath( AccessPath { address: f5b9d6f01a99e74c790e2f330c092fa05455a8193f1dfc1b113ecc54d067afe1, path: 01000000000000000000000000000000000000000000000000000000000000000104636f696e09436f696e53746f7265010700000000000000000000000000000000000000000000000000000000000000010a6170746f735f636f696e094170746f73436f696e00 }, diff --git a/aptos-move/e2e-tests/src/data_store.rs b/aptos-move/e2e-tests/src/data_store.rs index 05a0edf31f74b..aac19fbdcbf08 100644 --- a/aptos-move/e2e-tests/src/data_store.rs +++ b/aptos-move/e2e-tests/src/data_store.rs @@ -120,10 +120,6 @@ impl TStateView for FakeDataStore { Ok(self.state_data.get(state_key).cloned()) } - fn is_genesis(&self) -> bool { - self.state_data.is_empty() - } - fn get_usage(&self) -> Result { let mut usage = StateStorageUsage::new_untracked(); for (k, v) in self.state_data.iter() { diff --git a/aptos-move/e2e-tests/src/executor.rs b/aptos-move/e2e-tests/src/executor.rs index f5433fea7860a..12c5bd4fbd004 
100644 --- a/aptos-move/e2e-tests/src/executor.rs +++ b/aptos-move/e2e-tests/src/executor.rs @@ -35,6 +35,7 @@ use aptos_types::{ }, block_metadata::BlockMetadata, chain_id::ChainId, + contract_event::ContractEvent, on_chain_config::{ Features, OnChainConfig, TimedFeatureOverride, TimedFeatures, ValidatorSet, Version, }, @@ -94,6 +95,7 @@ pub type TraceSeqMapping = (usize, Vec, Vec); /// This struct is a mock in-memory implementation of the Aptos executor. pub struct FakeExecutor { data_store: FakeDataStore, + event_store: Vec, executor_thread_pool: Arc, block_time: u64, executed_output: Option, @@ -120,6 +122,7 @@ impl FakeExecutor { ); let mut executor = FakeExecutor { data_store: FakeDataStore::default(), + event_store: Vec::new(), executor_thread_pool, block_time: 0, executed_output: None, @@ -185,6 +188,7 @@ impl FakeExecutor { ); FakeExecutor { data_store: FakeDataStore::default(), + event_store: Vec::new(), executor_thread_pool, block_time: 0, executed_output: None, @@ -307,6 +311,10 @@ impl FakeExecutor { self.data_store.add_write_set(write_set); } + pub fn append_events(&mut self, events: Vec) { + self.event_store.extend(events); + } + /// Adds an account to this executor's data store. pub fn add_account_data(&mut self, account_data: &AccountData) { self.data_store.add_account_data(account_data) @@ -558,6 +566,10 @@ impl FakeExecutor { seq } + pub fn get_events(&self) -> &[ContractEvent] { + self.event_store.as_slice() + } + pub fn read_state_value(&self, state_key: &StateKey) -> Option { TStateView::get_state_value(&self.data_store, state_key).unwrap() } @@ -575,7 +587,7 @@ impl FakeExecutor { /// Verifies the given transaction by running it through the VM verifier. pub fn verify_transaction(&self, txn: SignedTransaction) -> VMValidatorResult { - let vm = AptosVM::new(self.get_state_view()); + let vm = AptosVM::new_from_state_view(self.get_state_view()); vm.validate_transaction(txn, &self.data_store) } @@ -623,7 +635,10 @@ impl FakeExecutor { .expect("Must execute transactions"); // Check if we emit the expected event for block metadata, there might be more events for transaction fees. - let event = outputs[0].events()[0].clone(); + let event = outputs[0].events()[0] + .v1() + .expect("The first event must be a block metadata v0 event") + .clone(); assert_eq!(event.key(), &new_block_event_key()); assert!(bcs::from_bytes::(event.event_data()).is_ok()); @@ -744,7 +759,7 @@ impl FakeExecutor { let a1 = Arc::new(Mutex::new(Vec::::new())); let a2 = Arc::clone(&a1); - let write_set = { + let (write_set, _events) = { // FIXME: should probably read the timestamp from storage. 
let timed_features = TimedFeatures::enable_all().with_override_profile(TimedFeatureOverride::Testing); @@ -778,7 +793,7 @@ impl FakeExecutor { //// TODO: fill in these with proper values LATEST_GAS_FEATURE_VERSION, InitialGasSchedule::initial(), - StorageGasParameters::free_and_unlimited(), + StorageGasParameters::unlimited(0.into()), 10000000000000, ), // coeff_buffer: BTreeMap::new(), @@ -797,11 +812,10 @@ impl FakeExecutor { &ChangeSetConfigs::unlimited_at_gas_feature_version(LATEST_GAS_FEATURE_VERSION), ) .expect("Failed to generate txn effects"); - let (write_set, _events) = change_set + change_set .try_into_storage_change_set() .expect("Failed to convert to ChangeSet") - .into_inner(); - write_set + .into_inner() }; self.data_store.add_write_set(&write_set); @@ -820,7 +834,7 @@ impl FakeExecutor { type_params: Vec, args: Vec>, ) { - let write_set = { + let (write_set, events) = { // FIXME: should probably read the timestamp from storage. let timed_features = TimedFeatures::enable_all().with_override_profile(TimedFeatureOverride::Testing); @@ -858,13 +872,13 @@ impl FakeExecutor { &ChangeSetConfigs::unlimited_at_gas_feature_version(LATEST_GAS_FEATURE_VERSION), ) .expect("Failed to generate txn effects"); - let (write_set, _events) = change_set + change_set .try_into_storage_change_set() .expect("Failed to convert to ChangeSet") - .into_inner(); - write_set + .into_inner() }; self.data_store.add_write_set(&write_set); + self.event_store.extend(events); } pub fn try_exec( @@ -873,7 +887,7 @@ impl FakeExecutor { function_name: &str, type_params: Vec, args: Vec>, - ) -> Result { + ) -> Result<(WriteSet, Vec), VMStatus> { // TODO(Gas): we probably want to switch to non-zero costs in the future let vm = MoveVmExt::new( NativeGasParameters::zeros(), @@ -904,11 +918,11 @@ impl FakeExecutor { ) .expect("Failed to generate txn effects"); // TODO: Support deltas in fake executor. 
- let (write_set, _events) = change_set + let (write_set, events) = change_set .try_into_storage_change_set() .expect("Failed to convert to ChangeSet") .into_inner(); - Ok(write_set) + Ok((write_set, events)) } pub fn execute_view_function( diff --git a/aptos-move/e2e-tests/src/on_chain_configs.rs b/aptos-move/e2e-tests/src/on_chain_configs.rs index b58447b729643..eedd210cc2dd1 100644 --- a/aptos-move/e2e-tests/src/on_chain_configs.rs +++ b/aptos-move/e2e-tests/src/on_chain_configs.rs @@ -17,6 +17,6 @@ pub fn set_aptos_version(executor: &mut FakeExecutor, version: Version) { executor.new_block(); executor.execute_and_apply(txn); - let new_vm = AptosVM::new(executor.get_state_view()); + let new_vm = AptosVM::new_from_state_view(executor.get_state_view()); assert_eq!(new_vm.internals().version().unwrap(), version); } diff --git a/aptos-move/e2e-testsuite/Cargo.toml b/aptos-move/e2e-testsuite/Cargo.toml index ee2a43c4e8369..3a1a357170a7b 100644 --- a/aptos-move/e2e-testsuite/Cargo.toml +++ b/aptos-move/e2e-testsuite/Cargo.toml @@ -19,7 +19,7 @@ aptos-crypto = { workspace = true } aptos-framework = { workspace = true } aptos-gas-algebra = { workspace = true } aptos-gas-meter = { workspace = true } -aptos-gas-schedule = { workspace = true } +aptos-gas-schedule = { workspace = true, features = ["testing"] } aptos-keygen = { workspace = true } aptos-language-e2e-tests = { workspace = true } aptos-logger = { workspace = true } diff --git a/aptos-move/e2e-testsuite/src/tests/failed_transaction_tests.rs b/aptos-move/e2e-testsuite/src/tests/failed_transaction_tests.rs index 21858b3df0ef7..699529ab8b958 100644 --- a/aptos-move/e2e-testsuite/src/tests/failed_transaction_tests.rs +++ b/aptos-move/e2e-testsuite/src/tests/failed_transaction_tests.rs @@ -26,7 +26,7 @@ fn failed_transaction_cleanup_test() { executor.add_account_data(&sender); let log_context = AdapterLogSchema::new(executor.get_state_view().id(), 0); - let aptos_vm = AptosVM::new(executor.get_state_view()); + let aptos_vm = AptosVM::new_from_state_view(executor.get_state_view()); let data_cache = executor.get_state_view().as_move_resolver(); let txn_data = TransactionMetadata { @@ -38,11 +38,12 @@ fn failed_transaction_cleanup_test() { }; let gas_params = AptosGasParameters::zeros(); - let storage_gas_params = StorageGasParameters::free_and_unlimited(); + let storage_gas_params = + StorageGasParameters::unlimited(gas_params.vm.txn.free_write_bytes_quota); let change_set_configs = storage_gas_params.change_set_configs.clone(); - let mut gas_meter = MemoryTrackedGasMeter::new(StandardGasMeter::new(StandardGasAlgebra::new( + let gas_meter = MemoryTrackedGasMeter::new(StandardGasMeter::new(StandardGasAlgebra::new( LATEST_GAS_FEATURE_VERSION, gas_params.vm, storage_gas_params, @@ -52,7 +53,7 @@ fn failed_transaction_cleanup_test() { // TYPE_MISMATCH should be kept and charged. let out1 = aptos_vm.failed_transaction_cleanup( VMStatus::error(StatusCode::TYPE_MISMATCH, None), - &mut gas_meter, + &gas_meter, &txn_data, &data_cache, &log_context, @@ -72,7 +73,7 @@ fn failed_transaction_cleanup_test() { // Invariant violations should be charged. 
let out2 = aptos_vm.failed_transaction_cleanup( VMStatus::error(StatusCode::UNKNOWN_INVARIANT_VIOLATION_ERROR, None), - &mut gas_meter, + &gas_meter, &txn_data, &data_cache, &log_context, diff --git a/aptos-move/e2e-testsuite/src/tests/on_chain_configs.rs b/aptos-move/e2e-testsuite/src/tests/on_chain_configs.rs index cdd2ef338be8e..bb16599c32363 100644 --- a/aptos-move/e2e-testsuite/src/tests/on_chain_configs.rs +++ b/aptos-move/e2e-testsuite/src/tests/on_chain_configs.rs @@ -12,7 +12,7 @@ use aptos_vm::AptosVM; #[test] fn initial_aptos_version() { let mut executor = FakeExecutor::from_head_genesis(); - let vm = AptosVM::new(executor.get_state_view()); + let vm = AptosVM::new_from_state_view(executor.get_state_view()); let version = aptos_types::on_chain_config::APTOS_MAX_KNOWN_VERSION; assert_eq!(vm.internals().version().unwrap(), version,); @@ -26,7 +26,7 @@ fn initial_aptos_version() { executor.new_block(); executor.execute_and_apply(txn); - let new_vm = AptosVM::new(executor.get_state_view()); + let new_vm = AptosVM::new_from_state_view(executor.get_state_view()); assert_eq!(new_vm.internals().version().unwrap(), Version { major: version.major + 1 }); @@ -35,7 +35,7 @@ fn initial_aptos_version() { #[test] fn drop_txn_after_reconfiguration() { let mut executor = FakeExecutor::from_head_genesis(); - let vm = AptosVM::new(executor.get_state_view()); + let vm = AptosVM::new_from_state_view(executor.get_state_view()); let version = aptos_types::on_chain_config::APTOS_MAX_KNOWN_VERSION; assert_eq!(vm.internals().version().unwrap(), version); diff --git a/aptos-move/e2e-testsuite/src/tests/peer_to_peer.rs b/aptos-move/e2e-testsuite/src/tests/peer_to_peer.rs index fb0e3a03a6c44..1a839cf3b7da2 100644 --- a/aptos-move/e2e-testsuite/src/tests/peer_to_peer.rs +++ b/aptos-move/e2e-testsuite/src/tests/peer_to_peer.rs @@ -54,7 +54,9 @@ fn single_peer_to_peer_with_event() { let rec_ev_path = receiver.received_events_key(); let sent_ev_path = sender.sent_events_key(); for event in output.events() { - assert!(rec_ev_path == event.key() || sent_ev_path == event.key()); + assert!( + rec_ev_path == event.v1().unwrap().key() || sent_ev_path == event.v1().unwrap().key() + ); } } diff --git a/aptos-move/e2e-testsuite/src/tests/scripts.rs b/aptos-move/e2e-testsuite/src/tests/scripts.rs index 0da2e5110332f..9c11ed2ea56d3 100644 --- a/aptos-move/e2e-testsuite/src/tests/scripts.rs +++ b/aptos-move/e2e-testsuite/src/tests/scripts.rs @@ -9,8 +9,9 @@ use aptos_types::{ transaction::{ExecutionStatus, Script, TransactionStatus}, }; use move_binary_format::file_format::{ - empty_script, AbilitySet, AddressIdentifierIndex, Bytecode, FunctionHandle, - FunctionHandleIndex, IdentifierIndex, ModuleHandle, ModuleHandleIndex, SignatureIndex, + empty_script, Ability, AbilitySet, AddressIdentifierIndex, Bytecode, FunctionHandle, + FunctionHandleIndex, FunctionInstantiation, FunctionInstantiationIndex, IdentifierIndex, + ModuleHandle, ModuleHandleIndex, Signature, SignatureIndex, SignatureToken, }; use move_core_types::{ identifier::Identifier, @@ -434,3 +435,83 @@ fn script_nested_type_argument_module_does_not_exist() { assert_eq!(balance, updated_sender_balance.coin()); assert_eq!(11, updated_sender.sequence_number()); } + +#[test] +fn forbid_script_emitting_events() { + let mut executor = FakeExecutor::from_head_genesis(); + + // create and publish sender + let sender = executor.create_raw_account_data(1_000_000, 10); + executor.add_account_data(&sender); + + // create an event-emitting script + let mut script = 
empty_script(); + script.code.code = vec![ + Bytecode::LdTrue, + Bytecode::CallGeneric(FunctionInstantiationIndex(0)), + Bytecode::Ret, + ]; + script.function_instantiations.push(FunctionInstantiation { + handle: FunctionHandleIndex(0), + type_parameters: SignatureIndex(2), + }); + script.function_handles.push(FunctionHandle { + module: ModuleHandleIndex(0), + name: IdentifierIndex(1), + parameters: SignatureIndex(1), + return_: SignatureIndex(0), + type_parameters: vec![ + AbilitySet::singleton(Ability::Store) | AbilitySet::singleton(Ability::Drop), + ], + }); + script.module_handles.push(ModuleHandle { + address: AddressIdentifierIndex(0), + name: IdentifierIndex(0), + }); + script.address_identifiers.push(AccountAddress::ONE); + script.identifiers = vec![ + Identifier::new("event").unwrap(), + Identifier::new("emit").unwrap(), + ]; + // dummy signatures + script.signatures = vec![ + Signature(vec![]), + Signature(vec![SignatureToken::TypeParameter(0)]), + Signature(vec![SignatureToken::Bool]), + ]; + let mut blob = vec![]; + script.serialize(&mut blob).expect("script must serialize"); + let txn = sender + .account() + .transaction() + .script(Script::new(blob, vec![], vec![])) + .sequence_number(10) + .gas_unit_price(1) + .sign(); + // execute transaction + let output = &executor.execute_transaction(txn); + let status = output.status(); + match status { + TransactionStatus::Keep(_) => (), + _ => panic!("TransactionStatus must be Keep"), + } + assert_eq!( + status.status(), + Ok(ExecutionStatus::MiscellaneousError(Some( + StatusCode::INVALID_OPERATION_IN_SCRIPT + ))) + ); + executor.apply_write_set(output.write_set()); + + // Check that numbers in store are correct. + let gas = output.gas_used(); + let balance = 1_000_000 - gas; + let updated_sender = executor + .read_account_resource(sender.account()) + .expect("sender must exist"); + let updated_sender_balance = executor + .read_coin_store_resource(sender.account()) + .expect("sender balance must exist"); + assert_eq!(balance, updated_sender_balance.coin()); + assert_eq!(11, updated_sender.sequence_number()); +} diff --git a/aptos-move/framework/Cargo.toml b/aptos-move/framework/Cargo.toml index a48885b6f58c2..0f217d90a6b70 100644 --- a/aptos-move/framework/Cargo.toml +++ b/aptos-move/framework/Cargo.toml @@ -17,7 +17,7 @@ anyhow = { workspace = true } aptos-aggregator = { workspace = true, features = ["testing"] } aptos-crypto = { workspace = true, features = ["fuzzing"] } aptos-gas-algebra = { workspace = true } -aptos-gas-schedule = { workspace = true } +aptos-gas-schedule = { workspace = true } aptos-move-stdlib = { workspace = true } aptos-native-interface = { workspace = true } aptos-sdk-builder = { workspace = true } @@ -59,6 +59,7 @@ move-model = { workspace = true } move-package = { workspace = true } move-prover = { workspace = true } move-prover-boogie-backend = { workspace = true } +move-prover-bytecode-pipeline = { workspace = true } move-stackless-bytecode = { workspace = true } move-vm-runtime = { workspace = true } move-vm-types = { workspace = true } diff --git a/aptos-move/framework/aptos-framework/doc/account.md b/aptos-move/framework/aptos-framework/doc/account.md index f8976561a1fff..a0f3f15f15634 100644 --- a/aptos-move/framework/aptos-framework/doc/account.md +++ b/aptos-move/framework/aptos-framework/doc/account.md @@ -2806,6 +2806,7 @@ The value of signer_capability_offer.for of Account resource under the signer is aborts_if len(ZERO_AUTH_KEY) != 32; include exists_at(resource_addr) ==> 
CreateResourceAccountAbortsIf; include !exists_at(resource_addr) ==> CreateAccountAbortsIf {addr: resource_addr}; +ensures signer::address_of(result_1) == resource_addr; diff --git a/aptos-move/framework/aptos-framework/doc/event.md b/aptos-move/framework/aptos-framework/doc/event.md index e223b12853dc3..9eba716ad62d4 100644 --- a/aptos-move/framework/aptos-framework/doc/event.md +++ b/aptos-move/framework/aptos-framework/doc/event.md @@ -10,21 +10,28 @@ events emitted to a handle and emit events to the event store. - [Struct `EventHandle`](#0x1_event_EventHandle) +- [Constants](#@Constants_0) +- [Function `emit`](#0x1_event_emit) +- [Function `write_to_module_event_store`](#0x1_event_write_to_module_event_store) - [Function `new_event_handle`](#0x1_event_new_event_handle) - [Function `emit_event`](#0x1_event_emit_event) - [Function `guid`](#0x1_event_guid) - [Function `counter`](#0x1_event_counter) - [Function `write_to_event_store`](#0x1_event_write_to_event_store) - [Function `destroy_handle`](#0x1_event_destroy_handle) -- [Specification](#@Specification_0) - - [Function `emit_event`](#@Specification_0_emit_event) - - [Function `guid`](#@Specification_0_guid) - - [Function `counter`](#@Specification_0_counter) - - [Function `write_to_event_store`](#@Specification_0_write_to_event_store) - - [Function `destroy_handle`](#@Specification_0_destroy_handle) +- [Specification](#@Specification_1) + - [Function `emit`](#@Specification_1_emit) + - [Function `write_to_module_event_store`](#@Specification_1_write_to_module_event_store) + - [Function `emit_event`](#@Specification_1_emit_event) + - [Function `guid`](#@Specification_1_guid) + - [Function `counter`](#@Specification_1_counter) + - [Function `write_to_event_store`](#@Specification_1_write_to_event_store) + - [Function `destroy_handle`](#@Specification_1_destroy_handle)
use 0x1::bcs;
+use 0x1::error;
+use 0x1::features;
 use 0x1::guid;
 
@@ -39,7 +46,8 @@ A handle for an event such that: 2. Storage can use this handle to prove the total number of events that happened in the past. -
struct EventHandle<T: drop, store> has store
+
#[deprecated]
+struct EventHandle<T: drop, store> has store
 
@@ -64,6 +72,70 @@ A handle for an event such that: + + + + +## Constants + + + + +Module event feature is not supported. + + +
const EMODULE_EVENT_NOT_SUPPORTED: u64 = 1;
+
+ + + + + +## Function `emit` + +Emit a module event with payload msg. + +
public fun emit<T: drop, store>(msg: T)
+
+ + + +
+Implementation + + +
public fun emit<T: store + drop>(msg: T) {
+    assert!(features::module_event_enabled(), std::error::invalid_state(EMODULE_EVENT_NOT_SUPPORTED));
+    write_to_module_event_store<T>(msg);
+}
+
+ + + +
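To make the new `emit` flow concrete, here is a minimal usage sketch. The module address, module name, and the `PriceUpdate` struct are hypothetical (they are not part of this patch), and the snippet assumes the module-event feature flag is enabled, as the assert inside `emit` requires:

```move
module 0xcafe::ticker {
    use aptos_framework::event;

    // The #[event] attribute marks the struct as a module event type,
    // mirroring the #[event] structs exercised by the e2e test earlier in this patch.
    #[event]
    struct PriceUpdate has drop, store {
        price: u64,
    }

    public entry fun update(price: u64) {
        // Emits a module event; aborts with EMODULE_EVENT_NOT_SUPPORTED
        // when the module-event feature flag is disabled.
        event::emit(PriceUpdate { price });
    }
}
```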
+ + + +## Function `write_to_module_event_store` + +Log msg with the event stream identified by T + + +
fun write_to_module_event_store<T: drop, store>(msg: T)
+
+ + + +
+Implementation + + +
native fun write_to_module_event_store<T: drop + store>(msg: T);
+
+ + +
@@ -73,7 +145,8 @@ A handle for an event such that: Use EventHandleGenerator to generate a unique event handle for sig -
public(friend) fun new_event_handle<T: drop, store>(guid: guid::GUID): event::EventHandle<T>
+
#[deprecated]
+public(friend) fun new_event_handle<T: drop, store>(guid: guid::GUID): event::EventHandle<T>
 
@@ -101,7 +174,8 @@ Use EventHandleGenerator to generate a unique event handle for sig Emit an event with payload msg by using handle_ref's key and counter. -
public fun emit_event<T: drop, store>(handle_ref: &mut event::EventHandle<T>, msg: T)
+
#[deprecated]
+public fun emit_event<T: drop, store>(handle_ref: &mut event::EventHandle<T>, msg: T)
 
@@ -130,7 +204,8 @@ Emit an event with payload msg by using handle_ref's k Return the GUID associated with this EventHandle -
public fun guid<T: drop, store>(handle_ref: &event::EventHandle<T>): &guid::GUID
+
#[deprecated]
+public fun guid<T: drop, store>(handle_ref: &event::EventHandle<T>): &guid::GUID
 
@@ -155,7 +230,8 @@ Return the GUID associated with this EventHandle Return the current counter associated with this EventHandle -
public fun counter<T: drop, store>(handle_ref: &event::EventHandle<T>): u64
+
#[deprecated]
+public fun counter<T: drop, store>(handle_ref: &event::EventHandle<T>): u64
 
@@ -180,7 +256,8 @@ Return the current counter associated with this EventHandle Log msg as the countth event associated with the event stream identified by guid -
fun write_to_event_store<T: drop, store>(guid: vector<u8>, count: u64, msg: T)
+
#[deprecated]
+fun write_to_event_store<T: drop, store>(guid: vector<u8>, count: u64, msg: T)
 
@@ -203,7 +280,8 @@ Log msg as the countth event associated with the event Destroy a unique handle. -
public fun destroy_handle<T: drop, store>(handle: event::EventHandle<T>)
+
#[deprecated]
+public fun destroy_handle<T: drop, store>(handle: event::EventHandle<T>)
 
@@ -221,7 +299,7 @@ Destroy a unique handle. - + ## Specification @@ -233,12 +311,47 @@ Destroy a unique handle. - + + +### Function `emit` + + +
public fun emit<T: drop, store>(msg: T)
+
+ + + + +
pragma opaque;
+aborts_if !features::spec_module_event_enabled();
+
+ + + + + +### Function `write_to_module_event_store` + + +
fun write_to_module_event_store<T: drop, store>(msg: T)
+
+ + +Native function use opaque. + + +
pragma opaque;
+
+ + + + ### Function `emit_event` -
public fun emit_event<T: drop, store>(handle_ref: &mut event::EventHandle<T>, msg: T)
+
#[deprecated]
+public fun emit_event<T: drop, store>(handle_ref: &mut event::EventHandle<T>, msg: T)
 
@@ -251,12 +364,13 @@ Destroy a unique handle. - + ### Function `guid` -
public fun guid<T: drop, store>(handle_ref: &event::EventHandle<T>): &guid::GUID
+
#[deprecated]
+public fun guid<T: drop, store>(handle_ref: &event::EventHandle<T>): &guid::GUID
 
@@ -267,12 +381,13 @@ Destroy a unique handle. - + ### Function `counter` -
public fun counter<T: drop, store>(handle_ref: &event::EventHandle<T>): u64
+
#[deprecated]
+public fun counter<T: drop, store>(handle_ref: &event::EventHandle<T>): u64
 
@@ -283,12 +398,13 @@ Destroy a unique handle. - + ### Function `write_to_event_store` -
fun write_to_event_store<T: drop, store>(guid: vector<u8>, count: u64, msg: T)
+
#[deprecated]
+fun write_to_event_store<T: drop, store>(guid: vector<u8>, count: u64, msg: T)
 
@@ -300,12 +416,13 @@ Native function use opaque. - + ### Function `destroy_handle` -
public fun destroy_handle<T: drop, store>(handle: event::EventHandle<T>)
+
#[deprecated]
+public fun destroy_handle<T: drop, store>(handle: event::EventHandle<T>)
 
diff --git a/aptos-move/framework/aptos-framework/doc/reconfiguration.md b/aptos-move/framework/aptos-framework/doc/reconfiguration.md index c70e94925418a..aa3097550409c 100644 --- a/aptos-move/framework/aptos-framework/doc/reconfiguration.md +++ b/aptos-move/framework/aptos-framework/doc/reconfiguration.md @@ -530,6 +530,15 @@ Guid_creation_num should be 2 according to logic. aborts_if exists<Configuration>(@aptos_framework); ensures exists<Configuration>(@aptos_framework); ensures config.epoch == 0 && config.last_reconfiguration_time == 0; +ensures config.events == event::EventHandle<NewEpochEvent> { + counter: 0, + guid: guid::GUID { + id: guid::ID { + creation_num: 2, + addr: @aptos_framework + } + } +};
@@ -547,6 +556,7 @@ Guid_creation_num should be 2 according to logic.
include AbortsIfNotAptosFramework;
 aborts_if exists<DisableReconfiguration>(@aptos_framework);
+ensures exists<DisableReconfiguration>(@aptos_framework);
 
@@ -565,6 +575,7 @@ Make sure the caller is admin and check the resource DisableReconfiguration.
include AbortsIfNotAptosFramework;
 aborts_if !exists<DisableReconfiguration>(@aptos_framework);
+ensures !exists<DisableReconfiguration>(@aptos_framework);
 
@@ -581,6 +592,7 @@ Make sure the caller is admin and check the resource DisableReconfiguration.
aborts_if false;
+ensures result == !exists<DisableReconfiguration>(@aptos_framework);
 
@@ -598,13 +610,15 @@ Make sure the caller is admin and check the resource DisableReconfiguration.
pragma verify_duration_estimate = 120;
 requires exists<stake::ValidatorFees>(@aptos_framework);
-include transaction_fee::RequiresCollectedFeesPerValueLeqBlockAptosSupply;
+requires exists<CoinInfo<AptosCoin>>(@aptos_framework);
 include features::spec_periodical_reward_rate_decrease_enabled() ==> staking_config::StakingRewardsConfigEnabledRequirement;
 include features::spec_collect_and_distribute_gas_fees_enabled() ==> aptos_coin::ExistsAptosCoin;
+include transaction_fee::RequiresCollectedFeesPerValueLeqBlockAptosSupply;
 aborts_if false;
 let success = !(chain_status::is_genesis() || timestamp::spec_now_microseconds() == 0 || !reconfiguration_enabled())
     && timestamp::spec_now_microseconds() != global<Configuration>(@aptos_framework).last_reconfiguration_time;
 ensures success ==> global<Configuration>(@aptos_framework).epoch == old(global<Configuration>(@aptos_framework).epoch) + 1;
+ensures success ==> global<Configuration>(@aptos_framework).last_reconfiguration_time == timestamp::spec_now_microseconds();
 ensures !success ==> global<Configuration>(@aptos_framework).epoch == old(global<Configuration>(@aptos_framework).epoch);
 
@@ -622,6 +636,7 @@ Make sure the caller is admin and check the resource DisableReconfiguration.
aborts_if !exists<Configuration>(@aptos_framework);
+ensures result == global<Configuration>(@aptos_framework).last_reconfiguration_time;
 
@@ -638,6 +653,7 @@ Make sure the caller is admin and check the resource DisableReconfiguration.
aborts_if !exists<Configuration>(@aptos_framework);
+ensures result == global<Configuration>(@aptos_framework).epoch;
 
@@ -658,6 +674,7 @@ Should equal to 0
aborts_if !exists<Configuration>(@aptos_framework);
 let config_ref = global<Configuration>(@aptos_framework);
 aborts_if !(config_ref.epoch == 0 && config_ref.last_reconfiguration_time == 0);
+ensures global<Configuration>(@aptos_framework).epoch == 1;
 
diff --git a/aptos-move/framework/aptos-framework/doc/resource_account.md b/aptos-move/framework/aptos-framework/doc/resource_account.md index dd3ae582ac53d..c55400653add3 100644 --- a/aptos-move/framework/aptos-framework/doc/resource_account.md +++ b/aptos-move/framework/aptos-framework/doc/resource_account.md @@ -13,25 +13,27 @@ This contains several utilities to make using resource accounts more effective. A dev wishing to use resource accounts for a liquidity pool, would likely do the following: + 1. Create a new account using resource_account::create_resource_account. This creates the account, stores the signer_cap within a resource_account::Container, and rotates the key to -the current accounts authentication key or a provided authentication key. -2. Define the LiquidityPool module's address to be the same as the resource account. -3. Construct a transaction package publishing transaction for the resource account using the +the current account's authentication key or a provided authentication key. +2. Define the liquidity pool module's address to be the same as the resource account. +3. Construct a package-publishing transaction for the resource account using the authentication key used in step 1. -4. In the LiquidityPool module's init_module function, call retrieve_resource_account_cap -which will retrive the signer_cap and rotate the resource account's authentication key to +4. In the liquidity pool module's init_module function, call retrieve_resource_account_cap +which will retrieve the signer_cap and rotate the resource account's authentication key to 0x0, effectively locking it off. -5. When adding a new coin, the liquidity pool will load the capability and hence the signer to -register and store new LiquidityCoin resources. +5. When adding a new coin, the liquidity pool will load the capability and hence the signer to +register and store new LiquidityCoin resources. Code snippets to help: + ``` -fun init_module(resource: &signer) { +fun init_module(resource_account: &signer) { let dev_address = @DEV_ADDR; -let signer_cap = retrieve_resource_account_cap(resource, dev_address); +let signer_cap = retrieve_resource_account_cap(resource_account, dev_address); let lp = LiquidityPoolInfo { signer_cap: signer_cap, ... }; -move_to(resource, lp); +move_to(resource_account, lp); } ``` @@ -478,6 +480,8 @@ the SignerCapability. aborts_if exists<Container>(source_addr) && simple_map::spec_contains_key(container.store, resource_addr); aborts_if get && !(exists<Account>(resource_addr) && len(global<Account>(source_addr).authentication_key) == 32); aborts_if !get && !(exists<Account>(resource_addr) && len(optional_auth_key) == 32); + ensures simple_map::spec_contains_key(global<Container>(source_addr).store, resource_addr); + ensures exists<Container>(source_addr); }
@@ -502,6 +506,8 @@ the SignerCapability. aborts_if exists<Container>(source_addr) && simple_map::spec_contains_key(container.store, resource_addr); aborts_if get && len(global<account::Account>(source_addr).authentication_key) != 32; aborts_if !get && len(optional_auth_key) != 32; + ensures simple_map::spec_contains_key(global<Container>(source_addr).store, resource_addr); + ensures exists<Container>(source_addr); }
@@ -524,7 +530,8 @@ the SignerCapability. aborts_if !simple_map::spec_contains_key(container.store, resource_addr); aborts_if !exists<account::Account>(resource_addr); ensures simple_map::spec_contains_key(old(global<Container>(source_addr)).store, resource_addr) && -simple_map::spec_len(old(global<Container>(source_addr)).store) == 1 ==> !exists<Container>(source_addr); + simple_map::spec_len(old(global<Container>(source_addr)).store) == 1 ==> !exists<Container>(source_addr); +ensures exists<Container>(source_addr) ==> !simple_map::spec_contains_key(global<Container>(source_addr).store, resource_addr);
diff --git a/aptos-move/framework/aptos-framework/doc/transaction_fee.md b/aptos-move/framework/aptos-framework/doc/transaction_fee.md index 934bd8f83b1b3..66a96b876b46a 100644 --- a/aptos-move/framework/aptos-framework/doc/transaction_fee.md +++ b/aptos-move/framework/aptos-framework/doc/transaction_fee.md @@ -532,7 +532,9 @@ Only called during genesis. aborts_if exists<ValidatorFees>(aptos_addr); include system_addresses::AbortsIfNotAptosFramework {account: aptos_framework}; include aggregator_factory::CreateAggregatorInternalAbortsIf; +aborts_if exists<CollectedFeesPerBlock>(aptos_addr); ensures exists<ValidatorFees>(aptos_addr); +ensures exists<CollectedFeesPerBlock>(aptos_addr);
@@ -551,10 +553,7 @@ Only called during genesis.
aborts_if new_burn_percentage > 100;
 let aptos_addr = signer::address_of(aptos_framework);
 aborts_if !system_addresses::is_aptos_framework_address(aptos_addr);
-requires exists<AptosCoinCapabilities>(@aptos_framework);
-requires exists<stake::ValidatorFees>(@aptos_framework);
-requires exists<CoinInfo<AptosCoin>>(@aptos_framework);
-include RequiresCollectedFeesPerValueLeqBlockAptosSupply;
+include ProcessCollectedFeesRequiresAndEnsures;
 ensures exists<CollectedFeesPerBlock>(@aptos_framework) ==>
     global<CollectedFeesPerBlock>(@aptos_framework).burn_percentage == new_burn_percentage;
 
@@ -594,14 +593,8 @@ Only called during genesis. requires exists<AptosCoinCapabilities>(@aptos_framework); requires exists<CoinInfo<AptosCoin>>(@aptos_framework); let amount_to_burn = (burn_percentage * coin::value(coin)) / 100; -let maybe_supply = coin::get_coin_supply_opt<AptosCoin>(); -aborts_if amount_to_burn > 0 && option::is_some(maybe_supply) && optional_aggregator::is_parallelizable(option::borrow(maybe_supply)) - && aggregator::spec_aggregator_get_val(option::borrow(option::borrow(maybe_supply).aggregator)) < - amount_to_burn; -aborts_if option::is_some(maybe_supply) && !optional_aggregator::is_parallelizable(option::borrow(maybe_supply)) - && option::borrow(option::borrow(maybe_supply).integer).value < - amount_to_burn; -include (amount_to_burn > 0) ==> coin::AbortsIfNotExistCoinInfo<AptosCoin>; +include amount_to_burn > 0 ==> coin::AbortsIfAggregator<AptosCoin>{ coin: Coin<AptosCoin>{ value: amount_to_burn } }; +ensures coin.value == old(coin).value - amount_to_burn;
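As a quick sanity check of the new `ensures` clause (illustrative numbers only): burning a 25% fraction of a coin worth 400 octas gives

$$
\mathit{amount\_to\_burn} = \frac{25 \times 400}{100} = 100,
\qquad
\mathit{coin.value} = 400 - 100 = 300 .
$$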
@@ -632,6 +625,41 @@ Only called during genesis. + + + + +
schema ProcessCollectedFeesRequiresAndEnsures {
+    requires exists<AptosCoinCapabilities>(@aptos_framework);
+    requires exists<stake::ValidatorFees>(@aptos_framework);
+    requires exists<CoinInfo<AptosCoin>>(@aptos_framework);
+    include RequiresCollectedFeesPerValueLeqBlockAptosSupply;
+    aborts_if false;
+    let collected_fees = global<CollectedFeesPerBlock>(@aptos_framework);
+    let post post_collected_fees = global<CollectedFeesPerBlock>(@aptos_framework);
+    let pre_amount = aggregator::spec_aggregator_get_val(collected_fees.amount.value);
+    let post post_amount = aggregator::spec_aggregator_get_val(post_collected_fees.amount.value);
+    let fees_table = global<stake::ValidatorFees>(@aptos_framework).fees_table;
+    let post post_fees_table = global<stake::ValidatorFees>(@aptos_framework).fees_table;
+    let proposer = option::spec_borrow(collected_fees.proposer);
+    let fee_to_add = pre_amount - pre_amount * collected_fees.burn_percentage / 100;
+    ensures is_fees_collection_enabled() ==> option::spec_is_none(post_collected_fees.proposer) && post_amount == 0;
+    ensures is_fees_collection_enabled() && aggregator::spec_read(collected_fees.amount.value) > 0 &&
+        option::spec_is_some(collected_fees.proposer) ==>
+        if (proposer != @vm_reserved) {
+            if (table::spec_contains(fees_table, proposer)) {
+                table::spec_get(post_fees_table, proposer).value == table::spec_get(fees_table, proposer).value + fee_to_add
+            } else {
+                table::spec_get(post_fees_table, proposer).value == fee_to_add

+            }
+        } else {
+            option::spec_is_none(post_collected_fees.proposer) && post_amount == 0
+        };
+}
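To make `fee_to_add` concrete (hypothetical numbers): if the block collected `pre_amount = 1000` octas and `burn_percentage = 25`, the proposer's entry in the fees table grows by

$$
\mathit{fee\_to\_add} = 1000 - \frac{1000 \times 25}{100} = 750 ,
$$

while `post_amount` returns to 0, as required by the first `ensures` clause.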
+
+ + + ### Function `process_collected_fees` @@ -643,10 +671,7 @@ Only called during genesis. -
requires exists<AptosCoinCapabilities>(@aptos_framework);
-requires exists<stake::ValidatorFees>(@aptos_framework);
-requires exists<CoinInfo<AptosCoin>>(@aptos_framework);
-include RequiresCollectedFeesPerValueLeqBlockAptosSupply;
+
include ProcessCollectedFeesRequiresAndEnsures;
 
@@ -704,13 +729,18 @@ Only called during genesis.
let collected_fees = global<CollectedFeesPerBlock>(@aptos_framework).amount;
 let aggr = collected_fees.value;
+let coin_store = global<coin::CoinStore<AptosCoin>>(account);
 aborts_if !exists<CollectedFeesPerBlock>(@aptos_framework);
 aborts_if fee > 0 && !exists<coin::CoinStore<AptosCoin>>(account);
-aborts_if fee > 0 && global<coin::CoinStore<AptosCoin>>(account).coin.value < fee;
+aborts_if fee > 0 && coin_store.coin.value < fee;
 aborts_if fee > 0 && aggregator::spec_aggregator_get_val(aggr)
     + fee > aggregator::spec_get_limit(aggr);
 aborts_if fee > 0 && aggregator::spec_aggregator_get_val(aggr)
     + fee > MAX_U128;
+let post post_coin_store = global<coin::CoinStore<AptosCoin>>(account);
+let post post_collected_fees = global<CollectedFeesPerBlock>(@aptos_framework).amount;
+ensures post_coin_store.coin.value == coin_store.coin.value - fee;
+ensures aggregator::spec_aggregator_get_val(post_collected_fees.value) == aggregator::spec_aggregator_get_val(aggr) + fee;
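Reading the two new `ensures` clauses together with the abort conditions (illustrative values): collecting `fee = 50` from an account whose `CoinStore<AptosCoin>` holds 200, into a block aggregator currently at 1000 with a limit of 10000, must leave

$$
\mathit{post\_coin\_store.coin.value} = 200 - 50 = 150 ,
\qquad
1000 + 50 = 1050 \le 10000 ,
$$

so neither the balance check nor the aggregator-limit check aborts.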
 
diff --git a/aptos-move/framework/aptos-framework/doc/transaction_validation.md b/aptos-move/framework/aptos-framework/doc/transaction_validation.md index b3bf92ca9444f..ceaf398323be3 100644 --- a/aptos-move/framework/aptos-framework/doc/transaction_validation.md +++ b/aptos-move/framework/aptos-framework/doc/transaction_validation.md @@ -667,6 +667,7 @@ Aborts if TransactionValidation already exists.
let addr = signer::address_of(aptos_framework);
 aborts_if !system_addresses::is_aptos_framework_address(addr);
 aborts_if exists<TransactionValidation>(addr);
+ensures exists<TransactionValidation>(addr);
 
@@ -772,6 +773,10 @@ Give some constraints that may abort according to the conditions. !account::exists_at(secondary_signer_addresses[i]) || secondary_signer_public_key_hashes[i] != account::get_authentication_key(secondary_signer_addresses[i]); + ensures forall i in 0..num_secondary_signers: + account::exists_at(secondary_signer_addresses[i]) + && secondary_signer_public_key_hashes[i] == + account::get_authentication_key(secondary_signer_addresses[i]); }
@@ -836,6 +841,7 @@ not equal the number of signers.
pragma verify_duration_estimate = 120;
+aborts_if !features::spec_is_enabled(features::FEE_PAYER_ENABLED);
 let gas_payer = fee_payer_address;
 include PrologueCommonAbortsIf {
     gas_payer,
@@ -867,46 +873,7 @@ Abort according to the conditions.
 Skip transaction_fee::burn_fee verification.
 
 
-
aborts_if !(txn_max_gas_units >= gas_units_remaining);
-let gas_used = txn_max_gas_units - gas_units_remaining;
-aborts_if !(txn_gas_price * gas_used <= MAX_U64);
-let transaction_fee_amount = txn_gas_price * gas_used;
-let addr = signer::address_of(account);
-aborts_if !exists<CoinStore<AptosCoin>>(addr);
-aborts_if !(global<CoinStore<AptosCoin>>(addr).coin.value >= transaction_fee_amount);
-aborts_if !exists<Account>(addr);
-aborts_if !(global<Account>(addr).sequence_number < MAX_U64);
-let pre_balance = global<coin::CoinStore<AptosCoin>>(addr).coin.value;
-let post balance = global<coin::CoinStore<AptosCoin>>(addr).coin.value;
-let pre_account = global<account::Account>(addr);
-let post account = global<account::Account>(addr);
-ensures balance == pre_balance - transaction_fee_amount;
-ensures account.sequence_number == pre_account.sequence_number + 1;
-let collected_fees = global<CollectedFeesPerBlock>(@aptos_framework).amount;
-let aggr = collected_fees.value;
-let aggr_val = aggregator::spec_aggregator_get_val(aggr);
-let aggr_lim = aggregator::spec_get_limit(aggr);
-let aptos_addr = type_info::type_of<AptosCoin>().account_address;
-let apt_addr = type_info::type_of<AptosCoin>().account_address;
-let maybe_apt_supply = global<CoinInfo<AptosCoin>>(apt_addr).supply;
-let apt_supply = option::spec_borrow(maybe_apt_supply);
-let apt_supply_value = optional_aggregator::optional_aggregator_value(apt_supply);
-aborts_if if (features::spec_is_enabled(features::COLLECT_AND_DISTRIBUTE_GAS_FEES)) {
-    !exists<CollectedFeesPerBlock>(@aptos_framework)
-        || transaction_fee_amount > 0 &&
-            ( // `exists<CoinStore<AptosCoin>>(addr)` checked above.
-              // Sufficiency of funds is checked above.
-              aggr_val + transaction_fee_amount > aggr_lim
-                || aggr_val + transaction_fee_amount > MAX_U128)
-} else {
-    // Existence of CoinStore in `addr` is checked above.
-    // Sufficiency of funds is checked above.
-    !exists<AptosCoinCapabilities>(@aptos_framework) ||
-    // Existence of APT's CoinInfo
-    transaction_fee_amount > 0 && !exists<CoinInfo<AptosCoin>>(aptos_addr) ||
-    // Sufficiency of APT's supply
-    option::spec_is_some(maybe_apt_supply) && apt_supply_value < transaction_fee_amount
-};
+
include EpilogueGasPayerAbortsIf { gas_payer: signer::address_of(account), _txn_sequence_number: txn_sequence_number };
 
@@ -925,46 +892,74 @@ Abort according to the conditions. Skip transaction_fee::burn_fee verification. -
aborts_if !(txn_max_gas_units >= gas_units_remaining);
-let gas_used = txn_max_gas_units - gas_units_remaining;
-aborts_if !(txn_gas_price * gas_used <= MAX_U64);
-let transaction_fee_amount = txn_gas_price * gas_used;
-let addr = signer::address_of(account);
-aborts_if !exists<CoinStore<AptosCoin>>(gas_payer);
-aborts_if !(global<CoinStore<AptosCoin>>(gas_payer).coin.value >= transaction_fee_amount);
-aborts_if !exists<Account>(addr);
-aborts_if !(global<Account>(addr).sequence_number < MAX_U64);
-let pre_balance = global<coin::CoinStore<AptosCoin>>(gas_payer).coin.value;
-let post balance = global<coin::CoinStore<AptosCoin>>(gas_payer).coin.value;
-let pre_account = global<account::Account>(addr);
-let post account = global<account::Account>(addr);
-ensures balance == pre_balance - transaction_fee_amount;
-ensures account.sequence_number == pre_account.sequence_number + 1;
-let collected_fees = global<CollectedFeesPerBlock>(@aptos_framework).amount;
-let aggr = collected_fees.value;
-let aggr_val = aggregator::spec_aggregator_get_val(aggr);
-let aggr_lim = aggregator::spec_get_limit(aggr);
-let aptos_addr = type_info::type_of<AptosCoin>().account_address;
-let apt_addr = type_info::type_of<AptosCoin>().account_address;
-let maybe_apt_supply = global<CoinInfo<AptosCoin>>(apt_addr).supply;
-let apt_supply = option::spec_borrow(maybe_apt_supply);
-let apt_supply_value = optional_aggregator::optional_aggregator_value(apt_supply);
-aborts_if if (features::spec_is_enabled(features::COLLECT_AND_DISTRIBUTE_GAS_FEES)) {
-    !exists<CollectedFeesPerBlock>(@aptos_framework)
-        || transaction_fee_amount > 0 &&
-        ( // `exists<CoinStore<AptosCoin>>(addr)` checked above.
-            // Sufficiency of funds is checked above.
-            aggr_val + transaction_fee_amount > aggr_lim
-                || aggr_val + transaction_fee_amount > MAX_U128)
-} else {
-    // Existence of CoinStore in `addr` is checked above.
-    // Sufficiency of funds is checked above.
-    !exists<AptosCoinCapabilities>(@aptos_framework) ||
-        // Existence of APT's CoinInfo
-        transaction_fee_amount > 0 && !exists<CoinInfo<AptosCoin>>(aptos_addr) ||
-        // Sufficiency of APT's supply
-        option::spec_is_some(maybe_apt_supply) && apt_supply_value < transaction_fee_amount
-};
+
include EpilogueGasPayerAbortsIf;
+
+ + + + + + + +
schema EpilogueGasPayerAbortsIf {
+    account: signer;
+    gas_payer: address;
+    _txn_sequence_number: u64;
+    txn_gas_price: u64;
+    txn_max_gas_units: u64;
+    gas_units_remaining: u64;
+    aborts_if !(txn_max_gas_units >= gas_units_remaining);
+    let gas_used = txn_max_gas_units - gas_units_remaining;
+    aborts_if !(txn_gas_price * gas_used <= MAX_U64);
+    let transaction_fee_amount = txn_gas_price * gas_used;
+    let addr = signer::address_of(account);
+    aborts_if !exists<CoinStore<AptosCoin>>(gas_payer);
+    aborts_if !(global<CoinStore<AptosCoin>>(gas_payer).coin.value >= transaction_fee_amount);
+    aborts_if !exists<Account>(addr);
+    aborts_if !(global<Account>(addr).sequence_number < MAX_U64);
+    let pre_balance = global<coin::CoinStore<AptosCoin>>(gas_payer).coin.value;
+    let post balance = global<coin::CoinStore<AptosCoin>>(gas_payer).coin.value;
+    let pre_account = global<account::Account>(addr);
+    let post account = global<account::Account>(addr);
+    ensures balance == pre_balance - transaction_fee_amount;
+    ensures account.sequence_number == pre_account.sequence_number + 1;
+    let collected_fees = global<CollectedFeesPerBlock>(@aptos_framework).amount;
+    let aggr = collected_fees.value;
+    let aggr_val = aggregator::spec_aggregator_get_val(aggr);
+    let aggr_lim = aggregator::spec_get_limit(aggr);
+    let aptos_addr = type_info::type_of<AptosCoin>().account_address;
+    let apt_addr = type_info::type_of<AptosCoin>().account_address;
+    let maybe_apt_supply = global<CoinInfo<AptosCoin>>(apt_addr).supply;
+    let apt_supply = option::spec_borrow(maybe_apt_supply);
+    let apt_supply_value = optional_aggregator::optional_aggregator_value(apt_supply);
+    aborts_if if (features::spec_is_enabled(features::COLLECT_AND_DISTRIBUTE_GAS_FEES)) {
+        !exists<CollectedFeesPerBlock>(@aptos_framework)
+            || transaction_fee_amount > 0 &&
+            ( // `exists<CoinStore<AptosCoin>>(addr)` checked above.
+                // Sufficiency of funds is checked above.
+                aggr_val + transaction_fee_amount > aggr_lim
+                    || aggr_val + transaction_fee_amount > MAX_U128)
+    } else {
+        // Existence of CoinStore in `addr` is checked above.
+        // Sufficiency of funds is checked above.
+        !exists<AptosCoinCapabilities>(@aptos_framework) ||
+            // Existence of APT's CoinInfo
+            transaction_fee_amount > 0 && !exists<CoinInfo<AptosCoin>>(aptos_addr) ||
+            // Sufficiency of APT's supply
+            option::spec_is_some(maybe_apt_supply) && apt_supply_value < transaction_fee_amount
+    };
+    let post post_collected_fees = global<CollectedFeesPerBlock>(@aptos_framework);
+    let post post_collected_fees_value = aggregator::spec_aggregator_get_val(post_collected_fees.amount.value);
+    let post post_maybe_apt_supply = global<CoinInfo<AptosCoin>>(apt_addr).supply;
+    let post post_apt_supply = option::spec_borrow(post_maybe_apt_supply);
+    let post post_apt_supply_value = optional_aggregator::optional_aggregator_value(post_apt_supply);
+    ensures transaction_fee_amount > 0 ==>
+        if (features::spec_is_enabled(features::COLLECT_AND_DISTRIBUTE_GAS_FEES)) {
+            post_collected_fees_value == aggr_val + transaction_fee_amount
+        } else {
+            option::spec_is_some(maybe_apt_supply) ==> post_apt_supply_value == apt_supply_value - transaction_fee_amount
+        };
+}
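The fee charged by this schema is a simple function of the gas parameters. A worked instance with hypothetical values, `txn_max_gas_units = 2000`, `gas_units_remaining = 1500`, `txn_gas_price = 100`:

$$
\mathit{gas\_used} = 2000 - 1500 = 500 ,
\qquad
\mathit{transaction\_fee\_amount} = 100 \times 500 = 50000 .
$$

Depending on `COLLECT_AND_DISTRIBUTE_GAS_FEES`, the final `ensures` then requires either the block aggregator to grow by 50000 or the tracked APT supply to shrink by 50000.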
 
diff --git a/aptos-move/framework/aptos-framework/doc/vesting.md b/aptos-move/framework/aptos-framework/doc/vesting.md index c96f22981224e..ddff724ef6d07 100644 --- a/aptos-move/framework/aptos-framework/doc/vesting.md +++ b/aptos-move/framework/aptos-framework/doc/vesting.md @@ -2899,27 +2899,41 @@ This address should be deterministic for the same admin and vesting contract cre -
pragma verify = false;
-include ActiveVestingContractAbortsIf<VestingContract>{contract_address: vesting_contract_address};
-let vesting_contract = global<VestingContract>(vesting_contract_address);
-let staker = vesting_contract_address;
-let operator = vesting_contract.staking.operator;
-let staking_contracts = global<staking_contract::Store>(staker).staking_contracts;
-let staking_contract = simple_map::spec_get(staking_contracts, operator);
-aborts_if !exists<staking_contract::Store>(staker);
-aborts_if !simple_map::spec_contains_key(staking_contracts, operator);
-let pool_address = staking_contract.pool_address;
-let stake_pool = borrow_global<stake::StakePool>(pool_address);
-let active = coin::value(stake_pool.active);
-let pending_active = coin::value(stake_pool.pending_active);
-let total_active_stake = active + pending_active;
-let accumulated_rewards = total_active_stake - staking_contract.principal;
-let commission_amount = accumulated_rewards * staking_contract.commission_percentage / 100;
-aborts_if !exists<stake::StakePool>(pool_address);
-aborts_if active + pending_active > MAX_U64;
-aborts_if total_active_stake < staking_contract.principal;
-aborts_if accumulated_rewards * staking_contract.commission_percentage > MAX_U64;
-aborts_if (vesting_contract.remaining_grant + commission_amount) > total_active_stake;
+
pragma verify_duration_estimate = 300;
+include TotalAccumulatedRewardsAbortsIf;
+
+ + + + + + + +
schema TotalAccumulatedRewardsAbortsIf {
+    vesting_contract_address: address;
+    requires staking_contract.commission_percentage >= 0 && staking_contract.commission_percentage <= 100;
+    include ActiveVestingContractAbortsIf<VestingContract>{contract_address: vesting_contract_address};
+    let vesting_contract = global<VestingContract>(vesting_contract_address);
+    let staker = vesting_contract_address;
+    let operator = vesting_contract.staking.operator;
+    let staking_contracts = global<staking_contract::Store>(staker).staking_contracts;
+    let staking_contract = simple_map::spec_get(staking_contracts, operator);
+    aborts_if !exists<staking_contract::Store>(staker);
+    aborts_if !simple_map::spec_contains_key(staking_contracts, operator);
+    let pool_address = staking_contract.pool_address;
+    let stake_pool = global<stake::StakePool>(pool_address);
+    let active = coin::value(stake_pool.active);
+    let pending_active = coin::value(stake_pool.pending_active);
+    let total_active_stake = active + pending_active;
+    let accumulated_rewards = total_active_stake - staking_contract.principal;
+    let commission_amount = accumulated_rewards * staking_contract.commission_percentage / 100;
+    aborts_if !exists<stake::StakePool>(pool_address);
+    aborts_if active + pending_active > MAX_U64;
+    aborts_if total_active_stake < staking_contract.principal;
+    aborts_if accumulated_rewards * staking_contract.commission_percentage > MAX_U64;
+    aborts_if (vesting_contract.remaining_grant + commission_amount) > total_active_stake;
+    aborts_if total_active_stake < vesting_contract.remaining_grant;
+}
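A small numeric instance of the quantities constrained above (hypothetical values): with `active = 1050`, `pending_active = 50`, `principal = 1000` and a 10% commission,

$$
\mathit{total\_active\_stake} = 1100 ,
\qquad
\mathit{accumulated\_rewards} = 1100 - 1000 = 100 ,
\qquad
\mathit{commission\_amount} = \frac{100 \times 10}{100} = 10 ,
$$

so the last two abort conditions additionally demand `remaining_grant + 10 <= 1100` and `remaining_grant <= 1100`.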
 
@@ -2937,6 +2951,26 @@ This address should be deterministic for the same admin and vesting contract cre
pragma verify = false;
+pragma verify_duration_estimate = 1000;
+include TotalAccumulatedRewardsAbortsIf;
+let vesting_contract = global<VestingContract>(vesting_contract_address);
+let operator = vesting_contract.staking.operator;
+let staking_contracts = global<staking_contract::Store>(vesting_contract_address).staking_contracts;
+let staking_contract = simple_map::spec_get(staking_contracts, operator);
+let pool_address = staking_contract.pool_address;
+let stake_pool = global<stake::StakePool>(pool_address);
+let active = coin::value(stake_pool.active);
+let pending_active = coin::value(stake_pool.pending_active);
+let total_active_stake = active + pending_active;
+let accumulated_rewards = total_active_stake - staking_contract.principal;
+let commission_amount = accumulated_rewards * staking_contract.commission_percentage / 100;
+let total_accumulated_rewards = total_active_stake - vesting_contract.remaining_grant - commission_amount;
+let shareholder = spec_shareholder(vesting_contract_address, shareholder_or_beneficiary);
+let pool = vesting_contract.grant_pool;
+let shares = pool_u64::spec_shares(pool, shareholder);
+aborts_if pool.total_coins > 0 && pool.total_shares > 0
+    && (shares * total_accumulated_rewards) / pool.total_shares > MAX_U64;
+ensures result == pool_u64::spec_shares_to_amount_with_total_coins(pool, shares, total_accumulated_rewards);
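Continuing with hypothetical values: for `total_accumulated_rewards = 90`, a grant pool with `total_shares = 1000`, and a shareholder holding 100 shares, the quantity guarded against overflow by the new abort condition evaluates to

$$
\frac{100 \times 90}{1000} = 9 ,
$$

which, for a non-empty pool, is also the amount asserted by the `ensures` clause via `pool_u64::spec_shares_to_amount_with_total_coins`.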
 
@@ -2958,6 +2992,15 @@ This address should be deterministic for the same admin and vesting contract cre + + + + +
fun spec_shareholder(vesting_contract_address: address, shareholder_or_beneficiary: address): address;
+
+ + + ### Function `shareholder` @@ -2970,7 +3013,9 @@ This address should be deterministic for the same admin and vesting contract cre -
include ActiveVestingContractAbortsIf<VestingContract>{contract_address: vesting_contract_address};
+
pragma opaque;
+include ActiveVestingContractAbortsIf<VestingContract>{contract_address: vesting_contract_address};
+ensures [abstract] result == spec_shareholder(vesting_contract_address, shareholder_or_beneficiary);
 
@@ -3006,6 +3051,11 @@ This address should be deterministic for the same admin and vesting contract cre
pragma verify = false;
+aborts_if withdrawal_address == @aptos_framework || withdrawal_address == @vm_reserved;
+aborts_if !exists<account::Account>(withdrawal_address);
+aborts_if !exists<coin::CoinStore<AptosCoin>>(withdrawal_address);
+aborts_if len(shareholders) == 0;
+aborts_if simple_map::spec_len(buy_ins) != len(shareholders);
 
@@ -3022,6 +3072,32 @@ This address should be deterministic for the same admin and vesting contract cre
pragma verify = false;
+include UnlockRewardsAbortsIf;
+
+ + + + + + + +
schema UnlockRewardsAbortsIf {
+    contract_address: address;
+    include TotalAccumulatedRewardsAbortsIf { vesting_contract_address: contract_address };
+    let vesting_contract = global<VestingContract>(contract_address);
+    let operator = vesting_contract.staking.operator;
+    let staking_contracts = global<staking_contract::Store>(contract_address).staking_contracts;
+    let staking_contract = simple_map::spec_get(staking_contracts, operator);
+    let pool_address = staking_contract.pool_address;
+    let stake_pool = global<stake::StakePool>(pool_address);
+    let active = coin::value(stake_pool.active);
+    let pending_active = coin::value(stake_pool.pending_active);
+    let total_active_stake = active + pending_active;
+    let accumulated_rewards = total_active_stake - staking_contract.principal;
+    let commission_amount = accumulated_rewards * staking_contract.commission_percentage / 100;
+    let amount = total_active_stake - vesting_contract.remaining_grant - commission_amount;
+    include UnlockStakeAbortsIf { vesting_contract, amount };
+}
 
@@ -3038,6 +3114,8 @@ This address should be deterministic for the same admin and vesting contract cre
pragma verify = false;
+aborts_if len(contract_addresses) == 0;
+include PreconditionAbortsIf;
 
@@ -3054,6 +3132,7 @@ This address should be deterministic for the same admin and vesting contract cre
pragma verify = false;
+include UnlockRewardsAbortsIf;
 
@@ -3070,6 +3149,21 @@ This address should be deterministic for the same admin and vesting contract cre
pragma verify = false;
+aborts_if len(contract_addresses) == 0;
+include PreconditionAbortsIf;
+
+ + + + + + + +
schema PreconditionAbortsIf {
+    contract_addresses: vector<address>;
+    requires forall i in 0..len(contract_addresses): simple_map::spec_get(global<staking_contract::Store>(contract_addresses[i]).staking_contracts, global<VestingContract>(contract_addresses[i]).staking.operator).commission_percentage >= 0
+        && simple_map::spec_get(global<staking_contract::Store>(contract_addresses[i]).staking_contracts, global<VestingContract>(contract_addresses[i]).staking.operator).commission_percentage <= 100;
+}
 
@@ -3086,6 +3180,9 @@ This address should be deterministic for the same admin and vesting contract cre
pragma verify = false;
+include ActiveVestingContractAbortsIf<VestingContract>;
+let vesting_contract = global<VestingContract>(contract_address);
+include WithdrawStakeAbortsIf { vesting_contract };
 
@@ -3102,6 +3199,7 @@ This address should be deterministic for the same admin and vesting contract cre
pragma verify = false;
+aborts_if len(contract_addresses) == 0;
 
@@ -3118,6 +3216,9 @@ This address should be deterministic for the same admin and vesting contract cre
pragma verify = false;
+include ActiveVestingContractAbortsIf<VestingContract>;
+let vesting_contract = global<VestingContract>(contract_address);
+include WithdrawStakeAbortsIf { vesting_contract };
 
@@ -3133,10 +3234,11 @@ This address should be deterministic for the same admin and vesting contract cre -
pragma aborts_if_is_partial;
-include VerifyAdminAbortsIf;
+
pragma verify = false;
 let vesting_contract = global<VestingContract>(contract_address);
 aborts_if vesting_contract.state != VESTING_POOL_TERMINATED;
+include VerifyAdminAbortsIf;
+include WithdrawStakeAbortsIf { vesting_contract };
 
@@ -3152,8 +3254,17 @@ This address should be deterministic for the same admin and vesting contract cre -
pragma aborts_if_is_partial;
+
pragma verify = false;
 include VerifyAdminAbortsIf;
+let vesting_contract = global<VestingContract>(contract_address);
+let acc = vesting_contract.signer_cap.account;
+let old_operator = vesting_contract.staking.operator;
+include staking_contract::ContractExistsAbortsIf { staker: acc, operator: old_operator };
+let store = global<staking_contract::Store>(acc);
+let staking_contracts = store.staking_contracts;
+aborts_if simple_map::spec_contains_key(staking_contracts, new_operator);
+let staking_contract = simple_map::spec_get(staking_contracts, old_operator);
+include DistributeInternalAbortsIf { staker: acc, operator: old_operator, staking_contract, distribute_events: store.distribute_events };
 
@@ -3205,14 +3316,17 @@ This address should be deterministic for the same admin and vesting contract cre -
pragma aborts_if_is_partial;
-aborts_if !exists<VestingContract>(contract_address);
-let vesting_contract1 = global<VestingContract>(contract_address);
-aborts_if signer::address_of(admin) != vesting_contract1.admin;
-let operator = vesting_contract1.staking.operator;
-let staker = vesting_contract1.signer_cap.account;
-include staking_contract::ContractExistsAbortsIf;
-include staking_contract::IncreaseLockupWithCapAbortsIf;
+
aborts_if !exists<VestingContract>(contract_address);
+let vesting_contract = global<VestingContract>(contract_address);
+aborts_if signer::address_of(admin) != vesting_contract.admin;
+let operator = vesting_contract.staking.operator;
+let staker = vesting_contract.signer_cap.account;
+include staking_contract::ContractExistsAbortsIf {staker, operator};
+include staking_contract::IncreaseLockupWithCapAbortsIf {staker, operator};
+let store = global<staking_contract::Store>(staker);
+let staking_contract = simple_map::spec_get(store.staking_contracts, operator);
+let pool_address = staking_contract.owner_cap.pool_address;
+aborts_if !exists<stake::StakePool>(vesting_contract.staking.pool_address);
 
@@ -3248,10 +3362,17 @@ This address should be deterministic for the same admin and vesting contract cre -
pragma aborts_if_is_partial;
-aborts_if !exists<VestingContract>(contract_address);
-let post vesting_contract = global<VestingContract>(contract_address);
-ensures !simple_map::spec_contains_key(vesting_contract.beneficiaries,shareholder);
+
aborts_if !exists<VestingContract>(contract_address);
+let addr = signer::address_of(account);
+let vesting_contract = global<VestingContract>(contract_address);
+aborts_if addr != vesting_contract.admin && !std::string::spec_internal_check_utf8(ROLE_BENEFICIARY_RESETTER);
+aborts_if addr != vesting_contract.admin && !exists<VestingAccountManagement>(contract_address);
+let roles = global<VestingAccountManagement>(contract_address).roles;
+let role = std::string::spec_utf8(ROLE_BENEFICIARY_RESETTER);
+aborts_if addr != vesting_contract.admin && !simple_map::spec_contains_key(roles, role);
+aborts_if addr != vesting_contract.admin && addr != simple_map::spec_get(roles, role);
+let post post_vesting_contract = global<VestingContract>(contract_address);
+ensures !simple_map::spec_contains_key(post_vesting_contract.beneficiaries,shareholder);
 
@@ -3361,21 +3482,29 @@ This address should be deterministic for the same admin and vesting contract cre -
pragma verify=false;
-pragma aborts_if_is_partial;
+
pragma verify_duration_estimate = 300;
 let admin_addr = signer::address_of(admin);
 let admin_store = global<AdminStore>(admin_addr);
 let seed = bcs::to_bytes(admin_addr);
 let nonce = bcs::to_bytes(admin_store.nonce);
-let first = concat(seed,nonce);
-let second = concat(first,VESTING_POOL_SALT);
-let end = concat(second,contract_creation_seed);
+let first = concat(seed, nonce);
+let second = concat(first, VESTING_POOL_SALT);
+let end = concat(second, contract_creation_seed);
 let resource_addr = account::spec_create_resource_address(admin_addr, end);
 aborts_if !exists<AdminStore>(admin_addr);
 aborts_if len(account::ZERO_AUTH_KEY) != 32;
 aborts_if admin_store.nonce + 1 > MAX_U64;
 let ea = account::exists_at(resource_addr);
 include if (ea) account::CreateResourceAccountAbortsIf else account::CreateAccountAbortsIf {addr: resource_addr};
+let acc = global<account::Account>(resource_addr);
+let post post_acc = global<account::Account>(resource_addr);
+aborts_if !exists<coin::CoinStore<AptosCoin>>(resource_addr) && !aptos_std::type_info::spec_is_struct<AptosCoin>();
+aborts_if !exists<coin::CoinStore<AptosCoin>>(resource_addr) && ea && acc.guid_creation_num + 2 > MAX_U64;
+aborts_if !exists<coin::CoinStore<AptosCoin>>(resource_addr) && ea && acc.guid_creation_num + 2 >= account::MAX_GUID_CREATION_NUM;
+ensures exists<account::Account>(resource_addr) && post_acc.authentication_key == account::ZERO_AUTH_KEY &&
+        exists<coin::CoinStore<AptosCoin>>(resource_addr);
+ensures signer::address_of(result_1) == resource_addr;
+ensures result_2.account == resource_addr;
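The seed assembled above fixes the contract address deterministically. A sketch of the same derivation as an ordinary Move helper; the module, function name, and salt literal are illustrative (the salt is assumed to equal `VESTING_POOL_SALT` in `vesting.move`):

```
module vesting_example_addr::vesting_address_example {
    use std::bcs;
    use std::vector;
    use aptos_framework::account;

    /// Hypothetical helper mirroring the spec's seed construction.
    public fun derived_vesting_contract_address(
        admin_addr: address,
        nonce: u64,
        contract_creation_seed: vector<u8>
    ): address {
        // seed = bcs(admin_addr) ++ bcs(nonce) ++ salt ++ contract_creation_seed
        let seed = bcs::to_bytes(&admin_addr);
        vector::append(&mut seed, bcs::to_bytes(&nonce));
        vector::append(&mut seed, b"aptos_framework::vesting"); // assumed salt value
        vector::append(&mut seed, contract_creation_seed);
        account::create_resource_address(&admin_addr, seed)
    }
}
```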
 
@@ -3440,6 +3569,25 @@ This address should be deterministic for the same admin and vesting contract cre
pragma verify = false;
+include UnlockStakeAbortsIf;
+
+ + + + + + + +
schema UnlockStakeAbortsIf {
+    vesting_contract: &VestingContract;
+    amount: u64;
+    let acc = vesting_contract.signer_cap.account;
+    let operator = vesting_contract.staking.operator;
+    include amount != 0 ==> staking_contract::ContractExistsAbortsIf { staker: acc, operator };
+    let store = global<staking_contract::Store>(acc);
+    let staking_contract = simple_map::spec_get(store.staking_contracts, operator);
+    include amount != 0 ==> DistributeInternalAbortsIf { staker: acc, operator, staking_contract, distribute_events: store.distribute_events };
+}
 
@@ -3456,6 +3604,58 @@ This address should be deterministic for the same admin and vesting contract cre
pragma verify = false;
+include WithdrawStakeAbortsIf;
+
+ + + + + + + +
schema WithdrawStakeAbortsIf {
+    vesting_contract: &VestingContract;
+    contract_address: address;
+    let operator = vesting_contract.staking.operator;
+    include staking_contract::ContractExistsAbortsIf { staker: contract_address, operator };
+    let store = global<staking_contract::Store>(contract_address);
+    let staking_contract = simple_map::spec_get(store.staking_contracts, operator);
+    include DistributeInternalAbortsIf { staker: contract_address, operator, staking_contract, distribute_events: store.distribute_events };
+}
+
+ + + + + + + +
schema DistributeInternalAbortsIf {
+    staker: address;
+    operator: address;
+    staking_contract: staking_contract::StakingContract;
+    distribute_events: EventHandle<staking_contract::DistributeEvent>;
+    let pool_address = staking_contract.pool_address;
+    aborts_if !exists<stake::StakePool>(pool_address);
+    let stake_pool = global<stake::StakePool>(pool_address);
+    let inactive = stake_pool.inactive.value;
+    let pending_inactive = stake_pool.pending_inactive.value;
+    aborts_if inactive + pending_inactive > MAX_U64;
+    let total_potential_withdrawable = inactive + pending_inactive;
+    let pool_address_1 = staking_contract.owner_cap.pool_address;
+    aborts_if !exists<stake::StakePool>(pool_address_1);
+    let stake_pool_1 = global<stake::StakePool>(pool_address_1);
+    aborts_if !exists<stake::ValidatorSet>(@aptos_framework);
+    let validator_set = global<stake::ValidatorSet>(@aptos_framework);
+    let inactive_state = !stake::spec_contains(validator_set.pending_active, pool_address_1)
+        && !stake::spec_contains(validator_set.active_validators, pool_address_1)
+        && !stake::spec_contains(validator_set.pending_inactive, pool_address_1);
+    let inactive_1 = stake_pool_1.inactive.value;
+    let pending_inactive_1 = stake_pool_1.pending_inactive.value;
+    let new_inactive_1 = inactive_1 + pending_inactive_1;
+    aborts_if inactive_state && timestamp::spec_now_seconds() >= stake_pool_1.locked_until_secs
+        && inactive_1 + pending_inactive_1 > MAX_U64;
+}
 
diff --git a/aptos-move/framework/aptos-framework/doc/voting.md b/aptos-move/framework/aptos-framework/doc/voting.md index b70bc4a9abbd8..a81e075ae80ca 100644 --- a/aptos-move/framework/aptos-framework/doc/voting.md +++ b/aptos-move/framework/aptos-framework/doc/voting.md @@ -1552,11 +1552,8 @@ Return true if the voting period of the given proposal has already ended.
requires chain_status::is_operating();
-include CreateProposalAbortsIf<ProposalType>{is_multi_step_proposal: false};
+include CreateProposalAbortsIfAndEnsures<ProposalType>{is_multi_step_proposal: false};
 ensures result == old(global<VotingForum<ProposalType>>(voting_forum_address)).next_proposal_id;
-ensures global<VotingForum<ProposalType>>(voting_forum_address).next_proposal_id
-    == old(global<VotingForum<ProposalType>>(voting_forum_address)).next_proposal_id + 1;
-ensures  table::spec_contains(global<VotingForum<ProposalType>>(voting_forum_address).proposals, result);
 
@@ -1572,23 +1569,18 @@ Return true if the voting period of the given proposal has already ended. -
pragma verify_duration_estimate = 120;
-requires chain_status::is_operating();
-include CreateProposalAbortsIf<ProposalType>;
+
requires chain_status::is_operating();
+include CreateProposalAbortsIfAndEnsures<ProposalType>;
 ensures result == old(global<VotingForum<ProposalType>>(voting_forum_address)).next_proposal_id;
-ensures global<VotingForum<ProposalType>>(voting_forum_address).next_proposal_id
-    == old(global<VotingForum<ProposalType>>(voting_forum_address)).next_proposal_id + 1;
-ensures table::spec_contains(global<VotingForum<ProposalType>>(voting_forum_address).proposals, result);
-ensures  table::spec_contains(global<VotingForum<ProposalType>>(voting_forum_address).proposals, result);
 
- + -
schema CreateProposalAbortsIf<ProposalType> {
+
schema CreateProposalAbortsIfAndEnsures<ProposalType> {
     voting_forum_address: address;
     execution_hash: vector<u8>;
     min_vote_threshold: u128;
@@ -1604,10 +1596,19 @@ Return true if the voting period of the given proposal has already ended.
     aborts_if !std::string::spec_internal_check_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY);
     aborts_if len(execution_hash) <= 0;
     let execution_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_KEY);
-    aborts_if simple_map::spec_contains_key(metadata,execution_key);
+    aborts_if simple_map::spec_contains_key(metadata, execution_key);
     aborts_if voting_forum.next_proposal_id + 1 > MAX_U64;
     let is_multi_step_in_execution_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY);
-    aborts_if is_multi_step_proposal && simple_map::spec_contains_key(metadata,is_multi_step_in_execution_key);
+    aborts_if is_multi_step_proposal && simple_map::spec_contains_key(metadata, is_multi_step_in_execution_key);
+    let post post_voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+    let post post_metadata = table::spec_get(post_voting_forum.proposals, proposal_id).metadata;
+    ensures post_voting_forum.next_proposal_id == voting_forum.next_proposal_id + 1;
+    ensures table::spec_contains(post_voting_forum.proposals, proposal_id);
+    ensures if (is_multi_step_proposal) {
+        simple_map::spec_get(post_metadata, is_multi_step_in_execution_key) == std::bcs::serialize(false)
+    } else {
+        !simple_map::spec_contains_key(post_metadata, is_multi_step_in_execution_key)
+    };
 }
 
@@ -1631,12 +1632,23 @@ Return true if the voting period of the given proposal has already ended. aborts_if !table::spec_contains(voting_forum.proposals, proposal_id); aborts_if is_voting_period_over(proposal); aborts_if proposal.is_resolved; +aborts_if !exists<timestamp::CurrentTimeMicroseconds>(@aptos_framework); aborts_if !std::string::spec_internal_check_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY); let execution_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY); aborts_if simple_map::spec_contains_key(proposal.metadata, execution_key) && simple_map::spec_get(proposal.metadata, execution_key) != std::bcs::serialize(false); aborts_if if (should_pass) { proposal.yes_votes + num_votes > MAX_U128 } else { proposal.no_votes + num_votes > MAX_U128 }; aborts_if !std::string::spec_internal_check_utf8(RESOLVABLE_TIME_METADATA_KEY); +let post post_voting_forum = global<VotingForum<ProposalType>>(voting_forum_address); +let post post_proposal = table::spec_get(post_voting_forum.proposals, proposal_id); +ensures if (should_pass) { + post_proposal.yes_votes == proposal.yes_votes + num_votes +} else { + post_proposal.no_votes == proposal.no_votes + num_votes +}; +let timestamp_secs_bytes = std::bcs::serialize(timestamp::spec_now_seconds()); +let key = std::string::spec_utf8(RESOLVABLE_TIME_METADATA_KEY); +ensures simple_map::spec_get(post_proposal.metadata, key) == timestamp_secs_bytes;
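Reading the new `vote` post-conditions with hypothetical numbers: a vote with `should_pass = true` and `num_votes = 10` on a proposal currently at `yes_votes = 40` must leave

$$
\mathit{post\_proposal.yes\_votes} = 40 + 10 = 50 ,
$$

and must record the current `timestamp::spec_now_seconds()`, BCS-serialized, under `RESOLVABLE_TIME_METADATA_KEY` in the proposal metadata.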
@@ -1653,23 +1665,31 @@ Return true if the voting period of the given proposal has already ended.
requires chain_status::is_operating();
-include AbortsIfNotContainProposalID<ProposalType>;
-let voting_forum =  global<VotingForum<ProposalType>>(voting_forum_address);
-let proposal = table::spec_get(voting_forum.proposals, proposal_id);
-let early_resolution_threshold = option::spec_borrow(proposal.early_resolution_vote_threshold);
-let voting_period_over = timestamp::now_seconds() > proposal.expiration_secs;
-let be_resolved_early = option::spec_is_some(proposal.early_resolution_vote_threshold) &&
-                            (proposal.yes_votes >= early_resolution_threshold ||
-                             proposal.no_votes >= early_resolution_threshold);
-let voting_closed = voting_period_over || be_resolved_early;
-aborts_if voting_closed && (proposal.yes_votes <= proposal.no_votes || proposal.yes_votes + proposal.no_votes < proposal.min_vote_threshold);
-aborts_if !voting_closed;
-aborts_if proposal.is_resolved;
-aborts_if !std::string::spec_internal_check_utf8(RESOLVABLE_TIME_METADATA_KEY);
-aborts_if !simple_map::spec_contains_key(proposal.metadata, std::string::spec_utf8(RESOLVABLE_TIME_METADATA_KEY));
-aborts_if !from_bcs::deserializable<u64>(simple_map::spec_get(proposal.metadata, std::string::spec_utf8(RESOLVABLE_TIME_METADATA_KEY)));
-aborts_if timestamp::spec_now_seconds() <= from_bcs::deserialize<u64>(simple_map::spec_get(proposal.metadata, std::string::spec_utf8(RESOLVABLE_TIME_METADATA_KEY)));
-aborts_if transaction_context::spec_get_script_hash() != proposal.execution_hash;
+include IsProposalResolvableAbortsIf<ProposalType>;
+
+ + + + + + + +
schema IsProposalResolvableAbortsIf<ProposalType> {
+    voting_forum_address: address;
+    proposal_id: u64;
+    include AbortsIfNotContainProposalID<ProposalType>;
+    let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+    let proposal = table::spec_get(voting_forum.proposals, proposal_id);
+    let voting_closed = spec_is_voting_closed<ProposalType>(voting_forum_address, proposal_id);
+    aborts_if voting_closed && (proposal.yes_votes <= proposal.no_votes || proposal.yes_votes + proposal.no_votes < proposal.min_vote_threshold);
+    aborts_if !voting_closed;
+    aborts_if proposal.is_resolved;
+    aborts_if !std::string::spec_internal_check_utf8(RESOLVABLE_TIME_METADATA_KEY);
+    aborts_if !simple_map::spec_contains_key(proposal.metadata, std::string::spec_utf8(RESOLVABLE_TIME_METADATA_KEY));
+    aborts_if !from_bcs::deserializable<u64>(simple_map::spec_get(proposal.metadata, std::string::spec_utf8(RESOLVABLE_TIME_METADATA_KEY)));
+    aborts_if timestamp::spec_now_seconds() <= from_bcs::deserialize<u64>(simple_map::spec_get(proposal.metadata, std::string::spec_utf8(RESOLVABLE_TIME_METADATA_KEY)));
+    aborts_if transaction_context::spec_get_script_hash() != proposal.execution_hash;
+}
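For intuition about the resolvability checks (hypothetical tallies): a closed, unresolved proposal with `yes_votes = 60`, `no_votes = 30` and `min_vote_threshold = 50` clears the first abort condition, since

$$
60 > 30
\qquad\text{and}\qquad
60 + 30 = 90 \ge 50 ,
$$

while the remaining clauses still require a readable, already-elapsed `RESOLVABLE_TIME_METADATA_KEY` entry and a script hash equal to `proposal.execution_hash`.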
 
@@ -1686,9 +1706,22 @@ Return true if the voting period of the given proposal has already ended.
requires chain_status::is_operating();
-pragma aborts_if_is_partial;
-include AbortsIfNotContainProposalID<ProposalType>;
+include IsProposalResolvableAbortsIf<ProposalType>;
 aborts_if !std::string::spec_internal_check_utf8(IS_MULTI_STEP_PROPOSAL_KEY);
+let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+let proposal = table::spec_get(voting_forum.proposals, proposal_id);
+let multi_step_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_KEY);
+let has_multi_step_key = simple_map::spec_contains_key(proposal.metadata, multi_step_key);
+aborts_if has_multi_step_key && !from_bcs::deserializable<bool>(simple_map::spec_get(proposal.metadata, multi_step_key));
+aborts_if has_multi_step_key && from_bcs::deserialize<bool>(simple_map::spec_get(proposal.metadata, multi_step_key));
+let post post_voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+let post post_proposal = table::spec_get(post_voting_forum.proposals, proposal_id);
+aborts_if !exists<timestamp::CurrentTimeMicroseconds>(@aptos_framework);
+ensures post_proposal.is_resolved == true;
+ensures post_proposal.resolution_time_secs == timestamp::spec_now_seconds();
+aborts_if option::spec_is_none(proposal.execution_content);
+ensures result == option::spec_borrow(proposal.execution_content);
+ensures option::spec_is_none(post_proposal.execution_content);
 
@@ -1706,10 +1739,28 @@ Return true if the voting period of the given proposal has already ended.
pragma verify = false;
 requires chain_status::is_operating();
-pragma aborts_if_is_partial;
-include AbortsIfNotContainProposalID<ProposalType>;
+include IsProposalResolvableAbortsIf<ProposalType>;
+let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+let proposal = table::spec_get(voting_forum.proposals, proposal_id);
+let post post_voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+let post post_proposal = table::spec_get(voting_forum.proposals, proposal_id);
+let multi_step_in_execution_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY);
 aborts_if !std::string::spec_internal_check_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY);
 aborts_if !std::string::spec_internal_check_utf8(IS_MULTI_STEP_PROPOSAL_KEY);
+ensures simple_map::spec_contains_key(proposal.metadata, multi_step_in_execution_key) &&
+    ((len(next_execution_hash) != 0 && is_multi_step) || (len(next_execution_hash) == 0 && !is_multi_step)) ==>
+    simple_map::spec_get(post_proposal.metadata, multi_step_in_execution_key) == std::bcs::serialize(true);
+let multi_step_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_KEY);
+aborts_if simple_map::spec_contains_key(proposal.metadata, multi_step_key) &&
+    !from_bcs::deserializable<bool>(simple_map::spec_get(proposal.metadata, multi_step_key));
+let is_multi_step = simple_map::spec_contains_key(proposal.metadata, multi_step_key) &&
+    from_bcs::deserialize(simple_map::spec_get(proposal.metadata, multi_step_key));
+aborts_if !is_multi_step && len(next_execution_hash) != 0;
+aborts_if len(next_execution_hash) == 0 && !exists<timestamp::CurrentTimeMicroseconds>(@aptos_framework);
+aborts_if len(next_execution_hash) == 0 && is_multi_step && !simple_map::spec_contains_key(proposal.metadata, multi_step_in_execution_key);
+ensures len(next_execution_hash) == 0 ==> post_proposal.is_resolved == true && post_proposal.resolution_time_secs == timestamp::spec_now_seconds();
+ensures len(next_execution_hash) == 0 && is_multi_step ==> simple_map::spec_get(post_proposal.metadata, multi_step_in_execution_key) == std::bcs::serialize(false);
+ensures len(next_execution_hash) != 0 ==> post_proposal.execution_hash == next_execution_hash;
 
@@ -1727,6 +1778,7 @@ Return true if the voting period of the given proposal has already ended.
aborts_if !exists<VotingForum<ProposalType>>(voting_forum_address);
+ensures result == global<VotingForum<ProposalType>>(voting_forum_address).next_proposal_id;
 
@@ -1745,6 +1797,21 @@ Return true if the voting period of the given proposal has already ended.
requires chain_status::is_operating();
 include AbortsIfNotContainProposalID<ProposalType>;
+aborts_if !exists<timestamp::CurrentTimeMicroseconds>(@aptos_framework);
+ensures result == spec_is_voting_closed<ProposalType>(voting_forum_address, proposal_id);
+
+ + + + + + + +
fun spec_is_voting_closed<ProposalType: store>(voting_forum_address: address, proposal_id: u64): bool {
+   let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+   let proposal = table::spec_get(voting_forum.proposals, proposal_id);
+   spec_can_be_resolved_early<ProposalType>(proposal) || is_voting_period_over(proposal)
+}
 
@@ -1761,6 +1828,27 @@ Return true if the voting period of the given proposal has already ended.
aborts_if false;
+ensures result == spec_can_be_resolved_early<ProposalType>(proposal);
+
+ + + + + + + +
fun spec_can_be_resolved_early<ProposalType: store>(proposal: Proposal<ProposalType>): bool {
+   if (option::spec_is_some(proposal.early_resolution_vote_threshold)) {
+       let early_resolution_threshold = option::spec_borrow(proposal.early_resolution_vote_threshold);
+       if (proposal.yes_votes >= early_resolution_threshold || proposal.no_votes >= early_resolution_threshold) {
+           true
+       } else {
+           false
+       }
+   } else {
+       false
+   }
+}
 
@@ -1775,12 +1863,7 @@ Return true if the voting period of the given proposal has already ended. voting_forum: VotingForum<ProposalType> ): u64 { let proposal = table::spec_get(voting_forum.proposals, proposal_id); - let early_resolution_threshold = option::spec_borrow(proposal.early_resolution_vote_threshold); - let voting_period_over = timestamp::now_seconds() > proposal.expiration_secs; - let be_resolved_early = option::spec_is_some(proposal.early_resolution_vote_threshold) && - (proposal.yes_votes >= early_resolution_threshold || - proposal.no_votes >= early_resolution_threshold); - let voting_closed = voting_period_over || be_resolved_early; + let voting_closed = spec_is_voting_closed<ProposalType>(voting_forum_address, proposal_id); let proposal_vote_cond = (proposal.yes_votes > proposal.no_votes && proposal.yes_votes + proposal.no_votes >= proposal.min_vote_threshold); if (voting_closed && proposal_vote_cond) { PROPOSAL_STATE_SUCCEEDED @@ -1826,13 +1909,6 @@ Return true if the voting period of the given proposal has already ended. requires chain_status::is_operating(); include AbortsIfNotContainProposalID<ProposalType>; let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address); -let proposal = table::spec_get(voting_forum.proposals, proposal_id); -let early_resolution_threshold = option::spec_borrow(proposal.early_resolution_vote_threshold); -let voting_period_over = timestamp::now_seconds() > proposal.expiration_secs; -let be_resolved_early = option::spec_is_some(proposal.early_resolution_vote_threshold) && - (proposal.yes_votes >= early_resolution_threshold || - proposal.no_votes >= early_resolution_threshold); -let voting_closed = voting_period_over || be_resolved_early; ensures result == spec_get_proposal_state(voting_forum_address, proposal_id, voting_forum);
@@ -1851,6 +1927,9 @@ Return true if the voting period of the given proposal has already ended.
include AbortsIfNotContainProposalID<ProposalType>;
+let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+let proposal = table::spec_get(voting_forum.proposals, proposal_id);
+ensures result == proposal.creation_time_secs;
 
@@ -1868,6 +1947,7 @@ Return true if the voting period of the given proposal has already ended.
include AbortsIfNotContainProposalID<ProposalType>;
+ensures result == spec_get_proposal_expiration_secs<ProposalType>(voting_forum_address, proposal_id);
 
@@ -1885,6 +1965,9 @@ Return true if the voting period of the given proposal has already ended.
include AbortsIfNotContainProposalID<ProposalType>;
+let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+let proposal = table::spec_get(voting_forum.proposals, proposal_id);
+ensures result == proposal.execution_hash;
 
@@ -1902,6 +1985,9 @@ Return true if the voting period of the given proposal has already ended.
include AbortsIfNotContainProposalID<ProposalType>;
+let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+let proposal = table::spec_get(voting_forum.proposals, proposal_id);
+ensures result == proposal.min_vote_threshold;
 
@@ -1919,6 +2005,9 @@ Return true if the voting period of the given proposal has already ended.
include AbortsIfNotContainProposalID<ProposalType>;
+let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+let proposal = table::spec_get(voting_forum.proposals, proposal_id);
+ensures result == proposal.early_resolution_vote_threshold;
 
@@ -1936,6 +2025,10 @@ Return true if the voting period of the given proposal has already ended.
include AbortsIfNotContainProposalID<ProposalType>;
+let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+let proposal = table::spec_get(voting_forum.proposals, proposal_id);
+ensures result_1 == proposal.yes_votes;
+ensures result_2 == proposal.no_votes;
 
@@ -1953,6 +2046,9 @@ Return true if the voting period of the given proposal has already ended.
include AbortsIfNotContainProposalID<ProposalType>;
+let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+let proposal = table::spec_get(voting_forum.proposals, proposal_id);
+ensures result == proposal.is_resolved;
 
@@ -1984,15 +2080,15 @@ Return true if the voting period of the given proposal has already ended. -
let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
+
include AbortsIfNotContainProposalID<ProposalType>;
+let voting_forum = global<VotingForum<ProposalType>>(voting_forum_address);
 let proposal = table::spec_get(voting_forum.proposals,proposal_id);
-aborts_if !table::spec_contains(voting_forum.proposals,proposal_id);
-aborts_if !exists<VotingForum<ProposalType>>(voting_forum_address);
 aborts_if !std::string::spec_internal_check_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY);
 let execution_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY);
 aborts_if !simple_map::spec_contains_key(proposal.metadata,execution_key);
 let is_multi_step_in_execution_key = simple_map::spec_get(proposal.metadata,execution_key);
-aborts_if !aptos_std::from_bcs::deserializable<bool>(is_multi_step_in_execution_key);
+aborts_if !from_bcs::deserializable<bool>(is_multi_step_in_execution_key);
+ensures result == from_bcs::deserialize<bool>(is_multi_step_in_execution_key);
 
@@ -2010,6 +2106,7 @@ Return true if the voting period of the given proposal has already ended.
requires chain_status::is_operating();
 aborts_if false;
+ensures result == (timestamp::spec_now_seconds() > proposal.expiration_secs);
 
diff --git a/aptos-move/framework/aptos-framework/sources/account.spec.move b/aptos-move/framework/aptos-framework/sources/account.spec.move index 77d31e8a09bd3..3aec56ed408cb 100644 --- a/aptos-move/framework/aptos-framework/sources/account.spec.move +++ b/aptos-move/framework/aptos-framework/sources/account.spec.move @@ -419,6 +419,8 @@ spec aptos_framework::account { aborts_if len(ZERO_AUTH_KEY) != 32; include exists_at(resource_addr) ==> CreateResourceAccountAbortsIf; include !exists_at(resource_addr) ==> CreateAccountAbortsIf {addr: resource_addr}; + + ensures signer::address_of(result_1) == resource_addr; } /// Check if the bytes of the new address is 32. diff --git a/aptos-move/framework/aptos-framework/sources/event.move b/aptos-move/framework/aptos-framework/sources/event.move index 95baeb94e9c2e..9d245dfa99343 100644 --- a/aptos-move/framework/aptos-framework/sources/event.move +++ b/aptos-move/framework/aptos-framework/sources/event.move @@ -6,10 +6,33 @@ module aptos_framework::event { use std::bcs; use aptos_framework::guid::GUID; + use std::features; friend aptos_framework::account; friend aptos_framework::object; + /// Module event feature is not supported. + const EMODULE_EVENT_NOT_SUPPORTED: u64 = 1; + + /// Emit an event with payload `msg` by using `handle_ref`'s key and counter. + public fun emit(msg: T) { + assert!(features::module_event_enabled(), std::error::invalid_state(EMODULE_EVENT_NOT_SUPPORTED)); + write_to_module_event_store(msg); + } + + /// Log `msg` with the event stream identified by `T` + native fun write_to_module_event_store(msg: T); + + #[test_only] + public native fun emitted_events(): vector; + + #[test_only] + public fun was_event_emitted(msg: &T): bool { + use std::vector; + vector::contains(&emitted_events(), msg) + } + + #[deprecated] /// A handle for an event such that: /// 1. Other modules can emit events to this handle. /// 2. Storage can use this handle to prove the total number of events that happened in the past. @@ -20,6 +43,7 @@ module aptos_framework::event { guid: GUID, } + #[deprecated] /// Use EventHandleGenerator to generate a unique event handle for `sig` public(friend) fun new_event_handle(guid: GUID): EventHandle { EventHandle { @@ -28,6 +52,7 @@ module aptos_framework::event { } } + #[deprecated] /// Emit an event with payload `msg` by using `handle_ref`'s key and counter. public fun emit_event(handle_ref: &mut EventHandle, msg: T) { write_to_event_store(bcs::to_bytes(&handle_ref.guid), handle_ref.counter, msg); @@ -37,27 +62,33 @@ module aptos_framework::event { handle_ref.counter = handle_ref.counter + 1; } + #[deprecated] /// Return the GUID associated with this EventHandle public fun guid(handle_ref: &EventHandle): &GUID { &handle_ref.guid } + #[deprecated] /// Return the current counter associated with this EventHandle public fun counter(handle_ref: &EventHandle): u64 { handle_ref.counter } + #[deprecated] /// Log `msg` as the `count`th event associated with the event stream identified by `guid` native fun write_to_event_store(guid: vector, count: u64, msg: T); + #[deprecated] /// Destroy a unique handle. 
public fun destroy_handle(handle: EventHandle) { EventHandle { counter: _, guid: _ } = handle; } + #[deprecated] #[test_only] public native fun emitted_events_by_handle(handle: &EventHandle): vector; + #[deprecated] #[test_only] public fun was_event_emitted_by_handle(handle: &EventHandle, msg: &T): bool { use std::vector; diff --git a/aptos-move/framework/aptos-framework/sources/event.spec.move b/aptos-move/framework/aptos-framework/sources/event.spec.move index e6a26f7206671..af37cfd477f64 100644 --- a/aptos-move/framework/aptos-framework/sources/event.spec.move +++ b/aptos-move/framework/aptos-framework/sources/event.spec.move @@ -10,6 +10,16 @@ spec aptos_framework::event { ensures [concrete] handle_ref.counter == old(handle_ref.counter) + 1; } + spec emit { + pragma opaque; + aborts_if !features::spec_module_event_enabled(); + } + + /// Native function use opaque. + spec write_to_module_event_store(msg: T) { + pragma opaque; + } + /// Native function use opaque. spec write_to_event_store(guid: vector, count: u64, msg: T) { pragma opaque; diff --git a/aptos-move/framework/aptos-framework/sources/reconfiguration.spec.move b/aptos-move/framework/aptos-framework/sources/reconfiguration.spec.move index 49b7d0fba2144..bf7300717f81a 100644 --- a/aptos-move/framework/aptos-framework/sources/reconfiguration.spec.move +++ b/aptos-move/framework/aptos-framework/sources/reconfiguration.spec.move @@ -23,6 +23,7 @@ spec aptos_framework::reconfiguration { spec initialize(aptos_framework: &signer) { use std::signer; use aptos_framework::account::{Account}; + use aptos_framework::guid; include AbortsIfNotAptosFramework; let addr = signer::address_of(aptos_framework); @@ -30,17 +31,29 @@ spec aptos_framework::reconfiguration { requires exists(addr); aborts_if !(global(addr).guid_creation_num == 2); aborts_if exists(@aptos_framework); + // property 1: During the module's initialization, it guarantees that the Configuration resource will move under the Aptos framework account with initial values. ensures exists(@aptos_framework); ensures config.epoch == 0 && config.last_reconfiguration_time == 0; + ensures config.events == event::EventHandle { + counter: 0, + guid: guid::GUID { + id: guid::ID { + creation_num: 2, + addr: @aptos_framework + } + } + }; } spec current_epoch(): u64 { aborts_if !exists(@aptos_framework); + ensures result == global(@aptos_framework).epoch; } spec disable_reconfiguration(aptos_framework: &signer) { include AbortsIfNotAptosFramework; aborts_if exists(@aptos_framework); + ensures exists(@aptos_framework); } /// Make sure the caller is admin and check the resource DisableReconfiguration. @@ -48,6 +61,7 @@ spec aptos_framework::reconfiguration { use aptos_framework::reconfiguration::{DisableReconfiguration}; include AbortsIfNotAptosFramework; aborts_if !exists(@aptos_framework); + ensures !exists(@aptos_framework); } /// When genesis_event emit the epoch and the `last_reconfiguration_time` . 
@@ -58,33 +72,49 @@ spec aptos_framework::reconfiguration {
aborts_if !exists(@aptos_framework);
let config_ref = global(@aptos_framework);
aborts_if !(config_ref.epoch == 0 && config_ref.last_reconfiguration_time == 0);
+ ensures global(@aptos_framework).epoch == 1;
}

spec last_reconfiguration_time {
aborts_if !exists(@aptos_framework);
+ ensures result == global(@aptos_framework).last_reconfiguration_time;
}

spec reconfigure {
use aptos_framework::aptos_coin;
+ use aptos_framework::coin::CoinInfo;
+ use aptos_framework::aptos_coin::AptosCoin;
use aptos_framework::transaction_fee;
use aptos_framework::staking_config;

pragma verify_duration_estimate = 120; // TODO: set because of timeout (property proved)
requires exists(@aptos_framework);
+ requires exists>(@aptos_framework);
- include transaction_fee::RequiresCollectedFeesPerValueLeqBlockAptosSupply;
include features::spec_periodical_reward_rate_decrease_enabled() ==> staking_config::StakingRewardsConfigEnabledRequirement;
include features::spec_collect_and_distribute_gas_fees_enabled() ==> aptos_coin::ExistsAptosCoin;
-
+ include transaction_fee::RequiresCollectedFeesPerValueLeqBlockAptosSupply;
aborts_if false;
+
+ // The ensures conditions of the reconfigure function are not fully written out because it introduces a new cycle,
+ // but its existing ensures conditions cover the high-level properties.
let success = !(chain_status::is_genesis() || timestamp::spec_now_microseconds() == 0 || !reconfiguration_enabled())
&& timestamp::spec_now_microseconds() != global(@aptos_framework).last_reconfiguration_time;
+ // The property below is not proved within 500s and still causes a timeout.
+ // property 3: Synchronization of NewEpochEvent counter with configuration epoch.
ensures success ==> global(@aptos_framework).epoch == old(global(@aptos_framework).epoch) + 1;
+ ensures success ==> global(@aptos_framework).last_reconfiguration_time == timestamp::spec_now_microseconds();
+ // We removed the ensures on the event counter increment due to inconsistency.
+ // TODO: property 4: Only performs reconfiguration if genesis has started and reconfiguration is enabled.
+ // Also, the last reconfiguration must not be at the current time; otherwise the function returns early without further actions.
+ // property 5: Consecutive reconfigurations without the passage of time are not permitted.
ensures !success ==> global(@aptos_framework).epoch == old(global(@aptos_framework).epoch);
}

spec reconfiguration_enabled {
+ // property 2: The reconfiguration status may be determined at any time without causing an abort, indicating whether or not the system allows reconfiguration.
aborts_if false;
+ ensures result == !exists(@aptos_framework);
}
}
diff --git a/aptos-move/framework/aptos-framework/sources/resource_account.move b/aptos-move/framework/aptos-framework/sources/resource_account.move
index 7a70d79076c3c..7f85a8bf05417 100644
--- a/aptos-move/framework/aptos-framework/sources/resource_account.move
+++ b/aptos-move/framework/aptos-framework/sources/resource_account.move
@@ -4,25 +4,27 @@
/// ## Resource Accounts to manage liquidity pools
///
/// A dev wishing to use resource accounts for a liquidity pool, would likely do the following:
-/// 1. Create a new account using `resource_account::create_resource_account`. This creates the
-/// account, stores the `signer_cap` within a `resource_account::Container`, and rotates the key to
-/// the current accounts authentication key or a provided authentication key.
-/// 2. Define the LiquidityPool module's address to be the same as the resource account.
-/// 3. 
Construct a transaction package publishing transaction for the resource account using the -/// authentication key used in step 1. -/// 4. In the LiquidityPool module's `init_module` function, call `retrieve_resource_account_cap` -/// which will retrive the `signer_cap` and rotate the resource account's authentication key to -/// `0x0`, effectively locking it off. -/// 5. When adding a new coin, the liquidity pool will load the capability and hence the signer to -/// register and store new LiquidityCoin resources. +/// +/// 1. Create a new account using `resource_account::create_resource_account`. This creates the +/// account, stores the `signer_cap` within a `resource_account::Container`, and rotates the key to +/// the current account's authentication key or a provided authentication key. +/// 2. Define the liquidity pool module's address to be the same as the resource account. +/// 3. Construct a package-publishing transaction for the resource account using the +/// authentication key used in step 1. +/// 4. In the liquidity pool module's `init_module` function, call `retrieve_resource_account_cap` +/// which will retrieve the `signer_cap` and rotate the resource account's authentication key to +/// `0x0`, effectively locking it off. +/// 5. When adding a new coin, the liquidity pool will load the capability and hence the `signer` to +/// register and store new `LiquidityCoin` resources. /// /// Code snippets to help: +/// /// ``` -/// fun init_module(resource: &signer) { +/// fun init_module(resource_account: &signer) { /// let dev_address = @DEV_ADDR; -/// let signer_cap = retrieve_resource_account_cap(resource, dev_address); +/// let signer_cap = retrieve_resource_account_cap(resource_account, dev_address); /// let lp = LiquidityPoolInfo { signer_cap: signer_cap, ... 
}; -/// move_to(resource, lp); +/// move_to(resource_account, lp); /// } /// ``` /// diff --git a/aptos-move/framework/aptos-framework/sources/resource_account.spec.move b/aptos-move/framework/aptos-framework/sources/resource_account.spec.move index 4d264165c89a0..f040ce90cb31e 100644 --- a/aptos-move/framework/aptos-framework/sources/resource_account.spec.move +++ b/aptos-move/framework/aptos-framework/sources/resource_account.spec.move @@ -75,6 +75,9 @@ spec aptos_framework::resource_account { aborts_if exists(source_addr) && simple_map::spec_contains_key(container.store, resource_addr); aborts_if get && !(exists(resource_addr) && len(global(source_addr).authentication_key) == 32); aborts_if !get && !(exists(resource_addr) && len(optional_auth_key) == 32); + + ensures simple_map::spec_contains_key(global(source_addr).store, resource_addr); + ensures exists(source_addr); } spec schema RotateAccountAuthenticationKeyAndStoreCapabilityAbortsIfWithoutAccountLimit { @@ -96,6 +99,9 @@ spec aptos_framework::resource_account { aborts_if exists(source_addr) && simple_map::spec_contains_key(container.store, resource_addr); aborts_if get && len(global(source_addr).authentication_key) != 32; aborts_if !get && len(optional_auth_key) != 32; + + ensures simple_map::spec_contains_key(global(source_addr).store, resource_addr); + ensures exists(source_addr); } spec retrieve_resource_account_cap( @@ -109,6 +115,7 @@ spec aptos_framework::resource_account { aborts_if !simple_map::spec_contains_key(container.store, resource_addr); aborts_if !exists(resource_addr); ensures simple_map::spec_contains_key(old(global(source_addr)).store, resource_addr) && - simple_map::spec_len(old(global(source_addr)).store) == 1 ==> !exists(source_addr); + simple_map::spec_len(old(global(source_addr)).store) == 1 ==> !exists(source_addr); + ensures exists(source_addr) ==> !simple_map::spec_contains_key(global(source_addr).store, resource_addr); } } diff --git a/aptos-move/framework/aptos-framework/sources/transaction_fee.spec.move b/aptos-move/framework/aptos-framework/sources/transaction_fee.spec.move index 334f046e1de14..f8157534532e2 100644 --- a/aptos-move/framework/aptos-framework/sources/transaction_fee.spec.move +++ b/aptos-move/framework/aptos-framework/sources/transaction_fee.spec.move @@ -3,11 +3,12 @@ spec aptos_framework::transaction_fee { use aptos_framework::chain_status; pragma verify = true; pragma aborts_if_is_strict; - + // property 1: Given the blockchain is in an operating state, it guarantees that the Aptos framework signer may burn Aptos coins. invariant [suspendable] chain_status::is_operating() ==> exists(@aptos_framework); } spec CollectedFeesPerBlock { + // property 4: The percentage of the burnt collected fee is always a value from 0 to 100. invariant burn_percentage <= 100; } @@ -17,33 +18,37 @@ spec aptos_framework::transaction_fee { use aptos_framework::aggregator_factory; use aptos_framework::system_addresses; + // property 2: The initialization function may only be called once. aborts_if exists(@aptos_framework); aborts_if burn_percentage > 100; let aptos_addr = signer::address_of(aptos_framework); + // property 3: Only the admin address is authorized to call the initialization function. 
aborts_if !system_addresses::is_aptos_framework_address(aptos_addr); aborts_if exists(aptos_addr); include system_addresses::AbortsIfNotAptosFramework {account: aptos_framework}; include aggregator_factory::CreateAggregatorInternalAbortsIf; + aborts_if exists(aptos_addr); ensures exists(aptos_addr); + ensures exists(aptos_addr); } spec upgrade_burn_percentage(aptos_framework: &signer, new_burn_percentage: u8) { use std::signer; - use aptos_framework::coin::CoinInfo; - use aptos_framework::aptos_coin::AptosCoin; + // Percentage validation aborts_if new_burn_percentage > 100; // Signer validation let aptos_addr = signer::address_of(aptos_framework); aborts_if !system_addresses::is_aptos_framework_address(aptos_addr); - // Requirements of `process_collected_fees` - requires exists(@aptos_framework); - requires exists(@aptos_framework); - requires exists>(@aptos_framework); - include RequiresCollectedFeesPerValueLeqBlockAptosSupply; + + // property 5: Prior to upgrading the burn percentage, it must process all the fees collected up to that point. + // property 6: Ensure the presence of the resource. + // Requirements and ensures conditions of `process_collected_fees` + include ProcessCollectedFeesRequiresAndEnsures; + // The effect of upgrading the burn percentage ensures exists(@aptos_framework) ==> global(@aptos_framework).burn_percentage == new_burn_percentage; @@ -51,27 +56,23 @@ spec aptos_framework::transaction_fee { spec register_proposer_for_fee_collection(proposer_addr: address) { aborts_if false; + // property 6: Ensure the presence of the resource. ensures is_fees_collection_enabled() ==> option::spec_borrow(global(@aptos_framework).proposer) == proposer_addr; } spec burn_coin_fraction(coin: &mut Coin, burn_percentage: u8) { - use aptos_framework::optional_aggregator; - use aptos_framework::aggregator; use aptos_framework::coin::CoinInfo; use aptos_framework::aptos_coin::AptosCoin; + requires burn_percentage <= 100; requires exists(@aptos_framework); requires exists>(@aptos_framework); + let amount_to_burn = (burn_percentage * coin::value(coin)) / 100; - let maybe_supply = coin::get_coin_supply_opt(); - aborts_if amount_to_burn > 0 && option::is_some(maybe_supply) && optional_aggregator::is_parallelizable(option::borrow(maybe_supply)) - && aggregator::spec_aggregator_get_val(option::borrow(option::borrow(maybe_supply).aggregator)) < - amount_to_burn; - aborts_if option::is_some(maybe_supply) && !optional_aggregator::is_parallelizable(option::borrow(maybe_supply)) - && option::borrow(option::borrow(maybe_supply).integer).value < - amount_to_burn; - include (amount_to_burn > 0) ==> coin::AbortsIfNotExistCoinInfo; + // include (amount_to_burn > 0) ==> coin::AbortsIfNotExistCoinInfo; + include amount_to_burn > 0 ==> coin::AbortsIfAggregator{ coin: Coin{ value: amount_to_burn } }; + ensures coin.value == old(coin).value - amount_to_burn; } spec fun collectedFeesAggregator(): AggregatableCoin { @@ -82,19 +83,50 @@ spec aptos_framework::transaction_fee { use aptos_framework::optional_aggregator; use aptos_framework::aggregator; let maybe_supply = coin::get_coin_supply_opt(); + // property 6: Ensure the presence of the resource. 
requires (is_fees_collection_enabled() && option::is_some(maybe_supply)) ==> (aggregator::spec_aggregator_get_val(global(@aptos_framework).amount.value) <= optional_aggregator::optional_aggregator_value(option::spec_borrow(coin::get_coin_supply_opt()))); } - spec process_collected_fees() { + spec schema ProcessCollectedFeesRequiresAndEnsures { use aptos_framework::coin::CoinInfo; use aptos_framework::aptos_coin::AptosCoin; + use aptos_framework::aggregator; + use aptos_std::table; + requires exists(@aptos_framework); requires exists(@aptos_framework); requires exists>(@aptos_framework); include RequiresCollectedFeesPerValueLeqBlockAptosSupply; + + aborts_if false; + + let collected_fees = global(@aptos_framework); + let post post_collected_fees = global(@aptos_framework); + let pre_amount = aggregator::spec_aggregator_get_val(collected_fees.amount.value); + let post post_amount = aggregator::spec_aggregator_get_val(post_collected_fees.amount.value); + let fees_table = global(@aptos_framework).fees_table; + let post post_fees_table = global(@aptos_framework).fees_table; + let proposer = option::spec_borrow(collected_fees.proposer); + let fee_to_add = pre_amount - pre_amount * collected_fees.burn_percentage / 100; + ensures is_fees_collection_enabled() ==> option::spec_is_none(post_collected_fees.proposer) && post_amount == 0; + ensures is_fees_collection_enabled() && aggregator::spec_read(collected_fees.amount.value) > 0 && + option::spec_is_some(collected_fees.proposer) ==> + if (proposer != @vm_reserved) { + if (table::spec_contains(fees_table, proposer)) { + table::spec_get(post_fees_table, proposer).value == table::spec_get(fees_table, proposer).value + fee_to_add + } else { + table::spec_get(post_fees_table, proposer).value == fee_to_add + } + } else { + option::spec_is_none(post_collected_fees.proposer) && post_amount == 0 + }; + } + + spec process_collected_fees() { + include ProcessCollectedFeesRequiresAndEnsures; } /// `AptosCoinCapabilities` should be exists. @@ -141,15 +173,22 @@ spec aptos_framework::transaction_fee { spec collect_fee(account: address, fee: u64) { use aptos_framework::aggregator; + let collected_fees = global(@aptos_framework).amount; let aggr = collected_fees.value; + let coin_store = global>(account); aborts_if !exists(@aptos_framework); aborts_if fee > 0 && !exists>(account); - aborts_if fee > 0 && global>(account).coin.value < fee; + aborts_if fee > 0 && coin_store.coin.value < fee; aborts_if fee > 0 && aggregator::spec_aggregator_get_val(aggr) + fee > aggregator::spec_get_limit(aggr); aborts_if fee > 0 && aggregator::spec_aggregator_get_val(aggr) + fee > MAX_U128; + + let post post_coin_store = global>(account); + let post post_collected_fees = global(@aptos_framework).amount; + ensures post_coin_store.coin.value == coin_store.coin.value - fee; + ensures aggregator::spec_aggregator_get_val(post_collected_fees.value) == aggregator::spec_aggregator_get_val(aggr) + fee; } /// Ensure caller is admin. 
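To make the fee-split arithmetic in the `transaction_fee` spec above concrete (the spec defines `fee_to_add = pre_amount - pre_amount * burn_percentage / 100`, and `burn_coin_fraction` burns `value * burn_percentage / 100`), here is a small test-only sketch with arbitrary example values; it is illustrative only and not part of this patch.

```move
#[test_only]
module 0xCAFE::fee_split_example {
    #[test]
    fun burn_split() {
        // With 1000 octas collected and burn_percentage = 25,
        // 250 octas are burned and the remaining 750 are credited to the proposer.
        let collected: u64 = 1000;
        let burn_percentage: u64 = 25;
        let amount_to_burn = collected * burn_percentage / 100;
        let fee_to_add = collected - amount_to_burn;
        assert!(amount_to_burn == 250, 0);
        assert!(fee_to_add == 750, 1);
    }
}
```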
diff --git a/aptos-move/framework/aptos-framework/sources/transaction_validation.spec.move b/aptos-move/framework/aptos-framework/sources/transaction_validation.spec.move index 4470a8769eb9d..f1457536d240b 100644 --- a/aptos-move/framework/aptos-framework/sources/transaction_validation.spec.move +++ b/aptos-move/framework/aptos-framework/sources/transaction_validation.spec.move @@ -17,6 +17,8 @@ spec aptos_framework::transaction_validation { let addr = signer::address_of(aptos_framework); aborts_if !system_addresses::is_aptos_framework_address(addr); aborts_if exists(addr); + + ensures exists(addr); } /// Create a schema to reuse some code. @@ -50,6 +52,7 @@ spec aptos_framework::transaction_validation { aborts_if max_transaction_fee > MAX_U64; aborts_if !(txn_sequence_number == global(transaction_sender).sequence_number); aborts_if !exists>(gas_payer); + // property 1: The sender of a transaction should have sufficient coin balance to pay the transaction fee. aborts_if !(global>(gas_payer).coin.value >= max_transaction_fee); } @@ -106,10 +109,17 @@ spec aptos_framework::transaction_validation { aborts_if len(secondary_signer_public_key_hashes) != num_secondary_signers; // If any account does not exist, or public key hash does not match, abort. + // property 2: All secondary signer addresses are verified to be authentic through a validation process. aborts_if exists i in 0..num_secondary_signers: !account::exists_at(secondary_signer_addresses[i]) || secondary_signer_public_key_hashes[i] != account::get_authentication_key(secondary_signer_addresses[i]); + + // By the end, all secondary signers account should exist and public key hash should match. + ensures forall i in 0..num_secondary_signers: + account::exists_at(secondary_signer_addresses[i]) + && secondary_signer_public_key_hashes[i] == + account::get_authentication_key(secondary_signer_addresses[i]); } spec multi_agent_common_prologue( @@ -162,6 +172,8 @@ spec aptos_framework::transaction_validation { chain_id: u8, ) { pragma verify_duration_estimate = 120; + + aborts_if !features::spec_is_enabled(features::FEE_PAYER_ENABLED); let gas_payer = fee_payer_address; include PrologueCommonAbortsIf { gas_payer, @@ -188,65 +200,7 @@ spec aptos_framework::transaction_validation { txn_max_gas_units: u64, gas_units_remaining: u64 ) { - use std::option; - use aptos_std::type_info; - use aptos_framework::account::{Account}; - use aptos_framework::aggregator; - use aptos_framework::aptos_coin::{AptosCoin}; - use aptos_framework::coin::{CoinStore, CoinInfo}; - use aptos_framework::optional_aggregator; - use aptos_framework::transaction_fee::{AptosCoinCapabilities, CollectedFeesPerBlock}; - - aborts_if !(txn_max_gas_units >= gas_units_remaining); - let gas_used = txn_max_gas_units - gas_units_remaining; - - aborts_if !(txn_gas_price * gas_used <= MAX_U64); - let transaction_fee_amount = txn_gas_price * gas_used; - - let addr = signer::address_of(account); - aborts_if !exists>(addr); - // Sufficiency of funds - aborts_if !(global>(addr).coin.value >= transaction_fee_amount); - - aborts_if !exists(addr); - aborts_if !(global(addr).sequence_number < MAX_U64); - - let pre_balance = global>(addr).coin.value; - let post balance = global>(addr).coin.value; - let pre_account = global(addr); - let post account = global(addr); - ensures balance == pre_balance - transaction_fee_amount; - ensures account.sequence_number == pre_account.sequence_number + 1; - - - // Bindings for `collect_fee` verification. 
- let collected_fees = global(@aptos_framework).amount; - let aggr = collected_fees.value; - let aggr_val = aggregator::spec_aggregator_get_val(aggr); - let aggr_lim = aggregator::spec_get_limit(aggr); - let aptos_addr = type_info::type_of().account_address; - // Bindings for `burn_fee` verification. - let apt_addr = type_info::type_of().account_address; - let maybe_apt_supply = global>(apt_addr).supply; - let apt_supply = option::spec_borrow(maybe_apt_supply); - let apt_supply_value = optional_aggregator::optional_aggregator_value(apt_supply); - // N.B.: Why can't `features::is_enabled` - aborts_if if (features::spec_is_enabled(features::COLLECT_AND_DISTRIBUTE_GAS_FEES)) { - !exists(@aptos_framework) - || transaction_fee_amount > 0 && - ( // `exists>(addr)` checked above. - // Sufficiency of funds is checked above. - aggr_val + transaction_fee_amount > aggr_lim - || aggr_val + transaction_fee_amount > MAX_U128) - } else { - // Existence of CoinStore in `addr` is checked above. - // Sufficiency of funds is checked above. - !exists(@aptos_framework) || - // Existence of APT's CoinInfo - transaction_fee_amount > 0 && !exists>(aptos_addr) || - // Sufficiency of APT's supply - option::spec_is_some(maybe_apt_supply) && apt_supply_value < transaction_fee_amount - }; + include EpilogueGasPayerAbortsIf { gas_payer: signer::address_of(account), _txn_sequence_number: txn_sequence_number }; } /// Abort according to the conditions. @@ -260,6 +214,10 @@ spec aptos_framework::transaction_validation { txn_max_gas_units: u64, gas_units_remaining: u64 ) { + include EpilogueGasPayerAbortsIf; + } + + spec schema EpilogueGasPayerAbortsIf { use std::option; use aptos_std::type_info; use aptos_framework::account::{Account}; @@ -269,6 +227,13 @@ spec aptos_framework::transaction_validation { use aptos_framework::optional_aggregator; use aptos_framework::transaction_fee::{AptosCoinCapabilities, CollectedFeesPerBlock}; + account: signer; + gas_payer: address; + _txn_sequence_number: u64; + txn_gas_price: u64; + txn_max_gas_units: u64; + gas_units_remaining: u64; + aborts_if !(txn_max_gas_units >= gas_units_remaining); let gas_used = txn_max_gas_units - gas_units_remaining; @@ -302,6 +267,7 @@ spec aptos_framework::transaction_validation { let maybe_apt_supply = global>(apt_addr).supply; let apt_supply = option::spec_borrow(maybe_apt_supply); let apt_supply_value = optional_aggregator::optional_aggregator_value(apt_supply); + // property 3: After successful execution, base the transaction fee on the configuration set by the features library. // N.B.: Why can't `features::is_enabled` aborts_if if (features::spec_is_enabled(features::COLLECT_AND_DISTRIBUTE_GAS_FEES)) { !exists(@aptos_framework) @@ -319,5 +285,18 @@ spec aptos_framework::transaction_validation { // Sufficiency of APT's supply option::spec_is_some(maybe_apt_supply) && apt_supply_value < transaction_fee_amount }; + + let post post_collected_fees = global(@aptos_framework); + let post post_collected_fees_value = aggregator::spec_aggregator_get_val(post_collected_fees.amount.value); + let post post_maybe_apt_supply = global>(apt_addr).supply; + let post post_apt_supply = option::spec_borrow(post_maybe_apt_supply); + let post post_apt_supply_value = optional_aggregator::optional_aggregator_value(post_apt_supply); + // property 3: After successful execution, base the transaction fee on the configuration set by the features library. 
+ ensures transaction_fee_amount > 0 ==>
+ if (features::spec_is_enabled(features::COLLECT_AND_DISTRIBUTE_GAS_FEES)) {
+ post_collected_fees_value == aggr_val + transaction_fee_amount
+ } else {
+ option::spec_is_some(maybe_apt_supply) ==> post_apt_supply_value == apt_supply_value - transaction_fee_amount
+ };
}
}
diff --git a/aptos-move/framework/aptos-framework/sources/vesting.spec.move b/aptos-move/framework/aptos-framework/sources/vesting.spec.move
index ad205948f494e..eaaf73c14f249 100644
--- a/aptos-move/framework/aptos-framework/sources/vesting.spec.move
+++ b/aptos-move/framework/aptos-framework/sources/vesting.spec.move
@@ -45,8 +45,19 @@ spec aptos_framework::vesting {
}

spec total_accumulated_rewards(vesting_contract_address: address): u64 {
- // TODO: Verification out of resources/timeout
- pragma verify = false;
+ pragma verify_duration_estimate = 300;
+
+ include TotalAccumulatedRewardsAbortsIf;
+ }
+
+ spec schema TotalAccumulatedRewardsAbortsIf {
+ vesting_contract_address: address;
+
+ // Note: the commission percentage must be between 0 and 100 because it is a percentage.
+ // This requirement resolves the timeout issue of total_accumulated_rewards.
+ // However, accumulated_rewards still times out.
+ requires staking_contract.commission_percentage >= 0 && staking_contract.commission_percentage <= 100;
+
include ActiveVestingContractAbortsIf{contract_address: vesting_contract_address};
let vesting_contract = global(vesting_contract_address);
@@ -59,7 +70,7 @@ spec aptos_framework::vesting {
aborts_if !simple_map::spec_contains_key(staking_contracts, operator);
let pool_address = staking_contract.pool_address;
- let stake_pool = borrow_global(pool_address);
+ let stake_pool = global(pool_address);
let active = coin::value(stake_pool.active);
let pending_active = coin::value(stake_pool.pending_active);
let total_active_stake = active + pending_active;
@@ -69,20 +80,51 @@ spec aptos_framework::vesting {
aborts_if active + pending_active > MAX_U64;
aborts_if total_active_stake < staking_contract.principal;
aborts_if accumulated_rewards * staking_contract.commission_percentage > MAX_U64;
+ // These two items both contribute to the timeout.
aborts_if (vesting_contract.remaining_grant + commission_amount) > total_active_stake;
+ aborts_if total_active_stake < vesting_contract.remaining_grant;
}

spec accumulated_rewards(vesting_contract_address: address, shareholder_or_beneficiary: address): u64 {
- // TODO: Uses `total_accumulated_rewards` which is not verified.
+ // TODO: A severe timeout cannot be resolved.
pragma verify = false; + pragma verify_duration_estimate = 1000; + + // This schema lead to timeout + include TotalAccumulatedRewardsAbortsIf; + + let vesting_contract = global(vesting_contract_address); + let operator = vesting_contract.staking.operator; + let staking_contracts = global(vesting_contract_address).staking_contracts; + let staking_contract = simple_map::spec_get(staking_contracts, operator); + let pool_address = staking_contract.pool_address; + let stake_pool = global(pool_address); + let active = coin::value(stake_pool.active); + let pending_active = coin::value(stake_pool.pending_active); + let total_active_stake = active + pending_active; + let accumulated_rewards = total_active_stake - staking_contract.principal; + let commission_amount = accumulated_rewards * staking_contract.commission_percentage / 100; + let total_accumulated_rewards = total_active_stake - vesting_contract.remaining_grant - commission_amount; + + let shareholder = spec_shareholder(vesting_contract_address, shareholder_or_beneficiary); + let pool = vesting_contract.grant_pool; + let shares = pool_u64::spec_shares(pool, shareholder); + aborts_if pool.total_coins > 0 && pool.total_shares > 0 + && (shares * total_accumulated_rewards) / pool.total_shares > MAX_U64; + + ensures result == pool_u64::spec_shares_to_amount_with_total_coins(pool, shares, total_accumulated_rewards); } spec shareholders(vesting_contract_address: address): vector
{ include ActiveVestingContractAbortsIf{contract_address: vesting_contract_address}; } + spec fun spec_shareholder(vesting_contract_address: address, shareholder_or_beneficiary: address): address; + spec shareholder(vesting_contract_address: address, shareholder_or_beneficiary: address): address { + pragma opaque; include ActiveVestingContractAbortsIf{contract_address: vesting_contract_address}; + ensures [abstract] result == spec_shareholder(vesting_contract_address, shareholder_or_beneficiary); } spec create_vesting_schedule( @@ -99,49 +141,102 @@ spec aptos_framework::vesting { spec create_vesting_contract { // TODO: Data invariant does not hold. pragma verify = false; + aborts_if withdrawal_address == @aptos_framework || withdrawal_address == @vm_reserved; + aborts_if !exists(withdrawal_address); + aborts_if !exists>(withdrawal_address); + aborts_if len(shareholders) == 0; + aborts_if simple_map::spec_len(buy_ins) != len(shareholders); } spec unlock_rewards(contract_address: address) { // TODO: Calls `unlock_stake` which is not verified. + // Current verification times out. pragma verify = false; + include UnlockRewardsAbortsIf; + } + + spec schema UnlockRewardsAbortsIf { + contract_address: address; + + // Cause timeout here + include TotalAccumulatedRewardsAbortsIf { vesting_contract_address: contract_address }; + + let vesting_contract = global(contract_address); + let operator = vesting_contract.staking.operator; + let staking_contracts = global(contract_address).staking_contracts; + let staking_contract = simple_map::spec_get(staking_contracts, operator); + let pool_address = staking_contract.pool_address; + let stake_pool = global(pool_address); + let active = coin::value(stake_pool.active); + let pending_active = coin::value(stake_pool.pending_active); + let total_active_stake = active + pending_active; + let accumulated_rewards = total_active_stake - staking_contract.principal; + let commission_amount = accumulated_rewards * staking_contract.commission_percentage / 100; + let amount = total_active_stake - vesting_contract.remaining_grant - commission_amount; + + include UnlockStakeAbortsIf { vesting_contract, amount }; } spec unlock_rewards_many(contract_addresses: vector
) { // TODO: Calls `unlock_rewards` in loop. pragma verify = false; + aborts_if len(contract_addresses) == 0; + include PreconditionAbortsIf; } spec vest(contract_address: address) { // TODO: Calls `staking_contract::distribute` which is not verified. pragma verify = false; + include UnlockRewardsAbortsIf; } spec vest_many(contract_addresses: vector
) { // TODO: Calls `vest` in loop. pragma verify = false; + aborts_if len(contract_addresses) == 0; + include PreconditionAbortsIf; + } + + spec schema PreconditionAbortsIf { + contract_addresses: vector
; + + requires forall i in 0..len(contract_addresses): simple_map::spec_get(global(contract_addresses[i]).staking_contracts, global(contract_addresses[i]).staking.operator).commission_percentage >= 0 + && simple_map::spec_get(global(contract_addresses[i]).staking_contracts, global(contract_addresses[i]).staking.operator).commission_percentage <= 100; } spec distribute(contract_address: address) { // TODO: Can't handle abort in loop. pragma verify = false; + include ActiveVestingContractAbortsIf; + + let vesting_contract = global(contract_address); + include WithdrawStakeAbortsIf { vesting_contract }; } spec distribute_many(contract_addresses: vector
) { // TODO: Calls `distribute` in loop. pragma verify = false; + aborts_if len(contract_addresses) == 0; } spec terminate_vesting_contract(admin: &signer, contract_address: address) { // TODO: Calls `staking_contract::distribute` which is not verified. pragma verify = false; + include ActiveVestingContractAbortsIf; + + let vesting_contract = global(contract_address); + include WithdrawStakeAbortsIf { vesting_contract }; } spec admin_withdraw(admin: &signer, contract_address: address) { // TODO: Calls `withdraw_stake` which is not verified. - pragma aborts_if_is_partial; - include VerifyAdminAbortsIf; + pragma verify = false; + let vesting_contract = global(contract_address); aborts_if vesting_contract.state != VESTING_POOL_TERMINATED; + + include VerifyAdminAbortsIf; + include WithdrawStakeAbortsIf { vesting_contract }; } spec update_operator( @@ -151,8 +246,20 @@ spec aptos_framework::vesting { commission_percentage: u64, ) { // TODO: Calls `staking_contract::switch_operator` which is not verified. - pragma aborts_if_is_partial; + pragma verify = false; + include VerifyAdminAbortsIf; + + let vesting_contract = global(contract_address); + let acc = vesting_contract.signer_cap.account; + let old_operator = vesting_contract.staking.operator; + include staking_contract::ContractExistsAbortsIf { staker: acc, operator: old_operator }; + let store = global(acc); + let staking_contracts = store.staking_contracts; + aborts_if simple_map::spec_contains_key(staking_contracts, new_operator); + + let staking_contract = simple_map::spec_get(staking_contracts, old_operator); + include DistributeInternalAbortsIf { staker: acc, operator: old_operator, staking_contract, distribute_events: store.distribute_events }; } spec update_operator_with_same_commission( @@ -181,17 +288,20 @@ spec aptos_framework::vesting { admin: &signer, contract_address: address, ) { - // TODO: Unable to handle abort from `stake::assert_stake_pool_exists`. - pragma aborts_if_is_partial; aborts_if !exists(contract_address); - let vesting_contract1 = global(contract_address); - aborts_if signer::address_of(admin) != vesting_contract1.admin; + let vesting_contract = global(contract_address); + aborts_if signer::address_of(admin) != vesting_contract.admin; - let operator = vesting_contract1.staking.operator; - let staker = vesting_contract1.signer_cap.account; + let operator = vesting_contract.staking.operator; + let staker = vesting_contract.signer_cap.account; + + include staking_contract::ContractExistsAbortsIf {staker, operator}; + include staking_contract::IncreaseLockupWithCapAbortsIf {staker, operator}; - include staking_contract::ContractExistsAbortsIf; - include staking_contract::IncreaseLockupWithCapAbortsIf; + let store = global(staker); + let staking_contract = simple_map::spec_get(store.staking_contracts, operator); + let pool_address = staking_contract.owner_cap.pool_address; + aborts_if !exists(vesting_contract.staking.pool_address); } spec set_beneficiary( @@ -212,11 +322,19 @@ spec aptos_framework::vesting { contract_address: address, shareholder: address, ) { - // TODO: The abort of functions on either side of a logical operator can not be handled. 
- pragma aborts_if_is_partial; aborts_if !exists(contract_address); - let post vesting_contract = global(contract_address); - ensures !simple_map::spec_contains_key(vesting_contract.beneficiaries,shareholder); + + let addr = signer::address_of(account); + let vesting_contract = global(contract_address); + aborts_if addr != vesting_contract.admin && !std::string::spec_internal_check_utf8(ROLE_BENEFICIARY_RESETTER); + aborts_if addr != vesting_contract.admin && !exists(contract_address); + let roles = global(contract_address).roles; + let role = std::string::spec_utf8(ROLE_BENEFICIARY_RESETTER); + aborts_if addr != vesting_contract.admin && !simple_map::spec_contains_key(roles, role); + aborts_if addr != vesting_contract.admin && addr != simple_map::spec_get(roles, role); + + let post post_vesting_contract = global(contract_address); + ensures !simple_map::spec_contains_key(post_vesting_contract.beneficiaries,shareholder); } spec set_management_role( @@ -259,18 +377,15 @@ spec aptos_framework::vesting { admin: &signer, contract_creation_seed: vector, ): (signer, SignerCapability) { - // TODO: disabled due to timeout - pragma verify=false; - // TODO: Could not verify `coin::register` because can't get the `account_signer`. - pragma aborts_if_is_partial; + pragma verify_duration_estimate = 300; let admin_addr = signer::address_of(admin); let admin_store = global(admin_addr); let seed = bcs::to_bytes(admin_addr); let nonce = bcs::to_bytes(admin_store.nonce); - let first = concat(seed,nonce); - let second = concat(first,VESTING_POOL_SALT); - let end = concat(second,contract_creation_seed); + let first = concat(seed, nonce); + let second = concat(first, VESTING_POOL_SALT); + let end = concat(second, contract_creation_seed); let resource_addr = account::spec_create_resource_address(admin_addr, end); aborts_if !exists(admin_addr); @@ -278,6 +393,16 @@ spec aptos_framework::vesting { aborts_if admin_store.nonce + 1 > MAX_U64; let ea = account::exists_at(resource_addr); include if (ea) account::CreateResourceAccountAbortsIf else account::CreateAccountAbortsIf {addr: resource_addr}; + + let acc = global(resource_addr); + let post post_acc = global(resource_addr); + aborts_if !exists>(resource_addr) && !aptos_std::type_info::spec_is_struct(); + aborts_if !exists>(resource_addr) && ea && acc.guid_creation_num + 2 > MAX_U64; + aborts_if !exists>(resource_addr) && ea && acc.guid_creation_num + 2 >= account::MAX_GUID_CREATION_NUM; + ensures exists(resource_addr) && post_acc.authentication_key == account::ZERO_AUTH_KEY && + exists>(resource_addr); + ensures signer::address_of(result_1) == resource_addr; + ensures result_2.account == resource_addr; } spec verify_admin(admin: &signer, vesting_contract: &VestingContract) { @@ -295,11 +420,71 @@ spec aptos_framework::vesting { spec unlock_stake(vesting_contract: &VestingContract, amount: u64) { // TODO: Calls `staking_contract::unlock_stake` which is not verified. 
pragma verify = false; + include UnlockStakeAbortsIf; + } + + spec schema UnlockStakeAbortsIf { + vesting_contract: &VestingContract; + amount: u64; + + // verify staking_contract::unlock_stake() + let acc = vesting_contract.signer_cap.account; + let operator = vesting_contract.staking.operator; + include amount != 0 ==> staking_contract::ContractExistsAbortsIf { staker: acc, operator }; + + // verify staking_contract::distribute_internal() + let store = global(acc); + let staking_contract = simple_map::spec_get(store.staking_contracts, operator); + include amount != 0 ==> DistributeInternalAbortsIf { staker: acc, operator, staking_contract, distribute_events: store.distribute_events }; } spec withdraw_stake(vesting_contract: &VestingContract, contract_address: address): Coin { // TODO: Calls `staking_contract::distribute` which is not verified. pragma verify = false; + include WithdrawStakeAbortsIf; + } + + spec schema WithdrawStakeAbortsIf { + vesting_contract: &VestingContract; + contract_address: address; + + let operator = vesting_contract.staking.operator; + include staking_contract::ContractExistsAbortsIf { staker: contract_address, operator }; + + // verify staking_contract::distribute_internal() + let store = global(contract_address); + let staking_contract = simple_map::spec_get(store.staking_contracts, operator); + include DistributeInternalAbortsIf { staker: contract_address, operator, staking_contract, distribute_events: store.distribute_events }; + } + + spec schema DistributeInternalAbortsIf { + staker: address; // The verification below does not contain the loop in staking_contract::update_distribution_pool(). + operator: address; + staking_contract: staking_contract::StakingContract; + distribute_events: EventHandle; + + let pool_address = staking_contract.pool_address; + aborts_if !exists(pool_address); + let stake_pool = global(pool_address); + let inactive = stake_pool.inactive.value; + let pending_inactive = stake_pool.pending_inactive.value; + aborts_if inactive + pending_inactive > MAX_U64; + + // verify stake::withdraw_with_cap() + let total_potential_withdrawable = inactive + pending_inactive; + let pool_address_1 = staking_contract.owner_cap.pool_address; + aborts_if !exists(pool_address_1); + let stake_pool_1 = global(pool_address_1); + aborts_if !exists(@aptos_framework); + let validator_set = global(@aptos_framework); + let inactive_state = !stake::spec_contains(validator_set.pending_active, pool_address_1) + && !stake::spec_contains(validator_set.active_validators, pool_address_1) + && !stake::spec_contains(validator_set.pending_inactive, pool_address_1); + let inactive_1 = stake_pool_1.inactive.value; + let pending_inactive_1 = stake_pool_1.pending_inactive.value; + let new_inactive_1 = inactive_1 + pending_inactive_1; + aborts_if inactive_state && timestamp::spec_now_seconds() >= stake_pool_1.locked_until_secs + && inactive_1 + pending_inactive_1 > MAX_U64; } spec get_beneficiary(contract: &VestingContract, shareholder: address): address { diff --git a/aptos-move/framework/aptos-framework/sources/voting.spec.move b/aptos-move/framework/aptos-framework/sources/voting.spec.move index b409e79cfc83a..5439fca9c26c5 100644 --- a/aptos-move/framework/aptos-framework/sources/voting.spec.move +++ b/aptos-move/framework/aptos-framework/sources/voting.spec.move @@ -33,11 +33,9 @@ spec aptos_framework::voting { use aptos_framework::chain_status; requires chain_status::is_operating(); - include CreateProposalAbortsIf{is_multi_step_proposal: false}; + include 
CreateProposalAbortsIfAndEnsures{is_multi_step_proposal: false}; + // property 1: Verify the proposal_id of the newly created proposal. ensures result == old(global>(voting_forum_address)).next_proposal_id; - ensures global>(voting_forum_address).next_proposal_id - == old(global>(voting_forum_address)).next_proposal_id + 1; - ensures table::spec_contains(global>(voting_forum_address).proposals, result); } // The min_vote_threshold lower thanearly_resolution_vote_threshold. @@ -57,18 +55,14 @@ spec aptos_framework::voting { is_multi_step_proposal: bool, ): u64 { use aptos_framework::chain_status; - pragma verify_duration_estimate = 120; // TODO: set because of timeout (property proved) requires chain_status::is_operating(); - include CreateProposalAbortsIf; + include CreateProposalAbortsIfAndEnsures; + // property 1: Verify the proposal_id of the newly created proposal. ensures result == old(global>(voting_forum_address)).next_proposal_id; - ensures global>(voting_forum_address).next_proposal_id - == old(global>(voting_forum_address)).next_proposal_id + 1; - ensures table::spec_contains(global>(voting_forum_address).proposals, result); - ensures table::spec_contains(global>(voting_forum_address).proposals, result); } - spec schema CreateProposalAbortsIf { + spec schema CreateProposalAbortsIfAndEnsures { voting_forum_address: address; execution_hash: vector; min_vote_threshold: u128; @@ -86,10 +80,21 @@ spec aptos_framework::voting { aborts_if !std::string::spec_internal_check_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY); aborts_if len(execution_hash) <= 0; let execution_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_KEY); - aborts_if simple_map::spec_contains_key(metadata,execution_key); + aborts_if simple_map::spec_contains_key(metadata, execution_key); aborts_if voting_forum.next_proposal_id + 1 > MAX_U64; let is_multi_step_in_execution_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY); - aborts_if is_multi_step_proposal && simple_map::spec_contains_key(metadata,is_multi_step_in_execution_key); + aborts_if is_multi_step_proposal && simple_map::spec_contains_key(metadata, is_multi_step_in_execution_key); + + let post post_voting_forum = global>(voting_forum_address); + let post post_metadata = table::spec_get(post_voting_forum.proposals, proposal_id).metadata; + ensures post_voting_forum.next_proposal_id == voting_forum.next_proposal_id + 1; + // property 1: Ensure that newly created proposals exist in the voting forum proposals table. + ensures table::spec_contains(post_voting_forum.proposals, proposal_id); + ensures if (is_multi_step_proposal) { + simple_map::spec_get(post_metadata, is_multi_step_in_execution_key) == std::bcs::serialize(false) + } else { + !simple_map::spec_contains_key(post_metadata, is_multi_step_in_execution_key) + }; } spec vote( @@ -103,14 +108,16 @@ spec aptos_framework::voting { // Ensures existence of Timestamp requires chain_status::is_operating(); + // property 2: While voting, it ensures that only the governance module that defines ProposalType may initiate voting + // and that the proposal under vote exists in the specified voting forum. 
aborts_if !exists>(voting_forum_address); let voting_forum = global>(voting_forum_address); let proposal = table::spec_get(voting_forum.proposals, proposal_id); // Getting proposal from voting forum might fail because of non-exist id aborts_if !table::spec_contains(voting_forum.proposals, proposal_id); - // Aborts when voting period is over or resolved aborts_if is_voting_period_over(proposal); aborts_if proposal.is_resolved; + aborts_if !exists(@aptos_framework); // Assert this proposal is single-step, or if the proposal is multi-step, it is not in execution yet. aborts_if !std::string::spec_internal_check_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY); let execution_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY); @@ -119,26 +126,38 @@ spec aptos_framework::voting { aborts_if if (should_pass) { proposal.yes_votes + num_votes > MAX_U128 } else { proposal.no_votes + num_votes > MAX_U128 }; aborts_if !std::string::spec_internal_check_utf8(RESOLVABLE_TIME_METADATA_KEY); + + let post post_voting_forum = global>(voting_forum_address); + let post post_proposal = table::spec_get(post_voting_forum.proposals, proposal_id); + ensures if (should_pass) { + post_proposal.yes_votes == proposal.yes_votes + num_votes + } else { + post_proposal.no_votes == proposal.no_votes + num_votes + }; + let timestamp_secs_bytes = std::bcs::serialize(timestamp::spec_now_seconds()); + let key = std::string::spec_utf8(RESOLVABLE_TIME_METADATA_KEY); + ensures simple_map::spec_get(post_proposal.metadata, key) == timestamp_secs_bytes; } spec is_proposal_resolvable( voting_forum_address: address, proposal_id: u64, ) { - use aptos_framework::chain_status; // Ensures existence of Timestamp requires chain_status::is_operating(); + include IsProposalResolvableAbortsIf; + } + + spec schema IsProposalResolvableAbortsIf { + voting_forum_address: address; + proposal_id: u64; + include AbortsIfNotContainProposalID; - let voting_forum = global>(voting_forum_address); + let voting_forum = global>(voting_forum_address); let proposal = table::spec_get(voting_forum.proposals, proposal_id); - let early_resolution_threshold = option::spec_borrow(proposal.early_resolution_vote_threshold); - let voting_period_over = timestamp::now_seconds() > proposal.expiration_secs; - let be_resolved_early = option::spec_is_some(proposal.early_resolution_vote_threshold) && - (proposal.yes_votes >= early_resolution_threshold || - proposal.no_votes >= early_resolution_threshold); - let voting_closed = voting_period_over || be_resolved_early; + let voting_closed = spec_is_voting_closed(voting_forum_address, proposal_id); // Avoid Overflow aborts_if voting_closed && (proposal.yes_votes <= proposal.no_votes || proposal.yes_votes + proposal.no_votes < proposal.min_vote_threshold); // Resolvable_time Properties @@ -160,9 +179,25 @@ spec aptos_framework::voting { // Ensures existence of Timestamp requires chain_status::is_operating(); - pragma aborts_if_is_partial; - include AbortsIfNotContainProposalID; + include IsProposalResolvableAbortsIf; aborts_if !std::string::spec_internal_check_utf8(IS_MULTI_STEP_PROPOSAL_KEY); + let voting_forum = global>(voting_forum_address); + let proposal = table::spec_get(voting_forum.proposals, proposal_id); + let multi_step_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_KEY); + let has_multi_step_key = simple_map::spec_contains_key(proposal.metadata, multi_step_key); + aborts_if has_multi_step_key && !from_bcs::deserializable(simple_map::spec_get(proposal.metadata, multi_step_key)); + aborts_if 
has_multi_step_key && from_bcs::deserialize(simple_map::spec_get(proposal.metadata, multi_step_key)); + + let post post_voting_forum = global>(voting_forum_address); + let post post_proposal = table::spec_get(post_voting_forum.proposals, proposal_id); + aborts_if !exists(@aptos_framework); + // property 3: Ensure that proposal is successfully resolved. + ensures post_proposal.is_resolved == true; + ensures post_proposal.resolution_time_secs == timestamp::spec_now_seconds(); + + aborts_if option::spec_is_none(proposal.execution_content); + ensures result == option::spec_borrow(proposal.execution_content); + ensures option::spec_is_none(post_proposal.execution_content); } spec resolve_proposal_v2( @@ -176,14 +211,37 @@ spec aptos_framework::voting { // Ensures existence of Timestamp requires chain_status::is_operating(); - pragma aborts_if_is_partial; - include AbortsIfNotContainProposalID; + include IsProposalResolvableAbortsIf; + let voting_forum = global>(voting_forum_address); + let proposal = table::spec_get(voting_forum.proposals, proposal_id); + let post post_voting_forum = global>(voting_forum_address); + let post post_proposal = table::spec_get(voting_forum.proposals, proposal_id); + let multi_step_in_execution_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY); aborts_if !std::string::spec_internal_check_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY); aborts_if !std::string::spec_internal_check_utf8(IS_MULTI_STEP_PROPOSAL_KEY); + ensures simple_map::spec_contains_key(proposal.metadata, multi_step_in_execution_key) && + ((len(next_execution_hash) != 0 && is_multi_step) || (len(next_execution_hash) == 0 && !is_multi_step)) ==> + simple_map::spec_get(post_proposal.metadata, multi_step_in_execution_key) == std::bcs::serialize(true); + + let multi_step_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_KEY); + aborts_if simple_map::spec_contains_key(proposal.metadata, multi_step_key) && + !from_bcs::deserializable(simple_map::spec_get(proposal.metadata, multi_step_key)); + let is_multi_step = simple_map::spec_contains_key(proposal.metadata, multi_step_key) && + from_bcs::deserialize(simple_map::spec_get(proposal.metadata, multi_step_key)); + aborts_if !is_multi_step && len(next_execution_hash) != 0; + + aborts_if len(next_execution_hash) == 0 && !exists(@aptos_framework); + aborts_if len(next_execution_hash) == 0 && is_multi_step && !simple_map::spec_contains_key(proposal.metadata, multi_step_in_execution_key); + // property 4: For single-step proposals, it ensures that the next_execution_hash parameter is empty and resolves the proposal. + ensures len(next_execution_hash) == 0 ==> post_proposal.is_resolved == true && post_proposal.resolution_time_secs == timestamp::spec_now_seconds(); + ensures len(next_execution_hash) == 0 && is_multi_step ==> simple_map::spec_get(post_proposal.metadata, multi_step_in_execution_key) == std::bcs::serialize(false); + // property 4: For multi-step proposals, it ensures that the next_execution_hash parameter contains the hash of the next step. 
+ ensures len(next_execution_hash) != 0 ==> post_proposal.execution_hash == next_execution_hash; } spec next_proposal_id(voting_forum_address: address): u64 { aborts_if !exists>(voting_forum_address); + ensures result == global>(voting_forum_address).next_proposal_id; } spec is_voting_closed(voting_forum_address: address, proposal_id: u64): bool { @@ -191,10 +249,32 @@ spec aptos_framework::voting { // Ensures existence of Timestamp requires chain_status::is_operating(); include AbortsIfNotContainProposalID; + aborts_if !exists(@aptos_framework); + ensures result == spec_is_voting_closed(voting_forum_address, proposal_id); + } + + spec fun spec_is_voting_closed(voting_forum_address: address, proposal_id: u64): bool { + let voting_forum = global>(voting_forum_address); + let proposal = table::spec_get(voting_forum.proposals, proposal_id); + spec_can_be_resolved_early(proposal) || is_voting_period_over(proposal) } spec can_be_resolved_early(proposal: &Proposal): bool { aborts_if false; + ensures result == spec_can_be_resolved_early(proposal); + } + + spec fun spec_can_be_resolved_early(proposal: Proposal): bool { + if (option::spec_is_some(proposal.early_resolution_vote_threshold)) { + let early_resolution_threshold = option::spec_borrow(proposal.early_resolution_vote_threshold); + if (proposal.yes_votes >= early_resolution_threshold || proposal.no_votes >= early_resolution_threshold) { + true + } else{ + false + } + } else { + false + } } spec fun spec_get_proposal_state( @@ -203,12 +283,7 @@ spec aptos_framework::voting { voting_forum: VotingForum ): u64 { let proposal = table::spec_get(voting_forum.proposals, proposal_id); - let early_resolution_threshold = option::spec_borrow(proposal.early_resolution_vote_threshold); - let voting_period_over = timestamp::now_seconds() > proposal.expiration_secs; - let be_resolved_early = option::spec_is_some(proposal.early_resolution_vote_threshold) && - (proposal.yes_votes >= early_resolution_threshold || - proposal.no_votes >= early_resolution_threshold); - let voting_closed = voting_period_over || be_resolved_early; + let voting_closed = spec_is_voting_closed(voting_forum_address, proposal_id); let proposal_vote_cond = (proposal.yes_votes > proposal.no_votes && proposal.yes_votes + proposal.no_votes >= proposal.min_vote_threshold); if (voting_closed && proposal_vote_cond) { PROPOSAL_STATE_SUCCEEDED @@ -242,15 +317,7 @@ spec aptos_framework::voting { include AbortsIfNotContainProposalID; let voting_forum = global>(voting_forum_address); - let proposal = table::spec_get(voting_forum.proposals, proposal_id); - let early_resolution_threshold = option::spec_borrow(proposal.early_resolution_vote_threshold); - let voting_period_over = timestamp::now_seconds() > proposal.expiration_secs; - let be_resolved_early = option::spec_is_some(proposal.early_resolution_vote_threshold) && - (proposal.yes_votes >= early_resolution_threshold || - proposal.no_votes >= early_resolution_threshold); - let voting_closed = voting_period_over || be_resolved_early; ensures result == spec_get_proposal_state(voting_forum_address, proposal_id, voting_forum); - } spec get_proposal_creation_secs( @@ -258,6 +325,9 @@ spec aptos_framework::voting { proposal_id: u64, ): u64 { include AbortsIfNotContainProposalID; + let voting_forum = global>(voting_forum_address); + let proposal = table::spec_get(voting_forum.proposals, proposal_id); + ensures result == proposal.creation_time_secs; } spec get_proposal_expiration_secs( @@ -265,6 +335,7 @@ spec aptos_framework::voting { proposal_id: u64, 
): u64 { include AbortsIfNotContainProposalID; + ensures result == spec_get_proposal_expiration_secs(voting_forum_address, proposal_id); } spec get_execution_hash( @@ -272,6 +343,9 @@ spec aptos_framework::voting { proposal_id: u64, ): vector { include AbortsIfNotContainProposalID; + let voting_forum = global>(voting_forum_address); + let proposal = table::spec_get(voting_forum.proposals, proposal_id); + ensures result == proposal.execution_hash; } spec get_min_vote_threshold( @@ -279,6 +353,9 @@ spec aptos_framework::voting { proposal_id: u64, ): u128 { include AbortsIfNotContainProposalID; + let voting_forum = global>(voting_forum_address); + let proposal = table::spec_get(voting_forum.proposals, proposal_id); + ensures result == proposal.min_vote_threshold; } spec get_early_resolution_vote_threshold( @@ -286,6 +363,9 @@ spec aptos_framework::voting { proposal_id: u64, ): Option { include AbortsIfNotContainProposalID; + let voting_forum = global>(voting_forum_address); + let proposal = table::spec_get(voting_forum.proposals, proposal_id); + ensures result == proposal.early_resolution_vote_threshold; } spec get_votes( @@ -293,6 +373,10 @@ spec aptos_framework::voting { proposal_id: u64, ): (u128, u128) { include AbortsIfNotContainProposalID; + let voting_forum = global>(voting_forum_address); + let proposal = table::spec_get(voting_forum.proposals, proposal_id); + ensures result_1 == proposal.yes_votes; + ensures result_2 == proposal.no_votes; } spec is_resolved( @@ -300,6 +384,9 @@ spec aptos_framework::voting { proposal_id: u64, ): bool { include AbortsIfNotContainProposalID; + let voting_forum = global>(voting_forum_address); + let proposal = table::spec_get(voting_forum.proposals, proposal_id); + ensures result == proposal.is_resolved; } spec schema AbortsIfNotContainProposalID { @@ -314,23 +401,24 @@ spec aptos_framework::voting { voting_forum_address: address, proposal_id: u64, ): bool { + include AbortsIfNotContainProposalID; let voting_forum = global>(voting_forum_address); let proposal = table::spec_get(voting_forum.proposals,proposal_id); - aborts_if !table::spec_contains(voting_forum.proposals,proposal_id); - aborts_if !exists>(voting_forum_address); aborts_if !std::string::spec_internal_check_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY); let execution_key = std::string::spec_utf8(IS_MULTI_STEP_PROPOSAL_IN_EXECUTION_KEY); aborts_if !simple_map::spec_contains_key(proposal.metadata,execution_key); let is_multi_step_in_execution_key = simple_map::spec_get(proposal.metadata,execution_key); - aborts_if !aptos_std::from_bcs::deserializable(is_multi_step_in_execution_key); + aborts_if !from_bcs::deserializable(is_multi_step_in_execution_key); + + ensures result == from_bcs::deserialize(is_multi_step_in_execution_key); } spec is_voting_period_over(proposal: &Proposal): bool { use aptos_framework::chain_status; requires chain_status::is_operating(); aborts_if false; + ensures result == (timestamp::spec_now_seconds() > proposal.expiration_secs); } - } diff --git a/aptos-move/framework/aptos-stdlib/doc/big_vector.md b/aptos-move/framework/aptos-stdlib/doc/big_vector.md index f7ef134771c81..526bf81fc812d 100644 --- a/aptos-move/framework/aptos-stdlib/doc/big_vector.md +++ b/aptos-move/framework/aptos-stdlib/doc/big_vector.md @@ -10,6 +10,7 @@ - [Function `empty`](#0x1_big_vector_empty) - [Function `singleton`](#0x1_big_vector_singleton) - [Function `destroy_empty`](#0x1_big_vector_destroy_empty) +- [Function `destroy`](#0x1_big_vector_destroy) - [Function `borrow`](#0x1_big_vector_borrow) 
- [Function `borrow_mut`](#0x1_big_vector_borrow_mut) - [Function `append`](#0x1_big_vector_append) @@ -218,6 +219,38 @@ Aborts if v is not empty. + + + + +## Function `destroy` + +Destroy the vector v if T has drop + + +
public fun destroy<T: drop>(v: big_vector::BigVector<T>)
+
+ + + +
+Implementation + + +
public fun destroy<T: drop>(v: BigVector<T>) {
+    let BigVector { buckets, end_index, bucket_size: _ } = v;
+    let i = 0;
+    while (end_index > 0) {
+        let num_elements = vector::length(&table_with_length::remove(&mut buckets, i));
+        end_index = end_index - num_elements;
+        i = i + 1;
+    };
+    table_with_length::destroy_empty(buckets);
+}
+
+ + +
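A brief usage sketch for the new `destroy` function documented above. It assumes the test lives inside `big_vector.move` itself, where `empty` and `push_back` are directly callable; the bucket size and pushed values are arbitrary example choices, and the snippet is illustrative only, not part of this patch.

```move
#[test]
fun destroy_drops_remaining_elements() {
    // bucket_size of 2 is an arbitrary choice for the example.
    let v = empty<u64>(2);
    push_back(&mut v, 1);
    push_back(&mut v, 2);
    push_back(&mut v, 3);
    // Unlike destroy_empty, destroy accepts a non-empty vector because u64 has drop.
    destroy(v);
}
```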
diff --git a/aptos-move/framework/aptos-stdlib/doc/math128.md b/aptos-move/framework/aptos-stdlib/doc/math128.md index be26558d63967..b3a271c2e918b 100644 --- a/aptos-move/framework/aptos-stdlib/doc/math128.md +++ b/aptos-move/framework/aptos-stdlib/doc/math128.md @@ -18,7 +18,6 @@ Standard math utilities missing in the Move Language. - [Function `log2_64`](#0x1_math128_log2_64) - [Function `sqrt`](#0x1_math128_sqrt) - [Function `ceil_div`](#0x1_math128_ceil_div) -- [Function `assert_approx_the_same`](#0x1_math128_assert_approx_the_same) - [Specification](#@Specification_1) - [Function `max`](#@Specification_1_max) - [Function `min`](#@Specification_1_min) @@ -414,39 +413,6 @@ Returns square root of x, precisely floor(sqrt(x)) - - - - -## Function `assert_approx_the_same` - -For functions that approximate a value it's useful to test a value is close -to the most correct value up to last digit - - -
#[testonly]
-fun assert_approx_the_same(x: u128, y: u128, precission: u128)
-
- - - -
-Implementation - - -
fun assert_approx_the_same(x: u128, y: u128, precission: u128) {
-    if (x < y) {
-        let tmp = x;
-        x = y;
-        y = tmp;
-    };
-    let mult = pow(10, precission);
-    assert!((x - y) * mult < x, 0);
-}
-
- - -
diff --git a/aptos-move/framework/aptos-stdlib/doc/math64.md b/aptos-move/framework/aptos-stdlib/doc/math64.md index bc6436435a49e..cce24aa3e9515 100644 --- a/aptos-move/framework/aptos-stdlib/doc/math64.md +++ b/aptos-move/framework/aptos-stdlib/doc/math64.md @@ -17,7 +17,6 @@ Standard math utilities missing in the Move Language. - [Function `log2`](#0x1_math64_log2) - [Function `sqrt`](#0x1_math64_sqrt) - [Function `ceil_div`](#0x1_math64_ceil_div) -- [Function `assert_approx_the_same`](#0x1_math64_assert_approx_the_same) - [Specification](#@Specification_1) - [Function `max`](#@Specification_1_max) - [Function `min`](#@Specification_1_min) @@ -369,39 +368,6 @@ Returns square root of x, precisely floor(sqrt(x)) - - - - -## Function `assert_approx_the_same` - -For functions that approximate a value it's useful to test a value is close -to the most correct value up to last digit - - -
#[testonly]
-fun assert_approx_the_same(x: u128, y: u128, precission: u64)
-
- - - -
-Implementation - - -
fun assert_approx_the_same(x: u128, y: u128, precission: u64) {
-    if (x < y) {
-        let tmp = x;
-        x = y;
-        y = tmp;
-    };
-    let mult = (pow(10, precission) as u128);
-    assert!((x - y) * mult < x, 0);
-}
-
- - -
diff --git a/aptos-move/framework/aptos-stdlib/doc/math_fixed.md b/aptos-move/framework/aptos-stdlib/doc/math_fixed.md index e00553a0d5203..936d1ee612090 100644 --- a/aptos-move/framework/aptos-stdlib/doc/math_fixed.md +++ b/aptos-move/framework/aptos-stdlib/doc/math_fixed.md @@ -15,7 +15,6 @@ Standard math utilities missing in the Move Language. - [Function `mul_div`](#0x1_math_fixed_mul_div) - [Function `exp_raw`](#0x1_math_fixed_exp_raw) - [Function `pow_raw`](#0x1_math_fixed_pow_raw) -- [Function `assert_approx_the_same`](#0x1_math_fixed_assert_approx_the_same)
use 0x1::error;
@@ -294,39 +293,6 @@ Specialized function for x * y / z that omits intermediate shifting
 
 
 
-
-
-
-
-## Function `assert_approx_the_same`
-
-For functions that approximate a value it's useful to test a value is close
-to the most correct value up to last digit
-
-
-
#[testonly]
-fun assert_approx_the_same(x: u128, y: u128, precission: u128)
-
- - - -
-Implementation - - -
fun assert_approx_the_same(x: u128, y: u128, precission: u128) {
-    if (x < y) {
-        let tmp = x;
-        x = y;
-        y = tmp;
-    };
-    let mult = math128::pow(10, precission);
-    assert!((x - y) * mult < x, 0);
-}
-
- - -
diff --git a/aptos-move/framework/aptos-stdlib/doc/math_fixed64.md b/aptos-move/framework/aptos-stdlib/doc/math_fixed64.md index a436af50f9bd5..7dcaa1dda440c 100644 --- a/aptos-move/framework/aptos-stdlib/doc/math_fixed64.md +++ b/aptos-move/framework/aptos-stdlib/doc/math_fixed64.md @@ -15,7 +15,6 @@ Standard math utilities missing in the Move Language. - [Function `mul_div`](#0x1_math_fixed64_mul_div) - [Function `exp_raw`](#0x1_math_fixed64_exp_raw) - [Function `pow_raw`](#0x1_math_fixed64_pow_raw) -- [Function `assert_approx_the_same`](#0x1_math_fixed64_assert_approx_the_same)
use 0x1::error;
@@ -289,39 +288,6 @@ Specialized function for x * y / z that omits intermediate shifting
 
 
 
-
-
-
-
-## Function `assert_approx_the_same`
-
-For functions that approximate a value it's useful to test a value is close
-to the most correct value up to last digit
-
-
-
#[testonly]
-fun assert_approx_the_same(x: u256, y: u256, precission: u128)
-
- - - -
-Implementation - - -
fun assert_approx_the_same(x: u256, y: u256, precission: u128) {
-    if (x < y) {
-        let tmp = x;
-        x = y;
-        y = tmp;
-    };
-    let mult = (math128::pow(10, precission) as u256);
-    assert!((x - y) * mult < x, 0);
-}
-
- - -
diff --git a/aptos-move/framework/aptos-stdlib/doc/smart_vector.md b/aptos-move/framework/aptos-stdlib/doc/smart_vector.md index 47414ae54b32d..9c7ebffa32125 100644 --- a/aptos-move/framework/aptos-stdlib/doc/smart_vector.md +++ b/aptos-move/framework/aptos-stdlib/doc/smart_vector.md @@ -12,6 +12,8 @@ - [Function `empty_with_config`](#0x1_smart_vector_empty_with_config) - [Function `singleton`](#0x1_smart_vector_singleton) - [Function `destroy_empty`](#0x1_smart_vector_destroy_empty) +- [Function `destroy`](#0x1_smart_vector_destroy) +- [Function `clear`](#0x1_smart_vector_clear) - [Function `borrow`](#0x1_smart_vector_borrow) - [Function `borrow_mut`](#0x1_smart_vector_borrow_mut) - [Function `append`](#0x1_smart_vector_append) @@ -290,6 +292,59 @@ Aborts if v is not empty. + + + + +## Function `destroy` + +Destroy a table completely when T has drop. + + +
public fun destroy<T: drop>(v: smart_vector::SmartVector<T>)
+
+ + + +
+Implementation + + +
public fun destroy<T: drop>(v: SmartVector<T>) {
+    clear(&mut v);
+    destroy_empty(v);
+}
+
+ + + +
+ + + +## Function `clear` + + + +
public fun clear<T: drop>(v: &mut smart_vector::SmartVector<T>)
+
+ + + +
+Implementation + + +
public fun clear<T: drop>(v: &mut SmartVector<T>) {
+    v.inline_vec = vector[];
+    if (option::is_some(&v.big_vec)) {
+        big_vector::destroy(option::extract(&mut v.big_vec));
+    }
+}
+
+ + +
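As a usage note, here is a minimal sketch with a hypothetical test module (it assumes `empty`, `push_back`, and `length` are publicly callable, as in the version shown here). `clear` empties the vector in place so it can be reused, while `destroy` consumes it outright; both rely on the element type having `drop`:

<pre><code>#[test_only]
module 0xcafe::smart_vector_destroy_example {
    use aptos_std::smart_vector;

    #[test]
    fun clear_then_destroy() {
        let v = smart_vector::empty<u64>();
        smart_vector::push_back(&mut v, 1);
        smart_vector::push_back(&mut v, 2);

        // clear() leaves an empty but still usable vector behind.
        smart_vector::clear(&mut v);
        assert!(smart_vector::length(&v) == 0, 0);

        // destroy() consumes the vector whether or not it is empty,
        // as long as the element type (u64 here) has `drop`.
        smart_vector::push_back(&mut v, 3);
        smart_vector::destroy(v);
    }
}
</code></pre>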
diff --git a/aptos-move/framework/aptos-stdlib/doc/string_utils.md b/aptos-move/framework/aptos-stdlib/doc/string_utils.md index 47d5054258215..05d780705f4b0 100644 --- a/aptos-move/framework/aptos-stdlib/doc/string_utils.md +++ b/aptos-move/framework/aptos-stdlib/doc/string_utils.md @@ -9,7 +9,8 @@ A module for formatting move values as strings. - [Struct `Cons`](#0x1_string_utils_Cons) - [Struct `NIL`](#0x1_string_utils_NIL) - [Struct `FakeCons`](#0x1_string_utils_FakeCons) -- [Constants](#@Constants_0) + - [[test_only]](#@[test_only]_0) +- [Constants](#@Constants_1) - [Function `to_string`](#0x1_string_utils_to_string) - [Function `to_string_with_canonical_addresses`](#0x1_string_utils_to_string_with_canonical_addresses) - [Function `to_string_with_integer_types`](#0x1_string_utils_to_string_with_integer_types) @@ -26,17 +27,17 @@ A module for formatting move values as strings. - [Function `list4`](#0x1_string_utils_list4) - [Function `native_format`](#0x1_string_utils_native_format) - [Function `native_format_list`](#0x1_string_utils_native_format_list) -- [Specification](#@Specification_1) - - [Function `to_string`](#@Specification_1_to_string) - - [Function `to_string_with_canonical_addresses`](#@Specification_1_to_string_with_canonical_addresses) - - [Function `to_string_with_integer_types`](#@Specification_1_to_string_with_integer_types) - - [Function `debug_string`](#@Specification_1_debug_string) - - [Function `format1`](#@Specification_1_format1) - - [Function `format2`](#@Specification_1_format2) - - [Function `format3`](#@Specification_1_format3) - - [Function `format4`](#@Specification_1_format4) - - [Function `native_format`](#@Specification_1_native_format) - - [Function `native_format_list`](#@Specification_1_native_format_list) +- [Specification](#@Specification_2) + - [Function `to_string`](#@Specification_2_to_string) + - [Function `to_string_with_canonical_addresses`](#@Specification_2_to_string_with_canonical_addresses) + - [Function `to_string_with_integer_types`](#@Specification_2_to_string_with_integer_types) + - [Function `debug_string`](#@Specification_2_debug_string) + - [Function `format1`](#@Specification_2_format1) + - [Function `format2`](#@Specification_2_format2) + - [Function `format3`](#@Specification_2_format3) + - [Function `format4`](#@Specification_2_format4) + - [Function `native_format`](#@Specification_2_native_format) + - [Function `native_format_list`](#@Specification_2_native_format_list)
use 0x1::string;
@@ -109,9 +110,13 @@ A module for formatting move values as strings.
 ## Struct `FakeCons`
 
 
+
 
-
#[testonly]
-struct FakeCons<T, N> has copy, drop, store
+### [test_only]
+
+
+
+
struct FakeCons<T, N> has copy, drop, store
 
@@ -138,7 +143,7 @@ A module for formatting move values as strings. - + ## Constants @@ -542,12 +547,12 @@ Formatting with a rust-like format string, eg. + ## Specification - + ### Function `to_string` @@ -564,7 +569,7 @@ Formatting with a rust-like format string, eg. + ### Function `to_string_with_canonical_addresses` @@ -581,7 +586,7 @@ Formatting with a rust-like format string, eg. + ### Function `to_string_with_integer_types` @@ -598,7 +603,7 @@ Formatting with a rust-like format string, eg. + ### Function `debug_string` @@ -615,7 +620,7 @@ Formatting with a rust-like format string, eg. + ### Function `format1` @@ -632,7 +637,7 @@ Formatting with a rust-like format string, eg. + ### Function `format2` @@ -649,7 +654,7 @@ Formatting with a rust-like format string, eg. + ### Function `format3` @@ -666,7 +671,7 @@ Formatting with a rust-like format string, eg. + ### Function `format4` @@ -683,7 +688,7 @@ Formatting with a rust-like format string, eg. + ### Function `native_format` @@ -701,7 +706,7 @@ Formatting with a rust-like format string, eg. + ### Function `native_format_list` diff --git a/aptos-move/framework/aptos-stdlib/sources/data_structures/big_vector.move b/aptos-move/framework/aptos-stdlib/sources/data_structures/big_vector.move index c053ec8461a97..a7eca39732823 100644 --- a/aptos-move/framework/aptos-stdlib/sources/data_structures/big_vector.move +++ b/aptos-move/framework/aptos-stdlib/sources/data_structures/big_vector.move @@ -48,6 +48,18 @@ module aptos_std::big_vector { table_with_length::destroy_empty(buckets); } + /// Destroy the vector `v` if T has `drop` + public fun destroy(v: BigVector) { + let BigVector { buckets, end_index, bucket_size: _ } = v; + let i = 0; + while (end_index > 0) { + let num_elements = vector::length(&table_with_length::remove(&mut buckets, i)); + end_index = end_index - num_elements; + i = i + 1; + }; + table_with_length::destroy_empty(buckets); + } + /// Acquire an immutable reference to the `i`th element of the vector `v`. /// Aborts if `i` is out of bounds. public fun borrow(v: &BigVector, i: u64): &T { @@ -288,14 +300,6 @@ module aptos_std::big_vector { length(v) == 0 } - #[test_only] - fun destroy(v: BigVector) { - while (!is_empty(&mut v)) { - pop_back(&mut v); - }; - destroy_empty(v) - } - #[test] fun big_vector_test() { let v = empty(5); diff --git a/aptos-move/framework/aptos-stdlib/sources/data_structures/smart_vector.move b/aptos-move/framework/aptos-stdlib/sources/data_structures/smart_vector.move index 04d032cf4a5e2..2c1405b848eb0 100644 --- a/aptos-move/framework/aptos-stdlib/sources/data_structures/smart_vector.move +++ b/aptos-move/framework/aptos-stdlib/sources/data_structures/smart_vector.move @@ -74,6 +74,19 @@ module aptos_std::smart_vector { option::destroy_none(big_vec); } + /// Destroy a table completely when T has `drop`. + public fun destroy(v: SmartVector) { + clear(&mut v); + destroy_empty(v); + } + + public fun clear(v: &mut SmartVector) { + v.inline_vec = vector[]; + if (option::is_some(&v.big_vec)) { + big_vector::destroy(option::extract(&mut v.big_vec)); + } + } + /// Acquire an immutable reference to the `i`th element of the vector `v`. /// Aborts if `i` is out of bounds. 
public fun borrow(v: &SmartVector, i: u64): &T { @@ -328,14 +341,6 @@ module aptos_std::smart_vector { length(v) == 0 } - #[test_only] - fun destroy(v: SmartVector) { - while (!is_empty(&mut v)) { - let _ = pop_back(&mut v); - }; - destroy_empty(v) - } - #[test] fun smart_vector_test() { let v = empty(); diff --git a/aptos-move/framework/aptos-stdlib/sources/debug.move b/aptos-move/framework/aptos-stdlib/sources/debug.move index e4d906a36badf..21b707c7a9982 100644 --- a/aptos-move/framework/aptos-stdlib/sources/debug.move +++ b/aptos-move/framework/aptos-stdlib/sources/debug.move @@ -66,7 +66,6 @@ module aptos_std::debug { public fun test() { let x = 42; assert_equal(&x, b"42"); - print(&x); let v = vector::empty(); vector::push_back(&mut v, 100); @@ -102,8 +101,11 @@ module aptos_std::debug { assert_equal(&str, b"\"Can you say \\\"Hel\\\\lo\\\"?\""); } - #[test] - fun test_print_primitive_types() { + + #[test_only] + use std::features; + #[test(s = @0x123)] + fun test_print_primitive_types(s: signer) { let u8 = 255u8; assert_equal(&u8, b"255"); @@ -131,10 +133,10 @@ module aptos_std::debug { let a = @0x1234c0ffee; assert_equal(&a, b"@0x1234c0ffee"); - // print a signer - /*let senders = create_signers_for_testing(1); - let sender = vector::pop_back(&mut senders); - print(&sender);*/ + if (features::signer_native_format_fix_enabled()) { + let signer = s; + assert_equal(&signer, b"signer(@0x123)"); + } } const MSG_1 : vector = b"abcdef"; @@ -159,8 +161,8 @@ module aptos_std::debug { assert_equal(&obj, b"0x1::debug::TestInner {\n val: 10,\n vec: [],\n msgs: []\n}"); } - #[test] - fun test_print_vectors() { + #[test(s1 = @0x123, s2 = @0x456)] + fun test_print_vectors(s1: signer, s2: signer) { let v_u8 = x"ffabcdef"; assert_equal(&v_u8, b"0xffabcdef"); @@ -185,8 +187,10 @@ module aptos_std::debug { let v_addr = vector[@0x1234, @0x5678, @0xabcdef]; assert_equal(&v_addr, b"[ @0x1234, @0x5678, @0xabcdef ]"); - /*let v_signer = create_signers_for_testing(4); - print(&v_signer);*/ + if (features::signer_native_format_fix_enabled()) { + let v_signer = vector[s1, s2]; + assert_equal(&v_signer, b"[ signer(@0x123), signer(@0x456) ]"); + }; let v = vector[ TestInner { @@ -203,8 +207,8 @@ module aptos_std::debug { assert_equal(&v, b"[\n 0x1::debug::TestInner {\n val: 4,\n vec: [ 127, 128 ],\n msgs: [\n 0x00ff,\n 0xabcd\n ]\n },\n 0x1::debug::TestInner {\n val: 8,\n vec: [ 128, 129 ],\n msgs: [\n 0x0000\n ]\n }\n]"); } - #[test] - fun test_print_vector_of_vectors() { + #[test(s1 = @0x123, s2 = @0x456)] + fun test_print_vector_of_vectors(s1: signer, s2: signer) { let v_u8 = vector[x"ffab", x"cdef"]; assert_equal(&v_u8, b"[\n 0xffab,\n 0xcdef\n]"); @@ -229,8 +233,10 @@ module aptos_std::debug { let v_addr = vector[vector[@0x1234, @0x5678], vector[@0xabcdef, @0x9999]]; assert_equal(&v_addr, b"[\n [ @0x1234, @0x5678 ],\n [ @0xabcdef, @0x9999 ]\n]"); - /*let v_signer = vector[create_signers_for_testing(2), create_signers_for_testing(2)]; - print(&v_signer);*/ + if (features::signer_native_format_fix_enabled()) { + let v_signer = vector[vector[s1], vector[s2]]; + assert_equal(&v_signer, b"[\n [ signer(@0x123) ],\n [ signer(@0x456) ]\n]"); + }; let v = vector[ vector[ diff --git a/aptos-move/framework/aptos-stdlib/sources/math128.move b/aptos-move/framework/aptos-stdlib/sources/math128.move index a37cbe094d4f1..8f4dee3fb9ad7 100644 --- a/aptos-move/framework/aptos-stdlib/sources/math128.move +++ b/aptos-move/framework/aptos-stdlib/sources/math128.move @@ -295,7 +295,7 @@ module aptos_std::math128 { 
assert!(result == 13043817825332782212, 0); } - #[testonly] + #[test_only] /// For functions that approximate a value it's useful to test a value is close /// to the most correct value up to last digit fun assert_approx_the_same(x: u128, y: u128, precission: u128) { diff --git a/aptos-move/framework/aptos-stdlib/sources/math64.move b/aptos-move/framework/aptos-stdlib/sources/math64.move index 6aa733ed7fca6..9cf086c36182d 100644 --- a/aptos-move/framework/aptos-stdlib/sources/math64.move +++ b/aptos-move/framework/aptos-stdlib/sources/math64.move @@ -251,7 +251,7 @@ module aptos_std::math64 { assert!(result == 3037000499, 0); } - #[testonly] + #[test_only] /// For functions that approximate a value it's useful to test a value is close /// to the most correct value up to last digit fun assert_approx_the_same(x: u128, y: u128, precission: u64) { diff --git a/aptos-move/framework/aptos-stdlib/sources/math_fixed.move b/aptos-move/framework/aptos-stdlib/sources/math_fixed.move index 8046246da6f04..b7993a5f13bf4 100644 --- a/aptos-move/framework/aptos-stdlib/sources/math_fixed.move +++ b/aptos-move/framework/aptos-stdlib/sources/math_fixed.move @@ -124,7 +124,7 @@ module aptos_std::math_fixed { assert_approx_the_same(result, 1 << 33, 6); } - #[testonly] + #[test_only] /// For functions that approximate a value it's useful to test a value is close /// to the most correct value up to last digit fun assert_approx_the_same(x: u128, y: u128, precission: u128) { diff --git a/aptos-move/framework/aptos-stdlib/sources/math_fixed64.move b/aptos-move/framework/aptos-stdlib/sources/math_fixed64.move index 34cf37ca37a66..2369b6afebc3e 100644 --- a/aptos-move/framework/aptos-stdlib/sources/math_fixed64.move +++ b/aptos-move/framework/aptos-stdlib/sources/math_fixed64.move @@ -127,7 +127,7 @@ module aptos_std::math_fixed64 { assert_approx_the_same(result, 1 << 65, 16); } - #[testonly] + #[test_only] /// For functions that approximate a value it's useful to test a value is close /// to the most correct value up to last digit fun assert_approx_the_same(x: u256, y: u256, precission: u128) { diff --git a/aptos-move/framework/aptos-stdlib/sources/string_utils.move b/aptos-move/framework/aptos-stdlib/sources/string_utils.move index c7e3645e7d14a..c7f239b70de0e 100644 --- a/aptos-move/framework/aptos-stdlib/sources/string_utils.move +++ b/aptos-move/framework/aptos-stdlib/sources/string_utils.move @@ -107,7 +107,7 @@ module aptos_std::string_utils { native_format_list(&b"a = {} b = {} c = {}", &l); } - #[testonly] + /// #[test_only] struct FakeCons has copy, drop, store { car: T, cdr: N, diff --git a/aptos-move/framework/aptos-token-objects/doc/token.md b/aptos-move/framework/aptos-token-objects/doc/token.md index 0fcfe66a8545e..c14a816d56a40 100644 --- a/aptos-move/framework/aptos-token-objects/doc/token.md +++ b/aptos-move/framework/aptos-token-objects/doc/token.md @@ -479,7 +479,7 @@ additional specialization. ## Function `create_token_address` -Generates the collections address based upon the creators address and the collection's name +Generates the token's address based upon the creator's address, the collection's name and the token's name.
public fun create_token_address(creator: &address, collection: &string::String, name: &string::String): address
@@ -504,7 +504,7 @@ Generates the collections address based upon the creators address and the collec
 
 ## Function `create_token_seed`
 
-Named objects are derived from a seed, the collection's seed is its name.
+Named objects are derived from a seed, the token's seed is its name appended to the collection's name.
 
 
 
public fun create_token_seed(collection: &string::String, name: &string::String): vector<u8>
diff --git a/aptos-move/framework/aptos-token-objects/sources/token.move b/aptos-move/framework/aptos-token-objects/sources/token.move
index 06e9bdfe529f4..568da32b3040f 100644
--- a/aptos-move/framework/aptos-token-objects/sources/token.move
+++ b/aptos-move/framework/aptos-token-objects/sources/token.move
@@ -157,12 +157,12 @@ module aptos_token_objects::token {
         constructor_ref
     }
 
-    /// Generates the collections address based upon the creators address and the collection's name
+    /// Generates the token's address based upon the creator's address, the collection's name and the token's name.
     public fun create_token_address(creator: &address, collection: &String, name: &String): address {
         object::create_object_address(creator, create_token_seed(collection, name))
     }
 
-    /// Named objects are derived from a seed, the collection's seed is its name.
+    /// Named objects are derived from a seed, the token's seed is its name appended to the collection's name.
     public fun create_token_seed(collection: &String, name: &String): vector<u8> {
         assert!(string::length(name) <= MAX_TOKEN_NAME_LENGTH, error::out_of_range(ETOKEN_NAME_TOO_LONG));
         let seed = *string::bytes(collection);
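To illustrate the corrected doc comments, here is a small hypothetical test that derives a token object's address from the creator's address plus the seed built out of the collection name and the token name:

<pre><code>#[test_only]
module 0xcafe::token_address_example {
    use std::string;
    use aptos_token_objects::token;

    #[test]
    fun derive_token_address() {
        // Hypothetical creator and names, for illustration only.
        let creator = @0xcafe;
        let collection = string::utf8(b"Hero Collection");
        let name = string::utf8(b"Hero #1");
        // create_token_address combines the creator address with the seed
        // produced by create_token_seed (collection name plus token name).
        let addr = token::create_token_address(&creator, &collection, &name);
        assert!(addr != @0x0, 0);
    }
}
</code></pre>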
diff --git a/aptos-move/framework/cached-packages/generated/head.mrb b/aptos-move/framework/cached-packages/generated/head.mrb
deleted file mode 100644
index 6ef7b1c9418ae4abc5a186f76969b61fc1e833dc..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

zR2&k+~A7
zXAK>oQ72jK#{yffv=+i!kszw{;@vmYBJUVzh;|Gk;1gpq&W-*Ki)w1rej-y=8qZOG
zI{zO4gFt-0waZ&Ghunh6n``v?Vzpd`FT#uEdJQ&%soVCTMM(k7fh2L`sdYh0aOY2W
z`USXBHa^3qFt^wBxKm5Gy}#Mad-JV^t9$P8{j3lszl;4t?%&a(&hhuIS=Pym<%`pr
zMA0XODOulJB0@C>+L2motQSXCjFwbrdD*g0i5Ji_@3Clz|7mw&0|Gz70(Xcvx~ElO
zWu&*^{^sV*dBK1=n6aNQtwG8O43)UzuE)ats;^Cbo-d+WRvSs}=}9BarBp7d;~|9`
zD@cZB5q*Tm>PG9TRV9jrD&MabZBcBEN`IyWb5t(3vv_;%H6$VOgS>5DsvtnkFATWe
zu4r7Tx-(yQL~oQpk0E~Kdo^lYnwgGNM7l2*d#zVbCR!fPHSKdJ*d7OrBHv;}yiPkC
zO5OM!aIQm$^#PQKJJQ}P)l=!ypkUN_Ql>oy||%L&(!U
zm5%P5cC59&_01Tk1N(rj-@$vBFRt$A>uW4{^V)lwS3~g4n>$|_RT9N7R-e(ldf!8o
z?^YhhcoRANbF;i7M=Y$Cx1{KPg&1aW<>9!zX>Bjqdg1e3mO3!&F88t+a{FEV@vM-e
z%a-aG*qOfmbfc0|_SR?h$dx7*E>E>7!lc5fx+UN7bxAqQ%@6c&u=ux@#@`m>lk>9h
zuiLvoK|AaK{(tU0`IBID(OEgCsv+Xc>c<(Lnb9+|qSeV@;M3<-F!QV0nEbjmQTV4P
z+wEvp#-+1@a|?Y_c6NH&YEFN)q0)MJ>ra~fw%e#CLg&*TPU~NPfD5+v6J5j3_||++
zT95S!`Lgn93tG15SnM-#zMpuTWktHXSI?uT^`NJ1BDSV^_v)fq*EB=3l7Skjvo>ClMHo3ev7dAg#x-;@pH^k9S5Ud)(fYZmer2CmWW$qg&|-4>6r*WSMB
z83pnP$3pdbyjxGu&H8$dICV}P7xB@Yw(3QRoXqWf1H8oDuShjZx7(K3MxQ*5BeBOm
zHd}l{k+IL-ojq)(TJsMdfr0fjrZ|UY6RacpuS)_JNL+!5_!4?5KXyLA@geTE=DFU5
zsa$k{ijv3lNW+=C5*MS$@EYD_X`+Z%dhw<;4gp>*Wb7<503Tn&2mg)$$?~f$vJQf7
z8eeiwAtIq{!2IS{*PANDLfl-)m%gK^=n|0!&4I|xNnqyd+xeOnS9m^cN80NZ8tfFH
z?fPfMicjhA(+z
zIJF*mGG0ZqeV+VCfY*PlZqm5>myK|JWR@#}4FW95Aa7AZ%6eU*$^)h~J(r#>EMaXX
z8^s@NJmB7V@ZGTU#&4pNYA>z8no}Udg8^H6Z!_}g2X#LhoAhe)i<7F4vz=*$lMdQ`
zdRoEI^k?6Ek26;t+~?pesK5Bee(7QZ8lB^mEz?um&_~-*N~>CCDq6C&74v#FFk9z^
zWS+Mo!@{4&PiJHPc&$b-O!c_Cq?M422_?9QzVrlTt)t7&v2R_HApgy;mi^>eSCZ>d
z(J^HZ;EQb`?A~*Z+b&sI#b`|K+*>Y9N+-%Zu7@OeHKE&_*`tWHN
zKH<^R-s76>VR3Cr$%2*-K7W4W7gw9>lc(jD8h!X-FVniu;5KQ|CW|_J&}1BLqTslz9h^nOw45)X?61u{ECRW#X~Ovx?T>*
zGHu@*Yg3SJTB**OAnFi5qiMcPPS5lDBT{}p
zMa*8t;bhFX(HF;1^zDVOH~0u$%P)?j?x2dJ2s8b{uPUI25@b}UcBkj7=LN48dMTl^T2(1(Cy&&y#u;#!S2DwqRzc`sE4p68G*MU%T5-zLw{DQ&!=T~xw~
zo^AuBEwz3g%dx2FamX0?3Xnc7)2KW$j}L8t4^vv>FF>zUDTE1%w$@j8Wy^Q+!7b4+9&
z{RdSLPUzv*)=z|&G`lvzwt<@ejw(tw<>=c!j?%xycT6t&yR_HyrVF}{&o2rR`-7KLwE6EzQPoXk$ZAnvIDU}wZyuovxv3G
z@qVX=_HX|$V&8?(w5&UL8yZz9(dodUpi~nt4AV;w*yvl-e>#ptogTBtPcLuE5U+;b
z!1%%^@Q>YK*&d^LC?M3O(;@iXBT;RbCm$zH6FCg)VIGk8Zm)K{=H{{1w>RD2TVA^$
zV^5lY=;|L91xze{EIMM!@x}c6|9b
zblWvxE989D#+pa+G27ELpBLwEnj?JMuN%bv%Rj&P>Wfdl{OHAh{W<#6M_+#a`_Dgl
zUKT%WA0ooR6>Y)#)d8zIrAVPdJBxj-zom-5!jY{<>hY>!RX1@yvhK^_QGj9HK%K~G
z+la|}fl-Tc7Oe_ZE2gte{UNF9PcPf`wiNta7T;c0C9m8UIDixdRjMJ5&EB
z?XoD$FhhJILk9>?{fzso&GfN)&fy
z>_?`pNsf0&=5`9G%;g=klhG+l@acn+MzM>YTZ&*I2OTvOH|`dcO7t(|@wqy^Y;>
z;zlC#J}Qr`uPxq0|E4IUPu*ik`!Kq`W1=3B`Q1(2ar6406fmE43jyn`cGr$Nxs&#a
z7ppTY>+)2|8iCTyd2J^2W}_)pPvvZ@<<-4EFI=SZ(+f&0Aosoyu^q2?UK^Cms_mqGP+;TxGg(^&%WJe=T0gk-$FGn>!H`z#i5(}
z+%>FHrypv3x?{$bU}DPsbBpJFCxH&EZ=u+snLcYjI6NC(pCI@DbC3O`Y0(;?)k96E
zN{h!d81t$0mUMi-?oxZ%v~Qgao7UfbChvesfp#Xn07Ci)wBKb_!v_y@vbzUk*JBRz
zFt%Bj$8F||=0Mu=7LrkFp&u=Lyy#hpqYT7^RQJ@7vU`{P0@{}PwKPp%(On~(`mxbc
zO%lykSbupjsZrB&bA6diOMXtOyKXLfj^<@^
z%=dj5iuX(2wH~A+rH8vMZ|?^ma9-?SO80u9+|NUm^`ja=@kdjqXQpdgTo1Xm)McGF
zo~zCdHLg#qy6t#<+MZH=M6IH9U3~-0RE_zOJfEy4nl5&Fcj`9)E?e`RD(~sV!L5Me
zfKGcADX+TcFmKmay$GfG<(L`o!6f&M(ZG_wrE5zxi8Hwu*Z!tFVNZSyW-&x5zeF?a
zKaE6hr&KuItNTR>)mZE&%c6_u{Jb_~v$duhi5dyPmy4y)fv1;NQ!O&Q(p@a-7QfhmJn^FL=Mp=p%aO7f*_%70y?KJHPyt+OaI=
zX{w}2jTAPIwefM1WO*iXFJz`dB9crfpBNQqsfcBwCHHCKtwj^3+ZQ~2`O)XU`=Trl
zCOPur8$2@n!%GlXd
zQ{zN)t~2G+EX(2~7na8iRAAg$ljVZ>*qYdA$GpznlMnu}#o))_?u$Dx&oY~a4E)q_43_1I2?AfbEKidpO;VKyl_fGxGaidn1m$CTLVD&B
zrxHw`JsIfiR|Cx)*o@LxY-X#1PqQ-WIyvH$!(POtFYc#j5#N7?VVhj(KfNdU)?3cq
zt0uj=JCnLy&m=5Y^R++!!*a0oUCq%U+;S}Q^T0s<{4
z146RSnDAhc+p?JXdPXOQI$7b?;TKpUFRfD$QIjewm}U-o5gd$(S6LQ(>OJI7#P(lCVJQdKRSSs`!
zT$XBOvjlS#A@D>&x0tgb&GQg4Bb?(jCy~Ukia8d5SqpEn%(>K~F${pRCUBjkI?WH6
zVQ~^{a)w6wSP3q&ObG57_YO8YkBv`Y$KymOVL-Sn#A%|_V4Wc|0qyqAN#!k6n>j6w
z5DL_!b!D6h7-l0==Rx+)3`fN2-@f+W-N6zey&m-ZhAK=7uWV+Qj>>>wUu1R$F)9xz
z7bVI!;j|}<{`PM3W-E~`=d>@96-0WgUtyaqK*J(uQQfv!17T-nwqzR%uPV2bjwZHy
z+Y-B#PFvp1ws=c@ns!?TRZQ~av^ZbYJ{ot@=i%TyZl{`R+sy}uYUmDn)Tk`N?&A0)
zJ$Fur#n8!u$srSM>=
zg@@X}5#cIPsZDb075O;{93ZBjL_eGzJQ;12!wfivR;hNLTNW#3mCP;Nti)gj*qg02
zd6wp$o5XQAS(#0IZUU@<$Z}~Erp8UiU`BKdpNMM(zYIHeIBqg9s5yof7A~F2oKlu&
zh76EZ&c@ywoyQo^rh;dT=^PXHa5mt|ISpUTOKo6@v4X_JFd5DV<9%R>h36(>6(2Pj
zCF2Y%%4y{;&a_haM`WHSuuMb~l?VrKhvf;Zt+Eat2nNjHrKG}uM#3?WhZ!>YAPmQU
zEZkEmwd45bI5x%DJTRWtPHL7W+;XLI7mTL;B+fm9YZ}9uf?u2^aclxCWdcs94AwP;
zyUV~3Wpem%Xdip7nPk?&gk}!ib|&*pYa=InLnz>@qBi=&=q~^OX5arc`qhWg{0F*^
z6pq*nzadzQc4XQWu1ev5|2EAh}JjmuK*W
z1H1@5pW?CNoUGd%@E@z~LeH296negVz8D$ujKmh*vO2gdY}>M{2=UNEsI}cOn?W
z;6zC>g$lhv_~8B^-m{hkIjJy3ZF;vXy-*8l@5%M<7bzFp>e%&4<5lfr$ETrfgNvee
z6`F6SGEu%x{Z*ywgMxWoEMVvRm
z`*32Km7ysM@p-RkesAPbUt!UYTqrp
z_S_oU?JQV*TJYh4u6ccc@9kocPT6GLE5WOL=%tfAA>oe=H`5}%=IDj+kVwp-bd?suQmUEFP22clasm&}o
zFf2jDofyx8GA@e^b`5mn;usHOBXfAdr7p3lN`SQ`9$Ryo8gFDA3xQoPQ(SO#ZCH+(
zX#jZ|+aF?I$E4Tc$?4f4%T(AM5KUuL#kt}(PJx2KBP9gU3MC!<&p3t0Mcc4;0H-cf4f}5PFjKN36UWsYL?S)4tyo!Ni(vNU?2>HUk
ztrKntGQ(Eo6D*n=xaR=p2(^q=NQ*B5%dAlr8a
zb^(5q<^cf_1G*wq4mtz3^l@+iPC%d(eoP+c@UP&@Iz`AeKxhk*x2X*}MF2yB0m+nB
zX*>*ATW9ci0e6DQ;BAbxCpqv9XjYm+`oUE4V7;+$weka(iLd}Fml^;~aCRV)sdow}
zI*{)?qypYCd|Lzsip23d)I>V0C7+VQH@HhvMB
z!eLktp(emQH9?;gL1@I#SXHbu1D#U_fFAyrX#rCUv7&s94-RRT6u}DfpdCYe<8lOO
zz%XDzjKTb2eIbVcFJK!I*c;eq4Ra+C3IpLF%rQ>mj0f&vl{hRn>`g#t#vZ+3a0uZc
z%xfO7ypZeE#M;ArsW{b092uXJt_TM!nOd3o5K=&{Sds^{fglawuj3jDY84;{hzkNo
znH$<@U?Uk5KxPm%WmzIHcbIirBgi0Q7GQ|69Vm>3qzqKi2x_4aSu2rgZG9lDoqvY{&lK?CoQ&eKYA*+Weee_&L4sb9&+D
z^uo{Sg`d+4|J&0G4~u)t>efcn9yi8y(0X|ECImS9AcSWLe0FabVxNo!0S)5TvkWmV
zfgbS3AzTd64xDGW#IVC~^bzZb3}HX)D&!v_5^OMV3PiUsa|xkE1n_VoDlia)QXCF2
zqC4l9^Qp85a{>pdrc=-}MO1+n4iJh50%akK#?hWaXh#N`Bgo*0v5iM0adbccUtR~S
z13Ezw6#^@;Qxb*(O(Y$Ofh}Ql1O;Sf5TbKsJy4Mt`%R8m76BSK?*L(8#h4d-erAK>
z4pNLb7Ez-_$de;9b+lzM1oi<$NtBCoPNk2*K+qA{8>}7Z5Cxn8mI^>0hLt>%WHj(E
zMg;DHSP;U6bz_Fi#m6t$BmyWP$JhXeI06eq>=dr*ICmyPOl<+f&B
z2?;c|mw^*0Vz?#Py%nae5`w${Utxbvazv)kakwYIRx%3fT?+n5z+F(3V$()=4mc1y
z55m=u=r{l%h5Q1W&^AT54&(zMBsNm#C{E-MC|G<*0elbx6XOu?8k-~FiwW@rUIOBQ
z2=JIJG{}kLIgD8$OfN7@h?#|{#Lk8o6xI?%2O+>-#}uY$u&aBS1HW+qhyW)zxKdIP
z=!`!g6AZLeDrOcIFNG+;xWQimKU4hQV0{7Dm*T)f9WX+py+&v(6o7Hs{Qy2<`Os5K
zP-&V0y)tkCv8WXLum}K%0lNcC0%i=`0w{}c_@C3Uhooa4mRB85td8Cdd`MvXRMT3`
z$TszOq3ZY{xU&A0j(iW2uq7s3--pM0*X8N&X1x|kuf>{>2J_^)sGNVjT-{K?cTZ$G
zUBJWbciWem)+I5;F&nz+wK}rcpZoS&y3yUgp_A-c0&Pha{%zhlOAPWynA_942_
zy>V~->d41CH)`LFJv|~`m55DpV)PobUWl@qO=#x$i$+F3s~tC)q;cP9I^QX39IR(%O!4`Ut}`6K<6~P_JVa
zjV{r-SyzD3jxjPsgURLk9QWML6CeZSwmcZK>WsSbMu(=hyD0eCVFMJxntK
z7$3IEJufF2oUAFR(EoH;+cT)qZ=1|cEq?jFw2}IKx*p>1aGm}h{o=#u?~wKN-`|@s
zvtuu^csg?A=&aWE=BHvo#ZlDWDh?Q&aI^(%?|}^=o5AVCVVpU@
z@5g1#;v|ic;Q$1r6*6%Eu8|)>rphEr0bln)8YfDEk29
z7x_X0FmuZ(>0kg2W>nXt44^;(s5}Ks29Pvnz=tIRxR(VWjzS!lzRQ%Cjv)~RIzTjv
zv>QXV17ZLOQX|O)h%N>~Y~xHJ!N7O1?4D)X}`D0PzXXx!55FzZeoBjngkgf
zWSN}+lx&r+5ynpe%=y&;yaP80kSr)Cw#Bw0BB?0icP0KouV7&!GA-QVPf~6;+L30>}(J
zTT6|c5T>2PFr<`Vkk;l%BM4+eu#2R~D`xl`3{YpJ^i%K^`4lkTd1M?iDXG*BlT$%R
zkP+4-WUzIFY6FBvauEt8k)}xiji)+JI8}mEiDgc8wONWpmB6FWb;^!d$R0IvPFY~*
zSr*dci_&B&a-?1)*a7LyVu1vWU@Ab2u|Yt+6$XxxP>3@mj#Ps5lmp6+sR>9w_n3G<
z(iS;h`GG)w^VkOYPVCJY63#60IJhm>F!fO+MDvbo54L#!~AkTxI(6sH&z
zzcT?WK46PrNV8cCjNJ(GgtQM*f3Xj+whZPPNs{9^9Fr7hii!%6@1vBo^;BPHpo}?E
zKxyVJa$m^rJF3VB8>x&Hgl~cii!sX!B&Ed
zrAH1egQO%L=0Ry5Q++>aHiSMv)IAlXLl#mfS)MSL${YrruA|VjX9^k|xf@+boza>f
zeP9eGkx+xR-;C!dTG#cB92DC1)a&RzKmFzdNJ*7A
zE5haCrpD;|`43NyT4VFtrE!$`@Kn!dK=E6V^P+PT|4?4bw^__arE&7TCgbL;%_eqw
zIws5IVD;>}BTc`%>S;rs@s8(Ri|_zhznjb*bk;SWmcFUy+i%o5PVk4kf!V)%^{`%=
zX}tfoB32v4>37NYcGp$&g0F+pf=^rTZ+ood;MAVCDmT5+_ZRO^tc~`%J6l+ddHTqM
z)=L=^n@8)s+iO>}{^b&e57znvvW@Tm^yxtG`c1~#ho4w8Y-
zw)gLKvunjWpi0NJ_a;FU{6nkdf!JB6%&^!w?kU75TbFa}HnBq%i*VRsq8NkXj&qN9U@UP-WD
znN*pK&I5zDJh!m}yonhHKxfW6p3-@jj24h0J)vR+98n0JFk&48&o~*+|q5moh+jT;J1iO7TnJcJ3
z{SoYIJ)L0a28-jv9pOUIt8GWRK_qp!`&ord!%8QtaQGn>*ZpuJ?yIISUc)po&z%m%
z^fV7~W8UFH#&RQvrbY(uuUW1JZ`Rn6DXCz4dZ+LHTluE%<11%D5=}kzv~Ty3kv;In
z(_uAr533u<=F_rPeY~!|k6;a-?!A`&X;rKL(C%9G`R$_oiF*7)`#YotzelIOZqz-I
z-+91P2G36t>Y*KulB&rhRMPbpU2ms0PPn6?oEDJNfA?GcI-u%}&waHmD`ejStf?>>
z3d0`wR*atxk~!Eb%UN;yUBL~cdm~g)_(v}t2yvX{F&x6!C#i!&L{$m!vqDI?NNvL7
zoN>C>K|1(Qo_i)^cr=c#sFQU2o}>!BZ0IEAvBX~s-;bvm-9Z!!EqH8;YCO8r&jgoC
zr(=9;5=%$EJ)M-LllfGwXH?KZr*H`2STUZ)jEjT{P3UbZVmP1{(j+3L_PL8wc$VJh
z)|ilFL3%jC%+l={TsviDP&~mzq>Q~z6Df+yn#3~hq?6idy1J0F;HmsbSV{LcjSQ%@
zmR@@f;si^Vj?o!aOq$b0WpL^-I+Gxurm`)n2a1i6MtXy8CEYiWC8?1K%jy0l@A0(<
z*^75Wz#SfvG*)?HG7p+N<23wPx>S#mAMI3{GIXv&pu$PGwn|%Xc@p60TQCffLC}d4
zE~vnqCI6EuSF$w8c?xt++s=S}w-3rOC0k=JM(W<0Xux^QONoRd6p2j$;f0oTA(-R_
z;Gqe{$$XDxaCptap0GS({@BDs)rx>qyMm2rlr(q(aClNhUxVP6u2*w8n+mlM$#JD$S`
z`?v^LUak@AxjccYA%q2c=yfR}(7R7WLVeTarAchGNLiqgY|sXgy2?3s!ds`UPUv_#
z-5aDVy$r^sg7MeSda*H
zGnipO5#Wb{u~k&^ihUzX(aO;!F@~;`5MWZ8B#`Ue@R*4NA|Ub%alE3kI-xxnWs1A%
zk|d=XMV7&sgNJEOz)GA0_{e=iH>uK{s4^oRhD_KHXMi6d{B-Ywz&_)qt^}=bNSg{?
zDcBhQ5*G1v1bBX=02y)TtRE&c9BDkco#F7=L}>-4K2icVdFyvMm%~fZhA*Cp0_AI>
z7oYz7I-;Pkel2t9!_{xVyI$#lgsv5_SpN`-A&AabGoX8R_(z=3=L$**ATl0c1WwB&DO@-E|%v35LqbgE2
z`{~o{^Z7q#AFrnuVevb5xe?Vyifve$vS8MZNzFK`3sA5Wd=9=Dbt?+zp%L21g(M%j
zfsI-&3sJEZhf}p#sY?DTGFVSVWw2t{|z7|GQBj@X~hLyqy5JnKAugxLSg_7n1F!Z>vjpQh+LbP2-Y{rD3Pz(SB
zQNlK{k*e1D4$;azOcihz;s?pCO;x9OdJaKdfoLnBG4tl;
zhW>SeKiw`TlMjNM8ZzmgzF}Yv=0Y}^bW-OeNMX#@1Z5Z+Q)_E6>^gPg-i_HM8Wx#B
z)}|GFF8~qPuMLM}9Yrz`C}Z;cfms8U
zQwvN^(vOCYBQ@8!xaE-Bv3Zy^i#pFeO|X^Pwp^m@lN-#d+IV>ubLLS*AsM6@yMO55
z6*7Xjp&cC9(zpu9_?-hiVMjJyhao^Hj=i@yHZdAouU5P)1weqb%Ah)&Z$YF@b0gP5
zF^Rh@1S_DVG?D?7$wp>y95b&#;zTl?3(rDoi;M=~I;Uenu3&zEK!(a~G@YLEER$#g
zCz$R3Az*o~wj^Ff-dmU&7vH@Lg5un?DHWnL&?9!_KHNVHPu!VqIHoppbca9Cf~~4<
zZPAed3oC0+pj`IFQJSEP1Y#J(iQs
zhTYrgG-em_*7jp)GBAr^@G)OiWqA+3@pi=(D6}XH!#rs8+HTfOSXd#BYg&{ZM<}8<
zqzSBsz6PU(?S>8$y99_-(Cs(Y0UU7bzyv0*3|mxc-7>UkDF0r`TtwqsyPK{>3REri
znh3<7rlyBSG|&S!d5UHV(?_c{B02kh2a^_!8ck%`noPsFrJny!r1UkQW-4hha4n$_
zHn(wODN-EGGHNq0q0k4%n-NNx0y0F1+#Do}4X-4O9NM#+layLZa$dkpvJ`Igx#R0H
z7bD!Os?0_XCkl*b)=F-yefSo9!~8;s;#3?lv;%VyJ2KU#@>~&2n)F&bYraI0sDe)z
za8HE+wL@dfBIrv3T?>n~p%Olv1kP#xZ)(kf8W)-bt@x(lHJDjM
zwU$~VfyC0!W*4)+Pp@ZQ;-4c_nj2ds9BC+qaH|1k4#Nh{o^Tul*$MQOPo_#K6Ed7u9~%3oEQc!+uaFTn
z4cYCMXq{_l5>2-Q|Kfm?Os}YhVWTVKENHV-Oy}w1LUSrnU(s$JEq5=-!h=q~$X%ST
zVJO8;oN>dQ3Y;C#jCKq${tLORJ3;J3;>zIsx!1B=v5O1~1TbN?xK5ps@4lIpO)4L@
z*9`~a*>*n)sJ)5cY*>hGoCRqPBjbqXEO#RnG)2NE%Qs_sSLuzI{?dWe-{j2-ECNv^
z<;K~1W3QmmMT51+(eBlNX1*}2kC-Ra#Oa)b0eTp@pTqEh&O_nbgD`trEE2V_GKVK&
zH+oo^*MrxV5>G>dcq@SQ;q-9Ch>1a#rO?Fg9~|b#VWb6yHIW##PdXVi3W*6LcQ5at
z3#?1C8w}ZllNslpzZvag0e7j7E`gnK)JMHl(NSWKf*#}&UBjU1o`vk>*}R86ogI&5
zMNo4%J9-zWO8ip82>IL;Y2R%~I``Ol^hdpdixR7SRdi)WZ66!{Ve=qtj|9!pc|UgN
z^nO;C{;t=0ngtYf455OErSJ+6G{9hFUtbh7`%-DqsDfY@s`$RA_z>cW-T4*3Hx1z{
z;O`6h${_qh_Xlj2^uX+isU^>o_+btY<~Kx+*9sijHEtb*p)scIShlBPd~Sqmiyj>HU^uLF
z9NmeItQoJmcZipn6=a9i2PSeYI-z?SnKmde>?Sm=5BARJJ#bnEbPMcY1%Xv6l{EK+
zw;}sf`&0`HvRx7Sha`FgQq+@rG!~$0s&==x#zC5PPU
z^W!&k$`)Z%fOz$w$4Kh^=QbLCp>xl45BK4B0ldxy+4x=ubMe?Mu3w1J!Fbhh_I|2@
zAhF*k`VGwsYEd-%5wqPv-;b+LI5Mcl~+6g+H;ehn%FeK@m@?XhhRn
zv-no2auiA5JB(R%IEp!q_C=e!%TP0Hwe32E=N^1ncsJNUXwXJ?X`eyx?qF%wy(A8(
z(c#l?Chp(~G~Bjm0n2p*Cp!g8XSxHDKWHt&V^-$L;ojKz3uzNOiz_`>qf&j!&z75%
zQ@zOr>^r&}9&)0r^(vY3^wzrHbK&9D{{a=z4W26^000003S)0=aA9&`bZ>Hr2_K6H
z0000000RHrS6`3fHV}X3Q@D2$3ThL4Haix?$;OkBg=2Dn%l|=C2-tPsSew9)p&UUY}X}XuhpefKGlElzd;4DVW&*>m`iKL7VLD7%zKOj!i
zfP}%HL;@j@Uvh`Ni0b-$P3g(&=q*fz1oE}XJ+6z6&p%p
zl%zovh~R}*teFokbfe1>&rcAKZb_7c)JE7EZt+0b%_#~=nmdCG
zB4>Ofg~Q>XeTzmB6jFNoJhMXT%j`efC=FKY{4dl%vvOaBkP0MKz4=Z6{yPNS;<|#V
z=iJL+TG7tkHK^yZfo=rVC^;Z6n|P|}tbH)sbT#Nwcjf9fo6I|cAaG|Elex6fsgi;E&8k9K-UQ=9Y=8JaU!rfa)(K%;*r3tE8@*34dbE-;gj_lQ##w
zlq=(7Wf;lzx=y`ru3Typ4ii;;W0hxaUr!9Huh7{@{}~KyG&JIzvc!cYmX^4KrBhfs
z($Z;wTRCl;Q0K9)>tk1L15M!bZI2otw0Rg-HZiV;(6$Gs0^#Ni2)F{_`wJlCX`E;W
zHLdXWV{yuRJ_*7srS&bMFAJ3xfy3@ATmq^s1-DD=828m`50z4I{vWl%*=)i^OLQz{
zmC)DCr93Q|OjV^`HCZ0FW*Lv2y6LpH#&{}O0A=?5|HEYZmkW8>WV*g|GL0^tOyf%@
z(@hdnVsGwsr5M1)+;OFfemwSEYFW%37{T65ovMFYTp;0?^P--q5%r
z(POk$6nH%BqWwb`T==B~yy*97};I__}5t*fPR)?FCwfWnv9%AAV_M{Iw(FlaUuJ*`3<>=-lwm
zKb3N)K61;fmjr+TBY#@XmWBbRjde9!PMC)IrVWkw&iV0i_YtoRjrgPUEVLI3OXFUYQXJV<5>ZETb;!n)yn9Dd;yCWEA=r^-H2?o;W-lvS
zGoX!;I@hKmq?Aodi<;$BwD5NX_*i`tVcLe}Tx{g*x!sMD`@AUi<*PMJUjuyz?haO%
zQc?}pv{77a30e8~Y~**R(@cxXvPp1JDwc%kz}Gzg$bN3;VJ4z#umxG>VIS$lzsHHv8_3^iPe34CHDb%dRyk_sqT&{HK|UE@BesGK^E
zV9gF`a^01pwU&71AIp`)ju#B;?2NLpMa@gtSOE_3JR|-{coQn%+gV;lM*>+)yn?I!
z+kXQfUBjJw&#lh0WfUhj8G(~;JtIToMdM@K#hixVzWa{p(0MyAqstr%T1aIO$;JnU
zJa@I1j$_9L@5zyK4f-6L(56oQ(L+EKSgRC
zt=*&Q=(efjc-Gap4yKneap2-1oNvrnCA4?t&A4hHH=!p`_|RA<$r&35XXDVzJ%8h#
z_I}CNKT(d!rKsCq{n=T0yyuw-0~96)j$ux3@LPQFC+IArD^QJ
zrfCB2dZY#kt8@4&luuJQg}mNE0_pjH)95t%2ZO`sGhhV(0000EV{&u=jQ693-6g1`X_sg<4ByI3roG`QYg>Mky6vzzRPyP_-8
z5^W2SB~_vldyD?>H^YZSJ*~9q>qFa63w~9?ylVu)+e!h
zd{1Cc-mODYP*1!i83kG(QOWp)R2tdRJt#SgOKu=2151Qqe2;r5nU0C|!uJCn+%dH{
zin;hiU}rST@-6enwje(ui#YXJYS{~6)ivWBsUdB-R(u}o$$zsn?w$*76LHv8F+Gl0
z_iU+94&SU`Mc6LLcxD$%BepJn5FNWRb
z+LDp-1)MkPr?sKk#^r^r@W&C4M}mOqT_QRkc`=0DrB{XgYBAK5g1sPA=RL03$VBY?a1a
zH~;x3z3_b=c(=rhHF1Iy2uwiOtI#UV{}R^ys8AVD(bP&7lv_z^%Y$j3NIStuQ96C|*d-!m4GsgXW_ZJu~E;{{hg
zzK`@k2Sb$-0LK4yaXp!ie*9&kQQje05P6I&=6oT5P>F=;Yw_~^WV(=fNpU@$bBEyY
zS%d#64m)Jd#i#oq3{jLKiIh9PX29;LtCHh*3yDV>%gfUggbw%S23nKo_=!r(6f$Av
zX5636WLnf@$%iKd{y+vL;%l6sT4%ElZxWenvUtlJ$N>@icv79ZVpK3Wc_PNac`{DU
z-Pdn!+}|#*F3vB{U*DYndnUZ-IhmZ)X1#XLfB*Bvg?n-SdVcdvokMsgOxJE>+vXG7
zwrzH-j&0kv?R0G0wmP#9NByY{ZN&b4Q=)K9j$JP!1}-!5og?mzFR
z34dIEy6C08%Q*?pa71nKvPpsKMWsW)YK(y{;7D)iYcP%LwduIi7Uh%RV>n;GpRflA
z#u6^^_sbj#Sws8`A_4DNj>llY^
zQLi%`7n*Zg2SZ8H0H}as*ywp1zVBlpJT&k{vl61M@@HKQygwCugh-mw8QXxwI1Z;R
zeiI+w$pU;$t$lcO;}?zC;-_|MBuucPE7Om$&yOGD+Alz&I_YEq2w-hs1)nl^i1ndL
zcF!iVKREKUH-A@)std9?~ou*$9s)_`9Xy?102aI$+=lmq8O
zipBT}2?nMLH(mb2jti-zJ#P>W;(EAF{Ccqp3bsh}_yOSwJP%QDNDLpt&mkl@e&2l#
zCF+HS&-bN+4-D|B6Bs&#;C&EGoYH-eV%Mo&`zDh4h?f*oO8UW0G!pSO%(H_`Iaj^)dI6Xt}|
z)VlVF{ip9EqYif&Ga&#Rj%>&z_&
zMyPgugHaktpY>)@6-;#0U%g?()_bxocBOa034>^GvZ!5Wc;qQ+u5z|?=zZd3
z0}3RJE5jV++|QnU4Dv^xDyy1i=E#{fL_ZL|?@q}Xw*=Fl)L3Xi#e5UkZ!_UJM=R!M
zHn+Njz6`pf?b7PQ3g=jb69lT5^nj=>Cd3&XaDu5>qGsxi=~{n`P|_K>)_$zz9%4Sy
zAOp1@r(q9mC%U!OK9L9y^X9~w;kXmNBQ-Jj5($mR0xcdkC5Sgppx}E`U^t;h5jF0V
zx?P*e;syLk73S@!Aw9wNE@+w?DSu@ae7E!t5Q>L^k!u^wi77&SQ|@Y~HF_zfUl#R<
zMYHX;>0#W%L5ArejEA2He(NC7_CO9$p|^y%p52-L4P;Q}(-bq_mPzy2pJ|%|w&oB%
za^+b84b3$h$FR+o0pg1g2M}L_k{oA2c~3)E!ks&a$hprbxv9auC@mRY3H^pJ^B@kT
zqH`(yc?n+RuHKk+UOrAkLU1GSTNhA^u0T?A$IKFr2hcv>qxkGj$5P
zB2c-ZhkMli>-dU)H1l9~>~Bl^*1TZx{uO8NrcJUL>OvU;mO_vV_yfkqpije#gHdHR
zQJWl#qRu69pzN~E80LBtOhDtmKsO5yHkm?<{&xK|^uzYCh-O>DNt~204@CZI-eBq%
z^j7=kz-!*)m1~`!<79I10huWjkex5Btok=v!=8XtCBf@T!K$vevDtpm07881%`NAmqYio%2
zdj|X|wF(2Zb}xC$Z;sOuLtG6G^h42}S_EXUwUyYz^TRywdW{ESalT@`}CtA`_6@tQc45lvKX}rN}#BW#Z
zzi8ac9RAj6j99%v4tZp!Wv{p8D_oO9^rBbpTPu1>46O8c2bzZLqx3!6uv|tjH36N{
z4GR-$$O64-yeyo)>@dBB>*9j89dPc_i(ix`^Ee2g5L5Im>z)C(D69KQ**KKe9x|Z8
zYP`-+w?}B1wZ0>h+hxI)4lC#e)jJg_V2!+es9RO?VIn@^E|SnQ>{)BGk4k~U&(d=)
z;2L}E`|YuHOqS_|s>MYfI{JDv)Fcp)2-GV@XViX5O7NUy)dY^e;~;
zL{)bfP<0Uos|MDXvmpz+2r0`SdXhSh0^VsOu}G+oMRZ>Xe_FaU%?8&A>pzPt4JUcW
zM55b9U|BELTD?E_=>EKwme_n_FX6V
z1c|yeRFNI4Z#F@zwfA{@C_=-#vr0Z4X1Q*sw7gC#IL<<{PEs?Pxz11%dcMK+m{r_W
zZnwkjX?UcpFv@I3j)3;WS}Y%&bh%F0Oc54po7M^!`4}On6%=aC>o9E5Cs)#6ty>($
z410%9(9HH8=NJaA6Vjuy7R^foSAn?F@lRPsRh(Y>2=uw(<%48Ni~k%bT!5`xYl=Op6u)T#+N7Git&hfg;e2FX9#
z&-FfmQ3NG2$5ien-i*i0ETCK5~bSUL~@O>W8i$>(>^
zX|lf6Oz3U^3Q|2B@^{DU`-{u#(f#}6@a|w4Kia6oEkV~JPS}R6WRqlZ(hytuWR)KS
zzy3G2@8=0kVTxu?$(#hdDD9wBCvrNPN}`awlx!#!+rPFtKR3TWProqV&7~LAR>;du
z!GX*5-;IUv;_P#_r?!r7iJS4o$@>Z?H-n@gNF3-?(jwYaK%}Jorf)g7LaCIrBNE(w
zAgqStFX6SoVkNxUr%vq+Nbe9XQNT^mi7?v1>0qH#naMgXTNv)=m>Y51@}^y1*PzWw
zO#*YSyF=kvgYXoLn5%G}zZr6r#bph4eX>e6QNt%H8ily_O~uL6r(PXNW&bY?u!+_M
zIk2)bD11igC{6iK4Mri4&VI|&5Ud=|!I`w*psP$~pXzcx7g(N+M(O0p(J+rsDA(5Z
z*SGwah#fwRvMx*GdC}pA0io}T_{GJ%rd+%2X$-o}zw5929kNwsT-J)R{K~kzcm}m_
zWQvl~#mCWr5wxL0<08_7k;gStMFH?>;Q~WCD%$~^Ipo|fWEuI!k-nrJ{_eTsOTvh|
zSUFH}8xIPReLLbi{=t?X7btd-vSNHiaTbY6-H9el(_j-u>o_mz2vK*4?O|fI{7vy{
z5bB5F(kOO|hrVP{nf6OgJ&gjFNK+{fA+REBt~%(q8)rv}Wg4H%##)gUNH^AV7b4k>
z>+LL=Fskxdmw!09+yWjA=Qb;?DXSv?UJdkkKjJOWxmcr}GlX<7-nN8Gz}w&IO%1p~
z*>1@K%!vXOzHO5k2@iIA)3x%`_pYZ8W7!i2%XY}dA8;3~6GwAtkZx@rc|UhJ;-g;Y7E8(c@P
zU9Pj4?Uu7PCo45O?yBE3nCZ2%JAit+ytTtuFWiE*V1j(WFHzOwP*2Mfs5ccKW6g^L
zv^NU*!YWg%7z}{nl^X!dDAd55M?|S6Jga`VZYwsGtrQuujxb{2;JFyb76va))zYhB=Mj4nTBv8-
zoRisPcpCr{=9bCYV$%?N@5Gc^e-@y!Og_Po>p=f;-)Elwl1_z3w_$$THlcs_9VV4IozG*|r!%?koYjA@hKpQVkn)ZMx7nGtd3WMnNGqi54)GJrhl8Nb6fEu$$L-
z7clqi@Ty6rUaVv#zAE)P-9oh-9FG>E^x_BhqKQpo6(XUum<3%O#OhzE+cq5H;m@O4
zk51wBQ&y==Ib-TkU@5CT)JmV2!jdfN(G)g1UvxTt=rT`nv5LzJ=n68(lR6Jir&&ta
z^Ut}VJ%4oDznY}jQcGrAT^An`{x0Ti%5}HE;|?yf8&xQ0EO{r5Nf(`mn%z0UcNX;_dU}89=f))
zH)D$=QG9M{$IMC+`Ghy)|LW?pRq0y%eksW7*s+E+#MSVE2pSa}&sG2W5EUnnAD?zt
zRZ#KS!*JL)iN~h!e}6nfeM`KW0FakCJ$tU@KUcOCkIUg;GMWxO>}txNIOo+&Lb|
z`6^a&aE}pTO}}b
zB8&v1Kk`T9pp6nqjEWBa#IsK$wMW1b+H<0Wa&}s#OA6()pFvC;;{eKl3dBerpI|A3Ij<
z(nZSMn7y2Eaym=zJ$!1O81nPYl@3CziIZsB%)xy1%RZ6oYvQ#vYEzW{WY>V2V@!T4
znP;joQ~2>eH{p#tiJhLNoIP+t9_dX}Qm#eM8A?Mn80N_c?4m{VVpf>|L&@c1oy_s}
zCP74xzKonCOq&wv_Y4IE%}b)7Qh;{}x#`elS(lIK8pq{nJ_32OJgktpmh99*jxTIG}Blb0xH
z|7b$k4Ax9g1rJBUuo067!S(e{1NCEp28_A_H>8;e$fb*%k`=z(&#WZ}D9?B}^^meQ
z%7l@R2}UhCO@Ht=1n%}7)|&ZfHp5a*Kdiv`FA~>qW37mP=w}1$tsbDb0~xTxnH)WU
zN8>V+p#Q5D}UTIBc$<
z12~9*uY16AWBWa-dIaaz@NadkmJ?3u-2kZSdnBCVcsc|1_`{WIhs5^;71zWLuz>X|
zi`S611BJZ9hWhD+E+YM&ZXnUP2he+A}M!_;L(=2`9
zG7mGQEMspS!5@reb+
z20j}4gB31f7i^;I9p)e1%yM2h2l10>4i$1dS3YAdvWNZ7N_DbvcHTYE28{&J18fM~
z_RglhQx#?4kiU|7k*5nOtx|_M*moGQ@uEgj?4{ah9*pI`<`q%JSXafHo$z}c-bR9^
zHCgH7Wd?LJ2%XO3M!*}S$)4T@f;|~rX^S?P94cZpARNNU8QJKr7}16HAGE(Q3|bAFC37Q##{Vo3>P-yFr~
zZP7#je5q0_btq}*EL7ntyoBRbVn_*ORb4yZH>b6{X
zS^>h=YVvHVA}%st14BNT=Tej_MQJM71XykCqSV}|{n{@z&u$YrKh&4@mkGP8jsHtR#7cGM<+1DluDKQ8`Q;)nHYzPmB*WuTbXBFNbf{#=g@A
z61bqknLKMUw93_8qTh;rh7l)4kv-wY?(Xu1jJW1mMG@Dkw3BqYKrXIs>FyIYw4=0y
z)B@JK@EaA{T0~kxEFuNjI)@AO0)4xS<&ayP|DyM-0zb_^IH><_#TwS#l7Z>`G<|xt
zLc5z*yHb2RQCCk|I@9#A8+PkEC%aom_ayxGtU3zU*XyipUr!57fv`Jie!I;I-=rC^
zrVDNP^Ic*pzM0()zJVGGlFwa{%4wm{csoZxnf&vf$_qH($htWE(4`s0?U
z+c<+?X?jMq+AG>k6QU_fp6Q>;q95NbMDMj$F1i-<$&KbUtnGS|d9~cHWdAy=QvCGi
zk|L0D(Cg!tq6l3lI2Z6=x)
zG9D6+c9Z~kV7*7jd`*)KRe1m6|E4%Bm+-{~4?3BR>-qSrO^rTj*gIReO{agC4xari
z+HiWH=7-jWo#&TCXPe~D&{jyC;Ek-5LN1E{eN(R&b{ewcLt?85;$~-oBKgIBw|EFa
z@Yk^!&~rLdQ7bB}XWqR+A}Jzn{wMgc7$#K2>>1E2Kzy4dE&h0w!JrVuv*PH2$r2~w
z#Dw?FZ?@MZLT-P8V*58woh%%Si7K^wHLXsv7DR1|fms@5ZUz%L=`HEfE$=fq++3*8
znGY0qID6_E4QB*J9OE-oELeJ=h*Ry*6SBY>
zZ3+qLOcYuXx$x+v;>k}{
z|04V1iZs}aXrIT)dZmCxgg5f~3v65yw;QGl%q(EmJ|=yj^g7+Y8~vSjC}NSMwIV0e
zMcX4IDP;@68$NMMX#(fqB3s-~{kp^Ha2moGXzf)=7gY1^OM%N!gIyE{EYQn0EIzRoFUl;E!_k=i_4K2Mlx
z1N%w^%-WJ2x_#`x`0**la*y}q6Kh%h$2&hF_#m<`lO%1FK9#TIBOj}~(aN4+XrlUe
z_d$jP0qxUqgr=9adsV5M*C{`7NrkE~x$=*t#l4{(9)bu4-BN=~`BC%uWZKO698+`%
zF*-Iv`}%0nIEk(Zur*h>aHo&9_2mkU$eAyTZ3}3}K^ynHH(Y9HOwKam2o0MLa(LumoFESoa9g1O!w>u9{&GCt%F}udu`6%Oy=m9EnsEcfG*19uDk>%?fgd_^
zn;Lvp?>~00e6cAij0Zkwb}XcBl>RSu_9dvg)oez5aQZ!)k2NiJRWv)*mv^O!>w@zq
zOtmHM?h;RmS)3&r71aXNzaS+rRB|1C%&p9*9Yza`P4Ebc>NM^cNR?>6M&m^U`@PZU
zS^^OgmTZw>fk(HYGs)Tz3rrC7Lj(kT^=+EIgsJB%wNhDOnF_5>vqeP4EQu``zms9Zt%mVPOOW%=CLb&aIi3a)zo>H=VrP
zqx;xGW;(Caksz?%=~b2O
zlSFeKLp?P#lj_7E6{uqj1?>t=NS`BERw2zPJ_}twb|tM21|)$Sou#2{ohzCcQB>B(
zU6sz;*}JCKTk`Q=nzfI+ftm2RQpOz_U&;vx5#*2!sq6pajWJ#
zAh_+Afo@DFVj4zJDOmE)j<%bZTL-9B8!2x(+^gS2
z;tP92Og}|e1(UmRg^e7TahcI!dNqC4c(FINsc5CDgOesig=UNx1CRe%A%nUG=N=Y7
zLw(>*E{kTf7F4cQ&oa}17V17fGddyA;sxud4%t+izOOh%2V~)@$7Wj%DlK-zQyKI*
zrv*aKOey;dXh3Ua3!A#{#QMjJSZhN1DYhxXFOQ!N7L7p|x6xO^A_~#(WPImI1
zXL@+aFToe3#gbde8>XvUWfu-8H+b!~A|FWJ%Uh)r0k}wN$RQ(z=Nk@D9f3>(2$$5J
z=wL_8f(&Rz=F1K1i+^n~j*}r@A!VK_Ftxs4(EEZXKS#A(fb>sEKtGC4LuL+fbmlMS
z&tRffcCvP!P#V%bGiJ9QpXU^>Esv!1xV1XeobY(IE(&OH&_!iZp7Z1RO^_$u{9NfqR1<`6g9aE2GIVw8RK0g)e{Y@c^ADh2Zx9dI`L}SUG*4KM!{p
zdWu*C3?72usfREn>SnY)5tX#iEYH+sA8^C%6kfg0O_mv@Mae7rY8FKb{4kI?5rJl|BGi=|M2O!94J3K
zd4Xc{xycg^03LZG2?)ms6)f}10V&V8!mWQi*XFck<*hI>wkNkB@rF3b`X
zl2=;x%O@wEBO(IA7!5nTaVm;*>0TuG2Q0w|cEkghjBuXG1vu#>V^;siLD82~Io`YnqH?suSrK5A(nW9fJ%z&fEWc^F$v$P;h`F!1FpODJ`cDZ-QGDYHU(h?d1N
zq>1Ngpim%Lp#l5?e6qvYLnx$|CQh5n=AsWtA#MAQ1S=SF@=6DAW!N)L2Fn@ZbOg^?
zK)jo=h!m&O*=%6*)18{|7AdtV=>6kvcq7Qes;E-za)q0@n@cZ-2aVjtIL=0w0*oxE
zhw%tn$0NiSocnP2<}`nct#=RPX45G(laI)z_BpQ;jH%1-TyBAm-wz
z2NXr~7*_IC2QZ05LtWniQaNK-?MX^dT|~CqU9V*?1i1s9^9RPv4cCKyPQm<&K*Xaa$J)yJ
zg6g2@@(=07Z^ez=4$37GE7w6a9~AyH
z$Wkiz!>ZU~H}+H&PjpgEmCxY$XphCzMkwI^q?C6Ep2)n%r6*P6i=42Uv7+T@IheOAa?>e))JgL?8NeI#zR#O2ppo=4))8W_a9
z5zDhSWE_fxEO249Z0VrgpP>p)X(}f236}Vk@N!+n#zxm%)gYm1AO4kz=W}K|V_CG|
zcvPe_^#KrB*Sxw@v}@;NDgmBF<*M=u=^me5?^P^LJAh&>P037$wwRpe0%;xZNROsL
zfud%X*cRQU{c`9LhDJ3|s*2hd?I2}hC{e+KUeccbpE`erMQNQ%bknPXbbLkenohpP
zB|r)hIu7TNVtr=Jsd`EC4FBr!K(Ei7%{mPpa^&7)#3O;)w(A#ELg&bVS=h|
zRWv0+>|aoH1)>o!g|QQn81$sAyl|33V$dQ!0+3$RLv)VBaVq|PwB9Ko@#sWN&>0fN
z#?eZVWs42yFqRv5iI|M6RX
z#!KbquC5S;oFu*3Oh2SGy@Cp8rIj^MDk@5kG)RhbPE37tG_;GScVd-kEL+e6(_Dlj
zzR80Fx_}JL$;BuOm3^ZL62e-POV@Na1)7UQ0pq@&sSy;#L)hZFy}!xqF5%4~vT9w9
z)jA1UXh_Y@A^FuF@Hcc@qA0v1M{_q5l+>WxRY;Skkb#|rJ+?zUO6B*Wi_oTY3i*AA
zx+rvtSWxKHg5~CR0fEdJK+Du2PHZ7)YTy+5Lcr@)D|7T+T?k36ES>ZGH9yuG4HEeMDOiC~%8GUH%JOQ^`uu&_XxUCAv;$la-Kwknp
zi(GmF^c80P|4bUDpqE=9x|KCA{QvDg&6j3l`B)r(sl5ks9s*`&f_Nu9DaKNjxFLTX
zE;dCKb8ap9jIE85q!(7{ic%e7IIZxOPAH!>K=Zre?~f6;v;o>eh|1p`P)9*WOXb2%
za&CG?
z>l!ht73Fx~wj+9%C(GS6%Q{NR>!c{znCL%dNi5T+;aPSBu;I#UrQG2A;LqI4jjtn3
z3*x=kphHg|x3H{yVLtbWz!3w5R;n$h{^*oYA?EGtB@twE2T=BjV`u+#A36^@{1Ti{
z1J206fjSuQY+IXEO}HUDUPj#3xX>W5F4ka^f9-ECN~knVS&PGip`G4C{J#`oK%T;b
zofgeiAu4PF1`-1jc4z0b5$c7<2@JooeBGMRAlyWR@j
zy*0HYx6BiPx94Y;FDjY*y_dSaz?B$qNUJD7cSH?$a$nR{=WLw=c3C_pEhil7Sps6b
zk5}LVTQZGMYp-0>)a3^;)D>u=a90#{2{j`zmIKbjT{-AFLXqjaFshGzy2%4C#|Zs2
zA#i1YwK?88e(h(*mNugk0VoXeDb0;WM1|i{nh7=ZviQjrGC%lb*E*ogQ`X^I3vYCE
zTb6N*ssS#z`9eU=odK7lJGT~;=sR`2I25RW0CY)cq(Jg-&zqH4`q?w>QH9)VSLLrc8^zE@~VdO{pyqU8@6*L8au18w^?O`7pDfQukt6-Rk
z+U9Q}(NqdJCt0-KpxZV)-+R!mL7ge$t|2kS)dH1KnyfE2cQLqFV#=WguZr1Z6c-9T
zVgYw&-+0yp?mU(hAuQiLESxqDr3wjs@5eVp@?AzIWhq?P4}x*gxU+fpn1E2eu$5uC
zR51Lr@rqO(F$;y*vC%y88xfnp`3Y#Rz`Bl$_Zw}*Qk1?p%(OPwbQn0WZ4@=`D|NEO
zgNLljzVyJ?+Csghe9-0vp_;mJzFA-N2R_%rc}o6peK$7jZs2*uj|hKL&R0DT?!4T$
z?9P5A711=b(@nfu8W;be1g^ec#zuZ4LNZJD!GNGWjg&xg-SuJhWNc>N@NX&>Vlw=T
zWB}xH4J^+D9*{{nR3Z)E=$2na{bm(Qwh-$Itswi*9GY`PZW>9Y%H8&`Y<3
z;#nq!gASHYXK1lgN2krGyoY4oo$Yp3+3_N*Qx#^WA&T3O+Huw`hCIpgxCEDN8S=?^
zH@DUdJf1OVaidrE;(l{q1|F&GRt<=Fpo=*kJyDGdovl=aAL%G
zJWc}KGkZ%-Oay%*yS@*izT?Y!A{CSXTXGvoqRghF*tV3qi^KkUR6w*FMfOtv?MiBx
zQ<+jL)lse$jHyWLp;k_Xiy#|j%f02uza!ghAxk|s=vaiyWk~?
zlNLC+@bR3IlG9|5}sv1PdI=$lyv2qH&7?0p4?e8&qQZW`%yMbai&yNZ_ITZB}AI
zds%^y)uV%~FZDFG(H(eyxeM)X=57x~DjpOxKbJA3ZUChz$}3lrdSyOGT4Z<>87+Ys
z68?p|Hn+5bDYi{upAXp^UIgzCbJbF=DwW`6y^q3K(!7XS6l%At%YMWeA8_b>)q
zTL)?8J9U4UyK5{=1{zf#`(kp?cSz-8P`7<0oIH9KmDfUDmKiryy_HU=&QtOOoNQ&iU*LKI#(Umi~XNlPw%8
zyMvLtPQPKi$il-jDdohYRbY)nddA({D4EPczHD%59Vu8voe;!dmb*XO8M^d4iKjBz
zXy^t}hch!?X1_ym;2>^AE+KMpJ?k6}*Z@o~9#dLixCCOnn!t!edMvM5?k4sp5LH9E
z_228_?h&(nTO%ZyV)bqwQk!&MFumD1z`lv$0p2}_Jjc^5fspma`g*>_o4r+?RiyE6
z+5zIgM$FXvnbGot`cxwjln)>6P{!`gU%~y0nc%w+loZ=eh=1)kC9auj
z8Qkbak%n*go);BvZftwAx_>^uE)TZ8zE0M+zFwZm07=My{aw_8zpkIlHVJKS1g5{=
zYu9@Pyn9v^T>~;^Y2}4WOb8=b{!*kO2|z<7PG^TC6!yp+^XvC9L(eJm42r3iU(-)5
zhg9?^e#QH~qQGHpZ}t5J!%UB>%$OWL5#N{}dHP}>@|eqfb&=V)CWW1^R$gFBJiDJ9
zJ4QS^5QdEAdhBB}1afV8{*z7vIt#rfw(9}jrWFiT-#aV^s@W&4I5l*k2&ih_r^h&z
zxK!#;MQXGkrDq?K-d99B`%CV1sFke-bYcz89IJEf;Zkg~%XYsM8ailZQJ2nPA>
z2Fi%BofIkQH2aQ-Q
z9rlkQf{vadch5lV%V?pXLuVJri_WN=Zcew
zPCR}jR$;XE53O#8!FG{|SgqTA#{p>^ui+657re=gu51$i;h~T%fs53H@znqlLNmQE
z9g&$zrps{c;eY;RwRLLQbiC&WS&svW^AH4Q8%TA8!p4v2L=2g|0c-ZKf)l&HqX153Yp5
zY^tGF5;9@7^5imM$7f3}f6*p1x8lI!;5*rt!Re>^7mey?+ac)v+h;%2v-sITWQJ}X
z7auD7^D*34UgvR45o(?OvO{%`z7AxNR6PZJe%A1>Dn|Lq=tQlqwr>cvbXZDV|9}?3
zzAs9_%;ItKRXl%770KzKuBUcceVNf`L_ba^lub_<(AU9UTD~!#@h7AvM-6h$rD$OI
zgaiRv#lpbGQyO2rsZar03t_Z@K^96{z}Qs4m-X{rQQtf2g^vm7nUNH*0&$vmQBXnY
zgO4&U*2@s=6&;E$3C5lW(Hooc<1x{eFl%ENSz<$+$t2cd=nSbOd$Vh3O1^N9_NZ8*
z9M8LbXR^_sko;qAkOU$e0;Bmu+j^mGhs)23>;)T;gmET|=>zdU7jNY2XjGw7ZWXMc
zTKm8U;Q%m^ixLSoCjL=j7>4Fc@+t6G>M_vCGke8ylKXRIGn_%nBO_;fnPcFp?9`v}
z1qV@XROmGo+CaTF*H+}naCJBpL-1Sp7)VWOj@0B+6tbAdsB}LaE8+@pQDf+z1Sz-p
zJKh6)nbGHHfwg4OzCOl>@ZG>;=r#Hp%{l`!=o=e2YI3na4J*v*22mKsnGChu>ILyL
zD52RVLQnT_Q)0&7Ywl9FSShU-J*xsV8YQ9BGE|d$DO%$qKUR!OYWoMaEqYVtPq|+G
zZI&I45UzfdY6a&`=D{p36HJUjE(W#UZi-rCgsw37Wc$>%+TlYlL>uJ9m{P(2oLVsN
zNYF5pj!JCwBr*piradD7ZS26}>1BSK8p{K!M(=$%8oB&S`U-AJ$wcTx>L&$#BRmox
zI%=$6&GrmWGBS;_He%1LR_ps>n00|dRN3?}jxJbpIF@7cRFYL}#alQF^YO;uTKv@$
zBDvylul44XPCj&qy97Fiiy8EqsmcSl1K2qVp{O4N+<9v)PE0M0q2MOaLrsU_##w
zmF(-ADQA?FkQQY`>wAUJ_-=S9VdnF26am^zp;VxYi*RsO
zdB!BEb*+Ni7^}CeHFk>01YhbY=yokQ=qe^^P
zY^WUI^`XZuH@pcjZVR=Mb`B0i4
zpPtEJTUVd&$d7vv>#5$VH9}0VV3yWmUa=l{hxk)8*%;>@t5F#76DofZ3n?so?hJK%GtjztuwK*g
zq5TQ{;t>5XRF)|Gx-C`e8MmlE(2tozX~3wRlZw@&a2qT5+uxv9gAk`aCJ|-RzGJCkNVu*L>x(vc$O@}!o7Wd;XPliB
z?)*%ID2@Al{Ha7?Fg7FBsaG4a`-b4G@*pa~fybqC;cXd*59zbfLiZ8!T0*%aUUck0
za{Hp+_=3Q?Egtsk)S?tOEio_a8zp$uFT$L&Mb`VR6aAe{r3IsZU=-Rll;z7X%KM`i
ziMWOS9n)|;6~l)^x=7hJp;yEbVeGt4u@}CeLWi6GPxnDvDYn5^aREQHKG45AO806q
z?wQD|Jm_YdeOSGDOd-hUkkL+|I^(Ykd}pCt?(8=#uD>oa8RdW72Z;X?BRlI0%KzF9
z-Pjg(n`2M^0we6dz<3UDSx>bg8YYJYwbe1%q}w1Or*r_o3pzB3EF^Y}`Q3Q2;6
zbgBJsfCJ*29JwBoqW_A{h7fVDwTh;XCI-8F6}HDXpcR={8shruO2YHA$~QCDKiiU(
z&*b(?b_`7Ki_x@Bw;t*fDr4X`fj>jqS*95DEf%|S*S$ZuZnG18A!h($T%NDT&*N*+
z%lCuA_GsD3$?W#xgTggk@z^;Z=OW2vTwE{j_vh>9_{W1lx1Z1Jr;h(b#@ke)s%1>D
zWcD6^y}tFm>5ohwp(hgVzXhX@4vjx0WO89?|(En$Twwu8xT^S
zkfHs#BFU-uVdxHRN}wzLb}c;cO_OB)6>S@%*YXLG8J6b!HV0w_fq!dr8jw}%vzVO{
z@Eh!s^!5+P-zmT7HNQSNzVt!mczp^Qss%F7t^++0k~yEXs{3L}L&s^5YYIxL$R?=a
z7E0>Ne*1{-WA#67Ol=`Iq}&
zc410vq&tW!EIF&mo%7mwzfTeUY6h`3v}I(qcuM6HeEcspNgADwR3;^%Rm*u;@ap5r
z9{Mo}m?W%krDe!-Up$tKu142egq>EGE;D5q53j5cquF4i3(2YGD^UK!=Ru?XV`x9q
zJw2`|@u74#(c(Z@{#tN|wsZ#4o?R!xS<~p~X?JC#WW$x?qDALYv^u*6Q;0g8grUKBaqAtE
z@pf9Y?#ed>F;m2?A4@U@;xT3!e2rX%9+{z{6c*bio#76tG>t~l!hT=1rIN)gHPZrd
z-QuXT#29UInMz_o+vL!QMMPQ1fcDj{5){It;HduOP9V26+om4LlxO^=^L
z6%mhein_Lil!bZRsIy%X4n#^(joS!PM70oOQx#IVLCkiE8aM2G0GTsihXm|nWz#sL
z*U^rx8FupYsl-=PlU`}$bocXYk_K=0`jgXb3p`{e6AY1+;_5kAn!4G%}QmMYUc>X2z>3
zf3#92YF-+@b%Y9IA=*NEUeg2$9=&tt{?vXLSR&)jG^m_|;YjV+FMn{F(q8mJ6){yb
z6^uir1ZuiJpZ4>}xESaSTm`4?EdwR-dxIjF{4labn$<+Ol9+>})?^yqMQ#O|h#$%H
zitFo9VI=e5M#w@ykvn)k44cZuTI3ax`Yn?FV&tks>r+cy>Vb*g?b4BeELZ0}hk_wZ
z=zsv=bxf+lJrq>`-u_D8(fZc>B?D@F5~n^B);QdH-n|QF-C|0uQ9tsYyF%X>gS?AIqTQ(TYnuk38Mqs$d$q)3o391U%&}~x
zA&fkILQP?<#KG_i9aW2BIoD2LuPjid3gd0Brw~1Q4JhtK)o#9dGN~_j25=p8;4?lw
z^>9HH$Rp<4OrUGg)-$p;GcwbZU8-RqF#HNB#B@OCm=?o4LXNKdihvPv(xfbG&SXHn2aG7?G
z6y==VBrKXiIXrgOAV!tSbM8#MS+Er#g0jX9oLO*QazKA!Uf>^8auB38<%;#P77HhB
z3LfIWj$%`Z#(E(n1z9?6(?S!R9T@HIhZy}orp~cF6L4G7vCWQc
zCvVuXZQC|F>e#kz+qP}nwl%%a%v}5YhUdeor`D>w;AF%7+~_d}z7UuuOOHw^A$A4-
z0e%OWTIn$@Zt-}N74AU#3>i?u%sziQW2c|&IkMqGzQ*+Dd$6K1>+Yk1TT0e!g&#}>
z*hrVO#-7Au07Pp<}*qq&67N`k&>E&EFT0JEk&_P7+@EFj6UpuVIy
zIE-qBW2wL!TTIl7GVF|R4_T^8e=-36ngfNeXR%G_*`}t@astMasd;X0Sc?=>KJ`{h
zecIxv{WF>6ZF2KC^Sp{f9Q7I#(u$xkxsF{JA}tpZdyF(VdR4`Jg-c)_s&HrZnSaRw0{5Kx-3l-ObuB!`7pO-O~zaH
zI7v8i*AaAG0fO(`F&I_aOX4=S;HMi5r+M6?Y$>r3z45w4j)L!+rojbIaE1dGQX#!)
zhu**TrlvCNsh4;BTaHzTVP5ne(n#TZX3LIEm-i@&b`Q;RI^6C{Qn@Zgu*(sh9;4Xw
zBJK`wrdu2{*HV5zWf{?vE@K>%xVGmF&BinYlr)zjR!*5kGh?R9%-}3WRV$Y^t)Pak
z13g9`Lp_o5yTEw^rKy@eaL{EhA8$f!q4naqC2?W1P=T+EG^WUx~R~k6P*L$;->MiL@ns##~ss{vz?@
zM&$Q&#Vg3}=c4?mnnZ%?F+>SHsdu_}+OsC8l`{2-g8-REX;2B`*`Nw
z`m-C*ehQWN$#D0x;W&cxI46&p^}pZY1UoKdzdKf^Bf6{>?#}!)aCXi4bVS0)88jus
z1^a>HRgkt)O&DO3iXU2hvVXqMpEh2u-$GoSBuk7mW4CH=MNa|9z~FP~xWR^pvq?2n
zd?gMY2G>tGkfOWzl^hKT343nbybiK!XyAEAkE?>!Fu%hhQOk@=gPsd&ZHB*gqci7;
zM)7&KbX|fY?2E~WZRCYZZC504oP;5+`EFlV3ylDj|bNDUS
z`=u&S$D0;yiETG$h{cPNp2R5wWNsQ2ugu-T&@|gCNY?>?D&oV3wmT~+GcyhwTU)N{
zr^qITCx@8^4j4ZWf*~W)(_t|~SP(A!6F^CSxM}0#O1J9B*1`m>_xc`<+K&{{E;^s)
zQpFe0CoNB3l3%9CI5o@>HkRMHjtDfCj)bnF?TY=_4#=&yR-kOBDaUFPuc>=)j}%dx7*6i$}Mb^W^+6n09Rc3mJAT?;J2CY)p^gk@o&27
zZuOEBh3S0(^0@l!@DMlHP`JfcZ1pl!A05}gxzy1oes^A?sKJ4J7Xj17W-PyF7IKZ>
zZH#2>NWBA_Oo8ABVz@ajV`d9Kwsj<&aBo+eb%7^gV+QlBT#3Hi3`GeBGK(9F4jiAhXQr@>qTj7o&k+g{NYi%#FwKv%`y1
z3xd8~RCUt_PT#d{a9BXYs2wwAOCv-H0c1~^a=y)qhRZjhF6-39epC%*U!mZ@{mC}!6KJ{w{=IRbqq>3L+)x5f1;OZrD&TtA`&9N|$TeoxW;)zl
z!g1aCBq~$$t^Zhq%;i^k4A7q&v*#GxHmVOvVX;oFt_h%%iBJe3J0~hV(%L@8RDK%W
zT=51sr{FQJ)Mk*>pL7e$Tf^EI*qcmSYtkQ}?2Z|2)c0@uVQ#T&yOHT4)HxizFw
zFm%FCG#ih+!D#l$K5>Jr*A7zJqHdBZiffQ^R*vBMEfP^Rk-nd;q{&;}EJUsO1jW#L`%dF({g#r(<
zB6s`Z3XA=Wef)Y*<8}m@y52NSU_E+zze1F!+Ximp|2OAKrcLwkReSGv&3$aiHxYm}Z|Nj2ZJ+tE4rfEbn*~v#(b=g50pVg!#WhWb|VK
zJO%P3-4fGmATQ1*IM-Y)7#@tCc~!ctB8WdD+1Zh1B%aJzCvX%uZ&!!-u@5?Zf}Ex=
zp2mm@b{|{ghETNTY}p6`Jwj%HItq#)*A(@4SA_bf8YtSGRIa3V-SnNNRv;b(`f%iB
zSAwp~`X~N5_J@UmGc7J%<7p`9_6b+=shB01Q}p7KQs(XL&v2oKr&f*(15&@$0;m%)Bw7+<(TTN!-sLR)qe-;mQV&yI9jd8c{
z#{C2ZK|*n8`oAQ>|IRKpfFHvDo?bR~8*E75*?JB`voeyI>hWY6fA&E(VnIOLo7H_?
zdS{^`rGNJuOQ4sDPS5^y|326GvrjtU`zC}IK5}r^))ImpGh$Ds(P_Mf!X~{fNIyWC
zNQ7An9R&5SQHyC*3!#TxkF(S0j?2u-`;T*@M6&y!UlB8MckIMl=JH2H?XJ{Ha8C@7
zPurRq#|i_?xsz77T&J%D(&qIoeieGRh79wPl5u>#iV4
z+7U9kvIGN^mJU6yF^yp&DHQ0j
z@(!xOx8qhn{-M6V1Gw
z7mUpVkSc!??n!&5PS(cwpTLL}_T|f87~s
z8ue$XfiM#2*tr*$M=coj6T~ux!tuLMP%^EsOo>>8{lFqj;QcVBwQwtWXl6oS?s2?`
zy~DIzXTYQ{K#z@}1uB5WbhY;ZsLw%A(KJ7z2xJ<_t$K2wm3zl6f(lArv?hj7lMEYyk8PYIihX1Iv;$)3H;`F*2GF0
zz8NjCE`I0f*eSuXm6Xl8wigZYr5tXpV$*l`jPo!Y#l%JR2yzela9=1=-N#t(p9nws
zA#14Nh%ktII0BXB=Rbr&v+dq6QT!2FVEDuP)I10Y>}
z9{yBh?#+7>NH#zYIMt-jQ$&gx-1jMaD>{wM1DG!}&|=Sh~Yws^WCQwJF)q(EOMI>r{j*$g@8C*$JDDhO>)fD6ty)m)eWTn|e#IEdin_70*>_sG}6|);EvPmCP8I8S|d8<-hIYt1$9;lFvUa52~p%du4EE_(N$YYO}&v
zGGxh%_pp{532OE{hCtF{6@JHieYj77V$8DU&(&(hXw|KJ3=SW3L2AajvN}Z`U%b0C
zTsZKBzt0k9CAdO62+#cCY20F!aW$Qkp2?V$j9!h}taKXEeITd-n8iZ)HpYXw&16vxCbc_EG?Y{
zmSsL2S;onVH{xU(d@|MfN5Bl{Z?-@6-s{etyv&H~5V_}HAU~?p)D98gK?n^v-a3zw
zs*w?D4TfG07Ptk{O{B`ygMHHI=3QT7EzO`wD)cQI#^*W=>O`56{FKAWWVh#9LRJY=
zMWCe<>clT>TOUo+F`#ZoXtD^^nrh4?wfyeWLi8qt(Dka*L*LSc@O4PxE!D*}8qKUO
zdRS$)i-Ih`$qL88n!}o2ox=^n*Kkde0saxYc&$d4c|ASG@
z{4pdk;%=8(ly68WqDzwt+rEH)aOaDvEjFj45YmZdgF4ZCT&+d4YsaPL7LV*49+i3}
z*L9?_Gwz@yRJe44u`M{UYz74+Ssmz9XL!bgBZV|D@fx)K9_-CkK{uO#i0)8&>T)BHfP@ZtdDRhZ2Ttr1(+6cX%8
z!#LX{!Q0`(Ke*E~$u7D_%!Vn-o%?Dpv4|h|YVdC+9W$=1R)jh5jV7MPr3$z%(L5hB
znyZI@6p|*p*omChcQ!F9?AQ?|9fdD%wDbD57}%&#bo}UtcPJHYPQ)yK?AMcH-4_fC
zW3L?4BlK!zT-#Fi+WQZPzO_l|t37#oN7f&CXP>q>?G&a`CGhsZuvw3MWUMz;Y`pD2?9qAO&!*99Qb}eQ!_24i$Y&shMCKxIhC`Uu)qz_QxB~$H=Oi?wf%k*b
z`%}^L)c+&RJ2yD3-{K_u^?pz233G!4EtNpdUGs0jxA&;G^uGCY{$KZDRE(L`H5Ukc
z-TS}>cl$mPLusU{`$!%7Th!=XiQS;S8DQOq{CI*(19>=il}j40SQ|#A9kAu
zEjiE8+unbFhQD6NcwRsEmbNKg4iG^Gy5z*bEn}=o{Q!&sQ}?43L*jX}uT0_4QubtF
z1F>=%aP&G4lVEWDFLKYD1iS>TkOVPXTM7TpP&#V1Q?%XYXVX?5i!n`PqKK6JsbcE&
zF~l{)ya&joIBOU6ZpkQY!N2|8z>~5t6s7%PQFiGq7#|#Q1BAR{LPx0<_Sy9}&=pX>FIqea2*cTPzdq9OQ7kNOSt#=DZINXB_5dOJ86__g
zOo90b6EB_}Hmy(qf3)cc=K#H1un@`RBK;Za$s}y_b9QIIKTjxhM?b+A2
zRZ1mv_(Avk$enHxV*$>bqGTB?Rg(B`MPhj0Ebz5?ttJJfup)uU28#9o!XT!|?x;Dh
zgs4kgO_X5c5pH5cjv$D6;4~;WUFRxP`n!tC1l*vorDyoq`By0p3EABoUGhR;krS?p
zd70E?Uz}e(>wMp}o|JA-?1qGuPIm#WMedZ_yLgd|a0gQS!Mvlbn%u%WmOy-$-#QOk
zNwVw8b+?pa8u1@Ve0e?Jx2RBKn;bATazl`1ZozmCe0yuZ=_hk!65NXRX}?ewW6~l6
zW*%gWFk-d9=B}m%e@zPG@*3P)&*IIdy949?ldM!K
zTY86coK!y0w)v?5{EbK_V&rR<>poDJIY}KVdyc;=A%3zNc89x4D4h539wi>F#q&u?
zyswpUW67Hlfuenqc`OQ($8904Ne=W&(bHoj9NV|)(L8@4!wxhYzzz(45gy#8vTYN
zvsUCi2%o8mBd74ok;&~%dTQoAG)>6(0Gq;87u*!UOI7duG$zCif}soU
zi4`as&k!C?1{c|-};6M=$XqpSxJ2W?+EYmCW#z45JrtX#k+TdC3xsOIFGDR
z7$~9MPz3?JVN|ExRMj%jqw~Afe(MNA`3OSKhJ*)Tw#g*xfj91aC-EE`dA}4h7;G
zKd>sCbRJ{@$3FJuWM4cf@=e%p;mww-AIYv)FJ`isMocDk%*nAkZ}B+
ze{YGh!ki%S+LNaY#59HK7^o7f6wA>w6~a^KL@HZ_O1S*YbSsN)()g=1*oYZ%>oUWX
zQ2=!t%rR7P$#&%FGD@@UJ^r#x*5zDm`)fFEV`4X;M08LHxBj@T_$#CoMz6GG*ys~5
zgV9BYg&<83r5WiACz-Mj?{TwNtf;4=;*q-`C!lXdH?<7jDVic1V*^eocpH;}WXzze
z4tuI4)Z(PwRvGN-+2HzWar1P2hrj+i#~=$8m<%9z#T8Gcg;#=+6%p)(&ERflRIf7Zbg;mn
zS?&)4IHF^e4|2+Y@|=b@{F*|O4;picB-(JWEhu%ndv2FJx9|CtzevuE)-MHM5T%dd
z9z3zAA>zJ)Yci=hPMUp~cB|*dnd|CvT|+q_CUm)j
zN@mW){ny2`jhixnZco^jiTwI^T{&^6y+Jqmou(
zszty-x@ErlEOX9w_&JW0Ko`@(+HI49HiFfVNfv8jX!9-j@`IH_Fh8r?LzRY8;V>+>
z&4J+D7;p}L48*!Qe?dLj3LUIi9(eNr-WE11Hr$}rY*SbRXf+-qPn#Au%|X?y=W`)=
z*SJFx#~z6#3YuQyp8v?@D8NzGleDb@_i6QaiWW+gpD
z{vM%d1&=2yMwxz#y^S#I;Tj2aa$+?~8%ttHfDEtIID#l8LC=2o<&&JUl8qux=_}wF
zxk-{La$O#L>X*W>{0;xJs`;=I7;^f9JdSDh;G1O>1>-S9SG*UwO?yP?x)Zt%U41#e
ziZGkzZS1iIZb)%Soh)End2z7PoLsdvF-jS_-U3N9qh|G-un6;(@a8>|JUO9QttJLy
zXfSi$FUbh6fXbnqk4H?%0+;XC0NZZ-#;K7dH!74lpd}1;
zCw;}>yhH^*swp$xbF;WYmn6zEty`nSutFqj*uEyPRliY=a{(^ITS+Ivnd(zfnV()b
zIWFMZHY4B6oRomlfH~SdyN>{&?Ryy?nJP-NUk@AQaKg=kU|Ba`bCG9zo+In6v>moJ
z$%+>yk&*f}I5s~)+M@3wPq~hY^ig~i_w}bmC|UEGP$sDjd)gE=0_WJ{{)0vJkEhgC
zX-IL}7IMU6h!m8d*xh!(zKF9z!fU-p9RmV>txeY0@kP|KI@6o7(H^l7SP%8r!}4AA
zb(tdhG9U3f=z|RzoL2{W+Sj4+6lHwzSn+YVJZ&X^D|yqr=8B)?7-_;UN-?9mG#h+d
z8LiDn+^v$R#T*Q|ZvB{ZJaday(}Z949?!BZWpKt>chc{4to@${$H=B$5aFnU&lV?G
zr&E9i9l9_&2oQb5j~gHOLfvM>($hH%-Ax~0adxxD%WYImmwGhQrWn2SS1C*vt5z#L
zR8Sby&-A)JHK^ex*^zhkv-QYPY;RZ_jia-Eo`=te?Pcbf@ubr3t8-sdjs1Oi9YvrJ
zjBO*=$43q5?Yf##yv1t=fUUZ-CDb`;YdMR#>Tyc8wYl;&lX|tFY?J^m9(%l3v(0sb
zC919$&Cu#EhGr#W8eTAz{0|jc`^twpbnW!BNs9_*r4!-0=5eCm9Ql)U$-d+n=X$IK
z$sa__c1wRGgCWfOSeh^{h+bdaQH=Ff(vP*Px7@%XNfHn4VV43`XoN2?Ljm?AeFi>$
zYbY9GxCKfeo5%Rhtf>|kY|O^UnlN^A*a`uz9vOlK`{Ad~P?jnB`KtkC;GM;mcu_Qk
z5IS65?*|xO*8K4TaWb=tCkg&+XSh?ue-sxB4Rs8uC#0y#liL95WQt_@ZR@AiqRHYN
z!nGJ$%K;qNqb|1PyhH~Axyx&iB35!$x*F?^EQsE9a~@UnhA!&ZrEi4iKv$a|@b+cm
z@;ziF*avsAlf=5{m6|&;!LYQR14EW>n(gVwAJ`om5@5#oNNGvn@PqlSk}vRiGDc^r
zgaP4m7DEznJrMn!$LqN-2zkg(%MT53sPoXQVBK{hYwMIXdRJxa<2E~Xr_nc!
zmp_)Tw(ASkbcI&p=3E=VvR)}Gxg(+>aczKV6+F<{U=vT)vPww5?v4X({O!N8_!Y=Dl+w@&PLU_#2Pf{uC$pBH;rQRBmSnC{NnLZd;mhJ
z&ppHQsU_P671SK%*;MuIr(WD5XNk%D8rkr{P+hxY-h$EioNbQI
z6S_NJUG#_ad0n=XX19kX&2ofeCCY)^#@kEDZyii?d28#JDakN2y0+?_&gwQft!j$v
zQ{Hmh=F(CjWK%7gWa#@@Ur~v)OTPt2r_jwLW(_9nqU412;9xze*r|OjM=>Fei?$SL
zA#?YhxThrcx<&jufOyL2(W`REhCy7bK8uR*uH#9S^n|31f;M=Wl!=C&m5p4lL!dHg
z{$XobR(9^EhHG`{M)Rp$Cl88U%MS|86ZHyfU=Kl<&Gn!Z%UXW>uP_Z?{j*6}K_%9A
zLs)L5j3e@m&{FjOo+DDurCL_xqxI!nS3}JM-!A
zIyd@LY+Af9$pWW&xaOMmt;Gv~6TLNGc4RUJRH|`Xt4k26IEapY*=o4Mo-&T;8AVHr
zFo3Qb#T2Ve2Jp7|7LfNCG0oMWS--
ztEGu(6vrgag`cN@9g@2brthytpLQW@W(b~9Kv@O$s{pGHHD+9sYMDFFid4PTO31>D
zv>Q6(>hgiYPvbF4DB6xx+us$o-UZn|>$(WVha?0R{bG%9oWlsiBr_Mc=LO&a4$W&3
z(_{zM@J;)KbK&+wKN?0E43^HUGnTlj+FI3my~{T`YCu-2iC8o51{cX3dL1V8I!Y%A
zk86J%RD%*4Wj72mG%b$c)c%p)+1pxiyS<;fz#Esc?6kjEL4-@D%PHsxxh}6lAI>*D
zEJ<9_rm5t!y`+k!ttZnx|?EuQ;#NX5lN6W1%h$0lPo0-M#C6Z
z7`q-xy*DHL68Ek5O>3wrAM
z_*kz#Q`TSUEHN00y=0Ot(%hlmJiT&u>D@MNTYLF3dS*`SY?B7cU}_NPA-P3nSN@20y0r
zq>Wv-xw(1tERez&+oNno94&<9x-L*YKG2)m?N_YBuLOX2zdB=#v_&F^uyiS5Z3HZH
z9`vW$b)-5iht%PD0^RIp!X$3Z~l{g>|bZ#o=71>v*k57jnsz#Z;5vNH(eSK*yr70)gF
z3}=6X8{dr&d}k`yrZZsYMh?@)4+18%f4#e{@Vq^n*z4P7SCiUtTdKR~9
zDTdfZ6wLE2qvqt|b=krDad>3g*ekz~id{BsK`Z>4rGR6?y4)j#K#62Z@+D2)u`6uGqRC
zBLcN0;WLTO#n7n5Z}a@wN<^r
zti4E{=!2iEM03@ww%RH0^gTFWrS0goopUyk+2cScxfM8LY~3_PZvTh;sGk?Hp({S;
zrjQB&dHaId8_Mq&q;usTH~WQ$%)zTYQtz4qyZE4e{?ThjO@`|ZN%312HINmrxx@@V
z!jKJQv-6>P)9y$D#~y184gQDzk%Jl1D*(rv1a}`ZUIuTSiN3l?J>I&Sc%@q_Z%6(UZ6O~Iz0>tE;Lh9&pFIH$6u>_8UX~X)r
zFuA)rtS=wJ4L=uuh0YuA>(0)pup-nS!%%h#
zR-iw_vhXY{V&XEE*CwW#OD^w1Uz`a!xujI{8!=E6tq2A;AvsoyCn1K>(>jVo{lh2tJ{{bTMLE_FQzQyA!7fCXtqV;OQSXX8adrrfK4jm!U`(7z0!7%`a@!
zqWa4PqU8C_O%4z-WDRh_ULf>q+XRS#_4YY>;+d{U9+pa1tA=m`se>{99sb-X>fbTq
zEn<4HySq9iiq~+3NaOXXx0DZ`4RX@1M9o6=LhVl0jhir6OK__4x56UBkxO){t*fqC
zU+fA1htP%)l9npy;`yoDD&{U2@302R5WZi}NqTg=rkcVj|Gq~E=ajWw+QkqpU-c8H
z+)bqw>}Ja3ko7V+yl`@;@C4$Rb1tKjsfE2gd+aZe5P`{@Jz(UOEz`?*IK>|X&4HBm
z@!6aS3vcf2z>epvoNSU~c_lqM9=7d-!<8?5V>5;5xZ8`r%Ir{c;#hU8xx`pA**Tu(
z_b_c^>DUhS>U#rJ@VOnLO?@6RwahrqPT{))^wAxkeXKZ;-kcL(6tl#@vfeSQeYJyZ
zn|RdmHN(cabzSP?NCuCP#h{XGF^gwE_{zO5O}gUfAm*){8&a&A8s0N2gybT09}3C@
zXq$XL@waleb0uoW3#LlUKtSF)RU!8ZBJ5O-P;Ni+fyToe^fP6&1)78sy{`naNBl+o
z|H|3^Hr_yD<~uW@q=J7401lL@;f`M(o(mW^9Ps0T^k!1SwBoYUk6ro~M7q}j@o)vK
z<-n7)N^|_EEJ-GLL3+2C{kv+AC7}Q4M9!36qPFM16cMw(?M$KRJEM`x<~&@ow3koq
zJTJXRpY5tDP#V)vtWWs;=%Is$WOMy=b^Z7Ram9}7+)5-YZf917M8zrN4xy3w5p59D
zbMHQAU-P9XVKU#4?Y!3Keh2*Ld2dGl<1eczd|B?KB16Q6jgufFS5~Kh+Bf6|HCf|r
z%})K6&xY3tKB-Ye3&eoR#L+WZ7na5eYw|>yxe3b6AL(PFjjbpZo^zhbDKzfVKCgs6
zct`{7#*EL$$gSOr6!+^?N4_WxC;JF1UPD)bf~MPMly`ma&86u|J9jjtjVPuXUg61d
z;Kqob?f{8fxPdHl>~3dBcY+LKIlUq;s%)7KSd}0Fb?&fYjj&{$o*il+Uj*@#ekR73
zuwv}hhN-ijvV1=4->WRm0aq$Ro0(}5;pg|x%wnk6+T$rsfsaq#TQ_^FAm7^-5coGM
z=&zVoKtJ^~{J%w{THWg1?h$QITaijA{-a2s?CtEV^<3Bhv4sCikNmUq7sTH`zQZ}I
zqHR`Vhb0#3uP}5F!!e*Wg(t46R4dAGXp>oVgQece@sV(LP9|)|3|gWL-W25GxHe%odoJY
zb4-xJsKcBZIcjV|$;0?aDq=7;3ql!4uDYaJJVqgLh}r+Wa@xjFaBP~%aU;3MW-@Kd
zIVPpLkE~td>e=%gH5Q%*Y>I_qm653t*&jrW7!6$`bB7ZX8cp{jN_WOD_Bjrv3S&^H
z8ABWHRycF|{smWAf{g-aPWt$I!bS@6rqA`cEdX(oOhDi5xnJ-FrLL29X7(9}TbE~T
zl`R$18oRqnz5K~wutDp?GLMRrYUZCJ94Rek#78-vNlBI!Po>H<{6NYr0OQmp8%iS*
z{==j9e)roTp36|Fc%3}4e{tX94-M9}ru-j;gZ^XYs7Yi_QC&KWw__WDH6oZNaR+ni
zkcFVtKjR^pP=`rIN!B9Paj+T5yyrC!qy-xImp27TbB3-Vf8YIj<>IjHSgzNdk7{0C
z__FHEnG(0Z$Mt-8DP3=yXk`)>J)Zrhoc@mH+^grMiDe#w8K>8Ah=nipi<@
ztY0s`+>g`e@!Q104tlvp5DYvDNHBgLspzf9Fi21tedEKzLqW>p8C9*aQ0;~R-@w1zmnX8?dStt7z&JS~jd&jnPHDm?+Uv2|Fz*~FGi9qL6T($9>pe^b
z1a+o?EQSGTQrX~c+}#6}U!Y$ESP}`u2><$ULqbxUjG3Tpo9G|W{#8?liqovd#yDH|
zkf~jQN`*k-5)Mk|-|ht5VKU>Kc1Xky@(HvD$)tvZPopE@Ru>?#>1{)e_-TS#sK1U{
z1S#^C?6;9m&8QfAz<>)Q7=V?w&1>7}Xp8P;(uV)ivu34qu4S;^RSJ;yiU3r4a@P-G
zz|KMsK0ytHYqFF=P@8uecclu2)#rQzLIbrZaeN(^vRprHrYe6bu@U4P{
zV$zuY)L#_B^>4mLafq<1RY1(QPsQBTqH#2k+{^bXk48aF-er#AdRwlsy}`%6fj`0T
z*rt+e)-50Lct?*+%kF0V#fDP&iHE{MUNL`V&3T|m1ta!?Y9(%uv=c?oKvNo}A>ZG{)vcof*cFzfmRUk2WD)Ve$cABkWm
z!2WetAfJr#n0<}cbn9IrWHulp*MfvUt*EA>S51i&<4M9lwutZ3Q&6s00-J+x47<_^
zL#@atmCEylbn2RgYV9TNOoO0AblgFimN(PS5oZT2r95pCWP)97
zGkuFzL0k7Oyt%ZE`l9szmH%pYZ;>=kjL|cZ(vJKudu7n21F0_h808DT
zBZ}u|uM%p_G}s@w?(&gK#^ukWu4Q@mPT60M!BUa#!5{TYD}lC>B!EA<63p1GaH@AQ
zOsuVInhZ|U{MQPn6V``(ZoF!FIEk8esp?%Ow=aGQ&Zs(@R8fg9qof=%&MZ7!&Rm{_
zY(Y5CT3(!(ivb@efNg~ul+R18Qva}aY10j%-7ry`n{&~vovxqlp^WE!ZTBvh^Cd0g
zs3C{dkySRpmu;vjuUQCH_Nmr%wqO3jf*LCCV>E9^0M$k8I3kX1eNU*47)IYn*;j}H
zh_xwSTE)@7V1ph32lNsz$P8-s6cnQa?=yG5c?y=xOw1Yl=vwo*v`8Ap*~Vc%UFk#s
z{=QROiLo8#RtPqwt;qHm!?&7ko0*ZhGIIse<(aP1R?6k7DFRYC7{feUpZopLc+G8<
zZY@^tVr^~-Xg{Wd$zdA5YI+hQCErTNSjq#Mg0@*jTr!;+p#uhd;LqxkX^pF!x6#tE
z=83GgDXs6IY38i5k{2?u+pB$veO*n7o!V0B>h-g|2{WDU_8jZDdo#@nKsDwxh^>eY
z3h%*S0CG;u+Wx9ctI|LqEe*R|9p`*ESIxuInCLuX7k>$=ln{*8H_k??*lAYRmTjya
z{*C+Tz`9S;>ZnAhT}LTKtNG4+-`s@^SER77%XYF_fZ>U{;ZJuCpzDO2W^+B(d3FM{
z)U}2eXjg_o9_-}>v;~LKEttfdytdD~l-(JL{Z!R-PGZ8;3#}xaQXmrRcOFE!ijKW|eM_C!^Hw*XRC4)e4E_7XnN{I6N4{j4{bXh}=6
zFgJ0~!+nc@(lon42||3HQ;H$e!4%WEqIhAsayyR760vVxvs92XfM{8|t7Be^2j-C%hFFVp-tw*=HUWZm
zu)Q=U#ZU)X4*p;9{#>pe)K$y1k+ry#huWzDAkQ9-h&l>tp$e%ME8jky2nk%0w{t=X
zzJg_N2Omf0i9C(kf%QFmnMM8!iL(y&T_uft1nG$hQf{c%3J)4y=FMgIO2O~%4HXAY
zTx?2rvS=f_TgGLsE9>992;pcxum_n%Oy$Blm;nWDx<{~drW^OqlcU30_^K%-11Y;j
z7~r*mIyFR$p$VoOE7=SQaER4@MIy++Ygp6s;|`#@RPsPFiNm=Ud2o=!STdech1JH!
z##zCPr5uY-(#=IC%hiP%gtzHhbW;)dI`PHTKk`AMo-eh@^`yF%Im=ScJpucvImLh5
zN99fgWI9u!;_nH)r<+7E4APGf_>f`P1tzN4(e#G_P<(;F8)yW|msKJWG9jQzoq?4i
zYl%AMmlC}*hrHzLM3Af?_a#b&3<4PfX(
zSDvNFiH)w<^dovC!s1ARQROBeD>KrwM-LmjuMiG>d*9Kwvq;E*2gm%}Uy3n%gBUR_wbdpB51jl{CoBZ%RijUzjPCPfFS>qiy@Eecg!&j`9*5;DhE
zoq_0@S6esht)DTCR}qd
zx=aFZMmtfyWn*!3dOdGEo`v_wO8=}NJ#ojgb}LVdWw&5A^uFk*A>+fKwf=X5qn%Ft
zNMfTrGxXdrWqLxW)CVG3B2zr7QDF7!!qQh)!Oxv_tixaPJ$Tch=^VsaP&pNZn=uxF$+)l-Ws
z8lq~F9?a{PL7hU80lg(M1IlvW-X4sKY~QiMNx&xHw#g1_6x
z7Xs1BwolP4mqW@31l*Obv@2_EaJ?d=@+EqcTL<%TH>0@c7St-fmW;^qfEx>DjZJWoUfm)sC#y6ndvRD8
zv{xtA#^7-9geQ8HBK~#z5+2kz-9k68?wGke=Dh4qYBedNeT4oq>($&_Rgg`=v6X@t
zUTbCR(MMF+S5dHu<}tkR)T&kMczUnRB@R6S+pmOwG_mfA(>c$y0J)n#e~x_tZHN`G
zqw|i1YfOx+&t27uD_A7M)F+W06(!4m;4_S*r0)Mf*Wu=GDhBtj)-~t154VpmScY+5
zL|7ePov|0KQy?D8vBZak4VEHakp%wH@sO;?U>j*ERKWxllTNtf6T^@oo;om@iDddd
ziYparssRB5!c7U{bRW5w78`@r3NDzSsZeCSp;_8$wkpdVh;ykNd#YfhuQfyv-m*zv
zFTh_wd?=w!Ctx)Zcjfb5frv@ba~q6>mc}quk4<_p4CK101ULmjT_HptfmX$tD1l#Y
zm6Q$^Wkn`Kx)OiQ8|SIFVTQ3vj$d%CTcS2oo&1hVLrZb*cF$p3k`aumSQjmRmt->O
z;`2*lcVVBfyq7U%J)@$j849BtwdD1dg1lbrjs9S*O*b`;HNGR3Fd!{=efaz=swlr!
ze!o^(@EpY~M%x8E@H}C1GUn`e2~P~Mc#P#AS~lnxzmWcVs6VVC<+$Ra>-GYy8rO9+
ztU1@@^kMTr8$YTC?%o?kdlR*0NNRc--)svvuzaJ;{X6-_h_J_3-D&$vSGMWGS;Ba?
zTqMV1*u>B&R`+ud%V4R7?Ge>Xmz8d1X4yu=X?e7b`8@BnwyyoWj6k~qVe>3JV3qOj
zgXuB@=eNUFiFNl$6V$Yfd;b;4sswn!UrsM_Trrq3mfrkg8>i>34f}XM?^5Z(v-{f!
zbIaCH`}@lEDki&Iy)>bQSUJNw0&2sEeQ42P1&7de8+yHEX=n^9WMwXDC`t0*$8^BB
zTITi~v~qm}0x#d$~KMas&q@F!WB#wkS)|C`=@bT;O=HFqM@Fwg`0)Te5ZTvLGt!U{_xKOw9hv@Ag<+Fm
z^2igaCl(mD>NC)h2I+j?Vt!uo2EzdbiFYaq3;r`0ra-E^7=z;5716`U-lKvUV=dva
zfzYbQ^+OWHNk_gZ3B^OKyb70Y1IEG~a(0+2r^3k1_zs%uVK5#(B?l0BZI
z7%X<%_qnue3!y$UCApHlr|E
zQup|AP1-O{I>d{^1Lbe5EtiH%LgRxWI0HZ?>0NIKnAs0*!6s5XFR5*ti#O
zdJT*ENn5fMC*&?@5#os02*kuhIY=E@=Ww$`B=Y@&d+cB9nkdO5bImV=PU37ba3CKq
zf!&hPMx8dkh|&;@M`G-qqlvlG3-~AuTDCil2Y^Mch#7>Ie@rWYoS!hy
zuUMbNh9x&$)5&gxG)Bx=Nkq8Vj2VQD6&)1e=!OGh0
zw`o@cd4k-E!c+_in`zoGt;dN7DHQ8`qIr?VWQrK4WmS1r^NO%S@4qm`^Kwxs#?*QZ
z`D)mSi`9&>v>_AeJWVqA7e!V>!VIg2^rhAjHXNlq$#;0%-6-RT6IxR&e$bSwcrsThtd0Th2tQ#5yB{ekB
zb$**h%)At$&Jqa2hvx+4-cO^98fd!p*+(XyQxHr0L&I(
z3*lf8yNYsk8BdM2#mf_qmw&OjLbHeApN|)H8$_w3R~OR4WPB*nOlEO)yAH5E?wec+
z2Ie_(%&Oq2w}lngGke13mc#>!H$bdZcphPEg=I?4;n2Ut#@Pz?I6)u|Q8;E;Qch*`
z8N39yrG(Mn3u9X{mshnoDd7&X9dO28wT)G{CJb53HM*kFE`1WTQTikYK!?-3Q^lsZ
zr!1*rL?8#YwqKuVz*e30)4O5?GjF4Q@@dvlM5zz#+>JCVp?0HaOhQOl9YUd!mWRN1S0M}s8S|CBw9@1OQw_#h
zZHYRyaL$V~1&^yxzBH)lAPy%tFzo8#TMjHV7iT5dmfWD~c2gIuS*6hqN}hR*>BwlB
zxst)Q+qGNc_zvf$GZoP)r
z-vX?j@5oMqGoxouXaua*qsp9kNSeAPmz1u+pz8IcY!_^acj*|)!Ktoe_$y@7ip%fF
zqd&TrakK~4(t*Od!U`KpS8!%+-RIzGIWUT+!MthQ@Y@(X8dmS`^mH#=VwZLbqsRt0
ztL;Bh#^=sTdD7CbPG+acyc!G1RGlEE<=wrb37^V6cwfNA&RGbz5OGr6E!kXppM*hm
zJ;AA23EesBl}8#ojF~0^etvBZ>n&scGwW*eN9&0nz5DX;UA=x&HSzNl>DEKNW?R;s
zmaS`G_PUIT4Mahf$Qz;Z>YN3azp`h%wiXV~^KbI!pRP8>5h%dP2Y%nQ!P_R|mS@}j
zSWu++EsS%T#%L4yatvG>vfi1SG3$tT1==Rv@;0u#{+dCU8XUkD`nZNd#IMre;%>f7
zGQ61skrqV+Y-e~*Q_it?T`Cc_jl+5m&7hs7G|9_yohJzGfc}~DNqGUC!Bu|f
zdeRZjYQ2(UpupFBtYXS1Lg5qg^#@OqQJ%fD%7UQlnHQ9T+-EQ)_0yD{Ww8rMFdQq{
zK~mm5&Y2y_B22T~5jxxjbkqWL;{ndt3&Q&_pdw{BHQ%y5wA!g=l;%#Sz?BBM_IbOK
zO_f)$T)9{7`mU7WoKW=R{M%JIY}N(m|%9F4w<>#?Q(XjV;MyTPkqmj3&OGg)Z>J
zvxIi(Z@j7~)ynZ@dwDnBMcV_nW7xJ{d)fbG87nZhdiVD__P3jL4~r^Vc9Qe7yzwYg
z#zublQa3B|X(YopUd~7Z`RpGJ8;`O@z%Ls(WA2pSz9^$ZLqH)XPWps3JGBq{lP9KX100000
z00RH*eQ9^xHnQmV{0c@lSMIhYnfoG-^PXhOiO=zNEGOgKnG6k*pu{)5y=J#9O^*Nj
ztpY$2BtbT*#glW-brwtAAZkTbp$aHIe*9SbaGB-We3@)#REw934apX{mTnjECR^f9
z%eD4C%QtJf*(^=R(cxTm^Cv(c+3cPBdSLFcg3sKdigm^D9rd}sR8!ec*D6x7x6Mdc9cXlD1t|p40J+2(;2eoxP+h`YA>>N
zz5%XkmU$RmZ|Gs3{fj2rv*moXrg=`2YF-ZZK>Ko;#h1vUEqsl$!<;#c(pJlC0cd3l
z7`j}O3wk6&I;SA;1VPQn`huUhFJ*CQWRYme62EM*Wq~?0ufV=tPuk
zJq!gv0GYU40GnsC>qDm7FeTHg0|u5W1ag|pzL4u28D8lIjAiU5+GjG$#DfC&!jvFi
zlzczZepvvEHroZ+&>84tvZWveu0z>8+X%G)el53aZM%RqJiA6_#pU8XOvsG>vuRUi
zS+pjo<%}^Ex@f$Wsh&Ll8+gz6>gyD>$+FWjdakK2=
zas!XSylYBwiZTW|=425|3-BB$0C6K-D2ke9^mCu<$$^sfIZ1(4V$r}@AZUx4&F!CR
zRaF?h1x3LkS5gGMGIoA@#W
z#iZ-aWLlXYKAsyWX?U{Sf;r&>HTLHbThj3*uz)TuD*bKAI9b9Hd-L{#Mz0vUe5$$!
zD=gw2@L%}Zci$D}cC}gNVZKR@kJoZY@yB-v99)dXzALo%N&RCkuH)mk
z?C&Svv2Xw5ZyTC#!sTLi{kv+o7zRqg{{l`e@(r*xFFqH4?+SZ(lKRVe7qr41MoOh5M%x%qLKL<59nQ$_Il0Q_+qs586yk#sRgU~un^s|kLB}kUkCj>d
z`s8DH{_Nz{$@}olFR#PX-#(n3A8T9J2GlJR{@w)~B|8AN>9}lcAMBSOO3*NT0PKFk
zYpTc{S`QQ6y?y!S!!RC1#p=~27j%J6XStqsF8~S9>B;#Sht$>*Gc^g-iA$o`rk+83
z-zI@;#Ez~zhLbuzwe-j(#PQ7}@e}ybbpqR=x}TVd=Qg$ElH`{KP8Gu4tCYD^?fmBx
zD|8G~!ujWu^Pj^W#QiuhZ9TFbmju*JV%LdbyFv`z_7l%`(!eIVA19HQ+IAepi5J;!
zYT59$?-_<=@3qX$dx$WKjT&9A;r}R5weeYO=MlF)MX1SqCYsYZdN)RHHHiC4S@vh)
z^OrAPoV`C2(q}}ThbM2I3*9VL!FW(Uhy^cLXw>Tv$+
z?EPCYeW~WVr!2_onX`eXRZaR~xdb*Wt_#F2VcP&96&#iQkM@zWS?od`$n~7!eK}xbTwOE
z0~Xh`LD!p;4=;Z`3t#;5CWP=OJo)wH<*Sp^S7!wjr=>K))S$Cj$$nd*HCbNlR@M|F
z0Lo-mSa=q^*IW}b7}X{i4GzW}8;V;#VBimb2UWX_qCfuYYls{AAF4fN36%xEjNP9S
zvLT<2w4W(Pq$~lDP+$pF?Br#a30p-Lj#GJ8ph_%EIV%N;f-lJTc)5beoM8eRkM(jT
zf(ZPBNXbi^eerb&*F?}l8#|$PhK2e3y7_I+EOxiG%@*5+7n?`E^O7UJdbD
z_J(C*h&|nsRSKZC>zvlO--y4F@eN*cESN2*v$-$^FKWrYdWwOY!j@ThPa%F@@HGgT
zjl~9c!$JnRIt|3JUctT!W(Et#hlj<}VWq^A>9JC&Dw6uYtrYBPPrv1#y>iyCO&z^^
z9w6Zjc~(?)ej&@a!BaH5YuJCnu5}O@BV#pAr(}KDvrQVyN$*9(Rjf6PM2z$fH_Jm#
zCQlTyfJj(o^b-T4z{)CPCIh#HBEJk_K>}tN(nMQ%A~5LhC^(}WBZ}o5@>g?OEG~&(
zh+(){hMZ^kc&}c}=!S_P1X{q?EBPjP)d)_tuTRQZIIzl%Po8^MGI0fiPR!l5W#3E!sChY%mU;`}V*`&WHsQ6K0Tb61QVZHOP+@M!H
zV>3^u4Mx|}R;UG&-FCCGoovq9cV8u;K)4Ld>wyP2QYBfF!J=(w(Ot;1s7(|Acgk~&
z5bOB<*kz<-5%$|FF4iYCHK;$mOW@6gC>GxhMA;y&Pl`XmHg!?={p-fsdBdNN9);s|Ju(
zf=PGjDgcL3lLCj3!D(o1r;+X;q%m8|_1~5(^KckpDrAO|Hpnq4d0DMW6&sZ6;GE1?
zGiJ>OFyTTh8ri%-en$1AFnKH)Od|AOjMR
zkav6w<#-hZ)%gTGG=w{rttbFuisicFnTqDT7PS=_@ESTd+BQ_KLGs0&p0yRz%s|%d
zQw$w!j&XF!;rb(e%+1ot_hs4rINCx5b+cM8muVi-1qSx8t8Ok75R6_;$zndFFFuqPK?537c$JFOv>yGEeBenfgr=sq9WPib9-!g(K%IuAek+h>+Vtx|OUbNZyVD*6c51jG<(&p=YQ^s3(AOM~A
zr?^}+O!nmNqRH}J
zN+2Okr;U_Smv|;9AHuR*+9NH3U+_onnWtD0!sL7P5kDs)s8j=HJj~fc6@u6mOsQEv1|Iy-ABV;j-iftihL;0&~~lK;o|7erhSYSS^31dZEatK=Z!?iL3t3o9KkiOH+2V=4K^y@Prk1u
zgdiPG=PRg=bC}ZuO{Gs{eX+%I>ZEpbY$_YI{@De>1E&|K$W-&IvRMLk%x0;+9t?Ze
z>AUH4>{@9`wp=QIX&MnBKtW3>;m(yYN#m!YC0}Cx&k*>Y~6|6g=-o)I+MY!$hsb_uGlO$aU!acq_-b#owA!8fQSh&8cK5
zQ^WOAV8nAPn%+eb1gVUuTuyppf<>Z*dsCG9UD2YTz!<`RCd<+6H~#JHH+tBQG!5py
zaZlWDX8OvTRS@bk8oCG&xx3kAE~T5+mwg1m*pj5R___DAJr*?a7Vba(I5fVZA>=oM
z*yToHpQSr9@1-3cl9Xc}1$Der2K=&m4dsFRJHB-KcXM4^t(}%@gbn2+NTEq1i`BH6
zee8xLUxHUnmMv}QVqmkfRx6<0Al6bA$)drDxdsuCUT1lM2ND09C)DK=*|>2llkhS-
zRGE1R$3|*iNxt-roca_m*l9|V>`*uD;NJU}ru7p%Co83rAf`GvS8Y3S?
z#qbvLVayvVpCn7gO|KDRD(Z%JC8l~+ZK#;ex6x)zu(JaSh3!`gDc#NTQ4|tyAs)uG
zvEqTESa~8DFfCdf{4mj>s!OKY-wQ|cLl+T6dGR*#sh8$<9{^a*2;Lk-X~3=m8w?b5
zRQw+z07YqWB-0u7qd3mmTd9tN7ngavkS)&+)qJ$_sE{h0C{ojJ7qe@AO0wWM{Ek`b
z40n5TcEIXlVC$1c4qF_AI$n83UT44Y}!^s4iW
zp>(;nKu-Oz`UIe=09)bJg-N@jsXoM3&4jnfLuM&B%exrabzy6W=yx1#s-wehtaYIt
z4cD}K&@t$E%}`#b4x!uwfV#cE@nkz|-t4)m@XvlJj}X*EQydoPkU`O`g57=uvs{!X
z`QC6rX~L~sLdlF9H59h?dI^~R`dVAjcst9m42U->`N>^a8lUkB*EZ*&1<&4#~yk8@QrYK1S7Q0%Qk4l~W28GJ}q7G|)|
zf^}!l`2;*4KVXI-3E>7s17;nPdmKM&b2mO{n)`_%h8O+DlUXvsN5
zr>Nh7%%ZmwNsZ~{2q|ly1i9Wz|hs?y4U43izp#@M1w`%~$CR)3&
zRqtj7pfRjf0G5m!cuR%d`d#<^W_tsfX{?cGM>9X>0Y$3ORO8RPGky4LaagF;3rsg}
znY)z?#d2|2nskBJvD6`{NRL)snICCJAKI_NcdxoS^wGQO(03!W%>ifw)7xmLcZT*k
z#xtU@Cv<=}rDDByKlUN`u}c|j>X68`Y$vfj-S=%LFs#USZ6k_JFL6xAB-q?qhu>q{
zwC%*UgVZLz7bHPy>QZAk?ITZ%Kh=!KAL0M5?*xJ6nV#=kx)*q6V7M;KZyJ_in7XUG
zmhQX0uUoF=o1SS|o*y`d=fWqBrCWyWx<=qS9zZQX=vT(G=g$o9#j`X0EI4saPoD>-
z?VUO9>8WnGmhGLMo}O67+4JYlNpN}^xX&H^^x1Ro#Cj39=868IpGhy@{MyARBTiz&
zOJl0jfH+ZmlfyO!m#}*X;l}EjliBkoj8|
ziGl6`l$08=9l0*CiHThc6VLU0fE|NW5lt!es}?Ylt$A~s?2
zM%yFS3*H49KUUefKJ?z7J^$s|+4Jxtw-aXg;cW;L2&6j~XW|DWvivA9y+8+bL%sww
z0kjAh71xeqU3VhOOd?omK^hpbV;X7fxHfU%D_=qyuUj8$ECZhyp4Y
z(&`4~fmSpm$EXIZYnHMunnc4Hlq2Pog=9|!Sram__8J|(5s{B!zZ<=E4{p=SCvxYaFt$GxHa^lyO%G<7AAxru1P`G
z1|if7(uh#%$3_x>H;wcJ;u?I7{n
zBBI!It8x=uQ!o+0l&4wkfsDpvs_2L)8dE*p*|qJY>=;;9E#`L3kaIuN`I>EEjAWZaUs4FO6@Uv#ftR&$UA@ZNjF=%AC(BKKacl1!veIrXJq+Y
zi^AI7&VA85V85DDklD%!Da9|QY04eLVyQQx0{e?z%9TQsno2fiBsZ9??OCR~xph%+
z>O$g@JpB9icycj{qx_8|wPrpk#isBV!#G@yCv_I>qB;3O*EgyYH{8Y-i5eadxUJJKfurQD&yViDn*4muQ?FJ_WcH%M27%F`r&EYCq
zoKYFP@OyZCr+Ru%s(2GU=(7do;CjZsEH)u(Ofvs(i#@}%R+DlJ=`U)}BEhhuWR|rh
zCq{2;aR`vvf?t{zgT)atJRbzhTh+&)!Xm$9uR&t(XlOKC$8+NkE*{h`q6^kAJbN^u
zQ>WR3O{2=Rc988JX^ZiAq0$zQwXK~2NW)a|*`xa9x*K@PzS%3oU3RM}2?
z*?#{(2Ka}@)X~7HqtULXU`rc-(hEI`#(SaaByp&RdTpJmN0p^dy}E!p3XCxhjB!IS
zdT5NK#<&R>EA+;I0d4hEV5B5MH>~Q&1x=T08j|pWti@|)vat#WX7pIjr)vJE+tWtG^hm?I@%MBZbH4^I8
z>i%+>!D^nw+K(XYhBwD2Lm(ev-*UH8n-{T(vEZO~fA{XC{$vt~{;U%DHje#19rul#
zsk~4I_*INTq$jFhqwWb*UU$emV}gnSLkl!U1of9U3+jm+B-MiYR=gw!_0{0J4C+aD
zP~Wcy!{U1dXNO1m9S`sJjCfb&e`Eb`7V9Is<5j{z-0ObFLr7`WKUuwpj
zLY*FEMb|Pzs-BZEh~p-j*s^~T;KCjm=TTM3i3$>Q$K<(X^)c1bS
z<9C$?VkcqLt+U$wAxmbn8(?}L_>`%J5Z}6fOX+s=@}PT>YJGHdAhvHOPsOm)ZzXo67i)LoskfJ$6!T)fy-vq|ColWwF}|hx
z;LGk?v1h*+JG+ubd&XXa(w?#Ah_Tj)vG$0u&R&l)_pXz!7uI#~Kt0qPKGYgM)E+*x
zm;Reu#H+p7CV`(u)Qu87wG4;ovEk^x;}aVD@Iyqcz_V>j4=fru4ow5!q-GGrh6NF`
z>F+he(s3(%c$_Dv33jfYE=D%j&pc;GQT9G80dICoo2>9PycyOa$U45vHUNK%wF)Rf
zKw{z=-+i`V7w0hz%ij#mlC*dSZtuyRY)}Xxfk7kNq7F4v!%WjSz}r&Pu%p;^{4|LS
z&od*_w9H8N1J9$>H7E&E-=>zaHyy>;E?xm|rZ;1E>t%yYw)2bHuFa07-j>`uI%0iG
zq1nWzwi%>>?%JO3IyNAicm%eUI!RnuPu{
zit5)phS*RA>5m45?T}5@aG@U<>wjfT
zwf_{ia-WL7i;HJHFr{`}`?Zd#rN!nQJn{F2xL#F9&$X}p8uYk0JFX=Z(8$Ov`xn2d
zQfm`kdz(OY?YL^3-_%6LrbgeOmO3t%kahafXuscesGIek-B%wb#8#?%Q-u&LlXa^s
zdI)3FhXSX8r^vej-tq>z3f=mp?&4+i+l6S@aZ4T<`tANvqWr9dF1vZIRPYfDmQ;8+
zqsl9+H|5dW_s`GXhi9)|jf45fxE0i@VZcbR=3RgV+ddrZWp)AO&y2lPFsJnw#@0K9
z;Yb`6Lc-&da{O*Y<)PMe_U-g5k0aGlUZwP`;Z(mMuP4T~I~y@csNgmJja(#uklu|1y95GSnYFOQ)>f
zD3_BmashNzl{P_AXX*zZ>sU~^X!^&y(5Tl@|8kclUoHTII7|oSC*i5)svEFix-&e*
zo(MJ*u*c2PEY{{U$G#iftd*xpfDw?)K>?*1oh5ZO+pHV6X^VnhMO!UR3>hT4NW`0r
zHrNvRuF^|KvM%p}`B!$pRdiFoSd)3qI}&0WLhOh2nk+y`H)NrGfHjuK>ue>uLW&+v
z^JTtar+4X`jsd(NoFewRH(38mZAI4DtZ`ONi-h5J7!nrS8PQg=<%YMCLZ;Bo^$M{V
z15_}Gzbj5#U6RFy_^9@P;n)Y7<>NOn1joAgp9j;h
z;d*0ufHNT!&%+;Ju#2DI!zRpR*na&1(+UnW8|U|MVw}^5!ES(1@MA-nIxj6!`96y(
z7iLU{njhgp8g>UZsf*=?zx<*r5>bZTO~<5)BxFEu7xY11zGNfIYu0+kt+&G8$+5!F
z>+LXF<$VNzPVa+{?Q{c*xhv}};sl-${a7bgJ`Y%6bk)u$e(X3IbEOOit_Jhus>3B;
z?efW2+$lGo38nONi-fAtn(~NqBb|4}-Dz#s=@`v!9-ah&98L~DvcT@+7>3u!;=J`f
z^fi8$V^rhtujv5Nk?vxSanXcy2(ekU7^7BUyj)%5#$SF-VG9D+1m$aQRRrOjji_qG
zoV%vn@jx3=C+w*T+X1{+G&R7Mc*baj8$h}ocV0CU++C)q{+4JKC_(X}wvDOpQLfu~
z{?@57n+&4U@LEM$y-(i&5}V{*bD0Pb3i?j4oz$3Fdk#G9cODE;ln$D`ymr+O6iRak
z3--JnaLv31JfCE{ixS)QLBpK6Tp6JhyS$_8ruUWxz;~8~%$In8<*({uA
zi!F26ZQ^S+puss4cDB4W^MJ)zplRN5AyLLj#z9d4QENybx{MC3rtV$+BS49`uFn7VKEnA
zz!n8dG2$&3*O*hvwI3#B3t0K}RP}=~*~JA08n@ztx*zLwcStCpx2;QGx7o3#X7>r9
zBHF6A3>G%1O4&7>GP~Sn;>)HGNA4%DIx}~>vn?Gn9+V-(kd)1kNgAi;m_&yvfckC0
z9Z;v;Cu%$7-QRh6z@ic4G>^Xy)s?FCXI(v5P8~P@-A5~WU!EfS6aLb+_NqQQI*KrN
zoT-C{3Z
z0L$%VKAvA3+SAsX<{Ejk8#0;AHJZC)nY&{Ni6>fB#epX?x8hA4d5DFcY!S{fC@HkI
zO6*gl2u~X{3TEiLTa7a^?JxYHX7h~Z)zBxkZ*j5=qxUC$FBa*#*)w5RcS3yO`cwT%
z4**xl2hWy#KmPQb&eCZ&0~orH3X9oWuX)XP
zWps3JGBq_{aAjm=a&u*F_6#432mk;800092#aLTY+c*?{=U1HJA>-^Op+LAaY&$~>
zEjz=u47AU-Y|F8xj+K?<(6qyU-y``ZJI-ahv`Y(#E$Ljo^GW9%9UL6Mr-oIsF?1s&jfq97nC
z_!KfpLwIB=*0c8xP}!R-NvN6&&ZAQ4NDr97%TASwqb&=PWE8xG2U&_!z_liPp-UG$
zXF7nm?*TgFE#Nnp!tWsPGxi1$ae*=fNkcvZ8IXhs@Ps$o+bjGgbH$}ARXCXl%O`hU
z+Vg!6@CW@_H+3@k{y>vxP-Ak(S*ix`VAKuZEuYN=m54|+V*qnKNBd%io-QgaMn0b_
z#?#5yoq@Vh^qCh(bZ$97NUVpl~|M>Bpdv*Qp^7HK{_wM@Y`u^eigL{AZ;mUn;`EU&r
zSkORm@ke(48eYNI{qx}E(fEZwJ~<^X>1i08o&?8dXXNn3_&hutpF}Umf=DChR?xi$RH8;wY2qc7p
z&kM%sAA{MXO)N9~)Hd@j^Ai?8l%;{@%&NU_1tp5=?VZv`nCH`R2SFroZxE1FFp{uu
z?cfU*rgTM
zOP+uxxF^4_UzGBc+R!=H;wBdk;GU{Xq&mnJCy8u^S0e~S2(oNP)rzR%1>XAI1bVdr
z{l=!N4R6LSk6R_B*EL$67fw$r85OZ=`}7xP9qmEI%7QKU%e#}+XoFvdNk)LvxxLW4
zPD-rhiIK8CB3HCRz7>{Y-*p|DYc^fmje2S6ORpo@{Fq0R&Ul8jidl>LyHU|3XgnKT
zlZf&>t&c|Y?7?G91?DHGoH3{e^8{aH8LI%;axys!1f|TxrC5B`PS(PjR_R=s_dHOB
z9Id5pUXHl4EO8dKZK&S%`bY46zZTf{6^PyWdD}6s^O;-CZeaT>0jw-=3n(>)wqAtu
zKOuLsGh%=$!MmwumBm+`?<=*2@@3tn=XMoa;II)L4Zz#20P^}sX&{>upg~Y0pQU-j3Y4;WPXv<}pLulBH&lkEJjM|B+jC!d4`S
zrI)iqm~Z(jO)z|{YwFLEMpPPNvq}lq^UXI;gvm
z#RuAsc3ywBJfSuAP-ULeHN55b4cOkYFSz5Hc6;Dwx`L5xnidGYf;Nt8y>|13h4Y$1
z8v)o-%vb`AtZx4V7T7B4IeA
zcy27gYt~Sk4?S5lH!JaPRbrOsWhgiK({eZN@unwy(i@Pt13t^T}JTCGZXJW6=8m73%%y)vRPAWOj_4YR^`0z{09}^l*
z%gbu}cA2FT&!O)7gi3Aeepwg&GK(S>Fr0;{eP7Nn#qWbdM^ikAjtLr3Aut>3P#S)XPWps3JGBq_{Vs&h6Wpr?IZ*OLEj2$0~2mk;800092?OJPd
z+r}0B?q9KWG&Y&i6tDmm09|(~+iL0=NA^gL`$$}BpAyF;!2+OUC8Ph|vkQ;}NKlrO
z^fgn@BqD*ueV%*v?gbAI56QdBNkL|rT23{Y%obBU)AO<*oIGDnr@CC^S!PJi=NFn(
zpRaU2F%zAWY);BcO-?68nd`F5XcQ5-%&+wD!QtVd{f`_qr92V5(2D%1^X$pTd3HV3
z3PPKrCsuFuvApCZzP!~+=Y^gVnayUC(%R+(A4u|yT#;|cZ%I!6bWq?L-Jyo=a4$R89?VI6xn=wLyF~M
zk>zDkshqoKfA}dG5T6Z5KnH|{KinP=ljSZF^W{v~xEKQE#qdEbiZ9Bn7#F1)jq*mU
z#$pqig8cM=;NP;)#4Xf$p5;&5*No=nGS|iC@;=FnRn&MvR-Zrn)M)h6nVy=#CZTV$
z$-KOM`oKMjj8=chi)>z$93WX;um4&sX)@!VCbQ+N&TAb(q+Q&&7(H(~J)MZ0=QlR*
z2USjfV&QH7VCW=DtA~7h^q2AJ(aAT*T6ir;{Hi^9yHN1By)xj>j*L&yG)D9ld__x8vHDx2JF3{BdQA|Fj)~3XIn2
z3V;`rE93&uv`Qcz6=Yp5vy4o6excjS-XFdG?s)v>yLV@=Ue@YXis+W2X6+O=MX6_P
z+&X2}JGZc~$SiLbdKGNZKOomGFb84}w%evvzMs5#H$MCB?b|n}?~Y%tObfP5oAVN0
z6SJJla)QUnCCAr!RuZ8Rj64Gj6v>t)xzS}?%gNEZSMQIA|^VjPr
z>}LL~%$KqQkEeQpda3#B4&JHBrpnePdy{!FQM#LjfL#Q1GGFUDuL0b~>uvwMa;|VD
zl2fboty6jlFv@HJ-cfnB7+^()OjJBz|B0JY7bDQE#EsuAli#~Lx7)tIU5e>MZUWsA
z1h57YP&SJ5s})IAu9w|dlBgBO_BW(1STJ4X&SI^VTM_!rFLf9xC|Z0R*!M(xixMEXC{@a1)pilfA;h0&L&&-
z>W?2~egiIiT#)l77oV?rsdrhHi(+(ms4`g$^Ww0%>@Xa9L$7_Q%JOiN<>6&Hn|?LZ
z7qU)V3$`Qp=Bximi8&rERLtRZ*U=;I6V6pUj^Wdckv-#SQHF
z1?jy!dspM{a7`nJY4b`
zAm+-kY)umg(PC-iZ`WpX+j1;`m$ofEKKwZ3Xv#PlR~C=^n77L)>$5qT{I7DujjbVG2Osw
z+^SNMWzz=Ec>%I2f74qvaidX_IL3O;1+d-g5325DG@8r-=!rs~#!KDXzbp3p^>%3g
zb$EytPUk>Suf+=O#&2O8y-oEt!q5OaFD&#eIRJ2D_lS*JzdvY6e$*Cid;V%IT-&;b
z9q`g>t<>9I9*}>_MmlWuB?X_Twt?At|02J%NZc;EO8#m$fct>&ndCU1AwP}J
zpM`O+3Hq-6q~8EI945Uw!wqt^zV>m_Y1+1iWZ^8?01vjJ0Jg8@{VmK4ekn8epSH|)
zw_s!Y!8)(4U832~Zadj6^N|})WDC2uV;1tM%(*Okk7mnyNwa#}CE9F*tJas?ZdFy+
z-sUXa*w(AG?AuFLN3WBuGw#mO>qxj$c-5V3Q4|jTORKxLcf8OCf(?qsRfKKC?CIH^
zQc(`=qmS*<*IIYF^)E1`$F0@|q
z1-ns~hITbSX!H>so72-fXm&+coRmw=7
z9__p58n@sm27J4%|GyC%U!bmI4z^ZzzJkWOE*eU&p>79>omARHS~o!QUxJ%o3CVU-
zX$Wlsb7vCvPCL-Gm}m#EpBVzzJKfffGN`C`j7bci7PtF~p&H^Hjd6l(o%o*-@$ks%
zY~uIf)4PQ@2yBV@^e~A%Ne%Y})hU-@q?CC{jFjGO
[GIT binary patch payload: base85-encoded binary delta data for a binary file in this diff; not human-readable and damaged by line re-wrapping, so the raw payload is not reproduced here]
zyW@Rp?%e;@+=cz^hmNa-XpC$EK9r4c4y%rHP>-!o{&V0R=Ugh$kI-G->ZJv)YXxi* zyu?o#)fg|^IT?vD=x6_C#$#a4&INA<2aoCs!-JsY+hEhmQlu9XcubCR-*`fX`KHm^ z=~TtPd|EH5)JKHgkI)n;k&RzJ;*OII=@t(nQl3uEF1 z9u=k#^4?x(BcT^Z>#7QQZg$(Z^agFKlCW1fAqBJn*Nh}vKJ!gmqOan|QSsMuM-U1n z6g^o3m(KA?nL<%cy;No0F4=kxlxx_JtZ~Cgx=fy}*pD_5akIlz*RehzCrYR@?P9H=tAo;~|uAQG?~u8Kagp&eLc1f-7vCULhvS zqP3!Q=9({T3CYfuwyIoaCrJ~(tssF>QIi1A=y^RliRnFF(Y34Gm}UH=nMBXZ6yg;v z+$-!j5he@7ig54r-XSFl*bu+%pW`UX$8&&2A=YGbww{d!b zCr*|y=zE^cBr)v-CUB1BI3wS0Wa@QcE9v<|ZYm~|DCO*y7hNpEiAOTSMd7lRRKnJn z(~322cO-q5R5d$4PcZ>+3WU_)8NE3jY#@jJH5HV0oMQ>SSt@JMpd(Iz;T6PMW^tK0 z7hQtLI2Nc$U5~WZ1W9A*8tU;jW>TUM_46ja0sarzn{71BCbC?Q8mxf-h1r=a$n1pW zQt>BD!9%4CshBaB!`G(?j5)l;czb&OXEka0n&rc!ftt9e)rhbgiJ-eN(o#$fNB6Yf zM0LjJ3XEmS&s z7c$U4-I~RVIehZT`VDd~;S=*6R1?(PkyL*ASGP7|4*Tjj$T^VscO5HA^<6c)(Mp{D zvnmObD&I&EhmJY@o$~Z9mY1e}iYCNkDhyQ5N$MWDZ{(S@Dg3`e?%@^via*nUPoD;O z(BTsSuSpOUaf7iS=&!>EJ%1s7o8U^=;dl^VA2F`tU}$5ehB?RAL7X*?-a)Luk5=aw z9>yFipyK1~JewwSkzv^OmsC=GG+Zu({v5TXtOrLU4>mL6!j&j+8ltUKm~wUgI^}pGvHP4HINkO zUKq)B77|=Dg!ykwq#iQmf(!Y##q8oMScmI-n3os4x=Gj+Y_>G1@1w8OL+8 zi+{>5sR08nigmYnrvq+}>_qER7~Xx{My*_(hII&se?BaxuW43^=!tC}QHl665rR#}Qymfp=_+A4B;xcQN;-(!3J#{UITrrJp-(9j2eAcW)a5XF>hWdqp@@J@`1`sIjUBcoj)g-x~2+>KK>K^ zcNR-4kx^~lRyHB3N`zX8gv4Mv+)?x}f+~4z(}dYE+qUF;RW|CiZ0sH%acf5P=g%T% z!8JzplcOWxK7MU)+**$V|9JSrk8Q`ocGA%Gs_PcvT-$Md{U zI0u*}Irmo5i|DWzl_3Juw<1ZzgW8OIHwXtWYuToDp=`_ltiS{aNXk(9!8|D`kqa!- z0V4G6`HT6fWhxRosawQEXmbJ(OEqI1e8)g_o#-p~gv0tTq$6}1Ub3h5njHkySY;!_IB!o= zBPRHByU93*teq#CDQmRrl0JDW0lo0OMY9%_pq2%pWf2-gH&p#c2G5%;08FK z38Qz64Q^%0HN7Xxf|4sc(5N7Lfq-;~&6h<~Q46D_04hSHK6jc9|2I&PD=q#hf^UR8 zC0xPup@e)Zom2C`TgIGw0XiicXSzX$@7>b=c$Y7tb1Or2JPPD*%}?OZ_WS#S!I|`K(}VjWW_k3 z!SGrm!Kf7^oTa!!JVOR*MMESAVA_ghB@5tu+Nm$E<^-a6!xFD3bEO3GS*=g8*&H=CHDziuuSvCylFaTgRnTvT z0EF10^7dU4vZ5tGzg^c9aNU7Kqn~N7N(P#o;WL{Kf2KH0u!H1 zeI@;VP#u3Gdl{3==ytX7K6z}a?A?ehO{$ab&Up5*sQI2ubVBwnQlpx%(*4K|a+?oh z^$dObS-I*+?Mm`41D?R{lrJ6K>3MQJOk;b9P%iOR!cA*xP==@VLLj+B;vNZTpruu7 zurPepI*jX-ckm*WF)ChBMJIYiN5_}}(pHD7q>whyGB$cHC`USW=LPsieP=g%m0t-o zOzp*+A4^6M*n@FL{%lR9P&Q`f65hMGLRrK_=+gyi%?fv7#|FVbw&c884UI>5KFj{# zTZiwWJ$j{8)XIlQxPT>_3rn>-JE~#-wJgXfSrRJb=@BsuQzTDOaSkZcop6qxsCWXR zcr?PL5C-UgC7Uh6iBbPIPUjOqyB$=+iFbFE2z?HrW0icpEJ1H*y5x*hVoE6&&F5hQ zE_V{o-Cc8e`0Iyx)}YNd?o>|hd@hu@>Fm8=9)5OM+g2pYqB&9ABGudl@H&3!Kr6F= zjqz5(bajFIPJzilSi}Q=X5Wq`ZOTYk*zCOi6RfDW&;(5~d#k{dE{5dxy|S6=dDIJ? z!^^$nii23mT31mU!e*ALMI27d31`gog%v(eCi9-^<(?1rNA0Te6}qJLe5vgozUj4b zDO$s=W;9)nffqJBe(0%cVumPE2tA0X(X=3Dkb`>)3*je(8W$&5iJn;d2>F7J=?dE{ z<$_n)t*)}sotG27{c^t*5knhAl0=v&ZQ`2RukAV<3M6C|I(s@3`IJj2Z zfg!l9k;>yiTncc?;nH`~8N*mkTrNT`EY;cDj|i7N1!d>+PGr?>Om%smGV}*pHLr$Q zQYWbhl*^Y<*=0VH6pOA*N{H>CHIFNv3Qt1GW`$&a%-PP91%SKM9p`9sb_gCh+p()w zg$276H8!h7lQvcVb4-VP~%TEUbrynFY|2OfOB615|^%0}HBL_#7*9CQt2a zuPqei)pA96a|yTuA06g0m;zCOf@TtX!}~HKD7d<>SPM!yy8QfT+n?aWlQ8?skNB*? 
zlws!B(JV}CR+>HvJhcGIGY)ONl5jA7EBuU+>R8ZFS$}H+utV!_SSL+CLbqC4Xt#j* znObM=cy|N9;ljl0K(AYg_-2snjrOc&s+wyBxvH+L+KDPxmMFb7VV-1u-F27)K@T<7 z@-@i)Zd4}aw?y$tnieuAJf-TV6mL2MDTO`zDfS{idtMQ>Y#oiPk5;m`NdK=XPYJQC z2B$OVMn{uLHp)nwPqiD>M3!l|Okaj;vD!ICWC|iQQ?4j2zo{1gl1u)Anb*=QdESc4 z7JQZV$rsis-(>+O+3J#OtFwMn0`UC$1&z6m3gpx#*D5?gvO2j`XrXoo1rKpoxOqKi z)vc)hvz!bO>0FOgu6f(Bu&@W?r9mvrV;QE5&7M8%a@9C?kd@76G1XDPduZ`?1-7DW zmGi&Br`c8sPur}Crur|4lt!vuy_+HtFy@~_sC;w;G^ zYU470TVmLy7>rH0SUQK!KzUvwZZKV?*jB<#_X}RV%w%bE3SBx5u0o^XrNgCIgItJ6 zIys8az&tck{iuZIMhX%l(~Y@uCJaqA>pDbd&34okJO+y_5&y({K`MKO@!t6HsC$u ztztSul3ai+6$|wqC3Vy&oUsfiEa&0S{WWiqX?LTa?b7!|xq!=#pV^tfK54;*2pREFsnoIpPSr`8e2EtJDJLS9jfwHMLa}|So-oeQs&vqii^}x7 zw_aCituW0URmwMQM|kMzO3EyDg=wBJMl8nJY%TiqI^Yq)8hs2e z5JS5b;7Uz}ov9mdnpoDk@#w*^_&_S&D?$QRR^2md`HdvHt0RjmrBX!$yr;2u?XZpP zpFAyfrkkOt%Nx}Fm9Um|Bcq*RYIfDh{`*Cjl>+k_J zcpG{!6~5Vc^e0RyKifwDt=lugM_$R5CbdkOhsl;Pq)u9^OoZ`GuRB#tNH@T|jR4)I zI5qH02Be)PV9bSXE!GVWzlj&f)#>^OVc8BzDPth+mV@{b0DX*$n=O)TR4P5}#b?u--d($qqnH6aj^l{(&~+lymA{Ed(_$qeu8dii@#-#@K%Uns zn=3XEwfdUo*QFb&2pSL0P*Fm!-jkolU*#|>#g{~`qB~FN8N+!OPp6rQQP7-1>+tL?xM1^VQP!*?zNjxgPGUS7B1%*FvR? z3$pcs78!%guQ>&fB-p`o`HIN%n}c8X-n=-D-|U?n)RAm?MK+}3a=?lQ%ZKH(q|=M| z<||u$@RemcIbTi}*~K(V=WVwb+BnzWjs!x4<3j6}Od}GZ_^-+bC6{?7vL0#?gpf3v z05KOIy)7$@vCHxxa85FW{+UvA$!#dp8f?Jgt*tMD$Lt&C%vp{1g7b#nk~ftrQTVeU zZQy2k?&m#gwfsYP$SqNXraszkSBE6_%CEiJnb}Nl-f?vK0T^iPabYhPe9q|rRrM+Qm$ap3yV2!8 z|0QgmXqIFQEQ~W$rS*}*Gi;sWCHgbjb1=)4iHuZq7R7jF?bV$;)Q@T)tVG&syB8Vq zFtV=AbTN|ZW2qokDrHCg+BQBR7t`N2nzcJLHt|*HhiXH-Fmu^;&26ceFlw##rwPRHD-AqN| zu-c2=?br6gzYeXXhONHI*zRkq%)+kg$MDILE7CJx9PY~5k4s~p4su=%u_ozBx|_0D-; zRbDmNXM2qEML0$J(FsQ6zdri);7xq^YX8l_-tj?v@Q?k21CU@}9{zNAQlU@s_@O2_ z3QER;Ddm+_0EgX0)enH>D%@c5)1D3b&%Gg!PSUL@Mve@QMRC!qJ6 zQDxjYMsgP2=ny1g@nJ{k1%M(`L^!Z92`X3R4=NfwabtA!e3Y*TJIe7Wv}sOGPqLds({?o zPoQ|7DaT-t`Ie&vmeaw4JinAqHK=sOs08EIL^8B8lYzV^DFVt9Hu1bBNPGLu{ zvs2D%48>HFtSoq_5V(6UE|S(?v_f2J6@*Tbf?$%Ib>I+WSsb2s{X+0R9}f~r%d;rz zD^xPlER8npkT;!S-Ik)Qr0)1di703YXA)x+@2E9<4d#FuSbZ~;wzNa#MDVX9Yk?IX zViwHL43w@2an|V*C$JLe8enu_M5qyh1I6c-ScU*Ew;F!X@)cS zw1%K_6}3*Z-<0rgphlf2^_ zF4UKWQEfvOY*k5NN!?TyV*@!JS0~QGEVodF+{Avcty%M{s%ApA{3~fUXMU+Kx*fvEGGT$ML|Jd! zopp*WnWxQE+$z_|xB0M=#6pcqt7FN6Yo@i9<@b5irXqluKgFf|taq2>FntEqrPxhH z1Xu9y+~^)RpLVHI4W^s9WM0ASt{@$8!oHodyGJwh07#-Rj7j`nQU-5@?QmYg=0cJF zYy-lP|D}0KO!@S3Ppo{@abwrL4_v|>_Q7EAjWN0R;@x<+;45(FVFWp2o|-$nxn$QE zA4w<+)MKfa65QBgL;uUGzHJOwwqg+7bDtq6RU(tv!GZ_JYs1ZTz-OePd@xRCg7D_r z)#2GC?r;OK!b1kq!@Oe#G0xBL6$ou+g21aB4ax zx8AK+>2%nEWCke{D2IRjE8iBK?`QcqZPeIJSHSK0gBQeYNp{t_OoBFy0Ow*KTQNFI z&r@$zz_I(e8c_k^C6#wbk_$TSs9trIqRb5kRy4!Oq$R@6r8k5%j_Yo)Tl1RlHBN zMbZ;p3>HHi7FlLWp1s#`NA^BgyFONUoF>6Uo|8~jCqHFkmA6r4#(^jh_ftyxd%{M) zgrg_euk$$1>WbKZG55C+ryFQz-U-pVhYp#mq%aCY~ugb zdk>3h|0c%(oj<@2s(s;hmvhhpU34=`G{C5;b>wW z;1CaB3-zodSLqbfaU$EdhAkul#oKsXu_AlfuplJ_^c3!7gX{Bl7C# z!dD_dog`Dn|B3n~d{%ambZ}wzymt`Ky2ZqoN_5fXz-L$`vdHbicSfo2(iMJ@GMgUIrUzovgBxyoAU8d* z-ng~tyFykt?JDT%Ba_}cMos_5wAXq?+#90WaY?q2S?B3BVGS;LLm9f7SLforBhntk z5Fekg-?x_{_;N?sl#k}~EBgG3xL5Q>RE@yrBizS;XAw644yxc`FQ++v>4H_)I2EM8 z&}*I|dm-|uc=>juZVDmH3Z-Vh{vAXTlPI5vam|&g=pUZmB%I zU(;`X{@$-e|L1};!y6T9QvGlU*W4*)xfkMt#x0-4hFg}KS7pU-hnsD{!YgieW9%`5 ztEeQfUz8-o@0-D|HplN(A#PdQs=_*O2lpAx;PP*Wh2b#*nz=y>h-Gtboo}dms&`TT z>t1TU8VJxxo)eXh4AbGM67knx!s9@eCLPzYC^zuGQSmtAmD zVdr4787%3&3vhCFe=y6Jr)T8UPq^v=haRii-)uTX`AE-@A0WM-_oOG;yio5^zCq`DMuZ=f0;7VV$)1_lkROi)9qWMKs!MSg!-59JOO(= z26ejGL}fZVS<|50wCYtO8?TQ2Q6_%0=8J{<-C?hnU2Xs7tNjk4>GArj4dT`Sc1cdr z6oPQMaH`!-J?PCTS0QotztjWqZ9Mu7K8$Z~?O}X(S3(D4ohQ|&ViVL37-!$lLdrce z&;Me|9QyIe9G{r6HuZ<9MS-yl=IuUa? 
zzB{IVaElibed8ozSXYM7vY17b@7hY$AFQS`mMY?TI$n(0##Wo{U>bl$8kl*Od}u;)H}Wv#ueNOMJ&UtNtxsUu{wr&!c?<^A3p z)6I)bcr@b6Bs{O}Hf}Dxk8Zd5*e_Y7?c3@7XDDG`q0=YCU>bxrEX;jW z?y^Z-L5V|SES&S1ztm3;4*Wjymi>E-jPEAofakCRVF%<;F|Wg!m~KvG_72~;@L52( zkeRg=)^y(L3Sq6~s1JsPUbtY**_F)V}63 zB=W@*p7995PS)H?I11Eg=N}X+g)w4wkF-*aNHBQARr6Z2c ziQy4Vfl+ShGJ3ayGjD~XT$z)MG2Nncu@D-R;@=+=e?rPh$s5Soq?C!a+<)cwI6`J4 zD1Lq8UXAUu(ZK!N8hFqT3~aK2R3mlAqY7PyP~nrIO`r7R6cr$tN)7D zi*DVyGm*&yeBn(SUnFxppc^RDKQ&+%-nydvG4jLzdH1@Tj}6B^3%_>dutjd&e(H&P z6=R}CUN3HyA(z#$svGx#S5drHGb11Xwfmyvyb9Xg=|?T-^|h`qtGdFLu4N`N5%~fH z42*(e8J~>b>ma0(n5UyOL#KFErWa_=Fh}3GGNM9>Yd3M4-`NrW`~1>op?B+}p?I{G zD*tc{{FTfNQX^fK81t%){sf5s{s#-*OVr!*DrO4Ld#NrYv`Fq7>TuMny*ppg{!N3R z81yd*BnIdRtZkZK0d5?0to*y8XAZ_%vlwfJm}p#l~B(MQT{mKuleT^ z#^d3≶Zh)O#l8Lupr#0jCj0l=h-lHl9*~6{NbNR`<9~S{M_eJ|_3g=Z2_$P_&Qc zk@~e{S5uU`prriPuoyWe+}82M!0w(W?^P-Ut+)`J6U7B)6M;$QWd<|dxm223sDle$ z9(bCv8wf+B?jvj72-f1 zou{d219FHiaQ@jWd#}y1_9b#Y)8dVq^YV)6?rj@d@2O3=)N>Bj2AZ8-ZT7QS4xlkR zP3Jn3lr{v|_fPHWWai5dpflILOy3D#^gz z97TBqTTam_z<`&48%>L22mN!WOLBvftVpNy5X1W`mcxIgqlL=z<>VFX-q>lPe_Khq zeAkCO0JT?g_84*ySKpp_D%NJ zNvyPo4hNXA6d+MV)F=kn$ls)ImFas#F&5PFh6scpE=C?&0z-Fs^*A$Q)u-D0(iK7g+y2k&lepj zt%1@w=_+G|VnXcPhrnQ13GK}pCCY1-ylaIkxn)(-W?^!0Us8@Sc^St!kTMJs6ea61 zJLBh^jGuEdexyQPp9ZxwQ#=Z0ki(+~t7Pu+{S-D(I4rPar*@I;5s7OpmLsZd3QGq| zs_}(z38xBA>`Q1~Hf-m%3YwdrZi!E~ws$JAqfX?Ydln*3Ev}!g;VzJ%{q2_i_7lL6 zHUrpinwqa4G{CQw-OnCktLYG1=Hp%kPZN%4il~E8vg@E~Y#$G=VRg`YI4$5YeB0mM z70?~OORup?9(^zH@CGow?zct#wnV?JPdaOFvk|xCh!(!tS}hMSYUBpBxkql6p0X}0 zF!Mk+!(8{9vDdv#r_v0iN-``1KwC#C<+BrJZcsA=yOz(#K(nO6TO|yn4_8nd_nTx> zStUnjSjBH6HWbU@)vuDFDKwPGj87!>5;TXPWZD(uwjNmXbzP~U;IGpkxr^Jw+hY%I z&}8w~5 z5n%qVl~_{d_cA`)5=rs6oR~vFh=!694I7i9;VQIf=ng=0aU;z0d;+98zQ{2wv%m@s z#!zC3hWi|yMywA5r>;eS9drF1Yb{OHNH;{xHdU!Jbq=&U&Sa7j)K0H;I4H2@+0>{E z+TAR=^-YM?8xFuH|}%Jj%DL(C&U!y!}R>Uisv{ zv%&qu_U$6V0ep>7T!Qkclx9#_v;m$5>isYoDrjx-v_-(LcogJ-D5ISe&uY$Uv9C%c zEGDO7u%{S=o}#v3E)0_0H5sBN_V~4*sb(KnV%WBtFl-<3jSWMOtkf5fofad%mGwfT z;Fo25B{d0K|4Dn(yBBVk{l?i;?<^D9IGv`aSdvRZ{o?HcsBo4Cdcuy-xst;-W*BLE zjCMZVSY(Si7$*QXtq5tNpr$rLbMBkq5o?@U)dds(Kz?TcWAl^`k}hwF4ye4URnM1C z7qDsEYZ+mbPmzGsuS3zp^$QOa?w%5VfEMUy+HZ)5YA*GU3R;HQmuc$(4J3n;`;BrY z1;x*6H)2sktgv~)wiVX- zzg=T+iEwijrFBH@xUvG8R+$2nj0lj>xfRsY<0aJ`|3SdT#@wL&=L8rKxS?4xy|L3W zx_93?6K+Qtl_H^!j|&f%(IJLN6LI$lYiG48u<@!dA}-H!NfM?CZ@WTO;=j>Y$5 z`k1kst^9W9lR|CuG=;W!Wv8@3qi&uf+E$}(p7+9bqi)-7x1C1ac4}PR7)IlxxfZYM zaHjihQ@?G|Z@YQF9qP9u`t1b!{YgFCnvJ+EN3^_gw{J4yjvR5v9C4?H&ewWgh)24f z5?LT^HH7p=>NTDok7j$*)AyTp+b+BH9TixuA7S5Jb*0~6W=fxXw#w%x8z*%cknbKn?H~Z4hIz7KwTt8ht+J4>&UU6SJI@g)9IDa44 zT{;X$D^D_LFp1jd#j-eyG52=s!TBEmpS{)MDjQgBpI)pd+1q~J$6M$y>2pn>yc ze?jlO+BRJyq==MnQq{#MsMV7{`r00?0x$0Ym;xi`mL%bKg(xn ztKHhaW)i7sES4$;95E4b984WO<1aWxI?qzvu+3GBrBJ(F2T&Kz022!AcF@CC^<8A~D3!Laye+Nr2 z_R9S#i<Cz&|wL%0DZ~A8_N&k$uv1FeAB<+hyGc1L(LCx zG+92z^iT4{9pWTgx7xqy0e*86YoVPPgdHq|P~@|Y2vZU-5{=OaB$%)ifNnuqRj80* zp8!Xux>km3?O0!Cu4>NlxEJCj?2Nv$Tq5hMcLuSCB?UFdA?_*_ckcQ&cQ$jIJHguw9b0H;7=$Pst>6)l z5C*1{Mu@7toM$tO!|kqk5lyaOD?pT2JlJnHbFkljlqxNivZzX0ioS3tlEomNVT3@w zTwvJ1Fvps>pUbjTs=-(G0Q;zAUG&;xFB3&`-Aps}j|D_}5`J2+K15(n>rryM$FYMt(UI7 zio2)*61BI6P+=ZL`aTv12Ffc(O@EV12u#so;jW5hxpQCZzpdxiFNC#E%%LBs#hM<| zJL^O`TTxS(>)n`91OihClPZ8m>Hf;)=XLL4y)4PM z&NF^~xu0L|J$<={%nRG}A^pjsx|tiR4(+}HCI#z?s9B?Z2Ds{VGWt200+5ccMbV#P zqgb;F`zfsiXgNpADKR{)wi9I_id-G_@u0{o|_dK_xrO;cZEr|M>A^zOCMY+H4xzU)dUUbKAClI<#NEm9ta3 zfw_n9H|Fj*T{j|Qfc6aYf4&u~jkEI$kX5oGeM=f8^|AhNoKCWtNQOPhkw_K0mV2i& z3LII>0)nsX`*|2#a2uKiq2nQUM9ezMRa7maa-vocUcTZM(YBJ?Ofj_-7BR*2VaXg+ z@Ic?;i;fbNicT*@ppaA$mNp=GeFEpoC5%`r!{94RZ=|2 zaYi0=F}J;tsA!EKE?z9D^4iq+1 
z>ieTQXlkp!-QW3c&=^(qy`@qDJUeqeiKiM;W+wQcf5ZfyZ}(4x@4|qBr$2c7_}lH? zx8hg>n&%ifQ;R4fIV>z^Z`5eS2?b44hcq6Nmz%-eb4u7?0(l(I)jGzY{d&rcQkm{p z=}wS~#j+eYicqB4vvtiRdw=L({`lQGd25v*$>2DLfD$gVF}!kii$+4uC)QZmBr186 zF8kQNF8g*Zl^lNg6bufLy*x|M>V|j>dl?_fUq9SZL56lUOLd-K+QG17x|X>Qk%Rl} z&Gc_pF71w7+g%)}Z~?F)r;D>2C*xE>Ev_8;hGKjSnWbNrLzj_mUcD`l{Vy%oYN}ls z5?eu}6M;mlxR{9^#`=9d#{=n@!OZmA9i3-?o#eTCl}+qel_-qRIEoe46n!?>X~zG9ez%J zUEHNstj=y8UL!y3GZuzARconUb>mZYJJhe-KV|-#PW75re$Xy1{}f)e%|Vipw&u-y zWWYcIK4!Di7}}@r-Ubh0m6zkQ5x12|k||{3eAJ2RI;|&)G_zsJ%ta)&h**Tpi+Szu#K>Z|UUoLVqU5ick}uiQl=@FLnjdinBT|0Ldf^&)<8czp8a@NYk#9K=8D9mn4v92~0%oJUXY zvS@2rTLW;zJR2?-sSU}+$4{MizjMlh2H_`U@C%>;=wvUT67GCDO+{);w0!F0#ShSS zMUh`dq{W(%LN!2TM+CW0ZpPB))Zm2Iv^*$#mCLw;HV- zn7p^?Qura(tCwpR7BT})>7=>%t4}47;Qdo#m~tG>fs5Q`*v|Q zPgALbL>D%!%U(L%gDIaa7gkWNxeh@_KT1x4YT31I7V~6QBs7!5p%`&D;VL)PLk!p< zc2*gJU&TNSVa2xLJ331)Q)#{jY=%#Y+{?)YB0D`Z9v_x|5DtFkASIF;V?m(0Dusci zifn1UMrG*&2S!8e@I%z{Oj|+S!8*lt2OBT0yMD#Da0r|gKdn%p6;WP9=gVRtKm^@I zQjC-kh4ggX+H4G9hWVJY^9ztxli6Z=-7$w2?eX-xtKLdKd@FTb2kUpeDJ*PTc|F#J zg-7e>3)-s7G4kPy4`2L$Wge9=+Zh4?000kka$#OVWxG_`BcC z?vmVHJ|sDEFOAWnHXpOI^Z3ngX4!l9?veNNC?yLX<}o8n!7rnbrGzFVN-`#W69 zCw+L5e7!~BCr=w54S1A{$6qg4JoU)wY$n)@W^@v>Gq~ZAXQ0#>UD8PuN7>pVc=hVI zU7Wh0gr4H>_(I3?X+}S>uKVjWV+)^#p>}UgQuqa~_BAKq(_ird%wfWI`uA&&*dBJWsNeKy((; zm-!@)(m4wUQUdHI&B*!b%XjCezx>BPd-L+;`Pusyub=zBo}XWgN&fQ#azaLi&hJ4L z8Olp00+wWS#!x;k%%=o)41fPI^vGy9{AjK5;`MK*FJJuPKl||Rwg2|~-P!r;_ovU# zi#6^WYg}X;!a*$)l+jrx5n5xWO{bR7wkadB4Cx5O%;C{bY^}2de^81Kg2zt$KQ0A4 z@#Mo-HJ$3_Mr~wxVd^h=%0%fhWq-5_s#=Pw9JHFVgs{&n$TM_$DazW_S@HOl8E|kyfg_430XgS>;6nHippt0;WOjK?Kj?4bYtn z+)~|Mh35)0$U8|`iYyTkZVbr9oab=}NdbWXkHc?k!jrqfoJI+m@k=HWngmQ4Oa^yA zra3}yp%@~hU#;cCTE4kP&&^wMkVdnF39rcleZ$}`cmX`Ze?;ax65SR0$Cc0I3%7ga z1^l_qK7)Sg<3`6n={IAdUAZ^_T65ZdwR}~itSYF+R;ZD~vfTE% zonEDLQ63YXu%7djY^d_9DN%9& z@&=@cRRk1q1^-C@B40$===olh1Y7`j%ld8($P#P{)T^OtGJIkR9R9XhH}SS_XG722 zXEcQcJCdRAfB-D|LB*CV>VQUUI>inlPh#A6iA@eO99U8igWSEHkb?zO+zITRul!le zCp7lq%GP-|a3mgEOWvuk*AO@r$hcYjj?DlCm{6%)A}khw5_d778yLet^<0pb^3RG| zLD$8Tcx|LN)f5dDOA<7oDc{WUp5pO+ne9UB<`-)DQ&m?Cm-fngm~&&IGtbE6O)0x; zs5BOu>$~^3Cfo;kMFmM$bm`-!qd4x#K(dz}6x;WRWe7uT&E?3qA(%;`2~QA`~!`NeZRSu0gwyX>}^uyy0ANuY0v_SkOIUBQX2`m!v^T^WW$`_y6L zcZNFV3q&>u>pxvZ@6=ZLlZ2cp#)6e?8uOJ68(Q9GNoa*vr~mdp{QTg!i8tUL)sw&; zBnH3^Yo>%uhIX&tB>x^7`PfPgN8S@MY)%LWk7`Y2RrWcZ@#k+77B zGYd#MKuose!Q9gmDX5eo^tlRz=zmM?~;1&;y&`ddd%Z0xgMENP+4y#etN8X)O7%DH&do zgNHIiAaLvk0^Dd}2HH$_D`~w;@xL&^dk0RPP}x_rCW3RaQx zUA4-|{j1{*4mPb-vCNq^wdDzwK*5|rkip&{gD^?Vw3J-ZZ5yKsv_b(rNJH<}v>|O& z8!)eJPT1YKvYr}TDJFxNTZV>;KnW2TX|TOouqcpl1p{m>smLM(FoBVpvS}X6zU-1m z*l1()!pIzQIV33z&+jS%oFX|FiV{c$>`Jc85B3G_2^G$=Hy2H)xDp&fo(2ILa8RqU zQ8tIiKxx!ymUw1I;Ix(uhg*TfWB@%tL3Mcrgb8sGVLo%^Yf^79URbOmL>hKW?2h>i z+t%N4RJg(x;S$_nG^*|bN-B+&Qn2~i3@NsQwjRnd4j+~Pw`I?RUM?oBu6QgxNXn}X z(Aa3eJhiMWP(@{m8MJey`4dP}lYE=mP+DZ?q;}Q3fR>LLhQw5yNR7ihP)$GH{OuJ4 zaw-oJ3fqF3Rr|TZ>Xrp_SG)|6fTnCvgS2W7C*{Tum*ao_lc|p_>cov z6T2pk%Ls{8q|&#I+emHDL>i9D zCelM0V?~y?ZeS`V3!CE$qcyUbtzOKgpIE_e-3+_6hlP^Q>O;_L8BP0od%JBSf;YnBpjd)bf;X-6RbK%5%Pq=+4ls(PglaB? 
zi(_eCTi&+4+|ebslOH!xi@1PecV1^vW;PTFplF-BXgyrtAj8)4B;CE}_k8};I_+n_Wuf3d{(#mL#v)*+j~!nRXrJIv3BrJj^|AT7 zs|7f}mRldo(U4s5TaNjZ)pvl2y84Z0Hq~u8Gn6|!Xm`O5TR5n>>F}Y0HfJ*Bif;vA z+%jRn8=X^sbJ(@(StA)Xt|(cb-U9fWx-CWjAy+Jd2ubiOx-x0)j#1S7fJKOZ)S-;P=rVw#j5ZmIL|`%zi2 z1x#4*H*m+6>f0?r*9L}r(ccY|rY2<6cSetg9dW$pFm}{yHQk{{HV5mWCG@Y3+{Ida_#zMP@sNM>S160_DOyD@uw1dyiE<#9RGOxde!iGQ1pe`Umbl{ zwQMgGa#SO9&?G|p80L$`y66#8+X6BmWa9z*Rz8uFAM8WmY!p_(2ON-saOrvlZo8<4 zq15$?e89vF)A7&`%8e9%_bR2gex*=(>lZ4O8z2E_VPvY|8IX;9!)6?<0t%(B`Ff?| z)&j1H$M==US}~T%J;H&ZFaxXo)0%2jxWz{qmYZ7|71fWd)Ymq$*)Wr&7{N~?k=91l zf1gIB!CTFZ3i`^7%HfT<{vuk)Q*3wcXMPKOSDiG$G)d2Ri?Ns-U`d1HaFaaalYtT9 z)kv_)54XlQJ@asjXC4gQjlAY7RDdk{?dgyI|hK}#&?R@xd~RROLWi{Y?~WjBZ@aSa7L+Wmb4<{TC3iGFxfiEe)o_J|cl9!p6N)jgn3G~lq;{1@u(Ng3u5BQq|tot_wSQa_#fGqt@S2I>2lucMbv74DI4kM3=nEvSJ zyF{^89W19FjlRdAIGs`X3E^%SSU}%oO*Hr+M*r}TXEqmm0lne3fL=_Y>H_@B{Q~het=l?!oguW}a8K{s&qUIKVDH000004Rds1bY)+2bZ>HDXJzLGABzY8 z000000{`V!!EW0)5WVLsCRkus281Ms#UAAKVtWXBTI^z-wuho1XlW!9kw}*uCUFb@ zdxw-F(~9FZ=&dD!$P#BBXZVJ1*qO?=2y#QliQ38z?~T1J3P+MqI5Ji?2%pc+fQNU- zEVquZ6SAr-I)`pPnmVeA0&sL?5R+Gq$@G|~+#mPR{* z>*dX>n`^PW`S<@(y{Ain?q&ICWinVb^(T= zIAa$P^6}ehOPGG!^|+UpmvFOI4*1V+D&AENoPgE{D{Pcjx&c!I%5n{6W6E_77V8#E z3NyiM$ZACMF+^Dt+Bn}zPLUpTfC=v#i+9R+2N>YOx~1tlMZEGwn(8u8yzIgEY2V`Z zX}%F@l5Y&7=r`;j9+#oF*yRs!y0rNrPXVYBbtevfQ+!ibC?kl9)J8 z!DjOjqIWpY#t&hfV;B3(3q95-#HYUrKN0J4x_fFi^#RIp>gbC9veu%)(T4*x_AuY0ZSiETDe6#qBdT# zJ0VwzWMFjbG&AHDXJub! zVRO$dABzY8000000{_)KYj+yQlHdI+`Z}@>_#zAv=z%Psm2E{=CyA6}E4jPt_>Kl> z(AdC`c}TF9{`*!{Kc;)8X8_xKPnMj>qNlpLy1L%gqZcn;uya3Vb3gK@JmFEyMq$Lp zld$hkSe%5BKjzIRFJ8RB|Je`hoX>a!kHR}1-39!f#e9$i;VjlLAKvm&Fbin#W9^4w z_N}gAp0f|XCBY>4A0HpEVKC*hSWZZ@2Rqq=x@%s9|K#V0K@E?)8>G=4XOl4eoX!Dy z?8iGav?GQz>_x(-JNH4%Svco}hbRiv@J=?Y=e%Af+%c!GhVyd-kA6&N1F>~J=d&T7 z4FZnSMsj?|9Jly(fz0&PZjm}`K-2Y(55n0f7^jf}jO-B&`JN|9FdH-LDH0Ql2RrtI zx*-k5QD>t+F!$MNh;vK27#s&1P0~S<`U#I2_XoF(&qMgx_hUX}fFubMm|&c8gAm={ z1^^YrjD!q8{XrDQF-z`4W8I!!xZPyex17z=X`e?d9I>Z(wDD6G%v>VRcJuIfs&NT4 z&3JmD5Dq5kkXuW?<8hcq13+-=-yy>jz5UdGj%Yy~V&>17 zAN7MI@}nhtn)uO}Cr{0jA^Xt}lUqWeRR<;xnN|u#d;@tfGmgAL(qLF4KjMTa6JFp( zCKtfLr)ho%Pl*OX3TK8eAP|Hlbqc09y9c{~8Lk+D78wAx(*MKbX} z!!+0ouK#BmCIN6C>G4s}Ck1klKZ2d+aFhLrTUq$i`2_F^T71@Rwb=POT&H7-bIWJ5B5SQYi0~%kVV0Zo`p~rjbV*}o>&SKjLJpd5_@5Z2*KQ>A!hf%WTF_J7{{Z*=s?0aX|VYujU_KyN6(b; z83v;fkHD3qMNjV0s1qJ}o}~B&UPRU0NCABor}KFj2`}zX2I(X-|4IQ}u}$gO(Vl=Y zejM`{CHfwi!y)8DRU#;ONPwd#oK`rdn?6Wbc<+fvFRx#be~P5jO0-v6W(N+Rq65nb z(OUEg(4jIAB?L;pSw$KlQns>4UO|}Gq7DOSgib&lswuQs=uOiEXmwMnumFsZaGO#k z3a*e}rxp!Gep}|es-c|6iaKM(u5R!t*E-Z}X5aB^C zZPmIEoCA(Jd%+Ib-&vcTu*3G2*ie)G4*Uv5fTT`g0=yG4FFax_S2y!FXX0VGpFRf% zkUd_K?Mu-6#-J@cn)<=8`6-5Ya5+o-g>sey;96fCoS!D4j0YyD*nlUJF)Xlc82Dod z?^sPNfi_DhZPXKm-!`^Z zPL>7KH;RFJXavj_4F36@9?U#7+u3==DqMgLI!d0?{22%9;l~@!ppb%~t@-pEBClWw zj2A$$xj~Z+xg2z3J}6(tad}$!Vg^~kJj5~f>=^*egM=70%uT?v(z7f)IY8kh)&eD! 
zt$++?o=Q`45Q>@=U>Pkd05v7K+=$rR>lw)sO*9ZWkLn*F$-+db`U6Ibn$zeQ!QjSo zKAC{7-57^(13EwO!8WLb(RhL$5RD`hiqHp!kq90Q6JqI%y9}|h%%GsggIhj?Y@pYJ z*w>TJRETiKO?*_rlONcd5IsUdmS;>}ddL?JQ%U2rTg&d18>ilrX^7-OqLPHMH^O8W z*z+04%}8Zmo;*Qcp2jY?WehIN^M}I-eR6!PLE~iD>v0GeqT`C`=+YEla&zy#NI)H>W@U{9oSX)us3L?9IEg)Atvb=iZy&-uN@BDdUbJm zZBxE)vFbITdN0A=o?Tx3a&dWjeQ|ZE+0lZJHRG4l|MY&_?`VL-PG|q1(`g;-9kh3}jenMCoO#zFAB#XWSxTz)Y@ME~a#@_xCgiSxr#>YY zGqurh0<p_QxRHI0mVo3i1aRqsGu~35hwv@Ji`#;BnZAC zJ`h`pz#Pil(X8N&62fb2B8WitOfH8MbQ_WrClM5cSypJE*cxi3U>2GDAgVVQKg-Ze zXIFFz`3kL?1)KB`ihME&e#41TCYm~9o2|wurc~fmIp1bzX+{NBv3JS~R&O1l*4?^5 zxq?JOxXI4f$C}^xV`hp@8AYBV5z~b~S@SZ>e+?1hlVuu&5Q79WEFmd8BF6+QU{gTw zML63l7_SHRBjtUf;`o61{s2=EusshM(2n`Bz8OI5rhTJ!3g23-W~nSj*ploF$P{6iQJFG<^wmB1E%56-cmjqe}r; z0Emm;@=XZR6=3B>Hw9vpHh+pmf(GXH_Z+JRu_6C@wp{?1#x-96C(@|I+0YfPn99PL z%T!tm^#&?NX7>0E?efO5*1yH(vz5{$Hs;aBa>q;g5>eS>sw|&9m1$DxbCDf69~-At z(k>|BR`G~V;~44@v0XLKYVk!Q@*HY93$N8+ORrUjXsZUlw(48C=OcVRqURK^HAL8! z!D)C(!=@Z+86Zcp0WeL$-o~(#c(g210A=2-g%_|_Y~g(g*bZwyZm}ASZBc#)&A~we znF};8fx|Iur>?h+McAzt`(wiYWQc-(A<7I4s=VEzqPCzr!vJ;CttCz!!_=ikz>=oV zsPFAqyOXWGEUvv=y|%{mv*;Y@Iy^LrA%cRQYz&S-F%fIWDk>XNm30Dfj~`_blumQW z!|>*~Nh&G24It$A_`^qz`oZ`%q4{c2GZLH@DO{uIZz-LU@0IEX{RPY`bD9QS2*;=t zVJ4Enm*gE@6*LIC|327HhiQ=wZrldLZ4|d@RU%c(umD9@Q$Q-YFm$>)0ZQs@{#-y3gjt%oc1 zrd4wu*zZ2r8!XmWd>(+S@gNH3v@!ERNt{vkDTOkFgDpw@RwYF@Qgm)|5Aulo#Y#j% z+Nuyq4t^OwO6CGqI1iK12a<~+!c`zjt7uUanYSNG5qZhhN<`O%*1Zc+1ba%Mo?nXc zp?Dv|NzoJ#RmkpE4v`V&Scv-6Q#jh)s@mF_RBxfqMIISCeI~jng(v@@?x*Ng78Mt? z89K0N$_0m7&F&VfyEPb|xilCz@vkrNkofRO-`CP&!uN*=0Sv4dmA8(nCjXPl2U8A7 zY?gmZTh)ia$q~gq$3AfukXjj1_`C({R##C9qkc9)#A5n10wuHAUqU3Kii{lIS@}#t?&$hSOvI@XHJ?aVd zdcll(w;qHm)J?Cs+6(pUq>uHll41rO|0V3Te8&YF;IJSKcZ%UQEQ&W00ef}Av~^jz zE#2*@8+*xP1TM-}2n1)P{QSzra?SGEw)oe8aO@dBYLoV$%bdzKy!u*oI#g~kdZD#q z8OB|HDJ#3U%PsZ1jyTTKTp4N-tqKMgnsTnwwD5iFk-GYO=Axy$!;eXceNq`~?QxU2 z_7j7>jkQj=*zc}Y8Nw>V2B%ms&Dl;X9_6gBy(446stwgdySPpp%ewWoeEXPf$H91p z4J@^ZJYjX_5B^M{p%z!soNuc4dP1 z=F+$4SUI%gX>0l$(Hc_|K3mg2tLYU8{z*-+P1mQhc974k>3aFn=_dKzpUfn``%^jb zdb&yc8q+foGA@jZ9-6#TIvi5nkM7GD;w0{jf}E7!0f3BoDK1X-xqS&izxxrV{h0rm zf;G=t_|y{@9clZ$sjn);;#!uH~0mw)*$AQqEZOZ+sCz zD{38kiaoRLw==LBmBBsr7G6$e8s8&UG8QxyXiaVY4tnfN3)k3jZm+o;I7&F( zFzRMWm!!%KBkzfnA~Np0plG?Dn+uiUkPGEN9_v=zhBiH)@~LEZ+s<|k#-Q3wpyYcZ z6XISEzKxUHS|M4C=X{Xks86kEn*UN|J;!Cr9b`8rNe6E6CVkCD=vafK6tX02nbUh5 z;I!E%yfx8kHiMo6S}h1F1WQ71wDyUez}m3wteDWlv=6ExE%pfi@dCaOqn`0aV&Vo4ME>d|f0zhMs5JD4JlfV_Ss*bZ9vF&1}9r6?^Oz z(`(kVUj?HrNZTIBjP8VOi;FJeyA3l-r=EseWuT;ZX6t$VInHx#mSRw*Mp}|LJ%I)F-ra79&>8klrfzSi=5Q^IZa_y)l?S zzS$0ExXu)DSZ0bRgS&slBRhTn177r3YxFOP}TE08r+q~Ff>A+pG*u(lmq2@FlE z$-?A6+wx&nYp9?pd*AZkM3q^*sGv)~-uZ7L%cz6YSM*`5I;^X$Qd#P7)+)ZS@Hh(- zZxp7p;mVrhW>kFJ`p$(#gn|7Q(RGV4(y(5}?Ddq{Auta#no19GEr-Tpxel_+3()2iuEtY6Uk?TqB6S?&@Is4{%=U8EDY)_xsM@gmZRQ{gtosEbRh zcWu`4;y~ZtO z{_p{TYFdhboPuD9YeoXn3ulw%M{OZ#DC?Jyc4%v4v`tjkWK;8$Q$?dQv+WZ6RnIM3 zSyW&Bj%0-fS1O*eV-~W!Dq`mZQ@ORLDcRe2q7<-A<{8QZ(>9uAkh#&kjLK}9lx!h$ zIdPM{zI;&zU!R7AN&ZcYlibCpZi97I*!tmP?(D_OBqcN3;(ede$++s!kA`}BRog-$ z%b}1x$&;$e%Wk1SEN8u>u36bkSEVvnZ&t(MWa!zO@q{00JM(Nxm?aH=X9uptPFD!6 z_(#kE`z^v9b=yRpG57*c}!)k(>Lx~VKZd^dVmeL>V?_bPKrMReJ6CsNn)Bx()O z0#|k_CFS7t{hYQl?!Q#>7xZdb%iV3j#Qo=1zR@P_S^i=+^ic~}tnsHce3%V?EQ_m! 
zQ*e^hvTbUwhqVuw6V7BKu9l|&X{*;=n-##CzeFDrq{Jnepq zZ7UkaN0pU5KKWmlY-JcAx+Meh;~K@1A&BO>^xQ%8AsJGd-`Y^xBj{mosCF@H>7JZ)L7C>5X+G zA{lgIDG$U^-iS~i+YpO zP&{$zLQ#l);G`;*({BGE=LOPLIvm6PE z3{~e8eOR`MsJ?m-1bn|y@mrH;u2Ui8)Ks2S^GcXmzbeG+Pdefe`)jq1*s=+UwJ57% zs_dXV&EZQV2l33M^L@r+aZ%>$w%CB-?s_$DDH4^WcPOF!+rhc9V40y>Q6U>BhOGwt zbrOC~CVc%{zWuP{l*d~Jf7K(r(s?eXN@Uu}DMf&uA>uTZV`nOxDvFS0UD-2Hm6_r) z+Y;=>u?WRT)Wsf7f;Wg zy?%9kd3N!F%`F~@FOORC$Kt9d#jhaW(EWRd?zzeIY#l^DkxMz%x%TU;hAQkln<-1) z4k3>0h7(`{^=_$BudBXz$^v&su%ZBtnqq-I{DHNQn zvbvm$x^a)ggHWdr#jGMzXs=<6pVUFWZi~BN1UQT;Pro6fvEhgHN$+x>*kNs?U<45^3GD@H zW6HKbNl&2mZiER-?!gt zV?;IZrPXh#LLmNle4>tzs9Iv`vKm3Wkok6(iYh27PJ7iYOqFMCvS3onxXqZ^EZ3zH zKUnS6_sVk~)<#@bc4=s&L)O&yOK582YzyPJuK7}?)P1CH(a)VW{kZNsRF&M_4i1N6 zw9_(ngl3=TXsDh2{QmGJo`i-~q5gRO3x@g#_sR5sm4-Hh_V}b#vGMQiPv_u4KcZ`~ z^Jmr+4|kt>e52ynNo_LnpzsqhH71X2c3ZvE&bvYOU-y~yWAA$g4iPBc0Edd2;5JAY zi}T8H->6$O>EEE1a8d+ZJ=`xh3aW!BM24y~Qt;YhM5$W{| zjhN=p&6JHp2QgrU-vJ=QHL(8W7tumq6l=3e3Y}V{1?0=<0Q%N_IyD?nhrM=U74`sS z`{;cXojEcvP(cFC5I<9cSX?FZ+)QPj6qWCHxUYvx85654&5GQQKyI%p-PvKcJ>_m- z5l|p{ezq2>&>mwbBS=g}j0GYPKaQBh{2DNM@f{*3m8ejY`1Kb&(z6;3gwo91t|4dn z1^Sv_he@-0?YJI4W#tVaM2blW35|uEVF-mvk#qqplL~G~*O6E)G%z7#omUc9z2dPk zW-G*-bKx+L2YO%`Ga(dS6lBpxf@;}7FF)`5(D$rR=_IG?d*dH*ShFqqg7G{xGXh+SCrO;$?Ab_g3=1+Ji!)3P=nguqa%o55 z8YGK3cebp8lYK|Jjl~ZtR<*GWSz{#h7#4)tGAjue&*xUnrKge$!!iEO#L_GwM2)>8 z-b6w0*%7E}+WUqD7FqC$P)mc7fQ`sc((~&P)f@_lQDoyzbql0vvZ9=)XKy>)StqbKy0(;QUS%|Tts_IRqct~Rt_SRoZYOJ&%(S@V=t;ma*$7L^R$8OF?Y2ai zy1~lGlRB7^mSt^~qoG_sA&2rJorA>`%R0Y!0NOaoq3e=@ zQ4(6Uk~VAf?#9YSOr|}VbffPmdfjfz`B3AOgI&{Yr}RC(QRVv(_TDUe;@S5mlreY2 z@| zBida|Umc2!rE8=)vNqXhh4u1~Ft1G^MtKY_CCaNOvANXWBa7vfMM5 zFeIh`tLbjpkk8%(<9t$V&=!iVq#-zS_wEf78;3s{F7L?GKXX#T{ZFYk16QGayKB7+ za}@1Do3_xxp1=H?&LuS4{->WYn-y__hMLg9cDwr9mR*6!$8JmOrT3p?OIquAv+d|U zH$F(uq(&h6#kF3Zu^)Cn_3zE5CLq%Sc5bBi!bP(`2Zg_ zI~T>skCGICAmmw%n}3xBk`&}k@w?ZlkKbW@egybQW}h2RD*~@1w4~Bjv!P7FT5pj=oGyQH|)`- zZ49t8*20<~dMW`uG$;XB1G@)Rp>gni0m6C@PfcKd<(f~5qR2nISH?s7JS7GU1X{Rm zm2V(Wa~W+i#eiEjB`6~*S`;X$Ay<-9`(!BWh}-~+u7tOMn1gbX6n> zrH`Fb$4@JKf1XYm|9F!3^v$pzy$~Zonq<+WU_g{8QC&wTmq@HRwzFnq zJmz1=^4orNtdYh`5rV3Qc9xto!flae>0-60$;w+4?0;8j!KVG_Xf_j5J;@i#JR=rB zgv-@QIVn;g67=_&@OdH7IXuT%ta47~0&uBEMVJG*q9mKvPt$2h^ovHsF_I`a!N?#r zRcNGXVX_9W)sz+KIdIdg$QLv)A5R=;f2sZntN|lhtmaj^oWsb{10t5%nD?N?+pDps~3V77Ulnc+4MYE9=kCBKTRA$gk7! 
zm=#4{+&7=)OC+lGuuM-gW`9{$1>i6q|M`l|XT9hZ{n?9F)$H5%)~DxeQaO-jFn=qM z1@}Kw&pE>Mc9B$P-`s_NQw&EpOfW#e~U#@RB@G@%vs$8pB>{P ze;e~_`JwftPVouJ1q~%6+~Z`KoTT%#y6lPN8FPxp7io1CPt(DvspSxQNZzUa8t2mH+2e>fQ{qu>GM~wzyJQ}|!Lw|mB96vdHcKE}=@!^yB z&C&7UO9^#!f77v(6l7VxIz59OVgZ{1N;9!-`sg4x=^#Cb4Dqw0$3MM(A@SO8f!7(x z;$#lsrkAiyvkd2WiYUk(gJ=Vj?VXZ?XD<&Ap8QJWcp|~xZ2@+%g66Q}EkXRq88}bo z=`;a38I`OuNsj=YKK~(par8?~_Iov)7i#eX^zFyKd_V5`?q!P zKeH+Uf+WZoe5;48mO~&80_LsGVDlZ_`z8X-qA&*X^WpKqlY`>}BpuB#5 z@cQ`2qnA(ryXLat&|i4$$87K^-Kd?YEZ!qR}C^H+TI=J4hB&yIdkptJ@4 z0vI-}6%et9V}Gcp(9|PCuws50!K%h@agYYE7Y8qop8^XApvVbNpFhP%#sK#Ne3tS> znG~n11=A=>i%G*xX?nI8v*1gXRDkaq!W_B>U1V}5ba0XdECWrK_n2k^9;`1W)ZrX- z`Q<#FpbSe%<^LHrQ>D!b@xNT=u=vrU!0t+1DLSfZrGRKVK8#UxU6RUw(V{?l=Fqdw1}U{eKMpdGFrvo8dieR7zefB;zL)sBb4LK=>`YG*y4cY zvnefAuT2S-3}8|fx!?xSU00Jz({Wz15^R!^t6?O{fDS@mrUqPJ168dY*NWx?p?tWC z-lHBheP0jcv92GLhLKyN(L>pfmLM|l4=jVtx+DV!jSLV-tL2niyqCttGyxo-#~A}l zO97;Bpt)^#syF z(%g+pHYv4F1jz(uFF~&?rav)0CzE^$D)bb32SO2JXfuSMVibVmNKT7-DP=*NPF+w) zEDncET5f0R#(5dBWO8P!4L)_jpepxR$evx2(&jtnR>cD~&|Y)`5FWKc&=ByeGC3vZ ziH$65ob<>Ph(09WjKXNvrjm{;(9%KmwV(;d^WrQ8*z6sftY{BMTYpJA5QqQpD&CHj&jFb_-a;W0HC;$EiDuIXl9z3$;5M?BMWZ z!>~W}9!F{c-`LRtfDwwF|5WV!pc{&vUK=rZ0l|aOH#H=ooRSt2)ea?6FO%pLT$-7j z7v6HYIspw8?F~+bxYlqwY8bY&BiiF%^=!RJbnRmd)$Q2zhS67D(BX)tWl$f}c6rO` z1HK*j2mE>#svYa^?)~*$sE=ceP!u5fJLd4ZLNCRibQX0!XYbOoDjx*Ue$@HS#-ghY zg*Xmc##L@(+Y3{|)I@bJC>MqeDru)1z1QgQUK`CEv`(#KP|5ceE3oHs@DN|br}O*- zh1IIsV5ohqKaKbr!hmT<`>1CzQ%}Q4pslT&r^>i2WdNpP#O^$9R3qA_HI`@dOLw73 z6`MBOL;zGUus1R_&7nn|!MkBsS{{~$A`N)lo|X_wK&(#KBthR{TK&CL2mr%RD!@qv z8qOEID_oMb>AcBe{9y8{do?T40*7AYQ&>=}c;GspA2qD#7OP0uHRkH`jk0I1PY*@S zF9P(eKYG6zqC$Z)5e+GCD%_B<3;qz4j-V>%P|zHSU%v~3aJ9WP01Oo{+;6QAHuTqu zs-^+v%suKOBESPkbT;ihWFDAsq-o%h)V7UaV&GU+L~g(OBfScJE`uhnyaSy_`kJ|G z7NuWeIn|&kdK7A(rD=IMii6-7EsDg^xuG)4qh@gKZCrm>A>VL&b*O}XuZI1VqvqLI zYu0qS?E7c{aD0!Oy75g4bmM#6)Jn;wGjxT>#jYtB!o>`#0n3m_r_OqWzGIMifbIN}+&o~_dAvc6Q>n{+J~aVD z>KbkZ1wDF{MDhJ>1zG4Q@IZo#vK zAd0mWJ}_D~?twzUU=$2OV`XR0?72_UsHf4n+k)5X+zmU6EwKU+i4}doPM!nEvGJvo zs%g-Cg5E-)7pl)-O(V%Ek)gx-urU?vB7uS20TIf3?DqYuS)eS%0YPfbvBruwsNW>+ z#hxM{stt?;XUA3)Yb)>FGUdWmsc_>lj=}Qw)!k2nX{rhhRVO1+&sRsR5f> zTqYfLK9W8%NBggTMJn41_F-6JC8o4ZVHbr@JssGtyn-)@f!Duw%@W2+LDs>FWEMT0 zNr7PG6`o-hSbNDYY1%0EO(ttYst+dY5dcPQjr9+32C%g68zYnns^hWLGc^-Ov=*f4 zIAe5$cYHo-mfvqijS`%EG#qA54(&Le zma+t^rMRSx$nxiPJmE+;($W-RYZG*IQgqgQ=I?1+H;V1TJRf!BV0-#nwuaJk(e9Zj z_M#bBmdrJBy?h52Cne?Ia$)A1lD`lml9VoEn#~wzvrb{xV#!y>t}!}ruZ(B?nBOw- zIwsbP4{f9O<%U2fst+SiL^B9Nv;65TgEhX2#msooX^zi&=d)7 z`h4EeY>6qGvs0W8QX5Tu;^9LF6f7@1r!*d?8Kua_GV{0NJCxAvnrgRop3SvOlr_S6 zTqKvBfa)J5i`oI%+Ks=N3!eh-G=6AGYWKrNh07(ZM=6$FJ$l&QDQ_y#L5epVLg^YF z=Z)rIBim4V2w0KkQ*&~}xzCu6H#`g)boX1>+eTMkwG6einVm*^8mG*I&o+~zpNI)h zjoe>=+r{}4O_)p3)A!R$C67OJ1gU)0S+^5f=={Q>i-eARf^9s{FCv!UNdj%_fz7B6 zCxDe!A8WWb_z$^N@k5fBVh@eQGjWhO!c$* zHk;Ep?tpQ{e~UW8^+Y*nX<4p-mBF+IEsM?{6LC)g$c!-qFn)dlJ6R=caNedWlC0!O zpxqw4H(?5pfF^ys%I4`?HosIyaTC~4{|65#FP6B~qQ#H2(7_6_hzuA#jo7oIsI~H( z=WiJ>I&c3l;^*o3pAXb|x<~f~>ol>WwcwBAE;QbH4~*K@td;g*DaWDy8bZ$j2TxEwHWNM2 zn-ojtYrksc7pK=7vi4^51cTQ8j3IMuzD#L1aP9S{x%zQ4S2a28*6~P-S#pEj7Us5D z^+vcuP21@Yt;HWCw((q^5s{#5J=d;sY-?n>j-55VnGF$6!Vct3YqoA6H_27=d+mc5 zw2J76LI#VRhGbg7I;Y43X+0fT*Su{z5A`A1t!6{Z6S%%v(N*@p;DqVuU(>9`jk9)# z2T6=i@$jWm@c1EFuol>?asrNee?5LuKlHf)?t{i;?p#MeMv9wys{d^`Tm)Ixxh3e6 zoqQ4oklh^;*BB-Jh!Nz!ip~3>ay|YC=NtSXVxVO-=sUh3{#8D=1AVR17ebSxe@>a4 z9ZTSc09(X+b(R;^*ORoEtndWoOd61YAuj;o5w$vV7-M=A4H_4dQTL+jJ28Ney-aIr z#!Yq$TP&-~p!L#h4+WKDgSh5`s3m3CjJVba_=kyYlg+bfd^;4J_l3Bje=E$5_D`!& z_rSI5MdCcKDMv95nkNv>m}Z2u!J2F8C=YTyo&6TvE_QY8oN=hSS6b?�)gP^oIA{ 
zhhG`&rTGUU29eOUleK=P*3Oex)Z@?Ro9CH%tig~(FzW4eEt~{oJXPl51RonS<>uTjH+YDn@T47v+kFV{e4nPsd${a#ZBs2;3!rbmZW>w0`sZw1ZN3%(2jG9RqbqwJIGjp!JjH4{u|bKsdR zv_MTE>r0yyp~?$VQD;dymzLPlssyO@(;)?w7hUA#Wphy#F3S}LF}Nkbt#9O2gs z)qv=ZLA07Xj|A9a>Oynj3r)43 zpPp~k>j_4l2@K3ro5Rr71nTk0%_q}~d-uL0iJ2+|PIDl#+UDq1B)o-r0o^ zv7B=(4T+?QD&x>fYNJTygdUIQEIX~vI(xnm;qEwa)8|{WuUC?Vi6e`|x&(5mQd6d2 z%Xu3j`VbM1!BJXiM(l$vR;0E=z+emjlDLST)iqIbima7_o!848V$zRwG~ z0zp=7V6YkFdX)yg{20W}$keJW8!%=-Nu+xAs=TEredXaa)qf{+61sb}HI=OL-+Dc8 z)^P)fqLziM9y^;Sr=^IumsmG2J;k!SR;SK5tNaucqgc8!gJbOp?|i+teRKCwFPf&M z^Yn_zQhQlp#6>8fP_y{wu1yi9sWo4DY+N(<>PEhYd0kHy(tP6$n zf>tyYs>CIj#~B7ixbG2c1_&Nh;;=s@(|ONX3W+Kp$Cv_M3o3*jE9lxpwO1u~C>3Tv z@0hrlI>ci?l_;Mb=|hyzs;vo9mv_XlOE>qY}Tc`koulr|b2q49Br%LXlaGpf?}tYGw| zOga^%`#UAW!zC4Fq)yHlg|buUm<7)^rh#_P*<$evtji=Qfu$ zmyXa_V&|y7oLf=UbbaXFnOSn^ch=nR&<$EhfBOy9CQ=oDZ+Bu9q$O#Xtq$zf&a$gG zAnFe+2$v3}lprhWoG4!|R}kRCBbuCN?6adoOCPDx?yji{X(#&Y^zeje8;deyZOS| zaPgSuH&7~uhu`(&P_%TeyXA^KgXo&c4?gbR38dvNHla1IDo;Ge?mwWnI(+=$Tw9jQ zr(Pdhd3bCG(w-_xYbu2b%6C|pFyy9by~^pm*p2D_hjh|Vy3pRn>K)F017W*Z#9=}L}eb?fbOyhB@SFC1HHoemBb>S`&ALGC%;wY|D{!=9oQ{4R;J zK5yWPw)bF_3B_m{s+uG<^IyTAp^d_`)77tmX9rHY8}C96-rs{F4Rs(3t<(z$Ia$>c zuE!y{+L?kTo(crYH(OQ>$h2Q#Z8_i785-;5?69ShOLxMwjDLQG-x+xYKyja1i6tU? z^@J)!m+2I5hLR#5q``%7;1rBfqbLgixJA1UP(Y=8fGn>)DcYHOS^k-aZ16?Rz{3Ij z(EkR5(0ThU^NoCb>Y65l9W#&2Z>=mJ%#u@)Kqu%&AVWb(n;L6U!FGv)$zgx#^P9Lq ztN}%vZFx1YhAz__bZ#^A$)uP3VS_pg!sNRRg@Yx%lqbT+4IBd+j|*n#>1F$<=>T|b z^x^09^VSJjG_$>E-zQFO0Ui11rEc_wm!SWQ_ItIa!Jm!x!5)!Y1wPupd#}4*`T00X z&&^R+e0pGz`Wn$|T8k~&TLSGS%&=zA+)noHlTEG@T@ZfYss7twOryEqgQ8iTUB0821OB49Oo$3-*??u86a4sua;D9e5#b zQWS2v;mS6Zkf2#>gy9_d(%s^87Rz#evAh5)K2MVy?hW}QJscbR9&VQFp_LW3o3Ywx z!?m}GuEQOUtvva{SL{hQn(dsvJy@5teRT$zeZF_6bZhP`_ayJyP=fCL21YVFSQ7}_ z?hM4>gOGSFh2GzM8+mw>bf0ZE(OU)RDm0L-RHjdXeQr(mkIYF|ZQk?a{zHw8_><8F zwmn{t@eUQFyE|L=!j_-r@RnHx*>+$vG0M=3(M{#C4HSAYMD^RKL7TVAO866Gd|G!P zJH|<>Iki3Fj>vpPqjq!B)cRJ$L(TX+E>JGzNoi%)NJ8tajx3%7qr~-T(nWw z!W=fWlw9s>I9lIuw5bbGETo#x^isSv!F9}O0p4@OLED!5Telp!=t^HgL+wRlzR)p( zUL@GP{W&NN9a;E@e*6gwYr7da-Hdk1+~N_U^%+_pU2obqbf#BqKHFHfl7?hUSEB8$ zJiV2te@%J%&l@P)hSww8=xSkE>F8Hh_O%O$cri3Q_3EvM_E!~)o9K<;rXm5CR=|iC z*{y)P6>y)HfYao_aj+1tTv%xtjRyVbX+=d6sj>r==BGLbqk-9u>hoVPO1X~q2eKr) zJki?UTvr|3Jh6PMac(JEd7bz;$spQ`cR0lB58>X5+gmaFQN?V)Rp$t*K=%D7CZp8x zdbUmtXkX;Xzt!#N;7dqWbh^K*CchiAOf`fGcA7%4w$Lx&^Dxr@q(EE0+K)%B z;}z3Zc#)c#F-C(H-+>?Ho~4k}i=pSLjh2^d%un!s##eciR>loF+KoAY20645lfDCc zq}?2vvx@@e8kY&)6V%Z96XElFpY&b~v2#B$VsH(u{FmLJd;H_>z8`**6=t83acb4x z?!6y%@BK7{jK>ReZ!n0&rOOdkFe`epP#C{zR=2}MO)y&X{1SOqc@ueO2`qV_Dqgv( zu5`Y8*RwA%P{gS+SLX&1do;K)w51(d!j6H48~W^kw>E_7YkPgMm~Y@gY6e>szVb+)jqP7v7a2 zhS14U*X1*lYZ`BBYrf0PrY!gK)NtJTXTb{wO}f^pF(oC|DE(LMd8K`V9;;B+HAqWG zs}hsjMzwM}>qRYjcBc-#%)3*MIv4BI>cr(+)K8c6e-Q9XQ04SetI!e7XFA2|xz%OI z*M3CpN22{`)_mqd={@dORcrICh1b&dI9C zkhy!2tV7&YiH2Naw^;C|%l*|7&3nQ0$%2Z-@i^S@XaN1LJYvpXISH%Y4 z^|N#u{L$v%wR3>2p=;;xjKK$hn_ODE$vd_Aqjhu7=%(@2aA5BP0YF+-eGuWsSFIt{ zN&7n9;Sn{1kuyh|86kA2`TZQ)KE40kJ zt;LfS=$4C3)!AEpwo#w`S zZ={vGGJ;ptkjgapatWtBLdrPDHyZegskH8NUv#>!p9mbT>(lF!Tj$&xdVQwN=nVyW zL;V!vaIi6ct(6cf3VStLhYH#31!Febs~q7F^3`1vv|9k#7aD3hrD`PwjJAyXHHPqH zb&eW$?r>@uqT9BJ`!>4AUW0O_?f-62ZAb?U_O;K0qu%ps40-AvplU?c*Tvl|QDVyqPg~?y z1e$n&qlB=EkY86#sB63_cd+ihkv8R;-mksY^y8Ls7W&?Bu==+5d!VwBU-9-i{5R0M zeY(RqS`A0v_O#0^y@~?A*u>&4Qz_6s>H4=ohsAsd^+G80yijf80cWSP%U0y0PgAaI z+>%j{@%?B253;jzB_7xS00005Vr*|?Ywr^uiwFP!00002|J7P;Z`-yO{@!1~p+I2+ zuH&R>y6ZH-(#~jtySyMtyNg>4nk>;a5n1#|%8qZB|9%f&M3Izar^`BQ`I1N?&pABj zIj_vl&JMbr5sv0GEi#N!JRv#eC`MecA`$S5GLlb`m|-->B2HrwBRWyf+g%W%?{Bnb{Ne_+W(Nz60z-$?3 
z4Ma4E&}xPmHr}DnECrG#lEPX40nd}m=0pgbS~^w(ktQ&7@%i=wuaqj(yW@&`u}CGR zdpQCp_UB3T!H2O5UxV?#fyd2#`7niru`N{43Q}gX~boSx&)`qY{ zE7eI300x;!Y5{;&pbU6ek`&a3M)wP_GU!-}WY-8PT|~b6bdD&49EoI*U1Fq&#h^t` z)U%6ECqJG27+qaoUR~adu1it&Dp8bKP^m zbIf`{Xtbc!i0wKc3UeByYxu=P6wnTNbq_5Gz2Un~8&7nM1|X5FGloNB(kN!rVh#}M zG-<8dHR_T-bGpQlpl`jDbKJ4tl?c9YJ@2ERtG;;O+#ogR)joyZ({{xA&U)G~WzcY_ z+)}=bW=%SoSeJerGg};-NcLz7 zmS%aJC%6ii@CU?mj$}0juE1xodAzPReW>>6ve)$+ zJh34D(@NQ|E~0Hmsw8@B#F7OwbYZH)mX(hzbQo06kS^T{2Eq-!t0MEK^Zxdjx14ug zEaz?r0%q@YBj(Zug``EIiN`1?2mvdWCnAEw06_uhf@Eb=*U72uzX4SDyxXq@QqO-a z-J>J)!*VV|0KgY`|Nc8tQv`^tJTk~M-ejxFPPseCo+c!N8s~&_+3lw|CphiYO3$w6 z{9Yt=VS(=<{OP$1BzcnRws2BKQtzj z(yc*;$rBSwonT*=Za~+>dsX;g)_KveHmt}(4F}(WsNW>}s*`abFtbF1|4Kyw|5aIR zwjQPzSg6&_dyS^_-?qdQe^EnQYlB74HLpFX%&LL9q%Zf}?730*y-?e(F(iV^38XYB z6pRU!NO>Bi1fmZa7m$pnF_%L#NM9kSW$Vag*j8adu0e(d`0<3%xoH=*VoeJRO=H8` zQg8i`!I_RSalS^-%X8}Tvdv+vq`+xU=O)NBDhHu4B)GA6&OnTHK4$U0qrx?0oR@UH>7z`PtBFqp(okyo*!b2v87OT!#Dv>Fl zG+oG+Gch~`n*uT|M1i0DRGt!6R?((8*nlusn8CieY6EyeWk#FD(DCb8fUe)tvO1_* z6(tT)Mlnip1T=bp(@Zf_UtNyQR-(6NF)s%qB=^yzxH&0N2Ub&7E-Kk0If({+5Gc$1 z*xYM#Wk)C87#jxjdF7*>U|0@r<>s(XH}pRn0MJWR+bd5R1#cs{qtivlM@|0k7lWb4 z))-VTywvuvu^|3nP~SA0YN~}x4QO%ox^GnsJ5soVLWrXpjVi&&%N&gfBei;?SyU0( zR;`UuSeC5S5XkBMhdMTIm|ce%6F_B+&~ssqX}VLSeAiL8eTs54BZ+4HU1vaU3`-=Q4%dBgH{15 z;KBW1OY!25006*kGl9)jjkjx4^;mV1m831x$Huv&X)PPRY#3%fqGgnrU5It3+qApv zT-}bMzHUn!yMaSP9XfdY9P+77vSI|hfdhDW!pK6jdP~JIxG7Pk($A6%6ku2JI-_w~ zo5CpgDPF)T@SI3HjuXKrP$-lM${jpBW!4CNw`CQ$%|EGS+Ztp4G8Ybv)au9EYOq!} zH`n~_&i=?|@*f%moN6NnAi5trkG!aO{uz!bA~{m}CG^PZK27x=a$%>uLiqz1YVP%% zrqaH-Nyu>aFB4ekVBQ%3K{9*8Nz@q|ht>6)!bj(5dz++htt0bX>1u=fJGyugUeTf5 zhC=UYGiE6Mc6(^x(A{?TQJsi~`+qcjJ+h)Aj|(xQ42+ycT>XoR5@^G{S~yd;-sXPz zyFhb0gi0^pvnGCc_J1N5`-a>n000001a)+2Y_|d*iwFP!00002|9w%rZo@DP-2D|d zCO{Fdg%f1&(#f3@*`gbwwq!_@1GmV(SAN84+9DbR0^}X9TjLQt zX%s!=M5vr<=xp5w8mmK1ev?cER4djWmVTCnNQAEh@l}q==)WVS)wF#$@vi*AmnRxl zY?E;`jUr!_a7T+qh=nFkj#bT{Q46GR3HjyMa>~93xl83Q^HVBmIQAX$D z6dOToAM@h|FBJyCLmvJYT)fT*1YZ_AH5@m8hZU z?+meD`0d${o{Y7jsJUEC*{xxgP-2|6ITXE+-{t$&SY+s^6zg{3dB4t(&uLB^BS>i- zRSof9!T%!XvH7&q@$LcZv2#f-$v40a%Q8^{000001Y>VxWtkrziwFP!00002|Fv6f zbK5o&{+?ff=7THV(Q1>XX-U?i$#gn5H#9^-7GsKF`65NV_`lyS-UxuC z?4~(QXIzrN?qabo&n}SfzWa{6U!@gU^Q6fbshXnTWnGbFlgD++bDE{~j+88;HA_g% zNz4;Q)-dS<*Gz^73{v5d;7Gh`_I@fO4HoCRJUg`ORcAkDco1FPCH!Q9@BbSWUP06b1BlEr^Qd6hdbog_%t7^86Xp#V(3Q4&`1)rD~ zJG#hNMDyKbGS7FO9K1laO(yTe-)Z0am58AF!>{tw)Q(Z(7tJC|s}-=(TyTdODQJ8P zdj?y1lY$sKLTggP`2sU6~4hQ@xfvO3S`}tB$K4&#eXibkoG4!rohW#Z5_JrwX2>@ot!k8W+iN6Y@bmUx>#p*}PdVSXl{r=^FN+w)`4b zvtY2%ifDpbt2n7xxq+dy61Kqe!+1s09CoCvzK4=i=G(QO)CLo+Xv`q!U(H-k!jd@98kY zBk(V9>}7*-oWSx}jz(w(m^Tw5gX>F(zDItLm?JBeCoBiwf_!D-1Sw=>lrc$#qn-!= zQu)pw5{&v*2%vGyn;f*Q$b^1Wi|i0E7{};>Hz103QrL6q?tqfaRr-TN6H|mA5}2-} zyP;KuLr*D+rgBEcr))$!E>sYft- zm_wi4SjN|HIi37+!m7AT7Z?w7FG}0H;lyEkGtfvnTcV?P^TKDR>{J%4xd;mv9Ec7A>u z9lx1h{Vbv{nUU{5J(;$m9n%~)L+b$4k*?QGjkrg%v8jHzczroPIgQTF&p*7Mzj|}p zQhnN`n(%F|=xSo$R*Dy0qylhSTK9PTdOH8*^fJ15_vY;Qx0cTjJ$zWkYSy|J1oN6~ zX=Mch6=f@QnVxkiZJnQ8U7fvs-7v}qIMQt66^n0C5Xh>0izo|`iJ>cO0300Xp;3a)R>EX>4(JIq*`Plo zxYbU+U8Mj_0MO>&)(A67mPpPpvXpMGbgff-!Zs}91$+f5U@HpHflwBb6{ZQAy%m9k ztvR6jjP5WALBc>QMXQw%3uw^{#wI&MUxYbBUREE3{YXN|m5RSUiFyrAj4KH5TfTV}-s8iZveO9m9Z-#q59>;<=t~s!WLQxx0`L*DsFB)`jv!XWk_Br^gnMK* z^S?ywr~AeBTpNT#$)qm9tIl$A!=N!hvrsO}8HU?y87+9v4GhA>LJc%r{;&!HyX1Wh zB3G$$c$79Z6GnbBgB-}#NoWu$WL+$_%zI6G;+MX`BMEVTZ&I-6P8*vSaI}?aJ%ZIC*IKr=Ldl$rYvJN47bps z7}BH?%V!j_iPZOz zZ)o_wV`QO7rv}lt!2T$Bi0L8lf?SVTwjUE+X1X>aH7husM zzeo0bv5t@&;OwA>Z|gnSxdFET8(6|7)s;Q;J)qzX2hq@JKo+#P6g^37o@jU#EVgIZ 
z**0gg4p?}l)P|NR{*SXFnXw~T`^_NF)sg*bq*r3k(P6D6JtGHB;JGj8xb!jzATzUr znMk0c4iirZ=%2AqFb#&gAKZQ)qV4v(u@d+Q`MhRP$(CJKW_(CKry+qp(`U~zkrBLx zxZ`(nXZU{V3`WC;mt)i@na!j-&pGCznd^Wt(eq-Z+rl4r)Z@PJf|F@dfPsEd zcAVNmekc-bm*_JMmy-;~v%XmEhCr$3muR^D{{Kzer;-x{9JP5`Obir@KMM|8GQy#U zg5xFoy}=H+%Ce5&`pionwzXjo>Iq>AEEm=AC~$YZlq4gaxUr&)>_DnwrfoN&6KQ4k z4vCZbS|+%DCwk)X1e8|S_j0}#br(YP9c*FAMROX;C^&TD0=$Us(Uz8p6P_&qfP#(p zEw^`ZgUhiX*Oh2+ROH$NJlCSEprUICc^lH8t`SxJuB#VyAcl|)@IYQsF=$L)v6wbh z3$&mXkDC(8mKsY+=)tOj6BA_p2>=&GBn_Te$?C@dH_S7YN;IOp6&)^sFS1Aw1@YR! zZ$@4p#YMnqG*mJ~ElH;*JB;ZbT2l04$&lc|EgbPe9Tka#yUtnRs7JIw%H!=L`I%+V zA*j3~SgZ9U(a6@SMkS)w**tBVl^K&KKqUh|gdKtgF+@C3uL~Zrq zeF*-fWKMT9@Bj`n*Zq_pV5ViW6R8gS;CK-pG75BaU=ad5(?0o_rnmI4$?Btnk+tQa zC8xK1{CQ1n=X>D70Wbb}jv>2cPIp%NiA1g#^kRM}u!#H$V!1VB>i`~R`OB8Aa!%X) zs!RCaSRQnW!(OL}jrj&)eMSo@>a|$I_4on%D2%SNnFHt)KY`%hdEQKans(-VbkQ>Y zAg4f@8+`=7^Z>f0ouCiMB0-6Gi3@97^ZD2VMYSbr)0N2io0*^&YQItq<4Bg2qwZ-_ z&_f!R_^}J1@9wund7{&47h=(f?Q8%G25n-$6JP*z@gaw<(i=2je8k`^4*8waZ*e!m z@WN2u)J!mPPte^$)imohRFbFa-3E(n#fUD1VIsJTjE%=BT+GWMfoyQ#4)YMlTN&wc zCs&PWP#j1&A_(Sj-H7WLTO5aUAv@ua1bX^q$Mu|AS*KssGicAZNxw%1CfU#BlJQ9Y zp}F2J-&G)0RlL_aaHFsqBWkEIK1}y}sl=#@^`R-+PeS(xOd!=U6!syHe@~U{fz&j24d{_XLE2K5CDt zgqFO}ds1RJ*xm$>d#h)Z)m)C7k$bu)_Kx(gTWr2%%y7ImI?xQPW-nT%$ReM6gtE+Z zASuc%5Fe+#)6ul_2-EOD@$IWB!2oZ!Q`VkZ^!*0Wiv2T^3^Y$?ds?gEUdn^mu5;=g z4AKdvgP$BVy5gZihkL&JRQq7HDb;C{6Fn<8R)_b_->abE1NvYu;o+}$(|r~-PZEzt{PIUB}|M_v2tl*1sjS8%oa9&fB{^=v9Q|FU-(AcvO(?YPy3}d1)@j zI51JCd7Y{C^#l!T(^5Av*R+mTOkUCjOalwRByQm9uTZlp#rn- zCKbGrmqGwkCTyAJ{a3uiwdq9P?}>34^2j%oi!l7K!zRF> zC2Yg9Mhb?H^4jh)y=h8wN^;#c^scSr?qPt0e+??>p1{i^u^IX26Y}|U`}EX#`oVd6 z<~;ofPcA0ZU!FVTesZ2(I8Xm@o}PT~+sSKIE7-r56^wU(gLbRuP7mTfn67GN4~2V2 zPJ{5fzY6v$V0;|tXXpa}^|`MSwmwfEa=Pxeajn_%IH%uf?MePFyp=_lY2kuF%4-(w zwakpuL|^O8&G1g2x$Wc0HfkqPvrlklSexKc`McDPTtJk{dONG(&_&b0xzM)~O;1wq z3h9&gHsb9IuU6u~?tIUk;e0A%p0Gq!JBXgQD%jh9zF?Iy zus(W@%^11RskC z0000000RHDRPW2vFcAK}zvBEdI+>e-h-9xrnLill6cN9cG`)3UZ8DR~Uh2#Kchjcr zwoDNO1EnQ*eV)7L$z?vD!zV5k`Nsn75QJ71m36lH+R6g9+O&bJ)M77oK5+)$LShq^ z7A$FlIY7L^&zCcX#%Qy+-aDe8W^>rRK{eN1rb4&D;HTW`L<`M7?Yu0C(xuRkzwFP_6Ic(yo`d~sPR z?GD2ki0ckyvo`zCWQru{FoCU8o>7wYoIf#%b%C*!g+ep^O{bF!9b&Z^-Yp`s*`XmWvD=_973W9I3NN)Np$ zr|$176aAbBu9^`Lyez{6X!RtxS$GbkxDp7YyBRr0xV>{aBpvH%-#VXiwFP!00002|LuKgcN^ES;CKFt zPTsK$SP%s?7A{)O&=PIyL>6^K*?F0q4A=k~WXAxS4-JT7Jf7b^Rd-)|1Gq>glPDz6 zeV4km-CCc1_ubCXY?fXH^Yv_%T+ZTP63^oEXq7IrV3h`!(Q=iHlgnro2gxE>UBp2a z&EcO_^jjQUrs)hnPM4E-iI0!d*(@Hf$aC{#k<8cg9s1}p{-1TSj3@Zizf(5Ur)ZW; zaIPSWSG&P$Id!zeSFR!$coL-RRgg}NH|V$7&gC+_Okt7fBAU&vad>zxS+W&Ii%9@a zd`fT}ef3Y~(IPsJ=ka0{9KC-NL>YZR?kv4p#LL~CosYPi2-X~p$LV^pN*3qAEIm)g z93%{8um<|#I9;rk5y0UB=AXrM;Ax70hIbElI=jI~`2M#nxFDb(U$AN82!0hOfD6c4 zu>Rx|z#ld@i7(SES;1Cdu~+eIhW{T&%SAld-8qtrOBRy^w!DsJ%3x%MEJy&j-xmP$?8zv~;0v;gNjyy!IM-_D`8b`#FQdy7YFj5XwewX-pUWdsudegHIrXq2V1^(q$2S zPu4RSe6n8B14OdI1qk0*(!kc*+F>1&`qDW01w?Nbn-ExD;HO3kv{A z1C)?d`cNfjH#lOn(_&nQ4706|Qnkt&Mc7}Tzd=u=#{ zr3NDmp-n_HP&+hP4A9Bg65~Y%#FXtQipV4-r${D)<@^@Mmnc=nzj5^)Ux!svMsqFZ z-wpnbpV7l0q(#WrT1|_F1Ar}Nb2&Lo9(BiJSC6M4!mK$A&AFGpMkC<1Rmj)@*5q1~%= zJ)4lhqRFQSHG0sP@EVZd{X>W+`()^$WK@=up#Wu!n+&B#CP462exanF&@0!J3J*es zJI9}+Io?U9p!6Q@JWppYcb?CZmq)O|iK3=brVo=_k_0XTW8fgL)*Y^u0W<*4|A~k4 z*&#m5(E!;b+(xqBAZpYlj7A9Z!$Z` z)UtHF919w!w@184PaKySayNSZ6reyrA+F4@1~}?0=(O8x8paQC6ANHJOba02Q-F)B z7sD}mr{klaa0ug#J+rUujyp5nC{Qdb7Jz_}86Wkd{sz0%32k#RAp_t*AuLvtlEKlp znfQe60x9+i3;-A8=`mSoLlZLyFyFBHP!+J!s9mr$9b&vQFr#7v=Z$AC+8x~52G!<3#cwNP;kDv64F zeMFT#164NUfH;}JHbqCZIN|rjxL1xt5uM3*suj5sSJzaGLoJ7)vrylct+f_}CIsy<|Or4}FC*^|>5jdV&)h8l7y_3~aU}RjY(z0x_7u 
z#fZ5zj_ji*{|4VZ-I=FoSqP{FeY%7jALuN<9UdA=Xz)830}L(!F|)(i>GGNW*%asu zFJ8vSS#l2E3gg31a6UMI_|##-AFdZ8&^zIi-%r8ng*qMo3d}R98| zDd*8~{z!1MU#!H2j`6Rbh|}kfoR{E~qWUH~)K&nK^c zd3!uOJ$n1`FrX;#e7*k?P=EB-A`JX18|mrOr`#VWZd1e`1{bQ!Xq3$0<`amkC%OG` zv#{{Dv2)v1mUevd_jku1hOds^58s`99R6_f%e&Wy0SXJCOJ!@$ME>S-<*Ct<#U8N+bh@&3x@-)Qr`dH3tl z+c&R=uTFmc`OWDmfyMEMSI6%@9{qUC=kB{eKmfiVZpbsranF2O2`-C+L7SZb``yVe zKmIg4diC+muYmg>kN$Q%{CG0_<=xwpSAXLx7$_?^okiINLbHLkBD6E0MHMV6AUlK^ zWQ0Tq7k$d%IVzrU;c%uN0S@{uWu~HJRba#OJAg9k0cwdsQIHZs0EKM5m_`Y8pGmtL z*!?mNC_Fx5h%#+Q%FUhGwBhhkNBIORk~=sO1|u}QQm28YEDAEA6v|HNQ@WxzhAUtZ zust!Gc2K8pk4}F&em#8iPCF;WmZQ^;L)bPysXd!e)1|!pkD?hdPl0Wd9n%a=)x^bo z_Vb%}fQWD2o#Mc!KOKDldPK@RbCK=I2aYq}AH6+#_X@G*Wcc>v?|dcuo|RZy$yV3I z30XkOv$_M7p1J+h;Eo=!I-@iSUthudDL-)s&OU?WIpLm%% zxDw|^!~cE!;e?*K7+K}|KQM;^@m|GnHy9J2p}y4jAC6xgeFP->;g@%>aLr<vVMt~QAFuHfdRld>$?-a)AuKTKmH(Q8W=MfhDOeh zaRI`+$R`kfD?96OC~a%up>Z#kH4G>ZsgJ?PA_2vXynG5*F`!b zQ6BO9GF^fjb0+jRJfbt5aYP{F*You(V#~;+(fW)4w+%M;Y|eg>EOvm`CKA`3p(37PT;BVo=w@@f6{~b1;j{Sa6GD zg-sP#d>VyD&TWOJ36X+3jCic0F>%XHRa{{tqqyeGSROe$oAC4%un1t=KLB4i`Cw0N zui%xuJ`zu#U{M*pUxH44NsxrZJti=}8k!?)w9XhG3LZGD^?0RR@7}{#Ct_?A9KXe& z+h!{i57gqqR|h%`y@7GMV>D|y=@f3JglO@hhyjHSA!wK?^ndKAi#%Z$fVV{de?_dR zxWI9G2`ij{D6`jzF4@ln)UFZrZS&n~+YC;1kE^(MJr2(F5I1(e5%1DZhZJWHg`Mc| z@Vzh^vnk%-B2_eYTaHtA0Bx2K-}5RH0UBa#KF*MiGbH02x&YboY?&OYd4~6s{UN#+zyJWZiRd8fpDsT!B+ zQ6Bk-Sh9xHB07X25BbZ7hjbpvgtSF`L~AG514Kv}6|Sijr;t$&g@RS!OD0BAjHUu_ z6nbToz0$qYY5=Yt-oT=8SvVPudKc8d9vVu&@la}tY*KBD=)(F$*lj#dRx280;EyZI zJR~Yv0$0a|bl5}H2>KkX38(la`Aj)+(?RNd`Ga`KIR>=^<-Gq3dBFKD$JLy-|IQw9 zzAsfi74oiOUW+H?fvcnG-1}E@A^8wvVDsNdTv5I(AXhI#!VsEdm!M%Xu_eV*&Y*LWq?6s?6*~BkkI*MM8(hRO2GOvm)hX$VghENC#&n9luU(4^ z=B=6v1rivG*1z8t7)7bcLl{Ivm9eE=0i4&DJbD^o#WpK0IOvC4&XYw3U!5A=m zX58E&4l^MZs%!?zH>PmNa6=-jTsjhLB|?vg>pOPQpx#N8S&{!Z6TTt=b#9ebq9w(x zG&ZnE4CsbO9_0}kLC!@pn-cOPr{&UR;|bDWR+jYW9EW`mM+#;@UMGfR3*d*>oIdTio>U6HnK5&%>uEMyAV%kj5Mln}{+ob~6h>?mkBk`GCqt~Udn}u7^9!6 zvtFg>)KQebfY78*!?d6~q^L3^sN%jVc=qGGDQGtHJH z<16KI?X;t4W)EFG^zK0LrJNO4yH_hf=5bVxcR?rS3|4!S3eq8J42R%-;f7Rw(GI4V z2XOB)eH+VPzSG2P^WDuc)bX8b#cVzQ_0La9eDyD(o_vT`>*WHKE|OuFty$;}zulwn zaCwGumxi(#dXeHoxCGZ~GR6o3`Y2ivhcw3t6D{R>n)Jw&t(0RNXlDCKIsWiwi>c02_qA@Amk$F8Ta_^}0 zF6rnR$jArHXX3e0x?HAL!}D1>ie`RL(bsm>WmY(TI~`0Wsd-Stf|WrQo1j4XllN~x z4SL{_B}!PsLXT^(cEcW30$;gO!Uk(N>D0SHOM8E*4bTi8J)&Ch;UOZOu4gWdXJ5Ps zDjqZqPAltYGGTe9&Ssyy)6e9E5BfblbYhc-2dOl0l%o{`-n-64?-Cf5gr`iZbOwiF z(x1m?R|L;oPsP0lZ0_ny!d4o^^g1~tq*o2GWFGheh$_J=82e6CbZR%k_wE9s(*J0$ zt?{S|yfp(@vm|_3dM(jpiQh^GpfFGvm1PcPWCr(yx1@WxWxB*$woka}60s)e$m+bO z(%`px`A>7x<4{F>)OlTQ?hF{I$tz8~s;nlzC6aLz4FJO>JZFgkLlFj@R$nM_w? 
zAI)bw`Ud3bF>tk%XoX=q8{?q%QLC_R+2^T%33TbkU#u5Z!equFco2-cj*1nEE*>YW z?g!)b3s=6T9&dv`jcGK?;#SaZijV&YV))2jg-)I3eBSB66ip(7qxC99i}6?uO3{%d zUYP+Y0AZHYMG>*_h}@!#r>ui{rj3hm)*qlsOYofD$#V1>^DyLr$N^0*LSr@@Iw7fn zl#n<-3qi$sz$I9VwPA7p4QtCbW;M0ZoX6-QuHc0!TC?FQ9V#}Pp%Qb_u*M@GHdg0+ z$eg9jp4=m0EZyhvY^?^ec=JGMHi?q`&rl1vw|kl9Ku=1s7{O|u)x?;ScW?g@h_YBF z#*q9RKiW7LthEsXKX&xr3N#?%^kmU#!Z=`^*BRF?^YwQzI*&**U3x0J=0%wMt$`aNSDR0`;7ybcxRbiG6R2Xo799qhvv z04~tgclZdmp066~3V=QU+9}!&4S=LRxx;4((TrIhAQ+#RV5XV@Ra@nck)z(*)!^4=^aY~c7NWD0-W}yoe`Q~h0!o@6=YlQgqacPaY(k;dO zsKPWVrxw3AIJM=_T?yTorrJppm8h|x71gEn*3PH|I72s)JJ~SL;{BAyyy>O#SiN6a z$y}xpw%Dw6S?8ETI{`IzyrTTcQYe5Ii+N}4OS6B^|z*i9URFd&yWS6sOOwLo&#qd;F4T+|VYqN?{6Mn`Rsd_?M zDQ291iWe=6y_41PSbis&pW491M7$)HLaP8qtg@bLTMiZaxlh;6YvsH z2i}li-cS%vK3^s>nE`W%J8T8&UvSx5VCi}sh^zgwunrXf^m+1lX@s$nqgPe2To56%?)0-u*-onD~4@PG9k&qB9{Yrqe~Y7AEHg!_oi z8tBCrSlp}O{t9G}|G32aP4X)suT%B~ZB;&*AC_~XcvA^c10dn!jO~jUEW$O#aL#i0 zO5+Q(&ghj3sdmfK@=-I?#;R3r*m9=8;)RlK3HY+%1d2K)Rz(=ZEmi&djfeN0>rwaVw?El2FOWoxSzaH3z~%<^2A zn#J@sgckh+&~g?fLiu4u;KxLQ$^;9VUQc;RQ>ajE9?9MGw@D@dO}^)a{+cqXSC)sc zDoNj@DNzCBk^$WqZ8%@A3IlWWK`CMx;W;=kLnFi}HU|uDKF8c>_R1JODU(Sg_26X7 zvzD^NC{?Md1W5~L$oTx*AWyN{^2F)*+>G_SW3pwJpDR#&9fiTQbD?=C$;wg7hY29I z5X~%QRqvV3qI05`)3r5QMx;M6n>eQ5)f7>)BqUc33y(-Ufu_?uyrdY~c3yF%X>ne!E zy;+(rv#TL*ETo&sD~P$4|HV=g&1-&FLUV|byChl}(bswr%2BMhc+l#ElAz@l>m%XN zn2|U@ZHe)Ut~%b_ao8;@1esSv8RF<_eODfS#FzfNM)mkKG8LMD#gp=e%r;C0YvfMV zR2?X+nuawR+ruLU1oEDls`y6q;BbjyjO5K$U@e-Gb~$JHw@}6Ec4U_dQCe{j0~8OH zr5jC`M^UvRQrk=~e3Zp2QEr}Z|0&B>&VqALMwv5+ z?+0rukh-5(x@(!vNKJeKWYV-n;vWEQc}=j;-v~vLFvG-06kM%pMMn%$EJk1YN^=t? z_m>7Qj&0R7&IFqS{eYC!2BQ-#(8_L_{wkPFLx9rs23eVn!sqaieKJg^R>A{c-(TFR z5t5DZwJ~*wKjkfK+;mKme?GzU7u@DzK_Xh7ud)71Bfp?;({dFqVrp}tY=*K(?nPn+ zC!NK7kc>F00AWC$zxyi8Y>Rv8jy$VSP|ICAs7ahYK7L2PXsGN?k({I+`7%C-BaD~N zMPc%n)Xu0)WqSO~??rmP$XqhS60#d&Ge(Yf#ta;UvB+XaXVzUbAB&yKG>kS1BcV@| z1v*{@k7r9K9pg{)h`fP@i}rfMgx#=!$#XsHRPNO zImojat+O|zb@m$0YI#kk=N%Tjg~b`*#;yD*#I_NSzhKl*7#} z8`dypa#F1;>cK3?)i(B-%QcM{Evx92U=gr>`r+`xRRYUJQ!V4N=qZimpTb!vt9T}7 z0WJS5RynQW89ZgS{Bt;pZWYgxUyRE?nZGnw{+xf9%o^FQD?gL!bhMYQA^14oZ6pii z=PD!T#%@etp$eYG9GJZECS>Zo-CXaHtgj(Zu@gRe4e3Z#?v={ej)EXun$wbd ztkEAlLi@xh1y_DjLSIdci76 zRBDD|p)acv6)P6LYDNChUA2o!C@{Gtf^rlXU%{Ya{lyS(zp|>F(gntX5vkDmRxfn5 zivd&FdW`MCG4_gjZK&U6n)2R$=xaRn%jTqbMBS$Hs-j@ z^t=^q#+RO@G;I-}F4O(eGiO!>Q?-m~+c`!?qwP!61p3ya9kA^7MLhlu_*y(Uk7>BP zQizcRdXRQGl1Y>$NCnlEBNfL}&L8S^EbA0%V3{?x1QP~qZ0w4avwi29%JJdjK*tx` z@K{10te|$Pinry+=bVRd$VZw+R1)4AGgPsZwSUA?>Qpp4tcy0VcNJYl*92s3GCPKp z@pk>3?5KQY55BZ1S>;RHzxB9Mu}p)8^`HWsmM5y;);W%th-L9sye*YNUQ#x)>oLFO z+6su()v6_n-F_>3uV~^sS-}1J&8a1&xwe9#^|FPV+R+~CSztJ%J{}3RoF=3cs+jT2 z#K&FK(9b+V+|W21W)(Ec7gtcMf_9O_^9B{>!gjZj8$-N`fLBFm;AP5r?qmLuk_`CG zY?S~h$Ct|GKo+&TWffwZI>Z)Li1PZwy^I$uAFWg|RFKT)u--U~`MQ?zBJhQ8Q95O~ z7%y42uW|p%@q&80;WZs6b#p?E%pf^?;M%ZxTm2LcF+ALnxjHnth{;I|@Mug#|IT6_ zp0DjU-OQrpbc)>wD%X_VZli#{H&?aJ->ZpgOWJQF&3$qhWlnCJPM5; z1n6*zy~P`@zRykXQc4(2bJtbI=$dma$#3ZFUp8n5JhhtFgrVp1$dq+C(SAnQ7H=y1 z!clOhbXf!wK73H%o4zh`#o+Na&dmpq)y3gfK;`KjA%%9c9OU#U#S&mFCuhp{{cRQ# za@{JQo0dx9Fsi7f`MfKoy5Tx-jk@_)>5%qa#NTL3N`ih={!3kMIaX2OkjY0!PdlZe zp|^at68grX@U<=m(*$F5(kt{mT(hcmA}yY-mL_|WfW&K>9!ryv4Wkl$ltwh<*0InL z*4V8P(?APfz^if)I#a`ZPXz`~{j?C1Z5p=2=>$tdXEYWPog{IFBhU&;bRx_J!1^W2 zcy_J(kzfINlNHbl-dG-K47}QJV+ji;_T=s`d=wQ z=`tK0nUnwW>eSJIGq8K_@7@dn7MVu^2BjaKX|V3NTN744#w#;3!;Zb@q(n+{oXIVb znnG%pg>NTz@`Tt5DHbNy$F?9-9_nmG=}^WAKri>t`ym=dh0VHBh03#*)Y}R7&DIR% ztF3uA<8Jg;bz2@qeXAR+bQ$pOm&AJ5X{Mq9Rb;q@`CiL-V_>J29m7n|_GW(mZdn~s zj6U+XAu`>_gsrz;SZzq{FcH&u z{^jlQ@bu{INB1ft(w2k-bzlKdTr4j-MouQ0yK+2~?A}ov^Rt+gSWwx7H2l-VnC}Ua 
zQMb*SAYNsXO%^lke~9=~(Vcoo&4=}fn7o=p$vIBN8r1ByLKR=|D>H8cg;8{b(CEp=hDT$2Fis0;>60`?#+0 zPD!p?lh+z>+l1G8_sWX|xeEK4>7PXxV7uTDAIVN7Er08Dk6J|*6m7oc$53g6$nD+W zZH->o^&zLzc)^=Bg-?!1u?tZ7L`oq#)fOqn$TdZZF-irIV$y**l8MK;->c{OEO%HB z0%1M%{1-htdtvX)DWQzvOgD*1{p3@>TaGN2raVsTa_ZDJy#h=pJ#8iqV1+!sbf)RDTO_FSOxqsbuLP15w7)u)NWXz zZOHCzu}R%kCFNF}+@tLT=(e;IcBDAu;-RBvVI6cd^N34frR%!9Wt4>dze@wGD@}S_ z(0!X}+@hF-zp?@^GvTT*A?j+~OoIRM`wz#jjy`%){O>Q#ZAK+lV){0TMdkJmO3_sq zz7oPAG(m~pFeW8w$=M{+O1@He6LCG6bkhVomKJ7Al49GY5Q z7dE5JDnU*69Zg4KX~M8qZCbbu$-)c5Q&_OZWK5J|YaFC!U;`vOkkI(9v#p;n(o62- zw!suFFi?hgwvjQ@Je=InbOw;;YSc~k5|xSPmzueX+caoQ+RNa{3;q(D_hp$eF{aKv z-sAl7R2-;#iMSaw)=trf!s>&ijc&mU?_}l2c%tKu(bomRifDJgKq?3!51fu=R{~|V z_aB>JbeW;;oUkHKjH_mgH0Ya^S#3ke)zIjys=2gXB*`LCFhUQWHcIsJBn-7guM(-@*}ZXd zl7#RmKY+UFf*@;usdU`L6P7m`j7L|x$MH5NygwGHaRetN#)E-|G96KKXWINE5Y6MX zW4Ei>c`_UenlWRAXe?SDtm644_?Llmq{!4v!qFGdv;X z8o+GCabQlsB^vFeU}3n{&3~f3mKZ`-5~U-o(sSCpOO*K+-30l-u9k^<=%iS$+ILau z&BzKtFnd`8uTY z;vxIX@szqv!_C~vsXRg}*-FMqGL)1vcx*Hc=bUva{KM(;jCbh90eM~v$%R@27HIE@ ze#z{l3$1#zFzpr#jc7k%C7f2(wN=hj2E z+~>T#Z*Ivd%soB%<;S0R_A{vlG5mNk{N>%-lUILh-Zv}BUCU+RimjOQNcd5aued}i z$4&*-V=|iNGkY5>2dt80RpG&qlhJq`U4e;${Gg@0wK$}DZd!hUd@n6j90kX9_?soh zZ&0+*wfLkfGv_4BD zy>MLJAz0kauJcPjORdPrTJFgmyw;+z4Y=+Wc~P7KqTC7<$#AO)FK#pV67^z|-B5Qkm~{ zi?^|wi4YChz=zcaY6Z$i>1>R*F}vH3T1Y70xrM<)pD6$qsWLWPzJo@xqTMd2lM69? z94*iaBp_sIV+yB`)X>y37#hwNHe%Ig(hvSL^j+`LIOFFzQjk6JfB$cV4(d&%42kC; zaRbh^ym(Q#7;lN2N)?sr>M50EivD@iUw3B%<-$N@gIY|z0dsZ=l!|Azsap1eVar2R zo7t{CxQoJ!sWp6sXUG-i_+=t-AHz4)3Bfdl#pheW=o)h$ctH^{FN-0LKVwZ@HdjNg zYC_D@Dse5EEUr&8cjA~QZ(ku{;=|G8Q$%{TBZf_}S6mD%5IY;u``6i%Gr5b)h}6_Y zyJx0hbQKb@3)KK8)3?QB-zY{P@Oo3q2}~>pYh%dmOXloV%uU-SP_{pz7?Dy#Y^N{} zpN954a3ND6(JI_qeDp*uY2_e1&bG60bQ$Y_Cwy1MIJ=RXLs>Ve$#DqJTmuPp&K|(h z&e@{jz*eLQgSu+wJ#v9?6}u*8rkzUb{v(KIS!^RO%64@zaIQA_W3&q)H-(D;p8z`A z6fS=}E8!}$$0=n9!%9sgFiPa{c@v3?F`u5Ny3>Zrdhu zF-oX=Et^H)mO^!6MWgay#tCdVW3>u^uyDL9V z6)z~J%p6{7Npke=bgA&BtZJ;5YoL+uv|P@LL*_Hf#uVOzDXhk%O@CKEhF+1V5F;yy z@t6VXqvgHH&rBAsMO{eLGVlY~-|Q;7&RFtQhRO3-?s8$L5ZdSN(x6EetoMq`6W{gH zk4Co0CYC!pBUiE?Gv(N0r895vh4%$lwYEpz9lG$$TA6>VRz}e=sevAAmCkgrma%xoA#QhgAQ{{9m{a%I>!FZ_^X9AcsiZ7zsRH>$j(o+ezIRn|6DtryJ zvjne$XH;ilcMoiLru1hK@z>;5%8SDWTzQl{&S_LP%Y-=4rDPRbb&W3E$f77#Ws=AQ5Qu4fF%dn#YZT^Qk=Y*YE@)%G}Z+zszIvzDRz_= zmjz^!WSOm4KpQJpEAv8Wd=yiXNHXC{2;v^_kf0tDDN~))&r@iJ zNzn}pRRqVmtx0ES1+U=33>JoT?lN;oOn%@=IXo#p9A^{mgoYC8*g!F1988Oni*;;p ziyW_)OT5~vH)TJ|D|Ovy=5s*ZZ>{ z(OcRq=d|hU=j4}=uhIn&*~*k+YO0FeTR^#^pD9>-rpg+U74R8Q47(bRl=QWXMmo{{ z6Kb&b_yV0dB(__Mot#Djx;)vi|3ZP((JeLV!&rvj(ICUu9l+r)0lPgK!m+?x#G1W{ zP<1Wkj|SBP6F$=$rO~4OeY8$^XDnPY&?<%$jxHyezVGPn@109B<+kC1fjTC5=z-rYyee@|x zX5`L~yiZ}yBy@p`4|L~YR)g9gz!3s_D;n3JS6<;V2^5>=`Kk9&U2PwQN-_gLvDgk8 ztFXfS6@^qtmF}w~+4s&B18Ae3C7kn zE)#OqZiQ?be$6j+IW%W%mSi&q#eyhQh4C5KlvKi08Ifsyb}=N$h(wT#n-y_4h{=pR ztM>34YxRJz+gZL6o1C`+QAJ1Y;0G9Z7OhC@;RrnjaSi_Jj)WldE!bAHY567IlIm-;gckwff(85F<9VDyAL1msmJz%2a3#EKh&~a+jQA! zEk0jn>hyp%10JrWM3wtOBiI9c9_ra&MVhMls>9zeHabDa1d8EAj0%d-ikE%l+46C;q?sGEFAs1dEW6!e_J&xg4XMK#r6we0nP|xEm!uNytM?U{uIsN zE|(AtWMdt=Mpa_{6f8URvVVFGt`4ZmnA(0{5RLqD32Zb4R68@#=&?DH{e-IBNfH=mA=U#XW+Ql$%h~l1>y;15ImHcuH`ePsjE=!!s5}54jR8#n zTWGY>568#N!iaac$4+Lf`Jo8LzPy|z)EIN-W)Aq%_9%(+h-}I#JS?B4op)>w%2N-p z*u(EWPZvQeg)9{@k@Hfi(=JX7d6G9r%Na{vgd)~U3}t+|UN!NjNX%H6Bz|{Dg;wCx z@melQ$vYQQDL*Zh$a|>PtOB%sg4M%UF|lSUwXbM15s;kOWy;(1Qr;})Q+ad?mCfXQ zp=#kUiXbFh1j#E{lk%z{x-O%ooohjg)LFWquatSod}CK!XpPdxW|?p$0@PFSVH;H= zbjotzlb(U;pr(9UvvJP%wK zfZht0XKX&NSHGimXo`~+|A^6? 
zZx);kB54n>L4yr5_pADJG#;<#gdit~7(xen>>xWmTu+M(HF1n8!X)L9l9N1^36~o9 zJx39$i)U6-4r#(-Hg!#1^?6JEDy42;C;<qT^TK_ja1ld;=*WSfFL8%;9I#t=0sX6oKZAXG0J!p9BK8PSP(REuv8Rg zwUs;4b*pD%N|I95URP4)Yw_eO@nF5&Oxbd%c*EAq1u8^=8j+XVsabMpVF2Ao=t!3$ zc6)}S+;XTG1{z22UbDZI1SO|Qq_kOw+pPtvF`EX8Y81qD9qPg}tdxKRBs@gSg#xC( zrn8tp{j6ben<}3;M^qz+DW}rvoVy7?@5FU+-9G<>9dW*5LKY*eAdVf(G!CfxX%QORIz2*Er|qFx5sS8V;!Ry>V+*n(zEgTv(5{RAK4P=zTxlZa-UUxw#6kf9 z-q+4AjHS|Q-}()eEr!@TW;1nQxUcL6A4$;qG)mx7wva8+2tatbelf@jj379x6ChgE zu{5Bj9@fW-#IlVqVyx3VH7)2D(S*jcX`8|82cOSuhcr(yJ)nIQm^KedT@oIIxLU z$wse{g;>zs-d8|Srb5ur{+2V6&+PS;ld$UP+heurkrTfRp1{w`q7nE-%HRD*8M0_F zQd_0kNH^AK>n3~4x`SAVzkq^M^p0|NoXu&W<#@&Dm*twxY7^uNiL2epRoj#5#Bk4U zbu%yb++&4kp5^rPl2M25zFwtfE9gN~eGK>No!*SxmRZ(PHa$EIc=w|7l?PPyxOLe~ z;oIfgdLdEKvCZ%{mSEw5gNf&kR-2uJ1_PF5%%lTenLmf0%ila#H~DMz?&ftl`u1C7 zSLLf7T+iXKzBg_4I3qz^KV6Gat^$K=9E9)J=UKrdy{Tq zDe5?=0Kr$M{+?)CUxhZh)RF+}2O zd~OEc1^kceW;1vy{`!vsga$7ek&&@YcEMj+tmk5Ky^sZdMc#&wwb%dap8y*0UoK|X z{~{lNHIJF@r^{#_U!}|6Fi&QjE~d%(ng%34D|$6sO%4yI$!CE5CHT-*gWj_WvtiZ> z7((ZlFh&HA1It5q^f1F!On8rGYb^LOTLrI9-kl!5JN@N!__yPK3_FLxrx^W1&)54e z0q~#yx*zp>qk}Nq-=FkGQNO*{ophsq81J{o<90iW52mBhxHBI2N4;)$)El(-4@P_Y z?cTV%AMFjM?e=s$=Nr{Sd2>-Wa{2hn~v>a`CJru!XO>R>Y7n~eJX{y}Hb8$?kr9*oCPytjX_KMK3O zc6cy~_S*e$+S%Xh_v67df`3MP-8k&_r=4Cr9rxle9>Ig-C>-ri`-4$585~Rx2L0){ z(;4lJ!hSsJj1T%zdobPaPA1byJM6~MxCe8N2L1hTa4;B89KSz#_0#b6FCUIRzBzeEN#sQ^Xy?KV8~^?Ce)#GiufSY=dJG_Zeac_$4%+Sf zDNo)V50Bn|?i-NU%`v``UpITn5pv-kBimdq4_&vv}ctD(O(Xq)mj11Arc_S z-^uIax5qyca0AMnycLk^bbRkA3=cTwkdh~f4{WH)C-bS}Bkap()Df;kXfevojDrst zB`XjtSdsPc86@ARI1GPU5+Koxp70Dlkn@_&_Sx zFo$90>p6{NG3rgsqtD*&fTln*LJf$8TL2Yi#xtZE@{srsiOQIP%bpfLI3z{a)zJuL z53z^z5VQ*(6lF}QJTw81FRb11nCdMA^$eptvUoU4=80CKNR2Zek4d!+9udO+mSpZ| zb!9-dX?Gdl3aV%eKGB{PTts(nhh`|M!`C^OdPX9C|+`q&(|EfirCr733T4^dCXV#(R?xA)b+MIvLeHn?FTRa(&6ey2PF3&7&Ii~^Lv{_9 zOicnd`M+%Cf@3^>+y2}MyUl;GG2+jcBnNbe<;g%lX+&^Jpo^swj1g~t?!xWhh?I`u zF1yosHf{dPotjCo+rW&>fy%`f1D}wriMh9;y#33s#x*O;ly;u{*dhwJ&oEQ+K!_b& zH$X3M^e8vi26N7MYs$&yr?^EeK#RG}OXEQwEIt2L&~elZQU)-OJa-()Bh{%DNZ`Yr zf*JFd+yOl$G^a8bY0emKVUL}VPJHXk7YxGfPBrFB>;G%?`{mLqf0`5xA{xAw3`w_dl-si_DKGEY0BPP)LW{(&fv*Lk;iMl%l^AXex%raI`qSKr>%S# zvJb;{zE;6)tYEX)iqz38kDa_P!`o+b+jqX-eK5ne_;f8Fh}KGu_69T)pGjhNcW<}b z|Hf5>xuTrt?sfZLi>E|XIMa*gV638~_k@*@T&9!tSS>zG%YmT(z+#NJTb;1=MRrWY zRk^HQOl1;uehv_|gMX*2xVBsyT(4+vY2Gk}tVyq%;c@ubOHeSe*ze}3H4VH=Bz+2 zq2t7kWbtG=OU^G=&h8@|ev_0o1zW?-1STI7U}JSmIQysY{3RU4WTTi0Spi>^wZ#fk z^0=P_XK1Wcs>Geqs?KAqdM8q=NICrZO`h9L97G*L5Ve6RT?6(o?PbHwce(3@9%B?)7LT!CgiJ78q*pwz&^Z2M+P6+r(57}B z8Uw!@yb&sa_5iv1V~10av@w|FuBmlmH~64FbrH=z0Wj$U*KxGe+eoI1c!-~}1!p4D ztJBAPx3+$7LX4te_gcI?TUG@#pO&vF?#IX~hcuo)nl-4p)~x>GfGTRno90ekNebSB ztiFhT+g$$U0c`R$(qoLQ74;W(06fEjXZIQRJ z%-T$3-n_`m1KQUj3{@sAh2j4LY1j$-T}2owA3nZG`Pl7+?f#%U80?2sYV_J5r@>&h z*E{GQd`-;9S&IOuYHaSIfRL&m-B&ZWa-is`4!M>)dUs;jMg?Cxs} ziIJFQcoGyB9COo7qgfVj(L1&!pHj`tgz5%-j~M_3EbDFwc!li{XtxH3$BD7A>v)wf zcJTztN+#QW<$N>G3B!1Oo@Xf6D^%pHKj;`1Kg|qSPG*#zN=zoF>bO+tj6oW_47kVw?9uFP?BiJq zGeo>|cQ9F#-tp?tqZs{|qcwKyEyMkD%(Dmd$7f2hnP~B-gmaJq`ET1eiO_u#p*)H9 z0fI;#MrZe6zxXgZ?!zEXcmCaIeGR{VM@R$hqUKS9w70sT5Z67MOgZypQU1uu2AO&y{?YRudo|fgJ;V|XF6J(wUAYVU6Qt+L;Kt=X8aFm52 zxlt4~y7hPr4^G!J(J$S%od!EJEH6)_8DBSck_kW{10u20Zz{>|+~r@$;cx7OZj8i9 zM^~hBEN8$l6nhkc%AcikN8`>>G$ZW*8sw<4r5I%g`2G|AC8?jUfUqtj>AD$7*Ud=U z2Uu)p_h7G>lY}{(Bvd#_XmAphPnF;#of@2kY~Tr<+0RYF`@&6nRFr?Q+(Zjfsz>Ug zJDfP#moJp-5GM-22c-ic?Hb2=%Hf^a0Wa3NVznjBpy0t4 zZQWNZWjgFvOkNSTE#!67yZlw%^!X#f*jl?*|)z zvgj}}K)~78m26VpP+)-mo>>Zj4{29&*@^9?Y6HcuMAyy{Dtn~6Q`M`e37azUSB?yU 
z;z934{42UF3u9kt!y|QVyb4ri^oqycxr!vv8)+8>!gg~U7FTO04lCY*x!{k)l>jLty0#pRM-TbiBJ;sRa)ZOi9_JW zTAst=hzrT(GpK{l<~qtPMTHe8vy1g?<}564ugLCy>`W{%l4Nz}pobL+#7!5Hqlk@H z<{A!wIO0!ubhvk(tw(AzOD8UmU0ldj!3YTk98G2cDPpbj5f7=F#Ig-KV)sng(hsEHY(0rL~rM!>Z7KEi&(i102W!qb(^@;*(lc`7k9#P z@w{8Zoq+>)=>LWH5{1At;9OgMOzP%fQdw8>IVd&Qo!aW0J&FKakNLDBYV{icar>KQ z;OSCb>mYFsPq=+q;8z=eO3|uO$N8+ZBQJ*;tqpwb>~b0<$$1wlA^vJ_|o;BG?jThFU7a z7Fm|ySjvo)b8V4V?T7mVHhOJ{SYtf!sypRXcNS=)SRX=f`AF zS0Z%vOVFSIp-XEDTRT*z@oT(V!^K@!JQR5i{`J)|8Ld}BXG@kr94)a3Z`RxpFAfgO za3qs9BQRxOfC6FA=rmcP+dGFuZ`cZT2No#I-e*7MsdGaFsgLX>Y)AZTJM9hlZ>POM zEah#Qi3(4mYxCc5EBvtA)laNsaP-P!$u#ddAayZ+^0FCK0ey6-;e{*51Wf1`8m7f_=QM0Wz@GHhrA$a^>U%er_P#bZ*_Nlz>;6hqq#H{qv6uN(RqpP(JwH^>uvTux23iV{H%7F6nR_v(4@THjtAG<7T?YgMGrO@zSzIzt+#5e` z8&y0p{bI40%qf5ffrAx~>`JUc`rL<3-!2>pOGW;?MDzc*T&tdot{{ey1TN zG9C90Ab+??1Z_Lz4prH878Bo9(Z4+$omF5>N>N2u6jJDw7mym!ZXKEGBxV5>C1o)^oQEe}ux*D4=C zJ~)d>*#oQ>rql}ZJBEDjZo@HT3yItuBC?8YClS$mj?T|gRvXRE@Y|5Tk@A-=r4|UR zgtA+DQ@LvS2vs;*1d=4UFH%#^x{LuEM68xnJ`{<@9!K$2&3~%OlUFW># z&0>XWYo4%<9_!d?`!t~Y#c*8O&xRbqg`9lID=AVjZDe%;?_vrG@-0>#hdG$p8&&vB z$u-#bW5P(iw*W)W-yAn?gUxJ#yXv`7IOsj<(&%iH`L#D|+%)Pe{6eG7{)C<7a@uQE z>X{a9)}pO;A12?tc{j0|J<@W*4(jxe2@7+InC*y^J}fB9Us#8|zrDUIecj&acF@D} z=P9kpcqcYVwQ{`YqLO>Bt_K*I_SIt>gY(aI}zay)8|&IkTjh}vk?Z7bWVeW2zjg;X2r%@v0&0VdL& z!_|^QiOjJ(b0{mnEjAF|9vk3S>caD?iql<&)ty!@O}dbPHJ8W~8zXBq5y9OWRdK*{ z@b4{I!#r40Ruj7nZgF|p%J7IudRDlC0F7X0}VKscuHxo4?c zk|YsvyMnd%_Ow)8>^RwMZca%B3*gU3|2_O=(A&~dReBn3Oh`UFU~yRmjlfJ$6@kdF z{@O4qA#XMXY}8C51)2C+aX(}=cn0ZVJ7vES;pRm@oa}DA2 z0#B-J+j)2m+B}1DfMgEej*|?VWRTcPxXa+>jfNOh$OzveIkv;?rQ(@`wY`~g`USFp z)^7F|rYzcrWO-R@>a&|2M6WIEXi75F%_So_lgwlyhr$rIzymv_%+4D^$xEH=Du}l~ z&1}%{ggsy<2^RUVtvU@g5&_C2SWR4`U4WA*CQPgCBP>_cE#czb z98{ifg7!@I+?ckICAoEFhTh*k-3uvm;Jt5`0IC?s z%ReybewC&{HYbh3!M7^PQqgX%Xy~TZx51doTcaplr`76IT5o7i`;7&7{Nw2gwQjY5 zH|+ySQ|@INCfqXQ@sLro>IUTwX96Ab%=BZc+qt__$I4xHQC}RAYNo;YC0q%Ul}t4E z<)?sPqTt8Ru|2zRvS4d+)){=`lL|LFDN%!XTfOhuaT^&wm`9XX(SxRzm#X4X@+3p8 zLwGGNvk~hL1t#64r9#@Vhl_L=e;(u2mR>0z%od$5HRKhc< zKfEirLFKIKCDGP4K+&K?Y(qrG+>v-Z=2ghR!IzL4Kv`@iyGs6Y5lw>gbfEx!6i=~$ zCx-p%Y3w`OD*^FP(<9f%>?qi&jE->`b~M+oM{nP}KKgj_VR-uS=;JS^!#D4aUVVJ? z>#=<|)XRlL(^b6G!@F;g5b8y?1M;JT!mtBO;uZL?X?+DO#5GA{8luYT_1$n0kBV`! zl~uJ)H|%ygOsSUcU>-dbGQDeWQ}r$63U;M|OE~vLC5X1U{@l=h1D*}(v2t-f1hoR? 
zW(A8x=&+sEJe;;3t?{=~-LN4&8V$;M7 z?VCFGjsvd>@7Y`0U7t+fN4x9|?T#xx;caD8aZ@+JYKCcll2ocMDX^_{2O?*>!ls3z zOZ55{(av>JjJ}%gfSs`8Pm?Kqn4O~BlcIi7L_jMl-ey}8m$7!`ttjtq9WO(;`HP#g z11r@Bawjj^L$5aosZ85wnEqaSuN|WB*l3u+!JyYZ=wrj+O@`SIyWPF+Anb&7hpBaD z2mQ{Vz1QvU_3{#%WZTg#6uQ_ZdAG&-`&6s*MybT4vSng4Cb~K6yT(A|Dt&oH2e*({ z1ZKHG2dJ>u?GW2m=fIsZQ?MIl6fZjzLkHWdVBP6-_Bwmr-eA!0<}H640#H{)Dkj;s zeIl@AhHZO$1QIGRL|C0+hM17FbMYn zvZ0GKw{`QV9MR&sjST1O*($kwXkY#X7m~fB_T|-ivSQHc)(?Q`Zhk50ZZD;vLa*T3 z9lf_#3!vNf_j=3cre^{YXTdpoBUhQ)%8vkk6k(V!=w^y0NgmX=Z}GLM8ql``?vA&Z zdFc+F-Zigf~IabjsXFjb3yzaJv*W=);54=4d-{1t@wjnTR zca2N;M*APYe>xlOzZ13(Cvxdi2Bc5ODp_DG`=H-ubg%q`sziX$nMlij8)vd{NRlp38bSP*e5-7Vog;%mrU zQtg^eS;Mfq-wom5LbD~#je94ol8g_2iZ9kraWlS$du}n@?-*`ix53ce3|)D7Yj!-} zOnk0R2|GmoOgCrbBr6TUZqdnKDcAM(HiHRRW_n$m%Rdv>ss^5!^e(O&AyO~g-#-X@ z`<;%e&J8xAh~VlX8DBV(-ZCQHa*D-lMnaYIfvpX10Lg)XWLRLs*9tbV5LW<6JZa_h z`lG$WSP)79iYZIEVFDlY4!S_&7@n~)ffvlw@9$$&ZkXe}+zx>b24Qz_(CdPNQky`x z;N}<%_74EK;M&?F_#}1F{*7iHQ4ol^y&w>)*$miw{lPxoO7{o8nW(PWuFMTR6Yht- z?!o>+r?*#P!tI&H0iq$$C0tSVTIv+{vKw2vhW|~6po7NCA?@S$(2VVCt!MSnjZKtR zuV9<+ls3-$TDeu2EG#`)L1|EyG3lv2pf_PcLzAYew9B_aE2!t17u47(T`BB-c0{WH z;?M$dr+gQ$4L}^04jgsX&P6AB-2$?koqSQD*Mg>6H(x?$GpM_^cdPIlrK=0oQ*(8Z z%92?T@a$wdjnTfWK{!*|9b?@QJOL5^tNzsZky**H&df?Go61O zqP9e}-7*7)dV9i-zm?Q7-bk{96(mbexr2g6N?!>1JJAbvcX0bGH!QAf*YCStr;qwJ zo3@8lie5Q;Cz5~y+PpIrm0gllBh!&VVsJL(=pnG>D(=8@7}uOFVd7&i-Dv-`uG2HB5?w_|}x)V~V`G zQGaak=X ztb%C0O6LILF)1IsjLEnYeB%-xnULbbGZ1l;YgTZ3V~^9`AS8S7;4McX-$nrzyj?b4 z>iK@)PjL@XwwnNWK!?9f#hy0A?ap4mPnPD!ZGn4tvhEzB==M7uvNR{kvPtq#rA{4^ zt9&Nu0QDS_leWgfzkqVHeq`?mu_-3JR4f-Q=nJE>wC~%2OfjbDoram9NHW8vkV1(iB@`LgVrif)#!6ZZc(ZE%nDCHft7S66Qs3fKtYg>?7jPH}?Wu}XqOCOV0TOZ82*uIo@C`xRWdv{i zT~=Z@y)NL<9%DDk-ZZWwxWWx_oo>H}8LNYRH`8NWVXs{UZUrXf&AC;lJqX+T`@L>= zz{wBB?3tu~bNT9}IYl=-05;KY_p}n3P8RDYW1(a$%x>o9=FtvEy&F+T;mG?3-JXTlzGn4G z8W*-rXQq=JG}kFxFjd>wz;8eQ2b^sL_)7O>apCLT3hUo*?}a@O=N)#P>kYbA<vzLFk`X|$dVncp z1NdanzDf2uFmr~bNGJuOmYs5daR_Jz5Krqt37j>HS*a6r8K1+{@zR+=pfa_(GN*c` zf;fn_7kLz15fxN%g_u~N00t~p6W&jVb*}=w%LWB4_m8dKg#eSih=7d9q%htL9XcfW zlp`cDa9yU$6_atN@oY*RSya+(2|3#wEBiOq6ImaN@YA|5@EwzOzw%vWmOWHG{bd6-N21P@k-_3fBxb?+#b@ zzHN`V2d=f8;;yms{Ni6q?DM&(>Bll_TgVQp7B`0T)qj zUcyCuWH<`gzP$b`L`Lz@CU>=RARqMh2Lj~u56i?!_u;)|84aRmxv=iG7*QKP66Vuhi)Em_V>Vc*kOo6|1c4Ukw8A;uoEn2 z+vFn_+cveaH-SbOoD2u;ej6sE&|rVuzgSh-A~wl}a9g9z93E0Ti&H!8%uaGgd`RuQ zZvmfLiAKEqr)yFhF~j~cTEHZU%Awpq4fpdI-uv|fM*F#rb1Pi5GpHRcYP{+W&Kv>F z%;QW|eu*6jwBlq;ETf%G+s1qSnf-RV?WsTK>Wj$iW5E^AFIz7r@e-^}*cJ#ImArUK zvwnQ0Us|iAe0q_0oF`?QU3<6QJV4kWcM#+z@b5Etb#0Q(xbn7QGG=|20uv%DcL zaD~}suF^qHIoe_mX*rk{IV|1Eh93*Iz78i8%7dX&`Oq5&Mp(6=X-%*oqAEHYSLsLz zBS2Y_$gd;ljdQF#{icM)q@FU;l{u%1G4+;Js!F|pb2ih{H{;O{J4B2EM(8ea9dzG z$k&;_gp)nEziNNDv69gZpGaV8l#*@CP-HfgkfS%_7(&88Ml{!Nk*PGEU%m2GHBE z?y3zQ?w5g=eseCyJ3Mr~FCKTmsil3{&^j@KO>x}`_nZ1Ny1m%^sx3b$yUxpKiN6j% zr6~P!b~^>E00hgxdP<{%*aySuEIQ9BY?Fp{{8NsHwrIW#xk|pGW-HJiHC$+Uul~z9 z^S{1ABfUZ`uZ^%AP<$v!Pl9N=0tU(e7D#{v6kr_C0Ujw0W5uFdlilDnrL6%a>^079 zOhZFRy|eI9^CEEAM6h*uDRuSQ~D=@FtCae+}3(jn=ajDFaCXixC}>l8%{>24!G-w<&QP&Bp5)fhK5z>&prLN(LFO z(v=V#RG~1U{EZVvVr{!AbmLjVo1Vp23|f;1H#Vuhy#s5~lFeB^5S1n%H5uW|+q=QL zlw}Yww<}}JWHBt~=m96baGa2u0A4P*Jf%Gewew!3{BAK?W9Qz0bg#f#>}u_Lc#4FI zfxiv(sf?%u2Ie>t*B0s3u6Y@V^?mv|saI4yVC0vIksVej>JnElAqa=!Ny9X_p)bB= zJt7IK%H>XlF;)*~^5vn)$s%URKoMm(_$ZVZ##tzn;@s#KYOLK5Hr*`r?-a4x_e~6j zUxFA6?^+B#VB`aTGl3xseAwm~FAQ|tx*=3DarXgvH%H#Kjj@WKf+A=p7-onn+=AmR zx_5R}4~F@8XICB9t=?guEMjj|dGCf73746x7AvkX(4Ajx9_ZqR=;CYT+h^E z>`dvuN^|Y5lG%(@hheKzDQ34yztvv%CSYdCNV(udXvpbz4qp@Bxa}-w)(W=9n zj#7p2M^Fc!2HxlcjEe~QS< 
z(qSYUgraFaqBqhP?{dNuJCC@6SOG$R!l<)aC-e|=b89!RFFkdPXcepk`9m6+eYF~| zvsF5;ppz8bdAwh`C@U{@kqub<0sd6JXgwsqePPLO!1ep>Z?rxUz~_pu!yLg&-HK^B z(}zU4heWxDM7f7VxranKMKg8wzPt@5?L$Jt9R&Q+%oo5Ve!miTVvrxva)EK=EGusm zkE3-KBSt2?59w7TTi@PZ`pW}~{#j6zHPU9)cYQd@5EKJ$G|Nr0K%tVw*=}&c`?}%_ znev8~7$J1d-QatP?aP?S z+*sB_*c)y)I0|^(HGGCXAfg7Y={<=9Xjiw(u=hH{ zWP-QSc(uMHhtcq3&O>v-2g}BTWn(**4VBnm5mfGKq<q#b-2&m(Xm!q%#srOqEKUxrFe z>TI8Ld$WeOIcpApLq2I;(%Bf;$_j2_0b$MpN|#bwEfZSK0I3LUco@0B0@sj}i-B9b zkPPXli5IC3d6+nf$!GL$!;qTcxQf}5s1kl%Q{9K&ds}E0w|tUjPEi?+d2n+HJ|4Ec z=f$^50M`whcLVTle=A$*Y7pUf@>|}Eb1S!=Ka|_V>;-9IhA&rXVTKb8vkUASN5+Jq zld0QYWoUa0q^QGxJsC+4#;9uHN)LG{H))y4Yj^Q*xxcN;{q5hAs|rebrX8P>buZ;q z=a%KvSC>5dAgUfj)t4cv9)wFyq&sFar93o=hI!(@8I>xrBwc6hw*o*tm@5!Tv#iJni@Q`|bU3It_b!y+LmrjrQYle-OvfUN{&{!f?>dB}F_XAyvt& z`f<^P90tN}|9jDy7mwEBieme74_q|(@4s9=Zf$G|ii7DmoQ_8O;b<@Jj7NLXv^$C7 z>Hc^x3VZQju(#iy9(2NR(&@*&X=l>g2T<&d_r~49xVI0ivF!r)5M4nUln_329f~^K zu-OP-p8$~LChGDXU(+lS3Z%neJ?OQRq|cl$FX8rq2 z#}yx#6E@j7Q@-r+FXRM5Xw%}ks{0VLvbnskixP@3fDQ_4C;maqt&!LZCw%?TsEST} z0A9^e4T{|YT9HC(@WluPk@z&7jXgMKK3NBwEJC3~PnY70i{ab@un9QpGQAp}&(cve zd#+FOGVlF{1R|TQ6(}FUt`S*nh^tF0hqTtah$?)_cK!>njNN20i9h2z@Q1lZ`en}6 zBuOeCl7r&3=s>FKgz1=3rLMw#(i}N>gHepN*s7!=NsyFDHEfy%gkYn*n>4-{gy#@n zp^69^PP$lu!#8U@BIW}FRxR8=L5Cs1t_IRV(}%RlG-%MzRr$^r%{U)B-hg`h$0^Ru z)^gnj~Kcw;3)e0#C0000GWq4&{b#!TOZeL?> zZf0p`xdb1J2mk;800092jZ|H$(?Ae??N+PvY5Rvp+lJm&I%=gD_M#PN~KSUyjFbA zcT~85pLE}+g#lxg#sY){7A*i>OhBs}b12 zDwCC(HTaIE>Q_)+Yo%A?vmphTm+W+9%;JQ5aCt2_lum*{8#B(8lQxB! z5tT$e|HtqNF#g*KQv61{i)K@2(v;Dkw%Np#p<}W-KN}f>@SkwNG88zg0X4RnJ{MLy$a7RehVug+6gBx1KM3<10vIVr8m6zGoHJblgD zQPkX2bv7Ie9y~3o>!c{wQIZ1wO%Ay6g_AOis+*G4%_3h3&Js@~0cp(=ps=Bl8GFMD zs;qW-mVcTF(ql5D$Ri}8L6m1imR?#?+M_X zXE41b-XNx337?BK0&mt$%a+mgTGB$27upxi>#AEdpz?JzPe9XVvpmgNnXaY6ByXym zXiTdHa0jUYyK>Dyl#Kli4s1sJ2hItikrk{J1QOx||MKkQjBl;~099weUQqY4ik2)X z`Oc{7y1D^ia=!wi37|iNu_bP~O8+*Fegdrl80>acR5h-;fVpiYNLePY0Y0ch#nr-v z6U|AndN+@e?(h@ z2ofSS27AzgmDn(vb~cMSuFl5|(@4!X{66pUOpr0FK`%3Z$g0klNxhCiE^{;jG1*Wt z2?c+wbSEhQt!xeYkvF)xz`&t-O|lI3LqijQ(x#23XW#$#^vPvB{rTC&<;5iGzIzxQ zMh^}GFtduG#j4lqy7rWf()nE);*1kLCV=rr_*x@{NZK_ z67;ELXUGx&R1@fv<5PrtGX2~1aw;Kyt07KrQ^v9ew(xDf?3NLDapPfFHz`wK6RR%HV=n6+n;)E@?Vkvj}Lax>d=hKU`7w1o=J)-@q zp^MX7bTc^`(uS=PGTFGm(-+T9U=<7^u33A-SQ(uOodumg=9?9b)T*W}gRU1$e%YeA z0;^smD-)$>r_U~9gc(0MpB`U6J3Ea}Up$ux?cYA!H~urwd+xFqO7vY23Jyq{@^XgZky=0i9!1w3lD3W@07+jr;3(1sI6Eal@OA}IRstL_ z!R$>^bX@g$D90>nrG$l6YC%5&VjmggVI+nukGnGeN5^6zOf)?W2Tl4}Et9;I@TS{i3xJ^_dVI~+w2#y#7nrMs33UMy!GU8>4aSFdy|m^e?-NhP zPwrL5Z}J2q5BPe)O@$U3#B~&!^2iztmgX=gmV+371o@+L=5jp=$`z6gjOeat+@WN; z3J&n=ffXnW42*}ve*HN4^*fvgOeWluKgD6g zS9S!338hHcta=3HC$eOrLb8$ak0O!cSr*R~Ic-f_cL0Si`2@j3MnU}J>4;qYMZ;|e z?M>9L4nzW;nu#O`94~j0%D>}Ee7EuiaVeH88wD0Y#WVs_hO%2yyfLztMWUtF4vdVB zi*v~726}}%<|7mYV#GuOupyt>As}Uc7>CR92o@R$vylkUF&-5TKW`w6CbdIumi&VS zU5DJ)>lGyBl=Nes-H`l}=u!+gXS1=INo&L+Y>y?32_&^uVPikVQUQ~qjJg#nvPlXo zqSQbyAcw?!-h|JeOs4q9k3dbq9!t#VyCc5ryRb$NpbW8Mpvqha3}rp$)uzlT91#03 zR3;R;1kAz~uVO{G=XtR#pro`fqPOpige_WCR8+IF`{>Irk>w4eNRvWCs-I*`D=&yA zLB;0$v+E#OGEz%m5pBe^E$1~s2r;3~!HX}Kh9SZ*)pVClwPY9#&iQP>1=9rxQXvYb z6hW8A~c`2yAHWI1aN zVIkRNGf;t#V25Y1xWVvN$ zMRkXgQI?IPQ!v|^HgqPT&P1 z#*7a}1)KnwtfjW+qQYKOn1xyvS86X+2GQUbLaou}UxT|sfZCv-YIk*wj*;lGNbpF% zgZ_q`UO$s%5+)T{6fOA{RCQxZ_i)08@K*XRi1tgYFdMt<3lY3TU>J)G%|i+XXAb)< zBw+GT8_*E(l+Pk0GVpa<^&|av9wK5`BW&TbOb5@LT{37?whb+UH1nK-RR^&`iZx+5 zI&Q0&Bl;LBKoEEx4II3O6KQ?H>T2pBs(>SxPY1j>`B+DA62Nd2QN?&N0X*UvYTzJT zR2&WudcejBuDS_ePch2`X<#qCg0Lg+TsEIm@f>%SGQd=y=k-^!sNYf%ZdoQ|xMoIG z-(_^(2nGijiw4=?X+JMrMEN#o{FFghZ65S2VT<8W$3&+`FIDEP3LQp{^lIp#FEU=f 
ziY!5iENc?&NrfdSlZe;(A9s(AhKf#uK=2z7XS9I$VFVR_sM=RCeHH)>sc}e9`CqNT zt`GK>otVm(3XEKJ%_6=|(%07DEqRjxN1j6vR+3OW;D9p?$vy%)ysJ2X#0l3C6;f_K8YKijRveD|AO`?cGIO&QxRh;a#O@m#;pS>_lBs z)v>W2(#LtqBDZne%QAb}O_e8vsme}JbmH)AFb3@##FW3^J_!_nKAhL06pg$yjC-%o zLDP5;u*^nA4rW)Bx(%=B3Cb3Zx8k014?&>_qfUOq)f|ZN-(cH;fbjDX1~%fuFGs${ z@ZpbtCJaW=FAgROjP|&{SRU8&*uTih96wLBpYR|nffVHi_^&etQJq4T5=ZT9#mz26 z?g7=68qbtu#l_4VUzfZCMHCroqfATgrL)667`6gXpyC1JbVNI-@YxKR%uLnVfGS=GKna^_hNf?mR0sT!e7MDx zx$N+LSs+vGq!VdzE2m}>k+;>7AF-&BtMXz;UB{CS(W~Nm8IJoR{$J zlK0Qay#>O*sZ5?VqBHT9rLapIeST?kh+;E=!IkR$qPn-*XVT)=9iD9gr^0{&FRk5c0LiFw^jIOJ;Er5+s)Y_7N zn)G?Tf16HjJ=nDzd|+W3xgk0j(GveG#dE#%Z?1Qel0T{1^+cS$)+Ombx*T-FJk|_q z;%}jVJ;a!62I4A8K}8~p3bbu&_2i#~&HZPfP7HU4g!Dr~HP0RUtW{v_WkM|8UM3Eo z+-#DBES`S4Ls3>Vm&SbPz-}Q$M!DQZ`nc+8%@utx0``HYQmvNxFb@>h9kVz zkuGFip|cMl4FQn16J74snV(^(RS3zk!>nw;(?Ode*f;vz(%&+e%xN0cF40ZJ@Jk8R zkAdC$Yg!ZoykMTp5dDJ&n=|U{vWGLEL=*(bTNXxZQ6mqkdrlt|d+63owuwFZ1XZ)g z`0kE0SqG;@gt)YSHyRAg3=u+5p8LQa~^&%)Ne-Cm3=3Cd>>RG&U(tG}9RC%2{ zMdD_YTCne2lY#(z8d?2Ele$CZWiqGhn0cQC}xv8Qc&9q)3%RjFnaQ)eoABkV zO0@VBHTW~PqZ>|i^yz?#@pUY`LwqlrQvFCh7%B;+pi7B|;tpBN{ay?ajuppIv+;q| z#i+s}9;Z@JWPlv|0aBmE6UinMoedaKcs9xnIZ$n>HRLb61LA1lg|>TUpfwD){T*## zYhS?QOU*sv<7yiWQ108_c&e{<%zaxK1$ey4!f1RP<6`1K0of zIgTtxP~*45ZW9z!ck&^R_;EdJ+XRaBIJGz04|E5|b{H)JedpW0u-+Nwgz5UncR+zb z&w;D|83RSE);;0ApiZm)RNj^Lb-ynFP$AzI-yg#S5miDRM(CBAd=Ei@(a37+zFK6w zaWi@muZi5Qn^A?1%JC%FW@oeyP&YOcx22U}M@A2}Y5)e0+Sd7e&d`zZOEN}e>g@*( z*;6J#jWKigO|sR!1onxDRl}N%jCoE56icWH(U)f+d=y?i-K?sjpfXml#6um$YrNozeuXv0+BNes3vDkuGO+vs1hYpzc-jlT{B7 zkDH2nXp=Mx11-ZwQ|HIkU|Gnpxx&?2$mNiD=WvRAOK-8my*)h)){ca4!UrU{BXY`Z zIv^eQIzBT^=bg2F`Ax=?VHKc~EH$;6WCv<#)h_ign{_ZLVBy+&tq(io{R%U0(%vAS z^@Ij=NrTFsogES}30`X(rZEGz(|XUXV)ho4y$0iL8mWy$a>LJDM>?&lhJAa?xB}+f z78_7k2^>TlLAI)74STPzs}<0!>Tp$T<6{6j2z(G6>fhxcfD6%yzf2+uHh>IM2Se-7`{RSCX{2T5N<@GkVkI@k8 zAgs=3YsCoI6<~3j6r$6lY~Ah#RWjI+^lWw)UeN)p(ZB$p9Rw(%V|RI%pmwYV8oQf) zzRJDWexX$xEx$q3hnatiJ0TyS!wPrj>9;4;b3YdhPW>eluE>}zQG9ucmwXcbO3!8w zp!4xF(M9^c7VXTjXpW)V)u#Dwp{Ey>0$=fbh6;N_Q5YHn)Pas_b_ia*8X&@6yvno? zhC;!YU2@_%5()=xj1#xs=dpIw4)N_q6m|=7 z!=7dLFxP*DXaf~K)6Hnn9ZYSthta+!guiOXbs+lm(1qukU=s@e^DpvoCzmz+=f^#7 z%&23{s%yb5fwL^qZGwU6IQ8b7hQMwjzJcU=ycF z!+7dyWVi-&=k}erx@J6-c2Qy9sAyPEIP6BgqDM51gY58(|D9F-z0)JdpWMp^*htzP zHo^yO26sbHWpB%dFro)H%%wLOtGaQ@f9eXJYqcX8mIf{exKzhtpm*uRyz@baQJW3z zuGZ7KjByX9Q~tGISF9C#WFM}t_FVxsPZIFkjl5H5CVvX{5xsrfGOpx{cN{V^WZBM5 zya&3Op(E$=^lr+@i)|vr#~A*-$-t}3_~d9U?zqaoxyihm*we*;52D;Rm7CnIzHSpt zbi8STMd3_`SQJcm{uD9Pzef0C!bTPNbO$jF14*Y)2|D4hlSan@-ivF$qZj3ASjX&k z^A^JY80$i>>=#(873CI_w-;DAjn3%fyMvx-;;G0Vr{`xq&qVIuKG+vJ(Kq|2>G9vs zPB+Yd(6_vJY2im!S5vSGt8Gs$+^}Q?5e)V$bHW7IAOA!@9J%iZIm_MSQ&|S)fv2qS z+-J4v^&^{NBSq&F{$9xA>uLh~R`$DVpdM0%TIdIMlh8=|3uMUAE{QM#b~~zcqjKoZ z=qS4W3}=1jm`8%7I-T8)x4qCpq1Ii{9N%wHeE^oJ`9gmW`5Kypcb~lbDc5V<03M z6H-Hh$#N^e7Lc0Vo!XLx-3Hyrl2?);Op5=0&*_)?Dak;xvvrr6N3hlBoIdYIx3a#z z9-LprMKC3^I3-0;UXft7OfTb$galDhkTRHO(M58%m@bwD%c-|#Oes0%_@@lPlQ@~>sQSdeLO6PH&iyqpx-GM|wC z6HZXHD6=A*<>WdlD<6&;AdGlo+Q^KEqDCiGlGW0Z)YT#^@0=nYcKea zrA3)9CuNqO68LAM##vfnqQ6?;;>(oe zP8`GGnTiV;y*JO6DdlgPmB`}XNuKrT*Av`8Fxoph`FwmHe%L!cJ~}c=&!(RmM%?kG!Fqs6XnQ;*Ts%bFGvw2XEbPC(0l;mVW z;%oe^L>1PF(&#ulKRF$RhiBoz>B&Dv#{%6}h3+sdma|zri9s=fiztcG3G6+iS~$(4 zn;^?2D>TrD$7i1p4h}yYj)0@@_de|%e;5h)FLhSx1pcWAisfRF+y+-s5sBM}PY;8tsqv1&Zwo#erWa!Oay(!2& zTZ%CsL!iKj=L(i0%7FrxSr#N&da0@G-q9#LKRF3Mog9CZuoWvlyPaQT$vvT;{r%|u z$tN4WV#fPq63?TgKoHqASO%iuP6x4WZK?qGM;{K4_CB34kb9>erElAKZ7OP<#Go_? 
zHU*~^T}N?(jysrv>qM?kFg4_Zj6P6>+8=!yo$rCnB`U?BKc5~_fJk)= zOYg}j0Aq9M=XUP;CQcFzt0}pR%J`aqbB)2p7FXFanc^4me32CeF4cn&^WdM4_eZA( zpH6<2K~FLHGloLsI>HUkz$Z@Wz9Tb_a-l%oJplWZQDSpL<_G+LESHJWvJjsua4_Cb zMg257{&+5pZHrrW^$^!L{W0Ofs>g&D!4q$sa8MjNkZp7$k}XGEKLuZ5X`FI-^i5bB zn{NOL{1;KZ9BjVX+Qu~)e#FPR|D>Q;o+#vr5&jFgi-!Dm zkwmw_Vwr<_lcI)(covKWfH77|2xz=H?!m4^&I&lj+k@>fHIagQB@HBaD2@{-fD|`E zTEMdasO>(k>>z+(9Sk{m{yf+mcPo+vcC=)Di|8>wr!nkk9-IpIbvOkbk7qG}?=17U z3kD1sB+b|XFO-t==u)XHfilXLsHprqbM{JluVGbEef+@{As})~VRaIbiW_oKfaS@k z<0>=oXOU-UE;jn_V}|E4^H!_pt1OwCl~BPfwOD}cpu!2t+Xek5o5WEm3T@zi#8U6R zYQ)04SvBUiWTYgTPGNV2u%(NrAnrg#T^9dh$P4SY!d1Mp<)x^#)37V3{7D)>bU25w zHeZ7M&LX=-g&w~6outFyBFmER zC3iz8dnnxzwlWHbXr=_*NnD1s*8v7g|34hk=l=rvctYOpvq$Jh3hzbDTetLA{*1Yn zv7v6|0ewPUclX39dFa-JPgtNDPeB4;k)Qw*jC0H!QIm^kMj-E`6=@b+E+Y_MNrHl?y1^l&mdk>E$N39iUqq8HjJinG3YWAL z`LwYDM!2HV3agF)BdDn6IDrIOqB}BTMxV6uKNSvA1xQq;X`U^5+{3gX+9Xf0B9laj z!x?R;p!Lx|BfZQx!Ol(Z#Y~Qa%UQce%{OQq0I^k|&Q7FfM3b!sET< z$i*#&Utt$A?XLReRXhbMb5eq;umo!oEh1XB$JqI2zN1I=u*dwV*6^fFd{%9QycAZa z1{dTix{fnycA}pkDrVY!%+eV4jXi4#E(l&2_0BLjlFwOHk8@p-%}ECk>Pu(wSGkOj z65yG!NG){&+5Q&Giv)rN@D=$P&Z?$Hrz+d32|x&atzocc%mmE3jC0CWp7{$JUn$|v zzWqtBHzn=bPCzvVFtNOR+IcE65pY!DB?0#g`W@0X4#C}b3~qFL0UdD+hjDrxCGj+j z^2_BMbidk5qM@X3L^hKKItJMr(w&if=1xH@Y>eZ>h#DI9RXGdG^30(1Atzv`1-2{2$jGYl zMY8J7_rOWBq(^?uD1*qWAVcYJNB1xpRI-$aD&Tj130;UGjY~rQr1U>ULb!vr5)xVT zcbAYv)Jj6y4BjQBSx(6BWPO?PKh)Pbw&WzvKcoc3ZrVywt=GQ0BsF9nY5w%vbxdxm z-rYySN_6hGMVcywGoG8LXiVO)6>ran*z|eVv1+^XcU?W@grf0Z%NXaNRs_*ZmzViRK(72p9FTS6cQvZm41(6|?26?fvs*;o= z*W`?n$I}ELMJ#>9)Mj5r-SL1`90-N;WA?^ywb`94bXM4jSktfG8^sCUxorkmo~-$R*vFW47{_cDXXMSwBvEZb(G_kU;~@!+ zt?c18Hl|F}1R+Y_RKd?5tYajS(a1XtG@2^=f)wMX6?E!(&#A1BB!hOr!!TZHqoUa> zv*0J{Xi*Mg(XQFx2zW}fU^4B&{21G$(K0D9Q)lg0+M$e)&LR`t^g<-{1*U}XXl$!Q zjDtd?q)sT*F*HhL6^R6EYYkFhRH?jG6&aPb@~DsGG*q4TdkH0y@Tz!hCuY2cuSjHt z1z%849TrEALY_Kywn$VV2wu<1bngG-KJ1ooTXYGRY- zTA3e34y-Zzl}Nl|&(#-Y=KR>2Ty;IR-*cWD&?|}=R>oqi)?qaEC`mnzNJ;0p-|CbZ zJR&L!MP_?+9*dmu#7TVWEPl<}fE z;fny{B186Hr>@*%ks31(Q+1}Od8#K&Q7yUng*d>OAY0d2TcUoDdERf z!YjuamkMXsZq!@HUc4~%!!NZ!jq@;Gkm{)zFF%k^59>~QbiY0;00J}V}$Z-}-Gae8)8f0vh6!a@@Yn^JY2v#L;Rz1XX^8R+{86r79^j?1s-*UO{X!Lc3k%yx*pQ~W~iH>uBnDCxqO86)tz(03Vv(b{B#+=@xe{0 zqtn&+xIt>;^BVPb)DylzOH+KknlZ~#FddOcpsa`_2(+USecq8nm_X1g6qc(+@pqQ>1W*L?Y@EQh%9)to;tZUCy_qyj*?Px3TM z8pC>`*+@(^&MjYZ>Cl>O5;7n9J3&N8eer7l^pwt zGaK?GZMt_=Izd*$<6hincLdN2#wi=@x%)-5j4m^&yoP*iT~;MVfvup)j*}3g9^slN zbVPF1R>VO%EO{W=V8uyKk#}Z{xx3vw{!x z0GjrwDvN1^-c%JXLR;cBmxjx<3K+wL7}EN5Yt78Jv~g58UzW~ZI9)ILRREp+K`}VQ z&tkhd;?iJhX^8^a${xsEq#|JGO98vD=Rn#y(Qo2t>7A{y@9Eji$WT338%FQE)CUgd zbn(8PM^ETxksY^99jpAGSC!h_3D(0=vyXMK)@s$)4cTgY=hz6Nd926kgED+ntee_p&7RSC>GDvyyr;aDu6Zqf2CZVss_*3A){r-jK|DW$KIqp#6Qp~ z^Ksn2r3{aozyGmf!J}v9zI!pN$yVgLk1SftLx2D1u8dahD@UI*cFA*QN}mF}k2y>FGTqt8wxTImOq0ez{4dDBjw z7G#GNXUl|L1}VxYr(Jr;xk--KPe8iZLtK`;=MRtNY}mq&NCcpYOFE1lSn@>N4d*|6 z=}en6r*&6c2v})~TlLrID=-_%2{C-LiWXTScy-Ycwya=@Koy=Pz!f7wj!9f`X0{7HZ45 z$S39(UAtz-9d&FNZl$q7S~W#&4H1}GI4u!usP$>6O>MtL1<|HxumMYb^ndqfO((Ua zgT06d@9sI?r{qX0Sf3f4S?57)7FMZ@8(1;cz-6)9-8Nk?$LxS_Ud@q2sQ(cy&DI*H z7NvMPN!fRoyIG?npr^ipCl_9&d}^<%->G_1CuFPiPImNEwl8N+#_yF~{OI0_F__Zt zB7c*L;OGbP(7eLvrY|+#E>?`QR=5Xwfb++VWcYTU?moP={V;W@t_7-U@)$`(y&}H& zj*4jXQW#>pn^Lc@=CIVbwXSKbFY$3WI#at3`P_l$5wt5r4}lu$4a3Medj(FV+{Vnp zk!!Dc8vE^MFOQ+2?XN^QZbS%rnr=l%BGU|#0(>=F$9nUcWjfAV6I9z;+?>E5C2m8& zypG;@S<|mlIPYq}cYyM_Sjuq&Ql@_f1pTIlvw=XkkETHC_~JG0wY(wqOIA|=|{oZUO3@wH`1%x6KGxihC>NeWr){p|LK3- zIe($hj%Upyy}?X-r4nE`&%WS0jl_$~d=%SfbLj&muI|uJvg!;JaJpl@rCo?YpnRuB zO1^dI>@nE+9l7thW|XXnohK<4{T 
ze5Zfc6pW7ZwQI(!BPyPM(=y3~9`X1>$Kh?xl2(lLQP5F7?lIW)TB%-ffo?C8uq>`y zT>0+EVra;D>`g~Vy7rM@ zelZ^y)INVzmW$!~`pwNvpT8rt4?XntS#*uBBk99`8W6exJoBH-5LabDRT^J`VJsDQ zL)4|dcQY7E)#-$~yza#SsyYe^dx1XZ5a?SgCpTujkEYxXFuak7V)feDvo6J${XK_0 zaaYm>c6hy_!Nm@Hy_aAC+dcwtn4HhBT_AeabVQ?YRp!zM^y?SKIoH5*bH2r`3JOVd z%De|t^}_3^j1epm4x_SD!LjPD#)zV<-EoyR>J606nh{4-$IKyfqARvII4)EIcKyrX z>m-^H{z_3g4OK-SUPMI<4f!Oyv7@V$-b3ly05KLbV)#A1q*lP}& zZV&Fm#BJont9Hk%w(f;t6EVEjtEC1F{^>XJbpbX3>D4TXlO-5U1le;5dCL@=frGD` zgTY|C7bF?GPX?diU$$c!0HZC$zZ0QJ`TwmdUnA9x!Ny9glY1AgDxHSfotzpTtUQ_N zguQ_Mkz%hM&&dRUnV9LHMfT) zn+;HJ9z_0@;{viGFq=|Lvb7VV)J1gGDy`I(Z{3r+*8?L|D@WL@Gi6@hXrycm{5Sk` z`A~8;c=-3NP+-&NUkz9+SFWUM_Z2#8x*2P<;rsL*xT?EEDz?Xu=%5ROt;UGXNogq0 zzNhO8D=iD#xV(FNGSep-F0F+lX^_65sD}KTyv9SLoONFF4;koImG*yNfcp{3^Z-9( zoolT}ti=jj$Lberbi7&3S8op7)u_)`zZtkI(5K4nO&yE7=v1HAJ@m`_6%O?LAgvpq ze^_<4+ZRoh28F(?ab;U-p`g@|qT&i36sKnCsA(2?WGP>)iwR=^Xt~rKPAl@@Bwo8y+H77`#H9;f4pnn@@h)499X%eyE42R#di+8cZT0000CXJK<+b7N>_WOZz1ss|s72mk;800092)mBY!+eQ$* z`&UdZfmEm#8WcrA#YT--O${VA5IMdsR?8u|FuAMlhh(+TfA1_w$>d6w)wDg77eULN zH*aR%3}-wZ!+jwwR66w~LW-GGXu&$;g|om_3KhCSrxu(Bsic#aYqn^UP0wWaW*C&(t*&{)gbtU!aIb|dWoP_jik`6w0S%~YkBa6)0Sn1 zR~XKdgj}($EBNya=h$ zc=rl!hTM6BY>CE_);dg&dhG=h>UXEgH3(joRKC=Q3_x4t^ZL%h&r2x=jUa}QdDZFX zK`an>Yh|v`jG)_kY+ma5j*I{Ka@YwZZiOV_^kW>0uw!4t$qu41cr$p@ct{dyS*^Lj z+dUe?5!9Veml`oLZgO8y!UhMmI2^W`Y7z$bch`3VO6M>Ef)$S-v%c(UEC#_!ib}bi zWr?nrk|Zkj1w427V4w7JtQE~a=JflX`DL7(e*RnYO}A?UO|D>$+v3luL)G`9;8HLoAk9ixXJ4D=Ib2$d%6M1-C)7VYjQrLUAk^d)`+n@3|G4u1EVrjlwU}~; z#HK%|Z;@si4KMl*<~bUOb~CUVY9(=McThUTW**wL65I8%nS1Ghp%q#oA8N}XbRv)YU8l&xgDd{aH92@7~Y{w_AVW>F-<$XrWY1jHjS72yM$51o%{q5BeBPDM*)2!79~Zs zF8UaEe>q!D0eyua6TUE-<;DACwwSVL0<@<#`&42L2glDsR0sQ~b^zcVCDSRrM4!dt zn^=C+YV)u9y&jB{5+-TNbe@)JG8I50>NN5?f%m@<^fP10yf8QZDxFSYStFLdWB4ZS zPBOpNRy|g$hK0?UU@@AQ@Ixa2?Q*EcH2-B*;2$}kFkl6EBH2iiBrPkOPT@VVS2mIq z{haF%7pz2jhLPVT({xO~&0+s>VxY@8Y$8B1zV1Z-CM4m=)tm6Oj%r%V(Gk8GtCwPU ziKgCxB~MUn6d5uMa81Xij*A565E!Dh&w9dt(LT#^j(Zw$krV|Ri}`T_B)S1+dXrBGl&9}WA|Mzh(*y|wrs*H|L5It2JIGz(EMjkc|*-C9j54Ae*%OlJDA>EwUoz+`*+hO)Q~+ zjgovaN%Nv47}J~>PXkBNfFhK`J+)=HhibDm!s$leX(T|iN)q7Or+?VvBf+-A7JnDo zUSn%@MeA-T-^zVJoRElwUrxC(Cs&x8|2^8PKjr4%K;MhnuZ6DQx>dxUU$flzXhul$K1VxnK7 z-9oX!o=Dv{a!3{>zEAWjx#R>YG$nX#)Z|K;e&)?(4GpGEKwD)6El9~})x#j*Jd%kwO^ zek;<8IkSI!$3`W-sea{?(=(~}kB)wQ%che~^d0>0;n??Tk)jHZXURgn{FZ)tnJhjy zqiahOjH-QwKt4+r$uOO!<+U@N{w@80$UHha#XmoTKi0g@-xL$!yLE$ul}BK`!2ka{ zY#d4;HcQKxp45FR6rz~u^;P2qY5P1$F9h1qBrc=3r{A6*MSQoOE)Tu}ku?0Tm4xzN zVhkkbwKhpT)%ly#XXo+XU=`2dzi;@m-pDC-swrN%PGN^$zIgM^3un5wa`3%s@N-wG4tLRhJk<(DIzUhv665=rT;m0Or{bF9bcGzmETTcK-M0Z%+Ry zfqkM8HE0OfW65R=0A#wihrXY3OwDZ<31;WR=F8a-2b>r>v@qEg zt>t35SY)}us4q|db@uZ6m%uWwp5tyj2i(6o`}+HH;3KsMgPxf6FF%3;&z8WW@kO33 z7p-ChYdGOrD{lWpF98Tkpb0^R-{5=CfKb5l10a9>@R{eOXOfVRD+>}+T*2|BMsGm> z=4iN7=66Ca1>$px(o3`IbR=~3QKa{u&A~zd@k@3P*is|X|1_31jG?xR%WOFvb81E} zkHQ;rXayAEt5f}s+7k>ez*%t=i9d8qh;*X_tkw)w1;8YAg>X4>^Uzn3FmtC*1OU>x zZQQ6SXt)@LJF=Bf1xB^xxRSosN|2tHc@ED^66>JvKV1@MDO?!_wu&}r!3cDtAg+VX zw)R2=wLV3R3pygZ!vN{XVs+KHsZLl zO%Oh_O}R*=NUy)JYmHiM1za3ecF@7wdOGSS4}T?Vc>DOSQ5GS^SAVlj8FL1Zcugr1$vmz9;5OwykDzN z4e(96c@6mJ|HxVn~j<0<0gJ1K+Gloj9MSlSmwbBqi zI`S%{u+9o2%4B%KODl-eJDPWc!VFxqOEsH8Oz!!8g?0)lwTAm;P+KKOZnlIKpdz`2 zR*59$duD=Zi>!@mU!Ma=KMqt z$)zBK9t}_jB+zXQph&tKqBcjT+zskXI4te76%fEWPZuf1ZJLAkmmlAy?CK}wR=r_h z8FC|0j3GmoU~aQ~daY6(+_0QzS-hajFSBZz@2@5_-9Su=Rn9gT{}jn9ctX=R4W+XJ zb5vIOIAa9{0mwz?GcA_GDdb_ni{M|xI&xdK;&*qYt^NdTPT`_Ln8nprVz4$ZJ*F!~ z3dAJU*zIHW<&>4KF%6pj!7u}aT*VjD3=$DfX~M6>K<%#c#j$4LR}o4+M$37-6FpF% zbaiwzBpC8VbJtsK5j9oGWQ<~c0R%G5<`*gstYwLHCrSoaD6I{|)>acs4`&^!`xvMV 
zu@p-z#R^N!NDf01e@t>yl?SV7O^cIas@Y9qa4M6LE+QQ#6`|KDcxg$^`U%jK*Dn2& zQCL)Z*k$SPuv%VrDtp{U8c*UzK)#n5A^8oc?E*LlbGBIi~ zWO}o*B zX~!Wn#wix7&4pgk-)J|~=2c+bm!vMD7VBPgjbbl7;7N2R%CFkLBbf=<&EV5$4NCh| zfL@UH;%e5tJmYfDcj4Bd!q@+HW?$&DO>YC}2lRtpRR$t!ko$b;a4Z3K{8 zpY!1?jJLJ7xQj&#odAh5^+0C$t9`}cAwylWm43xG;*q%`hA10s8h+8O-KU$ma%MtX zE4&MFT*s{8D40xVYWsuIh`nDxik`_hx8YLVDyXV9oghV;L>Tmn0RFD#mHle&38qL+ zWHLd8Q$=2cMswJ;6PQ-pJXOEp3@ldtKn_YD>f5EI<1xcUg?uH7ML_0=ijgqcMZ;n; zlc*$A1R529_jHmn+Zq-71IAB&%`L@1VW_h^4|qiR?MVD$lPln=a`-3Zj0Axt_#&UlG``X46fNa`wXv5B)GSR8;w!_@(gqAD31 zvnjif)xOF6I-9euD<*TP{K&aM_B}chzr@+ZHGqQFmmJ5j!e-e!7MGb4wwNq%%%qr` zz{-+4_S$PytVG3HLehD;IK`TBtI(^v+UARA%QCQ}2A%+X90g3k<9ZSBdC;b;H733M zFBUDW?Nya)ZJqrC|qv$O6q$6}l#Je!{|-PE}6?lERd@m15(cwSvk!sF!u1A!nuGsLX;DPNLGb zDlFluG+DGBC}2C0lsN{?SH!N6$YatbdUOlk&(04VYz2H9_{K+m6^iU~IFsj1JKT&cn&IIl852=CH&u$}o+q#ee~s0US7`xu zS`uC4DVtDnW?ACdG9dUx2JoS31N$O3gfQA^zc98C6*+q4nJ7FJ`&webFlwq(esy`>AWe;a12)(PujLd=Gxa#}{7 z&7*K$hfoQRmThx;Y;E16YvM7jYYH@qr^QaBr`;p~b7N)fjnU$0tvVHLhHP&n-Z$fT z)+h-Z=&q(U^ojKaIN#{DZq37o0VY&PD+iwDC7!np@Rwff-GIODdFf7hNpOPfzaWe9n{YvZ^fl-`b^aZ< zP&P#2WQ2w&c*Hy;ocOiG$a^@@SJ2X}Qme0@zdnK>&U<=ZUN16KFKWs;xX?0fi+_Lg zl{QPQo>AX4)-~bCs?95%2G!>);-0|?RqQkT#jrD>NyV+d_Q;!$m=vp zGs92p9trDx@i=VrhXxLESSZ@Ng6G$c1UHZ(HD^{-HW$}HKgO4JCs-uAerW5l!&D*6yHM>#$t*#OX!^@I$Sy7z% z+NG!|6P{dRSYcvUKR8C=kWDfkal58?7|{AM!Im2M7dgD%X2&UPdX2|x-9yf1hBat! z+DzV(6j`a-2IVgGcBm|Qz_%d^O)C@CnL3SKa9{FO?foY&tn%(EDw zYT30*B)QVLUfJ4hponj!fd4=mSjam+M|&1iQ_|p^mjm2E1^FI*VCLp7VAc_lTXHzPn)=?Qye^ zs0(NG$c4$|PDe;!!iarIJzMl94H>wC*QUg9My?SKQ7B8M;0GnWxOkl);CLa!^2LU)1a-u-Y-(>OJ+qP#0h&9 zfbcA^VN7E4k?{u73;zk6jjaigH{v)ewiiDA+NeB@Vtkwgs)}=}Q^#-~mZ(mr!q!W3 z7jb3{dNRclqddL1z&@eCPCNpy^O$jNh&n=5&0NVeNJrSLBoI5%+AeG`!{!qb0@tvp zo+%RyLBLsmbD@bpc(QIAhSEkN78Z~iC3di8L2`&m%6flb)f*DI;9ZuEssAJuGGeP^ zDt_QWa%4eq%0i!2GN0-~=~3jS(zZ^L!(M&b)}B${RB7ocFWS`3h2;#pK=UHeWUhO! zVj@sFknt(eJtBW|%o{Xn{ZfruRBGa+@DDmkY=D@Qk8K%D{8=Z8`M+&5{Zt94pq8;p z_QYIIWyaYuF{IEDFdh3MBsa|^KZ<--6YrB1vRgPE*H|rna`P~z&BRK%uOzodhKvDy zO?c#YuxNgFGe*xxy8Vo-u$SF1<5xQisrFc|%a%{IDM{hQpt0dQ#962|zky@%MZLu! z#`$#fj=T+^XfLO_)=TvGPyh42cA6T&YB<;lOO(X{!*e;umo=f?_K=))(A2vK5B?VkOlCE}@PV`nQC^Xf2 zqe?=by4paoz$VJWRv$evRc1|qzkxgAZ;@(lyststub$FKzf!MFtn-_z_>ny1po)8y z@Tl&3!N$pg36aGjJpGnTCXCJi(Ir->O0bMbHVvo92BpUF12rptD76fAhsH@q;(VY= z?}S=@I|hPwvgB4-?%Q_d_}J3MTS~cFB=lS-nQ28XEva?dqUAP*zw@yW+J?H47q&zq z{pkbzR&5~G&4+qg20Q6Xf8XG?WeMKA-W%-&i9!m!$(A$~UOkbxK;R7rD}`qx2W0c>bQ6&I{M7mWc%?`NK5CLd^w!Y8|qv7Vx3 z566vxU_pcfB%Mn=@hg9~HO{f!4%vC5Rew6J<3mv$Z+Aofoo*_8y}XY2?O^JNBHCc0 z3aaLq;?-pel+>#2DCG7BioSBU1`(@Ge-h|Y+UVx6ZKf%Fd4Sirv&UoF~>mmI+QT_C6SMgwO1siOUaU$_2foF&Z3GHo*`* z<3`Jj7{G`z+T{|C5c1KniGQhkeNv{{d?5?CoaF6IJQSc4>W7F;3%0&=``}oEWB~C# zf{AOIh>L2>%*bP9XM(quS310rF?B0L&0pMptD^QsL5Woc-rDhs4R&2}@=?k>WVp zF5HOOAk9SnFP9h|i{Mr%+Z&rEL2zzopj3i#xV#1nb(MP5_O-puGsKNTINq-EtBw&L z8H=cO=c~k-FSTf1&nr`nlxF0+T=C=0Y8!ziEavmHTs4Z3zOy_HenD(0Zb;(y_)@e^ zEDXgDWU3Rq2hjFXsN9Px9C#)ha^FY#T>WnJdRg*}goXy=f?_QYhK-A43}lE6D2278 z_pIPudEeTXL>pa9yIR568k2yk(5I;hNmk7?5Lom58XEOQT#>mT+he zrO%!8r9}hfui?^;<4`(HD<^TM`D@dl<6}JLrwO6kR1w>?-fkD5Ep@YN+&``*O9q@P zz3dCoEN=dW%~>wm{_(5Dw07$19=585klnaRbg<6Q%JkpLZ5U_`T6223!T6GJje2+9 ztp8J+(_Wz4Cx6f~ntghXD$5pFf=Vcy;_09SZh_M~%5v^saQRYoIjU-eNQ}{)ao9fZ zit)}x^4h!%d-$ZKesP{@9pgRp-kSp{l8Ew56xvR_1E82(79HbG2Wt0+>VO~_gD)^D z9e_ltS)bX=I$yEr^wAibCGjBkh#R1^5ET4$52Hz16?T+Z6oNTckNh31DiLv$24S{r zSZ#cHIwp}BY2N)OF5A|fO3TQt?V-CW7hY3ej5fzO!%d{FQ=-YU$SUzx_T)v2t7Tz( zFO&I&u4B>Nu8h7~x44aQKF?a$JLB{WA}JRnhV(3KNAXFx!ViQb2C6JDS zq)kizx)c4T{q;9bv`%|mH6Vq6oA$dI@StOA)x+rX>g=DrUB(pcX0J4?|>LUv! 
zZ7`5(?IC*}6#6+>xD%Gk%_8k0rqOZdyI5tBTw|w%u6JC1adyQOVtmmPcFALU5+5D` z)2$&X+>ufOu5VZsYZNcuc*7j@H{Cl;^a%*#MnrFT+0H7hiL~144}NF6SyVE-B;85X z8m5XQFu;qBg4ZS#!lY{(%qtAOEq9C8u9WvSj315)R95lux@4-GyGP_E3-pada+hJp zUh7LG6wRWaGwvXxrJYQv6rAc}a0wI0j)OhM8*GCD%IFepZsw?4Gw8HB7~A~1Z*opA zg!OLpL1kb<&cFaIN=Uied!xmIwFavXHKyhm)=R|03x9k1?YX%~ia!dUn6L8!$OIw) z*SeV~FEple_nc9PhVNKzwVfWP_C?FYIV&z(4tL0?xq4Z7XU0_CYnWGc;Lt8*_K)hl zpgBCMb6vB4RQKxU@Q~(3&LJp+$E)jL^`5g1VAprua}9Rw%b)$DhAx5*5A3@ZI*em) zc)k|ZZk*mfx~CE(Jg!|5;tu)2ym+O=&BTx;zkLPR24b<)B?@yhn7=vy{%s7Io3kHY z@B{HD(cpNkkI!D2pYI+ACPx#>dm*%_@sKgC<67X)UY(tvoqijCefsU`t7mFfFcNp&&%nU#b^BjnZ8z<$e9!+-COxk&J$>Eyv4^~SnDRyeX z_7u>C0JjHRA9QDg;5kRwZm_?q5TQj(ZU5Vq?eBm8Lqp21x$q~*6YGSpokuIK{%Lju z!^^ZC&(%2Iyx`j8qBAlX8W{Byc*m|zlpd*!;vu)d=gL{-Rj_uMN$AEUdll|MH1~+9 z?siHWm(3RCH99cr77~LfZ1e#Z{=4_SFR>{8I@r;!_@bW%JGPty)N>$ zkPonBT8o5XA*h?*jjqEsVr1_uu1mjJ37pb<0HwP7tfHWAu5HPI`btLil*m2^J)v znc%^WmO0kOrqfbJ${3Lmux^2-?1MsBmjJ+#VN_P(3G)tGZEYx7$X;q$yoYwWP=A0e z-Ii@xTeCh5X=Cg30m8b?SOV&F|}D{8nGtqv*y& z+NOZ+v2g;jLYO&;7nZt4s4eHj@&;MhgMKf7n`UWqURTH(wQQ>etozHVRS5h{a1nw@ zP_1(J-980iRChx-Fv{}uBBeX&NKPBqsOl9$ly?OjF$lYjUB(O1rNLsWR#L;A+M`>N zh8GA&+B*k{8j-4@-m8`H0$lNT{l<+ILn%-_V_M=0Lai&#w3$IbN>&I!iK73qN2d-A_gO5nH4>qsbx2Ylrcc$pqQowZzbI?R#4sNZb zHl(5Iv{U%m!&S7kc|ElO4YRg|qRWI=crj&02VW~=zk56Kxe*{jH%b9iRE13)Jh3;r zlr+c2lP9A-77aUN(_h(=+O3 zFxqd}5Nukidc5QppgOtcCbK{bAPMq>=tA^MN=M7Jz(LtesUuY z)ZKXjS5abtGgkMJXKxk&@`@I2yii+{g>joe{$=6dI@tOW{`F8s%Mb(3#sRgszuBDS za^xLM|2^2Ed2zlzh5lx*^cpw_@Y4~@jeZ7y!jl-ecs_umv;cU!Ho9z znyQyf84*)5f)d#2cQ zB*@!j{F9XMfB8{OHA&LxGH0!1nB}PJ#z^eF_XoXRZ@+_=8}LRGCy>~gPHGi|Sh4VY zWXT>nK2E{b+V0Pd>nMLG4Rd2fv0ip|6LpcE#oS?Gg&NKDA6WDlIXIZwXgU7 zr%R3f-*dD2TiYzW6ez~qe`4Sdpgx8LX{Q-sU-I?{+LF zt9TceR6rW3iN^&S6``c2vEL2cBbCz$0r0dD02Pm~GWY!0c4bT0x>g9i4Z=P?=luJz+~nQHA5`DwDc|3rq>1G4L)xu+e}DJto64gLk6L8G#*bc`LTjK zF5}eTd7~APnYULB843*|dQBJrvl{hjK>Z`?YiF=V)u8htgRhONg4K2$*A%2_jB1Fj z6XQ6%S)&#v^Ew7Y(Xw~!Yf9+Is2j}|UQGC;>lOO9`gy4&ulfMbJ5@hFuV^`s&sp(1BB49FWYN71KI*Thx-bdmoF_Akad% z`JmVIuOD^PW5wOfe~wVdCmxPy>;U^Wdp)nXfk&*ohAZ-P>m6hIjrulPx03q3inW@e z3_QU{eZbGx9DZI#ol?BlkLa=QNGUE}=Otxtef$6Dl_ zFZz-nW4B5D^#%tW8=%_5k`2$4T4Qcxt86$Q)}9Kzi<76DJ~?XRYCFgA zZQG*~YjpDnhPFg&2zd%r9uqCJ6QX<%)ECJxgEt|470F7QyN>wX;7q1rhCOq_-%c1X zcskYELL(UU*6P|X2A91$L~E-x$E`c(D#c3Zm@B6;m+_k%a&>=uSPzHqRI3DV6?_c` zzKKL>AWOD$%=ME$PM^L`sl{mH%>ud37v3@n~(Yq25sB zI$Ll{n|qwGf(}%QOCsjiA(I>J6?tKGwa`_DxJ{dmF>V?q_Un#D{TR*WEttr?Ka@%Q z$!KCNNC8wXiO@cFtT=(96d_)?dyQVJy**tehRNs`k_d`6M5}J_hSj3l-9M!Ppw3Ad zv=mdDzj(;N=h7bC2PE$UlJ^10`+($qK=M8y`SDp_RuQ3KTB9J`bL4%_FYL2fqgx-} z3rF^_3cdPO)uiG0g>`d%b3wB0c%3_k$z-i?v))HW`^K#{yOXREgmnE@!;hK_CWX!# z(r%6ZnPHZx6Lg+0w-zr2l-EvoS7MMS9mp{!t}bTSu)^#P7vT%t?MqI;S^txlp4`jW zzLO!&bZSK}n{*%4HJaZ5DJ=L6l-Bi&_a4T*hw+E>Fa%-6eQx+Pb~1w2xHaHPgS?^a zH;45)EQXNq7RIU3_8Ojh(=jygV#rx;o&H&Ktpj7NE6g5m%tsfyY_PXK*`4gN!QsLF zC>bYDb`IIj&f&q~-e|J7cd*wx7)&OE-6y;IyQ5@yz(#xfj3rM7`@`{Iu&>E%8t!n? zO9uV^c+fu>!k32!Pj-6;lRn!y+({;^KTLX~L2oeH9}f)llii)(VQ;X1$o59#NiqZ!?jB5f zlfbcS|0R&2JL7`atb49kkCaisZ*PpeHId?0)Y>?lQ&@Up++{t?z}~uQJ>dTzUraL~ z{HHXKx{9~!+=mTfIC$-zTVmSP?t?k0R@RBaFE$y~ixQl{`@l1s_r10Ec||0@o%n zWp(2JK2>%11q&%k@*^vA|voeo| zWl&zkWe`s$*<}$F^JJKgukigvb{R|;!;4_|?J0bm&fx7uwwR29Gu25t3z8VV8K)nU z(ZeGBPZG?NJk3VE;PfJ)VV&{fNs+;C)1(M4)AAxH%J@Bg5YI+(8W(Y%Tx63`k{3al z(RZUfz6|o@GR{W@e0x=<)1-$Z&?vw1QHtS3GFreg(*i!8=Sh*w$^<%wHZYZ8ndLz? 
z4n}EF=IPm@OtV?h3tqz0ctXe#>zm-)0REN4 zvPn7~;uU1jNh{rT{Vi zi1y?PzMgR`kNJjly20f|3WRb=6ofBnT-IYD@fu(r6zOc35PbO>*qSZ?g=ZOTUtAP` z&>gWN@tlw?8FfQBAPt4D@??_4MS@f#P}$u+g)S10+3a92IlTnwHW)7^lPgJ-@fmDR z2WF9$KNN^)|4xvrWRkG4lf@h+W{@=k3E@M-N{mZj;@C7{%)lIEyqGbt;~S&1vlwBL z%{=S+fwL3%r_2|_a*;E(5j+-4Rj4A6A~eZZVBic`BBSY^j@pH>&oDg(H+lfGBHr2K zuW4lH4>w&WD;iy3gU$gg8Uw0{N28P|69EA`8V`qy>0%O>+D0)PLSoj?OdxKj+Y62a zj>$b{ES~SM4)%(DGxm%@^a((;C>B!$8ZAz5H_r0&jPOE-7Ca3$7|@FlXe6O6yfVIt z&*O9!vggFK4EQp1&fX_88iNL&B_9Q!(?CHDkVHW?Tnb4hnlQ#p6JaAwQ=~JEqKzrh zsGNnPB+|BB^vK`3T}t-IxS+`x2_uOm3MkqF6nGQoo--uw;YB<#ZGdNTne;tJ9@yH+6_<5L>vmjN=JFIUR!#&_7B!qfJW2AIw=m{~Re`lA|;SCJ0QDkEIVU#*$3P+#J{w_}wcJ zu`}^j-*5v+43Wp_EG;ewnuauoe`s8yBT$b%phkdSjTbrem`e@@0!V}SkZ^WV@F8`c5}T{0Un8mK={g_2mYm;55I+>EZ|rFV*jx0zn`Vr$ z&daQb#(6wVF0=go)|P1g$9DnzS%6$D%F))=*|69)pCoyn<<>J&bfY;SSGl>dwe`nI zG8u=#OMLp4J^W;~DAMy;Vt0c8ot;DbML7ml=7fH=TQYTZ`#brRErs6AvuqMAo<5dS zcnc4-4zy1T()*)nJeMD{qrjVZ{>kZG0~AM8trf0hFP_I|=_D<$Lg2HP5us27OlUMo zoC)Z?(F40rRA%y0JdV-=gw#3UTap8qfgeQBthrYt_N}d5{CN+4tofdO$p^=8>pG{U z!Poov?<-ni7$noQWQ(qAO6cG4UZ?H4Pp<$R(}?4BQq+8kEE;;A&iS^E^ORVo{oOfm zwIVI}S1jkf3;?l5Eery~sn8K~iMMCm)_xLnAJpuq69UY3fJT({2xTg@k*dr4h zTsyW!fd$%AK&dwevhV-xS*MV#vm_a5bB9Oi2js&b;JmZnJ32f)-rYNmXq3IvgI^_v zJyXLd*`Cd`BNG#%F&?h@K%IHKpd(DKMEs~*zdmOw@ayZ>)@1&sCUcm%#+Wg%-4Sl< zJkLI)BeawV`PsDSmq))GO8g)6)i`?$cf6`fU>@)1z@EtVBf>MWE;73qcNG+YY-T%737cnOP^5{5vvwH|X{Bj6r zuh_`osV0=(#2?e?Vj9dA)3YRZxu~)Zff7a3S}f-CEH}vQ&F=p_c=O8}FSp+8Yoj@= zg_sB@DDMXxh##00z*M(!Hh`+rcw}s@A+1e{AJlL;M4ht)580B*2)irSv1!NY6K*wv zQ*w+{80#!XFNWPAk{%1H(t z5(ZgD@6^2MP!@2}8T&38Ex-^2YjcNsEv#s&SJ;V%DVTviq8&i*9n2@fu*aARGh8za znfN+N&3hRGYvMxK+OZ31y#w3U3ozF#CeT|vSn2j@H2Obvi8;!2*G+Lucbdson*@6bmxGi%&mK>EAeEhZpdV)`k-I^9qeUO zGNp9%a(y7H*^>KnCT$5M)Jw+?#{H6i?)1qq`@(LIIOc7I+E;_@w>Qpm=S_?0by7YlyD z?%Kfq>+lB5B)n8b+ z=ZMD7Vw)U><_fdVXR`e0$_{R}u3x_pOJTU#1-*p2OGhvcxk-Fl3al%{f_R&U~638Zq!m<_GpyB=VMj6{i((_q8 zTjG$gTq2%iv&1%l>ZqhpW?z^}nPr{ZRhV@fJF(G(JKBcq2ePgRGc`A`BqOis_V$s+ zGU$HQXk)FlCB{gL5j(|&bbpjG@BT2)Kz_PDr8!@FWtnr|xJrPfGS+Pjo{ zuE$fl&2-#b`7WI?Pg=a1VKR4uuBhX2LY_mg>l7|PGz6zW*u4~Q8j9o&R5A@r&FB(( z&xW_Hp`DvH+s>+pWvr<MXy1+mE&e?W{fE&NwMbaO6@Gq=Cn(n9aG5Pc>5LMYWdgWE>t+IoTd4TLXW@rUj&7Q4*3h2=J3xnX#>VxO3 z+iq9;lATJi@2xFzyp%(9y7Hq>K{6>4tCb0Mca)y9D%D?08j{p-d66;S1Ohj@$`kkI%l+$U3LBJcqSz1c>zfsC|5tOQp{|E0tq=E*(EET*brsQK_^aW0pxEfQ{ zkEjxUW9^7{FI3o0Ufd9wIB}JkF3^+X4F-2d890&;mv`g}%1Vys1wAPbM z8&r+}eL#Z0c6SyWjxnvs%yv8cvJT}fDULh+Rq2tf58Tb>LAJ*q$<2zg2*3%SbX27$ zu`vx1EkdmE3L6mYF3f)fqDTXg4CY}V@D{)#uO250Cj%h4jV*}4-A9=aV^hkE$LSxMXLk(Fr2 zz(XVGb`SJ_N_)T~cF{C0$8{_V;v_Hw15-S*{Oe6W@!O;sAGIuTU8OG~g5#p^I``IT ztC9}@kuA?7Ec()SZAbsDhAOIoWOaUHT!9C`?F1?&mObh`a41f>cbbNeZPUY6>m({O zPKx9L7BL``Ik-*#sIgRSOIve`&?23z7wN1<-04@2TBBC-qVh!kWy%g~PX#3Ewy*8p z_$;5yK2pskr3uB^xZq%{slR|OWjdcI)!Ep5$rZ^E>lrCzHdl3 z9vLHvU^cUtvPdT5I?Cv*O(HCRG`=p?7>KbGJZMGm8iAlqO<2nCvlzLX-brZmX)}7; z=*G9dx`ncq-I9!eM|csj6Z{>JZ2LRgItT{=Yq@`Fzy4`05201PwqR$+vCiAG41*ta z>0`r>!H~~b|8m>;tjrgQ@z86}#6M$DiN2i zP(~xOnso9Ys|CMZ7k9S~nOZ5rh9JSPlyo3`S}7pz&jO53%VK}ixI2|HuPJUyujDHeQt8V6+GWj-Z@)aGaydxQ_<;$ zM(fHs`7;1$t#x#)2D!fRVdK2bFDxLMFMTm!{m^gX_tJ))!;Vn~JRPU!e8i#&$)ymm zJ?|c{7!|LtDj@TNO0WL2#Z?51-5a}vsZf5F2~?i17Q@ua+kFKG`zBXzwhzJ4oZB7? 
z%pXAM6eyvXkxfe^rBmrBMLH%m8W$z39l6b*PpJwVmh8Zh^xhjQFKPY7PU`}Mg2`{- z(+_}l!ZpwZXgd_T1+8o-%?wJ*Z6r7@NAoTU;<~A{ zgrh%oN(HmhnE`VdBF9v;wd5G6SCzem8!k0}9-+A~%g|215*Pg^$uo^iJC>TGN1~vm z+xM0xq#u?U-|6`B8+2V&2{8eKivc^vOXy6Nma0gnPQf9YAK3TUj&{)|q3t(V&)}-9 z^I!_5BfjURn%Z@HW8WzVF{o1Lus|HE75ieEVjDboDa15 zyDyFDo3E{zSeRh^`jgquwxI^v5-sADywpv~VR3s3&zRT*WuHBXvN6BS-)`qyAUU3`t|JlJDCb&% z59Zu1km-;a4GlOya8{wxaPst=(l6Dc1Jz7h2jA{!#M7c)g5gHcmgk;ZEREj{RFal` z+V)eIy*LY)jHp<6iFMA8fJ@d8pevQ;i{c_e6BFanTGy_tP8ie}3siPLYoqLL4T>{~ zrzY;yWe9S=T~u%@F!vjsX@`wGNGa=%dEXUJ-f%4aHxG02qOQqr{OXk8tS*^U zo7DI$G2??LNs>9NbVbv!EA+@wS5#ZsAauQiVTPL)!sh7q!ODEww9US}3c(-dPPoj( zyA?Y0@sE>5p&Fg^bO|2*YkpjnA;FPWzBCCgH}Q6ECQqp7S<9f%)U7wKg43;&T%oDk zs+0>(&%V41jiB&nV{n1tretX9+DO#kbf~g5H1*@j;o$UUCvs@&(@pQ-bZO;yXzH;d z^Mli^C;dZR*M_xK=Nuz|U!DOrpeEQ6=h$u6GU17=Lpv1P(5j^HhQSHum{WE=J2!mw zwP2h343g_hUQRcd?w7w4sOgX|mOL^%R(ang?Y3j>!+Y5OHo^AabUQ^E3muQ?R&b|m zhtA9sH%Uk9Z2*%;;21z2xFhJuC=Pwt_AfiNhrOB{jK=BDYd7#r1I!2ZL!S+y5StHI@7k}_)2}$p z;i-G7pV!#YUyp({igJjW4gQ$y9|~UnLm!c{J|xzGu8Gd+Ot1{CQz~ZIq=&;aA1+ez zFVceJ&~J#x$C(-6*k!2Y8PF<973?EbHzWQl(yBA|9u^1@{!o#LP@e&TFf&>qXQF# zbkuR-=tURtG@>Mu(9pN@5A97BoePI^Xh*m5W@kS;_J461YlHYl@gZY5^25-YUqjrG zog&jo7s}WuOW&&HEVb)rwVPVi;t*WjDi;f;hlyJSWw37R*XGgA@VZJEnyva6+?aQT zV|28kLuUs~#=RHMO<{zQ555Rx{+6L9%YKhi7jRS^F~E#K0TwZU{~|^^m+YlyTyyFR z0gb+6JPRBBcMZE#gRAdO8*gA^{O!sc*`Al>dg!RN9&8M(s|&T}F|VNTTF6xHAlD$q zZ8A=0%uag?#;i&|?|Lhjm4V0uez%QCgkvIFjb+Sx6v5Uj96)Gy?vEWS9Xpy&8(y8i z{*0jv9(2wyv}~qPt3)GK+FWOIwc>2oJ9spBC|=i}cWJGj?FQbg???z$^09_~m_X8< z)GBUOC~PYjtUF=g!E5}E-D$I`ry&T>N0Ke+6fRb7xB}FP`|Vmk(+bCG zuePvW2ZBbwf$FG$<`@8J$r8x0Oi(~|2!IDyx)l?L>$6B;IYS)~Cq!^0Y3WrMRQ8c4 z!*q^0>@|MX?2}pz46An6D4AzPTJi-ugN4=_EUsr9&=VH3s_Av#j}}GwJsg;laY-3x zMhhx9lA{wiMYf`B=r$Fk5M=71V{~J%>9cl)4t;noX98!bz5_xFHt=a>b;~iTo0^23 z=kQKdFQNHVQpVC+kXro;x{wCCdyVLZG@=@nq+TyNtuG|SifJ1~Y}!<;ad4oz@Mg#I zybbP5rLwd{yFfCR3#|SHuPLHRUy$~QH9ymAp-N6`@ZmK-d*8L&vYS=(fc2;2tB61^ zoOJ9#?D$|OGMLvFvqOfv+<7O#Qix@R`sh9PdHwO-F-Qk=h?d+EI=-T1UB6A9+4^OD z1ieUFwW)zdfdy6+V_M2c(O=@~Os#%(i-CQIYf1qtJMNtcuv%&KTV>r^%A8TQEMLnm zUu5OLvxMs8f<27i3W(NnT>VbKD)Cjp@XYJ&H8Z)jU)?J2azDOTB77AY)|OA~evbTY zUFY8RhuoVu(XL@*|WPHucHtqEW=sP=M)UH4*bu+Z4N z3g^JOOAUKwbnBpcY|+y1kGSKX%F2Z9cRVzy$X5jA>$#jTw%v^@(9z)f?z>B{AZ9+p zH~=jh=S}ma);y;QI~y{H*7=`V9M}#%kfNZ#-3^g$i- zHg?S6+>P8fR6Ep$X^Z*FM%QJpy6d%%w`#?ezAnaE5vU;nyDG2Leb28=!QVKQy3}Ss zcX?`C{n+6NDKIg_Dcbe*kV{?ZFU=swKk=5U@SEjUr7H zbQkN-=t-n_VJ`&X2>8|zYld+1y4KLBPqV0u%V3h&&;Geta?Jw3W$#Wg;PAHfGZu?I z$}3VN@B(@biC9<%I~C{~{Q0K`-&qUfiCFD8;%WHYv+$xEKkFPHz21-hv46P#@?h^^ z_xNAY@&3vFDOmZt{zNWwy!~KD=20?BlM&v#!Ii+t30#avFZK@)_Fq&aaru0>ttZ_{ zJHGm&AgpXfsA5n6rk` zODZV5C;TVWyIGESb(Mpp*t(2-zK+*Rv_9TbX$Eaz{h@ch-XkAEu1VBrnO=2HR+G}T zj&_8cT%I?`8*4ZsMU?zj-B$1J?H&DccpAOhJ&6vFB7}{RCe5HM?```k6eZTuj;eg~ zr%gOOI#vEIH8f{-XRDD^sDv_d&4yQ)=MR0JhghLdC9T0owL<%fR+vpNTGw?T)0IS- zm00AKN3B%rnANJ7XIPnpSHEK~SfLZ>8uU{UCpXS8*%1p|THv1GdHhV}Rkp2C%$JQE zh^85a_PALhs|CUm4jfxt)B93vf-6Zu9c}6%SKSFf(^{5--5<_jL8gCVy|ScF@jO0D zCuw=*39HjGRaNE`E#gT@MOmGKKDCx$A-dpg++Pr@4JFKX z2JcmC5An z;snSEpffOVZk1Vdc2#0sf;O#XZt6KY7&5@BoA)3Z_$=#8wI_Dh(I`;3C9&A4DH zkt5I#<4iteR3>LO22ND+^z-Oq&CGwCT~Qs6k{7fBY@A|6D^@~;qpy+I%|tX8qkX8y zvGpe%&FbN0_=TP9zCLXQ<>t(4+~ri~Bw=7CpR+lGSfZVM&dN1Mwi0IZtnhi}CSKF9 zivlSve_#xNY$WhLEqNB)Nt{lx6H&uliz(9w_M~#~#}P3HFkze3vywhAfuh5#4qi6H zc7=o{+*)vIk1ke8c-6KGsmv^9@dAJ|15`z)N=#OJ8tuM#b8y%-HBR!atS`b<<6GIJ zK-S3DI=xYPIXXI+JL|Td=_|Rtw*|>frU6>gmz1ZDYwJf!cp<$=EfCkbQM*VS%1D{T z)5l%MKjr4HNsalg;*?+nscC>{aANg=qE4Z;5}2E}WqZi`+@MX8Z1+)bMD5#+jS!xn~9jAcPawc ztYvuhd)<~_he-`3y#HutN_yZo1UDFUedpbFJvCUGtosiJ9t}u;{k_DHe~c$!wT$rS 
z%)GG2IcQjyS^l0?#!J|ihzwHO%e1#u9)T2ib6d(J6sl=aj<&X@ae0B^wIFY+1_KBc zWdPEa#m%t7YdgW9Z~WOtx*NP9ZAcI*D5}pu)f}eN7_$i1bx+x=qun=XVi6;3^!uJM z;OWYvIORC@ULU-CDaP9zJlT9Iml9khao(yoR?|by`pMrmL?@g5U=&{!^}S%RfZfxh zV+8N%FDH?RQeu)!f_rEd(xLFYDVEDV+`=cEL+XN7iZ9_lJEO=x^lV=qovcJ{X88f5ZdRu$eTO@DU@!2C z&0Q+ew5wERR@yMmHBcpwn2#W6kv~Gmp$`b%uD^suMN-4YVq&!U-lW`>X#;<%km*+1 zD??IOJ0Gl2xk42;DJ-8XI~}=Q>B+ar3E$8}C27H(O}XOSA~%bjw9d%$l3F6bz9ThD zx)?rOT&%+4MWi501JkWaGSXO8YiWKdCbq_8?llSMYhJZr!kh=ct7W#fCc?@j4nl5m zWxs7~n&mGC?D@v#WAph2d%pSjNvMZ4!^6Q~(0x~xdI1x~%_vhEnqjRNQI86@4401v z4evKR@73I`m)@I%CNGs0DcZ#we_Qokbw%fBP~V5~X-zj9?Yr6N)~Ao+*y){Q=pXc5 z-TLS^=-AMnAK|&7dNJNMKL(o>?W*TZzlFZu-+2p!)hFp(h**Cz@@dj=zoLQjmQE{F_7rbBaXJ62iWBU=#vYN zFz^9{O3`|@UW)+H`X?*9+YRk*gVh4g9d7df775^+k~gk>`+tkmQr>pTwd3_2Lfd)@ z5bp(tnatpW3=E7!V28d+%4WpS$7M(cXu>mt_v|I2P)_FA@S<}w`z7{ieaPqRNe#K8et+MeqP8>J7-7hhZI&efBf>O>C+*n{>pkfD+-RXRAyqLf> zoN#fla@`X~Op|y<2YI;wZ8yA0@-e?4?x7JJ$5D+E&xwBAK$OD4q%r2|=Ig@a4x6fv zU$#fImigMM73VWLp;g`rdbP*eBf=6LZum=gywLAv&ofZGc?Nj1Vs5ntErpc3Qbkyx zp6Z1k7eW^ZwMb3%Pcvf~#DCNpGdvght-_(EMM7;&Sb?;bqdc{R>)OMUKJ-nR?6y1_F`uhG|t+U zWe%v|D$lKGcxvc!v7+r#8)tnK292#sshIqbJE{Q(bB8LlTfj_?`lsOPH1`1r- zWcVRU|4=Ke8v!p^tCcYc+20b_)Dn|6EKG8-sCB)J3MBGAuEeMt+~+kzZt(3vjvG)W zdhH!qa;Gx@rrAZ43ah)5ppW|%AhO1IJw(v78{Ys}ug66V3D|VR73Cr6jO6IKRb_T6 z0BN*tSYtbtc`0D)Ytan}HdvR`w-Ly$GW(Ms2&9g#Jm^N{pc{3ANTQYn%Q%Bz_1vdg!Jap;K@MV{8Y>E!9b2s>EDOo&{&ldB@2G25Sf zEA)9nwY;b0m#lyN9jqj9mx4)i;&igelTHkGDR^x@LKX|(_WS+45N#?kXPhU=WL$ek zTNTSi`;Iw)8TLi&Q z{;`U{-y!7M_|y|<8HjI9VSTzrkP9Wny8OI3m@ji9J795HP21&j*j{4qjOKsq_ss`` zYwT)hIrTGl@7m3xn$qQEI+QE}QxS|Z_!uiZnCV{D;oE(O&e^(Iw5?0@ief4B4Ys(nd;Qg8K}CjSXYH>!$iDm)ukr8lF>^P+sb;A6i-&YC=U@7@a*Lpkr#NnkA= zC0gXum*Q8jj*By+>zlbTvmA6_r{jQKz%$Pw<>fuRh-c@C$J2YQ9y7NNiCYv~rjP8s z0J|4p_X2Ds0e0^ne`gMcLRi?rAMJz&dUVb*%FDq@%iPu@gGe|2zee%_EkrI;c5(=zL<( z|6f|_%5>&J`HQunLk&1GYotIO7KocM9;A{+dYO#I{P3p0#>|c} zYx8v%aMXT6NN>VU@2;V@y2)&NG8)-EsyAp9YS&vk=UdE%YEC$#L^`lxP^LwbDE=IDe~cZ>`6!S%G^war)kxoE6#3WWxB*)75eAYKF#@2YRP2Fo#t+Xjao53=3j~-KgPnk4LFdE7PdX~s5BV`62JbK*o*LboO zv)j|lFs~bG3tvNo%e{JW*XqS@P!Qcc^?W)5@BI3 z@5oG08Kq-GXc_c(FSx!j(j(>6zz?0)FuqSx;spi!m7v@vTFzT7!f&ZWs9#R9Gjt(5 zOJ>P99j0-9WhObht%O!LKqRYXxLT*1f|cLOJlwW-_O-UC+ChOv91Om=WNTg=G?J{{ zBjV?Ph^@1~-7>ni!zzwDSe?p9(f&0e5&`z6uxX`;g%z@BNgYxowK)P@J?Fro$DN9^d+@weveZNufOOHX3+x|7&kOK!}nKucRzd{cD5 zU8!NCKQD?{LVypd5WPu)4@?~Ywng|c(t$0m^}3PE?8s8;L6)cIVE&kVeY!zoq>8{n zOZfMf>tjO%+FmcFg|@hC>mlPHM8pG?SSM>yDJ0^ROIkngx$GCgWxYIy1?;jbe~yrd zOFhdmgQHpPmWi*bS1j?hMvJkA@|7k|r*lwcrZF({NisVxFFFsvfR$&LoT>j@g_qA= z_R+)D8dcJ&d9avUH5*=aw+7esCUCe3Jc?(dY-)$hHCz9lfc5X?a#uDL+-Wh37v%*A zod2xncBX}~~%r;in9sOJWb$I4FN)uTl7=Z)EKSnX5hr5Oeh`_D*& z!(BK#_Sik1Or=+mHbd{3djcEHE$_@m(8`l16>TN3g~4nw6(Rlf`zV=ZU1lZ##qUP zNS)@Ef=-^cG#g`a#dwlvGzLu(b)f~yWZOJNiAI!G#%dQ^xGtbCxq%zp6eaj)#s5)y z4DS8+{rv2)<#&O$aQ%g^`CMeeZt5U~?U0&$vBp>|1STP1x6LbOEX)y7johMdIhPPS ztx?*J?I4dy%^{DI)9Ll`AEzJL+41kQb9Q}lesMaRvgy^?&nM?OXgGp{NrmD#j+LW` z!hlp3xU);8#eam#Dgt(cl$gFFo@I)yMAO+NJHER7ZFYY8*U85+^k`C7IFs21VxDCb zEpb}N6pB_4Dw{vuJQHS#L26~w1JS{1?JeRckk^BSlOUw9f@cIX>XHfjK}h}5h!wrc z0rna+c11ECbNZ((>xAs4=oJ;_Qc$Ndt3n=1wRF(HV{cMLp*H*~ZVMel6$N;~ zOBMT@I2dgUxw+m*fuw0mV@MZ_|0L0x_GJ3h zvC>FsjB+|AWQ8pHaJLlklJHT$AgwCkung9(@>2Dogy!uWs%S4qMprfS{*lkis=Sr% zU$hQezMV)7>dM8?HhAG)c=IWT*%1aQU`s21@1Ab&&RQehy%)-gpPBB)wX(mp-wt}) zWAwT)gd@6n`|!3AsEK$x<>-r~9M#xg+wDHI+4LqI0lbipwu($t*R`9ceq$xHrQ7`e z8)I|VFfH!8Q}Yc_uW6MjS_s8U2${USKN!6$7=QB#@971W_9$C$k@*eD37}sW z36k-<(GTx`r2OkFeidbAsazHky+S$d0=|ZFS`@a^)f7JNJ?{Mv z=VSznRSf_D000(ra$#*AF?Vrz%$BI4*5xj+WI#bFt*JESS)to1y1Y|WQ9Cnco|t9*#yxi!{CwSGt1^Q@bRtXQpXa1qW^z|;1@H3 
z7hnXGBujMO8HTVB{kBbjC>lZzNQSV1J_}Q`pY2lQdjY{#nFbkE?N*2Qu`BeP#mJgJ$xbP-6i4yvMu-wj|DF~ zh$8R_N{sX^3yDpa)J9zVw(_N>OqeAHOOIm>toO<6N*7%8b=6l(EYxw!p$WF$QNwI5$QE z-9zc7%h@>sN5LpZn4bC$wJi=NMJq5)Xx)^uu^Im&P6H#|=#w zbY;F5?!_0QtLbbkk9wgWHNQ3|Gjli}PbVvp6cbZ%bY;Tv;2!DE^FF?SrA7yVF*^StOSD|uD4JYQC99jh{Ql5Ak@p_~v4WEwcwXR_ zIAef90>)6$1v>>!wk6H}%&y~j?GLy9lb^=en9XtKQN=O<1WA z4eX{N(aO1}Gn?{=szGg(P{l}}JIe7NLK*Ke$Kc(j-ku^$s{hgs)D3r+y zrz;zh;NsMjF|%VhI6*3^koVD4B*YskPP_`IBzp*uPd4IEmPj-B(BI#JXP?!Akf1gr zM+9c&_zI}0gmd&wJ%n$8NqUy25J@t%BC>+WLL%av7AO2yonn95d(Gp>_Lt;Sc3uIkEpt-cCnvb$Xi7 zTG2gDr;!e1Ieqq~ZBRK&;z?<6Ip-G?DKgs##U6{K>RB&^mF8VCM{nECts_vpS0^d) zSo>7PX6WP)13_Rj-%saR_w7_GF`~Pm6lcUw$Q4mh z)}`pQL7|hDQBZOdU5$^T`h3vuQ=>+k*t~vdZT8iVA3t=pT%J7H0aY+sI~aaD98ue) zsv6><`q%*j>h0H+3Pp+w(m;6!D40HuT-tGIpV5NA8FnLyNj!up6dL zyzCN$=|KO7L`7uFn<-KcF0;xbBiUn@b@3ibUmjoyI*tj2HWzI<&t&3Dz$p4f#aTGt zlzR!ImXmS&NmwKDF5cWwz6L*q07uHDn-JO669Ns6U%}y$HMYCHtuKbZ5M1Qzzs69M zAvIi@b1vzINLMM0CVa$omb1Fx=v&=1+ZrLAYUvozAnv|&p@!TkCM#aT;UK#rv%Nba zYtdBaRVuUC@`V32_3T`f6S)CEIW!9@S5kz|+)D1iWEW_lfb5XuDuO6y`#eSettaz= zflh#y78S6Sv2MG>^5d|-OYOY1L@gko*L9yR)AkyA@i$!}DZ#Jfl<~4+3PG1#9fFQP!6CU-mG$KVIP;=xB4K_HK@*0EqoK;#wM0~r z_yGTYb+@mYs#E|TI&wu(Yf=eg4~V$X2C6AnbclNBW6Yr65JxFyfJ;PFs2!IwhsWz~UtES5h1CJ_q(UOA3w^(JqBc9T*3i-*c z;1eKmNdj$%>u#};&|!ZARr_;e-v*5|O|{4GenpZ)bKbSCreIKPTn}IU1k{DP4b7^zl9v_Pc0000000RH*Ty1aLMiTzsU$ON8uH0iC$%)fMZi40l={jeX_0NH6>lluGxnXE-17sDG0yI^O;tWuNcHXl+<7vLe z4b>Ic#W>R8z2k|%UctuV_3+w)g2JW;sOogffaA6tqN(DXgY{B3^ zGT%qRYtT#Uo@h=rxqAB3-~aaP^2w_!`sDKR#ZS*)%}I0r9(hD&7dlxxNT0sAdP$$Z zctx-N_3WjhJ!^ULx@wj+c_N+s$G9Y$n3JR`$`*O>r-c9-CrQOPpOf~FNKq`}44qPN zm!Ezx_+J6i6g$SOb(TfBivp{`G7`MLYgc>RFW0KKsOID!-Jh4b+5R)FHJ~qR5O5+_`gy{LtS2yk;et&B?Poty7TT-z`iK1rwdakT_33h>DS|h$%RM z{fUe0^b4uVoTQ4&M9HLI4E{stE=<_WgZ|T&7j0oIXfB=pe4OoTFF;im%MFF_hE;GZ zbek@#g0m&JmISvvs&;IwIuH$7)GOSh@+oF|&qg5hX}Qg4FBa!Oup zz`Ec+LDX8%&4z*b@EsW47KG$Q19r@XVRbfjlV%w}oxWni1h^-pE?TxN5dj}aE0*m* z{W&;8LKZtP%5Q`{kv^6a3*?0$uTxlL6JJZ|0cS{20ji|eX~x!!>+9BqxHW5BvD=m! z&sa?+)`d!2s>r@++vtfd zz6s}Z&25hZdpc<^bo}v0A_f~pi$5WrkwLXM;>#*6YYG^j73+o_+2hhi2F9DLPAM1y z#1i{>u`;y>^erMnrmc}A48gV%8%wS80(hVf{Xv3@+a;VJ+}U7SqLHBw!-lA|EtsaW2>~C=of+H15u2Jcr9| zfR#=p3(25>uukXCpo>8nm~heC|No3hLxj$Z&U zpdUL2Os#qgniQ+%$25qK}tC}VIScFomGb6PUrATpf zzMP5n#Y;Q1oRedT*!qPgE}cbFuB1x_88?6kg>0vOM6akOGnl>6IKLW?gxQrKR zme#xPR3Y^-;RRYgx(lw_0mP{dtn!ZPg`u5{6DNjI(z`NQHL^~+zrfD-Nr&1&*FW_J zufWVBW2aG9#%{zt^A|Z}Axl77jH>_-YOp1!D9GG~oiXtN_Z~XNic}>-6ej3)V1~CjAX&>#zD(#TH(Zn57h+!DO;gmN-(SI`U?B%UY$k+RXa zp4g{mC)CFCphMm3kgyD5?l*@IAED4rkakGf^PmG2v%#PnonM z8Uz=Q`Dh5(*R_sr}g`F0v?*+`!X&(~Y3BE;}bo^VIi2lg|Bcs;=Kt`|bVI=zO^FGi+f5 z_#Rs^4~}~5yz5J%`g?<4pBeZHZbrrl09Lf$uJf>wHY2+7S;GFhr*XkSj6kbMy z$pMVK%@gkY+oPkwya&&chN#c_>=? 
zCQl3#a;(yUCLfQJ5$MsW8XaWxd5+A3J$Xsb({SE!#tSujC{aM3HVL$@4QIR`DVt^b zzcKL)Z~Cvv=CSf#1W(73&&;)YshT^{eJ$!O(6v_eXRzSp-j@6rvBl}!?cFS<57KWkNPPWzIuT7yqrv14MkpkSv45F zru!6NuPSy!El6KMK}ME!v6DxXm}QKf7dJ33uHnJ&D|q%xiX7hlV0#3?4%R1-BqHB- z5BjpbMo+zE(L{I`3QnJLv4i*CGR%XGEV71))`6oUzeY_i^!jNjqa@{ZLgdIw$- zPAgChoD{&Op=jLjVQLqeah@jrM^=DFI4^Ef!$>uq+32aB& zVRN0LELKFdrl=*fV~XJr@-;qqT)rfW9lmc@RlGYLg@Xh2#?cVw)a1i3dtiAUyE@q9 zBe2f1?kMc@1Yu4yZ;xyaaiEWP>QZ{UdF!gN0@n|tYV%=zM`4wO5cr^5(}Dd}47K2B zK%#@L>BF8#mF~NfIzFt!f8V_xG;@(S+RJ+aVq>-dRxRZWJOF6AG{a$iK<~ZR`$)!f zw(~eL{U|G>kvJ;f3YN&atZRK3M~9lr5AV zg>hj-4jj_Y0HJ=OgE-R~+hgik4hIq`O=5np#M6Z1N!ihS2*=Tz1d%csmy z+n21-fDio+zY*ZOM>;$e|9=D!hPEHjRTi(gC#4X^@_H7Lvp4ErZ#0IZTC1H7NBv7S zqrlta{LYCwesAm3>-AoB8VBps z9Coc^tEzTWw*|#E!f49L7+|?QnmRv65hk&saD~95Uc`-w_uvjkvR79xUp;&NgTx{7 z?5ROyRfKMjW@ksvM|Z7{X3-Fj+2B9j($X;fU5)#RtKB{X61sNP+J2(>U;B#Xd9-rA z2pf^v$eGjIvxBJb9z=cbAnNmjsP7*{{p~^24-TS!co220Q5z;^Cw|%u8CC&SrP-%`VVMr!(@+V=|3JZ5~lMH|9v=mS3k_uoZsu_tD;J7%l`l zI@biYdBv+*)c(e?I-gEG$K#Xjk-Lgn$?GNuJK=D}6J#_y|D5N{|3?SC|44CzQ^O9N z;h8pkZd;MvKKt5$O|6ObmtbSMQCIQ24XAeR*%=66JqSdfxZZ4Fz5CxjcsLEN_r`rB zI4BGmWqVU;d}8B=TkcHEmfH~4JGz7|kAmR3_diMIGvzZMzW5I}*?JqHO#lD@01Rzm zZeeF-WM5-%X>P_0ABzY8000000{`t;TW{Mo6n@XIIBS8<287z~vZ7FvVoBNp0hX;; z1MHzF6k4KfArh&PR2{d-f8XIn5-HhkgJwXt)h~v~bK^PRxsa2|1TF|8OPZbWoIx&O zUgw1Z#i1nE6bQ(?tO^QiQe3N*;Pr9^d;w}jL32nbc~%$n@xcT|ohwK=Q$(-JROFSC zu;DeV^P&JoX{P1oELS-x@{crvGG{76m4Zc}1YvSP1&Tm^EAVs!waSq;XG_$H=B2d| zlqsh+LIX4Uffl^djD$fdC{YwNDPJ=q3kb__Pf#eRltAXZRz|bg(8_VQB6HNEaw?+( zD;r3qxJ(v;lyuF-bsU%0Q;8A!=>YIoOA1nDaV(WU8=r@ZR0uA5YcgLlii_>)K#dk3 z$DiKPViCb9ns?S%w~64jp!6g4S1#}*-S@4NJejyZc%_Yy&7%9Cz)70&8hx-L5*QCW z03+>`iFVQWv0=cdN(HZe`#m{3d2@1l{_FYW`Kz}v)K3rL7-j>VFNs3rC{d@X7Idim z%-4nlycVhU57$oI6Pw4agi8cq%ZS0&yofK(3^a*5oLUj(0WPz6xYEeI+*I_+%Cm+B zMjcfv!W1u|A$?&Sbxk)e@cO6COVGTy7p0??DQamKCT^CxAH!|*+6R_8jBlUzw(t5k z^}NVaSkx?4IfiZE`#*EF$^==XhvABp+D}(HrF*VHi}CnZ&XZSNJaT2dR#`=ujAEnJ zncdF3QF|08B8Ep6``sa}b^{~3I7Z^X>Kp?bLZtI~K~*C#Az3C6epYH6>rYJZgfB*p zV(frvbV*5t`k^yR|L9FFl{^X}-=5&b$X7IuLQb7kam=_%7D$tg`pMzZ`(|kSgIqg? 
zIY(LRWLfYzDZb2>o_x+|N6zQt=dA$CqWMQoP6UsFq-${Qm~J~P zowuol<$XPX5PqHL>ISC;Pp{JzHij;PFPbdZ$rFa#^Z;c*n!k%VZIT+@i`Bh@qEX_} zU$vEB2Qs?~a5sE1tjRHapefpMR6q05Q*O#RFShSxG|fv=NW}FsFQGXa!9^nLswy@y zMx7T8=tFpq8h=M#)ri`PFmA)V|5)D9Lg?C7700z&JR9jFj`!@_gZ3zz6Nand3Ae+) zpS4uV>hD4SIksmY8#H*q#z8}umO7r#4!d(Ll0^Zth=_|p%X`6Gs3Z~pu(Gu{-6dT z8RQEi2*-DZvpAwyDk2O#Xl2ZDl4Q>SEU|io-Vby^mxv82hHa+BxL@0>19vm^o!3k> zmlOdQj4==2DO^i9D>m=L>6%x1=y|?}Y%p$<~2ncjTfT}&h#jF z>r;+gc#q+i=?$69QoNpIxSlhjjk4?W={(Yxm!ug{&FXCYe!Gi}1(ao2GxcAWnpRf~ zgHqBd9Bz7go6S4a6svMiNqJ-`da37Gw2#*#^8^f|<;;)mMXxyyRv3|ZLFm_jyUXdD z{dGodIj|Ruu`hf8N59c0!|hH>Akr(S-ki{4w>K#GtYHvwaE?)N5X4CK+ zQ-=CgzTa!yBqPnGI`OV*zNzB~rv8C?mBgd$xu9X#N;*L6R;mz}!ToHP;>b^x2!5V= zI`;GRK^LJvcRJ0r4quuNemFix)hOXE0tTqpQZT0Qq~D?Kl-KrB`xyjnsMc0z6K=^% ziXkvu0M)dfGS3TGG5DZHe{$JNOW7`01-1`~;;O-ti=#D=8?^ zKbg6l@5J5VblV*G#a?hI3-E!Hb@DYB38LN!YyI^k92f4b1wVX8Zz{ai;XB`ga5BxqhvTNW_cY|b&`*gaugML)SuQxwH=*b zWmN?K8)f5hTBh*QxGb)t`YMg`q|V-_YLsetm0qV&HJeV0vX0VxT$DJ?`z-n5YnT_m z;WK?zMaitbir%HS+xU#0(^S{RXf}cOCX?c(iiTwhljN|%XgorYPs@s?#J7uC9Ss2@m-vjQGEw zsK8iB1gj`!fD!rYEQiT*fbi9J^usO8HcsHfUX<0*6%3pg5)=ePM_-reb@3i3AjyHmGFX5HPIyCr1R%!V zu@$qJJQw26BW zvL~U2Xa&|TaH?|^kz#%2m0%okH@I>Va}r4)YA3!gq*Utx!ZKva(a05BQ3w8A6=ijm zO_40}q6P_GRZN^$Gy}-y1fJ21$@>KO4Ihf(yCi_FlXrknkWP${2&gHE0&{hyBxcMF zKW|4bK`bQi@c{}>VhfTGJvOZlzhqK_Y$V#8+=igZxCLb86{lCCJ=~6f4?Tm42%G@m zS1^LMBrY9ba3DmGii`|ql{kvy%)Tjk;ngw%v-v z180kGh`5sJv;^X2Wd1+tur|j8=7Lp~###_E{IgD1ec0*xNiE~^Jt1t=1T|#QG%seC zSA3*oQuT-v6+XC#cE(e%5@*j5XP$HYEV-SgN)o$k!UIFnxxgJBniBx^=$4~TlA{ou zAx6UCm>8l-Z6eTDCP{(eW}KB3sw3&}otQx>MfFuu2dBgn`?9F(lo1t&r&2%|^%r!y z3yPEH9%n*F?vQk2NVvHw#QR)Cz|6#8z)K9L-}-UN2m-0pM(Q}rQAjYX%GY$!%V&() z!*rx`Uq?GckBeHu2LWaujsU@QB4iJ8JyJdjkQr!o?0AmghzDgOpIu)t1wfbZ*QoSy zvZoC)7Q*wU7LU^pQxJ1_n7uEUqM2J#iGh7sT&D!7(3C7Lbo&dT%%f@)kIUpby(!9f2M1!7Sj_Ur7ZLnHTsZIY zR}K#TI7=tvUUWt}&0#W4F0x5h-}dkzj6tQ$YmmhGXo6s1l3FOdG1mq@K5dwWei#;6 zesDml+#&q6;eC<=aW-mtg{^`qVb(vJK5L$sb_&oK;Qzni8&0pYn!%uHkEqvxiFZ9c zILM2exB^PdN0t3SRgVr1DnfL8olG^r<4;G)^rQES8V_vw@;UxXdy!tb7*_CJ5ntSb z46&Z2Wm%NgZzwlmDObsUy!Yf=>tQiPrjiq$&_C@N1?)Ki#KqWtp)P@J4-U@g|E-7b zLFC|~=vSetSD+c-E5ub&8FOr_(F_N4`AJw6+uEQ^Sr%g^?+ZAP05xM!ck7BtCwA@% zrX@iKhooG9J$f3t7DfsO4x zmGhiQHC(frak|8#k830M4-Ze?9G}O>C+G3en^)&A&tCo{hKGIcsTulwTz=-0-U4$J zQ_bY3356nBj05I?Ha$*jYzgMvS96|wR0R%3Q~<80^{p}7>;A8=PWsQ|!~QXj{KFs~ z9KAmORSf=(8u`dMDrQy9EHj|a3Fzp6+?>;%vt;pQ1HY%PuHZb=2wVsQ zV+vv$jYoytiG?tFd44u{^`o41Kg_n3#QH^=f<2Ps$xWsX38&$#BnM8O)+qO*+td_4 z_RNgV2gm&%UJahdzYfmj8on`iuFwVcX+q{dxpok_K;1Kd@2JSX;w)%V*PEjsVEZRO z#z%wm{`3BMKmNzy*R!~P{5%HGzX9YvKRM>yCg*=*&adQ&oS&a7F!5{(XD^-XH7Eub zWb<@zdELOFqAx_w_^mY~kadc>GT-`CNFO}^bJ#Q3#?i}K(%S`q98vPyF6y`+z-N-? 
zBRVxSmVF%UzV%Z&koVyDa1b+S$wmE#NBjxHrr^#kLIV1igx3qy{7Eu(N$@N_KLJKM zI{6vq8T`V~h9VUq*Vm!Di`b!fG}T0f&P-OeZ6w+M^g&DX2Y2S`sv@%q#pq@p*zNM&~D3j89F7%`5WK&?7{LzsSIQ?1?Q3FZtA>4 zh%nD)S4{~+#cGgP3__$~%nODE4=wq9Sh;j!*>3n+IwrUS)nmtQ>r?f)S4X-(Oivn| zHQ+2rYqr%26&hFyVjYa>=weoL2jNWu;_8Ahi*Tk$NJK`M26BDq17;R|hg2nL9CRI=nYr1Q<(62-CT6CRWRL_MZ3hd!NjL{*4)kWJvBB|n@N{jir~1VB10 zw(uD281V5saL1*_VyUqhDP|+N8GxxS^!acH&YB)aow%+g&kaQNZuR>e7NxpWt7?gn>7Y# z9dP%;ceeA%r?V2EPENs(~e`Rg{l;(m-5mKDO`rqY;Z6(vKSAjd-B<(^GoG|24yrIwIu>=^g0}y?rLn z(^LMkJ*p>Exml#rwF+M6Cwr?jzSosPe@07}@ zL32X5%+ib;iOn(H60J~X`*ygp3sQ@0UcYu~`w)i36#Vid&>?y}UqPYkBcEVA9MkIQIvH&DTcHjKmn8R)@VHTxX%Z{$Zyq)6d5y*e|4aLw!00w1T7g1 zaM0lp<7f`@F=WQu&qr`lZ(nXl2njfqAoAH0rn}5gVu!gt{N8qS1XS0@^;;9rp7@}k z+oq)s+8&!ZqpUCn7{T~B9(mV|iT2qAi@VD$^GGs}$Q<{3-M;$oQThb72XCcXj*31$u=m!~S$!zcV>pVUbwR?oW!+WH%v$Vh(w0>j82XY4(((fdD0P(46Z?1(bs-yjuRPadt7y zP~rr?ZpH0U8aV)rg_bB1WTi6HxJ<|7oxf5(!BT6j=i(C+ESQ~Tth3ReZd&tX6t{D2 zV|LQ2)M6!eTADrIir<)T*hARbXl6`pEeAdmu+@%4iUYUED&@Sk0t?0-rQ2a1IW4>? zz7D_%nsG}Z#S;|AY`+BaiA)03;_0{#z{Bv048v4T*htMDn=9x=X@FESM389JLam2P z!aKyXAjHhn1-_s8KAeUv1=h6?QOD9np*#*ISY!&uavc0sWtq_}AfWk~KFjT^*~y)T zcG-dD4p;_G(sv$^-##DO9I@TnS*%O&g+5YS4E%|H&fstr*+x~voAieb_m|(^!&h&$ z;5G;ae+*g!9D^3WI|DPy2OT?sPtr_x&L&|N5<`^}omPMbw?J1f?1FVot%il99Iq5Zd! z#9gmG)do(Jw05SJ`R|y(yC58&D#eJrl8@95JzI#rk9MN1E$8R&!fV?0MXh_rB6Vn+4I$Plt>0$exy_aqqX2MG`zB~aR_3#&UDr&_& zzPypP6ylq+dEjUK>2GhJHqzQkB!&ZKFoONc_S`H8a|~}Xg=bjbduk-`ID?y}lL7Vm zUhHZxrFam*=ip#EtFGdUWcUtA%n$0`Tw=6nFzaei!aoS5%-r}#^;h4VEhV9d{R#Jq zomnH!NdSe&&9~^8VC7LG+lWSa-uQISu8|^mud)zFo=;y&qAWP-r^r=7 zi^b8Ux_{SA7(|pV0U8VAP|JyDh^qN5OQb1!FaoL`wpz3YBY92~9=q_V+$x8-I6Hvt|wb>m-6t29i0 zp`4g%Cyr|&S}~CZ^X_js2369Jk4CnmzFvi_$HovD~WK(YyJ)9i8`V&#aDHL#fb- znAwEt>;Csz;8!_JL+pnaJ$%XASlSu@kG(N@WrT@OWhR+QQA zTAI~O$}W4#Qi^h^@{Cc3hm)TpJ}NMBZespI$PLbbDSXZOfsDr;3+<1D8$V-lh{e)z6V{q!LZ41kTDy0>F(1~)`$;1Ls_j%xdHJ&F}CVs{NL z73bgvRA1tSI$yH7txO#7N{9TTWYSz}xfjqou&Su8Y-BZ6`opE)s=`zjScQq~mX@HJ zv$W-k>O7*VzfSz#osdEVF!h8^ zQmlZ)BqQ!w;l@FSSzt>WRWJZm4#f*%yyO?tMeCJEn%bA<~NRU`;7Q$_FwcN3meyW1CQ&yU=~6dgWe-$Wh(?{SW8bpbV#6$|o!yIeZ*yC`y^Lk#H~D_F4nLZdZa%B>0<`#Q^=R zc%m#pyNhd9hE2<|p^Y{VC!V>y;DB#stL9$#h7vd2Q%3%x#3?3o1oXy^9iIv0%{nma z=|?6TWErJpt66Jtr(Irwp}a~Tbne0#@(Mc!{nG`*KYLWEke#{A!jtW6#A!fr38x^r zQ(q7nmke|K!t%T-XJdI99n@}6(@10;OZ0Fs#K-6 z0^TpvQEOg}QGMb!jAOMzP>8jgChAuYFF$k!*0<+ZNA))vIm~1%{j)V$0_{1&7XarK z)mZKiS-y$6;`wP#K44kgD^&*HufFf!hOM1WH^ATlVs<5*n}l&^1nl((w62nPFZ3@` zI>ss6o7=pgih12@Ff&CS#w?e8s5;5JP#z5>%yaBgmJo(fHChs@%$Lj)5Dc{L+*zIGqHtu z)puvs{vM9LAF0)onO04W*RU%FoR--LbXOj#NA+}P+ZbU^J>4A(EZ!QiyOtMJW90@u zcQhxv_NCpB$F7~}(vLZ$zoMoys#{ib-oW#VZ90=QSKOR8VM85GXQn_g&UsT9B|4o! 
zRh`ax5MYno zU4xIX*`Lvi#^5AO9djd;ICrMX4KI)(ccvH!c4sux@5OQTt4*k}(&Oq?#b(BH{o>b*$QGE-}ac~Nbf@PRln~+E31uoJ7!CYjOt=Ych#4LeObg%OF2K0Rx51(3F7j>u;%~zFGJaKz-2DCl{OyX)y%au;d_k0)x zM25A@C69!&n`Ek7w?KjX4ni@(lA&pjB3xJ|n_4>{LZnMW>q%?@y@Em8OAl4NtcdB$ zEcfJ-iHEQj`Se~SJ4c6F?AylYu-><;;6}yYkSAcYRZ&tcjg#OBY~aRv>t1lAR_*^C z!MI+W(BcsndUeP}u=2^(rnz$NfofRo1mh~27BlJ>t6%4-!J~C940Zl6wTKlpmVl~W*W!QmEB51M1P^Fp-*=Kz$fEBuPA>^$@ z{)h7#X+j!P^t^AFCCBHl&K>2WjQ|a5rcyPx7>3hgbiooB0Tn=8tsougigj5|BT)wo zl#Z-s734b7BP&WpHRt_#>oSLObX(MG4%Wk8DEY`fWJK9aOaSVI`RjH6*Hb(U&<0C)?U-gPORIIFl%k7wHlC$vvQQLpTon*!Wa@5Wa`+bTvX zFlyK&0wX=&ySx>paY?BmzlV2;^c+uSyU30&VWcQMKU!@Yb3hAk0I83mUn3kDIUF5X zJS_-+UnRYdExB2gY^lvwpjW!_AiIuonA(j>(?H z4Pv+33O-t97uE0mz25xvMIi@Y^v_;wgx^|)-#l67mw>v+WVrnImw6KKp|Rm=J0tFSiqcjQds~DEU)lkLcyO57yX2lG+0qYtv)C( zfh@{x@Q&M@3RIi30QCI8a~HoZ*!8T=XA9`G|6VfJYc=!KaA7MiUp&$4O9G!~Jcdz5 zox%BO|M;wbc>eO_m~IyQc=G1>`BDWQ5onz`M(t9C9&yMy#U5Em0qMaUR#~> zN@KNAqItd@^U24C01Qu+^0x>Xqhg46^r{)vf5(Nh{18DPwgC*_HuFc`r~>+^Uv7P42s97Q(q@NyyfP zSh_X=*?=|5hm;=10u({#14#|bv+s{(^knQ=ggqCeyg7}aPm(H{!TO{^GX3>{o$i<89=eI6y1thJ7o`v_Ar!IL>k z#|cbqxH5g6C`7)7KCQZ(B24s0!y;oAx{^+8Y3b;@{BBhR`h$4wu-B3x8Cvi^;g;QL8*=Kz~KrKy~|6HEdMaB4R{0#D;a~j;sXSZ*Sdw!T}Yh$?ytjoQn+*&;O#o{LABjVsTS& z3ejKuZYB24qi3EJmY}ofwIsFL2TIyt0&*vyV|hVTEda}Beysgb=`S&+Ipf@g(};|O9F^1#=)DU+FeWk{puDSu-09L zK?HE)#@pk(b}umR_ji#s)AsJ+82w|GYP{MtMAZ}v{hJVSnj}+Drs^B?cxc7>s3eDl z@a<;lvZd9Mi2f!2fzNDF6TXz^o()bVipUsxk?VPM9Gu1HC-Ldv=;Y@?d_4Fi=0Cze zPha(4cg+xyb5e2`LU>ODmWb5hxp1p>VY((TqB%+3_t4-ob$z zjXXlNF)gLR(g}T+0h`BmzOV@|6mM&?vV)ekoZTt1>n_Bsu?tqVC=W9lA$u#B%)J_@ z5N@-Bi}q6b8Lyl1Y}2$C+fwnZlq87S;QVQL7%IHr)y%_e?K0rMtSWLo7Ps|eQjKjh z-=0|N&*r|J(s;!9PrW(_O&8KLTs_(Qnj}FJZoS~kVqB_3#GRz}78n4$LwL4glI2J1 z4Is!9Our)0LOg(bX9_!&3#@144de^ocJ#aeLGt&#YEPBCl6!GeZA zelZ7%AH$rCqFc3(W2d;o%v`zgG~;rTDca~1-o%QA_dT2<<^@M-2`Farzt!9=6ET{W z^Sxa}z44@NsxD=4{ga=39Ax^)nG={(8w2%=xwzr*@nD#8~72xlLg%RmVUstqzU@7 z<8hf>r#D6U4mD%^!HypAe?MJdAiP=$8@-;+{iv!(pfQ!jb$l_b+PzKah||&DlP9}> zZ}@~>uBd{`!NIH^|Lv*yNVITy8UPU(QB@uBj5x!~E;Yd2UbNSX_TN5j8ewpHdU6_{ z|N44h4RZ125gf*SFiGn#8%OB>aB_Mczd8XI=F5{~XOsj-h$p<)@s}+Yb9nN6Kv(wQ z``w)o0Mum~nTF8m+PN*7lnFi{J~I{xlLJCJ=Ei)^$+*6qru=OMNbj4NX>_1%P`k&g zGn99mRa(!c9oy9?DVj%X+#b09x$|LH6DF!F5IgZ={zE*TB$t+rXb&!nOshdKVCc3N z;?FzXx2oHuwg$<2K>YAM-EzK4kar-;2ImQyCkO#xi;lepy;qlPiOs90WWT5K+m5^V zT~ll^C^I!a-UYxRr1tiOEwU$m`9}P*FMfH_ee1v1MQ8)d9Ss>Y;7$KkyzA5g-*bN1 zSK-_?9w|;0zKe;wp{t4UuJ`@D;QM>t_xFSE@4F4vjIb_W+8p;7j6A=0Szx?S^V4|O zDwMv*uR7{mdP^cxbxGAISXTiKMOd;-jYJGh3GkHYk;`zSSX_>~+i<_%TlRac2nXM? 
z_e!`(*cL;2v=PJ&dZ~a(^$9#zN>tIg-<-~LO?gwuvAqSq*jxCEeeDZC_Kt;RvQeY` zXjG(C{#8u{RanJZZ2g+m%(h^45-L^(lRJ zjcR3@TQRF27vsl8ITF1LbbLO7FX9W8(&h(M06x2)2tHoA-!p58i--xONTf;?yKr)1alGM}e}Mgx=RXH@XZ{dr$fb0;ja z;?U2os@29b5%#;}NZ8Y^d-F&5_7;rG`$x|zL2Hh?<B&`_vCxW2KE($b)dYC&HK` zV_k`6@g`QkA*rUW1z-lYcG9-Wxz`W*h6A+cniT7T8{d3D3za%(qkgIDHmu)}HE7%x4ag#y^xzgueDT@O9BkQ0CT4YVzG3ZNWB%bWQ!=w{Qi2;SY< z+4-Rt3Eu|%hCK{h6WQ-+S@_(DG3#+ovxaW9FGoKXJv8MYn<8eX0jZw@DC?q-4Eq4U zxwjZNmy>esfWEst&_^>gMpP<91E4>Nbew|@kc7g{)&|Sd8gDyg_*F%}^|dkhpPZlhm6{mb zzt}IA2go^ZV}HAc&fvCI^Tiw<-VO1vXXf4)3$bM*bCaudsUA5#nrnf6M|r|3sl^ml z7mJoQu@QZfIAGDsWp&xka8{Oa^{yAuU#9Q6;zpd92xB4aG7B}C?(CV4J1VpBE}rG8 zva&M(+c|!Y0R-*U&X$j+)==8Vu{kV$gH3=K=TYwWmbtK?3M@Nyh-fH; zQ46W{ip7N4z>~IW%p|3T6*T`%NFX71n6!CoLANX*4*$%?>>Y;d zkSk}Hha`!gUJi&Z$$tXfW>fe^fHm&Tk}x0@v0nqGx5tC$jLFXqUkr{0?u2a^f?pxD zc%aubn96}s-|OJ)t|Su|vK5^K@SyKP+&ci@g$-`M4)`a*h- zn*f9!fY`m40K}tCKrZ{^T)QUIHDK9FJoMaJ-CK1rdk&}es|elc-p0}pwzm_sqt1SJ zdp(=kUv)G43Lr}dgY6EkpM4DY(}sY94Q}zzTon6GK}z$z_!7>Q-7kR%)IayW?E245 zk%+DKyZ82Ou%fSMyr{Pk6{~L%vDPo>i<<`akb;wT=6o;8zuMpTqAddoG~SlB&9w}? zXAQi!SnkA~1oAjJEgcixitZzbcV{fuw1SmH>ss-=4|smEJf0s`!1m$cdhgR{r|M4m z1BnMI#j*`fKc|%cb*rt$c%f@9>&)ibwez`zt~tVNKPvTrmBRQ(s9b8bBA7_P?8Urz zTuir({(SZlta~+Y*RpCI*DWS(xCpWOa(YqoJ&RAn{Km!f&8?g@TDy(d{nYmDCl?gH zefOefAws1O0%ByI2r=pSPNYbyE$te~_HEXr557jR;s_EO$fBkTI`T#ZUQi*& znX|uh7aQ3RY=jz4kIb|WNh#ps1}7z6cHCYHFZ&HhU)xatoBQU|XBsV`{d35H-}f^~ z7u=j4pVB*echY~=^Jx*5ZlQiXBxdA{NW?TwZ>TifHN-~g_RUq2Yh_9BLM|}uWVB)1 z`(WF>2d4C?_7QvXR4wpsYMFjjMOX=Zl8$SwUEu^1ERg2t0;?)BCFu?|xxMWb6r?-m z)5eURaG$~r&;0w8FIcrOf!jc&&1HV~tkti7i>4QF%$i4-FPzY(-(A-~r}k+M?A_4b z&NM5uMtNF~Jfl{dZxQYSHh%l2HOimt z{KxbC^`Km?q7AyWMgPf3K*C|Z=wJ2Dy0;KbT5VXep+e6h^?BANgb&v#gmEt$3A_Tj zAD}E|j7{0L8?n41T{H^l1w_GG!|9$9O&fD@V=ns5#T5#Ag+JY&_vyNf)b_U3zTa3~ z*0$PReXKn(*52x4?TfKKQ9`-wU!jXdeum@)o?AI~X)Z;{_?6=OtEJ{``Z(n>+rMRN_vM~fAKT#EtrEvVt&E~bWi2$Qj)tm$rE@B1vwu{+eH+ZLR|;)8mx z@JbdZrF@HQ6zy&=`@&G}eUfP_^5oEvrvuS1o9MSVqyErwlt#-S{^){*A?0(Et5zn=kj4 z{_^4YMINowG4g1go{>j);2LRnl)7l;2PqTRn}hW8J4n~XPmk7qAbt|t(?hWv)AO@3 z2o@pi&(ZP>QP{@tY(Zxp28-39 z4-O!pRVP_qb+*{yA74(2i)8YhMa18`9{Fv(1BvVAU$DQ>e#`zs_aGuzpO=kz*ocRX zcvzcwSV!|Dh1F2=q5*(}dy8Jx)KmIv&jrpnPcOgBz?esQ6 zW20>>==i@w_kH`Gk@qD6TJ)ye5B8?~oE%S08`-#^=4H&^vs=EA3L9Kt_fr<3zh=tB z7E~&0FjhTkH&qcfC)=H#Y-^PMHhC)cvz};dXgyKvNDmgd1-H8w;z_$uC_e#)SLt;U zgJxe&l4(mmvo{iaqWF$&Cdx21TlnQc;r(gvdnr#0ieSD0_hU<1gzZkqwU~W7O=A{ z$&O00uvsw0DJ{yq6cChry!tik+c3qx}jTF}zY+HlK6fO0!B_cJK%i#y=vouRt zipc6ecrAc1oR(?D60#Z|$$vfXif2Z~BjpC2Qt@| zyG<%wt#A{G59oSWa4GT6eB10#L5kb*Ez;yZvI_sTo56ceQc2xocc%~$F zi-70^vcwMf$XICIMV_?InG=hhEz>VW)m8~dzT7X6jKHh*^69HL-L4uUW`Sg^b*pq= ztw`ywXOK9nw4@8vzUo#B5GrY#!ru>_5RVc6;jHI$Zf*ncmHI_luzja{K>yf!Xsc{L zB31?|OBM+v3rJIEIxXE|NPOJ-G6hDCT0JTifjDF& z>QjpCIR0vb5Q<@sZ}-sS-6?ag{2-oM6XTRiPnBnQHJ0pN*&LX)KQ0i+>>plx-01<=zT%OH58B*rPNe1=o9Svs46Gj zkhaWtWnFa_wT9C;w`A6jT!BqC>Tejyh5B431x=Ko=6KYn(^t+dF$yUeL~coO!s%Xj zpmzn{buHx2SSCQ7-AEqDhfV;=i?J6#b}mr_O$0fKEfgy=^Z4;2VA!e5aVHB&Po9F^ zl;SZ3vm>zzXA~PbDbjV=HeTMHk~nO3ksvsV(r=i-1r6;sSlr${OuWmhDzM}3<mX(;Ldubz%_@Uo%oL}7d7R=SkTKI5?r9KTOGRSoYVN754jrv!`@?gm&$KozG< zyZ9)9Y;uE;3|9Y(aDjiY=fLe1W-ou?+C#G7w`%_6p&s*yU>cbo3TFvj zB!!x};JsI)FbA(?o{9rTpV!?iTKCj<1M!T2vc|972>KaPWm zXHi+Gle|oCt=k!(X1$B>vbs1YyNJKd`}zWQ5L=hZ5tzr+^#GR}eZWo6anLIfUmTF@aL4jE$g8q`8t^AJH9;$k1%lNM zO9PFz-V=-!8ifzEJjWPqa$EG_?U)7mBh`Sy1e>ogGSQ$$MkBS+Z^X162EaMc{4g@! 
[GIT binary patch data (base85) omitted]
+    const MODULE_EVENT: u64 = 26;
+

 Whether multisig accounts (different from accounts with multi-ed25519 auth keys) are enabled.

@@ -345,6 +361,17 @@ Lifetime: transient

+Fix the native formatter for signer.
+Lifetime: transient
+
+    const SIGNER_NATIVE_FORMAT_FIX: u64 = 25;
+

 Whether struct constructors are enabled

@@ -1186,6 +1213,98 @@ Lifetime: transient

+## Function `get_signer_native_format_fix_feature`
+
+    public fun get_signer_native_format_fix_feature(): u64
+
+Implementation
+
+    public fun get_signer_native_format_fix_feature(): u64 { SIGNER_NATIVE_FORMAT_FIX }
+
+## Function `signer_native_format_fix_enabled`
+
+    public fun signer_native_format_fix_enabled(): bool
+
+Implementation
+
+    public fun signer_native_format_fix_enabled(): bool acquires Features {
+        is_enabled(SIGNER_NATIVE_FORMAT_FIX)
+    }
+
+## Function `get_module_event_feature`
+
+    public fun get_module_event_feature(): u64
+
+Implementation
+
+    public fun get_module_event_feature(): u64 { MODULE_EVENT }
+
+## Function `module_event_enabled`
+
+    public fun module_event_enabled(): bool
+
+Implementation
+
+    public fun module_event_enabled(): bool acquires Features {
+        is_enabled(MODULE_EVENT)
+    }

@@ -1208,7 +1327,7 @@ Function to enable and disable features. Can only be called by a signer of @std.
     acquires Features {
         assert!(signer::address_of(framework) == @std, error::permission_denied(EFRAMEWORK_SIGNER_NEEDED));
         if (!exists<Features>(@std)) {
-            move_to<Features>(framework, Features{features: vector[]})
+            move_to<Features>(framework, Features { features: vector[] })
         };
         let features = &mut borrow_global_mut<Features>(@std).features;
         vector::for_each_ref(&enable, |feature| {

@@ -1242,7 +1361,7 @@ Check whether the feature is enabled.
     fun is_enabled(feature: u64): bool acquires Features {
         exists<Features>(@std) &&
-    contains(&borrow_global<Features>(@std).features, feature)
+        contains(&borrow_global<Features>(@std).features, feature)
     }

@@ -1467,6 +1586,17 @@ Helper to check whether a feature flag is enabled.

+    fun spec_module_event_enabled(): bool {
+        spec_is_enabled(MODULE_EVENT)
+    }
+
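The generated documentation above reduces to one runtime pattern: every feature is a numeric id recorded in the on-chain `Features` state, and new behaviour is gated behind `is_enabled`. A minimal, self-contained Rust sketch of that gating pattern follows; the types are toy stand-ins (not the real `std::features` module or `aptos-types` API), and the `signer(@..)` branch mirrors the gated formatter change in the `string_utils.rs` hunk later in this patch.

```rust
use std::collections::BTreeSet;

// Feature id taken from the patch; the container below is a simplified stand-in
// for the on-chain `Features` resource.
const SIGNER_NATIVE_FORMAT_FIX: u64 = 25;

#[derive(Default)]
struct Features {
    enabled: BTreeSet<u64>,
}

impl Features {
    fn enable(&mut self, feature: u64) {
        self.enabled.insert(feature);
    }

    fn is_enabled(&self, feature: u64) -> bool {
        self.enabled.contains(&feature)
    }
}

// Mirrors the gated formatter fix: old output is "signer(0x..)", the fixed
// output is "signer(@0x..)".
fn format_signer(features: &Features, addr: &str) -> String {
    if features.is_enabled(SIGNER_NATIVE_FORMAT_FIX) {
        format!("signer(@{})", addr)
    } else {
        format!("signer({})", addr)
    }
}

fn main() {
    let mut features = Features::default();
    assert_eq!(format_signer(&features, "0x1"), "signer(0x1)");
    features.enable(SIGNER_NATIVE_FORMAT_FIX);
    assert_eq!(format_signer(&features, "0x1"), "signer(@0x1)");
}
```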
    + + + ### Function `set` diff --git a/aptos-move/framework/move-stdlib/sources/configs/features.move b/aptos-move/framework/move-stdlib/sources/configs/features.move index a20a1c3efb57c..6bca6d4815a22 100644 --- a/aptos-move/framework/move-stdlib/sources/configs/features.move +++ b/aptos-move/framework/move-stdlib/sources/configs/features.move @@ -34,6 +34,7 @@ module std::features { /// available. This is needed because of introduction of a new native function. /// Lifetime: transient const CODE_DEPENDENCY_CHECK: u64 = 1; + public fun code_dependency_check_enabled(): bool acquires Features { is_enabled(CODE_DEPENDENCY_CHECK) } @@ -42,6 +43,7 @@ module std::features { /// private functions. /// Lifetime: permanent const TREAT_FRIEND_AS_PRIVATE: u64 = 2; + public fun treat_friend_as_private(): bool acquires Features { is_enabled(TREAT_FRIEND_AS_PRIVATE) } @@ -50,9 +52,7 @@ module std::features { /// This is needed because of the introduction of new native functions. /// Lifetime: transient const SHA_512_AND_RIPEMD_160_NATIVES: u64 = 3; - public fun get_sha_512_and_ripemd_160_feature(): u64 { SHA_512_AND_RIPEMD_160_NATIVES } - public fun sha_512_and_ripemd_160_enabled(): bool acquires Features { is_enabled(SHA_512_AND_RIPEMD_160_NATIVES) } @@ -61,9 +61,7 @@ module std::features { /// This is needed because of the introduction of a new native function. /// Lifetime: transient const APTOS_STD_CHAIN_ID_NATIVES: u64 = 4; - public fun get_aptos_stdlib_chain_id_feature(): u64 { APTOS_STD_CHAIN_ID_NATIVES } - public fun aptos_stdlib_chain_id_enabled(): bool acquires Features { is_enabled(APTOS_STD_CHAIN_ID_NATIVES) } @@ -71,9 +69,7 @@ module std::features { /// Whether to allow the use of binary format version v6. /// Lifetime: transient const VM_BINARY_FORMAT_V6: u64 = 5; - public fun get_vm_binary_format_v6(): u64 { VM_BINARY_FORMAT_V6 } - public fun allow_vm_binary_format_v6(): bool acquires Features { is_enabled(VM_BINARY_FORMAT_V6) } @@ -81,9 +77,7 @@ module std::features { /// Whether gas fees are collected and distributed to the block proposers. /// Lifetime: transient const COLLECT_AND_DISTRIBUTE_GAS_FEES: u64 = 6; - public fun get_collect_and_distribute_gas_fees_feature(): u64 { COLLECT_AND_DISTRIBUTE_GAS_FEES } - public fun collect_and_distribute_gas_fees(): bool acquires Features { is_enabled(COLLECT_AND_DISTRIBUTE_GAS_FEES) } @@ -92,9 +86,7 @@ module std::features { /// This is needed because of the introduction of a new native function. /// Lifetime: transient const MULTI_ED25519_PK_VALIDATE_V2_NATIVES: u64 = 7; - public fun multi_ed25519_pk_validate_v2_feature(): u64 { MULTI_ED25519_PK_VALIDATE_V2_NATIVES } - public fun multi_ed25519_pk_validate_v2_enabled(): bool acquires Features { is_enabled(MULTI_ED25519_PK_VALIDATE_V2_NATIVES) } @@ -103,9 +95,7 @@ module std::features { /// This is needed because of the introduction of new native function(s). /// Lifetime: transient const BLAKE2B_256_NATIVE: u64 = 8; - public fun get_blake2b_256_feature(): u64 { BLAKE2B_256_NATIVE } - public fun blake2b_256_enabled(): bool acquires Features { is_enabled(BLAKE2B_256_NATIVE) } @@ -113,18 +103,14 @@ module std::features { /// Whether resource groups are enabled. /// This is needed because of new attributes for structs and a change in storage representation. 
const RESOURCE_GROUPS: u64 = 9; - public fun get_resource_groups_feature(): u64 { RESOURCE_GROUPS } - public fun resource_groups_enabled(): bool acquires Features { is_enabled(RESOURCE_GROUPS) } /// Whether multisig accounts (different from accounts with multi-ed25519 auth keys) are enabled. const MULTISIG_ACCOUNTS: u64 = 10; - public fun get_multisig_accounts_feature(): u64 { MULTISIG_ACCOUNTS } - public fun multisig_accounts_enabled(): bool acquires Features { is_enabled(MULTISIG_ACCOUNTS) } @@ -132,9 +118,7 @@ module std::features { /// Whether delegation pools are enabled. /// Lifetime: transient const DELEGATION_POOLS: u64 = 11; - public fun get_delegation_pools_feature(): u64 { DELEGATION_POOLS } - public fun delegation_pools_enabled(): bool acquires Features { is_enabled(DELEGATION_POOLS) } @@ -143,7 +127,9 @@ module std::features { /// /// Lifetime: transient const CRYPTOGRAPHY_ALGEBRA_NATIVES: u64 = 12; + public fun get_cryptography_algebra_natives_feature(): u64 { CRYPTOGRAPHY_ALGEBRA_NATIVES } + public fun cryptography_algebra_enabled(): bool acquires Features { is_enabled(CRYPTOGRAPHY_ALGEBRA_NATIVES) } @@ -152,7 +138,9 @@ module std::features { /// /// Lifetime: transient const BLS12_381_STRUCTURES: u64 = 13; + public fun get_bls12_381_strutures_feature(): u64 { BLS12_381_STRUCTURES } + public fun bls12_381_structures_enabled(): bool acquires Features { is_enabled(BLS12_381_STRUCTURES) } @@ -169,7 +157,9 @@ module std::features { /// Whether reward rate decreases periodically. /// Lifetime: transient const PERIODICAL_REWARD_RATE_DECREASE: u64 = 16; + public fun get_periodical_reward_rate_decrease_feature(): u64 { PERIODICAL_REWARD_RATE_DECREASE } + public fun periodical_reward_rate_decrease_enabled(): bool acquires Features { is_enabled(PERIODICAL_REWARD_RATE_DECREASE) } @@ -177,7 +167,9 @@ module std::features { /// Whether enable paritial governance voting on aptos_governance. /// Lifetime: transient const PARTIAL_GOVERNANCE_VOTING: u64 = 17; + public fun get_partial_governance_voting(): u64 { PARTIAL_GOVERNANCE_VOTING } + public fun partial_governance_voting_enabled(): bool acquires Features { is_enabled(PARTIAL_GOVERNANCE_VOTING) } @@ -189,7 +181,9 @@ module std::features { /// Whether enable paritial governance voting on delegation_pool. /// Lifetime: transient const DELEGATION_POOL_PARTIAL_GOVERNANCE_VOTING: u64 = 21; + public fun get_delegation_pool_partial_governance_voting(): u64 { DELEGATION_POOL_PARTIAL_GOVERNANCE_VOTING } + public fun delegation_pool_partial_governance_voting_enabled(): bool acquires Features { is_enabled(DELEGATION_POOL_PARTIAL_GOVERNANCE_VOTING) } @@ -197,6 +191,7 @@ module std::features { /// Whether alternate gas payer is supported /// Lifetime: transient const FEE_PAYER_ENABLED: u64 = 22; + public fun fee_payer_enabled(): bool acquires Features { is_enabled(FEE_PAYER_ENABLED) } @@ -204,7 +199,9 @@ module std::features { /// Whether enable MOVE functions to call create_auid method to create AUIDs. /// Lifetime: transient const APTOS_UNIQUE_IDENTIFIERS: u64 = 23; + public fun get_auids(): u64 { APTOS_UNIQUE_IDENTIFIERS } + public fun auids_enabled(): bool acquires Features { is_enabled(APTOS_UNIQUE_IDENTIFIERS) } @@ -213,13 +210,33 @@ module std::features { /// available. This is needed because of the introduction of a new native function. 
/// Lifetime: transient const BULLETPROOFS_NATIVES: u64 = 24; - public fun get_bulletproofs_feature(): u64 { BULLETPROOFS_NATIVES } public fun bulletproofs_enabled(): bool acquires Features { is_enabled(BULLETPROOFS_NATIVES) } + /// Fix the native formatter for signer. + /// Lifetime: transient + const SIGNER_NATIVE_FORMAT_FIX: u64 = 25; + + public fun get_signer_native_format_fix_feature(): u64 { SIGNER_NATIVE_FORMAT_FIX } + + public fun signer_native_format_fix_enabled(): bool acquires Features { + is_enabled(SIGNER_NATIVE_FORMAT_FIX) + } + + /// Whether emit function in `event.move` are enabled for module events. + /// + /// Lifetime: transient + const MODULE_EVENT: u64 = 26; + + public fun get_module_event_feature(): u64 { MODULE_EVENT } + + public fun module_event_enabled(): bool acquires Features { + is_enabled(MODULE_EVENT) + } + // ============================================================================================ // Feature Flag Implementation @@ -236,7 +253,7 @@ module std::features { acquires Features { assert!(signer::address_of(framework) == @std, error::permission_denied(EFRAMEWORK_SIGNER_NEEDED)); if (!exists(@std)) { - move_to(framework, Features{features: vector[]}) + move_to(framework, Features { features: vector[] }) }; let features = &mut borrow_global_mut(@std).features; vector::for_each_ref(&enable, |feature| { @@ -250,7 +267,7 @@ module std::features { /// Check whether the feature is enabled. fun is_enabled(feature: u64): bool acquires Features { exists(@std) && - contains(&borrow_global(@std).features, feature) + contains(&borrow_global(@std).features, feature) } /// Helper to include or exclude a feature flag. diff --git a/aptos-move/framework/move-stdlib/sources/configs/features.spec.move b/aptos-move/framework/move-stdlib/sources/configs/features.spec.move index 45c8202217b03..38e4322ba968b 100644 --- a/aptos-move/framework/move-stdlib/sources/configs/features.spec.move +++ b/aptos-move/framework/move-stdlib/sources/configs/features.spec.move @@ -49,6 +49,10 @@ spec std::features { spec_is_enabled(COLLECT_AND_DISTRIBUTE_GAS_FEES) } + spec fun spec_module_event_enabled(): bool { + spec_is_enabled(MODULE_EVENT) + } + spec periodical_reward_rate_decrease_enabled { pragma opaque; aborts_if [abstract] false; diff --git a/aptos-move/framework/src/aptos.rs b/aptos-move/framework/src/aptos.rs index b257e4fc5a68f..f4912082d2532 100644 --- a/aptos-move/framework/src/aptos.rs +++ b/aptos-move/framework/src/aptos.rs @@ -4,8 +4,9 @@ #![forbid(unsafe_code)] use crate::{ - docgen::DocgenOptions, path_in_crate, release_builder::RELEASE_BUNDLE_EXTENSION, - release_bundle::ReleaseBundle, BuildOptions, ReleaseOptions, + docgen::DocgenOptions, extended_checks, path_in_crate, + release_builder::RELEASE_BUNDLE_EXTENSION, release_bundle::ReleaseBundle, BuildOptions, + ReleaseOptions, }; use clap::ValueEnum; use move_command_line_common::address::NumericalAddress; @@ -118,6 +119,8 @@ impl ReleaseTarget { }), skip_fetch_latest_git_deps: true, bytecode_version: None, + skip_attribute_checks: false, + known_attributes: extended_checks::get_all_attribute_names().clone(), }, packages: packages.iter().map(|(path, _)| path.to_owned()).collect(), rust_bindings: packages diff --git a/aptos-move/framework/src/built_package.rs b/aptos-move/framework/src/built_package.rs index 2b2ee97ba7f98..f1255da0008f6 100644 --- a/aptos-move/framework/src/built_package.rs +++ b/aptos-move/framework/src/built_package.rs @@ -68,6 +68,10 @@ pub struct BuildOptions { pub skip_fetch_latest_git_deps: bool, 
#[clap(long)] pub bytecode_version: Option, + #[clap(long)] + pub skip_attribute_checks: bool, + #[clap(skip)] + pub known_attributes: BTreeSet, } // Because named_addresses has no parser, we can't use clap's default impl. This must be aligned @@ -88,6 +92,8 @@ impl Default for BuildOptions { // while in a test (and cause some havoc) skip_fetch_latest_git_deps: false, bytecode_version: None, + skip_attribute_checks: false, + known_attributes: extended_checks::get_all_attribute_names().clone(), } } } @@ -106,6 +112,8 @@ pub fn build_model( additional_named_addresses: BTreeMap, target_filter: Option, bytecode_version: Option, + skip_attribute_checks: bool, + known_attributes: BTreeSet, ) -> anyhow::Result { let build_config = BuildConfig { dev_mode, @@ -119,6 +127,8 @@ pub fn build_model( fetch_deps_only: false, skip_fetch_latest_git_deps: true, bytecode_version, + skip_attribute_checks, + known_attributes, }; build_config.move_model_for_package(package_path, ModelConfig { target_filter, @@ -133,6 +143,7 @@ impl BuiltPackage { /// and is not `Ok` if there was an error among those. pub fn build(package_path: PathBuf, options: BuildOptions) -> anyhow::Result { let bytecode_version = options.bytecode_version; + let skip_attribute_checks = options.skip_attribute_checks; let build_config = BuildConfig { dev_mode: options.dev, additional_named_addresses: options.named_addresses.clone(), @@ -145,7 +156,10 @@ impl BuiltPackage { fetch_deps_only: false, skip_fetch_latest_git_deps: options.skip_fetch_latest_git_deps, bytecode_version, + skip_attribute_checks, + known_attributes: options.known_attributes.clone(), }; + eprintln!("Compiling, may take a little while to download git dependencies..."); let mut package = build_config.compile_package_no_exit(&package_path, &mut stderr())?; @@ -157,6 +171,8 @@ impl BuiltPackage { options.named_addresses.clone(), None, bytecode_version, + skip_attribute_checks, + options.known_attributes.clone(), )?; let runtime_metadata = extended_checks::run_extended_checks(model); if model.diag_count(Severity::Warning) > 0 { diff --git a/aptos-move/framework/src/extended_checks.rs b/aptos-move/framework/src/extended_checks.rs index 57d563f2800ee..7478f117db8d7 100644 --- a/aptos-move/framework/src/extended_checks.rs +++ b/aptos-move/framework/src/extended_checks.rs @@ -3,6 +3,7 @@ use crate::{KnownAttribute, RuntimeModuleMetadataV1}; use move_binary_format::file_format::{Ability, AbilitySet, Visibility}; +use move_compiler::shared::known_attributes; use move_core_types::{ account_address::AccountAddress, errmap::{ErrorDescription, ErrorMapping}, @@ -12,24 +13,61 @@ use move_core_types::{ use move_model::{ ast::{Attribute, AttributeValue, Value}, model::{ - FunctionEnv, GlobalEnv, Loc, ModuleEnv, NamedConstantEnv, Parameter, QualifiedId, + FunId, FunctionEnv, GlobalEnv, Loc, ModuleEnv, NamedConstantEnv, Parameter, QualifiedId, StructEnv, StructId, }, symbol::Symbol, ty::{PrimitiveType, ReferenceKind, Type}, }; -use std::{collections::BTreeMap, rc::Rc, str::FromStr}; +use move_stackless_bytecode::{ + function_target::{FunctionData, FunctionTarget}, + stackless_bytecode::{AttrId, Bytecode, Operation}, + stackless_bytecode_generator::StacklessBytecodeGenerator, +}; +use once_cell::sync::Lazy; +use std::{ + collections::{BTreeMap, BTreeSet}, + rc::Rc, + str::FromStr, +}; use thiserror::Error; const INIT_MODULE_FUN: &str = "init_module"; -const LEGAC_ENTRY_FUN_ATTRIBUTE: &str = "legacy_entry_fun"; +const LEGACY_ENTRY_FUN_ATTRIBUTE: &str = "legacy_entry_fun"; const ERROR_PREFIX: 
&str = "E"; +const EVENT_STRUCT_ATTRIBUTE: &str = "event"; const RESOURCE_GROUP: &str = "resource_group"; const RESOURCE_GROUP_MEMBER: &str = "resource_group_member"; const RESOURCE_GROUP_NAME: &str = "group"; const RESOURCE_GROUP_SCOPE: &str = "scope"; const VIEW_FUN_ATTRIBUTE: &str = "view"; +// top-level attribute names, only. +pub fn get_all_attribute_names() -> &'static BTreeSet { + const ALL_ATTRIBUTE_NAMES: [&str; 5] = [ + LEGACY_ENTRY_FUN_ATTRIBUTE, + RESOURCE_GROUP, + RESOURCE_GROUP_MEMBER, + VIEW_FUN_ATTRIBUTE, + EVENT_STRUCT_ATTRIBUTE, + ]; + + fn extended_attribute_names() -> BTreeSet { + ALL_ATTRIBUTE_NAMES + .into_iter() + .map(|s| s.to_string()) + .collect::>() + } + + static KNOWN_ATTRIBUTES_SET: Lazy> = Lazy::new(|| { + use known_attributes::AttributeKind; + let mut attributes = extended_attribute_names(); + known_attributes::KnownAttribute::add_attribute_names(&mut attributes); + attributes + }); + &KNOWN_ATTRIBUTES_SET +} + /// Run the extended context checker on target modules in the environment and returns a map /// from module to extended runtime metadata. Any errors during context checking are reported to /// `env`. This is invoked after general build succeeds. @@ -67,6 +105,7 @@ impl<'a> ExtendedChecker<'a> { self.check_and_record_resource_group_members(module); self.check_and_record_view_functions(module); self.check_entry_functions(module); + self.check_and_record_events(module); self.check_init_module(module); self.build_error_map(module) } @@ -118,7 +157,7 @@ impl<'a> ExtendedChecker<'a> { if !fun.is_entry() { continue; } - if self.has_attribute(fun, LEGAC_ENTRY_FUN_ATTRIBUTE) { + if self.has_attribute(fun, LEGACY_ENTRY_FUN_ATTRIBUTE) { // Skip checking for legacy entries continue; } @@ -428,6 +467,87 @@ impl<'a> ExtendedChecker<'a> { } } +// ---------------------------------------------------------------------------------- +// Events + +impl<'a> ExtendedChecker<'a> { + fn check_and_record_events(&mut self, module: &ModuleEnv) { + for ref struct_ in module.get_structs() { + if self.has_attribute_iter(struct_.get_attributes().iter(), EVENT_STRUCT_ATTRIBUTE) { + let module_id = self.get_runtime_module_id(module); + // Remember the runtime info that this is a event struct. + self.output + .entry(module_id) + .or_default() + .struct_attributes + .entry( + self.env + .symbol_pool() + .string(struct_.get_name()) + .to_string(), + ) + .or_default() + .push(KnownAttribute::event()); + } + } + for fun in module.get_functions() { + if fun.is_inline() || fun.is_native() { + continue; + } + // Holder for stackless function data + let data = self.get_stackless_data(&fun); + // Handle to work with stackless functions -- function targets. + let target = FunctionTarget::new(&fun, &data); + // Now check for event emit calls. 
+ for bc in target.get_bytecode() { + if let Bytecode::Call(attr_id, _, Operation::Function(mid, fid, type_inst), _, _) = + bc + { + self.check_emit_event_call( + &module.get_id(), + &target, + *attr_id, + mid.qualified(*fid), + type_inst, + ); + } + } + } + } + + fn check_emit_event_call( + &mut self, + module_id: &move_model::model::ModuleId, + target: &FunctionTarget, + attr_id: AttrId, + callee: QualifiedId, + type_inst: &[Type], + ) { + if !self.is_function(callee, "0x1::event::emit") { + return; + } + // We are looking at `0x1::event::emit` and extracting the `T` + let event_type = &type_inst[0]; + // Now check whether this type has the event attribute + let type_ok = match event_type { + Type::Struct(mid, sid, _) => { + let struct_ = self.env.get_struct(mid.qualified(*sid)); + // The struct must be defined in the current module. + module_id == mid + && self + .has_attribute_iter(struct_.get_attributes().iter(), EVENT_STRUCT_ATTRIBUTE) + }, + _ => false, + }; + if !type_ok { + let loc = target.get_bytecode_loc(attr_id); + self.env.error(&loc, + &format!("`0x1::event::emit` called with type `{}` which is not a struct type defined in the same module with `#[event]` attribute", + event_type.display(&self.env.get_type_display_ctx()))); + } + } +} + // ---------------------------------------------------------------------------------- // Error Map @@ -476,7 +596,15 @@ impl<'a> ExtendedChecker<'a> { impl<'a> ExtendedChecker<'a> { fn has_attribute(&self, fun: &FunctionEnv, attr_name: &str) -> bool { - fun.get_attributes().iter().any(|attr| { + self.has_attribute_iter(fun.get_attributes().iter(), attr_name) + } + + fn has_attribute_iter( + &self, + mut attrs: impl Iterator, + attr_name: &str, + ) -> bool { + attrs.any(|attr| { if let Attribute::Apply(_, name, _) = attr { self.env.symbol_pool().string(*name).as_str() == attr_name } else { @@ -497,6 +625,15 @@ impl<'a> ExtendedChecker<'a> { fn name_string(&self, symbol: Symbol) -> Rc { self.env.symbol_pool().string(symbol) } + + fn get_stackless_data(&self, fun: &FunctionEnv) -> FunctionData { + StacklessBytecodeGenerator::new(fun).generate_function() + } + + fn is_function(&self, id: QualifiedId, full_name_str: &str) -> bool { + let fun = &self.env.get_function(id); + fun.get_full_name_with_address() == full_name_str + } } // ---------------------------------------------------------------------------------- diff --git a/aptos-move/framework/src/module_metadata.rs b/aptos-move/framework/src/module_metadata.rs index e7fa7ef602b5d..edd4aa2ead73d 100644 --- a/aptos-move/framework/src/module_metadata.rs +++ b/aptos-move/framework/src/module_metadata.rs @@ -66,6 +66,7 @@ pub enum KnownAttributeKind { ViewFunction = 1, ResourceGroup = 2, ResourceGroupMember = 3, + Event = 4, } impl KnownAttribute { @@ -118,6 +119,17 @@ impl KnownAttribute { pub fn is_resource_group_member(&self) -> bool { self.kind == KnownAttributeKind::ResourceGroupMember as u8 } + + pub fn event() -> Self { + Self { + kind: KnownAttributeKind::Event as u8, + args: vec![], + } + } + + pub fn is_event(&self) -> bool { + self.kind == KnownAttributeKind::Event as u8 + } } /// Extract metadata from the VM, upgrading V0 to V1 representation as needed @@ -376,6 +388,9 @@ pub fn verify_module_metadata( continue; } } + if features.is_module_event_enabled() && attr.is_event() { + continue; + } return Err(AttributeValidationError { key: struct_.clone(), attribute: attr.kind, diff --git a/aptos-move/framework/src/natives/event.rs b/aptos-move/framework/src/natives/event.rs index 
c6e0584cf2591..a9756e0851c8d 100644 --- a/aptos-move/framework/src/natives/event.rs +++ b/aptos-move/framework/src/natives/event.rs @@ -7,22 +7,56 @@ use aptos_native_interface::{ SafeNativeResult, }; #[cfg(feature = "testing")] -use move_binary_format::errors::PartialVMError; -use move_core_types::account_address::AccountAddress; +use aptos_types::account_address::AccountAddress; +use aptos_types::contract_event::ContractEvent; #[cfg(feature = "testing")] -use move_core_types::vm_status::StatusCode; +use aptos_types::event::EventKey; +use better_any::{Tid, TidAble}; +use move_binary_format::errors::PartialVMError; +use move_core_types::{language_storage::TypeTag, vm_status::StatusCode}; use move_vm_runtime::native_functions::NativeFunction; #[cfg(feature = "testing")] use move_vm_types::values::{Reference, Struct, StructRef}; use move_vm_types::{loaded_data::runtime_types::Type, values::Value}; -use serde::Serialize; use smallvec::{smallvec, SmallVec}; use std::collections::VecDeque; -#[derive(Serialize)] -pub struct GUID { - creation_num: u64, - addr: AccountAddress, +/// Cached emitted module events. +#[derive(Default, Tid)] +pub struct NativeEventContext { + events: Vec, +} + +impl NativeEventContext { + pub fn into_events(self) -> Vec { + self.events + } + + #[cfg(feature = "testing")] + fn emitted_v1_events(&self, event_key: &EventKey, ty_tag: &TypeTag) -> Vec<&[u8]> { + let mut events = vec![]; + for event in self.events.iter() { + if let ContractEvent::V1(e) = event { + if e.key() == event_key && e.type_tag() == ty_tag { + events.push(e.event_data()); + } + } + } + events + } + + #[cfg(feature = "testing")] + fn emitted_v2_events(&self, ty_tag: &TypeTag) -> Vec<&[u8]> { + let mut events = vec![]; + for event in self.events.iter() { + if let ContractEvent::V2(e) = event { + if e.type_tag() == ty_tag { + events.push(e.event_data()); + } + } + } + events + } } /*************************************************************************************************** @@ -50,11 +84,20 @@ fn native_write_to_event_store( EVENT_WRITE_TO_EVENT_STORE_BASE + EVENT_WRITE_TO_EVENT_STORE_PER_ABSTRACT_VALUE_UNIT * context.abs_val_size(&msg), )?; + let ty_tag = context.type_to_type_tag(&ty)?; + let ty_layout = context.type_to_type_layout(&ty)?; + let blob = msg.simple_serialize(&ty_layout).ok_or_else(|| { + SafeNativeError::InvariantViolation(PartialVMError::new( + StatusCode::UNKNOWN_INVARIANT_VIOLATION_ERROR, + )) + })?; + let key = bcs::from_bytes(guid.as_slice()).map_err(|_| { + SafeNativeError::InvariantViolation(PartialVMError::new(StatusCode::EVENT_KEY_MISMATCH)) + })?; - if !context.save_event(guid, seq_num, ty, msg)? { - return Err(SafeNativeError::Abort { abort_code: 0 }); - } - + let ctx = context.extensions_mut().get_mut::(); + ctx.events + .push(ContractEvent::new_v1(key, seq_num, ty_tag, blob)); Ok(smallvec![]) } @@ -79,21 +122,121 @@ fn native_emitted_events_by_handle( let creation_num = guid .next() - .ok_or_else(|| PartialVMError::new(StatusCode::INTERNAL_TYPE_ERROR))? + .ok_or_else(|| { + SafeNativeError::InvariantViolation(PartialVMError::new( + StatusCode::INTERNAL_TYPE_ERROR, + )) + })? .value_as::()?; let addr = guid .next() - .ok_or_else(|| PartialVMError::new(StatusCode::INTERNAL_TYPE_ERROR))? + .ok_or_else(|| { + SafeNativeError::InvariantViolation(PartialVMError::new( + StatusCode::INTERNAL_TYPE_ERROR, + )) + })? 
.value_as::()?; - let guid = GUID { creation_num, addr }; - let events = context.emitted_events( - bcs::to_bytes(&guid) - .map_err(|_| PartialVMError::new(StatusCode::VALUE_SERIALIZATION_ERROR))?, - ty, - )?; + let key = EventKey::new(creation_num, addr); + let ty_tag = context.type_to_type_tag(&ty)?; + let ty_layout = context.type_to_type_layout(&ty)?; + let ctx = context.extensions_mut().get_mut::(); + let events = ctx + .emitted_v1_events(&key, &ty_tag) + .into_iter() + .map(|blob| { + Value::simple_deserialize(blob, &ty_layout).ok_or_else(|| { + SafeNativeError::InvariantViolation(PartialVMError::new( + StatusCode::UNKNOWN_INVARIANT_VIOLATION_ERROR, + )) + }) + }) + .collect::>>()?; Ok(smallvec![Value::vector_for_testing_only(events)]) } +#[cfg(feature = "testing")] +fn native_emitted_events( + context: &mut SafeNativeContext, + mut ty_args: Vec, + arguments: VecDeque, +) -> SafeNativeResult> { + debug_assert!(ty_args.len() == 1); + debug_assert!(arguments.is_empty()); + + let ty = ty_args.pop().unwrap(); + + let ty_tag = context.type_to_type_tag(&ty)?; + let ty_layout = context.type_to_type_layout(&ty)?; + let ctx = context.extensions_mut().get_mut::(); + let events = ctx + .emitted_v2_events(&ty_tag) + .into_iter() + .map(|blob| { + Value::simple_deserialize(blob, &ty_layout).ok_or_else(|| { + SafeNativeError::InvariantViolation(PartialVMError::new( + StatusCode::VALUE_DESERIALIZATION_ERROR, + )) + }) + }) + .collect::>>()?; + Ok(smallvec![Value::vector_for_testing_only(events)]) +} + +#[inline] +fn native_write_module_event_to_store( + context: &mut SafeNativeContext, + mut ty_args: Vec, + mut arguments: VecDeque, +) -> SafeNativeResult> { + debug_assert!(ty_args.len() == 1); + debug_assert!(arguments.len() == 1); + + let ty = ty_args.pop().unwrap(); + let msg = arguments.pop_back().unwrap(); + + context.charge( + EVENT_WRITE_TO_EVENT_STORE_BASE + + EVENT_WRITE_TO_EVENT_STORE_PER_ABSTRACT_VALUE_UNIT * context.abs_val_size(&msg), + )?; + + let type_tag = context.type_to_type_tag(&ty)?; + + // Additional runtime check for module call. + if let (Some(id), _, _) = context + .stack_frames(1) + .stack_trace() + .first() + .ok_or_else(|| { + SafeNativeError::InvariantViolation(PartialVMError::new( + StatusCode::UNKNOWN_INVARIANT_VIOLATION_ERROR, + )) + })? 
+ { + if let TypeTag::Struct(ref struct_tag) = type_tag { + if id != &struct_tag.module_id() { + return Err(SafeNativeError::InvariantViolation(PartialVMError::new( + StatusCode::INTERNAL_TYPE_ERROR, + ))); + } + } else { + return Err(SafeNativeError::InvariantViolation(PartialVMError::new( + StatusCode::INTERNAL_TYPE_ERROR, + ))); + } + } + let layout = context.type_to_type_layout(&ty)?; + let blob = msg.simple_serialize(&layout).ok_or_else(|| { + SafeNativeError::InvariantViolation( + PartialVMError::new(StatusCode::UNKNOWN_INVARIANT_VIOLATION_ERROR) + .with_message("Event serialization failure".to_string()), + ) + })?; + let ctx = context.extensions_mut().get_mut::(); + ctx.events.push(ContractEvent::new_v2(type_tag, blob)); + + Ok(smallvec![]) +} + /*************************************************************************************************** * module * @@ -109,10 +252,18 @@ pub fn make_all( native_emitted_events_by_handle as RawSafeNative, )]); + #[cfg(feature = "testing")] + natives.extend([("emitted_events", native_emitted_events as RawSafeNative)]); + natives.extend([( "write_to_event_store", native_write_to_event_store as RawSafeNative, )]); + natives.extend([( + "write_to_module_event_store", + native_write_module_event_to_store as RawSafeNative, + )]); + builder.make_named_natives(natives) } diff --git a/aptos-move/framework/src/natives/string_utils.rs b/aptos-move/framework/src/natives/string_utils.rs index 8c14175d8c745..5dbab71d3aa33 100644 --- a/aptos-move/framework/src/natives/string_utils.rs +++ b/aptos-move/framework/src/natives/string_utils.rs @@ -7,6 +7,7 @@ use aptos_native_interface::{ safely_pop_arg, RawSafeNative, SafeNativeBuilder, SafeNativeContext, SafeNativeError, SafeNativeResult, }; +use aptos_types::on_chain_config::FeatureFlag; use ark_std::iterable::Iterable; use move_core_types::{ account_address::AccountAddress, @@ -175,13 +176,30 @@ fn native_format_impl( write!(out, "@{}", str).unwrap(); }, MoveTypeLayout::Signer => { - let addr = val.value_as::()?; + let fix_enabled = context + .context + .get_feature_flags() + .is_enabled(FeatureFlag::SIGNER_NATIVE_FORMAT_FIX); + let addr = if fix_enabled { + val.value_as::()? + .unpack()? + .next() + .unwrap() + .value_as::()? + } else { + val.value_as::()? 
+ }; + let str = if context.canonicalize { addr.to_canonical_string() } else { addr.to_hex_literal() }; - write!(out, "signer({})", str).unwrap(); + if fix_enabled { + write!(out, "signer(@{})", str).unwrap(); + } else { + write!(out, "signer({})", str).unwrap(); + } }, MoveTypeLayout::Vector(ty) => { if let MoveTypeLayout::U8 = ty.as_ref() { diff --git a/aptos-move/framework/src/prover.rs b/aptos-move/framework/src/prover.rs index 8e4c6bfe43fa4..2de96ee9269c1 100644 --- a/aptos-move/framework/src/prover.rs +++ b/aptos-move/framework/src/prover.rs @@ -8,7 +8,11 @@ use codespan_reporting::{ }; use log::LevelFilter; use move_core_types::account_address::AccountAddress; -use std::{collections::BTreeMap, path::Path, time::Instant}; +use std::{ + collections::{BTreeMap, BTreeSet}, + path::Path, + time::Instant, +}; use tempfile::TempDir; #[derive(Debug, Clone, clap::Parser, serde::Serialize, serde::Deserialize)] @@ -114,6 +118,8 @@ impl ProverOptions { package_path: &Path, named_addresses: BTreeMap, bytecode_version: Option, + skip_attribute_checks: bool, + known_attributes: &BTreeSet, ) -> anyhow::Result<()> { let now = Instant::now(); let for_test = self.for_test; @@ -123,6 +129,8 @@ impl ProverOptions { named_addresses, self.filter.clone(), bytecode_version, + skip_attribute_checks, + known_attributes.clone(), )?; let mut options = self.convert_options(); // Need to ensure a distinct output.bpl file for concurrent execution. In non-test @@ -168,12 +176,12 @@ impl ProverOptions { let opts = move_prover::cli::Options { output_path: "".to_string(), verbosity_level, - prover: move_stackless_bytecode::options::ProverOptions { + prover: move_prover_bytecode_pipeline::options::ProverOptions { stable_test_output: self.stable_test_output, auto_trace_level: if self.trace { - move_stackless_bytecode::options::AutoTraceLevel::VerifiedFunction + move_prover_bytecode_pipeline::options::AutoTraceLevel::VerifiedFunction } else { - move_stackless_bytecode::options::AutoTraceLevel::Off + move_prover_bytecode_pipeline::options::AutoTraceLevel::Off }, report_severity: Severity::Warning, dump_bytecode: self.dump, diff --git a/aptos-move/framework/tests/move_prover_tests.rs b/aptos-move/framework/tests/move_prover_tests.rs index 684a8650f3622..54947cb469e9a 100644 --- a/aptos-move/framework/tests/move_prover_tests.rs +++ b/aptos-move/framework/tests/move_prover_tests.rs @@ -1,7 +1,7 @@ // Copyright © Aptos Foundation // SPDX-License-Identifier: Apache-2.0 -use aptos_framework::prover::ProverOptions; +use aptos_framework::{extended_checks, prover::ProverOptions}; use std::{collections::BTreeMap, path::PathBuf}; const ENV_TEST_INCONSISTENCY: &str = "MVP_TEST_INCONSISTENCY"; @@ -51,8 +51,16 @@ pub fn run_prover_for_pkg(path_to_pkg: impl Into) { options.vc_timeout = read_env_var(ENV_TEST_VC_TIMEOUT) .parse::() .unwrap_or(options.vc_timeout); + let skip_attribute_checks = false; options - .prove(false, pkg_path.as_path(), BTreeMap::default(), None) + .prove( + false, + pkg_path.as_path(), + BTreeMap::default(), + None, + skip_attribute_checks, + extended_checks::get_all_attribute_names(), + ) .unwrap() } } diff --git a/aptos-move/framework/tests/move_unit_test.rs b/aptos-move/framework/tests/move_unit_test.rs index b48fe2781442e..de6ce0ffdf086 100644 --- a/aptos-move/framework/tests/move_unit_test.rs +++ b/aptos-move/framework/tests/move_unit_test.rs @@ -2,7 +2,7 @@ // Parts of the project are originally copyright © Meta Platforms, Inc. 
// SPDX-License-Identifier: Apache-2.0 -use aptos_framework::path_in_crate; +use aptos_framework::{extended_checks, path_in_crate}; use aptos_gas_schedule::{MiscGasParameters, NativeGasParameters, LATEST_GAS_FEATURE_VERSION}; use aptos_types::on_chain_config::{Features, TimedFeatures}; use aptos_vm::natives; @@ -18,6 +18,7 @@ fn run_tests_for_pkg(path_to_pkg: impl Into) { move_package::BuildConfig { test_mode: true, install_dir: Some(tempdir().unwrap().path().to_path_buf()), + known_attributes: extended_checks::get_all_attribute_names().clone(), ..Default::default() }, // TODO(Gas): double check if this is correct diff --git a/aptos-move/move-examples/Cargo.toml b/aptos-move/move-examples/Cargo.toml index a3494f792bd52..bef94422971ef 100644 --- a/aptos-move/move-examples/Cargo.toml +++ b/aptos-move/move-examples/Cargo.toml @@ -13,6 +13,7 @@ repository = { workspace = true } rust-version = { workspace = true } [dependencies] +aptos-framework = { workspace = true } aptos-gas-schedule = { workspace = true } aptos-types = { workspace = true } aptos-vm ={ workspace = true, features = ["testing"] } diff --git a/aptos-move/move-examples/event/Move.toml b/aptos-move/move-examples/event/Move.toml new file mode 100644 index 0000000000000..faef308f77c93 --- /dev/null +++ b/aptos-move/move-examples/event/Move.toml @@ -0,0 +1,12 @@ +[package] +name = "Event example" +version = "0.0.1" + +[addresses] +std = "0x1" +aptos_framework = "0x1" +event = "_" + +[dependencies] +AptosFramework = { local = "../../framework/aptos-framework" } + diff --git a/aptos-move/move-examples/event/sources/event.move b/aptos-move/move-examples/event/sources/event.move new file mode 100644 index 0000000000000..3bc31e4136640 --- /dev/null +++ b/aptos-move/move-examples/event/sources/event.move @@ -0,0 +1,76 @@ +/// This provides an example shows how to use module events. 
+ +module event::event { + use aptos_framework::event; + #[test_only] + use std::vector; + + struct Field has store, drop { + field: bool, + } + + #[event] + struct MyEvent has store, drop { + seq: u64, + field: Field, + bytes: vector + } + + public entry fun emit(num: u64) { + let i = 0; + while (i < num) { + let event = MyEvent { + seq: i, + field: Field { field: false }, + bytes: vector[] + }; + event::emit(event); + i = i + 1; + } + } + + public entry fun call_inline() { + emit_one_event() + } + + inline fun emit_one_event() { + event::emit(MyEvent { + seq: 1, + field: Field { field: false }, + bytes: vector[] + }); + } + + #[test] + public entry fun test_emitting() { + emit(20); + let module_events = event::emitted_events(); + assert!(vector::length(&module_events) == 20, 0); + let i = 0; + while (i < 20) { + let event = MyEvent { + seq: i, + field: Field {field: false}, + bytes: vector[] + }; + assert!(vector::borrow(&module_events, i) == &event, i); + i = i + 1; + }; + let event = MyEvent { + seq: 0, + field: Field { field: false }, + bytes: vector[] + }; + assert!(event::was_event_emitted(&event), i); + } + + #[test] + public entry fun test_inline() { + call_inline(); + assert!(event::was_event_emitted(&MyEvent { + seq: 1, + field: Field { field: false }, + bytes: vector[] + }), 0); + } +} diff --git a/aptos-move/move-examples/tests/move_prover_tests.rs b/aptos-move/move-examples/tests/move_prover_tests.rs index c425f1a991a71..99ab8b3f8d480 100644 --- a/aptos-move/move-examples/tests/move_prover_tests.rs +++ b/aptos-move/move-examples/tests/move_prover_tests.rs @@ -1,6 +1,7 @@ // Copyright © Aptos Foundation // SPDX-License-Identifier: Apache-2.0 +use aptos_framework::extended_checks; use aptos_types::account_address::AccountAddress; use move_cli::base::prove::run_move_prover; use std::{collections::BTreeMap, path::PathBuf}; @@ -24,6 +25,7 @@ pub fn run_prover_for_pkg( additional_named_addresses: named_addr, test_mode: true, install_dir: Some(tempdir().unwrap().path().to_path_buf()), + known_attributes: extended_checks::get_all_attribute_names().clone(), ..Default::default() }; run_move_prover( diff --git a/aptos-move/move-examples/tests/move_unit_tests.rs b/aptos-move/move-examples/tests/move_unit_tests.rs index ca2adbe511fb6..ea802b0b00157 100644 --- a/aptos-move/move-examples/tests/move_unit_tests.rs +++ b/aptos-move/move-examples/tests/move_unit_tests.rs @@ -1,6 +1,7 @@ // Copyright © Aptos Foundation // SPDX-License-Identifier: Apache-2.0 +use aptos_framework::extended_checks; use aptos_gas_schedule::{MiscGasParameters, NativeGasParameters, LATEST_GAS_FEATURE_VERSION}; use aptos_types::{ account_address::{create_resource_address, AccountAddress}, @@ -33,6 +34,7 @@ pub fn run_tests_for_pkg( test_mode: true, install_dir: Some(tempdir().unwrap().path().to_path_buf()), additional_named_addresses: named_addr, + known_attributes: extended_checks::get_all_attribute_names().clone(), ..Default::default() }, UnitTestingConfig::default_with_bound(Some(100_000)), diff --git a/aptos-move/mvhashmap/src/lib.rs b/aptos-move/mvhashmap/src/lib.rs index 725f85c954fec..bce3b0333be46 100644 --- a/aptos-move/mvhashmap/src/lib.rs +++ b/aptos-move/mvhashmap/src/lib.rs @@ -3,7 +3,7 @@ // SPDX-License-Identifier: Apache-2.0 use crate::{ - types::{MVDataError, MVDataOutput, MVModulesError, MVModulesOutput, TxnIndex, Version}, + types::{MVDataError, MVDataOutput, MVModulesError, MVModulesOutput, TxnIndex}, versioned_data::VersionedData, versioned_modules::VersionedModules, }; @@ -56,31 +56,12 @@ impl 
self.modules.mark_estimate(key, txn_idx), - None => self.data.mark_estimate(key, txn_idx), - } - } - - /// Delete an entry from transaction 'txn_idx' at access path 'key'. Will panic - /// if the corresponding entry does not exist. - pub fn delete(&self, key: &K, txn_idx: TxnIndex) { - // This internally deserializes the path, TODO: fix. - match key.module_path() { - Some(_) => self.modules.delete(key, txn_idx), - None => self.data.delete(key, txn_idx), - }; + pub fn data(&self) -> &VersionedData { + &self.data } - /// Add a versioned write at a specified key, in data or modules map according to the key. - pub fn write(&self, key: K, version: Version, value: V) { - match key.module_path() { - Some(_) => self.modules.write(key, version.0, value), - None => self.data.write(key, version, value), - } + pub fn modules(&self) -> &VersionedModules { + &self.modules } // ----------------------------------------------- diff --git a/aptos-move/mvhashmap/src/unit_tests/mod.rs b/aptos-move/mvhashmap/src/unit_tests/mod.rs index 20e132b9a04a3..8b28bf89e6fe3 100644 --- a/aptos-move/mvhashmap/src/unit_tests/mod.rs +++ b/aptos-move/mvhashmap/src/unit_tests/mod.rs @@ -113,7 +113,7 @@ fn create_write_read_placeholder_struct() { assert_eq!(Err(NotFound), r_db); // Write by txn 10. - mvtbl.write(ap1.clone(), (10, 1), value_for(10, 1)); + mvtbl.data().write(ap1.clone(), (10, 1), value_for(10, 1)); // Reads that should go the DB return Err(NotFound) let r_db = mvtbl.fetch_data(&ap1, 9); @@ -136,8 +136,8 @@ fn create_write_read_placeholder_struct() { assert_eq!(Ok(Resolved(u128_for(10, 1) + 11 + 12 - (61 + 13))), r_sum); // More writes. - mvtbl.write(ap1.clone(), (12, 0), value_for(12, 0)); - mvtbl.write(ap1.clone(), (8, 3), value_for(8, 3)); + mvtbl.data().write(ap1.clone(), (12, 0), value_for(12, 0)); + mvtbl.data().write(ap1.clone(), (8, 3), value_for(8, 3)); // Verify reads. let r_12 = mvtbl.fetch_data(&ap1, 15); @@ -148,7 +148,7 @@ fn create_write_read_placeholder_struct() { assert_eq!(Ok(Versioned((8, 3), arc_value_for(8, 3))), r_8); // Mark the entry written by 10 as an estimate. - mvtbl.mark_estimate(&ap1, 10); + mvtbl.data().mark_estimate(&ap1, 10); // Read for txn 11 must observe a dependency. let r_10 = mvtbl.fetch_data(&ap1, 11); @@ -159,25 +159,25 @@ fn create_write_read_placeholder_struct() { assert_eq!(Err(Dependency(10)), r_11); // Delete the entry written by 10, write to a different ap. - mvtbl.delete(&ap1, 10); - mvtbl.write(ap2.clone(), (10, 2), value_for(10, 2)); + mvtbl.data().delete(&ap1, 10); + mvtbl.data().write(ap2.clone(), (10, 2), value_for(10, 2)); // Read by txn 11 no longer observes entry from txn 10. let r_8 = mvtbl.fetch_data(&ap1, 11); assert_eq!(Ok(Versioned((8, 3), arc_value_for(8, 3))), r_8); // Reads, writes for ap2 and ap3. - mvtbl.write(ap2.clone(), (5, 0), value_for(5, 0)); - mvtbl.write(ap3.clone(), (20, 4), value_for(20, 4)); + mvtbl.data().write(ap2.clone(), (5, 0), value_for(5, 0)); + mvtbl.data().write(ap3.clone(), (20, 4), value_for(20, 4)); let r_5 = mvtbl.fetch_data(&ap2, 10); assert_eq!(Ok(Versioned((5, 0), arc_value_for(5, 0))), r_5); let r_20 = mvtbl.fetch_data(&ap3, 21); assert_eq!(Ok(Versioned((20, 4), arc_value_for(20, 4))), r_20); // Clear ap1 and ap3. - mvtbl.delete(&ap1, 12); - mvtbl.delete(&ap1, 8); - mvtbl.delete(&ap3, 20); + mvtbl.data().delete(&ap1, 12); + mvtbl.data().delete(&ap1, 8); + mvtbl.data().delete(&ap3, 20); // Reads from ap1 and ap3 go to db. 
match_unresolved( @@ -200,7 +200,7 @@ fn create_write_read_placeholder_struct() { let val = value_for(10, 3); // sub base sub_for for which should underflow. let sub_base = AggregatorValue::from_write(&val).unwrap().into(); - mvtbl.write(ap2.clone(), (10, 3), val); + mvtbl.data().write(ap2.clone(), (10, 3), val); mvtbl.add_delta(ap2.clone(), 30, delta_sub(30 + sub_base, u128::MAX)); let r_31 = mvtbl.fetch_data(&ap2, 31); assert_eq!(Err(DeltaApplicationFailure), r_31); diff --git a/aptos-move/mvhashmap/src/unit_tests/proptest_types.rs b/aptos-move/mvhashmap/src/unit_tests/proptest_types.rs index 5a30f2f73cbf2..a9b9e959f8274 100644 --- a/aptos-move/mvhashmap/src/unit_tests/proptest_types.rs +++ b/aptos-move/mvhashmap/src/unit_tests/proptest_types.rs @@ -205,8 +205,9 @@ where }) .collect::>(); for (key, idx) in versions_to_write { - map.write(KeyType(key.clone()), (idx as TxnIndex, 0), Value(None)); - map.mark_estimate(&KeyType(key), idx as TxnIndex); + map.data() + .write(KeyType(key.clone()), (idx as TxnIndex, 0), Value(None)); + map.data().mark_estimate(&KeyType(key), idx as TxnIndex); } let current_idx = AtomicUsize::new(0); @@ -283,10 +284,11 @@ where } }, Operator::Remove => { - map.write(KeyType(key.clone()), (idx as TxnIndex, 1), Value(None)); + map.data() + .write(KeyType(key.clone()), (idx as TxnIndex, 1), Value(None)); }, Operator::Insert(v) => { - map.write( + map.data().write( KeyType(key.clone()), (idx as TxnIndex, 1), Value(Some(v.clone())), diff --git a/aptos-move/mvhashmap/src/versioned_data.rs b/aptos-move/mvhashmap/src/versioned_data.rs index 57ca5dcba3090..d8d62faa190af 100644 --- a/aptos-move/mvhashmap/src/versioned_data.rs +++ b/aptos-move/mvhashmap/src/versioned_data.rs @@ -223,7 +223,9 @@ impl VersionedData { .insert(txn_idx, CachePadded::new(Entry::new_delta_from(delta))); } - pub(crate) fn mark_estimate(&self, key: &K, txn_idx: TxnIndex) { + /// Mark an entry from transaction 'txn_idx' at access path 'key' as an estimated write + /// (for future incarnation). Will panic if the entry is not in the data-structure. + pub fn mark_estimate(&self, key: &K, txn_idx: TxnIndex) { let mut v = self.values.get_mut(key).expect("Path must exist"); v.versioned_map .get_mut(&txn_idx) @@ -231,7 +233,9 @@ impl VersionedData { .mark_estimate(); } - pub(crate) fn delete(&self, key: &K, txn_idx: TxnIndex) { + /// Delete an entry from transaction 'txn_idx' at access path 'key'. Will panic + /// if the corresponding entry does not exist. + pub fn delete(&self, key: &K, txn_idx: TxnIndex) { // TODO: investigate logical deletion. let mut v = self.values.get_mut(key).expect("Path must exist"); assert!( @@ -251,7 +255,8 @@ impl VersionedData { .unwrap_or(Err(MVDataError::NotFound)) } - pub(crate) fn write(&self, key: K, version: Version, data: V) { + /// Versioned write of data at a given key (and version). + pub fn write(&self, key: K, version: Version, data: V) { let (txn_idx, incarnation) = version; let mut v = self.values.entry(key).or_default(); diff --git a/aptos-move/mvhashmap/src/versioned_modules.rs b/aptos-move/mvhashmap/src/versioned_modules.rs index 8e88889363215..364e46c88fb25 100644 --- a/aptos-move/mvhashmap/src/versioned_modules.rs +++ b/aptos-move/mvhashmap/src/versioned_modules.rs @@ -107,7 +107,9 @@ impl VersionedModules< } } - pub(crate) fn mark_estimate(&self, key: &K, txn_idx: TxnIndex) { + /// Mark an entry from transaction 'txn_idx' at access path 'key' as an estimated write + /// (for future incarnation). Will panic if the entry is not in the data-structure. 
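The `mvhashmap` changes in this patch replace the key-dispatching `write`/`delete`/`mark_estimate` entry points with explicit `data()` and `modules()` accessors, so call sites pick the sub-map themselves (`mvtbl.data().write(..)`, `mvtbl.modules().write(..)`). A simplified, self-contained sketch of that accessor pattern follows; the toy maps use `&mut self` instead of the real concurrent containers, and all types are illustrative only.

```rust
use std::collections::{BTreeMap, HashMap};

type TxnIndex = u32;

#[derive(Default)]
struct VersionedData {
    // key -> (txn index -> value), a stand-in for the real versioned entries.
    values: HashMap<String, BTreeMap<TxnIndex, u64>>,
}

impl VersionedData {
    fn write(&mut self, key: String, txn_idx: TxnIndex, value: u64) {
        self.values.entry(key).or_default().insert(txn_idx, value);
    }

    fn delete(&mut self, key: &str, txn_idx: TxnIndex) {
        // Panics if the entry does not exist, mirroring the documented behaviour.
        self.values
            .get_mut(key)
            .expect("Path must exist")
            .remove(&txn_idx)
            .expect("Entry for txn must exist");
    }
}

#[derive(Default)]
struct MVHashMap {
    data: VersionedData,
    modules: VersionedData,
}

impl MVHashMap {
    // Callers choose the sub-map explicitly instead of the map dispatching on the key.
    fn data(&mut self) -> &mut VersionedData {
        &mut self.data
    }

    fn modules(&mut self) -> &mut VersionedData {
        &mut self.modules
    }
}

fn main() {
    let mut mvtbl = MVHashMap::default();
    mvtbl.data().write("ap1".to_string(), 10, 7);
    mvtbl.data().delete("ap1", 10);
    mvtbl.modules().write("0x1::event".to_string(), 3, 42);
}
```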
+ pub fn mark_estimate(&self, key: &K, txn_idx: TxnIndex) { let mut v = self.values.get_mut(key).expect("Path must exist"); v.versioned_map .get_mut(&txn_idx) @@ -115,20 +117,21 @@ impl VersionedModules< .mark_estimate(); } - pub(crate) fn write(&self, key: K, txn_idx: TxnIndex, data: V) { + /// Versioned write of module at a given key (and version). + pub fn write(&self, key: K, txn_idx: TxnIndex, data: V) { let mut v = self.values.entry(key).or_default(); v.versioned_map .insert(txn_idx, CachePadded::new(Entry::new_write_from(data))); } - pub(crate) fn store_executable(&self, key: &K, descriptor_hash: HashValue, executable: X) { + pub fn store_executable(&self, key: &K, descriptor_hash: HashValue, executable: X) { let mut v = self.values.get_mut(key).expect("Path must exist"); v.executables .entry(descriptor_hash) .or_insert_with(|| Arc::new(executable)); } - pub(crate) fn fetch_module( + pub fn fetch_module( &self, key: &K, txn_idx: TxnIndex, @@ -147,7 +150,9 @@ impl VersionedModules< } } - pub(crate) fn delete(&self, key: &K, txn_idx: TxnIndex) { + /// Delete an entry from transaction 'txn_idx' at access path 'key'. Will panic + /// if the corresponding entry does not exist. + pub fn delete(&self, key: &K, txn_idx: TxnIndex) { // TODO: investigate logical deletion. let mut v = self.values.get_mut(key).expect("Path must exist"); assert!( diff --git a/aptos-move/vm-genesis/src/genesis_context.rs b/aptos-move/vm-genesis/src/genesis_context.rs index 1de641636b520..b73ef3c2248bc 100644 --- a/aptos-move/vm-genesis/src/genesis_context.rs +++ b/aptos-move/vm-genesis/src/genesis_context.rs @@ -46,10 +46,6 @@ impl TStateView for GenesisStateView { .map(StateValue::new_legacy)) } - fn is_genesis(&self) -> bool { - true - } - fn get_usage(&self) -> Result { Ok(StateStorageUsage::zero()) } diff --git a/aptos-move/vm-genesis/src/lib.rs b/aptos-move/vm-genesis/src/lib.rs index ef8c425ae01ed..cfdddcb3a3828 100644 --- a/aptos-move/vm-genesis/src/lib.rs +++ b/aptos-move/vm-genesis/src/lib.rs @@ -20,7 +20,7 @@ use aptos_gas_schedule::{ use aptos_types::{ account_config::{self, aptos_test_root_address, events::NewEpochEvent, CORE_CODE_ADDRESS}, chain_id::ChainId, - contract_event::ContractEvent, + contract_event::{ContractEvent, ContractEventV1}, on_chain_config::{ FeatureFlag, Features, GasScheduleV2, OnChainConsensusConfig, OnChainExecutionConfig, TimedFeatures, APTOS_MAX_KNOWN_VERSION, @@ -160,7 +160,7 @@ pub fn encode_aptos_mainnet_genesis_transaction( // not deltas. The second session only publishes the framework module bundle, which should not // produce deltas either. assert!( - change_set.aggregator_delta_set().is_empty(), + change_set.aggregator_v1_delta_set().is_empty(), "non-empty delta change set in genesis" ); assert!(!change_set.write_set_iter().any(|(_, op)| op.is_deletion())); @@ -270,7 +270,7 @@ pub fn encode_genesis_change_set( // not deltas. The second session only publishes the framework module bundle, which should not // produce deltas either. 
assert!( - change_set.aggregator_delta_set().is_empty(), + change_set.aggregator_v1_delta_set().is_empty(), "non-empty delta change set in genesis" ); @@ -414,10 +414,12 @@ pub fn default_features() -> Vec { FeatureFlag::STRUCT_CONSTRUCTORS, FeatureFlag::CRYPTOGRAPHY_ALGEBRA_NATIVES, FeatureFlag::BLS12_381_STRUCTURES, + FeatureFlag::STORAGE_SLOT_METADATA, FeatureFlag::CHARGE_INVARIANT_VIOLATION, FeatureFlag::APTOS_UNIQUE_IDENTIFIERS, FeatureFlag::GAS_PAYER_ENABLED, FeatureFlag::BULLETPROOFS_NATIVES, + FeatureFlag::MODULE_EVENT, ] } @@ -629,16 +631,22 @@ fn emit_new_block_and_epoch_event(session: &mut SessionExt) { /// Verify the consistency of the genesis `WriteSet` fn verify_genesis_write_set(events: &[ContractEvent]) { - let new_epoch_events: Vec<&ContractEvent> = events + let new_epoch_events: Vec<&ContractEventV1> = events .iter() - .filter(|e| e.key() == &NewEpochEvent::event_key()) + .filter_map(|e| { + if e.event_key() == Some(&NewEpochEvent::event_key()) { + Some(e.v1().unwrap()) + } else { + None + } + }) .collect(); assert_eq!( new_epoch_events.len(), 1, "There should only be exactly one NewEpochEvent" ); - assert_eq!(new_epoch_events[0].sequence_number(), 0,); + assert_eq!(new_epoch_events[0].sequence_number(), 0); } /// An enum specifying whether the compiled stdlib/scripts should be used or freshly built versions diff --git a/aptos-move/writeset-transaction-generator/src/admin_script_builder.rs b/aptos-move/writeset-transaction-generator/src/admin_script_builder.rs index fc0eb093e747f..0fee6788fb569 100644 --- a/aptos-move/writeset-transaction-generator/src/admin_script_builder.rs +++ b/aptos-move/writeset-transaction-generator/src/admin_script_builder.rs @@ -24,8 +24,11 @@ pub fn compile_script(source_file_str: String, bytecode_version: Option) -> .files() .unwrap(), aptos_framework::named_addresses().clone(), + Flags::empty() + .set_sources_shadow_deps(false) + .set_skip_attribute_checks(false), + aptos_framework::extended_checks::get_all_attribute_names(), ) - .set_flags(Flags::empty().set_sources_shadow_deps(false)) .build_and_report() .unwrap(); assert!(compiled_program.len() == 1); diff --git a/aptos-move/writeset-transaction-generator/src/writeset_builder.rs b/aptos-move/writeset-transaction-generator/src/writeset_builder.rs index 348a89c6164a9..1c395b1b33544 100644 --- a/aptos-move/writeset-transaction-generator/src/writeset_builder.rs +++ b/aptos-move/writeset-transaction-generator/src/writeset_builder.rs @@ -139,7 +139,7 @@ where }; // Genesis never produces the delta change set. 
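With events now split into keyed V1 handles and keyless V2 module events, `verify_genesis_write_set` above switches from comparing keys on every event to a `filter_map` that keeps only the V1 `NewEpochEvent`. A toy model of that enum-plus-filter pattern follows; the types are stand-ins, not the real `aptos-types` `ContractEvent`.

```rust
// Toy stand-ins for the two on-chain event representations.
struct EventV1 {
    key: u64,
    sequence_number: u64,
}

struct EventV2 {
    type_tag: String,
}

enum ContractEvent {
    V1(EventV1),
    V2(EventV2),
}

impl ContractEvent {
    fn event_key(&self) -> Option<u64> {
        match self {
            ContractEvent::V1(e) => Some(e.key),
            ContractEvent::V2(_) => None,
        }
    }

    fn v1(&self) -> Option<&EventV1> {
        match self {
            ContractEvent::V1(e) => Some(e),
            ContractEvent::V2(_) => None,
        }
    }
}

fn main() {
    const NEW_EPOCH_EVENT_KEY: u64 = 2; // illustrative key value
    let events = vec![
        ContractEvent::V2(EventV2 { type_tag: "0x1::some_module::SomeEvent".to_string() }),
        ContractEvent::V1(EventV1 { key: NEW_EPOCH_EVENT_KEY, sequence_number: 0 }),
    ];
    // Keep only keyed V1 events that carry the new-epoch key.
    let new_epoch_events: Vec<&EventV1> = events
        .iter()
        .filter_map(|e| {
            if e.event_key() == Some(NEW_EPOCH_EVENT_KEY) {
                e.v1()
            } else {
                None
            }
        })
        .collect();
    assert_eq!(new_epoch_events.len(), 1);
    assert_eq!(new_epoch_events[0].sequence_number, 0);
}
```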
- assert!(change_set.aggregator_delta_set().is_empty()); + assert!(change_set.aggregator_v1_delta_set().is_empty()); change_set .try_into_storage_change_set() .expect("Conversion from VMChangeSet into ChangeSet should always succeed") diff --git a/config/src/config/state_sync_config.rs b/config/src/config/state_sync_config.rs index 549fc7107aab7..d0e7d86ed26a8 100644 --- a/config/src/config/state_sync_config.rs +++ b/config/src/config/state_sync_config.rs @@ -154,10 +154,14 @@ pub struct StorageServiceConfig { pub max_network_channel_size: u64, /// Maximum number of bytes to send per network message pub max_network_chunk_bytes: u64, + /// Maximum number of active subscriptions (per peer) + pub max_num_active_subscriptions: u64, /// Maximum period (ms) of pending optimistic fetch requests pub max_optimistic_fetch_period_ms: u64, /// Maximum number of state keys and values per chunk pub max_state_chunk_size: u64, + /// Maximum period (ms) of pending subscription requests + pub max_subscription_period_ms: u64, /// Maximum number of transactions per chunk pub max_transaction_chunk_size: u64, /// Maximum number of transaction outputs per chunk @@ -179,8 +183,10 @@ impl Default for StorageServiceConfig { max_lru_cache_size: 500, // At ~0.6MiB per chunk, this should take no more than 0.5GiB max_network_channel_size: 4000, max_network_chunk_bytes: MAX_MESSAGE_SIZE as u64, + max_num_active_subscriptions: 30, max_optimistic_fetch_period_ms: 5000, // 5 seconds max_state_chunk_size: MAX_STATE_CHUNK_SIZE, + max_subscription_period_ms: 30_000, // 30 seconds max_transaction_chunk_size: MAX_TRANSACTION_CHUNK_SIZE, max_transaction_output_chunk_size: MAX_TRANSACTION_OUTPUT_CHUNK_SIZE, min_time_to_ignore_peers_secs: 300, // 5 minutes @@ -252,6 +258,8 @@ pub struct AptosDataClientConfig { pub max_response_timeout_ms: u64, /// Maximum number of state keys and values per chunk pub max_state_chunk_size: u64, + /// Maximum version lag we'll tolerate when sending subscription requests + pub max_subscription_version_lag: u64, /// Maximum number of transactions per chunk pub max_transaction_chunk_size: u64, /// Maximum number of transaction outputs per chunk @@ -277,6 +285,7 @@ impl Default for AptosDataClientConfig { max_optimistic_fetch_version_lag: 50_000, // Assumes 5K TPS for 10 seconds, which should be plenty max_response_timeout_ms: 60_000, // 60 seconds max_state_chunk_size: MAX_STATE_CHUNK_SIZE, + max_subscription_version_lag: 100_000, // Assumes 5K TPS for 20 seconds, which should be plenty max_transaction_chunk_size: MAX_TRANSACTION_CHUNK_SIZE, max_transaction_output_chunk_size: MAX_TRANSACTION_OUTPUT_CHUNK_SIZE, optimistic_fetch_timeout_ms: 5000, // 5 seconds diff --git a/consensus/src/consensusdb/consensusdb_test.rs b/consensus/src/consensusdb/consensusdb_test.rs index 3add1e81e1781..c8d73d2060922 100644 --- a/consensus/src/consensusdb/consensusdb_test.rs +++ b/consensus/src/consensusdb/consensusdb_test.rs @@ -3,7 +3,7 @@ // SPDX-License-Identifier: Apache-2.0 use super::*; -use crate::dag::{CertifiedNode, Node, Vote}; +use crate::dag::{CertifiedNode, Extensions, Node, Vote}; use aptos_consensus_types::{ block::block_test_utils::certificate_for_genesis, common::{Author, Payload}, @@ -93,7 +93,15 @@ fn test_dag() { let tmp_dir = TempPath::new(); let db = ConsensusDB::new(&tmp_dir); - let node = Node::new(1, 1, Author::random(), 123, Payload::empty(false), vec![]); + let node = Node::new( + 1, + 1, + Author::random(), + 123, + Payload::empty(false), + vec![], + Extensions::empty(), + ); 
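The new subscription settings added above to `StorageServiceConfig` and `AptosDataClientConfig` follow the crate's usual pattern of documented fields plus hand-written defaults. A trimmed sketch with just the new fields follows; the field names and default values are copied from the hunks, while the enclosing struct is hypothetical and only for illustration.

```rust
/// Hypothetical container holding only the subscription-related knobs from the patch.
#[derive(Debug, Clone, Copy)]
pub struct SubscriptionConfig {
    /// Maximum number of active subscriptions (per peer)
    pub max_num_active_subscriptions: u64,
    /// Maximum period (ms) of pending subscription requests
    pub max_subscription_period_ms: u64,
    /// Maximum version lag tolerated when sending subscription requests
    pub max_subscription_version_lag: u64,
}

impl Default for SubscriptionConfig {
    fn default() -> Self {
        Self {
            max_num_active_subscriptions: 30,
            max_subscription_period_ms: 30_000,    // 30 seconds
            max_subscription_version_lag: 100_000, // assumes ~5K TPS for 20 seconds
        }
    }
}

fn main() {
    let config = SubscriptionConfig::default();
    assert_eq!(config.max_num_active_subscriptions, 30);
    assert!(config.max_subscription_period_ms >= 1_000);
}
```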
test_dag_type::::Key>(node.digest(), node.clone(), &db); let certified_node = CertifiedNode::new(node.clone(), AggregateSignature::empty()); diff --git a/consensus/src/dag/anchor_election.rs b/consensus/src/dag/anchor_election.rs index c04bbf6998048..ff7c45dc5a72b 100644 --- a/consensus/src/dag/anchor_election.rs +++ b/consensus/src/dag/anchor_election.rs @@ -3,7 +3,7 @@ use aptos_consensus_types::common::{Author, Round}; -pub trait AnchorElection { +pub trait AnchorElection: Send { fn get_anchor(&self, round: Round) -> Author; fn commit(&mut self, round: Round); diff --git a/consensus/src/dag/bootstrap.rs b/consensus/src/dag/bootstrap.rs new file mode 100644 index 0000000000000..a07c5f73f1e00 --- /dev/null +++ b/consensus/src/dag/bootstrap.rs @@ -0,0 +1,117 @@ +// Copyright © Aptos Foundation + +use super::{ + anchor_election::RoundRobinAnchorElection, + dag_driver::DagDriver, + dag_fetcher::{DagFetcher, FetchRequestHandler}, + dag_handler::NetworkHandler, + dag_network::TDAGNetworkSender, + dag_store::Dag, + order_rule::OrderRule, + rb_handler::NodeBroadcastHandler, + storage::DAGStorage, + types::DAGMessage, + CertifiedNode, +}; +use crate::{network::IncomingDAGRequest, state_replication::PayloadClient}; +use aptos_channels::{aptos_channel, message_queues::QueueStyle}; +use aptos_consensus_types::common::Author; +use aptos_infallible::RwLock; +use aptos_reliable_broadcast::{RBNetworkSender, ReliableBroadcast}; +use aptos_types::{ + epoch_state::EpochState, ledger_info::LedgerInfo, validator_signer::ValidatorSigner, +}; +use futures::stream::{AbortHandle, Abortable}; +use std::sync::Arc; +use tokio_retry::strategy::ExponentialBackoff; + +pub fn bootstrap_dag( + self_peer: Author, + signer: ValidatorSigner, + epoch_state: Arc, + latest_ledger_info: LedgerInfo, + storage: Arc, + rb_network_sender: Arc>, + dag_network_sender: Arc, + time_service: aptos_time_service::TimeService, + payload_client: Arc, +) -> ( + AbortHandle, + AbortHandle, + aptos_channel::Sender, + futures_channel::mpsc::UnboundedReceiver>>, +) { + let validators = epoch_state.verifier.get_ordered_account_addresses(); + let current_round = latest_ledger_info.round(); + + let (ordered_nodes_tx, ordered_nodes_rx) = futures_channel::mpsc::unbounded(); + let (dag_rpc_tx, dag_rpc_rx) = aptos_channel::new(QueueStyle::FIFO, 64, None); + + // A backoff policy that starts at 100ms and doubles each iteration. 
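// from_millis(2) produces delays of 2^n milliseconds and factor(50) scales each one,
// so the schedule is 100ms, 200ms, 400ms, ... A quick way to preview it, assuming
// tokio_retry's strategies implement Iterator<Item = Duration>:
//
//     let preview: Vec<std::time::Duration> =
//         ExponentialBackoff::from_millis(2).factor(50).take(3).collect();
//     // preview == [100ms, 200ms, 400ms]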
+ let rb_backoff_policy = ExponentialBackoff::from_millis(2).factor(50); + let rb = Arc::new(ReliableBroadcast::new( + validators.clone(), + rb_network_sender, + rb_backoff_policy, + time_service.clone(), + )); + + let dag = Arc::new(RwLock::new(Dag::new(epoch_state.clone(), storage.clone()))); + + let anchor_election = Box::new(RoundRobinAnchorElection::new(validators)); + let order_rule = OrderRule::new( + epoch_state.clone(), + latest_ledger_info, + dag.clone(), + anchor_election, + ordered_nodes_tx, + ); + + let (dag_fetcher, fetch_requester, node_fetch_waiter, certified_node_fetch_waiter) = + DagFetcher::new( + epoch_state.clone(), + dag_network_sender, + dag.clone(), + time_service.clone(), + ); + let fetch_requester = Arc::new(fetch_requester); + + let dag_driver = DagDriver::new( + self_peer, + epoch_state.clone(), + dag.clone(), + payload_client, + rb, + current_round, + time_service, + storage.clone(), + order_rule, + fetch_requester, + ); + let rb_handler = + NodeBroadcastHandler::new(dag.clone(), signer, epoch_state.clone(), storage.clone()); + let fetch_handler = FetchRequestHandler::new(dag, epoch_state.clone()); + + let dag_handler = NetworkHandler::new( + epoch_state, + dag_rpc_rx, + rb_handler, + dag_driver, + fetch_handler, + node_fetch_waiter, + certified_node_fetch_waiter, + ); + + let (nh_abort_handle, nh_abort_registration) = AbortHandle::new_pair(); + let (df_abort_handle, df_abort_registration) = AbortHandle::new_pair(); + + tokio::spawn(Abortable::new(dag_handler.start(), nh_abort_registration)); + tokio::spawn(Abortable::new(dag_fetcher.start(), df_abort_registration)); + + ( + nh_abort_handle, + df_abort_handle, + dag_rpc_tx, + ordered_nodes_rx, + ) +} diff --git a/consensus/src/dag/dag_driver.rs b/consensus/src/dag/dag_driver.rs index c27c4573d78fc..ae3d398121bc8 100644 --- a/consensus/src/dag/dag_driver.rs +++ b/consensus/src/dag/dag_driver.rs @@ -1,26 +1,41 @@ // Copyright © Aptos Foundation // SPDX-License-Identifier: Apache-2.0 -use super::{storage::DAGStorage, types::DAGMessage}; +use super::{ + dag_fetcher::FetchRequester, + order_rule::OrderRule, + storage::DAGStorage, + types::{CertifiedAck, DAGMessage, Extensions}, + RpcHandler, +}; use crate::{ dag::{ dag_store::Dag, types::{CertificateAckState, CertifiedNode, Node, NodeCertificate, SignatureBuilder}, }, state_replication::PayloadClient, - util::time_service::TimeService, }; +use anyhow::{bail, Ok}; use aptos_consensus_types::common::{Author, Payload}; use aptos_infallible::RwLock; +use aptos_logger::error; use aptos_reliable_broadcast::ReliableBroadcast; +use aptos_time_service::{TimeService, TimeServiceTrait}; use aptos_types::{block_info::Round, epoch_state::EpochState}; use futures::{ future::{AbortHandle, Abortable}, FutureExt, }; use std::sync::Arc; +use thiserror::Error as ThisError; use tokio_retry::strategy::ExponentialBackoff; +#[derive(Debug, ThisError)] +pub enum DagDriverError { + #[error("missing parents")] + MissingParents, +} + pub(crate) struct DagDriver { author: Author, epoch_state: Arc, @@ -28,9 +43,11 @@ pub(crate) struct DagDriver { payload_client: Arc, reliable_broadcast: Arc>, current_round: Round, - time_service: Arc, + time_service: TimeService, rb_abort_handle: Option, storage: Arc, + order_rule: OrderRule, + fetch_requester: Arc, } impl DagDriver { @@ -41,8 +58,10 @@ impl DagDriver { payload_client: Arc, reliable_broadcast: Arc>, current_round: Round, - time_service: Arc, + time_service: TimeService, storage: Arc, + order_rule: OrderRule, + fetch_requester: Arc, ) -> Self { 
// TODO: rebroadcast nodes after recovery Self { @@ -55,24 +74,40 @@ impl DagDriver { time_service, rb_abort_handle: None, storage, + order_rule, + fetch_requester, } } + pub fn try_enter_new_round(&mut self) { + // In case of a new epoch, kickstart building the DAG by entering the next round + // without any parents. + if self.current_round == 0 { + self.enter_new_round(vec![]); + } + // TODO: add logic to handle building DAG from the middle, etc. + } + pub fn add_node(&mut self, node: CertifiedNode) -> anyhow::Result<()> { let mut dag_writer = self.dag.write(); let round = node.metadata().round(); - if dag_writer.all_exists(node.parents_metadata()) { - dag_writer.add_node(node)?; - if self.current_round == round { - let maybe_strong_links = dag_writer - .get_strong_links_for_round(self.current_round, &self.epoch_state.verifier); - drop(dag_writer); - if let Some(strong_links) = maybe_strong_links { - self.enter_new_round(strong_links); - } + + if !dag_writer.all_exists(node.parents_metadata()) { + if let Err(err) = self.fetch_requester.request_for_certified_node(node) { + error!("request to fetch failed: {}", err); + } + bail!(DagDriverError::MissingParents); + } + + dag_writer.add_node(node)?; + if self.current_round == round { + let maybe_strong_links = dag_writer + .get_strong_links_for_round(self.current_round, &self.epoch_state.verifier); + drop(dag_writer); + if let Some(strong_links) = maybe_strong_links { + self.enter_new_round(strong_links); } } - // TODO: handle fetching missing dependencies Ok(()) } @@ -80,7 +115,7 @@ impl DagDriver { // TODO: support pulling payload let payload = Payload::empty(false); // TODO: need to wait to pass median of parents timestamp - let timestamp = self.time_service.get_current_timestamp(); + let timestamp = self.time_service.now_unix_time(); self.current_round += 1; let new_node = Node::new( self.epoch_state.epoch, @@ -89,6 +124,7 @@ impl DagDriver { timestamp.as_micros() as u64, payload, strong_links, + Extensions::empty(), ); self.storage .save_node(&new_node) @@ -115,3 +151,24 @@ impl DagDriver { } } } + +impl RpcHandler for DagDriver { + type Request = CertifiedNode; + type Response = CertifiedAck; + + fn process(&mut self, node: Self::Request) -> anyhow::Result { + let epoch = node.metadata().epoch(); + { + let dag_reader = self.dag.read(); + if dag_reader.exists(node.metadata()) { + return Ok(CertifiedAck::new(epoch)); + } + } + + let node_metadata = node.metadata().clone(); + self.add_node(node) + .map(|_| self.order_rule.process_new_node(&node_metadata))?; + + Ok(CertifiedAck::new(epoch)) + } +} diff --git a/consensus/src/dag/dag_fetcher.rs b/consensus/src/dag/dag_fetcher.rs index f70bacc89b4cf..7ebce618df141 100644 --- a/consensus/src/dag/dag_fetcher.rs +++ b/consensus/src/dag/dag_fetcher.rs @@ -3,7 +3,7 @@ use super::{dag_network::RpcWithFallback, types::NodeMetadata, RpcHandler}; use crate::dag::{ - dag_network::DAGNetworkSender, + dag_network::TDAGNetworkSender, dag_store::Dag, types::{CertifiedNode, FetchResponse, Node, RemoteFetchRequest}, }; @@ -13,14 +13,78 @@ use aptos_infallible::RwLock; use aptos_logger::error; use aptos_time_service::TimeService; use aptos_types::epoch_state::EpochState; -use futures::StreamExt; -use std::{collections::HashMap, sync::Arc, time::Duration}; +use futures::{stream::FuturesUnordered, Stream, StreamExt}; +use std::{ + collections::HashMap, + pin::Pin, + sync::Arc, + task::{Context, Poll}, + time::Duration, +}; use thiserror::Error as ThisError; use tokio::sync::{ mpsc::{Receiver, Sender}, oneshot, 
}; +pub struct FetchWaiter { + rx: Receiver>, + futures: Pin>>>, +} + +impl FetchWaiter { + fn new(rx: Receiver>) -> Self { + Self { + rx, + futures: Box::pin(FuturesUnordered::new()), + } + } +} + +impl Stream for FetchWaiter { + type Item = Result; + + fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { + if let Poll::Ready(Some(rx)) = self.rx.poll_recv(cx) { + self.futures.push(rx); + } + + self.futures.as_mut().poll_next(cx) + } +} + +pub struct FetchRequester { + request_tx: Sender, + node_waiter_tx: Sender>, + certified_node_waiter_tx: Sender>, +} + +impl FetchRequester { + pub fn request_for_node(&self, node: Node) -> anyhow::Result<()> { + let (res_tx, res_rx) = oneshot::channel(); + let fetch_req = LocalFetchRequest::Node(node, res_tx); + self.request_tx + .try_send(fetch_req) + .map_err(|e| anyhow::anyhow!("unable to send node fetch request to channel: {}", e))?; + self.node_waiter_tx.try_send(res_rx)?; + Ok(()) + } + + pub fn request_for_certified_node(&self, node: CertifiedNode) -> anyhow::Result<()> { + let (res_tx, res_rx) = oneshot::channel(); + let fetch_req = LocalFetchRequest::CertifiedNode(node, res_tx); + self.request_tx.try_send(fetch_req).map_err(|e| { + anyhow::anyhow!( + "unable to send certified node fetch request to channel: {}", + e + ) + })?; + self.certified_node_waiter_tx.try_send(res_rx)?; + Ok(()) + } +} + +#[derive(Debug)] pub enum LocalFetchRequest { Node(Node, oneshot::Sender), CertifiedNode(CertifiedNode, oneshot::Sender), @@ -55,9 +119,9 @@ impl LocalFetchRequest { } } -struct DagFetcher { +pub struct DagFetcher { epoch_state: Arc, - network: Arc, + network: Arc, dag: Arc>, request_rx: Receiver, time_service: TimeService, @@ -66,11 +130,18 @@ struct DagFetcher { impl DagFetcher { pub fn new( epoch_state: Arc, - network: Arc, + network: Arc, dag: Arc>, time_service: TimeService, - ) -> (Self, Sender) { + ) -> ( + Self, + FetchRequester, + FetchWaiter, + FetchWaiter, + ) { let (request_tx, request_rx) = tokio::sync::mpsc::channel(16); + let (node_tx, node_rx) = tokio::sync::mpsc::channel(100); + let (certified_node_tx, certified_node_rx) = tokio::sync::mpsc::channel(100); ( Self { epoch_state, @@ -79,7 +150,13 @@ impl DagFetcher { request_rx, time_service, }, - request_tx, + FetchRequester { + request_tx, + node_waiter_tx: node_tx, + certified_node_waiter_tx: certified_node_tx, + }, + FetchWaiter::new(node_rx), + FetchWaiter::new(certified_node_rx), ) } diff --git a/consensus/src/dag/dag_handler.rs b/consensus/src/dag/dag_handler.rs index 3d23fd1bf62ca..e1da8eb1a1b6e 100644 --- a/consensus/src/dag/dag_handler.rs +++ b/consensus/src/dag/dag_handler.rs @@ -1,62 +1,78 @@ // Copyright © Aptos Foundation use super::{ - dag_fetcher::FetchRequestHandler, reliable_broadcast::CertifiedNodeHandler, - storage::DAGStorage, types::TDAGMessage, + dag_driver::DagDriver, + dag_fetcher::{FetchRequestHandler, FetchWaiter}, + types::TDAGMessage, + CertifiedNode, Node, }; use crate::{ - dag::{ - dag_network::RpcHandler, dag_store::Dag, reliable_broadcast::NodeBroadcastHandler, - types::DAGMessage, - }, + dag::{dag_network::RpcHandler, rb_handler::NodeBroadcastHandler, types::DAGMessage}, network::{IncomingDAGRequest, TConsensusMsg}, }; use anyhow::bail; use aptos_channels::aptos_channel; use aptos_consensus_types::common::Author; -use aptos_infallible::RwLock; use aptos_logger::{error, warn}; use aptos_network::protocols::network::RpcError; -use aptos_types::{epoch_state::EpochState, validator_signer::ValidatorSigner}; +use 
aptos_types::epoch_state::EpochState; use bytes::Bytes; use futures::StreamExt; use std::sync::Arc; +use tokio::select; -struct NetworkHandler { +pub(crate) struct NetworkHandler { + epoch_state: Arc, dag_rpc_rx: aptos_channel::Receiver, node_receiver: NodeBroadcastHandler, - certified_node_receiver: CertifiedNodeHandler, + dag_driver: DagDriver, fetch_receiver: FetchRequestHandler, - epoch_state: Arc, + node_fetch_waiter: FetchWaiter, + certified_node_fetch_waiter: FetchWaiter, } impl NetworkHandler { - fn new( - dag: Arc>, - dag_rpc_rx: aptos_channel::Receiver, - signer: ValidatorSigner, + pub fn new( epoch_state: Arc, - storage: Arc, + dag_rpc_rx: aptos_channel::Receiver, + node_receiver: NodeBroadcastHandler, + dag_driver: DagDriver, + fetch_receiver: FetchRequestHandler, + node_fetch_waiter: FetchWaiter, + certified_node_fetch_waiter: FetchWaiter, ) -> Self { Self { + epoch_state, dag_rpc_rx, - node_receiver: NodeBroadcastHandler::new( - dag.clone(), - signer, - epoch_state.clone(), - storage, - ), - certified_node_receiver: CertifiedNodeHandler::new(dag.clone()), - epoch_state: epoch_state.clone(), - fetch_receiver: FetchRequestHandler::new(dag, epoch_state), + node_receiver, + dag_driver, + fetch_receiver, + node_fetch_waiter, + certified_node_fetch_waiter, } } - async fn start(mut self) { + pub async fn start(mut self) { + self.dag_driver.try_enter_new_round(); + // TODO(ibalajiarun): clean up Reliable Broadcast storage periodically. - while let Some(msg) = self.dag_rpc_rx.next().await { - if let Err(e) = self.process_rpc(msg).await { - warn!(error = ?e, "error processing rpc"); + loop { + select! { + Some(msg) = self.dag_rpc_rx.next() => { + if let Err(e) = self.process_rpc(msg).await { + warn!(error = ?e, "error processing rpc"); + } + }, + Some(res) = self.node_fetch_waiter.next() => { + if let Err(e) = res.map_err(|e| anyhow::anyhow!("recv error: {}", e)).and_then(|node| self.node_receiver.process(node)) { + warn!(error = ?e, "error processing node fetch notification"); + } + }, + Some(res) = self.certified_node_fetch_waiter.next() => { + if let Err(e) = res.map_err(|e| anyhow::anyhow!("recv error: {}", e)).and_then(|certified_node| self.dag_driver.process(certified_node)) { + warn!(error = ?e, "error processing certified node fetch notification"); + } + } } } } @@ -78,7 +94,7 @@ impl NetworkHandler { .map(|r| r.into()), DAGMessage::CertifiedNodeMsg(node) => node .verify(&self.epoch_state.verifier) - .and_then(|_| self.certified_node_receiver.process(node)) + .and_then(|_| self.dag_driver.process(node)) .map(|r| r.into()), DAGMessage::FetchRequest(request) => request .verify(&self.epoch_state.verifier) diff --git a/consensus/src/dag/dag_network.rs b/consensus/src/dag/dag_network.rs index 1440aa4b27070..b56511d961d73 100644 --- a/consensus/src/dag/dag_network.rs +++ b/consensus/src/dag/dag_network.rs @@ -2,6 +2,7 @@ use super::types::DAGMessage; use aptos_consensus_types::common::Author; +use aptos_reliable_broadcast::RBNetworkSender; use aptos_time_service::{Interval, TimeService, TimeServiceTrait}; use async_trait::async_trait; use futures::{ @@ -24,7 +25,7 @@ pub trait RpcHandler { } #[async_trait] -pub trait DAGNetworkSender: Send + Sync { +pub trait TDAGNetworkSender: Send + Sync + RBNetworkSender { async fn send_rpc( &self, receiver: Author, @@ -79,7 +80,7 @@ pub struct RpcWithFallback { futures: Pin< Box> + Send>>>>, >, - sender: Arc, + sender: Arc, interval: Pin>, } @@ -89,7 +90,7 @@ impl RpcWithFallback { message: DAGMessage, retry_interval: Duration, rpc_timeout: 
Duration, - sender: Arc, + sender: Arc, time_service: TimeService, ) -> Self { Self { @@ -106,7 +107,7 @@ impl RpcWithFallback { } async fn send_rpc( - sender: Arc, + sender: Arc, peer: Author, message: DAGMessage, timeout: Duration, diff --git a/consensus/src/dag/mod.rs b/consensus/src/dag/mod.rs index 5395518cf1e14..eeddccfe9b07a 100644 --- a/consensus/src/dag/mod.rs +++ b/consensus/src/dag/mod.rs @@ -3,17 +3,18 @@ #![allow(dead_code)] mod anchor_election; +mod bootstrap; mod dag_driver; mod dag_fetcher; mod dag_handler; mod dag_network; mod dag_store; mod order_rule; -mod reliable_broadcast; +mod rb_handler; mod storage; #[cfg(test)] mod tests; mod types; -pub use dag_network::RpcHandler; -pub use types::{CertifiedNode, DAGNetworkMessage, Node, NodeId, Vote}; +pub use dag_network::{RpcHandler, RpcWithFallback, TDAGNetworkSender}; +pub use types::{CertifiedNode, DAGMessage, DAGNetworkMessage, Extensions, Node, NodeId, Vote}; diff --git a/consensus/src/dag/order_rule.rs b/consensus/src/dag/order_rule.rs index 3cf8762441969..d48697d7e715e 100644 --- a/consensus/src/dag/order_rule.rs +++ b/consensus/src/dag/order_rule.rs @@ -46,8 +46,8 @@ impl OrderRule { (r1 ^ r2) & 1 == 0 } - pub fn process_new_node(&mut self, node: &CertifiedNode) { - let round = node.round(); + pub fn process_new_node(&mut self, node_metadata: &NodeMetadata) { + let round = node_metadata.round(); // If the node comes from the proposal round in the current instance, it can't trigger any ordering if round <= self.lowest_unordered_anchor_round || Self::check_parity(round, self.lowest_unordered_anchor_round) diff --git a/consensus/src/dag/reliable_broadcast.rs b/consensus/src/dag/rb_handler.rs similarity index 79% rename from consensus/src/dag/reliable_broadcast.rs rename to consensus/src/dag/rb_handler.rs index c7b5592a1bb6c..09bddf19623be 100644 --- a/consensus/src/dag/reliable_broadcast.rs +++ b/consensus/src/dag/rb_handler.rs @@ -1,11 +1,7 @@ // Copyright © Aptos Foundation // SPDX-License-Identifier: Apache-2.0 -use super::{ - storage::DAGStorage, - types::{CertifiedAck, CertifiedNode}, - NodeId, -}; +use super::{storage::DAGStorage, NodeId}; use crate::dag::{ dag_network::RpcHandler, dag_store::Dag, @@ -29,7 +25,7 @@ pub enum NodeBroadcastHandleError { NotEnoughParents, } -pub struct NodeBroadcastHandler { +pub(crate) struct NodeBroadcastHandler { dag: Arc>, votes_by_round_peer: BTreeMap>, signer: ValidatorSigner, @@ -161,46 +157,3 @@ impl RpcHandler for NodeBroadcastHandler { } } } - -#[derive(Debug, ThisError)] -pub enum CertifiedNodeHandleError { - #[error("node already exists")] - NodeExists, - #[error("missing parents")] - MissingParents, -} - -pub struct CertifiedNodeHandler { - dag: Arc>, -} - -impl CertifiedNodeHandler { - pub fn new(dag: Arc>) -> Self { - Self { dag } - } -} - -impl RpcHandler for CertifiedNodeHandler { - type Request = CertifiedNode; - type Response = CertifiedAck; - - fn process(&mut self, node: Self::Request) -> anyhow::Result { - let epoch = node.metadata().epoch(); - { - let dag_reader = self.dag.read(); - if dag_reader.exists(node.metadata()) { - return Ok(CertifiedAck::new(node.metadata().epoch())); - } - - if !dag_reader.all_exists(node.parents_metadata()) { - // TODO(ibalajiarun): implement fetching logic. 
- bail!(CertifiedNodeHandleError::MissingParents); - } - } - - let mut dag_writer = self.dag.write(); - dag_writer.add_node(node)?; - - Ok(CertifiedAck::new(epoch)) - } -} diff --git a/consensus/src/dag/tests/dag_driver_tests.rs b/consensus/src/dag/tests/dag_driver_tests.rs new file mode 100644 index 0000000000000..6fb85f2399e97 --- /dev/null +++ b/consensus/src/dag/tests/dag_driver_tests.rs @@ -0,0 +1,129 @@ +// Copyright © Aptos Foundation + +use crate::{ + dag::{ + anchor_election::RoundRobinAnchorElection, + dag_driver::{DagDriver, DagDriverError}, + dag_fetcher::DagFetcher, + dag_network::{RpcWithFallback, TDAGNetworkSender}, + dag_store::Dag, + order_rule::OrderRule, + tests::{dag_test::MockStorage, helpers::new_certified_node}, + types::{CertifiedAck, DAGMessage}, + RpcHandler, + }, + test_utils::MockPayloadManager, +}; +use aptos_consensus_types::common::Author; +use aptos_infallible::RwLock; +use aptos_reliable_broadcast::{RBNetworkSender, ReliableBroadcast}; +use aptos_time_service::TimeService; +use aptos_types::{ + epoch_state::EpochState, ledger_info::LedgerInfo, validator_verifier::random_validator_verifier, +}; +use async_trait::async_trait; +use claims::{assert_ok, assert_ok_eq}; +use std::{sync::Arc, time::Duration}; +use tokio_retry::strategy::ExponentialBackoff; + +struct MockNetworkSender {} + +#[async_trait] +impl RBNetworkSender for MockNetworkSender { + async fn send_rb_rpc( + &self, + _receiver: Author, + _messagee: DAGMessage, + _timeout: Duration, + ) -> anyhow::Result { + unimplemented!() + } +} + +#[async_trait] +impl TDAGNetworkSender for MockNetworkSender { + async fn send_rpc( + &self, + _receiver: Author, + _message: DAGMessage, + _timeout: Duration, + ) -> anyhow::Result { + unimplemented!() + } + + /// Given a list of potential responders, sending rpc to get response from any of them and could + /// fallback to more in case of failures. 
+ async fn send_rpc_with_fallbacks( + &self, + _responders: Vec, + _message: DAGMessage, + _retry_interval: Duration, + _rpc_timeout: Duration, + ) -> RpcWithFallback { + unimplemented!() + } +} + +#[test] +fn test_certified_node_handler() { + let (signers, validator_verifier) = random_validator_verifier(4, None, false); + let epoch_state = Arc::new(EpochState { + epoch: 1, + verifier: validator_verifier, + }); + let storage = Arc::new(MockStorage::new()); + let dag = Arc::new(RwLock::new(Dag::new(epoch_state.clone(), storage.clone()))); + + let zeroth_round_node = new_certified_node(0, signers[0].author(), vec![]); + + let network_sender = Arc::new(MockNetworkSender {}); + let rb = Arc::new(ReliableBroadcast::new( + signers.iter().map(|s| s.author()).collect(), + network_sender.clone(), + ExponentialBackoff::from_millis(10), + aptos_time_service::TimeService::mock(), + )); + let time_service = TimeService::mock(); + let (ordered_nodes_sender, _) = futures_channel::mpsc::unbounded(); + let validators = signers.iter().map(|vs| vs.author()).collect(); + let order_rule = OrderRule::new( + epoch_state.clone(), + LedgerInfo::mock_genesis(None), + dag.clone(), + Box::new(RoundRobinAnchorElection::new(validators)), + ordered_nodes_sender, + ); + + let (_, fetch_requester, _, _) = DagFetcher::new( + epoch_state.clone(), + network_sender, + dag.clone(), + aptos_time_service::TimeService::mock(), + ); + let fetch_requester = Arc::new(fetch_requester); + + let mut driver = DagDriver::new( + signers[0].author(), + epoch_state, + dag, + Arc::new(MockPayloadManager::new(None)), + rb, + 1, + time_service, + storage, + order_rule, + fetch_requester, + ); + + // expect an ack for a valid message + assert_ok!(driver.process(zeroth_round_node.clone())); + // expect an ack if the same message is sent again + assert_ok_eq!(driver.process(zeroth_round_node), CertifiedAck::new(1)); + + let parent_node = new_certified_node(0, signers[1].author(), vec![]); + let invalid_node = new_certified_node(1, signers[0].author(), vec![parent_node.certificate()]); + assert_eq!( + driver.process(invalid_node).unwrap_err().to_string(), + DagDriverError::MissingParents.to_string() + ); +} diff --git a/consensus/src/dag/tests/dag_network_test.rs b/consensus/src/dag/tests/dag_network_test.rs index 22398bf0e9624..2bf07bd8c1db9 100644 --- a/consensus/src/dag/tests/dag_network_test.rs +++ b/consensus/src/dag/tests/dag_network_test.rs @@ -1,12 +1,13 @@ // Copyright © Aptos Foundation use crate::dag::{ - dag_network::{DAGNetworkSender, RpcWithFallback}, + dag_network::{RpcWithFallback, TDAGNetworkSender}, types::{DAGMessage, TestAck, TestMessage}, }; use anyhow::{anyhow, bail}; use aptos_consensus_types::common::Author; use aptos_infallible::Mutex; +use aptos_reliable_broadcast::RBNetworkSender; use aptos_time_service::{TimeService, TimeServiceTrait}; use aptos_types::validator_verifier::random_validator_verifier; use async_trait::async_trait; @@ -28,7 +29,19 @@ struct MockDAGNetworkSender { } #[async_trait] -impl DAGNetworkSender for MockDAGNetworkSender { +impl RBNetworkSender for MockDAGNetworkSender { + async fn send_rb_rpc( + &self, + _receiver: Author, + _message: DAGMessage, + _timeout: Duration, + ) -> anyhow::Result { + unimplemented!() + } +} + +#[async_trait] +impl TDAGNetworkSender for MockDAGNetworkSender { async fn send_rpc( &self, receiver: Author, diff --git a/consensus/src/dag/tests/helpers.rs b/consensus/src/dag/tests/helpers.rs index b6eb7a1c9bad8..84cad58ce6de1 100644 --- a/consensus/src/dag/tests/helpers.rs +++ 
b/consensus/src/dag/tests/helpers.rs @@ -1,6 +1,6 @@ // Copyright © Aptos Foundation -use crate::dag::types::{CertifiedNode, Node, NodeCertificate}; +use crate::dag::types::{CertifiedNode, Extensions, Node, NodeCertificate}; use aptos_consensus_types::common::{Author, Payload, Round}; use aptos_types::aggregate_signature::AggregateSignature; @@ -9,7 +9,15 @@ pub(crate) fn new_certified_node( author: Author, parents: Vec, ) -> CertifiedNode { - let node = Node::new(1, round, author, 0, Payload::empty(false), parents); + let node = Node::new( + 1, + round, + author, + 0, + Payload::empty(false), + parents, + Extensions::empty(), + ); CertifiedNode::new(node, AggregateSignature::empty()) } @@ -19,5 +27,13 @@ pub(crate) fn new_node( author: Author, parents: Vec, ) -> Node { - Node::new(0, round, author, timestamp, Payload::empty(false), parents) + Node::new( + 0, + round, + author, + timestamp, + Payload::empty(false), + parents, + Extensions::empty(), + ) } diff --git a/consensus/src/dag/tests/integration_tests.rs b/consensus/src/dag/tests/integration_tests.rs new file mode 100644 index 0000000000000..9fab57de6026d --- /dev/null +++ b/consensus/src/dag/tests/integration_tests.rs @@ -0,0 +1,236 @@ +// Copyright © Aptos Foundation + +use super::dag_test; +use crate::{ + dag::{bootstrap::bootstrap_dag, CertifiedNode}, + network::{DAGNetworkSenderImpl, IncomingDAGRequest, NetworkSender}, + network_interface::{ConsensusMsg, ConsensusNetworkClient, DIRECT_SEND, RPC}, + network_tests::{NetworkPlayground, TwinId}, + test_utils::{consensus_runtime, MockPayloadManager, MockStorage}, +}; +use aptos_channels::{aptos_channel, message_queues::QueueStyle}; +use aptos_config::network_id::{NetworkId, PeerNetworkId}; +use aptos_consensus_types::common::Author; +use aptos_logger::debug; +use aptos_network::{ + application::interface::NetworkClient, + peer_manager::{conn_notifs_channel, ConnectionRequestSender, PeerManagerRequestSender}, + protocols::{ + network::{self, Event, NetworkEvents, NewNetworkEvents, NewNetworkSender}, + wire::handshake::v1::ProtocolIdSet, + }, + transport::ConnectionMetadata, + ProtocolId, +}; +use aptos_time_service::TimeService; +use aptos_types::{ + epoch_state::EpochState, + validator_signer::ValidatorSigner, + validator_verifier::{random_validator_verifier, ValidatorVerifier}, +}; +use claims::assert_gt; +use futures::{ + stream::{select, AbortHandle, Select}, + StreamExt, +}; +use futures_channel::mpsc::UnboundedReceiver; +use maplit::hashmap; +use std::sync::Arc; + +struct DagBootstrapUnit { + nh_abort_handle: AbortHandle, + df_abort_handle: AbortHandle, + dag_rpc_tx: aptos_channel::Sender, + network_events: + Box, aptos_channels::Receiver>>>, +} + +impl DagBootstrapUnit { + fn make( + self_peer: Author, + epoch: u64, + signer: ValidatorSigner, + storage: Arc, + network: NetworkSender, + time_service: TimeService, + network_events: Box< + Select, aptos_channels::Receiver>>, + >, + ) -> (Self, UnboundedReceiver>>) { + let epoch_state = EpochState { + epoch, + verifier: storage.get_validator_set().into(), + }; + let dag_storage = dag_test::MockStorage::new(); + + let network = Arc::new(DAGNetworkSenderImpl::new(Arc::new(network))); + + let payload_client = Arc::new(MockPayloadManager::new(None)); + + let (nh_abort_handle, df_abort_handle, dag_rpc_tx, ordered_nodes_rx) = bootstrap_dag( + self_peer, + signer, + Arc::new(epoch_state), + storage.get_ledger_info(), + Arc::new(dag_storage), + network.clone(), + network.clone(), + time_service, + payload_client, + ); + + ( + Self { + 
nh_abort_handle, + df_abort_handle, + dag_rpc_tx, + network_events, + }, + ordered_nodes_rx, + ) + } + + async fn start(mut self) { + loop { + match self.network_events.next().await.unwrap() { + Event::RpcRequest(sender, msg, protocol, response_sender) => match msg { + ConsensusMsg::DAGMessage(msg) => { + debug!("handling RPC..."); + self.dag_rpc_tx.push(sender, IncomingDAGRequest { + req: msg, + sender, + protocol, + response_sender, + }) + }, + _ => unreachable!("expected only DAG-related messages"), + }, + _ => panic!("Unexpected Network Event"), + } + .unwrap() + } + } +} + +fn create_network( + playground: &mut NetworkPlayground, + id: usize, + author: Author, + validators: ValidatorVerifier, +) -> ( + NetworkSender, + Box, aptos_channels::Receiver>>>, +) { + let (network_reqs_tx, network_reqs_rx) = aptos_channel::new(QueueStyle::FIFO, 8, None); + let (connection_reqs_tx, _) = aptos_channel::new(QueueStyle::FIFO, 8, None); + let (consensus_tx, consensus_rx) = aptos_channel::new(QueueStyle::FIFO, 8, None); + let (_conn_mgr_reqs_tx, conn_mgr_reqs_rx) = aptos_channels::new_test(8); + let (_, conn_status_rx) = conn_notifs_channel::new(); + let network_sender = network::NetworkSender::new( + PeerManagerRequestSender::new(network_reqs_tx), + ConnectionRequestSender::new(connection_reqs_tx), + ); + let network_client = NetworkClient::new( + DIRECT_SEND.into(), + RPC.into(), + hashmap! {NetworkId::Validator => network_sender}, + playground.peer_protocols(), + ); + let consensus_network_client = ConsensusNetworkClient::new(network_client); + let network_events = NetworkEvents::new(consensus_rx, conn_status_rx, None); + + let (self_sender, self_receiver) = aptos_channels::new_test(1000); + let network = NetworkSender::new(author, consensus_network_client, self_sender, validators); + + let twin_id = TwinId { id, author }; + + playground.add_node(twin_id, consensus_tx, network_reqs_rx, conn_mgr_reqs_rx); + + let all_network_events = Box::new(select(network_events, self_receiver)); + + (network, all_network_events) +} + +fn bootstrap_nodes( + playground: &mut NetworkPlayground, + signers: Vec, + validators: ValidatorVerifier, +) -> ( + Vec, + Vec>>>, +) { + let peers_and_metadata = playground.peer_protocols(); + let (nodes, ordered_node_receivers) = signers + .iter() + .enumerate() + .map(|(id, signer)| { + let peer_id = signer.author(); + let mut conn_meta = ConnectionMetadata::mock(peer_id); + conn_meta.application_protocols = ProtocolIdSet::from_iter([ + ProtocolId::ConsensusDirectSendJson, + ProtocolId::ConsensusDirectSendBcs, + ProtocolId::ConsensusRpcBcs, + ]); + let peer_network_id = PeerNetworkId::new(NetworkId::Validator, peer_id); + peers_and_metadata + .insert_connection_metadata(peer_network_id, conn_meta) + .unwrap(); + + let (_, storage) = MockStorage::start_for_testing((&validators).into()); + let (network, network_events) = + create_network(playground, id, signer.author(), validators.clone()); + + DagBootstrapUnit::make( + signer.author(), + 1, + signer.clone(), + storage, + network, + aptos_time_service::TimeService::real(), + network_events, + ) + }) + .unzip(); + + (nodes, ordered_node_receivers) +} + +#[tokio::test] +async fn test_dag_e2e() { + let num_nodes = 7; + let runtime = consensus_runtime(); + let mut playground = NetworkPlayground::new(runtime.handle().clone()); + let (signers, validators) = random_validator_verifier(num_nodes, None, false); + let author_indexes = validators.address_to_validator_index().clone(); + + let (nodes, mut ordered_node_receivers) = 
bootstrap_nodes(&mut playground, signers, validators); + for node in nodes { + runtime.spawn(node.start()); + } + + runtime.spawn(playground.start()); + + let display = |node: &Arc| { + ( + node.metadata().round(), + *author_indexes.get(node.metadata().author()).unwrap(), + ) + }; + + for _ in 1..10 { + let mut all_ordered = vec![]; + for receiver in &mut ordered_node_receivers { + let block = receiver.next().await.unwrap(); + all_ordered.push(block) + } + let first: Vec<_> = all_ordered.first().unwrap().iter().map(display).collect(); + assert_gt!(first.len(), 0, "must order nodes"); + debug!("Nodes: {:?}", first); + for ordered in all_ordered.iter() { + let a: Vec<_> = ordered.iter().map(display).collect(); + assert_eq!(a.len(), first.len(), "length should match"); + assert_eq!(a, first); + } + } + runtime.shutdown_background(); +} diff --git a/consensus/src/dag/tests/mod.rs b/consensus/src/dag/tests/mod.rs index cc3267dc26bf9..115bcd045144a 100644 --- a/consensus/src/dag/tests/mod.rs +++ b/consensus/src/dag/tests/mod.rs @@ -1,10 +1,12 @@ // Copyright © Aptos Foundation // SPDX-License-Identifier: Apache-2.0 +mod dag_driver_tests; mod dag_network_test; mod dag_test; mod fetcher_test; mod helpers; +mod integration_tests; mod order_rule_tests; -mod reliable_broadcast_tests; +mod rb_handler_tests; mod types_test; diff --git a/consensus/src/dag/tests/order_rule_tests.rs b/consensus/src/dag/tests/order_rule_tests.rs index a56cfad56f5ef..a9b47656c5a80 100644 --- a/consensus/src/dag/tests/order_rule_tests.rs +++ b/consensus/src/dag/tests/order_rule_tests.rs @@ -167,7 +167,7 @@ proptest! { let dag = Arc::new(RwLock::new(dag.clone())); let (mut order_rule, mut receiver) = create_order_rule(epoch_state.clone(), dag); for idx in seq { - order_rule.process_new_node(&flatten_nodes[idx]); + order_rule.process_new_node(flatten_nodes[idx].metadata()); } let mut ordered = vec![]; while let Ok(Some(mut ordered_nodes)) = receiver.try_next() { @@ -241,7 +241,7 @@ fn test_order_rule_basic() { let dag = Arc::new(RwLock::new(dag.clone())); let (mut order_rule, mut receiver) = create_order_rule(epoch_state, dag); for node in nodes.iter().flatten().flatten() { - order_rule.process_new_node(node); + order_rule.process_new_node(node.metadata()); } let expected_order = vec![ // anchor (1, 0) has 1 votes, anchor (3, 1) has 2 votes and a path to (1, 0) diff --git a/consensus/src/dag/tests/reliable_broadcast_tests.rs b/consensus/src/dag/tests/rb_handler_tests.rs similarity index 78% rename from consensus/src/dag/tests/reliable_broadcast_tests.rs rename to consensus/src/dag/tests/rb_handler_tests.rs index 07d4229b93093..994f97a0474c0 100644 --- a/consensus/src/dag/tests/reliable_broadcast_tests.rs +++ b/consensus/src/dag/tests/rb_handler_tests.rs @@ -3,16 +3,10 @@ use crate::dag::{ dag_store::Dag, - reliable_broadcast::{ - CertifiedNodeHandleError, CertifiedNodeHandler, NodeBroadcastHandleError, - NodeBroadcastHandler, - }, + rb_handler::{NodeBroadcastHandleError, NodeBroadcastHandler}, storage::DAGStorage, - tests::{ - dag_test::MockStorage, - helpers::{new_certified_node, new_node}, - }, - types::{CertifiedAck, NodeCertificate}, + tests::{dag_test::MockStorage, helpers::new_node}, + types::NodeCertificate, NodeId, RpcHandler, Vote, }; use aptos_infallible::RwLock; @@ -149,30 +143,3 @@ fn test_node_broadcast_receiver_storage() { assert_ok!(rb_receiver.gc_before_round(2)); assert_eq!(storage.get_votes().unwrap().len(), 0); } - -#[test] -fn test_certified_node_receiver() { - let (signers, validator_verifier) = 
random_validator_verifier(4, None, false); - let epoch_state = Arc::new(EpochState { - epoch: 1, - verifier: validator_verifier, - }); - let storage = Arc::new(MockStorage::new()); - let dag = Arc::new(RwLock::new(Dag::new(epoch_state, storage))); - - let zeroth_round_node = new_certified_node(0, signers[0].author(), vec![]); - - let mut rb_receiver = CertifiedNodeHandler::new(dag); - - // expect an ack for a valid message - assert_ok!(rb_receiver.process(zeroth_round_node.clone())); - // expect an ack if the same message is sent again - assert_ok_eq!(rb_receiver.process(zeroth_round_node), CertifiedAck::new(1)); - - let parent_node = new_certified_node(0, signers[1].author(), vec![]); - let invalid_node = new_certified_node(1, signers[0].author(), vec![parent_node.certificate()]); - assert_eq!( - rb_receiver.process(invalid_node).unwrap_err().to_string(), - CertifiedNodeHandleError::MissingParents.to_string() - ); -} diff --git a/consensus/src/dag/tests/types_test.rs b/consensus/src/dag/tests/types_test.rs index 93fa3e7c0c165..786a91da9d0ba 100644 --- a/consensus/src/dag/tests/types_test.rs +++ b/consensus/src/dag/tests/types_test.rs @@ -4,8 +4,8 @@ use super::helpers::new_node; use crate::dag::{ tests::helpers::new_certified_node, types::{ - CertifiedNode, DagSnapshotBitmask, Node, NodeCertificate, NodeMetadata, RemoteFetchRequest, - TDAGMessage, + CertifiedNode, DagSnapshotBitmask, Extensions, Node, NodeCertificate, NodeMetadata, + RemoteFetchRequest, TDAGMessage, }, }; use aptos_consensus_types::common::Payload; @@ -24,6 +24,7 @@ fn test_node_verify() { NodeMetadata::new_for_test(0, 0, signers[0].author(), 0, HashValue::random()), Payload::empty(false), vec![], + Extensions::empty(), ); assert_eq!( invalid_node @@ -33,20 +34,20 @@ fn test_node_verify() { "invalid digest" ); - // Well-formed round 0 node - let zeroth_round_node = new_node(0, 10, signers[0].author(), vec![]); - assert_ok!(zeroth_round_node.verify(&validator_verifier)); + // Well-formed round 1 node + let first_round_node = new_node(1, 10, signers[0].author(), vec![]); + assert_ok!(first_round_node.verify(&validator_verifier)); - // Round 1 node without parents + // Round 2 node without parents let node = new_node(2, 20, signers[0].author(), vec![]); assert_eq!( node.verify(&validator_verifier).unwrap_err().to_string(), "not enough parents to satisfy voting power", ); - // Round 1 + // Round 1 cert let parent_cert = NodeCertificate::new( - zeroth_round_node.metadata().clone(), + first_round_node.metadata().clone(), AggregateSignature::empty(), ); let node = new_node(3, 20, signers[0].author(), vec![parent_cert]); @@ -64,6 +65,7 @@ fn test_certified_node_verify() { NodeMetadata::new_for_test(0, 0, signers[0].author(), 0, HashValue::random()), Payload::empty(false), vec![], + Extensions::empty(), ); let invalid_certified_node = CertifiedNode::new(invalid_node, AggregateSignature::empty()); assert_eq!( diff --git a/consensus/src/dag/types.rs b/consensus/src/dag/types.rs index cef5236bee87d..be66b5a5e1993 100644 --- a/consensus/src/dag/types.rs +++ b/consensus/src/dag/types.rs @@ -38,6 +38,18 @@ impl TDAGMessage for CertifiedAck { } } +#[derive(Clone, Serialize, Deserialize, CryptoHasher, Debug, PartialEq)] +pub enum Extensions { + Empty, + // Reserved for future extensions such as randomness shares +} + +impl Extensions { + pub fn empty() -> Self { + Self::Empty + } +} + #[derive(Serialize)] struct NodeWithoutDigest<'a> { epoch: u64, @@ -46,6 +58,7 @@ struct NodeWithoutDigest<'a> { timestamp: u64, payload: &'a Payload, 
parents: &'a Vec, + extensions: &'a Extensions, } impl<'a> CryptoHash for NodeWithoutDigest<'a> { @@ -68,6 +81,7 @@ impl<'a> From<&'a Node> for NodeWithoutDigest<'a> { timestamp: node.metadata.timestamp, payload: &node.payload, parents: &node.parents, + extensions: &node.extensions, } } } @@ -131,6 +145,7 @@ pub struct Node { metadata: NodeMetadata, payload: Payload, parents: Vec, + extensions: Extensions, } impl Node { @@ -141,9 +156,17 @@ impl Node { timestamp: u64, payload: Payload, parents: Vec, + extensions: Extensions, ) -> Self { - let digest = - Self::calculate_digest_internal(epoch, round, author, timestamp, &payload, &parents); + let digest = Self::calculate_digest_internal( + epoch, + round, + author, + timestamp, + &payload, + &parents, + &extensions, + ); Self { metadata: NodeMetadata { @@ -157,6 +180,7 @@ impl Node { }, payload, parents, + extensions, } } @@ -165,11 +189,13 @@ impl Node { metadata: NodeMetadata, payload: Payload, parents: Vec, + extensions: Extensions, ) -> Self { Self { metadata, payload, parents, + extensions, } } @@ -181,6 +207,7 @@ impl Node { timestamp: u64, payload: &Payload, parents: &Vec, + extensions: &Extensions, ) -> HashValue { let node_with_out_digest = NodeWithoutDigest { epoch, @@ -189,6 +216,7 @@ impl Node { timestamp, payload, parents, + extensions, }; node_with_out_digest.hash() } @@ -201,6 +229,7 @@ impl Node { self.metadata.timestamp, &self.payload, &self.parents, + &self.extensions, ) } @@ -248,8 +277,10 @@ impl TDAGMessage for Node { let current_round = self.metadata().round(); - if current_round == 0 { - ensure!(self.parents().is_empty(), "invalid parents for round 0"); + ensure!(current_round > 0, "current round cannot be zero"); + + if current_round == 1 { + ensure!(self.parents().is_empty(), "invalid parents for round 1"); return Ok(()); } diff --git a/consensus/src/liveness/leader_reputation_test.rs b/consensus/src/liveness/leader_reputation_test.rs index ad1f58ad3eacf..a40bd15db5268 100644 --- a/consensus/src/liveness/leader_reputation_test.rs +++ b/consensus/src/liveness/leader_reputation_test.rs @@ -440,7 +440,7 @@ impl MockDbReader { self.events.lock().push(EventWithVersion::new( *idx, - ContractEvent::new( + ContractEvent::new_v1( new_block_event_key(), *idx, TypeTag::Struct(Box::new(NewBlockEvent::struct_tag())), diff --git a/consensus/src/network.rs b/consensus/src/network.rs index 93f59c1c1111c..e09ef67df522a 100644 --- a/consensus/src/network.rs +++ b/consensus/src/network.rs @@ -5,7 +5,7 @@ use crate::{ block_storage::tracing::{observe_block, BlockStage}, counters, - dag::DAGNetworkMessage, + dag::{DAGMessage, DAGNetworkMessage, RpcWithFallback, TDAGNetworkSender}, logging::LogEvent, monitor, network_interface::{ConsensusMsg, ConsensusNetworkClient}, @@ -29,10 +29,12 @@ use aptos_network::{ protocols::{network::Event, rpc::error::RpcError}, ProtocolId, }; +use aptos_reliable_broadcast::{RBMessage, RBNetworkSender}; use aptos_types::{ account_address::AccountAddress, epoch_change::EpochChangeProof, ledger_info::LedgerInfoWithSignatures, validator_verifier::ValidatorVerifier, }; +use async_trait::async_trait; use bytes::Bytes; use fail::fail_point; use futures::{ @@ -43,6 +45,7 @@ use futures::{ use serde::{de::DeserializeOwned, Serialize}; use std::{ mem::{discriminant, Discriminant}, + sync::Arc, time::Duration, }; @@ -404,6 +407,79 @@ impl QuorumStoreSender for NetworkSender { } } +// TODO: this can be improved +#[derive(Clone)] +pub struct DAGNetworkSenderImpl { + sender: Arc, + time_service: 
aptos_time_service::TimeService, +} + +impl DAGNetworkSenderImpl { + pub fn new(sender: Arc) -> Self { + Self { + sender, + time_service: aptos_time_service::TimeService::real(), + } + } +} + +#[async_trait] +impl TDAGNetworkSender for DAGNetworkSenderImpl { + async fn send_rpc( + &self, + receiver: Author, + message: DAGMessage, + timeout: Duration, + ) -> anyhow::Result { + self.sender + .consensus_network_client + .send_rpc(receiver, message.into_network_message(), timeout) + .await + .map_err(|e| anyhow!("invalid rpc response: {}", e)) + .and_then(TConsensusMsg::from_network_message) + } + + /// Given a list of potential responders, sending rpc to get response from any of them and could + /// fallback to more in case of failures. + async fn send_rpc_with_fallbacks( + &self, + responders: Vec, + message: DAGMessage, + retry_interval: Duration, + rpc_timeout: Duration, + ) -> RpcWithFallback { + let sender = Arc::new(self.clone()); + RpcWithFallback::new( + responders, + message, + retry_interval, + rpc_timeout, + sender, + self.time_service.clone(), + ) + } +} + +#[async_trait] +impl RBNetworkSender for DAGNetworkSenderImpl +where + M: RBMessage + TConsensusMsg + 'static, +{ + async fn send_rb_rpc( + &self, + receiver: Author, + message: M, + timeout: Duration, + ) -> anyhow::Result { + self.sender + .consensus_network_client + .send_rpc(receiver, message.into_network_message(), timeout) + .await + .map_err(|e| anyhow!("invalid rpc response: {}", e)) + .and_then(|msg| TConsensusMsg::from_network_message(msg)) + } +} + pub struct NetworkTask { consensus_messages_tx: aptos_channel::Sender< (AccountAddress, Discriminant), diff --git a/consensus/src/quorum_store/batch_generator.rs b/consensus/src/quorum_store/batch_generator.rs index 4259e684d5aa9..ca7c579cc89d0 100644 --- a/consensus/src/quorum_store/batch_generator.rs +++ b/consensus/src/quorum_store/batch_generator.rs @@ -224,8 +224,9 @@ impl BatchGenerator { .flatten() .cloned() .collect(); - + counters::BATCH_PULL_EXCLUDED_TXNS.observe(exclude_txns.len() as f64); trace!("QS: excluding txs len: {:?}", exclude_txns.len()); + let mut pulled_txns = self .mempool_proxy .pull_internal( @@ -250,6 +251,9 @@ impl BatchGenerator { } else { counters::PULLED_TXNS_COUNT.inc(); counters::PULLED_TXNS_NUM.observe(pulled_txns.len() as f64); + if pulled_txns.len() as u64 == max_count { + counters::BATCH_PULL_FULL_TXNS.observe(max_count as f64) + } } counters::BATCH_CREATION_DURATION.observe_duration(self.last_end_batch_time.elapsed()); diff --git a/consensus/src/quorum_store/counters.rs b/consensus/src/quorum_store/counters.rs index d31cab768583a..3f24be22b10fb 100644 --- a/consensus/src/quorum_store/counters.rs +++ b/consensus/src/quorum_store/counters.rs @@ -292,6 +292,24 @@ pub static PULLED_EMPTY_TXNS_COUNT: Lazy = Lazy::new(|| { .unwrap() }); +/// Number of txns (equals max_count) for each time the pull for batches returns full. +pub static BATCH_PULL_FULL_TXNS: Lazy = Lazy::new(|| { + register_avg_counter( + "quorum_store_batch_pull_full_txns", + "Number of txns (equals max_count) for each time the pull for batches returns full.", + ) +}); + +/// Histogram for the number of txns excluded on pull for batches. +pub static BATCH_PULL_EXCLUDED_TXNS: Lazy = Lazy::new(|| { + register_histogram!( + "quorum_store_batch_pull_excluded_txns", + "Histogram for the number of txns excluded on pull for batches.", + TRANSACTION_COUNT_BUCKETS.clone() + ) + .unwrap() +}); + /// Count of the created batches since last restart. 
pub static CREATED_BATCHES_COUNT: Lazy = Lazy::new(|| { register_int_counter!( diff --git a/crates/aptos-api-tester/Cargo.toml b/crates/aptos-api-tester/Cargo.toml new file mode 100644 index 0000000000000..33177e2f18daa --- /dev/null +++ b/crates/aptos-api-tester/Cargo.toml @@ -0,0 +1,34 @@ +[package] +name = "aptos-api-tester" +description = "Aptos developer API tester" +version = "0.1.0" + +# Workspace inherited keys +authors = { workspace = true } +edition = { workspace = true } +homepage = { workspace = true } +license = { workspace = true } +publish = { workspace = true } +repository = { workspace = true } +rust-version = { workspace = true } + +[dependencies] +anyhow = { workspace = true } +aptos-api-types = { workspace = true } +aptos-cached-packages = { workspace = true } +aptos-framework = { workspace = true } +aptos-logger = { workspace = true } +aptos-network = { workspace = true } +aptos-push-metrics = { workspace = true } +aptos-rest-client = { workspace = true } +aptos-sdk = { workspace = true } +aptos-types = { workspace = true } +futures = { workspace = true } +move-core-types = { workspace = true } +once_cell = { workspace = true } +prometheus = { workspace = true } +rand = { workspace = true } +serde = { workspace = true } +serde_json = { workspace = true } +tokio = { workspace = true } +url = { workspace = true } diff --git a/crates/aptos-api-tester/src/consts.rs b/crates/aptos-api-tester/src/consts.rs new file mode 100644 index 0000000000000..76fd1a022d169 --- /dev/null +++ b/crates/aptos-api-tester/src/consts.rs @@ -0,0 +1,68 @@ +// Copyright © Aptos Foundation + +use crate::utils::NetworkName; +use once_cell::sync::Lazy; +use std::{env, time::Duration}; +use url::Url; + +// Node and faucet constants + +// TODO: consider making this a CLI argument +pub static NETWORK_NAME: Lazy = Lazy::new(|| { + env::var("NETWORK_NAME") + .ok() + .and_then(|s| s.parse().ok()) + .unwrap_or(NetworkName::Devnet) +}); + +pub static DEVNET_NODE_URL: Lazy = + Lazy::new(|| Url::parse("https://fullnode.devnet.aptoslabs.com").unwrap()); + +pub static DEVNET_FAUCET_URL: Lazy = + Lazy::new(|| Url::parse("https://faucet.devnet.aptoslabs.com").unwrap()); + +pub static TESTNET_NODE_URL: Lazy = + Lazy::new(|| Url::parse("https://fullnode.testnet.aptoslabs.com").unwrap()); + +pub static TESTNET_FAUCET_URL: Lazy = + Lazy::new(|| Url::parse("https://faucet.testnet.aptoslabs.com").unwrap()); + +pub const FUND_AMOUNT: u64 = 100_000_000; + +// Persistency check constants + +// How long a persistent check runs for. +pub static PERSISTENCY_TIMEOUT: Lazy = Lazy::new(|| { + env::var("PERSISTENCY_TIMEOUT") + .ok() + .and_then(|s| s.parse().ok()) + .map(Duration::from_secs) + .unwrap_or(Duration::from_secs(30)) +}); + +// Wait time between tries during a persistent check. +pub static SLEEP_PER_CYCLE: Lazy = Lazy::new(|| { + env::var("SLEEP_PER_CYCLE") + .ok() + .and_then(|s| s.parse().ok()) + .map(Duration::from_millis) + .unwrap_or(Duration::from_millis(100)) +}); + +// Runtime constants + +// The number of threads to use for running tests. +pub static NUM_THREADS: Lazy = Lazy::new(|| { + env::var("NUM_THREADS") + .ok() + .and_then(|s| s.parse().ok()) + .unwrap_or(4) +}); + +// The size of the stack for each thread. 
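// Like NUM_THREADS above, this value is read once from the environment via Lazy and
// consumed by the tokio runtime builder in main.rs (worker_threads / thread_stack_size).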
+pub static STACK_SIZE: Lazy = Lazy::new(|| { + env::var("STACK_SIZE") + .ok() + .and_then(|s| s.parse().ok()) + .unwrap_or(4 * 1024 * 1024) +}); diff --git a/crates/aptos-api-tester/src/counters.rs b/crates/aptos-api-tester/src/counters.rs new file mode 100644 index 0000000000000..1f6305dd644b6 --- /dev/null +++ b/crates/aptos-api-tester/src/counters.rs @@ -0,0 +1,75 @@ +// Copyright © Aptos Foundation + +use once_cell::sync::Lazy; +use prometheus::{register_histogram_vec, Histogram, HistogramVec}; + +pub static API_TEST_SUCCESS: Lazy = Lazy::new(|| { + register_histogram_vec!( + "api_test_success", + "Number of user flows which succesfully passed", + &["test_name", "network_name", "run_id"], + ) + .unwrap() +}); + +pub fn test_success(test_name: &str, network_name: &str, run_id: &str) -> Histogram { + API_TEST_SUCCESS.with_label_values(&[test_name, network_name, run_id]) +} + +pub static API_TEST_FAIL: Lazy = Lazy::new(|| { + register_histogram_vec!( + "api_test_fail", + "Number of user flows which failed checks", + &["test_name", "network_name", "run_id"], + ) + .unwrap() +}); + +pub fn test_fail(test_name: &str, network_name: &str, run_id: &str) -> Histogram { + API_TEST_FAIL.with_label_values(&[test_name, network_name, run_id]) +} + +pub static API_TEST_ERROR: Lazy = Lazy::new(|| { + register_histogram_vec!("api_test_error", "Number of user flows which crashed", &[ + "test_name", + "network_name", + "run_id" + ],) + .unwrap() +}); + +pub fn test_error(test_name: &str, network_name: &str, run_id: &str) -> Histogram { + API_TEST_ERROR.with_label_values(&[test_name, network_name, run_id]) +} + +pub static API_TEST_LATENCY: Lazy = Lazy::new(|| { + register_histogram_vec!( + "api_test_latency", + "Time it takes to complete a user flow", + &["test_name", "network_name", "run_id", "result"], + ) + .unwrap() +}); + +pub fn test_latency(test_name: &str, network_name: &str, run_id: &str, result: &str) -> Histogram { + API_TEST_LATENCY.with_label_values(&[test_name, network_name, run_id, result]) +} + +pub static API_TEST_STEP_LATENCY: Lazy = Lazy::new(|| { + register_histogram_vec!( + "api_test_step_latency", + "Time it takes to complete a user flow step", + &["test_name", "step_name", "network_name", "run_id", "result"], + ) + .unwrap() +}); + +pub fn test_step_latency( + test_name: &str, + step_name: &str, + network_name: &str, + run_id: &str, + result: &str, +) -> Histogram { + API_TEST_STEP_LATENCY.with_label_values(&[test_name, step_name, network_name, run_id, result]) +} diff --git a/crates/aptos-api-tester/src/macros.rs b/crates/aptos-api-tester/src/macros.rs new file mode 100644 index 0000000000000..f40d2fd093b58 --- /dev/null +++ b/crates/aptos-api-tester/src/macros.rs @@ -0,0 +1,18 @@ +// Copyright © Aptos Foundation + +#[macro_export] +macro_rules! 
time_fn { + ($func:expr, $($arg:expr), *) => {{ + // start timer + let start = tokio::time::Instant::now(); + + // call the flow + let result = $func($($arg),+).await; + + // end timer + let time = (tokio::time::Instant::now() - start).as_micros() as f64; + + // return + (result, time) + }}; +} diff --git a/crates/aptos-api-tester/src/main.rs b/crates/aptos-api-tester/src/main.rs new file mode 100644 index 0000000000000..947bb07e7ef3f --- /dev/null +++ b/crates/aptos-api-tester/src/main.rs @@ -0,0 +1,97 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +#![forbid(unsafe_code)] + +mod consts; +mod counters; +mod persistent_check; +mod strings; +mod tests; +mod tokenv1_client; +mod utils; +#[macro_use] +mod macros; + +use crate::utils::{NetworkName, TestName}; +use anyhow::Result; +use aptos_logger::{info, Level, Logger}; +use aptos_push_metrics::MetricsPusher; +use consts::{NETWORK_NAME, NUM_THREADS, STACK_SIZE}; +use futures::future::join_all; +use std::time::{SystemTime, UNIX_EPOCH}; +use tokio::runtime::{Builder, Runtime}; + +async fn test_flows(runtime: &Runtime, network_name: NetworkName) -> Result<()> { + let run_id = SystemTime::now() + .duration_since(UNIX_EPOCH)? + .as_secs() + .to_string(); + info!( + "----- STARTING TESTS FOR {} WITH RUN ID {} -----", + network_name.to_string(), + run_id + ); + + // Flow 1: New account + let test_time = run_id.clone(); + let handle_newaccount = runtime.spawn(async move { + TestName::NewAccount.run(network_name, &test_time).await; + }); + + // Flow 2: Coin transfer + let test_time = run_id.clone(); + let handle_cointransfer = runtime.spawn(async move { + TestName::CoinTransfer.run(network_name, &test_time).await; + }); + + // Flow 3: NFT transfer + let test_time = run_id.clone(); + let handle_nfttransfer = runtime.spawn(async move { + TestName::TokenV1Transfer + .run(network_name, &test_time) + .await; + }); + + // Flow 4: Publishing module + let test_time = run_id.clone(); + let handle_publishmodule = runtime.spawn(async move { + TestName::PublishModule.run(network_name, &test_time).await; + }); + + // Flow 5: View function + let test_time = run_id.clone(); + let handle_viewfunction = runtime.spawn(async move { + TestName::ViewFunction.run(network_name, &test_time).await; + }); + + join_all(vec![ + handle_newaccount, + handle_cointransfer, + handle_nfttransfer, + handle_publishmodule, + handle_viewfunction, + ]) + .await; + Ok(()) +} + +fn main() -> Result<()> { + // create runtime + let runtime = Builder::new_multi_thread() + .worker_threads(*NUM_THREADS) + .enable_all() + .thread_stack_size(*STACK_SIZE) + .build()?; + + // log metrics + Logger::builder().level(Level::Info).build(); + let _mp = MetricsPusher::start_for_local_run("api-tester"); + + // run tests + runtime.block_on(async { + let _ = test_flows(&runtime, *NETWORK_NAME).await; + }); + + Ok(()) +} diff --git a/crates/aptos-api-tester/src/persistent_check.rs b/crates/aptos-api-tester/src/persistent_check.rs new file mode 100644 index 0000000000000..5a000c125ceee --- /dev/null +++ b/crates/aptos-api-tester/src/persistent_check.rs @@ -0,0 +1,226 @@ +// Copyright © Aptos Foundation + +// Persistent checking is a mechanism to increase tolerancy to eventual consistency issues. 
In our +// earlier tests we have observed that parallel runs of the flows returned higher failure rates +// than serial runs, and these extra failures displayed the following pattern: 1) the flow submits +// a transaction to the API (such as account creation), 2) the flow reads the state from the API, +// and gets a result that does not include the transaction. We attribute this to the second call +// ending up on a different node which is not yet up to sync. Therefore, for state checks, we +// repeat the whole check for a period of time until it is successful, and throw a failure only if +// it fails to succeed. Note that every time a check fails we will still get a failure log. + +// TODO: The need for having a different persistent check wrapper for each function signature is +// due to a lack of overloading in Rust. Consider using macros to reduce code duplication. + +use crate::{ + consts::{PERSISTENCY_TIMEOUT, SLEEP_PER_CYCLE}, + strings::ERROR_COULD_NOT_CHECK, + tokenv1_client::TokenClient, + utils::TestFailure, +}; +use anyhow::anyhow; +use aptos_api_types::HexEncodedBytes; +use aptos_rest_client::Client; +use aptos_sdk::types::LocalAccount; +use aptos_types::account_address::AccountAddress; +use futures::Future; +use tokio::time::{sleep, Instant}; + +pub async fn account<'a, 'b, F, Fut>( + step: &str, + f: F, + client: &'a Client, + account: &'b LocalAccount, +) -> Result<(), TestFailure> +where + F: Fn(&'a Client, &'b LocalAccount) -> Fut, + Fut: Future>, +{ + // set a default error in case checks never start + let mut result: Result<(), TestFailure> = Err(could_not_check(step)); + let timer = Instant::now(); + + // try to get a good result + while Instant::now().duration_since(timer) < *PERSISTENCY_TIMEOUT { + result = f(client, account).await; + if result.is_ok() { + break; + } + sleep(*SLEEP_PER_CYCLE).await; + } + + // return last failure if no good result occurs + result +} + +pub async fn address<'a, F, Fut>( + step: &str, + f: F, + client: &'a Client, + address: AccountAddress, +) -> Result<(), TestFailure> +where + F: Fn(&'a Client, AccountAddress) -> Fut, + Fut: Future>, +{ + // set a default error in case checks never start + let mut result: Result<(), TestFailure> = Err(could_not_check(step)); + let timer = Instant::now(); + + // try to get a good result + while Instant::now().duration_since(timer) < *PERSISTENCY_TIMEOUT { + result = f(client, address).await; + if result.is_ok() { + break; + } + sleep(*SLEEP_PER_CYCLE).await; + } + + // return last failure if no good result occurs + result +} + +pub async fn address_address<'a, F, Fut>( + step: &str, + f: F, + client: &'a Client, + address: AccountAddress, + address2: AccountAddress, +) -> Result<(), TestFailure> +where + F: Fn(&'a Client, AccountAddress, AccountAddress) -> Fut, + Fut: Future>, +{ + // set a default error in case checks never start + let mut result: Result<(), TestFailure> = Err(could_not_check(step)); + let timer = Instant::now(); + + // try to get a good result + while Instant::now().duration_since(timer) < *PERSISTENCY_TIMEOUT { + result = f(client, address, address2).await; + if result.is_ok() { + break; + } + sleep(*SLEEP_PER_CYCLE).await; + } + + // return last failure if no good result occurs + result +} + +pub async fn address_bytes<'a, 'b, F, Fut>( + step: &str, + f: F, + client: &'a Client, + address: AccountAddress, + bytes: &'b HexEncodedBytes, +) -> Result<(), TestFailure> +where + F: Fn(&'a Client, AccountAddress, &'b HexEncodedBytes) -> Fut, + Fut: Future>, +{ + // set a default error in 
case checks never start + let mut result: Result<(), TestFailure> = Err(could_not_check(step)); + let timer = Instant::now(); + + // try to get a good result + while Instant::now().duration_since(timer) < *PERSISTENCY_TIMEOUT { + result = f(client, address, bytes).await; + if result.is_ok() { + break; + } + sleep(*SLEEP_PER_CYCLE).await; + } + + // return last failure if no good result occurs + result +} + +pub async fn address_version<'a, F, Fut>( + step: &str, + f: F, + client: &'a Client, + address: AccountAddress, + version: u64, +) -> Result<(), TestFailure> +where + F: Fn(&'a Client, AccountAddress, u64) -> Fut, + Fut: Future>, +{ + // set a default error in case checks never start + let mut result: Result<(), TestFailure> = Err(could_not_check(step)); + let timer = Instant::now(); + + // try to get a good result + while Instant::now().duration_since(timer) < *PERSISTENCY_TIMEOUT { + result = f(client, address, version).await; + if result.is_ok() { + break; + } + sleep(*SLEEP_PER_CYCLE).await; + } + + // return last failure if no good result occurs + result +} + +pub async fn token_address<'a, F, Fut>( + step: &str, + f: F, + token_client: &'a TokenClient<'a>, + address: AccountAddress, +) -> Result<(), TestFailure> +where + F: Fn(&'a TokenClient<'a>, AccountAddress) -> Fut, + Fut: Future>, +{ + // set a default error in case checks never start + let mut result: Result<(), TestFailure> = Err(could_not_check(step)); + let timer = Instant::now(); + + // try to get a good result + while Instant::now().duration_since(timer) < *PERSISTENCY_TIMEOUT { + result = f(token_client, address).await; + if result.is_ok() { + break; + } + sleep(*SLEEP_PER_CYCLE).await; + } + + // return last failure if no good result occurs + result +} + +pub async fn token_address_address<'a, F, Fut>( + step: &str, + f: F, + token_client: &'a TokenClient<'a>, + address: AccountAddress, + address2: AccountAddress, +) -> Result<(), TestFailure> +where + F: Fn(&'a TokenClient<'a>, AccountAddress, AccountAddress) -> Fut, + Fut: Future>, +{ + // set a default error in case checks never start + let mut result: Result<(), TestFailure> = Err(could_not_check(step)); + let timer = Instant::now(); + + // try to get a good result + while Instant::now().duration_since(timer) < *PERSISTENCY_TIMEOUT { + result = f(token_client, address, address2).await; + if result.is_ok() { + break; + } + sleep(*SLEEP_PER_CYCLE).await; + } + + // return last failure if no good result occurs + result +} + +// Utils + +fn could_not_check(step: &str) -> TestFailure { + anyhow!(format!("{} in step: {}", ERROR_COULD_NOT_CHECK, step)).into() +} diff --git a/crates/aptos-api-tester/src/strings.rs b/crates/aptos-api-tester/src/strings.rs new file mode 100644 index 0000000000000..99dbb40c312a9 --- /dev/null +++ b/crates/aptos-api-tester/src/strings.rs @@ -0,0 +1,59 @@ +// Copyright © Aptos Foundation + +// Fail messages + +pub const FAIL_WRONG_ACCOUNT_DATA: &str = "wrong account data"; +pub const FAIL_WRONG_BALANCE: &str = "wrong balance"; +pub const FAIL_WRONG_BALANCE_AT_VERSION: &str = "wrong balance at version"; +pub const FAIL_WRONG_COLLECTION_DATA: &str = "wrong collection data"; +pub const FAIL_WRONG_MESSAGE: &str = "wrong message"; +pub const FAIL_WRONG_MODULE: &str = "wrong module"; +pub const FAIL_WRONG_TOKEN_BALANCE: &str = "wrong token balance"; +pub const FAIL_WRONG_TOKEN_DATA: &str = "wrong token data"; + +// Error messages + +pub const ERROR_BAD_BALANCE_STRING: &str = "bad balance string"; +pub const ERROR_COULD_NOT_BUILD_PACKAGE: &str = 
"failed to build package"; +pub const ERROR_COULD_NOT_CHECK: &str = "persistency check never started"; +pub const ERROR_COULD_NOT_CREATE_ACCOUNT: &str = "failed to create account"; +pub const ERROR_COULD_NOT_CREATE_AND_SUBMIT_TRANSACTION: &str = + "failed to create and submit transaction"; +pub const ERROR_COULD_NOT_FINISH_TRANSACTION: &str = "failed to finish transaction"; +pub const ERROR_COULD_NOT_FUND_ACCOUNT: &str = "failed to fund account"; +pub const ERROR_COULD_NOT_SERIALIZE: &str = "failed to serialize"; +pub const ERROR_COULD_NOT_VIEW: &str = "view function failed"; +pub const ERROR_NO_ACCOUNT_DATA: &str = "can't find account data"; +pub const ERROR_NO_BALANCE: &str = "can't find account balance"; +pub const ERROR_NO_BALANCE_STRING: &str = "the API did not return a balance string"; +pub const ERROR_NO_BYTECODE: &str = "can't find bytecode"; +pub const ERROR_NO_COLLECTION_DATA: &str = "can't find collection data"; +pub const ERROR_NO_MESSAGE: &str = "can't find message"; +pub const ERROR_NO_METADATA: &str = "can't find metadata"; +pub const ERROR_NO_MODULE: &str = "can't find module"; +pub const ERROR_NO_TOKEN_BALANCE: &str = "can't find token balance"; +pub const ERROR_NO_TOKEN_DATA: &str = "can't find token data"; +pub const ERROR_NO_VERSION: &str = "can't find transaction version"; + +// Step names + +pub const SETUP: &str = "setup"; +pub const CHECK_ACCOUNT_DATA: &str = "check_account_data"; +pub const FUND: &str = "fund"; +pub const CHECK_ACCOUNT_BALANCE: &str = "check_account_balance"; +pub const TRANSFER_COINS: &str = "transfer_coins"; +pub const CHECK_ACCOUNT_BALANCE_AT_VERSION: &str = "check_account_balance_at_version"; +pub const CREATE_COLLECTION: &str = "create_collection"; +pub const CHECK_COLLECTION_METADATA: &str = "check_collection_metadata"; +pub const CREATE_TOKEN: &str = "create_token"; +pub const CHECK_TOKEN_METADATA: &str = "check_token_metadata"; +pub const CHECK_SENDER_BALANCE: &str = "check_sender_balance"; +pub const OFFER_TOKEN: &str = "offer_token"; +pub const CLAIM_TOKEN: &str = "claim_token"; +pub const CHECK_RECEIVER_BALANCE: &str = "check_receiver_balance"; +pub const BUILD_MODULE: &str = "build_module"; +pub const PUBLISH_MODULE: &str = "publish_module"; +pub const CHECK_MODULE_DATA: &str = "check_module_data"; +pub const SET_MESSAGE: &str = "set_message"; +pub const CHECK_MESSAGE: &str = "check_message"; +pub const CHECK_VIEW_ACCOUNT_BALANCE: &str = "check_view_account_balance"; diff --git a/crates/aptos-api-tester/src/tests/coin_transfer.rs b/crates/aptos-api-tester/src/tests/coin_transfer.rs new file mode 100644 index 0000000000000..3496a59ff662f --- /dev/null +++ b/crates/aptos-api-tester/src/tests/coin_transfer.rs @@ -0,0 +1,265 @@ +// Copyright © Aptos Foundation + +use crate::{ + consts::FUND_AMOUNT, + persistent_check, + strings::{ + CHECK_ACCOUNT_BALANCE, CHECK_ACCOUNT_BALANCE_AT_VERSION, CHECK_ACCOUNT_DATA, + ERROR_COULD_NOT_CREATE_ACCOUNT, ERROR_COULD_NOT_CREATE_AND_SUBMIT_TRANSACTION, + ERROR_COULD_NOT_FINISH_TRANSACTION, ERROR_COULD_NOT_FUND_ACCOUNT, ERROR_NO_BALANCE, + ERROR_NO_VERSION, FAIL_WRONG_BALANCE, FAIL_WRONG_BALANCE_AT_VERSION, SETUP, TRANSFER_COINS, + }, + time_fn, + utils::{ + check_balance, create_account, create_and_fund_account, emit_step_metrics, NetworkName, + TestFailure, TestName, + }, +}; +use anyhow::{anyhow, Result}; +use aptos_api_types::U64; +use aptos_logger::error; +use aptos_rest_client::Client; +use aptos_sdk::{coin_client::CoinClient, types::LocalAccount}; +use aptos_types::account_address::AccountAddress; + 
+const TRANSFER_AMOUNT: u64 = 1_000;
+
+/// Tests coin transfer. Checks that:
+/// - receiver balance reflects transferred amount
+/// - receiver balance shows correct amount at the previous version
+pub async fn test(network_name: NetworkName, run_id: &str) -> Result<(), TestFailure> {
+    // setup
+    let (client, mut account, receiver) = emit_step_metrics(
+        time_fn!(setup, network_name),
+        TestName::CoinTransfer,
+        SETUP,
+        network_name,
+        run_id,
+    )?;
+    let coin_client = CoinClient::new(&client);
+
+    // persistently check that API returns correct account data (account and receiver balances)
+    emit_step_metrics(
+        time_fn!(
+            persistent_check::address_address,
+            CHECK_ACCOUNT_DATA,
+            check_account_data,
+            &client,
+            account.address(),
+            receiver
+        ),
+        TestName::CoinTransfer,
+        CHECK_ACCOUNT_DATA,
+        network_name,
+        run_id,
+    )?;
+
+    // transfer coins to the receiver
+    let version = emit_step_metrics(
+        time_fn!(
+            transfer_coins,
+            &client,
+            &coin_client,
+            &mut account,
+            receiver
+        ),
+        TestName::CoinTransfer,
+        TRANSFER_COINS,
+        network_name,
+        run_id,
+    )?;
+
+    // persistently check that receiver balance is correct
+    emit_step_metrics(
+        time_fn!(
+            persistent_check::address,
+            CHECK_ACCOUNT_BALANCE,
+            check_account_balance,
+            &client,
+            receiver
+        ),
+        TestName::CoinTransfer,
+        CHECK_ACCOUNT_BALANCE,
+        network_name,
+        run_id,
+    )?;
+
+    // persistently check that the receiver balance is correct at the previous version
+    emit_step_metrics(
+        time_fn!(
+            persistent_check::address_version,
+            CHECK_ACCOUNT_BALANCE_AT_VERSION,
+            check_account_balance_at_version,
+            &client,
+            receiver,
+            version
+        ),
+        TestName::CoinTransfer,
+        CHECK_ACCOUNT_BALANCE_AT_VERSION,
+        network_name,
+        run_id,
+    )?;
+
+    Ok(())
+}
+
+// Steps
+
+async fn setup(
+    network_name: NetworkName,
+) -> Result<(Client, LocalAccount, AccountAddress), TestFailure> {
+    // spin up clients
+    let client = network_name.get_client();
+    let faucet_client = network_name.get_faucet_client();
+
+    // create account
+    let account = match create_and_fund_account(&faucet_client, TestName::CoinTransfer).await {
+        Ok(account) => account,
+        Err(e) => {
+            error!(
+                "test: coin_transfer part: setup ERROR: {}, with error {:?}",
+                ERROR_COULD_NOT_FUND_ACCOUNT, e
+            );
+            return Err(e.into());
+        },
+    };
+
+    // create receiver
+    let receiver = match create_account(&faucet_client, TestName::CoinTransfer).await {
+        Ok(account) => account.address(),
+        Err(e) => {
+            error!(
+                "test: coin_transfer part: setup ERROR: {}, with error {:?}",
+                ERROR_COULD_NOT_CREATE_ACCOUNT, e
+            );
+            return Err(e.into());
+        },
+    };
+
+    Ok((client, account, receiver))
+}
+
+async fn check_account_data(
+    client: &Client,
+    account: AccountAddress,
+    receiver: AccountAddress,
+) -> Result<(), TestFailure> {
+    check_balance(TestName::CoinTransfer, client, account, U64(FUND_AMOUNT)).await?;
+    check_balance(TestName::CoinTransfer, client, receiver, U64(0)).await?;
+
+    Ok(())
+}
+
+async fn transfer_coins(
+    client: &Client,
+    coin_client: &CoinClient<'_>,
+    account: &mut LocalAccount,
+    receiver: AccountAddress,
+) -> Result<u64, TestFailure> {
+    // create transaction
+    let pending_txn = match coin_client
+        .transfer(account, receiver, TRANSFER_AMOUNT, None)
+        .await
+    {
+        Ok(pending_txn) => pending_txn,
+        Err(e) => {
+            error!(
+                "test: coin_transfer part: transfer_coins ERROR: {}, with error {:?}",
+                ERROR_COULD_NOT_CREATE_AND_SUBMIT_TRANSACTION, e
+            );
+            return Err(e.into());
+        },
+    };
+
+    // wait and get version
+    let response = match client.wait_for_transaction(&pending_txn).await {
+        Ok(response) =>
response, + Err(e) => { + error!( + "test: coin_transfer part: transfer_coins ERROR: {}, with error {:?}", + ERROR_COULD_NOT_FINISH_TRANSACTION, e + ); + return Err(e.into()); + }, + }; + + let version = match response.inner().version() { + Some(version) => version, + None => { + error!( + "test: coin_transfer part: transfer_coins ERROR: {}", + ERROR_NO_VERSION + ); + return Err(anyhow!(ERROR_NO_VERSION).into()); + }, + }; + + // return version + Ok(version) +} + +async fn check_account_balance( + client: &Client, + address: AccountAddress, +) -> Result<(), TestFailure> { + // expected + let expected = U64(TRANSFER_AMOUNT); + + // actual + let actual = match client.get_account_balance(address).await { + Ok(response) => response.into_inner().coin.value, + Err(e) => { + error!( + "test: coin_transfer part: check_account_balance ERROR: {}, with error {:?}", + ERROR_NO_BALANCE, e + ); + return Err(e.into()); + }, + }; + + // compare + if expected != actual { + error!( + "test: coin_transfer part: check_account_balance FAIL: {}, expected {:?}, got {:?}", + FAIL_WRONG_BALANCE, expected, actual + ); + return Err(TestFailure::Fail(FAIL_WRONG_BALANCE)); + } + + Ok(()) +} + +async fn check_account_balance_at_version( + client: &Client, + address: AccountAddress, + transaction_version: u64, +) -> Result<(), TestFailure> { + // expected + let expected = U64(0); + + // actual + let actual = match client + .get_account_balance_at_version(address, transaction_version - 1) + .await + { + Ok(response) => response.into_inner().coin.value, + Err(e) => { + error!( + "test: coin_transfer part: check_account_balance_at_version ERROR: {}, with error {:?}", + ERROR_NO_BALANCE, e + ); + return Err(e.into()); + }, + }; + + // compare + if expected != actual { + error!( + "test: coin_transfer part: check_account_balance_at_version FAIL: {}, expected {:?}, got {:?}", + FAIL_WRONG_BALANCE_AT_VERSION, expected, actual + ); + return Err(TestFailure::Fail(FAIL_WRONG_BALANCE_AT_VERSION)); + } + + Ok(()) +} diff --git a/crates/aptos-api-tester/src/tests/mod.rs b/crates/aptos-api-tester/src/tests/mod.rs new file mode 100644 index 0000000000000..73e66cdf0ef58 --- /dev/null +++ b/crates/aptos-api-tester/src/tests/mod.rs @@ -0,0 +1,7 @@ +// Copyright © Aptos Foundation + +pub mod coin_transfer; +pub mod new_account; +pub mod publish_module; +pub mod tokenv1_transfer; +pub mod view_function; diff --git a/crates/aptos-api-tester/src/tests/new_account.rs b/crates/aptos-api-tester/src/tests/new_account.rs new file mode 100644 index 0000000000000..feff0153c74c1 --- /dev/null +++ b/crates/aptos-api-tester/src/tests/new_account.rs @@ -0,0 +1,147 @@ +// Copyright © Aptos Foundation + +use crate::{ + consts::FUND_AMOUNT, + persistent_check, + strings::{ + CHECK_ACCOUNT_BALANCE, CHECK_ACCOUNT_DATA, ERROR_COULD_NOT_CREATE_ACCOUNT, + ERROR_COULD_NOT_FUND_ACCOUNT, ERROR_NO_ACCOUNT_DATA, FAIL_WRONG_ACCOUNT_DATA, FUND, SETUP, + }, + time_fn, + utils::{check_balance, create_account, emit_step_metrics, NetworkName, TestFailure, TestName}, +}; +use aptos_api_types::U64; +use aptos_logger::error; +use aptos_rest_client::{Account, Client, FaucetClient}; +use aptos_sdk::types::LocalAccount; +use aptos_types::account_address::AccountAddress; + +/// Tests new account creation. 
Checks that: +/// - account data exists +/// - account balance reflects funded amount +pub async fn test(network_name: NetworkName, run_id: &str) -> Result<(), TestFailure> { + // setup + let (client, faucet_client, account) = emit_step_metrics( + time_fn!(setup, network_name), + TestName::NewAccount, + SETUP, + network_name, + run_id, + )?; + + // persistently check that API returns correct account data (auth key and sequence number) + emit_step_metrics( + time_fn!( + persistent_check::account, + CHECK_ACCOUNT_DATA, + check_account_data, + &client, + &account + ), + TestName::NewAccount, + CHECK_ACCOUNT_DATA, + network_name, + run_id, + )?; + + // fund account + emit_step_metrics( + time_fn!(fund, &faucet_client, account.address()), + TestName::NewAccount, + FUND, + network_name, + run_id, + )?; + + // persistently check that account balance is correct + emit_step_metrics( + time_fn!( + persistent_check::address, + CHECK_ACCOUNT_BALANCE, + check_account_balance, + &client, + account.address() + ), + TestName::NewAccount, + CHECK_ACCOUNT_BALANCE, + network_name, + run_id, + )?; + + Ok(()) +} + +// Steps + +async fn setup( + network_name: NetworkName, +) -> Result<(Client, FaucetClient, LocalAccount), TestFailure> { + // spin up clients + let client = network_name.get_client(); + let faucet_client = network_name.get_faucet_client(); + + // create account + let account = match create_account(&faucet_client, TestName::NewAccount).await { + Ok(account) => account, + Err(e) => { + error!( + "test: new_account part: setup ERROR: {}, with error {:?}", + ERROR_COULD_NOT_CREATE_ACCOUNT, e + ); + return Err(e.into()); + }, + }; + + Ok((client, faucet_client, account)) +} + +async fn fund(faucet_client: &FaucetClient, address: AccountAddress) -> Result<(), TestFailure> { + // fund account + if let Err(e) = faucet_client.fund(address, FUND_AMOUNT).await { + error!( + "test: new_account part: fund ERROR: {}, with error {:?}", + ERROR_COULD_NOT_FUND_ACCOUNT, e + ); + return Err(e.into()); + } + + Ok(()) +} + +async fn check_account_data(client: &Client, account: &LocalAccount) -> Result<(), TestFailure> { + // expected + let expected = Account { + authentication_key: account.authentication_key(), + sequence_number: account.sequence_number(), + }; + + // actual + let actual = match client.get_account(account.address()).await { + Ok(response) => response.into_inner(), + Err(e) => { + error!( + "test: new_account part: check_account_data ERROR: {}, with error {:?}", + ERROR_NO_ACCOUNT_DATA, e + ); + return Err(e.into()); + }, + }; + + // compare + if expected != actual { + error!( + "test: new_account part: check_account_data FAIL: {}, expected {:?}, got {:?}", + FAIL_WRONG_ACCOUNT_DATA, expected, actual + ); + return Err(TestFailure::Fail(FAIL_WRONG_ACCOUNT_DATA)); + } + + Ok(()) +} + +async fn check_account_balance( + client: &Client, + address: AccountAddress, +) -> Result<(), TestFailure> { + check_balance(TestName::NewAccount, client, address, U64(FUND_AMOUNT)).await +} diff --git a/crates/aptos-api-tester/src/tests/publish_module.rs b/crates/aptos-api-tester/src/tests/publish_module.rs new file mode 100644 index 0000000000000..620395a29bbfc --- /dev/null +++ b/crates/aptos-api-tester/src/tests/publish_module.rs @@ -0,0 +1,385 @@ +// Copyright © Aptos Foundation + +use crate::{ + consts::FUND_AMOUNT, + persistent_check, + strings::{ + BUILD_MODULE, CHECK_ACCOUNT_DATA, CHECK_MESSAGE, CHECK_MODULE_DATA, + ERROR_COULD_NOT_BUILD_PACKAGE, ERROR_COULD_NOT_CREATE_AND_SUBMIT_TRANSACTION, + 
ERROR_COULD_NOT_FINISH_TRANSACTION, ERROR_COULD_NOT_FUND_ACCOUNT, + ERROR_COULD_NOT_SERIALIZE, ERROR_NO_BYTECODE, ERROR_NO_MESSAGE, ERROR_NO_METADATA, + ERROR_NO_MODULE, FAIL_WRONG_MESSAGE, FAIL_WRONG_MODULE, PUBLISH_MODULE, SETUP, SET_MESSAGE, + }, + time_fn, + tokenv1_client::{build_and_submit_transaction, TransactionOptions}, + utils::{ + check_balance, create_and_fund_account, emit_step_metrics, NetworkName, TestFailure, + TestName, + }, +}; +use anyhow::{anyhow, Result}; +use aptos_api_types::{HexEncodedBytes, U64}; +use aptos_cached_packages::aptos_stdlib::EntryFunctionCall; +use aptos_framework::{BuildOptions, BuiltPackage}; +use aptos_logger::error; +use aptos_rest_client::Client; +use aptos_sdk::{bcs, types::LocalAccount}; +use aptos_types::{ + account_address::AccountAddress, + transaction::{EntryFunction, TransactionPayload}, +}; +use move_core_types::{ident_str, language_storage::ModuleId}; +use std::{collections::BTreeMap, path::PathBuf}; + +static MODULE_NAME: &str = "message"; +static TEST_MESSAGE: &str = "test message"; + +/// Tests module publishing and interaction. Checks that: +/// - can publish module +/// - module data exists +/// - can interact with module +/// - interaction is reflected correctly +pub async fn test(network_name: NetworkName, run_id: &str) -> Result<(), TestFailure> { + // setup + let (client, mut account) = emit_step_metrics( + time_fn!(setup, network_name), + TestName::PublishModule, + SETUP, + network_name, + run_id, + )?; + + // persistently check that API returns correct account data (auth key and sequence number) + emit_step_metrics( + time_fn!( + persistent_check::address, + CHECK_ACCOUNT_DATA, + check_account_data, + &client, + account.address() + ), + TestName::PublishModule, + CHECK_ACCOUNT_DATA, + network_name, + run_id, + )?; + + // build module + let package = emit_step_metrics( + time_fn!(build_module, account.address()), + TestName::PublishModule, + BUILD_MODULE, + network_name, + run_id, + )?; + + // publish module + let blob = emit_step_metrics( + time_fn!(publish_module, &client, &mut account, package), + TestName::PublishModule, + PUBLISH_MODULE, + network_name, + run_id, + )?; + + // persistently check that API returns correct module package data + emit_step_metrics( + time_fn!( + persistent_check::address_bytes, + CHECK_MODULE_DATA, + check_module_data, + &client, + account.address(), + &blob + ), + TestName::PublishModule, + CHECK_MODULE_DATA, + network_name, + run_id, + )?; + + // set message + emit_step_metrics( + time_fn!(set_message, &client, &mut account), + TestName::PublishModule, + SET_MESSAGE, + network_name, + run_id, + )?; + + // persistently check that the message is correct + emit_step_metrics( + time_fn!( + persistent_check::address, + CHECK_MESSAGE, + check_message, + &client, + account.address() + ), + TestName::PublishModule, + CHECK_MESSAGE, + network_name, + run_id, + )?; + + Ok(()) +} + +// Steps + +async fn setup(network_name: NetworkName) -> Result<(Client, LocalAccount), TestFailure> { + // spin up clients + let client = network_name.get_client(); + let faucet_client = network_name.get_faucet_client(); + + // create account + let account = match create_and_fund_account(&faucet_client, TestName::PublishModule).await { + Ok(account) => account, + Err(e) => { + error!( + "test: publish_module part: setup ERROR: {}, with error {:?}", + ERROR_COULD_NOT_FUND_ACCOUNT, e + ); + return Err(e.into()); + }, + }; + + Ok((client, account)) +} + +async fn check_account_data(client: &Client, account: AccountAddress) -> 
Result<(), TestFailure> { + check_balance(TestName::PublishModule, client, account, U64(FUND_AMOUNT)).await?; + + Ok(()) +} + +async fn build_module(address: AccountAddress) -> Result { + // get file to compile + let move_dir = PathBuf::from("./aptos-move/move-examples/hello_blockchain"); + + // insert address + let mut named_addresses: BTreeMap = BTreeMap::new(); + named_addresses.insert("hello_blockchain".to_string(), address); + + // build options + let options = BuildOptions { + named_addresses, + ..BuildOptions::default() + }; + + // build module + let package = match BuiltPackage::build(move_dir, options) { + Ok(package) => package, + Err(e) => { + error!( + "test: publish_module part: publish_module ERROR: {}, with error {:?}", + ERROR_COULD_NOT_BUILD_PACKAGE, e + ); + return Err(e.into()); + }, + }; + + Ok(package) +} + +async fn publish_module( + client: &Client, + account: &mut LocalAccount, + package: BuiltPackage, +) -> Result { + // get bytecode + let blobs = package.extract_code(); + + // get metadata + let metadata = match package.extract_metadata() { + Ok(data) => data, + Err(e) => { + error!( + "test: publish_module part: publish_module ERROR: {}, with error {:?}", + ERROR_NO_METADATA, e + ); + return Err(e.into()); + }, + }; + + // serialize metadata + let metadata_serialized = match bcs::to_bytes(&metadata) { + Ok(data) => data, + Err(e) => { + error!( + "test: publish_module part: publish_module ERROR: {}, with error {:?}", + ERROR_COULD_NOT_SERIALIZE, e + ); + return Err(anyhow!(e).into()); + }, + }; + + // create payload + let payload: aptos_types::transaction::TransactionPayload = + EntryFunctionCall::CodePublishPackageTxn { + metadata_serialized, + code: blobs.clone(), + } + .encode(); + + // create transaction + let pending_txn = + match build_and_submit_transaction(client, account, payload, TransactionOptions::default()) + .await + { + Ok(txn) => txn, + Err(e) => { + error!( + "test: publish_module part: publish_module ERROR: {}, with error {:?}", + ERROR_COULD_NOT_CREATE_AND_SUBMIT_TRANSACTION, e + ); + return Err(e.into()); + }, + }; + + // wait for transaction to finish + if let Err(e) = client.wait_for_transaction(&pending_txn).await { + error!( + "test: publish_module part: publish_module ERROR: {}, with error {:?}", + ERROR_COULD_NOT_FINISH_TRANSACTION, e + ); + return Err(e.into()); + }; + + // get blob for later comparison + let blob = match blobs.get(0) { + Some(bytecode) => HexEncodedBytes::from(bytecode.clone()), + None => { + error!( + "test: publish_module part: publish_module ERROR: {}", + ERROR_NO_BYTECODE + ); + return Err(anyhow!(ERROR_NO_BYTECODE).into()); + }, + }; + + Ok(blob) +} + +async fn check_module_data( + client: &Client, + address: AccountAddress, + expected: &HexEncodedBytes, +) -> Result<(), TestFailure> { + // actual + let response = match client.get_account_module(address, MODULE_NAME).await { + Ok(response) => response, + Err(e) => { + error!( + "test: publish_module part: check_module_data ERROR: {}, with error {:?}", + ERROR_NO_MODULE, e + ); + return Err(e.into()); + }, + }; + let actual = &response.inner().bytecode; + + // compare + if expected != actual { + error!( + "test: publish_module part: check_module_data FAIL: {}, expected {:?}, got {:?}", + FAIL_WRONG_MODULE, expected, actual + ); + return Err(TestFailure::Fail(FAIL_WRONG_MODULE)); + } + + Ok(()) +} + +async fn set_message(client: &Client, account: &mut LocalAccount) -> Result<(), TestFailure> { + // set up message + let message = match bcs::to_bytes(TEST_MESSAGE) { + 
Ok(data) => data, + Err(e) => { + error!( + "test: publish_module part: set_message ERROR: {}, with error {:?}", + ERROR_COULD_NOT_SERIALIZE, e + ); + return Err(anyhow!(e).into()); + }, + }; + + // create payload + let payload = TransactionPayload::EntryFunction(EntryFunction::new( + ModuleId::new(account.address(), ident_str!(MODULE_NAME).to_owned()), + ident_str!("set_message").to_owned(), + vec![], + vec![message], + )); + + // create transaction + let pending_txn = + match build_and_submit_transaction(client, account, payload, TransactionOptions::default()) + .await + { + Ok(txn) => txn, + Err(e) => { + error!( + "test: publish_module part: set_message ERROR: {}, with error {:?}", + ERROR_COULD_NOT_CREATE_AND_SUBMIT_TRANSACTION, e + ); + return Err(e.into()); + }, + }; + + // wait for transaction to finish + if let Err(e) = client.wait_for_transaction(&pending_txn).await { + error!( + "test: publish_module part: set_message ERROR: {}, with error {:?}", + ERROR_COULD_NOT_FINISH_TRANSACTION, e + ); + return Err(e.into()); + }; + + Ok(()) +} + +async fn check_message(client: &Client, address: AccountAddress) -> Result<(), TestFailure> { + // expected + let expected = TEST_MESSAGE.to_string(); + + // actual + let actual = match get_message(client, address).await { + Some(message) => message, + None => { + error!( + "test: publish_module part: check_message ERROR: {}", + ERROR_NO_MESSAGE + ); + return Err(anyhow!(ERROR_NO_MESSAGE).into()); + }, + }; + + // compare + if expected != actual { + error!( + "test: publish_module part: check_message FAIL: {}, expected {:?}, got {:?}", + FAIL_WRONG_MESSAGE, expected, actual + ); + return Err(TestFailure::Fail(FAIL_WRONG_MESSAGE)); + } + + Ok(()) +} + +// Utils + +async fn get_message(client: &Client, address: AccountAddress) -> Option { + let resource = match client + .get_account_resource( + address, + format!("{}::message::MessageHolder", address.to_hex_literal()).as_str(), + ) + .await + { + Ok(response) => response.into_inner()?, + Err(_) => return None, + }; + + Some(resource.data.get("message")?.as_str()?.to_owned()) +} diff --git a/crates/aptos-api-tester/src/tests/tokenv1_transfer.rs b/crates/aptos-api-tester/src/tests/tokenv1_transfer.rs new file mode 100644 index 0000000000000..a95272e59e6b4 --- /dev/null +++ b/crates/aptos-api-tester/src/tests/tokenv1_transfer.rs @@ -0,0 +1,576 @@ +// Copyright © Aptos Foundation + +use crate::{ + consts::FUND_AMOUNT, + persistent_check, + strings::{ + CHECK_ACCOUNT_DATA, CHECK_COLLECTION_METADATA, CHECK_RECEIVER_BALANCE, + CHECK_SENDER_BALANCE, CHECK_TOKEN_METADATA, CLAIM_TOKEN, CREATE_COLLECTION, CREATE_TOKEN, + ERROR_COULD_NOT_CREATE_AND_SUBMIT_TRANSACTION, ERROR_COULD_NOT_FINISH_TRANSACTION, + ERROR_COULD_NOT_FUND_ACCOUNT, ERROR_NO_COLLECTION_DATA, ERROR_NO_TOKEN_BALANCE, + ERROR_NO_TOKEN_DATA, FAIL_WRONG_COLLECTION_DATA, FAIL_WRONG_TOKEN_BALANCE, + FAIL_WRONG_TOKEN_DATA, OFFER_TOKEN, SETUP, + }, + time_fn, + tokenv1_client::{ + CollectionData, CollectionMutabilityConfig, RoyaltyOptions, TokenClient, TokenData, + TokenMutabilityConfig, + }, + utils::{ + check_balance, create_and_fund_account, emit_step_metrics, NetworkName, TestFailure, + TestName, + }, +}; +use aptos_api_types::U64; +use aptos_logger::error; +use aptos_rest_client::Client; +use aptos_sdk::types::LocalAccount; +use aptos_types::account_address::AccountAddress; + +const COLLECTION_NAME: &str = "test collection"; +const TOKEN_NAME: &str = "test token"; +const TOKEN_SUPPLY: u64 = 10; +const OFFER_AMOUNT: u64 = 2; + +/// Tests nft 
transfer. Checks that: +/// - collection data exists +/// - token data exists +/// - token balance reflects transferred amount +pub async fn test(network_name: NetworkName, run_id: &str) -> Result<(), TestFailure> { + // setup + let (client, mut account, mut receiver) = emit_step_metrics( + time_fn!(setup, network_name), + TestName::TokenV1Transfer, + SETUP, + network_name, + run_id, + )?; + let token_client = TokenClient::new(&client); + + // persistently check that API returns correct account data (auth key and sequence number) + emit_step_metrics( + time_fn!( + persistent_check::address_address, + CHECK_ACCOUNT_DATA, + check_account_data, + &client, + account.address(), + receiver.address() + ), + TestName::TokenV1Transfer, + CHECK_ACCOUNT_DATA, + network_name, + run_id, + )?; + + // create collection + emit_step_metrics( + time_fn!(create_collection, &client, &token_client, &mut account), + TestName::TokenV1Transfer, + CREATE_COLLECTION, + network_name, + run_id, + )?; + + // persistently check that API returns correct collection metadata + emit_step_metrics( + time_fn!( + persistent_check::token_address, + CHECK_COLLECTION_METADATA, + check_collection_metadata, + &token_client, + account.address() + ), + TestName::TokenV1Transfer, + CHECK_COLLECTION_METADATA, + network_name, + run_id, + )?; + + // create token + emit_step_metrics( + time_fn!(create_token, &client, &token_client, &mut account), + TestName::TokenV1Transfer, + CREATE_TOKEN, + network_name, + run_id, + )?; + + // persistently check that API returns correct token metadata + emit_step_metrics( + time_fn!( + persistent_check::token_address, + CHECK_TOKEN_METADATA, + check_token_metadata, + &token_client, + account.address() + ), + TestName::TokenV1Transfer, + CHECK_TOKEN_METADATA, + network_name, + run_id, + )?; + + // offer token + emit_step_metrics( + time_fn!( + offer_token, + &client, + &token_client, + &mut account, + receiver.address() + ), + TestName::TokenV1Transfer, + OFFER_TOKEN, + network_name, + run_id, + )?; + + // persistently check that sender token balance is correct + emit_step_metrics( + time_fn!( + persistent_check::token_address, + CHECK_SENDER_BALANCE, + check_sender_balance, + &token_client, + account.address() + ), + TestName::TokenV1Transfer, + CHECK_SENDER_BALANCE, + network_name, + run_id, + )?; + + // claim token + emit_step_metrics( + time_fn!( + claim_token, + &client, + &token_client, + &mut receiver, + account.address() + ), + TestName::TokenV1Transfer, + CLAIM_TOKEN, + network_name, + run_id, + )?; + + // persistently check that receiver token balance is correct + emit_step_metrics( + time_fn!( + persistent_check::token_address_address, + CHECK_RECEIVER_BALANCE, + check_receiver_balance, + &token_client, + receiver.address(), + account.address() + ), + TestName::TokenV1Transfer, + CHECK_RECEIVER_BALANCE, + network_name, + run_id, + )?; + + Ok(()) +} + +// Steps + +async fn setup( + network_name: NetworkName, +) -> Result<(Client, LocalAccount, LocalAccount), TestFailure> { + // spin up clients + let client = network_name.get_client(); + let faucet_client = network_name.get_faucet_client(); + + // create account + let account = match create_and_fund_account(&faucet_client, TestName::TokenV1Transfer).await { + Ok(account) => account, + Err(e) => { + error!( + "test: nft_transfer part: setup ERROR: {}, with error {:?}", + ERROR_COULD_NOT_FUND_ACCOUNT, e + ); + return Err(e.into()); + }, + }; + + // create receiver + let receiver = match create_and_fund_account(&faucet_client, 
TestName::TokenV1Transfer).await { + Ok(receiver) => receiver, + Err(e) => { + error!( + "test: nft_transfer part: setup ERROR: {}, with error {:?}", + ERROR_COULD_NOT_FUND_ACCOUNT, e + ); + return Err(e.into()); + }, + }; + + Ok((client, account, receiver)) +} + +async fn check_account_data( + client: &Client, + account: AccountAddress, + receiver: AccountAddress, +) -> Result<(), TestFailure> { + check_balance(TestName::TokenV1Transfer, client, account, U64(FUND_AMOUNT)).await?; + check_balance( + TestName::TokenV1Transfer, + client, + receiver, + U64(FUND_AMOUNT), + ) + .await?; + + Ok(()) +} + +async fn create_collection( + client: &Client, + token_client: &TokenClient<'_>, + account: &mut LocalAccount, +) -> Result<(), TestFailure> { + // set up collection data + let collection_data = collection_data(); + + // create transaction + let pending_txn = match token_client + .create_collection( + account, + &collection_data.name, + &collection_data.description, + &collection_data.uri, + collection_data.maximum.into(), + None, + ) + .await + { + Ok(txn) => txn, + Err(e) => { + error!( + "test: nft_transfer part: create_collection ERROR: {}, with error {:?}", + ERROR_COULD_NOT_CREATE_AND_SUBMIT_TRANSACTION, e + ); + return Err(e.into()); + }, + }; + + // wait for transaction to finish + if let Err(e) = client.wait_for_transaction(&pending_txn).await { + error!( + "test: nft_transfer part: create_collection ERROR: {}, with error {:?}", + ERROR_COULD_NOT_FINISH_TRANSACTION, e + ); + return Err(e.into()); + }; + + Ok(()) +} + +async fn check_collection_metadata( + token_client: &TokenClient<'_>, + address: AccountAddress, +) -> Result<(), TestFailure> { + // set up collection data + let collection_data = collection_data(); + + // expected + let expected = collection_data.clone(); + + // actual + let actual = match token_client + .get_collection_data(address, &collection_data.name) + .await + { + Ok(data) => data, + Err(e) => { + error!( + "test: nft_transfer part: check_collection_metadata ERROR: {}, with error {:?}", + ERROR_NO_COLLECTION_DATA, e + ); + return Err(e.into()); + }, + }; + + // compare + if expected != actual { + error!( + "test: nft_transfer part: check_collection_metadata FAIL: {}, expected {:?}, got {:?}", + FAIL_WRONG_COLLECTION_DATA, expected, actual + ); + return Err(TestFailure::Fail(FAIL_WRONG_COLLECTION_DATA)); + } + + Ok(()) +} + +async fn create_token( + client: &Client, + token_client: &TokenClient<'_>, + account: &mut LocalAccount, +) -> Result<(), TestFailure> { + // set up token data + let token_data = token_data(account.address()); + + // create transaction + let pending_txn = match token_client + .create_token( + account, + COLLECTION_NAME, + &token_data.name, + &token_data.description, + token_data.supply.into(), + &token_data.uri, + token_data.maximum.into(), + None, + None, + ) + .await + { + Ok(txn) => txn, + Err(e) => { + error!( + "test: nft_transfer part: create_token ERROR: {}, with error {:?}", + ERROR_COULD_NOT_CREATE_AND_SUBMIT_TRANSACTION, e + ); + return Err(e.into()); + }, + }; + + // wait for transaction to finish + if let Err(e) = client.wait_for_transaction(&pending_txn).await { + error!( + "test: nft_transfer part: create_token ERROR: {}, with error {:?}", + ERROR_COULD_NOT_FINISH_TRANSACTION, e + ); + return Err(e.into()); + }; + + Ok(()) +} + +async fn check_token_metadata( + token_client: &TokenClient<'_>, + address: AccountAddress, +) -> Result<(), TestFailure> { + // set up token data + let token_data = token_data(address); + + // expected + 
let expected = token_data; + + // actual + let actual = match token_client + .get_token_data(address, COLLECTION_NAME, TOKEN_NAME) + .await + { + Ok(data) => data, + Err(e) => { + error!( + "test: nft_transfer part: check_token_metadata ERROR: {}, with error {:?}", + ERROR_NO_TOKEN_DATA, e + ); + return Err(e.into()); + }, + }; + + // compare + if expected != actual { + error!( + "test: nft_transfer part: check_token_metadata FAIL: {}, expected {:?}, got {:?}", + FAIL_WRONG_TOKEN_DATA, expected, actual + ); + return Err(TestFailure::Fail(FAIL_WRONG_TOKEN_DATA)); + } + + Ok(()) +} + +async fn offer_token( + client: &Client, + token_client: &TokenClient<'_>, + account: &mut LocalAccount, + receiver: AccountAddress, +) -> Result<(), TestFailure> { + // create transaction + let pending_txn = match token_client + .offer_token( + account, + receiver, + account.address(), + COLLECTION_NAME, + TOKEN_NAME, + OFFER_AMOUNT, + None, + None, + ) + .await + { + Ok(txn) => txn, + Err(e) => { + error!( + "test: nft_transfer part: offer_token ERROR: {}, with error {:?}", + ERROR_COULD_NOT_CREATE_AND_SUBMIT_TRANSACTION, e + ); + return Err(e.into()); + }, + }; + + // wait for transaction to finish + if let Err(e) = client.wait_for_transaction(&pending_txn).await { + error!( + "test: nft_transfer part: offer_token ERROR: {}, with error {:?}", + ERROR_COULD_NOT_FINISH_TRANSACTION, e + ); + return Err(e.into()); + }; + + Ok(()) +} + +async fn check_sender_balance( + token_client: &TokenClient<'_>, + address: AccountAddress, +) -> Result<(), TestFailure> { + check_token_balance( + token_client, + address, + address, + U64(TOKEN_SUPPLY - OFFER_AMOUNT), + "check_sender_balance", + ) + .await +} + +async fn claim_token( + client: &Client, + token_client: &TokenClient<'_>, + receiver: &mut LocalAccount, + sender: AccountAddress, +) -> Result<(), TestFailure> { + // create transaction + let pending_txn = match token_client + .claim_token( + receiver, + sender, + sender, + COLLECTION_NAME, + TOKEN_NAME, + None, + None, + ) + .await + { + Ok(txn) => txn, + Err(e) => { + error!( + "test: nft_transfer part: claim_token ERROR: {}, with error {:?}", + ERROR_COULD_NOT_CREATE_AND_SUBMIT_TRANSACTION, e + ); + return Err(e.into()); + }, + }; + + // wait for transaction to finish + if let Err(e) = client.wait_for_transaction(&pending_txn).await { + error!( + "test: nft_transfer part: claim_token ERROR: {}, with error {:?}", + ERROR_COULD_NOT_FINISH_TRANSACTION, e + ); + return Err(e.into()); + }; + + Ok(()) +} + +async fn check_receiver_balance( + token_client: &TokenClient<'_>, + address: AccountAddress, + creator: AccountAddress, +) -> Result<(), TestFailure> { + check_token_balance( + token_client, + address, + creator, + U64(OFFER_AMOUNT), + "check_receiver_balance", + ) + .await +} + +// Utils + +fn collection_data() -> CollectionData { + CollectionData { + name: COLLECTION_NAME.to_string(), + description: "collection description".to_string(), + uri: "collection uri".to_string(), + maximum: U64(1000), + mutability_config: CollectionMutabilityConfig { + description: false, + maximum: false, + uri: false, + }, + } +} + +fn token_data(address: AccountAddress) -> TokenData { + TokenData { + name: TOKEN_NAME.to_string(), + description: "token description".to_string(), + uri: "token uri".to_string(), + maximum: U64(1000), + mutability_config: TokenMutabilityConfig { + description: false, + maximum: false, + properties: false, + royalty: false, + uri: false, + }, + supply: U64(TOKEN_SUPPLY), + royalty: RoyaltyOptions { + // 
change this when you use! + payee_address: address, + royalty_points_denominator: U64(0), + royalty_points_numerator: U64(0), + }, + largest_property_version: U64(0), + } +} + +async fn check_token_balance( + token_client: &TokenClient<'_>, + address: AccountAddress, + creator: AccountAddress, + expected: U64, + part: &str, +) -> Result<(), TestFailure> { + // actual + let actual = match token_client + .get_token(address, creator, COLLECTION_NAME, TOKEN_NAME) + .await + { + Ok(data) => data.amount, + Err(e) => { + error!( + "test: nft_transfer part: {} ERROR: {}, with error {:?}", + part, ERROR_NO_TOKEN_BALANCE, e + ); + return Err(e.into()); + }, + }; + + // compare + if expected != actual { + error!( + "test: nft_transfer part: {} FAIL: {}, expected {:?}, got {:?}", + part, FAIL_WRONG_TOKEN_BALANCE, expected, actual + ); + return Err(TestFailure::Fail(FAIL_WRONG_TOKEN_BALANCE)); + } + + Ok(()) +} diff --git a/crates/aptos-api-tester/src/tests/view_function.rs b/crates/aptos-api-tester/src/tests/view_function.rs new file mode 100644 index 0000000000000..345daf3ef1338 --- /dev/null +++ b/crates/aptos-api-tester/src/tests/view_function.rs @@ -0,0 +1,177 @@ +// Copyright © Aptos Foundation + +use crate::{ + consts::FUND_AMOUNT, + persistent_check, + strings::{ + CHECK_ACCOUNT_DATA, CHECK_VIEW_ACCOUNT_BALANCE, ERROR_BAD_BALANCE_STRING, + ERROR_COULD_NOT_FUND_ACCOUNT, ERROR_COULD_NOT_VIEW, ERROR_NO_BALANCE_STRING, + FAIL_WRONG_BALANCE, SETUP, + }, + time_fn, + utils::{ + check_balance, create_and_fund_account, emit_step_metrics, NetworkName, TestFailure, + TestName, + }, +}; +use anyhow::anyhow; +use aptos_api_types::{ViewRequest, U64}; +use aptos_logger::error; +use aptos_rest_client::Client; +use aptos_sdk::types::LocalAccount; +use aptos_types::account_address::AccountAddress; + +/// Tests view function use. 
Checks that:
+/// - view function returns correct value
+pub async fn test(network_name: NetworkName, run_id: &str) -> Result<(), TestFailure> {
+    // setup
+    let (client, account) = emit_step_metrics(
+        time_fn!(setup, network_name),
+        TestName::ViewFunction,
+        SETUP,
+        network_name,
+        run_id,
+    )?;
+
+    // check account data persistently
+    emit_step_metrics(
+        time_fn!(
+            persistent_check::address,
+            CHECK_ACCOUNT_DATA,
+            check_account_data,
+            &client,
+            account.address()
+        ),
+        TestName::ViewFunction,
+        CHECK_ACCOUNT_DATA,
+        network_name,
+        run_id,
+    )?;
+
+    // check account balance from view function persistently
+    emit_step_metrics(
+        time_fn!(
+            persistent_check::address,
+            CHECK_VIEW_ACCOUNT_BALANCE,
+            check_view_account_balance,
+            &client,
+            account.address()
+        ),
+        TestName::ViewFunction,
+        CHECK_VIEW_ACCOUNT_BALANCE,
+        network_name,
+        run_id,
+    )?;
+
+    Ok(())
+}
+
+// Steps
+
+async fn setup(network_name: NetworkName) -> Result<(Client, LocalAccount), TestFailure> {
+    // spin up clients
+    let client = network_name.get_client();
+    let faucet_client = network_name.get_faucet_client();
+
+    // create account
+    let account = match create_and_fund_account(&faucet_client, TestName::ViewFunction).await {
+        Ok(account) => account,
+        Err(e) => {
+            error!(
+                "test: {} part: {} ERROR: {}, with error {:?}",
+                TestName::ViewFunction.to_string(),
+                SETUP,
+                ERROR_COULD_NOT_FUND_ACCOUNT,
+                e
+            );
+            return Err(e.into());
+        },
+    };
+
+    Ok((client, account))
+}
+
+async fn check_account_data(client: &Client, account: AccountAddress) -> Result<(), TestFailure> {
+    check_balance(TestName::ViewFunction, client, account, U64(FUND_AMOUNT)).await?;
+
+    Ok(())
+}
+
+async fn check_view_account_balance(
+    client: &Client,
+    address: AccountAddress,
+) -> Result<(), TestFailure> {
+    // expected
+    let expected = U64(FUND_AMOUNT);
+
+    // actual
+
+    // get client response
+    let response = match client
+        .view(
+            &ViewRequest {
+                function: "0x1::coin::balance".parse()?,
+                type_arguments: vec!["0x1::aptos_coin::AptosCoin".parse()?],
+                arguments: vec![serde_json::Value::String(address.to_hex_literal())],
+            },
+            None,
+        )
+        .await
+    {
+        Ok(response) => response,
+        Err(e) => {
+            error!(
+                "test: {} part: {} ERROR: {}, with error {:?}",
+                TestName::ViewFunction.to_string(),
+                CHECK_VIEW_ACCOUNT_BALANCE,
+                ERROR_COULD_NOT_VIEW,
+                e
+            );
+            return Err(e.into());
+        },
+    };
+
+    // get the string value from the serde_json value
+    let value = match response.inner()[0].as_str() {
+        Some(value) => value,
+        None => {
+            error!(
+                "test: {} part: {} ERROR: {}, with error {:?}",
+                TestName::ViewFunction.to_string(),
+                CHECK_VIEW_ACCOUNT_BALANCE,
+                ERROR_NO_BALANCE_STRING,
+                response.inner()
+            );
+            return Err(anyhow!(ERROR_NO_BALANCE_STRING).into());
+        },
+    };
+
+    // parse the string into a U64
+    let actual = match value.parse::<u64>() {
+        Ok(value) => U64(value),
+        Err(e) => {
+            error!(
+                "test: {} part: {} ERROR: {}, with error {:?}",
+                TestName::ViewFunction.to_string(),
+                CHECK_VIEW_ACCOUNT_BALANCE,
+                ERROR_BAD_BALANCE_STRING,
+                e
+            );
+            return Err(e.into());
+        },
+    };
+
+    // compare
+    if expected != actual {
+        error!(
+            "test: {} part: {} FAIL: {}, expected {:?}, got {:?}",
+            TestName::ViewFunction.to_string(),
+            CHECK_VIEW_ACCOUNT_BALANCE,
+            FAIL_WRONG_BALANCE,
+            expected,
+            actual
+        );
+        return Err(TestFailure::Fail(FAIL_WRONG_BALANCE));
+    }
+
+    Ok(())
+}
diff --git a/crates/aptos-api-tester/src/tokenv1_client.rs b/crates/aptos-api-tester/src/tokenv1_client.rs
new file mode 100644
index 0000000000000..7ef4f3c25b03b
--- /dev/null
+++ b/crates/aptos-api-tester/src/tokenv1_client.rs
@@ -0,0 +1,460 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +// TODO: this should be part of the SDK + +use anyhow::{anyhow, Context, Result}; +use aptos_api_types::U64; +use aptos_cached_packages::aptos_token_sdk_builder::EntryFunctionCall; +use aptos_sdk::{ + rest_client::{Client as ApiClient, PendingTransaction}, + transaction_builder::TransactionFactory, + types::LocalAccount, +}; +use aptos_types::{ + account_address::AccountAddress, chain_id::ChainId, transaction::TransactionPayload, +}; +use serde::{Deserialize, Serialize}; + +/// Gets chain ID for use in submitting transactions. +async fn get_chain_id(client: &ApiClient) -> Result { + let id = client + .get_index() + .await + .context("Failed to get chain ID")? + .inner() + .chain_id; + + Ok(ChainId::new(id)) +} + +/// Helper function to take care of a transaction after creating the payload. +pub async fn build_and_submit_transaction( + client: &ApiClient, + account: &mut LocalAccount, + payload: TransactionPayload, + options: TransactionOptions, +) -> Result { + // create factory + let factory = TransactionFactory::new(get_chain_id(client).await?) + .with_gas_unit_price(options.gas_unit_price) + .with_max_gas_amount(options.max_gas_amount) + .with_transaction_expiration_time(options.timeout_secs); + + // create transaction + let builder = factory + .payload(payload) + .sender(account.address()) + .sequence_number(account.sequence_number()); + + // sign transaction + let signed_txn = account.sign_with_transaction_builder(builder); + + // submit and return + Ok(client + .submit(&signed_txn) + .await + .context("Failed to submit transaction")? + .into_inner()) +} + +#[derive(Clone, Debug)] +pub struct TokenClient<'a> { + api_client: &'a ApiClient, +} + +impl<'a> TokenClient<'a> { + pub fn new(api_client: &'a ApiClient) -> Self { + Self { api_client } + } + + /// Helper function to get the handle address of collection_data for 0x3::token::Collections + /// resources. + async fn get_collection_data_handle(&self, address: AccountAddress) -> Option { + if let Ok(response) = self + .api_client + .get_account_resource(address, "0x3::token::Collections") + .await + { + Some( + response + .into_inner()? + .data + .get("collection_data")? + .get("handle")? + .as_str()? + .to_owned(), + ) + } else { + None + } + } + + /// Helper function to get the handle address of token_data for 0x3::token::Collections + /// resources. + async fn get_token_data_handle(&self, address: AccountAddress) -> Option { + if let Ok(response) = self + .api_client + .get_account_resource(address, "0x3::token::Collections") + .await + { + Some( + response + .into_inner()? + .data + .get("token_data")? + .get("handle")? + .as_str()? + .to_owned(), + ) + } else { + None + } + } + + /// Helper function to get the handle address of tokens for 0x3::token::TokenStore resources. + async fn get_tokens_handle(&self, address: AccountAddress) -> Option { + if let Ok(response) = self + .api_client + .get_account_resource(address, "0x3::token::TokenStore") + .await + { + Some( + response + .into_inner()? + .data + .get("tokens")? + .get("handle")? + .as_str()? + .to_owned(), + ) + } else { + None + } + } + + /// Creates a collection with the given fields. 
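+    ///
+    /// A minimal usage sketch, mirroring how the tokenv1_transfer test in this
+    /// patch calls it (the collection fields shown here are illustrative):
+    /// ```ignore
+    /// let pending_txn = token_client
+    ///     .create_collection(&mut account, "test collection", "description", "uri", 1000, None)
+    ///     .await?;
+    /// client.wait_for_transaction(&pending_txn).await?;
+    /// ```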
+ pub async fn create_collection( + &self, + account: &mut LocalAccount, + name: &str, + description: &str, + uri: &str, + max_amount: u64, + options: Option, + ) -> Result { + // create payload + let payload = EntryFunctionCall::TokenCreateCollectionScript { + name: name.to_owned().into_bytes(), + description: description.to_owned().into_bytes(), + uri: uri.to_owned().into_bytes(), + maximum: max_amount, + mutate_setting: vec![false, false, false], + } + .encode(); + + // create and submit transaction + build_and_submit_transaction( + self.api_client, + account, + payload, + options.unwrap_or_default(), + ) + .await + } + + /// Creates a token with the given fields. Does not support property keys. + pub async fn create_token( + &self, + account: &mut LocalAccount, + collection_name: &str, + name: &str, + description: &str, + supply: u64, + uri: &str, + max_amount: u64, + royalty_options: Option, + options: Option, + ) -> Result { + // set default royalty options + let royalty_options = match royalty_options { + Some(opt) => opt, + None => RoyaltyOptions { + payee_address: account.address(), + royalty_points_denominator: U64(0), + royalty_points_numerator: U64(0), + }, + }; + + // create payload + let payload = EntryFunctionCall::TokenCreateTokenScript { + collection: collection_name.to_owned().into_bytes(), + name: name.to_owned().into_bytes(), + description: description.to_owned().into_bytes(), + balance: supply, + maximum: max_amount, + uri: uri.to_owned().into_bytes(), + royalty_payee_address: royalty_options.payee_address, + royalty_points_denominator: royalty_options.royalty_points_denominator.0, + royalty_points_numerator: royalty_options.royalty_points_numerator.0, + mutate_setting: vec![false, false, false, false, false], + // todo: add property support + property_keys: vec![], + property_values: vec![], + property_types: vec![], + } + .encode(); + + // create and submit transaction + build_and_submit_transaction( + self.api_client, + account, + payload, + options.unwrap_or_default(), + ) + .await + } + + /// Retrieves collection metadata from the API. + pub async fn get_collection_data( + &self, + creator: AccountAddress, + collection_name: &str, + ) -> Result { + // get handle for collection_data + let handle = match self.get_collection_data_handle(creator).await { + Some(s) => AccountAddress::from_hex_literal(&s)?, + None => return Err(anyhow!("Couldn't retrieve handle for collections data")), + }; + + // get table item with the handle + let value = self + .api_client + .get_table_item( + handle, + "0x1::string::String", + "0x3::token::CollectionData", + collection_name, + ) + .await? + .into_inner(); + + Ok(serde_json::from_value(value)?) + } + + /// Retrieves token metadata from the API. + pub async fn get_token_data( + &self, + creator: AccountAddress, + collection_name: &str, + token_name: &str, + ) -> Result { + // get handle for token_data + let handle = match self.get_token_data_handle(creator).await { + Some(s) => AccountAddress::from_hex_literal(&s)?, + None => return Err(anyhow!("Couldn't retrieve handle for token data")), + }; + + // construct key for table lookup + let token_data_id = TokenDataId { + creator: creator.to_hex_literal(), + collection: collection_name.to_string(), + name: token_name.to_string(), + }; + + // get table item with the handle + let value = self + .api_client + .get_table_item( + handle, + "0x3::token::TokenDataId", + "0x3::token::TokenData", + token_data_id, + ) + .await? + .into_inner(); + + Ok(serde_json::from_value(value)?) 
+ } + + /// Retrieves the information for a given token. + pub async fn get_token( + &self, + account: AccountAddress, + creator: AccountAddress, + collection_name: &str, + token_name: &str, + ) -> Result { + // get handle for tokens + let handle = match self.get_tokens_handle(account).await { + Some(s) => AccountAddress::from_hex_literal(&s)?, + None => return Err(anyhow!("Couldn't retrieve handle for tokens")), + }; + + // construct key for table lookup + let token_id = TokenId { + token_data_id: TokenDataId { + creator: creator.to_hex_literal(), + collection: collection_name.to_string(), + name: token_name.to_string(), + }, + property_version: U64(0), + }; + + // get table item with the handle + let value = self + .api_client + .get_table_item(handle, "0x3::token::TokenId", "0x3::token::Token", token_id) + .await? + .into_inner(); + + Ok(serde_json::from_value(value)?) + } + + /// Transfers specified amount of tokens from account to receiver. + pub async fn offer_token( + &self, + account: &mut LocalAccount, + receiver: AccountAddress, + creator: AccountAddress, + collection_name: &str, + name: &str, + amount: u64, + property_version: Option, + options: Option, + ) -> Result { + // create payload + let payload = EntryFunctionCall::TokenTransfersOfferScript { + receiver, + creator, + collection: collection_name.to_owned().into_bytes(), + name: name.to_owned().into_bytes(), + property_version: property_version.unwrap_or(0), + amount, + } + .encode(); + + // create and submit transaction + build_and_submit_transaction( + self.api_client, + account, + payload, + options.unwrap_or_default(), + ) + .await + } + + pub async fn claim_token( + &self, + account: &mut LocalAccount, + sender: AccountAddress, + creator: AccountAddress, + collection_name: &str, + name: &str, + property_version: Option, + options: Option, + ) -> Result { + // create payload + let payload = EntryFunctionCall::TokenTransfersClaimScript { + sender, + creator, + collection: collection_name.to_owned().into_bytes(), + name: name.to_owned().into_bytes(), + property_version: property_version.unwrap_or(0), + } + .encode(); + + // create and submit transaction + build_and_submit_transaction( + self.api_client, + account, + payload, + options.unwrap_or_default(), + ) + .await + } +} + +pub struct TransactionOptions { + pub max_gas_amount: u64, + + pub gas_unit_price: u64, + + /// This is the number of seconds from now you're willing to wait for the + /// transaction to be committed. 
+ pub timeout_secs: u64, +} + +impl Default for TransactionOptions { + fn default() -> Self { + Self { + max_gas_amount: 5_000, + gas_unit_price: 100, + timeout_secs: 10, + } + } +} + +#[derive(Clone, Debug, PartialEq, Deserialize)] +pub struct CollectionData { + pub name: String, + pub description: String, + pub uri: String, + pub maximum: U64, + pub mutability_config: CollectionMutabilityConfig, +} + +#[derive(Clone, Deserialize, Debug, PartialEq)] +pub struct CollectionMutabilityConfig { + pub description: bool, + pub maximum: bool, + pub uri: bool, +} + +#[derive(Debug, PartialEq, Deserialize)] +pub struct TokenData { + pub name: String, + pub description: String, + pub uri: String, + pub maximum: U64, + pub supply: U64, + pub royalty: RoyaltyOptions, + pub mutability_config: TokenMutabilityConfig, + pub largest_property_version: U64, +} + +#[derive(Debug, PartialEq, Deserialize)] +pub struct RoyaltyOptions { + pub payee_address: AccountAddress, + pub royalty_points_denominator: U64, + pub royalty_points_numerator: U64, +} + +#[derive(Deserialize, Debug, PartialEq)] +pub struct TokenMutabilityConfig { + pub description: bool, + pub maximum: bool, + pub properties: bool, + pub royalty: bool, + pub uri: bool, +} + +#[derive(Debug, Deserialize)] +pub struct Token { + // id: TokenId, + pub amount: U64, + // todo: add property support +} + +#[derive(Debug, Deserialize, Serialize)] +struct TokenId { + token_data_id: TokenDataId, + property_version: U64, +} + +#[derive(Debug, Deserialize, Serialize)] +struct TokenDataId { + creator: String, + collection: String, + name: String, +} diff --git a/crates/aptos-api-tester/src/utils.rs b/crates/aptos-api-tester/src/utils.rs new file mode 100644 index 0000000000000..1ed118b8a558d --- /dev/null +++ b/crates/aptos-api-tester/src/utils.rs @@ -0,0 +1,310 @@ +// Copyright © Aptos Foundation + +use crate::{ + consts::{ + DEVNET_FAUCET_URL, DEVNET_NODE_URL, FUND_AMOUNT, TESTNET_FAUCET_URL, TESTNET_NODE_URL, + }, + counters::{test_error, test_fail, test_latency, test_step_latency, test_success}, + strings::{ERROR_NO_BALANCE, FAIL_WRONG_BALANCE}, + tests::{coin_transfer, new_account, publish_module, tokenv1_transfer, view_function}, + time_fn, +}; +use anyhow::{anyhow, Error, Result}; +use aptos_api_types::U64; +use aptos_logger::{error, info}; +use aptos_rest_client::{error::RestError, Client, FaucetClient}; +use aptos_sdk::types::LocalAccount; +use aptos_types::account_address::AccountAddress; +use std::{env, num::ParseIntError, str::FromStr}; + +// Test failure + +#[derive(Debug)] +pub enum TestFailure { + // Variant for failed checks, e.g. wrong balance + Fail(&'static str), + // Variant for test failures, e.g. 
client returns an error + Error(anyhow::Error), +} + +impl From for TestFailure { + fn from(e: anyhow::Error) -> TestFailure { + TestFailure::Error(e) + } +} + +impl From for TestFailure { + fn from(e: RestError) -> TestFailure { + TestFailure::Error(e.into()) + } +} + +impl From for TestFailure { + fn from(e: ParseIntError) -> TestFailure { + TestFailure::Error(e.into()) + } +} + +// Test name + +#[derive(Clone, Copy)] +pub enum TestName { + NewAccount, + CoinTransfer, + TokenV1Transfer, + PublishModule, + ViewFunction, +} + +impl TestName { + pub async fn run(&self, network_name: NetworkName, run_id: &str) { + let output = match &self { + TestName::NewAccount => time_fn!(new_account::test, network_name, run_id), + TestName::CoinTransfer => time_fn!(coin_transfer::test, network_name, run_id), + TestName::TokenV1Transfer => time_fn!(tokenv1_transfer::test, network_name, run_id), + TestName::PublishModule => time_fn!(publish_module::test, network_name, run_id), + TestName::ViewFunction => time_fn!(view_function::test, network_name, run_id), + }; + + emit_test_metrics(output, *self, network_name, run_id); + } +} + +impl ToString for TestName { + fn to_string(&self) -> String { + match &self { + TestName::NewAccount => "new_account".to_string(), + TestName::CoinTransfer => "coin_transfer".to_string(), + TestName::TokenV1Transfer => "tokenv1_transfer".to_string(), + TestName::PublishModule => "publish_module".to_string(), + TestName::ViewFunction => "view_function".to_string(), + } + } +} + +// Network name + +#[derive(Clone, Copy)] +pub enum NetworkName { + Testnet, + Devnet, +} + +impl ToString for NetworkName { + fn to_string(&self) -> String { + match &self { + NetworkName::Testnet => "testnet".to_string(), + NetworkName::Devnet => "devnet".to_string(), + } + } +} + +impl FromStr for NetworkName { + type Err = Error; + + fn from_str(s: &str) -> Result { + match s { + "testnet" => Ok(NetworkName::Testnet), + "devnet" => Ok(NetworkName::Devnet), + _ => Err(anyhow!("invalid network name")), + } + } +} + +impl NetworkName { + /// Create a REST client. + pub fn get_client(&self) -> Client { + match self { + NetworkName::Testnet => Client::new(TESTNET_NODE_URL.clone()), + NetworkName::Devnet => Client::new(DEVNET_NODE_URL.clone()), + } + } + + /// Create a faucet client. + pub fn get_faucet_client(&self) -> FaucetClient { + match self { + NetworkName::Testnet => { + let faucet_client = + FaucetClient::new(TESTNET_FAUCET_URL.clone(), TESTNET_NODE_URL.clone()); + match env::var("TESTNET_FAUCET_CLIENT_TOKEN") { + Ok(token) => faucet_client.with_auth_token(token), + Err(_) => faucet_client, + } + }, + NetworkName::Devnet => { + FaucetClient::new(DEVNET_FAUCET_URL.clone(), DEVNET_NODE_URL.clone()) + }, + } + } +} + +// Setup helpers + +/// Create an account with zero balance. +pub async fn create_account( + faucet_client: &FaucetClient, + test_name: TestName, +) -> Result { + let account = LocalAccount::generate(&mut rand::rngs::OsRng); + faucet_client.create_account(account.address()).await?; + + info!( + "CREATED ACCOUNT {} for test: {}", + account.address(), + test_name.to_string() + ); + Ok(account) +} + +/// Create an account with 100_000_000 balance. 
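+///
+/// A minimal usage sketch, following the setup steps in the test modules
+/// (the network and test name are illustrative):
+/// ```ignore
+/// let faucet_client = NetworkName::Devnet.get_faucet_client();
+/// let account = create_and_fund_account(&faucet_client, TestName::NewAccount).await?;
+/// ```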
+pub async fn create_and_fund_account( + faucet_client: &FaucetClient, + test_name: TestName, +) -> Result { + let account = LocalAccount::generate(&mut rand::rngs::OsRng); + faucet_client.fund(account.address(), FUND_AMOUNT).await?; + + info!( + "CREATED ACCOUNT {} for test: {}", + account.address(), + test_name.to_string() + ); + Ok(account) +} + +/// Check account balance. +pub async fn check_balance( + test_name: TestName, + client: &Client, + address: AccountAddress, + expected: U64, +) -> Result<(), TestFailure> { + // actual + let actual = match client.get_account_balance(address).await { + Ok(response) => response.into_inner().coin.value, + Err(e) => { + error!( + "test: {} part: check_account_data ERROR: {}, with error {:?}", + &test_name.to_string(), + ERROR_NO_BALANCE, + e + ); + return Err(e.into()); + }, + }; + + // compare + if expected != actual { + error!( + "test: {} part: check_account_data FAIL: {}, expected {:?}, got {:?}", + &test_name.to_string(), + FAIL_WRONG_BALANCE, + expected, + actual + ); + return Err(TestFailure::Fail(FAIL_WRONG_BALANCE)); + } + + Ok(()) +} + +// Metrics helpers + +/// Emit metrics based on test result. +pub fn emit_test_metrics( + output: (Result<(), TestFailure>, f64), + test_name: TestName, + network_name: NetworkName, + run_id: &str, +) { + // deconstruct + let (result, time) = output; + + // emit success rate and get result word + let result_label = match result { + Ok(_) => { + test_success(&test_name.to_string(), &network_name.to_string(), run_id).observe(1_f64); + test_fail(&test_name.to_string(), &network_name.to_string(), run_id).observe(0_f64); + test_error(&test_name.to_string(), &network_name.to_string(), run_id).observe(0_f64); + + "success" + }, + Err(e) => match e { + TestFailure::Fail(_) => { + test_success(&test_name.to_string(), &network_name.to_string(), run_id) + .observe(0_f64); + test_fail(&test_name.to_string(), &network_name.to_string(), run_id).observe(1_f64); + test_error(&test_name.to_string(), &network_name.to_string(), run_id) + .observe(0_f64); + + "fail" + }, + TestFailure::Error(_) => { + test_success(&test_name.to_string(), &network_name.to_string(), run_id) + .observe(0_f64); + test_fail(&test_name.to_string(), &network_name.to_string(), run_id).observe(0_f64); + test_error(&test_name.to_string(), &network_name.to_string(), run_id) + .observe(1_f64); + + "error" + }, + }, + }; + + // log result + info!( + "----- TEST FINISHED test: {} result: {} time: {} -----", + test_name.to_string(), + result_label, + time, + ); + + // emit latency + test_latency( + &test_name.to_string(), + &network_name.to_string(), + run_id, + result_label, + ) + .observe(time); +} + +/// Emit metrics based on result. 
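// Hedged sketch (not part of this patch) of how a test body is expected to combine the
// helpers above: each step is timed with `time_fn!` (assumed to await the call and return
// a `(result, seconds)` pair) and reported through `emit_step_metrics` below, while any
// `RestError`, `ParseIntError`, or `anyhow::Error` bubbles into `TestFailure` via the
// `From` impls above.
//
//   emit_step_metrics(
//       time_fn!(check_balance, TestName::CoinTransfer, &client, address, U64(0)),
//       TestName::CoinTransfer,
//       "check_balance",
//       network_name,
//       run_id,
//   )?;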
+pub fn emit_step_metrics( + output: (Result, f64), + test_name: TestName, + step_name: &str, + network_name: NetworkName, + run_id: &str, +) -> Result { + // deconstruct and get result word + let (result, time) = output; + let result_label = match &result { + Ok(_) => "success", + Err(e) => match e { + TestFailure::Fail(_) => "fail", + TestFailure::Error(_) => "error", + }, + }; + + // log result + info!( + "STEP FINISHED test: {} step: {} result: {} time: {}", + test_name.to_string(), + step_name, + result_label, + time, + ); + + // emit latency + test_step_latency( + &test_name.to_string(), + step_name, + &network_name.to_string(), + run_id, + result_label, + ) + .observe(time); + + result +} diff --git a/crates/aptos-profiler/Cargo.toml b/crates/aptos-profiler/Cargo.toml new file mode 100644 index 0000000000000..ea5558379ff1a --- /dev/null +++ b/crates/aptos-profiler/Cargo.toml @@ -0,0 +1,28 @@ +[package] +name = "aptos-profiler" +version = "0.1.0" + +# Workspace inherited keys +authors = { workspace = true } +edition = { workspace = true } +homepage = { workspace = true } +license = { workspace = true } +publish = { workspace = true } +repository = { workspace = true } +rust-version = { workspace = true } + +[dependencies] +anyhow = { workspace = true } +regex = { workspace = true } + +[target.'cfg(unix)'.dependencies] +pprof = { version = "0.11", features = ["flamegraph"] } +backtrace = { version = "0.3" } +jemallocator = { version = "0.3.2", features = [ + "profiling", + "unprefixed_malloc_on_supported_platforms", +] } +jemalloc-sys = { version = "0.3" } + + + diff --git a/crates/aptos-profiler/src/cpu_profiler.rs b/crates/aptos-profiler/src/cpu_profiler.rs new file mode 100644 index 0000000000000..4f3ef50aaf2e2 --- /dev/null +++ b/crates/aptos-profiler/src/cpu_profiler.rs @@ -0,0 +1,96 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +use crate::{ + utils::{convert_svg_to_string, create_file_with_parents}, + CpuProfilerConfig, Profiler, +}; +use anyhow::Result; +use pprof::ProfilerGuard; +use regex::Regex; +use std::{path::PathBuf, thread, time}; + +pub struct CpuProfiler<'a> { + frequency: i32, + svg_result_path: PathBuf, + guard: Option>, +} + +impl<'a> CpuProfiler<'a> { + pub(crate) fn new(config: &CpuProfilerConfig) -> Self { + Self { + frequency: config.frequency, + svg_result_path: config.svg_result_path.clone(), + guard: None, + } + } + + pub(crate) fn set_guard(&mut self, guard: ProfilerGuard<'a>) -> Result<()> { + self.guard = Some(guard); + Ok(()) + } + + pub(crate) fn destory_guard(&mut self) -> Result<()> { + self.guard = None; + Ok(()) + } + + fn frames_post_processor() -> impl Fn(&mut pprof::Frames) { + let regex = Regex::new(r"^(.*)-(\d*)$").unwrap(); + + move |frames| { + if let Some((_, [name, _])) = regex.captures(&frames.thread_name).map(|c| c.extract()) { + frames.thread_name = name.to_string(); + } + } + } +} + +impl Profiler for CpuProfiler<'_> { + /// Perform CPU profiling for the given duration + fn profile_for(&self, duration_secs: u64, _binary_path: &str) -> Result<()> { + let guard = pprof::ProfilerGuard::new(self.frequency).unwrap(); + thread::sleep(time::Duration::from_secs(duration_secs)); + + if let Ok(report) = guard.report().build() { + let file = create_file_with_parents(self.svg_result_path.as_path())?; + let _result = report.flamegraph(file); + }; + + Ok(()) + } + + /// Start profiling until it is stopped + fn start_profiling(&mut self) -> Result<()> { + let guard = pprof::ProfilerGuard::new(self.frequency).unwrap(); 
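        // The ProfilerGuard keeps sampling for as long as it stays alive, so starting just
        // means holding on to it; `end_profiling` later takes it back, builds the report,
        // and writes the flamegraph to `svg_result_path`.
        //
        // Hedged caller sketch using the handler defined in lib.rs of this crate:
        //
        //   let handler = ProfilerHandler::new(ProfilerConfig::new_with_defaults());
        //   let mut cpu_profiler = handler.get_cpu_profiler();
        //   cpu_profiler.start_profiling()?;
        //   // ... run the workload being profiled ...
        //   cpu_profiler.end_profiling("")?; // the binary path is unused for CPU profiling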
+ self.set_guard(guard)?; + Ok(()) + } + + /// End profiling + fn end_profiling(&mut self, _binary_path: &str) -> Result<()> { + if let Some(guard) = self.guard.take() { + if let Ok(report) = guard + .report() + .frames_post_processor(Self::frames_post_processor()) + .build() + { + let file = create_file_with_parents(self.svg_result_path.as_path())?; + let _result = report.flamegraph(file); + } + self.destory_guard()?; + } + Ok(()) + } + + /// Expose the results as TXT + fn expose_text_results(&self) -> Result { + unimplemented!(); + } + + /// Expose the results as SVG + fn expose_svg_results(&self) -> Result { + let content = convert_svg_to_string(self.svg_result_path.as_path()); + content + } +} diff --git a/crates/aptos-profiler/src/jeprof.py b/crates/aptos-profiler/src/jeprof.py new file mode 100644 index 0000000000000..f8f694cabcfde --- /dev/null +++ b/crates/aptos-profiler/src/jeprof.py @@ -0,0 +1,24 @@ +import subprocess +import sys + +def execute_command(command): + try: + output = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT) + return output.decode('utf-8').strip() + except subprocess.CalledProcessError as e: + return f"Command execution failed with error code {e.returncode}. Output:\n{e.output.decode('utf-8').strip()}" + +text_location = sys.argv[1] +svg_location = sys.argv[2] +binary_path = sys.argv[3] + + + +command = "mkdir profiling_results" +result = execute_command(command) +command = "jeprof --show_bytes " + binary_path + " ./*.heap --svg > " + svg_location +result = execute_command(command) +command = "jeprof --show_bytes " + binary_path + " ./*.heap --text > " + text_location +result = execute_command(command) +command = "rm ./*.heap" +result = execute_command(command) diff --git a/crates/aptos-profiler/src/lib.rs b/crates/aptos-profiler/src/lib.rs new file mode 100644 index 0000000000000..07b4fc7871af6 --- /dev/null +++ b/crates/aptos-profiler/src/lib.rs @@ -0,0 +1,97 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +use crate::{cpu_profiler::CpuProfiler, memory_profiler::MemProfiler}; +use anyhow::Result; +use std::path::PathBuf; + +mod cpu_profiler; +mod memory_profiler; +mod utils; + +#[derive(Debug, Clone)] +pub struct ProfilerConfig { + cpu_profiler_config: Option, + mem_profiler_config: Option, +} + +impl ProfilerConfig { + pub fn new_with_defaults() -> Self { + Self { + cpu_profiler_config: CpuProfilerConfig::new_with_defaults(), + mem_profiler_config: MemProfilerConfig::new_with_defaults(), + } + } +} + +#[derive(Debug, Clone)] +struct CpuProfilerConfig { + frequency: i32, + svg_result_path: PathBuf, +} + +impl CpuProfilerConfig { + pub fn new_with_defaults() -> Option { + Some(Self { + frequency: 100, + svg_result_path: PathBuf::from("./profiling_results/cpu_flamegraph.svg"), + }) + } +} + +#[derive(Debug, Clone)] +struct MemProfilerConfig { + txt_result_path: PathBuf, + svg_result_path: PathBuf, +} + +impl MemProfilerConfig { + pub fn new_with_defaults() -> Option { + Some(Self { + txt_result_path: PathBuf::from("./profiling_results/heap.txt"), + svg_result_path: PathBuf::from("./profiling_results/heap.svg"), + }) + } +} + +/// This defines the interface for caller to start profiling +pub trait Profiler { + // Perform profiling for duration_secs + fn profile_for(&self, duration_secs: u64, binary_path: &str) -> Result<()>; + // Start profiling + fn start_profiling(&mut self) -> Result<()>; + // End profiling + fn end_profiling(&mut self, binary_path: &str) -> Result<()>; + // Expose the results as a JSON 
string for visualization + fn expose_text_results(&self) -> Result; + // Expose the results as a JSON string for visualization + fn expose_svg_results(&self) -> Result; +} + +pub struct ProfilerHandler { + config: ProfilerConfig, +} + +impl ProfilerHandler { + pub fn new(config: ProfilerConfig) -> Self { + Self { config } + } + + pub fn get_cpu_profiler(&self) -> Box { + Box::new(CpuProfiler::new( + self.config + .cpu_profiler_config + .as_ref() + .expect("CPU profiler config is not set"), + )) + } + + pub fn get_mem_profiler(&self) -> Box { + Box::new(MemProfiler::new( + self.config + .mem_profiler_config + .as_ref() + .expect("Memory profiler config is not set"), + )) + } +} diff --git a/crates/aptos-profiler/src/memory_profiler.rs b/crates/aptos-profiler/src/memory_profiler.rs new file mode 100644 index 0000000000000..7e044fe2f13f7 --- /dev/null +++ b/crates/aptos-profiler/src/memory_profiler.rs @@ -0,0 +1,130 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +use crate::{utils::convert_svg_to_string, MemProfilerConfig, Profiler}; +use anyhow::{anyhow, Result}; +use std::{path::PathBuf, process::Command, thread, time::Duration}; + +pub struct MemProfiler { + txt_result_path: PathBuf, + svg_result_path: PathBuf, +} + +impl MemProfiler { + pub(crate) fn new(config: &MemProfilerConfig) -> Self { + Self { + txt_result_path: config.txt_result_path.clone(), + svg_result_path: config.svg_result_path.clone(), + } + } +} + +impl Profiler for MemProfiler { + fn profile_for(&self, duration_secs: u64, binary_path: &str) -> Result<()> { + let mut prof_active: bool = true; + + let result = unsafe { + jemalloc_sys::mallctl( + b"prof.active\0".as_ptr() as *const _, + std::ptr::null_mut(), + std::ptr::null_mut(), + &mut prof_active as *mut _ as *mut _, + std::mem::size_of::(), + ) + }; + + if result != 0 { + return Err(anyhow!("Failed to activate jemalloc profiling")); + } + + thread::sleep(Duration::from_secs(duration_secs)); + + let mut prof_active: bool = false; + let result = unsafe { + jemalloc_sys::mallctl( + b"prof.active\0".as_ptr() as *const _, + std::ptr::null_mut(), + std::ptr::null_mut(), + &mut prof_active as *mut _ as *mut _, + std::mem::size_of::(), + ) + }; + + if result != 0 { + return Err(anyhow!("Failed to deactivate jemalloc profiling")); + } + + // TODO: Run jeprof commands from within Rust, current tries give unresolved errors + Command::new("python3") + .arg("./crates/aptos-profiler/src/jeprof.py") + .arg(self.txt_result_path.to_string_lossy().as_ref()) + .arg(self.svg_result_path.to_string_lossy().as_ref()) + .arg(binary_path) + .output() + .expect("Failed to execute command"); + + Ok(()) + } + + /// Enable memory profiling until it is disabled + fn start_profiling(&mut self) -> Result<()> { + let mut prof_active: bool = true; + + let result = unsafe { + jemalloc_sys::mallctl( + b"prof.active\0".as_ptr() as *const _, + std::ptr::null_mut(), + std::ptr::null_mut(), + &mut prof_active as *mut _ as *mut _, + std::mem::size_of::(), + ) + }; + + if result != 0 { + return Err(anyhow!("Failed to activate jemalloc profiling")); + } + + Ok(()) + } + + /// Disable profiling and run jeprof to obtain results + fn end_profiling(&mut self, binary_path: &str) -> Result<()> { + let mut prof_active: bool = false; + let result = unsafe { + jemalloc_sys::mallctl( + b"prof.active\0".as_ptr() as *const _, + std::ptr::null_mut(), + std::ptr::null_mut(), + &mut prof_active as *mut _ as *mut _, + std::mem::size_of::(), + ) + }; + + if result != 0 { + return 
Err(anyhow!("Failed to deactivate jemalloc profiling")); + } + + // TODO: Run jeprof commands from within Rust, current tries give unresolved errors + Command::new("python3") + .arg("./crates/aptos-profiler/src/jeprof.py") + .arg(self.txt_result_path.to_string_lossy().as_ref()) + .arg(self.svg_result_path.to_string_lossy().as_ref()) + .arg(binary_path) + .output() + .expect("Failed to execute command"); + + Ok(()) + } + + /// Expose the results in TXT format + fn expose_text_results(&self) -> Result { + let content = convert_svg_to_string(self.txt_result_path.as_path()); + content + } + + /// Expose the results in SVG format + fn expose_svg_results(&self) -> Result { + let content = convert_svg_to_string(self.svg_result_path.as_path()); + content + } +} diff --git a/crates/aptos-profiler/src/utils.rs b/crates/aptos-profiler/src/utils.rs new file mode 100644 index 0000000000000..075e9860e34be --- /dev/null +++ b/crates/aptos-profiler/src/utils.rs @@ -0,0 +1,17 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +use anyhow::Result; +use std::{fs, fs::File, path::Path}; + +pub fn convert_svg_to_string(svg_file_path: &Path) -> Result { + fs::read_to_string(svg_file_path).map_err(|e| e.into()) +} + +pub fn create_file_with_parents>(path: P) -> Result { + let path = path.as_ref(); + if let Some(parent) = path.parent() { + std::fs::create_dir_all(parent)?; + } + File::create(path) +} diff --git a/crates/aptos-rest-client/src/faucet.rs b/crates/aptos-rest-client/src/faucet.rs index fba4b04ebc599..16c08a3ebe8a4 100644 --- a/crates/aptos-rest-client/src/faucet.rs +++ b/crates/aptos-rest-client/src/faucet.rs @@ -5,25 +5,19 @@ use crate::{error::FaucetClientError, Client, Result}; use aptos_types::transaction::SignedTransaction; use move_core_types::account_address::AccountAddress; -use reqwest::{Client as ReqwestClient, Url}; +use reqwest::{Client as ReqwestClient, Response, Url}; use std::time::Duration; pub struct FaucetClient { faucet_url: Url, inner: ReqwestClient, rest_client: Client, + token: Option, } impl FaucetClient { pub fn new(faucet_url: Url, rest_url: Url) -> Self { - Self { - faucet_url, - inner: ReqwestClient::builder() - .timeout(Duration::from_secs(10)) - .build() - .unwrap(), - rest_client: Client::new(rest_url), - } + Self::new_from_rest_client(faucet_url, Client::new(rest_url)) } pub fn new_for_testing(faucet_url: Url, rest_url: Url) -> Self { @@ -39,9 +33,28 @@ impl FaucetClient { // versioned API however, so we just set it to `/`. .version_path_base("/".to_string()) .unwrap(), + token: None, + } + } + + pub fn new_from_rest_client(faucet_url: Url, rest_client: Client) -> Self { + Self { + faucet_url, + inner: ReqwestClient::builder() + .timeout(Duration::from_secs(10)) + .build() + .unwrap(), + rest_client, + token: None, } } + // Set auth token. + pub fn with_auth_token(mut self, token: String) -> Self { + self.token = Some(token); + self + } + /// Create an account with zero balance. 
pub async fn create_account(&self, address: AccountAddress) -> Result<()> { let mut url = self.faucet_url.clone(); @@ -49,13 +62,7 @@ impl FaucetClient { let query = format!("auth_key={}&amount=0&return_txns=true", address); url.set_query(Some(&query)); - let response = self - .inner - .post(url) - .header("content-length", 0) - .send() - .await - .map_err(FaucetClientError::request)?; + let response = self.build_and_submit_request(url).await?; let status_code = response.status(); let body = response.text().await.map_err(FaucetClientError::decode)?; if !status_code.is_success() { @@ -83,13 +90,7 @@ impl FaucetClient { // Faucet returns the transaction that creates the account and needs to be waited on before // returning. - let response = self - .inner - .post(url) - .header("content-length", 0) - .send() - .await - .map_err(FaucetClientError::request)?; + let response = self.build_and_submit_request(url).await?; let status_code = response.status(); let body = response.text().await.map_err(FaucetClientError::decode)?; if !status_code.is_success() { @@ -115,4 +116,17 @@ impl FaucetClient { Ok(()) } + + // Helper to carry out requests. + async fn build_and_submit_request(&self, url: Url) -> Result { + // build request + let mut request = self.inner.post(url).header("content-length", 0); + if let Some(token) = &self.token { + request = request.header("Authorization", format!("Bearer {}", token)); + } + + // carry out and return response + let response = request.send().await.map_err(FaucetClientError::request)?; + Ok(response) + } } diff --git a/crates/aptos-rest-client/src/lib.rs b/crates/aptos-rest-client/src/lib.rs index 1a6f653636f47..9bdc733b09828 100644 --- a/crates/aptos-rest-client/src/lib.rs +++ b/crates/aptos-rest-client/src/lib.rs @@ -566,6 +566,7 @@ impl Client { F: Fn(HashValue) -> Fut, Fut: Future>>, { + // TODO: make this configurable const DEFAULT_DELAY: Duration = Duration::from_millis(500); let mut reached_mempool = false; let start = std::time::Instant::now(); @@ -1196,10 +1197,11 @@ impl Client { .into_iter() .map(|event| { let version = event.transaction_version; - let sequence_number = event.event.sequence_number(); + let event = event.event.v1()?; + let sequence_number = event.sequence_number(); Ok(VersionedNewBlockEvent { - event: bcs::from_bytes(event.event.event_data())?, + event: bcs::from_bytes(event.event_data())?, version, sequence_number, }) diff --git a/crates/aptos-rest-client/src/types.rs b/crates/aptos-rest-client/src/types.rs index 9b825ddd81221..fe12ac2ff9001 100644 --- a/crates/aptos-rest-client/src/types.rs +++ b/crates/aptos-rest-client/src/types.rs @@ -40,7 +40,7 @@ where parse_struct_tag(&s).map_err(D::Error::custom) } -#[derive(Clone, Debug, Deserialize)] +#[derive(Clone, Debug, Deserialize, PartialEq)] pub struct Account { #[serde(deserialize_with = "deserialize_from_prefixed_hex_string")] pub authentication_key: AuthenticationKey, diff --git a/crates/aptos-rosetta/src/types/objects.rs b/crates/aptos-rosetta/src/types/objects.rs index 3ae720c820e45..6320d8eb8b6aa 100644 --- a/crates/aptos-rosetta/src/types/objects.rs +++ b/crates/aptos-rosetta/src/types/objects.rs @@ -1723,8 +1723,8 @@ async fn parse_delegation_pool_resource_changes( } else { warn!( "Failed to parse withdraw undelegated event! 
Skipping for {}:{}", - e.key().get_creator_address(), - e.key().get_creation_number() + e.v1()?.key().get_creator_address(), + e.v1()?.key().get_creation_number() ); continue; }; @@ -1817,8 +1817,14 @@ fn filter_events Option, T>( ) -> Vec { events .iter() - .filter(|event| event.key() == event_key) - .sorted_by(|a, b| a.sequence_number().cmp(&b.sequence_number())) + .filter(|event| event.is_v1()) + .filter(|event| event.v1().unwrap().key() == event_key) + .sorted_by(|a, b| { + a.v1() + .unwrap() + .sequence_number() + .cmp(&b.v1().unwrap().sequence_number()) + }) .filter_map(|event| parser(event_key, event)) .collect() } diff --git a/crates/aptos/Cargo.toml b/crates/aptos/Cargo.toml index 0fe593906ad60..d00f00c48de55 100644 --- a/crates/aptos/Cargo.toml +++ b/crates/aptos/Cargo.toml @@ -22,7 +22,6 @@ aptos-cached-packages = { workspace = true } aptos-cli-common = { workspace = true } aptos-config = { workspace = true } aptos-crypto = { workspace = true } -aptos-db-tool = { workspace = true } aptos-debugger = { workspace = true } aptos-faucet-core = { workspace = true } aptos-framework = { workspace = true } @@ -48,7 +47,7 @@ async-trait = { workspace = true } base64 = { workspace = true } bcs = { workspace = true } chrono = { workspace = true } -clap = { workspace = true, features = ["unstable-styles"] } +clap = { workspace = true, features = ["env", "unstable-styles"] } clap_complete = { workspace = true } codespan-reporting = { workspace = true } dirs = { workspace = true } @@ -63,11 +62,8 @@ move-compiler = { workspace = true } move-core-types = { workspace = true } move-coverage = { workspace = true } move-disassembler = { workspace = true } -move-ir-compiler = { workspace = true } move-ir-types = { workspace = true } move-package = { workspace = true } -move-prover = { workspace = true } -move-prover-boogie-backend = { workspace = true } move-symbol-pool = { workspace = true } move-unit-test = { workspace = true, features = [ "debugging" ] } move-vm-runtime = { workspace = true, features = [ "testing" ] } @@ -79,12 +75,10 @@ self_update = { version = "0.34.0", features = ["archive-zip", "compression-zip- serde = { workspace = true } serde_json = { workspace = true } serde_yaml = { workspace = true } -shadow-rs = { workspace = true } tempfile = { workspace = true } termcolor = { workspace = true } thiserror = { workspace = true } tokio = { workspace = true } -tokio-util = { workspace = true } toml = { workspace = true } walkdir = { workspace = true } diff --git a/crates/aptos/e2e/cases/account.py b/crates/aptos/e2e/cases/account.py index 54e38459199ac..b6db5262ae40b 100644 --- a/crates/aptos/e2e/cases/account.py +++ b/crates/aptos/e2e/cases/account.py @@ -149,3 +149,76 @@ def test_account_rotate_key(run_helper: RunHelper, test_name=None): raise TestError( f"lookup-address of new public key does not match original address: {old_profile.account_address}" ) + + +@test_case +def test_account_resource_account(run_helper: RunHelper, test_name=None): + # Seed for the resource account + seed = "1" + + # Create the new resource account. 
+ result = run_helper.run_command( + test_name, + [ + "aptos", + "account", + "create-resource-account", + "--seed", + seed, + "--assume-yes", # assume yes to gas prompt + ], + ) + + result = json.loads(result.stdout) + sender = result["Result"].get("sender") + resource_account_address = result["Result"].get("resource_account") + + if resource_account_address == None or sender == None: + raise TestError("Resource account creation failed") + + # Derive the resource account + result = run_helper.run_command( + test_name, + [ + "aptos", + "account", + "derive-resource-account-address", + "--seed", + seed, + "--address", + sender, + ], + ) + + if resource_account_address not in result.stdout: + raise TestError( + f"derive-resource-account-address result does not match expected: {resource_account_address}" + ) + + # List the resource account + result = run_helper.run_command( + test_name, + [ + "aptos", + "account", + "list", + "--query=resources", + ], + ) + + json_result = json.loads(result.stdout) + found_resource = False + + # Check if the resource account is in the list + for module in json_result["Result"]: + if module.get("0x1::resource_account::Container") != None: + data = module["0x1::resource_account::Container"]["store"]["data"] + for resource in data: + if resource.get("key") == f"0x{resource_account_address}": + found_resource = True + break + + if not found_resource: + raise TestError( + "Cannot find the resource account in the account list after resource account creation" + ) diff --git a/crates/aptos/e2e/main.py b/crates/aptos/e2e/main.py index a2be47afe3367..4d7323fea7c9a 100644 --- a/crates/aptos/e2e/main.py +++ b/crates/aptos/e2e/main.py @@ -33,6 +33,7 @@ test_account_create, test_account_fund_with_faucet, test_account_lookup_address, + test_account_resource_account, test_account_rotate_key, ) from cases.config import test_config_show_profiles @@ -138,6 +139,7 @@ def run_tests(run_helper): test_account_fund_with_faucet(run_helper) test_account_create(run_helper) test_account_lookup_address(run_helper) + test_account_resource_account(run_helper) # Make sure the aptos-cli header is included on the original request test_aptos_header_included(run_helper) diff --git a/crates/aptos/src/account/fund.rs b/crates/aptos/src/account/fund.rs index 9d24d50ac88cb..5bc6be9df1105 100644 --- a/crates/aptos/src/account/fund.rs +++ b/crates/aptos/src/account/fund.rs @@ -3,10 +3,7 @@ use crate::{ account::create::DEFAULT_FUNDED_COINS, - common::{ - types::{CliCommand, CliTypedResult, FaucetOptions, ProfileOptions, RestOptions}, - utils::{fund_account, wait_for_transactions}, - }, + common::types::{CliCommand, CliTypedResult, FaucetOptions, ProfileOptions, RestOptions}, }; use aptos_types::account_address::AccountAddress; use async_trait::async_trait; @@ -51,14 +48,10 @@ impl CliCommand for FundWithFaucet { } else { self.profile_options.account_address()? 
}; - let hashes = fund_account( - self.faucet_options.faucet_url(&self.profile_options)?, - self.amount, - address, - ) - .await?; let client = self.rest_options.client(&self.profile_options)?; - wait_for_transactions(&client, hashes).await?; + self.faucet_options + .fund_account(client, &self.profile_options, self.amount, address) + .await?; return Ok(format!( "Added {} Octas to account {}", self.amount, address diff --git a/crates/aptos/src/account/key_rotation.rs b/crates/aptos/src/account/key_rotation.rs index 081a692d84f85..6ff8fe2b55324 100644 --- a/crates/aptos/src/account/key_rotation.rs +++ b/crates/aptos/src/account/key_rotation.rs @@ -104,6 +104,12 @@ impl CliCommand for RotateKey { let (current_private_key, sender_address) = self.txn_options.get_key_and_address()?; + if new_private_key == current_private_key { + return Err(CliError::CommandArgumentError( + "New private key cannot be the same as the current private key".to_string(), + )); + } + // Get sequence number for account let sequence_number = self.txn_options.sequence_number(sender_address).await?; let auth_key = self.txn_options.auth_key(sender_address).await?; diff --git a/crates/aptos/src/common/init.rs b/crates/aptos/src/common/init.rs index 80156caa16e33..228c72449bed3 100644 --- a/crates/aptos/src/common/init.rs +++ b/crates/aptos/src/common/init.rs @@ -9,7 +9,7 @@ use crate::{ ConfigSearchMode, EncodingOptions, PrivateKeyInputOptions, ProfileConfig, ProfileOptions, PromptOptions, RngArgs, DEFAULT_PROFILE, }, - utils::{fund_account, prompt_yes_with_override, read_line, wait_for_transactions}, + utils::{fund_account, prompt_yes_with_override, read_line}, }, }; use aptos_crypto::{ed25519::Ed25519PrivateKey, PrivateKey, ValidCryptoMaterialStringExt}; @@ -45,6 +45,11 @@ pub struct InitTool { #[clap(long)] pub faucet_url: Option, + /// Auth token, if we're using the faucet. This is only used this time, we don't + /// store it. 
+ #[clap(long, env)] + pub faucet_auth_token: Option, + /// Whether to skip the faucet for a non-faucet endpoint #[clap(long)] pub skip_faucet: bool, @@ -159,15 +164,14 @@ impl CliCommand<()> for InitTool { }; let public_key = private_key.public_key(); - let client = aptos_rest_client::Client::new( - Url::parse( - profile_config - .rest_url - .as_ref() - .expect("Must have rest client as created above"), - ) - .map_err(|err| CliError::UnableToParse("rest_url", err.to_string()))?, - ); + let rest_url = Url::parse( + profile_config + .rest_url + .as_ref() + .expect("Must have rest client as created above"), + ) + .map_err(|err| CliError::UnableToParse("rest_url", err.to_string()))?; + let client = aptos_rest_client::Client::new(rest_url); // lookup the address from onchain instead of deriving it // if this is the rotated key, deriving it will outputs an incorrect address @@ -225,14 +229,15 @@ impl CliCommand<()> for InitTool { "Account {} doesn't exist, creating it and funding it with {} Octas", address, NUM_DEFAULT_OCTAS ); - let hashes = fund_account( + fund_account( + client, Url::parse(faucet_url) .map_err(|err| CliError::UnableToParse("rest_url", err.to_string()))?, - NUM_DEFAULT_OCTAS, + self.faucet_auth_token.as_deref(), address, + NUM_DEFAULT_OCTAS, ) .await?; - wait_for_transactions(&client, hashes).await?; eprintln!("Account {} funded successfully", address); } } else if account_exists { diff --git a/crates/aptos/src/common/types.rs b/crates/aptos/src/common/types.rs index 2297c69cb01b3..2a8249e0fc986 100644 --- a/crates/aptos/src/common/types.rs +++ b/crates/aptos/src/common/types.rs @@ -1,6 +1,7 @@ // Copyright © Aptos Foundation // SPDX-License-Identifier: Apache-2.0 +use super::utils::fund_account; use crate::{ common::{ init::Network, @@ -987,6 +988,10 @@ pub struct MovePackageDir { /// Specify the version of the bytecode the compiler is going to emit. #[clap(long)] pub bytecode_version: Option, + + /// Do not complain about unknown attributes in Move code. + #[clap(long)] + pub skip_attribute_checks: bool, } impl MovePackageDir { @@ -998,6 +1003,7 @@ impl MovePackageDir { named_addresses: Default::default(), skip_fetch_latest_git_deps: true, bytecode_version: None, + skip_attribute_checks: false, } } @@ -1263,14 +1269,22 @@ pub struct FaucetOptions { /// URL for the faucet endpoint e.g. `https://faucet.devnet.aptoslabs.com` #[clap(long)] faucet_url: Option, + + /// Auth token to bypass faucet ratelimits. You can also set this as an environment + /// variable with FAUCET_AUTH_TOKEN. + #[clap(long, env)] + faucet_auth_token: Option, } impl FaucetOptions { - pub fn new(faucet_url: Option) -> Self { - FaucetOptions { faucet_url } + pub fn new(faucet_url: Option, faucet_auth_token: Option) -> Self { + FaucetOptions { + faucet_url, + faucet_auth_token, + } } - pub fn faucet_url(&self, profile: &ProfileOptions) -> CliTypedResult { + fn faucet_url(&self, profile: &ProfileOptions) -> CliTypedResult { if let Some(ref faucet_url) = self.faucet_url { Ok(faucet_url.clone()) } else if let Some(Some(url)) = CliConfig::load_profile( @@ -1282,9 +1296,27 @@ impl FaucetOptions { reqwest::Url::parse(&url) .map_err(|err| CliError::UnableToParse("config faucet_url", err.to_string())) } else { - Err(CliError::CommandArgumentError("No faucet given. Please add --faucet-url or add a faucet URL to the .aptos/config.yaml for the current profile".to_string())) + Err(CliError::CommandArgumentError("No faucet given. 
Please add --faucet-url or add a faucet URL to the .aptos/config.yaml for the current profile".to_string())) } } + + /// Fund an account with the faucet. + pub async fn fund_account( + &self, + rest_client: Client, + profile: &ProfileOptions, + num_octas: u64, + address: AccountAddress, + ) -> CliTypedResult<()> { + fund_account( + rest_client, + self.faucet_url(profile)?, + self.faucet_auth_token.as_deref(), + address, + num_octas, + ) + .await + } } /// Gas price options for manipulating how to prioritize transactions diff --git a/crates/aptos/src/common/utils.rs b/crates/aptos/src/common/utils.rs index b0ef086ea77d7..1f165b96e1a0c 100644 --- a/crates/aptos/src/common/utils.rs +++ b/crates/aptos/src/common/utils.rs @@ -13,7 +13,7 @@ use aptos_build_info::build_information; use aptos_crypto::ed25519::{Ed25519PrivateKey, Ed25519PublicKey}; use aptos_keygen::KeyGen; use aptos_logger::{debug, Level}; -use aptos_rest_client::{aptos_api_types::HashValue, Account, Client, State}; +use aptos_rest_client::{aptos_api_types::HashValue, Account, Client, FaucetClient, State}; use aptos_telemetry::service::telemetry_is_disabled; use aptos_types::{ account_address::create_multisig_account_address, @@ -416,35 +416,23 @@ pub fn read_line(input_name: &'static str) -> CliTypedResult { Ok(input_buf) } -/// Fund account (and possibly create it) from a faucet +/// Fund account (and possibly create it) from a faucet. This function waits for the +/// transaction on behalf of the caller. pub async fn fund_account( + rest_client: Client, faucet_url: Url, - num_octas: u64, + faucet_auth_token: Option<&str>, address: AccountAddress, -) -> CliTypedResult> { - let response = reqwest::Client::new() - .post(format!( - "{}mint?amount={}&auth_key={}", - faucet_url, num_octas, address - )) - .body("{}") - .send() - .await - .map_err(|err| { - CliError::ApiError(format!("Failed to fund account with faucet: {:#}", err)) - })?; - if response.status() == 200 { - let hashes: Vec = response - .json() - .await - .map_err(|err| CliError::UnexpectedError(err.to_string()))?; - Ok(hashes) - } else { - Err(CliError::ApiError(format!( - "Faucet issue: {}", - response.status() - ))) + num_octas: u64, +) -> CliTypedResult<()> { + let mut client = FaucetClient::new_from_rest_client(faucet_url, rest_client); + if let Some(token) = faucet_auth_token { + client = client.with_auth_token(token.to_string()); } + client + .fund(address, num_octas) + .await + .map_err(|err| CliError::ApiError(format!("Faucet issue: {:#}", err))) } /// Wait for transactions, returning an error if any of them fail. 
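Taken together, the CLI changes above now funnel faucet traffic through the shared `FaucetClient`: `FaucetOptions::fund_account` resolves the faucet URL and the optional auth token, and `common::utils::fund_account` builds the client and waits for the funding transaction. A hedged caller sketch follows; the function name is illustrative and the use of `DEFAULT_FUNDED_COINS` simply mirrors `FundWithFaucet::execute` above, this is not a literal excerpt from the patch.

use crate::{
    account::create::DEFAULT_FUNDED_COINS,
    common::types::{CliTypedResult, FaucetOptions, ProfileOptions, RestOptions},
};
use aptos_types::account_address::AccountAddress;

async fn fund_from_cli(
    faucet_options: &FaucetOptions,
    rest_options: &RestOptions,
    profile_options: &ProfileOptions,
    address: AccountAddress,
) -> CliTypedResult<String> {
    // Build the REST client for the selected profile, then let the faucet options attach
    // the URL and auth token and wait for the funding transaction to be committed.
    let client = rest_options.client(profile_options)?;
    faucet_options
        .fund_account(client, profile_options, DEFAULT_FUNDED_COINS, address)
        .await?;
    Ok(format!(
        "Added {} Octas to account {}",
        DEFAULT_FUNDED_COINS, address
    ))
}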
diff --git a/crates/aptos/src/governance/mod.rs b/crates/aptos/src/governance/mod.rs index 708777955a8b1..60a2817d62fa8 100644 --- a/crates/aptos/src/governance/mod.rs +++ b/crates/aptos/src/governance/mod.rs @@ -996,6 +996,7 @@ impl CliCommand<()> for GenerateUpgradeProposal { move_options.skip_fetch_latest_git_deps, move_options.named_addresses(), move_options.bytecode_version, + move_options.skip_attribute_checks, ); let package = BuiltPackage::build(package_path, options)?; let release = ReleasePackage::new(package)?; diff --git a/crates/aptos/src/move_tool/coverage.rs b/crates/aptos/src/move_tool/coverage.rs index 2212aa10bde65..d5dc439c7a9a4 100644 --- a/crates/aptos/src/move_tool/coverage.rs +++ b/crates/aptos/src/move_tool/coverage.rs @@ -2,6 +2,7 @@ // SPDX-License-Identifier: Apache-2.0 use crate::common::types::{CliCommand, CliError, CliResult, CliTypedResult, MovePackageDir}; +use aptos_framework::extended_checks; use async_trait::async_trait; use clap::{Parser, Subcommand}; use move_compiler::compiled_unit::{CompiledUnit, NamedCompiledModule}; @@ -149,6 +150,8 @@ fn compile_coverage( additional_named_addresses: move_options.named_addresses(), test_mode: false, install_dir: move_options.output_dir.clone(), + known_attributes: extended_checks::get_all_attribute_names().clone(), + skip_attribute_checks: false, ..Default::default() }; let path = move_options.get_package_path()?; diff --git a/crates/aptos/src/move_tool/mod.rs b/crates/aptos/src/move_tool/mod.rs index 6c5c1f27695fb..f79cf5d81f684 100644 --- a/crates/aptos/src/move_tool/mod.rs +++ b/crates/aptos/src/move_tool/mod.rs @@ -318,6 +318,7 @@ impl CliCommand> for CompilePackage { self.move_options.skip_fetch_latest_git_deps, self.move_options.named_addresses(), self.move_options.bytecode_version, + self.move_options.skip_attribute_checks, ) }; let pack = BuiltPackage::build(self.move_options.get_package_path()?, build_options) @@ -377,6 +378,7 @@ impl CompileScript { self.move_options.skip_fetch_latest_git_deps, self.move_options.named_addresses(), self.move_options.bytecode_version, + self.move_options.skip_attribute_checks, ) }; let package_dir = self.move_options.get_package_path()?; @@ -449,12 +451,15 @@ impl CliCommand<&'static str> for TestPackage { } async fn execute(self) -> CliTypedResult<&'static str> { + let known_attributes = extended_checks::get_all_attribute_names(); let mut config = BuildConfig { dev_mode: self.move_options.dev, additional_named_addresses: self.move_options.named_addresses(), test_mode: true, install_dir: self.move_options.output_dir.clone(), skip_fetch_latest_git_deps: self.move_options.skip_fetch_latest_git_deps, + known_attributes: known_attributes.clone(), + skip_attribute_checks: self.move_options.skip_attribute_checks, ..Default::default() }; @@ -465,6 +470,8 @@ impl CliCommand<&'static str> for TestPackage { self.move_options.named_addresses(), None, self.move_options.bytecode_version, + self.move_options.skip_attribute_checks, + known_attributes.clone(), )?; let _ = extended_checks::run_extended_checks(model); if model.diag_count(Severity::Warning) > 0 { @@ -500,6 +507,7 @@ impl CliCommand<&'static str> for TestPackage { // Print coverage summary if --coverage is set if self.compute_coverage { + // TODO: config seems to be dead here. 
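            // The flags threaded through above let a package that uses attributes the compiler
            // does not recognize still build: `skip_attribute_checks` silences the
            // unknown-attribute check, while `known_attributes` (from
            // extended_checks::get_all_attribute_names) whitelists the Aptos-specific ones.
            // Hedged CLI example:
            //
            //   aptos move test --skip-attribute-checks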
config.test_mode = false; let summary = SummaryCoverage { summarize_functions: false, @@ -570,6 +578,8 @@ impl CliCommand<&'static str> for ProvePackage { move_options.get_package_path()?.as_path(), move_options.named_addresses(), move_options.bytecode_version, + move_options.skip_attribute_checks, + extended_checks::get_all_attribute_names(), ) }) .await @@ -616,6 +626,8 @@ impl CliCommand<&'static str> for DocumentPackage { docgen_options: Some(docgen_options), skip_fetch_latest_git_deps: move_options.skip_fetch_latest_git_deps, bytecode_version: move_options.bytecode_version, + skip_attribute_checks: move_options.skip_attribute_checks, + known_attributes: extended_checks::get_all_attribute_names().clone(), }; BuiltPackage::build(move_options.get_package_path()?, build_options)?; Ok("succeeded") @@ -681,6 +693,7 @@ impl TryInto for &PublishPackage { self.move_options.skip_fetch_latest_git_deps, self.move_options.named_addresses(), self.move_options.bytecode_version, + self.move_options.skip_attribute_checks, ); let package = BuiltPackage::build(package_path, options) .map_err(|e| CliError::MoveCompilationError(format!("{:#}", e)))?; @@ -696,7 +709,7 @@ impl TryInto for &PublishPackage { if !self.override_size_check && size > MAX_PUBLISH_PACKAGE_SIZE { return Err(CliError::UnexpectedError(format!( "The package is larger than {} bytes ({} bytes)! To lower the size \ - you may want to include less artifacts via `--included-artifacts`. \ + you may want to include fewer artifacts via `--included-artifacts`. \ You can also override this check with `--override-size-check", MAX_PUBLISH_PACKAGE_SIZE, size ))); @@ -748,6 +761,7 @@ impl IncludedArtifacts { skip_fetch_latest_git_deps: bool, named_addresses: BTreeMap, bytecode_version: Option, + skip_attribute_checks: bool, ) -> BuildOptions { use IncludedArtifacts::*; match self { @@ -761,6 +775,8 @@ impl IncludedArtifacts { named_addresses, skip_fetch_latest_git_deps, bytecode_version, + skip_attribute_checks, + known_attributes: extended_checks::get_all_attribute_names().clone(), ..BuildOptions::default() }, Sparse => BuildOptions { @@ -772,6 +788,8 @@ impl IncludedArtifacts { named_addresses, skip_fetch_latest_git_deps, bytecode_version, + skip_attribute_checks, + known_attributes: extended_checks::get_all_attribute_names().clone(), ..BuildOptions::default() }, All => BuildOptions { @@ -783,6 +801,8 @@ impl IncludedArtifacts { named_addresses, skip_fetch_latest_git_deps, bytecode_version, + skip_attribute_checks, + known_attributes: extended_checks::get_all_attribute_names().clone(), ..BuildOptions::default() }, } @@ -924,6 +944,7 @@ impl CliCommand for CreateResourceAccountAndPublishPackage { move_options.skip_fetch_latest_git_deps, move_options.named_addresses(), move_options.bytecode_version, + move_options.skip_attribute_checks, ); let package = BuiltPackage::build(package_path, options)?; let compiled_units = package.extract_code(); @@ -1055,6 +1076,7 @@ impl CliCommand<&'static str> for VerifyPackage { self.move_options.skip_fetch_latest_git_deps, self.move_options.named_addresses(), self.move_options.bytecode_version, + self.move_options.skip_attribute_checks, ) }; let pack = BuiltPackage::build(self.move_options.get_package_path()?, build_options) diff --git a/crates/aptos/src/move_tool/show.rs b/crates/aptos/src/move_tool/show.rs index 3cd10df1543fb..ec40258d72c5a 100644 --- a/crates/aptos/src/move_tool/show.rs +++ b/crates/aptos/src/move_tool/show.rs @@ -64,6 +64,7 @@ impl CliCommand> for ShowAbi { 
self.move_options.skip_fetch_latest_git_deps, self.move_options.named_addresses(), self.move_options.bytecode_version, + self.move_options.skip_attribute_checks, ) }; diff --git a/crates/aptos/src/node/analyze/analyze_validators.rs b/crates/aptos/src/node/analyze/analyze_validators.rs index a5fa20b107274..26deff503040c 100644 --- a/crates/aptos/src/node/analyze/analyze_validators.rs +++ b/crates/aptos/src/node/analyze/analyze_validators.rs @@ -208,14 +208,14 @@ impl AnalyzeValidators { )?; let end = raw_events.len() < batch; for raw_event in raw_events { - if cursor <= raw_event.event.sequence_number() { + if cursor <= raw_event.event.v1()?.sequence_number() { println!( "Duplicate event found for {} : {:?}", cursor, - raw_event.event.sequence_number() + raw_event.event.v1()?.sequence_number() ); } else { - cursor = raw_event.event.sequence_number(); + cursor = raw_event.event.v1()?.sequence_number(); let event = bcs::from_bytes::(raw_event.event.event_data())?; match epoch.cmp(&event.epoch()) { @@ -223,7 +223,7 @@ impl AnalyzeValidators { result.push(VersionedNewBlockEvent { event, version: raw_event.transaction_version, - sequence_number: raw_event.event.sequence_number(), + sequence_number: raw_event.event.v1()?.sequence_number(), }); }, Ordering::Greater => { diff --git a/crates/aptos/src/node/mod.rs b/crates/aptos/src/node/mod.rs index 442bf3b5d2103..22d6ff10a17fb 100644 --- a/crates/aptos/src/node/mod.rs +++ b/crates/aptos/src/node/mod.rs @@ -853,27 +853,36 @@ pub struct ValidatorSetSummary { pub total_joining_power: u128, } +impl ValidatorSetSummary { + fn convert_to_summary_vec( + validator_info: Vec, + ) -> Result, bcs::Error> { + let mut validators: Vec = vec![]; + for validator in validator_info.iter() { + match validator.try_into() { + Ok(validator) => validators.push(validator), + Err(err) => return Err(err), + } + } + Ok(validators) + } +} + impl TryFrom<&ValidatorSet> for ValidatorSetSummary { type Error = bcs::Error; fn try_from(set: &ValidatorSet) -> Result { + let active_validators: Vec = + Self::convert_to_summary_vec(set.active_validators.clone())?; + let pending_inactive: Vec = + Self::convert_to_summary_vec(set.pending_inactive.clone())?; + let pending_active: Vec = + Self::convert_to_summary_vec(set.pending_active.clone())?; Ok(ValidatorSetSummary { scheme: set.scheme, - active_validators: set - .active_validators - .iter() - .filter_map(|validator| validator.try_into().ok()) - .collect(), - pending_inactive: set - .pending_inactive - .iter() - .filter_map(|validator| validator.try_into().ok()) - .collect(), - pending_active: set - .pending_active - .iter() - .filter_map(|validator| validator.try_into().ok()) - .collect(), + active_validators, + pending_inactive, + pending_active, total_voting_power: set.total_voting_power, total_joining_power: set.total_joining_power, }) @@ -976,11 +985,17 @@ impl ValidatorConfig { } pub fn fullnode_network_addresses(&self) -> Result, bcs::Error> { - bcs::from_bytes(&self.fullnode_network_addresses) + match &self.validator_network_addresses.is_empty() { + true => Ok(vec![]), + false => bcs::from_bytes(&self.fullnode_network_addresses), + } } pub fn validator_network_addresses(&self) -> Result, bcs::Error> { - bcs::from_bytes(&self.validator_network_addresses) + match &self.validator_network_addresses.is_empty() { + true => Ok(vec![]), + false => bcs::from_bytes(&self.validator_network_addresses), + } } } @@ -1008,7 +1023,6 @@ impl TryFrom<&ValidatorConfig> for ValidatorConfigSummary { }; Ok(ValidatorConfigSummary { 
consensus_public_key, - // TODO: We should handle if some of these are not parsable validator_network_addresses: config.validator_network_addresses()?, fullnode_network_addresses: config.fullnode_network_addresses()?, validator_index: config.validator_index, diff --git a/crates/aptos/src/test/mod.rs b/crates/aptos/src/test/mod.rs index 725548375750c..a15ffd284d961 100644 --- a/crates/aptos/src/test/mod.rs +++ b/crates/aptos/src/test/mod.rs @@ -541,6 +541,7 @@ impl CliTestFramework { network: Some(Network::Custom), rest_url: Some(self.endpoint.clone()), faucet_url: Some(self.faucet_endpoint.clone()), + faucet_auth_token: None, rng_args: RngArgs::from_seed([0; 32]), private_key_options: PrivateKeyInputOptions::from_private_key(private_key)?, profile_options: Default::default(), @@ -1042,6 +1043,7 @@ impl CliTestFramework { named_addresses: Self::named_addresses(account_strs), skip_fetch_latest_git_deps: true, bytecode_version: None, + skip_attribute_checks: false, } } @@ -1078,7 +1080,7 @@ impl CliTestFramework { } pub fn faucet_options(&self) -> FaucetOptions { - FaucetOptions::new(Some(self.faucet_endpoint.clone())) + FaucetOptions::new(Some(self.faucet_endpoint.clone()), None) } fn transaction_options( diff --git a/crates/reliable-broadcast/src/lib.rs b/crates/reliable-broadcast/src/lib.rs index da7e5d010b889..7eb35c812d95e 100644 --- a/crates/reliable-broadcast/src/lib.rs +++ b/crates/reliable-broadcast/src/lib.rs @@ -11,7 +11,12 @@ pub trait RBMessage: Send + Sync + Clone {} #[async_trait] pub trait RBNetworkSender: Send + Sync { - async fn send_rpc(&self, receiver: Author, message: M, timeout: Duration) -> anyhow::Result; + async fn send_rb_rpc( + &self, + receiver: Author, + message: M, + timeout: Duration, + ) -> anyhow::Result; } pub trait BroadcastStatus { @@ -74,7 +79,7 @@ where ( receiver, network_sender - .send_rpc(receiver, message, Duration::from_millis(500)) + .send_rb_rpc(receiver, message, Duration::from_millis(500)) .await, ) } diff --git a/crates/reliable-broadcast/src/tests.rs b/crates/reliable-broadcast/src/tests.rs index c627ae85e3a87..e1a0c22f75f19 100644 --- a/crates/reliable-broadcast/src/tests.rs +++ b/crates/reliable-broadcast/src/tests.rs @@ -87,7 +87,7 @@ where TestAck: TryFrom + Into, TestMessage: TryFrom + Into, { - async fn send_rpc( + async fn send_rb_rpc( &self, receiver: Author, message: M, diff --git a/dashboards/end-to-end-txn-latency.json b/dashboards/end-to-end-txn-latency.json index 41c29d8567222..2243fe7d4e458 100644 --- a/dashboards/end-to-end-txn-latency.json +++ b/dashboards/end-to-end-txn-latency.json @@ -82,6 +82,7 @@ "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -240,6 +241,7 @@ "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -336,6 +338,7 @@ "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -495,6 +498,7 @@ "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -605,6 +609,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": 
false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -665,6 +670,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -733,6 +739,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -791,6 +798,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -847,6 +855,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -910,6 +919,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -970,6 +980,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1029,6 +1040,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1089,6 +1101,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1149,6 +1162,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1215,6 +1229,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1287,6 +1302,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1359,6 +1375,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1420,6 +1437,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1481,6 +1499,7 @@ "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1559,6 +1578,7 @@ "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1590,7 +1610,7 @@ { "datasource": { "type": "prometheus", "uid": "fHo-R604z" }, "editorMode": "code", - "expr": "sum(rate(aptos_data_client_request_latencies_sum{chain_name=~\"$chain_name\", 
metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"^(public-.*|)fullnode-.*\",role=\"fullnode\",request_type=~\"get_new_transactions_or_outputs_with_proof_compressed|get_transactions_or_outputs_with_proof_compressed\"}[$__rate_interval]))\n/\nsum(rate(aptos_data_client_request_latencies_count{chain_name=~\"$chain_name\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"^(public-.*|)fullnode-.*\",role=~\"fullnode\",request_type=~\"get_new_transactions_or_outputs_with_proof_compressed|get_transactions_or_outputs_with_proof_compressed\"}[$__rate_interval]))", + "expr": "sum(rate(aptos_data_client_request_latencies_sum{chain_name=~\"$chain_name\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"^(public-.*|)fullnode-.*\",role=\"fullnode\",request_type=~\"get_transactions_or_outputs_with_proof_compressed\"}[$__rate_interval]))\n/\nsum(rate(aptos_data_client_request_latencies_count{chain_name=~\"$chain_name\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"^(public-.*|)fullnode-.*\",role=~\"fullnode\",request_type=~\"get_transactions_or_outputs_with_proof_compressed\"}[$__rate_interval]))", "hide": false, "instant": false, "legendFormat": "Fetch Chunk", @@ -1800,8 +1820,8 @@ "multiFormat": "", "name": "interval", "options": [ - { "selected": false, "text": "auto", "value": "$__auto_interval_interval" }, - { "selected": true, "text": "1m", "value": "1m" }, + { "selected": true, "text": "auto", "value": "$__auto_interval_interval" }, + { "selected": false, "text": "1m", "value": "1m" }, { "selected": false, "text": "5m", "value": "5m" }, { "selected": false, "text": "10m", "value": "10m" }, { "selected": false, "text": "30m", "value": "30m" }, @@ -1831,6 +1851,6 @@ "timezone": "", "title": "end-to-end-txn-latency", "uid": "ae591b2c-8a2f-445d-9122-ee53f99400df", - "version": 36, + "version": 37, "weekStart": "" } diff --git a/dashboards/end-to-end-txn-latency.json.gz b/dashboards/end-to-end-txn-latency.json.gz index fb8b9d8bc7956183840c964a4795dedbdcc37cfc..2cc268d78cad55140a06de4b8f7851964625b096 100644 GIT binary patch literal 5187 zcmV-J6uj#niwFP!000001MOXHbK5o&{+?ff<8*H9oajoj{Fb@vjN>#-?%Kqy-Q3(I zISoWY62}zD@Wqbfy1)Gv0N+W;qAXdYw4O{P0R%`ab{{-gEWo>GgecUqY@hmuZF$NI zatBF}Xd0dmzdnjzBuo`KgTOHTw-#h}T1l!-ed^hPt1(^&QT$tne^W*-9a4)bEusX5 z&QnyC{3H{Jsw(6@RSC1`z~sezH(;ssv7wiw8k%jL*{1FCTHVor?zCH^+v&i62L~wM~tm@nI48WV$m?X;m{WkP*zd}`f7!jPICYw?Rm zTx1_Vw~_|y)Ef_M>gw616|=*<_@cXqg_%lE{DwTk zcA@q{j^UeZO4d}(5ucx*g*v1#-SBh0t_%%Nqvn4YbuXZ`{x7!WkE5>Rn&XTwZQJw> zCo*T=lUH7B`H@m8)O+RHj&9#rIi(UB!JiM>#$1bv`-@Llo-xwx?>g%aqTlT&`>{O$v9J)?QJN zh=NIG+&{|9=Er$~Yl@4XMLP01?AL4d&b~>TlHyQ{nI)aA*``SykLh{C1C=rKbK8?P zGZsoi6kch!2VcGl)glVFn|5LniU6w4$x)n;-1u`*3O`!9>f0)O6dUY93rahB8e}@I zJz=0mftP~J;Tv21?XcbZmavbvmp&k-^-**DG9uTyG?9e-i?~f z_nE4h)bk820}H%F;Q96>R}a_HuZDNVEKo$I|ChOT5$}@A_)UKaZ^!XODY@upI$&lF zN8-$LlWJ@NcM}>H_;&uH0d-GJV`PQbv~$_IOK&dxTiI6}_w(h4hH1WcsAl-L0#I|= zfI7MXRY#4m>|zysz<=#ZQNc~vh*^m)rfnihxi-E}Wk<%fggK`xKOFna~0$~6W-c#D*phBqrbJ0Ahh zd9`9xv)DB&Jdp>+G^=r*Pqiz<8p#`uTssh8a?0&t688a&JZACs{@7*k2&V2uJ)Tl_ zdpK0WK}h~(m3=}T2X33Sa+zaTdA&;Asc0V24e7wM&A=~c3+Dm}iY0${iH$O2Fg*;>wA0O@X{U&$ zhjD`KWXZcojE#mjiJDqWUUb8AOnMucpCI~>?4)5~VEY`XS|J6Cs^X(omg&TXn9>Um zdfgNZ)w5kcHt|?SNlWCInC-d@#6B9;MJ=2dC7`MC&uo53ig&zVLUvs0vmG%K_xVWN zpRkDo&FuSMEl8LEP3!|mFiX4b!}WLCm>O1})7*dj`APXM`S}SR4LAfSrz>;hcb*K- 
z?i!lcmrWD7*(81uuLEp5%#VLT7FhI%{o|7oJ`L40=J-tS4@_IT;zeECZG7ukYA;{Jj6$ae{$qXzJecKX#+*;m0SC;m7X`j)|cEpLci3_oS+HG&`{T zb;yT2iGZ^5+5KdxpR6s^AW3#Zsi>$zl_XTqp&v*)i%mdYd@4Y>Ft&#fpUfK1U%w;j z599-UfGo51xYQaI{Q*3K4d3dL5R{E!2*+qlO*N(l0rIrwepixSMWP@J1FY6|^Wtfg z{yepCc1Z~mhfe4s3|E2Z4a1=?MXyIvd%gs=8%+A1&-G`(+t&B({<(e8*8^90s~TlcJfD9N zn$hp`CiW8r0lS&E(ao}#+09;sT?jMa_Hh3GqN+8#YKAp90IQ%D9BMY*2GUJM&3|AX z{Qg4dy=#Nis99C{f2^DGe+i)FhRJr?dxu9YlHnB5a4WcN-*>5ISfhp-Lqa1wV`LS` zLu41To8DC8c%bnsv6Ik1xUpP_n-U?*ss{7AZU(cneHLT8I$u5BQj;lf{Vw@CkLTfk z(i^mWYJoud6hN8=PY$N4ApZne0_uXx^_L1uJ7#y*lF!YgvGJ<;jwoG7@LJu4zg#FiY|t-Z)=CL`Smz{ z`@UxKP~^UMYiZ#^oBle(n>s0n0F553(a#C(EQP6>s> z0kF9PVAh(4F;@A;g!Mh`AJ`u!_~e&M&0?SB`1zY>7$wZU%44m54V5>$`j zkT*1J=d%=?;(kEf4~Y8#aX;Xu?gtcKN;d>{+sC*e5O<*9hCtj9*n}+>jZ&X=>_7S2 zCu>7>2lrL1dS6AAIo&T(a*f|Diu)zDX}^TPN8A*En*!#>xnG6g$c3LiU29ITtKD{( zEnNJ-58^-5LRmgt1{LKC^?~)PNZ@+KZq4$ z1vV*GVEcd&2g$VrS6%oZ+(*Y!Rf{GdDxfZPJw`?@^bjjKXBfi^FuVZ63oyK3%fbud za3!$bP#eS8H#}1qW8`WOI<*Bca(JfjBG!Q6gN<)2o+*rH3gc0j4>umq6vnv0 z>d{f#VlsH9Fajx_DU4`}=NjUf!Z^;ZbDYI9g%MvdfOpaMBf{d?i)RXN;-;6Gkqeu& z@8xM4y0z$tBPX6KyiMl{he5ZSgpf;bQ!UO6#sLn`4SuoMxxpY|ALP82lY@2UX|CZ! zJG+ucz&(HAMY!h=_xy!ranE0Ejub9#`b#VeH~rx*6x{TOoBnWbNezXA=LO@Q2|O>j z+Bl4RA+~8R1fCa+0snYj@HQ=KVNeyG7mTKZ>%kaFg|OOeSjF>#F$Q3rF#y{KayS5P z!CAmjguoI`1J-QQq>jh*d^4+o>UVdq;>+*T0RA);VSYfVf7hZ6Ig~YEw7ZPSB~;H3IV@>_q`(CA3lAEN5SUGV>~7R zpvF8Q9*!A|4C@1Pz0eL%_Bwl=)F`pk5Rhr3gs7>?`F4X_m@feJ0^$`6B;o20Zg)3! z4R<=#jMu!Utlus}Qkz1{r^>2wY;RQdHPw6pV41IO%gg}iOV!*T*Fu`@_guK5A-HlvlNAScQFz#E$ks@M$U|!898&lNNqWDjOyLu z%kg53M9%!goOx5ZF>ZKS;f9wYdo8)8AhR*69R%7H^Q-gO)4BNU>Gg=b6~iSMpX%GH zc+-M!-JtP}T^D!al!-H^-+%sj zP0Obz#c1U@mrvuZ=<8lPMJ|S1tSMX!9bPphXzN(zJZKle_Xt-x*Fo&WDqc9pcl$a6 zQHDRM*+h;*ejx2t$D{T8g`sC@!Dr`TR}j$^WJeEVbQ1uf13=>iE?f1vEczVpNAl@;6?3#PHFrgj5Ep4EmMoNn3i;Y0F8h zVMtq^khW~!u`&2!>^1C3Ee+gKqrk1z5Wts$M5TkAWa~G8^zj3uNO;VdNWdjUN^I`gd5ZQ+KYAU1^@hA$KMkR2jBOkIKO5ZPhA zNNw3+{LaHNdWd^Dksdw;K?(=bZRLkkn!kkvF_OflFhse(7}t!E8Hy{A86q>x7pX5Z zJehS)Nh0gO>nAx2z$n0`2YkJb%69-2{0NYP+VAk#l`68B!^ov;P za`fEII6Qw&lF}RMrFOsGik$d-7%#<9qn9Ch;{7t=+BSb=q^L+yk)k3+EflE@MLiA? 
z{vZ$V$Hkqs$WYgG4&CZsW%%Gb{7T;YH-(Ui$5i<>>fUoTx?bfk51jFG!>4$+;lqtw zQj;qC7H7QSu9DO7!|Qid+2?5~j-bw0()s`GIKjX)H1&_&jZpYX?OB_)pI6uRDs!#m zD~NqfoapmRT;gdi&RmjX}g6wK!6u@r%dL;2gJzj`2&Tvagk@Jy0$%->u%%226c-3Bo19Wv$_IWlj>g zj?0aT#5TKe{t*~#&^3at5p<29Yox)hk zb2&7R@blZ1^L;_>6Q+NTOdpv(GJRzF&1U+uRzdNEZDjfw%r+0lZ}QsAViUEdEYDP) zvu&Q^{d_%KeE7yze>-gVzCFOh1*HHIeQmIt=w;4jk1JkG7!z^B;=e}>Dfg-=cLzB) zX#nR27%`D`ulc=$onFYVg$F&oKkD$X$gh(Pur+)0patNlIv2^xRx>yfnzWZ?PNhD78GG7F==qw%fM@-wAxLe`EM#{ze{q z2$X9g-T}k(KX|E1C8qP;?o^2gY#u>ht^ZX@VNa$<4*D9Ii{1R!5iB=NI z3JmB+c|6+pr;cOZE@ntC(U6`(X`=PG$aS#wRI|iE8`)7J?zO?hQ!b%QN{G?Kar|rhE?X?Wv11HiBvT+t0EC}Mr@}EEjIJs zVEb7>8XM?_IIrl2coyD+KLwvV(#&csVuAM3yP}@r)1QWFwQV1+FV;c?MiOW-`q$o-Hhe!CeD9 zn%v+KGzR@eDk%BMvmq<0~L!Gi}Z2)Zb`JMlSv&hK66v{o}b4R zf<^B=kK`$STteq@(74@up03loGMwLCQyke`YdbYcUGp$2)v8yvE0L08zoNbaA5{< zAMd}cp?a?lsuxpl3Cyg^RL50M_S+{Vu$|7`F1?TEmCexnDI1UPQ`6-dnncYI|D%BX z&nYwb7=ixh25W!r+BuZJ#E;TwGnnBtx-az_ZI!K{r7UhK+-+FJG^QDZ*tNz*5v$s7 zY-0P2B(4tfid`Gl(4{KBSdCOrl{(l^>XJ$4Av1+e;HlDg40-$Qe2xrxP&#dI5f0Ru zWJnMEtd+Yh@-|P=uoOgPn<+$?v{J?*4R+PgKDKa?cB}V z)ynVZ%I{Y!-x+6bKdw~zFkAYtTIo(Vck^z=@_M$oUa9m^w)9c8(w$!J=Dlj=_uG1| z-}H2-RH^&9Gie=6TA(2<=;c;t+&Ub$KxVCu-a67-@B}SUYp&!2p6Tw&fHk(o2wTM` z4{{Q6600y0*oHEYPwBS1`)d10?H*iq_h0PqzvvzBwcG9gDk7pUbH(9hyZwUyJCs+X z>U0e4in%d=i`p7vgIuB!K+EMWJ|qN4Q|`!*(6Xd~NKFL|3p+;cSWZD$wyZ29ts~U( zZcl!QZkM{pdn+rBZ#)Ppl7F(;#VlR*ZB=}HwbXE5w!A2$9F!fLbOv2bJ*M5E+Up(Y x>Pe^DRT(?jAD*1_+HHL(3tT4)S>g-|<%Y4V3r=0douA!5`#&`ZQQ$0&0RWPUB@qAs literal 5175 zcmV-76v*oziwFP!000001MOXHbK5o&{+?gK<8*H9oajoj{Fb@vjN>#-?%Kqy-Q3(I zISoWY62}z5@Wqbfy1)Gv0N+W;qHIZ|w4O{P0R%}bb{{-gEP%UbilP$RW*G`V77~WU@@R`+ zJmMnz__>u-*dX3`z=&&Pt5(bo^Wux{9u{UQJ&|W5;khJ9zdAx(>Y{Sw7hjm)_^sQi zWXrT^e2+!s{w1L9ZN9KKW;Z{>onw)wvV7{~FNfy%7rpgLlD*d^wTxoMlG>Vn^~)c~ zGnfl^FXWiMMWWsbpa?3_{wjo|MGb)&9LEQQ}eO|os>3Jm&zxL&^G6{XIl zImxT36b8TLdw)z_?1$7i0VGncBLQ@LE(URng6GOfO% zpdtz;nQ{B5Gpirx1+FD7eirG-=dfL`={t6lHYL>|HnmC`TW6L<9FH1#!vmEujdSKn zn;8qGQB;1@ZV$eE6{@AE+-};5NvHy-IwwbQLh|6xMJfDf8=B8F_$XG`g&LGL^fbtH zTsEPgMuC@t%;6iR{dU;yeM{LbmSx|?UvTmXku${mA=ybA{ebx#hFT%Pif+aGq%4n#%`BxC9<;h8_@~EQ zKQ`c4MoCR%kC$z_41hiwv&CIF?MWa}=bzd9kn-+$!9>|{iBES#KilX1Y=1&04pg)6 zf3+cD0wk~xAVF>Ywh!0e>0@HreNJNk@#iP?yX5C5cr{=Mpq#GEk>7bTyt`}aUSC#C zvmM2{W10XTf zr(WM-M*P12+i`+{W$N1A^FMZ@>*2>Ikm1MgOpb}5|DShvmG70R($QI9`|FU8vJwGh z<+J?JPE#HoS=dn;QB(>*Dz}#Tc_k6BD175ej&-&-=qHhGQ@K!a-pm;w2A~d7l=T+<{ z3IcXBZ>5`MFSDz?47(6!!0qAu{Y6!4c-0JRZ~#_8EjZL{x(%eOikknxIQadA(0iAG z)QDA8`G2gN@_z}S=B7n=+IxpbEhWP#qV863+rIA--LyvyHHL&nc;?71kcY@FW>>wb z#_>SqmtrTOfpBBF5D!&^EUOyK>$(}t&h}Xha}7Q>yrm{nUi)3;?>t(E|4DDq_NfH| z=_-IU4W1lK-%E8|3{8JRG+n#q(63Yv{H-;dDb+9F=@x1R_OrA5*(IQB22DA<{urv~ zV)*(7@pbKbBwgRz3$WAbj|hu>@56oX5->IksT^d_A{AW>W#85gW%KiK{PBI=;(^9} z@7C7Ci8vD)>Om8#t_L&X7FxyV2aAViLy{vrG?aUOQ?GbC#g$qz={Td3bC^m!tGTtv zh`r{+z2*|IF^7avh#l9|s;KzS`k*46I2LhB1yB>noF00+Nr<_ET22Xt!w#^y9bnd) zhcQ<9=7jb=>O%dN7rl1AAo~3)g?{0;lEr-&e7zEY?X^j7lpUzrh$g5W!y&I|SkGrE zIK|a|xY`d_`{8Q8X0G-VUrLwzb=$|d+z;26;Br4)?$?9`6pd1!b*wJ=+b4TNbq7~X zEPK^Nl@Z%7QgV%73yP~DwrMqlz()-E$B_TIaqd?kIOW1m?~FAk*wt>a%N9;;VOZ75 z!>Z0AaVk6F`5?;Os-}aH|La#CoE64cDumVAz^aU-0!pqB+fbx(Ha|oaBLOxk5@7p) z5If1Wgic-fAlyesN>z&$P&7bY;(AmWxzIwiWRGAB9>CxM3?9JXf#h!kf(PQQf{%*9 z11A_ffWZTgo*Q?DHiilm#L?uyfF~5k-gDf%^ieXxy?_yJ(V&)TPz?fRjM%D^aBx51 zN?^UAHinUHxF0Y^z||lyY6~LZa6jNhtO0`q8(&%64;c3Y#%(Mgt~~Asj8TErW1qIg zWN<%V1XA1&7||5>B*guIv7cRMKa2YTBfesI?xOWagvGuW_XFO*`l_ND24(|1fDOOEQ>fEHS*Q02CCoPy^1ftPlpR|P$dJaAYqldE-OS*l}!4DbAH`kZWqt}{rr?) 
zCpiJsAvU$LbE{$|I6bkNo?W|pY;`m4Lgl`s1@q3w_yI1vd4gDtsSSYf8mo=hz;KQA zhii!0bPU#5Fjyl~nZP!M;drGiymAc3Q3&S+H18GB{BZRF90iLikKmX9Jeul zGVKr4^+G#5+3W0eQhmUa26#;C1H_%G>`gbgh0y>IE1*@uK+>uH;C6RoXK1Gr$#_P4 z%Ixe@NNQ6^_f%O`j?0b8z9zaa04&ov70CD99e<(w`xke2=~lSEf6PXa)4BNNS6d0& zPI0I!#3K9~>4z^lG#BX}(mi=Iqda2GqZSK2U)hDIj@b zDWNk}aj8ztYv`**o)69Jf!;*(jiNAbE~>=S0(;1akr8JuMMjK_c>NC0UblECU96GF zh*!*rH;nskCeAi1DAt`<}*#4aG;%w zy41u$`3As=0Y#W7Y$!<_CvDM64o?}v%@wiAL(yF-5;>(iXBY8rq;yE>vX>&ILrS-V z4G5-rGmG41WA9xaKh6^9JY2o8+b_Key9M<&m^z!UY|MGqR{QX7$ z)$hNZUcP<*u6Ar`$Xo@CU$K}yQcNE>5xY`|;6*@>MobbXQ?Ng3@H9zbAql<{B@Cwy zx939&g%m1#>Dnk%SyMEPK_OczVN1s|JYBWjt7-Gt67Ct$_{y$}8F50ydC~7b|GcJ& z&68r3@SGEy@dEF4&uSvmLZ+3u6rEM*tXlv4<3YO!tw%WjxDMJT=GVgUx!X4=h%)?1 ztR^xT^QJ~ztFKfS#tl+NR#9R=CHWbUo?`mw5<;Sqy%dScHv5EDNlJRtNy$m9 zVMt0=NJ_SEBNaRddJXG1O9Q5~C}3(i^zNmgNa^4rS^Ev3bbJdXCEWN+B;aHpGCO2; z=z^*h+dH1Y_SP^lh-_~Q*&glS3m9-pWs(U z{&4*xlD}2ZpRk+UR`xfg`ddi1yzOj3-j+L9Xr*fs4hl!lcTV;b9GJBxK^_`%kv=q}E?ceLjKlrbB;~vz zUTVeYt;mUYOz~0-G&;D)6KA1>v(EgLk$@rr&0dNG6bb0+2l*@994MZPxaO?0eq+3Si!^kSNA+Why3BB5S3q3$4~UO%G-kg$+b zulbC;PA_E7!uy?GmvneoWY5X6(wd!;=RvcBvdqDrpOzh$bqL}vPr-~``X!*AUxi&Q z$qP7xag>m<=)Jk*cxjM#-=;U*$!U8)9Ju22ncHW9?*zWrzcKwWKVOCofly7vi6+$e zgO{pQVmh1sUb5Nwg!|WY#i8ERjK4ZpEz9Y z$&Fo5LP`4hd68r!e~evbLznJ0&hW*E<}dLiKwsyrR>>^qmq=qww31MkU_d|0Eycb+ zbsXzobI;aE{;ZcIN}Miz2~_pvf4X11O;q}d*dN7 z*V(&)&W$Q&UjHFkC6li*iZ3(BCJf1{8D|xVpj%@*O=z(({07_40@Bz(hsJqDhsLw; z8T>8yh$dZ07zB>#ge2i``^Pdl8Y>&bm6ZfuEC)`Z?@4DaRpotsI&t_Q)lPg6lEEhw zIyD9$mX*jdB|o0=AspCb()GY~&#?DSli+<8jcq z-Fu$Cr*~yKzq^*$Qn^_5aO9SjYozq4o+dw_tEzh0H%cjAiOMC4##=TuGFJhL;HH_b z?5QG0xo^+3&B!UFR?xgGFg>~AWxIh2%9km1Yx^f`H}8tQUqX}XNSySN_9m5jAf;D=wo`*>X049%ai{P;dKT&|%>)C}=I3dsMQGJ}s1=zngo_UEph zL-|YmD2+CQ8BU}7Qm@fg*$P_9;+DeQhE+^snn8%wZCn(ws{KYM%%@7?>QG+MYttUO zMB^80kqT;32OCOVGU+_{rqBpHRr-!0Z@-<-k>M3;r|m7mfjW~6>4BfMa<@g^<|!JM zf~eWmlwnP|p$wN^^nmzFZiXa#ThTJ}3h$wJI>?qjs8+hu&ON+ct^9tj{C>sqopJW@ z<4UCuv!xHKmF{$N5ARkiZ)A%bl}aCFOCMD$-Rb2X-m6xAzis5&%}9r3mAaohlh(nc z1sc+VR&I61t;2B(WY+2!ts|oaZ_pBV&6Rw>Gu>Spu*RksVXOG$0aZdyViraM+farC zs@-;XUuz#}-Gj^S{)_$n7ro=XcDwywRm28nt~k7Gw_osohw_TlJsnfOqHfIJqPoV= zAg5>q&~mwp4+#O%lsobxv@B^LQd2?0!iJG6n^O>$Eh`I2>j<^H+mj!n*`@CB*2;?G z8xOdOr(0RT3ZN*MqE diff --git a/dashboards/execution.json b/dashboards/execution.json index ad58605ff68eb..13d7292f437b8 100644 --- a/dashboards/execution.json +++ b/dashboards/execution.json @@ -49,6 +49,14 @@ ], "liveNow": false, "panels": [ + { + "collapsed": false, + "gridPos": { "h": 1, "w": 24, "x": 0, "y": 0 }, + "id": 64, + "panels": [], + "title": "Block STM Insight", + "type": "row" + }, { "datasource": { "type": "prometheus", "uid": "${Datasource}" }, "fieldConfig": { @@ -64,6 +72,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -85,7 +94,7 @@ }, "overrides": [] }, - "gridPos": { "h": 8, "w": 24, "x": 0, "y": 0 }, + "gridPos": { "h": 8, "w": 24, "x": 0, "y": 1 }, "id": 34, "options": { "legend": { @@ -157,7 +166,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 8 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 9 }, "hiddenSeries": false, "id": 39, "legend": { @@ -178,7 +187,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ 
-250,7 +259,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 8 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 9 }, "hiddenSeries": false, "id": 40, "legend": { @@ -271,7 +280,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -353,7 +362,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 8 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 9 }, "hiddenSeries": false, "id": 41, "legend": { @@ -374,7 +383,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -510,7 +519,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 16 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 17 }, "hiddenSeries": false, "id": 42, "legend": { @@ -531,7 +540,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -604,7 +613,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 16 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 17 }, "hiddenSeries": false, "id": 43, "legend": { @@ -625,7 +634,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -725,7 +734,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 16 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 17 }, "hiddenSeries": false, "id": 44, "legend": { @@ -746,7 +755,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -897,7 +906,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 24 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 25 }, "hiddenSeries": false, "id": 45, "legend": { @@ -918,7 +927,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -1007,7 +1016,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 24 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 25 }, "hiddenSeries": false, "id": 46, "legend": { @@ -1028,7 +1037,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -1076,6 +1085,7 @@ 
"fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1095,9 +1105,18 @@ }, "unit": "s" }, - "overrides": [] + "overrides": [ + { + "__systemRef": "hideSeriesFrom", + "matcher": { + "id": "byNames", + "options": { "mode": "exclude", "names": ["vm_execute_block"], "prefix": "All except:", "readOnly": true } + }, + "properties": [{ "id": "custom.hideFrom", "value": { "legend": false, "tooltip": false, "viz": true } }] + } + ] }, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 24 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 25 }, "id": 48, "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, @@ -1160,7 +1179,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 32 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 33 }, "hiddenSeries": false, "id": 51, "legend": { @@ -1181,7 +1200,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -1219,7 +1238,7 @@ "datasource": { "type": "prometheus", "uid": "${Datasource}" }, "editable": false, "error": false, - "gridPos": { "h": 1, "w": 24, "x": 0, "y": 40 }, + "gridPos": { "h": 1, "w": 24, "x": 0, "y": 41 }, "id": 13, "panels": [], "span": 0, @@ -1239,7 +1258,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 41 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 42 }, "hiddenSeries": false, "id": 9, "legend": { @@ -1260,7 +1279,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -1303,7 +1322,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 41 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 42 }, "hiddenSeries": false, "id": 24, "legend": { @@ -1324,7 +1343,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -1367,7 +1386,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 41 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 42 }, "hiddenSeries": false, "id": 23, "legend": { @@ -1388,7 +1407,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -1431,7 +1450,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 49 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 50 }, "hiddenSeries": false, "id": 18, "legend": { @@ -1452,7 +1471,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, 
"points": false, "renderer": "flot", @@ -1495,7 +1514,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 49 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 50 }, "hiddenSeries": false, "id": 25, "legend": { @@ -1516,7 +1535,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -1559,7 +1578,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 49 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 50 }, "hiddenSeries": false, "id": 26, "legend": { @@ -1580,7 +1599,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -1623,7 +1642,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 57 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 58 }, "hiddenSeries": false, "id": 16, "legend": { @@ -1644,7 +1663,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -1687,7 +1706,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 57 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 58 }, "hiddenSeries": false, "id": 27, "legend": { @@ -1708,7 +1727,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -1751,7 +1770,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 57 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 58 }, "hiddenSeries": false, "id": 28, "legend": { @@ -1772,7 +1791,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -1815,7 +1834,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 65 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 66 }, "hiddenSeries": false, "id": 17, "legend": { @@ -1836,7 +1855,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -1879,7 +1898,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 65 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 66 }, "hiddenSeries": false, "id": 29, "legend": { @@ -1900,7 +1919,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, 
"points": false, "renderer": "flot", @@ -1943,7 +1962,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 65 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 66 }, "hiddenSeries": false, "id": 30, "legend": { @@ -1964,7 +1983,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -2000,7 +2019,7 @@ "datasource": { "type": "prometheus", "uid": "${Datasource}" }, "editable": false, "error": false, - "gridPos": { "h": 1, "w": 24, "x": 0, "y": 73 }, + "gridPos": { "h": 1, "w": 24, "x": 0, "y": 74 }, "id": 11, "panels": [], "span": 0, @@ -2020,7 +2039,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 74 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 75 }, "hiddenSeries": false, "id": 6, "legend": { @@ -2041,7 +2060,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -2084,7 +2103,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 74 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 75 }, "hiddenSeries": false, "id": 15, "legend": { @@ -2105,7 +2124,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -2148,7 +2167,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 74 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 75 }, "hiddenSeries": false, "id": 4, "legend": { @@ -2169,7 +2188,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -2212,7 +2231,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 82 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 83 }, "hiddenSeries": false, "id": 2, "legend": { @@ -2233,7 +2252,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -2276,7 +2295,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 82 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 83 }, "hiddenSeries": false, "id": 20, "legend": { @@ -2297,7 +2316,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -2338,7 +2357,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 82 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 83 
}, "hiddenSeries": false, "id": 50, "legend": { @@ -2359,7 +2378,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -2404,7 +2423,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 90 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 91 }, "hiddenSeries": false, "id": 32, "legend": { @@ -2425,7 +2444,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -2470,7 +2489,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 90 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 91 }, "hiddenSeries": false, "id": 49, "legend": { @@ -2491,7 +2510,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -2675,7 +2694,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 90 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 91 }, "hiddenSeries": false, "id": 22, "legend": { @@ -2696,7 +2715,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -2727,7 +2746,7 @@ }, { "collapsed": false, - "gridPos": { "h": 1, "w": 24, "x": 0, "y": 98 }, + "gridPos": { "h": 1, "w": 24, "x": 0, "y": 99 }, "id": 55, "panels": [], "title": "Execution Per Block Gas", @@ -2745,7 +2764,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 99 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 100 }, "hiddenSeries": false, "id": 52, "legend": { @@ -2766,7 +2785,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -2819,7 +2838,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 99 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 100 }, "hiddenSeries": false, "id": 62, "legend": { @@ -2840,7 +2859,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -2893,7 +2912,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 99 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 100 }, "hiddenSeries": false, "id": 58, "legend": { @@ -2914,7 +2933,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ 
-2967,7 +2986,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 107 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 108 }, "hiddenSeries": false, "id": 61, "legend": { @@ -2988,7 +3007,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -3041,7 +3060,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 107 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 108 }, "hiddenSeries": false, "id": 60, "legend": { @@ -3062,7 +3081,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -3106,7 +3125,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 107 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 108 }, "hiddenSeries": false, "id": 53, "legend": { @@ -3127,7 +3146,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -3183,7 +3202,7 @@ "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 115 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 116 }, "hiddenSeries": false, "id": 63, "legend": { @@ -3204,7 +3223,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.3-cloud.1.14737d80", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -3236,6 +3255,478 @@ { "format": "short", "logBase": 1, "show": true } ], "yaxis": { "align": false } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "description": "The average time spent to dedup the txns in a block", + "editable": false, + "error": false, + "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, + "fill": 0, + "fillGradient": 0, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 116 }, + "hiddenSeries": false, + "id": 70, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "hideEmpty": false, + "hideZero": false, + "max": false, + "min": false, + "rightSide": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { "alertThreshold": true }, + "percentage": false, + "pluginVersion": "10.2.0-59422pre", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "span": 0, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "rate(aptos_execution_transaction_dedup_seconds_sum{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval]) / 
rate(aptos_execution_transaction_dedup_seconds_count{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval])", + "format": "time_series", + "intervalFactor": 1, + "legendFormat": "{{kubernetes_pod_name}}-{{role}}", + "range": true, + "refId": "A" + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Avg Txn dedup time", + "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, + "type": "graph", + "xaxis": { "format": "", "logBase": 0, "mode": "time", "show": true, "values": [] }, + "yaxes": [ + { "format": "µs", "logBase": 1, "show": true }, + { "format": "short", "logBase": 1, "show": true } + ], + "yaxis": { "align": false } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "description": "The time spent waiting for batches, per second", + "editable": false, + "error": false, + "fieldConfig": { "defaults": { "unit": "" }, "overrides": [] }, + "fill": 0, + "fillGradient": 0, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 116 }, + "hiddenSeries": false, + "id": 71, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "hideEmpty": false, + "hideZero": false, + "max": false, + "min": false, + "rightSide": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { "alertThreshold": true }, + "percentage": false, + "pluginVersion": "10.2.0-59422pre", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "span": 0, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "rate(aptos_consensus_batch_wait_duration_sum{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval])", + "format": "time_series", + "intervalFactor": 1, + "legendFormat": "{{kubernetes_pod_name}}-{{role}}", + "range": true, + "refId": "A" + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Batch wait duration (per s)", + "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, + "type": "graph", + "xaxis": { "format": "", "logBase": 0, "mode": "time", "show": true, "values": [] }, + "yaxes": [ + { "format": "µs", "logBase": 1, "show": true }, + { "format": "short", "logBase": 1, "show": true } + ], + "yaxis": { "align": false } + }, + { + "collapsed": false, + "gridPos": { "h": 1, "w": 24, "x": 0, "y": 124 }, + "id": 65, + "panels": [], + "title": "Sharded Execution", + "type": "row" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "description": "", + "editable": false, + "error": false, + "fieldConfig": { "defaults": { "unit": "none" }, "overrides": [] }, + "fill": 0, + "fillGradient": 0, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 125 }, + "hiddenSeries": false, + "id": 66, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "hideEmpty": false, + "hideZero": false, + "max": false, + "min": false, + "rightSide": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + 
"nullPointMode": "null", + "options": { "alertThreshold": true }, + "percentage": false, + "pluginVersion": "10.2.0-59422pre", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "span": 0, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": " avg by (round_id)(max by(shard_id, round_id) (rate(sharded_block_execution_by_rounds_seconds_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval]) ))", + "hide": false, + "legendFormat": "sub_block_execution_{{round_id}}", + "range": true, + "refId": "D" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "rate(sharded_block_execution_seconds_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval])", + "hide": false, + "legendFormat": "sharded_execution", + "range": true, + "refId": "A" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "rate(sharded_execution_result_aggregation_seconds_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval])", + "hide": false, + "legendFormat": "aggregation_result", + "range": true, + "refId": "B" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "rate(wait_for_sharded_output_seconds_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval])", + "hide": false, + "legendFormat": "wait_for_sharded_output", + "range": true, + "refId": "C" + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Sharded execution time in 1s", + "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, + "type": "graph", + "xaxis": { "format": "", "logBase": 0, "mode": "time", "show": true, "values": [] }, + "yaxes": [ + { "$$hashKey": "object:140", "format": "none", "logBase": 1, "show": true }, + { "$$hashKey": "object:141", "format": "short", "logBase": 1, "show": true } + ], + "yaxis": { "align": false } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "description": "", + "editable": false, + "error": false, + "fieldConfig": { "defaults": { "unit": "none" }, "overrides": [] }, + "fill": 0, + "fillGradient": 0, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 125 }, + "hiddenSeries": false, + "id": 68, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "hideEmpty": false, + "hideZero": false, + "max": false, + "min": false, + "rightSide": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { "alertThreshold": true }, + "percentage": false, + "pluginVersion": "10.2.0-59422pre", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "span": 0, + "stack": false, + 
"steppedLine": false, + "targets": [ + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "sum by (round_id) (sum by (round_id, shard_id) (rate(sharded_block_executor_txn_count_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval]) ) / sum by(round_id, shard_id) (rate(sharded_block_execution_by_rounds_seconds_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval]) ) )", + "hide": false, + "legendFormat": "{{round_id}}", + "range": true, + "refId": "D" + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Sharded execution TPS by round", + "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, + "type": "graph", + "xaxis": { "format": "", "logBase": 0, "mode": "time", "show": true, "values": [] }, + "yaxes": [ + { "$$hashKey": "object:140", "format": "none", "logBase": 1, "show": true }, + { "$$hashKey": "object:141", "format": "short", "logBase": 1, "show": true } + ], + "yaxis": { "align": false } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "description": "", + "editable": false, + "error": false, + "fieldConfig": { "defaults": { "unit": "none" }, "overrides": [] }, + "fill": 0, + "fillGradient": 0, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 125 }, + "hiddenSeries": false, + "id": 67, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "hideEmpty": false, + "hideZero": false, + "max": false, + "min": false, + "rightSide": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { "alertThreshold": true }, + "percentage": false, + "pluginVersion": "10.2.0-59422pre", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "span": 0, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "sum by (round_id) (avg by (round_id, shard_id) (rate(sharded_block_executor_txn_count_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval]) ) )", + "hide": false, + "legendFormat": "{{round_id}}", + "range": true, + "refId": "D" + } + ], + "thresholds": [], + "timeRegions": [], + "title": "(to fix) Sharded execution transaction counts by round", + "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, + "type": "graph", + "xaxis": { "format": "", "logBase": 0, "mode": "time", "show": true, "values": [] }, + "yaxes": [ + { "$$hashKey": "object:140", "format": "none", "logBase": 1, "show": true }, + { "$$hashKey": "object:141", "format": "short", "logBase": 1, "show": true } + ], + "yaxis": { "align": false } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "description": "", + "editable": false, + "error": false, + "fieldConfig": { "defaults": { "unit": "none" }, "overrides": [] }, + "fill": 0, + "fillGradient": 0, + "gridPos": 
{ "h": 8, "w": 8, "x": 0, "y": 133 }, + "hiddenSeries": false, + "id": 69, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "hideEmpty": false, + "hideZero": false, + "max": false, + "min": false, + "rightSide": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { "alertThreshold": true }, + "percentage": false, + "pluginVersion": "10.2.0-59422pre", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "span": 0, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": " avg by (round_id)(max by(shard_id, round_id) (rate(sharded_block_execution_by_rounds_seconds_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval]) ))", + "hide": false, + "legendFormat": "sub_block_execution_{{round_id}}", + "range": true, + "refId": "D" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "rate(sharded_block_execution_seconds_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval])", + "hide": false, + "legendFormat": "sharded_execution", + "range": true, + "refId": "A" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "rate(sharded_execution_result_aggregation_seconds_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval])", + "hide": false, + "legendFormat": "aggregation_result", + "range": true, + "refId": "B" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "rate(wait_for_sharded_output_seconds_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval])", + "hide": false, + "legendFormat": "wait_for_sharded_output", + "range": true, + "refId": "C" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "avg(rate(drop_state_view_seconds_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval]))", + "hide": false, + "legendFormat": "drop_state_view", + "range": true, + "refId": "E" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "max(avg by(shard_id) (rate(execute_shard_command_seconds_sum{ chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\", role=~\"$role\"}[$interval])))", + "hide": false, + "legendFormat": "execute_shard_command_seconds", + "range": true, + "refId": "F" + } + ], + "thresholds": [], + "timeRegions": [], + "title": "(test-only)Sharded execution time in 1s", + "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, + "type": 
"graph", + "xaxis": { "format": "", "logBase": 0, "mode": "time", "show": true, "values": [] }, + "yaxes": [ + { "$$hashKey": "object:140", "format": "none", "logBase": 1, "show": true }, + { "$$hashKey": "object:141", "format": "short", "logBase": 1, "show": true } + ], + "yaxis": { "align": false } } ], "refresh": false, @@ -3416,8 +3907,8 @@ "multiFormat": "", "name": "interval", "options": [ - { "selected": true, "text": "auto", "value": "$__auto_interval_interval" }, - { "selected": false, "text": "1m", "value": "1m" }, + { "selected": false, "text": "auto", "value": "$__auto_interval_interval" }, + { "selected": true, "text": "1m", "value": "1m" }, { "selected": false, "text": "5m", "value": "5m" }, { "selected": false, "text": "10m", "value": "10m" }, { "selected": false, "text": "30m", "value": "30m" }, @@ -3447,6 +3938,6 @@ "timezone": "", "title": "execution", "uid": "execution", - "version": 27, + "version": 43, "weekStart": "" } diff --git a/dashboards/execution.json.gz b/dashboards/execution.json.gz index 60f6d1be05c7c2e0d3478e4fdec15267ef0e9e98..f756f309412ebc525cecbae4bff1c55f66d86e38 100644 GIT binary patch literal 8162 zcmV<8A06NyiwFP!000001MOYga@$CfefL*j8IHBwLCKrFg1 zW;W~fwO(i%cw~21`&k~;c6WQ)U8WIkvyp|j1zyNA$ET)Ia@5ov`;B8c9=7UDMr>!l zuN@2q@Za%qUmHB{WiGbaOuWylw6ofOYOgHr1%CFrfO!)h$Ua!+%*0;@`_OQ2#sYo% z%AUi4F|&N$$BQQ@vajFzX@?EwPe%^(47F>;)6u!)qT7dsnK_=yGnVjDlH@n85Ld6R z9P#3F^G|GgKXceJZJxAINA7=w+?(fb_NUJEM`$?_dFE8W-TdV+9KZ0nUvk)gZ*t2h zp4iY>^P8ajfq264p!LETGqCuAQwxzJetvrrVW`IqGsum4Z*2NHv;K!O?;Ls?d~)ny z8V?;J$DLj{jun`0tj;)+mwux8F(`Y?f9pA};auA}Pzglf`$6BhXERH~7JBJvTem`k ze_@`Vzw?rF_rjdzVVa1#$K1g2cXh|(y>FSbIBKsg22aV~h*HFN$=sNp%*)Z2fvJQIO^~W-e0F>ZtErHy*26{*j^LLYA$_ff(ArN? z(-XRUa+Lg?KKEO(d#^1=ztqkzK4~9p-<(W?VlO@DhwNToeep%M#r2#S52ifyGq&aF z?c3z?y9~$yR{-xfjy*Of$($jZ=VKOHfv+5u@)E`Yo;Y(1`9x;%AmF>Y#eCn?6&=Dm zLO*b3xpokVH>UrF+W5%XSI=ERP!+0UIeJa&B^%;m75k<8yeHZAKnm+59OFS#8w)G(ptxDU2dYyl_7 z_nwp#JcCcToq`g&o#=Ob^oR^nqlXpV9cPQr*sQ;3uzb*}`}*bw zOusbkiG1F~b3!r4yh4qbWrv{f>|p9~c)n%$@n~m&?u^I1$Xd&PD$r-lb>X3^vgcYY zszKa18~KhE1_fB)1ew~c^xti&i_k;PZS33?vqEgm2I1uUrrrWdm%ZV1U+I{fj>@|k z+G2w3`MoD47xbJf?t$gxQVe-=RJ5w6i>&G(Z&eRdtLnHa%@qd|l~D~AvvfHNfq4o9 z0yq=Gnz5UnI!=b^yB3=#mPX9N$`x+bj(Cd6mZODCCw>o{(@JHO>T(8MGcdO%zL)QK zqIr;^p5iSPhMYK6OekGm%hVuRLMoD4ULAj`=OY^Gv4HQ0$$H3dxDG1o4IOxrHw=Im zc>d7mz%C5<8P0C?DKqUM63Ur{!c&*w2uKYNxIc6qBYk+ualG7fEO{P(`Mr1dbujz(TzjD{-A~;K?Vzze zRR>w>RZ41aZ}H~4JMH$iq|v)O^=^;ZlcEEIC;0e-AM z>}#8l9mOLG6ruq6j5yYX`qxcCeP&|Ru2D0<5<}T$O?Iz^2 zvRD`5-)lXS#6|(LLVpYwp>x z#%u$Ao@vflgSIs6sph-f4z$40;!$9lfMzO~O#VZCS?+lvc%>fAg|OAobsDb1_svJA zlt8#;Wdpb3z|T?GRynaSf_0h;V4Y=}2aForK9|Fs?}3bNoh5&l1hNnX6Z@5aQ6?BX zeKjdLs)wE@Uq62ab>`j74dx5Zl?UWS$gJe3X_p)o(XsPV-4tQ6lEZ-$FstOah*BnU zD+I(akX%Sg;lb;C(7K0#Q4w2CBJpq_g{75AR`+Em_`@6?yM+XOPHk&2l=LoY9pL(DkFG2-|)8HN#^Z3|uG6^GTLVi>5 zot{MNTV@t8lCGdzgw6|(#UQlHjj|l>Hijer2$>z27T?+5J9*icGb7A6tRGKfC+-)Wl>CXbQ3cRV?FV21E-yj3yi9lKAHLOb>*ORq{NA znN*Sq`3fjWY_}EOU>S0%5Xc6?>0an8&8X+|Drm$;-#14^jLanAU8*<`OjWgd^YWSZ;Uk5lo(7`LWvm{+C|0w2z% ze$&;M?lgVG4K8t!4;W|Go`GGOT{1qSK^6-8du;vm=J6s2{4tA(+ir?L?~R4-XFL*BO5 z0&n}YmbZPBK7qV#^0vv_CU5%@z3tL{5afj~L9KnbA56edNNNlNEZKEy$n`ZdgCX3u+=7Ek*rudmh%L`2dsCK73fDI2GfEiCjApJWM-l za2J&mJz)_UYXy*Aw3FVKJGT$>`$Pn}!S2NRBvY<*H*ExWo2(^)Ja58rj9VVK$;?vB zzeFG1G|*WmWN&!~S3tHh(k)L|3RfOl0r*TXh{lfXj`;g_<;>=$laG%m2+QVxdw_wrFh?Q zRZa(Y^>t&V7wJ_|uc*FI$BM_?TVIGYWCabO8cQuwlO<~`k-n^;FHJ5qwS$2+xZ#^Adf10M%Aw0PFH z9%328ZkPqK3M*KJ1Uo5eI+=>sgkG*!##16qyh zASwAuO1{!p#6Et9PwRAZ`v`qmhqjL-2Uhz?&cSiD-6O~8&Ij5(Lc2$3_sF9zbZ-ei 
z`k>hHkg_))<*<>jMf93i|HWoHWs{G>7#QyzZw!vq-Y=fJKL@)4TBIrLW%>jw-CE|b zr=H(HDzaTLRu+Bfl+lXJFOWMu2l8l_ksMZDE^QAdmr)5=15llMD}YeZ?+lsz@kYWB@3~;DM<6MahR^_zy=cS?H+Z2M3DER9oA1se;(}4L zz7PN-crpLQn<-V~dmz3Mecr=1Kb`7^%Hmixhvoo0O-?e?_HuKeX5Y5{&#E9m+{BAU7{&zF* zyd?B+D}@zy5a_#Qw!fxG|JGorYrkCq=_T=+Mq~btNZM^_D6Lb9u~aSec!i`WFpf58 zffs1%B9(VaV^}1!ii#{#48A3pi6VpC>sjErUg*zgK=;|@{SW!Y1egH5tpcPVh)?(4Pc zQXS5fO5tmjyHC`+PpaKL9B7RXs@;F2-G5Z^{^3k*epc!3=jz?htKB_3)EYmmc)z9I zZdJPbg?jgkYIhHhwZ@OD-G4N&v~gqiXSbJzj)(L9$-EB;>BA`ZhqM0kSs!H9A6oqv zRv(_A&ui80yeF!BxHXHL>0)Yh`RBJsiTM=^L1Nhm=2Pygcvt&J924BKvhO|V$G&Y| z@HOyc?IY<#ru1Uj#=gr*L~N1`|a{{n~rphWIs6;ZNK>b>G70saKU zhsWZBkE(zc8w|C#e|T;_68RdQy5B726I$tIwsds(f)`v-dU4K3D6jF+?W6w(sjZBw HW^4igaXgXy diff --git a/dashboards/overview.json b/dashboards/overview.json index 6fabab36b08e6..a8d1eb2f550a0 100644 --- a/dashboards/overview.json +++ b/dashboards/overview.json @@ -95,7 +95,7 @@ "reduceOptions": { "calcs": ["mean"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "span": 0, "sparkline": {}, "targets": [ @@ -152,7 +152,7 @@ "reduceOptions": { "calcs": ["mean"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "span": 0, "sparkline": {}, "targets": [ @@ -209,7 +209,7 @@ "reduceOptions": { "calcs": ["mean"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "span": 0, "sparkline": {}, "targets": [ @@ -267,7 +267,7 @@ "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "span": 0, "sparkline": {}, "targets": [ @@ -329,7 +329,7 @@ "reduceOptions": { "calcs": ["mean"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "span": 0, "sparkline": {}, "targets": [ @@ -381,7 +381,7 @@ "reduceOptions": { "calcs": ["mean"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "span": 0, "sparkline": {}, "targets": [ @@ -439,7 +439,7 @@ "reduceOptions": { "calcs": ["mean"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "span": 0, "sparkline": {}, "targets": [ @@ -495,7 +495,7 @@ "reduceOptions": { "calcs": ["mean"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "span": 0, "sparkline": {}, "targets": [ @@ -557,7 +557,7 @@ "reduceOptions": { "calcs": ["mean"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "span": 0, "sparkline": {}, "targets": [ @@ -605,7 +605,7 @@ "showUnfilled": true, "valueMode": "color" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "targets": [ { "datasource": { "type": "grafana-bigquery-datasource", "uid": "${BigQuery}" }, @@ -663,7 +663,7 @@ "reduceOptions": { "calcs": ["mean"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": 
"10.1.0-cloud.3.2a3062e8", "span": 0, "sparkline": {}, "targets": [ @@ -714,7 +714,7 @@ "showUnfilled": true, "valueMode": "color" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "targets": [ { "datasource": { "type": "grafana-bigquery-datasource", "uid": "${BigQuery}" }, @@ -756,7 +756,7 @@ "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "targets": [ { "datasource": { "type": "grafana-bigquery-datasource", "uid": "${BigQuery}" }, @@ -807,7 +807,7 @@ "reduceOptions": { "calcs": ["mean"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "span": 0, "sparkline": {}, "targets": [ @@ -867,7 +867,7 @@ "showThresholdLabels": false, "showThresholdMarkers": true }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "targets": [ { "datasource": { "type": "prometheus", "uid": "${Datasource}" }, @@ -915,7 +915,7 @@ "showThresholdLabels": false, "showThresholdMarkers": true }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "targets": [ { "datasource": { "type": "prometheus", "uid": "${Datasource}" }, @@ -957,7 +957,7 @@ "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "targets": [ { "datasource": { "type": "prometheus", "uid": "${Datasource}" }, @@ -993,7 +993,7 @@ "footer": { "countRows": false, "fields": "", "reducer": ["sum"], "show": false }, "showHeader": true }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "scroll": false, "span": 0, "targets": [ @@ -1058,7 +1058,7 @@ "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false }, "textMode": "auto" }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "span": 0, "sparkline": {}, "targets": [ @@ -1103,7 +1103,7 @@ "footer": { "countRows": false, "fields": "", "reducer": ["sum"], "show": false }, "showHeader": true }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "scroll": false, "span": 0, "targets": [ @@ -1164,7 +1164,7 @@ "footer": { "countRows": false, "fields": "", "reducer": ["sum"], "show": false }, "showHeader": true }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "targets": [ { "datasource": { "type": "prometheus", "uid": "${Datasource}" }, @@ -1261,6 +1261,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1324,6 +1325,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1397,7 +1399,7 @@ "footer": { "countRows": false, "fields": "", "reducer": ["sum"], "show": false }, "showHeader": true }, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "targets": [ { "datasource": { "type": "prometheus", "uid": "${Datasource}" }, @@ -1510,7 +1512,7 @@ "showHeader": true, "sortBy": [] }, - "pluginVersion": 
"10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "targets": [ { "datasource": { "type": "prometheus", "uid": "${Datasource}" }, @@ -1572,7 +1574,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -1627,6 +1629,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1686,6 +1689,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1786,7 +1790,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -1851,7 +1855,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -1899,6 +1903,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1952,6 +1957,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -2036,7 +2042,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -2100,7 +2106,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -2163,7 +2169,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -2247,7 +2253,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -2321,7 +2327,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -2385,7 +2391,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -2448,7 +2454,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": 
"flot", @@ -2532,7 +2538,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -2594,6 +2600,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 1, @@ -2649,6 +2656,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 1, @@ -2721,7 +2729,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -2782,7 +2790,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -2843,7 +2851,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -2917,7 +2925,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -2963,6 +2971,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -2999,7 +3008,7 @@ "expr": "quantile(0.5, rate(aptos_core_mempool_txn_commit_latency_sum{stage=~\"commit_accepted\", kubernetes_pod_name=~\"$kubernetes_pod_name\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", scope=\"e2e\"}[1m])/rate(aptos_core_mempool_txn_commit_latency_count{stage=~\"commit_accepted\", kubernetes_pod_name=~\"$kubernetes_pod_name\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", scope=\"e2e\"}[1m])) ", "format": "time_series", "intervalFactor": 1, - "legendFormat": "p50", + "legendFormat": "p50 (mainnet)", "range": true, "refId": "A" }, @@ -3008,7 +3017,7 @@ "editorMode": "code", "expr": "quantile(0.9, rate(aptos_core_mempool_txn_commit_latency_sum{stage=~\"commit_accepted\", kubernetes_pod_name=~\"$kubernetes_pod_name\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", scope=\"e2e\"}[1m])/rate(aptos_core_mempool_txn_commit_latency_count{stage=~\"commit_accepted\", kubernetes_pod_name=~\"$kubernetes_pod_name\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", scope=\"e2e\"}[1m]))", "hide": false, - "legendFormat": "p90", + "legendFormat": "p90 (mainnet)", "range": true, "refId": "B" }, @@ -3017,12 +3026,41 @@ "editorMode": "code", "expr": "quantile(0.99, rate(aptos_core_mempool_txn_commit_latency_sum{stage=~\"commit_accepted\", kubernetes_pod_name=~\"$kubernetes_pod_name\", chain_name=~\"$chain_name\", 
cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", scope=\"e2e\"}[1m])/rate(aptos_core_mempool_txn_commit_latency_count{stage=~\"commit_accepted\", kubernetes_pod_name=~\"$kubernetes_pod_name\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", scope=\"e2e\"}[1m]))", "hide": false, - "legendFormat": "p99", + "legendFormat": "p99 (mainnet)", "range": true, "refId": "C" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "quantile(0.5, rate(aptos_core_mempool_txn_commit_latency_sum{stage=~\"commit_accepted\", kubernetes_pod_name=~\"$kubernetes_pod_name\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", submitted_by=\"peer_validator\"}[1m])/rate(aptos_core_mempool_txn_commit_latency_count{stage=~\"commit_accepted\", kubernetes_pod_name=~\"$kubernetes_pod_name\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", submitted_by=\"peer_validator\"}[1m])) ", + "format": "time_series", + "hide": false, + "intervalFactor": 1, + "legendFormat": "p50", + "range": true, + "refId": "D" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "quantile(0.9, rate(aptos_core_mempool_txn_commit_latency_sum{stage=~\"commit_accepted\", kubernetes_pod_name=~\"$kubernetes_pod_name\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", submitted_by=\"peer_validator\"}[1m])/rate(aptos_core_mempool_txn_commit_latency_count{stage=~\"commit_accepted\", kubernetes_pod_name=~\"$kubernetes_pod_name\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", submitted_by=\"peer_validator\"}[1m])) ", + "hide": false, + "legendFormat": "p90", + "range": true, + "refId": "E" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "quantile(0.99, rate(aptos_core_mempool_txn_commit_latency_sum{stage=~\"commit_accepted\", kubernetes_pod_name=~\"$kubernetes_pod_name\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", submitted_by=\"peer_validator\"}[1m])/rate(aptos_core_mempool_txn_commit_latency_count{stage=~\"commit_accepted\", kubernetes_pod_name=~\"$kubernetes_pod_name\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", submitted_by=\"peer_validator\"}[1m])) ", + "hide": false, + "legendFormat": "p99", + "range": true, + "refId": "F" } ], - "title": "e2e latency", + "title": "Validator e2e latency", "type": "timeseries" }, { @@ -3057,7 +3095,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.3.f250259e", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -3120,6 +3158,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -3180,6 +3219,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, 
"pointSize": 5, @@ -3200,7 +3240,16 @@ }, "unit": "s" }, - "overrides": [] + "overrides": [ + { + "__systemRef": "hideSeriesFrom", + "matcher": { + "id": "byNames", + "options": { "mode": "exclude", "names": ["avg"], "prefix": "All except:", "readOnly": true } + }, + "properties": [{ "id": "custom.hideFrom", "value": { "legend": false, "tooltip": false, "viz": true } }] + } + ] }, "gridPos": { "h": 8, "w": 8, "x": 8, "y": 96 }, "id": 158, @@ -3210,6 +3259,17 @@ }, "pluginVersion": "9.1.1", "targets": [ + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "sum (rate(aptos_core_mempool_txn_commit_latency_sum{stage=~\"commit_accepted\", kubernetes_pod_name=~\"pfn.*\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=\"vmagent\", namespace=~\"$namespace\", submitted_by=\"client\"}[1m]))/sum(rate(aptos_core_mempool_txn_commit_latency_count{stage=~\"commit_accepted\", kubernetes_pod_name=~\"pfn.*\", chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=\"vmagent\", namespace=~\"$namespace\", submitted_by=\"client\"}[1m]))", + "format": "time_series", + "hide": false, + "intervalFactor": 1, + "legendFormat": "avg", + "range": true, + "refId": "D" + }, { "datasource": { "type": "prometheus", "uid": "${Datasource}" }, "editorMode": "code", @@ -3258,6 +3318,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -3344,6 +3405,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -3399,6 +3461,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -3464,6 +3527,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -3547,6 +3611,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -3606,6 +3671,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -3664,6 +3730,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -3722,6 +3789,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -3781,6 +3849,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -3847,13 +3916,7 @@ "thresholdsStyle": { "mode": "off" } }, "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { "color": "green", "value": null }, - { "color": "red", "value": 80 } - ] - } + "thresholds": { "mode": "absolute", "steps": [{ "color": 
"green" }, { "color": "red", "value": 80 }] } }, "overrides": [] }, @@ -4051,6 +4114,6 @@ "timezone": "", "title": "overview", "uid": "overview", - "version": 21, + "version": 24, "weekStart": "" } diff --git a/dashboards/overview.json.gz b/dashboards/overview.json.gz index 9d36af4ecdf114566834016f1e175b4c82391494..9d26e4216a4877faf920c47a2cee99ed0ef0b79b 100644 GIT binary patch delta 6326 zcmV;n7)j@_Prp&H-2)V+y4l^(EhjWKw>EottGnAHdrGyl@&i>2e*sNZ37&NV&sQj| zay>Q@Euy9~JF7`zPy=Ehuz+mT=l?=25ZzdE*Nh>6CWm3=E!EBNNl8N1G{(24|Ble1 zLX7_>o>L}f@onOAI@>{3aub8xf&@pICnBsCBo%HNVDEs=DMa^lbJ+$Bh7{ac5mws; zL!SmoJO)K&vgkVbf5%52Hs}O%Tn7e>xqBW*pmHw*?L<5V>?OO4*d4SIpJ?m5j+M-k z(M0U!OGZ=Uz?5n(&!t`w(K6glF3$(1LEa1FH7lvn#|mYqTqoJYRoYo}!C_L}Kr`!C zG^P>z9vj_U-*!@|jkrnf$#kwK?rlNPo#8M$e<){=mO{Uie+aB){QH5!KSYY134r5VJF?`hYN{KWJ6w3KT`a9998&bcMU zkrYQ#9PtiGaaK=pmNLn>+sVcBEn_(k1e%vDXDKYFZ~Ngm!wcGR4D~U-Wf>(5Be{qE z@d>kpXk;eMe^5iI60aSd805d`aa*PBz|@dtkzpV-Ab&<^BHomxo<^w9?(yF`QO~qy z?dT``8_&D;?N{*M`p>=Aj$G5>z0T{6jUn681E`_%H+c`B-gc4L>`x{((gSGOiIX(S8R-iPkBsnC_RrJX%K6zJX`2- zhuNaSby~o_QY=qj$;5f?1HjV+P2*SeD}7}_`<`dv5_2ClACrlXW`KUClly^#=rn_m zzHqcD?v0&mYwt_~ee&*oce}f_yS1mP>cOGfe|tnck?HpoZ}swuYB$wWWm}3Tg>$%H zM3pnWD4g~nYaUyp7kos`|a`e-p( zww6wd@W#x}e`L5H3)a!iBz%ngbTSzQY1*%t(%;$=-p^^l8MmYIRftYJORm$|kQ99w zf4@BN&&yf{F)O?zRS!Lnmp`9F&6f{jm$@gE<(0c&MdynYQrY<;`Baah_gSf#lnq=b z&YlFg5HkhkCtM2XOI?Tjoc?5qH+eUuVstuw@Jb06p-m3lDLGj}$-8~TGi|u2e&^r= z%yWv$TS&j&%cd7a2D%kU3Zp5?&~k);e^1<2Kh6lcsIPQVHlmBc;i%J8;j`3&aEbAm z2aU_-)r!%So-0XDTW{rK)UvZMbZ0y^19})Ui^O?9Behro`6)Rt;+s0YXwMU4e8xcH zC5=#sC_cq86O%ZJXv}6Jc#O#@8Ksd+3TXH)?npGC2SfS@1}XB1EL+f8+e~-jN zJDs2H#zv7A%cyA^=Cx^r?98PC+_IRH=_hWfk179v7~Yc}Mm~BAtU3BZvWRbT>n>7? zQBe||m%*t!d`l(_u@^5W8UGidYF%*#^mOx|)U95XOu!ettoJPE`82Ph{rF8uJ+iV= zcF>eitFj4&`>H~3(IzPrHT&(%f7?ss=q~Mucz!0b`AW%A1DBVw_WshGsQ-bSAqsA7 zzw9pQP2E&)YMWe>LW!E@PQ8lL?N;K2#8qIK5}kQnH*d4-mDw!Z0A7K6^_0u1V^F5% zny#wO!Cd3Wg7xRFrBiX=UT7MvrWW(ji#FOoJ@j&|X#2EPUk@5zWaS|ge;tYOHA@J3 zb3RJQ z-~$o8BpG}PGWg(WzoI`{;jG_=awP72SAFDf#s{yVbzC~>%bp?z7;!}&L-;yP9Uu=J zhuu)|d1#2_PQ`9$E$7`Ef0>p2lBn=g4uBa0wEEK>-8ZGp(`fIC?SgkjulC**X`o#` zl%%Ug9b7GT^O7~2t3@vjj3r$yq^m_HUAkJ#N5SG~Af-D^BTCRtXK`)9r~mHZ)*HvW z(oEw`Kh?G|_{S&mWX_9TuE0V7)V%u`n!F0`9v>2apJs_7rNTmy-WKo~#bm!5}1)lR{oYPJsA7@4+8Z9Jk5qn4KrQPLr5!K~vb z9K$f6v06gc4?Kd$=|%^3^t+ERq9dOfK}g3og74klWp`wUE)wRp8c$$$=OZxN>0U=k zU?hQ&1ZH7QS7S1E0pT>NZ?1_Jf2%5uDJYFiJ2hr=S~;cqJ{Mi78$bCPW61i`lQQ3? 
zVt?PLX`X0It#o?{+q_%Zq*jM*v8Z@k!#0s2eB*$YQb8-IQUlbcd1CZ56|5RBr)UlV zcRhSR%-i_EFEz-1fu|PLyZXX=dkzvHzb~YZs}c-g&|#!sUNYSafq(&ZZ0dn+K3U3t3%sqNkK$2N8z{t19D(Tv1f}=@ekm4@zRx7FQ&k z{>PnO=djXFf7zZ|HzoLhH&8{qUX|)Z4b%z$xHf3yf3d^ed~Onygmc}YiFKP;W?Dnd z7CebPZ(*tnw(d<*dyE!el{iXkk#1gxRK`Nlg@861QJXjStEf&*3RO8ZWLF>k63sDl#7$Ea4b<}?7p~FWxtK#e4#EJZA?pnQX8lS#6ckp&_@PU zXU^Zcb))Gn`wg65FI5#Lu6p?v%YWj22{;r-S*n^)MQdQGkL;bQ7{cBOQXv^>rF3dI z=hYXH9I8PMA<;30A^;#e#4n~kq}7lgf57`>rx=(Z+FzE!q4X%THWe732vBbm1BiC3-|&k)>0;v32%<-I?oV^BKwgeLf#hR)3iE@eBFl)d5dQ z|3jn{8iq&-ky7Y$L`vl#rT0YvrQ;Pq=`8}KQh^dvkT;g?If86Dr1~`-?GAj1K&9wp z`8g6%>vDJIyH(%OdL>Cdi&ZbzX%ssUp>(^ykl%=M{o(W*9*#h~6d8)gXP@!$*;Ht> z@pBMyh!h0Lk{GEUw0{||?eW?QVj%s|jO5ZlLHwL&u8ia#dyz8?yaC+-6oHB5*j6NI z2GVbN>sWYaDBp3(y(lFe{5(<;)pL4p0>r=S zz#=DjKm{)UZL_Y{Q9?f}y+`U_Ff@I}sBPgyu`pcsD5BfxzkdyVa>WBIC$`%jJr3cBH>hhAthY zttwaP0Pl3j)7CXv27~-zSOd)*^7FNbJRBY6OBtd~ome@qb`? zytofZu78-^$HR5^aoLu6`{I3HmR(^m@lrFj_CO!=Ywc0$=036+>#uRP1zn4kqgxze z<_HV>pjL9OEP)+{UH-KQw{1w{fqkNzdZJ*>8vGoUJ{xD}sPfA|JP~Q>Qxtzam9MGO zh#ci%z{qx0FmLZ4;%Yc1C4H^+)zwU{Jz{PnphHV?g? zo}FBLJo)eElk*74hAvl(YKzuF0oL0N$*ut#AHpuN3R1LE7)9rk!9XZsVt&bxExS}P zLM~YRO3S1&M7+|i``}wu6(sNC1U8a)Zu0g`=XwzM60gBS$Rkie$}d~6a~kpq|2wDr zpMN3J|L)QL&l(edZ0s7epW@%CH5aVv7L*#qt0k-OpWs4&5Y8ukCdu!z(t zcy{=F9(JlBU)DkDzv?fv{E;owrboUo6?4~6@qd;EVz{5a~0zxbCN^8b~Q$2|N? z{sIrNH~+wW2VsC~o!*Gcl5?ULuYFhCAsAnay&FoWUx9Y`^Ikl>_&=ElKP&o90s!06 B*--!h diff --git a/dashboards/storage-backup-and-restore.json b/dashboards/storage-backup-and-restore.json index 001a54d34075b..aefe6f587badf 100644 --- a/dashboards/storage-backup-and-restore.json +++ b/dashboards/storage-backup-and-restore.json @@ -34,8 +34,16 @@ "panels": [ { "collapsed": false, - "datasource": { "type": "prometheus", "uid": "${Datasource}" }, "gridPos": { "h": 1, "w": 24, "x": 0, "y": 0 }, + "id": 3305, + "panels": [], + "title": "Resource usage", + "type": "row" + }, + { + "collapsed": false, + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "gridPos": { "h": 1, "w": 24, "x": 0, "y": 1 }, "id": 284, "panels": [], "targets": [{ "datasource": { "type": "prometheus", "uid": "${Datasource}" }, "refId": "A" }], @@ -55,7 +63,7 @@ }, "overrides": [] }, - "gridPos": { "h": 7, "w": 17, "x": 0, "y": 1 }, + "gridPos": { "h": 7, "w": 17, "x": 0, "y": 2 }, "id": 255, "options": { "displayMode": "lcd", @@ -66,7 +74,7 @@ "showUnfilled": true, "valueMode": "color" }, - "pluginVersion": "10.0.3-cloud.4.aed62623", + "pluginVersion": "10.2.0-59422pre", "repeatDirection": "h", "targets": [ { @@ -81,7 +89,7 @@ { "datasource": { "type": "prometheus", "uid": "${Datasource}" }, "editorMode": "code", - "expr": "last_over_time(aptos_db_backup_coordinator_heartbeat_timestamp_s{cluster=~ \".*testnet.*\"}[7d]) * 1000", + "expr": "last_over_time(aptos_db_backup_coordinator_heartbeat_timestamp_s{cluster=~\"$cluster\", chain_name=\"$chain_name\"}[7d]) * 1000", "interval": "", "legendFormat": " {{cluster}} {{kubernetes_pod_name}}", "range": true, @@ -99,12 +107,12 @@ "color": { "mode": "continuous-RdYlGr" }, "links": [], "mappings": [], - "thresholds": { "mode": "absolute", "steps": [{ "color": "green" }] }, + "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] }, "unit": "none" }, "overrides": [] }, - "gridPos": { "h": 7, "w": 8, "x": 0, "y": 8 }, + "gridPos": { "h": 7, "w": 8, "x": 0, "y": 9 }, "id": 3299, "options": { "displayMode": "lcd", @@ -115,7 +123,7 @@ "showUnfilled": true, "valueMode": "color" }, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": 
"10.2.0-59422pre", "repeatDirection": "h", "targets": [ { @@ -155,6 +163,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -166,7 +175,13 @@ }, "links": [], "mappings": [], - "thresholds": { "mode": "absolute", "steps": [{ "color": "green" }, { "color": "red", "value": 80 }] }, + "thresholds": { + "mode": "absolute", + "steps": [ + { "color": "green", "value": null }, + { "color": "red", "value": 80 } + ] + }, "unit": "none" }, "overrides": [ @@ -176,7 +191,7 @@ } ] }, - "gridPos": { "h": 7, "w": 9, "x": 8, "y": 8 }, + "gridPos": { "h": 7, "w": 9, "x": 8, "y": 9 }, "id": 741, "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, @@ -234,12 +249,12 @@ "color": { "mode": "continuous-RdYlGr" }, "links": [], "mappings": [], - "thresholds": { "mode": "absolute", "steps": [{ "color": "green" }] }, + "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] }, "unit": "none" }, "overrides": [] }, - "gridPos": { "h": 7, "w": 8, "x": 0, "y": 15 }, + "gridPos": { "h": 7, "w": 8, "x": 0, "y": 16 }, "id": 3298, "options": { "displayMode": "lcd", @@ -250,7 +265,7 @@ "showUnfilled": true, "valueMode": "color" }, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": "10.2.0-59422pre", "repeatDirection": "h", "targets": [ { @@ -291,6 +306,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -302,7 +318,7 @@ }, "links": [], "mappings": [], - "thresholds": { "mode": "absolute", "steps": [{ "color": "green" }] }, + "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] }, "unit": "none" }, "overrides": [ @@ -312,7 +328,7 @@ } ] }, - "gridPos": { "h": 7, "w": 8, "x": 8, "y": 15 }, + "gridPos": { "h": 7, "w": 8, "x": 8, "y": 16 }, "id": 746, "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, @@ -374,7 +390,7 @@ "fieldConfig": { "defaults": { "links": [] }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 7, "w": 8, "x": 16, "y": 15 }, + "gridPos": { "h": 7, "w": 8, "x": 16, "y": 16 }, "hiddenSeries": false, "id": 659, "legend": { @@ -392,7 +408,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -432,12 +448,12 @@ "color": { "mode": "continuous-RdYlGr" }, "links": [], "mappings": [], - "thresholds": { "mode": "absolute", "steps": [{ "color": "green" }] }, + "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] }, "unit": "none" }, "overrides": [] }, - "gridPos": { "h": 7, "w": 8, "x": 0, "y": 22 }, + "gridPos": { "h": 7, "w": 8, "x": 0, "y": 23 }, "id": 3300, "options": { "displayMode": "lcd", @@ -448,7 +464,7 @@ "showUnfilled": true, "valueMode": "color" }, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": "10.2.0-59422pre", "repeatDirection": "h", "targets": [ { @@ -489,6 +505,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -500,7 +517,7 @@ }, "links": 
[], "mappings": [], - "thresholds": { "mode": "absolute", "steps": [{ "color": "green" }] }, + "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] }, "unit": "none" }, "overrides": [ @@ -510,7 +527,7 @@ } ] }, - "gridPos": { "h": 7, "w": 8, "x": 8, "y": 22 }, + "gridPos": { "h": 7, "w": 8, "x": 8, "y": 23 }, "id": 660, "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, @@ -581,7 +598,7 @@ "fieldConfig": { "defaults": { "links": [] }, "overrides": [] }, "fill": 0, "fillGradient": 0, - "gridPos": { "h": 7, "w": 8, "x": 16, "y": 22 }, + "gridPos": { "h": 7, "w": 8, "x": 16, "y": 23 }, "hiddenSeries": false, "id": 658, "legend": { @@ -599,7 +616,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": "10.2.0-59422pre", "pointradius": 2, "points": false, "renderer": "flot", @@ -647,6 +664,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -658,7 +676,13 @@ }, "links": [], "mappings": [], - "thresholds": { "mode": "absolute", "steps": [{ "color": "green" }, { "color": "red", "value": 80 }] }, + "thresholds": { + "mode": "absolute", + "steps": [ + { "color": "green", "value": null }, + { "color": "red", "value": 80 } + ] + }, "unit": "none" }, "overrides": [ @@ -668,7 +692,7 @@ } ] }, - "gridPos": { "h": 7, "w": 8, "x": 16, "y": 29 }, + "gridPos": { "h": 7, "w": 8, "x": 16, "y": 30 }, "id": 744, "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, @@ -702,7 +726,7 @@ { "collapsed": true, "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "gridPos": { "h": 1, "w": 24, "x": 0, "y": 36 }, + "gridPos": { "h": 1, "w": 24, "x": 0, "y": 37 }, "id": 3289, "panels": [ { @@ -718,7 +742,7 @@ }, "overrides": [] }, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 37 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 2 }, "id": 3302, "options": { "displayMode": "lcd", @@ -729,7 +753,7 @@ "showUnfilled": true, "valueMode": "color" }, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "repeatDirection": "h", "targets": [ { @@ -763,7 +787,7 @@ "description": "Epoch of the epoch ending LedgerInfo verified", "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 37 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 2 }, "hiddenSeries": false, "id": 3292, "legend": { @@ -780,7 +804,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -820,7 +844,7 @@ "description": "Version of the epoch ending LedgerInfo verified", "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 37 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 2 }, "hiddenSeries": false, "id": 3294, "legend": { @@ -837,7 +861,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -881,7 +905,7 @@ }, "overrides": [] }, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 45 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 10 }, "id": 3301, 
"options": { "displayMode": "lcd", @@ -892,7 +916,7 @@ "showUnfilled": true, "valueMode": "color" }, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "repeatDirection": "h", "targets": [ { @@ -926,7 +950,7 @@ "description": "Version of the transaction verified", "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 45 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 10 }, "hiddenSeries": false, "id": 3293, "legend": { @@ -943,7 +967,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -983,7 +1007,7 @@ "description": "The speed of transaction verification.", "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 45 }, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 10 }, "hiddenSeries": false, "id": 3296, "legend": { @@ -1000,7 +1024,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -1044,7 +1068,7 @@ }, "overrides": [] }, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 53 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 18 }, "id": 3303, "options": { "displayMode": "lcd", @@ -1055,7 +1079,7 @@ "showUnfilled": true, "valueMode": "color" }, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "repeatDirection": "h", "targets": [ { @@ -1089,7 +1113,7 @@ "description": "Version of the state snapshot being verified", "fill": 0, "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 53 }, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 18 }, "hiddenSeries": false, "id": 3295, "legend": { @@ -1106,7 +1130,7 @@ "nullPointMode": "null", "options": { "alertThreshold": true }, "percentage": false, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", + "pluginVersion": "10.1.0-cloud.3.2a3062e8", "pointradius": 2, "points": false, "renderer": "flot", @@ -1153,6 +1177,7 @@ "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, + "insertNulls": false, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, @@ -1180,7 +1205,7 @@ } ] }, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 61 }, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 26 }, "id": 3287, "options": { "legend": { "calcs": ["lastNotNull"], "displayMode": "table", "placement": "right", "showLegend": false }, @@ -1227,7 +1252,7 @@ { "collapsed": true, "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "gridPos": { "h": 1, "w": 24, "x": 0, "y": 37 }, + "gridPos": { "h": 1, "w": 24, "x": 0, "y": 38 }, "id": 240, "panels": [ { @@ -1352,7 +1377,7 @@ { "collapsed": true, "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "gridPos": { "h": 1, "w": 24, "x": 0, "y": 38 }, + "gridPos": { "h": 1, "w": 24, "x": 0, "y": 39 }, "id": 1101, "panels": [ { @@ -1847,344 +1872,345 @@ "type": "row" }, { - "collapsed": false, + "collapsed": true, "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "gridPos": { "h": 1, "w": 24, "x": 0, "y": 39 }, + "gridPos": { "h": 1, "w": 24, "x": 0, "y": 40 }, "id": 1084, - "panels": [], - "targets": [{ "datasource": { "type": "prometheus", "uid": "${Datasource}" }, "refId": "A" }], - "title": "Restore Status Monitoring", - "type": "row" - }, - { - 
"aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "description": "Right edge of the graph shows latest state of the restore job and when it entered such state.", - "fill": 0, - "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 40 }, - "hiddenSeries": false, - "id": 3290, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { "alertThreshold": true }, - "percentage": false, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "expr": "aptos_db_restore_coordinator_start_timestamp_s{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\"} * 1000\nunless aptos_db_restore_coordinator_succeed_timestamp_s > aptos_db_restore_coordinator_start_timestamp_s\nunless aptos_db_restore_coordinator_fail_timestamp_s > aptos_db_restore_coordinator_start_timestamp_s\n\n# any restore activity:\nand (rate(aptos_db_restore_epoch_ending_epoch[1m]) > 0 or rate(aptos_db_restore_state_snapshot_leaf_index[1m]) > 0 or rate(aptos_db_restore_transaction_save_version[1m]) > 0 or rate(aptos_db_restore_transaction_replay_version[1m]) > 0) == bool 1", - "interval": "", - "legendFormat": "{{kubernetes_pod_name}} running since", - "refId": "A" - }, - { - "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "expr": "aptos_db_restore_coordinator_succeed_timestamp_s{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\"} * 1000\nunless aptos_db_restore_coordinator_start_timestamp_s > aptos_db_restore_coordinator_succeed_timestamp_s\nunless aptos_db_restore_coordinator_fail_timestamp_s > aptos_db_restore_coordinator_succeed_timestamp_s", - "interval": "", - "legendFormat": "{{kubernetes_pod_name}} succeeded since", - "refId": "B" - }, - { - "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "expr": "aptos_db_restore_coordinator_fail_timestamp_s{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\"} * 1000\nunless aptos_db_restore_coordinator_start_timestamp_s > aptos_db_restore_coordinator_fail_timestamp_s\nunless aptos_db_restore_coordinator_succeed_timestamp_s > aptos_db_restore_coordinator_fail_timestamp_s", - "hide": false, - "interval": "", - "legendFormat": "{{kubernetes_pod_name}} failed since", - "refId": "C" - } - ], - "thresholds": [], - "timeRegions": [], - "title": "Job State", - "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, - "type": "graph", - "xaxis": { "mode": "time", "show": true, "values": [] }, - "yaxes": [ - { "format": "dateTimeAsLocal", "logBase": 1, "show": true }, - { "format": "short", "logBase": 1, "show": true } - ], - "yaxis": { "align": false } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "description": "save: Version of txn being saved 
without replaying\nreplay: Version of txn being replayed\ntarget: When \"replay\" hits this version, the restore is done.", - "fill": 0, - "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 40 }, - "hiddenSeries": false, - "id": 1108, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { "alertThreshold": true }, - "percentage": false, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ + "panels": [ { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "expr": "aptos_db_restore_transaction_save_version and irate(aptos_db_restore_transaction_save_version[1m]) > 0", - "interval": "", - "legendFormat": "{{kubernetes_pod_name}} saved", - "refId": "A" + "description": "Right edge of the graph shows latest state of the restore job and when it entered such state.", + "fill": 0, + "fillGradient": 0, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 40 }, + "hiddenSeries": false, + "id": 3290, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { "alertThreshold": true }, + "percentage": false, + "pluginVersion": "10.1.0-cloud.3.2a3062e8", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "expr": "aptos_db_restore_coordinator_start_timestamp_s{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\"} * 1000\nunless aptos_db_restore_coordinator_succeed_timestamp_s > aptos_db_restore_coordinator_start_timestamp_s\nunless aptos_db_restore_coordinator_fail_timestamp_s > aptos_db_restore_coordinator_start_timestamp_s\n\n# any restore activity:\nand (rate(aptos_db_restore_epoch_ending_epoch[1m]) > 0 or rate(aptos_db_restore_state_snapshot_leaf_index[1m]) > 0 or rate(aptos_db_restore_transaction_save_version[1m]) > 0 or rate(aptos_db_restore_transaction_replay_version[1m]) > 0) == bool 1", + "interval": "", + "legendFormat": "{{kubernetes_pod_name}} running since", + "refId": "A" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "expr": "aptos_db_restore_coordinator_succeed_timestamp_s{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", kubernetes_pod_name=~\"$kubernetes_pod_name\"} * 1000\nunless aptos_db_restore_coordinator_start_timestamp_s > aptos_db_restore_coordinator_succeed_timestamp_s\nunless aptos_db_restore_coordinator_fail_timestamp_s > aptos_db_restore_coordinator_succeed_timestamp_s", + "interval": "", + "legendFormat": "{{kubernetes_pod_name}} succeeded since", + "refId": "B" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "expr": "aptos_db_restore_coordinator_fail_timestamp_s{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", 
kubernetes_pod_name=~\"$kubernetes_pod_name\"} * 1000\nunless aptos_db_restore_coordinator_start_timestamp_s > aptos_db_restore_coordinator_fail_timestamp_s\nunless aptos_db_restore_coordinator_succeed_timestamp_s > aptos_db_restore_coordinator_fail_timestamp_s", + "hide": false, + "interval": "", + "legendFormat": "{{kubernetes_pod_name}} failed since", + "refId": "C" + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Job State", + "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, + "type": "graph", + "xaxis": { "mode": "time", "show": true, "values": [] }, + "yaxes": [ + { "format": "dateTimeAsLocal", "logBase": 1, "show": true }, + { "format": "short", "logBase": 1, "show": true } + ], + "yaxis": { "align": false } }, { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "expr": "aptos_db_restore_transaction_replay_version and irate(aptos_db_restore_transaction_replay_version[1m]) > 0", - "interval": "", - "legendFormat": "{{kubernetes_pod_name}} replayed", - "refId": "B" + "description": "save: Version of txn being saved without replaying\nreplay: Version of txn being replayed\ntarget: When \"replay\" hits this version, the restore is done.", + "fill": 0, + "fillGradient": 0, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 40 }, + "hiddenSeries": false, + "id": 1108, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { "alertThreshold": true }, + "percentage": false, + "pluginVersion": "10.1.0-cloud.3.2a3062e8", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "expr": "aptos_db_restore_transaction_save_version and irate(aptos_db_restore_transaction_save_version[1m]) > 0", + "interval": "", + "legendFormat": "{{kubernetes_pod_name}} saved", + "refId": "A" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "expr": "aptos_db_restore_transaction_replay_version and irate(aptos_db_restore_transaction_replay_version[1m]) > 0", + "interval": "", + "legendFormat": "{{kubernetes_pod_name}} replayed", + "refId": "B" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "expr": "aptos_db_restore_coordinator_target_version and (irate(aptos_db_restore_transaction_save_version[1m]) > 0 or irate(aptos_db_restore_transaction_replay_version[1m]) > 0)", + "interval": "", + "legendFormat": "{{kubernetes_pod_name}} target", + "refId": "C" + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Transaction Restore Progress", + "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, + "type": "graph", + "xaxis": { "mode": "time", "show": true, "values": [] }, + "yaxes": [ + { "format": "none", "logBase": 1, "show": true }, + { "format": "none", "logBase": 1, "show": true } + ], + "yaxis": { "align": true } }, { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "expr": "aptos_db_restore_coordinator_target_version and (irate(aptos_db_restore_transaction_save_version[1m]) > 0 or irate(aptos_db_restore_transaction_replay_version[1m]) > 0)", - "interval": "", - "legendFormat": 
"{{kubernetes_pod_name}} target", - "refId": "C" - } - ], - "thresholds": [], - "timeRegions": [], - "title": "Transaction Restore Progress", - "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, - "type": "graph", - "xaxis": { "mode": "time", "show": true, "values": [] }, - "yaxes": [ - { "format": "none", "logBase": 1, "show": true }, - { "format": "none", "logBase": 1, "show": true } - ], - "yaxis": { "align": true } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "description": "", - "fill": 0, - "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 16, "y": 40 }, - "hiddenSeries": false, - "id": 1109, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { "alertThreshold": true }, - "percentage": false, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "expr": "(aptos_db_restore_transaction_save_version and irate(aptos_db_restore_transaction_save_version[1m]) > 0) / aptos_db_restore_coordinator_target_version", - "interval": "", - "legendFormat": "{{kubernetes_pod_name}} saved", - "refId": "A" - }, - { - "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "expr": "(aptos_db_restore_transaction_replay_version and irate(aptos_db_restore_transaction_replay_version[1m]) > 0) / aptos_db_restore_coordinator_target_version", - "interval": "", - "legendFormat": "{{kubernetes_pod_name}} replayed", - "refId": "B" - } - ], - "thresholds": [], - "timeRegions": [], - "title": "Transaction Restore Progress %", - "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, - "type": "graph", - "xaxis": { "mode": "time", "show": true, "values": [] }, - "yaxes": [ - { "format": "percentunit", "logBase": 1, "max": "1.1", "min": "0", "show": true }, - { "format": "percentunit", "logBase": 1, "show": true } - ], - "yaxis": { "align": true } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "description": "save: Version of txn being saved without replaying\nreplay: Version of txn being replayed\ntarget: When \"replay\" hits this version, the restore is done.", - "fill": 0, - "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 0, "y": 48 }, - "hiddenSeries": false, - "id": 3304, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { "alertThreshold": true }, - "percentage": false, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "editorMode": "code", - "expr": "aptos_db_restore_state_snapshot_leaf_index and irate(aptos_db_restore_state_snapshot_leaf_index[1m])", - "interval": "", - "legendFormat": "{{kubernetes_pod_name}} saved", - "range": true, - "refId": "A" + "description": "", + 
"fill": 0, + "fillGradient": 0, + "gridPos": { "h": 8, "w": 8, "x": 16, "y": 40 }, + "hiddenSeries": false, + "id": 1109, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { "alertThreshold": true }, + "percentage": false, + "pluginVersion": "10.1.0-cloud.3.2a3062e8", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "expr": "(aptos_db_restore_transaction_save_version and irate(aptos_db_restore_transaction_save_version[1m]) > 0) / aptos_db_restore_coordinator_target_version", + "interval": "", + "legendFormat": "{{kubernetes_pod_name}} saved", + "refId": "A" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "expr": "(aptos_db_restore_transaction_replay_version and irate(aptos_db_restore_transaction_replay_version[1m]) > 0) / aptos_db_restore_coordinator_target_version", + "interval": "", + "legendFormat": "{{kubernetes_pod_name}} replayed", + "refId": "B" + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Transaction Restore Progress %", + "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, + "type": "graph", + "xaxis": { "mode": "time", "show": true, "values": [] }, + "yaxes": [ + { "format": "percentunit", "logBase": 1, "max": "1.1", "min": "0", "show": true }, + { "format": "percentunit", "logBase": 1, "show": true } + ], + "yaxis": { "align": true } }, { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "editorMode": "code", - "expr": "aptos_db_restore_state_snapshot_target_leaf_index and irate(aptos_db_restore_state_snapshot_leaf_index[1m])", - "interval": "", - "legendFormat": "{{kubernetes_pod_name}} target", - "range": true, - "refId": "B" - } - ], - "thresholds": [], - "timeRegions": [], - "title": "State Snapshot Restore Progress", - "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, - "type": "graph", - "xaxis": { "mode": "time", "show": true, "values": [] }, - "yaxes": [ - { "format": "none", "logBase": 1, "show": true }, - { "format": "none", "logBase": 1, "show": true } - ], - "yaxis": { "align": true } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "description": "\"Committed\" version is the version on the latest ledger info saved.\n\n\"Synced\" version is the version on the latest transaction saved, it's only different when state-sync is working to catch up with peers.\n", - "fieldConfig": { "defaults": { "links": [] }, "overrides": [] }, - "fill": 0, - "fillGradient": 0, - "gridPos": { "h": 8, "w": 8, "x": 8, "y": 48 }, - "hiddenSeries": false, - "id": 1086, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { "alertThreshold": true }, - "percentage": false, - "pluginVersion": "10.0.1-cloud.2.a7a20fbf", - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "datasource": { "type": 
"prometheus", "uid": "${Datasource}" }, - "expr": "aptos_storage_latest_transaction_version{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", role=~\"validator|fullnode\"}", - "interval": "", - "legendFormat": "{{kubernetes_pod_name}} {{kubernetes_pod_name}} synced", - "refId": "A" + "description": "save: Version of txn being saved without replaying\nreplay: Version of txn being replayed\ntarget: When \"replay\" hits this version, the restore is done.", + "fill": 0, + "fillGradient": 0, + "gridPos": { "h": 8, "w": 8, "x": 0, "y": 48 }, + "hiddenSeries": false, + "id": 3304, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { "alertThreshold": true }, + "percentage": false, + "pluginVersion": "10.1.0-cloud.3.2a3062e8", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "aptos_db_restore_state_snapshot_leaf_index and irate(aptos_db_restore_state_snapshot_leaf_index[1m])", + "interval": "", + "legendFormat": "{{kubernetes_pod_name}} saved", + "range": true, + "refId": "A" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "editorMode": "code", + "expr": "aptos_db_restore_state_snapshot_target_leaf_index and irate(aptos_db_restore_state_snapshot_leaf_index[1m])", + "interval": "", + "legendFormat": "{{kubernetes_pod_name}} target", + "range": true, + "refId": "B" + } + ], + "thresholds": [], + "timeRegions": [], + "title": "State Snapshot Restore Progress", + "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, + "type": "graph", + "xaxis": { "mode": "time", "show": true, "values": [] }, + "yaxes": [ + { "format": "none", "logBase": 1, "show": true }, + { "format": "none", "logBase": 1, "show": true } + ], + "yaxis": { "align": true } }, { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, "datasource": { "type": "prometheus", "uid": "${Datasource}" }, - "expr": "aptos_storage_ledger_version{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", role=~\"validator|fullnode\"}", - "interval": "", - "legendFormat": "{{kubernetes_pod_name}} {{kubernetes_pod_name}} committed", - "refId": "B" + "description": "\"Committed\" version is the version on the latest ledger info saved.\n\n\"Synced\" version is the version on the latest transaction saved, it's only different when state-sync is working to catch up with peers.\n", + "fieldConfig": { "defaults": { "links": [] }, "overrides": [] }, + "fill": 0, + "fillGradient": 0, + "gridPos": { "h": 8, "w": 8, "x": 8, "y": 48 }, + "hiddenSeries": false, + "id": 1086, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { "alertThreshold": true }, + "percentage": false, + "pluginVersion": "10.1.0-cloud.3.2a3062e8", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "datasource": { "type": "prometheus", "uid": 
"${Datasource}" }, + "expr": "aptos_storage_latest_transaction_version{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", role=~\"validator|fullnode\"}", + "interval": "", + "legendFormat": "{{kubernetes_pod_name}} {{kubernetes_pod_name}} synced", + "refId": "A" + }, + { + "datasource": { "type": "prometheus", "uid": "${Datasource}" }, + "expr": "aptos_storage_ledger_version{chain_name=~\"$chain_name\", cluster=~\"$cluster\", metrics_source=~\"$metrics_source\", namespace=~\"$namespace\", role=~\"validator|fullnode\"}", + "interval": "", + "legendFormat": "{{kubernetes_pod_name}} {{kubernetes_pod_name}} committed", + "refId": "B" + } + ], + "thresholds": [], + "timeRegions": [], + "title": "Committed and Synced Versions", + "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, + "type": "graph", + "xaxis": { "mode": "time", "show": true, "values": [] }, + "yaxes": [ + { "format": "none", "logBase": 1, "show": true }, + { "format": "short", "logBase": 1, "show": true } + ], + "yaxis": { "align": false } } ], - "thresholds": [], - "timeRegions": [], - "title": "Committed and Synced Versions", - "tooltip": { "shared": true, "sort": 0, "value_type": "individual" }, - "type": "graph", - "xaxis": { "mode": "time", "show": true, "values": [] }, - "yaxes": [ - { "format": "none", "logBase": 1, "show": true }, - { "format": "short", "logBase": 1, "show": true } - ], - "yaxis": { "align": false } + "targets": [{ "datasource": { "type": "prometheus", "uid": "${Datasource}" }, "refId": "A" }], + "title": "Restore Status Monitoring", + "type": "row" } ], "refresh": "10s", @@ -2377,6 +2403,6 @@ "timezone": "", "title": "storage-backup-and-restore", "uid": "9oXpx4n4z", - "version": 20, + "version": 24, "weekStart": "" } diff --git a/dashboards/storage-backup-and-restore.json.gz b/dashboards/storage-backup-and-restore.json.gz index 09eae9514ea18311989d47a7ed73f2c5ef5d8121..ae66464ae71bec852dc5eadb3c5eefaebd61d6d6 100644 GIT binary patch literal 6448 zcmV-08PDb)iwFP!000001MOXVbKAJJ|39Asm)+azq_$R)9Y@YgcE(Az+uX@!)7VXK zZ=BIkBqXt>NG>0?mDK(0JplLsK~N&~ut^D?osA8EARY(j{LTZ1-=83)>5k)udT6;$ zpuI%jp$H0XD+uAQFZ3@Im(rS1WZB_62P%7qS*fXqdf-OBfk_`o3+EpBO`G`oSa)>o z5NVNRl9Hn%^b0FQM@JgE=Q_cTKC;PUp&wzcd}^7JQp<3iH?Hmaq*s43(hs_asNd_s z|DHcTM7^gP*Vxf#bUf$TWYHhdxsClW;M;||KfxiJgKf<$@@uaP9cwc^G^Xdy916y| z9pFRKctV!!%h$tf!loWfN3QOh`KcAFdn zUtK4hLVcH3!U}uSOV_nS%S$!|hVEPk>Ha0Vr|H34-}OxQ))AJIE+!ctz6-JM=r$rf zXj!#m*pZ1p>wX|Ud5y7mY0bn9;ua?>A#3~E)}d#yu@%xOYq}S@!I9zmID2`RPEO+r z#Rbi_QHy+=Hhz*U@Yi@zZtw^9Hr+r{=IIW$B`amPwyk>sHu=`HX5w4sM>k+3GNmhy zG}3AR1b(@r`+_t=Cf$qzG)nu&$KB^?N%rQi#bx+}*&w&F--3xXVtc z=esi;PH_})hxCWelFnm90g%OiY=wEO!MI|v>|BQ{Ed8>JN>Ldb_u z55ERe_clN}{D&g%7LEa&E9{u$_~K4Bjj@Go^Ns6_tx3w`;}?(h$PR;CDWfCGSYbvp zmvUbyaze|A+$cEuWd6hc#m{j#p6Pte70-0fvz&>fCY<^hCSaRE{!RKQaP26>WaWYo zdkIY>g90kV*detyx*d^6oXECen}q>I4&~ouk1%0T=fS(c)zTI`e|B%M52TIRD4z%o zie&#HCi~tCf$aMQvVWePdDqK}S+)gM;Mw~8*Ld=_Ve$fLW;vg&Z-2qoWXfbwq3-XN znTdnsMZSd{Cg(_>Q~3KEY@crPUHjOK4E$R*CRkzJHs}&)wjP8Z-0%Y|tVZM;rJ#WE z;CS7GgvaDol2GK|fY5B4G9f`|(+7V|DT?f8VX-|sn!wDkAH>7&bvyk|_vrcANx$#$ zlEKFw*2A}!kBxMynv9{O1IQ8iM$RtniAZhN&*o)fEnk1a*8G#hte2W<4 zcW z$MS|#KtACJcA7Q~LVe~9gYSSyfI0ZD{(YtW5dVG!gEXeP}_zQ2dxucHy*B^=^l=(#3+(fxhh{W<6MC;qLP#QICJFcOJ1 z0_xWzc(W2)Efrgi>)_2wtAQY_&I$;%e|Dy%Ra(2}(rTvP#e6w{_MnIi9ncSlBil8u zhuCwC>35FX{##(Tz>4J@8b<3|26ReCM{Dy%q!H;O@btJo{fksdcrI-uDJRW+*JkzP z7mxx^R}{Q9pvWNHH`za0yVTi6WVoOd%DKQ9&MsuGk0```orBPTZy|RGYqjdlAmrJ`xloK2JoU*x8ie^%% 
z`{Kw*X(HEqLY?Qr$-A(=k!d{_>WEbnzqJD31EVN@1xKmz%L-#BGW#RxmI`Pi*dr)|J2L6)m<|y4AkG1rGlb?+u&HO-$s8gI` zj6ie62vq>2hYKEo!^l4{=;r3jEZOqe*Mzm)qvFhoB_792hVie_`~xxYAM&6z5!i(- z1P{LT!`q&;9-z_koy+LWKr8gvgpo}YXAdkuVk7UT0l z89sk;lAbR)JWtsI50A-UMchvOJBI|la(r*(hJ+IzvLp6B>57Q8yfY(ZH5~2FKpI%| zS=Z0@IbDYZfcdP`>+~88gc(4J(a}_QOdGco85Pm66bu)DOm*~B0X#`C4ehEPjA{&+ z-aQQo0fnQAH2)a%vqPGgQnA-pvEMXtV|PxcUU5seDi7XPQmcQa`t;9CpJrC{mxvYK z1SnRzf6g(plt8G%0s90iZc)otcJy}J(R-&7L;64&($iXHNEf6$lXSxncwC^VmM}tdv%^c4ced4_iCk`0W|nA03WzkM52A zdmz}qi@5s5$?47;z*`4zrKZAGq8w!(-?n{xw-ldpOSxo%%2ckD=*B3~@ly@l=iO7C zD{gE zQ8!nPWz8Q_kI(DxykbHxs^x9axzIZbue-T|Ut;AoSFsdN_fBPi8VJ$E&IQ{AagFHY zPoHPe;2E&aaIk(eDN-^h628xdDAoWAmz7$Mq?91=0-XZ#4(S0ATjwp)1;|a5n!?Rh zQg|z>(qxPuvHyv_{5UNoDjn5>Q>y{7kjqpNLc5?{T_)OOL4VLd`7(%^eo|lqHEiHi z*C)gfk+=5q<;CqJULovLrBNH(Zdg^@U}yBV9F1m`gdNJx9aZ@YV<5`2ht2myaTn_v zsMYwu!VY#>wfXXVCR%hi>DFBbbQa|fZ=A{9n4aNJcoH8z*;VRG!SYK46}N!_g5!t+ zY`Biei78&e52m`GIki&#zMI%bh6ycNj%nRkW&|9HJMLU61dlsecf=H$w;+(oB*usA zbt!AhNr$<9C)kOgqj9=FhDXnX?M`0n0oK?(ZLZ%O-s2yHHh#F|(ZcS`v$^QRaI2-J zP^oZNPV?+swcJX%hSR)%EXj|M$q?l z$)ckJy6R6G<~F~egN)HPa>rEkr(+&r66+W0*VqA)aAzJta42*IyF~IFu-^&xH!S;= zD;_CV{9d@?w|njF(J?Of=-ztbi(viqbO}~hp7@4%;sp@ZVvCh>#w%z1D$aQM2~hrc z<&UqesTVeB*WK~T46}zf6kBR{X;+{{wcCr#&~XuO#mhK5PB-NyQOXueom5jfE_|Nm#Yy&o7Uj{Akw`;5I_<{C=ufDJwuq09 zU~_;S1;IK%xiMC|-y_`#EgK(nJ8+~Kb^H_C_IzvwQ$`6Cm$i`+wy7n>9D5DNOHo%U zQ++iBthyI?|@rz=Pt3h@>NcfQ27Ej8S&40xf@5f>azI6XM8G2r1}`)$lG&gvC|r0ji3{)N zN@eJTZOx+iaJHJ|G2&dkBu<>YxI(NrZMcl`%aO zJdUV})^1tQ`85{wiH!1_$KgP0L> zpGIMSSjn|MfdNTF)9L$4uzZFJ%4f4mf#S}Ya7S4=a%a|4RU|y zj9qjSucrul^8V1YQ05b*l}p9?phT=M@{H03NfOn(+?TjW+1AP2q_n2gRZ456?ow7G zbD7c_soRv+l(aQH)>o{>Pn^cEx#m`G@&~wcClX{P7)p{Oe+C|ZeOnz!Rgb&wyymY&k_?78i z?=)2UoWBJ4=wK-eB3l(Lo#aXtT2g39p`{I?rPf?#sEe1{Gckpj;z}7}`YgNGq%c!6 zFq3kCDF;{u2UxFL;sML!807&|91{#Fdhi)%#)7VYkxS zX42UWjC4s7XL=X_B|%9Agk(@%zhEBPp7(2YoOGHiRX9oEB!!bI;-sBL6+bCZQ2-Sz zEyG1VEdy)4YrsM|ifIc;AsI=quYqa+#5oO+?v5lucw!-%)F2+j93&X-$&vK*SU;o>}+Y>}4a(xiJ-=q!yd=jatzVF5lh&`6Ym+^?G~Xs`wm9b|YqM(JP5N}b+?(X5 zcyhLEc|2T}EOU`*5||*q1c`7Symw){YFH_S9wl5snluwcD8MKoZD!{gY@cNl9%ng5 z$yVf3Xcp>G_7*OyCZk_&L5^j8^UuDk!lv$DAH~Zoa?JuYLWY}Q%48naFFwHBvZm|5=cP2~JKm0Gd;gBR7T*wVD51*@@O>kjo1%fp&WQtDGX zx0s_;ZdqO>QE73NmLx1?Z)(1-M5a3=0O_Ebfl&C^3_MH|A%^7y^ zKHTkI`BzTl*d*cO3X^BYnx8)F`zN$YZ{kwYzSbHb;n-Up-MWVND#<_n$Z?3_j6eYQ(XVwFi}bix$5Be6E0TN3(QeFfEE(TJ97qw(?7G`*og*o3xjK)l(Yy5MG)Q<$ z$M+D2F{hNRcoN^#_DlY+J3`csD)&2ziLv=!(;j(f{C(Z?ofGw-_hc_ghWwfZO;L~P zyqL?|cW;}X#GCUYGBvAzl6_TdSq|@(I(Rqpxxl!qwOzd82RYKMS;@mYIL#}`%E8-A z2d_k{SMJ;mK(GW}MY98|_&q^bjT_6r@UIgNM{^K+&BsoK$h1R}la3MuEfl7vdsZ&s z{V5vF4=)h@MmJR$Iw(Dm^{MiQpc zffRMZYiJlnY0O#^f}--QTEKi2Z)9%aYDYr*C#xC$j{5_JyR(YsTK-;i`kk(_{A~*i zr+&4V8MZ#MDDj#mUdhE1EIJ?oDc4(*YHt#qlPkRD%J~C`sd*~4$erOEE1bW)a)|Ug z$lpv!IxT+KCshcL>m&I?OD?i3e3rk@9OySVx#3guQBYrPeO`F@DSGt^jR3Br-nR6a zWx-C(h-a#Fep?xgRPO|f=Va-@Dz3?vPH^!>+oIUz(Ll`z_^Cp(EsWOLF2RZmvNg#j zzNlv8%w|Y=Ni0qMaGA9&NM&(0yJj*H@3=JAe6(AxuO;5C%w%x?AnvWQ6bdWq*}Yy@ z>DgI@(zBJGU8a|{0qICmAS_cfHF6(guDj9G=k2nr)ey|lZ55`;;y zHQ!Py#Rek=l_61KhKG!vB#{~~6EdR#l6Ov=t(jDr=++9SZWJOGmjUHh4*Rv_DOQQi zD~BE7mxz8l<4R){R~njHVZgqX5p%A?TvR5Mo9?o2Wi$=y4Az7?-Lt%mR8U7j9R+pD zgUVVCb{5CVZ4u-!rrx^Xw(=g_e=S!ODz*MA5E6OlyCn zrHq=`J~ShAi~kVtrx;d#^xX;c8z@z3fhx7>5H8_~-BzVqrLd$9+|@Of{Ed92u_%pY z4>gwRBD^ioWMyRX6#ck3&^!;-_FBWK1KxIsa7s~WGe~Czh3Jt`Ec)*q$FZ@PqaWfZ z;8BJRG@5=V!2?=Xs{wQucCX-WtD_ChRs*!Xt^%}Kg#xsD3TSJ>zZQU#J&7FP;lc&% z5Q2k+F0kP`=8j_pqtQvP6bN_>Km5ud;6#Cdv_gRZ1p;=)MzM!5;1k^=1q7M_1fB>F zK5+KS*o4%^JeMb2itWVxG)M)t%5%B-JeS)`+qi1B^t1d66))wl8J%V?O9|~(lL~adTcAaR~EiUse-sfB6 
zvfNK=-Ie=EU6oT;@DB6$M<(6u4kqV8#>{XI5_Qunl)4z~8#9+?Im4OR;}I#$u(T!lh`3I&=LoZw`q4f1bw zD^F=mN{zIzLLkdS<38P?Bw0Ls%>jiQ+@V4a-sWvoxh<{IRy`m}6cYO?Tbr`AJqY}} z(%!hUnH7cr*t7(X5eP;QS@I|RNB)e1$ZZm2?jy??yNnZdNbKU3b}@I1>b-F4BgW#(8``8(;AO4K}IYnoidN30Qx6=-n#xZ0h*zU3_UcaDDo&sMIMGX!C-1-+5+fu zos#V%en)XN4BG1s)OQrK3WY%x2F;-tB1K`3hK%XtY*>$ZHPLr%S_(98nZz>ouQ8CM z1LS$7NiGU*1`L&=S<1JYZpIY$lgc2qhe^4S2=QjvJX1_TRRRVK5^Isdvg(>CrV+PM zfPQryqL2cdTXgaCNjzTCJ0VRbP#GaX*^L!TSt|Cmz?kBhp2^*UpA)a|4Gsw6SCVMh72U=V%6O4_HH9oo3Wr`B9nPlLUry&;U*# z@KO_>+Y&sd+4=~JoKBko=(Z${X}Ns;2zMi88JY358EE7AkoJ3o{kcf9LbY^PGV1E( zN++25DLT(SrX6;k{j2jVSx@@MCF~uMAM?&LzR%#=^8V)Agat~bp7OV>D4~AtAU+W` zq%P&-0*a*4* z&cA+TO0Poav5Zg1oahA(&Xajk_5ftwBznzY^TW_;1q2S{oQe-^WbLlbj*~z#t<=2!v+2HK2=Z~NZ#Ay)QY@UChyb=(GCJHF~Hn&#+3$A z{$n%tJWBaODgP@+`d<>-|Jq{W7ur^!eTns})u&*Ah4Ef`O?_96%7*E_CRLUYU_xcY zv|B<}+~O}(D=J7Tu?JDuCFbN@KWSl|o>CL_P}--YnzI;6O)E*mLnP~4ctEP2Ehepm zj^c!|D-e>f29L}^nfGF(8`qIH46uJ=8O$l=81rE;G~CDu*Wj{C=vjEL?pd4RoHb{~ z^Q+91c2qLDqlA?mtk~@O!C#X_(DJn<6*<1h zRYU>4WL9Y2?afZ#*C+90nVW)={5%j;RwXYkvJ9h%tr2f@KPJYV#y z=SEYY_Zbmp*q?s(JSVdKbWz~Z-K#KLzjXBs9R3t|zUAGx(sZ)Pq|ZlgBNDjSL39Ko zC%dK-TjUE4Lix%oLj2G6OkN;ia`F~Bw{IK_Bg>ubh`q9b{m2UsOfMk$&7IY( zKV{wT1df1Jv#pO_$cLD+bwdVyw8LZg<(923GAOh4g{#=0ho{F`L;m>kG_%I<>pm`Q zEHZw_XlL;EWqHBgVDl>W(Ob_8EZfDA7oamUvajvPCSiHG0+@fkEc?X?-^{dHBs2rt zr~8}s`}>r*2rfn={*jswdqJy7g2IY`%t z(5-$Jz6(7g>mkZH@0tX_+Jryl6RAU)?O!BpKX@Ur{h-A5Ak@R_Dy?*cLz;wLW>K*qmu}%(84v%v74v0^1^phPBGkax2C0MctDKPs+ z_AncXYC$i^-_IgOk$&ql26nxhy{G6IRnh%oj|4LmWH%!?WC9!e&t3ouJsF~SWq{pD z8Ezm!6uwdN{sT`1WwasPe!~d1=pIv~Lu)kTqHidOzTp&*L^OhxW?jPw&-`Kd9W*=$ ziQx6W(S^}_7D2m9qTaI$JV>ry?-whLDPFBWvOa+=Zi^agi#@cUIf0-QnME<@k5 zh7_LN-8Ef@GhrQ)-|A_ozoZkO&`=}bavZ~*wVG;|nyS`IuIG}?s-sg$8=cD9=;-j} zORbJF>gwufkf@`>9)5uj`{U8L!|G^;Zxg{ALVr9VLl+FH;mGmKt0D0{bNbz!V%uf% z?)ASfj6dXmUckd-BA8P6@3a}7Hn2??lsz4z7Mo}%I!5T@gr5CHNeW@hIw{&|_rP;F zo&Eyi|LKaF)xlJWJT~RZmU##h0Yb1VYtA)%_f-3P#$kxfGU8LtbSSYvJwA+T--PP* zu|uMWfMmpBXqz(AnlRViOd5xJcW&+QEuqUstp6i{r`n`xH+s{Q?IVy;hN9DZJR;@g zPlz8KV6++BjcM>W^5jl1*=LSDag#oMiEIV<<~*A7b)@h`zMiU@-+-I#Xs)4^L^G7-?Q{yfHDJ4GE=16OBYk%V{be*vh7QM9vVRIBe4~>iRk+{?XQh6 zg-fNq$-KuT+2>q@jOHJxQ+i*-kEzZq%`*rgp9IXO#@~&P!9_ujvId`j;eEby=P2zD z@Dd;<$9oRj(u{)c;El!Q$$oDDm0u=M`Lqfuzc|h|ff^}iJ5xle^kHQPO=A-K6i5o9 zXXHgx81D0Z@ID=iYL=n{KHKdAJ)6;pSelPJ=m`hZ@dmsV)y=B5M~*JFsD8VC|2B}p{sq^9rIxi zo-3T#r`m~~QMD7hLcMbpH;h2Ig_ym?@S4v+`*pxWEBaOC&)Uze{oK3Y=Vr^=Do3~WWAmRC;PPoDTz+wUvh$AY&Y@MM%deB5Mft$0E0^42zSYi#1=sur&E?peY$7q%rzXD=j6zS{nA z?AVxFjtt-B;AzB2t)6cFL6R@+sy zQUqS&Q$XGk+aPl9qQkiaxv5c8IyFffZ)Dq)oSKg%`@sO?%cS1-Nl6T>;M!(JK4prS-um~qEwztbB&e!KN*-^X6Fzq_vv#ftauZ(!$ z7WBy{GD$Y!*Ch&Tlqf7=0Ye1a5{1O{TuV?_qM;v7agfKXGCRMY`bmZb9^iLi(sH}wmzoM_JiY1-r$fJ{EV|O zZUOg+7eb#PT5@S=b>{iIVCs)GG7qOx$?G6^zOTCGo?Igcet1|4f*(DyVq4gaIt+(z7nU1XUpEV-uWek3DvIK(b z$a+g6>k=mFu;)n!*L84xmEgL12k7{^j;}X1yi1F;>*00nRe6Lb2RrIU=~m)Iy;FtU zZ*iV1#jB)nPJW<}tahMKCroT5VWMCs)k&Dh5A*4S37s&Z6DAhnc8B2c(1!WFZNDz6 zl{T?aMYLHL|H^q2=Y?RvD`ZDxrjsRhBw6Bv2Wv|MAZ|PZ7Np@2Y?@vSl8`1#u+RvD zC9sja2>!+qg8xFFm^Nr_D~+S*$cc`e6uaukNgE?4?JvHE5J#*XjIHEG!^`rq#r%6A zI=G@@k(LHmx(!j$pU?np5uzTz;(%xg!*#G~YaD35$JmW*hwSxxu%$V7{F6A&d~AnP z&It?}b&(UcX{aO;dyN>#GH+RKeSAz9jylXf)7zr61g|!CnTg+*D)>3>`}0yqC*`V8 zr}@bn>HAn)+$md5rDpf2vJa96P}&ITay3n><{Yg=S&USPv^XDaI8TKQss>c_M+Q8~ zbn8;bGMk-Nq+K7qkgiNquMZ2YDtN-zW>E?@pU&zuY=N#w#pXAzk&eweZY3p~jl3`| zoAszp%`UG@W8Y-*b_-w>>!E^D^ry6AHYOO@XiL;6@AHU)jO>N0Vp5D)BiP87yh2$f z{7_byC0=XQ;x9?>27*1Fce9x3*MXeZ&EC8cPAS=-aej9LCOMOsMEFZ`C8P-J3tbNC z$}4=qga2WtN97~|q|Ewe_p1Q&RnbXCR7EHmp(09|(aKS{gtR3*#B0AMDdUCdde8F5 
z-??KCT_^J?qn=`HGHX=DCmC_6UO%nU>x;satV3FOw5a$M@k+jSs+c9CRK_hCRT;bF z6jl6^QIs)EMyZHnGMX}$NqpYqnVt%5^AW3YUVUG zCsal^`D!ePVe$?YaZFBB#WE90WjvG7I{DMPr7v|bpO4N(r|(x1jr+5Pn{uUxJr!Cki2 zJB^jT6c5-vI9kerGpoa;<3g*3OBya|xU@mI)S06NO#xGPUQ9!#gs6f{UsR`zG-zrE zXwoq-9RsTo13P|R5d$mY7##!CF|aMfz;^jOC3|%p4J)%jy=Yh`d19x-Vd4jX(rQ6E z9=6EGBhA?TM!`y&y|!EDsL-n2I@+zH-8$>Jb-Jm{mV0%&`Ci(olMoe!pdW`%ho24W znoDOlFw$pQmC`2wC@D%RAtaO8`X%?!_QGGQ9tHwzhCuy8i6DRE~s`ycZiV~>c z#SzYn#SwV#T>}<6SfqnR0(HlNMbD2af<;A-kq#E=V37_M>0r_Q1&g{`0D7n6Mbbwz zbjTr~JAt zx>?5dEwH0PKvq<)WVCwKEAy^PDp+P6tEyOL9hOzHWE~e*v*eSsq@pG3w6Llr>rtb! zCF{4z>XvN4`V}s-?iE!o8GS*eOGaO<+9khqX~j$4ZE@90-e=Xym+b0h)i3E!c~5NF zr`WXQCA(|tbOiB6K7*`33*URNTn+q>(uj)vuB?a&$`JsPR1>rJ43^J!sPwX3vtlWV z^)X9yl;1@tnaTNAn2;mY*y6QpB8xRPnY4SG&aj1ptAk{kWqDZQhP=*Z#iXu3b0F|d zLaZi}*WhhAmnmPh^k#*>H{hQ>87TeJho>*<`KOgNM@#--$%`G~5ihb@s6ZMp54K!@ zRP8ffr3z_zmX=f_<#%eo6+~ZmNTJVOJ=dS~Q5U$FRXL3C%pZmwu#&zWT43rt153Bx zzX&ee*mY<%#}!_lZ)@@O`Pe_ARYsGil8v?215$~-)u&t6rCz79XArwCb;)tjxgpE3 zwchu&He;C|AJukLvgnHJUa_@XvmGm*?>;u9jAnK{Ahynyl$TuZ$JO{B`=-HX081C^({k%XeHVsgTM9r1{S~vv z^ntpZphh6dPYW~3W5Omp!QEilqDmdsG#~zvf}QpPoeZcf$0?;Wv&uKw;0e7B0+p(H z)Y#<#u}eGLQ0JT_(Amd#2vmX>Po{nxbxR7;qz=2^=v9NNfAd9HeB<5c7BoX}p+-;3U1udh9S+XBN` zTrClXozE;vqo!$8a`6k61CW4}o6SkRJITSx6>f9k{wJuZd8W4LY2a%+n!mbmsrK3{ zo;gVeEq}8oGYF6uAoWE{j;JiWRy?U3;%kzg-D!9!n6D0=mo9#aUcW{ofa_?mEqi8J zvQsxbF!r!T)xq^ICgn9P&*QS>hNp_vvs!1u=0d#O|!{2 zYM42*8CG7BOVd1EW^E5rRi4eRxs2p9F70(6?N;b(NpvfB8Qk4Vc&oCo_G)(aV9?ig zc23ZCwzjjY?6NK(9Yt}2WrpTv>KU{+#PsMssJdQ>f*MzAbdZhIa&(Z_lU^Af=mgxH zpy7dr2MTT4?dnI1Ug0+^|7B4}YN(^3j)pqb zNo5@eJBxGWwg_^#Q14uFTXllq4l#l%^KqN3kg4iU=Zz(XLdV!)Z)L$ivuM{~rnNu6 zQpHVt9omt)C4S=fQvxeL2Hph54Yet?z?9l_3YT=nZktk_GFY+z?&=mx@kGA1ShU6R z2rZWSD!ePuWMyRX6#cL`(Yy%N9<_zj1ibAK<5Z&3cCgM02GIkdSoB{zj$;!w$3DbS z!lN7;7_9tGiU*9o(E#W!9A3lSPG=juTs7N31KOOR0qr9Nv<=~32f)cA$s7>j!a1xE zf{lgFiRro4j$;M0)kUu~2zZD%{K_EUSc8C!ph18J0XyTPc!V(E6I&w<1lj=vo=6wo zbM?!_g|uRw%Yuly|3a67I#1(Ir6bG`d!`n(_MuKm1wXE z309!M4XxONl-?}i#7d0Vpn9ZGw0$8qOsht|%x|K=glarkjRn;>P>liA_^%54sc~Qa zt}48z#(GO|UL%aR0N<6d-Jsv=_XY>1of?%6jYTyWH|eyfa9FJAtWY>AHd1eKRcFPe z_+X2i6y3K+J8+nBlg8}nYR2qf&_C72tTtvjp_9gJl`XP*0;~L;tW6-iQX*>G?2T%> zY@LM8Elj04GqcSwxtOrJ#+k1l*E;g`qkQ6P1B;9utrSnEVr_pxE4(_1SR!B~g~ zg`VAqgIfVzdF9K-sOzOhT^e<9LKje1eND9%>{{pS$1b9@CQ5!ynD)R^-vHFz6O<5+ z)pQ?BZ1xDvddFuT59oMgKBz3A;iRs1?cYs`(T%`R4E_)fk4rQX1`QUv^|TU_NOdehAp8Zss5<;wXAmKqk}yc##?VT zv!e)rijm?F3Oy(iN&kfZ=$}bqs6!J+17y2nk4vH+&4s!!&gZUKe-y#Y;iK#$JNoYs zdO34s+2b*xh=C`XvWL90dB6UyOfNIY%av>DY0!Ai*j6&tlcY0U zU%GT$A@B)@wre}l5=>1Qp*bZpoa>IGQ{EiS`6604xZ;S2?>H`#$nk96@HMP%lK ze58ktQk4!0F^q&M6BWJV?S1gvgBfgp*ea4)b)7yztxHuRE=`COPUHf&f=scXm^ zo`J$~*B@>g~n9nHm4Za zB}Fz*+wOtQ+r+Oqe7qlCt${>=o%5$^AuA8I#>%S+ZOSMjC;wqMghAWJ$dcD$<9y7G zoz^h1$ipQ&?JhIaTFBnplhm5MS7z_b8t(>!S2!Z>1SdlaIsc&rdl994Wt@L0u>RMS z_b)rl{8HZLW7v?dIB(#()!vUy~gFq;6 uma_Uw?|=QxXQ5j2l9(&Z)j+w|H{*qYb zqk%|BLJdBa0BtL(`rCT}kOV-|Ytw5xa?E^)B)oBP?zxA61TQxn$3;;TbHqs;G50%X z2}zJh3FGkTRDN>uRGBl$2<0y$$n<(TH9#D(I7@vj+HhRHNW?pLmZB+&klS)_sj^|KM*7>t@NO8)US&pN0hPj#pB_R=Sy&*L0hKT#~!)O5sQ$!i=iNZ5sveQpJ zJ>dYc`6NbZU{9@{o#&N>me(88NZ0l(EPNwN;>$#u%e}1~Md1_jSu{N~k|~LB*~V?` z{#%CAMYS@U#}}`m<4{tKh@1ZKQGS*+w4D?Pse zrHj~68Ye+~5mi7X5h2D0bHizbs3Tf%b#~;_EWjU7%BnTbFizf+unJRU4AJCUKF6tZ zR7_k0rfw9btRigkDWAtV{yDxdq2nef!qk|YFaZhP#Y|yfE{)*0qS9yw-dsvkIj+FL z5UdQd5oluSQSpXJ8=MCm5@tJGDBZck@V3dZu%D0m=4l&F(V}@@(TOO6gR|d#n zT>)M%;%G`{C1K*A+q zFK`5lje~#VG_E_RE+#sClXoY;?=}>AjV72@KwoCQqsYf0^d<_RjK|eN6O-YEU6 zNI55;i%+)6rMEC(#-B4PVcKL@F^QML`bkVBpB6JiX?0{G^`9i*b30bcyd)+O&xw4u zk|!twAAtZ!$t26WFp!#eRvWuu>vy7A$-ogl$F&hL@W9`j9OKA8BhgHCI7{P9lKi1? 
ziO^hwTIYP8Vi*7oSh3s&(&OpW&FND8vymR6B!NzIdbe^&R7Hi+Q{6X;4$*|gG~+d_ zh^VFcNB>)D2ZdQ<9Ek~^BbtfUA~0BWzIGMxM6<%d(3-4@5zSN2A{q7E1OvlFTz=eW zDC)+rn!uG<8L6sW8hErI(t`?*dPXfta(m^+II%^jOe`HrOc+tW48j3n2}O&-HKkyM zQg;`KBGROD{EG69SPESvgL*0lnJn6PnFP^Z}jIZ65bYd<#^CiX}L_|V72Nmm*ognE4#%M zPhSR#M>Mwccc1i<6aoFkyI-AOg^lZSLkv0}^M(Ndt&%7~D(XoTYlKXc{INJsikH_o znkj-FTIsl{_+2OmLnYeHgMUkN+o=_yVG9h+AFfnKHc41=g;JA9Wx`K+aJ4e7Cz68( zB7qnj;OIoHOmzxMvy~q;p=kF+_eS_Y_bk^I#V!XlS2#HW1(E?oLaX+9V9qtdg8r#z|gTG#9^E+ z#usraBo%NiA5Rt>vo#AAk0(Hw!tc}!93lP%&vIy=Hbj&+awTd(#nc>Zj)aQK-K>-x z!$Q`L3$`awK+Z{!fh!x{Vr;i1bKS+Ic!p9POpVP2Fj74G0WqBSPzW9|lR_-%p_Ke! zQThU1;tCwiP9)Ooo&=E>g4*iCRAN>m0rtZy8ZtSqVXYQ=;*#4dMZ38D@NwJQyItTe zpH92T-9_%NF><%!hc|zZetUTwemJXc*9m^mn{B>$wezpi2Px|mL0u7`XjVTyc>t;K z%gM*X!J`%o`pl-KV%X^vL#G(NX2tMmzi#1tqcg`W!G1At`3s1(9P&Dx2R-07 zM7k}C?wcY{T6GksIiu}%qNDqcXD2$Igy^XJC}7IQDMpXrw*byxocho&QX-!I*L~~{ zc$6Yuzq7;$d2ihhVN7aK01e|${NvD&`)f?5J%w@ifB zYqd`?cECx*%DtDh9Q*@|pEWuE`B^Q{lR5(W;gM(`Kc*BV4C|{Fu8!%wE3Wj|-l_f; zeYCHcy1Xl-2^n$Je|6vp`#VhC{}C4Ve^i%N+yc@#h3+PhN}=v15Tmi~CXntXkS}o) zh&1X(Lz%m~0%Uu)#=8einq7DOL@xnp6(Od7i~GOq4R=1@R$8+xbS4XDv*9Z_&z;Q( z>Wx*b)3=a(0hzkGcQHC1*TTym)}WQk!rELhvQgY<23-lhF$oh0dX7V+LAQONl8ZnW zpj6$(R^DIdqJ+n+@5d>2Rl6{tLdTw1S3$_->I*vYEFV=6Bc~(x%-E z+Mrr;)@ayU5$VnLD{e|WGA?;E$hV|LGcf6E6HQ8}WH+vg29sx{&ahx=cd+%YY;dsk z-@#U4Px;|JtetS##bC?2uJMc{zoztL5%~>MFQ1GI!mZzHS*6vs$g@XeV6Q~gDZI&t z-v>t%yXx*IBW5;aTll!6?}`e2P9Wyy!NP5Uz{;JTcMU+Bj9Y-v^6wIva(H9)Fsh;m zSjxwYi7&mTd@KlWxvGYRieHyBd%{r!q!xTY0LBT$|B&4@I`TCTzZ2yCW_EvT&|VMO zoU&}77Kv*H&-*4wEixVkn>Wxa0AD?-HH5BrZ_vF7-wIe*z4ZvJJkUb7KU-2xuGM+Qn5a;*D+Sj8T7K8wZ;c?l- zx|P0WfNv!`+9SR^w`+s=t8lJ=OC_vQVog7D^~)0d0(gCeyss8Yy-x@R>Q{P_aEKO!PZ}&e{^rLmGCfVm79UJ z#S*DwKh7kKw};gnbzIOg zPruq~dCZnK>xEXM@UczpUAy*fyV~Bc()qAm{q0Kq?Uwbux!wJ|RqZ{y_FlW%-l)>~ zsAc`Yt{$|iy>HjvZ&%yfsdT>6uKxBgsH|IH9yd1b+ebfwvpQ7yWIy@{%saXV)m4 z5dTbmzb!vt$R|;{wmO@Td@0N(Uz7(cPxRDF?idQ+3TOb3+^*^b$Y3D5uBxQKor83= zqh92yDd-4e+w=S!M>;6zqa1Y>rV?|xVJ4lIXEO3o-Mzs0O#TfX+0@2rbv^>p`RDbZ}jaa|xgE#NNFXHL zuJcw2oo?54=7tg+qhTnT@ifDRd_sZ-De-CihK4j1TGR0mt@S!i-}B(V?d^`^y>gAp zF^c4P_Vr|)KRf#&PB}B{ag>fRS91tSM8uz74-~r*;{Iemo`PV6LWVn{@>rPc^sJ*N z93VCsQj`Yv)au1~Sy^dzvoQ_1wr64CTUiocCDL5(UF|3;ACvDwbT>h6dzABF$KC_RSR@((oTlVa%va+1ksN<&VP!kxuRp0m>P3sAa< z9i%h~=w)01l|+OXAIuG>F$x`_!PUjFA7%kQK`E=YJjXcsOrk1Gl`(|MPke$?=b)Il z229;3Oj$+Ph>0ZFP1~R-YRS54MISe?#LD z8JBTXsmCLfg`8PZko&pAK#vg(RF5dcoa3$^BF2bs;Yzf~7^hK1k64{6!rtH*761qT z!YQpgjIJg+{g8Jj0O>YVdXI)UtN^hrJPMJIBj`<3KpCgiN<)qup+tQXBp65jIf=)r!*NP88CLfN;3Lz*wa)n@#V~*{V8vn@ zIHseKn+GWM&&F|tk_0-<1FBU*;_8|kJ=Kw4bclwGh8eFRLF5VT@%rCPyC2LN<5*1i z0)?5-7K6d6^R=6RpP3Ezde&rBRBoPn7Rx;2CK%#J%;n=sLsgf;YNmi=Wuz)wfI*Ys zh%#k%i8n~tu84Pg@OdST=(y5>j8-y)?Uau+v1y+amhK`Zj6%OmdjVldh^B?(Ns0*K z#Eb_+%0-0gV00_6ii zZ+`mkS9euEeueQv$^+n~szm)Q#E?u<;8m?G zT_qwy0Upm@SuWs?N>afg)xsoFf|*^VXnKfzF5zv3t{e}VoXlnk4d(OiY*vn7KDRZd zc=Rq%aYSP~|LaUINs;H@9DQ-V2piWEycl#o<`n}1nwNwGshlQJtPwI)gu{w?k}kZ* z@mK}uo+Zak)$fBB7`)6@5B@F9ZA&XO!ln;8euy5bLBb*rMZ4HrmBo2+_1W!?Zd5PI|6PQv#^j9bj9ufmNfwKDg*E+q?(kv z%K1N8IM;(c(Zcs#f5(ngh1R1!p z;Vs4nS2EFET!}X%)xpSkod+Y)@z03iyoZADh?x{*Ne`ur4^~K@qAOg1quGf>n%$EK z@7d~?+~OH=i(+%p^ICBWZ*$2lmfT{=EpCVpQqyhu#d$Ww zA?AY%8fGG0bHv#oV>tK&Hy;uK3Q!hvH7Y+uBofs&*1Kz(dc5l5jdkE3&JRxGEPSMn z0iVuH9*;>i4s2mq)d6AWidH#)de+2BcVeFC{h!+~P$z9rvZP5*jVA5gjVAS%G-*kb zmNaQelYWm}Od8#(ku&5&p3j}xOxPifF-&;?R6@>h0a?Qf;S2%x@8#TPNf>l6c+!NS zJhN%b7`B#-VaXVtGh=wNU$=06(1in*V80l+{LR5a4tWvIgC6l4BFim`Mc-vT&)aq2_AOo@2=U-Yp*;!%qDd~J~tcna_Z z@#}jd56ny6eR4j5m*&66ei^ic&}6v-vg9I_9gvn}#M3`(Tau9_8F_|eq*Z`8`f+FP zaO-gI$KC$n&cW{1!ClWyD*PeFTx_@i|M4-oz-s&A59AV%yQL65Z{$A3*a0UoEB9U& 
za`2B#eir2X=c8Jn7j*>m^P+N}mEtN5Saq?W$|6F*{0WYbj?SBVN?cs(5TI1u08-wX z;-Z98*7a$MU8R)cC=u5w#QL&~-??-Mg<c{|Mib>&$J1P=-V7a}6J}OkRco zYEFpoAc8(9Pj=phbcjM{?IVr55d;N@A{F;6t^v4RQ|&nw&S2&br@T33KWxhAVZ#j_ z`OoJY_=S)++poBv=fJpHNSEq#ZeG)%FY7TgPo=nVRWy=3OFG7a7P%YiN2S7z_5WoeF!QlDO0aEk9$HC?;s3pKxk7@y->pd8BZxe3?EG*!z4hEoO(Qx~~VKskQ zSs2WkkgdIE&C3_91w`}11>zTq=1SdLfVq5kej0GTX4_zHYzZtN&L1vn-;8ot5CR~k zsfBe*zF>fFMLXIfzTCTNgZS%kuD?$#tWsi4zjJlV8r=eTU4^`7i@0iBJuL$3V*o8n zX4h^G&KD=_CN#Js{BD?FBM7aK_lw-5?%H9n4H!4p8@})U(^q$GWBsqMzPfkVN_ZT! z%6nC7izTE3pJo!qo4sm@+BI~&9=jR%vV$6<5L$|zEwb!e3-fP!>v(kREWqWU8cRy*>ddF_P)2_AG zuXNsT**>tF2d!G~+O2onwf43uoo}^kzu5~a>lT=o?>6q-dQoRP>HtDIu*w~8(%G4G zz-ApU=Ko3vx-49reW=Tw-I!i>H|5I9a4DW$ zqjEz0b9u&_A28&TC|z5fO~`mD%qBmS2P;qX)JN_Z3f~H70Fd0S>PXIrKz3WzNP#;C zX@5(7$W>F&5yrOX`JKi(Dd-b2HiegnNC|VgVTvyFnT&kC#Q0p^{4Q0k%vb&g5kFM5 H@?ih~J1n9| diff --git a/developer-docs-site/docs/move/book/functions.md b/developer-docs-site/docs/move/book/functions.md index 00a71b066861c..6ead8fd6a230c 100644 --- a/developer-docs-site/docs/move/book/functions.md +++ b/developer-docs-site/docs/move/book/functions.md @@ -463,7 +463,7 @@ fun add(x: u64, y: u64): u64 { } ``` -[As mentioned above](#function-body), the function's body is an [expression block](./variables.md). The expression block can sequence various statements, and the final expression in the block will be be the value of that block +[As mentioned above](#function-body), the function's body is an [expression block](./variables.md). The expression block can sequence various statements, and the final expression in the block will be the value of that block ```move= fun double_and_add(x: u64, y: u64): u64 { diff --git a/developer-docs-site/docs/move/book/package-upgrades.md b/developer-docs-site/docs/move/book/package-upgrades.md index f1844bc8a1d7f..3f8eb7f307c35 100644 --- a/developer-docs-site/docs/move/book/package-upgrades.md +++ b/developer-docs-site/docs/move/book/package-upgrades.md @@ -78,7 +78,7 @@ published previously need to be compatible and follow the rules below: modified. Struct abilities also cannot be changed (no new ones added or existing removed). - All public and entry functions cannot change their signature (argument types, type argument, return types). However, argument names can change. -- Public(friend) functions are treated as private and thus their signature can arbitrarily change. This is safe as +- `public(friend)` functions are treated as private and thus their signature can arbitrarily change. This is safe as only modules in the same package can call friend functions anyway and they need to be updated if the signature changes. When updating your modules, if you see an incompatible error, make sure to check the above rules and fix any violations. diff --git a/developer-docs-site/docs/move/book/unit-testing.md b/developer-docs-site/docs/move/book/unit-testing.md index 2c85f34043d86..f035487630d38 100644 --- a/developer-docs-site/docs/move/book/unit-testing.md +++ b/developer-docs-site/docs/move/book/unit-testing.md @@ -95,7 +95,7 @@ When running tests, every test will either `PASS`, `FAIL`, or `TIMEOUT`. If a te A test will be marked as timing out if it exceeds the maximum number of instructions that can be executed for any single test. This bound can be changed using the options below, and its default value is set to 5000 instructions. 
Additionally, while the result of a test is always deterministic, tests are run in parallel by default, so the ordering of test results in a test run is non-deterministic unless running with only one thread (see `OPTIONS` below). -There are also a number of options that can be passed to the unit testing binary to fine-tune testing and to help debug failing tests. These can be found using the the help flag: +There are also a number of options that can be passed to the unit testing binary to fine-tune testing and to help debug failing tests. These can be found using the help flag: ``` $ aptos move test -h @@ -220,7 +220,7 @@ Module 0000000000000000000000000000000000000000000000000000000000000001::my_modu Please use `aptos move coverage -h` for more detailed source or bytecode test coverage of this package ``` -Then by running `aptos move coverage`, we can get more detailed coverage information. These can be found using the the help flag: +Then by running `aptos move coverage`, we can get more detailed coverage information. These can be found using the help flag: ``` $ aptos move coverage -h diff --git a/developer-docs-site/docs/move/move-on-aptos/cli.md b/developer-docs-site/docs/move/move-on-aptos/cli.md index 9c8e00ff130a4..1afbb648ead0b 100644 --- a/developer-docs-site/docs/move/move-on-aptos/cli.md +++ b/developer-docs-site/docs/move/move-on-aptos/cli.md @@ -1,10 +1,10 @@ --- -title: "Aptos Move CLI" +title: "Aptos CLI" --- import CodeBlock from '@theme/CodeBlock'; -# Use the Aptos Move CLI +# Use the Aptos CLI The `aptos` tool is a command line interface (CLI) for developing on the Aptos blockchain, debugging, and for node operations. This document describes how to use the `aptos` CLI tool. To download or build the CLI, follow [Install Aptos CLI](../../tools/aptos-cli/install-cli/index.md). diff --git a/developer-docs-site/docs/move/prover/spec-lang.md b/developer-docs-site/docs/move/prover/spec-lang.md index cc4c05fdf9745..0ec8c8c553e9d 100644 --- a/developer-docs-site/docs/move/prover/spec-lang.md +++ b/developer-docs-site/docs/move/prover/spec-lang.md @@ -237,7 +237,7 @@ function in such a situation, one can use the notation `::len(v)`. In MSL, expressions have partial semantics. This is in contrast to Move program expressions, which have total semantics, since they either deliver a value or abort. -An expression `e[X]` that depends on some some variables `X` may have a known interpretation for +An expression `e[X]` that depends on some variables `X` may have a known interpretation for some assignments to variables in `X` but is unknown for others. An unknown interpretation for a sub-expression causes no issue if its value is not needed for the overall expression result. Therefore it does not matter if we say `y != 0 && x / y > 0` diff --git a/developer-docs-site/docs/nodes/full-node/aptos-db-restore.md b/developer-docs-site/docs/nodes/full-node/aptos-db-restore.md index 4a66473a5326d..75d0ae9a47390 100644 --- a/developer-docs-site/docs/nodes/full-node/aptos-db-restore.md +++ b/developer-docs-site/docs/nodes/full-node/aptos-db-restore.md @@ -79,6 +79,19 @@ aptos node bootstrap-db \ ``` ### Restore a fullnode with full history from genesis +**Resource Requirements** + +* Open File Limit: Set the open file limit to 10K. +* Testnet: + * Disk: 1.5TB + * RAM: 32GB + * Duration: Approximately 10 hours to finish. +* Mainnet: + * Disk: 1TB + * RAM: 32GB + * Duration: Approximately 5 hours to finish. 
+ + To restore a fullnode with full history from genesis, set `ledger-history-start-version` to 0 and disable the pruner by [disabling the ledger pruner](../../guides/data-pruning.md). Example command: @@ -90,6 +103,11 @@ aptos node bootstrap-db \ --command-adapter-config /path/to/s3-public.yaml \ --target-db-dir /path/to/local/db ``` + +:::tip +If you don't specify the target version (via `--target-version`), the tool will use the latest version in the backup as the target version. +::: + Disable the pruner in the node config to prevent the early history from being pruned when you start the node. ```Yaml storage: diff --git a/developer-docs-site/docs/nodes/indexer-fullnode.md b/developer-docs-site/docs/nodes/indexer-fullnode.md index a2cf1d9b49682..cd012dc941383 100644 --- a/developer-docs-site/docs/nodes/indexer-fullnode.md +++ b/developer-docs-site/docs/nodes/indexer-fullnode.md @@ -80,6 +80,12 @@ For an Aptos indexer fullnode, install these packages: emit_every: 500 ``` +:::tip Bootstrap the fullnode +Instead of syncing your indexer fullnode from genesis, which may take a long period of time, you can choose to bootstrap your fullnode using backup data before starting it. To do so, follow the instructions to [restore from a backup](../nodes/full-node/aptos-db-restore.md). + +Note: indexers cannot be bootstrapped using [a snapshot](../nodes/full-node/bootstrap-fullnode.md) or [fast sync](../guides/state-sync.md#fast-syncing). +::: + 1. Run the indexer fullnode with either `cargo run` or `docker run` depending upon your setup. Remember to supply the arguments you need for your specific node: ```bash docker run -p 8080:8080 \ diff --git a/developer-docs-site/docs/nodes/validator-node/operator/staking-pool-operations.md b/developer-docs-site/docs/nodes/validator-node/operator/staking-pool-operations.md index 10f2243d0d1e3..a4450d6936b03 100644 --- a/developer-docs-site/docs/nodes/validator-node/operator/staking-pool-operations.md +++ b/developer-docs-site/docs/nodes/validator-node/operator/staking-pool-operations.md @@ -55,11 +55,11 @@ You can either enter the private key from an existing wallet, or create new wall ### Initialize staking pool ```bash -aptos stake initialize-stake-owner \ - --initial-stake-amount 100000000000000 \ - --operator-address \ - --voter-address \ - --profile mainnet-owner +aptos stake create-staking-contract \ +--operator \ +--voter \ +--amount 100000000000000 \ +--commission-percentage 10 ``` ### Transfer coin between accounts diff --git a/developer-docs-site/docs/standards/aptos-token.md b/developer-docs-site/docs/standards/aptos-token.md index 2ac487991b94a..7978fb7a960dd 100644 --- a/developer-docs-site/docs/standards/aptos-token.md +++ b/developer-docs-site/docs/standards/aptos-token.md @@ -166,7 +166,7 @@ The following tables describe fields at the struct level. For the definitive lis | Field | Description | | --- | --- | | [`Collections`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-token/doc/token.md#resource-collections) | Maintains a table called `collection_data`, which maps the collection name to the `CollectionData`. It also stores all the `TokenData` that this creator creates. | -| [`CollectionData`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-token/doc/token.md#struct-collectiondata) | Stores the collection metadata. The supply is the number of tokens created for the current collection. The maxium is the upper bound of tokens in this collection. 
| +| [`CollectionData`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-token/doc/token.md#struct-collectiondata) | Stores the collection metadata. The supply is the number of tokens created for the current collection. The maximum is the upper bound of tokens in this collection. | | [`CollectionMutabilityConfig`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-token/doc/token.md#0x3_token_CollectionMutabilityConfig) | Specifies which field is mutable. | | [`TokenData`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-token/doc/token.md#0x3_token_TokenData) | Acts as the main struct for holding the token metadata. Properties is a where users can add their own properties that are not defined in the token data. Users can mint more tokens based on the `TokenData`, and those tokens share the same `TokenData`. | | [`TokenMutabilityConfig`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-token/doc/token.md#0x3_token_TokenMutabilityConfig) | Controls which fields are mutable. | diff --git a/developer-docs-site/docs/standards/wallets.md b/developer-docs-site/docs/standards/wallets.md index 33097204a4323..776f651c73595 100644 --- a/developer-docs-site/docs/standards/wallets.md +++ b/developer-docs-site/docs/standards/wallets.md @@ -217,7 +217,7 @@ Key rotation is currently not implemented in any wallets. Mapping of rotated key Wallets that import a private key will have to do the following: 1. Derive the authentication key. -2. Lookup the authentication key onchain in the Account origination table +2. Lookup the authentication key onchain in the Account origination table. - If the account doesn't exist, it's a new account. The address to be used is the authentication key. - If the account does exist, it's a rotated key account, and the address to be used will come from the table. diff --git a/developer-docs-site/docs/tutorials/index.md b/developer-docs-site/docs/tutorials/index.md index d77617a81c4ad..ae7540cdde298 100644 --- a/developer-docs-site/docs/tutorials/index.md +++ b/developer-docs-site/docs/tutorials/index.md @@ -8,15 +8,15 @@ If you are new to the Aptos blockchain, begin with these quickstarts before you ### [Your First Transaction](first-transaction.md) -How to [generate, submit and verify a transaction](first-transaction.md) to the Aptos blockchain. +How to generate, submit and verify a transaction to the Aptos blockchain. ### [Your First NFT](your-first-nft.md) -Learn the Aptos `token` interface and how to use it to [generate your first NFT](your-first-nft.md). This interface is defined in the [`token.move`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-token/sources/token.move) Move module. +Learn the Aptos `token` interface and how to use it to generate your first NFT. This interface is defined in the [`token.move`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-token/sources/token.move) Move module. ### [Your First Coin](first-coin.md) -Learn how to [deploy and manage a coin](first-coin.md). The `coin` interface is defined in the [`coin.move`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/coin.move) Move module. +Learn how to deploy and manage a coin. 
The `coin` interface is defined in the [`coin.move`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/coin.move) Move module. ### [Your First Fungible Asset](first-fungible-asset.md) @@ -24,11 +24,11 @@ Learn how to [deploy and manage a fungible asset](first-fungible-asset.md). The ### [Your First Move Module](first-move-module.md) -[Write your first Move module](first-move-module.md) for the Aptos blockchain. +Write your first Move module for the Aptos blockchain. ### [Your First Dapp](first-dapp.md) -Learn how to [build your first dapp](first-dapp.md). Focuses on building the user interface for the dapp. +Learn how to build your first dapp. Focuses on building the user interface for the dapp. ### [Your First Multisig](first-multisig.md) diff --git a/docker/builder/build-tools.sh b/docker/builder/build-tools.sh index 7c55469f110d7..afca44fbbae11 100755 --- a/docker/builder/build-tools.sh +++ b/docker/builder/build-tools.sh @@ -26,6 +26,8 @@ cargo build --locked --profile=$PROFILE \ -p aptos-indexer-grpc-file-store \ -p aptos-indexer-grpc-data-service \ -p aptos-indexer-grpc-post-processor \ + -p aptos-nft-metadata-crawler-parser \ + -p aptos-api-tester \ "$@" # After building, copy the binaries we need to `dist` since the `target` directory is used as docker cache mount and only available during the RUN step @@ -44,6 +46,8 @@ BINS=( aptos-indexer-grpc-file-store aptos-indexer-grpc-data-service aptos-indexer-grpc-post-processor + aptos-nft-metadata-crawler-parser + aptos-api-tester ) mkdir dist diff --git a/docker/builder/docker-bake-rust-all.hcl b/docker/builder/docker-bake-rust-all.hcl index 1b6194266c074..10d0e78fd972b 100644 --- a/docker/builder/docker-bake-rust-all.hcl +++ b/docker/builder/docker-bake-rust-all.hcl @@ -58,6 +58,7 @@ group "all" { "telemetry-service", "indexer-grpc", "validator-testing", + "nft-metadata-crawler", ]) } @@ -68,7 +69,7 @@ group "forge-images" { target "debian-base" { dockerfile = "docker/builder/debian-base.Dockerfile" contexts = { - debian = "docker-image://debian:bullseye@sha256:2c407480ad7c98bdc551dbb38b92acb674dc130c8298f2e0fa2ad34da9078637" + debian = "docker-image://debian:bullseye@sha256:7ac88cb3b95d347e89126a46696374fab97153b63d25995a5c6e75b5e98a0c79" } } @@ -77,7 +78,7 @@ target "builder-base" { target = "builder-base" context = "." contexts = { - rust = "docker-image://rust:1.71.1-bullseye@sha256:6b5a53fef2818e28548be943a622bfc52d73920fe0f8784f4296227bca30cdf1" + rust = "docker-image://rust:1.71.1-bullseye@sha256:79ddef683780336ce47c56c86184cf49e4f36c598d8f0bfe9453f52437b1b9a9" } args = { PROFILE = "${PROFILE}" @@ -213,6 +214,15 @@ target "indexer-grpc" { tags = generate_tags("indexer-grpc") } +target "nft-metadata-crawler" { + inherits = ["_common"] + target = "nft-metadata-crawler" + dockerfile = "docker/builder/nft-metadata-crawler.Dockerfile" + tags = generate_tags("nft-metadata-crawler") + cache-from = generate_cache_from("nft-metadata-crawler") + cache-to = generate_cache_to("nft-metadata-crawler") +} + function "generate_cache_from" { params = [target] result = CI == "true" ? 
[ diff --git a/docker/builder/nft-metadata-crawler.Dockerfile b/docker/builder/nft-metadata-crawler.Dockerfile new file mode 100644 index 0000000000000..4452cc1986c66 --- /dev/null +++ b/docker/builder/nft-metadata-crawler.Dockerfile @@ -0,0 +1,22 @@ +### NFT Metadata Crawler Image ### + +FROM tools-builder + +FROM debian-base AS nft-metadata-crawler + +RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \ + --mount=type=cache,target=/var/lib/apt,sharing=locked \ + apt-get update && apt-get install --no-install-recommends -y \ + libssl1.1 \ + ca-certificates \ + net-tools \ + tcpdump \ + iproute2 \ + netcat \ + libpq-dev \ + curl + +COPY --link --from=tools-builder /aptos/dist/aptos-nft-metadata-crawler-parser /usr/local/bin/aptos-nft-metadata-crawler-parser + +# The health check port +EXPOSE 8080 diff --git a/docker/builder/tools.Dockerfile b/docker/builder/tools.Dockerfile index 55b467e01e386..ff268ea05d3c7 100644 --- a/docker/builder/tools.Dockerfile +++ b/docker/builder/tools.Dockerfile @@ -31,6 +31,13 @@ COPY --link --from=tools-builder /aptos/dist/aptos /usr/local/bin/aptos COPY --link --from=tools-builder /aptos/dist/aptos-openapi-spec-generator /usr/local/bin/aptos-openapi-spec-generator COPY --link --from=tools-builder /aptos/dist/aptos-fn-check-client /usr/local/bin/aptos-fn-check-client COPY --link --from=tools-builder /aptos/dist/aptos-transaction-emitter /usr/local/bin/aptos-transaction-emitter +COPY --link --from=tools-builder /aptos/dist/aptos-api-tester /usr/local/bin/aptos-api-tester + +# Copy the example module to publish for api-tester +COPY --link --from=tools-builder /aptos/aptos-move/framework/aptos-framework /aptos-move/framework/aptos-framework +COPY --link --from=tools-builder /aptos/aptos-move/framework/aptos-stdlib /aptos-move/framework/aptos-stdlib +COPY --link --from=tools-builder /aptos/aptos-move/framework/move-stdlib /aptos-move/framework/move-stdlib +COPY --link --from=tools-builder /aptos/aptos-move/move-examples/hello_blockchain /aptos-move/move-examples/hello_blockchain ### Get Aptos Move releases for genesis ceremony RUN mkdir -p /aptos-framework/move diff --git a/ecosystem/indexer-grpc/indexer-grpc-data-service/src/main.rs b/ecosystem/indexer-grpc/indexer-grpc-data-service/src/main.rs index ecfbf3d357bae..e05859fe06e02 100644 --- a/ecosystem/indexer-grpc/indexer-grpc-data-service/src/main.rs +++ b/ecosystem/indexer-grpc/indexer-grpc-data-service/src/main.rs @@ -48,6 +48,8 @@ pub struct IndexerGrpcDataServiceConfig { // The address for TLS and non-TLS gRPC server to listen on. pub data_service_grpc_tls_config: Option, pub data_service_grpc_non_tls_config: Option, + // The size of the response channel that response can be buffered. + pub data_service_response_channel_size: Option, // A list of auth tokens that are allowed to access the service. pub whitelisted_auth_tokens: Vec, // File store config. 
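The hunk above adds `data_service_response_channel_size: Option<usize>` to the data service config. As a rough illustration of how such an optional key behaves when the config is read from YAML, here is a minimal sketch; the trimmed-down stand-in struct and the direct `serde_yaml::from_str` call are assumptions for the example only, not the service's actual config loader.

```rust
// Illustrative stand-in for the real config struct: an optional YAML key
// maps to Option<usize>, and a default is applied when the key is absent.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct DataServiceConfigSketch {
    data_service_response_channel_size: Option<usize>,
}

const DEFAULT_MAX_RESPONSE_CHANNEL_SIZE: usize = 3;

fn main() -> Result<(), serde_yaml::Error> {
    // Key present in the YAML file.
    let with_key: DataServiceConfigSketch =
        serde_yaml::from_str("data_service_response_channel_size: 64")?;
    assert_eq!(with_key.data_service_response_channel_size, Some(64));

    // Key omitted: the Option deserializes to None and the caller falls back.
    let without_key: DataServiceConfigSketch = serde_yaml::from_str("{}")?;
    let effective = without_key
        .data_service_response_channel_size
        .unwrap_or(DEFAULT_MAX_RESPONSE_CHANNEL_SIZE);
    assert_eq!(effective, 3);
    Ok(())
}
```

A missing key simply deserializes to `None`, so existing deployment configs keep working and the server falls back to its compiled-in default.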
@@ -91,6 +93,7 @@ impl RunnableConfig for IndexerGrpcDataServiceConfig { let server = RawDataServerWrapper::new( self.redis_read_replica_address.clone(), self.file_store_config.clone(), + self.data_service_response_channel_size, ); let svc = aptos_protos::indexer::v1::raw_data_server::RawDataServer::new(server) .send_compressed(CompressionEncoding::Gzip) diff --git a/ecosystem/indexer-grpc/indexer-grpc-data-service/src/service.rs b/ecosystem/indexer-grpc/indexer-grpc-data-service/src/service.rs index 95251c23a79c0..bd871b7518901 100644 --- a/ecosystem/indexer-grpc/indexer-grpc-data-service/src/service.rs +++ b/ecosystem/indexer-grpc/indexer-grpc-data-service/src/service.rs @@ -49,9 +49,8 @@ const AHEAD_OF_CACHE_RETRY_SLEEP_DURATION_MS: u64 = 50; // TODO(larry): fix all errors treated as transient errors. const TRANSIENT_DATA_ERROR_RETRY_SLEEP_DURATION_MS: u64 = 1000; -// Up to MAX_RESPONSE_CHANNEL_SIZE response can be buffered in the channel. If the channel is full, -// the server will not fetch more data from the cache and file store until the channel is not full. -const MAX_RESPONSE_CHANNEL_SIZE: usize = 80; +// Default max response channel size. +const DEFAULT_MAX_RESPONSE_CHANNEL_SIZE: usize = 3; // The server will retry to send the response to the client and give up after RESPONSE_CHANNEL_SEND_TIMEOUT. // This is to prevent the server from being occupied by a slow client. @@ -62,16 +61,22 @@ const SHORT_CONNECTION_DURATION_IN_SECS: u64 = 10; pub struct RawDataServerWrapper { pub redis_client: Arc, pub file_store_config: IndexerGrpcFileStoreConfig, + pub data_service_response_channel_size: Option, } impl RawDataServerWrapper { - pub fn new(redis_address: String, file_store_config: IndexerGrpcFileStoreConfig) -> Self { + pub fn new( + redis_address: String, + file_store_config: IndexerGrpcFileStoreConfig, + data_service_response_channel_size: Option, + ) -> Self { Self { redis_client: Arc::new( redis::Client::open(format!("redis://{}", redis_address)) .expect("Create redis client failed."), ), file_store_config, + data_service_response_channel_size, } } } @@ -114,7 +119,10 @@ impl RawData for RawDataServerWrapper { let transactions_count = request.transactions_count; // Response channel to stream the data to the client. 
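The change just below swaps the hard-coded `MAX_RESPONSE_CHANNEL_SIZE` (80) for the configurable value with a much smaller default of 3. The comment being removed described why the bound matters: a full channel stops the server from fetching more data until the client drains it. A standalone sketch of that backpressure with `tokio::sync::mpsc` (illustrative only, not the data service code):

```rust
// Demonstrates the backpressure of a bounded tokio mpsc channel: with
// capacity 3, a fourth send cannot complete until the receiver makes room.
use tokio::sync::mpsc::channel;
use tokio::time::{sleep, timeout, Duration};

#[tokio::main]
async fn main() {
    let (tx, mut rx) = channel::<u64>(3);

    // Fill the channel to capacity.
    for i in 0..3 {
        tx.send(i).await.unwrap();
    }

    // A fourth send is parked while the channel is full.
    let parked = timeout(Duration::from_millis(50), tx.send(3)).await;
    assert!(parked.is_err(), "send should still be waiting");

    // Once the receiver drains one message, capacity frees up and the
    // producer can continue.
    let consumer = tokio::spawn(async move {
        sleep(Duration::from_millis(10)).await;
        rx.recv().await
    });
    tx.send(3).await.unwrap();
    assert_eq!(consumer.await.unwrap(), Some(0));
}
```

Shrinking the default from 80 to 3 therefore limits how far the stream can run ahead of a slow client instead of buffering many large responses in memory.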
- let (tx, rx) = channel(MAX_RESPONSE_CHANNEL_SIZE); + let (tx, rx) = channel( + self.data_service_response_channel_size + .unwrap_or(DEFAULT_MAX_RESPONSE_CHANNEL_SIZE), + ); let mut current_version = match &request.starting_version { Some(version) => *version, None => { diff --git a/ecosystem/indexer-grpc/indexer-grpc-fullnode/src/tests/proto_converter_tests.rs b/ecosystem/indexer-grpc/indexer-grpc-fullnode/src/tests/proto_converter_tests.rs index 4fd778cfc65b7..5c5d22a739bf5 100644 --- a/ecosystem/indexer-grpc/indexer-grpc-fullnode/src/tests/proto_converter_tests.rs +++ b/ecosystem/indexer-grpc/indexer-grpc-fullnode/src/tests/proto_converter_tests.rs @@ -7,6 +7,7 @@ use crate::{ }; use aptos_api_test_context::current_function_name; +use aptos_framework::extended_checks; use aptos_protos::extractor::v1::{ transaction::{TransactionType, TxnData}, transaction_payload::{Payload, Type as PayloadType}, @@ -245,6 +246,8 @@ async fn build_test_module(account: AccountAddress) -> Vec { generate_docs: false, install_dir: Some(package_dir.clone()), additional_named_addresses: [("TestAccount".to_string(), account)].into(), + known_attributes: extended_checks::get_all_attribute_names().clone(), + skip_attribute_checks: false, ..Default::default() }; let package = build_config diff --git a/ecosystem/nft-metadata-crawler-parser/src/models/nft_metadata_crawler_uris_query.rs b/ecosystem/nft-metadata-crawler-parser/src/models/nft_metadata_crawler_uris_query.rs index ae672d3c7dd75..84dd4a7cb41b5 100644 --- a/ecosystem/nft-metadata-crawler-parser/src/models/nft_metadata_crawler_uris_query.rs +++ b/ecosystem/nft-metadata-crawler-parser/src/models/nft_metadata_crawler_uris_query.rs @@ -56,12 +56,14 @@ impl NFTMetadataCrawlerURIsQuery { } pub fn get_by_raw_image_uri( + token_uri: String, raw_image_uri: String, conn: &mut PooledConnection>, ) -> anyhow::Result> { let mut op = || { parsed_token_uris::table .filter(parsed_token_uris::raw_image_uri.eq(raw_image_uri.clone())) + .filter(parsed_token_uris::token_uri.ne(token_uri.clone())) .first::(conn) .optional() .map_err(Into::into) @@ -85,12 +87,14 @@ impl NFTMetadataCrawlerURIsQuery { } pub fn get_by_raw_animation_uri( + token_uri: String, raw_animation_uri: String, conn: &mut PooledConnection>, ) -> anyhow::Result> { let mut op = || { parsed_token_uris::table .filter(parsed_token_uris::raw_animation_uri.eq(raw_animation_uri.clone())) + .filter(parsed_token_uris::token_uri.ne(token_uri.clone())) .first::(conn) .optional() .map_err(Into::into) diff --git a/ecosystem/nft-metadata-crawler-parser/src/utils/constants.rs b/ecosystem/nft-metadata-crawler-parser/src/utils/constants.rs index 67c7bafc704a3..876268fa3da73 100644 --- a/ecosystem/nft-metadata-crawler-parser/src/utils/constants.rs +++ b/ecosystem/nft-metadata-crawler-parser/src/utils/constants.rs @@ -2,3 +2,9 @@ /// Maximum retry time for exponential backoff (5 sec = 3-4 retries) pub const MAX_RETRY_TIME_SECONDS: u64 = 5; + +/// Allocate 30 seconds for downloading large JSON files +pub const MAX_JSON_REQUEST_RETRY_SECONDS: u64 = 30; + +/// Allocate 180 seconds for downloading large image files +pub const MAX_IMAGE_REQUEST_RETRY_SECONDS: u64 = 180; diff --git a/ecosystem/nft-metadata-crawler-parser/src/utils/database.rs b/ecosystem/nft-metadata-crawler-parser/src/utils/database.rs index 25d945c784fc8..2bd24bb30426a 100644 --- a/ecosystem/nft-metadata-crawler-parser/src/utils/database.rs +++ b/ecosystem/nft-metadata-crawler-parser/src/utils/database.rs @@ -50,6 +50,7 @@ pub fn upsert_uris( 
cdn_animation_uri.eq(excluded(cdn_animation_uri)), image_optimizer_retry_count.eq(excluded(image_optimizer_retry_count)), json_parser_retry_count.eq(excluded(json_parser_retry_count)), + animation_optimizer_retry_count.eq(excluded(animation_optimizer_retry_count)), )); let debug_query = diesel::debug_query::(&query).to_string(); diff --git a/ecosystem/nft-metadata-crawler-parser/src/utils/gcs.rs b/ecosystem/nft-metadata-crawler-parser/src/utils/gcs.rs index 34eff06a277f2..50b5f899e4070 100644 --- a/ecosystem/nft-metadata-crawler-parser/src/utils/gcs.rs +++ b/ecosystem/nft-metadata-crawler-parser/src/utils/gcs.rs @@ -12,7 +12,7 @@ use serde_json::Value; pub async fn write_json_to_gcs(bucket: String, id: String, json: Value) -> anyhow::Result { let client = init_client().await?; - let filename = format!("{}/json.json", id); + let filename = format!("cdn/{}.json", id); let json_string = json.to_string(); let json_bytes = json_string.into_bytes(); @@ -55,7 +55,7 @@ pub async fn write_image_to_gcs( _ => "jpeg".to_string(), }; - let filename = format!("{}/image.{}", id, extension); + let filename = format!("cdn/{}.{}", id, extension); let upload_type = UploadType::Simple(Media { name: filename.clone().into(), @@ -72,8 +72,7 @@ pub async fn write_image_to_gcs( buffer, &upload_type, ) - .await - .context("Error uploading image to GCS")?; + .await?; Ok(filename) } diff --git a/ecosystem/nft-metadata-crawler-parser/src/utils/image_optimizer.rs b/ecosystem/nft-metadata-crawler-parser/src/utils/image_optimizer.rs index 185a394fb99df..3d953f08c3a40 100644 --- a/ecosystem/nft-metadata-crawler-parser/src/utils/image_optimizer.rs +++ b/ecosystem/nft-metadata-crawler-parser/src/utils/image_optimizer.rs @@ -1,6 +1,9 @@ // Copyright © Aptos Foundation -use crate::{get_uri_metadata, utils::constants::MAX_RETRY_TIME_SECONDS}; +use crate::{ + get_uri_metadata, + utils::constants::{MAX_IMAGE_REQUEST_RETRY_SECONDS, MAX_RETRY_TIME_SECONDS}, +}; use anyhow::Context; use backoff::{future::retry, ExponentialBackoff}; use futures::FutureExt; @@ -10,7 +13,7 @@ use image::{ }; use reqwest::Client; use std::{io::Cursor, time::Duration}; -use tracing::error; +use tracing::warn; pub struct ImageOptimizer; @@ -24,18 +27,16 @@ impl ImageOptimizer { ) -> anyhow::Result<(Vec, ImageFormat)> { let (_, size) = get_uri_metadata(uri.clone()).await?; if size > max_file_size_bytes { - let error_msg = format!( + return Err(anyhow::anyhow!(format!( "Image optimizer received file too large: {} bytes, skipping", size - ); - error!(uri = uri, "[NFT Metadata Crawler] {}", error_msg); - return Err(anyhow::anyhow!(error_msg)); + ))); } let op = || { async { let client = Client::builder() - .timeout(Duration::from_secs(MAX_RETRY_TIME_SECONDS / 3)) + .timeout(Duration::from_secs(MAX_IMAGE_REQUEST_RETRY_SECONDS)) .build() .context("Failed to build reqwest client")?; @@ -58,8 +59,11 @@ impl ImageOptimizer { _ => { let img = image::load_from_memory(&img_bytes) .context(format!("Failed to load image from memory: {} bytes", size))?; - let resized_image = resize(&img.to_rgb8(), 400, 400, FilterType::Gaussian); - Ok((Self::to_json_bytes(resized_image, image_quality)?, format)) + let (nwidth, nheight) = + Self::calculate_dimensions_with_ration(512, img.width(), img.height()); + let resized_image = + resize(&img.to_rgb8(), nwidth, nheight, FilterType::Gaussian); + Ok((Self::to_jpeg_bytes(resized_image, image_quality)?, format)) }, } } @@ -74,8 +78,9 @@ impl ImageOptimizer { match retry(backoff, op).await { Ok(result) => Ok(result), Err(e) => { - 
error!( + warn!( uri = uri, + error = ?e, "[NFT Metadata Crawler] Exponential backoff timed out, skipping image" ); Err(e) @@ -83,8 +88,25 @@ impl ImageOptimizer { } } + /// Calculate new dimensions given a goal size while maintaining original aspect ratio + fn calculate_dimensions_with_ration(goal: u32, width: u32, height: u32) -> (u32, u32) { + if width == 0 || height == 0 { + return (0, 0); + } + + if width > height { + let new_width = goal; + let new_height = (goal as f64 * (height as f64 / width as f64)).round() as u32; + (new_width, new_height) + } else { + let new_height = goal; + let new_width = (goal as f64 * (width as f64 / height as f64)).round() as u32; + (new_width, new_height) + } + } + /// Converts image to JPEG bytes vector - fn to_json_bytes( + fn to_jpeg_bytes( image_buffer: ImageBuffer, Vec>, image_quality: u8, ) -> anyhow::Result> { @@ -93,7 +115,7 @@ impl ImageOptimizer { match dynamic_image.write_to(&mut byte_store, ImageOutputFormat::Jpeg(image_quality)) { Ok(_) => Ok(byte_store.into_inner()), Err(e) => { - error!(error = ?e, "[NFT Metadata Crawler] Error converting image to bytes:: {} bytes", dynamic_image.as_bytes().len()); + warn!(error = ?e, "[NFT Metadata Crawler] Error converting image to bytes: {} bytes", dynamic_image.as_bytes().len()); Err(anyhow::anyhow!(e)) }, } diff --git a/ecosystem/nft-metadata-crawler-parser/src/utils/json_parser.rs b/ecosystem/nft-metadata-crawler-parser/src/utils/json_parser.rs index cffff38366861..e520dcdf7326f 100644 --- a/ecosystem/nft-metadata-crawler-parser/src/utils/json_parser.rs +++ b/ecosystem/nft-metadata-crawler-parser/src/utils/json_parser.rs @@ -1,6 +1,9 @@ // Copyright © Aptos Foundation -use crate::{get_uri_metadata, utils::constants::MAX_RETRY_TIME_SECONDS}; +use crate::{ + get_uri_metadata, + utils::constants::{MAX_JSON_REQUEST_RETRY_SECONDS, MAX_RETRY_TIME_SECONDS}, +}; use anyhow::Context; use backoff::{future::retry, ExponentialBackoff}; use futures::FutureExt; @@ -8,7 +11,7 @@ use image::ImageFormat; use reqwest::Client; use serde_json::Value; use std::time::Duration; -use tracing::{error, info}; +use tracing::{info, warn}; pub struct JSONParser; @@ -21,16 +24,15 @@ impl JSONParser { ) -> anyhow::Result<(Option, Option, Value)> { let (mime, size) = get_uri_metadata(uri.clone()).await?; if ImageFormat::from_mime_type(mime.clone()).is_some() { - let error_msg = format!("JSON parser received image file: {}, skipping", mime); - error!(uri = uri, "[NFT Metadata Crawler] {}", error_msg); - return Err(anyhow::anyhow!(error_msg)); + return Err(anyhow::anyhow!(format!( + "JSON parser received image file: {}, skipping", + mime + ))); } else if size > max_file_size_bytes { - let error_msg = format!( + return Err(anyhow::anyhow!(format!( "JSON parser received file too large: {} bytes, skipping", size - ); - error!(uri = uri, "[NFT Metadata Crawler] {}", error_msg); - return Err(anyhow::anyhow!(error_msg)); + ))); } let op = || { @@ -38,7 +40,7 @@ impl JSONParser { info!("Sending request for token_uri {}", uri); let client = Client::builder() - .timeout(Duration::from_secs(MAX_RETRY_TIME_SECONDS / 3)) + .timeout(Duration::from_secs(MAX_JSON_REQUEST_RETRY_SECONDS)) .build() .context("Failed to build reqwest client")?; @@ -70,7 +72,7 @@ impl JSONParser { match retry(backoff, op).await { Ok(result) => Ok(result), Err(e) => { - error!( + warn!( uri = uri, error = ?e, "[NFT Metadata Parser] Exponential backoff timed out, skipping JSON" diff --git a/ecosystem/nft-metadata-crawler-parser/src/utils/uri_parser.rs 
b/ecosystem/nft-metadata-crawler-parser/src/utils/uri_parser.rs index 8f12ecfb45ef4..da0158a2d265e 100644 --- a/ecosystem/nft-metadata-crawler-parser/src/utils/uri_parser.rs +++ b/ecosystem/nft-metadata-crawler-parser/src/utils/uri_parser.rs @@ -27,7 +27,7 @@ impl URIParser { let path = captures.name("path").map(|m| m.as_str().to_string()); Ok(format!( - "{}/{}{}", + "{}{}{}", ipfs_prefix, cid, path.unwrap_or_default() @@ -42,7 +42,7 @@ impl URIParser { mod tests { use super::*; - const IPFS_PREFIX: &str = "https://testipfsprefix.com/ipfs"; + const IPFS_PREFIX: &str = "https://testipfsprefix.com/ipfs/"; const CID: &str = "testcid"; const PATH: &str = "testpath"; @@ -50,12 +50,12 @@ mod tests { fn test_parse_ipfs_uri() { let test_ipfs_uri = format!("ipfs://{}/{}", CID, PATH); let parsed_uri = URIParser::parse(IPFS_PREFIX.to_string(), test_ipfs_uri).unwrap(); - assert_eq!(parsed_uri, format!("{IPFS_PREFIX}/{CID}/{PATH}")); + assert_eq!(parsed_uri, format!("{IPFS_PREFIX}{CID}/{PATH}")); // Path is optional for IPFS URIs let test_ipfs_uri_no_path = format!("ipfs://{}/{}", CID, ""); let parsed_uri = URIParser::parse(IPFS_PREFIX.to_string(), test_ipfs_uri_no_path).unwrap(); - assert_eq!(parsed_uri, format!("{}/{}/{}", IPFS_PREFIX, CID, "")); + assert_eq!(parsed_uri, format!("{}{}/{}", IPFS_PREFIX, CID, "")); // IPFS URIs must contain a CID, expect error here let test_ipfs_uri_no_cid = format!("ipfs://{}/{}", "", PATH); @@ -68,13 +68,13 @@ mod tests { let test_public_gateway_uri = format!("https://ipfs.io/ipfs/{}/{}", CID, PATH); let parsed_uri = URIParser::parse(IPFS_PREFIX.to_string(), test_public_gateway_uri).unwrap(); - assert_eq!(parsed_uri, format!("{IPFS_PREFIX}/{CID}/{PATH}",)); + assert_eq!(parsed_uri, format!("{IPFS_PREFIX}{CID}/{PATH}",)); // Path is optional for public gateway URIs let test_public_gateway_uri_no_path = format!("https://ipfs.io/ipfs/{}/{}", CID, ""); let parsed_uri = URIParser::parse(IPFS_PREFIX.to_string(), test_public_gateway_uri_no_path).unwrap(); - assert_eq!(parsed_uri, format!("{}/{}/{}", IPFS_PREFIX, CID, "")); + assert_eq!(parsed_uri, format!("{}{}/{}", IPFS_PREFIX, CID, "")); // Public gateway URIs must contain a CID, expect error here let test_public_gateway_uri_no_cid = format!("https://ipfs.io/ipfs/{}/{}", "", PATH); diff --git a/ecosystem/nft-metadata-crawler-parser/src/worker.rs b/ecosystem/nft-metadata-crawler-parser/src/worker.rs index 6b7c30d1128f2..8ba38d5d2810c 100644 --- a/ecosystem/nft-metadata-crawler-parser/src/worker.rs +++ b/ecosystem/nft-metadata-crawler-parser/src/worker.rs @@ -36,7 +36,7 @@ use tokio::{ task::JoinHandle, time::sleep, }; -use tracing::{error, info}; +use tracing::{error, info, warn}; /// Structs to hold config from YAML #[derive(Clone, Debug, Deserialize, Serialize)] @@ -108,6 +108,7 @@ async fn consume_pubsub_entries_to_channel_loop( error = ?e, "[NFT Metadata Crawler] Failed to send PubSub entry to channel" ); + panic!(); }); } @@ -119,27 +120,34 @@ async fn spawn_parser( semaphore: Arc, receiver: Arc>>, subscription: Subscription, - release: bool, + ack_parsed_uris: bool, ) -> anyhow::Result<()> { loop { - let _ = semaphore.acquire().await?; - // Pulls worker from Channel + let _ = semaphore.acquire().await?; let (mut worker, ack) = receiver.lock().await.recv()?; - worker.parse().await?; - // Sends ack to PubSub only if running on release mode - if release { + // Sends ack to PubSub only if ack_parsed_uris flag is true + if ack_parsed_uris { info!( token_data_id = worker.token_data_id, token_uri = worker.token_uri, 
last_transaction_version = worker.last_transaction_version, force = worker.force, - "[NFT Metadata Crawler] Acking message" + "[NFT Metadata Crawler] Received worker, acking message" ); subscription.ack(vec![ack]).await?; } + worker.parse().await?; + + info!( + token_data_id = worker.token_data_id, + token_uri = worker.token_uri, + last_transaction_version = worker.last_transaction_version, + force = worker.force, + "[NFT Metadata Crawler] Worker finished" + ); sleep(Duration::from_millis(500)).await; } } @@ -255,70 +263,98 @@ impl Worker { /// Main parsing flow pub async fn parse(&mut self) -> anyhow::Result<()> { - info!( - last_transaction_version = self.last_transaction_version, - "[NFT Metadata Crawler] Starting worker" - ); - // Deduplicate token_uri - // Proceed if force or if token_uri has not been parsed - if self.force - || NFTMetadataCrawlerURIsQuery::get_by_token_uri( + // Exit if not force or if token_uri has already been parsed + if !self.force + && NFTMetadataCrawlerURIsQuery::get_by_token_uri( self.token_uri.clone(), &mut self.conn, )? - .is_none() + .is_some() { - // Parse token_uri - self.model.set_token_uri(self.token_uri.clone()); - let token_uri = self.model.get_token_uri(); - let json_uri = URIParser::parse(self.config.ipfs_prefix.clone(), token_uri.clone()) - .unwrap_or(token_uri); - - // Parse JSON for raw_image_uri and raw_animation_uri - let (raw_image_uri, raw_animation_uri, json) = - JSONParser::parse(json_uri, self.config.max_file_size_bytes) - .await - .unwrap_or_else(|e| { - // Increment retry count if JSON parsing fails - error!( - last_transaction_version = self.last_transaction_version, - error = ?e, - "[NFT Metadata Crawler] JSON parse failed", - ); - self.model.increment_json_parser_retry_count(); - (None, None, Value::Null) - }); - - self.model.set_raw_image_uri(raw_image_uri); - self.model.set_raw_animation_uri(raw_animation_uri); - - // Save parsed JSON to GCS - if json != Value::Null { - let cdn_json_uri = - write_json_to_gcs(self.config.bucket.clone(), self.token_data_id.clone(), json) - .await - .map(|value| format!("{}{}", self.config.cdn_prefix, value)) - .ok(); - self.model.set_cdn_json_uri(cdn_json_uri); - } + return Ok(()); + } + + // Parse token_uri + let json_uri = + URIParser::parse(self.config.ipfs_prefix.clone(), self.model.get_token_uri()) + .unwrap_or(self.model.get_token_uri()); - // Commit model to Postgres - if let Err(e) = upsert_uris(&mut self.conn, self.model.clone()) { + // Parse JSON for raw_image_uri and raw_animation_uri + let (raw_image_uri, raw_animation_uri, json) = + JSONParser::parse(json_uri, self.config.max_file_size_bytes) + .await + .unwrap_or_else(|e| { + // Increment retry count if JSON parsing fails + warn!( + token_data_id=self.token_data_id, + token_uri=self.token_uri, + last_transaction_version = self.last_transaction_version, + force = self.force, + error = ?e, + "[NFT Metadata Crawler] JSON parse failed", + ); + self.model.increment_json_parser_retry_count(); + (None, None, Value::Null) + }); + + self.model.set_raw_image_uri(raw_image_uri); + self.model.set_raw_animation_uri(raw_animation_uri); + + // Save parsed JSON to GCS + if json != Value::Null { + let cdn_json_uri_result = + write_json_to_gcs(self.config.bucket.clone(), self.token_data_id.clone(), json) + .await; + + if let Err(e) = cdn_json_uri_result.as_ref() { error!( + token_data_id=self.token_data_id, + token_uri=self.token_uri, last_transaction_version = self.last_transaction_version, + force = self.force, error = ?e, - "[NFT Metadata Crawler] 
Commit to Postgres failed" + "[NFT Metadata Crawler] Failed to write JSON to GCS" ); + panic!(); } + + let cdn_json_uri = cdn_json_uri_result + .map(|value| format!("{}{}", self.config.cdn_prefix, value)) + .ok(); + self.model.set_cdn_json_uri(cdn_json_uri); + } + + // Commit model to Postgres + if let Err(e) = upsert_uris(&mut self.conn, self.model.clone()) { + error!( + token_data_id=self.token_data_id, + token_uri=self.token_uri, + last_transaction_version = self.last_transaction_version, + force = self.force, + error = ?e, + "[NFT Metadata Crawler] Commit to Postgres failed" + ); + panic!(); } // Deduplicate raw_image_uri // Proceed with image optimization of force or if raw_image_uri has not been parsed + // Since we default to token_uri, this check works if raw_image_uri is null because deduplication for token_uri has already taken place if self.force || self.model.get_raw_image_uri().map_or(true, |uri_option| { - NFTMetadataCrawlerURIsQuery::get_by_raw_image_uri(uri_option, &mut self.conn) - .map_or(true, |uri| uri.is_none()) + NFTMetadataCrawlerURIsQuery::get_by_raw_image_uri( + self.token_uri.clone(), + uri_option, + &mut self.conn, + ) + .map_or(true, |uri| match uri { + Some(uris) => { + self.model.set_cdn_image_uri(uris.cdn_image_uri); + false + }, + None => true, + }) }) { // Parse raw_image_uri, use token_uri if parsing fails @@ -326,8 +362,8 @@ impl Worker { .model .get_raw_image_uri() .unwrap_or(self.model.get_token_uri()); - let img_uri = URIParser::parse(self.config.ipfs_prefix.clone(), raw_image_uri) - .unwrap_or(self.model.get_token_uri()); + let img_uri = URIParser::parse(self.config.ipfs_prefix.clone(), raw_image_uri.clone()) + .unwrap_or(raw_image_uri); // Resize and optimize image and animation let (image, format) = ImageOptimizer::optimize( @@ -338,8 +374,11 @@ impl Worker { .await .unwrap_or_else(|e| { // Increment retry count if image is None - error!( + warn!( + token_data_id=self.token_data_id, + token_uri=self.token_uri, last_transaction_version = self.last_transaction_version, + force = self.force, error = ?e, "[NFT Metadata Crawler] Image optimization failed" ); @@ -349,26 +388,44 @@ impl Worker { if !image.is_empty() { // Save resized and optimized image to GCS - let cdn_image_uri = write_image_to_gcs( + let cdn_image_uri_result = write_image_to_gcs( format, self.config.bucket.clone(), self.token_data_id.clone(), image, ) - .await - .map(|value| format!("{}{}", self.config.cdn_prefix, value)) - .ok(); + .await; + + if let Err(e) = cdn_image_uri_result.as_ref() { + error!( + token_data_id=self.token_data_id, + token_uri=self.token_uri, + last_transaction_version = self.last_transaction_version, + force = self.force, + error = ?e, + "[NFT Metadata Crawler] Failed to write image to GCS" + ); + panic!(); + } + + let cdn_image_uri = cdn_image_uri_result + .map(|value| format!("{}{}", self.config.cdn_prefix, value)) + .ok(); self.model.set_cdn_image_uri(cdn_image_uri); } + } - // Commit model to Postgres - if let Err(e) = upsert_uris(&mut self.conn, self.model.clone()) { - error!( - last_transaction_version = self.last_transaction_version, - error = ?e, - "[NFT Metadata Crawler] Commit to Postgres failed" - ); - } + // Commit model to Postgres + if let Err(e) = upsert_uris(&mut self.conn, self.model.clone()) { + error!( + token_data_id=self.token_data_id, + token_uri=self.token_uri, + last_transaction_version = self.last_transaction_version, + force = self.force, + error = ?e, + "[NFT Metadata Crawler] Commit to Postgres failed" + ); + panic!(); } // Deduplicate 
raw_animation_uri @@ -376,9 +433,18 @@ impl Worker { let mut raw_animation_uri_option = self.model.get_raw_animation_uri(); if !self.force && raw_animation_uri_option.clone().map_or(true, |uri| { - NFTMetadataCrawlerURIsQuery::get_by_raw_animation_uri(uri, &mut self.conn) - .unwrap_or(None) - .is_some() + NFTMetadataCrawlerURIsQuery::get_by_raw_animation_uri( + self.token_uri.clone(), + uri, + &mut self.conn, + ) + .map_or(true, |uri| match uri { + Some(uris) => { + self.model.set_cdn_animation_uri(uris.cdn_animation_uri); + true + }, + None => true, + }) }) { raw_animation_uri_option = None; @@ -399,8 +465,11 @@ impl Worker { .await .unwrap_or_else(|e| { // Increment retry count if animation is None - error!( + warn!( + token_data_id=self.token_data_id, + token_uri=self.token_uri, last_transaction_version = self.last_transaction_version, + force = self.force, error = ?e, "[NFT Metadata Crawler] Animation optimization failed" ); @@ -410,26 +479,44 @@ impl Worker { // Save resized and optimized animation to GCS if !animation.is_empty() { - let cdn_animation_uri = write_image_to_gcs( + let cdn_animation_uri_result = write_image_to_gcs( format, self.config.bucket.clone(), self.token_data_id.clone(), animation, ) - .await - .map(|value| format!("{}{}", self.config.cdn_prefix, value)) - .ok(); + .await; + + if let Err(e) = cdn_animation_uri_result.as_ref() { + error!( + token_data_id=self.token_data_id, + token_uri=self.token_uri, + last_transaction_version = self.last_transaction_version, + force = self.force, + error = ?e, + "[NFT Metadata Crawler] Failed to write animation to GCS" + ); + panic!(); + } + + let cdn_animation_uri = cdn_animation_uri_result + .map(|value| format!("{}{}", self.config.cdn_prefix, value)) + .ok(); self.model.set_cdn_animation_uri(cdn_animation_uri); } + } - // Commit model to Postgres - if let Err(e) = upsert_uris(&mut self.conn, self.model.clone()) { - error!( - last_transaction_version = self.last_transaction_version, - error = ?e, - "[NFT Metadata Crawler] Commit to Postgres failed" - ); - } + // Commit model to Postgres + if let Err(e) = upsert_uris(&mut self.conn, self.model.clone()) { + error!( + token_data_id=self.token_data_id, + token_uri=self.token_uri, + last_transaction_version = self.last_transaction_version, + force = self.force, + error = ?e, + "[NFT Metadata Crawler] Commit to Postgres failed" + ); + panic!(); } Ok(()) diff --git a/ecosystem/python/sdk/aptos_sdk/aptos_token_client.py b/ecosystem/python/sdk/aptos_sdk/aptos_token_client.py index bf11e49e15d6d..dda862503c2ea 100644 --- a/ecosystem/python/sdk/aptos_sdk/aptos_token_client.py +++ b/ecosystem/python/sdk/aptos_sdk/aptos_token_client.py @@ -383,6 +383,7 @@ def create_collection_payload( return TransactionPayload(payload) + # :!:>create_collection async def create_collection( self, creator: Account, @@ -401,7 +402,7 @@ async def create_collection( tokens_freezable_by_creator: bool, royalty_numerator: int, royalty_denominator: int, - ) -> str: + ) -> str: # <:!:create_collection payload = AptosTokenClient.create_collection_payload( description, max_supply, @@ -458,6 +459,7 @@ def mint_token_payload( return TransactionPayload(payload) + # :!:>mint_token async def mint_token( self, creator: Account, @@ -466,7 +468,7 @@ async def mint_token( name: str, uri: str, properties: PropertyMap, - ) -> str: + ) -> str: # <:!:mint_token payload = AptosTokenClient.mint_token_payload( collection, description, name, uri, properties ) @@ -515,6 +517,12 @@ async def mint_soul_bound_token( ) return await 
self.client.submit_bcs_transaction(signed_transaction) + # :!:>transfer_token + async def transfer_token( + self, owner: Account, token: AccountAddress, to: AccountAddress + ) -> str: + return await self.client.transfer_object(owner, token, to) # <:!:transfer_token + async def burn_token(self, creator: Account, token: AccountAddress) -> str: payload = EntryFunction.natural( "0x4::aptos_token", diff --git a/ecosystem/python/sdk/aptos_sdk/async_client.py b/ecosystem/python/sdk/aptos_sdk/async_client.py index a5236cae07782..d2ed9ee77b847 100644 --- a/ecosystem/python/sdk/aptos_sdk/async_client.py +++ b/ecosystem/python/sdk/aptos_sdk/async_client.py @@ -752,6 +752,27 @@ async def get_collection( collection_name, ) + async def transfer_object( + self, owner: Account, object: AccountAddress, to: AccountAddress + ) -> str: + transaction_arguments = [ + TransactionArgument(object, Serializer.struct), + TransactionArgument(to, Serializer.struct), + ] + + payload = EntryFunction.natural( + "0x1::object", + "transfer_call", + [], + transaction_arguments, + ) + + signed_transaction = await self.create_bcs_signed_transaction( + owner, + TransactionPayload(payload), + ) + return await self.submit_bcs_transaction(signed_transaction) + class FaucetClient: """Faucet creates and funds accounts. This is a thin wrapper around that.""" diff --git a/ecosystem/python/sdk/examples/aptos-token.py b/ecosystem/python/sdk/examples/aptos-token.py index 3cc024be586bf..e60923ea67446 100644 --- a/ecosystem/python/sdk/examples/aptos-token.py +++ b/ecosystem/python/sdk/examples/aptos-token.py @@ -5,7 +5,7 @@ from aptos_sdk.account import Account from aptos_sdk.account_address import AccountAddress -from aptos_sdk.aptos_token_client import AptosTokenClient, Property, PropertyMap +from aptos_sdk.aptos_token_client import AptosTokenClient, Object, Property, PropertyMap from aptos_sdk.async_client import FaucetClient, RestClient from .common import FAUCET_URL, NODE_URL @@ -107,6 +107,17 @@ async def main(): token_data = await token_client.read_object(token_addr) print(f"Alice's token: {token_data}") + print("\n=== Transferring the Token from Alice to Bob ===") + print(f"Alice: {alice.address()}") + print(f"Bob: {bob.address()}") + print(f"Token: {token_addr}\n") + print(f"Owner: {token_data.resources[Object].owner}") + print(" ...transferring... ") + txn_hash = await rest_client.transfer_object(alice, token_addr, bob.address()) + await rest_client.wait_for_transaction(txn_hash) + token_data = await token_client.read_object(token_addr) + print(f"Owner: {token_data.resources[Object].owner}\n") + await rest_client.close() diff --git a/ecosystem/typescript/sdk/CHANGELOG.md b/ecosystem/typescript/sdk/CHANGELOG.md index b48a567d3956d..3bcec551d26de 100644 --- a/ecosystem/typescript/sdk/CHANGELOG.md +++ b/ecosystem/typescript/sdk/CHANGELOG.md @@ -3,7 +3,12 @@ All notable changes to the Aptos Node SDK will be captured in this file. This changelog is written by hand for now. It adheres to the format set out by [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). ## Unreleased + +## 1.18.0 (2023-08-10) + - Fix default behavior for coin client to transfer and create account by default +- Filter amount > 0 on `getTokenOwnersData` +- Include missing fields for all Indexer queries ## 1.17.0 (2023-08-04) @@ -20,6 +25,9 @@ All notable changes to the Aptos Node SDK will be captured in this file. 
This ch - Support for sorting indexer queries - `orderBy` optional argument in `extraArgs` arguments - Support for get owned tokens by token address or token data id - `getOwnedTokensByTokenData` - Add support for local/custom networks without an indexer client +- Move to use `account_transactions` query in `getAccountTransactionsData` on `IndexerClient` +- Move to use `account_transaction_aggregate` query in `getAccountTransactionsCount` on `IndexerClient` +- Optional `startVersion` argument on `getUserTransactions` is not positional and part of the object param ## 1.15.0 (2023-07-28) diff --git a/ecosystem/typescript/sdk/examples/typescript-esm/multi_ed25519_to_multisig.ts b/ecosystem/typescript/sdk/examples/typescript-esm/multi_ed25519_to_multisig.ts new file mode 100644 index 0000000000000..042505c3936b7 --- /dev/null +++ b/ecosystem/typescript/sdk/examples/typescript-esm/multi_ed25519_to_multisig.ts @@ -0,0 +1,380 @@ +import { + AptosAccount, + FaucetClient, + Network, + Provider, + HexString, + TxnBuilderTypes, + BCS, + Types, + TransactionBuilder, +} from "aptos"; +import assert from "assert"; + +const ED25519_ACCOUNT_SCHEME = 0; +const MULTI_ED25519_ACCOUNT_SCHEME = 1; + +class MultiSigAccountCreationWithAuthKeyRevocationMessage { + public readonly moduleAddress: TxnBuilderTypes.AccountAddress = TxnBuilderTypes.AccountAddress.CORE_CODE_ADDRESS; + public readonly moduleName: string = "multisig_account"; + public readonly structName: string = "MultisigAccountCreationWithAuthKeyRevocationMessage"; + public readonly functionName: string = "create_with_existing_account_and_revoke_auth_key"; + + constructor( + public readonly chainId: number, + public readonly multiSigAddress: TxnBuilderTypes.AccountAddress, + public readonly sequenceNumber: number, + public readonly owners: Array, + public readonly numSignaturesRequired: number, + ) {} + + serialize(serializer: BCS.Serializer): void { + this.moduleAddress.serialize(serializer); + serializer.serializeStr(this.moduleName); + serializer.serializeStr(this.structName); + serializer.serializeU8(this.chainId); + this.multiSigAddress.serialize(serializer); + serializer.serializeU64(this.sequenceNumber); + serializer.serializeU32AsUleb128(this.owners.length); + this.owners.forEach((owner) => owner.serialize(serializer)); + serializer.serializeU64(this.numSignaturesRequired); + } +} + +////////////////////////////////////////////////////////////////////////////////////////// +////////////////////////////////////////////////////////////////////////////////////////// +// // +// Demonstration of e2e flow // +// // +////////////////////////////////////////////////////////////////////////////////////////// +////////////////////////////////////////////////////////////////////////////////////////// +/* + * This example demonstrates how to convert a MultiEd25519 account to a MultiSig account and revoke its auth key using the `0x1::multisig_account` module. + 1. Initialize N accounts and fund them. + 2. Initialize a MultiEd25519 account with the created accounts as owners and a signature threshold K as NUM_SIGNATURES_REQUIRED + - See: https://aptos.dev/concepts/accounts/#multi-signer-authentication for more information on MultiEd25519 accounts. + 3. Create a proof struct for at minimum K of the N accounts to sign. + 4. Gather the signatures from the accounts. + 5. Assemble a MultiEd25519 signed proof struct with the gathered signatures. + 6. 
Call the `0x1::multisig_account::create_with_existing_account_and_revoke_auth_key` function with the assembled proof struct and other logistical information + - Because the function requires a signed proof by the MultiEd25519 account, it does not require or check the signer, meaning anyone can submit the transaction + with the proof struct. + - We submit it as a randomly generated account here to convey this. + 7. The transaction will be executed and the following occurs on chain: + a. The MultiEd25519 account is converted into a MultiSig account. + b. The resulting account can from then on be used as a MultiSig account, potentially with new owners and/or a new minimum signature threshold. + c. The original MultiEd25519 account has its authentication key rotated, handing over control to the `0x1::multisig_account` contract. +*/ +const main = async () => { + const provider = new Provider(Network.DEVNET); + const faucetClient = new FaucetClient(provider.aptosClient.nodeUrl, "https://faucet.devnet.aptoslabs.com"); + const NUM_SIGNATURES_REQUIRED = 3; + + // Step 1. + // Initialize N accounts and fund them. See: https://aptos.dev/concepts/accounts/#multi-signer-authentication + // Works with any # of addresses, you just need to change NUM_SIGNATURES_REQUIRED and the signingAddresses + const account1 = new AptosAccount(); + const account2 = new AptosAccount(); + const account3 = new AptosAccount(); + + await faucetClient.fundAccount(account1.address(), 100_000_000); + await faucetClient.fundAccount(account2.address(), 100_000_000); + await faucetClient.fundAccount(account3.address(), 100_000_000); + + const accounts = [account1, account2, account3]; + const accountAddresses = accounts.map((acc) => TxnBuilderTypes.AccountAddress.fromHex(acc.address())); + + // If the signing accounts are a subset of the original accounts in the actual e2e flow, we'd use this to track who's actually signing + const signingAccounts = [account1, account2, account3]; + const signingAddresses = signingAccounts.map((acc) => TxnBuilderTypes.AccountAddress.fromHex(acc.address())); + + // Step 2. + // Initialize a MultiEd25519 account with the created accounts as owners and a signature threshold K as NUM_SIGNATURES_REQUIRED + await initializeMultiEd25519(faucetClient, accounts, NUM_SIGNATURES_REQUIRED); + + const publicKeys = accounts.map((acc) => new TxnBuilderTypes.Ed25519PublicKey(acc.signingKey.publicKey)); + const multiSigPublicKey = new TxnBuilderTypes.MultiEd25519PublicKey(publicKeys, NUM_SIGNATURES_REQUIRED); + const authKey = TxnBuilderTypes.AuthenticationKey.fromMultiEd25519PublicKey(multiSigPublicKey); + const multiSigAddress = TxnBuilderTypes.AccountAddress.fromHex(authKey.derivedAddress()); + + const sequenceNumber = Number((await provider.getAccount(multiSigAddress.toHexString())).sequence_number); + const chainId = Number(await provider.getChainId()); + + // Step 3. + // Create a proof struct for at minimum K of the N accounts to sign + const proofStruct = new MultiSigAccountCreationWithAuthKeyRevocationMessage( + chainId, + multiSigAddress, + sequenceNumber, + accountAddresses, + NUM_SIGNATURES_REQUIRED, + ); + + // Step 4. + // Gather the signatures from the accounts + // In an e2e dapp example, you'd be getting these from each account/client with a wallet prompt to sign a message. 
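// A minimal sketch of how that gathering could look when each owner signs separately:
// every owner signs the same BCS-serialized proof bytes and only the signature bytes travel back.
// `collectOwnerSignature` and `gatherOwnerSignatures` are hypothetical helpers, not defined by this
// example; `AptosAccount` comes from the "aptos" import at the top of the file, and locally
// `AptosAccount.signBuffer` stands in for a wallet prompt.
const collectOwnerSignature = async (owner: AptosAccount, message: Uint8Array): Promise<Uint8Array> =>
  // Sign the raw BCS bytes of the proof struct and return the signature bytes.
  owner.signBuffer(message).toUint8Array();

const gatherOwnerSignatures = async (owners: AptosAccount[], message: Uint8Array): Promise<Uint8Array[]> =>
  // One prompt per owner; Promise.all keeps the signatures in owner order, which matters when they
  // are later concatenated and paired with the bitmap.
  Promise.all(owners.map((owner) => collectOwnerSignature(owner, message)));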
+ const bcsSerializedStruct = BCS.bcsToBytes(proofStruct); + const structSig1 = account1.signBuffer(bcsSerializedStruct); + const structSig2 = account2.signBuffer(bcsSerializedStruct); + const structSig3 = account3.signBuffer(bcsSerializedStruct); + const structSignatures = [structSig1, structSig2, structSig3].map((sig) => sig.toUint8Array()); + + // Step 5. + // Assemble a MultiEd25519 signed proof struct with the gathered signatures. + // This represents the multisig signed struct by all owners. This is used as proof of authentication by the overall MultiEd25519 account, since there's no signer + // checked in the entry function we're using. + const multiSigStruct = createMultiSigStructFromSignedStructs(accountAddresses, signingAddresses, structSignatures); + + // Create test metadata for the multisig account post-creation + const metadataKeys = ["key 123", "key 456", "key 789"]; + const metadataValues = [new Uint8Array([1, 2, 3]), new Uint8Array([4, 5, 6]), new Uint8Array([7, 8, 9])]; + + // Pack the signed multi-sig struct into the entry function payload with the number of signatures required + multisig account info + const entryFunctionPayload = createWithExistingAccountAndRevokeAuthKeyPayload( + multiSigAddress, + multiSigPublicKey, + multiSigStruct, + accountAddresses, + NUM_SIGNATURES_REQUIRED, + metadataKeys, + metadataValues, + ); + + // Step 6. + // Call the `0x1::multisig_account::create_with_existing_account_and_revoke_auth_key` function + // Since you've already authenticated the signed struct message with the required K of N accounts, you do not need to construct a MultiEd25519 authenticated signature. + // You can submit the transaction as literally any account, because the entry function does not check the sender. The transaction is validated with the multisig signed struct. + const randomAccount = new AptosAccount(); + await faucetClient.fundAccount(randomAccount.address(), 100_000_000); + + // The sender here is essentially just paying for the gas fees. + const txn = await provider.generateSignSubmitTransaction(randomAccount, entryFunctionPayload, { + expireTimestamp: BigInt(Math.floor(Date.now() / 1000) + 60), + }); + + // Step 7. + // Wait for the transaction to complete, then observe and assert that the authentication key for the original MultiEd25519 account + // has been rotated and all capabilities revoked. + const txnInfo = await provider.waitForTransactionWithResult(txn); + printRelevantTxInfo(txnInfo as Types.UserTransaction); + assertChangesAndPrint(provider, multiSigAddress, sequenceNumber, false, metadataKeys, metadataValues); +}; + +////////////////////////////////////////////////////////////////////////////////////////// +////////////////////////////////////////////////////////////////////////////////////////// +// // +// Helper/utility functions // +// // +////////////////////////////////////////////////////////////////////////////////////////// +////////////////////////////////////////////////////////////////////////////////////////// + +// For clarification, the process of creating a MultiEd25519 account is: +// 1. Create or specify N accounts. We will use their public keys to generate the MultiEd25519 account. +// 2. Create a MultiEd25519 public key with the N public keys and a signature threshold K. K must be <= N. +// NOTE: A public key is different from an account's address. It can be always derived from an account's authentication key or private key, not necessarily its address. +// 3. 
Create a MultiEd25519 authentication key with the MultiEd25519 public key. +// You can then derive the address from the authentication key. +// 4. Fund the derived MultiEd25519 account at the derived address. +// +// Funds and thus creates the derived MultiEd25519 account and prints out the derived address, authentication key, and public key. +const initializeMultiEd25519 = async ( + faucetClient: FaucetClient, + accounts: Array, + numSignaturesRequired: number, +): Promise> => { + const multiSigPublicKey = new TxnBuilderTypes.MultiEd25519PublicKey( + accounts.map((acc) => new TxnBuilderTypes.Ed25519PublicKey(acc.signingKey.publicKey)), + numSignaturesRequired, + ); + + const multiSigAuthKey = TxnBuilderTypes.AuthenticationKey.fromMultiEd25519PublicKey(multiSigPublicKey); + const multiSigAddress = multiSigAuthKey.derivedAddress(); + + // Note that a MultiEd25519's public and private keys are simply the concatenated corresponding key values of the original owners. + console.log("\nMultiEd25519 account information:"); + console.log({ + MultiEd25519Address: multiSigAddress.toString(), + MultiEd25519PublicKey: HexString.fromUint8Array(multiSigPublicKey.toBytes()).toString(), + }); + + return await faucetClient.fundAccount(multiSigAddress.toString(), 100_000_000); +}; + +// Helper function to create the bitmap from the difference between the original addresses at creation vs the current signing addresses +// NOTE: The originalAddresses MUST be in the order that was used to create the MultiEd25519 account originally. +const createBitmapFromDiff = ( + originalAddresses: Array, + signingAddresses: Array, +): Uint8Array => { + const signersSet = new Set(signingAddresses.map((addr) => addr.toHexString())); + const bits = originalAddresses + .map((addr) => addr.toHexString()) + .map((item, index) => (signersSet.has(item) ? index : -1)) + .filter((index) => index !== -1); + + // Bitmap masks which public key has signed transaction. + // See https://aptos-labs.github.io/ts-sdk-doc/classes/TxnBuilderTypes.MultiEd25519Signature.html#createBitmap + return TxnBuilderTypes.MultiEd25519Signature.createBitmap(bits); +}; + +// This is solely used to create the entry function payload for the 0x1::multisig_account::create_with_existing_account_and_revoke_auth_key function +// The multiSignedStruct is constructed with the `signStructForMultiSig` function. 
+const createWithExistingAccountAndRevokeAuthKeyPayload = ( + multiSigAddress: TxnBuilderTypes.AccountAddress, + multiSigPublicKey: TxnBuilderTypes.MultiEd25519PublicKey, + multiSignedStruct: Uint8Array, + newOwners: Array, + newNumSignaturesRequired: number, + metadataKeys: Array, + metadataValues: Array, +): TxnBuilderTypes.TransactionPayloadEntryFunction => { + assert(metadataKeys.length == metadataValues.length, "Metadata keys and values must be the same length."); + return new TxnBuilderTypes.TransactionPayloadEntryFunction( + TxnBuilderTypes.EntryFunction.natural( + `0x1::multisig_account`, + "create_with_existing_account_and_revoke_auth_key", + [], + [ + BCS.bcsToBytes(multiSigAddress), + BCS.serializeVectorWithFunc( + newOwners.map((o) => o.address), + "serializeFixedBytes", + ), + BCS.bcsSerializeUint64(newNumSignaturesRequired), + BCS.bcsSerializeU8(MULTI_ED25519_ACCOUNT_SCHEME), + BCS.bcsSerializeBytes(multiSigPublicKey.toBytes()), + BCS.bcsSerializeBytes(multiSignedStruct), + BCS.serializeVectorWithFunc(metadataKeys, "serializeStr"), + BCS.serializeVectorWithFunc(metadataValues, "serializeBytes"), + ], + ), + ); +}; + +// We create the multisig struct by concatenating the individually signed structs together +// Then we append the bitmap at the end. +const createMultiSigStructFromSignedStructs = ( + accountAddresses: Array, + signingAddresses: Array, + signatures: Array, +): Uint8Array => { + // Flatten the signatures into a single byte array + let flattenedSignatures = new Uint8Array(); + signatures.forEach((sig) => { + flattenedSignatures = new Uint8Array([...flattenedSignatures, ...sig]); + }); + + // This is the bitmap indicating which original owners are present as signers. It takes a diff of the original owners and the signing owners and creates the bitmap based on that. + const bitmap = createBitmapFromDiff(accountAddresses, signingAddresses); + + // Add the bitmap to the end of the byte array + return new Uint8Array([...flattenedSignatures, ...bitmap]); +}; + +// Helper function to check all the different resources on chain that should have changed and print them out +const assertChangesAndPrint = async ( + provider: Provider, + multiEd25519Address: TxnBuilderTypes.AccountAddress, + sequenceNumber: number, + submittedAsMultiEd25519: boolean, + metadataKeys: Array, + metadataValues: Array, +) => { + // Query the account resources on-chain + const accountResource = await provider.getAccountResource(multiEd25519Address.toHexString(), "0x1::account::Account"); + const data = accountResource?.data as any; + + // Check that the authentication key of the original MultiEd25519 account was rotated + // Normalize the inputs and then convert them to a string for comparison + const authKey = TxnBuilderTypes.AccountAddress.fromHex(data.authentication_key).toHexString().toString(); + const zeroAuthKey = TxnBuilderTypes.AccountAddress.fromHex("0x0").toHexString().toString(); + const authKeyRotated = authKey === zeroAuthKey; + + // Sequence number only increments if MultiEd was the signer + const expectedSequenceNumber = submittedAsMultiEd25519 ? sequenceNumber + 1 : sequenceNumber + 0; + + // Check the rotation/signer capability offers. 
They should have been revoked if there were any outstanding offers + const rotationCapabilityOffer = data.rotation_capability_offer.for.vec as Array; + const signerCapabilityOffer = data.signer_capability_offer.for.vec as Array; + + // Check that the metadata keys and values were correctly stored at the multisig's account address + const multisigAccountResource = await provider.getAccountResource( + multiEd25519Address.toHexString(), + "0x1::multisig_account::MultisigAccount", + ); + + const metadata = (multisigAccountResource?.data as any).metadata.data as Array; + const onChainMetadataValues = metadata.map((m) => new HexString(m.value).toUint8Array()); + + console.log(`\nMetadata added to MultiSig Account:`); + console.log(metadata); + + // Assert our expectations about the metadata key/value map + onChainMetadataValues.forEach((v, i) => { + assert( + v.length === metadataValues[i].length, + `Incorrect length. Input ${metadataValues[i].length} but on-chain length is ${v.length}`, + ); + (v as unknown as Array).forEach((vv, ii) => { + assert( + Number(vv) === Number(metadataValues[i][ii]), + `Incorrect value. Input ${metadataValues[i][ii]} but on-chain value is ${vv}`, + ); + }); + }); + const onChainMetadataKeys = metadata.map((m) => m.key); + assert( + onChainMetadataKeys.length === metadataKeys.length, + `Incorrect length. Input ${metadataKeys.length} but on-chain length is ${onChainMetadataKeys.length}`, + ); + onChainMetadataKeys.forEach((k, i) => { + assert(k === metadataKeys[i], `Incorrect key. Input ${metadataKeys[i]} but on-chain key is ${k}`); + }); + + // Assert our expectations about the account resources + assert(Number(data.sequence_number) === expectedSequenceNumber, "Incorrect sequence number."); + assert(authKeyRotated, "Authentication key was not rotated."); + assert(rotationCapabilityOffer.length == 0); + assert(signerCapabilityOffer.length == 0); + + // Print any relevant account resource info + console.log(`\nAuthentication key was rotated successfully:`); + console.log({ + authentication_key: data.authentication_key, + sequence_number: Number(data.sequence_number), + rotation_capability_offer: rotationCapabilityOffer, + signer_capability_offer: signerCapabilityOffer, + }); +}; + +const printRelevantTxInfo = (txn: Types.UserTransaction): void => { + const signatureType = txn.signature?.type; + let signatureTypeMessage = ""; + switch (signatureType) { + case "ed25519_signature": + signatureTypeMessage = "an Ed25519 account."; + break; + case "multi_ed25519_signature": + signatureTypeMessage = "a MultiEd25519 account."; + break; + default: + signatureTypeMessage = "a different signature type."; + } + console.log(txn.payload); + // Print the relevant transaction response information.
+ console.log(`\nSubmitted transaction response as ${signatureType}:`); + console.log({ + version: txn.version, + hash: txn.hash, + success: txn.success, + vm_status: txn.vm_status, + sender: txn.sender, + expiration_timestamp_secs: txn.expiration_timestamp_secs, + payload: txn.payload, + signature: txn.signature, + events: txn.events, + timestamp: txn.timestamp, + }); +}; + +main(); diff --git a/ecosystem/typescript/sdk/examples/typescript-esm/offer_capabilities.ts b/ecosystem/typescript/sdk/examples/typescript-esm/offer_capabilities.ts new file mode 100644 index 0000000000000..34c499d17f089 --- /dev/null +++ b/ecosystem/typescript/sdk/examples/typescript-esm/offer_capabilities.ts @@ -0,0 +1,163 @@ +import { AptosAccount, FaucetClient, Network, Provider, HexString, TxnBuilderTypes, BCS, Types } from "aptos"; +import assert from "assert"; + +const ED25519_ACCOUNT_SCHEME = 0; + +class SignerCapabilityOfferProofChallengeV2 { + public readonly moduleAddress: TxnBuilderTypes.AccountAddress = TxnBuilderTypes.AccountAddress.CORE_CODE_ADDRESS; + public readonly moduleName: string = "account"; + public readonly structName: string = "SignerCapabilityOfferProofChallengeV2"; + public readonly functionName: string = "offer_signer_capability"; + + constructor( + public readonly sequenceNumber: number, + public readonly sourceAddress: TxnBuilderTypes.AccountAddress, + public readonly recipientAddress: TxnBuilderTypes.AccountAddress, + ) {} + + serialize(serializer: BCS.Serializer): void { + this.moduleAddress.serialize(serializer); + serializer.serializeStr(this.moduleName); + serializer.serializeStr(this.structName); + serializer.serializeU64(this.sequenceNumber); + this.sourceAddress.serialize(serializer); + this.recipientAddress.serialize(serializer); + } +} + +class RotationCapabilityOfferProofChallengeV2 { + public readonly moduleAddress: TxnBuilderTypes.AccountAddress = TxnBuilderTypes.AccountAddress.CORE_CODE_ADDRESS; + public readonly moduleName: string = "account"; + public readonly structName: string = "RotationCapabilityOfferProofChallengeV2"; + public readonly functionName: string = "offer_rotation_capability"; + + constructor( + public readonly chainId: number, + public readonly sequenceNumber: number, + public readonly sourceAddress: TxnBuilderTypes.AccountAddress, + public readonly recipientAddress: TxnBuilderTypes.AccountAddress, + ) {} + + serialize(serializer: BCS.Serializer): void { + this.moduleAddress.serialize(serializer); + serializer.serializeStr(this.moduleName); + serializer.serializeStr(this.structName); + serializer.serializeU8(this.chainId); + serializer.serializeU64(this.sequenceNumber); + this.sourceAddress.serialize(serializer); + this.recipientAddress.serialize(serializer); + } +} + +const createAndFundAliceAndBob = async ( + faucetClient: FaucetClient, +): Promise<{ alice: AptosAccount; bob: AptosAccount }> => { + console.log(`\n--------- Creating and funding new accounts for Bob & Alice ---------\n`); + const alice = new AptosAccount(); + const bob = new AptosAccount(); + await faucetClient.fundAccount(alice.address(), 100_000_000); + await faucetClient.fundAccount(bob.address(), 100_000_000); + console.log({ + alice: alice.address().toString(), + bob: bob.address().toString(), + }); + return { + alice, + bob, + }; +}; + +(async () => { + const provider = new Provider(Network.DEVNET); + const faucetClient = new FaucetClient(provider.aptosClient.nodeUrl, "https://faucet.devnet.aptoslabs.com"); + const chainId = await provider.getChainId(); + + const { alice, bob } = 
await createAndFundAliceAndBob(faucetClient); + const aliceAccountAddress = TxnBuilderTypes.AccountAddress.fromHex(alice.address()); + const bobAccountAddress = TxnBuilderTypes.AccountAddress.fromHex(bob.address()); + + // Offer Alice's rotation capability to Bob + { + // Construct the RotationCapabilityOfferProofChallengeV2 struct + const rotationCapProof = new RotationCapabilityOfferProofChallengeV2( + chainId, + Number((await provider.getAccount(alice.address())).sequence_number), // Get Alice's account's latest sequence number + aliceAccountAddress, + bobAccountAddress, + ); + + console.log(`\n--------------- RotationCapabilityOfferProofChallengeV2 --------------\n`); + + // Sign the BCS-serialized struct, submit the transaction, and wait for the result. + const res = await signStructAndSubmitTransaction(provider, alice, rotationCapProof, ED25519_ACCOUNT_SCHEME); + + // Print the relevant transaction submission info + const { hash, version, success, payload } = res; + console.log("Submitted transaction results:"); + console.log({ hash, version, success, payload }); + + // Query Alice's Account resource on-chain to verify that she has offered the rotation capability to Bob + console.log("\nChecking Alice's account resources to verify the rotation capability offer is for Bob..."); + const { data } = await provider.getAccountResource(alice.address(), "0x1::account::Account"); + const offerFor = (data as any).rotation_capability_offer.for.vec[0]; + + console.log({ rotation_capability_offer: { for: offerFor } }); + assert(offerFor.toString() == bob.address().toString(), "Bob's address should be in the rotation capability offer"); + console.log("...success!\n"); + } + + // Offer Alice's signer capability to Bob + { + // Construct the SignerCapabilityOfferProofChallengeV2 struct + const signerCapProof = new SignerCapabilityOfferProofChallengeV2( + Number((await provider.getAccount(alice.address())).sequence_number), // Get Alice's account's latest sequence number + aliceAccountAddress, + bobAccountAddress, + ); + + console.log(`\n--------------- SignerCapabilityOfferProofChallengeV2 ---------------\n`); + + // Sign the BCS-serialized struct, submit the transaction, and wait for the result. 
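// A small optional sketch, assuming you want to inspect the exact bytes Alice signs for this offer:
// serialize the proof struct up front (the helper call below performs the same BCS serialization
// internally). `signerCapProofBytes` is only an illustrative local name; `BCS` and `HexString`
// come from the "aptos" import at the top of this example.
const signerCapProofBytes = BCS.bcsToBytes(signerCapProof);
console.log(`SignerCapabilityOfferProofChallengeV2 BCS bytes: ${HexString.fromUint8Array(signerCapProofBytes).toString()}`);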
+ const res = await signStructAndSubmitTransaction(provider, alice, signerCapProof, ED25519_ACCOUNT_SCHEME); + + // Print the relevant transaction submission info + const { hash, version, success, payload } = res; + console.log("Submitted transaction results:"); + console.log({ hash, version, success, payload }); + + // Query Alice's Account resource on-chain to verify that she has offered the signer capability to Bob + console.log("\nChecking Alice's account resources to verify the signer capability offer is for Bob..."); + const { data } = await provider.getAccountResource(alice.address(), "0x1::account::Account"); + const offerFor = (data as any).signer_capability_offer.for.vec[0]; + + console.log({ signer_capability_offer: { for: offerFor } }); + assert(offerFor.toString() == bob.address().toString(), "Bob's address should be in the signer capability offer\n"); + console.log("...success!\n"); + } +})(); + +const signStructAndSubmitTransaction = async ( + provider: Provider, + signer: AptosAccount, + struct: SignerCapabilityOfferProofChallengeV2 | RotationCapabilityOfferProofChallengeV2, + accountScheme: number = ED25519_ACCOUNT_SCHEME, +): Promise => { + const bcsStruct = BCS.bcsToBytes(struct); + const signedMessage = signer.signBuffer(bcsStruct); + + const payload = new TxnBuilderTypes.TransactionPayloadEntryFunction( + TxnBuilderTypes.EntryFunction.natural( + `${struct.moduleAddress.toHexString()}::${struct.moduleName}`, + struct.functionName, + [], + [ + BCS.bcsSerializeBytes(signedMessage.toUint8Array()), + BCS.bcsSerializeU8(accountScheme), + BCS.bcsSerializeBytes(signer.pubKey().toUint8Array()), + BCS.bcsToBytes(struct.recipientAddress), + ], + ), + ); + const txnResponse = await provider.generateSignSubmitWaitForTransaction(signer, payload); + return txnResponse as Types.UserTransaction; +}; diff --git a/ecosystem/typescript/sdk/examples/typescript-esm/package.json b/ecosystem/typescript/sdk/examples/typescript-esm/package.json index 47287199cd80d..10643f6a09e24 100644 --- a/ecosystem/typescript/sdk/examples/typescript-esm/package.json +++ b/ecosystem/typescript/sdk/examples/typescript-esm/package.json @@ -7,7 +7,9 @@ "scripts": { "build": "rm -rf dist/* && tsc -p .", "test": "pnpm build && node ./dist/index.js", - "rotate_key": "ts-node --esm rotate_key.ts" + "offer_capabilities": "ts-node --esm offer_capabilities.ts", + "rotate_key": "ts-node --esm rotate_key.ts", + "multi_ed25519_to_multisig": "ts-node --esm multi_ed25519_to_multisig.ts" }, "keywords": [], "author": "", diff --git a/ecosystem/typescript/sdk/package.json b/ecosystem/typescript/sdk/package.json index e004e9a7f38e9..532f08585e204 100644 --- a/ecosystem/typescript/sdk/package.json +++ b/ecosystem/typescript/sdk/package.json @@ -86,5 +86,5 @@ "typedoc": "^0.23.20", "typescript": "4.8.2" }, - "version": "1.17.0" + "version": "1.18.0" } diff --git a/ecosystem/typescript/sdk/src/indexer/generated/operations.ts b/ecosystem/typescript/sdk/src/indexer/generated/operations.ts index 52358a8db48fc..1f260abbc41b9 100644 --- a/ecosystem/typescript/sdk/src/indexer/generated/operations.ts +++ b/ecosystem/typescript/sdk/src/indexer/generated/operations.ts @@ -1,6 +1,6 @@ import * as Types from './types'; -export type CurrentTokenOwnershipFieldsFragment = { __typename?: 'current_token_ownerships_v2', token_standard: string, is_fungible_v2?: boolean | null, is_soulbound_v2?: boolean | null, property_version_v1: any, table_type_v1?: string | null, token_properties_mutated_v1?: any | null, amount: any, last_transaction_timestamp: 
any, last_transaction_version: any, storage_id: string, owner_address: string, current_token_data?: { __typename?: 'current_token_datas_v2', token_name: string, token_data_id: string, token_uri: string, token_properties: any, supply: any, maximum?: any | null, last_transaction_version: any, last_transaction_timestamp: any, largest_property_version_v1?: any | null, current_collection?: { __typename?: 'current_collections_v2', collection_name: string, creator_address: string, description: string, uri: string, collection_id: string, last_transaction_version: any, current_supply: any, mutable_description?: boolean | null, total_minted_v2?: any | null, table_handle_v1?: string | null, mutable_uri?: boolean | null } | null } | null }; +export type CurrentTokenOwnershipFieldsFragment = { __typename?: 'current_token_ownerships_v2', token_standard: string, token_properties_mutated_v1?: any | null, token_data_id: string, table_type_v1?: string | null, storage_id: string, property_version_v1: any, owner_address: string, last_transaction_version: any, last_transaction_timestamp: any, is_soulbound_v2?: boolean | null, is_fungible_v2?: boolean | null, amount: any, current_token_data?: { __typename?: 'current_token_datas_v2', collection_id: string, description: string, is_fungible_v2?: boolean | null, largest_property_version_v1?: any | null, last_transaction_timestamp: any, last_transaction_version: any, maximum?: any | null, supply: any, token_data_id: string, token_name: string, token_properties: any, token_standard: string, token_uri: string, current_collection?: { __typename?: 'current_collections_v2', collection_id: string, collection_name: string, creator_address: string, current_supply: any, description: string, last_transaction_timestamp: any, last_transaction_version: any, max_supply?: any | null, mutable_description?: boolean | null, mutable_uri?: boolean | null, table_handle_v1?: string | null, token_standard: string, total_minted_v2?: any | null, uri: string } | null } | null }; export type GetAccountCoinsDataQueryVariables = Types.Exact<{ owner_address?: Types.InputMaybe; @@ -9,7 +9,7 @@ export type GetAccountCoinsDataQueryVariables = Types.Exact<{ }>; -export type GetAccountCoinsDataQuery = { __typename?: 'query_root', current_coin_balances: Array<{ __typename?: 'current_coin_balances', amount: any, coin_type: string, coin_info?: { __typename?: 'coin_infos', name: string, decimals: number, symbol: string } | null }> }; +export type GetAccountCoinsDataQuery = { __typename?: 'query_root', current_coin_balances: Array<{ __typename?: 'current_coin_balances', amount: any, coin_type: string, coin_type_hash: string, last_transaction_timestamp: any, last_transaction_version: any, owner_address: string, coin_info?: { __typename?: 'coin_infos', coin_type: string, coin_type_hash: string, creator_address: string, decimals: number, name: string, supply_aggregator_table_handle?: string | null, supply_aggregator_table_key?: string | null, symbol: string, transaction_created_timestamp: any, transaction_version_created: any } | null }> }; export type GetAccountCurrentTokensQueryVariables = Types.Exact<{ address: Types.Scalars['String']; @@ -38,16 +38,17 @@ export type GetAccountTransactionsCountQueryVariables = Types.Exact<{ }>; -export type GetAccountTransactionsCountQuery = { __typename?: 'query_root', move_resources_aggregate: { __typename?: 'move_resources_aggregate', aggregate?: { __typename?: 'move_resources_aggregate_fields', count: number } | null } }; +export type GetAccountTransactionsCountQuery 
= { __typename?: 'query_root', account_transactions_aggregate: { __typename?: 'account_transactions_aggregate', aggregate?: { __typename?: 'account_transactions_aggregate_fields', count: number } | null } }; export type GetAccountTransactionsDataQueryVariables = Types.Exact<{ - address?: Types.InputMaybe; - limit?: Types.InputMaybe; + where_condition: Types.Account_Transactions_Bool_Exp; offset?: Types.InputMaybe; + limit?: Types.InputMaybe; + order_by?: Types.InputMaybe | Types.Account_Transactions_Order_By>; }>; -export type GetAccountTransactionsDataQuery = { __typename?: 'query_root', move_resources: Array<{ __typename?: 'move_resources', transaction_version: any }> }; +export type GetAccountTransactionsDataQuery = { __typename?: 'query_root', account_transactions: Array<{ __typename?: 'account_transactions', transaction_version: any, account_address: string, token_activities_v2: Array<{ __typename?: 'token_activities_v2', after_value?: string | null, before_value?: string | null, entry_function_id_str?: string | null, event_account_address: string, event_index: any, from_address?: string | null, is_fungible_v2?: boolean | null, property_version_v1: any, to_address?: string | null, token_amount: any, token_data_id: string, token_standard: string, transaction_timestamp: any, transaction_version: any, type: string }> }> }; export type GetCollectionDataQueryVariables = Types.Exact<{ where_condition: Types.Current_Collections_V2_Bool_Exp; @@ -57,7 +58,7 @@ export type GetCollectionDataQueryVariables = Types.Exact<{ }>; -export type GetCollectionDataQuery = { __typename?: 'query_root', current_collections_v2: Array<{ __typename?: 'current_collections_v2', collection_id: string, token_standard: string, collection_name: string, creator_address: string, current_supply: any, description: string, uri: string }> }; +export type GetCollectionDataQuery = { __typename?: 'query_root', current_collections_v2: Array<{ __typename?: 'current_collections_v2', collection_id: string, collection_name: string, creator_address: string, current_supply: any, description: string, last_transaction_timestamp: any, last_transaction_version: any, max_supply?: any | null, mutable_description?: boolean | null, mutable_uri?: boolean | null, table_handle_v1?: string | null, token_standard: string, total_minted_v2?: any | null, uri: string }> }; export type GetCollectionsWithOwnedTokensQueryVariables = Types.Exact<{ where_condition: Types.Current_Collection_Ownership_V2_View_Bool_Exp; @@ -67,7 +68,7 @@ export type GetCollectionsWithOwnedTokensQueryVariables = Types.Exact<{ }>; -export type GetCollectionsWithOwnedTokensQuery = { __typename?: 'query_root', current_collection_ownership_v2_view: Array<{ __typename?: 'current_collection_ownership_v2_view', distinct_tokens?: any | null, last_transaction_version?: any | null, current_collection?: { __typename?: 'current_collections_v2', creator_address: string, collection_name: string, token_standard: string, collection_id: string, description: string, table_handle_v1?: string | null, uri: string, total_minted_v2?: any | null, max_supply?: any | null } | null }> }; +export type GetCollectionsWithOwnedTokensQuery = { __typename?: 'query_root', current_collection_ownership_v2_view: Array<{ __typename?: 'current_collection_ownership_v2_view', collection_id?: string | null, collection_name?: string | null, collection_uri?: string | null, creator_address?: string | null, distinct_tokens?: any | null, last_transaction_version?: any | null, owner_address?: string | null, 
single_token_uri?: string | null, current_collection?: { __typename?: 'current_collections_v2', collection_id: string, collection_name: string, creator_address: string, current_supply: any, description: string, last_transaction_timestamp: any, last_transaction_version: any, mutable_description?: boolean | null, max_supply?: any | null, mutable_uri?: boolean | null, table_handle_v1?: string | null, token_standard: string, total_minted_v2?: any | null, uri: string } | null }> }; export type GetDelegatedStakingActivitiesQueryVariables = Types.Exact<{ delegatorAddress?: Types.InputMaybe; @@ -87,7 +88,7 @@ export type GetNumberOfDelegatorsQueryVariables = Types.Exact<{ }>; -export type GetNumberOfDelegatorsQuery = { __typename?: 'query_root', num_active_delegator_per_pool: Array<{ __typename?: 'num_active_delegator_per_pool', num_active_delegator?: any | null }> }; +export type GetNumberOfDelegatorsQuery = { __typename?: 'query_root', num_active_delegator_per_pool: Array<{ __typename?: 'num_active_delegator_per_pool', num_active_delegator?: any | null, pool_address?: string | null }> }; export type GetOwnedTokensQueryVariables = Types.Exact<{ where_condition: Types.Current_Token_Ownerships_V2_Bool_Exp; @@ -97,7 +98,7 @@ export type GetOwnedTokensQueryVariables = Types.Exact<{ }>; -export type GetOwnedTokensQuery = { __typename?: 'query_root', current_token_ownerships_v2: Array<{ __typename?: 'current_token_ownerships_v2', token_standard: string, is_fungible_v2?: boolean | null, is_soulbound_v2?: boolean | null, property_version_v1: any, table_type_v1?: string | null, token_properties_mutated_v1?: any | null, amount: any, last_transaction_timestamp: any, last_transaction_version: any, storage_id: string, owner_address: string, current_token_data?: { __typename?: 'current_token_datas_v2', token_name: string, token_data_id: string, token_uri: string, token_properties: any, supply: any, maximum?: any | null, last_transaction_version: any, last_transaction_timestamp: any, largest_property_version_v1?: any | null, current_collection?: { __typename?: 'current_collections_v2', collection_name: string, creator_address: string, description: string, uri: string, collection_id: string, last_transaction_version: any, current_supply: any, mutable_description?: boolean | null, total_minted_v2?: any | null, table_handle_v1?: string | null, mutable_uri?: boolean | null } | null } | null }> }; +export type GetOwnedTokensQuery = { __typename?: 'query_root', current_token_ownerships_v2: Array<{ __typename?: 'current_token_ownerships_v2', token_standard: string, token_properties_mutated_v1?: any | null, token_data_id: string, table_type_v1?: string | null, storage_id: string, property_version_v1: any, owner_address: string, last_transaction_version: any, last_transaction_timestamp: any, is_soulbound_v2?: boolean | null, is_fungible_v2?: boolean | null, amount: any, current_token_data?: { __typename?: 'current_token_datas_v2', collection_id: string, description: string, is_fungible_v2?: boolean | null, largest_property_version_v1?: any | null, last_transaction_timestamp: any, last_transaction_version: any, maximum?: any | null, supply: any, token_data_id: string, token_name: string, token_properties: any, token_standard: string, token_uri: string, current_collection?: { __typename?: 'current_collections_v2', collection_id: string, collection_name: string, creator_address: string, current_supply: any, description: string, last_transaction_timestamp: any, last_transaction_version: any, max_supply?: any | null, 
mutable_description?: boolean | null, mutable_uri?: boolean | null, table_handle_v1?: string | null, token_standard: string, total_minted_v2?: any | null, uri: string } | null } | null }> }; export type GetOwnedTokensByTokenDataQueryVariables = Types.Exact<{ where_condition: Types.Current_Token_Ownerships_V2_Bool_Exp; @@ -107,7 +108,7 @@ export type GetOwnedTokensByTokenDataQueryVariables = Types.Exact<{ }>; -export type GetOwnedTokensByTokenDataQuery = { __typename?: 'query_root', current_token_ownerships_v2: Array<{ __typename?: 'current_token_ownerships_v2', token_standard: string, is_fungible_v2?: boolean | null, is_soulbound_v2?: boolean | null, property_version_v1: any, table_type_v1?: string | null, token_properties_mutated_v1?: any | null, amount: any, last_transaction_timestamp: any, last_transaction_version: any, storage_id: string, owner_address: string, current_token_data?: { __typename?: 'current_token_datas_v2', token_name: string, token_data_id: string, token_uri: string, token_properties: any, supply: any, maximum?: any | null, last_transaction_version: any, last_transaction_timestamp: any, largest_property_version_v1?: any | null, current_collection?: { __typename?: 'current_collections_v2', collection_name: string, creator_address: string, description: string, uri: string, collection_id: string, last_transaction_version: any, current_supply: any, mutable_description?: boolean | null, total_minted_v2?: any | null, table_handle_v1?: string | null, mutable_uri?: boolean | null } | null } | null }> }; +export type GetOwnedTokensByTokenDataQuery = { __typename?: 'query_root', current_token_ownerships_v2: Array<{ __typename?: 'current_token_ownerships_v2', token_standard: string, token_properties_mutated_v1?: any | null, token_data_id: string, table_type_v1?: string | null, storage_id: string, property_version_v1: any, owner_address: string, last_transaction_version: any, last_transaction_timestamp: any, is_soulbound_v2?: boolean | null, is_fungible_v2?: boolean | null, amount: any, current_token_data?: { __typename?: 'current_token_datas_v2', collection_id: string, description: string, is_fungible_v2?: boolean | null, largest_property_version_v1?: any | null, last_transaction_timestamp: any, last_transaction_version: any, maximum?: any | null, supply: any, token_data_id: string, token_name: string, token_properties: any, token_standard: string, token_uri: string, current_collection?: { __typename?: 'current_collections_v2', collection_id: string, collection_name: string, creator_address: string, current_supply: any, description: string, last_transaction_timestamp: any, last_transaction_version: any, max_supply?: any | null, mutable_description?: boolean | null, mutable_uri?: boolean | null, table_handle_v1?: string | null, token_standard: string, total_minted_v2?: any | null, uri: string } | null } | null }> }; export type GetTokenActivitiesQueryVariables = Types.Exact<{ where_condition: Types.Token_Activities_V2_Bool_Exp; @@ -134,7 +135,7 @@ export type GetTokenCurrentOwnerDataQueryVariables = Types.Exact<{ }>; -export type GetTokenCurrentOwnerDataQuery = { __typename?: 'query_root', current_token_ownerships_v2: Array<{ __typename?: 'current_token_ownerships_v2', owner_address: string }> }; +export type GetTokenCurrentOwnerDataQuery = { __typename?: 'query_root', current_token_ownerships_v2: Array<{ __typename?: 'current_token_ownerships_v2', token_standard: string, token_properties_mutated_v1?: any | null, token_data_id: string, table_type_v1?: string | null, storage_id: 
string, property_version_v1: any, owner_address: string, last_transaction_version: any, last_transaction_timestamp: any, is_soulbound_v2?: boolean | null, is_fungible_v2?: boolean | null, amount: any, current_token_data?: { __typename?: 'current_token_datas_v2', collection_id: string, description: string, is_fungible_v2?: boolean | null, largest_property_version_v1?: any | null, last_transaction_timestamp: any, last_transaction_version: any, maximum?: any | null, supply: any, token_data_id: string, token_name: string, token_properties: any, token_standard: string, token_uri: string, current_collection?: { __typename?: 'current_collections_v2', collection_id: string, collection_name: string, creator_address: string, current_supply: any, description: string, last_transaction_timestamp: any, last_transaction_version: any, max_supply?: any | null, mutable_description?: boolean | null, mutable_uri?: boolean | null, table_handle_v1?: string | null, token_standard: string, total_minted_v2?: any | null, uri: string } | null } | null }> }; export type GetTokenDataQueryVariables = Types.Exact<{ where_condition?: Types.InputMaybe; @@ -144,7 +145,7 @@ export type GetTokenDataQueryVariables = Types.Exact<{ }>; -export type GetTokenDataQuery = { __typename?: 'query_root', current_token_datas_v2: Array<{ __typename?: 'current_token_datas_v2', token_data_id: string, token_name: string, token_uri: string, token_properties: any, token_standard: string, largest_property_version_v1?: any | null, maximum?: any | null, is_fungible_v2?: boolean | null, supply: any, last_transaction_version: any, last_transaction_timestamp: any, current_collection?: { __typename?: 'current_collections_v2', collection_id: string, collection_name: string, creator_address: string, uri: string, current_supply: any } | null }> }; +export type GetTokenDataQuery = { __typename?: 'query_root', current_token_datas_v2: Array<{ __typename?: 'current_token_datas_v2', collection_id: string, description: string, is_fungible_v2?: boolean | null, largest_property_version_v1?: any | null, last_transaction_timestamp: any, last_transaction_version: any, maximum?: any | null, supply: any, token_data_id: string, token_name: string, token_properties: any, token_standard: string, token_uri: string, current_collection?: { __typename?: 'current_collections_v2', collection_id: string, collection_name: string, creator_address: string, current_supply: any, description: string, last_transaction_timestamp: any, last_transaction_version: any, max_supply?: any | null, mutable_description?: boolean | null, mutable_uri?: boolean | null, table_handle_v1?: string | null, token_standard: string, total_minted_v2?: any | null, uri: string } | null }> }; export type GetTokenOwnedFromCollectionQueryVariables = Types.Exact<{ where_condition: Types.Current_Token_Ownerships_V2_Bool_Exp; @@ -154,7 +155,7 @@ export type GetTokenOwnedFromCollectionQueryVariables = Types.Exact<{ }>; -export type GetTokenOwnedFromCollectionQuery = { __typename?: 'query_root', current_token_ownerships_v2: Array<{ __typename?: 'current_token_ownerships_v2', token_standard: string, is_fungible_v2?: boolean | null, is_soulbound_v2?: boolean | null, property_version_v1: any, table_type_v1?: string | null, token_properties_mutated_v1?: any | null, amount: any, last_transaction_timestamp: any, last_transaction_version: any, storage_id: string, owner_address: string, current_token_data?: { __typename?: 'current_token_datas_v2', token_name: string, token_data_id: string, token_uri: string, 
token_properties: any, supply: any, maximum?: any | null, last_transaction_version: any, last_transaction_timestamp: any, largest_property_version_v1?: any | null, current_collection?: { __typename?: 'current_collections_v2', collection_name: string, creator_address: string, description: string, uri: string, collection_id: string, last_transaction_version: any, current_supply: any, mutable_description?: boolean | null, total_minted_v2?: any | null, table_handle_v1?: string | null, mutable_uri?: boolean | null } | null } | null }> }; +export type GetTokenOwnedFromCollectionQuery = { __typename?: 'query_root', current_token_ownerships_v2: Array<{ __typename?: 'current_token_ownerships_v2', token_standard: string, token_properties_mutated_v1?: any | null, token_data_id: string, table_type_v1?: string | null, storage_id: string, property_version_v1: any, owner_address: string, last_transaction_version: any, last_transaction_timestamp: any, is_soulbound_v2?: boolean | null, is_fungible_v2?: boolean | null, amount: any, current_token_data?: { __typename?: 'current_token_datas_v2', collection_id: string, description: string, is_fungible_v2?: boolean | null, largest_property_version_v1?: any | null, last_transaction_timestamp: any, last_transaction_version: any, maximum?: any | null, supply: any, token_data_id: string, token_name: string, token_properties: any, token_standard: string, token_uri: string, current_collection?: { __typename?: 'current_collections_v2', collection_id: string, collection_name: string, creator_address: string, current_supply: any, description: string, last_transaction_timestamp: any, last_transaction_version: any, max_supply?: any | null, mutable_description?: boolean | null, mutable_uri?: boolean | null, table_handle_v1?: string | null, token_standard: string, total_minted_v2?: any | null, uri: string } | null } | null }> }; export type GetTokenOwnersDataQueryVariables = Types.Exact<{ where_condition: Types.Current_Token_Ownerships_V2_Bool_Exp; @@ -164,7 +165,7 @@ export type GetTokenOwnersDataQueryVariables = Types.Exact<{ }>; -export type GetTokenOwnersDataQuery = { __typename?: 'query_root', current_token_ownerships_v2: Array<{ __typename?: 'current_token_ownerships_v2', owner_address: string }> }; +export type GetTokenOwnersDataQuery = { __typename?: 'query_root', current_token_ownerships_v2: Array<{ __typename?: 'current_token_ownerships_v2', token_standard: string, token_properties_mutated_v1?: any | null, token_data_id: string, table_type_v1?: string | null, storage_id: string, property_version_v1: any, owner_address: string, last_transaction_version: any, last_transaction_timestamp: any, is_soulbound_v2?: boolean | null, is_fungible_v2?: boolean | null, amount: any, current_token_data?: { __typename?: 'current_token_datas_v2', collection_id: string, description: string, is_fungible_v2?: boolean | null, largest_property_version_v1?: any | null, last_transaction_timestamp: any, last_transaction_version: any, maximum?: any | null, supply: any, token_data_id: string, token_name: string, token_properties: any, token_standard: string, token_uri: string, current_collection?: { __typename?: 'current_collections_v2', collection_id: string, collection_name: string, creator_address: string, current_supply: any, description: string, last_transaction_timestamp: any, last_transaction_version: any, max_supply?: any | null, mutable_description?: boolean | null, mutable_uri?: boolean | null, table_handle_v1?: string | null, token_standard: string, total_minted_v2?: any | null, 
uri: string } | null } | null }> }; export type GetTopUserTransactionsQueryVariables = Types.Exact<{ limit?: Types.InputMaybe; @@ -174,10 +175,13 @@ export type GetTopUserTransactionsQueryVariables = Types.Exact<{ export type GetTopUserTransactionsQuery = { __typename?: 'query_root', user_transactions: Array<{ __typename?: 'user_transactions', version: any }> }; export type GetUserTransactionsQueryVariables = Types.Exact<{ - limit?: Types.InputMaybe; - start_version?: Types.InputMaybe; + where_condition: Types.User_Transactions_Bool_Exp; offset?: Types.InputMaybe; + limit?: Types.InputMaybe; + order_by?: Types.InputMaybe | Types.User_Transactions_Order_By>; }>; export type GetUserTransactionsQuery = { __typename?: 'query_root', user_transactions: Array<{ __typename?: 'user_transactions', version: any }> }; + +export type TokenActivitiesFieldsFragment = { __typename?: 'token_activities_v2', after_value?: string | null, before_value?: string | null, entry_function_id_str?: string | null, event_account_address: string, event_index: any, from_address?: string | null, is_fungible_v2?: boolean | null, property_version_v1: any, to_address?: string | null, token_amount: any, token_data_id: string, token_standard: string, transaction_timestamp: any, transaction_version: any, type: string }; diff --git a/ecosystem/typescript/sdk/src/indexer/generated/queries.ts b/ecosystem/typescript/sdk/src/indexer/generated/queries.ts index 668155e2f5f2d..77002cdb76ba6 100644 --- a/ecosystem/typescript/sdk/src/indexer/generated/queries.ts +++ b/ecosystem/typescript/sdk/src/indexer/generated/queries.ts @@ -5,38 +5,46 @@ import * as Dom from 'graphql-request/dist/types.dom'; export const CurrentTokenOwnershipFieldsFragmentDoc = ` fragment CurrentTokenOwnershipFields on current_token_ownerships_v2 { token_standard - is_fungible_v2 - is_soulbound_v2 - property_version_v1 - table_type_v1 token_properties_mutated_v1 - amount - last_transaction_timestamp - last_transaction_version + token_data_id + table_type_v1 storage_id + property_version_v1 owner_address + last_transaction_version + last_transaction_timestamp + is_soulbound_v2 + is_fungible_v2 + amount current_token_data { - token_name + collection_id + description + is_fungible_v2 + largest_property_version_v1 + last_transaction_timestamp + last_transaction_version + maximum + supply token_data_id - token_uri + token_name token_properties - supply - maximum - last_transaction_version - last_transaction_timestamp - largest_property_version_v1 + token_standard + token_uri current_collection { + collection_id collection_name creator_address + current_supply description - uri - collection_id + last_transaction_timestamp last_transaction_version - current_supply + max_supply mutable_description - total_minted_v2 - table_handle_v1 mutable_uri + table_handle_v1 + token_standard + total_minted_v2 + uri } } } @@ -63,6 +71,25 @@ export const CollectionDataFieldsFragmentDoc = ` creator_address } `; +export const TokenActivitiesFieldsFragmentDoc = ` + fragment TokenActivitiesFields on token_activities_v2 { + after_value + before_value + entry_function_id_str + event_account_address + event_index + from_address + is_fungible_v2 + property_version_v1 + to_address + token_amount + token_data_id + token_standard + transaction_timestamp + transaction_version + type +} + `; export const GetAccountCoinsData = ` query getAccountCoinsData($owner_address: String, $offset: Int, $limit: Int) { current_coin_balances( @@ -72,10 +99,21 @@ export const GetAccountCoinsData = ` ) { amount 
coin_type + coin_type_hash + last_transaction_timestamp + last_transaction_version + owner_address coin_info { - name + coin_type + coin_type_hash + creator_address decimals + name + supply_aggregator_table_handle + supply_aggregator_table_key symbol + transaction_created_timestamp + transaction_version_created } } } @@ -116,10 +154,7 @@ export const GetAccountTokensCount = ` `; export const GetAccountTransactionsCount = ` query getAccountTransactionsCount($address: String) { - move_resources_aggregate( - where: {address: {_eq: $address}} - distinct_on: transaction_version - ) { + account_transactions_aggregate(where: {account_address: {_eq: $address}}) { aggregate { count } @@ -127,18 +162,21 @@ export const GetAccountTransactionsCount = ` } `; export const GetAccountTransactionsData = ` - query getAccountTransactionsData($address: String, $limit: Int, $offset: Int) { - move_resources( - where: {address: {_eq: $address}} - order_by: {transaction_version: desc} - distinct_on: transaction_version + query getAccountTransactionsData($where_condition: account_transactions_bool_exp!, $offset: Int, $limit: Int, $order_by: [account_transactions_order_by!]) { + account_transactions( + where: $where_condition + order_by: $order_by limit: $limit offset: $offset ) { + token_activities_v2 { + ...TokenActivitiesFields + } transaction_version + account_address } } - `; + ${TokenActivitiesFieldsFragmentDoc}`; export const GetCollectionData = ` query getCollectionData($where_condition: current_collections_v2_bool_exp!, $offset: Int, $limit: Int, $order_by: [current_collections_v2_order_by!]) { current_collections_v2( @@ -148,11 +186,18 @@ export const GetCollectionData = ` order_by: $order_by ) { collection_id - token_standard collection_name creator_address current_supply description + last_transaction_timestamp + last_transaction_version + max_supply + mutable_description + mutable_uri + table_handle_v1 + token_standard + total_minted_v2 uri } } @@ -166,18 +211,29 @@ export const GetCollectionsWithOwnedTokens = ` order_by: $order_by ) { current_collection { - creator_address - collection_name - token_standard collection_id + collection_name + creator_address + current_supply description + last_transaction_timestamp + last_transaction_version + mutable_description + max_supply + mutable_uri table_handle_v1 - uri + token_standard total_minted_v2 - max_supply + uri } + collection_id + collection_name + collection_uri + creator_address distinct_tokens last_transaction_version + owner_address + single_token_uri } } `; @@ -209,6 +265,7 @@ export const GetNumberOfDelegators = ` distinct_on: pool_address ) { num_active_delegator + pool_address } } `; @@ -244,24 +301,10 @@ export const GetTokenActivities = ` offset: $offset limit: $limit ) { - after_value - before_value - entry_function_id_str - event_account_address - event_index - from_address - is_fungible_v2 - property_version_v1 - to_address - token_amount - token_data_id - token_standard - transaction_timestamp - transaction_version - type + ...TokenActivitiesFields } } - `; + ${TokenActivitiesFieldsFragmentDoc}`; export const GetTokenActivitiesCount = ` query getTokenActivitiesCount($token_id: String) { token_activities_v2_aggregate(where: {token_data_id: {_eq: $token_id}}) { @@ -279,10 +322,10 @@ export const GetTokenCurrentOwnerData = ` limit: $limit order_by: $order_by ) { - owner_address + ...CurrentTokenOwnershipFields } } - `; + ${CurrentTokenOwnershipFieldsFragmentDoc}`; export const GetTokenData = ` query getTokenData($where_condition: 
current_token_datas_v2_bool_exp, $offset: Int, $limit: Int, $order_by: [current_token_datas_v2_order_by!]) { current_token_datas_v2( @@ -291,23 +334,34 @@ export const GetTokenData = ` limit: $limit order_by: $order_by ) { + collection_id + description + is_fungible_v2 + largest_property_version_v1 + last_transaction_timestamp + last_transaction_version + maximum + supply token_data_id token_name - token_uri token_properties token_standard - largest_property_version_v1 - maximum - is_fungible_v2 - supply - last_transaction_version - last_transaction_timestamp + token_uri current_collection { collection_id collection_name creator_address - uri current_supply + description + last_transaction_timestamp + last_transaction_version + max_supply + mutable_description + mutable_uri + table_handle_v1 + token_standard + total_minted_v2 + uri } } } @@ -332,10 +386,10 @@ export const GetTokenOwnersData = ` limit: $limit order_by: $order_by ) { - owner_address + ...CurrentTokenOwnershipFields } } - `; + ${CurrentTokenOwnershipFieldsFragmentDoc}`; export const GetTopUserTransactions = ` query getTopUserTransactions($limit: Int) { user_transactions(limit: $limit, order_by: {version: desc}) { @@ -344,11 +398,11 @@ export const GetTopUserTransactions = ` } `; export const GetUserTransactions = ` - query getUserTransactions($limit: Int, $start_version: bigint, $offset: Int) { + query getUserTransactions($where_condition: user_transactions_bool_exp!, $offset: Int, $limit: Int, $order_by: [user_transactions_order_by!]) { user_transactions( + order_by: $order_by + where: $where_condition limit: $limit - order_by: {version: desc} - where: {version: {_lte: $start_version}} offset: $offset ) { version @@ -375,7 +429,7 @@ export function getSdk(client: GraphQLClient, withWrapper: SdkFunctionWrapper = getAccountTransactionsCount(variables?: Types.GetAccountTransactionsCountQueryVariables, requestHeaders?: Dom.RequestInit["headers"]): Promise { return withWrapper((wrappedRequestHeaders) => client.request(GetAccountTransactionsCount, variables, {...requestHeaders, ...wrappedRequestHeaders}), 'getAccountTransactionsCount', 'query'); }, - getAccountTransactionsData(variables?: Types.GetAccountTransactionsDataQueryVariables, requestHeaders?: Dom.RequestInit["headers"]): Promise { + getAccountTransactionsData(variables: Types.GetAccountTransactionsDataQueryVariables, requestHeaders?: Dom.RequestInit["headers"]): Promise { return withWrapper((wrappedRequestHeaders) => client.request(GetAccountTransactionsData, variables, {...requestHeaders, ...wrappedRequestHeaders}), 'getAccountTransactionsData', 'query'); }, getCollectionData(variables: Types.GetCollectionDataQueryVariables, requestHeaders?: Dom.RequestInit["headers"]): Promise { @@ -420,7 +474,7 @@ export function getSdk(client: GraphQLClient, withWrapper: SdkFunctionWrapper = getTopUserTransactions(variables?: Types.GetTopUserTransactionsQueryVariables, requestHeaders?: Dom.RequestInit["headers"]): Promise { return withWrapper((wrappedRequestHeaders) => client.request(GetTopUserTransactions, variables, {...requestHeaders, ...wrappedRequestHeaders}), 'getTopUserTransactions', 'query'); }, - getUserTransactions(variables?: Types.GetUserTransactionsQueryVariables, requestHeaders?: Dom.RequestInit["headers"]): Promise { + getUserTransactions(variables: Types.GetUserTransactionsQueryVariables, requestHeaders?: Dom.RequestInit["headers"]): Promise { return withWrapper((wrappedRequestHeaders) => client.request(GetUserTransactions, variables, {...requestHeaders, 
...wrappedRequestHeaders}), 'getUserTransactions', 'query'); } }; diff --git a/ecosystem/typescript/sdk/src/indexer/generated/types.ts b/ecosystem/typescript/sdk/src/indexer/generated/types.ts index 5b7600a4f25f4..8e9b6b57d86c1 100644 --- a/ecosystem/typescript/sdk/src/indexer/generated/types.ts +++ b/ecosystem/typescript/sdk/src/indexer/generated/types.ts @@ -1852,11 +1852,15 @@ export type Current_Collection_Datas_Stream_Cursor_Value_Input = { export type Current_Collection_Ownership_V2_View = { __typename?: 'current_collection_ownership_v2_view'; collection_id?: Maybe; + collection_name?: Maybe; + collection_uri?: Maybe; + creator_address?: Maybe; /** An object relationship */ current_collection?: Maybe; distinct_tokens?: Maybe; last_transaction_version?: Maybe; owner_address?: Maybe; + single_token_uri?: Maybe; }; /** aggregated selection of "current_collection_ownership_v2_view" */ @@ -1902,37 +1906,53 @@ export type Current_Collection_Ownership_V2_View_Bool_Exp = { _not?: InputMaybe; _or?: InputMaybe>; collection_id?: InputMaybe; + collection_name?: InputMaybe; + collection_uri?: InputMaybe; + creator_address?: InputMaybe; current_collection?: InputMaybe; distinct_tokens?: InputMaybe; last_transaction_version?: InputMaybe; owner_address?: InputMaybe; + single_token_uri?: InputMaybe; }; /** aggregate max on columns */ export type Current_Collection_Ownership_V2_View_Max_Fields = { __typename?: 'current_collection_ownership_v2_view_max_fields'; collection_id?: Maybe; + collection_name?: Maybe; + collection_uri?: Maybe; + creator_address?: Maybe; distinct_tokens?: Maybe; last_transaction_version?: Maybe; owner_address?: Maybe; + single_token_uri?: Maybe; }; /** aggregate min on columns */ export type Current_Collection_Ownership_V2_View_Min_Fields = { __typename?: 'current_collection_ownership_v2_view_min_fields'; collection_id?: Maybe; + collection_name?: Maybe; + collection_uri?: Maybe; + creator_address?: Maybe; distinct_tokens?: Maybe; last_transaction_version?: Maybe; owner_address?: Maybe; + single_token_uri?: Maybe; }; /** Ordering options when selecting data from "current_collection_ownership_v2_view". 
*/ export type Current_Collection_Ownership_V2_View_Order_By = { collection_id?: InputMaybe; + collection_name?: InputMaybe; + collection_uri?: InputMaybe; + creator_address?: InputMaybe; current_collection?: InputMaybe; distinct_tokens?: InputMaybe; last_transaction_version?: InputMaybe; owner_address?: InputMaybe; + single_token_uri?: InputMaybe; }; /** select columns of table "current_collection_ownership_v2_view" */ @@ -1940,11 +1960,19 @@ export enum Current_Collection_Ownership_V2_View_Select_Column { /** column name */ CollectionId = 'collection_id', /** column name */ + CollectionName = 'collection_name', + /** column name */ + CollectionUri = 'collection_uri', + /** column name */ + CreatorAddress = 'creator_address', + /** column name */ DistinctTokens = 'distinct_tokens', /** column name */ LastTransactionVersion = 'last_transaction_version', /** column name */ - OwnerAddress = 'owner_address' + OwnerAddress = 'owner_address', + /** column name */ + SingleTokenUri = 'single_token_uri' } /** aggregate stddev on columns */ @@ -1979,9 +2007,13 @@ export type Current_Collection_Ownership_V2_View_Stream_Cursor_Input = { /** Initial value of the column from where the streaming should start */ export type Current_Collection_Ownership_V2_View_Stream_Cursor_Value_Input = { collection_id?: InputMaybe; + collection_name?: InputMaybe; + collection_uri?: InputMaybe; + creator_address?: InputMaybe; distinct_tokens?: InputMaybe; last_transaction_version?: InputMaybe; owner_address?: InputMaybe; + single_token_uri?: InputMaybe; }; /** aggregate sum on columns */ @@ -2012,74 +2044,6 @@ export type Current_Collection_Ownership_V2_View_Variance_Fields = { last_transaction_version?: Maybe; }; -/** columns and relationships of "current_collection_ownership_view" */ -export type Current_Collection_Ownership_View = { - __typename?: 'current_collection_ownership_view'; - collection_data_id_hash?: Maybe; - collection_name?: Maybe; - creator_address?: Maybe; - distinct_tokens?: Maybe; - last_transaction_version?: Maybe; - owner_address?: Maybe; -}; - -/** Boolean expression to filter rows from the table "current_collection_ownership_view". All fields are combined with a logical 'AND'. */ -export type Current_Collection_Ownership_View_Bool_Exp = { - _and?: InputMaybe>; - _not?: InputMaybe; - _or?: InputMaybe>; - collection_data_id_hash?: InputMaybe; - collection_name?: InputMaybe; - creator_address?: InputMaybe; - distinct_tokens?: InputMaybe; - last_transaction_version?: InputMaybe; - owner_address?: InputMaybe; -}; - -/** Ordering options when selecting data from "current_collection_ownership_view". 
*/ -export type Current_Collection_Ownership_View_Order_By = { - collection_data_id_hash?: InputMaybe; - collection_name?: InputMaybe; - creator_address?: InputMaybe; - distinct_tokens?: InputMaybe; - last_transaction_version?: InputMaybe; - owner_address?: InputMaybe; -}; - -/** select columns of table "current_collection_ownership_view" */ -export enum Current_Collection_Ownership_View_Select_Column { - /** column name */ - CollectionDataIdHash = 'collection_data_id_hash', - /** column name */ - CollectionName = 'collection_name', - /** column name */ - CreatorAddress = 'creator_address', - /** column name */ - DistinctTokens = 'distinct_tokens', - /** column name */ - LastTransactionVersion = 'last_transaction_version', - /** column name */ - OwnerAddress = 'owner_address' -} - -/** Streaming cursor of the table "current_collection_ownership_view" */ -export type Current_Collection_Ownership_View_Stream_Cursor_Input = { - /** Stream column input with initial value */ - initial_value: Current_Collection_Ownership_View_Stream_Cursor_Value_Input; - /** cursor ordering */ - ordering?: InputMaybe; -}; - -/** Initial value of the column from where the streaming should start */ -export type Current_Collection_Ownership_View_Stream_Cursor_Value_Input = { - collection_data_id_hash?: InputMaybe; - collection_name?: InputMaybe; - creator_address?: InputMaybe; - distinct_tokens?: InputMaybe; - last_transaction_version?: InputMaybe; - owner_address?: InputMaybe; -}; - /** columns and relationships of "current_collections_v2" */ export type Current_Collections_V2 = { __typename?: 'current_collections_v2'; @@ -4112,6 +4076,7 @@ export type Move_Resources_Variance_Fields = { export type Nft_Marketplace_V2_Current_Nft_Marketplace_Auctions = { __typename?: 'nft_marketplace_v2_current_nft_marketplace_auctions'; buy_it_now_price?: Maybe; + coin_type?: Maybe; collection_id: Scalars['String']; contract_address: Scalars['String']; current_bid_price?: Maybe; @@ -4138,6 +4103,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Auctions_Bool_Exp = { _not?: InputMaybe; _or?: InputMaybe>; buy_it_now_price?: InputMaybe; + coin_type?: InputMaybe; collection_id?: InputMaybe; contract_address?: InputMaybe; current_bid_price?: InputMaybe; @@ -4160,6 +4126,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Auctions_Bool_Exp = { /** Ordering options when selecting data from "nft_marketplace_v2.current_nft_marketplace_auctions". 
*/ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Auctions_Order_By = { buy_it_now_price?: InputMaybe; + coin_type?: InputMaybe; collection_id?: InputMaybe; contract_address?: InputMaybe; current_bid_price?: InputMaybe; @@ -4184,6 +4151,8 @@ export enum Nft_Marketplace_V2_Current_Nft_Marketplace_Auctions_Select_Column { /** column name */ BuyItNowPrice = 'buy_it_now_price', /** column name */ + CoinType = 'coin_type', + /** column name */ CollectionId = 'collection_id', /** column name */ ContractAddress = 'contract_address', @@ -4230,6 +4199,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Auctions_Stream_Cursor_In /** Initial value of the column from where the streaming should start */ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Auctions_Stream_Cursor_Value_Input = { buy_it_now_price?: InputMaybe; + coin_type?: InputMaybe; collection_id?: InputMaybe; contract_address?: InputMaybe; current_bid_price?: InputMaybe; @@ -4253,6 +4223,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Auctions_Stream_Cursor_Va export type Nft_Marketplace_V2_Current_Nft_Marketplace_Collection_Offers = { __typename?: 'nft_marketplace_v2_current_nft_marketplace_collection_offers'; buyer: Scalars['String']; + coin_type?: Maybe; collection_id: Scalars['String']; collection_offer_id: Scalars['String']; contract_address: Scalars['String']; @@ -4275,6 +4246,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Collection_Offers_Bool_Ex _not?: InputMaybe; _or?: InputMaybe>; buyer?: InputMaybe; + coin_type?: InputMaybe; collection_id?: InputMaybe; collection_offer_id?: InputMaybe; contract_address?: InputMaybe; @@ -4293,6 +4265,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Collection_Offers_Bool_Ex /** Ordering options when selecting data from "nft_marketplace_v2.current_nft_marketplace_collection_offers". 
*/ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Collection_Offers_Order_By = { buyer?: InputMaybe; + coin_type?: InputMaybe; collection_id?: InputMaybe; collection_offer_id?: InputMaybe; contract_address?: InputMaybe; @@ -4313,6 +4286,8 @@ export enum Nft_Marketplace_V2_Current_Nft_Marketplace_Collection_Offers_Select_ /** column name */ Buyer = 'buyer', /** column name */ + CoinType = 'coin_type', + /** column name */ CollectionId = 'collection_id', /** column name */ CollectionOfferId = 'collection_offer_id', @@ -4351,6 +4326,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Collection_Offers_Stream_ /** Initial value of the column from where the streaming should start */ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Collection_Offers_Stream_Cursor_Value_Input = { buyer?: InputMaybe; + coin_type?: InputMaybe; collection_id?: InputMaybe; collection_offer_id?: InputMaybe; contract_address?: InputMaybe; @@ -4369,6 +4345,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Collection_Offers_Stream_ /** columns and relationships of "nft_marketplace_v2.current_nft_marketplace_listings" */ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Listings = { __typename?: 'nft_marketplace_v2_current_nft_marketplace_listings'; + coin_type?: Maybe; collection_id: Scalars['String']; contract_address: Scalars['String']; current_token_data?: Maybe; @@ -4391,6 +4368,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Listings_Bool_Exp = { _and?: InputMaybe>; _not?: InputMaybe; _or?: InputMaybe>; + coin_type?: InputMaybe; collection_id?: InputMaybe; contract_address?: InputMaybe; entry_function_id_str?: InputMaybe; @@ -4409,6 +4387,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Listings_Bool_Exp = { /** Ordering options when selecting data from "nft_marketplace_v2.current_nft_marketplace_listings". 
*/ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Listings_Order_By = { + coin_type?: InputMaybe; collection_id?: InputMaybe; contract_address?: InputMaybe; entry_function_id_str?: InputMaybe; @@ -4427,6 +4406,8 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Listings_Order_By = { /** select columns of table "nft_marketplace_v2.current_nft_marketplace_listings" */ export enum Nft_Marketplace_V2_Current_Nft_Marketplace_Listings_Select_Column { + /** column name */ + CoinType = 'coin_type', /** column name */ CollectionId = 'collection_id', /** column name */ @@ -4467,6 +4448,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Listings_Stream_Cursor_In /** Initial value of the column from where the streaming should start */ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Listings_Stream_Cursor_Value_Input = { + coin_type?: InputMaybe; collection_id?: InputMaybe; contract_address?: InputMaybe; entry_function_id_str?: InputMaybe; @@ -4487,6 +4469,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Listings_Stream_Cursor_Va export type Nft_Marketplace_V2_Current_Nft_Marketplace_Token_Offers = { __typename?: 'nft_marketplace_v2_current_nft_marketplace_token_offers'; buyer: Scalars['String']; + coin_type?: Maybe; collection_id: Scalars['String']; contract_address: Scalars['String']; current_token_data?: Maybe; @@ -4510,6 +4493,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Token_Offers_Bool_Exp = { _not?: InputMaybe; _or?: InputMaybe>; buyer?: InputMaybe; + coin_type?: InputMaybe; collection_id?: InputMaybe; contract_address?: InputMaybe; entry_function_id_str?: InputMaybe; @@ -4529,6 +4513,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Token_Offers_Bool_Exp = { /** Ordering options when selecting data from "nft_marketplace_v2.current_nft_marketplace_token_offers". 
*/ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Token_Offers_Order_By = { buyer?: InputMaybe; + coin_type?: InputMaybe; collection_id?: InputMaybe; contract_address?: InputMaybe; entry_function_id_str?: InputMaybe; @@ -4550,6 +4535,8 @@ export enum Nft_Marketplace_V2_Current_Nft_Marketplace_Token_Offers_Select_Colum /** column name */ Buyer = 'buyer', /** column name */ + CoinType = 'coin_type', + /** column name */ CollectionId = 'collection_id', /** column name */ ContractAddress = 'contract_address', @@ -4590,6 +4577,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Token_Offers_Stream_Curso /** Initial value of the column from where the streaming should start */ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Token_Offers_Stream_Cursor_Value_Input = { buyer?: InputMaybe; + coin_type?: InputMaybe; collection_id?: InputMaybe; contract_address?: InputMaybe; entry_function_id_str?: InputMaybe; @@ -4610,6 +4598,7 @@ export type Nft_Marketplace_V2_Current_Nft_Marketplace_Token_Offers_Stream_Curso export type Nft_Marketplace_V2_Nft_Marketplace_Activities = { __typename?: 'nft_marketplace_v2_nft_marketplace_activities'; buyer?: Maybe; + coin_type?: Maybe; collection_id: Scalars['String']; collection_name: Scalars['String']; contract_address: Scalars['String']; @@ -4632,12 +4621,52 @@ export type Nft_Marketplace_V2_Nft_Marketplace_Activities = { transaction_version: Scalars['bigint']; }; +/** aggregated selection of "nft_marketplace_v2.nft_marketplace_activities" */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Aggregate = { + __typename?: 'nft_marketplace_v2_nft_marketplace_activities_aggregate'; + aggregate?: Maybe; + nodes: Array; +}; + +/** aggregate fields of "nft_marketplace_v2.nft_marketplace_activities" */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Aggregate_Fields = { + __typename?: 'nft_marketplace_v2_nft_marketplace_activities_aggregate_fields'; + avg?: Maybe; + count: Scalars['Int']; + max?: Maybe; + min?: Maybe; + stddev?: Maybe; + stddev_pop?: Maybe; + stddev_samp?: Maybe; + sum?: Maybe; + var_pop?: Maybe; + var_samp?: Maybe; + variance?: Maybe; +}; + + +/** aggregate fields of "nft_marketplace_v2.nft_marketplace_activities" */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Aggregate_FieldsCountArgs = { + columns?: InputMaybe>; + distinct?: InputMaybe; +}; + +/** aggregate avg on columns */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Avg_Fields = { + __typename?: 'nft_marketplace_v2_nft_marketplace_activities_avg_fields'; + event_index?: Maybe; + price?: Maybe; + token_amount?: Maybe; + transaction_version?: Maybe; +}; + /** Boolean expression to filter rows from the table "nft_marketplace_v2.nft_marketplace_activities". All fields are combined with a logical 'AND'. 
*/ export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Bool_Exp = { _and?: InputMaybe>; _not?: InputMaybe; _or?: InputMaybe>; buyer?: InputMaybe; + coin_type?: InputMaybe; collection_id?: InputMaybe; collection_name?: InputMaybe; contract_address?: InputMaybe; @@ -4659,9 +4688,62 @@ export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Bool_Exp = { transaction_version?: InputMaybe; }; +/** aggregate max on columns */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Max_Fields = { + __typename?: 'nft_marketplace_v2_nft_marketplace_activities_max_fields'; + buyer?: Maybe; + coin_type?: Maybe; + collection_id?: Maybe; + collection_name?: Maybe; + contract_address?: Maybe; + creator_address?: Maybe; + entry_function_id_str?: Maybe; + event_index?: Maybe; + event_type?: Maybe; + fee_schedule_id?: Maybe; + marketplace?: Maybe; + offer_or_listing_id?: Maybe; + price?: Maybe; + property_version?: Maybe; + seller?: Maybe; + token_amount?: Maybe; + token_data_id?: Maybe; + token_name?: Maybe; + token_standard?: Maybe; + transaction_timestamp?: Maybe; + transaction_version?: Maybe; +}; + +/** aggregate min on columns */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Min_Fields = { + __typename?: 'nft_marketplace_v2_nft_marketplace_activities_min_fields'; + buyer?: Maybe; + coin_type?: Maybe; + collection_id?: Maybe; + collection_name?: Maybe; + contract_address?: Maybe; + creator_address?: Maybe; + entry_function_id_str?: Maybe; + event_index?: Maybe; + event_type?: Maybe; + fee_schedule_id?: Maybe; + marketplace?: Maybe; + offer_or_listing_id?: Maybe; + price?: Maybe; + property_version?: Maybe; + seller?: Maybe; + token_amount?: Maybe; + token_data_id?: Maybe; + token_name?: Maybe; + token_standard?: Maybe; + transaction_timestamp?: Maybe; + transaction_version?: Maybe; +}; + /** Ordering options when selecting data from "nft_marketplace_v2.nft_marketplace_activities". 
*/ export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Order_By = { buyer?: InputMaybe; + coin_type?: InputMaybe; collection_id?: InputMaybe; collection_name?: InputMaybe; contract_address?: InputMaybe; @@ -4688,6 +4770,8 @@ export enum Nft_Marketplace_V2_Nft_Marketplace_Activities_Select_Column { /** column name */ Buyer = 'buyer', /** column name */ + CoinType = 'coin_type', + /** column name */ CollectionId = 'collection_id', /** column name */ CollectionName = 'collection_name', @@ -4727,6 +4811,33 @@ export enum Nft_Marketplace_V2_Nft_Marketplace_Activities_Select_Column { TransactionVersion = 'transaction_version' } +/** aggregate stddev on columns */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Stddev_Fields = { + __typename?: 'nft_marketplace_v2_nft_marketplace_activities_stddev_fields'; + event_index?: Maybe; + price?: Maybe; + token_amount?: Maybe; + transaction_version?: Maybe; +}; + +/** aggregate stddev_pop on columns */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Stddev_Pop_Fields = { + __typename?: 'nft_marketplace_v2_nft_marketplace_activities_stddev_pop_fields'; + event_index?: Maybe; + price?: Maybe; + token_amount?: Maybe; + transaction_version?: Maybe; +}; + +/** aggregate stddev_samp on columns */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Stddev_Samp_Fields = { + __typename?: 'nft_marketplace_v2_nft_marketplace_activities_stddev_samp_fields'; + event_index?: Maybe; + price?: Maybe; + token_amount?: Maybe; + transaction_version?: Maybe; +}; + /** Streaming cursor of the table "nft_marketplace_v2_nft_marketplace_activities" */ export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Stream_Cursor_Input = { /** Stream column input with initial value */ @@ -4738,6 +4849,7 @@ export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Stream_Cursor_Input = /** Initial value of the column from where the streaming should start */ export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Stream_Cursor_Value_Input = { buyer?: InputMaybe; + coin_type?: InputMaybe; collection_id?: InputMaybe; collection_name?: InputMaybe; contract_address?: InputMaybe; @@ -4759,6 +4871,267 @@ export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Stream_Cursor_Value_In transaction_version?: InputMaybe; }; +/** aggregate sum on columns */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Sum_Fields = { + __typename?: 'nft_marketplace_v2_nft_marketplace_activities_sum_fields'; + event_index?: Maybe; + price?: Maybe; + token_amount?: Maybe; + transaction_version?: Maybe; +}; + +/** aggregate var_pop on columns */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Var_Pop_Fields = { + __typename?: 'nft_marketplace_v2_nft_marketplace_activities_var_pop_fields'; + event_index?: Maybe; + price?: Maybe; + token_amount?: Maybe; + transaction_version?: Maybe; +}; + +/** aggregate var_samp on columns */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Var_Samp_Fields = { + __typename?: 'nft_marketplace_v2_nft_marketplace_activities_var_samp_fields'; + event_index?: Maybe; + price?: Maybe; + token_amount?: Maybe; + transaction_version?: Maybe; +}; + +/** aggregate variance on columns */ +export type Nft_Marketplace_V2_Nft_Marketplace_Activities_Variance_Fields = { + __typename?: 'nft_marketplace_v2_nft_marketplace_activities_variance_fields'; + event_index?: Maybe; + price?: Maybe; + token_amount?: Maybe; + transaction_version?: Maybe; +}; + +/** columns and relationships of "nft_marketplace_v2_top_collections" */ +export type 
Nft_Marketplace_V2_Top_Collections = { + __typename?: 'nft_marketplace_v2_top_collections'; + collection_id?: Maybe; + current_collection?: Maybe; + price?: Maybe; +}; + +/** Boolean expression to filter rows from the table "nft_marketplace_v2_top_collections". All fields are combined with a logical 'AND'. */ +export type Nft_Marketplace_V2_Top_Collections_Bool_Exp = { + _and?: InputMaybe>; + _not?: InputMaybe; + _or?: InputMaybe>; + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** Ordering options when selecting data from "nft_marketplace_v2_top_collections". */ +export type Nft_Marketplace_V2_Top_Collections_Order_By = { + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** select columns of table "nft_marketplace_v2_top_collections" */ +export enum Nft_Marketplace_V2_Top_Collections_Select_Column { + /** column name */ + CollectionId = 'collection_id', + /** column name */ + Price = 'price' +} + +/** Streaming cursor of the table "nft_marketplace_v2_top_collections" */ +export type Nft_Marketplace_V2_Top_Collections_Stream_Cursor_Input = { + /** Stream column input with initial value */ + initial_value: Nft_Marketplace_V2_Top_Collections_Stream_Cursor_Value_Input; + /** cursor ordering */ + ordering?: InputMaybe; +}; + +/** Initial value of the column from where the streaming should start */ +export type Nft_Marketplace_V2_Top_Collections_Stream_Cursor_Value_Input = { + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** columns and relationships of "nft_marketplace_v2_top_collections_token_v1_24h" */ +export type Nft_Marketplace_V2_Top_Collections_Token_V1_24h = { + __typename?: 'nft_marketplace_v2_top_collections_token_v1_24h'; + collection_id?: Maybe; + current_collection?: Maybe; + price?: Maybe; +}; + +/** Boolean expression to filter rows from the table "nft_marketplace_v2_top_collections_token_v1_24h". All fields are combined with a logical 'AND'. */ +export type Nft_Marketplace_V2_Top_Collections_Token_V1_24h_Bool_Exp = { + _and?: InputMaybe>; + _not?: InputMaybe; + _or?: InputMaybe>; + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** Ordering options when selecting data from "nft_marketplace_v2_top_collections_token_v1_24h". 
*/ +export type Nft_Marketplace_V2_Top_Collections_Token_V1_24h_Order_By = { + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** select columns of table "nft_marketplace_v2_top_collections_token_v1_24h" */ +export enum Nft_Marketplace_V2_Top_Collections_Token_V1_24h_Select_Column { + /** column name */ + CollectionId = 'collection_id', + /** column name */ + Price = 'price' +} + +/** Streaming cursor of the table "nft_marketplace_v2_top_collections_token_v1_24h" */ +export type Nft_Marketplace_V2_Top_Collections_Token_V1_24h_Stream_Cursor_Input = { + /** Stream column input with initial value */ + initial_value: Nft_Marketplace_V2_Top_Collections_Token_V1_24h_Stream_Cursor_Value_Input; + /** cursor ordering */ + ordering?: InputMaybe; +}; + +/** Initial value of the column from where the streaming should start */ +export type Nft_Marketplace_V2_Top_Collections_Token_V1_24h_Stream_Cursor_Value_Input = { + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** columns and relationships of "nft_marketplace_v2_top_collections_token_v1_48h" */ +export type Nft_Marketplace_V2_Top_Collections_Token_V1_48h = { + __typename?: 'nft_marketplace_v2_top_collections_token_v1_48h'; + collection_id?: Maybe; + current_collection?: Maybe; + price?: Maybe; +}; + +/** Boolean expression to filter rows from the table "nft_marketplace_v2_top_collections_token_v1_48h". All fields are combined with a logical 'AND'. */ +export type Nft_Marketplace_V2_Top_Collections_Token_V1_48h_Bool_Exp = { + _and?: InputMaybe>; + _not?: InputMaybe; + _or?: InputMaybe>; + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** Ordering options when selecting data from "nft_marketplace_v2_top_collections_token_v1_48h". */ +export type Nft_Marketplace_V2_Top_Collections_Token_V1_48h_Order_By = { + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** select columns of table "nft_marketplace_v2_top_collections_token_v1_48h" */ +export enum Nft_Marketplace_V2_Top_Collections_Token_V1_48h_Select_Column { + /** column name */ + CollectionId = 'collection_id', + /** column name */ + Price = 'price' +} + +/** Streaming cursor of the table "nft_marketplace_v2_top_collections_token_v1_48h" */ +export type Nft_Marketplace_V2_Top_Collections_Token_V1_48h_Stream_Cursor_Input = { + /** Stream column input with initial value */ + initial_value: Nft_Marketplace_V2_Top_Collections_Token_V1_48h_Stream_Cursor_Value_Input; + /** cursor ordering */ + ordering?: InputMaybe; +}; + +/** Initial value of the column from where the streaming should start */ +export type Nft_Marketplace_V2_Top_Collections_Token_V1_48h_Stream_Cursor_Value_Input = { + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** columns and relationships of "nft_marketplace_v2_top_collections_token_v2_24h" */ +export type Nft_Marketplace_V2_Top_Collections_Token_V2_24h = { + __typename?: 'nft_marketplace_v2_top_collections_token_v2_24h'; + collection_id?: Maybe; + current_collection?: Maybe; + price?: Maybe; +}; + +/** Boolean expression to filter rows from the table "nft_marketplace_v2_top_collections_token_v2_24h". All fields are combined with a logical 'AND'. */ +export type Nft_Marketplace_V2_Top_Collections_Token_V2_24h_Bool_Exp = { + _and?: InputMaybe>; + _not?: InputMaybe; + _or?: InputMaybe>; + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** Ordering options when selecting data from "nft_marketplace_v2_top_collections_token_v2_24h". 
*/ +export type Nft_Marketplace_V2_Top_Collections_Token_V2_24h_Order_By = { + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** select columns of table "nft_marketplace_v2_top_collections_token_v2_24h" */ +export enum Nft_Marketplace_V2_Top_Collections_Token_V2_24h_Select_Column { + /** column name */ + CollectionId = 'collection_id', + /** column name */ + Price = 'price' +} + +/** Streaming cursor of the table "nft_marketplace_v2_top_collections_token_v2_24h" */ +export type Nft_Marketplace_V2_Top_Collections_Token_V2_24h_Stream_Cursor_Input = { + /** Stream column input with initial value */ + initial_value: Nft_Marketplace_V2_Top_Collections_Token_V2_24h_Stream_Cursor_Value_Input; + /** cursor ordering */ + ordering?: InputMaybe; +}; + +/** Initial value of the column from where the streaming should start */ +export type Nft_Marketplace_V2_Top_Collections_Token_V2_24h_Stream_Cursor_Value_Input = { + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** columns and relationships of "nft_marketplace_v2_top_collections_token_v2_48h" */ +export type Nft_Marketplace_V2_Top_Collections_Token_V2_48h = { + __typename?: 'nft_marketplace_v2_top_collections_token_v2_48h'; + collection_id?: Maybe; + current_collection?: Maybe; + price?: Maybe; +}; + +/** Boolean expression to filter rows from the table "nft_marketplace_v2_top_collections_token_v2_48h". All fields are combined with a logical 'AND'. */ +export type Nft_Marketplace_V2_Top_Collections_Token_V2_48h_Bool_Exp = { + _and?: InputMaybe>; + _not?: InputMaybe; + _or?: InputMaybe>; + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** Ordering options when selecting data from "nft_marketplace_v2_top_collections_token_v2_48h". */ +export type Nft_Marketplace_V2_Top_Collections_Token_V2_48h_Order_By = { + collection_id?: InputMaybe; + price?: InputMaybe; +}; + +/** select columns of table "nft_marketplace_v2_top_collections_token_v2_48h" */ +export enum Nft_Marketplace_V2_Top_Collections_Token_V2_48h_Select_Column { + /** column name */ + CollectionId = 'collection_id', + /** column name */ + Price = 'price' +} + +/** Streaming cursor of the table "nft_marketplace_v2_top_collections_token_v2_48h" */ +export type Nft_Marketplace_V2_Top_Collections_Token_V2_48h_Stream_Cursor_Input = { + /** Stream column input with initial value */ + initial_value: Nft_Marketplace_V2_Top_Collections_Token_V2_48h_Stream_Cursor_Value_Input; + /** cursor ordering */ + ordering?: InputMaybe; +}; + +/** Initial value of the column from where the streaming should start */ +export type Nft_Marketplace_V2_Top_Collections_Token_V2_48h_Stream_Cursor_Value_Input = { + collection_id?: InputMaybe; + price?: InputMaybe; +}; + /** columns and relationships of "num_active_delegator_per_pool" */ export type Num_Active_Delegator_Per_Pool = { __typename?: 'num_active_delegator_per_pool'; @@ -5128,8 +5501,6 @@ export type Query_Root = { current_collection_ownership_v2_view: Array; /** fetch aggregated fields from the table: "current_collection_ownership_v2_view" */ current_collection_ownership_v2_view_aggregate: Current_Collection_Ownership_V2_View_Aggregate; - /** fetch data from the table: "current_collection_ownership_view" */ - current_collection_ownership_view: Array; /** fetch data from the table: "current_collections_v2" */ current_collections_v2: Array; /** fetch data from the table: "current_collections_v2" using primary key columns */ @@ -5217,8 +5588,20 @@ export type Query_Root = { nft_marketplace_v2_current_nft_marketplace_token_offers_by_pk?: 
Maybe; /** fetch data from the table: "nft_marketplace_v2.nft_marketplace_activities" */ nft_marketplace_v2_nft_marketplace_activities: Array; + /** fetch aggregated fields from the table: "nft_marketplace_v2.nft_marketplace_activities" */ + nft_marketplace_v2_nft_marketplace_activities_aggregate: Nft_Marketplace_V2_Nft_Marketplace_Activities_Aggregate; /** fetch data from the table: "nft_marketplace_v2.nft_marketplace_activities" using primary key columns */ nft_marketplace_v2_nft_marketplace_activities_by_pk?: Maybe; + /** fetch data from the table: "nft_marketplace_v2_top_collections" */ + nft_marketplace_v2_top_collections: Array; + /** fetch data from the table: "nft_marketplace_v2_top_collections_token_v1_24h" */ + nft_marketplace_v2_top_collections_token_v1_24h: Array; + /** fetch data from the table: "nft_marketplace_v2_top_collections_token_v1_48h" */ + nft_marketplace_v2_top_collections_token_v1_48h: Array; + /** fetch data from the table: "nft_marketplace_v2_top_collections_token_v2_24h" */ + nft_marketplace_v2_top_collections_token_v2_24h: Array; + /** fetch data from the table: "nft_marketplace_v2_top_collections_token_v2_48h" */ + nft_marketplace_v2_top_collections_token_v2_48h: Array; /** fetch data from the table: "num_active_delegator_per_pool" */ num_active_delegator_per_pool: Array; /** fetch data from the table: "processor_status" */ @@ -5490,15 +5873,6 @@ export type Query_RootCurrent_Collection_Ownership_V2_View_AggregateArgs = { }; -export type Query_RootCurrent_Collection_Ownership_ViewArgs = { - distinct_on?: InputMaybe>; - limit?: InputMaybe; - offset?: InputMaybe; - order_by?: InputMaybe>; - where?: InputMaybe; -}; - - export type Query_RootCurrent_Collections_V2Args = { distinct_on?: InputMaybe>; limit?: InputMaybe; @@ -5838,12 +6212,66 @@ export type Query_RootNft_Marketplace_V2_Nft_Marketplace_ActivitiesArgs = { }; +export type Query_RootNft_Marketplace_V2_Nft_Marketplace_Activities_AggregateArgs = { + distinct_on?: InputMaybe>; + limit?: InputMaybe; + offset?: InputMaybe; + order_by?: InputMaybe>; + where?: InputMaybe; +}; + + export type Query_RootNft_Marketplace_V2_Nft_Marketplace_Activities_By_PkArgs = { event_index: Scalars['bigint']; transaction_version: Scalars['bigint']; }; +export type Query_RootNft_Marketplace_V2_Top_CollectionsArgs = { + distinct_on?: InputMaybe>; + limit?: InputMaybe; + offset?: InputMaybe; + order_by?: InputMaybe>; + where?: InputMaybe; +}; + + +export type Query_RootNft_Marketplace_V2_Top_Collections_Token_V1_24hArgs = { + distinct_on?: InputMaybe>; + limit?: InputMaybe; + offset?: InputMaybe; + order_by?: InputMaybe>; + where?: InputMaybe; +}; + + +export type Query_RootNft_Marketplace_V2_Top_Collections_Token_V1_48hArgs = { + distinct_on?: InputMaybe>; + limit?: InputMaybe; + offset?: InputMaybe; + order_by?: InputMaybe>; + where?: InputMaybe; +}; + + +export type Query_RootNft_Marketplace_V2_Top_Collections_Token_V2_24hArgs = { + distinct_on?: InputMaybe>; + limit?: InputMaybe; + offset?: InputMaybe; + order_by?: InputMaybe>; + where?: InputMaybe; +}; + + +export type Query_RootNft_Marketplace_V2_Top_Collections_Token_V2_48hArgs = { + distinct_on?: InputMaybe>; + limit?: InputMaybe; + offset?: InputMaybe; + order_by?: InputMaybe>; + where?: InputMaybe; +}; + + export type Query_RootNum_Active_Delegator_Per_PoolArgs = { distinct_on?: InputMaybe>; limit?: InputMaybe; @@ -6116,10 +6544,6 @@ export type Subscription_Root = { current_collection_ownership_v2_view_aggregate: Current_Collection_Ownership_V2_View_Aggregate; /** fetch 
data from the table in a streaming manner : "current_collection_ownership_v2_view" */ current_collection_ownership_v2_view_stream: Array; - /** fetch data from the table: "current_collection_ownership_view" */ - current_collection_ownership_view: Array; - /** fetch data from the table in a streaming manner : "current_collection_ownership_view" */ - current_collection_ownership_view_stream: Array; /** fetch data from the table: "current_collections_v2" */ current_collections_v2: Array; /** fetch data from the table: "current_collections_v2" using primary key columns */ @@ -6249,10 +6673,32 @@ export type Subscription_Root = { nft_marketplace_v2_current_nft_marketplace_token_offers_stream: Array; /** fetch data from the table: "nft_marketplace_v2.nft_marketplace_activities" */ nft_marketplace_v2_nft_marketplace_activities: Array; + /** fetch aggregated fields from the table: "nft_marketplace_v2.nft_marketplace_activities" */ + nft_marketplace_v2_nft_marketplace_activities_aggregate: Nft_Marketplace_V2_Nft_Marketplace_Activities_Aggregate; /** fetch data from the table: "nft_marketplace_v2.nft_marketplace_activities" using primary key columns */ nft_marketplace_v2_nft_marketplace_activities_by_pk?: Maybe; /** fetch data from the table in a streaming manner : "nft_marketplace_v2.nft_marketplace_activities" */ nft_marketplace_v2_nft_marketplace_activities_stream: Array; + /** fetch data from the table: "nft_marketplace_v2_top_collections" */ + nft_marketplace_v2_top_collections: Array; + /** fetch data from the table in a streaming manner : "nft_marketplace_v2_top_collections" */ + nft_marketplace_v2_top_collections_stream: Array; + /** fetch data from the table: "nft_marketplace_v2_top_collections_token_v1_24h" */ + nft_marketplace_v2_top_collections_token_v1_24h: Array; + /** fetch data from the table in a streaming manner : "nft_marketplace_v2_top_collections_token_v1_24h" */ + nft_marketplace_v2_top_collections_token_v1_24h_stream: Array; + /** fetch data from the table: "nft_marketplace_v2_top_collections_token_v1_48h" */ + nft_marketplace_v2_top_collections_token_v1_48h: Array; + /** fetch data from the table in a streaming manner : "nft_marketplace_v2_top_collections_token_v1_48h" */ + nft_marketplace_v2_top_collections_token_v1_48h_stream: Array; + /** fetch data from the table: "nft_marketplace_v2_top_collections_token_v2_24h" */ + nft_marketplace_v2_top_collections_token_v2_24h: Array; + /** fetch data from the table in a streaming manner : "nft_marketplace_v2_top_collections_token_v2_24h" */ + nft_marketplace_v2_top_collections_token_v2_24h_stream: Array; + /** fetch data from the table: "nft_marketplace_v2_top_collections_token_v2_48h" */ + nft_marketplace_v2_top_collections_token_v2_48h: Array; + /** fetch data from the table in a streaming manner : "nft_marketplace_v2_top_collections_token_v2_48h" */ + nft_marketplace_v2_top_collections_token_v2_48h_stream: Array; /** fetch data from the table: "num_active_delegator_per_pool" */ num_active_delegator_per_pool: Array; /** fetch data from the table in a streaming manner : "num_active_delegator_per_pool" */ @@ -6644,22 +7090,6 @@ export type Subscription_RootCurrent_Collection_Ownership_V2_View_StreamArgs = { }; -export type Subscription_RootCurrent_Collection_Ownership_ViewArgs = { - distinct_on?: InputMaybe>; - limit?: InputMaybe; - offset?: InputMaybe; - order_by?: InputMaybe>; - where?: InputMaybe; -}; - - -export type Subscription_RootCurrent_Collection_Ownership_View_StreamArgs = { - batch_size: Scalars['Int']; - cursor: Array>; 
- where?: InputMaybe; -}; - - export type Subscription_RootCurrent_Collections_V2Args = { distinct_on?: InputMaybe>; limit?: InputMaybe; @@ -7146,6 +7576,15 @@ export type Subscription_RootNft_Marketplace_V2_Nft_Marketplace_ActivitiesArgs = }; +export type Subscription_RootNft_Marketplace_V2_Nft_Marketplace_Activities_AggregateArgs = { + distinct_on?: InputMaybe>; + limit?: InputMaybe; + offset?: InputMaybe; + order_by?: InputMaybe>; + where?: InputMaybe; +}; + + export type Subscription_RootNft_Marketplace_V2_Nft_Marketplace_Activities_By_PkArgs = { event_index: Scalars['bigint']; transaction_version: Scalars['bigint']; @@ -7159,6 +7598,86 @@ export type Subscription_RootNft_Marketplace_V2_Nft_Marketplace_Activities_Strea }; +export type Subscription_RootNft_Marketplace_V2_Top_CollectionsArgs = { + distinct_on?: InputMaybe>; + limit?: InputMaybe; + offset?: InputMaybe; + order_by?: InputMaybe>; + where?: InputMaybe; +}; + + +export type Subscription_RootNft_Marketplace_V2_Top_Collections_StreamArgs = { + batch_size: Scalars['Int']; + cursor: Array>; + where?: InputMaybe; +}; + + +export type Subscription_RootNft_Marketplace_V2_Top_Collections_Token_V1_24hArgs = { + distinct_on?: InputMaybe>; + limit?: InputMaybe; + offset?: InputMaybe; + order_by?: InputMaybe>; + where?: InputMaybe; +}; + + +export type Subscription_RootNft_Marketplace_V2_Top_Collections_Token_V1_24h_StreamArgs = { + batch_size: Scalars['Int']; + cursor: Array>; + where?: InputMaybe; +}; + + +export type Subscription_RootNft_Marketplace_V2_Top_Collections_Token_V1_48hArgs = { + distinct_on?: InputMaybe>; + limit?: InputMaybe; + offset?: InputMaybe; + order_by?: InputMaybe>; + where?: InputMaybe; +}; + + +export type Subscription_RootNft_Marketplace_V2_Top_Collections_Token_V1_48h_StreamArgs = { + batch_size: Scalars['Int']; + cursor: Array>; + where?: InputMaybe; +}; + + +export type Subscription_RootNft_Marketplace_V2_Top_Collections_Token_V2_24hArgs = { + distinct_on?: InputMaybe>; + limit?: InputMaybe; + offset?: InputMaybe; + order_by?: InputMaybe>; + where?: InputMaybe; +}; + + +export type Subscription_RootNft_Marketplace_V2_Top_Collections_Token_V2_24h_StreamArgs = { + batch_size: Scalars['Int']; + cursor: Array>; + where?: InputMaybe; +}; + + +export type Subscription_RootNft_Marketplace_V2_Top_Collections_Token_V2_48hArgs = { + distinct_on?: InputMaybe>; + limit?: InputMaybe; + offset?: InputMaybe; + order_by?: InputMaybe>; + where?: InputMaybe; +}; + + +export type Subscription_RootNft_Marketplace_V2_Top_Collections_Token_V2_48h_StreamArgs = { + batch_size: Scalars['Int']; + cursor: Array>; + where?: InputMaybe; +}; + + export type Subscription_RootNum_Active_Delegator_Per_PoolArgs = { distinct_on?: InputMaybe>; limit?: InputMaybe; diff --git a/ecosystem/typescript/sdk/src/indexer/queries/CurrentTokenOwnershipFieldsFragment.graphql b/ecosystem/typescript/sdk/src/indexer/queries/CurrentTokenOwnershipFieldsFragment.graphql index 07f6651851250..bdf67d3604a3e 100644 --- a/ecosystem/typescript/sdk/src/indexer/queries/CurrentTokenOwnershipFieldsFragment.graphql +++ b/ecosystem/typescript/sdk/src/indexer/queries/CurrentTokenOwnershipFieldsFragment.graphql @@ -1,37 +1,45 @@ fragment CurrentTokenOwnershipFields on current_token_ownerships_v2 { token_standard - is_fungible_v2 - is_soulbound_v2 - property_version_v1 - table_type_v1 token_properties_mutated_v1 - amount - last_transaction_timestamp - last_transaction_version + token_data_id + table_type_v1 storage_id + property_version_v1 owner_address + 
last_transaction_version + last_transaction_timestamp + is_soulbound_v2 + is_fungible_v2 + amount current_token_data { - token_name + collection_id + description + is_fungible_v2 + largest_property_version_v1 + last_transaction_timestamp + last_transaction_version + maximum + supply token_data_id - token_uri + token_name token_properties - supply - maximum - last_transaction_version - last_transaction_timestamp - largest_property_version_v1 + token_standard + token_uri current_collection { + collection_id collection_name creator_address + current_supply description - uri - collection_id + last_transaction_timestamp last_transaction_version - current_supply + max_supply mutable_description - total_minted_v2 - table_handle_v1 mutable_uri + table_handle_v1 + token_standard + total_minted_v2 + uri } } } diff --git a/ecosystem/typescript/sdk/src/indexer/queries/getAccountCoinsData.graphql b/ecosystem/typescript/sdk/src/indexer/queries/getAccountCoinsData.graphql index 15bb54c62635c..897cae6ca1367 100644 --- a/ecosystem/typescript/sdk/src/indexer/queries/getAccountCoinsData.graphql +++ b/ecosystem/typescript/sdk/src/indexer/queries/getAccountCoinsData.graphql @@ -2,10 +2,21 @@ query getAccountCoinsData($owner_address: String, $offset: Int, $limit: Int) { current_coin_balances(where: { owner_address: { _eq: $owner_address } }, offset: $offset, limit: $limit) { amount coin_type + coin_type_hash + last_transaction_timestamp + last_transaction_version + owner_address coin_info { - name + coin_type + coin_type_hash + creator_address decimals + name + supply_aggregator_table_handle + supply_aggregator_table_key symbol + transaction_created_timestamp + transaction_version_created } } } diff --git a/ecosystem/typescript/sdk/src/indexer/queries/getAccountTransactionsCount.graphql b/ecosystem/typescript/sdk/src/indexer/queries/getAccountTransactionsCount.graphql index e27c6e7a61f2f..009d559dd5d7f 100644 --- a/ecosystem/typescript/sdk/src/indexer/queries/getAccountTransactionsCount.graphql +++ b/ecosystem/typescript/sdk/src/indexer/queries/getAccountTransactionsCount.graphql @@ -1,5 +1,5 @@ query getAccountTransactionsCount($address: String) { - move_resources_aggregate(where: { address: { _eq: $address } }, distinct_on: transaction_version) { + account_transactions_aggregate(where: { account_address: { _eq: $address } }) { aggregate { count } diff --git a/ecosystem/typescript/sdk/src/indexer/queries/getAccountTransactionsData.graphql b/ecosystem/typescript/sdk/src/indexer/queries/getAccountTransactionsData.graphql index 3a9ac0ce861f2..c0d57ce337a01 100644 --- a/ecosystem/typescript/sdk/src/indexer/queries/getAccountTransactionsData.graphql +++ b/ecosystem/typescript/sdk/src/indexer/queries/getAccountTransactionsData.graphql @@ -1,11 +1,15 @@ -query getAccountTransactionsData($address: String, $limit: Int, $offset: Int) { - move_resources( - where: { address: { _eq: $address } } - order_by: { transaction_version: desc } - distinct_on: transaction_version - limit: $limit - offset: $offset - ) { +#import "./TokenActivitiesFieldsFragment"; +query getAccountTransactionsData( + $where_condition: account_transactions_bool_exp! + $offset: Int + $limit: Int + $order_by: [account_transactions_order_by!] 
+) { + account_transactions(where: $where_condition, order_by: $order_by, limit: $limit, offset: $offset) { + token_activities_v2 { + ...TokenActivitiesFields + } transaction_version + account_address } } diff --git a/ecosystem/typescript/sdk/src/indexer/queries/getCollectionData.graphql b/ecosystem/typescript/sdk/src/indexer/queries/getCollectionData.graphql index b701d92da079d..1d51a73e36d03 100644 --- a/ecosystem/typescript/sdk/src/indexer/queries/getCollectionData.graphql +++ b/ecosystem/typescript/sdk/src/indexer/queries/getCollectionData.graphql @@ -6,11 +6,18 @@ query getCollectionData( ) { current_collections_v2(where: $where_condition, offset: $offset, limit: $limit, order_by: $order_by) { collection_id - token_standard collection_name creator_address current_supply description + last_transaction_timestamp + last_transaction_version + max_supply + mutable_description + mutable_uri + table_handle_v1 + token_standard + total_minted_v2 uri } } diff --git a/ecosystem/typescript/sdk/src/indexer/queries/getCollectionsWithOwnedTokens.graphql b/ecosystem/typescript/sdk/src/indexer/queries/getCollectionsWithOwnedTokens.graphql index a27bd1a0609d8..8ce0f80e0de76 100644 --- a/ecosystem/typescript/sdk/src/indexer/queries/getCollectionsWithOwnedTokens.graphql +++ b/ecosystem/typescript/sdk/src/indexer/queries/getCollectionsWithOwnedTokens.graphql @@ -6,17 +6,28 @@ query getCollectionsWithOwnedTokens( ) { current_collection_ownership_v2_view(where: $where_condition, offset: $offset, limit: $limit, order_by: $order_by) { current_collection { - creator_address - collection_name - token_standard collection_id + collection_name + creator_address + current_supply description + last_transaction_timestamp + last_transaction_version + mutable_description + max_supply + mutable_uri table_handle_v1 - uri + token_standard total_minted_v2 - max_supply + uri } + collection_id + collection_name + collection_uri + creator_address distinct_tokens last_transaction_version + owner_address + single_token_uri } } diff --git a/ecosystem/typescript/sdk/src/indexer/queries/getNumberOfDelegators.graphql b/ecosystem/typescript/sdk/src/indexer/queries/getNumberOfDelegators.graphql index 92278d8f0f049..3911f200ecb8c 100644 --- a/ecosystem/typescript/sdk/src/indexer/queries/getNumberOfDelegators.graphql +++ b/ecosystem/typescript/sdk/src/indexer/queries/getNumberOfDelegators.graphql @@ -4,5 +4,6 @@ query getNumberOfDelegators($poolAddress: String) { distinct_on: pool_address ) { num_active_delegator + pool_address } } diff --git a/ecosystem/typescript/sdk/src/indexer/queries/getTokenActivities.graphql b/ecosystem/typescript/sdk/src/indexer/queries/getTokenActivities.graphql index 4c2cc43027243..a11d37792b3c6 100644 --- a/ecosystem/typescript/sdk/src/indexer/queries/getTokenActivities.graphql +++ b/ecosystem/typescript/sdk/src/indexer/queries/getTokenActivities.graphql @@ -1,3 +1,4 @@ +#import "./TokenActivitiesFieldsFragment"; query getTokenActivities( $where_condition: token_activities_v2_bool_exp! $offset: Int @@ -5,20 +6,6 @@ query getTokenActivities( $order_by: [token_activities_v2_order_by!] 
) { token_activities_v2(where: $where_condition, order_by: $order_by, offset: $offset, limit: $limit) { - after_value - before_value - entry_function_id_str - event_account_address - event_index - from_address - is_fungible_v2 - property_version_v1 - to_address - token_amount - token_data_id - token_standard - transaction_timestamp - transaction_version - type + ...TokenActivitiesFields } } diff --git a/ecosystem/typescript/sdk/src/indexer/queries/getTokenCurrentOwnerData.graphql b/ecosystem/typescript/sdk/src/indexer/queries/getTokenCurrentOwnerData.graphql index b7fa4111a055c..934bfc3b19c51 100644 --- a/ecosystem/typescript/sdk/src/indexer/queries/getTokenCurrentOwnerData.graphql +++ b/ecosystem/typescript/sdk/src/indexer/queries/getTokenCurrentOwnerData.graphql @@ -1,3 +1,4 @@ +#import "./CurrentTokenOwnershipFieldsFragment"; query getTokenCurrentOwnerData( $where_condition: current_token_ownerships_v2_bool_exp! $offset: Int @@ -5,6 +6,6 @@ query getTokenCurrentOwnerData( $order_by: [current_token_ownerships_v2_order_by!] ) { current_token_ownerships_v2(where: $where_condition, offset: $offset, limit: $limit, order_by: $order_by) { - owner_address + ...CurrentTokenOwnershipFields } } diff --git a/ecosystem/typescript/sdk/src/indexer/queries/getTokenData.graphql b/ecosystem/typescript/sdk/src/indexer/queries/getTokenData.graphql index 582bec97f67be..507558091f68f 100644 --- a/ecosystem/typescript/sdk/src/indexer/queries/getTokenData.graphql +++ b/ecosystem/typescript/sdk/src/indexer/queries/getTokenData.graphql @@ -5,23 +5,34 @@ query getTokenData( $order_by: [current_token_datas_v2_order_by!] ) { current_token_datas_v2(where: $where_condition, offset: $offset, limit: $limit, order_by: $order_by) { + collection_id + description + is_fungible_v2 + largest_property_version_v1 + last_transaction_timestamp + last_transaction_version + maximum + supply token_data_id token_name - token_uri token_properties token_standard - largest_property_version_v1 - maximum - is_fungible_v2 - supply - last_transaction_version - last_transaction_timestamp + token_uri current_collection { collection_id collection_name creator_address - uri current_supply + description + last_transaction_timestamp + last_transaction_version + max_supply + mutable_description + mutable_uri + table_handle_v1 + token_standard + total_minted_v2 + uri } } } diff --git a/ecosystem/typescript/sdk/src/indexer/queries/getTokenOwnersData.graphql b/ecosystem/typescript/sdk/src/indexer/queries/getTokenOwnersData.graphql index 423c6a8b28789..b8b4d32ea37e9 100644 --- a/ecosystem/typescript/sdk/src/indexer/queries/getTokenOwnersData.graphql +++ b/ecosystem/typescript/sdk/src/indexer/queries/getTokenOwnersData.graphql @@ -1,3 +1,4 @@ +#import "./CurrentTokenOwnershipFieldsFragment"; query getTokenOwnersData( $where_condition: current_token_ownerships_v2_bool_exp! $offset: Int @@ -5,6 +6,6 @@ query getTokenOwnersData( $order_by: [current_token_ownerships_v2_order_by!] 
) { current_token_ownerships_v2(where: $where_condition, offset: $offset, limit: $limit, order_by: $order_by) { - owner_address + ...CurrentTokenOwnershipFields } } diff --git a/ecosystem/typescript/sdk/src/indexer/queries/getUserTransactions.graphql b/ecosystem/typescript/sdk/src/indexer/queries/getUserTransactions.graphql index 529f43ac3b24c..268319c9d23b6 100644 --- a/ecosystem/typescript/sdk/src/indexer/queries/getUserTransactions.graphql +++ b/ecosystem/typescript/sdk/src/indexer/queries/getUserTransactions.graphql @@ -1,10 +1,10 @@ -query getUserTransactions($limit: Int, $start_version: bigint, $offset: Int) { - user_transactions( - limit: $limit - order_by: { version: desc } - where: { version: { _lte: $start_version } } - offset: $offset - ) { +query getUserTransactions( + $where_condition: user_transactions_bool_exp! + $offset: Int + $limit: Int + $order_by: [user_transactions_order_by!] +) { + user_transactions(order_by: $order_by, where: $where_condition, limit: $limit, offset: $offset) { version } } diff --git a/ecosystem/typescript/sdk/src/indexer/queries/tokenActivitiesFieldsFragment.graphql b/ecosystem/typescript/sdk/src/indexer/queries/tokenActivitiesFieldsFragment.graphql new file mode 100644 index 0000000000000..5cda631343468 --- /dev/null +++ b/ecosystem/typescript/sdk/src/indexer/queries/tokenActivitiesFieldsFragment.graphql @@ -0,0 +1,17 @@ +fragment TokenActivitiesFields on token_activities_v2 { + after_value + before_value + entry_function_id_str + event_account_address + event_index + from_address + is_fungible_v2 + property_version_v1 + to_address + token_amount + token_data_id + token_standard + transaction_timestamp + transaction_version + type +} diff --git a/ecosystem/typescript/sdk/src/providers/indexer.ts b/ecosystem/typescript/sdk/src/providers/indexer.ts index 1ef67c2c06b6d..533b821a4f40a 100644 --- a/ecosystem/typescript/sdk/src/providers/indexer.ts +++ b/ecosystem/typescript/sdk/src/providers/indexer.ts @@ -47,12 +47,14 @@ import { import { ClientConfig, post } from "../client"; import { ApiError } from "./aptos_client"; import { + Account_Transactions_Order_By, Current_Collections_V2_Order_By, Current_Collection_Ownership_V2_View_Order_By, Current_Token_Datas_V2_Order_By, Current_Token_Ownerships_V2_Order_By, InputMaybe, Token_Activities_V2_Order_By, + User_Transactions_Order_By, } from "../indexer/generated/types"; /** @@ -364,6 +366,7 @@ export class IndexerClient { const whereCondition: any = { token_data_id: { _eq: tokenAddress }, + amount: { _gt: "0" }, }; if (propertyVersion) { @@ -478,7 +481,7 @@ export class IndexerClient { * Queries account's current owned tokens by token address (v2) or token data id (v1). 
* * @param token token address (v2) or token data id (v1) - * @returns GetOwnedTokensByTokenDataIdQuery response type + * @returns GetOwnedTokensByTokenDataQuery response type */ async getOwnedTokensByTokenData( token: MaybeHexString, @@ -704,13 +707,26 @@ export class IndexerClient { */ async getAccountTransactionsData( accountAddress: MaybeHexString, - options?: IndexerPaginationArgs, + extraArgs?: { + options?: IndexerPaginationArgs; + orderBy?: IndexerSortBy[]; + }, ): Promise { const address = HexString.ensure(accountAddress).hex(); IndexerClient.validateAddress(address); + + const whereCondition: any = { + account_address: { _eq: address }, + }; + const graphqlQuery = { query: GetAccountTransactionsData, - variables: { address, offset: options?.offset, limit: options?.limit }, + variables: { + where_condition: whereCondition, + offset: extraArgs?.options?.offset, + limit: extraArgs?.options?.limit, + order_by: extraArgs?.orderBy, + }, }; return this.queryIndexer(graphqlQuery); } @@ -732,12 +748,26 @@ export class IndexerClient { /** * Queries top user transactions * + * @param startVersion optional - can be set to tell indexer what version to start from * @returns GetUserTransactionsQuery response type */ - async getUserTransactions(startVersion?: number, options?: IndexerPaginationArgs): Promise { + async getUserTransactions(extraArgs?: { + startVersion?: number; + options?: IndexerPaginationArgs; + orderBy?: IndexerSortBy[]; + }): Promise { + const whereCondition: any = { + version: { _lte: extraArgs?.startVersion }, + }; + const graphqlQuery = { query: GetUserTransactions, - variables: { start_version: startVersion, offset: options?.offset, limit: options?.limit }, + variables: { + where_condition: whereCondition, + offset: extraArgs?.options?.offset, + limit: extraArgs?.options?.limit, + order_by: extraArgs?.orderBy, + }, }; return this.queryIndexer(graphqlQuery); } diff --git a/ecosystem/typescript/sdk/src/tests/e2e/client.test.ts b/ecosystem/typescript/sdk/src/tests/e2e/client.test.ts index ec913b02aa507..dbfd68ed78de6 100644 --- a/ecosystem/typescript/sdk/src/tests/e2e/client.test.ts +++ b/ecosystem/typescript/sdk/src/tests/e2e/client.test.ts @@ -2,25 +2,6 @@ import { AptosApiError, aptosRequest } from "../../client"; import { VERSION } from "../../version"; import { getTransaction, longTestTimeout, NODE_URL } from "../unit/test_helper.test"; -test( - "server response should include cookies", - async () => { - try { - const response = await aptosRequest({ - // use devnet as localnet doesnt set cookies - url: "https://fullnode.devnet.aptoslabs.com/v1", - method: "GET", - originMethod: "test cookies", - }); - expect(response.headers).toHaveProperty("set-cookie"); - } catch (error: any) { - // should not get here - expect(true).toBe(false); - } - }, - longTestTimeout, -); - test( "call should include x-aptos-client header", async () => { diff --git a/ecosystem/typescript/sdk/src/tests/e2e/indexer.test.ts b/ecosystem/typescript/sdk/src/tests/e2e/indexer.test.ts index be031715b01d2..6055be752d528 100644 --- a/ecosystem/typescript/sdk/src/tests/e2e/indexer.test.ts +++ b/ecosystem/typescript/sdk/src/tests/e2e/indexer.test.ts @@ -48,15 +48,15 @@ describe("Indexer", () => { const fullNodeChainId = await provider.getChainId(); console.log( - `\n fullnode chain id is: ${fullNodeChainId}, indexer chain id is: ${indexerLedgerInfo.ledger_infos[0].chain_id}`, + `\n devnet chain id is: ${fullNodeChainId}, indexer chain id is: ${indexerLedgerInfo.ledger_infos[0].chain_id}`, ); if 
(indexerLedgerInfo.ledger_infos[0].chain_id !== fullNodeChainId) { - console.log(`\n fullnode chain id and indexer chain id are not synced, skipping rest of tests`); + console.log(`\n devnet chain id and indexer chain id are not synced, skipping rest of tests`); skipTest = true; runTests = describe.skip; } else { - console.log(`\n fullnode chain id and indexer chain id are in synced, running tests`); + console.log(`\n devnet chain id and indexer chain id are in synced, running tests`); } if (!skipTest) { @@ -194,7 +194,6 @@ describe("Indexer", () => { const tokenData = await indexerClient.getTokenData( accountTokens.current_token_ownerships_v2[0].current_token_data!.token_data_id, ); - expect(tokenData.current_token_datas_v2[0].token_standard).toEqual("v1"); expect(tokenData.current_token_datas_v2[0].token_name).toEqual(tokenName); }, longTestTimeout, @@ -313,7 +312,7 @@ describe("Indexer", () => { "gets account transactions count", async () => { const accountTransactionsCount = await indexerClient.getAccountTransactionsCount(alice.address().hex()); - expect(accountTransactionsCount.move_resources_aggregate.aggregate?.count).toEqual(5); + expect(accountTransactionsCount.account_transactions_aggregate.aggregate?.count).toEqual(5); }, longTestTimeout, ); @@ -322,7 +321,8 @@ describe("Indexer", () => { "gets account transactions data", async () => { const accountTransactionsData = await indexerClient.getAccountTransactionsData(alice.address().hex()); - expect(accountTransactionsData.move_resources[0]).toHaveProperty("transaction_version"); + expect(accountTransactionsData.account_transactions.length).toEqual(5); + expect(accountTransactionsData.account_transactions[0]).toHaveProperty("transaction_version"); }, longTestTimeout, ); @@ -339,7 +339,7 @@ describe("Indexer", () => { it( "gets user transactions", async () => { - const userTransactions = await indexerClient.getUserTransactions(482294669, { limit: 4 }); + const userTransactions = await indexerClient.getUserTransactions({ options: { limit: 4 } }); expect(userTransactions.user_transactions.length).toEqual(4); }, longTestTimeout, @@ -431,5 +431,17 @@ describe("Indexer", () => { expect(tokens.token_activities_v2).toHaveLength(2); expect(tokens.token_activities_v2[0].token_standard).toEqual("v1"); }); + + it( + "gets account transactions data", + async () => { + const accountTransactionsData = await indexerClient.getAccountTransactionsData(alice.address().hex(), { + orderBy: [{ transaction_version: "desc" }], + }); + expect(accountTransactionsData.account_transactions.length).toEqual(5); + expect(accountTransactionsData.account_transactions[0]).toHaveProperty("transaction_version"); + }, + longTestTimeout, + ); }); }); diff --git a/ecosystem/typescript/sdk/src/version.ts b/ecosystem/typescript/sdk/src/version.ts index d2ca287130198..bedfa73cd921b 100644 --- a/ecosystem/typescript/sdk/src/version.ts +++ b/ecosystem/typescript/sdk/src/version.ts @@ -1,2 +1,2 @@ // hardcoded for now, we would want to have it injected dynamically -export const VERSION = "1.17.0"; +export const VERSION = "1.18.0"; diff --git a/ecosystem/typescript/sdk_v2/package.json b/ecosystem/typescript/sdk_v2/package.json index 974a0be087151..9e621a3cfe851 100644 --- a/ecosystem/typescript/sdk_v2/package.json +++ b/ecosystem/typescript/sdk_v2/package.json @@ -25,7 +25,10 @@ "_build:cjs": "tsup src/index.ts --format cjs --dts --out-dir dist/cjs", "_build:types": "tsup src/types/index.ts --dts --out-dir dist/types", "generate-openapi-response-types": "openapi -i 
../../../../api/doc/spec.yaml -o ./src/types/generated --exportCore=false --exportServices=false", - "lint": "eslint \"**/*.ts\"" + "_fmt": "prettier 'src/**/*.ts' 'tests/**/*.ts' '.eslintrc.js'", + "fmt": "pnpm _fmt --write", + "lint": "eslint \"**/*.ts\"", + "test": "pnpm jest" }, "dependencies": { "@aptos-labs/aptos-client": "^0.0.2", diff --git a/ecosystem/typescript/sdk_v2/src/api/aptos_config.ts b/ecosystem/typescript/sdk_v2/src/api/aptos_config.ts index cd2c26a9e857c..2bfd56fd929b5 100644 --- a/ecosystem/typescript/sdk_v2/src/api/aptos_config.ts +++ b/ecosystem/typescript/sdk_v2/src/api/aptos_config.ts @@ -1,9 +1,23 @@ import { ClientConfig } from "../client/types"; +import { NetworkToNodeAPI, NetworkToFaucetAPI, NetworkToIndexerAPI, Network } from "../utils/api-endpoints"; +import { DEFAULT_NETWORK } from "../utils/const"; export class AptosConfig { + readonly network?: Network; + + readonly fullnode?: string; + + readonly faucet?: string; + + readonly indexer?: string; + readonly clientConfig?: ClientConfig; constructor(config?: AptosConfig) { + this.network = config?.network ?? DEFAULT_NETWORK; + this.fullnode = config?.fullnode ?? NetworkToNodeAPI[this.network]; + this.faucet = config?.faucet ?? NetworkToFaucetAPI[this.network]; + this.indexer = config?.indexer ?? NetworkToIndexerAPI[this.network]; this.clientConfig = config?.clientConfig ?? {}; } } diff --git a/ecosystem/typescript/sdk_v2/src/core/account_address.ts b/ecosystem/typescript/sdk_v2/src/core/account_address.ts new file mode 100644 index 0000000000000..bf4776bdfab71 --- /dev/null +++ b/ecosystem/typescript/sdk_v2/src/core/account_address.ts @@ -0,0 +1,365 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +import { bytesToHex, hexToBytes } from "@noble/hashes/utils"; +import { HexInput } from "../types"; +import { ParsingError, ParsingResult } from "./common"; + +/** + * This enum is used to explain why an address was invalid. + */ +export enum AddressInvalidReason { + INCORRECT_NUMBER_OF_BYTES = "incorrect_number_of_bytes", + INVALID_HEX_CHARS = "invalid_hex_chars", + TOO_SHORT = "too_short", + TOO_LONG = "too_long", + LEADING_ZERO_X_REQUIRED = "leading_zero_x_required", + LONG_FORM_REQUIRED_UNLESS_SPECIAL = "long_form_required_unless_special", + INVALID_PADDING_ZEROES = "INVALID_PADDING_ZEROES", +} + +/** + * NOTE: Only use this class for account addresses. For other hex data, e.g. transaction + * hashes, use the Hex class. + * + * AccountAddress is used for working with account addresses. Account addresses, when + * represented as a string, generally look like these examples: + * - 0x1 + * - 0xaa86fe99004361f747f91342ca13c426ca0cccb0c1217677180c9493bad6ef0c + * + * Proper formatting and parsing of account addresses is defined by AIP-40. + * To learn more about the standard, read the AIP here: + * https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-40.md. + * + * The comments in this class make frequent reference to the LONG and SHORT formats, + * as well as "special" addresses. To learn what these refer to see AIP-40. + */ +export class AccountAddress { + /* + * This is the internal representation of an account address. + */ + readonly data: Uint8Array; + + /* + * The number of bytes that make up an account address. + */ + static readonly LENGTH: number = 32; + + /* + * The length of an address string in LONG form without a leading 0x. 
+ */ + static readonly LONG_STRING_LENGTH: number = 64; + + static ONE: AccountAddress = AccountAddress.fromString({ input: "0x1" }); + + static TWO: AccountAddress = AccountAddress.fromString({ input: "0x2" }); + + static THREE: AccountAddress = AccountAddress.fromString({ input: "0x3" }); + + static FOUR: AccountAddress = AccountAddress.fromString({ input: "0x4" }); + + /** + * Creates an instance of AccountAddress from a Uint8Array. + * + * @param args.data A Uint8Array representing an account address. + */ + constructor(args: { data: Uint8Array }) { + if (args.data.length !== AccountAddress.LENGTH) { + throw new ParsingError( + "AccountAddress data should be exactly 32 bytes long", + AddressInvalidReason.INCORRECT_NUMBER_OF_BYTES, + ); + } + this.data = args.data; + } + + /** + * Returns whether an address is special, where special is defined as 0x0 to 0xf + * inclusive. In other words, the last byte of the address must be < 0b10000 (16) + * and every other byte must be zero. + * + * For more information on how special addresses are defined see AIP-40: + * https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-40.md. + * + * @returns true if the address is special, false if not. + */ + isSpecial(): boolean { + return ( + this.data.slice(0, this.data.length - 1).every((byte) => byte === 0) && this.data[this.data.length - 1] < 0b10000 + ); + } + + // === + // Methods for representing an instance of AccountAddress as other types. + // === + + /** + * Return the AccountAddress as a string as per AIP-40. + * https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-40.md. + * + * In short, it means that special addresses are represented in SHORT form, meaning + * 0x0 through to 0xf inclusive, and every other address is represented in LONG form, + * meaning 0x + 64 hex characters. + * + * @returns AccountAddress as a string conforming to AIP-40. + */ + toString(): string { + return `0x${this.toStringWithoutPrefix()}`; + } + + /** + * NOTE: Prefer to use `toString` where possible. + * + * Return the AccountAddress as a string as per AIP-40 but without the leading 0x. + * + * Learn more by reading the docstring of `toString`. + * + * @returns AccountAddress as a string conforming to AIP-40 but without the leading 0x. + */ + toStringWithoutPrefix(): string { + let hex = bytesToHex(this.data); + if (this.isSpecial()) { + hex = hex[hex.length - 1]; + } + return hex; + } + + /** + * NOTE: Prefer to use `toString` where possible. + * + * Whereas toString will format special addresses (as defined by isSpecial) using the + * SHORT form (no leading 0s), this format the address in the LONG format + * unconditionally. + * + * This means it will be 0x + 64 hex characters. + * + * @returns AccountAddress as a string in LONG form. + */ + toStringLong(): string { + return `0x${this.toStringLongWithoutPrefix()}`; + } + + /* + * NOTE: Prefer to use `toString` where possible. + * + * Whereas toString will format special addresses (as defined by isSpecial) using the + * SHORT form (no leading 0s), this function will include leading zeroes. The string + * will not have a leading zero. + * + * This means it will be 64 hex characters without a leading 0x. + * + * @returns AccountAddress as a string in LONG form without a leading 0x. + */ + toStringLongWithoutPrefix(): string { + return bytesToHex(this.data); + } + + /** + * Get the inner hex data. The inner data is already a Uint8Array so no conversion + * is taking place here, it just returns the inner data. 
+ * + * @returns Hex data as Uint8Array + */ + toUint8Array(): Uint8Array { + return this.data; + } + + // === + // Methods for creating an instance of AccountAddress from other types. + // === + + /** + * NOTE: This function has strict parsing behavior. For relaxed behavior, please use + * the `fromStringRelaxed` function. + * + * Creates an instance of AccountAddress from a hex string. + * + * This function allows only the strictest formats defined by AIP-40. In short this + * means only the following formats are accepted: + * + * - LONG + * - SHORT for special addresses + * + * Where: + * - LONG is defined as 0x + 64 hex characters. + * - SHORT for special addresses is 0x0 to 0xf inclusive without padding zeroes. + * + * This means the following are not accepted: + * - SHORT for non-special addresses. + * - Any address without a leading 0x. + * + * Learn more about the different address formats by reading AIP-40: + * https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-40.md. + * + * @param args.input A hex string representing an account address. + * + * @returns An instance of AccountAddress. + */ + static fromString(args: { input: string }): AccountAddress { + // Assert the string starts with 0x. + if (!args.input.startsWith("0x")) { + throw new ParsingError("Hex string must start with a leading 0x.", AddressInvalidReason.LEADING_ZERO_X_REQUIRED); + } + + const address = AccountAddress.fromStringRelaxed(args); + + // Check if the address is in LONG form. If it is not, this is only allowed for + // special addresses, in which case we check it is in proper SHORT form. + if (args.input.length != AccountAddress.LONG_STRING_LENGTH + 2) { + if (!address.isSpecial()) { + throw new ParsingError( + "The given hex string is not a special address, it must be represented as 0x + 64 chars.", + AddressInvalidReason.LONG_FORM_REQUIRED_UNLESS_SPECIAL, + ); + } else { + // 0x + one hex char is the only valid SHORT form for special addresses. + if (args.input.length != 3) { + throw new ParsingError( + "The given hex string is a special address not in LONG form, it must be 0x0 to 0xf without padding zeroes.", + AddressInvalidReason.INVALID_PADDING_ZEROES, + ); + } + } + } + + return address; + } + + /** + * NOTE: This function has relaxed parsing behavior. For strict behavior, please use + * the `fromString` function. Where possible use `fromString` rather than this + * function, `fromStringRelaxed` is only provided for backwards compatibility. + * + * Creates an instance of AccountAddress from a hex string. + * + * This function allows all formats defined by AIP-40. In short this means the + * following formats are accepted: + * + * - LONG, with or without leading 0x + * - SHORT, with or without leading 0x + * + * Where: + * - LONG is 64 hex characters. + * - SHORT is 1 to 63 hex characters inclusive. + * - Padding zeroes are allowed, e.g. 0x0123 is valid. + * + * Learn more about the different address formats by reading AIP-40: + * https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-40.md. + * + * @param args.input A hex string representing an account address. + * + * @returns An instance of AccountAddress. + */ + static fromStringRelaxed(args: { input: string }): AccountAddress { + let { input } = args; + + // Remove leading 0x for parsing. + if (input.startsWith("0x")) { + input = input.slice(2); + } + + // Ensure the address string is at least 1 character long. 
+ if (input.length === 0) { + throw new ParsingError( + "Hex string is too short, must be 1 to 64 chars long, excluding the leading 0x.", + AddressInvalidReason.TOO_SHORT, + ); + } + + // Ensure the address string is not longer than 64 characters. + if (input.length > 64) { + throw new ParsingError( + "Hex string is too long, must be 1 to 64 chars long, excluding the leading 0x.", + AddressInvalidReason.TOO_LONG, + ); + } + + let addressBytes: Uint8Array; + try { + // Pad the address with leading zeroes so it is 64 chars long and then convert + // the hex string to bytes. Every two characters in a hex string constitutes a + // single byte. So a 64 length hex string becomes a 32 byte array. + addressBytes = hexToBytes(input.padStart(64, "0")); + } catch (e) { + const error = e as Error; + // At this point the only way this can fail is if the hex string contains + // invalid characters. + throw new ParsingError(`Hex characters are invalid: ${error.message}`, AddressInvalidReason.INVALID_HEX_CHARS); + } + + return new AccountAddress({ data: addressBytes }); + } + + /** + * Convenience method for creating an AccountAddress from HexInput. For more + * more information on how this works, see the constructor and fromString. + * + * @param args.input A hex string or Uint8Array representing an account address. + * + * @returns An instance of AccountAddress. + */ + static fromHexInput(args: { input: HexInput }): AccountAddress { + if (args.input instanceof Uint8Array) { + return new AccountAddress({ data: args.input }); + } + return AccountAddress.fromString({ input: args.input }); + } + + /** + * Convenience method for creating an AccountAddress from HexInput. For more + * more information on how this works, see the constructor and fromStringRelaxed. + * + * @param args.input A hex string or Uint8Array representing an account address. + * + * @returns An instance of AccountAddress. + */ + static fromHexInputRelaxed(args: { input: HexInput }): AccountAddress { + if (args.input instanceof Uint8Array) { + return new AccountAddress({ data: args.input }); + } + return AccountAddress.fromStringRelaxed({ input: args.input }); + } + + // === + // Methods for checking validity. + // === + + /** + * Check if the string is a valid AccountAddress. + * + * @param str A hex string representing an account address. + * @param relaxed If true, use relaxed parsing behavior. If false, use strict parsing behavior. + * + * @returns valid = true if the string is valid, valid = false if not. If the string + * is not valid, invalidReason will be set explaining why it is invalid. + */ + static isValid(args: { input: string; relaxed?: boolean }): ParsingResult { + try { + if (args.relaxed) { + AccountAddress.fromStringRelaxed({ input: args.input }); + } else { + AccountAddress.fromString({ input: args.input }); + } + return { valid: true }; + } catch (e) { + const error = e as ParsingError; + return { + valid: false, + invalidReason: error.invalidReason, + invalidReasonMessage: error.message, + }; + } + } + + /** + * Return whether AccountAddresses are equal. AccountAddresses are considered equal + * if their underlying byte data is identical. + * + * @param other The AccountAddress to compare to. + * @returns true if the AccountAddresses are equal, false if not. 
+ */ + equals(other: AccountAddress): boolean { + if (this.data.length !== other.data.length) return false; + return this.data.every((value, index) => value === other.data[index]); + } +} diff --git a/ecosystem/typescript/sdk_v2/src/core/common.ts b/ecosystem/typescript/sdk_v2/src/core/common.ts new file mode 100644 index 0000000000000..19e1e3b68db80 --- /dev/null +++ b/ecosystem/typescript/sdk_v2/src/core/common.ts @@ -0,0 +1,40 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +/** + * This error is used to explain why parsing failed. + */ +export class ParsingError extends Error { + /** + * This provides a programmatic way to access why parsing failed. Downstream devs + * might want to use this to build their own error messages if the default error + * messages are not suitable for their use case. This should be an enum. + */ + public invalidReason: T; + + constructor(message: string, invalidReason: T) { + super(message); + this.invalidReason = invalidReason; + } +} + +/** + * Whereas ParsingError is thrown when parsing fails, e.g. in a fromString function, + * this type is returned from "defensive" functions like isValid. + */ +export type ParsingResult = { + /** + * True if valid, false otherwise. + */ + valid: boolean; + + /* + * If valid is false, this will be a code explaining why parsing failed. + */ + invalidReason?: T; + + /* + * If valid is false, this will be a string explaining why parsing failed. + */ + invalidReasonMessage?: string; +}; diff --git a/ecosystem/typescript/sdk_v2/src/core/hex.ts b/ecosystem/typescript/sdk_v2/src/core/hex.ts new file mode 100644 index 0000000000000..88c2b71125261 --- /dev/null +++ b/ecosystem/typescript/sdk_v2/src/core/hex.ts @@ -0,0 +1,177 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +import { bytesToHex, hexToBytes } from "@noble/hashes/utils"; +import { HexInput } from "../types"; +import { ParsingError, ParsingResult } from "./common"; + +/** + * This enum is used to explain why parsing might have failed. + */ +export enum HexInvalidReason { + TOO_SHORT = "too_short", + INVALID_LENGTH = "invalid_length", + INVALID_HEX_CHARS = "invalid_hex_chars", +} + +/** + * NOTE: Do not use this class when working with account addresses, use AccountAddress. + * + * NOTE: When accepting hex data as input to a function, prefer to accept HexInput and + * then use the static helper methods of this class to convert it into the desired + * format. This enables the greatest flexibility for the developer. + * + * Hex is a helper class for working with hex data. Hex data, when represented as a + * string, generally looks like this, for example: 0xaabbcc, 45cd32, etc. + * + * You might use this class like this: + * + * ```ts + * getTransactionByHash(txnHash: HexInput): Promise { + * const txnHashString = Hex.fromHexInput({ hexInput: txnHash }).toString(); + * return await getTransactionByHashInner(txnHashString); + * } + * ``` + * + * This call to `Hex.fromHexInput().toString()` converts the HexInput to a hex string + * with a leading 0x prefix, regardless of what the input format was. + * + * These are some other ways to chain the functions together: + * - `Hex.fromString({ hexInput: "0x1f" }).toUint8Array()` + * - `new Hex({ data: [1, 3] }).toStringWithoutPrefix()` + */ +export class Hex { + private data: Uint8Array; + + /** + * Create a new Hex instance from a Uint8Array. 
+ * + * @param hex Uint8Array + */ + constructor(args: { data: Uint8Array }) { + this.data = args.data; + } + + // === + // Methods for representing an instance of Hex as other types. + // === + + /** + * Get the inner hex data. The inner data is already a Uint8Array so no conversion + * is taking place here, it just returns the inner data. + * + * @returns Hex data as Uint8Array + */ + toUint8Array(): Uint8Array { + return this.data; + } + + /** + * Get the hex data as a string without the 0x prefix. + * + * @returns Hex string without 0x prefix + */ + toStringWithoutPrefix(): string { + return bytesToHex(this.data); + } + + /** + * Get the hex data as a string with the 0x prefix. + * + * @returns Hex string with 0x prefix + */ + toString(): string { + return `0x${this.toStringWithoutPrefix()}`; + } + + // === + // Methods for creating an instance of Hex from other types. + // === + + /** + * Static method to convert a hex string to Hex + * + * @param str A hex string, with or without the 0x prefix + * + * @returns Hex + */ + static fromString(args: { str: string }): Hex { + let input = args.str; + + if (input.startsWith("0x")) { + input = input.slice(2); + } + + if (input.length === 0) { + throw new ParsingError( + "Hex string is too short, must be at least 1 char long, excluding the optional leading 0x.", + HexInvalidReason.TOO_SHORT, + ); + } + + if (input.length % 2 !== 0) { + throw new ParsingError("Hex string must be an even number of hex characters.", HexInvalidReason.INVALID_LENGTH); + } + + try { + return new Hex({ data: hexToBytes(input) }); + } catch (e) { + const error = e as Error; + throw new ParsingError( + `Hex string contains invalid hex characters: ${error.message}`, + HexInvalidReason.INVALID_HEX_CHARS, + ); + } + } + + /** + * Static method to convert an instance of HexInput to Hex + * + * @param str A HexInput (string or Uint8Array) + * + * @returns Hex + */ + static fromHexInput(args: { hexInput: HexInput }): Hex { + if (args.hexInput instanceof Uint8Array) return new Hex({ data: args.hexInput }); + return Hex.fromString({ str: args.hexInput }); + } + + // === + // Methods for checking validity. + // === + + /** + * Check if the string is valid hex. + * + * @param str A hex string representing byte data. + * + * @returns valid = true if the string is valid, false if not. If the string is not + * valid, invalidReason and invalidReasonMessage will be set explaining why it is + * invalid. + */ + static isValid(args: { str: string }): ParsingResult { + try { + Hex.fromString(args); + return { valid: true }; + } catch (e) { + const error = e as ParsingError; + return { + valid: false, + invalidReason: error.invalidReason, + invalidReasonMessage: error.message, + }; + } + } + + /** + * Return whether Hex instances are equal. Hex instances are considered equal if + * their underlying byte data is identical. + * + * @param other The Hex instance to compare to. + * @returns true if the Hex instances are equal, false if not. 
+ */ + equals(other: Hex): boolean { + if (this.data.length !== other.data.length) return false; + return this.data.every((value, index) => value === other.data[index]); + } +} diff --git a/ecosystem/typescript/sdk_v2/src/core/index.ts b/ecosystem/typescript/sdk_v2/src/core/index.ts new file mode 100644 index 0000000000000..4328f607f82da --- /dev/null +++ b/ecosystem/typescript/sdk_v2/src/core/index.ts @@ -0,0 +1,6 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +export * from "./account_address"; +export * from "./common"; +export * from "./hex"; diff --git a/ecosystem/typescript/sdk_v2/src/types/index.ts b/ecosystem/typescript/sdk_v2/src/types/index.ts index 9b7e79f0a6e24..f1704bfd23d61 100644 --- a/ecosystem/typescript/sdk_v2/src/types/index.ts +++ b/ecosystem/typescript/sdk_v2/src/types/index.ts @@ -1 +1,2 @@ export type AnyNumber = number | bigint; +export type HexInput = string | Uint8Array; diff --git a/ecosystem/typescript/sdk_v2/src/utils/api-endpoints.ts b/ecosystem/typescript/sdk_v2/src/utils/api-endpoints.ts new file mode 100644 index 0000000000000..80f13c29edff4 --- /dev/null +++ b/ecosystem/typescript/sdk_v2/src/utils/api-endpoints.ts @@ -0,0 +1,27 @@ +export const NetworkToIndexerAPI: Record = { + mainnet: "https://indexer.mainnet.aptoslabs.com/v1/graphql", + testnet: "https://indexer-testnet.staging.gcp.aptosdev.com/v1/graphql", + devnet: "https://indexer-devnet.staging.gcp.aptosdev.com/v1/graphql", +}; + +export const NetworkToNodeAPI: Record = { + mainnet: "https://fullnode.mainnet.aptoslabs.com/v1", + testnet: "https://fullnode.testnet.aptoslabs.com/v1", + devnet: "https://fullnode.devnet.aptoslabs.com/v1", + local: "http://localhost:8080/v1", +}; + +export const NetworkToFaucetAPI: Record = { + mainnet: "https://faucet.mainnet.aptoslabs.com", + testnet: "https://faucet.testnet.aptoslabs.com", + devnet: "https://faucet.devnet.aptoslabs.com", + local: "http://localhost:8081", +}; + +export enum Network { + MAINNET = "mainnet", + TESTNET = "testnet", + DEVNET = "devnet", + LOCAL = "local", + CUSTOM = "custom", +} diff --git a/ecosystem/typescript/sdk_v2/src/utils/const.ts b/ecosystem/typescript/sdk_v2/src/utils/const.ts new file mode 100644 index 0000000000000..9ab4ac01bf3c0 --- /dev/null +++ b/ecosystem/typescript/sdk_v2/src/utils/const.ts @@ -0,0 +1,3 @@ +import { Network } from "./api-endpoints"; + +export const DEFAULT_NETWORK = Network.DEVNET; diff --git a/ecosystem/typescript/sdk_v2/tests/unit/account_address.test.ts b/ecosystem/typescript/sdk_v2/tests/unit/account_address.test.ts new file mode 100644 index 0000000000000..226ad38042db7 --- /dev/null +++ b/ecosystem/typescript/sdk_v2/tests/unit/account_address.test.ts @@ -0,0 +1,358 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +import { AccountAddress, AddressInvalidReason } from "../../src/core/account_address"; + +type Addresses = { + shortWith0x: string; + shortWithout0x: string; + longWith0x: string; + longWithout0x: string; + bytes: Uint8Array; +}; + +// Special addresses. 
+ +const ADDRESS_ZERO: Addresses = { + shortWith0x: "0x0", + shortWithout0x: "0", + longWith0x: "0x0000000000000000000000000000000000000000000000000000000000000000", + longWithout0x: "0000000000000000000000000000000000000000000000000000000000000000", + bytes: new Uint8Array([ + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + ]), +}; + +const ADDRESS_ONE: Addresses = { + shortWith0x: "0x1", + shortWithout0x: "1", + longWith0x: "0x0000000000000000000000000000000000000000000000000000000000000001", + longWithout0x: "0000000000000000000000000000000000000000000000000000000000000001", + bytes: new Uint8Array([ + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, + ]), +}; + +const ADDRESS_F: Addresses = { + shortWith0x: "0xf", + shortWithout0x: "f", + longWith0x: "0x000000000000000000000000000000000000000000000000000000000000000f", + longWithout0x: "000000000000000000000000000000000000000000000000000000000000000f", + bytes: new Uint8Array([ + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 15, + ]), +}; + +const ADDRESS_F_PADDED_SHORT_FORM: Addresses = { + shortWith0x: "0x0f", + shortWithout0x: "0f", + longWith0x: "0x000000000000000000000000000000000000000000000000000000000000000f", + longWithout0x: "000000000000000000000000000000000000000000000000000000000000000f", + bytes: new Uint8Array([ + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 15, + ]), +}; + +// Non-special addresses. + +const ADDRESS_TEN: Addresses = { + shortWith0x: "0x10", + shortWithout0x: "10", + longWith0x: "0x0000000000000000000000000000000000000000000000000000000000000010", + longWithout0x: "0000000000000000000000000000000000000000000000000000000000000010", + bytes: new Uint8Array([ + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16, + ]), +}; + +const ADDRESS_OTHER: Addresses = { + shortWith0x: "0xca843279e3427144cead5e4d5999a3d0ca843279e3427144cead5e4d5999a3d0", + shortWithout0x: "ca843279e3427144cead5e4d5999a3d0ca843279e3427144cead5e4d5999a3d0", + // These are the same as the short variants. + longWith0x: "0xca843279e3427144cead5e4d5999a3d0ca843279e3427144cead5e4d5999a3d0", + longWithout0x: "ca843279e3427144cead5e4d5999a3d0ca843279e3427144cead5e4d5999a3d0", + bytes: new Uint8Array([ + 202, 132, 50, 121, 227, 66, 113, 68, 206, 173, 94, 77, 89, 153, 163, 208, 202, 132, 50, 121, 227, 66, 113, 68, 206, + 173, 94, 77, 89, 153, 163, 208, + ]), +}; + +// These tests show that fromStringRelaxed works happily parses all formats. 
+describe("AccountAddress fromStringRelaxed", () => { + it("parses special address: 0x0", () => { + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_ZERO.longWith0x }).toString()).toBe( + ADDRESS_ZERO.shortWith0x, + ); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_ZERO.longWithout0x }).toString()).toBe( + ADDRESS_ZERO.shortWith0x, + ); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_ZERO.shortWith0x }).toString()).toBe( + ADDRESS_ZERO.shortWith0x, + ); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_ZERO.shortWithout0x }).toString()).toBe( + ADDRESS_ZERO.shortWith0x, + ); + }); + + it("parses special address: 0x1", () => { + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_ONE.longWith0x }).toString()).toBe( + ADDRESS_ONE.shortWith0x, + ); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_ONE.longWithout0x }).toString()).toBe( + ADDRESS_ONE.shortWith0x, + ); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_ONE.shortWith0x }).toString()).toBe( + ADDRESS_ONE.shortWith0x, + ); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_ONE.shortWithout0x }).toString()).toBe( + ADDRESS_ONE.shortWith0x, + ); + }); + + it("parses special address: 0xf", () => { + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_F.longWith0x }).toString()).toBe(ADDRESS_F.shortWith0x); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_F.longWithout0x }).toString()).toBe(ADDRESS_F.shortWith0x); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_F.shortWith0x }).toString()).toBe(ADDRESS_F.shortWith0x); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_F.shortWithout0x }).toString()).toBe( + ADDRESS_F.shortWith0x, + ); + }); + + it("parses special address with padded short form: 0x0f", () => { + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_F_PADDED_SHORT_FORM.shortWith0x }).toString()).toBe( + ADDRESS_F.shortWith0x, + ); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_F_PADDED_SHORT_FORM.shortWithout0x }).toString()).toBe( + ADDRESS_F.shortWith0x, + ); + }); + + it("parses non-special address: 0x10", () => { + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_TEN.longWith0x }).toString()).toBe(ADDRESS_TEN.longWith0x); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_TEN.longWithout0x }).toString()).toBe( + ADDRESS_TEN.longWith0x, + ); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_TEN.shortWith0x }).toString()).toBe( + ADDRESS_TEN.longWith0x, + ); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_TEN.shortWithout0x }).toString()).toBe( + ADDRESS_TEN.longWith0x, + ); + }); + + it("parses non-special address: 0xca843279e3427144cead5e4d5999a3d0ca843279e3427144cead5e4d5999a3d0", () => { + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_OTHER.longWith0x }).toString()).toBe( + ADDRESS_OTHER.longWith0x, + ); + expect(AccountAddress.fromStringRelaxed({ input: ADDRESS_OTHER.longWithout0x }).toString()).toBe( + ADDRESS_OTHER.longWith0x, + ); + }); +}); + +// These tests show that fromString only parses addresses with a leading 0x and only +// SHORT if it is a special address. 
+describe("AccountAddress fromString", () => { + it("parses special address: 0x0", () => { + expect(AccountAddress.fromString({ input: ADDRESS_ZERO.longWith0x }).toString()).toBe(ADDRESS_ZERO.shortWith0x); + expect(() => AccountAddress.fromString({ input: ADDRESS_ZERO.longWithout0x })).toThrow(); + expect(AccountAddress.fromString({ input: ADDRESS_ZERO.shortWith0x }).toString()).toBe(ADDRESS_ZERO.shortWith0x); + expect(() => AccountAddress.fromString({ input: ADDRESS_ZERO.shortWithout0x })).toThrow(); + }); + + it("parses special address: 0x1", () => { + expect(AccountAddress.fromString({ input: ADDRESS_ONE.longWith0x }).toString()).toBe(ADDRESS_ONE.shortWith0x); + expect(() => AccountAddress.fromString({ input: ADDRESS_ONE.longWithout0x })).toThrow(); + expect(AccountAddress.fromString({ input: ADDRESS_ONE.shortWith0x }).toString()).toBe(ADDRESS_ONE.shortWith0x); + expect(() => AccountAddress.fromString({ input: ADDRESS_ONE.shortWithout0x })).toThrow(); + }); + + it("parses special address: 0xf", () => { + expect(AccountAddress.fromString({ input: ADDRESS_F.longWith0x }).toString()).toBe(ADDRESS_F.shortWith0x); + expect(() => AccountAddress.fromString({ input: ADDRESS_F.longWithout0x })).toThrow(); + expect(AccountAddress.fromString({ input: ADDRESS_F.shortWith0x }).toString()).toBe(ADDRESS_F.shortWith0x); + expect(() => AccountAddress.fromString({ input: ADDRESS_F.shortWithout0x })).toThrow(); + }); + + it("throws when parsing special address with padded short form: 0x0f", () => { + expect(() => AccountAddress.fromString({ input: ADDRESS_F_PADDED_SHORT_FORM.shortWith0x })).toThrow(); + expect(() => AccountAddress.fromString({ input: ADDRESS_F_PADDED_SHORT_FORM.shortWithout0x })).toThrow(); + }); + + it("parses non-special address: 0x10", () => { + expect(AccountAddress.fromString({ input: ADDRESS_TEN.longWith0x }).toString()).toBe(ADDRESS_TEN.longWith0x); + expect(() => AccountAddress.fromString({ input: ADDRESS_TEN.longWithout0x })).toThrow(); + expect(() => AccountAddress.fromString({ input: ADDRESS_TEN.shortWith0x })).toThrow(); + expect(() => AccountAddress.fromString({ input: ADDRESS_TEN.shortWithout0x })).toThrow(); + }); + + it("parses non-special address: 0xca843279e3427144cead5e4d5999a3d0ca843279e3427144cead5e4d5999a3d0", () => { + expect(AccountAddress.fromString({ input: ADDRESS_OTHER.longWith0x }).toString()).toBe(ADDRESS_OTHER.longWith0x); + expect(() => AccountAddress.fromString({ input: ADDRESS_OTHER.longWithout0x })).toThrow(); + }); +}); + +describe("AccountAddress fromHexInput", () => { + it("parses special address: 0x1", () => { + expect(AccountAddress.fromHexInput({ input: ADDRESS_ONE.longWith0x }).toString()).toBe(ADDRESS_ONE.shortWith0x); + expect(() => AccountAddress.fromHexInput({ input: ADDRESS_ONE.longWithout0x })).toThrow(); + expect(AccountAddress.fromHexInput({ input: ADDRESS_ONE.shortWith0x }).toString()).toBe(ADDRESS_ONE.shortWith0x); + expect(() => AccountAddress.fromHexInput({ input: ADDRESS_ONE.shortWithout0x })).toThrow(); + expect(AccountAddress.fromHexInput({ input: ADDRESS_ONE.bytes }).toString()).toBe(ADDRESS_ONE.shortWith0x); + }); + + it("parses non-special address: 0x10", () => { + expect(AccountAddress.fromHexInput({ input: ADDRESS_TEN.longWith0x }).toString()).toBe(ADDRESS_TEN.longWith0x); + expect(() => AccountAddress.fromHexInput({ input: ADDRESS_TEN.longWithout0x })).toThrow(); + expect(() => AccountAddress.fromHexInput({ input: ADDRESS_TEN.shortWith0x })).toThrow(); + expect(() => AccountAddress.fromHexInput({ input: 
ADDRESS_TEN.shortWithout0x })).toThrow(); + expect(AccountAddress.fromHexInput({ input: ADDRESS_TEN.bytes }).toString()).toBe(ADDRESS_TEN.longWith0x); + }); + + it("parses non-special address: 0xca843279e3427144cead5e4d5999a3d0ca843279e3427144cead5e4d5999a3d0", () => { + expect(AccountAddress.fromHexInput({ input: ADDRESS_OTHER.longWith0x }).toString()).toBe(ADDRESS_OTHER.longWith0x); + expect(() => AccountAddress.fromHexInput({ input: ADDRESS_OTHER.longWithout0x })).toThrow(); + expect(AccountAddress.fromHexInput({ input: ADDRESS_OTHER.bytes }).toString()).toBe(ADDRESS_OTHER.shortWith0x); + }); +}); + +describe("AccountAddress fromHexInputRelaxed", () => { + it("parses special address: 0x1", () => { + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_ONE.longWith0x }).toString()).toBe( + ADDRESS_ONE.shortWith0x, + ); + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_ONE.longWithout0x }).toString()).toBe( + ADDRESS_ONE.shortWith0x, + ); + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_ONE.shortWith0x }).toString()).toBe( + ADDRESS_ONE.shortWith0x, + ); + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_ONE.shortWithout0x }).toString()).toBe( + ADDRESS_ONE.shortWith0x, + ); + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_ONE.bytes }).toString()).toBe(ADDRESS_ONE.shortWith0x); + }); + + it("parses non-special address: 0x10", () => { + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_TEN.longWith0x }).toString()).toBe( + ADDRESS_TEN.longWith0x, + ); + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_TEN.longWithout0x }).toString()).toBe( + ADDRESS_TEN.longWith0x, + ); + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_TEN.shortWith0x }).toString()).toBe( + ADDRESS_TEN.longWith0x, + ); + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_TEN.shortWithout0x }).toString()).toBe( + ADDRESS_TEN.longWith0x, + ); + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_TEN.bytes }).toString()).toBe(ADDRESS_TEN.longWith0x); + }); + + it("parses non-special address: 0xca843279e3427144cead5e4d5999a3d0ca843279e3427144cead5e4d5999a3d0", () => { + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_OTHER.longWith0x }).toString()).toBe( + ADDRESS_OTHER.longWith0x, + ); + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_OTHER.longWithout0x }).toString()).toBe( + ADDRESS_OTHER.longWith0x, + ); + expect(AccountAddress.fromHexInputRelaxed({ input: ADDRESS_OTHER.bytes }).toString()).toBe( + ADDRESS_OTHER.longWith0x, + ); + }); +}); + +describe("AccountAddress toUint8Array", () => { + it("correctly returns bytes for special address: 0x1", () => { + expect(AccountAddress.fromHexInput({ input: ADDRESS_ONE.longWith0x }).toUint8Array()).toEqual(ADDRESS_ONE.bytes); + }); + + it("correctly returns bytes for non-special address: 0x10", () => { + expect(AccountAddress.fromHexInput({ input: ADDRESS_TEN.longWith0x }).toUint8Array()).toEqual(ADDRESS_TEN.bytes); + }); + + it("correctly returns bytes for non-special address: 0xca843279e3427144cead5e4d5999a3d0ca843279e3427144cead5e4d5999a3d0", () => { + expect(AccountAddress.fromHexInput({ input: ADDRESS_OTHER.longWith0x }).toUint8Array()).toEqual( + ADDRESS_OTHER.bytes, + ); + }); +}); + +describe("AccountAddress toStringWithoutPrefix", () => { + it("formats special address correctly: 0x0", () => { + const addr = AccountAddress.fromString({ input: ADDRESS_ZERO.shortWith0x }); + expect(addr.toStringWithoutPrefix()).toBe(ADDRESS_ZERO.shortWithout0x); + }); + + 
it("formats non-special address correctly: 0x10", () => { + const addr = AccountAddress.fromString({ input: ADDRESS_TEN.longWith0x }); + expect(addr.toStringWithoutPrefix()).toBe(ADDRESS_TEN.longWithout0x); + }); +}); + +describe("AccountAddress toStringLong", () => { + it("formats special address correctly: 0x0", () => { + const addr = AccountAddress.fromString({ input: ADDRESS_ZERO.shortWith0x }); + expect(addr.toStringLong()).toBe(ADDRESS_ZERO.longWith0x); + }); + + it("formats non-special address correctly: 0x10", () => { + const addr = AccountAddress.fromString({ input: ADDRESS_TEN.longWith0x }); + expect(addr.toStringLong()).toBe(ADDRESS_TEN.longWith0x); + }); +}); + +describe("AccountAddress toStringLongWithoutPrefix", () => { + it("formats special address correctly: 0x0", () => { + const addr = AccountAddress.fromString({ input: ADDRESS_ZERO.shortWith0x }); + expect(addr.toStringLongWithoutPrefix()).toBe(ADDRESS_ZERO.longWithout0x); + }); + + it("formats non-special address correctly: 0x10", () => { + const addr = AccountAddress.fromString({ input: ADDRESS_TEN.longWith0x }); + expect(addr.toStringLongWithoutPrefix()).toBe(ADDRESS_TEN.longWithout0x); + }); +}); + +describe("AccountAddress other parsing", () => { + it("throws exception when initiating from too long hex string", () => { + expect(() => { + AccountAddress.fromString({ input: `${ADDRESS_ONE.longWith0x}1` }); + }).toThrow("Hex string is too long, must be 1 to 64 chars long, excluding the leading 0x."); + }); + + test("throws when parsing invalid hex char", () => { + expect(() => AccountAddress.fromString({ input: "0xxyz" })).toThrow(); + }); + + test("throws when parsing account address of length zero", () => { + expect(() => AccountAddress.fromString({ input: "0x" })).toThrow(); + expect(() => AccountAddress.fromString({ input: "" })).toThrow(); + }); + + test("throws when parsing invalid prefix", () => { + expect(() => AccountAddress.fromString({ input: "0za" })).toThrow(); + }); + + it("isValid is false if too long with 0xf", () => { + const { valid, invalidReason, invalidReasonMessage } = AccountAddress.isValid({ + input: `0x00${ADDRESS_F.longWithout0x}`, + }); + expect(valid).toBe(false); + expect(invalidReason).toBe(AddressInvalidReason.TOO_LONG); + expect(invalidReasonMessage).toBe("Hex string is too long, must be 1 to 64 chars long, excluding the leading 0x."); + }); + + it("isValid is true if account address string is valid", () => { + const { valid, invalidReason, invalidReasonMessage } = AccountAddress.isValid({ input: ADDRESS_F.longWith0x }); + expect(valid).toBe(true); + expect(invalidReason).toBeUndefined(); + expect(invalidReasonMessage).toBeUndefined(); + }); + + it("compares equality with equals as expected", () => { + const addressOne = AccountAddress.fromStringRelaxed({ input: "0x123" }); + const addressTwo = AccountAddress.fromStringRelaxed({ input: "0x123" }); + expect(addressOne.equals(addressTwo)).toBeTruthy(); + }); +}); diff --git a/ecosystem/typescript/sdk_v2/tests/unit/aptos_config.test.ts b/ecosystem/typescript/sdk_v2/tests/unit/aptos_config.test.ts new file mode 100644 index 0000000000000..8e79dc2275ad0 --- /dev/null +++ b/ecosystem/typescript/sdk_v2/tests/unit/aptos_config.test.ts @@ -0,0 +1,59 @@ +import { Aptos, AptosConfig } from "../../src"; +import { Network } from "../../src/utils/api-endpoints"; + +describe("aptos config", () => { + test("it should set DEVNET network if network is not provided", async () => { + const aptos = new Aptos(); + 
expect(aptos.config.network).toEqual("devnet"); + expect(aptos.config.fullnode).toEqual("https://fullnode.devnet.aptoslabs.com/v1"); + expect(aptos.config.faucet).toEqual("https://faucet.devnet.aptoslabs.com"); + expect(aptos.config.indexer).toEqual("https://indexer-devnet.staging.gcp.aptosdev.com/v1/graphql"); + }); + + test("it should set urls based on the provided network", async () => { + const settings: AptosConfig = { + network: Network.TESTNET, + }; + const aptos = new Aptos(settings); + expect(aptos.config.network).toEqual("testnet"); + expect(aptos.config.fullnode).toEqual("https://fullnode.testnet.aptoslabs.com/v1"); + expect(aptos.config.faucet).toEqual("https://faucet.testnet.aptoslabs.com"); + expect(aptos.config.indexer).toEqual("https://indexer-testnet.staging.gcp.aptosdev.com/v1/graphql"); + }); + + test("it should set urls based on a local network", async () => { + const settings: AptosConfig = { + network: Network.LOCAL, + }; + const aptos = new Aptos(settings); + expect(aptos.config.network).toEqual("local"); + expect(aptos.config.fullnode).toEqual("http://localhost:8080/v1"); + expect(aptos.config.faucet).toEqual("http://localhost:8081"); + expect(aptos.config.indexer).toBeUndefined(); + }); + + test("it should have undefined urls when network is custom and no urls provided", async () => { + const settings: AptosConfig = { + network: Network.CUSTOM, + }; + const aptos = new Aptos(settings); + expect(aptos.config.network).toEqual("custom"); + expect(aptos.config.fullnode).toBeUndefined(); + expect(aptos.config.faucet).toBeUndefined(); + expect(aptos.config.indexer).toBeUndefined(); + }); + + test("it should set urls when network is custom and urls provided", async () => { + const settings: AptosConfig = { + network: Network.CUSTOM, + fullnode: "my-fullnode-url", + faucet: "my-faucet-url", + indexer: "my-indexer-url", + }; + const aptos = new Aptos(settings); + expect(aptos.config.network).toEqual("custom"); + expect(aptos.config.fullnode).toEqual("my-fullnode-url"); + expect(aptos.config.faucet).toEqual("my-faucet-url"); + expect(aptos.config.indexer).toEqual("my-indexer-url"); + }); +}); diff --git a/ecosystem/typescript/sdk_v2/tests/unit/hex.test.ts b/ecosystem/typescript/sdk_v2/tests/unit/hex.test.ts new file mode 100644 index 0000000000000..698c17bcf19fa --- /dev/null +++ b/ecosystem/typescript/sdk_v2/tests/unit/hex.test.ts @@ -0,0 +1,98 @@ +import { ParsingError } from "../../src/core"; +import { Hex, HexInvalidReason } from "../../src/core/hex"; + +const mockHex = { + withoutPrefix: "007711b4d0", + withPrefix: "0x007711b4d0", + bytes: new Uint8Array([0, 119, 17, 180, 208]), +}; + +test("creates a new Hex instance from bytes", () => { + const hex = new Hex({ data: mockHex.bytes }); + expect(hex.toUint8Array()).toEqual(mockHex.bytes); +}); + +test("creates a new Hex instance from string", () => { + const hex = new Hex({ data: mockHex.bytes }); + expect(hex.toString()).toEqual(mockHex.withPrefix); +}); + +test("converts hex bytes input into hex data", () => { + const hex = new Hex({ data: mockHex.bytes }); + expect(hex instanceof Hex).toBeTruthy(); + expect(hex.toUint8Array()).toEqual(mockHex.bytes); +}); + +test("converts hex string input into hex data", () => { + const hex = Hex.fromString({ str: mockHex.withPrefix }); + expect(hex instanceof Hex).toBeTruthy(); + expect(hex.toUint8Array()).toEqual(mockHex.bytes); +}); + +test("accepts hex string input without prefix", () => { + const hex = Hex.fromString({ str: mockHex.withoutPrefix }); + expect(hex instanceof 
Hex).toBeTruthy(); + expect(hex.toUint8Array()).toEqual(mockHex.bytes); +}); + +test("accepts hex string with prefix", () => { + const hex = Hex.fromString({ str: mockHex.withPrefix }); + expect(hex instanceof Hex).toBeTruthy(); + expect(hex.toUint8Array()).toEqual(mockHex.bytes); +}); + +test("converts hex string to bytes", () => { + const hex = Hex.fromHexInput({ hexInput: mockHex.withPrefix }).toUint8Array(); + expect(hex instanceof Uint8Array).toBeTruthy(); + expect(hex).toEqual(mockHex.bytes); +}); + +test("converts hex bytes to string", () => { + const hex = Hex.fromHexInput({ hexInput: mockHex.bytes }).toString(); + expect(typeof hex).toEqual("string"); + expect(hex).toEqual(mockHex.withPrefix); +}); + +test("converts hex bytes to string without 0x prefix", () => { + const hex = Hex.fromHexInput({ hexInput: mockHex.withPrefix }).toStringWithoutPrefix(); + expect(hex).toEqual(mockHex.withoutPrefix); +}); + +test("throws when parsing invalid hex char", () => { + expect(() => Hex.fromString({ str: "0xzyzz" })).toThrow( + "Hex string contains invalid hex characters: Invalid byte sequence", + ); +}); + +test("throws when parsing hex of length zero", () => { + expect(() => Hex.fromString({ str: "0x" })).toThrow( + "Hex string is too short, must be at least 1 char long, excluding the optional leading 0x.", + ); + expect(() => Hex.fromString({ str: "" })).toThrow( + "Hex string is too short, must be at least 1 char long, excluding the optional leading 0x.", + ); +}); + +test("throws when parsing hex of invalid length", () => { + expect(() => Hex.fromString({ str: "0x1" })).toThrow("Hex string must be an even number of hex characters."); +}); + +test("isValid returns true when parsing valid string", () => { + const result = Hex.isValid({ str: "0x11aabb" }); + expect(result.valid).toBe(true); + expect(result.invalidReason).toBeUndefined(); + expect(result.invalidReasonMessage).toBeUndefined(); +}); + +test("isValid returns false when parsing hex of invalid length", () => { + const result = Hex.isValid({ str: "0xa" }); + expect(result.valid).toBe(false); + expect(result.invalidReason).toBe(HexInvalidReason.INVALID_LENGTH); + expect(result.invalidReasonMessage).toBe("Hex string must be an even number of hex characters."); +}); + +test("compares equality with equals as expected", () => { + const hexOne = Hex.fromString({ str: "0x11" }); + const hexTwo = Hex.fromString({ str: "0x11" }); + expect(hexOne.equals(hexTwo)).toBeTruthy(); +}); diff --git a/execution/executor-benchmark/Cargo.toml b/execution/executor-benchmark/Cargo.toml index 1bb1300e2b5bf..15fd91268f487 100644 --- a/execution/executor-benchmark/Cargo.toml +++ b/execution/executor-benchmark/Cargo.toml @@ -50,6 +50,7 @@ toml = { workspace = true } [target.'cfg(unix)'.dependencies] jemallocator = { workspace = true } +aptos-profiler = { workspace = true } [dev-dependencies] aptos-temppath = { workspace = true } diff --git a/execution/executor-benchmark/src/lib.rs b/execution/executor-benchmark/src/lib.rs index 3e8cf06980c63..f578a9ad600e1 100644 --- a/execution/executor-benchmark/src/lib.rs +++ b/execution/executor-benchmark/src/lib.rs @@ -101,6 +101,7 @@ pub fn run_benchmark( transaction_mix: Option>, mut transactions_per_sender: usize, connected_tx_grps: usize, + shuffle_connected_txns: bool, num_main_signer_accounts: usize, num_additional_dst_pool_accounts: usize, source_dir: impl AsRef, @@ -252,6 +253,7 @@ pub fn run_benchmark( num_blocks, transactions_per_sender, connected_tx_grps, + shuffle_connected_txns, ); } if 
pipeline_config.delay_execution_start { @@ -588,10 +590,11 @@ mod tests { 6, /* block_size */ 5, /* num_blocks */ transaction_type.map(|t| vec![(t.materialize(2, false), 1)]), - 2, /* transactions per sender */ - 0, /* independent tx groups in a block */ - 25, /* num_main_signer_accounts */ - 30, /* num_dst_pool_accounts */ + 2, /* transactions per sender */ + 0, /* connected txn groups in a block */ + false, /* shuffle the connected txns in a block */ + 25, /* num_main_signer_accounts */ + 30, /* num_dst_pool_accounts */ storage_dir.as_ref(), checkpoint_dir, verify_sequence_numbers, diff --git a/execution/executor-benchmark/src/main.rs b/execution/executor-benchmark/src/main.rs index 9c56597fc2b8e..fb7353816f426 100644 --- a/execution/executor-benchmark/src/main.rs +++ b/execution/executor-benchmark/src/main.rs @@ -8,6 +8,7 @@ use aptos_config::config::{ use aptos_executor::block_executor::TransactionBlockExecutor; use aptos_executor_benchmark::{native_executor::NativeExecutor, pipeline::PipelineConfig}; use aptos_metrics_core::{register_int_gauge, IntGauge}; +use aptos_profiler::{ProfilerConfig, ProfilerHandler}; use aptos_push_metrics::MetricsPusher; use aptos_transaction_generator_lib::args::TransactionTypeArg; use aptos_vm::AptosVM; @@ -115,6 +116,15 @@ impl PipelineOpt { } } +#[derive(Parser, Debug)] +struct ProfilerOpt { + #[clap(long)] + cpu_profiling: bool, + + #[clap(long)] + memory_profiling: bool, +} + #[derive(Parser, Debug)] struct Opt { #[clap(long, default_value_t = 10000)] @@ -128,6 +138,9 @@ struct Opt { #[clap(long, default_value_t = 0)] connected_tx_grps: usize, + #[clap(long)] + shuffle_connected_txns: bool, + #[clap(long)] concurrency_level: Option, @@ -155,6 +168,9 @@ struct Opt { #[clap(long)] use_native_executor: bool, + + #[clap(flatten)] + profiler_opt: ProfilerOpt, } impl Opt { @@ -287,6 +303,7 @@ where transaction_mix, opt.transactions_per_sender, opt.connected_tx_grps, + opt.shuffle_connected_txns, main_signer_accounts, additional_dst_pool_accounts, data_dir, @@ -343,11 +360,34 @@ fn main() { AptosVM::set_num_shards_once(opt.pipeline_opt.num_executor_shards); NativeExecutor::set_concurrency_level_once(opt.concurrency_level()); + let config = ProfilerConfig::new_with_defaults(); + let handler = ProfilerHandler::new(config); + + let cpu_profiling = opt.profiler_opt.cpu_profiling; + let memory_profiling = opt.profiler_opt.memory_profiling; + + let mut cpu_profiler = handler.get_cpu_profiler(); + let mut memory_profiler = handler.get_mem_profiler(); + + if cpu_profiling { + let _cpu_start = cpu_profiler.start_profiling(); + } + if memory_profiling { + let _mem_start = memory_profiler.start_profiling(); + } + if opt.use_native_executor { run::(opt); } else { run::(opt); } + + if cpu_profiling { + let _cpu_end = cpu_profiler.end_profiling(""); + } + if memory_profiling { + let _mem_end = memory_profiler.end_profiling("./target/release/aptos-executor-benchmark"); + } } #[test] diff --git a/execution/executor-benchmark/src/native_executor.rs b/execution/executor-benchmark/src/native_executor.rs index cd65b8e810179..4726970386e05 100644 --- a/execution/executor-benchmark/src/native_executor.rs +++ b/execution/executor-benchmark/src/native_executor.rs @@ -130,7 +130,7 @@ impl NativeExecutor { ]; // TODO(grao): Some values are fake, because I'm lazy. 
- let events = vec![ContractEvent::new( + let events = vec![ContractEvent::new_v1( EventKey::new(0, sender_address), 0, TypeTag::Struct(Box::new(WithdrawEvent::struct_tag())), @@ -224,7 +224,7 @@ impl NativeExecutor { } let events = vec![ - ContractEvent::new( + ContractEvent::new_v1( EventKey::new(0, recipient_address), 0, TypeTag::Struct(Box::new(DepositEvent::struct_tag())), diff --git a/execution/executor-benchmark/src/transaction_generator.rs b/execution/executor-benchmark/src/transaction_generator.rs index c92fc4fb403fb..a8e5bad2fef2f 100644 --- a/execution/executor-benchmark/src/transaction_generator.rs +++ b/execution/executor-benchmark/src/transaction_generator.rs @@ -274,6 +274,7 @@ impl TransactionGenerator { num_transfer_blocks: usize, transactions_per_sender: usize, connected_tx_grps: usize, + shuffle_connected_txns: bool, ) { assert!(self.block_sender.is_some()); self.gen_transfer_transactions( @@ -281,6 +282,7 @@ impl TransactionGenerator { num_transfer_blocks, transactions_per_sender, connected_tx_grps, + shuffle_connected_txns, ); } @@ -451,91 +453,89 @@ impl TransactionGenerator { } } - /// To generate 'n' connected groups, we divide the signer accounts into 'n' groups, and create - /// 'block_size / n' transactions in each group. - /// To get all the transactions in a group to be connected, we pick at random at-least one of - /// the sender or receiver accounts from the pool of accounts already used for a transaction in - /// the same group. - fn get_connected_grps_transfer_indices( + /// 'Conflicting groups of txns' are a type of 'connected groups of txns'. + /// Here we generate conflicts completely on one particular address (which can be sender or + /// receiver). + /// To generate 'n' conflicting groups, we divide the signer accounts into 'n' pools, and + /// create 'block_size / n' transactions in each group. In each group, we randomly pick + /// an address from the pool belonging to that group, and create all txns with that address as + /// a sender or receiver (thereby generating conflicts around that address). In other words, + /// all txns in a group would have to be executed in serial order. + /// Finally, after generating such groups of conflicting txns, we shuffle them to generate a + /// more realistic workload (that is conflicting txns need not always be consecutive). 
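// A minimal standalone sketch of the conflict-group generation described in the
// doc comment above, under the assumption of the rand 0.8 range API (the crate
// version pinned in this repo calls gen_range(low, high) instead). It mirrors
// only the indexing logic: one "hot" account per group appears in every txn of
// that group; divisibility and small-pool edge cases are elided.
use rand::{rngs::StdRng, seq::SliceRandom, Rng};

fn conflicting_transfer_indices(
    rng: &mut StdRng,
    num_accounts: usize,
    block_size: usize,
    num_grps: usize,
    shuffle: bool,
) -> Vec<(usize, usize)> {
    let accounts_per_grp = num_accounts / num_grps;
    let txns_per_grp = block_size / num_grps;

    let mut account_indices: Vec<usize> = (0..num_accounts).collect();
    account_indices.shuffle(rng);

    let mut transfers: Vec<(usize, usize)> = (0..num_grps)
        .flat_map(|grp| {
            // Each group draws from its own slice of the shuffled accounts; the
            // popped "hot" account takes part in every transfer of the group.
            let start = grp * accounts_per_grp;
            let mut pool = account_indices[start..start + accounts_per_grp].to_vec();
            let hot = pool.pop().unwrap();
            (0..txns_per_grp)
                .map(|_| {
                    let other = pool[rng.gen_range(0..pool.len())];
                    // The hot account is sender or receiver with equal probability.
                    if rng.gen::<bool>() { (hot, other) } else { (other, hot) }
                })
                .collect::<Vec<_>>()
        })
        .collect();

    // Optionally interleave the groups so conflicting txns are not consecutive.
    if shuffle {
        transfers.shuffle(rng);
    }
    transfers
}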
+ fn get_conflicting_grps_transfer_indices( rng: &mut StdRng, num_signer_accounts: usize, block_size: usize, - connected_tx_grps: usize, + conflicting_tx_grps: usize, + shuffle_indices: bool, ) -> Vec<(usize, usize)> { - let num_accounts_per_grp = num_signer_accounts / connected_tx_grps; + let num_accounts_per_grp = num_signer_accounts / conflicting_tx_grps; // TODO: handle when block_size isn't divisible by connected_tx_grps; an easy // way to do this is to just generate a few more transactions in the last group - let num_txns_per_grp = block_size / connected_tx_grps; + let num_txns_per_grp = block_size / conflicting_tx_grps; - if num_txns_per_grp >= num_accounts_per_grp { + if 2 * conflicting_tx_grps >= num_signer_accounts { panic!( - "For the desired workload we want num_accounts_per_grp ({}) > num_txns_per_grp ({})", - num_accounts_per_grp, num_txns_per_grp); - } else if connected_tx_grps > block_size { + "For the desired workload we want num_signer_accounts ({}) > 2 * num_txns_per_grp ({})", + num_signer_accounts, num_txns_per_grp); + } else if conflicting_tx_grps > block_size { panic!( "connected_tx_grps ({}) > block_size ({}) cannot guarantee at least 1 txn per grp", - connected_tx_grps, block_size + conflicting_tx_grps, block_size ); } let mut signer_account_indices: Vec<_> = (0..num_signer_accounts).collect(); signer_account_indices.shuffle(rng); - let mut transfer_indices: Vec<_> = (0..connected_tx_grps) + let mut transfer_indices: Vec<_> = (0..conflicting_tx_grps) .flat_map(|grp_idx| { let accounts_start_idx = grp_idx * num_accounts_per_grp; let accounts_end_idx = accounts_start_idx + num_accounts_per_grp - 1; - let mut unused_indices: Vec<_> = + let mut accounts_pool: Vec<_> = signer_account_indices[accounts_start_idx..=accounts_end_idx].to_vec(); - let mut used_indices: Vec<_> = - vec![unused_indices.pop().unwrap(), unused_indices.pop().unwrap()]; - let mut transfer_indices: Vec<(_, _)> = vec![(used_indices[0], used_indices[1])]; - - for _ in 1..num_txns_per_grp { - // index1 is always from used_indices, so that all the txns are connected - let mut index1 = used_indices[rng.gen_range(0, used_indices.len())]; - - // index2 is either from used_indices or unused_indices with equal probability - let mut index2; - if rng.gen::() { - index2 = used_indices[rng.gen_range(0, used_indices.len())]; - } else { - // unused_indices is shuffled already, so last element is random - index2 = unused_indices.pop().unwrap(); - used_indices.push(index2); - } - - if rng.gen::() { - // with 50% probability, swap the indices of sender and receiver - (index1, index2) = (index2, index1); - } - transfer_indices.push((index1, index2)); - } - transfer_indices + let index1 = accounts_pool.pop().unwrap(); + + let conflicting_indices: Vec<_> = (0..num_txns_per_grp) + .map(|_| { + let index2 = accounts_pool[rng.gen_range(0, accounts_pool.len())]; + if rng.gen::() { + (index1, index2) + } else { + (index2, index1) + } + }) + .collect(); + conflicting_indices }) .collect(); - transfer_indices.shuffle(rng); + if shuffle_indices { + transfer_indices.shuffle(rng); + } transfer_indices } /// A 'connected transaction group' is a group of transactions where all the transactions are - /// connected to each other, that is they cannot be executed in parallel. - /// Transactions across different groups can be executed in parallel. + /// connected to each other. 
For now we generate connected groups of txns as conflicting, but + /// real world workloads can be more complex (and we can generate them as needed in the future). pub fn gen_connected_grps_transfer_transactions( &mut self, block_size: usize, num_blocks: usize, connected_tx_grps: usize, + shuffle_connected_txns: bool, ) { for _ in 0..num_blocks { let num_signer_accounts = self.main_signer_accounts.as_ref().unwrap().accounts.len(); let rng = &mut self.main_signer_accounts.as_mut().unwrap().rng; let transfer_indices: Vec<_> = - TransactionGenerator::get_connected_grps_transfer_indices( + TransactionGenerator::get_conflicting_grps_transfer_indices( rng, num_signer_accounts, block_size, connected_tx_grps, + shuffle_connected_txns, ); let mut transactions: Vec<_> = transfer_indices @@ -567,12 +567,14 @@ impl TransactionGenerator { num_blocks: usize, transactions_per_sender: usize, connected_tx_grps: usize, + shuffle_connected_txns: bool, ) { if connected_tx_grps > 0 { self.gen_connected_grps_transfer_transactions( block_size, num_blocks, connected_tx_grps, + shuffle_connected_txns, ); } else { self.gen_random_transfer_transactions(block_size, num_blocks, transactions_per_sender); @@ -623,7 +625,7 @@ impl TransactionGenerator { } #[test] -fn test_get_connected_grps_transfer_indices() { +fn test_get_conflicting_grps_transfer_indices() { let mut rng = StdRng::from_entropy(); fn dfs(node: usize, adj_list: &HashMap>, visited: &mut HashSet) { @@ -653,11 +655,12 @@ fn test_get_connected_grps_transfer_indices() { // we check for (i) block_size not divisible by connected_txn_grps (ii) when divisible // (iii) when all txns in the block are independent (iv) all txns are dependent for connected_txn_grps in [3, block_size / 10, block_size, 1] { - let transfer_indices = TransactionGenerator::get_connected_grps_transfer_indices( + let transfer_indices = TransactionGenerator::get_conflicting_grps_transfer_indices( &mut rng, num_signer_accounts, block_size, connected_txn_grps, + true, ); let mut adj_list: HashMap> = HashMap::new(); diff --git a/execution/executor-service/src/test_utils.rs b/execution/executor-service/src/test_utils.rs index 2c2b15c467d08..39efe84cdc966 100644 --- a/execution/executor-service/src/test_utils.rs +++ b/execution/executor-service/src/test_utils.rs @@ -118,7 +118,7 @@ pub fn test_sharded_block_executor_no_conflict> .unwrap(); let unsharded_txn_output = AptosVM::execute_block( transactions.into_iter().map(|t| t.into_txn()).collect(), - &executor.data_store(), + executor.data_store(), None, ) .unwrap(); diff --git a/execution/executor-test-helpers/src/integration_test_impl.rs b/execution/executor-test-helpers/src/integration_test_impl.rs index 42f879d008170..69f7d75c5f234 100644 --- a/execution/executor-test-helpers/src/integration_test_impl.rs +++ b/execution/executor-test-helpers/src/integration_test_impl.rs @@ -31,9 +31,18 @@ use aptos_types::{ }; use aptos_vm::AptosVM; use rand::SeedableRng; -use std::sync::Arc; +use std::{path::Path, sync::Arc}; pub fn test_execution_with_storage_impl() -> Arc { + let path = aptos_temppath::TempPath::new(); + path.create_as_dir().unwrap(); + test_execution_with_storage_impl_inner(false, path.path()) +} + +pub fn test_execution_with_storage_impl_inner( + force_sharding: bool, + db_path: &Path, +) -> Arc { const B: u64 = 1_000_000_000; let (genesis, validators) = aptos_vm_genesis::test_genesis_change_set_and_validators(Some(1)); @@ -45,9 +54,8 @@ pub fn test_execution_with_storage_impl() -> Arc { 0, ); - let path = aptos_temppath::TempPath::new(); - 
path.create_as_dir().unwrap(); - let (aptos_db, db, executor, waypoint) = create_db_and_executor(path.path(), &genesis_txn); + let (aptos_db, db, executor, waypoint) = + create_db_and_executor(db_path, &genesis_txn, force_sharding); let parent_block_id = executor.committed_block_id(); let signer = aptos_types::validator_signer::ValidatorSigner::new( @@ -487,7 +495,11 @@ pub fn test_execution_with_storage_impl() -> Arc { assert_eq!(account3_received_events_batch1.len(), 10); // Account3 has one extra deposit event from being minted to. assert_eq!( - account3_received_events_batch1[0].event.sequence_number(), + account3_received_events_batch1[0] + .event + .v1() + .unwrap() + .sequence_number(), 16 ); @@ -503,7 +515,11 @@ pub fn test_execution_with_storage_impl() -> Arc { .unwrap(); assert_eq!(account3_received_events_batch2.len(), 7); assert_eq!( - account3_received_events_batch2[0].event.sequence_number(), + account3_received_events_batch2[0] + .event + .v1() + .unwrap() + .sequence_number(), 6 ); @@ -518,13 +534,16 @@ fn approx_eq(a: u64, b: u64) -> bool { pub fn create_db_and_executor>( path: P, genesis: &Transaction, + force_sharding: bool, // if true force sharding db otherwise using default db ) -> ( Arc, DbReaderWriter, BlockExecutor, Waypoint, ) { - let (db, dbrw) = DbReaderWriter::wrap(AptosDB::new_for_test(&path)); + let (db, dbrw) = force_sharding + .then(|| DbReaderWriter::wrap(AptosDB::new_for_test_with_sharding(&path))) + .unwrap_or_else(|| DbReaderWriter::wrap(AptosDB::new_for_test(&path))); let waypoint = bootstrap_genesis::(&dbrw, genesis).unwrap(); let executor = BlockExecutor::new(dbrw.clone()); diff --git a/execution/executor-types/src/parsed_transaction_output.rs b/execution/executor-types/src/parsed_transaction_output.rs index 0b15724f774aa..1208adb28cf3b 100644 --- a/execution/executor-types/src/parsed_transaction_output.rs +++ b/execution/executor-types/src/parsed_transaction_output.rs @@ -16,7 +16,9 @@ pub struct ParsedTransactionOutput { impl ParsedTransactionOutput { pub fn parse_reconfig_events(events: &[ContractEvent]) -> impl Iterator { - events.iter().filter(|e| *e.key() == *NEW_EPOCH_EVENT_KEY) + events + .iter() + .filter(|e| e.event_key().cloned() == Some(*NEW_EPOCH_EVENT_KEY)) } } diff --git a/execution/executor/src/block_executor.rs b/execution/executor/src/block_executor.rs index 0ea9dc82dade6..217e3c4587cfb 100644 --- a/execution/executor/src/block_executor.rs +++ b/execution/executor/src/block_executor.rs @@ -185,7 +185,7 @@ where block_id, transactions, } = block; - let committed_block = self.block_tree.root_block(); + let committed_block_id = self.committed_block_id(); let mut block_vec = self .block_tree .get_blocks_opt(&[block_id, parent_block_id])?; @@ -203,7 +203,7 @@ where return Ok(b.output.as_state_compute_result(parent_accumulator)); } - let output = if parent_block_id != committed_block.id && parent_output.has_reconfiguration() + let output = if parent_block_id != committed_block_id && parent_output.has_reconfiguration() { info!( LogSchema::new(LogEntry::BlockExecutor).block_id(block_id), diff --git a/execution/executor/src/chunk_executor.rs b/execution/executor/src/chunk_executor.rs index cfc62b7e12e74..dfb65626c5ead 100644 --- a/execution/executor/src/chunk_executor.rs +++ b/execution/executor/src/chunk_executor.rs @@ -475,6 +475,10 @@ impl ChunkExecutorInner { batch_begin, batch_begin + 1, )?; + info!( + version_skipped = batch_begin, + "Skipped known broken transaction, applied transaction output directly." 
+ ); batch_begin += 1; batch_end = *batch_ends.next().unwrap(); continue; diff --git a/execution/executor/src/components/chunk_output.rs b/execution/executor/src/components/chunk_output.rs index f85cf208f2667..c7d01a9fc3245 100644 --- a/execution/executor/src/components/chunk_output.rs +++ b/execution/executor/src/components/chunk_output.rs @@ -17,6 +17,7 @@ use aptos_storage_interface::{ use aptos_types::{ account_config::CORE_CODE_ADDRESS, block_executor::partitioner::{ExecutableTransactions, PartitionedTransactions}, + contract_event::ContractEvent, transaction::{ExecutionStatus, Transaction, TransactionOutput, TransactionStatus}, }; use aptos_vm::{ @@ -202,7 +203,7 @@ impl ChunkOutput { ) -> Result> { Ok(V::execute_block( transactions, - &state_view, + state_view, maybe_block_gas_limit, )?) } @@ -223,7 +224,7 @@ impl ChunkOutput { let transaction_outputs = match state_view.id() { // this state view ID implies a genesis block in non-test cases. StateViewId::Miscellaneous => { - V::execute_block(transactions, &state_view, maybe_block_gas_limit)? + V::execute_block(transactions, state_view, maybe_block_gas_limit)? }, _ => transactions .iter() @@ -390,11 +391,16 @@ pub fn update_counters_for_processed_chunk( } for event in output.events() { - let is_core = event.key().get_creator_address() == CORE_CODE_ADDRESS; - let creation_number = if is_core && detailed_counters { - event.key().get_creation_number().to_string() - } else { - "event".to_string() + let (is_core, creation_number) = match event { + ContractEvent::V1(v1) => ( + v1.key().get_creator_address() == CORE_CODE_ADDRESS, + if detailed_counters { + v1.key().get_creation_number().to_string() + } else { + "event".to_string() + }, + ), + ContractEvent::V2(_v2) => (false, "event".to_string()), }; metrics::APTOS_PROCESSED_USER_TRANSACTIONS_CORE_EVENTS .with_label_values(&[ diff --git a/execution/executor/src/mock_vm/mock_vm_test.rs b/execution/executor/src/mock_vm/mock_vm_test.rs index 05cbd65739387..21cbab6817b1e 100644 --- a/execution/executor/src/mock_vm/mock_vm_test.rs +++ b/execution/executor/src/mock_vm/mock_vm_test.rs @@ -28,10 +28,6 @@ impl TStateView for MockStateView { Ok(None) } - fn is_genesis(&self) -> bool { - false - } - fn get_usage(&self) -> Result { Ok(StateStorageUsage::new_untracked()) } diff --git a/execution/executor/src/mock_vm/mod.rs b/execution/executor/src/mock_vm/mod.rs index 5e65c9f2d85d2..ee85fdcee3eca 100644 --- a/execution/executor/src/mock_vm/mod.rs +++ b/execution/executor/src/mock_vm/mod.rs @@ -81,27 +81,6 @@ impl VMExecutor for MockVM { state_view: &impl StateView, _maybe_block_gas_limit: Option, ) -> Result, VMStatus> { - if state_view.is_genesis() { - assert_eq!( - transactions.len(), - 1, - "Genesis block should have only one transaction." - ); - let output = TransactionOutput::new( - gen_genesis_writeset(), - // mock the validator set event - vec![ContractEvent::new( - new_epoch_event_key(), - 0, - TypeTag::Bool, - bcs::to_bytes(&0).unwrap(), - )], - 0, - KEEP_STATUS.clone(), - ); - return Ok(vec![output]); - } - // output_cache is used to store the output of transactions so they are visible to later // transactions. let mut output_cache = HashMap::new(); @@ -132,7 +111,7 @@ impl VMExecutor for MockVM { // WriteSet cannot be empty so use genesis writeset only for testing. 
gen_genesis_writeset(), // mock the validator set event - vec![ContractEvent::new( + vec![ContractEvent::new_v1( new_epoch_event_key(), 0, TypeTag::Bool, @@ -341,7 +320,7 @@ fn gen_payment_writeset( } fn gen_events(sender: AccountAddress) -> Vec { - vec![ContractEvent::new( + vec![ContractEvent::new_v1( EventKey::new(111, sender), 0, TypeTag::Vector(Box::new(TypeTag::U8)), diff --git a/execution/executor/tests/db_bootstrapper_test.rs b/execution/executor/tests/db_bootstrapper_test.rs index 74651fd30b862..d6c1f8962acc8 100644 --- a/execution/executor/tests/db_bootstrapper_test.rs +++ b/execution/executor/tests/db_bootstrapper_test.rs @@ -255,7 +255,7 @@ fn test_new_genesis() { .freeze() .unwrap(), vec![ - ContractEvent::new( + ContractEvent::new_v1( *configuration.events().key(), 0, TypeTag::Struct(Box::new( @@ -263,7 +263,7 @@ fn test_new_genesis() { )), vec![], ), - ContractEvent::new( + ContractEvent::new_v1( new_block_event_key(), 0, TypeTag::Struct(Box::new(NewBlockEvent::struct_tag())), diff --git a/execution/executor/tests/storage_integration_test.rs b/execution/executor/tests/storage_integration_test.rs index 3c8db39c61577..c7234cbbdc189 100644 --- a/execution/executor/tests/storage_integration_test.rs +++ b/execution/executor/tests/storage_integration_test.rs @@ -31,7 +31,7 @@ fn test_genesis() { let path = aptos_temppath::TempPath::new(); path.create_as_dir().unwrap(); let genesis = aptos_vm_genesis::test_genesis_transaction(); - let (_, db, _executor, waypoint) = create_db_and_executor(path.path(), &genesis); + let (_, db, _executor, waypoint) = create_db_and_executor(path.path(), &genesis, false); let trusted_state = TrustedState::from_epoch_waypoint(waypoint); let state_proof = db.reader.get_state_proof(trusted_state.version()).unwrap(); @@ -77,7 +77,7 @@ fn test_reconfiguration() { let (genesis, validators) = aptos_vm_genesis::test_genesis_change_set_and_validators(Some(1)); let genesis_key = &aptos_vm_genesis::GENESIS_KEYPAIR.0; let genesis_txn = Transaction::GenesisTransaction(WriteSetPayload::Direct(genesis)); - let (_, db, executor, _waypoint) = create_db_and_executor(path.path(), &genesis_txn); + let (_, db, executor, _waypoint) = create_db_and_executor(path.path(), &genesis_txn, false); let parent_block_id = executor.committed_block_id(); let signer = ValidatorSigner::new( validators[0].data.owner_address, diff --git a/scripts/dev_setup.sh b/scripts/dev_setup.sh index 6885b35260a97..5e2ede5d2b3ff 100755 --- a/scripts/dev_setup.sh +++ b/scripts/dev_setup.sh @@ -672,7 +672,7 @@ function install_lld { # this is needed for hdpi crate from aptos-ledger function install_libudev-dev { # Need to install libudev-dev for linux - if [[ "$(uname)" == "Linux" ]]; then + if [[ "$(uname)" == "Linux" && "$PACKAGE_MANAGER" != "pacman" ]]; then install_pkg libudev-dev "$PACKAGE_MANAGER" fi } @@ -1034,9 +1034,12 @@ if [[ "$INSTALL_JSTS" == "true" ]]; then fi install_python3 -pip3 install pre-commit - -install_libudev-dev +if [[ "$PACKAGE_MANAGER" != "pacman" ]]; then + pip3 install pre-commit + install_libudev-dev +else + install_pkg python-pre-commit "$PACKAGE_MANAGER" +fi # For now best effort install, will need to improve later if command -v pre-commit; then diff --git a/state-sync/inter-component/consensus-notifications/src/lib.rs b/state-sync/inter-component/consensus-notifications/src/lib.rs index 96ddd7a6c7369..2bb72a7c7defb 100644 --- a/state-sync/inter-component/consensus-notifications/src/lib.rs +++ b/state-sync/inter-component/consensus-notifications/src/lib.rs @@ -439,7 +439,7 
@@ mod tests { } fn create_contract_event() -> ContractEvent { - ContractEvent::new( + ContractEvent::new_v1( EventKey::new(0, AccountAddress::random()), 0, TypeTag::Bool, diff --git a/state-sync/inter-component/event-notifications/src/lib.rs b/state-sync/inter-component/event-notifications/src/lib.rs index 5077cc1d40099..f2383ae469148 100644 --- a/state-sync/inter-component/event-notifications/src/lib.rs +++ b/state-sync/inter-component/event-notifications/src/lib.rs @@ -201,30 +201,33 @@ impl EventSubscriptionService { let mut reconfig_event_found = false; let mut event_subscription_ids_to_notify = HashSet::new(); + // TODO(eventv2): This doesn't deal with module events subscriptions. for event in events.iter() { - let event_key = event.key(); - - // Process all subscriptions for the current event - if let Some(subscription_ids) = self.event_key_subscriptions.get(event_key) { - // Add the event to the subscription's pending event buffer - // and store the subscriptions that will need to notified once all - // events have been processed. - for subscription_id in subscription_ids.iter() { - if let Some(event_subscription) = self - .subscription_id_to_event_subscription - .get_mut(subscription_id) - { - event_subscription.buffer_event(event.clone()); - event_subscription_ids_to_notify.insert(*subscription_id); - } else { - return Err(Error::MissingEventSubscription(*subscription_id)); + if let ContractEvent::V1(v1) = event { + let event_key = v1.key(); + + // Process all subscriptions for the current event + if let Some(subscription_ids) = self.event_key_subscriptions.get(event_key) { + // Add the event to the subscription's pending event buffer + // and store the subscriptions that will need to notified once all + // events have been processed. + for subscription_id in subscription_ids.iter() { + if let Some(event_subscription) = self + .subscription_id_to_event_subscription + .get_mut(subscription_id) + { + event_subscription.buffer_event(event.clone()); + event_subscription_ids_to_notify.insert(*subscription_id); + } else { + return Err(Error::MissingEventSubscription(*subscription_id)); + } } } - } - // Take note if a reconfiguration (new epoch) has occurred - if *event_key == on_chain_config::new_epoch_event_key() { - reconfig_event_found = true; + // Take note if a reconfiguration (new epoch) has occurred + if *event_key == on_chain_config::new_epoch_event_key() { + reconfig_event_found = true; + } } } diff --git a/state-sync/inter-component/event-notifications/src/tests.rs b/state-sync/inter-component/event-notifications/src/tests.rs index f5ca813c5c056..e9da128a44d86 100644 --- a/state-sync/inter-component/event-notifications/src/tests.rs +++ b/state-sync/inter-component/event-notifications/src/tests.rs @@ -492,7 +492,7 @@ fn notify_events( } fn create_test_event(event_key: EventKey) -> ContractEvent { - ContractEvent::new(event_key, 0, TypeTag::Bool, bcs::to_bytes(&0).unwrap()) + ContractEvent::new_v1(event_key, 0, TypeTag::Bool, bcs::to_bytes(&0).unwrap()) } fn create_random_event_key() -> EventKey { diff --git a/state-sync/state-sync-v2/state-sync-driver/src/tests/storage_synchronizer.rs b/state-sync/state-sync-v2/state-sync-driver/src/tests/storage_synchronizer.rs index 97be2bdd23faa..21643f7b221f9 100644 --- a/state-sync/state-sync-v2/state-sync-driver/src/tests/storage_synchronizer.rs +++ b/state-sync/state-sync-v2/state-sync-driver/src/tests/storage_synchronizer.rs @@ -81,7 +81,7 @@ async fn test_apply_transaction_outputs() { // Subscribe to the expected event let mut 
event_listener = event_subscription_service .lock() - .subscribe_to_events(vec![*event_to_commit.key()]) + .subscribe_to_events(vec![*event_to_commit.v1().unwrap().key()]) .unwrap(); // Attempt to apply a chunk of outputs @@ -214,7 +214,7 @@ async fn test_execute_transactions() { // Subscribe to the expected event let mut event_listener = event_subscription_service .lock() - .subscribe_to_events(vec![*event_to_commit.key()]) + .subscribe_to_events(vec![*event_to_commit.v1().unwrap().key()]) .unwrap(); // Attempt to execute a chunk of transactions diff --git a/state-sync/state-sync-v2/state-sync-driver/src/tests/utils.rs b/state-sync/state-sync-v2/state-sync-driver/src/tests/utils.rs index 6337e2b113c25..2c2f4c5421a59 100644 --- a/state-sync/state-sync-v2/state-sync-driver/src/tests/utils.rs +++ b/state-sync/state-sync-v2/state-sync-driver/src/tests/utils.rs @@ -59,7 +59,7 @@ pub fn create_epoch_ending_ledger_info() -> LedgerInfoWithSignatures { /// Creates a single test event pub fn create_event(event_key: Option) -> ContractEvent { let event_key = event_key.unwrap_or_else(EventKey::random); - ContractEvent::new(event_key, 0, TypeTag::Bool, bcs::to_bytes(&0).unwrap()) + ContractEvent::new_v1(event_key, 0, TypeTag::Bool, bcs::to_bytes(&0).unwrap()) } /// Creates a test driver configuration for full nodes diff --git a/state-sync/storage-service/server/src/handler.rs b/state-sync/storage-service/server/src/handler.rs index 1da068eb2df9c..658d659c25070 100644 --- a/state-sync/storage-service/server/src/handler.rs +++ b/state-sync/storage-service/server/src/handler.rs @@ -7,13 +7,15 @@ use crate::{ metrics, metrics::{ increment_counter, start_timer, LRU_CACHE_HIT, LRU_CACHE_PROBE, OPTIMISTIC_FETCH_ADD, + SUBSCRIPTION_ADD, SUBSCRIPTION_FAILURE, }, moderator::RequestModerator, network::ResponseSender, optimistic_fetch::OptimisticFetchRequest, storage::StorageReaderInterface, + subscription::{SubscriptionRequest, SubscriptionStreamRequests}, }; -use aptos_config::network_id::PeerNetworkId; +use aptos_config::{config::StorageServiceConfig, network_id::PeerNetworkId}; use aptos_infallible::Mutex; use aptos_logger::{debug, error, sample, sample::SampleRate, trace, warn}; use aptos_storage_service_types::{ @@ -32,7 +34,7 @@ use aptos_types::transaction::Version; use arc_swap::ArcSwap; use dashmap::DashMap; use lru::LruCache; -use std::{sync::Arc, time::Duration}; +use std::{collections::HashMap, sync::Arc, time::Duration}; /// Storage server constants const INVALID_REQUEST_LOG_FREQUENCY_SECS: u64 = 5; // The frequency to log invalid requests (secs) @@ -49,6 +51,7 @@ pub struct Handler { lru_response_cache: Arc>>, request_moderator: Arc, storage: T, + subscriptions: Arc>>, time_service: TimeService, } @@ -59,14 +62,16 @@ impl Handler { lru_response_cache: Arc>>, request_moderator: Arc, storage: T, + subscriptions: Arc>>, time_service: TimeService, ) -> Self { Self { - storage, cached_storage_server_summary, optimistic_fetches, lru_response_cache, request_moderator, + storage, + subscriptions, time_service, } } @@ -75,6 +80,7 @@ impl Handler { /// request directly. 
pub fn process_request_and_respond( &self, + storage_service_config: StorageServiceConfig, peer_network_id: PeerNetworkId, request: StorageServiceRequest, response_sender: ResponseSender, @@ -92,6 +98,17 @@ impl Handler { return; } + // Handle any subscription requests + if request.data_request.is_subscription_request() { + self.handle_subscription_request( + storage_service_config, + peer_network_id, + request, + response_sender, + ); + return; + } + // Process the request and return the response to the client let response = self.process_request(&peer_network_id, request.clone(), false); self.send_response(request, response, response_sender); @@ -232,6 +249,79 @@ impl Handler { ); } + /// Handles the given subscription request. If a failure + /// occurs during handling, the client is notified. + pub fn handle_subscription_request( + &self, + storage_service_config: StorageServiceConfig, + peer_network_id: PeerNetworkId, + request: StorageServiceRequest, + response_sender: ResponseSender, + ) { + // Create a new subscription request + let subscription_request = + SubscriptionRequest::new(request.clone(), response_sender, self.time_service.clone()); + + // Grab the lock on the active subscriptions map + let mut subscriptions = self.subscriptions.lock(); + + // Get the existing stream ID and the request stream ID + let existing_stream_id = + subscriptions + .get_mut(&peer_network_id) + .map(|subscription_stream_requests| { + subscription_stream_requests.subscription_stream_id() + }); + let request_stream_id = subscription_request.subscription_stream_id(); + + // If the stream already exists, add the request to the stream. Otherwise, create a new one. + if existing_stream_id == Some(request_stream_id) { + // Add the request to the existing stream (the stream IDs match) + if let Some(existing_stream) = subscriptions.get_mut(&peer_network_id) { + if let Err((error, subscription_request)) = existing_stream + .add_subscription_request(storage_service_config, subscription_request) + { + // Something went wrong when adding the request to the stream + sample!( + SampleRate::Duration(Duration::from_secs(INVALID_REQUEST_LOG_FREQUENCY_SECS)), + warn!(LogSchema::new(LogEntry::SubscriptionRequest) + .error(&error) + .peer_network_id(&peer_network_id) + .request(&request) + ); + ); + + // Update the subscription metrics + increment_counter( + &metrics::SUBSCRIPTION_EVENTS, + peer_network_id.network_id(), + SUBSCRIPTION_FAILURE.into(), + ); + + // Notify the client of the failure + self.send_response( + request, + Err(StorageServiceError::InvalidRequest(error.to_string())), + subscription_request.take_response_sender(), + ); + return; + } + } + } else { + // Create a new stream (either no stream exists, or we have a new stream ID) + let subscription_stream_requests = + SubscriptionStreamRequests::new(subscription_request, self.time_service.clone()); + subscriptions.insert(peer_network_id, subscription_stream_requests); + } + + // Update the subscription metrics + increment_counter( + &metrics::SUBSCRIPTION_EVENTS, + peer_network_id.network_id(), + SUBSCRIPTION_ADD.into(), + ); + } + /// Processes a storage service request for which the response /// might already be cached. 
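// The add_subscription_request call above hands the SubscriptionRequest back
// inside the Err variant, so the handler can still reclaim the one-shot
// response sender and tell the client why the request was rejected. A minimal
// standalone sketch of that "return ownership on failure" shape, with toy
// types standing in for SubscriptionRequest and ResponseSender:
struct Response(String);

struct ToyRequest {
    stream_index: u64,
    response_sender: std::sync::mpsc::Sender<Response>,
}

impl ToyRequest {
    fn take_response_sender(self) -> std::sync::mpsc::Sender<Response> {
        self.response_sender
    }
}

struct ToyStream {
    max_pending: usize,
    pending: Vec<ToyRequest>,
}

impl ToyStream {
    /// On failure, the rejected request travels back to the caller inside Err,
    /// so nothing owned by the request (such as its sender) is lost.
    fn add_request(&mut self, request: ToyRequest) -> Result<(), (String, ToyRequest)> {
        if self.pending.len() >= self.max_pending {
            return Err(("too many pending requests".to_string(), request));
        }
        self.pending.push(request);
        Ok(())
    }
}

fn main() {
    let (sender, receiver) = std::sync::mpsc::channel();
    let mut stream = ToyStream { max_pending: 0, pending: Vec::new() };

    let request = ToyRequest { stream_index: 3, response_sender: sender };
    if let Err((error, rejected)) = stream.add_request(request) {
        // Mirrors the handler: notify the client through the reclaimed sender.
        let message = format!("request {} rejected: {}", rejected.stream_index, error);
        rejected.take_response_sender().send(Response(message)).unwrap();
    }
    println!("{}", receiver.recv().unwrap().0);
}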
fn process_cachable_request( diff --git a/state-sync/storage-service/server/src/lib.rs b/state-sync/storage-service/server/src/lib.rs index 309322afcd70b..0f6d4d225153e 100644 --- a/state-sync/storage-service/server/src/lib.rs +++ b/state-sync/storage-service/server/src/lib.rs @@ -7,6 +7,7 @@ use crate::{ logging::{LogEntry, LogSchema}, network::StorageServiceNetworkEvents, + subscription::SubscriptionStreamRequests, }; use aptos_bounded_executor::BoundedExecutor; use aptos_channels::{aptos_channel, message_queues::QueueStyle}; @@ -31,7 +32,7 @@ use handler::Handler; use lru::LruCache; use moderator::RequestModerator; use optimistic_fetch::OptimisticFetchRequest; -use std::{ops::Deref, sync::Arc, time::Duration}; +use std::{collections::HashMap, ops::Deref, sync::Arc, time::Duration}; use storage::StorageReaderInterface; use thiserror::Error; use tokio::runtime::Handle; @@ -44,6 +45,7 @@ mod moderator; pub mod network; mod optimistic_fetch; pub mod storage; +mod subscription; mod utils; #[cfg(test)] @@ -76,6 +78,10 @@ pub struct StorageServiceServer { // A set of active optimistic fetches for peers waiting for new data optimistic_fetches: Arc>, + // TODO: Reduce lock contention on the mutex. + // A set of active subscriptions for peers waiting for new data + subscriptions: Arc>>, + // A moderator for incoming peer requests request_moderator: Arc, @@ -108,6 +114,7 @@ impl StorageServiceServer { let lru_response_cache = Arc::new(Mutex::new(LruCache::new( storage_service_config.max_lru_cache_size as usize, ))); + let subscriptions = Arc::new(Mutex::new(HashMap::new())); let request_moderator = Arc::new(RequestModerator::new( aptos_data_client_config, cached_storage_server_summary.clone(), @@ -126,6 +133,7 @@ impl StorageServiceServer { cached_storage_server_summary, lru_response_cache, optimistic_fetches, + subscriptions, request_moderator, storage_service_listener, } @@ -133,17 +141,27 @@ impl StorageServiceServer { /// Spawns all continuously running utility tasks async fn spawn_continuous_storage_summary_tasks(&mut self) { - // Create a channel to notify the optimistic fetch - // handler about updates to the cached storage summary. - let (cached_summary_update_notifier, cached_summary_update_listener) = + // Create channels to notify the optimistic fetch and subscription + // handlers about updates to the cached storage summary. 
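// A simplified sketch of the fan-out pattern used here: the summary refresher
// pushes a Copy-able notification into one channel per handler, and each
// handler wakes up on a fixed ticker or on such a notification, whichever
// fires first. Plain tokio primitives are assumed for the illustration; the
// server itself uses aptos_channel (bounded, LIFO), TimeService and
// futures::select! instead.
use std::time::Duration;
use tokio::{sync::mpsc, time};

#[derive(Clone, Copy, Debug)]
struct CacheUpdateNotification {
    highest_synced_version: Option<u64>,
}

/// Called by the storage summary refresher after every successful refresh.
fn notify_handlers(
    notifiers: &[mpsc::Sender<CacheUpdateNotification>],
    notification: CacheUpdateNotification,
) {
    for notifier in notifiers {
        // Best effort: a full or closed channel is logged and skipped in the server.
        let _ = notifier.try_send(notification);
    }
}

/// One handler task (optimistic fetches or subscriptions) consuming its channel.
async fn run_handler(mut updates: mpsc::Receiver<CacheUpdateNotification>) {
    let mut ticker = time::interval(Duration::from_millis(100));
    loop {
        tokio::select! {
            _ = ticker.tick() => {
                // Periodic pass over the active requests.
            },
            Some(notification) = updates.recv() => {
                // Event-driven pass: new data may have become available.
                let _ = notification.highest_synced_version;
            },
        }
    }
}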
+ let (cache_update_notifier_optimistic_fetch, cache_update_listener_optimistic_fetch) = + aptos_channel::new(QueueStyle::LIFO, CACHED_SUMMARY_UPDATE_CHANNEL_SIZE, None); + let (cache_update_notifier_subscription, cache_update_listener_subscription) = aptos_channel::new(QueueStyle::LIFO, CACHED_SUMMARY_UPDATE_CHANNEL_SIZE, None); // Spawn the refresher for the storage summary cache - self.spawn_storage_summary_refresher(cached_summary_update_notifier) + let cache_update_notifiers = vec![ + cache_update_notifier_optimistic_fetch.clone(), + cache_update_notifier_subscription.clone(), + ]; + self.spawn_storage_summary_refresher(cache_update_notifiers) .await; // Spawn the optimistic fetch handler - self.spawn_optimistic_fetch_handler(cached_summary_update_listener) + self.spawn_optimistic_fetch_handler(cache_update_listener_optimistic_fetch) + .await; + + // Spawn the subscription handler + self.spawn_subscription_handler(cache_update_listener_subscription) .await; // Spawn the refresher for the request moderator @@ -153,7 +171,7 @@ impl StorageServiceServer { /// Spawns a non-terminating task that refreshes the cached storage server summary async fn spawn_storage_summary_refresher( &mut self, - cached_summary_update_notifier: aptos_channel::Sender<(), CachedSummaryUpdateNotification>, + cache_update_notifiers: Vec>, ) { // Clone all required components for the task let cached_storage_server_summary = self.cached_storage_server_summary.clone(); @@ -170,8 +188,6 @@ impl StorageServiceServer { // Spawn the task self.bounded_executor .spawn(async move { - // TODO: consider removing the interval once we've tested the notifications - // Create a ticker for the refresh interval let duration = Duration::from_millis(config.storage_summary_refresh_interval_ms); let ticker = time_service.interval(duration); @@ -186,7 +202,7 @@ impl StorageServiceServer { cached_storage_server_summary.clone(), storage.clone(), config, - cached_summary_update_notifier.clone(), + cache_update_notifiers.clone(), ) }, notification = storage_service_listener.select_next_some() => { @@ -202,7 +218,7 @@ impl StorageServiceServer { cached_storage_server_summary.clone(), storage.clone(), config, - cached_summary_update_notifier.clone(), + cache_update_notifiers.clone(), ) }, } @@ -227,13 +243,12 @@ impl StorageServiceServer { let lru_response_cache = self.lru_response_cache.clone(); let request_moderator = self.request_moderator.clone(); let storage = self.storage.clone(); + let subscriptions = self.subscriptions.clone(); let time_service = self.time_service.clone(); // Spawn the task self.bounded_executor .spawn(async move { - // TODO: consider removing the interval once we've tested the notifications - // Create a ticker for the refresh interval let duration = Duration::from_millis(config.storage_summary_refresh_interval_ms); let ticker = time_service.interval(duration); @@ -252,12 +267,13 @@ impl StorageServiceServer { lru_response_cache.clone(), request_moderator.clone(), storage.clone(), + subscriptions.clone(), time_service.clone(), ).await; }, notification = cached_summary_update_listener.select_next_some() => { trace!(LogSchema::new(LogEntry::ReceivedCacheUpdateNotification) - .message(&format!("Received cache update notification! Highest synced version: {:?}", notification.highest_synced_version)) + .message(&format!("Received cache update notification for optimistic fetch handler! 
Highest synced version: {:?}", notification.highest_synced_version)) ); // Handle the optimistic fetches because of a cache update @@ -269,6 +285,75 @@ impl StorageServiceServer { lru_response_cache.clone(), request_moderator.clone(), storage.clone(), + subscriptions.clone(), + time_service.clone(), + ).await; + }, + } + } + }) + .await; + } + + /// Spawns a non-terminating task that handles subscriptions + async fn spawn_subscription_handler( + &mut self, + mut cached_summary_update_listener: aptos_channel::Receiver< + (), + CachedSummaryUpdateNotification, + >, + ) { + // Clone all required components for the task + let bounded_executor = self.bounded_executor.clone(); + let cached_storage_server_summary = self.cached_storage_server_summary.clone(); + let config = self.storage_service_config; + let optimistic_fetches = self.optimistic_fetches.clone(); + let lru_response_cache = self.lru_response_cache.clone(); + let request_moderator = self.request_moderator.clone(); + let storage = self.storage.clone(); + let subscriptions = self.subscriptions.clone(); + let time_service = self.time_service.clone(); + + // Spawn the task + self.bounded_executor + .spawn(async move { + // Create a ticker for the refresh interval + let duration = Duration::from_millis(config.storage_summary_refresh_interval_ms); + let ticker = time_service.interval(duration); + futures::pin_mut!(ticker); + + // Continuously handle the subscriptions + loop { + futures::select! { + _ = ticker.select_next_some() => { + // Handle the subscriptions periodically + handle_active_subscriptions( + bounded_executor.clone(), + cached_storage_server_summary.clone(), + config, + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage.clone(), + subscriptions.clone(), + time_service.clone(), + ).await; + }, + notification = cached_summary_update_listener.select_next_some() => { + trace!(LogSchema::new(LogEntry::ReceivedCacheUpdateNotification) + .message(&format!("Received cache update notification for subscription handler! Highest synced version: {:?}", notification.highest_synced_version)) + ); + + // Handle the subscriptions because of a cache update + handle_active_subscriptions( + bounded_executor.clone(), + cached_storage_server_summary.clone(), + config, + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage.clone(), + subscriptions.clone(), time_service.clone(), ).await; }, @@ -331,8 +416,10 @@ impl StorageServiceServer { // I/O-bound, so we want to spawn on the blocking thread pool to // avoid starving other async tasks on the same runtime. 
let storage = self.storage.clone(); + let config = self.storage_service_config; let cached_storage_server_summary = self.cached_storage_server_summary.clone(); let optimistic_fetches = self.optimistic_fetches.clone(); + let subscriptions = self.subscriptions.clone(); let lru_response_cache = self.lru_response_cache.clone(); let request_moderator = self.request_moderator.clone(); let time_service = self.time_service.clone(); @@ -344,9 +431,11 @@ impl StorageServiceServer { lru_response_cache, request_moderator, storage, + subscriptions, time_service, ) .process_request_and_respond( + config, peer_network_id, storage_service_request, network_request.response_sender, @@ -369,6 +458,14 @@ impl StorageServiceServer { ) -> Arc> { self.optimistic_fetches.clone() } + + #[cfg(test)] + /// Returns a copy of the active subscriptions for test purposes + pub(crate) fn get_subscriptions( + &self, + ) -> Arc>> { + self.subscriptions.clone() + } } /// Handles the active optimistic fetches and logs any @@ -381,6 +478,7 @@ async fn handle_active_optimistic_fetches( lru_response_cache: Arc>>, request_moderator: Arc, storage: T, + subscriptions: Arc>>, time_service: TimeService, ) { if let Err(error) = optimistic_fetch::handle_active_optimistic_fetches( @@ -391,6 +489,7 @@ async fn handle_active_optimistic_fetches( lru_response_cache, request_moderator, storage, + subscriptions, time_service, ) .await @@ -401,14 +500,46 @@ async fn handle_active_optimistic_fetches( } } +/// Handles the active subscriptions and logs any +/// errors that were encountered. +async fn handle_active_subscriptions( + bounded_exector: BoundedExecutor, + cached_storage_server_summary: Arc>, + config: StorageServiceConfig, + optimistic_fetches: Arc>, + lru_response_cache: Arc>>, + request_moderator: Arc, + storage: T, + subscriptions: Arc>>, + time_service: TimeService, +) { + if let Err(error) = subscription::handle_active_subscriptions( + bounded_exector, + cached_storage_server_summary, + config, + optimistic_fetches, + lru_response_cache, + request_moderator, + storage, + subscriptions, + time_service, + ) + .await + { + error!(LogSchema::new(LogEntry::SubscriptionRequest) + .error(&error) + .message("Failed to handle active subscriptions!")); + } +} + /// Refreshes the cached storage server summary and sends -/// a notification via the given channel. If an error +/// a notification via the given channels. If an error /// occurs, it is logged. 
pub(crate) fn refresh_cached_storage_summary( cached_storage_server_summary: Arc>, storage: T, storage_config: StorageServiceConfig, - cached_summary_update_notifier: aptos_channel::Sender<(), CachedSummaryUpdateNotification>, + cache_update_notifiers: Vec>, ) { // Fetch the new data summary from storage let new_data_summary = match storage.get_data_summary() { @@ -448,17 +579,20 @@ pub(crate) fn refresh_cached_storage_summary( .get_synced_ledger_info_version(); let update_notification = CachedSummaryUpdateNotification::new(highest_synced_version); - // Send the notification via the notifier - if let Err(error) = cached_summary_update_notifier.push((), update_notification) { - error!(LogSchema::new(LogEntry::StorageSummaryRefresh) - .error(&Error::StorageErrorEncountered(error.to_string())) - .message("Failed to send an update notification for the new cached summary!")); + // Send a notification via each notifier channel + for cached_summary_update_notifier in cache_update_notifiers { + if let Err(error) = cached_summary_update_notifier.push((), update_notification) { + error!(LogSchema::new(LogEntry::StorageSummaryRefresh) + .error(&Error::StorageErrorEncountered(error.to_string())) + .message("Failed to send an update notification for the new cached summary!")); + } } } } /// A simple notification sent to the optimistic fetch handler that the /// cached storage summary has been updated with the specified version. +#[derive(Clone, Copy)] pub struct CachedSummaryUpdateNotification { highest_synced_version: Option, } diff --git a/state-sync/storage-service/server/src/logging.rs b/state-sync/storage-service/server/src/logging.rs index 8316f9592dc76..6baf251a58fd9 100644 --- a/state-sync/storage-service/server/src/logging.rs +++ b/state-sync/storage-service/server/src/logging.rs @@ -47,4 +47,7 @@ pub enum LogEntry { SentStorageResponse, StorageServiceError, StorageSummaryRefresh, + SubscriptionRefresh, + SubscriptionRequest, + SubscriptionResponse, } diff --git a/state-sync/storage-service/server/src/metrics.rs b/state-sync/storage-service/server/src/metrics.rs index 894e1332a7284..5fec3643d5989 100644 --- a/state-sync/storage-service/server/src/metrics.rs +++ b/state-sync/storage-service/server/src/metrics.rs @@ -14,6 +14,9 @@ pub const LRU_CACHE_HIT: &str = "lru_cache_hit"; pub const LRU_CACHE_PROBE: &str = "lru_cache_probe"; pub const OPTIMISTIC_FETCH_ADD: &str = "optimistic_fetch_add"; pub const OPTIMISTIC_FETCH_EXPIRE: &str = "optimistic_fetch_expire"; +pub const SUBSCRIPTION_ADD: &str = "subscription_add"; +pub const SUBSCRIPTION_EXPIRE: &str = "subscription_expire"; +pub const SUBSCRIPTION_FAILURE: &str = "subscription_failure"; /// Gauge for tracking the number of actively ignored peers pub static IGNORED_PEER_COUNT: Lazy = Lazy::new(|| { @@ -126,6 +129,36 @@ pub static STORAGE_REQUEST_PROCESSING_LATENCY: Lazy = Lazy::new(|| .unwrap() }); +/// Gauge for tracking the number of active subscriptions +pub static SUBSCRIPTION_COUNT: Lazy = Lazy::new(|| { + register_int_gauge_vec!( + "aptos_storage_service_server_subscription_count", + "Gauge for tracking the number of active subscriptions", + &["network_id"] + ) + .unwrap() +}); + +/// Counter for subscription events +pub static SUBSCRIPTION_EVENTS: Lazy = Lazy::new(|| { + register_int_counter_vec!( + "aptos_storage_service_server_subscription_event", + "Counters related to subscription events", + &["network_id", "event"] + ) + .unwrap() +}); + +/// Time it takes to process a subscription request +pub static SUBSCRIPTION_LATENCIES: Lazy = 
Lazy::new(|| { + register_histogram_vec!( + "aptos_storage_service_server_subscription_latency", + "Time it takes to process a subscription request", + &["network_id", "request_type"] + ) + .unwrap() +}); + /// Increments the network frame overflow counter for the given response pub fn increment_network_frame_overflow(response_type: &str) { NETWORK_FRAME_OVERFLOW diff --git a/state-sync/storage-service/server/src/optimistic_fetch.rs b/state-sync/storage-service/server/src/optimistic_fetch.rs index 2fb64515b80d6..f02832b337a0f 100644 --- a/state-sync/storage-service/server/src/optimistic_fetch.rs +++ b/state-sync/storage-service/server/src/optimistic_fetch.rs @@ -8,6 +8,7 @@ use crate::{ moderator::RequestModerator, network::ResponseSender, storage::StorageReaderInterface, + subscription::SubscriptionStreamRequests, utils, LogEntry, LogSchema, }; use aptos_bounded_executor::BoundedExecutor; @@ -54,11 +55,6 @@ impl OptimisticFetchRequest { } } - /// Returns the response sender and consumes the request - pub fn get_response_sender(self) -> ResponseSender { - self.response_sender - } - /// Creates a new storage service request to satisfy the optimistic fetch /// using the new data at the specified `target_ledger_info`. pub fn get_storage_request_for_missing_data( @@ -170,9 +166,14 @@ impl OptimisticFetchRequest { .as_millis(); elapsed_time > timeout_ms as u128 } + + /// Returns the response sender and consumes the request + pub fn take_response_sender(self) -> ResponseSender { + self.response_sender + } } -/// Handles ready optimistic fetches +/// Handles active and ready optimistic fetches pub(crate) async fn handle_active_optimistic_fetches( bounded_executor: BoundedExecutor, cached_storage_server_summary: Arc>, @@ -181,6 +182,7 @@ pub(crate) async fn handle_active_optimistic_fetches( lru_response_cache: Arc>>, request_moderator: Arc, storage: T, + subscriptions: Arc>>, time_service: TimeService, ) -> Result<(), Error> { // Update the number of active optimistic fetches @@ -195,6 +197,7 @@ pub(crate) async fn handle_active_optimistic_fetches( lru_response_cache.clone(), request_moderator.clone(), storage.clone(), + subscriptions.clone(), time_service.clone(), ) .await?; @@ -208,6 +211,7 @@ pub(crate) async fn handle_active_optimistic_fetches( lru_response_cache, request_moderator, storage, + subscriptions, time_service, peers_with_ready_optimistic_fetches, ) @@ -226,6 +230,7 @@ async fn handle_ready_optimistic_fetches( lru_response_cache: Arc>>, request_moderator: Arc, storage: T, + subscriptions: Arc>>, time_service: TimeService, peers_with_ready_optimistic_fetches: Vec<(PeerNetworkId, LedgerInfoWithSignatures)>, ) { @@ -241,6 +246,7 @@ async fn handle_ready_optimistic_fetches( let lru_response_cache = lru_response_cache.clone(); let request_moderator = request_moderator.clone(); let storage = storage.clone(); + let subscriptions = subscriptions.clone(); let time_service = time_service.clone(); // Spawn a blocking task to handle the optimistic fetch @@ -250,18 +256,32 @@ async fn handle_ready_optimistic_fetches( let optimistic_fetch_start_time = optimistic_fetch.fetch_start_time; let optimistic_fetch_request = optimistic_fetch.request.clone(); + // Get the storage service request for the missing data + let missing_data_request = match optimistic_fetch + .get_storage_request_for_missing_data(config, &target_ledger_info) + { + Ok(storage_service_request) => storage_service_request, + Err(error) => { + // Failed to get the storage service request + 
warn!(LogSchema::new(LogEntry::OptimisticFetchResponse) + .error(&Error::UnexpectedErrorEncountered(error.to_string()))); + return; + }, + }; + // Notify the peer of the new data if let Err(error) = utils::notify_peer_of_new_data( cached_storage_server_summary.clone(), - config, optimistic_fetches.clone(), + subscriptions.clone(), lru_response_cache.clone(), request_moderator.clone(), storage.clone(), time_service.clone(), &peer_network_id, - optimistic_fetch, + missing_data_request, target_ledger_info, + optimistic_fetch.take_response_sender(), ) { warn!(LogSchema::new(LogEntry::OptimisticFetchResponse) .error(&Error::UnexpectedErrorEncountered(error.to_string()))); @@ -294,6 +314,7 @@ pub(crate) async fn get_peers_with_ready_optimistic_fetches>>, request_moderator: Arc, storage: T, + subscriptions: Arc>>, time_service: TimeService, ) -> aptos_storage_service_types::Result, Error> { // Fetch the latest storage summary and highest synced version @@ -315,6 +336,7 @@ pub(crate) async fn get_peers_with_ready_optimistic_fetches( config: StorageServiceConfig, cached_storage_server_summary: Arc>, optimistic_fetches: Arc>, + subscriptions: Arc>>, lru_response_cache: Arc>>, request_moderator: Arc, storage: T, @@ -387,6 +410,7 @@ async fn identify_expired_invalid_and_ready_fetches( bounded_executor, cached_storage_server_summary, optimistic_fetches, + subscriptions, lru_response_cache, request_moderator, storage, @@ -412,6 +436,7 @@ async fn identify_ready_and_invalid_optimistic_fetches>, optimistic_fetches: Arc>, + subscriptions: Arc>>, lru_response_cache: Arc>>, request_moderator: Arc, storage: T, @@ -437,6 +462,7 @@ async fn identify_ready_and_invalid_optimistic_fetches Self { + Self { + request, + response_sender, + request_start_time: time_service.now(), + } + } + + /// Creates a new storage service request to satisfy the request + /// using the new data at the specified `target_ledger_info`. 
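// The core of the method below is a bounded version-window calculation. A
// standalone illustration with plain u64 arithmetic, where the chunk size is
// passed in rather than read from StorageServiceConfig:
fn missing_data_window(
    known_version: u64,
    target_version: u64,
    max_chunk_size: u64,
) -> Option<(u64, u64)> {
    // How many versions the peer is missing; None if the subtraction underflows.
    let num_versions = target_version.checked_sub(known_version)?;
    // Never serve more than one chunk per request.
    let num_versions = num_versions.min(max_chunk_size);
    let start_version = known_version.checked_add(1)?;
    let end_version = known_version.checked_add(num_versions)?;
    Some((start_version, end_version))
}

// For example, a peer at version 1_000 subscribing against a target ledger info
// at version 10_000 with a max chunk size of 2_000 is served versions
// 1_001..=3_000; later requests in the stream pick up from there.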
+ fn get_storage_request_for_missing_data( + &self, + config: StorageServiceConfig, + known_version: u64, + target_ledger_info: &LedgerInfoWithSignatures, + ) -> aptos_storage_service_types::Result { + // Calculate the number of versions to fetch + let target_version = target_ledger_info.ledger_info().version(); + let mut num_versions_to_fetch = + target_version.checked_sub(known_version).ok_or_else(|| { + Error::UnexpectedErrorEncountered( + "Number of versions to fetch has overflown!".into(), + ) + })?; + + // Bound the number of versions to fetch by the maximum chunk size + num_versions_to_fetch = min( + num_versions_to_fetch, + self.max_chunk_size_for_request(config), + ); + + // Calculate the start and end versions + let start_version = known_version.checked_add(1).ok_or_else(|| { + Error::UnexpectedErrorEncountered("Start version has overflown!".into()) + })?; + let end_version = known_version + .checked_add(num_versions_to_fetch) + .ok_or_else(|| { + Error::UnexpectedErrorEncountered("End version has overflown!".into()) + })?; + + // Create the storage request + let data_request = match &self.request.data_request { + DataRequest::SubscribeTransactionOutputsWithProof(_) => { + DataRequest::GetTransactionOutputsWithProof(TransactionOutputsWithProofRequest { + proof_version: target_version, + start_version, + end_version, + }) + }, + DataRequest::SubscribeTransactionsWithProof(request) => { + DataRequest::GetTransactionsWithProof(TransactionsWithProofRequest { + proof_version: target_version, + start_version, + end_version, + include_events: request.include_events, + }) + }, + DataRequest::SubscribeTransactionsOrOutputsWithProof(request) => { + DataRequest::GetTransactionsOrOutputsWithProof( + TransactionsOrOutputsWithProofRequest { + proof_version: target_version, + start_version, + end_version, + include_events: request.include_events, + max_num_output_reductions: request.max_num_output_reductions, + }, + ) + }, + request => unreachable!("Unexpected subscription request: {:?}", request), + }; + let storage_request = + StorageServiceRequest::new(data_request, self.request.use_compression); + Ok(storage_request) + } + + /// Returns the highest version known by the peer when the stream started + fn highest_known_version_at_stream_start(&self) -> u64 { + match &self.request.data_request { + DataRequest::SubscribeTransactionOutputsWithProof(request) => { + request + .subscription_stream_metadata + .known_version_at_stream_start + }, + DataRequest::SubscribeTransactionsWithProof(request) => { + request + .subscription_stream_metadata + .known_version_at_stream_start + }, + DataRequest::SubscribeTransactionsOrOutputsWithProof(request) => { + request + .subscription_stream_metadata + .known_version_at_stream_start + }, + request => unreachable!("Unexpected subscription request: {:?}", request), + } + } + + /// Returns the highest epoch known by the peer when the stream started + fn highest_known_epoch_at_stream_start(&self) -> u64 { + match &self.request.data_request { + DataRequest::SubscribeTransactionOutputsWithProof(request) => { + request + .subscription_stream_metadata + .known_epoch_at_stream_start + }, + DataRequest::SubscribeTransactionsWithProof(request) => { + request + .subscription_stream_metadata + .known_epoch_at_stream_start + }, + DataRequest::SubscribeTransactionsOrOutputsWithProof(request) => { + request + .subscription_stream_metadata + .known_epoch_at_stream_start + }, + request => unreachable!("Unexpected subscription request: {:?}", request), + } + } + + /// Returns 
the maximum chunk size for the request + /// depending on the request type. + fn max_chunk_size_for_request(&self, config: StorageServiceConfig) -> u64 { + match &self.request.data_request { + DataRequest::SubscribeTransactionOutputsWithProof(_) => { + config.max_transaction_output_chunk_size + }, + DataRequest::SubscribeTransactionsWithProof(_) => config.max_transaction_chunk_size, + DataRequest::SubscribeTransactionsOrOutputsWithProof(_) => { + config.max_transaction_output_chunk_size + }, + request => unreachable!("Unexpected subscription request: {:?}", request), + } + } + + /// Returns the subscription stream id for the request + pub fn subscription_stream_id(&self) -> u64 { + match &self.request.data_request { + DataRequest::SubscribeTransactionOutputsWithProof(request) => { + request.subscription_stream_metadata.subscription_stream_id + }, + DataRequest::SubscribeTransactionsWithProof(request) => { + request.subscription_stream_metadata.subscription_stream_id + }, + DataRequest::SubscribeTransactionsOrOutputsWithProof(request) => { + request.subscription_stream_metadata.subscription_stream_id + }, + request => unreachable!("Unexpected subscription request: {:?}", request), + } + } + + /// Returns the subscription stream index for the request + fn subscription_stream_index(&self) -> u64 { + match &self.request.data_request { + DataRequest::SubscribeTransactionOutputsWithProof(request) => { + request.subscription_stream_index + }, + DataRequest::SubscribeTransactionsWithProof(request) => { + request.subscription_stream_index + }, + DataRequest::SubscribeTransactionsOrOutputsWithProof(request) => { + request.subscription_stream_index + }, + request => unreachable!("Unexpected subscription request: {:?}", request), + } + } + + /// Returns the subscription stream metadata for the request + fn subscription_stream_metadata(&self) -> SubscriptionStreamMetadata { + match &self.request.data_request { + DataRequest::SubscribeTransactionOutputsWithProof(request) => { + request.subscription_stream_metadata + }, + DataRequest::SubscribeTransactionsWithProof(request) => { + request.subscription_stream_metadata + }, + DataRequest::SubscribeTransactionsOrOutputsWithProof(request) => { + request.subscription_stream_metadata + }, + request => unreachable!("Unexpected subscription request: {:?}", request), + } + } + + /// Returns the response sender and consumes the request + pub fn take_response_sender(self) -> ResponseSender { + self.response_sender + } +} + +impl Debug for SubscriptionRequest { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + write!( + f, + "SubscriptionRequest: {{ request_start_time: {:?}, request: {:?} }}", + self.request_start_time, self.request + ) + } +} + +/// A set of subscription requests that together form a stream +#[derive(Debug)] +pub struct SubscriptionStreamRequests { + subscription_stream_metadata: SubscriptionStreamMetadata, // The metadata for the subscription stream (as specified by the client) + + highest_known_version: u64, // The highest version known by the peer (at this point in the stream) + highest_known_epoch: u64, // The highest epoch known by the peer (at this point in the stream) + + next_index_to_serve: u64, // The next subscription stream request index to serve + pending_subscription_requests: BTreeMap, // The pending subscription requests by stream index + + last_stream_update_time: Instant, // The last time the stream was updated + time_service: TimeService, // The time service +} + +impl SubscriptionStreamRequests { + pub fn 
new(subscription_request: SubscriptionRequest, time_service: TimeService) -> Self { + // Extract the relevant information from the request + let highest_known_version = subscription_request.highest_known_version_at_stream_start(); + let highest_known_epoch = subscription_request.highest_known_epoch_at_stream_start(); + let subscription_stream_metadata = subscription_request.subscription_stream_metadata(); + + // Create a new set of pending subscription requests using the first request + let mut pending_subscription_requests = BTreeMap::new(); + pending_subscription_requests.insert( + subscription_request.subscription_stream_index(), + subscription_request, + ); + + Self { + highest_known_version, + highest_known_epoch, + next_index_to_serve: 0, + pending_subscription_requests, + subscription_stream_metadata, + last_stream_update_time: time_service.now(), + time_service, + } + } + + /// Adds a subscription request to the existing stream. If this operation + /// fails, the request is returned to the caller so that the client + /// can be notified of the error. + pub fn add_subscription_request( + &mut self, + storage_service_config: StorageServiceConfig, + subscription_request: SubscriptionRequest, + ) -> Result<(), (Error, SubscriptionRequest)> { + // Verify that the subscription metadata is valid + let subscription_stream_metadata = subscription_request.subscription_stream_metadata(); + if subscription_stream_metadata != self.subscription_stream_metadata { + return Err(( + Error::InvalidRequest(format!( + "The subscription request stream metadata is invalid! Expected: {:?}, found: {:?}", + self.subscription_stream_metadata, subscription_stream_metadata + )), + subscription_request, + )); + } + + // Verify that the subscription request index is valid + let subscription_request_index = subscription_request.subscription_stream_index(); + if subscription_request_index < self.next_index_to_serve { + return Err(( + Error::InvalidRequest(format!( + "The subscription request index is too low! Next index to serve: {:?}, found: {:?}", + self.next_index_to_serve, subscription_request_index + )), + subscription_request, + )); + } + + // Verify that the number of active subscriptions respects the maximum + let max_num_active_subscriptions = + storage_service_config.max_num_active_subscriptions as usize; + if self.pending_subscription_requests.len() >= max_num_active_subscriptions { + return Err(( + Error::InvalidRequest(format!( + "The maximum number of active subscriptions has been reached! Max: {:?}, found: {:?}", + max_num_active_subscriptions, self.pending_subscription_requests.len() + )), + subscription_request, + )); + } + + // Insert the subscription request into the pending requests + let existing_request = self.pending_subscription_requests.insert( + subscription_request.subscription_stream_index(), + subscription_request, + ); + + // Refresh the last stream update time + self.refresh_last_stream_update_time(); + + // If a pending request already existed, return the previous request to the caller + if let Some(existing_request) = existing_request { + return Err(( + Error::InvalidRequest(format!( + "Overwriting an existing subscription request for the given index: {:?}", + subscription_request_index + )), + existing_request, + )); + } + + Ok(()) + } + + /// Returns a reference to the first pending subscription request + /// in the stream (if it exists). 
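To make the request bookkeeping above easier to follow, here is a minimal, self-contained sketch of the same validation order (stream metadata, then request index, then capacity, then insertion, handing the request back to the caller on failure). It compiles with only the standard library; Stream, StreamRequest, StreamMetadata and add_request are hypothetical stand-ins, not the actual storage service types.

use std::collections::BTreeMap;

// Hypothetical stand-ins for the real stream metadata and request types.
#[derive(Clone, Copy, PartialEq, Debug)]
struct StreamMetadata { stream_id: u64 }

#[derive(Debug)]
struct StreamRequest { metadata: StreamMetadata, index: u64 }

struct Stream {
    metadata: StreamMetadata,
    next_index_to_serve: u64,
    pending: BTreeMap<u64, StreamRequest>,
    max_pending: usize,
}

impl Stream {
    /// Mirrors the validation order above: metadata, index, capacity,
    /// then insertion (rejecting overwrites of an existing index).
    fn add_request(&mut self, request: StreamRequest) -> Result<(), (String, StreamRequest)> {
        if request.metadata != self.metadata {
            return Err(("stream metadata mismatch".into(), request));
        }
        if request.index < self.next_index_to_serve {
            return Err(("request index already served".into(), request));
        }
        if self.pending.len() >= self.max_pending {
            return Err(("too many pending requests".into(), request));
        }
        let index = request.index;
        if let Some(existing) = self.pending.insert(index, request) {
            return Err(("overwrote an existing request index".into(), existing));
        }
        Ok(())
    }
}

fn main() {
    let metadata = StreamMetadata { stream_id: 7 };
    let mut stream = Stream {
        metadata,
        next_index_to_serve: 0,
        pending: BTreeMap::new(),
        max_pending: 2,
    };
    assert!(stream.add_request(StreamRequest { metadata, index: 0 }).is_ok());
    assert!(stream.add_request(StreamRequest { metadata, index: 0 }).is_err()); // duplicate index
}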
+ pub fn first_pending_request(&self) -> Option<&SubscriptionRequest> { + self.pending_subscription_requests + .first_key_value() + .map(|(_, request)| request) + } + + /// Returns true iff the subscription stream has expired. + /// There are two ways a stream can expire: (i) the first + /// pending request has been blocked for too long; or (ii) + /// the stream has been idle for too long. + fn is_expired(&self, timeout_ms: u64) -> bool { + // Determine the time when the stream was first blocked + let time_when_first_blocked = + if let Some(subscription_request) = self.first_pending_request() { + subscription_request.request_start_time // The stream is blocked on the first pending request + } else { + self.last_stream_update_time // The stream is idle and hasn't been updated in a while + }; + + // Verify the stream hasn't been blocked for too long + let current_time = self.time_service.now(); + let elapsed_time = current_time + .duration_since(time_when_first_blocked) + .as_millis(); + elapsed_time > (timeout_ms as u128) + } + + /// Returns true iff there is at least one pending request + /// and that request is ready to be served (i.e., it has the + /// same index as the next index to serve). + fn first_request_ready_to_be_served(&self) -> bool { + if let Some(subscription_request) = self.first_pending_request() { + subscription_request.subscription_stream_index() == self.next_index_to_serve + } else { + false + } + } + + /// Removes the first pending subscription request from the stream + /// and returns it (if it exists). + fn pop_first_pending_request(&mut self) -> Option { + self.pending_subscription_requests + .pop_first() + .map(|(_, request)| request) + } + + /// Refreshes the last stream update time to the current time + fn refresh_last_stream_update_time(&mut self) { + self.last_stream_update_time = self.time_service.now(); + } + + /// Returns the unique stream id for the stream + pub fn subscription_stream_id(&self) -> u64 { + self.subscription_stream_metadata.subscription_stream_id + } + + /// Updates the highest known version and epoch for the stream + /// using the latest data response that was sent to the client. 
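The expiry rule used by is_expired can be summarized in isolation: a stream expires when its oldest pending request has been blocked for longer than the timeout, or, if there are no pending requests, when the stream itself has been idle for longer than the timeout. Below is a minimal sketch of just that rule; StreamTimes is a hypothetical stand-in that keeps only the two timestamps the rule needs, and a Duration replaces the millisecond config value and TimeService used by the real code.

use std::time::{Duration, Instant};

// Hypothetical stand-in capturing just the fields the expiry rule needs.
struct StreamTimes {
    first_pending_request_start: Option<Instant>, // when the oldest pending request arrived
    last_stream_update: Instant,                  // when the stream last made progress
}

impl StreamTimes {
    /// A stream expires if its oldest pending request has been blocked too
    /// long, or (with no pending requests) it has been idle too long.
    fn is_expired(&self, now: Instant, timeout: Duration) -> bool {
        let reference_time = self
            .first_pending_request_start
            .unwrap_or(self.last_stream_update);
        now.duration_since(reference_time) > timeout
    }
}

fn main() {
    let start = Instant::now();
    let times = StreamTimes {
        first_pending_request_start: None,
        last_stream_update: start,
    };
    // An idle stream only expires once the timeout elapses with no updates.
    assert!(!times.is_expired(start + Duration::from_millis(10), Duration::from_millis(100)));
    assert!(times.is_expired(start + Duration::from_millis(200), Duration::from_millis(100)));
}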
+ fn update_known_version_and_epoch( + &mut self, + data_response: &DataResponse, + ) -> Result<(), Error> { + // Determine the number of data items and target ledger info sent to the client + let (num_data_items, target_ledger_info) = match data_response { + DataResponse::NewTransactionOutputsWithProof(( + transaction_output_list, + target_ledger_info, + )) => ( + transaction_output_list.transactions_and_outputs.len(), + target_ledger_info, + ), + DataResponse::NewTransactionsWithProof((transaction_list, target_ledger_info)) => { + (transaction_list.transactions.len(), target_ledger_info) + }, + DataResponse::NewTransactionsOrOutputsWithProof(( + (transaction_list, transaction_output_list), + target_ledger_info, + )) => { + if let Some(transaction_list) = transaction_list { + (transaction_list.transactions.len(), target_ledger_info) + } else if let Some(transaction_output_list) = transaction_output_list { + ( + transaction_output_list.transactions_and_outputs.len(), + target_ledger_info, + ) + } else { + return Err(Error::UnexpectedErrorEncountered(format!( + "New transactions or outputs response is missing data: {:?}", + data_response + ))); + } + }, + _ => { + return Err(Error::UnexpectedErrorEncountered(format!( + "Unexpected data response type: {:?}", + data_response + ))) + }, + }; + + // Update the highest known version + self.highest_known_version += num_data_items as u64; + + // Update the highest known epoch if we've now hit an epoch ending ledger info + if self.highest_known_version == target_ledger_info.ledger_info().version() + && target_ledger_info.ledger_info().ends_epoch() + { + self.highest_known_epoch += 1; + } + + // Update the next index to serve + self.next_index_to_serve += 1; + + // Refresh the last stream update time + self.refresh_last_stream_update_time(); + + Ok(()) + } + + #[cfg(test)] + /// Returns the highest known version and epoch for test purposes + pub fn get_highest_known_version_and_epoch(&self) -> (u64, u64) { + (self.highest_known_version, self.highest_known_epoch) + } + + #[cfg(test)] + /// Returns the next index to serve for test purposes + pub fn get_next_index_to_serve(&self) -> u64 { + self.next_index_to_serve + } + + #[cfg(test)] + /// Returns the pending subscription requests for test purposes + pub fn get_pending_subscription_requests(&mut self) -> &mut BTreeMap { + &mut self.pending_subscription_requests + } + + #[cfg(test)] + /// Sets the next index to serve for test purposes + pub fn set_next_index_to_serve(&mut self, next_index_to_serve: u64) { + self.next_index_to_serve = next_index_to_serve; + } +} + +/// Handles active and ready subscriptions +pub(crate) async fn handle_active_subscriptions( + bounded_executor: BoundedExecutor, + cached_storage_server_summary: Arc>, + config: StorageServiceConfig, + optimistic_fetches: Arc>, + lru_response_cache: Arc>>, + request_moderator: Arc, + storage: T, + subscriptions: Arc>>, + time_service: TimeService, +) -> Result<(), Error> { + // Continuously handle the subscriptions until we identify that + // there are no more subscriptions ready to be served now. 
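Before the driver loop that follows, it may help to see the bookkeeping performed by update_known_version_and_epoch in isolation: advance the known version by the number of items served, bump the epoch only when the new version lands exactly on an epoch-ending target, and move to the next stream index. The sketch below assumes the caller has already extracted the item count and target version from the response; StreamProgress and record_response are illustrative names, and the response parsing and timestamp refresh are omitted.

// Hypothetical stand-in for the stream progress tracked above.
struct StreamProgress {
    highest_known_version: u64,
    highest_known_epoch: u64,
    next_index_to_serve: u64,
}

impl StreamProgress {
    /// Advance the known version by the number of items served, bump the
    /// epoch if we landed exactly on an epoch-ending ledger info, and move
    /// on to the next stream index.
    fn record_response(
        &mut self,
        num_data_items: u64,
        target_version: u64,
        target_ends_epoch: bool,
    ) {
        self.highest_known_version += num_data_items;
        if self.highest_known_version == target_version && target_ends_epoch {
            self.highest_known_epoch += 1;
        }
        self.next_index_to_serve += 1;
    }
}

fn main() {
    let mut progress = StreamProgress {
        highest_known_version: 100,
        highest_known_epoch: 5,
        next_index_to_serve: 0,
    };
    // Serving 10 items up to an epoch-ending version 110 bumps the epoch.
    progress.record_response(10, 110, true);
    assert_eq!(
        (progress.highest_known_version, progress.highest_known_epoch, progress.next_index_to_serve),
        (110, 6, 1)
    );
}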
+ loop { + // Update the number of active subscriptions + update_active_subscription_metrics(subscriptions.clone()); + + // Identify the peers with ready subscriptions + let peers_with_ready_subscriptions = get_peers_with_ready_subscriptions( + bounded_executor.clone(), + config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await?; + + // If there are no peers with ready subscriptions, we're finished + if peers_with_ready_subscriptions.is_empty() { + return Ok(()); + } + + // Remove and handle the ready subscriptions + handle_ready_subscriptions( + bounded_executor.clone(), + cached_storage_server_summary.clone(), + config, + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage.clone(), + subscriptions.clone(), + time_service.clone(), + peers_with_ready_subscriptions, + ) + .await; + } +} + +/// Handles the ready subscriptions by removing them from the +/// active map and notifying the peer of the new data. +async fn handle_ready_subscriptions( + bounded_executor: BoundedExecutor, + cached_storage_server_summary: Arc>, + config: StorageServiceConfig, + optimistic_fetches: Arc>, + lru_response_cache: Arc>>, + request_moderator: Arc, + storage: T, + subscriptions: Arc>>, + time_service: TimeService, + peers_with_ready_subscriptions: Vec<(PeerNetworkId, LedgerInfoWithSignatures)>, +) { + // Go through all peers with ready subscriptions + let mut active_tasks = vec![]; + for (peer_network_id, target_ledger_info) in peers_with_ready_subscriptions { + // Remove the subscription from the active subscription stream + let subscription_request_and_known_version = + subscriptions.clone().lock().get_mut(&peer_network_id).map( + |subscription_stream_requests| { + ( + subscription_stream_requests.pop_first_pending_request(), + subscription_stream_requests.highest_known_version, + ) + }, + ); + + // Handle the subscription + if let Some((Some(subscription_request), known_version)) = + subscription_request_and_known_version + { + // Clone all required components for the task + let cached_storage_server_summary = cached_storage_server_summary.clone(); + let optimistic_fetches = optimistic_fetches.clone(); + let lru_response_cache = lru_response_cache.clone(); + let request_moderator = request_moderator.clone(); + let storage = storage.clone(); + let subscriptions = subscriptions.clone(); + let time_service = time_service.clone(); + + // Spawn a blocking task to handle the subscription + let active_task = bounded_executor + .spawn_blocking(move || { + // Get the subscription start time and request + let subscription_start_time = subscription_request.request_start_time; + let subscription_data_request = subscription_request.request.clone(); + + // Get the storage service request for the missing data + let missing_data_request = match subscription_request + .get_storage_request_for_missing_data( + config, + known_version, + &target_ledger_info, + ) { + Ok(storage_service_request) => storage_service_request, + Err(error) => { + // Failed to get the storage service request + warn!(LogSchema::new(LogEntry::OptimisticFetchResponse) + .error(&Error::UnexpectedErrorEncountered(error.to_string()))); + return; + }, + }; + + // Notify the peer of the new data + match utils::notify_peer_of_new_data( + cached_storage_server_summary, + optimistic_fetches, + subscriptions.clone(), + lru_response_cache, + 
request_moderator, + storage, + time_service.clone(), + &peer_network_id, + missing_data_request, + target_ledger_info, + subscription_request.take_response_sender(), + ) { + Ok(data_response) => { + // Update the streams known version and epoch + if let Some(subscription_stream_requests) = + subscriptions.lock().get_mut(&peer_network_id) + { + // Update the known version and epoch for the stream + subscription_stream_requests + .update_known_version_and_epoch(&data_response) + .unwrap_or_else(|error| { + warn!(LogSchema::new(LogEntry::SubscriptionResponse) + .error(&Error::UnexpectedErrorEncountered( + error.to_string() + ))); + }); + + // Update the subscription latency metric + let subscription_duration = + time_service.now().duration_since(subscription_start_time); + metrics::observe_value_with_label( + &metrics::SUBSCRIPTION_LATENCIES, + peer_network_id.network_id(), + &subscription_data_request.get_label(), + subscription_duration.as_secs_f64(), + ); + } + }, + Err(error) => { + warn!(LogSchema::new(LogEntry::SubscriptionResponse) + .error(&Error::UnexpectedErrorEncountered(error.to_string()))); + }, + } + }) + .await; + + // Add the task to the list of active tasks + active_tasks.push(active_task); + } + } + + // Wait for all the active tasks to complete + join_all(active_tasks).await; +} + +/// Identifies the subscriptions that can be handled now. +/// Returns the list of peers that made those subscriptions +/// alongside the ledger info at the target version for the peer. +pub(crate) async fn get_peers_with_ready_subscriptions( + bounded_executor: BoundedExecutor, + config: StorageServiceConfig, + cached_storage_server_summary: Arc>, + optimistic_fetches: Arc>, + lru_response_cache: Arc>>, + request_moderator: Arc, + storage: T, + subscriptions: Arc>>, + time_service: TimeService, +) -> aptos_storage_service_types::Result, Error> { + // Fetch the latest storage summary and highest synced version + let latest_storage_summary = cached_storage_server_summary.load().clone(); + let highest_synced_ledger_info = match &latest_storage_summary.data_summary.synced_ledger_info { + Some(ledger_info) => ledger_info.clone(), + None => return Ok(vec![]), + }; + let highest_synced_version = highest_synced_ledger_info.ledger_info().version(); + let highest_synced_epoch = highest_synced_ledger_info.ledger_info().epoch(); + + // Identify the peers with expired, invalid and ready subscriptions + let ( + peers_with_expired_subscriptions, + peers_with_invalid_subscriptions, + peers_with_ready_subscriptions, + ) = identify_expired_invalid_and_ready_subscriptions( + bounded_executor, + config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + subscriptions.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage.clone(), + time_service.clone(), + highest_synced_ledger_info, + highest_synced_version, + highest_synced_epoch, + ) + .await; + + // Remove the expired subscriptions + remove_expired_subscriptions(subscriptions.clone(), peers_with_expired_subscriptions); + + // Remove the invalid subscriptions + remove_invalid_subscriptions(subscriptions.clone(), peers_with_invalid_subscriptions); + + // Return the ready subscriptions + Ok(peers_with_ready_subscriptions) +} + +/// Identifies the expired, invalid and ready subscriptions +/// from the active map. Returns each peer list separately. 
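Taken together, the driver above classifies every active subscription, prunes the ones that can no longer be served, and hands back the peers that can be served now. A compact sketch of that shape is shown below; it is deliberately simplified: a hypothetical SubscriptionState enum stands in for the real per-peer bookkeeping, and expired and invalid entries are lumped into one removal pass, whereas the real code tracks them separately for metrics and logging.

use std::collections::HashMap;

// Hypothetical, simplified view of a peer's subscription state.
#[derive(Debug)]
enum SubscriptionState {
    Expired,
    Invalid,
    Ready { target_version: u64 },
    NotReadyYet,
}

/// Classify every active subscription, drop the expired and invalid ones,
/// and return the peers that can be served right now.
fn collect_ready_peers(
    subscriptions: &mut HashMap<u64, SubscriptionState>, // keyed by a hypothetical peer id
) -> Vec<(u64, u64)> {
    let mut to_remove = vec![];
    let mut ready = vec![];

    for (peer_id, state) in subscriptions.iter() {
        match state {
            SubscriptionState::Expired | SubscriptionState::Invalid => to_remove.push(*peer_id),
            SubscriptionState::Ready { target_version } => ready.push((*peer_id, *target_version)),
            SubscriptionState::NotReadyYet => {},
        }
    }

    // Expired and invalid entries are dropped from the active map, much like
    // remove_expired_subscriptions / remove_invalid_subscriptions above.
    for peer_id in to_remove {
        subscriptions.remove(&peer_id);
    }

    ready
}

fn main() {
    let mut subscriptions = HashMap::from([
        (1, SubscriptionState::Ready { target_version: 500 }),
        (2, SubscriptionState::Expired),
        (3, SubscriptionState::NotReadyYet),
        (4, SubscriptionState::Invalid),
    ]);
    let ready = collect_ready_peers(&mut subscriptions);
    assert_eq!(ready, vec![(1, 500)]);
    assert_eq!(subscriptions.len(), 2); // the expired and invalid peers were dropped
}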
+async fn identify_expired_invalid_and_ready_subscriptions( + bounded_executor: BoundedExecutor, + config: StorageServiceConfig, + cached_storage_server_summary: Arc>, + optimistic_fetches: Arc>, + subscriptions: Arc>>, + lru_response_cache: Arc>>, + request_moderator: Arc, + storage: T, + time_service: TimeService, + highest_synced_ledger_info: LedgerInfoWithSignatures, + highest_synced_version: Version, + highest_synced_epoch: u64, +) -> ( + Vec, + Vec, + Vec<(PeerNetworkId, LedgerInfoWithSignatures)>, +) { + // Gather the highest synced version and epoch for each peer + // that has an active subscription ready to be served. + let mut peers_and_highest_synced_data = HashMap::new(); + let mut peers_with_expired_subscriptions = vec![]; + for (peer_network_id, subscription_stream_requests) in subscriptions.lock().iter() { + // Gather the peer's highest synced version and epoch + if !subscription_stream_requests.is_expired(config.max_subscription_period_ms) { + // Ensure that the first request is ready to be served + if subscription_stream_requests.first_request_ready_to_be_served() { + let highest_known_version = subscription_stream_requests.highest_known_version; + let highest_known_epoch = subscription_stream_requests.highest_known_epoch; + + // Save the peer's version and epoch + peers_and_highest_synced_data.insert( + *peer_network_id, + (highest_known_version, highest_known_epoch), + ); + } + } else { + // The request has expired -- there's nothing to do + peers_with_expired_subscriptions.push(*peer_network_id); + } + } + + // Identify the peers with ready and invalid subscriptions + let (peers_with_ready_subscriptions, peers_with_invalid_subscriptions) = + identify_ready_and_invalid_subscriptions( + bounded_executor, + cached_storage_server_summary, + optimistic_fetches, + subscriptions, + lru_response_cache, + request_moderator, + storage, + time_service, + highest_synced_ledger_info, + highest_synced_version, + highest_synced_epoch, + peers_and_highest_synced_data, + ) + .await; + + // Return all peer lists + ( + peers_with_expired_subscriptions, + peers_with_invalid_subscriptions, + peers_with_ready_subscriptions, + ) +} + +/// Identifies the ready and invalid subscriptions from the given +/// map of peers and their highest synced versions and epochs. 
+async fn identify_ready_and_invalid_subscriptions( + bounded_executor: BoundedExecutor, + cached_storage_server_summary: Arc>, + optimistic_fetches: Arc>, + subscriptions: Arc>>, + lru_response_cache: Arc>>, + request_moderator: Arc, + storage: T, + time_service: TimeService, + highest_synced_ledger_info: LedgerInfoWithSignatures, + highest_synced_version: Version, + highest_synced_epoch: u64, + peers_and_highest_synced_data: HashMap, +) -> ( + Vec<(PeerNetworkId, LedgerInfoWithSignatures)>, + Vec, +) { + // Create the peer lists for ready and invalid subscriptions + let peers_with_ready_subscriptions = Arc::new(Mutex::new(vec![])); + let peers_with_invalid_subscriptions = Arc::new(Mutex::new(vec![])); + + // Go through all peers and highest synced data and identify the relevant entries + let mut active_tasks = vec![]; + for (peer_network_id, (highest_known_version, highest_known_epoch)) in + peers_and_highest_synced_data.into_iter() + { + // Clone all required components for the task + let cached_storage_server_summary = cached_storage_server_summary.clone(); + let highest_synced_ledger_info = highest_synced_ledger_info.clone(); + let optimistic_fetches = optimistic_fetches.clone(); + let subscriptions = subscriptions.clone(); + let lru_response_cache = lru_response_cache.clone(); + let request_moderator = request_moderator.clone(); + let storage = storage.clone(); + let time_service = time_service.clone(); + let peers_with_invalid_subscriptions = peers_with_invalid_subscriptions.clone(); + let peers_with_ready_subscriptions = peers_with_ready_subscriptions.clone(); + + // Spawn a blocking task to determine if the subscription is ready or + // invalid. We do this because each entry may require reading from storage. + let active_task = bounded_executor + .spawn_blocking(move || { + // Check if we have synced beyond the highest known version + if highest_known_version < highest_synced_version { + if highest_known_epoch < highest_synced_epoch { + // Fetch the epoch ending ledger info from storage (the + // peer needs to sync to their epoch ending ledger info). + let epoch_ending_ledger_info = match utils::get_epoch_ending_ledger_info( + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + subscriptions.clone(), + highest_known_epoch, + lru_response_cache.clone(), + request_moderator.clone(), + &peer_network_id, + storage.clone(), + time_service.clone(), + ) { + Ok(epoch_ending_ledger_info) => epoch_ending_ledger_info, + Err(error) => { + // Log the failure to fetch the epoch ending ledger info + error!(LogSchema::new(LogEntry::SubscriptionRefresh) + .error(&error) + .message(&format!( + "Failed to get the epoch ending ledger info for epoch: {:?} !", + highest_known_epoch + ))); + + return; + }, + }; + + // Check that we haven't been sent an invalid subscription request + // (i.e., a request that does not respect an epoch boundary). 
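The per-peer check performed in the spawned task reduces to a small decision rule: nothing is served until the server has synced past the peer's known version; if the peer is also behind on epochs, the target becomes the peer's epoch-ending ledger info, and a request whose known version already reaches that boundary is invalid; otherwise the target is the highest synced ledger info. The sketch below captures the rule with plain integers; decide and epoch_ending_version are hypothetical stand-ins for the real ledger-info plumbing.

// Hypothetical, simplified outcome for a single peer's subscription.
#[derive(Debug, PartialEq)]
enum SubscriptionDecision {
    NotReadyYet,
    Invalid,
    ReadyUpTo { target_version: u64 },
}

/// `epoch_ending_version` stands in for the version of the epoch-ending
/// ledger info fetched from storage when the peer is behind by an epoch.
fn decide(
    peer_known_version: u64,
    peer_known_epoch: u64,
    highest_synced_version: u64,
    highest_synced_epoch: u64,
    epoch_ending_version: u64,
) -> SubscriptionDecision {
    if peer_known_version >= highest_synced_version {
        return SubscriptionDecision::NotReadyYet; // nothing new to serve
    }
    if peer_known_epoch < highest_synced_epoch {
        // The peer must first sync to its epoch-ending ledger info. A request
        // claiming a version at or beyond that boundary is inconsistent.
        if epoch_ending_version <= peer_known_version {
            SubscriptionDecision::Invalid
        } else {
            SubscriptionDecision::ReadyUpTo { target_version: epoch_ending_version }
        }
    } else {
        SubscriptionDecision::ReadyUpTo { target_version: highest_synced_version }
    }
}

fn main() {
    // Same epoch: serve straight up to the highest synced version.
    assert_eq!(decide(90, 3, 100, 3, 0), SubscriptionDecision::ReadyUpTo { target_version: 100 });
    // Behind by an epoch: serve up to the epoch-ending version instead.
    assert_eq!(decide(90, 2, 100, 3, 95), SubscriptionDecision::ReadyUpTo { target_version: 95 });
    // An epoch-ending version at or below the claimed version is invalid.
    assert_eq!(decide(90, 2, 100, 3, 90), SubscriptionDecision::Invalid);
}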
+ if epoch_ending_ledger_info.ledger_info().version() <= highest_known_version + { + peers_with_invalid_subscriptions + .lock() + .push(peer_network_id); + } else { + peers_with_ready_subscriptions + .lock() + .push((peer_network_id, epoch_ending_ledger_info)); + } + } else { + peers_with_ready_subscriptions + .lock() + .push((peer_network_id, highest_synced_ledger_info.clone())); + }; + } + }) + .await; + + // Add the task to the list of active tasks + active_tasks.push(active_task); + } + + // Wait for all the active tasks to complete + join_all(active_tasks).await; + + // Gather the invalid and ready subscriptions + let peers_with_invalid_subscriptions = peers_with_invalid_subscriptions.lock().deref().clone(); + let peers_with_ready_subscriptions = peers_with_ready_subscriptions.lock().deref().clone(); + + ( + peers_with_ready_subscriptions, + peers_with_invalid_subscriptions, + ) +} + +/// Removes the expired subscription streams from the active map +fn remove_expired_subscriptions( + subscriptions: Arc>>, + peers_with_expired_subscriptions: Vec, +) { + for peer_network_id in peers_with_expired_subscriptions { + if subscriptions.lock().remove(&peer_network_id).is_some() { + increment_counter( + &metrics::SUBSCRIPTION_EVENTS, + peer_network_id.network_id(), + SUBSCRIPTION_EXPIRE.into(), + ); + } + } +} + +/// Removes the invalid subscription streams from the active map +fn remove_invalid_subscriptions( + subscriptions: Arc>>, + peers_with_invalid_subscriptions: Vec, +) { + for peer_network_id in peers_with_invalid_subscriptions { + if let Some(subscription_stream_requests) = subscriptions.lock().remove(&peer_network_id) { + warn!(LogSchema::new(LogEntry::SubscriptionRefresh) + .error(&Error::InvalidRequest( + "Mismatch between known version and epoch!".into() + )) + .message(&format!( + "Dropping invalid subscription stream with ID: {:?}!", + subscription_stream_requests.subscription_stream_id() + ))); + } + } +} + +/// Updates the active subscription metrics for each network +fn update_active_subscription_metrics( + subscriptions: Arc>>, +) { + // Calculate the total number of subscriptions for each network + let mut num_validator_subscriptions = 0; + let mut num_vfn_subscriptions = 0; + let mut num_public_subscriptions = 0; + for subscription_stream_requests in subscriptions.lock().iter() { + // Get the peer network ID + let peer_network_id = subscription_stream_requests.0; + + // Increment the number of subscriptions for the peer's network + match peer_network_id.network_id() { + NetworkId::Validator => num_validator_subscriptions += 1, + NetworkId::Vfn => num_vfn_subscriptions += 1, + NetworkId::Public => num_public_subscriptions += 1, + } + } + + // Update the number of active subscriptions for each network + metrics::set_gauge( + &metrics::SUBSCRIPTION_COUNT, + NetworkId::Validator.as_str(), + num_validator_subscriptions as u64, + ); + metrics::set_gauge( + &metrics::SUBSCRIPTION_COUNT, + NetworkId::Vfn.as_str(), + num_vfn_subscriptions as u64, + ); + metrics::set_gauge( + &metrics::SUBSCRIPTION_COUNT, + NetworkId::Public.as_str(), + num_public_subscriptions as u64, + ); +} diff --git a/state-sync/storage-service/server/src/tests/mock.rs b/state-sync/storage-service/server/src/tests/mock.rs index c2d4af78f31ce..0aaf6d9ff268e 100644 --- a/state-sync/storage-service/server/src/tests/mock.rs +++ b/state-sync/storage-service/server/src/tests/mock.rs @@ -354,15 +354,19 @@ mock! 
{ } } -/// Creates a mock db with the basic expectations required to handle optimistic fetch requests -pub fn create_mock_db_for_optimistic_fetch( - highest_ledger_info_clone: LedgerInfoWithSignatures, +/// Creates a mock db with the basic expectations required to +/// handle storage summary updates. +pub fn create_mock_db_with_summary_updates( + highest_ledger_info: LedgerInfoWithSignatures, lowest_version: Version, ) -> MockDatabaseReader { + // Create a new mock db reader let mut db_reader = create_mock_db_reader(); + + // Set up the basic expectations to handle storage summary updates db_reader .expect_get_latest_ledger_info() - .returning(move || Ok(highest_ledger_info_clone.clone())); + .returning(move || Ok(highest_ledger_info.clone())); db_reader .expect_get_first_txn_version() .returning(move || Ok(Some(lowest_version))); @@ -375,6 +379,7 @@ pub fn create_mock_db_for_optimistic_fetch( db_reader .expect_is_state_merkle_pruner_enabled() .returning(move || Ok(true)); + db_reader } diff --git a/state-sync/storage-service/server/src/tests/mod.rs b/state-sync/storage-service/server/src/tests/mod.rs index 7c6c1bb21e656..af71f856efd31 100644 --- a/state-sync/storage-service/server/src/tests/mod.rs +++ b/state-sync/storage-service/server/src/tests/mod.rs @@ -13,6 +13,10 @@ mod protocol_version; mod request_moderator; mod state_values; mod storage_summary; +mod subscribe_transaction_outputs; +mod subscribe_transactions; +mod subscribe_transactions_or_outputs; +mod subscription; mod transaction_outputs; mod transactions; mod transactions_or_outputs; diff --git a/state-sync/storage-service/server/src/tests/new_transaction_outputs.rs b/state-sync/storage-service/server/src/tests/new_transaction_outputs.rs index 2f397b6afe39a..d46c93c98d714 100644 --- a/state-sync/storage-service/server/src/tests/new_transaction_outputs.rs +++ b/state-sync/storage-service/server/src/tests/new_transaction_outputs.rs @@ -33,7 +33,7 @@ async fn test_get_new_transaction_outputs() { // Create the mock db reader let mut db_reader = - mock::create_mock_db_for_optimistic_fetch(highest_ledger_info.clone(), lowest_version); + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); utils::expect_get_transaction_outputs( &mut db_reader, peer_version + 1, @@ -103,7 +103,7 @@ async fn test_get_new_transaction_outputs_different_networks() { // Create the mock db reader let mut db_reader = - mock::create_mock_db_for_optimistic_fetch(highest_ledger_info.clone(), lowest_version); + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); utils::expect_get_transaction_outputs( &mut db_reader, peer_version_1 + 1, @@ -202,7 +202,7 @@ async fn test_get_new_transaction_outputs_epoch_change() { ); // Create the mock db reader - let mut db_reader = mock::create_mock_db_for_optimistic_fetch( + let mut db_reader = mock::create_mock_db_with_summary_updates( utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version), lowest_version, ); @@ -253,35 +253,41 @@ async fn test_get_new_transaction_outputs_epoch_change() { #[tokio::test(flavor = "multi_thread")] async fn test_get_new_transaction_outputs_max_chunk() { + // Create a storage service config with a configured max chunk size + let max_transaction_output_chunk_size = 400; + let storage_service_config = StorageServiceConfig { + max_transaction_output_chunk_size, + ..StorageServiceConfig::default() + }; + // Create test data let highest_version = 65660; let highest_epoch = 30; let lowest_version = 101; - let 
max_chunk_size = StorageServiceConfig::default().max_transaction_output_chunk_size; - let requested_chunk_size = max_chunk_size + 1; + let requested_chunk_size = max_transaction_output_chunk_size + 100; let peer_version = highest_version - requested_chunk_size; let highest_ledger_info = utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); let output_list_with_proof = utils::create_output_list_with_proof( peer_version + 1, - peer_version + requested_chunk_size, + peer_version + max_transaction_output_chunk_size, highest_version, ); // Create the mock db reader let mut db_reader = - mock::create_mock_db_for_optimistic_fetch(highest_ledger_info.clone(), lowest_version); + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); utils::expect_get_transaction_outputs( &mut db_reader, peer_version + 1, - max_chunk_size, + max_transaction_output_chunk_size, highest_version, output_list_with_proof.clone(), ); // Create the storage client and server let (mut mock_client, service, storage_service_notifier, mock_time, _) = - MockClient::new(Some(db_reader), None); + MockClient::new(Some(db_reader), Some(storage_service_config)); let active_optimistic_fetches = service.get_optimistic_fetches(); tokio::spawn(service.start()); diff --git a/state-sync/storage-service/server/src/tests/new_transactions.rs b/state-sync/storage-service/server/src/tests/new_transactions.rs index 07a810cd967ce..0eed390bce1f9 100644 --- a/state-sync/storage-service/server/src/tests/new_transactions.rs +++ b/state-sync/storage-service/server/src/tests/new_transactions.rs @@ -35,7 +35,7 @@ async fn test_get_new_transactions() { ); // Create the mock db reader - let mut db_reader = mock::create_mock_db_for_optimistic_fetch( + let mut db_reader = mock::create_mock_db_with_summary_updates( highest_ledger_info.clone(), lowest_version, ); @@ -118,7 +118,7 @@ async fn test_get_new_transactions_different_networks() { ); // Create the mock db reader - let mut db_reader = mock::create_mock_db_for_optimistic_fetch( + let mut db_reader = mock::create_mock_db_with_summary_updates( highest_ledger_info.clone(), lowest_version, ); @@ -228,7 +228,7 @@ async fn test_get_new_transactions_epoch_change() { ); // Create the mock db reader - let mut db_reader = mock::create_mock_db_for_optimistic_fetch( + let mut db_reader = mock::create_mock_db_with_summary_updates( utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version), lowest_version, ); @@ -286,31 +286,37 @@ async fn test_get_new_transactions_epoch_change() { #[tokio::test(flavor = "multi_thread")] async fn test_get_new_transactions_max_chunk() { + // Create a storage service config with a configured max chunk size + let max_transaction_chunk_size = 200; + let storage_service_config = StorageServiceConfig { + max_transaction_chunk_size, + ..StorageServiceConfig::default() + }; + // Test event inclusion for include_events in [true, false] { // Create test data let highest_version = 1034556; let highest_epoch = 343; let lowest_version = 3453; - let max_chunk_size = StorageServiceConfig::default().max_transaction_chunk_size; - let requested_chunk_size = max_chunk_size + 1; + let requested_chunk_size = max_transaction_chunk_size + 1; let peer_version = highest_version - requested_chunk_size; let highest_ledger_info = utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); let transaction_list_with_proof = utils::create_transaction_list_with_proof( peer_version + 1, - peer_version + requested_chunk_size, - 
peer_version + requested_chunk_size, + peer_version + max_transaction_chunk_size, + peer_version + max_transaction_chunk_size, include_events, ); // Create the mock db reader let mut db_reader = - mock::create_mock_db_for_optimistic_fetch(highest_ledger_info.clone(), lowest_version); + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); utils::expect_get_transactions( &mut db_reader, peer_version + 1, - max_chunk_size, + max_transaction_chunk_size, highest_version, include_events, transaction_list_with_proof.clone(), @@ -318,7 +324,7 @@ async fn test_get_new_transactions_max_chunk() { // Create the storage client and server let (mut mock_client, service, storage_service_notifier, mock_time, _) = - MockClient::new(Some(db_reader), None); + MockClient::new(Some(db_reader), Some(storage_service_config)); let active_optimistic_fetches = service.get_optimistic_fetches(); tokio::spawn(service.start()); diff --git a/state-sync/storage-service/server/src/tests/new_transactions_or_outputs.rs b/state-sync/storage-service/server/src/tests/new_transactions_or_outputs.rs index e0d62771cf37a..3a5682045c9ba 100644 --- a/state-sync/storage-service/server/src/tests/new_transactions_or_outputs.rs +++ b/state-sync/storage-service/server/src/tests/new_transactions_or_outputs.rs @@ -40,7 +40,7 @@ async fn test_get_new_transactions_or_outputs() { ); // Creates a small transaction list // Create the mock db reader - let mut db_reader = mock::create_mock_db_for_optimistic_fetch( + let mut db_reader = mock::create_mock_db_with_summary_updates( highest_ledger_info.clone(), lowest_version, ); @@ -154,7 +154,7 @@ async fn test_get_new_transactions_or_outputs_different_network() { ); // Creates a small transaction list // Create the mock db reader - let mut db_reader = mock::create_mock_db_for_optimistic_fetch( + let mut db_reader = mock::create_mock_db_with_summary_updates( highest_ledger_info.clone(), lowest_version, ); @@ -313,7 +313,7 @@ async fn test_get_new_transactions_or_outputs_epoch_change() { ); // Creates a small transaction list // Create the mock db reader - let mut db_reader = mock::create_mock_db_for_optimistic_fetch( + let mut db_reader = mock::create_mock_db_with_summary_updates( utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version), lowest_version, ); @@ -404,32 +404,32 @@ async fn test_get_new_transactions_or_outputs_max_chunk() { let highest_version = 65660; let highest_epoch = 30; let lowest_version = 101; - let max_chunk_size = StorageServiceConfig::default().max_transaction_output_chunk_size; - let requested_chunk_size = max_chunk_size + 1; + let max_transaction_output_chunk_size = 600; + let requested_chunk_size = max_transaction_output_chunk_size + 1; let peer_version = highest_version - requested_chunk_size; let highest_ledger_info = utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); let output_list_with_proof = utils::create_output_list_with_proof( peer_version + 1, - peer_version + requested_chunk_size, + peer_version + max_transaction_output_chunk_size, highest_version, ); let transaction_list_with_proof = utils::create_transaction_list_with_proof( peer_version + 1, peer_version + 1, - peer_version + requested_chunk_size, + peer_version + max_transaction_output_chunk_size, false, ); // Creates a small transaction list // Create the mock db reader let max_num_output_reductions = 5; let mut db_reader = - mock::create_mock_db_for_optimistic_fetch(highest_ledger_info.clone(), lowest_version); + 
mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); for i in 0..=max_num_output_reductions { utils::expect_get_transaction_outputs( &mut db_reader, peer_version + 1, - (max_chunk_size as u32 / (u32::pow(2, i as u32))) as u64, + (max_transaction_output_chunk_size as u32 / (u32::pow(2, i as u32))) as u64, highest_version, output_list_with_proof.clone(), ); @@ -438,21 +438,25 @@ async fn test_get_new_transactions_or_outputs_max_chunk() { utils::expect_get_transactions( &mut db_reader, peer_version + 1, - max_chunk_size, + max_transaction_output_chunk_size, highest_version, false, transaction_list_with_proof.clone(), ); } - // Create the storage client and server - let storage_config = utils::configure_network_chunk_limit( + // Create the storage service config + let mut storage_service_config = utils::configure_network_chunk_limit( fallback_to_transactions, &output_list_with_proof, &transaction_list_with_proof, ); + storage_service_config.max_transaction_output_chunk_size = + max_transaction_output_chunk_size; + + // Create the storage client and server let (mut mock_client, service, storage_service_notifier, mock_time, _) = - MockClient::new(Some(db_reader), Some(storage_config)); + MockClient::new(Some(db_reader), Some(storage_service_config)); let active_optimistic_fetches = service.get_optimistic_fetches(); tokio::spawn(service.start()); diff --git a/state-sync/storage-service/server/src/tests/optimistic_fetch.rs b/state-sync/storage-service/server/src/tests/optimistic_fetch.rs index f299e8c15f589..7b51375045e8d 100644 --- a/state-sync/storage-service/server/src/tests/optimistic_fetch.rs +++ b/state-sync/storage-service/server/src/tests/optimistic_fetch.rs @@ -21,16 +21,16 @@ use aptos_storage_service_types::{ NewTransactionsOrOutputsWithProofRequest, NewTransactionsWithProofRequest, StorageServiceRequest, }, - responses::{CompleteDataRange, StorageServerSummary}, + responses::StorageServerSummary, }; use aptos_time_service::TimeService; -use aptos_types::{epoch_change::EpochChangeProof, ledger_info::LedgerInfoWithSignatures}; +use aptos_types::epoch_change::EpochChangeProof; use arc_swap::ArcSwap; use dashmap::DashMap; use futures::channel::oneshot; use lru::LruCache; use rand::{rngs::OsRng, Rng}; -use std::sync::Arc; +use std::{collections::HashMap, sync::Arc}; use tokio::runtime::Handle; #[tokio::test] @@ -78,6 +78,7 @@ async fn test_peers_with_ready_optimistic_fetches() { storage_service_config, time_service.clone(), )); + let subscriptions = Arc::new(Mutex::new(HashMap::new())); // Verify that there are no peers with ready optimistic fetches let peers_with_ready_optimistic_fetches = @@ -89,6 +90,7 @@ async fn test_peers_with_ready_optimistic_fetches() { lru_response_cache.clone(), request_moderator.clone(), storage_reader.clone(), + subscriptions.clone(), time_service.clone(), ) .await @@ -97,7 +99,7 @@ async fn test_peers_with_ready_optimistic_fetches() { // Update the storage server summary so that there is new data for optimistic fetch 1 let synced_ledger_info = - update_storage_server_summary(cached_storage_server_summary.clone(), 2, 1); + utils::update_storage_summary_cache(cached_storage_server_summary.clone(), 2, 1); // Verify that optimistic fetch 1 is ready let peers_with_ready_optimistic_fetches = @@ -109,6 +111,7 @@ async fn test_peers_with_ready_optimistic_fetches() { lru_response_cache.clone(), request_moderator.clone(), storage_reader.clone(), + subscriptions.clone(), time_service.clone(), ) .await @@ -123,7 +126,7 @@ async fn 
test_peers_with_ready_optimistic_fetches() { // Update the storage server summary so that there is new data for optimistic fetch 2, // but the optimistic fetch is invalid because it doesn't respect an epoch boundary. - let _ = update_storage_server_summary(cached_storage_server_summary.clone(), 100, 2); + let _ = utils::update_storage_summary_cache(cached_storage_server_summary.clone(), 100, 2); // Verify that optimistic fetch 2 is not returned because it was invalid let peers_with_ready_optimistic_fetches = @@ -135,6 +138,7 @@ async fn test_peers_with_ready_optimistic_fetches() { lru_response_cache, request_moderator, storage_reader, + subscriptions, time_service, ) .await @@ -171,6 +175,7 @@ async fn test_remove_expired_optimistic_fetches() { storage_service_config, time_service.clone(), )); + let subscriptions = Arc::new(Mutex::new(HashMap::new())); // Create the first batch of test optimistic fetches let num_optimistic_fetches_in_batch = 10; @@ -188,7 +193,7 @@ async fn test_remove_expired_optimistic_fetches() { utils::elapse_time(max_optimistic_fetch_period_ms / 2, &time_service).await; // Update the storage server summary so that there is new data - let _ = update_storage_server_summary(cached_storage_server_summary.clone(), 1, 1); + let _ = utils::update_storage_summary_cache(cached_storage_server_summary.clone(), 1, 1); // Remove the expired optimistic fetches and verify none were removed let peers_with_ready_optimistic_fetches = @@ -200,6 +205,7 @@ async fn test_remove_expired_optimistic_fetches() { lru_response_cache.clone(), request_moderator.clone(), storage.clone(), + subscriptions.clone(), time_service.clone(), ) .await @@ -233,6 +239,7 @@ async fn test_remove_expired_optimistic_fetches() { lru_response_cache.clone(), request_moderator.clone(), storage.clone(), + subscriptions.clone(), time_service.clone(), ) .await @@ -253,6 +260,7 @@ async fn test_remove_expired_optimistic_fetches() { lru_response_cache, request_moderator, storage.clone(), + subscriptions, time_service.clone(), ) .await @@ -313,27 +321,3 @@ fn create_optimistic_fetch_request( // Create and return the optimistic fetch request OptimisticFetchRequest::new(storage_service_request, response_sender, time_service) } - -/// Updates the storage server summary with new data and returns the synced ledger info -fn update_storage_server_summary( - cached_storage_server_summary: Arc>, - highest_synced_version: u64, - highest_synced_epoch: u64, -) -> LedgerInfoWithSignatures { - // Create the storage server summary and synced ledger info - let mut storage_server_summary = StorageServerSummary::default(); - let highest_synced_ledger_info = - utils::create_test_ledger_info_with_sigs(highest_synced_epoch, highest_synced_version); - - // Update the epoch ending ledger infos and synced ledger info - storage_server_summary - .data_summary - .epoch_ending_ledger_infos = Some(CompleteDataRange::new(0, highest_synced_epoch).unwrap()); - storage_server_summary.data_summary.synced_ledger_info = - Some(highest_synced_ledger_info.clone()); - - // Update the cached storage server summary - cached_storage_server_summary.store(Arc::new(storage_server_summary)); - - highest_synced_ledger_info -} diff --git a/state-sync/storage-service/server/src/tests/request_moderator.rs b/state-sync/storage-service/server/src/tests/request_moderator.rs index 66da5d57d26a7..f32c7fa01123a 100644 --- a/state-sync/storage-service/server/src/tests/request_moderator.rs +++ b/state-sync/storage-service/server/src/tests/request_moderator.rs @@ -24,11 +24,7 @@ 
use aptos_storage_service_types::{ use aptos_time_service::MockTimeService; use aptos_types::{account_address::AccountAddress, network_address::NetworkAddress, PeerId}; use claims::assert_matches; -use std::{collections::HashMap, future::Future, str::FromStr, sync::Arc, time::Duration}; -use tokio::time::timeout; - -// Useful test constants -const MAX_WAIT_TIME_SECS: u64 = 60; +use std::{collections::HashMap, str::FromStr, sync::Arc, time::Duration}; #[tokio::test] async fn test_request_moderator_ignore_pfn() { @@ -423,14 +419,6 @@ async fn send_invalid_transaction_request( mock_client.wait_for_response(receiver).await } -/// Spawns the given task with a timeout -async fn spawn_with_timeout(task: impl Future, timeout_error_message: &str) { - let timeout_duration = Duration::from_secs(MAX_WAIT_TIME_SECS); - timeout(timeout_duration, task) - .await - .expect(timeout_error_message) -} - /// Waits for the request moderator to garbage collect the peer state async fn wait_for_request_moderator_to_garbage_collect( unhealthy_peer_states: Arc>>, @@ -454,7 +442,7 @@ async fn wait_for_request_moderator_to_garbage_collect( }; // Spawn the task with a timeout - spawn_with_timeout( + utils::spawn_with_timeout( garbage_collect, "Timed-out while waiting for the request moderator to perform garbage collection", ) @@ -496,7 +484,7 @@ async fn wait_for_request_moderator_to_unblock_peer( }; // Spawn the task with a timeout - spawn_with_timeout( + utils::spawn_with_timeout( unblock_peer, "Timed-out while waiting for the request moderator to unblock the peer", ) diff --git a/state-sync/storage-service/server/src/tests/storage_summary.rs b/state-sync/storage-service/server/src/tests/storage_summary.rs index 46df4e827a9be..ad010302a8076 100644 --- a/state-sync/storage-service/server/src/tests/storage_summary.rs +++ b/state-sync/storage-service/server/src/tests/storage_summary.rs @@ -62,7 +62,7 @@ async fn test_refresh_cached_storage_summary() { cached_storage_server_summary.clone(), storage_reader.clone(), storage_service_config, - cached_summary_update_notifier.clone(), + vec![cached_summary_update_notifier.clone()], ); // Verify that the cached summary update listener is notified @@ -102,7 +102,7 @@ async fn test_refresh_cached_storage_summary() { cached_storage_server_summary.clone(), storage_reader.clone(), storage_service_config, - cached_summary_update_notifier.clone(), + vec![cached_summary_update_notifier.clone()], ); // Verify that the cached summary update listener is notified @@ -129,7 +129,7 @@ async fn test_refresh_cached_storage_summary() { cached_storage_server_summary.clone(), storage_reader.clone(), storage_service_config, - cached_summary_update_notifier.clone(), + vec![cached_summary_update_notifier.clone()], ); // Verify that the cached summary update listener is notified diff --git a/state-sync/storage-service/server/src/tests/subscribe_transaction_outputs.rs b/state-sync/storage-service/server/src/tests/subscribe_transaction_outputs.rs new file mode 100644 index 0000000000000..c7aafea832dd0 --- /dev/null +++ b/state-sync/storage-service/server/src/tests/subscribe_transaction_outputs.rs @@ -0,0 +1,597 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +use crate::tests::{mock, mock::MockClient, utils}; +use aptos_config::{ + config::StorageServiceConfig, + network_id::{NetworkId, PeerNetworkId}, +}; +use aptos_types::{epoch_change::EpochChangeProof, PeerId}; +use claims::assert_none; + +#[tokio::test(flavor = "multi_thread")] +async fn 
test_subscribe_transaction_outputs_different_networks() { + // Test small and large chunk sizes + let max_output_chunk_size = StorageServiceConfig::default().max_transaction_output_chunk_size; + for chunk_size in [100, max_output_chunk_size] { + // Create test data + let highest_version = 45576; + let highest_epoch = 43; + let lowest_version = 4566; + let peer_version_1 = highest_version - chunk_size; + let peer_version_2 = highest_version - (chunk_size - 10); + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + let output_list_with_proof_1 = utils::create_output_list_with_proof( + peer_version_1 + 1, + highest_version, + highest_version, + ); + let output_list_with_proof_2 = utils::create_output_list_with_proof( + peer_version_2 + 1, + highest_version, + highest_version, + ); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + utils::expect_get_transaction_outputs( + &mut db_reader, + peer_version_1 + 1, + highest_version - peer_version_1, + highest_version, + output_list_with_proof_1.clone(), + ); + utils::expect_get_transaction_outputs( + &mut db_reader, + peer_version_2 + 1, + highest_version - peer_version_2, + highest_version, + output_list_with_proof_2.clone(), + ); + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), None); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Send a request to subscribe to transaction outputs for peer 1 + let peer_id = PeerId::random(); + let subscription_stream_id = 200; + let peer_network_1 = PeerNetworkId::new(NetworkId::Public, peer_id); + let mut response_receiver_1 = utils::subscribe_to_transaction_outputs_for_peer( + &mut mock_client, + peer_version_1, + highest_epoch, + subscription_stream_id, + 0, + Some(peer_network_1), + ) + .await; + + // Send a request to subscribe to transaction outputs for peer 2 + let peer_network_2 = PeerNetworkId::new(NetworkId::Vfn, peer_id); + let mut response_receiver_2 = utils::subscribe_to_transaction_outputs_for_peer( + &mut mock_client, + peer_version_2, + highest_epoch, + subscription_stream_id, + 0, + Some(peer_network_2), + ) + .await; + + // Wait until the subscriptions are active + utils::wait_for_active_subscriptions(active_subscriptions.clone(), 2).await; + + // Verify no subscription response has been received yet + assert_none!(response_receiver_1.try_recv().unwrap()); + assert_none!(response_receiver_2.try_recv().unwrap()); + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify a response is received and that it contains the correct data for both peers + utils::verify_new_transaction_outputs_with_proof( + &mut mock_client, + response_receiver_1, + output_list_with_proof_1, + highest_ledger_info.clone(), + ) + .await; + utils::verify_new_transaction_outputs_with_proof( + &mut mock_client, + response_receiver_2, + output_list_with_proof_2, + highest_ledger_info, + ) + .await; + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transaction_outputs_epoch_change() { + // Create test data + let highest_version = 45576; + let highest_epoch = 1032; + let lowest_version = 4566; + let peer_version = highest_version - 100; + let peer_epoch = highest_epoch - 20; + let 
epoch_change_version = peer_version + 45; + let epoch_change_proof = EpochChangeProof { + ledger_info_with_sigs: vec![utils::create_test_ledger_info_with_sigs( + peer_epoch, + epoch_change_version, + )], + more: false, + }; + let output_list_with_proof = utils::create_output_list_with_proof( + peer_version + 1, + epoch_change_version, + epoch_change_version, + ); + + // Create the mock db reader + let mut db_reader = mock::create_mock_db_with_summary_updates( + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version), + lowest_version, + ); + utils::expect_get_transaction_outputs( + &mut db_reader, + peer_version + 1, + epoch_change_version - peer_version, + epoch_change_version, + output_list_with_proof.clone(), + ); + utils::expect_get_epoch_ending_ledger_infos( + &mut db_reader, + peer_epoch, + peer_epoch + 1, + epoch_change_proof.clone(), + ); + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), None); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Send a request to subscribe to transaction outputs + let response_receiver = utils::subscribe_to_transaction_outputs( + &mut mock_client, + peer_version, + peer_epoch, + utils::get_random_u64(), + 0, + ) + .await; + + // Wait until the subscription is active + utils::wait_for_active_subscriptions(active_subscriptions.clone(), 1).await; + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify a response is received and that it contains the correct data + utils::verify_new_transaction_outputs_with_proof( + &mut mock_client, + response_receiver, + output_list_with_proof, + epoch_change_proof.ledger_info_with_sigs[0].clone(), + ) + .await; +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transaction_outputs_max_chunk() { + // Create a storage service config with a configured max chunk size + let max_transaction_output_chunk_size = 301; + let storage_service_config = StorageServiceConfig { + max_transaction_output_chunk_size, + ..StorageServiceConfig::default() + }; + + // Create test data + let highest_version = 1034556; + let highest_epoch = 343; + let lowest_version = 3453; + let requested_chunk_size = max_transaction_output_chunk_size + 100; + let peer_version = highest_version - requested_chunk_size; + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + let output_list_with_proof = utils::create_output_list_with_proof( + peer_version + 1, + peer_version + requested_chunk_size, + highest_version, + ); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + utils::expect_get_transaction_outputs( + &mut db_reader, + peer_version + 1, + max_transaction_output_chunk_size, + highest_version, + output_list_with_proof.clone(), + ); + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_service_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Send a request to subscribe to new transaction outputs + let response_receiver = utils::subscribe_to_transaction_outputs( + &mut mock_client, + peer_version, + highest_epoch, + 
utils::get_random_u64(), + 0, + ) + .await; + + // Wait until the subscription is active + utils::wait_for_active_subscriptions(active_subscriptions.clone(), 1).await; + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify a response is received and that it contains the correct data + utils::verify_new_transaction_outputs_with_proof( + &mut mock_client, + response_receiver, + output_list_with_proof, + highest_ledger_info, + ) + .await; +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transaction_outputs_streaming() { + // Create a storage service config + let max_transaction_output_chunk_size = 200; + let storage_service_config = StorageServiceConfig { + max_transaction_output_chunk_size, + ..Default::default() + }; + + // Create test data + let num_stream_requests = 30; + let highest_version = 45576; + let highest_epoch = 43; + let lowest_version = 4566; + let peer_version = highest_version - (num_stream_requests * max_transaction_output_chunk_size); + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + + // Create the output lists with proofs + let output_lists_with_proofs: Vec<_> = (0..num_stream_requests) + .map(|i| { + let start_version = peer_version + (i * max_transaction_output_chunk_size) + 1; + let end_version = start_version + max_transaction_output_chunk_size - 1; + utils::create_output_list_with_proof(start_version, end_version, highest_version) + }) + .collect(); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + for i in 0..num_stream_requests { + utils::expect_get_transaction_outputs( + &mut db_reader, + peer_version + (i * max_transaction_output_chunk_size) + 1, + max_transaction_output_chunk_size, + highest_version, + output_lists_with_proofs[i as usize].clone(), + ); + } + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_service_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Create a new peer and stream ID + let peer_network_id = PeerNetworkId::random(); + let stream_id = utils::get_random_u64(); + + // Send multiple batches of requests to the server and verify the responses + let num_batches_to_send = 6; + for batch_id in 0..num_batches_to_send { + // Send the request batch to subscribe to transaction outputs + let num_requests_per_batch = num_stream_requests / num_batches_to_send; + let first_request_index = batch_id * num_requests_per_batch; + let last_request_index = (batch_id * num_requests_per_batch) + num_requests_per_batch - 1; + let mut response_receivers = utils::send_output_subscription_request_batch( + &mut mock_client, + peer_network_id, + first_request_index, + last_request_index, + stream_id, + peer_version, + highest_epoch, + ) + .await; + + // Wait until the stream requests are active + utils::wait_for_active_stream_requests( + active_subscriptions.clone(), + peer_network_id, + num_requests_per_batch as usize, + ) + .await; + + // Force the subscription handler to work + utils::force_cache_update_notification( + &mut mock_client, + &mock_time, + &storage_service_notifier, + true, + true, + ) + .await; + + // Continuously run the subscription service until the batch responses are received + 
for stream_request_index in first_request_index..=last_request_index { + // Verify that the correct response is received + utils::verify_output_subscription_response( + output_lists_with_proofs.clone(), + highest_ledger_info.clone(), + &mut mock_client, + &mut response_receivers, + stream_request_index, + ) + .await; + } + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transaction_outputs_streaming_epoch_change() { + // Create a storage service config + let max_transaction_output_chunk_size = 5; + let max_num_active_subscriptions = 50; + let storage_service_config = StorageServiceConfig { + max_transaction_output_chunk_size, + max_num_active_subscriptions, + ..Default::default() + }; + + // Create test data + let highest_version = 1000; + let highest_epoch = 2; + let lowest_version = 0; + let peer_version = highest_version - 500; + let peer_epoch = highest_epoch - 1; + let epoch_change_version = peer_version + 97; + + // Create the highest ledger info and epoch change proof + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + let epoch_change_ledger_info = + utils::create_epoch_ending_ledger_info(peer_epoch, epoch_change_version); + let epoch_change_proof = EpochChangeProof { + ledger_info_with_sigs: vec![epoch_change_ledger_info.clone()], + more: false, + }; + + // Create the output lists with proofs + let chunk_start_and_end_versions = utils::create_data_chunks_with_epoch_boundary( + max_transaction_output_chunk_size, + max_num_active_subscriptions, + peer_version, + epoch_change_version, + ); + let output_lists_with_proofs: Vec<_> = chunk_start_and_end_versions + .iter() + .map(|(start_version, end_version)| { + utils::create_output_list_with_proof(*start_version, *end_version, highest_version) + }) + .collect(); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + utils::expect_get_epoch_ending_ledger_infos( + &mut db_reader, + peer_epoch, + peer_epoch + 1, + epoch_change_proof.clone(), + ); + for (i, (start_version, end_version)) in chunk_start_and_end_versions.iter().enumerate() { + let proof_version = if *end_version <= epoch_change_version { + epoch_change_version + } else { + highest_version + }; + utils::expect_get_transaction_outputs( + &mut db_reader, + *start_version, + end_version - start_version + 1, + proof_version, + output_lists_with_proofs[i].clone(), + ); + } + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_service_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Create a new peer and stream ID + let peer_network_id = PeerNetworkId::random(); + let stream_id = utils::get_random_u64(); + + // Send the request batch to subscribe to transaction outputs + let mut response_receivers = utils::send_output_subscription_request_batch( + &mut mock_client, + peer_network_id, + 0, + max_num_active_subscriptions - 1, + stream_id, + peer_version, + peer_epoch, + ) + .await; + + // Wait until the stream requests are active + utils::wait_for_active_stream_requests( + active_subscriptions.clone(), + peer_network_id, + max_num_active_subscriptions as usize, + ) + .await; + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // 
Continuously run the subscription service until all the responses are received + for stream_request_index in 0..max_num_active_subscriptions { + // Determine the target ledger info for the response + let first_output_version = output_lists_with_proofs[stream_request_index as usize] + .first_transaction_output_version + .unwrap(); + let target_ledger_info = if first_output_version > epoch_change_version { + highest_ledger_info.clone() + } else { + epoch_change_ledger_info.clone() + }; + + // Verify that the correct response is received + utils::verify_output_subscription_response( + output_lists_with_proofs.clone(), + target_ledger_info.clone(), + &mut mock_client, + &mut response_receivers, + stream_request_index, + ) + .await; + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transaction_outputs_streaming_loop() { + // Create a storage service config + let max_transaction_output_chunk_size = 100; + let storage_service_config = StorageServiceConfig { + max_transaction_output_chunk_size, + ..Default::default() + }; + + // Create test data + let num_stream_requests = 30; + let highest_version = 45576; + let highest_epoch = 43; + let lowest_version = 4566; + let peer_version = highest_version - (num_stream_requests * max_transaction_output_chunk_size); + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + + // Create the output lists with proofs + let output_lists_with_proofs: Vec<_> = (0..num_stream_requests) + .map(|i| { + let start_version = peer_version + (i * max_transaction_output_chunk_size) + 1; + let end_version = start_version + max_transaction_output_chunk_size - 1; + utils::create_output_list_with_proof(start_version, end_version, highest_version) + }) + .collect(); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + for i in 0..num_stream_requests { + utils::expect_get_transaction_outputs( + &mut db_reader, + peer_version + (i * max_transaction_output_chunk_size) + 1, + max_transaction_output_chunk_size, + highest_version, + output_lists_with_proofs[i as usize].clone(), + ); + } + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_service_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Create a new peer and stream ID + let peer_network_id = PeerNetworkId::random(); + let stream_id = utils::get_random_u64(); + + // Send the requests to the server and verify the responses + let mut response_receivers = utils::send_output_subscription_request_batch( + &mut mock_client, + peer_network_id, + 0, + num_stream_requests - 1, + stream_id, + peer_version, + highest_epoch, + ) + .await; + + // Wait until the stream requests are active + utils::wait_for_active_stream_requests( + active_subscriptions.clone(), + peer_network_id, + num_stream_requests as usize, + ) + .await; + + // Verify the state of the subscription stream + utils::verify_subscription_stream_entry( + active_subscriptions.clone(), + peer_network_id, + num_stream_requests, + peer_version, + highest_epoch, + max_transaction_output_chunk_size, + ); + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify all responses are received + for stream_request_index in 
0..num_stream_requests { + let response_receiver = response_receivers.remove(&stream_request_index).unwrap(); + utils::verify_new_transaction_outputs_with_proof( + &mut mock_client, + response_receiver, + output_lists_with_proofs[stream_request_index as usize].clone(), + highest_ledger_info.clone(), + ) + .await; + } +} diff --git a/state-sync/storage-service/server/src/tests/subscribe_transactions.rs b/state-sync/storage-service/server/src/tests/subscribe_transactions.rs new file mode 100644 index 0000000000000..fa5523cdb59d3 --- /dev/null +++ b/state-sync/storage-service/server/src/tests/subscribe_transactions.rs @@ -0,0 +1,701 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +use crate::tests::{mock, mock::MockClient, utils}; +use aptos_config::{ + config::StorageServiceConfig, + network_id::{NetworkId, PeerNetworkId}, +}; +use aptos_network::protocols::network::RpcError; +use aptos_types::{ + epoch_change::EpochChangeProof, ledger_info::LedgerInfoWithSignatures, + transaction::TransactionListWithProof, PeerId, +}; +use bytes::Bytes; +use claims::assert_none; +use futures::channel::oneshot::Receiver; +use std::collections::HashMap; + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transactions_different_networks() { + // Test small and large chunk sizes + let max_transaction_chunk_size = StorageServiceConfig::default().max_transaction_chunk_size; + for chunk_size in [100, max_transaction_chunk_size] { + // Test event inclusion + for include_events in [true, false] { + // Create test data + let highest_version = 45576; + let highest_epoch = 43; + let lowest_version = 4566; + let peer_version_1 = highest_version - chunk_size; + let peer_version_2 = highest_version - (chunk_size - 10); + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + let transaction_list_with_proof_1 = utils::create_transaction_list_with_proof( + peer_version_1 + 1, + highest_version, + highest_version, + include_events, + ); + let transaction_list_with_proof_2 = utils::create_transaction_list_with_proof( + peer_version_2 + 1, + highest_version, + highest_version, + include_events, + ); + + // Create the mock db reader + let mut db_reader = mock::create_mock_db_with_summary_updates( + highest_ledger_info.clone(), + lowest_version, + ); + utils::expect_get_transactions( + &mut db_reader, + peer_version_1 + 1, + highest_version - peer_version_1, + highest_version, + include_events, + transaction_list_with_proof_1.clone(), + ); + utils::expect_get_transactions( + &mut db_reader, + peer_version_2 + 1, + highest_version - peer_version_2, + highest_version, + include_events, + transaction_list_with_proof_2.clone(), + ); + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), None); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Send a request to subscribe to transactions for peer 1 + let peer_id = PeerId::random(); + let subscription_stream_id = 200; + let peer_network_1 = PeerNetworkId::new(NetworkId::Public, peer_id); + let mut response_receiver_1 = utils::subscribe_to_transactions_for_peer( + &mut mock_client, + peer_version_1, + highest_epoch, + include_events, + subscription_stream_id, + 0, + Some(peer_network_1), + ) + .await; + + // Send a request to subscribe to transactions for peer 2 + let peer_network_2 = PeerNetworkId::new(NetworkId::Vfn, peer_id); + let mut 
response_receiver_2 = utils::subscribe_to_transactions_for_peer( + &mut mock_client, + peer_version_2, + highest_epoch, + include_events, + subscription_stream_id, + 0, + Some(peer_network_2), + ) + .await; + + // Wait until the subscriptions are active + utils::wait_for_active_subscriptions(active_subscriptions.clone(), 2).await; + + // Verify no subscription response has been received yet + assert_none!(response_receiver_1.try_recv().unwrap()); + assert_none!(response_receiver_2.try_recv().unwrap()); + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify a response is received and that it contains the correct data for both peers + utils::verify_new_transactions_with_proof( + &mut mock_client, + response_receiver_1, + transaction_list_with_proof_1, + highest_ledger_info.clone(), + ) + .await; + utils::verify_new_transactions_with_proof( + &mut mock_client, + response_receiver_2, + transaction_list_with_proof_2, + highest_ledger_info, + ) + .await; + } + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transactions_epoch_change() { + // Test event inclusion + for include_events in [true, false] { + // Create test data + let highest_version = 45576; + let highest_epoch = 1032; + let lowest_version = 4566; + let peer_version = highest_version - 100; + let peer_epoch = highest_epoch - 20; + let epoch_change_version = peer_version + 45; + let epoch_change_proof = EpochChangeProof { + ledger_info_with_sigs: vec![utils::create_test_ledger_info_with_sigs( + peer_epoch, + epoch_change_version, + )], + more: false, + }; + let transaction_list_with_proof = utils::create_transaction_list_with_proof( + peer_version + 1, + epoch_change_version, + epoch_change_version, + include_events, + ); + + // Create the mock db reader + let mut db_reader = mock::create_mock_db_with_summary_updates( + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version), + lowest_version, + ); + utils::expect_get_transactions( + &mut db_reader, + peer_version + 1, + epoch_change_version - peer_version, + epoch_change_version, + include_events, + transaction_list_with_proof.clone(), + ); + utils::expect_get_epoch_ending_ledger_infos( + &mut db_reader, + peer_epoch, + peer_epoch + 1, + epoch_change_proof.clone(), + ); + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), None); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Send a request to subscribe to transactions + let response_receiver = utils::subscribe_to_transactions( + &mut mock_client, + peer_version, + peer_epoch, + include_events, + utils::get_random_u64(), + 0, + ) + .await; + + // Wait until the subscription is active + utils::wait_for_active_subscriptions(active_subscriptions.clone(), 1).await; + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify a response is received and that it contains the correct data + utils::verify_new_transactions_with_proof( + &mut mock_client, + response_receiver, + transaction_list_with_proof, + epoch_change_proof.ledger_info_with_sigs[0].clone(), + ) + .await; + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transactions_max_chunk() { + // Create a storage service config 
with a configured max chunk size + let max_transaction_chunk_size = 400; + let storage_service_config = StorageServiceConfig { + max_transaction_chunk_size, + ..StorageServiceConfig::default() + }; + + // Test event inclusion + for include_events in [true, false] { + // Create test data + let highest_version = 1034556; + let highest_epoch = 343; + let lowest_version = 3453; + let requested_chunk_size = max_transaction_chunk_size + 1; + let peer_version = highest_version - requested_chunk_size; + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + let transaction_list_with_proof = utils::create_transaction_list_with_proof( + peer_version + 1, + peer_version + requested_chunk_size, + peer_version + requested_chunk_size, + include_events, + ); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + utils::expect_get_transactions( + &mut db_reader, + peer_version + 1, + max_transaction_chunk_size, + highest_version, + include_events, + transaction_list_with_proof.clone(), + ); + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_service_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Send a request to subscribe to new transactions + let response_receiver = utils::subscribe_to_transactions( + &mut mock_client, + peer_version, + highest_epoch, + include_events, + utils::get_random_u64(), + 0, + ) + .await; + + // Wait until the subscription is active + utils::wait_for_active_subscriptions(active_subscriptions.clone(), 1).await; + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify a response is received and that it contains the correct data + utils::verify_new_transactions_with_proof( + &mut mock_client, + response_receiver, + transaction_list_with_proof, + highest_ledger_info, + ) + .await; + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transactions_streaming() { + // Create a storage service config + let max_transaction_chunk_size = 100; + let storage_service_config = StorageServiceConfig { + max_transaction_chunk_size, + ..Default::default() + }; + + // Create test data + let num_stream_requests = 20; + let highest_version = 100_000; + let highest_epoch = 10; + let lowest_version = 10_000; + let peer_version = 50_000; + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + + // Create the transaction lists with proofs + let transaction_lists_with_proofs: Vec<_> = (0..num_stream_requests) + .map(|i| { + let start_version = peer_version + (i * max_transaction_chunk_size) + 1; + let end_version = start_version + max_transaction_chunk_size - 1; + utils::create_transaction_list_with_proof( + start_version, + end_version, + highest_version, + false, + ) + }) + .collect(); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + for i in 0..num_stream_requests { + utils::expect_get_transactions( + &mut db_reader, + peer_version + (i * max_transaction_chunk_size) + 1, + max_transaction_chunk_size, + highest_version, + false, + transaction_lists_with_proofs[i as usize].clone(), + ); + } + + // Create 
the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_service_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Create a new peer and stream ID + let peer_network_id = PeerNetworkId::random(); + let stream_id = utils::get_random_u64(); + + // Send multiple batches of requests to the server and verify the responses + let num_batches_to_send = 10; + for batch_id in 0..num_batches_to_send { + // Send the request batch to subscribe to transaction outputs + let num_requests_per_batch = num_stream_requests / num_batches_to_send; + let first_request_index = batch_id * num_requests_per_batch; + let last_request_index = (batch_id * num_requests_per_batch) + num_requests_per_batch - 1; + let mut response_receivers = send_transaction_subscription_request_batch( + &mut mock_client, + peer_network_id, + first_request_index, + last_request_index, + stream_id, + peer_version, + highest_epoch, + ) + .await; + + // Wait until the stream requests are active + utils::wait_for_active_stream_requests( + active_subscriptions.clone(), + peer_network_id, + num_requests_per_batch as usize, + ) + .await; + + // Force the subscription handler to work + utils::force_cache_update_notification( + &mut mock_client, + &mock_time, + &storage_service_notifier, + true, + true, + ) + .await; + + // Continuously run the subscription service until the batch responses are received + for stream_request_index in first_request_index..=last_request_index { + // Verify that the correct response is received + verify_transaction_subscription_response( + transaction_lists_with_proofs.clone(), + highest_ledger_info.clone(), + &mut mock_client, + &mut response_receivers, + stream_request_index, + ) + .await; + } + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transactions_streaming_epoch_change() { + // Create a storage service config + let max_transaction_chunk_size = 50; + let max_num_active_subscriptions = 25; + let storage_service_config = StorageServiceConfig { + max_transaction_chunk_size, + max_num_active_subscriptions, + ..Default::default() + }; + + // Create test data + let highest_version = 10_000; + let highest_epoch = 2; + let lowest_version = 0; + let peer_version = highest_version - 5000; + let peer_epoch = highest_epoch - 1; + let epoch_change_version = peer_version + 97; + + // Create the highest ledger info and epoch change proof + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + let epoch_change_ledger_info = + utils::create_epoch_ending_ledger_info(peer_epoch, epoch_change_version); + let epoch_change_proof = EpochChangeProof { + ledger_info_with_sigs: vec![epoch_change_ledger_info.clone()], + more: false, + }; + + // Create the transaction lists with proofs + let chunk_start_and_end_versions = utils::create_data_chunks_with_epoch_boundary( + max_transaction_chunk_size, + max_num_active_subscriptions, + peer_version, + epoch_change_version, + ); + let transaction_lists_with_proofs: Vec<_> = chunk_start_and_end_versions + .iter() + .map(|(start_version, end_version)| { + utils::create_transaction_list_with_proof( + *start_version, + *end_version, + highest_version, + false, + ) + }) + .collect(); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + utils::expect_get_epoch_ending_ledger_infos( 
+ &mut db_reader, + peer_epoch, + peer_epoch + 1, + epoch_change_proof.clone(), + ); + for (i, (start_version, end_version)) in chunk_start_and_end_versions.iter().enumerate() { + let proof_version = if *end_version <= epoch_change_version { + epoch_change_version + } else { + highest_version + }; + utils::expect_get_transactions( + &mut db_reader, + *start_version, + end_version - start_version + 1, + proof_version, + false, + transaction_lists_with_proofs[i].clone(), + ); + } + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_service_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Create a new peer and stream ID + let peer_network_id = PeerNetworkId::random(); + let stream_id = utils::get_random_u64(); + + // Send the request batch to subscribe to transaction outputs + let mut response_receivers = send_transaction_subscription_request_batch( + &mut mock_client, + peer_network_id, + 0, + max_num_active_subscriptions - 1, + stream_id, + peer_version, + peer_epoch, + ) + .await; + + // Wait until the stream requests are active + utils::wait_for_active_stream_requests( + active_subscriptions.clone(), + peer_network_id, + max_num_active_subscriptions as usize, + ) + .await; + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Continuously run the subscription service until all the responses are received + for stream_request_index in 0..max_num_active_subscriptions { + // Determine the target ledger info for the response + let first_output_version = transaction_lists_with_proofs[stream_request_index as usize] + .first_transaction_version + .unwrap(); + let target_ledger_info = if first_output_version > epoch_change_version { + highest_ledger_info.clone() + } else { + epoch_change_ledger_info.clone() + }; + + // Verify that the correct response is received + verify_transaction_subscription_response( + transaction_lists_with_proofs.clone(), + target_ledger_info.clone(), + &mut mock_client, + &mut response_receivers, + stream_request_index, + ) + .await; + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transactions_streaming_loop() { + // Create a storage service config + let max_transaction_chunk_size = 100; + let storage_service_config = StorageServiceConfig { + max_transaction_chunk_size, + ..Default::default() + }; + + // Create test data + let num_stream_requests = 20; + let highest_version = 100_000; + let highest_epoch = 10; + let lowest_version = 10_000; + let peer_version = 50_000; + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + + // Create the transaction lists with proofs + let transaction_lists_with_proofs: Vec<_> = (0..num_stream_requests) + .map(|i| { + let start_version = peer_version + (i * max_transaction_chunk_size) + 1; + let end_version = start_version + max_transaction_chunk_size - 1; + utils::create_transaction_list_with_proof( + start_version, + end_version, + highest_version, + false, + ) + }) + .collect(); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + for i in 0..num_stream_requests { + utils::expect_get_transactions( + &mut db_reader, + peer_version + (i * max_transaction_chunk_size) + 1, + 
max_transaction_chunk_size, + highest_version, + false, + transaction_lists_with_proofs[i as usize].clone(), + ); + } + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_service_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Create a new peer and stream ID + let peer_network_id = PeerNetworkId::random(); + let stream_id = utils::get_random_u64(); + + // Send the requests to the server and verify the responses + let mut response_receivers = send_transaction_subscription_request_batch( + &mut mock_client, + peer_network_id, + 0, + num_stream_requests - 1, + stream_id, + peer_version, + highest_epoch, + ) + .await; + + // Wait until the stream requests are active + utils::wait_for_active_stream_requests( + active_subscriptions.clone(), + peer_network_id, + num_stream_requests as usize, + ) + .await; + + // Verify the state of the subscription stream + utils::verify_subscription_stream_entry( + active_subscriptions.clone(), + peer_network_id, + num_stream_requests, + peer_version, + highest_epoch, + max_transaction_chunk_size, + ); + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify all responses are received + for stream_request_index in 0..num_stream_requests { + let response_receiver = response_receivers.remove(&stream_request_index).unwrap(); + utils::verify_new_transactions_with_proof( + &mut mock_client, + response_receiver, + transaction_lists_with_proofs[stream_request_index as usize].clone(), + highest_ledger_info.clone(), + ) + .await; + } +} + +/// Sends a batch of transaction requests and +/// returns the response receivers for each request. +async fn send_transaction_subscription_request_batch( + mock_client: &mut MockClient, + peer_network_id: PeerNetworkId, + first_stream_request_index: u64, + last_stream_request_index: u64, + stream_id: u64, + peer_version: u64, + peer_epoch: u64, +) -> HashMap<u64, Receiver<Result<Bytes, RpcError>>> { + // Shuffle the stream request indices to emulate out of order requests + let stream_request_indices = + utils::create_shuffled_vector(first_stream_request_index, last_stream_request_index); + + // Send the requests and gather the response receivers + let mut response_receivers = HashMap::new(); + for stream_request_index in stream_request_indices { + // Send the transaction subscription request + let response_receiver = utils::subscribe_to_transactions_for_peer( + mock_client, + peer_version, + peer_epoch, + false, + stream_id, + stream_request_index, + Some(peer_network_id), + ) + .await; + + // Save the response receiver + response_receivers.insert(stream_request_index, response_receiver); + } + + response_receivers +}
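The send/verify helpers in this file follow a simple pattern: requests are sent in shuffled order to emulate out-of-order arrival, the response receivers are keyed by stream-request index, and verification then drains them strictly in stream order. A minimal, self-contained sketch of that pattern (a plain String stands in for the real response receiver, and a hard-coded shuffle stands in for utils::create_shuffled_vector):

    use std::collections::HashMap;

    fn main() {
        // Hard-coded stand-in for the shuffled indices produced by utils::create_shuffled_vector
        let shuffled_indices = vec![3u64, 0, 4, 1, 2];

        // "Send" each request out of order and key its receiver by the stream-request index
        let mut response_receivers: HashMap<u64, String> = HashMap::new();
        for index in shuffled_indices {
            response_receivers.insert(index, format!("receiver for request {index}"));
        }

        // Verification drains the map strictly in stream order, one index at a time
        for index in 0..5u64 {
            let receiver = response_receivers.remove(&index).unwrap();
            println!("verifying {receiver}");
        }
        assert!(response_receivers.is_empty());
    }
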
+ +/// Verifies that a response is received for a given stream request index +/// and that the response contains the correct data. +async fn verify_transaction_subscription_response( + expected_transaction_lists_with_proofs: Vec<TransactionListWithProof>, + expected_target_ledger_info: LedgerInfoWithSignatures, + mock_client: &mut MockClient, + response_receivers: &mut HashMap<u64, Receiver<Result<Bytes, RpcError>>>, + stream_request_index: u64, +) { + let response_receiver = response_receivers.remove(&stream_request_index).unwrap(); + utils::verify_new_transactions_with_proof( + mock_client, + response_receiver, + expected_transaction_lists_with_proofs[stream_request_index as usize].clone(), + expected_target_ledger_info, + ) + .await; +} diff --git a/state-sync/storage-service/server/src/tests/subscribe_transactions_or_outputs.rs b/state-sync/storage-service/server/src/tests/subscribe_transactions_or_outputs.rs new file mode 100644 index 0000000000000..45d17de5b9825 --- /dev/null +++ b/state-sync/storage-service/server/src/tests/subscribe_transactions_or_outputs.rs @@ -0,0 +1,941 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +use crate::tests::{mock, mock::MockClient, utils}; +use aptos_config::{ + config::StorageServiceConfig, + network_id::{NetworkId, PeerNetworkId}, +}; +use aptos_network::protocols::network::RpcError; +use aptos_types::{ + epoch_change::EpochChangeProof, + ledger_info::LedgerInfoWithSignatures, + transaction::{TransactionListWithProof, TransactionOutputListWithProof}, + PeerId, +}; +use bytes::Bytes; +use claims::assert_none; +use futures::channel::oneshot::Receiver; +use std::collections::HashMap; + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transactions_or_outputs_different_network() { + // Test small and large chunk sizes + let max_output_chunk_size = StorageServiceConfig::default().max_transaction_output_chunk_size; + for chunk_size in [100, max_output_chunk_size] { + // Test fallback to transaction syncing + for fallback_to_transactions in [false, true] { + // Create test data + let highest_version = 5060; + let highest_epoch = 30; + let lowest_version = 101; + let peer_version_1 = highest_version - chunk_size; + let peer_version_2 = highest_version - (chunk_size - 50); + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + let output_list_with_proof_1 = utils::create_output_list_with_proof( + peer_version_1 + 1, + highest_version, + highest_version, + ); + let output_list_with_proof_2 = utils::create_output_list_with_proof( + peer_version_2 + 1, + highest_version, + highest_version, + ); + let transaction_list_with_proof = utils::create_transaction_list_with_proof( + highest_version, + highest_version, + highest_version, + false, + ); // Creates a small transaction list + + // Create the mock db reader + let mut db_reader = mock::create_mock_db_with_summary_updates( + highest_ledger_info.clone(), + lowest_version, + ); + utils::expect_get_transaction_outputs( + &mut db_reader, + peer_version_1 + 1, + highest_version - peer_version_1, + highest_version, + output_list_with_proof_1.clone(), + ); + utils::expect_get_transaction_outputs( + &mut db_reader, + peer_version_2 + 1, + highest_version - peer_version_2, + highest_version, + output_list_with_proof_2.clone(), + ); + if fallback_to_transactions { + utils::expect_get_transactions( + &mut db_reader, + peer_version_1 + 1, + highest_version - peer_version_1, + highest_version, + false, + transaction_list_with_proof.clone(), + ); + utils::expect_get_transactions( + &mut db_reader, + peer_version_2 + 1, + highest_version - peer_version_2, + highest_version, + false, +
transaction_list_with_proof.clone(), + ); + } + + // Create the storage client and server + let storage_config = utils::configure_network_chunk_limit( + fallback_to_transactions, + &output_list_with_proof_1, + &transaction_list_with_proof, + ); + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Send a request to subscribe to transactions or outputs for peer 1 + let peer_id = PeerId::random(); + let subscription_stream_id = 56756; + let peer_network_1 = PeerNetworkId::new(NetworkId::Public, peer_id); + let mut response_receiver_1 = utils::subscribe_to_transactions_or_outputs_for_peer( + &mut mock_client, + peer_version_1, + highest_epoch, + false, + 0, // Outputs cannot be reduced and will fallback to transactions + subscription_stream_id, + 0, + Some(peer_network_1), + ) + .await; + + // Send a request to subscribe to transactions or outputs for peer 2 + let peer_network_2 = PeerNetworkId::new(NetworkId::Vfn, peer_id); + let mut response_receiver_2 = utils::subscribe_to_transactions_or_outputs_for_peer( + &mut mock_client, + peer_version_2, + highest_epoch, + false, + 0, // Outputs cannot be reduced and will fallback to transactions + subscription_stream_id, + 0, + Some(peer_network_2), + ) + .await; + + // Wait until the subscriptions are active + utils::wait_for_active_subscriptions(active_subscriptions.clone(), 2).await; + + // Verify no response has been received yet + assert_none!(response_receiver_1.try_recv().unwrap()); + assert_none!(response_receiver_2.try_recv().unwrap()); + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify a response is received and that it contains the correct data + if fallback_to_transactions { + utils::verify_new_transactions_or_outputs_with_proof( + &mut mock_client, + response_receiver_1, + Some(transaction_list_with_proof.clone()), + None, + highest_ledger_info.clone(), + ) + .await; + utils::verify_new_transactions_or_outputs_with_proof( + &mut mock_client, + response_receiver_2, + Some(transaction_list_with_proof), + None, + highest_ledger_info, + ) + .await; + } else { + utils::verify_new_transactions_or_outputs_with_proof( + &mut mock_client, + response_receiver_1, + None, + Some(output_list_with_proof_1.clone()), + highest_ledger_info.clone(), + ) + .await; + utils::verify_new_transactions_or_outputs_with_proof( + &mut mock_client, + response_receiver_2, + None, + Some(output_list_with_proof_2.clone()), + highest_ledger_info, + ) + .await; + } + } + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transactions_or_outputs_epoch_change() { + // Test fallback to transaction syncing + for fallback_to_transactions in [false, true] { + // Create test data + let highest_version = 10000; + let highest_epoch = 10000; + let lowest_version = 0; + let peer_version = highest_version - 1000; + let peer_epoch = highest_epoch - 1000; + let epoch_change_version = peer_version + 1; + let epoch_change_proof = EpochChangeProof { + ledger_info_with_sigs: vec![utils::create_test_ledger_info_with_sigs( + peer_epoch, + epoch_change_version, + )], + more: false, + }; + let output_list_with_proof = utils::create_output_list_with_proof( + peer_version + 1, + epoch_change_version, + epoch_change_version, + ); + let transaction_list_with_proof = 
utils::create_transaction_list_with_proof( + peer_version + 1, + peer_version + 1, + epoch_change_version, + false, + ); // Creates a small transaction list + + // Create the mock db reader + let mut db_reader = mock::create_mock_db_with_summary_updates( + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version), + lowest_version, + ); + utils::expect_get_epoch_ending_ledger_infos( + &mut db_reader, + peer_epoch, + peer_epoch + 1, + epoch_change_proof.clone(), + ); + utils::expect_get_transaction_outputs( + &mut db_reader, + peer_version + 1, + epoch_change_version - peer_version, + epoch_change_version, + output_list_with_proof.clone(), + ); + if fallback_to_transactions { + utils::expect_get_transactions( + &mut db_reader, + peer_version + 1, + epoch_change_version - peer_version, + epoch_change_version, + false, + transaction_list_with_proof.clone(), + ); + } + + // Create the storage client and server + let storage_config = utils::configure_network_chunk_limit( + fallback_to_transactions, + &output_list_with_proof, + &transaction_list_with_proof, + ); + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Send a request to subscribe to new transactions or outputs + let response_receiver = utils::subscribe_to_transactions_or_outputs( + &mut mock_client, + peer_version, + peer_epoch, + false, + 5, + utils::get_random_u64(), + 0, + ) + .await; + + // Wait until the subscription is active + utils::wait_for_active_subscriptions(active_subscriptions.clone(), 1).await; + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify a response is received and that it contains the correct data + if fallback_to_transactions { + utils::verify_new_transactions_or_outputs_with_proof( + &mut mock_client, + response_receiver, + Some(transaction_list_with_proof), + None, + epoch_change_proof.ledger_info_with_sigs[0].clone(), + ) + .await; + } else { + utils::verify_new_transactions_or_outputs_with_proof( + &mut mock_client, + response_receiver, + None, + Some(output_list_with_proof), + epoch_change_proof.ledger_info_with_sigs[0].clone(), + ) + .await; + } + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transactions_or_outputs_max_chunk() { + // Test fallback to transaction syncing + for fallback_to_transactions in [false, true] { + // Create test data + let highest_version = 65660; + let highest_epoch = 30; + let lowest_version = 101; + let max_transaction_output_chunk_size = + StorageServiceConfig::default().max_transaction_output_chunk_size; + let requested_chunk_size = max_transaction_output_chunk_size + 100; + let peer_version = highest_version - requested_chunk_size; + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + let output_list_with_proof = utils::create_output_list_with_proof( + peer_version + 1, + peer_version + requested_chunk_size, + highest_version, + ); + let transaction_list_with_proof = utils::create_transaction_list_with_proof( + peer_version + 1, + peer_version + 1, + peer_version + requested_chunk_size, + false, + ); // Creates a small transaction list + + // Create the mock db reader + let max_num_output_reductions = 5; + let mut db_reader = + 
mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + for i in 0..=max_num_output_reductions { + utils::expect_get_transaction_outputs( + &mut db_reader, + peer_version + 1, + (max_transaction_output_chunk_size as u32 / (u32::pow(2, i as u32))) as u64, + highest_version, + output_list_with_proof.clone(), + ); + } + if fallback_to_transactions { + utils::expect_get_transactions( + &mut db_reader, + peer_version + 1, + max_transaction_output_chunk_size, + highest_version, + false, + transaction_list_with_proof.clone(), + ); + } + + // Create the storage client and server + let storage_config = utils::configure_network_chunk_limit( + fallback_to_transactions, + &output_list_with_proof, + &transaction_list_with_proof, + ); + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Send a request to subscribe to new transactions or outputs + let response_receiver = utils::subscribe_to_transactions_or_outputs( + &mut mock_client, + peer_version, + highest_epoch, + false, + max_num_output_reductions, + utils::get_random_u64(), + 0, + ) + .await; + + // Wait until the subscription is active + utils::wait_for_active_subscriptions(active_subscriptions.clone(), 1).await; + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify a response is received and that it contains the correct data + if fallback_to_transactions { + utils::verify_new_transactions_or_outputs_with_proof( + &mut mock_client, + response_receiver, + Some(transaction_list_with_proof), + None, + highest_ledger_info, + ) + .await; + } else { + utils::verify_new_transactions_or_outputs_with_proof( + &mut mock_client, + response_receiver, + None, + Some(output_list_with_proof), + highest_ledger_info, + ) + .await; + } + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transaction_or_outputs_streaming() { + // Test fallback to transaction syncing + for fallback_to_transactions in [false, true] { + // Create test data + let max_transaction_output_chunk_size = 90; + let num_stream_requests = 30; + let highest_version = 45576; + let highest_epoch = 43; + let lowest_version = 2; + let peer_version = 1; + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + + // Create the transaction and output lists with proofs + let chunk_start_and_end_versions = (0..num_stream_requests) + .map(|i| { + let start_version = peer_version + (i * max_transaction_output_chunk_size) + 1; + let end_version = start_version + max_transaction_output_chunk_size - 1; + (start_version, end_version) + }) + .collect::<Vec<_>>(); + let output_lists_with_proofs: Vec<_> = chunk_start_and_end_versions + .iter() + .map(|(start_version, end_version)| { + utils::create_output_list_with_proof(*start_version, *end_version, highest_version) + }) + .collect(); + let transaction_lists_with_proofs: Vec<_> = chunk_start_and_end_versions + .iter() + .map(|(start_version, end_version)| { + utils::create_transaction_list_with_proof( + *start_version, + *end_version, + highest_version, + false, + ) + }) + .collect(); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + for (i, (start_version, _)) in
chunk_start_and_end_versions.iter().enumerate() { + // Set expectations for transaction output reads + utils::expect_get_transaction_outputs( + &mut db_reader, + *start_version, + max_transaction_output_chunk_size, + highest_version, + output_lists_with_proofs[i].clone(), + ); + + // Set expectations for transaction reads + if fallback_to_transactions { + utils::expect_get_transactions( + &mut db_reader, + *start_version, + max_transaction_output_chunk_size, + highest_version, + false, + transaction_lists_with_proofs[i].clone(), + ); + } + } + + // Create the storage service config + let mut storage_service_config = utils::configure_network_chunk_limit( + fallback_to_transactions, + &output_lists_with_proofs[0], + &transaction_lists_with_proofs[0], + ); + storage_service_config.max_transaction_output_chunk_size = + max_transaction_output_chunk_size; + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_service_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Create a new peer and stream ID + let peer_network_id = PeerNetworkId::random(); + let stream_id = utils::get_random_u64(); + + // Send multiple batches of requests to the server and verify the responses + let num_batches_to_send = 5; + for batch_id in 0..num_batches_to_send { + // Send the request batch to subscribe to transaction outputs + let num_requests_per_batch = num_stream_requests / num_batches_to_send; + let first_request_index = batch_id * num_requests_per_batch; + let last_request_index = + (batch_id * num_requests_per_batch) + num_requests_per_batch - 1; + let mut response_receivers = send_transaction_or_output_subscription_request_batch( + &mut mock_client, + peer_network_id, + first_request_index, + last_request_index, + stream_id, + peer_version, + highest_epoch, + ) + .await; + + // Wait until the stream requests are active + utils::wait_for_active_stream_requests( + active_subscriptions.clone(), + peer_network_id, + num_requests_per_batch as usize, + ) + .await; + + // Force the subscription handler to work + utils::force_cache_update_notification( + &mut mock_client, + &mock_time, + &storage_service_notifier, + true, + true, + ) + .await; + + // Continuously run the subscription service until the batch responses are received + for stream_request_index in first_request_index..=last_request_index { + // Verify that the correct response is received + verify_transaction_or_output_subscription_response( + transaction_lists_with_proofs.clone(), + output_lists_with_proofs.clone(), + highest_ledger_info.clone(), + fallback_to_transactions, + &mut mock_client, + &mut response_receivers, + stream_request_index, + ) + .await; + } + } + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transactions_or_outputs_streaming_epoch_change() { + // Test fallback to transaction syncing + for fallback_to_transactions in [false, true] { + // Create test data + let max_transaction_output_chunk_size = 10; + let max_num_active_subscriptions = 50; + let highest_version = 1000; + let highest_epoch = 2; + let lowest_version = 0; + let peer_version = highest_version - 900; + let peer_epoch = highest_epoch - 1; + let epoch_change_version = peer_version + 97; + + // Create the highest ledger info and epoch change proof + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + let epoch_change_ledger_info = + 
utils::create_epoch_ending_ledger_info(peer_epoch, epoch_change_version); + let epoch_change_proof = EpochChangeProof { + ledger_info_with_sigs: vec![epoch_change_ledger_info.clone()], + more: false, + }; + + // Create the transaction and output lists with proofs + let chunk_start_and_end_versions = utils::create_data_chunks_with_epoch_boundary( + max_transaction_output_chunk_size, + max_num_active_subscriptions, + peer_version, + epoch_change_version, + ); + let output_lists_with_proofs: Vec<_> = chunk_start_and_end_versions + .iter() + .map(|(start_version, end_version)| { + utils::create_output_list_with_proof(*start_version, *end_version, highest_version) + }) + .collect(); + let transaction_lists_with_proofs: Vec<_> = chunk_start_and_end_versions + .iter() + .map(|(start_version, end_version)| { + utils::create_transaction_list_with_proof( + *start_version, + *end_version, + highest_version, + false, + ) + }) + .collect(); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + utils::expect_get_epoch_ending_ledger_infos( + &mut db_reader, + peer_epoch, + peer_epoch + 1, + epoch_change_proof.clone(), + ); + for (i, (start_version, end_version)) in chunk_start_and_end_versions.iter().enumerate() { + // Set expectations for transaction output reads + let proof_version = if *end_version <= epoch_change_version { + epoch_change_version + } else { + highest_version + }; + utils::expect_get_transaction_outputs( + &mut db_reader, + *start_version, + end_version - start_version + 1, + proof_version, + output_lists_with_proofs[i].clone(), + ); + + // Set expectations for transaction reads + if fallback_to_transactions { + utils::expect_get_transactions( + &mut db_reader, + *start_version, + end_version - start_version + 1, + proof_version, + false, + transaction_lists_with_proofs[i].clone(), + ); + } + } + + // Create the storage service config + let mut storage_service_config = utils::configure_network_chunk_limit( + fallback_to_transactions, + &output_lists_with_proofs[0], + &transaction_lists_with_proofs[0], + ); + storage_service_config.max_transaction_output_chunk_size = + max_transaction_output_chunk_size; + storage_service_config.max_num_active_subscriptions = max_num_active_subscriptions; + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_service_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Create a new peer and stream ID + let peer_network_id = PeerNetworkId::random(); + let stream_id = utils::get_random_u64(); + + // Send the request batch to subscribe to transactions or outputs + let mut response_receivers = send_transaction_or_output_subscription_request_batch( + &mut mock_client, + peer_network_id, + 0, + max_num_active_subscriptions - 1, + stream_id, + peer_version, + peer_epoch, + ) + .await; + + // Wait until the stream requests are active + utils::wait_for_active_stream_requests( + active_subscriptions.clone(), + peer_network_id, + max_num_active_subscriptions as usize, + ) + .await; + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Continuously run the subscription service until all the responses are received + for stream_request_index in 0..max_num_active_subscriptions { + // Determine 
the target ledger info for the response + let first_version = output_lists_with_proofs[stream_request_index as usize] + .first_transaction_output_version + .unwrap(); + let target_ledger_info = if first_version > epoch_change_version { + highest_ledger_info.clone() + } else { + epoch_change_ledger_info.clone() + }; + + // If we're syncing to the epoch change, then we don't need + // to fallback as the configured network limit won't be reached. + let epoch_change_version = epoch_change_ledger_info.ledger_info().version(); + let fallback_to_transactions = if fallback_to_transactions + && (first_version < epoch_change_version) + && (first_version + max_transaction_output_chunk_size) >= epoch_change_version + { + false + } else { + fallback_to_transactions + }; + + // Verify that the correct response is received + verify_transaction_or_output_subscription_response( + transaction_lists_with_proofs.clone(), + output_lists_with_proofs.clone(), + target_ledger_info.clone(), + fallback_to_transactions, + &mut mock_client, + &mut response_receivers, + stream_request_index, + ) + .await; + } + } +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscribe_transaction_or_outputs_streaming_loop() { + // Test fallback to transaction syncing + for fallback_to_transactions in [false, true] { + // Create test data + let max_transaction_output_chunk_size = 90; + let num_stream_requests = 30; + let highest_version = 45576; + let highest_epoch = 43; + let lowest_version = 2; + let peer_version = 1; + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + + // Create the transaction and output lists with proofs + let chunk_start_and_end_versions = (0..num_stream_requests) + .map(|i| { + let start_version = peer_version + (i * max_transaction_output_chunk_size) + 1; + let end_version = start_version + max_transaction_output_chunk_size - 1; + (start_version, end_version) + }) + .collect::<Vec<_>>(); + let output_lists_with_proofs: Vec<_> = chunk_start_and_end_versions + .iter() + .map(|(start_version, end_version)| { + utils::create_output_list_with_proof(*start_version, *end_version, highest_version) + }) + .collect(); + let transaction_lists_with_proofs: Vec<_> = chunk_start_and_end_versions + .iter() + .map(|(start_version, end_version)| { + utils::create_transaction_list_with_proof( + *start_version, + *end_version, + highest_version, + false, + ) + }) + .collect(); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + for (i, (start_version, _)) in chunk_start_and_end_versions.iter().enumerate() { + // Set expectations for transaction output reads + utils::expect_get_transaction_outputs( + &mut db_reader, + *start_version, + max_transaction_output_chunk_size, + highest_version, + output_lists_with_proofs[i].clone(), + ); + + // Set expectations for transaction reads + if fallback_to_transactions { + utils::expect_get_transactions( + &mut db_reader, + *start_version, + max_transaction_output_chunk_size, + highest_version, + false, + transaction_lists_with_proofs[i].clone(), + ); + } + } + + // Create the storage service config + let mut storage_service_config = utils::configure_network_chunk_limit( + fallback_to_transactions, + &output_lists_with_proofs[0], + &transaction_lists_with_proofs[0], + ); + storage_service_config.max_transaction_output_chunk_size = + max_transaction_output_chunk_size; + + // Create the storage client and server + let (mut mock_client,
service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_service_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Create a new peer and stream ID + let peer_network_id = PeerNetworkId::random(); + let stream_id = utils::get_random_u64(); + + // Send the requests to the server and verify the responses + let mut response_receivers = send_transaction_or_output_subscription_request_batch( + &mut mock_client, + peer_network_id, + 0, + num_stream_requests - 1, + stream_id, + peer_version, + highest_epoch, + ) + .await; + + // Wait until the stream requests are active + utils::wait_for_active_stream_requests( + active_subscriptions.clone(), + peer_network_id, + num_stream_requests as usize, + ) + .await; + + // Verify the state of the subscription stream + utils::verify_subscription_stream_entry( + active_subscriptions.clone(), + peer_network_id, + num_stream_requests, + peer_version, + highest_epoch, + max_transaction_output_chunk_size, + ); + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify all responses are received + for stream_request_index in 0..num_stream_requests { + let response_receiver = response_receivers.remove(&stream_request_index).unwrap(); + if fallback_to_transactions { + utils::verify_new_transactions_or_outputs_with_proof( + &mut mock_client, + response_receiver, + Some(transaction_lists_with_proofs[stream_request_index as usize].clone()), + None, + highest_ledger_info.clone(), + ) + .await; + } else { + utils::verify_new_transactions_or_outputs_with_proof( + &mut mock_client, + response_receiver, + None, + Some(output_lists_with_proofs[stream_request_index as usize].clone()), + highest_ledger_info.clone(), + ) + .await; + } + } + } +} + +/// Sends a batch of transaction or output requests and +/// returns the response receivers for each request. +async fn send_transaction_or_output_subscription_request_batch( + mock_client: &mut MockClient, + peer_network_id: PeerNetworkId, + first_stream_request_index: u64, + last_stream_request_index: u64, + stream_id: u64, + peer_version: u64, + peer_epoch: u64, +) -> HashMap<u64, Receiver<Result<Bytes, RpcError>>> { + // Shuffle the stream request indices to emulate out of order requests + let stream_request_indices = + utils::create_shuffled_vector(first_stream_request_index, last_stream_request_index); + + // Send the requests and gather the response receivers + let mut response_receivers = HashMap::new(); + for stream_request_index in stream_request_indices { + // Send the transaction output subscription request + let response_receiver = utils::subscribe_to_transactions_or_outputs_for_peer( + mock_client, + peer_version, + peer_epoch, + false, + 0, // Outputs cannot be reduced and will fallback to transactions + stream_id, + stream_request_index, + Some(peer_network_id), + ) + .await; + + // Save the response receiver + response_receivers.insert(stream_request_index, response_receiver); + } + + response_receivers +}
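In the transactions-or-outputs variant, the verification helper below has one extra branch: depending on whether the configured network chunk limit forces a fallback, exactly one of the two expected payloads should be populated. A small, self-contained sketch of that branching (plain string labels stand in for the real TransactionListWithProof and TransactionOutputListWithProof values):

    fn expected_payloads(
        fallback_to_transactions: bool,
        transactions: &'static str,
        outputs: &'static str,
    ) -> (Option<&'static str>, Option<&'static str>) {
        if fallback_to_transactions {
            // The outputs exceed the configured network limit, so the service
            // is expected to fall back and respond with transactions instead
            (Some(transactions), None)
        } else {
            (None, Some(outputs))
        }
    }

    fn main() {
        assert_eq!(expected_payloads(true, "txns", "outputs"), (Some("txns"), None));
        assert_eq!(expected_payloads(false, "txns", "outputs"), (None, Some("outputs")));
    }
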
+ +/// Verifies that a response is received for a given stream request index +/// and that the response contains the correct data. +async fn verify_transaction_or_output_subscription_response( + expected_transaction_lists_with_proofs: Vec<TransactionListWithProof>, + expected_output_lists_with_proofs: Vec<TransactionOutputListWithProof>, + expected_target_ledger_info: LedgerInfoWithSignatures, + fallback_to_transactions: bool, + mock_client: &mut MockClient, + response_receivers: &mut HashMap<u64, Receiver<Result<Bytes, RpcError>>>, + stream_request_index: u64, +) { + let response_receiver = response_receivers.remove(&stream_request_index).unwrap(); + if fallback_to_transactions { + utils::verify_new_transactions_or_outputs_with_proof( + mock_client, + response_receiver, + Some(expected_transaction_lists_with_proofs[stream_request_index as usize].clone()), + None, + expected_target_ledger_info.clone(), + ) + .await; + } else { + utils::verify_new_transactions_or_outputs_with_proof( + mock_client, + response_receiver, + None, + Some(expected_output_lists_with_proofs[stream_request_index as usize].clone()), + expected_target_ledger_info.clone(), + ) + .await; + } +} diff --git a/state-sync/storage-service/server/src/tests/subscription.rs b/state-sync/storage-service/server/src/tests/subscription.rs new file mode 100644 index 0000000000000..1ccb793211323 --- /dev/null +++ b/state-sync/storage-service/server/src/tests/subscription.rs @@ -0,0 +1,1125 @@ +// Copyright © Aptos Foundation +// SPDX-License-Identifier: Apache-2.0 + +use crate::{ + error::Error, + moderator::RequestModerator, + network::ResponseSender, + storage::StorageReader, + subscription, + subscription::{SubscriptionRequest, SubscriptionStreamRequests}, + tests::{mock, mock::MockClient, utils}, +}; +use aptos_bounded_executor::BoundedExecutor; +use aptos_config::{ + config::{AptosDataClientConfig, StorageServiceConfig}, + network_id::PeerNetworkId, +}; +use aptos_infallible::Mutex; +use aptos_storage_service_types::{ + requests::{ + DataRequest, StorageServiceRequest, SubscribeTransactionOutputsWithProofRequest, + SubscribeTransactionsOrOutputsWithProofRequest, SubscribeTransactionsWithProofRequest, + SubscriptionStreamMetadata, + }, + responses::StorageServerSummary, + StorageServiceError, +}; +use aptos_time_service::TimeService; +use aptos_types::epoch_change::EpochChangeProof; +use arc_swap::ArcSwap; +use claims::assert_matches; +use dashmap::DashMap; +use futures::channel::oneshot; +use lru::LruCache; +use std::{collections::HashMap, sync::Arc}; +use tokio::runtime::Handle; + +#[tokio::test] +async fn test_peers_with_ready_subscriptions() { + // Create a mock time service and subscriptions map + let time_service = TimeService::mock(); + let subscriptions = Arc::new(Mutex::new(HashMap::new())); + + // Create three peers with ready subscriptions + let mut peer_network_ids = vec![]; + for known_version in &[1, 5, 10] { + // Create a random peer network id + let peer_network_id = PeerNetworkId::random(); + peer_network_ids.push(peer_network_id); + + // Create a subscription stream and insert it into the pending map + let subscription_stream_requests = create_subscription_stream_requests( + time_service.clone(), + Some(*known_version), + Some(1), + Some(0), + Some(0), + ); + subscriptions + .lock() + .insert(peer_network_id, subscription_stream_requests); + } + + // Create epoch ending test data at version 9 + let epoch_ending_ledger_info = utils::create_epoch_ending_ledger_info(1, 9); + let epoch_change_proof = EpochChangeProof { + ledger_info_with_sigs: vec![epoch_ending_ledger_info], + more: false, + }; + + // Create the mock db reader + let mut db_reader = mock::create_mock_db_reader(); + utils::expect_get_epoch_ending_ledger_infos(&mut
db_reader, 1, 2, epoch_change_proof); + + // Create the storage reader + let storage_service_config = StorageServiceConfig::default(); + let storage_reader = StorageReader::new(storage_service_config, Arc::new(db_reader)); + + // Create test data with an empty storage server summary + let bounded_executor = BoundedExecutor::new(100, Handle::current()); + let cached_storage_server_summary = + Arc::new(ArcSwap::from(Arc::new(StorageServerSummary::default()))); + let optimistic_fetches = Arc::new(DashMap::new()); + let lru_response_cache = Arc::new(Mutex::new(LruCache::new(0))); + let request_moderator = Arc::new(RequestModerator::new( + AptosDataClientConfig::default(), + cached_storage_server_summary.clone(), + mock::create_peers_and_metadata(vec![]), + StorageServiceConfig::default(), + time_service.clone(), + )); + + // Verify that there are no peers with ready subscriptions + let peers_with_ready_subscriptions = subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert!(peers_with_ready_subscriptions.is_empty()); + + // Update the storage server summary so that there is new data (at version 2) + let highest_synced_ledger_info = + utils::update_storage_summary_cache(cached_storage_server_summary.clone(), 2, 1); + + // Verify that peer 1 has a ready subscription + let peers_with_ready_subscriptions = subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert_eq!(peers_with_ready_subscriptions, vec![( + peer_network_ids[0], + highest_synced_ledger_info + )]); + + // Manually remove subscription 1 from the map + subscriptions.lock().remove(&peer_network_ids[0]); + + // Update the storage server summary so that there is new data (at version 8) + let highest_synced_ledger_info = + utils::update_storage_summary_cache(cached_storage_server_summary.clone(), 8, 1); + + // Verify that peer 2 has a ready subscription + let peers_with_ready_subscriptions = subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert_eq!(peers_with_ready_subscriptions, vec![( + peer_network_ids[1], + highest_synced_ledger_info + )]); + + // Manually remove subscription 2 from the map + subscriptions.lock().remove(&peer_network_ids[1]); + + // Update the storage server summary so that there is new data (at version 100) + let _ = utils::update_storage_summary_cache(cached_storage_server_summary.clone(), 100, 2); + + // Verify that subscription 3 is not returned because it was invalid + // (i.e., the epoch ended at version 9, but the peer didn't respect it). 
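To make the reasoning in that comment concrete: epoch 1 ends at version 9, so a peer that claims to know version 10 while still claiming epoch 1 is internally inconsistent. A small standalone sketch of that check, using the same numbers as this test (a simplification for illustration only, not the service's actual validation code):

    fn is_stale_epoch_claim(known_version: u64, known_epoch: u64, epoch: u64, epoch_end_version: u64) -> bool {
        // A peer still claiming `epoch` cannot have synced past the version at which that epoch ended
        known_epoch == epoch && known_version > epoch_end_version
    }

    fn main() {
        // Peers 1 and 2 (known versions 1 and 5) are consistent with epoch 1 ending at version 9
        assert!(!is_stale_epoch_claim(1, 1, 1, 9));
        assert!(!is_stale_epoch_claim(5, 1, 1, 9));
        // Peer 3 claims version 10 while still claiming epoch 1, so its subscription is invalid
        assert!(is_stale_epoch_claim(10, 1, 1, 9));
    }
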
+ let peers_with_ready_subscriptions = subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert_eq!(peers_with_ready_subscriptions, vec![]); + + // Verify that the subscriptions are now empty + assert!(subscriptions.lock().is_empty()); +} + +#[tokio::test] +async fn test_remove_expired_subscriptions_no_new_data() { + // Create a storage service config + let max_subscription_period_ms = 100; + let storage_service_config = StorageServiceConfig { + max_subscription_period_ms, + ..Default::default() + }; + + // Create the mock storage reader and time service + let db_reader = mock::create_mock_db_reader(); + let storage_reader = StorageReader::new(storage_service_config, Arc::new(db_reader)); + let time_service = TimeService::mock(); + + // Create test data with an empty storage server summary + let bounded_executor = BoundedExecutor::new(100, Handle::current()); + let cached_storage_server_summary = + Arc::new(ArcSwap::from(Arc::new(StorageServerSummary::default()))); + let optimistic_fetches = Arc::new(DashMap::new()); + let lru_response_cache = Arc::new(Mutex::new(LruCache::new(0))); + let request_moderator = Arc::new(RequestModerator::new( + AptosDataClientConfig::default(), + cached_storage_server_summary.clone(), + mock::create_peers_and_metadata(vec![]), + StorageServiceConfig::default(), + time_service.clone(), + )); + + // Create the first batch of test subscriptions + let num_subscriptions_in_batch = 10; + let subscriptions = Arc::new(Mutex::new(HashMap::new())); + for _ in 0..num_subscriptions_in_batch { + let subscription_stream_requests = + create_subscription_stream_requests(time_service.clone(), Some(9), Some(9), None, None); + subscriptions + .lock() + .insert(PeerNetworkId::random(), subscription_stream_requests); + } + + // Verify the number of active subscriptions + assert_eq!(subscriptions.lock().len(), num_subscriptions_in_batch); + + // Elapse a small amount of time (not enough to expire the subscriptions) + utils::elapse_time(max_subscription_period_ms / 2, &time_service).await; + + // Update the storage server summary so that there is new data + let _ = utils::update_storage_summary_cache(cached_storage_server_summary.clone(), 1, 1); + + // Remove the expired subscriptions and verify none were removed + let peers_with_ready_subscriptions = subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert!(peers_with_ready_subscriptions.is_empty()); + assert_eq!(subscriptions.lock().len(), num_subscriptions_in_batch); + + // Create another batch of test subscriptions + for _ in 0..num_subscriptions_in_batch { + let subscription_stream_requests = + create_subscription_stream_requests(time_service.clone(), Some(9), Some(9), None, None); + subscriptions + .lock() + .insert(PeerNetworkId::random(), subscription_stream_requests); + } + + // Verify the new number of active subscriptions + assert_eq!(subscriptions.lock().len(), num_subscriptions_in_batch * 2); + + // Elapse enough time to expire the first batch of 
subscriptions + utils::elapse_time(max_subscription_period_ms, &time_service).await; + + // Remove the expired subscriptions and verify the first batch was removed + let peers_with_ready_subscriptions = subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert!(peers_with_ready_subscriptions.is_empty()); + assert_eq!(subscriptions.lock().len(), num_subscriptions_in_batch); + + // Elapse enough time to expire the second batch of subscriptions + utils::elapse_time(max_subscription_period_ms, &time_service).await; + + // Remove the expired subscriptions and verify the second batch was removed + let peers_with_ready_subscriptions = subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert!(peers_with_ready_subscriptions.is_empty()); + assert!(subscriptions.lock().is_empty()); +} + +#[tokio::test] +async fn test_remove_expired_subscriptions_blocked_stream() { + // Create a storage service config + let max_subscription_period_ms = 100; + let storage_service_config = StorageServiceConfig { + max_subscription_period_ms, + ..Default::default() + }; + + // Create a mock time service + let time_service = TimeService::mock(); + + // Create a batch of test subscriptions + let num_subscriptions_in_batch = 10; + let subscriptions = Arc::new(Mutex::new(HashMap::new())); + let mut peer_network_ids = vec![]; + for i in 0..num_subscriptions_in_batch { + // Create a new peer + let peer_network_id = PeerNetworkId::random(); + peer_network_ids.push(peer_network_id); + + // Create a subscription stream request for the peer + let subscription_stream_requests = create_subscription_stream_requests( + time_service.clone(), + Some(1), + Some(1), + Some(i as u64), + Some(0), + ); + subscriptions + .lock() + .insert(peer_network_id, subscription_stream_requests); + } + + // Create test data with an empty storage server summary + let bounded_executor = BoundedExecutor::new(100, Handle::current()); + let cached_storage_server_summary = + Arc::new(ArcSwap::from(Arc::new(StorageServerSummary::default()))); + let optimistic_fetches = Arc::new(DashMap::new()); + let lru_response_cache = Arc::new(Mutex::new(LruCache::new(0))); + let request_moderator = Arc::new(RequestModerator::new( + AptosDataClientConfig::default(), + cached_storage_server_summary.clone(), + mock::create_peers_and_metadata(vec![]), + StorageServiceConfig::default(), + time_service.clone(), + )); + let storage_reader = StorageReader::new( + storage_service_config, + Arc::new(mock::create_mock_db_reader()), + ); + + // Update the storage server summary so that there is new data (at version 5) + let _ = utils::update_storage_summary_cache(cached_storage_server_summary.clone(), 5, 1); + + // Handle the active subscriptions + subscription::handle_active_subscriptions( + bounded_executor.clone(), + cached_storage_server_summary.clone(), + storage_service_config, + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + 
time_service.clone(), + ) + .await + .unwrap(); + + // Verify that all subscription streams are now empty because + // the pending requests were sent. + assert_eq!(subscriptions.lock().len(), num_subscriptions_in_batch); + for (_, subscription_stream_requests) in subscriptions.lock().iter() { + assert!(subscription_stream_requests + .first_pending_request() + .is_none()); + } + + // Elapse enough time to expire the blocked streams + utils::elapse_time(max_subscription_period_ms + 1, &time_service).await; + + // Add a new subscription request to the first subscription stream + let subscription_request = + create_subscription_request(&time_service, Some(1), Some(1), Some(0), Some(1)); + add_subscription_request_to_stream( + subscription_request, + subscriptions.clone(), + &peer_network_ids[0], + ) + .unwrap(); + + // Remove the expired subscriptions and verify the second batch was removed + let peers_with_ready_subscriptions = subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert!(peers_with_ready_subscriptions.is_empty()); + assert_eq!(subscriptions.lock().len(), 1); + assert!(subscriptions.lock().contains_key(&peer_network_ids[0])); +} + +#[tokio::test] +async fn test_remove_expired_subscriptions_blocked_stream_index() { + // Create a storage service config + let max_subscription_period_ms = 100; + let storage_service_config = StorageServiceConfig { + max_subscription_period_ms, + ..Default::default() + }; + + // Create a mock time service + let time_service = TimeService::mock(); + + // Create the first batch of test subscriptions + let num_subscriptions_in_batch = 10; + let subscriptions = Arc::new(Mutex::new(HashMap::new())); + for _ in 0..num_subscriptions_in_batch { + let subscription_stream_requests = create_subscription_stream_requests( + time_service.clone(), + Some(1), + Some(1), + None, + Some(0), + ); + subscriptions + .lock() + .insert(PeerNetworkId::random(), subscription_stream_requests); + } + + // Create test data with an empty storage server summary + let bounded_executor = BoundedExecutor::new(100, Handle::current()); + let cached_storage_server_summary = + Arc::new(ArcSwap::from(Arc::new(StorageServerSummary::default()))); + let optimistic_fetches = Arc::new(DashMap::new()); + let lru_response_cache = Arc::new(Mutex::new(LruCache::new(0))); + let request_moderator = Arc::new(RequestModerator::new( + AptosDataClientConfig::default(), + cached_storage_server_summary.clone(), + mock::create_peers_and_metadata(vec![]), + StorageServiceConfig::default(), + time_service.clone(), + )); + let storage_reader = StorageReader::new( + storage_service_config, + Arc::new(mock::create_mock_db_reader()), + ); + + // Update the storage server summary so that there is new data (at version 5) + let highest_synced_ledger_info = + utils::update_storage_summary_cache(cached_storage_server_summary.clone(), 5, 1); + + // Verify that all peers have ready subscriptions (but don't serve them!) 
+ let peers_with_ready_subscriptions = subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert_eq!( + peers_with_ready_subscriptions.len(), + num_subscriptions_in_batch + ); + + // Elapse enough time to expire the subscriptions + utils::elapse_time(max_subscription_period_ms + 1, &time_service).await; + + // Remove the expired subscriptions and verify they were all removed + let peers_with_ready_subscriptions = subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert!(peers_with_ready_subscriptions.is_empty()); + assert!(subscriptions.lock().is_empty()); + + // Create another batch of test subscriptions (where the stream is + // blocked on the next index to serve). + let mut peer_network_ids = vec![]; + for i in 0..num_subscriptions_in_batch { + // Create a new peer + let peer_network_id = PeerNetworkId::random(); + peer_network_ids.push(peer_network_id); + + // Create a subscription stream request for the peer + let subscription_stream_requests = create_subscription_stream_requests( + time_service.clone(), + Some(1), + Some(1), + None, + Some(i as u64 + 1), + ); + subscriptions + .lock() + .insert(peer_network_id, subscription_stream_requests); + } + + // Verify the number of active subscriptions + assert_eq!(subscriptions.lock().len(), num_subscriptions_in_batch); + + // Verify that none of the subscriptions are ready to be served (they are blocked) + let peers_with_ready_subscriptions = subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert!(peers_with_ready_subscriptions.is_empty()); + + // Elapse enough time to expire the batch of subscriptions + utils::elapse_time(max_subscription_period_ms + 1, &time_service).await; + + // Add a new subscription request to the first subscription stream (to unblock it) + let subscription_request = + create_subscription_request(&time_service, Some(1), Some(1), None, Some(0)); + add_subscription_request_to_stream( + subscription_request, + subscriptions.clone(), + &peer_network_ids[0], + ) + .unwrap(); + + // Verify that the first peer subscription stream is unblocked + let peers_with_ready_subscriptions = subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert_eq!(peers_with_ready_subscriptions.len(), 1); + assert!( + peers_with_ready_subscriptions.contains(&(peer_network_ids[0], highest_synced_ledger_info)) + ); + + // Remove the expired subscriptions and verify all but one were removed + let _ = 
subscription::get_peers_with_ready_subscriptions( + bounded_executor.clone(), + storage_service_config, + cached_storage_server_summary.clone(), + optimistic_fetches.clone(), + lru_response_cache.clone(), + request_moderator.clone(), + storage_reader.clone(), + subscriptions.clone(), + time_service.clone(), + ) + .await + .unwrap(); + assert_eq!(subscriptions.lock().len(), 1); + assert!(subscriptions.lock().contains_key(&peer_network_ids[0])); +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscription_invalid_requests() { + // Create a mock time service + let time_service = TimeService::mock(); + + // Create a new batch of subscriptions that includes a single stream and request + let subscriptions = Arc::new(Mutex::new(HashMap::new())); + let peer_network_id = PeerNetworkId::random(); + let peer_known_version = 10; + let peer_known_epoch = 1; + let subscription_stream_id = utils::get_random_u64(); + let subscription_stream_requests = create_subscription_stream_requests( + time_service.clone(), + Some(peer_known_version), + Some(peer_known_epoch), + Some(subscription_stream_id), + Some(0), + ); + subscriptions + .lock() + .insert(peer_network_id, subscription_stream_requests); + + // Add a request to the stream that is invalid (the stream id is incorrect) + let subscription_request = create_subscription_request( + &time_service, + Some(peer_known_version), + Some(peer_known_epoch), + Some(subscription_stream_id + 1), + Some(1), + ); + let (error, _) = add_subscription_request_to_stream( + subscription_request, + subscriptions.clone(), + &peer_network_id, + ) + .unwrap_err(); + assert_matches!(error, Error::InvalidRequest(_)); + + // Add a request to the stream that is invalid (the known version is incorrect) + let subscription_request = create_subscription_request( + &time_service, + Some(peer_known_version + 1), + Some(peer_known_epoch), + Some(subscription_stream_id), + Some(1), + ); + let (error, _) = add_subscription_request_to_stream( + subscription_request, + subscriptions.clone(), + &peer_network_id, + ) + .unwrap_err(); + assert_matches!(error, Error::InvalidRequest(_)); + + // Add a request to the stream that is invalid (the known epoch is incorrect) + let subscription_request = create_subscription_request( + &time_service, + Some(peer_known_version), + Some(peer_known_epoch + 1), + Some(subscription_stream_id), + Some(1), + ); + let (error, _) = add_subscription_request_to_stream( + subscription_request, + subscriptions.clone(), + &peer_network_id, + ) + .unwrap_err(); + assert_matches!(error, Error::InvalidRequest(_)); + + // Update the next index to serve for the stream + let next_index_to_serve = 10; + let mut subscriptions_lock = subscriptions.lock(); + let subscription_stream_requests = subscriptions_lock.get_mut(&peer_network_id).unwrap(); + subscription_stream_requests.set_next_index_to_serve(next_index_to_serve); + drop(subscriptions_lock); + + // Add a request to the stream that is invalid (the stream index is less than the next index to serve) + let subscription_request = create_subscription_request( + &time_service, + Some(peer_known_version), + Some(peer_known_epoch), + Some(subscription_stream_id), + Some(next_index_to_serve - 1), + ); + let (error, _) = add_subscription_request_to_stream( + subscription_request, + subscriptions.clone(), + &peer_network_id, + ) + .unwrap_err(); + assert_matches!(error, Error::InvalidRequest(_)); +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscription_max_pending_requests() { + // Create a storage 
service config + let max_transaction_output_chunk_size = 5; + let max_num_active_subscriptions = 10; + let storage_service_config = StorageServiceConfig { + max_num_active_subscriptions, + max_transaction_output_chunk_size, + ..Default::default() + }; + + // Create test data + let num_stream_requests = max_num_active_subscriptions * 10; // Send more requests than allowed + let highest_version = 45576; + let highest_epoch = 43; + let lowest_version = 0; + let peer_version = 50; + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + + // Create the output lists with proofs + let output_lists_with_proofs: Vec<_> = (0..num_stream_requests) + .map(|i| { + let start_version = peer_version + (i * max_transaction_output_chunk_size) + 1; + let end_version = start_version + max_transaction_output_chunk_size - 1; + utils::create_output_list_with_proof(start_version, end_version, highest_version) + }) + .collect(); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + for stream_request_index in 0..num_stream_requests { + utils::expect_get_transaction_outputs( + &mut db_reader, + peer_version + (stream_request_index * max_transaction_output_chunk_size) + 1, + max_transaction_output_chunk_size, + highest_version, + output_lists_with_proofs[stream_request_index as usize].clone(), + ); + } + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), Some(storage_service_config)); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Send the maximum number of stream requests + let peer_network_id = PeerNetworkId::random(); + let stream_id = 101; + let mut response_receivers = utils::send_output_subscription_request_batch( + &mut mock_client, + peer_network_id, + 0, + max_num_active_subscriptions - 1, + stream_id, + peer_version, + highest_epoch, + ) + .await; + + // Wait until the maximum number of stream requests are active + utils::wait_for_active_stream_requests( + active_subscriptions.clone(), + peer_network_id, + max_num_active_subscriptions as usize, + ) + .await; + + // Send another batch of stream requests (to exceed the maximum number of + // subscriptions), and verify that the client receives a failure for each request. 
+ for stream_request_index in max_num_active_subscriptions..max_num_active_subscriptions * 2 { + // Send the transaction output subscription request + let response_receiver = utils::subscribe_to_transaction_outputs_for_peer( + &mut mock_client, + peer_version, + highest_epoch, + stream_id, + stream_request_index, + Some(peer_network_id), + ) + .await; + + // Verify that the client receives an invalid request error + let response = mock_client + .wait_for_response(response_receiver) + .await + .unwrap_err(); + assert!(matches!(response, StorageServiceError::InvalidRequest(_))); + } + + // Verify the request indices that are pending + verify_pending_subscription_request_indices( + active_subscriptions.clone(), + peer_network_id, + 0, + max_num_active_subscriptions, + num_stream_requests, + ); + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Continuously run the subscription service until all of the responses are sent + for stream_request_index in 0..max_num_active_subscriptions { + // Verify that the correct response is received + utils::verify_output_subscription_response( + output_lists_with_proofs.clone(), + highest_ledger_info.clone(), + &mut mock_client, + &mut response_receivers, + stream_request_index, + ) + .await; + } + + // Send another batch of requests for transaction outputs + let _response_receivers = utils::send_output_subscription_request_batch( + &mut mock_client, + peer_network_id, + max_num_active_subscriptions, + (max_num_active_subscriptions * 2) - 1, + stream_id, + peer_version, + highest_epoch, + ) + .await; + + // Wait until the maximum number of stream requests are active + utils::wait_for_active_stream_requests( + active_subscriptions.clone(), + peer_network_id, + max_num_active_subscriptions as usize, + ) + .await; + + // Send another batch of stream requests (to exceed the maximum number of + // subscriptions), and verify that the client receives a failure for each request. 
+ for stream_request_index in max_num_active_subscriptions * 2..max_num_active_subscriptions * 3 { + // Send the transaction output subscription request + let response_receiver = utils::subscribe_to_transaction_outputs_for_peer( + &mut mock_client, + peer_version, + highest_epoch, + stream_id, + stream_request_index, + Some(peer_network_id), + ) + .await; + + // Verify that the client receives an invalid request error + let response = mock_client + .wait_for_response(response_receiver) + .await + .unwrap_err(); + assert!(matches!(response, StorageServiceError::InvalidRequest(_))); + } + + // Verify the request indices that are pending + verify_pending_subscription_request_indices( + active_subscriptions, + peer_network_id, + max_num_active_subscriptions, + max_num_active_subscriptions * 2, + num_stream_requests, + ); +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subscription_overwrite_streams() { + // Create test data + let highest_version = 45576; + let highest_epoch = 43; + let lowest_version = 0; + let peer_version = highest_version - 100; + let highest_ledger_info = + utils::create_test_ledger_info_with_sigs(highest_epoch, highest_version); + let output_list_with_proof = + utils::create_output_list_with_proof(peer_version + 1, highest_version, highest_version); + let transaction_list_with_proof = utils::create_transaction_list_with_proof( + peer_version + 1, + highest_version, + highest_version, + false, + ); + + // Create the mock db reader + let mut db_reader = + mock::create_mock_db_with_summary_updates(highest_ledger_info.clone(), lowest_version); + utils::expect_get_transaction_outputs( + &mut db_reader, + peer_version + 1, + highest_version - peer_version, + highest_version, + output_list_with_proof.clone(), + ); + utils::expect_get_transactions( + &mut db_reader, + peer_version + 1, + highest_version - peer_version, + highest_version, + false, + transaction_list_with_proof.clone(), + ); + + // Create the storage client and server + let (mut mock_client, service, storage_service_notifier, mock_time, _) = + MockClient::new(Some(db_reader), None); + let active_subscriptions = service.get_subscriptions(); + tokio::spawn(service.start()); + + // Create a peer network ID and stream ID + let peer_network_id = PeerNetworkId::random(); + let stream_id = utils::get_random_u64(); + + // Send multiple requests to subscribe to transaction outputs with the stream ID + let num_stream_requests = 10; + let mut response_receivers = utils::send_output_subscription_request_batch( + &mut mock_client, + peer_network_id, + 0, + num_stream_requests - 1, + stream_id, + peer_version, + highest_epoch, + ) + .await; + + // Wait until the stream requests are active + utils::wait_for_active_stream_requests( + active_subscriptions.clone(), + peer_network_id, + num_stream_requests as usize, + ) + .await; + + // Verify no subscription response has been received yet + utils::verify_no_subscription_responses(&mut response_receivers); + + // Force the subscription handler to work + utils::force_subscription_handler_to_run( + &mut mock_client, + &mock_time, + &storage_service_notifier, + ) + .await; + + // Verify that the correct response is received (when it comes through) + utils::verify_output_subscription_response( + vec![output_list_with_proof.clone()], + highest_ledger_info.clone(), + &mut mock_client, + &mut response_receivers, + 0, + ) + .await; + + // Send a request to subscribe to transactions with a new stream ID + let new_stream_id = utils::get_random_u64(); + let response_receiver = 
utils::subscribe_to_transactions_for_peer(
+        &mut mock_client,
+        peer_version,
+        highest_epoch,
+        false,
+        new_stream_id,
+        0,
+        Some(peer_network_id),
+    )
+    .await;
+
+    // Wait until the stream requests are active
+    utils::wait_for_active_stream_requests(active_subscriptions.clone(), peer_network_id, 1).await;
+
+    // Verify the new stream ID has been used
+    utils::verify_active_stream_id_for_peer(
+        active_subscriptions.clone(),
+        peer_network_id,
+        new_stream_id,
+    );
+
+    // Force the subscription handler to work
+    utils::force_cache_update_notification(
+        &mut mock_client,
+        &mock_time,
+        &storage_service_notifier,
+        true,
+        true,
+    )
+    .await;
+
+    // Verify a response is received and that it contains the correct data
+    utils::verify_new_transactions_with_proof(
+        &mut mock_client,
+        response_receiver,
+        transaction_list_with_proof,
+        highest_ledger_info,
+    )
+    .await;
+}
+
+/// Adds a subscription request to the subscription stream for the given peer
+fn add_subscription_request_to_stream(
+    subscription_request: SubscriptionRequest,
+    subscriptions: Arc<Mutex<HashMap<PeerNetworkId, SubscriptionStreamRequests>>>,
+    peer_network_id: &PeerNetworkId,
+) -> Result<(), (Error, SubscriptionRequest)> {
+    let mut subscriptions = subscriptions.lock();
+    let subscription_stream_requests = subscriptions.get_mut(peer_network_id).unwrap();
+    subscription_stream_requests
+        .add_subscription_request(StorageServiceConfig::default(), subscription_request)
+}
+
+/// Creates a random request for subscription data
+fn create_subscription_data_request(
+    known_version_at_stream_start: Option<u64>,
+    known_epoch_at_stream_start: Option<u64>,
+    subscription_stream_id: Option<u64>,
+    subscription_stream_index: Option<u64>,
+) -> DataRequest {
+    // Get the request data
+    let known_version_at_stream_start = known_version_at_stream_start.unwrap_or_default();
+    let known_epoch_at_stream_start = known_epoch_at_stream_start.unwrap_or_default();
+    let subscription_stream_id = subscription_stream_id.unwrap_or_default();
+    let subscription_stream_index = subscription_stream_index.unwrap_or_default();
+
+    // Create the subscription stream metadata
+    let subscription_stream_metadata = SubscriptionStreamMetadata {
+        known_version_at_stream_start,
+        known_epoch_at_stream_start,
+        subscription_stream_id,
+    };
+
+    // Generate the random data request
+    let random_number = utils::get_random_u64();
+    match random_number % 3 {
+        0 => DataRequest::SubscribeTransactionOutputsWithProof(
+            SubscribeTransactionOutputsWithProofRequest {
+                subscription_stream_metadata,
+                subscription_stream_index,
+            },
+        ),
+        1 => DataRequest::SubscribeTransactionsWithProof(SubscribeTransactionsWithProofRequest {
+            subscription_stream_metadata,
+            include_events: false,
+            subscription_stream_index,
+        }),
+        2 => DataRequest::SubscribeTransactionsOrOutputsWithProof(
+            SubscribeTransactionsOrOutputsWithProofRequest {
+                subscription_stream_metadata,
+                include_events: false,
+                max_num_output_reductions: 0,
+                subscription_stream_index,
+            },
+        ),
+        number => panic!("This shouldn't be possible! Got: {:?}", number),
+    }
+}
+
+/// Creates a random subscription request using the given data
+fn create_subscription_request(
+    time_service: &TimeService,
+    known_version: Option<u64>,
+    known_epoch: Option<u64>,
+    subscription_stream_id: Option<u64>,
+    subscription_stream_index: Option<u64>,
+) -> SubscriptionRequest {
+    // Create a storage service request
+    let data_request = create_subscription_data_request(
+        known_version,
+        known_epoch,
+        subscription_stream_id,
+        subscription_stream_index,
+    );
+    let storage_service_request = StorageServiceRequest::new(data_request, true);
+
+    // Create the response sender
+    let (callback, _) = oneshot::channel();
+    let response_sender = ResponseSender::new(callback);
+
+    // Create a subscription request
+    SubscriptionRequest::new(
+        storage_service_request,
+        response_sender,
+        time_service.clone(),
+    )
+}
+
+/// Creates a random subscription stream using the given data
+fn create_subscription_stream_requests(
+    time_service: TimeService,
+    known_version: Option<u64>,
+    known_epoch: Option<u64>,
+    subscription_stream_id: Option<u64>,
+    subscription_stream_index: Option<u64>,
+) -> SubscriptionStreamRequests {
+    // Create a new subscription request
+    let subscription_request = create_subscription_request(
+        &time_service,
+        known_version,
+        known_epoch,
+        subscription_stream_id,
+        subscription_stream_index,
+    );
+
+    // Create and return the subscription stream containing the request
+    SubscriptionStreamRequests::new(subscription_request, time_service)
+}
+
+/// Verifies that the pending subscription request indices are valid.
+/// Note the expected end indices are exclusive.
+fn verify_pending_subscription_request_indices(
+    active_subscriptions: Arc<Mutex<HashMap<PeerNetworkId, SubscriptionStreamRequests>>>,
+    peer_network_id: PeerNetworkId,
+    expected_start_index: u64,
+    expected_end_index: u64,
+    ignored_end_index: u64,
+) {
+    // Get the pending subscription requests
+    let mut active_subscriptions = active_subscriptions.lock();
+    let subscription_stream_requests = active_subscriptions.get_mut(&peer_network_id).unwrap();
+    let pending_subscription_requests =
+        subscription_stream_requests.get_pending_subscription_requests();
+
+    // Verify that the expected indices are present
+    for request_index in expected_start_index..expected_end_index {
+        assert!(pending_subscription_requests.contains_key(&request_index));
+    }
+
+    // Verify that the ignored indices are not present
+    for request_index in expected_end_index..ignored_end_index {
+        assert!(!pending_subscription_requests.contains_key(&request_index));
+    }
+}
diff --git a/state-sync/storage-service/server/src/tests/utils.rs b/state-sync/storage-service/server/src/tests/utils.rs
index 9bd0f1f5c688b..946e4dc438624 100644
--- a/state-sync/storage-service/server/src/tests/utils.rs
+++ b/state-sync/storage-service/server/src/tests/utils.rs
@@ -4,6 +4,7 @@
 use crate::{
     optimistic_fetch::OptimisticFetchRequest,
     storage::StorageReader,
+    subscription::SubscriptionStreamRequests,
     tests::mock::{MockClient, MockDatabaseReader},
     StorageServiceServer,
 };
@@ -12,14 +13,18 @@ use aptos_config::{
     network_id::{NetworkId, PeerNetworkId},
 };
 use aptos_crypto::{ed25519::Ed25519PrivateKey, HashValue, PrivateKey, SigningKey, Uniform};
+use aptos_infallible::Mutex;
 use aptos_logger::Level;
+use aptos_network::protocols::network::RpcError;
 use aptos_storage_service_notifications::{
     StorageServiceNotificationSender, StorageServiceNotifier,
 };
 use aptos_storage_service_types::{
     requests::{
         DataRequest, StateValuesWithProofRequest, StorageServiceRequest,
-        TransactionsWithProofRequest,
+
SubscribeTransactionOutputsWithProofRequest, + SubscribeTransactionsOrOutputsWithProofRequest, SubscribeTransactionsWithProofRequest, + SubscriptionStreamMetadata, TransactionsWithProofRequest, }, responses::{CompleteDataRange, DataResponse, StorageServerSummary, StorageServiceResponse}, Epoch, StorageServiceError, @@ -42,11 +47,18 @@ use aptos_types::{ validator_verifier::ValidatorVerifier, write_set::WriteSet, }; +use arc_swap::ArcSwap; +use bytes::Bytes; +use claims::assert_none; use dashmap::DashMap; use futures::channel::oneshot::Receiver; use mockall::predicate::eq; -use rand::{rngs::OsRng, Rng}; -use std::{sync::Arc, time::Duration}; +use rand::{prelude::SliceRandom, rngs::OsRng, Rng}; +use std::{collections::HashMap, future::Future, sync::Arc, time::Duration}; +use tokio::time::timeout; + +// Useful test constants +const MAX_WAIT_TIME_SECS: u64 = 60; /// Advances the given timer by the amount of time it takes to refresh storage pub async fn advance_storage_refresh_time(mock_time: &MockTimeService) { @@ -55,18 +67,36 @@ pub async fn advance_storage_refresh_time(mock_time: &MockTimeService) { mock_time.advance_ms_async(cache_update_freq_ms).await; } -/// Advances the storage refresh time and -/// waits for the storage summary to refresh. -pub async fn advance_time_and_wait_for_refresh( - mock_client: &mut MockClient, - mock_time: &MockTimeService, - old_storage_server_summary: StorageServerSummary, -) { - // Advance the storage refresh time - advance_storage_refresh_time(mock_time).await; - - // Wait for the storage server to refresh the cached summary - wait_for_cached_summary_update(mock_client, mock_time, old_storage_server_summary, true).await; +/// Creates and returns a list of data chunks that respect an epoch change +/// version (i.e., no single chunk crosses the epoch boundary). Each chunk +/// is of the form (start_version, end_version), inclusive. The list contains +/// the specified number of chunks and start at the given version. +pub fn create_data_chunks_with_epoch_boundary( + chunk_size: u64, + num_chunks_to_create: u64, + start_version: u64, + epoch_change_version: u64, +) -> Vec<(u64, u64)> { + (0..num_chunks_to_create) + .map(|i| { + let chunk_start_version = start_version + (i * chunk_size) + 1; + let chunk_end_version = chunk_start_version + chunk_size - 1; + if chunk_end_version < epoch_change_version { + (chunk_start_version, chunk_end_version) // The chunk is before the epoch change + } else if chunk_start_version < epoch_change_version + && epoch_change_version < chunk_end_version + { + (chunk_start_version, epoch_change_version) // The chunk would cross the epoch boundary + } else { + let chunk_shift_amount = + (chunk_start_version - epoch_change_version - 1) % chunk_size; + ( + chunk_start_version - chunk_shift_amount, + chunk_end_version - chunk_shift_amount, + ) // The chunk is after the epoch change (shift it left) + } + }) + .collect() } /// Creates a test epoch ending ledger info @@ -140,6 +170,14 @@ pub fn create_output_list_with_proof( ) } +/// Creates a vector of entries from first_index to last_index (inclusive) +/// and shuffles the entries randomly. 
+pub fn create_shuffled_vector(first_index: u64, last_index: u64) -> Vec { + let mut vector: Vec = (first_index..=last_index).collect(); + vector.shuffle(&mut rand::thread_rng()); + vector +} + /// Creates a test ledger info with signatures pub fn create_test_ledger_info_with_sigs(epoch: u64, version: u64) -> LedgerInfoWithSignatures { // Create a mock ledger info with signatures @@ -345,36 +383,78 @@ pub fn extract_peer_and_network_id( } } -/// This function forces the optimistic fetch handler to work. +/// This function forces a cache update notification to be sent +/// to the optimistic fetch and subscription handlers. +/// /// This can be done in two ways: (i) a state sync notification -/// is sent to the storage service, invoking the handler; or (ii) -/// enough time elapses that the handler runs manually. -pub async fn force_optimistic_fetch_handler_to_run( +/// is sent to the storage service, invoking the handlers; or (ii) +/// enough time elapses that the handlers execute manually. +pub async fn force_cache_update_notification( mock_client: &mut MockClient, mock_time: &MockTimeService, storage_service_notifier: &StorageServiceNotifier, + always_advance_time: bool, + wait_for_storage_cache_update: bool, ) { // Generate a random number and if the number is even, send - // a state sync notification. Otherwise, wait for the storage - // summary to refresh manually (by advancing time). + // a state sync notification. Otherwise, advance enough time + // to refresh the storage cache manually. let random_number: u8 = OsRng.gen(); - if random_number % 2 == 0 { - // Send a state sync notification and wait for storage to update - send_notification_and_wait_for_refresh( + if always_advance_time || random_number % 2 != 0 { + // Advance the storage refresh time manually + advance_storage_refresh_time(mock_time).await; + } else { + // Send a state sync notification with the highest synced version + storage_service_notifier + .notify_new_commit(random_number as u64) + .await + .unwrap(); + } + + // Wait for the storage server to refresh the cached summary + if wait_for_storage_cache_update { + wait_for_cached_summary_update( mock_client, mock_time, - storage_service_notifier, - random_number as u64, StorageServerSummary::default(), + true, ) .await; - } else { - // Advance the time manually and wait for storage to update - advance_time_and_wait_for_refresh(mock_client, mock_time, StorageServerSummary::default()) - .await; } } +/// This function forces the optimistic fetch handler to work +pub async fn force_optimistic_fetch_handler_to_run( + mock_client: &mut MockClient, + mock_time: &MockTimeService, + storage_service_notifier: &StorageServiceNotifier, +) { + force_cache_update_notification( + mock_client, + mock_time, + storage_service_notifier, + false, + true, + ) + .await; +} + +/// This function forces the subscription handler to work +pub async fn force_subscription_handler_to_run( + mock_client: &mut MockClient, + mock_time: &MockTimeService, + storage_service_notifier: &StorageServiceNotifier, +) { + force_cache_update_notification( + mock_client, + mock_time, + storage_service_notifier, + false, + true, + ) + .await; +} + /// Sends a number of states request and processes the response pub async fn get_number_of_states( mock_client: &mut MockClient, @@ -385,6 +465,11 @@ pub async fn get_number_of_states( send_storage_request(mock_client, use_compression, data_request).await } +/// Generates and returns a random number (u64) +pub fn get_random_u64() -> u64 { + OsRng.gen() +} + /// Sends a 
state values with proof request and processes the response pub async fn get_state_values_with_proof( mock_client: &mut MockClient, @@ -427,23 +512,40 @@ pub fn initialize_logger() { .build(); } -/// Sends a state sync notification to the storage server -/// and waits for the storage summary to refresh. -pub async fn send_notification_and_wait_for_refresh( +/// Sends a batch of transaction output requests and +/// returns the response receivers for each request. +pub async fn send_output_subscription_request_batch( mock_client: &mut MockClient, - mock_time: &MockTimeService, - storage_service_notifier: &StorageServiceNotifier, - highest_synced_version: u64, - old_storage_server_summary: StorageServerSummary, -) { - // Send a state sync notification with the highest synced version - storage_service_notifier - .notify_new_commit(highest_synced_version) - .await - .unwrap(); + peer_network_id: PeerNetworkId, + first_stream_request_index: u64, + last_stream_request_index: u64, + stream_id: u64, + peer_version: u64, + peer_epoch: u64, +) -> HashMap>> { + // Shuffle the stream request indices to emulate out of order requests + let stream_request_indices = + create_shuffled_vector(first_stream_request_index, last_stream_request_index); - // Wait for the storage server to refresh the cached summary - wait_for_cached_summary_update(mock_client, mock_time, old_storage_server_summary, false).await; + // Send the requests and gather the response receivers + let mut response_receivers = HashMap::new(); + for stream_request_index in stream_request_indices { + // Send the transaction output subscription request + let response_receiver = subscribe_to_transaction_outputs_for_peer( + mock_client, + peer_version, + peer_epoch, + stream_id, + stream_request_index, + Some(peer_network_id), + ) + .await; + + // Save the response receiver + response_receivers.insert(stream_request_index, response_receiver); + } + + response_receivers } /// Sends the given storage request to the given client @@ -456,6 +558,164 @@ pub async fn send_storage_request( mock_client.process_request(storage_request).await } +/// Creates and sends a request to subscribe to new transactions or outputs +pub async fn subscribe_to_transactions_or_outputs( + mock_client: &mut MockClient, + known_version: u64, + known_epoch: u64, + include_events: bool, + max_num_output_reductions: u64, + stream_id: u64, + stream_index: u64, +) -> Receiver> { + subscribe_to_transactions_or_outputs_for_peer( + mock_client, + known_version, + known_epoch, + include_events, + max_num_output_reductions, + stream_id, + stream_index, + None, + ) + .await +} + +/// Creates and sends a request to subscribe to new transactions or outputs for the specified peer +pub async fn subscribe_to_transactions_or_outputs_for_peer( + mock_client: &mut MockClient, + known_version_at_stream_start: u64, + known_epoch_at_stream_start: u64, + include_events: bool, + max_num_output_reductions: u64, + subscription_stream_id: u64, + subscription_stream_index: u64, + peer_network_id: Option, +) -> Receiver> { + // Create the data request + let subscription_stream_metadata = SubscriptionStreamMetadata { + known_version_at_stream_start, + known_epoch_at_stream_start, + subscription_stream_id, + }; + let data_request = DataRequest::SubscribeTransactionsOrOutputsWithProof( + SubscribeTransactionsOrOutputsWithProofRequest { + subscription_stream_metadata, + include_events, + max_num_output_reductions, + subscription_stream_index, + }, + ); + let storage_request = 
StorageServiceRequest::new(data_request, true); + + // Send the request + let (peer_id, network_id) = extract_peer_and_network_id(peer_network_id); + mock_client + .send_request(storage_request, peer_id, network_id) + .await +} + +/// Creates and sends a request to subscribe to new transaction outputs +pub async fn subscribe_to_transaction_outputs( + mock_client: &mut MockClient, + known_version: u64, + known_epoch: u64, + stream_id: u64, + stream_index: u64, +) -> Receiver> { + subscribe_to_transaction_outputs_for_peer( + mock_client, + known_version, + known_epoch, + stream_id, + stream_index, + None, + ) + .await +} + +/// Creates and sends a request to subscribe to new transaction outputs for the specified peer +pub async fn subscribe_to_transaction_outputs_for_peer( + mock_client: &mut MockClient, + known_version_at_stream_start: u64, + known_epoch_at_stream_start: u64, + subscription_stream_id: u64, + subscription_stream_index: u64, + peer_network_id: Option, +) -> Receiver> { + // Create the data request + let subscription_stream_metadata = SubscriptionStreamMetadata { + known_version_at_stream_start, + known_epoch_at_stream_start, + subscription_stream_id, + }; + let data_request = DataRequest::SubscribeTransactionOutputsWithProof( + SubscribeTransactionOutputsWithProofRequest { + subscription_stream_metadata, + subscription_stream_index, + }, + ); + let storage_request = StorageServiceRequest::new(data_request, true); + + // Send the request + let (peer_id, network_id) = extract_peer_and_network_id(peer_network_id); + mock_client + .send_request(storage_request, peer_id, network_id) + .await +} + +/// Creates and sends a request to subscribe to new transactions +pub async fn subscribe_to_transactions( + mock_client: &mut MockClient, + known_version: u64, + known_epoch: u64, + include_events: bool, + stream_id: u64, + stream_index: u64, +) -> Receiver> { + subscribe_to_transactions_for_peer( + mock_client, + known_version, + known_epoch, + include_events, + stream_id, + stream_index, + None, + ) + .await +} + +/// Creates and sends a request to subscribe to new transactions for the specified peer +pub async fn subscribe_to_transactions_for_peer( + mock_client: &mut MockClient, + known_version_at_stream_start: u64, + known_epoch_at_stream_start: u64, + include_events: bool, + subscription_stream_id: u64, + subscription_stream_index: u64, + peer_network_id: Option, +) -> Receiver> { + // Create the data request + let subscription_stream_metadata = SubscriptionStreamMetadata { + known_version_at_stream_start, + known_epoch_at_stream_start, + subscription_stream_id, + }; + let data_request = + DataRequest::SubscribeTransactionsWithProof(SubscribeTransactionsWithProofRequest { + subscription_stream_metadata, + include_events, + subscription_stream_index, + }); + let storage_request = StorageServiceRequest::new(data_request, true); + + // Send the request + let (peer_id, network_id) = extract_peer_and_network_id(peer_network_id); + mock_client + .send_request(storage_request, peer_id, network_id) + .await +} + /// Updates the storage server summary with the specified data pub fn update_storage_server_summary( storage_server: &mut StorageServiceServer, @@ -488,6 +748,49 @@ pub fn update_storage_server_summary( .store(Arc::new(storage_server_summary)); } +/// Updates the storage server summary cache with new data +/// and returns the synced ledger info. 
+pub fn update_storage_summary_cache( + cached_storage_server_summary: Arc>, + highest_synced_version: u64, + highest_synced_epoch: u64, +) -> LedgerInfoWithSignatures { + // Create the storage server summary and synced ledger info + let mut storage_server_summary = StorageServerSummary::default(); + let highest_synced_ledger_info = + create_test_ledger_info_with_sigs(highest_synced_epoch, highest_synced_version); + + // Update the epoch ending ledger infos and synced ledger info + storage_server_summary + .data_summary + .epoch_ending_ledger_infos = Some(CompleteDataRange::new(0, highest_synced_epoch).unwrap()); + storage_server_summary.data_summary.synced_ledger_info = + Some(highest_synced_ledger_info.clone()); + + // Update the cached storage server summary + cached_storage_server_summary.store(Arc::new(storage_server_summary)); + + highest_synced_ledger_info +} + +/// Verifies that the peer has an active subscription stream +/// and that the stream has the appropriate ID. +pub fn verify_active_stream_id_for_peer( + active_subscriptions: Arc>>, + peer_network_id: PeerNetworkId, + new_stream_id: u64, +) { + // Get the subscription stream requests for the peer + let mut active_subscriptions = active_subscriptions.lock(); + let subscription_stream_requests = active_subscriptions.get_mut(&peer_network_id).unwrap(); + + // Verify the stream ID is correct + assert_eq!( + subscription_stream_requests.subscription_stream_id(), + new_stream_id + ); +} + /// Verifies that a new transaction outputs with proof response is received /// and that the response contains the correct data. pub async fn verify_new_transaction_outputs_with_proof( @@ -514,11 +817,37 @@ pub async fn verify_new_transaction_outputs_with_proof( }; } +/// Verifies that a new transactions with proof response is received +/// and that the response contains the correct data. +pub async fn verify_new_transactions_with_proof( + mock_client: &mut MockClient, + receiver: Receiver>, + expected_transactions_with_proof: TransactionListWithProof, + expected_ledger_info: LedgerInfoWithSignatures, +) { + match mock_client + .wait_for_response(receiver) + .await + .unwrap() + .get_data_response() + .unwrap() + { + DataResponse::NewTransactionsWithProof((transactions_with_proof, ledger_info)) => { + assert_eq!(transactions_with_proof, expected_transactions_with_proof); + assert_eq!(ledger_info, expected_ledger_info); + }, + response => panic!( + "Expected new transaction with proof but got: {:?}", + response + ), + }; +} + /// Verifies that a new transactions or outputs with proof response is received /// and that the response contains the correct data. pub async fn verify_new_transactions_or_outputs_with_proof( mock_client: &mut MockClient, - receiver: Receiver>, + receiver: Receiver>, expected_transaction_list_with_proof: Option, expected_output_list_with_proof: Option, expected_ledger_info: LedgerInfoWithSignatures, @@ -550,25 +879,67 @@ pub async fn verify_new_transactions_or_outputs_with_proof( }; } -/// Verifies that a new transactions with proof response is received +/// Verifies that no subscription responses have been received yet +pub fn verify_no_subscription_responses( + response_receivers: &mut HashMap>>, +) { + for response_receiver in response_receivers.values_mut() { + assert_none!(response_receiver.try_recv().unwrap()); + } +} + +/// Verifies that a response is received for a given stream request index /// and that the response contains the correct data. 
-pub async fn verify_new_transactions_with_proof( +pub async fn verify_output_subscription_response( + expected_output_lists_with_proofs: Vec, + expected_target_ledger_info: LedgerInfoWithSignatures, mock_client: &mut MockClient, - receiver: Receiver>, - expected_transactions_with_proof: TransactionListWithProof, - expected_ledger_info: LedgerInfoWithSignatures, + response_receivers: &mut HashMap>>, + stream_request_index: u64, ) { - let storage_service_response = mock_client.wait_for_response(receiver).await.unwrap(); - match storage_service_response.get_data_response().unwrap() { - DataResponse::NewTransactionsWithProof((transactions_with_proof, ledger_info)) => { - assert_eq!(transactions_with_proof, expected_transactions_with_proof); - assert_eq!(ledger_info, expected_ledger_info); - }, - response => panic!( - "Expected new transaction with proof but got: {:?}", - response - ), - }; + let response_receiver = response_receivers.remove(&stream_request_index).unwrap(); + verify_new_transaction_outputs_with_proof( + mock_client, + response_receiver, + expected_output_lists_with_proofs[stream_request_index as usize].clone(), + expected_target_ledger_info, + ) + .await; +} + +/// Verifies the state of an active subscription stream entry. +/// This is useful for manually testing internal logic. +pub fn verify_subscription_stream_entry( + active_subscriptions: Arc>>, + peer_network_id: PeerNetworkId, + num_requests_per_batch: u64, + peer_known_version: u64, + expected_epoch: u64, + max_transaction_output_chunk_size: u64, +) { + // Get the subscription stream for the specified peer + let mut active_subscriptions = active_subscriptions.lock(); + let subscription_stream_requests = active_subscriptions.get_mut(&peer_network_id).unwrap(); + + // Get the next index to serve on the stream + let next_index_to_serve = subscription_stream_requests.get_next_index_to_serve(); + + // Verify the highest known version and epoch in the stream + let expected_version = + peer_known_version + (max_transaction_output_chunk_size * next_index_to_serve); + assert_eq!( + subscription_stream_requests.get_highest_known_version_and_epoch(), + (expected_version, expected_epoch) + ); + + // Verify the number of active requests + let num_active_stream_requests = subscription_stream_requests + .get_pending_subscription_requests() + .len(); + assert_eq!( + num_active_stream_requests as u64, + num_requests_per_batch - (next_index_to_serve % num_requests_per_batch) + ); } /// Waits for the specified number of optimistic fetches to be active @@ -576,15 +947,97 @@ pub async fn wait_for_active_optimistic_fetches( active_optimistic_fetches: Arc>, expected_num_active_fetches: usize, ) { - loop { - let num_active_fetches = active_optimistic_fetches.len(); - if num_active_fetches == expected_num_active_fetches { - return; // We found the expected number of active fetches + // Wait for the specified number of active fetches + let check_active_fetches = async move { + loop { + // Check if we've found the expected number of active fetches + let num_active_fetches = active_optimistic_fetches.len(); + if num_active_fetches == expected_num_active_fetches { + return; // We found the expected number of active fetches + } + + // Otherwise, sleep for a while + tokio::time::sleep(Duration::from_millis(100)).await; } + }; - // Sleep for a while - tokio::time::sleep(Duration::from_millis(100)).await; - } + // Spawn the task with a timeout + spawn_with_timeout( + check_active_fetches, + &format!( + "Timed-out while waiting for {} active fetches!", 
+ expected_num_active_fetches + ), + ) + .await; +} + +/// Waits for the specified number of active stream requests for +/// the given peer ID. +pub async fn wait_for_active_stream_requests( + active_subscriptions: Arc>>, + peer_network_id: PeerNetworkId, + expected_num_active_stream_requests: usize, +) { + // Wait for the specified number of active stream requests + let check_active_stream_requests = async move { + loop { + // Check if the number of active stream requests matches + if let Some(subscription_stream_requests) = + active_subscriptions.lock().get_mut(&peer_network_id) + { + let num_active_stream_requests = subscription_stream_requests + .get_pending_subscription_requests() + .len(); + if num_active_stream_requests == expected_num_active_stream_requests { + return; // We found the expected number of stream requests + } + } + + // Otherwise, sleep for a while + tokio::time::sleep(Duration::from_millis(100)).await; + } + }; + + // Spawn the task with a timeout + spawn_with_timeout( + check_active_stream_requests, + &format!( + "Timed-out while waiting for {} active stream requests.", + expected_num_active_stream_requests + ), + ) + .await; +} + +/// Waits for the specified number of subscriptions to be active +pub async fn wait_for_active_subscriptions( + active_subscriptions: Arc>>, + expected_num_active_subscriptions: usize, +) { + // Wait for the specified number of active subscriptions + let check_active_subscriptions = async move { + loop { + // Check if the number of active subscriptions matches + let num_active_subscriptions = active_subscriptions.lock().len(); + if num_active_subscriptions == expected_num_active_subscriptions { + return; // We found the expected number of active subscriptions + } + + // Otherwise, sleep for a while + tokio::time::sleep(Duration::from_millis(100)).await; + } + }; + + // Spawn the task with a timeout + spawn_with_timeout( + check_active_subscriptions, + &format!( + "Timed-out while waiting for {} active subscriptions.", + expected_num_active_subscriptions + ), + ) + .await; } /// Waits for the cached storage summary to update @@ -594,25 +1047,43 @@ async fn wait_for_cached_summary_update( old_storage_server_summary: StorageServerSummary, continue_advancing_time: bool, ) { + // Create a storage summary request let storage_request = StorageServiceRequest::new(DataRequest::GetStorageServerSummary, true); - // Loop until the storage summary has updated - while mock_client - .process_request(storage_request.clone()) - .await - .unwrap() - == StorageServiceResponse::new( - DataResponse::StorageServerSummary(old_storage_server_summary.clone()), - true, - ) - .unwrap() - { - // Advance the storage refresh time - if continue_advancing_time { - advance_storage_refresh_time(mock_time).await; + // Wait for the storage summary to update + let storage_summary_updated = async move { + while mock_client + .process_request(storage_request.clone()) + .await + .unwrap() + == StorageServiceResponse::new( + DataResponse::StorageServerSummary(old_storage_server_summary.clone()), + true, + ) + .unwrap() + { + // Advance the storage refresh time + if continue_advancing_time { + advance_storage_refresh_time(mock_time).await; + } + + // Sleep for a while + tokio::time::sleep(Duration::from_millis(100)).await; } + }; - // Sleep for a while - tokio::time::sleep(Duration::from_millis(100)).await; - } + // Spawn the task with a timeout + spawn_with_timeout( + storage_summary_updated, + "Timed-out while waiting for the cached storage summary to update!", + ) + .await; +} 
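[Editor's note] The wait helpers above, together with spawn_with_timeout (defined next), all follow the same pattern: poll a condition in a loop with a short sleep, and bound the whole poll with a timeout so a stuck test fails with a clear message instead of hanging. The following is a minimal, self-contained sketch of that pattern, not part of the patch; the flag, the 100 ms poll interval, and the 60-second bound (mirroring MAX_WAIT_TIME_SECS) are illustrative assumptions.

use std::{
    sync::{
        atomic::{AtomicBool, Ordering},
        Arc,
    },
    time::Duration,
};
use tokio::time::timeout;

/// Polls `flag` until it becomes true, sleeping briefly between checks,
/// and panics with `error_message` if the condition isn't met in time.
async fn wait_for_flag(flag: Arc<AtomicBool>, error_message: &str) {
    let poll_loop = async {
        loop {
            // Return as soon as the condition is observed
            if flag.load(Ordering::Relaxed) {
                return;
            }
            // Otherwise, sleep for a while before checking again
            tokio::time::sleep(Duration::from_millis(100)).await;
        }
    };

    // Bound the polling loop so a hung condition fails loudly
    timeout(Duration::from_secs(60), poll_loop)
        .await
        .expect(error_message);
}

#[tokio::main]
async fn main() {
    let flag = Arc::new(AtomicBool::new(false));

    // Flip the flag from another task after a short delay
    let flag_clone = flag.clone();
    tokio::spawn(async move {
        tokio::time::sleep(Duration::from_millis(200)).await;
        flag_clone.store(true, Ordering::Relaxed);
    });

    // Wait for the flag (bounded by the timeout)
    wait_for_flag(flag, "Timed-out while waiting for the flag to be set!").await;
}

Bounding every polling helper this way keeps a flaky wait from stalling the whole test run; the trade-off is a fixed worst-case wait per helper equal to the timeout.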
+ +/// Spawns the given task with a timeout +pub async fn spawn_with_timeout(task: impl Future, timeout_error_message: &str) { + let timeout_duration = Duration::from_secs(MAX_WAIT_TIME_SECS); + timeout(timeout_duration, task) + .await + .expect(timeout_error_message) } diff --git a/state-sync/storage-service/server/src/utils.rs b/state-sync/storage-service/server/src/utils.rs index 83519070477f2..34342ff5508c5 100644 --- a/state-sync/storage-service/server/src/utils.rs +++ b/state-sync/storage-service/server/src/utils.rs @@ -2,10 +2,11 @@ // SPDX-License-Identifier: Apache-2.0 use crate::{ - error::Error, handler::Handler, moderator::RequestModerator, + error::Error, handler::Handler, moderator::RequestModerator, network::ResponseSender, optimistic_fetch::OptimisticFetchRequest, storage::StorageReaderInterface, + subscription::SubscriptionStreamRequests, }; -use aptos_config::{config::StorageServiceConfig, network_id::PeerNetworkId}; +use aptos_config::network_id::PeerNetworkId; use aptos_infallible::Mutex; use aptos_storage_service_types::{ requests::{DataRequest, EpochEndingLedgerInfoRequest, StorageServiceRequest}, @@ -16,12 +17,13 @@ use aptos_types::ledger_info::LedgerInfoWithSignatures; use arc_swap::ArcSwap; use dashmap::DashMap; use lru::LruCache; -use std::sync::Arc; +use std::{collections::HashMap, sync::Arc}; /// Gets the epoch ending ledger info at the given epoch pub fn get_epoch_ending_ledger_info( cached_storage_server_summary: Arc>, optimistic_fetches: Arc>, + subscriptions: Arc>>, epoch: u64, lru_response_cache: Arc>>, request_moderator: Arc, @@ -46,6 +48,7 @@ pub fn get_epoch_ending_ledger_info( lru_response_cache, request_moderator, storage, + subscriptions, time_service, ); let storage_response = handler.process_request(peer_network_id, storage_request, true); @@ -74,106 +77,102 @@ pub fn get_epoch_ending_ledger_info( } } -/// Notifies a peer of new data according to the target ledger info. +/// Notifies a peer of new data according to the target ledger info +/// and returns a copy of the raw data response that was sent. /// -/// Note: we don't need to check the size of the optimistic fetch response -/// because: (i) each sub-part should already be checked; and (ii) -/// optimistic fetch responses are best effort. 
+/// Note: we don't need to check the size of the response because: +/// (i) each sub-part should already be checked; and (ii) responses pub fn notify_peer_of_new_data( cached_storage_server_summary: Arc>, - config: StorageServiceConfig, optimistic_fetches: Arc>, + subscriptions: Arc>>, lru_response_cache: Arc>>, request_moderator: Arc, storage: T, time_service: TimeService, peer_network_id: &PeerNetworkId, - optimistic_fetch: OptimisticFetchRequest, + missing_data_request: StorageServiceRequest, target_ledger_info: LedgerInfoWithSignatures, -) -> aptos_storage_service_types::Result<(), Error> { - match optimistic_fetch.get_storage_request_for_missing_data(config, &target_ledger_info) { - Ok(storage_request) => { - // Handle the storage service request to fetch the missing data - let use_compression = storage_request.use_compression; - let handler = Handler::new( - cached_storage_server_summary, - optimistic_fetches, - lru_response_cache, - request_moderator, - storage, - time_service, - ); - let storage_response = - handler.process_request(peer_network_id, storage_request.clone(), true); - - // Transform the missing data into an optimistic fetch response - let transformed_data_response = match storage_response { - Ok(storage_response) => match storage_response.get_data_response() { - Ok(DataResponse::TransactionsWithProof(transactions_with_proof)) => { - DataResponse::NewTransactionsWithProof(( - transactions_with_proof, - target_ledger_info.clone(), - )) - }, - Ok(DataResponse::TransactionOutputsWithProof(outputs_with_proof)) => { - DataResponse::NewTransactionOutputsWithProof(( - outputs_with_proof, - target_ledger_info.clone(), - )) - }, - Ok(DataResponse::TransactionsOrOutputsWithProof(( - transactions_with_proof, - outputs_with_proof, - ))) => { - if let Some(transactions_with_proof) = transactions_with_proof { - DataResponse::NewTransactionsOrOutputsWithProof(( - (Some(transactions_with_proof), None), - target_ledger_info.clone(), - )) - } else if let Some(outputs_with_proof) = outputs_with_proof { - DataResponse::NewTransactionsOrOutputsWithProof(( - (None, Some(outputs_with_proof)), - target_ledger_info.clone(), - )) - } else { - return Err(Error::UnexpectedErrorEncountered( - "Failed to get a transaction or output response for peer!".into(), - )); - } - }, - data_response => { - return Err(Error::UnexpectedErrorEncountered(format!( - "Failed to get appropriate data response for peer! Got: {:?}", - data_response - ))) - }, - }, - response => { - return Err(Error::UnexpectedErrorEncountered(format!( - "Failed to fetch missing data for peer! {:?}", - response - ))) - }, - }; - let storage_response = - match StorageServiceResponse::new(transformed_data_response, use_compression) { - Ok(storage_response) => storage_response, - Err(error) => { - return Err(Error::UnexpectedErrorEncountered(format!( - "Failed to create transformed response! 
Error: {:?}", - error - ))); - }, - }; + response_sender: ResponseSender, +) -> aptos_storage_service_types::Result { + // Handle the storage service request to fetch the missing data + let use_compression = missing_data_request.use_compression; + let handler = Handler::new( + cached_storage_server_summary, + optimistic_fetches, + lru_response_cache, + request_moderator, + storage, + subscriptions, + time_service, + ); + let storage_response = + handler.process_request(peer_network_id, missing_data_request.clone(), true); - // Send the response to the peer - handler.send_response( - storage_request, - Ok(storage_response), - optimistic_fetch.get_response_sender(), - ); - Ok(()) + // Transform the missing data into an optimistic fetch response + let transformed_data_response = match storage_response { + Ok(storage_response) => match storage_response.get_data_response() { + Ok(DataResponse::TransactionsWithProof(transactions_with_proof)) => { + DataResponse::NewTransactionsWithProof(( + transactions_with_proof, + target_ledger_info, + )) + }, + Ok(DataResponse::TransactionOutputsWithProof(outputs_with_proof)) => { + DataResponse::NewTransactionOutputsWithProof(( + outputs_with_proof, + target_ledger_info, + )) + }, + Ok(DataResponse::TransactionsOrOutputsWithProof(( + transactions_with_proof, + outputs_with_proof, + ))) => { + if let Some(transactions_with_proof) = transactions_with_proof { + DataResponse::NewTransactionsOrOutputsWithProof(( + (Some(transactions_with_proof), None), + target_ledger_info, + )) + } else if let Some(outputs_with_proof) = outputs_with_proof { + DataResponse::NewTransactionsOrOutputsWithProof(( + (None, Some(outputs_with_proof)), + target_ledger_info, + )) + } else { + return Err(Error::UnexpectedErrorEncountered( + "Failed to get a transaction or output response for peer!".into(), + )); + } + }, + data_response => { + return Err(Error::UnexpectedErrorEncountered(format!( + "Failed to get appropriate data response for peer! Got: {:?}", + data_response + ))) + }, }, - Err(error) => Err(error), - } + response => { + return Err(Error::UnexpectedErrorEncountered(format!( + "Failed to fetch missing data for peer! {:?}", + response + ))) + }, + }; + + // Create the storage service response + let storage_response = + match StorageServiceResponse::new(transformed_data_response.clone(), use_compression) { + Ok(storage_response) => storage_response, + Err(error) => { + return Err(Error::UnexpectedErrorEncountered(format!( + "Failed to create transformed response! 
Error: {:?}", + error + ))); + }, + }; + + // Send the response to the peer + handler.send_response(missing_data_request, Ok(storage_response), response_sender); + + Ok(transformed_data_response) } diff --git a/state-sync/storage-service/types/src/requests.rs b/state-sync/storage-service/types/src/requests.rs index e89c219a7d998..e7e03503cc6b8 100644 --- a/state-sync/storage-service/types/src/requests.rs +++ b/state-sync/storage-service/types/src/requests.rs @@ -44,6 +44,9 @@ pub enum DataRequest { GetTransactionsWithProof(TransactionsWithProofRequest), // Fetches a list of transactions with a proof GetNewTransactionsOrOutputsWithProof(NewTransactionsOrOutputsWithProofRequest), // Optimistically fetches new transactions or outputs GetTransactionsOrOutputsWithProof(TransactionsOrOutputsWithProofRequest), // Fetches a list of transactions or outputs with a proof + SubscribeTransactionOutputsWithProof(SubscribeTransactionOutputsWithProofRequest), // Subscribes to transaction outputs with a proof + SubscribeTransactionsOrOutputsWithProof(SubscribeTransactionsOrOutputsWithProofRequest), // Subscribes to transactions or outputs with a proof + SubscribeTransactionsWithProof(SubscribeTransactionsWithProofRequest), // Subscribes to transactions with a proof } impl DataRequest { @@ -63,13 +66,16 @@ impl DataRequest { "get_new_transactions_or_outputs_with_proof" }, Self::GetTransactionsOrOutputsWithProof(_) => "get_transactions_or_outputs_with_proof", + Self::SubscribeTransactionOutputsWithProof(_) => { + "subscribe_transaction_outputs_with_proof" + }, + Self::SubscribeTransactionsOrOutputsWithProof(_) => { + "subscribe_transactions_or_outputs_with_proof" + }, + Self::SubscribeTransactionsWithProof(_) => "subscribe_transactions_with_proof", } } - pub fn is_storage_summary_request(&self) -> bool { - matches!(self, &Self::GetStorageServerSummary) - } - pub fn is_optimistic_fetch(&self) -> bool { matches!(self, &Self::GetNewTransactionOutputsWithProof(_)) || matches!(self, &Self::GetNewTransactionsWithProof(_)) @@ -79,6 +85,16 @@ impl DataRequest { pub fn is_protocol_version_request(&self) -> bool { matches!(self, &Self::GetServerProtocolVersion) } + + pub fn is_storage_summary_request(&self) -> bool { + matches!(self, &Self::GetStorageServerSummary) + } + + pub fn is_subscription_request(&self) -> bool { + matches!(self, &Self::SubscribeTransactionOutputsWithProof(_)) + || matches!(self, &Self::SubscribeTransactionsWithProof(_)) + || matches!(self, Self::SubscribeTransactionsOrOutputsWithProof(_)) + } } /// A storage service request for fetching a list of epoch ending ledger infos. @@ -153,3 +169,37 @@ pub struct TransactionsOrOutputsWithProofRequest { pub include_events: bool, // Whether or not to include events (if transactions are returned) pub max_num_output_reductions: u64, // The max num of output reductions before transactions are returned } + +/// A storage service request for subscribing to transaction +/// outputs with a corresponding proof. +#[derive(Clone, Debug, Deserialize, Eq, Hash, PartialEq, Serialize)] +pub struct SubscribeTransactionOutputsWithProofRequest { + pub subscription_stream_metadata: SubscriptionStreamMetadata, // The metadata for the subscription stream request + pub subscription_stream_index: u64, // The request index of the subscription stream +} + +/// A storage service request for subscribing to transactions +/// or outputs with a corresponding proof. 
+#[derive(Clone, Debug, Deserialize, Eq, Hash, PartialEq, Serialize)] +pub struct SubscribeTransactionsOrOutputsWithProofRequest { + pub subscription_stream_metadata: SubscriptionStreamMetadata, // The metadata for the subscription stream request + pub subscription_stream_index: u64, // The request index of the subscription stream + pub include_events: bool, // Whether or not to include events in the response + pub max_num_output_reductions: u64, // The max num of output reductions before transactions are returned +} + +/// A storage service request for subscribing to transactions +/// with a corresponding proof. +#[derive(Clone, Debug, Deserialize, Eq, Hash, PartialEq, Serialize)] +pub struct SubscribeTransactionsWithProofRequest { + pub subscription_stream_metadata: SubscriptionStreamMetadata, // The metadata for the subscription stream request + pub subscription_stream_index: u64, // The request index of the subscription stream + pub include_events: bool, // Whether or not to include events in the response +} + +#[derive(Clone, Copy, Debug, Deserialize, Eq, Hash, PartialEq, Serialize)] +pub struct SubscriptionStreamMetadata { + pub known_version_at_stream_start: u64, // The highest known transaction version at stream start + pub known_epoch_at_stream_start: u64, // The highest known epoch at stream start + pub subscription_stream_id: u64, // The unique id of the subscription stream +} diff --git a/state-sync/storage-service/types/src/responses.rs b/state-sync/storage-service/types/src/responses.rs index 0ccef356597d6..0aa4f1aaa2a6d 100644 --- a/state-sync/storage-service/types/src/responses.rs +++ b/state-sync/storage-service/types/src/responses.rs @@ -7,7 +7,8 @@ use crate::{ GetNewTransactionsOrOutputsWithProof, GetNewTransactionsWithProof, GetNumberOfStatesAtVersion, GetServerProtocolVersion, GetStateValuesWithProof, GetStorageServerSummary, GetTransactionOutputsWithProof, GetTransactionsOrOutputsWithProof, - GetTransactionsWithProof, + GetTransactionsWithProof, SubscribeTransactionOutputsWithProof, + SubscribeTransactionsOrOutputsWithProof, SubscribeTransactionsWithProof, }, responses::Error::DegenerateRangeError, Epoch, StorageServiceRequest, COMPRESSION_SUFFIX_LABEL, @@ -535,6 +536,24 @@ impl DataSummary { can_serve_txns && can_serve_outputs && can_create_proof }, + SubscribeTransactionOutputsWithProof(request) => { + let known_version = request + .subscription_stream_metadata + .known_version_at_stream_start; + self.can_service_subscription_request(aptos_data_client_config, known_version) + }, + SubscribeTransactionsOrOutputsWithProof(request) => { + let known_version = request + .subscription_stream_metadata + .known_version_at_stream_start; + self.can_service_subscription_request(aptos_data_client_config, known_version) + }, + SubscribeTransactionsWithProof(request) => { + let known_version = request + .subscription_stream_metadata + .known_version_at_stream_start; + self.can_service_subscription_request(aptos_data_client_config, known_version) + }, } } @@ -545,6 +564,21 @@ impl DataSummary { known_version: u64, ) -> bool { let max_version_lag = aptos_data_client_config.max_optimistic_fetch_version_lag; + self.check_synced_version_lag(known_version, max_version_lag) + } + + /// Returns true iff the subscription data request can be serviced + fn can_service_subscription_request( + &self, + aptos_data_client_config: &AptosDataClientConfig, + known_version: u64, + ) -> bool { + let max_version_lag = aptos_data_client_config.max_subscription_version_lag; + 
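// For example, with max_subscription_version_lag = 1_000 and a synced ledger
// info at version 50_000, a subscription whose known_version_at_stream_start
// is 50_999 can still be serviced (50_000 + 1_000 > 50_999), while 51_000
// cannot; test_data_summary_service_subscription below exercises exactly
// these boundary versions.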
self.check_synced_version_lag(known_version, max_version_lag) + } + + /// Returns true iff the synced version is within the given lag range + fn check_synced_version_lag(&self, known_version: u64, max_version_lag: u64) -> bool { self.synced_ledger_info .as_ref() .map(|li| (li.ledger_info().version() + max_version_lag) > known_version) diff --git a/state-sync/storage-service/types/src/tests.rs b/state-sync/storage-service/types/src/tests.rs index 6c7fe315816e6..3421789713d9e 100644 --- a/state-sync/storage-service/types/src/tests.rs +++ b/state-sync/storage-service/types/src/tests.rs @@ -5,7 +5,9 @@ use crate::{ requests::{ DataRequest, EpochEndingLedgerInfoRequest, NewTransactionOutputsWithProofRequest, NewTransactionsOrOutputsWithProofRequest, NewTransactionsWithProofRequest, - StateValuesWithProofRequest, TransactionOutputsWithProofRequest, + StateValuesWithProofRequest, SubscribeTransactionOutputsWithProofRequest, + SubscribeTransactionsOrOutputsWithProofRequest, SubscribeTransactionsWithProofRequest, + SubscriptionStreamMetadata, TransactionOutputsWithProofRequest, TransactionsOrOutputsWithProofRequest, TransactionsWithProofRequest, }, responses::{CompleteDataRange, DataSummary, ProtocolMetadata}, @@ -134,6 +136,54 @@ fn test_data_summary_service_optimistic_fetch() { } } +#[test] +fn test_data_summary_service_subscription() { + // Create a data client config with the specified max subscription lag + let max_subscription_version_lag = 1000; + let data_client_config = AptosDataClientConfig { + max_subscription_version_lag, + ..Default::default() + }; + + // Create a data summary with the specified synced ledger info version + let highest_synced_version = 50_000; + let data_summary = DataSummary { + synced_ledger_info: Some(create_ledger_info_at_version(highest_synced_version)), + ..Default::default() + }; + + // Verify the different requests that can be serviced + for compression in [true, false] { + // Test the known versions that are within the subscription lag + let known_versions = vec![ + highest_synced_version, + highest_synced_version + (max_subscription_version_lag / 2), + highest_synced_version + max_subscription_version_lag - 1, + ]; + verify_can_service_subscription_requests( + &data_client_config, + &data_summary, + compression, + known_versions, + true, + ); + + // Test the known versions that are outside the subscription lag + let known_versions = vec![ + highest_synced_version + max_subscription_version_lag, + highest_synced_version + max_subscription_version_lag + 1, + highest_synced_version + (max_subscription_version_lag * 2), + ]; + verify_can_service_subscription_requests( + &data_client_config, + &data_summary, + compression, + known_versions, + false, + ); + } +} + #[test] fn test_data_summary_service_transactions() { // Create a data client config and data summary @@ -455,27 +505,27 @@ fn create_optimistic_fetch_request( use_compression: bool, ) -> StorageServiceRequest { // Generate a random number - let random_number: u64 = thread_rng().gen(); + let random_number = get_random_u64(); // Determine the data request type based on the random number let data_request = if random_number % 3 == 0 { DataRequest::GetNewTransactionsWithProof(NewTransactionsWithProofRequest { known_version, - known_epoch: 1, + known_epoch: get_random_u64(), include_events: false, }) } else if random_number % 3 == 1 { DataRequest::GetNewTransactionOutputsWithProof(NewTransactionOutputsWithProofRequest { known_version, - known_epoch: 1, + known_epoch: get_random_u64(), }) } else { 
DataRequest::GetNewTransactionsOrOutputsWithProof( NewTransactionsOrOutputsWithProofRequest { known_version, - known_epoch: 1, + known_epoch: get_random_u64(), include_events: false, - max_num_output_reductions: 0, + max_num_output_reductions: get_random_u64(), }, ) }; @@ -498,6 +548,45 @@ fn create_outputs_request( StorageServiceRequest::new(data_request, use_compression) } +/// Creates a new subscription request +fn create_subscription_request(known_version: u64, use_compression: bool) -> StorageServiceRequest { + // Create a new subscription stream metadata + let subscription_stream_metadata = SubscriptionStreamMetadata { + known_version_at_stream_start: known_version, + known_epoch_at_stream_start: get_random_u64(), + subscription_stream_id: get_random_u64(), + }; + + // Generate a random number + let random_number = get_random_u64(); + + // Determine the data request type based on the random number + let data_request = if random_number % 3 == 0 { + DataRequest::SubscribeTransactionsWithProof(SubscribeTransactionsWithProofRequest { + subscription_stream_metadata, + include_events: false, + subscription_stream_index: get_random_u64(), + }) + } else if random_number % 3 == 1 { + DataRequest::SubscribeTransactionOutputsWithProof( + SubscribeTransactionOutputsWithProofRequest { + subscription_stream_metadata, + subscription_stream_index: get_random_u64(), + }, + ) + } else { + DataRequest::SubscribeTransactionsOrOutputsWithProof( + SubscribeTransactionsOrOutputsWithProofRequest { + subscription_stream_metadata, + include_events: false, + max_num_output_reductions: get_random_u64(), + subscription_stream_index: get_random_u64(), + }, + ) + }; + StorageServiceRequest::new(data_request, use_compression) +} + /// Creates a request for transactions fn create_transactions_request( proof: Version, @@ -555,6 +644,11 @@ fn create_state_values_request_at_version( create_state_values_request(version, 0, 1000, use_compression) } +/// Generates a random u64 +fn get_random_u64() -> u64 { + thread_rng().gen() +} + /// Verifies the serviceability of the epoch ending request ranges against /// the specified data summary. If `expect_service` is true, then the /// request should be serviceable. @@ -612,6 +706,25 @@ fn verify_can_service_state_chunk_requests( } } +/// Verifies the serviceability of the subscription versions against +/// the specified data summary. If `expect_service` is true, then the +/// request should be serviceable. +fn verify_can_service_subscription_requests( + data_client_config: &AptosDataClientConfig, + data_summary: &DataSummary, + compression: bool, + known_versions: Vec, + expect_service: bool, +) { + for known_version in known_versions { + // Create the subscription request + let request = create_subscription_request(known_version, compression); + + // Verify the serviceability of the request + verify_serviceability(data_client_config, data_summary, request, expect_service); + } +} + /// Verifies the serviceability of the transaction request ranges against /// the specified data summary. If `expect_service` is true, then the /// request should be serviceable. diff --git a/storage/aptosdb/src/backup/restore_utils.rs b/storage/aptosdb/src/backup/restore_utils.rs index afef6ba813a7f..85e516312ed9a 100644 --- a/storage/aptosdb/src/backup/restore_utils.rs +++ b/storage/aptosdb/src/backup/restore_utils.rs @@ -6,6 +6,7 @@ //! state sync v2. 
use crate::{ event_store::EventStore, + ledger_db::LedgerDbSchemaBatches, ledger_store::LedgerStore, new_sharded_kv_schema_batch, schema::{ @@ -119,13 +120,13 @@ pub(crate) fn save_transactions( events: &[Vec], write_sets: Vec, existing_batch: Option<( - &mut SchemaBatch, + &mut LedgerDbSchemaBatches, &mut ShardedStateKvSchemaBatch, &SchemaBatch, )>, kv_replay: bool, ) -> Result<()> { - if let Some((batch, state_kv_batches, state_kv_metadata_batch)) = existing_batch { + if let Some((ledger_db_batch, state_kv_batches, state_kv_metadata_batch)) = existing_batch { save_transactions_impl( Arc::clone(&ledger_store), transaction_store, @@ -136,13 +137,13 @@ pub(crate) fn save_transactions( txn_infos, events, write_sets.as_ref(), - batch, + ledger_db_batch, state_kv_batches, state_kv_metadata_batch, kv_replay, )?; } else { - let mut batch = SchemaBatch::new(); + let mut ledger_db_batch = LedgerDbSchemaBatches::new(); let mut sharded_kv_schema_batch = new_sharded_kv_schema_batch(); let state_kv_metadata_batch = SchemaBatch::new(); save_transactions_impl( @@ -155,7 +156,7 @@ pub(crate) fn save_transactions( txn_infos, events, write_sets.as_ref(), - &mut batch, + &mut ledger_db_batch, &mut sharded_kv_schema_batch, &state_kv_metadata_batch, kv_replay, @@ -169,8 +170,7 @@ pub(crate) fn save_transactions( sharded_kv_schema_batch, )?; - // TODO(grao): Support splitted ledger DBs here. - ledger_store.ledger_db.metadata_db().write_schemas(batch)?; + ledger_store.ledger_db.write_schemas(ledger_db_batch)?; } Ok(()) @@ -231,32 +231,46 @@ pub(crate) fn save_transactions_impl( txn_infos: &[TransactionInfo], events: &[Vec], write_sets: &[WriteSet], - batch: &mut SchemaBatch, + ledger_db_batch: &mut LedgerDbSchemaBatches, state_kv_batches: &mut ShardedStateKvSchemaBatch, state_kv_metadata_batch: &SchemaBatch, kv_replay: bool, ) -> Result<()> { - // TODO(grao): Support splited ledger db here. 
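// With split ledger DBs, each data family below is written into its own
// SchemaBatch inside `ledger_db_batch`: transactions into
// transaction_db_batches, transaction infos and the accumulator into their
// respective batches, events into event_db_batches, write sets into
// write_set_db_batches, and the commit-progress markers into
// ledger_metadata_db_batches.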
for (idx, txn) in txns.iter().enumerate() { transaction_store.put_transaction( first_version + idx as Version, txn, /*skip_index=*/ false, - batch, + &ledger_db_batch.transaction_db_batches, )?; } - ledger_store.put_transaction_infos(first_version, txn_infos, batch, batch)?; - event_store.put_events_multiple_versions(first_version, events, batch)?; + + ledger_store.put_transaction_infos( + first_version, + txn_infos, + &ledger_db_batch.transaction_info_db_batches, + &ledger_db_batch.transaction_accumulator_db_batches, + )?; + + event_store.put_events_multiple_versions( + first_version, + events, + &ledger_db_batch.event_db_batches, + )?; // insert changes in write set schema batch for (idx, ws) in write_sets.iter().enumerate() { - transaction_store.put_write_set(first_version + idx as Version, ws, batch)?; + transaction_store.put_write_set( + first_version + idx as Version, + ws, + &ledger_db_batch.write_set_db_batches, + )?; } if kv_replay && first_version > 0 && state_store.get_usage(Some(first_version - 1)).is_ok() { state_store.put_write_sets( write_sets.to_vec(), first_version, - batch, + &ledger_db_batch.ledger_metadata_db_batches, // used for storing the storage usage state_kv_batches, state_kv_metadata_batch, state_store.state_kv_db.enabled_sharding(), @@ -264,14 +278,18 @@ pub(crate) fn save_transactions_impl( } let last_version = first_version + txns.len() as u64 - 1; - batch.put::( - &DbMetadataKey::LedgerCommitProgress, - &DbMetadataValue::Version(last_version), - )?; - batch.put::( - &DbMetadataKey::OverallCommitProgress, - &DbMetadataValue::Version(last_version), - )?; + ledger_db_batch + .ledger_metadata_db_batches + .put::( + &DbMetadataKey::LedgerCommitProgress, + &DbMetadataValue::Version(last_version), + )?; + ledger_db_batch + .ledger_metadata_db_batches + .put::( + &DbMetadataKey::OverallCommitProgress, + &DbMetadataValue::Version(last_version), + )?; Ok(()) } diff --git a/storage/aptosdb/src/event_store/mod.rs b/storage/aptosdb/src/event_store/mod.rs index 3525d9bb8562d..0786bb3f88351 100644 --- a/storage/aptosdb/src/event_store/mod.rs +++ b/storage/aptosdb/src/event_store/mod.rs @@ -317,15 +317,17 @@ impl EventStore { .iter() .enumerate() .try_for_each::<_, Result<_>>(|(idx, event)| { - if !skip_index { - batch.put::( - &(*event.key(), event.sequence_number()), - &(version, idx as u64), - )?; - batch.put::( - &(*event.key(), version, event.sequence_number()), - &(idx as u64), - )?; + if let ContractEvent::V1(v1) = event { + if !skip_index { + batch.put::( + &(*v1.key(), v1.sequence_number()), + &(version, idx as u64), + )?; + batch.put::( + &(*v1.key(), version, v1.sequence_number()), + &(idx as u64), + )?; + } } batch.put::(&(version, idx as u64), event) })?; @@ -469,14 +471,16 @@ impl EventStore { ) -> anyhow::Result<()> { let mut current_version = start; for events in self.get_events_by_version_iter(start, (end - start) as usize)? 
{ - for (current_index, event) in (events?).into_iter().enumerate() { - db_batch.delete::(&( - *event.key(), - current_version, - event.sequence_number(), - ))?; - db_batch.delete::(&(*event.key(), event.sequence_number()))?; - db_batch.delete::(&(current_version, current_index as u64))?; + for (idx, event) in (events?).into_iter().enumerate() { + if let ContractEvent::V1(v1) = event { + db_batch.delete::(&( + *v1.key(), + current_version, + v1.sequence_number(), + ))?; + db_batch.delete::(&(*v1.key(), v1.sequence_number()))?; + } + db_batch.delete::(&(current_version, idx as u64))?; } current_version += 1; } diff --git a/storage/aptosdb/src/event_store/test.rs b/storage/aptosdb/src/event_store/test.rs index 7b3cdcc9617b2..b0fab747f8ac2 100644 --- a/storage/aptosdb/src/event_store/test.rs +++ b/storage/aptosdb/src/event_store/test.rs @@ -190,12 +190,19 @@ fn test_index_get_impl(event_batches: Vec>) { .into_iter() .enumerate() .for_each(|(ver, batch)| { - batch.into_iter().for_each(|e| { - let mut events_and_versions = - events_by_event_key.entry(*e.key()).or_insert_with(Vec::new); - assert_eq!(events_and_versions.len() as u64, e.sequence_number()); - events_and_versions.push((e, ver as Version)); - }) + batch + .into_iter() + .filter(|e| matches!(e, ContractEvent::V1(_))) + .for_each(|e| { + let mut events_and_versions = events_by_event_key + .entry(*e.v1().unwrap().key()) + .or_insert_with(Vec::new); + assert_eq!( + events_and_versions.len() as u64, + e.v1().unwrap().sequence_number() + ); + events_and_versions.push((e, ver as Version)); + }) }); // Fetch and check. @@ -271,7 +278,7 @@ prop_compose! { Vec::new(), // failed_proposers timestamp, ); - let event = ContractEvent::new( + let event = ContractEvent::new_v1( new_block_event_key(), seq, TypeTag::Struct(Box::new(NewBlockEvent::struct_tag())), diff --git a/storage/aptosdb/src/ledger_db.rs b/storage/aptosdb/src/ledger_db.rs index 5e6bd879481dd..e6481dd55cab3 100644 --- a/storage/aptosdb/src/ledger_db.rs +++ b/storage/aptosdb/src/ledger_db.rs @@ -18,7 +18,7 @@ use anyhow::Result; use aptos_config::config::{RocksdbConfig, RocksdbConfigs}; use aptos_logger::prelude::info; use aptos_rocksdb_options::gen_rocksdb_options; -use aptos_schemadb::{ColumnFamilyDescriptor, ColumnFamilyName, DB}; +use aptos_schemadb::{ColumnFamilyDescriptor, ColumnFamilyName, SchemaBatch, DB}; use aptos_types::transaction::Version; use std::{ path::{Path, PathBuf}, @@ -34,10 +34,38 @@ pub const TRANSACTION_DB_NAME: &str = "transaction_db"; pub const TRANSACTION_INFO_DB_NAME: &str = "transaction_info_db"; pub const WRITE_SET_DB_NAME: &str = "write_set_db"; +#[derive(Debug)] +pub struct LedgerDbSchemaBatches { + pub ledger_metadata_db_batches: SchemaBatch, + pub event_db_batches: SchemaBatch, + pub transaction_accumulator_db_batches: SchemaBatch, + pub transaction_db_batches: SchemaBatch, + pub transaction_info_db_batches: SchemaBatch, + pub write_set_db_batches: SchemaBatch, +} + +impl Default for LedgerDbSchemaBatches { + fn default() -> Self { + Self { + ledger_metadata_db_batches: SchemaBatch::new(), + event_db_batches: SchemaBatch::new(), + transaction_accumulator_db_batches: SchemaBatch::new(), + transaction_db_batches: SchemaBatch::new(), + transaction_info_db_batches: SchemaBatch::new(), + write_set_db_batches: SchemaBatch::new(), + } + } +} + +impl LedgerDbSchemaBatches { + pub fn new() -> Self { + Self::default() + } +} + #[derive(Debug)] pub struct LedgerDb { ledger_metadata_db: Arc, - event_db: Arc, transaction_accumulator_db: Arc, transaction_db: Arc, 
@@ -320,4 +348,18 @@ impl LedgerDb { ledger_db_folder } } + + pub fn write_schemas(&self, schemas: LedgerDbSchemaBatches) -> Result<()> { + self.write_set_db + .write_schemas(schemas.write_set_db_batches)?; + self.transaction_info_db + .write_schemas(schemas.transaction_info_db_batches)?; + self.transaction_db + .write_schemas(schemas.transaction_db_batches)?; + self.ledger_metadata_db + .write_schemas(schemas.ledger_metadata_db_batches)?; + self.event_db.write_schemas(schemas.event_db_batches)?; + self.transaction_accumulator_db + .write_schemas(schemas.transaction_accumulator_db_batches) + } } diff --git a/storage/aptosdb/src/lib.rs b/storage/aptosdb/src/lib.rs index cb25547894153..dcf80a58f4821 100644 --- a/storage/aptosdb/src/lib.rs +++ b/storage/aptosdb/src/lib.rs @@ -47,7 +47,7 @@ use crate::{ db_options::{ledger_db_column_families, state_merkle_db_column_families}, errors::AptosDbError, event_store::EventStore, - ledger_db::LedgerDb, + ledger_db::{LedgerDb, LedgerDbSchemaBatches}, ledger_store::LedgerStore, metrics::{ API_LATENCY_SECONDS, COMMITTED_TXNS, LATEST_TXN_VERSION, LEDGER_VERSION, NEXT_BLOCK_EPOCH, @@ -117,6 +117,7 @@ use rayon::prelude::*; use std::{ borrow::Borrow, collections::HashMap, + default::Default, fmt::{Debug, Formatter}, iter::Iterator, path::Path, @@ -135,7 +136,7 @@ pub(crate) const NUM_STATE_SHARDS: usize = 16; static COMMIT_POOL: Lazy = Lazy::new(|| { rayon::ThreadPoolBuilder::new() .num_threads(32) - .thread_name(|index| format!("commit_{}", index)) + .thread_name(|index| format!("commit-{}", index)) .build() .unwrap() }); @@ -522,6 +523,26 @@ impl AptosDB { ) } + /// This opens db with sharding enabled. + #[cfg(any(test, feature = "fuzzing"))] + pub fn new_for_test_with_sharding + Clone>(db_root_path: P) -> Self { + let db_config = RocksdbConfigs { + use_sharded_state_merkle_db: true, + split_ledger_db: true, + ..Default::default() + }; + Self::open( + db_root_path, + false, + NO_OP_STORAGE_PRUNER_CONFIG, /* pruner */ + db_config, + false, + BUFFERED_STATE_TARGET_ITEMS, + DEFAULT_MAX_NUM_NODES_PER_LRU_CACHE_SHARD, + ) + .expect("Unable to open AptosDB") + } + /// This opens db in non-readonly mode, without the pruner and cache. #[cfg(any(test, feature = "fuzzing"))] pub fn new_for_test_no_cache + Clone>(db_root_path: P) -> Self { @@ -767,11 +788,15 @@ impl AptosDB { .into_iter() .map(|(seq, ver, idx)| { let event = self.event_store.get_event_by_version_and_index(ver, idx)?; + let v0 = match &event { + ContractEvent::V1(event) => event, + ContractEvent::V2(_) => bail!("Unexpected module event"), + }; ensure!( - seq == event.sequence_number(), + seq == v0.sequence_number(), "Index broken, expected seq:{}, actual:{}", seq, - event.sequence_number() + v0.sequence_number() ); Ok(EventWithVersion::new(ver, event)) }) @@ -2136,8 +2161,6 @@ impl DbWriter for AptosDB { ledger_infos: &[LedgerInfoWithSignatures], ) -> Result<()> { gauged_api("finalize_state_snapshot", || { - // TODO(grao): Support splitted ledger DBs in this function. 
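// This function now assembles a LedgerDbSchemaBatches (see below) instead of
// a single SchemaBatch, so the accumulated writes can be committed across the
// split ledger DBs via self.ledger_db.write_schemas at the end.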
- // Ensure the output with proof only contains a single transaction output and info let num_transaction_outputs = output_with_proof.transactions_and_outputs.len(); let num_transaction_infos = output_with_proof.proof.transaction_infos.len(); @@ -2168,7 +2191,7 @@ impl DbWriter for AptosDB { )?; // Create a single change set for all further write operations - let mut batch = SchemaBatch::new(); + let mut ledger_db_batch = LedgerDbSchemaBatches::new(); let mut sharded_kv_batch = new_sharded_kv_schema_batch(); let state_kv_metadata_batch = SchemaBatch::new(); // Save the target transactions, outputs, infos and events @@ -2198,7 +2221,11 @@ impl DbWriter for AptosDB { &transaction_infos, &events, wsets, - Option::Some((&mut batch, &mut sharded_kv_batch, &state_kv_metadata_batch)), + Option::Some(( + &mut ledger_db_batch, + &mut sharded_kv_batch, + &state_kv_metadata_batch, + )), false, )?; @@ -2207,22 +2234,26 @@ impl DbWriter for AptosDB { self.ledger_db.metadata_db(), self.ledger_store.clone(), ledger_infos, - Some(&mut batch), + Some(&mut ledger_db_batch.ledger_metadata_db_batches), )?; - batch.put::( - &DbMetadataKey::LedgerCommitProgress, - &DbMetadataValue::Version(version), - )?; - batch.put::( - &DbMetadataKey::OverallCommitProgress, - &DbMetadataValue::Version(version), - )?; + ledger_db_batch + .ledger_metadata_db_batches + .put::( + &DbMetadataKey::LedgerCommitProgress, + &DbMetadataValue::Version(version), + )?; + ledger_db_batch + .ledger_metadata_db_batches + .put::( + &DbMetadataKey::OverallCommitProgress, + &DbMetadataValue::Version(version), + )?; // Apply the change set writes to the database (atomically) and update in-memory state // - // TODO(grao): Support sharding here. - self.ledger_db.metadata_db().write_schemas(batch)?; + // state kv and SMT should use shared way of committing. + self.ledger_db.write_schemas(ledger_db_batch)?; self.ledger_pruner.save_min_readable_version(version)?; self.state_store diff --git a/storage/aptosdb/src/schema/db_metadata/mod.rs b/storage/aptosdb/src/schema/db_metadata/mod.rs index a4d5fe20acfc2..4a8ebf2fbb49d 100644 --- a/storage/aptosdb/src/schema/db_metadata/mod.rs +++ b/storage/aptosdb/src/schema/db_metadata/mod.rs @@ -65,6 +65,7 @@ pub enum DbMetadataKey { StateMerkleShardPrunerProgress(ShardId), EpochEndingStateMerkleShardPrunerProgress(ShardId), StateKvShardPrunerProgress(ShardId), + StateMerkleShardRestoreProgress(ShardId, Version), } define_schema!( diff --git a/storage/aptosdb/src/state_kv_db.rs b/storage/aptosdb/src/state_kv_db.rs index 08cee9b83d5f7..6e79aa5bf1751 100644 --- a/storage/aptosdb/src/state_kv_db.rs +++ b/storage/aptosdb/src/state_kv_db.rs @@ -101,7 +101,9 @@ impl StateKvDb { COMMIT_POOL.scope(|s| { let mut batches = sharded_state_kv_batches.into_iter(); for shard_id in 0..NUM_STATE_SHARDS { - let state_kv_batch = batches.next().unwrap(); + let state_kv_batch = batches + .next() + .expect("Not sufficient number of sharded state kv batches"); s.spawn(move |_| { // TODO(grao): Consider propagating the error instead of panic, if necessary. self.commit_single_shard(version, shard_id as u8, state_kv_batch) @@ -116,11 +118,6 @@ impl StateKvDb { self.write_progress(version) } - pub(crate) fn commit_raw_batch(&self, state_kv_batch: SchemaBatch) -> Result<()> { - // TODO(grao): Support sharding here. 
- self.state_kv_metadata_db.write_schemas(state_kv_batch) - } - pub(crate) fn write_progress(&self, version: Version) -> Result<()> { self.state_kv_metadata_db.put::( &DbMetadataKey::StateKvCommitProgress, diff --git a/storage/aptosdb/src/state_merkle_db.rs b/storage/aptosdb/src/state_merkle_db.rs index bc257fcfabf4a..3ec3b6932c344 100644 --- a/storage/aptosdb/src/state_merkle_db.rs +++ b/storage/aptosdb/src/state_merkle_db.rs @@ -48,7 +48,7 @@ pub const STATE_MERKLE_METADATA_DB_NAME: &str = "state_merkle_metadata_db"; static TREE_COMMIT_POOL: Lazy = Lazy::new(|| { rayon::ThreadPoolBuilder::new() .num_threads(32) - .thread_name(|index| format!("tree_commit_{}", index)) + .thread_name(|index| format!("tree_commit-{}", index)) .build() .unwrap() }); @@ -139,6 +139,29 @@ impl StateMerkleDb { self.commit_top_levels(version, top_levels_batch) } + pub(crate) fn commit_no_progress( + &self, + top_level_batch: SchemaBatch, + batches_for_shards: Vec, + ) -> Result<()> { + ensure!(batches_for_shards.len() == NUM_STATE_SHARDS); + TREE_COMMIT_POOL.scope(|s| { + let mut state_merkle_batch = batches_for_shards.into_iter(); + for shard_id in 0..NUM_STATE_SHARDS { + let batch = state_merkle_batch.next().unwrap(); + s.spawn(move |_| { + self.state_merkle_db_shards[shard_id] + .write_schemas(batch) + .unwrap_or_else(|_| { + panic!("Failed to commit state merkle shard {shard_id}.") + }); + }); + } + }); + + self.state_merkle_metadata_db.write_schemas(top_level_batch) + } + pub(crate) fn create_checkpoint( db_root_path: impl AsRef, cp_root_path: impl AsRef, @@ -657,6 +680,63 @@ impl StateMerkleDb { Ok(ret) } + + fn get_rightmost_leaf_in_single_shard( + &self, + version: Version, + shard_id: u8, + ) -> Result> { + assert!( + shard_id < NUM_STATE_SHARDS as u8, + "Invalid shard_id: {}", + shard_id + ); + let shard_db = self.state_merkle_db_shards[shard_id as usize].clone(); + // The encoding of key and value in DB looks like: + // + // | <-------------- key --------------> | <- value -> | + // | version | num_nibbles | nibble_path | node | + // + // Here version is fixed. For each num_nibbles, there could be a range of nibble paths + // of the same length. If one of them is the rightmost leaf R, it must be at the end of this + // range. Otherwise let's assume the R is in the middle of the range, so we + // call the node at the end of this range X: + // 1. If X is leaf, then X.account_key() > R.account_key(), because the nibble path is a + // prefix of the account key. So R is not the rightmost leaf. + // 2. If X is internal node, then X must be on the right side of R, so all its children's + // account keys are larger than R.account_key(). So R is not the rightmost leaf. + // + // Given that num_nibbles ranges from 0 to ROOT_NIBBLE_HEIGHT, there are only + // ROOT_NIBBLE_HEIGHT+1 ranges, so we can just find the node at the end of each range and + // then pick the one with the largest account key. + let mut ret = None; + + for num_nibbles in 0..=ROOT_NIBBLE_HEIGHT { + let mut iter = shard_db.iter::(Default::default())?; + // nibble_path is always non-empty except for the root, so if we use an empty nibble + // path as the seek key, the iterator will end up pointing to the end of the previous + // range. + let seek_key = (version, (num_nibbles + 1) as u8); + iter.seek_for_prev(&seek_key)?; + + if let Some((node_key, node)) = iter.next().transpose()? 
{ + if node_key.version() != version { + continue; + } + if let Node::Leaf(leaf_node) = node { + match ret { + None => ret = Some((node_key, leaf_node)), + Some(ref other) => { + if leaf_node.account_key() > other.1.account_key() { + ret = Some((node_key, leaf_node)); + } + }, + } + } + } + } + Ok(ret) + } } impl TreeReader for StateMerkleDb { @@ -705,11 +785,10 @@ impl TreeReader for StateMerkleDb { fn get_rightmost_leaf(&self, version: Version) -> Result> { // Since everything has the same version during restore, we seek to the first node and get // its version. - // - // TODO(grao): Support sharding here. let mut iter = self .metadata_db() .iter::(Default::default())?; + // get the root node corresponding to the version iter.seek(&(version, 0))?; match iter.next().transpose()? { Some((node_key, node)) => { @@ -720,50 +799,19 @@ impl TreeReader for StateMerkleDb { None => return Ok(None), }; - // The encoding of key and value in DB looks like: - // - // | <-------------- key --------------> | <- value -> | - // | version | num_nibbles | nibble_path | node | - // - // Here version is fixed. For each num_nibbles, there could be a range of nibble paths - // of the same length. If one of them is the rightmost leaf R, it must be at the end of this - // range. Otherwise let's assume the R is in the middle of the range, so we - // call the node at the end of this range X: - // 1. If X is leaf, then X.account_key() > R.account_key(), because the nibble path is a - // prefix of the account key. So R is not the rightmost leaf. - // 2. If X is internal node, then X must be on the right side of R, so all its children's - // account keys are larger than R.account_key(). So R is not the rightmost leaf. - // - // Given that num_nibbles ranges from 0 to ROOT_NIBBLE_HEIGHT, there are only - // ROOT_NIBBLE_HEIGHT+1 ranges, so we can just find the node at the end of each range and - // then pick the one with the largest account key. - let mut ret = None; - - for num_nibbles in 1..=ROOT_NIBBLE_HEIGHT + 1 { - // TODO(grao): Support sharding here. - let mut iter = self - .metadata_db() - .iter::(Default::default())?; - // nibble_path is always non-empty except for the root, so if we use an empty nibble - // path as the seek key, the iterator will end up pointing to the end of the previous - // range. - let seek_key = (version, num_nibbles as u8); - iter.seek_for_prev(&seek_key)?; - - if let Some((node_key, node)) = iter.next().transpose()? { - if node_key.version() != version { - continue; - } - if let Node::Leaf(leaf_node) = node { - match ret { - None => ret = Some((node_key, leaf_node)), - Some(ref other) => { - if leaf_node.account_key() > other.1.account_key() { - ret = Some((node_key, leaf_node)); - } - }, - } - } + let ret = None; + // if sharding is not enable, we only need to search once. + let shards = self + .enable_sharding + .then(|| (0..NUM_STATE_SHARDS)) + .unwrap_or(0..1); + + // Search from right to left to find the first leaf node. + for shard_id in shards.rev() { + if let Some((node_key, leaf_node)) = + self.get_rightmost_leaf_in_single_shard(version, shard_id as u8)? + { + return Ok(Some((node_key, leaf_node))); } } @@ -773,14 +821,21 @@ impl TreeReader for StateMerkleDb { impl TreeWriter for StateMerkleDb { fn write_node_batch(&self, node_batch: &NodeBatch) -> Result<()> { - // TODO(grao): Support sharding here. 
let _timer = OTHER_TIMERS_SECONDS .with_label_values(&["tree_writer_write_batch"]) .start_timer(); - let batch = SchemaBatch::new(); + // Get the top level batch and sharded batch from raw NodeBatch + let top_level_batch = SchemaBatch::new(); + let mut jmt_shard_batches: Vec = Vec::with_capacity(NUM_STATE_SHARDS); + jmt_shard_batches.resize_with(NUM_STATE_SHARDS, SchemaBatch::new); node_batch.iter().try_for_each(|(node_key, node)| { - batch.put::(node_key, node) + if let Some(shard_id) = node_key.get_shard_id() { + jmt_shard_batches[shard_id as usize] + .put::(node_key, node) + } else { + top_level_batch.put::(node_key, node) + } })?; - self.metadata_db().write_schemas(batch) + self.commit_no_progress(top_level_batch, jmt_shard_batches) } } diff --git a/storage/aptosdb/src/state_restore/mod.rs b/storage/aptosdb/src/state_restore/mod.rs index 0b4b7a0d1d2de..3f52cf775ba9a 100644 --- a/storage/aptosdb/src/state_restore/mod.rs +++ b/storage/aptosdb/src/state_restore/mod.rs @@ -119,10 +119,12 @@ impl StateValueRestore { usage.add_item(k.key_size() + v.value_size()); } + // prepare the sharded kv batch let kv_batch: StateValueBatch> = chunk .into_iter() .map(|(k, v)| ((k, self.version), Some(v))) .collect(); + self.db.write_kv_batch( self.version, &kv_batch, diff --git a/storage/aptosdb/src/state_store/mod.rs b/storage/aptosdb/src/state_store/mod.rs index 2015efc6e18f0..31a19d4825599 100644 --- a/storage/aptosdb/src/state_store/mod.rs +++ b/storage/aptosdb/src/state_store/mod.rs @@ -9,6 +9,7 @@ use crate::{ epoch_by_version::EpochByVersionSchema, ledger_db::LedgerDb, metrics::{STATE_ITEMS, TOTAL_STATE_BYTES}, + new_sharded_kv_schema_batch, schema::{state_value::StateValueSchema, state_value_index::StateValueIndexSchema}, stale_state_value_index::StaleStateValueIndexSchema, state_kv_db::StateKvDb, @@ -24,7 +25,7 @@ use crate::{ version_data::VersionDataSchema, AptosDbError, LedgerStore, ShardedStateKvSchemaBatch, StaleNodeIndexCrossEpochSchema, StaleNodeIndexSchema, StateKvPrunerManager, StateMerklePrunerManager, TransactionStore, - OTHER_TIMERS_SECONDS, + NUM_STATE_SHARDS, OTHER_TIMERS_SECONDS, }; use anyhow::{ensure, format_err, Context, Result}; use aptos_crypto::{ @@ -82,7 +83,7 @@ const MAX_COMMIT_PROGRESS_DIFFERENCE: u64 = 100000; static IO_POOL: Lazy = Lazy::new(|| { rayon::ThreadPoolBuilder::new() .num_threads(32) - .thread_name(|index| format!("kv_reader_{}", index)) + .thread_name(|index| format!("kv_reader-{}", index)) .build() .unwrap() }); @@ -934,6 +935,32 @@ impl StateStore { Ok(()) } + pub(crate) fn shard_state_value_batch( + &self, + metadata_batch: &SchemaBatch, + sharded_batch: &ShardedStateKvSchemaBatch, + values: &StateValueBatch, + ) -> Result<()> { + values.iter().for_each(|((key, version), value)| { + let shard_id = key.get_shard_id() as usize; + assert!( + shard_id < NUM_STATE_SHARDS, + "Invalid shard id: {}", + shard_id + ); + sharded_batch[shard_id] + .put::(&(key.clone(), *version), value) + .expect("Inserting into sharded schema batch should never fail"); + + if self.state_kv_db.enabled_sharding() { + metadata_batch + .put::(&(key.clone(), *version), &()) + .expect("Inserting into state value index schema batch should never fail"); + } + }); + Ok(()) + } + /// Merklize the results generated by `value_state_sets` to `batch` and return the result root /// hashes for each write set. 
#[cfg(test)] @@ -1113,16 +1140,17 @@ impl StateValueWriter for StateStore { .with_label_values(&["state_value_writer_write_chunk"]) .start_timer(); let batch = SchemaBatch::new(); - node_batch - .par_iter() - .map(|(k, v)| batch.put::(k, v)) - .collect::>>()?; + let sharded_schema_batch = new_sharded_kv_schema_batch(); + batch.put::( &DbMetadataKey::StateSnapshotRestoreProgress(version), &DbMetadataValue::StateSnapshotProgress(progress), )?; - // TODO(grao): Support sharding here. - self.state_kv_db.commit_raw_batch(batch) + + self.shard_state_value_batch(&batch, &sharded_schema_batch, node_batch)?; + + self.state_kv_db + .commit(version, batch, sharded_schema_batch) } fn write_usage(&self, version: Version, usage: StateStorageUsage) -> Result<()> { diff --git a/storage/aptosdb/src/test_helper.rs b/storage/aptosdb/src/test_helper.rs index a578d1f856477..9c4e534ff4edb 100644 --- a/storage/aptosdb/src/test_helper.rs +++ b/storage/aptosdb/src/test_helper.rs @@ -516,14 +516,25 @@ fn get_events_by_event_key( }; let events: Vec<_> = itertools::zip_eq(events, expected_seq_nums) - .map(|(e, _)| (e.transaction_version, e.event)) - .collect(); + .map(|(e, _)| Ok((e.transaction_version, e.event))) + .collect::>() + .unwrap(); let num_results = events.len() as u64; if num_results == 0 { break; } - assert_eq!(events.first().unwrap().1.sequence_number(), cursor); + assert_eq!( + events + .first() + .unwrap() + .1 + .clone() + .v1() + .unwrap() + .sequence_number(), + cursor + ); if order == Order::Ascending { if cursor + num_results > last_seq_num { @@ -573,11 +584,17 @@ fn verify_events_by_event_key( .first() .expect("Shouldn't be empty") .1 + .clone() + .v1() + .unwrap() .sequence_number(); let last_seq = events .last() .expect("Shouldn't be empty") .1 + .clone() + .v1() + .unwrap() .sequence_number(); let traversed = get_events_by_event_key( @@ -616,10 +633,12 @@ fn group_events_by_event_key( let mut event_key_to_events: HashMap> = HashMap::new(); for (batch_idx, txn) in txns_to_commit.iter().enumerate() { for event in txn.events() { - event_key_to_events - .entry(*event.key()) - .or_default() - .push((first_version + batch_idx as u64, event.clone())); + if let ContractEvent::V1(v1) = event { + event_key_to_events + .entry(*v1.key()) + .or_default() + .push((first_version + batch_idx as u64, event.clone())); + } } } event_key_to_events.into_iter().collect() diff --git a/storage/db-tool/src/tests.rs b/storage/db-tool/src/tests.rs index 7ab31ac41fbc9..a64be3532f3df 100644 --- a/storage/db-tool/src/tests.rs +++ b/storage/db-tool/src/tests.rs @@ -71,7 +71,7 @@ fn run_cmd(args: &[&str]) { } #[cfg(test)] -mod compaction_tests { +mod dbtool_tests { use crate::DBTool; use aptos_backup_cli::{ coordinators::backup::BackupCompactor, @@ -82,14 +82,23 @@ mod compaction_tests { }; use aptos_config::config::RocksdbConfigs; use aptos_db::AptosDB; - use aptos_executor_test_helpers::integration_test_impl::test_execution_with_storage_impl; + use aptos_executor_test_helpers::integration_test_impl::{ + test_execution_with_storage_impl, test_execution_with_storage_impl_inner, + }; use aptos_temppath::TempPath; use aptos_types::{ state_store::{state_key::StateKeyTag::AccessPath, state_key_prefix::StateKeyPrefix}, transaction::Version, }; use clap::Parser; - use std::{ops::Deref, path::PathBuf, sync::Arc, time::Duration}; + use std::{ + default::Default, + fs, + ops::Deref, + path::{Path, PathBuf}, + sync::Arc, + time::Duration, + }; use tokio::runtime::Runtime; fn assert_metadata_view_eq(view1: &MetadataView, view2: 
&MetadataView) { @@ -274,13 +283,15 @@ mod compaction_tests { start: Version, end: Version, backup_dir: PathBuf, + old_db_dir: PathBuf, new_db_dir: PathBuf, + force_sharding: bool, ) -> (Runtime, String) { use aptos_db::utils::iterators::PrefixedStateValueIterator; use aptos_storage_interface::DbReader; use itertools::zip_eq; - let db = test_execution_with_storage_impl(); + let db = test_execution_with_storage_impl_inner(force_sharding, old_db_dir.as_path()); let (rt, port) = start_local_backup_service(Arc::clone(&db)); let server_addr = format!(" http://localhost:{}", port); // Backup the local_test DB @@ -415,41 +426,55 @@ mod compaction_tests { .run(), ) .unwrap(); - // boostrap a historical DB starting from version 1 to version 12 - rt.block_on( - DBTool::try_parse_from([ - "aptos-db-tool", - "restore", - "bootstrap-db", - "--ledger-history-start-version", - format!("{}", start).as_str(), - "--target-version", - format!("{}", end).as_str(), - "--target-db-dir", - new_db_dir.as_path().to_str().unwrap(), - "--local-fs-dir", - backup_dir.as_path().to_str().unwrap(), - ]) - .unwrap() - .run(), - ) - .unwrap(); + + let start_string = format!("{}", start); + let end_string = format!("{}", end); + let mut restore_args = vec![ + "aptos-db-tool".to_string(), + "restore".to_string(), + "bootstrap-db".to_string(), + "--ledger-history-start-version".to_string(), + start_string, // use start_string here + "--target-version".to_string(), + end_string, // use end_string here + "--target-db-dir".to_string(), + new_db_dir.as_path().to_str().unwrap().to_string(), + "--local-fs-dir".to_string(), + backup_dir.as_path().to_str().unwrap().to_string(), + ]; + if force_sharding { + let additional_args = vec!["--split-ledger-db", "--use-sharded-state-merkle-db"] + .into_iter() + .map(|s| s.to_string()) + .collect::>(); + restore_args.extend(additional_args); + } + rt.block_on(DBTool::try_parse_from(restore_args).unwrap().run()) + .unwrap(); // verify the new DB has the same data as the original DB + let db_config = if !force_sharding { + RocksdbConfigs::default() + } else { + RocksdbConfigs { + use_sharded_state_merkle_db: true, + split_ledger_db: true, + ..Default::default() + } + }; let (_ledger_db, tree_db, state_kv_db) = - AptosDB::open_dbs(new_db_dir, RocksdbConfigs::default(), true, 0).unwrap(); + AptosDB::open_dbs(new_db_dir, db_config, false, 0).unwrap(); // assert the kv are the same in db and new_db // current all the kv are still stored in the ledger db // - // TODO(grao): Support state kv db sharding here. 
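// When force_sharding is enabled, the restored state values are read back
// through the sharded state KV DB, so the iterator below is opened in sharded
// mode; the per-version comparison against the original DB checks that both
// layouts return the same key/value pairs.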
for ver in start..=end { let new_iter = PrefixedStateValueIterator::new( &state_kv_db, StateKeyPrefix::new(AccessPath, b"".to_vec()), None, ver, - false, + force_sharding, ) .unwrap(); let old_iter = db @@ -487,13 +512,24 @@ mod compaction_tests { let backup_dir = TempPath::new(); backup_dir.create_as_dir().unwrap(); let new_db_dir = TempPath::new(); + let old_db_dir = TempPath::new(); // Test the basic db boostrap that replays from previous snapshot to the target version let (rt, _) = db_restore_test_setup( 16, 16, PathBuf::from(backup_dir.path()), + PathBuf::from(old_db_dir.path()), PathBuf::from(new_db_dir.path()), + false, ); + let backup_size = dir_size(backup_dir.path()); + let db_size = dir_size(new_db_dir.path()); + let old_db_size = dir_size(old_db_dir.path()); + println!( + "backup size: {}, old db size: {}, new db size: {}", + backup_size, old_db_size, db_size + ); + rt.shutdown_timeout(Duration::from_secs(1)); } #[test] @@ -501,12 +537,15 @@ mod compaction_tests { let backup_dir = TempPath::new(); backup_dir.create_as_dir().unwrap(); let new_db_dir = TempPath::new(); + let old_db_dir = TempPath::new(); // Test the db boostrap in some historical range with all the kvs restored let (rt, _) = db_restore_test_setup( 1, 16, PathBuf::from(backup_dir.path()), + PathBuf::from(old_db_dir.path()), PathBuf::from(new_db_dir.path()), + false, ); rt.shutdown_timeout(Duration::from_secs(1)); } @@ -517,12 +556,15 @@ mod compaction_tests { backup_dir.create_as_dir().unwrap(); let new_db_dir = TempPath::new(); new_db_dir.create_as_dir().unwrap(); + let old_db_dir = TempPath::new(); // Test the basic db boostrap that replays from previous snapshot to the target version let (rt, _) = db_restore_test_setup( 1, 16, PathBuf::from(backup_dir.path()), + PathBuf::from(old_db_dir.path()), PathBuf::from(new_db_dir.path()), + false, ); // boostrap a historical DB starting from version 1 to version 18 // This only replays the txn from txn 17 to 18 @@ -546,4 +588,53 @@ mod compaction_tests { .unwrap(); rt.shutdown_timeout(Duration::from_secs(1)); } + + #[test] + fn test_restore_with_sharded_db() { + let backup_dir = TempPath::new(); + backup_dir.create_as_dir().unwrap(); + let new_db_dir = TempPath::new(); + let old_db_dir = TempPath::new(); + + let (rt, _) = db_restore_test_setup( + 16, + 16, + PathBuf::from(backup_dir.path()), + PathBuf::from(old_db_dir.path()), + PathBuf::from(new_db_dir.path()), + true, + ); + let backup_size = dir_size(backup_dir.path()); + let db_size = dir_size(new_db_dir.path()); + let old_db_size = dir_size(old_db_dir.path()); + println!( + "backup size: {}, old db size: {}, new db size: {}", + backup_size, old_db_size, db_size + ); + + println!( + "backup size: {:?}, old db size: {:?}, new db size: {:?}", + backup_dir.path(), + old_db_dir.path(), + new_db_dir.path() + ); + rt.shutdown_timeout(Duration::from_secs(1)); + } + + fn dir_size>(path: P) -> u64 { + let mut size = 0; + + for entry in fs::read_dir(path).unwrap() { + let entry = entry.unwrap(); + let metadata = entry.metadata().unwrap(); + + if metadata.is_dir() { + size += dir_size(entry.path()); + } else { + size += metadata.len(); + } + } + + size + } } diff --git a/storage/jellyfish-merkle/src/restore/mod.rs b/storage/jellyfish-merkle/src/restore/mod.rs index cdfd8a68285e7..f5e7a81485b0f 100644 --- a/storage/jellyfish-merkle/src/restore/mod.rs +++ b/storage/jellyfish-merkle/src/restore/mod.rs @@ -10,7 +10,7 @@ use crate::{ get_child_and_sibling_half_start, Child, Children, InternalNode, LeafNode, Node, NodeKey, NodeType, 
}, - NibbleExt, TreeReader, TreeWriter, IO_POOL, ROOT_NIBBLE_HEIGHT, + NibbleExt, TreeReader, TreeWriter, ROOT_NIBBLE_HEIGHT, }; use anyhow::{ensure, Result}; use aptos_crypto::{ @@ -27,6 +27,7 @@ use aptos_types::{ transaction::Version, }; use itertools::Itertools; +use once_cell::sync::Lazy; use std::{ cmp::Eq, collections::HashMap, @@ -36,6 +37,14 @@ use std::{ }, }; +static IO_POOL: Lazy = Lazy::new(|| { + rayon::ThreadPoolBuilder::new() + .num_threads(32) + .thread_name(|index| format!("jmt_batch_{}", index)) + .build() + .unwrap() +}); + #[derive(Clone, Debug, Eq, PartialEq)] enum ChildInfo { /// This child is an internal node. The hash of the internal node is stored here if it is diff --git a/storage/state-view/src/in_memory_state_view.rs b/storage/state-view/src/in_memory_state_view.rs index 36fe5c2d9d7c3..37a64b1aaa08f 100644 --- a/storage/state-view/src/in_memory_state_view.rs +++ b/storage/state-view/src/in_memory_state_view.rs @@ -29,10 +29,6 @@ impl TStateView for InMemoryStateView { Ok(self.state_data.get(state_key).cloned()) } - fn is_genesis(&self) -> bool { - unimplemented!("is_genesis is not implemented for InMemoryStateView") - } - fn get_usage(&self) -> Result { Ok(StateStorageUsage::new_untracked()) } diff --git a/storage/state-view/src/lib.rs b/storage/state-view/src/lib.rs index f52017ef34cf9..407fce8ca11ae 100644 --- a/storage/state-view/src/lib.rs +++ b/storage/state-view/src/lib.rs @@ -36,6 +36,14 @@ pub trait TStateView { StateViewId::Miscellaneous } + /// Tries to interpret the state value as u128. + fn get_state_value_u128(&self, state_key: &Self::Key) -> Result> { + match self.get_state_value_bytes(state_key)? { + Some(bytes) => Ok(Some(bcs::from_bytes(&bytes)?)), + None => Ok(None), + } + } + /// Gets the state value bytes for a given state key. fn get_state_value_bytes(&self, state_key: &Self::Key) -> Result>> { let val_opt = self.get_state_value(state_key)?; @@ -45,10 +53,6 @@ pub trait TStateView { /// Gets the state value for a given state key. fn get_state_value(&self, state_key: &Self::Key) -> Result>; - /// VM needs this method to know whether the current state view is for genesis state creation. - /// Currently TransactionPayload::WriteSet is only valid for genesis state creation. - fn is_genesis(&self) -> bool; - /// Get state storage usage info at epoch ending. 
fn get_usage(&self) -> Result; @@ -88,10 +92,6 @@ where self.deref().get_state_value(state_key) } - fn is_genesis(&self) -> bool { - self.deref().is_genesis() - } - fn get_usage(&self) -> Result { self.deref().get_usage() } diff --git a/storage/storage-interface/src/async_proof_fetcher.rs b/storage/storage-interface/src/async_proof_fetcher.rs index b20f1b53ce110..b3b8704213a50 100644 --- a/storage/storage-interface/src/async_proof_fetcher.rs +++ b/storage/storage-interface/src/async_proof_fetcher.rs @@ -25,7 +25,7 @@ use std::{ static IO_POOL: Lazy = Lazy::new(|| { rayon::ThreadPoolBuilder::new() .num_threads(AptosVM::get_num_proof_reading_threads()) - .thread_name(|index| format!("proof_reader_{}", index)) + .thread_name(|index| format!("proof_reader-{}", index)) .build() .unwrap() }); diff --git a/storage/storage-interface/src/cached_state_view.rs b/storage/storage-interface/src/cached_state_view.rs index 48ce53993092f..731f9138b68de 100644 --- a/storage/storage-interface/src/cached_state_view.rs +++ b/storage/storage-interface/src/cached_state_view.rs @@ -222,10 +222,6 @@ impl TStateView for CachedStateView { Ok(value_opt.clone()) } - fn is_genesis(&self) -> bool { - self.snapshot.is_none() - } - fn get_usage(&self) -> Result { Ok(self.speculative_state.usage()) } @@ -267,10 +263,6 @@ impl TStateView for CachedDbStateView { Ok(new_value.clone()) } - fn is_genesis(&self) -> bool { - self.db_state_view.is_genesis() - } - fn get_usage(&self) -> Result { self.db_state_view.get_usage() } diff --git a/storage/storage-interface/src/state_view.rs b/storage/storage-interface/src/state_view.rs index d29d593cdd285..e88b9f7254faf 100644 --- a/storage/storage-interface/src/state_view.rs +++ b/storage/storage-interface/src/state_view.rs @@ -35,10 +35,6 @@ impl TStateView for DbStateView { self.get(state_key) } - fn is_genesis(&self) -> bool { - self.version.is_none() - } - fn get_usage(&self) -> Result { self.db.get_state_storage_usage(self.version) } diff --git a/testsuite/forge-cli/src/main.rs b/testsuite/forge-cli/src/main.rs index 5a6c2ef9e3151..e1d2ee6db1539 100644 --- a/testsuite/forge-cli/src/main.rs +++ b/testsuite/forge-cli/src/main.rs @@ -527,6 +527,7 @@ fn single_test_suite( // Rest of the tests: "realistic_env_max_load_large" => realistic_env_max_load_test(duration, test_cmd, 20, 10), "realistic_env_load_sweep" => realistic_env_load_sweep_test(), + "realistic_env_workload_sweep" => realistic_env_workload_sweep_test(), "realistic_env_graceful_overload" => realistic_env_graceful_overload(), "realistic_network_tuned_for_throughput" => realistic_network_tuned_for_throughput_test(), "epoch_changer_performance" => epoch_changer_performance(), @@ -797,31 +798,19 @@ fn consensus_stress_test() -> ForgeConfig { }) } -fn realistic_env_load_sweep_test() -> ForgeConfig { +fn realistic_env_sweep_wrap( + num_validators: usize, + num_fullnodes: usize, + test: LoadVsPerfBenchmark, +) -> ForgeConfig { ForgeConfig::default() - .with_initial_validator_count(NonZeroUsize::new(20).unwrap()) - .with_initial_fullnode_count(10) - .add_network_test(wrap_with_realistic_env(LoadVsPerfBenchmark { - test: Box::new(PerformanceBenchmark), - workloads: Workloads::TPS(&[10, 100, 1000, 3000, 5000]), - criteria: [ - (9, 1.5, 3., 4.), - (95, 1.5, 3., 4.), - (950, 2., 3., 4.), - (2750, 2.5, 3.5, 4.5), - (4600, 3., 4., 5.), - ] - .into_iter() - .map(|(min_tps, max_lat_p50, max_lat_p90, max_lat_p99)| { - SuccessCriteria::new(min_tps) - .add_max_expired_tps(0) - .add_max_failed_submission_tps(0) - 
.add_latency_threshold(max_lat_p50, LatencyType::P50) - .add_latency_threshold(max_lat_p90, LatencyType::P90) - .add_latency_threshold(max_lat_p99, LatencyType::P99) - }) - .collect(), + .with_initial_validator_count(NonZeroUsize::new(num_validators).unwrap()) + .with_initial_fullnode_count(num_fullnodes) + .with_node_helm_config_fn(Arc::new(move |helm_values| { + helm_values["validator"]["config"]["execution"] + ["processed_transactions_detailed_counters"] = true.into(); })) + .add_network_test(wrap_with_realistic_env(test)) // Test inherits the main EmitJobRequest, so update here for more precise latency measurements .with_emit_job( EmitJobRequest::default().latency_polling_interval(Duration::from_millis(100)), @@ -841,6 +830,98 @@ fn realistic_env_load_sweep_test() -> ForgeConfig { ) } +fn realistic_env_load_sweep_test() -> ForgeConfig { + realistic_env_sweep_wrap(20, 10, LoadVsPerfBenchmark { + test: Box::new(PerformanceBenchmark), + workloads: Workloads::TPS(&[10, 100, 1000, 3000, 5000]), + criteria: [ + (9, 1.5, 3., 4.), + (95, 1.5, 3., 4.), + (950, 2., 3., 4.), + (2750, 2.5, 3.5, 4.5), + (4600, 3., 4., 5.), + ] + .into_iter() + .map(|(min_tps, max_lat_p50, max_lat_p90, max_lat_p99)| { + SuccessCriteria::new(min_tps) + .add_max_expired_tps(0) + .add_max_failed_submission_tps(0) + .add_latency_threshold(max_lat_p50, LatencyType::P50) + .add_latency_threshold(max_lat_p90, LatencyType::P90) + .add_latency_threshold(max_lat_p99, LatencyType::P99) + }) + .collect(), + }) +} + +fn realistic_env_workload_sweep_test() -> ForgeConfig { + realistic_env_sweep_wrap(7, 3, LoadVsPerfBenchmark { + test: Box::new(PerformanceBenchmark), + workloads: Workloads::TRANSACTIONS(&[ + TransactionWorkload { + transaction_type: TransactionTypeArg::CoinTransfer, + num_modules: 1, + unique_senders: false, + mempool_backlog: 20000, + }, + TransactionWorkload { + transaction_type: TransactionTypeArg::NoOp, + num_modules: 100, + unique_senders: false, + mempool_backlog: 20000, + }, + TransactionWorkload { + transaction_type: TransactionTypeArg::ModifyGlobalResource, + num_modules: 1, + unique_senders: true, + mempool_backlog: 20000, + }, + TransactionWorkload { + transaction_type: TransactionTypeArg::TokenV2AmbassadorMint, + num_modules: 1, + unique_senders: true, + mempool_backlog: 10000, + }, + // transactions get rejected, to fix. 
+ // TransactionWorkload { + // transaction_type: TransactionTypeArg::PublishPackage, + // num_modules: 1, + // unique_senders: true, + // mempool_backlog: 1000, + // }, + ]), + // Investigate/improve to make latency more predictable on different workloads + criteria: [ + (3700, 0.35, 0.5, 0.8, 0.65), + (2800, 0.35, 0.5, 1.2, 1.2), + (1800, 0.35, 0.5, 1.5, 2.7), + (950, 0.35, 0.65, 1.5, 2.7), + // (150, 0.5, 1.0, 1.5, 0.65), + ] + .into_iter() + .map( + |(min_tps, batch_to_pos, pos_to_proposal, proposal_to_ordered, ordered_to_commit)| { + SuccessCriteria::new(min_tps) + .add_max_expired_tps(200) + .add_max_failed_submission_tps(200) + .add_latency_breakdown_threshold(LatencyBreakdownThreshold::new_strict(vec![ + (LatencyBreakdownSlice::QsBatchToPos, batch_to_pos), + (LatencyBreakdownSlice::QsPosToProposal, pos_to_proposal), + ( + LatencyBreakdownSlice::ConsensusProposalToOrdered, + proposal_to_ordered, + ), + ( + LatencyBreakdownSlice::ConsensusOrderedToCommit, + ordered_to_commit, + ), + ])) + }, + ) + .collect(), + }) +} + fn load_vs_perf_benchmark() -> ForgeConfig { ForgeConfig::default() .with_initial_validator_count(NonZeroUsize::new(20).unwrap()) @@ -875,9 +956,6 @@ fn workload_vs_perf_benchmark() -> ForgeConfig { helm_values["validator"]["config"]["execution"] ["processed_transactions_detailed_counters"] = true.into(); })) - // .with_emit_job(EmitJobRequest::default().mode(EmitJobMode::MaxLoad { - // mempool_backlog: 10000, - // })) .add_network_test(LoadVsPerfBenchmark { test: Box::new(PerformanceBenchmark), workloads: Workloads::TRANSACTIONS(&[ @@ -885,41 +963,49 @@ fn workload_vs_perf_benchmark() -> ForgeConfig { transaction_type: TransactionTypeArg::NoOp, num_modules: 1, unique_senders: false, + mempool_backlog: 20000, }, TransactionWorkload { transaction_type: TransactionTypeArg::NoOp, num_modules: 1, unique_senders: true, + mempool_backlog: 20000, }, TransactionWorkload { transaction_type: TransactionTypeArg::NoOp, num_modules: 1000, unique_senders: false, + mempool_backlog: 20000, }, TransactionWorkload { transaction_type: TransactionTypeArg::CoinTransfer, num_modules: 1, unique_senders: true, + mempool_backlog: 20000, }, TransactionWorkload { transaction_type: TransactionTypeArg::CoinTransfer, num_modules: 1, unique_senders: true, + mempool_backlog: 20000, }, TransactionWorkload { transaction_type: TransactionTypeArg::AccountResource32B, num_modules: 1, unique_senders: true, + mempool_backlog: 20000, }, TransactionWorkload { transaction_type: TransactionTypeArg::AccountResource1KB, num_modules: 1, unique_senders: true, + mempool_backlog: 20000, }, TransactionWorkload { transaction_type: TransactionTypeArg::PublishPackage, num_modules: 1, unique_senders: true, + mempool_backlog: 20000, }, ]), criteria: Vec::new(), @@ -1533,9 +1619,15 @@ fn realistic_env_max_load_test( .add_latency_threshold(4.5, LatencyType::P90) .add_latency_breakdown_threshold(LatencyBreakdownThreshold::new_strict(vec![ (LatencyBreakdownSlice::QsBatchToPos, 0.35), - (LatencyBreakdownSlice::QsPosToProposal, 0.5), + ( + LatencyBreakdownSlice::QsPosToProposal, + if ha_proxy { 0.6 } else { 0.5 }, + ), (LatencyBreakdownSlice::ConsensusProposalToOrdered, 0.8), - (LatencyBreakdownSlice::ConsensusOrderedToCommit, 0.65), + ( + LatencyBreakdownSlice::ConsensusOrderedToCommit, + if ha_proxy { 1.2 } else { 0.65 }, + ), ])) .add_chain_progress(StateProgressThreshold { max_no_progress_secs: 10.0, diff --git a/testsuite/forge/src/success_criteria.rs b/testsuite/forge/src/success_criteria.rs index 
5856ee21c905d..88fd7c40fc34e 100644 --- a/testsuite/forge/src/success_criteria.rs +++ b/testsuite/forge/src/success_criteria.rs @@ -114,10 +114,17 @@ impl LatencyBreakdownThreshold { } } - pub fn ensure_threshold(&self, metrics: &LatencyBreakdown) -> anyhow::Result<()> { + pub fn ensure_threshold( + &self, + metrics: &LatencyBreakdown, + traffic_name_addition: &String, + ) -> anyhow::Result<()> { for (slice, threshold) in &self.thresholds { let samples = metrics.get_samples(slice); - threshold.ensure_metrics_threshold(&format!("{:?}", slice), samples.get())?; + threshold.ensure_metrics_threshold( + &format!("{:?}{}", slice, traffic_name_addition), + samples.get(), + )?; } Ok(()) } @@ -220,7 +227,8 @@ impl SuccessCriteriaChecker { &traffic_name_addition, )?; if let Some(latency_breakdown_thresholds) = &success_criteria.latency_breakdown_thresholds { - latency_breakdown_thresholds.ensure_threshold(latency_breakdown.unwrap())?; + latency_breakdown_thresholds + .ensure_threshold(latency_breakdown.unwrap(), &traffic_name_addition)?; } Ok(()) } @@ -244,22 +252,24 @@ impl SuccessCriteriaChecker { ); let stats_rate = stats.rate(); + let no_traffic_name_addition = "".to_string(); Self::check_throughput( success_criteria.min_avg_tps, success_criteria.max_expired_tps, success_criteria.max_failed_submission_tps, &stats_rate, - &"".to_string(), + &no_traffic_name_addition, )?; Self::check_latency( &success_criteria.latency_thresholds, &stats_rate, - &"".to_string(), + &no_traffic_name_addition, )?; if let Some(latency_breakdown_thresholds) = &success_criteria.latency_breakdown_thresholds { - latency_breakdown_thresholds.ensure_threshold(latency_breakdown)?; + latency_breakdown_thresholds + .ensure_threshold(latency_breakdown, &no_traffic_name_addition)?; } if let Some(timeout) = success_criteria.wait_for_all_nodes_to_catchup { diff --git a/testsuite/generate-format/tests/staged/api.yaml b/testsuite/generate-format/tests/staged/api.yaml index 1073420d1a229..9da6f228b9ced 100644 --- a/testsuite/generate-format/tests/staged/api.yaml +++ b/testsuite/generate-format/tests/staged/api.yaml @@ -69,10 +69,14 @@ CoinStoreResource: ContractEvent: ENUM: 0: - V0: + V1: + NEWTYPE: + TYPENAME: ContractEventV1 + 1: + V2: NEWTYPE: - TYPENAME: ContractEventV0 -ContractEventV0: + TYPENAME: ContractEventV2 +ContractEventV1: STRUCT: - key: TYPENAME: EventKey @@ -80,6 +84,11 @@ ContractEventV0: - type_tag: TYPENAME: TypeTag - event_data: BYTES +ContractEventV2: + STRUCT: + - type_tag: + TYPENAME: TypeTag + - event_data: BYTES DepositEvent: STRUCT: - amount: U64 @@ -234,8 +243,6 @@ StateValueMetadata: 0: V0: STRUCT: - - payer: - TYPENAME: AccountAddress - deposit: U64 - creation_time_usecs: U64 StructTag: diff --git a/testsuite/generate-format/tests/staged/aptos.yaml b/testsuite/generate-format/tests/staged/aptos.yaml index bf44c8bcc8455..0e6a8607e531f 100644 --- a/testsuite/generate-format/tests/staged/aptos.yaml +++ b/testsuite/generate-format/tests/staged/aptos.yaml @@ -49,10 +49,14 @@ ChangeSet: ContractEvent: ENUM: 0: - V0: + V1: + NEWTYPE: + TYPENAME: ContractEventV1 + 1: + V2: NEWTYPE: - TYPENAME: ContractEventV0 -ContractEventV0: + TYPENAME: ContractEventV2 +ContractEventV1: STRUCT: - key: TYPENAME: EventKey @@ -60,6 +64,11 @@ ContractEventV0: - type_tag: TYPENAME: TypeTag - event_data: BYTES +ContractEventV2: + STRUCT: + - type_tag: + TYPENAME: TypeTag + - event_data: BYTES Ed25519PublicKey: NEWTYPESTRUCT: BYTES Ed25519Signature: @@ -166,8 +175,6 @@ StateValueMetadata: 0: V0: STRUCT: - - payer: - TYPENAME: 
AccountAddress - deposit: U64 - creation_time_usecs: U64 StructTag: diff --git a/testsuite/generate-format/tests/staged/consensus.yaml b/testsuite/generate-format/tests/staged/consensus.yaml index 888875c6567ff..57de4e6b205f4 100644 --- a/testsuite/generate-format/tests/staged/consensus.yaml +++ b/testsuite/generate-format/tests/staged/consensus.yaml @@ -259,10 +259,14 @@ ConsensusMsg: ContractEvent: ENUM: 0: - V0: + V1: + NEWTYPE: + TYPENAME: ContractEventV1 + 1: + V2: NEWTYPE: - TYPENAME: ContractEventV0 -ContractEventV0: + TYPENAME: ContractEventV2 +ContractEventV1: STRUCT: - key: TYPENAME: EventKey @@ -270,6 +274,11 @@ ContractEventV0: - type_tag: TYPENAME: TypeTag - event_data: BYTES +ContractEventV2: + STRUCT: + - type_tag: + TYPENAME: TypeTag + - event_data: BYTES DAGNetworkMessage: STRUCT: - epoch: U64 @@ -469,8 +478,6 @@ StateValueMetadata: 0: V0: STRUCT: - - payer: - TYPENAME: AccountAddress - deposit: U64 - creation_time_usecs: U64 StructTag: diff --git a/testsuite/pangu.py b/testsuite/pangu.py index 532f34fcce17d..3092922e35325 100644 --- a/testsuite/pangu.py +++ b/testsuite/pangu.py @@ -47,6 +47,7 @@ def testnet(): testnet.add_command(testnet_commands.healthcheck) testnet.add_command(testnet_commands.update) testnet.add_command(testnet_commands.restart) +testnet.add_command(testnet_commands.transaction_emitter) @cli.group() diff --git a/testsuite/pangu_lib/README.md b/testsuite/pangu_lib/README.md index b09365c7f3d99..75b7a517017ac 100644 --- a/testsuite/pangu_lib/README.md +++ b/testsuite/pangu_lib/README.md @@ -6,9 +6,221 @@ Pangu is a testnet creation and management CLI, which deploys on top of existing infrastructure. -## Dev Setup +## What is Pangu CLI? + +Ever had to wait for the Aptos devnet/testnet releases to test a new feature? Or, create a PR to launch testnets through Forge? Well, these will be a thing of the past with Pangu. + +Pangu is a modular, customizable, and next-gen Aptos testnet creation-management CLI tool written in Python. Pangu allows you to create, and manage testnets on demand, and blazingly fast 🚀🚀🚀 + +Pangu is inherently faster than its predecessors (Forge testnet creation) because: + +- Pangu does not use Helm +- Pangu introduces new optimizations using concurrency/parallelism + +Also, Pangu’s source code is significantly easier to read because it is written in strictly-typed Python 3.0, and in a modular manner. + +## Vision + +Pangu is meant to be used by researchers/devs that want to create a testnet quickly. But, Pangu also aims to replace how testnets are created and deployed in Forge. + +The Forge integrations are outside of the scope of the initial iteration built by @Olsen Budanur , but (🤞) Forge will eventually call the Pangu CLI to do testnet creation, and management. + +## Main Functionalities + +Here is a brief overview of all the commands offered by the Pangu CLI. 
**For more information about the options + arguments, use pangu [testnet/node] [command] -h.** + +### **Testnet Functions** + +pangu testnet create [OPTIONS]: Creates a testnet with the configurations given in the options in the connected cluster + +pangu testnet delete [TESTNET_NAME]: Deletes a testnet in the connected cluster + +pangu testnet get: Displays all active testnets in the connected cluster + +pangu testnet get [TESTNET_NAME]: Displays the nodes of a singular testnet in the connected cluster + +pangu testnet healthcheck [TESTNET_NAME]: Healthchecks a singular testnet in the connected cluster (WIP) + +pangu testnet restart [TESTNET_NAME]: Restarts all nodes in a singular testnet in the connected cluster + +pangu testnet update [TESTNET_NAME] [OPTIONS]: Updates all nodes in a singular testnet in the connected cluster using the options + +pangu testnet transaction-emitter [TESTNET_NAME] [OPTIONS]: Creates a transaction emitter for a testnet by name. + +### **Node Functions** + +pangu node stop [TESTNET_NAME] [NODE_NAME]: Stops a node in a singular testnet in the connected cluster + +pangu node start [TESTNET_NAME] [NODE_NAME]: Starts a node in a singular testnet in the connected cluster + +pangu node restart [TESTNET_NAME] [NODE_NAME]: Restarts a node in a singular testnet in the connected cluster + +pangu node profile [TESTNET_NAME] [NODE_NAME]: Shows you the node profiling tools created by @Yunus Ozer + +pangu node wipe [TESTNET_NAME] [NODE_NAME]: Wipes a node in a singular testnet in the connected cluster (WIP) + +pangu node add-pfn [TESTNET_NAME] [NODE_NAME] [OPTIONS]: Adds a PFN to a singular testnet in the connected cluster using the options (WIP) + +## More Info About the “Create” Command + +pangu testnet create [OPTIONS]: Creates a testnet with the configurations given in the options in the connected cluster + +CREATE OPTIONS: + +1. **`--pangu-node-configs-path`**: + - The Pangu node configs (yaml) + - Default: The default node config in aptos-core/testsuite/pangu_lib/template_testnet_files + - Example: **`--pangu-node-configs-path /path/to/node/configs.yaml`** +2. **`--layout-path`**: + - The path to the layout file (yaml). + - Default: The default layout in aptos-core/testsuite/pangu_lib/template_testnet_files + - Example: **`--layout-path /path/to/layout.yaml`** +3. **`--framework-path`**: + - The compiled Move framework (head.mrb or framework.mrb) file. + - Default: **`util.TEMPLATE_DIRECTORY/framework.mrb`** + - Example: **`--framework-path /path/to/framework.mrb`** +4. **`--num-of-validators`**: + - The number of generic validators you would like to have in the testnet. This option will be overwritten if you are passing custom Pangu node configs. + - Default: **`10`** + - Example: **`--num-of-validators 20`** +5. **`--workspace`**: + - The path to the folder in which you would like the genesis files to be generated (default is a temp folder). + - Example: **`--workspace /path/to/workspace`** +6. **`--dry-run`**: + - Pass **`true`** if you would like to run genesis without deploying on Kubernetes (K8S). All Kubernetes YAML files will be dumped to the workspace. If you don’t provide a workspace, all the YAML files will be dumped to a tmp folder. + - Default: **`False`** + - Example: **`--dry-run true`** +7. **`--aptos-cli-path`**: + - The path to the Aptos CLI if it is not in your $PATH variable. + - Default: **`aptos`** + - Example: **`--aptos-cli-path /path/to/aptos`** +8. **`--name`**: + - Name for the testnet.
The default is a randomly generated name. The name will automatically have “pangu-” appended to it. + - Example: **`--name MyTestnet`** + + +## Pangu Node Config (Customizability) + +[Pangu config template](https://github.com/aptos-labs/aptos-core/blob/main/testsuite/pangu_lib/template_testnet_files/pangu_node_config.yaml) + +```yaml +blueprints: + nodebp: # Must be all lowercase and distinct + validator_config_path: "" # Should provide an absolute path. Can leave empty for the default + validator_image: "" # Can leave empty for the default + validator_storage_class_name: "" # Can leave empty for the default + vfn_config_path: "" # Should provide an absolute path. Use empty str if create_vfns: false. # Can leave empty for the default + vfn_image: "" # Can leave empty for the default + vfn_storage_class_name: "" # Can leave empty for the default + nodes_persistent_volume_claim_size: "" # Can leave empty for the default + create_vfns: true # CANNOT BE MODIFIED AFTER DEPLOYMENT + stake_amount: 100000000000000 # CANNOT BE MODIFIED AFTER DEPLOYMENT + count: -1 # CANNOT BE MODIFIED AFTER DEPLOYMENT... This is the count of validators. In the template, the count doesn't matter as it gets overridden by either the default (10), the user's --num-of-validators, or the user's custom pangu node config. + # nodebpexample1: + # validator_config_path: "" + # validator_image: "" + # validator_storage_class_name: "" # Can leave empty for the default + # vfn_config_path: "" + # vfn_image: "" + # nodes_persistent_volume_claim_size: "" # Can leave empty for the default + # create_vfns: false # + # stake_amount: 100000000000000 + # count: -1 + # nodebpexample2: + # validator_config_path: "" + # validator_image: "" + # validator_storage_class_name: "" # Can leave empty for the default + # vfn_config_path: "" + # nodes_persistent_volume_claim_size: "" # Can leave empty for the default + # vfn_image: "" + # vfn_storage_class_name: "" # Can leave empty for the default + # create_vfns: false # + # stake_amount: 100000000000000 + # count: -1 +``` + +Pangu allows you to use a default template to create n nodes without much customization. However, if you want to create a testnet with varying node configurations and pod images, this is also possible through a custom pangu config. + +To create a testnet with a custom topology, create a new pangu config file and pass it with the option "--pangu-node-configs-path". + +- [**See the default config here**](https://github.com/aptos-labs/aptos-core/blob/main/testsuite/pangu_lib/template_testnet_files/pangu_node_config.yaml) + - The config yaml should start with “blueprints:” + - A blueprint describes the validator config, the validator image, the vfn config, the vfn image, the stake_amount for the validator, and the number of validator/vfn pairs you would like to create with this specific blueprint. + - The name of the blueprint will dictate the names of the pods (validators, vfns) created using it. + - A validator created using the bp “nodebp” will be named nodebp-node-{i}-validator (i being the index of the validator). Likewise, a vfn created using the bp “nodebp” will be named nodebp-node-{i}-vfn. + - You can (and for most cases, should) have multiple blueprints. +- The pangu configs are not only used for creating testnets, but can also be used to update one using the pangu testnet update command. You can change the image and the node configs of a testnet that is already started by modifying your pangu node configs and using the testnet update command; see the example below.
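+ +For example, here is a minimal create-then-update flow. This is only a sketch: the config path and testnet name are placeholders, and the flags are the `--pangu-node-configs-path` / `--name` create options listed above. + +```bash +# Create a testnet from a custom Pangu node config (the blueprints define the topology). +pangu testnet create --pangu-node-configs-path /path/to/my_pangu_node_config.yaml --name mytestnet + +# List the active testnets and note the full testnet name. +pangu testnet get + +# Edit the same config (e.g. point at a new validator image), then roll it out to the running testnet. +pangu testnet update [TESTNET_NAME] --pangu-node-configs-path /path/to/my_pangu_node_config.yaml +```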
+ +## How to Use Pangu + +**1-** Have aptos-core installed locally, and navigate to the testsuite directory. + +**2-** The entry point for all Python operations is [`poetry`](https://python-poetry.org/): + +- Install poetry: [https://python-poetry.org/docs/#installation](https://python-poetry.org/docs/#installation) +- Install the poetry deps: `poetry install` + +**3-** Set up the pangu alias: + +```bash +alias pangu="poetry run python pangu.py" +``` + +**4-** Have a K8s environment set up. For testing purposes, I suggest you use KinD. [Here is a script that can be used to set up KinD.](https://github.com/aptos-labs/internal-ops/blob/main/docker/kind/start-kind.sh) + +**5-** Use the “pangu -h”, “pangu node -h”, and “pangu testnet -h” commands to get more info about the Pangu commands. + +## Codebase + +Pangu lives in aptos-core/testsuite. Tips for navigating the codebase: + +- [**aptos-core/testsuite/pangu.py**](https://github.com/aptos-labs/aptos-core/blob/main/testsuite/pangu.py) + - This is the entry point to the Pangu CLI. Use `poetry run python pangu.py` to run it. +- [**aptos-core/testsuite/test_framework**](https://github.com/aptos-labs/aptos-core/tree/main/testsuite/test_framework) + - Includes the system abstractions for testing. + - The Kubernetes abstraction might need to be updated to add new Kubernetes features. +- [**aptos-core/testsuite/pangu_lib/node_commands**](https://github.com/aptos-labs/aptos-core/tree/main/testsuite/pangu_lib/node_commands) + - Includes the commands for the pangu node {COMMAND} commands. + - Each command has its own .py file; these are aggregated in the commands.py file and exported to pangu.py. +- [**aptos-core/testsuite/pangu_lib/testnet_commands**](https://github.com/aptos-labs/aptos-core/tree/main/testsuite/pangu_lib/testnet_commands) + - Includes the commands for the pangu testnet {COMMAND} commands. + - Each command has its own .py file; these are aggregated in the commands.py file and exported to pangu.py. +- [**aptos-core/testsuite/pangu_lib/tests**](https://github.com/aptos-labs/aptos-core/tree/main/testsuite/pangu_lib/tests) + - Includes the unit tests. +- [**aptos-core/testsuite/pangu-sdk**](https://github.com/aptos-labs/aptos-core/tree/main/testsuite/pangu-sdk) + - The Pangu Rust SDK is a light Rust wrapper around the Pangu CLI. It allows Rust code to run Pangu commands by passing structs, without having to generate the Pangu config YAML files. It is not feature complete, but should be a good starting point for the Pangu-Forge integrations. + +## Metrics + +@Olsen Budanur tested Pangu’s performance by creating testnets of varying sizes in a standard GKE cluster. The table below shows how long it took Pangu to run genesis + apply all the k8s resources. + +Unlike Forge, Pangu was run from a different cluster than the one the testnets were deployed to. Thus, it was disadvantaged in that regard during testing.
+ +| Run | 4 Vals | 7 Vals + 5 VFNs | 100 Vals + 100 VFNs | 100 Vals + 0 VFNs | +| --- | --- | --- | --- | --- | +| 1 | 5 s | 9 s | 111 s | 65 s | +| 2 | 5 s | 12 s | 116 s | 65 s | +| 3 | 6 s | 10 s | 112 s | 65 s | +| 4 | 6 s | 10 s | 111 s | 66 s | +| 5 | 5 s | 10 s | 113 s | 63 s | +| 6 | 5 s | 10 s | x | x | +| 7 | 5 s | 10 s | x | x | +| 8 | 6 s | 13 s | x | x | +| 9 | 6 s | 12 s | x | x | +| 10 | 5 s | 12 s | x | x | +| AVG | 5.4 s | 10.8 s | 112.6 s | 64.8 s | +| Forge AVG | ~121 s | ~148 s | x | x | +| Diff | Pangu ~22x Faster | Pangu ~14x Faster | x | x | +| Savings | *see below | *see below | x | x | + +## Important Notes + +- Pangu **DOES NOT** provision infrastructure. Being connected to a K8s cluster is a prerequisite to using Pangu. It works with GKE node auto provisioning. +- If you are getting cryptic errors, uncomment the line below “UNCOMMENT FOR MORE VERBOSE ERROR MESSAGES” in pangu.py for more error information. All exceptions are routed to this code block. + - You can also set all the stream_output flags in create_testnet.py to True to get even more logs. +- You should not rely too much on the default move framework mrb; compile a new version often. An update to the move framework can break Pangu, and has in the past. -Since this repo has a few separate stacks, setup can be split into different steps: ### Python diff --git a/testsuite/pangu_lib/testnet_commands/commands.py b/testsuite/pangu_lib/testnet_commands/commands.py index 6f0134b597750..7530536dfa960 100644 --- a/testsuite/pangu_lib/testnet_commands/commands.py +++ b/testsuite/pangu_lib/testnet_commands/commands.py @@ -5,10 +5,11 @@ from .healthcheck import healthcheck_main from .update_nodes import update_nodes_main from .restart_nodes import restart_nodes_main +from .transaction_emitter import transaction_emitter_main from test_framework.shell import LocalShell from test_framework.filesystem import LocalFilesystem from test_framework.kubernetes import LiveKubernetes -from typing import Optional +from typing import Optional, List import random import string import os @@ -192,3 +193,34 @@ def update(testnet_name: str, pangu_node_configs_path: str): pangu_node_configs_path, SystemContext(LocalShell(), LocalFilesystem(), LiveKubernetes()), ) + + +@click.command( + help="Create a transaction emitter for a testnet by name.", + context_settings=dict(ignore_unknown_options=True), +) +@click.argument("testnet_name") +@click.option( + "--dry-run", + default=False, + help="Pass in true if you would like to run genesis without deploying on K8S. All k8s YAML files will be dumped to the workspace", +) +@click.option("--workspace", default="/tmp", help="Pass the path to the workspace.") +@click.argument("args", nargs=-1, required=True) +def transaction_emitter( + testnet_name: str, dry_run: bool, workspace: str, args: List[str] +): + """Create a transaction emitter for a testnet by name. + + Args: + testnet_name (str): the testnet to add a transaction emitter to + dry_run (bool): whether to deploy to kubernetes, or save the deployment instructions to the workspace + workspace (str): path to the folder you would like the genesis files to be generated (default is a temp folder).
+ """ + transaction_emitter_main( + testnet_name, + dry_run, + workspace, + args, + system_context=SystemContext(LocalShell(), LocalFilesystem(), LiveKubernetes()), + ) diff --git a/testsuite/pangu_lib/testnet_commands/get_testnet.py b/testsuite/pangu_lib/testnet_commands/get_testnet.py index 87fd60d251944..50237a64901a3 100644 --- a/testsuite/pangu_lib/testnet_commands/get_testnet.py +++ b/testsuite/pangu_lib/testnet_commands/get_testnet.py @@ -8,6 +8,7 @@ import json import sys from pangu_lib.util import NodeType +from pangu_lib.util import TX_EMITTER_TYPE class PanguTestnet: @@ -177,6 +178,8 @@ def get_singular_testnet(testnet_name: str, kubernetes: Kubernetes) -> PanguTest pangu_testnet.num_validator_fullnodes += 1 elif type == NodeType.PFN.value: pangu_testnet.num_public_fullnodes += 1 + elif type == TX_EMITTER_TYPE: + pass else: raise Exception(f"Unknown type: {type}") pangu_testnet.node_statefulsets = sts_objects @@ -195,6 +198,8 @@ def get_singular_testnet(testnet_name: str, kubernetes: Kubernetes) -> PanguTest pangu_testnet.num_validator_fullnodes_active += 1 elif type == NodeType.PFN.value: pangu_testnet.num_public_fullnodes_active += 1 + elif type == TX_EMITTER_TYPE: + pass else: raise Exception(f"Unknown type: {type}") pangu_testnet.node_pods = pod_objects diff --git a/testsuite/pangu_lib/testnet_commands/transaction_emitter.py b/testsuite/pangu_lib/testnet_commands/transaction_emitter.py new file mode 100644 index 0000000000000..e6a9e2f55d367 --- /dev/null +++ b/testsuite/pangu_lib/testnet_commands/transaction_emitter.py @@ -0,0 +1,120 @@ +from .create_testnet import SystemContext +import pangu_lib.util as util +from kubernetes import client +from test_framework.logging import log +from typing import List +import pangu_lib.util as util + +import random +import string +import time + + +def transaction_emitter_main( + testnet_name: str, + dry_run: bool, + workspace: str, + args: List[str], + system_context: SystemContext, + timeout: int = 360, + ask_for_delete: bool = True, +): + # + # Create command array + command_array = ["aptos-transaction-emitter"] + command_array.extend(args) + + # + # Create pod name + random_postfix: str = "".join( + random.choices(string.ascii_lowercase + string.digits, k=8) + ) + pod_name = f"{testnet_name}-tx-emitter-{random_postfix}" + + # + # Create Pod + log.info("Creating a transaction emitter pod...") + pod: client.V1Pod = create_transaction_emitter_pod(pod_name, command_array) + + # + # If dry run, dump pod yaml and return + if dry_run: + util.kubernetes_object_to_yaml( + f"{workspace}/{pod_name}.yaml", + pod, + system_context.filesystem, + ) + log.info( + f'Transaction emitter pod yaml dumped to "{workspace}/{pod_name}.yaml"...' + ) + return + + # + # Apply pod + system_context.kubernetes.create_resource(pod, testnet_name) + log.info("Transaction emitter pod created...") + + # + # Get logs + command = ["kubectl", "logs", "-f", pod_name, "-n", testnet_name] + time_passed = 0 + while time_passed < timeout: + log.info( + f"Attempting to get logs from transaction emitter, time passed: {time_passed}..." + ) + if system_context.shell.run(command, stream_output=True).succeeded(): + log.info("Successfully got logs from transaction emitter") + break + time_passed += 5 + time.sleep(5) + + # + # Check if we timed out + if time_passed == timeout: + log.error("Failed to get logs from transaction emitter") + + # + # Ask for delete flag for running Pangu without being able to use stdin (e.g. 
in CI/Forge) + if ask_for_delete: + # + # Delete pod + user_input = input( + '-------------------------------------------------------\n- The transaction emitter logs are complete. \n- Type "delete" to delete the transaction emitter pod\n-------------------------------------------------------\n' + ) + if user_input == "delete": + system_context.kubernetes.delete_resource(pod, testnet_name) + + +def create_transaction_emitter_pod( + pods_name: str, command_array: list[str] +) -> client.V1Pod: + container: client.V1Container = client.V1Container( + name=pods_name, + image=util.DEFAULT_TRANSACTION_EMITTER_IMAGE, + env=[ + client.V1EnvVar(name="RUST_BACKTRACE", value="1"), + client.V1EnvVar(name="REUSE_ACC", value="1"), + ], + command=command_array, + resources=client.V1ResourceRequirements( + requests={"cpu": "15", "memory": "26Gi"}, # Check if too much/not enough + limits={"cpu": "15", "memory": "26Gi"}, # Check if too much/not enough + ), + ) + + pod_spec: client.V1PodSpec = client.V1PodSpec( + restart_policy="Never", + containers=[container], + ) + + pod: client.V1Pod = client.V1Pod( + api_version="v1", + kind="Pod", + metadata=client.V1ObjectMeta( + name=pods_name, + labels={"type": util.TX_EMITTER_TYPE}, + ), + spec=pod_spec, + ) + + return pod diff --git a/testsuite/pangu_lib/util.py b/testsuite/pangu_lib/util.py index 5d1024c70d25e..8990c2322fb9c 100644 --- a/testsuite/pangu_lib/util.py +++ b/testsuite/pangu_lib/util.py @@ -71,6 +71,8 @@ class SystemContext: STATE_SYNC_DB_NAME: str = "state_sync_db" DEFAULT_PERSISTENT_VOLUME_CLAIM_SIZE: str = "10Gi" +DEFAULT_TRANSACTION_EMITTER_IMAGE: str = "aptoslabs/tools:devnet" +TX_EMITTER_TYPE: str = "tx_emitter" def generate_labels( diff --git a/testsuite/single_node_performance.py b/testsuite/single_node_performance.py index b2a005490e2d6..764475fa9c8cd 100755 --- a/testsuite/single_node_performance.py +++ b/testsuite/single_node_performance.py @@ -22,25 +22,25 @@ ("coin-transfer", False, 1): (12500.0, True), ("coin-transfer", True, 1): (30300.0, True), ("account-generation", False, 1): (10500.0, True), - ("account-generation", True, 1): (26500.0, True), + # ("account-generation", True, 1): (26500.0, True), ("account-resource32-b", False, 1): (15400.0, True), ("modify-global-resource", False, 1): (3400.0, True), ("modify-global-resource", False, 10): (10100.0, True), ("publish-package", False, 1): (120.0, True), ("mix_publish_transfer", False, 1): (1400.0, False), ("batch100-transfer", False, 1): (370, True), - ("batch100-transfer", True, 1): (940, True), + # ("batch100-transfer", True, 1): (940, True), ("token-v1ft-mint-and-transfer", False, 1): (1550.0, True), ("token-v1ft-mint-and-transfer", False, 20): (7000.0, True), - ("token-v1nft-mint-and-transfer-sequential", False, 1): (1000.0, True), - ("token-v1nft-mint-and-transfer-sequential", False, 20): (5150.0, True), - ("token-v1nft-mint-and-transfer-parallel", False, 1): (1300.0, True), - ("token-v1nft-mint-and-transfer-parallel", False, 20): (5300.0, True), + # ("token-v1nft-mint-and-transfer-sequential", False, 1): (1000.0, True), + # ("token-v1nft-mint-and-transfer-sequential", False, 20): (5150.0, True), + # ("token-v1nft-mint-and-transfer-parallel", False, 1): (1300.0, True), + # ("token-v1nft-mint-and-transfer-parallel", False, 20): (5300.0, True), # ("token-v1ft-mint-and-store", False): 1000.0, # ("token-v1nft-mint-and-store-sequential", False): 1000.0, # ("token-v1nft-mint-and-store-parallel", False): 1000.0, - ("no-op2-signers", False, 1): (18000.0, True), - ("no-op5-signers", False, 
1): (18000.0, True), + # ("no-op2-signers", False, 1): (18000.0, True), + # ("no-op5-signers", False, 1): (18000.0, True), ("token-v2-ambassador-mint", False, 1): (1500.0, True), ("token-v2-ambassador-mint", False, 20): (5000.0, True), } diff --git a/testsuite/test_framework/kubernetes.py b/testsuite/test_framework/kubernetes.py index 0b6dc07da0898..8f9e5f3b635db 100644 --- a/testsuite/test_framework/kubernetes.py +++ b/testsuite/test_framework/kubernetes.py @@ -25,6 +25,12 @@ def create_resource( ) -> KubernetesResource: pass + @abstractmethod + def delete_resource( + self, kubernetes_object: KubernetesResource, namespace: str = "default" + ) -> bool: + pass + @abstractmethod def get_resources( self, type: type, namespace: str = "default" @@ -111,9 +117,104 @@ def create_resource( return core_v1_api.create_namespaced_persistent_volume_claim( namespace=namespace, body=kubernetes_object ) + elif isinstance(kubernetes_object, client.V1Pod): # type: ignore + return core_v1_api.create_namespaced_pod( + namespace=namespace, body=kubernetes_object + ) else: raise NotImplemented("This resource type is not implemented!") + def delete_resource( + self, kubernetes_object: KubernetesResource, namespace: str = "default" + ) -> bool: + config.load_kube_config() # type:ignore + core_v1_api = client.CoreV1Api() + apps_v1_api = client.AppsV1Api() + if not kubernetes_object.metadata or not kubernetes_object.metadata.name: + raise ApiException( + status=400, + reason="Cannot delete a k8s resource without metadata or name!", + ) + self._verify_k8s_obj_name(namespace) + resource_name: str = kubernetes_object.metadata.name + self._verify_k8s_obj_name(resource_name) + if isinstance(kubernetes_object, client.V1Namespace): + try: + core_v1_api.delete_namespace( + name=namespace, body=client.V1DeleteOptions() + ) + except Exception as exception: + log.error(f'Failed deleting the namespace "{namespace}""!') + return False + elif isinstance(kubernetes_object, client.V1Service): + try: + core_v1_api.delete_namespaced_service( + name=resource_name, + namespace=namespace, + body=client.V1DeleteOptions(), + ) + except Exception as exception: + log.error(f'Failed deleting the service "{resource_name}""!') + return False + elif isinstance(kubernetes_object, client.V1StatefulSet): + try: + apps_v1_api.delete_namespaced_stateful_set( + name=resource_name, + namespace=namespace, + body=client.V1DeleteOptions(), + ) + except Exception as exception: + log.error(f'Failed deleting the statefulset "{resource_name}""!') + return False + elif isinstance(kubernetes_object, client.V1ConfigMap): + try: + core_v1_api.delete_namespaced_config_map( + name=resource_name, + namespace=namespace, + body=client.V1DeleteOptions(), + ) + except Exception as exception: + log.error(f'Failed deleting the configmap "{resource_name}""!') + return False + elif isinstance(kubernetes_object, client.V1Secret): # type: ignore + try: + core_v1_api.delete_namespaced_secret( + name=resource_name, + namespace=namespace, + body=client.V1DeleteOptions(), + ) + except Exception as exception: + log.error(f'Failed deleting the secret "{resource_name}""!') + return False + elif isinstance(kubernetes_object, client.V1PersistentVolumeClaim): # type: ignore + try: + core_v1_api.delete_namespaced_persistent_volume_claim( + name=resource_name, + namespace=namespace, + body=client.V1DeleteOptions(), + ) + except Exception as exception: + log.error( + f'Failed deleting the persistent volume claim "{resource_name}""!' 
+ ) + return False + elif isinstance(kubernetes_object, client.V1Pod): # type: ignore + try: + core_v1_api.delete_namespaced_pod( + name=resource_name, + namespace=namespace, + body=client.V1DeleteOptions(), + ) + except Exception as exception: + log.error(f'Failed deleting the pod "{resource_name}""!') + return False + else: + raise NotImplemented( + "Delete operation on this resource type is not implemented!" + ) + + return True + def get_resources( self, type: type, namespace: str = "default" ) -> List[KubernetesResource]: @@ -171,6 +272,7 @@ def scale_stateful_set( raise ApiException(status=400, reason="NO STATEFULSET SPEC FOUND.") def delete_namespace(self, namespace: str, wait_deletion: bool) -> bool: + # TODO Deprecate this method, merge with delete_resource config.load_kube_config() # type:ignore core_v1_api = client.CoreV1Api() try: @@ -322,6 +424,31 @@ def create_resource( namespace, resource_name, kubernetes_object ) + def delete_resource( + self, kubernetes_object: KubernetesResource, namespace: str = "default" + ) -> bool: + if not kubernetes_object.metadata or not kubernetes_object.metadata.name: + raise ApiException( + status=400, + reason="Cannot delete a k8s resource without metadata or name!", + ) + self._verify_k8s_obj_name(namespace) + resource_name: str = kubernetes_object.metadata.name + self._verify_k8s_obj_name(resource_name) + if isinstance(kubernetes_object, client.V1Namespace): + if not resource_name in self.namespaces: + raise ApiException( + status=400, + reason=f'The namespace with the name "{resource_name}" does not exist!', + ) + self.namespaces.pop(resource_name) + self.namespaced_resource_dictionary.pop(resource_name) + return True + else: + return self._delete_resource_helper( + namespace, resource_name, kubernetes_object + ) + def get_resources( self, type: type, namespace: str = "default" ) -> List[KubernetesResource]: @@ -463,3 +590,28 @@ def _create_resource_helper( ) resources[resource_name] = resource return resource + + def _delete_resource_helper( + self, + namespace: str, + resource_name: str, + resource: KubernetesResource, + ) -> bool: + resource_type: int = hash(type(resource)) + self._check_namespace_exists(namespace) + resource_types: dict[ + int, dict[str, KubernetesResource] + ] = self.namespaced_resource_dictionary[namespace] + if not resource_type in resource_types: + resource_types[resource_type] = dict() + resources: dict[str, KubernetesResource] = resource_types[resource_type] + if not resource_name in resources: + log.error( + f'This {resource_type} named "{resource_name}" does not exist in this namespace "{namespace}"!' 
+ ) + raise ApiException( + status=409, + reason=f'The namespace with the name "{resource_name}" does not exist!', + ) + resources.pop(resource_name) + return True diff --git a/testsuite/testcases/src/load_vs_perf_benchmark.rs b/testsuite/testcases/src/load_vs_perf_benchmark.rs index a282eb70a542b..d69aef659b4da 100644 --- a/testsuite/testcases/src/load_vs_perf_benchmark.rs +++ b/testsuite/testcases/src/load_vs_perf_benchmark.rs @@ -10,10 +10,7 @@ use aptos_forge::{ }; use aptos_logger::info; use rand::SeedableRng; -use std::{ - fmt::{self, Debug, Display}, - time::Duration, -}; +use std::{fmt::Debug, time::Duration}; use tokio::runtime::Runtime; pub struct SingleRunStats { @@ -24,6 +21,7 @@ pub struct SingleRunStats { actual_duration: Duration, } +#[derive(Debug)] pub enum Workloads { TPS(&'static [usize]), TRANSACTIONS(&'static [TransactionWorkload]), @@ -37,10 +35,29 @@ impl Workloads { } } - fn name(&self, index: usize) -> String { + fn type_name(&self) -> String { + match self { + Self::TPS(_) => "Load (TPS)".to_string(), + Self::TRANSACTIONS(_) => "Workload".to_string(), + } + } + + fn phase_name(&self, index: usize, phase: usize) -> String { match self { - Self::TPS(tpss) => tpss[index].to_string(), - Self::TRANSACTIONS(workloads) => workloads[index].to_string(), + Self::TPS(tpss) => { + assert_eq!(phase, 0); + format!("{}", tpss[index]) + }, + Self::TRANSACTIONS(workloads) => format!( + "{}{}: {}", + index, + if workloads[index].is_phased() { + format!(": ph{}", phase) + } else { + "".to_string() + }, + workloads[index].phase_name(phase) + ), } } @@ -57,16 +74,23 @@ pub struct TransactionWorkload { pub transaction_type: TransactionTypeArg, pub num_modules: usize, pub unique_senders: bool, + pub mempool_backlog: usize, } impl TransactionWorkload { + fn is_phased(&self) -> bool { + self.unique_senders + } + fn configure(&self, request: EmitJobRequest) -> EmitJobRequest { let account_creation_type = TransactionTypeArg::AccountGenerationLargePool.materialize(1, false); - if self.unique_senders { - request.transaction_type(self.transaction_type.materialize(self.num_modules, false)) - } else { + let request = request.mode(EmitJobMode::MaxLoad { + mempool_backlog: self.mempool_backlog, + }); + + if self.is_phased() { let write_type = self.transaction_type.materialize(self.num_modules, true); request.transaction_mix_per_phase(vec![ // warmup @@ -76,13 +100,27 @@ impl TransactionWorkload { // cooldown vec![(write_type, 1)], ]) + } else { + request.transaction_type(self.transaction_type.materialize(self.num_modules, false)) } } -} -impl Display for TransactionWorkload { - fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { - Debug::fmt(self, f) + fn phase_name(&self, phase: usize) -> String { + format!( + "{}{}[{:.1}k]", + match (self.is_phased(), phase) { + (true, 0) => "CreateBurnerAccounts".to_string(), + (true, 1) => format!("{:?}", self.transaction_type), + (false, 0) => format!("{:?}", self.transaction_type), + _ => unreachable!(), + }, + if self.num_modules > 1 { + format!("({} modules)", self.num_modules) + } else { + "".to_string() + }, + self.mempool_backlog as f32 / 1000.0, + ) } } @@ -120,14 +158,9 @@ impl LoadVsPerfBenchmark { )?; let mut result = vec![]; - let phased = stats_by_phase.len() > 1; for (phase, phase_stats) in stats_by_phase.into_iter().enumerate() { result.push(SingleRunStats { - name: if phased { - format!("{}_phase_{}", workloads.name(index), phase) - } else { - workloads.name(index) - }, + name: workloads.phase_name(index, phase), stats: 
phase_stats.emitter_stats, latency_breakdown: phase_stats.latency_breakdown, ledger_transactions: phase_stats.ledger_transactions, @@ -163,37 +196,34 @@ impl NetworkTest for LoadVsPerfBenchmark { std::thread::sleep(buffer); } - info!("Starting for {}", self.workloads.name(index)); - results.append(&mut self.evaluate_single( - ctx, - &self.workloads, - index, - individual_duration, - )?); + info!("Starting for {:?}", self.workloads); + results.push(self.evaluate_single(ctx, &self.workloads, index, individual_duration)?); // Note: uncomment below to perform reconfig during a test // let mut aptos_info = ctx.swarm().aptos_public_info(); // runtime.block_on(aptos_info.reconfig()); - let table = to_table(&results); + let table = to_table(self.workloads.type_name(), &results); for line in table { info!("{}", line); } } - let table = to_table(&results); + let table = to_table(self.workloads.type_name(), &results); for line in table { ctx.report.report_text(line); } for (index, result) in results.iter().enumerate() { - let rate = result.stats.rate(); + // always take last phase for success criteria + let target_result = &result[result.len() - 1]; + let rate = target_result.stats.rate(); if let Some(criteria) = self.criteria.get(index) { SuccessCriteriaChecker::check_core_for_success( criteria, ctx.report, &rate, - Some(&result.latency_breakdown), - Some(result.name.clone()), + Some(&target_result.latency_breakdown), + Some(target_result.name.clone()), )?; } } @@ -201,11 +231,11 @@ impl NetworkTest for LoadVsPerfBenchmark { } } -fn to_table(results: &[SingleRunStats]) -> Vec { +fn to_table(type_name: String, results: &[Vec]) -> Vec { let mut table = Vec::new(); table.push(format!( - "{: <30} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12}", - "workload", + "{: <40} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12}", + type_name, "submitted/s", "committed/s", "expired/s", @@ -222,26 +252,28 @@ fn to_table(results: &[SingleRunStats]) -> Vec { "actual dur" )); - for result in results { - let rate = result.stats.rate(); - table.push(format!( - "{: <30} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12}", - result.name, - rate.submitted, - rate.committed, - rate.expired, - rate.failed_submission, - result.ledger_transactions / result.actual_duration.as_secs(), - rate.latency, - rate.p50_latency, - rate.p90_latency, - rate.p99_latency, - result.latency_breakdown.get_samples(&LatencyBreakdownSlice::QsBatchToPos).max_sample(), - result.latency_breakdown.get_samples(&LatencyBreakdownSlice::QsPosToProposal).max_sample(), - result.latency_breakdown.get_samples(&LatencyBreakdownSlice::ConsensusProposalToOrdered).max_sample(), - result.latency_breakdown.get_samples(&LatencyBreakdownSlice::ConsensusOrderedToCommit).max_sample(), - result.actual_duration.as_secs() - )); + for run_results in results { + for result in run_results { + let rate = result.stats.rate(); + table.push(format!( + "{: <40} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12} | {: <12.3} | {: <12.3} | {: <12.3} | {: <12.3} | {: <12}", + result.name, + rate.submitted, + rate.committed, + rate.expired, + rate.failed_submission, + result.ledger_transactions / result.actual_duration.as_secs(), + rate.latency, + rate.p50_latency, + 
rate.p90_latency, + rate.p99_latency, + result.latency_breakdown.get_samples(&LatencyBreakdownSlice::QsBatchToPos).max_sample(), + result.latency_breakdown.get_samples(&LatencyBreakdownSlice::QsPosToProposal).max_sample(), + result.latency_breakdown.get_samples(&LatencyBreakdownSlice::ConsensusProposalToOrdered).max_sample(), + result.latency_breakdown.get_samples(&LatencyBreakdownSlice::ConsensusOrderedToCommit).max_sample(), + result.actual_duration.as_secs() + )); + } } table diff --git a/third_party/move/benchmarks/src/move_vm.rs b/third_party/move/benchmarks/src/move_vm.rs index 2f03c7e63dc4d..b622fa6173260 100644 --- a/third_party/move/benchmarks/src/move_vm.rs +++ b/third_party/move/benchmarks/src/move_vm.rs @@ -40,6 +40,7 @@ fn compile_modules() -> Vec { let (_files, compiled_units) = Compiler::from_files( src_files, vec![], + Flags::empty().set_skip_attribute_checks(false), move_stdlib::move_stdlib_named_addresses(), ) .build_and_report() diff --git a/third_party/move/documentation/examples/diem-framework/crates/cli/Cargo.toml b/third_party/move/documentation/examples/diem-framework/crates/cli/Cargo.toml index 7ad0759cb9026..eb05180cd3785 100644 --- a/third_party/move/documentation/examples/diem-framework/crates/cli/Cargo.toml +++ b/third_party/move/documentation/examples/diem-framework/crates/cli/Cargo.toml @@ -9,7 +9,7 @@ edition = "2021" [dependencies] anyhow = "1.0.52" -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } bcs = "0.1.4" move-cli = { path = "../../../../../tools/move-cli" } diff --git a/third_party/move/evm/extract-ethereum-abi/Cargo.toml b/third_party/move/evm/extract-ethereum-abi/Cargo.toml index 01d5a436fb5f2..7cfc16350a66f 100644 --- a/third_party/move/evm/extract-ethereum-abi/Cargo.toml +++ b/third_party/move/evm/extract-ethereum-abi/Cargo.toml @@ -16,7 +16,7 @@ move-to-yul = { path = "../move-to-yul" } # external dependencies anyhow = "1.0.38" atty = "0.2.14" -clap = { version = "4.3.5", features = ["derive", "env"] } +clap = { version = "4.3.9", features = ["derive", "env"] } codespan = "0.11.1" codespan-reporting = "0.11.1" ethabi = "17.0.0" diff --git a/third_party/move/evm/move-to-yul/Cargo.toml b/third_party/move/evm/move-to-yul/Cargo.toml index 8e40033800901..023f06e52e287 100644 --- a/third_party/move/evm/move-to-yul/Cargo.toml +++ b/third_party/move/evm/move-to-yul/Cargo.toml @@ -17,12 +17,12 @@ move-ir-types = { path = "../../move-ir/types" } move-model = { path = "../../move-model" } # move dependencies move-prover-boogie-backend = { path = "../../move-prover/boogie-backend" } -move-stackless-bytecode = { path = "../../move-prover/bytecode" } +move-stackless-bytecode = { path = "../../move-model/bytecode" } # external dependencies anyhow = "1.0.38" atty = "0.2.14" -clap = { version = "4.3.5", features = ["derive", "env"] } +clap = { version = "4.3.9", features = ["derive", "env"] } codespan = "0.11.1" codespan-reporting = "0.11.1" ethnum = "1.0.4" diff --git a/third_party/move/evm/move-to-yul/src/lib.rs b/third_party/move/evm/move-to-yul/src/lib.rs index 346043fef772b..9333400655200 100644 --- a/third_party/move/evm/move-to-yul/src/lib.rs +++ b/third_party/move/evm/move-to-yul/src/lib.rs @@ -31,7 +31,9 @@ use codespan_reporting::{ diagnostic::Severity, term::termcolor::{ColorChoice, StandardStream, WriteColor}, }; -use move_compiler::{shared::PackagePaths, Flags}; +use move_compiler::{ + attr_derivation::get_known_attributes_for_flavor, shared::PackagePaths, Flags, +}; use 
move_core_types::metadata::Metadata; use move_model::{ model::GlobalEnv, options::ModelBuilderOptions, parse_addresses_from_options, @@ -49,6 +51,8 @@ pub fn run_to_yul_errors_to_stderr(options: Options) -> anyhow::Result<()> { pub fn run_to_yul(error_writer: &mut W, mut options: Options) -> anyhow::Result<()> { // Run the model builder. let addrs = parse_addresses_from_options(options.named_address_mapping.clone())?; + let flags = Flags::empty().set_flavor("async"); + let known_attributes = get_known_attributes_for_flavor(&flags); let env = run_model_builder_with_options_and_compilation_flags( vec![PackagePaths { name: None, @@ -61,7 +65,8 @@ pub fn run_to_yul(error_writer: &mut W, mut options: Options) -> named_address_map: addrs, }], ModelBuilderOptions::default(), - Flags::empty().set_flavor("async"), + flags, + &known_attributes, )?; // If the model contains any errors, report them now and exit. check_errors( @@ -102,6 +107,8 @@ pub fn run_to_abi_metadata( ) -> anyhow::Result> { // Run the model builder. let addrs = parse_addresses_from_options(options.named_address_mapping.clone())?; + let flags = Flags::empty().set_flavor("async"); + let known_attributes = get_known_attributes_for_flavor(&flags); let env = run_model_builder_with_options_and_compilation_flags( vec![PackagePaths { name: None, @@ -114,7 +121,8 @@ pub fn run_to_abi_metadata( named_address_map: addrs, }], ModelBuilderOptions::default(), - Flags::empty().set_flavor("async"), + flags, + &known_attributes, )?; // If the model contains any errors, report them now and exit. check_errors( diff --git a/third_party/move/evm/move-to-yul/tests/dispatcher_testsuite.rs b/third_party/move/evm/move-to-yul/tests/dispatcher_testsuite.rs index f452d7684d3c3..ee1f9c916a765 100644 --- a/third_party/move/evm/move-to-yul/tests/dispatcher_testsuite.rs +++ b/third_party/move/evm/move-to-yul/tests/dispatcher_testsuite.rs @@ -5,8 +5,13 @@ use anyhow::Result; use evm::{backend::MemoryVicinity, ExitReason}; use evm_exec_utils::{compile, exec::Executor}; -use move_compiler::shared::{NumericalAddress, PackagePaths}; -use move_model::{options::ModelBuilderOptions, run_model_builder_with_options}; +use move_compiler::{ + attr_derivation::get_known_attributes_for_flavor, + shared::{Flags, NumericalAddress, PackagePaths}, +}; +use move_model::{ + options::ModelBuilderOptions, run_model_builder_with_options_and_compilation_flags, +}; use move_stdlib::move_stdlib_named_addresses; use move_to_yul::{generator::Generator, options::Options}; use primitive_types::{H160, U256}; @@ -60,7 +65,9 @@ fn compile_yul_to_bytecode_bytes(filename: &str) -> Result> { "Async".to_string(), NumericalAddress::parse_str("0x1").unwrap(), ); - let env = run_model_builder_with_options( + let flags = Flags::verification().set_flavor("evm"); + let known_attributes = get_known_attributes_for_flavor(&flags); + let env = run_model_builder_with_options_and_compilation_flags( vec![PackagePaths { name: None, paths: vec![contract_path(filename).to_string_lossy().to_string()], @@ -72,12 +79,14 @@ fn compile_yul_to_bytecode_bytes(filename: &str) -> Result> { named_address_map, }], ModelBuilderOptions::default(), + flags, + &known_attributes, )?; let options = Options::default(); let (_, out, _) = Generator::run(&options, &env) .pop() .expect("not contract in test case"); - let (bc, _) = compile::solc_yul(&out, false)?; + let (bc, _) = compile::solc_yul(&out, true)?; Ok(bc) } diff --git a/third_party/move/evm/move-to-yul/tests/testsuite.rs 
b/third_party/move/evm/move-to-yul/tests/testsuite.rs index f782f09153d22..7b5fd6191ae83 100644 --- a/third_party/move/evm/move-to-yul/tests/testsuite.rs +++ b/third_party/move/evm/move-to-yul/tests/testsuite.rs @@ -7,7 +7,10 @@ use codespan_reporting::{diagnostic::Severity, term::termcolor::Buffer}; use evm::backend::MemoryVicinity; use evm_exec_utils::{compile, exec::Executor, tracing}; use move_command_line_common::testing::EXP_EXT; -use move_compiler::shared::{NumericalAddress, PackagePaths}; +use move_compiler::{ + attr_derivation, + shared::{NumericalAddress, PackagePaths}, +}; use move_model::{ model::{FunId, GlobalEnv, QualifiedId}, options::ModelBuilderOptions, @@ -46,6 +49,10 @@ fn test_runner(path: &Path) -> datatest_stable::Result<()> { "Async".to_string(), NumericalAddress::parse_str("0x1").unwrap(), ); + let flags = move_compiler::Flags::empty() + .set_sources_shadow_deps(true) + .set_flavor("async"); + let known_attributes = attr_derivation::get_known_attributes_for_flavor(&flags); let env = run_model_builder_with_options_and_compilation_flags( vec![PackagePaths { name: None, @@ -58,9 +65,8 @@ fn test_runner(path: &Path) -> datatest_stable::Result<()> { named_address_map, }], ModelBuilderOptions::default(), - move_compiler::Flags::empty() - .set_sources_shadow_deps(true) - .set_flavor("async"), + flags, + &known_attributes, )?; for exp in std::iter::once(String::new()).chain(experiments.into_iter()) { let mut options = Options { diff --git a/third_party/move/extensions/async/move-async-vm/src/async_vm.rs b/third_party/move/extensions/async/move-async-vm/src/async_vm.rs index 2cd90cffec8c2..b5c1d2c9c2501 100644 --- a/third_party/move/extensions/async/move-async-vm/src/async_vm.rs +++ b/third_party/move/extensions/async/move-async-vm/src/async_vm.rs @@ -11,7 +11,7 @@ use crate::{ use move_binary_format::errors::{Location, PartialVMError, PartialVMResult, VMError, VMResult}; use move_core_types::{ account_address::AccountAddress, - effects::{ChangeSet, Event, Op}, + effects::{ChangeSet, Op}, identifier::Identifier, language_storage::{ModuleId, StructTag, TypeTag}, resolver::MoveResolver, @@ -141,7 +141,6 @@ pub type Message = (AccountAddress, u64, Vec>); /// A structure to represent success for the execution of an async session operation. 
pub struct AsyncSuccess<'r> { pub change_set: ChangeSet, - pub events: Vec, pub messages: Vec, pub gas_used: Gas, pub ext: NativeContextExtensions<'r>, @@ -219,7 +218,7 @@ impl<'r, 'l> AsyncSession<'r, 'l> { mutable_reference_outputs: _, mut return_values, }, - (mut change_set, events, mut native_extensions), + (mut change_set, mut native_extensions), )) => { if return_values.len() != 1 { Err(async_extension_error(format!( @@ -238,7 +237,6 @@ impl<'r, 'l> AsyncSession<'r, 'l> { let async_ext = native_extensions.remove::(); Ok(AsyncSuccess { change_set, - events, messages: async_ext.sent, gas_used, ext: native_extensions, @@ -311,7 +309,7 @@ impl<'r, 'l> AsyncSession<'r, 'l> { mut mutable_reference_outputs, return_values: _, }, - (mut change_set, events, mut native_extensions), + (mut change_set, mut native_extensions), )) => { if mutable_reference_outputs.len() > 1 { Err(async_extension_error(format!( @@ -332,7 +330,6 @@ impl<'r, 'l> AsyncSession<'r, 'l> { let async_ext = native_extensions.remove::(); Ok(AsyncSuccess { change_set, - events, messages: async_ext.sent, gas_used, ext: native_extensions, @@ -430,13 +427,11 @@ impl<'r> Display for AsyncSuccess<'r> { fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result { let AsyncSuccess { change_set, - events, messages, gas_used, ext: _, } = self; write!(f, "change_set: {:?}", change_set)?; - write!(f, ", events: {:?}", events)?; write!(f, ", messages: {:?}", messages)?; write!(f, ", gas: {}", gas_used) } diff --git a/third_party/move/extensions/async/move-async-vm/tests/testsuite.rs b/third_party/move/extensions/async/move-async-vm/tests/testsuite.rs index e460eb9720237..b1883e1274ffa 100644 --- a/third_party/move/extensions/async/move-async-vm/tests/testsuite.rs +++ b/third_party/move/extensions/async/move-async-vm/tests/testsuite.rs @@ -13,7 +13,7 @@ use move_async_vm::{ use move_binary_format::access::ModuleAccess; use move_command_line_common::testing::EXP_EXT; use move_compiler::{ - compiled_unit::CompiledUnit, diagnostics::report_diagnostics_to_buffer, + attr_derivation, compiled_unit::CompiledUnit, diagnostics::report_diagnostics_to_buffer, shared::NumericalAddress, Compiler, Flags, }; use move_core_types::{ @@ -350,8 +350,10 @@ impl Harness { .filter(|p| *p != path) .cloned() .collect(); - let compiler = Compiler::from_files(targets, deps, address_map.clone()) - .set_flags(Flags::empty().set_flavor("async")); + let flags = Flags::empty().set_flavor("async"); + let known_attributes = attr_derivation::get_known_attributes_for_flavor(&flags); + let compiler = + Compiler::from_files(targets, deps, address_map.clone(), flags, &known_attributes); let (sources, inner) = compiler.build()?; match inner { Err(diags) => bail!( diff --git a/third_party/move/move-analyzer/Cargo.toml b/third_party/move/move-analyzer/Cargo.toml index 690e4589ea6d2..91348e9549fb3 100644 --- a/third_party/move/move-analyzer/Cargo.toml +++ b/third_party/move/move-analyzer/Cargo.toml @@ -10,7 +10,7 @@ edition = "2021" [dependencies] anyhow = "1.0.52" -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } codespan-reporting = "0.11.1" crossbeam = "0.8" derivative = "2.2.0" diff --git a/third_party/move/move-bytecode-verifier/src/dependencies.rs b/third_party/move/move-bytecode-verifier/src/dependencies.rs index a3876ec34bc8e..a38f1e900cf64 100644 --- a/third_party/move/move-bytecode-verifier/src/dependencies.rs +++ b/third_party/move/move-bytecode-verifier/src/dependencies.rs @@ -252,7 +252,7 @@ fn 
verify_imported_structs(context: &Context) -> PartialVMResult<()> { StatusCode::LOOKUP_FAILED, IndexKind::StructHandle, idx as TableIndex, - )) + )); }, } } diff --git a/third_party/move/move-compiler-v2/Cargo.toml b/third_party/move/move-compiler-v2/Cargo.toml index 95818d809d4b8..bda0ea4877d5b 100644 --- a/third_party/move/move-compiler-v2/Cargo.toml +++ b/third_party/move/move-compiler-v2/Cargo.toml @@ -12,12 +12,13 @@ edition = "2021" [dependencies] anyhow = "1.0.62" move-binary-format = { path = "../move-binary-format" } +move-compiler = { path = "../move-compiler" } move-core-types = { path = "../move-core/types" } move-model = { path = "../move-model" } -move-stackless-bytecode = { path = "../move-prover/bytecode" } +move-stackless-bytecode = { path = "../move-model/bytecode" } bcs = { workspace = true } -clap = { version = "3.2.23", features = ["derive", "env"] } +clap = { version = "4.3.9", features = ["derive", "env"] } codespan = "0.11.1" codespan-reporting = { version = "0.11.1", features = ["serde", "serialization"] } ethnum = "1.0.4" diff --git a/third_party/move/move-compiler-v2/src/bytecode_generator.rs b/third_party/move/move-compiler-v2/src/bytecode_generator.rs index e7eff285c27bd..37afa08c4624d 100644 --- a/third_party/move/move-compiler-v2/src/bytecode_generator.rs +++ b/third_party/move/move-compiler-v2/src/bytecode_generator.rs @@ -285,7 +285,6 @@ impl<'env> Generator<'env> { // Dispatcher impl<'env> Generator<'env> { - /// Generate code, for the given expression, and store the result in the given temporary. fn gen(&mut self, targets: Vec, exp: &Exp) { match exp.as_ref() { ExpData::Invalid(id) => self.internal_error(*id, "invalid expression"), @@ -720,7 +719,25 @@ impl<'env> Generator<'env> { .env() .get_node_instantiation_opt(id) .unwrap_or_default(); - let args = self.gen_arg_list(args); + // Function calls can have implicit conversion of &mut to &, need to compute implicit + // conversions. + let param_types: Vec = self + .env() + .get_function(fun) + .get_parameters() + .into_iter() + .map(|Parameter(_, ty)| ty.instantiate(&type_args)) + .collect(); + if args.len() != param_types.len() { + self.internal_error(id, "inconsistent type arity"); + return; + } + let args = args + .iter() + .zip(param_types.into_iter()) + .map(|(e, t)| self.maybe_convert(e, &t)) + .collect::>(); + let args = self.gen_arg_list(&args); self.emit_with(id, |attr| { Bytecode::Call( attr, @@ -732,6 +749,27 @@ impl<'env> Generator<'env> { }) } + /// Convert the expression so it matches the expected type. This is currently only needed + /// for `&mut` to `&` conversion, in which case we need to to introduce a Freeze operation. 
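As a distilled view of the conversion described in the comment above: an argument needs the implicit freeze exactly when its own type is a mutable reference while the instantiated parameter type is an immutable reference. The predicate below is only a sketch with an invented name; the real helper, `maybe_convert` just below, goes further and rewrites the argument expression into a `Freeze` call node rather than merely detecting the mismatch.

    use move_model::ty::{ReferenceKind, Type};

    // Sketch only: true when the call argument is `&mut T` but the (instantiated)
    // parameter expects `&T`, which is the case the generator handles by
    // inserting an explicit Freeze operation.
    fn needs_freeze(arg_ty: &Type, param_ty: &Type) -> bool {
        matches!(
            (arg_ty, param_ty),
            (
                Type::Reference(ReferenceKind::Mutable, _),
                Type::Reference(ReferenceKind::Immutable, _),
            )
        )
    }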
+ fn maybe_convert(&self, exp: &Exp, expected_ty: &Type) -> Exp { + let id = exp.node_id(); + let exp_ty = self.env().get_node_type(id); + if let ( + Type::Reference(ReferenceKind::Mutable, _), + Type::Reference(ReferenceKind::Immutable, et), + ) = (exp_ty, expected_ty) + { + let freeze_id = self + .env() + .new_node(self.env().get_node_loc(id), expected_ty.clone()); + self.env() + .set_node_instantiation(freeze_id, vec![et.as_ref().clone()]); + ExpData::Call(freeze_id, Operation::Freeze, vec![exp.clone()]).into_exp() + } else { + exp.clone() + } + } + fn gen_arg_list(&mut self, exps: &[Exp]) -> Vec { exps.iter().map(|exp| self.gen_arg(exp)).collect() } diff --git a/third_party/move/move-compiler-v2/src/file_format_generator/function_generator.rs b/third_party/move/move-compiler-v2/src/file_format_generator/function_generator.rs index 1e7b0de4ea083..2f76bab4e21a0 100644 --- a/third_party/move/move-compiler-v2/src/file_format_generator/function_generator.rs +++ b/third_party/move/move-compiler-v2/src/file_format_generator/function_generator.rs @@ -15,6 +15,7 @@ use move_model::{ use move_stackless_bytecode::{ function_target::FunctionTarget, function_target_pipeline::FunctionVariant, + livevar_analysis::LiveVarAnnotation, stackless_bytecode::{Bytecode, Label, Operation}, }; use std::collections::{BTreeMap, BTreeSet}; @@ -51,6 +52,13 @@ pub struct FunctionContext<'env> { type_parameters: Vec, } +/// Immutable context for processing a bytecode instruction. +#[derive(Clone)] +struct BytecodeContext<'env> { + fun_ctx: &'env FunctionContext<'env>, + code_offset: FF::CodeOffset, +} + #[derive(Debug, Copy, Clone)] /// Represents the location of a temporary if it is not only on the stack. struct TempInfo { @@ -92,6 +100,7 @@ impl<'a> FunctionGenerator<'a> { code: vec![], }; let target = ctx.targets.get_target(&fun_env, &FunctionVariant::Baseline); + let code = fun_gen.gen_code(&FunctionContext { module: ctx.clone(), fun: target, @@ -132,17 +141,22 @@ impl<'a> FunctionGenerator<'a> { // Walk the bytecode let bytecode = ctx.fun.get_bytecode(); for i in 0..bytecode.len() { + let code_offset = i as FF::CodeOffset; + let bytecode_ctx = BytecodeContext { + fun_ctx: ctx, + code_offset, + }; if i + 1 < bytecode.len() { let bc = &bytecode[i]; let next_bc = &bytecode[i + 1]; - self.gen_bytecode(ctx, &bytecode[i], Some(next_bc)); + self.gen_bytecode(&bytecode_ctx, &bytecode[i], Some(next_bc)); if !bc.is_branch() && matches!(next_bc, Bytecode::Label(..)) { // At block boundaries without a preceding branch, need to flush stack // TODO: to avoid this, we should use the CFG for code generation. - self.abstract_flush_stack(ctx, 0); + self.abstract_flush_stack_after(&bytecode_ctx, 0); } } else { - self.gen_bytecode(ctx, &bytecode[i], None) + self.gen_bytecode(&bytecode_ctx, &bytecode[i], None) } } @@ -189,11 +203,11 @@ impl<'a> FunctionGenerator<'a> { /// Generate file-format bytecode from a stackless bytecode and an optional next bytecode /// for peephole optimizations. 
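One piece of reasoning that stays implicit in the generation loop above: a `Label` can be reached from more than one predecessor, so values that happen to sit on the abstract stack when straight-line code falls through into a label cannot be assumed present on every incoming path, which is why the stack is flushed there (and, with liveness available, dead values can simply be popped). A condensed sketch of that boundary test, reusing the stackless bytecode types already imported above:

    use move_stackless_bytecode::stackless_bytecode::Bytecode;

    // Sketch only: mirrors the check in the generation loop. A non-branch
    // instruction followed by a label is a fall-through into a join point,
    // so the abstract stack must be emptied before it.
    fn falls_through_into_label(bc: &Bytecode, next_bc: &Bytecode) -> bool {
        !bc.is_branch() && matches!(next_bc, Bytecode::Label(..))
    }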
- fn gen_bytecode(&mut self, ctx: &FunctionContext, bc: &Bytecode, next_bc: Option<&Bytecode>) { + fn gen_bytecode(&mut self, ctx: &BytecodeContext, bc: &Bytecode, next_bc: Option<&Bytecode>) { match bc { Bytecode::Assign(_, dest, source, _mode) => { self.abstract_push_args(ctx, vec![*source]); - let local = self.temp_to_local(ctx, *dest); + let local = self.temp_to_local(ctx.fun_ctx, *dest); self.emit(FF::Bytecode::StLoc(local)); self.abstract_pop(ctx) }, @@ -207,10 +221,10 @@ impl<'a> FunctionGenerator<'a> { }, Bytecode::Load(_, dest, cons) => { let cons = self.gen.constant_index( - &ctx.module, - &ctx.loc, + &ctx.fun_ctx.module, + &ctx.fun_ctx.loc, cons, - ctx.fun.get_local_type(*dest), + ctx.fun_ctx.fun.get_local_type(*dest), ); self.emit(FF::Bytecode::LdConst(cons)); self.abstract_push_result(ctx, vec![*dest]); @@ -248,7 +262,7 @@ impl<'a> FunctionGenerator<'a> { self.abstract_pop(ctx); }, Bytecode::Jump(_, label) => { - self.abstract_flush_stack(ctx, 0); + self.abstract_flush_stack_before(ctx, 0); self.add_label_reference(*label); self.emit(FF::Bytecode::Branch(0)); }, @@ -263,7 +277,9 @@ impl<'a> FunctionGenerator<'a> { Bytecode::SaveMem(_, _, _) | Bytecode::Call(_, _, _, _, Some(_)) | Bytecode::SaveSpecVar(_, _, _) - | Bytecode::Prop(_, _, _) => ctx.internal_error("unexpected specification bytecode"), + | Bytecode::Prop(_, _, _) => ctx + .fun_ctx + .internal_error("unexpected specification bytecode"), } } @@ -272,7 +288,7 @@ impl<'a> FunctionGenerator<'a> { /// the stack empty at end. fn balance_stack_end_of_block( &mut self, - ctx: &FunctionContext, + ctx: &BytecodeContext, result: impl AsRef<[TempIndex]>, ) { let result = result.as_ref(); @@ -281,7 +297,7 @@ impl<'a> FunctionGenerator<'a> { if self.stack.len() != result.len() { // Unfortunately, there is more on the stack than needed. // Need to flush and push again so the stack is empty after return. - self.abstract_flush_stack(ctx, 0); + self.abstract_flush_stack_before(ctx, 0); self.abstract_push_args(ctx, result.as_ref()); assert_eq!(self.stack.len(), result.len()) } @@ -307,11 +323,12 @@ impl<'a> FunctionGenerator<'a> { /// Generates code for an operation. 
fn gen_operation( &mut self, - ctx: &FunctionContext, + ctx: &BytecodeContext, dest: &[TempIndex], oper: &Operation, source: &[TempIndex], ) { + let fun_ctx = ctx.fun_ctx; match oper { Operation::Function(mid, fid, inst) => { self.gen_call(ctx, dest, mid.qualified(*fid), inst, source); @@ -372,8 +389,8 @@ impl<'a> FunctionGenerator<'a> { ); }, Operation::BorrowLoc => { - let local = self.temp_to_local(ctx, source[0]); - if ctx.fun.get_local_type(dest[0]).is_mutable_reference() { + let local = self.temp_to_local(fun_ctx, source[0]); + if fun_ctx.fun.get_local_type(dest[0]).is_mutable_reference() { self.emit(FF::Bytecode::MutBorrowLoc(local)) } else { self.emit(FF::Bytecode::ImmBorrowLoc(local)) @@ -391,7 +408,7 @@ impl<'a> FunctionGenerator<'a> { ); }, Operation::BorrowGlobal(mid, sid, inst) => { - let is_mut = ctx.fun.get_local_type(dest[0]).is_mutable_reference(); + let is_mut = fun_ctx.fun.get_local_type(dest[0]).is_mutable_reference(); self.gen_struct_oper( ctx, dest, @@ -411,13 +428,15 @@ impl<'a> FunctionGenerator<'a> { ) }, Operation::Vector => { - let elem_type = if let Type::Vector(el) = ctx.fun.get_local_type(dest[0]) { + let elem_type = if let Type::Vector(el) = fun_ctx.fun.get_local_type(dest[0]) { el.as_ref().clone() } else { - ctx.internal_error("expected vector type"); + fun_ctx.internal_error("expected vector type"); Type::new_prim(PrimitiveType::Bool) }; - let sign = self.gen.signature(&ctx.module, &ctx.loc, vec![elem_type]); + let sign = self + .gen + .signature(&fun_ctx.module, &fun_ctx.loc, vec![elem_type]); self.gen_builtin( ctx, dest, @@ -478,30 +497,42 @@ impl<'a> FunctionGenerator<'a> { | Operation::UnpackRef | Operation::PackRef | Operation::UnpackRefDeep - | Operation::PackRefDeep => ctx.internal_error("unexpected specification opcode"), + | Operation::PackRefDeep => fun_ctx.internal_error("unexpected specification opcode"), } } /// Generates code for a function call. fn gen_call( &mut self, - ctx: &FunctionContext, + ctx: &BytecodeContext, dest: &[TempIndex], id: QualifiedId, inst: &[Type], source: &[TempIndex], ) { + let fun_ctx = ctx.fun_ctx; self.abstract_push_args(ctx, source); - if inst.is_empty() { - let idx = + if let Some(opcode) = ctx.fun_ctx.module.get_well_known_function_code( + &ctx.fun_ctx.loc, + id, + Some( self.gen - .function_index(&ctx.module, &ctx.loc, &ctx.module.env.get_function(id)); + .signature(&ctx.fun_ctx.module, &ctx.fun_ctx.loc, inst.to_vec()), + ), + ) { + self.emit(opcode) + } else if inst.is_empty() { + let idx = self.gen.function_index( + &fun_ctx.module, + &fun_ctx.loc, + &fun_ctx.module.env.get_function(id), + ); self.emit(FF::Bytecode::Call(idx)) } else { let idx = self.gen.function_instantiation_index( - &ctx.module, - &ctx.loc, - ctx.fun.func_env, + &fun_ctx.module, + &fun_ctx.loc, + &fun_ctx.module.env.get_function(id), inst.to_vec(), ); self.emit(FF::Bytecode::CallGeneric(idx)) @@ -515,7 +546,7 @@ impl<'a> FunctionGenerator<'a> { /// to create for each case. 
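To summarize the call lowering in `gen_call` above: a call is first checked against the table of well-known `0x1::vector` functions (see `get_well_known_function_code` in `module_generator.rs` further down), then lowered to a plain `Call` when it has no type arguments, and to `CallGeneric` otherwise. The enum and function below use invented names and serve only to make that priority explicit:

    // Sketch only: the dispatch order used when lowering a function call.
    enum LoweredCall {
        // A well-known 0x1::vector function with a dedicated opcode
        // (VecPack, VecLen, VecImmBorrow, VecMutBorrow, VecPushBack,
        // VecPopBack, VecUnpack, VecSwap).
        WellKnownVectorOp,
        // A non-generic call, emitted as Call(function handle index).
        Plain,
        // A generic call, emitted as CallGeneric(function instantiation index).
        Generic,
    }

    fn classify_call(is_well_known_vector_fn: bool, has_type_args: bool) -> LoweredCall {
        if is_well_known_vector_fn {
            LoweredCall::WellKnownVectorOp
        } else if !has_type_args {
            LoweredCall::Plain
        } else {
            LoweredCall::Generic
        }
    }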
fn gen_struct_oper( &mut self, - ctx: &FunctionContext, + ctx: &BytecodeContext, dest: &[TempIndex], id: QualifiedId, inst: &[Type], @@ -523,15 +554,18 @@ impl<'a> FunctionGenerator<'a> { mk_simple: impl FnOnce(FF::StructDefinitionIndex) -> FF::Bytecode, mk_generic: impl FnOnce(FF::StructDefInstantiationIndex) -> FF::Bytecode, ) { + let fun_ctx = ctx.fun_ctx; self.abstract_push_args(ctx, source); - let struct_env = &ctx.module.env.get_struct(id); + let struct_env = &fun_ctx.module.env.get_struct(id); if inst.is_empty() { - let idx = self.gen.struct_def_index(&ctx.module, &ctx.loc, struct_env); + let idx = self + .gen + .struct_def_index(&fun_ctx.module, &fun_ctx.loc, struct_env); self.emit(mk_simple(idx)) } else { let idx = self.gen.struct_def_instantiation_index( - &ctx.module, - &ctx.loc, + &fun_ctx.module, + &fun_ctx.loc, struct_env, inst.to_vec(), ); @@ -544,19 +578,22 @@ impl<'a> FunctionGenerator<'a> { /// Generate code for the borrow-field instruction. fn gen_borrow_field( &mut self, - ctx: &FunctionContext, + ctx: &BytecodeContext, dest: &[TempIndex], id: QualifiedId, inst: Vec, offset: usize, source: &[TempIndex], ) { + let fun_ctx = ctx.fun_ctx; self.abstract_push_args(ctx, source); - let struct_env = &ctx.module.env.get_struct(id); + let struct_env = &fun_ctx.module.env.get_struct(id); let field_env = &struct_env.get_field_by_offset(offset); - let is_mut = ctx.fun.get_local_type(dest[0]).is_mutable_reference(); + let is_mut = fun_ctx.fun.get_local_type(dest[0]).is_mutable_reference(); if inst.is_empty() { - let idx = self.gen.field_index(&ctx.module, &ctx.loc, field_env); + let idx = self + .gen + .field_index(&fun_ctx.module, &fun_ctx.loc, field_env); if is_mut { self.emit(FF::Bytecode::MutBorrowField(idx)) } else { @@ -565,7 +602,7 @@ impl<'a> FunctionGenerator<'a> { } else { let idx = self .gen - .field_inst_index(&ctx.module, &ctx.loc, field_env, inst); + .field_inst_index(&fun_ctx.module, &fun_ctx.loc, field_env, inst); if is_mut { self.emit(FF::Bytecode::MutBorrowFieldGeneric(idx)) } else { @@ -579,7 +616,7 @@ impl<'a> FunctionGenerator<'a> { /// Generate code for a general builtin instruction. fn gen_builtin( &mut self, - ctx: &FunctionContext, + ctx: &BytecodeContext, dest: &[TempIndex], bc: FF::Bytecode, source: &[TempIndex], @@ -598,7 +635,8 @@ impl<'a> FunctionGenerator<'a> { /// Ensure that on the abstract stack of the generator, the given temporaries are ready, /// in order, to be consumed. Ideally those are already on the stack, but if they are not, /// they will be made available. - fn abstract_push_args(&mut self, ctx: &FunctionContext, temps: impl AsRef<[TempIndex]>) { + fn abstract_push_args(&mut self, ctx: &BytecodeContext, temps: impl AsRef<[TempIndex]>) { + let fun_ctx = ctx.fun_ctx; // Compute the maximal prefix of `temps` which are already on the stack. let temps = temps.as_ref(); let mut temps_to_push = temps; @@ -624,11 +662,12 @@ impl<'a> FunctionGenerator<'a> { temps_to_push = temps; } } - self.abstract_flush_stack(ctx, stack_to_flush); + self.abstract_flush_stack_before(ctx, stack_to_flush); // Finally, push `temps_to_push` onto the stack. for temp in temps_to_push { - let local = self.temp_to_local(ctx, *temp); - if ctx.is_copyable(*temp) { + let local = self.temp_to_local(fun_ctx, *temp); + // Copy the temporary if it is copyable or still used after this code point. 
+ if fun_ctx.is_copyable(*temp) && ctx.is_alive_after(*temp) { self.emit(FF::Bytecode::CopyLoc(local)) } else { self.emit(FF::Bytecode::MoveLoc(local)); @@ -637,17 +676,38 @@ impl<'a> FunctionGenerator<'a> { } } - /// Flush the abstract stack, ensuring that all values on the stack are stored in locals. - fn abstract_flush_stack(&mut self, ctx: &FunctionContext, top: usize) { + /// Flush the abstract stack, ensuring that all values on the stack are stored in locals, if + /// they are still alive. The `before` parameter determines whether we care about + /// variables alive before or after the current program point. + fn abstract_flush_stack(&mut self, ctx: &BytecodeContext, top: usize, before: bool) { + let fun_ctx = ctx.fun_ctx; while self.stack.len() > top { let temp = self.stack.pop().unwrap(); - let local = self.temp_to_local(ctx, temp); - self.emit(FF::Bytecode::StLoc(local)); + if before && ctx.is_alive_before(temp) + || !before && ctx.is_alive_after(temp) + || self.pinned.contains(&temp) + { + // Only need to save to a local if the temp is still used afterwards + let local = self.temp_to_local(fun_ctx, temp); + self.emit(FF::Bytecode::StLoc(local)); + } else { + self.emit(FF::Bytecode::Pop) + } } } + /// Shortcut for `abstract_flush_stack(..., true)` + fn abstract_flush_stack_before(&mut self, ctx: &BytecodeContext, top: usize) { + self.abstract_flush_stack(ctx, top, true) + } + + /// Shortcut for `abstract_flush_stack(..., false)` + fn abstract_flush_stack_after(&mut self, ctx: &BytecodeContext, top: usize) { + self.abstract_flush_stack(ctx, top, false) + } + /// Push the result of an operation to the abstract stack. - fn abstract_push_result(&mut self, ctx: &FunctionContext, result: impl AsRef<[TempIndex]>) { + fn abstract_push_result(&mut self, ctx: &BytecodeContext, result: impl AsRef<[TempIndex]>) { let mut flush_mark = usize::MAX; for temp in result.as_ref() { if self.pinned.contains(temp) { @@ -657,19 +717,19 @@ impl<'a> FunctionGenerator<'a> { self.stack.push(*temp); } if flush_mark != usize::MAX { - self.abstract_flush_stack(ctx, flush_mark) + self.abstract_flush_stack_after(ctx, flush_mark) } } /// Pop a value from the abstract stack. - fn abstract_pop(&mut self, ctx: &FunctionContext) { + fn abstract_pop(&mut self, ctx: &BytecodeContext) { if self.stack.pop().is_none() { - ctx.internal_error("unbalanced abstract stack") + ctx.fun_ctx.internal_error("unbalanced abstract stack") } } /// Pop a number of values from the abstract stack. - fn abstract_pop_n(&mut self, ctx: &FunctionContext, cnt: usize) { + fn abstract_pop_n(&mut self, ctx: &BytecodeContext, cnt: usize) { for _ in 0..cnt { self.abstract_pop(ctx) } @@ -710,7 +770,7 @@ impl<'env> FunctionContext<'env> { /// Returns true of the given temporary can/should be copied when it is loaded onto the stack. /// Currently, this is using the `Copy` ability, but in the future it may also use lifetime - /// analysis results to check whether the variable is still accessed. + /// pipeline results to check whether the variable is still accessed. pub fn is_copyable(&self, temp: TempIndex) -> bool { self.module .env @@ -718,3 +778,32 @@ impl<'env> FunctionContext<'env> { .has_ability(FF::Ability::Copy) } } + +impl<'env> BytecodeContext<'env> { + /// Determine whether the temporary is alive (used) in the reachable code after this point. 
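The net effect of the liveness hooks introduced above can be stated as two small decisions. When a temporary leaves the abstract stack, it is saved to a local only if it is still live at the relevant program point (or pinned), and otherwise popped; when a temporary is pushed, it is copied only if it is both copyable and still live after the current instruction, and otherwise moved. These decisions are driven by `is_alive_before`/`is_alive_after`, which read the `LiveVarAnnotation` attached by the new pipeline step. The names below are invented for this sketch:

    // Sketch only: the flush and push decisions made above, as plain predicates.
    enum StackExit { SaveToLocal, Pop }
    enum StackEntry { CopyLoc, MoveLoc }

    fn on_flush(live_at_relevant_point: bool, pinned: bool) -> StackExit {
        if live_at_relevant_point || pinned {
            StackExit::SaveToLocal
        } else {
            StackExit::Pop
        }
    }

    fn on_push(copyable: bool, live_after: bool) -> StackEntry {
        if copyable && live_after {
            StackEntry::CopyLoc
        } else {
            StackEntry::MoveLoc
        }
    }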
+ pub fn is_alive_after(&self, temp: TempIndex) -> bool { + let an = self + .fun_ctx + .fun + .get_annotations() + .get::() + .expect("livevar analysis result"); + an.get_live_var_info_at(self.code_offset) + .map(|a| a.after.contains(&temp)) + .unwrap_or(false) + } + + /// Determine whether the temporary is alive (used) in the reachable code before and until + /// this point. + pub fn is_alive_before(&self, temp: TempIndex) -> bool { + let an = self + .fun_ctx + .fun + .get_annotations() + .get::() + .expect("livevar analysis result"); + an.get_live_var_info_at(self.code_offset) + .map(|a| a.before.contains(&temp)) + .unwrap_or(false) + } +} diff --git a/third_party/move/move-compiler-v2/src/file_format_generator/module_generator.rs b/third_party/move/move-compiler-v2/src/file_format_generator/module_generator.rs index b0a1671a4d1ab..4c94c2b1d9620 100644 --- a/third_party/move/move-compiler-v2/src/file_format_generator/module_generator.rs +++ b/third_party/move/move-compiler-v2/src/file_format_generator/module_generator.rs @@ -16,6 +16,7 @@ use move_binary_format::{ }; use move_core_types::{account_address::AccountAddress, identifier::Identifier}; use move_model::{ + ast::Address, model::{ FieldEnv, FunId, FunctionEnv, GlobalEnv, Loc, ModuleEnv, ModuleId, Parameter, QualifiedId, StructEnv, StructId, TypeParameter, TypeParameterKind, @@ -623,4 +624,41 @@ impl<'env> ModuleContext<'env> { value as FF::TableIndex } } + + /// Get the file format opcode for a well-known function. This applies currently to a set + /// vector functions which have builtin opcodes. Gets passed an optional type instantiation + /// in form of a signature. + pub fn get_well_known_function_code( + &self, + loc: &Loc, + qid: QualifiedId, + inst_sign: Option, + ) -> Option { + let fun = self.env.get_function(qid); + let mod_name = fun.module_env.get_name(); + if mod_name.addr() != &Address::Numerical(AccountAddress::ONE) { + return None; + } + let pool = self.env.symbol_pool(); + if pool.string(mod_name.name()).as_str() == "vector" { + if let Some(inst) = inst_sign { + match pool.string(fun.get_name()).as_str() { + "empty" => Some(FF::Bytecode::VecPack(inst, 0)), + "length" => Some(FF::Bytecode::VecLen(inst)), + "borrow" => Some(FF::Bytecode::VecImmBorrow(inst)), + "borrow_mut" => Some(FF::Bytecode::VecMutBorrow(inst)), + "push_back" => Some(FF::Bytecode::VecPushBack(inst)), + "pop_back" => Some(FF::Bytecode::VecPopBack(inst)), + "destroy_empty" => Some(FF::Bytecode::VecUnpack(inst, 0)), + "swap" => Some(FF::Bytecode::VecSwap(inst)), + _ => None, + } + } else { + self.internal_error(loc, "expected type instantiation for vector operation"); + None + } + } else { + None + } + } } diff --git a/third_party/move/move-compiler-v2/src/lib.rs b/third_party/move/move-compiler-v2/src/lib.rs index af5026ce70716..efa7ab9fbd49b 100644 --- a/third_party/move/move-compiler-v2/src/lib.rs +++ b/third_party/move/move-compiler-v2/src/lib.rs @@ -6,11 +6,14 @@ mod bytecode_generator; mod experiments; mod file_format_generator; mod options; +pub mod pipeline; +use crate::pipeline::livevar_analysis_processor::LiveVarAnalysisProcessor; use anyhow::anyhow; use codespan_reporting::term::termcolor::{ColorChoice, StandardStream, WriteColor}; pub use experiments::*; use move_binary_format::{file_format as FF, file_format::CompiledScript, CompiledModule}; +use move_compiler::shared::known_attributes::KnownAttribute; use move_model::{model::GlobalEnv, PackageInfo}; use move_stackless_bytecode::function_target_pipeline::{ FunctionTargetPipeline, 
FunctionTargetsHolder, FunctionVariant, @@ -74,6 +77,8 @@ pub fn run_checker(options: Options) -> anyhow::Result { sources: options.dependencies.clone(), address_map: addrs, }], + options.skip_attribute_checks, + KnownAttribute::get_all_attribute_names(), )?; // Store options in env, for later access env.set_extension(options); @@ -121,9 +126,9 @@ pub fn run_file_format_gen( /// Returns the bytecode processing pipeline. pub fn bytecode_pipeline(_env: &GlobalEnv) -> FunctionTargetPipeline { - // TODO: insert processors here as we proceed. - // Use `env.get_extension::()` to access compiler options - FunctionTargetPipeline::default() + let mut pipeline = FunctionTargetPipeline::default(); + pipeline.add_processor(Box::new(LiveVarAnalysisProcessor())); + pipeline } /// Report any diags in the env to the writer and fail if there are errors. diff --git a/third_party/move/move-compiler-v2/src/options.rs b/third_party/move/move-compiler-v2/src/options.rs index 4692634b0cc41..1517c7c2c0687 100644 --- a/third_party/move/move-compiler-v2/src/options.rs +++ b/third_party/move/move-compiler-v2/src/options.rs @@ -12,26 +12,24 @@ pub struct Options { /// Directories where to lookup dependencies. #[clap( short, - takes_value(true), - multiple_values(true), - multiple_occurrences(true) + num_args = 0.. )] pub dependencies: Vec, /// Named address mapping. #[clap( short, - takes_value(true), - multiple_values(true), - multiple_occurrences(true) + num_args = 0.. )] pub named_address_mapping: Vec, /// Output directory. - #[clap(short)] - #[clap(long, default_value = "")] + #[clap(short, long, default_value = "")] pub output_dir: String, /// Whether to dump intermediate bytecode for debugging. #[clap(long = "dump-bytecode")] pub dump_bytecode: bool, + /// Do not complain about unknown attributes in Move code. + #[clap(long, default_value = "false")] + pub skip_attribute_checks: bool, /// Whether we generate code for tests. This specifically guarantees stable output /// for baseline testing. #[clap(long)] @@ -41,9 +39,7 @@ pub struct Options { #[clap(short)] #[clap( long = "experiment", - takes_value(true), - multiple_values(true), - multiple_occurrences(true) + num_args = 0.. )] pub experiments: Vec, /// Sources to compile (positional arg, therefore last) diff --git a/third_party/move/move-compiler-v2/src/pipeline/livevar_analysis_processor.rs b/third_party/move/move-compiler-v2/src/pipeline/livevar_analysis_processor.rs new file mode 100644 index 0000000000000..53ba45026a7c7 --- /dev/null +++ b/third_party/move/move-compiler-v2/src/pipeline/livevar_analysis_processor.rs @@ -0,0 +1,49 @@ +// Copyright © Aptos Foundation +// Parts of the project are originally copyright © Meta Platforms, Inc. +// SPDX-License-Identifier: Apache-2.0 + +//! Implements a live-variable analysis processor, annotating lifetime information about locals. +//! See also https://en.wikipedia.org/wiki/Live-variable_analysis + +use move_model::model::FunctionEnv; +use move_stackless_bytecode::{ + function_target::{FunctionData, FunctionTarget}, + function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder}, + livevar_analysis, +}; + +pub struct LiveVarAnalysisProcessor(); + +impl FunctionTargetProcessor for LiveVarAnalysisProcessor { + fn process( + &self, + _targets: &mut FunctionTargetsHolder, + fun_env: &FunctionEnv, + mut data: FunctionData, + _scc_opt: Option<&[FunctionEnv]>, + ) -> FunctionData { + if fun_env.is_native() { + return data; + } + // Call the existing live-var analysis from the move-prover. 
+ let target = FunctionTarget::new(fun_env, &data); + let offset_to_live_refs = livevar_analysis::LiveVarAnnotation::from_map( + livevar_analysis::run_livevar_analysis(&target, &data.code), + ); + // Annotate the result on the function data. + data.annotations.set(offset_to_live_refs, true); + data + } + + fn name(&self) -> String { + "LiveVarAnalysisProcessor".to_owned() + } +} + +impl LiveVarAnalysisProcessor { + /// Registers annotation formatter at the given function target. This is for debugging and + /// testing. + pub fn register_formatters(target: &FunctionTarget) { + target.register_annotation_formatter(Box::new(livevar_analysis::format_livevar_annotation)) + } +} diff --git a/third_party/move/move-compiler-v2/src/pipeline/mod.rs b/third_party/move/move-compiler-v2/src/pipeline/mod.rs new file mode 100644 index 0000000000000..fcbb3faa2cdb6 --- /dev/null +++ b/third_party/move/move-compiler-v2/src/pipeline/mod.rs @@ -0,0 +1,4 @@ +// Copyright © Aptos Foundation +// Parts of the project are originally copyright © Meta Platforms, Inc. +// SPDX-License-Identifier: Apache-2.0 +pub mod livevar_analysis_processor; diff --git a/third_party/move/move-compiler-v2/tests/bytecode-generator/reference_conversion.exp b/third_party/move/move-compiler-v2/tests/bytecode-generator/reference_conversion.exp new file mode 100644 index 0000000000000..a9fcef47dcab8 --- /dev/null +++ b/third_party/move/move-compiler-v2/tests/bytecode-generator/reference_conversion.exp @@ -0,0 +1,46 @@ +// ---- Model Dump +module 0x42::reference_conversion { + private fun deref(r: &u64) { + Deref(r) + } + private fun use_it() { + { + let x: u64 = 42; + { + let r: &mut u64 = Borrow(Mutable)(x); + r = 43; + reference_conversion::deref(r) + } + } + } +} // end 0x42::reference_conversion + +============ initial bytecode ================ + +[variant baseline] +fun reference_conversion::deref($t0: &u64): u64 { + var $t1: u64 + 0: $t1 := read_ref($t0) + 1: return $t1 +} + + +[variant baseline] +fun reference_conversion::use_it(): u64 { + var $t0: u64 + var $t1: u64 + var $t2: u64 + var $t3: &mut u64 + var $t4: &mut u64 + var $t5: u64 + var $t6: &u64 + 0: $t2 := 42 + 1: $t1 := move($t2) + 2: $t4 := borrow_local($t1) + 3: $t3 := move($t4) + 4: $t5 := 43 + 5: write_ref($t3, $t5) + 6: $t6 := freeze_ref($t3) + 7: $t0 := reference_conversion::deref($t6) + 8: return $t0 +} diff --git a/third_party/move/move-compiler-v2/tests/bytecode-generator/reference_conversion.move b/third_party/move/move-compiler-v2/tests/bytecode-generator/reference_conversion.move new file mode 100644 index 0000000000000..d22d3d256f4c5 --- /dev/null +++ b/third_party/move/move-compiler-v2/tests/bytecode-generator/reference_conversion.move @@ -0,0 +1,15 @@ +module 0x42::reference_conversion { + + fun deref(r: &u64): u64 { + *r + } + + fun use_it(): u64 { + let x = 42; + let r = &mut x; + *r = 43; + deref(r) + } + + +} diff --git a/third_party/move/move-compiler-v2/tests/file-format-generator/assign.exp b/third_party/move/move-compiler-v2/tests/file-format-generator/assign.exp index 0ebc48e81c396..1e9e054b55620 100644 --- a/third_party/move/move-compiler-v2/tests/file-format-generator/assign.exp +++ b/third_party/move/move-compiler-v2/tests/file-format-generator/assign.exp @@ -43,6 +43,67 @@ fun assign::assign_struct($t0: &mut assign::S) { 5: return () } +============ after LiveVarAnalysisProcessor: ================ + +[variant baseline] +fun assign::assign_field($t0: &mut assign::S, $t1: u64) { + var $t2: &mut u64 + # live vars: $t0, $t1 + 0: $t2 := 
borrow_field.f($t0) + # live vars: $t1, $t2 + 1: write_ref($t2, $t1) + # live vars: + 2: return () +} + + +[variant baseline] +fun assign::assign_int($t0: &mut u64) { + var $t1: u64 + # live vars: $t0 + 0: $t1 := 42 + # live vars: $t0, $t1 + 1: write_ref($t0, $t1) + # live vars: + 2: return () +} + + +[variant baseline] +fun assign::assign_pattern($t0: assign::S, $t1: u64, $t2: u64): u64 { + var $t3: u64 + var $t4: assign::T + # live vars: $t0 + 0: ($t1, $t4) := unpack assign::S($t0) + # live vars: $t1, $t4 + 1: $t2 := unpack assign::T($t4) + # live vars: $t1, $t2 + 2: $t3 := +($t1, $t2) + # live vars: $t3 + 3: return $t3 +} + + +[variant baseline] +fun assign::assign_struct($t0: &mut assign::S) { + var $t1: assign::S + var $t2: u64 + var $t3: assign::T + var $t4: u64 + # live vars: $t0 + 0: $t2 := 42 + # live vars: $t0, $t2 + 1: $t4 := 42 + # live vars: $t0, $t2, $t4 + 2: $t3 := pack assign::T($t4) + # live vars: $t0, $t2, $t3 + 3: $t1 := pack assign::S($t2, $t3) + # live vars: $t0, $t1 + 4: write_ref($t0, $t1) + # live vars: + 5: return () +} + ============ disassembled file-format ================== // Move bytecode v6 @@ -57,18 +118,18 @@ struct S { assign_field(Arg0: &mut S, Arg1: u64) { B0: - 0: CopyLoc[0](Arg0: &mut S) + 0: MoveLoc[0](Arg0: &mut S) 1: MutBorrowField[0](S.f: u64) 2: StLoc[2](loc0: &mut u64) - 3: CopyLoc[1](Arg1: u64) - 4: CopyLoc[2](loc0: &mut u64) + 3: MoveLoc[1](Arg1: u64) + 4: MoveLoc[2](loc0: &mut u64) 5: WriteRef 6: Ret } assign_int(Arg0: &mut u64) { B0: 0: LdConst[0](U64: [42, 0, 0, 0, 0, 0, 0, 0]) - 1: CopyLoc[0](Arg0: &mut u64) + 1: MoveLoc[0](Arg0: &mut u64) 2: WriteRef 3: Ret } @@ -86,7 +147,7 @@ B0: 1: LdConst[0](U64: [42, 0, 0, 0, 0, 0, 0, 0]) 2: Pack[0](T) 3: Pack[1](S) - 4: CopyLoc[0](Arg0: &mut S) + 4: MoveLoc[0](Arg0: &mut S) 5: WriteRef 6: Ret } diff --git a/third_party/move/move-compiler-v2/tests/file-format-generator/borrow.exp b/third_party/move/move-compiler-v2/tests/file-format-generator/borrow.exp index 41caecaea2cc3..d547c785ce92a 100644 --- a/third_party/move/move-compiler-v2/tests/file-format-generator/borrow.exp +++ b/third_party/move/move-compiler-v2/tests/file-format-generator/borrow.exp @@ -88,6 +88,130 @@ fun borrow::mut_param($t0: u64): u64 { 5: return $t1 } +============ after LiveVarAnalysisProcessor: ================ + +[variant baseline] +fun borrow::field($t0: &borrow::S): u64 { + var $t1: u64 + var $t2: &u64 + var $t3: &u64 + # live vars: $t0 + 0: $t3 := borrow_field.f($t0) + # live vars: $t3 + 1: $t2 := move($t3) + # live vars: $t2 + 2: $t1 := read_ref($t2) + # live vars: $t1 + 3: return $t1 +} + + +[variant baseline] +fun borrow::local($t0: u64): u64 { + var $t1: u64 + var $t2: u64 + var $t3: u64 + var $t4: &u64 + var $t5: &u64 + # live vars: + 0: $t3 := 33 + # live vars: $t3 + 1: $t2 := move($t3) + # live vars: $t2 + 2: $t5 := borrow_local($t2) + # live vars: $t5 + 3: $t4 := move($t5) + # live vars: $t4 + 4: $t1 := read_ref($t4) + # live vars: $t1 + 5: return $t1 +} + + +[variant baseline] +fun borrow::param($t0: u64): u64 { + var $t1: u64 + var $t2: &u64 + var $t3: &u64 + # live vars: $t0 + 0: $t3 := borrow_local($t0) + # live vars: $t3 + 1: $t2 := move($t3) + # live vars: $t2 + 2: $t1 := read_ref($t2) + # live vars: $t1 + 3: return $t1 +} + + +[variant baseline] +fun borrow::mut_field($t0: &mut borrow::S): u64 { + var $t1: u64 + var $t2: &mut u64 + var $t3: &mut u64 + var $t4: u64 + # live vars: $t0 + 0: $t3 := borrow_field.f($t0) + # live vars: $t3 + 1: $t2 := move($t3) + # live vars: $t2 + 2: $t4 := 22 + # live vars: $t2, 
$t4 + 3: write_ref($t2, $t4) + # live vars: $t2 + 4: $t1 := read_ref($t2) + # live vars: $t1 + 5: return $t1 +} + + +[variant baseline] +fun borrow::mut_local($t0: u64): u64 { + var $t1: u64 + var $t2: u64 + var $t3: u64 + var $t4: &mut u64 + var $t5: &mut u64 + var $t6: u64 + # live vars: + 0: $t3 := 33 + # live vars: $t3 + 1: $t2 := move($t3) + # live vars: $t2 + 2: $t5 := borrow_local($t2) + # live vars: $t5 + 3: $t4 := move($t5) + # live vars: $t4 + 4: $t6 := 22 + # live vars: $t4, $t6 + 5: write_ref($t4, $t6) + # live vars: $t4 + 6: $t1 := read_ref($t4) + # live vars: $t1 + 7: return $t1 +} + + +[variant baseline] +fun borrow::mut_param($t0: u64): u64 { + var $t1: u64 + var $t2: &mut u64 + var $t3: &mut u64 + var $t4: u64 + # live vars: $t0 + 0: $t3 := borrow_local($t0) + # live vars: $t3 + 1: $t2 := move($t3) + # live vars: $t2 + 2: $t4 := 22 + # live vars: $t2, $t4 + 3: write_ref($t2, $t4) + # live vars: $t2 + 4: $t1 := read_ref($t2) + # live vars: $t1 + 5: return $t1 +} + ============ disassembled file-format ================== // Move bytecode v6 @@ -98,10 +222,10 @@ struct S { field(Arg0: &S): u64 { B0: - 0: CopyLoc[0](Arg0: &S) + 0: MoveLoc[0](Arg0: &S) 1: ImmBorrowField[0](S.f: u64) 2: StLoc[1](loc0: &u64) - 3: CopyLoc[1](loc0: &u64) + 3: MoveLoc[1](loc0: &u64) 4: ReadRef 5: Ret } @@ -112,7 +236,7 @@ B0: 1: StLoc[1](loc0: u64) 2: ImmBorrowLoc[1](loc0: u64) 3: StLoc[2](loc1: &u64) - 4: CopyLoc[2](loc1: &u64) + 4: MoveLoc[2](loc1: &u64) 5: ReadRef 6: Ret } @@ -120,19 +244,19 @@ param(Arg0: u64): u64 { B0: 0: ImmBorrowLoc[0](Arg0: u64) 1: StLoc[1](loc0: &u64) - 2: CopyLoc[1](loc0: &u64) + 2: MoveLoc[1](loc0: &u64) 3: ReadRef 4: Ret } mut_field(Arg0: &mut S): u64 { B0: - 0: CopyLoc[0](Arg0: &mut S) + 0: MoveLoc[0](Arg0: &mut S) 1: MutBorrowField[0](S.f: u64) 2: StLoc[1](loc0: &mut u64) 3: LdConst[1](U64: [22, 0, 0, 0, 0, 0, 0, 0]) 4: CopyLoc[1](loc0: &mut u64) 5: WriteRef - 6: CopyLoc[1](loc0: &mut u64) + 6: MoveLoc[1](loc0: &mut u64) 7: ReadRef 8: Ret } @@ -146,7 +270,7 @@ B0: 4: LdConst[1](U64: [22, 0, 0, 0, 0, 0, 0, 0]) 5: CopyLoc[2](loc1: &mut u64) 6: WriteRef - 7: CopyLoc[2](loc1: &mut u64) + 7: MoveLoc[2](loc1: &mut u64) 8: ReadRef 9: Ret } @@ -157,7 +281,7 @@ B0: 2: LdConst[1](U64: [22, 0, 0, 0, 0, 0, 0, 0]) 3: CopyLoc[1](loc0: &mut u64) 4: WriteRef - 5: CopyLoc[1](loc0: &mut u64) + 5: MoveLoc[1](loc0: &mut u64) 6: ReadRef 7: Ret } diff --git a/third_party/move/move-compiler-v2/tests/file-format-generator/fields.exp b/third_party/move/move-compiler-v2/tests/file-format-generator/fields.exp index 68c3894490319..ec1d5d21bf434 100644 --- a/third_party/move/move-compiler-v2/tests/file-format-generator/fields.exp +++ b/third_party/move/move-compiler-v2/tests/file-format-generator/fields.exp @@ -111,6 +111,165 @@ fun fields::write_val($t0: fields::S): fields::S { 6: return $t1 } +============ after LiveVarAnalysisProcessor: ================ + +[variant baseline] +fun fields::read_ref($t0: &fields::S): u64 { + var $t1: u64 + var $t2: &fields::T + var $t3: &u64 + # live vars: $t0 + 0: $t2 := borrow_field.g($t0) + # live vars: $t2 + 1: $t3 := borrow_field.h($t2) + # live vars: $t3 + 2: $t1 := read_ref($t3) + # live vars: $t1 + 3: return $t1 +} + + +[variant baseline] +fun fields::read_val($t0: fields::S): u64 { + var $t1: u64 + var $t2: &fields::T + var $t3: &fields::S + var $t4: &u64 + # live vars: $t0 + 0: $t3 := borrow_local($t0) + # live vars: $t3 + 1: $t2 := borrow_field.g($t3) + # live vars: $t2 + 2: $t4 := borrow_field.h($t2) + # live vars: $t4 + 3: $t1 := read_ref($t4) + # 
live vars: $t1 + 4: return $t1 +} + + +[variant baseline] +fun fields::write_local_direct(): fields::S { + var $t0: fields::S + var $t1: fields::S + var $t2: fields::S + var $t3: u64 + var $t4: fields::T + var $t5: u64 + var $t6: u64 + var $t7: &mut u64 + var $t8: &mut fields::T + var $t9: &mut fields::S + # live vars: + 0: $t3 := 0 + # live vars: $t3 + 1: $t5 := 0 + # live vars: $t3, $t5 + 2: $t4 := pack fields::T($t5) + # live vars: $t3, $t4 + 3: $t2 := pack fields::S($t3, $t4) + # live vars: $t2 + 4: $t1 := move($t2) + # live vars: $t1 + 5: $t6 := 42 + # live vars: $t1, $t6 + 6: $t9 := borrow_local($t1) + # live vars: $t1, $t6, $t9 + 7: $t8 := borrow_field.g($t9) + # live vars: $t1, $t6, $t8 + 8: $t7 := borrow_field.h($t8) + # live vars: $t1, $t6, $t7 + 9: write_ref($t7, $t6) + # live vars: $t1 + 10: $t0 := move($t1) + # live vars: $t0 + 11: return $t0 +} + + +[variant baseline] +fun fields::write_local_via_ref(): fields::S { + var $t0: fields::S + var $t1: fields::S + var $t2: fields::S + var $t3: u64 + var $t4: fields::T + var $t5: u64 + var $t6: &mut fields::S + var $t7: &mut fields::S + var $t8: u64 + var $t9: &mut u64 + var $t10: &mut fields::T + # live vars: + 0: $t3 := 0 + # live vars: $t3 + 1: $t5 := 0 + # live vars: $t3, $t5 + 2: $t4 := pack fields::T($t5) + # live vars: $t3, $t4 + 3: $t2 := pack fields::S($t3, $t4) + # live vars: $t2 + 4: $t1 := move($t2) + # live vars: $t1 + 5: $t7 := borrow_local($t1) + # live vars: $t1, $t7 + 6: $t6 := move($t7) + # live vars: $t1, $t6 + 7: $t8 := 42 + # live vars: $t1, $t6, $t8 + 8: $t10 := borrow_field.g($t6) + # live vars: $t1, $t8, $t10 + 9: $t9 := borrow_field.h($t10) + # live vars: $t1, $t8, $t9 + 10: write_ref($t9, $t8) + # live vars: $t1 + 11: $t0 := move($t1) + # live vars: $t0 + 12: return $t0 +} + + +[variant baseline] +fun fields::write_param($t0: &mut fields::S) { + var $t1: u64 + var $t2: &mut u64 + var $t3: &mut fields::T + # live vars: $t0 + 0: $t1 := 42 + # live vars: $t0, $t1 + 1: $t3 := borrow_field.g($t0) + # live vars: $t1, $t3 + 2: $t2 := borrow_field.h($t3) + # live vars: $t1, $t2 + 3: write_ref($t2, $t1) + # live vars: + 4: return () +} + + +[variant baseline] +fun fields::write_val($t0: fields::S): fields::S { + var $t1: fields::S + var $t2: u64 + var $t3: &mut u64 + var $t4: &mut fields::T + var $t5: &mut fields::S + # live vars: $t0 + 0: $t2 := 42 + # live vars: $t0, $t2 + 1: $t5 := borrow_local($t0) + # live vars: $t0, $t2, $t5 + 2: $t4 := borrow_field.g($t5) + # live vars: $t0, $t2, $t4 + 3: $t3 := borrow_field.h($t4) + # live vars: $t0, $t2, $t3 + 4: write_ref($t3, $t2) + # live vars: $t0 + 5: $t1 := move($t0) + # live vars: $t1 + 6: return $t1 +} + ============ disassembled file-format ================== // Move bytecode v6 @@ -125,7 +284,7 @@ struct S { read_ref(Arg0: &S): u64 { B0: - 0: CopyLoc[0](Arg0: &S) + 0: MoveLoc[0](Arg0: &S) 1: ImmBorrowField[0](S.g: T) 2: ImmBorrowField[1](T.h: u64) 3: ReadRef @@ -171,7 +330,7 @@ B0: 5: MutBorrowLoc[0](loc0: S) 6: StLoc[1](loc1: &mut S) 7: LdConst[1](U64: [42, 0, 0, 0, 0, 0, 0, 0]) - 8: CopyLoc[1](loc1: &mut S) + 8: MoveLoc[1](loc1: &mut S) 9: MutBorrowField[0](S.g: T) 10: MutBorrowField[1](T.h: u64) 11: WriteRef @@ -183,7 +342,7 @@ B0: write_param(Arg0: &mut S) { B0: 0: LdConst[1](U64: [42, 0, 0, 0, 0, 0, 0, 0]) - 1: CopyLoc[0](Arg0: &mut S) + 1: MoveLoc[0](Arg0: &mut S) 2: MutBorrowField[0](S.g: T) 3: MutBorrowField[1](T.h: u64) 4: WriteRef diff --git a/third_party/move/move-compiler-v2/tests/file-format-generator/generic_call.exp 
b/third_party/move/move-compiler-v2/tests/file-format-generator/generic_call.exp new file mode 100644 index 0000000000000..8c072a9559fe7 --- /dev/null +++ b/third_party/move/move-compiler-v2/tests/file-format-generator/generic_call.exp @@ -0,0 +1,58 @@ +============ initial bytecode ================ + +[variant baseline] +fun Test::foo($t0: u64): u64 { + var $t1: u64 + 0: $t1 := Test::identity($t0) + 1: return $t1 +} + + +[variant baseline] +fun Test::identity<#0>($t0: #0): #0 { + var $t1: #0 + 0: $t1 := move($t0) + 1: return $t1 +} + +============ after LiveVarAnalysisProcessor: ================ + +[variant baseline] +fun Test::foo($t0: u64): u64 { + var $t1: u64 + # live vars: $t0 + 0: $t1 := Test::identity($t0) + # live vars: $t1 + 1: return $t1 +} + + +[variant baseline] +fun Test::identity<#0>($t0: #0): #0 { + var $t1: #0 + # live vars: $t0 + 0: $t1 := move($t0) + # live vars: $t1 + 1: return $t1 +} + + +============ disassembled file-format ================== +// Move bytecode v6 +module 42.Test { + + +foo(Arg0: u64): u64 { +B0: + 0: MoveLoc[0](Arg0: u64) + 1: Call identity(u64): u64 + 2: Ret +} +identity(Arg0: Ty0): Ty0 { +B0: + 0: MoveLoc[0](Arg0: Ty0) + 1: StLoc[1](loc0: Ty0) + 2: MoveLoc[1](loc0: Ty0) + 3: Ret +} +} diff --git a/third_party/move/move-compiler-v2/tests/file-format-generator/generic_call.move b/third_party/move/move-compiler-v2/tests/file-format-generator/generic_call.move new file mode 100644 index 0000000000000..350acc32f1d41 --- /dev/null +++ b/third_party/move/move-compiler-v2/tests/file-format-generator/generic_call.move @@ -0,0 +1,9 @@ +module 0x42::Test { + fun identity(x: T): T { + x + } + + fun foo(x: u64): u64 { + identity(x) + } +} diff --git a/third_party/move/move-compiler-v2/tests/file-format-generator/globals.exp b/third_party/move/move-compiler-v2/tests/file-format-generator/globals.exp index 10b0382ef49d1..f9cd8a9832544 100644 --- a/third_party/move/move-compiler-v2/tests/file-format-generator/globals.exp +++ b/third_party/move/move-compiler-v2/tests/file-format-generator/globals.exp @@ -49,6 +49,75 @@ fun globals::write($t0: address, $t1: u64): u64 { 6: return $t2 } +============ after LiveVarAnalysisProcessor: ================ + +[variant baseline] +fun globals::check($t0: address): bool { + var $t1: bool + # live vars: $t0 + 0: $t1 := exists($t0) + # live vars: $t1 + 1: return $t1 +} + + +[variant baseline] +fun globals::publish($t0: &signer) { + var $t1: globals::R + var $t2: u64 + # live vars: $t0 + 0: $t2 := 1 + # live vars: $t0, $t2 + 1: $t1 := pack globals::R($t2) + # live vars: $t0, $t1 + 2: move_to($t0, $t1) + # live vars: + 3: return () +} + + +[variant baseline] +fun globals::read($t0: address): u64 { + var $t1: u64 + var $t2: &globals::R + var $t3: &globals::R + var $t4: &u64 + # live vars: $t0 + 0: $t3 := borrow_global($t0) + # live vars: $t3 + 1: $t2 := move($t3) + # live vars: $t2 + 2: $t4 := borrow_field.f($t2) + # live vars: $t4 + 3: $t1 := read_ref($t4) + # live vars: $t1 + 4: return $t1 +} + + +[variant baseline] +fun globals::write($t0: address, $t1: u64): u64 { + var $t2: u64 + var $t3: &mut globals::R + var $t4: &mut globals::R + var $t5: u64 + var $t6: &mut u64 + # live vars: $t0 + 0: $t4 := borrow_global($t0) + # live vars: $t4 + 1: $t3 := move($t4) + # live vars: $t3 + 2: $t5 := 2 + # live vars: $t3, $t5 + 3: $t6 := borrow_field.f($t3) + # live vars: $t5, $t6 + 4: write_ref($t6, $t5) + # live vars: + 5: $t2 := 9 + # live vars: $t2 + 6: return $t2 +} + ============ disassembled file-format ================== // Move 
bytecode v6 @@ -59,7 +128,7 @@ struct R has store { check(Arg0: address): bool { B0: - 0: CopyLoc[0](Arg0: address) + 0: MoveLoc[0](Arg0: address) 1: Exists[0](R) 2: Ret } @@ -68,28 +137,28 @@ B0: 0: LdConst[0](U64: [1, 0, 0, 0, 0, 0, 0, 0]) 1: Pack[0](R) 2: StLoc[1](loc0: R) - 3: CopyLoc[0](Arg0: &signer) + 3: MoveLoc[0](Arg0: &signer) 4: MoveLoc[1](loc0: R) 5: MoveTo[0](R) 6: Ret } read(Arg0: address): u64 { B0: - 0: CopyLoc[0](Arg0: address) + 0: MoveLoc[0](Arg0: address) 1: ImmBorrowGlobal[0](R) 2: StLoc[1](loc0: &R) - 3: CopyLoc[1](loc0: &R) + 3: MoveLoc[1](loc0: &R) 4: ImmBorrowField[0](R.f: u64) 5: ReadRef 6: Ret } write(Arg0: address, Arg1: u64): u64 { B0: - 0: CopyLoc[0](Arg0: address) + 0: MoveLoc[0](Arg0: address) 1: MutBorrowGlobal[0](R) 2: StLoc[2](loc0: &mut R) 3: LdConst[1](U64: [2, 0, 0, 0, 0, 0, 0, 0]) - 4: CopyLoc[2](loc0: &mut R) + 4: MoveLoc[2](loc0: &mut R) 5: MutBorrowField[0](R.f: u64) 6: WriteRef 7: LdConst[2](U64: [9, 0, 0, 0, 0, 0, 0, 0]) diff --git a/third_party/move/move-compiler-v2/tests/file-format-generator/if_else.exp b/third_party/move/move-compiler-v2/tests/file-format-generator/if_else.exp index 6ca94d4797a51..63c1e573a82bf 100644 --- a/third_party/move/move-compiler-v2/tests/file-format-generator/if_else.exp +++ b/third_party/move/move-compiler-v2/tests/file-format-generator/if_else.exp @@ -51,6 +51,90 @@ fun if_else::if_else_nested($t0: bool, $t1: u64): u64 { 20: return $t2 } +============ after LiveVarAnalysisProcessor: ================ + +[variant baseline] +fun if_else::if_else($t0: bool, $t1: u64): u64 { + var $t2: u64 + var $t3: u64 + var $t4: u64 + # live vars: $t0, $t1 + 0: if ($t0) goto 1 else goto 5 + # live vars: $t1 + 1: label L0 + # live vars: $t1 + 2: $t3 := 1 + # live vars: $t1, $t3 + 3: $t2 := +($t1, $t3) + # live vars: $t2 + 4: goto 8 + # live vars: $t1 + 5: label L1 + # live vars: $t1 + 6: $t4 := 1 + # live vars: $t1, $t4 + 7: $t2 := -($t1, $t4) + # live vars: $t2 + 8: label L2 + # live vars: $t2 + 9: return $t2 +} + + +[variant baseline] +fun if_else::if_else_nested($t0: bool, $t1: u64): u64 { + var $t2: u64 + var $t3: bool + var $t4: u64 + var $t5: u64 + var $t6: u64 + var $t7: u64 + var $t8: u64 + var $t9: u64 + # live vars: $t0, $t1 + 0: if ($t0) goto 1 else goto 5 + # live vars: $t1 + 1: label L0 + # live vars: $t1 + 2: $t5 := 1 + # live vars: $t1, $t5 + 3: $t4 := +($t1, $t5) + # live vars: $t1, $t4 + 4: goto 8 + # live vars: $t1 + 5: label L1 + # live vars: $t1 + 6: $t6 := 1 + # live vars: $t1, $t6 + 7: $t4 := -($t1, $t6) + # live vars: $t1, $t4 + 8: label L2 + # live vars: $t1, $t4 + 9: $t7 := 10 + # live vars: $t1, $t4, $t7 + 10: $t3 := >($t4, $t7) + # live vars: $t1, $t3 + 11: if ($t3) goto 12 else goto 16 + # live vars: $t1 + 12: label L3 + # live vars: $t1 + 13: $t8 := 2 + # live vars: $t1, $t8 + 14: $t2 := *($t1, $t8) + # live vars: $t2 + 15: goto 19 + # live vars: $t1 + 16: label L4 + # live vars: $t1 + 17: $t9 := 2 + # live vars: $t1, $t9 + 18: $t2 := /($t1, $t9) + # live vars: $t2 + 19: label L5 + # live vars: $t2 + 20: return $t2 +} + ============ disassembled file-format ================== // Move bytecode v6 @@ -60,25 +144,25 @@ module 42.if_else { if_else(Arg0: bool, Arg1: u64): u64 { L0: loc2: u64 B0: - 0: CopyLoc[0](Arg0: bool) + 0: MoveLoc[0](Arg0: bool) 1: BrFalse(9) B1: 2: LdConst[0](U64: [1, 0, 0, 0, 0, 0, 0, 0]) 3: StLoc[2](loc0: u64) - 4: CopyLoc[1](Arg1: u64) - 5: CopyLoc[2](loc0: u64) + 4: MoveLoc[1](Arg1: u64) + 5: MoveLoc[2](loc0: u64) 6: Add 7: StLoc[3](loc1: u64) 8: Branch(15) B2: 9: LdConst[0](U64: [1, 
0, 0, 0, 0, 0, 0, 0]) 10: StLoc[4](loc2: u64) - 11: CopyLoc[1](Arg1: u64) - 12: CopyLoc[4](loc2: u64) + 11: MoveLoc[1](Arg1: u64) + 12: MoveLoc[4](loc2: u64) 13: Sub 14: StLoc[3](loc1: u64) B3: - 15: CopyLoc[3](loc1: u64) + 15: MoveLoc[3](loc1: u64) 16: Ret } if_else_nested(Arg0: bool, Arg1: u64): u64 { @@ -88,13 +172,13 @@ L2: loc4: u64 L3: loc5: u64 L4: loc6: u64 B0: - 0: CopyLoc[0](Arg0: bool) + 0: MoveLoc[0](Arg0: bool) 1: BrFalse(9) B1: 2: LdConst[0](U64: [1, 0, 0, 0, 0, 0, 0, 0]) 3: StLoc[2](loc0: u64) 4: CopyLoc[1](Arg1: u64) - 5: CopyLoc[2](loc0: u64) + 5: MoveLoc[2](loc0: u64) 6: Add 7: StLoc[3](loc1: u64) 8: Branch(15) @@ -102,33 +186,33 @@ B2: 9: LdConst[0](U64: [1, 0, 0, 0, 0, 0, 0, 0]) 10: StLoc[4](loc2: u64) 11: CopyLoc[1](Arg1: u64) - 12: CopyLoc[4](loc2: u64) + 12: MoveLoc[4](loc2: u64) 13: Sub 14: StLoc[3](loc1: u64) B3: 15: LdConst[1](U64: [10, 0, 0, 0, 0, 0, 0, 0]) 16: StLoc[5](loc3: u64) - 17: CopyLoc[3](loc1: u64) - 18: CopyLoc[5](loc3: u64) + 17: MoveLoc[3](loc1: u64) + 18: MoveLoc[5](loc3: u64) 19: Gt 20: BrFalse(28) B4: 21: LdConst[2](U64: [2, 0, 0, 0, 0, 0, 0, 0]) 22: StLoc[6](loc4: u64) - 23: CopyLoc[1](Arg1: u64) - 24: CopyLoc[6](loc4: u64) + 23: MoveLoc[1](Arg1: u64) + 24: MoveLoc[6](loc4: u64) 25: Mul 26: StLoc[7](loc5: u64) 27: Branch(34) B5: 28: LdConst[2](U64: [2, 0, 0, 0, 0, 0, 0, 0]) 29: StLoc[8](loc6: u64) - 30: CopyLoc[1](Arg1: u64) - 31: CopyLoc[8](loc6: u64) + 30: MoveLoc[1](Arg1: u64) + 31: MoveLoc[8](loc6: u64) 32: Div 33: StLoc[7](loc5: u64) B6: - 34: CopyLoc[7](loc5: u64) + 34: MoveLoc[7](loc5: u64) 35: Ret } } diff --git a/third_party/move/move-compiler-v2/tests/file-format-generator/loop.exp b/third_party/move/move-compiler-v2/tests/file-format-generator/loop.exp index b1f56c601e389..760bb0facc992 100644 --- a/third_party/move/move-compiler-v2/tests/file-format-generator/loop.exp +++ b/third_party/move/move-compiler-v2/tests/file-format-generator/loop.exp @@ -117,6 +117,205 @@ fun loops::while_loop_with_break_and_continue($t0: u64): u64 { 31: return $t1 } +============ after LiveVarAnalysisProcessor: ================ + +[variant baseline] +fun loops::nested_loop($t0: u64): u64 { + var $t1: u64 + var $t2: bool + var $t3: u64 + var $t4: bool + var $t5: u64 + var $t6: u64 + var $t7: u64 + var $t8: u64 + var $t9: u64 + # live vars: $t0 + 0: label L0 + # live vars: $t0 + 1: $t3 := 0 + # live vars: $t0, $t3 + 2: $t2 := >($t0, $t3) + # live vars: $t0, $t2 + 3: if ($t2) goto 4 else goto 25 + # live vars: $t0 + 4: label L2 + # live vars: $t0 + 5: label L5 + # live vars: $t0 + 6: $t5 := 10 + # live vars: $t0, $t5 + 7: $t4 := >($t0, $t5) + # live vars: $t0, $t4 + 8: if ($t4) goto 9 else goto 15 + # live vars: $t0 + 9: label L7 + # live vars: $t0 + 10: $t7 := 1 + # live vars: $t0, $t7 + 11: $t6 := -($t0, $t7) + # live vars: $t6 + 12: $t0 := move($t6) + # live vars: $t0 + 13: goto 19 + # live vars: $t0 + 14: goto 17 + # live vars: $t0 + 15: label L8 + # live vars: $t0 + 16: goto 19 + # live vars: $t0 + 17: label L9 + # live vars: $t0 + 18: goto 5 + # live vars: $t0 + 19: label L6 + # live vars: $t0 + 20: $t9 := 1 + # live vars: $t0, $t9 + 21: $t8 := -($t0, $t9) + # live vars: $t8 + 22: $t0 := move($t8) + # live vars: $t0 + 23: goto 0 + # live vars: $t0 + 24: goto 27 + # live vars: $t0 + 25: label L3 + # live vars: $t0 + 26: goto 29 + # live vars: $t0 + 27: label L4 + # live vars: $t0 + 28: goto 0 + # live vars: $t0 + 29: label L1 + # live vars: $t0 + 30: $t1 := move($t0) + # live vars: $t1 + 31: return $t1 +} + + +[variant baseline] +fun 
loops::while_loop($t0: u64): u64 { + var $t1: u64 + var $t2: bool + var $t3: u64 + var $t4: u64 + var $t5: u64 + # live vars: $t0 + 0: label L0 + # live vars: $t0 + 1: $t3 := 0 + # live vars: $t0, $t3 + 2: $t2 := >($t0, $t3) + # live vars: $t0, $t2 + 3: if ($t2) goto 4 else goto 9 + # live vars: $t0 + 4: label L2 + # live vars: $t0 + 5: $t5 := 1 + # live vars: $t0, $t5 + 6: $t4 := -($t0, $t5) + # live vars: $t4 + 7: $t0 := move($t4) + # live vars: $t0 + 8: goto 11 + # live vars: $t0 + 9: label L3 + # live vars: $t0 + 10: goto 13 + # live vars: $t0 + 11: label L4 + # live vars: $t0 + 12: goto 0 + # live vars: $t0 + 13: label L1 + # live vars: $t0 + 14: $t1 := move($t0) + # live vars: $t1 + 15: return $t1 +} + + +[variant baseline] +fun loops::while_loop_with_break_and_continue($t0: u64): u64 { + var $t1: u64 + var $t2: bool + var $t3: u64 + var $t4: bool + var $t5: u64 + var $t6: bool + var $t7: u64 + var $t8: u64 + var $t9: u64 + # live vars: $t0 + 0: label L0 + # live vars: $t0 + 1: $t3 := 0 + # live vars: $t0, $t3 + 2: $t2 := >($t0, $t3) + # live vars: $t0, $t2 + 3: if ($t2) goto 4 else goto 25 + # live vars: $t0 + 4: label L2 + # live vars: $t0 + 5: $t5 := 42 + # live vars: $t0, $t5 + 6: $t4 := ==($t0, $t5) + # live vars: $t0, $t4 + 7: if ($t4) goto 8 else goto 11 + # live vars: $t0 + 8: label L5 + # live vars: $t0 + 9: goto 29 + # live vars: $t0 + 10: goto 12 + # live vars: $t0 + 11: label L6 + # live vars: $t0 + 12: label L7 + # live vars: $t0 + 13: $t7 := 21 + # live vars: $t0, $t7 + 14: $t6 := ==($t0, $t7) + # live vars: $t0, $t6 + 15: if ($t6) goto 16 else goto 19 + # live vars: $t0 + 16: label L8 + # live vars: $t0 + 17: goto 0 + # live vars: $t0 + 18: goto 20 + # live vars: $t0 + 19: label L9 + # live vars: $t0 + 20: label L10 + # live vars: $t0 + 21: $t9 := 1 + # live vars: $t0, $t9 + 22: $t8 := -($t0, $t9) + # live vars: $t8 + 23: $t0 := move($t8) + # live vars: $t0 + 24: goto 27 + # live vars: $t0 + 25: label L3 + # live vars: $t0 + 26: goto 29 + # live vars: $t0 + 27: label L4 + # live vars: $t0 + 28: goto 0 + # live vars: $t0 + 29: label L1 + # live vars: $t0 + 30: $t1 := move($t0) + # live vars: $t1 + 31: return $t1 +} + ============ disassembled file-format ================== // Move bytecode v6 @@ -132,21 +331,21 @@ B0: 0: LdConst[0](U64: [0, 0, 0, 0, 0, 0, 0, 0]) 1: StLoc[1](loc0: u64) 2: CopyLoc[0](Arg0: u64) - 3: CopyLoc[1](loc0: u64) + 3: MoveLoc[1](loc0: u64) 4: Gt 5: BrFalse(30) B1: 6: LdConst[1](U64: [10, 0, 0, 0, 0, 0, 0, 0]) 7: StLoc[2](loc1: u64) 8: CopyLoc[0](Arg0: u64) - 9: CopyLoc[2](loc1: u64) + 9: MoveLoc[2](loc1: u64) 10: Gt 11: BrFalse(20) B2: 12: LdConst[2](U64: [1, 0, 0, 0, 0, 0, 0, 0]) 13: StLoc[3](loc2: u64) - 14: CopyLoc[0](Arg0: u64) - 15: CopyLoc[3](loc2: u64) + 14: MoveLoc[0](Arg0: u64) + 15: MoveLoc[3](loc2: u64) 16: Sub 17: StLoc[0](Arg0: u64) 18: Branch(22) @@ -159,8 +358,8 @@ B5: B6: 22: LdConst[2](U64: [1, 0, 0, 0, 0, 0, 0, 0]) 23: StLoc[4](loc3: u64) - 24: CopyLoc[0](Arg0: u64) - 25: CopyLoc[4](loc3: u64) + 24: MoveLoc[0](Arg0: u64) + 25: MoveLoc[4](loc3: u64) 26: Sub 27: StLoc[0](Arg0: u64) 28: Branch(0) @@ -171,9 +370,9 @@ B8: B9: 31: Branch(0) B10: - 32: CopyLoc[0](Arg0: u64) + 32: MoveLoc[0](Arg0: u64) 33: StLoc[5](loc4: u64) - 34: CopyLoc[5](loc4: u64) + 34: MoveLoc[5](loc4: u64) 35: Ret } while_loop(Arg0: u64): u64 { @@ -183,14 +382,14 @@ B0: 0: LdConst[0](U64: [0, 0, 0, 0, 0, 0, 0, 0]) 1: StLoc[1](loc0: u64) 2: CopyLoc[0](Arg0: u64) - 3: CopyLoc[1](loc0: u64) + 3: MoveLoc[1](loc0: u64) 4: Gt 5: BrFalse(13) B1: 6: LdConst[2](U64: [1, 
0, 0, 0, 0, 0, 0, 0]) 7: StLoc[2](loc1: u64) - 8: CopyLoc[0](Arg0: u64) - 9: CopyLoc[2](loc1: u64) + 8: MoveLoc[0](Arg0: u64) + 9: MoveLoc[2](loc1: u64) 10: Sub 11: StLoc[0](Arg0: u64) 12: Branch(14) @@ -199,9 +398,9 @@ B2: B3: 14: Branch(0) B4: - 15: CopyLoc[0](Arg0: u64) + 15: MoveLoc[0](Arg0: u64) 16: StLoc[3](loc2: u64) - 17: CopyLoc[3](loc2: u64) + 17: MoveLoc[3](loc2: u64) 18: Ret } while_loop_with_break_and_continue(Arg0: u64): u64 { @@ -213,14 +412,14 @@ B0: 0: LdConst[0](U64: [0, 0, 0, 0, 0, 0, 0, 0]) 1: StLoc[1](loc0: u64) 2: CopyLoc[0](Arg0: u64) - 3: CopyLoc[1](loc0: u64) + 3: MoveLoc[1](loc0: u64) 4: Gt 5: BrFalse(29) B1: 6: LdConst[3](U64: [42, 0, 0, 0, 0, 0, 0, 0]) 7: StLoc[2](loc1: u64) 8: CopyLoc[0](Arg0: u64) - 9: CopyLoc[2](loc1: u64) + 9: MoveLoc[2](loc1: u64) 10: Eq 11: BrFalse(14) B2: @@ -231,7 +430,7 @@ B4: 14: LdConst[4](U64: [21, 0, 0, 0, 0, 0, 0, 0]) 15: StLoc[3](loc2: u64) 16: CopyLoc[0](Arg0: u64) - 17: CopyLoc[3](loc2: u64) + 17: MoveLoc[3](loc2: u64) 18: Eq 19: BrFalse(22) B5: @@ -241,8 +440,8 @@ B6: B7: 22: LdConst[2](U64: [1, 0, 0, 0, 0, 0, 0, 0]) 23: StLoc[4](loc3: u64) - 24: CopyLoc[0](Arg0: u64) - 25: CopyLoc[4](loc3: u64) + 24: MoveLoc[0](Arg0: u64) + 25: MoveLoc[4](loc3: u64) 26: Sub 27: StLoc[0](Arg0: u64) 28: Branch(30) @@ -251,9 +450,9 @@ B8: B9: 30: Branch(0) B10: - 31: CopyLoc[0](Arg0: u64) + 31: MoveLoc[0](Arg0: u64) 32: StLoc[5](loc4: u64) - 33: CopyLoc[5](loc4: u64) + 33: MoveLoc[5](loc4: u64) 34: Ret } } diff --git a/third_party/move/move-compiler-v2/tests/file-format-generator/operators.exp b/third_party/move/move-compiler-v2/tests/file-format-generator/operators.exp index 9eab27dca498f..d94be5c5553ea 100644 --- a/third_party/move/move-compiler-v2/tests/file-format-generator/operators.exp +++ b/third_party/move/move-compiler-v2/tests/file-format-generator/operators.exp @@ -1,27 +1,553 @@ +============ initial bytecode ================ -Diagnostics: -error: no matching declaration of `<<` - ┌─ tests/file-format-generator/operators.move:7:9 - │ -7 │ x << y & x | y >> x ^ y - │ ^^^^^^ - │ - = outruled candidate `<<(u8, u8): u8` (expected `u8` but found `u64` for argument 1) - = outruled candidate `<<(u16, u8): u16` (expected `u16` but found `u64` for argument 1) - = outruled candidate `<<(u32, u8): u32` (expected `u32` but found `u64` for argument 1) - = outruled candidate `<<(u64, u8): u64` (expected `u8` but found `u64` for argument 2) - = outruled candidate `<<(u128, u8): u128` (expected `u128` but found `u64` for argument 1) - = outruled candidate `<<(u256, u8): u256` (expected `u256` but found `u64` for argument 1) - -error: no matching declaration of `>>` - ┌─ tests/file-format-generator/operators.move:7:22 - │ -7 │ x << y & x | y >> x ^ y - │ ^^^^^^ - │ - = outruled candidate `>>(u8, u8): u8` (expected `u8` but found `u64` for argument 1) - = outruled candidate `>>(u16, u8): u16` (expected `u16` but found `u64` for argument 1) - = outruled candidate `>>(u32, u8): u32` (expected `u32` but found `u64` for argument 1) - = outruled candidate `>>(u64, u8): u64` (expected `u8` but found `u64` for argument 2) - = outruled candidate `>>(u128, u8): u128` (expected `u128` but found `u64` for argument 1) - = outruled candidate `>>(u256, u8): u256` (expected `u256` but found `u64` for argument 1) +[variant baseline] +fun operators::arithm($t0: u64, $t1: u64): u64 { + var $t2: u64 + var $t3: u64 + var $t4: u64 + var $t5: u64 + var $t6: u64 + 0: $t6 := -($t0, $t1) + 1: $t5 := /($t1, $t6) + 2: $t4 := *($t5, $t1) + 3: $t3 := %($t4, $t0) + 4: $t2 := +($t0, 
$t3) + 5: return $t2 +} + + +[variant baseline] +fun operators::bits($t0: u64, $t1: u8): u64 { + var $t2: u64 + var $t3: u64 + var $t4: u64 + var $t5: u64 + var $t6: u64 + 0: $t4 := <<($t0, $t1) + 1: $t3 := &($t4, $t0) + 2: $t6 := >>($t0, $t1) + 3: $t5 := ^($t6, $t0) + 4: $t2 := |($t3, $t5) + 5: return $t2 +} + + +[variant baseline] +fun operators::bools($t0: bool, $t1: bool): bool { + var $t2: bool + var $t3: bool + var $t4: bool + var $t5: bool + var $t6: bool + var $t7: bool + 0: if ($t0) goto 1 else goto 4 + 1: label L0 + 2: $t5 := move($t1) + 3: goto 6 + 4: label L1 + 5: $t5 := false + 6: label L2 + 7: if ($t5) goto 8 else goto 11 + 8: label L3 + 9: $t4 := true + 10: goto 19 + 11: label L4 + 12: if ($t0) goto 13 else goto 16 + 13: label L6 + 14: $t4 := !($t1) + 15: goto 18 + 16: label L7 + 17: $t4 := false + 18: label L8 + 19: label L5 + 20: if ($t4) goto 21 else goto 24 + 21: label L9 + 22: $t3 := true + 23: goto 33 + 24: label L10 + 25: $t6 := !($t0) + 26: if ($t6) goto 27 else goto 30 + 27: label L12 + 28: $t3 := move($t1) + 29: goto 32 + 30: label L13 + 31: $t3 := false + 32: label L14 + 33: label L11 + 34: if ($t3) goto 35 else goto 38 + 35: label L15 + 36: $t2 := true + 37: goto 47 + 38: label L16 + 39: $t7 := !($t0) + 40: if ($t7) goto 41 else goto 44 + 41: label L18 + 42: $t2 := !($t1) + 43: goto 46 + 44: label L19 + 45: $t2 := false + 46: label L20 + 47: label L17 + 48: return $t2 +} + + +[variant baseline] +fun operators::equality<#0>($t0: #0, $t1: #0): bool { + var $t2: bool + 0: $t2 := ==($t0, $t1) + 1: return $t2 +} + + +[variant baseline] +fun operators::inequality<#0>($t0: #0, $t1: #0): bool { + var $t2: bool + 0: $t2 := !=($t0, $t1) + 1: return $t2 +} + + +[variant baseline] +fun operators::order($t0: u64, $t1: u64): bool { + var $t2: bool + var $t3: bool + var $t4: bool + var $t5: bool + var $t6: bool + var $t7: bool + 0: $t5 := <($t0, $t1) + 1: if ($t5) goto 2 else goto 5 + 2: label L0 + 3: $t4 := <=($t0, $t1) + 4: goto 7 + 5: label L1 + 6: $t4 := false + 7: label L2 + 8: if ($t4) goto 9 else goto 13 + 9: label L3 + 10: $t6 := >($t0, $t1) + 11: $t3 := !($t6) + 12: goto 15 + 13: label L4 + 14: $t3 := false + 15: label L5 + 16: if ($t3) goto 17 else goto 21 + 17: label L6 + 18: $t7 := >=($t0, $t1) + 19: $t2 := !($t7) + 20: goto 23 + 21: label L7 + 22: $t2 := false + 23: label L8 + 24: return $t2 +} + +============ after LiveVarAnalysisProcessor: ================ + +[variant baseline] +fun operators::arithm($t0: u64, $t1: u64): u64 { + var $t2: u64 + var $t3: u64 + var $t4: u64 + var $t5: u64 + var $t6: u64 + # live vars: $t0, $t1 + 0: $t6 := -($t0, $t1) + # live vars: $t0, $t1, $t6 + 1: $t5 := /($t1, $t6) + # live vars: $t0, $t1, $t5 + 2: $t4 := *($t5, $t1) + # live vars: $t0, $t4 + 3: $t3 := %($t4, $t0) + # live vars: $t0, $t3 + 4: $t2 := +($t0, $t3) + # live vars: $t2 + 5: return $t2 +} + + +[variant baseline] +fun operators::bits($t0: u64, $t1: u8): u64 { + var $t2: u64 + var $t3: u64 + var $t4: u64 + var $t5: u64 + var $t6: u64 + # live vars: $t0, $t1 + 0: $t4 := <<($t0, $t1) + # live vars: $t0, $t1, $t4 + 1: $t3 := &($t4, $t0) + # live vars: $t0, $t1, $t3 + 2: $t6 := >>($t0, $t1) + # live vars: $t0, $t3, $t6 + 3: $t5 := ^($t6, $t0) + # live vars: $t3, $t5 + 4: $t2 := |($t3, $t5) + # live vars: $t2 + 5: return $t2 +} + + +[variant baseline] +fun operators::bools($t0: bool, $t1: bool): bool { + var $t2: bool + var $t3: bool + var $t4: bool + var $t5: bool + var $t6: bool + var $t7: bool + # live vars: $t0, $t1 + 0: if ($t0) goto 1 else goto 4 + # live vars: $t0, $t1 
+ 1: label L0 + # live vars: $t0, $t1 + 2: $t5 := move($t1) + # live vars: $t0, $t1, $t5 + 3: goto 6 + # live vars: $t0, $t1 + 4: label L1 + # live vars: $t0, $t1 + 5: $t5 := false + # live vars: $t0, $t1, $t5 + 6: label L2 + # live vars: $t0, $t1, $t5 + 7: if ($t5) goto 8 else goto 11 + # live vars: $t0, $t1 + 8: label L3 + # live vars: $t0, $t1 + 9: $t4 := true + # live vars: $t0, $t1, $t4 + 10: goto 19 + # live vars: $t0, $t1 + 11: label L4 + # live vars: $t0, $t1 + 12: if ($t0) goto 13 else goto 16 + # live vars: $t0, $t1 + 13: label L6 + # live vars: $t0, $t1 + 14: $t4 := !($t1) + # live vars: $t0, $t1, $t4 + 15: goto 18 + # live vars: $t0, $t1 + 16: label L7 + # live vars: $t0, $t1 + 17: $t4 := false + # live vars: $t0, $t1, $t4 + 18: label L8 + # live vars: $t0, $t1, $t4 + 19: label L5 + # live vars: $t0, $t1, $t4 + 20: if ($t4) goto 21 else goto 24 + # live vars: $t0, $t1 + 21: label L9 + # live vars: $t0, $t1 + 22: $t3 := true + # live vars: $t0, $t1, $t3 + 23: goto 33 + # live vars: $t0, $t1 + 24: label L10 + # live vars: $t0, $t1 + 25: $t6 := !($t0) + # live vars: $t0, $t1, $t6 + 26: if ($t6) goto 27 else goto 30 + # live vars: $t0, $t1 + 27: label L12 + # live vars: $t0, $t1 + 28: $t3 := move($t1) + # live vars: $t0, $t1, $t3 + 29: goto 32 + # live vars: $t0, $t1 + 30: label L13 + # live vars: $t0, $t1 + 31: $t3 := false + # live vars: $t0, $t1, $t3 + 32: label L14 + # live vars: $t0, $t1, $t3 + 33: label L11 + # live vars: $t0, $t1, $t3 + 34: if ($t3) goto 35 else goto 38 + # live vars: + 35: label L15 + # live vars: + 36: $t2 := true + # live vars: $t2 + 37: goto 47 + # live vars: $t0, $t1 + 38: label L16 + # live vars: $t0, $t1 + 39: $t7 := !($t0) + # live vars: $t1, $t7 + 40: if ($t7) goto 41 else goto 44 + # live vars: $t1 + 41: label L18 + # live vars: $t1 + 42: $t2 := !($t1) + # live vars: $t2 + 43: goto 46 + # live vars: + 44: label L19 + # live vars: + 45: $t2 := false + # live vars: $t2 + 46: label L20 + # live vars: $t2 + 47: label L17 + # live vars: $t2 + 48: return $t2 +} + + +[variant baseline] +fun operators::equality<#0>($t0: #0, $t1: #0): bool { + var $t2: bool + # live vars: $t0, $t1 + 0: $t2 := ==($t0, $t1) + # live vars: $t2 + 1: return $t2 +} + + +[variant baseline] +fun operators::inequality<#0>($t0: #0, $t1: #0): bool { + var $t2: bool + # live vars: $t0, $t1 + 0: $t2 := !=($t0, $t1) + # live vars: $t2 + 1: return $t2 +} + + +[variant baseline] +fun operators::order($t0: u64, $t1: u64): bool { + var $t2: bool + var $t3: bool + var $t4: bool + var $t5: bool + var $t6: bool + var $t7: bool + # live vars: $t0, $t1 + 0: $t5 := <($t0, $t1) + # live vars: $t0, $t1, $t5 + 1: if ($t5) goto 2 else goto 5 + # live vars: $t0, $t1 + 2: label L0 + # live vars: $t0, $t1 + 3: $t4 := <=($t0, $t1) + # live vars: $t0, $t1, $t4 + 4: goto 7 + # live vars: $t0, $t1 + 5: label L1 + # live vars: $t0, $t1 + 6: $t4 := false + # live vars: $t0, $t1, $t4 + 7: label L2 + # live vars: $t0, $t1, $t4 + 8: if ($t4) goto 9 else goto 13 + # live vars: $t0, $t1 + 9: label L3 + # live vars: $t0, $t1 + 10: $t6 := >($t0, $t1) + # live vars: $t0, $t1, $t6 + 11: $t3 := !($t6) + # live vars: $t0, $t1, $t3 + 12: goto 15 + # live vars: $t0, $t1 + 13: label L4 + # live vars: $t0, $t1 + 14: $t3 := false + # live vars: $t0, $t1, $t3 + 15: label L5 + # live vars: $t0, $t1, $t3 + 16: if ($t3) goto 17 else goto 21 + # live vars: $t0, $t1 + 17: label L6 + # live vars: $t0, $t1 + 18: $t7 := >=($t0, $t1) + # live vars: $t7 + 19: $t2 := !($t7) + # live vars: $t2 + 20: goto 23 + # live vars: + 21: label L7 + 
# live vars: + 22: $t2 := false + # live vars: $t2 + 23: label L8 + # live vars: $t2 + 24: return $t2 +} + + +============ disassembled file-format ================== +// Move bytecode v6 +module 42.operators { + + +arithm(Arg0: u64, Arg1: u64): u64 { +B0: + 0: CopyLoc[0](Arg0: u64) + 1: CopyLoc[1](Arg1: u64) + 2: Sub + 3: StLoc[2](loc0: u64) + 4: CopyLoc[1](Arg1: u64) + 5: MoveLoc[2](loc0: u64) + 6: Div + 7: MoveLoc[1](Arg1: u64) + 8: Mul + 9: CopyLoc[0](Arg0: u64) + 10: Mod + 11: StLoc[3](loc1: u64) + 12: MoveLoc[0](Arg0: u64) + 13: MoveLoc[3](loc1: u64) + 14: Add + 15: Ret +} +bits(Arg0: u64, Arg1: u8): u64 { +B0: + 0: CopyLoc[0](Arg0: u64) + 1: CopyLoc[1](Arg1: u8) + 2: Shl + 3: CopyLoc[0](Arg0: u64) + 4: BitAnd + 5: CopyLoc[0](Arg0: u64) + 6: MoveLoc[1](Arg1: u8) + 7: Shr + 8: MoveLoc[0](Arg0: u64) + 9: Xor + 10: BitOr + 11: Ret +} +bools(Arg0: bool, Arg1: bool): bool { +L0: loc2: bool +L1: loc3: bool +B0: + 0: CopyLoc[0](Arg0: bool) + 1: BrFalse(5) +B1: + 2: CopyLoc[1](Arg1: bool) + 3: StLoc[2](loc0: bool) + 4: Branch(7) +B2: + 5: LdConst[0](Bool: [0]) + 6: StLoc[2](loc0: bool) +B3: + 7: MoveLoc[2](loc0: bool) + 8: BrFalse(12) +B4: + 9: LdConst[1](Bool: [1]) + 10: StLoc[3](loc1: bool) + 11: Branch(20) +B5: + 12: CopyLoc[0](Arg0: bool) + 13: BrFalse(18) +B6: + 14: CopyLoc[1](Arg1: bool) + 15: Not + 16: StLoc[3](loc1: bool) + 17: Branch(20) +B7: + 18: LdConst[0](Bool: [0]) + 19: StLoc[3](loc1: bool) +B8: + 20: MoveLoc[3](loc1: bool) + 21: BrFalse(25) +B9: + 22: LdConst[1](Bool: [1]) + 23: StLoc[4](loc2: bool) + 24: Branch(33) +B10: + 25: CopyLoc[0](Arg0: bool) + 26: Not + 27: BrFalse(31) +B11: + 28: CopyLoc[1](Arg1: bool) + 29: StLoc[4](loc2: bool) + 30: Branch(33) +B12: + 31: LdConst[0](Bool: [0]) + 32: StLoc[4](loc2: bool) +B13: + 33: MoveLoc[4](loc2: bool) + 34: BrFalse(38) +B14: + 35: LdConst[1](Bool: [1]) + 36: StLoc[5](loc3: bool) + 37: Branch(47) +B15: + 38: MoveLoc[0](Arg0: bool) + 39: Not + 40: BrFalse(45) +B16: + 41: MoveLoc[1](Arg1: bool) + 42: Not + 43: StLoc[5](loc3: bool) + 44: Branch(47) +B17: + 45: LdConst[0](Bool: [0]) + 46: StLoc[5](loc3: bool) +B18: + 47: MoveLoc[5](loc3: bool) + 48: Ret +} +equality(Arg0: Ty0, Arg1: Ty0): bool { +B0: + 0: MoveLoc[0](Arg0: Ty0) + 1: MoveLoc[1](Arg1: Ty0) + 2: Eq + 3: Ret +} +inequality(Arg0: Ty0, Arg1: Ty0): bool { +B0: + 0: MoveLoc[0](Arg0: Ty0) + 1: MoveLoc[1](Arg1: Ty0) + 2: Neq + 3: Ret +} +order(Arg0: u64, Arg1: u64): bool { +L0: loc2: bool +B0: + 0: CopyLoc[0](Arg0: u64) + 1: CopyLoc[1](Arg1: u64) + 2: Lt + 3: BrFalse(9) +B1: + 4: CopyLoc[0](Arg0: u64) + 5: CopyLoc[1](Arg1: u64) + 6: Le + 7: StLoc[2](loc0: bool) + 8: Branch(11) +B2: + 9: LdConst[0](Bool: [0]) + 10: StLoc[2](loc0: bool) +B3: + 11: MoveLoc[2](loc0: bool) + 12: BrFalse(19) +B4: + 13: CopyLoc[0](Arg0: u64) + 14: CopyLoc[1](Arg1: u64) + 15: Gt + 16: Not + 17: StLoc[3](loc1: bool) + 18: Branch(21) +B5: + 19: LdConst[0](Bool: [0]) + 20: StLoc[3](loc1: bool) +B6: + 21: MoveLoc[3](loc1: bool) + 22: BrFalse(29) +B7: + 23: MoveLoc[0](Arg0: u64) + 24: MoveLoc[1](Arg1: u64) + 25: Ge + 26: Not + 27: StLoc[4](loc2: bool) + 28: Branch(31) +B8: + 29: LdConst[0](Bool: [0]) + 30: StLoc[4](loc2: bool) +B9: + 31: MoveLoc[4](loc2: bool) + 32: Ret +} +} diff --git a/third_party/move/move-compiler-v2/tests/file-format-generator/operators.move b/third_party/move/move-compiler-v2/tests/file-format-generator/operators.move index 831bc2b6835fe..b06cebbbde997 100644 --- a/third_party/move/move-compiler-v2/tests/file-format-generator/operators.move +++ 
b/third_party/move/move-compiler-v2/tests/file-format-generator/operators.move @@ -3,8 +3,8 @@ module 0x42::operators { x + y / (x - y) * y % x } - fun bits(x: u64, y: u64): u64 { - x << y & x | y >> x ^ y + fun bits(x: u64, y: u8): u64 { + x << y & x | x >> y ^ x } fun bools(x: bool, y: bool): bool { diff --git a/third_party/move/move-compiler-v2/tests/file-format-generator/pack_unpack.exp b/third_party/move/move-compiler-v2/tests/file-format-generator/pack_unpack.exp index 23f5758a17b4c..4024d1b16ead4 100644 --- a/third_party/move/move-compiler-v2/tests/file-format-generator/pack_unpack.exp +++ b/third_party/move/move-compiler-v2/tests/file-format-generator/pack_unpack.exp @@ -24,6 +24,40 @@ fun pack_unpack::unpack($t0: pack_unpack::S): (u64, u64) { 4: return ($t1, $t2) } +============ after LiveVarAnalysisProcessor: ================ + +[variant baseline] +fun pack_unpack::pack($t0: u64, $t1: u64): pack_unpack::S { + var $t2: pack_unpack::S + var $t3: pack_unpack::T + # live vars: $t0, $t1 + 0: $t3 := pack pack_unpack::T($t1) + # live vars: $t0, $t3 + 1: $t2 := pack pack_unpack::S($t0, $t3) + # live vars: $t2 + 2: return $t2 +} + + +[variant baseline] +fun pack_unpack::unpack($t0: pack_unpack::S): (u64, u64) { + var $t1: u64 + var $t2: u64 + var $t3: u64 + var $t4: u64 + var $t5: pack_unpack::T + # live vars: $t0 + 0: ($t3, $t5) := unpack pack_unpack::S($t0) + # live vars: $t3, $t5 + 1: $t4 := unpack pack_unpack::T($t5) + # live vars: $t3, $t4 + 2: $t1 := move($t3) + # live vars: $t1, $t4 + 3: $t2 := move($t4) + # live vars: $t1, $t2 + 4: return ($t1, $t2) +} + ============ disassembled file-format ================== // Move bytecode v6 @@ -38,10 +72,10 @@ struct S { pack(Arg0: u64, Arg1: u64): S { B0: - 0: CopyLoc[1](Arg1: u64) + 0: MoveLoc[1](Arg1: u64) 1: Pack[0](T) 2: StLoc[2](loc0: T) - 3: CopyLoc[0](Arg0: u64) + 3: MoveLoc[0](Arg0: u64) 4: MoveLoc[2](loc0: T) 5: Pack[1](S) 6: Ret @@ -56,12 +90,12 @@ B0: 2: Unpack[0](T) 3: StLoc[1](loc0: u64) 4: StLoc[2](loc1: u64) - 5: CopyLoc[2](loc1: u64) + 5: MoveLoc[2](loc1: u64) 6: StLoc[3](loc2: u64) - 7: CopyLoc[1](loc0: u64) + 7: MoveLoc[1](loc0: u64) 8: StLoc[4](loc3: u64) - 9: CopyLoc[3](loc2: u64) - 10: CopyLoc[4](loc3: u64) + 9: MoveLoc[3](loc2: u64) + 10: MoveLoc[4](loc3: u64) 11: Ret } } diff --git a/third_party/move/move-compiler-v2/tests/file-format-generator/vector.exp b/third_party/move/move-compiler-v2/tests/file-format-generator/vector.exp index ea04db9c64c65..13bd5227e5fa1 100644 --- a/third_party/move/move-compiler-v2/tests/file-format-generator/vector.exp +++ b/third_party/move/move-compiler-v2/tests/file-format-generator/vector.exp @@ -13,6 +13,26 @@ fun vector::create(): vector { 4: return $t0 } +============ after LiveVarAnalysisProcessor: ================ + +[variant baseline] +fun vector::create(): vector { + var $t0: vector + var $t1: u64 + var $t2: u64 + var $t3: u64 + # live vars: + 0: $t1 := 1 + # live vars: $t1 + 1: $t2 := 2 + # live vars: $t1, $t2 + 2: $t3 := 3 + # live vars: $t1, $t2, $t3 + 3: $t0 := vector($t1, $t2, $t3) + # live vars: $t0 + 4: return $t0 +} + ============ disassembled file-format ================== // Move bytecode v6 diff --git a/third_party/move/move-compiler-v2/tests/testsuite.rs b/third_party/move/move-compiler-v2/tests/testsuite.rs index 1dae4b9d16d92..43eefb9fcdeb6 100644 --- a/third_party/move/move-compiler-v2/tests/testsuite.rs +++ b/third_party/move/move-compiler-v2/tests/testsuite.rs @@ -5,12 +5,16 @@ use codespan_reporting::{diagnostic::Severity, term::termcolor::Buffer}; use 
move_binary_format::{binary_views::BinaryIndexedView, file_format as FF}; use move_command_line_common::files::FileHash; -use move_compiler_v2::{run_file_format_gen, Options}; +use move_compiler_v2::{ + pipeline::livevar_analysis_processor::LiveVarAnalysisProcessor, run_file_format_gen, Options, +}; use move_disassembler::disassembler::Disassembler; use move_ir_types::location; use move_model::model::GlobalEnv; use move_prover_test_utils::{baseline_test, extract_test_directives}; -use move_stackless_bytecode::function_target_pipeline::FunctionTargetPipeline; +use move_stackless_bytecode::{ + function_target::FunctionTarget, function_target_pipeline::FunctionTargetPipeline, +}; use std::{ cell::RefCell, path::{Path, PathBuf}, @@ -66,25 +70,27 @@ fn test_runner(path: &Path) -> datatest_stable::Result<()> { impl TestConfig { fn get_config_from_path(path: &Path) -> TestConfig { let path = path.to_string_lossy(); + let mut pipeline = FunctionTargetPipeline::default(); if path.contains("/checking/") { Self { check_only: true, dump_ast: true, - pipeline: FunctionTargetPipeline::default(), + pipeline, generate_file_format: false, } } else if path.contains("/bytecode-generator/") { Self { check_only: false, dump_ast: true, - pipeline: FunctionTargetPipeline::default(), + pipeline, generate_file_format: false, } } else if path.contains("/file-format-generator/") { + pipeline.add_processor(Box::new(LiveVarAnalysisProcessor {})); Self { check_only: false, dump_ast: false, - pipeline: FunctionTargetPipeline::default(), + pipeline, generate_file_format: true, } } else { @@ -131,22 +137,28 @@ impl TestConfig { |targets_before| { let out = &mut test_output.borrow_mut(); Self::check_diags(out, &env); - out.push_str(&move_stackless_bytecode::print_targets_for_test( - &env, - "initial bytecode", - targets_before, - )); + out.push_str( + &move_stackless_bytecode::print_targets_with_annotations_for_test( + &env, + "initial bytecode", + targets_before, + Self::register_formatters, + ), + ); }, // Hook which is run after every step in the pipeline. Prints out // bytecode after the processor. |_, processor, targets_after| { let out = &mut test_output.borrow_mut(); Self::check_diags(out, &env); - out.push_str(&move_stackless_bytecode::print_targets_for_test( - &env, - &format!("after {}:", processor.name()), - targets_after, - )); + out.push_str( + &move_stackless_bytecode::print_targets_with_annotations_for_test( + &env, + &format!("after {}:", processor.name()), + targets_after, + Self::register_formatters, + ), + ); }, ); let ok = Self::check_diags(&mut test_output.borrow_mut(), &env); @@ -170,6 +182,11 @@ impl TestConfig { Ok(()) } + /// Callback from the framework to register formatters for annotations. + fn register_formatters(target: &FunctionTarget) { + LiveVarAnalysisProcessor::register_formatters(target) + } + fn check_diags(baseline: &mut String, env: &GlobalEnv) -> bool { let mut error_writer = Buffer::no_color(); env.report_diag(&mut error_writer, Severity::Note); diff --git a/third_party/move/move-compiler-v2/transactional-tests/tests/control_flow/sorter.exp b/third_party/move/move-compiler-v2/transactional-tests/tests/control_flow/sorter.exp new file mode 100644 index 0000000000000..2c4c770e675bf --- /dev/null +++ b/third_party/move/move-compiler-v2/transactional-tests/tests/control_flow/sorter.exp @@ -0,0 +1,584 @@ +processed 2 tasks + +task 0 'publish'. lines 1-65: + + + +==> Compiler v2 delivered same results! 
+ +>>> V1 Compiler { +== BEGIN Bytecode == +// Move bytecode v6 +module 42.heap { + + +array_equals(Arg0: &vector, Arg1: &vector): bool { +L0: loc2: u64 +B0: + 0: CopyLoc[0](Arg0: &vector) + 1: VecLen(7) + 2: StLoc[3](loc1: u64) + 3: CopyLoc[1](Arg1: &vector) + 4: VecLen(7) + 5: StLoc[4](loc2: u64) + 6: CopyLoc[3](loc1: u64) + 7: MoveLoc[4](loc2: u64) + 8: Neq + 9: BrFalse(16) +B1: + 10: MoveLoc[1](Arg1: &vector) + 11: Pop + 12: MoveLoc[0](Arg0: &vector) + 13: Pop + 14: LdFalse + 15: Ret +B2: + 16: LdU64(0) + 17: StLoc[2](loc0: u64) +B3: + 18: CopyLoc[2](loc0: u64) + 19: CopyLoc[3](loc1: u64) + 20: Lt + 21: BrFalse(44) +B4: + 22: Branch(23) +B5: + 23: CopyLoc[0](Arg0: &vector) + 24: CopyLoc[2](loc0: u64) + 25: VecImmBorrow(7) + 26: ReadRef + 27: CopyLoc[1](Arg1: &vector) + 28: CopyLoc[2](loc0: u64) + 29: VecImmBorrow(7) + 30: ReadRef + 31: Neq + 32: BrFalse(39) +B6: + 33: MoveLoc[1](Arg1: &vector) + 34: Pop + 35: MoveLoc[0](Arg0: &vector) + 36: Pop + 37: LdFalse + 38: Ret +B7: + 39: MoveLoc[2](loc0: u64) + 40: LdU64(1) + 41: Add + 42: StLoc[2](loc0: u64) + 43: Branch(18) +B8: + 44: MoveLoc[1](Arg1: &vector) + 45: Pop + 46: MoveLoc[0](Arg0: &vector) + 47: Pop + 48: LdTrue + 49: Ret +} +create1(): vector { +B0: + 0: LdConst[0](Vector(U64): [6, 3, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0]) + 1: Ret +} +create2(): vector { +B0: + 0: LdConst[1](Vector(U64): [6, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0]) + 1: Ret +} +public main() { +L0: loc0: vector +L1: loc1: vector +L2: loc2: vector +B0: + 0: Call create1(): vector + 1: StLoc[0](loc0: vector) + 2: Call create2(): vector + 3: StLoc[1](loc1: vector) + 4: ImmBorrowLoc[0](loc0: vector) + 5: Call vcopy(&vector): vector + 6: StLoc[2](loc2: vector) + 7: ImmBorrowLoc[0](loc0: vector) + 8: ImmBorrowLoc[2](loc2: vector) + 9: Call array_equals(&vector, &vector): bool + 10: BrFalse(12) +B1: + 11: Branch(14) +B2: + 12: LdU64(23) + 13: Abort +B3: + 14: ImmBorrowLoc[1](loc1: vector) + 15: ImmBorrowLoc[1](loc1: vector) + 16: Call array_equals(&vector, &vector): bool + 17: BrFalse(19) +B4: + 18: Branch(21) +B5: + 19: LdU64(29) + 20: Abort +B6: + 21: MutBorrowLoc[0](loc0: vector) + 22: Call sort(&mut vector) + 23: ImmBorrowLoc[1](loc1: vector) + 24: ImmBorrowLoc[0](loc0: vector) + 25: Call array_equals(&vector, &vector): bool + 26: BrFalse(28) +B7: + 27: Branch(30) +B8: + 28: LdU64(31) + 29: Abort +B9: + 30: ImmBorrowLoc[0](loc0: vector) + 31: ImmBorrowLoc[1](loc1: vector) + 32: Call array_equals(&vector, &vector): bool + 33: BrFalse(35) +B10: + 34: Branch(37) +B11: + 35: LdU64(29) + 36: Abort +B12: + 37: ImmBorrowLoc[0](loc0: vector) + 38: ImmBorrowLoc[2](loc2: vector) + 39: Call array_equals(&vector, &vector): bool + 40: Not + 41: BrFalse(43) +B13: + 42: Branch(45) +B14: + 43: LdU64(31) + 44: Abort +B15: + 45: Ret +} +sort(Arg0: &mut vector) { +L0: loc1: u64 +L1: loc2: &mut vector +L2: loc3: u64 +L3: loc4: u64 +L4: loc5: u64 +B0: + 0: LdU64(0) + 1: StLoc[5](loc4: u64) +B1: + 2: CopyLoc[5](loc4: u64) + 3: CopyLoc[0](Arg0: &mut vector) + 4: FreezeRef + 5: VecLen(7) + 6: Lt + 7: BrFalse(54) +B2: + 8: Branch(9) +B3: + 9: CopyLoc[5](loc4: u64) + 10: LdU64(1) + 11: Add + 12: StLoc[6](loc5: u64) +B4: + 13: CopyLoc[6](loc5: u64) + 14: CopyLoc[0](Arg0: &mut vector) + 15: FreezeRef + 16: VecLen(7) + 17: Lt + 18: BrFalse(49) +B5: + 19: Branch(20) +B6: + 20: 
CopyLoc[0](Arg0: &mut vector) + 21: CopyLoc[5](loc4: u64) + 22: StLoc[2](loc1: u64) + 23: StLoc[1](loc0: &mut vector) + 24: CopyLoc[0](Arg0: &mut vector) + 25: CopyLoc[6](loc5: u64) + 26: StLoc[4](loc3: u64) + 27: StLoc[3](loc2: &mut vector) + 28: MoveLoc[1](loc0: &mut vector) + 29: FreezeRef + 30: MoveLoc[2](loc1: u64) + 31: VecImmBorrow(7) + 32: ReadRef + 33: MoveLoc[3](loc2: &mut vector) + 34: FreezeRef + 35: MoveLoc[4](loc3: u64) + 36: VecImmBorrow(7) + 37: ReadRef + 38: Gt + 39: BrFalse(44) +B7: + 40: CopyLoc[0](Arg0: &mut vector) + 41: CopyLoc[5](loc4: u64) + 42: CopyLoc[6](loc5: u64) + 43: VecSwap(7) +B8: + 44: MoveLoc[6](loc5: u64) + 45: LdU64(1) + 46: Add + 47: StLoc[6](loc5: u64) + 48: Branch(13) +B9: + 49: MoveLoc[5](loc4: u64) + 50: LdU64(1) + 51: Add + 52: StLoc[5](loc4: u64) + 53: Branch(2) +B10: + 54: MoveLoc[0](Arg0: &mut vector) + 55: Pop + 56: Ret +} +vcopy(Arg0: &vector): vector { +L0: loc1: u64 +L1: loc2: vector +B0: + 0: VecPack(7, 0) + 1: StLoc[3](loc2: vector) + 2: LdU64(0) + 3: StLoc[1](loc0: u64) + 4: CopyLoc[0](Arg0: &vector) + 5: VecLen(7) + 6: StLoc[2](loc1: u64) +B1: + 7: CopyLoc[1](loc0: u64) + 8: CopyLoc[2](loc1: u64) + 9: Lt + 10: BrFalse(23) +B2: + 11: Branch(12) +B3: + 12: MutBorrowLoc[3](loc2: vector) + 13: CopyLoc[0](Arg0: &vector) + 14: CopyLoc[1](loc0: u64) + 15: VecImmBorrow(7) + 16: ReadRef + 17: VecPushBack(7) + 18: MoveLoc[1](loc0: u64) + 19: LdU64(1) + 20: Add + 21: StLoc[1](loc0: u64) + 22: Branch(7) +B4: + 23: MoveLoc[0](Arg0: &vector) + 24: Pop + 25: MoveLoc[3](loc2: vector) + 26: Ret +} +} +== END Bytecode == + +task 1 'run'. lines 67-73: + +== BEGIN Bytecode == +// Move bytecode v6 +script { +use 0000000000000000000000000000000000000000000000000000000000000042::heap; + + + + +main() { +B0: + 0: Call heap::main() + 1: Ret +} +} +== END Bytecode == +} + +>>> V2 Compiler { +== BEGIN Bytecode == +// Move bytecode v6 +module 42.heap { + + +array_equals(Arg0: &vector, Arg1: &vector): bool { +L0: loc2: u64 +L1: loc3: u64 +B0: + 0: CopyLoc[0](Arg0: &vector) + 1: VecLen(2) + 2: StLoc[2](loc0: u64) + 3: CopyLoc[1](Arg1: &vector) + 4: VecLen(2) + 5: StLoc[3](loc1: u64) + 6: CopyLoc[2](loc0: u64) + 7: MoveLoc[3](loc1: u64) + 8: Neq + 9: BrFalse(13) +B1: + 10: LdConst[0](Bool: [0]) + 11: Ret +B2: + 12: Branch(13) +B3: + 13: LdConst[1](U64: [0, 0, 0, 0, 0, 0, 0, 0]) + 14: StLoc[4](loc2: u64) +B4: + 15: CopyLoc[4](loc2: u64) + 16: CopyLoc[2](loc0: u64) + 17: Lt + 18: BrFalse(39) +B5: + 19: CopyLoc[0](Arg0: &vector) + 20: CopyLoc[4](loc2: u64) + 21: VecImmBorrow(2) + 22: ReadRef + 23: CopyLoc[1](Arg1: &vector) + 24: CopyLoc[4](loc2: u64) + 25: VecImmBorrow(2) + 26: ReadRef + 27: Neq + 28: BrFalse(32) +B6: + 29: LdConst[0](Bool: [0]) + 30: Ret +B7: + 31: Branch(32) +B8: + 32: LdConst[2](U64: [1, 0, 0, 0, 0, 0, 0, 0]) + 33: StLoc[5](loc3: u64) + 34: MoveLoc[4](loc2: u64) + 35: MoveLoc[5](loc3: u64) + 36: Add + 37: StLoc[4](loc2: u64) + 38: Branch(40) +B9: + 39: Branch(41) +B10: + 40: Branch(15) +B11: + 41: LdConst[3](Bool: [1]) + 42: Ret +} +create1(): vector { +B0: + 0: LdConst[4](U64: [3, 0, 0, 0, 0, 0, 0, 0]) + 1: LdConst[5](U64: [2, 0, 0, 0, 0, 0, 0, 0]) + 2: LdConst[2](U64: [1, 0, 0, 0, 0, 0, 0, 0]) + 3: LdConst[6](U64: [5, 0, 0, 0, 0, 0, 0, 0]) + 4: LdConst[7](U64: [8, 0, 0, 0, 0, 0, 0, 0]) + 5: LdConst[8](U64: [4, 0, 0, 0, 0, 0, 0, 0]) + 6: VecPack(2, 6) + 7: Ret +} +create2(): vector { +B0: + 0: LdConst[2](U64: [1, 0, 0, 0, 0, 0, 0, 0]) + 1: LdConst[5](U64: [2, 0, 0, 0, 0, 0, 0, 0]) + 2: LdConst[4](U64: [3, 0, 0, 0, 0, 0, 0, 0]) + 3: LdConst[8](U64: 
[4, 0, 0, 0, 0, 0, 0, 0]) + 4: LdConst[6](U64: [5, 0, 0, 0, 0, 0, 0, 0]) + 5: LdConst[7](U64: [8, 0, 0, 0, 0, 0, 0, 0]) + 6: VecPack(2, 6) + 7: Ret +} +public main() { +L0: loc0: vector +L1: loc1: vector +L2: loc2: vector +B0: + 0: Call create1(): vector + 1: StLoc[0](loc0: vector) + 2: Call create2(): vector + 3: StLoc[1](loc1: vector) + 4: ImmBorrowLoc[0](loc0: vector) + 5: Call vcopy(&vector): vector + 6: StLoc[2](loc2: vector) + 7: ImmBorrowLoc[0](loc0: vector) + 8: ImmBorrowLoc[2](loc2: vector) + 9: Call array_equals(&vector, &vector): bool + 10: BrFalse(12) +B1: + 11: Branch(14) +B2: + 12: LdConst[9](U64: [23, 0, 0, 0, 0, 0, 0, 0]) + 13: Abort +B3: + 14: ImmBorrowLoc[1](loc1: vector) + 15: ImmBorrowLoc[1](loc1: vector) + 16: Call array_equals(&vector, &vector): bool + 17: BrFalse(19) +B4: + 18: Branch(21) +B5: + 19: LdConst[10](U64: [29, 0, 0, 0, 0, 0, 0, 0]) + 20: Abort +B6: + 21: MutBorrowLoc[0](loc0: vector) + 22: Call sort(&mut vector) + 23: ImmBorrowLoc[1](loc1: vector) + 24: ImmBorrowLoc[0](loc0: vector) + 25: Call array_equals(&vector, &vector): bool + 26: BrFalse(28) +B7: + 27: Branch(30) +B8: + 28: LdConst[11](U64: [31, 0, 0, 0, 0, 0, 0, 0]) + 29: Abort +B9: + 30: ImmBorrowLoc[0](loc0: vector) + 31: ImmBorrowLoc[1](loc1: vector) + 32: Call array_equals(&vector, &vector): bool + 33: BrFalse(35) +B10: + 34: Branch(37) +B11: + 35: LdConst[10](U64: [29, 0, 0, 0, 0, 0, 0, 0]) + 36: Abort +B12: + 37: ImmBorrowLoc[0](loc0: vector) + 38: ImmBorrowLoc[2](loc2: vector) + 39: Call array_equals(&vector, &vector): bool + 40: Not + 41: BrFalse(43) +B13: + 42: Branch(45) +B14: + 43: LdConst[11](U64: [31, 0, 0, 0, 0, 0, 0, 0]) + 44: Abort +B15: + 45: Ret +} +sort(Arg0: &mut vector) { +L0: loc1: u64 +L1: loc2: u64 +L2: loc3: u64 +L3: loc4: u64 +L4: loc5: u64 +L5: loc6: u64 +B0: + 0: LdConst[1](U64: [0, 0, 0, 0, 0, 0, 0, 0]) + 1: StLoc[1](loc0: u64) +B1: + 2: CopyLoc[0](Arg0: &mut vector) + 3: FreezeRef + 4: VecLen(2) + 5: StLoc[2](loc1: u64) + 6: CopyLoc[1](loc0: u64) + 7: MoveLoc[2](loc1: u64) + 8: Lt + 9: BrFalse(57) +B2: + 10: LdConst[2](U64: [1, 0, 0, 0, 0, 0, 0, 0]) + 11: StLoc[3](loc2: u64) + 12: CopyLoc[1](loc0: u64) + 13: MoveLoc[3](loc2: u64) + 14: Add + 15: StLoc[4](loc3: u64) +B3: + 16: CopyLoc[0](Arg0: &mut vector) + 17: FreezeRef + 18: VecLen(2) + 19: StLoc[5](loc4: u64) + 20: CopyLoc[4](loc3: u64) + 21: MoveLoc[5](loc4: u64) + 22: Lt + 23: BrFalse(48) +B4: + 24: CopyLoc[0](Arg0: &mut vector) + 25: FreezeRef + 26: CopyLoc[1](loc0: u64) + 27: VecImmBorrow(2) + 28: ReadRef + 29: CopyLoc[0](Arg0: &mut vector) + 30: FreezeRef + 31: CopyLoc[4](loc3: u64) + 32: VecImmBorrow(2) + 33: ReadRef + 34: Gt + 35: BrFalse(41) +B5: + 36: CopyLoc[0](Arg0: &mut vector) + 37: CopyLoc[1](loc0: u64) + 38: CopyLoc[4](loc3: u64) + 39: VecSwap(2) + 40: Branch(41) +B6: + 41: LdConst[2](U64: [1, 0, 0, 0, 0, 0, 0, 0]) + 42: StLoc[6](loc5: u64) + 43: MoveLoc[4](loc3: u64) + 44: MoveLoc[6](loc5: u64) + 45: Add + 46: StLoc[4](loc3: u64) + 47: Branch(49) +B7: + 48: Branch(50) +B8: + 49: Branch(16) +B9: + 50: LdConst[2](U64: [1, 0, 0, 0, 0, 0, 0, 0]) + 51: StLoc[7](loc6: u64) + 52: MoveLoc[1](loc0: u64) + 53: MoveLoc[7](loc6: u64) + 54: Add + 55: StLoc[1](loc0: u64) + 56: Branch(58) +B10: + 57: Branch(59) +B11: + 58: Branch(2) +B12: + 59: Ret +} +vcopy(Arg0: &vector): vector { +L0: loc1: u64 +L1: loc2: u64 +L2: loc3: u64 +L3: loc4: vector +B0: + 0: VecPack(2, 0) + 1: StLoc[1](loc0: vector) + 2: LdConst[1](U64: [0, 0, 0, 0, 0, 0, 0, 0]) + 3: StLoc[2](loc1: u64) + 4: CopyLoc[0](Arg0: &vector) + 5: VecLen(2) + 6: 
StLoc[3](loc2: u64) +B1: + 7: CopyLoc[2](loc1: u64) + 8: CopyLoc[3](loc2: u64) + 9: Lt + 10: BrFalse(24) +B2: + 11: MutBorrowLoc[1](loc0: vector) + 12: CopyLoc[0](Arg0: &vector) + 13: CopyLoc[2](loc1: u64) + 14: VecImmBorrow(2) + 15: ReadRef + 16: VecPushBack(2) + 17: LdConst[2](U64: [1, 0, 0, 0, 0, 0, 0, 0]) + 18: StLoc[4](loc3: u64) + 19: MoveLoc[2](loc1: u64) + 20: MoveLoc[4](loc3: u64) + 21: Add + 22: StLoc[2](loc1: u64) + 23: Branch(25) +B3: + 24: Branch(26) +B4: + 25: Branch(7) +B5: + 26: MoveLoc[1](loc0: vector) + 27: StLoc[5](loc4: vector) + 28: MoveLoc[5](loc4: vector) + 29: Ret +} +} +== END Bytecode == + +task 1 'run'. lines 67-73: + +== BEGIN Bytecode == +// Move bytecode v6 +script { +use 0000000000000000000000000000000000000000000000000000000000000042::heap; + + + + +main() { +B0: + 0: Call heap::main() + 1: Ret +} +} +== END Bytecode == +} diff --git a/third_party/move/move-compiler-v2/transactional-tests/tests/control_flow/sorter.move b/third_party/move/move-compiler-v2/transactional-tests/tests/control_flow/sorter.move new file mode 100644 index 0000000000000..b8095984b9f3b --- /dev/null +++ b/third_party/move/move-compiler-v2/transactional-tests/tests/control_flow/sorter.move @@ -0,0 +1,73 @@ +//# publish --print-bytecode +module 0x42::heap { + use std::vector; + + fun create1(): vector { + vector[3, 2, 1, 5, 8, 4] + } + + fun create2(): vector { + vector[1, 2, 3, 4, 5, 8] + } + + fun vcopy(x: &vector): vector { + let y : vector = vector::empty(); + let i : u64 = 0; + let l : u64 = vector::length(x); + while (i < l) { + vector::push_back(&mut y, *vector::borrow(x, i)); + i = i + 1; + }; + y + } + + fun sort(x: &mut vector) { + let i: u64 = 0; + while (i < vector::length(x)) { + let j: u64 = i + 1; + while (j < vector::length(x)) { + if (*vector::borrow(x, i) > *vector::borrow(x, j)) { + vector::swap(x, i, j) + }; + j = j + 1; + }; + i = i + 1; + } + } + + fun array_equals(x: &vector, y: &vector): bool { + let l1: u64 = vector::length(x); + let l2: u64 = vector::length(y); + if (l1 != l2) { + return false + }; + let i: u64 = 0; + while (i < l1) { + if (*vector::borrow(x, i) != *vector::borrow(y, i)) { + return false + }; + i = i + 1; + }; + true + } + + public fun main() { + let x: vector = create1(); + let y: vector = create2(); + let z: vector = vcopy(&x); + assert!(array_equals(&x, &z), 23); + assert!(array_equals(&y, &y), 29); + sort(&mut x); + assert!(array_equals(&y, &x), 31); + assert!(array_equals(&x, &y), 29); + assert!(!array_equals(&x, &z), 31); + } +} + +//# run --print-bytecode +script { +use 0x42::heap::main; +fun mymain() { + main(); +} +} diff --git a/third_party/move/move-compiler-v2/transactional-tests/tests/evaluation_order/arg_order.exp b/third_party/move/move-compiler-v2/transactional-tests/tests/evaluation_order/arg_order.exp new file mode 100644 index 0000000000000..a6db107b3b9ca --- /dev/null +++ b/third_party/move/move-compiler-v2/transactional-tests/tests/evaluation_order/arg_order.exp @@ -0,0 +1,3 @@ +processed 2 tasks + +==> Compiler v2 delivered same results! 
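The V1 and V2 bytecode baselines above differ mostly in where locals are copied (CopyLoc) versus moved (MoveLoc); that choice is driven by the `# live vars:` annotations the LiveVarAnalysisProcessor attaches to the stackless bytecode. As a rough, self-contained illustration of the underlying dataflow (a hypothetical sketch, not the implementation in this patch; the `Instr`, `uses`, `defs`, and `succs` names are invented for the example), the classic backward live-variable fixpoint looks like this in Rust:

use std::collections::BTreeSet;

struct Instr {
    uses: BTreeSet<usize>, // temps read by this instruction ($tN -> N)
    defs: BTreeSet<usize>, // temps written by this instruction
    succs: Vec<usize>,     // indices of successor instructions
}

fn temps(xs: &[usize]) -> BTreeSet<usize> {
    xs.iter().copied().collect()
}

// live_before(i) = uses(i) union (live_after(i) minus defs(i)), iterated to a
// fixpoint, where live_after(i) is the union of live_before over i's successors.
fn live_variables(code: &[Instr]) -> Vec<BTreeSet<usize>> {
    let mut live_before = vec![BTreeSet::new(); code.len()];
    let mut changed = true;
    while changed {
        changed = false;
        // Walk backwards: liveness information flows from uses towards definitions.
        for i in (0..code.len()).rev() {
            let mut live_after = BTreeSet::new();
            for &s in &code[i].succs {
                live_after.extend(live_before[s].iter().copied());
            }
            let mut new_before: BTreeSet<usize> =
                live_after.difference(&code[i].defs).copied().collect();
            new_before.extend(code[i].uses.iter().copied());
            if new_before != live_before[i] {
                live_before[i] = new_before;
                changed = true;
            }
        }
    }
    live_before
}

fn main() {
    // Straight-line code modelled after operators::arithm in the baseline above.
    let code = vec![
        Instr { uses: temps(&[0, 1]), defs: temps(&[6]), succs: vec![1] }, // $t6 := -($t0, $t1)
        Instr { uses: temps(&[1, 6]), defs: temps(&[5]), succs: vec![2] }, // $t5 := /($t1, $t6)
        Instr { uses: temps(&[5, 1]), defs: temps(&[4]), succs: vec![3] }, // $t4 := *($t5, $t1)
        Instr { uses: temps(&[4, 0]), defs: temps(&[3]), succs: vec![4] }, // $t3 := %($t4, $t0)
        Instr { uses: temps(&[0, 3]), defs: temps(&[2]), succs: vec![5] }, // $t2 := +($t0, $t3)
        Instr { uses: temps(&[2]), defs: BTreeSet::new(), succs: vec![] }, // return $t2
    ];
    for (i, live) in live_variables(&code).iter().enumerate() {
        println!("live before {}: {:?}", i, live);
    }
}

Running this sketch reproduces the `# live vars:` sets shown for `operators::arithm` in the baseline: each temp stays live only up to its last use, which is exactly the information the file-format generator needs to emit MoveLoc for a final use and CopyLoc for earlier ones.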
diff --git a/third_party/move/move-compiler-v2/transactional-tests/tests/evaluation_order/arg_order.move b/third_party/move/move-compiler-v2/transactional-tests/tests/evaluation_order/arg_order.move new file mode 100644 index 0000000000000..b51c62f6fff81 --- /dev/null +++ b/third_party/move/move-compiler-v2/transactional-tests/tests/evaluation_order/arg_order.move @@ -0,0 +1,18 @@ +//# publish +module 0x42::test { + public fun two_args(x: u64, b: bool): u64 { + if (b) { + x + } else { + 0 + } + } +} + +//# run +script { + use 0x42::test::two_args; + fun mymain() { + assert!(two_args(42, true) == 42, 1); + } +} diff --git a/third_party/move/move-compiler/Cargo.toml b/third_party/move/move-compiler/Cargo.toml index 04bf910cbe101..2ed5d10d5a39f 100644 --- a/third_party/move/move-compiler/Cargo.toml +++ b/third_party/move/move-compiler/Cargo.toml @@ -9,7 +9,7 @@ license = "Apache-2.0" [dependencies] anyhow = "1.0.52" -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } codespan-reporting = "0.11.1" difference = "2.0.0" hex = "0.4.3" diff --git a/third_party/move/move-compiler/src/attr_derivation/async_deriver.rs b/third_party/move/move-compiler/src/attr_derivation/async_deriver.rs index d4efbd9b641de..c565d56e3d90e 100644 --- a/third_party/move/move-compiler/src/attr_derivation/async_deriver.rs +++ b/third_party/move/move-compiler/src/attr_derivation/async_deriver.rs @@ -18,14 +18,41 @@ use move_core_types::account_address::AccountAddress; use move_ir_types::location::{sp, Loc}; use move_symbol_pool::Symbol; use sha3::{Digest, Sha3_256}; -use std::convert::TryInto; +use std::{collections::BTreeSet, convert::TryInto}; const ACTOR_ATTR: &str = "actor"; const STATE_ATTR: &str = "state"; const INIT_ATTR: &str = "init"; const MESSAGE_ATTR: &str = "message"; + +const CONT_ATTR: &str = "cont"; +const EVENT_ATTR: &str = "event"; // "message" is mysteriously transformed into "event" +const RPC_ATTR: &str = "rpc"; + +const GENERATED_CONT_ATTR: &str = "_generated_cont"; +const GENERATED_RPC_ATTR: &str = "_generated_rpc"; +const GENERATED_SEND_ATTR: &str = "_generated_send"; + const MAX_SEND_PARAM_COUNT: usize = 8; +pub(crate) fn add_attributes_for_async(attributes: &mut BTreeSet) { + const ALL_ATTRIBUTE_NAMES: [&str; 10] = [ + ACTOR_ATTR, + CONT_ATTR, + EVENT_ATTR, + INIT_ATTR, + MESSAGE_ATTR, + RPC_ATTR, + STATE_ATTR, + GENERATED_CONT_ATTR, + GENERATED_RPC_ATTR, + GENERATED_SEND_ATTR, + ]; + ALL_ATTRIBUTE_NAMES.into_iter().for_each(|elt| { + attributes.insert(elt.to_string()); + }); +} + pub(crate) fn derive_for_async( env: &mut CompilationEnv, address_map: &NamedAddressMap, diff --git a/third_party/move/move-compiler/src/attr_derivation/evm_deriver.rs b/third_party/move/move-compiler/src/attr_derivation/evm_deriver.rs index 28bc1ff1e6f80..89fc637095f96 100644 --- a/third_party/move/move-compiler/src/attr_derivation/evm_deriver.rs +++ b/third_party/move/move-compiler/src/attr_derivation/evm_deriver.rs @@ -11,11 +11,72 @@ use crate::{ }; use move_ir_types::location::sp; use move_symbol_pool::Symbol; +use std::collections::BTreeSet; const CONTRACT_ATTR: &str = "contract"; const CALLABLE_ATTR: &str = "callable"; const EXTERNAL_ATTR: &str = "external"; +// The following appear in test code under /evm/. 
+const ACTOR_ATTR: &str = "actor"; +const INIT_ATTR: &str = "init"; +const MESSAGE_ATTR: &str = "message"; +const ABI_STRUCT_ATTR: &str = "abi_struct"; +const CREATE_ATTR: &str = "create"; +const DECODE_ATTR: &str = "decode"; +const DELETE_ATTR: &str = "delete"; +const ENCODE_ATTR: &str = "encode"; +const ENCODE_PACKED_ATTR: &str = "encode_packed"; +const EVENT_ATTR: &str = "event"; +const EVM_ARITH_ATTR: &str = "evm_arith"; +const EVM_TEST_ATTR: &str = "evm_test"; +const FALLBACK_ATTR: &str = "fallback"; +const INTERFACE_ATTR: &str = "interface"; +const INTERFACE_ID_ATTR: &str = "interface_id"; +const SELECTOR_ATTR: &str = "selector"; +const STATE_ATTR: &str = "state"; +const STORAGE_ATTR: &str = "storage"; + +const EVM_CONTRACT_ATTR: &str = "evm_contract"; +const PAYABLE_ATTR: &str = "payable"; +const RECEIVE_ATTR: &str = "receive"; +const VIEW_ATTR: &str = "view"; +const PURE_ATTR: &str = "pure"; + +pub(crate) fn add_attributes_for_evm(attributes: &mut BTreeSet) { + const ALL_ATTRIBUTE_NAMES: [&str; 26] = [ + CALLABLE_ATTR, + CONTRACT_ATTR, + EXTERNAL_ATTR, + ABI_STRUCT_ATTR, + ACTOR_ATTR, + CREATE_ATTR, + DECODE_ATTR, + DELETE_ATTR, + ENCODE_ATTR, + ENCODE_PACKED_ATTR, + EVENT_ATTR, + EVM_ARITH_ATTR, + EVM_TEST_ATTR, + FALLBACK_ATTR, + INIT_ATTR, + INTERFACE_ATTR, + INTERFACE_ID_ATTR, + MESSAGE_ATTR, + SELECTOR_ATTR, + STATE_ATTR, + STORAGE_ATTR, + EVM_CONTRACT_ATTR, + PAYABLE_ATTR, + RECEIVE_ATTR, + VIEW_ATTR, + PURE_ATTR, + ]; + ALL_ATTRIBUTE_NAMES.into_iter().for_each(|elt| { + attributes.insert(elt.to_string()); + }); +} + pub(crate) fn derive_for_evm( _env: &mut CompilationEnv, _address_map: &NamedAddressMap, diff --git a/third_party/move/move-compiler/src/attr_derivation/mod.rs b/third_party/move/move-compiler/src/attr_derivation/mod.rs index 6634eccd26b72..2046fa4db2ef9 100644 --- a/third_party/move/move-compiler/src/attr_derivation/mod.rs +++ b/third_party/move/move-compiler/src/attr_derivation/mod.rs @@ -3,17 +3,24 @@ // SPDX-License-Identifier: Apache-2.0 use crate::{ - attr_derivation::{async_deriver::derive_for_async, evm_deriver::derive_for_evm}, + attr_derivation::{ + async_deriver::{add_attributes_for_async, derive_for_async}, + evm_deriver::{add_attributes_for_evm, derive_for_evm}, + }, parser::ast::{ Attribute, AttributeValue, Attribute_, Attributes, Definition, Exp, Exp_, Function, FunctionBody_, FunctionName, FunctionSignature, LeadingNameAccess_, NameAccessChain, NameAccessChain_, StructDefinition, StructFields, StructName, Type, Type_, Value_, Var, Visibility, }, - shared::{CompilationEnv, Name, NamedAddressMap}, + shared::{ + known_attributes::{AttributeKind, KnownAttribute}, + CompilationEnv, Flags, Name, NamedAddressMap, + }, }; use move_ir_types::location::{sp, Loc}; use move_symbol_pool::Symbol; +use std::collections::BTreeSet; mod async_deriver; mod evm_deriver; @@ -38,11 +45,29 @@ pub fn derive_from_attributes( } } +pub fn add_attributes_for_flavor(flags: &Flags, known_attributes: &mut BTreeSet) { + if flags.has_flavor(EVM_FLAVOR) { + add_attributes_for_evm(known_attributes); + } + if flags.has_flavor(ASYNC_FLAVOR) { + add_attributes_for_async(known_attributes); + // Tests with flavor "async" seem to also use EVM attributes. 
+ add_attributes_for_evm(known_attributes); + } + KnownAttribute::add_attribute_names(known_attributes); +} + +pub fn get_known_attributes_for_flavor(flags: &Flags) -> BTreeSet { + let mut known_attributes = BTreeSet::new(); + add_attributes_for_flavor(flags, &mut known_attributes); + known_attributes +} + // ========================================================================================== // Helper Functions for analyzing attributes and creating the AST /// Helper function to find an attribute by name. -pub fn find_attr<'a>(attrs: &'a Attributes, name: &str) -> Option<&'a Attribute> { +pub(crate) fn find_attr<'a>(attrs: &'a Attributes, name: &str) -> Option<&'a Attribute> { attrs .value .iter() @@ -50,7 +75,7 @@ pub fn find_attr<'a>(attrs: &'a Attributes, name: &str) -> Option<&'a Attribute> } /// Helper function to find an attribute in a slice. -pub fn find_attr_slice<'a>(vec: &'a [Attributes], name: &str) -> Option<&'a Attribute> { +pub(crate) fn find_attr_slice<'a>(vec: &'a [Attributes], name: &str) -> Option<&'a Attribute> { for attrs in vec { if let Some(a) = find_attr(attrs, name) { return Some(a); @@ -62,7 +87,7 @@ pub fn find_attr_slice<'a>(vec: &'a [Attributes], name: &str) -> Option<&'a Attr /// Helper to extract the parameters of an attribute. If the attribute is of the form /// `n(a1, ..., an)`, this extracts the a_i as a vector. Otherwise the attribute is assumed /// to have no parameters. -pub fn attr_params(attr: &Attribute) -> Vec<&Attribute> { +pub(crate) fn attr_params(attr: &Attribute) -> Vec<&Attribute> { match &attr.value { Attribute_::Parameterized(_, vs) => vs.value.iter().collect(), _ => vec![], @@ -71,7 +96,7 @@ pub fn attr_params(attr: &Attribute) -> Vec<&Attribute> { /// Helper to extract a named value attribute, as in `n [= v]`. #[allow(unused)] -pub fn attr_value(attr: &Attribute) -> Option<(&Name, Option<&AttributeValue>)> { +pub(crate) fn attr_value(attr: &Attribute) -> Option<(&Name, Option<&AttributeValue>)> { match &attr.value { Attribute_::Name(n) => Some((n, None)), Attribute_::Assigned(n, v) => Some((n, Some(v))), @@ -80,7 +105,7 @@ pub fn attr_value(attr: &Attribute) -> Option<(&Name, Option<&AttributeValue>)> } /// Creates a new attribute. -pub fn new_attr(loc: Loc, name: &str, params: Vec) -> Attribute { +pub(crate) fn new_attr(loc: Loc, name: &str, params: Vec) -> Attribute { let n = sp(loc, Symbol::from(name)); if params.is_empty() { sp(loc, Attribute_::Name(n)) @@ -90,7 +115,7 @@ pub fn new_attr(loc: Loc, name: &str, params: Vec) -> Attribute { } /// Helper to create a new native function declaration. -pub fn new_native_fun( +pub(crate) fn new_native_fun( loc: Loc, name: FunctionName, attributes: Attributes, @@ -112,7 +137,7 @@ pub fn new_native_fun( } /// Helper to create a new function declaration. -pub fn new_fun( +pub(crate) fn new_fun( loc: Loc, name: FunctionName, attributes: Attributes, @@ -138,7 +163,7 @@ pub fn new_fun( } /// Helper to create a new struct declaration. -pub fn new_struct(loc: Loc, name: StructName, fields: StructFields) -> StructDefinition { +pub(crate) fn new_struct(loc: Loc, name: StructName, fields: StructFields) -> StructDefinition { StructDefinition { attributes: vec![sp( // #[event] @@ -154,12 +179,12 @@ pub fn new_struct(loc: Loc, name: StructName, fields: StructFields) -> StructDef } /// Helper to create a new named variable. 
-pub fn new_var(loc: Loc, name: &str) -> Var { +pub(crate) fn new_var(loc: Loc, name: &str) -> Var { Var(sp(loc, Symbol::from(name))) } /// Helper to create a new type, based on its simple name. -pub fn new_simple_type(loc: Loc, ty_str: &str, ty_args: Vec) -> Type { +pub(crate) fn new_simple_type(loc: Loc, ty_str: &str, ty_args: Vec) -> Type { sp( loc, Type_::Apply(Box::new(new_simple_name(loc, ty_str)), ty_args), @@ -167,12 +192,17 @@ pub fn new_simple_type(loc: Loc, ty_str: &str, ty_args: Vec) -> Type { } /// Helper to create a simple name. -pub fn new_simple_name(loc: Loc, name: &str) -> NameAccessChain { +pub(crate) fn new_simple_name(loc: Loc, name: &str) -> NameAccessChain { sp(loc, NameAccessChain_::One(sp(loc, Symbol::from(name)))) } /// Helper to create a full name. -pub fn new_full_name(loc: Loc, addr_alias: &str, module: &str, name: &str) -> NameAccessChain { +pub(crate) fn new_full_name( + loc: Loc, + addr_alias: &str, + module: &str, + name: &str, +) -> NameAccessChain { let leading = sp( loc, LeadingNameAccess_::Name(sp(loc, Symbol::from(addr_alias))), @@ -187,22 +217,22 @@ pub fn new_full_name(loc: Loc, addr_alias: &str, module: &str, name: &str) -> Na } /// Helper to create a call exp. -pub fn new_call_exp(loc: Loc, fun: NameAccessChain, args: Vec) -> Exp { +pub(crate) fn new_call_exp(loc: Loc, fun: NameAccessChain, args: Vec) -> Exp { sp(loc, Exp_::Call(fun, false, None, sp(loc, args))) } -pub fn new_borrow_exp(loc: Loc, arg: Exp) -> Exp { +pub(crate) fn new_borrow_exp(loc: Loc, arg: Exp) -> Exp { sp(loc, Exp_::Borrow(false, Box::new(arg))) } /// Helper to create a name exp. -pub fn new_simple_name_exp(loc: Loc, name: Name) -> Exp { +pub(crate) fn new_simple_name_exp(loc: Loc, name: Name) -> Exp { sp(loc, Exp_::Name(sp(loc, NameAccessChain_::One(name)), None)) } /// Helper to create an expression for denoting a vector value. #[allow(unused)] -pub fn new_vec_u8(loc: Loc, vec: &[u8]) -> Exp { +pub(crate) fn new_vec_u8(loc: Loc, vec: &[u8]) -> Exp { let values = vec .iter() .map(|x| { @@ -223,7 +253,7 @@ pub fn new_vec_u8(loc: Loc, vec: &[u8]) -> Exp { } /// Helper to create new u64. 
-pub fn new_u64(loc: Loc, val: u64) -> Exp { +pub(crate) fn new_u64(loc: Loc, val: u64) -> Exp { sp( loc, Exp_::Value(sp(loc, Value_::Num(Symbol::from(val.to_string())))), diff --git a/third_party/move/move-compiler/src/bin/move-build.rs b/third_party/move/move-compiler/src/bin/move-build.rs index 86cb802d2581d..3dcd81fa270a1 100644 --- a/third_party/move/move-compiler/src/bin/move-build.rs +++ b/third_party/move/move-compiler/src/bin/move-build.rs @@ -8,7 +8,7 @@ use clap::*; use move_command_line_common::files::verify_and_create_named_address_mapping; use move_compiler::{ command_line::{self as cli}, - shared::{self, Flags, NumericalAddress}, + shared::{self, known_attributes::KnownAttribute, Flags, NumericalAddress}, }; #[derive(Debug, Parser)] @@ -77,11 +77,16 @@ pub fn main() -> anyhow::Result<()> { let interface_files_dir = format!("{}/generated_interface_files", out_dir); let named_addr_map = verify_and_create_named_address_mapping(named_addresses)?; let bytecode_version = flags.bytecode_version(); - let (files, compiled_units) = - move_compiler::Compiler::from_files(source_files, dependencies, named_addr_map) - .set_interface_files_dir(interface_files_dir) - .set_flags(flags) - .build_and_report()?; + + let (files, compiled_units) = move_compiler::Compiler::from_files( + source_files, + dependencies, + named_addr_map, + flags, + KnownAttribute::get_all_attribute_names(), + ) + .set_interface_files_dir(interface_files_dir) + .build_and_report()?; move_compiler::output_compiled_units( bytecode_version, emit_source_map, diff --git a/third_party/move/move-compiler/src/bin/move-check.rs b/third_party/move/move-compiler/src/bin/move-check.rs index 294db69b7b38b..5a7586250f8bc 100644 --- a/third_party/move/move-compiler/src/bin/move-check.rs +++ b/third_party/move/move-compiler/src/bin/move-check.rs @@ -8,7 +8,7 @@ use clap::*; use move_command_line_common::files::verify_and_create_named_address_mapping; use move_compiler::{ command_line::{self as cli}, - shared::{self, Flags, NumericalAddress}, + shared::{self, known_attributes::KnownAttribute, Flags, NumericalAddress}, }; #[derive(Debug, Parser)] @@ -65,10 +65,16 @@ pub fn main() -> anyhow::Result<()> { named_addresses, } = Options::parse(); let named_addr_map = verify_and_create_named_address_mapping(named_addresses)?; - let _files = move_compiler::Compiler::from_files(source_files, dependencies, named_addr_map) - .set_interface_files_dir_opt(out_dir) - .set_flags(flags) - .check_and_report()?; + + let _files = move_compiler::Compiler::from_files( + source_files, + dependencies, + named_addr_map, + flags, + KnownAttribute::get_all_attribute_names(), + ) + .set_interface_files_dir_opt(out_dir) + .check_and_report()?; Ok(()) } diff --git a/third_party/move/move-compiler/src/command_line/compiler.rs b/third_party/move/move-compiler/src/command_line/compiler.rs index ae2ac2f4763cb..3dccf4c16bdf9 100644 --- a/third_party/move/move-compiler/src/command_line/compiler.rs +++ b/third_party/move/move-compiler/src/command_line/compiler.rs @@ -3,6 +3,7 @@ // SPDX-License-Identifier: Apache-2.0 use crate::{ + attr_derivation::add_attributes_for_flavor, cfgir, command_line::{DEFAULT_OUTPUT_DIR, MOVE_COMPILED_INTERFACES_DIR}, compiled_unit, @@ -22,7 +23,7 @@ use move_command_line_common::files::{ use move_core_types::language_storage::ModuleId as CompiledModuleId; use move_symbol_pool::Symbol; use std::{ - collections::BTreeMap, + collections::{BTreeMap, BTreeSet}, fs, fs::File, io::{Read, Write}, @@ -42,6 +43,7 @@ pub struct Compiler<'a> { 
pre_compiled_lib: Option<&'a FullyCompiledProgram>, compiled_module_named_address_mapping: BTreeMap, flags: Flags, + known_attributes: BTreeSet, } pub struct SteppedCompiler<'a, const P: Pass> { @@ -95,6 +97,8 @@ impl<'a> Compiler<'a> { pub fn from_package_paths, NamedAddress: Into>( targets: Vec>, deps: Vec>, + flags: Flags, + known_attributes: &BTreeSet, ) -> Self { fn indexed_scopes( maps: &mut NamedAddressMaps, @@ -132,7 +136,8 @@ impl<'a> Compiler<'a> { interface_files_dir_opt: None, pre_compiled_lib: None, compiled_module_named_address_mapping: BTreeMap::new(), - flags: Flags::empty(), + flags, + known_attributes: known_attributes.clone(), } } @@ -140,6 +145,8 @@ impl<'a> Compiler<'a> { targets: Vec, deps: Vec, named_address_map: BTreeMap, + flags: Flags, + known_attributes: &BTreeSet, ) -> Self { let targets = vec![PackagePaths { name: None, @@ -151,13 +158,7 @@ impl<'a> Compiler<'a> { paths: deps, named_address_map, }]; - Self::from_package_paths(targets, deps) - } - - pub fn set_flags(mut self, flags: Flags) -> Self { - assert!(self.flags.is_empty()); - self.flags = flags; - self + Self::from_package_paths(targets, deps, flags, known_attributes) } pub fn set_interface_files_dir(mut self, dir: String) -> Self { @@ -210,13 +211,15 @@ impl<'a> Compiler<'a> { pre_compiled_lib, compiled_module_named_address_mapping, flags, + mut known_attributes, } = self; generate_interface_files_for_deps( &mut deps, interface_files_dir_opt, &compiled_module_named_address_mapping, )?; - let mut compilation_env = CompilationEnv::new(flags); + add_attributes_for_flavor(&flags, &mut known_attributes); + let mut compilation_env = CompilationEnv::new(flags, known_attributes); let (source_text, pprog_and_comments_res) = parse_program(&mut compilation_env, maps, targets, deps)?; let res: Result<_, Diagnostics> = pprog_and_comments_res.and_then(|(pprog, comments)| { @@ -428,12 +431,16 @@ pub fn construct_pre_compiled_lib, NamedAddress: Into>, interface_files_dir_opt: Option, flags: Flags, + known_attributes: &BTreeSet, ) -> anyhow::Result> { - let (files, pprog_and_comments_res) = - Compiler::from_package_paths(targets, Vec::>::new()) - .set_interface_files_dir_opt(interface_files_dir_opt) - .set_flags(flags) - .run::()?; + let (files, pprog_and_comments_res) = Compiler::from_package_paths( + targets, + Vec::>::new(), + flags, + known_attributes, + ) + .set_interface_files_dir_opt(interface_files_dir_opt) + .run::()?; let (_comments, stepped) = match pprog_and_comments_res { Err(errors) => return Ok(Err((files, errors))), diff --git a/third_party/move/move-compiler/src/command_line/mod.rs b/third_party/move/move-compiler/src/command_line/mod.rs index f305e20bc3f03..e0badc1288326 100644 --- a/third_party/move/move-compiler/src/command_line/mod.rs +++ b/third_party/move/move-compiler/src/command_line/mod.rs @@ -17,6 +17,8 @@ pub const DEFAULT_OUTPUT_DIR: &str = "build"; pub const SHADOW: &str = "shadow"; pub const SHADOW_SHORT: char = 'S'; +pub const SKIP_ATTRIBUTE_CHECKS: &str = "skip-attribute-checks"; + pub const SOURCE_MAP: &str = "source-map"; pub const SOURCE_MAP_SHORT: char = 'm'; diff --git a/third_party/move/move-compiler/src/diagnostics/codes.rs b/third_party/move/move-compiler/src/diagnostics/codes.rs index bda46559ab0d2..04fe20fe839a1 100644 --- a/third_party/move/move-compiler/src/diagnostics/codes.rs +++ b/third_party/move/move-compiler/src/diagnostics/codes.rs @@ -136,6 +136,8 @@ codes!( InvalidNonPhantomUse: { msg: "invalid non-phantom type parameter usage", severity: Warning }, 
InvalidAttribute: { msg: "invalid attribute", severity: NonblockingError }, + // TODO(https://github.com/aptos-labs/aptos-core/issues/9411) turn into NonblockingError when safe to do so. + UnknownAttribute: { msg: "unknown attribute", severity: Warning }, ], // errors name resolution, mostly expansion/translate and naming/translate NameResolution: [ diff --git a/third_party/move/move-compiler/src/expansion/ast.rs b/third_party/move/move-compiler/src/expansion/ast.rs index 217671b1b9a28..eee44aeafb591 100644 --- a/third_party/move/move-compiler/src/expansion/ast.rs +++ b/third_party/move/move-compiler/src/expansion/ast.rs @@ -8,8 +8,11 @@ use crate::{ QuantKind, SpecApplyPattern, StructName, UnaryOp, Var, ENTRY_MODIFIER, }, shared::{ - ast_debug::*, known_attributes::KnownAttribute, unique_map::UniqueMap, - unique_set::UniqueSet, *, + ast_debug::*, + known_attributes::{AttributeKind, KnownAttribute}, + unique_map::UniqueMap, + unique_set::UniqueSet, + *, }, }; use move_ir_types::location::*; diff --git a/third_party/move/move-compiler/src/expansion/translate.rs b/third_party/move/move-compiler/src/expansion/translate.rs index 4ea08bf3f83bf..6c5c64969c8f4 100644 --- a/third_party/move/move-compiler/src/expansion/translate.rs +++ b/third_party/move/move-compiler/src/expansion/translate.rs @@ -4,6 +4,7 @@ use super::aliases::{AliasMapBuilder, OldAliasMap}; use crate::{ + command_line::SKIP_ATTRIBUTE_CHECKS, diag, diagnostics::Diagnostic, expansion::{ @@ -14,16 +15,23 @@ use crate::{ parser::ast::{ self as P, Ability, ConstantName, Field, FunctionName, ModuleName, StructName, Var, }, - shared::{known_attributes::AttributePosition, unique_map::UniqueMap, *}, + shared::{ + known_attributes::{AttributeKind, AttributePosition, KnownAttribute}, + parse_u128, parse_u64, parse_u8, + unique_map::UniqueMap, + CompilationEnv, Identifier, Name, NamedAddressMap, NamedAddressMaps, NumericalAddress, + }, FullyCompiledProgram, }; use move_command_line_common::parser::{parse_u16, parse_u256, parse_u32}; use move_ir_types::location::*; use move_symbol_pool::Symbol; +use once_cell::sync::Lazy; use std::{ collections::{BTreeMap, BTreeSet, VecDeque}, iter::IntoIterator, }; +use str; //************************************************************************************************** // Context @@ -39,6 +47,7 @@ struct Context<'env, 'map> { in_spec_context: bool, exp_specs: BTreeMap, env: &'env mut CompilationEnv, + in_aptos_stdlib: bool, // TODO(https://github.com/aptos-labs/aptos-core/issues/9410) remove after bugfix propagates. } impl<'env, 'map> Context<'env, 'map> { fn new( @@ -53,6 +62,7 @@ impl<'env, 'map> Context<'env, 'map> { aliases: AliasMap::new(), is_source_definition: false, in_spec_context: false, + in_aptos_stdlib: false, exp_specs: BTreeMap::new(), } } @@ -384,6 +394,32 @@ fn set_sender_address( }) } +// This is a hack to recognize APTOS StdLib to avoid warnings on some old errors. +// This will be removed after library attributes are cleaned up. 
+// (See https://github.com/aptos-labs/aptos-core/issues/9410) +fn module_is_in_aptos_stdlib(module_address: Option>) -> bool { + const APTOS_STDLIB_NAME: &str = "aptos_std"; + static APTOS_STDLIB_NUMERICAL_ADDRESS: Lazy = + Lazy::new(|| NumericalAddress::parse_str("0x1").unwrap()); + match &module_address { + Some(spanned_address) => { + let address = spanned_address.value; + match address { + Address::Numerical(optional_name, spanned_numerical_address) => match optional_name + { + Some(spanned_symbol) => { + (&spanned_symbol.value as &str) == APTOS_STDLIB_NAME + && (spanned_numerical_address.value == *APTOS_STDLIB_NUMERICAL_ADDRESS) + }, + None => false, + }, + Address::NamedUnassigned(_) => false, + } + }, + None => false, + } +} + fn module_( context: &mut Context, package_name: Option, @@ -398,7 +434,9 @@ fn module_( name, members, } = mdef; + context.in_aptos_stdlib = module_is_in_aptos_stdlib(module_address); let attributes = flatten_attributes(context, AttributePosition::Module, attributes); + assert!(context.address.is_none()); assert!(address.is_none()); set_sender_address(context, &name, module_address); @@ -574,12 +612,40 @@ fn unique_attributes( | E::Attribute_::Assigned(n, _) | E::Attribute_::Parameterized(n, _) => *n, }; - let name_ = match known_attributes::KnownAttribute::resolve(sym) { - None => E::AttributeName_::Unknown(sym), + let name_ = match KnownAttribute::resolve(sym) { + None => { + let flags = &context.env.flags(); + if !flags.skip_attribute_checks() { + let known_attributes = &context.env.get_known_attributes(); + // TODO(See https://github.com/aptos-labs/aptos-core/issues/9410) remove after bugfix propagates. + if !is_nested && !known_attributes.contains(sym.as_str()) { + if !context.in_aptos_stdlib { + let msg = format!("Attribute name '{}' is unknown (use --{} CLI option to ignore); known attributes are '{:?}'.", + sym.as_str(), + SKIP_ATTRIBUTE_CHECKS, known_attributes); + context + .env + .add_diag(diag!(Declarations::UnknownAttribute, (nloc, msg))); + } + } else if is_nested && known_attributes.contains(sym.as_str()) { + let msg = format!( + "Known attribute '{}' is not expected in a nested attribute position.", + sym.as_str() + ); + context + .env + .add_diag(diag!(Declarations::InvalidAttribute, (nloc, msg))); + }; + } + E::AttributeName_::Unknown(sym) + }, Some(known) => { debug_assert!(known.name() == sym.as_str()); if is_nested { - let msg = "Known attribute '{}' is not expected in a nested attribute position"; + let msg = format!( + "Known attribute '{}' is not expected in a nested attribute position", + sym.as_str() + ); context .env .add_diag(diag!(Declarations::InvalidAttribute, (nloc, msg))); diff --git a/third_party/move/move-compiler/src/lib.rs b/third_party/move/move-compiler/src/lib.rs index eee5721e09681..3d58b6935c55b 100644 --- a/third_party/move/move-compiler/src/lib.rs +++ b/third_party/move/move-compiler/src/lib.rs @@ -7,7 +7,7 @@ #[macro_use(sp)] extern crate move_ir_types; -mod attr_derivation; +pub mod attr_derivation; pub mod cfgir; pub mod command_line; pub mod compiled_unit; diff --git a/third_party/move/move-compiler/src/shared/mod.rs b/third_party/move/move-compiler/src/shared/mod.rs index 7468f59c31fd9..c55fb9053cd71 100644 --- a/third_party/move/move-compiler/src/shared/mod.rs +++ b/third_party/move/move-compiler/src/shared/mod.rs @@ -12,7 +12,7 @@ use move_ir_types::location::*; use move_symbol_pool::Symbol; use petgraph::{algo::astar as petgraph_astar, graphmap::DiGraphMap}; use std::{ - 
collections::BTreeMap, + collections::{BTreeMap, BTreeSet}, fmt, hash::Hash, sync::atomic::{AtomicUsize, Ordering as AtomicOrdering}, @@ -51,7 +51,6 @@ pub fn parse_named_address(s: &str) -> anyhow::Result<(String, NumericalAddress) let name = before_after[0].parse()?; let addr = NumericalAddress::parse_str(before_after[1]) .map_err(|err| anyhow::format_err!("{}", err))?; - Ok((name, addr)) } @@ -175,15 +174,19 @@ pub type AttributeDeriver = dyn Fn(&mut CompilationEnv, &mut ModuleDefinition); pub struct CompilationEnv { flags: Flags, diags: Diagnostics, + /// Internal table used to pass known attributes to the parser for purposes of + /// checking for unknown attributes. + known_attributes: BTreeSet, // TODO(tzakian): Remove the global counter and use this counter instead // pub counter: u64, } impl CompilationEnv { - pub fn new(flags: Flags) -> Self { + pub fn new(flags: Flags, known_attributes: BTreeSet) -> Self { Self { flags, diags: Diagnostics::new(), + known_attributes, } } @@ -239,6 +242,10 @@ impl CompilationEnv { pub fn flags(&self) -> &Flags { &self.flags } + + pub fn get_known_attributes(&self) -> &BTreeSet { + &self.known_attributes + } } //************************************************************************************************** @@ -317,6 +324,12 @@ pub struct Flags { /// included only in tests, without creating the unit test code regular tests do. #[clap(skip)] keep_testing_functions: bool, + + /// Do not complain about unknown attributes. + #[clap( + long = cli::SKIP_ATTRIBUTE_CHECKS, + )] + pub skip_attribute_checks: bool, } impl Flags { @@ -328,6 +341,7 @@ impl Flags { flavor: "".to_string(), bytecode_version: None, keep_testing_functions: false, + skip_attribute_checks: false, } } @@ -339,6 +353,7 @@ impl Flags { flavor: "".to_string(), bytecode_version: None, keep_testing_functions: false, + skip_attribute_checks: false, } } @@ -350,6 +365,7 @@ impl Flags { flavor: "".to_string(), bytecode_version: None, keep_testing_functions: false, + skip_attribute_checks: false, } } @@ -361,6 +377,7 @@ impl Flags { flavor: "".to_string(), bytecode_version: None, keep_testing_functions: true, + skip_attribute_checks: false, } } @@ -412,6 +429,17 @@ impl Flags { pub fn bytecode_version(&self) -> Option { self.bytecode_version } + + pub fn skip_attribute_checks(&self) -> bool { + self.skip_attribute_checks + } + + pub fn set_skip_attribute_checks(self, new_value: bool) -> Self { + Self { + skip_attribute_checks: new_value, + ..self + } + } } //************************************************************************************************** @@ -435,11 +463,21 @@ pub mod known_attributes { Spec, } + pub trait AttributeKind + where + Self: Sized, + { + fn add_attribute_names(table: &mut BTreeSet); + fn name(&self) -> &str; + fn expected_positions(&self) -> &'static BTreeSet; + } + #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)] pub enum KnownAttribute { Testing(TestingAttribute), Verification(VerificationAttribute), Native(NativeAttribute), + Deprecation(DeprecationAttribute), } #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)] @@ -462,6 +500,13 @@ pub mod known_attributes { pub enum NativeAttribute { // It is a fake native function that actually compiles to a bytecode instruction BytecodeInstruction, + NativeInterface, + } + + #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)] + pub enum DeprecationAttribute { + // Marks deprecated funcitons whose use causes warnings + Deprecated, } impl fmt::Display for AttributePosition { @@ -494,29 +539,55 
@@ pub mod known_attributes { NativeAttribute::BYTECODE_INSTRUCTION => { Self::Native(NativeAttribute::BytecodeInstruction) }, + NativeAttribute::NATIVE_INTERFACE => Self::Native(NativeAttribute::NativeInterface), + DeprecationAttribute::DEPRECATED_NAME => { + Self::Deprecation(DeprecationAttribute::Deprecated) + }, _ => return None, }) } - pub const fn name(&self) -> &str { + pub fn get_all_attribute_names() -> &'static BTreeSet<String> { + static KNOWN_ATTRIBUTES_SET: Lazy<BTreeSet<String>> = Lazy::new(|| { + let mut known_attributes = BTreeSet::new(); + KnownAttribute::add_attribute_names(&mut known_attributes); + known_attributes + }); + &KNOWN_ATTRIBUTES_SET + } + } + + impl AttributeKind for KnownAttribute { + fn add_attribute_names(table: &mut BTreeSet<String>) { + TestingAttribute::add_attribute_names(table); + VerificationAttribute::add_attribute_names(table); + NativeAttribute::add_attribute_names(table); + DeprecationAttribute::add_attribute_names(table); + } + + fn name(&self) -> &str { match self { Self::Testing(a) => a.name(), Self::Verification(a) => a.name(), Self::Native(a) => a.name(), + Self::Deprecation(a) => a.name(), } } - pub fn expected_positions(&self) -> &'static BTreeSet<AttributePosition> { + fn expected_positions(&self) -> &'static BTreeSet<AttributePosition> { match self { Self::Testing(a) => a.expected_positions(), Self::Verification(a) => a.expected_positions(), Self::Native(a) => a.expected_positions(), + Self::Deprecation(a) => a.expected_positions(), } } } impl TestingAttribute { pub const ABORT_CODE_NAME: &'static str = "abort_code"; + const ALL_ATTRIBUTE_NAMES: [&'static str; 3] = + [Self::TEST, Self::TEST_ONLY, Self::EXPECTED_FAILURE]; pub const ARITHMETIC_ERROR_NAME: &'static str = "arithmetic_error"; pub const ERROR_LOCATION: &'static str = "location"; pub const EXPECTED_FAILURE: &'static str = "expected_failure"; @@ -527,7 +598,24 @@ pub mod known_attributes { pub const TEST_ONLY: &'static str = "test_only"; pub const VECTOR_ERROR_NAME: &'static str = "vector_error"; - pub const fn name(&self) -> &str { + pub fn expected_failure_cases() -> &'static [&'static str] { + &[ + Self::ABORT_CODE_NAME, + Self::ARITHMETIC_ERROR_NAME, + Self::VECTOR_ERROR_NAME, + Self::OUT_OF_GAS_NAME, + Self::MAJOR_STATUS_NAME, + ] + } + } + impl AttributeKind for TestingAttribute { + fn add_attribute_names(table: &mut BTreeSet<String>) { + for str in Self::ALL_ATTRIBUTE_NAMES { + table.insert(str.to_string()); + } + } + + fn name(&self) -> &str { match self { Self::Test => Self::TEST, Self::TestOnly => Self::TEST_ONLY, @@ -535,7 +623,7 @@ } } - pub fn expected_positions(&self) -> &'static BTreeSet<AttributePosition> { + fn expected_positions(&self) -> &'static BTreeSet<AttributePosition> { static TEST_ONLY_POSITIONS: Lazy<BTreeSet<AttributePosition>> = Lazy::new(|| { IntoIterator::into_iter([ AttributePosition::AddressBlock, @@ -558,28 +646,26 @@ pub mod known_attributes { TestingAttribute::ExpectedFailure => &EXPECTED_FAILURE_POSITIONS, } } - - pub fn expected_failure_cases() -> &'static [&'static str] { - &[ - Self::ABORT_CODE_NAME, - Self::ARITHMETIC_ERROR_NAME, - Self::VECTOR_ERROR_NAME, - Self::OUT_OF_GAS_NAME, - Self::MAJOR_STATUS_NAME, - ] - } } impl VerificationAttribute { + const ALL_ATTRIBUTE_NAMES: [&'static str; 1] = [Self::VERIFY_ONLY]; pub const VERIFY_ONLY: &'static str = "verify_only"; + } + impl AttributeKind for VerificationAttribute { + fn add_attribute_names(table: &mut BTreeSet<String>) { + for str in Self::ALL_ATTRIBUTE_NAMES { + table.insert(str.to_string()); + } + } - pub const fn name(&self) -> &str { + fn name(&self) -> &str { match self { Self::VerifyOnly =>
Self::VERIFY_ONLY, } } - pub fn expected_positions(&self) -> &'static BTreeSet<AttributePosition> { + fn expected_positions(&self) -> &'static BTreeSet<AttributePosition> { static VERIFY_ONLY_POSITIONS: Lazy<BTreeSet<AttributePosition>> = Lazy::new(|| { IntoIterator::into_iter([ AttributePosition::AddressBlock, @@ -599,19 +685,68 @@ pub mod known_attributes { } impl NativeAttribute { + const ALL_ATTRIBUTE_NAMES: [&'static str; 2] = + [Self::BYTECODE_INSTRUCTION, Self::NATIVE_INTERFACE]; pub const BYTECODE_INSTRUCTION: &'static str = "bytecode_instruction"; + pub const NATIVE_INTERFACE: &'static str = "native_interface"; + } + impl AttributeKind for NativeAttribute { + fn add_attribute_names(table: &mut BTreeSet<String>) { + for str in Self::ALL_ATTRIBUTE_NAMES { + table.insert(str.to_string()); + } + } - pub const fn name(&self) -> &str { + fn name(&self) -> &str { match self { NativeAttribute::BytecodeInstruction => Self::BYTECODE_INSTRUCTION, + NativeAttribute::NativeInterface => Self::NATIVE_INTERFACE, } } - pub fn expected_positions(&self) -> &'static BTreeSet<AttributePosition> { + fn expected_positions(&self) -> &'static BTreeSet<AttributePosition> { static BYTECODE_INSTRUCTION_POSITIONS: Lazy<BTreeSet<AttributePosition>> = Lazy::new(|| IntoIterator::into_iter([AttributePosition::Function]).collect()); + static NATIVE_INTERFACE_POSITIONS: Lazy<BTreeSet<AttributePosition>> = + Lazy::new(|| IntoIterator::into_iter([AttributePosition::Function]).collect()); match self { NativeAttribute::BytecodeInstruction => &BYTECODE_INSTRUCTION_POSITIONS, + NativeAttribute::NativeInterface => &NATIVE_INTERFACE_POSITIONS, + } + } + } + + impl DeprecationAttribute { + const ALL_ATTRIBUTE_NAMES: [&'static str; 1] = [Self::DEPRECATED_NAME]; + pub const DEPRECATED_NAME: &'static str = "deprecated"; + } + + impl AttributeKind for DeprecationAttribute { + fn add_attribute_names(table: &mut BTreeSet<String>) { + for str in Self::ALL_ATTRIBUTE_NAMES { + table.insert(str.to_string()); + } + } + + fn name(&self) -> &str { + match self { + Self::Deprecated => Self::DEPRECATED_NAME, + } + } + + fn expected_positions(&self) -> &'static BTreeSet<AttributePosition> { + static DEPRECATED_POSITIONS: Lazy<BTreeSet<AttributePosition>> = Lazy::new(|| { + IntoIterator::into_iter([ + AttributePosition::AddressBlock, + AttributePosition::Module, + AttributePosition::Constant, + AttributePosition::Struct, + AttributePosition::Function, + ]) + .collect() + }); + match self { + Self::Deprecated => &DEPRECATED_POSITIONS, } } } diff --git a/third_party/move/move-compiler/src/unit_test/filter_test_members.rs b/third_party/move/move-compiler/src/unit_test/filter_test_members.rs index 7931b9fe16855..b4aff62312f71 100644 --- a/third_party/move/move-compiler/src/unit_test/filter_test_members.rs +++ b/third_party/move/move-compiler/src/unit_test/filter_test_members.rs @@ -241,7 +241,9 @@ fn test_attributes(attrs: &P::Attributes) -> Vec<(Loc, known_attributes::Testing .filter_map( |attr| match KnownAttribute::resolve(attr.value.attribute_name().value)?
{ KnownAttribute::Testing(test_attr) => Some((attr.loc, test_attr)), - KnownAttribute::Verification(_) | KnownAttribute::Native(_) => None, + KnownAttribute::Verification(_) + | KnownAttribute::Native(_) + | KnownAttribute::Deprecation(_) => None, }, ) .collect() diff --git a/third_party/move/move-compiler/src/unit_test/plan_builder.rs b/third_party/move/move-compiler/src/unit_test/plan_builder.rs index a36e733b3e08e..0133e1d17c500 100644 --- a/third_party/move/move-compiler/src/unit_test/plan_builder.rs +++ b/third_party/move/move-compiler/src/unit_test/plan_builder.rs @@ -10,7 +10,7 @@ use crate::{ }, parser::ast::ConstantName, shared::{ - known_attributes::{KnownAttribute, TestingAttribute}, + known_attributes::{AttributeKind, KnownAttribute, TestingAttribute}, unique_map::UniqueMap, CompilationEnv, Identifier, NumericalAddress, }, diff --git a/third_party/move/move-compiler/src/verification/ast_filter.rs b/third_party/move/move-compiler/src/verification/ast_filter.rs index ec39fafcbd9bd..32aeab8a39467 100644 --- a/third_party/move/move-compiler/src/verification/ast_filter.rs +++ b/third_party/move/move-compiler/src/verification/ast_filter.rs @@ -66,7 +66,9 @@ fn verification_attributes( .filter_map( |attr| match KnownAttribute::resolve(attr.value.attribute_name().value)? { KnownAttribute::Verification(verify_attr) => Some((attr.loc, verify_attr)), - KnownAttribute::Testing(_) | KnownAttribute::Native(_) => None, + KnownAttribute::Testing(_) + | KnownAttribute::Native(_) + | KnownAttribute::Deprecation(_) => None, }, ) .collect() diff --git a/third_party/move/move-compiler/tests/move_check/parser/aptos_stdlib_attributes.exp b/third_party/move/move-compiler/tests/move_check/parser/aptos_stdlib_attributes.exp new file mode 100644 index 0000000000000..563f0310aeee4 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/parser/aptos_stdlib_attributes.exp @@ -0,0 +1,24 @@ +error[E02001]: duplicate declaration, item, or annotation + ┌─ tests/move_check/parser/aptos_stdlib_attributes.move:4:10 + │ +4 │ #[a, a(x = 0)] + │ - ^^^^^^^^ Duplicate attribute 'a' attached to the same item + │ │ + │ Attribute previously given here + +error[E02001]: duplicate declaration, item, or annotation + ┌─ tests/move_check/parser/aptos_stdlib_attributes.move:8:12 + │ +8 │ #[b(a, a = 0, a(x = 1))] + │ - ^^^^^ Duplicate attribute 'a' attached to the same item + │ │ + │ Attribute previously given here + +error[E02001]: duplicate declaration, item, or annotation + ┌─ tests/move_check/parser/aptos_stdlib_attributes.move:8:19 + │ +8 │ #[b(a, a = 0, a(x = 1))] + │ - ^^^^^^^^ Duplicate attribute 'a' attached to the same item + │ │ + │ Attribute previously given here + diff --git a/third_party/move/move-compiler/tests/move_check/parser/aptos_stdlib_attributes.move b/third_party/move/move-compiler/tests/move_check/parser/aptos_stdlib_attributes.move new file mode 100644 index 0000000000000..2180c108cb141 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/parser/aptos_stdlib_attributes.move @@ -0,0 +1,10 @@ +// Test that warnings about unknown "#[testonly]" attribute is +// suppressed in apts_std module. 
+module aptos_std::module_with_suppressed_warnings { + #[a, a(x = 0)] + fun foo() {} + + #[testonly] + #[b(a, a = 0, a(x = 1))] + fun bar() {} +} diff --git a/third_party/move/move-compiler/tests/move_check/parser/aptos_stdlib_attributes2.move b/third_party/move/move-compiler/tests/move_check/parser/aptos_stdlib_attributes2.move new file mode 100644 index 0000000000000..781e7484cc8cb --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/parser/aptos_stdlib_attributes2.move @@ -0,0 +1,6 @@ +module aptos_std::M { + fun foo() {} + + #[testonly] + fun bar() {} +} diff --git a/third_party/move/move-compiler/tests/move_check/parser/attribute_placement.exp b/third_party/move/move-compiler/tests/move_check/parser/attribute_placement.exp new file mode 100644 index 0000000000000..aee01fa4e65f6 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/parser/attribute_placement.exp @@ -0,0 +1,84 @@ +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:3:3 + │ +3 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:5:7 + │ +5 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:8:7 + │ +8 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:11:7 + │ +11 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:14:7 + │ +14 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:17:7 + │ +17 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:22:3 + │ +22 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. 
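The placements exercised by this expected output are governed by the AttributeKind trait added earlier in this patch: each attribute family reports its legal placements through expected_positions(), which returns a static BTreeSet of AttributePosition values. The helper below is an illustrative sketch only, not part of the patch, and assumes AttributePosition and the trait are importable from move_compiler::shared::known_attributes.

use move_compiler::shared::known_attributes::{AttributeKind, AttributePosition, KnownAttribute};

// Illustrative helper: report whether a resolved attribute may appear at a given
// position (for example, a #[deprecated] attribute on a struct or a function).
fn placement_allowed(attr: &KnownAttribute, position: &AttributePosition) -> bool {
    attr.expected_positions().contains(position)
}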
+ +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:24:7 + │ +24 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:27:7 + │ +27 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:31:3 + │ +31 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:33:7 + │ +33 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:36:7 + │ +36 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:39:7 + │ +39 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_placement.move:44:7 + │ +44 │ #[attr] + │ ^^^^ Attribute name 'attr' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + diff --git a/third_party/move/move-compiler/tests/move_check/parser/attribute_variants.exp b/third_party/move/move-compiler/tests/move_check/parser/attribute_variants.exp new file mode 100644 index 0000000000000..8d58dee27063a --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/parser/attribute_variants.exp @@ -0,0 +1,60 @@ +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_variants.move:2:3 + │ +2 │ #[attr0] + │ ^^^^^ Attribute name 'attr0' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. 
+ +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_variants.move:3:3 + │ +3 │ #[attr1=0, attr2=b"hello", attr3=x"0f", attr4=0x42, attr5(attr0, attr1, attr2(attr0, attr1=0))] + │ ^^^^^ Attribute name 'attr1' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_variants.move:3:12 + │ +3 │ #[attr1=0, attr2=b"hello", attr3=x"0f", attr4=0x42, attr5(attr0, attr1, attr2(attr0, attr1=0))] + │ ^^^^^ Attribute name 'attr2' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_variants.move:3:28 + │ +3 │ #[attr1=0, attr2=b"hello", attr3=x"0f", attr4=0x42, attr5(attr0, attr1, attr2(attr0, attr1=0))] + │ ^^^^^ Attribute name 'attr3' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_variants.move:3:41 + │ +3 │ #[attr1=0, attr2=b"hello", attr3=x"0f", attr4=0x42, attr5(attr0, attr1, attr2(attr0, attr1=0))] + │ ^^^^^ Attribute name 'attr4' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_variants.move:3:53 + │ +3 │ #[attr1=0, attr2=b"hello", attr3=x"0f", attr4=0x42, attr5(attr0, attr1, attr2(attr0, attr1=0))] + │ ^^^^^ Attribute name 'attr5' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_variants.move:4:3 + │ +4 │ #[bttr0=false, bttr1=0u8, bttr2=0u64, bttr3=0u128] + │ ^^^^^ Attribute name 'bttr0' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_variants.move:4:16 + │ +4 │ #[bttr0=false, bttr1=0u8, bttr2=0u64, bttr3=0u128] + │ ^^^^^ Attribute name 'bttr1' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_variants.move:4:27 + │ +4 │ #[bttr0=false, bttr1=0u8, bttr2=0u64, bttr3=0u128] + │ ^^^^^ Attribute name 'bttr2' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. 
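The attribute set quoted in these messages is the one assembled by KnownAttribute::get_all_attribute_names() and handed to the parser through the new CompilationEnv constructor. A minimal sketch of that wiring follows; the module paths move_compiler::shared::{CompilationEnv, Flags} and known_attributes::KnownAttribute are assumptions, and real callers normally go through Compiler::from_files as shown in the move_check_testsuite.rs hunk later in this patch.

use move_compiler::shared::{known_attributes::KnownAttribute, CompilationEnv, Flags};

// Sketch: build a CompilationEnv with the known-attribute table. Setting
// skip_attribute_checks(true) corresponds to the --skip-attribute-checks CLI option
// and suppresses the W02016 warnings shown above.
fn make_env(skip_attribute_checks: bool) -> CompilationEnv {
    let flags = Flags::empty().set_skip_attribute_checks(skip_attribute_checks);
    CompilationEnv::new(flags, KnownAttribute::get_all_attribute_names().clone())
}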
+ +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/attribute_variants.move:4:39 + │ +4 │ #[bttr0=false, bttr1=0u8, bttr2=0u64, bttr3=0u128] + │ ^^^^^ Attribute name 'bttr3' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + diff --git a/third_party/move/move-compiler/tests/move_check/parser/duplicate_attributes.exp b/third_party/move/move-compiler/tests/move_check/parser/duplicate_attributes.exp index 4930a1562bd0e..ece55e6ee1b76 100644 --- a/third_party/move/move-compiler/tests/move_check/parser/duplicate_attributes.exp +++ b/third_party/move/move-compiler/tests/move_check/parser/duplicate_attributes.exp @@ -1,3 +1,15 @@ +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/duplicate_attributes.move:2:7 + │ +2 │ #[a, a(x = 0)] + │ ^ Attribute name 'a' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/duplicate_attributes.move:2:10 + │ +2 │ #[a, a(x = 0)] + │ ^ Attribute name 'a' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + error[E02001]: duplicate declaration, item, or annotation ┌─ tests/move_check/parser/duplicate_attributes.move:2:10 │ @@ -6,6 +18,12 @@ error[E02001]: duplicate declaration, item, or annotation │ │ │ Attribute previously given here +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/duplicate_attributes.move:5:7 + │ +5 │ #[b(a, a = 0, a(x = 1))] + │ ^ Attribute name 'b' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + error[E02001]: duplicate declaration, item, or annotation ┌─ tests/move_check/parser/duplicate_attributes.move:5:12 │ diff --git a/third_party/move/move-compiler/tests/move_check/parser/testonly.exp b/third_party/move/move-compiler/tests/move_check/parser/testonly.exp new file mode 100644 index 0000000000000..0dc78b0a3f312 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/parser/testonly.exp @@ -0,0 +1,12 @@ +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/testonly.move:5:7 + │ +5 │ #[testonly] + │ ^^^^^^^^ Attribute name 'testonly' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + +warning[W02016]: unknown attribute + ┌─ tests/move_check/parser/testonly.move:15:7 + │ +15 │ #[view] + │ ^^^^ Attribute name 'view' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. 
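These expectations are produced with attribute checks enabled; the mirrored tests under skip_attribute_checks/ further down run with the check disabled. The following is a sketch of the embedding pattern used by the updated move_check_testsuite.rs later in this patch, with placeholder targets, deps, and an empty address map standing in for the real inputs.

use move_compiler::{
    shared::{known_attributes::KnownAttribute, Flags, NumericalAddress},
    Compiler, PASS_PARSER,
};
use std::collections::BTreeMap;

// Sketch: flags and the known-attribute table are now passed to Compiler::from_files
// directly instead of being applied afterwards with set_flags().
fn parse_only(targets: Vec<String>, deps: Vec<String>, skip_checks: bool) -> anyhow::Result<()> {
    let addresses: BTreeMap<String, NumericalAddress> = BTreeMap::new(); // placeholder
    let flags = Flags::empty().set_skip_attribute_checks(skip_checks);
    let _parsed = Compiler::from_files(
        targets,
        deps,
        addresses,
        flags,
        KnownAttribute::get_all_attribute_names(),
    )
    .run::<PASS_PARSER>()?;
    Ok(())
}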
+ diff --git a/third_party/move/move-compiler/tests/move_check/parser/testonly.move b/third_party/move/move-compiler/tests/move_check/parser/testonly.move new file mode 100644 index 0000000000000..6631942d17dc6 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/parser/testonly.move @@ -0,0 +1,19 @@ +module 0x1::A { + #[test] + fun a() { } + + #[testonly] + public fun a_call() { + abort 0 + } + + #[test_only] + public fun b_call() { + abort 0 + } + + #[view] + public fun c_call() { + abort 0 + } +} diff --git a/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/aptos_stdlib_attributes.exp b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/aptos_stdlib_attributes.exp new file mode 100644 index 0000000000000..d6ae3f625caa8 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/aptos_stdlib_attributes.exp @@ -0,0 +1,24 @@ +error[E02001]: duplicate declaration, item, or annotation + ┌─ tests/move_check/skip_attribute_checks/aptos_stdlib_attributes.move:4:10 + │ +4 │ #[a, a(x = 0)] + │ - ^^^^^^^^ Duplicate attribute 'a' attached to the same item + │ │ + │ Attribute previously given here + +error[E02001]: duplicate declaration, item, or annotation + ┌─ tests/move_check/skip_attribute_checks/aptos_stdlib_attributes.move:8:12 + │ +8 │ #[b(a, a = 0, a(x = 1))] + │ - ^^^^^ Duplicate attribute 'a' attached to the same item + │ │ + │ Attribute previously given here + +error[E02001]: duplicate declaration, item, or annotation + ┌─ tests/move_check/skip_attribute_checks/aptos_stdlib_attributes.move:8:19 + │ +8 │ #[b(a, a = 0, a(x = 1))] + │ - ^^^^^^^^ Duplicate attribute 'a' attached to the same item + │ │ + │ Attribute previously given here + diff --git a/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/aptos_stdlib_attributes.move b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/aptos_stdlib_attributes.move new file mode 100644 index 0000000000000..2180c108cb141 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/aptos_stdlib_attributes.move @@ -0,0 +1,10 @@ +// Test that warnings about unknown "#[testonly]" attribute is +// suppressed in apts_std module. 
+module aptos_std::module_with_suppressed_warnings { + #[a, a(x = 0)] + fun foo() {} + + #[testonly] + #[b(a, a = 0, a(x = 1))] + fun bar() {} +} diff --git a/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/aptos_stdlib_attributes2.move b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/aptos_stdlib_attributes2.move new file mode 100644 index 0000000000000..781e7484cc8cb --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/aptos_stdlib_attributes2.move @@ -0,0 +1,6 @@ +module aptos_std::M { + fun foo() {} + + #[testonly] + fun bar() {} +} diff --git a/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_no_closing_bracket.exp b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_no_closing_bracket.exp new file mode 100644 index 0000000000000..3148a6a7ca752 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_no_closing_bracket.exp @@ -0,0 +1,8 @@ +error[E01002]: unexpected token + ┌─ tests/move_check/skip_attribute_checks/attribute_no_closing_bracket.move:4:5 + │ +3 │ #[attr = 0 + │ - To match this '[' +4 │ fun foo() {} + │ ^ Expected ']' + diff --git a/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_no_closing_bracket.move b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_no_closing_bracket.move new file mode 100644 index 0000000000000..9a28c27c391e4 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_no_closing_bracket.move @@ -0,0 +1,5 @@ +module 0x42::M { + // Errors expecting a ']' + #[attr = 0 + fun foo() {} +} diff --git a/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_placement.move b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_placement.move new file mode 100644 index 0000000000000..0c50c6f7a5485 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_placement.move @@ -0,0 +1,46 @@ +#[attr] +address 0x42 { +#[attr] +module M { + #[attr] + use 0x42::N; + + #[attr] + struct S {} + + #[attr] + const C: u64 = 0; + + #[attr] + public fun foo() { N::bar() } + + #[attr] + spec foo {} +} +} + +#[attr] +module 0x42::N { + #[attr] + friend 0x42::M; + + #[attr] + public fun bar() {} +} + +#[attr] +script { + #[attr] + use 0x42::M; + + #[attr] + const C: u64 = 0; + + #[attr] + fun main() { + M::foo(); + } + + #[attr] + spec main { } +} diff --git a/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_variants.move b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_variants.move new file mode 100644 index 0000000000000..482fa4cbacacc --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/attribute_variants.move @@ -0,0 +1,6 @@ +#[] +#[attr0] +#[attr1=0, attr2=b"hello", attr3=x"0f", attr4=0x42, attr5(attr0, attr1, attr2(attr0, attr1=0))] +#[bttr0=false, bttr1=0u8, bttr2=0u64, bttr3=0u128] +#[] +module 0x42::M {} diff --git a/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/duplicate_attributes.exp b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/duplicate_attributes.exp new file mode 100644 index 0000000000000..1de53e06c5e25 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/duplicate_attributes.exp @@ -0,0 +1,24 @@ 
+error[E02001]: duplicate declaration, item, or annotation + ┌─ tests/move_check/skip_attribute_checks/duplicate_attributes.move:2:10 + │ +2 │ #[a, a(x = 0)] + │ - ^^^^^^^^ Duplicate attribute 'a' attached to the same item + │ │ + │ Attribute previously given here + +error[E02001]: duplicate declaration, item, or annotation + ┌─ tests/move_check/skip_attribute_checks/duplicate_attributes.move:5:12 + │ +5 │ #[b(a, a = 0, a(x = 1))] + │ - ^^^^^ Duplicate attribute 'a' attached to the same item + │ │ + │ Attribute previously given here + +error[E02001]: duplicate declaration, item, or annotation + ┌─ tests/move_check/skip_attribute_checks/duplicate_attributes.move:5:19 + │ +5 │ #[b(a, a = 0, a(x = 1))] + │ - ^^^^^^^^ Duplicate attribute 'a' attached to the same item + │ │ + │ Attribute previously given here + diff --git a/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/duplicate_attributes.move b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/duplicate_attributes.move new file mode 100644 index 0000000000000..e2de77f0f23a4 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/duplicate_attributes.move @@ -0,0 +1,7 @@ +module 0x42::M { + #[a, a(x = 0)] + fun foo() {} + + #[b(a, a = 0, a(x = 1))] + fun bar() {} +} diff --git a/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/extra_attributes.move b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/extra_attributes.move new file mode 100644 index 0000000000000..11ff5533c4f37 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/extra_attributes.move @@ -0,0 +1,26 @@ +// tests non-abort related execution failures +module 0x1::n {} +module 0x1::m { + #[test_only] + use 0x1::n; + + #[test] + #[expected_failure(vector_error, location=std::vector, hello=0)] + fun t0() { } + + #[test] + #[expected_failure(arithmetic_error, location=n, wowza)] + fun t1() { } + + #[test] + #[expected_failure(out_of_gas, location=Self, so_many_attrs)] + fun t2() { } + + #[test] + #[expected_failure(major_status=4004, an_attr_here_is_unused, location=Self)] + fun t3() { } + + #[test] + #[expected_failure(major_status=4016, minor_code=0, location=Self)] + fun t4() { } +} diff --git a/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/extra_attributes2.move b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/extra_attributes2.move new file mode 100644 index 0000000000000..ba09afd866987 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/extra_attributes2.move @@ -0,0 +1,23 @@ +// tests non-abort related execution failures with errors in attributes +module 0x1::n {} +module 0x1::m { + #[test] + #[expected_failure(arithmetic_error, location=Self)] + fun t5() { } + + #[test] + #[expected_failure(abort_code=3, test, location=Self)] + fun t6() { } + + #[test] + #[expected_failure(vector_error, test_only, location=Self)] + fun t7() { } + + #[test_only] + #[expected_failure(bytecode_instruction, location=Self)] + fun t8() { } + + #[test] + #[expected_failure(verify_only)] + fun t9() { } +} diff --git a/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/testonly.move b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/testonly.move new file mode 100644 index 0000000000000..0923625d717ba --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/skip_attribute_checks/testonly.move @@ -0,0 +1,9 @@ +module 
0x1::A { + #[test] + fun a() { } + + #[testonly] + public fun a_call() { + abort 0 + } +} diff --git a/third_party/move/move-compiler/tests/move_check/typing/assign_tuple.exp b/third_party/move/move-compiler/tests/move_check/typing/assign_tuple.exp new file mode 100644 index 0000000000000..d547fa6237e24 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/typing/assign_tuple.exp @@ -0,0 +1,9 @@ +error[E04005]: expected a single type + ┌─ tests/move_check/typing/assign_tuple.move:12:13 + │ + 7 │ fun tuple(x: u64): (u64, S) { + │ -------- Expected a single type, but found expression list type: '(u64, 0x42::tuple_invalid::S)' + · +12 │ let x = tuple(x); + │ ^ Invalid type for local + diff --git a/third_party/move/move-compiler/tests/move_check/typing/assign_tuple.move b/third_party/move/move-compiler/tests/move_check/typing/assign_tuple.move new file mode 100644 index 0000000000000..bcba65ad8363c --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/typing/assign_tuple.move @@ -0,0 +1,15 @@ +module 0x42::tuple_invalid { + + struct S { + f: u64, + } + + fun tuple(x: u64): (u64, S) { + (x, S{f: x + 1}) + } + + fun use_tuple1(x: u64): u64 { + let x = tuple(x); + 1 + } +} diff --git a/third_party/move/move-compiler/tests/move_check/typing/tuple.move b/third_party/move/move-compiler/tests/move_check/typing/tuple.move new file mode 100644 index 0000000000000..ae3958d3b8e6c --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/typing/tuple.move @@ -0,0 +1,15 @@ +module 0x42::tuple { + + struct S { + f: u64, + } + + fun tuple(x: u64): (u64, S) { + (x, S{f: x + 1}) + } + + fun use_tuple(x: u64): u64 { + let (x, S{f: y}) = tuple(x); + x + y + } +} diff --git a/third_party/move/move-compiler/tests/move_check/unit_test/extra_attributes.move b/third_party/move/move-compiler/tests/move_check/unit_test/extra_attributes.move index 33bf5eda30142..11ff5533c4f37 100644 --- a/third_party/move/move-compiler/tests/move_check/unit_test/extra_attributes.move +++ b/third_party/move/move-compiler/tests/move_check/unit_test/extra_attributes.move @@ -23,5 +23,4 @@ module 0x1::m { #[test] #[expected_failure(major_status=4016, minor_code=0, location=Self)] fun t4() { } - } diff --git a/third_party/move/move-compiler/tests/move_check/unit_test/extra_attributes2.move b/third_party/move/move-compiler/tests/move_check/unit_test/extra_attributes2.move new file mode 100644 index 0000000000000..ba09afd866987 --- /dev/null +++ b/third_party/move/move-compiler/tests/move_check/unit_test/extra_attributes2.move @@ -0,0 +1,23 @@ +// tests non-abort related execution failures with errors in attributes +module 0x1::n {} +module 0x1::m { + #[test] + #[expected_failure(arithmetic_error, location=Self)] + fun t5() { } + + #[test] + #[expected_failure(abort_code=3, test, location=Self)] + fun t6() { } + + #[test] + #[expected_failure(vector_error, test_only, location=Self)] + fun t7() { } + + #[test_only] + #[expected_failure(bytecode_instruction, location=Self)] + fun t8() { } + + #[test] + #[expected_failure(verify_only)] + fun t9() { } +} diff --git a/third_party/move/move-compiler/tests/move_check_testsuite.rs b/third_party/move/move-compiler/tests/move_check_testsuite.rs index 93d87352f9105..7d05b5f75b000 100644 --- a/third_party/move/move-compiler/tests/move_check_testsuite.rs +++ b/third_party/move/move-compiler/tests/move_check_testsuite.rs @@ -9,7 +9,7 @@ use move_command_line_common::{ use move_compiler::{ compiled_unit::AnnotatedCompiledUnit, diagnostics::*, - shared::{Flags, 
NumericalAddress}, + shared::{known_attributes::KnownAttribute, Flags, NumericalAddress}, unit_test, CommentMap, Compiler, SteppedCompiler, PASS_CFGIR, PASS_PARSER, }; use std::{collections::BTreeMap, fs, path::Path}; @@ -23,8 +23,12 @@ const VERIFICATION_EXT: &str = "verification"; /// Root of tests which require to set flavor flags. const FLAVOR_PATH: &str = "flavors/"; +/// Root of tests which require to set skip_attribute_checks flag. +const SKIP_ATTRIBUTE_CHECKS_PATH: &str = "skip_attribute_checks/"; + fn default_testing_addresses() -> BTreeMap { let mapping = [ + ("aptos_std", "0x1"), ("std", "0x1"), ("M", "0x1"), ("A", "0x42"), @@ -85,8 +89,8 @@ fn move_check_testsuite(path: &Path) -> datatest_stable::Result<()> { let out_path = path.with_extension(OUT_EXT); let mut flags = Flags::empty(); - match path.to_str() { - Some(p) if p.contains(FLAVOR_PATH) => { + if let Some(p) = path.to_str() { + if p.contains(FLAVOR_PATH) { // Extract the flavor from the path. Its the directory name of the file. let flavor = path .parent() @@ -96,8 +100,10 @@ fn move_check_testsuite(path: &Path) -> datatest_stable::Result<()> { .to_string_lossy() .to_string(); flags = flags.set_flavor(flavor) - }, - _ => {}, + } + if p.contains(SKIP_ATTRIBUTE_CHECKS_PATH) { + flags = flags.set_skip_attribute_checks(true); + } }; run_test(path, &exp_path, &out_path, flags)?; Ok(()) @@ -111,8 +117,9 @@ fn run_test(path: &Path, exp_path: &Path, out_path: &Path, flags: Flags) -> anyh targets, move_stdlib::move_stdlib_files(), default_testing_addresses(), + flags, + KnownAttribute::get_all_attribute_names(), ) - .set_flags(flags) .run::()?; let diags = move_check_for_errors(comments_and_compiler_res); diff --git a/third_party/move/move-core/types/Cargo.toml b/third_party/move/move-core/types/Cargo.toml index fbfd938df03d0..1bcc406e23f6f 100644 --- a/third_party/move/move-core/types/Cargo.toml +++ b/third_party/move/move-core/types/Cargo.toml @@ -23,6 +23,7 @@ rand = "0.8.3" ref-cast = "1.0.6" serde = { version = "1.0.124", default-features = false } serde_bytes = "0.11.5" +thiserror = "1.0.45" uint = "0.9.4" bcs = { workspace = true } diff --git a/third_party/move/move-core/types/src/account_address.rs b/third_party/move/move-core/types/src/account_address.rs index ca56a90264c71..f7d6feab78b91 100644 --- a/third_party/move/move-core/types/src/account_address.rs +++ b/third_party/move/move-core/types/src/account_address.rs @@ -119,9 +119,10 @@ impl AccountAddress { self.0 } + /// NOTE: Where possible use from_str_strict or from_str instead. pub fn from_hex_literal(literal: &str) -> Result { if !literal.starts_with("0x") { - return Err(AccountAddressParseError); + return Err(AccountAddressParseError::LeadingZeroXRequired); } let hex_len = literal.len() - 2; @@ -145,9 +146,10 @@ impl AccountAddress { format!("0x{}", self.short_str_lossless()) } + /// NOTE: Where possible use from_str_strict or from_str instead. pub fn from_hex>(hex: T) -> Result { <[u8; Self::LENGTH]>::from_hex(hex) - .map_err(|_| AccountAddressParseError) + .map_err(|e| AccountAddressParseError::InvalidHexChars(format!("{:#}", e))) .map(Self) } @@ -159,9 +161,56 @@ impl AccountAddress { pub fn from_bytes>(bytes: T) -> Result { <[u8; Self::LENGTH]>::try_from(bytes.as_ref()) - .map_err(|_| AccountAddressParseError) + .map_err(|e| AccountAddressParseError::InvalidHexChars(format!("{:#}", e))) .map(Self) } + + /// NOTE: This function has strict parsing behavior. For relaxed behavior, please use + /// the `from_str` function. 
Where possible, prefer to use `from_str_strict`. + /// + /// Create an instance of AccountAddress by parsing a hex string representation. + /// + /// This function allows only the strictest formats defined by AIP-40. In short this + /// means only the following formats are accepted: + /// + /// - LONG + /// - SHORT for special addresses + /// + /// Where: + /// + /// - LONG is defined as 0x + 64 hex characters. + /// - SHORT for special addresses is 0x0 to 0xf inclusive. + /// + /// This means the following are not accepted: + /// + /// - SHORT for non-special addresses. + /// - Any address without a leading 0x. + /// + /// Learn more about the different address formats by reading AIP-40: + /// https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-40.md. + pub fn from_str_strict(s: &str) -> Result { + // Assert the string starts with 0x. + if !s.starts_with("0x") { + return Err(AccountAddressParseError::LeadingZeroXRequired); + } + + let address = AccountAddress::from_str(s)?; + + // Check if the address is in LONG form. If it is not, this is only allowed for + // special addresses, in which case we check it is in proper SHORT form. + if s.len() != (AccountAddress::LENGTH * 2) + 2 { + if !address.is_special() { + return Err(AccountAddressParseError::LongFormRequiredUnlessSpecial); + } else { + // 0x + one hex char is the only valid SHORT form for special addresses. + if s.len() != 3 { + return Err(AccountAddressParseError::InvalidPaddingZeroes); + } + } + } + + Ok(address) + } } impl AsRef<[u8]> for AccountAddress { @@ -276,19 +325,43 @@ impl TryFrom for AccountAddress { type Error = AccountAddressParseError; fn try_from(s: String) -> Result { - Self::from_hex(s) + Self::from_str(&s) } } impl FromStr for AccountAddress { type Err = AccountAddressParseError; + /// NOTE: This function has relaxed parsing behavior. For strict behavior, please use + /// the `from_str_strict` function. Where possible use `from_str_strict` rather than + /// this function. + /// + /// Create an instance of AccountAddress by parsing a hex string representation. + /// + /// This function allows all formats defined by AIP-40. In short this means the + /// following formats are accepted: + /// + /// - LONG, with or without leading 0x + /// - SHORT, with or without leading 0x + /// + /// Where: + /// + /// - LONG is 64 hex characters. + /// - SHORT is 1 to 63 hex characters inclusive. + /// + /// Learn more about the different address formats by reading AIP-40: + /// https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-40.md. 
fn from_str(s: &str) -> Result { - // Accept 0xADDRESS or ADDRESS - if let Ok(address) = AccountAddress::from_hex_literal(s) { - Ok(address) + if !s.starts_with("0x") { + if s.is_empty() { + return Err(AccountAddressParseError::TooShort); + } + AccountAddress::from_hex_literal(&format!("0x{}", s)) } else { - Self::from_hex(s) + if s.len() == 2 { + return Err(AccountAddressParseError::TooShort); + } + AccountAddress::from_hex_literal(s) } } } @@ -329,20 +402,31 @@ impl Serialize for AccountAddress { } } -#[derive(Clone, Copy, Debug)] -pub struct AccountAddressParseError; +#[derive(thiserror::Error, Debug)] +pub enum AccountAddressParseError { + #[error("AccountAddress data should be exactly 32 bytes long")] + IncorrectNumberOfBytes, -impl fmt::Display for AccountAddressParseError { - fn fmt(&self, f: &mut fmt::Formatter) -> std::fmt::Result { - write!( - f, - "Unable to parse AccountAddress (must be hex string of length {})", - AccountAddress::LENGTH - ) - } -} + #[error("Hex characters are invalid: {0}")] + InvalidHexChars(String), + + #[error("Hex string is too short, must be 1 to 64 chars long, excluding the leading 0x")] + TooShort, -impl std::error::Error for AccountAddressParseError {} + #[error("Hex string is too long, must be 1 to 64 chars long, excluding the leading 0x")] + TooLong, + + #[error("Hex string must start with a leading 0x")] + LeadingZeroXRequired, + + #[error( + "The given hex string is not a special address, it must be represented as 0x + 64 chars" + )] + LongFormRequiredUnlessSpecial, + + #[error("The given hex string is a special address not in LONG form, it must be 0x0 to 0xf without padding zeroes")] + InvalidPaddingZeroes, +} #[cfg(test)] mod tests { @@ -575,6 +659,143 @@ mod tests { .unwrap_err(); } + #[test] + fn test_account_address_from_str() { + assert_eq!( + &AccountAddress::from_str("0x0") + .unwrap() + .to_standard_string(), + "0x0" + ); + assert_eq!( + &AccountAddress::from_str("0x1") + .unwrap() + .to_standard_string(), + "0x1" + ); + assert_eq!( + &AccountAddress::from_str("0xf") + .unwrap() + .to_standard_string(), + "0xf" + ); + assert_eq!( + &AccountAddress::from_str("0x0f") + .unwrap() + .to_standard_string(), + "0xf" + ); + assert_eq!( + &AccountAddress::from_str("0x010") + .unwrap() + .to_standard_string(), + "0x0000000000000000000000000000000000000000000000000000000000000010" + ); + assert_eq!( + &AccountAddress::from_str("0xfdfdf") + .unwrap() + .to_standard_string(), + "0x00000000000000000000000000000000000000000000000000000000000fdfdf" + ); + assert_eq!( + &AccountAddress::from_str( + "0x0500000000000000000000000000000000000000000000000000000000aadfdf" + ) + .unwrap() + .to_standard_string(), + "0x0500000000000000000000000000000000000000000000000000000000aadfdf" + ); + + // As above but without the 0x prefix. 
+ assert_eq!( + &AccountAddress::from_str("0").unwrap().to_standard_string(), + "0x0" + ); + assert_eq!( + &AccountAddress::from_str("1").unwrap().to_standard_string(), + "0x1" + ); + assert_eq!( + &AccountAddress::from_str("f").unwrap().to_standard_string(), + "0xf" + ); + assert_eq!( + &AccountAddress::from_str("0f").unwrap().to_standard_string(), + "0xf" + ); + assert_eq!( + &AccountAddress::from_str("010") + .unwrap() + .to_standard_string(), + "0x0000000000000000000000000000000000000000000000000000000000000010" + ); + assert_eq!( + &AccountAddress::from_str("fdfdf") + .unwrap() + .to_standard_string(), + "0x00000000000000000000000000000000000000000000000000000000000fdfdf" + ); + assert_eq!( + &AccountAddress::from_str( + "0500000000000000000000000000000000000000000000000000000000aadfdf" + ) + .unwrap() + .to_standard_string(), + "0x0500000000000000000000000000000000000000000000000000000000aadfdf" + ); + } + + #[test] + fn test_account_address_from_str_strict() { + // See that only special addresses are accepted in SHORT form and all other + // addresses must use LONG form. + assert_eq!( + &AccountAddress::from_str_strict("0x0") + .unwrap() + .to_standard_string(), + "0x0" + ); + assert_eq!( + &AccountAddress::from_str_strict("0x1") + .unwrap() + .to_standard_string(), + "0x1" + ); + assert_eq!( + &AccountAddress::from_str_strict("0xf") + .unwrap() + .to_standard_string(), + "0xf" + ); + + assert!(&AccountAddress::from_str_strict("0x010").is_err()); + assert!(&AccountAddress::from_str_strict("0xfdfdf").is_err()); + assert_eq!( + &AccountAddress::from_str_strict( + "0x0500000000000000000000000000000000000000000000000000000000aadfdf" + ) + .unwrap() + .to_standard_string(), + "0x0500000000000000000000000000000000000000000000000000000000aadfdf" + ); + + // Assert that special addresses must be in either SHORT or LONG form, meaning + // either 0x0 to 0xf inclusive (no leading zeros) or 0x0{63}[0-f]. + assert!(&AccountAddress::from_str_strict("0x0f").is_err()); + + // As above but without the 0x prefix. See that they are all errors. + assert!(&AccountAddress::from_str_strict("0").is_err()); + assert!(&AccountAddress::from_str_strict("1").is_err()); + assert!(&AccountAddress::from_str_strict("f").is_err()); + assert!(&AccountAddress::from_str_strict("010").is_err()); + assert!(&AccountAddress::from_str_strict("fdfdf").is_err()); + assert!(&AccountAddress::from_str_strict( + "0500000000000000000000000000000000000000000000000000000000aadfdf" + ) + .is_err()); + assert!(&AccountAddress::from_str_strict("0f").is_err()); + } + #[test] fn test_ref() { let address = AccountAddress::new([1u8; AccountAddress::LENGTH]); @@ -614,6 +835,9 @@ mod tests { fn test_address_from_empty_string() { assert!(AccountAddress::try_from("".to_string()).is_err()); assert!(AccountAddress::from_str("").is_err()); + assert!(AccountAddress::from_str("0x").is_err()); + assert!(AccountAddress::from_str_strict("").is_err()); + assert!(AccountAddress::from_str_strict("0x").is_err()); } proptest! 
{ diff --git a/third_party/move/move-core/types/src/effects.rs b/third_party/move/move-core/types/src/effects.rs index a55c75b4f2fa3..6d4bfc595ed0e 100644 --- a/third_party/move/move-core/types/src/effects.rs +++ b/third_party/move/move-core/types/src/effects.rs @@ -5,7 +5,7 @@ use crate::{ account_address::AccountAddress, identifier::Identifier, - language_storage::{ModuleId, StructTag, TypeTag}, + language_storage::{ModuleId, StructTag}, }; use anyhow::{bail, Result}; use std::collections::btree_map::{self, BTreeMap}; @@ -320,5 +320,3 @@ impl Changes { // types. pub type AccountChangeSet = AccountChanges, Vec>; pub type ChangeSet = Changes, Vec>; - -pub type Event = (Vec, u64, TypeTag, Vec); diff --git a/third_party/move/move-core/types/src/vm_status.rs b/third_party/move/move-core/types/src/vm_status.rs index b52c27ad7a32a..a2412fedb8512 100644 --- a/third_party/move/move-core/types/src/vm_status.rs +++ b/third_party/move/move-core/types/src/vm_status.rs @@ -701,7 +701,7 @@ pub enum StatusCode { MAX_FIELD_DEFINITIONS_REACHED = 1121, // Reserved error code for future use TOO_MANY_BACK_EDGES = 1122, - RESERVED_VERIFICATION_ERROR_1 = 1123, + EVENT_METADATA_VALIDATION_ERROR = 1123, RESERVED_VERIFICATION_ERROR_2 = 1124, RESERVED_VERIFICATION_ERROR_3 = 1125, RESERVED_VERIFICATION_ERROR_4 = 1126, diff --git a/third_party/move/move-ir-compiler/Cargo.toml b/third_party/move/move-ir-compiler/Cargo.toml index 2c071d924d3c1..c713655fb3c60 100644 --- a/third_party/move/move-ir-compiler/Cargo.toml +++ b/third_party/move/move-ir-compiler/Cargo.toml @@ -11,7 +11,7 @@ edition = "2021" [dependencies] anyhow = "1.0.52" -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } move-binary-format = { path = "../move-binary-format" } move-bytecode-source-map = { path = "move-bytecode-source-map" } move-bytecode-verifier = { path = "../move-bytecode-verifier" } diff --git a/third_party/move/move-ir-compiler/move-ir-to-bytecode/src/compiler.rs b/third_party/move/move-ir-compiler/move-ir-to-bytecode/src/compiler.rs index 6e93e6f6d4417..0c90585b50b1f 100644 --- a/third_party/move/move-ir-compiler/move-ir-to-bytecode/src/compiler.rs +++ b/third_party/move/move-ir-compiler/move-ir-to-bytecode/src/compiler.rs @@ -1523,6 +1523,10 @@ fn compile_call( function_frame.pop()?; function_frame.push()?; }, + Builtin::Nop => { + push_instr!(call.loc, Bytecode::Nop); + function_frame.pop()?; + }, } }, FunctionCall_::ModuleFunctionCall { diff --git a/third_party/move/move-ir-compiler/move-ir-to-bytecode/syntax/src/lexer.rs b/third_party/move/move-ir-compiler/move-ir-to-bytecode/syntax/src/lexer.rs index 1435d31df3770..e4e08b6ead364 100644 --- a/third_party/move/move-ir-compiler/move-ir-to-bytecode/syntax/src/lexer.rs +++ b/third_party/move/move-ir-compiler/move-ir-to-bytecode/syntax/src/lexer.rs @@ -114,6 +114,7 @@ pub enum Tok { LSquare, RSquare, PeriodPeriod, + Nop, } impl Tok { @@ -492,6 +493,7 @@ fn get_name_token(name: &str) -> Tok { "succeeds_if" => Tok::SucceedsIf, "synthetic" => Tok::Synthetic, "true" => Tok::True, + "nop" => Tok::Nop, _ => Tok::NameValue, } } diff --git a/third_party/move/move-ir-compiler/move-ir-to-bytecode/syntax/src/syntax.rs b/third_party/move/move-ir-compiler/move-ir-to-bytecode/syntax/src/syntax.rs index 0f67eca705444..2c39d126d2494 100644 --- a/third_party/move/move-ir-compiler/move-ir-to-bytecode/syntax/src/syntax.rs +++ b/third_party/move/move-ir-compiler/move-ir-to-bytecode/syntax/src/syntax.rs @@ -471,7 +471,8 @@ fn 
parse_qualified_function_name( | Tok::ToU32 | Tok::ToU64 | Tok::ToU128 - | Tok::ToU256 => { + | Tok::ToU256 + | Tok::Nop => { let f = parse_builtin(tokens)?; FunctionCall_::Builtin(f) }, @@ -618,7 +619,8 @@ fn parse_call_or_term_(tokens: &mut Lexer) -> Result { + | Tok::ToU256 + | Tok::Nop => { let f = parse_qualified_function_name(tokens)?; let exp = parse_call_or_term(tokens)?; Ok(Exp_::FunctionCall(f, Box::new(exp))) @@ -877,6 +879,10 @@ fn parse_builtin(tokens: &mut Lexer) -> Result { + tokens.advance()?; + Ok(Builtin::Nop) + }, t => Err(ParseError::InvalidToken { location: current_token_loc(tokens), message: format!("unrecognized token kind for builtin {:?}", t), diff --git a/third_party/move/move-ir-compiler/transactional-tests/tests/bytecode-generation/statements/nop.exp b/third_party/move/move-ir-compiler/transactional-tests/tests/bytecode-generation/statements/nop.exp new file mode 100644 index 0000000000000..0de15c4df4e58 --- /dev/null +++ b/third_party/move/move-ir-compiler/transactional-tests/tests/bytecode-generation/statements/nop.exp @@ -0,0 +1,19 @@ +processed 1 task + +task 0 'print-bytecode'. lines 1-14: +// Move bytecode v6 +module cafe.Nop { + + +nop_valid() { +B0: + 0: Nop + 1: Ret +} +nop_invalid() { +B0: + 0: Nop + 1: Pop + 2: Ret +} +} diff --git a/third_party/move/move-ir-compiler/transactional-tests/tests/bytecode-generation/statements/nop.mvir b/third_party/move/move-ir-compiler/transactional-tests/tests/bytecode-generation/statements/nop.mvir new file mode 100644 index 0000000000000..1270cf3613061 --- /dev/null +++ b/third_party/move/move-ir-compiler/transactional-tests/tests/bytecode-generation/statements/nop.mvir @@ -0,0 +1,14 @@ +//# print-bytecode --input=module +module 0xcafe.Nop { + nop_valid() { + label b0: + (nop()); + return; + } + + nop_invalid() { + label b0: + _ = (nop()); + return; + } +} diff --git a/third_party/move/move-ir/types/src/ast.rs b/third_party/move/move-ir/types/src/ast.rs index 076692e98b3e5..40fa5695baf43 100644 --- a/third_party/move/move-ir/types/src/ast.rs +++ b/third_party/move/move-ir/types/src/ast.rs @@ -445,6 +445,8 @@ pub enum Builtin { ToU128, /// Cast an integer into u256. ToU256, + /// `nop noplabel` + Nop, } /// Enum for different function calls @@ -1519,6 +1521,7 @@ impl fmt::Display for Builtin { Builtin::ToU64 => write!(f, "to_u64"), Builtin::ToU128 => write!(f, "to_u128"), Builtin::ToU256 => write!(f, "to_u256"), + Builtin::Nop => write!(f, "nop;"), } } } diff --git a/third_party/move/move-model/bytecode-test-utils/Cargo.toml b/third_party/move/move-model/bytecode-test-utils/Cargo.toml new file mode 100644 index 0000000000000..5a8064fbaf96d --- /dev/null +++ b/third_party/move/move-model/bytecode-test-utils/Cargo.toml @@ -0,0 +1,20 @@ +[package] +name = "move-stackless-bytecode-test-utils" +version = "0.1.0" +authors = ["Diem Association "] +description = "Move stackless bytecode" +repository = "https://github.com/diem/diem" +homepage = "https://diem.com" +license = "Apache-2.0" +publish = false +edition = "2021" + +[dependencies] +anyhow = "1.0.52" +codespan-reporting = { version = "0.11.1", features = ["serde", "serialization"] } +move-command-line-common = { path = "../../move-command-line-common" } +move-compiler = { path = "../../move-compiler" } +move-model = { path = ".." 
} +move-prover-test-utils = { path = "../../move-prover/test-utils" } +move-stackless-bytecode = { path = "../bytecode" } +move-stdlib = { path = "../../move-stdlib" } diff --git a/third_party/move/move-model/bytecode-test-utils/src/lib.rs b/third_party/move/move-model/bytecode-test-utils/src/lib.rs new file mode 100644 index 0000000000000..b6e0836a7c4c2 --- /dev/null +++ b/third_party/move/move-model/bytecode-test-utils/src/lib.rs @@ -0,0 +1,93 @@ +// Copyright © Aptos Foundation +// Parts of the project are originally copyright © Meta Platforms, Inc. +// SPDX-License-Identifier: Apache-2.0 + +use anyhow::anyhow; +use codespan_reporting::{diagnostic::Severity, term::termcolor::Buffer}; +use move_command_line_common::testing::EXP_EXT; +use move_compiler::shared::{known_attributes::KnownAttribute, PackagePaths}; +use move_model::{model::GlobalEnv, options::ModelBuilderOptions, run_model_builder_with_options}; +use move_prover_test_utils::{baseline_test::verify_or_update_baseline, extract_test_directives}; +use move_stackless_bytecode::{ + function_target_pipeline::{ + FunctionTargetPipeline, FunctionTargetsHolder, ProcessorResultDisplay, + }, + print_targets_for_test, +}; +use std::path::Path; + +/// A test runner which dumps annotated bytecode and can be used for implementing a `datatest` +/// runner. In addition to the path where the Move source resides, an optional processing +/// pipeline is passed to establish the state to be tested. This will dump the initial +/// bytecode and the result of the pipeline in a baseline file. +/// The Move source file can use comments of the form `// dep: file.move` to add additional +/// sources. +pub fn test_runner( + path: &Path, + pipeline_opt: Option, +) -> anyhow::Result<()> { + let mut sources = extract_test_directives(path, "// dep:")?; + sources.push(path.to_string_lossy().to_string()); + let env: GlobalEnv = run_model_builder_with_options( + vec![PackagePaths { + name: None, + paths: sources, + named_address_map: move_stdlib::move_stdlib_named_addresses(), + }], + vec![], + ModelBuilderOptions::default(), + false, + KnownAttribute::get_all_attribute_names(), + )?; + let out = if env.has_errors() { + let mut error_writer = Buffer::no_color(); + env.report_diag(&mut error_writer, Severity::Error); + String::from_utf8_lossy(&error_writer.into_inner()).to_string() + } else { + let dir_name = path + .parent() + .and_then(|p| p.file_name()) + .and_then(|p| p.to_str()) + .ok_or_else(|| anyhow!("bad file name"))?; + + // Initialize and print function targets + let mut text = String::new(); + let mut targets = FunctionTargetsHolder::default(); + for module_env in env.get_modules() { + for func_env in module_env.get_functions() { + targets.add_target(&func_env); + } + } + text += &print_targets_for_test(&env, "initial translation from Move", &targets); + + // Run pipeline if any + if let Some(pipeline) = pipeline_opt { + pipeline.run(&env, &mut targets); + let processor = pipeline.last_processor(); + if !processor.is_single_run() { + text += &print_targets_for_test( + &env, + &format!("after pipeline `{}`", dir_name), + &targets, + ); + } + text += &ProcessorResultDisplay { + env: &env, + targets: &targets, + processor, + } + .to_string(); + } + // add Warning and Error diagnostics to output + let mut error_writer = Buffer::no_color(); + if env.has_errors() || env.has_warnings() { + env.report_diag(&mut error_writer, Severity::Warning); + text += "============ Diagnostics ================\n"; + text += 
&String::from_utf8_lossy(&error_writer.into_inner()); + } + text + }; + let baseline_path = path.with_extension(EXP_EXT); + verify_or_update_baseline(baseline_path.as_path(), &out)?; + Ok(()) +} diff --git a/third_party/move/move-prover/bytecode/Cargo.toml b/third_party/move/move-model/bytecode/Cargo.toml similarity index 89% rename from third_party/move/move-prover/bytecode/Cargo.toml rename to third_party/move/move-model/bytecode/Cargo.toml index dd4afd77c1905..36c3307e6b08c 100644 --- a/third_party/move/move-prover/bytecode/Cargo.toml +++ b/third_party/move/move-model/bytecode/Cargo.toml @@ -17,7 +17,7 @@ move-command-line-common = { path = "../../move-command-line-common" } move-compiler = { path = "../../move-compiler" } move-core-types = { path = "../../move-core/types" } move-ir-to-bytecode = { path = "../../move-ir-compiler/move-ir-to-bytecode" } -move-model = { path = "../../move-model" } +move-model = { path = ".." } codespan = "0.11.1" codespan-reporting = { version = "0.11.1", features = ["serde", "serialization"] } @@ -34,8 +34,7 @@ serde = { version = "1.0.124", features = ["derive"] } [dev-dependencies] anyhow = "1.0.52" datatest-stable = "0.1.1" -move-prover-test-utils = { path = "../test-utils" } -move-stdlib = { path = "../../move-stdlib" } +move-stackless-bytecode-test-utils = { path = "../bytecode-test-utils" } [[test]] name = "testsuite" diff --git a/third_party/move/move-prover/bytecode/src/annotations.rs b/third_party/move/move-model/bytecode/src/annotations.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/annotations.rs rename to third_party/move/move-model/bytecode/src/annotations.rs diff --git a/third_party/move/move-prover/bytecode/src/borrow_analysis.rs b/third_party/move/move-model/bytecode/src/borrow_analysis.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/borrow_analysis.rs rename to third_party/move/move-model/bytecode/src/borrow_analysis.rs diff --git a/third_party/move/move-prover/bytecode/src/compositional_analysis.rs b/third_party/move/move-model/bytecode/src/compositional_analysis.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/compositional_analysis.rs rename to third_party/move/move-model/bytecode/src/compositional_analysis.rs diff --git a/third_party/move/move-prover/bytecode/src/dataflow_analysis.rs b/third_party/move/move-model/bytecode/src/dataflow_analysis.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/dataflow_analysis.rs rename to third_party/move/move-model/bytecode/src/dataflow_analysis.rs diff --git a/third_party/move/move-prover/bytecode/src/dataflow_domains.rs b/third_party/move/move-model/bytecode/src/dataflow_domains.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/dataflow_domains.rs rename to third_party/move/move-model/bytecode/src/dataflow_domains.rs diff --git a/third_party/move/move-prover/bytecode/src/debug_instrumentation.rs b/third_party/move/move-model/bytecode/src/debug_instrumentation.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/debug_instrumentation.rs rename to third_party/move/move-model/bytecode/src/debug_instrumentation.rs diff --git a/third_party/move/move-prover/bytecode/src/function_data_builder.rs b/third_party/move/move-model/bytecode/src/function_data_builder.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/function_data_builder.rs rename to 
third_party/move/move-model/bytecode/src/function_data_builder.rs diff --git a/third_party/move/move-prover/bytecode/src/function_target.rs b/third_party/move/move-model/bytecode/src/function_target.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/function_target.rs rename to third_party/move/move-model/bytecode/src/function_target.rs diff --git a/third_party/move/move-prover/bytecode/src/function_target_pipeline.rs b/third_party/move/move-model/bytecode/src/function_target_pipeline.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/function_target_pipeline.rs rename to third_party/move/move-model/bytecode/src/function_target_pipeline.rs index f76a0e439a776..8faccfd9f1809 100644 --- a/third_party/move/move-prover/bytecode/src/function_target_pipeline.rs +++ b/third_party/move/move-model/bytecode/src/function_target_pipeline.rs @@ -337,11 +337,6 @@ impl FunctionTargetPipeline { let src_idx = nodes.get(&fun_id).unwrap(); let fun_env = env.get_function(fun_id); for callee in fun_env.get_called_functions().expect("called functions") { - assert!( - nodes.contains_key(callee), - "{}", - env.get_function(*callee).get_full_name_str() - ); let dst_idx = nodes .get(callee) .expect("callee is not in function targets"); diff --git a/third_party/move/move-prover/bytecode/src/graph.rs b/third_party/move/move-model/bytecode/src/graph.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/graph.rs rename to third_party/move/move-model/bytecode/src/graph.rs diff --git a/third_party/move/move-prover/bytecode/src/lib.rs b/third_party/move/move-model/bytecode/src/lib.rs similarity index 63% rename from third_party/move/move-prover/bytecode/src/lib.rs rename to third_party/move/move-model/bytecode/src/lib.rs index ad16a441afe35..bb7420046f31b 100644 --- a/third_party/move/move-prover/bytecode/src/lib.rs +++ b/third_party/move/move-model/bytecode/src/lib.rs @@ -4,56 +4,47 @@ #![forbid(unsafe_code)] -use crate::function_target_pipeline::FunctionTargetsHolder; +use crate::{function_target::FunctionTarget, function_target_pipeline::FunctionTargetsHolder}; use move_model::model::GlobalEnv; use std::fmt::Write; pub mod annotations; pub mod borrow_analysis; -pub mod clean_and_optimize; pub mod compositional_analysis; -pub mod data_invariant_instrumentation; pub mod dataflow_analysis; pub mod dataflow_domains; pub mod debug_instrumentation; -pub mod eliminate_imm_refs; pub mod function_data_builder; pub mod function_target; pub mod function_target_pipeline; -pub mod global_invariant_analysis; -pub mod global_invariant_instrumentation; -pub mod global_invariant_instrumentation_v2; pub mod graph; -pub mod inconsistency_check; pub mod livevar_analysis; -pub mod loop_analysis; -pub mod memory_instrumentation; -pub mod mono_analysis; -pub mod mut_ref_instrumentation; -pub mod mutation_tester; -pub mod number_operation; -pub mod number_operation_analysis; -pub mod options; -pub mod packed_types_analysis; -pub mod pipeline_factory; pub mod reaching_def_analysis; -pub mod spec_instrumentation; pub mod stackless_bytecode; pub mod stackless_bytecode_generator; pub mod stackless_control_flow_graph; pub mod usage_analysis; -pub mod verification_analysis; -pub mod verification_analysis_v2; -pub mod well_formed_instrumentation; /// An error message used for cases where a compiled module is expected to be attached -pub(crate) const COMPILED_MODULE_AVAILABLE: &str = "compiled module missing"; +pub const COMPILED_MODULE_AVAILABLE: &str = "compiled module 
missing"; /// Print function targets for testing and debugging. pub fn print_targets_for_test( env: &GlobalEnv, header: &str, targets: &FunctionTargetsHolder, +) -> String { + print_targets_with_annotations_for_test(env, header, targets, |target| { + target.register_annotation_formatters_for_test() + }) +} + +/// Print function targets for testing and debugging. +pub fn print_targets_with_annotations_for_test( + env: &GlobalEnv, + header: &str, + targets: &FunctionTargetsHolder, + register_annotations: impl Fn(&FunctionTarget), ) -> String { let mut text = String::new(); writeln!(&mut text, "============ {} ================", header).unwrap(); @@ -64,7 +55,7 @@ pub fn print_targets_for_test( } for (variant, target) in targets.get_targets(&func_env) { if !target.data.code.is_empty() || target.func_env.is_native_or_intrinsic() { - target.register_annotation_formatters_for_test(); + register_annotations(&target); writeln!(&mut text, "\n[variant {}]\n{}", variant, target).unwrap(); } } diff --git a/third_party/move/move-prover/bytecode/src/livevar_analysis.rs b/third_party/move/move-model/bytecode/src/livevar_analysis.rs similarity index 97% rename from third_party/move/move-prover/bytecode/src/livevar_analysis.rs rename to third_party/move/move-model/bytecode/src/livevar_analysis.rs index d19925aff0e5c..c609dade69472 100644 --- a/third_party/move/move-prover/bytecode/src/livevar_analysis.rs +++ b/third_party/move/move-model/bytecode/src/livevar_analysis.rs @@ -27,10 +27,23 @@ pub struct LiveVarInfoAtCodeOffset { pub after: BTreeSet, } +/// Auxiliary entry point for livevar analysis. +pub fn run_livevar_analysis( + target: &FunctionTarget, + code: &[Bytecode], +) -> BTreeMap { + LiveVarAnalysisProcessor::analyze(target, code) +} + +/// Annotation which can be attached to function data. 
#[derive(Default, Clone)] pub struct LiveVarAnnotation(BTreeMap); impl LiveVarAnnotation { + pub fn from_map(m: BTreeMap) -> Self { + Self(m) + } + pub fn get_live_var_info_at( &self, code_offset: CodeOffset, diff --git a/third_party/move/move-prover/bytecode/src/reaching_def_analysis.rs b/third_party/move/move-model/bytecode/src/reaching_def_analysis.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/reaching_def_analysis.rs rename to third_party/move/move-model/bytecode/src/reaching_def_analysis.rs diff --git a/third_party/move/move-prover/bytecode/src/stackless_bytecode.rs b/third_party/move/move-model/bytecode/src/stackless_bytecode.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/stackless_bytecode.rs rename to third_party/move/move-model/bytecode/src/stackless_bytecode.rs diff --git a/third_party/move/move-prover/bytecode/src/stackless_bytecode_generator.rs b/third_party/move/move-model/bytecode/src/stackless_bytecode_generator.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/stackless_bytecode_generator.rs rename to third_party/move/move-model/bytecode/src/stackless_bytecode_generator.rs diff --git a/third_party/move/move-prover/bytecode/src/stackless_control_flow_graph.rs b/third_party/move/move-model/bytecode/src/stackless_control_flow_graph.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/stackless_control_flow_graph.rs rename to third_party/move/move-model/bytecode/src/stackless_control_flow_graph.rs diff --git a/third_party/move/move-prover/bytecode/src/usage_analysis.rs b/third_party/move/move-model/bytecode/src/usage_analysis.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/usage_analysis.rs rename to third_party/move/move-model/bytecode/src/usage_analysis.rs diff --git a/third_party/move/move-prover/bytecode/tests/borrow/basic_test.exp b/third_party/move/move-model/bytecode/tests/borrow/basic_test.exp similarity index 92% rename from third_party/move/move-prover/bytecode/tests/borrow/basic_test.exp rename to third_party/move/move-model/bytecode/tests/borrow/basic_test.exp index b4d7cf99bd71c..a6f5acf70b0ee 100644 --- a/third_party/move/move-prover/bytecode/tests/borrow/basic_test.exp +++ b/third_party/move/move-model/bytecode/tests/borrow/basic_test.exp @@ -269,10 +269,8 @@ fun TestBorrow::test1(): TestBorrow::R { fun TestBorrow::test2($t0|x_ref: &mut u64, $t1|v: u64) { # live_nodes: LocalRoot($t1), Reference($t0) 0: write_ref($t0, $t1) - # live_nodes: LocalRoot($t1), Reference($t0) - 1: trace_local[x_ref]($t0) # live_nodes: LocalRoot($t1) - 2: return () + 1: return () } @@ -281,18 +279,14 @@ public fun TestBorrow::test3($t0|r_ref: &mut TestBorrow::R, $t1|v: u64) { var $t2: &mut u64 # live_nodes: LocalRoot($t1), Reference($t0) 0: $t2 := borrow_field.x($t0) - # live_nodes: LocalRoot($t1), Reference($t0), Reference($t2) + # live_nodes: LocalRoot($t1), Reference($t2) # borrowed_by: Reference($t0) -> {(.x (u64), Reference($t2))} # borrows_from: Reference($t2) -> {(.x (u64), Reference($t0))} 1: TestBorrow::test2($t2, $t1) - # live_nodes: LocalRoot($t1), Reference($t0) - # borrowed_by: Reference($t0) -> {(.x (u64), Reference($t2))} - # borrows_from: Reference($t2) -> {(.x (u64), Reference($t0))} - 2: trace_local[r_ref]($t0) # live_nodes: LocalRoot($t1) # borrowed_by: Reference($t0) -> {(.x (u64), Reference($t2))} # borrows_from: Reference($t2) -> {(.x (u64), Reference($t0))} - 3: return () + 2: return () } @@ -329,14 +323,10 @@ public 
fun TestBorrow::test5($t0|r_ref: &mut TestBorrow::R): &mut u64 { var $t1: &mut u64 # live_nodes: Reference($t0) 0: $t1 := borrow_field.x($t0) - # live_nodes: Reference($t0), Reference($t1) - # borrowed_by: Reference($t0) -> {(.x (u64), Reference($t1))} - # borrows_from: Reference($t1) -> {(.x (u64), Reference($t0))} - 1: trace_local[r_ref]($t0) # live_nodes: Reference($t1) # borrowed_by: Reference($t0) -> {(.x (u64), Reference($t1))} # borrows_from: Reference($t1) -> {(.x (u64), Reference($t0))} - 2: return $t1 + 1: return $t1 } @@ -400,7 +390,7 @@ fun TestBorrow::test7($t0|b: bool) { # live_nodes: LocalRoot($t0), Reference($t3), Reference($t6) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t6))}, Reference($t6) -> {(@, LocalRoot($t1))} - 6: if ($t0) goto 15 else goto 18 + 6: if ($t0) goto 16 else goto 19 # live_nodes: LocalRoot($t0), Reference($t6) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t6))}, Reference($t6) -> {(@, LocalRoot($t1))} @@ -420,43 +410,47 @@ fun TestBorrow::test7($t0|b: bool) { # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, LocalRoot($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, Reference($t3))}, Reference($t7) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t6)), (@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(@, LocalRoot($t2))} - 11: label L0 + 11: goto 12 + # live_nodes: LocalRoot($t0), Reference($t3) + # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, LocalRoot($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, Reference($t3))}, Reference($t7) -> {(@, Reference($t3))} + # borrows_from: Reference($t3) -> {(@, Reference($t6)), (@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(@, LocalRoot($t2))} + 12: label L0 # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, LocalRoot($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, Reference($t3))}, Reference($t7) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t6)), (@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(@, LocalRoot($t2))} - 12: $t8 := 0 + 13: $t8 := 0 # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, LocalRoot($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, Reference($t3))}, Reference($t7) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t6)), (@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(@, LocalRoot($t2))} - 13: TestBorrow::test3($t3, $t8) + 14: TestBorrow::test3($t3, $t8) # live_nodes: LocalRoot($t0) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, LocalRoot($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, Reference($t3))}, Reference($t7) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t6)), (@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(@, LocalRoot($t2))} - 14: return () + 15: return () # live_nodes: LocalRoot($t0), Reference($t3), Reference($t6) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t6))}, Reference($t6) -> {(@, LocalRoot($t1))} - 15: label 
L2 + 16: label L2 # live_nodes: LocalRoot($t0), Reference($t3), Reference($t6) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t6))}, Reference($t6) -> {(@, LocalRoot($t1))} - 16: destroy($t3) + 17: destroy($t3) # live_nodes: LocalRoot($t0), Reference($t6) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t6))}, Reference($t6) -> {(@, LocalRoot($t1))} - 17: goto 7 + 18: goto 7 # live_nodes: LocalRoot($t0), Reference($t3), Reference($t6) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t6))}, Reference($t6) -> {(@, LocalRoot($t1))} - 18: label L3 + 19: label L3 # live_nodes: LocalRoot($t0), Reference($t3), Reference($t6) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t6))}, Reference($t6) -> {(@, LocalRoot($t1))} - 19: destroy($t6) + 20: destroy($t6) # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t6))}, Reference($t6) -> {(@, LocalRoot($t1))} - 20: goto 11 + 21: goto 12 } @@ -493,154 +487,166 @@ fun TestBorrow::test8($t0|b: bool, $t1|n: u64, $t2|r_ref: &mut TestBorrow::R) { # borrowed_by: LocalRoot($t4) -> {(@, Reference($t8))} # borrows_from: Reference($t8) -> {(@, LocalRoot($t4))} 5: $t5 := $t8 + # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) + # borrowed_by: LocalRoot($t4) -> {(@, Reference($t8))}, Reference($t8) -> {(@, Reference($t5))} + # borrows_from: Reference($t5) -> {(@, Reference($t8))}, Reference($t8) -> {(@, LocalRoot($t4))} + 6: goto 7 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 6: label L6 + 7: label L6 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 7: $t9 := 0 + 8: $t9 := 0 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, 
Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 8: $t10 := <($t9, $t1) + 9: $t10 := <($t9, $t1) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 9: if ($t10) goto 10 else goto 29 + 10: if ($t10) goto 11 else goto 32 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 10: label L1 + 11: label L1 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 11: label L2 + 12: goto 13 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 12: destroy($t5) + 13: label L2 + # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) + # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} + # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} + 14: destroy($t5) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, 
Reference($t16) -> {(@, LocalRoot($t4))} - 13: $t11 := 2 + 15: $t11 := 2 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 14: $t12 := /($t1, $t11) + 16: $t12 := /($t1, $t11) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 15: $t13 := 0 + 17: $t13 := 0 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 16: $t14 := ==($t12, $t13) + 18: $t14 := ==($t12, $t13) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 17: if ($t14) goto 18 else goto 22 + 19: if ($t14) goto 20 else goto 24 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 18: label L4 + 20: label L4 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 19: $t15 := borrow_local($t3) + 21: $t15 := 
borrow_local($t3) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t15) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 20: $t5 := $t15 + 22: $t5 := $t15 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 21: goto 25 + 23: goto 28 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 22: label L3 + 24: label L3 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 23: $t16 := borrow_local($t4) + 25: $t16 := borrow_local($t4) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t16) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 24: $t5 := $t16 + 26: $t5 := $t16 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 25: label L5 + 27: goto 28 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), 
Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 26: $t17 := 1 + 28: label L5 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 27: $t1 := -($t1, $t17) + 29: $t17 := 1 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 28: goto 6 + 30: $t1 := -($t1, $t17) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 29: label L0 + 31: goto 7 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 30: if ($t0) goto 31 else goto 36 + 32: label L0 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 31: label L8 + 33: if ($t0) goto 34 else goto 39 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: 
LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 32: destroy($t5) - # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) + 34: label L8 + # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 33: $t18 := 0 + 35: destroy($t5) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 34: TestBorrow::test3($t2, $t18) + 36: $t18 := 0 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 35: goto 40 - # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) + 37: TestBorrow::test3($t2, $t18) + # live_nodes: LocalRoot($t0), LocalRoot($t1) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 36: label L7 + 38: goto 44 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 37: destroy($t2) + 39: label L7 # live_nodes: LocalRoot($t0), LocalRoot($t1), 
Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 38: $t19 := 0 - # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) + 40: destroy($t2) + # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 39: TestBorrow::test3($t5, $t19) - # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) + 41: $t19 := 0 + # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 40: label L9 - # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) + 42: TestBorrow::test3($t5, $t19) + # live_nodes: LocalRoot($t0), LocalRoot($t1) + # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} + # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} + 43: goto 44 + # live_nodes: LocalRoot($t0), LocalRoot($t1) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 41: trace_local[r_ref]($t2) + 44: label L9 # live_nodes: LocalRoot($t0), LocalRoot($t1) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t15))}, LocalRoot($t4) -> {(@, Reference($t8)), (@, Reference($t16))}, Reference($t8) -> {(@, Reference($t5))}, Reference($t15) -> {(@, Reference($t5))}, Reference($t16) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t8)), (@, Reference($t15)), (@, Reference($t16))}, Reference($t8) -> {(@, LocalRoot($t4))}, Reference($t15) -> {(@, LocalRoot($t3))}, Reference($t16) -> {(@, LocalRoot($t4))} - 42: return () + 45: return () } diff 
--git a/third_party/move/move-prover/bytecode/tests/borrow/basic_test.move b/third_party/move/move-model/bytecode/tests/borrow/basic_test.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/borrow/basic_test.move rename to third_party/move/move-model/bytecode/tests/borrow/basic_test.move diff --git a/third_party/move/move-prover/bytecode/tests/borrow/function_call.exp b/third_party/move/move-model/bytecode/tests/borrow/function_call.exp similarity index 84% rename from third_party/move/move-prover/bytecode/tests/borrow/function_call.exp rename to third_party/move/move-model/bytecode/tests/borrow/function_call.exp index 2172d27d3308e..38764bb039850 100644 --- a/third_party/move/move-prover/bytecode/tests/borrow/function_call.exp +++ b/third_party/move/move-model/bytecode/tests/borrow/function_call.exp @@ -118,11 +118,11 @@ fun MultiLayerCalling::outer($t0|has_vector: &mut MultiLayerCalling::HasVector) ============ after pipeline `borrow` ================ [variant baseline] -public intrinsic fun vector::contains<#0>($t0|v: vector<#0>, $t1|e: #0): bool; +public intrinsic fun vector::contains<#0>($t0|v: &vector<#0>, $t1|e: �): bool; [variant baseline] -public intrinsic fun vector::index_of<#0>($t0|v: vector<#0>, $t1|e: #0): (bool, u64); +public intrinsic fun vector::index_of<#0>($t0|v: &vector<#0>, $t1|e: �): (bool, u64); [variant baseline] @@ -130,7 +130,7 @@ public intrinsic fun vector::append<#0>($t0|lhs: &mut vector<#0>, $t1|other: vec [variant baseline] -public native fun vector::borrow<#0>($t0|v: vector<#0>, $t1|i: u64): #0; +public native fun vector::borrow<#0>($t0|v: &vector<#0>, $t1|i: u64): � [variant baseline] @@ -146,11 +146,11 @@ public native fun vector::empty<#0>(): vector<#0>; [variant baseline] -public intrinsic fun vector::is_empty<#0>($t0|v: vector<#0>): bool; +public intrinsic fun vector::is_empty<#0>($t0|v: &vector<#0>): bool; [variant baseline] -public native fun vector::length<#0>($t0|v: vector<#0>): u64; +public native fun vector::length<#0>($t0|v: &vector<#0>): u64; [variant baseline] @@ -208,22 +208,18 @@ fun MultiLayerCalling::inner($t0|has_vector: &mut MultiLayerCalling::HasVector): var $t3: &mut MultiLayerCalling::HasAnotherVector # live_nodes: Reference($t0) 0: $t1 := borrow_field.v($t0) - # live_nodes: Reference($t0), Reference($t1) + # live_nodes: Reference($t1) # borrowed_by: Reference($t0) -> {(.v (vector), Reference($t1))} # borrows_from: Reference($t1) -> {(.v (vector), Reference($t0))} 1: $t2 := 7 - # live_nodes: Reference($t0), Reference($t1) + # live_nodes: Reference($t1) # borrowed_by: Reference($t0) -> {(.v (vector), Reference($t1))} # borrows_from: Reference($t1) -> {(.v (vector), Reference($t0))} 2: $t3 := vector::borrow_mut($t1, $t2) - # live_nodes: Reference($t0), Reference($t3) - # borrowed_by: Reference($t0) -> {(.v (vector), Reference($t1))}, Reference($t1) -> {([], Reference($t3))} - # borrows_from: Reference($t1) -> {(.v (vector), Reference($t0))}, Reference($t3) -> {([], Reference($t1))} - 3: trace_local[has_vector]($t0) # live_nodes: Reference($t3) # borrowed_by: Reference($t0) -> {(.v (vector), Reference($t1))}, Reference($t1) -> {([], Reference($t3))} # borrows_from: Reference($t1) -> {(.v (vector), Reference($t0))}, Reference($t3) -> {([], Reference($t1))} - 4: return $t3 + 3: return $t3 } @@ -232,14 +228,10 @@ fun MultiLayerCalling::mid($t0|has_vector: &mut MultiLayerCalling::HasVector): & var $t1: &mut MultiLayerCalling::HasAnotherVector # live_nodes: Reference($t0) 0: $t1 := MultiLayerCalling::inner($t0) - 
# live_nodes: Reference($t0), Reference($t1) - # borrowed_by: Reference($t0) -> {(.v (vector)/[], Reference($t1))} - # borrows_from: Reference($t1) -> {(.v (vector)/[], Reference($t0))} - 1: trace_local[has_vector]($t0) # live_nodes: Reference($t1) # borrowed_by: Reference($t0) -> {(.v (vector)/[], Reference($t1))} # borrows_from: Reference($t1) -> {(.v (vector)/[], Reference($t0))} - 2: return $t1 + 1: return $t1 } @@ -250,25 +242,21 @@ fun MultiLayerCalling::outer($t0|has_vector: &mut MultiLayerCalling::HasVector) var $t3: u8 # live_nodes: Reference($t0) 0: $t1 := MultiLayerCalling::mid($t0) - # live_nodes: Reference($t0), Reference($t1) + # live_nodes: Reference($t1) # borrowed_by: Reference($t0) -> {(.v (vector)/[], Reference($t1))} # borrows_from: Reference($t1) -> {(.v (vector)/[], Reference($t0))} 1: $t2 := borrow_field.v($t1) - # live_nodes: Reference($t0), Reference($t2) + # live_nodes: Reference($t2) # borrowed_by: Reference($t0) -> {(.v (vector)/[], Reference($t1))}, Reference($t1) -> {(.v (vector), Reference($t2))} # borrows_from: Reference($t1) -> {(.v (vector)/[], Reference($t0))}, Reference($t2) -> {(.v (vector), Reference($t1))} 2: $t3 := 42 - # live_nodes: Reference($t0), Reference($t2) + # live_nodes: Reference($t2) # borrowed_by: Reference($t0) -> {(.v (vector)/[], Reference($t1))}, Reference($t1) -> {(.v (vector), Reference($t2))} # borrows_from: Reference($t1) -> {(.v (vector)/[], Reference($t0))}, Reference($t2) -> {(.v (vector), Reference($t1))} 3: vector::push_back($t2, $t3) - # live_nodes: Reference($t0) - # borrowed_by: Reference($t0) -> {(.v (vector)/[], Reference($t1))}, Reference($t1) -> {(.v (vector), Reference($t2))} - # borrows_from: Reference($t1) -> {(.v (vector)/[], Reference($t0))}, Reference($t2) -> {(.v (vector), Reference($t1))} - 4: trace_local[has_vector]($t0) # borrowed_by: Reference($t0) -> {(.v (vector)/[], Reference($t1))}, Reference($t1) -> {(.v (vector), Reference($t2))} # borrows_from: Reference($t1) -> {(.v (vector)/[], Reference($t0))}, Reference($t2) -> {(.v (vector), Reference($t1))} - 5: return () + 4: return () } diff --git a/third_party/move/move-prover/bytecode/tests/borrow/function_call.move b/third_party/move/move-model/bytecode/tests/borrow/function_call.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/borrow/function_call.move rename to third_party/move/move-model/bytecode/tests/borrow/function_call.move diff --git a/third_party/move/move-prover/bytecode/tests/borrow/hyper_edge.exp b/third_party/move/move-model/bytecode/tests/borrow/hyper_edge.exp similarity index 92% rename from third_party/move/move-prover/bytecode/tests/borrow/hyper_edge.exp rename to third_party/move/move-model/bytecode/tests/borrow/hyper_edge.exp index 35a2f95de42ed..0b2542c95d926 100644 --- a/third_party/move/move-prover/bytecode/tests/borrow/hyper_edge.exp +++ b/third_party/move/move-model/bytecode/tests/borrow/hyper_edge.exp @@ -130,11 +130,11 @@ public fun Test::foo<#0>($t0|i: u64) { ============ after pipeline `borrow` ================ [variant baseline] -public intrinsic fun vector::contains<#0>($t0|v: vector<#0>, $t1|e: #0): bool; +public intrinsic fun vector::contains<#0>($t0|v: &vector<#0>, $t1|e: �): bool; [variant baseline] -public intrinsic fun vector::index_of<#0>($t0|v: vector<#0>, $t1|e: #0): (bool, u64); +public intrinsic fun vector::index_of<#0>($t0|v: &vector<#0>, $t1|e: �): (bool, u64); [variant baseline] @@ -142,7 +142,7 @@ public intrinsic fun vector::append<#0>($t0|lhs: &mut vector<#0>, $t1|other: vec 
[variant baseline] -public native fun vector::borrow<#0>($t0|v: vector<#0>, $t1|i: u64): #0; +public native fun vector::borrow<#0>($t0|v: &vector<#0>, $t1|i: u64): � [variant baseline] @@ -158,11 +158,11 @@ public native fun vector::empty<#0>(): vector<#0>; [variant baseline] -public intrinsic fun vector::is_empty<#0>($t0|v: vector<#0>): bool; +public intrinsic fun vector::is_empty<#0>($t0|v: &vector<#0>): bool; [variant baseline] -public native fun vector::length<#0>($t0|v: vector<#0>): u64; +public native fun vector::length<#0>($t0|v: &vector<#0>): u64; [variant baseline] @@ -219,18 +219,14 @@ public fun Collection::borrow_mut<#0>($t0|c: &mut Collection::Collection<#0>, $t var $t3: &mut #0 # live_nodes: LocalRoot($t1), Reference($t0) 0: $t2 := borrow_field>.items($t0) - # live_nodes: LocalRoot($t1), Reference($t0), Reference($t2) + # live_nodes: LocalRoot($t1), Reference($t2) # borrowed_by: Reference($t0) -> {(.items (vector<#0>), Reference($t2))} # borrows_from: Reference($t2) -> {(.items (vector<#0>), Reference($t0))} 1: $t3 := vector::borrow_mut<#0>($t2, $t1) - # live_nodes: LocalRoot($t1), Reference($t0), Reference($t3) - # borrowed_by: Reference($t0) -> {(.items (vector<#0>), Reference($t2))}, Reference($t2) -> {([], Reference($t3))} - # borrows_from: Reference($t2) -> {(.items (vector<#0>), Reference($t0))}, Reference($t3) -> {([], Reference($t2))} - 2: trace_local[c]($t0) # live_nodes: LocalRoot($t1), Reference($t3) # borrowed_by: Reference($t0) -> {(.items (vector<#0>), Reference($t2))}, Reference($t2) -> {([], Reference($t3))} # borrows_from: Reference($t2) -> {(.items (vector<#0>), Reference($t0))}, Reference($t3) -> {([], Reference($t2))} - 3: return $t3 + 2: return $t3 } diff --git a/third_party/move/move-prover/bytecode/tests/borrow/hyper_edge.move b/third_party/move/move-model/bytecode/tests/borrow/hyper_edge.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/borrow/hyper_edge.move rename to third_party/move/move-model/bytecode/tests/borrow/hyper_edge.move diff --git a/third_party/move/move-prover/bytecode/tests/borrow_strong/basic_test.exp b/third_party/move/move-model/bytecode/tests/borrow_strong/basic_test.exp similarity index 93% rename from third_party/move/move-prover/bytecode/tests/borrow_strong/basic_test.exp rename to third_party/move/move-model/bytecode/tests/borrow_strong/basic_test.exp index 1728873fb23a9..f8b41f93ed568 100644 --- a/third_party/move/move-prover/bytecode/tests/borrow_strong/basic_test.exp +++ b/third_party/move/move-model/bytecode/tests/borrow_strong/basic_test.exp @@ -401,7 +401,7 @@ fun TestBorrow::test10($t0|b: bool): TestBorrow::R { # live_nodes: LocalRoot($t0), Reference($t2), Reference($t6), Reference($t7) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 6: if ($t0) goto 18 else goto 21 + 6: if ($t0) goto 19 else goto 22 # live_nodes: LocalRoot($t0), Reference($t6), Reference($t7) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} @@ -417,7 +417,7 @@ fun TestBorrow::test10($t0|b: bool): TestBorrow::R { # live_nodes: LocalRoot($t0), 
Reference($t2) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.y (u64), Reference($t2)), (.x (u64)/@, Reference($t2)), (.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(.y (u64), Reference($t6)), (.x (u64)/@, Reference($t6)), (@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 10: goto 13 + 10: goto 14 # live_nodes: LocalRoot($t0), Reference($t2), Reference($t6) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} @@ -426,50 +426,54 @@ fun TestBorrow::test10($t0|b: bool): TestBorrow::R { # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} 12: destroy($t6) + # live_nodes: LocalRoot($t0), Reference($t2) + # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} + # borrows_from: Reference($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} + 13: goto 14 # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.y (u64), Reference($t2)), (.x (u64)/@, Reference($t2)), (.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(.y (u64), Reference($t6)), (.x (u64)/@, Reference($t6)), (@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 13: label L2 + 14: label L2 # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.y (u64), Reference($t2)), (.x (u64)/@, Reference($t2)), (.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(.y (u64), Reference($t6)), (.x (u64)/@, Reference($t6)), (@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 14: $t8 := 0 + 15: $t8 := 0 # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.y (u64), Reference($t2)), (.x (u64)/@, Reference($t2)), (.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(.y (u64), Reference($t6)), (.x (u64)/@, Reference($t6)), (@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 15: write_ref($t2, $t8) + 16: write_ref($t2, $t8) # live_nodes: LocalRoot($t0) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.y (u64), Reference($t2)), (.x (u64)/@, Reference($t2)), (.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(.y (u64), Reference($t6)), (.x (u64)/@, Reference($t6)), (@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 16: $t9 := move($t1) + 17: $t9 := move($t1) # live_nodes: LocalRoot($t0), LocalRoot($t9) # borrowed_by: LocalRoot($t1) -> 
{(@, Reference($t6))}, Reference($t6) -> {(.y (u64), Reference($t2)), (.x (u64)/@, Reference($t2)), (.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(.y (u64), Reference($t6)), (.x (u64)/@, Reference($t6)), (@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 17: return $t9 + 18: return $t9 # live_nodes: LocalRoot($t0), Reference($t2), Reference($t6), Reference($t7) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 18: label L3 + 19: label L3 # live_nodes: LocalRoot($t0), Reference($t2), Reference($t6), Reference($t7) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 19: destroy($t2) + 20: destroy($t2) # live_nodes: LocalRoot($t0), Reference($t6), Reference($t7) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 20: goto 7 + 21: goto 7 # live_nodes: LocalRoot($t0), Reference($t2), Reference($t6), Reference($t7) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 21: label L4 + 22: label L4 # live_nodes: LocalRoot($t0), Reference($t2), Reference($t6), Reference($t7) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 22: destroy($t7) + 23: destroy($t7) # live_nodes: LocalRoot($t0), Reference($t2), Reference($t6) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t6))}, Reference($t6) -> {(.x (u64), Reference($t7))}, Reference($t7) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t7))}, Reference($t6) -> {(@, LocalRoot($t1))}, Reference($t7) -> {(.x (u64), Reference($t6))} - 23: goto 11 + 24: goto 11 } @@ -477,10 +481,8 @@ fun TestBorrow::test10($t0|b: bool): TestBorrow::R { fun TestBorrow::test2($t0|x_ref: &mut u64, $t1|v: u64) { # live_nodes: LocalRoot($t1), Reference($t0) 0: write_ref($t0, $t1) - # live_nodes: LocalRoot($t1), Reference($t0) - 1: trace_local[x_ref]($t0) # live_nodes: LocalRoot($t1) - 2: return () + 1: return () } @@ -489,18 +491,14 @@ public fun TestBorrow::test3($t0|r_ref: &mut TestBorrow::R, $t1|v: u64) { var $t2: &mut u64 # live_nodes: LocalRoot($t1), Reference($t0) 0: $t2 := borrow_field.x($t0) - # live_nodes: LocalRoot($t1), Reference($t0), Reference($t2) + # live_nodes: LocalRoot($t1), Reference($t2) # borrowed_by: Reference($t0) -> {(.x (u64), Reference($t2))} # borrows_from: Reference($t2) -> {(.x (u64), Reference($t0))} 1: TestBorrow::test2($t2, $t1) - 
# live_nodes: LocalRoot($t1), Reference($t0) - # borrowed_by: Reference($t0) -> {(.x (u64), Reference($t2))} - # borrows_from: Reference($t2) -> {(.x (u64), Reference($t0))} - 2: trace_local[r_ref]($t0) # live_nodes: LocalRoot($t1) # borrowed_by: Reference($t0) -> {(.x (u64), Reference($t2))} # borrows_from: Reference($t2) -> {(.x (u64), Reference($t0))} - 3: return () + 2: return () } @@ -539,14 +537,10 @@ public fun TestBorrow::test5($t0|r_ref: &mut TestBorrow::R): &mut u64 { var $t1: &mut u64 # live_nodes: Reference($t0) 0: $t1 := borrow_field.x($t0) - # live_nodes: Reference($t0), Reference($t1) - # borrowed_by: Reference($t0) -> {(.x (u64), Reference($t1))} - # borrows_from: Reference($t1) -> {(.x (u64), Reference($t0))} - 1: trace_local[r_ref]($t0) # live_nodes: Reference($t1) # borrowed_by: Reference($t0) -> {(.x (u64), Reference($t1))} # borrows_from: Reference($t1) -> {(.x (u64), Reference($t0))} - 2: return $t1 + 1: return $t1 } @@ -618,7 +612,7 @@ fun TestBorrow::test7($t0|b: bool) { # live_nodes: LocalRoot($t0), Reference($t3), Reference($t8) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, Reference($t8) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t8))}, Reference($t8) -> {(@, LocalRoot($t1))} - 8: if ($t0) goto 17 else goto 20 + 8: if ($t0) goto 18 else goto 21 # live_nodes: LocalRoot($t0), Reference($t8) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, Reference($t8) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t8))}, Reference($t8) -> {(@, LocalRoot($t1))} @@ -638,43 +632,47 @@ fun TestBorrow::test7($t0|b: bool) { # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, LocalRoot($t2) -> {(@, Reference($t9))}, Reference($t8) -> {(@, Reference($t3))}, Reference($t9) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t8)), (@, Reference($t9))}, Reference($t8) -> {(@, LocalRoot($t1))}, Reference($t9) -> {(@, LocalRoot($t2))} - 13: label L0 + 13: goto 14 # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, LocalRoot($t2) -> {(@, Reference($t9))}, Reference($t8) -> {(@, Reference($t3))}, Reference($t9) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t8)), (@, Reference($t9))}, Reference($t8) -> {(@, LocalRoot($t1))}, Reference($t9) -> {(@, LocalRoot($t2))} - 14: $t10 := 0 + 14: label L0 # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, LocalRoot($t2) -> {(@, Reference($t9))}, Reference($t8) -> {(@, Reference($t3))}, Reference($t9) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t8)), (@, Reference($t9))}, Reference($t8) -> {(@, LocalRoot($t1))}, Reference($t9) -> {(@, LocalRoot($t2))} - 15: TestBorrow::test3($t3, $t10) + 15: $t10 := 0 + # live_nodes: LocalRoot($t0), Reference($t3) + # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, LocalRoot($t2) -> {(@, Reference($t9))}, Reference($t8) -> {(@, Reference($t3))}, Reference($t9) -> {(@, Reference($t3))} + # borrows_from: Reference($t3) -> {(@, Reference($t8)), (@, Reference($t9))}, Reference($t8) -> {(@, LocalRoot($t1))}, Reference($t9) -> {(@, LocalRoot($t2))} + 16: TestBorrow::test3($t3, $t10) # live_nodes: LocalRoot($t0) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, LocalRoot($t2) -> {(@, Reference($t9))}, Reference($t8) -> {(@, Reference($t3))}, Reference($t9) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> 
{(@, Reference($t8)), (@, Reference($t9))}, Reference($t8) -> {(@, LocalRoot($t1))}, Reference($t9) -> {(@, LocalRoot($t2))} - 16: return () + 17: return () # live_nodes: LocalRoot($t0), Reference($t3), Reference($t8) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, Reference($t8) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t8))}, Reference($t8) -> {(@, LocalRoot($t1))} - 17: label L2 + 18: label L2 # live_nodes: LocalRoot($t0), Reference($t3), Reference($t8) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, Reference($t8) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t8))}, Reference($t8) -> {(@, LocalRoot($t1))} - 18: destroy($t3) + 19: destroy($t3) # live_nodes: LocalRoot($t0), Reference($t8) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, Reference($t8) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t8))}, Reference($t8) -> {(@, LocalRoot($t1))} - 19: goto 9 + 20: goto 9 # live_nodes: LocalRoot($t0), Reference($t3), Reference($t8) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, Reference($t8) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t8))}, Reference($t8) -> {(@, LocalRoot($t1))} - 20: label L3 + 21: label L3 # live_nodes: LocalRoot($t0), Reference($t3), Reference($t8) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, Reference($t8) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t8))}, Reference($t8) -> {(@, LocalRoot($t1))} - 21: destroy($t8) + 22: destroy($t8) # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: LocalRoot($t1) -> {(@, Reference($t8))}, Reference($t8) -> {(@, Reference($t3))} # borrows_from: Reference($t3) -> {(@, Reference($t8))}, Reference($t8) -> {(@, LocalRoot($t1))} - 22: goto 13 + 23: goto 14 } @@ -717,154 +715,166 @@ fun TestBorrow::test8($t0|b: bool, $t1|n: u64, $t2|r_ref: &mut TestBorrow::R) { # borrowed_by: LocalRoot($t4) -> {(@, Reference($t10))} # borrows_from: Reference($t10) -> {(@, LocalRoot($t4))} 7: $t5 := $t10 + # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) + # borrowed_by: LocalRoot($t4) -> {(@, Reference($t10))}, Reference($t10) -> {(@, Reference($t5))} + # borrows_from: Reference($t5) -> {(@, Reference($t10))}, Reference($t10) -> {(@, LocalRoot($t4))} + 8: goto 9 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 8: label L6 + 9: label L6 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 9: $t11 := 0 + 10: $t11 := 0 # live_nodes: LocalRoot($t0), 
LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 10: $t12 := <($t11, $t1) + 11: $t12 := <($t11, $t1) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 11: if ($t12) goto 12 else goto 31 + 12: if ($t12) goto 13 else goto 34 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 12: label L1 + 13: label L1 + # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) + # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} + # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} + 14: goto 15 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 13: label L2 + 15: label L2 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 14: destroy($t5) + 16: destroy($t5) # live_nodes: LocalRoot($t0), 
LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 15: $t13 := 2 + 17: $t13 := 2 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 16: $t14 := /($t1, $t13) + 18: $t14 := /($t1, $t13) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 17: $t15 := 0 + 19: $t15 := 0 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 18: $t16 := ==($t14, $t15) + 20: $t16 := ==($t14, $t15) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 19: if ($t16) goto 20 else goto 24 + 21: if ($t16) goto 22 else goto 26 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 20: label L4 + 22: label L4 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, 
Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 21: $t17 := borrow_local($t3) + 23: $t17 := borrow_local($t3) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t17) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 22: $t5 := $t17 + 24: $t5 := $t17 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 23: goto 27 + 25: goto 30 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 24: label L3 + 26: label L3 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 25: $t18 := borrow_local($t4) + 27: $t18 := borrow_local($t4) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t18) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 26: $t5 := $t18 + 28: $t5 := $t18 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, 
LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 27: label L5 + 29: goto 30 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 28: $t19 := 1 + 30: label L5 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 29: $t1 := -($t1, $t19) + 31: $t19 := 1 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 30: goto 8 + 32: $t1 := -($t1, $t19) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 31: label L0 + 33: goto 9 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 32: if ($t0) goto 33 else goto 38 + 34: label L0 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, 
Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 33: label L8 + 35: if ($t0) goto 36 else goto 41 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 34: destroy($t5) - # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) + 36: label L8 + # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 35: $t20 := 0 + 37: destroy($t5) # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 36: TestBorrow::test3($t2, $t20) + 38: $t20 := 0 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 37: goto 42 - # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) + 39: TestBorrow::test3($t2, $t20) + # live_nodes: LocalRoot($t0), LocalRoot($t1) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 38: label L7 + 40: goto 46 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # 
borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 39: destroy($t2) + 41: label L7 # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 40: $t21 := 0 - # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2), Reference($t5) + 42: destroy($t2) + # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 41: TestBorrow::test3($t5, $t21) - # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) + 43: $t21 := 0 + # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t5) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 42: label L9 - # live_nodes: LocalRoot($t0), LocalRoot($t1), Reference($t2) + 44: TestBorrow::test3($t5, $t21) + # live_nodes: LocalRoot($t0), LocalRoot($t1) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 43: trace_local[r_ref]($t2) + 45: goto 46 # live_nodes: LocalRoot($t0), LocalRoot($t1) # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} - 
44: return () + 46: label L9 + # live_nodes: LocalRoot($t0), LocalRoot($t1) + # borrowed_by: LocalRoot($t3) -> {(@, Reference($t17))}, LocalRoot($t4) -> {(@, Reference($t10)), (@, Reference($t18))}, Reference($t10) -> {(@, Reference($t5))}, Reference($t17) -> {(@, Reference($t5))}, Reference($t18) -> {(@, Reference($t5))} + # borrows_from: Reference($t5) -> {(@, Reference($t10)), (@, Reference($t17)), (@, Reference($t18))}, Reference($t10) -> {(@, LocalRoot($t4))}, Reference($t17) -> {(@, LocalRoot($t3))}, Reference($t18) -> {(@, LocalRoot($t4))} + 47: return () } @@ -895,10 +905,10 @@ fun TestBorrow::test9($t0|b: bool, $t1|r_ref: &mut TestBorrow::R): &mut u64 { # borrowed_by: Reference($t1) -> {(.x (u64), Reference($t3))}, Reference($t3) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t3))}, Reference($t3) -> {(.x (u64), Reference($t1))} 5: $t2 := borrow_field.y($t1) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) + # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.y (u64), Reference($t2)), (.x (u64), Reference($t3))}, Reference($t3) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(.y (u64), Reference($t1)), (@, Reference($t3))}, Reference($t3) -> {(.x (u64), Reference($t1))} - 6: goto 9 + 6: goto 10 # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) # borrowed_by: Reference($t1) -> {(.x (u64), Reference($t3))}, Reference($t3) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t3))}, Reference($t3) -> {(.x (u64), Reference($t1))} @@ -907,22 +917,22 @@ fun TestBorrow::test9($t0|b: bool, $t1|r_ref: &mut TestBorrow::R): &mut u64 { # borrowed_by: Reference($t1) -> {(.x (u64), Reference($t3))}, Reference($t3) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(@, Reference($t3))}, Reference($t3) -> {(.x (u64), Reference($t1))} 8: destroy($t1) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) - # borrowed_by: Reference($t1) -> {(.y (u64), Reference($t2)), (.x (u64), Reference($t3))}, Reference($t3) -> {(@, Reference($t2))} - # borrows_from: Reference($t2) -> {(.y (u64), Reference($t1)), (@, Reference($t3))}, Reference($t3) -> {(.x (u64), Reference($t1))} - 9: label L2 - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) + # live_nodes: LocalRoot($t0), Reference($t2) + # borrowed_by: Reference($t1) -> {(.x (u64), Reference($t3))}, Reference($t3) -> {(@, Reference($t2))} + # borrows_from: Reference($t2) -> {(@, Reference($t3))}, Reference($t3) -> {(.x (u64), Reference($t1))} + 9: goto 10 + # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.y (u64), Reference($t2)), (.x (u64), Reference($t3))}, Reference($t3) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(.y (u64), Reference($t1)), (@, Reference($t3))}, Reference($t3) -> {(.x (u64), Reference($t1))} - 10: $t4 := 0 - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) + 10: label L2 + # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.y (u64), Reference($t2)), (.x (u64), Reference($t3))}, Reference($t3) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(.y (u64), Reference($t1)), (@, Reference($t3))}, Reference($t3) -> {(.x (u64), Reference($t1))} - 11: write_ref($t2, $t4) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) + 11: $t4 := 0 + # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.y (u64), Reference($t2)), (.x (u64), Reference($t3))}, Reference($t3) -> {(@, 
Reference($t2))} # borrows_from: Reference($t2) -> {(.y (u64), Reference($t1)), (@, Reference($t3))}, Reference($t3) -> {(.x (u64), Reference($t1))} - 12: trace_local[r_ref]($t1) + 12: write_ref($t2, $t4) # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.y (u64), Reference($t2)), (.x (u64), Reference($t3))}, Reference($t3) -> {(@, Reference($t2))} # borrows_from: Reference($t2) -> {(.y (u64), Reference($t1)), (@, Reference($t3))}, Reference($t3) -> {(.x (u64), Reference($t1))} diff --git a/third_party/move/move-prover/bytecode/tests/borrow_strong/basic_test.move b/third_party/move/move-model/bytecode/tests/borrow_strong/basic_test.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/borrow_strong/basic_test.move rename to third_party/move/move-model/bytecode/tests/borrow_strong/basic_test.move diff --git a/third_party/move/move-prover/bytecode/tests/borrow_strong/mut_ref.exp b/third_party/move/move-model/bytecode/tests/borrow_strong/mut_ref.exp similarity index 93% rename from third_party/move/move-prover/bytecode/tests/borrow_strong/mut_ref.exp rename to third_party/move/move-model/bytecode/tests/borrow_strong/mut_ref.exp index e689748dc2a17..70b15e6c620b6 100644 --- a/third_party/move/move-prover/bytecode/tests/borrow_strong/mut_ref.exp +++ b/third_party/move/move-model/bytecode/tests/borrow_strong/mut_ref.exp @@ -450,11 +450,11 @@ fun TestMutRef::return_ref_different_root($t0|b: bool, $t1|x: &mut TestMutRef::T ============ after pipeline `borrow_strong` ================ [variant baseline] -public intrinsic fun vector::contains<#0>($t0|v: vector<#0>, $t1|e: #0): bool; +public intrinsic fun vector::contains<#0>($t0|v: &vector<#0>, $t1|e: #0): bool; [variant baseline] -public intrinsic fun vector::index_of<#0>($t0|v: vector<#0>, $t1|e: #0): (bool, u64); +public intrinsic fun vector::index_of<#0>($t0|v: &vector<#0>, $t1|e: #0): (bool, u64); [variant baseline] @@ -462,7 +462,7 @@ public intrinsic fun vector::append<#0>($t0|lhs: &mut vector<#0>, $t1|other: vec [variant baseline] -public native fun vector::borrow<#0>($t0|v: vector<#0>, $t1|i: u64): #0; +public native fun vector::borrow<#0>($t0|v: &vector<#0>, $t1|i: u64): #0; [variant baseline] @@ -478,11 +478,11 @@ public native fun vector::empty<#0>(): vector<#0>; [variant baseline] -public intrinsic fun vector::is_empty<#0>($t0|v: vector<#0>): bool; +public intrinsic fun vector::is_empty<#0>($t0|v: &vector<#0>): bool; [variant baseline] -public native fun vector::length<#0>($t0|v: vector<#0>): u64; +public native fun vector::length<#0>($t0|v: &vector<#0>): u64; [variant baseline] @@ -956,26 +956,26 @@ fun TestMutRef::return_ref_different_path($t0|b: bool, $t1|x: &mut TestMutRef::N 1: label L1 # live_nodes: LocalRoot($t0), Reference($t1) 2: $t2 := borrow_field.value($t1) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) + # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.value (u64), Reference($t2))} # borrows_from: Reference($t2) -> {(.value (u64), Reference($t1))} - 3: goto 7 + 3: goto 8 # live_nodes: LocalRoot($t0), Reference($t1) 4: label L0 # live_nodes: LocalRoot($t0), Reference($t1) 5: $t3 := borrow_field.t($t1) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t3) + # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: Reference($t1) -> {(.t (TestMutRef::T), Reference($t3))} # borrows_from: Reference($t3) -> {(.t (TestMutRef::T), Reference($t1))} 6: $t2 := borrow_field.value($t3) - # live_nodes: LocalRoot($t0), 
Reference($t1), Reference($t2) - # borrowed_by: Reference($t1) -> {(.value (u64), Reference($t2)), (.t (TestMutRef::T), Reference($t3))}, Reference($t3) -> {(.value (u64), Reference($t2))} - # borrows_from: Reference($t2) -> {(.value (u64), Reference($t1)), (.value (u64), Reference($t3))}, Reference($t3) -> {(.t (TestMutRef::T), Reference($t1))} - 7: label L2 - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) + # live_nodes: LocalRoot($t0), Reference($t2) + # borrowed_by: Reference($t1) -> {(.t (TestMutRef::T), Reference($t3))}, Reference($t3) -> {(.value (u64), Reference($t2))} + # borrows_from: Reference($t2) -> {(.value (u64), Reference($t3))}, Reference($t3) -> {(.t (TestMutRef::T), Reference($t1))} + 7: goto 8 + # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.value (u64), Reference($t2)), (.t (TestMutRef::T), Reference($t3))}, Reference($t3) -> {(.value (u64), Reference($t2))} # borrows_from: Reference($t2) -> {(.value (u64), Reference($t1)), (.value (u64), Reference($t3))}, Reference($t3) -> {(.t (TestMutRef::T), Reference($t1))} - 8: trace_local[x]($t1) + 8: label L2 # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.value (u64), Reference($t2)), (.t (TestMutRef::T), Reference($t3))}, Reference($t3) -> {(.value (u64), Reference($t2))} # borrows_from: Reference($t2) -> {(.value (u64), Reference($t1)), (.value (u64), Reference($t3))}, Reference($t3) -> {(.t (TestMutRef::T), Reference($t1))} @@ -996,38 +996,38 @@ fun TestMutRef::return_ref_different_path_vec($t0|b: bool, $t1|x: &mut TestMutRe 1: label L1 # live_nodes: LocalRoot($t0), Reference($t1) 2: $t3 := borrow_field.is($t1) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t3) + # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t3))} # borrows_from: Reference($t3) -> {(.is (vector), Reference($t1))} 3: $t4 := 1 - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t3) + # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t3))} # borrows_from: Reference($t3) -> {(.is (vector), Reference($t1))} 4: $t2 := vector::borrow_mut($t3, $t4) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) + # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t3))}, Reference($t3) -> {([], Reference($t2))} # borrows_from: Reference($t2) -> {([], Reference($t3))}, Reference($t3) -> {(.is (vector), Reference($t1))} - 5: goto 10 + 5: goto 11 # live_nodes: LocalRoot($t0), Reference($t1) 6: label L0 # live_nodes: LocalRoot($t0), Reference($t1) 7: $t5 := borrow_field.is($t1) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t5) + # live_nodes: LocalRoot($t0), Reference($t5) # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t5))} # borrows_from: Reference($t5) -> {(.is (vector), Reference($t1))} 8: $t6 := 0 - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t5) + # live_nodes: LocalRoot($t0), Reference($t5) # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t5))} # borrows_from: Reference($t5) -> {(.is (vector), Reference($t1))} 9: $t2 := vector::borrow_mut($t5, $t6) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) - # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t3)), (.is (vector), Reference($t5))}, Reference($t3) -> {([], Reference($t2))}, Reference($t5) -> {([], Reference($t2))} - # borrows_from: Reference($t2) -> {([], Reference($t3)), 
([], Reference($t5))}, Reference($t3) -> {(.is (vector), Reference($t1))}, Reference($t5) -> {(.is (vector), Reference($t1))} - 10: label L2 - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) + # live_nodes: LocalRoot($t0), Reference($t2) + # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t5))}, Reference($t5) -> {([], Reference($t2))} + # borrows_from: Reference($t2) -> {([], Reference($t5))}, Reference($t5) -> {(.is (vector), Reference($t1))} + 10: goto 11 + # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t3)), (.is (vector), Reference($t5))}, Reference($t3) -> {([], Reference($t2))}, Reference($t5) -> {([], Reference($t2))} # borrows_from: Reference($t2) -> {([], Reference($t3)), ([], Reference($t5))}, Reference($t3) -> {(.is (vector), Reference($t1))}, Reference($t5) -> {(.is (vector), Reference($t1))} - 11: trace_local[x]($t1) + 11: label L2 # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t3)), (.is (vector), Reference($t5))}, Reference($t3) -> {([], Reference($t2))}, Reference($t5) -> {([], Reference($t2))} # borrows_from: Reference($t2) -> {([], Reference($t3)), ([], Reference($t5))}, Reference($t3) -> {(.is (vector), Reference($t1))}, Reference($t5) -> {(.is (vector), Reference($t1))} @@ -1049,42 +1049,42 @@ fun TestMutRef::return_ref_different_path_vec2($t0|b: bool, $t1|x: &mut TestMutR 1: label L1 # live_nodes: LocalRoot($t0), Reference($t1) 2: $t3 := borrow_field.is($t1) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t3) + # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t3))} # borrows_from: Reference($t3) -> {(.is (vector), Reference($t1))} 3: $t4 := 1 - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t3) + # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t3))} # borrows_from: Reference($t3) -> {(.is (vector), Reference($t1))} 4: $t2 := vector::borrow_mut($t3, $t4) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) + # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t3))}, Reference($t3) -> {([], Reference($t2))} # borrows_from: Reference($t2) -> {([], Reference($t3))}, Reference($t3) -> {(.is (vector), Reference($t1))} - 5: goto 11 + 5: goto 12 # live_nodes: LocalRoot($t0), Reference($t1) 6: label L0 # live_nodes: LocalRoot($t0), Reference($t1) 7: $t5 := borrow_field.ts($t1) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t5) + # live_nodes: LocalRoot($t0), Reference($t5) # borrowed_by: Reference($t1) -> {(.ts (vector), Reference($t5))} # borrows_from: Reference($t5) -> {(.ts (vector), Reference($t1))} 8: $t6 := 0 - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t5) + # live_nodes: LocalRoot($t0), Reference($t5) # borrowed_by: Reference($t1) -> {(.ts (vector), Reference($t5))} # borrows_from: Reference($t5) -> {(.ts (vector), Reference($t1))} 9: $t7 := vector::borrow_mut($t5, $t6) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t7) + # live_nodes: LocalRoot($t0), Reference($t7) # borrowed_by: Reference($t1) -> {(.ts (vector), Reference($t5))}, Reference($t5) -> {([], Reference($t7))} # borrows_from: Reference($t5) -> {(.ts (vector), Reference($t1))}, Reference($t7) -> {([], Reference($t5))} 10: $t2 := borrow_field.value($t7) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) - # borrowed_by: 
Reference($t1) -> {(.is (vector), Reference($t3)), (.ts (vector), Reference($t5))}, Reference($t3) -> {([], Reference($t2))}, Reference($t5) -> {([], Reference($t7))}, Reference($t7) -> {(.value (u64), Reference($t2))} - # borrows_from: Reference($t2) -> {([], Reference($t3)), (.value (u64), Reference($t7))}, Reference($t3) -> {(.is (vector), Reference($t1))}, Reference($t5) -> {(.ts (vector), Reference($t1))}, Reference($t7) -> {([], Reference($t5))} - 11: label L2 - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) + # live_nodes: LocalRoot($t0), Reference($t2) + # borrowed_by: Reference($t1) -> {(.ts (vector), Reference($t5))}, Reference($t5) -> {([], Reference($t7))}, Reference($t7) -> {(.value (u64), Reference($t2))} + # borrows_from: Reference($t2) -> {(.value (u64), Reference($t7))}, Reference($t5) -> {(.ts (vector), Reference($t1))}, Reference($t7) -> {([], Reference($t5))} + 11: goto 12 + # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t3)), (.ts (vector), Reference($t5))}, Reference($t3) -> {([], Reference($t2))}, Reference($t5) -> {([], Reference($t7))}, Reference($t7) -> {(.value (u64), Reference($t2))} # borrows_from: Reference($t2) -> {([], Reference($t3)), (.value (u64), Reference($t7))}, Reference($t3) -> {(.is (vector), Reference($t1))}, Reference($t5) -> {(.ts (vector), Reference($t1))}, Reference($t7) -> {([], Reference($t5))} - 12: trace_local[x]($t1) + 12: label L2 # live_nodes: LocalRoot($t0), Reference($t2) # borrowed_by: Reference($t1) -> {(.is (vector), Reference($t3)), (.ts (vector), Reference($t5))}, Reference($t3) -> {([], Reference($t2))}, Reference($t5) -> {([], Reference($t7))}, Reference($t7) -> {(.value (u64), Reference($t2))} # borrows_from: Reference($t2) -> {([], Reference($t3)), (.value (u64), Reference($t7))}, Reference($t3) -> {(.is (vector), Reference($t1))}, Reference($t5) -> {(.ts (vector), Reference($t1))}, Reference($t7) -> {([], Reference($t5))} @@ -1101,34 +1101,30 @@ fun TestMutRef::return_ref_different_root($t0|b: bool, $t1|x: &mut TestMutRef::T 1: label L1 # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) 2: destroy($t2) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) + # live_nodes: LocalRoot($t0), Reference($t1) 3: $t3 := borrow_field.value($t1) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2), Reference($t3) + # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: Reference($t1) -> {(.value (u64), Reference($t3))} # borrows_from: Reference($t3) -> {(.value (u64), Reference($t1))} - 4: goto 8 + 4: goto 9 # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) 5: label L0 # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) 6: destroy($t1) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2) + # live_nodes: LocalRoot($t0), Reference($t2) 7: $t3 := borrow_field.value($t2) - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2), Reference($t3) - # borrowed_by: Reference($t1) -> {(.value (u64), Reference($t3))}, Reference($t2) -> {(.value (u64), Reference($t3))} - # borrows_from: Reference($t3) -> {(.value (u64), Reference($t1)), (.value (u64), Reference($t2))} - 8: label L2 - # live_nodes: LocalRoot($t0), Reference($t1), Reference($t2), Reference($t3) - # borrowed_by: Reference($t1) -> {(.value (u64), Reference($t3))}, Reference($t2) -> {(.value (u64), Reference($t3))} - # borrows_from: Reference($t3) -> {(.value (u64), Reference($t1)), (.value (u64), Reference($t2))} - 9: trace_local[x]($t1) - # 
live_nodes: LocalRoot($t0), Reference($t2), Reference($t3) + # live_nodes: LocalRoot($t0), Reference($t3) + # borrowed_by: Reference($t2) -> {(.value (u64), Reference($t3))} + # borrows_from: Reference($t3) -> {(.value (u64), Reference($t2))} + 8: goto 9 + # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: Reference($t1) -> {(.value (u64), Reference($t3))}, Reference($t2) -> {(.value (u64), Reference($t3))} # borrows_from: Reference($t3) -> {(.value (u64), Reference($t1)), (.value (u64), Reference($t2))} - 10: trace_local[y]($t2) + 9: label L2 # live_nodes: LocalRoot($t0), Reference($t3) # borrowed_by: Reference($t1) -> {(.value (u64), Reference($t3))}, Reference($t2) -> {(.value (u64), Reference($t3))} # borrows_from: Reference($t3) -> {(.value (u64), Reference($t1)), (.value (u64), Reference($t2))} - 11: return $t3 + 10: return $t3 } diff --git a/third_party/move/move-prover/bytecode/tests/borrow_strong/mut_ref.move b/third_party/move/move-model/bytecode/tests/borrow_strong/mut_ref.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/borrow_strong/mut_ref.move rename to third_party/move/move-model/bytecode/tests/borrow_strong/mut_ref.move diff --git a/third_party/move/move-prover/bytecode/tests/from_move/regression_generic_and_native_type.exp b/third_party/move/move-model/bytecode/tests/from_move/regression_generic_and_native_type.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/from_move/regression_generic_and_native_type.exp rename to third_party/move/move-model/bytecode/tests/from_move/regression_generic_and_native_type.exp diff --git a/third_party/move/move-prover/bytecode/tests/from_move/regression_generic_and_native_type.move b/third_party/move/move-model/bytecode/tests/from_move/regression_generic_and_native_type.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/from_move/regression_generic_and_native_type.move rename to third_party/move/move-model/bytecode/tests/from_move/regression_generic_and_native_type.move diff --git a/third_party/move/move-prover/bytecode/tests/from_move/smoke_test.exp b/third_party/move/move-model/bytecode/tests/from_move/smoke_test.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/from_move/smoke_test.exp rename to third_party/move/move-model/bytecode/tests/from_move/smoke_test.exp diff --git a/third_party/move/move-prover/bytecode/tests/from_move/smoke_test.move b/third_party/move/move-model/bytecode/tests/from_move/smoke_test.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/from_move/smoke_test.move rename to third_party/move/move-model/bytecode/tests/from_move/smoke_test.move diff --git a/third_party/move/move-prover/bytecode/tests/from_move/specs-in-fun.exp b/third_party/move/move-model/bytecode/tests/from_move/specs-in-fun.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/from_move/specs-in-fun.exp rename to third_party/move/move-model/bytecode/tests/from_move/specs-in-fun.exp diff --git a/third_party/move/move-prover/bytecode/tests/from_move/specs-in-fun.move b/third_party/move/move-model/bytecode/tests/from_move/specs-in-fun.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/from_move/specs-in-fun.move rename to third_party/move/move-model/bytecode/tests/from_move/specs-in-fun.move diff --git a/third_party/move/move-prover/bytecode/tests/from_move/vector_instructions.exp 
b/third_party/move/move-model/bytecode/tests/from_move/vector_instructions.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/from_move/vector_instructions.exp rename to third_party/move/move-model/bytecode/tests/from_move/vector_instructions.exp diff --git a/third_party/move/move-prover/bytecode/tests/from_move/vector_instructions.move b/third_party/move/move-model/bytecode/tests/from_move/vector_instructions.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/from_move/vector_instructions.move rename to third_party/move/move-model/bytecode/tests/from_move/vector_instructions.move diff --git a/third_party/move/move-prover/bytecode/tests/livevar/basic_test.exp b/third_party/move/move-model/bytecode/tests/livevar/basic_test.exp similarity index 53% rename from third_party/move/move-prover/bytecode/tests/livevar/basic_test.exp rename to third_party/move/move-model/bytecode/tests/livevar/basic_test.exp index c000ec05cd9ec..4f0f7eb59f93b 100644 --- a/third_party/move/move-prover/bytecode/tests/livevar/basic_test.exp +++ b/third_party/move/move-model/bytecode/tests/livevar/basic_test.exp @@ -120,12 +120,15 @@ fun TestLiveVars::test3($t0|n: u64, $t1|r_ref: &TestLiveVars::R): u64 { ============ after pipeline `livevar` ================ [variant baseline] -fun TestLiveVars::test1($t0|r_ref: TestLiveVars::R): u64 { - var $t1: u64 +fun TestLiveVars::test1($t0|r_ref: &TestLiveVars::R): u64 { + var $t1: &u64 + var $t2: u64 # live vars: r_ref - 0: $t1 := get_field.x($t0) + 0: $t1 := borrow_field.x($t0) # live vars: $t1 - 1: return $t1 + 1: $t2 := read_ref($t1) + # live vars: $t2 + 2: return $t2 } @@ -133,105 +136,137 @@ fun TestLiveVars::test1($t0|r_ref: TestLiveVars::R): u64 { fun TestLiveVars::test2($t0|b: bool): u64 { var $t1|r1: TestLiveVars::R var $t2|r2: TestLiveVars::R - var $t3|r_ref: TestLiveVars::R + var $t3|r_ref: &TestLiveVars::R var $t4: u64 - var $t5: TestLiveVars::R - var $t6: u64 - var $t7: TestLiveVars::R + var $t5: u64 + var $t6: &TestLiveVars::R + var $t7: &TestLiveVars::R var $t8: u64 # live vars: b 0: $t4 := 3 # live vars: b, $t4 - 1: $t5 := pack TestLiveVars::R($t4) - # live vars: b, $t5 - 2: $t6 := 4 - # live vars: b, $t5, $t6 - 3: $t7 := pack TestLiveVars::R($t6) - # live vars: b, $t5, $t7 - 4: $t3 := $t5 - # live vars: b, r_ref, $t7 - 5: if ($t0) goto 6 else goto 8 - # live vars: $t7 - 6: label L1 + 1: $t1 := pack TestLiveVars::R($t4) + # live vars: b, r1 + 2: $t5 := 4 + # live vars: b, r1, $t5 + 3: $t2 := pack TestLiveVars::R($t5) + # live vars: b, r1, r2 + 4: $t6 := borrow_local($t1) + # live vars: b, r2, $t6 + 5: $t3 := $t6 + # live vars: b, r2, r_ref, $t6 + 6: if ($t0) goto 15 else goto 18 + # live vars: r2, $t6 + 7: label L1 + # live vars: r2, $t6 + 8: destroy($t6) + # live vars: r2 + 9: $t7 := borrow_local($t2) # live vars: $t7 - 7: $t3 := $t7 + 10: $t3 := $t7 + # live vars: r_ref + 11: goto 12 # live vars: r_ref - 8: label L0 + 12: label L0 # live vars: r_ref - 9: $t8 := TestLiveVars::test1($t3) + 13: $t8 := TestLiveVars::test1($t3) # live vars: $t8 - 10: return $t8 + 14: return $t8 + # live vars: r2, r_ref, $t6 + 15: label L2 + # live vars: r2, r_ref, $t6 + 16: destroy($t3) + # live vars: r2, $t6 + 17: goto 7 + # live vars: r_ref, $t6 + 18: label L3 + # live vars: r_ref, $t6 + 19: destroy($t6) + # live vars: r_ref + 20: goto 12 } [variant baseline] -fun TestLiveVars::test3($t0|n: u64, $t1|r_ref: TestLiveVars::R): u64 { +fun TestLiveVars::test3($t0|n: u64, $t1|r_ref: &TestLiveVars::R): u64 { var $t2|r1: 
TestLiveVars::R var $t3|r2: TestLiveVars::R var $t4: u64 - var $t5: TestLiveVars::R + var $t5: u64 var $t6: u64 - var $t7: TestLiveVars::R + var $t7: bool var $t8: u64 - var $t9: bool + var $t9: u64 var $t10: u64 - var $t11: u64 - var $t12: u64 - var $t13: bool + var $t11: bool + var $t12: &TestLiveVars::R + var $t13: &TestLiveVars::R var $t14: u64 var $t15: u64 # live vars: n, r_ref 0: $t4 := 3 # live vars: n, r_ref, $t4 - 1: $t5 := pack TestLiveVars::R($t4) - # live vars: n, r_ref, $t5 - 2: $t6 := 4 - # live vars: n, r_ref, $t5, $t6 - 3: $t7 := pack TestLiveVars::R($t6) - # live vars: n, r_ref, $t5, $t7 - 4: label L6 - # live vars: n, r_ref, $t5, $t7 - 5: $t8 := 0 - # live vars: n, r_ref, $t5, $t7, $t8 - 6: $t9 := <($t8, $t0) - # live vars: n, r_ref, $t5, $t7, $t9 - 7: if ($t9) goto 8 else goto 24 - # live vars: n, $t5, $t7 - 8: label L1 - # live vars: n, $t5, $t7 - 9: label L2 - # live vars: n, $t5, $t7 - 10: $t10 := 2 - # live vars: n, $t5, $t7, $t10 - 11: $t11 := /($t0, $t10) - # live vars: n, $t5, $t7, $t11 - 12: $t12 := 0 - # live vars: n, $t5, $t7, $t11, $t12 - 13: $t13 := ==($t11, $t12) - # live vars: n, $t5, $t7, $t13 - 14: if ($t13) goto 15 else goto 18 - # live vars: n, $t5, $t7 - 15: label L4 - # live vars: n, $t5, $t7 - 16: $t1 := $t5 - # live vars: n, r_ref, $t5, $t7 - 17: goto 20 - # live vars: n, $t5, $t7 - 18: label L3 - # live vars: n, $t5, $t7 - 19: $t1 := $t7 - # live vars: n, r_ref, $t5, $t7 - 20: label L5 - # live vars: n, r_ref, $t5, $t7 - 21: $t14 := 1 - # live vars: n, r_ref, $t5, $t7, $t14 - 22: $t0 := -($t0, $t14) - # live vars: n, r_ref, $t5, $t7 - 23: goto 4 + 1: $t2 := pack TestLiveVars::R($t4) + # live vars: n, r_ref, r1 + 2: $t5 := 4 + # live vars: n, r_ref, r1, $t5 + 3: $t3 := pack TestLiveVars::R($t5) + # live vars: n, r_ref, r1, r2 + 4: goto 5 + # live vars: n, r_ref, r1, r2 + 5: label L6 + # live vars: n, r_ref, r1, r2 + 6: $t6 := 0 + # live vars: n, r_ref, r1, r2, $t6 + 7: $t7 := <($t6, $t0) + # live vars: n, r_ref, r1, r2, $t7 + 8: if ($t7) goto 9 else goto 30 + # live vars: n, r_ref, r1, r2 + 9: label L1 + # live vars: n, r_ref, r1, r2 + 10: goto 11 + # live vars: n, r_ref, r1, r2 + 11: label L2 + # live vars: n, r_ref, r1, r2 + 12: destroy($t1) + # live vars: n, r1, r2 + 13: $t8 := 2 + # live vars: n, r1, r2, $t8 + 14: $t9 := /($t0, $t8) + # live vars: n, r1, r2, $t9 + 15: $t10 := 0 + # live vars: n, r1, r2, $t9, $t10 + 16: $t11 := ==($t9, $t10) + # live vars: n, r1, r2, $t11 + 17: if ($t11) goto 18 else goto 22 + # live vars: n, r1, r2 + 18: label L4 + # live vars: n, r1, r2 + 19: $t12 := borrow_local($t2) + # live vars: n, r1, r2, $t12 + 20: $t1 := $t12 + # live vars: n, r_ref, r1, r2 + 21: goto 26 + # live vars: n, r1, r2 + 22: label L3 + # live vars: n, r1, r2 + 23: $t13 := borrow_local($t3) + # live vars: n, r1, r2, $t13 + 24: $t1 := $t13 + # live vars: n, r_ref, r1, r2 + 25: goto 26 + # live vars: n, r_ref, r1, r2 + 26: label L5 + # live vars: n, r_ref, r1, r2 + 27: $t14 := 1 + # live vars: n, r_ref, r1, r2, $t14 + 28: $t0 := -($t0, $t14) + # live vars: n, r_ref, r1, r2 + 29: goto 5 # live vars: r_ref - 24: label L0 + 30: label L0 # live vars: r_ref - 25: $t15 := TestLiveVars::test1($t1) + 31: $t15 := TestLiveVars::test1($t1) # live vars: $t15 - 26: return $t15 + 32: return $t15 } diff --git a/third_party/move/move-prover/bytecode/tests/livevar/basic_test.move b/third_party/move/move-model/bytecode/tests/livevar/basic_test.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/livevar/basic_test.move rename to 
third_party/move/move-model/bytecode/tests/livevar/basic_test.move diff --git a/third_party/move/move-prover/bytecode/tests/reaching_def/basic_test.exp b/third_party/move/move-model/bytecode/tests/reaching_def/basic_test.exp similarity index 94% rename from third_party/move/move-prover/bytecode/tests/reaching_def/basic_test.exp rename to third_party/move/move-model/bytecode/tests/reaching_def/basic_test.exp index baab1afae6517..f6a91b898b558 100644 --- a/third_party/move/move-prover/bytecode/tests/reaching_def/basic_test.exp +++ b/third_party/move/move-model/bytecode/tests/reaching_def/basic_test.exp @@ -57,8 +57,8 @@ fun ReachingDefTest::basic($t0|a: u64, $t1|b: u64): u64 { [variant baseline] -fun ReachingDefTest::create_resource($t0|sender: signer) { - var $t1: signer +fun ReachingDefTest::create_resource($t0|sender: &signer) { + var $t1: &signer var $t2: u64 var $t3: bool var $t4: ReachingDefTest::R diff --git a/third_party/move/move-prover/bytecode/tests/reaching_def/basic_test.move b/third_party/move/move-model/bytecode/tests/reaching_def/basic_test.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/reaching_def/basic_test.move rename to third_party/move/move-model/bytecode/tests/reaching_def/basic_test.move diff --git a/third_party/move/move-prover/bytecode/tests/reaching_def/test_branching.exp b/third_party/move/move-model/bytecode/tests/reaching_def/test_branching.exp similarity index 91% rename from third_party/move/move-prover/bytecode/tests/reaching_def/test_branching.exp rename to third_party/move/move-model/bytecode/tests/reaching_def/test_branching.exp index 27b13bd8293f9..335f083dfeaf0 100644 --- a/third_party/move/move-prover/bytecode/tests/reaching_def/test_branching.exp +++ b/third_party/move/move-model/bytecode/tests/reaching_def/test_branching.exp @@ -36,11 +36,12 @@ fun TestBranching::branching($t0|cond: bool): u64 { 2: label L1 3: $t3 := 3 4: $t1 := $t3 - 5: goto 9 + 5: goto 10 6: label L0 7: $t4 := 4 8: $t1 := $t4 - 9: label L2 - 10: $t5 := move($t1) - 11: return $t1 + 9: goto 10 + 10: label L2 + 11: $t5 := move($t1) + 12: return $t1 } diff --git a/third_party/move/move-prover/bytecode/tests/reaching_def/test_branching.move b/third_party/move/move-model/bytecode/tests/reaching_def/test_branching.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/reaching_def/test_branching.move rename to third_party/move/move-model/bytecode/tests/reaching_def/test_branching.move diff --git a/third_party/move/move-model/bytecode/tests/testsuite.rs b/third_party/move/move-model/bytecode/tests/testsuite.rs new file mode 100644 index 0000000000000..eeb2344aed2f0 --- /dev/null +++ b/third_party/move/move-model/bytecode/tests/testsuite.rs @@ -0,0 +1,66 @@ +// Copyright (c) The Diem Core Contributors +// Copyright (c) The Move Contributors +// SPDX-License-Identifier: Apache-2.0 + +use anyhow::anyhow; +use move_stackless_bytecode::{ + borrow_analysis::BorrowAnalysisProcessor, function_target_pipeline::FunctionTargetPipeline, + livevar_analysis::LiveVarAnalysisProcessor, reaching_def_analysis::ReachingDefProcessor, + usage_analysis::UsageProcessor, +}; +use std::path::Path; + +fn get_tested_transformation_pipeline( + dir_name: &str, +) -> anyhow::Result> { + match dir_name { + "from_move" => Ok(None), + "reaching_def" => { + let mut pipeline = FunctionTargetPipeline::default(); + pipeline.add_processor(ReachingDefProcessor::new()); + Ok(Some(pipeline)) + }, + "livevar" => { + let mut pipeline = FunctionTargetPipeline::default(); 
+ pipeline.add_processor(ReachingDefProcessor::new()); + pipeline.add_processor(LiveVarAnalysisProcessor::new()); + Ok(Some(pipeline)) + }, + "borrow" => { + let mut pipeline = FunctionTargetPipeline::default(); + pipeline.add_processor(ReachingDefProcessor::new()); + pipeline.add_processor(LiveVarAnalysisProcessor::new()); + pipeline.add_processor(BorrowAnalysisProcessor::new()); + Ok(Some(pipeline)) + }, + "borrow_strong" => { + let mut pipeline = FunctionTargetPipeline::default(); + pipeline.add_processor(ReachingDefProcessor::new()); + pipeline.add_processor(LiveVarAnalysisProcessor::new()); + pipeline.add_processor(BorrowAnalysisProcessor::new()); + Ok(Some(pipeline)) + }, + "usage_analysis" => { + let mut pipeline = FunctionTargetPipeline::default(); + pipeline.add_processor(UsageProcessor::new()); + Ok(Some(pipeline)) + }, + _ => Err(anyhow!( + "the sub-directory `{}` has no associated pipeline to test", + dir_name + )), + } +} + +fn test_runner(path: &Path) -> datatest_stable::Result<()> { + let dir_name = path + .parent() + .and_then(|p| p.file_name()) + .and_then(|p| p.to_str()) + .ok_or_else(|| anyhow!("bad file name"))?; + let pipeline_opt = get_tested_transformation_pipeline(dir_name)?; + move_stackless_bytecode_test_utils::test_runner(path, pipeline_opt)?; + Ok(()) +} + +datatest_stable::harness!(test_runner, "tests", r".*\.move"); diff --git a/third_party/move/move-prover/bytecode/tests/usage_analysis/test.exp b/third_party/move/move-model/bytecode/tests/usage_analysis/test.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/usage_analysis/test.exp rename to third_party/move/move-model/bytecode/tests/usage_analysis/test.exp diff --git a/third_party/move/move-prover/bytecode/tests/usage_analysis/test.move b/third_party/move/move-model/bytecode/tests/usage_analysis/test.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/usage_analysis/test.move rename to third_party/move/move-model/bytecode/tests/usage_analysis/test.move diff --git a/third_party/move/move-model/src/lib.rs b/third_party/move/move-model/src/lib.rs index 36fc33bb2a510..6b2f0febb2a33 100644 --- a/third_party/move/move-model/src/lib.rs +++ b/third_party/move/move-model/src/lib.rs @@ -78,6 +78,8 @@ pub struct PackageInfo { pub fn run_model_builder_in_compiler_mode( source: PackageInfo, deps: Vec, + skip_attribute_checks: bool, + known_attributes: &BTreeSet, ) -> anyhow::Result { let to_package_paths = |PackageInfo { sources, @@ -94,26 +96,14 @@ pub fn run_model_builder_in_compiler_mode( compile_via_model: true, ..ModelBuilderOptions::default() }, - Flags::model_compilation(), + Flags::model_compilation().set_skip_attribute_checks(skip_attribute_checks), + known_attributes, ) } // ================================================================================================= // Entry Point V1 -/// Builds the move model with default compilation flags and default options. This calls -/// the move compiler v1 to compile to bytecode first and attach the generated bytecode to -/// the model. -pub fn run_model_builder< - Paths: Into + Clone, - NamedAddress: Into + Clone, ->( - move_sources: Vec>, - deps: Vec>, -) -> anyhow::Result { - run_model_builder_with_options(move_sources, deps, ModelBuilderOptions::default()) -} - /// Build the move model with default compilation flags and custom options. 
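Stepping back from the relocated test suite above: each test sub-directory selects its own processor pipeline, and the ordering matters because later analyses consume annotations produced by earlier ones. Below is a minimal sketch of the "borrow" configuration, using only the types and constructors that appear in the new testsuite.rs; the free-standing helper function is ours, not part of the patch.

use move_stackless_bytecode::{
    borrow_analysis::BorrowAnalysisProcessor, function_target_pipeline::FunctionTargetPipeline,
    livevar_analysis::LiveVarAnalysisProcessor, reaching_def_analysis::ReachingDefProcessor,
};

// Builds the pipeline used for the `borrow` (and currently also `borrow_strong`)
// test directories: reaching definitions, then live variables, then borrow analysis.
fn borrow_pipeline() -> FunctionTargetPipeline {
    let mut pipeline = FunctionTargetPipeline::default();
    pipeline.add_processor(ReachingDefProcessor::new());
    pipeline.add_processor(LiveVarAnalysisProcessor::new());
    pipeline.add_processor(BorrowAnalysisProcessor::new());
    pipeline
}

The datatest_stable::harness! invocation above then runs the selected pipeline against every .move file under tests/.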
pub fn run_model_builder_with_options< Paths: Into + Clone, @@ -122,12 +112,17 @@ pub fn run_model_builder_with_options< move_sources: Vec>, deps: Vec>, options: ModelBuilderOptions, + skip_attribute_checks: bool, + known_attributes: &BTreeSet, ) -> anyhow::Result { + let mut flags = Flags::verification(); + flags = flags.set_skip_attribute_checks(skip_attribute_checks); run_model_builder_with_options_and_compilation_flags( move_sources, deps, options, - Flags::verification(), + flags, + known_attributes, ) } @@ -140,15 +135,16 @@ pub fn run_model_builder_with_options_and_compilation_flags< deps: Vec>, options: ModelBuilderOptions, flags: Flags, + known_attributes: &BTreeSet, ) -> anyhow::Result { let mut env = GlobalEnv::new(); let compile_via_model = options.compile_via_model; env.set_extension(options); // Step 1: parse the program to get comments and a separation of targets and dependencies. - let (files, comments_and_compiler_res) = Compiler::from_package_paths(move_sources, deps) - .set_flags(flags) - .run::()?; + let (files, comments_and_compiler_res) = + Compiler::from_package_paths(move_sources, deps, flags, known_attributes) + .run::()?; let (comment_map, compiler) = match comments_and_compiler_res { Err(diags) => { // Add source files so that the env knows how to translate locations of parse errors diff --git a/third_party/move/move-model/src/model.rs b/third_party/move/move-model/src/model.rs index 67dc1850046cd..abdb5764792e2 100644 --- a/third_party/move/move-model/src/model.rs +++ b/third_party/move/move-model/src/model.rs @@ -3028,6 +3028,15 @@ impl<'env> FunctionEnv<'env> { ) } + /// Gets full name with module address as string. + pub fn get_full_name_with_address(&self) -> String { + format!( + "{}::{}", + self.module_env.get_full_name_str(), + self.get_name_str() + ) + } + pub fn get_name_str(&self) -> String { self.get_name().display(self.symbol_pool()).to_string() } @@ -3390,22 +3399,23 @@ impl<'env> FunctionEnv<'env> { /// is attached. If the local is an argument, use that for naming, otherwise generate /// a unique name. pub fn get_local_name(&self, idx: usize) -> Option { - if idx < self.data.params.len() { - return Some(self.data.params[idx].0); - } - // Try to obtain name from source map. - let source_map = self.module_env.data.source_map.as_ref()?; - if let Ok(fmap) = source_map.get_function_source_map(self.data.def_idx?) { - if let Some((ident, _)) = fmap.get_parameter_or_local_name(idx as u64) { - // The Move compiler produces temporary names of the form `%#`, - // where seems to be generated non-deterministically. - // Substitute this by a deterministic name which the backend accepts. - let clean_ident = if ident.contains("%#") { - format!("tmp#${}", idx) - } else { - ident - }; - return Some(self.module_env.env.symbol_pool.make(clean_ident.as_str())); + // Try to obtain user name + if let Some(source_map) = &self.module_env.data.source_map { + if idx < self.data.params.len() { + return Some(self.data.params[idx].0); + } + if let Ok(fmap) = source_map.get_function_source_map(self.data.def_idx?) { + if let Some((ident, _)) = fmap.get_parameter_or_local_name(idx as u64) { + // The Move compiler produces temporary names of the form `%#`, + // where seems to be generated non-deterministically. + // Substitute this by a deterministic name which the backend accepts. 
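Returning to the move-model/src/lib.rs changes above: the old run_model_builder convenience entry point is gone, and the remaining entry points now take skip_attribute_checks and known_attributes, translating the former into a compiler flag before delegating to the flags-based builder. A minimal sketch of that flag handling, assuming the usual move_compiler::shared::Flags import (the import itself is outside the visible hunk); the helper name and its compile_via_model parameter are ours, for illustration only.

use move_compiler::shared::Flags;

// run_model_builder_in_compiler_mode starts from model-compilation flags and
// run_model_builder_with_options starts from verification flags; both now fold
// the new skip_attribute_checks parameter in the same way.
fn builder_flags(compile_via_model: bool, skip_attribute_checks: bool) -> Flags {
    let base = if compile_via_model {
        Flags::model_compilation()
    } else {
        Flags::verification()
    };
    base.set_skip_attribute_checks(skip_attribute_checks)
}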
+ let clean_ident = if ident.contains("%#") { + format!("tmp#${}", idx) + } else { + ident + }; + return Some(self.module_env.env.symbol_pool.make(clean_ident.as_str())); + } } } Some(self.module_env.env.symbol_pool.make(&format!("$t{}", idx))) diff --git a/third_party/move/move-model/tests/testsuite.rs b/third_party/move/move-model/tests/testsuite.rs index 35baedcc3a9e1..871a7fbbb63a5 100644 --- a/third_party/move/move-model/tests/testsuite.rs +++ b/third_party/move/move-model/tests/testsuite.rs @@ -4,7 +4,7 @@ use codespan_reporting::{diagnostic::Severity, term::termcolor::Buffer}; use move_command_line_common::testing::EXP_EXT; -use move_compiler::shared::PackagePaths; +use move_compiler::shared::{known_attributes::KnownAttribute, PackagePaths}; use move_model::{options::ModelBuilderOptions, run_model_builder_with_options}; use move_prover_test_utils::baseline_test::verify_or_update_baseline; use std::path::Path; @@ -15,7 +15,13 @@ fn test_runner(path: &Path, options: ModelBuilderOptions) -> datatest_stable::Re paths: vec![path.to_str().unwrap().to_string()], named_address_map: std::collections::BTreeMap::::new(), }]; - let env = run_model_builder_with_options(targets, vec![], options)?; + let env = run_model_builder_with_options( + targets, + vec![], + options, + false, + KnownAttribute::get_all_attribute_names(), + )?; let diags = if env.diag_count(Severity::Warning) > 0 { let mut writer = Buffer::no_color(); env.report_diag(&mut writer, Severity::Warning); diff --git a/third_party/move/move-prover/Cargo.toml b/third_party/move/move-prover/Cargo.toml index 3f6a2eaa608a7..4f69f90b6ad95 100644 --- a/third_party/move/move-prover/Cargo.toml +++ b/third_party/move/move-prover/Cargo.toml @@ -19,12 +19,13 @@ move-ir-types = { path = "../move-ir/types" } move-model = { path = "../move-model" } # move dependencies move-prover-boogie-backend = { path = "boogie-backend" } -move-stackless-bytecode = { path = "bytecode" } +move-prover-bytecode-pipeline = { path = "bytecode-pipeline" } +move-stackless-bytecode = { path = "../move-model/bytecode" } # external dependencies async-trait = "0.1.42" atty = "0.2.14" -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } codespan = "0.11.1" codespan-reporting = "0.11.1" futures = "0.3.12" diff --git a/third_party/move/move-prover/boogie-backend/Cargo.toml b/third_party/move/move-prover/boogie-backend/Cargo.toml index 70fb6a9c84e88..8e3ecd14c856c 100644 --- a/third_party/move/move-prover/boogie-backend/Cargo.toml +++ b/third_party/move/move-prover/boogie-backend/Cargo.toml @@ -20,7 +20,8 @@ move-command-line-common = { path = "../../move-command-line-common" } move-compiler = { path = "../../move-compiler" } move-core-types = { path = "../../move-core/types" } move-model = { path = "../../move-model" } -move-stackless-bytecode = { path = "../bytecode" } +move-prover-bytecode-pipeline = { path = "../bytecode-pipeline" } +move-stackless-bytecode = { path = "../../move-model/bytecode" } num = "0.4.0" once_cell = "1.7.2" pretty = "0.10.0" diff --git a/third_party/move/move-prover/boogie-backend/src/bytecode_translator.rs b/third_party/move/move-prover/boogie-backend/src/bytecode_translator.rs index 4673c8d1d7cee..3e23e137bc876 100644 --- a/third_party/move/move-prover/boogie-backend/src/bytecode_translator.rs +++ b/third_party/move/move-prover/boogie-backend/src/bytecode_translator.rs @@ -37,15 +37,17 @@ use move_model::{ ty::{PrimitiveType, Type, TypeDisplayContext, BOOL_TYPE}, well_known::{TYPE_INFO_MOVE, 
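The move-model/src/model.rs hunks above add get_full_name_with_address and make get_local_name tolerate a missing source map. A small sketch of how the two could be used together for diagnostics; describe_function is our name, and only get_full_name_with_address, get_local_name, symbol_pool and Symbol::display are taken from the code above.

use move_model::model::FunctionEnv;

fn describe_function(fun_env: &FunctionEnv) -> String {
    // Presumably yields something like "0x1::coin::transfer" (module address and
    // name plus the function name).
    let name = fun_env.get_full_name_with_address();
    // get_local_name prefers parameter and source-map names and otherwise falls
    // back to the deterministic "$t<idx>" scheme shown above.
    match fun_env.get_local_name(0) {
        Some(sym) => format!("{} (first local: {})", name, sym.display(fun_env.symbol_pool())),
        None => name,
    }
}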
TYPE_NAME_GET_MOVE, TYPE_NAME_MOVE}, }; -use move_stackless_bytecode::{ - function_target::FunctionTarget, - function_target_pipeline::{FunctionTargetsHolder, FunctionVariant, VerificationFlavor}, +use move_prover_bytecode_pipeline::{ mono_analysis, number_operation::{ FuncOperationMap, GlobalNumberOperationState, NumOperation, NumOperation::{Bitwise, Bottom}, }, options::ProverOptions, +}; +use move_stackless_bytecode::{ + function_target::FunctionTarget, + function_target_pipeline::{FunctionTargetsHolder, FunctionVariant, VerificationFlavor}, stackless_bytecode::{ AbortAction, BorrowEdge, BorrowNode, Bytecode, Constant, HavocKind, IndexEdgeKind, Operation, PropKind, diff --git a/third_party/move/move-prover/boogie-backend/src/lib.rs b/third_party/move/move-prover/boogie-backend/src/lib.rs index 1808df428e539..a8b598a2a8bf2 100644 --- a/third_party/move/move-prover/boogie-backend/src/lib.rs +++ b/third_party/move/move-prover/boogie-backend/src/lib.rs @@ -29,7 +29,7 @@ use move_model::{ }, ty::{PrimitiveType, Type}, }; -use move_stackless_bytecode::mono_analysis; +use move_prover_bytecode_pipeline::mono_analysis; use serde::{Deserialize, Serialize}; use std::collections::BTreeSet; use tera::{Context, Tera}; diff --git a/third_party/move/move-prover/boogie-backend/src/options.rs b/third_party/move/move-prover/boogie-backend/src/options.rs index c1ac4c9f87e95..5a0b6cd74ffe0 100644 --- a/third_party/move/move-prover/boogie-backend/src/options.rs +++ b/third_party/move/move-prover/boogie-backend/src/options.rs @@ -19,9 +19,17 @@ const DEFAULT_BOOGIE_FLAGS: &[&str] = &[ "-proverOpt:O:model_validate=true", ]; -const MIN_BOOGIE_VERSION: &str = "2.15.8"; -const MIN_Z3_VERSION: &str = "4.11.0"; -const MIN_CVC5_VERSION: &str = "0.0.3"; +/// Versions for boogie, z3, and cvc5. The upgrade of boogie and z3 is mostly backward compatible, +/// but not always. Setting the max version allows Prover to warn users for the higher version of +/// boogie and z3 because those may be incompatible. 
+const MIN_BOOGIE_VERSION: Option<&str> = Some("2.15.8.0"); +const MAX_BOOGIE_VERSION: Option<&str> = Some("2.15.8.0"); + +const MIN_Z3_VERSION: Option<&str> = Some("4.11.2"); +const MAX_Z3_VERSION: Option<&str> = Some("4.11.2"); + +const MIN_CVC5_VERSION: Option<&str> = Some("0.0.3"); +const MAX_CVC5_VERSION: Option<&str> = None; #[derive(Debug, Clone, Copy, Serialize, Deserialize)] pub enum VectorTheory { @@ -306,17 +314,27 @@ impl BoogieOptions { version_arg, r"version ([0-9.]*)", )?; - Self::check_version_is_greater("boogie", &version, MIN_BOOGIE_VERSION)?; + Self::check_version_is_compatible( + "boogie", + &version, + MIN_BOOGIE_VERSION, + MAX_BOOGIE_VERSION, + )?; } if !self.z3_exe.is_empty() && !self.use_cvc5 { let version = Self::get_version("z3", &self.z3_exe, &["--version"], r"version ([0-9.]*)")?; - Self::check_version_is_greater("z3", &version, MIN_Z3_VERSION)?; + Self::check_version_is_compatible("z3", &version, MIN_Z3_VERSION, MAX_Z3_VERSION)?; } if !self.cvc5_exe.is_empty() && self.use_cvc5 { let version = Self::get_version("cvc5", &self.cvc5_exe, &["--version"], r"version ([0-9.]*)")?; - Self::check_version_is_greater("cvc5", &version, MIN_CVC5_VERSION)?; + Self::check_version_is_compatible( + "cvc5", + &version, + MIN_CVC5_VERSION, + MAX_CVC5_VERSION, + )?; } Ok(()) } @@ -340,29 +358,55 @@ impl BoogieOptions { } } - fn check_version_is_greater(tool: &str, given: &str, expected: &str) -> anyhow::Result<()> { - let given_parts = given.split('.').collect_vec(); - let expected_parts = expected.split('.').collect_vec(); - if given_parts.len() < expected_parts.len() { + fn check_version_is_compatible( + tool: &str, + given: &str, + expected_min: Option<&str>, + expected_max: Option<&str>, + ) -> anyhow::Result<()> { + if let Some(expected) = expected_min { + Self::check_version_le(expected, given, "least", expected, given, tool)?; + } + if let Some(expected) = expected_max { + Self::check_version_le(given, expected, "most", expected, given, tool)?; + } + Ok(()) + } + + // This function checks if expected_lesser is actually less than or equal to expected_greater + fn check_version_le( + expected_lesser: &str, + expected_greater: &str, + relative_term: &str, + expected_version: &str, + given_version: &str, + tool: &str, + ) -> anyhow::Result<()> { + let lesser_parts = expected_lesser.split('.').collect_vec(); + let greater_parts = expected_greater.split('.').collect_vec(); + + if lesser_parts.len() < greater_parts.len() { return Err(anyhow!( "version strings {} and {} for `{}` cannot be compared", - given, - expected, - tool, + given_version, + expected_version, + tool )); } - for (g, e) in given_parts.into_iter().zip(expected_parts.into_iter()) { + + for (l, g) in lesser_parts.into_iter().zip(greater_parts.into_iter()) { + let ln = l.parse::()?; let gn = g.parse::()?; - let en = e.parse::()?; - if gn < en { + if gn < ln { return Err(anyhow!( - "expected at least version {} but found {} for `{}`", - expected, - given, + "expected at {} version {} but found {} for `{}`", + relative_term, + expected_version, + given_version, tool )); } - if gn > en { + if gn > ln { break; } } diff --git a/third_party/move/move-prover/boogie-backend/src/spec_translator.rs b/third_party/move/move-prover/boogie-backend/src/spec_translator.rs index ea27083a129c1..afd58a8817347 100644 --- a/third_party/move/move-prover/boogie-backend/src/spec_translator.rs +++ b/third_party/move/move-prover/boogie-backend/src/spec_translator.rs @@ -35,7 +35,7 @@ use move_model::{ ty::{PrimitiveType, Type}, 
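The boogie-backend options.rs hunk above replaces the old "at least this version" check with a bounded range: with MIN and MAX set to the same string, as they are for boogie and z3, the effective policy is an exact version pin, while cvc5 keeps only a lower bound. Below is a simplified, self-contained restatement of the comparison; it is ours, not the patch's code, and the real helper additionally threads the tool name and the originally configured strings into its error messages.

use anyhow::{anyhow, Result};

/// Returns Ok(()) when `lesser` is at most `greater`, comparing dot-separated
/// numeric components left to right. As in the hunk above, the comparison is
/// rejected outright if `lesser` has fewer components than `greater`.
fn check_version_le(lesser: &str, greater: &str) -> Result<()> {
    let lesser_parts: Vec<&str> = lesser.split('.').collect();
    let greater_parts: Vec<&str> = greater.split('.').collect();
    if lesser_parts.len() < greater_parts.len() {
        return Err(anyhow!(
            "version strings {} and {} cannot be compared",
            lesser,
            greater
        ));
    }
    for (l, g) in lesser_parts.into_iter().zip(greater_parts.into_iter()) {
        let ln = l.parse::<usize>()?;
        let gn = g.parse::<usize>()?;
        if gn < ln {
            return Err(anyhow!("version {} exceeds {}", lesser, greater));
        }
        if gn > ln {
            // A strictly larger component settles the comparison.
            break;
        }
    }
    Ok(())
}

check_version_is_compatible above then applies this twice per tool: once with the configured minimum on the lesser side and once with the detected version on the lesser side.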
well_known::{TYPE_INFO_SPEC, TYPE_NAME_GET_SPEC, TYPE_NAME_SPEC, TYPE_SPEC_IS_STRUCT}, }; -use move_stackless_bytecode::{ +use move_prover_bytecode_pipeline::{ mono_analysis::MonoInfo, number_operation::{GlobalNumberOperationState, NumOperation::Bitwise}, }; diff --git a/third_party/move/move-prover/bytecode-pipeline/Cargo.toml b/third_party/move/move-prover/bytecode-pipeline/Cargo.toml new file mode 100644 index 0000000000000..3845e728e550a --- /dev/null +++ b/third_party/move/move-prover/bytecode-pipeline/Cargo.toml @@ -0,0 +1,47 @@ +[package] +name = "move-prover-bytecode-pipeline" +version = "0.1.0" +authors = ["Aptos Labs "] +publish = false +edition = "2021" +license = "Apache-2.0" + +[dependencies] +anyhow = "1.0.52" +move-binary-format = { path = "../../move-binary-format" } +move-core-types = { path = "../../move-core/types" } +move-model = { path = "../../move-model" } + +# move dependencies +move-stackless-bytecode = { path = "../../move-model/bytecode" } + +# external dependencies +async-trait = "0.1.42" +atty = "0.2.14" +clap = { version = "4.3.9", features = ["derive"] } +codespan = "0.11.1" +codespan-reporting = "0.11.1" +futures = "0.3.12" +hex = "0.4.3" +itertools = "0.10.0" +log = { version = "0.4.14", features = ["serde"] } +num = "0.4.0" +once_cell = "1.7.2" +pretty = "0.10.0" +rand = "0.8.3" +serde = { version = "1.0.124", features = ["derive"] } +serde_json = "1.0.64" +simplelog = "0.9.0" +tokio = { version = "1.18.2", features = ["full"] } +toml = "0.5.8" + +[dev-dependencies] +datatest-stable = "0.1.1" +move-stackless-bytecode-test-utils = { path = "../../move-model/bytecode-test-utils" } +shell-words = "1.0.0" +tempfile = "3.2.0" +walkdir = "2.3.1" + +[[test]] +name = "testsuite" +harness = false diff --git a/third_party/move/move-prover/bytecode/src/clean_and_optimize.rs b/third_party/move/move-prover/bytecode-pipeline/src/clean_and_optimize.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/clean_and_optimize.rs rename to third_party/move/move-prover/bytecode-pipeline/src/clean_and_optimize.rs index d03c9beeb332d..b72c9fd6f7c7d 100644 --- a/third_party/move/move-prover/bytecode/src/clean_and_optimize.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/clean_and_optimize.rs @@ -4,21 +4,21 @@ // Final phase of cleanup and optimization. 
-use crate::{ +use crate::options::ProverOptions; +use move_binary_format::file_format::CodeOffset; +use move_model::{ + model::FunctionEnv, + pragmas::INTRINSIC_FUN_MAP_BORROW_MUT, + well_known::{EVENT_EMIT_EVENT, VECTOR_BORROW_MUT}, +}; +use move_stackless_bytecode::{ dataflow_analysis::{DataflowAnalysis, TransferFunctions}, dataflow_domains::{AbstractDomain, JoinResult}, function_target::{FunctionData, FunctionTarget}, function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder}, - options::ProverOptions, stackless_bytecode::{BorrowNode, Bytecode, Operation}, stackless_control_flow_graph::StacklessControlFlowGraph, }; -use move_binary_format::file_format::CodeOffset; -use move_model::{ - model::FunctionEnv, - pragmas::INTRINSIC_FUN_MAP_BORROW_MUT, - well_known::{EVENT_EMIT_EVENT, VECTOR_BORROW_MUT}, -}; use std::collections::BTreeSet; pub struct CleanAndOptimizeProcessor(); diff --git a/third_party/move/move-prover/bytecode/src/data_invariant_instrumentation.rs b/third_party/move/move-prover/bytecode-pipeline/src/data_invariant_instrumentation.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/data_invariant_instrumentation.rs rename to third_party/move/move-prover/bytecode-pipeline/src/data_invariant_instrumentation.rs index bd3eef6d00b5d..2af5efe8bb83d 100644 --- a/third_party/move/move-prover/bytecode/src/data_invariant_instrumentation.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/data_invariant_instrumentation.rs @@ -11,13 +11,7 @@ //! (which depends on the compilation scheme). It also handles PackRef/PackRefDeep //! instructions introduced by memory instrumentation, as well as the Pack instructions. -use crate::{ - function_data_builder::FunctionDataBuilder, - function_target::FunctionData, - function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder}, - options::ProverOptions, - stackless_bytecode::{Bytecode, Operation, PropKind}, -}; +use crate::options::ProverOptions; use move_model::{ ast, ast::{ConditionKind, Exp, ExpData, QuantKind, TempIndex}, @@ -26,6 +20,12 @@ use move_model::{ pragmas::{INTRINSIC_FUN_MAP_SPEC_GET, INTRINSIC_TYPE_MAP}, ty::Type, }; +use move_stackless_bytecode::{ + function_data_builder::FunctionDataBuilder, + function_target::FunctionData, + function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder}, + stackless_bytecode::{Bytecode, Operation, PropKind}, +}; const INVARIANT_FAILS_MESSAGE: &str = "data invariant does not hold"; diff --git a/third_party/move/move-prover/bytecode/src/eliminate_imm_refs.rs b/third_party/move/move-prover/bytecode-pipeline/src/eliminate_imm_refs.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/eliminate_imm_refs.rs rename to third_party/move/move-prover/bytecode-pipeline/src/eliminate_imm_refs.rs index 5eb6a764fc797..cd1c2d2745f42 100644 --- a/third_party/move/move-prover/bytecode/src/eliminate_imm_refs.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/eliminate_imm_refs.rs @@ -2,17 +2,17 @@ // Copyright (c) The Move Contributors // SPDX-License-Identifier: Apache-2.0 -use crate::{ - function_data_builder::FunctionDataBuilder, - function_target::FunctionData, - function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder}, - stackless_bytecode::{AssignKind, Bytecode, Operation}, -}; use move_model::{ ast::TempIndex, model::FunctionEnv, ty::{ReferenceKind, Type}, }; +use move_stackless_bytecode::{ + function_data_builder::FunctionDataBuilder, + function_target::FunctionData, + 
function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder}, + stackless_bytecode::{AssignKind, Bytecode, Operation}, +}; pub struct EliminateImmRefsProcessor {} diff --git a/third_party/move/move-prover/bytecode/src/global_invariant_analysis.rs b/third_party/move/move-prover/bytecode-pipeline/src/global_invariant_analysis.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/global_invariant_analysis.rs rename to third_party/move/move-prover/bytecode-pipeline/src/global_invariant_analysis.rs index d40d661d58d46..82af45d86b5f3 100644 --- a/third_party/move/move-prover/bytecode/src/global_invariant_analysis.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/global_invariant_analysis.rs @@ -4,20 +4,20 @@ // Analysis pass which analyzes how to injects global invariants into the bytecode. -use crate::{ +use crate::verification_analysis::{is_invariant_suspendable, InvariantAnalysisData}; +use move_binary_format::file_format::CodeOffset; +use move_model::{ + ast::ConditionKind, + model::{FunId, FunctionEnv, GlobalEnv, GlobalId, QualifiedId, QualifiedInstId, StructId}, + ty::{Type, TypeDisplayContext, TypeInstantiationDerivation, TypeUnificationAdapter, Variance}, +}; +use move_stackless_bytecode::{ function_target::{FunctionData, FunctionTarget}, function_target_pipeline::{ FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant, VerificationFlavor, }, stackless_bytecode::{BorrowNode, Bytecode, Operation, PropKind}, usage_analysis, - verification_analysis::{is_invariant_suspendable, InvariantAnalysisData}, -}; -use move_binary_format::file_format::CodeOffset; -use move_model::{ - ast::ConditionKind, - model::{FunId, FunctionEnv, GlobalEnv, GlobalId, QualifiedId, QualifiedInstId, StructId}, - ty::{Type, TypeDisplayContext, TypeInstantiationDerivation, TypeUnificationAdapter, Variance}, }; use std::{ collections::{BTreeMap, BTreeSet}, diff --git a/third_party/move/move-prover/bytecode/src/global_invariant_instrumentation.rs b/third_party/move/move-prover/bytecode-pipeline/src/global_invariant_instrumentation.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/global_invariant_instrumentation.rs rename to third_party/move/move-prover/bytecode-pipeline/src/global_invariant_instrumentation.rs index a7b5a70a97e3a..dd29ad2fb0c0b 100644 --- a/third_party/move/move-prover/bytecode/src/global_invariant_instrumentation.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/global_invariant_instrumentation.rs @@ -5,14 +5,8 @@ // Instrumentation pass which injects global invariants into the bytecode. 
use crate::{ - function_data_builder::FunctionDataBuilder, - function_target::{FunctionData, FunctionTarget}, - function_target_pipeline::{ - FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant, VerificationFlavor, - }, - global_invariant_analysis::{self, PerFunctionRelevance}, + global_invariant_analysis, global_invariant_analysis::PerFunctionRelevance, options::ProverOptions, - stackless_bytecode::{Bytecode, Operation, PropKind}, }; use move_binary_format::file_format::CodeOffset; use move_model::{ @@ -22,6 +16,14 @@ use move_model::{ spec_translator::{SpecTranslator, TranslatedSpec}, ty::Type, }; +use move_stackless_bytecode::{ + function_data_builder::FunctionDataBuilder, + function_target::{FunctionData, FunctionTarget}, + function_target_pipeline::{ + FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant, VerificationFlavor, + }, + stackless_bytecode::{Bytecode, Operation, PropKind}, +}; use std::collections::{BTreeMap, BTreeSet}; const GLOBAL_INVARIANT_FAILS_MESSAGE: &str = "global memory invariant does not hold"; diff --git a/third_party/move/move-prover/bytecode/src/global_invariant_instrumentation_v2.rs b/third_party/move/move-prover/bytecode-pipeline/src/global_invariant_instrumentation_v2.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/global_invariant_instrumentation_v2.rs rename to third_party/move/move-prover/bytecode-pipeline/src/global_invariant_instrumentation_v2.rs index 09c23fcc641c9..3262c60496389 100644 --- a/third_party/move/move-prover/bytecode/src/global_invariant_instrumentation_v2.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/global_invariant_instrumentation_v2.rs @@ -4,17 +4,7 @@ // Transformation which injects global invariants into the bytecode. -use crate::{ - function_data_builder::FunctionDataBuilder, - function_target::{FunctionData, FunctionTarget}, - function_target_pipeline::{ - FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant, VerificationFlavor, - }, - options::ProverOptions, - stackless_bytecode::{BorrowNode, Bytecode, Operation, PropKind}, - usage_analysis, - verification_analysis_v2::InvariantAnalysisData, -}; +use crate::{options::ProverOptions, verification_analysis_v2::InvariantAnalysisData}; use itertools::Itertools; #[allow(unused_imports)] use log::{debug, info, log, warn}; @@ -28,6 +18,15 @@ use move_model::{ spec_translator::{SpecTranslator, TranslatedSpec}, ty::{Type, TypeUnificationAdapter, Variance}, }; +use move_stackless_bytecode::{ + function_data_builder::FunctionDataBuilder, + function_target::{FunctionData, FunctionTarget}, + function_target_pipeline::{ + FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant, VerificationFlavor, + }, + stackless_bytecode::{BorrowNode, Bytecode, Operation, PropKind}, + usage_analysis, +}; use std::collections::{BTreeMap, BTreeSet}; const GLOBAL_INVARIANT_FAILS_MESSAGE: &str = "global memory invariant does not hold"; diff --git a/third_party/move/move-prover/bytecode/src/inconsistency_check.rs b/third_party/move/move-prover/bytecode-pipeline/src/inconsistency_check.rs similarity index 98% rename from third_party/move/move-prover/bytecode/src/inconsistency_check.rs rename to third_party/move/move-prover/bytecode-pipeline/src/inconsistency_check.rs index 033d15e5f058b..1da58828dbeb1 100644 --- a/third_party/move/move-prover/bytecode/src/inconsistency_check.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/inconsistency_check.rs @@ -20,16 +20,16 @@ //! any post-condition can be proved. 
Checking of this behavior is turned-off by default, and can //! be enabled with the `unconditional-abort-as-inconsistency` flag. -use crate::{ +use crate::options::ProverOptions; +use move_model::{exp_generator::ExpGenerator, model::FunctionEnv}; +use move_stackless_bytecode::{ function_data_builder::FunctionDataBuilder, function_target::FunctionData, function_target_pipeline::{ FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant, VerificationFlavor, }, - options::ProverOptions, stackless_bytecode::{Bytecode, PropKind}, }; -use move_model::{exp_generator::ExpGenerator, model::FunctionEnv}; // This message is for the boogie wrapper, and not shown to the users. const EXPECTED_TO_FAIL: &str = "expected to fail"; diff --git a/third_party/move/move-prover/bytecode-pipeline/src/lib.rs b/third_party/move/move-prover/bytecode-pipeline/src/lib.rs new file mode 100644 index 0000000000000..f8d8d9ad95e08 --- /dev/null +++ b/third_party/move/move-prover/bytecode-pipeline/src/lib.rs @@ -0,0 +1,24 @@ +// Copyright © Aptos Foundation +// Parts of the project are originally copyright © Meta Platforms, Inc. +// SPDX-License-Identifier: Apache-2.0 +pub mod clean_and_optimize; +pub mod data_invariant_instrumentation; +pub mod eliminate_imm_refs; +pub mod global_invariant_analysis; +pub mod global_invariant_instrumentation; +pub mod global_invariant_instrumentation_v2; +pub mod inconsistency_check; +pub mod loop_analysis; +pub mod memory_instrumentation; +pub mod mono_analysis; +pub mod mut_ref_instrumentation; +pub mod mutation_tester; +pub mod number_operation; +pub mod number_operation_analysis; +pub mod options; +pub mod packed_types_analysis; +pub mod pipeline_factory; +pub mod spec_instrumentation; +pub mod verification_analysis; +pub mod verification_analysis_v2; +pub mod well_formed_instrumentation; diff --git a/third_party/move/move-prover/bytecode/src/loop_analysis.rs b/third_party/move/move-prover/bytecode-pipeline/src/loop_analysis.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/loop_analysis.rs rename to third_party/move/move-prover/bytecode-pipeline/src/loop_analysis.rs index a35fbe0469027..6203344653073 100644 --- a/third_party/move/move-prover/bytecode/src/loop_analysis.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/loop_analysis.rs @@ -2,15 +2,7 @@ // Copyright (c) The Move Contributors // SPDX-License-Identifier: Apache-2.0 -use crate::{ - function_data_builder::{FunctionDataBuilder, FunctionDataBuilderOptions}, - function_target::{FunctionData, FunctionTarget}, - function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder}, - graph::{Graph, NaturalLoop}, - options::ProverOptions, - stackless_bytecode::{AttrId, Bytecode, HavocKind, Label, Operation, PropKind}, - stackless_control_flow_graph::{BlockContent, BlockId, StacklessControlFlowGraph}, -}; +use crate::options::ProverOptions; use move_binary_format::file_format::CodeOffset; use move_model::{ ast::{self, TempIndex}, @@ -19,6 +11,14 @@ use move_model::{ pragmas::UNROLL_PRAGMA, ty::{PrimitiveType, Type}, }; +use move_stackless_bytecode::{ + function_data_builder::{FunctionDataBuilder, FunctionDataBuilderOptions}, + function_target::{FunctionData, FunctionTarget}, + function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder}, + graph::{Graph, NaturalLoop}, + stackless_bytecode::{AttrId, Bytecode, HavocKind, Label, Operation, PropKind}, + stackless_control_flow_graph::{BlockContent, BlockId, StacklessControlFlowGraph}, +}; use std::collections::{BTreeMap, 
BTreeSet}; const LOOP_INVARIANT_BASE_FAILED: &str = "base case of the loop invariant does not hold"; diff --git a/third_party/move/move-prover/bytecode/src/memory_instrumentation.rs b/third_party/move/move-prover/bytecode-pipeline/src/memory_instrumentation.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/memory_instrumentation.rs rename to third_party/move/move-prover/bytecode-pipeline/src/memory_instrumentation.rs index 9250fc6c9d023..1fb9196c3931e 100644 --- a/third_party/move/move-prover/bytecode/src/memory_instrumentation.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/memory_instrumentation.rs @@ -2,7 +2,14 @@ // Copyright (c) The Move Contributors // SPDX-License-Identifier: Apache-2.0 -use crate::{ +use move_binary_format::file_format::CodeOffset; +use move_model::{ + ast::ConditionKind, + exp_generator::ExpGenerator, + model::{FunctionEnv, StructEnv}, + ty::{Type, BOOL_TYPE}, +}; +use move_stackless_bytecode::{ borrow_analysis::{BorrowAnnotation, WriteBackAction}, function_data_builder::FunctionDataBuilder, function_target::FunctionData, @@ -13,13 +20,6 @@ use crate::{ Operation, }, }; -use move_binary_format::file_format::CodeOffset; -use move_model::{ - ast::ConditionKind, - exp_generator::ExpGenerator, - model::{FunctionEnv, StructEnv}, - ty::{Type, BOOL_TYPE}, -}; use std::collections::BTreeSet; pub struct MemoryInstrumentationProcessor {} diff --git a/third_party/move/move-prover/bytecode/src/mono_analysis.rs b/third_party/move/move-prover/bytecode-pipeline/src/mono_analysis.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/mono_analysis.rs rename to third_party/move/move-prover/bytecode-pipeline/src/mono_analysis.rs index 19fce6252ca31..544e732cddca8 100644 --- a/third_party/move/move-prover/bytecode/src/mono_analysis.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/mono_analysis.rs @@ -5,12 +5,6 @@ //! Analysis which computes information needed in backends for monomorphization. This //! computes the distinct type instantiations in the model for structs and inlined functions. 
-use crate::{ - function_target::FunctionTarget, - function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant}, - stackless_bytecode::{BorrowEdge, Bytecode, Operation}, - usage_analysis::UsageProcessor, -}; use itertools::Itertools; use move_model::{ ast, @@ -26,6 +20,12 @@ use move_model::{ TYPE_NAME_SPEC, TYPE_SPEC_IS_STRUCT, }, }; +use move_stackless_bytecode::{ + function_target::FunctionTarget, + function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant}, + stackless_bytecode::{BorrowEdge, Bytecode, Operation}, + usage_analysis::UsageProcessor, +}; use std::{ collections::{BTreeMap, BTreeSet}, fmt, diff --git a/third_party/move/move-prover/bytecode/src/mut_ref_instrumentation.rs b/third_party/move/move-prover/bytecode-pipeline/src/mut_ref_instrumentation.rs similarity index 98% rename from third_party/move/move-prover/bytecode/src/mut_ref_instrumentation.rs rename to third_party/move/move-prover/bytecode-pipeline/src/mut_ref_instrumentation.rs index 5733f7e31a51c..0b224c7842bbe 100644 --- a/third_party/move/move-prover/bytecode/src/mut_ref_instrumentation.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/mut_ref_instrumentation.rs @@ -2,14 +2,14 @@ // Copyright (c) The Move Contributors // SPDX-License-Identifier: Apache-2.0 -use crate::{ +use itertools::Itertools; +use move_model::{ast::TempIndex, model::FunctionEnv}; +use move_stackless_bytecode::{ function_data_builder::FunctionDataBuilder, function_target::FunctionData, function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder}, stackless_bytecode::{AssignKind, Bytecode, Operation}, }; -use itertools::Itertools; -use move_model::{ast::TempIndex, model::FunctionEnv}; pub struct MutRefInstrumenter {} diff --git a/third_party/move/move-prover/bytecode/src/mutation_tester.rs b/third_party/move/move-prover/bytecode-pipeline/src/mutation_tester.rs similarity index 98% rename from third_party/move/move-prover/bytecode/src/mutation_tester.rs rename to third_party/move/move-prover/bytecode-pipeline/src/mutation_tester.rs index ff867c219552e..06f34a560f01d 100644 --- a/third_party/move/move-prover/bytecode/src/mutation_tester.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/mutation_tester.rs @@ -9,17 +9,17 @@ //! It emits instructions in bytecode format, but with changes made //! Note that this mutation does nothing if mutation flags are not enabled -use crate::{ +use crate::options::ProverOptions; +use move_model::{ + exp_generator::ExpGenerator, + model::{FunctionEnv, GlobalEnv}, +}; +use move_stackless_bytecode::{ function_data_builder::FunctionDataBuilder, function_target::FunctionData, function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder}, - options::ProverOptions, stackless_bytecode::{Bytecode, Operation}, }; -use move_model::{ - exp_generator::ExpGenerator, - model::{FunctionEnv, GlobalEnv}, -}; pub struct MutationTester {} diff --git a/third_party/move/move-prover/bytecode/src/number_operation.rs b/third_party/move/move-prover/bytecode-pipeline/src/number_operation.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/number_operation.rs rename to third_party/move/move-prover/bytecode-pipeline/src/number_operation.rs index 706c0e774e6b6..7c50796cd6526 100644 --- a/third_party/move/move-prover/bytecode/src/number_operation.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/number_operation.rs @@ -6,7 +6,6 @@ //! 
mark the operation (arithmetic or bitwise) that a variable or a field involves, //! which will be used later when the correct number type (`int` or `bv`) in the boogie program -use crate::COMPILED_MODULE_AVAILABLE; use itertools::Itertools; use move_model::{ ast::{PropertyValue, TempIndex, Value}, @@ -14,6 +13,7 @@ use move_model::{ pragmas::{BV_PARAM_PROP, BV_RET_PROP}, ty::Type, }; +use move_stackless_bytecode::COMPILED_MODULE_AVAILABLE; use std::{collections::BTreeMap, ops::Deref, str}; static PARSING_ERROR: &str = "error happened when parsing the bv pragma"; diff --git a/third_party/move/move-prover/bytecode/src/number_operation_analysis.rs b/third_party/move/move-prover/bytecode-pipeline/src/number_operation_analysis.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/number_operation_analysis.rs rename to third_party/move/move-prover/bytecode-pipeline/src/number_operation_analysis.rs index 1a939c746a28d..9699d45cb8e54 100644 --- a/third_party/move/move-prover/bytecode/src/number_operation_analysis.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/number_operation_analysis.rs @@ -7,19 +7,11 @@ // The result of this analysis will be used when generating the boogie code use crate::{ - dataflow_analysis::{DataflowAnalysis, TransferFunctions}, - dataflow_domains::{AbstractDomain, JoinResult}, - function_target::FunctionTarget, - function_target_pipeline::{ - FunctionTargetPipeline, FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant, - }, number_operation::{ - GlobalNumberOperationState, - NumOperation::{self, Arithmetic, Bitwise, Bottom}, + GlobalNumberOperationState, NumOperation, + NumOperation::{Arithmetic, Bitwise, Bottom}, }, options::ProverOptions, - stackless_bytecode::{AttrId, Bytecode, Operation}, - stackless_control_flow_graph::StacklessControlFlowGraph, }; use itertools::Either; use move_binary_format::file_format::CodeOffset; @@ -28,6 +20,16 @@ use move_model::{ model::{FunId, GlobalEnv, ModuleId, Parameter}, ty::{PrimitiveType, Type}, }; +use move_stackless_bytecode::{ + dataflow_analysis::{DataflowAnalysis, TransferFunctions}, + dataflow_domains::{AbstractDomain, JoinResult}, + function_target::FunctionTarget, + function_target_pipeline::{ + FunctionTargetPipeline, FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant, + }, + stackless_bytecode::{AttrId, Bytecode, Operation}, + stackless_control_flow_graph::StacklessControlFlowGraph, +}; use std::{ collections::{BTreeMap, BTreeSet}, str, diff --git a/third_party/move/move-prover/bytecode/src/options.rs b/third_party/move/move-prover/bytecode-pipeline/src/options.rs similarity index 100% rename from third_party/move/move-prover/bytecode/src/options.rs rename to third_party/move/move-prover/bytecode-pipeline/src/options.rs diff --git a/third_party/move/move-prover/bytecode/src/packed_types_analysis.rs b/third_party/move/move-prover/bytecode-pipeline/src/packed_types_analysis.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/packed_types_analysis.rs rename to third_party/move/move-prover/bytecode-pipeline/src/packed_types_analysis.rs index d4e71c7e576fd..26a062f2e5e26 100644 --- a/third_party/move/move-prover/bytecode/src/packed_types_analysis.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/packed_types_analysis.rs @@ -2,7 +2,13 @@ // Copyright (c) The Move Contributors // SPDX-License-Identifier: Apache-2.0 -use crate::{ +use move_binary_format::file_format::CodeOffset; +use move_core_types::language_storage::{StructTag, 
TypeTag}; +use move_model::{ + model::{FunctionEnv, GlobalEnv}, + ty::Type, +}; +use move_stackless_bytecode::{ compositional_analysis::{CompositionalAnalysis, SummaryCache}, dataflow_analysis::{DataflowAnalysis, TransferFunctions}, dataflow_domains::{AbstractDomain, JoinResult, SetDomain}, @@ -11,12 +17,6 @@ use crate::{ stackless_bytecode::{Bytecode, Operation}, COMPILED_MODULE_AVAILABLE, }; -use move_binary_format::file_format::CodeOffset; -use move_core_types::language_storage::{StructTag, TypeTag}; -use move_model::{ - model::{FunctionEnv, GlobalEnv}, - ty::Type, -}; use std::collections::BTreeSet; /// Get all closed types that may be packed by (1) genesis and (2) all transaction scripts. diff --git a/third_party/move/move-prover/bytecode/src/pipeline_factory.rs b/third_party/move/move-prover/bytecode-pipeline/src/pipeline_factory.rs similarity index 90% rename from third_party/move/move-prover/bytecode/src/pipeline_factory.rs rename to third_party/move/move-prover/bytecode-pipeline/src/pipeline_factory.rs index 95083f159c390..eeaa53d395a4a 100644 --- a/third_party/move/move-prover/bytecode/src/pipeline_factory.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/pipeline_factory.rs @@ -3,29 +3,27 @@ // SPDX-License-Identifier: Apache-2.0 use crate::{ - borrow_analysis::BorrowAnalysisProcessor, clean_and_optimize::CleanAndOptimizeProcessor, data_invariant_instrumentation::DataInvariantInstrumentationProcessor, - debug_instrumentation::DebugInstrumenter, eliminate_imm_refs::EliminateImmRefsProcessor, - function_target_pipeline::{FunctionTargetPipeline, FunctionTargetProcessor}, global_invariant_analysis::GlobalInvariantAnalysisProcessor, global_invariant_instrumentation::GlobalInvariantInstrumentationProcessor, - inconsistency_check::InconsistencyCheckInstrumenter, - livevar_analysis::LiveVarAnalysisProcessor, - loop_analysis::LoopAnalysisProcessor, - memory_instrumentation::MemoryInstrumentationProcessor, - mono_analysis::MonoAnalysisProcessor, - mut_ref_instrumentation::MutRefInstrumenter, - mutation_tester::MutationTester, - number_operation_analysis::NumberOperationProcessor, - options::ProverOptions, - reaching_def_analysis::ReachingDefProcessor, + inconsistency_check::InconsistencyCheckInstrumenter, loop_analysis::LoopAnalysisProcessor, + memory_instrumentation::MemoryInstrumentationProcessor, mono_analysis::MonoAnalysisProcessor, + mut_ref_instrumentation::MutRefInstrumenter, mutation_tester::MutationTester, + number_operation_analysis::NumberOperationProcessor, options::ProverOptions, spec_instrumentation::SpecInstrumentationProcessor, - usage_analysis::UsageProcessor, verification_analysis::VerificationAnalysisProcessor, well_formed_instrumentation::WellFormedInstrumentationProcessor, }; +use move_stackless_bytecode::{ + borrow_analysis::BorrowAnalysisProcessor, + debug_instrumentation::DebugInstrumenter, + function_target_pipeline::{FunctionTargetPipeline, FunctionTargetProcessor}, + livevar_analysis::LiveVarAnalysisProcessor, + reaching_def_analysis::ReachingDefProcessor, + usage_analysis::UsageProcessor, +}; pub fn default_pipeline_with_options(options: &ProverOptions) -> FunctionTargetPipeline { // NOTE: the order of these processors is import! 
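Taken together, the pipeline_factory.rs changes above mean the default prover pipeline is now assembled from two crates: the generic dataflow processors imported from move-stackless-bytecode and the prover-specific instrumentation passes that remain in this crate. A minimal sketch of how a driver could obtain that pipeline; the wrapper function is ours, and only the paths shown in the hunks above are assumed.

use move_prover_bytecode_pipeline::{options::ProverOptions, pipeline_factory};
use move_stackless_bytecode::function_target_pipeline::FunctionTargetPipeline;

fn prover_pipeline(options: &ProverOptions) -> FunctionTargetPipeline {
    // The factory wires processors from both crates in the required order
    // (see the NOTE about ordering in the context above).
    pipeline_factory::default_pipeline_with_options(options)
}

For example, prover_pipeline(&ProverOptions::default()) reproduces what the default_pipeline helper just below (now marked #[allow(unused)]) returns.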
@@ -79,6 +77,7 @@ pub fn default_pipeline_with_options(options: &ProverOptions) -> FunctionTargetP res } +#[allow(unused)] pub fn default_pipeline() -> FunctionTargetPipeline { default_pipeline_with_options(&ProverOptions::default()) } diff --git a/third_party/move/move-prover/bytecode/src/spec_instrumentation.rs b/third_party/move/move-prover/bytecode-pipeline/src/spec_instrumentation.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/spec_instrumentation.rs rename to third_party/move/move-prover/bytecode-pipeline/src/spec_instrumentation.rs index 8178f886f239a..ff8329f5fe1b3 100644 --- a/third_party/move/move-prover/bytecode/src/spec_instrumentation.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/spec_instrumentation.rs @@ -4,30 +4,30 @@ // Transformation which injects specifications (Move function spec blocks) into the bytecode. -use crate::{ +use crate::{options::ProverOptions, verification_analysis}; +use itertools::Itertools; +use move_model::{ + ast, + ast::{Exp, ExpData, TempIndex, Value}, + exp_generator::ExpGenerator, + model::{FunId, FunctionEnv, GlobalEnv, Loc, ModuleId, QualifiedId, QualifiedInstId, StructId}, + pragmas::{ABORTS_IF_IS_PARTIAL_PRAGMA, EMITS_IS_PARTIAL_PRAGMA, EMITS_IS_STRICT_PRAGMA}, + spec_translator::{SpecTranslator, TranslatedSpec}, + ty::{ReferenceKind, Type, TypeDisplayContext, BOOL_TYPE, NUM_TYPE}, +}; +use move_stackless_bytecode::{ function_data_builder::FunctionDataBuilder, function_target::{FunctionData, FunctionTarget}, function_target_pipeline::{ FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant, VerificationFlavor, }, livevar_analysis::LiveVarAnalysisProcessor, - options::ProverOptions, reaching_def_analysis::ReachingDefProcessor, stackless_bytecode::{ AbortAction, AssignKind, AttrId, BorrowEdge, BorrowNode, Bytecode, HavocKind, Label, Operation, PropKind, }, - usage_analysis, verification_analysis, COMPILED_MODULE_AVAILABLE, -}; -use itertools::Itertools; -use move_model::{ - ast, - ast::{Exp, ExpData, TempIndex, Value}, - exp_generator::ExpGenerator, - model::{FunId, FunctionEnv, GlobalEnv, Loc, ModuleId, QualifiedId, QualifiedInstId, StructId}, - pragmas::{ABORTS_IF_IS_PARTIAL_PRAGMA, EMITS_IS_PARTIAL_PRAGMA, EMITS_IS_STRICT_PRAGMA}, - spec_translator::{SpecTranslator, TranslatedSpec}, - ty::{ReferenceKind, Type, TypeDisplayContext, BOOL_TYPE, NUM_TYPE}, + usage_analysis, COMPILED_MODULE_AVAILABLE, }; use std::{ collections::{BTreeMap, BTreeSet}, diff --git a/third_party/move/move-prover/bytecode/src/verification_analysis.rs b/third_party/move/move-prover/bytecode-pipeline/src/verification_analysis.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/verification_analysis.rs rename to third_party/move/move-prover/bytecode-pipeline/src/verification_analysis.rs index a728d0cb42671..700cdef56730e 100644 --- a/third_party/move/move-prover/bytecode/src/verification_analysis.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/verification_analysis.rs @@ -7,12 +7,7 @@ //! each function as well as collect information on how these invariants should be handled (i.e., //! checked after bytecode, checked at function exit, or deferred to caller). 
-use crate::{ - function_target::{FunctionData, FunctionTarget}, - function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant}, - options::ProverOptions, - usage_analysis, COMPILED_MODULE_AVAILABLE, -}; +use crate::options::ProverOptions; use codespan_reporting::diagnostic::Severity; use itertools::Itertools; use move_model::{ @@ -24,6 +19,11 @@ use move_model::{ }, ty::{TypeUnificationAdapter, Variance}, }; +use move_stackless_bytecode::{ + function_target::{FunctionData, FunctionTarget}, + function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant}, + usage_analysis, COMPILED_MODULE_AVAILABLE, +}; use std::{ collections::{BTreeMap, BTreeSet}, fmt::{self, Formatter}, diff --git a/third_party/move/move-prover/bytecode/src/verification_analysis_v2.rs b/third_party/move/move-prover/bytecode-pipeline/src/verification_analysis_v2.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/verification_analysis_v2.rs rename to third_party/move/move-prover/bytecode-pipeline/src/verification_analysis_v2.rs index b07cf2f5e2c59..a7384307bb72b 100644 --- a/third_party/move/move-prover/bytecode/src/verification_analysis_v2.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/verification_analysis_v2.rs @@ -4,13 +4,7 @@ //! Analysis which computes an annotation for each function whether -use crate::{ - dataflow_domains::SetDomain, - function_target::{FunctionData, FunctionTarget}, - function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant}, - options::ProverOptions, - usage_analysis, COMPILED_MODULE_AVAILABLE, -}; +use crate::options::ProverOptions; use itertools::Itertools; use log::debug; use move_model::{ @@ -20,6 +14,12 @@ use move_model::{ DISABLE_INVARIANTS_IN_BODY_PRAGMA, VERIFY_PRAGMA, }, }; +use move_stackless_bytecode::{ + dataflow_domains::SetDomain, + function_target::{FunctionData, FunctionTarget}, + function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder, FunctionVariant}, + usage_analysis, COMPILED_MODULE_AVAILABLE, +}; use std::collections::{BTreeMap, BTreeSet, VecDeque}; /// The annotation for information about verification. diff --git a/third_party/move/move-prover/bytecode/src/well_formed_instrumentation.rs b/third_party/move/move-prover/bytecode-pipeline/src/well_formed_instrumentation.rs similarity index 99% rename from third_party/move/move-prover/bytecode/src/well_formed_instrumentation.rs rename to third_party/move/move-prover/bytecode-pipeline/src/well_formed_instrumentation.rs index e9d08e688937d..4434653781415 100644 --- a/third_party/move/move-prover/bytecode/src/well_formed_instrumentation.rs +++ b/third_party/move/move-prover/bytecode-pipeline/src/well_formed_instrumentation.rs @@ -14,13 +14,6 @@ //! Because data invariants cannot refer to global memory, they are not relevant for memory //! usage, and their injection therefore can happen after this phase. 
-use crate::{ - function_data_builder::FunctionDataBuilder, - function_target::FunctionData, - function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder}, - stackless_bytecode::PropKind, - usage_analysis::UsageProcessor, -}; use move_core_types::account_address::AccountAddress; use move_model::{ ast::{Operation, QuantKind}, @@ -28,6 +21,13 @@ use move_model::{ model::FunctionEnv, ty::BOOL_TYPE, }; +use move_stackless_bytecode::{ + function_data_builder::FunctionDataBuilder, + function_target::FunctionData, + function_target_pipeline::{FunctionTargetProcessor, FunctionTargetsHolder}, + stackless_bytecode::PropKind, + usage_analysis::UsageProcessor, +}; pub struct WellFormedInstrumentationProcessor {} diff --git a/third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/borrow.exp b/third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/borrow.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/borrow.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/borrow.exp diff --git a/third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/borrow.move b/third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/borrow.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/borrow.move rename to third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/borrow.move diff --git a/third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/pack.exp b/third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/pack.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/pack.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/pack.exp diff --git a/third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/pack.move b/third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/pack.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/pack.move rename to third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/pack.move diff --git a/third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/params.exp b/third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/params.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/params.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/params.exp diff --git a/third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/params.move b/third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/params.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/params.move rename to third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/params.move diff --git a/third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/vector.exp b/third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/vector.exp similarity index 100% rename from 
third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/vector.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/vector.exp diff --git a/third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/vector.move b/third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/vector.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/data_invariant_instrumentation/vector.move rename to third_party/move/move-prover/bytecode-pipeline/tests/data_invariant_instrumentation/vector.move diff --git a/third_party/move/move-prover/bytecode/tests/eliminate_imm_refs/basic_test.exp b/third_party/move/move-prover/bytecode-pipeline/tests/eliminate_imm_refs/basic_test.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/eliminate_imm_refs/basic_test.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/eliminate_imm_refs/basic_test.exp diff --git a/third_party/move/move-prover/bytecode/tests/eliminate_imm_refs/basic_test.move b/third_party/move/move-prover/bytecode-pipeline/tests/eliminate_imm_refs/basic_test.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/eliminate_imm_refs/basic_test.move rename to third_party/move/move-prover/bytecode-pipeline/tests/eliminate_imm_refs/basic_test.move diff --git a/third_party/move/move-prover/bytecode/tests/global_invariant_analysis/disable_in_body.exp b/third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_analysis/disable_in_body.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/global_invariant_analysis/disable_in_body.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_analysis/disable_in_body.exp diff --git a/third_party/move/move-prover/bytecode/tests/global_invariant_analysis/disable_in_body.move b/third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_analysis/disable_in_body.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/global_invariant_analysis/disable_in_body.move rename to third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_analysis/disable_in_body.move diff --git a/third_party/move/move-prover/bytecode/tests/global_invariant_analysis/mutual_inst.exp b/third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_analysis/mutual_inst.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/global_invariant_analysis/mutual_inst.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_analysis/mutual_inst.exp diff --git a/third_party/move/move-prover/bytecode/tests/global_invariant_analysis/mutual_inst.move b/third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_analysis/mutual_inst.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/global_invariant_analysis/mutual_inst.move rename to third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_analysis/mutual_inst.move diff --git a/third_party/move/move-prover/bytecode/tests/global_invariant_analysis/uninst_type_param_in_inv.exp b/third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_analysis/uninst_type_param_in_inv.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/global_invariant_analysis/uninst_type_param_in_inv.exp rename to 
third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_analysis/uninst_type_param_in_inv.exp diff --git a/third_party/move/move-prover/bytecode/tests/global_invariant_analysis/uninst_type_param_in_inv.move b/third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_analysis/uninst_type_param_in_inv.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/global_invariant_analysis/uninst_type_param_in_inv.move rename to third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_analysis/uninst_type_param_in_inv.move diff --git a/third_party/move/move-prover/bytecode/tests/global_invariant_instrumentation/borrow.exp b/third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_instrumentation/borrow.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/global_invariant_instrumentation/borrow.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_instrumentation/borrow.exp diff --git a/third_party/move/move-prover/bytecode/tests/global_invariant_instrumentation/borrow.move b/third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_instrumentation/borrow.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/global_invariant_instrumentation/borrow.move rename to third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_instrumentation/borrow.move diff --git a/third_party/move/move-prover/bytecode/tests/global_invariant_instrumentation/move.exp b/third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_instrumentation/move.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/global_invariant_instrumentation/move.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_instrumentation/move.exp diff --git a/third_party/move/move-prover/bytecode/tests/global_invariant_instrumentation/move.move b/third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_instrumentation/move.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/global_invariant_instrumentation/move.move rename to third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_instrumentation/move.move diff --git a/third_party/move/move-prover/bytecode/tests/global_invariant_instrumentation/update.exp b/third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_instrumentation/update.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/global_invariant_instrumentation/update.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_instrumentation/update.exp diff --git a/third_party/move/move-prover/bytecode/tests/global_invariant_instrumentation/update.move b/third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_instrumentation/update.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/global_invariant_instrumentation/update.move rename to third_party/move/move-prover/bytecode-pipeline/tests/global_invariant_instrumentation/update.move diff --git a/third_party/move/move-prover/bytecode/tests/memory_instr/basic_test.exp b/third_party/move/move-prover/bytecode-pipeline/tests/memory_instr/basic_test.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/memory_instr/basic_test.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/memory_instr/basic_test.exp diff --git 
a/third_party/move/move-prover/bytecode/tests/memory_instr/basic_test.move b/third_party/move/move-prover/bytecode-pipeline/tests/memory_instr/basic_test.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/memory_instr/basic_test.move rename to third_party/move/move-prover/bytecode-pipeline/tests/memory_instr/basic_test.move diff --git a/third_party/move/move-prover/bytecode/tests/memory_instr/mut_ref.exp b/third_party/move/move-prover/bytecode-pipeline/tests/memory_instr/mut_ref.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/memory_instr/mut_ref.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/memory_instr/mut_ref.exp diff --git a/third_party/move/move-prover/bytecode/tests/memory_instr/mut_ref.move b/third_party/move/move-prover/bytecode-pipeline/tests/memory_instr/mut_ref.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/memory_instr/mut_ref.move rename to third_party/move/move-prover/bytecode-pipeline/tests/memory_instr/mut_ref.move diff --git a/third_party/move/move-prover/bytecode/tests/mono_analysis/test.exp b/third_party/move/move-prover/bytecode-pipeline/tests/mono_analysis/test.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/mono_analysis/test.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/mono_analysis/test.exp diff --git a/third_party/move/move-prover/bytecode/tests/mono_analysis/test.move b/third_party/move/move-prover/bytecode-pipeline/tests/mono_analysis/test.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/mono_analysis/test.move rename to third_party/move/move-prover/bytecode-pipeline/tests/mono_analysis/test.move diff --git a/third_party/move/move-prover/bytecode/tests/mut_ref_instrumentation/basic_test.exp b/third_party/move/move-prover/bytecode-pipeline/tests/mut_ref_instrumentation/basic_test.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/mut_ref_instrumentation/basic_test.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/mut_ref_instrumentation/basic_test.exp diff --git a/third_party/move/move-prover/bytecode/tests/mut_ref_instrumentation/basic_test.move b/third_party/move/move-prover/bytecode-pipeline/tests/mut_ref_instrumentation/basic_test.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/mut_ref_instrumentation/basic_test.move rename to third_party/move/move-prover/bytecode-pipeline/tests/mut_ref_instrumentation/basic_test.move diff --git a/third_party/move/move-prover/bytecode/tests/spec_instrumentation/fun_spec.exp b/third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/fun_spec.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/spec_instrumentation/fun_spec.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/fun_spec.exp diff --git a/third_party/move/move-prover/bytecode/tests/spec_instrumentation/fun_spec.move b/third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/fun_spec.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/spec_instrumentation/fun_spec.move rename to third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/fun_spec.move diff --git a/third_party/move/move-prover/bytecode/tests/spec_instrumentation/generics.exp b/third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/generics.exp 
similarity index 100% rename from third_party/move/move-prover/bytecode/tests/spec_instrumentation/generics.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/generics.exp diff --git a/third_party/move/move-prover/bytecode/tests/spec_instrumentation/generics.move b/third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/generics.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/spec_instrumentation/generics.move rename to third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/generics.move diff --git a/third_party/move/move-prover/bytecode/tests/spec_instrumentation/modifies.exp b/third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/modifies.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/spec_instrumentation/modifies.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/modifies.exp diff --git a/third_party/move/move-prover/bytecode/tests/spec_instrumentation/modifies.move b/third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/modifies.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/spec_instrumentation/modifies.move rename to third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/modifies.move diff --git a/third_party/move/move-prover/bytecode/tests/spec_instrumentation/opaque_call.exp b/third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/opaque_call.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/spec_instrumentation/opaque_call.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/opaque_call.exp diff --git a/third_party/move/move-prover/bytecode/tests/spec_instrumentation/opaque_call.move b/third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/opaque_call.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/spec_instrumentation/opaque_call.move rename to third_party/move/move-prover/bytecode-pipeline/tests/spec_instrumentation/opaque_call.move diff --git a/third_party/move/move-prover/bytecode/tests/testsuite.rs b/third_party/move/move-prover/bytecode-pipeline/tests/testsuite.rs similarity index 60% rename from third_party/move/move-prover/bytecode/tests/testsuite.rs rename to third_party/move/move-prover/bytecode-pipeline/tests/testsuite.rs index 6bfb513400752..94cc328281870 100644 --- a/third_party/move/move-prover/bytecode/tests/testsuite.rs +++ b/third_party/move/move-prover/bytecode-pipeline/tests/testsuite.rs @@ -3,40 +3,29 @@ // SPDX-License-Identifier: Apache-2.0 use anyhow::anyhow; -use codespan_reporting::{diagnostic::Severity, term::termcolor::Buffer}; -use move_command_line_common::testing::EXP_EXT; -use move_compiler::shared::PackagePaths; -use move_model::{model::GlobalEnv, options::ModelBuilderOptions, run_model_builder_with_options}; -use move_prover_test_utils::{baseline_test::verify_or_update_baseline, extract_test_directives}; -use move_stackless_bytecode::{ - borrow_analysis::BorrowAnalysisProcessor, +use move_prover_bytecode_pipeline::{ clean_and_optimize::CleanAndOptimizeProcessor, data_invariant_instrumentation::DataInvariantInstrumentationProcessor, eliminate_imm_refs::EliminateImmRefsProcessor, - function_target_pipeline::{ - FunctionTargetPipeline, FunctionTargetsHolder, ProcessorResultDisplay, - }, 
global_invariant_analysis::GlobalInvariantAnalysisProcessor, global_invariant_instrumentation::GlobalInvariantInstrumentationProcessor, - livevar_analysis::LiveVarAnalysisProcessor, - memory_instrumentation::MemoryInstrumentationProcessor, - mono_analysis::MonoAnalysisProcessor, + memory_instrumentation::MemoryInstrumentationProcessor, mono_analysis::MonoAnalysisProcessor, mut_ref_instrumentation::MutRefInstrumenter, - options::ProverOptions, - print_targets_for_test, - reaching_def_analysis::ReachingDefProcessor, spec_instrumentation::SpecInstrumentationProcessor, - usage_analysis::UsageProcessor, verification_analysis::VerificationAnalysisProcessor, well_formed_instrumentation::WellFormedInstrumentationProcessor, }; +use move_stackless_bytecode::{ + borrow_analysis::BorrowAnalysisProcessor, function_target_pipeline::FunctionTargetPipeline, + livevar_analysis::LiveVarAnalysisProcessor, reaching_def_analysis::ReachingDefProcessor, + usage_analysis::UsageProcessor, +}; use std::path::Path; fn get_tested_transformation_pipeline( dir_name: &str, ) -> anyhow::Result> { match dir_name { - "from_move" => Ok(None), "eliminate_imm_refs" => { let mut pipeline = FunctionTargetPipeline::default(); pipeline.add_processor(EliminateImmRefsProcessor::new()); @@ -48,39 +37,6 @@ fn get_tested_transformation_pipeline( pipeline.add_processor(MutRefInstrumenter::new()); Ok(Some(pipeline)) }, - "reaching_def" => { - let mut pipeline = FunctionTargetPipeline::default(); - pipeline.add_processor(EliminateImmRefsProcessor::new()); - pipeline.add_processor(MutRefInstrumenter::new()); - pipeline.add_processor(ReachingDefProcessor::new()); - Ok(Some(pipeline)) - }, - "livevar" => { - let mut pipeline = FunctionTargetPipeline::default(); - pipeline.add_processor(EliminateImmRefsProcessor::new()); - pipeline.add_processor(MutRefInstrumenter::new()); - pipeline.add_processor(ReachingDefProcessor::new()); - pipeline.add_processor(LiveVarAnalysisProcessor::new()); - Ok(Some(pipeline)) - }, - "borrow" => { - let mut pipeline = FunctionTargetPipeline::default(); - pipeline.add_processor(EliminateImmRefsProcessor::new()); - pipeline.add_processor(MutRefInstrumenter::new()); - pipeline.add_processor(ReachingDefProcessor::new()); - pipeline.add_processor(LiveVarAnalysisProcessor::new()); - pipeline.add_processor(BorrowAnalysisProcessor::new()); - Ok(Some(pipeline)) - }, - "borrow_strong" => { - let mut pipeline = FunctionTargetPipeline::default(); - pipeline.add_processor(EliminateImmRefsProcessor::new()); - pipeline.add_processor(MutRefInstrumenter::new()); - pipeline.add_processor(ReachingDefProcessor::new()); - pipeline.add_processor(LiveVarAnalysisProcessor::new()); - pipeline.add_processor(BorrowAnalysisProcessor::new()); - Ok(Some(pipeline)) - }, "memory_instr" => { let mut pipeline = FunctionTargetPipeline::default(); pipeline.add_processor(EliminateImmRefsProcessor::new()); @@ -188,11 +144,6 @@ fn get_tested_transformation_pipeline( pipeline.add_processor(MonoAnalysisProcessor::new()); Ok(Some(pipeline)) }, - "usage_analysis" => { - let mut pipeline = FunctionTargetPipeline::default(); - pipeline.add_processor(UsageProcessor::new()); - Ok(Some(pipeline)) - }, _ => Err(anyhow!( "the sub-directory `{}` has no associated pipeline to test", dir_name @@ -201,73 +152,13 @@ fn get_tested_transformation_pipeline( } fn test_runner(path: &Path) -> datatest_stable::Result<()> { - let mut sources = extract_test_directives(path, "// dep:")?; - sources.push(path.to_string_lossy().to_string()); - let env: GlobalEnv = 
run_model_builder_with_options( - vec![PackagePaths { - name: None, - paths: sources, - named_address_map: move_stdlib::move_stdlib_named_addresses(), - }], - vec![], - ModelBuilderOptions::default(), - )?; - let out = if env.has_errors() { - let mut error_writer = Buffer::no_color(); - env.report_diag(&mut error_writer, Severity::Error); - String::from_utf8_lossy(&error_writer.into_inner()).to_string() - } else { - let options = ProverOptions { - stable_test_output: true, - ..Default::default() - }; - env.set_extension(options); - let dir_name = path - .parent() - .and_then(|p| p.file_name()) - .and_then(|p| p.to_str()) - .ok_or_else(|| anyhow!("bad file name"))?; - let pipeline_opt = get_tested_transformation_pipeline(dir_name)?; - - // Initialize and print function targets - let mut text = String::new(); - let mut targets = FunctionTargetsHolder::default(); - for module_env in env.get_modules() { - for func_env in module_env.get_functions() { - targets.add_target(&func_env); - } - } - text += &print_targets_for_test(&env, "initial translation from Move", &targets); - - // Run pipeline if any - if let Some(pipeline) = pipeline_opt { - pipeline.run(&env, &mut targets); - let processor = pipeline.last_processor(); - if !processor.is_single_run() { - text += &print_targets_for_test( - &env, - &format!("after pipeline `{}`", dir_name), - &targets, - ); - } - text += &ProcessorResultDisplay { - env: &env, - targets: &targets, - processor, - } - .to_string(); - } - // add Warning and Error diagnostics to output - let mut error_writer = Buffer::no_color(); - if env.has_errors() || env.has_warnings() { - env.report_diag(&mut error_writer, Severity::Warning); - text += "============ Diagnostics ================\n"; - text += &String::from_utf8_lossy(&error_writer.into_inner()); - } - text - }; - let baseline_path = path.with_extension(EXP_EXT); - verify_or_update_baseline(baseline_path.as_path(), &out)?; + let dir_name = path + .parent() + .and_then(|p| p.file_name()) + .and_then(|p| p.to_str()) + .ok_or_else(|| anyhow!("bad file name"))?; + let pipeline_opt = get_tested_transformation_pipeline(dir_name)?; + move_stackless_bytecode_test_utils::test_runner(path, pipeline_opt)?; Ok(()) } diff --git a/third_party/move/move-prover/bytecode/tests/verification_analysis/inv_relevance.exp b/third_party/move/move-prover/bytecode-pipeline/tests/verification_analysis/inv_relevance.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/verification_analysis/inv_relevance.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/verification_analysis/inv_relevance.exp diff --git a/third_party/move/move-prover/bytecode/tests/verification_analysis/inv_relevance.move b/third_party/move/move-prover/bytecode-pipeline/tests/verification_analysis/inv_relevance.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/verification_analysis/inv_relevance.move rename to third_party/move/move-prover/bytecode-pipeline/tests/verification_analysis/inv_relevance.move diff --git a/third_party/move/move-prover/bytecode/tests/verification_analysis/inv_suspension.exp b/third_party/move/move-prover/bytecode-pipeline/tests/verification_analysis/inv_suspension.exp similarity index 100% rename from third_party/move/move-prover/bytecode/tests/verification_analysis/inv_suspension.exp rename to third_party/move/move-prover/bytecode-pipeline/tests/verification_analysis/inv_suspension.exp diff --git 
a/third_party/move/move-prover/bytecode/tests/verification_analysis/inv_suspension.move b/third_party/move/move-prover/bytecode-pipeline/tests/verification_analysis/inv_suspension.move similarity index 100% rename from third_party/move/move-prover/bytecode/tests/verification_analysis/inv_suspension.move rename to third_party/move/move-prover/bytecode-pipeline/tests/verification_analysis/inv_suspension.move diff --git a/third_party/move/move-prover/lab/Cargo.toml b/third_party/move/move-prover/lab/Cargo.toml index 6c08599f6e04e..f761474b210d8 100644 --- a/third_party/move/move-prover/lab/Cargo.toml +++ b/third_party/move/move-prover/lab/Cargo.toml @@ -12,7 +12,8 @@ move-compiler = { path = "../../move-compiler" } move-model = { path = "../../move-model" } move-prover = { path = ".." } move-prover-boogie-backend = { path = "../boogie-backend" } -move-stackless-bytecode = { path = "../bytecode" } +move-prover-bytecode-pipeline = { path = "../bytecode-pipeline" } +move-stackless-bytecode = { path = "../../move-model/bytecode" } # FB external dependencies z3tracer = "0.8.0" @@ -20,7 +21,7 @@ z3tracer = "0.8.0" # external dependencies anyhow = "1.0.52" chrono = "0.4.19" -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } codespan-reporting = "0.11.1" hex = "0.4.3" itertools = "0.10.0" diff --git a/third_party/move/move-prover/lab/src/benchmark.rs b/third_party/move/move-prover/lab/src/benchmark.rs index 2e34e1f04722e..e1143977b042b 100644 --- a/third_party/move/move-prover/lab/src/benchmark.rs +++ b/third_party/move/move-prover/lab/src/benchmark.rs @@ -14,7 +14,7 @@ use clap::{ use codespan_reporting::term::termcolor::{ColorChoice, StandardStream}; use itertools::Itertools; use log::LevelFilter; -use move_compiler::shared::PackagePaths; +use move_compiler::shared::{known_attributes::KnownAttribute, PackagePaths}; use move_model::{ model::{FunctionEnv, GlobalEnv, ModuleEnv, VerificationScope}, parse_addresses_from_options, run_model_builder_with_options, @@ -23,7 +23,7 @@ use move_prover::{ check_errors, cli::Options, create_and_process_bytecode, create_init_num_operation_state, generate_boogie, verify_boogie, }; -use move_stackless_bytecode::options::ProverOptions; +use move_prover_bytecode_pipeline::options::ProverOptions; use std::{ fmt::Debug, fs::File, @@ -150,6 +150,8 @@ fn run_benchmark( }; let addrs = parse_addresses_from_options(options.move_named_address_values.clone())?; options.move_deps.append(&mut dep_dirs.to_vec()); + let skip_attribute_checks = true; + let known_attributes = KnownAttribute::get_all_attribute_names().clone(); let env = run_model_builder_with_options( vec![PackagePaths { name: None, @@ -162,6 +164,8 @@ fn run_benchmark( named_address_map: addrs, }], options.model_builder.clone(), + skip_attribute_checks, + &known_attributes, )?; let mut error_writer = StandardStream::stderr(ColorChoice::Auto); diff --git a/third_party/move/move-prover/move-docgen/tests/testsuite.rs b/third_party/move/move-prover/move-docgen/tests/testsuite.rs index baf86d6d0d8a4..5376df7b22766 100644 --- a/third_party/move/move-prover/move-docgen/tests/testsuite.rs +++ b/third_party/move/move-prover/move-docgen/tests/testsuite.rs @@ -20,6 +20,7 @@ const FLAGS: &[&str] = &[ "--dependency=../../move-stdlib/sources", "--named-addresses=std=0x1", "--docgen", + "--skip-attribute-checks", ]; fn test_runner(path: &Path) -> datatest_stable::Result<()> { @@ -74,6 +75,7 @@ fn test_runner(path: &Path) -> datatest_stable::Result<()> { fn test_docgen(path: 
&Path, mut options: Options, suffix: &str) -> anyhow::Result<()> { let mut temp_path = PathBuf::from(TempDir::new()?.path()); options.docgen.output_directory = temp_path.to_string_lossy().to_string(); + options.skip_attribute_checks = true; let base_name = format!( "{}.md", path.file_stem() diff --git a/third_party/move/move-prover/src/cli.rs b/third_party/move/move-prover/src/cli.rs index febfdbd73c87b..6f1012860e194 100644 --- a/third_party/move/move-prover/src/cli.rs +++ b/third_party/move/move-prover/src/cli.rs @@ -11,12 +11,12 @@ use clap::{builder::PossibleValuesParser, Arg, ArgAction, ArgAction::SetTrue, Co use codespan_reporting::diagnostic::Severity; use log::LevelFilter; use move_abigen::AbigenOptions; -use move_compiler::shared::NumericalAddress; +use move_compiler::{command_line::SKIP_ATTRIBUTE_CHECKS, shared::NumericalAddress}; use move_docgen::DocgenOptions; use move_errmapgen::ErrmapOptions; use move_model::{model::VerificationScope, options::ModelBuilderOptions}; use move_prover_boogie_backend::options::{BoogieOptions, VectorTheory}; -use move_stackless_bytecode::options::{AutoTraceLevel, ProverOptions}; +use move_prover_bytecode_pipeline::options::{AutoTraceLevel, ProverOptions}; use once_cell::sync::Lazy; use serde::{Deserialize, Serialize}; use simplelog::{ @@ -63,6 +63,8 @@ pub struct Options { pub move_named_address_values: Vec, /// Whether to run experimental pipeline pub experimental_pipeline: bool, + /// Whether to skip checking for unknown attributes + pub skip_attribute_checks: bool, /// BEGIN OF STRUCTURED OPTIONS. DO NOT ADD VALUE FIELDS AFTER THIS /// Options for the model builder. @@ -100,6 +102,7 @@ impl Default for Options { abigen: AbigenOptions::default(), errmapgen: ErrmapOptions::default(), experimental_pipeline: false, + skip_attribute_checks: false, } } } @@ -468,6 +471,12 @@ impl Options { .action(SetTrue) .help("whether to run experimental pipeline") ) + .arg( + Arg::new(SKIP_ATTRIBUTE_CHECKS) + .long(SKIP_ATTRIBUTE_CHECKS) + .action(SetTrue) + .help("whether to not complain about unknown attributes in Move code") + ) .arg( Arg::new("weak-edges") .long("weak-edges") @@ -703,6 +712,9 @@ impl Options { if matches.get_flag("experimental-pipeline") { options.experimental_pipeline = true; } + if matches.contains_id(SKIP_ATTRIBUTE_CHECKS) { + options.skip_attribute_checks = true; + } if matches.contains_id("timeout") { options.backend.vc_timeout = *matches.try_get_one("timeout")?.unwrap(); } diff --git a/third_party/move/move-prover/src/lib.rs b/third_party/move/move-prover/src/lib.rs index 801fc449e007f..6b01de0e15990 100644 --- a/third_party/move/move-prover/src/lib.rs +++ b/third_party/move/move-prover/src/lib.rs @@ -10,7 +10,7 @@ use codespan_reporting::term::termcolor::{ColorChoice, StandardStream, WriteColo #[allow(unused_imports)] use log::{debug, info, warn}; use move_abigen::Abigen; -use move_compiler::shared::PackagePaths; +use move_compiler::shared::{known_attributes::KnownAttribute, PackagePaths}; use move_docgen::Docgen; use move_errmapgen::ErrmapGen; use move_model::{ @@ -20,10 +20,10 @@ use move_model::{ use move_prover_boogie_backend::{ add_prelude, boogie_wrapper::BoogieWrapper, bytecode_translator::BoogieTranslator, }; -use move_stackless_bytecode::{ - function_target_pipeline::FunctionTargetsHolder, number_operation::GlobalNumberOperationState, - pipeline_factory, +use move_prover_bytecode_pipeline::{ + number_operation::GlobalNumberOperationState, pipeline_factory, }; +use 
move_stackless_bytecode::function_target_pipeline::FunctionTargetsHolder; use std::{ fs, path::{Path, PathBuf}, @@ -59,6 +59,8 @@ pub fn run_move_prover( named_address_map: addrs, }], options.model_builder.clone(), + options.skip_attribute_checks, + KnownAttribute::get_all_attribute_names(), )?; run_move_prover_with_model(&env, error_writer, options, Some(now)) } diff --git a/third_party/move/move-prover/tools/check_pr.sh b/third_party/move/move-prover/tools/check_pr.sh index 6d13f0db1468c..46df0336c6b93 100755 --- a/third_party/move/move-prover/tools/check_pr.sh +++ b/third_party/move/move-prover/tools/check_pr.sh @@ -78,7 +78,7 @@ fi CRATES="\ - $BASE/language/move-prover/bytecode \ + $BASE/language/move-model/bytecode \ $BASE/language/move-prover/boogie-backend \ $BASE/language/move-prover\ $BASE/language/move-model\ diff --git a/third_party/move/move-stdlib/nursery/tests/event_tests.move b/third_party/move/move-stdlib/nursery/tests/event_tests.move deleted file mode 100644 index 064ebb7877eeb..0000000000000 --- a/third_party/move/move-stdlib/nursery/tests/event_tests.move +++ /dev/null @@ -1,107 +0,0 @@ -#[test_only] -module std::event_tests { - ////////////////// - // Storage tests - ////////////////// - - use std::bcs; - use std::event::{Self, EventHandle, emit_event, new_event_handle}; - use std::signer::address_of; - use std::vector; - - struct Box has copy, drop, store { x: T } - struct Box3 has copy, drop, store { x: Box> } - struct Box7 has copy, drop, store { x: Box3> } - struct Box15 has copy, drop, store { x: Box7> } - struct Box31 has copy, drop, store { x: Box15> } - struct Box63 has copy, drop, store { x: Box31> } - struct Box127 has copy, drop, store { x: Box63> } - - struct MyEvent has key { - e: EventHandle - } - - fun box3(x: T): Box3 { - Box3 { x: Box { x: Box { x } } } - } - - fun box7(x: T): Box7 { - Box7 { x: box3(box3(x)) } - } - - fun box15(x: T): Box15 { - Box15 { x: box7(box7(x)) } - } - - fun box31(x: T): Box31 { - Box31 { x: box15(box15(x)) } - } - - fun box63(x: T): Box63 { - Box63 { x: box31(box31(x)) } - } - - fun box127(x: T): Box127 { - Box127 { x: box63(box63(x)) } - } - - fun maybe_init_event(s: &signer) { - if (exists>(address_of(s))) return; - - move_to(s, MyEvent { e: new_event_handle(s)}) - } - - public fun event_128(s: &signer) acquires MyEvent { - maybe_init_event>(s); - - emit_event(&mut borrow_global_mut>>(address_of(s)).e, box127(true)) - } - - public fun event_129(s: &signer) acquires MyEvent { - maybe_init_event>>(s); - - // will abort - emit_event( - &mut borrow_global_mut>>>(address_of(s)).e, - Box { x: box127(true) } - ) - } - - #[test(s = @0x42)] - fun test_event_128(s: signer) acquires MyEvent { - event_128(&s); - } - - #[test(s = @0x42)] - #[expected_failure] // VM_MAX_VALUE_DEPTH_REACHED - fun test_event_129(s: signer) acquires MyEvent { - event_129(&s); - } - - // More detailed version of the above--test BCS compatibility between the old event - // format and the new wrapper hack. 
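For context on the new model-builder parameters threaded through run_move_prover and the benchmark lab above, a minimal sketch of a post-patch call to run_model_builder_with_options follows; it assumes the signature shown in these hunks, and the build_model helper name is illustrative, not part of the patch.

    use move_compiler::shared::{known_attributes::KnownAttribute, PackagePaths};
    use move_model::{model::GlobalEnv, options::ModelBuilderOptions, run_model_builder_with_options};

    // Minimal sketch, assuming the post-patch signature of `run_model_builder_with_options`
    // used in this diff; `build_model` is an illustrative helper, not part of the patch.
    fn build_model(sources: Vec<String>) -> anyhow::Result<GlobalEnv> {
        run_model_builder_with_options(
            vec![PackagePaths {
                name: None,
                paths: sources,
                named_address_map: move_stdlib::move_stdlib_named_addresses(),
            }],
            vec![], // no dependency-only package paths in this sketch
            ModelBuilderOptions::default(),
            false, // skip_attribute_checks: keep attribute checking enabled
            KnownAttribute::get_all_attribute_names(),
        )
    }
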
- // this test lives here because it is important for the correctness of GUIDWrapper; - // see the comments there for more details - #[test(s = @0x42)] - fun test_guid_wrapper_backward_compatibility(s: signer) { - let sender_bytes = bcs::to_bytes(&address_of(&s)); - let count_bytes = bcs::to_bytes(&0u64); - vector::append(&mut count_bytes, sender_bytes); - let old_guid = count_bytes; - // should be 32 bytes of address + 8 byte integer - assert!(vector::length(&old_guid) == 40, 0); - let old_guid_bytes = bcs::to_bytes(&old_guid); - // old_guid_bytes should be length prefix (40), followed by content of vector - // the length prefix is a ULEB encoded 32-bit value, so for length prefix 24, - // this should only occupy 1 byte: https://github.com/diem/bcs#uleb128-encoded-integers - // hence, 24 byte contents + 1 byte length prefix = 25 bytes - assert!(vector::length(&old_guid_bytes) == 41, 1); - - // now, build a new GUID and check byte-for-byte compatibility - let guid_wrapper = event::create_guid_wrapper_for_test(&s); - let guid_wrapper_bytes = bcs::to_bytes(&guid_wrapper); - - // check that the guid grapper bytes are identical to the old guid bytes - assert!(vector::length(&guid_wrapper_bytes) == vector::length(&old_guid_bytes), 2); - } -} diff --git a/third_party/move/move-stdlib/src/natives/event.rs b/third_party/move/move-stdlib/src/natives/event.rs index 1301574bbdca0..5ab61c203f805 100644 --- a/third_party/move/move-stdlib/src/natives/event.rs +++ b/third_party/move/move-stdlib/src/natives/event.rs @@ -7,7 +7,7 @@ use move_binary_format::errors::PartialVMResult; use move_core_types::gas_algebra::InternalGasPerAbstractMemoryUnit; use move_vm_runtime::native_functions::{NativeContext, NativeFunction}; use move_vm_types::{ - loaded_data::runtime_types::Type, natives::function::NativeResult, pop_arg, values::Value, + loaded_data::runtime_types::Type, natives::function::NativeResult, values::Value, views::ValueView, }; use smallvec::smallvec; @@ -27,24 +27,16 @@ pub struct WriteToEventStoreGasParameters { #[inline] fn native_write_to_event_store( gas_params: &WriteToEventStoreGasParameters, - context: &mut NativeContext, - mut ty_args: Vec, + _context: &mut NativeContext, + ty_args: Vec, mut arguments: VecDeque, ) -> PartialVMResult { debug_assert!(ty_args.len() == 1); debug_assert!(arguments.len() == 3); - let ty = ty_args.pop().unwrap(); let msg = arguments.pop_back().unwrap(); - let seq_num = pop_arg!(arguments, u64); - let guid = pop_arg!(arguments, Vec); - let cost = gas_params.unit_cost * std::cmp::max(msg.legacy_abstract_memory_size(), 1.into()); - if !context.save_event(guid, seq_num, ty, msg)? 
{ - return Ok(NativeResult::err(cost, 0)); - } - Ok(NativeResult::ok(cost, smallvec![])) } diff --git a/third_party/move/move-vm/integration-tests/src/compiler.rs b/third_party/move/move-vm/integration-tests/src/compiler.rs index 5441bfa04fdd5..53fbd2cb4d4d6 100644 --- a/third_party/move/move-vm/integration-tests/src/compiler.rs +++ b/third_party/move/move-vm/integration-tests/src/compiler.rs @@ -4,7 +4,11 @@ use anyhow::{bail, Result}; use move_binary_format::file_format::{CompiledModule, CompiledScript}; -use move_compiler::{compiled_unit::AnnotatedCompiledUnit, Compiler as MoveCompiler}; +use move_compiler::{ + compiled_unit::AnnotatedCompiledUnit, + shared::{known_attributes::KnownAttribute, Flags}, + Compiler as MoveCompiler, +}; use std::{fs::File, io::Write, path::Path}; use tempfile::tempdir; @@ -21,6 +25,8 @@ pub fn compile_units(s: &str) -> Result> { vec![file_path.to_str().unwrap().to_string()], vec![], move_stdlib::move_stdlib_named_addresses(), + Flags::empty().set_skip_attribute_checks(false), + KnownAttribute::get_all_attribute_names(), ) .build_and_report()?; @@ -42,6 +48,8 @@ pub fn compile_units_with_stdlib(s: &str) -> Result> vec![file_path.to_str().unwrap().to_string()], move_stdlib::move_stdlib_files(), move_stdlib::move_stdlib_named_addresses(), + Flags::empty().set_skip_attribute_checks(false), + KnownAttribute::get_all_attribute_names(), ) .build_and_report()?; @@ -64,6 +72,8 @@ pub fn compile_modules_in_file(path: &Path) -> Result> { vec![path.to_str().unwrap().to_string()], vec![], std::collections::BTreeMap::::new(), + Flags::empty().set_skip_attribute_checks(false), + KnownAttribute::get_all_attribute_names(), ) .build_and_report()?; diff --git a/third_party/move/move-vm/integration-tests/src/tests/bad_storage_tests.rs b/third_party/move/move-vm/integration-tests/src/tests/bad_storage_tests.rs index b354940a07d32..de202e45452ec 100644 --- a/third_party/move/move-vm/integration-tests/src/tests/bad_storage_tests.rs +++ b/third_party/move/move-vm/integration-tests/src/tests/bad_storage_tests.rs @@ -103,7 +103,7 @@ fn test_malformed_resource() { ) .map(|_| ()) .unwrap(); - let (changeset, _) = sess.finish().unwrap(); + let changeset = sess.finish().unwrap(); storage.apply(changeset).unwrap(); // Execute the second script and make sure it succeeds. 
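The same attribute-check plumbing applies to direct Compiler::from_files call sites such as the compile_units helpers above: callers now pass a Flags value and the known-attribute set explicitly. A minimal sketch under those assumptions follows; the build helper name is illustrative only.

    use move_compiler::{
        shared::{known_attributes::KnownAttribute, Flags},
        Compiler as MoveCompiler,
    };

    // Minimal sketch mirroring the updated `from_files` call sites in this diff;
    // `build` is an illustrative helper, not part of the patch.
    fn build(sources: Vec<String>) -> anyhow::Result<()> {
        let (_files, _units) = MoveCompiler::from_files(
            sources,
            vec![], // dependency sources, empty in this sketch
            move_stdlib::move_stdlib_named_addresses(),
            Flags::empty().set_skip_attribute_checks(false), // keep attribute checking on
            KnownAttribute::get_all_attribute_names(),
        )
        .build_and_report()?;
        Ok(())
    }
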
This script simply checks diff --git a/third_party/move/move-vm/integration-tests/src/tests/exec_func_effects_tests.rs b/third_party/move/move-vm/integration-tests/src/tests/exec_func_effects_tests.rs index cfe8db77ecb17..0ddd0f7fd7916 100644 --- a/third_party/move/move-vm/integration-tests/src/tests/exec_func_effects_tests.rs +++ b/third_party/move/move-vm/integration-tests/src/tests/exec_func_effects_tests.rs @@ -6,7 +6,7 @@ use crate::compiler::{as_module, compile_units}; use move_binary_format::errors::VMResult; use move_core_types::{ account_address::AccountAddress, - effects::{ChangeSet, Event}, + effects::ChangeSet, identifier::Identifier, language_storage::ModuleId, u256::U256, @@ -56,7 +56,7 @@ fn fail_arg_deserialize() { fn mutref_output_success() { let mod_code = setup_module(); let result = run(&mod_code, USE_MUTREF_LABEL, MoveValue::U64(1)); - let (_, _, ret_values) = result.unwrap(); + let (_, ret_values) = result.unwrap(); assert_eq!(1, ret_values.mutable_reference_outputs.len()); let parsed = parse_u64_arg(&ret_values.mutable_reference_outputs.first().unwrap().1); assert_eq!(EXPECT_MUTREF_OUT_VALUE, parsed); @@ -85,7 +85,7 @@ fn run( module: &ModuleCode, fun_name: &str, arg_val0: MoveValue, -) -> VMResult<(ChangeSet, Vec, SerializedReturnValues)> { +) -> VMResult<(ChangeSet, SerializedReturnValues)> { let module_id = &module.0; let modules = vec![module.clone()]; let (vm, storage) = setup_vm(&modules); @@ -102,8 +102,8 @@ fn run( &mut UnmeteredGasMeter, ) .and_then(|ret_values| { - let (change_set, events) = session.finish()?; - Ok((change_set, events, ret_values)) + let change_set = session.finish()?; + Ok((change_set, ret_values)) }) } diff --git a/third_party/move/move-vm/integration-tests/src/tests/loader_tests.rs b/third_party/move/move-vm/integration-tests/src/tests/loader_tests.rs index 8ccd0fa2ef069..b3cdf389e1142 100644 --- a/third_party/move/move-vm/integration-tests/src/tests/loader_tests.rs +++ b/third_party/move/move-vm/integration-tests/src/tests/loader_tests.rs @@ -93,7 +93,7 @@ impl Adapter { .publish_module(binary, WORKING_ACCOUNT, &mut UnmeteredGasMeter) .unwrap_or_else(|_| panic!("failure publishing module: {:#?}", module)); } - let (changeset, _) = session.finish().expect("failure getting write set"); + let changeset = session.finish().expect("failure getting write set"); self.store .apply(changeset) .expect("failure applying write set"); diff --git a/third_party/move/move-vm/integration-tests/src/tests/mutated_accounts_tests.rs b/third_party/move/move-vm/integration-tests/src/tests/mutated_accounts_tests.rs index ad9d9188d668a..bee3a0bbaefb8 100644 --- a/third_party/move/move-vm/integration-tests/src/tests/mutated_accounts_tests.rs +++ b/third_party/move/move-vm/integration-tests/src/tests/mutated_accounts_tests.rs @@ -87,7 +87,7 @@ fn mutated_accounts() { .unwrap(); assert_eq!(sess.num_mutated_accounts(&TEST_ADDR), 2); - let (changes, _) = sess.finish().unwrap(); + let changes = sess.finish().unwrap(); storage.apply(changes).unwrap(); let mut sess = vm.new_session(&storage); diff --git a/third_party/move/move-vm/runtime/src/data_cache.rs b/third_party/move/move-vm/runtime/src/data_cache.rs index 14d0748c988cd..481f4c1dc17a3 100644 --- a/third_party/move/move-vm/runtime/src/data_cache.rs +++ b/third_party/move/move-vm/runtime/src/data_cache.rs @@ -6,7 +6,7 @@ use crate::loader::Loader; use move_binary_format::errors::*; use move_core_types::{ account_address::AccountAddress, - effects::{AccountChanges, ChangeSet, Changes, Event, Op}, + 
effects::{AccountChanges, ChangeSet, Changes, Op}, gas_algebra::NumBytes, identifier::Identifier, language_storage::{ModuleId, TypeTag}, @@ -51,7 +51,6 @@ impl AccountDataCache { pub(crate) struct TransactionDataCache<'r> { remote: &'r dyn MoveResolver, account_map: BTreeMap, - event_data: Vec<(Vec, u64, Type, MoveTypeLayout, Value)>, } impl<'r> TransactionDataCache<'r> { @@ -61,7 +60,6 @@ impl<'r> TransactionDataCache<'r> { TransactionDataCache { remote, account_map: BTreeMap::new(), - event_data: vec![], } } @@ -69,7 +67,7 @@ impl<'r> TransactionDataCache<'r> { /// published modules. /// /// Gives all proper guarantees on lifetime of global data as well. - pub(crate) fn into_effects(self, loader: &Loader) -> PartialVMResult<(ChangeSet, Vec)> { + pub(crate) fn into_effects(self, loader: &Loader) -> PartialVMResult { let resource_converter = |value: Value, layout: MoveTypeLayout| -> PartialVMResult> { value.simple_serialize(&layout).ok_or_else(|| { @@ -86,7 +84,7 @@ impl<'r> TransactionDataCache<'r> { self, resource_converter: &dyn Fn(Value, MoveTypeLayout) -> PartialVMResult, loader: &Loader, - ) -> PartialVMResult<(Changes, Resource>, Vec)> { + ) -> PartialVMResult, Resource>> { let mut change_set = Changes::new(); for (addr, account_data_cache) in self.account_map.into_iter() { let mut modules = BTreeMap::new(); @@ -122,16 +120,7 @@ impl<'r> TransactionDataCache<'r> { } } - let mut events = vec![]; - for (guid, seq_num, ty, ty_layout, val) in self.event_data { - let ty_tag = loader.type_to_type_tag(&ty)?; - let blob = val - .simple_serialize(&ty_layout) - .ok_or_else(|| PartialVMError::new(StatusCode::INTERNAL_TYPE_ERROR))?; - events.push((guid, seq_num, ty_tag, blob)) - } - - Ok((change_set, events)) + Ok(change_set) } pub(crate) fn num_mutated_accounts(&self, sender: &AccountAddress) -> u64 { @@ -291,27 +280,4 @@ impl<'r> TransactionDataCache<'r> { })? 
.is_some()) } - - #[allow(clippy::unit_arg)] - pub(crate) fn emit_event( - &mut self, - loader: &Loader, - guid: Vec, - seq_num: u64, - ty: Type, - val: Value, - ) -> PartialVMResult<()> { - let ty_layout = loader.type_to_type_layout(&ty)?; - Ok(self.event_data.push((guid, seq_num, ty, ty_layout, val))) - } - - pub(crate) fn emitted_events(&self, guid: Vec, ty: Type) -> PartialVMResult> { - let mut events = vec![]; - for event in self.event_data.iter() { - if event.0 == guid && event.2 == ty { - events.push(event.4.copy_value()?); - } - } - Ok(events) - } } diff --git a/third_party/move/move-vm/runtime/src/native_functions.rs b/third_party/move/move-vm/runtime/src/native_functions.rs index 22323c5eae9ae..f5b8ccddd4b45 100644 --- a/third_party/move/move-vm/runtime/src/native_functions.rs +++ b/third_party/move/move-vm/runtime/src/native_functions.rs @@ -15,7 +15,7 @@ use move_core_types::{ identifier::Identifier, language_storage::TypeTag, value::MoveTypeLayout, - vm_status::{StatusCode, StatusType}, + vm_status::StatusCode, }; use move_vm_types::{ loaded_data::runtime_types::Type, natives::function::NativeResult, values::Value, @@ -139,27 +139,6 @@ impl<'a, 'b, 'c> NativeContext<'a, 'b, 'c> { Ok((exists, num_bytes)) } - pub fn save_event( - &mut self, - guid: Vec, - seq_num: u64, - ty: Type, - val: Value, - ) -> PartialVMResult { - match self - .data_store - .emit_event(self.resolver.loader(), guid, seq_num, ty, val) - { - Ok(()) => Ok(true), - Err(e) if e.major_status().status_type() == StatusType::InvariantViolation => Err(e), - Err(_) => Ok(false), - } - } - - pub fn emitted_events(&self, guid: Vec, ty: Type) -> PartialVMResult> { - self.data_store.emitted_events(guid, ty) - } - pub fn type_to_type_tag(&self, ty: &Type) -> PartialVMResult { self.resolver.loader().type_to_type_tag(ty) } diff --git a/third_party/move/move-vm/runtime/src/session.rs b/third_party/move/move-vm/runtime/src/session.rs index a0582d785d78e..8fbd61eb03780 100644 --- a/third_party/move/move-vm/runtime/src/session.rs +++ b/third_party/move/move-vm/runtime/src/session.rs @@ -3,7 +3,7 @@ // SPDX-License-Identifier: Apache-2.0 use crate::{ - data_cache::TransactionDataCache, loader::LoadedFunction, move_vm::MoveVM, + config::VMConfig, data_cache::TransactionDataCache, loader::LoadedFunction, move_vm::MoveVM, native_extensions::NativeContextExtensions, }; use move_binary_format::{ @@ -13,7 +13,7 @@ use move_binary_format::{ }; use move_core_types::{ account_address::AccountAddress, - effects::{ChangeSet, Changes, Event}, + effects::{ChangeSet, Changes}, gas_algebra::NumBytes, identifier::IdentStr, language_storage::{ModuleId, TypeTag}, @@ -256,7 +256,7 @@ impl<'r, 'l> Session<'r, 'l> { /// This function should always succeed with no user errors returned, barring invariant violations. /// /// This MUST NOT be called if there is a previous invocation that failed with an invariant violation. 
- pub fn finish(self) -> VMResult<(ChangeSet, Vec)> { + pub fn finish(self) -> VMResult { self.data_cache .into_effects(self.move_vm.runtime.loader()) .map_err(|e| e.finish(Location::Undefined)) @@ -265,44 +265,38 @@ impl<'r, 'l> Session<'r, 'l> { pub fn finish_with_custom_effects( self, resource_converter: &dyn Fn(Value, MoveTypeLayout) -> PartialVMResult, - ) -> VMResult<(Changes, Resource>, Vec)> { + ) -> VMResult, Resource>> { self.data_cache .into_custom_effects(resource_converter, self.move_vm.runtime.loader()) .map_err(|e| e.finish(Location::Undefined)) } /// Same like `finish`, but also extracts the native context extensions from the session. - pub fn finish_with_extensions( - self, - ) -> VMResult<(ChangeSet, Vec, NativeContextExtensions<'r>)> { + pub fn finish_with_extensions(self) -> VMResult<(ChangeSet, NativeContextExtensions<'r>)> { let Session { data_cache, native_extensions, .. } = self; - let (change_set, events) = data_cache + let change_set = data_cache .into_effects(self.move_vm.runtime.loader()) .map_err(|e| e.finish(Location::Undefined))?; - Ok((change_set, events, native_extensions)) + Ok((change_set, native_extensions)) } pub fn finish_with_extensions_with_custom_effects( self, resource_converter: &dyn Fn(Value, MoveTypeLayout) -> PartialVMResult, - ) -> VMResult<( - Changes, Resource>, - Vec, - NativeContextExtensions<'r>, - )> { + ) -> VMResult<(Changes, Resource>, NativeContextExtensions<'r>)> { let Session { data_cache, native_extensions, .. } = self; - let (change_set, events) = data_cache + let change_set = data_cache .into_custom_effects(resource_converter, self.move_vm.runtime.loader()) .map_err(|e| e.finish(Location::Undefined))?; - Ok((change_set, events, native_extensions)) + Ok((change_set, native_extensions)) } /// Try to load a resource from remote storage and create a corresponding GlobalValue @@ -428,6 +422,10 @@ impl<'r, 'l> Session<'r, 'l> { pub fn get_move_vm(&self) -> &'l MoveVM { self.move_vm } + + pub fn get_vm_config(&self) -> &'l VMConfig { + self.move_vm.runtime.loader().vm_config() + } } pub struct LoadedFunctionInstantiation { diff --git a/third_party/move/move-vm/types/src/values/values_impl.rs b/third_party/move/move-vm/types/src/values/values_impl.rs index c0d0b4fbd8e17..f05175296f716 100644 --- a/third_party/move/move-vm/types/src/values/values_impl.rs +++ b/third_party/move/move-vm/types/src/values/values_impl.rs @@ -3020,7 +3020,6 @@ impl<'t, 'l, 'v> serde::Serialize for AnnotatedValue<'t, 'l, 'v, MoveTypeLayout, }) .serialize(serializer) }, - (L::Vector(layout), ValueImpl::Container(c)) => { let layout = layout.as_ref(); match (layout, c) { diff --git a/third_party/move/testing-infra/test-generation/Cargo.toml b/third_party/move/testing-infra/test-generation/Cargo.toml index 4398bdfba7edf..de58688a1311c 100644 --- a/third_party/move/testing-infra/test-generation/Cargo.toml +++ b/third_party/move/testing-infra/test-generation/Cargo.toml @@ -10,7 +10,7 @@ publish = false edition = "2021" [dependencies] -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } crossbeam-channel = "0.5.0" getrandom = "0.2.2" hex = "0.4.3" diff --git a/third_party/move/testing-infra/test-generation/src/lib.rs b/third_party/move/testing-infra/test-generation/src/lib.rs index bd86243740f57..b909aba8b4c33 100644 --- a/third_party/move/testing-infra/test-generation/src/lib.rs +++ b/third_party/move/testing-infra/test-generation/src/lib.rs @@ -25,7 +25,11 @@ use move_binary_format::{ }, }; use 
move_bytecode_verifier::verify_module; -use move_compiler::{compiled_unit::AnnotatedCompiledUnit, Compiler}; +use move_compiler::{ + compiled_unit::AnnotatedCompiledUnit, + shared::{known_attributes::KnownAttribute, Flags}, + Compiler, +}; use move_core_types::{ account_address::AccountAddress, effects::{ChangeSet, Op}, @@ -59,6 +63,8 @@ static STORAGE_WITH_MOVE_STDLIB: Lazy = Lazy::new(|| { move_stdlib::move_stdlib_files(), vec![], move_stdlib::move_stdlib_named_addresses(), + Flags::empty().set_skip_attribute_checks(true), // Not much point in checking it here. + KnownAttribute::get_all_attribute_names(), ) .build_and_report() .unwrap(); diff --git a/third_party/move/testing-infra/transactional-test-runner/Cargo.toml b/third_party/move/testing-infra/transactional-test-runner/Cargo.toml index b1b11cf7cc520..86dc3fc6377d8 100644 --- a/third_party/move/testing-infra/transactional-test-runner/Cargo.toml +++ b/third_party/move/testing-infra/transactional-test-runner/Cargo.toml @@ -11,7 +11,7 @@ edition = "2021" [dependencies] anyhow = "1.0.52" -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } colored = "2.0.0" move-binary-format = { path = "../../move-binary-format" } move-bytecode-source-map = { path = "../../move-ir-compiler/move-bytecode-source-map" } diff --git a/third_party/move/testing-infra/transactional-test-runner/src/framework.rs b/third_party/move/testing-infra/transactional-test-runner/src/framework.rs index 6001828b21949..db8017b734ce0 100644 --- a/third_party/move/testing-infra/transactional-test-runner/src/framework.rs +++ b/third_party/move/testing-infra/transactional-test-runner/src/framework.rs @@ -44,7 +44,9 @@ use move_disassembler::disassembler::{Disassembler, DisassemblerOptions}; use move_ir_types::location::Spanned; use move_symbol_pool::Symbol; use move_vm_runtime::session::SerializedReturnValues; +use once_cell::sync::Lazy; use rayon::iter::Either; +use regex::Regex; use std::{ collections::{BTreeMap, BTreeSet, VecDeque}, fmt::{Debug, Write as FmtWrite}, @@ -123,6 +125,7 @@ pub trait MoveTestAdapter<'a>: Sized { fn compiled_state(&mut self) -> &mut CompiledState<'a>; fn default_syntax(&self) -> SyntaxChoice; + fn known_attributes(&self) -> &BTreeSet; fn run_config(&self) -> TestRunConfig { TestRunConfig::CompilerV1 } @@ -215,18 +218,21 @@ pub trait MoveTestAdapter<'a>: Sized { Either::Right(compile_ir_module(state.dep_modules(), data_path)?) 
}, }; - let source_mapping = SourceMapping::new_from_view( - match &compiled { - Either::Left(script) => BinaryIndexedView::Script(script), - Either::Right(module) => BinaryIndexedView::Module(module), - }, - Spanned::unsafe_no_loc(()).loc, - ) - .expect("Unable to build dummy source mapping"); - let disassembler = Disassembler::new(source_mapping, DisassemblerOptions::new()); + let view = match &compiled { + Either::Left(script) => BinaryIndexedView::Script(script), + Either::Right(module) => BinaryIndexedView::Module(module), + }; + let disassembler = disassembler_for_view(view); Ok(Some(disassembler.disassemble()?)) }, - TaskCommand::Publish(PublishCommand { gas_budget, syntax }, extra_args) => { + TaskCommand::Publish( + PublishCommand { + gas_budget, + syntax, + print_bytecode, + }, + extra_args, + ) => { let syntax = syntax.unwrap_or_else(|| self.default_syntax()); let data = match data { Some(f) => f, @@ -242,6 +248,7 @@ pub trait MoveTestAdapter<'a>: Sized { // Run the V2 compiler if requested SyntaxChoice::Source if run_config == TestRunConfig::CompilerV2 => { let ((module, _), warning_opt) = compile_source_unit_v2( + state.pre_compiled_deps, state.named_address_mapping.clone(), &state.source_files().cloned().collect::>(), data_path.to_owned(), @@ -259,6 +266,7 @@ pub trait MoveTestAdapter<'a>: Sized { state.named_address_mapping.clone(), &state.source_files().cloned().collect::>(), data_path.to_owned(), + self.known_attributes(), )?; let (named_addr_opt, module) = match unit { AnnotatedCompiledUnit::Module(annot_module) => { @@ -281,12 +289,24 @@ pub trait MoveTestAdapter<'a>: Sized { (None, module, None) }, }; - let (output, module) = self.publish_module( + let printed = if print_bytecode { + let disassembler = disassembler_for_view(BinaryIndexedView::Module(&module)); + Some(format!( + "\n== BEGIN Bytecode ==\n{}\n== END Bytecode ==", + disassembler.disassemble()? + )) + } else { + None + }; + let (mut output, module) = self.publish_module( module, named_addr_opt.map(|s| Identifier::new(s.as_str()).unwrap()), gas_budget, extra_args, )?; + if print_bytecode { + output = merge_output(output, printed); + } match syntax { SyntaxChoice::Source => self.compiled_state().add_with_source_file( named_addr_opt, @@ -308,6 +328,7 @@ pub trait MoveTestAdapter<'a>: Sized { gas_budget, syntax, name: None, + print_bytecode, }, extra_args, ) => { @@ -326,6 +347,7 @@ pub trait MoveTestAdapter<'a>: Sized { // Run the V2 compiler if requested. SyntaxChoice::Source if run_config == TestRunConfig::CompilerV2 => { let ((_, script), warning_opt) = compile_source_unit_v2( + state.pre_compiled_deps, state.named_address_mapping.clone(), &state.source_files().cloned().collect::>(), data_path.to_owned(), @@ -343,6 +365,7 @@ pub trait MoveTestAdapter<'a>: Sized { state.named_address_mapping.clone(), &state.source_files().cloned().collect::>(), data_path.to_owned(), + self.known_attributes(), )?; match unit { AnnotatedCompiledUnit::Script(annot_script) => (annot_script.named_script.script, warning_opt), @@ -354,15 +377,25 @@ pub trait MoveTestAdapter<'a>: Sized { }, SyntaxChoice::IR => (compile_ir_script(state.dep_modules(), data_path)?, None), }; + let printed = if print_bytecode { + let disassembler = disassembler_for_view(BinaryIndexedView::Script(&script)); + Some(format!( + "\n== BEGIN Bytecode ==\n{}\n== END Bytecode ==", + disassembler.disassemble()? 
+ )) + } else { + None + }; let args = self.compiled_state().resolve_args(args)?; let type_args = self.compiled_state().resolve_type_args(type_args)?; - let (output, return_values) = + let (mut output, return_values) = self.execute_script(script, type_args, signers, args, gas_budget, extra_args)?; let rendered_return_value = display_return_values(return_values); - Ok(merge_output( - warning_opt, - merge_output(output, rendered_return_value), - )) + output = merge_output(output, rendered_return_value); + if print_bytecode { + output = merge_output(output, printed); + } + Ok(merge_output(warning_opt, output)) }, TaskCommand::Run( RunCommand { @@ -372,6 +405,7 @@ pub trait MoveTestAdapter<'a>: Sized { gas_budget, syntax, name: Some((raw_addr, module_name, name)), + print_bytecode: _, }, extra_args, ) => { @@ -427,6 +461,12 @@ pub trait MoveTestAdapter<'a>: Sized { } } +fn disassembler_for_view(view: BinaryIndexedView) -> Disassembler { + let source_mapping = + SourceMapping::new_from_view(view, Spanned::unsafe_no_loc(()).loc).expect("source mapping"); + Disassembler::new(source_mapping, DisassemblerOptions::new()) +} + fn display_return_values(return_values: SerializedReturnValues) -> Option { let SerializedReturnValues { mutable_reference_outputs, @@ -603,6 +643,7 @@ impl<'a> CompiledState<'a> { } fn compile_source_unit_v2( + pre_compiled_deps: Option<&FullyCompiledProgram>, named_address_mapping: BTreeMap, deps: &[String], path: String, @@ -610,9 +651,26 @@ fn compile_source_unit_v2( (Option, Option), Option, )> { + let deps = if let Some(p) = pre_compiled_deps { + // The v2 compiler does not (and perhaps never) supports precompiled programs, so + // compile from the sources again, computing the directories where they are found. + let mut dirs: BTreeSet<_> = p + .files + .iter() + .filter_map(|(_, (file_name, _))| { + Path::new(file_name.as_str()) + .parent() + .map(|p| p.to_string_lossy().to_string()) + }) + .collect(); + dirs.extend(deps.iter().cloned()); + dirs.into_iter().collect() + } else { + deps.to_vec() + }; let options = move_compiler_v2::Options { sources: vec![path], - dependencies: deps.to_vec(), + dependencies: deps, named_address_mapping: named_address_mapping .into_iter() .map(|(alias, addr)| format!("{}={}", alias, addr)) @@ -647,6 +705,7 @@ fn compile_source_unit( named_address_mapping: BTreeMap, deps: &[String], path: String, + known_attributes: &BTreeSet, ) -> Result<(AnnotatedCompiledUnit, Option)> { fn rendered_diags(files: &FilesSourceText, diags: Diagnostics) -> Option { if diags.is_empty() { @@ -662,11 +721,17 @@ fn compile_source_unit( } use move_compiler::PASS_COMPILATION; - let (mut files, comments_and_compiler_res) = - move_compiler::Compiler::from_files(vec![path], deps.to_vec(), named_address_mapping) - .set_pre_compiled_lib_opt(pre_compiled_deps) - .set_flags(move_compiler::Flags::empty().set_sources_shadow_deps(true)) - .run::()?; + let (mut files, comments_and_compiler_res) = move_compiler::Compiler::from_files( + vec![path], + deps.to_vec(), + named_address_mapping, + move_compiler::Flags::empty() + .set_sources_shadow_deps(true) + .set_skip_attribute_checks(false), // In case of bugs in transactional test code. 
+ known_attributes, + ) + .set_pre_compiled_lib_opt(pre_compiled_deps) + .run::()?; let units_or_diags = comments_and_compiler_res .map(|(_comments, move_compiler)| move_compiler.into_compiled_units()); @@ -745,6 +810,7 @@ where (vec![config], false) // either V1 or V2 }; let mut last_output = String::new(); + let mut bytecode_print_output = BTreeMap::::new(); for run_config in runs { let mut output = String::new(); let mut tasks = taskify::< @@ -791,6 +857,17 @@ where for task in tasks { handle_known_task(&mut output, &mut adapter, task); } + // Extract any bytecode outputs, they should not be part of the diff. + static BYTECODE_REX: Lazy = Lazy::new(|| { + Regex::new("(?m)== BEGIN Bytecode ==(.|\n|\r)*== END Bytecode ==").unwrap() + }); + while let Some(m) = BYTECODE_REX.find(&output) { + bytecode_print_output + .entry(run_config) + .or_default() + .push_str(&output.drain(m.range()).collect::()); + } + // If there is a previous output, compare to that one if !last_output.is_empty() && last_output != output { let diff = format_diff_no_color(&last_output, &output); @@ -804,6 +881,18 @@ where // Indicate in output that we passed comparison test last_output += "\n==> Compiler v2 delivered same results!\n" } + // Dump printed bytecode at last + for (config, out) in bytecode_print_output { + last_output += &format!( + "\n>>> {} {{\n{}\n}}\n", + match config { + TestRunConfig::CompilerV1 => "V1 Compiler", + TestRunConfig::CompilerV2 => "V2 Compiler", + _ => panic!("unexpected test config"), + }, + out + ); + } handle_expected_output(path, last_output)?; Ok(()) } diff --git a/third_party/move/testing-infra/transactional-test-runner/src/tasks.rs b/third_party/move/testing-infra/transactional-test-runner/src/tasks.rs index 7fa7edaa864b6..76ac8f54c0cc3 100644 --- a/third_party/move/testing-infra/transactional-test-runner/src/tasks.rs +++ b/third_party/move/testing-infra/transactional-test-runner/src/tasks.rs @@ -233,6 +233,8 @@ pub struct PublishCommand { pub gas_budget: Option, #[clap(long = "syntax")] pub syntax: Option, + #[clap(long = "print-bytecode")] + pub print_bytecode: bool, } #[derive(Debug, Parser)] @@ -261,6 +263,8 @@ pub struct RunCommand { pub syntax: Option, #[clap(name = "NAME", value_parser = parse_qualified_module_access)] pub name: Option<(ParsedAddress, Identifier, Identifier)>, + #[clap(long = "print-bytecode")] + pub print_bytecode: bool, } #[derive(Debug, Parser)] diff --git a/third_party/move/testing-infra/transactional-test-runner/src/vm_test_harness.rs b/third_party/move/testing-infra/transactional-test-runner/src/vm_test_harness.rs index b5ccf25c69b47..530fb47803a82 100644 --- a/third_party/move/testing-infra/transactional-test-runner/src/vm_test_harness.rs +++ b/third_party/move/testing-infra/transactional-test-runner/src/vm_test_harness.rs @@ -18,7 +18,9 @@ use move_command_line_common::{ address::ParsedAddress, files::verify_and_create_named_address_mapping, }; use move_compiler::{ - compiled_unit::AnnotatedCompiledUnit, shared::PackagePaths, FullyCompiledProgram, + compiled_unit::AnnotatedCompiledUnit, + shared::{known_attributes::KnownAttribute, Flags, PackagePaths}, + FullyCompiledProgram, }; use move_core_types::{ account_address::AccountAddress, @@ -37,7 +39,10 @@ use move_vm_runtime::{ }; use move_vm_test_utils::{gas_schedule::GasStatus, InMemoryStorage}; use once_cell::sync::Lazy; -use std::{collections::BTreeMap, path::Path}; +use std::{ + collections::{BTreeMap, BTreeSet}, + path::Path, +}; const STD_ADDR: AccountAddress = AccountAddress::ONE; @@ -106,6 
+111,10 @@ impl<'a> MoveTestAdapter<'a> for SimpleVMTestAdapter<'a> { self.default_syntax } + fn known_attributes(&self) -> &BTreeSet { + KnownAttribute::get_all_attribute_names() + } + fn run_config(&self) -> TestRunConfig { self.run_config } @@ -384,8 +393,7 @@ impl<'a> SimpleVMTestAdapter<'a> { let res = f(&mut session, &mut gas_status)?; // save changeset - // TODO support events - let (changeset, _events) = session.finish()?; + let changeset = session.finish()?; self.storage.apply(changeset).unwrap(); Ok(res) } @@ -399,7 +407,8 @@ static PRECOMPILED_MOVE_STDLIB: Lazy = Lazy::new(|| { named_address_map: move_stdlib::move_stdlib_named_addresses(), }], None, - move_compiler::Flags::empty(), + Flags::empty().set_skip_attribute_checks(true), // no point in checking. + KnownAttribute::get_all_attribute_names(), ) .unwrap(); match program_res { @@ -416,6 +425,8 @@ static MOVE_STDLIB_COMPILED: Lazy> = Lazy::new(|| { move_stdlib::move_stdlib_files(), vec![], move_stdlib::move_stdlib_named_addresses(), + Flags::empty().set_skip_attribute_checks(true), // no point in checking here. + KnownAttribute::get_all_attribute_names(), ) .build() .unwrap(); diff --git a/third_party/move/tools/move-bytecode-viewer/Cargo.toml b/third_party/move/tools/move-bytecode-viewer/Cargo.toml index 5cde6ab84cc67..db459b743ad9a 100644 --- a/third_party/move/tools/move-bytecode-viewer/Cargo.toml +++ b/third_party/move/tools/move-bytecode-viewer/Cargo.toml @@ -9,7 +9,7 @@ edition = "2021" [dependencies] anyhow = "1.0.52" -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } crossterm = "0.26.1" move-binary-format = { path = "../../move-binary-format" } move-bytecode-source-map = { path = "../../move-ir-compiler/move-bytecode-source-map" } diff --git a/third_party/move/tools/move-cli/Cargo.toml b/third_party/move/tools/move-cli/Cargo.toml index 25027c55bfe04..1d4d3b91e2f90 100644 --- a/third_party/move/tools/move-cli/Cargo.toml +++ b/third_party/move/tools/move-cli/Cargo.toml @@ -11,7 +11,7 @@ edition = "2021" [dependencies] anyhow = "1.0.52" -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } codespan-reporting = "0.11.1" colored = "2.0.0" difference = "2.0.0" diff --git a/third_party/move/tools/move-cli/src/sandbox/commands/doctor.rs b/third_party/move/tools/move-cli/src/sandbox/commands/doctor.rs index 95892ceff0171..d68b0fd48007f 100644 --- a/third_party/move/tools/move-cli/src/sandbox/commands/doctor.rs +++ b/third_party/move/tools/move-cli/src/sandbox/commands/doctor.rs @@ -76,17 +76,5 @@ pub fn doctor(state: &OnDiskStateView) -> Result<()> { ) } } - // deserialize each event - for event_path in state.event_paths() { - let event = state.view_events(&event_path); - if event.is_err() { - bail!( - "Failed to deserialize event {:?} stored under address {:?}", - event_path.file_name().unwrap(), - parent_addr(&event_path) - ) - } - } - Ok(()) } diff --git a/third_party/move/tools/move-cli/src/sandbox/commands/publish.rs b/third_party/move/tools/move-cli/src/sandbox/commands/publish.rs index 3bc7db762b590..80813761abe5c 100644 --- a/third_party/move/tools/move-cli/src/sandbox/commands/publish.rs +++ b/third_party/move/tools/move-cli/src/sandbox/commands/publish.rs @@ -150,8 +150,7 @@ pub fn publish( } if !has_error { - let (changeset, events) = session.finish().map_err(|e| e.into_vm_status())?; - assert!(events.is_empty()); + let changeset = session.finish().map_err(|e| e.into_vm_status())?; if verbose { 
explain_publish_changeset(&changeset); } diff --git a/third_party/move/tools/move-cli/src/sandbox/commands/run.rs b/third_party/move/tools/move-cli/src/sandbox/commands/run.rs index 1deabfebe8b0c..9f59ea58c045f 100644 --- a/third_party/move/tools/move-cli/src/sandbox/commands/run.rs +++ b/third_party/move/tools/move-cli/src/sandbox/commands/run.rs @@ -125,10 +125,10 @@ move run` must be applied to a module inside `storage/`", txn_args, ) } else { - let (changeset, events) = session.finish().map_err(|e| e.into_vm_status())?; + let changeset = session.finish().map_err(|e| e.into_vm_status())?; if verbose { - explain_execution_effects(&changeset, &events, state)? + explain_execution_effects(&changeset, state)? } - maybe_commit_effects(!dry_run, changeset, events, state) + maybe_commit_effects(!dry_run, changeset, state) } } diff --git a/third_party/move/tools/move-cli/src/sandbox/commands/view.rs b/third_party/move/tools/move-cli/src/sandbox/commands/view.rs index ae45254086c8a..6269f5ee94440 100644 --- a/third_party/move/tools/move-cli/src/sandbox/commands/view.rs +++ b/third_party/move/tools/move-cli/src/sandbox/commands/view.rs @@ -14,15 +14,6 @@ pub fn view(state: &OnDiskStateView, path: &Path) -> Result<()> { Some(resource) => println!("{}", resource), None => println!("Resource not found."), } - } else if state.is_event_path(path) { - let events = state.view_events(path)?; - if events.is_empty() { - println!("Events not found.") - } else { - for event in events { - println!("{}", event) - } - } } else if is_bytecode_file(path) { let bytecode_opt = if contains_module(path) { OnDiskStateView::view_module(path)? diff --git a/third_party/move/tools/move-cli/src/sandbox/utils/mod.rs b/third_party/move/tools/move-cli/src/sandbox/utils/mod.rs index b3706105f698b..b5b6dbcc24592 100644 --- a/third_party/move/tools/move-cli/src/sandbox/utils/mod.rs +++ b/third_party/move/tools/move-cli/src/sandbox/utils/mod.rs @@ -21,7 +21,7 @@ use move_compiler::{ }; use move_core_types::{ account_address::AccountAddress, - effects::{ChangeSet, Event, Op}, + effects::{ChangeSet, Op}, errmap::ErrorMapping, language_storage::{ModuleId, TypeTag}, transaction_argument::TransactionArgument, @@ -151,21 +151,10 @@ fn print_struct_diff_with_indent( pub(crate) fn explain_execution_effects( changeset: &ChangeSet, - events: &[Event], state: &OnDiskStateView, ) -> Result<()> { // execution effects should contain no modules assert!(changeset.modules().next().is_none()); - if !events.is_empty() { - println!("Emitted {:?} events:", events.len()); - // TODO: better event printing - for (event_key, event_sequence_number, _event_type, event_data) in events { - println!( - "Emitted {:?} as the {}th event to stream {:?}", - event_data, event_sequence_number, event_key - ) - } - } if !changeset.accounts().is_empty() { println!( "Changed resource(s) under {:?} address(es):", @@ -243,11 +232,10 @@ pub(crate) fn explain_execution_effects( Ok(()) } -/// Commit the resources and events modified by a transaction to disk +/// Commit the resources modified by a transaction to disk pub(crate) fn maybe_commit_effects( commit: bool, changeset: ChangeSet, - events: Vec, state: &OnDiskStateView, ) -> Result<()> { // similar to explain effects, all module publishing happens via save_modules(), so effects @@ -263,11 +251,7 @@ pub(crate) fn maybe_commit_effects( } } } - - for (event_key, event_sequence_number, event_type, event_data) in events { - state.save_event(&event_key, event_sequence_number, event_type, event_data)? 
- } - } else if !(changeset.resources().next().is_none() && events.is_empty()) { + } else if changeset.resources().next().is_some() { println!("Discarding changes; re-run without --dry-run if you would like to keep them.") } diff --git a/third_party/move/tools/move-cli/src/sandbox/utils/on_disk_state_view.rs b/third_party/move/tools/move-cli/src/sandbox/utils/on_disk_state_view.rs index d210480a0a472..4236c361ed702 100644 --- a/third_party/move/tools/move-cli/src/sandbox/utils/on_disk_state_view.rs +++ b/third_party/move/tools/move-cli/src/sandbox/utils/on_disk_state_view.rs @@ -21,22 +21,17 @@ use move_core_types::{ }; use move_disassembler::disassembler::Disassembler; use move_ir_types::location::Spanned; -use move_resource_viewer::{AnnotatedMoveStruct, AnnotatedMoveValue, MoveValueAnnotator}; +use move_resource_viewer::{AnnotatedMoveStruct, MoveValueAnnotator}; use std::{ - convert::{TryFrom, TryInto}, fmt::Debug, fs, path::{Path, PathBuf}, }; -type Event = (Vec, u64, TypeTag, Vec); - /// subdirectory of `DEFAULT_STORAGE_DIR/` where resources are stored pub const RESOURCES_DIR: &str = "resources"; /// subdirectory of `DEFAULT_STORAGE_DIR/` where modules are stored pub const MODULES_DIR: &str = "modules"; -/// subdirectory of `DEFAULT_STORAGE_DIR/` where events are stored -pub const EVENTS_DIR: &str = "events"; /// file under `DEFAULT_BUILD_DIR` where a registry of generated struct layouts are stored pub const STRUCT_LAYOUTS_FILE: &str = "struct_layouts.yaml"; @@ -92,10 +87,6 @@ impl OnDiskStateView { self.is_data_path(p, RESOURCES_DIR) } - pub fn is_event_path(&self, p: &Path) -> bool { - self.is_data_path(p, EVENTS_DIR) - } - pub fn is_module_path(&self, p: &Path) -> bool { self.is_data_path(p, MODULES_DIR) } @@ -113,20 +104,6 @@ impl OnDiskStateView { path.with_extension(BCS_EXTENSION) } - // Events are stored under address/handle creation number - fn get_event_path(&self, key: &[u8]) -> PathBuf { - // TODO: this is a hacky way to get the account address and creation number from the event key. - // The root problem here is that the move-cli is using the Diem-specific event format. - // We will deal this later when we make events more generic in the Move VM. - let account_addr = AccountAddress::try_from(&key[8..]) - .expect("failed to get account address from event key"); - let creation_number = u64::from_le_bytes(key[..8].try_into().unwrap()); - let mut path = self.get_addr_path(&account_addr); - path.push(EVENTS_DIR); - path.push(creation_number.to_string()); - path.with_extension(BCS_EXTENSION) - } - fn get_module_path(&self, module_id: &ModuleId) -> PathBuf { let mut path = self.get_addr_path(module_id.address()); path.push(MODULES_DIR); @@ -222,25 +199,6 @@ impl OnDiskStateView { } } - fn get_events(&self, events_path: &Path) -> Result> { - Ok(if events_path.exists() { - match Self::get_bytes(events_path)? { - Some(events_data) => bcs::from_bytes::>(&events_data)?, - None => vec![], - } - } else { - vec![] - }) - } - - pub fn view_events(&self, events_path: &Path) -> Result> { - let annotator = MoveValueAnnotator::new(self); - self.get_events(events_path)? - .iter() - .map(|(_, _, event_type, event_data)| annotator.view_value(event_type, event_data)) - .collect() - } - fn view_bytecode(path: &Path, is_module: bool) -> Result> { if path.is_dir() { bail!("Bad bytecode path {:?}. Needed file, found directory", path) @@ -302,29 +260,6 @@ impl OnDiskStateView { Ok(fs::write(path, bcs_bytes)?) 
} - pub fn save_event( - &self, - event_key: &[u8], - event_sequence_number: u64, - event_type: TypeTag, - event_data: Vec, - ) -> Result<()> { - // save event data in handle_address/EVENTS_DIR/handle_number - let path = self.get_event_path(event_key); - if !path.exists() { - fs::create_dir_all(path.parent().unwrap())?; - } - // grab the old event log (if any) and append this event to it - let mut event_log = self.get_events(&path)?; - event_log.push(( - event_key.to_vec(), - event_sequence_number, - event_type, - event_data, - )); - Ok(fs::write(path, bcs::to_bytes(&event_log)?)?) - } - /// Save `module` on disk under the path `module.address()`/`module.name()` pub fn save_module(&self, module_id: &ModuleId, module_bytes: &[u8]) -> Result<()> { let path = self.get_module_path(module_id); @@ -386,10 +321,6 @@ impl OnDiskStateView { self.iter_paths(move |p| self.is_module_path(p)) } - pub fn event_paths(&self) -> impl Iterator + '_ { - self.iter_paths(move |p| self.is_event_path(p)) - } - /// Build all modules in the self.storage_dir. /// Returns an Err if a module does not deserialize. pub fn get_all_modules(&self) -> Result> { diff --git a/third_party/move/tools/move-cli/tests/build_tests/dependency_chain/args.exp b/third_party/move/tools/move-cli/tests/build_tests/dependency_chain/args.exp index ea1be9b5e0cee..46931d7174c7c 100644 --- a/third_party/move/tools/move-cli/tests/build_tests/dependency_chain/args.exp +++ b/third_party/move/tools/move-cli/tests/build_tests/dependency_chain/args.exp @@ -2,3 +2,9 @@ Command `build -v`: INCLUDING DEPENDENCY Bar INCLUDING DEPENDENCY Foo BUILDING A +warning[W02016]: unknown attribute + ┌─ ./sources/A.move:1:3 + │ +1 │ #[evm_contract] // for passing evm test flavor + │ ^^^^^^^^^^^^ Attribute name 'evm_contract' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + diff --git a/third_party/move/tools/move-cli/tests/build_tests/dev_address/args.exp b/third_party/move/tools/move-cli/tests/build_tests/dev_address/args.exp index fa89fc39cc0e8..6d43848bd756e 100644 --- a/third_party/move/tools/move-cli/tests/build_tests/dev_address/args.exp +++ b/third_party/move/tools/move-cli/tests/build_tests/dev_address/args.exp @@ -1,2 +1,8 @@ Command `build -v -d`: BUILDING A +warning[W02016]: unknown attribute + ┌─ ./sources/A.move:1:3 + │ +1 │ #[evm_contract] // for passing evm test flavor + │ ^^^^^^^^^^^^ Attribute name 'evm_contract' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. 
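The expected-output changes above follow from the new attribute checking: `Compiler::from_files` now takes the compiler flags and the set of known attribute names up front instead of via `set_flags`. A minimal sketch of the new call shape, patterned on the move-unit-test change later in this patch (the source, dependency, and named-address arguments are placeholders):

use anyhow::Result;
use move_compiler::{
    shared::{known_attributes::KnownAttribute, NumericalAddress},
    Compiler, Flags, PASS_COMPILATION,
};
use std::collections::BTreeMap;

fn check_sources(
    sources: Vec<String>,
    deps: Vec<String>,
    named_addresses: BTreeMap<String, NumericalAddress>,
) -> Result<()> {
    // With skip_attribute_checks left at false, any attribute outside the known set
    // (e.g. `#[evm_contract]`) is reported as warning W02016, as in the .exp files above.
    let (_files, _compiler_res) = Compiler::from_files(
        sources,
        deps,
        named_addresses,
        Flags::empty().set_skip_attribute_checks(false),
        KnownAttribute::get_all_attribute_names(),
    )
    .run::<PASS_COMPILATION>()?;
    Ok(())
}

Passing `set_skip_attribute_checks(true)` (or the `--skip-attribute-checks` CLI option mentioned in the warning text) suppresses the check, which is what the precompiled stdlib paths in this patch do.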
+ diff --git a/third_party/move/tools/move-cli/tests/build_tests/empty_module_no_deps/args.exp b/third_party/move/tools/move-cli/tests/build_tests/empty_module_no_deps/args.exp index dffc3c3b344de..981f783b21258 100644 --- a/third_party/move/tools/move-cli/tests/build_tests/empty_module_no_deps/args.exp +++ b/third_party/move/tools/move-cli/tests/build_tests/empty_module_no_deps/args.exp @@ -1,2 +1,8 @@ Command `build -v`: BUILDING A +warning[W02016]: unknown attribute + ┌─ ./sources/A.move:1:3 + │ +1 │ #[evm_contract] // for passing evm test flavor + │ ^^^^^^^^^^^^ Attribute name 'evm_contract' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + diff --git a/third_party/move/tools/move-cli/tests/build_tests/include_exclude_stdlib/args.exp b/third_party/move/tools/move-cli/tests/build_tests/include_exclude_stdlib/args.exp index ce37b1cf91b5a..1d60e31b47be4 100644 --- a/third_party/move/tools/move-cli/tests/build_tests/include_exclude_stdlib/args.exp +++ b/third_party/move/tools/move-cli/tests/build_tests/include_exclude_stdlib/args.exp @@ -1,5 +1,11 @@ Command `build -v`: BUILDING build_include_exclude_stdlib +warning[W02016]: unknown attribute + ┌─ ./sources/UseSigner.move:1:3 + │ +1 │ #[evm_contract] // for passing evm test flavor + │ ^^^^^^^^^^^^ Attribute name 'evm_contract' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. + error[E03002]: unbound module ┌─ ./sources/UseSigner.move:3:7 │ @@ -15,3 +21,9 @@ error[E03002]: unbound module Command `-d -v build`: INCLUDING DEPENDENCY MoveStdlib BUILDING build_include_exclude_stdlib +warning[W02016]: unknown attribute + ┌─ ./sources/UseSigner.move:1:3 + │ +1 │ #[evm_contract] // for passing evm test flavor + │ ^^^^^^^^^^^^ Attribute name 'evm_contract' is unknown (use --skip-attribute-checks CLI option to ignore); known attributes are '{"bytecode_instruction", "deprecated", "expected_failure", "native_interface", "test", "test_only", "verify_only"}'. 
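Circling back to the `--print-bytecode` support added to the transactional test runner earlier in this patch: the `disassembler_for_view` helper simply wires a `BinaryIndexedView` to an empty source map. A minimal sketch of rendering a compiled module the same way (the `module` argument is a placeholder):

use move_binary_format::{binary_views::BinaryIndexedView, file_format::CompiledModule};
use move_bytecode_source_map::mapping::SourceMapping;
use move_disassembler::disassembler::{Disassembler, DisassemblerOptions};
use move_ir_types::location::Spanned;

fn print_module_bytecode(module: &CompiledModule) -> anyhow::Result<()> {
    let view = BinaryIndexedView::Module(module);
    // No real source map is available here, so attach an empty location,
    // as disassembler_for_view does above.
    let mapping = SourceMapping::new_from_view(view, Spanned::unsafe_no_loc(()).loc)?;
    let disassembler = Disassembler::new(mapping, DisassemblerOptions::new());
    // `disassemble` returns the human-readable bytecode listing as a String.
    println!("{}", disassembler.disassemble()?);
    Ok(())
}

Output like this is what the `== BEGIN Bytecode == ... == END Bytecode ==` extraction above pulls out of the diffed test output.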
+ diff --git a/third_party/move/tools/move-cli/tests/build_tests/unbound_address/args.exp b/third_party/move/tools/move-cli/tests/build_tests/unbound_address/args.exp index 47bf3e9d49617..46fb821f74f26 100644 --- a/third_party/move/tools/move-cli/tests/build_tests/unbound_address/args.exp +++ b/third_party/move/tools/move-cli/tests/build_tests/unbound_address/args.exp @@ -4,5 +4,5 @@ Named address 'A' in package 'A' ] To fix this, add an entry for each unresolved address to the [addresses] section of ./Move.toml: e.g., [addresses] -Std = "0x1" +std = "0x1" Alternatively, you can also define [dev-addresses] and call with the --dev flag diff --git a/third_party/move/tools/move-cli/tests/cross_process_tests/Package1/Move.toml b/third_party/move/tools/move-cli/tests/cross_process_tests/Package1/Move.toml index adb1224f7217a..6915223ce1792 100644 --- a/third_party/move/tools/move-cli/tests/cross_process_tests/Package1/Move.toml +++ b/third_party/move/tools/move-cli/tests/cross_process_tests/Package1/Move.toml @@ -3,7 +3,7 @@ name = "Package1" version = "0.0.0" [addresses] -Std = "0x1" +std = "0x1" [dependencies] -MoveStdlib = { git = "https://github.com/diem/move.git", subdir = "language/move-stdlib", rev = "98ed299" } +MoveStdlib = { local = "../../../../move-stdlib" } diff --git a/third_party/move/tools/move-cli/tests/cross_process_tests/Package2/Move.toml b/third_party/move/tools/move-cli/tests/cross_process_tests/Package2/Move.toml index 779b56687d865..64468724325e7 100644 --- a/third_party/move/tools/move-cli/tests/cross_process_tests/Package2/Move.toml +++ b/third_party/move/tools/move-cli/tests/cross_process_tests/Package2/Move.toml @@ -3,7 +3,7 @@ name = "Package2" version = "0.0.0" [addresses] -Std = "0x1" +std = "0x1" [dependencies] -MoveStdlib = { git = "https://github.com/diem/move.git", subdir = "language/move-stdlib", rev = "98ed299" } +MoveStdlib = { local = "../../../../move-stdlib" } diff --git a/third_party/move/tools/move-cli/tests/upload_tests/no_git_remote_package/Move.toml b/third_party/move/tools/move-cli/tests/upload_tests/no_git_remote_package/Move.toml index 637d99d854110..1591df0a97dcf 100644 --- a/third_party/move/tools/move-cli/tests/upload_tests/no_git_remote_package/Move.toml +++ b/third_party/move/tools/move-cli/tests/upload_tests/no_git_remote_package/Move.toml @@ -3,4 +3,4 @@ name = "Package1" version = "0.0.0" [addresses] -Std = "0x1" +std = "0x1" diff --git a/third_party/move/tools/move-cli/tests/upload_tests/valid_package1/Move.toml b/third_party/move/tools/move-cli/tests/upload_tests/valid_package1/Move.toml index 637d99d854110..1591df0a97dcf 100644 --- a/third_party/move/tools/move-cli/tests/upload_tests/valid_package1/Move.toml +++ b/third_party/move/tools/move-cli/tests/upload_tests/valid_package1/Move.toml @@ -3,4 +3,4 @@ name = "Package1" version = "0.0.0" [addresses] -Std = "0x1" +std = "0x1" diff --git a/third_party/move/tools/move-cli/tests/upload_tests/valid_package2/Move.toml b/third_party/move/tools/move-cli/tests/upload_tests/valid_package2/Move.toml index 637d99d854110..1591df0a97dcf 100644 --- a/third_party/move/tools/move-cli/tests/upload_tests/valid_package2/Move.toml +++ b/third_party/move/tools/move-cli/tests/upload_tests/valid_package2/Move.toml @@ -3,4 +3,4 @@ name = "Package1" version = "0.0.0" [addresses] -Std = "0x1" +std = "0x1" diff --git a/third_party/move/tools/move-cli/tests/upload_tests/valid_package3/Move.toml b/third_party/move/tools/move-cli/tests/upload_tests/valid_package3/Move.toml index 
637d99d854110..1591df0a97dcf 100644 --- a/third_party/move/tools/move-cli/tests/upload_tests/valid_package3/Move.toml +++ b/third_party/move/tools/move-cli/tests/upload_tests/valid_package3/Move.toml @@ -3,4 +3,4 @@ name = "Package1" version = "0.0.0" [addresses] -Std = "0x1" +std = "0x1" diff --git a/third_party/move/tools/move-coverage/Cargo.toml b/third_party/move/tools/move-coverage/Cargo.toml index d9bb76ca4dee0..9c1fac4b15192 100644 --- a/third_party/move/tools/move-coverage/Cargo.toml +++ b/third_party/move/tools/move-coverage/Cargo.toml @@ -11,7 +11,7 @@ edition = "2021" [dependencies] anyhow = "1.0.52" -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } codespan = { version = "0.11.1", features = ["serialization"] } colored = "2.0.0" once_cell = "1.7.2" diff --git a/third_party/move/tools/move-disassembler/Cargo.toml b/third_party/move/tools/move-disassembler/Cargo.toml index 91c337c95eb32..4a225e4bef06d 100644 --- a/third_party/move/tools/move-disassembler/Cargo.toml +++ b/third_party/move/tools/move-disassembler/Cargo.toml @@ -20,7 +20,7 @@ move-core-types = { path = "../../move-core/types" } move-coverage = { path = "../move-coverage" } move-ir-types = { path = "../../move-ir/types" } -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } [features] default = [] diff --git a/third_party/move/tools/move-explain/Cargo.toml b/third_party/move/tools/move-explain/Cargo.toml index d35fac69e6931..91396df15ab1f 100644 --- a/third_party/move/tools/move-explain/Cargo.toml +++ b/third_party/move/tools/move-explain/Cargo.toml @@ -10,7 +10,7 @@ publish = false edition = "2021" [dependencies] -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } move-command-line-common = { path = "../../move-command-line-common" } move-core-types = { path = "../../move-core/types" } diff --git a/third_party/move/tools/move-package/Cargo.toml b/third_party/move/tools/move-package/Cargo.toml index 514a3f5c41fec..1c6a0d5494426 100644 --- a/third_party/move/tools/move-package/Cargo.toml +++ b/third_party/move/tools/move-package/Cargo.toml @@ -9,7 +9,7 @@ edition = "2021" [dependencies] anyhow = "1.0.52" -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } colored = "2.0.0" dirs-next = "2.0.0" itertools = "0.10.0" diff --git a/third_party/move/tools/move-package/src/compilation/compiled_package.rs b/third_party/move/tools/move-package/src/compilation/compiled_package.rs index f36e348732551..dae2e44e43f95 100644 --- a/third_party/move/tools/move-package/src/compilation/compiled_package.rs +++ b/third_party/move/tools/move-package/src/compilation/compiled_package.rs @@ -9,7 +9,7 @@ use crate::{ layout::{SourcePackageLayout, REFERENCE_TEMPLATE_FILENAME}, parsed_manifest::{FileName, PackageDigest, PackageName}, }, - BuildConfig, + Architecture, BuildConfig, }; use anyhow::{ensure, Result}; use colored::Colorize; @@ -26,6 +26,7 @@ use move_command_line_common::{ }, }; use move_compiler::{ + attr_derivation, compiled_unit::{ self, AnnotatedCompiledUnit, CompiledUnit, NamedCompiledModule, NamedCompiledScript, }, @@ -575,11 +576,30 @@ impl CompiledPackage { &resolved_package, transitive_dependencies, )?; - let flags = if resolution_graph.build_options.test_mode { + let mut flags = if resolution_graph.build_options.test_mode { Flags::testing() } else { Flags::empty() }; + let skip_attribute_checks = 
resolution_graph.build_options.skip_attribute_checks; + flags = flags.set_skip_attribute_checks(skip_attribute_checks); + let mut known_attributes = resolution_graph.build_options.known_attributes.clone(); + match &resolution_graph.build_options.architecture { + Some(x) => { + match x { + Architecture::Move => (), + Architecture::AsyncMove => { + flags = flags.set_flavor("async"); + }, + Architecture::Ethereum => { + flags = flags.set_flavor("evm"); + }, + }; + }, + None => (), + }; + attr_derivation::add_attributes_for_flavor(&flags, &mut known_attributes); + // Partition deps_package according whether src is available let (src_deps, bytecode_deps): (Vec<_>, Vec<_>) = deps_package_paths .clone() @@ -600,7 +620,7 @@ impl CompiledPackage { let mut paths = src_deps; paths.push(sources_package_paths.clone()); - let compiler = Compiler::from_package_paths(paths, bytecode_deps).set_flags(flags); + let compiler = Compiler::from_package_paths(paths, bytecode_deps, flags, &known_attributes); let (file_map, all_compiled_units) = compiler_driver(compiler)?; let mut root_compiled_units = vec![]; let mut deps_compiled_units = vec![]; @@ -631,6 +651,8 @@ impl CompiledPackage { vec![sources_package_paths], deps_package_paths.into_iter().map(|(p, _)| p).collect_vec(), ModelBuilderOptions::default(), + skip_attribute_checks, + &known_attributes, )?; if resolution_graph.build_options.generate_docs { diff --git a/third_party/move/tools/move-package/src/compilation/model_builder.rs b/third_party/move/tools/move-package/src/compilation/model_builder.rs index 0710ed955c500..6708bfc80a67a 100644 --- a/third_party/move/tools/move-package/src/compilation/model_builder.rs +++ b/third_party/move/tools/move-package/src/compilation/model_builder.rs @@ -115,6 +115,14 @@ impl ModelBuilder { ), }; - run_model_builder_with_options(all_targets, all_deps, ModelBuilderOptions::default()) + let skip_attribute_checks = self.resolution_graph.build_options.skip_attribute_checks; + let known_attributes = &self.resolution_graph.build_options.known_attributes; + run_model_builder_with_options( + all_targets, + all_deps, + ModelBuilderOptions::default(), + skip_attribute_checks, + known_attributes, + ) } } diff --git a/third_party/move/tools/move-package/src/lib.rs b/third_party/move/tools/move-package/src/lib.rs index 050cdc4828f7d..af47bd762f548 100644 --- a/third_party/move/tools/move-package/src/lib.rs +++ b/third_party/move/tools/move-package/src/lib.rs @@ -19,12 +19,15 @@ use crate::{ }; use anyhow::{bail, Result}; use clap::*; +use move_compiler::{ + command_line::SKIP_ATTRIBUTE_CHECKS, shared::known_attributes::KnownAttribute, +}; use move_core_types::account_address::AccountAddress; use move_model::model::GlobalEnv; use serde::{Deserialize, Serialize}; use source_package::layout::SourcePackageLayout; use std::{ - collections::BTreeMap, + collections::{BTreeMap, BTreeSet}, fmt, io::Write, path::{Path, PathBuf}, @@ -137,6 +140,14 @@ pub struct BuildConfig { /// Bytecode version to compile move code #[clap(long = "bytecode-version", global = true)] pub bytecode_version: Option, + + // Known attribute names. Depends on compilation context (Move variant) + #[clap(skip = KnownAttribute::get_all_attribute_names().clone())] + pub known_attributes: BTreeSet, + + /// Do not complain about an unknown attribute in Move code. 
+ #[clap(long = SKIP_ATTRIBUTE_CHECKS, default_value = "false")] + pub skip_attribute_checks: bool, } #[derive(Debug, Clone, Eq, PartialEq, PartialOrd)] diff --git a/third_party/move/tools/move-package/src/resolution/resolution_graph.rs b/third_party/move/tools/move-package/src/resolution/resolution_graph.rs index 4415f93fb6513..9c3928f3dbcd6 100644 --- a/third_party/move/tools/move-package/src/resolution/resolution_graph.rs +++ b/third_party/move/tools/move-package/src/resolution/resolution_graph.rs @@ -179,7 +179,7 @@ impl ResolvingGraph { bail!( "Unresolved addresses found: [\n{}\n]\n\ To fix this, add an entry for each unresolved address to the [addresses] section of {}/Move.toml: \ - e.g.,\n[addresses]\nStd = \"0x1\"\n\ + e.g.,\n[addresses]\nstd = \"0x1\"\n\ Alternatively, you can also define [dev-addresses] and call with the --dev flag", unresolved_addresses.join("\n"), root_package_path.to_string_lossy() diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps/Move.exp b/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps/Move.exp index cb5253d17487a..a562ec60f64d4 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps/Move.exp @@ -18,5 +18,7 @@ CompiledPackageInfo { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps_address_assigned/Move.exp b/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps_address_assigned/Move.exp index 6ee4f9b85807e..785be46e0eaad 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps_address_assigned/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps_address_assigned/Move.exp @@ -20,5 +20,7 @@ CompiledPackageInfo { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps_address_not_assigned_with_dev_assignment/Move.exp b/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps_address_not_assigned_with_dev_assignment/Move.exp index 7a2e6343a15eb..44424a468b45e 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps_address_not_assigned_with_dev_assignment/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps_address_not_assigned_with_dev_assignment/Move.exp @@ -20,5 +20,7 @@ CompiledPackageInfo { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps_test_mode/Move.exp b/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps_test_mode/Move.exp index 0f8b1e0a0d4a6..d679d645623a1 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps_test_mode/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/basic_no_deps_test_mode/Move.exp @@ -20,5 +20,7 @@ CompiledPackageInfo { fetch_deps_only: false, 
skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/diamond_problem_backflow_resolution/Move.exp b/third_party/move/tools/move-package/tests/test_sources/compilation/diamond_problem_backflow_resolution/Move.exp index fe3846c31a1fa..e45c1031f3d49 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/diamond_problem_backflow_resolution/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/diamond_problem_backflow_resolution/Move.exp @@ -21,5 +21,7 @@ CompiledPackageInfo { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/diamond_problem_no_conflict/Move.exp b/third_party/move/tools/move-package/tests/test_sources/compilation/diamond_problem_no_conflict/Move.exp index fe3846c31a1fa..e45c1031f3d49 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/diamond_problem_no_conflict/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/diamond_problem_no_conflict/Move.exp @@ -21,5 +21,7 @@ CompiledPackageInfo { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/multiple_deps_rename/Move.exp b/third_party/move/tools/move-package/tests/test_sources/compilation/multiple_deps_rename/Move.exp index da8ed9338db00..e18b4560e1d2e 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/multiple_deps_rename/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/multiple_deps_rename/Move.exp @@ -22,5 +22,7 @@ CompiledPackageInfo { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/multiple_deps_rename_one/Move.exp b/third_party/move/tools/move-package/tests/test_sources/compilation/multiple_deps_rename_one/Move.exp index bb13331e51006..2eee90247064c 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/multiple_deps_rename_one/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/multiple_deps_rename_one/Move.exp @@ -22,5 +22,7 @@ CompiledPackageInfo { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep/Move.exp b/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep/Move.exp index 04d6d3c40e95a..eb858ed34d65f 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep/Move.exp @@ -20,5 +20,7 @@ CompiledPackageInfo { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep_assigned_address/Move.exp 
b/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep_assigned_address/Move.exp index 5da411f35697c..5ff2f16628f81 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep_assigned_address/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep_assigned_address/Move.exp @@ -20,5 +20,7 @@ CompiledPackageInfo { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep_renamed/Move.exp b/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep_renamed/Move.exp index 04d6d3c40e95a..eb858ed34d65f 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep_renamed/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep_renamed/Move.exp @@ -20,5 +20,7 @@ CompiledPackageInfo { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep_with_scripts/Move.exp b/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep_with_scripts/Move.exp index 04d6d3c40e95a..eb858ed34d65f 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep_with_scripts/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/one_dep_with_scripts/Move.exp @@ -20,5 +20,7 @@ CompiledPackageInfo { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/compilation/test_symlinks/Move.exp b/third_party/move/tools/move-package/tests/test_sources/compilation/test_symlinks/Move.exp index 6ee4f9b85807e..785be46e0eaad 100644 --- a/third_party/move/tools/move-package/tests/test_sources/compilation/test_symlinks/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/compilation/test_symlinks/Move.exp @@ -20,5 +20,7 @@ CompiledPackageInfo { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, } diff --git a/third_party/move/tools/move-package/tests/test_sources/parsing/invalid_identifier_package_name/Move.exp b/third_party/move/tools/move-package/tests/test_sources/parsing/invalid_identifier_package_name/Move.exp index 21da7cb042776..891d51523e355 100644 --- a/third_party/move/tools/move-package/tests/test_sources/parsing/invalid_identifier_package_name/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/parsing/invalid_identifier_package_name/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/parsing/minimal_manifest/Move.exp b/third_party/move/tools/move-package/tests/test_sources/parsing/minimal_manifest/Move.exp index 902cee9609a12..d66c3ca089c6d 100644 --- a/third_party/move/tools/move-package/tests/test_sources/parsing/minimal_manifest/Move.exp +++ 
b/third_party/move/tools/move-package/tests/test_sources/parsing/minimal_manifest/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps/Move.exp b/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps/Move.exp index 79e181224f8ba..7868882c69450 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps_address_assigned/Move.exp b/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps_address_assigned/Move.exp index 2ab48f290b8c5..af971a5488ef9 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps_address_assigned/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps_address_assigned/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps_address_not_assigned/Move.exp b/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps_address_not_assigned/Move.exp index 067d57d26ce13..286d76ea24d5e 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps_address_not_assigned/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps_address_not_assigned/Move.exp @@ -3,5 +3,5 @@ Named address 'A' in package 'test' ] To fix this, add an entry for each unresolved address to the [addresses] section of tests/test_sources/resolution/basic_no_deps_address_not_assigned/Move.toml: e.g., [addresses] -Std = "0x1" +std = "0x1" Alternatively, you can also define [dev-addresses] and call with the --dev flag diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps_address_not_assigned_with_dev_assignment/Move.exp b/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps_address_not_assigned_with_dev_assignment/Move.exp index 93c1ba97cdf97..ade5b69d90846 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps_address_not_assigned_with_dev_assignment/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/basic_no_deps_address_not_assigned_with_dev_assignment/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/dep_good_digest/Move.exp 
b/third_party/move/tools/move-package/tests/test_sources/resolution/dep_good_digest/Move.exp index 1692ad324087a..3e052a4a24928 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/dep_good_digest/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/dep_good_digest/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/diamond_problem_backflow_resolution/Move.exp b/third_party/move/tools/move-package/tests/test_sources/resolution/diamond_problem_backflow_resolution/Move.exp index cf5e6e183c4b7..ea59fd6ea9361 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/diamond_problem_backflow_resolution/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/diamond_problem_backflow_resolution/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/diamond_problem_no_conflict/Move.exp b/third_party/move/tools/move-package/tests/test_sources/resolution/diamond_problem_no_conflict/Move.exp index a55b459854f19..9ef82bdfb100b 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/diamond_problem_no_conflict/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/diamond_problem_no_conflict/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/multiple_deps_rename/Move.exp b/third_party/move/tools/move-package/tests/test_sources/resolution/multiple_deps_rename/Move.exp index 3e792236e464f..9ec3d07405ed9 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/multiple_deps_rename/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/multiple_deps_rename/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep/Move.exp b/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep/Move.exp index 78e15d8a03c81..c283da45cf4c3 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_assigned_address/Move.exp b/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_assigned_address/Move.exp 
index e1d4eaab354a6..478aaa186c352 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_assigned_address/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_assigned_address/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_multiple_of_same_name/Move.exp b/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_multiple_of_same_name/Move.exp index 0540c070119c5..83b6075587743 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_multiple_of_same_name/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_multiple_of_same_name/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_reassigned_address/Move.exp b/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_reassigned_address/Move.exp index a590010df9ff4..02416ca1cb776 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_reassigned_address/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_reassigned_address/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_unification_across_local_renamings/Move.exp b/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_unification_across_local_renamings/Move.exp index 8dc604a4dc597..9480595537c41 100644 --- a/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_unification_across_local_renamings/Move.exp +++ b/third_party/move/tools/move-package/tests/test_sources/resolution/one_dep_unification_across_local_renamings/Move.exp @@ -14,6 +14,8 @@ ResolutionGraph { fetch_deps_only: false, skip_fetch_latest_git_deps: false, bytecode_version: None, + known_attributes: {}, + skip_attribute_checks: false, }, root_package: SourceManifest { package: PackageInfo { diff --git a/third_party/move/tools/move-package/tests/thread_safety_package_test_sources/Package1/Move.toml b/third_party/move/tools/move-package/tests/thread_safety_package_test_sources/Package1/Move.toml index adb1224f7217a..3fa9d44a66093 100644 --- a/third_party/move/tools/move-package/tests/thread_safety_package_test_sources/Package1/Move.toml +++ b/third_party/move/tools/move-package/tests/thread_safety_package_test_sources/Package1/Move.toml @@ -3,7 +3,7 @@ name = "Package1" version = "0.0.0" [addresses] -Std = "0x1" +std = "0x1" [dependencies] -MoveStdlib = { git = "https://github.com/diem/move.git", subdir = "language/move-stdlib", rev = "98ed299" } +MoveStdlib = { local = "../../../../../move-stdlib" } diff --git a/third_party/move/tools/move-package/tests/thread_safety_package_test_sources/Package2/Move.toml 
b/third_party/move/tools/move-package/tests/thread_safety_package_test_sources/Package2/Move.toml index 779b56687d865..4eb794dd165e6 100644 --- a/third_party/move/tools/move-package/tests/thread_safety_package_test_sources/Package2/Move.toml +++ b/third_party/move/tools/move-package/tests/thread_safety_package_test_sources/Package2/Move.toml @@ -3,7 +3,7 @@ name = "Package2" version = "0.0.0" [addresses] -Std = "0x1" +std = "0x1" [dependencies] -MoveStdlib = { git = "https://github.com/diem/move.git", subdir = "language/move-stdlib", rev = "98ed299" } +MoveStdlib = { local = "../../../../../move-stdlib" } diff --git a/third_party/move/tools/move-unit-test/Cargo.toml b/third_party/move/tools/move-unit-test/Cargo.toml index b236b0b0b9a54..3f54f52d4f463 100644 --- a/third_party/move/tools/move-unit-test/Cargo.toml +++ b/third_party/move/tools/move-unit-test/Cargo.toml @@ -12,7 +12,7 @@ edition = "2021" [dependencies] anyhow = "1.0.52" better_any = "0.1.1" -clap = { version = "4.3.5", features = ["derive"] } +clap = { version = "4.3.9", features = ["derive"] } codespan-reporting = "0.11.1" colored = "2.0.0" evm-exec-utils = { path = "../../evm/exec-utils", optional = true } diff --git a/third_party/move/tools/move-unit-test/src/lib.rs b/third_party/move/tools/move-unit-test/src/lib.rs index 7e44638122a7f..c0b055106f8d4 100644 --- a/third_party/move/tools/move-unit-test/src/lib.rs +++ b/third_party/move/tools/move-unit-test/src/lib.rs @@ -13,7 +13,7 @@ use move_command_line_common::files::verify_and_create_named_address_mapping; use move_compiler::{ self, diagnostics::{self, codes::Severity}, - shared::{self, NumericalAddress}, + shared::{self, known_attributes::KnownAttribute, NumericalAddress}, unit_test::{self, TestPlan}, Compiler, Flags, PASS_CFGIR, }; @@ -161,11 +161,15 @@ impl UnitTestingConfig { ) -> Option { let addresses = verify_and_create_named_address_mapping(self.named_address_values.clone()).ok()?; - let (files, comments_and_compiler_res) = - Compiler::from_files(source_files, deps, addresses) - .set_flags(Flags::testing()) - .run::() - .unwrap(); + let (files, comments_and_compiler_res) = Compiler::from_files( + source_files, + deps, + addresses, + Flags::testing().set_skip_attribute_checks(false), + KnownAttribute::get_all_attribute_names(), + ) + .run::() + .unwrap(); let (_, compiler) = diagnostics::unwrap_or_report_diagnostics(&files, comments_and_compiler_res); diff --git a/third_party/move/tools/move-unit-test/src/test_runner.rs b/third_party/move/tools/move-unit-test/src/test_runner.rs index 08d427bf8f0ab..e1f19d43e4bb5 100644 --- a/third_party/move/tools/move-unit-test/src/test_runner.rs +++ b/third_party/move/tools/move-unit-test/src/test_runner.rs @@ -296,7 +296,7 @@ impl SharedTestingConfig { .into(), ); match session.finish_with_extensions() { - Ok((cs, _, extensions)) => (Ok(cs), Ok(extensions), return_result, test_run_info), + Ok((cs, extensions)) => (Ok(cs), Ok(extensions), return_result, test_run_info), Err(err) => (Err(err.clone()), Err(err), return_result, test_run_info), } } diff --git a/types/src/contract_event.rs b/types/src/contract_event.rs index b91682105d83d..895615ff8a08a 100644 --- a/types/src/contract_event.rs +++ b/types/src/contract_event.rs @@ -7,17 +7,19 @@ use crate::{ event::EventKey, transaction::Version, }; -use anyhow::{Error, Result}; +use anyhow::{bail, Error, Result}; use aptos_crypto_derive::{BCSCryptoHash, CryptoHasher}; -use move_core_types::{language_storage::TypeTag, move_resource::MoveStructType}; +use move_core_types::{ + 
     account_address::AccountAddress, language_storage::TypeTag, move_resource::MoveStructType,
+};
 #[cfg(any(test, feature = "fuzzing"))]
 use proptest_derive::Arbitrary;
 use serde::{Deserialize, Serialize};
-use std::{convert::TryFrom, ops::Deref};
+use std::convert::TryFrom;
 
 /// This trait is used by block executor to abstractly represent an event.
 /// Block executor uses `get_event_data` to get the event data.
-/// Block executor then checks for the occurences of aggregators and aggregatorsnapshots
+/// Block executor then checks for the occurrences of aggregators and aggregatorsnapshots
 /// in the event data, processes them, and calls `update_event_data` to update the event data.
 pub trait ReadWriteEvent {
     /// Returns the event data.
@@ -29,59 +31,109 @@ pub trait ReadWriteEvent {
 /// Support versioning of the data structure.
 #[derive(Hash, Clone, Eq, PartialEq, Serialize, Deserialize, CryptoHasher, BCSCryptoHash)]
 pub enum ContractEvent {
-    V0(ContractEventV0),
+    V1(ContractEventV1),
+    V2(ContractEventV2),
 }
 
 impl ReadWriteEvent for ContractEvent {
     fn get_event_data(&self) -> (EventKey, u64, &TypeTag, &[u8]) {
         match self {
-            ContractEvent::V0(event) => (
+            ContractEvent::V1(event) => (
                 *event.key(),
                 event.sequence_number(),
                 event.type_tag(),
                 event.event_data(),
             ),
+            ContractEvent::V2(event) => (
+                EventKey::new(0, AccountAddress::ZERO),
+                0,
+                event.type_tag(),
+                event.event_data(),
+            ),
         }
     }
 
     fn update_event_data(&mut self, event_data: Vec<u8>) {
         match self {
-            ContractEvent::V0(event) => event.event_data = event_data,
+            ContractEvent::V1(event) => event.event_data = event_data,
+            ContractEvent::V2(event) => event.event_data = event_data,
         }
     }
 }
 
 impl ContractEvent {
-    pub fn new(
+    pub fn new_v1(
         key: EventKey,
         sequence_number: u64,
         type_tag: TypeTag,
         event_data: Vec<u8>,
     ) -> Self {
-        ContractEvent::V0(ContractEventV0::new(
+        ContractEvent::V1(ContractEventV1::new(
             key,
             sequence_number,
             type_tag,
            event_data,
        ))
     }
-}
 
-// Temporary hack to avoid massive changes, it won't work when new variant comes and needs proper
-// dispatch at that time.
-impl Deref for ContractEvent {
-    type Target = ContractEventV0;
+    pub fn new_v2(type_tag: TypeTag, event_data: Vec<u8>) -> Self {
+        ContractEvent::V2(ContractEventV2::new(type_tag, event_data))
+    }
+
+    pub fn event_key(&self) -> Option<&EventKey> {
+        match self {
+            ContractEvent::V1(event) => Some(event.key()),
+            ContractEvent::V2(_event) => None,
+        }
+    }
+
+    pub fn event_data(&self) -> &[u8] {
+        match self {
+            ContractEvent::V1(event) => event.event_data(),
+            ContractEvent::V2(event) => event.event_data(),
+        }
+    }
 
-    fn deref(&self) -> &Self::Target {
+    pub fn type_tag(&self) -> &TypeTag {
+        match self {
+            ContractEvent::V1(event) => &event.type_tag,
+            ContractEvent::V2(event) => &event.type_tag,
+        }
+    }
+
+    pub fn size(&self) -> usize {
         match self {
-            ContractEvent::V0(event) => event,
+            ContractEvent::V1(event) => event.size(),
+            ContractEvent::V2(event) => event.size(),
         }
     }
+
+    pub fn is_v1(&self) -> bool {
+        matches!(self, ContractEvent::V1(_))
+    }
+
+    pub fn is_v2(&self) -> bool {
+        matches!(self, ContractEvent::V2(_))
+    }
+
+    pub fn v1(&self) -> Result<&ContractEventV1> {
+        Ok(match self {
+            ContractEvent::V1(event) => event,
+            ContractEvent::V2(_event) => bail!("This is a module event"),
+        })
+    }
+
+    pub fn v2(&self) -> Result<&ContractEventV2> {
+        Ok(match self {
+            ContractEvent::V1(_event) => bail!("This is a instance event"),
+            ContractEvent::V2(event) => event,
+        })
+    }
 }
 
 /// Entry produced via a call to the `emit_event` builtin.
 #[derive(Hash, Clone, Eq, PartialEq, Serialize, Deserialize, CryptoHasher)]
-pub struct ContractEventV0 {
+pub struct ContractEventV1 {
     /// The unique key that the event was emitted to
     key: EventKey,
     /// The number of messages that have been emitted to the path previously
@@ -93,7 +145,7 @@ pub struct ContractEventV0 {
     event_data: Vec<u8>,
 }
 
-impl ContractEventV0 {
+impl ContractEventV1 {
     pub fn new(
         key: EventKey,
         sequence_number: u64,
@@ -129,14 +181,74 @@ impl ContractEventV0 {
     }
 }
 
+impl std::fmt::Debug for ContractEventV1 {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(
+            f,
+            "ContractEvent {{ key: {:?}, index: {:?}, type: {:?}, event_data: {:?} }}",
+            self.key,
+            self.sequence_number,
+            self.type_tag,
+            hex::encode(&self.event_data)
+        )
+    }
+}
+
+/// Entry produced via a call to the `emit` builtin.
+#[derive(Hash, Clone, Eq, PartialEq, Serialize, Deserialize, CryptoHasher)]
+pub struct ContractEventV2 {
+    /// The type of the data
+    type_tag: TypeTag,
+    /// The data payload of the event
+    #[serde(with = "serde_bytes")]
+    event_data: Vec<u8>,
+}
+
+impl ContractEventV2 {
+    pub fn new(type_tag: TypeTag, event_data: Vec<u8>) -> Self {
+        Self {
+            type_tag,
+            event_data,
+        }
+    }
+
+    pub fn size(&self) -> usize {
+        bcs::to_bytes(&self.type_tag).unwrap().len() + self.event_data.len()
+    }
+
+    pub fn type_tag(&self) -> &TypeTag {
+        &self.type_tag
+    }
+
+    pub fn event_data(&self) -> &[u8] {
+        &self.event_data
+    }
+}
+
+impl std::fmt::Debug for ContractEventV2 {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(
+            f,
+            "ModuleEvent {{ type: {:?}, event_data: {:?} }}",
+            self.type_tag,
+            hex::encode(&self.event_data)
+        )
+    }
+}
+
 impl TryFrom<&ContractEvent> for NewBlockEvent {
     type Error = Error;
 
     fn try_from(event: &ContractEvent) -> Result<Self> {
-        if event.type_tag != TypeTag::Struct(Box::new(Self::struct_tag())) {
-            anyhow::bail!("Expected NewBlockEvent")
+        match event {
+            ContractEvent::V1(event) => {
+                if event.type_tag != TypeTag::Struct(Box::new(Self::struct_tag())) {
+                    bail!("Expected NewBlockEvent")
+                }
+                Self::try_from_bytes(&event.event_data)
+            },
+            ContractEvent::V2(_) => bail!("This is a module event"),
         }
-        Self::try_from_bytes(&event.event_data)
     }
 }
 
@@ -144,10 +256,15 @@ impl TryFrom<&ContractEvent> for NewEpochEvent {
     type Error = Error;
 
     fn try_from(event: &ContractEvent) -> Result<Self> {
-        if event.type_tag != TypeTag::Struct(Box::new(Self::struct_tag())) {
-            anyhow::bail!("Expected NewEpochEvent")
+        match event {
+            ContractEvent::V1(event) => {
+                if event.type_tag != TypeTag::Struct(Box::new(Self::struct_tag())) {
+                    bail!("Expected NewEpochEvent")
+                }
+                Self::try_from_bytes(&event.event_data)
+            },
+            ContractEvent::V2(_) => bail!("This is a module event"),
         }
-        Self::try_from_bytes(&event.event_data)
     }
 }
 
@@ -155,10 +272,15 @@ impl TryFrom<&ContractEvent> for WithdrawEvent {
     type Error = Error;
 
     fn try_from(event: &ContractEvent) -> Result<Self> {
-        if event.type_tag != TypeTag::Struct(Box::new(WithdrawEvent::struct_tag())) {
-            anyhow::bail!("Expected Sent Payment")
+        match event {
+            ContractEvent::V1(event) => {
+                if event.type_tag != TypeTag::Struct(Box::new(Self::struct_tag())) {
+                    bail!("Expected Sent Payment")
+                }
+                Self::try_from_bytes(&event.event_data)
+            },
+            ContractEvent::V2(_) => bail!("This is a module event"),
         }
-        Self::try_from_bytes(&event.event_data)
     }
 }
 
@@ -166,39 +288,42 @@ impl TryFrom<&ContractEvent> for DepositEvent {
     type Error = Error;
 
     fn try_from(event: &ContractEvent) -> Result<Self> {
-        if event.type_tag != TypeTag::Struct(Box::new(DepositEvent::struct_tag())) {
-            anyhow::bail!("Expected Received Payment")
+        match event {
+            ContractEvent::V1(event) => {
+                if event.type_tag != TypeTag::Struct(Box::new(Self::struct_tag())) {
+                    bail!("Expected Received Payment")
+                }
+                Self::try_from_bytes(&event.event_data)
+            },
+            ContractEvent::V2(_) => bail!("This is a module event"),
         }
-        Self::try_from_bytes(&event.event_data)
     }
 }
 
 impl std::fmt::Debug for ContractEvent {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        write!(
-            f,
-            "ContractEvent {{ key: {:?}, index: {:?}, type: {:?}, event_data: {:?} }}",
-            self.key,
-            self.sequence_number,
-            self.type_tag,
-            hex::encode(&self.event_data)
-        )
+        match self {
+            ContractEvent::V1(event) => event.fmt(f),
+            ContractEvent::V2(event) => event.fmt(f),
+        }
     }
 }
 
 impl std::fmt::Display for ContractEvent {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         if let Ok(payload) = WithdrawEvent::try_from(self) {
+            let v1 = self.v1().unwrap();
             write!(
                 f,
                 "ContractEvent {{ key: {}, index: {:?}, type: {:?}, event_data: {:?} }}",
-                self.key, self.sequence_number, self.type_tag, payload,
+                v1.key, v1.sequence_number, v1.type_tag, payload,
             )
         } else if let Ok(payload) = DepositEvent::try_from(self) {
+            let v1 = self.v1().unwrap();
             write!(
                 f,
                 "ContractEvent {{ key: {}, index: {:?}, type: {:?}, event_data: {:?} }}",
-                self.key, self.sequence_number, self.type_tag, payload,
+                v1.key, v1.sequence_number, v1.type_tag, payload,
             )
         } else {
             write!(f, "{:?}", self)
@@ -209,7 +334,7 @@ impl std::fmt::Display for ContractEvent {
 #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize)]
 #[cfg_attr(any(test, feature = "fuzzing"), derive(Arbitrary))]
 pub struct EventWithVersion {
-    pub transaction_version: u64, // Should be `Version`
+    pub transaction_version: Version,
     pub event: ContractEvent,
 }
diff --git a/types/src/on_chain_config/aptos_features.rs b/types/src/on_chain_config/aptos_features.rs
index 05a3c51f23d2c..6f9e61e50f4f5 100644
--- a/types/src/on_chain_config/aptos_features.rs
+++ b/types/src/on_chain_config/aptos_features.rs
@@ -32,6 +32,8 @@ pub enum FeatureFlag {
     GAS_PAYER_ENABLED = 22,
     APTOS_UNIQUE_IDENTIFIERS = 23,
     BULLETPROOFS_NATIVES = 24,
+    SIGNER_NATIVE_FORMAT_FIX = 25,
+    MODULE_EVENT = 26,
 }
 
 /// Representation of features on chain as a bitset.
@@ -44,7 +46,7 @@ pub struct Features {
 impl Default for Features {
     fn default() -> Self {
         Features {
-            features: vec![0b00100000, 0b00100000, 0b00000100],
+            features: vec![0b00100000, 0b00100000, 0b00001100],
         }
     }
 }
@@ -69,6 +71,10 @@ impl Features {
     pub fn is_storage_slot_metadata_enabled(&self) -> bool {
         self.is_enabled(FeatureFlag::STORAGE_SLOT_METADATA)
     }
+
+    pub fn is_module_event_enabled(&self) -> bool {
+        self.is_enabled(FeatureFlag::MODULE_EVENT)
+    }
 }
 
 // --------------------------------------------------------------------------------------------
diff --git a/types/src/proof/unit_tests/proof_test.rs b/types/src/proof/unit_tests/proof_test.rs
index 5f9ebeeb422d3..b1f22f576b7b8 100644
--- a/types/src/proof/unit_tests/proof_test.rs
+++ b/types/src/proof/unit_tests/proof_test.rs
@@ -600,5 +600,5 @@ fn create_transaction_info(
 fn create_event() -> ContractEvent {
     let event_key = EventKey::new(0, AccountAddress::random());
-    ContractEvent::new(event_key, 0, TypeTag::Bool, bcs::to_bytes(&0).unwrap())
+    ContractEvent::new_v1(event_key, 0, TypeTag::Bool, bcs::to_bytes(&0).unwrap())
 }
diff --git a/types/src/proptest_types.rs b/types/src/proptest_types.rs
index e86c9d6b0422f..9b55dc1284141 100644
--- a/types/src/proptest_types.rs
+++ b/types/src/proptest_types.rs
@@ -622,6 +622,7 @@ pub struct ContractEventGen {
     type_tag: TypeTag,
     payload: Vec<u8>,
     use_sent_key: bool,
+    use_event_v2: bool,
 }
 
 impl ContractEventGen {
@@ -631,16 +632,20 @@ impl ContractEventGen {
         universe: &mut AccountInfoUniverse,
     ) -> ContractEvent {
         let account_info = universe.get_account_info_mut(account_index);
-        let event_handle = if self.use_sent_key {
-            &mut account_info.sent_event_handle
+        if self.use_event_v2 {
+            ContractEvent::new_v2(self.type_tag, self.payload)
         } else {
-            &mut account_info.received_event_handle
-        };
-        let sequence_number = event_handle.count();
-        *event_handle.count_mut() += 1;
-        let event_key = event_handle.key();
+            let event_handle = if self.use_sent_key {
+                &mut account_info.sent_event_handle
+            } else {
+                &mut account_info.received_event_handle
+            };
+            let sequence_number = event_handle.count();
+            *event_handle.count_mut() += 1;
+            let event_key = event_handle.key();
 
-        ContractEvent::new(*event_key, sequence_number, self.type_tag, self.payload)
+            ContractEvent::new_v1(*event_key, sequence_number, self.type_tag, self.payload)
+        }
     }
 }
 
@@ -726,7 +731,7 @@ impl ContractEvent {
             vec(any::<u8>(), 1..10),
         )
             .prop_map(|(event_key, seq_num, type_tag, event_data)| {
-                ContractEvent::new(event_key, seq_num, type_tag, event_data)
+                ContractEvent::new_v1(event_key, seq_num, type_tag, event_data)
             })
     }
 }
diff --git a/types/src/state_store/state_value.rs b/types/src/state_store/state_value.rs
index f7c38e7132406..0d2c6fefd6149 100644
--- a/types/src/state_store/state_value.rs
+++ b/types/src/state_store/state_value.rs
@@ -10,7 +10,6 @@ use aptos_crypto::{
     HashValue,
 };
 use aptos_crypto_derive::{BCSCryptoHash, CryptoHasher};
-use move_core_types::account_address::AccountAddress;
 use once_cell::sync::OnceCell;
 #[cfg(any(test, feature = "fuzzing"))]
 use proptest::{arbitrary::Arbitrary, prelude::*};
@@ -31,24 +30,24 @@ use serde::{Deserialize, Deserializer, Serialize, Serializer};
 )]
 pub enum StateValueMetadata {
     V0 {
-        payer: AccountAddress,
         deposit: u64,
         creation_time_usecs: u64,
     },
 }
 
 impl StateValueMetadata {
-    pub fn new(
-        payer: AccountAddress,
-        deposit: u64,
-        creation_time_usecs: &CurrentTimeMicroseconds,
-    ) -> Self {
+    pub fn new(deposit: u64, creation_time_usecs: &CurrentTimeMicroseconds) -> Self {
         Self::V0 {
-            payer,
             deposit,
             creation_time_usecs: creation_time_usecs.microseconds,
         }
     }
+
+    pub fn set_deposit(&mut self, amount: u64) {
+        match self {
+            StateValueMetadata::V0 { deposit, .. } => *deposit = amount,
+        }
+    }
 }
 
 #[derive(Clone, Debug, CryptoHasher)]
diff --git a/types/src/unit_tests/contract_event_test.rs b/types/src/unit_tests/contract_event_test.rs
index 79349f5a62531..c05f1c39ac156 100644
--- a/types/src/unit_tests/contract_event_test.rs
+++ b/types/src/unit_tests/contract_event_test.rs
@@ -15,9 +15,18 @@ proptest! {
 }
 
 #[test]
-fn test_event_json_serialize() {
+fn test_event_v1_json_serialize() {
     let event_key = EventKey::random();
-    let contract_event = ContractEvent::new(event_key, 0, TypeTag::Address, vec![0u8]);
+    let contract_event = ContractEvent::new_v1(event_key, 0, TypeTag::Address, vec![0u8]);
     let contract_json =
         serde_json::to_string(&contract_event).expect("event serialize to json should succeed.");
     let contract_event2: ContractEvent = serde_json::from_str(contract_json.as_str()).unwrap();
     assert_eq!(contract_event, contract_event2)
 }
+
+#[test]
+fn test_event_v2_json_serialize() {
+    let contract_event = ContractEvent::new_v2(TypeTag::Address, vec![0u8]);
+    let contract_json =
+        serde_json::to_string(&contract_event).expect("event serialize to json should succeed.");
+    let contract_event2: ContractEvent = serde_json::from_str(contract_json.as_str()).unwrap();
+    assert_eq!(contract_event, contract_event2)
+}
diff --git a/types/src/write_set.rs b/types/src/write_set.rs
index a6083da1f17b7..23525ac8256b5 100644
--- a/types/src/write_set.rs
+++ b/types/src/write_set.rs
@@ -287,6 +287,7 @@ impl WriteSetV0 {
 /// This is separate because it goes through validation before becoming an immutable `WriteSet`.
 #[derive(Clone, Debug, Default, Eq, Hash, PartialEq, Serialize, Deserialize)]
 pub struct WriteSetMut {
+    // TODO: Change to HashMap with a stable iterator for serialization.
     write_set: BTreeMap<StateKey, WriteOp>,
 }
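A minimal usage sketch of the V1/V2 `ContractEvent` API introduced above, for reviewers; it is not part of the diff, and the module paths plus the `bcs` and `anyhow` dependencies are assumed from elsewhere in this tree:

    // Sketch only: crate paths and dependencies assumed from this repository.
    use aptos_types::{contract_event::ContractEvent, event::EventKey};
    use move_core_types::{account_address::AccountAddress, language_storage::TypeTag};

    fn main() -> anyhow::Result<()> {
        // Instance (V1) event: addressed by an event key plus a sequence number.
        let key = EventKey::new(0, AccountAddress::ZERO);
        let v1 = ContractEvent::new_v1(key, 0, TypeTag::U64, bcs::to_bytes(&42u64)?);
        assert!(v1.is_v1() && v1.event_key().is_some());

        // Module (V2) event: only a type tag and a BCS payload, no key or sequence number.
        let v2 = ContractEvent::new_v2(TypeTag::Bool, bcs::to_bytes(&true)?);
        assert!(v2.is_v2() && v2.event_key().is_none());

        // v1()/v2() return an error when called on the other variant.
        assert!(v2.v1().is_err());
        println!("v1 size = {}, v2 size = {}", v1.size(), v2.size());
        Ok(())
    }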

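Similarly, a hedged sketch of gating on the new `MODULE_EVENT` feature flag; in production the `Features` value is read from on-chain config rather than constructed with `Default`:

    // Sketch only: path assumed to be the aptos-types re-export of aptos_features.
    use aptos_types::on_chain_config::Features;

    fn main() {
        // Assumed to come from the on-chain Features config in a real node.
        let features = Features::default();

        // Convenience accessor added above; equivalent to
        // features.is_enabled(FeatureFlag::MODULE_EVENT).
        if features.is_module_event_enabled() {
            println!("module (V2) events are enabled");
        } else {
            println!("module (V2) events are gated off; only V1 events can be emitted");
        }
    }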
zP7B~2oM`Xa-JtN3yXA^R$u^X(6l(8lgM$9(uy(sdvhf_PlLp*XY~sZ8Rs`1Cqqbr zjncX7FJUgZ%ttDO+c55v$NWwSZCOs8%A->GKMyU$O)QG4okqvT_&Fcx; zuqw3)l&T;HR_r~HZQb!?EzF7$Oqq>f!xd56;m?%RdWX-ZG;ECXbrJ(EXKXJ`9$0br zXW;-@@(l`E-m8ujlYqIxI#7K{f*9^D+M`%1^IPYCO?yAnmIz zaeR6%hm$l?g@*U59Nc9X<0QUBOV+Ei<3rk;_0_NG?s2bsdUX8tnXyP|16nSO-W;V1 z8o-9q@dD6ih|<%Xx1tdFYs#d}%C)rgq`caS<`nJEZv}Xw7CG8D@vcc*B+3S}B`;dY zMWo3fiSyXb|Cmj)iy4#q)FHK#$BFY1wIoeQau6L!^}=LFXd3NuKyn`r20u*`8gqcpFCcGm;HrxkFqa=k-$osJ_ zUM7Sf>!Cxh&^ofF{bDf4r#UYp0O%7VaPZjUg~g0dC?Qyp$14-$r^R!6>;Y14_DbWe zZq2}C5mZcQo_bjCJ}fa?&FSNRIv)G_))+&ymx8EvwpJ@UFKLm9J|T%ux4wV=_vf8t zSYEK&s4tR}d^j-C+gB!$3Fs&7vo}9WW(am-0n7r%f<@iccf8b{Ct4; zDAF9hDs)`o<7bMGfovWJ>TNa0WCJUx{f!}v&1hHl;`;!LMq93J7*u9m*wGNX|1Qk2&P%Ct<{VQx}zP|3jBE}=Gm`YR=9VX zMe#B!XCW$wAGEDg+FD}@uT9EvpT^>;;*7dcoPEYZN=cw!KL_m2`%6oJ3gkc8^=7pU zD6>3hAkeq%KFsh0QSFf<`HCm~1MdNKh^gTsjTC)%8^%1>$*-FPtKMuq<4<2RuR@x*hlf-`3*|8X}A@@Zb{hMAY|N;mYS=7%*}< zmWnG&Km(&#qyrT;H|jCL7$uAio6A%5`F>=f32*w_5>RB7Q}wA)=>g?btx}+tkpksR zU!_``EYQDb0_(C9jB5HMET~N{p=b0kc?JIGWSNVZACrCj_JL1%LZu$hxsHKdn^wwG!$PV9gfex`|kU#T7RQ&-z*p9 z&wN_0;C==7F1X)V5_w&=fo4e{wi2C}CpvpUGZH2>tBMaEE#potSti3qWy3q@wKd!F zv#Da60I!tIzIR%=P4e67tgR}-6!)$sN(arqYjOgTQt9^<3W~}?VN$Ny(uSW0a zRvRHF&}Ea<2^*0~!syZ?rg67kWIOd{@3dHSkPFHBCC8HcH zQyNR+Yr~15vYux{_o5kC4q^*iuPCMPAbOmAWWoNc^l-@EYe7OL3pleM7ISkOVS!%= z7_R(oONGi1fCT*j_gA=PtG;jzd6yIs1>jLk*C$yMt8zJL4=-~4~-ETW#8p#cB@01tF; zYh`XfcpiLqst8LNf7&az8ozCVo{}W!K&>wq3g#me% z2uZjpw7TmHb3!IlnqQ7DW}l*%bqQZdP_E16r7Zb z8~MzVHM7O4i6GMY%Fn>f^NYziN1xA*=i?7i$IpX0*jm-n?or~21L;1ST}|hG6|WK% zQ-~y&s=i(k*Gt4I)Q9z$O2h91XX$6&^#>u3blauF|rm?tXUN@w1GNq z{Q`if3%jfG&1S`g6iwz+e@?wfv&WwszD@i9Gu#2Rr|TK7K$N=% z0CbUCR@FRS0}n$~!EDR)HMgODK+9nxJNRZYf{6Mi;M%AY_Qt%oqb@5&2n7KNhQ3Li z#PfvD*}R|0sY^NTJH?lK5Lv0JOyc4GLhxJQarISdb_c0|n$k>~CCronxoIY%yjVru z`pNvYQJM|NsHhqMgSBuE>CyVH^{SzXySqu-M? zXA*lVmE8)))H=CLZW}noETM%IO{RtyW>1}C>|FqM0t_7rjqPlWRR{0``b22dyfr@! 
z>W-kiy#;s_Nz^tvJu^L?u}s`1E(8f0BoLdBK#T}M0wD^4xBvkbcee!=hu|!(ySOb5 zK^IxvWpNf;;J($}GnoMUefRnQ``o~ESC{LlbE=MX4f?eE&HU%SEl%&~%oltfHe#rK z>XS($8$=p@IaG0N*~=&EtD1i8qz)R+&)m^>^7g(@x9nN@YEbu`-urBCHgo^gzfqH% zH|oE_!rmqBzPEo_*QCYz);qmzhhOw_B$(A zjkRg-Gq~xVQQqOwwy*ZS6pj74*N&yzQXTF()?eNI)wsha?(aOw_d330*5+y38aB|U z76!DM=xzIWYG&_-WCt)9k}-zk3M> z&*p^InQ7lavL<-z&75vguNusn(YkTarOC7AM?Vf}uGxJcsXyQH^b_4=q5qVZpH^*> zdLEy2b%fi;eOEKC7FY&JYJG^X)9g67>fz;KbMDl-bNpO%;g)_I&$jsUwsp&4b1N^+ z?l9?O>S+1yk-O#|I(XR1rFF!(mWyu9**H4X?f1ds)lFxup3&>*s#|Keku7?JduY*`Eguo-O2fNw+D3{^tnlc_g`+h9J%IwjTi(nL1m*p?a~Vt)lm6D$9$sD z;>-NBef-tRhvW;KTBVeK3MV^CNIMn~lM0LkEk;hi1+}$&&aBAe(sq+p- zH5)YAZR+9X*Mr`@h&-9ops2WudVOd`m(cZp^hmrrnb*%^AOAw~{ImSzr)dhG*jgJ0 z+uA?_9KAib$$GceZR&SP4A&mJdb>l5sS$$*7q%Y!xhJ1mRN*)+qhrW`yhaHviya4P z;#N<~HOwBI_BwnA^@n#uasM$T{PQ&A~ zydNHsY#(Bk-@WCXLYH%XHp?U{FGmOMwM&y$T-&_rLG+b}jrZun}i zX-ktu$FoK~>>qF}h-y+4KDE!2`h7+^yrHFavwG~Z2%dPRS+`8}$k}l|ZO*nh9B^}x ztLu=92Q*#o77Jz-ciHc}s31v`xM5G7FKtsC$GF`dCc3nCfNz5<+gnsp<9_EGlB?Wm zj~kSp@pAI=`b|ZNQ6al?j>+=35B{^UBq%{&&;I0zk2!CCtKYU=5?wg&(Y+5Lg|Y^n z?KgeUZ4CIW_wp{ALJv+{-*0lZ?9YQ;&MKoT{@nec_uvMK#4Ey-wabs} z>iX=Tmv;_(Ih8GP=y6eR-(f<@WA}5Z*>``RKhq$(^!P>l4*nB7?CMVpL;hJls2BZ6z48A`mgW8$0xTXI2_+R3J z^6TH&wztLp32h9cdL+j-Y&`o{QSA7N<*R28ZJM#=Wd26~Gc9UcUaX&UxmW4#O!3Is z=X2^5FCWnA%kIqi%F)~CU8n3m-o<87Nxj%y(Ei_(J?qW3wt5+TTyXr#{L?=Nc7Cgj zdb@Vz9>IE#s@aq3`g=FbwefzCa=m-OjqIDn3y$6HOTX0i`+KLdbHf%Z`q`|N`5oQ0 zM-yc2{IO+r$+|AtZ<}=6wW>$JZQG^$uRobmZ~Z@&&n`sOak_kbLY3F2iSc=>aswYO zJ@Tdb&GK9I3$NVUG;j8>0d+i`y3T+0_kF76jRUoPUGqZL?F?}KXV{R;Ifi6$<)6dq ztKQSo_qtJg1^X3g%?w`-_j z;FzZ`mnDqzS{8A?XhP(oh^trCE_=rLzWw-4Gv#8h0&!H&*6WL>I$FGYtyw-W^ut%( z@f(6cJ?A^rnsk!);_8ZZzw(6g@$GpRE_u23?v^>N%04Aev_AS5L*J{J-MYL>)Lo2g zALg>^>|fm-CMTaPeNKOxapm;>${jt*Ml zk>-#t&)>g0ARr((?(~pP|IQeEq50}oMQ*8=S3bNEw7J$}(If98E7C5E9WrD?w_yF+lgnkq5K8X|PhgR(C<9qwi=s0 zyehfNq+L~_wAC%ItZdnD=f@T8S}DI2>1M}HIpTY^mEp+eA;pakZ@aX!_2MPzj&tV@ zE((+%a5`E%$RX)qZOg^!mW!7MxZ2m-ceM7c9t}mm_8vJPGHkQUPeH-q9UCcY+5G9b z!f#uOz*{x1S*MRZd!~Hu^k?C?#0jJO-B^ENRJV%J>+2@$IM(vkh@jKip6$;a>Rj0J zgXFlDzFBwp@c|iA278@YW%shUrR(H`FEL+)c^z!_Kff>edHKw7)2}&+f(NArjkI^X z)T@5{s38^GA3bffdXv|{oXSUbo6eP-?fYOhJXKT`q!m9l_Ase z206T)m6_Ntc)%6$r-`e?ZBsw&H*}iWFg@*MRFj{FUwX83%Zce{8}3N&wdv2a3$?7b z4|nZ-Tjg`9o@F!oal1W1LA&P0EiTdy{&08Z(ZT-L*4`?3GIs2K&x6X>SLUz2Lj_0O zxUx+k9x^@hsddcleDUl*M*iNjZ_3r5sAf|~T2DPtl_%+Pqu!`?CkHiq_*al)?PtzI zMqFQ%?X&ptmpkcggNmjfxW4Vdj>@mRs^>x%KTopJ!NSe|_(v%jU8xZzup!>3u%y@j z7#S*@^-0+Tq3nuMr+B6uDzkWD3zH2L)X>AA1Y^mJCx*RsDB7o`P`2;Js^vGT zv!@jWy4P z#I_}xs_lreMteS@IPhU!+0hb6gA+;SOpFA&z$73~FSp^*mbMl=zNdu+)IQr;Kvh5A zi@0U>L~XxN?%+sXN(T{MB5$I-oA#lhx7Jq-BM(A9%7p}L?es*wG%W4$1pc&$C#a1m zf;zN@C#XwX@C5Z}Esq~SvUPO#APMw#UVP{luJ7)PFFT!s&Qa&2b9UA{yXxF@e!5yZ zf1LnoswI4A6Nil**jj=Te)IrdGUyPeM9{|%1^+VGdxGI31QDYJ=vo1uGN|!~8C>E* z3aAN&K4RpBED#&E1mHHsB84CT@sJAH;3pR_#sFXffOQZP0I(#$evooSQVKSQ@CQo+ zd}6AI3!`8pKnTt60$31WU#Oum7bIk|g#|rIjN;K4rm`>~2GKx(D+Zjw6ysOjHP?Iaj$SkF^OG*k$B*aEc8a~a<4!}SU)K5#xDvGkxatm?_ z?iSin{`K>sE1t#6SOcc8x z`lgkn_o1`%i^?j5!~!{|zFz>_-7fgCW^@q1SQ)69pQ0o?J|$I0pMZ&1QLJ}&IN$Qxr|v= z8#)k30C%|1hg`5gGMMv&aSkDA6^wiE%L8S6p#Uu*o}^(|#bU1YGQQ4|v9`lDy;-wF zX3|1Ow~V!#(V{%CC0VMx9~ez|FEAN{b~Us$bW96LFocA6Ofz(dN=!ko!VHG+wCJdqs3aef zOF?;lMs^9Y>2As|yKiQ8c2+54G9fQFKex=-VU}ju!2&%~HTxP>Ay(H5yvx{X82et& zv?nYsIW~zg+`yV|Zyyq!Y)E5#ed4((%r>O}x~0fU)@K%$7nH%0JIRqT<8B;RmmlVs z)G?mR(EGa#$u$2yKjWyY3{2WsG>WO5gDE|e&e$rZ4#s)pP~90Ak+j&57(-f8TwGdo zTx@GUrf0EW^gb9IU(&l}<|CGi&Mhp|=M@%o#~g#TBtT&Xkn{QOGQI~4a9}P|PB6t_ zBbIm?Z8pQi5v)VN+5>i9ccGTHfgW~PHnjJa^I)ujSRf^8?U9hNP^5uT3C4IdtgnfM 
zh@(cFKt8dmgv|?|FeFPX=P+(Q?BEbLtVJ-Ko(eCh35FcWI(FeNn>DutIAU?1iA*>GrLv$1PD2Wd( zr-iLxhJuEL377<@4a5z5*DmA>HW-QWc~V8#{3J@}@gX24E6Co;R-FfvPsDN((+>0v zV>ih)QpscV+tXsPr%-~-Y0`NxTMMR^ z@grz2CG1)d<@iEa`d|hmfNlsQmA2j%#7c$`Mpp%)v<@a$Ni3LH$EIqgGhaYMqdH@{ z$*nn4E7%UGQ7dE+BB`zwY5^>S`y)0$WI}ggQ`!l&;FDSiP#!;2LX5l$*y_}#>#@}# z#=k`x=-@Xio)63GG%*zgBS@qhH84}fm^fxe1U3LPu%ZO})|f^x_pH;n!+6y9ahVzL zU0e@hMC!YI$+&)>uP0+l>f1VydGdW+FPLEdK3_7+zK`n-YYyMXC3E!qxWraA5KNeG zkf9gqz*Yf_JJBkvBW96ke7U6*DB)WoqNg8O4uMi)&s`x7Ho`L)Ns+@uO7$3FX@p=> zv&Iy>XdUb-3q!FIEJRoo0)-TIA*5Jt;h6nq10zJiFrF;p>rj+;TTB8hHW;-fHgyGX zOJSc^<|<_ENnpO(d&5KO${0??!l&R-T?H61kpU<~(jJTnVAzjfEC6FxViGg6680l_ zGS(C@THO*B8$@!6#)=LjE*+e(3}#k4nsy;(9tgJW1KTtNzy!*$qR}8LVv(EYSEF;p z>i?ro;p@U-j?AoBQ>S1-6IixOYb07=`%KMO2Mgk(guhT;^d9PGN_?JwM=IH-2$sE5 zoT3zmlxqYQuQXcKa4i66HL(J_IuahtoTB+Wv=!tDey}ITQ{tz>PmP}j#0iWBi!4FV z*bq1)wiWXPjf4~|+_plvj(}t<0z9pW%Z`9fdjc~Z2&8l*O#l!9eM083 z{3B+?Br@2iVLjj@lf%}h0>GFy#(|Mf)+rG1VeLk0{D4^-#{3O2ECyWO0_Z+Nc>wc= zr6U*=C5Fae3{;30O_Bd1xWq6tikFnaevr98ea*K#FrkvoL29kwCb%dDozPaW3W?{C~VaX-A zW&!r$**WRud1c1!+bqcX$Hi#A%Ul~*R2Ef`RFYm$3Imc%U9<^>73q0p6=vCKl88oT zCNz=QZ)_i+HdY1gVEj2k#3vgVR9dE~xiGC`YHwOT(_mJyv&Xj)^U?17^86Z+^xsA* zFUhSDfiquPlO}RwQf0B7V@5ETmCiZOZRPa9CHhD%W^@B^n=CEZTsv#zRI}LO0USyW?E!Lvi&|2b6@jJ%>An~ajdR3X-$oeFO@SBH8Q;* zE018u#uaJoikgu<%p#ceW6ekw<^x}<`L0Cg|8WwQ+Wlw7R>qDnUoxduKbnW7((Jxv zFszIDD+np5Ff(=~V`f@5fiG&(U5qQO2h)CGL196*5$*bqw~-L~k6>&Je4(C(?Nc|2 z%pHcGxGxWZr-0A7y(BVk0Cc{{hxP?{m&p78(`&In-5*dLz}OhVW&*oqMh}t?jHlJ6 zfbkmr>ImxMgepK#U(kTYp;1GCYzeyi9k(gV4jui{8}C{J4LM6X=Hv9g8KsBrbsM>c`T|%EDEu3ibY~s zn8zZvTdVVeAuA9k0uoDUNlGLpU)fTLRXweMv8gVVi%>2MsRs%`h9j;T*7y(?=#Me3 z7Pt@UCzaEPDU^#~NS0XQmT13^3A3`Mm^5P2eAqFkfLVjs0P1E|J6PEZ6!Br;oRU~s zGbKu8Fo#Y0gt7StgitQgiWM;Pg_5Nx8Mqq=6A4LS^L@!uvyx$J5BR1ObBb~yS^G-# zLPy|gKs@n->D53OBMXG7tw5L)3&eqGAkIqz)u59!fm*GkwuE*jm>QuUap*|Hz&MO| zOT^64$FJo^^a%ujHQ&0YU@*0s7Oob>(jP=m#NxZkUmYZq{eneRewPL6m35BlWUoT2 z_=^38<8>@1lyzgUFw?0okWL^{s2PJq0UfpgF$^2^)kqW&B_L5eVkusSFe}8d#?J?p*|Sx7u)}*TEyBE*_c0l~oS`*#U6n)W@T{KOQ|A;E~%9kDkP@S0g;~8aLtb@&lXl zcm?jBJYHcl_!TwBqqqegB|!)%wZ@~YWm6u15e0*b;>A=j3^4OVl~gFr6D^^_oOq(8 zRJaq5zl?;_%PC(yPrL&6EQrNl$)s6DwT;G?bTt(t=Hc!EPr({04tTS8E!8$2LE?23 zl?2PW)OyOdJtg9AVA5@*I>IC$kJ?0`8>H)~R9BvGGnJl!ll?6eg_m?IMY-~1+o)bJ zOu3!P$MZMF{X10*_YSHQ@Ol@?OS*!J>BE!mruz2g3HMMHn3B@HR6jg7QvDHiA2k5a z{nSr*9$*q5q|if>LpY-6N&ldRfK+MUs(nQJ$kfQL5$UlR5nWqlw90Ii8IhHom6FvV zr&W4#PDFZ2de@xDu92y&a_OFQYD8vaR@bcVS+Q9?(mSO0$nD*;cUkYA*16W{-7_MA zfopMEFOz72(vxEri3LT70C-EZI9;a%jEj&3@PHQ#9*gk9={v#t3Hq+!g9HW46|!jn zSCyvN0*MyX0SqHbw9Jiz7Njbd5fcIf*2WM*fLNeJiJH_zMhq0gE+`EW7Y8&XNKD?w zdP%g*MTFcEH$|w14qsCIWcVrYv%pV@#0ruZJ`}jz!2SpNh06fmxI7}C3J^U^otPKf zb_-LjK*`cv5|bZ^h!lhv0)50ajP;Zf!iXbX<1ehoaCb_CtW98FV|1A36!)D!#;mv>vpSL@Sc$mD)J&g2_)#r+PK zEp~f~$%rBH#JC+Y7Kg$lN+d*8NR3D)xP{`n4n8WN4!BifGvt#|CD%IS+Q<@5KI%!r zP`H4E89zj5yU`5_l#4=1G?WDU6E<=-4QW4Vyo933FtGEO+#9i<0)Yh?PGh4b7N&Fp z%o+45lH;z)0x58E!KKG~G5Zz>@- zlJd|{G6+Zu2YGW#jr0ziiNuCb+RSn!lr}7QQ>2F^YUY88G4*BYsl&Y;PKCIB6*84) zMuE6kvf4;Qvv*+s0~t-1rDx>fP9`@$8@Dt0MKx~Gz9-I=q2(n7)x@eY%!K8KrE)0kG|W~pj;=4M%b^Gz;5ooo0JvaCk= zHIuMKtId;;qU<@3KHPW_>+Qw?%VH-IOZMA3jInjIAdVc>f0N?7&hh;32xb(p z>wm}M3Ud1I7$Cj5vmmQ)B9)dG73EcEzwt$UOXBrSADCaXt4X+V@{?0MIq~4;k&T%f zU1U1kU`prWq?wLFH0H6H(+VbYb`;a|$;F>7iBdhRX}FHg$<5BoDpgbmU}l{4c!*b-{QV}~8b2aZ3?i0k_w2XZYntGrp(=wq{Hvg2T9DuY>jX2V|{ zm0OiwHKXRGmvqOfs2-8t*(K!AgXwXmSxh>vx-K;no7IYi7vH4f+QE!dbzW$Isz-V$ zH!P*(X6O%5)-?2FHWh zf-~zbyYEA}rdxdl0XMsV{S)5Ou-|Jy;b{VtFz)L5PB&NHr@?~`zy zt@|!7RMVO}QTt|H>Wk7#$iXNB&&XEC0=8=SNtYI3GY7KinPGh9%Wb7$msm5%ovB``lzvcy# 
z+M13ct4~?8OH1vIFs=xKMrEbuAkg-kAQKd{Fiwdy!bYSreX(cBO4B#P?;|3o#OOnE zaG9IMG9IxJS$-gk8r4zP7)s1y+y4R=Vy<@BN}0{OUz6Z?(}%8WUw2B%mx z+m_jRGRK-lc)-3pT2^x0IUIUOSkrJsom+}ol3pe(Ed*l~Iecb0%Km@8FZ%!B-lyhW zlSK{QvH#|u6X%wuk*5EcW$pjPjn)6nwbFlGRmlqq`;eJcSdjHY{<{A<_~-op*ZTn( z-`*}MuB{<9Ei^eHHY7CKkQUl8Eexz)NpT4>zOBvofpICZh6J!>8jApP5c_A&@TNEU z8LJ?W*qQu{j0*w@&J~$IP^Tsr&J{U7FO$o|Y*PGdR0!6p}v>*(rXF)`D{BqxO=Ven*pv4|b9A8_<-)){+|r!#~^ zv+R74A&F_BA<-eRVTQVdNsIw*dZ1K#j%L(@*wkmH7Zenh=`*tR)i_fY&Rm&4dqY@U zbhIHXDJm{DkzuHdPY#WaN{lpw*G85lNW}2q;WpmvPETK6kOe-!vL4_)OV4CD>x>i0 z&Md>4`PB+zE0U8-DrKU8P%Aw{xD}Ron{qQwJoDLlWEa7OT7)u zdN8D=#3e_E8#xxM+YA=H4pA}5F^vpne2Ue%qDBk~k)CJ9+gQDM8E3ndAu2X8IU*t| zEXn}BL~cy%qY6sPb8>PsbAh>VgToDsar*0!5WI^~;o$Zo3pvJ+(Av;|>nH}7WqAF{ z`V_M1OZ8;u(`V)8fSsT^*W4`QD$*NbqELext>v<28|uk6l+Bjxt$Kp(m|B^hmsi*) zJIlE2EDf=740K5g3yBX2jf##+>Zpw^)RX-d6U#_m8&{CMfh{XMBq`*Z;j&F4g@u!{ zYL1j~&Doi!CCrniiD*-uluYEJ@{GLP(jM7a#`S0{H4$LXP;<_)r837jVEr^>CpE6> z23)C$@n~9FNDRYQ8XbaBNJxtVhikj!kZ6nzH#}^3V{UvHoR}NevO=siJ&Td*8RUp8 z4|wI4VNx?Bs1ZnN4G~cW%teMl6l)(#->E&fT3QNG!kcFx+>B_Q=FwmAHV}}GxzpR7lB}b-9nUb*l%1p3>CBit^pf@BZ zq8ms(c^hUbC2n&^ll&GZ0 z@Pv?*v`&VEI2*1Djd+~C9@~#bz^^sLMzH+7%)pR7GJ+4b*VB#t-Xk4F7vxniU@w5{ zYU94ZOkZX}RCgMgz63iro4&@)ttRX;=fP!oq^rS#o?z}|sEmxa#;uF1w|P=#Y|)zU z(ik@p3qx{36b6Uo_O&Ea0UqYL8#g>RGMwRt__##ORdzHx_TpRQ*| zngJX2xb87-r7ot`{o~9mxy+c=UXRVmxT)G27+eCgF_ugaf`PDAa%KzU<}pc1C@XzN zg`UB>jYtC^JBz4%p95FrALPL86MEAGTy^ayO>)Sio%TVdFC9 zz^M`M@iKw?gt(3&(cr%aC%qwo;Am8;%=3|9$#Z?1WlHkrO#zqzD=Ya$=)2u zkOgdZN@hfIJh9T)V&X$O8VqS6VJzQ#c$|UV>zSR@!&n+a@)`GY23#@#f*~%9*oWHG z0&e4M7FNBJ8#}jpM=8$-L@H&F5A*u6)VUGIa^?D$8?`7@?i>Do4|Yz)vz+}P3$gLD z-25y^XL#w22&YfY&DaEYszCs^R_dp42gPQfz_d&BNbjAkH`~5h7;Z%JY^uQ-Y8@8= z#3CNBG11zTB}iB>>N6x}cBv8nbgpLeH_cU`?1@PsvEd;J;S8t0kr*apY`i2OGm@RQ z#`^;v^}ikq6OLCtzg%L`TYy`@J`!zTiMB$b?I-9j7yzbCzMv0PfeUHyZT`gF$qnM} z(gu?YK8bKB^P-0_FEKb@hEwzi@V+1uKBSq@kO)A zJtny+YfQ~y8YY^{yfpKe7jHiE!kw;g0Yw*(&Ra+cG+gZgcW@OA4jf+r1R}&@p(Ytj z-n9gbm;(>o)f%wQCwmhP6JAJwoL7_go(^#WmlDH)Uk7~IXkDliZb${16qBHU7i230 zI>I7u#1MI=Fii)suco7$(3SR*)0Eg=)5)9mmE+Q}CWF2hfqY=&l)&Joq`)Z-K5L$) zvrGV?3)(_#gJd!R>QCernEDI+1S&`rFf8nWl+c-W6+4qgfJKHNtH%j9Z#b!-peb_zZUor$mAMkZG&tXMx|k*>l`=n3%9$<-FRK#4|2 zZzDYePCR<2(Xod(=!2A8MI4aHA;Z#cE7LOcxB1`@85c4#G7FX|4I`scs+f_%sMyjD z-wp}F$dGu^wQ7t9<6S)-JhSl#qQl8}VDzM9MzVuJA{YlFj&5%r0X>SrNa6;AVunJ+ z4TZ91?{A=x9ozd#();AYtoPTlz0cIm*!!eyKkogN|F3#~30h*_`zuJxNbj#PI@S{h zy@g`iM3yWd33%{+2uX-+5+z80Xo*>{i~t%wS2=*At4_?2@+Ev;FTiw zG6{HzF@BKgNfCQ{Vy#13hZ4JyA;ml94#>eJCWRobH>b{VQnU3JIeWRkaVoh~)1tXrsNrYK2@<-r!;rbUG zsuqlx5d|hq^UC6jR2DB0TXhMkx)NeWWJPtdRmwCfz#t|Yv;fSYgrO6%8cMmG$ZKdc z3P=e-YD1`LMG=7`O3--;rR1t8V&v?YS{hry4Ik4GnjKL`$blJ06wE80%+iAg=@C!v2s1|m@WfMU$du;^4RjlT1ZLxcH(#a+r1ZjeSbgwx!bk&z zadE{AWUT?5`eCV=!6s{&lE5Y864Z+xfXNRgDI?G_Ij|L?4Kk5N zj`<{zK`KS2(DaZ=A=8O%i4IWjMN(Gl0k1VE?6yOq_s-*)pat$MpMFvpQncY*g zYCc#rDU}|~ujmtOU78k1L}{!Uy+IUsk1dJ(I{x@ps1~Jw2ZU_6KR%c`DOH!I*hYob z5X3_$hIo`C8YJWSsfhfWN{tvA9TQC}3e?K6WeOc=J&}cT#a0RlImp=|Mj&4=tS9HI z*e`6FmUN)$Gf;<$1@XZ|N~s8B0JEu92XRfQ*qv-wm`5y}twrUZ_(UB<1~@d@)T<95~Y2{$j} zcGL0e8QtzWKG=FGK~}BrX8^V#EK~@mGBd!`8<+vMKyigN_hV1F5IZn4guZg2aR|W~ zObN1G&@({yu!ykJ5wwFrN%>5lQ`lR0C_;ykhuASeN-^7cfwpQ79iOlmQ$nU+`QS9B z1UWyLsC;gsvem>yW%3{s)mH7P#0I&b)(}|d3MdI^Nc|C{L{78^ z<{pX2%mIWz934_2HK%G6Jfu1q-Vgrsz^T*RbOlV)&%auLu<&=VreP*d+|EC6yz z!MWZ`N3?}d&wy54ni6*oZB5J7SgL|tXPTx_5_!}2k8Yz z+J)%?#*0*#X&Ra+Lj&s&{7+aHfCmI2(=K?kV)ixLl_C?9b_uNUmgqc%QFN?aOo^FB zaUpUsX;epeL$8W)N}y4YC}A3fotbGA5IYPNX_PfTgv`O7p3LUT^fq=EAz}lngOVXM zq7It?vJv{FyO*_;SSG20c~gf?b% zbc2=vJ1L_dz!9Js1qmsLqZ0%X$p{YOM}rox_7eB{I5)er(0I7*IwN@yZ 
z%E^?NIBJ>N6hO++X+h|ME>LN#k&PPr2M#1O?c{`(^ON&IY%{2fRubp}Oj)b)w32HW zFahdUPA{cUD^h&Tbok~_9wr@M0c?NEY_NftW;Ud&s&OSba88lC z(37?$*a6T_C?=9gHQ_)bOii%HiH#*D3_pTQiXzZXn4};Xuu{pAWZ(&z@m^}Z4g{Ai zraNMDGak`o1OPf?4Py7gG$>@Ifrk~pHOa0d_D9)~X>WF};3l;GX4>1b)82u0<6xU< z+DkF*$pHxq@)&O+s^+Q_HUM+w%Sch%|Q9)z(RF8=$xZDVN z6pG;9CbtYLkdi?QL5P#PZ~z?!bp#@uor#g3NJge~66Y$>1?nMy&_0mS1p-}Qt33M3aN!mCJpbcln1&Ai)9unLZC@C<#Zp`t;0uexyWqjMd&nA;p)yp zAK|D*(}y3$OowJa8b5YAxcx8F!CUP}W`(!ffqgka`3Jo4a3BvR$gM}bhLUUm+ZgDKzv=t2S6yMd0-b>uWoO^Nms5hb=II4vYo19d8UqOU_kVJ0$g zNi%?-Kq>+W5{v~wkTgmBM8?PJyN<8!Hl|+@Gl`Tvlxk-NFvyv!MC?g=hpcCTjPE5R z=OJ>&2RMl}ZO4+Uq`q|ttkr4)J!{mSL8kaZe{85+?Mb6hdo)M}T_t=EVH81&h|@+6 zypYrz{R_1{allcSrg{;@J&?6i@B|du-z|7RJ%~~soZT>oKe&<*5QxfC;R*xVCUA=Z zRYW+dk>TtJYA1lzD}_qT73dvgvIN+{S(pH=QR9S64T?e~;O(_AN~_8k4Ta4Fkxl49 z&BVV^e+eZ&i-JWLMDgmS;;lm2Re6G9jUp30X43P)6X9?Nh4YBuE2Hg$9J2A`;ERC1 z4-rIFMJnj^(?V$>R3B2?M4!Z8N0O&9iIHBv-_s}PFv421KR zjm!Czs+p7wUj>^$$$B6gCbm(yMwdxMSMoR(Z?J8U{8*_R2w^ZH6>L75J%cixh=4+~1kkfYo41WVxkR>UDOF;4>>iq4E z3<*YmqCJqUm-~MQnYfY_%q3kKdsk=)E(7qN=0bD_45Ms@n!LiyUJ&}h4;+}h=~J3L0mGpY;cdv zq)!VqL=ax4=(w=9$?<7PQ89*QY?fG095Y`&w6SLC3(LX&f_r9W>43)%6B&GfURk9YDPXOa2bF@A}WmKm11<%)naqUe1PUY%vjSkICHbEH+BLOW*jmbV`sBW zQ=|i!ylPkH#ZIdK$czm%P6{>HAD{&=HZHk!B(DFXVjT%{6C{7iyZ69tKjBg$Y-ij( zxRF0iYXCb8c#^>V5uO$k7oHq#(1d3fL1YV}9a;MP!mRSVY|ITl-%enUYj8-ziXEUn zgLagNoCMAS2Z4)3itWY~C(jb>wAfzUiTVn@CwbCBKIO%PK#Y`0<4xNMd`KE>2AJGp zC}{8|pqz6Okk2`>9~hlv*g0P^6r58;4qMn9aNA>al7?}}z!dE&Jg% z(3(C)9i_xG1*?RTed1uz5Q$7ZP7*J>DhX4}m(CPeY*8SW>b(p{6wp}(uNqpbpdtag z7;aB^B3pL%65C-%b1?2;#tn=#6k@R0#E2SR9nnu^+`_Q$(NdnsU5?9lp4h`e$rF33 zp#0WLL&WX9Ewwz6kCipT_`XUdPvl4060!SQv^_%o9U;tL+nLDRgOGJ08PwH7L>?Es zN}0k&Ez{U()EXO09A+x)ur)%d6pxY87GcZ7-UxAU)KtL;cSGWu$ZwMnfe?G51VIp* z7D^-#d4_fqsIA3#Nl_^Y^u`ilamX_hg|rMd0i7B&q}oYpD6oPcEfj4aDU%#@fiXFe zh6i%Sd_wIB#Uw@oV|=DqvWL^ynI%j%Y{4j(h+&J#zRm5_M&`IuF~>YxbH~o@4(SAu z`Nqv{*YOYx#r_$Vm!6wn%F3;Rc?tV#9AO7#HeMC8_?Q@G-__v?CY<@M7J*f8yL2JR zhTCi{&HXr8Dms&ttYDs;mr>De8DXk|h$=FGJc<2x8?UL!G!sX`#$-jbltpa2(wL07 zZH+||xOmy$ZEZEH$A)tiV?Q9uV^k#j?G{Va$Pz89=29E1F5GG%Ic^WPsDY%INoZJ_ z3!kFJM26KQH&QK1@?!~(nwSKU;4qU8VMW*g2L7i%icaj@@Vp)!9z*fCoWkj4C2 zb1*zE0_>dCd!HQ(!kIxbL2qW?Ln^+|hFc@{SOgM*BY9Fzq_l=mL1< z{DgeID+vWm0cPz1@)9bF2%VuJN3lX*sMu%pCyJqbMd(M=1mf12oC!lI4`?%7YsqoE z$iwXpLvRu0m>>**VQX>d!(Dt?Kv^NERHC^OR0GPt|8?QopR)2O@&A%_}>acl+>1~+oYT^$0LJ^`iZi`#Z`X^E5!+b;XcrbU6|KG{#E~`A zS>%>bL@**g(bB}zFk0bRNRiPK;?JeV`N9hhYt*PoNI*b%m?5lTqXtczgoFl!Hf$6S z64;Oz3ch4PhGF~I``1Q`exav z4DY+UC(W<3JEYI5XFc4aZ0p7U?Q|}Bso&4M-?lc^YZ543?aDX5L{s@Fy zpvRYIQw&SfqUBdhb{%q_XXxIuvE%2#4|-hveYRWOGyN(%D{F0e^`V{?(Dqa!Lz>q96c)iE$!v_;p=Zo zvxa+ypZGdB_;X-)yWw|Z^Exis`qO5<+us6dRZr#crMU;sL`eCmYl5{Ae2wAZN&5E9 z*Jq>!3lFxA{CKy`9C7rzY>&=kTFAx-M}ASh2_1WBXT8{#>2|lq{#v2Cuyy-7-O9Vs zL-y4j@Tv6%$?M2%f_`J=mkpL{wn{MBol ztmu*J{8r4EZm+nI>b6JLD5+LbpFrDxt40J>^bh)F=lnBQS8r>wZO^qCHFVKj zUmE5%cYFo?j8X zW7ozGX9u0h*`=B{WGuaGZMtJ(%$mMK{kye0e>An*0 zlzW`6GyhDLzwDJns;YJ5!;HRB&chlGn794u+as?#Y3FpddN%e^@VPZ{p})!&3>aSj z=-V{9$E7!`-zod{T6$Z=pPw<%?__LL(y^@pyN`X$5_KE<(zUazz%@HW|0+DlR@Oda z%?^G}!sxfQ4~`_+hh&ZmUOs1X$36YVUE)n2;d*TDgvgisME~47a>nA(u;C}&J#H*~ zk+pKpg!jUd+P_s~-M!T~BF8Izr}DIkDFw~Pu2w(Uy5x{| z+b`16QCr&FcsA#L>C^sMQ@ka|fA&rfa6A0BSG(W|ZMyW2ZK&~?`qFdc@Z7FBZ?)Pr z{_#5&r}tmyH`Dg=-Wj)@u3nJssaG6ZKdfk*;Pr@S+fOzg_H1&5KW~>u+3K{{iry1v zE_W5(Jv1Y|r>fKIYvqyF|Nc^X{UX8|H6Kz=DvVW!WQHL2rlHyvt`%YW3*16QxO}xFqyFMX7%_qE3 z^p(uIUKKj<*A8v-n>ATIeBa2So>RORZR_6Y(H2+)Xxjd5@Y#u(c7X4XfH>T;h-*-N7Ht3IaQ+06DzHq!6^xM6x-6_SrQyb?upLM}u`s;0Xwura+ zt=%)E(}$<~@^m@g1syl=Yg4X4=8Q61wCzc>u7BAxwFfEf0A?g 
z#nF!I8f7(2@wMx7V}0{GGn+eXrF;5jB^IT)xA>!N<(Y9aA53gJWy3R{2ixi?bI)(Q z-tN(%JAUQcpL9??I)7{AgXXP%O&V31eaO4j=gP-*c?NC%o+hWihOSZ_Pb%D6Imq?G zs4mhsi{2~d=+?xapzpM+Q|x%iH*!&9*KSXTo&30^5o9l@R&lB55miDgaIDb~* z*z@`J*IX?+x&GAjqx{qZ;kMS`7lLBoYB`Y`&Sv{XYX8Ekf!eQo77YEH-*v}>Wl`H| z%e+r@nceTI^Vubl3EiGs%R{#Im}ArNRl5Yo!pYAEHW(1;@pk>R?Yp*KU3{ra+q)-< zocGu%SN`I;2m+?kP!Q{Dxb?!$9HHb+r(??Svt1i)Vzj2 z5BF-g_%Bt+jI4u~9WR$1oqO0v`JwFN?zV1&B6jFAgLA%UmyAhRU>MqZZtReX8CAvB zJwu0u&ZIpdxxT1meGLJcjJISSsrnFLo{FU)2*vBiZ zO`ye)FP;`NvmDRX5*-L`9XK^nJ?EtD?h#hYhlWi(`5+)}+Whx#Z5uY9LoEoXOrQMb z{Mq`ae*foV>$x9uPh41{>Ns{!?4w!3&+iHUrTdE>M?z=FMvTAgaVO)l&(gSMc|*ra zeUC_=cVFMoe(BgNA?l*p3x65a_WU4QRmf-m!5YQg+betyeeGVg*s=Ec!oZ=r7V~mu zOcaVkBL?aY`Sz@}&^2brwDk?EqMA;7ea1F%)ti`{0L5PkujWad21mB(v;Q}lSDdxp z@7Bw+s>FzC%c6YWwYFJN`{u~g`*P$nVx=oQ`R>DP8(1%L7$$$=^KR_xX-&LG>{={* zUmRnza>BBK4flQBYqikrO;5#VNz3=YuAe6xVVFB&NP=$p*ORvIEqpqjvz$-41>C$j z%6aOIPAS$-si#Vu0%T3}U$XW;5~khl*Eu?O_^ZKZ_Wt~bvdqmm4={NoLtmjuwXQZ#{`uyYgk$2W^{A)r3pLEO7IqzJz z$2C0Keg2pQe?+og7#+N!$V&0TXJou^(DcAD#p6T9hlC0Suk;D` zh|GA=*Cisos%l}xs!ee&t(-s0vzj?7t}Cwh*f9UXSWmw6w)<|Os`boyA5!vWoa7x^ z9Mx=1dg*`{={M&snA^9t_u;}u)6(+1CWl}2P`I}}IRENF#jLK=9?sc!`PQRPlR{Hx zzL(DlRG+y&D|<|ERmI%V<1-TeEqXEM;`3Yk0%m`CyZ6((n{UFW{j@T*&BVjE)*hQE zI$U`A#=TRoo?cknfJ#nR_1p0Jgi}_Xlg|I`z-{T8~PJ1eP_1jcvnnA{=sgQvrpgRdeh*N06~^lkCH4ec=hm*9lr)RU7R#?LrV z-*$NTLTf9F#eFB&KmD)V@9^+Ef0?y4{bp=QhlvXYUmH@l)0Md5D(#|H-8I2u>bKdK zXBT}dYHZu^Het<1Wau*J0sFk=rxHdV-@G$;&8d-%{PqR8Zcpibea>mV=bnwV;%D$v z+xahc4!?9DZOfYA$RSy?iaTZx89JcK^}S2%q{+3G{rw7B9yf zO+M7Ic9FE{>AS=2?~Y9j79ahk1(oss+>z7#E%i?g+2!3-zOc?vS?Gq3BlP#8j-6Va zpmz&P9jAKLp~vvLvll;Ww6lr(+NmF-U9~z-WyOGVw&!2$c3arwY0Lt*zpZ=qJ^$I! zt$Xql-Go2BOw3=Uo8F*suOl_s^WI`gF|V&VW>a@MBy<}jT<PUEb5fN@G>#&7xz1EtA}_k^wG6%x_9A-pp+Kwr5&tB3!Gff1f);-%f5NG zQ`F4TPAeWY>hg!%o)5V;TU+;7zeG1D!Y1U@-0lY^PdE}dZhHHmbmh8k=jx29Qx0_|u=1M^Eo(Eoz=MYk%AIb)NRJYF773?bO`dk~-@Thum&?M!#nC z&AGw(+po5H+hykRu%wCqEU8dT-FQF}es50inMjk*?n_2Tf26Fyh9 zU;p{+(^;MBhL@i0w|Mcsfur+No(y>woj$6v_JsofRf*#+Yqmz{3heDVht^pp-Q@cy z<7mCM<62HyIJLoEx6XIgTRhC(dDQcg$Nd4C9v{Ya8{B!*HJNze!B2l$Ki{~0Px2Vi zj;xkh^`_Jv*KCcX_miC`uK1rA&|+}9fQi{^P?K_@4H`q@H0JYM3XjxRxj?K>boi8 z#8bPa2fm~@1Z(Ej-TEYS6!pSz{>{8q<@0@S4$FV~rP;u{{^z&6o7p>XLs(MV_)XK# zTwJ&2?aLW?v;6*a@BewEXkgbthdVw#l&RWUY8htn_*J4it+WcZoL68{TLB`mpw$)~?X6@6qSjj6WMI#%I#ZsHa??~osh2jJf)rfZZkUO*zSM34!QX0+11)uzt8esp$+x#}dJRh^r5?OXWaLQLhqR#!WZ zOZL5I|LkGk&%3&mc}2Z| zaCT@Lt5zEi9IH2=v~_7B!OlkPm2^ZPG%-Tx_D9B^TyL#y~Q z`#*QDtu@;wd)D61qaHlH7oFPc>LLAnDp3`-X>D`A zYeYn-L)87jS5{uycWJ)&fy1f8iVCl8Ts3#tw17gz=T{5640?X~Y0GC9H@hr7{`b7q zbuyDT2Htos8JGN~zaP)(-I8qw-ZcCBXQ{(9|5@wH+6SE6eXH!~!^PVdznOA)rvH&` zpFS>=e3=|M9%mk#$z$B|CrJHOdJ2IqsL!ts{Q%o#hx9< zJHKdCk=5s+^0saCMYXhZiv@8rcDi2j7dG74YJnp7ucO6&F)OwmnYzbyMidv^UYq?LWvJ zOY$-o4=AP%@0@v~$;{O8UE5J|2M66eOWPftHq_mbmeRH7pT~}$eYyId7gJo%RZi^q z=aoj|Gc$_riO$(~pVp_*qli;nb$I^ZCV_-FLq`zi)64w=pL#*p^MbakuA=@DA=66RncRP^1`QP|IT;HvzZ1uADTFdF;i#Hr=Tc-@FI$zT8jOT@I%@s2*yj^-R z$$oX^vgG8xAAC}ij^1B3_|2!@hRqgjulHV>e(5*=d4o2e=+*kv#%BlnAHKDw`>KVO z!WRqATp#;uB7b)6gnfMu-%sj!ztNgC<7Agdmo(WPTv_{eofr4N6zS_s9ke4p{&(>W zp`dxI$u}JgLzDYoF0`lH*LikIQg!Hlo8Sw>9@OpX7y15CpME30))o6l4Y1CcyY;8Z zQoWn9o?_d&u6c!p2j0|k{%|a;SId+{T}b2I{vi`w?6yU=nfdqhBGJPH;H-C>toyb;`SPyz?|EaJhpuh3B*=RC_^Yd6Pd(S|fyQ|oj;`)AKx=^Zw=dbAiT6`ie=h1Yu!w0X9z zSBpy>PrjWx!*%(Ej~(}|yRo%KCaN!D>HB|ATE!D;o4p!4@GYSoq5axwLB<_kvsK+JI~R`)-dg&= ztLNh#2U`XzvI*YbHjGmFt%I%=t30`@Ko$*6-CcJKplvn}G5A9$M}W)r`rUduOvpdeqsb zV~abqZ+c?f>%1{-Y|FR%ZQ@&H&x%NQ4(iKSv}q$4(bRdx_{0LlymPM`XY65Irk(t+u|qXaCVNIu5tgb^p!ww}aV7y3U=Z9&zURErl>#pWOal 
zgVacS&scf=n0iIhneW|>w`vrAX@xlWx1sM`yM*nXFHUth)b~Wi=MTJngOZ1&k2%wK z{OcCS-@WO6d6h@UX3k$fD7`mKioJCB^dAcsp9mRLQq}yQGxp*47uw#r)pT^_tGoM; zc@Bwwu>RkP?*eqf4x~{N%J373Q$XKk8=P@wXx~*4Hva<@B5!U5^!Afz5riUc|61Q^`m(^FrjY8QbIF% z$}2_G4!&0OT=xF;K<}q_y;a4!9Rn={Id-1cAimNh_fA69(c05DITsgqjKB8((#MNIk~0-Sv?Q5 z2iy-0jo&LgtmRo`@^1ffrREkTIr@VII~wvXZr&6<^rEsb%X{aTb@RKJ{wJ4T;Cx*| z=3Fal+mhoXy5NQ8RlD`WOWHP=_DST_FN~_HA9wo{HPY4y#l;@RMh4gO3Te11`-*MR zdv{0ep>c}`8U0|~!#M`5Q2UrFW0Gk)mKossCM>0RUQEY!=HzO_mB= zKFm?R#pxd5rPy(;mwf)NuN69(5BtXUX^LkEXgQT;T4muZqx5~@`$~q)xqWsPUlyHFFAudXb#h^D+O5{?5YIKIc`Vn>&R)5a zIw*4Y{XKV;9ehi-kb)K)uF03VS(($eq+lD*x$+8mn~}HvBdG%bt-t;+6hy**IERlW%~^fg>VldsXTEj2S;JM_l3rF)HSo@BqI z8sIc{EDn((*sqDX?dv!4(l-8zQg-=u;&R40<;KZ))#3Ad9O=&fe8rh2#$h(X=dFqM zVNRnO%k1eA5;m)FTMZ+yLh(K}k7=hXo>iDiMl}*xPnXCf>b~)`o44xNm|@(pwL6M9 z?oY^{v0Fvyj+axkHWxp;kRC?ADJH<{t(oa;mr$PU$+v*FTB%U;dEN2*Ej|ouTT8|M zYFw9%t%{8D!F{ay3leV<+J$ZWjH=>Zsw|awc<-k8+V~RsS-tB49R0O}72TmB%BsGd znaAAER(*=daf;17nP@y^XM!6$MGQi^DIH0Dg0@0;p=s6CZ8QYn!Y(t_y5M-xO~y+%nsqj zE3h~6wq8t-gck zlH$?9q)aLKv$ATO37&ddJaaZ4k&!*dvCKn}?yC!mPSI68D*aiBwtXeoK|rD+SbZE0?bVS-$?qju8I6t0#9n({^u z`5xCrHbn(0xvBGib$V9UL=*WbO>#*;^T_G*2Da6hS=;J-oA&Tmjkuk>(ez{A^K{p< zN^*Mb-A`PMcqQDpGl*bR9ebCdOg&lr<0+?WpgTKc;OXVG<;wk62aY*|}@XA(Qt1crz^h{vY*_jsGm?bN0J*|o)lS(u zUz6I{m0=6-w;4E-F_y=N*8Yrm$H+-3Nanh{WAxETU(&VFWNqG@D*sV|s$SDQabjD~ zxkk>p;B|;v7;8Kd$h@t)&ik=??y6$z8omWCF|uJjX}W(#tT;iYZDo!& zex0UKfVPp_fsn+;!#eIC@*eIzDX1SfvLs-ym-VwTiGw^3C6gak9S~l-?^vBo_Ya|; zmKx^<#4e1a*wtI+H1v15QgS~%b88>UKVYL;aOkLyi|fi1&mi?{g0;^M*sX3fu2LGf zf57Wv-hHjayBqRt2hX*#&g`deY&CFq?++!lKE!q8cgpl?jdUO6O+UUW$};OHdHtD) z5uBF#OZU{!5i^tiPE2Khq_EJklO3PkgoHI z#!C|RN4pY_%TLZ{l!fGfryhL}yOp&~>XF#C^pm{tXYTpC9hsNJ|6l>()0(WZLbE&H z4aj|23pEdv*SRFtS5`E5hP%Ih+V20LZoTX1-n?scxifBa2CtoSFg>->`edkNoHnmq zt6$X$kF56SBH1K-TfK*ZX8Y@^FAmy;V*;fC7cZ1&5Z{*1Pu#h-ZGcm|bnmijXI$Ks zHzr>!K4b4=3~-5Tl$Tr+yGPLFaR7w6F`y zdAlSXdnjbj%2H+B&tFOsnHC(pJ*h^e*(GU5Hs4b=ky~jxfm_np_wE(sZN#hx47D@9WcopSh?72G@T)JC9CMZkik&s;5WrzUJVdk3AT9 zeWzccs^hC#^Q)#0l}D}wd8X3!B{^mFd@f*2i%l$5TXWyoZB239xh`vdql%G%*i=IN_{6kyH4&QAK;MekZs2F1z=hb!YWcTh?s3RV-hAD+%XFK8i+1-e zyjz}eNWS033@J$EZ&OYdg*ZIt)XJ6OF{M<_ey#TC!ZoBUb1 z&o}puI*A8R>ngr3{4Rkv6v-^QnNKEa$>yv`);mmWkn|0;lql+q*v9a-EpBBKDeRy2q^?8#lc0AC(epf;5A^*~_=n8xxDE{!pj^Jl&MlY;*wxm2c z{7J0KWuw7vmMFhNmAQ3i-O@ovDgFv(U*Q3@OU2U9H+Kh(8g}idP8V1mw8gn@ZPdq_ zw+3t8=$tR@I>Q|;WisLD9CY);@SXm5A7jGib{K6~pB0RMvGM28!$+U|4CmV@W1JFs zeNJ3ynSjKN>}Rh-7fA0Kzg08JG08_rKR`$jeK4lYd=~QH&9!|)9}Ci4-8LW69hYwq zDHM)Ar=L1OT{Cg{c=ElS*Cf_2h@bqjY(>V2Pdu{PW#8J&NUxS~%5{s3*nB>w+Mx4& z$;EH-;xQu6x91J#_bt5e)Qn&A0>LHEP21B)G$lu%V?>&B=nYqkZ1{-LzRPD+2V?6j zRCYPUHMvAvSJy3%`0jskz1AlCb3Mn^KlC%rYCmn52ucsgYgO_d<-K%e_pygvS&KqC zzDwguJA19}@E%$evbtq{{@FIxPu7yTwj@P&1Wf-y|B@{6kDJ#|h!;nimU=&0@2<X-hhWzv9et#B{TqI*BH5~f%LGb8qWy6d`%r~vwGP}hV zNbOU_$5|!zf4Jh}ks0~&(#AJhG7s(-Cd`+fD>N@$E^Mhl*@G>d8A;-MZWTuMZnvKl z9ahC zx9(XhbuUQHeGu+^hil=tgm5c!5xcNR^5o03{U4JudQ4=*m`i&#hvLuRh3s}WVTyZN zA9)>|eC~2ptv9-NEuYl0xpI#;m4A8jIg^WQz1l2_@Z{2W)*+MbsEd53%boA!#l$Rr zxu{4Yt*=LYwSlAA^UG3;t}{9$kqB{)9_sb4rVL zOZLcc?emnq-FaEkK9FDF@Z<%`LEie|#Em`bi(*s9=(QqOqN>8Fi;h&ZVcIz3UvnFH z%5RA_F4oPKiIKiuQ_Hd(jo}<3>!v+4m@`jf)21yWVPOkny^o00R~`Sbbwj(NWEzebm=Sb1a3i z(jx_&QCHv)?@uXKVq( z;P`xA#Y6&MLaAAHNQoM`>1+9A`p84Pfo?ZRdVlS%B#Sr1gFW^yTRwj!^i?R%H_)Z1 zWOfpM*z}J33~YX2+Tv}15p2+#KcSo;<~ulH8MWW?vh{i9(wnbLBU(gu7GAIpXv->G z#6zr7O`lv~ULCV><7fcK&v#cjL|%qS=$N;?N$AYenYTcH|3lR|$1hpgUmaB%Sj~}N zmNEQPd#TN(!-wkQeWh2(F74?lGxe%Eo;H%~_0TSPe6PTVxtA{|2#B4KaKbelPg}W} zAf)jH#?zz-mb?1r02{VZ$ciI&kug;MJH;dL}Fhrs<`{Wclqk~gqJ(U^8~8;!%Id#6OFSaB^myO# 
z*VlZGZ|5#iAcZDGdy3c`W~O}FEqLWiSAuu`^~NS6%LlftI(z%CuC|uFxjv33LRZwY zDve`z*ZAk9X2Xi1Ah3IDx&_zlxUxz{Wt z`_gy~yFPbveJM40haG)(f>D%aK39B~t&7M$4l}Q{W{*A1rMWIVP$OpRwA=1&oyV^J z#!vWYV72e`k5L}G{0%$fayGEDWL8ZGk;PcMBqVh2mK*6`=e6u3-^jg(ZTS&GpKD~o zy{P2=YOyNS*hR*jy-FS|1L%VEE9wGGZeh30Xg{7E(Y*NSk?8#ufKCuEzb8E8@U{Q3FNYyOkjHvP#u z?xltEB}E^t%39ElU1R1tzHXdqu%(XpVSBH!Vz`kY&+`;_9ht~}o+6dk)MvP@8L?Nj zN(#4Le`lF~O?vAmeVX3IhQOhr1aXkM#?_&hsj!tv}2)?A`Tih_6wt-aP4XC3*%Gqi`F895hy(3QcpLaih>005Bi)4d$s-EP&kMSWpG+W1#6L=4Dy)a+S&sWs?Ew+*C z#>Fji`hsRVZN~?@b3UqyZ#Y}XePLUX32BpdNqmYKFQ(p;*Dz!Sw!NlA;=2>^N~jG{ zUBp53N=Yx#@+2l9?E;lhoSm35REIMw(s^xTcVQKFU(1y+_}=ln!O_}J#PX3ZUW^0$&s-R9BY)w15TfNx@jSE}fS>h>pvT_a!83=5UL%dUR^ z!dYf&g}FOMrPOx@Wqpg_NOkJurul|6<+UBfP`*%9OUY;m%B>PNvTEnj3 z;zPaGyKwb5?%it>coOSVJA}KoDv)A$i$gl$9?R>ecVeUbsja$d`FwlkjGbDnme1TH ziM1);#$SSW&LqSa8U~Q_{P{cImRg`CH$g4)ki%sPfsp{5d|eP|%<$X>3o{aeFr{L*ZbJ z@lR^_Ns)6crbR8M4!yki(27jzpcAYbO3qylDb)Zt7v1e zMDEJ*=mC#;7S(nR!cMi1Esh7*^tNtzNK6y0@#a*%v~Hi;smpV}yinel>1CLqn#38& z42f8Ki05s`1D?wrs-8~y-hQ+);Xod7|C733-*A?X83k?Kx0zW)Q(Kx#XzS4ZdGOL` z`I=UX7~!f>_V6dJLDlTeeI~1YT6}*-jotYgQg0Z^x}@Dcyul8aY4m!dtUFOh;m&ob zHwSX9mL%EDRayIb@iRX;=lG04<=P!jt!{mnJ*MUwJSS;*dBXczLqU&XZ}&6VvQHNo zOK-+npZdDtk@UH$b1RPdZRk!hX1quYc&YN?if~HNiBJCA7}0NEHHQ6m&o#8J>y1=) zY}Kk-w!}6)sP+r7`ASfW%BG5Sf(HoRL04&nw~toxy3=A>CIxnpw%^UV2>^LxeA(1zGrNuw>JV60=RwJ@Ut5^!$xCMM~KjehanP{beZA{*1Ddrty>w(xwTTkCI6s;@0I!0OC0$h+T~ThoV$bM<+OA_ zcxczw4?>o09hnM@dp}p%ygt$DdOjxb-FG>IWv;R9XKx)IeOYH!KTnADBg>Mw_fx}l z*S)!O8!Q&&Ufh)CXxR74d3mOC<4duHw@-;5kSyhwUEsbW&f~qwmOk=O`Lml=H+5?( zmYX)}M(AiPy7lghwfFA*9rkxi^OD{1omu6YN2B=`{&a3@9-R=H?^;dq-aV9aueavw z%^Q!`tdHz`@`1G@W*?WWVgJH1=D9|ZlgX4wm98Gw{(~iJyohO5?^z{s-tFd=&xIF? zIjT0g?yfre)6mJk{d8dZl~gh9Js;|`VvdAx?EkRk;rsehxBYHMSAI)AxLvfeZ|R-G zk>hU;MeeJ6<7T;Zi&gjH$y~R-&sl3@T?IpGGKJ$TUPyJx*5$3ebi(R@^VjEUQMkqV zk>+hFSVtF4%10Kr{KS%dD@;XBWaO=n%36_{SaMf3R+`U2xpl?Gm6lC6)yRF}uD(yX zCcItmzf#%k_)#^$^Xx-`!*^|d*gU7+yk9MrkVCdvyj_2t&2YBIBFo$MEl_ok9CWuDJVR#e`YQ@5v6)u@%R zFR{GQbKB}Bv2FgtX>VQh)bZuU7`Bo$Zl}5*VXusY;fs{Ut?7aZvYaVuYDbzC-{9L+ zIKA6antIX&6OW6fsB3x3vs+S%eB?55D?_Z#2V7M58qCVQ_~rTP_>L1FtgQ@$)zsKG^%%)6S6kHm?8-7$0!>IYVM)^}^ecMO> zklf?%wL_~Hv^-dLEKy@p@latH@m}|wulvvGxHJx(y@4xE*tPU*p4is64vD;}AGB5+ zcit8K>P1q-dguKI8n*TJ1h%Rjz@AV)w>%}MtK@UFQ&f)AsfubRf4Hr$c3P36P_wM9 zly$P>yPH8N|MJS=?<+2t%eSiSTtD73_Gbbf#Invp5`QDzY81#NY6y?G0E~R&UBX%X)16MmQr9O?iS^U~N2pgc;tR2FkT^w^} z?eGZGZ_C7q1r@>hF`JDaL?s0@6m02SweWg9wLf+pjdy5EoQ3Sr&(#OChw9sGj6bq2 zE#j8^m+p$=3}0&#P%1v`>XalC~wkPH5p{Fscc8~BXRclIHmTa zfz|#oUCI7ku^LV;4KX#}`A6>^*zsfC2KV5o_w+vSj>2 z;J0hzM4ji2tTEs6W6Pb<6L)j=`MhR)yO(GUYJmy~L;qrbL@hw`42=kx=IM)IiPWx7KtfQjCwcvxzHeNk>_cu~hN@>^Li*gDOWJ?2A$=jJ_q(8g=qP|i!>`1R& zL7=`=gIl{ybX;5CME=~Ti3g!-aX zCTTaf2J9g|m$KrMPdZ$AfEaCFH4dbk%3Ea( z<@^xc=lV_G<)ymdu7;S!?ScJ=_Ux`%R_MET>z=}P@|UvjY8O*7?h3zNx0e(?G|zm| z68>#H1v_hwepPCfQ%gNg*=R2IqJ+Q`U@_6D)r;?MOEj^%r&Y^Q@UFO+_uJSYN5GX` z%gc)ca2xpL_HRFP+1J$7s(jn?&rh;OKkj~UxPJb(&dG1RV~LY*Z|slz@nWO+9n)1m zdm1+Ozpu|sv2%PpaOlvF@rR~^$vyw$QLqd$Mo;q zJ1>*E=%mBRvE0D}FRbHuZ#3NBcHEw#6mNUY(l5tgazpf&C!Yr%&DS|k?OCKT5%-nt zXOnrHyVs-0yy4Ru|AnRjM`GePI6XOT`cw63VCbsQaV!tp?d2xIQ}*fo zviWwd{G8|)#X}39Tc5A|vVWtX5{HnT+RNA<%j4xdjU>7yoCeSOTMEgXJW}u|rMvO% z*_PTJkRLYkL)Js-+kMN~Fr_;jKqX%@4QU zy8GDd(dyR2tDgqNo-X2|n?1eswWimM~ehc>xa~w%(!heX=P-Dc5-?(=UTevY(h*c$%>h@A|q@3@D2qts=K=5lbj(n?@k_VC`H_9!yx7k;on00ljmp=NtF$h(W!<{SG0M01TJnweX*t88Z>Uby z@jXn@bWREVM|wvJvvv0HG#|0HJ&55{P$o6Y?6SGq+Qcfp$J3bjn9wWoiLJbEi^khZ zzC`Da&8PXJj)lIv-~U}G&WPK>=;WHDv7*}|pOg}oUm4pbII48f)V^>B&2M3L1J7y0 zEjF$-KNg&0eiJ@ys+_e+a&U1lF4<;_`f98!|1Phrw&r1@-QJzV)X)BsBSkOluPzpp 
z#2Up@qVgI%!#2fD$lejK9nO<~h2g(QsqY74$X+#KsS+nO?007g3@4qt86{w7Y9g`l zy5m!R9e*EpVeg1T`B7`xr$lQ{@bhi`WTc$7Z-7d_)#oFSpFdJuuqhmWLv5~lWy$`9 za)D7Uoy&S!(sQnEm{TN@`dI#g)tS&E6Nk0FCg*K^NZ>a35ZC;Ui>jx!NRDHDdsM&G zWuBk3NYzHoyXV|~ZY|?dA4!bv^vUd9f6jZugWTku+xh04>~ou2zY#CzPLScen{chp zFv{;@acMfeHT2ufW6vBpUmF^YSZ@i9xFTw=_AO5A)vLoAw#i;6z8$-2`!av1TNS67 zD&n^FsJ*Up{((z=H#<08#WFlA4GN8XMiA1Z9U;NzB<%Oko%t2yC-nRKPr{_)g1Zw-#ESg>H54OOxD$iz;0 zK{7k}{)1ydqQj1SJa%4ZJ;R>8`l@*DJZ*oH?fs8fZmF#KQaSne-{f%;&AkcuJ=g4c zi;udWr(kzCL@rSj=l;Txe~sxXBdTJG=h6F;TiTWInD4slL7QABg7!X%;kQeL@`g7j zo16A2Y_5&^Ay_QD>GEPZ`3w zefQo;d#`efcJc9TQ_DR~heLES6E>DBV0%Ax=2E&B9c0Kl?=9(Z{Cd;XE%c;ookB#? zyMrmEcM{JwVDM6!XP36)-l%zAyk6NLQh$QIb1-aMts*z0N9|sbAT=#E4`Wqpsa z{C27-jKj+yr)gU+w}pak=JmNr2en9 zxdkt~FPErxzS7LL+iovn<>s;xjJpo?ZLtr>4}NlL{;~R+lYaOf9i^sI`6(sCp{p^e zgr4Z^fE0@zYV$?hNWpIV*J6X@^Uvtp7IK??V07Ns$>o5?~U4H@l9x+-|{CMKVJ!|%v3;pES?=G6os_uQi&TwQc=}MZk`S=UlGxe8mR+~xN z(=N7q)n18COlsjgtFu0F<>u$6bGLdLC}p2I)_vHoLi^LH?VIux)a6y)6)(RfRIrnH z6?ezo%<;vZHDYcew|8VZ@OqiQ-2T<6YF`*mXZnEznGAG*nv|SZnLE zw{Yog`xnOMoT*lfj_h@udlm0tBz9!)ptasGk!oE$pt8T9u`0P#zqGk3@M7uMrq9-8 z&I=@qz8rX4ba$j7?~QTL=iT8~uid=4V%MFoQKyFQ1RUDSt$Tb(_GVf_iBZOiYV3Bd z_1>E5ijPISHMP&3-ah;;d;dX=>p3^M6Jq7lixQ=dt&UY@3`YLwa&$JGIMgZV+3xOg zzBY01m8U++vH`agOW&TjzFz;7)b0Gb47ECy3r zZt^O*lvY&u;_oV#B=M{S-mm(xI05P z>7!15OmW|$bB}qeEQVj6_U3jeE81FiCv}M(I-b~sB=Baey6_^v8cYGBZqdz>W z9a(ib9vhyWBh`IuP(P(yc6rlY5^Y)f&f39rVP}qq=DV%dhq&}rr+Vu}by!OGIeb&m zZmOA68+N+%+H=jxK2f*UqKfRC53AC=>|Hc}etePI8x-7nUezdJImPq2XTiJt_FJDG zn`YJJG&MMvW$?6kHBY2^J{I?8U3@w!oGZn&RXILEE+t4^6)hLtX`1%V|Fi7#e2Jyu zyPxpnZC@7l?D0IiFny=Hcg|+0b7ze_aulr9t9^P;^Vse~kMC8kbG~I#7`D>F?qRrx zjfQ;P`D4dT9QkWBdET6{9*EOF5qe9gT+tQT-<@ri22uvkGJG~&H_|dLbQ)AU->u=b%T(yJ>;{hT z>MVm&@6ynjIU6*&=dT@JR`gVLxA&y? zX!OVJ{O``p2{hU^=hS9mMfnk9+8d|&mL4nQmUn&%QhTR=-Qa-e(6u(g+$P51lh+D& zDp|)>U)=a^e}tRisc!cIe_N`S!~SM+LAcLguK4~^Ih*o;$ERG4tB!oGRjX+|`FvGN zetRajlNqVB;bB}ruI9Nj12kXNnw&n3Z7g=TqkS~JA+uU-@gv((>jjuI52Euoljq0mIj2i2knOlbIun}6bzsbd zeIPd3Va~8gsZfZdL_sELTV+hT`T_2Xn7eh?FP{`Id~3P(Scg*0`^R^NXeu{0bUh6f zPSg0_MoJ!EnRm(b!PkX{%YK$FQD0q>n0jxZ_>_EZ^L5TsE@f&BMu~Y!Cmbs=`9aoY z%|e{l(==;TREig5=N`Lpr13-7qSCiV^aOoF%3e7*ZCF&ZONw=|1r8RCx1Ty!KOStk zx}Rv{Rc9O^%qBMQxmBOk+c7WL4$EK@y*=0Y2Du0d+dPW6;7xhjRa1Odq0yyjt%K~T z*KMVfD|ACzBCgC`q|LE7ppAZG&XaAWM;bpJ7F)LOTvXjM&)Y{YHCOfz2h=QN_V|KBumve+`^wW# zw7O2}IF}UQ%Jb%Hh28Ogd*kulGy8|v^3OSFe7RxghJ1ZxTeGAuId@NN{UCY#3HI91 z!`newVmJKxo^3G6eO<~LT)2$4&-au5ciVfU4&hgF?Adqi2anm1iq17H!0{YgIdb8lO2Yc_lUh>EY_99V6x#*iC&ZR- zf)iq^>lDb_iDc31nhiR_cDD5=z3L7LH4{I zU(_#@e|9rjX9L+gol;UoJ*_N(sl{hk&h*Hn&8y_D}4Lo>!_1^XuzV1n~>1L_%)-2yId}HhneiAjgk0ZMJ!ubJu2j}CjcQ`h$co!U3n9@*Fw{9Wx!$|n0i0_?w zd3yOz%Fl!i&fVB}-O2R9xO4OUvhL)>y=B@ba;?*KMFi&1>&`a{h#o9^l0AXn-~9aB zrq^YBjxUVwUbUjcWIkQ^O?V>sSfxWAqL27LtsgN_12(2%z|y!eBy1h>&VT%{G&Hz< zgim0A*MEdVys%U&u)8-Mlz5I~#zGqd3>x7mB6yiqK$}J)reGb&PqDQ)N)7=GjS|E- zzY>yg)G-o^zMcFA%lH7Mz(#<3dD`A62H{=;Q$RQx34>=K*5GhVxMFdfV3UUo26-TE z$TJ54><6I+UFl>n=f(n(9SQ~rQNTbO;=@gV(7^TuCi{tCVu>FvH~|(8EHnu+Il(NO z5FV~LVJ!-MTOFy#bxNx@JwSWW}m!C)?p zL?NPA3|vtcBH~B-6$Au2ybgBqkQ;95|I3&$oHJ$Mj1+=e6CwdsC~&7!ts)~lBSOOL zeErDN6ojNHt6C%<-$368U)n56!`~F=941i_U@r{b;#uPhhSI>$p9jL03Nd+v`TCKp z(a1^G=%?^&d?T1fA;DlwEz*1HTLi>58OJXw#XpJeNmCS%#DHLknGVJk!K4|O_5z!u zzioOkr*bi4UF;FM00X+c>G1zx4eS$C__QzoFTzOUKPV#kW+);N+pNDyA!i~L`1P8} z3L_H}WBVCaWjTb*G;Es85)8^so330F<_onpO^~aurY1X+Vv%1F&0@p!Ftd*e*RYS@{U1bGrri&EFupQEhWnW69p$32NGh%iv}b^I4pEBuoexxC|EEoh7irsz*7mjGh!(bk8s~$ zco37o7(Cc5Q@{cT-qa%(26)%tT?Qu;1BS&A+7m1R>~YP-0(~0rgE95M2r>;i8-!;A 
zgAWi%;h4Zzq6`KEXJW?kAdFvAU%~MpbXr&}H*gYi;@O6qF7a987Fd>Ch+lpWOUTEm5ZiDMapdE~4*#D0xjSRxoa2J4d3Hj+VA3$U=Ix=3ROVCh|z&e zS)2)_Ac4w~Ebx*-F4BmMSK&g2ELCo93_*>}gT)Zksc^+>@Pf5TO#(Lts|8*pZ2}(# zs|!zfJpoA8Jh~tTuP-bD!}ok)FvM&iAqj6T-~oaz!a^__jv*|PhG+6(FkFXU0`@QP zhI6PGf{_F;?BI-*q5a@Z=wMBfWJ&~*6E>Thi(5!YNR%xmBrYN*Dkdf-A|)=xmKTv1 zl^2ue5@XGk=9UqXohu_MBMzJz=s-(G%u^Cc6b__Z;R=I2Sd`?zQ^1@XaH4?=1$g?v z<(C7V1h0a1Qt*d12?0nHm=%b)X=ilQ=D;Jq$lvskjeZPFSd!tR@I5LHOQPbK6!;tk zcn_xHr_;e3K8Zta2w^4%7|w){bYxy2K_jIjDF#EB$V>qhiAIJ@95f){nu-`AgUVn) zk{JIIIr0`*-h{{zOV@u%0OChOF6h_sL^7E31e@3}L6IQYWHbTD?DQ-D5OyXMgoMu$ zrzW9U26_H4v^h02OwUH7sWE%n?B>k8#bF5i#U_aLN$PK&TbgM=l&_C{WN2vMCJrm0 zl12kC(CG~3wW;90CpVU3a9}`C0NA6QdgadbZ9>$pVD_l@wF^(q;WBf|pl@CS3Rk8czUozfxJY4g=I zv{aXR2KxXhx_4w)*zYt@gFK_AlLPj*BRm5=z=SrGHVxsPo_QT{nttX&qlWzX1^5Q~ zcp&K^g!=~i{bgSNJ>mbJ;b*L!{<#7m1^NcBiSYkpxj^*V6yY0AbAYu&IM_sm)e4jZ zO6Kh08MY=8DZ&{OqDTUKJf@=j>k@)9*mvVpvOU1^Iplqo%~EnexJP(MkS`qyG6E*q zKZW3sU|%vUDiDrqbjic}Yt=DhixpXm2>1kqdjdZ2oStd_~ zr*n<2W`I6>W=(}eLx_y<)BCq2$+Y|buyTO`9sSK-!^|u)W?KV?DPua^Or&^9{4Y1e ztUmI?Q^1}j`U9?{*)5fdBOop9AJ#&tL}cbge?U-fZD|Q~_vY5iJ^ow{i6SeJ-@OC1 zkoxykk?`yY>>X`vEM0&m8d)Ei7_a(E4E~-7*|Vp&KNzhJhzJMbX=HGbfGS4}j|ZVf zy%7t$Kz!{P3}(Jj276y0B?;)su)>r;+JS^8nEpoEysyv9dg|W^?N$DgfEhx205{0G z3_jo)6%YjU-BWFQrkMZas+U3{ndYkhmlqL>r^t^Nat)@<&27-)oFThM39o030|@8! zzeF@s4X60n#gXJdx@SsnhIt&N83@?mi=Q=y|JV9(_W1r^;r=?X z|5sQd(%@hLh7NylpBnbzVP8a7a@)WMSH)fE`_; z9M~aOz$+pm1|i)Af{3Z`V6$9_!a>Agm5C%ALGCYsV66*A&^cw$8NphT9wR9J(t zdB~(+Tv=E)FRYB&!2XD=hWR6A@sS{_Y?K3E!1Di)ULymCx# zaH7Z(77z^)CnDlR11F$QT}reJfq zrt*el7o5<%QK#~TWEY&!yrBgP^hUxcZz>lQC6YHHlR+aw-Z=Fb1Q&Sb)MF}>h%4dF zg4|OHUQAfRp?RY~-q=hwd) z^j}h~PSp7$)%0Jf=7d6;N;U9;L8_ThBUVs0OteO<;m!g1q7s}S)r?=M1_Em|)rg^G z;7ma~01%4J>HswCTr4LbDl%BC$B`6)GfofC6WT3+)m#xJ6ojtvYlh&b5?B~sV2T5p zc|9&*5&~XuZlW66Y1EKTgK*+N$C>p4k%i?28`p$cFQD|z|B)af3h^*1fClwYW_Ud| z7ZFTABWYCiK+_03VbFE8!8SGy{p74P@==M9Mo0$fcQOz+65@v8feRVKY3e zmsbw_0AoOK$PQi)5(|&e^YKJ?_y~#M0v{nER7SJ&1#F3sFO(heS6&e}4sZjGZiF>) zIyU4H9a>hxID#@h0ml>VziR}TXo3F#i&c{cK4i4BPV?{~4Vs2QY6w{VpdPtV(lvA} zLrG&5VGKjK7!iI-9E$-&<8y$3nh)W8{uLXvz3JGH9`irN#+-_cIvpFJ(En;rOdcE# z*n|+SKN7-X1j8hbKmx1?<2o1y2h@ha#7oS;amXkSjz7ORwMof;}|LoDxgf9g5nemi4*WG3cxAc z6wr0xFa)3=0*xZTa6EA0;Z%OpKCloJ4hlt>SAcyOt^$a%Q4qNZAs1m#IABCz;Kqm& z5Y#S)1{J5nQcD7X<&vn(L12QP6!0)%q>+mZa*;*62&!x@mU zT>w7#g>)$lWsx*^EQYeBP?pHZVhDzF_(8>KBq0dE(O4d$GLe8nFja($8PMmFmP#sN z$mS|AK~c~w;g`|5Fn9|(Cx&2&hwtO9pjzV2%kjynDyxYJb82EWIfb#>>Z0m8{2cr^ejPDA0l=m}Yz0XN z_=;k3K+y0&At7*|fc%G`S`Kgpk|#tVK!AUM<&hgSXm}lg#VGUzSPlRg0)qZwN{fKv zV+cYe6NyALRtgo5(3t=c#Z(9cxPy9#h9G!Q10d=E&?6w<2h;K3f;=N(NG#+9U@(Wr zsfY*^GJG4zHjo4YXAd22kWP3)bOK1~=%@k%5<3+i5j6%W7k6wrmpR1OUMLv^D3f7Jj?aar*Gp$AOe+^=6amu;^%yA^JZ+ca9Li@BpkMK|>E% z8GZV{RR}ZKc^c*Zsi~jZeL!IC41k{Akodg`!JNU|Q>Yx(5d2L!@~;YoSs)fzThRn%cjZZZuCo0f))bCdF1zpj^dYQ0*}+z|C@e-h<22J z*HcX5e=W3TIM4o6Q3xPd812XZqMe|ld|O6#CQBTw?dBk%L&HKg1o(ggVoDQ=U~y!> zCIY(Vt&W(f#{bXi2<+6D{$G%v{1?bSGi)NDe|k^s z@1Xy$V?TuW`k*m2&k0mIB(?~m zCV)~y_P@w<1b^XC1q7ZgCI|Qn=sS7zV6z(ygJ86O34`KlaZk`JNFa>Z07|J$DiHF{ zAZ8++2kd~9At+ynC=P`HblTE#OeQ!1gq9^LB2E~@i3gkj5U2q=F*p&y ziArP30N!N)rkB;@Kok>zeI?j(0MIFPGJ{1&n1e|wGtCO4M+NF?AQ1;G0gVe!=Vr2L z+{$<+mmZzqPNOln^cZYKq6fqdta1pInz$$EOCSr7k-7gr%mOk9Dfmrgfr$kj4uuXB z*)(pp(%&+}`%m>DPL%m;eJB!Df7J)?RDEa@_5VNC2k&%!K)uj-rt8D&SAFRnlMo1qHTvnWYCa3)v4wK6{f;N$1!kZ4m~CVU?zxqOd1Z|j{WcE zjc|n{UEp7vH=YN9#28f(C|Gr*kFquNI6)PGbU7}NwCDsqE*e3P8*xEW3%v}8s(>Pk zNW;SpGoTfW+HZ9MFBcAly#Mh<0uM0hLYg!Xy&#b3QHis=C2=kuD8Tjj=maJK8ZsXZ zdOYX`F?#$6E0>-CY*B*-1nD4pg6QZ0B!F1FrV_w)w2MsxY@~|;z=l0B|38q%<^Ag#7te 
zqy%2zTmBvCY*oPj7%3f%lsr39GA|qufJpznddYD32F75(p+Iwt!{CWPtaw0fmIiO+aG!AGKH?(+30F}98?IsmF1U&lxZw%}W^k1x@W54y7h;s= zgNqD5e37KcWMOb~IRJMmF^L$wGLr{`S7EX-cvU8B$Ez{HL!HS9JN8UK;+jkb2Cs#@ zp$%^k8A6~zA&_-|8ymLoMWNvm^kA2nLS%~(!Gkc5#J~{rN#YnHuLLMy3B>tWDY#J# zq+!G*@yb9H3&iGt3tks!DTz{Y@<^bCa*A-15|kE^5s?*{Bd8##C^(m;EJzWNW=IpI znbIt2PHDC@mo&FDhcu6>pqe0naSXsy;L!&3j65Rv7WUK86Hz4C8Ak{w5zqFhZ_CJqOUmVIa82Jk2I71lXA)Hd%Rw zGG?Bg5I8xF$q{x&4*2sxBJr6*>(=PmGCJ&mMtBAUhkLB^-Sih2{?FhSIe9h>@fg5S z8wwkCV2CR;GJ*hme?D~NGK0D~P{AuCB#=7wgl;FJk5j~;Qvo8?HS{Q`Yoxl)w3CLm z&a{)J#`J@Z7I`X=9%+i`a{8Pbed@T|bl#9n@Trq;9QJ4#LYWYdY!gvbje@Gb?P4Qj znn*I?sD($Q9tpXs>!3GH4fF=zMjbWu&^x4V7BwdacunUAum&82iwyResxAZp0CA@W zx{v-BB>kV*@UM7zS_MLZ-E!Zc(1=Yl8{1R++sM(uX;kxXw;9axZ~m}aCc~l@>BGN{ z+0EXpo_T-valC0n2#|0FQ{j<=PUvqo*zBM7{tX?W55tfFN@VC=a1ITJh5bGlK!^mi zt>EyuNZ@&d?QL}(^roqS-gLB(AV_OL??@%!DCB8~O~uiuQ_3xaNWFzr91k~oa&D?# z^!4o{!T`|y!)E!D-^AM@z`|^8V`Asv0!ROB{<2+;?#lwYMEe6?b%WP85eZCU$owTr{%VWv@Ca-dEMN^UI_u1aPrd&_PBvGdpb!0aHJ8K zlNbTk28YRBcka&2yp5?w3Kj29AUI$*CJ?I&bALrJm55&_HK!Q7eWOA_eF@+m ziFT%?F{Yw0UkBqH_>EKomqo4Kfob zoZUebLPUKsC!I`O38F9}8j@j0h3EpJ2qG>c!+smlnZ`*LMMN8-BLWXNU^|=4fE76r z&><|oXkAnOrOMF+wk%$A!vJB$$rWXG2Obu1VlI7GzxD}@2|bA)>71wxo%0$ZL+3Ml+=t`5h=u~^vz1jPSmYopNNR0wF`5S1E|X>p`3;T%v5h>&MF zq$;7*qd|l$t0FZCg&qYWUfsFAP#ouVQk??l*Pzck7&SVk5V(E99EA9Ml{&T$6^)wkck&|&Y|J}EGaK^*qM42PW1{IoFYCVFU+7u)+05dN`F>pS7v7tJky$XA z3qSM%ng4$_iPBC#Cd%~JDBbpR78;rU8mAk+mN?U2<23s(CC>EMIK$vq5@-5roNi@# z31s?foC(4563Fz|I5Xn6a_Kc>HvHan{wRC1>=ruGg6#G@!D*r*C!?s#{%$(|Fg>=C zWNVW2Xon6bSnHY|=%81du)t?bbM0%{rU#ByIcJa4PTG5(p0d*3E9ogK?Y*L&veMpj z^^}$NUO`V;Y47Frl$G|Lqo=I&SY^|D>9K04_r}eAIbF%}jW>e>OHFs`277f`&RS8M zeC7Ovql^d`qC<-$N4qsmQt^DRL6cPA$Z*s|CTAMI>1DvVNo(?2?vuaf>|firod4Lr zunRvc{-NUjW%6?Vr>zV1KehgAmwq>Uru^^OfBBWaZOjg0bEx^i<_wsC9H`3>P(g9`QOhWHJ%jJYgOU|a=ZzwNf&7q}@GsXce+xJG&4*lnf(GRgRjko%u2JWGkatf% z2M@$(XxP*+a3p&|aQaETY-w^^!zm4?HJs6KR>L_B=QUi=a8bh}8ZK$rky;Gcw_%mL)u)DQIK3-q5h8;YqE`DXsjbRzCMu4H(h9>ib6NZIp_x^G>5~ znAeEA@3IiN*C$wHN;h*YC`9kgAgFSKS)jjL0sxkRY11g)URX5j_p3+nuYIsYJKgCV zH41k-Fo5ovB<8-=Suu)tS5^)C{>*XW--i5C_-GxN@jG>)i+s1k<@Y*W!~VF(vDJED z+D7U2Dc2}{UxK@xO{28afq!(jGi$h?ZoOu>4^LxfckPUkdmK`u&(2~|YhPGDPmMmm zK)lCQY5jGA4GDS#xjk;}etwDI2bX!QrRveDT|E|bt4r36N!SjGL7}BD23$G zbpd1H{797Zh^9meSB%PUgk&WTEjbRrtJ0PzY`So}Tv1+S>MXn}1vyI&3M9rJsH6*v z8B!KwMs!GGvWUZuZo7~;?vY2oU)YEorFuJ(0 zTesZ0AjoBYt8j4CkaT-U~etLqmyp?(X?>E`kgc& z61x)#iQ7X(qPC^O=MKww@Nj2;zcG|cb%$kW&k@o5XI~7Bl3I+IC!=^Gi=d}Z;mIHw z6r^X=?#*~KXcxz&n#r)rG`3n{0E;qiD#?FttSBc=O7%xad$4eSJotkd*qzp2^+$eY zu+PL2D4V>!`{32iPbdkd15%0BH7p5K?+B^;RyL(T^_ezJFPAX-3F-(Zo0 zE}60H$b{#Y68R!xoAUJz6G;v`5JvlQgrnB~6hWWH7U}q<6=Fxjp$fYfA@Ys&c#Kx| z*^l@3_p@+oW#`%6Bb*?*mOe^&d|689^CZq8HqrX0R~%}SS4 zX$3zW7)m>ob{TL%wW9hwr#~EUO&#WXiPICP{;*>3_c){KpDaVK?|Mlv^-&I-5sr_4 z!i$gXy|ha31?hW9oxcfvFKMQ4PTx!17|pBqZr`|iO9_B)et73r$^g_QBW3`~5+QQN zhya*utw~e25=R1lnJkfTU3@gjnxW33PnAz7E0_2$2Kj_m=BzQ!oiKbv8CN!+Jqhq0 zFto>!GQ10tbjaC5axz{<17cxYLiY^sU@arycH-+(bICwaTYQGuSWz(*#K;RD`yP=^X~K2`RcS3rh?8f`g3dM$rt z)pOSnf`OD-_EDz`c@Sm}si=e^6`H>amapb}Y98-KDUW$>5}VHp@jfyynjvU{S@Q+V z+o^dwHg8AfL&S@`2pjdjF3X68iBNIOHK>WIARJc$qxH+K5d>}Bt8FIhGJOoTOOx-rt1YGax>HE1=3Wf`f`OJPMp*fIT{S2EnVelLV|MPkPoaRbF- z$CM~W47f2R7&d~(jww+J6p{fQtIAH6>%_C5!mR%MrB zX{m%Ap68}D*Ef9h6&ilen(rH4euai_Sqp0T|Fc5hwifknulkU{SE^+Ayl)+e%)jD8 zf|w6ifnnosz6#QK4L$FB#*@buyCL9iqL?}DMkR+RWS Z`9NS&xEZDEDK<%)$)v4EvZ{k5?|*Jp7+L@T diff --git a/aptos-move/framework/move-stdlib/doc/features.md b/aptos-move/framework/move-stdlib/doc/features.md index 299e233ab084f..a2febd0207659 100644 --- a/aptos-move/framework/move-stdlib/doc/features.md +++ 
b/aptos-move/framework/move-stdlib/doc/features.md @@ -66,6 +66,10 @@ return true. - [Function `auids_enabled`](#0x1_features_auids_enabled) - [Function `get_bulletproofs_feature`](#0x1_features_get_bulletproofs_feature) - [Function `bulletproofs_enabled`](#0x1_features_bulletproofs_enabled) +- [Function `get_signer_native_format_fix_feature`](#0x1_features_get_signer_native_format_fix_feature) +- [Function `signer_native_format_fix_enabled`](#0x1_features_signer_native_format_fix_enabled) +- [Function `get_module_event_feature`](#0x1_features_get_module_event_feature) +- [Function `module_event_enabled`](#0x1_features_module_event_enabled) - [Function `change_feature_flags`](#0x1_features_change_feature_flags) - [Function `is_enabled`](#0x1_features_is_enabled) - [Function `set`](#0x1_features_set) @@ -278,6 +282,18 @@ Lifetime: transient + + +Whether emit function in event.move are enabled for module events. + +Lifetime: transient + + +
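
A minimal sketch, not part of this patch, of how downstream Move code might consult the new flag once it lands: the module name `example::notes` is hypothetical, and the only assumption is that `std::features` exposes `module_event_enabled` as listed in the table of contents above.

```move
/// Hypothetical consumer module -- illustration only, not included in this patch.
module example::notes {
    use std::features;

    /// Returns true when the module-event feature flag is switched on,
    /// so callers can choose between the module-event path in event.move
    /// and the legacy event-handle path.
    public fun should_use_module_events(): bool {
        features::module_event_enabled()
    }
}
```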