
Feat: sol-ts parity checks for decoder-encoder #412

Merged: 5 commits into master from lh/e2e_encoder_decoder, May 2, 2023

Conversation

@LHerskind (Contributor) commented Apr 28, 2023

Description

Closes #272, #399 and #404

Updates the computation of the calldata hash; this needs to be extended in the native implementation as well to have parity between CPP, SOL and TS (see #413).

The calldata hash will not reach parity between CPP and (SOL/TS) until BB is fixed so that WASM and CPP use the same generators for hash computations.

Checklist:

  • I have reviewed my diff in GitHub, line by line.
  • Every change is related to the PR description.
  • I have linked this pull request to the issue(s) that it resolves.
  • There are no unexpected formatting changes, superfluous debug logs, or commented-out code.
  • The branch has been merged or rebased against the head of its merge target.
  • I'm happy for the PR to be merged at the reviewer's next convenience.

@LHerskind LHerskind requested a review from Maddiaa0 April 28, 2023 23:42
@LHerskind LHerskind marked this pull request as ready for review April 28, 2023 23:42
@Maddiaa0 (Member) left a comment

Some comments from a quick once-over.

}

bytes shit_block =
Member:

This isn't used, but lmao

Contributor Author:

Whoops, will remove 💩.

Contributor:

Ha!

"Invalid diff root"
);
}
function testEmptyBlocks() public {
Member:

Nice refactor on this, it looks a lot nicer.

@@ -0,0 +1,19 @@
// SPDX-License-Identifier: UNLICENSED
pragma solidity >=0.8.18;
Member:

+1 for separating

@@ -0,0 +1,63 @@
/**
Member:

Rather than all of these files with copy-pasted bytecode, would we be better off symlinking the artifacts, like what was done in AC?

Contributor Author:

I think @benesjan already has a separate PR (#408) handling some of this, which is why I did not want to make more changes to how it works here.

await setTxHistoricTreeRoots(tx);

// Calculate what would be the tree roots after the txs from the first base rollup land and update mock circuit output
await updateExpectedTreesFromTxs(txsLeft);
Member:

This could be moved into a helper function, as it shows up a lot in other sequencer tests.
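A minimal sketch of the suggested extraction. `prepareTreesForTxs` is a hypothetical name, and the two callbacks stand in for the repo's `setTxHistoricTreeRoots` and `updateExpectedTreesFromTxs`; the point is only that the repeated pair of awaited calls becomes one shared helper.

```typescript
// Hypothetical helper (a sketch, not the repo's actual API): the two awaited
// calls from the snippet above, extracted so sequencer tests can share them.
async function prepareTreesForTxs<T>(
  tx: T,
  txsLeft: T[],
  setRoots: (tx: T) => Promise<void>,     // stands in for setTxHistoricTreeRoots
  updateTrees: (txs: T[]) => Promise<void>, // stands in for updateExpectedTreesFromTxs
): Promise<void> {
  await setRoots(tx);
  // Calculate what the tree roots would be after the txs from the first base
  // rollup land, and update the mock circuit output.
  await updateTrees(txsLeft);
}
```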

const decodedRes = await decodeHelper.methods.decode(block.encode()).call();
const stateInRollup = await rollup.methods.rollupStateHash().call();

// @note There seems to be something wrong here. The Bytes32 returned are actually strings :(
Member:

Is there a ticket for this?
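A sketch of one way to work around the issue the `@note` above describes, assuming the eth RPC layer hands back `bytes32` values as `0x`-prefixed hex strings. `hexToBuffer32` is a hypothetical normaliser, not part of the repo:

```typescript
// Hypothetical helper: normalise a 0x-prefixed hex string (as returned by the
// RPC layer) into a 32-byte Buffer so results can be compared byte-for-byte.
function hexToBuffer32(hex: string): Buffer {
  const stripped = hex.startsWith('0x') ? hex.slice(2) : hex;
  // Left-pad to 32 bytes so short values still compare correctly.
  const buf = Buffer.from(stripped.padStart(64, '0'), 'hex');
  if (buf.length !== 32) {
    throw new Error(`expected 32 bytes, got ${buf.length}`);
  }
  return buf;
}
```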

@@ -14,17 +38,38 @@ const anvilHost = process.env.ANVIL_HOST ?? 'http://127.0.0.1:8545';
const chainId = 31337;

describe.skip('L1Publisher integration', () => {
Member:

This is currently skipped?

Contributor Author:

The tests were marked as skipped before because they require a running anvil instance, which was not part of the setup. Maybe @ludamad has insights on a better solution. Here I am simply replacing the old block publisher tests.
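One common pattern for this situation, sketched here rather than hard-coding `describe.skip`: gate the suite on `ANVIL_HOST` (the env var the test file already reads) being set. `suiteFor` is a hypothetical helper; in the test file you would pass it the real Jest `describe`.

```typescript
// Minimal shape of Jest's describe function plus its .skip variant.
type SuiteFn = ((name: string, fn: () => void) => void) & {
  skip: (name: string, fn: () => void) => void;
};

// Hypothetical helper: run the suite only when an Anvil endpoint is configured,
// otherwise fall back to skipping it.
function suiteFor(describeFn: SuiteFn, anvilHost: string | undefined) {
  return anvilHost ? describeFn : describeFn.skip;
}

// Usage in the test file (assumption: Jest globals are available there):
//   suiteFor(describe as SuiteFn, process.env.ANVIL_HOST)(
//     'L1Publisher integration', () => { /* existing tests */ });
```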

Contributor:

This is tech debt, so I created a ticket for it. It would make sense to address it in a separate PR.

@LHerskind LHerskind requested a review from benesjan April 29, 2023 13:39
@benesjan (Contributor) left a comment

Minor comments.

@@ -65,6 +210,9 @@ async function deployRollup() {
const ethRpc = new EthereumRpc(provider);

// Deploy Rollup and unverifiedDataEmitter contracts
const decodeHelper = new DecoderHelper(ethRpc, undefined, { from: deployer, gas: 1e6 });
await decodeHelper.deploy().send().getReceipt();
Contributor:

I would rename decodeHelper to decoderHelper for consistency. Please also update the comment above.


this.getCalldataHash(),
);
const temp = toBigIntBE(sha256(buf));
const p = BigInt('21888242871839275222246405745257275088548364400416034343698204186575808495617');
Contributor:

Suggested change:
-const p = BigInt('21888242871839275222246405745257275088548364400416034343698204186575808495617');
+// Prime order of BN254 curve
+const p = BigInt('21888242871839275222246405745257275088548364400416034343698204186575808495617');
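The reduction that the snippet above performs can be sketched in isolation. A sha256 digest is 256 bits while the BN254 scalar-field prime p is only about 2^254, so the digest must be reduced mod p before it can be used as a field element. `digestToField` is a hypothetical helper illustrating this, not the repo's actual implementation (which uses `toBigIntBE`):

```typescript
import { createHash } from 'crypto';

// Prime order of the BN254 scalar field (the constant from the snippet above).
const p = BigInt('21888242871839275222246405745257275088548364400416034343698204186575808495617');

// Hypothetical helper: interpret the 32-byte sha256 digest as a big-endian
// integer and reduce it into the field.
function digestToField(buf: Buffer): bigint {
  const digest = createHash('sha256').update(buf).digest('hex');
  return BigInt('0x' + digest) % p;
}
```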

)
dstOffset := add(dstOffset, mul(2, 0x20))

// Kernel1.contract.aztecaddress
Contributor:

Suggested change:
-// Kernel1.contract.aztecaddress
+// Kernel1.contract.aztecAddress

)
dstOffset := add(dstOffset, 0x20)

// Kernel2.contract.aztecaddress
Contributor:

Suggested change:
-// Kernel2.contract.aztecaddress
+// Kernel2.contract.aztecAddress

for (let i = 0; i < leafCount; i++) {
const commitmentPerBase = KERNEL_NEW_COMMITMENTS_LENGTH * 2;
const nullifierPerBase = KERNEL_NEW_NULLIFIERS_LENGTH * 2;
const publicDataWritesPerBase = STATE_TRANSITIONS_LENGTH * 2; // @note why is this constant named differently?
@benesjan (Contributor) commented May 2, 2023:

@note why is this constant named differently?

What name did you expect here?

Contributor Author:

If you look at the commitments and nullifier constants, you see that they carry a KERNEL_ prefix, making it clear that they are per kernel; for STATE_TRANSITIONS_LENGTH it is not clear whether it is per kernel, per rollup, or something else. Also, it is called STATE_TRANSITIONS here but "public data writes" in other places, and "state transition" could also refer to the general transition from StartStateHash to EndStateHash.

Contributor:

Ok, good catch. In that case, replacing STATE_TRANSITIONS with KERNEL_PUB_DATA_WRITES everywhere sounds good to me. It should probably be done in a separate PR, but I don't really mind if you do it in this one. If you are too busy with other things, I would have time to pick it up once we merge this.

Contributor Author:

I can do it. I have one for the L2 -> L1 stuff that is currently part of my other PR, but it might be better to just have one PR with the naming changes 🤷

Contributor Author:

@benesjan or @PhilWindle, is there both a function limit and a kernel limit, and do they differ? E.g., for new commitments there is both a function limit and a kernel limit with the same value at the moment, but have you used the value as the function one or the kernel one?

l1-contracts/src/core/Decoder.sol (2 resolved review threads)
@Maddiaa0 Maddiaa0 merged commit 6470364 into master May 2, 2023
@Maddiaa0 Maddiaa0 deleted the lh/e2e_encoder_decoder branch May 2, 2023 09:46
ludamad pushed a commit that referenced this pull request Jul 14, 2023
codygunton pushed a commit that referenced this pull request Jan 23, 2024
Successfully merging this pull request may close these issues.

Update rollup contract to incorporate public data in state hashing