feat(protocol-parser): add encoding/decoding functions #1084
Conversation
```ts
readonly dynamicFields: readonly DynamicAbiType[];

constructor(staticFields: readonly StaticAbiType[], dynamicFields: readonly DynamicAbiType[] = []) {
  // TODO: validate at least one static field
```
i believe there can also just be dynamic fields? are we even enforcing that there are any fields at all?
oh I think you might be right, will remove the comment
would be good to have tests for no-static-fields and no-dynamic-fields though
Code looks good in general, but one thing I wonder about is whether we want to make Schema
a class or to go the factory / utils route. It seems like the js ecosystem is generally going in the direction of avoiding classes (see this opinion piece or viem) and most of our codebase is using functions over classes too. Wonder what your opinion is here!
```ts
// TODO: refactor this so we don't break for bytes offsets >UINT40
if (BigInt(actualDynamicDataLength) !== dataLayout.totalByteLength) {
  console.warn(
    "Decoded dynamic data length does not match data layout's expected data length. Data may get corrupted. Did the data layout change?",
    {
      expectedLength: dataLayout.totalByteLength,
      actualLength: actualDynamicDataLength,
      bytesOffset,
    }
  );
}
```
what does this TODO mean? is the refactor to make `bytesOffset` a bigint instead of a number? is there a downside to doing that now?
I think since this is new code, I can probably do that now. This was copied over from another area of the codebase, where it was a much bigger refactor.
ahh I remember now: viem's `sliceHex` operations expect a `number`, so that's why `bytesOffset` is a `number`. Moving it to `bigint` means I can't use `sliceHex` without downcasting, which has the same potential issue.
`uint40` max is `1099511627775`, which means we'd need a ~1TB string to hit this issue. Feels like we could punt on this for now.
```ts
  : decodeDynamicField(this.dynamicFields[fieldIndex - this.staticFields.length], data);
}

encodeRecord(values: readonly (StaticPrimitiveType | DynamicPrimitiveType)[]): Hex {
```
oh nice, we need this for #1074 too!
Yeah I keep going back and forth on this!

```ts
const schema = createSchema(staticFields, dynamicFields); // or just {staticFields, dynamicFields}
const schema = hexToSchema(hex);
const data = encodeRecord(schema, [1, 2, 3, 'something']);
// versus
const schema = new Schema(staticFields, dynamicFields);
const schema = Schema.fromHex(hex);
const data = schema.encodeRecord([1, 2, 3, 'something']);
```

and

```ts
const tableId = createTableId(namespace, name); // or just {namespace, name}
const tableId = hexToTableId(hex);
const tableIdString = tableIdToString(tableId);
const tableIdHex = tableIdToHex(tableId);
// versus
const tableId = new TableId(namespace, name);
const tableId = TableId.fromHex(hex);
const tableIdString = tableId.toString();
const tableIdHex = tableId.toHex();
```

(one small perk of the class approach is that …)

I don't have a strong opinion yet, but I do find the first approach more aesthetically pleasing, and it's not even really more to type. And when googling …
lgtm!
Moves schemas to a class instance with encoding/decoding functions. Pulled out of #1033 for #1075.
cc @yonadaaa