
The read/write/sizeOf interface #121

Closed
roblabla opened this issue Feb 23, 2015 · 10 comments

Comments

@roblabla
Member

Currently, there is a major problem with the read/write/sizeOf functions of the internal binary parser. Their return values are modeled around 1.6 assumptions that no longer hold. For instance, read returns null if there is not enough data, even though since 1.7 not having enough data means there was an error in what the client sent or in the protocol file.

Most importantly, read returns not just the read value but also the read size. This leads to an incredibly convoluted API. write and sizeOf have fewer problems.
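For illustration, the convention being described looks roughly like this (a sketch of the general shape, not the exact node-protodef code):

// Current style: the caller gets back both the value and the number of
// bytes consumed, and has to do the offset bookkeeping itself.
function readVarInt(buffer, offset) {
  let value = 0;
  let size = 0;
  let byte;
  do {
    if (offset + size >= buffer.length) return null; // 1.6-era "not enough data"
    byte = buffer.readUInt8(offset + size);
    value |= (byte & 0x7f) << (7 * size);
    size++;
  } while (byte & 0x80);
  return { value, size };
}

// Every call site has to repeat the same dance: propagate null,
// add up sizes, and pass adjusted offsets around.
function readTwoVarInts(buffer, offset) {
  const first = readVarInt(buffer, offset);
  if (first === null) return null;
  const second = readVarInt(buffer, offset + first.size);
  if (second === null) return null;
  return { value: [first.value, second.value], size: first.size + second.size };
}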

I think it would be in our best interest to rethink the read interface, lest we let the maintainers go 😵!

It might also be a good idea to split the serializer into different files, similar to what is done in roblabla/Protocols.

@rom1504
Member

rom1504 commented May 14, 2015

Splitting into different files is done.

@rom1504
Member

rom1504 commented Sep 20, 2015

On that theme: would it make sense to regroup read/write/sizeOf into classes?
Instead of having something like

'varint': [readVarInt, writeVarInt, sizeOfVarInt]

function readVarInt(buffer, offset) {

}

function sizeOfVarInt(value) {

}

function writeVarInt(value, buffer, offset) {

}

having

'varint': varint

class varint
{
  read(buffer, offset) {

  }

  sizeOf(value) {

  }

  write(value, buffer, offset) {

  }
}

That doesn't change much functionally, but I think it could be cleaner than repeating the name of the type again and again.
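A filled-in version of that idea might look something like this (a hedged sketch: the varint logic is illustrative and ignores edge cases such as negative values, and it registers an instance where the class itself could equally be registered):

class VarInt {
  read(buffer, offset) {
    let value = 0;
    let size = 0;
    let byte;
    do {
      byte = buffer.readUInt8(offset + size);
      value |= (byte & 0x7f) << (7 * size);
      size++;
    } while (byte & 0x80);
    return { value, size };
  }

  sizeOf(value) {
    let size = 1;
    while (value >= 0x80) {
      value >>>= 7;
      size++;
    }
    return size;
  }

  write(value, buffer, offset) {
    while (value >= 0x80) {
      buffer.writeUInt8((value & 0x7f) | 0x80, offset++);
      value >>>= 7;
    }
    buffer.writeUInt8(value, offset);
    return offset + 1; // new offset after the written bytes
  }
}

// The registry then maps a type name to a single object instead of a triple:
const types = { 'varint': new VarInt() };

The type name appears only once, in the registry, and each datatype becomes a single self-contained unit.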

@rom1504
Member

rom1504 commented Nov 20, 2015

"read returns not just the read value, but the read size" don't we need the size ?

@roblabla
Member Author

Not if the cursor is kept on the buffer object. I'm currently toying with the idea of a polling interface instead of a pushing one. With a read() function, we wouldn't necessarily need a size.
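Keeping the cursor on the buffer object could look something like this (a hypothetical sketch, not existing node-protodef API):

class Reader {
  constructor(buffer) {
    this.buffer = buffer;
    this.offset = 0; // the cursor lives here, not at the call site
  }

  readVarInt() {
    let value = 0;
    let shift = 0;
    let byte;
    do {
      byte = this.buffer.readUInt8(this.offset++); // advancing the cursor replaces returning a size
      value |= (byte & 0x7f) << shift;
      shift += 7;
    } while (byte & 0x80);
    return value; // just the value, no size bookkeeping for the caller
  }
}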

@rom1504
Member

rom1504 commented Nov 20, 2015

I don't know what you mean by "polling interface" and "pushing interface".

@roblabla
Member Author

Pushing interface: we are made aware of new data and react to it.
Polling interface: we wait until more data is made available to us.

Pushing is on('data'); polling is a "blocking" read().
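In code, the two styles look roughly like this (all names here are hypothetical):

// Pushing: the stream notifies us of each chunk and we react to it.
function attachPushParser(socket, parser) {
  socket.on('data', (chunk) => parser.push(chunk));
}

// Polling: we ask for the next value and wait until it can be produced.
async function readPacketHeader(reader) {
  const length = await reader.readVarInt();
  const packetId = await reader.readVarInt();
  return { length, packetId };
}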

@rom1504
Member

rom1504 commented Nov 20, 2015

Aren't node streams only implementable with a pushing approach?

@roblabla
Member Author

@rom1504 we can layer a small compatibility shim to get a pulling API on top of the pushing API without much pain. This also means we don't have to worry about "not having enough data" anymore.
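Such a shim could be as simple as buffering incoming chunks and handing out promises that resolve once enough bytes have arrived (a hypothetical sketch, not an actual implementation):

class PullReader {
  constructor() {
    this.buffer = Buffer.alloc(0);
    this.pending = []; // queued { size, resolve } waiting for data
  }

  // Push side: feed it the chunks coming from on('data').
  push(chunk) {
    this.buffer = Buffer.concat([this.buffer, chunk]);
    this.flush();
  }

  // Pull side: resolves once `size` bytes are available, so readers
  // never have to deal with "not enough data" themselves.
  read(size) {
    return new Promise((resolve) => {
      this.pending.push({ size, resolve });
      this.flush();
    });
  }

  flush() {
    while (this.pending.length > 0 && this.buffer.length >= this.pending[0].size) {
      const { size, resolve } = this.pending.shift();
      resolve(this.buffer.slice(0, size));
      this.buffer = this.buffer.slice(size);
    }
  }
}

// Usage:
// socket.on('data', (chunk) => reader.push(chunk));
// const header = await reader.read(headerSize);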

@rom1504
Member

rom1504 commented Feb 18, 2016

See ProtoDef-io/node-protodef#24 for ideas about improving that interface, but doing it asynchronously makes things complicated and slow.

I'd like to get rid of the offset handling, and make it "automatically handled", which can be done without going async.

Leaving this open, but this isn't really a priority anymore.

@rom1504
Member

rom1504 commented Mar 1, 2016

Tracking of this issue moved to ProtoDef-io/node-protodef#49.

rom1504 closed this as completed Mar 1, 2016