
fatxpool: rework the way how transactions are getting dropped due to limits #6409

Open
Tracked by #5472
michalkucharczyk opened this issue Nov 7, 2024 · 3 comments
@michalkucharczyk (Contributor)

When the transaction pool is filled up to its limits, a newly submitted transaction will replace transactions already kept in the pool, causing older transactions to be dropped.

It seems reasonable to change this behaviour and instead reject newly incoming transactions (of the same priority) from entering the pool.

Some related past work:
https://github.com/paritytech/substrate/pull/1676/files
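For illustration, a minimal sketch contrasting the current drop-an-old-transaction behaviour with the proposed reject-the-incoming behaviour. The `Tx`, `BoundedPool` and `SubmitOutcome` types are hypothetical stand-ins, not the actual fatxpool types:

```rust
// Illustrative sketch only: hypothetical types, not the real fatxpool.
use std::collections::VecDeque;

#[derive(Debug)]
struct Tx {
    priority: u64,
}

#[derive(Debug)]
enum SubmitOutcome {
    Accepted,
    AcceptedWithEviction(Tx),
    RejectedPoolFull,
}

struct BoundedPool {
    txs: VecDeque<Tx>,
    max_len: usize,
}

impl BoundedPool {
    /// Current behaviour: a full pool makes room by dropping an old
    /// transaction, so a flood of new submissions pushes older ones out.
    fn submit_drop_old(&mut self, tx: Tx) -> SubmitOutcome {
        if self.txs.len() < self.max_len {
            self.txs.push_back(tx);
            return SubmitOutcome::Accepted;
        }
        let dropped = self.txs.pop_front().expect("pool is full; qed");
        self.txs.push_back(tx);
        SubmitOutcome::AcceptedWithEviction(dropped)
    }

    /// Proposed behaviour: a full pool rejects the incoming transaction
    /// unless it has strictly higher priority than the lowest one kept.
    fn submit_reject_new(&mut self, tx: Tx) -> SubmitOutcome {
        if self.txs.len() < self.max_len {
            self.txs.push_back(tx);
            return SubmitOutcome::Accepted;
        }
        let lowest = self.txs.iter().map(|t| t.priority).min().unwrap_or(0);
        if tx.priority <= lowest {
            // Same (or lower) priority: keep what is already in the pool.
            return SubmitOutcome::RejectedPoolFull;
        }
        // Strictly higher priority: replace the lowest-priority transaction.
        let pos = self
            .txs
            .iter()
            .position(|t| t.priority == lowest)
            .expect("pool is full; qed");
        let dropped = self.txs.remove(pos).expect("position is valid; qed");
        self.txs.push_back(tx);
        SubmitOutcome::AcceptedWithEviction(dropped)
    }
}
```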

@michalkucharczyk michalkucharczyk self-assigned this Nov 7, 2024
@michalkucharczyk michalkucharczyk changed the title from "fatxpool: rework way how transactions are getting dropped due to limits" to "fatxpool: rework the way how transactions are getting dropped due to limits" Nov 7, 2024
@bkchr (Member) commented Nov 7, 2024

For the future pool it makes a lot of sense to throw out the old transactions. Otherwise someone may just send future transactions, occupying the pool without them ever being removed. Sure, you could have different priorities etc., but I still would not keep them.

When it comes to the ready pool, transactions that have been in the pool longer have a higher likelihood of being invalid (maintenance has not yet rechecked them). So, it is debatable which behaviour is better. Users are probably also sending transactions to non-authoring nodes, meaning the transactions could already have been included in a block and it would be smart to remove them from the pool. (So, on an authoring node we should not accept new transactions via p2p if the ready pool is full.) There are a lot of nuances to this, and just changing it because someone is sending too many transactions is maybe not the best idea. So, yeah, we can probably improve this, but we should think a little bit more about it :)
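A minimal configuration sketch of that distinction, using hypothetical names (this is not the real fatxpool API): the future pool keeps evicting old transactions so it cannot be squatted on indefinitely, while the ready pool would reject incoming transactions once full:

```rust
// Hypothetical sketch, not the real fatxpool configuration: each subpool
// could enforce its size limit with a different policy.
#[derive(Clone, Copy, Debug)]
enum LimitPolicy {
    /// Evict an existing (old / lowest-priority) transaction to make room.
    /// Suits the future pool, which would otherwise fill up and never drain.
    EvictExisting,
    /// Reject the incoming transaction. A candidate for the ready pool,
    /// at least for transactions arriving over p2p on an authoring node.
    RejectIncoming,
}

struct PoolLimitConfig {
    ready: LimitPolicy,
    future: LimitPolicy,
}

impl Default for PoolLimitConfig {
    fn default() -> Self {
        Self { ready: LimitPolicy::RejectIncoming, future: LimitPolicy::EvictExisting }
    }
}
```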

@michalkucharczyk (Contributor, Author) commented Nov 7, 2024

Meaning the transactions could already have been included in a block and it would be smart to remove them from the pool.

Another question here is whether it had a chance to be gossiped before it got dropped.

(So, on an authoring node we should not accept new transactions via p2p if the ready pool is full)

I think it would be rejected today (it uses TransactionPool::submit_one internally). And I am not sure whether the sending node would try to resend it to the same peer; that needs to be checked. We would for sure need some more sophisticated protocol here.
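As a rough illustration of that path, the sketch below uses a simplified stand-in for the transaction-pool API. The trait and error names are hypothetical; the real `TransactionPool::submit_one` is asynchronous and reports a full pool via a dedicated error variant such as `ImmediatelyDropped`:

```rust
// Hypothetical sketch of the p2p import path discussed above; the trait and
// error below are simplified stand-ins for the real transaction-pool API.
#[derive(Debug)]
enum PoolError {
    /// Stand-in for a "pool is full" rejection.
    PoolFull,
    Invalid,
}

trait TransactionPool {
    fn submit_one(&mut self, tx: Vec<u8>) -> Result<(), PoolError>;
}

/// Called for a transaction received over gossip. If the pool is full the
/// submission fails and the transaction is not kept locally; whether the
/// sending peer would ever resend it is a separate protocol question.
fn on_gossiped_transaction<P: TransactionPool>(pool: &mut P, tx: Vec<u8>) -> bool {
    match pool.submit_one(tx) {
        // Accepted: the transaction may be propagated further.
        Ok(()) => true,
        // Rejected (full pool or invalid): silently dropped on this node.
        Err(PoolError::PoolFull) | Err(PoolError::Invalid) => false,
    }
}
```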

@bkchr (Member) commented Nov 7, 2024

Another question here is whether it had a chance to be gossiped before it got dropped.

Makes sense.

And I am not sure whether the sending node would try to resend it to the same peer.

It will not resend it.

We would need for sure some more sophisticated protocol here.

Yes!
