Fix memory leak #269
Conversation
What a bug! Well found and replicated. 👏🏻
I think the redesign is sensible and pretty minimal, but would like to explore a version without UUIDs and HashMap. Made some comments to suggest how that might work, but may well have missed something.
crux_core/src/capability/executor.rs
// we read off both queues and execute the tasks we receive.
// Since either queue can generate work for the other queue,
// we read from them in a loop until we are sure both queues
// are exhausted
let mut did_some_work = true;
Would it be simpler to automatically consider received tasks ready and enqueue them as they get an ID? Here we would then go through all new tasks first, then run the ready queue to exhaustion?
Unless I'm missing something, which is not unlikely :)
Oh, what I'm probably missing is that ready tasks may spawn further tasks, which we need to take in. So the result is the same tick-tock loop...
Yeah it's an ugly loop but necessary, I think
There is a chance this might alter observable behaviour, since the running order of the tasks has potentially changed. Of course we shouldn't be relying on the order that effects arrive, but in practice at least in tests it is hard to avoid.
That could be avoided by having everything arrive on one queue (using an enum), not very pretty though.
I think effectively random order of execution is a good thing - nothing should be relying on the ordering (beyond the causal one).
be4f11b to 48f6c15
This latest version looks great! Happy to merge this unless you want to have a go at that test in this PR
Bloody hell, there's a blog post behind that test 😳 cool!
'tis a thing of beauty
Nice
I made this primarily to show how it can be done, rather than to offer it as the 'right' fix. Basically, instead of passing `Arc<Task>` everywhere, we keep the `Task` inside the `QueuingExecutor` and just send a ~~uuid~~ `u32` to notify when it should be polled again. Very 'async executor 101'.

Fixes #268
TODO: Update runtime docs