
Improve initial model loading message #1431

Closed
hannobraun opened this issue Dec 9, 2022 · 7 comments · Fixed by #1476
Labels
type: feature New features and improvements to existing features

Comments

@hannobraun (Owner)

Here's the status message displayed when first loading a model after starting the app:
[Screenshot from 2022-12-09 showing the status message]

It says "change in model detected", even though that isn't the case here: the model was loaded unconditionally, for the first time. It just reuses the same mechanism that also reloads the model after a change is detected, hence the status message. It would be nicer if an appropriate status message were displayed each time, explaining what actually happened.

The code that governs this is in fj-host. Watcher watches the model for changes. When it detects a change, it triggers Evaluator. Both of them are wrapped by Host, which is then used in fj-window to control model loading and to display those status messages in response to the model events. A ChangeDetected event is emitted by Evaluator every time a model evaluation is triggered, which results in that status message.

I've been thinking through how things could be restructured to achieve this. One possibility is for Watcher to provide its own channel where it sends ChangeDetected messages, and those events could be handled by both Evaluator and the main event loop in fj-window. Instead of triggering the first evaluation by sending an initial event, Host could take care of triggering Evaluator for the first time, or there could be separate StartWatching/ChangeDetected events. Probably, the events produced by Watcher/Evaluator would be filtered through Host before they are handled in fj-window, so fj-window just gets a single channel to handle events from.
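A minimal sketch of that last idea, using hypothetical event and function names (not the actual fj-host API): Host multiplexes events from Watcher/Evaluator into a single channel, and the first load carries its own event instead of a fake "change", so fj-window can pick an accurate status message per event.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical event type; names are illustrative, not the actual fj-host API.
enum ModelEvent {
    StartWatching,
    ChangeDetected,
}

// Host hands fj-window a single receiver and forwards Watcher/Evaluator
// events into it.
fn collect_status_messages() -> Vec<&'static str> {
    let (host_tx, host_rx) = mpsc::channel();

    let watcher_tx = host_tx.clone();
    let watcher = thread::spawn(move || {
        // The first evaluation gets its own event instead of a fake "change".
        watcher_tx.send(ModelEvent::StartWatching).unwrap();
        // Later, actual file changes produce ChangeDetected.
        watcher_tx.send(ModelEvent::ChangeDetected).unwrap();
    });
    watcher.join().unwrap();
    drop(host_tx); // close the channel so the loop below terminates

    // fj-window's event loop picks an accurate status message per event.
    host_rx
        .iter()
        .map(|event| match event {
            ModelEvent::StartWatching => "Loading model...",
            ModelEvent::ChangeDetected => "Change in model detected. Reloading...",
        })
        .collect()
}

fn main() {
    for message in collect_status_messages() {
        println!("{message}");
    }
}
```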

As I'm thinking through this, I wonder if it makes sense to do away with all the threads and just make everything async.

@hannobraun added the "type: feature" and "topic: host" labels on Dec 9, 2022
@zthompson47 (Contributor)

I'd like to work on this one if it's not already in someone's queue. Either way, I have some questions that would help me understand the code better.

  1. What is the race condition referred to here?
    https://github.com/hannobraun/Fornjot/blob/04ec62af313f6bda89f9100335b47b561a7a2ccd/crates/fj-host/src/watcher.rs#L71-L73
    At first glance it seems like the initial load should instead happen before the start of watching (so initialization always happens before any updates).

  2. How do I trigger the 'choose file' dialog that results in a new model load here?
    https://github.com/hannobraun/Fornjot/blob/04ec62af313f6bda89f9100335b47b561a7a2ccd/crates/fj-window/src/event_loop_handler.rs#L193-L198
    I can't figure out how to start fj-app without a model, and there isn't a 'load new model' option in the GUI.

  3. What advantages do you see with async?
    From what I can tell, it wouldn't logically change the code much (e.g. thread::spawn becomes tokio::spawn), and it might introduce some integration issues with winit and wasm.

@hannobraun (Owner, Author)

I'd like to work on this one if it's not already in someone's queue.

I'm not aware of anyone working on this, so feel free to take it, @zthompson47!

What is the race condition referred to here?

I think the race condition would go something like this:

  1. Trigger the initial load.
  2. Context switch to another thread. Model is loaded and shown to the user.
  3. User makes a change to the model's source code.
  4. Context switch back to the original thread. Start watching the model.
  5. User sits there, waiting for the model to reload with their change.

I think this is extremely unlikely (if it's possible at all), but it's also easily avoided by just starting to watch first, then loading for the first time.
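The ordering fix can be sketched with standard-library channels (names are illustrative, not the actual fj-host code): create the watcher before the initial load, so a change that arrives mid-load is queued in the channel rather than missed.

```rust
use std::sync::mpsc;
use std::thread;

// Sketch: start watching *before* the initial load, so a change that
// happens while the load is running is buffered, not lost.
fn watch_then_load() -> Vec<String> {
    // 1. Start watching first.
    let (tx, rx) = mpsc::channel();
    let watcher = thread::spawn(move || {
        // Stand-in for a file change detected while the load is running.
        tx.send("ChangeDetected".to_string()).unwrap();
    });

    // 2. Then trigger the (potentially slow) initial load.
    let mut events = vec!["InitialLoad".to_string()];

    watcher.join().unwrap();

    // The change made during the load is still delivered afterwards,
    // because the channel buffered it.
    events.extend(rx.iter());
    events
}

fn main() {
    for event in watch_then_load() {
        println!("{event}");
    }
}
```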

How do I trigger the 'choose file' dialog that results in a new model load here?

This is in fact the default thing that happens when you start fj-app without passing a model path as a parameter. I assume you've never seen this, because you're running fj-app (or cargo run) from within the repository, where we have this configuration:
https://github.com/hannobraun/Fornjot/blob/04ec62af313f6bda89f9100335b47b561a7a2ccd/fj.toml#L1-L7

Comment out default-model, then run fj-app (or cargo run), and you should see the UI.

3. What advantages do you see with async?
From what I can tell, it wouldn't logically change the code much (e.g. thread::spawn becomes tokio::spawn), and it might introduce some integration issues with winit and wasm.

This is just something I'm wondering about, so I'm not sure it would actually be better.

I've tried to attack #1327 multiple times in recent weeks. Some of that led to cleanups that I merged. But most of it died in local branches, because I ran into some problem or another and didn't have the time to focus on it.

I believe that Host can't be owned by the event loop, if we want to fix this problem. It needs to move outside, and use an EventLoopProxy to pass events inside the event loop. In my various attempts to solve this, Host inevitably grew its own thread, so it could reconcile messages being passed to it from multiple directions (from the UI, if a new model needs to be loaded; and from the rest of fj-host, when the loaded model changes). I got the impression that this setup could be simplified using async, but I might be wrong.

Integration with winit would be handled by having a single thread be in charge of the EventLoopProxy. That thread could call into an async fj-host API in a loop, using futures::block_on as the executor.

As for integration with WASM, I believe we'd be fine, as long as we don't use anything from Tokio or another runtime in fj-host. Then we could use whatever the thing is that bridges async Rust to the JavaScript world. If you're right and we indeed ended up with tokio::spawn, I think that wouldn't work, and the whole idea would be a non-starter.

@zthompson47 (Contributor)

User sits there, waiting for the model to reload with their change.

I think this is extremely unlikely (if it's possible at all), but it's also easily avoided by just starting to watch first, then loading for the first time.

That makes sense. I was thinking of the equally-unlikely chance of the watcher triggering an update before the initial load, but at least that ends up with an up-to-date compilation.

Comment out default-model, then run fj-app (or cargo run), and you should see the UI.

you're running fj-app (or cargo run) from within the repository

Yes, that's it.

As for the async, I'll take a look at EventLoopProxy and #1327 and consider giving Host a more active role in dispatching these events. I still need to unpack these ideas, but I'll post back when I have something and/or more questions.

As for integration with WASM, I believe we'd be fine, as long as we don't use anything from Tokio or another runtime in fj-host

That's interesting - I'm still a bit confused about how async works in wasm, or maybe I'm just confused about using async without a runtime. Either way, your answers are very helpful and I just need to play around with these things to see how they work. Thanks!

@hannobraun (Owner, Author)

As for the async, I'll take a look at EventLoopProxy and #1327 and consider giving Host a more active role in dispatching these events. I still need to unpack these ideas, but I'll post back when I have something and/or more questions.

Sounds good 👍

And don't feel pressured to use async just because I mentioned it. If you think it doesn't make sense, or you simply don't want to deal with it, that's fine!

That's interesting - I'm still a bit confused about how async works in wasm, or maybe I'm just confused about using async without a runtime.

The important part of what I said about no runtime is "in fj-host". Of course we need a runtime to run it in the end, but as long as fj-host only uses what's part of the language, standard library, or the futures crate, it is runtime-agnostic.

So if you're right, and we end up needing tokio::spawn, that won't work, because it will bind us to Tokio, which we can't run in the browser. But if we can build it in a way that just uses async fn/Future/Stream, then we can run that with any async runtime. Outside of the browser, that would likely be Tokio, and inside the browser something else.
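That runtime-agnosticism can be made concrete with a small sketch (hypothetical function names, not the actual fj-host API): an async fn that only uses the language and standard library, driven by a hand-rolled minimal executor. In a real app the executor would instead be futures::executor::block_on natively or wasm-bindgen-futures in the browser; the async fn doesn't care which.

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Runtime-agnostic async API: only language + std types, no Tokio.
// (Hypothetical function, not the actual fj-host API.)
async fn evaluate_model(name: &str) -> String {
    format!("evaluated {name}")
}

// Minimal executor, just to demonstrate that any runtime can poll the
// future. Real code would use futures::executor::block_on (native) or
// wasm-bindgen-futures (browser).
fn block_on<F: Future>(fut: F) -> F::Output {
    fn clone_raw(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone_raw, noop, noop, noop);

    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(output) = fut.as_mut().poll(&mut cx) {
            return output;
        }
    }
}

fn main() {
    println!("{}", block_on(evaluate_model("cube")));
}
```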

As for what the in-browser runtime would be, I don't know. I'm sure I saw something that seemed suitable, but I'm not sure what it was. Maybe I'm remembering wasm-bindgen-futures?

@zthompson47 (Contributor)

The important part of what I said about no runtime is "in fj-host"

It's starting to click now. I was thinking in terms of a monolithic setup like the well-known learn wgpu tutorial where the whole thing (well, almost) runs either in a native window or the web. Splitting into a workspace with multiple crates, like Fornjot, would allow a more mix-and-match approach I suppose.

Maybe I'm remembering wasm-bindgen-futures?

I've also seen that, and I guess the idea there is that the Rust Future (i.e. the result of an async fn) gets converted to a JavaScript Promise, essentially making JavaScript itself the runtime. We would just export an async fn as an entry point for JavaScript.

I'll proceed with all this in mind and see where it goes.

@zthompson47 (Contributor)

I came up with a couple of approaches to fix this issue, and both also fix #1327. The 'long-lived' version spawns a host thread that lives outside of the event loop and takes commands from the event loop via a handle (actor pattern). The 'on-demand' version is a simpler approach that spawns an evaluator thread on each new model, like the current design.

Both approaches send the ShapeProcessor to the background thread so that all processing happens outside of the event loop, preventing blocking.

I initially wrote the 'long-lived' version and, after thinking it was overkill if the only command is LoadModel, came up with the simpler 'on-demand' version. However, I was looking at #35 and thought it would be easier with the host holding on to the current model on the 'long-lived' thread. I imagine there might be more commands that the GUI could issue in the future, but I'm not sure of the big picture.
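The 'long-lived' (actor pattern) version can be sketched with standard-library channels. The command set and names here are hypothetical, not what the pull requests actually use: the event loop holds a handle, commands go in via a channel, and all model processing stays off the event loop thread.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical command set for the long-lived host thread; the real
// pull request may use different names and more commands.
enum Command {
    LoadModel(String),
    Shutdown,
}

// Handle held by the event loop (actor pattern).
struct HostHandle {
    commands: mpsc::Sender<Command>,
    thread: thread::JoinHandle<Vec<String>>,
}

fn spawn_host() -> HostHandle {
    let (commands, rx) = mpsc::channel();
    let thread = thread::spawn(move || {
        // The host owns the current model for its whole lifetime, which
        // leaves room for richer GUI commands later.
        let mut loaded = Vec::new();
        while let Ok(command) = rx.recv() {
            match command {
                Command::LoadModel(path) => loaded.push(format!("loaded {path}")),
                Command::Shutdown => break,
            }
        }
        loaded
    });
    HostHandle { commands, thread }
}

fn main() {
    let host = spawn_host();
    host.commands.send(Command::LoadModel("cube.fj".into())).unwrap();
    host.commands.send(Command::Shutdown).unwrap();
    for line in host.thread.join().unwrap() {
        println!("{line}");
    }
}
```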

I'd love some high-level feedback about these ideas and suggestions on where to go from here. Each PR needs some cleanup and I'd also like to write some tests before a merge, so I submitted them as drafts.

Thanks!

@hannobraun (Owner, Author)

Thank you, @zthompson47, this is awesome!

I haven't had the chance to look at the pull requests yet, and most likely I won't until tomorrow. But I wanted to give you a quick reply to your comment here.

I think both approaches sound reasonable, and I'd be happy with either. That said, I agree with your assessment that the long-lived model might be better suited to what we'll need in the future.

I imagine there might be some more commands that the gui could issue in the future, but not sure of the big picture.

I expect the interaction between GUI and model to become much richer than it is now. #35 (which you mentioned) is one example of that. #13/#241 is another example, where a selection in the GUI changes what we want to see from the model, which might require interaction with the model code. And then there's interactive debugging of the kernel (see #1385, which is a bit vague right now).

I don't know what all of that will look like when it comes to fruition, but I do agree that a long-lived thread sounds like the more flexible approach when it comes to implementing stuff like that.


Again, thank you @zthompson47! I'll have a look at the pull requests tomorrow, and I'll hopefully have more specific feedback then.
