# Instant speed is no longer instant #44
I have not actually timed anything, but my guess is that the cause of this is the IPC we do in order to communicate with the renderer process. There is a pretty good case for this being the cause of the slowness, given that every single drawing command currently gets serialized to JSON, sent to the other process, and then deserialized. That means there is pretty much no way for instant to compete with what we had before, because it has to go through way more steps (and way more expensive steps) for all of the same operations.

We could speed this up by using some more compact representation like bincode, but I'd actually like to pursue a solution that goes even further: experimenting with memory shared between processes. There is a crate for this. Basically, the idea is that we'd have a region of memory that both processes can read and write, so drawing commands would no longer need to be serialized and copied between processes at all.

Another thought I have is that instead of using drawing commands at all, we could just hold the actual drawing itself in the shared memory. This would enable us to do things like store the temporary path as a function of some piece of that shared state.

There still needs to be a lot of thought put into the design of this. These are just a bunch of random thoughts I have about the problem and a potential way we can solve it. An interesting design issue is that this would radically change the way we are thinking of doing the WASM stuff (probably in a good way, because we'll be able to have a single congruent approach that uses the same mechanism everywhere).
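To make the "more compact representation" point concrete, here is a tiny sketch (not code from this codebase) comparing how a single drawing command might serialize under JSON versus bincode 1.x. The `MoveTo` type is a hypothetical stand-in for whatever command type actually crosses the process boundary:

```rust
use serde::{Deserialize, Serialize};

// Hypothetical stand-in for one of the drawing commands sent over IPC.
#[derive(Serialize, Deserialize)]
struct MoveTo {
    x: f64,
    y: f64,
    pen_down: bool,
}

fn main() {
    let cmd = MoveTo { x: 100.0, y: -42.5, pen_down: true };

    // JSON spells out field names and formats numbers as text on every send...
    let json = serde_json::to_vec(&cmd).unwrap();
    // ...while bincode emits a fixed-size binary encoding with no field names.
    let bin = bincode::serialize(&cmd).unwrap();

    println!("json: {} bytes, bincode: {} bytes", json.len(), bin.len());
}
```

The byte count is only part of the cost; avoiding the text formatting and parsing on every single command is where the bigger savings would come from.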
---
Did a tiny (somewhat unscientific) test to see how much impact (de)serialization overhead was having on the instant speed. Switching from JSON to bincode made an enormous difference: nearly a 4x speedup for the simple snowman example. That change alone got us pretty close to the instant speed we are aiming for, so any further IPC work could be moved to #50.

Since we're currently using stdin/stdout for IPC, I needed to write and then read an extra newline to work around buffering behaviour. Shared memory would not have that issue. See the patch below for more details.

**Patch**

These changes are pretty quick and dirty and would have to be cleaned up before actually being integrated into the codebase. Things like error handling were just completely removed.

```diff
diff --git a/Cargo.toml b/Cargo.toml
index 2890b6a..1ab5190 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -25,6 +25,7 @@ azure-devops = { project = "sunjayv/turtle", pipeline = "sunjay.turtle" }
 serde = { version = "1.0", features = ["derive"] }
 serde_derive = "1.0"
 serde_json = "1.0"
+bincode = "1.2"
 interpolation = "0.2"
 rand = "0.6"
 
diff --git a/examples/snowman.rs b/examples/snowman.rs
index 77bdcd1..73fbe79 100644
--- a/examples/snowman.rs
+++ b/examples/snowman.rs
@@ -4,6 +4,7 @@ use turtle::Turtle;
 
 fn main() {
     let mut turtle = Turtle::new();
+    turtle.set_speed("instant");
 
     turtle.pen_up();
     turtle.backward(250.0);
@@ -21,6 +22,7 @@ fn main() {
     }
 
     turtle.hide();
+    std::process::exit(0);
 }
 
 fn circle(turtle: &mut Turtle, radius: f64) {
diff --git a/src/messenger.rs b/src/messenger.rs
index f324c68..2a8f3f3 100644
--- a/src/messenger.rs
+++ b/src/messenger.rs
@@ -6,7 +6,6 @@ compile_error!("This module should not be included when compiling to wasm");
 use std::io::{BufRead, BufReader, Read, Write};
 
 use serde::{Serialize, de::DeserializeOwned};
-use serde_json::{self, error::Category};
 
 #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Default)]
 pub struct Disconnected;
@@ -17,44 +16,25 @@ pub struct Disconnected;
 ///
 /// If that function returns `Disconnected`, break the loop. Otherwise continue to read until EOF.
 pub fn read_forever<R: Read, T: DeserializeOwned, F: FnMut(T) -> Result<(), Disconnected>>(
-    reader: R,
+    mut reader: R,
     unable_to_read_bytes: &'static str,
     failed_to_read_result: &'static str,
     mut handler: F,
 ) {
-    let mut reader = BufReader::new(reader);
     loop {
-        let mut buffer = String::new();
-        let read_bytes = reader.read_line(&mut buffer).expect(unable_to_read_bytes);
-        if read_bytes == 0 {
-            // Reached EOF, renderer process must have quit
-            break;
-        }
-
-        let result = serde_json::from_str(&buffer)
-            .map_err(|err| match err.classify() {
-                // In addition to cases where the JSON formatting is incorrect for some reason, this
-                // panic will occur if you use `println!` from inside the renderer process. This is
-                // because anything sent to stdout from within the renderer process is parsed as JSON.
-                // To avoid that and still be able to debug, switch to using the `eprintln!` macro
-                // instead. That macro will write to stderr and you will be able to continue as normal.
-                Category::Io | Category::Syntax | Category::Data => panic!(failed_to_read_result),
-                Category::Eof => Disconnected,
-            })
+        let result = bincode::deserialize_from(&mut reader)
+            .map_err(|err| panic!("{:?}", err))
            .and_then(|result| handler(result));
         if result.is_err() {
             break;
         }
+        reader.read_exact(&mut [0]).unwrap();
     }
 }
 
 /// Writes a message to given Write stream.
 pub fn send<W: Write, T: Serialize>(mut writer: W, message: &T, unable_to_write_newline: &str) -> Result<(), Disconnected> {
-    serde_json::to_writer(&mut writer, message)
-        .map_err(|err| match err.classify() {
-            Category::Io | Category::Eof => Disconnected,
-            // The other cases for err all have to do with input, so those should never occur
-            Category::Syntax | Category::Data => unreachable!("bug: got an input error when writing output"),
-        })
+    bincode::serialize_into(&mut writer, message)
+        .map_err(|err| panic!("{:?}", err))
        .map(|_| writeln!(writer).expect(unable_to_write_newline))
 }
```

**Timing Data**

The patch above shows that we modified the example with `turtle.set_speed("instant")` and an immediate `std::process::exit(0)` at the end, so the process exits as soon as the drawing completes. Timing data was collected after RLS finished running; the results were skewed considerably whenever the CPU was busy.

**Before** (serde_json)

*(timing output not preserved)*

**After** (bincode)

*(timing output not preserved)*
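For what it's worth, a common way to avoid the extra-newline buffering workaround (without going all the way to shared memory) is to frame each message with a length prefix instead of relying on line buffering. A rough sketch assuming the same serde + bincode 1.x setup as the patch; this is not code from the actual codebase:

```rust
use std::io::{Read, Write};

use serde::{de::DeserializeOwned, Serialize};

/// Serializes a message and writes it with a 4-byte little-endian length prefix.
fn send_framed<W: Write, T: Serialize>(mut writer: W, message: &T) -> std::io::Result<()> {
    let bytes = bincode::serialize(message).expect("bug: message failed to serialize");
    writer.write_all(&(bytes.len() as u32).to_le_bytes())?;
    writer.write_all(&bytes)?;
    // Flush so the receiving process sees the message immediately.
    writer.flush()
}

/// Reads one message; the prefix tells us exactly how many bytes to expect.
fn recv_framed<R: Read, T: DeserializeOwned>(mut reader: R) -> std::io::Result<T> {
    let mut len_bytes = [0u8; 4];
    reader.read_exact(&mut len_bytes)?;
    let mut payload = vec![0u8; u32::from_le_bytes(len_bytes) as usize];
    reader.read_exact(&mut payload)?;
    Ok(bincode::deserialize(&payload).expect("bug: message failed to deserialize"))
}
```

Because the reader always knows exactly how many bytes are coming, no sentinel newline (and no `BufRead`) is needed on either side.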
---
Two comments:
---
This issue was one of the main focuses of the work in #173. We now have very little latency, and the instant speed is very quick on most platforms. There is probably still work to be done here, but I am closing this issue for now as part of #173 until it becomes a more noticeable problem. If/when that happens, we can create a new issue to track what needs to be done.

---
I should note that a large part of fixing this was realizing that the debug performance of most of our libraries is just insufficient for trying to realize the goal of having an "instant" speed that is actually instant. To remedy that, I have updated the Quickstart guide to recommend that users compile dependencies with optimizations enabled, even in debug builds.
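For reference, the usual way to express that recommendation is a profile override in `Cargo.toml` that optimizes all dependencies while leaving the crate's own debug settings alone. This is a sketch of the general Cargo mechanism; the exact wording in the Quickstart guide may differ:

```toml
# Compile all dependencies with full optimizations, even in debug builds,
# while keeping fast, unoptimized compiles for the crate itself.
[profile.dev.package."*"]
opt-level = 3
```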
---

When the new architecture was implemented, its intentionally naive querying model introduced quite a bit of latency within animations. This latency makes it so that even running the simple circle example at instant speed does not result in the circle being drawn instantly.
To fix this, someone needs to find the bottleneck that is slowing down animations when the speed is set to instant. As mentioned, this is likely a result of the querying taking a while. A possible fix may be to just somehow skip a query when the speed is set to instant and immediately update the turtle to the end state of the animation.
Animations are created with code like this:
`turtle/src/turtle_window.rs`, lines 116 to 137 at commit `2e3f549`:
`fetch_turtle()` will perform a query, but so will `play_animation()`, at least once. It is probably better to just update the turtle immediately instead of running an animation if we know that the speed is instant.

If you would like to work on this, let me know in the comments and I can provide you with more details and instructions.
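A minimal sketch of that "skip the animation entirely" idea, using hypothetical stand-in types (only `fetch_turtle()` and `play_animation()` come from the snippet above; everything else here is made up for illustration):

```rust
// Hypothetical stand-ins for the real types in turtle_window.rs.
#[derive(Clone, Copy, PartialEq)]
enum Speed {
    Value(f64),
    Instant,
}

struct TurtleWindow {
    x: f64,
    y: f64,
    heading: f64, // radians
    speed: Speed,
}

impl TurtleWindow {
    fn forward(&mut self, distance: f64) {
        let end = (
            self.x + self.heading.cos() * distance,
            self.y + self.heading.sin() * distance,
        );
        match self.speed {
            // Instant: commit the end state of the animation in a single
            // update, with no animation loop and no per-frame queries.
            Speed::Instant => {
                self.x = end.0;
                self.y = end.1;
            }
            // Any other speed: fall back to the normal animation path,
            // which (like play_animation()) queries at least once.
            Speed::Value(_) => self.play_animation(end),
        }
    }

    fn play_animation(&mut self, end: (f64, f64)) {
        // Placeholder for the real animation loop.
        self.x = end.0;
        self.y = end.1;
    }
}

fn main() {
    let mut window = TurtleWindow { x: 0.0, y: 0.0, heading: 0.0, speed: Speed::Instant };
    window.forward(100.0);
    assert_eq!((window.x, window.y), (100.0, 0.0));
}
```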