toml_edit is pretty slow to compile #327

Comments
It's funny that I migrated the parser from nom to combine a while ago (#26) because of too many macros. Combine was mainly based on traits at the time. It would be ironic to go full circle.
I'd say rewriting the parser manually is still on the table and might be beneficial long-term (e.g. precise control over errors). Short-term we could try to reduce the number of macros and generics in toml_edit itself.
I had looked at trying to drop the macros where possible but combine's API is such a mess to work with because everything is a combinator, returning a generic parser, rather than nom's model of operating on the actual parsers themselves (which also simplifies some parser cases in my experience).
It's interesting to see what perspectives exist in the community. I've heard from some that the use of a parser library in toml_edit made it more favorable than toml-rs. I've also seen others say to just write a parser by hand. I lean towards preferring parser libraries, as I feel they make the parsing code more maintainable unless there is a very specific and critical need otherwise.
Fair point, I haven't looked at the current state of nom. Feel free to explore this approach.
Interesting indeed. I do agree that using parser libraries results in more readable and maintainable code. The downside is that we lose precise control: control over compile times, control over error messages, and, when things don't work as expected, we're at the mercy of the library developers if the libraries are complex enough. Another concern is security/vulnerability response. That being said, I'm not sure whether the problem is with combine or with how we (ab)use it (probably a combination of both). But if you think nom's approach will have fewer downsides, go for it :)
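To make the "precise control over errors" trade-off concrete, here is a minimal hand-written parser sketch (illustrative only, not toml_edit's actual code) for a `key = "value"` line. In a manual parser every failure site can produce an exact, domain-specific message, which is the kind of control a combinator library tends to abstract away:

```rust
// Hand-rolled recursive-descent-style parsing of one `key = "value"` line.
// Hypothetical helper, not from toml_edit; shown to illustrate per-site
// error control in a manual parser.
fn parse_kv(line: &str) -> Result<(&str, &str), String> {
    // Each failure produces a tailored message, optionally with a position.
    let eq = line.find('=').ok_or("expected `=` after key")?;
    let key = line[..eq].trim();
    if key.is_empty() {
        return Err(format!("missing key before `=` at byte {eq}"));
    }
    let value = line[eq + 1..].trim();
    let inner = value
        .strip_prefix('"')
        .and_then(|v| v.strip_suffix('"'))
        .ok_or_else(|| format!("expected quoted string, found `{value}`"))?;
    Ok((key, inner))
}

fn main() {
    assert_eq!(parse_kv(r#"name = "toml_edit""#), Ok(("name", "toml_edit")));
    assert!(parse_kv(r#"= "oops""#).unwrap_err().contains("missing key"));
    println!("ok");
}
```

The cost, as noted above, is that all of this structure must be maintained by hand rather than composed from reusable combinators.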
As a follow-up to my earlier comment: like most parser libraries, the development of nom has slowed down, so we do not yet have these ergonomic improvements. I am coordinating with the maintainer to get access to make these improvements. They will be back from vacation soon, which will hopefully get things moving forward.
btw I had forgotten to call out that I made a parser benchmark repo. This was in part to test my combine theory. While combine performs fairly well there, that is for an old version of combine (all I could find a JSON implementation for) and it only uses one macro, rather than our approach of two layers of macros for every parser.

Also, for more background on why I had originally suspected combine and our use of it: when nnethercote was analyzing hot spots in compiling, he found that tt munchers were slow to compile because the compiler needed to re-parse at each layer of recursion to check which pattern matched. While we aren't doing tt munching, we put the entire body of every parser function through two layers of macros. However, if this is all done during typecheck, then the numbers earlier in this thread blow that theory out of the water. Still, the generics that combine forces on us and the complexity of working only through combinators are both reasons to re-evaluate our use of combine.
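For readers unfamiliar with the term, a "tt muncher" is a declarative macro that recurses one token tree at a time, forcing the compiler to re-match the whole rule set at every level of recursion; that is the compile-time cost pattern referenced above. A classic minimal example (not from toml_edit):

```rust
// A "tt muncher": each recursion consumes one token tree ($_head) and
// recurses on the rest, so an input of N tokens causes N rounds of
// pattern matching inside the compiler.
macro_rules! count_tts {
    () => { 0usize };
    ($_head:tt $($tail:tt)*) => { 1usize + count_tts!($($tail)*) };
}

fn main() {
    // Five token trees: `a`, `b`, `c`, `+`, and the bracketed group `[d]`.
    let n = count_tts!(a b c + [d]);
    assert_eq!(n, 5);
    println!("{n}");
}
```

toml_edit's macros were not munchers, but wrapping every parser body in two macro layers raised a similar suspicion about repeated parsing work in the compiler.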
I have a prototype. This is with Rust 1.66 on an i9-12900H processor.
While it's too early to read into these numbers, I ran
The parser is now fully compliant. The only regression is in error messages.
This comment was marked as off-topic.
This is using a short-lived fork of `nom` until we can get the changes upstreamed. Note that `combine` requires using `attempt` to backtrack, while `nom` backtracks by default and requires `cut` to not backtrack. I find the straight functions much easier to follow, and this allows us to intermix procedural with declarative logic (e.g. in string processing I skipped combinators to make it easier to avoid requiring an allocation). Regarding that allocation, we still do it, but this opens the door for us to use `InternalString` for user values, which might give us more of a performance boost (previously, the forced allocation made it moot to measure).

Running `cargo clean -p toml_edit && time cargo check`, I get 3s for building `toml_edit` with `combine` and 0.5s with `nom`.

For runtime performance:
- Parsing `cargo init`'s generated manifest took 4% less time
- Parsing `cargo`'s manifest took 2% less time
- 10 tables took 37% less time
- 100 tables took 41% less time
- An array of 10 tables took 38% less time
- An array of 100 tables took 40% less time

This is with Rust 1.66 on an i9-12900H processor under WSL2.

Fixes toml-rs#327
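The backtracking difference described above can be sketched in plain Rust without either library (hypothetical names, not nom's or combine's real API): a parser failure is either recoverable, letting an alternation try its next branch, or "cut", aborting the whole alternation. nom's default is recoverable failure; `cut` upgrades it.

```rust
#[derive(Debug, PartialEq)]
enum Failure {
    Backtrack, // recoverable: an alternation may try its next branch
    Cut,       // committed: abort the whole alternation
}

type PResult<'a> = Result<(&'a str, &'a str), Failure>;

// Match a literal prefix; on mismatch, fail recoverably (nom's default).
fn tag<'a>(t: &str, input: &'a str) -> PResult<'a> {
    match input.strip_prefix(t) {
        Some(rest) => Ok((rest, &input[..t.len()])),
        None => Err(Failure::Backtrack),
    }
}

// Upgrade a recoverable failure to a committed one, like nom's `cut`.
fn cut(r: PResult<'_>) -> PResult<'_> {
    r.map_err(|_| Failure::Cut)
}

// An `alt`-like helper: try `true`, and only fall through to `false`
// on a recoverable failure.
fn boolean(input: &str) -> PResult<'_> {
    match tag("true", input) {
        Err(Failure::Backtrack) => tag("false", input),
        other => other,
    }
}

// After seeing `[`, commit: a bad table header should surface its real
// error instead of backtracking into some unrelated branch.
fn header(input: &str) -> PResult<'_> {
    let (rest, _) = tag("[", input)?;
    cut(tag("table", rest))
}

fn main() {
    assert_eq!(boolean("false"), Ok(("", "false")));
    assert_eq!(header("[oops"), Err(Failure::Cut));
    println!("ok");
}
```

In `combine` the defaults are inverted: parsers commit to a branch once they consume input, and `attempt` is needed to opt back into backtracking.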
`toml_edit` is pretty slow to compile, one of the slowest dependencies of `cargo`. My suspicion is the number of macros for `combine` is the core problem (due to the extra parsing steps the compiler has to do). I want to explore using `nom` once some ergonomic improvements are made, since it doesn't use macros.

Originally posted by @epage in #323 (comment)