prevent expanded paths from getting into the history cache #141
Conversation
Thanks for the pull request, and welcome! The Rails team is excited to review your changes, and you should hear from @arthurnn (or someone else) soon. If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes. Please see the contribution instructions for more information.
Awesome, thanks for your help. I think you hit the nail on the head. I was trying to save an array allocation, but it totally pollutes the history cache by accident. Sorry for going quiet on the original issue; I'm not getting emails from GitHub for some reason right now. @jeremy, do you think this is what you were seeing? I think we also need to add a test so that this doesn't regress by accident. Once we're good to go I can also backport to 3.x.
This affects the in-memory cache, but not the cache across deploys. That one is "polluted" by the dependency cache: it stores file digests with absolute paths, but has a cache key that no longer changes across deploys (due to 436e2d6#diff-879998e90345f35e805b8b9707ab1ff2R189). So a dep cache lookup returns a dep list that references assets from old deploys, then we ask the cache for an asset matching that dep list, and it faithfully returns the old asset.

Making a test case that exposes this is proving to be challenging. I can see the case we want to tickle, but we don't have any test coverage for "portable" assets scenarios like this.

Note, just reverting that change also has the virtue of making the dep cache useless, since it then expires on every deploy. The dep cache also needs relative paths to be useful.
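To make that concrete, here is a minimal sketch of the shape of the problem; the key and value formats below are illustrative assumptions, not actual Sprockets cache entries:

```ruby
# Hypothetical dep cache entry (formats are made up for illustration).
# The key no longer varies across deploys, but the stored dep list
# embeds absolute paths from whichever deploy wrote it.
dep_cache = {
  "application.js-deps" => [
    "file-digest:///srv/app/releases/20150101/app/assets/javascripts/b.js"
  ]
}

# A later deploy under releases/20150102 computes the same key, gets
# the old deploy's absolute paths back, and then loads the asset that
# was built against the previous release.
dep_cache["application.js-deps"]
```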
It will leak into the cache when the modified array gets re-written.
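Roughly, the leak looks like this; a minimal sketch in plain Ruby, not the actual Sprockets code (names and paths are made up):

```ruby
ROOT = "/srv/app/releases/20150101"

# The cached history entry stores root-relative paths.
cache = { "history-key" => [["app/assets/javascripts/b.js"]] }
history = cache["history-key"]

# Expanding the entries in place (map!) mutates the very arrays that
# live inside the cached value:
history.each do |deps|
  deps.map! { |path| File.join(ROOT, path) }
end

# When the entry is re-written (or simply read again), the absolute,
# deploy-specific paths are now part of the cache:
cache["history-key"]
# => [["/srv/app/releases/20150101/app/assets/javascripts/b.js"]]
```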
I'm not sure of the conditions where an older digest is used. When the index is 0, it means it's using the last cached build. Maybe if a change is made and then reverted, it uses an older set? I'm trying to get a failure case for this locally to see if I can help figure out how to test things.
In my error scenario, the actual place it leaked was this line:

It was occurring when a JS asset had a changed JS dependency between deploys.
Bam! Absolutely.
Thanks for the clarification; I was looking in the wrong place. With the new info it was really easy to reproduce. Set up a file that requires another:
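Something like this, as a sketch assuming sprockets 3.x (file names, contents, and paths are illustrative):

```ruby
require "fileutils"
require "sprockets"

root = File.expand_path("repro")
FileUtils.mkdir_p "#{root}/assets"

# a.js pulls in b.js via a Sprockets directive.
File.write "#{root}/assets/a.js", "//= require b\nalert('a');\n"
File.write "#{root}/assets/b.js", "alert('b');\n"

env = Sprockets::Environment.new(root)
env.append_path "#{root}/assets"
env.cache = Sprockets::Cache::FileStore.new("#{root}/cache")
env["a.js"].to_s
```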
Clean and compile once, then modify the dependency and compile again, and 💥 you get absolute paths in your cache.
I'm going to work on a test.
This isn't ideal: we're directly inspecting the cache, but I don't know how to make a second asset actually use the information in the cache to incorrectly load an asset. If someone could show me how, that would be great, and I'll update the test. To understand the failure mode, see #141 (comment).
I wrote a really hacky test for this in #142; it directly scans the cache for absolute paths. It fails without your patch and passes with it. While I can observe the bad behavior in my script above, I don't know how to actually load an asset that uses the bad paths. If you could give me some more info on how to force a failure scenario (i.e. make a second app that points at the first load the wrong asset, etc.), I would really appreciate it. As is, this test is valid but pretty brittle.
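The gist of that hacky test is something like the following; a sketch assuming a minitest suite and the repro layout above, not the actual #142 diff:

```ruby
require "minitest/autorun"

class HistoryCachePathsTest < Minitest::Test
  ROOT      = File.expand_path("repro")
  CACHE_DIR = "#{ROOT}/cache"

  # Scan every cache file for the absolute project root; it should
  # never appear if the history cache stores only relative paths.
  def test_cache_contains_no_absolute_paths
    Dir.glob("#{CACHE_DIR}/**/*").each do |file|
      next unless File.file?(file)
      contents = File.binread(file)
      refute contents.include?(ROOT),
             "#{file} leaked the absolute path #{ROOT}"
    end
  end
end
```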
This issue was closed via d80c07a; I'm going to backport to 3.x and cut 3.3.5. Thanks for your help! While the PR is closed, if you can help me repro the problem further so I can write a better test, it would be hugely helpful.
Try doing things in three steps, not just two. It's actually the third step that reveals the problem at the application level. Step 1: clobber, deploy / precompile. Everything is good so far. In each of these steps, the Rails root should change to something step-specific, to mirror the way that Capistrano puts a release date in the folder path for each new deploy. So the assets will have different absolute paths, although the same relative ones.
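Reading between the lines of the thread, a sketch of that three-step repro might look like the following; steps 2 and 3 (change a dependency, then precompile again from a fresh root) are inferred from the discussion above, and all names and paths are made up:

```ruby
require "fileutils"
require "sprockets"

SHARED_CACHE = File.expand_path("shared/cache")

# Each "deploy" gets its own root, like Capistrano release folders,
# while every deploy shares the same cache directory.
def compile(release, b_source)
  root = File.expand_path("releases/#{release}")
  FileUtils.mkdir_p "#{root}/assets"
  File.write "#{root}/assets/a.js", "//= require b\nalert('a');\n"
  File.write "#{root}/assets/b.js", b_source

  env = Sprockets::Environment.new(root)
  env.append_path "#{root}/assets"
  env.cache = Sprockets::Cache::FileStore.new(SHARED_CACHE)
  env["a.js"].to_s
end

compile("20150101", "alert('b');\n")    # step 1: clean compile, all good
compile("20150102", "alert('b v2');\n") # step 2: dependency changed
compile("20150103", "alert('b v2');\n") # step 3: the cache hands back the
                                        # asset built under step 2's root
```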
This patch addresses #127 by avoiding expanding the history entries in-place.
I verified that this change fixed the problem with my deploys.
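For context, the shape of the fix is to build a new expanded array instead of mutating the cached one; an illustrative sketch, not the literal diff:

```ruby
ROOT = "/srv/app/releases/20150101"

# Stand-in for Sprockets' expand-from-root helper.
def expand_from_root(path)
  File.join(ROOT, path)
end

history = [["app/assets/javascripts/b.js"]] # as stored in the cache

# Buggy: map! rewrites the cached arrays in place.
# history.each { |deps| deps.map! { |path| expand_from_root(path) } }

# Fixed: map builds fresh arrays, leaving the cached entry untouched.
expanded = history.map { |deps| deps.map { |path| expand_from_root(path) } }

history.first.first  # => "app/assets/javascripts/b.js" (still relative)
expanded.first.first # => "/srv/app/releases/20150101/app/assets/javascripts/b.js"
```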