Minimal implementation of local Vcs #8780
Conversation
        return false;
    }
    .scope(
        RefCell::new(CurrentTaskState::new(this.execution_id_factory.get())),
maybe to make this diff a little clearer you can move this into a var?
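A minimal sketch of that suggestion, shown with a plain `tokio::task_local!` stand-in; only the `.scope(...)` call and `CurrentTaskState::new(this.execution_id_factory.get())` come from the diff above, every other name here is illustrative:

```rust
use std::cell::RefCell;

tokio::task_local! {
    // Stand-in for the real task-local that holds the current task's state.
    static CURRENT_TASK_STATE: RefCell<u32>;
}

async fn run_task(execution_id: u32) {
    // The suggested refactor: bind the newly constructed state to a named variable
    // first, so the `.scope(...)` line itself barely changes in the diff.
    let task_state = RefCell::new(execution_id);
    CURRENT_TASK_STATE
        .scope(task_state, async {
            // ... task body that can access CURRENT_TASK_STATE ...
        })
        .await;
}
```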
@@ -282,6 +304,13 @@ where
    pub fn cell(inner: Inner) -> Self {
        <T::CellMode as VcCellMode<T>>::cell(inner)
    }

    pub fn local_cell(inner: Inner) -> Self {
        // `T::CellMode` isn't applicable here, we always create new local cells. Local
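As an aside, a minimal usage sketch of the two constructors in this hunk; treating `u32` as a registered value type is an assumption, and only `Vc::cell`/`Vc::local_cell` come from the diff:

```rust
use turbo_tasks::Vc;

fn example() {
    // `cell` goes through `T::CellMode`, so the resulting cell may be reused or
    // updated in place; `local_cell` always allocates a fresh cell in the current
    // task's state, which is dropped when the task exits.
    let _global: Vc<u32> = Vc::cell(3);
    let _local: Vc<u32> = Vc::local_cell(3);
}
```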
Bit of a random question, but I just want to challenge my understanding. We have a few ways of interacting 'up the chain', so to speak. At the top level you have the global 'arena', with any number of 'subarenas' for local tasks. Presumably you can interleave local and global tasks at will. Do we intend for local arenas further down the call graph to be able to read/access Vcs in their 'superarenas' and global state?
#[turbo::function]
fn a(path: Vc<Path>) -> Vc<CompiledCode> {
    let is_dir = path.is_directory().await;
    if is_dir {
        return b(path);
    }
    // do stuff
}

#[turbo::function(local)]
fn b(path: Vc<Path>) -> Vc<CompiledCode> {
    // is Path::is_directory resolved here, or do we recompute?
}
Path::is_directory is already resolved in the global cache.
> Do we intend for local arenas further down the call graph to be able to read/access Vcs in their 'superarenas' and global state?
This is a good question.
Global state: Yes. Local tasks will create local Vcs by default, but they will be able to both create and read resolved Vcs that use global cells. When creating global cells, those cells will be assigned to the nearest non-local parent task.

In your example, `Path::is_directory` would not need to be re-executed because it's in the global cache.

"Superarenas" of calling tasks: No, at least for right now. Local Vcs must be resolved before being passed as an argument into another task.

One of the unintuitive challenges with reading from local "superarenas" is that (currently) every task is `tokio::spawn`ed and never canceled/aborted. This means that a called child task can outlive its calling parent. This isn't common, as most calling tasks await all their children, but it can happen in the case of panics in the parent/calling task or with use of APIs like `tokio::select!`.

For memory safety we either need a mechanism for canceling tasks (triggering a drop at the await point) if they try to read from an expired caller's "superarena", or (more likely) we need to defer the cleanup of local arenas until all of their spawned child tasks exit.
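To make the second option concrete, a purely illustrative sketch (none of these names exist in the PR) of deferring local-arena cleanup by reference counting, so spawned children keep the arena alive:

```rust
use std::sync::Arc;

// Hypothetical stand-in for a task's arena of local cells.
struct LocalArena {
    cells: Vec<String>, // placeholder for the real `SharedReference` values
}

async fn run_parent_task() {
    let arena = Arc::new(LocalArena { cells: Vec::new() });

    // Every spawned child clones the `Arc`, so even if the parent exits early
    // (panic, `tokio::select!`, ...) the arena stays alive until the last child exits.
    let child_arena = Arc::clone(&arena);
    let child = tokio::spawn(async move {
        let _n = child_arena.cells.len(); // reads remain valid for the child's lifetime
    });

    let _ = child.await;
    // The arena is freed only after the parent and all children have dropped their Arcs.
}
```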
Migrated to the next.js repo: vercel/next.js#68469
*This is a migrated PR. This was in the turbo repository before the next.js merge.*

**Migrated From:** vercel/turborepo#8780

# Description

Local Vcs store task-local values inside of the current task's state. The `SharedReference` holding onto their values is dropped when the task exits.

The contents of a local Vc must still use a refcounted `SharedReference`/`triomphe::Arc` because they can be turned into `ReadRef` objects (https://turbopack-rust-docs.vercel.sh/rustdoc/turbo_tasks/struct.ReadRef.html), which require a `'static` lifetime. We can experiment with adding a lifetime to `ReadRef` in a future iteration, but there's little advantage to doing this until we have local tasks.

# Limitations

This implements *just enough* to create and read local Vcs (with a test!). There are many things that are still unimplemented with `todo!()`. Most notably:

- We can't resolve a local `Vc` to a `ResolvedVc`.
- There's no way to return or pass a local `Vc` as an argument yet.
- For safety, we should only allow construction of local `Vc`s in functions where we know that the return value is a `ResolvedValue`.

Grand plan: https://www.notion.so/vercel/Resolved-Vcs-Vc-Lifetimes-Local-Vcs-and-Vc-Refcounts-49d666d3f9594017b5b312b87ddc5bff

# Memory Usage

- This increases the size of `Vc`/`RawVc` from 96 bits to 128 bits. With clever packing this could (in theory) be avoided. However, that would not change the number of machine words (2x 64 bits either way), so it might not have any real impact once alignment is taken into account.
- This increases the size of `CurrentTaskState`. I suspect this isn't too bad, as there should be a smallish number of tasks active at any given time.

**I was not able to measure any change to peak memory/heap usage.**

Built using

```
cargo build --release -p next-build-test
```

Ran heaptrack on a single page (`/sink`) in `shadcn/ui` with:

```
cd ~/ui/apps/www
heaptrack ~/nextpack/target/release/next-build-test run sequential 1 1 '/sink'
```

And analyzed the results with

```
heaptrack --analyze /home/bgw.linux/ui/apps/www/heaptrack.next-build-test.3066837.zst | tail -20
```

### Before

```
total runtime: 130.25s.
calls to allocation functions: 48553541 (372786/s)
temporary memory allocations: 3863919 (29666/s)
peak heap memory consumption: 1.13G
peak RSS (including heaptrack overhead): 2.69G
total memory leaked: 4.96M
suppressed leaks: 7.24K
```

```
total runtime: 135.48s.
calls to allocation functions: 48619554 (358863/s)
temporary memory allocations: 3883888 (28667/s)
peak heap memory consumption: 1.12G
peak RSS (including heaptrack overhead): 2.70G
total memory leaked: 4.71M
suppressed leaks: 7.24K
```

### After

```
total runtime: 157.20s.
calls to allocation functions: 48509638 (308579/s)
temporary memory allocations: 3902883 (24827/s)
peak heap memory consumption: 1.13G
peak RSS (including heaptrack overhead): 2.70G
total memory leaked: 4.86M
suppressed leaks: 7.24K
```

```
total runtime: 130.25s.
calls to allocation functions: 48553541 (372786/s)
temporary memory allocations: 3863919 (29666/s)
peak heap memory consumption: 1.13G
peak RSS (including heaptrack overhead): 2.69G
total memory leaked: 4.96M
suppressed leaks: 7.24K
```

# Testing Instructions

```
cargo nextest r -p turbo-tasks -p turbo-tasks-memory
```
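As a conceptual footnote to the Description section above, a sketch of the storage model it describes; the `local_cells` field and the helper methods are made up for illustration, and only `CurrentTaskState`, `SharedReference`, and the drop-on-task-exit behavior come from the PR text:

```rust
use std::{any::Any, sync::Arc};

// Stand-in for turbo-tasks' `SharedReference`: a refcounted, type-erased value,
// so that `ReadRef`s handed out from it can satisfy the `'static` bound.
type SharedReference = Arc<dyn Any + Send + Sync>;

// Hypothetical shape of the per-task state: local cells live in a plain Vec and
// are addressed by index. The whole Vec (and every `SharedReference` in it) is
// dropped when the task exits, which is what makes these Vcs "local".
struct CurrentTaskState {
    local_cells: Vec<SharedReference>,
}

impl CurrentTaskState {
    // Creating a local cell always appends; there is no deduplication or caching,
    // unlike global cells that go through `T::CellMode`.
    fn create_local_cell(&mut self, value: SharedReference) -> usize {
        self.local_cells.push(value);
        self.local_cells.len() - 1
    }

    fn read_local_cell(&self, id: usize) -> SharedReference {
        self.local_cells[id].clone()
    }
}
```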