This repository has been archived by the owner on Jun 27, 2023. It is now read-only.

Partial enumeration #97

Closed

Conversation

schomatis
Contributor

@schomatis schomatis commented Jun 3, 2021

At the moment this PR is just the proposal, not the implementation.

@schomatis schomatis self-assigned this Jun 3, 2021

## Implementation

The proposed implementation change is then rather straightforward: modify `makeAsyncTrieGetLinks` to store the fetched nodes (likely unifying some of its logic with `loadChild`).
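A minimal sketch of that idea, with hypothetical names (`getLinks`, `nodeCache`, `makeCachingGetLinks`) standing in for the actual go-unixfs internals: the callback fetches a node through the DAG service, keeps it in memory instead of discarding it, and hands its links back to the walker.

```go
package hamtsketch

import (
	"context"
	"sync"

	cid "github.com/ipfs/go-cid"
	ipld "github.com/ipfs/go-ipld-format"
)

// getLinks mirrors the GetLinks-style callback shape used by the DAG walkers
// (an assumption for this sketch; makeAsyncTrieGetLinks returns something similar).
type getLinks func(context.Context, cid.Cid) ([]*ipld.Link, error)

// nodeCache stands in for the childer structure that would hold the decoded
// shards in the actual proposal.
type nodeCache struct {
	mu    sync.Mutex
	nodes map[cid.Cid]ipld.Node
}

func newNodeCache() *nodeCache {
	return &nodeCache{nodes: make(map[cid.Cid]ipld.Node)}
}

// makeCachingGetLinks fetches each node through the DAG service, keeps it in
// memory instead of discarding it, and returns its links to the traversal.
func makeCachingGetLinks(ds ipld.DAGService, cache *nodeCache) getLinks {
	return func(ctx context.Context, c cid.Cid) ([]*ipld.Link, error) {
		nd, err := ds.Get(ctx, c)
		if err != nil {
			return nil, err
		}
		cache.mu.Lock()
		cache.nodes[c] = nd // the MEM side of the trade-off
		cache.mu.Unlock()
		return nd.Links(), nil
	}
}
```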
Contributor

Stored where, in memory? The data is already stored on disk when we fetch it.

Contributor Author

Yes, in memory. I'm not making any assumptions about what is stored on disk behind the DAG service interface (e.g., something could be garbage-collected or explicitly removed by another thread); should I?


The proposed implementation change is then rather straightforward: modify `makeAsyncTrieGetLinks` to store the fetched nodes (likely unifying some of its logic with `loadChild`).

We can then create a low-level internal function that fetches only a part of the DAG (canceling the context after a certain number of nodes are loaded; we don't really care which). After that we enumerate only loaded nodes (so only a partial DAG, but enough to check against `HAMTDirectory`). The "partial" in this proposal applies not so much to the enumeration itself as to the fetching process: we only fetch part of the DAG and then enumerate those nodes and compute their size.
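Continuing the hypothetical sketch above (same `nodeCache`; the function name and signature are illustrative, not the proposed API), the partial fetch could walk the shard links breadth-first and simply stop once a given number of nodes has been loaded, which has the same effect as cancelling the context of an async fetch.

```go
// fetchPartial loads at most `limit` nodes reachable from root, caching each
// one in the same nodeCache as above. Which nodes end up loaded doesn't
// matter for the proposal, only how many.
func fetchPartial(ctx context.Context, ds ipld.DAGService, root ipld.Node, cache *nodeCache, limit int) (int, error) {
	loaded := 0
	queue := append([]*ipld.Link{}, root.Links()...)

	for len(queue) > 0 && loaded < limit {
		lnk := queue[0]
		queue = queue[1:]

		nd, err := ds.Get(ctx, lnk.Cid)
		if err != nil {
			return loaded, err
		}
		cache.mu.Lock()
		cache.nodes[lnk.Cid] = nd
		cache.mu.Unlock()
		loaded++

		// Keep descending into child shards until the node budget is spent,
		// then stop instead of fetching the rest of the DAG.
		queue = append(queue, nd.Links()...)
	}
	return loaded, nil
}
```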
Contributor

> We can then create a low-level internal function that fetches only a part of the DAG

Is the order deterministic or non-deterministic? If non-deterministic then we'll end up doing more network requests than necessary since we won't utilize the cache of data we have on disk.

> canceling the context after a certain number of nodes are loaded; we don't really care which

nit: cancelling contexts is generally worse than just returning from your function, since you have to wait for the context cancellation to actually get checked.

> After that we enumerate only loaded nodes

I'm not sure how annoying this is given the dagservice that you're handed.

Contributor Author

> Is the order deterministic or non-deterministic?

Non-deterministic: whatever comes faster. From the PR:

> fetches only a part of the DAG (canceling the context after a certain number of nodes are loaded; we don't really care which).


> If non-deterministic then we'll end up doing more network requests than necessary since we won't utilize the cache of data we have on disk.

We are implicitly caching in memory by storing the retrieved Shards instead of discarding them as the cited function does. The cache is the structure itself (we could move it to disk if your assumption above holds, but you'd still need to keep track of which shards you already have on disk; otherwise I'm missing something in your proposal, and if so please correct me).


> I'm not sure how annoying this is given the dagservice that you're handed.

They are already in memory in the childer structure; you wouldn't go through the DAG service.


## Trade memory for bandwidth

The key insight of this proposal is the trade-off between memory (MEM) consumed by the `HAMTDirectory`/`Shard` and the bandwidth (BW) used to fetch the different nodes comprising the full `HAMTDirectory` DAG during enumeration.
Contributor

What memory are we trading off and why?

Aren't we just trying to minimize bandwidth by only fetching what we need, and then we have the option of trading off BW vs. latency by concurrently fetching more of the shards, perhaps overestimating the number we need to download?

Contributor Author

The memory cost of holding the `Shard` structures in `childer.children` instead of just the links:

go-unixfs/hamt/hamt.go (lines 540 to 541 in 12b9a86):

```go
links    []*ipld.Link
children []*Shard
```


The key insight of this proposal is the trade-off between memory (MEM) consumed by the `HAMTDirectory`/`Shard` and the bandwidth (BW) used to fetch the different nodes comprising the full `HAMTDirectory` DAG during enumeration.

We want to minimize BW in the sense of not fetching *all* nodes at once but only as many as needed to check whether we are still above the transition threshold (`HAMTShardingSize`). It is not so much the *aggregated* BW we want to minimize (in the long run we may end up traversing the whole DAG) but the "burst" consumed in a single `Directory` API call when evaluating the transition (ideally we need only as many nodes as it takes for their aggregated entries to add up to `HAMTShardingSize`). This burst of enumeration then need not be full (covering all nodes), hence we speak of partial enumeration. For this partial enumeration to be meaningful it must *not* overlap: we can't count the same node/entry twice toward the aggregated directory size. This means we need to track what was enumerated before, and this is where we need to spend MEM.
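As a rough illustration under the same assumptions as the sketches above (the per-entry size estimate here is a stand-in, not the formula go-unixfs actually uses), a partial pass could fold the not-yet-counted loaded shards into a running size total and stop as soon as the threshold is crossed; the `counted` set is the MEM spent to keep successive partial passes from overlapping.

```go
// accumulateLoaded folds the entries of the shards loaded so far into a
// running directory-size estimate, counting each shard only once across
// successive partial passes, and reports whether the running total has
// reached the threshold (HAMTShardingSize in the proposal).
func accumulateLoaded(cache *nodeCache, counted map[cid.Cid]bool, total *int, threshold int) bool {
	cache.mu.Lock()
	defer cache.mu.Unlock()
	for c, nd := range cache.nodes {
		if counted[c] {
			continue // already counted in a previous partial pass
		}
		counted[c] = true
		for _, lnk := range nd.Links() {
			// Stand-in per-entry cost: link name plus CID length.
			*total += len(lnk.Name) + len(lnk.Cid.Bytes())
		}
		if *total >= threshold {
			// Still above the threshold; no need to fetch or enumerate
			// the rest of the DAG for this check.
			return true
		}
	}
	return false
}
```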
Contributor

We ideally also don't want to fetch data that shouldn't be needed. For example, we want to minimize the cases where doing a modification while offline results in a "block not found" error even though we have the relevant data locally.

@schomatis
Contributor Author

We're instead going in this direction: #94 (comment)

@schomatis schomatis closed this Jun 23, 2021
@schomatis schomatis deleted the schomatis/hamt/partial-enum branch August 27, 2021 18:36