Aggressive S3 metadata caching #556

Merged · 2 commits · Mar 20, 2024
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,11 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## Unreleased
### Added
- Implementation of `ObjectRepository` that can cache small objects on the local file system (e.g. to avoid excessive calls to an S3 repo); see the sketch after this diff
- Optional `S3RegistryCache` component that caches the list of datasets under an S3 repo to avoid very expensive bucket prefix listing calls

## [0.166.1] - 2024-03-14
### Fixed
- Allow OData adapter to skip fields with unsupported data types instead of crashing
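Both cache additions lean on the same property: objects and metadata blocks are content-addressed, so a hash uniquely determines its contents and a cached copy can never go stale. Below is a minimal sketch of the local-FS object cache idea in Rust; the trait and type names are simplified stand-ins, not this repo's actual `ObjectRepository` API:

```rust
use std::path::PathBuf;

// Simplified stand-in for this repo's ObjectRepository trait (hypothetical).
#[async_trait::async_trait]
trait SimpleObjectRepo: Send + Sync {
    async fn get(&self, hash: &str) -> std::io::Result<Vec<u8>>;
}

// Decorator that fronts a slow repository (e.g. S3-backed) with a local file cache.
struct LocalFsCachingRepo<R: SimpleObjectRepo> {
    inner: R,
    cache_dir: PathBuf,
}

#[async_trait::async_trait]
impl<R: SimpleObjectRepo> SimpleObjectRepo for LocalFsCachingRepo<R> {
    async fn get(&self, hash: &str) -> std::io::Result<Vec<u8>> {
        let cached = self.cache_dir.join(hash);
        // Content-addressed objects never change, so a cache hit is always valid.
        if let Ok(bytes) = tokio::fs::read(&cached).await {
            return Ok(bytes);
        }
        // Miss: fetch from the underlying repo, then fill the cache best-effort.
        let bytes = self.inner.get(hash).await?;
        let _ = tokio::fs::write(&cached, &bytes).await;
        Ok(bytes)
    }
}
```

The `S3RegistryCache` applies the same decorator thinking one level up: instead of caching individual objects, it would keep a snapshot of the dataset list so that repeated bucket prefix scans (e.g. S3 `ListObjectsV2` calls) can be skipped.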
40 changes: 26 additions & 14 deletions src/infra/core/src/repos/dataset_repository_local_fs.rs
@@ -82,11 +82,25 @@ impl DatasetRepositoryLocalFs {
))
}

fn build_dataset(layout: DatasetLayout, event_bus: Arc<EventBus>) -> Arc<dyn Dataset> {
Member Author:

Notice that I have severed ties between DatasetRepository implementations and DatasetFactory.

After making a bunch of changes to make DatasetFactory super configurable, I realized that DatasetFactory should be used only when talking to remote repositories, while DatasetRepository implementations should have independent low-level control over how they create dataset instances. Attempting to combine the two just results in a configuration and dependency nightmare.

Arc::new(DatasetImpl::new(
event_bus,
MetadataChainImpl::new(
MetadataBlockRepositoryCachingInMem::new(MetadataBlockRepositoryImpl::new(
ObjectRepositoryLocalFSSha3::new(layout.blocks_dir),
)),
ReferenceRepositoryImpl::new(NamedObjectRepositoryLocalFS::new(layout.refs_dir)),
),
ObjectRepositoryLocalFSSha3::new(layout.data_dir),
ObjectRepositoryLocalFSSha3::new(layout.checkpoints_dir),
NamedObjectRepositoryLocalFS::new(layout.info_dir),
))
}
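
For context on the `MetadataBlockRepositoryCachingInMem` wrapper used above: metadata blocks are immutable and keyed by their hash, so an in-memory map can hold them indefinitely with no invalidation logic. A hypothetical sketch of that pattern (not the actual implementation in this repo):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Hypothetical in-memory cache over an immutable, content-addressed block
// store; `fetch` stands in for the underlying repository call (e.g. S3 GET).
struct InMemBlockCache<F: Fn(&str) -> Vec<u8>> {
    fetch: F,
    cache: RwLock<HashMap<String, Vec<u8>>>,
}

impl<F: Fn(&str) -> Vec<u8>> InMemBlockCache<F> {
    fn get(&self, hash: &str) -> Vec<u8> {
        // Fast path: block already cached; immutability makes it always fresh.
        if let Some(bytes) = self.cache.read().unwrap().get(hash) {
            return bytes.clone();
        }
        // Slow path: fetch once, then serve from memory forever after.
        let bytes = (self.fetch)(hash);
        self.cache
            .write()
            .unwrap()
            .insert(hash.to_string(), bytes.clone());
        bytes
    }
}
```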

// TODO: Make dataset factory (and thus the hashing algo) configurable
fn get_dataset_impl(&self, dataset_handle: &DatasetHandle) -> impl Dataset {
fn get_dataset_impl(&self, dataset_handle: &DatasetHandle) -> Arc<dyn Dataset> {
let layout = DatasetLayout::new(self.storage_strategy.get_dataset_path(dataset_handle));

DatasetFactoryImpl::get_local_fs(layout, self.event_bus.clone())
Self::build_dataset(layout, self.event_bus.clone())
}

// TODO: Used only for testing, but should be removed in the future to discourage
@@ -173,7 +187,7 @@ impl DatasetRepository for DatasetRepositoryLocalFs {
) -> Result<Arc<dyn Dataset>, GetDatasetError> {
let dataset_handle = self.resolve_dataset_ref(dataset_ref).await?;
let dataset = self.get_dataset_impl(&dataset_handle);
Ok(Arc::new(dataset))
Ok(dataset)
}

async fn create_dataset(
@@ -226,14 +240,11 @@ impl DatasetRepository for DatasetRepositoryLocalFs {
}

// It's okay to create a new dataset by this point

let dataset_id = seed_block.event.dataset_id.clone();
let dataset_handle = DatasetHandle::new(dataset_id, dataset_alias.clone());
let dataset_path = self.storage_strategy.get_dataset_path(&dataset_handle);

let layout = DatasetLayout::create(&dataset_path).int_err()?;

let dataset = DatasetFactoryImpl::get_local_fs(layout, self.event_bus.clone());
let dataset = Self::build_dataset(layout, self.event_bus.clone());

// There are three possibilities at this point:
// - Dataset did not exist before - continue normally
@@ -259,7 +270,7 @@ impl DatasetRepository for DatasetRepositoryLocalFs {
};

self.storage_strategy
.handle_dataset_created(&dataset, &dataset_handle.alias)
.handle_dataset_created(dataset.as_ref(), &dataset_handle.alias)
.await?;

tracing::info!(
@@ -277,7 +288,7 @@ impl DatasetRepository for DatasetRepositoryLocalFs {

Ok(CreateDatasetResult {
dataset_handle,
dataset: Arc::new(dataset),
dataset,
head,
})
}
@@ -467,7 +478,7 @@ impl DatasetSingleTenantStorageStrategy {
dataset_alias: &DatasetAlias,
) -> Result<DatasetSummary, ResolveDatasetError> {
let layout = DatasetLayout::new(dataset_path);
let dataset = DatasetFactoryImpl::get_local_fs(layout, self.event_bus.clone());
let dataset = DatasetRepositoryLocalFs::build_dataset(layout, self.event_bus.clone());
dataset
.get_summary(GetSummaryOpts::default())
.await
@@ -643,7 +654,7 @@ impl DatasetMultiTenantStorageStrategy {
dataset_id: &DatasetID,
) -> Result<DatasetAlias, ResolveDatasetError> {
let layout = DatasetLayout::new(dataset_path);
let dataset = DatasetFactoryImpl::get_local_fs(layout, self.event_bus.clone());
let dataset = DatasetRepositoryLocalFs::build_dataset(layout, self.event_bus.clone());
match dataset.as_info_repo().get("alias").await {
Ok(bytes) => {
let dataset_alias_str = std::str::from_utf8(&bytes[..]).int_err()?.trim();
@@ -871,11 +882,12 @@ impl DatasetStorageStrategy for DatasetMultiTenantStorageStrategy {
) -> Result<(), InternalError> {
let dataset_path = self.get_dataset_path(dataset_handle);
let layout = DatasetLayout::new(dataset_path);
let dataset = DatasetFactoryImpl::get_local_fs(layout, self.event_bus.clone());
let dataset = DatasetRepositoryLocalFs::build_dataset(layout, self.event_bus.clone());

let new_alias =
DatasetAlias::new(dataset_handle.alias.account_name.clone(), new_name.clone());
self.save_dataset_alias(&dataset, &new_alias).await?;
self.save_dataset_alias(dataset.as_ref(), &new_alias)
.await?;

Ok(())
}