
Investigate how to upload bench results to AWS #7669

Closed · Akirathan opened this issue Aug 28, 2023 · 3 comments
Labels: -ci, -tooling (Category: tooling)

Akirathan commented Aug 28, 2023

Motivation

GitHub has a 90-day retention policy for all workflow artifacts, so the artifacts (bench-results.xml) created during benchmark workflow runs are deleted after 90 days. This makes it difficult to maintain the bench_downloader script that generates the bench results website.

Let's investigate how difficult it would be to upload benchmark results directly from the workflow run to AWS.

Issue created from comment:

Issues a warning if the user wants to download benchmarks that have expired according to the GitHub retention policy.

  • The default retention period for all artifacts is 90 days and cannot be changed.

If we need to, it is rather straightforward to add a step to the benchmark job that not only uploads the results as a GH artifact, but also puts them on S3; we already have (or had) such functionality. On S3 we can store the results indefinitely if we want to.
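For illustration, a minimal sketch of such an upload step using the AWS Rust SDK, assuming credentials and region come from the environment (as in the CI snippet further below); the bucket name and object key are hypothetical placeholders:

// Minimal sketch: upload bench-results.xml to S3 with the AWS Rust SDK.
// The bucket name and object key are hypothetical placeholders.
use aws_sdk_s3::primitives::ByteStream;

async fn upload_bench_results() -> Result<(), Box<dyn std::error::Error>> {
    // Credentials and region are taken from the environment.
    let config = aws_config::load_from_env().await;
    let client = aws_sdk_s3::Client::new(&config);
    client
        .put_object()
        .bucket("enso-bench-results")        // hypothetical bucket
        .key("engine/bench-results.xml")     // hypothetical key
        .body(ByteStream::from_path("bench-results.xml").await?)
        .send()
        .await?;
    Ok(())
}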

Uploading to S3 would be very nice; that would solve a lot of problems. Do you know how to do that? Or can you refer me to the point where we used to do that?

I wrote this using GitHub Actions a long time ago. I think we are still doing this, but I'm less familiar with the new Rust CI structure. I think the code below

pub async fn update_manifest(repo_context: &impl IsRepo, edition_file: &Path) -> Result {
    // S3 bucket handle; credentials and region come from the environment.
    let bucket_context = BucketContext {
        client: aws_sdk_s3::Client::new(&aws_config::load_from_env().await),
        bucket: EDITIONS_BUCKET_NAME.to_string(),
        upload_acl: ObjectCannedAcl::PublicRead,
        key_prefix: Some(repo_context.name().to_string()),
    };
    // The edition name is derived from the file name, without its extension.
    let new_edition_name = Edition(
        edition_file
            .file_stem()
            .context("Edition file path is missing filename stem!")?
            .as_str()
            .to_string(),
    );
    ide_ci::fs::expect_file(edition_file)?;
    // Fetch the current manifest, register the new nightly, and note which old ones to drop.
    let manifest = bucket_context.get_yaml::<Manifest>(MANIFEST_FILENAME).await?;
    debug!("Got manifest index from S3: {:#?}", manifest);
    let (new_manifest, nightlies_to_remove) =
        manifest.with_new_nightly(new_edition_name, NIGHTLY_EDITIONS_LIMIT);
    for nightly_to_remove in nightlies_to_remove {
        debug!("Should remove {}", nightly_to_remove);
    }
    // Upload the edition file and the updated manifest back to the bucket.
    let new_edition_filename =
        edition_file.file_name().context("Edition file path is missing filename!")?;
    bucket_context
        .put(new_edition_filename.as_str(), ByteStream::from_path(&edition_file).await?)
        .await?;
    bucket_context.put_yaml("manifest.yaml", &new_manifest).await?;
    Ok(())
}
may be a starting point, but you may need to ask @mwu-tow for more details.

Originally posted by @radeusgd in #7599 (comment)
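
If that helper were reused for benchmark results, a sketch might look like the following; the bucket name, ACL, and key prefix are hypothetical, and BucketContext is the helper type from the snippet above, not an existing job step:

// Hypothetical adaptation of the helper above for bench results
// (to run inside an async job step; names are placeholders).
let bench_bucket = BucketContext {
    client: aws_sdk_s3::Client::new(&aws_config::load_from_env().await),
    bucket: "enso-bench-results".to_string(), // hypothetical bucket
    upload_acl: ObjectCannedAcl::Private,     // results need not be public
    key_prefix: Some("engine".to_string()),   // hypothetical prefix
};
bench_bucket
    .put("bench-results.xml", ByteStream::from_path("bench-results.xml").await?)
    .await?;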

JaroslavTulach commented:

@PabloBuchu mentioned to me that there is going to be a batch execution of Enso scripts, including access to some DB, in Enso Cloud. It would be good to eat our own dog food.

mwu-tow commented Apr 10, 2024

@Akirathan I think this was covered by #9075?

Akirathan commented:

> @Akirathan I think this was covered by #9075?

@mwu-tow Yes, you are right. This issue is no longer relevant.

github-project-automation bot moved this from ❓ New to 🟢 Accepted in Issues Board, Apr 11, 2024
farmaazon moved this from 🟢 Accepted to 🗄️ Archived in Issues Board, May 16, 2024