[ACTION] Create Falco benchmark workflow #121
Let's work together on this @nikimanoledaki

Completing this should resolve this error: #122 (comment)

TODO:
Hi folks, @locomundo, @rossf7 👋 We've come across a missing feature in GitHub Actions 😬

**Problem**

A reusable workflow that is in a separate repository can only be referenced in the top-level `uses` field of a job. However, this top-level `uses` field does not evaluate expressions. It is possible to evaluate expressions when using a nested `uses` field in a step. However, unfortunately, a nested `uses` in a step cannot call a reusable workflow. I tried to set only part of it as an expression that should evaluate but this is not possible either, see error:

I found multiple issues referencing this problem, so it seems to be a missing feature, e.g. https://github.com/orgs/community/discussions/9050

I'm going to look into some elegant solutions to work around this and report back.
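A minimal sketch of the pattern that fails, assuming a hypothetical `benchmark_path` input that holds the reusable-workflow reference (the workflow name and input are illustrative, not from this repo):

```yaml
# Illustrative caller workflow for the limitation described above.
name: run-benchmark
on:
  workflow_dispatch:
    inputs:
      benchmark_path:
        description: "Reusable workflow reference, e.g. org/repo/.github/workflows/benchmark.yaml@main"
        required: true
        type: string

jobs:
  benchmark:
    # Fails: the job-level `uses` must be a literal string. GitHub Actions does
    # not evaluate ${{ }} expressions here, so the target workflow cannot be
    # chosen at runtime.
    uses: ${{ inputs.benchmark_path }}
```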
What if we just keep the benchmark workflows in the green reviews repository? In the CNCF project repo they would have the k8s manifests that need to be applied for a benchmark test (in the case of Falco,

Maybe the downside would be that the CNCF project maintainer would have less flexibility and would have to ask us if there are new tests to be included. But we could decide that each file in the directory they specify is a test, and then make the GitHub Action iterate over the files and run one test for each? Not sure if it's easy to do that in GitHub Actions? Does this make any sense?
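A rough sketch of that iteration, assuming the manifests live in a `benchmark-tests/` directory of the project repo and the runner already has access to the cluster (both assumptions, not settled in this thread):

```yaml
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the CNCF project's test repo
        uses: actions/checkout@v4
        with:
          repository: falcosecurity/cncf-green-review-testing
      - name: Run one test per manifest file
        run: |
          # Treat every manifest in the directory as a separate benchmark test.
          for manifest in benchmark-tests/*.yaml; do
            echo "Running benchmark test: ${manifest}"
            kubectl apply -f "${manifest}"
          done
```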
This makes sense! I think that we can keep the benchmark workflows in the Green Reviews repository and point to a script in the POC here: #124 -- @locomundo could you review and merge it if you agree?
[Proposal] to change the `projects.json` schema
@dipankardas011 I'm not sure we're aligned here. I thought rather than running a workflow we would store the manifest URL in projects.json? This is less flexible, as a workflow can contain logic, but it's simpler. We just

@nikimanoledaki @locomundo I realise this is quite a big change from the proposal, so please chime in if I got this wrong or you disagree.

The other worry I have is that for Falco we run the same benchmarks for all 3 configurations. Maybe for a future CNCF project we'll need this, but it's hard to predict those requirements. So I would keep projects.json simple for now.
😂 We don't need it for this change but I like this idea! Rather than hardcoding the 15 mins for Falco in the pipeline, this makes it clear how long the benchmarks run for. I took your structure and adapted it with these changes. WDYT?

```json
{
"projects": [
{
"name": "falco",
"organization": "falcosecurity",
"benchmark": {
"manifest_url": "https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/refs/heads/main/benchmark-tests/deployments.yaml",
"duration_mins": 15
},
"configs": [
"ebpf",
"modern-ebpf",
"kmod"
]
}
]
}
```
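For context, here is one way the pipeline could read these two fields, assuming the file lives at `projects/projects.json` as linked in the task list below (the step itself is illustrative, not the actual pipeline code):

```yaml
      - name: Read Falco's benchmark settings from projects.json
        id: benchmark-config
        run: |
          # Illustrative: extract the manifest URL and run duration with jq.
          url=$(jq -r '.projects[] | select(.name == "falco") | .benchmark.manifest_url' projects/projects.json)
          mins=$(jq -r '.projects[] | select(.name == "falco") | .benchmark.duration_mins' projects/projects.json)
          echo "manifest_url=${url}" >> "$GITHUB_OUTPUT"
          echo "duration_mins=${mins}" >> "$GITHUB_OUTPUT"
```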
Yes, I had used the manifest location URL 🤔 I think it is less obvious from the JSON what kind of file it should be:

```diff
{
"projects": [
{
"name": "falco",
"organization": "falcosecurity",
"benchmark": {
- "manifest_url": "https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/refs/heads/main/benchmark-tests/deployments.yaml",
+ "k8s_manifest_url": "https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/refs/heads/main/benchmark-tests/deployments.yaml",
"duration_mins": 15
},
"configs": [
"ebpf",
"modern-ebpf",
"kmod"
]
}
]
}
```

We can even choose this formatting instead:

```diff
- https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/refs/heads/main/benchmark-tests/deployments.yaml
+ github.com/falcosecurity/cncf-green-review-testing@main:benchmark-tests/deployments.yaml
```

Yet another approach: instead of telling the users to assemble the URL, we can give them fields which they can fill in and we can assemble it independently:

```diff
{
"projects": [
{
"name": "falco",
"organization": "falcosecurity",
"benchmark": {
- "manifest_url": "https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/refs/heads/main/benchmark-tests/deployments.yaml",
+ "manifest_ref": {
+ "git_provider": "github",
+ "organization": "falcosecurity",
+ "repository": "cncf-green-review-testing",
+ "branch": "main",
+ "path": "benchmark-tests/deployments.yaml"
+ },
"duration_mins": 15
},
"configs": [
"ebpf",
"modern-ebpf",
"kmod"
]
}
]
}
```

wdyt
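A sketch of that independent assembly, assuming the `github` value of `git_provider` maps to raw.githubusercontent.com (the mapping and the jq queries are illustrative):

```yaml
      - name: Assemble the raw manifest URL from manifest_ref
        run: |
          # Illustrative: rebuild the raw URL from the structured fields above.
          ref=$(jq -r '.projects[] | select(.name == "falco") | .benchmark.manifest_ref' projects/projects.json)
          org=$(echo "$ref" | jq -r '.organization')
          repo=$(echo "$ref" | jq -r '.repository')
          branch=$(echo "$ref" | jq -r '.branch')
          path=$(echo "$ref" | jq -r '.path')
          # Assumption: provider "github" resolves to raw.githubusercontent.com.
          echo "https://raw.githubusercontent.com/${org}/${repo}/refs/heads/${branch}/${path}"
```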
@dipankardas011 I'm fine with renaming to
Updated the schema
Part of Proposal-002 - Run the CNCF project benchmark tests as part of the automated pipeline
Task Description
This issue tracks the implementation part of proposal 2: create a GitHub Actions workflow in this repo that runs the 3 benchmark tests created by the Falco team.
The 3 manifest files can be found here https://github.com/falcosecurity/cncf-green-review-testing/tree/main/benchmark-tests
See the Design Details section of proposal 2 for more detail on the benchmark workflow and jobs.
#118 is related: it implements the pipeline changes that call this benchmark workflow.
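A minimal sketch of the shape this workflow could take, assuming it receives the manifest URL as a `workflow_call` input (the workflow and input names are illustrative, not the final design from the proposal):

```yaml
name: benchmark
on:
  workflow_call:
    inputs:
      benchmark_manifest_url:
        required: true
        type: string

jobs:
  run-benchmark:
    runs-on: ubuntu-latest
    steps:
      - name: Apply the Falco benchmark test manifests
        run: kubectl apply -f "${{ inputs.benchmark_manifest_url }}"
```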
Tasks
- [benchmark_path](https://github.com/cncf-tags/green-reviews-tooling/blob/main/projects/projects.json#L6)