
chore: Add test run entity (no-changelog) #11832

Open
wants to merge 14 commits into base: ai-428-testrunner-kick-off-evaluation-workflow

Conversation

burivuhster (Contributor)

Summary

To store the status and results of each test run, we need a new DB entity.
This PR:

  • adds a new table for storing test runs
  • adds typeorm entity and repository
  • creates a new test run entry on each test run and updates its status during workflow test execution

Related Linear tickets, GitHub issues, and Community forum posts

Review / Merge checklist

  • PR title and summary are descriptive. (conventions)
  • Docs updated or follow-up ticket created.
  • Tests included.
  • PR Labeled with release/backport (if the PR is an urgent fix that needs to be backported)

@n8n-assistant bot added labels core (Enhancement outside /nodes-base and /editor-ui) and n8n team (Authored by the n8n team) on Nov 21, 2024

codecov bot commented Nov 21, 2024

Codecov Report

Attention: Patch coverage is 67.64706% with 11 lines in your changes missing coverage. Please review.

Files with missing lines | Patch % | Missing lines
...c/databases/repositories/test-run.repository.ee.ts | 30.76% | 9 missing ⚠️
packages/cli/src/databases/entities/test-run.ee.ts | 81.81% | 0 missing, 2 partials ⚠️


@burivuhster burivuhster changed the base branch from master to ai-428-testrunner-kick-off-evaluation-workflow November 22, 2024 10:11
@burivuhster burivuhster marked this pull request as ready for review November 22, 2024 10:15
@burivuhster burivuhster requested a review from a team as a code owner November 22, 2024 10:15
@burivuhster (Contributor Author)

⚠️ NOTE: The branch for this PR is based on ai-428-testrunner-kick-off-evaluation-workflow. It will be rebased on master after #11757 is merged.

completedAt: Date;

@Column(jsonColumnType, { nullable: true })
metrics: IDataObject;
Contributor:
IDataObject is very problematic: it's basically a generic record, so it doesn't provide any type safety. Do we have some idea of what these metrics are going to be? Let's preferably use a more precise type if possible. If it's not known, we can use unknown, and then you have to parse/validate the value before using it.

Contributor Author:
For the POC we assume metrics will be a list of key:value pairs, where the values are numbers. We have a rough idea of how this could develop over time, but you are right: we can narrow the type now and expand it later.
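The narrowing discussed above could be sketched like this, following the reviewer's suggestion to treat the stored value as unknown and validate before use. TestRunMetrics and isTestRunMetrics are hypothetical names for illustration, not code from this PR:

```typescript
// Hypothetical narrowed type for the POC: metric name -> numeric value.
type TestRunMetrics = Record<string, number>;

// Type guard that validates an `unknown` value (e.g. a JSON column read
// back from the DB) before it is treated as TestRunMetrics.
function isTestRunMetrics(value: unknown): value is TestRunMetrics {
  if (typeof value !== 'object' || value === null || Array.isArray(value)) {
    return false;
  }
  return Object.values(value).every((v) => typeof v === 'number');
}

// Usage: parse first, then validate.
const parsed: unknown = JSON.parse('{"accuracy": 0.92, "latencyMs": 120}');
if (isTestRunMetrics(parsed)) {
  // Inside this branch `parsed` is typed as TestRunMetrics.
  console.log(Object.keys(parsed).length);
}
```

This keeps the column flexible at the storage layer while forcing every read path through one validation point, which is easy to widen later if the metric shape grows.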

runAt: Date;

@Column(datetimeColumnType)
completedAt: Date;
Contributor:
This should probably be nullable, right?

status: TestRunStatus;

@Column(datetimeColumnType)
runAt: Date;
Contributor:
Should this be nullable or is the test run created directly in the running state?
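The two nullability questions above both point at the same lifecycle: each timestamp is only meaningful after its status transition, so both columns would be nullable. A minimal sketch of that shape (TestRunLike and createTestRun are illustrative names, not code from this PR):

```typescript
// Status values as they appear in this PR's snippets.
type TestRunStatus = 'new' | 'running' | 'completed';

// Hypothetical shape: both timestamps stay null until the matching
// status transition happens, which is why the columns would be nullable.
interface TestRunLike {
  status: TestRunStatus;
  runAt: Date | null; // set when the run moves to 'running'
  completedAt: Date | null; // set when the run moves to 'completed'
}

// A freshly created test run has neither timestamp yet.
function createTestRun(): TestRunLike {
  return { status: 'new', runAt: null, completedAt: null };
}
```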

status: 'new',
});

await this.testRunRepository.save(testRun);
Contributor:
Could we use insert instead, or does .save update something behind the scenes that we need?

Comment on lines 190 to 174
testRun.status = 'running';
testRun.runAt = new Date();
await this.testRunRepository.save(testRun);
Contributor:
This is very core business logic, and it would be nice to have it encapsulated somewhere as a markAsRunning method/function. One option would be the TestRunRepository.
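The encapsulation suggested here (and for markAsCompleted below) could look roughly like this. TestRunStore is a hypothetical in-memory stand-in used so the sketch is self-contained; the real code would put these methods on the TypeORM TestRunRepository, and all names besides the status values are assumptions:

```typescript
type TestRunStatus = 'new' | 'running' | 'completed';

interface TestRun {
  id: string;
  status: TestRunStatus;
  runAt: Date | null;
  completedAt: Date | null;
  metrics: Record<string, number> | null;
}

// In-memory stand-in for the repository, to illustrate where the
// transition logic would live instead of in TestRunnerService.
class TestRunStore {
  private runs = new Map<string, TestRun>();

  createTestRun(id: string): TestRun {
    const run: TestRun = { id, status: 'new', runAt: null, completedAt: null, metrics: null };
    this.runs.set(id, run);
    return run;
  }

  get(id: string): TestRun | undefined {
    return this.runs.get(id);
  }

  // Encapsulated transition: 'new' -> 'running', stamping runAt.
  markAsRunning(id: string): void {
    const run = this.runs.get(id);
    if (!run) throw new Error(`Unknown test run: ${id}`);
    run.status = 'running';
    run.runAt = new Date();
  }

  // Encapsulated transition: -> 'completed', stamping completedAt and metrics.
  markAsCompleted(id: string, metrics: Record<string, number>): void {
    const run = this.runs.get(id);
    if (!run) throw new Error(`Unknown test run: ${id}`);
    run.status = 'completed';
    run.completedAt = new Date();
    run.metrics = metrics;
  }
}
```

With this shape the service only calls markAsRunning/markAsCompleted, and the status/timestamp/metrics invariants live in one place.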

@@ -208,9 +224,17 @@ export class TestRunnerService {
console.log({ evalResult });

// TODO: collect metrics
Contributor:
TODO still needed?

Comment on lines 234 to 226
testRun.status = 'completed';
testRun.completedAt = new Date();
testRun.metrics = aggregatedMetrics;
await this.testRunRepository.save(testRun);
Contributor:
Same here. This should be encapsulated as markAsCompleted.

@@ -208,9 +224,17 @@ export class TestRunnerService {
console.log({ evalResult });

// TODO: collect metrics
metrics.push(evalResult);
Contributor:
Do we need the intermediate result?

@burivuhster burivuhster force-pushed the ai-428-testrunner-kick-off-evaluation-workflow branch from 097f6fd to 5e03e80 Compare November 25, 2024 14:31