Sending custom data to reporter #3584
I would also like to send custom data to the reporter. My use case is as follows: I grouped a bunch of test scenarios together in one single test function (to save test execution time). Each test scenario is considered a single test case and has an associated test case ID in the TestRail test management system (https://www.gurock.com/testrail), and I use the TestRail API in a reporter plugin (https://www.npmjs.com/package/testcafe-reporter-html-testrail) to update the status of the test cases. Since I grouped the tests, I want to be able to capture each scenario's status during test execution and make that information available to the reporter. For example:

```js
test('My Test', async t => {
    // Test scenario for caseID_123
    await t.click(<DOM element found>);
    await t.report({ caseID_123: 'PASS' }); // set the status of caseID_123 as PASS

    // Test scenario for caseID_234
    await t.click(<DOM element found>);
    await t.report({ caseID_234: 'PASS' }); // set the status of caseID_234 as PASS

    // Test scenario for caseID_456
    await t.click(<DOM element not found>);
    await t.report({ caseID_456: 'FAIL' }); // set the status of caseID_456 as FAIL or NOT TESTED
});
```

I don't think …
Currently, TestCafe does not have such functionality. However, I think it'll be useful.

Was this enhancement implemented? I really need this feature.

Currently, this feature is still not implemented, but we keep it in mind. You are welcome to submit your PR and contribute to our project.

I'd love to try and create a PR for this. Is there any direction or tips I should know before starting? Digging into a completely new codebase can get confusing sometimes. :)

@danlomantoSamsungNext, I think we need to discuss API extensions first. As it seems to me, simply adding …

@AndreyBelym I think people are looking for more than just a … I believe the thinking was to use the meta() functionality because it's already accessible when building a custom reporter as well.
@danlomantoSamsungNext yes, I've got your point. I discussed it in private with some other team members, and we agreed that it is easier to add metadata modification functions to the test controller:

```ts
type Meta = { [name: string]: string };

interface TestControllerMetaExtensions {
    /**
     * Returns the test metadata as a key-value object.
     * @example
     * test.meta('foo', 'bar')('test', async t => {
     *     console.log(t.meta);
     * });
     * // { 'foo': 'bar' }
     */
    readonly meta: Meta;

    /**
     * Adds a metadata entry to the test metadata.
     * If an entry with the specified name already exists, overrides its value.
     * @param name - The name of the metadata entry to be added.
     * @param value - The value of the metadata entry to be added.
     * @example
     * test.meta('foo', 'bar')('test', async t => {
     *     await t.addMeta('ans', '42');
     *     console.log(t.meta);
     *     await t.addMeta('foo', 'foobar');
     *     console.log(t.meta);
     * });
     * // { 'foo': 'bar', 'ans': '42' }
     * // { 'foo': 'foobar', 'ans': '42' }
     */
    addMeta(name: string, value: string): TestController;

    /**
     * Adds multiple metadata entries to the test metadata.
     * If an entry with a specified name already exists, overrides its value.
     * @param meta - A set of metadata entries to be added.
     * @example
     * test.meta('foo', 'bar')('test', async t => {
     *     await t.addMeta({ 'foo': 'foobar', 'ans': '42' });
     *     console.log(t.meta);
     * });
     * // { 'foo': 'foobar', 'ans': '42' }
     */
    addMeta(meta: Meta): TestController;

    /**
     * Removes metadata entries with the specified names from the test metadata.
     * @param names - A list of metadata names to be removed.
     * @example
     * test.meta({ 'foo': 'bar', 'ans': '42' })('test', async t => {
     *     await t.removeMeta('foo');
     *     console.log(t.meta);
     * });
     * // { 'ans': '42' }
     * test.meta({ 'foo': 'bar', 'ans': '42' })('test', async t => {
     *     await t.removeMeta('foo', 'ans');
     *     console.log(t.meta);
     * });
     * // { }
     * test.meta({ 'foo': 'bar', 'ans': '42' })('test', async t => {
     *     await t.removeMeta(['foo', 'ans']);
     *     console.log(t.meta);
     * });
     * // { }
     */
    removeMeta(...names: (string | string[])[]): TestController;
}
```
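On the reporter side, metadata added this way would surface through the `meta` argument that TestCafe passes to a custom reporter's `reportTestDone` method. A minimal plain-Node sketch, assuming the proposed `addMeta` API lands and reusing the hypothetical `caseID_*` naming from the TestRail example above, of extracting case statuses from such metadata:

```javascript
// Hypothetical helper for a reporter plugin: pulls TestRail case statuses
// (entries like { caseID_123: 'PASS' } added via t.addMeta during the run)
// out of the test metadata object, ignoring unrelated entries.
function collectCaseStatuses (meta) {
    return Object.entries(meta)
        .filter(([key]) => key.startsWith('caseID_'))
        .map(([caseId, status]) => ({ caseId, status }));
}

const statuses = collectCaseStatuses({ caseID_123: 'PASS', caseID_456: 'FAIL', browser: 'chrome' });

console.log(statuses);
// [ { caseId: 'caseID_123', status: 'PASS' }, { caseId: 'caseID_456', status: 'FAIL' } ]
```

A reporter could then feed each `{ caseId, status }` pair to the TestRail API in `reportTestDone`.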
@AndreyBelym That looks absolutely fantastic to me! Thank you so much for digging into that!

Thank you for sharing the detailed information about your use case.
Given that this issue has been open since 2019 with many supporting comments, I have to assume that there really is no intent to implement it. In the hope that I am wrong, I'll add my use case: this is for an investment reporting app.

I have a working TestCafe regression test that loops through multiple clients, loops through the AsOfDates, loops through the available Periods, and examines the various graphs to ensure that the x-axis data is presented according to spec. Obviously, there are many possible points of failure in this process, but in the event of a failure I want to simply write a message documenting the failure and continue to the end of the test. When the test completes, I want to collect any error information and report it to the TestCafe reporter. There was a suggestion (#5563) made some time ago to provide a mechanism to explicitly cancel a test and provide some information, but that was closed last year. I just have to say that, from our perspective, this is a huge omission and one that will count heavily against TestCafe when we are considering a testing framework for future client implementations.
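The "document the failure and keep going" part of this use case can be approximated today with a collect-then-summarize pattern. A minimal sketch in plain JavaScript (no TestCafe internals; the check descriptions are made up for illustration):

```javascript
// Soft-assertion sketch: record failures instead of aborting, then build a
// summary at the end of the test that could be attached to a report.
const failures = [];

function check (description, condition) {
    if (!condition)
        failures.push(description);
}

// Hypothetical graph checks; in a real test these conditions would come
// from Selector assertions evaluated per client/AsOfDate/Period.
check('x-axis shows monthly ticks for Period=1Y', true);
check('x-axis shows quarterly ticks for Period=5Y', false);

const summary = failures.length
    ? `${failures.length} failure(s): ${failures.join('; ')}`
    : 'all checks passed';

console.log(summary);
// 1 failure(s): x-axis shows quarterly ticks for Period=5Y
```

The remaining gap, as the comment notes, is getting that summary into the reporter rather than into a log file.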
Hi @je, it's quite a complicated enhancement because many users have different cases, and this enhancement should satisfy most of them. That is why implementation takes a long time. Thank you for sharing your use case.
Thanks!
Hi, we are using TestCafe to do API testing. We include the API response body in the report output as proof that the API did output a response body. If there is any workaround for the above scenario, that would be appreciated. Thanks for all the effort!
Hi @reinhartbuit, thank you for sharing your case. There is no workaround. However, why can't you just check the response body with …
Hello, I am having the following case: … Thank you!
Hi @marinamarin91, if I understand correctly, you execute all the code inside one test due to the need to re-authenticate on the test page. If Roles are not suitable for you, you can accomplish your task using Custom Actions. You need to put your custom methods' code into separate actions in the config file and return an object that contains a `description` field. After that, you will be able to access the result in the reporter's `reportTestActionDone` method. So, if you use Jenkins, you need to modify the Jenkins reporter code to display your custom information:

```js
const { Selector } = require('testcafe');

module.exports = {
    customActions: {
        async myFirstAction () {
            await this.typeText(Selector('#developer-name'), 'Peter Parker');
            return { description: 'first action description' };
        },
        async mySecondAction () {
            await this.pressKey('backspace').expect(Selector('#developer-name').value).eql('Pete Parker');
            return { description: 'second action description' };
        },
    },
};
```

In this case, you can modify the Jenkins reporter in the following way:

```js
export default function () {
    return {
        // ...
        customActions: [],

        isCustomAction (type) {
            return type === 'run-custom-action';
        },

        reportTestActionDone (actionName, { command }) {
            if (this.isCustomAction(command.type)) {
                const { actionResult, name } = command;
                const description = actionResult?.description || '';

                this.customActions.push({ name, description });
            }
        },

        _renderCustomActionsInfo () {
            this.report += this.indentString('<system-out>\n', 4);

            this.customActions.forEach(({ name, description }) => {
                this.report += this.indentString(`${name}: ${description}\n`, 4);
            });

            this.report += this.indentString('</system-out>\n', 4);
        },

        async reportTestDone (name, testRunInfo) {
            // ...
            if (hasScreenshots || hasVideos)
                this._renderAttachments(testRunInfo, hasScreenshots, hasVideos);

            if (this.customActions.length)
                this._renderCustomActionsInfo();

            this.report += this.indentString('</testcase>\n', 2);
        },
    };
}
```

Using this code, I was able to run the following test:

```js
fixture`A set of examples that illustrate how to use TestCafe API`
    .page`https://devexpress.github.io/testcafe/example/`;

test('Dealing with text using keyboard', async t => {
    await t.customActions.myFirstAction();
    await t.customActions.mySecondAction();
});
```

And got the following report result:

```xml
<testsuite name="TestCafe Tests: Chrome 109.0.0.0 / Ubuntu 22.04" tests="1" failures="0" skipped="0" time="3.908" timestamp="Tue, 14 Feb 2023 06:15:29 GMT" id="9945521c-ac82-4024-8010-0d8b6a65309e">
  <testcase classname="A set of examples that illustrate how to use TestCafe API" name="Dealing with text using keyboard" time="1.779">
    <system-out>
      myFirstAction: first action description
      mySecondAction: second action description
    </system-out>
  </testcase>
</testsuite>
```

Please let me know if this helps.
Release v2.5.0-rc.1 addresses this. |
Tested with v2.5.0; not seeing it with the default spec reporter. Also, what about … I verified …

Huh. Manually updating …

Hi @codambro, TestCafe v2.5.0 uses the following dependency: …
## Purpose

Currently, you need to create your own custom reporter whenever you need to change the reporter output, even if the change is not significant.

## Approach

Add a global `onBeforeWrite` hook that allows you to modify the string passed to the reporter's `write` method.

## API

```ts
export interface WriteInfo {
    initiator: string;
    formattedText: string;
    formatOptions: {
        useWordWrap: boolean;
        indent: number;
    };
    data: undefined | object;
}

interface ReporterHooks {
    onBeforeWrite?: { [reporterName: string]: Function };
}

interface GlobalHooks {
    // ...
    reporter?: ReporterHooks;
}
```

## Example

```js
function formatTaskStartOutput (writeInfo) {
    const { formattedText, formatOptions, data } = writeInfo;
    const { indent, useWordWrap } = formatOptions;
    const { startTime, userAgents, testCount } = data || {};

    writeInfo.formattedText = formattedText.replaceAll(userAgents[0], 'blablabla');
}

function formatFixtureStartOutput (writeInfo) {
    const { name, path, meta } = writeInfo.data || {};

    writeInfo.formattedText = '';
}

function formatTestStartOutput (writeInfo) {
    const { name, meta } = writeInfo.data || {};

    writeInfo.formattedText = '';
}

function formatTestDoneOutput (writeInfo) {
    const { name, testRunInfo, meta } = writeInfo.data || {};

    writeInfo.formattedText = '';
}

function formatTaskDoneOutput (writeInfo) {
    const { endTime, passed, warnings, result } = writeInfo.data || {};

    writeInfo.formattedText = '';
}

function onBeforeWriteHook (writeInfo) {
    const { initiator } = writeInfo;

    if (initiator === 'reportTaskStart')
        formatTaskStartOutput(writeInfo);
    else if (initiator === 'reportFixtureStart')
        formatFixtureStartOutput(writeInfo);
    else if (initiator === 'reportTestStart')
        formatTestStartOutput(writeInfo);
    else if (initiator === 'reportTestDone')
        formatTestDoneOutput(writeInfo);
    else if (initiator === 'reportTaskDone')
        formatTaskDoneOutput(writeInfo);
}

module.exports = {
    hooks: {
        reporter: {
            onBeforeWrite: {
                'spec': onBeforeWriteHook,
            },
        },
    },
};
```

## References

#3584
DevExpress/testcafe-reporter-spec#8

## Pre-Merge TODO

- [ ] Write tests for your proposed changes
- [ ] Make sure that existing tests do not fail
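The hook's contract is simply "mutate `writeInfo` before the reporter writes it." A standalone simulation of that contract (plain Node, not wired into TestCafe; the sample `writeInfo` values are made up):

```javascript
// Simulates the proposed onBeforeWrite hook: given a WriteInfo-shaped object,
// the hook rewrites formattedText in place before the reporter's write call.
function onBeforeWriteHook (writeInfo) {
    if (writeInfo.initiator === 'reportTestDone')
        writeInfo.formattedText = `[custom] ${writeInfo.formattedText}`;
}

const writeInfo = {
    initiator:     'reportTestDone',
    formattedText: 'Dealing with text using keyboard',
    formatOptions: { useWordWrap: false, indent: 2 },
    data:          { name: 'Dealing with text using keyboard' },
};

onBeforeWriteHook(writeInfo);

console.log(writeInfo.formattedText);
// [custom] Dealing with text using keyboard
```

Because the hook receives the already-formatted string plus the raw `data`, it can either tweak the text (as here) or regenerate it entirely from `data`.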
What is your Test Scenario?

We are using TestCafe to measure performance regressions. Our test case does some setup, tests something, and makes sure it's under some X time. I would like to send the time it took (currently just a `console.log`) to our custom reporter so we can track this in a graph and see how performance increases/decreases over time. Currently this is not possible; the closest solution I saw was to use `meta` like here, but I need custom information after the test has passed, not before.

What are you suggesting?

Some way to pass data, either by log or by the test object, into my reporter. The time of the test is not good enough because it includes the setup/teardown of the test.

What alternatives have you considered?

Right now I need to log the results to a file and pick them up in another step of my CI, disconnected completely from TestCafe.