e2e upgrade #2352

Merged
merged 24 commits into ITISFoundation:master on Jun 15, 2021

Conversation

@ignapas ignapas (Contributor) commented May 27, 2021

What do these changes do?

  • Uses a combination of puppeteer, jest-puppeteer and jest that appears to be more stable: tests run more consistently across platforms (macOS, Linux, etc.) and screenshots no longer hang.
  • Checks for file names instead of only the number of files.
  • Adds a non-GUI method to delete the studies.
  • When puppeteer waits, a reason can be included that will be logged (a minimal sketch follows this list).
  • Flash messages are logged.
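
A minimal sketch of what such a logged wait could look like (utils.sleep, takeScreenshot and the two-argument waitFor all appear in the diff excerpts further down in this review; the exact body shown here is an assumption, not the PR's verbatim implementation):

async waitFor(waitForMs, reason) {
  if (reason) {
    // log why the test is pausing, so CI output is self-explanatory
    console.log(`Waiting ${waitForMs} ms: ${reason}`);
  }
  await utils.sleep(waitForMs);
  await this.takeScreenshot('waitFor_finished');
}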

Related issue/s

How to test

Checklist

@codecov codecov bot commented May 27, 2021

Codecov Report

Merging #2352 (4413e43) into master (bb62cf8) will decrease coverage by 0.0%.
The diff coverage is n/a.

Impacted file tree graph

@@           Coverage Diff            @@
##           master   #2352     +/-   ##
========================================
- Coverage    74.8%   74.7%   -0.1%     
========================================
  Files         516     516             
  Lines       20033   20033             
  Branches     1971    1971             
========================================
- Hits        14986   14974     -12     
- Misses       4530    4538      +8     
- Partials      517     521      +4     
Flag Coverage Δ
integrationtests 67.3% <ø> (-0.1%) ⬇️
unittests 68.2% <ø> (-0.1%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
...e_director_v2/modules/db/repositories/comp_runs.py 88.3% <0.0%> (-4.7%) ⬇️
...c/simcore_service_director_v2/modules/scheduler.py 93.5% <0.0%> (-3.3%) ⬇️
...ages/service-library/src/servicelib/aiopg_utils.py 87.5% <0.0%> (-3.2%) ⬇️
...webserver/computation_comp_tasks_listening_task.py 84.0% <0.0%> (-2.0%) ⬇️
.../director/src/simcore_service_director/producer.py 60.8% <0.0%> (-0.3%) ⬇️

@ignapas ignapas marked this pull request as ready for review June 1, 2021 10:00
@GitHK GitHK (Contributor) left a comment

👍 just a question below

}
catch(err) {
  // mark the run as failed and log the error
  tutorial.setTutorialFailed(true);
  console.log('Tutorial error: ' + err);
}
finally {
  // clean-up runs whether the tutorial passed or failed
  await tutorial.toDashboard();
  await tutorial.removeStudy2(studyId);
}
Contributor

👍

@odeimaiz odeimaiz (Member) commented Jun 1, 2021

I see the purpose of removeStudy2. Why don't you rename it and move the call to the catch section? (and keep the original way to removeStudy)
Also, removeStudy2 doesn't need to go toDashboard

Member

Agree with @odeimaiz; then we could detect when removing the study fails and still remove the studies.

await utils.sleep(waitFor);
await this.takeScreenshot('waitFor_finished');
Contributor

nice

Comment on lines +87 to +88
width: 1680,
height: 950
Contributor

why was this done?

Contributor Author

It was too big for my laptop screen when checking it in demo mode.
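
For context, a viewport like this is typically set in the jest-puppeteer launch config. A sketch assuming a standard jest-puppeteer.config.js (only the width/height values come from the diff above; the file name and surrounding structure are the library's usual convention, not taken from this PR):

// jest-puppeteer.config.js
module.exports = {
  launch: {
    defaultViewport: {
      width: 1680,   // small enough to fit a laptop screen in demo mode
      height: 950
    }
  }
};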

@odeimaiz odeimaiz (Member) left a comment

  • As I said, I would not remove the screenshooter.
  • removeStudy2 uses Resources instead of interacting with the GUI for deleting a study. What's the benefit?
  • Checking file names is only done in 3 out of 5 tutorials, why?

@ignapas ignapas (Contributor, Author) commented Jun 1, 2021

> • As I said, I would not remove the screenshooter.
> • removeStudy2 uses Resources instead of interacting with the GUI for deleting a study. What's the benefit?
> • Checking file names is only done in 3 out of 5 tutorials, why?

  • I will make the screenshooter work only during waiting time.
  • Using the Resources class makes it more resilient: it can always remove the study, no matter the state of the interface. Previously, studies were not removed because the test failed halfway through or it was not possible to go back to the Dashboard.
  • I will check the remaining two tutorials, my bad.

@sanderegg sanderegg (Member) left a comment

Very nice!
Please just consider the issue with the Kember test.

console.log("Study ID:", studyId);

const workbenchData = utils.extractWorkbenchData(studyData["data"]);
await tutorial.waitForServices(workbenchData["studyId"], [workbenchData["nodeIds"][0]]);

  // Wait for the output files to be pushed
- await tutorial.waitFor(30000);
+ await tutorial.waitFor(30000, 'Wait for the output files to be pushed');
Member

Nice, then we can probably remove the duplicated comments.

tests/e2e/tutorials/tutorialBase.js (outdated review thread, resolved)
async removeStudy2(studyId) {
  console.log(`Removing study ${studyId}`);
  const resp = await this.__page.evaluate(async function(studyId) {
    return await osparc.data.Resources.fetch('studies', 'delete', {
Member

cool
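
The pattern worth noting: page.evaluate runs a function inside the browser context, so the frontend's own data layer performs the delete with the logged-in session, regardless of the GUI's state. A minimal sketch of the same idea with plain fetch (the /v0/projects route and the boolean return are assumptions for illustration; the PR itself goes through osparc.data.Resources, as shown above):

async function removeStudyViaApi(page, studyId) {
  // executed in the page context: the app's cookies/session are reused,
  // so no dashboard navigation or GUI interaction is needed
  return await page.evaluate(async (id) => {
    const resp = await fetch(`/v0/projects/${id}`, { method: 'DELETE' });  // route is an assumption
    return resp.ok;
  }, studyId);
}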


@pcrespov pcrespov (Member) left a comment

What is the drawback of having continuous screenshots that made you decide to remove them? I also find them very useful for debugging (together with the client's log), and since the last update they don't take up a lot of space anymore.

@ignapas ignapas (Contributor, Author) commented Jun 3, 2021

> What is the drawback of having continuous screenshots that made you decide to remove them? I also find them very useful for debugging (together with the client's log), and since the last update they don't take up a lot of space anymore.

IMO it's not that they are bad in themselves, even though there are things I don't like: most of them are useless, and the periodicity seems a bit arbitrary (the same goes for the waiting times). Also, we would like to know what went wrong just by reading a descriptive log, rather than opening screenshots and guessing what happened based on deep knowledge of the platform. By disabling them, I am trying to force ourselves to do it differently.

For example: instead of taking one screenshot every two seconds, take screenshots only when they are needed: before and after a click or action is performed, or whenever a notification appears on the screen (although this could just be logged, I will try that). And instead of waiting a fixed amount of time for stuff to happen, wait for the DOM to change in some way (see the sketch below). @odeimaiz what do you think?
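
A sketch of the DOM-based waiting idea using puppeteer's stock page.waitForFunction (the selector and the expectedFileCount variable are hypothetical, just to show a fixed sleep being replaced by a condition):

// instead of: await tutorial.waitFor(30000, 'Wait for the output files to be pushed');
// block until the file entries actually appear in the DOM, or time out:
await page.waitForFunction(
  expected => document.querySelectorAll('.file-tree-item').length >= expected,  // selector is hypothetical
  { timeout: 30000 },
  expectedFileCount
);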

@odeimaiz odeimaiz (Member) commented Jun 3, 2021

> > What is the drawback of having continuous screenshots that made you decide to remove them? I also find them very useful for debugging (together with the client's log), and since the last update they don't take up a lot of space anymore.
>
> IMO it's not that they are bad in themselves, even though there are things I don't like: most of them are useless, and the periodicity seems a bit arbitrary (the same goes for the waiting times). Also, we would like to know what went wrong just by reading a descriptive log, rather than opening screenshots and guessing what happened based on deep knowledge of the platform. By disabling them, I am trying to force ourselves to do it differently.
>
> For example: instead of taking one screenshot every two seconds, take screenshots only when they are needed: before and after a click or action is performed, or whenever a notification appears on the screen (although this could just be logged, I will try that). And instead of waiting a fixed amount of time for stuff to happen, wait for the DOM to change in some way. @odeimaiz what do you think?

I agree that better logging would help, but I still think this optimization goes a step too far. I would keep the screenshooter as is.

@ignapas ignapas requested a review from sanderegg June 3, 2021 14:19
@ignapas ignapas requested review from odeimaiz and pcrespov June 3, 2021 14:19
Conflicts: tests/e2e/package.json
@pcrespov pcrespov (Member) left a comment

Some recommendations on the PR:

  • Add a more descriptive title. It helps us when we build release notes
  • Assign the case to yourself
  • Assign this sprint as Milestone
  • Add labels to classify the PR
  • Comment and resolve all reviews (now there is a new feature "Conversations")

@ignapas ignapas changed the title some e2e e2e upgrade Jun 4, 2021
@ignapas ignapas self-assigned this Jun 4, 2021
@ignapas ignapas added this to the Chinchilla milestone Jun 4, 2021
@ignapas ignapas added the e2e Bugs found by or related to the end-2-end testing label Jun 4, 2021
@odeimaiz odeimaiz (Member) left a comment

  • removeStudy2: rename it and move it to the catch section.
  • Leave the screenshooter as is. If you want to start adding logs that eventually make the every-2-seconds screenshot useless, very good, but in the meantime I wouldn't touch it.

@ignapas ignapas (Contributor, Author) commented Jun 10, 2021

> • removeStudy2: rename it and move it to the catch section.
> • Leave the screenshooter as is. If you want to start adding logs that eventually make the every-2-seconds screenshot useless, very good, but in the meantime I wouldn't touch it.

done

@ignapas ignapas requested review from odeimaiz and pcrespov June 10, 2021 13:42
@odeimaiz odeimaiz (Member) commented

You win....

@pcrespov pcrespov (Member) left a comment

👍

@ignapas ignapas merged commit a18f686 into ITISFoundation:master Jun 15, 2021
@sanderegg sanderegg modified the milestones: Chinchilla, Marmoset Jun 30, 2021