improve behat .feature files #517

Closed
individual-it opened this issue Jan 16, 2018 · 12 comments

@individual-it
Member

This is a summary of the discussion in owncloud/guests#162 and should serve as a base for further discussions.
Current issue:

  1. The .feature files are sometimes hard to understand
  2. A lot of scenarios (of different features) are merged into a single .feature file
  3. Wording of the scenarios is not consistent

Ideas to improve the situation:
(These are just a couple of ideas that should be discussed and improved.)

  1. use declarative descriptions of the tests.

    Use a language that all stakeholders understand (PM, dev, QA)
    Hide the implementation details away in the PHP code

  2. Split every feature into a separate file

    If a feature gets too complicated, e.g. too many variations, split it further.
    If you have to create scenarios with a lot of edge cases, put the edge cases into a separate file someFeatureEdgeCases.feature. This will spare customers/PM from having to read those boring scenarios.

  3. Combine features into Suites

    Features that loosely belong together can go into one suite. This also makes it easier to run tests in parallel.

  4. write a good description for every feature

    Answer the question: Who wants to achieve what, why?
    Every feature file has to have a heading that looks like:

    As a [role]
    I want [feature]
    So that [benefit]
    

    There might be multiple explanation blocks per .feature file, e.g. an admin might want to achieve something different with a specific feature than a user does.

  5. keep Scenarios short

    There is a trade-off here because every setup costs time, so especially in UI tests one long scenario with only one user setup and login would run much faster than multiple short scenarios.

  6. make other stakeholders review your feature files.

    If BDD is all about improving communication between the different parties in the development process, all parties should agree on the description of the feature and the step definitions. If QA wrote a feature and PM has no idea what the description is about or what the steps do, it's probably not declarative enough.
    This might be unrealistic for existing behaviour, and it would slow down test development in cases where a specific feature already exists in the App and QA simply develops tests for it. But in cases where a new behaviour is added to the App, involving all stakeholders would help to make sure everybody (management/customer/developers/qa) is talking about the same functionality.

  7. be consistent in the step definition
    do we use 'When I'm login as "admin"' or 'Given As "admin"'?
    do we use 'When I copy a file' or 'When a file is copied'?

    for UI tests:
    use first person ('When I copy a file') for cases where it's about an action that is done through the UI, and third person ('When a file is copied') in cases where the action is done by any other means, e.g. by occ or an HTTP API.

  8. get stakeholders to write feature files
    for new proposed features or even for features that are not tested yet
    (this is the ideal place to get to with BDD - the stakeholder writes the feature file, then "the team" reviews and negotiates some of the wording, and then the implementation of tests and code is assigned in "the team")

  9. let a native speaker check your language & grammar
    I'm really bad at that, in any given language, @phil-davis is my spelling saviour :-)
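
A sketch of what several of the points above (declarative wording, a feature heading, first person for UI actions) could look like together; the step texts and names here are illustrative, not actual step definitions from the owncloud test suite:

```gherkin
Feature: share files with other users

  As a user
  I want to share files with other users
  So that we can collaborate on documents

  Scenario: share a file with another user using the webUI
    Given user "user1" has been created
    And user "user2" has been created
    And I have logged in as "user1" using the webUI
    When I share the file "lorem.txt" with user "user2"
    Then file "lorem.txt" should be listed in the shared-with-you list of user "user2"
```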

Readings on BDD
What’s in a Story?
Behavior Driven Development (BDD)
BDD 101: Writing Good Gherkin

@PVince81
Contributor

do we use 'When I'm login as "admin"' or 'Given As "admin"'?
do we use 'When I copy a file' or 'When a file is copied'?

This article https://automationpanda.com/2017/01/30/bdd-101-writing-good-gherkin/ suggests always using the third person. See https://automationpanda.com/2017/01/18/should-gherkin-steps-use-first-person-or-third-person/

@patrickjahns
Contributor

A project that is based on BDD/TDD that could provide some ideas/background on how to write feature files:

https://github.com/Sylius/Sylius/tree/master/features/

@phil-davis
Contributor

phil-davis commented Jan 22, 2018

And "Givens should always use present perfect tense, and Whens and Thens should always use present tense." can be a way to distinguish between a step form that is creating state from one that is testing state.
e.g. at the moment there is confusion about the following kind of thing:

Given user "user1" exists
Then user "user1" exists

In the "given" line we are thinking that the step will create the user (if it does not already exist) and thus the step should create state (and always, hopefully, pass). But in the "then" line we expect that the user already exists (from some previous "when" step that creates a user by some method) and we want the code to test if the user exists and fail if the user does not exist.

Those steps could be written:

Given user "user1" has been created
Then user "user1" should exist

(these 2 step definitions would not usually appear in the same scenario :)

At the moment we sometimes write "assure user exists" and similar - that is "unusual" English and it would be nice to replace all those with something better.
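
As an illustration of the tense rule in a full scenario (the step texts are hypothetical): the present-perfect "Given" creates state, the "When" performs the action under test, and the "Then" asserts state:

```gherkin
Scenario: admin creates a user
  Given user "user1" has been deleted
  When the administrator creates user "user1" using the occ command
  Then user "user1" should exist
```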

@PVince81
Contributor

We tried to start writing share feature with Gherkin syntax: owncloud/core#30221

@phil-davis
Contributor

An example of how Given When Then step text could be standardized to avoid being ambiguous and implement various of the above suggestions - PR owncloud/core#30233

@patrickjahns
Contributor

patrickjahns commented Feb 12, 2018

I'd like to raise the point again that currently our scenarios quite often deal with implementation details (if I upload this via the UI, if I upload this via the API, etc.)

I'd propose to get to the point where we start with quite abstract / higher-level test scenarios, which can be run either directly with the API (as Context) or with a UI (as Context).

From my POV we gain the following:

  • we have a more behavior driven approach on the features our product has
  • we can test the same feature / use-case from different angles:
    • if the feature works on an API level
    • if the same feature works when being combined with a server providing the API in question

What we lose:

  • developer-specific tests
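
One possible way to express this in Behat would be to point two suites at the same feature files and swap only the context classes; a minimal behat.yml sketch (the context class names here are assumptions, not existing classes):

```yaml
default:
  suites:
    api:
      paths:
        - "%paths.base%/features/sharing"
      contexts:
        - ApiSharingContext    # implements the steps with direct HTTP/API calls
    webUI:
      paths:
        - "%paths.base%/features/sharing"
      contexts:
        - WebUISharingContext  # implements the same steps through the browser
```

Then `behat --suite=api` or `behat --suite=webUI` would run the same declarative scenarios against either implementation.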

@individual-it
Member Author

If we have the steps, and with that the functions, in place to test the different implementations (e.g. upload via API, upload via UI), then we can easily build another layer that has only the "upload" tests and decides itself which ones to run, based on the context

@patrickjahns
Contributor

I wonder if upload is the right scenario to discuss this one - scenarios like sharing might make more sense.

Since sharing can also be done via our clients (iOS, Android, desktop)

Open for further feedback and discussion here

@phil-davis
Contributor

Once we get API and UI tests refactored, renamed to acceptance and running "together" as suites within the acceptance area then we should try doing this with some example feature.

It will be easy enough to write high-level steps and push the code that does the work underneath out of the way. The trickier part is to work out what is actually useful to expose in the Behat-Gherkin. Because, for example, there are high-level business requirements:

As an API client
I can use the API to share a file

and

As a webUI user
I can use the webUI to share a file

but actually the API client is a consumer of the API, and it does care if the API underneath changes. So if someone keeps all the Behat-Gherkin the same but underneath changes all the API endpoint names (offered by the API and used by the tests), then the tests continue to pass, and a "high-level" business person reading the feature files will think all is great. But actually the real-life clients will scream. Once the API has been designed, implemented and published, there is a requirement to support THAT API rather than just any old API.

Similarly for the webUI. Once a webUI layout has been designed and implemented and users get used to it, then there is a requirement that that UI be "reasonably" maintained. "Randomly shuffling the UI elements and workflow around" every release is not actually acceptable. But with very high level feature files, it is possible to shuffle the UI underneath and a high-level business person reading the feature file will think that all is well. But real users will scream each month when the UI workflow is randomized.

So there will be a balance somewhere to find.

@phil-davis
Contributor

phil-davis commented Mar 28, 2018

PRs related to refactoring gherkin acceptance test step text and refactoring underlying API and webUI acceptance test code together:
owncloud/core#30413
owncloud/core#30493
owncloud/core#30584
owncloud/core#30635
owncloud/core#30659
owncloud/core#30667
owncloud/core#30676
owncloud/core#30679
owncloud/core#30681
owncloud/core#30686
owncloud/core#30718
owncloud/core#30722
owncloud/core#30735
owncloud/core#30736
owncloud/core#30743
owncloud/core#30785
owncloud/core#30808
owncloud/core#30830
owncloud/core#30831
owncloud/core#30832
owncloud/core#30851
owncloud/core#30856
owncloud/core#30859
owncloud/core#30863
owncloud/core#30869
owncloud/core#30871
owncloud/core#30893
owncloud/core#30927
owncloud/core#30943

@phil-davis
Contributor

ToDo:

@phil-davis phil-davis self-assigned this Nov 14, 2018
@dpakach
Contributor

dpakach commented Dec 17, 2019

Looks like it's covered in owncloud/docs#156


6 participants