
feat: mediation and transport tests for AFJ #448

Merged

Conversation

@TimoGlastra commented Mar 11, 2022

Adds support for mediation and transport tests between AFJ agents, and also supports mediation interop between ACA-Py and AFJ. I think @T004-RFC0211 is the most valuable test here: it has two AFJ agents without an inbound endpoint (the situation in mobile environments) connect to an ACA-Py mediator and request mediation. They then connect with each other, which is only possible because of the mediator.

I've disabled the deny-mediation-request tests for now (marked them @wip) as there's a bit of a configuration issue. When connecting to an agent without an inbound endpoint, we need to auto-respond to the message to be able to leverage return routing. If we make the acceptance a separate step, the transport will be closed and the mediator can't reach the recipient agent anymore, because it has no inbound endpoint. But because auto-acceptance is enabled, we can't deny anymore. Maybe later we can enable this test again by adding more advanced startup parameters that determine whether mediation auto-accept should be enabled, but for now this seemed like a more valuable use case than testing the deny flow. Some context on why we need auto-acceptance to be enabled: openwallet-foundation/credo-ts#668
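
For context, the return-routing mechanism in play here is Aries RFC 0092 (Transport Return Route): an agent without an inbound endpoint decorates its outbound messages so that replies come back over the same, still-open transport. A minimal sketch of roughly what the mediation request looks like on the wire (the @id value is a placeholder):

    # Sketch of a DIDComm v1 mediate-request (RFC 0211) carrying the RFC 0092
    # transport decorator. "return_route": "all" asks the mediator to send all
    # replies back over the transport this message arrived on, which only works
    # while that transport (e.g. a WebSocket) stays open -- hence the need to
    # auto-accept the mediation request before the connection is closed.
    mediate_request = {
        "@type": "https://didcomm.org/coordinate-mediation/1.0/mediate-request",
        "@id": "00000000-0000-0000-0000-000000000000",  # placeholder id
        "~transport": {"return_route": "all"},
    }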

TODO:

  • Update all CI files to exclude the new tests that are not supported
  • Test that nothing is broken in other flows
  • Enable mediation tests between AFJ and ACA-Py (running into some config issues)...

Currently uses a custom build of AFJ until the following PRs are merged (but we can already merge this PR):

@TimoGlastra marked this pull request as draft March 11, 2022 14:57
@dtlab-labcn-admin commented Mar 11, 2022

Hello @TimoGlastra, we know your PR is still a draft, but we managed to run the following command at least twice on your branch without problems:
LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io TAILS_SERVER_URL_CONFIG=https://tails.vonx.io ./manage run -d acapy-main -t @AcceptanceTest -t ~@wip -t @AIP10,@RFC0211 -t ~@DIDExchangeConnection
The run finally yields:

(...)
5 features passed, 0 failed, 5 skipped
29 scenarios passed, 0 failed, 106 skipped
212 steps passed, 0 failed, 954 skipped, 0 undefined
Took 15m29.131s
(...)

@TimoGlastra force-pushed the feature/afj-mediation branch from d81ae2f to 9851fce March 13, 2022 11:26
@TimoGlastra force-pushed the feature/afj-mediation branch from 3f6b2c7 to 76f44ce March 13, 2022 13:33
@TimoGlastra marked this pull request as ready for review March 13, 2022 13:34
@TimoGlastra requested review from nodlesh and swcurran March 13, 2022 13:34
@swcurran (Contributor) commented:

Did some testing with this and got mixed results. Not sure what to expect, especially with the existing tests not in good shape right now. Here is what I found:

  • After PR checkout, rebuilt the agents: ./manage rebuild -a acapy-main -a javascript
  • Ran tests for RFC0211 and RFC0025, as I see they were adjusted in the PR: ./manage run -d acapy-main -b javascript -t @AIP10,@RFC0025,@RFC0211 -t ~@wip -t ~@DIDExchangeConnection -t ~@T004-RFC0211
  • Result: A number of tests failed for RFC0036, RFC0037, RFC0160 and RFC0211 (below).

Is that expected? A number of the mediator tests are passing.

Failing scenarios:
  features/0036-issue-credential.feature:9  Issue a credential with the Holder beginning with a proposal
  features/0036-issue-credential.feature:37  Issue a credential with the Holder beginning with a proposal with negotiation
  features/0036-issue-credential.feature:53  Issue a credential with the Issuer beginning with an offer
  features/0036-issue-credential.feature:66  Issue a credential with the Issuer beginning with an offer with negotiation
  features/0037-present-proof.feature:19  Present Proof where the prover does not propose a presentation of the proof and is acknowledged -- @1.1 
  features/0037-present-proof.feature:20  Present Proof where the prover does not propose a presentation of the proof and is acknowledged -- @1.2 
  features/0037-present-proof.feature:55  Present Proof of specific types and proof is acknowledged with a Drivers License credential type -- @1.1 
  features/0037-present-proof.feature:56  Present Proof of specific types and proof is acknowledged with a Drivers License credential type -- @1.2 
  features/0037-present-proof.feature:74  Present Proof of specific types and proof is acknowledged with a Biological Indicators credential type -- @1.1 
  features/0037-present-proof.feature:91  Present Proof of specific types and proof is acknowledged with multiple credential types -- @1.1 
  features/0037-present-proof.feature:109  Present Proof where the prover does not propose a presentation of the proof and is acknowledged -- @1.1 
  features/0037-present-proof.feature:110  Present Proof where the prover does not propose a presentation of the proof and is acknowledged -- @1.2 
  features/0037-present-proof.feature:149  Present Proof where the prover has proposed the presentation of proof in response to a presentation request and is acknowledged -- @1.1 
  features/0037-present-proof.feature:150  Present Proof where the prover has proposed the presentation of proof in response to a presentation request and is acknowledged -- @1.2 
  features/0037-present-proof.feature:170  Present Proof where the prover has proposed the presentation of proof from a different credential in response to a presentation request and is acknowledged -- @1.1 
  features/0037-present-proof.feature:171  Present Proof where the prover has proposed the presentation of proof from a different credential in response to a presentation request and is acknowledged -- @1.2 
  features/0037-present-proof.feature:227  Present Proof where the prover starts with a proposal the presentation of proof and is acknowledged -- @1.1 
  features/0160-connection.feature:21  establish a connection between two agents -- @1.1 
  features/0160-connection.feature:46  Connection established between two agents but inviter sends next message to establish full connection state -- @1.1 
  features/0160-connection.feature:52  Inviter Sends invitation for one agent second agent tries after connection
  features/0160-connection.feature:69  Inviter Sends invitation for one agent second agent tries during first share phase
  features/0160-connection.feature:99  Establish a connection between two agents who already have a connection initiated from invitee
  features/0211-coordinate-mediation.feature:26  Request mediation with the mediator accepting the mediation request -- @2.1 0160 connection
  features/0211-coordinate-mediation.feature:50  Request mediation with the mediator accepting the mediation request and creating a connection using the mediator -- @2.1 0160 connection

1 feature passed, 4 failed, 6 skipped
8 scenarios passed, 24 failed, 104 skipped
86 steps passed, 24 failed, 1064 skipped, 0 undefined
Took 2m14.382s

@TimoGlastra (Author) commented:

That's definitely not what should happen! I see you ran without -t @AcceptanceTest, which is included in the CI. But even then there shouldn't be that many tests failing. Let me clean/rebuild and test again...

@TimoGlastra (Author) commented:

Seems I broke some things 😳; should be fixed now.

@swcurran (Contributor) commented:

Looking much better! Running with -d javascript passes all but these two tests (below). Trying it now with ACA-Py and AFJ. If that works and you don't stop me because of the two errors below, I'll merge it.

Failing scenarios:
  features/0160-connection.feature:52  Inviter Sends invitation for one agent second agent tries after connection
  features/0160-connection.feature:69  Inviter Sends invitation for one agent second agent tries during first share phase

4 features passed, 1 failed, 6 skipped
30 scenarios passed, 2 failed, 104 skipped
239 steps passed, 2 failed, 933 skipped, 0 undefined
Took 10m19.938s

@swcurran left a review comment:

LGTM -- tested with acapy-main and javascript, each alone and with each other. Still have two tests failing (see comments), but that may be unrelated. Would like them resolved if you could take a look at them.

@swcurran merged commit b87f842 into openwallet-foundation:main Mar 14, 2022
@@ -218,7 +218,7 @@ def get_agent_args(self):
("--label", self.label),
# "--auto-ping-connection",
# "--auto-accept-invites",
# "--auto-accept-requests",
"--auto-accept-requests",

@TimoGlastra (Author) replied:

Yes, this is intentional and needed for the mediation tests to succeed. I didn't notice it broke a test.

Here's some more context on why this is needed: openwallet-foundation/credo-ts#668

Let me try to figure out a way to only enable auto-accept for tests that need it. I think I can add this option to the agent (re)start endpoint.

A contributor replied:

This isn't documented very well, but the README mentions a way to pass a configuration file to the ACA-Py start command in the backchannel using AGENT_CONFIG_FILE: https://github.com/hyperledger/aries-agent-test-harness/blob/main/README.md#using-aath-agents-as-services

So using a mediation config file, we could either have a new runset for mediation or, better yet, just add another run section in the workflow file before uploading the results to Allure. Not sure whether the action would need changes to support this. Maybe another action that just runs tests instead of building everything all over again.

The base tests require all auto options to be off in ACA-Py.
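
For illustration only, such a mediation config file might look something like this, assuming ACA-Py's YAML argument-file format (the keys mirror CLI option names; the exact set of options needed is an assumption):

    # Hypothetical mediation config passed to the backchannel via
    # AGENT_CONFIG_FILE; option names mirror ACA-Py CLI flags.
    auto-accept-requests: true
    open-mediation: true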

@TimoGlastra (Author) replied:

I think we could solve this using the /agent/command/agent/start endpoint, which would be a lot simpler than adding more runsets, as we can just do it dynamically at runtime. I think we should aim for as much dynamic configuration as possible, since it's becoming more and more complex to run the tests with every agent having a very specific feature support matrix.
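
To make that concrete, a rough sketch of what a per-test restart through that endpoint could look like; the port and payload schema here are assumptions for illustration, not the current backchannel API:

    # Hypothetical call to a backchannel's start endpoint with custom
    # parameters; the JSON payload shape shown here is illustrative only.
    import requests

    BACKCHANNEL_URL = "http://localhost:9020"  # assumed backchannel port

    response = requests.post(
        f"{BACKCHANNEL_URL}/agent/command/agent/start",
        json={"parameters": [{"name": "mediator-auto-accept", "value": True}]},
    )
    response.raise_for_status()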

A contributor replied:

We still have the failing test mentioned in issue #456; we should make a decision on this and implement it. I still lean toward using the same workflow with a follow-on test-run step. That would not be another runset, just another execution of more tests within the same runset.

Another way we can approach this is to actually have another agent run, like an Auto_Acme, and if that agent is needed then the test scenarios for mediation can begin with something like:

    Given we have "2" agents
      | name      | role      |
      | Auto_Acme | responder |
      | Bob       | requester |

We can come up with a better name than Auto_Acme, but you get the point. This way Acme stays the same, and the new agent comes into play when needed. Thoughts?

@TimoGlastra (Author) replied:

I took a stab at this but got distracted by other tasks; I will try to finish it. The approach I've taken is to restart the agent using different parameters.

Re your comment:

There is also the argument that once we start up an agent with different parameters, the test environment has changed, and would warrant another runset to differentiate. Combining a newly instantiated agent into an already running suite, may cause issues with test order. You'd have to be sure these tests are executed last or resets the agent into it's previous state before continuing.

This is already being used for other tests: whenever the @UsesCustomParameters tag is present, the agents are restarted and later reset to their initial startup configuration. This allows starting an agent with a specific configuration before a test and restoring it afterwards.
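
As a rough illustration of that pattern (the helper names below are hypothetical stand-ins, not the actual AATH code), a behave hook can key the restart/reset cycle off the tag:

    # Sketch of a tag-driven restart/reset in behave's environment.py.
    def restart_agents_with_params(context, scenario):
        """Hypothetical helper: restart agents with scenario-specific args."""
        ...

    def reset_agents_to_default(context):
        """Hypothetical helper: restore the agents' initial configuration."""
        ...

    def before_scenario(context, scenario):
        # behave hook: runs before every scenario
        if "UsesCustomParameters" in scenario.tags:
            restart_agents_with_params(context, scenario)

    def after_scenario(context, scenario):
        # behave hook: runs after every scenario
        if "UsesCustomParameters" in scenario.tags:
            reset_agents_to_default(context)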

I like this approach because it's a bit more flexible, IMO, than adding e.g. an Auto_Acme agent. What if we need a slightly different configuration next week? With dynamic configuration that would be as simple as modifying the agent startup configuration, without adding another agent to the test matrix.

@swcurran commented Apr 1, 2022

I think we need to start having at least periodic meetings, versus pull request discussions, so that we can address the "more and more complex to run tests" question that @TimoGlastra raises. I think there are some good ideas floating around that we need to discuss, make decisions on, and act upon.

How about this -- a monthly meeting at minimum, and at that meeting, decisions and action items, some of which could be to hold extra meetings on a specific topic. An alternative to a dedicated AATH meeting is a scheduled monthly session at the Aries WG meeting with a specific organizer (@nodlesh ... is that you volunteering? :-) ) to lead it with the same goal -- raise specific issues and make decisions.

@nodlesh commented Apr 1, 2022

Sure, I'll volunteer. :) Maybe one monthly meeting to talk about both AATH and AMTH.

I agree, Timo, the start endpoint could be enhanced to help with this use case. However, doing the config-file approach and adding another step in the GitHub workflows won't create another runset. The end result would be a set of combined results that shows up as one report in Allure, and the Interop Summary would also look like it had one run.

There is also the argument that once we start up an agent with different parameters, the test environment has changed, and would warrant another runset to differentiate. Combining a newly instantiated agent into an already running suite, may cause issues with test order. You'd have to be sure these tests are executed last or resets the agent into it's previous state before continuing.

@TimoGlastra (Author) commented:

Sounds like a good idea, @swcurran. When do we want to hold the first meeting? I'm not sure the Aries WG is the best place for these discussions, as they'll probably be very in-depth discussions about implementation details of the test harness rather than high-level discussions about Aries.
