feat: mediation and transport tests for AFJ #448
Conversation
Hello @TimoGlastra, we know your PR is still a draft, but we managed to run the following command at least twice in your branch without problems:
Did some testing with this and got mixed results. Not sure what to expect, especially with the existing tests not in good shape right now. Here is what I found:
A number of the mediator tests are passing. Is that expected?
That's definitely not what should happen! I see you ran without
Seems I broke some things 😳, should be fixed now
Looking much better! Running with
LGTM -- tested with acapy-main and javascript, each alone and with each other. Still have two tests failing (see comments) but that may be unrelated. Would like them resolved if you could take a look at them.
```diff
@@ -218,7 +218,7 @@ def get_agent_args(self):
             ("--label", self.label),
             # "--auto-ping-connection",
             # "--auto-accept-invites",
-            # "--auto-accept-requests",
+            "--auto-accept-requests",
```
@TimoGlastra Did you mean to change this to have --auto-accept-requests on? It seems to be causing at least one failure. https://allure.vonx.io/api/allure-docker-service/projects/afgo-b-acapy/reports/latest/index.html?redirect=false#behaviors/b42ec1e2628bcda077660120786966a4/865c2ca749632e9b/
Yes this is intentional and needed for mediation tests to succeed. Didn't notice it broke a test.
Here's some more context on why this is needed: openwallet-foundation/credo-ts#668
Let me try to figure out a way to only enable auto-accept for tests that need it. I think I can add this option to the agent (re)start endpoint.
This isn't documented very well, but there is a mention of a way to pass a configuration file to the ACA-Py start command in the backchannel using AGENT_CONFIG_FILE: https://github.com/hyperledger/aries-agent-test-harness/blob/main/README.md#using-aath-agents-as-services
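For illustration, such a mediation config file might look like the fragment below. The keys mirror the ACA-Py CLI flags that appear in the diff above, but this particular file is a hypothetical sketch, not something that exists in the repo:

```yaml
# Hypothetical mediation overrides passed via AGENT_CONFIG_FILE.
# Flag names taken from the startup arguments shown in the diff.
auto-accept-requests: true
auto-ping-connection: true
```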
So using a mediation config file, we could either have a new runset with Mediation, or better yet, just add another run section in the workflow file before uploading the results to allure. Not sure if there would have to be changes to the action or not to support this. Maybe another action that just runs tests instead of building everything all over again.
The base tests require all auto options off on aca-py.
I think we could solve this using the /agent/command/agent/start endpoint, which would be a lot simpler than adding more runsets, as we can just do it dynamically at runtime. I think we should aim for as much dynamic configuration as possible, as it's becoming more and more complex to run the tests with all agents having a very specific support matrix for features.
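As a rough illustration, a test step could hit that endpoint with scenario-specific parameters. The request shape below is an assumption based on the endpoint path mentioned above, not a verified backchannel contract:

```python
# Sketch: dynamically reconfigure an agent mid-run via the backchannel's
# agent start command. URL and JSON body shape are assumptions.
import json
from urllib import request


def start_agent(backchannel_url: str, parameters: dict) -> request.Request:
    """Prepare a POST to the (hypothetical) agent start endpoint."""
    body = json.dumps({"data": {"parameters": parameters}}).encode()
    return request.Request(
        f"{backchannel_url}/agent/command/agent/start",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example: enable auto-accept only for the mediation scenarios.
req = start_agent("http://localhost:9020", {"--auto-accept-requests": True})
```

The point is that the override travels with the test run rather than being baked into the agent's startup arguments.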
We still have the failing test mentioned in issue #456; we should make a decision on this and implement it. I still lean toward using the same workflow with a following run-test step. That is not another runset, but just another execution of more tests in the same runset.
Another way we can approach this is to actually have another agent, like an Auto_Acme, and if that agent is needed then the test scenarios for mediation can begin with something like:
Given we have "2" agents
| name | role |
| Auto_Acme | responder |
| Bob | requester |
We can come up with a better name than auto_acme, but you get the point. This way Acme stays the same and this new agent comes into play when needed. Thoughts?
I took a stab at this but got distracted by other tasks, will try to finish. The approach I've taken is to restart the agent using different parameters.
Re your comment:
There is also the argument that once we start up an agent with different parameters, the test environment has changed, which would warrant another runset to differentiate. Combining a newly instantiated agent into an already running suite may cause issues with test order. You'd have to be sure these tests are executed last, or reset the agent into its previous state before continuing.
This is already being used for other tests: whenever the @UsesCustomParameters tag is present, the agents are restarted and then reset to their initial startup configuration. This allows starting an agent with a specified configuration before a test and restoring it afterwards.
I like this approach because it's a bit more flexible IMO than adding e.g. the Auto_Acme agent. What if we need just a slightly different configuration next week? With dynamic configuration this would be as simple as modifying the agent startup configuration, without adding another agent to the test matrix.
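A minimal sketch of that restart-and-restore behavior, assuming nothing about the actual AATH backchannel code (the class and method names here are hypothetical stand-ins):

```python
# Sketch of the @UsesCustomParameters flow: restart an agent with
# scenario-specific parameters, then restore its initial configuration.
class AgentUnderTest:
    """Minimal stand-in for an agent whose startup config can change."""

    def __init__(self, config):
        self.initial_config = dict(config)
        self.config = dict(config)
        self.restarts = 0

    def restart(self, overrides=None):
        # Always start from the initial configuration, then apply any
        # scenario-specific overrides on top of it.
        self.config = dict(self.initial_config)
        if overrides:
            self.config.update(overrides)
        self.restarts += 1


agent = AgentUnderTest({"--auto-accept-requests": False})
agent.restart({"--auto-accept-requests": True})  # before a tagged scenario
agent.restart()                                  # restore afterwards
```

The same idea generalizes: next week's "slightly different configuration" is just a different `overrides` dict, with no new agent added to the test matrix.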
I think we need to start having at least periodic meetings vs. pull request discussions so that we can address the "more and more complex to run tests" question that @TimoGlastra raises. I think there are some good ideas floating around that we need to discuss, make decisions on, and act upon. How about this -- a monthly meeting at minimum, and then at that meeting, decisions and action items, some of which could be to have extra meetings on a specific topic. An alternative to a specific AATH meeting is a scheduled monthly session at the Aries WG meeting with a specific organizer (@nodlesh ... is that you volunteering? :-) ) to lead with the same goal -- raise specific issues and make decisions.
Sure, I'll volunteer. :) Maybe one monthly meeting to talk about AATH and AMTH. I agree Timo, the start endpoint could be enhanced to help with this use case. However, doing the config file approach and adding another step in the GitHub workflows won't create another runset. The end result would be a set of combined results that show up in one report in Allure, and the Interop Summary would also look like it had one run. There is also the argument that once we start up an agent with different parameters, the test environment has changed, which would warrant another runset to differentiate. Combining a newly instantiated agent into an already running suite may cause issues with test order. You'd have to be sure these tests are executed last, or reset the agent into its previous state before continuing.
Sounds like a good idea @swcurran. When do we want to hold the first meeting? Not sure if the Aries WG is the best place for these discussions, as it'll probably be a very in-depth discussion about implementation details of the test harness rather than a high-level discussion about Aries.
Adds support for mediation and transport tests between AFJ agents. Also supports mediation interop between ACA-Py and AFJ. I think @T004-RFC0211 is the most valuable test here. It tests two AFJ agents without an inbound endpoint (the case in mobile environments) connecting with an ACA-Py mediator and requesting mediation. They then connect with each other, which is only possible because of the mediator.

I've disabled the deny-mediation-request tests for now (marked @wip) as there's a bit of a configuration issue. If connecting to an agent without an inbound endpoint, we need to auto-respond to the message to be able to leverage return routing. If we make the acceptance a separate step, the transport will be closed and the mediator can't reach the recipient agent anymore because it has no inbound endpoint. But because auto-acceptance is enabled, we can't deny anymore. Maybe later we can enable this test again by adding more advanced startup parameters determining whether mediation auto-accept should be enabled, but for now this seemed like a more valuable use case than testing the deny flow. Some context on why we need auto-acceptance to be enabled: openwallet-foundation/credo-ts#668

TODO: Currently uses a custom build of AFJ until the following PRs are merged (but we can already merge this PR):