
Decision Proposal 240 - ADR Metrics #240

Closed

CDR-API-Stream opened this issue Feb 26, 2022 · 12 comments
Labels
Category: API (a proposal for a decision to be made for the API Standards) · Industry: All (this proposal impacts the CDR as a whole, all sectors) · Status: No Decision Taken (no determination for this decision has been made)

Comments

@CDR-API-Stream (Contributor)

CDR-API-Stream commented Feb 26, 2022

This decision proposal covers the addition to the regime of an API for collecting metrics from Accredited Data Recipients.

The decision proposal is embedded below:
Decision Proposal 240 - ADR Metrics.pdf

Note that this proposal has arisen as a result of discussion at the February and March meetings of the Data Standards Advisory Committee. The Advisory Committee members have already reviewed the proposal which is now being opened up for public review and comment.

Consultation on this proposal has been extended to the 27th of May 2022 due to the caretaker period arising from the Federal election.

@CDR-API-Stream CDR-API-Stream changed the title <Placeholder for upcoming consultation> Decision Proposal 240 - ADR Metrics Mar 16, 2022
@CDR-API-Stream CDR-API-Stream added the Category: API (a proposal for a decision to be made for the API Standards), Status: Open For Feedback (feedback has been requested for the decision) and Industry: All (this proposal impacts the CDR as a whole, all sectors) labels Mar 16, 2022
@CDR-API-Stream (Contributor, Author)

We've put together a short video introducing Decision Proposal 240 - ADR Metrics.

Video: https://youtu.be/TiCI1jIIako

@perlboy (Contributor)

perlboy commented Mar 23, 2022

This proposal risks introducing more vanity metrics for the ACCC to misinterpret as non-compliance, and is therefore likely to do the ecosystem more damage than good.

By way of example, the current incidents raised by the ACCC have involved various misinterpretations of the NFRs including, but not limited to:

  • Not understanding the difference between a 95th percentile and an average response time - the NFR is 95th-percentile based and response times are a maximum, not an average
  • Raising variability of response times as an issue - no such NFR obligation exists
  • Zero values for average response time being raised as non-compliance when the number of requests received is 0 (i.e. it is impossible to have a response time of 0 - except when there are 0 requests in the first place) - a 0 value is both NFR and schema compliant
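The first and third points above can be sketched concretely. The numbers, the nearest-rank percentile convention, and the field semantics below are illustrative assumptions, not taken from the NFRs themselves:

```python
# Illustrative only: why a 95th percentile and a mean are different measures,
# and why an average response time of 0 is the natural value for a period
# with zero requests. Response times are in milliseconds (hypothetical data).
times = [120, 130, 125, 140, 135, 900]  # one slow outlier

mean_ms = sum(times) / len(times)  # the outlier is diluted across all samples

def percentile_95(samples):
    """Nearest-rank 95th percentile (one common convention)."""
    ordered = sorted(samples)
    rank = max(0, -(-95 * len(ordered) // 100) - 1)  # ceil(0.95 * n) - 1
    return ordered[rank]

p95_ms = percentile_95(times)
# The outlier dominates the p95 figure but not the mean, so the two
# numbers answer different compliance questions and cannot be compared.

# A period with zero requests: there is no response time to average,
# so 0 is a schema-compliant placeholder, not evidence of non-compliance.
no_requests = []
avg = sum(no_requests) / len(no_requests) if no_requests else 0
```

With this data the p95 lands on the 900 ms outlier while the mean sits near 258 ms, which is exactly the gap that makes comparing one against the other's threshold misleading.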

Given that context, specifically on this proposal:

  1. averageResponse is not an NFR and providing this information will only result in another inaccurate data point to be misused and misinterpreted by the regulator (which is already happening)
  2. *schemaErrors is highly subjective, especially given the very large number of conditional values in the ecosystem and the fact that the only official schema validation is dsb-schema-tools, which, by the very definition of JSON Schema, cannot process business rules to assess beyond null or not null ("Hot Dog, Not Hot Dog")
  3. latencyMetrics is very likely to produce a very high number of false positives for investigation because it is based on implied data latency. By way of example, the gap between a bank transaction date/time and when the API call is made may, in all likelihood, be an ever-increasing number, because the vast majority of sharing arrangements right now are attached to Production Verification Testing accounts (i.e. "fake" accounts intended for testing, without continuous transaction flow)
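The schemaErrors point can be made concrete with a minimal pure-Python sketch. The payload, field names and rules below are hypothetical and deliberately simplified — this is not the CDR schema or dsb-schema-tools, just an illustration of the gap between structural validation and business rules:

```python
# Hypothetical, simplified sketch: structural checks vs. business rules.
# Field names are illustrative only and not taken from the CDR standards.

def structurally_valid(payload):
    """Schema-style check: required keys exist and have the right JSON type.
    This is all a structural validator can attest to."""
    return (
        isinstance(payload.get("accountId"), str)
        and isinstance(payload.get("balance"), str)
    )

def business_valid(payload):
    """A rule beyond the structural layer: the balance string must
    actually parse as a numeric amount."""
    try:
        float(payload["balance"])
        return True
    except (KeyError, ValueError):
        return False

payload = {"accountId": "12345", "balance": "not-a-number"}

# Passes the structural ("Hot Dog / Not Hot Dog") check...
assert structurally_valid(payload)
# ...but fails the business rule, so a schemaErrors count driven purely
# by schema validation would never surface this kind of data-quality issue.
assert not business_valid(payload)
```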

Finally, I'm confused why the DSB is expending resources on this sort of proposal when adoption is vanishingly small - it appears to be public capital expenditure on contracting resources purely for the benefit of the government and not the taxpayer. Consumers don't care about the government's vanity metrics (they, in all likelihood, don't even know they exist); they do care about whether they can achieve what they want to achieve. Perhaps if the DSB and the Government focused on increasing adoption and utilisation of the ecosystem (along with innovation), or set up their own data recipient to collect these metrics, there would actually be some real data points worthy of discussing with regulated entities.

P.S. meta isn't marked optional or mandatory; given the DSB seems predisposed to publishing it as Mandatory in energy specs, it should probably be clearly specified in this DP as Optional.

@commbankoss

CBA welcomes the opportunity to provide feedback on DP-240.

At a high level, CBA recommends that the following principles are adopted:

  1. Metrics must be measurable;
  2. The formula to derive the metric from the data is clearly defined, transparent and aligned to objectives; and
  3. There is a clearly defined framework for using metrics to meet stated objectives.

To provide substantive feedback on this DP before it closes, CBA would appreciate the DSB confirming (or adding to) our understanding of the objectives and issues that the DP seeks to address.

After reviewing both DP-240 and DP-145, CBA understands that the objectives are limited to the following:

• Remove/reduce requirement for manual reporting on the ADR side (per context of DP-145)
• Provide a mechanism to measure ADR performance (per detail of DP-145)
• Provide insights to allow CX optimisation (per detail of DP-145)
• Provide a mechanism to measure DH performance from the perspective of an invoking client system (per context of DP-240)

Kind regards
CBA Team

@CDR-API-Stream (Contributor, Author)

Thanks for the feedback to date. To provide clarity, the purpose of the proposal is to address issues raised in the Advisory Committee related to perceived Data Holder data quality and performance issues that are being observed by ADRs. Data is needed for the ACCC to be able to identify and address these issues with individual Data Holders.

In the context of the objectives raised by CBA this is specifically:

  • Remove/reduce requirement for manual reporting on the ADR side (per context of DP-145)
  • Provide a mechanism to measure DH performance from the perspective of an invoking client system (per context of DP-240)

At the same time we are canvassing the opportunity to look at CX performance i.e.:

  • Provide insights to allow CX optimisation (per detail of DP-145)

We have tried to propose metrics that are objective and measurable. Feedback on specific metrics where measurability concerns exist would be welcome.

Also, constructive feedback on how to actually improve the metrics proposed would be welcome.

@spikejump

@CDR-API-Stream Thanks for the above context for this DP. This DP seems to be a bit different to the driver in DP-145, where the "Data Recipient Metrics" mentioned in DP-145 are more about intermediary performance?

If we understand the driver behind this DP correctly, based on the above comment, it is that ACCC needs formal data from ADR on potential data quality and performance issues on specific DH that the ADR is raising with ACCC. Without such data ACCC is not in a position to investigate and discuss the claims with the DHs.

If this is correct, this DP seems to impose a very heavy burden on the ADRs needing to support it if it turns into a mandatory requirement. There are costs in standing up API services; security and maintenance costs are a huge part of that. Of the potential hundreds or thousands of ADRs, a large portion will never have an issue with any DH, while others will have specific use-case-related issues with some DHs. One ADR's claim may differ from another's. For example, ADR1 claims DH1's Get Transactions response is too slow, while ADR2 is perfectly happy with DH1's Get Transactions performance but is having a transaction data quality issue with DH1 that is not an issue for ADR1. When an issue needs to be investigated, the data required, by all parties, will need to be deeper and wider than the reporting data proposed in the DP.

As an ADR, it is great to see ACCC wanting to resolve issues raised by ADRs to improve the usability of the ecosystem. The good old "user pays" concept comes to mind. By this, we mean the ADRs that need to raise issues with ACCC should be prepared to "pay" by preparing data in the required format and submitting it to ACCC via the ADR's own CDR portal account (just a suggestion). ACCC, via the submission of the required data, can systematically process the data succinctly and also trigger an investigation event. This way, only ADRs that need to raise issues have to "pay" the cost of raising them. ACCC, on the other hand, also avoids the cost of standing up a service to continuously poll for performance data from hundreds or thousands of ADRs.

In addition, it should be pointed out that data quality is not something the schema can pick up, especially given such large amounts of optional data in the CDR. A data quality issue can range from, for example, a Product Name not matching the actual product, to optional data not being supplied when DHs have such data. In fact, our experience so far tells us that DHs are quite willing to address identified schema-related errors. It is the non-measurable part of data quality that is harder for DHs to respond to in a timely manner.

We are also assuming that there is no feedback required on CX performance for this DP as mentioned in the above comment as this DP does not address CX.

@AusBanking

Please see below ABA's response to this consultation:

DP 240 - ABA response.pdf

@commbankoss

Commonwealth Bank supports the ABA’s view on Decision Proposal 240.

@RobHale-Truelayer

Some good dialogue and responses have already been provided...

Additionally...

Why would ADRs want to do this?

  • Is it appropriate to push obligation and responsibility for analysing and reporting on DH performance onto ADRs?
  • Even if the requirement is optional, it seems inappropriate for ADRs to take on an oversight data gathering function
  • Right now the ecosystem is struggling to gain ADR participation beyond intermediaries, more barriers and obligations may act as a deterrent when the opposite is needed.
  • Longer term, with potentially thousands of ADRs, the collective effort to develop and maintain this capability would be substantial.

What would ADR-sourced numbers tell us?

  • Not all DHs will be assessed by all ADRs, especially once other industry verticals and economic sectors are designated.
  • We already know that the small number of ADRs in operation tend to favour a subset of DH endpoints. Direct Debits and Scheduled Payments are not heavily used. So potentially we would only receive partial data for less popular DHs and less popular endpoints.
  • Would the absence of data for a given DH or endpoint(s) excuse that DH from some compliance obligations because there were no metrics with which to assess them?

Would ADR metrics be representative?

  • An ADR’s perspective of DH performance will be impacted by many things including their platform, infrastructure, architecture, geographical location and cloud services provider(s).
  • ADR requests will be made at different times of day when DH platforms are operating at varying levels of load.
  • Each ADR will therefore rate DH performance differently, even on a relative basis.
  • Even when calling the same endpoints at the same time, ADRs will be collecting different data, using different page sizes, with varying payloads.

What would a valid assessment look like?

  • Would we risk ignoring outliers that might appear infrequently for specific less common use cases?
  • What if those combinations and parameters provide a vital service to a specific group of consumers?
  • Is it even possible to define what an accurate assessment of DH performance would be?
  • Who would determine the validity of such scenarios?

The "Who has the right numbers" problem

There is a very real danger that we could end up debating who has the right numbers, rather than doing something about the numbers.

The Centralised, Independent Assessment option

  • Could we instead use a single, centralised platform to consistently assess all DHs across a range of predetermined metrics?
  • If so, which entity would be best placed, appropriately independent and qualified to make assessments of participant performance?
  • Is the ACCC or DSB equipped with the resources to undertake this role on behalf of all ADRs?
  • Commercial platforms that can carry out this function already exist, would this be a more practical and economic approach?

Whatever approach is used, what data are we collecting?

  • Even if we have a single, centralised platform to collect data from all endpoints of all DHs from a range of locations and cloud providers, what data is this platform going to collect?
  • In the absence of a real consumer use case and consent, what data is going to be collected and how could this be done?
  • This comes back to another long-standing problem. The lack of test data for the ecosystem, for ADRs to develop use cases against.

Perhaps there is a solution to both?

  • What if all DHs were required to create production accounts with predefined credentials and sets of test data?
  • Would this enable valid equivalent comparison to be made of DH performance?
  • Could that performance assessment be made across a range of dimensions including data quality and accuracy?
  • Could this also enable ADRs to access test data from all DHs with which to build their propositions?

Right now ADRs and their equivalents in other jurisdictions are creating “test accounts” in banks across the world. They do this so they can test their services and solutions. That’s a lot of bank accounts, and here in Australia it will soon extend to energy and telco accounts. These accounts are created for no purpose other than to test a specific service for a specific ADR. It’s a global problem that needs solving and a global burden for banks and other data holders.

Maybe Australia and the CDR could show the way and fix three things at once:

  1. Consistent independent and reliable assessment of DH performance
  2. Realistic test data to help ADRs develop market propositions
  3. Access to test accounts to help ADRs and intermediaries confirm platform operational access

@jimbasiq

jimbasiq commented May 9, 2022

Thank you for the opportunity to feed back on Decision Proposal 240 - ADR Metrics. In principle Basiq is supportive of the proposal, with the following comments/observations:

  • We disagree with the comment “Aggregated data across all DHs would not be considered helpful as it would not facilitate targeted remediation.” An aggregate metric could be indicative of the overall health of the CDR framework.

  • We suggest the “At a minimum” metrics approach for initial implementation to see if the approach works. As has been previously noted, we do not wish to make the barrier to entry for an ADR to be too high.

  • The “In addition” metrics include elements that an ADR, acting as a Vendor providing CDR data access services to a Representative or Affiliate, may wish to provide as a value add service or could give away information the ADR/Representative/Affiliate may deem confidential. In summary the currently suggested implementation is overly complex in our opinion.

  • There should be rules defined for how regularly the ACCC (or other authorised parties) can request these metrics.

If the above approach is taken and the ADR metrics obligations are kept to a minimum, Basiq is supportive of a mandatory ADR Metrics API.

@ACCC-CDR

The ACCC values ongoing feedback and guidance on how we best apply our efforts for the benefit of the CDR. As the provider of the CDR RAAP, we are an active part of the CDR technology ecosystem and expect that changes with cost and time implications should be evaluated for the benefit they provide.

For clarity, this decision proposal was not raised at the request of the ACCC. We currently have a process whereby participants can raise concerns with our Technical Operations team, which we will investigate and seek to resolve using the various options at our disposal. Details can be submitted via [email protected] or via our CDR Service Management Portal. The low volume of issues raised via the current process calls into question whether the CDR would currently benefit from the effort required to support the introduction of the approach suggested by DP240.

We continue to look for other perspectives on how best to monitor the CDR; the alternate view provided by TrueLayer offered a useful perspective.

@CDR-API-Stream (Contributor, Author)

Thank you for all of the responses on this proposal. As this proposal was initiated by a discussion at the Data Standards Advisory Committee the DSB will report a summary of the feedback to the next meeting and seek advice on the next steps.

The summary of feedback is understood to be along the following lines:

  • There is significant concern across stakeholder groups about the ongoing operational burden that this would place on ADRs. This was anticipated in the proposal with the suggestion that implementation would be voluntary but was still a significant feature of the feedback received.
  • There was constructive feedback on the value (or lack of value) of specific metrics
  • This is not the ACCC's preferred approach to addressing the underlying issue of DH performance materially impacting ADRs
  • There was the suggestion that an alternative solution could be the creation of "test" accounts in production that could be used to share data with an ADR for performance assessment purposes. This solution would also imply the creation or nomination of an entity that is considered a valid ADR by production implementations to actually initiate the consumer data requests.

@CDR-API-Stream (Contributor, Author)

This consultation will now be closed with no decision taken. If additional action items arise from the DSAC meeting the consultation will either be reopened or a new consultation will be raised.

@ConsumerDataStandardsAustralia ConsumerDataStandardsAustralia locked and limited conversation to collaborators Jun 1, 2022
@CDR-API-Stream CDR-API-Stream added the Status: Feedback Period Closed label and removed the Status: Open For Feedback label Jun 1, 2022
@CDR-API-Stream CDR-API-Stream added the Status: No Decision Taken label and removed the Status: Feedback Period Closed label Jul 31, 2022