
Define required AT versions for reporting on recommended test plans #809

Closed
mcking65 opened this issue Oct 5, 2023 · 2 comments
Labels: enhancement (New feature or request)

Comments

@mcking65

mcking65 commented Oct 5, 2023

Goal for updating reports on recommended test plans

One goal of the ARIA-AT working mode specific to test plans in the recommended phase is to keep their interop reports current. That means generating a new report for each AT/browser combination that has one whenever a new version of an AT or browser is released. In addition to keeping data current, this will, over time, also create data that can be used to publish trend charts.

As of September 27, 2023, the CG plans to update recommended reports for each public release of a supported screen reader; see Minutes for ARIA and Assistive Technologies Community Group Weekly Teleconference – 27 September 2023. Until the ARIA-AT project has sufficient automation and compute resources, new browser releases will not trigger report updates. The browser requirement will be "Any Recent Version". In the event of report run conflicts that are attributable to browser version differences, data from the more recent browser version will be published.

Problem - Determining which AT versions require reports

Once a test plan becomes recommended, for which AT versions should reports be required?

Some options:

  • Option A: the earliest version used to generate any of the approved published reports for that plan
  • Option B: the latest version used to generate any of the approved published reports for that plan (sketched in the example below)
  • Option C: any versions released after the plan becomes recommended

Note: The reports that are published as approved may have been generated during draft review. It is possible that several new AT versions were released between the time that a plan completed draft review and the time the plan finished candidate review and became recommended.
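
For concreteness, here is a minimal sketch of how Option B could be computed, assuming a hypothetical `ApprovedReport` shape and a plain dotted-version comparison; neither is the aria-at-app's actual data model:

```ts
// Hypothetical record shape for an approved published report.
interface ApprovedReport {
  atName: string; // e.g. "NVDA"
  atVersion: string; // AT version the report was generated with
  browserName: string;
}

// Compare dotted version strings segment by segment (assumes purely
// numeric segments, e.g. "2023.2.1"; real AT version schemes may differ).
function compareAtVersions(a: string, b: string): number {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (diff !== 0) return diff;
  }
  return 0;
}

// Option B: the minimum required version is the latest AT version used
// to generate any approved published report for the plan; reports are
// then required for that version and anything newer.
function minimumRequiredAtVersion(
  reports: ApprovedReport[],
  atName: string
): string | undefined {
  return reports
    .filter((r) => r.atName === atName)
    .map((r) => r.atVersion)
    .sort(compareAtVersions)
    .pop();
}
```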

@css-meeting-bot
Member

The ARIA-AT Community Group just discussed Issue 809 - Define required AT versions for reporting on recommended test plans.

The full IRC log of that discussion:
<jugglinmike> Topic: Issue 809 - Define required AT versions for reporting on recommended test plans
<jugglinmike> github: https://github.com//issues/809
<jugglinmike> Matt_King: Last week, we talked about how when a test plan becomes recommended, that's when we want to start making sure that the data for the test plan remains current
<jugglinmike> Matt_King: That way, visitors to APG can get information about how the latest version of their screen reader is working with that test plan
<jugglinmike> Matt_King: We don't have the ability to show reporting history and trends on the site (that's something we'll talk about next year)
<jugglinmike> Matt_King: Last week, we decided that we can't keep up with doing that for every new browser version and every new AT version
<jugglinmike> Matt_King: We decided we would only keep up with new releases of AT
<jugglinmike> Matt_King: So I'm thinking about how we determine when/if AT versions are missing
<jugglinmike> Matt_King: In this issue, I'm proposing three possibilities as to decisions that we could make
<jugglinmike> Matt_King: There may be more options, I just wanted to have a starting point for a decision framework
<jugglinmike> Matt_King: The three options that I came up with are...
<jugglinmike> Matt_King: Suppose we marked a test plan as recommended today; let's limit our discussion to NVDA.
<jugglinmike> Matt_King: There are three options for "which versions of NVDA should have required reports?"
<jugglinmike> Matt_King: One option: since we first started with NVDA in 2022 (let's say), we should generate data for every version since then. (The rationale being that we started collecting for that version, and we don't want gaps in the data)
<jugglinmike> Matt_King: Another option: we use whatever the latest AT version we have in an approved report
<jugglinmike> Matt_King: A third option: we only consider new versions released after the moment the test plan became recommended
<jugglinmike> Matt_King: In summary: (1) the version used to generate the earliest approved report, (2) the version used to generate the latest approved report, or (3) the first version released after the test plan becomes recommended
<jugglinmike> Hadi: I'm having trouble following; I think seeing it in writing would help
<jugglinmike> Matt_King: I wrote this up in issue gh-809
<jugglinmike> Hadi: Great, thank you
<jugglinmike> Matt_King: I don't necessarily expect to make a decision today, but I definitely didn't want to make the call on my own. This affects the folks here and various external stakeholders
<jugglinmike> James_Scholes: I think we need to explicitly define what we mean by "recent" when it comes to browser releases and AT releases. We shouldn't ask Testers to make a judgement call about that
<jugglinmike> Matt_King: Sounds good
<jugglinmike> James_Scholes: I think "recent" should be interpreted in terms of the time that the Tester is done
<jugglinmike> Matt_King: Knowing the current major version of stable browsers will take some work
<jugglinmike> Matt_King: If we have this requirement, can the app figure it out, or will it be a human job?
<jugglinmike> howard-e: I think it might have to be a human job because otherwise, we may make false assumptions
<jugglinmike> Matt_King: Can the app use an API (e.g. provided by a browser vendor) to discover new versions?
<jugglinmike> James_Scholes: It has to be possible to some extent because Playwright knows how to download the latest browser
<jugglinmike> Matt_King: Do we know if we can do that for Firefox, Chrome, Safari, and Edge?
<jugglinmike> howard-e: I don't know for all of those. Most likely yes for Firefox and Chrome
<jugglinmike> James_Scholes: Edge may be difficult because of the way Edge is pushed out
<jugglinmike> Matt_King: I'll open an issue to perform an investigation into the feasibility
<jugglinmike> James_Scholes: We could do this together on a weekly cadence
<jugglinmike> James_Scholes: But we shouldn't ask someone to do this more often, so we'll still be in a position where we don't know for sure that we've identified the latest version available
<jugglinmike> jugglinmike: Recall that last week, we talked about setting policies in terms of release date, e.g. "any version released in the past 6 months"
<jugglinmike> jugglinmike: So the data on release that we collect (whether via an automated process or a manual one), should probably include both the version numbers and date
<jugglinmike> jugglinmike: We don't have to only consider the release of software in the time window
<jugglinmike> jugglinmike: Instead of saying "the version released in the past 90 days, or, missing that, the next version released"...
<jugglinmike> jugglinmike: ...we could say "the newest version available in the past 90 days" (meaning that even if there was no release in that window, we would accept the version that Testers would plausibly have installed during that time frame)
<jugglinmike> Zakim, end the meeting
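
As a rough illustration of the feasibility question raised in the log above (whether the app can discover new browser versions via an API), two vendors already publish machine-readable version feeds: Mozilla's product-details service and the Chrome for Testing JSON endpoints. A sketch of polling them follows; treating these public endpoints as a stable API is an assumption, and no comparable feed is used here for Safari or Edge:

```ts
// Fetch the current Firefox stable version from Mozilla's
// product-details feed (a public JSON endpoint).
async function latestFirefoxStable(): Promise<string> {
  const res = await fetch(
    "https://product-details.mozilla.org/1.0/firefox_versions.json"
  );
  const data = await res.json();
  return data.LATEST_FIREFOX_VERSION; // e.g. "118.0.2"
}

// Fetch the current Chrome stable version from the Chrome for Testing
// availability feed.
async function latestChromeStable(): Promise<string> {
  const res = await fetch(
    "https://googlechromelabs.github.io/chrome-for-testing/last-known-good-versions.json"
  );
  const data = await res.json();
  return data.channels.Stable.version; // e.g. "117.0.5938.92"
}
```

A weekly job along these lines could record both the discovered version and its date, which matches the point in the log about collecting version numbers and release dates together.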
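
And one possible reading of jugglinmike's closing suggestion, sketched under the assumption that release data (version plus date) is collected separately: a version qualifies if it was the newest release available at any moment during the window, even if it was released before the window opened.

```ts
// Assumed release record; not taken from the aria-at-app.
interface Release {
  version: string;
  releaseDate: Date;
}

// Versions that were the newest release available at some point during
// the past `windowDays` days: the latest release always qualifies, and
// an older release qualifies if its successor arrived inside the window.
function versionsAvailableInWindow(
  releases: Release[],
  windowDays = 90,
  now: Date = new Date()
): string[] {
  const windowStart = new Date(now.getTime() - windowDays * 86_400_000);
  const sorted = releases
    .filter((r) => r.releaseDate <= now)
    .sort((a, b) => a.releaseDate.getTime() - b.releaseDate.getTime());
  return sorted
    .filter((r, i) => {
      const next = sorted[i + 1];
      return next === undefined || next.releaseDate > windowStart;
    })
    .map((r) => r.version);
}
```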

@ccanash
Contributor

ccanash commented Jul 2, 2024

Closing: we implemented option B as part of the test queue updates, included in PR #1123.

@ccanash ccanash moved this from Todo to In production / Completed in Trend Reporting Foundation for ARIA-AT Interoperability Data Jul 2, 2024
@ccanash ccanash closed this as completed Jul 2, 2024