Archived 2020 Test Authorship and Test Runner Development Plan
Please provide feedback/questions in issue 42.
Make ARIA rendering as reliable and robust as CSS rendering.
Co-author screen reader test plans for ARIA Practices patterns with screen reader developers. Develop a test runner and test format to collect data on these test cases in a standard way, and create a plan for completing test runs at a regular cadence.
The phases below are sequential. The work streams within a phase can happen in parallel.
Role | Description |
---|---|
Product Owner | Synthesizes feedback into prioritized backlog issues for the ARIA-AT software itself, and approves releases to Production. |
ARIA-AT Engineer | Implements features for the ARIA-AT Runner and Reporting Site. |
Release Marshal | Coordinates QA of the ARIA-AT software itself. Decides when a release gets promoted from Develop to QA and from QA to Staging. The Release Marshal also coordinates with the Product Owner to make sure that releases to Production are tagged and documented properly. |
ARIA-AT will use a staged, multi-build release model to ensure stability and completeness. In the first stage of this process, the development team tests its own work and then “freezes” it into a “Develop” build.
According to the cadence below, features advance through four stages (which can also be thought of as “channels”): Develop, QA, Staging, and Production.
Each stage is tested differently from the one before it, with “Production” being the final stage of release. The “Production” release contains the software that will go live on the ARIA-AT website.
Each type of release (Develop, Staging, Production) will correspond to a git branch with the same name, which is where the release must reside. You can view a list of branches at https://github.com/bocoup/aria-at-report/branches
Every release is assigned a git “tag” with a unique version number following the principles of Semantic Versioning (“SemVer”). Each tag graduates to the next branch with the same version number. For example, tag 1.0.0-staging in Staging will move on to Production as 1.0.0-production. You can see this in action at https://github.com/bocoup/aria-at-report/releases.
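As an illustration of this tag-naming convention, here is a minimal sketch; the `channelTag` helper and `Channel` type are hypothetical and not part of the ARIA-AT tooling:

```typescript
// Hypothetical helper illustrating the "<MAJOR>.<MINOR>.<PATCH>-<channel>"
// tag convention described above; not part of the ARIA-AT codebase.
type Channel = "develop" | "staging" | "production";

function channelTag(version: string, channel: Channel): string {
  return `${version}-${channel}`;
}

// Graduating a release keeps the version number; only the channel suffix changes.
const stagingTag = channelTag("1.0.0", "staging");        // "1.0.0-staging"
const productionTag = channelTag("1.0.0", "production");  // "1.0.0-production"
console.log(stagingTag, productionTag);
```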
Version numbers are applied to all build releases from Staging forward (“Develop” releases do not need a number because they are constantly changing). This is to ensure that all previous builds from this point can be referenced by future developers and testers.
For clarity and compatibility, the SemVer version numbering system (MAJOR.MINOR.PATCH) should be adhered to in principle. We do not follow the exact SemVer specification. For example, we update the MAJOR and MINOR numbers based on product milestones as opposed to changes to a public-facing API.
A build keeps the same version number as it moves from one release channel to the next. For example, 0.1.22-staging will contain the same content (but built with Production branch settings) when it becomes 0.1.22-production. The development branch's official version number will be “a version behind”: changes land on top of 0.1.22-develop and are still built for Develop, but the branch is not tagged 0.1.23-develop until we are ready to release a new version as 0.1.23-staging. So Develop builds can contain features beyond their versioned number; consider development in this case to be 0.1.23 plus additional in-progress work.
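As a rough sketch of this “a version behind” rule (assuming strict MAJOR.MINOR.PATCH numbers; the `nextDevelopTarget` helper below is hypothetical):

```typescript
// Hypothetical sketch of the "Develop is a version behind" rule described above.
// Given the version most recently tagged for Staging, work on the develop branch
// targets the next PATCH number but is not tagged until it graduates to Staging.
function nextDevelopTarget(latestStagingVersion: string): string {
  const [major, minor, patch] = latestStagingVersion.split(".").map(Number);
  return `${major}.${minor}.${patch + 1}`;
}

// Example: Staging is at 0.1.22, so Develop contains work that will eventually
// be tagged 0.1.23-staging (plus whatever lands after that tag is cut).
console.log(nextDevelopTarget("0.1.22")); // "0.1.23"
```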
Phase 1 overview: Collect feedback from screen reader vendors on the initial test design, assertion model, and their future involvement in the project. Create wireframes for the test runner.
Work stream 1.1: Collect screen reader developer input with demo testing for checkbox, combobox, menubar, and grid
Timeline | Week | Task | Owner | Collaborators |
---|---|---|---|---|
Feb 3-7 | Week 1 | Build tool for reviewing existing tests of a design pattern | Bocoup | |
Feb 10-14 | Week 1-2 | Complete test files for checkbox, combobox, menubar and grid | ARIA-AT CG | Bocoup |
Feb 17-21 | Week 3 | Test these patterns in the ARIA-AT prototype runner with JAWS and NVDA (Chrome and Firefox on Windows) and VoiceOver (Chrome on macOS). Upload test results into the ARIA-AT repository. | ARIA-AT CG | Bocoup |
Feb 24-28 | Week 4 | Refine reports of test results | Bocoup | |
March 2-6 | Week 5 | Share test results with NVAccess, Freedom Scientific, and Apple | Matt King | Bocoup, AT developers, ARIA-AT CG |
March 9-13 | Week 6 | Collaborate to create a plan for addressing ongoing feedback on runner design, test design, assertion design and results reporting | ARIA-AT CG | Bocoup |
March 16-20 | Week 6-7 | Adjust test and assertion design based on feedback | Bocoup | |
March 16-20 | Week 6-7 | Collaborate to create a plan for ongoing project participation in test writing, including: how/when to review test plans and results for each pattern; how/where to provide feedback on test plans and results; GitHub policies | ARIA-AT CG | Bocoup, AT developers |
March 23-27 | Week 8 | Share project vision, viability, plan and stakeholder contribution options with a wider audience. | ARIA-AT CG | Bocoup, TBD |
Work stream 1.2: Document test runner requirements and create wireframes
Timeline | Week | Task | Owner | Collaborators |
---|---|---|---|---|
Feb 3-24 | Week 1-4 | Collaborate and document requirements for the test runner. Analyze gaps between prototype and use cases | Bocoup | ARIA-AT CG |
Mar 2-13 | Week 5-6 | Create wireframes for test runner and results websites | Bocoup | |
Phase 2 overview: Create the test runner and the results website. Collect feedback and address concerns. Create a test contribution workflow. Run all tests and report any bugs found in browsers, accessibility APIs, and assistive technologies.
Work stream 2.1: Build the test runner and results website
Timeline | Week | Task | Owner | Collaborators |
---|---|---|---|---|
Mar 9-13 | Week 6 | Research and plan: Break work into milestones based on use case prioritization; Develop high-level system requirements for each milestone; Break work into sprints for each milestone | ARIA-AT CG | Bocoup |
Mar 9 - May 8 | Week 6-14 | Develop production system: Lay down foundations for the website; Build website based on feature prioritization; Document use of Test Runner for testers; Address feedback | Bocoup | |
Apr 27 - May 1 | Week 13 | Provide feedback on testing system | AT developers | Bocoup |
May 11 - May 15 | Week 15 | Run usability testing on test runner | Bocoup | ARIA-AT CG |
May 25 - June 5 | Week 17-18 | Fix critical issues identified by pilot; Final production system prep | Bocoup | ARIA-AT CG |
Work stream 2.2: Develop test plans and run test cycles
Timeline | Week | Task | Owner | Collaborators |
---|---|---|---|---|
April 20 - May 27 | Week 12-17 | Complete development of test plans 1-2 | ARIA-AT CG | |
May 11-15 | Week 15 | Run usability test: look for system problems, usability problems, training problems. | Testers | Bocoup |
May 27-June 7 | Week 17-18 | Run pilot test of first 2 tests | ARIA-AT CG | |
TBD H2 2020 | | Review test plans 1-2 | AT developers | |
TBD H2 2020 | | Complete development of test plans 3-4 | ARIA-AT CG | |
TBD H2 2020 | | Review test plans 3-4 | AT developers | |
TBD H2 2020 | | Complete development of test plans 5-n | ARIA-AT CG | |
TBD H2 2020 | | Review test plans 5-n | AT developers | |
TBD H2 2020 | | Train testers | ARIA-AT CG | Bocoup, testers |
TBD H2 2020 | | Run first full test cycle with 6 browser/screen reader combinations | ARIA-AT CG | |
TBD H2 2020 | | Make draft test results ready for stakeholder review | ARIA-AT CG | |
TBD H2 2020 | | Review and provide feedback on draft test results (test plans 1-2) | AT developers | |
TBD H2 2020 | | Review and provide feedback on draft test results (test plans 3-4) | AT developers | |
TBD H2 2020 | | Review and provide feedback on draft test results (test plans 5-n) | AT developers | |