
Update Heuristic evaluation method with new instructions and guidance #35

Closed
pglevy opened this issue Jul 22, 2020 · 8 comments · Fixed by #80


pglevy commented Jul 22, 2020

@e-nardi and I talked about some shortcomings with the current instructions:

  1. It's not clear what the expectations are for evaluators, or what level of domain knowledge (site content) or usability expertise they need to have.
  2. Rather than having evaluators create their own list of heuristics, it makes more sense to provide a standard/recommended list for everyone to work from.

At the conclusion of Emilia's current activity, we will revisit and update the instructions to address these issues and anything else we learn from going through the process.

e-nardi commented Aug 4, 2020

Here's a list of changes that were made to a heuristic evaluation plan for hhs.gov:

Expert evaluators - For somebody to go through this process as an evaluator, we need to assume a good knowledge of usability practices. Philip recommended speaking with a group of UX designers (from Bixal and/or Webfirst) who don't have a ton of familiarity with the website.

Heuristics list - Establish a common set of heuristics for everyone to refer to and measure the site against, instead of having evaluators create their own. Emilia created a hybrid list of heuristics from NN Group and the USDS playbook:

  1. Use a simple and flexible design style guide for the service.
  2. Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
  3. Consistent use of a design style guide.
  4. Give users clear information about where they are in each step of the process.
  5. Follow accessibility best practices to ensure all people can use the service.
  6. Provide users with a way to exit and return later to complete the process.
  7. Use language and design consistently throughout the service, including online and offline touch points.
  8. Use plain language. Users should not have to wonder whether different words, situations, or actions mean the same thing.
  9. The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  10. Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.

Scope of evaluation - We don't know specific user tasks yet (why people are coming to hhs.gov and what they are doing), so we shouldn't ask evaluators to do anything too specific. Philip recommended identifying key pages or site sections you'd like people to explore: direct evaluators to those sections and ask them to measure what they see against pre-determined heuristic benchmarks.


pglevy commented Aug 12, 2020

@e-nardi , we can talk more about this, but here's what I'd like to do:

  1. I'll show you how to create a branch where you can work on changes to a new version of this method.
  2. You make changes to "How to do it" section (mainly questions 2 and 3).
  3. You create a separate doc/page with the heuristics list above as a template. (We can discuss format.)
  4. I'll show you how to create a "pull request" to submit your proposed changes for review. (There's a rough command-line sketch of this flow below.)
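
For reference, here's a minimal command-line sketch of that branch-and-pull-request flow. The branch name and file path are hypothetical examples, not the actual ones for this repo:

```sh
# Create and switch to a working branch (name is a made-up example)
git checkout -b update-heuristic-evaluation

# ...edit the "How to do it" section and add the heuristics template doc...

git add methods/heuristic-evaluation.md   # hypothetical path
git commit -m "Update heuristic evaluation instructions"
git push -u origin update-heuristic-evaluation

# Open a pull request, either on github.com or with the GitHub CLI
gh pr create --title "Update Heuristic evaluation method" --base main
```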


pglevy commented Nov 17, 2020

Looks like there was some HE work done on SSA toward the beginning that we can reference as well.


pglevy commented Dec 16, 2020

How's this one coming along, @allie-shaw?

@allie-shaw


pglevy commented Dec 18, 2020

Ready for feedback from @Bixal/methods

@sofya-UX

Hey @allie-shaw, nice work! It's a very clean and easy read.

I have a philosophical question for the @Bixal/methods group, which might apply to other methods as we tweak them. Maybe it's a broader discussion for a later time, but I'm capturing it here so I don't forget:

  • While reading the new version of this method, it stood out to me that it reads as a DIY method (as in, the designer/UXer on the project does the evaluation) as opposed to a collaborative method of tasking other UXers to evaluate the site and then discussing as a group. I'm curious to hear your thoughts on whether we should indicate that this evaluation can be done by one or more people, time permitting, or if we want to keep it explicitly a one-person job. Thanks.


pglevy commented Jan 11, 2021

I took the approach of a lighter edit of what exists today to get this out the door.
Main changes:

  • Recommend providing a common set of known heuristics instead of having participants create their own.
  • Recommend using the NN Group heuristics, but also include Play 3 from the Digital Services Playbook as an option.
  • Moved the method to the Awareness category, since this is something that can be done early on, before having access to users.

pglevy added a commit that referenced this issue Jan 11, 2021