Meeting 05.12.2017
Charles LaPierre, Jason White, Amaya Webster, Rob Howard, Jesse Greenberg, Bruce Walker, Nirupa Kumar, Richard Ladner, Sina Bahram
Jason Schwab
Sina: Was anything said on the last call about integration with Bolder? Has that not been folded in? Is it not in scope?
Jason: I haven't heard anything about it. We need a better process for finding cases. It's not readily understood how best to present the source and destination, whether a path is involved, etc. We should narrow down similar cases and identify areas that need guidance.
Jason: Really interested in seeing if we can narrow down some of the issues in the simpler cases so we can identify any residual areas there that need guidance.
Bruce: I'll volunteer one example that we're working on. In our work with the PhET folks, we're making many of their great physics simulations more accessible. Many of them involve true drag and drop, and drag-around activities. One example is the Balloons and Static Electricity activity, which helps people understand how charges transfer from things like walls, your hair, or a sweater to a balloon. In the simulation you have a sweater full of charges on the left, a wall on the right, and balloons in the middle. You can drag a balloon around and have it interact with the sweater, the wall, or the other balloon.
Jason: It's more of a drag and release. When you drop it, you're actually releasing it so that the electrostatic forces can act on it.
Bruce: You can also drag one balloon towards the other and play tag with the balloons; you can drag and move around, and you can drag, drop, and release. We're looking at how you move and select the balloons, for example with keyboard controls; what the corresponding voice output is, like a screen reader or a self-voicing sim; and what we can do with non-voiced auditory cues to help signal to the user what's going on. So we're looking at all those input and output aspects.
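The keyboard-controlled dragging Bruce describes can be sketched as a small piece of logic: map an arrow key to a movement, clamp it to the play area, and produce a short announcement that could be fed to a screen reader via an ARIA live region. This is an illustrative sketch only; the names (`moveByKey`, the step size, the announcement wording) are invented here and are not PhET's actual API.

```typescript
interface Point { x: number; y: number; }

const STEP = 10; // pixels moved per arrow-key press (arbitrary choice)

// Clamp a proposed coordinate to the play-area bounds.
function clamp(v: number, min: number, max: number): number {
  return Math.max(min, Math.min(max, v));
}

// Map an arrow key to a movement, returning the new position and a
// short announcement a screen reader could speak.
function moveByKey(
  pos: Point,
  key: 'ArrowUp' | 'ArrowDown' | 'ArrowLeft' | 'ArrowRight',
  bounds: { width: number; height: number }
): { pos: Point; announcement: string } {
  const delta: Record<string, Point> = {
    ArrowUp: { x: 0, y: -STEP },
    ArrowDown: { x: 0, y: STEP },
    ArrowLeft: { x: -STEP, y: 0 },
    ArrowRight: { x: STEP, y: 0 },
  };
  const d = delta[key];
  const next: Point = {
    x: clamp(pos.x + d.x, 0, bounds.width),
    y: clamp(pos.y + d.y, 0, bounds.height),
  };
  // If clamping changed the result, the user hit the edge of the play area.
  const hitEdge = next.x !== pos.x + d.x || next.y !== pos.y + d.y;
  const announcement = hitEdge
    ? 'At edge of play area'
    : `Balloon at ${next.x}, ${next.y}`;
  return { pos: next, announcement };
}
```

The same pure function can drive keyboard input, a self-voicing sim, or a test harness, which is one way to keep the input handling separate from the output modality.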
Charles: Is there also work on similar simulations with magnets? You could drag a magnet close to another material, one of which could be another magnet, and play with how it can be a repelling force. How would you convey that the force needed to drag it closer becomes higher and higher?
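One possible answer to Charles's question is to compute the inverse-square force magnitude and bucket it into descriptive phrases that a screen reader or self-voicing sim could announce as the user drags. The constant and thresholds below are arbitrary, chosen only to make the sketch concrete; this is not how PhET actually renders force descriptions.

```typescript
// Inverse-square attraction/repulsion magnitude; k is an arbitrary constant.
function forceMagnitude(distance: number, k = 100): number {
  return k / (distance * distance);
}

// Bucket the magnitude into phrases a user could hear while dragging.
// Thresholds are illustrative only.
function describeForce(distance: number): string {
  const f = forceMagnitude(distance);
  if (f >= 25) return 'very strong pull';
  if (f >= 4) return 'strong pull';
  if (f >= 1) return 'weak pull';
  return 'barely any pull';
}
```

As the dragged object halves its distance, the force quadruples, so the phrases change faster near the target, which mirrors the physical behavior Charles is asking how to describe.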
Bruce: PhET has scores and scores of simulations, and one is Balloons and Static Electricity. We're working on one now called Build an Atom, where you drop neutrons and protons into the nucleus of the atom. Another is called John Travoltage, where you drag the limbs of a character around. So all three sims require dragging interactions.
Jason: I should point out that we're working on a few of those simulations as well with PhET. We have spoken interfaces, one of which is a dialogue-based natural language interface. There's interesting activity going on here. For the purposes of this work, I think the approach we started earlier, classifying the different problems in this area and then coming up with guidance that could be offered by way of solutions, is a fruitful one. And if there are any areas where we think we should produce prototypes of interactions that don't already exist, that work should be scoped. Is that what people want to see as the outcome of this work?
Charles: Our ultimate goal, from my perspective, would be to have some recommendations on how to build these things, so that when people create these drag and drops they're accessible, and to disseminate that using our repository of reusable prototypes and accessible widgets. One would be the simple drag and drop, and then for all the different use cases we identify, we'd have at least one accessible implementation of each. If others want to create something in a different language, all the better, but at least have one and extend from there.
Sina: So many systems have multiple goals: not just to drag the balloons, but also pedagogical goals, and there are particular variables you want to concentrate on when giving the users updates. It lives in an environment where domain-specific things can be explored. My question for this group: there are several ways one could make something like that fully inclusively designed, and a solution tailored to one domain might be worth it because it achieves the pedagogical goals. But for this effort, are we looking for generalizable, more abstract solutions that can be appropriate for a wider range of problems?
Bruce: I think you can come at it from both ends. You can document and collect examples of successful and unsuccessful attempts to deal with drag and drop in all its many guises, and from that bottom-up collection try to come up with generalities. As a separate but parallel activity, it's possible to try to build a proof of concept of some drag and drop tools that are generally useful; then any new sims or uses could employ those templates, if you will.
Sina: I think that's perfect. I'm just asking, for this call, if the latter is what we're interested in, knowing that the former is being explored by PhET.
Jason: I think generalizable solutions are very important, and knowing what they can address and what they can't is very important. What are the circumstances where you need to think through a carefully customized solution, and when is there a known solution you can just apply?
Bruce: Right now the work isn't considering how broadly the solution will be used.
Jason: I can say that I’m involved with a project here that has a drag and drop related element. I’m not the person writing them, so I’m not sure, but I may have something I can contribute a few months from now. I assume others on this call will probably have examples. So maybe we should engage in a good example collecting process. Especially the ones that illustrate some of the more difficult aspects of the problem. Would people find that useful? Are there known areas where you think there are gaps in the existing examples that you would like to see more quickly?
Bruce: Does that precede the earlier efforts to build a taxonomy?
Jason: Where we were up to last time is that we had good distinctions, but we didn’t have examples. So we can collect examples and then see if there are gaps in our classifications.
Sina: I feel that would help a lot. The classification approach is something I'm a fan of, so we can understand the space. I also think there is low-hanging fruit here. The other aspect that would be nice for the taxonomy, once we've enumerated the cases and have examples, is to populate the affordances we want to play with to facilitate those actions. Are we interested in exploring purely keyboard efforts right now, sonification, or a blending of all of them? I'm trying to figure out what our immediate path forward is. I think we should come up with samples for each one, grow the taxonomy if we need to, and then figure out which modalities we want to map to so we can begin exploring some of those abstract solutions. For example, Bruce, for the balloon example you gave, it would be great to have a somewhat abstract solution that could be evaluated against the balloon example to see if it fits.
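One of the modalities Sina lists, sonification, can be sketched as a simple mapping: as a dragged object nears its target, raise the pitch of an auditory cue. The function below is a minimal illustration under assumed parameters (the A3 to A5 frequency range is arbitrary); a real sim would feed the resulting frequency into an audio output layer such as Web Audio.

```typescript
// Map a drag distance to a cue pitch: closer means higher pitch.
// lowHz/highHz defaults are arbitrary illustrative choices.
function distanceToPitch(
  distance: number,
  maxDistance: number,
  lowHz = 220,
  highHz = 880
): number {
  // Normalize to 0 (touching) .. 1 (as far as possible), clamping
  // out-of-range inputs, then invert so proximity raises the pitch.
  const t = Math.min(Math.max(distance / maxDistance, 0), 1);
  return lowHz + (1 - t) * (highHz - lowHz);
}
```

Keeping the mapping a pure function of distance makes it easy to swap the output layer: the same value could drive pitch, volume, or click rate depending on what user testing shows works best.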
Bruce: Is there a document, a Google doc, that exists? Can we be reminded by email where that is, and continue to flesh out that taxonomy with examples, listing more that don't fit, to continue that work? In terms of the project we're working on, if people have thoughts or ideas and we come up with design recommendations, I can have my students build them using the PhET sims as a base, because we have wrappers we can use to make things happen. And we can collect data and see how it compares.
Sina: I would enjoy talking to you about the concept of multi-user dungeons. It really does a good job of letting you explore an area. I'm not sure if we want to talk about it on this call, but it's a modality that covers audio in and out, text in and out, etc.
Jason: If there is a source for the authoring and delivery system, I'd like more information. Can you send that to the list? I think there are people here who are definitely interested in that. If you have quasi natural language commands you can use, it opens up a much richer space of input. We've discovered that with some of the work here, using a natural language dialogue system to provide commands that do quickly what would otherwise take a lot of dragging around.
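The quasi natural language commands Jason mentions could collapse many incremental key presses into one action: a phrase like "move balloon to sweater" becomes a single drag operation. The tiny parser below is a hypothetical sketch; the grammar, object names, and `DragCommand` shape are invented for illustration and are not from any of the systems discussed.

```typescript
interface DragCommand { object: string; target: string; }

// Parse "move <object> to <target>" (case-insensitive) into a single
// drag action; return null for anything the grammar does not cover.
function parseCommand(input: string): DragCommand | null {
  const m = input.trim().toLowerCase().match(/^move (\w+) to (\w+)$/);
  return m ? { object: m[1], target: m[2] } : null;
}
```

A real dialogue system would of course use a richer grammar and disambiguation, but even a fixed-phrase command set like this shows how spoken or typed input can sit alongside keyboard dragging as another modality in the taxonomy.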
Richard: I came back from CHI in Denver, and an accessibility talk that struck me was about how people with mobility difficulties can effectively use a touch screen and do things like drag and drop, so we should think about affordances for accessibility beyond screen readers and such. Maybe we should think more broadly about doing these things. Also, touch screens and direct manipulation are becoming more popular beyond laptops and computers, and they are fairly accessible for people who are blind because of screen readers and TalkBack. Here at the University of Washington we're trying to do programming using block languages on a tablet, so drag and drop is an issue for that.
Jason: It seems to me that there are different kinds of guidance that can be provided. If a person can use an input or output device to some extent, there will be guidelines on how to maximize their ability to use it. And how you make things accessible across different input and output types seems different from maximizing the ability to use a particular device. So perhaps we can have different levels of guidance.
Richard: In terms of output we’re going to use robots that can talk and make noises and things like that, so it’s not just coming out of speakers. So it’s not just speech and sonification.
Jason: I think it's clear that the kind and diversity of hardware that might be available is expanding, which creates new opportunities. So we seem to have some actions here, including revisiting the taxonomy and starting to populate it with examples we know about. I've become aware of different hardware environments, and we might want to discuss the design considerations relative to those; that seems like it could offer extensions to the taxonomy and the discussion, so we could integrate it into the actions we've already planned. Are there others?
Charles: To find use cases to support the different taxonomies that we’ve come up with?
Jason: Let’s work out the solutions and try to find the commonalities and generalities.
Richard: I wonder if the taxonomy document can also have an area where there are links to research papers and pointers to other people doing things beyond us and tracking what’s going on beyond the DIAGRAM project.
Jason: that sounds useful to me. As well as potential implementers.
Richard: I think it helps to not reinvent the wheel, but be able to point to things that people are already doing.
Jason: I've been reading some of the proceedings from that conference, and some of the others look interesting too, like the one on authentication. There was a good paper that involves designing for people with significant cognitive disabilities, so I'll send links to that out to the mailing list. It seems to me that, based on the collective expertise here, we can get a good set of examples in place for the classifications and then start looking at solutions, and we'll open up a section for references, for what's being discussed and related work.