2.5.1 Pointer Gestures: clarity on path-based language in Understanding document #522
Hi Mike,
That is a nuanced argument that I think gains validity the more intermediary steps there are, primarily because the gesture potentially demands a higher level of precision and fine motor control to release at the desired intermediate increment. I don't find it as persuasive for, say, a 3-point drag bar (one point on the left, one in the middle, one on the right), especially if the middle is my starting point. BTW, let's be careful not to confuse your use of "endpoint" here, to describe the last position on a widget, with endpoint as used in the Understanding docs.
I'd have to do more testing to confirm, but so far I'm finding you don't have to follow any path at all. It's not just that you don't have to do "the exact straight line"; you could drag almost straight up on a horizontal slider, and as long as there is a slight x-axis movement, that is going to be the only part of the movement affecting (interpreted by) the slider. Re "free-form gesture": the crux of the issue to me is what exactly constitutes "path-based gestures". It's in the normative text, but it's an undefined term. The best existing wording in WCAG about it is in the Understanding document for 2.1.1 -- language that, although non-normative, has been around a long time. It provides this concept of movement relying on the user's path "not just the end points." I think many of us are used to that notion. It's re-used in 2.5.1. BTW, "free-form" is a term that I think first occurs in the Pointer Gestures Understanding doc, and is used just once, specifically to address drag and drop. I'm not sure it adds clarity to the discussion.
I don't think that's valid. A swipe has to have a direction associated with it, so it is loosely path-based. If I tried to swipe right by doing some elaborate parabola that ultimately got me to the right side of the screen, that is not going to be interpreted as a swipe right. But if I grabbed an object and did the same 'free-form' path to get to a drop location on the right side of the screen, no problem.
Yes, the algorithm for interpreting a user's movement as a gesture has to have some elasticity. But whether the interaction with a thumb slider is subject to much of that interpretation is at the crux of my question. I don't anticipate too much debate about Swipe Left being a path-based gesture. I suspect many of us have experienced our swipe being misinterpreted. I also think it's important to note that I've seen implementations (primarily in VoiceOver?) that detect a Swipe Left gesture for a slider, interpret it, and move the slider an increment to the left. I'm not talking about that situation. I'd agree that is a gesture and should fall under this guidance. But if I can take the same component, grab the slider, and then drag it 'free form' to reposition it to my desired location/setting, is that really a path-based gesture in the same way as a Swipe Left? Once the system detects that I have tapped and held the component, I think it has now defined an interaction which is allowed in 2.5.1, and it then interprets subsequent movement as a drag.
That is not my motivation or desire. I'm writing up IBM's guidance, and I see real points of confusion in what constitutes path-based gestures. I'm confused, and I was involved in lots of the discussions! The Understanding document excludes drag and drop. I think we have to acknowledge that the difference between drag-and-drop and dragging a thumb slider is not obvious at a basic functional level. I'm also not arguing that some users will not have difficulty with drag and drop functions, or that offering some simple buttons to achieve the same end is undesirable. I'm simply saying that banning the ability to use drag to move a slider position while allowing it for moving other objects is confusing.
BTW, please look at the comments by many in #403. It's a very similar argument and your conclusion is the same (as is my perspective), but I'm not sure that the written response precisely captures all that was said previously. At the least, the concerns I'm raising are echoed there by others.
Hi Mike, perhaps it is best to discuss this in one of the telcos or at TPAC (in case you are around then). Let's see what others say then (or here).
After much discussion, the working group revised the Understanding document for Pointer Gestures by removing references to drag and drop. However, the wording "dragging of a slider thumb" has persisted, and several of the examples involve solutions which overcome dragging. This issue came up again in discussion at IBM this week. Half a year has gone by since @detlevhfischer's last comment, and this may just be a matter where we have to agree to disagree, but there is continued concern that the existing discussions on this topic indicate the language forbidding dragging does not have full agreement from the working group. I'll restate the arguments one last time:
The concern continues to be that users will cite Pointer Gestures as a reason to fail something like a 3-point slider or rotor, when those mechanisms can be operated with a single pointer.
Hi @mbgower, I am happy to revise the text if you express a consensus view here. Let me just address two things. You wrote:
Nothing in the SC forbids dragging (a slider thumb). It just requires an alternative way of setting the value, such as tapping the groove, increment buttons, a numerical input field, whatever.
When a three-point slider can be operated with a single-point gesture (e.g. by tapping the groove, and the thumb will jump there), it would meet the SC. So the issue is sliders that CANNOT be operated by single-point input (and have no alternatives). Other swipe gestures (like those for moving image sliders / carousels) also do not require a straight-line swipe and often work with a parabolic gesture. So your argument for excluding control sliders might be extended to those and, by extension, to any swipe gesture that does not need to be straight. With drag-n-drop already out of scope, this might leave precious little substance to the SC. I welcome more discussion of this issue.
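To make the kind of single-pointer alternative described above concrete, here is a minimal sketch (hypothetical markup and ids, not taken from any particular implementation): a slider whose value can also be set by tapping the track, by increment/decrement buttons, or by typing a number, so no dragging gesture is ever required.

```html
<!-- Illustrative sketch only: a slider with single-pointer alternatives. -->
<div class="volume-control">
  <button type="button" data-step="-1" aria-label="Decrease volume">-</button>

  <!-- A native range input already lets users tap/click a point on the
       track to move the thumb there, with no dragging required. -->
  <input type="range" id="volume" min="0" max="100" value="50" aria-label="Volume">

  <button type="button" data-step="1" aria-label="Increase volume">+</button>

  <!-- A numeric field as a further non-gesture alternative. -->
  <input type="number" id="volume-number" min="0" max="100" value="50" aria-label="Volume value">
</div>
<script>
  // Wire the step buttons to the range input (illustrative only).
  const range = document.getElementById('volume');
  document.querySelectorAll('[data-step]').forEach((btn) => {
    btn.addEventListener('click', () => {
      range.value = Number(range.value) + Number(btn.dataset.step);
      range.dispatchEvent(new Event('input', { bubbles: true }));
    });
  });
</script>
```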
@detlevhfischer, I just want a consistent rationale that I can explain to a developer. If the ultimate direction is "you can use drag and drop but not dragging of elements" I can live (uncomfortably) with that. But if so, I feel that our language on 'path-based gestures' needs to be scrutinized. You said:
Let me be clear. I am speaking directly to what is defined as being a "single pointer without a path-based gesture." I understand more complex gestures can be used with an alternate. That's not what I'm discussing. I'm tackling what can be used without requiring an alternate.
It still covers all custom multi-point gestures, which I think was the primary motivator for the SC. Let me also be clear here that I don't see anything in the SC language about drag and drop. That is in the Understanding doc only, correct? So if the WG feels that dragging of any kind is in scope, then drag & drop could be back in. What I'm driving at here is that there seems to be no real difference in difficulty or operation between drag & drop and dragging a slider.
Hi @mbgower I just re-read the Understanding document to understand the issue. I am actually not sure what you mean by
A single pointer without a path-based gesture is a tap / click (incl. double tap/click and long presses at the same position) - I am not sure what is unclear about that. Drag-n-drop has been excluded because arguably it does not rely on a particular path but on start and end points. That exclusion seems awkward, but I believe it was decided thus by the WG. One rationale for exclusion was that it might be very cumbersome for authors to meet 2.5.1 for drag-n-drop even in cases where it was optimised to be keyboard-operable. Interestingly, the last bullet point in the Examples section looks like a drag-n-drop example camouflaged as a swipe example:
Then you write:
You are probably right that there is not much difference regarding the difficulty. One option would be to include things like picture sliders, where the point of contact for a swipe can be anywhere, and exclude things like sliders where you have to touch the thumb before starting your dragging gesture. The other option would be bringing drag-n-drop into scope as well. This should be discussed in a WG call, I think.
Proposed working group response: The rationale for excluding drag-and-drop while including other dragging gestures was that for the latter, providing single-pointer alternatives is relatively easy: slider controls may set the thumb position when users single-tap a particular point of the track, and image sliders may include arrow buttons for advancing the position of (hidden) content. In contrast, adding single-pointer activation to freeform drag-and-drop interfaces, beyond ensuring keyboard operability, is cumbersome and has trade-offs in terms of increased cognitive load / interface complexity. The question remains whether the exclusion of drag-and-drop for these reasons stands up to scrutiny. From a user perspective, one could argue that drag-and-drop should be included. From an author perspective, having drag-and-drop in scope may rule out this mode of interaction altogether (feeding the narrative of "accessibility prevents smart/intuitive design") or invite a violation of the SC when implementing it, due to the inherent complexity of implementing a single-pointer operable solution. As I see it, the working group must choose between:

(A) Uphold the exclusion of drag-and-drop and keep other swipe and drag gestures based on constraints (slider groove, horizontal image slider) covered; this may need a better argued rationale for that choice.
(B) Extend the scope to also cover drag-and-drop interactions.
(C) Limit the scope to path-based gestures without specific start and end points (control sliders where you pick up the thumb to drag would be out, image sliders where you can start swiping anywhere within a larger area would be in); so essentially: swipe is in, drag is out.
Nice summation, @detlevhfischer. I think the rationale you give on the relative ease of providing alternatives for dragging is an understandable one, and if folks are happy with that being the rationale for the drag and drop exclusion (as opposed to the path-based gesture argument), I think that is a reasonable pivot point. I agree we need a better rationale to bring this about.
I was mostly persuaded by Mike's argument that a slider doesn't require a specific gesture; does that fit "C" best in Detlev's summary?
No, the option to remove dragging from scope was not one of the options he gave. :)
I have created a pull request to address the resolution of the group discussion, which was to remove dragging as being in scope. #714
@awkawk I'd like to reiterate my feeling that making a new SC that requires an alternative to dragging would be a good addition for 2.2. It's not that tough to implement:
@mbgower Agreed. We should think about what the use cases are where not having a requirement for single-pointer access to UI creates problems. |
As I have indicated in my review of #714, I still think it is possible and sensible to treat both swiping and dragging as path-based gestures. As to another SC for dragging, when I look at the sequence
This does not look straightforward, neither conceptually nor in terms of user experience. It is certainly not an established pattern. You'd either have to make that sequence explicit (explain it somewhere, which takes up screen real estate or needs to be discovered and is thereby likely missed), or you basically take the user by the hand, presenting these options step by step as a consequence of the individual interactions. Applied to drag-and-drop items (think a kanban board), it also takes away the possibility of treating a click/tap as distinct from a drag-and-drop action. So in an implementation where activating an issue opens a detail view of that issue, you wouldn't be able to do that, because now the click/tap is consumed by adding the affordance for movement via single-pointer actions (whatever way that is done) - or you start differentiating by long press etc., which adds its own set of complexities.
Please also include in the discussion the question of CSS-only scrollable areas requiring a swipe (https://css-tricks.com/pure-css-horizontal-scrolling/ or https://css-tricks.com/practical-css-scroll-snapping/ for example). Does the WG see them as a case where the Pointer Gestures SC applies or not?
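For readers following along, the pattern those css-tricks articles describe is roughly the following (a minimal sketch with invented class names): the horizontal scrolling comes entirely from native browser behaviour, with no script interpreting the gesture.

```html
<!-- Sketch of a CSS-only horizontally scrollable strip (class names invented). -->
<style>
  .strip {
    display: flex;
    overflow-x: auto;              /* native horizontal scrolling */
    scroll-snap-type: x mandatory; /* optional snap points, per the second article */
    -webkit-overflow-scrolling: touch;
  }
  .strip > article {
    flex: 0 0 80%;
    scroll-snap-align: start;
  }
</style>
<div class="strip">
  <article>Card 1</article>
  <article>Card 2</article>
  <article>Card 3</article>
</div>
```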
In that discussion, we've defined three levels of possible exclusion. I think where it got to (although I wasn't there) was that scrollable areas would not fit into the definition of 'gestures', as you do not have to go in a particular direction to activate functionality. (Depending on how it's implemented, though: if you have to go in a particular direction (e.g. right) and cannot stray off that path, it would be in scope.)
Checking in on this one, considering the understanding updates and techniques, is this issue done with? |
I'm still a bit confused by "Examples of path-based gestures include swiping, sliders and carousels dependent on the direction of interaction" in the understanding. What you are saying is that the criterion applies in these cases only if the user has to perform a particular gesture (like a precise shape, or a perfectly straight gesture to the right/left) to move to the next/previous slide?
At least it makes for one fewer non-compliant criterion for slick.js/swiper.js/etc. ;)
Hi @goetsu, one of the additions to the understanding doc was the second paragraph and image for path-based gestures, which answers that question for me; is that not clear?
I think the solution of defining a path-based gesture as one that requires an initial directionality does include those elements (control sliders, content sliders, carousels) where (if movement is constrained horizontally) a straight vertical movement followed by horizontal movement will not move the element (so the path applied for testing is like an L shape rotated by 180 degrees - for sliding content that is moved to the left). On mobile devices, what usually happens is that the page scrolls instead of the content being moved. So one could argue for such an element and interaction being in scope of 2.5.1 if mobile usage is part of the accessibility baseline of an evaluation -- or you might have two different results: one for desktop (no directional constraint) and one for mobile (directional constraint).
It would be worth digging into a couple of examples. I wonder if we can base it on what the content defines? E.g. if it only detects a horizontal movement then it is covered, but if the scripting allows for any direction then it isn't covered. I'm not saying that would be the typical test method, but if we run through a few examples and can predict how content behaves in different user agents on that basis, then it would make writing the tests and understanding easier.
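As a rough illustration of "what the content defines" (invented ids and thresholds, not the code of any particular library): a script that locks its interpretation onto the initial direction of movement behaves as a directional, path-based gesture, whereas one that only looks at the overall horizontal delta does not care what path the pointer took.

```html
<script>
  // Illustrative sketch only. A real carousel would also manage
  // touch-action / preventDefault so the browser doesn't scroll mid-gesture.
  const el = document.getElementById('carousel'); // hypothetical element
  let startX = 0, startY = 0, locked = null;

  el.addEventListener('pointerdown', (e) => {
    startX = e.clientX;
    startY = e.clientY;
    locked = null;
  });

  el.addEventListener('pointermove', (e) => {
    if (locked) return;
    const dx = e.clientX - startX;
    const dy = e.clientY - startY;
    if (Math.abs(dx) < 10 && Math.abs(dy) < 10) return; // not moved far enough yet
    // Lock the interpretation to the first clear direction: mostly horizontal
    // counts as a swipe, mostly vertical is left to page scrolling.
    // This initial-direction requirement is what makes the gesture path-based.
    locked = Math.abs(dx) >= Math.abs(dy) ? 'swipe' : 'scroll';
  });

  el.addEventListener('pointerup', (e) => {
    // A variant without the lock (reading only the final dx here) would accept
    // any path between the down and up points, i.e. no direction is required.
    if (locked !== 'swipe') return;
    const dx = e.clientX - startX;
    if (Math.abs(dx) > 30) advance(dx < 0 ? 'next' : 'prev');
  });

  function advance(direction) {
    console.log('move carousel', direction); // placeholder for real logic
  }
</script>
```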
@detlevhfischer, first, do we all agree that native scrollable areas, or carousels using native scrollable areas, are out of scope? @alastc, I'm not sure, as for me a swipe gesture doesn't have to go through a specific B point. In fact, when you want to go right, you can start your gesture by going to the left and then go back to the right. Your movements are free between the start and end points. You can even start with a straight up or down movement (which will usually scroll the page, as @detlevhfischer said) and then move to the right.
Yes, the understanding doc includes: "This Success Criterion applies to gestures in the author-provided content, not gestures defined by the operating system, user agent, or assistive technology"
Actually, there's quite a lot of variety there; we all fell into the trap of assuming our own experience in certain apps is how it works in general. I did some testing of various implementations (no alt text I'm afraid, but some raw videos of 10 implementations). Generally, if you start up/down you will then not trigger a left/right gesture. There are also a lot of ways of defining path-based gestures, which scope various implementations in or out. This is a working spreadsheet of the definitions we considered vs. types of dragging.
I don't say that all carousels work this way; my question is whether, if a carousel works this way, it is compliant or not (for me, yes, based on the current definition of path-based gesture, as there is no B point in the case I describe).
I don't get that result for slick.js, and from what I can tell about its code (around this point) it is looking for a left or right swipe motion to activate. If you can go up or down first, that's a bug, and not one I can replicate. That means to activate to the left you have to start at "A" and quickly go through "B", a point immediately to the left of "A". Then you can go anywhere before releasing the swipe. (B is different for a gesture to the right.) If you can put your finger down and go in any direction, then it wouldn't be in scope. (Which is possible; you can capture the events from touch/mousedown and prevent the default scroll action.)
Here's what I found with the Slick examples on my portrait mode iPhone 6:
To me, the Slick slider does expect users to go in a horizontal direction first in order to navigate the carousel, even if Slick then disables native vertical scroll after that horizontal direction has been met.
Hi - I'm building a native mobile application which I want to make accessible for users, but the wording of 2.5.1 is causing confusion when it comes to assessing native horizontal scrolling sections. The question is: does 2.5.1 apply or not? I see the following two replies (one, two) that to me suggest native horizontal scrolling content is out of scope of 2.5.1, but our assessors disagree, citing that a horizontal scroll "should be operated with a single pointer without a path-based gesture".

The cause of the conflict is that horizontal scrolling is a very common pattern in native mobile applications. When I look at many popular native apps, from Apple Music's "Recently Played" section, Apple Weather's conditions-by-hour section, and Instagram stories, to Airbnb's top navigation ("Design", "Camping", "Surfing", etc.), they all use native horizontal scrolling sections and don't provide a single-pointer alternative. Based on the comments above, I believe these designs are accessible and should be considered out of scope of 2.5.1.

I do also observe that web-based implementations, where there is no native scrolling, do implement a single-pointer mechanism in the design. Airbnb is a good example: a single-pointer interaction is provided in the top nav section on the website but not in the native mobile app. Apple Music makes the section header clickable to link to a vertical list. These both make sense, in that on the web the browser cannot make accommodations, whereas for a native mobile application the OS can. I also observe that carousel designs without clear affordances for extra content commonly do provide a single-pointer alternative, but I believe that is there to signify the additional content, since it is otherwise unclear that the area is scrollable or holds more content. The Airbnb accommodation images on mobile are a good example of this, but I don't believe this relates to 2.5.1; it is more about providing clear signifiers of additional content.

Have I understood this correctly, or am I missing something? Apologies for the longer message, but I want to be as specific as possible to achieve clarity and make sure we're doing the best by our users.
@g-davies, the first critical point to make is that we're currently putting comments into a 2018 issue about WCAG 2.1 (opened by me, ironically). It is rendered fairly moot by the fact that WCAG 2.2 added a similar requirement covering Dragging Movements:
Regardless of where one makes a distinction between Pointer Gestures and Dragging, the takeaway is that there must be a means of operating an interface without dragging or directional gestures, which predominantly means stuff has to be accomplishable with single clicks.

Before responding to your specific questions, I should clarify that the WCAG standard is web-focused, and so does not in itself apply to a native mobile application. That said, the European EN 301549 standard, covering Accessibility requirements for ICT products and services, generally applies the WCAG standard to all electronic communication. They take some guidance from the WCAG2ICT task force, which has a draft document covering WCAG 2.2. Citing native app functionality gets us into the realm of WCAG2ICT, not WCAG. I'm not sure how Apple is going to defend some native app behaviours when someone from the EU comes knocking. Not my concern. So I am only going to respond in the context of a web application.

If the weather app were an HTML page, it would be at risk of failing Pointer Gestures or Dragging Movements, because there is no single-touch mechanism for advancing the horizontal list beyond the 5 or 6 hourly items shown. It would be easy to address this; just add prior and next targets at each end of the scrolling area (see the sketch at the end of this comment). You identified a few other mechanisms that could be added in any of the examples you provided to allow single-click interaction.

A possible defence would be to argue what "functionality" means, since it could be interpreted in more than one way. Should the consideration be isolated to operating the hourly view in the horizontal area? Or can it be stated as the ability to view the hourly temperatures in a 24-hour day? With that second framing of 'functionality' in mind, when I single-click on the hourly area, I can display a full 24 hours' worth of hourly temperatures (and basic weather icons) in one view. The info is not quite as granular as the horizontal scroll (1-hour chunks versus 3-hour chunks in the single view), but I think one could make a decent case for equivalency. I suspect this notion of attaining the outcome in an alternative way will be more baked into WCAG 3, so I won't spend too much time going into detail on it, but it is a good fallback for any design consideration you're doing, if you lack a means for overriding the native operators and so cannot provide a single-click solution within a draggable section.
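As a rough sketch of the "prior and next targets" suggestion above (invented ids, not real weather-app code): a pair of single-click buttons that advance a horizontally scrolling region, so the content remains operable without any dragging or directional gesture.

```html
<!-- Illustrative sketch only: single-pointer controls for a scrolling strip. -->
<button type="button" id="prev" aria-label="Earlier hours">Previous</button>
<div id="hours" style="display: flex; overflow-x: auto;">
  <!-- hourly forecast items would go here -->
</div>
<button type="button" id="next" aria-label="Later hours">Next</button>
<script>
  const hours = document.getElementById('hours');
  // Each activation scrolls by roughly one visible width of the strip.
  document.getElementById('prev').addEventListener('click', () =>
    hours.scrollBy({ left: -hours.clientWidth, behavior: 'smooth' }));
  document.getElementById('next').addEventListener('click', () =>
    hours.scrollBy({ left: hours.clientWidth, behavior: 'smooth' }));
</script>
```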
@mbgower thank you for both of your very thoughtful and detailed comments. They're really insightful and helpful. The reflow points make complete sense to me, and I had been looking at these independently, so it's great you have reinforced them. I wanted to riff a little more on the second comment on Pointer Gestures to get your feedback. I'm going to be a pain and focus a little more on the differences between web and native mobile apps, but I think your perspective will be really helpful. If we consider the user's experience as the sum of the app + the runtime environment.
My motivation for mentioning this is my assumption that the complexity of web distribution (also why the web is awesome) means more emphasis must be placed on the app design, as the author cannot be certain of the OS handling. On native mobile it is far easier to design knowing accessibility capabilities will be provided by the OS (especially with minimum OS targets). I believe this means it's easy to defer known accessibility accommodations to the OS if you specify the minimum version. Focusing on iOS for the sake of simplicity, I have been exploring the AssistiveTouch capabilities (I think introduced in iOS 10) and it appears many of these gesture issues can be mitigated by the OS directly. I was able to use AssistiveTouch to interact with horizontal scroll areas with a pointer and without gestures. I believe this means that if a design is compatible with the AssistiveTouch features, then it would meet the SC? I've included some screenshots and a short video to show it working with the weather app. In summary, my current thinking is that on the web there should be greater emphasis on solving for gesture alternatives in the app design, due to the uncertainty of accessibility features in the runtime environment. On native mobile, if the deployment is constrained to OS versions that support AssistiveTouch and the design is compatible with AT, then this should satisfy the SC. Does this sound rational and sensible? RPReplay_Final1710953563_sm.mov
2.5.1 says actions must be achieved "without a path-based gesture". I took this to mean the same thing as in the 2.1.1 Keyboard SC, whose language gives an exception "where the underlying function requires input that depends on the path of the user's movement and not just the endpoints."
That seems to be the same concept as "path-based" in 2.5.1. However, 2.5.1's Understanding document gives the following for disallowed path-based gestures (my emphasis):
The slider thumb seems to contradict the interpretation in the pre-existing 2.1.1 Understanding document, which provides the following:
With a slider, I can normally drag in any direction I want, and the slider just interprets my movement as it relates to a pre-determined axis. So I can take any path I want to arrive at the slider position (end point) I desire.
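To illustrate that point, here is a minimal sketch of a custom slider (made-up markup, positioning CSS omitted, and not code from any cited library) that only reads the pointer's x coordinate while dragging: whatever path the pointer takes, only where it ends up horizontally matters.

```html
<div id="track"><div id="thumb" style="touch-action: none;"></div></div>
<script>
  // Illustrative sketch only.
  const track = document.getElementById('track');
  const thumb = document.getElementById('thumb');

  thumb.addEventListener('pointerdown', (e) => {
    thumb.setPointerCapture(e.pointerId);
  });

  thumb.addEventListener('pointermove', (e) => {
    if (!thumb.hasPointerCapture(e.pointerId)) return;
    const rect = track.getBoundingClientRect();
    // Only the x coordinate is used; vertical movement is ignored, so even a
    // near-vertical drag with a slight x component repositions the thumb.
    const ratio = Math.min(1, Math.max(0, (e.clientX - rect.left) / rect.width));
    thumb.style.left = (ratio * 100) + '%';
  });
</script>
```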
With this in mind, I would suggest that operating a slider by dragging should be given the same pass as drag-and-drop: