[webkit-dev] Considering a Touchhover

Charles Pritchard chuck at jumis.com
Mon Mar 21 13:33:08 PDT 2011


On 3/21/2011 12:06 PM, James Craig wrote:
> and the DOM structure mimics the UI visible in the canvas. It requires a focus model change to WebKit because the canvas subtree was not originally intended to be accessible in any modality.
This is being worked on; Dominic Mazzoni put some effort into it,
and the list discussed some possible fixes to his original patch.
Mozilla has had some discussion on their boards, but no movement yet.
Their efforts will likely be made easier by the lessons learned in
WebKit's implementation.

> http://lists.w3.org/Archives/Public/public-canvas-api/2009OctDec/0026.html
>
> I'm not sure I agree with the need for a touchhover event, but I'd be interested to hear how you think it should work.
>
> James

Regarding Chris' post:

> On Mar 11, 2011, at 3:30 PM, Chris Fleizach wrote:
>
...
>> In the meantime, VoiceOver on iOS will call .focus() every time it "hovers" on an item, so you can use that to monitor where VO is at any moment.
>>
>> If that doesn't work with <canvas> tags please file bugs at bugs.webkit.org and CC me
I haven't filed a bug report yet; it takes some effort to put together.

Currently, VoiceOver treats the entire canvas element as one item.

The eyes-free UI requires double-tapping on a canvas element before
it will send DOM events to the element.

The Canvas spec example of displaying two distinct checkboxes requires
that the scripting environment and author-provided hit testing be part
of that loop.
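
For context, the spec's two-checkbox example keeps real checkbox elements
in the canvas subtree as fallback content while the script draws their
visual twins on the bitmap. Roughly (the names and labels here are my own,
not quoted from the spec):

<canvas width="200" height="100">
  <!-- Fallback content: the accessible DOM mirroring the drawn UI. -->
  <label><input id="checkbox1" type="checkbox"> First option</label>
  <label><input id="checkbox2" type="checkbox"> Second option</label>
</canvas>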

Hit testing is not performed until -after- a user calls focus() on the 
canvas element
and double-taps the screen. Prior to that point, touch events are not 
sent to the DOM.

My suggestion of a touchhover event would notify the DOM of user
coordinates, so that the scripting environment can delegate focus
via ARIA, DOM focus, or any other method available.

I have not tried VoiceOver with SVG.

This is pseudo-code.
<style>#test:hover { ... }</style>
<canvas id="test"
    ontouchhover="myLastPosition(this, event.pageX, event.pageY)"
    onfocus="if (!hitAndFocusTest(this)) myParentFocusElement(this);">
  ...
</canvas>
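
As a minimal sketch of what those author-supplied helpers could look like
(hitAndFocusTest, myLastPosition and myParentFocusElement are hypothetical
names from the pseudo-code, and the hit regions, matching the fallback
checkboxes above, are tracked by the script itself):

<script>
// Regions the script has drawn on the canvas, mapped to fallback elements.
var regions = [
  { x: 10, y: 10, w: 20, h: 20, target: 'checkbox1' },
  { x: 10, y: 40, w: 20, h: 20, target: 'checkbox2' }
];
var lastTouch = null;

function myLastPosition(canvas, pageX, pageY) {
  // Record the most recent touchhover coordinates, canvas-relative.
  var rect = canvas.getBoundingClientRect();
  lastTouch = { x: pageX - (rect.left + window.pageXOffset),
                y: pageY - (rect.top + window.pageYOffset) };
}

function hitAndFocusTest(canvas) {
  // If the last touchhover landed in a known region, delegate focus to
  // the matching fallback element and report success.
  if (!lastTouch) return false;
  for (var i = 0; i < regions.length; i++) {
    var r = regions[i];
    if (lastTouch.x >= r.x && lastTouch.x <= r.x + r.w &&
        lastTouch.y >= r.y && lastTouch.y <= r.y + r.h) {
      var el = document.getElementById(r.target);
      if (el) { el.focus(); return true; }
    }
  }
  return false;
}

function myParentFocusElement(canvas) {
  // No region was hit; fall back to the canvas's container (behavior
  // the pseudo-code above leaves open).
  var parent = canvas.parentNode;
  if (parent && typeof parent.focus === 'function') parent.focus();
}
</script>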

Currently, Wacom tablet input masquerades as mouse events, and Apple
treats touch events on the desktop as UA gestures. For now, it's just
mobile devices.

I'm a fan of the VoiceOver iOS AT. For a sighted user, it emulates a
touch "hover" event.
VoiceOver does not have a setting for touch sensitivity/proximity.

Alternatives:

Manipulating the underlying CSS of the DOM shadow tree is something I'd
like to reserve for cases where I'm supporting legacy software.

SVG paths require extra work to construct (from canvas coordinates), and
a11y support for SVG has not been much of a priority.
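
By "extra work" I mean something like the following: mirroring a region
drawn on the canvas as an SVG path so the UA can hit-test and focus it
natively, with the coordinates kept in sync with the drawing code by hand.
This is a sketch only; the function name is hypothetical.

function mirrorRegionAsSvgPath(svgRoot, box) {
  // Build a rectangular path matching a canvas hit region.
  var path = document.createElementNS('http://www.w3.org/2000/svg', 'path');
  path.setAttribute('d', 'M' + box.x + ' ' + box.y +
                         ' h' + box.w + ' v' + box.h +
                         ' h' + (-box.w) + ' Z');
  path.setAttribute('fill', 'transparent');
  svgRoot.appendChild(path);
  return path;
}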

Introducing a new method into the Canvas 2d context spec is an option,
using drawFocusRing as a model. drawFocusRing is currently designed
for screen magnifiers, and has not been adapted for touch input.
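
As a rough sketch of the drawFocusRing model (the signature here, element
plus caret coordinates with the return value saying whether the author
should draw a custom ring, is my reading of the current draft and may
differ):

function drawCheckbox(ctx, box, element, checked) {
  ctx.beginPath();
  ctx.rect(box.x, box.y, box.w, box.h);  // outline of the drawn checkbox
  ctx.strokeStyle = 'black';
  ctx.stroke();
  if (checked)
    ctx.fillRect(box.x + 4, box.y + 4, box.w - 8, box.h - 8);
  if (document.activeElement === element && ctx.drawFocusRing) {
    // Reports the focused control's location to the UA and AT (e.g. a
    // magnifier); if ring drawing is left to the author, stroke one here.
    if (ctx.drawFocusRing(element, box.x, box.y)) {
      ctx.strokeStyle = 'Highlight';
      ctx.strokeRect(box.x - 2, box.y - 2, box.w + 4, box.h + 4);
    }
  }
}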

Methods that require the scripting environment to mutate CSS/DOM, or to
manage a stack of element positions, are less efficient and less flexible
than event-based methods.

-Charles

