My interest in human-machine interfaces ranges from the practical to the academic. With regard to the latter, Pointer Events (PointerEvent) have been formally defined by the W3C, and Windows 8 has actually implemented them. A pointer is a hardware-agnostic representation of any input device that can target a specific coordinate (or set of coordinates) on a screen. This recommendation is very good news, as touch has always been an inconsistent experience across web applications. As you would anticipate, there is a one-to-one relationship between mouse events and Pointer Events that covers the current use cases, as the table below shows (a small wiring sketch follows it):
Mouse event | PointerEvent (equivalent)
mousedown | pointerdown
mouseup | pointerup
mouseleave | pointerleave
mouseenter | pointerenter
mousemove | pointermove
mouseout | pointerout
mouseover | pointerover
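To make that mapping concrete, here is a minimal sketch that wires the same logging handler to each mouse/pointer pair from the table; the element id and handler name are placeholders I have made up for illustration:
var target = document.getElementById("canvas"); // hypothetical element
var pairs = [
  ["mousedown", "pointerdown"],
  ["mouseup", "pointerup"],
  ["mouseleave", "pointerleave"],
  ["mouseenter", "pointerenter"],
  ["mousemove", "pointermove"],
  ["mouseout", "pointerout"],
  ["mouseover", "pointerover"]
];
function logEvent(e) {
  // Both event families expose the familiar MouseEvent coordinates
  console.log(e.type, e.clientX, e.clientY);
}
pairs.forEach(function (pair) {
  target.addEventListener(pair[0], logEvent); // mouse event
  target.addEventListener(pair[1], logEvent); // pointer equivalent
});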
If you have done any amount of mobile development, this pointer/touch paradigm built on the familiar mouse event model is probably quite straightforward. You can still expect the usual coordinate properties such as clientX and clientY (the location where you touched the screen), along with width and height, which describe the size of the contact area. Other properties are also available: pointerType (currently defined as pen, touch, or mouse), pressure (a value from 0 to 1), and tiltX and tiltY. Hardware like PixelSense uses these properties in combination to determine things like hand or pen orientation.
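As a rough illustration (assuming a browser that implements Pointer Events, and a hypothetical element id), these extra properties can be read straight off the event object:
var surface = document.getElementById("drawing-surface"); // hypothetical element
surface.addEventListener("pointermove", function (e) {
  console.log("type:", e.pointerType);             // "pen", "touch" or "mouse"
  console.log("pressure:", e.pressure);            // 0 to 1
  console.log("tilt:", e.tiltX, e.tiltY);          // pen tilt
  console.log("contact size:", e.width, e.height); // contact geometry
});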
Feature Detection
As I have mentioned before, feature detection is the only way to reliably discover what your browser can do in JavaScript, and this is equally true for Pointer Events. The following could be the basis of PointerEvent detection:
if (window.PointerEvent) { }
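For example, that check could be used to fall back to mouse and touch events where Pointer Events are not available; the element id and handler below are illustrative only:
var button = document.getElementById("draw-button"); // hypothetical element
function startDrawing(e) {
  // TouchEvent exposes coordinates on the touches list rather than directly
  var point = e.touches ? e.touches[0] : e;
  console.log("drawing started at", point.clientX, point.clientY);
}
if (window.PointerEvent) {
  button.addEventListener("pointerdown", startDrawing);
} else {
  // Older browsers: fall back to the mouse/touch equivalents
  button.addEventListener("mousedown", startDrawing);
  button.addEventListener("touchstart", startDrawing);
}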
Then we can check whether we have multi-touch support as follows:
if (navigator.maxTouchPoints && navigator.maxTouchPoints > 1) { }
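Where multi-touch is available, each contact gets its own pointerId, so one way to handle several fingers at once is to keep the active pointers in a simple map. This is only a sketch and the element id is made up:
var surface = document.getElementById("touch-surface"); // hypothetical element
var activePointers = {}; // keyed by pointerId
if (window.PointerEvent && navigator.maxTouchPoints > 1) {
  surface.addEventListener("pointerdown", function (e) {
    activePointers[e.pointerId] = { x: e.clientX, y: e.clientY };
  });
  surface.addEventListener("pointermove", function (e) {
    if (activePointers[e.pointerId]) {
      activePointers[e.pointerId] = { x: e.clientX, y: e.clientY };
    }
  });
  surface.addEventListener("pointerup", function (e) {
    delete activePointers[e.pointerId];
  });
  surface.addEventListener("pointercancel", function (e) {
    delete activePointers[e.pointerId];
  });
}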
While Pointer Events appear fully fleshed out at the W3C, I still see room for things like gesture events and manipulation events, although I can see valid arguments about which layer of abstraction should figure these out. Internet Explorer 11 includes native support for touch-based HTML5 drag and drop, but to detect manipulation events in Internet Explorer 10 you would need to listen for the following: MSGestureHold, MSGestureStart, MSGestureChange and MSGestureEnd. As you can see, support across just one browser is complicated enough, and you would still need to support all the other browsers out there. To that end, the jQuery Foundation's standards team has taken over Google's neglected PointerEvents polyfill project, so we can at least hope to see a framework provide a consistent experience with Pointer Events.
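For completeness, here is a rough sketch of what listening for those IE10 gesture events looked like, feeding pointers into the MSGesture recogniser; this is from memory of the vendor-prefixed API and is indicative only, with a made-up element id:
var element = document.getElementById("gesture-target"); // hypothetical element
if (window.MSGesture) {
  var gesture = new MSGesture();
  gesture.target = element;
  // Feed each new contact into the gesture recogniser
  element.addEventListener("MSPointerDown", function (e) {
    gesture.addPointer(e.pointerId);
  });
  element.addEventListener("MSGestureStart", function (e) {
    console.log("gesture started");
  });
  element.addEventListener("MSGestureChange", function (e) {
    // Scale, rotation and translation deltas describe the manipulation
    console.log(e.scale, e.rotation, e.translationX, e.translationY);
  });
  element.addEventListener("MSGestureEnd", function (e) {
    console.log("gesture ended");
  });
  element.addEventListener("MSGestureHold", function (e) {
    console.log("press and hold");
  });
}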
…W3C took the Pointer Events specification to the Proposed Recommendation stage. This makes Pointer Events one step closer to a finished standard and gives browsers a solid base on which to implement these APIs. Some browsers have even begun their implementations. Unsurprisingly, Internet Explorer, where the first implementation of Pointer Events began before the specification was submitted to the W3C for standardization, has implemented Pointer Events, and Firefox has a branch of its code base implementing Pointer Events which it intends to port to all versions of Firefox. Both of these implementations recently passed 100% of the Pointer Events test suite, so implementation is progressing nicely.
If you are interested, the PointerEvents polyfill project is over on GitHub here.