Some window systems provide support for input devices that react to the user’s touching the screen and moving fingers while touching the screen. These input devices are known as touchscreens, and Emacs reports the events they generate as touchscreen events.
Most individual events generated by a touchscreen only have meaning as part of a larger sequence of other events: for instance, the simple operation of tapping the touchscreen involves the user placing and raising a finger on the touchscreen, and swiping the display to scroll it involves placing a finger, moving it many times upwards or downwards, and then raising the finger.
While a simplistic model consisting of one finger is adequate for taps and scrolling, more complicated gestures require support for keeping track of multiple fingers, where the position of each finger is represented by a touch point. For example, a “pinch to zoom” gesture might consist of the user placing two fingers and moving them individually in opposite directions, where the distance between the positions of their individual points determines the amount by which to zoom the display, and the center of an imaginary line between those positions determines where to pan the display after zooming.
The low-level touchscreen events described below can be used to implement all the touch sequences described above. In those events, each point is represented by a cons of an arbitrary number identifying the point and a mouse position list (see Click Events) specifying the position of the finger when the event occurred.
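For example, a Lisp program might take one of these touch points apart as follows. This is a minimal sketch; the function my-touch-point-info is hypothetical and not part of Emacs:

     (defun my-touch-point-info (tp)
       "Return basic information about the touch point TP.
     TP is a cons of an identifier and a mouse position list."
       (let ((id (car tp))     ; arbitrary number identifying the point
             (posn (cdr tp)))  ; mouse position list (see Click Events)
         (list id (posn-window posn) (posn-x-y posn))))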
(touchscreen-begin point)
This event is sent when point is created by the user pressing a finger against the touchscreen.
Imaginary prefix keys are also affixed to these events by read-key-sequence when they originate on top of a special part of a frame or window. See Key Sequence Input.
(touchscreen-update points)
This event is sent when a point on the touchscreen has changed position. points is a list of touch points containing the up-to-date positions of each touch point currently on the touchscreen.
(touchscreen-end point canceled)
This event is sent when point is no longer present on the display, because another program took the grab, or because the user raised the finger from the touchscreen.
canceled is non-nil if the touch sequence has been intercepted by another program (such as the window manager), and Emacs should undo or avoid any editing commands that would otherwise result from the touch sequence.
Imaginary prefix keys are also affixed to these events by read-key-sequence when they originate on top of a special part of a frame or window.
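As an illustration of these events, the following sketch binds a hypothetical command (my-report-touch, not part of Emacs) which reports where a touch sequence begins:

     (defun my-report-touch (event)
       "Report the identifier and window of a touchscreen-begin EVENT."
       (interactive "e")
       (let* ((tp (nth 1 event))   ; the (ID . POSN) touch point
              (posn (cdr tp)))
         (message "Touch %s began in %s" (car tp) (posn-window posn))))

     (global-set-key [touchscreen-begin] #'my-report-touch)

Observe that binding a command to touchscreen-begin in this fashion forestalls the mouse event translation described below.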
If a touchpoint is pressed against the menu bar, then Emacs will not generate any corresponding touchscreen-begin or touchscreen-end events; instead, the menu bar may be displayed after touchscreen-end would have been delivered under other circumstances.
When no command is bound to touchscreen-begin, touchscreen-end, or touchscreen-update, Emacs calls a “key translation function” (see Keymaps for Translating Sequences of Events) to translate key sequences containing touch screen events into ordinary mouse events (see Mouse Events). Since Emacs doesn’t support distinguishing events originating from separate mouse devices, it assumes that a maximum of two touchpoints are active while translation takes place, and makes no guarantees about the results of event translation when that restriction is overstepped.
Emacs applies two different strategies for translating touch events into mouse events, contingent on factors such as the commands bound to keymaps that are active at the location of the touchscreen-begin event. If a command is bound to down-mouse-1 at that location, the initial translation consists of a single down-mouse-1 event, with subsequent touchscreen-update events translated to mouse motion events (see Motion Events), and a final touchscreen-end event translated to a mouse-1 or drag-mouse-1 event (unless the touchscreen-end event indicates that the touch sequence has been intercepted by another program). This is dubbed “simple translation”, and produces a simple correspondence between touchpoint motion and mouse motion.
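To illustrate simple translation, consider the following sketch, in which the command my-button-down (hypothetical, not part of Emacs) is bound to down-mouse-1. With it in effect at the position of a touchscreen-begin event, a tap is translated to down-mouse-1 followed by mouse-1, while placing, moving, and raising a finger yields mouse motion events and a final drag-mouse-1:

     (defun my-button-down (event)
       "Handle a button-down EVENT from a mouse or a touchscreen."
       (interactive "e")
       (message "Down at %S" (posn-x-y (event-start event))))

     (global-set-key [down-mouse-1] #'my-button-down)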
However, some commands bound to down-mouse-1 (mouse-drag-region, for example) either conflict with defined touch screen gestures (such as “long-press to drag”) or with user expectations for touch input, and shouldn’t subject the touch sequence to simple translation. If a command whose name has the property ignored-mouse-command (see Symbol Properties) is encountered, or there is no command bound to down-mouse-1, a more irregular form of translation takes place: here, Emacs processes touch screen gestures (see Touchscreens in The GNU Emacs Manual) first, and finally attempts to translate touch screen events into mouse events if no gesture was detected prior to a closing touchscreen-end event (with its canceled parameter nil, as with simple translation) and a command is bound to mouse-1 at the location of that event. Before generating the mouse-1 event, point is also set to the location of the touchscreen-end event, and the window containing the position of that event is selected, as a compromise for packages which assume mouse-drag-region has already set point to the location of any mouse click and selected the window where it took place.
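For instance, assuming my-button-down is a command bound to down-mouse-1 (as in the sketch above), it can be exempted from simple translation by means of this property:

     ;; Touch sequences over bindings of this command are processed
     ;; as touch screen gestures rather than by simple translation.
     (put 'my-button-down 'ignored-mouse-command t)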
To prevent unwanted mouse-1 events from arriving after a mouse menu is dismissed (see Menus and the Mouse), Emacs also avoids simple translation if down-mouse-1 is bound to a keymap, making it a prefix key. In lieu of simple translation, it translates the closing touchscreen-end to a down-mouse-1 event with the starting position of the touch sequence, consequently displaying the mouse menu.
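For example, with down-mouse-1 bound to a menu keymap as in the following sketch (my-touch-menu is hypothetical), a tap is translated to a single down-mouse-1 event bearing the starting position of the touch sequence, which displays the menu:

     (defvar my-touch-menu
       (let ((map (make-sparse-keymap "Touch Menu")))
         (define-key map [my-item]
                     '(menu-item "Do nothing" ignore))
         map)
       "A menu keymap with a single inert item.")

     (global-set-key [down-mouse-1] my-touch-menu)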
Since certain commands are also bound to down-mouse-1 for the purpose of displaying pop-up menus, Emacs additionally behaves as illustrated in the last paragraph if down-mouse-1 is bound to a command whose name has the property mouse-1-menu-command.
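A command which itself displays a pop-up menu can announce as much by way of this property, as in this sketch (my-popup-edit-menu is hypothetical; menu-bar-edit-menu is the standard Edit menu):

     (defun my-popup-edit-menu (event)
       "Pop up the Edit menu at the position of EVENT."
       (interactive "e")
       (popup-menu menu-bar-edit-menu event))

     (put 'my-popup-edit-menu 'mouse-1-menu-command t)
     (global-set-key [down-mouse-1] #'my-popup-edit-menu)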
When a second touch point is registered while a touch point is already
being translated, gesture translation is terminated, and the distance
from the second touch point (the ancillary tool) to the first is
measured. Subsequent motion from either of those touch points will
yield touchscreen-pinch
events incorporating the ratio formed
by the distance between their new positions and the distance measured
at the outset, as illustrated in the following table.
If touch gestures are detected during translation, one of the following input events may be generated:
(touchscreen-scroll window dx dy)
If a “scrolling” gesture is detected during the translation process, each subsequent touchscreen-update event is translated to a touchscreen-scroll event, where dx and dy specify, in pixels, the relative motion of the touchpoint from the position of the touchscreen-begin event that started the sequence or the last touchscreen-scroll event, whichever came later.
(touchscreen-hold posn)
If the single active touchpoint remains stationary for more than touch-screen-delay seconds after a touchscreen-begin is generated, a “long-press” gesture is detected during the translation process, and a touchscreen-hold event is sent, with posn set to a mouse position list containing the position of the touchscreen-begin event.
(touchscreen-drag posn)
If a “long-press” gesture is detected while translating the current touch sequence, or “drag-to-select” is being resumed as a result of the touch-screen-extend-selection user option, a touchscreen-drag event is sent upon each subsequent touchscreen-update event, with posn set to the new position of the touchpoint.
(touchscreen-restart-drag posn)
This event is sent upon the start of a touch sequence resulting in the continuation of a “drag-to-select” gesture (subject to the aforementioned user option), with posn set to the position list of the initial touchscreen-begin event within that touch sequence.
(touchscreen-pinch posn ratio pan-x pan-y ratio-diff)
This event is delivered upon significant changes to the positions of either active touch point when an ancillary tool is active.
posn is a mouse position list for the midpoint of a line drawn from the ancillary tool to the other touch point being observed.
ratio is the distance between both touch points being observed divided by that distance when the ancillary tool was first registered; that is to say, the scale of the “pinch” gesture.
pan-x and pan-y are the difference, in pixels, between the position of posn and its position within the last event delivered in this series of touch events, or, if no such event exists, the centerpoint between both touch points when the ancillary tool was first registered.
ratio-diff is the difference between this event’s ratio and ratio in the last event delivered; it is ratio if no such event exists.
Such events are sent when the magnitude of the changes they represent will yield a ratio which differs by more than 0.2 from that in the previous event, or the sum of pan-x and pan-y will surpass half the frame’s character width in pixels (see Frame Font).
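By way of illustration, a command resembling the following sketch could be bound to touchscreen-scroll; my-touch-scroll is hypothetical, and for simplicity it scrolls by whole lines, disregarding dx and the magnitude of dy:

     (defun my-touch-scroll (event)
       "Scroll the window designated by a touchscreen-scroll EVENT."
       (interactive "e")
       (let ((window (nth 1 event))  ; WINDOW
             (dy (nth 3 event)))     ; DY, vertical motion in pixels
         (with-selected-window window
           ;; Positive DY means the touchpoint moved downwards.
           (if (> dy 0)
               (scroll-down-line)
             (scroll-up-line)))))

     (global-set-key [touchscreen-scroll] #'my-touch-scroll)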
Several functions are provided for Lisp programs that handle touch screen events. The intended use of the first two functions described below is from commands bound directly to touchscreen-begin events; they allow responding to commonly used touch screen gestures separately from mouse event translation.
Function: touch-screen-track-tap event &optional update data threshold
This function is used to track a single “tap” gesture originating from the touchscreen-begin event event, often used to set the point or to activate a button. It waits for a touchscreen-end event with the same touch identifier to arrive, at which point it returns t, signifying the end of the gesture.
If a touchscreen-update event arrives in the meantime and contains at least one touchpoint with the same identifier as in event, the function update is called with two arguments: the list of touchpoints in that touchscreen-update event, and data.
If threshold is non-nil and such an event indicates that the touchpoint represented by event has moved beyond a threshold from the position of event (threshold itself if it is a number, and 10 pixels otherwise), nil is returned and mouse event translation is resumed for that touchpoint, so as not to impede the recognition of any subsequent touchscreen gesture arising from its sequence.
If any other event arrives in the meantime, nil is returned. The caller should not perform any action in that case.
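For example, a command bound to touchscreen-begin might respond to a completed tap as follows; my-tap-to-mark is a hypothetical sketch:

     (defun my-tap-to-mark (event)
       "Push the mark at the position of a tap beginning with EVENT."
       (interactive "e")
       (when (touch-screen-track-tap event)
         ;; The tap ran to completion without being canceled.
         (push-mark (posn-point (cdr (nth 1 event))))))

     (global-set-key [touchscreen-begin] #'my-tap-to-mark)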
Function: touch-screen-track-drag event update &optional data
This function is used to track a single “drag” gesture originating from the touchscreen-begin event event. It behaves like touch-screen-track-tap, except that it returns no-drag and refrains from calling update if the touchpoint in event did not move far enough (by default, 5 pixels from its position in event) to qualify as an actual drag.
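A command tracking a drag might resemble this sketch, where my-track-finger (hypothetical) merely reports each new position of the touchpoint:

     (defun my-track-finger (event)
       "Report motion of the touchpoint in EVENT until it is released."
       (interactive "e")
       (when (eq (touch-screen-track-drag
                  event
                  (lambda (points _data)
                    ;; Called for each touchscreen-update event;
                    ;; POINTS holds the up-to-date touch points.
                    (message "Now at %S"
                             (posn-x-y (cdr (car points))))))
                 t)
         (message "Drag concluded")))

     (global-set-key [touchscreen-begin] #'my-track-finger)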
In addition to those two functions, a third function is provided for commands bound to certain events generated through mouse event translation; it prevents unwanted events from being generated after it is called.
Function: touch-screen-inhibit-drag
This function inhibits the generation of touchscreen-drag events during mouse event translation for the duration of the touch sequence being translated after it is called. It must be called from a command which is bound to a touchscreen-hold or touchscreen-drag event, and signals an error otherwise.
Since this function can only be called after a gesture is already recognized during mouse event translation, no mouse events will be generated from touch events constituting the previously mentioned touch sequence after it is called.
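For instance, a command bound to touchscreen-hold might display a menu while suppressing the touchscreen-drag events that would otherwise follow; my-hold-menu is a hypothetical sketch:

     (defun my-hold-menu (event)
       "Display a menu in response to a long-press EVENT."
       (interactive "e")
       ;; No touchscreen-drag events will be generated for the
       ;; remainder of this touch sequence.
       (touch-screen-inhibit-drag)
       (popup-menu menu-bar-edit-menu event))

     (global-set-key [touchscreen-hold] #'my-hold-menu)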