ACT-Touch Reference Manual
Working Draft
Franklin P. Tamborello, II (Cogscent, LLC)
Kristen K. Greene (National Institute of Standards and Technology)
Last revised 2014.10.05 for ACT-Touch.lisp revision 13
Contents

Preface
Acknowledgments
Introduction
Loading ACT-Touch
Manual Request Extensions to ACT-R
Virtual Multitouch Display Device
Preface
This document describes the manual request extensions to ACT-R 6 (hereafter, ACT-R) provided by ACT-Touch as well as the included virtual multitouch display device. As ACT-Touch is intended to serve as an extension to ACT-R, this document follows the formatting conventions of the ACT-R Reference Manual by Dan Bothell. A description of that notation is quoted verbatim from that document for the reader’s convenience. The scope of this manual is restricted to the ACT-Touch distribution. This document is written with the assumption that the reader is familiar with ACT-R and some sections further assume familiarity with programming in Lisp. Please refer to the ACT-R 6.0 Reference Manual for all other ACT-R issues. ACT-Touch is intended to be a long-term work in progress and therefore so is this documentation. As the software is updated this documentation will be as well to reflect the changes made to the software. Please report all errata to Frank Tamborello.
ACT-Touch revision 13 has been tested with ACT-R 6.1 and found to be compatible. This version of ACT-Touch is incompatible with ACT-R 6.0. Use ACT-Touch revision 12 with ACT-R 6.0.
Acknowledgments
• Mike Byrne, Rice University: public release of library code upon which ACT-Touch depends
• Dan Bothell, Carnegie Mellon University: ACT-R technical support
• Ross Micheals, National Institute of Standards and Technology: project guidance
• NIST: This work is funded by Measurement Science & Engineering grant 60NANB12D134 from the National Institute of Standards and Technology in support of their Biometric Web Services project (bws.nist.gov).
Introduction
ACT-Touch is a set of manual motor request extensions for ACT-R. Facility with programming models for ACT-R is assumed throughout this document. These manual request extensions constitute theoretical claims predicting motor preparation and execution times for certain manual gestures commonly used with multitouch computer displays. These manual request extensions follow ACT-R’s theoretical claims about how cognition interfaces with action to output information from the human to the environment, which in turn originated with the EPIC architecture. This document is meant as a practical guide to using ACT-Touch; it does not focus on theoretical developments.
As ACT-Touch extends ACT-R’s framework with an additional manual movement vocabulary, many of its movement styles are analogous to extant ACT-R movement styles in the sense that a movement is composed of a certain set of features which specify it, such as a hand, a finger, a direction, and a distance. Consequently, ACT-Touch’s movement styles are subject to the same constraints and caveats as ACT-R’s; e.g., finger positions that would be impossible for any physical human hand to attain are nonetheless specifiable for a cognitive model, so it is up to the modeler to consider such things.
Unlike in ACT-R, all distances in ACT-Touch and its virtual multitouch display are specified in pixels. The virtual multitouch display measures 1,024 pixels wide by 768 pixels tall by default. The model’s default positions for its hands are at either side of the display, approximately centered vertically.
ACT-Touch is implemented as Lisp code that is meant to load with ACT-R’s software. ACT-Touch can be downloaded as a single archive from Cogscent, LLC’s website, http://www.cogscent.com. The archive contains act-touch.lisp, which is the set of manual request extensions; support files implementing a demonstration ACT-R device to handle ACT-Touch’s manual requests; a demonstration model; and this reference manual. Direct technical support inquiries regarding ACT-Touch to Frank Tamborello at frank.tamborello@cogscent.com.
Notations in the Documentation¹
When describing the commands’ syntax the following conventions will be used:
– items appearing in bold are to be entered verbatim
– items appearing in italics take user-supplied values
– items enclosed in {curly braces} are optional
– * indicates that any number of items may be supplied
– + indicates that one or more items may be supplied
– | indicates a choice between options which are enclosed in [square brackets]
– (parentheses) denote that the enclosed items are to be in a list
– a pair of items enclosed in <angle brackets> denotes a cons with the first item being the car and the second the cdr
– -> indicates that calling the command on the left of the arrow will return the item to the right of the arrow
– ::= indicates that the item on the left of that symbol is of the form given by the expression on the right
When examples are provided for the commands they are shown as if they have been evaluated at a Lisp prompt. The prompt that is shown prior to the command indicates additional information about the examples. There are three types of prompts that are used in the examples:
– A prompt with just the character ‘>’ indicates that it is an individual example – independent of those preceding or following it.
– A prompt with a number followed by ‘>’, for example 2> means that the example is part of a sequence of calls which were evaluated and the result depends on the preceding examples. For any given sequence of calls in an example the numbering will start at 1 and increase by 1 with each new example in the sequence.
– A prompt with the letter E preceding the ‘>’, E>, indicates that this is an example which is either incorrect or was evaluated in a context where the call results in an error or warning. This is done to show examples of the warnings and errors that can occur.
In the description of some commands it will describe a parameter or return value as a generalized boolean. What that means is that the value is used to represent a truth value – either true/successful or false/failure. If the value is the symbol nil then it represents false and all other values represent true. When a generalized boolean is returned by one of the commands, one should not make any assumptions about the returned value for the true case. Sometimes the true value may look like it provides additional information, but if that is not specified in the command’s description then it is not guaranteed to hold for all cases or across updates to the command.
¹ Quoted from the ACT-R 6.0 Reference Manual
Loading ACT-Touch
The most straightforward way to load ACT-Touch is to simply place its files in ACT-R’s user-loads folder before loading ACT-R. This will load ACT-Touch at the end of the ACT-R loading process. Load act-touch.lisp alone if you don’t want to use the included virtual multitouch display device or demo model. However, note that whatever device you use must supply an index-z class slot (pixels at 72 ppi). Load misc-lib.lisp, virtual-experiment-window.lisp, and virtual-multitouch-device.lisp if you wish to use the included virtual multitouch display device. Act-touch-demo-model.lisp has an example model.
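For a custom device, the index-z requirement can be met with an ordinary CLOS slot. The following is a minimal sketch; the class name my-touch-device and the initial value are illustrative assumptions, not part of the ACT-Touch distribution:

```lisp
;; Minimal sketch of a custom device class satisfying ACT-Touch's
;; index-z requirement. MY-TOUCH-DEVICE and the initform are
;; illustrative assumptions, not part of the distribution.
(defclass my-touch-device ()
  ((index-z
    :accessor index-z
    :initarg :index-z
    ;; Distance from the model's index finger to the display
    ;; surface, in pixels at 72 ppi.
    :initform 0)))
```

See virtual-multitouch-device.lisp for how the included device defines and uses its own index-z slot.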
Manual Request Extensions to ACT-R
Isa tap
hand [ left | right ]
finger [ index | middle | ring | pinkie | thumb ]
This request executes a tap action with the specified finger on the specified hand: the finger moves toward and momentarily contacts the surface of the multitouch display directly under the finger’s current location. It is analogous to ACT-R’s punch command. The trace for a tap action shows the request being received, the preparation of the features completing, the initiation time passing, the actual contact with the display surface currently under the finger (with the global XY screen coordinates tapped on the virtual multitouch display), and the time to finish the execution of the action (returning the finger to its starting position, where it is ready to act again):
…
0.050   MOTOR                  TAP HAND RIGHT FINGER INDEX
0.200   MOTOR                  PREPARATION-COMPLETE
0.250   MOTOR                  INITIATION-COMPLETE
0.713   MOTOR                  DEVICE-HANDLE-TAP #(500 300) RIGHT INDEX
0.763   MOTOR                  FINISH-MOVEMENT
…
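A production that issues this tap request might be sketched as follows; the goal chunk-type do-tap and its state slot are invented for illustration, while the +manual> request itself follows the syntax above:

```lisp
;; Hypothetical production issuing a TAP request. Only the +manual>
;; request uses ACT-Touch syntax; the goal chunk-type is invented.
(p tap-the-target
   =goal>
      isa      do-tap
      state    start
   ?manual>
      state    free           ; wait until the motor module is free
==>
   +manual>
      isa      tap
      hand     right
      finger   index
   =goal>
      state    done)
```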
Isa tap-hold
hand [ left | right ]
finger [ index | middle | ring | pinkie | thumb ]
This movement style results in the model tapping and holding the specified finger on the surface of the multitouch display device until the model requests the tap-release movement style.
…
0.050   MOTOR                  TAP-HOLD HAND RIGHT FINGER INDEX
0.200   MOTOR                  PREPARATION-COMPLETE
0.250   MOTOR                  INITIATION-COMPLETE
0.763   MOTOR                  DEVICE-HANDLE-TAP-HOLD #(500 300) RIGHT INDEX
Model tap-held (RIGHT INDEX).
…
Isa tap-release
hand [ left | right ]
finger [ index | middle | ring | pinkie | thumb ]
When the model is already holding a finger against the surface of the multitouch display device (e.g., with a tap-hold), this movement style will release the finger from the display and return it to its default distance from the display at the current X, Y coordinates.
…
0.050   MOTOR                  TAP-RELEASE HAND RIGHT FINGER INDEX
0.200   MOTOR                  PREPARATION-COMPLETE
0.250   MOTOR                  INITIATION-COMPLETE
0.400   MOTOR                  DEVICE-HANDLE-TAP-RELEASE #(500 300) RIGHT INDEX
Model tap-released (RIGHT INDEX).
…
If the model does not already have a finger held against the display surface (i.e., its index-z is not 0), this warning results:
#|Warning: Finger must already be held against the surface
of the multitouch display. |#
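To illustrate the hold-then-release pairing, two productions might sequence the requests as sketched below. The goal chunk-type press-and-release and its slots are invented for this sketch; only the +manual> requests use ACT-Touch syntax, and the exact ?manual> query needed between the two requests may require adjustment for your model.

```lisp
;; Hypothetical productions sequencing TAP-HOLD and TAP-RELEASE.
;; The goal chunk-type is invented; only the +manual> requests
;; are ACT-Touch syntax.
(p hold-it
   =goal>
      isa      press-and-release
      state    start
   ?manual>
      state    free
==>
   +manual>
      isa      tap-hold
      hand     right
      finger   index
   =goal>
      state    holding)

(p let-go
   =goal>
      isa      press-and-release
      state    holding
   ?manual>
      state    free
==>
   +manual>
      isa      tap-release
      hand     right
      finger   index
   =goal>
      state    done)
```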
Isa swipe
hand [ left | right ]
finger [ index | middle | ring | pinkie | thumb ]
r distance
theta direction
num-fngrs integer
The model moves the specified number of fingers, starting with the specified finger and incrementing from index to pinkie and thumb, onto the display and then moves the specified distance and direction, then releases the fingers from the display. Num-fngrs defaults to 1.
…
0.050   MOTOR                  SWIPE HAND RIGHT FINGER INDEX R 100 THETA 1 NUM-FNGRS 3
0.350   MOTOR                  PREPARATION-COMPLETE
0.400   MOTOR                  INITIATION-COMPLETE
1.362   MOTOR                  DEVICE-HANDLE-SWIPE #(500 300) #(554 384)
2.375   MOTOR                  FINISH-MOVEMENT
…
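The swipe request shown in the trace (r 100, theta 1, num-fngrs 3) could be issued by a production sketched like this; the goal chunk-type do-swipe is invented for illustration:

```lisp
;; Hypothetical production issuing a three-finger SWIPE;
;; the goal chunk-type is invented.
(p swipe-up-and-right
   =goal>
      isa      do-swipe
   ?manual>
      state    free
==>
   +manual>
      isa      swipe
      hand     right
      finger   index
      r        100            ; distance in pixels
      theta    1              ; direction in radians
      num-fngrs 3
   -goal>)
```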
Isa pinch
hand [ left | right ]
finger [ index | middle | ring | pinkie ]
start-width integer
end-width integer
The model moves the specified finger and thumb onto the surface of the display, then moves them together or apart by the difference between the specified start- and end-widths, then releases them from the display. Start- and end-widths are in pixels.
…
0.050   MOTOR                  PINCH HAND RIGHT FINGER INDEX START-WIDTH 200 END-WIDTH 0
0.300   MOTOR                  PREPARATION-COMPLETE
0.350   MOTOR                  INITIATION-COMPLETE
0.942   MOTOR                  DEVICE-HANDLE-PINCH #(500 300) RIGHT INDEX 200 0
1.455   MOTOR                  FINISH-MOVEMENT
…
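The closing pinch from the trace (start-width 200, end-width 0) could be issued by a production sketched like this; the goal chunk-type do-pinch is invented for illustration:

```lisp
;; Hypothetical production issuing a closing PINCH (200 px down to 0);
;; the goal chunk-type is invented.
(p pinch-closed
   =goal>
      isa      do-pinch
   ?manual>
      state    free
==>
   +manual>
      isa      pinch
      hand     right
      finger   index
      start-width 200         ; pixels
      end-width   0           ; pixels
   -goal>)
```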
Isa rotate
hand [ left | right ]
finger [ index | middle | ring | pinkie ]
rotation direction
This request moves the specified finger and the thumb onto the display, rotates them by the specified rotation (in radians), then releases them.
…
0.050   MOTOR                  ROTATE HAND RIGHT FINGER INDEX ROTATION 1
0.250   MOTOR                  PREPARATION-COMPLETE
0.300   MOTOR                  INITIATION-COMPLETE
1.153   MOTOR                  DEVICE-HANDLE-ROTATE #(500 300) 36
1.666   MOTOR                  FINISH-MOVEMENT
…
Isa move-hand-touch
[ object object | loc location ]

Analogous to move-cursor, this request will result in a ply-style movement of the model’s right hand. That ply will move the index finger to either the object (which must be a chunk which is a subtype of visual-object) or the location (which must be a chunk which is a subtype of visual-location). The trace indicates the distance and direction moved.

If the motor module’s :cursor-noise parameter is bound to t, then move-hand-touch will output noisy locations with error, σ, scaled by target width, w, and off-movement-axis error scaled again by .75, according to this equation developed by May (2012) and Gallagher and Byrne (2013):

σ = (w / 4.133) × (√3 / π)

…
0.100   MOTOR                  MOVE-HAND-TOUCH OBJECT NIL LOC VISUAL-LOCATION0-0-0
0.300   MOTOR                  PREPARATION-COMPLETE
0.350   MOTOR                  INITIATION-COMPLETE
0.502   MOTOR                  MOVE-A-HAND RIGHT 544.76416 -2.6188035
0.552   MOTOR                  FINISH-MOVEMENT
…
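A production issuing a move-hand-touch request toward a visual-location chunk might be sketched like this; the goal chunk-type do-reach is invented, and the =visual-location> buffer test assumes a prior visual-location request has completed:

```lisp
;; Hypothetical production moving the right hand to an attended
;; visual location; the goal chunk-type is invented.
(p reach-for-target
   =goal>
      isa      do-reach
   =visual-location>
      isa      visual-location
   ?manual>
      state    free
==>
   +manual>
      isa      move-hand-touch
      loc      =visual-location
   -goal>)
```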
References

Accot, J., & Zhai, S. (2003). Refining Fitts’ law models for bivariate pointing. In Human Factors in Computing Systems: Proceedings of CHI 2003. New York, NY: ACM.

Anderson, J. R. (2007). How can the human mind occur in the physical universe? New York, NY: Oxford University Press.

Gallagher, M. A., & Byrne, M. D. (2013). The devil is in the distribution: Refining an ACT-R model of a continuous motor task. In Proceedings of the 12th International Conference on Cognitive Modeling.

Greene, K. K., & Tamborello, F. P. (2013). Initial ACT-R extensions for user modeling in the mobile touchscreen domain. In Proceedings of the 12th International Conference on Cognitive Modeling.

Grossman, T., Kong, N., & Balakrishnan, R. (2007). Modeling pointing at targets of arbitrary shapes. In Human Factors in Computing Systems: Proceedings of CHI 2007 (pp. 463-472). New York, NY: ACM.

John, B. E. (2011). Using predictive human performance models to inspire and support UI design recommendations. In Human Factors in Computing Systems: Proceedings of CHI 2011 (pp. 983-986). New York, NY: ACM.

MacKenzie, I. S. (1992). Fitts’ law as a research and design tool in human-computer interaction. Human-Computer Interaction, 7, 91-139.

May, K. (2012). A model of error in 2D pointing tasks. Undergraduate Honors Thesis, Rice University, Houston, TX.

Wobbrock, J. O., Shinohara, K., & Jansen, A. (2011). Modeling and predicting pointing errors in two dimensions. In Human Factors in Computing Systems: Proceedings of CHI 2011 (pp. 1653-1656). New York, NY: ACM.
Virtual Multitouch Display Device
The virtual multitouch display device included with ACT-Touch is a basic ACT-R device built to present a virtual visual environment to an ACT-R model and receive ACT-R’s touch gesture motor movements. It includes some code that is specific to the demonstration model and one class slot that is important for ACT-Touch functionality. The index-z slot of the multitouch-display class is used for the ACT-Touch manual request extensions as a measure of distance from the model’s index finger to the multitouch display surface, in pixels at 72 ppi. Whatever device is used with ACT-Touch, index-z must be supplied as a slot of the device’s object class.
This section provides only superficial coverage regarding the mechanics of the device. See the ACT- R documentation for details regarding how to build your own device for ACT-R. This section includes some details about ACT-Touch’s device interface methods as well as some discussion about how to modify ACT-Touch’s included multitouch device. We assume familiarity with Common Lisp’s object system, so object-oriented programming in Lisp will not be discussed here.
The virtual multitouch display is a slightly more sophisticated version of the list device presented in extending-actr.pdf distributed with ACT-R. Like the list device, it uses a paired list of visual-location and visual-object chunks as ACT-R’s visual world. Unlike the list device, it uses objects of a virtual-widget class to encapsulate those chunks with data used by the experiment code to control the state of the simulated task environment and perform some action according to the appropriate device handler methods. Virtual-multitouch-device.lisp also contains all the device handler methods for what the experiment code should do when the model outputs each of ACT-Touch’s manual request extension types. Familiarity with CLOS and ACT-R device programming are helpful for adapting or replacing the virtual multitouch display device. Both topics are covered in-depth elsewhere, namely in ANSI Common Lisp by Paul Graham and the ACT-R Reference Manual, respectively. However, this section provides basic information about ACT-Touch’s virtual multitouch display device that will help you to modify it.
Device Classes
multitouch-display ()
  visual-world
  index-z
  widgets
This class, when initialized, becomes ACT-R’s device.
virtual-widget ()
  nick-name
  vwindow
  vis-loc
  vis-obj
  action-type
The virtual-widget class is a parent class that acts as a container for the visual-location and visual-object chunks that comprise a model’s visual environment. It also receives motor actions from the model so that the device can interact with the model. It may be subclassed for each style of movement (e.g., tap, swipe). The nick-name slot takes a keyword with which the experiment software may refer to that widget (e.g., :tap1). Vwindow refers to the virtual window within which the virtual-widget appears (i.e., the virtual-multitouch-device). Vis-loc and vis-obj are the visual chunks that ACT-R will access when it constructs its visicon and the visual-location’s associated visual-object. Action-type specifies the movement style the virtual-widget is to receive (e.g., tap).
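As a sketch of the subclassing pattern, a widget for one movement style might be defined as follows. The class name my-tap-widget and the generic function widget-tapped are hypothetical names invented here; check virtual-multitouch-device.lisp for the actual widget subclasses and handler methods.

```lisp
;; Sketch only: subclassing VIRTUAL-WIDGET for the tap movement style.
;; MY-TAP-WIDGET and WIDGET-TAPPED are hypothetical names.
(defclass my-tap-widget (virtual-widget) ())

(defmethod widget-tapped ((widget my-tap-widget))
  ;; Perform some task-relevant action when the model taps the widget;
  ;; NICK-NAME is assumed to be a reader on VIRTUAL-WIDGET's slot.
  (format t "~&Widget ~s received a tap.~%" (nick-name widget)))
```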
Device Methods & Functions
initialize-instance :after ((wind multitouch-display) &key)
This method sets up the multitouch-display device with its visual-location and visual-object chunks, which it uses to construct the widgets which will constitute the model’s simulated world. The first subexpression within the let expression defines the visual-location chunks. The slots, such as screen-x and screen-y, determine where the widgets are placed within the ACT-R device. Change the values of the visual-location slots here to change where widgets are located and what they look like. The second subexpression defines some textual labels to appear within the device. The third subexpression defines the visual-object chunks that ACT-R’s device methods will pair with each of the visual-location chunks defined for the device. The device methods (described below) determine how ACT-R retrieves the appropriate visual-object chunk when it moves visual attention to a visual-location. In this third subexpression, multitouch-display’s initialize-instance after method defines these visual-object chunks.
The fourth subexpression defines the widgets that comprise the ACT-R device. There are different types of widgets, each of which correspond to the various gestures of the multitouch command vocabulary (e.g., tap, pinch). Change the first argument to change what type of widget, and thus which command, to use. This should be a symbol naming a virtual-widget subclass, such as ‘tap-widget. Each widget takes the visual-location and visual-object chunks defined in the first and third subexpressions and assigns them to widgets, interface objects to receive model input and perform some task-relevant function. These widgets take the visual-location and visual-object chunks according to the position indicated by the second argument of the nth expressions—e.g., (nth 0 vis-locs) refers to the first visual- location chunk. Nick-names assigned here are to correspond to states of the multitouch-display, steps of the model’s task (e.g., :TAP1). This concludes the let expression.
Next, multitouch-display’s initialize-instance after method sets the multitouch-display’s visual-world slot to be a paired list of the visual-location and visual-object chunks and its widgets slot to be the widgets just defined. After that the method calls some model setup functions that should be familiar to any ACT-R modeler, such as install-device and proc-display.
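The paired-list idea can be sketched in isolation like this; the chunk names are illustrative only, and the actual method uses the chunks defined earlier in the let expression:

```lisp
;; Simplified sketch: pairing visual-location chunk names with their
;; visual-object chunk names. Names are illustrative only.
(let ((vis-locs '(loc-1 loc-2 loc-3))
      (vis-objs '(obj-1 obj-2 obj-3)))
  (mapcar #'list vis-locs vis-objs))
;; => ((LOC-1 OBJ-1) (LOC-2 OBJ-2) (LOC-3 OBJ-3))
```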