Tim Taylor, 4 June 2005
This document describes the system-level design of the JAST Joint Action software. Lack of time meant that it was not possible to produce a more thorough reference, but this document should at least provide a basic introduction to, and overview of, the software design and implementation.
The JAST software is designed to run the Joint Action experiments on two PCs concurrently. To get a feel for how the software works from the user's point of view, read the User Documentation before continuing.
The basic idea is that a shared 2D graphical world is presented to the experimental subjects on the two PCs, and they can both interact with the world, moving objects, joining the objects together, breaking them, etc. The details of the experiment configuration are passed into the software at start-up from a configuration file (an XML file of DocType JastExpt, which also refers to one or more other XML files of DocType StimulusSet); see the User Documentation for details. Time-stamped data from the experiment is written to a plain-text output file during the experiment.
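To give a flavour of the configuration format, the following is a purely illustrative sketch of a JastExpt file. The element and attribute names here are invented for illustration; the real names and structure are defined in JastExpt.dtd and StimulusSet.dtd, and in the User Documentation.

```xml
<!-- Hypothetical illustration only: the real element and attribute
     names are those defined in JastExpt.dtd. -->
<!DOCTYPE JastExpt SYSTEM "JastExpt.dtd">
<JastExpt>
  <ServerPC name="lab-pc-1"/>
  <ClientPC name="lab-pc-2" simulate="false"/>
  <Trial stimulusSet="stimset1.xml"/>
  <Trial stimulusSet="stimset2.xml"/>
</JastExpt>
```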
The intention is that the software should eventually be adapted for use with the SR Research EyeLink II eye tracker system, in order to track where each subject is looking on their screens during the experiment. Although the software does not currently provide EyeLink II support, it was designed with this requirement in mind. The final section of this document provides a brief discussion of what will be required to add this.
The JAST software is implemented in C++, making use of the libraries described in the following section. Because the EyeLink II software is best supported under Windows, the JAST software was also developed for the Windows (XP) environment. It was developed using Microsoft Visual C++ .NET 2003 edition, and a Visual C++ "Solution" (Project) file, jast.sln, is included in the distribution. Having said that, the only code that is Windows-specific is for bringing up file-selection dialog boxes at start-up (which is mostly confined to the JastExpt::requestFilename method).
The following libraries are used by the software. The source code for these libraries (apart from SDL) is provided with the JAST software source code distribution.
SDL is a multi-platform graphics API that makes use of either OpenGL or DirectX, whichever is available on the platform on which it is running. It is used as the basis for the 2D graphics in the JAST software. One reason for choosing it was that the EyeLink II API comes in two flavours, one of which is built upon SDL (and the other on Microsoft's native GDI graphics API). SDL was chosen over GDI for reasons of future portability (although it is doubtful that we will ever actually need to port the JAST software to a non-Windows environment). To run the JAST software on a PC, or to develop and compile the software, SDL must first be installed.
This is an extra library that extends the basic SDL interface to provide functions for drawing polygons, circles, lines, etc.
SGE is another extra library for SDL, providing 2D collision detection algorithms (using collision maps), rotation and scaling of graphics, and text support. Note that rather than using SGE as a separately built library, the SGE source files required by the JAST software have been included directly in the jast.sln Visual C++ project. I was experiencing problems getting the library to link successfully with the linkage specification defined by DECLSPEC in the file sge_internal.h. I therefore replaced this with a (blank) declaration of JAST_DECLSPEC, and replaced DECLSPEC with JAST_DECLSPEC in all the SGE header files used in the project. This is a quick and dirty solution, but it worked.
The Xerces C library provides extensive support for dealing with XML documents. Actually it is much more powerful than is necessary for JAST - using it is overkill really! As well as providing routines for parsing XML documents, there are also validation routines to validate a document against its DTD type specification. For reading XML documents, Xerces provides support for DOM and SAX parsing. For JAST, DOM parsing (whereby a data structure is created in the program's memory that represents the XML data) would have made sense. However, for no better reason than it was easier to get started with, I started using the SAX interface, and once started never actually got around to switching to DOM. This has led to a rather big and ugly file, JastSAX2Handler.cpp, which does all of the parsing in a simple-minded, repetitive way. If I were to program the thing again, I would use DOM parsing instead, but there you go...
The Simple Sockets Library (which is in the subdirectory COSMIC of the jast_src directory) provides a very simple library for socket programming (i.e. TCP/IP communications between networked computers). It relies on a program called PortMaster (also provided in the library distribution) being run in the background, which deals with incoming communications from other machines. See the library's documentation for more details. When the JAST program starts, it checks whether PortMaster is running on the local machine, and, if not, it starts it (in a backgrounded command prompt); this happens at the start of the JastExpt::run method.
The software is built upon a client-server architecture for sharing data between the two PCs upon which an experiment runs. The JastExpt xml configuration file specifies the name of the ServerPC and the ClientPC. When the software begins, it queries the name of the machine on which it is running. If this name matches the ServerPC name in the configuration file, the software behaves as a server, otherwise it behaves as a client. (There is one exception to this: if the configuration file specifies the option 'simulate="true"' for the ClientPC, then the software runs in "stand alone" mode and does not attempt to connect to another machine. This option is provided only for the purpose of testing during development.)
When running in normal client-server mode, the PC designated as the server keeps the "master record" of the current state of the world (i.e. the positions and orientations of all parts on the screen, the positions of the cursors etc.). When the experimental subject on the server PC changes the state of the world (e.g. moving the cursor, dragging or rotating a part, creating a part, joining parts, breaking a part, etc.), the change is made on the server PC's master record of the world state, and the change is communicated to the client PC by sending a message via the socket (JastExpt::m_pClientSocket). All messages for client-server communications are defined in the file globals.h; look for the #defines of JSM_* messages, where JSM stands for Jast Server Message. The software on both PCs regularly checks for incoming socket messages in the method JastExpt::processIncomingJSMMessages.
When the experimental subject on the client PC attempts to change the state of the world, the corresponding change does not happen immediately. Instead, the client PC sends a JSM message to the server PC (via JastExpt::m_pAcceptSocket), to request that the world state is updated. The world state on the server PC is then updated, and the server PC then sends another JSM message back to the client PC to inform it to make the change on its local copy of the world state. In this way, the server PC always contains the definitive version of the world state, and we avoid possible conflicts in updates to the world state caused by the two subjects acting simultaneously.
The main function is contained in the file jast_main.cpp. This file is based upon example programs supplied by SR Research for their EyeLink II system. Basically, in its current form, all that happens here is that an instance of the JastExpt class is created, and its run method is called. JastExpt::run controls the whole experiment, from reading the configuration files, to running the trials, and recording the data. This is the place to start when inspecting the code in order to understand how it all fits together.
The following list introduces the main classes used in the software, giving a brief description of each one and of how they work together.
This class contains the top-level state of the experiment and provides methods for running the trials, reading the configuration files, writing the output files, communicating between the two PCs, etc. The JastExpt::run method is the entry point into the whole system, controlling system start-up and reading configuration files, running the specified sequence of trials, and running the main message loop within each trial.
This class interfaces with the Xerces XML library in order to read the configuration files. It is used in the JastExpt::parseJastExptConfigFile and JastExpt::loadStimulusSets methods. The class also calls Xerces methods to validate the XML documents: JastExpt files are validated against the schema defined in the file JastExpt.dtd, and StimulusSet files are validated against StimulusSet.dtd.
Contains all data associated with the graphical display of the trials, including the parts, the New Part buttons, the Target Configuration model, the displaying of messages, the clock, the cursors, etc. It is closely linked with the JastExpt class; indeed, some parts of the Screen class really belong in JastExpt, and vice versa, but lack of time prevented a more careful redesign of this relationship. The Screen::initialiseSDL method is static, and serves to initialise the SDL library once at the beginning of an experiment (that is, the beginning of a set of trials). The Screen::initialise method is called each time a new Stimulus Set is used for the next trial, and Screen::initialiseExpt is called at the start of each trial (whether the StimulusSet has changed or not). The Screen class has a container Screen::m_TemplateParts, which contains TemplatePart objects, one for each different part specified in the Stimulus Set configuration file. Whenever a new part needs to be made on screen, it is generated from one of these templates. The main screen refresh method to update the screen at each pass of the main trial loop is provided by Screen::drawSceneToBuffer. Checking for user input (e.g. mouse moves, button clicks, key presses) at each pass of the main trial loop happens in Screen::messageLoop.
Before a trial commences, the StimulusSet configuration file is read in, and an associated StimulusSet object is created. This contains information about the element parts used in the trial, their initial positions on the screen, and their desired target configuration. The container StimulusSet::m_UniquePartSet contains a Polygon object for each different type of part specified in the configuration file.
A TemplatePart contains a template of the graphic representation of each Polygon defined in the StimulusSet. When a new TemplatePart is created from a Polygon, the polygon's mass and the position of its geometric centre are calculated and stored in the TemplatePart object.
Each movable part displayed on screen is an instance of the MovablePart class. At the beginning of a trial, a MovablePart object is created for each part specified in the StimulusSet, utilising a call to the TemplatePart::copySurface method of the corresponding TemplatePart. The MovablePart class represents any single movable part, which may be a single elementary part, or may be an aggregation of two or more joined elementary parts. Details of the elementary parts comprising the MovablePart are stored in the container MovablePart::m_ElementParts.
MovablePart deals with the manipulations involved in part rotation. Rotation happens in two stages: rotateTentative and rotateFix. The former tries to rotate the part by the specified angle, but remembers the previous orientation for the time being. This is called from the Screen class, which then performs collision detection checks via Screen::collisionWithPart and Screen::collisionWithBoundary. MovablePart::rotateFix is then called with the result of the collision check; if a collision occurred, the part remains in its original orientation, otherwise it is changed to the new orientation.
MovablePart also has a method (MovablePart::join) for joining the current part to another MovablePart. In addition, it contains a collision map for the part (used in the collision detection methods of the Screen class), and methods for drawing the geometric centre (CoM) graphic on the part, etc.
An object of the Polygon class stores the details (e.g. vertices) of a polygon defined in the Stimulus Set file, but does not have any methods for the graphical representation of the polygon.
This class deals with the graphical representation of the mouse cursor position (of each subject) on screen. The Cursor class could also be used for displaying the position of the subjects' eye gaze positions when EyeLink II support is added.
For analysis of the experiments, a screen-capture movie needs to be recorded of each experiment, along with synchronised audio of both subjects' voices. Various hardware and software video/audio capture methods were considered. Surprisingly few hardware options were suitable: all of the affordable choices involved converting the video card output to TV format (NTSC or PAL), with an unacceptable loss of video resolution and quality.
Various software screen capture utilities were considered. Ideally, we would use something that had a programming interface so that AV recording could be controlled by the JAST software itself, with no intervention required by the experimenter. However, "off-the-shelf" products that include a programming interface were too expensive. Another option would have been to write the capture code myself, using one of Microsoft's various SDKs/APIs for multimedia support (DirectX, Windows Media Encoder, etc). Windows Media Encoder might have been particularly suitable for this, allowing a screen capture program to be written in only a handful of lines of code. However, time constraints prevented this from happening.
Instead, the JAST project partners in Nijmegen are using TechSmith's Camtasia Studio software (http://www.techsmith.com/products/studio/default.asp ) as a good compromise. It is fairly cheap (US$299), and although it requires the experimenter to manually start recording before starting the JAST software and stop recording afterwards, most tasks can be automated to a high degree. The AVI files it produces are of high quality without the software being a noticeable drain on the computer's resources during the experiment. I suggest the Edinburgh group uses Camtasia too.
Graphical representation of the eye traces. The EyeLink II API provides high-level methods for retrieving a subject's current eye gaze position, expressed in screen coordinates. Having got this information, the gaze can be very easily displayed on screen by using the existing Cursor class; just add two more Cursor objects to the Screen class, one for the local subject's eye gaze position, and one for the remote subject's. Add code to update these objects according to the new eye gaze positions at each pass through the main trial loop (in the JastExpt::run method), and code to refresh the screen with the new positions (in the Screen::drawSceneToBuffer method).
Data logging. At present, the data from the trials is written to a .JDF data file in plain text (ASCII) format. The EyeLink II system logs eye tracking data to its own .EDF data files in a compressed binary format. However, this is still essentially time-stamped data for eye gaze positions and significant events (saccades etc.). The EyeLink API also allows the programmer to add arbitrary extra data to the .EDF file. Therefore, when the system is extended for use with the eye trackers, the current data logging methods should be replaced with methods that will add the corresponding data to the .EDF file rather than writing it to a plain text .JDF file. All data logging is performed by the JastExpt class, and 99% of it is confined to the JastExpt::recordXXX methods (a search for the text string "JDF" will show you any places in JastExpt.cpp where data is being written to the file). Replacing this code with calls to the EyeLink data logging methods should therefore be straightforward.