Gaze Contingent Paradigms with PsychoPy and Tobii Pro SDK

At some point in research, simply showing a fixed series of stimuli is not enough; researchers may want to dynamically control which stimulus is shown next based on participants’ behavior or responses.

One way to do that is to prepare an experimental procedure that leaves enough time for the experimenter to choose the next stimulus or condition. The experimenter then observes participants and decides, trial by trial, which condition to present.

Another way is to identify participants’ responses automatically and program the experiment to take that information into account when choosing the next condition. A common example of this approach is the gaze-contingent paradigm.

In gaze-contingent paradigms, participants’ gaze direction is monitored live and conditions are selected based on where they are looking. For example, Akechi and colleagues (2011) used such a paradigm to ensure that a specific type of cuing was presented. Using an eye tracker, they programmed the experiment to detect which object a participant was looking at and then decide whether to cue that same object (follow-in condition) or the opposite object (discrepant condition).
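The decision rule in such a design can be tiny. Here is a minimal sketch of follow-in versus discrepant cue selection; the object labels and function name are illustrative, not taken from the study:

```python
def choose_cue_target(looked_at, condition, objects=("left", "right")):
    """Return which object to cue, given where the participant is looking."""
    if condition == "follow-in":
        # cue the very object the participant is attending to
        return looked_at
    # discrepant condition: cue the other object
    return objects[0] if looked_at == objects[1] else objects[1]
```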

Understandably, employing such a paradigm requires some programming knowledge, which is why I decided to provide an example of a gaze-contingent paradigm with the least amount of programming possible.

Starting from PsychoPy’s Builder, to minimize the programming needed for stimulus presentation, I created a two-step trial with a “selection” routine and a “result” routine (see Figure 1 for screenshots). I then added a code component to the selection routine with the parts needed to connect to a Tobii Pro eye tracker and receive gaze data in real time.


Figure 1: Screenshots of PsychoPy’s builder for this demo with the “selection” routine (left pane), the “result” routine (top-right) and the flow for the whole experiment (bottom-right).


Begin Experiment

The first part of the code runs once at the beginning of the experiment. It sets up a few variables and, if the eye-tracker option is set to True, loads the necessary libraries.

# numpy and os are normally imported by PsychoPy's Builder already;
# they are listed here so the snippet is self-contained
import numpy as np
import os

# A variable to choose whether to use eye tracking or not.
# If it is left as False, mouse position will be used instead.
expInfo["Eye Tracker"] = False

# Defining the variable that will be used when controlling for gaze or mouse position.
# Here we give it an initial value from mouse position
dotPosition = mouse.getPos()

# Here we define a function to convert Tobii's normalized coordinates to pixels
# and change origin to center of screen, like PsychoPy does
def norm2pix(point, win):
    if not np.isnan(point[0]):
        x = point[0] * win.size[0]
        y = point[1] * win.size[1]
        xAdj = x - (win.size[0] / 2)
        yAdj = (y * -1) + (win.size[1] / 2)
        return (xAdj, yAdj)
    else:
        return (np.nan, np.nan)

# If eye tracker is to be used, we need to prepare a few things
if expInfo['Eye Tracker']:
    # import tobii pro module
    import tobii_research as tr
    # turn off mouse visibility
    win.mouseVisible = False
    # find eye trackers
    found_eyetrackers = []
    while len(found_eyetrackers) == 0:
        found_eyetrackers = tr.find_all_eyetrackers()
    # select first eye tracker
    my_eyetracker = found_eyetrackers[0]
    # create list in which we append gaze data
    gaze_list = []
    # create callback to get gaze data
    def gaze_data_callback(gaze_data):
        # append timestamp and gazePointLeft at callback
        gaze_list.append([gaze_data['system_time_stamp'],gaze_data['left_gaze_point_on_display_area']]) # left eye only
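To check the coordinate conversion, here is a quick standalone sanity test of the same math. It uses a stand-in object for PsychoPy’s window (only its `size` attribute matters here), so it runs without PsychoPy installed:

```python
import numpy as np
from types import SimpleNamespace

def norm2pix(point, win):
    # convert Tobii's normalized coordinates (origin top-left, y pointing down)
    # to PsychoPy pixels (origin at screen center, y pointing up)
    if not np.isnan(point[0]):
        x = point[0] * win.size[0]
        y = point[1] * win.size[1]
        return (x - win.size[0] / 2, -y + win.size[1] / 2)
    return (np.nan, np.nan)

win = SimpleNamespace(size=(1920, 1080))   # stand-in for a PsychoPy Window
print(norm2pix((0.5, 0.5), win))   # screen center -> (0.0, 0.0)
print(norm2pix((0.0, 0.0), win))   # top-left corner -> (-960.0, 540.0)
```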


Begin Routine

In this part we “subscribe” to the eye tracker’s live stream and start receiving data in real time.

# If we are using eye tracking
if expInfo['Eye Tracker']:
    # start getting live data from the eye tracker
    my_eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, gaze_data_callback, as_dictionary=True)
# create a list to collect position data when it is within one of the stimuli.
# this will be reset every time we look outside a stimulus
gazeCount = []


Each Frame

Whatever code we write in this section runs on every screen refresh. Hence, it effectively samples gaze at our monitor’s refresh rate (typically 60 Hz).

Here we get the coordinates, convert them to pixels and check whether they fall within one of our stimuli.

# If we are using eye tracking
if expInfo['Eye Tracker']:
    # if we have some gaze returned from the eye tracker
    if len(gaze_list) > 0:
        # get the last sample sent from eye tracker
        gpos = gaze_list[-1]
        # use our custom function to convert gaze coordinates to pixels and reset origin to center.
        dotPosition = norm2pix(gpos[1], win)
    # if we don't have gaze data
    else:
        # set position to nan
        dotPosition = (np.nan, np.nan)
# if we are not using eye tracking
else:
    # get mouse position
    dotPosition = mouse.getPos()

# if our position variable is not nan
if not np.isnan(dotPosition[0]):
    # check if it is within one of the images
    # (this assumes each result image in "media" is named after its component,
    #  e.g. media/CARD1.png; adjust imageName to match your own file names)
    if CARD1.contains(dotPosition):
        gazeCount.append(dotPosition)
        imageName = CARD1.name
    elif CARD2.contains(dotPosition):
        gazeCount.append(dotPosition)
        imageName = CARD2.name
    elif CARD3.contains(dotPosition):
        gazeCount.append(dotPosition)
        imageName = CARD3.name
    # if gaze is out of stimuli, reset collection list
    else:
        gazeCount = []

# if we collect 60 samples or more in the same stimulus (~1 second at 60 Hz)
if len(gazeCount) >= 60:
    # set result image file name to the selected image
    selectedImage = "media" + os.sep + imageName + '.png'
    # stop selection routine and show result
    continueRoutine = False

# check for quit (the Esc key)
if event.getKeys(keyList=["escape"]):
    # if using eye tracking
    if expInfo['Eye Tracker']:
        # stop getting data from eye tracker
        my_eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)
    # quit the experiment
    core.quit()
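The per-frame logic above amounts to a dwell-time trigger: accumulate consecutive samples that land on the same stimulus, reset when gaze leaves it, and select once 60 samples (about one second at 60 Hz) have been collected. Here is a stripped-down, hardware-free sketch of that rule; the `contains` predicate and stimulus names are stand-ins, and unlike the trial code this sketch also resets the counter when gaze jumps directly from one stimulus to another:

```python
DWELL_FRAMES = 60  # ~1 s at a 60 Hz refresh rate

def dwell_select(samples, contains):
    """Simulate the per-frame loop. samples is a list of gaze positions;
    contains(pos) returns the name of the stimulus at pos, or None."""
    gaze_count = []
    current = None
    for pos in samples:
        hit = contains(pos)
        if hit is None or hit != current:
            gaze_count = []          # gaze left the stimulus: reset counter
            current = hit
        if hit is not None:
            gaze_count.append(pos)
        if len(gaze_count) >= DWELL_FRAMES:
            return current           # dwell threshold reached: select
    return None                      # no selection happened

# e.g. everything in the left half of the screen counts as "CARD1"
hits_left = lambda pos: "CARD1" if pos[0] < 0 else None
print(dwell_select([(-100, 0)] * 60, hits_left))   # -> CARD1
```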


End Routine


This part of the code runs once at the end of each trial. Here we save the selected image and stop the data stream from the eye tracker.

# add the selected image to exported data
trials.addData("Selected Image", selectedImage)
# stop collecting data from eye tracker
if expInfo['Eye Tracker']:
    my_eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)
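The subscribe/unsubscribe pattern is easy to exercise without hardware: the callback simply appends each sample to a shared list, so a mock tracker that invokes it directly reproduces the data flow. The `MockTracker` class below is, of course, not part of the Tobii SDK; it is only a stand-in for testing:

```python
gaze_list = []

def gaze_data_callback(gaze_data):
    # same shape as the real callback: keep timestamp and left-eye gaze point
    gaze_list.append([gaze_data['system_time_stamp'],
                      gaze_data['left_gaze_point_on_display_area']])

class MockTracker:
    """Stand-in for a tobii_research eye tracker (not part of the SDK)."""
    def __init__(self):
        self._cb = None
    def subscribe_to(self, stream, callback, as_dictionary=True):
        self._cb = callback
    def unsubscribe_from(self, stream, callback):
        self._cb = None
    def emit(self, sample):          # test helper: push one fake sample
        if self._cb:
            self._cb(sample)

tracker = MockTracker()
tracker.subscribe_to("gaze", gaze_data_callback)
tracker.emit({'system_time_stamp': 1000,
              'left_gaze_point_on_display_area': (0.4, 0.6)})
tracker.unsubscribe_from("gaze", gaze_data_callback)
tracker.emit({'system_time_stamp': 2000,
              'left_gaze_point_on_display_area': (0.5, 0.5)})
print(len(gaze_list))   # 1: samples emitted after unsubscribing are dropped
```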


I have saved this demo in both “.py” and “.psyexp” formats; both can be found on GitHub using this link. Feel free to try them and adapt them to your needs. If you also need to save gaze coordinates for later analysis, I have previously written an article showing one way to achieve that, and it should not be hard to integrate both demos into a single experiment. If you have any questions or comments, please feel free to contact me using this form or on social media below.



Akechi, H., Senju, A., Kikuchi, Y., Tojo, Y., Osanai, H., & Hasegawa, T. (2011). Do children with ASD use referential gaze to learn the name of an object? An eye-tracking study. Research in Autism Spectrum Disorders, 5(3), 1230–1242. doi: 10.1016/j.rasd.2011.01.013