
PsychoPy2: Experiments in behavior made easy

Jonathan Peirce (1) · Jeremy R. Gray (2) · Sol Simpson (3) · Michael MacAskill (4,5) · Richard Höchenberger (6) · Hiroyuki Sogo (7) · Erik Kastman (8) · Jonas Kristoffer Lindeløv (9)

© The Author(s) 2019

Correspondence: Jonathan Peirce, jonathan.peirce@nottingham.ac.uk

(1) School of Psychology, University of Nottingham, Nottingham, UK
(2) Knack, Inc., Okemos, MI, USA
(3) iSolver Software Solutions, Osgoode, Ontario, Canada
(4) Department of Medicine, University of Otago, Christchurch, New Zealand
(5) New Zealand Brain Research Institute, Christchurch, New Zealand
(6) Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Center Jülich, Jülich, Germany
(7) Faculty of Law and Letters, Ehime University, Matsuyama, Ehime, Japan
(8) Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA, USA
(9) CCN, Department of Psychology and Communication, Aalborg University, Aalborg, Denmark

Behavior Research Methods
https://doi.org/10.3758/s13428-018-01193-y

Abstract

PsychoPy is an application for the creation of experiments in behavioral science (psychology, neuroscience, linguistics, etc.) with precise spatial control and timing of stimuli. It now provides a choice of interface; users can write scripts in Python if they choose, while those who prefer to construct experiments graphically can use the new Builder interface. Here we describe the features that have been added over the last 10 years of its development. The most notable addition has been that Builder interface, allowing users to create studies with minimal or no programming, while also allowing the insertion of Python code for maximal flexibility. We also present some of the other new features, including further stimulus options, asynchronous time-stamped hardware polling, and better support for open science and reproducibility. Tens of thousands of users now launch PsychoPy every month, and more than 90 people have contributed to the code. We discuss the current state of the project, as well as plans for the future.

Keywords Psychology · Software · Experiment · Open-source · Open science · Reaction time · Timing

Computers are an incredibly useful, almost ubiquitous, feature of the modern behavioral research laboratory, freeing many scientists from the world of tachistoscopes and electrical engineering. Scientists have a large range of choices available, in terms of hardware (e.g., mouse vs. touchscreen) and operating system (Mac, Windows, Linux, or mobile or online platforms), and they no longer need to have a degree in computer science to make their experiment run with frame-by-frame control of the monitor.

A wide range of software options are also available for running experiments and collecting data, catering for various needs. There are commercial products, such as E-Prime (Psychology Software Tools Inc., Sharpsburg, PA, USA), Presentation (Neurobehavioral Systems Inc., Berkeley, California, USA), Experiment Builder (SR Research Ltd., Canada), and Psykinematix (KyberVision, Japan). A relatively new possibility, however, has been the option to use free open-source products, provided directly by academics writing tools for their own labs and then making them freely available to others.

The most widely used example, to date, began as a set of C routines, called VideoToolbox, written by Denis Pelli, initially to carry out studies in vision science (Pelli, 1997). David Brainard wrote MATLAB wrappers around the VideoToolbox library, with some additional pure MATLAB code, and called the package Psychophysics Toolbox (Brainard, 1997). This has now gone through several iterations and substantial rewriting, especially by Mario Kleiner in the most recent version, Psychtoolbox 3 (Kleiner, Brainard, & Pelli, 2007). Psychtoolbox shows how successful these projects can be. After 20 years, it is still in active development and has been used extensively in research. It also shows how popular the open-source movement has become; in 2004, the Psychophysics Toolbox article (Brainard, 1997) received 123 citations (according to Google Scholar), whereas in 2018 it received 1,570.

Open-source packages have several attractive features beyond being free. Having access to all the source code means that a scientist can examine what is happening "under the hood" and can extend or adapt the code themselves if the package does not already have the features or performance they need. Most open-source packages are written in high-level interpreted languages, typically MATLAB or Python. This has made it relatively easy to provide support for all platforms, so the scientist can develop and run the same study on any machine. Most important to many people, however, is the principle of openness and the sense that this is good practice for replicable research.

In terms of the choice of available scripting languages, while there are again many options (e.g., R, MATLAB, Mathematica, or Java), Python is one of the most popular languages in the world at the time of writing. The PopularitY of Programming Language project (PYPL) has analyzed Google searches for programming tutorials and found that over 25% of searches are for Python tutorials, as compared with 2.5% for MATLAB and 4% for R (see http://pypl.github.io/PYPL.html for up-to-date statistics). Python is so useful as a scripting language that macOS and many flavors of Linux provide it as standard in their operating systems. That popularity means that the language receives a great deal of support from hardware manufacturers and programmers from all spheres.

The PsychoPy project began in 2002, as a Python library to conduct visual neuroscience experiments in Jonathan Peirce's lab. It developed a small following of Python enthusiasts in the field, and gradually it grew to provide further stimuli and features (Peirce, 2007, 2009). At that point, PsychoPy provided a useful set of stimuli and methods and a basic editor with which to write code, but it required users to program their experiments, which made it inaccessible to nonprogrammers, including most undergraduate psychology students.

The question was how to enable nonprogrammers to use PsychoPy. Ideally, the package should be accessible enough for typical undergraduates in psychology (who are often quite averse to programming), while also offering the flexibility required for professional researchers to build a range of precise experiments.

This led to the addition of a graphical experiment creation interface called the Builder, the defining feature in the development of PsychoPy2. In addition to the Builder, which freed users from the need to be computer programmers, a large number of improvements and new features have been added. Additionally, PsychoPy has adopted a more robust development and testing workflow and has benefited from the growth of a supportive online community. With the bulk of that phase of development now complete (the Builder interface has become a relatively stable tool and has shown itself capable of running a wide range of studies), this article provides a brief summary of the features and changes that have come about over the last 10 years of development of PsychoPy.

It is beyond the scope of this article to teach readers how to use the software. For that there are numerous didactic resources available, such as YouTube videos (e.g., https://www.youtube.com/playlist?list=PLFB5A1BE51964D587), the demo menus that are built into the application, the extensive online documentation at http://www.psychopy.org, and even a textbook (Peirce & MacAskill, 2018).

Other packages

At the time that the core PsychoPy library was written, the other comparable packages were Vision Egg (Straw, 2008) and PyEPL (Geller, Schlefer, Sederberg, Jacobs, & Kahana, 2007), both of which subsequently ceased development. Since 2008, numerous additional libraries have been created in Python, such as Expyriment (Krause & Lindemann, 2014), PyGaze (Dalmaijer, Mathôt, & Van der Stigchel, 2014), mPsy (https://wisions.github.io/mPsy/), and SMILE (http://smile-docs.readthedocs.io/). In comparison to these, PsychoPy offers a broader list of stimulus options, experimental designs, response options (such as rating scales), and hardware support, as well as a larger community of active developers.

Most critically, however, the other libraries do not offer a graphical interface to create studies, which limits their suitability for undergraduate teaching. Another Python-based application, OpenSesame (Mathôt, Schreij, & Theeuwes, 2012), was, however, developed around the same time as the PsychoPy Builder interface. PsychoPy and OpenSesame remain, to our knowledge, the most versatile open-source experiment-building packages currently available, and we compare them in the following section. There was also an open-source Macintosh application called PsyScopeX (http://psy.ck.sissa.it/), but it has not been updated since 2015.

Builder

The idea of the Builder interface was to allow the user to create a graphical representation of an experiment. From this, the software would then generate a Python script to actually run the experiment. We wanted something that would be cross-platform, open and free, and that would support Python programming when experiments needed extending. We also wanted to provide stimuli that were dynamic, with stimulus attributes that could be updated on each screen refresh as specified directly from the graphical interface, which was not possible (or certainly was not easy) using other graphical interfaces. An image of the Builder interface can be seen in Fig. 1.

How does Builder work?

In PsychoPy Builder, an experiment is described by a set of Routines, which contain a set of one or more Components, such as stimuli and response options. The Components in the Routines can be thought of as a series of tracks in a video- or music-editing suite; they can be controlled independently in time (that is, in their onsets and offsets) but also in terms of their properties. The last part of the experiment description is the Flow: a flow diagram that controls how the Routines relate to each other. It contains the Routines themselves, as well as Loops (which repeat the Routines they encompass). The Flow has no "knowledge" of time per se; it simply runs each Routine immediately after the previous one has ended. The experimental timing is controlled by specifying the times of onset and offset of the stimuli and of response-gathering events within the Routines themselves.

This experiment description is internally stored in terms of standard Python objects: a Python list of Routines, each of which is a list of Components, which are themselves essentially a Python dictionary of parameters, and, finally, a list of items on the Flow. Builder saves the experiment as standard XML-formatted text files using the open-standard psyexp format (read more at http://www.psychopy.org/psyexp.html). These files need not be specific to PsychoPy or Python; any system that can interpret a simple XML file could theoretically receive a Builder-generated experiment file and use that description to conduct the study, if it has a similar set of stimulus features.
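
As a rough illustration of that layered structure (a schematic sketch only; the names below are invented for illustration and are not PsychoPy's actual classes or psyexp fields), one might picture the stored description as nested Python containers:

    # Schematic sketch: Routines are lists of Components (dictionaries
    # of parameters), plus an ordered Flow. Names are illustrative only.
    experiment = {
        'routines': {
            'trial': [  # one Routine: a list of Components
                {'type': 'Text', 'name': 'word', 'start': 0.5, 'stop': 2.0},
                {'type': 'Keyboard', 'name': 'resp', 'start': 0.5, 'stop': 2.0},
            ],
        },
        # the Flow: Routines and Loops in the order they will run
        'flow': ['instructions', ('loop', 'trials', ['trial'])],
    }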

The first job of the Builder interface is to provide a graphical means to create and represent these psyexp experiment descriptions. The second job is to be able to generate, from those descriptions, working code to actually run the experiments. This step is made relatively easy by Python's powerful text-handling structures and object-oriented syntax. Users can compile and inspect the resulting script at the click of a button.

In general, that output script is a Python/PsychoPy script, but the interface could output scripts for alternative targets as well. Since the Builder is merely generating text, representing code based on the Components and Flow, it is only a matter of developer resources to expand this capability in order to generate experiments written in languages other than Python, for example, to generate a Psychophysics Toolbox script (Brainard, 1997; Kleiner et al., 2007; Pelli, 1997) in the MATLAB language. Indeed, we are now working on an HTML/JavaScript output so that Builder experiments can also run in a web browser (which is not possible with Python code).

The generated Python code is well formatted and heavily commented, to allow users to learn more about Python programming, and the PsychoPy package in particular, in a top-down fashion. This allows the user to adapt that output script and then run the adapted version themselves, although this is a one-way road: scripts cannot be converted back into the graphical representation.

Fig. 1 The PsychoPy Builder interface. The right-hand panel contains the Components that can be added to the experiment, organized by categories that can be expanded or collapsed. These Components can be added into Routines and appear like "tracks" in the Routine panel. In the demo shown here, in the Routine named "trial," we simply present a word after a 500 ms pause and simultaneously start monitoring the keyboard for responses, but any number of Components can be set to start and stop in a synchronous or asynchronous fashion. The bottom panel of the interface shows the Flow of the experiment: the sequence in which the Routines will be presented, including the occurrence of any Loops in which we can repeat trials and/or blocks and control the randomization of conditions. Users report that this view is a highly intuitive and flexible way to implement their experimental designs.


Additionally, the Builder also provides a Code Component that allows users to execute arbitrary Python code at any of the same points available to standard Components (at the beginning of the experiment, beginning of a trial, every screen refresh, etc.). These Code Components allow the experimenter to add a high level of customization to the study without leaving the comfort of the Builder interface. This provides a balance between ease of use (via the graphical interface) and flexibility (allowing out-of-the-ordinary requirements to be implemented in custom code).
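
For instance (a hypothetical sketch, not an example from the article), the "Each Frame" tab of a Code Component might hold a single line that makes a stimulus flicker. This assumes a stimulus that was named probe in the Builder, and relies on the routine clock variable t and the numpy names (sin, pi) that Builder-generated scripts make available:

    # Hypothetical "Each Frame" code in a Code Component: flicker the
    # stimulus named 'probe' at 2 Hz, using the routine time variable 't'.
    probe.opacity = 0.5 * (1 + sin(2 * pi * 2 * t))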

OpenSesame (Mathôt et al., 2012) has a similar goal of providing a graphical interface for specifying experiments. OpenSesame uses, among several options, the PsychoPy Python module as a back end to present stimuli and has become popular for its ease of use and slick interface. It differs from PsychoPy mainly in that (1) OpenSesame represents the flow in a nested list, similar to E-Prime, where PsychoPy has a horizontal flow with loops; (2) in Routines, the PsychoPy interface emphasizes the temporal sequence of components, where the OpenSesame interface emphasizes their spatial layout; (3) PsychoPy allows the experimental and stimulus parameters to vary dynamically on every screen refresh during a routine, whereas OpenSesame requires stimuli to be pregenerated; and (4) PsychoPy generates Python scripts that can be exported and run independently of the graphical interface.

How well does the Builder achieve its goals?

Intuitive enough for teaching The aim was to allow nonprogrammers, including undergraduates, to be able to generate experiments. The best way for the reader to judge whether this aim has been achieved is perhaps to watch one of the walkthrough tutorials on YouTube (e.g., https://www.youtube.com/playlist?list=PLFB5A1BE51964D587). The first of these videos shows, in 15 min and assuming no prior knowledge, how to create a study, run it, and analyze the generated data.

PsychoPy's Builder interface is being used for undergraduate teaching in many institutions, allowing students to create their own experiments. At the School of Psychology, University of Nottingham, we previously used E-Prime for our undergraduate practical classes. In the academic year September 2010–2011, our first-year undergraduates spent the first half using PsychoPy and the second using E-Prime. We surveyed the students at the end of that year and, of the 60 respondents, 31 preferred PsychoPy to E-Prime (as compared with nine preferring E-Prime, and the remainder expressing no preference), and 52 reported that they could "maybe," "probably," or "definitely" create a study on their own following the five sessions with PsychoPy. PsychoPy has gained a number of usability improvements since then, and Nottingham now uses PsychoPy for all its undergraduate classes.

Flexible enough for high-quality experiments We aimed to generate software that can implement most "standard" experiments to satisfactory levels of temporal, spatial, and chromatic accuracy and precision. In terms of features, the Builder can make use of nearly all the stimuli in the PsychoPy library with no additional code. For instance, it can present images, text, movies, sounds, shapes, gratings (including second-order gratings), and random-dot kinematograms. All of these stimuli can be presented through apertures, combined with alpha-blending (transparency), and updated in most of their parameters on every screen refresh. Builder also supports inputs via keyboard, mouse, rating scales, microphone, various button boxes, and serial and parallel ports. It also supports a wide range of experiment structures, including advanced options such as interleaved staircase (e.g., QUEST) procedures. The use of arbitrary loop insertions, which can be nested and can be inserted around multiple other objects, allows the user to create a wide range of experimental flows. Figure 2 is a screenshot of one such Builder representation of an experimental flow.

At times, an experimenter will require access to features in the PsychoPy library that have not been provided directly as part of the graphical interface (often to keep the interface simple), or will want to call external Python modules beyond the PsychoPy library itself. This can be achieved by inserting snippets of custom code within a Code Component, as described above.

As evidence that PsychoPy is used by professional researchers, and not just as a teaching tool, according to Google Scholar, the original article describing PsychoPy (Peirce, 2007) now has over 1,800 citations. Most of these are empirical studies in which the software was used for stimulus presentation and response collection. The Builder interface is not only used by nonprogrammers, but also by researchers perfectly adept at programming, who find that they can create high-precision studies with greater efficiency and fewer errors by using this form of "graphical programming." Indeed, several of the authors of this article use the Builder interface rather than handwritten Python code, despite being very comfortable with programming in Python. Overall, the clearest indication that people find PsychoPy both easy to use and flexible is the growth in user numbers since the Builder interface was first released (see Fig. 3). We have seen user numbers grow from a few hundred regular users in 2009 to tens of thousands of users per month in 2018.

Precision and accuracy The Builder interface includes provision for high-precision stimulus delivery, just as with code-driven experiments. Notably, the user can specify stimulus durations in terms of number of frames, for precise short-interval timing. PsychoPy will handle a range of issues, such as ensuring that trigger pulses to the parallel port are synchronized to the screen refresh. Builder-generated scripts are oriented around a drawing and event loop that is synchronized to the regular refresh cycle of the computer monitor. Hence, in general, the presentation of visual stimuli is both temporally accurate (being presented at the desired time and for the desired duration) and precise (with little variability in those times). One published study suggested that the temporal precision of PsychoPy's visual stimuli was poor (Garaizar, Vadillo, López-de-Ipiña, & Matute, 2014), but this was an artifactual finding due to the authors using a prototype version of the Builder interface (v1.64, from 2011, which did carry an explicit warning that it should not be used for precision studies). The authors subsequently reran their analysis, using an official production-ready release (v1.80, 2014). Using the timing approach recommended in the documentation, they found very good timing of visual stimulus display, for "normal usage" (Garaizar & Vadillo, 2014).
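
To make the frame-based timing approach concrete, here is a minimal sketch (the window and stimulus parameters are arbitrary choices, not taken from the article):

    from psychopy import visual

    win = visual.Window(fullscr=True)
    grating = visual.GratingStim(win, sf=4, mask='gauss')

    # Present the stimulus for exactly 12 frames (200 ms on a 60 Hz
    # display); win.flip() blocks until the next screen refresh.
    for frameN in range(12):
        grating.draw()
        win.flip()
    win.flip()  # flip once more without drawing, to clear the screen
    win.close()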

A current limitation of temporal stimulus accuracy and precision, however, is the presentation of sound stimuli. There can be a lag (i.e., impaired accuracy) of sound onset, potentially up to tens of milliseconds, with associated trial-to-trial variability in those times of onset. Sound presentation relies on one of several underlying third-party sound libraries, and performance can vary across operating systems and sound hardware. The authors are currently conducting objective testing of performance across all these factors and updating PsychoPy's sound library to one with better performance.

Features and enhancements

As well as providing this new interface, making it easier for researchers at all levels to create experiments, there have been a large number of new features added to the PsychoPy Python library since the time of the last publication about the package (Peirce, 2009). Most notably, (1) researchers have the option to choose which version of PsychoPy to run the experiment on; (2) the range of stimuli that can be generated "out of the box" has grown considerably, as have the options for manipulating the existing stimulus types; and (3) increased support is available for external hardware and asynchronous response inputs.

Choosing the software version at run time

One issue for reproducible and open science is that software releases do not always maintain compatibility from one version to another, and changes to software may have very subtle effects on stimulus presentation, response collection, and experimental flow. One unattractive solution is that users retain the same version of the software in the lab and avoid upgrading. This precludes users (and their colleagues) from accessing new features and benefiting from important bug fixes. To circumvent such issues, PsychoPy now allows the user to specify which version of the library to use for running the experiment, regardless of which version is currently installed. Typically, this will be the PsychoPy version in which the experiment was initially created. This can be done in the Experiment Settings of the Builder interface, or in code via the useVersion() function (see the top of Code Snippet 1). The specified version will be used to interpret the script, regardless of what PsychoPy version is currently installed.
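
As a minimal sketch of this usage (the version number below is arbitrary), useVersion() is called once, before importing any other PsychoPy modules:

    # Pin the library version before any other psychopy imports
    # (the version number here is an arbitrary example).
    from psychopy import useVersion
    useVersion('1.90.2')

    from psychopy import visual, core  # now resolved to that version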

The idea is that the user should get the experiment working correctly in the current latest version of the software and test it thoroughly in that version. For instance, experimenters should ensure that data files contain the necessary values by actually performing an analysis. They should ensure that the timing is correct, preferably with a Black Box Toolkit (Plant & Quinlan, 2013) or similar hardware. When they are confident that the study runs as intended, they should then "freeze" the experiment so that it will continue to use that version of the PsychoPy library indefinitely, even when the lab version of the PsychoPy application is itself updated. This is optional: Users who do want the latest features, and do not mind occasionally updating their code when PsychoPy necessarily introduces incompatible changes, can simply ignore the useVersion setting.

Fig. 2 A more complex Flow arrangement. Loops and Routines can be nested in arbitrarily complex ways. PsychoPy itself is agnostic about whether a Loop designates trials, a sequence of stimuli within a trial, or a sequence of blocks around a loop of trials, as above. Furthermore, the mechanism for each loop is independent; it might be sequential, random, or something more complex, such as an interleaved staircase of trials.

[Figure 3 plots unique users per month, by month (Jan–Dec) for each year from 2010 to 2018, on a scale from 0 to 20,000.]
Fig. 3 Users per month, based on unique IP addresses launching the application. These figures are underestimates, due mostly to the fact that multiple computers on a local area network typically have a single IP address. We can also see the holiday patterns of users, with dips in usage during Christmas and the Northern Hemisphere summer.

Even if the script requests a version from the "future" (i.e., one that has never actually been installed locally), PsychoPy will fetch it online as needed. If the experimental script does not explicitly specify a version, it will simply run using the latest installed version. Hence, this capability ensures both backward and forward compatibility.

We should note that there are still limitations to this system when the version being requested is not compatible with the Python installation or dependencies. The user cannot, for instance, request version 1.84.0 using an installation of Python 3, because compatibility with that version of Python was only added in PsychoPy 1.90.0.

New stimuli and added features

Rating scales PsychoPy now provides rating scales, both in its Python library and as a Component in the Builder interface. Ratings can be collected in a range of ways, from standard Likert-style scales to scales with a range of gradations, or continuous "visual analog" scales. These are highly customizable objects that allow many aspects to be controlled, including the text of a confirmation button, the shape and style of the response slider, and independent colors of various parts of the scale.
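
As a minimal sketch of collecting a single rating in code (the labels and question text are arbitrary examples):

    from psychopy import visual

    win = visual.Window()
    scale = visual.RatingScale(win, low=1, high=7,
                               labels=['not at all', 'very much'],
                               scale='How vivid was the image?')

    # Draw the scale each frame until the participant confirms a rating.
    while scale.noResponse:
        scale.draw()
        win.flip()
    print(scale.getRating(), scale.getRT())  # the rating and its latency
    win.close()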

Movies Movie stimuli were already available in 2008, but their reliability and efficiency have improved. Movies remain a technically challenging stimulus, but the recent improvements in performance mean that a fast computer running a recent version of PsychoPy should be able to present high-definition video smoothly.

Element arrays The ElementArrayStim is a stimulus allowing the display of a large array of related elements in a highly optimized fashion. The key optimization is that the code can modify an entire array of objects in one go, leveraging the power of the graphics card to do so. The only constraint is that each element must use the same texture (e.g., a grating or an image) and mask, but the elements can differ in almost every other possible way (e.g., each having its own color, position, size, opacity, or phase). Hundreds or thousands of objects can be rendered by this means (in tasks such as visual search arrays or global form patterns), or instead as an array of simple masks that can gradually be removed. Currently, this stimulus is only available using code (either in scripts or as Code Components in the Builder interface), because it is inherently an object that needs programmatic control. See Code Snippet 1 for an example.
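
A minimal sketch of the idea (the element count and parameters are arbitrary): one texture and mask are shared by all elements, while per-element arrays control position and phase, updated wholesale on every refresh:

    import numpy as np
    from psychopy import visual

    win = visual.Window(units='pix')
    n = 200
    xys = np.random.uniform(-300, 300, size=(n, 2))  # one position per element

    # All elements share the 'sin' texture and Gaussian mask; positions
    # (and, below, phases) are set for the whole array in one call.
    gabors = visual.ElementArrayStim(win, nElements=n, xys=xys, sizes=40,
                                     elementTex='sin', elementMask='gauss')

    for frameN in range(120):                  # roughly 2 s at 60 Hz
        gabors.phases = gabors.phases + 0.05   # drift every element per refresh
        gabors.draw()
        win.flip()
    win.close()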

Geometric shapes Users can now create vector-based shapes by specifying points geometrically, to create standard polygons, such as rectangles, or arbitrary shapes. See Code Snippet 1 for an example.
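
For instance, a brief sketch of an arbitrary three-point shape (the vertices and colors are arbitrary choices):

    from psychopy import visual

    win = visual.Window(units='pix')
    triangle = visual.ShapeStim(win,
                                vertices=[(-100, -80), (100, -80), (0, 120)],
                                fillColor='red', lineColor='white')
    triangle.draw()
    win.flip()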

Greater flexibility of stimulus attributes The code syntax for changing stimulus attributes dynamically has been vastly expanded and homogenized across stimulus types, to the point that almost all attributes can be altered during runtime. The syntax for doing so has been simplified. See the stimulus updates in Code Snippet 1 for an example.
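
A brief sketch of this attribute-setting style (the stimulus and update rules are arbitrary examples): each attribute is an ordinary property that can simply be reassigned on any frame:

    from psychopy import visual

    win = visual.Window()
    grating = visual.GratingStim(win, tex='sin', mask='gauss', sf=5, size=0.5)

    for frameN in range(180):  # roughly 3 s at 60 Hz
        grating.ori = grating.ori + 1  # rotate 1 degree per frame
        grating.contrast = 0.5 + 0.5 * ((frameN % 60) / 60.0)  # ramp contrast
        grating.draw()
        win.flip()
    win.close()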

Application localization and translation Another addition was the ability of the PsychoPy application's graphical user interface to support localization into different languages. The code to make this possible was largely written by author J.R.G. To date, H.S. has translated all the elements of the application into Japanese, with other localizations possible and welcome.

Support for Python 3 Since 2008, the Python language has undergone a substantial change from version 2 to version 3. PsychoPy now supports both Python 2 and Python 3, so that users with older Python 2 code can continue to run their studies with no further changes, whereas users who want access to the new features of Python 3 can do so. A few of the dependent libraries, notably in specialized hardware interfaces, are still not available in Python 3-compatible versions, such that a few features still require a Python 2 installation. We therefore aim to continue supporting Python 2 for the foreseeable future.

ioHub and hardware polling One of the most substantial additions to the package is the ioHub system for asynchronous control of hardware, written by S.S. ioHub was conceived initially for the purpose of providing a unified application programming interface (API) for eyetracking hardware, so that users could use one set of functions to control and read data from any eyetracker. It comes with integrated support for trackers from SMI, SR Research, Tobii, LC Technologies, Eye Tribe, and Gazepoint.

ioHub runs as a separate process, ensuring high-rate hardware polling without disturbing the main process that handles stimulus presentation and experimental control. The system is capable of polling data and also streaming it at very high rates, for instance, capturing all the data from a 2-kHz eyetracker. ioHub can also be used for other hardware, such as keyboards, mice, LabJack boxes, Arduinos, and so forth.

ioHub is also capable of streaming data to its own unified data file, combining the data from all the devices being polled (and data from the experiment, as sent by the main PsychoPy process), all timestamped using the same clock. This is all saved in the well-established HDF5 format. These data files allow for very high-performance hierarchical storage that can be read by most analysis packages, including MATLAB, R, and Python, thus freeing the researcher from the proprietary formats of the eyetracker itself.
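
As a minimal sketch of the ioHub model (keyboard only, with no eyetracker configured; the wait duration is arbitrary), the hub is launched as its own process and queried for time-stamped events whenever convenient:

    from psychopy import core
    from psychopy.iohub import launchHubServer

    io = launchHubServer()           # start the separate ioHub process
    keyboard = io.devices.keyboard   # device proxy; polling is asynchronous

    core.wait(2.0)                   # key events accumulate in the background
    for evt in keyboard.getKeys():   # retrieve time-stamped key events
        print(evt.key, evt.time)
    io.quit()                        # shut down the ioHub process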

Another option for eyetracking measurements in Python is PyGaze (Dalmaijer et al., 2014). PyGaze is similar to ioHub, in that it provides a unified API to several eye-gaze tracking hardware systems (EyeLink and Eye Tribe, with experimental support for SMI and Tobii trackers at the time of writing). Unlike ioHub, PyGaze makes its calls from the same thread as the main experiment/stimulus-generation thread (although the processing by the eyetracker system itself is usually carried out on another core, or even on a separate, dedicated eyetracker computer). With PyGaze, users cannot as easily combine data from different devices (e.g., button box and eyetracker) timestamped on the same clock, and they must rely on the proprietary data format of the eyetracker manufacturer and associated analysis tools.

Open science and reproducibility

The developers of PsychoPy are advocates of open science. We believe that sharing materials, data, code, and stimuli is critical to scientific progress and hope our work has supported these goals in a variety of ways. Within the project itself, we support open science by having provided the full source code of PsychoPy since its inception in 2002 and by maintaining standard open file formats throughout.

We also encourage open scientific practices in others. By being open ourselves, by offering active support on our forum (at https://discourse.psychopy.org), and by providing many demos and free software, we hope that we set an example to the community of how science can and should be conducted. A very strong sense of community has grown around the PsychoPy project, and we believe that this is also important in encouraging open science.

With a recent grant from the Wellcome Trust, we have added our own open-source online experiment repository for users to search and upload experiments to share with each other. The site is called Pavlovia.org and can be accessed via Git version control, as well as directly through the PsychoPy application. This feature is currently in beta testing for the new PsychoPy3 phase of development.

Development workflow

The workflow of PsychoPy development has changed considerably since 2008. Most notably, the version control system has moved from SVN to Git, and we have developed a test suite to ensure backward compatibility. The source code has moved from its initial location at sourceforge.net to GitHub (https://github.com/psychopy/psychopy).

Git: It is now easier for users to make contributions, and for us to examine the community-contributed code. Git makes it easy to create forks and branches from the existing project, to work on them locally, and then propose changes to be folded back into the main project. The PsychoPy development repository is hosted on GitHub, which eases the workflow for contributors seeking to submit their changes and for the lead developers to review such changes.

Tests: Proposed code changes are now tested in an automated fashion that makes our released versions more robust to contributed errors. Using the pytest library, the testing now includes checks of stimulus renderings against reference images across combinations of back ends and settings, to ensure backward compatibility. The full test suite runs automatically, checking every new contribution to the code. We believe that the combination of the extensively used test suite and the useVersion functionality yields the reliability expected for critical parts of scientific experiments.

The growing community

We have mentioned the community aspect of the project already, but that is because it has an impact on so many aspects of the success and development of an open-source project. Open-source development works best when many people get behind a project. Without large numbers of users, there is always a danger that a project will stop being supported, due to the lack of recruitment of new developers and less impetus for the existing developers. A large community also brings the advantage of there being many shared experiments and teaching resources.

Figure 3 shows our growth in users, from a few hundred regular users in 2008 to a monthly-active-user count of over 21,000 in November 2018. The data are based on unique IP addresses launching the application. This systematically underestimates the actual number of users, because multiple computers on a local area network often share a single external IP address, appearing externally like a single user. Additionally, many labs disconnect their laboratory machines from the internet while running experiments, and some users choose to disable the sending of usage stats.

A small percentage of users also become contributors, in terms of providing bug fixes, new features, and documentation contributions. The project on GitHub shows an active developer community, with over 90 people contributing to the main project code. The size of these contributions naturally varies, but all fixes, even just for a typo in the documentation, are welcome. A number of contributors have devoted considerable amounts of time and effort to the project. At present, 18 contributors have each committed over 1,000 lines of updated code. In open-source software, people refer (somewhat morosely) to the Bus Factor of a project (the number of people that would have to be hit by a bus for the project to languish) and, sadly, for many projects the Bus Factor is as low as 1. The strong developer community for PsychoPy is an important ingredient in this sense; we certainly have a Bus Factor well over 1.

The third place where the community is important is in terms of mutual support. PsychoPy has a users' forum (https://discourse.psychopy.org) based on the open-source Discourse software. This serves as a place where users ask for, and offer, support in generating experiments, where the developers discuss potential changes, and where announcements are made. The forum has just over 2,000 registered users and receives roughly ten posts per day, across a variety of categories. Users have also written a range of resources, with various workshop and online-tutorial materials, some of which have been collated at http://www.psychopy.org/resources/resources.html.

Books

In addition to the online documentation and the user forum, there are now several books to help users learn PsychoPy. Some of the PsychoPy team have written a book teaching how to use the Builder interface (Peirce & MacAskill, 2018) and will soon release a companion book focused on programming experiments in Python. Dalmaijer (2016) uses PsychoPy to illustrate Python programming in experimental psychology. Sogo (2017) has written a textbook in Japanese on using PsychoPy to run experiments and Python to analyze data. Bertamini (2018) uses PsychoPy to teach readers how to implement a wide range of visual illusions.

Future developments

We are now working on the next major phase of development (PsychoPy3), adding the capacity to present experiments online (and, by extension, on mobile devices). In recent years, web browsers have become capable of providing access to hardware-accelerated graphics (even including GL shader programs). This means that we can present visual stimuli in a browser with timing sufficient to synchronize to the screen refresh cycle, at least on modern hardware and software. The PsychoPy Builder interface allows this to be achieved by generating a script using HTML and JavaScript rather than the established Python code. A beta version of that system is already available, but it should be used with caution.

Author note Author contributions: All of the authors of this work have contributed their code voluntarily to the project. J.P. wrote the bulk of the code and designed the logic of both the application interfaces and the underlying Python library. He remains the primary maintainer of the code. J.R.G. contributed the next largest amount of code, most notably contributing the rating scale and the translation code, but he has really touched on nearly all aspects of the library and application, and his contribution to the project cannot be overestimated. S.S. wrote the very substantial ioHub subpackage for high-performance hardware interfacing. He also added many other features, including the TextBox for high-performance text rendering. M.M. has contributed less to the code base itself, but has been probably the most active supporter of users in the forum of anyone other than J.P. R.H. has been incredibly influential in terms of additions to the code, user support, and especially in the endeavor of improving our development and testing framework and the update to Python 3. H.S. has spent a great deal of time making sure that we appropriately support non-English users, most obviously in terms of writing a full set of translations into Japanese, but also in fixing many issues with Unicode conversions. E.K. most notably contributed and maintains the code to support switching PsychoPy versions during a script, and J.L. has provided a wide range of smaller features and bug fixes that have all very much improved the function of the software. J.P. wrote the first draft of the manuscript, but all authors were then involved in editing that draft.

Acknowledgments: Many people have supported the project along the way, either with code contributions or by supporting users on the forum, and we are very grateful to the entire community for their work in this respect; sorry we cannot make you all authors! Special thanks to Yaroslav Halchenko, for providing the NeuroDebian packaging and for the additional support he has provided us over the years (especially with Travis-CI testing).

Support: The project has received small grants from the Higher Education Academy, UK, for development of teaching materials; from Cambridge Research Systems, UK, for providing support for some of their hardware (Bits#); and from the Center for Open Science, to write an interface to integrate with their server. Most recently, this work was supported by the Wellcome Trust [grant number WT 208368/Z/17/Z]. Conflicts of interest: PsychoPy is provided completely open-source and free of charge. The authors occasionally provide consultancy in the form of training or paid support in developing experiments, although any other individuals are equally permitted to gain from providing training and consultancy on PsychoPy in this manner.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

Bertamini, M. (2018). Programming visual illusions for everyone. Cham: Springer. https://doi.org/10.1007/978-3-319-64066-2

Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436. https://doi.org/10.1163/156856897X00357

Dalmaijer, E. S. (2016). Python for experimental psychologists. London: Routledge.

Dalmaijer, E. S., Mathôt, S., & Van der Stigchel, S. (2014). PyGaze: An open-source, cross-platform toolbox for minimal-effort programming of eye tracking experiments. Behavior Research Methods, 46, 913–921. https://doi.org/10.3758/s13428-013-0422-2

Garaizar, P., & Vadillo, M. A. (2014). Accuracy and precision of visual stimulus timing in PsychoPy: No timing errors in standard usage. PLoS ONE, 9, e112033. https://doi.org/10.1371/journal.pone.0112033

Garaizar, P., Vadillo, M. A., López-de-Ipiña, D., & Matute, H. (2014). Measuring software timing errors in the presentation of visual stimuli in cognitive neuroscience experiments. PLoS ONE, 9, e85108. https://doi.org/10.1371/journal.pone.0085108

Geller, A. S., Schlefer, I. K., Sederberg, P. B., Jacobs, J., & Kahana, M. J. (2007). PyEPL: A cross-platform experiment-programming library. Behavior Research Methods, 39, 950–958. https://doi.org/10.3758/BF03192990

Kleiner, M., Brainard, D., & Pelli, D. (2007). What's new in Psychtoolbox-3? Perception, 36 (ECVP Abstract Suppl.), 14.

Krause, F., & Lindemann, O. (2014). Expyriment: A Python library for cognitive and neuroscientific experiments. Behavior Research Methods, 46, 416–428. https://doi.org/10.3758/s13428-013-0390-6

Mathôt, S., Schreij, D., & Theeuwes, J. (2012). OpenSesame: An open-source, graphical experiment builder for the social sciences. Behavior Research Methods, 44, 314–324. https://doi.org/10.3758/s13428-011-0168-7

Peirce, J. W. (2007). PsychoPy–Psychophysics software in Python. Journal of Neuroscience Methods, 162, 8–13.

Peirce, J. W. (2009). Generating stimuli for neuroscience using PsychoPy. Frontiers in Neuroinformatics, 2, 10. https://doi.org/10.3389/neuro.11.010.2008

Peirce, J. W., & MacAskill, M. R. (2018). Building experiments in PsychoPy. London: Sage.

Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442. https://doi.org/10.1163/156856897X00366

Plant, R. R., & Quinlan, P. T. (2013). Could millisecond timing errors in commonly used equipment be a cause of replication failure in some neuroscience studies? Cognitive, Affective, & Behavioral Neuroscience, 13, 598–614. https://doi.org/10.3758/s13415-013-0166-6

Sogo, H. (2017). Shinrigaku jikken programming: Python/PsychoPy ni yoru jikken sakusei to data shori [Programming psychological experiments: Creating experiment programs and data handling with Python/PsychoPy]. Tokyo: Asakura Shoten.

Straw, A. D. (2008). Vision Egg: An open-source library for realtime visual stimulus generation. Frontiers in Neuroinformatics, 2, 4. https://doi.org/10.3389/neuro.11.004.2008


... All participants gave informed consent. All tasks were computerized (Dell OptiPlex 760 computer, 17-inch monitor, 1024 × 768 pixels, 85 Hz) using PsychoPy 73,74 . Participants filled out questionnaires on background variables, their history of reading problems, and current and childhood symptoms of ADHD. ...

Faces and words are traditionally assumed to be independently processed. Dyslexia is also traditionally thought to be a non-visual deficit. Counter to both ideas, face perception deficits in dyslexia have been reported. Others report no such deficits. We sought to resolve this discrepancy. 60 adults participated in the study (24 dyslexic, 36 typical readers). Feature-based processing and configural or global form processing of faces was measured with a face matching task. Opposite laterality effects in these tasks, dependent on left–right orientation of faces, supported that they tapped into separable visual mechanisms. Dyslexic readers tended to be poorer than typical readers at feature-based face matching while no differences were found for global form face matching. We conclude that word and face perception are associated when the latter requires the processing of visual features of a face, while processing the global form of faces apparently shares minimal—if any—resources with visual word processing. The current results indicate that visual word and face processing are both associated and dissociated—but this depends on what visual mechanisms are task-relevant. We suggest that reading deficits could stem from multiple factors, and that one such factor is a problem with feature-based processing of visual objects.

... The program represents a simple solution to an inconvenience of certain experimental software programs (e.g., for designing and running psychology experiments). Many such programs (e.g., PsychoPy [7] and OpenSesame [4]) create individual datafiles for each participant, generally in CSV format. These programs are growing in popularity, especially for online data collection (e.g., because they are free software and some popular paid software programs, like E-Prime [8], do not allow for online data collection), perhaps even more so recently with limitations on the ability to run experiments in the lab during the COVID-19 pandemic. ...

  • James R Schmidt James R Schmidt

In experimental psychology and other applications, researchers will often have numerous datafiles (e.g., one for each participant) that need to be joined together into one larger dataset before data analyses. Many experimental software programs, including some increasingly popular ones (e.g., PsychoPy or OpenSesame), do not include data merging functionality. Copy-and-pasting (potentially error prone) or the writing of situation-specific scripts (potentially difficult and time consuming) may be necessary. CSVDataMerge was created as a free Java application that merges CSV (or comma-separated TXT) data with little more than a double-click. The program also appropriately deals with datasets that have different column orders in different datafiles or empty cells. More trivially, it can also concatenate datafiles that do not contain headers and allows the user to specify which columns to keep and in what order.

... Images were presented on a Dell 24: P2418HT touchscreen monitor using PsychoPy (version v1.83.04; Peirce et al., 2019). At the outset of each sorting condition, participants saw all the images to be sorted. ...

The present study examined how children spontaneously represent facial cues associated with emotion. 106 three‐ to six‐year‐old children (48 male, 58 female; 9.4% Asian, 84.0% White, 6.6% more than one race) and 40 adults (10 male, 30 female; 10% Hispanic, 30% Asian, 2.5% Black, 57.5% White) were recruited from a Midwestern city (2019–2020), and sorted emotion cues in a spatial arrangement method that assesses emotion knowledge without reliance on emotion vocabulary. Using supervised and unsupervised analyses, the study found evidence for continuities and gradual changes in children's emotion knowledge compared to adults. Emotion knowledge develops through an incremental learning process in which children change their representations using combinations of factors—particularly valence—that are weighted differently across development.

Eye contact is a dynamic social signal that captures attention and plays a critical role in human communication. In particular, direct gaze often accompanies communicative acts in an ostensive function: a speaker directs her gaze towards the addressee to highlight the fact that this message is being intentionally communicated to her. The addressee, in turn, integrates the speaker's auditory and visual speech signals (i.e., her vocal sounds and lip movements) into a unitary percept. It is an open question whether the speaker's gaze affects how the addressee integrates the speaker's multisensory speech signals. We investigated this question using the classic McGurk illusion, an illusory percept created by presenting mismatching auditory (vocal sounds) and visual information (speaker's lip movements). Specifically, we manipulated whether the speaker (a) moved his eyelids up/down (i.e., open/closed his eyes) prior to speaking or did not show any eye motion, and (b) spoke with open or closed eyes. When the speaker's eyes moved (i.e., opened or closed) before an utterance, and when the speaker spoke with closed eyes, the McGurk illusion was weakened (i.e., addressees reported significantly fewer illusory percepts). In line with previous research, this suggests that motion (opening or closing), as well as the closed state of the speaker's eyes, captured addressees' attention, thereby reducing the influence of the speaker's lip movements on the addressees' audiovisual integration process. Our findings reaffirm the power of speaker gaze to guide attention, showing that its dynamics can modulate low-level processes such as the integration of multisensory speech signals.

High density breast tissue has been found to reduce radiologists' accuracy in detecting and classifying mammogram abnormalities. The current research examines the perceptual and decisional components that underlie diagnostic classification (independent of detection) in a sample of novices. Mammograms were varied along two dimensions: Breast tissue density (low/high) and the nature of an identified mass (benign/malignant). In two experiments, participants learned to classify images into 4 categories created by factorial combination of these dimensions. In low density tissue, accuracy was higher for benign than for malignant masses. Surprisingly, and in contrast to the mammography literature, accuracy for malignant masses was higher in high-density than in low-density tissue. Cognitive modeling based in general recognition theory (GRT) indicated that low-density/benign category accuracy was largely due to high perceptual discriminability for that category. The high accuracy level for malignant masses in high density tissue was accounted for by decision bound slopes that favored the "malignant" response for items in the high-density malignant category. Because GRT can provide insight into perceptual and decisional determinants of observed classification accuracy, as well as individual-difference level parameters about attention allocation, GRT may be a means to obtain a more detailed understanding of diagnostic classification of complex naturalistic stimuli.

Cognitive and physical effort are typically regarded as costly, but demands for effort also seemingly boost the appeal of prospects under certain conditions. One contextual factor that might influence choices for or against effort is the mix of different types of demand a decision maker encounters in a given environment. In two foraging experiments, participants encountered prospective rewards that required equally long intervals of cognitive effort, physical effort, or unfilled delay. Monetary offers varied per trial, and the two experiments differed in whether the type of effort or delay cost was the same on every trial, or varied across trials. When each participant faced only one type of cost, cognitive effort persistently produced the highest acceptance rate compared to trials with an equivalent period of either physical effort or unfilled delay. We theorized that if cognitive effort were intrinsically rewarding, we would observe the same pattern of preferences when participants foraged for varying cost types in addition to rewards. Contrary to this prediction, in the second experiment, an initially higher acceptance rate for cognitive effort trials disappeared over time amid an overall decline in acceptance rates as participants gained experience with all three conditions. Our results indicate that cognitive demands may reduce the discounting effect of delays, but not because decision makers assign intrinsic value to cognitive effort. Rather, the results suggest that a cognitive effort requirement might influence contextual factors such as subjective delay duration estimates, which can be recalibrated if multiple forms of demand are interleaved.

Functional near-infrared spectroscopy (fNIRS) is gaining popularity as a non-invasive neuroimaging technique in a broad range of fields, including the context of gaming and serious games. However, the capabilities of fNIRS are still underutilized. FNIRS is less prone to motion artifacts and more portable in comparison to other neuroimaging methods and it is therefore ideal for experimental designs which involve physical activity. In this paper, the goal is to demonstrate the feasibility of fNIRS for the recording of cortical activation during a motion-intensive task, namely basketball dribbling. FNIRS recordings over sensorimotor regions were conducted in a block-design on 20 participants, who dribbled a basketball with their dominant right hand. Signal quality for task-related concentration changes in oxy-Hb and deoxy-Hb has been investigated by means of the contrast-to-noise ratio (CNR). A statistical comparison of average CNR from the fNIRS signal revealed the expected effect of significantly higher CNR over the left as compared to the right sensorimotor region. Our findings demonstrate that fNIRS delivers sufficient signal quality to measure hemispheric activation differences during a motion-intensive motoric task like basketball dribbling and bare indications for future endeavors with fNIRS in less constraint settings.

Prior work has noted changes in musical cue use between the Classical and Romantic periods. Here we complement and extend musicological findings by blending score-based analyses with perceptual evaluations to provide new insight into this important issue. Participants listened to excerpts from either Bach's The Well-Tempered Clavier or Chopin's 24 Preludes – historically important sets drawn from distinct musical eras, with 12 major and 12 minor key pieces each. Participants selected one of five categories for each piece, adapted from previous musical analyses exploring historical changes in music's cues. Combining participant classifications with score-extracted cues offers a useful way to complement and extend previous work exploring changes in the function of mode across musical eras based only on notational information. In doing so, we find evidence that changing associations of cues in the Romantic era influence judgments of affective meaning. This study provides a useful step toward bridging the divide between traditional approaches to musicology, music theory, and music perception by combining perceptual evaluations with cues extracted from musical scores to shed light on changes in musical emotion across eras.

  • Zhiguo Wang

PsychoPy is a very popular tool among psychologists. It can be installed as a Python module, but it can also be used as a standalone application that features a graphical user interface. PsychoPy offers a set of convenient functions for generating visual stimuli, registering keyboard and mouse events, and interfacing with research equipment. This chapter first gives an overview of the essential PsychoPy features that are frequently used for building experiments, most notably the various visual stimuli one can generate with PsychoPy and the handling of keyboard and mouse events. At the end of the chapter, we put these building blocks together to create a simple script illustrating the famous Simon effect.
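By way of illustration, a minimal Simon-effect trial in PsychoPy might look like the following (a hypothetical sketch in the spirit of the chapter, not its exact script): the participant responds to the color of a lateralized circle while ignoring its position, so that stimulus side and response side are congruent on some trials and incongruent on others.

    from psychopy import visual, core, event

    win = visual.Window(size=(800, 600), color='grey', units='pix')
    clock = core.Clock()

    # (x position, circle color, correct key): red -> left, blue -> right,
    # so a red circle on the left is congruent and a blue one incongruent
    trials = [(-200, 'red', 'left'), (-200, 'blue', 'right')]
    for x_pos, color, correct_key in trials:
        circle = visual.Circle(win, radius=30, fillColor=color,
                               lineColor=color, pos=(x_pos, 0))
        circle.draw()
        win.flip()
        clock.reset()
        keys = event.waitKeys(keyList=['left', 'right'], timeStamped=clock)
        key, rt = keys[0]
        print('correct:', key == correct_key, 'RT: %.3f s' % rt)
    win.close()
    core.quit()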

Spontaneous neural oscillations are key predictors of perceptual decisions to bind multisensory signals into a unified percept. Research links decreased alpha power in the posterior cortices to attention and audiovisual binding in the sound-induced flash illusion (SIFI) paradigm. This suggests that controlling alpha oscillations would be a way of controlling audiovisual binding. In the present feasibility study, we used MEG neurofeedback to train one group of subjects to increase the left/right and another to increase the right/left alpha power ratio in the parietal cortex. We tested for changes in audiovisual binding in a SIFI paradigm in which flashes appeared in both hemifields. Results showed that the neurofeedback induced a significant asymmetry in alpha power for the left/right group, which was not seen for the right/left group. Corresponding asymmetry changes in audiovisual binding in illusion trials (with 2, 3, and 4 beeps paired with 1 flash) were not apparent. Exploratory analyses showed that neurofeedback training effects were present for illusion trials with the lowest numeric disparity (i.e., 2 beeps and 1 flash) only if the previous trial had high congruency (2 beeps and 2 flashes). Our data suggest that the relation between parietal alpha power (an index of attention) and its effect on audiovisual binding depends on the causal structure learned from the previous stimulus. The present results suggest that low alpha power biases observers toward audiovisual binding when they have learned that audiovisual signals originate from a common origin, consistent with a Bayesian causal inference account of multisensory perception.

In a recent report published in PLoS ONE, we found that the performance of PsychoPy degraded with very short timing intervals, suggesting that it might not be perfectly suitable for experiments requiring the presentation of very brief stimuli. The present study aims to provide an updated performance assessment for the most recent version of PsychoPy (v1.80) under different hardware/software conditions. Overall, the results show that PsychoPy can achieve high levels of precision and accuracy in the presentation of brief visual stimuli. Although occasional timing errors were found in very demanding benchmarking tests, there is no reason to think that they can pose any problem for standard experiments developed by researchers.
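As a point of reference, PsychoPy itself can log flip-to-flip intervals, which allows a quick self-check of display timing on one's own hardware. The following is a rough sketch of such a check, not the benchmarking procedure used in the study:

    from psychopy import visual, core

    win = visual.Window(fullscr=True)
    win.recordFrameIntervals = True  # store every flip-to-flip interval
    stim = visual.TextStim(win, text='+')
    for _ in range(300):  # roughly 5 s on a 60 Hz display
        stim.draw()
        win.flip()
    intervals = win.frameIntervals
    print('mean interval: %.4f s, max: %.4f s'
          % (sum(intervals) / len(intervals), max(intervals)))
    win.close()
    core.quit()

A mean interval close to the nominal frame duration with few outliers suggests the display loop is keeping up; long or irregular intervals point to dropped frames.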

Because of the features provided by an abundance of specialized experimental software packages, personal computers have become prominent and powerful tools in cognitive research. Most of these programs have mechanisms to control the precision and accuracy with which visual stimuli are presented and response times are measured. However, external factors, often related to the technology used to display the visual information, can have a noticeable impact on actual performance and may be easily overlooked by researchers. The aim of this study is to measure the precision and accuracy of the timing mechanisms of some of the most popular software packages used in a typical laboratory scenario, in order to assess whether the presentation times configured by researchers differ from the measured times by more than would be expected from hardware limitations alone. Despite the apparent precision and accuracy of the results, we found important issues related to timing setups in the presentation of visual stimuli, which researchers should take into account in their experiments.

The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eye-tracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eye trackers of different brands (EyeLink, SMI, and Tobii systems) is supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eye-tracking experiments. Essentially, PyGaze is a software bridge for eye-tracking research.
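A bare-bones PyGaze session might look something like the sketch below. This assumes a supported eye tracker and a PyGaze settings/constants file configured for the local setup; it is an illustration of the layered API rather than a ready-to-run experiment.

    from pygaze.display import Display
    from pygaze.screen import Screen
    from pygaze.eyetracker import EyeTracker

    disp = Display()                     # opens the experiment window
    scr = Screen()
    scr.draw_fixation(fixtype='cross')   # prepare an offscreen drawing

    tracker = EyeTracker(disp)           # backend chosen via settings
    tracker.calibrate()
    tracker.start_recording()
    disp.fill(scr)                       # copy the screen to the display
    disp.show()                          # flip it onto the monitor
    gaze_x, gaze_y = tracker.sample()    # most recent gaze position
    tracker.stop_recording()
    tracker.close()
    disp.close()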

Expyriment is an open-source and platform-independent lightweight Python library for designing and conducting timing-critical behavioral and neuroimaging experiments. The major goal is to provide a well-structured Python library for script-based experiment development, with a high priority being the readability of the resulting program code. Expyriment has been tested extensively under Linux and Windows and is an all-in-one solution, as it handles stimulus presentation, the recording of input/output events, communication with other devices, and the collection and preprocessing of data. Furthermore, it offers a hierarchical design structure, which allows for an intuitive transition from the experimental design to a running program. It is therefore also suited for students, as well as for experimental psychologists and neuroscientists with little programming experience.
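To give a flavor of the hierarchical design structure described above, here is a minimal, hypothetical Expyriment session (a simplified sketch following the library's documented design/control/stimuli layout, not a complete experiment):

    import expyriment

    exp = expyriment.design.Experiment(name='Demo')
    expyriment.control.initialize(exp)

    fixation = expyriment.stimuli.FixCross()
    target = expyriment.stimuli.TextLine('Press any key')

    expyriment.control.start()
    fixation.present()
    exp.clock.wait(500)            # show the fixation cross for 500 ms
    target.present()
    key, rt = exp.keyboard.wait()  # returns the pressed key and reaction time
    exp.data.add([key, rt])        # appended to the automatic data file
    expyriment.control.end()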

Neuroscience is a rapidly expanding field in which complex studies and equipment setups are the norm. Often these push boundaries in terms of what technology can offer, and increasingly they make use of a wide range of stimulus materials and interconnected equipment (e.g., magnetic resonance imaging, electroencephalography, magnetoencephalography, eyetrackers, biofeedback, etc.). The software that binds the various constituent parts together in turn allows ever more elaborate investigations to be carried out with apparent ease. However, research over the last decade has suggested a growing, yet underacknowledged, problem with obtaining millisecond-accurate timing in some computer-based studies. Crucially, timing inaccuracies can affect not just response time measurements, but also stimulus presentation and the synchronization between equipment. This is not a new problem, but rather one that researchers may have assumed had been solved by the advent of faster computers, state-of-the-art equipment, and more advanced software. In this article, we highlight the potential sources of error, their causes, and their likely impact on replication. Unfortunately, in many applications, inaccurate timing is not easily resolved by utilizing ever-faster computers, newer equipment, or post hoc statistical manipulation. To ensure consistency across the field, we advocate that researchers self-validate the timing accuracy of their own equipment whilst running the actual paradigm in situ.

In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.
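For a flavor of the Python scripting it supports, a snippet inside an OpenSesame inline_script item might look like the following. This is a sketch: canvas(), keyboard(), and var are factory functions and objects provided by OpenSesame's Python workspace, so the code runs within OpenSesame rather than as a standalone script.

    # Draw a text message and show it on the display
    my_canvas = canvas()
    my_canvas.text('Press any key to continue')
    my_canvas.show()

    # Collect a single key press
    my_keyboard = keyboard()
    key, end_time = my_keyboard.get_key()

    # Store the response in the experiment's variable store for logging
    var.response = key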

  • Marco Bertamini

If you find visual illusions fascinating, Programming Visual Illusions for Everyone is a book for you. It has some background, some history, and some theories about visual illusions, and it describes twelve illusions in some detail. Some are about surfaces, some are about the apparent size of objects, some are about colour, and some involve movement. This is only one aspect of the book. The other is to show you how you can create these effects on any computer. The book includes a brief introduction to a powerful programming language called Python. No previous experience with programming is necessary. There is also an introduction to a package called PsychoPy that makes it easy to draw on a computer screen. It is perfectly OK if you have never heard the names Python or PsychoPy before. Python is a modern and easy-to-read language, and PsychoPy takes care of all the graphical aspects of drawing on a screen and also of interacting with a computer. By the way, both Python and PsychoPy are absolutely free. Is this a book about illusions or about programming? It is both!
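In that spirit, here is a sketch of one classic effect that can be drawn in a few lines of PsychoPy (our own illustration, not one of the book's twelve examples): simultaneous brightness contrast, in which two physically identical grey squares appear different on dark versus light backgrounds.

    from psychopy import visual, event, core

    win = visual.Window(size=(800, 400), color='grey', units='pix')
    # Dark and light background panels
    left_bg = visual.Rect(win, width=400, height=400, pos=(-200, 0),
                          fillColor='black')
    right_bg = visual.Rect(win, width=400, height=400, pos=(200, 0),
                           fillColor='white')
    # Two physically identical squares; (0, 0, 0) is mid-grey in
    # PsychoPy's signed RGB color space
    left_sq = visual.Rect(win, width=100, height=100, pos=(-200, 0),
                          fillColor=(0, 0, 0))
    right_sq = visual.Rect(win, width=100, height=100, pos=(200, 0),
                           fillColor=(0, 0, 0))

    for stim in (left_bg, right_bg, left_sq, right_sq):
        stim.draw()
    win.flip()
    event.waitKeys()  # view the illusion until a key is pressed
    win.close()
    core.quit()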

  • Edwin Dalmaijer

Order the book: https://www.routledge.com/products/9781138671577 Supporting material: http://www.pygaze.org/pep/ Programming is an important part of experimental psychology and cognitive neuroscience, and Python is an ideal language for novices. It sports a very readable syntax, intuitive variable management, and a very large body of functionality that ranges from simple arithmetic to complex computing. Python for Experimental Psychologists provides researchers without prior programming experience with the knowledge they need to independently script experiments and analyses in Python. It teaches readers the basics of programming in Python, enabling them to go on and program their own experiments. The skills it offers include: how to display stimuli on a computer screen; how to get input from peripherals (e.g., keyboard and mouse) and specialized equipment (e.g., eye trackers); how to log data; and how to control timing. In addition, it shows readers the basic principles of data analysis applied to behavioral data, and the more advanced techniques required to analyse trace data (e.g., pupil size and EEG) and gaze data. Written informally and accessibly, the book deliberately focuses on the parts of Python that are relevant to experimental psychologists and cognitive neuroscientists. It is also accompanied by a companion website where you will find colour versions of the figures, along with example stimuli, datasets and scripts, and a portable Windows installation of Python.
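As a toy example of the kind of skill covered (our own illustration, not taken from the book), the following plain-Python snippet times a keyboard response and logs it to a tab-separated text file:

    import time

    with open('data.txt', 'w') as log:
        log.write('trial\trt\n')  # header row for the log file
        for trial in range(3):
            input('Trial %d: press Enter when ready...' % (trial + 1))
            t0 = time.time()
            input('Now press Enter as fast as you can!')
            rt = time.time() - t0
            log.write('%d\t%.3f\n' % (trial + 1, rt))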

  • Denis Pelli

The VideoToolbox is a free collection of two hundred C subroutines for Macintosh computers that calibrates and controls the computer-display interface to create accurately specified visual stimuli. High-level platform-independent languages like MATLAB are best for creating the numbers that describe the desired images. Low-level, computer-specific VideoToolbox routines control the hardware that transforms those numbers into a movie. Transcending the particular computer and language, we discuss the nature of the computer-display interface, and how to calibrate and control it.