
Open data for online psychology experiments

Research activities in cognitive science are usually limited to a single lab, or to a small group of collaborators. But they need not be. A key problem in encouraging wider collaboration is finding ways of sharing human subjects data that do not compromise the privacy and confidentiality of the participants (DeAngelis, 2004), or violate the legal and ethical norms designed to protect that privacy.

Bullock, quoted in DeAngelis (2004), notes other difficulties: psychology traditionally has not built large data systems for storing or sharing large data sets, and has not developed the culture of data sharing seen in other disciplines.

The challenge in brief: what would it take to build a general-purpose internet-based experiment presentation and data collection system where the resulting data is automatically and anonymously shared after a suitable embargo period? This could be done by individual researchers self-archiving their data, or by sending the data to a repository. One of the advantages of such an automated data publishing system would be a reduction in the cost of publishing properly formatted raw psychological data.

This is a practical project, the contemplation and building of which is entirely feasible with today’s technology. The charm of this project is that it requires three issues to be worked out in order to succeed.

The first of these issues has to do with the internet: how do we offer reasonable guarantees that the data collected remains anonymous, and is not compromised in transmission back to the experiment server or on the server itself? Although the details vary by institution, the main requirement for making human subjects data publicly available is that the data be presented anonymously. The name, and any other personally identifying information, must not be stored with the data.

This simple practice of separating data from identifying information can be complicated by the fact that the data is being sent over the internet, where data can be intercepted, and servers can be compromised. Some solutions to this problem could include the use of strong encryption or IP anonymization. They could also include discarding potentially identifying data on the client machine before it is even transmitted, or discarding some of the received information after summary calculations are made, but before it is stored.
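To make that last option concrete, here is a minimal sketch (in Python) of discarding identifying information before a record is stored. The field names and the IP-coarsening rule are illustrative assumptions, not a real schema:

# Minimal sketch: strip identifying fields from a trial record before storage.
# The field names (name, email, ip, user_agent) are illustrative, not a real schema.

IDENTIFYING_FIELDS = {"name", "email", "ip", "user_agent"}

def anonymize_ip(ip: str) -> str:
    """Zero the last octet of an IPv4 address so it no longer identifies a host."""
    parts = ip.split(".")
    if len(parts) == 4:
        parts[3] = "0"
    return ".".join(parts)

def anonymize_record(record: dict) -> dict:
    """Return a copy of the record with identifying fields removed,
    keeping only a coarsened IP (for rough geography) and the experimental data."""
    clean = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    if "ip" in record:
        clean["ip_prefix"] = anonymize_ip(record["ip"])
    return clean

raw = {"name": "A. Participant", "ip": "192.0.2.17", "condition": "congruent", "rt_ms": 512}
print(anonymize_record(raw))   # {'condition': 'congruent', 'rt_ms': 512, 'ip_prefix': '192.0.2.0'}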

A second issue is societal: under what circumstances do we have the right to share human subjects data? The answer can vary by institution, and it involves aligning the policies and interests of different stakeholders. University oversight on these matters can include ethics committees, privacy officers, and legal departments. There may also be assertions of intellectual property by either the institution or the body funding the research. Depending on the location of the researcher and/or repository, one or all of these groups may need to be consulted. This is an issue that would need to be explored carefully and sensitively with the relevant stakeholders at the repository institution.

A third issue is experimental. There is no shortage of online experiments on the web. Psychological Research on the Web lists hundreds. As the Top Ten Online Psychology Experiments points out, it’s a little hard to assess the validity of these results because of variations in the speed of the hardware. (That post also notes that we don’t know who is taking these tests, or whether they have understood the instructions properly.) How can we offer reasonable guarantees that data collected on different hardware will be valid? This includes timing accuracy for both input and output (Plant & Turner, 2009), and adjusting stimuli to ensure similarity of size, colour, or volume.

An embargo period is important for three reasons: (a) to protect participant privacy, (b) to protect the integrity of the experiment, and (c) to protect, to the extent they desire, the researcher’s work. In particular,

(a) it is important not to release data immediately upon collection, because anyone who knows when a particular participant completed the experiment might be able to trace the data back to them. A fixed, publicly known embargo period would have the same problem. A better approach may be a randomized embargo period, or a single embargo applied to all of the collected data so that it is released in one batch (a sketch of both follows this list).

(b) when online experiments are conducted, the data is usually not made available until the experiment has finished running, so that there is no way a potential participant could look at the results and be influenced by them.

(c) some researchers may not wish to release their data until they have published, but would be happy to release the data afterwards.
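As a small illustration of point (a), here is one way a release date might be computed. The embargo lengths are arbitrary placeholders, and the functions are hypothetical rather than part of any existing system:

# Minimal sketch: choose a release date so that the release time cannot easily be
# traced back to when a particular participant took part.
# The embargo lengths below are arbitrary illustrations, not recommendations.

import random
from datetime import date, timedelta

def randomized_release_date(collected: date,
                            min_days: int = 180,
                            max_days: int = 365) -> date:
    """Embargo each record for a random period within [min_days, max_days]."""
    return collected + timedelta(days=random.randint(min_days, max_days))

def batch_release_date(experiment_end: date, embargo_days: int = 180) -> date:
    """Release every record in one batch, keyed to the end of the experiment
    rather than to any individual participant's session."""
    return experiment_end + timedelta(days=embargo_days)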

The online-experiment-runner could, of course, be made open source, as could the experiments that run on it, but these are separate issues.

Does my account include problems that don’t exist in practice? Are there places where it is actually more complicated than I’ve sketched out here? Are there examples of open data collected on the internet? Do you know of other references on the ethics and practice of making human subjects data available in various contexts (for psychology or otherwise)?

Acknowledgements: Thanks to Terry Stewart for many illuminating conversations on open models and open modelling, and to the folks at ISPOC for their model repository. Thanks also for stimulating questions and feedback to Michael Nielsen, Greg Wilson, Jon Udell, Andre and Carlene Paquette, Jim McGinley, and James Redekop.

______

DeAngelis, T. (2004). ‘Data sharing: a different animal’. APA Monitor on Psychology, 35(2), February 2004.

Plant, R.R., & Turner, G. (2009). ‘Millisecond precision psychological research in a world of commodity computers: New hardware, new problems?’ Behavior Research Methods, 41, 598-614. doi: 10.3758/BRM.41.3.598. [Thanks to Mike Lawrence for pointing me at this]


Online psychology experiments: calibration 2 — size

Cognitive psychology papers that report computer-based experiments often specify the distance of the participant from the monitor, the size of the monitor, or the visual angle of a stimulus. This may prove a problem for some online experiments, where monitor sizes may vary from 13″ to 24″ — and beyond. Subjects may be sitting very close to their display, as often happens with laptops, or very far from their display, as often happens when the machine is connected to an HDTV. Asking participants to measure the size of their monitor (even nominal “17-inch” screens vary in size) or how far their eyes are from the screen may limit participation. It would be useful to have the equivalent of both of these measures in order to properly assess results.

Given the size of the monitor and the distance to the screen, many experiments could compensate by resizing their stimuli. If such measurements were available, the presentation package could be designed so that any stimulus could be cleanly altered in size. Writing the stimulus presentation package to use relative, rather than absolute, coordinates would obviously assist with this.
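As a sketch of the arithmetic involved, assuming we have an estimate of the viewing distance and the physical width of the display (the numbers in the example are made up):

# Minimal sketch: convert a desired visual angle into pixels, assuming we know
# (or have estimated) the viewing distance and the physical size of the display.

import math

def pixels_per_cm(display_width_cm: float, display_width_px: int) -> float:
    return display_width_px / display_width_cm

def degrees_to_pixels(angle_deg: float, viewing_distance_cm: float,
                      px_per_cm: float) -> float:
    """Width in pixels of a stimulus subtending angle_deg at the given distance."""
    width_cm = 2 * viewing_distance_cm * math.tan(math.radians(angle_deg) / 2)
    return width_cm * px_per_cm

# Example: a 2-degree stimulus, viewed from 57 cm on a screen 33 cm (1280 px) wide.
px_cm = pixels_per_cm(33.0, 1280)
print(round(degrees_to_pixels(2.0, 57.0, px_cm)))   # roughly 77 pixels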

How are these measures to be achieved, if not with time and a tape measure? The classic result that the thumb appears to be about 2 degrees of visual arc wide when held at arm’s length could be useful here (Groot, Ortega, & Beltran, 1994; O’Shea, 1991).

We could ask the participant to hold their thumb at arm’s length, covering a single dot, and use their free hand to adjust the size of the dot until their thumb exactly covers it. They begin with the dot smaller than their thumb, and increase its size until they can just see it around the thumb’s edges. Cursor keys work well for this. The rest of the stimuli can then be adjusted to fit. I programmed something similar using a plastic keypad in an fMRI experiment (Tovey, Whitney, & Goodale, 2004), and it works well.

A second possibility would be to ask the subject to hold their thumb out at arm’s length so that it just covers one of, say, fifteen dots on the screen, each of a slightly different size. The only thing the participant would then have to do is click on the dot closest in size to their thumb.
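Either way, the calibrated dot width combined with the 2-degree thumb rule gives us a pixels-per-degree estimate without any tape measure. A minimal sketch of that conversion (the function names are mine, and the 2-degree constant is the approximation from the papers cited above):

# Minimal sketch: derive a pixels-per-degree scale from the thumb calibration.
# Assumes the participant has matched a dot to their thumb held at arm's length,
# which subtends roughly 2 degrees of visual angle (Groot et al., 1994; O'Shea, 1991).

THUMB_WIDTH_DEG = 2.0  # approximate visual angle of a thumb at arm's length

def pixels_per_degree(matched_dot_px: float) -> float:
    """Pixels per degree of visual angle, from the calibrated dot width."""
    return matched_dot_px / THUMB_WIDTH_DEG

def scale_stimulus(desired_deg: float, matched_dot_px: float) -> float:
    """Size in pixels for a stimulus that should subtend desired_deg."""
    return desired_deg * pixels_per_degree(matched_dot_px)

# Example: the participant's thumb covered a 76-pixel dot,
# and we want a 5-degree-wide stimulus.
print(round(scale_stimulus(5.0, 76)))   # 190 pixels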

Addendum: What problems might these approaches encounter? What other difficulties might come about due to variations in hardware and environment?

________

Groot, C., Ortega, F., & Beltran, F.S. (1994). ‘Thumb rule of visual angle: a new confirmation’. Perceptual & Motor Skills, 78(1), 232-234.

O’Shea, R.P. (1991). ‘Thumb’s rule tested: visual angle of thumb’s width is about 2 deg’. Perception, 20(3), 415-418. doi: 10.1068/p200415.

Tovey, M., Whitney, D., & Goodale, M. (2004, February). Blind Spot Retinotopy: a control considered. Cognitive Science Research Seminar, Carleton University, Ottawa.

Online psychology experiments: calibration

There is no shortage of online experiments on the web. Psychological Research on the Web lists hundreds. As the Top Ten Online Psychology Experiments points out, it’s a little hard to assess the validity of these results because of variations in the speed of the hardware. They note that we also don’t know who is taking these tests, or whether they have understood the instructions properly.

The open source stimulus presentation packages for desktops I’ve programmed with (PsyScript, VisionEgg) advertise impressive temporal accuracy for output. (See, for instance, the Appendix in Bates & D’Oliveiro (2003).) An important question is: how would you verify such assertions for yourself? How would you make sure your experiment software is calibrated properly? When I raised the issue of calibration with my engineer friend Bob Erickson, he suggested that an oscilloscope connected to a light-sensitive diode held up to the screen, similar to what Bates & D’Oliveiro describe, would be the best way to check whether screen displays last as long as you expect them to.

In the online space, we can’t expect subjects to employ oscilloscopes. When I discussed the issue of non-standard hardware with Jim McGinley, he showed me the video calibration tests used by Rock Band, in which a metronome-like bar swings back and forth, and the user is invited to strum in time with the metronome. The metronome may not be the right approach, but something like this, where the user attempts to hit the space bar at the same time as a phenomenon on the screen, seems like it would be on the right track.
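As a rough sketch of how such a test might be scored, suppose the presentation layer can log the onset time of each on-screen event and the time of each space-bar press on the same clock. The systematic part of the offset estimates the lag on that machine (mixed with the user’s anticipation error), and its spread estimates the jitter. The numbers and function names below are illustrative only:

# Minimal sketch: score a "hit the space bar in time with the screen" calibration.
# Assumes we have logged onset times for a series of on-screen beats and the
# participant's keypress times (both in milliseconds, on the same clock).

from statistics import mean, stdev

def timing_offsets(beat_times_ms, press_times_ms):
    """Offset of each keypress from the nearest beat (positive = press was late)."""
    return [press - min(beat_times_ms, key=lambda b: abs(b - press))
            for press in press_times_ms]

def summarize(beat_times_ms, press_times_ms):
    offsets = timing_offsets(beat_times_ms, press_times_ms)
    return {"mean_offset_ms": mean(offsets),   # systematic lag on this machine/user
            "jitter_ms": stdev(offsets)}       # variability of that lag

beats   = [0, 1000, 2000, 3000, 4000]
presses = [45, 1052, 2038, 3061, 4049]
print(summarize(beats, presses))   # mean offset 49 ms, jitter about 8.5 ms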

Jim tells me that Flash is supposed to run at 60 frames a second. This means a temporal resolution of 16.66 ms, at least for output, which is plenty good enough for a lot of psychology experiments. This says nothing about input. The real question, however, is how much temporal variability would be introduced by other applications running on the same machine. For some experiments, having a particular stimulus display for a precise number of milliseconds is crucial.

Any thoughts on output calibration, especially for online experiments, would be welcome.

_____

Bates, T.C., & D’Oliveiro, L. (2003). ‘PsyScript: A Macintosh Application for Scripting Experiments’. Behavior Research Methods, 35, 565-576.

Straw, A.D. (2008). ‘Vision Egg: An Open-Source Library for Realtime Visual Stimulus Generation’. Frontiers in Neuroinformatics. doi: 10.3389/neuro.11.004.2008.