Open source neurophysiology kit

TED, a conference famous for freely sharing its talks, launched an initiative this year to share online lessons: TED-Ed. The idea of TED-Ed is to pair gifted educators with inspiring animators to produce short lessons on subjects designed to engage schoolchildren.

As with all TED videos, these are free to share under a Creative Commons license.

The first TED-Ed lesson introduced the SpikerBox, an open source kit specifically designed to give students hands-on experience with neural recording. In this case, recording the neural activity in the legs of cockroaches.


The SpikerBox is a product of Backyard Brains, founded by Greg Gage and Tim Marzullo and joined by an enthusiastic group of fellow developers.

The SpikerBox comes either pre-assembled or in kit form, and allows for easy recording of neural activity using Audacity (open source software for recording audio). It can also be hooked up to a free iPhone/iPod app, and there is an Android version as well. The newer EMG SpikerBox (EMG = electromyogram) enables users to measure the electrical activity of their own muscles.

The Backyard Brains wiki includes a number of experiments that can be conducted with the SpikerBox, as well as a library of raw spike recordings that you can analyse, plus tools for data analysis. Intriguingly, they have also opened up their finances for all to inspect.

Three other Backyard Brains devices are in beta: a platform for turning a smartphone into a microscope (SmartScope), a device for (briefly) controlling the left-right movements of a cockroach (RoboRoach), and a micromanipulator for precisely placing electrodes in a cockroach brain.

Backyard Brains is currently developing a module for the EMG SpikerBox that will measure human reaction time using muscle contractions.

What other intriguing hardware have you seen emerging in the DIY cognitive experiment space?

(For instance, Chip Epstein makes an interesting entry into the open source neuroscience space, with a set of plans for Homebrew Do-it-yourself EEG, EKG, and EMG.)

Writing…

Open source cognitive science is on hiatus while your humble scribe finishes his dissertation. Grin. I'll be back with lots of new ideas in January 2011.

Open Desks

I noticed this nifty film in which scientists share their workspaces:

Portrait Of A Scientist – Scientist and Their Desks from Imagine Science Films on Vimeo.

What does your desk look like? What unique features inspire you, make you more productive, or mark off your territory as a cognitive scientist?

Open data for online psychology experiments

The research activities in cognitive science are usually limited to a single lab, or to a small group of collaborators. But they need not be. A key problem in encouraging wider collaboration is finding ways to share data from human subjects without compromising the privacy and confidentiality of the participants (DeAngelis, 2004), or the legal and ethical norms designed to protect that privacy.

Bullock, quoted in DeAngelis (2004), notes other difficulties: psychology traditionally has not built large systems for storing or sharing large data sets, and has not developed the culture of data sharing seen in other disciplines.

The challenge in brief: what would it take to build a general-purpose, internet-based experiment presentation and data collection system in which the resulting data is automatically and anonymously shared after a suitable embargo period? This could be done by individual researchers self-archiving data, or by sending the data to a repository. One advantage of such an automated data publishing system would be a reduction in the cost of publishing properly formatted raw psychological data.

This is a practical project, and building it is entirely feasible with today's technology. Its charm is that three issues need to be worked out for it to succeed.

The first issue has to do with the internet: how do we offer reasonable guarantees that the data collected remains anonymous, and is not compromised in transmission back to the experimental server or on the server itself? Requirements vary by institution, but the main one for making human subjects data publicly available is that the data be anonymous: the participant's name, and any other personally identifying information, must not be stored with the data.

This simple practice of separating data from identifying information is complicated by the fact that the data is sent over the internet, where it can be intercepted and where servers can be compromised. Solutions could include strong encryption or IP anonymization. They could also include discarding potentially identifying data on the client machine before it is even transmitted, or discarding some of the received information after summary calculations are made, but before it is stored.
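As a concrete illustration of the last option, here is a minimal sketch in Python, not tied to any particular experiment package, of stripping identifying fields on the client before a trial record is transmitted. The field names, the whitelist, and the session-identifier scheme are all illustrative assumptions.

```python
# Illustrative sketch: discard potentially identifying fields on the client
# before a trial record is sent to the experiment server.

import json
import uuid

# Only these (hypothetical) fields ever leave the participant's machine.
FIELDS_TO_TRANSMIT = {"condition", "trial", "stimulus", "response", "rt_ms"}

def anonymize(trial_record: dict, session_id: str) -> str:
    """Drop everything not on the whitelist and attach a random session identifier."""
    clean = {k: v for k, v in trial_record.items() if k in FIELDS_TO_TRANSMIT}
    clean["session"] = session_id  # random, not derived from any personal information
    return json.dumps(clean)

if __name__ == "__main__":
    session_id = uuid.uuid4().hex  # generated once per session, never stored with a name
    raw = {"name": "A. Participant", "email": "a@example.org",
           "condition": "masked", "trial": 12, "stimulus": "DOG",
           "response": "word", "rt_ms": 517}
    print(anonymize(raw, session_id))  # name and email are never transmitted
```

The same whitelist approach could be applied again on the server before storage, so that anything identifying that slips through is discarded after summary calculations are made.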

A second issue is societal: under what circumstances do we have the right to share human subjects data? The answer can vary by institution, and it involves aligning the policies and interests of different stakeholders. University oversight on these matters can include ethics committees, privacy officers, and legal departments. There may also be assertions of intellectual property by the institution or by the body funding the research. One or all of these groups might need to be consulted, depending on the location of the researcher and/or the repository. This is an issue that would need to be explored carefully and sensitively with the relevant stakeholders at the repository institution.

A third issue is experimental. There is no shortage of online experiments on the web; Psychological Research on the Web lists hundreds. As the Top Ten Online Psychology Experiments points out, it's a little hard to assess the validity of these results because of variations in the speed of the hardware. (The same post notes that we also don't know who is taking these tests, or whether they have understood the instructions properly.) How can we offer reasonable guarantees that data collected on different hardware will be valid? This can include timing accuracy for both input and output (Plant & Turner, 2009), and adjusting stimuli to ensure similarity of size, colour, or volume.

An embargo period is important for three reasons: (a) to protect participant privacy, (b) to protect the integrity of the experiment, and (c) to protect, to the extent they desire, the researcher’s work. In particular,

(a) it is important not to release data immediately upon collection, because anyone who knows when a particular person took part might be able to trace the data back to them. A standard (known) embargo period would have the same weakness. A better approach might be a randomized embargo period, or releasing everything collected during a given period together in one batch, so that individual collection dates are obscured. (A rough sketch of the randomized approach follows this list.)

(b) when online experiments are run, the data are usually not made available until data collection is complete, so that a potential participant cannot look at the results and be influenced by them.

(c) some researchers may not wish to release their data until they have published, but would be happy to release the data afterwards.
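To make (a) concrete, here is a minimal sketch of how a randomized embargo date could be computed for each session, so that the publication date cannot be mapped back to the day a particular person took part. The embargo lengths are illustrative assumptions, not recommendations.

```python
# Illustrative sketch of a randomized embargo: collection date plus a base
# embargo plus a per-session random offset.

import random
from datetime import date, timedelta

BASE_EMBARGO_DAYS = 180      # minimum time before any data is released (illustrative)
MAX_RANDOM_EXTRA_DAYS = 90   # random extra delay, drawn separately for every session

def release_date(collected_on: date) -> date:
    """Compute when an anonymized session may be published."""
    jitter = random.randint(0, MAX_RANDOM_EXTRA_DAYS)
    return collected_on + timedelta(days=BASE_EMBARGO_DAYS + jitter)

if __name__ == "__main__":
    # A session collected on 1 September 2010 would be released sometime
    # between late February and late May 2011.
    print(release_date(date(2010, 9, 1)))
```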

The online-experiment-runner could, of course, be made open source, as could the experiments that run on it, but these are separate issues.

Does my account include problems that don't exist in practice? Are there places where it is actually more complicated than I've sketched out here? Are there examples of open data collected on the internet? Do you know of other references on the ethics and practice of making human subjects data available in various contexts (for psychology or otherwise)?

Acknowledgements: thanks to Terry Stewart for many illuminating conversations on open models and open modelling, and to the folks at ISPOC for their model repository. Thanks also for stimulating questions and feedback to Michael Nielsen, Greg Wilson, Jon Udell, Andre and Carlene Paquette, Jim McGinley, and James Redekop.

______

DeAngelis, T. (2004). 'Data sharing: a different animal'. APA Monitor on Psychology, February 2004, 35: 2.

Plant, RR & Turner, G. (2009). Millisecond precision psychological research in a world of commodity computers: New hardware, new problems? Behavior Research Methods, 41, 598-614. doi: 10.3758/BRM.41.3.598. [Thanks to Mike Lawrence for pointing me at this]

Cognitive Science Dictionaries and Open Access

There are some quick reference applications in the cognitive sciences for which Wikipedia is not yet fully adequate. I notice this especially when I’m trying to understand a difficult paper. Usually, one of the reasons a paper is difficult to understand is because it contains unfamiliar terminology.

My own experience, and I suspect that this will be a fairly uncontroversial claim, is that the level of technical coverage provided in paper-only specialist dictionaries is currently greater than that provided by either Wikipedia or most other free online reference sources. Three paper reference works I find particularly helpful are the APA Dictionary of Psychology (which seems to be one of the most extensive available), David Crystal’s A Dictionary of Linguistics and Phonetics, and the Dictionary of Cell and Molecular Biology.

When I am fortunate enough to be reading a paper in the library, what I sometimes like to do is pull from the shelves not just one, but several dictionaries relating to the subject at hand. Each time I come to a word I don’t understand, I will look up that term in all of the dictionaries. In many cases, there are pieces missing from one definition that are neatly filled in by another.

If I am reading a psychology paper, I will assemble several dictionaries of psychological terms. In reading a linguistics paper, I will take down multiple dictionaries of linguistic terms. And so on.

There are a few problems with this approach. First, if I am using a set of dictionaries, no one else in the library can use them. Second, I must actually be sitting in a reference library, with a stack of dictionaries in front of me, to do my reading. Third, it is time-consuming to look up each of the definitions one by one.

Single library copies are less of a problem as publishers create online editions of their reference works, and libraries subscribe to them. My institution, for instance, allows me to access the Routledge Encyclopedia of Philosophy (REP) and the MIT Encyclopedia of Cognitive Science (MITECS).

This is useful to the academic researcher, but not all institutions subscribe to all publications, and not all researchers have institutional access. Even when they do, it does not fully solve the third problem: it is still time-consuming to look up the same term in several dictionaries, one after another.

There are a lot of metacrawler search engines out there, with various levels of customizability. None that I know of is flexible enough to work with your library's proxy server and query multiple subscribed dictionaries at once.
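To make the idea concrete, here is a rough sketch of the kind of lookup script I have in mind. It is not an existing tool: the search-URL templates are placeholders, and a real version would authenticate through the library's proxy server and parse each site's results properly.

```python
# Illustrative sketch: send the same term to several dictionary search pages
# at once and collect whatever comes back.

import concurrent.futures
import urllib.parse
import urllib.request

# Hypothetical search-URL templates for subscribed or open reference works.
DICTIONARY_URLS = [
    "https://dictionary-one.example.org/search?q={term}",
    "https://dictionary-two.example.org/lookup?term={term}",
]

def look_up(url_template: str, term: str) -> str:
    url = url_template.format(term=urllib.parse.quote(term))
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")

def look_up_everywhere(term: str) -> dict:
    """Query every dictionary concurrently; return raw result pages keyed by URL."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(look_up, u, term): u for u in DICTIONARY_URLS}
        return {futures[f]: f.result() for f in concurrent.futures.as_completed(futures)}

if __name__ == "__main__":
    pages = look_up_everywhere("phonotactics")
    for url, page in pages.items():
        print(url, len(page), "characters returned")
```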

I started wondering: when it comes to reference works in the cognitive sciences, are there strong open access alternatives? Is it possible to produce a reference work which approaches the depth and specificity of those mentioned above, in a cost-effective, open access format?

It is possible, because at least one example already exists.

A model reference work in this space is the Stanford Encyclopedia of Philosophy (SEP). It's peer-reviewed, revised quarterly, and each entry is maintained by a single expert or team of experts. Reading the SEP's Publishing Model is highly instructive. They have carefully automated a great deal of their workflow to keep costs down. Designated areas for authors, subject editors, and the Principal Editor each have functionality designed to make that role easier to perform. Reminders, quarterly archiving (to create unchanging, citable references), cross-referencing, and link-checking are all automatic, reducing the editorial burden. This approach, which the SEP bills as a scholarly dynamic reference work, seems to be unique to the Stanford Encyclopedia of Philosophy, but could clearly be applied elsewhere.

What other Open Access reference works exist in the cognitive sciences? Are there other Open Access reference works with the depth of MITECS or the range of the APA dictionary? Could the scholarly dynamic approach (peer-reviewed, single-author, regular fixed editions) work across the cognitive sciences? What other models exist that might be tried?

Lastly, are there tools I have missed that might be useful for looking up multiple definitions simultaneously?

I welcome your thoughts.

______

'About the Stanford Encyclopedia of Philosophy'. The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). URL = <http://plato.stanford.edu/about.html#pubmod>.

Craig, E. (2003). Routledge encyclopedia of philosophy online. London: Routledge. http://www.rep.routledge.com/index.html.

Crystal, D., & Crystal, D. (2008). A dictionary of linguistics and phonetics. Malden, MA: Blackwell Pub.

Lackie, J. M., Dow, J. A. T., & Blackshaw, S. E. (1999). The dictionary of cell and molecular biology. San Diego: Academic Press.

Wilson, RA. & Kiel FC (eds.). MIT encyclopedia of the cognitive sciences. (1999).  Cambridge, Mass: MIT Press.

Kinds of openness in cognitive science

The response to the first few days of open source cognitive science has been gratifying. There are fascinating problems and technical challenges in this area. It is a treat to be able to think about them together with interested colleagues.

Cognitive science could benefit by more fully adopting some of the existing forms of openness:

  1. Open access, where publications are made available on the web without charge (cognitive scientist Stevan Harnad is a champion, with CogPrints being cognitive science’s answer to the Physics pre-print arXiv).

  2. Open courseware is a movement that invites educators to make their course material directly available on the web. (An example is Tutis Vilis' completely Flash-animated course, The Physiology of the Senses: Transformations for Perception and Action.)

  3. Open source hardware and software, where plans for apparatus and tools for analysis are made freely and openly available (Praat, for example, is a GNU GPL licensed signal analysis package which can be used for analyzing speech, generating auditory stimuli, and doing speech synthesis).

  4. Open stimuli, where stimulus sets or corpora are made available for use in replications or new experiments. (See the Psychological Image Collection at Stirling (PICS, http://pics.psych.stir.ac.uk/), or the Irvine Phonotactic Online Dictionary (Vaden, Hickok, and Halpin, 2005).)

  5. Open workflows, in which researchers can freely share chains of experimentation, analysis, and visualization. (Talkoot is a collaborative workflow initiative for Earth Science. A quick search of the workflow custom search engine on Google reveals no workflows in the cognitive sciences).

  6. Open data, where individual researchers release their datasets, either as the data is collected, upon publication, or after a suitable embargo period. (An example with a rich dataset would be The Linguistic Atlas of the Iberian Peninsula (ALPI)).

  7. Open model repositories, where computational models from published papers can be centralized. (ISPOC is an open modelling initiative which includes cognitive models).

  8. Open research, where open lab notebooks are used to describe the ongoing details of a particular line of research.

Cognitive science can include up to six areas of research (psychology, neuroscience, linguistics, computer science, philosophy, and sometimes anthropology). One could visualize an 8×6 matrix, kinds of openness by discipline, serving as a preliminary grid for exploration.

Did I miss anything? Other great examples? Your thoughts are welcome.

_______

Boersma, Paul & Weenink, David (2009). Praat: doing phonetics by computer (Version 5.1.11) [Computer program]. Retrieved August 3rd, 2009, from http://www.praat.org/.

Vaden, K.I., Hickok, G.S., & Halpin, H.R. (2005). Irvine Phonotactic Online Dictionary, Version 1.3. [Data file]. Available from http://www.iphod.com.

Online psychology experiments: calibration 2 — size

Cognitive psychology papers that report computer-based experiments often specify the distance of the participant from the monitor, the size of the monitor, or the visual angle of a stimulus. This may prove a problem for some online experiments, where monitor sizes may vary from 13″ to 24″ and beyond. Subjects may be sitting very close to their display, as often happens with laptops, or very far from it, as often happens when the machine is connected to an HDTV. Asking a participant to measure the size of their monitor ("17-inch" screens can vary in actual size) or how far their eyes are from the screen may limit participation. It would be useful to have the equivalent of both of these measures in order to properly assess results.

Given the size of the monitor and the distance to the screen, many experiments could compensate by resizing their stimuli. If these measurements were available, the presentation package could be designed so that any stimulus presentation can be cleanly rescaled. Writing the stimulus presentation package to use relative, rather than absolute, coordinates would obviously assist with this.

How are these measures to be obtained, if not with time and a tape measure? The classic result that the thumb appears to be about 2 degrees of visual angle wide when held at arm's length could be useful here (Groot, Ortega, & Beltran, 1994; O'Shea, 1991).

We could ask the participant to hold their thumb at arm's length so that it covers a single dot on the screen, and to use their free hand to adjust the dot's size: starting with a dot smaller than their thumb, they enlarge it until its edges just become visible around the thumb. Cursor keys work well for this. The rest of the stimuli can then be scaled to fit. I programmed something similar using a plastic keypad in an fMRI experiment (Tovey, Whitney, & Goodale, 2004), and it works well.

A second possibility would be to ask the subject to hold their thumb out at arm's length so that it just covers one of, say, fifteen dots on the screen, each of a slightly different size. The only thing the participant would then have to do is click on the dot closest to their thumb's size.
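To make the arithmetic concrete, here is a minimal sketch, assuming the 2-degree thumb rule, of how either calibration result (the diameter, in pixels, of the dot the thumb just covers) could be converted into a pixels-per-degree factor and used to size stimuli in degrees rather than pixels. The function names and numbers are illustrative, not taken from any existing package.

```python
# Illustrative sketch: convert a thumb-calibrated dot size into a pixels-per-degree
# scale factor, then specify stimulus sizes in degrees of visual angle.
# Assumes the thumb at arm's length subtends roughly 2 degrees
# (Groot, Ortega, & Beltran, 1994; O'Shea, 1991).

THUMB_DEG = 2.0  # approximate visual angle of a thumb at arm's length

def px_per_degree(dot_diameter_px: float) -> float:
    """Pixels per degree, given the dot diameter the participant's thumb just covered."""
    return dot_diameter_px / THUMB_DEG

def deg_to_px(size_deg: float, dot_diameter_px: float) -> int:
    """Convert a stimulus size specified in degrees to pixels for this display."""
    return round(size_deg * px_per_degree(dot_diameter_px))

if __name__ == "__main__":
    # Example: the participant settled on a 64-pixel dot, and we want a 5-degree stimulus.
    calibrated_dot_px = 64.0
    print(px_per_degree(calibrated_dot_px))   # 32.0 px/deg
    print(deg_to_px(5.0, calibrated_dot_px))  # 160 px
```

Specifying stimuli in degrees like this is also what makes the relative-coordinate approach mentioned above straightforward: the same experiment script runs unchanged on any display once the calibration factor is known.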

Addendum: What problems might these approaches encounter? What other difficulties might come about due to variations in hardware and environment?

________

Groot C, Ortega F, Beltran FS. (1994). ‘Thumb rule of visual angle: a new confirmation‘. Perceptual & Motor Skills. 78(1):232-4.

O’Shea, RP. (1991). ‘Thumb’s rule tested: visual angle of thumb’s width is about 2 deg‘. Perception. 20(3) 415 – 418 doi:10.1068/p200415

Tovey, M., Whitney, D., & Goodale, M. (2004, February). Blind Spot Retinotopy: a control considered. Cognitive Science Research Seminar, Carleton University, Ottawa.

Online psychology experiments: calibration

There is no shortage of online experiments on the web. Psychological Research on the Web lists hundreds. As the Top Ten Online Psychology Experiments points out, it's a little hard to assess the validity of these results because of variations in the speed of the hardware. They note that we also don't know who is taking these tests, or whether they have understood the instructions properly.

The open source stimulus presentation packages for desktops I've programmed with (PsyScript, VisionEgg) advertise impressive temporal accuracy for output. (See, for instance, the appendix in Bates & D'Oliveiro (2003).) An important question is: how would you verify such assertions for yourself? How would you make sure your experiment software is calibrated properly? When I raised the issue of calibration with my engineer friend Bob Erickson, he suggested that an oscilloscope connected to a light-sensitive diode held up to the screen, similar to what Bates & D'Oliveiro describe, would be the best way to check whether screen displays last as long as you expect them to.

In the online space, we can’t expect subjects to employ oscilloscopes. When I discussed the issue of non-standard hardware with Jim McGinley, he showed me the video calibration tests used by Rock Band, in which a metronome-like bar swings back and forth, and the user is invited to strum in time with the metronome. The metronome may not be the right approach, but something like this, where the user attempts to hit the space bar at the same time as a phenomenon on the screen, seems like it would be on the right track.

Jim tells me that Flash is supposed to run at 60 frames per second. This means a temporal resolution of roughly 16.7 ms, at least for output, which is plenty good enough for a lot of psychology experiments. It says nothing about input. The real question, however, is how much temporal variability would be introduced by other applications running on the same machine. For some experiments, having a particular stimulus display for a precise number of milliseconds is crucial.
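As a rough illustration of how one might probe that variability without an oscilloscope, the sketch below (plain Python, not Flash, and not part of any package mentioned here) repeatedly sleeps for one nominal 60 Hz frame and records how far each wake-up deviates from the 16.7 ms target. It measures only operating-system scheduling jitter, not what the monitor actually displays, so it complements rather than replaces the photodiode-and-oscilloscope check.

```python
# Illustrative sketch: estimate scheduling jitter around a nominal 60 Hz frame.

import time

FRAME_MS = 1000.0 / 60.0  # nominal frame duration at 60 frames per second

def measure_jitter(n_frames: int = 300) -> list[float]:
    """Return the deviation, in ms, of each simulated frame from the 16.7 ms target."""
    deviations = []
    for _ in range(n_frames):
        start = time.perf_counter()
        time.sleep(FRAME_MS / 1000.0)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        deviations.append(elapsed_ms - FRAME_MS)
    return deviations

if __name__ == "__main__":
    devs = measure_jitter()
    print(f"mean deviation:  {sum(devs) / len(devs):.2f} ms")
    print(f"worst deviation: {max(devs):.2f} ms")
```

Large or highly variable deviations on a participant's machine would be a warning sign that millisecond-critical presentation is unlikely to be reliable there.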

Any thoughts on output calibration, especially for online experiments, would be welcome.

_____

Bates, TC & D’Oliveiro, L. (2003). ‘PsyScript: A Macintosh Application for Scripting Experiments.’ Behaviour Research Methods 35: 565-576.

Straw, Andrew D. (2008). 'Vision Egg: An Open-Source Library for Realtime Visual Stimulus Generation.' Frontiers in Neuroinformatics. doi: 10.3389/neuro.11.004.2008.

Tools for Psychology and Neuroscience

Open source tools make new options available for designing experiments, doing analysis, and writing papers. Already, we can see hardware becoming available for low-cost experimentation. There is an OpenEEG project. There are open source eye tracking tools for webcams. Stimulus packages like VisionEgg can be used to collect reaction times or to send precise timing signals to fMRI scanners. Neurolens is a free functional neuroimage analysis tool.

Cheaper hardware and software make it easier for students to practice techniques in undergraduate labs, and easier for graduate students to try new ideas that might otherwise be cost-prohibitive.

Results can be collected and annotated using personal wiki lab notebook programs like Garrett Lisi’s deferentialgeometry.org. Although some people, like Lisi, share their notebooks on the web (a practice known as open notebook science), it is not necessary to share wiki notebooks with anyone to receive substantial benefit from them. Wiki notebooks are an aid to the working researcher because they can be used to record methods, references and stimuli in much more detail than the published paper can afford. Lab notebooks, significantly, can include pointers to all of the raw data, together with each transformation along the chain of data provenance. This inspires trust in the analysis, and makes replication easier. Lab notebooks can also be a place to make a record of the commands that were used to generate tables and graphs in languages like R.

R is an open source statistics package. It is scriptable, and can be used in place of SPSS (Revelle, 2008; Baron & Li, 2007). It is multi-platform, can be freely shared with collaborators, and can import and export data in a CSV form that is readable by other statistics packages, spreadsheets, and graphing packages.

R code can be embedded directly into a LaTeX or OpenOffice document using a utility called Sweave. Sweave can be used with LaTeX to automatically format documents in APA style (Zahn, 2008). With Sweave, when you see a graph or table in a paper, it’s always up to date, generated on the fly from the original R code when the PDF is generated. Including the LaTeX along with the PDF becomes a form of reproducible research, rooted in Donald Knuth’s idea of literate programming. When you want to know in detail how the analysis was done, you need look no further than the source text of the paper itself.

_____

Baron, J. & Li, Y. (9 Nov 2007). ‘Notes on the use of R for psychology experiments and questionnaires.’
http://www.psych.upenn.edu/~baron/rpsych/rpsych.html

Revelle, W. (25 May 2008). ‘Using R for Psychological Research. A simple guide to an elegant package.’ http://www.personality-project.org/R/

Zahn, Ista. (2008). ‘Learning to Sweave in APA Style.’ The PracTeX Journal. http://www.tug.org/pracjourn/2008-1/zahn/