
Sunday, March 13, 2016

EEG feature of fear downregulation

How do you identify fear immediately? How do you classify fear from EEG? How would you quantify fear? These questions occupied me when I first started researching the intersection of emotions, affective physiological responses, and EEG signal processing. I imagined a biofeedback computer game / therapy program that helps people conquer their fears by rewarding them in the process. As time went on, my empirical knowledge and experimental data grew, which changed the questions but not the vision. Here I'd like to outline how it all happened, and what the results and prospects of the research are.

Experiment

While reading papers on EEG responses to fear stimulation, I was quite annoyed that the stimuli used in previous experiments were mostly IAPS images or video clips from horror movies (yeah, The Shining) taken completely out of context. To this day I believe that such stimuli cannot evoke emotional responses that are strong enough and that last longer than a couple of seconds, and thus cannot provide insight into the big picture of fear regulation.

The experimental design hasn't changed much since my previous measurements, but this time gameplay and webcam videos were also recorded in addition to the EEG, heart rate (HR), and galvanic skin response (GSR) signals. First open-eye, then closed-eye measurements were taken for Individual Alpha Frequency estimation. Participants then played the "daylight" version of a computer FPS game as the baseline measurement; only the "night" version contained fear-inducing stimuli.
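As a rough illustration of the Individual Alpha Frequency step: the dominant alpha peak can be read off a power spectrum of the closed-eye recording. The sketch below is not the pipeline I actually used, just a minimal version assuming a hypothetical single-channel signal eeg sampled at fs Hz.

import numpy as np
from scipy.signal import welch

def estimate_iaf(eeg, fs, band=(7.0, 13.0)):
    # Welch power spectral density with roughly 4-second windows
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    # Restrict to the alpha band and return the frequency of the largest peak
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(psd[mask])]

# e.g. iaf = estimate_iaf(closed_eye_channel, fs=256)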

Thursday, August 14, 2014

Parameter fitting tool for MOOSE

During my last week in India, I contributed to the MOOSE project by providing an interface to a parameter fitting tool. Now the parameters of MOOSE models can be searched with a small amount of Python scripting.

When you have experimental data on a phenomenon and you intend to create a computational model of it, you usually need to apply parameter fitting/searching techniques. These methods help you determine those parameters of your model for which you have no reference values. Parameter fitting of MOOSE models is accomplished through Optimizer [1], a parameter fitting tool developed for neural simulations, which I came across by pure chance.
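Optimizer itself is driven through its GUI and configuration files rather than the snippet below, but the underlying idea of parameter search can be sketched with a generic least-squares fit. Everything here (the toy exponential "model", the noisy "recorded" trace, the single parameter tau) is a made-up placeholder, not MOOSE or Optimizer code.

import numpy as np
from scipy.optimize import minimize

# Hypothetical recorded trace we want the model to reproduce
t = np.linspace(0, 100, 500)                              # ms
recorded = np.exp(-t / 18.0) + 0.02 * np.random.randn(t.size)

def model(tau):
    # Toy stand-in for running a simulation with time constant tau
    return np.exp(-t / tau)

def cost(params):
    # Mean squared error between simulated and recorded traces
    (tau,) = params
    return np.mean((model(tau) - recorded) ** 2)

result = minimize(cost, x0=[10.0], bounds=[(1.0, 100.0)], method="L-BFGS-B")
print("fitted tau:", result.x[0])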

Results 'layer' from the Optimizer tutorial. Optimizer has a simple GUI to guide the user through a successful parameter fitting.

[1] P. Friedrich, M. Vella, A. I. Gulyás, T. F. Freund, and S. Káli, "A flexible, interactive software tool for fitting the parameters of neuronal models," Front. Neuroinform., vol. 8, p. 63, 2014.

Friday, August 8, 2014

Profiling Python C extensions

During my collaboration with Upinder S. Bhalla's lab, I made it possible to profile MOOSE C++ code through Python scripting, which means one can profile the native functions of MOOSE without writing any C++ code.

For profiling I used gperftools: I wrapped three of its functions, namely ProfilerStart, ProfilerStop and ProfilerFlush, in Python functions using Cython. MOOSE also needs to be recompiled with the -lprofiler flag. After that, one can simply call the wrapper functions for ProfilerStart and ProfilerStop before and after the Python code that calls the C extensions one wants to profile. Then pprof can be used to investigate the profiling results.

Howto

To profile Python C extensions, the Cython, gperftools and libc6-prof packages need to be installed first. If you'd like a visual representation of your profiling results, it is worth installing kcachegrind as well.

The simplest way to get the wrapper done is to write a Cython script wrapping the gperftools functions and a Python script that compiles the wrapped functions and links them against the gperftools library.

Let's call the Cython script gperftools_wrapped.pyx:

# Declare the gperftools C API we want to expose
cdef extern from "gperftools/profiler.h":
    int ProfilerStart(char* fname)
    void ProfilerStop()
    void ProfilerFlush()

def ProfStart(fname):
    # gperftools expects a C string: encode a Python str to bytes first
    if isinstance(fname, str):
        fname = fname.encode()
    return ProfilerStart(fname)

def ProfStop():
    ProfilerStop()

def ProfFlush():
    ProfilerFlush()

Here we define a Python function for each gperftools function that we wrap. More functions can be wrapped for more customized profiling (see ProfilerStartWithOptions()).
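The build script mentioned above is not reproduced here, so the following setup.py is only a minimal sketch of how the wrapper might be compiled and linked against libprofiler using Cython's standard build machinery; the file and module names match the ones used above.

from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

ext = Extension(
    "gperftools_wrapped",
    sources=["gperftools_wrapped.pyx"],
    libraries=["profiler"],        # link against gperftools' libprofiler (-lprofiler)
)

setup(ext_modules=cythonize([ext]))

Once built in place (for example with python setup.py build_ext --inplace), the wrapper can be switched on and off around whatever Python code drives the C extensions; the calls between start and stop below are placeholders for your own simulation code.

import gperftools_wrapped as gw

gw.ProfStart("moose_profile.out")   # start writing the CPU profile to this file
# ... run the MOOSE / C-extension calls you want to profile here ...
gw.ProfStop()

The resulting profile file can then be explored with pprof, for instance by converting it to a callgrind file and opening it in kcachegrind.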

Model of the basal amygdala - computational Pavlovian fear conditioning

After the first week of the CAMP course we had to decide which project we wanted to do the following week. I have always been interested in how fear and anxiety arise in the brain, so in the end I decided to take on a joint project with my course fellow Cristiano Köhler on modeling the basal amygdala - the part of the amygdala involved in the formation of fear and extinction memories. The main task was to reproduce a model of the basal amygdala and investigate its behavior. Our helpful supervisor was Arvind Kumar, one of the authors of the paper [1] that describes the model in the first place.

Objectives

First, we reproduced the basal amygdala model of Vlachos et al., 2011. The model's task is to remember a conditioned stimulus (CS, e.g. a flute tone) paired with an unconditioned stimulus (US, e.g. an electric shock) - so after Pavlovian fear conditioning, the model treats the same conditioned stimulus (without the shock) as a cue of fear. However, further presentations of the CS alone result in a decline of this conditioned response - a process called fear extinction. Another phenomenon is fear renewal, which demonstrates the context dependency of fear conditioning and extinction: after fear conditioning in context A (CTXA) and extinction in context B (CTXB), a repeated CS presentation in context A immediately brings back the fear memory despite the preceding extinction period.

Model

The model aims to reproduce the functions of the basal amygdala. It is a large-scale spiking neural network implemented in Python, using the Brian library. The model consists of 4000 leaky integrate-and-fire neurons - 3600 excitatory and 400 inhibitory. Three kinds of input are given to the system as Poisson spike trains: (1) CS-US pairs, (2) context information to a subset of the neurons, and (3) background noise to all neurons. Plasticity is established in the connections between the inputs (CS-US, context information) and the excitatory neurons. Further plasticity is introduced in the synapses from inhibitory to excitatory neurons to investigate the effects it may bring about. A rough skeleton of the network is sketched after the figure below.

Distribution of inputs in the spiking neural network model. CS-US input is provided to all neurons, while CTX input is fed only to a subpopulation of excitatory neurons - from [1].
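For a rough idea of what such a network skeleton looks like, here is a minimal sketch in Brian 2 syntax (the original model was written for the earlier Brian 1). The population sizes follow the paper, but every time constant, rate and weight below is an illustrative placeholder rather than a published parameter, and the plasticity rules are omitted.

from brian2 import *

N_exc, N_inh = 3600, 400           # population sizes from the paper
tau = 20*ms                        # placeholder membrane time constant
eqs = 'dv/dt = -v / tau : 1 (unless refractory)'

exc = NeuronGroup(N_exc, eqs, threshold='v > 1', reset='v = 0',
                  refractory=2*ms, method='exact')
inh = NeuronGroup(N_inh, eqs, threshold='v > 1', reset='v = 0',
                  refractory=2*ms, method='exact')

# (3) background noise to every neuron as independent Poisson inputs
noise_e = PoissonInput(exc, 'v', N=100, rate=5*Hz, weight=0.05)
noise_i = PoissonInput(inh, 'v', N=100, rate=5*Hz, weight=0.05)

# (1) CS-US drive and (2) context drive, modelled as Poisson groups
cs_us = PoissonGroup(200, rates=0*Hz)   # switched on during a trial
ctx   = PoissonGroup(200, rates=0*Hz)

# CS-US onto excitatory cells (plastic in the full model; also reaches inh there)
syn_cs_exc = Synapses(cs_us, exc, 'w : 1', on_pre='v_post += w')
syn_cs_exc.connect(p=0.1)
syn_cs_exc.w = 0.1

# Context input only to a subpopulation of excitatory cells
syn_ctx_exc = Synapses(ctx, exc[:1800], 'w : 1', on_pre='v_post += w')
syn_ctx_exc.connect(p=0.1)
syn_ctx_exc.w = 0.1

# Inhibitory-to-excitatory synapses (also made plastic in the extended model)
syn_inh_exc = Synapses(inh, exc, on_pre='v_post -= 0.2')
syn_inh_exc.connect(p=0.1)

run(100*ms)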

Thursday, July 24, 2014

Computational Approaches to Memory and Plasticity - a great insight into neuroscience

I had the opportunity, mostly thanks to Dr. Upinder S. Bhalla and Subhasis Ray, to attend a two-week course on neuroscience, more specifically on how memory is formed and what methods exist to computationally model this process. I'm staying in India at NCBS, where the course took place, for 3 more weeks; at the moment I'm working in Upi's lab to optimize MOOSE by implementing some GPGPU code.

Photo taken on the campus of NCBS - such greenery.