Thursday, November 27, 2014

Tóth's food serving game

Recently I've dived into game theory by attending a course. I've always been haunted by a dilemma while serving food to my guests at my apartment, and now I'd like to introduce this problem in the formal terms of game theory - just for fun :D

Let's have a grill party G(N, f), where N is the set of hungry participants (|N| is the number of them) and f is the amount of food available, divided among the plates p1, p2, ..., p|N|. Let h ∈ N be the host of the party and g1, g2, ..., g|N|-1 ∈ N the guests. h serves the food for all the participants, including h, and divides the amount f among the plates; being a kind host, h lets the guests choose which plate they want to take, and in the end h takes the last one left. The players' actions are as follows (Ax = {a1, a2, ..., an} denotes the set of actions player x is able to take): h's action is a particular division of f among the |N| plates, while each guest's action is the choice of one of the plates still on the table.
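Here is a minimal Python sketch of the game dynamics (my own illustration; the greedy guest strategy is an assumption of mine, not part of the formal definition):

# Sketch of G(N, f): the host divides f among the plates, guests pick in
# turn, and the host takes the last plate. Greedy guests are assumed.
def play(division, n_guests):
    plates = list(division)            # the host's action: a division of f
    for _ in range(n_guests):
        plates.remove(max(plates))     # each guest grabs the largest plate
    return plates[0]                   # the host is left with the last one

f, n_guests = 1.0, 3
print(play([f / 4] * 4, n_guests))           # equal split: host gets 0.25
print(play([0.4, 0.3, 0.2, 0.1], n_guests))  # unequal split: host gets 0.1

Against greedy guests the host always ends up with the smallest plate, so under this assumption the equal division maximizes h's share.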

Thursday, August 14, 2014

Parameter fitting tool for MOOSE

During my last week in India, I contributed to the MOOSE project by providing an interface to a parameter fitting tool. Now the parameters of MOOSE models can be fitted with some tiny Python scripting.

When you have experimental data on a phenomenon and you intend to create a computational model of it, you usually need to apply parameter fitting/searching techniques. These methods help you determine those parameters of your model for which you have no reference values. Parameter fitting of MOOSE models is accomplished through Optimizer [1], a parameter fitting tool developed for neural simulations, which I came across by pure chance.
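To illustrate the general idea of parameter fitting (a generic sketch using scipy.optimize, not Optimizer's actual interface; the model and data here are made up):

import numpy as np
from scipy.optimize import minimize

# Hypothetical model: exponential decay with unknown amplitude and time constant
t = np.linspace(0, 1, 100)
data = 2.0 * np.exp(-t / 0.3)        # stand-in for experimental data

def cost(params):
    a, tau = params
    return np.sum((a * np.exp(-t / tau) - data) ** 2)  # squared error

fit = minimize(cost, x0=[1.0, 0.1], method="Nelder-Mead")
print(fit.x)  # should approach the "true" values [2.0, 0.3]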

Results 'layer' from the Optimizer tutorial. Optimizer has a simple GUI that guides the user to a successful parameter fit.

[1] P. Friedrich, M. Vella, A. I. Gulyás, T. F. Freund, and S. Káli, "A flexible, interactive software tool for fitting the parameters of neuronal models," Front. Neuroinform., vol. 8, p. 63, 2014.

Friday, August 8, 2014

Profiling Python C extensions

During my collaboration with Upinder S. Bhalla's lab, I made it possible to profile MOOSE C++ code through Python scripting, which means one can profile the native functions of MOOSE without writing any C++ code.

For profiling I used gperftools: I wrapped three of its functions, namely ProfilerStart, ProfilerStop and ProfilerFlush, in Python functions using Cython. MOOSE also needs to be recompiled with the -lprofiler flag. After that, one can simply call the wrapper functions for ProfilerStart and ProfilerStop before and after the Python code that calls the C extensions one is interested in profiling. Then pprof can be used to inspect the profiling results.

Howto

To profile Python C extensions, the Cython, gperftools and libc6-prof packages need to be installed first. If you'd like a visual representation of your profiling results, install kcachegrind as well.

The simplest way to get the wrapper done is to write a Cython script wrapping the gperftools functions, plus a Python script that compiles the wrapped functions and links them to the gperftools library.

Let's call the Cython script gperftools_wrapped.pyx:

# Expose the C declarations from gperftools' profiler header
cdef extern from "gperftools/profiler.h":
    int ProfilerStart(char* fname)
    void ProfilerStop()
    void ProfilerFlush()

def ProfStart(fname):
    # fname is passed on to a char*, so give it a bytes object, e.g. b"moose.prof"
    return ProfilerStart(fname)

def ProfStop():
    ProfilerStop()

def ProfFlush():
    ProfilerFlush()

Here we define a Python function for each gperftools function we wrap. More functions can be wrapped for more customized profiling (see ProfilerStartWithOptions()).
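The compile script can be a minimal setup.py along these lines (a sketch; the file names match the wrapper above, but adjust paths and flags to your system):

# setup.py - minimal sketch for building the wrapper above
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

setup(
    ext_modules=cythonize([
        Extension(
            "gperftools_wrapped",
            sources=["gperftools_wrapped.pyx"],
            libraries=["profiler"],  # link against gperftools' libprofiler
        )
    ])
)

After building with python setup.py build_ext --inplace, a profiling session looks like this:

import gperftools_wrapped

gperftools_wrapped.ProfStart(b"moose.prof")  # bytes, since ProfilerStart takes char*
# ... run the Python code that calls into the C extensions here ...
gperftools_wrapped.ProfStop()

Finally, something like pprof --callgrind `which python` moose.prof > moose.callgrind converts the dump into a format kcachegrind can open.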

Model of the basal amygdala - computational Pavlovian fear conditioning

After the first week of the CAMP course we had to decide which project we intended to do the next week. I have always been interested in how fear and anxiety evolve in the brain, so in the end I decided to take on a joint project with my course fellow Cristiano Köhler about modeling the basal amygdala - the part of the amygdala involved in the formation of fear and extinction memories. The main task was to reproduce a model of the basal amygdala and investigate its behavior. Our helpful supervisor was Arvind Kumar, one of the authors of the paper [1] that originally describes the model.

Objectives

First, we reproduced the Vlachos et al., 2011 model of the basolateral amygdala. The model's task is to remember a conditioned stimulus (CS, e.g. a flute tone) paired with an unconditioned stimulus (US, e.g. an electric shock) - so basically, after Pavlovian fear conditioning is applied to the model, it takes the same conditioned stimulus (without the shock) as a cue for fear. However, further presentations of the CS alone result in a decline of this conditioned response - a process called fear extinction. Another phenomenon is fear renewal, which shows the context-dependency of fear conditioning and extinction: after a fear conditioning period in context A (CTXA) and an extinction period in context B (CTXB), a repeated CS presentation in context A immediately brings back the fear memories despite the effect of the extinction period.

Model

The model intends to reproduce the functions of the basal amygdala. It's a large-scale spiking neural network implemented in Python, utilizing the Brian library. The model consists of 4000 leaky integrate-and-fire neurons - 3600 excitatory and 400 inhibitory. Three kinds of inputs are given to the system as Poisson spike trains: (1) CS-US pairs, (2) context information to a subset of neurons and (3) background noise to all the neurons. Plasticity is established in the connections between the inputs (CS-US, context information) and the excitatory neurons. Further plasticity is introduced in the synapses from inhibitory to excitatory neurons to investigate the effects it may bring about.
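Just to give a flavor of how such a network is set up, here is a minimal sketch in Brian 2 syntax; the population sizes follow the model, but the equations and parameter values are placeholders of mine, not those of the actual model:

from brian2 import *

# Minimal LIF network sketch (Brian 2 syntax); time constant, threshold
# and weights below are placeholder values, not the published ones.
tau = 20*ms
eqs = 'dv/dt = -v / tau : 1'

exc = NeuronGroup(3600, eqs, threshold='v > 1', reset='v = 0', method='exact')
inh = NeuronGroup(400, eqs, threshold='v > 1', reset='v = 0', method='exact')

noise = PoissonGroup(1000, rates=5*Hz)         # background noise input
syn = Synapses(noise, exc, on_pre='v += 0.1')  # input synapses (plasticity omitted)
syn.connect(p=0.1)

run(100*ms)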

Distribution of inputs in the spiking neural network model. CS-US input is provided to all neurons, while CTX input is fed only to a subpopulation of excitatory neurons - from [1].

Thursday, July 24, 2014

Computational Approaches to Memory and Plasticity - a great insight into neuroscience

I had the opportunity, mostly thanks to Dr. Upinder S. Bhalla and Subhasis Ray, to attend a 2-week-long course on neuroscience, more particularly on how memory is formed and on the methods to computationally model this process. I'm staying in India at NCBS, where the course took place, for 3 more weeks; at the moment I'm working in Upi's lab to optimize MOOSE by implementing some GPGPU code.

Photo taken at the campus of NCBS - such greenery.

Monday, July 21, 2014

EEG data analysis of audio induced fear

My previous post ended with some statistics; now I proceed to the analysis of the retrieved signals. The results and the measurement itself are instructive rather than practical, so if you are interested in a working solution, I have to disappoint you. I'm planning a measurement concept that is going to be much simpler and will look at the problem from a different perspective. Also, as I dived deeper into the literature, I found that there was no need for the 3-minute calm part; what was needed instead were measurements taken immediately before and after the audio effects were played.

Hypothesis to test - frontal alpha asymmetry

A pattern of asymmetrical frontal EEG activity was found to distinguish positive (happiness, joy) from negative (fear, anger) emotions by Heller and Nitschke [James A. Coan, 2004]. However, this hypothesis turned out to be weakly supported (at least better-supported hypotheses emerged), and instead some showed that frontal asymmetry indicates motivational direction: greater left frontal activity (which means lower left alpha power) compared to the right indicates approach toward or engagement with a stimulus, while greater right frontal activity indicates a tendency to withdraw from a stimulus. This is called Davidson's approach/withdrawal model [Davidson, 1993]. Here I should point out that activity is inversely related to alpha power, which means lower power reflects more activity and vice versa.

Alpha power means the power of the signal, converted to the frequency domain, between 7 and 12 Hz. Here is the spatial representation of alpha power for a fearful (withdrawal stimulus) person playing a horror game - not part of the main measurements. Here the alpha asymmetry model actually worked.
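For reference, this band power is easy to compute with NumPy/SciPy; a minimal sketch, assuming the raw left/right frontal channels are already available as arrays (channel choice and sampling rate are placeholders):

import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs):
    # Integrated 7-12 Hz power, estimated with Welch's method
    f, pxx = welch(signal, fs=fs, nperseg=fs * 2)
    band = (f >= 7) & (f <= 12)
    return np.trapz(pxx[band], f[band])

fs = 128                          # placeholder sampling rate
left = np.random.randn(fs * 60)   # stand-ins for real F3/F4 channel data
right = np.random.randn(fs * 60)

# A common asymmetry index: ln(right alpha) - ln(left alpha)
print(np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs)))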

Saturday, May 10, 2014

Measured audio induced fear of 24 subjects using EEG signals

For my current thesis I took EEG measurements of 24 subjects, mostly students. The intention was to capture moments of fear or anxiety induced purely by audio effects. It turned out early on that this is pretty hard to do. How can you arouse anxiety in people who have different concepts of what's frightening and what's not? How can you pinpoint the exact moment when that feeling occurred? Is audio input alone enough for someone to experience real danger?

Wednesday, April 16, 2014

From PDEs to Hines' solver

To understand and model electrical activity in neurons and neural networks, it is necessary to solve equations describing current flow. When we consider a single neuron, we have to describe that current flow over both time and space, which leads us to solve a PDE (partial differential equation) instead of a simpler ODE. Such a cable equation allowing spatial variation looks like this:
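λ² ∂²Vm/∂x² = τm ∂Vm/∂t + Vm

(standard passive textbook form; λ is the membrane length constant, τm the membrane time constant and Vm the membrane potential)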

Wednesday, April 2, 2014

NNGPU architectural design finished

NNGPU is a GPU (CUDA) powered trainable artificial neural network simulator. It's designed to manage not just multilayer perceptron (MLP) networks but practically any kind of trainable network. Supervised learning is used with the backpropagation training algorithm. A network's generalization ability can be measured, and both training and test sets can be provided. It's not a complete, fully optimized framework for building artificial neural networks; it's rather a GPGPU demonstration of speed relative to a single-threaded CPU implementation. I plan to make a comparison between (single-threaded) CPU and GPGPU runs to evaluate effectiveness.
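To recap what the training loop does, here is an illustrative NumPy sketch of plain backpropagation (not NNGPU's actual CUDA code; the network size and data are made up):

import numpy as np

# Tiny MLP trained with backpropagation on XOR -- the same algorithm
# NNGPU parallelizes on the GPU. All sizes and values are illustrative.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)               # forward pass, output layer
    d_out = (out - y) * out * (1 - out)      # output error gradient
    d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagated hidden gradient
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [0, 1, 1, 0]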

First, I made a use-case diagram of NNGPU's functionality; here it is.

Sunday, March 30, 2014

Numerical methods for solving ODEs in MOOSE

This post is about methods for solving ODEs (Ordinary Differential Equations) and about such methods used particularly in GENESIS, the ancestor of MOOSE (Multiscale Object-Oriented Simulation Environment), for neuronal modeling.

Before we discuss the methods themselves, stiffness needs mentioning. We consider an ODE stiff if its solution has abrupt transitions in it, like an action potential. Roughly speaking, stiffness measures the difficulty of solving an ODE numerically: for a stiff equation, explicit methods need impractically small step sizes to stay stable.
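A toy example of what stiffness means in practice (my own illustration, not from MOOSE or GENESIS): for dy/dt = -50y, the explicit Euler method blows up at a step size where the implicit method behaves perfectly well.

# Toy stiffness demo: dy/dt = -50*y, y(0) = 1, exact solution exp(-50*t)
lam, dt, steps = -50.0, 0.05, 20

y_fwd = 1.0  # explicit (forward) Euler
y_bwd = 1.0  # implicit (backward) Euler
for _ in range(steps):
    y_fwd = y_fwd + dt * lam * y_fwd   # diverges: |1 + dt*lam| = 1.5 > 1
    y_bwd = y_bwd / (1.0 - dt * lam)   # stable for any dt > 0

print(y_fwd)  # huge, oscillating value
print(y_bwd)  # tiny positive value, like the exact solution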

Stiff systems