Friday, June 20, 2014

GSoC Open Source Brain: Retinal Filter II

LGN-Retinal Filter II

Now that we know how to create a filter, it is time to use it to calculate how an LGN neuron would react to an incoming stimulus. In this entry we will create a white noise stimulus and examine how an LGN neuron reacts to it. This approach has the advantage that we can then recover the filter by reverse correlation methods as a sanity check.

In the same spirit as the last post, we will define the spatial and temporal parameters that determine the lengths and resolutions in those dimensions:

     # Time parameters
     dt = 1.0  # resolution of the response (in milliseconds)
     dt_kernel = 5.0  # resolution of the kernel (in milliseconds)
     dt_stimuli = 10.0  # resolution of the stimuli (in milliseconds)

     kernel_size = 25  # The size of the kernel

     T_simulation = 2 * 10 ** 2.0  # Total time of the simulation in ms
     Nt_simulation = int(T_simulation / dt)  # Simulation points
     N_stimuli = int(T_simulation / dt_stimuli)  # Number of stimuli

     # Space parameters
     dx = 1.0
     Lx = 20.0
     Nx = int(Lx / dx)
     dy = 1.0
     Ly = 20.0
     Ny = int(Ly / dy)
    

Now we call our kernel, which we wrapped up as a function in the last post:

      # Call the kernel 
      # Size of center area in the center-surround profile 
      sigma_center = 15  
      # Size of surround area in the center-surround profile 
      sigma_surround = 3  
      kernel = create_kernel(dx, Lx, dy, Ly, sigma_surround, 
                             sigma_center, dt_kernel, kernel_size) 
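
For reference, here is a minimal sketch of what create_kernel could look like, assembled from the spatial and temporal pieces developed in the last post (a reconstruction under those assumptions, not necessarily the exact code):

      import numpy as np

      def create_kernel(dx, Lx, dy, Ly, sigma_surround, sigma_center,
                        dt_kernel, kernel_size):
          """Build the separable center-surround spatio-temporal kernel."""
          # Spatial part: difference of Gaussians on a 2D grid
          x = np.arange(-Lx/2, Lx/2, dx)
          y = np.arange(-Ly/2, Ly/2, dy)
          X, Y = np.meshgrid(x, y)
          R = np.sqrt(X**2 + Y**2)
          center = (17.0 / sigma_center**2) * np.exp(-(R / sigma_center)**2)
          surround = (16.0 / sigma_surround**2) * np.exp(-(R / sigma_surround)**2)
          Z = surround - center

          # Temporal part: biphasic profile (parameters from Cai et al., 1997)
          t = np.arange(0, kernel_size * dt_kernel, dt_kernel)
          K1, K2, c1, c2 = 1.05, 0.7, 0.14, 0.12
          n1, n2, t1, t2 = 7.0, 8.0, -6.0, -6.0
          p1 = K1 * ((c1*(t - t1))**n1 * np.exp(-c1*(t - t1))) / ((n1**n1) * np.exp(-n1))
          p2 = K2 * ((c2*(t - t2))**n2 * np.exp(-c2*(t - t2))) / ((n2**n2) * np.exp(-n2))
          p3 = p1 - p2

          # Separability: multiply the temporal value by the spatial profile
          kernel = np.zeros((kernel_size,) + Z.shape)
          for k in range(kernel_size):
              kernel[k, ...] = p3[k] * Z
          return kernel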
    

With this in hand we can use the numpy random functions to create our white noise stimulus. We use here a realization of white noise called ternary noise, which consists of the values -1, 0, and 1 assigned randomly to each pixel of the stimulus:

    
      # Create the ternary noise stimuli
      stimuli = np.random.randint(-1, 2, size=(N_stimuli, Nx, Ny))
    

Before we can proceed to calculate the convolution we need to do some preliminary work. The convolution problem involves three time scales with different resolutions: the resolution of the response dt, the resolution of the kernel dt_kernel, and the resolution of the stimulus dt_stimuli. Operations with the kernel involve jumping from one scale to another constantly, so we need a mechanism to keep track of that. In short, we would like a mechanism that transforms between these coordinate systems in one specific place, rather than scattered all over the code.

Furthermore, in the convolution the kernel is multiplied by a specific set of images for each point in time. For the sake of efficiency we would like a mechanism that computes this mapping only once. With this in mind, I have built a set of indexes for each scale that allows us to associate each element in the response to its respective set of images. We also have a vector that associates every possible delay time in the kernel to the corresponding offset in the response. We illustrate these mechanisms in the next figure.

We can appreciate the three different time scales in the image. Furthermore, we have a set of indexes called stimuli indexes that maps each response point to its respective image, and another set called delay indexes that maps each delay in the kernel to its respective offset in the response. We can create these sets of indexes with the following code:

      # Scale factors 
      input_to_image = dt / dt_stimuli  # Transforms input to image
      kernel_to_input = dt_kernel / dt  # Transforms kernel to input 
      input_to_kernel = dt / dt_kernel  # Transforms input to kernel   

      working_indexes = np.arange(Nt_simulation).astype(int)
      # Remove the initial indexes, where the kernel would
      # reach back before the first image
      remove_start = int(kernel_size * kernel_to_input)
      signal_indexes = np.arange(remove_start,
                                 Nt_simulation).astype(int)

      # Kernel times
      kernel_times = np.arange(kernel_size)
      kernel_times = kernel_times.astype(int)
      
      # Delay indexes 
      delay_indexes = np.floor(kernel_times * kernel_to_input)
      delay_indexes = delay_indexes.astype(int) 
     
      # Image Indexes 
      stimuli_indexes = np.zeros(working_indexes.size)
      stimuli_indexes = np.floor(working_indexes * input_to_image)
      stimuli_indexes = stimuli_indexes.astype(int)
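
As a quick illustration of how these indexes work together with the values defined above (a worked example, not part of the original script):

      # kernel_to_input = 5, so kernel delays map to input offsets 0, 5, 10, ...
      print(delay_indexes[:3])     # [ 0  5 10]
      # input_to_image = 0.1, so ten response points share each image
      print(stimuli_indexes[:12])  # [0 0 0 0 0 0 0 0 0 0 1 1]
      # e.g. the response at index 130 with kernel delay tau = 2 uses
      # stimuli_indexes[130 - delay_indexes[2]] = stimuli_indexes[120] = 12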

    

Now we can calculate the response of a neuron with a center-surround receptive field by performing the convolution between its filter and the stimulus. We then plot the resulting signal:

    # Initialize the response and perform the convolution
    signal = np.zeros(Nt_simulation)

    for index in signal_indexes:
        delay = stimuli_indexes[index - delay_indexes]
        # Do the calculation
        signal[index] = np.sum(kernel[kernel_times,...]
                               * stimuli[delay,...])

    t = np.arange(remove_start*dt, T_simulation, dt)
    plt.plot(t, signal[signal_indexes], '-',
             label='Kernel convolved with noise')
    plt.legend()
    plt.xlabel('Time (ms)')
    plt.ylabel('Convolution')
    plt.grid()
    plt.show()
    

We can see that the resolution of the response is only as fine as the resolution of the kernel, which explains the discontinuities in the figure above.

As a sanity check we can calculate a voltage-triggered average to recover the STA:

 
      ## Calculate the STA
      kernel_size = kernel_times.size
      Nside = np.shape(stimuli)[2]
      sta = np.zeros((kernel_size, Nside, Nside))

      for tau, delay_index in zip(kernel_times, delay_indexes):
          # For every tau we calculate the possible delay
          # and take the appropriate image indexes
          delay = stimuli_indexes[signal_indexes - delay_index]
          # Now we multiply the voltage by the appropriate images
          weighted_stimuli = np.sum(signal[signal_indexes, np.newaxis, np.newaxis]
                                    * stimuli[delay,...], axis=0)
          # Finally we divide by the sample size
          sta[tau,...] = weighted_stimuli / signal_indexes.size
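
As a quick numerical check (a hypothetical addition, not part of the original script), one could compare the recovered STA with the original kernel, for instance through a correlation coefficient:

      # Flatten both arrays and correlate them; a value close to 1
      # indicates that the reverse correlation recovered the filter
      similarity = np.corrcoef(sta.ravel(), kernel.ravel())[0, 1]
      print('STA-kernel correlation: %.3f' % similarity)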
    
    
    

Which we can plot in a convenient way with the following set of instructions:

      ## Visualize the STA 
      closest_square_to_kernel = int(np.sqrt(kernel_size)) ** 2

      # Define the color map
      cdict1 = {'red':   ((0.0, 0.0, 0.0),
                          (0.5, 0.0, 0.1),
                          (1.0, 1.0, 1.0)),

                'green': ((0.0, 0.0, 0.0),
                          (1.0, 0.0, 0.0)),

                'blue':  ((0.0, 0.0, 1.0),
                          (0.5, 0.1, 0.0),
                          (1.0, 0.0, 0.0))
                }

      from matplotlib.colors import LinearSegmentedColormap
      blue_red1 = LinearSegmentedColormap('BlueRed1', cdict1)

      n = int(np.sqrt(closest_square_to_kernel))
      # Plot the filters 
      for i in range(closest_square_to_kernel):
         plt.subplot(n,n,i + 1)
         plt.imshow(sta[i,:,:], interpolation='bilinear',
                    cmap=blue_red1)
         plt.colorbar()

      plt.show()
    

Wednesday, June 18, 2014

GSoC Open Source Brain: Retinal Filter I

LGN-Retinal filter

Treating the retina and the LGN together as a spatio-temporal filter is a common trait of the models that we are going to work with in this project. In this series of articles I am going to develop and explain the mechanism that I have built to model this particular stage of processing.

Structure of the filter

In this particular example we are dealing with a separable spatio-temporal filter. That is, we can write our filter as a product of a spatial part and a temporal part. In the following sections we will describe them in that order.
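
In symbols, and using the names that appear in the code below ($Z$ for the spatial profile and $p_3$ for the temporal one), separability means that the kernel factorizes as

$$k(x, y, t) = p_3(t) \, Z(x, y)$$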

Spatial filter

In order to calculate the spatial filter we need two things: first, how wide our filters are going to be, and second, the resolution level. So lx and ly will stand for the sizes of the receptive fields (in degrees) in the x and y directions respectively; the same goes for dx and dy with respect to the resolution. With this in our hands we can proceed to create a two-dimensional structure for our functions with the help of the meshgrid function:

## Spatial parameters
x = np.arange(-lx/2, lx/2, dx)
y = np.arange(-ly/2, ly/2, dy)

X, Y = np.meshgrid(x, y) # Create the 2D dimensional coordinates

Now that we have the coordinates, it is just a matter of evaluating the function with the appropriate formula:

R = np.sqrt(X**2 + Y**2) # Distance
center = (17.0 / sigma_center**2) * np.exp(-(R / sigma_center)**2)
surround = (16.0 / sigma_surround**2) * np.exp(-(R / sigma_surround)**2)
Z = surround - center

With the formula in our hands we can plot it as a contour map to get an idea of how it looks in space:
# Plot contour map
plt.contourf(X, Y, Z, 50, alpha=0.75, cmap=plt.cm.hot)
plt.colorbar()

C = plt.contour(X, Y, Z, 10, colors='black', linewidths=0.5)
plt.clabel(C, inline=True, fontsize=10)
plt.show()


We can also show a slice at y=0 in order to offer a side view of the plot and gain further insight into its structure:
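
A snippet along the following lines could produce that side view, taking the row of Z closest to y = 0 (a sketch that reuses the arrays x, y and Z defined above):

# Side view of the center-surround profile along y = 0
plt.plot(x, Z[y.size // 2, :], '-')
plt.xlabel('x (degrees)')
plt.ylabel('Z at y = 0')
plt.show()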

Temporal filter

The behaviour of the temporal filter is usually called a biphasic response. This consists of a response that initially grows above the mean level but then falls below it, finally returning to the mean within a time window of approximately 200 ms. The particular function and parameters that we are going to use to illustrate this behaviour were taken from the paper by Cai et al. (reference below). But first we have to define our kernel size and resolution, as in the code below:

# First the kernel size and resolution
kernel_size = 200
dt_kernel = 1.0
t = np.arange(0, kernel_size * dt_kernel, dt_kernel) # Time vector

Now, we construct the function in the following way:
## Temporal parameters
K1 = 1.05
K2 = 0.7
c1 = 0.14
c2 = 0.12
n1 = 7.0
n2 = 8.0
t1 = -6.0
t2 = -6.0
td = 6.0

p1 = K1 * ((c1*(t - t1))**n1 * np.exp(-c1*(t - t1))) / ((n1**n1) * np.exp(-n1))
p2 = K2 * ((c2*(t - t2))**n2 * np.exp(-c2*(t - t2))) / ((n2**n2) * np.exp(-n2))
p3 = p1 - p2
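
In equation form, the code above implements

$$p_i(t) = K_i \, \frac{\left[ c_i (t - t_i) \right]^{n_i} \, e^{-c_i (t - t_i)}}{n_i^{n_i} \, e^{-n_i}}, \qquad i = 1, 2,$$

and the temporal filter is the difference $p_3(t) = p_1(t) - p_2(t)$.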

We plot the function to get an idea of how it looks:
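
A minimal snippet such as the following would produce that plot (assuming matplotlib.pyplot has been imported as plt):

# Plot the biphasic temporal filter
plt.plot(t, p3, '-')
plt.xlabel('Time (ms)')
plt.ylabel('Filter amplitude')
plt.show()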


Spatio-Temporal Filter

Finally, in order to build the kernel, given that our filter is separable, we just have to multiply the temporal part by the spatial part at each point in space:
     # Initialize and fill the spatio-temporal kernel
     kernel = np.zeros((kernel_size, int(lx/dx), int(ly/dy)))

     for k, p3_value in enumerate(p3):
          kernel[k,...] = p3_value * Z
    

We can now show the result in the following plot:
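
One way to produce such a plot is to show a few time slices of the kernel; the following sketch (with an arbitrary choice of frames) assumes plt is available:

     # Show four time slices of the spatio-temporal kernel
     for i, k in enumerate([0, 5, 10, 15]):
         plt.subplot(2, 2, i + 1)
         plt.imshow(kernel[k, ...], interpolation='bilinear', cmap=plt.cm.hot)
         plt.title('delay = %.0f ms' % (k * dt_kernel))
         plt.colorbar()
     plt.show()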

References

  • Cai, Daqing, Gregory C. Deangelis, and Ralph D. Freeman. "Spatiotemporal receptive field organization in the lateral geniculate nucleus of cats and kittens." Journal of Neurophysiology 78.2 (1997): 1045-1061.

Thursday, June 5, 2014

GSoC Open Source Brain: How to Create Connections, Two Neurons Example

Two Neurons

In this example we are going to define one of the simplest possible networks: one with three elements. This will allow us to introduce a couple of fundamental concepts in PyNN.

As in our past example, we start by importing PyNN and the necessary libraries. We also declare some initialization variables and start the simulator:

import pyNN.nest as simulator
import pyNN.nest.standardmodels.electrodes as elect
import matplotlib.pyplot as plt
import numpy as np

N = 3 # Number of neurons
t = 150.0 # Simulation time

# Has to be called at the beginning of the simulation
simulator.setup(timestep=0.1, min_delay=0.1, max_delay=10)

Now we have to declare our cell model. But before doing so, let's check which of its parameters we can play with. We can consult the available parameters for each model class with the default_parameters attribute. In this example we are interested in the integrate-and-fire model with current-based synapses. We can call the following code to see the available parameters:

simulator.IF_curr_exp.default_parameters
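
The result is a dictionary mapping parameter names to their default values; for instance, one could print it entry by entry (a hypothetical snippet):

for name, value in simulator.IF_curr_exp.default_parameters.items():
    print(name, '=', value)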

Once we know the available parameters, we set them and then declare our model with the following instructions:

# Neuron Model's parameters
i_offset = 0
R = 20
tau_m = 20.0
tau_refractory = 50
v_thresh = 0
v_rest = -60
tau_syn_E = 5.0
tau_syn_I = 5.0
cm = tau_m / R

# Declare our cell model
model = simulator.IF_curr_exp(cm=cm, i_offset=i_offset, tau_m=tau_m, tau_refrac=tau_refractory, tau_syn_E=tau_syn_E, tau_syn_I=tau_syn_I, v_reset=v_rest, v_thresh=v_thresh)

# Declare a population
neurons = simulator.Population(N, model)

In order to modify a specific subset of a given population in PyNN we use the concept of a View. Views allow us to use Python array notation to access a given subset of a population and modify it for our purposes. In this particular case we are going to select two sub-populations (neurons 1 and 2), modify their parameters, and create a connection to them from the remaining neuron. In order to modify the parameters of a given population we use the set_parameters method that each population possesses:

# Create views
neuron1 = neurons[[0]]
neuron2 = neurons[1, 2]

# Modify second neuron
tau_m2 = 10.0
cm2 = tau_m2 / R
neurons[1].set_parameters(cm=cm2, tau_m=tau_m2)

tau_m3 = 5.0
cm3 = tau_m3 / R
neurons[2].set_parameters(cm=cm3, tau_m=tau_m3)

Now, in order to create connections in PyNN we need the concept of a projection. As stated in the PyNN tutorial, a projection needs the following elements to be declared:

  • The pre-synaptic population
  • The post-synaptic population
  • A connection algorithm
  • A synapse type

The general form of the Projection constructor is given by

Projection(presynaptic population, postsynaptic population, connection algorithm, synapse type)

In our example the pre-synaptic population is going to be neuron 0, and the post-synaptic population is going to be composed of neurons 1 and 2, as declared in the views above. As a connection algorithm we are going to use the AllToAllConnector method, which connects every member of the pre-synaptic population to every member of the post-synaptic population. Finally, we can define a static synapse with the StaticSynapse method; it requires two attributes: a weight that determines the strength of the connection, and a delay that determines how long after a spike in the pre-synaptic neuron the post-synaptic neuron elicits a response. Our code below is:

# Synapses
syn = simulator.StaticSynapse(weight=10, delay=0.5)
# Projections
connections = simulator.Projection(neuron1, neuron2, simulator.AllToAllConnector(),
                                   syn, receptor_type='excitatory')

Finally, we set up the current source for neuron 1 and the voltage recorder, as in the previous example:

# DC source
current = elect.DCSource(amplitude=3.5, start=20.0, stop=100.0)
current.inject_into(neuron1)
#neurons.inject(current)

# Record the voltage
neurons.record('v')

simulator.run(t) # Run the simulations for t ms
simulator.end()

Finally, we extract the data and plot it:

# Extracts the data
data = neurons.get_data() # Creates a Neo Block
segment = data.segments[0] # Takes the first segment
vm = segment.analogsignalarrays[0] # Take the arrays

# Extract the data for neuron 1
vm1 = vm[:, 0]
vm2 = vm[:, 1]
vm3 = vm[:, 2]

# Plot the data
plt.plot(vm.times, vm1, label='pre-neuron')
plt.hold('on')
plt.plot(vm.times, vm2, label='post-neuron 1')
plt.plot(vm.times, vm3, label= 'post-neuron 2')

plt.xlabel('time')
plt.ylabel('Voltage')
plt.legend()

plt.show()

A particular example of the simulation output is shown next. Playing with the parameters in this example can give a clear idea of how the model and synapse parameters work:

GSoC Open Source Brain: First Example, Integrate and Fire neuron

First Example

In this entry I am going to describe a very basic example with PyNN. Our aim is to build a system with a simple integrate-and-fire neuron under the influence of a direct current. The first thing we need to do is import the simulator that we are going to use with the instruction:

import pyNN.nest as simulator

Note that this could also be NEURON or Brian instead of NEST.
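
For instance, assuming the corresponding backends are installed:

# Alternative backends (choose one):
# import pyNN.neuron as simulator
# import pyNN.brian as simulator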

Now we need to define our model, but before that we have to do some presetting. First we set the number of neurons that our simulation is going to use and the total time it will run:

N = 1 # Number of neurons
t = 100.0 #Simulation time

# Has to be called at the beginning of the simulation
simulator.setup(timestep=0.1, min_delay=0.1, max_delay=10)

Now we can define a neuron model for our neuron. For this example we are going to choose a leaky integrate-and-fire model with exponentially decaying post-synaptic current:

model = simulator.IF_curr_exp()
neurons = simulator.Population(N, model)

Note that once we have defined our model and our number of neurons, we can define a population with them.

As a next step we define the current and inject it into the neurons:

# DC source
current = simulator.DCSource(amplitude=0.5, start=20.0, stop=80.0)
#current = elect.DCSource(amplitude=0.5, start=20.0, stop=80.0)
current.inject_into(neurons)
#neurons.inject(current)

And finally we can tell our simulator to run with the instruction:

simulator.run(t) # Run the simulations for t ms

With this we have already simulated our neuron. However, as things stand we are unable to visualize the time course of the voltage in our system. In order to extract the membrane potential from our data we have to declare a recorder for the voltage before running:

neurons.record('v') # Record voltage
simulator.run(t) # Run the simulations for t ms

Now, in order to extract our simulation results from the recorder, we use:

data = neurons.get_data()

This returns a Neo Block. A Neo Block is a container of segments, which in turn contain the data recorded in a given experiment. In order to extract our data from the block above we use the following code:

data = neurons.get_data() # Creates a Neo Block
segment = data.segments[0] # Takes the first segment
vm = segment.analogsignalarrays[0] # Takes the analog signal array

Finally in order to plot our data:

import matplotlib.pyplot as plt
plt.plot(vm.times, vm)
plt.xlabel('time')
plt.ylabel('Vm')
plt.show()

Which produces the next plot:

GSoC Open Source Brain: Google Summer of Code 2014 Presentation

Presentation

Introduction

My name is Ramon Heberto Martinez and I am a student in the Erasmus Mundus Master in Complex Systems Science. I will use the coming series of entries in this blog to describe and report my progress in my project for Google Summer of Code 2014 (GSoC 2014). I have been lucky enough to have my proposal, entitled Open source, cross simulator, large scale cortical models, accepted with the International Neuroinformatics Coordinating Facility (INCF). This project will be co-mentored by Andrew Davison at the INCF's French branch and Padraig Gleeson from the INCF branch in the UK.

The Project

As we advance the study of the brain, we require more powerful tools to study it. In particular, more powerful computational tools have become available, as well as more elaborate simulation environments. An example of these efforts is the Open Source Brain (OSB) project, which provides a space where computational models can be built collaboratively and shared with open standards such as PyNN [Davison et al., 2008] and NeuroML [Gleeson et al., 2010].

On the other hand, there is a lack of well tested open models that can serve as benchmarks to test and reliably compare the capabilities of the different environments. It is the spirit of this project to help reduce the lack of such models. In particular, this project will consist of developing models of the visual system, which, at the time of writing, is an area not very well covered by the OSB project so far.

The work will specifically consist of developing the code for the models in the papers below in PyNN and releasing them as free code on the Open Source Brain Project platform. Papers:

  • Different roles for simple-cell and complex-cell inhibition in V1 [Lauritzen and Miller, 2003].
  • Inhibitory stabilization of the cortical network underlies visual surround suppression [Ozeki et al., 2009].
  • Feedforward origins of response variability underlying contrast invariant orientation tuning in cat visual cortex [Sadagopan and Ferster, 2012].
Together, these papers cover a thorough set of properties of the visual system (V1), in particular the orientation invariance property.

References and Links

References

  • Andrew P Davison, Daniel Brüderle, Jochen Eppler, Jens Kremkow, Eilif Muller, Dejan Pecevski, Laurent Perrinet, and Pierre Yger. PyNN: a common interface for neuronal network simulators. Frontiers in Neuroinformatics, 2, 2008.
  • Padraig Gleeson, Sharon Crook, Robert C Cannon, Michael L Hines, Guy O Billings, Matteo Farinella, Thomas M Morse, Andrew P Davison, Subhasis Ray, Upinder S Bhalla, et al. NeuroML: a language for describing data-driven models of neurons and networks with a high degree of biological detail. PLoS Computational Biology, 6(6):e1000815, 2010.
  • Thomas Z Lauritzen and Kenneth D Miller. Different roles for simple-cell and complex-cell inhibition in V1. The Journal of Neuroscience, 23(32):10201–10213, 2003.
  • Hirofumi Ozeki, Ian M Finn, Evan S Schaffer, Kenneth D Miller, and David Ferster. Inhibitory stabilization of the cortical network underlies visual surround suppression. Neuron, 62(4):578–592, 2009.
  • Roger D Peng. Reproducible research in computational science. Science, 334(6060):1226, 2011.
  • Srivatsun Sadagopan and David Ferster. Feedforward origins of response variability underlying contrast invariant orientation tuning in cat visual cortex. Neuron, 74(5):911–923, 2012.

Links

Project Page at INCF
PyNN
NeuroML
Open Source Brain Project
Neural Ensemble
International Neuroinformatics Coordinating Facility
Google Summer of Code 2014