Wednesday, August 20, 2014

GSoC Open Source Brain: Cortical Connections

Cortical Connections

In the same vein as the previous post, we show here how to construct the connections between the cortical layers. In order to do so we build a function that works for any arbitrary connectivity; we describe its structure in what follows. First, as with the thalamo-cortical connectivity, we have a function that loops over the target population extracting the relevant parameters that characterize its neurons. We then have another function that loops over the source population creating the corresponding tuples for the connection list. It is in this last function that the particular connectivity rule is implemented.

In the particular case of the Troyer model the connectivity between the cortical cells is determined by the correlation between the receptive fields of the neurons, the receptive fields here being Gabor functions. In more detail, the neurons whose receptive fields are more correlated are the ones more likely to have excitatory connections between them, while the ones whose receptive fields are less correlated are more likely to receive inhibitory connections. In this post we show two schemes that accomplish this connectivity. The first uses the parameters of the receptive field to calculate the connectivity, and the second uses the receptive fields directly to calculate the correlations. We present the determining functions in that order below.

We now present the function that creates the connectivity for a given neuron pair. The circular distances between the orientations and the phases are calculated as a proxy for how similar the receptive fields of the neurons are. After that, each distance is weighted and normalized with a Gaussian function in order to obtain a value that we can interpret as a probability. Finally, in order to calculate the connectivity, we sample n_pick times with the given probability value to determine how strong a particular connection should be.

 
def cortical_to_cortical_connection(target_neuron_index, connections, source_population, n_pick, g, delay, source_orientations,
                                    source_phases, orientation_sigma, phase_sigma, target_neuron_orientation,
                                    target_neuron_phase, target_type):
    """
    Creates the connections from the source population to the target neuron
    """
    for source_neuron in source_population:
        # Extract index, orientation and phase of the source neuron
        source_neuron_index = source_population.id_to_index(source_neuron)
        source_neuron_orientation = source_orientations[source_neuron_index]
        source_neuron_phase = source_phases[source_neuron_index]

        # Now calculate phase and orientation distances
        or_distance = circular_dist(target_neuron_orientation, source_neuron_orientation, 180)

        if target_type:
            phase_distance = circular_dist(target_neuron_phase, source_neuron_phase, 360)
        else:
            phase_distance = 180 - circular_dist(target_neuron_phase, source_neuron_phase, 360)

        # Now calculate the Gaussian functions
        or_gauss = normal_function(or_distance, mean=0, sigma=orientation_sigma)
        phase_gauss = normal_function(phase_distance, mean=0, sigma=phase_sigma)

        # Normalize by the Gaussian at zero
        or_gauss = or_gauss / normal_function(0, mean=0, sigma=orientation_sigma)
        phase_gauss = phase_gauss / normal_function(0, mean=0, sigma=phase_sigma)

        # The probability is the product
        probability = or_gauss * phase_gauss
        probability = np.sum(np.random.rand(n_pick) < probability)  # Samples
        synaptic_weight = (g / n_pick) * probability

        if synaptic_weight > 0:
            connections.append((source_neuron_index, target_neuron_index, synaptic_weight, delay))

    return connections
      
    

Note that the overall strength is weighted by the conductance value g that is passed as an argument. Furthermore, the delay, also passed as an argument, is added to each tuple as its last element.
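
The helper functions circular_dist and normal_function are not listed in this post. A minimal sketch of what they are assumed to compute is the following (the actual implementations in the model may differ):

import numpy as np

def circular_dist(a, b, period):
    """Shortest distance between a and b on a circle with the given period."""
    d = np.abs(a - b) % period
    return np.minimum(d, period - d)

def normal_function(x, mean=0.0, sigma=1.0):
    """Gaussian evaluated at x; the prefactor cancels after the normalization above."""
    return np.exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)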

Secondly, we present the full correlation scheme. In this scheme we utilize the kernels directly to calculate the spatial correlation between them. In particular, after we have flattened our kernels Z to obtain a 1D array instead of a matrix, we use the function pearsonr from scipy.stats to calculate the correlation. Again, as in the case above, we use this probability to sample n_pick times and then calculate the relative connection strength from it.

 
def cortical_to_cortical_connection_corr(target_neuron_index, connections, source_population, n_pick, g, delay,
                                         source_orientations, source_phases, target_neuron_orientation, target_neuron_phase,
                                         Z1, lx, dx, ly, dy, sigma, gamma, w, target_type):
    """
    Creates the connections from the source population to the target neuron
    """
    for source_neuron in source_population:
        # Extract position, index, orientation and phase of the source neuron
        x_source, y_source = source_neuron.position[0:2]
        source_neuron_index = source_population.id_to_index(source_neuron)
        source_neuron_orientation = source_orientations[source_neuron_index]
        source_neuron_phase = source_phases[source_neuron_index]

        # Build the receptive field of the source neuron
        Z2 = gabor_kernel(lx, dx, ly, dy, sigma, gamma, source_neuron_phase, w, source_neuron_orientation,
                          x_source, y_source)

        if target_type:
            probability = pearsonr(Z1.flat, Z2.flat)[0]
        else:
            probability = (-1) * pearsonr(Z1.flat, Z2.flat)[0]

        probability = np.sum(np.random.rand(n_pick) < probability)  # Samples
        synaptic_weight = (g / n_pick) * probability

        if synaptic_weight > 0:
            connections.append((source_neuron_index, target_neuron_index, synaptic_weight, delay))

    return connections
    

As before, the overall strength is weighted by the conductance value g passed as an argument, and the delay, also passed as an argument, is appended as the last element of each tuple.
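
The gabor_kernel function that builds the receptive field matrix is not listed in this post either. A plausible sketch, assuming numpy imported as np and the standard Gabor parametrization (sigma for the decay, gamma for the aspect ratio, w for the spatial frequency, phi for the phase and theta for the orientation), would be:

def gabor_kernel(lx, dx, ly, dy, sigma, gamma, phi, w, theta, x_center=0.0, y_center=0.0):
    """Hypothetical reconstruction: returns a Gabor receptive field as a 2D matrix."""
    x = np.arange(-lx / 2, lx / 2, dx)
    y = np.arange(-ly / 2, ly / 2, dy)
    X, Y = np.meshgrid(x - x_center, y - y_center)
    # Rotate the coordinates by theta
    X_rot = X * np.cos(theta) + Y * np.sin(theta)
    Y_rot = -X * np.sin(theta) + Y * np.cos(theta)
    envelope = np.exp(-(X_rot ** 2 + (gamma * Y_rot) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * w * X_rot + phi)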

We now show a plot that illustrates how the probabilities change when the parameters that determine the Gabor function are varied, for each scheme.

In the figure above, the upper row shows, for the first scheme, how the connection probability for a neuron with phase 0 and orientation 0 changes as we vary the phase (left) and the orientation (right). The two graphs below show the same for the second scheme.

Monday, August 18, 2014

GSoC Open Source Brain: Thalamo-Cortical Connections

Thalamo-cortical connections

In this post I will show how to build arbitrary custom connections in PyNN. We will illustrate the general technique in the particular case of the Troyer model, where the connections from the LGN to the cortex are determined with a Gabor profile; I am therefore going to describe the functions required to achieve such an aim.

In the PyNN documentation we find that one of the ways of implementing arbitrary connectivity patterns is to use the FromListConnector utility. In this format we have to construct a list of tuples with a tuple for each connection. In each tuple we need to include the index of the source neuron (the neuron from which the synapse originates), the index of the target neuron (the neuron into which the synapse terminates), the weight and the delay. For example (0, 1, 5, 0.1) would indicate that we have a connection from the neuron 0 to the neuron 1 with a synaptic weight of 5 and a delay of 0.1.
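
As a toy illustration (the numbers here are made up, and simulator stands for pyNN.nest imported as in the other posts), such a list can be turned into a connector directly:

connection_list = [(0, 1, 5.0, 0.1),  # neuron 0 -> neuron 1, weight 5, delay 0.1 ms
                   (0, 2, 2.5, 0.2)]  # neuron 0 -> neuron 2, weight 2.5, delay 0.2 ms

connector = simulator.FromListConnector(connection_list, column_names=["weight", "delay"])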

In the light of the explanation above we need to build a function that constructs a list with the appropriate weights given a target and a source population. As a first step towards this goal we write a function that connects a given neuron in the target population to all the neurons in the source population. We present the function below and explain it afterwards:

 
def lgn_to_cortical_connection(cortical_neuron_index, connections, lgn_neurons, n_pick, g, delay, polarity, sigma,
                               gamma, phi, w, theta, x_cortical, y_cortical):
    """
    Creates connections from the LGN to the cortex with a Gabor profile.

    This function adds all the connections from the LGN to the cortical cell with index = cortical_neuron_index. It
    requires as parameters the cortical_neuron_index, the current list of connections, the LGN population and also
    the parameters of the Gabor function.

    Parameters
    ----------
    cortical_neuron_index : the neuron in the cortex -target- that we are going to connect to
    connections: the list of connections to which we will append the new connections
    lgn_neurons: the source population
    n_pick: how many times we will sample per neuron
    g: how strong the connection per neuron is
    delay: the time it takes for the action potential to arrive to the target neuron from the source neuron
    polarity: whether we are connecting from ON cells or OFF cells
    sigma: controls the decay of the exponential term
    gamma: x:y proportionality factor, elongates the pattern
    phi: phase of the overall pattern
    w: frequency of the pattern
    theta: rotates the whole pattern by the angle theta
    x_cortical, y_cortical : the spatial coordinates of the cortical neuron
    """

    for lgn_neuron in lgn_neurons:
        # Extract position
        x, y = lgn_neuron.position[0:2]
        # Calculate the Gabor probability
        probability = polarity * gabor_probability(x, y, sigma, gamma, phi, w, theta, x_cortical, y_cortical)
        probability = np.sum(np.random.rand(n_pick) < probability)  # Samples

        synaptic_weight = (g / n_pick) * probability
        lgn_neuron_index = lgn_neurons.id_to_index(lgn_neuron)

        # The format of the connector list should be pre_neuron, post_neuron, weight, delay
        if synaptic_weight > 0:
            connections.append((lgn_neuron_index, cortical_neuron_index, synaptic_weight, delay))

    

The first thing to note from the function above are its arguments. It receives the source population and the particular target neuron that we want to connect to, together with all the connectivity and Gabor-related parameters. In the body of the function there is a loop over the whole source population that decides whether we add a connection from a particular cell or not. To decide this we first determine the probability from the Gabor function; we then sample n_pick times and append the accordingly weighted synaptic weight to the list for each neuron.
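
The gabor_probability function itself is not listed in the post. A sketch of what it is assumed to compute -the value of the Gabor profile centred on the cortical cell, evaluated at the LGN cell position (x, y)- could look like this, assuming numpy imported as np (hypothetical reconstruction; the actual implementation may differ):

def gabor_probability(x, y, sigma, gamma, phi, w, theta, x_cortical=0.0, y_cortical=0.0):
    """Hypothetical reconstruction: value of the Gabor profile at the point (x, y)."""
    # Center the coordinates on the cortical cell and rotate them by theta
    x_c, y_c = x - x_cortical, y - y_cortical
    x_rot = x_c * np.cos(theta) + y_c * np.sin(theta)
    y_rot = -x_c * np.sin(theta) + y_c * np.cos(theta)
    envelope = np.exp(-(x_rot ** 2 + (gamma * y_rot) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * w * x_rot + phi)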

In the function above the values of the Gabor function are passed as arguments. However, the values of each Gabor function depend on the nature of the cell in the target population. In the light of this we construct another function that loops over the target population and extracts the appropriate Gabor values for each cell. We again present the function and then explain it:

def create_lgn_to_cortical(lgn_population, cortical_population, polarity,  n_pick, g, delay,  sigma, gamma, phases,
                           w, orientations):
    """
    Creates the connection from the LGN population to the cortical population with a Gabor profile. It also extracts
    the corresponding Gabor parameters that are needed in order to determine the connectivity.
    """

    print 'Creating connection from ' + lgn_population.label + ' to ' + cortical_population.label

    # Initialize connections
    connections = []

    for cortical_neuron in cortical_population:
        # Extract the parameters of this neuron
        x_cortical, y_cortical = cortical_neuron.position[0:2]
        cortical_neuron_index = cortical_population.id_to_index(cortical_neuron)
        theta = orientations[cortical_neuron_index]
        phi = phases[cortical_neuron_index]

        # Create the connections from the LGN to this cortical neuron (the Gabor is centred at the origin here)
        lgn_to_cortical_connection(cortical_neuron_index, connections, lgn_population, n_pick, g, delay, polarity,
                                   sigma, gamma, phi, w, theta, 0, 0)

    return connections
    

This function requires as arguments the source and target populations as well as the parameters that characterize each cell's connectivity: orientation and phase. In the body of the function a loop over the cortical population extracts the relevant parameters -position, orientation and phase- and then calls the function described previously to create the connectivity from the source population to the cell at hand.

So now we have the necessary functions to construct the list. We can use FromListConnector to transform the list into a connector, and then use this connector to define a Projection. We define both the excitatory and the inhibitory connections. We wrap the complete procedure in the following function:

def create_thalamocortical_connection(source, target, polarity, n_pick, g, delay, sigma, gamma, w, phases, orientations, simulator):
    """
    Creates a connection from a layer in the thalamus to a layer in the cortex through the mechanism of Gabor sampling
    """

    # Produce a list with the connections
    connections_list = create_lgn_to_cortical(source, target, polarity, n_pick, g, delay, sigma, gamma, phases, w, orientations)

    # Transform it into a connector
    connector = simulator.FromListConnector(connections_list, column_names=["weight", "delay"])

    # Create the excitatory and inhibitory projections
    simulator.Projection(source, target, connector, receptor_type='excitatory')
    simulator.Projection(source, target, connector, receptor_type='inhibitory')      
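
For illustration, a hypothetical call could look as follows (all parameter values here are made up, and lgn_on and cortex stand for previously created populations):

create_thalamocortical_connection(lgn_on, cortex, polarity=1, n_pick=10, g=0.5, delay=0.1,
                                  sigma=1.0, gamma=0.5, w=0.8, phases=phases,
                                  orientations=orientations, simulator=simulator)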
    

With this we can create connections from one population to another in general. We can even swap the Gabor function for something else if we want to experiment with other connectivity patterns. Finally, we present below an example of a sampling from a Gabor function with the algorithm we just constructed:

In the image we show the sampling on the left and the ideal Gabor function on the right.

Saturday, August 16, 2014

GSoC Open Source Brain: Arbitrary Spike-trains in PyNN

Arbitrary Spike-trains in PyNN

In this example we are going to create a population of cells with arbitrary spike trains. We will load the spike trains from a file where they are stored as a list of arrays containing the times at which the spikes occurred. In order to do so we are going to use the SpikeSourceArray cell model of PyNN.

First we start by importing PyNN with the NEST backend and all the other required libraries. Furthermore, we set up the simulator and define some general parameters. We assume that we have already produced spikes for the cells at 0.50 contrast:

      import numpy as np
      import matplotlib.pyplot as plt
      import cPickle
      import pyNN.nest as simulator

      contrast = 0.50
      Nside_lgn = 30
      Ncell_lgn = Nside_lgn * Nside_lgn
      N_lgn_layers = 4
      t = 1000  # ms

      simulator.setup(timestep=0.1, min_delay=0.1, max_delay=5.0)
    

We are going to suppose that we have our data stored in './data'. The spike trains are stored as lists, as long as the cell population, that contain for each element an array with the times at which the spikes occurred for that particular neuron. In order to load them we use the following code:

      directory = './data/'
      format = '.cpickle'

      spikes_on = []
      spikes_off = []

      for layer in xrange(N_lgn_layers):

          layer_mark = '_layer' + str(layer)
          contrast_mark = str(contrast)
          mark = '_spike_train'

          polarity = '_on'
          spikes_filename = directory + contrast_mark + mark + polarity + layer_mark + format
          f2 = open(spikes_filename, 'rb')
          spikes_on.append(cPickle.load(f2))
          f2.close()

          polarity = '_off'
          spikes_filename = directory + contrast_mark + mark + polarity + layer_mark + format
          f2 = open(spikes_filename, 'rb')
          spikes_off.append(cPickle.load(f2))
          f2.close()
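
For reference, a file in this format could have been produced along the following lines (illustrative only; it reuses the names np, cPickle, t and Ncell_lgn from the first snippet and writes sorted random spike times):

      # Illustrative only: producing a file in the format assumed above
      fake_spike_trains = [np.sort(np.random.uniform(0, t, 20)) for _ in xrange(Ncell_lgn)]
      f = open('./data/0.5_spike_train_on_layer0.cpickle', 'wb')
      cPickle.dump(fake_spike_trains, f)
      f.close()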

    

Now this is the crucial part. If we want to use the SpikeSourceArray model for the cells in PyNN we can define a function that passes the spike train to each cell in the population. In order to do so we use the following code:

      def spike_times(simulator, layer, spikes_file):
         return [simulator.Sequence(x) for x in spikes_file[layer]]
    

Note that we have to convert every spike-train array to a Sequence before using it as a spike train. After defining this function we can create the LGN models:

      # Cells models for the LGN spikes (SpikeSourceArray)
      lgn_spikes_on_models = []
      lgn_spikes_off_models = []


      for layer in xrange(N_lgn_layers):
         model = simulator.SpikeSourceArray(spike_times=spike_times(simulator, layer, spikes_on))
         lgn_spikes_on_models.append(model)
         model = simulator.SpikeSourceArray(spike_times=spike_times(simulator, layer, spikes_off))
         lgn_spikes_off_models.append(model)
    

Now that we have the corresponding models for the cells we can create the populations in the usual way:

 # LGN Populations

 lgn_on_populations = []
 lgn_off_populations = []

 for layer in xrange(N_lgn_layers):
     population = simulator.Population(Ncell_lgn, lgn_spikes_on_models[layer], label='LGN_on_layer_' + str(layer))
     lgn_on_populations.append(population)
     population = simulator.Population(Ncell_lgn, lgn_spikes_off_models[layer], label='LGN_off_layer_' + str(layer))
     lgn_off_populations.append(population)
      

In order to analyze the spike-train patterns we need to declare a recorder for each population of interest:

 
      layer = 0  # We declare here the layer of our interest 

      population_on = lgn_on_populations[layer]
      population_off = lgn_off_populations[layer]

      population_on.record('spikes')
      population_off.record('spikes')
    

Note here that we can choose the layer of interest by modifying the value of the layer variable. Finally we run the model with the usual instructions and extract the spikes:

      #############################
      # Run model
      #############################

      simulator.run(t)  # Run the simulations for t ms
      simulator.end()

      #############################
      # Extract the data
      #############################
      data_on = population_on.get_data()  # Creates a Neo Block
      data_off = population_off.get_data()

      segment_on = data_on.segments[0]  # Takes the first segment
      segment_off = data_off.segments[0]
    

In order to visualize the spikes we use the following function:

 
      # Plot spike trains
      def plot_spiketrains(segment):
          """
          Plots the spikes of all the cells in the given segment
          """
          for spiketrain in segment.spiketrains:
              y = np.ones_like(spiketrain) * spiketrain.annotations['source_id']
              plt.plot(spiketrain, y, '*b')
          plt.ylabel('Neuron number')
          plt.xlabel('Time (ms)')
    

Here the spiketrain variable contains the spike train of each cell, that is, an array with the times at which the action potentials happened for that cell. In order to tell the cells apart we assign to each spike train the value of its cell's source_id on the y-axis. Finally we can plot the spikes of the ON and OFF cells with the following code:


      plt.subplot(2, 1, 1)
      plt.title('On cells ')
      plot_spiketrains(segment_on)

      plt.subplot(2, 1, 2)
      plt.title('Off cells ')
      plot_spiketrains(segment_off)

      plt.show()
    

We now show the plot produced by the code above. Note that the ON and OFF cells are out of phase by 180 degrees.

Thursday, August 14, 2014

GSoC Open Source Brain: Firing Rate Induced by a Sine Grating

Firing Rate Induced by a Sine Grating

Now that we know how to do convolutions with our center-surround kernel we can choose any other kind of stimulus to carry this out. In the neuroscience of vision it is very common to use a sine grating in a wide array of experimental settings, so we are going to use one now. In short, in this post we are going to see what signal a center-surround kernel produces when it is convolved with a sine grating.

Center-Surround Kernel

In order to do the convolution we define the kernel in the usual way, using a function from our previous work:

      # First we define the size and resolution of the space in which the convolution is going to happen
      dx = 0.05
      dy = 0.05
      lx = 6.0  # In degrees
      ly = 6.0  # In degrees

      # Now we define the temporal parameters of the kernel
      dt_kernel = 5.0  # ms
      kernel_duration = 150  # ms
      kernel_size = int(kernel_duration / dt_kernel)

      #  Now the center surround parameters
      factor = 1  # Controls the overall size of the center-surround pattern
      sigma_center = 0.25 * factor  # Corresponds to 15'
      sigma_surround = 1 * factor  # Corresponds to 1 degree

      # Finally we create the kernel
      kernel_on = create_kernel(dx, lx, dy, ly, sigma_surround, sigma_center, dt_kernel, kernel_size)
    

Sinus Grating

Now we are going to construct our sine grating. But first we need to think about how long our stimulus is going to last, which is a function of how long we want the convolution to run and of the resolutions of the stimulus and the simulation:

      ## Now we define the temporal parameters of the sine grating
      dt_stimuli = 5.0  # ms

      # We also need to add how long do we want to convolve
      dt = 1.0  # Simulation resolution
      T_simulation = 1 * 10 ** 3.0 # ms
      T_simulation += int(kernel_size * dt_kernel)  # Add the size of the kernel
      Nt_simulation = int(T_simulation / dt)  # Number of simulation points
      N_stimuli = int(T_simulation / dt_stimuli)  # Number of stimuli points     
    

Finally we present the parameters that determine the sine grating: the spatial frequency (K), the spatial phase (Phi) and the orientation (Theta). Furthermore, we also have parameters for the amplitude (A) and the temporal frequency (w):

      # And now the spatial parameters of the sinus grating
      K = 0.8  # Cycles per degree
      Phi = 0  # Spatial phase 
      Theta = 0 # Orientation 
      A = 1 # Amplitude 
      # Temporal frequency of sine grating
      w = 3  # Hz
    

Now, with all the parameters in our possession, we can call the function that produces the sine grating. We show the call first and then the function definition below:

 
      stimuli = sine_grating(dx, lx, dy, ly, A, K, Phi, Theta, dt_stimuli, N_stimuli, w)
      
      def sine_grating(dx, Lx, dy, Ly, A, K, Phi, Theta, dt_stimuli, N_stimuli, w):
          '''
          Returns a sine grating stimulus
          '''
          Nx = int(Lx / dx)
          Ny = int(Ly / dy)

          # Transform to appropriate units
          K = K * 2 * np.pi  # Transforms K from cycles per degree to radians per degree
          w = w / 1000.0  # Transforms w from Hz to cycles per ms

          x = np.arange(-Lx/2, Lx/2, dx)
          y = np.arange(-Ly/2, Ly/2, dy)
          X, Y = np.meshgrid(x, y)
          Z = A * np.cos(K * X * np.cos(Theta) + K * Y * np.sin(Theta) - Phi)
          t = np.arange(0, N_stimuli * dt_stimuli, dt_stimuli)
          f_t = np.cos(w * 2 * np.pi * t)

          stimuli = np.zeros((N_stimuli, Nx, Ny))

          for k, time_component in enumerate(f_t):
              stimuli[k, ...] = Z * time_component

          return stimuli

    

Convolution

Now that we have the stimulus and the kernel we can perform the convolution. In order to do that we use again the functions and indexes that we used in the previous post:

 
      ## Now we can do the convolution

      # First we define the necessary indexes to the convolution
      signal_indexes, delay_indexes, stimuli_indexes = create_standar_indexes(dt, dt_kernel, dt_stimuli, kernel_size, Nt_simulation)
      working_indexes, kernel_times = create_extra_indexes(kernel_size, Nt_simulation)

      # Now we calculate the signal
      signal = np.zeros(Nt_simulation)

      for index in signal_indexes:
          signal[index] = convolution(index, kernel_times, delay_indexes, stimuli_indexes, kernel_on, stimuli)
    

We can visualize the signal with the following code:

 
      #Plot the signal 
      t = np.arange(kernel_size*dt_kernel, T_simulation, dt)
      plt.plot(t, signal[signal_indexes])
      plt.show()
    

We can see that the signal is also a sinusoid, with a frequency that is consistent with the one of the sine grating.
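
As a quick, optional sanity check (not part of the original analysis), we can verify that the dominant temporal frequency of the response matches the grating's temporal frequency w = 3 Hz:

      # Illustrative check: dominant temporal frequency of the response
      response = signal[signal_indexes]
      freqs = np.fft.rfftfreq(response.size, d=dt / 1000.0)  # dt is in ms, so freqs are in Hz
      spectrum = np.abs(np.fft.rfft(response - response.mean()))
      print freqs[np.argmax(spectrum)]  # should be close to 3 Hz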

Friday, June 20, 2014

GSoC Open Source Brain: Retinal Filter II

LGN-Retinal Filter II

Now that we know how to create a filter it is time to use it to calculate how an LGN neuron would react to an incoming stimulus. In this entry we will create a white noise stimulus in order to see how an LGN neuron reacts to it. This approach has the advantage that we can then recover the filter by reverse correlation methods as a sanity check.

In the same spirit as the last post, we define the spatial and time parameters that determine the lengths and resolutions in those dimensions:

     #Time parameters  
     dt = 1.0  # resolution of the response  (in milliseconds)
     dt_kernel = 5.0 # resolution of the kernel  (in milliseconds)
     dt_stimuli = 10.0  # resolution of the stimuli  (in milliseconds)

     kernel_size = 25 # The size of the kernel 

     T_simulation = 2 * 10 ** 2.0 # Total time of the simulation in ms
     Nt_simulation = int(T_simulation / dt) #Simulation points 
     N_stimuli = int(T_simulation / dt_stimuli) #Number of stimuli

     # Space parameters 
     dx = 1.0
     Lx = 20.0
     Nx = int(Lx / dx)
     dy = 1.0
     Ly = 20.0
     Ny = int(Ly / dy ) 
    

Now we call our kernel, which we wrapped up as a function from the work in the last post:

      # Call the kernel 
      # Size of center area in the center-surround profile 
      sigma_center = 15  
      # Size of surround area in the center-surround profile 
      sigma_surround = 3  
      kernel = create_kernel(dx, Lx, dy, Ly, sigma_surround, 
                             sigma_center, dt_kernel, kernel_size) 
    

With this in our hands we can use the numpy random functions to create our white noise stimulus. We use here a realization of white noise called ternary noise, which consists of values -1, 0 and 1 assigned randomly to each pixel in our stimulus:

    
      # Call the stimuli 
      stimuli = np.random.randint(-1, 2, size=(N_stimuli, Nx, Ny))
    

Before we can proceed to calculate the convolution we need to do some preliminary work. The convolution problem involves three time scales with different resolutions: the resolution of the response, dt; the resolution of the kernel, dt_kernel; and the resolution of the stimulus, dt_stimuli. Operations with the kernel involve jumping from one scale to another constantly, so we need a mechanism to keep track of that. In short, we would like a mechanism that transforms from one set of coordinates to the others in one specific place instead of scattered all over the code.

Furthermore, in the convolution the kernel is multiplied by a specific set of images for each point in time. For the sake of efficiency we would like a mechanism that does this mapping once. With this in mind I have built a set of indexes for each scale that allows us to associate each element of the response to its respective set of images. Also, we have a vector that associates every possible delay time in the kernel to the corresponding set of indexes in the response. We illustrate the mechanisms in the next figure.

We can appreciate the three different time scales in the image. Furthermore, we have a set of indexes called stimuli indexes that maps each response point to its respective image, and another set called delay indexes that maps each delay in the kernel to its respective offset in the response. We can create these sets of indexes with the following code:

      # Scale factors 
      input_to_image = dt / dt_stimuli  # Transforms input to image
      kernel_to_input = dt_kernel / dt  # Transforms kernel to input 
      input_to_kernel = dt / dt_kernel  # Transforms input to kernel   

      working_indexes = np.arange(Nt_simulation).astype(int)
      # Remove the initial points where the convolution is not yet defined
      remove_start = int(kernel_size * kernel_to_input)
      signal_indexes = np.arange(remove_start,
                                 Nt_simulation).astype(int)

      # Calculate kernel
      kernel_times = np.arange(kernel_size)
      kernel_times = kernel_times.astype(int) 
      
      # Delay indexes 
      delay_indexes = np.floor(kernel_times * kernel_to_input)
      delay_indexes = delay_indexes.astype(int) 
     
      # Image Indexes 
      stimuli_indexes = np.zeros(working_indexes.size)
      stimuli_indexes = np.floor(working_indexes * input_to_image)
      stimuli_indexes = stimuli_indexes.astype(int)
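
As a concrete check of the mapping (a worked example): with dt = 1.0 ms and dt_stimuli = 10.0 ms we have input_to_image = 0.1, so the response point with index 25 reads the stimulus frame floor(25 * 0.1) = 2; likewise, with dt_kernel = 5.0 ms the kernel delay tau = 3 corresponds to an offset of floor(3 * 5) = 15 points in the response.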

    

Now we can calculate the response of a neuron with a center-surround receptive field by performing the convolution between its filter and the stimulus. We also plot the resulting signal to see how it looks:

    signal = np.zeros(Nt_simulation)

    for index in signal_indexes:
        delay = stimuli_indexes[index - delay_indexes]
        # Do the calculation
        signal[index] = np.sum(kernel[kernel_times,...]
                               * stimuli[delay,...])

    t = np.arange(remove_start*dt, T_simulation, dt)
    plt.plot(t, signal[signal_indexes], '-',
             label='Kernel convolved with noise')
    plt.legend()
    plt.xlabel('Time (ms)')
    plt.ylabel('Convolution')
    plt.grid()
    plt.show()
    

We can see that the resolution of the response is only as good as the resolution of the filter, and this explains the discontinuities in the figure above.

As a sanity check we can calculate a voltage-triggered average to recover the STA:

 
      ## Calculate the STA
      kernel_size = kernel_times.size
      Nside = np.shape(stimuli)[2]
      sta = np.zeros((kernel_size ,Nside, Nside))

      for tau, delay_index in zip(kernel_times, delay_indexes):
         # For every tau we calculate the possible delay 
         # and take the appropriate image index
         delay = stimuli_indexes[signal_indexes - delay_index] 
         # Now we multiply the voltage for the appropriate images 
         weighted_stimuli = np.sum( signal[signal_indexes, np.newaxis, np.newaxis] * stimuli[delay,...], axis=0)
         # Finally we divide for the sample size 
         sta[tau,...] = weighted_stimuli / signal_indexes.size
    
    
    

Which we can plot in a convenient way with the following set of instructions:

      ## Visualize the STA 
      closest_square_to_kernel = int(np.sqrt(kernel_size)) ** 2

      # Define the color map
      cdict1 = {'red':   ((0.0, 0.0, 0.0),
                          (0.5, 0.0, 0.1),
                          (1.0, 1.0, 1.0)),

                'green': ((0.0, 0.0, 0.0),
                          (1.0, 0.0, 0.0)),

                'blue':  ((0.0, 0.0, 1.0),
                          (0.5, 0.1, 0.0),
                          (1.0, 0.0, 0.0))
                }

      from matplotlib.colors import LinearSegmentedColormap
      blue_red1 = LinearSegmentedColormap('BlueRed1', cdict1)

      n = int( np.sqrt(closest_square_to_kernel))
      # Plot the filters 
      for i in range(closest_square_to_kernel):
         plt.subplot(n,n,i + 1)
         plt.imshow(sta[i,:,:], interpolation='bilinear',
                    cmap=blue_red1)
         plt.colorbar()

      plt.show()
    

Wednesday, June 18, 2014

GSoC Open Source Brain: Retinal Filter I

LGN-Retinal filter

Treating the retina and the LGN together as a spatio-temporal filter is a common trait of the models that we are going to work with in this project. In this series of articles I am going to develop and explain the mechanism that I have built to model this particular stage of processing.

Structure of the filter

In this particular example we are dealing with a separable spatio-temporal filter. That is, we can write our filter as a product of the spatial part and the temporal part. In the following sections, we will describe them in this given order.
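
In symbols, separability means that the kernel factorizes as K(x, y, t) = D(x, y) T(t), where D is the spatial (center-surround) profile and T the temporal course; this is exactly the product computed at the end of this post, with Z playing the role of D and p3 the role of T.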

Spatial filter

In order to calculate the spatial filter we need two things: first, how wide our filters are going to be, and then the resolution level. So lx and ly will stand for the size of the receptive filters (in degrees) in the x and y directions respectively. The same goes for dx and dy with respect to the resolution. With this in our hands we can proceed to create a two-dimensional structure for our functions with the help of the meshgrid function:

## Spatial parameters
x = np.arange(-lx/2, lx/2, dx)
y = np.arange(-ly/2, ly/2, dy)

X, Y = np.meshgrid(x, y) # Create the 2D dimensional coordinates

Now that we have the grid it is just a matter of evaluating the function with the appropriate formula:

R = np.sqrt(X**2 + Y**2) # Distance
center = (17.0 / sigma_center**2) * np.exp(-(R / sigma_center)**2)
surround = (16.0 / sigma_surround**2) * np.exp(-(R / sigma_surround)**2)
Z = surround - center

With the formula in our hands we can plot it as a contour map to get an idea of how it looks in space:

# Plot contour map
plt.contourf(X, Y, Z, 50, alpha=0.75, cmap=plt.cm.hot)
plt.colorbar()

C = plt.contour(X, Y, Z, 10, colors='black', linewidths=0.5)
plt.clabel(C, inline=True, fontsize=10)
plt.show()


We can also show a view at y=0 in order to offer a side view of the plot and gain further insight into its structure:

Temporal filter

The behaviour of the temporal filter in time is usually called a bi-phasic response. This consists of a response that initially grows above the mean level, then dips below it, and finally returns to it within a time window of approximately 200 ms. The particular function and parameters that we are going to use to illustrate this behaviour were taken from the paper by Cai et al. (reference below). But first, we have to define our kernel size and resolution, as in the code below:

# First the kernel size and resolution
kernel_size = 200
dt_kernel = 1.0
t = np.arange(0, kernel_size * dt_kernel, dt_kernel) # Time vector

Now, we construct the function in the following way:
## Temporal parameters
K1 = 1.05
K2 = 0.7
c1 = 0.14
c2 = 0.12
n1 = 7.0
n2 = 8.0
t1 = -6.0
t2 = -6.0
td = 6.0

p1 = K1 * ((c1*(t - t1))**n1 * np.exp(-c1*(t - t1))) / ((n1**n1) * np.exp(-n1))
p2 = K2 * ((c2*(t - t2))**n2 * np.exp(-c2*(t - t2))) / ((n2**n2) * np.exp(-n2))
p3 = p1 - p2

We plot the function to have an idea of how it looks now:


Spatio-Temporal Filter

Finally, in order to build the kernel, given that our filter is separable, we just have to multiply the temporal part by the spatial part at each point in time:
     # Initialize and fill the spatio-temporal kernel 
     kernel = np.zeros((kernel_size, int(lx/dx), int(ly/dy)))

     for k, temporal_value in enumerate(p3):
          kernel[k,...] = temporal_value * Z
    

Which we can show now in the following plot:

References

  • Cai, Daqing, Gregory C. Deangelis, and Ralph D. Freeman. "Spatiotemporal receptive field organization in the lateral geniculate nucleus of cats and kittens." Journal of Neurophysiology 78.2 (1997): 1045-1061.

Thursday, June 5, 2014

GSoC Open Source Brain: How to Create Connections, Two Neurons Example

Two Neurons

In this example we are going to define one of the simplest networks possible: one with three elements. This will allow us to introduce a couple of fundamental concepts in PyNN.

As in our previous example we start by importing PyNN and the necessary libraries. We also declare some initialization variables and start the simulator:

import pyNN.nest as simulator
import pyNN.nest.standardmodels.electrodes as elect
import matplotlib.pyplot as plt
import numpy as np

N = 3 # Number of neurons
t = 150.0 # Simulation time

# Has to be called at the beginning of the simulation
simulator.setup(timestep=0.1, min_delay=0.1, max_delay=10)

Now we have to declare our cell model. But before doing so, let's check which of its parameters we can play with. We can consult the available parameters for each model class through the default_parameters attribute. In this example we are interested in the integrate-and-fire model with current-based synapses. We can call the following code to see its available parameters:

simulator.IF_curr_exp.default_parameters

After this we get a listing of the available parameters. We set them and then declare our model with the following instructions:

# Neuron Model's parameters
i_offset = 0
R = 20
tau_m = 20.0
tau_refractory = 50
v_thresh = 0
v_rest = -60
tau_syn_E = 5.0
tau_syn_I = 5.0
cm = tau_m / R

# Declare our cell model
model = simulator.IF_curr_exp(cm=cm, i_offset=i_offset, tau_m=tau_m, tau_refrac=tau_refractory, tau_syn_E=tau_syn_E, tau_syn_I=tau_syn_I, v_reset=v_rest, v_thresh=v_thresh)

# Declare a population
neurons = simulator.Population(N, model)

In order to modify a specific subset of a given population in PyNN we use the concept of a View. Views allow us to use Python array notation to access a given subset of a population and modify it for our purposes. In this particular case we are going to select two sub-populations (neurons 1 and 2), modify their parameters, and create a connection to them from the remaining neuron. In order to modify the parameters of a given population we use the set_parameters method that each population possesses:

# Create views
neuron1 = neurons[[0]]
neuron2 = neurons[1, 2]

# Modify second neuron
tau_m2 = 10.0
cm2 = tau_m2 / R
neurons[1].set_parameters(cm=cm2, tau_m=tau_m2)

tau_m3 = 5.0
cm3 = tau_m3 / R
neurons[2].set_parameters(cm=cm3, tau_m=tau_m3)

Now, in order to create connections in PyNN we need the concept of a projection. As stated in the PyNN tutorial, a projection needs the following elements to be declared:

  • The pre-synaptic population
  • The post-synaptic population
  • A connection algorithm
  • A synapse type

The general form of the Projection constructor is given by

Projection(presynaptic population, postsynaptic population, connection algorithm, synapse type)

In our example the pre-synaptic population is going to be neuron 0 and the post-synaptic population is going to be composed of neurons 1 and 2, as declared in the views above. As a connection algorithm we are going to use the AllToAllConnector method, which connects every member of the pre-synaptic population to every member of the post-synaptic population. Finally, we can define a static synapse with the StaticSynapse method; it requires two attributes: a weight that determines the strength of the connection and a delay that determines how long after the spike in the pre-synaptic neuron the post-synaptic neuron elicits a response. Our code below is:

# Synapses
syn = simulator.StaticSynapse(weight=10, delay=0.5)
# Projections
connections = simulator.Projection(neuron1, neuron2, simulator.AllToAllConnector(),
                                   syn, receptor_type='excitatory')

Finally we set the current for the first neuron (the view neuron1) and the recorder as in the previous example:

# DC source
current = elect.DCSource(amplitude=3.5, start=20.0, stop=100.0)
current.inject_into(neuron1)
#neurons.inject(current)

# Record the voltage
neurons.record('v')

simulator.run(t) # Run the simulations for t ms
simulator.end()

Finally we extract the data and plot it:

# Extracts the data
data = neurons.get_data() # Creates a Neo Block
segment = data.segments[0] # Takes the first segment
vm = segment.analogsignalarrays[0] # Take the arrays

# Extract the data for neuron 1
vm1 = vm[:, 0]
vm2 = vm[:, 1]
vm3 = vm[:, 2]

# Plot the data
plt.plot(vm.times, vm1, label='pre-neuron')
plt.hold('on')
plt.plot(vm.times, vm2, label='post-neuron 1')
plt.plot(vm.times, vm3, label= 'post-neuron 2')

plt.xlabel('time')
plt.ylabel('Voltage')
plt.legend()

plt.show()

A particular example of the simulation running is attached next. Playing with the parameters in this example can provide a clear idea of how the model and synapse parameters work:

GSoC Open Source Brain: First Example, Integrate and Fire neuron

First Example

In this entry I am going to describe a very basic example with PyNN. Our aim is to build a system with a simple integrate-and-fire neuron under the influence of a direct current. The first thing we need to do is import the simulator that we are going to use with the instruction:

import pyNN.nest as simulator

Note that this could also be NEURON or Brian instead of NEST.

Now we need to define our model, but before that we need to do some presetting. First we set the number of neurons that our simulation is going to run and also the total time it will take:

N = 1 # Number of neurons
t = 100.0 #Simulation time

# Has to be called at the beginning of the simulation
simulator.setup(timestep=0.1, min_delay=0.1, max_delay=10)

Now we can define a neuron model for our neuron. For this example we are going to choose a leaky integrate-and-fire model with exponentially decaying post-synaptic current.

model = simulator.IF_curr_exp()
neurons = simulator.Population(N, model)

Note that once we have defined our model and our number of neurons we can define a population with them.

As a next step we define the current and inject it into the neurons

# DC source
current = simulator.DCSource(amplitude=0.5, start=20.0, stop=80.0)
#current = elect.DCSource(amplitude=0.5, start=20.0, stop=80.0)
current.inject_into(neurons)
#neurons.inject(current)

And finally we can tell our simulator to run with the instruction:

simulator.run(t) # Run the simulations for t ms

With this we have already simulated our neuron. However, as things stand right now, we are unable to visualize the trajectory in time of the voltage in our system. In order to extract the membrane potential from our data we have to declare a recorder for the voltage before running the simulation:

neurons.record('v') # Record voltage
simulator.run(t) # Run the simulations for t ms

Now, in order to extract our simulation results from the recorder we use:

data = neurons.get_data()

This returns a Neo Block. A Neo Block is a container of segments, which in turn contain the data recorded in a given experiment. In order to extract our data from the block above we use the following code:

data = neurons.get_data() # Creates a Neo Block
segment = data.segments[0] # Takes the first segment
vm = segment.analogsignalarrays[0] # Takes the membrane potential signal

Finally in order to plot our data:

import matplotlib.pyplot as plt
plt.plot(vm.times, vm)
plt.xlabel('time')
plt.ylabel('Vm')
plt.show()

Which produces the next plot:

GSoC Open Source Brain: Google Summer of Code 2014 Presentation

Presentation

Introduction

My name is Ramon Heberto Martinez and I am a student in the Erasmus Mundus Master in Complex Systems Science. I will use the coming series of entries in this blog to describe and report my progress in the project for Google Summer of Code 2014 (GSoC 2014). I have been lucky enough to have my proposal, entitled "Open source, cross simulator, large scale cortical models", accepted by the International Neuroinformatics Coordinating Facility (INCF). This project will be co-mentored by Andrew Davison at the INCF's French branch and Padraig Gleeson from the INCF branch in the UK.

The Project

As we advance the study of the brain we require more powerful tools to study it. In particular, more powerful computational tools have become available, as well as more elaborate simulation environments. An example of these efforts is the Open Source Brain (OSB) project, which provides a space where computational models can be built collaboratively and shared with open standards such as PyNN [Davison et al., 2008] and NeuroML [Gleeson et al., 2010].

On the other hand, there is a lack of well-tested open models that can serve as benchmarks to test and reliably compare the capabilities of the different environments. It is the spirit of this project to try to reduce the lack of such models. In particular, this project will consist of developing models of the visual system, which, at the time of writing, is an area not very well covered by the OSB project so far.

The work will specifically consist of developing the code for the models below in PyNN and releasing it as free code on the Open Source Brain platform. Papers:

  • Different roles for simple-cell and complex-cell inhibition in V1 [Lauritzen and Miller, 2003].
  • Inhibitory stabilization of the cortical network underlies visual surround suppression [Ozeki et al., 2009].
  • Feedforward origins of response variability underlying contrast invariant orientation tuning in cat visual cortex [Sadagopan and Ferster, 2012].
The common theme of these papers is a thorough characterization of the properties of the visual system (V1), in particular the orientation invariance property.

References and Links

References

  • Andrew P Davison, Daniel Brüderle, Jochen Eppler, Jens Kremkow, Eilif Muller, Dejan Pecevski, Laurent Perrinet, and Pierre Yger. PyNN: a common interface for neuronal network simulators. Frontiers in Neuroinformatics, 2, 2008.
  • Padraig Gleeson, Sharon Crook, Robert C Cannon, Michael L Hines, Guy O Billings, Matteo Farinella, Thomas M Morse, Andrew P Davison, Subhasis Ray, Upinder S Bhalla, et al. NeuroML: a language for describing data-driven models of neurons and networks with a high degree of biological detail. PLoS Computational Biology, 6(6):e1000815, 2010.
  • Thomas Z Lauritzen and Kenneth D Miller. Different roles for simple-cell and complex-cell inhibition in V1. The Journal of Neuroscience, 23(32):10201–10213, 2003.
  • Hirofumi Ozeki, Ian M Finn, Evan S Schaffer, Kenneth D Miller, and David Ferster. Inhibitory stabilization of the cortical network underlies visual surround suppression. Neuron, 62(4):578–592, 2009.
  • Roger D Peng. Reproducible research in computational science. Science, 334(6060):1226, 2011.
  • Srivatsun Sadagopan and David Ferster. Feedforward origins of response variability underlying contrast invariant orientation tuning in cat visual cortex. Neuron, 74(5):911–923, 2012.

Links

Project Page at INCF
PyNN
NeuroML
Open Source Brain Project
Neural Ensemble
International Neuroinformatics Coordinating Facility
Google Summer of Code 2014

Friday, April 18, 2014

Sumatra 0.6 released

We would like to announce the release of version 0.6.0 of Sumatra, a tool for automated tracking of simulations and computational analyses so as to be able to easily replicate them at a later date.

This version of Sumatra sees many new features, improvements to existing ones, and bug fixes:

New export formats: LaTeX and bash script

You can now export your project history as a LaTeX document, allowing you to easily generate a PDF listing details of all the simulations or data analyses you've run.

    $ smt list -l -f latex > myproject.tex

Using tags, you can also restrict the output to only a subset of the history.

You can also generate a shell script, which can be executed to repeat the sequence of computations captured by Sumatra, or saved as an executable record of your work. The shell script includes all version control commands needed to ensure the correct version of the code is used at each step.

    $ smt list -l -f shell > myproject.sh

Working with multiple VCS branches

Previous versions of Sumatra would always update to the latest version of the code in your version control repository before running the computation.

Now, "smt run" will never change the working copy, which makes it much easier to work with multiple version control branches and to go back to running earlier versions of your code.

Programs that read from stdin or write to stdout

Programs that read from standard input or write to standard output can now be run with "smt run". For example, if the program is normally run using:

    $ myprog < input > output

You can run it with Sumatra using:

    $ smt run -e myprog -i input -o output

Improvements to the Web browser interface

There have been a number of small improvements to the browser interface:

  • a slightly more compact and easier-to-read (table-like) layout for parameters;
  • the working directory is now displayed in the record detail view;
  • the interface now works if record labels contain spaces or forward slashes.

The minimal Django version needed is now 1.4.

Support for PostgreSQL

The default record store for Sumatra is based on the Django ORM, using SQLite as the backend. It is now possible to use PostgreSQL instead of SQLite, which gives better performance and allows the Sumatra record store to be placed on a separate server.

To set up a new Sumatra project using PostgreSQL, you will first have to create a database using the PostgreSQL tools ("psql", etc.). You then configure Sumatra as follows:

    $ smt init --store=postgres://username:password@hostname/databasename MyProject

Note that the database tables will not be created until after the first "smt run".

Integration with the SLURM resource manager

Preliminary support for launching MPI computations via SLURM has been added.

  $ smt configure --launch_mode=slurm-mpi
  $ smt run -n 256 input_file1 input_file2

This will launch 256 tasks using "salloc" and "mpiexec".

Command-line options for SLURM can also be set using "smt configure"

  $ smt configure --launch_mode_options=" --tasks-per-node=1"

If you are using "mpiexec" on its own, without a resource manager, you can set MPI command-line options in the same way.

Improvements to the @capture decorator

The "@capture" decorator, which makes it easy to add Sumatra support to Python scripts (as an alternative to using the "smt" command), now captures stdout and stderr.

Improvements to parameter file handling

Where a parameter file has a standard mime type (like json, yaml), Sumatra uses the appropriate extension if rewriting the file (e.g. to add parameters specified on the command line), rather than the generic ".param".

Migrating data files

If you move your input and/or output data files, either within the filesystem on your current computer or to a new computer, you need to tell Sumatra about it so that it can still find your files. For this there is a new command, "smt migrate". This command also handles changes to the location of the data archive, if you are using one, and changes to the base URL of any mirrored data stores. For usage information, run:

    $ smt help migrate

Miscellaneous improvements

  • "smt run" now passes unknown keyword args on to the user program. There is also a new option '--plain' which prevents arguments of the form "x=y" being interpreted by Sumatra; instead they are passed straight through to the program command line.
  • When repeating a computation, the label of the original is now stored in the 'repeats' attribute of 'Record', rather than appending "_repeat" to the original label. The new record will get a new unique label, or a label specified by the user. This means a record can be repeated more than once, and is a more reliable method of indicating a repeat.
  • A Sumatra project now knows the version of Sumatra with which it was created.
  • "smt list" now has an option '-r'/'--reverse' which lists records oldest to newest.

Bug fixes

A fair number of bugs have been fixed.

Download, support and documentation

The easiest way to get the latest version of Sumatra is

  $ pip install sumatra

Alternatively, Sumatra 0.6.0 may be downloaded from PyPI or from the INCF Software Center. Support is available from the sumatra-users Google Group. Full documentation is available on pythonhosted.org.

Monday, March 3, 2014

Docker image to run the software neural simulators used in BrainScaleS

Docker is like a virtual machine (VirtualBox, VMware, ...) but with one huge advantage for anyone wanting to run a neural simulator on their own computer: being based on Linux containers, a Docker image can use all the resources (CPUs, RAM) of your host computer.
That's why we have decided to put all the tools needed to run the software neural simulators used in the European project BrainScaleS into an openly accessible docker image. To use it, please follow the official step-by-step guide.
This Docker image (brainscales/neural-networks:software) contains the following neural simulators and the associated tools (openmpi, visualization,...):
  • NEST (version 2.2.2)
  • NEURON (version 7.3)
  • Brian (latest version from PyPI, currently 1.4.1)
  • PyNN (latest version from PyPI, currently 0.7.5), the simulator-independent modelling interface (it uses either NEST, Brian or NEURON as a backend),
  • MUSIC, the multi-simulation coordinator (multiconn branch). Note that, in this image, only NEURON has been compiled with MUSIC.
To use this image, you need to install Docker on your host computer. On an Ubuntu computer, this means installing lxc-docker:
  $ sudo apt-get install lxc-docker
Then, you download the image from the Docker index repository:
  $ sudo docker pull brainscales/neural-networks:software
You can now follow this step-by-step guide to run the examples from all these neural simulators using all the resources from your host computer, without having to install other software than Docker itself and a ssh client. Enjoy!

Joël Chavas and Andrew Davison.

Wednesday, January 22, 2014

Tools for cortical modelling from the Bednar Lab

Jim Bednar writes:

"We are pleased to announce the availability of a comprehensive new example of modelling topographic maps in the visual cortex, suitable as a ready-to-run starting point for future research. This example consists of:

1. A new J. Neuroscience paper (Stevens et al. 2013a) describing the GCAL model and showing that it is stable, robust, and adaptive, developing orientation maps like those observed in ferret V1.

2. A new open-source Python software package, Lancet, for launching simulations and collating the results into publishable figures.

3. A new Frontiers in Neuroinformatics paper (Stevens et al. 2013b) describing a lightweight and practical workflow for doing reproducible research using Lancet and IPython.

4. An IPython notebook showing the precise steps necessary to run the 842 simulations required to reproduce the complete set of figures and text of Stevens et al. (2013a), using the Topographica simulator.

5. A family of Python packages that were once part of Topographica but are now usable by a broader audience. These packages include 'param' for specifying parameters declaratively, 'imagen' for defining 0D, 1D, and 2D distributions (such as visual stimuli), and  'featuremapper' for analyzing the activity of neural populations  (e.g. to estimate receptive fields, feature maps, or tuning curves).

The resulting recipe for building mechanistic models of cortical map development should be an excellent way for new researchers to start doing work in this area.

Jean-Luc R. Stevens, Judith S. Law, Jan Antolik, Philipp Rudiger, Chris Ball, and James A. Bednar

Computational Systems Neuroscience Group
The University of Edinburgh


_______________________________________________________________________________


1. STEVENS et al. 2013a: GCAL model

Our recent paper:

Jean-Luc R. Stevens, Judith S. Law, Jan Antolik, and James A. Bednar. Mechanisms for stable, robust, and adaptive  development of orientation maps in the primary visual cortex. Journal of Neuroscience, 33:15747-15766, 2013.  http://dx.doi.org/10.1523/JNEUROSCI.1037-13.2013

shows how the GCAL model was designed to replace previous models of V1 development that were unstable and not robust. The model in this paper accurately reproduces the process of orientation map development in ferrets, as illustrated in this animation comparing GCAL, a simpler model, and chronic optical imaging data from ferrets.


2. LANCET

Lancet is a lightweight Python package that offers a set of flexible components to allow researchers to declare their intentions succinctly and reproducibly. Lancet makes it easy to specify a parameter space, run jobs, and collate the output from an external simulator or analysis tool. The approach is fully general, to allow the researcher to switch between different software tools and platforms as necessary.


3. STEVENS et al. 2013b: LANCET/IPYTHON workflow

Jean-Luc R. Stevens, Marco I. Elver, and James A. Bednar.  An Automated and Reproducible Workflow for Running and Analyzing  Neural Simulations Using Lancet and IPython Notebook. Frontiers in Neuroinformatics, in press, 2013. http://www.frontiersin.org/Journal/10.3389/fninf.2013.00044/abstract

Lancet is designed to integrate well into an exploratory workflow within the Notebook environment offered by the IPython project. In an IPython notebook, you can generate data, carry out analyses, and plot the results interactively, with a complete record of all the code used. Together with Lancet, it becomes practical to automate every step needed to generate a publication within IPython Notebook, concisely and reproducibly. This new paper describes the reproducible workflow and shows how to use it in your own projects.


4. NOTEBOOKS for Stevens et al. 2013a

As an extended example of how to use Lancet with IPython to do reproducible research, the complete recipe for reproducing Stevens et al. 2013a is available in models/stevens.jn13 of Topographica's GitHub repository. The first of two notebooks defines the model, alternating between code specification, a textual description of the key model properties with figures, and interactive visualization of the model's initial weights and training stimuli. The second notebook can be run to quickly generate the last three published figures (at half resolution) but can also launch all 842 high-quality simulations needed to reproduce all the published figures in the paper. Static copies of these notebooks, along with instructions for downloading runnable versions, can be viewed here:

  http://topographica.org/_static/gcal.html
  http://topographica.org/_static/stevens_jn13.html

5. PARAM, IMAGEN, and FEATUREMAPPER

The Topographica simulator has been refactored into several fully independent Python projects available on GitHub (http://ioam.github.io). These projects are intended to be useful to a wide audience of both computational and experimental neuroscientists:

  param: The parameters offered by param allow scientific Python programs to be written declaratively, with type and range checking,  optional documentation strings, dynamically generated values,  default values and many other features.

  imagen: Imagen offers a set of 0D,1D and 2D pattern distributions. These patterns may be procedurally generated or  loaded from files. They can be used to generate simple scalar values, such as values drawn from a specific random distribution,  or for generating complex, resolution-independent composite image pattern distributions typically used as visual stimuli (e.g. Gabor and Gaussian patches or masked sinusoidal gratings).


  featuremapper: Featuremapper allows the response properties of a neural population to be measured from any simulator or experimental setup that can give estimates of the neural activity values in  response to an input pattern. Featuremapper may be used to measure preference and selectivity maps for various stimulus features (e.g  orientation and direction of visual stimuli, or frequency for  auditory stimuli), to compute tuning curves for these features, or  to measure receptive fields, regardless of the underlying implementation of the model or experimental setup."