Learning Spatio-Temporally Encoded Pattern Transformations in Structured Spiking Neural Networks

André Grüning, Brian Gardner and Ioana Sporea
Department of Computer Science, University of Surrey, Guildford

26th August 2015 – Rev: 1677

1. Introduction
2. Background
3. Our Approach
4. Results
5. Summary


What are we doing? Formulate a supervised learning rule for spiking neural networks that can train networks containing a hidden layer of neurons, and can map arbitrary spatio-temporal input patterns onto arbitrary output spike patterns, i.e. multiple spike trains.

Why worthwhile? To understand how spike-pattern based information processing takes place in the brain. A learning rule for spiking neural networks with technical potential. To find a rule that is to spiking networks what backprop is to rate-neuron networks.


Scientific Area

Where are we scientifically? In the middle of nowhere between: computational neuroscience, cognitive science, and artificial intelligence / machine learning.


Spiking Neurons

Spiking neurons: real neurons communicate with each other via sequences of pulses – spikes.

[Figure: (a) the dendritic tree, axon and cell body of a neuron; (b) incoming spikes on various dendrites elicit timed spike responses as the output; (c) the response of the membrane potential u to incoming spikes – as spikes arrive from other neurons, the membrane potential rises; if the threshold θ is crossed, a spike is fired and the membrane potential is reset to a low value.]

From André Grüning and Sander Bohte. Spiking neural networks: Principles and challenges. In Proceedings of the 22nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning – ESANN, Brugge, 2014. Invited contribution.
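The threshold-and-reset behaviour described above can be sketched as a minimal leaky integrate-and-fire simulation. This is an illustrative toy model, not the neuron model used later in the talk; all parameter values are assumptions chosen for demonstration.

```python
import numpy as np

def lif_neuron(input_spikes, dt=0.1, tau_m=10.0, theta=1.0,
               u_reset=0.0, w=0.5):
    """Leaky integrate-and-fire sketch: each input spike pushes the
    membrane potential u upwards, u leaks back towards rest, and
    crossing the threshold theta emits an output spike and resets u.
    All parameters are illustrative assumptions."""
    n_steps = len(input_spikes)
    u = np.zeros(n_steps)
    output_spikes = np.zeros(n_steps, dtype=int)
    for t in range(1, n_steps):
        # Leak towards rest plus synaptic input from incoming spikes
        du = -u[t - 1] / tau_m + w * input_spikes[t] / dt
        u[t] = u[t - 1] + du * dt
        if u[t] >= theta:          # threshold crossed: fire and reset
            output_spikes[t] = 1
            u[t] = u_reset
    return u, output_spikes
```

A burst of closely spaced input spikes drives the potential over threshold, whereas isolated spikes decay away without producing output.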


Spiking Neurons

Spiking Information Processing

The precise timing of spikes generated by neurons conveys meaningful information. Synaptic plasticity forms the basis of learning: changes in synaptic strength depend on the relative pre- and postsynaptic spike times, and on third signals. Challenge: to relate such localised plasticity changes to learning at the network level.
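Dependence of synaptic change on relative pre- and postsynaptic spike times is commonly modelled by pair-based spike-timing-dependent plasticity (STDP). The sketch below shows that standard pair-based model as context, not the learning rule developed in this talk; amplitudes and time constants are illustrative assumptions.

```python
import numpy as np

def stdp_dw(pre_times, post_times, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP sketch: a synapse is potentiated when a
    presynaptic spike precedes a postsynaptic spike, and depressed
    when the order is reversed, with an exponentially decaying
    dependence on the spike-time difference. Parameters are
    illustrative assumptions."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            delta = t_post - t_pre
            if delta > 0:      # pre before post: potentiation
                dw += a_plus * np.exp(-delta / tau_plus)
            elif delta < 0:    # post before pre: depression
                dw -= a_minus * np.exp(delta / tau_minus)
    return dw
```

For example, a presynaptic spike at 10 ms followed by a postsynaptic spike at 15 ms yields a positive weight change; reversing the order yields a negative one.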


Learning for Spiking NN

General learning algorithms for spiking NN? There is no general-purpose learning algorithm for spiking neural networks. Challenge: the discontinuous nature of spiking events. Various supervised learning algorithms exist, each with its own limitations, e.g. in network topology, adaptability (e.g. reservoir computing), or spike encoding (e.g. latency coding, or spike vs. no spike).

Most focus on classification rather than more challenging tasks such as mapping from one spike train to another.


Some Learning Algorithms for Spiking NN

SpikeProp [1], ReSuMe [2], Tempotron [3], Chronotron [4], SPAN [5], Urbanczik and Senn [6], Brea et al. [7], Frémaux et al. [8], . . .

[1] S.M. Bohte, J.N. Kok, and H. La Poutré. Spike-prop: error-backpropagation in multi-layer networks of spiking neurons. Neurocomputing, 48(1–4):17–37, 2002.

[2] Filip Ponulak and Andrzej Kasiński. Supervised learning in spiking neural networks with ReSuMe: Sequence learning, classification and spike shifting. Neural Computation, 22:467–510, 2010.

[3] Robert Gütig and Haim Sompolinsky. The tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience, 9(3), 2006. doi: 10.1038/nn1643.

[4] Răzvan V. Florian. The chronotron: A neuron that learns to fire temporally precise spike patterns. PLoS ONE, 7(8):e40233, 2012.

[5] A. Mohemmed, S. Schliebs, and N. Kasabov. SPAN: Spike pattern association neuron for learning spatio-temporal sequences. Int. J. Neural Systems, 2011.

[6] R. Urbanczik and W. Senn. A gradient learning rule for the tempotron. Neural Computation, 21:340–352, 2009.

[7] Johanni Brea, Walter Senn, and Jean-Pascal Pfister. Matching recall and storage in sequence learning with spiking neural networks. The Journal of Neuroscience, 33(23):9565–9575, 2013.

[8] Nicolas Frémaux, Henning Sprekeler, and Wulfram Gerstner. Functional requirements for reward-modulated spike-timing-dependent plasticity. The Journal of Neuroscience, 30(40):13326–13337, 2010.


Our Approach

MultilayerSpiker

Generalise backpropagation to spiking neural networks with hidden neurons. Use a stochastic neuron model to connect smooth quantities (for which derivatives exist) with discrete spike trains (for which no derivative exists).
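The role of the stochastic neuron model can be illustrated with an exponential escape-rate function, a common choice for stochastic spiking neurons: the instantaneous firing probability is a smooth function of the membrane potential, so it can be differentiated even though the sampled spikes are discrete. This is a generic sketch of the idea, not the specific formulation of MultilayerSpiker; the functional form and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def escape_rate(u, rho0=0.01, theta=1.0, delta_u=0.2):
    """Smooth (differentiable) instantaneous firing rate as a function
    of the membrane potential u: an exponential escape rate. The
    parameters rho0, theta, delta_u are illustrative assumptions."""
    return rho0 * np.exp((u - theta) / delta_u)

def spike_prob(u, dt=0.1):
    """Probability of emitting a spike within a small time step dt,
    given membrane potential u."""
    return 1.0 - np.exp(-escape_rate(u) * dt)

# Because spike_prob is smooth in u, gradients of an objective can be
# taken with respect to u (and hence the weights), even though the
# sampled spike itself is a discrete event:
u = 1.2
fired = rng.random() < spike_prob(u)
```

The spike sample `fired` is binary, but the quantity being learned on, `spike_prob(u)`, varies smoothly with the membrane potential.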


Neuron model

Membrane potential:

u_o(t) := \sum_h w_{oh} \int_0^t Y_h(t')\, \epsilon(t - t')\, \mathrm{d}t' + \int_0^t Z_o(t')\, \kappa(t - t')\, \mathrm{d}t' ,  (1)

where o indexes postsynaptic neurons and h presynaptic neurons; u_o is the membrane potential of o; w_{oh} is the strength of the synaptic connection from h to o; Y_h(t) = \sum_{t_h} \delta(t - t_h) is the spike train of presynaptic neuron h; \epsilon is the postsynaptic potential kernel; Z_o is the output spike train of o; and \kappa is the reset kernel.
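Eq. (1) can be evaluated numerically on a time grid: since the spike trains are sums of delta functions, the integrals reduce to sums of kernel values, computed here by discrete convolution. The specific kernel shapes and time constants below are illustrative assumptions, not those specified in the talk.

```python
import numpy as np

def membrane_potential(Y, Z_o, w_o, dt=0.1, tau_m=10.0,
                       tau_s=2.5, tau_r=10.0):
    """Discretised evaluation of Eq. (1): u_o(t) is the weighted sum of
    PSP kernels epsilon driven by the presynaptic spike trains Y_h,
    plus a reset kernel kappa driven by the neuron's own output spike
    train Z_o. Kernel shapes and time constants are illustrative.

    Y:   array (H, T) of presynaptic spike indicators (1 per spike bin)
    Z_o: array (T,) of the neuron's own output spike indicators
    w_o: array (H,) of synaptic weights w_oh
    """
    T = Y.shape[1]
    t = np.arange(T) * dt
    # PSP kernel epsilon: difference of exponentials (assumed shape)
    eps = np.exp(-t / tau_m) - np.exp(-t / tau_s)
    # Reset kernel kappa: negative exponential (assumed shape)
    kappa = -np.exp(-t / tau_r)
    # Spike trains are sums of deltas, so the integrals in Eq. (1)
    # reduce to sums of kernel values at times since each spike:
    u = np.zeros(T)
    for h in range(Y.shape[0]):
        u += w_o[h] * np.convolve(Y[h], eps)[:T]
    u += np.convolve(Z_o, kappa)[:T]
    return u
```

With a single presynaptic spike and no output spikes, the result is just a weighted PSP kernel starting at the spike time, as Eq. (1) prescribes.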