Simulating Spiking Neural P Systems with Vectors, a Matrix, and the GPU.

WARNING: LONG POST

tl;dr Turing started it, Computational model based on neurons, Math magic, GPU magic

The Turing Machine is perhaps the most well-known computational model out there. It is the theoretical basis of the modern computer, which is one of, if not the, most important inventions of humankind. The Turing Machine, however, is not perfect. There are problems that are hard to solve on a Turing Machine, namely the problems that are in NP but not in P, assuming of course that P \neq NP.

Because of this, people started to look for ways around the problem. Alan Turing himself wrote papers, both published and unpublished, to move this effort forward. In one of his unpublished papers, he proposed two types of randomly connected neural networks, one key feature being the possibility of learning and training them to solve problems [1]; sound familiar? The same year, he also wrote a paper proposing genetical or evolutionary search [1]. But perhaps Turing's most important proposal in this effort was his idea of a machine beyond the Turing Machine, which he dubbed the O-Machine: basically a Turing Machine with an oracle that can solve any decision problem within a class of problems, such as NP, for free [1].

FUN FACT!

The paper where Turing proposed neural networks wasn’t published because his boss at the time dismissed it as a “schoolboy essay”. Do you know who his boss was? Sir Charles Darwin: not the Charles Darwin who wrote On the Origin of Species, but rather his grandson [1].

In recent years, many alternative computational models have been proposed, including the quantum computer. In this entry, though, our model of interest is the Spiking Neural P System, or SNP for short.

So what is an SNP? In not-so-layman’s terms, an SNP is a computational model proposed in [4] based on neural systems, and as such it is made up of interconnected units called neurons, where each connection is called a synapse. Each neuron receives, processes, and sends signals called spikes, represented by a^n where n is the number of spikes, depending on some rules within each neuron. To better understand this, here’s a picture of a small SNP:

Source: our presentation (yes, I made that)

Figure 1. An SNP System comparing two numbers: 10 and 5

Before we explore how to simulate an SNP System, we must first define it formally.

A Spiking Neural P System \Pi is defined as [2]:

\Pi = (O, \sigma_1, ..., \sigma_m, syn, in, out) where,

  1. O is the alphabet \{a\} where a represents a spike,
  2. \sigma_1, ..., \sigma_m are neurons which are 2-tuples defined as \sigma_i = (n_i, R_i), 1 \leq i \leq m where n_i is the initial number of spikes in the neuron and R_i is a finite set of rules in the neuron,
  3. syn \subseteq \{1, ..., m\} \times \{1, ..., m\} with (i, i) \notin syn is the set of synapses, each of the form (i, j),
  4. in \in \{1, ..., m\} is the label for the input neuron, and
  5. out \in \{1, ..., m\} is the label for the output neuron

To not make this post any longer, the tl;dr of the rules is: based on the number of spikes inside a neuron, that neuron can either produce spikes to send to its outwardly connected neurons or remove all the spikes inside itself. One thing to note here is that computation in an SNP is massively parallel, that is, each neuron does its own computation at every step of the overall computation.
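To make the rules a bit more concrete, here’s a toy sketch of a single neuron taking one step. The names and data layout here are my own invention for illustration, not from the papers: a firing rule E/a^c \to a^p applies when the neuron’s spike count, written as the string "a"*n, matches the regular expression E, consuming c spikes and emitting p; a forgetting rule a^s \to \lambda applies when the neuron holds exactly s spikes and wipes them out.

```python
import re

def step_neuron(spikes, rules):
    """Apply the first applicable rule; return (new_spike_count, spikes_emitted).

    Illustrative sketch only: rules are dicts of my own design, with a firing
    rule E/a^c -> a^p matching the spike count (as "a"*n) against regex E,
    and a forgetting rule a^s -> lambda removing all spikes when count == s.
    """
    for rule in rules:
        if rule["kind"] == "fire" and re.fullmatch(rule["E"], "a" * spikes):
            return spikes - rule["c"], rule["p"]
        if rule["kind"] == "forget" and spikes == rule["s"]:
            return 0, 0
    return spikes, 0  # no applicable rule: the neuron is unchanged this step

# Example: a neuron with the rule a(aa)*/a -> a, which fires on any odd
# spike count, consuming one spike and emitting one.
rules = [{"kind": "fire", "E": "a(aa)*", "c": 1, "p": 1}]
print(step_neuron(3, rules))  # (2, 1): one spike consumed, one sent out
print(step_neuron(2, rules))  # (2, 0): even count, rule not applicable
```

In a real SNP every neuron takes such a step simultaneously, which is exactly the parallelism mentioned above.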

SNPs come in various sizes and functions. Take, for example, the SNP in Figure 1, which actually implements a 2-input bitonic sort algorithm in SNP [3]. The SNP in Figure 1 is quite trivial to simulate on paper, but what if we increase it in size, let’s say to 4 inputs:


Figure 2. The structure of a 4-input SNP implementing bitonic sort

Now there are 36 neurons, 32 of which have rules of their own; fret not, this is still doable, trust me. How about 8 inputs? 144 neurons would take some time. 16 inputs? 480 neurons. 32? 64? 128? Now you see the problem? Some SNPs are so large that it would take a considerable amount of time for us mere human beings to simulate them on paper. The problem lies in how we usually solve these things with pen and paper: linearly, or in this case, one neuron at a time; this is not how an SNP is supposed to be processed. This is the same reason it is hard to simulate an SNP on a CPU; even if your CPU has 4 cores, it is still quite problematic. The proposed solution? Use a GPU.

Again, to prevent this post from getting any longer, the tl;dr of CPU vs. GPU is: a CPU is good for a few complex, sequential tasks; a GPU is good for many simple tasks done in parallel.

So, how do we simulate an SNP on a GPU? We first represent the SNP as several vectors, like the configuration vector at step k, C_k = (n_1^{(k)}, ..., n_m^{(k)}), which records how many spikes are in neurons \sigma_1, ..., \sigma_m at step k, and a matrix, specifically the Spiking Transition Matrix, which basically encodes the effect of applying each neuron’s rules [2]. Then we do some math magic and we’re good. That’s the theory part at least; we still have to code it though.
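The “math magic” from [2] boils down to one linear-algebra step: C_{k+1} = C_k + s_k \cdot M, where s_k is a spiking vector marking which rules fire at step k and M is the spiking transition matrix (rows are rules, columns are neurons). Here is a minimal numpy sketch on a tiny 2-neuron system of my own devising, just to show the update in action:

```python
import numpy as np

# Spiking transition matrix M (rows = rules, columns = neurons), following
# the scheme in [2]: M[i][j] = -c if rule i lives in neuron j and consumes
# c spikes, +p if rule i's neuron has a synapse into neuron j and sends p
# spikes there, and 0 otherwise. The system below is a toy example of mine:
# sigma_1 has rule r1 = a/a -> a with a synapse to sigma_2, and sigma_2 has
# the forgetting rule r2 = a -> lambda.
M = np.array([[-1,  1],   # r1: consumes 1 spike in sigma_1, sends 1 to sigma_2
              [ 0, -1]])  # r2: forgets 1 spike in sigma_2

C = np.array([2, 1])      # configuration vector C_0: spikes in each neuron
s = np.array([1, 1])      # spiking vector s_0: both rules fire this step

C_next = C + s @ M        # one step of the whole system, all neurons at once
print(C_next)             # [1 1]
```

The whole system advances in a single vector-matrix multiply-add, no matter how many neurons there are, which is precisely the kind of operation a GPU eats for breakfast.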

tl;dr of the coding part: we do most of the math on the GPU and the, uhm, giving-instructions part on the CPU. tl;dr of the tl;dr: the CPU is the master and the GPU is the math slave.
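Here is a hedged sketch of that master/worker split. Plain numpy stands in for the GPU below; in a real simulator like the OpenCL one in [3], the multiply-add line would be a kernel launched on the device, while the CPU keeps its orchestration job of picking spiking vectors and deciding when to halt. The toy system and the rule-selection policy are my own simplifications, not the actual simulator’s code:

```python
import numpy as np

def simulate(C, M, choose_spiking_vector, steps):
    """CPU side: drive the computation one step at a time."""
    for _ in range(steps):
        s = choose_spiking_vector(C)   # CPU: decide which rules fire
        C = C + s @ M                  # "GPU": the parallel multiply-add
        if not C.any():                # CPU: halt once no spikes remain
            break
    return C

# Toy 2-neuron system: rule r1 = a/a -> a in sigma_1 with a synapse to
# sigma_2, and forgetting rule r2 = a -> lambda in sigma_2.
M = np.array([[-1,  1],
              [ 0, -1]])

def select(C):
    # Fire every rule whose neuron holds at least one spike. This is a
    # simplification: real applicability is governed by each rule's regular
    # expression, and nondeterministic choices can arise.
    return (C >= 1).astype(int)

print(simulate(np.array([2, 1]), M, select, steps=10))  # [0 0]
```

The point of the division: the per-step arithmetic is uniform across all neurons (perfect for the GPU), while the branching, bookkeeping, and stopping logic stay on the CPU.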

So there you have it: a brief history of non-Turing computation, a very brief introduction to Spiking Neural P Systems, and a very, very brief overview of simulating Spiking Neural P Systems.

References:

[1] Gheorghe Păun, From Cells to (Silicon) Computers, and Back, Springer, New York, 2008

[2] Xiangxiang Zeng, Henry Adorna, et al., Matrix Representation of Spiking Neural P Systems, Springer, Berlin Heidelberg, 2010

[3] G.I. Palaganas, A.R. Lagunda, F.G.C. Cabarle, H. Adorna, Spiking Neural P Systems GPU Simulation using OpenCL, Proc. 16th Philippine Computing Science Congress (PCSC 2016), 2016

[4] Mihai Ionescu, Gheorghe Păun, Takashi Yokomori, Spiking Neural P Systems, Fundamenta Informaticae, vol. 71, no. 2-3, 2006
