Theoretical Computer Science 529 (2014) 82–95
Spiking neural P systems with rules on synapses

Tao Song (a), Linqiang Pan (a,*), Gheorghe Păun (b)

(a) Key Laboratory of Image Processing and Intelligent Control, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China
(b) Institute of Mathematics of the Romanian Academy, P.O. Box 1-764, 014700 Bucureşti, Romania
Article info

Article history: Received 4 September 2013; received in revised form 12 December 2013; accepted 7 January 2014. Communicated by M.J. Pérez-Jiménez.

Keywords: Membrane computing; Spiking neural P system; Rule on synapse; Small universal system
Abstract

Spiking neural P systems (SN P systems, for short) are a class of membrane systems inspired from the way the neurons process information and communicate by means of spikes. In this paper, we introduce and investigate a new class of SN P systems, with spiking rules placed on synapses. The computational completeness is first proved, then two small universal SN P systems with rules on synapses for computing functions are constructed. Specifically, when using standard spiking rules, we obtain a universal system with 39 neurons, while when using extended spiking rules on synapses, a universal SN P system with 30 neurons is constructed.
1. Introduction

Spiking neural P systems (SN P systems, for short) were introduced in [3] as a class of parallel and distributed computation models in membrane computing. They were abstracted from the way the neurons process information and communicate with each other by sending spikes along synapses. An SN P system can be represented by a directed graph, where neurons are placed in the nodes and each neuron sends spikes to its neighbor neurons along the synapses (represented by the arcs of the graph). Each neuron can contain a number of copies of a single object type (called the spike), together with spiking rules and forgetting rules. Using its rules, a neuron can send information (in the form of spikes) to other neurons. One of the neurons is the output neuron, and its spikes are also sent to the environment. When a spike is emitted by the output neuron, we mark that time with 1; otherwise, we mark it with 0. This binary sequence is called the spike train of the system – it might be infinite if the computation does not stop. A result can be associated with a computation in various ways: for example, as the number of spikes sent to the environment (in the case of halting computations), or as the time elapsed between the first two consecutive spikes sent to the environment by the (output neuron of the) system.

Many computational properties of SN P systems have been investigated. Many variants of SN P systems were proved to be Turing universal as number accepting or generating devices [1,3,20,15], language generators [2,16,21], and function computing devices [13,14]. SN P systems can also be used to (theoretically) solve computationally hard problems in a feasible (polynomial or linear) time [4,7,11].

In the present work, we introduce a new class of SN P systems, namely, SN P systems with rules on synapses. In SN P systems of this type, the neurons contain only spikes, while the rules are moved onto the synapses. At any step, when the number of spikes in a given neuron is "recognized" by a rule on a synapse leaving from that neuron, the rule is enabled: a number of spikes are consumed in the neuron and a number of spikes are sent to the neuron at the end of the synapse. More precise definitions will be given in Section 2.
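To fix intuitions, here is a minimal sketch (our own illustration, not part of the formal model) of how a result is read off a spike train under the second convention, the time elapsed between the first two spikes:

# Decoding the result of a computation from a spike train, under the
# convention that the result is the time elapsed between the first two
# spikes sent to the environment (a minimal sketch, not from the paper).

def result_from_spike_train(train):
    """train: iterable of 0/1 values, one per time step."""
    spike_steps = [t for t, bit in enumerate(train) if bit == 1]
    if len(spike_steps) < 2:
        raise ValueError("fewer than two spikes emitted")
    return spike_steps[1] - spike_steps[0]

# Example: the spike train 0 0 1 0 0 0 1 encodes the number 4.
assert result_from_spike_train([0, 0, 1, 0, 0, 0, 1]) == 4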
* Corresponding author. E-mail addresses: [email protected] (L. Pan), [email protected] (G. Păun).
http://dx.doi.org/10.1016/j.tcs.2014.01.001
As expected, the SN P systems with rules on synapses are able to compute all Turing computable sets of numbers. We prove this for SN P systems with extended rules (a spiking rule can produce more than one spike), working in the generating mode, with the result of a computation being defined as the number of spikes sent to the environment.

Then, we address the question of constructing small universal SN P systems with rules on synapses. This is a much investigated topic for Turing machines and for other computing devices, and many small universal systems have been reported: small universal Turing machines [18], small deterministic Turing machines [6], small universal register machines [5], and small universal cellular automata [10]. This topic was also considered for P systems, in particular, for SN P systems: in [13], two universal SN P systems with standard and extended spiking rules were obtained. Then, several improved universal SN P systems (with a smaller number of neurons) for computing functions and generating sets of numbers were presented in [9,12,22]. In this paper, we construct two small universal SN P systems with rules on synapses for computing functions. When using standard spiking rules, a universal system having 39 neurons is obtained; when using extended spiking rules, we get a universal system with 30 neurons. The strategy used in this work is the same as that in [13]: the universal systems are obtained by simulating one of the universal register machines from [5]. As an overall observation, placing the spiking and forgetting rules on synapses proves to be a powerful feature: both simpler proofs and smaller universal systems are obtained in comparison with the case when the rules are placed in the neurons.

2. SN P systems with rules on synapses

Before introducing the SN P systems with rules on synapses, we recall some prerequisites. It is useful for readers to have some familiarity with basic elements of formal language theory, e.g., from [19], as well as with basic concepts and notions of SN P systems [3,17].

For an alphabet Σ, Σ* denotes the set of all finite strings of symbols from Σ; the empty string is denoted by λ, and the set of all nonempty strings over Σ is denoted by Σ^+. When Σ = {a} is a singleton, we write simply a* and a^+ instead of {a}*, {a}^+, respectively.

A regular expression over an alphabet Σ is defined as follows: (i) λ and each a ∈ Σ is a regular expression; (ii) if E1, E2 are regular expressions over Σ, then (E1)(E2), (E1) ∪ (E2), and (E1)^+ are regular expressions over Σ; and (iii) nothing else is a regular expression over Σ. With each regular expression E we associate a language L(E), defined in the following way: (i) L(λ) = {λ} and L(a) = {a}, for all a ∈ Σ; (ii) L((E1) ∪ (E2)) = L(E1) ∪ L(E2), L((E1)(E2)) = L(E1)L(E2), and L((E1)^+) = (L(E1))^+, for all regular expressions E1, E2 over Σ. Unnecessary parentheses can be omitted when writing a regular expression, and (E)^+ ∪ {λ} can also be written as E*.

Let us now define the SN P systems with extended rules on synapses and with delay (in the general form, for computing functions). Such a system of degree m ≥ 1 is a construct of the form
Π = (O, σ1, σ2, ..., σm, syn, i_in, i_o), where
• O = {a} is the singleton alphabet (a is called spike);
• σ1, σ2, ..., σm are neurons of the form σi = (n_i), with 1 ≤ i ≤ m, where n_i is the initial number of spikes in neuron σi;
• syn is the set of synapses; each element in syn is a pair of the form ((i, j), R_(i,j)), where (i, j) indicates that there is a synapse connecting neurons σi and σj, with i, j ∈ {1, 2, ..., m}, i ≠ j, and R_(i,j) is a finite set of rules of the following two forms:
(1) E/a^c → a^p; d, where E is a regular expression over O, c ≥ p ≥ 1, and d ≥ 0;
(2) a^s → λ, for some s ≥ 1, with the restriction that a^s ∉ L(E) for any rule E/a^c → a^p; d from any R_(i,j);
• i_in indicates the input neuron (usually labeled by in) and i_o indicates the output neuron (usually labeled by out).

A rule E/a^c → a^p; d with p ≥ 1 is called an extended spiking rule; if p = 1, the rule is called a standard spiking rule. If E = a^c, then the rule can be written in the simplified form a^c → a^p; d; if d = 0, then the rule can be simply written as E/a^c → a^p. A rule of the form a^s → λ is called a forgetting rule.

The spiking rules are applied as follows. If E/a^c → a^p; d ∈ R_(i,j) and neuron σi contains k spikes such that a^k ∈ L(E), k ≥ c, then the rule is enabled: c spikes from neuron σi are consumed and p spikes are sent to neuron σj after a delay of d steps. If d = 0, then the p spikes are sent to the target neuron immediately. If the rule is used in step t and d ≥ 1, then in steps t, t + 1, ..., t + d − 1 the synapse is closed, that is, it cannot use any rule. In step t + d, the p spikes are sent to neuron σj, and synapse (i, j) can start to apply a rule again at step t + d + 1 (it becomes open again). When neuron σi contains exactly s spikes, a forgetting rule a^s → λ ∈ R_(i,j) is enabled; by using it, the s ≥ 1 spikes are removed from the neuron.

As usual in SN P systems, a global clock is assumed, marking the time for all neurons and synapses. In each time unit, if a synapse (i, j) can use one of its rules, then a rule from R_(i,j) must be used. It is possible that there is more than one
rule that can be used on a synapse at some moment, since two firing rules E1/a^{c1} → a^{p1}; d1 and E2/a^{c2} → a^{p2}; d2 may have L(E1) ∩ L(E2) ≠ ∅. In this case, the synapse non-deterministically chooses one of the enabled rules to use. The system works sequentially on each synapse (at most one rule from each set R_(i,j) can be used), and in parallel at the level of the system (if a synapse has at least one rule enabled, then it has to use a rule).

A delicate problem appears when several synapses starting from the same neuron have rules which can be applied. We work here with the restriction that all rules which are applied consume the same number of spikes from the given neuron. Let us assume that the applied rules on the synapses leaving from σi are of the form E_u/a^c → a^{p_u}; d_u; then c spikes are removed from σi (and not a multiple of c, according to the number of applied rules). Of course, this restriction can be replaced by another strategy: various rules can consume various numbers of spikes, and the sum of these numbers of spikes is removed from the neuron.

The configuration of the system is described by both the number of spikes in each neuron and the number of steps to wait until each synapse becomes open (this number is zero if the corresponding synapse is already open). The initial configuration of the system is n1, n2, ..., nm, 0, 0, ..., 0 (that is, in the initial configuration, all synapses in the system are open, and each neuron contains its initial spikes). Using the rules as described above, one can define transitions. Any sequence of transitions starting from the initial configuration is called a computation. A computation is successful if it reaches a configuration where no rule can be applied on any synapse (i.e., the SN P system has halted). The result of a successful computation of the system is defined as usual in SN P systems, as the number of spikes sent to the environment or as the time interval elapsed between the first two spikes sent to the environment by the output neuron.

In order to compute a function f: N^k → N by SN P systems with rules on synapses, we introduce k natural numbers n1, n2, ..., nk in the system by "reading" from the environment a spike train (a binary sequence) z = 10^{n1−1} 10^{n2−1} ··· 10^{nk−1} 1. This means that the input neuron of the system receives a spike in each step corresponding to a digit 1 from the string z; otherwise, no spike is received. Note that we input exactly k + 1 spikes, i.e., after the last spike we assume that no further spike comes to the input neuron. The result of the computation is encoded as the distance between the first two spikes emitted by the output neuron, hence producing a spike train of the form 0^b 1 0^{r−1} 1, for some b ≥ 0 and with r = f(n1, n2, ..., nk); the system outputs no spike in the first b steps of the computation.

In the following sections, SN P systems with rules on synapses are represented graphically, which is easier to understand than a symbolic description. We use a circle with the initial number of spikes inside to represent a neuron, and a directed edge with an associated set of rules to represent a synapse. The input and the output neuron have an incoming and an outgoing arrow, respectively, suggesting their communication with the environment. When the input or the output neuron is omitted, we obtain an SN P system working in the generating or the accepting mode, respectively, as usual for SN P systems with rules in neurons.
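To make these conventions concrete, the following is a minimal simulator sketch in Python (our own illustration, not code from the paper): the regular expression E is modelled as a predicate on the number of spikes, non-determinism is resolved by a random choice, and forgetting rules a^s → λ are encoded with c = s, p = 0, and a predicate checking for exactly s spikes.

import random

class Rule:
    """A rule E/a^c -> a^p; d placed on a synapse."""
    def __init__(self, enabled, consume, produce, delay=0):
        self.enabled = enabled    # predicate on spike count k: a^k in L(E)?
        self.consume = consume    # c
        self.produce = produce    # p (0 for a forgetting rule)
        self.delay = delay        # d

def step(spikes, synapses, closed, pending):
    """One global step. spikes: neuron -> spike count; synapses: (i, j) ->
    list of Rule; closed: (i, j) -> steps before the synapse re-opens;
    pending: list of [steps_left, target, p] scheduled deliveries."""
    for key in closed:                       # synapses re-open over time
        closed[key] = max(0, closed[key] - 1)
    chosen, consumed = [], {}
    for (i, j), rules in synapses.items():
        if closed.get((i, j), 0) > 0:
            continue
        enabled = [r for r in rules
                   if spikes[i] >= r.consume and r.enabled(spikes[i])]
        if not enabled:
            continue
        r = random.choice(enabled)           # non-deterministic rule choice
        # restriction from the text: every rule applied on a synapse leaving
        # neuron i consumes the same number c, removed from i only once
        assert consumed.setdefault(i, r.consume) == r.consume
        chosen.append((i, j, r))
    for i, c in consumed.items():
        spikes[i] -= c
    for i, j, r in chosen:
        pending.append([r.delay, j, r.produce])
        if r.delay > 0:                      # closed in steps t .. t+d-1,
            closed[(i, j)] = r.delay + 1     # usable again at step t+d+1
    for d in [d for d in pending if d[0] == 0]:
        spikes[d[1]] += d[2]                 # spikes reach their target
        pending.remove(d)
    for d in pending:
        d[0] -= 1
    return bool(chosen) or bool(pending)

# Toy run: a single spike travels from neuron 1 to neuron 2, then the
# system halts (a successful computation).
spikes = {1: 1, 2: 0}
synapses = {(1, 2): [Rule(lambda k: k == 1, consume=1, produce=1)]}
closed, pending = {}, []
while step(spikes, synapses, closed, pending):
    pass
assert spikes == {1: 0, 2: 1}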
Let us denote by N_s SN^n_m P the family of sets of numbers computed (generated) by SN P systems with at most m neurons and at most n rules associated with a synapse; as usual, the indices n and m are replaced with * when no bound is imposed on the respective parameter.

3. Computational completeness

We prove here that SN P systems with rules on synapses can generate all recursively enumerable sets of numbers (their family is denoted by NRE). To this aim, we simulate register machines, which are known to be equivalent with Turing machines.

A register machine is a construct M = (m, H, l0, lh, I), where m is the number of registers (the registers are labeled by 0, 1, ..., m − 1), H is the set of instruction labels, l0 is the start label, lh is the halt label (assigned to instruction HALT), and I is the set of instructions; each label from H labels only one instruction from I, thus precisely identifying it. The instructions are of the following forms:
• li: (ADD(r), lj, lk) (add 1 to register r and then go non-deterministically to one of the instructions with labels lj, lk);
• li: (SUB(r), lj, lk) (if register r is non-zero, then subtract 1 from it and go to the instruction with label lj; otherwise, go to the instruction with label lk);
• lh: HALT (the halt instruction).
A register machine M computes (generates) a number in the following way: it starts with all registers empty (i.e., storing the number zero), applies the instruction with label l0, and proceeds to apply instructions as indicated by the labels (and, in the case of SUB instructions, by the contents of the registers). If the machine reaches the halt instruction, then the number stored at that time in the first register is said to be computed by M. It is known that register machines characterize the family NRE, and this holds even if we impose that the first register is never decremented during a computation (see, e.g., [8]).

Theorem 3.1. N_s SN^2_* P = NRE.

Proof. We only prove the inclusion NRE ⊆ N_s SN^2_* P; to this aim, we construct an SN P system with rules on synapses which simulates a register machine M = (m, H, l0, lh, I). We assume that register 0, the one where the result is obtained, is never decremented.
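For concreteness, a register machine of this kind can be interpreted by a few lines of Python (our illustration; the dictionary encoding of the instruction set I is an assumption, not a construction from the paper):

import random

def run_register_machine(instructions, start='l0', halt='lh'):
    """instructions: label -> tuple. ('ADD', r, lj, lk) adds 1 to register r
    and jumps non-deterministically to lj or lk; ('SUB', r, lj, lk) subtracts
    1 and jumps to lj if register r is non-zero, otherwise jumps to lk.
    Registers start empty; the generated number is in register 0 at halt."""
    regs, label = {}, start
    while label != halt:
        op, r, lj, lk = instructions[label]
        if op == 'ADD':
            regs[r] = regs.get(r, 0) + 1
            label = random.choice((lj, lk))
        else:  # 'SUB'
            if regs.get(r, 0) > 0:
                regs[r] -= 1
                label = lj
            else:
                label = lk
    return regs.get(0, 0)

# Generates an arbitrary number n >= 1: keep adding to register 0, then halt.
M = {'l0': ('ADD', 0, 'l0', 'lh')}
print(run_register_machine(M))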
Fig. 1. The ADD module.
Table 1
The numbers of spikes in the neurons of the ADD module during the simulation of the ADD instruction, in the case that neuron σlj is finally activated.

Neuron   Step t   Step t+1   Step t+2
σli      2        0          0
σr       2n       2n+2       2n+2
σci1     0        2          0
σci2     0        0          0
σci3     0        0          0
σlj      0        0          2
σlk      0        0          0
Table 2
The numbers of spikes in the neurons of the ADD module during the simulation of the ADD instruction, in the case that neuron σlk is finally activated.

Neuron   Step t   Step t+1   Step t+2   Step t+3
σli      2        0          0          0
σr       2n       2n+2       2n+2       2n+2
σci1     0        1          0          0
σci2     0        0          1          0
σci3     0        0          1          0
σlj      0        0          0          0
σlk      0        0          0          2
For each instruction of M we construct a module of our SN P system Π. With each register of M we associate a neuron; if register r contains the number n, then this is encoded in the associated neuron σr by means of 2n spikes. With each label l ∈ H we also associate a neuron σl. Further neurons are present in the modules below. There are two distinguished neurons, σg (a "garbage collector") and σout (the output neuron). Initially, all neurons are empty, with the exception of σl0, where we place two spikes. In general, a neuron σl, l ∈ H, is active if it gets two spikes; the rules on the synapses leaving σl can then be used. When some neuron σl is active, the instruction labeled with l starts to be simulated. In what follows, we introduce the mentioned modules (we give them in graphical form, as suggested at the end of the previous section).

The module associated with an ADD instruction li: (ADD(r), lj, lk) is given in Fig. 1. It works as follows. Assume that, at step t, neuron σli contains two spikes; then both synapses leaving from this neuron fire. One of them sends two spikes to σr (and this corresponds to increasing by one the value of this register), and the other one sends one or two spikes to neuron σci1. Depending on this number, one of the neurons σlj, σlk gets two spikes, and in this way the module associated with that neuron/label becomes active. The evolution of the numbers of spikes in the neurons of the ADD module during the simulation of the ADD instruction is shown in Tables 1 and 2, which correspond to the two cases in which neuron σlj or σlk is activated, respectively.

The module associated with a SUB instruction li: (SUB(r), lj, lk) is given in Fig. 2. Assume that, at step t, neuron σli contains two spikes; then it is activated, and both σr and σci1 receive one spike. In this way, σr contains an odd number of spikes and its synapses can fire. If there is only one spike in σr (hence the register was empty), then a spike is sent to σci2,
Fig. 2. The SUB module.

Table 3
The numbers of spikes in the neurons of the SUB module during the simulation of the SUB instruction, in the case that the number stored in register r is n (n > 0).

Neuron   Step t   Step t+1   Step t+2   Step t+3
σli      2        0          0          0
σr       2n       2n+1       2n−2       2n−2
σci1     0        1          0          0
σci2     0        0          1          0
σci3     0        0          2          0
σg       –        –          –          –
σlj      0        0          0          2
σlk      0        0          0          0
Table 4
The numbers of spikes in the neurons of the SUB module during the simulation of the SUB instruction, in the case that the number stored in register r is 0.

Neuron   Step t   Step t+1   Step t+2   Step t+3
σli      2        0          0          0
σr       0        1          0          0
σci1     0        1          0          0
σci2     0        0          2          0
σci3     0        0          1          0
σg       –        –          –          –
σlj      0        0          0          0
σlk      0        0          0          2
and σlk gets two spikes. If the register r is not empty, then neuron σlj gets two spikes, through neuron σci3 (while 3 spikes are removed from σr). In both cases, the continuation of the simulation of the register machine is correct.

Note the important detail that if any of the neurons σci2, σci3 receives only one spike, the synapse having associated the rule a → a must be enabled, and in this way the spike is removed (and one spike is added to the "garbage collector" σg). This is also useful in ensuring that the SUB modules do not interfere in an unwanted way: several SUB instructions can send a spike to the same register r; if the spike does not come from the neuron σli, then neurons σci2, σci3 get only one spike, which is immediately moved into the "garbage collector", hence neither neuron σlj nor σlk is activated. The evolution of the numbers of spikes in the SUB module during the simulation of the SUB instruction is shown in Tables 3 and 4, corresponding to the two cases in which, at step t, the number stored in register r is n > 0 or 0, respectively. We do not care about the number of spikes in the "garbage collector", so we denote it by "–" in these two tables.

The module associated with the HALT instruction lh: HALT is given in Fig. 3. Assume that, at step t, neuron σlh has two spikes; then its synapse sends one spike to neuron σ0 (a register which is never decremented during the computation of M, hence this is the first time this neuron contains an odd number of spikes). Each pair of spikes will send one spike to neuron σout,
Fig. 3. The HALT module.
Table 5
The numbers of spikes in the neurons of the HALT module during the process of outputting the computation result.

Neuron   Step t   t+1     t+2     t+3     ...   t+n   t+n+1
σlh      2        0       0       0       ...   0     0
σ0       2n       2n+1    2n−1    2n−3    ...   3     1
σout     0        0       1       2       ...   n−1   n
and the process stops when only one spike remains in σ0. Thus, in the end, the output neuron contains the number stored by register 0 of M in the halting configuration. The evolution of the numbers of spikes in the neurons during the computation of the HALT module is shown in Table 5. In conclusion, M and Π compute the same set of numbers. □

It is worth mentioning that the maximal number of rules associated with each synapse in the previous proof is two, because of the need of having non-determinism in the functioning of the system. If we use an SN P system in the accepting mode (start the computation by introducing a number of spikes in a neuron and accept that number if the computation halts), then we can have only one rule associated with each synapse. Note also that the rules of the SN P system from the previous proof do not involve the delay feature, and that we do not use forgetting rules.

The result of a computation can also be defined as the number of steps elapsed between the first and the second spike sent to the environment by the output neuron; the necessary changes in the HALT module from the previous proof can be easily made.

A natural question in this framework is to find small universal SN P systems with rules on synapses. We address this issue in the next section.

4. Small universal SN P systems with rules on synapses

In this section, we construct two universal SN P systems with rules on synapses for computing functions, one for the case of standard spiking rules and one for extended rules. Because our goal is to minimize the number of neurons, in these constructions we also use the delay feature and forgetting rules. The constructions are based on simulating universal register machines as those given in [5].

A register machine M = (m, H, l0, lh, I) can compute a function f: N^k → N as follows: the arguments are introduced in special registers r1, r2, ..., rk (without loss of generality, it is assumed that we use the first k registers); the computation starts as usual, with the initial instruction l0, and, if the register machine halts (with the instruction labeled by lh), then the value of the function is placed in another specified register rt, with all registers different from rt being empty (storing the number 0). The partial function computed by a register machine M in this way is denoted by M(n1, n2, ..., nk). All Turing computable functions can be computed in this way. Moreover, the register machine can be considered deterministic without losing Turing completeness: the ADD instructions li: (ADD(r), lj, lk) have lj = lk, and we write them in the form li: (ADD(r), lj).

In [5], several universal register machines for computing functions were defined. Let (ϕ0, ϕ1, ...) be a fixed admissible enumeration of the unary partial recursive functions. A register machine Mu is said to be universal if there is a recursive function g such that for all natural numbers x, y we have ϕx(y) = Mu(g(x), y). As introduced in [5], we can use the universal register machine to compute any ϕx(y) by inputting the pair of numbers g(x) and y in registers 1 and 2, with the result obtained in register 0. In the following universality proofs, we use a specific universal register machine Mu from [5], the machine Mu = (8, H, l0, lh, I) presented in Fig. 4. This universal register machine Mu has 8 registers (numbered from 0 to 7) and 23 instructions, the last one being the halting instruction.
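To make this convention concrete, here is a deterministic variant of the interpreter sketched earlier, phrased for function computation (our illustration; universal_instructions below is a placeholder for the instruction set of Fig. 4, which is not reproduced here):

def compute(instructions, inputs, out_register=0, start='l0', halt='lh'):
    """Deterministic register machine computing a function: the inputs are
    loaded into registers 1, 2, ..., the machine runs from l0, and the
    result is read from `out_register` when the halt label is reached."""
    regs = {i + 1: v for i, v in enumerate(inputs)}
    label = start
    while label != halt:
        op, r, lj, lk = instructions[label]
        if op == 'ADD':
            regs[r] = regs.get(r, 0) + 1
            label = lj                 # deterministic ADD: lj == lk
        elif regs.get(r, 0) > 0:       # 'SUB' with non-empty register
            regs[r] -= 1
            label = lj
        else:                          # 'SUB' with empty register
            label = lk
    return regs.get(out_register, 0)

# phi_x(y) would then be obtained as compute(universal_instructions,
# [g_of_x, y]), with the result placed in register 0.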
As described above, the input numbers (the "code" of the partial recursive function to be computed and the argument for this function) are introduced in registers 1 and 2, and the result is obtained in register 0 when the machine Mu halts.

4.1. A universal SN P system with rules on synapses using standard spiking rules

Before presenting the construction, a modification should be made in Mu, because the subtraction operation on the register where the result is placed is not allowed in the construction from the previous section (nor in [3]), but register
Fig. 4. The universal register machine M u from [5].
0 of Mu is subject to such operations. That is why a further register is needed – labeled with 8 – and the halt instruction lh of Mu is replaced by the following instructions:
l22 : SUB(0), l23 , lh ,
l23 : ADD(8), l22 ,
lh : HALT.
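In ordinary code, the effect of these added instructions is simply to drain register 0 into register 8 before halting (our paraphrase, not part of the construction):

def drain_register_0(r0, r8=0):
    """Move the content of register 0 into register 8, then halt."""
    while r0 > 0:   # the SUB on register 0 succeeds while it is non-empty
        r0 -= 1
        r8 += 1     # the ADD on register 8, then back to the SUB
    return r8       # register 0 empty: branch to HALT

assert drain_register_0(5) == 5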
In this way, the obtained universal register machine Mu has 9 registers, 24 ADD and SUB instructions, and 25 labels. The result of a computation of Mu is stored in register 8.

Theorem 4.1. There is a universal SN P system with standard spiking rules (with delay) on synapses having 39 neurons for computing functions.

Proof. An SN P system with rules on synapses Π using standard spiking rules is constructed to simulate the computation of the universal register machine Mu. Specifically, the system Π consists of ADD, SUB, ADD-ADD, ADD-SUB, and SUB-ADD modules, as well as an INPUT and an OUTPUT module. The ADD and SUB modules are used to simulate the ADD and SUB instructions of Mu, while the ADD-ADD, ADD-SUB, and SUB-ADD modules simulate particular pairs of consecutive instructions in Mu. The INPUT module introduces the necessary spikes into the system by reading a spike train from the environment, and the OUTPUT module outputs the computation result (in the form of a suitable spike train).

With each register r of Mu, we associate a neuron σr; the number stored in register r is encoded by the number of spikes in neuron σr. Specifically, if register r holds the number n ≥ 0, then neuron σr contains 2n spikes. With each instruction li of Mu, a neuron σli in system Π is associated. If neuron σli has two spikes inside, the synapses leaving from it will use a rule and start to simulate the instruction li. When neuron σlh (associated with the label lh of the halting instruction of Mu)
receives two spikes, the computation of Mu has been completely simulated by the system Π; the number of steps between the first two spikes emitted to the environment by the output neuron corresponds to the result computed by Mu (stored in register 8).

The modules are given in graphical form, indicating the neurons and the synapses with their associated sets of rules. In the initial configuration, all neurons are empty. The general design of the universal SN P system with rules on synapses Π is shown in Fig. 5.

The task of loading 2g(x) spikes into neuron σ1 and 2y spikes into neuron σ2 by reading the spike train 10^{g(x)} 10^{y} 1 is carried out by the INPUT module shown in Fig. 6. The module INPUT works as follows. At the beginning, neuron σin receives one spike from the environment. The rules a → a on synapses (in, c1) and (in, c2) are enabled, and one spike is sent to each of neurons σc1 and σc2. Having one spike inside, the rules a → a on synapses (c1, c2) and (c2, c1) can be used. From that moment on, neurons σc1 and σc2 emit one spike to each other in each step. Moreover, the rules a → a on synapses (c1, 1) and (c2, 1) can also be used, sending two spikes to neuron σ1 in each step, until the second spike arrives in neurons σc1 and σc2. Neuron σin receives the second spike after g(x) steps. One step later, neurons σc1 and σc2 receive the second spike and contain two spikes. At that moment, the rules on synapses (c1, 1) and (c2, 1) are no longer enabled, hence no further spikes are sent to neuron σ1. In this way, 2g(x) spikes have been loaded into neuron σ1.

When the second spike arrives in neurons σc1 and σc2, the rules a^2/a → a on synapses (c1, c2) and (c2, c1) can be used. From that moment on, the two neurons again emit one spike to each other in each step. The rules a^2/a → a on synapses (c1, 2) and (c2, 2) are also enabled, sending two spikes to neuron σ2 in each step, until the third spike arrives in neurons σc1 and σc2. In this way, when neurons σc1 and σc2 get the third spike, 2y spikes have been loaded into neuron σ2. With three
Fig. 5. The general design of the universal SN P system with rules on synapses Π .
Fig. 6. Module INPUT.
spikes in neurons σc1 and σc2, only the rules a^3 → a on synapses (c1, l0) and (c2, l0) can be used. In this way, neuron σl0 receives two spikes and the system begins to simulate the initial instruction l0 of Mu. The evolution of the numbers of spikes in the neurons of the INPUT module during the reading of the spike train 10^{g(x)} 10^{y} 1 is shown in Table 6.

The ADD module simulating an ADD instruction li: (ADD(r), lj) is shown in Fig. 7. Suppose that at some step t, an ADD instruction li: (ADD(r), lj) has to be simulated. With two spikes in neuron σli, the rules a^2/a → a; 1 on synapses (li, r) and (li, lj) can be used, as well as the rule a^2 → a on synapse (li, li^(1)). One spike is sent to neuron σli^(1) immediately, and one spike is emitted to each of neurons σlj and σr one step later (due to the delay 1). Having one spike in neuron σli^(1), the rules a → a on synapses (li^(1), r) and (li^(1), lj) can be used, sending one spike to each of neurons σlj and σr. At step t + 2, neuron σr has received two spikes, which corresponds to the fact that the number stored in register r is increased by one. Neuron σlj receives two spikes, hence the system Π passes to simulate the instruction lj.
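As a recap of the INPUT phase described above, the following bookkeeping sketch (our illustration; it tracks only the totals loaded, not the step-by-step counts of Table 6) reproduces the 2g(x) and 2y spikes loaded into neurons σ1 and σ2:

def load_inputs(gx, y):
    """Spike totals loaded by the INPUT module while reading the train
    1 0^gx 1 0^y 1: two spikes per step go to neuron 1 while c1, c2 hold
    one spike each, and to neuron 2 while they hold two spikes each."""
    train = [1] + [0] * gx + [1] + [0] * y + [1]
    ones_seen, sigma1, sigma2 = 0, 0, 0
    for bit in train:
        ones_seen += bit
        if bit == 0 and ones_seen == 1:
            sigma1 += 2      # rules a -> a on synapses (c1, 1), (c2, 1)
        elif bit == 0 and ones_seen == 2:
            sigma2 += 2      # rules a^2/a -> a on synapses (c1, 2), (c2, 2)
    return sigma1, sigma2

assert load_inputs(4, 3) == (8, 6)   # 2g(x) and 2y spikes, as claimed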
The SUB module associated with a SUB instruction li: (SUB(r), lj, lk) is shown in Fig. 8. Suppose that at some step t, a SUB instruction li: (SUB(r), lj, lk) has to be simulated. With two spikes in neuron σli, the rules a^2/a → a; 1 on synapses (li, lj) and (li, lk) can be used, as well as the rule a^2 → a on synapse (li, r). One spike is sent to neuron σr immediately, and one spike is sent to each of neurons σlj and σlk one step later. Two cases are possible for neuron σr:

– At step t + 1, if there are 2n + 1, n > 0, spikes in neuron σr (corresponding to the fact that the number stored in register r is n > 0), then the rule a(a^2)^+/a^3 → a on synapse (r, lj) is enabled. The number of spikes in neuron σr is decreased by 3 (ending with 2(n − 1) spikes), which simulates that the number stored in register r is decreased by
Table 6
The numbers of spikes in the neurons of the INPUT module during the process of reading the spike train 10^{g(x)} 10^{y} 1.

Neuron   Step 1   2    3    ...   g(x)+1     g(x)+2   g(x)+3   ...   g(x)+y+4
σin      1        0    0    ...   0          1        0        ...   0
σc1      0        1    1    ...   1          1        2        ...   0
σc2      0        1    1    ...   1          1        2        ...   0
σ1       0        0    2    ...   2g(x)−2    2g(x)    2g(x)    ...   2g(x)
σ2       0        0    0    ...   0          0        2        ...   2y
σl0      0        0    0    ...   0          0        0        ...   2
Fig. 7. Module ADD simulating the ADD instruction li : (ADD(r ), l j ).
Fig. 8. Module SUB simulating the SUB instruction li : (SUB(r ), l j , lk ).
one. One spike is sent to neuron σlj. In this case, neuron σlj receives two spikes, and the system Π starts to simulate instruction lj.
– At step t + 1, if there is only one spike in neuron σr (corresponding to the fact that the number stored in register r is 0), then the rule a → a on synapse (r, lk) is enabled. At step t + 1, neuron σlk receives two spikes, which means that the system Π starts to simulate instruction lk.

The simulation of the SUB instruction is correct: system Π starts from neuron σli and ends in neuron σlj (if the number stored in register r is greater than 0, being decreased by one), or in neuron σlk (if the number stored in register r is 0).

Note that one synapse starting from each σli has the associated rule a → λ, hence in all cases when neuron σli receives only one spike (e.g., during the simulation of a SUB instruction), that spike is immediately removed. Thus, there is no interference between the ADD modules and the SUB modules, other than correctly sending two spikes to the neurons σlj or σlk, which may label instructions of the other kind. Similarly, if there are several SUB instructions lv that act on register r, lv: (SUB(r), lj, lk), then neuron σr has synapse connections to all neurons σlj and σlk, and it may send one spike to neuron σlj or σlk when simulating instruction li. These spikes are removed by the forgetting rules a → λ associated with synapses leaving from these neurons. Consequently, the interference among modules does not cause undesired steps in Π (i.e., steps that do not correspond to correct simulations of instructions of Mu).

Assume now that the computation in Mu halts, that is, the instruction lh is reached. The result of the computation is placed in register 8, which is never decremented during the computation. The task of outputting the result is carried out by the OUTPUT module shown in Fig. 9.

Assume that at step t, neuron σlh receives two spikes and the number of spikes in neuron σ8 is 2n, n > 0. The rule on synapse (lh, 8) can be used, sending one spike to neuron σ8. In this way, the number of spikes in neuron σ8 becomes odd, hence the rules a(a^2)^+/a^2 → a on synapses (8, d1) and (8, out) are enabled. A spike arrives in neuron σout at step t + 1, and the rule a(a^2)^*/a → a on the synapse pointing to the environment is enabled. The first spike is sent to the environment by the output neuron at step t + 2. From step t + 3 on, until the spikes in neuron σ8 are exhausted, neuron σout receives two spikes in each step, hence neuron σout collects an even number of spikes and the rule on the synapse pointing to the
Fig. 9. Module OUTPUT.
Fig. 10. The module simulating consecutive ADD-ADD instructions l17 : (ADD(2), l21 ), l21 : (ADD(3), l18 ).
environment is not enabled. At step t + n + 2, neuron σ8 contains one spike, the rule on synapse (8, out) is not enabled, and one further spike comes to neuron σout from neuron σd1. One step later, neuron σout has an odd number of spikes, thus the rule on the synapse pointing to the environment can be applied for the second time, emitting the second spike out. The interval between these two spikes sent to the environment is (t + n + 3) − (t + 3) = n, which is exactly the number stored in register 8 of Mu at the moment when its computation halts.

Until now, we have used:
• 9 neurons for 9 registers,
• 25 neurons for 25 labels,
• 10 neurons for 10 ADD instructions,
• 3 additional neurons in the INPUT module,
• 2 neurons in the OUTPUT module,
which comes to a total of 49 neurons. This number can be slightly decreased by some "code optimization", exploiting some particularities of the register machine Mu. For instance, the sequence of ADD instructions
l17 : ADD(2), l21 ,
l21 : ADD(3), l18 ,
without any other instruction addressing the label l21, can be simulated by the module shown in Fig. 10. In this way, we save the neuron associated with l21, and instead of the two auxiliary neurons used in two separate ADD modules we use one neuron, thus saving two neurons. There are also two pairs of ADD-SUB instructions:
l5 : ADD(5), l6 ,
l6 : SUB(7), l7 , l8 ,
l9 : ADD(6), l10 ,
l10 : SUB(4), l0 , l11 .
Each sequence of ADD-SUB instructions
li : ADD(r1 ), l g ,
l g : SUB(r2 ), l j , lk ,
can be simulated by the ADD-SUB module shown in Fig. 11. In this way, we save the two neurons associated with labels l6 and l10 . A similar operation is possible for the following six sequences of SUB-ADD instructions:
l0 : SUB(1), l1 , l2 ,
l1 : ADD(7), l0 ,
l4 : SUB(6), l5 , l3 ,
l5 : ADD(5), l6 ,
l6 : SUB(7), l7 , l8 ,
l7 : ADD(1), l4 ,
Fig. 11. The module simulating consecutive ADD-SUB instructions li : (ADD(r1 ), l g ), l g : (SUB(r2 ), l j , lk ).
Fig. 12. The module simulating consecutive SUB-ADD instructions li : (SUB(r1 ), l j , lk ), l j : (ADD(r2 ), l g ).
l8 : SUB(6), l9 , l0 ,
l9 : ADD(6), l10 ,
l14 : SUB(5), l16 , l17 ,
l16 : ADD(4), l11 ,
lh : SUB(0), l22 , lh ,
l22 : ADD(8), lh .
Each pair of consecutive SUB-ADD instructions
li : SUB(r1 ), l j , lk ,
l j : ADD(r2 ), l g ,
can be simulated by the module shown in Fig. 12. In this way, we save the 6 neurons associated with the intermediate labels l1, l5, l7, l9, l16, l22.

Therefore, by using the ADD-ADD module in Fig. 10, the ADD-SUB module in Fig. 11, and the SUB-ADD module in Fig. 12, we save 10 neurons in total, hence the number of neurons decreases from 49 to 39. This completes the proof. □

Note that, at the price of only one additional neuron, σg, the forgetting rules can be avoided: instead of the rule a → λ associated with a synapse (li, x), we introduce the rule a → a associated with the synapse (li, g) (the undesired spikes are moved into the "garbage collector" neuron σg). Furthermore, the delay can be omitted, but this time the price is one further
Fig. 13. The ADD module in Π .
Fig. 14. Simulating the two consecutive ADD instructions l17 : (ADD(2), l21 ) and l21 : (ADD(3), l18 ).
neuron for each rule with delay: an intermediate neuron is introduced along the synapse having associated the rule with delay (it is important here that we only use rules with delay equal to 1).

4.2. A universal SN P system with rules on synapses using extended spiking rules

As expected, when extended spiking rules are used (rules of the form E/a^c → a^p; d, with p ≥ 1), universal SN P systems with rules on synapses can be obtained with a smaller number of neurons.

Theorem 4.2. There is a universal SN P system with extended spiking rules (with delay) on synapses having 30 neurons for computing functions.

Proof. As in the previous proof, we construct an SN P system Π with extended rules on synapses, simulating the universal register machine Mu. The system Π consists of ADD, SUB, ADD-ADD, ADD-SUB, and SUB-ADD modules, as well as an INPUT and an OUTPUT module. As above, neurons are associated with the labels and registers of Mu, with 2n spikes in neuron σr if register r contains the number n. The INPUT module from Fig. 6, the SUB module from Fig. 8, and the OUTPUT module from Fig. 9 are also used in the system Π, but the ADD, ADD-ADD, and SUB-ADD modules are modified.

The ADD module of system Π is shown in Fig. 13. When neuron σli receives two spikes, the rules a^2 → a^2 on synapses (li, lj) and (li, r) are enabled, sending two spikes to each of neurons σlj and σr. The number of spikes in neuron σr is increased by two, which simulates the fact that the number in register r is increased by one. With two spikes in neuron σlj, the system Π begins to simulate the instruction lj. In this way, we have:
• 9 neurons for 9 registers,
• 25 neurons for 25 labels,
• 3 neurons in the INPUT module,
• 2 neurons in the OUTPUT module,
which comes to a total of 39 neurons. The number of neurons can be slightly decreased by using combined modules to simulate the particular pairs of consecutive ADD-ADD, ADD-SUB, and SUB-ADD instructions in Mu. The module simulating the consecutive ADD-ADD instructions
l17 : ADD(2), l21 ,
l21 : ADD(3), l18 ,
is shown in Fig. 14. The neuron associated with label l21 can be omitted, thus saving one neuron. There are two pairs of consecutive ADD-SUB instructions in Mu:
l5 : ADD(5), l6 ,
l6 : SUB(7), l7 , l8 ,
l9 : ADD(6), l10 ,
l10 : SUB(4), l0 , l11 .
Fig. 15. The module simulating consecutive ADD-SUB instructions in Π .
Fig. 16. The module simulating consecutive SUB-ADD instructions of Π .
The module simulating the consecutive ADD-SUB instructions
li : ADD(r1 ), l g ,
l g : SUB(r2 ), l j , lk
is given in Fig. 15. The neurons associated with the intermediate labels (l6 and l10 ) can be saved. The consecutive SUB-ADD instructions of the form
li : SUB(r1 ), l j , lk ,
l j : ADD(r2 ), l g ,
can be simulated by the module shown in Fig. 16. In this way, we save the 6 neurons associated with the intermediate labels. Therefore, we save 9 neurons in total, reducing the number of neurons from 39 to 30. This completes the proof. □

5. Conclusion and discussion

In this paper, we introduced and investigated a new class of SN P systems, with rules on synapses. The computational completeness was proved, and then we constructed small universal SN P systems. Specifically, in the case of standard spiking rules, we constructed a universal SN P system with rules on synapses having 39 neurons; when using extended spiking rules on synapses, 30 neurons are sufficient for the system to achieve universality as a function computing device. Compared with the small universal SN P systems constructed for the case when the rules are placed in the neurons, universal SN P systems with rules on synapses are obtained with a smaller number of neurons.

In [12], by using a "powerful" neuron (each instruction of the register machine has a corresponding rule in the neuron), the number of neurons in general universal SN P systems with extended rules was reduced to 10. We conjecture that, if a similar strategy (a "powerful" synapse) is used, then universal SN P systems with rules on synapses having a significantly smaller number of neurons than in the previous theorems can also be obtained.

In [13], two universal SN P systems (with 76 neurons and 50 neurons) used as generators of sets of numbers were constructed. It remains to consider the generative case also for SN P systems with rules on synapses. Moreover, it is worth investigating SN P systems with rules on synapses as language generators.

In the definition of SN P systems with rules on synapses, we use the restriction that if several synapses starting from the same neuron have rules that can be applied, then all rules which are applied consume the same number of spikes from the given neuron. What happens if this restriction is removed?
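For instance, under the alternative strategy already mentioned in Section 2, the consumption bookkeeping of a simulator would change as in the following sketch (our illustration, with hypothetical names; compare it with the single-c restriction asserted in the simulator of Section 2):

def consume_sum(spikes, applied):
    """applied: list of (source_neuron, c) pairs for the rules chosen in
    one step; each rule consumes its own c and the sum is removed, instead
    of requiring a common c removed only once."""
    for i, c in applied:
        spikes[i] -= c
    return spikes

# Two rules leaving neuron 1, consuming 1 and 2 spikes respectively:
assert consume_sum({1: 5}, [(1, 1), (1, 2)]) == {1: 2}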
Acknowledgements

This work was supported by National Natural Science Foundation of China (61033003, 91130034, and 61320106005), Ph.D. Programs Foundation of Ministry of Education of China (20100142110072 and 2012014213008), and Natural Science Foundation of Hubei Province (2011CDA027).

References

[1] M. Cavaliere, O.H. Ibarra, Gh. Păun, Ö. Eğecioğlu, M. Ionescu, S. Woodworth, Asynchronous spiking neural P systems, Theor. Comput. Sci. 410 (24) (2009) 2352–2364.
[2] H. Chen, R. Freund, M. Ionescu, Gh. Păun, M.J. Pérez-Jiménez, On string languages generated by spiking neural P systems, Fundam. Inform. 75 (1) (2007) 141–162.
[3] M. Ionescu, Gh. Păun, T. Yokomori, Spiking neural P systems, Fundam. Inform. 71 (2) (2006) 279–308.
[4] T.-O. Ishdorj, A. Leporati, L. Pan, X. Zeng, X. Zhang, Deterministic solutions to QSAT and Q3SAT by spiking neural P systems with pre-computed resources, Theor. Comput. Sci. 411 (25) (2010) 2345–2358.
[5] I. Korec, Small universal register machines, Theor. Comput. Sci. 168 (2) (1996) 267–301.
[6] M. Kudlek, Small deterministic Turing machines, Theor. Comput. Sci. 168 (2) (1996) 241–255.
[7] A. Leporati, G. Mauri, C. Zandron, Gh. Păun, M.J. Pérez-Jiménez, Uniform solutions to SAT and Subset Sum by spiking neural P systems, Nat. Comput. 8 (4) (2009) 681–702.
[8] M.L. Minsky, Computation: Finite and Infinite Machines, Prentice-Hall, Englewood Cliffs, NJ, 1967.
[9] T. Neary, A universal spiking neural P system with 11 neurons, in: Proceedings of the Eleventh International Conference on Membrane Computing, Jena, Germany, 2010, pp. 327–346.
[10] N. Ollinger, The quest for small universal cellular automata, in: Proc. Automata, Languages and Programming, Springer-Verlag, Berlin, 2002.
[11] L. Pan, Gh. Păun, M.J. Pérez-Jiménez, Spiking neural P systems with neuron division and budding, Sci. China Inform. Sci. 54 (8) (2011) 1596–1607.
[12] L. Pan, X. Zeng, A note on small universal spiking neural P systems, in: Membrane Computing, Proc. 10th Intern. Workshop, Curtea de Argeş, Romania, August 2009, LNCS, vol. 5957, Springer-Verlag, Berlin, 2010, pp. 436–447.
[13] A. Păun, Gh. Păun, Small universal spiking neural P systems, Biosystems 90 (1) (2007) 48–60.
[14] A. Păun, M. Sidoroff, Sequentiality induced by spike number in SNP systems: small universal machines, in: Membrane Computing, 12th Intern. Conference, Fontainebleau, France, August 2011, LNCS, vol. 7184, Springer-Verlag, Berlin, 2012, pp. 333–345.
[15] Gh. Păun, Spiking neural P systems with astrocyte-like control, J. Univers. Comput. Sci. 13 (11) (2007) 1707–1721.
[16] Gh. Păun, M.J. Pérez-Jiménez, G. Rozenberg, Spike trains in spiking neural P systems, Int. J. Found. Comput. Sci. 17 (4) (2006) 975–1002.
[17] Gh. Păun, G. Rozenberg, A. Salomaa (Eds.), The Oxford Handbook of Membrane Computing, Oxford Univ. Press, 2010.
[18] Y. Rogozhin, Small universal Turing machines, Theor. Comput. Sci. 168 (2) (1996) 215–240.
[19] G. Rozenberg, A. Salomaa (Eds.), Handbook of Formal Languages, Springer-Verlag, Berlin, 1997.
[20] T. Song, L. Pan, Gh. Păun, Asynchronous spiking neural P systems with local synchronization, Inf. Sci. 219 (2012) 197–207.
[21] X. Zhang, X. Zeng, L. Pan, On string languages generated by spiking neural P systems with exhaustive use of rules, Nat. Comput. 7 (4) (2008) 535–549.
[22] X. Zhang, X. Zeng, L. Pan, Smaller universal spiking neural P systems, Fundam. Inform. 87 (1) (2008) 117–136.