## Abstract

We derive conditions for continuous differentiability of inter-spike intervals (ISIs) of spiking neurons with respect to parameters (decision variables) of an external stimulating input current that drives a recurrent network of synaptically connected neurons. The dynamical behavior of individual neurons is represented by a class of discontinuous single-neuron models. We report here that ISIs of neurons in the network are continuously differentiable with respect to decision variables if (1) a continuously differentiable trajectory of the membrane potential exists between consecutive action potentials with respect to time and decision variables and (2) the partial derivative of the membrane potential of spiking neurons with respect to time is not equal to the partial derivative of their firing threshold with respect to time at the time of action potentials. Our theoretical results are supported by showing fulfillment of these conditions for a class of known bidimensional spiking neuron models.

## 1. Introduction

Brain-machine interfaces (BMIs) establish direct communications between living brain tissue and external devices such as an artificial arm by sensing and interpreting neuronal activity and translating it to the external device (Nicolelis & Lebedev, 2009). Recent advances in BMIs have shown the capability of BMI-based neuroprostheses in rehabilitating motor-disabled subjects such as amputees (Hochberg et al., 2006). Lack of proper incorporation of sensory feedback such as proprioception and tactile information, which are unavailable due to loss of natural pathways in amputees, has greatly limited the widespread clinical deployment of neuroprosthetic systems in rehabilitation (Cunningham et al., 2011). Recently, microstimulation techniques have emerged as a promising approach in providing artificial proprioceptive (Weber et al., 2011) and tactile (Doherty et al., 2011) feedback by stimulating appropriate neurons of the primary somatosensory cortex. Although these techniques are promising for developing future BMI-based neuroprostheses, the experimental trial-and-error approach in designing appropriate stimulating sensory input currents may be detrimental to neural tissue or adversely change neural functionality. Therefore, a systematic approach that uses optimal feedback control theory (OFCT) is highly desirable in designing these artificial sensory feedback pathways (Scherberger, 2009). Figure 1 shows one such approach in an optimal receding horizon control framework (Kumar, Aggarwal, Thakor, Schieber, & Kothare, 2011; Kumar, Schieber, Thakor, & Kothare, 2013).

As shown in Figure 1, the receding horizon controller designs optimal artificial sensory feedback currents and stimulates neurons of the appropriate cortical sensory areas such that the closed-loop performance of the BMI can be recovered for a given motor task. Briefly, the controller computes optimal stimulating input currents, parameterized by a set of decision variables, by solving an optimal control problem in a predictive framework (Kwon & Han, 2005). At a given time, system measurements are obtained, and a dynamic model of the cortical network of spiking neurons, decoder, and the prosthetic arm is used to predict the outputs of the system. An optimal control problem is then formulated to minimize the difference between the predicted and the desired outputs. The design of artificial sensory feedback currents is constrained by experimentally observed minimum and maximum limits on the instantaneous firing rate or the corresponding inter-spike interval (ISI) of neurons in the cortical network model, as well as by any constraints on the system outputs. These constraints appear explicitly in the resulting nonlinear optimization problem. A local gradient-based optimization algorithm for solving the optimization problem requires at least first-order continuous differentiability of the objective and constraint functions with respect to the decision variables. The reset of state variables at the occurrence of each action potential in typical single-neuron models such as the Izhikevich model makes it nontrivial to determine the continuous differentiability of ISIs.

ISIs and their gradients have also been used in developing gradient-based learning rules for training spiking neural networks (see McKennoch, Voegtlin, & Bushnell, 2009, and references therein; Pfister, Toyoizumi, Barber, & Gerstner, 2006). In most of these works (with the exception of Pfister et al., 2006, where a stochastic spike response model (SRM) has been used), the leaky integrate-and-fire model or the theta neuron model has been used, for which an analytical solution for the error gradient has been derived. The extension of these learning rules to models such as the Izhikevich neuron model requires investigation of the differentiability of ISIs with respect to the synaptic weights, since analytical solutions in these models are difficult to obtain.

With this motivation, we here consider a recurrent network of *n* interconnected neurons. We assume that each neuron is connected with the remaining *n*−1 neurons. The dynamical behavior of each neuron is represented by a class of discrete event-based discontinuous single-neuron models. The flow of synaptic information from one neuron to another is modeled using a synaptic conductance model. Stimulation of an input neuron using an external input current drives the network. The measured inter-spike intervals of a single output neuron define the output of the network. In this setup, we derive sufficient conditions under which the first-order continuous differentiability of ISIs of this output neuron with respect to parameters (decision variables) of the external stimulating current can be guaranteed.

The letter starts with the essence of our theoretical results in the form of simulations. Section 3 describes the single-neuron model and the synaptic current model used. In section 4, we state a mathematical formulation of the problem. In section 5, we derive sufficient conditions for the continuous differentiability of ISIs with respect to parameters of the external stimulating current using the well-known implicit function theorem. Finally, an example is presented using known bidimensional spiking neuron models, which is followed by conclusions.

*Mathematical notation:* Most notation is standard and will be introduced as the results are developed. We here define the mathematical symbols frequently used in this letter: *C*^{0}: continuous; *C*^{1}: continuously differentiable; *v*: membrane potential; *u*: membrane recovery variable; *I*(*t*): external input current; *I ^{s}*(*t*): synaptic current; *t*: continuous time; *t ^{f}*: time of the *f*th action potential of a single neuron; *t*^{f}_{j}: time of the *f*th action potential of the neuron *j* in the network; Δ^{f}: *f*th ISI of a single neuron; Δ^{f}_{j}: *f*th ISI of the neuron *j* in the network; θ: decision variables.

## 2. Simulation Results

We first illustrate our theoretical results through simulations of a single leaky integrate-and-fire neuron whose dynamics are given by

*C* d*v*(*t*)/d*t* = −*v*(*t*)/*R* + *I*(*t*).   (2.1)

Here, *v*(*t*) is the membrane potential, *C* is the membrane capacitance, and *R* is the membrane resistance. *I*(*t*) is a smoothly varying stimulating input current to the neuron. *v*(*t*) is reset to *c* whenever *v*(*t*) exceeds a constant firing threshold *v _{p}*. We simulate equation 2.1 with *C* = 1 μF, *R* = 10 kΩ, *c* = 0 mV, and *v _{p}* = 20 mV. Figure 2a shows our simulation result for the case where ISIs are discontinuous with respect to *I*(*t*).

As shown in Figure 2a, a stimulating input current *I*(*t*) is designed such that the membrane potential *v*(*t*) reaches the firing threshold *v _{p}* with zero slope at the time of the first action potential (shown by the solid line). A small perturbation in *I*(*t*) shifts the occurrence of this action potential by approximately 15 ms (shown by the dashed line). This large deviation in the timing of the first action potential with respect to a small perturbation in *I*(*t*) clearly indicates that the corresponding ISI is not continuous with respect to *I*(*t*). Figure 2b shows the case where the membrane potential *v*(*t*) reaches the firing threshold *v _{p}* with a positive slope at the time of action potentials (shown by the solid line). As shown in this figure, a small perturbation in *I*(*t*) leads to a small change (shown by the dashed line) in the timing of the occurrence of both action potentials. Thus, the corresponding ISIs are continuous with respect to *I*(*t*).
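The positive-slope case described above can be checked numerically. The following sketch integrates equation 2.1 with forward Euler; the constant suprathreshold current and the 1% perturbation are illustrative choices (not taken from the figures). Because the membrane potential crosses the threshold with a positive slope here, the first spike time moves only slightly under the perturbation.

```python
def simulate_lif(I, C=1e-6, R=1e4, c=0.0, v_p=20e-3, T=0.2, dt=1e-5):
    """Forward-Euler integration of equation 2.1: C dv/dt = -v/R + I(t).

    v is reset to c whenever it reaches the firing threshold v_p.
    Returns the list of spike times."""
    v, t, spikes = c, 0.0, []
    for _ in range(int(T / dt)):
        v += dt * (-v / R + I(t)) / C
        t += dt
        if v >= v_p:
            spikes.append(t)
            v = c
    return spikes

# Constant suprathreshold current: v crosses v_p with a positive slope,
# so a 1% perturbation of the input shifts the spike times only slightly.
base = simulate_lif(lambda t: 2.5e-6)
pert = simulate_lif(lambda t: 2.5e-6 * 1.01)
shift = abs(base[0] - pert[0])  # well below 1 ms
```

Reproducing the zero-slope (tangential) threshold crossing of Figure 2a would require a specially designed time-varying current, which is why the discontinuous case is shown only in the figures.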

Next we consider a recurrent network of two synaptically connected neurons *S*_{1} and *S*_{2}, as shown in Figure 4. The dynamics of both neurons are represented by equation 2.1. An external input current (the same used in Figure 2a) drives the network by stimulating neuron *S*_{1}. Thus, the total input current *I*(*t*) to neuron *S*_{1} is the sum of the external input current and synaptic currents from neuron *S*_{2}. The neuron *S*_{2} is driven by synaptic currents induced by the action potentials of the neuron *S*_{1}. With this setup, we simulate the recurrent network. Figure 3a shows our simulation results for the case where the ISIs of both neurons are discontinuous with respect to the external input current.

As shown in the top plot of Figure 3a, the membrane potential *v*_{1}(*t*) of neuron *S*_{1} reaches the firing threshold with zero slope at the time of the first action potential (shown by the solid line). A small perturbation in the designed step input current shifts the timing of the occurrence of this action potential by approximately 15 ms (shown by the dashed line), which clearly indicates that the first ISI of neuron *S*_{1} is discontinuous with respect to the designed input current. Next we analyze the effect of this small perturbation in the designed input current on the first ISI of neuron *S*_{2}.

Neuron *S*_{2} is initially stimulated by the synaptic current induced by the first action potential of neuron *S*_{1}. We model this synaptic current as

*I ^{s}*(*t*) = *q* e^{−(*t*−*t*^{1}_{1})/τ} *H*(*t*−*t*^{1}_{1}).

Here, *q* is the maximum conductance transmitted by the synapse of neuron *S*_{1}, τ is a time constant, *t*^{1}_{1} is the time at which the first action potential occurs in neuron *S*_{1}, and *H* is the Heaviside function. For fixed *q* (>0) and τ, *I ^{s}*(*t*) is a function of *t*−*t*^{1}_{1}. Moreover, *I ^{s}*(*t*)=0 for *t*<*t*^{1}_{1}. Clearly, a change in *t*^{1}_{1} will change the time at which *I ^{s}*(*t*) becomes greater than zero and thus the time at which the first action potential occurs in neuron *S*_{2}. This is shown in the bottom plot of Figure 3a. As shown in this plot, the change in *t*^{1}_{1} by approximately 15 ms (shown in the top plot of Figure 3a) leads to the same change (15 ms) in the timing of the occurrence of the first action potential in neuron *S*_{2}. Thus, the first ISI of neuron *S*_{2} is also discontinuous with respect to the designed input current even though the membrane potential *v*_{2}(*t*) reaches the firing threshold with a positive slope.

Figure 3b shows the case where the membrane potential reaches the firing threshold with a positive slope at the time of action potentials in both neurons. As shown in this figure, a small perturbation in the designed input current (the same as the one used in Figure 2b) leads to a small change in ISIs of both neurons, which clearly indicates that the ISIs are continuous with respect to the designed input current.

Our simulation results show that ISIs of individual neurons in a recurrent network are continuous with respect to a time-continuous stimulating input current if the requirement of a nonzero slope of membrane potential at the time of action potentials is satisfied by all the neurons in the network. In the rest of the letter, we derive this condition rigorously and show that this condition is sufficient under certain assumptions to guarantee the continuous differentiability of ISIs in a network of synaptically connected neurons.

## 3. Single-Neuron Model

The dynamical behavior of each neuron is represented by the following class of bidimensional spiking neuron models:

*C* d*v*(*t*)/d*t* = *f*(*v*, *u*) + *I*(*t*), 
d*u*(*t*)/d*t* = *g*(*v*, *u*), 
with the reset rule: if *v*(*t*) ≥ *v _{p}*(*t*), then *v*(*t*) ← *c* and *u*(*t*) ← *u*(*t*) + *d*.   (3.1)

Here, *v*(*t*) and *u*(*t*) are the time-varying membrane potential and the membrane recovery variable of a neuron, respectively. *C* is the membrane capacitance. *I*(*t*) is the total input current delivered to the neuron. *v _{p}*(*t*) is a firing threshold. *c* and *d* are the model parameters. Typically, *c* is set to the membrane resting potential *v _{r}*. The time at which the membrane potential *v*(*t*) reaches the firing threshold *v _{p}*(*t*), starting from the resting state, is called the time of the occurrence of an action potential or the spike time. At this time, *v*(*t*) is reset to *c* and *u*(*t*) is reset to *u*(*t*)+*d*. Again starting from this reset time and taking the reset values of *v*(*t*) and *u*(*t*) as initial conditions, the time of occurrence of the next action potential is determined through equations 3.1. We define *t ^{f}* as the time of occurrence of the *f*th action potential starting from *t*=0. An inter-spike interval (ISI) is defined as the time difference between the occurrence of two consecutive action potentials. Thus, the *f*th ISI is Δ^{f} = *t ^{f}* − *t*^{f−1} with the convention *t*^{0}=0. As a result of this, *t ^{f}* can be expressed in terms of the summation of ISIs as *t ^{f}* = Σ_{k=1}^{f} Δ^{k}.

*I*(*t*) can be an external current *I*(*t*; θ) to the neuron, parameterized by a vector θ of decision variables over the real space, or a synaptic current *I ^{s}*(*t*) delivered to the neuron in a network, or a combination of both. Mathematically, the synaptic current *I ^{s}*(*t*) is modeled as

*I ^{s}*(*t*) = *g _{e}*(*t*)(*E _{e}* − *v*(*t*)) + *g _{i}*(*t*)(*E _{i}* − *v*(*t*)).   (3.4)

Here, *g _{e}*(*t*) and *g _{i}*(*t*) are the excitatory and inhibitory synaptic conductances, respectively. *E _{e}* and *E _{i}* are the excitatory and inhibitory membrane reversal potentials, respectively. Typically the synaptic conductance is modeled by taking the weighted sum of all presynaptic neuronal activities and is represented in the following form (Gerstner & Kistler, 2002):

*g _{x}*(*t*) = Σ_{j=1}^{N_{x}} *w _{j}* Σ_{f} *K*(*t* − *t*^{f}_{j}), *x* ∈ {*e*, *i*}.   (3.5)

*N _{x}* is the total number of presynaptic neurons of type *x*. *w _{j}* is the weight of the synapse *j* to the postsynaptic neuron. *t*^{f}_{j} is the time of the *f*th incoming action potential from the synapse *j* to the postsynaptic neuron. *K*(*t*−*t*^{f}_{j}) models the stereotypical time course of postsynaptic conductances following presynaptic spikes. With appropriate choices for the weight, type, and number of synapses, as well as the kernel shape, this model has shown the capability of predicting the qualitative behavior of various cortical areas under different dynamical regimes (Brunel, 2000; Vogels, Rajan, & Abbott, 2005; Hanuschkin, Kunkel, Helias, Morrison, & Diesmann, 2010).
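As a concrete illustration of the weighted-sum conductance model of equation 3.5, the sketch below evaluates *g*(*t*) for a set of presynaptic spike trains using an exponential kernel *K*; the kernel shape and all numeric values are illustrative assumptions, since the model admits any causal kernel.

```python
import math

def conductance(t, spike_trains, weights, tau=0.005):
    """Evaluate g(t) = sum_j w_j * sum_f K(t - t_j^f), as in equation 3.5,
    with the exponential kernel K(s) = exp(-s/tau) for s >= 0 and 0 otherwise."""
    g = 0.0
    for w, spikes in zip(weights, spike_trains):
        for tf in spikes:
            if t >= tf:  # causal kernel: future spikes contribute nothing
                g += w * math.exp(-(t - tf) / tau)
    return g

# Two presynaptic neurons with weights 1.0 and 2.0, one spike each.
g = conductance(0.010, [[0.0], [0.005]], [1.0, 2.0])
```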

In this work, we assume that each presynaptic neuron establishes only one synapse with each of its postsynaptic neurons and that all synaptic connections within the network are excitatory: *g _{i}*=0. However, the results apply to the inhibitory case as well. Moreover, we assume that there is no time delay in the feedback loop. In the next section, we use the models of single-neuron dynamics and synaptic currents presented above to define a network of synaptically connected cortical spiking neurons and state the problem.

## 4. Problem Statement

We consider a recurrent network of *n* interconnected neurons *S*_{1}, …, *S _{n}*. Each neuron *S _{j}* is connected with the remaining *n*−1 neurons and thus receives and transfers synaptic information within the network in the form of synaptic input currents *I ^{s}*(*t*) according to equations 3.4 and 3.5. The dynamics of neuron *S _{j}* are given by equation 3.1. An external input current *I*(*t*; θ) drives the network by stimulating neuron *S*_{1}. Here θ is a vector of parameters or decision variables over the real space. The measured output of neuron *S _{j}* is represented as a sequence of ISIs. With this setup, we formulate the problem as follows: Under what conditions is the *f*th ISI of the *j*th neuron *S _{j}* expressible as

Δ^{f}_{j} = *h*_{j,f}(θ),   (4.1)

where *h*_{j,f} is a continuously differentiable function of θ in a neighborhood of a given nominal value of θ?

## 5. Results

We first consider a single neuron stimulated by an external input current *I*(*t*; θ) and establish conditions under which the *i*th ISI of this neuron is expressible as

Δ^{i} = *h _{i}*(θ),   (5.1)

with *h _{i}* continuously differentiable in a neighborhood of the nominal θ. We call this problem 1. Then we consider a single neuron stimulated by synaptic currents induced by spikes of a presynaptic neuron. We represent the *i*th spike time of the presynaptic neuron by *t*^{i}_{p}. Here, the subscript *p* stands for the presynaptic neuron. The synaptic current induced by a sequence of spikes of a presynaptic neuron (see equations 3.4 and 3.5) stimulates the postsynaptic neuron. We define Δ^{i}_{p} as the *i*th ISI of the presynaptic neuron. Since *t*^{i}_{p} = Σ_{k=1}^{i} Δ^{k}_{p} by definition, we can write the synaptic current as a function of **Δ**_{p}, the vector of the first *r* ISIs of the presynaptic neuron. With this, we establish conditions under which the *i*th ISI of the postsynaptic neuron is expressible as

Δ^{i} = *h _{i}*(**Δ**_{p}),   (5.2)

with *h _{i}* continuously differentiable in a neighborhood of the nominal **Δ**_{p}. This is problem 2. Finally, we use results of problems 1 and 2 to establish conditions for the existence of equation 4.1.

It is clear from our problem setup in the previous section that *S*_{1} is the only neuron in the network that is stimulated by both the external input current and the synaptic currents available from the remaining *n*−1 neurons in the network. The other *n*−1 neurons in the network are driven by only the synaptic currents and thus feel the effect of the decision variables θ only indirectly through the ISIs of neuron *S*_{1}. Also, the external input current *I*(*t*; θ) is the only stimulating current to neuron *S*_{1} until the time at which synaptic currents from the remaining neurons in the network arrive at neuron *S*_{1}. Therefore, the problem described by equation 4.1 can also be viewed as the combination of the following subproblems:

1. Under what conditions are ISIs of neuron *S*_{1} continuously differentiable with respect to θ when the only available stimulating input current to neuron *S*_{1} is the external input current *I*(*t*; θ)?
2. Under what conditions are ISIs of neuron *S _{j}*, *j* = 2, …, *n*, continuously differentiable with respect to θ?
3. Under what conditions are ISIs of neuron *S*_{1} continuously differentiable with respect to θ when the stimulating input current to neuron *S*_{1} is the sum of the external input current and the synaptic currents from the remaining *n*−1 neurons of the network?

We first consider an example of a recurrent network of two synaptically connected neurons and show that the establishment of conditions for the existence of equations 5.1 and 5.2 is sufficient to solve subproblems 1, 2, and 3 and thus guarantee the existence of equation 4.1 in the case of two neurons.

Figure 4 shows a network of two synaptically connected neurons driven by an external stimulating input current *I*(*t*; θ).

At this point, let us assume that equations 5.1 and 5.2 hold. We will later show the existence of these equations in sections 5.1 and 5.2, respectively. For simplicity, let us further assume that the first spike of neuron *S*_{2} occurs between the time of the first and the second spike of neuron *S*_{1}. Thus, neuron *S*_{1} receives the feedback information, in the form of the synaptic input current *I ^{s}*(*t*; *t*^{1}_{2}), from neuron *S*_{2} at time *t*=*t*^{1}_{2}.

We first consider subproblem 1 and show that it can be solved using equation 5.1—problem 1. It is clear that the external input current *I*(*t*; θ) is the only stimulating input current to neuron *S*_{1} for *t*<*t*^{1}_{2}. Therefore, by taking *I*(*t*; θ) as the total input in problem 1 until *t*=*t*^{1}_{2}, the continuous differentiability of the first ISI of neuron *S*_{1} with respect to θ can be established using equation 5.1.

Next we consider subproblem 2 and show that it can be solved using equations 5.1 and 5.2—problems 1 and 2. Since the first ISI of neuron *S*_{1} is its only ISI up to time *t*=*t*^{1}_{2}, equation 5.2 establishes that the first ISI of neuron *S*_{2} is a continuously differentiable function of the first ISI of neuron *S*_{1} in a small neighborhood (problem 2). Also from equation 5.1, we know that the first ISI of neuron *S*_{1} is a continuously differentiable function of θ (problem 1). Now using the fact that a continuously differentiable function of a continuously differentiable function is also continuously differentiable with respect to the underlying arguments, equations 5.1 and 5.2 together establish that the first ISI of neuron *S*_{2} is a continuously differentiable function of θ.

Finally, we consider the effect of feedback on *S*_{1} and show that subproblem 3 can be solved using equation 5.1. It is clear that for *t*<*t*^{1}_{2}, *I*(*t*; θ) is the only input to neuron *S*_{1}. For *t*≥*t*^{1}_{2}, the total input to neuron *S*_{1} is the sum of *I*(*t*; θ) and *I ^{s}*(*t*; *t*^{1}_{2}) (the synaptic current from *S*_{2} to *S*_{1}). From subproblem 2, we know that *t*^{1}_{2} is a continuously differentiable function of θ. Thus, *I ^{s}*(*t*; *t*^{1}_{2}) is an implicit function of θ, and the total input current to neuron *S*_{1} for *t*≥*t*^{1}_{2}, that is, the sum of *I*(*t*; θ) and *I ^{s}*(*t*; *t*^{1}_{2}), is a function of *t* and θ. Since the input current to neuron *S*_{1} is continuous in *t* at *t*=*t*^{1}_{2}, equation 5.1 establishes the continuous differentiability of the second ISI of neuron *S*_{1} with respect to θ.

If we continue the arguments presented above beyond the second spike of neuron *S*_{1}, we find that the subsequent ISIs of neurons *S*_{1} and *S*_{2} are also continuously differentiable functions of θ. Thus, the establishment of the existence of equations 5.1 and 5.2 is sufficient to guarantee the existence of equation 4.1 for a network of two neurons.
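The two-neuron argument can be probed numerically. The sketch below wires two LIF neurons (equation 2.1 dynamics) with exponential synapses and perturbs the constant external drive to *S*_{1} by 1%; all parameter values are illustrative assumptions chosen so that both neurons cross threshold with positive slope, in which case the first spike times of both *S*_{1} and *S*_{2} move only slightly.

```python
import math

def first_spikes(theta, q=1e-5, tau=0.005, C=1e-6, R=1e4,
                 c=0.0, v_p=20e-3, T=0.1, dt=1e-5):
    """Recurrent pair of LIF neurons. S1 receives the constant external
    current theta; each spike of one neuron injects an exponential synaptic
    current q*exp(-(t - t_spike)/tau) into the other. Returns the first
    spike times (t1, t2) of S1 and S2. All values are illustrative."""
    v1 = v2 = c
    spikes1, spikes2 = [], []
    t = 0.0
    for _ in range(int(T / dt)):
        # Synaptic currents induced by all past spikes of the other neuron.
        syn12 = sum(q * math.exp(-(t - ts) / tau) for ts in spikes1)
        syn21 = sum(q * math.exp(-(t - ts) / tau) for ts in spikes2)
        v1 += dt * (-v1 / R + theta + syn21) / C
        v2 += dt * (-v2 / R + syn12) / C
        t += dt
        if v1 >= v_p:
            spikes1.append(t)
            v1 = c
        if v2 >= v_p:
            spikes2.append(t)
            v2 = c
    return spikes1[0], spikes2[0]

t1, t2 = first_spikes(2.5e-6)            # nominal drive
t1p, t2p = first_spikes(2.5e-6 * 1.01)   # 1% perturbation
```

Under these assumptions the perturbation shifts both first spike times by well under a millisecond, consistent with the continuity argument above.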

Example 1 demonstrates that equations 5.1 and 5.2, along with the fact that a continuously differentiable function of a continuously differentiable function is also continuously differentiable with respect to the underlying arguments, establish the existence of equation 4.1 for a recurrent network of two neurons. By following the arguments in this example, it is not difficult to show that the existence of equations 5.1 and 5.2 is sufficient to guarantee the existence of equation 4.1 for a recurrent network of *n*>2 synaptically connected neurons.

We next establish the existence of equations 5.1 and 5.2 by solving problems 1 and 2. In both problems, we first make use of the well-known existence and uniqueness theorems (see theorems 2 and 3 in the appendix) for ordinary differential equations. These theorems ensure the existence of a continuously differentiable solution of equation 3.1. Then we apply the implicit function theorem (see theorem 4) and establish conditions under which equations 5.1 and 5.2 exist.

### 5.1. Problem 1.

In this section, we establish conditions under which equation 5.1 exists. The description of the underlying problem is as follows. An input current *I*(*t*; θ), a function of *t* and the decision variables θ, stimulates a single neuron. For a given nominal θ, the neuron fires a sequence of action potentials according to equation 3.1. The goal is to find conditions under which equation 5.1 exists in a small neighborhood (it may be infinitesimally small) of the nominal θ.

In this section, *t*^{i}_{0} and Δ^{i}_{0} represent the time of the *i*th action potential and the *i*th ISI of a single neuron, respectively, at the nominal θ; Δ^{i} represents the *i*th ISI of a single neuron otherwise. *C*^{0} stands for continuity; *C*^{1} stands for continuous differentiability.

We define a domain *D _{i}* on the (*t*, **x**) space, with **x**=[*v*, *u*]^{T}, as a neighborhood of the nominal trajectory of equation 3.1 between the (*i*−1)th and the *i*th action potentials, with the convention *t*^{0}_{0}=0.

**Assumption 1.** There exists a neighborhood of the nominal θ such that the solution of equation 3.1 exists on the domain *D _{i}* for every θ in this neighborhood.

**Assumption 2.** The solution of equation 3.1 is nonchaotic.

The following result states conditions under which equation 5.1 exists.

**Lemma 1.** Under assumptions 1 and 2, there exists a neighborhood of the nominal θ and a unique function *h _{i}* such that the *i*th ISI is expressible as Δ^{i}=*h _{i}*(θ), with *h _{i}* continuously differentiable on this neighborhood, if

1. *f*(*v*, *u*) is *C*^{0} on *D _{i}* and its partial derivatives with respect to *v* and *u* are *C*^{0} on *D _{i}*;
2. *g*(*v*, *u*) is *C*^{0} on *D _{i}* and its partial derivatives with respect to *v* and *u* are *C*^{0} on *D _{i}*;
3. *I*(*t*; θ) is *C*^{0} in *t* on *D _{i}* and *C*^{1} in θ on *D _{i}*;
4. the partial derivative of the membrane potential with respect to *t* is not equal to the derivative of the firing threshold *v _{p}*(*t*) with respect to *t* at the time of the *i*th action potential *t*=*t*^{i}_{0}.

**Remark.** If the firing threshold *v _{p}*(*t*) is time invariant, its derivative is zero for all *t* (as is the case for most spiking neuron models), and the required condition 4 of lemma 1 reduces to a nonzero slope of the membrane potential at the time of the *i*th action potential: ∂*v*/∂*t* ≠ 0 at *t*=*t*^{i}_{0}.
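The implicit-function-theorem mechanism behind lemma 1 can be verified in a case with a closed-form solution. For a LIF neuron (a special case of equation 3.1 with *u*=0) under a constant suprathreshold current *I*, the first spike time is *t**(I) = τ ln(*IR*/(*IR*−*v _{p}*)), and the theorem predicts d*t**/d*I* = −(∂*v*/∂*I*)/(∂*v*/∂*t*) at the threshold crossing; the denominator being nonzero is exactly the slope condition above. The parameter values in this sketch are illustrative.

```python
import math

R, C = 1e4, 1e-6
tau = R * C            # membrane time constant
v_p = 20e-3            # firing threshold

def spike_time(I):
    """Closed-form first spike time: v(t) = I*R*(1 - exp(-t/tau)) reaches v_p."""
    return tau * math.log(I * R / (I * R - v_p))

I0 = 2.5e-6
# Implicit function theorem: dt*/dI = -(dv/dI)/(dv/dt) at t = t*(I0).
dv_dI = v_p / I0                 # R*(1 - exp(-t*/tau)) equals v_p/I at threshold
dv_dt = (I0 * R - v_p) / tau     # membrane slope at threshold (nonzero)
ift = -dv_dI / dv_dt

# Central finite difference of the closed-form spike time agrees.
h = 1e-10
fd = (spike_time(I0 + h) - spike_time(I0 - h)) / (2 * h)
```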

**Proof.** We first consider equation 3.1 with an initial condition **x**(0)=[*v*(0), *u*(0)]^{T} at *t*=0, where *v*(0) and *u*(0) are constants. From the hypothesis of lemma 1, the right-hand side of equation 3.1 is continuous on *D*_{1}, its partial derivatives with respect to **x** and θ are continuous on *D*_{1}, and the partial derivative of *g*(**x**) with respect to **x** is continuous on *D*_{1}. Thus, by theorems 2 and 3 (see the appendix) along with assumption 1, there exists a neighborhood of the nominal θ such that for any θ in this neighborhood, there exist unique solutions *v*(*t*; θ) and *u*(*t*; θ) of equation 3.1 up to the time of the first action potential.

We define the function Φ_{1}(*t*, θ)=*v*(*t*; θ)−*v _{p}*(*t*). Clearly, Φ_{1} is continuously differentiable with respect to *t* and θ for *t* near *t*^{1}_{0} and θ near its nominal value. Also, Φ_{1}(*t*^{1}_{0}, θ)=0 at the nominal θ. In addition, the partial derivative of Φ_{1} with respect to *t* at *t*=*t*^{1}_{0} and the nominal θ is nonzero from the hypothesis of lemma 1. Now by applying the implicit function theorem (theorem 4 in the appendix) to Φ_{1}, we find that there exists a neighborhood of the nominal θ and a unique function *h*_{1} on this neighborhood such that

- *h*_{1} is *C*^{1} on the neighborhood;
- *t*^{1}_{0}=*h*_{1}(θ) at the nominal θ;
- Φ_{1}(*h*_{1}(θ), θ)=0, that is, *v*(*h*_{1}(θ); θ)=*v _{p}*(*h*_{1}(θ)) for every θ in the neighborhood.
We next consider equation 3.1 with the reset values of *v* and *u* as the initial condition at *t*=*t*^{1}_{0}. It should be noted that *t*^{1}_{0} is the time at which the reset of *v*(*t*) and *u*(*t*) occurs according to equation 3.1. From the hypothesis of lemma 1, the right-hand side of equation 3.1 is continuous on *D*_{2}, and its partial derivatives with respect to **x** and θ are continuous on *D*_{2}. We know from the reset conditions of equation 3.1 that at *t*=*t*^{1}_{0}, *v* is reset to *c* and *u* to *u*+*d*. Thus, again by theorems 2 and 3 (see the appendix), along with assumptions 1 and 2, there exists a neighborhood of the nominal θ such that for any θ in this neighborhood satisfying equation 5.3, there exist unique solutions *v*(*t*; θ) and *u*(*t*; θ) of equation 3.1 up to the time of the second action potential.

We define the function Φ_{2}(*t*, θ)=*v*(*t*; θ)−*v _{p}*(*t*) on this segment. It is clear that Φ_{2} is continuously differentiable with respect to *t* and θ for all *t* near *t*^{2}_{0} and for all θ satisfying equation 5.3. Also, Φ_{2}(*t*^{2}_{0}, θ)=0 at the nominal θ. In addition, the derivative of Φ_{2} with respect to *t* at *t*=*t*^{2}_{0} and the nominal θ is nonzero from the hypothesis of lemma 1. Now by applying the implicit function theorem (see theorem 4 in the appendix) to Φ_{2}, we find that there exist a neighborhood of the nominal θ and a unique function *h*_{2} on it such that

- *h*_{2} is *C*^{1} on the neighborhood;
- *t*^{2}_{0}=*h*_{2}(θ) at the nominal θ;
- Φ_{2}(*h*_{2}(θ), θ)=0, that is, *v*(*h*_{2}(θ); θ)=*v _{p}*(*h*_{2}(θ)) for every θ in the neighborhood.

Now the claim of lemma 1 for *i*>2 follows directly from mathematical induction.

### 5.2. Problem 2.

In this section, we establish conditions under which equation 5.2 exists. The description of the underlying problem is as follows. The synaptic current *I ^{s}* stimulates the postsynaptic neuron; it depends on **Δ**_{p}, the vector of the first *r* ISIs of the presynaptic neuron. For a given nominal **Δ**_{p}, the postsynaptic neuron fires its *i*th action potential at time *t ^{i}* according to equation 3.1. The goal is to find conditions under which equation 5.2 exists in a small neighborhood (it may be infinitesimally small) of the nominal **Δ**_{p}.

In this section, *t*^{i}_{0} and Δ^{i}_{0} represent the time of the *i*th action potential and the *i*th ISI of a single neuron, respectively, at the nominal **Δ**_{p}; Δ^{i} represents the *i*th ISI of a single neuron otherwise.

We define a domain *F _{i}* on the (*t*, **x**) space, with **x**=[*v*, *u*]^{T}, as a neighborhood of the nominal trajectory of equation 3.1 between the (*i*−1)th and the *i*th action potentials, with the convention *t*^{0}_{0}=0.

**Assumption 3.** There exists a neighborhood of the nominal **Δ**_{p} such that the solution of equation 3.1 exists on the domain *F _{i}* for every **Δ**_{p} in this neighborhood.

The following result states conditions under which equation 5.2 exists.

**Lemma 2.** Under assumptions 2 and 3, there exists a neighborhood of the nominal **Δ**_{p} and a unique function *h _{i}* such that the *i*th ISI is expressible as Δ^{i}=*h _{i}*(**Δ**_{p}), with *h _{i}* continuously differentiable on this neighborhood, if

1. *f*(*v*, *u*) is *C*^{0} on *F _{i}* and its partial derivatives with respect to *v* and *u* are *C*^{0} on *F _{i}*;
2. *g*(*v*, *u*) is *C*^{0} on *F _{i}* and its partial derivatives with respect to *v* and *u* are *C*^{0} on *F _{i}*;
3. *I ^{s}* is *C*^{0} in *t* on *F _{i}* and *C*^{1} in **Δ**_{p} on *F _{i}*;
4. the partial derivative of the membrane potential with respect to *t* is not equal to the derivative of the firing threshold *v _{p}*(*t*) with respect to *t* at the time of the *i*th action potential *t*=*t*^{i}_{0}.

By following the steps provided in proving the claim of lemma 1, it is straightforward to show the claim of lemma 2.

**Remark.** If synaptic currents induced by spikes of more than one presynaptic neuron stimulate the postsynaptic neuron, then the synaptic current *I ^{s}* in lemma 2 can be replaced by the sum of the synaptic currents from all the presynaptic neurons.

With the establishment of results for problems 1 and 2, we next state our main result on the existence of equation 4.1.

In this section, *t*^{i}_{j,0} and *t*^{i}_{j} represent the time of the *i*th action potential of neuron *S _{j}* in the network defined in section 4 at the nominal θ and away from it, respectively.

_{j}Let us assume that neurons in the network defined in section 4 fire a sequence of action potentials for . Let us also assume that the conditions of lemma 1 are satisfied for neuron *S*_{1} and the conditions of lemma 2 are satisfied for neuron *S _{j}* for until time for some . Then there exists a neighborhood such that the

*i*th ISI of neuron

*S*is expressible as for all and , where on .

_{j}Let us define a time . Without loss of generality, let us say that the total number of action potentials occurred in neuron *S*_{1} until time is . It is clear from section 4 that the only stimulating input current to neuron *S*_{1} until time is the external input current with in a small neighborhood of . From the hypothesis of theorem 1, the conditions of lemma 1 are satisfied for neuron *S*_{1} until time for some (arbitrarily small) . Thus, from the conclusion of lemma 1, there exists a and a continuously differentiable function *h*_{1,i}=*h _{i}* on a domain such that the

*i*th ISI of neuron

*S*

_{1}is expressible as for all and .

We now consider the remaining *n*−1 neurons in the network. Until the time of the earliest first action potential among neurons *S*_{2}, …, *S _{n}*, the total stimulating input current to each of these neurons is the synaptic current from neuron *S*_{1}, which, by equations 3.4 and 3.5, is a function of the spike times of *S*_{1} and hence of its ISIs. From the hypothesis of theorem 1, the conditions of lemma 2 are satisfied for neuron *S _{j}* until this time. Thus, from the conclusions of lemma 2, the first ISI of neuron *S _{j}* is expressible as a continuously differentiable function of the ISIs of neuron *S*_{1}. We know that these ISIs of neuron *S*_{1} are continuously differentiable with respect to θ. Using the fact that a continuously differentiable function of a continuously differentiable function is also continuously differentiable with respect to the underlying arguments, it is easy to see that the first ISI of neuron *S _{j}* is also continuously differentiable with respect to θ. Thus, there exist a neighborhood of the nominal θ and a continuously differentiable composite function *h*_{j,1} on it such that the first ISI of neuron *S _{j}* is expressible as Δ^{1}_{j}=*h*_{j,1}(θ).

_{j}It is clear that the synaptic current *I ^{s}*(

*t*;

*t*

^{1}

_{j}), induced by the first action potential of neuron

*S*, contributes to the total stimulating input current to the remaining

_{j}*n*−1 neurons beyond the time of this action potential. Since the hypothesis of lemma 2 is satisfied by neuron

*S*with and ,

_{l}*I*(

^{s}*t*;

*t*

^{1}

_{j}) is continuously differentiable with respect to . We know that is continuously differentiable with respect to on . Thus,

*I*(

^{s}*t*;

*t*

^{1}

_{j}) is continuously differentiable with respect to and the total input current to neuron

*S*

_{1}, that is, is a function of

*t*and on . By applying the above arguments to subsequent action potentials in the network, it is easy to see that the conclusion of theorem 1 holds for all and .

## 6. Example

We consider a single neuron stimulated by an external input current *I*(*t*; θ). Here, θ is a vector of parameters or decision variables. The neuron fires a sequence of action potentials according to equation 3.1 at the nominal θ. We assume that *I*(*t*; θ) is continuously differentiable with respect to θ along with its continuity with respect to *t* in a neighborhood of the nominal θ. We also assume that the qualitative behavior of the neuron remains the same in this neighborhood. We represent the dynamics of the single neuron by the following known spiking single-neuron models and state conditions (see lemma 1) under which a neighborhood of the nominal θ exists such that the ISIs of the neuron are a continuously differentiable function of θ in that neighborhood. Here we assume that the parameters of these models are such that the models exhibit nonchaotic behaviors.

*1. Bidimensional models with linear f(v, u).* In this class of models, we consider the adaptive integrate-and-fire (aIF) model and the adaptive threshold integrate-and-fire (aTIF) model (Rossant, Goodman, Platkiewicz, & Brette, 2010). The aIF model is given by equation 3.1 with *f*(*v*, *u*)=−*v*−*u*. Clearly, *f*(*v*, *u*)=−*v*−*u* is continuously differentiable with respect to *v* and *u*. Also, *g*(*v*, *u*)=−*u* is continuously differentiable with respect to *v* and *u*. Since *v _{p}*(*t*)=1, the required conditions of lemma 1 are satisfied if the slope of *v*(*t*) with respect to *t* at the time of action potentials is nonzero. The same conclusion holds for the integrate-and-fire model since this model is a special case of the aIF model with *u*(*t*)=0 for all *t*.
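The nonzero-slope condition for the aIF model can be checked directly in simulation. The current amplitude, reset value, and jump in *u* below are hypothetical choices of ours:

```python
import numpy as np

# aIF model of the text: f(v, u) = -v - u, g(v, u) = -u, threshold v_p = 1.
# The current amplitude, reset, and jump in u are hypothetical choices.
I, u_jump, v_p, dt, T = 1.5, 0.2, 1.0, 0.01, 100.0
v, u, t = 0.0, 0.0, 0.0
slopes, spikes = [], []
while t < T:
    if v >= v_p:
        # Slope of v at the spike: dv/dt = -v - u + I evaluated at v = v_p.
        slopes.append(-v_p - u + I)
        spikes.append(t)
        v, u = 0.0, u + u_jump          # reset with spike-triggered adaptation
    v, u = v + dt * (-v - u + I), u + dt * (-u)
    t += dt
isis = np.diff(spikes)
all_nonzero = all(s > 0 for s in slopes)   # lemma 1's condition for the aIF model
```

With these values the slope at every spike stays bounded away from zero, so the ISIs are well defined as differentiable functions of the input parameters.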

For the aTIF model, *f*(*v*, *u*)=−*v* is continuously differentiable with respect to *v* and *u*. Also, *g*(*v*, *u*)=*av*−*u* is continuously differentiable with respect to *v* and *u*. Since the firing threshold is *v _{p}*(*t*)=1+*u*(*t*), the required conditions of lemma 1 are satisfied if the partial derivative of *v*(*t*) with respect to *t* is not equal to the partial derivative of *u*(*t*) with respect to *t* at the time of action potentials.
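The adaptive-threshold condition can likewise be monitored numerically. The parameter values and the reset rule below are hypothetical:

```python
import numpy as np

# aTIF model of the text: f(v, u) = -v, g(v, u) = a*v - u, threshold
# v_p(t) = 1 + u(t).  Parameter values and the reset rule are hypothetical.
a, I, dt, T = 0.2, 2.0, 0.01, 100.0
v, u, t = 0.0, 0.0, 0.0
gaps, spikes = [], []
while t < T:
    if v >= 1.0 + u:                    # adaptive threshold crossed
        dv_dt = -v + I                  # slope of the membrane potential
        dvp_dt = a * v - u              # slope of the threshold (= du/dt)
        gaps.append(dv_dt - dvp_dt)     # lemma 1 needs this to be nonzero
        spikes.append(t)
        v, u = 0.0, u + 0.1             # reset; u jumps, raising the threshold
    v, u = v + dt * (-v + I), u + dt * (a * v - u)
    t += dt
isis = np.diff(spikes)
```

Here the gap between the two slopes stays strictly positive at every spike, which is the aTIF form of lemma 1's second condition.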

*2. Bidimensional models with nonlinear f(v, u).* In this class of models, we consider the Izhikevich model, the adaptive exponential (AdEx) integrate-and-fire (IF) model, and the Touboul model. The AdEx IF model is represented by equation 3.1 with *g*(*v*, *u*)=*a*(*v*−*E _{L}*)−*u* (Brette & Gerstner, 2005). The Touboul model is represented by equation 3.1 with *f*(*v*, *u*)=*v*^{4}+2*av*−*u* and *g*(*v*, *u*)=*a*(*bv*−*u*) (Touboul, 2011). In all these models, the firing threshold *v _{p}*(*t*)=*v _{p}* (time invariant). Clearly, *f*(*v*, *u*) is continuously differentiable with respect to *v* and *u*. Also, *g*(*v*, *u*) is continuously differentiable with respect to *v* and *u*. By assumption, the external input current is continuously differentiable with respect to the decision variables and continuous with respect to *t* in a neighborhood of the nominal decision variables. Because of the convexity, faster growth, and regularity of the nonlinear function *f*(*v*, *u*) (Touboul, 2011), the slope of *v*(*t*) with respect to *t* at *v*(*t*)=*v _{p}*, that is, at the time of action potentials, is always greater than 0 (the condition required for the existence of the inverse of the Jacobian in order to apply the implicit function theorem). Thus, these models satisfy the required conditions of lemma 1. From the conclusion of lemma 1, there exists a neighborhood of the nominal decision variables such that the ISIs predicted by these models are a continuously differentiable function of the decision variables in that neighborhood. The same conclusion holds for the quadratic integrate-and-fire model since this model is a special case of the Izhikevich model with *u*(*t*)=0 for all *t*.

As an example, Figure 5 shows the continuity of ISIs with respect to the decision variables in a small neighborhood of their nominal values for the Izhikevich model. The model parameters are chosen such that the model shows regular spiking behavior with a step input current.
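A numerical finite-difference check in the spirit of Figure 5 (the figure itself is not reproduced here) can be sketched as follows; the regular-spiking parameter set and the step amplitude are standard choices for the Izhikevich model, not values quoted by the letter:

```python
import numpy as np

def first_isi(I_amp, dt=0.01, T=1000.0):
    """First inter-spike interval of the Izhikevich model under a step
    current of amplitude I_amp (regular-spiking parameters a, b, c, d)."""
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    v, u = -65.0, b * (-65.0)
    spikes, t = [], 0.0
    while t < T and len(spikes) < 2:
        if v >= 30.0:                       # spike peak reached
            spikes.append(t)
            v, u = c, u + d                 # reset
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I_amp
        du = a * (b * v - u)
        v, u = v + dt * dv, u + dt * du
        t += dt
    return spikes[1] - spikes[0]

base = first_isi(10.0)
perturbed = first_isi(10.0 * 1.01)          # 1% perturbation of the current
```

A 1% perturbation of the step amplitude produces only a small relative change in the first ISI, consistent with continuity of the ISI in the input parameter.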

In the AdEx IF model, the spike rises very quickly past the intrinsic threshold *v _{T}* of the exponential nonlinearity. Therefore, it may appear that our continuous differentiability conditions for ISIs hold over a very narrow domain of the decision variables. We have observed in simulations that a 10% perturbation in a step input current produces at most a 15% deviation in the ISIs predicted by this model. Here, we have chosen the model parameters for tonic spiking. The input current and its perturbation are sufficiently larger than the input current required for spiking through a saddle-node bifurcation (Touboul, 2008). These observations suggest that our results are applicable over a significantly large domain of the decision variables when the model parameters and the stimulating input current are chosen appropriately.

*3. Spike response model.* The spike response model (SRM) is represented as a sum of response kernels, where *v _{i}*(*t*) is the membrane potential of a single neuron *i*, the firing time of the last spike of neuron *i* enters the first kernel, and *t*^{(f)}_{j} are the spike times of presynaptic neurons *j*. *w _{ij}* is the synaptic efficacy. One kernel function describes the form of the action potential and its spike afterpotential, another describes the time course of the response to an incoming spike, and a third is a linear response to an input pulse. The next action potential occurs when *v _{i}*(*t*) hits a threshold with *dv _{i}*(*t*)/*dt*>0 (Gerstner & Kistler, 2002).

In this model, the required conditions of lemma 1 are satisfied if (1) *v _{i}*(*t*) is continuously differentiable with respect to *t* and the decision variables in a small neighborhood of the nominal decision variables, and (2) the difference between the partial derivatives of *v _{i}*(*t*) and the firing threshold with respect to *t* is nonzero at the time of the next action potential. Clearly, satisfaction of these conditions requires restrictions on the kernel functions. As an example, let us assume that the firing threshold is time invariant and *w _{ij}*=0. Now if the afterpotential kernel is continuously differentiable with respect to *t*, the linear response kernel is continuous in *t*, and the input current and the resulting linear response are continuously differentiable with respect to the decision variables in a small neighborhood of their nominal values, then all the required conditions of lemma 1 are satisfied by the model. As a result, there exists a small neighborhood of the nominal decision variables such that the next ISI predicted by the model is continuously differentiable with respect to the decision variables in that neighborhood.
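A closed-form toy instance of the SRM illustrates the example in the last paragraph. The kernels below (an exponential spike afterpotential, a first-order linear response to a constant current, a constant threshold, and *w _{ij}*=0) are our own illustrative choices:

```python
from math import exp, log

# Toy SRM with w_ij = 0 and a constant threshold theta:
#   v(t) = -eta0 * exp(-s / tau) + v_inf * (1 - exp(-s / tau)),  s = t - t_hat,
# i.e. an exponential spike afterpotential plus a first-order linear
# response to a constant input.  All numbers are illustrative.
eta0, tau, v_inf, theta, t_hat = 0.5, 10.0, 1.5, 1.0, 0.0

def v(t):
    s = t - t_hat
    return -eta0 * exp(-s / tau) + v_inf * (1.0 - exp(-s / tau))

def dv(t):
    s = t - t_hat
    return ((eta0 + v_inf) / tau) * exp(-s / tau)

# v(t) = v_inf - (v_inf + eta0) * exp(-s / tau), so the next spike time
# solves theta = v_inf - (v_inf + eta0) * exp(-s / tau) in closed form:
s_next = tau * log((v_inf + eta0) / (v_inf - theta))
# Lemma 1's second condition: dv/dt - d(theta)/dt != 0 at the crossing
# (here d(theta)/dt = 0, since the threshold is time invariant).
slope_gap = dv(t_hat + s_next)
```

Because both kernels are smooth in *t* and the slope gap at the crossing is strictly positive, the next ISI depends smoothly on the kernel and input parameters in this instance.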

## 6. Conclusion

In this letter, we have established conditions under which inter-spike intervals of individual neurons in a recurrent network of synaptically connected spiking neurons are continuously differentiable with respect to parameters (decision variables) of an external stimulating input current that drives the network. The dynamical behavior of a spiking neuron has been represented by a two-state first-order ordinary differential equation with a reset of state variables at the occurrence of each action potential. Using existence theorems for the solution of ordinary differential equations and the implicit function theorem, we have found that ISIs are continuously differentiable with respect to the decision variables if

1. A continuously differentiable solution for the membrane potential of spiking neurons with respect to time *t* and the decision variables exists between consecutive action potentials.
2. The partial derivative of the membrane potential of spiking neurons with respect to time is not equal to the partial derivative of their firing threshold with respect to time at the time of action potentials.
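These two conditions are exactly what the implicit function theorem (see the appendix) needs. Writing α generically for the decision variables (the excerpt's rendering drops the original symbol), a sketch of the derivative of a spike time:

```latex
% An action potential at time t^* solves the spike-time equation
%   F(t,\alpha) \;\equiv\; v(t;\alpha) - v_p(t) \;=\; 0 .
% Condition 1 makes F of class C^1 near (t^*, \alpha_0); condition 2 says
%   \partial F/\partial t = \partial v/\partial t - dv_p/dt \neq 0
% there.  The implicit function theorem then yields a C^1 map t^*(\alpha) with
\frac{dt^*}{d\alpha}
   = -\,\frac{\partial v/\partial \alpha}
             {\partial v/\partial t - dv_p/dt}\bigg|_{(t^*,\alpha_0)} ,
% so each spike time, and hence each ISI (a difference of consecutive
% spike times), is continuously differentiable in \alpha near \alpha_0.
```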

Under certain assumptions, we have shown that these conditions are fulfilled by nonlinear bidimensional spiking neuron models in the presence of a time-continuous input current that is also continuously differentiable with respect to its parameters (see example 2). In the case of linear bidimensional spiking neuron models, additional constraints must be imposed on the stimulating input current in order to fulfill these conditions.

*Are our results applicable to spiking neuron models in the presence of extrinsic or intrinsic noise?* It is well known that single neurons are intrinsically noisy (Faisal, Selen, & Wolpert, 2008; Gerstner & Naud, 2009). Typically this noisy characteristic of single neurons is captured in deterministic dynamical models of spiking neurons by including an additive or multiplicative noise term in the form of an external input current to the model. In this case, the dynamical model takes the form of stochastic differential equations, and the resultant sequence of ISIs (also known as the first passage times) becomes a stochastic process (Ricciardi & Sato, 1988; Touboul & Faugeras, 2007, 2008; Sirovich & Knight, 2011). From an optimization point of view, the relevant question here would be to find conditions under which the expectation or the conditional expectation of the first passage time is continuously differentiable with respect to the decision variables.

Since our derived results in this letter are based on properties of ordinary differential equations, these results are not applicable to stochastic models of spiking neurons. In future work, we will derive conditions under which the expectation or the conditional expectation of first passage time is continuously differentiable in stochastic models of spiking neurons and recurrent networks.

*Are our results applicable to spiking neuron models that show chaotic behavior?* We consider the adaptive exponential (AdEx) integrate-and-fire (IF) model, which has been shown to give a chaotic solution with appropriate model parameters (Naud, Marcille, Clopath, & Gerstner, 2008; Touboul & Brette, 2008). The dynamical behavior of the model is given by equation 3.1 with *g*(*v*, *u*)=*a*(*v*−*E _{L}*)−*u* and the time-invariant firing threshold *v _{p}*(*t*)=*v _{p}*, driven by an external input current.

When the dynamical behavior of this model is nonchaotic, we have shown in example 2 that the model satisfies the conditions of lemma 1 for an input current that is continuous in *t* and continuously differentiable with respect to the decision variables in a small neighborhood of their nominal values. Thus, in this case, our results guarantee the existence of a small neighborhood of the nominal decision variables in which the ISIs are a continuously differentiable function of the decision variables. When the dynamical behavior of this model is chaotic, our results cannot guarantee the existence of such a neighborhood beyond the first ISI.

Although our results are not applicable to chaotic neuron models beyond the first ISI, it may be possible to find a small neighborhood of the nominal decision variables in which the initial ISIs can be expressed as a continuously differentiable function of the decision variables, since the conditions of lemma 1 are satisfied by the model. As an example, Figure 6 shows the deviation in the ISIs predicted by the AdEx IF model with chaotic model parameters when the applied step input current is perturbed by 0.01%.

In this example, the initial ISIs show small changes with a 0.01% perturbation in *I*. Moreover, we have verified in simulation (not shown here) that the difference in the initial ISIs decreases as the perturbation size decreases, which confirms the continuity of these ISIs with respect to *I* in a small neighborhood. Later ISIs show irregular changes even with an infinitesimal perturbation in *I* because of the chaotic behavior of the model.

In conclusion, our results guarantee the continuous differentiability of ISIs in all spiking neuron models and network models if the required conditions of theorem 1 are satisfied by these models.

Throughout our work, we have assumed that each presynaptic neuron establishes only one synapse with each of its postsynaptic neurons and all synapses within the network are identical (i.e., excitatory). These assumptions are used only to simplify the mathematical complexity of our work. Relaxation of these assumptions by including multiple synapses of both types, excitatory and inhibitory, will not change the conclusion of the reported work as long as our derived conditions are satisfied.

## Appendix: Frequently Used Theorems

*Theorem 2.* Let *D* be a domain in the (*t*, *x*) space, and extend it to a domain in the (*t*, *x*, parameter) space by letting the parameter vector range within a distance *c*>0 of a fixed nominal value. Let the right-hand side of equation A.1 be continuous on this extended domain and satisfy a Lipschitz condition in *x* uniformly on it. For the fixed nominal parameter, let a solution of equation A.1 be given on an interval. Then there exists a δ>0 such that, for any initial time, initial state, and parameter within δ of those of the given solution, there exists a unique solution of equation A.1 through that initial condition. Moreover, the solution is continuous on the resulting (*n*+*k*+2)-dimensional domain of times, initial conditions, and parameters.

*Theorem 3* (Coddington & Levinson, 1955). Let the hypothesis of theorem 2 be satisfied. Suppose, in addition, that the partial derivatives of the right-hand side of equation A.1 with respect to *x* and the parameters exist and are continuous on the extended domain. Then the solution defined in theorem 2 is continuously differentiable on that domain.

*Theorem 4* (Apostol, 1957). Let **f**=(*f*_{1}, …, *f _{n}*) be a vector-valued function defined on an open set *S* in *E*_{n+k} with values in *E _{n}*. Suppose **f**∈*C*′ on *S*. Let (**x**_{0}; **t**_{0}) be a point in *S* for which **f**(**x**_{0}; **t**_{0})=**0** and for which the determinant det[*D _{j} f _{i}*(**x**_{0}; **t**_{0})]≠0. Then there exists a *k*-dimensional neighborhood *T*_{0} of **t**_{0} and one, and only one, vector-valued function **g**, defined on *T*_{0} and having values in *E _{n}*, such that **g**∈*C*′ on *T*_{0}, **g**(**t**_{0})=**x**_{0}, and **f**(**g**(**t**); **t**)=**0** for every **t** in *T*_{0}.

## Acknowledgments

Financial support from the U.S. National Science Foundation, Cyber Enabled Discovery and Innovation program through grant EECS0835632, is gratefully acknowledged. We acknowledge the constructive suggestions by the reviewers, which helped us in improving the readability of the manuscript.