IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 6, JUNE 2004
Time Intervals and Counting in Point Processes

Bernard Picinbono, Life Fellow, IEEE
Abstract—Time point processes can be analyzed in two different ways: by the number of points in arbitrary time intervals or by the distances between points. This corresponds to two distinct physical devices: counting or time-interval measurements. We present an explicit calculation, valid for arbitrary regular processes, of the statistical properties of time intervals such as residual or life time in terms of counting probabilities. For this calculation, we show that these intervals must be considered as random variables defined by conditional distributions.

Index Terms—Counting, point processes, time measurements.

I. INTRODUCTION

Point processes play an important role in many areas of physics and information sciences. They appear on a microscopic scale in the description of particle emission; for example, optical communication at a very low level of intensity requires the use of the statistical properties of photons or photoelectrons [1], [2]. On the other hand, at a macroscopic level, many areas such as traffic problems or computer communications require the use of point-process statistics [3].

There are two approaches for describing point processes theoretically or studying them experimentally. The first makes use of counting procedures in one or several nonoverlapping time intervals. The appropriate physical devices for this approach are counters. A limit aspect of counting appears in coincidence experiments, in which the counting intervals are so small that they can contain only one point or none [4].

On the other hand, it is possible to analyze point processes by measuring the time intervals between points. This introduces the concept of residual time (also called survival time or waiting time) of order $n$, which is the time distance between an arbitrary time instant and the $n$th point of the process following this instant. It is also possible to study the life time, which is the time distance between successive or nonsuccessive points of the process. In the stationary case, the calculation of the probability distributions of residual or life times in terms of counting probabilities is known [5], [6]. However, in many practical situations the stationarity assumption cannot be introduced, and the direct transposition of the results obtained in the stationary case is not possible. The main reason is that the time intervals must be considered as random variables (RVs) defined by conditional distributions. We shall see that this remark is of no importance in the stationary case, but it must be taken into account for nonstationary processes. The omission of this fact has resulted in many incorrect expressions appearing in classical books on point processes. This is one of the reasons for analyzing the problem again and more carefully.

Before going further, let us introduce some general concepts and notation that will be used throughout the correspondence. As indicated in the title, we are interested in time point processes, which means that the points are time instants. We assume that the point processes studied are defined only in a time interval $[T_i, T)$, where $T_i$ and $T$ are the beginning and the end of the processes, respectively. For the sake of simplicity we take $T_i$ as the origin of time, or $T_i = 0$.

We denote by $N[t_1, t_2)$ the number of points in the interval $[t_1, t_2)$. It is a discrete-valued RV, and the point process is entirely defined if, for any set of nonoverlapping intervals $[t_i, t_{i+1})$, the joint probability distribution of the RVs $N[t_i, t_{i+1})$ is known. These probabilities are called counting probabilities, and we shall use the notation

$$p_i(t; \tau) = P\{N[t, \tau) = i\}. \qquad (1)$$

Manuscript received July 18, 2002; revised July 17, 2003. The material in this correspondence was presented in part at the GRETSI Conference (in French), Paris, France, September 2003. The author is with the Laboratoire des Signaux et Systèmes (L2S), Supélec, Plateau de Moulon, 91912 Gif-sur-Yvette, France (e-mail: bernard.picinbono@lss.supelec.fr). Communicated by A. Kavčić, Associate Editor for Detection and Estimation. Digital Object Identifier 10.1109/TIT.2004.828081

II. RESIDUAL TIME OF ORDER n

A. General Results

Let $t$ be an arbitrary time instant satisfying $T_i \le t \le T$. The residual time of order $n$ is the RV $R_n(t)$ equal to the distance between the origin $T_i$ and the $n$th point of the process posterior to $t$. It is fundamental to note that this RV does not exist if there are fewer than $n$ points posterior to $t$, or if the event $\{N[t, T) < n\}$ is realized. Consequently, the distribution function (DF) of $R_n(t)$, defined by $F_n(t; \tau) = P[R_n(t) \le \tau]$, $t \le \tau \le T$, is the conditional probability

$$F_n(t; \tau) = P(\{N[t, \tau) \ge n\} \mid \{N[t, T) \ge n\}) = \frac{P(\{N[t, \tau) \ge n\} \cap \{N[t, T) \ge n\})}{P(N[t, T) \ge n)}. \qquad (2)$$
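The counting probabilities (1) that enter the conditional DF (2) are easy to estimate by simulation. The following minimal Monte Carlo sketch (Python/NumPy; the intensity $\lambda(u) = 1 + u$ on $[0, 2)$ and all variable names are illustrative choices, not from the correspondence) simulates a nonstationary Poisson process and compares the empirical $p_0(t;\tau)$ with its known closed form $\exp[-m(t;\tau)]$.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2.0
def lam(u): return 1.0 + u                     # illustrative intensity
def m(a, b): return (b - a) + (b**2 - a**2)/2  # m(a;b) = integral of lam on [a,b)

# Given the total count, point locations are i.i.d. with density lam(u)/m(0;T);
# precompute the CDF of that density on a grid for inverse-transform sampling.
grid = np.linspace(0.0, T, 4001)
cdf = m(0.0, grid) / m(0.0, T)

def sample_process():
    """One realization on [0, T): Poisson total count, then i.i.d. locations."""
    n = rng.poisson(m(0.0, T))
    return np.interp(rng.random(n), cdf, grid)

t, tau = 0.5, 1.5
trials = 20000
counts = np.array([np.sum((pts >= t) & (pts < tau))
                   for pts in (sample_process() for _ in range(trials))])
p0_hat = (counts == 0).mean()   # empirical p_0(t;tau) = P{N[t,tau) = 0}
p0_true = np.exp(-m(t, tau))    # closed form for a Poisson process
print(p0_hat, p0_true)
```

The same `counts` array also yields the higher-order probabilities $p_i(t;\tau)$ needed in the ratio (2).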
As $\tau < T$, the numerator is equal to

$$P(N[t, \tau) \ge n) = 1 - \sum_{i=0}^{n-1} p_i(t; \tau). \qquad (3)$$

The denominator has the same structure, but $p_i(t;\tau)$ is replaced by $p_i(t;T)$. This yields

$$F_n(t; \tau) = \frac{1 - \sum_{i=0}^{n-1} p_i(t; \tau)}{1 - \sum_{i=0}^{n-1} p_i(t; T)}. \qquad (4)$$
It results directly from (2) that $F_n(t;\tau)$ is effectively a DF, that is, a nondecreasing function varying from $0$ to $1$ when $\tau$ varies from $t$ to $T$. The probability density function (pdf) $f_n(t;\tau)$ is the derivative of this expression with respect to $\tau$, when it exists, or

$$f_n(t; \tau) = -\left[1 - \sum_{i=0}^{n-1} p_i(t; T)\right]^{-1} \sum_{i=0}^{n-1} \frac{\partial p_i(t; \tau)}{\partial \tau}. \qquad (5)$$

It is clear that (4) or (5) establishes a relation between counting probabilities and the statistics of the residual time, which is the objective of this correspondence.
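The DF (4) can be evaluated directly from the counting probabilities. The sketch below (illustrative nonstationary Poisson example with $\lambda(u) = 1 + u$, for which $p_i(t;\tau) = e^{-m} m^i/i!$; parameter values are arbitrary) checks numerically that $F_n(t;t) = 0$, $F_n(t;T) = 1$, and that $F_n$ is nondecreasing, as stated above.

```python
import numpy as np
from math import factorial

T = 2.0
def m(t, tau): return (tau - t) + (tau**2 - t**2)/2  # integrated intensity

def p(i, t, tau):            # counting probability p_i(t;tau) of (1)
    mu = m(t, tau)
    return np.exp(-mu) * mu**i / factorial(i)

def F(n, t, tau):            # DF of the residual time, eq. (4)
    num = 1 - sum(p(i, t, tau) for i in range(n))
    den = 1 - sum(p(i, t, T) for i in range(n))
    return num / den

t, n = 0.5, 2
taus = np.linspace(t, T, 2001)
Fvals = np.array([F(n, t, u) for u in taus])
print(Fvals[0], Fvals[-1])   # endpoints of the DF
```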
B. Stationary Case
In this case, $p_i(t;\tau) = p_i(\tau - t)$ and $T$ tends to infinity. As a result, $p_i(t;T) = 0$ and (4) becomes

$$F_n(t; \tau) = 1 - \sum_{i=0}^{n-1} p_i(\tau - t)$$

where $p_i(\tau - t)$ is the probability of counting $i$ points between $t$ and $\tau$. This is the classical expression for stationary point processes. This shows clearly the difference between the stationary and nonstationary cases. When the point process is stationary, it is not necessary to consider a conditional distribution, because the event introducing the condition is realized with probability $1$. Indeed, except when the process has a zero density, there is always an infinite number of points posterior to any time instant $t$.
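For a stationary Poisson process of density $\lambda$, the expression above says that the distance to the $n$th point after $t$ has the Erlang$(n,\lambda)$ distribution, i.e., it is a sum of $n$ i.i.d. exponential gaps. A short Monte Carlo check (parameter values are illustrative):

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)
lam_, n, u = 1.5, 3, 1.0   # density, order, and u = tau - t
# Stationary form of (4): F_n = 1 - sum_{i<n} p_i(lam*u), Poisson probabilities
F_closed = 1 - exp(-lam_*u) * sum((lam_*u)**i / factorial(i) for i in range(n))
# Distance to the nth point = sum of n i.i.d. exponential interpoint gaps
gaps = rng.exponential(1/lam_, (200000, n)).sum(axis=1)
F_mc = (gaps <= u).mean()
print(F_closed, F_mc)
```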
C. Poisson Processes
Consider a nonstationary Poisson process defined by a density $\lambda(t)$ equal to zero if $t$ is not in the interval $[T_i, T)$. For the following calculations, it is worth introducing the quantity $d_n(m)$ associated with any Poisson distribution of mean value $m$ and defined by

$$d_n(m) = e^{-m}\left[1 + m + \cdots + \frac{m^{n-1}}{(n-1)!}\right] = \sum_{i=0}^{n-1} p_i(m). \qquad (6)$$

This is obviously the probability that a Poisson RV of mean $m$ takes a value smaller than $n$. With this notation, (4) can be written as

$$F_n(t; \tau) = \frac{1 - d_n(m)}{1 - d_n(M)} \qquad (7)$$

where

$$m = m(t; \tau) = \int_t^{\tau} \lambda(\xi)\, d\xi, \qquad M = m(t; T) = \int_t^{T} \lambda(\xi)\, d\xi.$$

In order to calculate the pdf, we note that the derivative of $d_n(m)$ defined by (6) with respect to $m$ is $-\exp(-m)\, m^{n-1}/(n-1)!$. Consequently, the pdf of the residual time is

$$f_n(t; \tau) = \frac{1}{1 - d_n(M)}\, \lambda(\tau)\, e^{-m(t;\tau)}\, \frac{[m(t;\tau)]^{n-1}}{(n-1)!}, \qquad t \le \tau \le T. \qquad (8)$$

This expression can be obtained directly from the properties of a Poisson process. Note that the factor $[1 - d_n(M)]^{-1}$ in (8) ensures that $f_n(t;\tau)$ is indeed a pdf. This factor is usually forgotten, which introduces a function that is not a pdf. This, for example, is the case for [7, eq. (3.3)], [8, eq. (2.20)], [9, eq. (2.3.6)], or [10, eq. (8.8)]. The reason for these expressions is that the calculations do not take into account the fact that the residual time is an RV defined by a conditional DF.

In order to visualize the effect of the nonstationarity, consider the example of a Poisson process with a density equal to $\lambda$ in the interval $[0, T)$ and zero outside. The value of the pdf of the residual time $R_n = R_n(0)$ is

$$f_n(\tau) = \frac{1}{1 - d_n(M)}\, \lambda\, e^{-\lambda\tau}\, \frac{(\lambda\tau)^{n-1}}{(n-1)!} \qquad (9)$$
and zero outside this interval. It is easy to calculate the mean value of this RV, which is

$$\langle R_n \rangle = c_n(M)\, \frac{n}{\lambda}, \qquad c_n(M) = \frac{1 - d_{n+1}(M)}{1 - d_n(M)} \qquad (10)$$

where $d_n$ and $M$ are defined just above. When $T \to \infty$, $c_n(M) \to 1$, and $\langle R_n \rangle$ tends to the value $n/\lambda$. This is in agreement with the fact that in a stationary Poisson process of density $\lambda$ the distances between successive points are independent and identically distributed (i.i.d.) random variables of mean value $1/\lambda$. Thus, the term $c_n(M)$ can be considered as a correction factor due to the nonstationary character of the process. This correction factor is represented in Fig. 1 as a function of $M$ for various values of $n$. It is clear that the normalization factor $[1 - d_n(M)]^{-1}$ in the pdf is fundamental to obtain the correct mean value.

The difference between stationary and nonstationary Poisson processes with constant density is especially important when the density becomes very small. In the stationary case, the mean value $n/\lambda$ tends to infinity when $\lambda \to 0$. This is of course impossible in our particular case of a Poisson process with a density equal to zero outside the interval $[0, T)$. For $\lambda \to 0$, the mean value $\langle R_n \rangle$ defined by (10) tends to $Tn/(n+1)$. For example, for $n = 1$ this gives $T/2$, and this can also be obtained from the pdf $f_1$, which tends to $1/T$ and shows that the RV $R_1$ is uniformly distributed in the interval $[0, T)$.
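The normalization of (9) and the mean value (10) can be checked by direct quadrature. The sketch below uses the correction factor $c_n(M) = [1 - d_{n+1}(M)]/[1 - d_n(M)]$ as in (10); the parameter values and the helper `trap` (a plain trapezoidal rule) are illustrative choices.

```python
import numpy as np
from math import exp, factorial

def d(n, m):    # d_n(m) of (6): P(Poisson(m) < n)
    return exp(-m) * sum(m**k / factorial(k) for k in range(n))

def trap(y, x): # simple trapezoidal quadrature
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Constant density lam_ on [0, T): pdf (9) and mean (10)
lam_, T, n = 0.7, 3.0, 2
M = lam_ * T
tau = np.linspace(0.0, T, 40001)
f = lam_ * np.exp(-lam_*tau) * (lam_*tau)**(n-1) / factorial(n-1) / (1 - d(n, M))
norm = trap(f, tau)                     # should be 1: f_n is a pdf
mean = trap(tau * f, tau)               # <R_n> by quadrature
c_n = (1 - d(n+1, M)) / (1 - d(n, M))   # correction factor of (10)
theory = c_n * n / lam_
print(norm, mean, theory)
```

Without the factor $[1 - d_n(M)]^{-1}$, `norm` would fall short of one, which is precisely the normalization error discussed above.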
D. Compound Poisson Processes

Compound Poisson processes, sometimes called doubly stochastic processes [8], are Poisson point processes in which the density $\lambda(t)$ is a random function [10]. They play an important role in many areas of physics and information sciences, especially in optical communications. Indeed, it can be shown by various arguments that they describe the point process of the detection of photons and that the random density is proportional to the random intensity of the optical field [11]. For these processes, the calculation of the probabilities $p_i$ requires an ensemble average $E[\cdot]$ over $\lambda(t)$. Then all the previous calculations can be used again, with the only difference that $d_n(m)$ and $d_n(M)$ are replaced by their expectation values with respect to $\lambda(t)$. The pdf of the residual time thus becomes

$$f_n(t; \tau) = \frac{1}{1 - E[d_n(M)]}\, E\left[\lambda(\tau)\, \frac{\left(\int_t^{\tau} \lambda(\xi)\, d\xi\right)^{n-1}}{(n-1)!}\, \exp\left(-\int_t^{\tau} \lambda(\xi)\, d\xi\right)\right]. \qquad (11)$$

In the case where $\lambda(t)$ is stationary, $E[d_n(M)] = 0$, and we find once again a known classical expression (see [10, p. 348]).

III. LIFE TIME OF ORDER n

A. General Results
Fig. 1. Correction term $c_n(M)$ of (10) in terms of $M$ for $n = 1, 2, 3, 4, 5$.

The difference with the residual time is that there is now a point of the process at $t$. However, as this is not an event of nonzero probability, we proceed as follows. Let $L_n(t; \Delta t)$ be the RV equal to the distance between the origin and the $n$th point of the process posterior to $t + \Delta t$, on the condition that there is at least one point in $[t, t+\Delta t)$ and at least $n$ points in $[t+\Delta t, T)$. We shall first calculate the DF $\hat{G}_n(t; \tau; \Delta t)$ of $L_n(t; \Delta t)$ and, second, its limit $G_n(t; \tau)$ when $\Delta t \to 0$. In order to calculate $\hat{G}_n(t; \tau; \Delta t)$, we proceed as in Section II and we start from

$$\hat{G}_n(t; \tau; \Delta t) = P(\{N[t+\Delta t, \tau) \ge n\} \mid \{N[t+\Delta t, T) \ge n\} \cap \{N[t, t+\Delta t) \ge 1\}). \qquad (12)$$

We obviously assume that $P(\{N[t, t+\Delta t) \ge 1\}) > 0$; otherwise, there is almost surely no point in $[t, t+\Delta t)$, and the problem does not make sense. This is ensured if $\lambda(t) > 0$, where $\lambda(t)$ is the density of the point process. Its exact definition is given by (31).

It results directly from this expression that $\hat{G}_n(t; \tau; \Delta t)$ is a DF, or a nondecreasing function of $\tau$ varying from $0$ to $1$ when $\tau$ varies from $t+\Delta t$ to $T$. Of course, we have $\hat{G}_n(t; \tau; \Delta t) = 0$ for $\tau < t+\Delta t$ and $\hat{G}_n(t; \tau; \Delta t) = 1$ for $\tau > T$. By using the definition of conditional probability and noting that $t+\Delta t < \tau < T$, we obtain

$$\hat{G}_n(t; \tau; \Delta t) = \frac{P(\{N[t+\Delta t, \tau) \ge n\} \cap \{N[t, t+\Delta t) \ge 1\})}{P(\{N[t+\Delta t, T) \ge n\} \cap \{N[t, t+\Delta t) \ge 1\})}. \qquad (13)$$

Let $q_{i,\tau}(\Delta t)$ be the probability defined by

$$q_{i,\tau}(\Delta t) = P(\{N[t+\Delta t, \tau) = i\} \cap \{N[t, t+\Delta t) \ge 1\}) \qquad (14)$$

and satisfying

$$\sum_{i=0}^{\infty} q_{i,\tau}(\Delta t) = P(N[t, t+\Delta t) \ge 1). \qquad (15)$$

The numerator of (13) is obviously $\sum_{i \ge n} q_{i,\tau}(\Delta t)$, and applying the same idea to the denominator, we deduce from (15) that

$$\hat{G}_n(t; \tau; \Delta t) = \frac{P(N[t, t+\Delta t) \ge 1) - \sum_{i=0}^{n-1} q_{i,\tau}(\Delta t)}{P(N[t, t+\Delta t) \ge 1) - \sum_{i=0}^{n-1} q_{i,T}(\Delta t)}. \qquad (16)$$

It is shown in Appendix I that if the partial derivative $\partial p_k(t;\tau)/\partial t$ exists, an assumption verified in all the examples presented below, then the limit of $\hat{G}_n(t; \tau; \Delta t)$ for $\Delta t \to 0$ is

$$G_n(t; \tau) = \frac{\lambda(t) - \sum_{i=0}^{n-1} \sum_{k=0}^{i} \frac{\partial}{\partial t} p_k(t; \tau)}{\lambda(t) - \sum_{i=0}^{n-1} \sum_{k=0}^{i} \frac{\partial}{\partial t} p_k(t; T)}. \qquad (17)$$

It is also shown in Appendix I that this limit is indeed a DF. The corresponding pdf, when it exists, is obviously

$$g_n(t; \tau) = -\left[\lambda(t) - \sum_{i=0}^{n-1} \sum_{k=0}^{i} \frac{\partial}{\partial t} p_k(t; T)\right]^{-1} \sum_{i=0}^{n-1} \sum_{k=0}^{i} \frac{\partial^2 p_k(t; \tau)}{\partial t\, \partial \tau}. \qquad (18)$$

Note that for some calculations it is simpler to write the sums appearing in (17) and (18) in another form by using the relation

$$\sum_{i=0}^{n-1} \sum_{k=0}^{i} u_k = \sum_{k=0}^{n-1} (n - k)\, u_k. \qquad (19)$$

B. Life Time of Order $n = 1$

For $n = 1$, (17) and (18) can be expressed as

$$G_1(t; \tau) = \frac{\lambda(t) - \frac{\partial}{\partial t} p_0(t; \tau)}{\lambda(t) - \frac{\partial}{\partial t} p_0(t; T)}, \qquad g_1(t; \tau) = -\left[\lambda(t) - \frac{\partial}{\partial t} p_0(t; T)\right]^{-1} \frac{\partial^2 p_0(t; \tau)}{\partial t\, \partial \tau}. \qquad (20)$$

For example, in the case of a nonstationary compound Poisson process, where $p_0(t;\tau) = E\{\exp[-m(t;\tau)]\}$, this expression becomes

$$g_1(t; \tau) = \frac{E[\lambda(t)\, \lambda(\tau)\, e^{-m(t;\tau)}]}{\bar{\lambda}(t) - E[\lambda(t)\, e^{-M}]} \qquad (21)$$

where $\bar{\lambda}(t) = E[\lambda(t)]$ and $M$ and $m$ are given by (7).

C. Stationary Case

In this case, $T \to \infty$ and $\lambda(t) = \lambda$. The denominator of (18) becomes $\lambda$, and with (19) we obtain

$$g_n(t; \tau) = -\frac{1}{\lambda} \sum_{k=0}^{n-1} (n - k)\, \frac{\partial^2 p_k(t; \tau)}{\partial t\, \partial \tau}. \qquad (22)$$

But in the stationary case, $p_k(t;\tau)$ is only a function of $\tau - t$, say $p_k(\tau - t)$. This yields another form of (22) written as

$$g_n(t; \tau) = \frac{1}{\lambda} \sum_{k=0}^{n-1} (n - k)\, p_k''(\tau - t). \qquad (23)$$

This is the same as relation (81) which appears in [5].

The use of (22) and (23) requires some care, and this shall be analyzed through the example of $g_1(t;\tau)$ in the case of a stationary compound Poisson process. For such a point process

$$p_0(t; \tau) = E\left\{\exp\left[-\int_t^{\tau} \lambda(\xi)\, d\xi\right]\right\} \qquad (24)$$

which is indeed a function of $\tau - t$ only, because $\lambda(t)$ is a stationary random function. Application of (22) yields

$$g_1(t; \tau) = \frac{1}{\lambda}\, E\left[\lambda(t)\, \lambda(\tau)\, \exp\left(-\int_t^{\tau} \lambda(\xi)\, d\xi\right)\right] \qquad (25)$$

where $\lambda = E[\lambda(t)]$. This can be obtained directly from the properties of stationary compound Poisson processes (see [10, p. 348]). On the other hand, application of (23) yields

$$g_1(t; \tau) = \frac{1}{\lambda}\, E\left\{\left[-\lambda'(\tau) + \lambda^2(\tau)\right] \exp\left(-\int_t^{\tau} \lambda(\xi)\, d\xi\right)\right\}. \qquad (26)$$

It is not obvious that these two expressions are identical. This can be shown by analytical arguments not presented here. However, it is clear that (25) is much more convenient than (26), because it does not use the derivative of the random function $\lambda(t)$. Its interpretation in terms of stationary compound Poisson processes is also simpler, because $\lambda(t)$ and $\lambda(\tau)$ are directly related to the presence of a point of the process at $t$ and $\tau$, respectively, while the exponential term indicates no point between these instants.

D. Nonstationary Compound Poisson Processes

In this case

$$p_k(t; \tau) = E\left\{\exp[-m(t;\tau)]\, \frac{[m(t;\tau)]^k}{k!}\right\}$$

with $m(t;\tau) = \int_t^{\tau} \lambda(\xi)\, d\xi$, and the expectation is taken with respect to $\lambda(\cdot)$. The principle of the calculation is given in Appendix II, and the result is

$$g_n(t; \tau) = \frac{1}{\bar{\lambda}(t) - c}\, E\left[\lambda(t)\, \lambda(\tau)\, \frac{\left(\int_t^{\tau} \lambda(\xi)\, d\xi\right)^{n-1}}{(n-1)!}\, \exp\left(-\int_t^{\tau} \lambda(\xi)\, d\xi\right)\right] \qquad (27)$$

with $c = E[\lambda(t)\, d_n(M)]$, where $d_n(M)$ is given by (6) and (7). For $n = 1$, we once again find (21). The main difference with (11) is the presence of the term $\lambda(t)$ due to the condition that there is one point at $t$.
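For a pure (nonrandom) Poisson process, substituting $\partial p_0(t;\tau)/\partial t = \lambda(t) e^{-m(t;\tau)}$ into (20) gives $G_1(t;\tau) = (1 - e^{-m})/(1 - e^{-M})$. The sketch below (illustrative intensity $\lambda(u) = 1 + u$ and parameter values) checks this reduction by evaluating (20) with a numerical $t$-derivative.

```python
import numpy as np

T = 2.0
def m(t, tau): return (tau - t) + (tau**2 - t**2)/2  # integrated intensity
def p0(t, tau): return np.exp(-m(t, tau))            # p_0 for a Poisson process

def G1(t, tau, h=1e-5):
    """Life-time DF of order 1 via (20), with dp_0/dt by central difference."""
    def dp0_dt(s): return (p0(t + h, s) - p0(t - h, s)) / (2*h)
    lam_t = 1.0 + t
    return (lam_t - dp0_dt(tau)) / (lam_t - dp0_dt(T))

t, tau = 0.3, 1.2
closed = (1 - p0(t, tau)) / (1 - p0(t, T))  # reduction of (20) for pure Poisson
print(G1(t, tau), closed)
```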
IV. MULTIPLE TIME DISTANCES

Instead of studying a single time distance, it is possible to analyze two or several such distances jointly. In order to simplify the presentation, we restrict ourselves to the case of two distances of order one, the extension to other cases introducing only notational complexity but no conceptual difficulties.

Consider the two RVs $R_1(t)$ and $R_2(t)$ as defined at the beginning of Section II. By construction, they satisfy the condition $R_1(t) < R_2(t)$. Let $f(t; \tau_1; \tau_2)$ be their joint pdf. A calculation transposing the method used above to this case yields

$$f(t; \tau_1; \tau_2) = -\left[1 - p_0(t;T) - p_1(t;T)\right]^{-1} \frac{\partial^2 p_{10}(t; \tau_1; \tau_2)}{\partial \tau_1\, \partial \tau_2}, \qquad t \le \tau_1 \le \tau_2 < T \qquad (28)$$

and zero otherwise. In this expression, $p_{10}(t; \tau_1; \tau_2)$ is the probability that the intervals $[t, \tau_1)$ and $[\tau_1, \tau_2)$ contain $1$ and $0$ points, respectively. In the case of a nonstationary Poisson process, this pdf is

$$f(t; \tau_1; \tau_2) = \frac{1}{1 - d_2(M)}\, \lambda(\tau_1)\, \lambda(\tau_2)\, \exp[-m(t; \tau_2)] \qquad (29)$$

where $d_2(M)$ is defined by (6) and (7). It is easy to verify that this pdf is normalized in its domain of definition $t \le \tau_1 \le \tau_2 < T$.

Instead of the RVs $R_1$ and $R_2$, it is sometimes more interesting to introduce the distances between successive points, or the RVs $S_1$ and $S_2$ defined by

$$S_1 = R_1 - t, \qquad S_2 = R_2 - R_1.$$

They are defined in the domain $s_1 \ge 0$, $s_2 \ge 0$, and $s_1 + s_2 \le T - t$. Their joint pdf can easily be deduced from (28) by an obvious transformation.

The same calculation can be made with the additional condition that there is one point at the origin, or by using the RVs $L_1(t)$ and $L_2(t)$ of Section III. A calculation similar to the one presented in Section III yields

$$g(t; \tau_1; \tau_2) = -\left[\lambda(t) - 2\frac{\partial p_0(t;T)}{\partial t} - \frac{\partial p_1(t;T)}{\partial t}\right]^{-1} \frac{\partial^3 p_{10}(t; \tau_1; \tau_2)}{\partial t\, \partial \tau_1\, \partial \tau_2}. \qquad (30)$$

It is easy to find that in the case of a nonstationary Poisson process we again obtain (29), which is a direct consequence of the absence of memory in Poisson processes. On the other hand, for compound Poisson processes we find

$$g(t; \tau_1; \tau_2) = \frac{E\{\lambda(t)\, \lambda(\tau_1)\, \lambda(\tau_2)\, \exp[-m(t; \tau_2)]\}}{\bar{\lambda}(t) - E\{\lambda(t)\, d_2[m(t; T)]\}}.$$

In the case where $\lambda(t)$ is no longer random, i.e., in the case of a Poisson process, this expression once again gives (29) because $\bar{\lambda}(t) = \lambda(t)$.

The procedure introduced in this section can easily be extended to more than two RVs $R_i$ or $L_i$. The method of calculation is the same, and there is only greater complexity in the expressions but no specific difficulty. Thus, we shall not present these calculations here. However, the general result is the same: the joint pdf of a set of $n$ such RVs can be deduced from the counting probabilities in $n$ adjacent intervals.

V. CONCLUSION

The purpose of this correspondence was to calculate the statistical properties of the distances between points of a point process in terms of the statistics of counting in some time intervals. This problem was solved a long time ago for stationary processes, but the extension of the results to nonstationary processes has so far been presented in an incorrect way. We have shown that this extension requires considering the time distances between points as conditional random variables, and that the errors appearing in the literature stemmed from overlooking this point. The consequences were analyzed in this correspondence for the residual time, and some numerical examples on pure and compound Poisson processes illustrate the calculations. The analysis was extended to the life time of any regular point process. It was shown that the pdf of the life time can be deduced from counting probabilities in one single interval. Finally, the procedure was extended to multiple time distances, and the expression of the statistics of the distances was given in terms of counting probabilities in adjacent intervals.

APPENDIX I
CALCULATION OF $G_n(t;\tau)$

Let us first note that a regular point process is a process such that there is no accumulation domain, which means that an infinitesimal interval can contain only $0$ or $1$ point (see [12, p. 53]). In order to express this fact, let $I$ be the RV equal to $N[t, t+\Delta t)$. The regularity is specified by the fact that the probabilities $P(I = 1)$ and $P(I > 1)$ can be expressed as

$$P(I = 1) = [\lambda(t) + \epsilon(t; \Delta t)]\, \Delta t, \qquad P(I > 1) = \eta(t; \Delta t)\, \Delta t \qquad (31)$$

where $\epsilon$ and $\eta$ tend to zero when $\Delta t \to 0$. This equation defines the density $\lambda(t)$ of the point process. As a result, we have

$$P(I = 0) = 1 - [\lambda(t) + \alpha(t; \Delta t)]\, \Delta t$$

where $\alpha = \epsilon + \eta$. It results from (31) that

$$P(I \ge 1) = [\lambda(t) + \alpha(t; \Delta t)]\, \Delta t.$$
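The regularity conditions (31) are easy to check numerically for a Poisson process, where $P(I = 1) = \mu e^{-\mu}$ and $P(I > 1) = 1 - e^{-\mu} - \mu e^{-\mu}$ with $\mu = m(t, t+\Delta t)$. The sketch below (illustrative intensity $\lambda(u) = 1 + u$ and values) shows $P(I=1)/\Delta t \to \lambda(t)$ while $P(I>1)/\Delta t \to 0$.

```python
from math import exp

def m(a, b): return (b - a) + (b**2 - a**2)/2  # integrated intensity

t = 0.5                     # so lambda(t) = 1 + t = 1.5
for dt in (1e-2, 1e-3, 1e-4):
    mu = m(t, t + dt)
    p1 = mu * exp(-mu)               # P(I = 1)
    pg1 = 1 - exp(-mu) - p1          # P(I > 1)
    print(dt, p1/dt, pg1/dt)         # ratios: first tends to 1.5, second to 0
```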
Let $A$ be the RV equal to $N[t+\Delta t, \tau)$. The probability $q_{i,\tau}(\Delta t)$ defined by (14) can be expressed as

$$q_{i,\tau}(\Delta t) = P(\{A = i\} \cap \{I \ge 1\}) \qquad (32)$$

and satisfies

$$P(\{A = i\} \cap \{I \ge 1\}) = P(A = i) - P(\{A = i\} \cap \{I = 0\}). \qquad (33)$$

Let $H$ be the RV equal to $N[t, \tau)$. The event $\{H = i\}$ can be decomposed into a sum of disjoint events by

$$\{H = i\} = [\{A = i\} \cap \{I = 0\}] \cup [\{A = i-1\} \cap \{I = 1\}] \cup S_i \qquad (34)$$

where

$$S_i = [\{A = i-2\} \cap \{I = 2\}] \cup [\{A = i-3\} \cap \{I = 3\}] \cup \cdots \cup [\{A = 0\} \cap \{I = i\}]. \qquad (35)$$

As a result, we have

$$P(H = i) = P(\{A = i\} \cap \{I = 0\}) + P(\{A = i-1\} \cap \{I = 1\}) + P(S_i). \qquad (36)$$

Furthermore, it is obvious that

$$P(\{A = i-1\} \cap \{I = 1\}) = P(\{A = i-1\} \cap \{I \ge 1\}) - P(\{A = i-1\} \cap \{I \ge 2\}). \qquad (37)$$

By using (33), (36), and (37), we obtain the recursion

$$P(\{A = i\} \cap \{I \ge 1\}) = P(\{A = i-1\} \cap \{I \ge 1\}) + [P(A = i) - P(H = i)] + r_i \qquad (38)$$

with $r_i = P(S_i) - P(\{A = i-1\} \cap \{I \ge 2\})$. Note also that it results from (33) and from the definitions of $A$ and $H$ that

$$P(\{A = 0\} \cap \{I \ge 1\}) = P(A = 0) - P(H = 0).$$

All that yields the recursion solution

$$P(\{A = i\} \cap \{I \ge 1\}) = \sum_{k=0}^{i} [P(A = k) - P(H = k)] + \sum_{k=1}^{i} r_k. \qquad (39)$$

Note that $P(A = k) = p_k(t+\Delta t; \tau)$ and $P(H = k) = p_k(t; \tau)$. Let $T_i(t;\tau)$ be the sum $\sum_{k=0}^{i} p_k(t;\tau)$, and let us assume, as indicated above, that $p_k(t;\tau)$ has a derivative with respect to $t$ for all $t$. It results from the definition of $T_i(t;\tau)$ that

$$T_i(t+\Delta t; \tau) - T_i(t; \tau) = \left[\frac{\partial}{\partial t} T_i(t; \tau) + \epsilon_i(t; \tau; \Delta t)\right] \Delta t \qquad (40)$$

where $\epsilon_i(t; \tau; \Delta t) \to 0$ when $\Delta t \to 0$. Furthermore, as $P(\{A = i-j\} \cap \{I = j\}) \le P(I = j)$, we have

$$P(S_k) \le P(I = 2) + P(I = 3) + \cdots + P(I = k) \le P(I > 1) = \eta(t; \Delta t)\, \Delta t. \qquad (41)$$

Similarly, $P(\{A = k-1\} \cap \{I \ge 2\}) \le P(I > 1) = \eta(t; \Delta t)\, \Delta t$. Combining all these results, the numerator of (16) can be written as

$$P(N[t, t+\Delta t) \ge 1) - \sum_{i=0}^{n-1} q_{i,\tau}(\Delta t) = \left[\lambda(t) - \sum_{i=0}^{n-1} \sum_{k=0}^{i} \frac{\partial}{\partial t} p_k(t; \tau)\right] \Delta t + o(\Delta t). \qquad (42)$$

By repeating the same reasoning for the denominator, we obtain (17). The last point consists in verifying that the limit $G_n(t;\tau)$ given by (17) is effectively a DF. It is obvious from its definition that $\hat{G}_n(t; t+\Delta t; \Delta t) = 0$ and that $\hat{G}_n(t; T; \Delta t) = 1$. Furthermore, the functions $\hat{G}_n(t; \tau; \Delta t)$ are nondecreasing functions of $\tau$. Their limit $G_n(t;\tau)$ inherits these properties and is therefore a DF.

APPENDIX II
CALCULATION OF (27)

Let us start from (18) with

$$p_k(t; \tau) = E\left[\exp(-m)\, \frac{m^k}{k!}\right], \qquad m = \int_t^{\tau} \lambda(\xi)\, d\xi.$$

Leaving aside for the moment the expectation and the derivatives with respect to $t$, we get

$$\sum_{k=0}^{i} p_k(t; \tau) = d_{i+1}(m).$$

Using the property indicated after (7), the derivative with respect to $\tau$ is $-\lambda(\tau) \exp(-m)\, m^i / i!$. The sum with respect to $i$ yields $-\lambda(\tau)\, d_n(m)$, and the derivative with respect to $t$ is

$$-\lambda(t)\, \lambda(\tau)\, \exp(-m)\, \frac{m^{n-1}}{(n-1)!}.$$

Taking the expectation value yields the numerator of (27). The same procedure is applied to calculate the term $c$ of (27).

ACKNOWLEDGMENT

The author would like to thank Dr. C. Bendjaballah for the stimulating discussions concerning this correspondence.

REFERENCES
[1] J. R. Pierce, E. C. Posner, and E. R. Rodemich, "The capacity of the photon counting channel," IEEE Trans. Inform. Theory, vol. IT-27, pp. 61–77, Jan. 1981.
[2] B. Saleh and M. Teich, Fundamentals of Photonics. New York: Wiley, 1991.
[3] E. Gelenbe and G. Pujolle, Introduction to Queuing Networks. New York: Wiley, 1987.
[4] O. Macchi, "The coincidence approach to stochastic point processes," Adv. Appl. Probab., vol. 7, pp. 83–122, 1975.
[5] J. A. McFadden, "The axis crossing intervals of random functions," IRE Trans. Inform. Theory, vol. IT-4, pp. 14–24, Mar. 1958.
[6] J. A. McFadden, "On the lengths of intervals in a stationary point process," J. Roy. Statist. Soc. B, vol. 24, pp. 364–382, 1962.
[7] D. R. Cox and V. Isham, Point Processes. London, U.K.: Chapman & Hall, 1980.
[8] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space. New York: Springer-Verlag, 1991.
[9] E. P. Kao, An Introduction to Stochastic Processes. Boston, MA: Duxbury, 1997.
[10] B. Picinbono, Random Signals and Systems. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[11] J. Klauder and E. Sudarshan, Fundamentals of Quantum Optics. New York: Benjamin, 1968.
[12] H. Cramér and M. R. Leadbetter, Stationary and Related Stochastic Processes. New York: Wiley, 1967.