Distribution of Human Response Times

Tao Ma and R. A. Serota∗
Department of Physics, University of Cincinnati, Cincinnati, OH 45221-0011

John G. Holden†
CAP Center for Cognition, Action, and Perception, Department of Psychology, University of Cincinnati, Cincinnati, OH 45221-0376

arXiv:1305.6320v1 [q-bio.NC] 27 May 2013
(Dated: May 29, 2013)

We demonstrate that distributions of human response times have power-law tails and, among closed-form distributions, are best fit by the generalized inverse gamma distribution. We speculate that task difficulty tracks the half-width of the distribution and show that it is related to the exponent of the power-law tail.
I. INTRODUCTION
Human response time (RT) is defined as the time delay between a signal and the start of the corresponding human action. For example, one can measure the time interval from a word appearing on a computer screen to the moment a participant pushes a keyboard button to indicate his or her response. Two well-established empirical facts about RT are the power-law tails of RT distributions [1] and the 1/f noise of RT time series [2-5], to which any theoretical description must conform.

The generalized inverse gamma (GIGa) distribution (Appendix A) belongs to a family of distributions (Appendix B) that includes inverse gamma (IGa), lognormal (LN), gamma (Ga), and generalized gamma (GGa). The remarkable property of GIGa is its power-law tail; for the general three-parameter case, the power-law exponent is −(1 + αγ), so that GIGa(x; α, β, γ) ∝ x^(−1−αγ) as x → ∞. GIGa emerges as a steady-state distribution in a number of systems, from a network model of economy [6], to ontogenetic mass growth [7], to stock volatility [8]. This common feature can be traced to a phenomenological birth-death model subject to stochastic perturbations (Appendix C).

Here we argue that, among closed-form distributions, GIGa best describes the RT distribution. GIGa has a natural scale parameter, which determines the onset of the power-law tail, and two shape parameters, which determine the exponent of the tail. As such, our argument is an extension of previous approaches, such as the "cocktail" model [1], which effectively contains shape and scale parameters as well. Furthermore, we speculate that the difficulty of a cognitive task tracks the half-width of the RT distribution and discuss it within the GIGa framework. Our numerical analysis is performed on the following data (explained in the text): ELP (English Lexicon Project), HE (Hick's Experiments), and LDT (Lexical Decision Time).

Two key features distinguish our approach. First,
∗ Electronic address: [email protected]
† Electronic address: [email protected]

in addition to the usual individual-participant fitting, we perform distribution fitting on combined participants' data. While consistent with individual fitting, this yields considerably less noisy data sets. Second, we develop a procedure for fitting the tails of the distribution directly (Appendix D), which unequivocally demonstrates the existence of power-law tails.

This paper is organized as follows. In Section II, we describe the experimental setup and data acquisition. In Section III, we conduct log-log tail fitting and RT distribution fitting with GIGa. In Section IV, we conclude with a discussion of task difficulty.
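For concreteness, the tail property can be checked numerically. The sketch below (ours, not part of the original analysis) evaluates the GIGa density at the ELP1 best-fit parameters quoted later in Fig. 4 and confirms that the local log-log slope of the PDF approaches −(1 + αγ):

```python
import math

def giga_pdf(x, a, b, g):
    """GIGa(x; alpha, beta, gamma) = g/(b*Gamma(a)) * (b/x)**(1+a*g) * exp(-(b/x)**g)."""
    return (g / (b * math.gamma(a))) * (b / x) ** (1 + a * g) * math.exp(-((b / x) ** g))

# ELP1 best-fit parameters quoted later in the text (Fig. 4)
a, b, g = 0.73, 396.0, 3.69

# Far into the tail, d ln PDF / d ln x approaches -(1 + alpha*gamma)
x, h = 1e6, 1e-4
slope = (math.log(giga_pdf(x * (1 + h), a, b, g))
         - math.log(giga_pdf(x, a, b, g))) / math.log(1 + h)
print(slope, -(1 + a * g))  # both ≈ -3.69
```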
II. DATA ACQUISITION

A. Data sources and description

ELP data are from the English Lexicon Project [9, 10]. HE and LDT data were collected under the supervision of J. G. Holden.
1. ELP

ELP (English Lexicon Project) studies pronunciation latencies to visually presented words; participants were sampled from six different universities [9, 10]. Data: two sessions, 470 participants each: session 1 (ELP1), 1500 trials; session 2 (ELP2), 1030 trials.
2. HE

HE (Hick's Choice RT Experiment): given a stimulus selected from a finite set of stimuli, participants respond with an action from a set of actions corresponding to this set of stimuli. The original experiment is described in [11]. Data: 11 participants completed 1440 trials each of 2, 4, 6, 8, and 10 options, approximately 16 000 combined data points for each condition.
3. LDT

LDT (Lexical Decision Time). Data: three groups of 60 participants completed 100 word and 100 nonword trials of one-, two-, and four-word LDT respectively; only the correct word trials are depicted, approximately 6000 data points for each group.
B. Data preprocessing
To enhance our efforts to understand the distribution’s tail behavior, we combined all participants’ data from each experiment into a single distribution.
III. DATA ANALYSIS
A. Tail fitting
FIG. 2: Histogram and log-log plot of Hick's experiment. Fitted tail slopes: k = −3.27, −4.14, −4.37, and −4.88 for 2, 4, 6, and 8 choices respectively.
FIG. 1: Histogram and log-log plot of ELP. Fitted tail slopes: k = −2.56 (ELP1) and −2.49 (ELP2).
Log-log fitting of power-law tails is discussed in Appendix D. In Figs. 1, 2, and 3, we show the results for the RT experiments. With the exception of LDT, trials for most of the tasks timed out at 4 or 5 seconds. This requirement has the potential to distort RT distributions, especially their slow tails, as the log-log plot bends downward when RT approaches 4 seconds. (In the future, the maximum-time requirement should be dropped or, at least, the time cutoff should be increased to reflect the natural RT distribution.) In contrast, the maximum RT for LDT is approximately 10 seconds and, as seen in Figs. 3 and 5, the log-log plots are closer to straight lines and the GIGa fit is good.
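The tail-fitting procedure of Appendix D can be illustrated on synthetic data. The sketch below is our illustration, not the paper's actual pipeline: the parameters are made up, and the CDF window 0.9-0.99 follows the figure captions of Appendix D. It samples GIGa-distributed "response times" and fits a straight line to the log-log tail:

```python
import numpy as np

rng = np.random.default_rng(0)

# If Y ~ Gamma(alpha), then X = beta * Y**(-1/gamma) is GIGa(alpha, beta, gamma) distributed.
a, b, g = 1.6, 430.0, 3.0          # illustrative parameters only
rt = b * rng.gamma(a, size=200_000) ** (-1.0 / g)

# Empirical tail: log10(1 - CDF) vs log10(RT), straight-line fit over CDF in [0.9, 0.99]
rt.sort()
cdf = np.arange(1, rt.size + 1) / (rt.size + 1)
mask = (cdf >= 0.9) & (cdf <= 0.99)
k, c = np.polyfit(np.log10(rt[mask]), np.log10(1.0 - cdf[mask]), 1)
print(k)  # fitted slope; it approaches -alpha*gamma = -4.8 only deep in the tail
```

As the paper notes, the slope fitted over this window need not coincide with the asymptotic exponent −αγ; that discrepancy is exactly what the local-slope analysis of Appendix D quantifies.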
B. GIGa distribution fitting

In Figs. 4, 5, and 6, we show GIGa (Appendix A) fits of the RT distributions. In the figures, the distance from the origin to the blue dot is the rightward shift of the GIGa distribution; the RTs to the left of the red lines are cut in the fitting of the GIGa distribution. The parameters α, β, γ, together with the cut and shift parameters, are all found by minimizing the chi-squared test statistic as follows: we choose the cut and shift parameters, find α, β, γ through maximum-likelihood estimation, and compute the chi-squared statistic; we then repeat this process for other choices of the cut and shift parameters and, in the end, keep the parameters that minimize the chi-squared statistic. Visually, the GIGa fits are good, yet the p-values are all zero with the exception of LDT. As discussed above, a possible explanation is that the participants were not given enough time to respond, which distorts the RT distributions. Also, Ref. [12] argues that the chi-squared statistic yields poor goodness-of-fit results; we used it because, due to the cut parameter, the total number of RTs is not fixed in our parameter fitting. Lastly, in Fig. 7 we show the relationship between the tail exponent parameter αγ and the log-log fitted exponent; with the exception of four-word LDT (one of the hardest tasks, see below), the correspondence is quite good.
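A minimal sketch of this cut-and-shift fitting loop is given below. It is our reconstruction under stated assumptions: the grid values, the optimizer starting point, and the bin count are illustrative choices, not the values used in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, gammaincc

def giga_nll(p, x):
    """Negative log-likelihood of GIGa(x; a, b, g)."""
    a, b, g = p
    if min(a, b, g) <= 0:
        return np.inf
    return -np.sum(np.log(g) - np.log(b) - gammaln(a)
                   + (1.0 + a * g) * np.log(b / x) - (b / x) ** g)

def giga_cdf(x, a, b, g):
    # CDF(x) = Q(a, (b/x)**g), the upper regularized gamma function
    return gammaincc(a, (b / x) ** g)

def fit_with_cut_and_shift(rt, cuts, shifts, nbins=20):
    """Grid over (cut, shift); MLE for (a, b, g); keep the chi-squared minimum."""
    best = (np.inf, None)
    for cut in cuts:
        for shift in shifts:
            x = rt[rt >= cut] - shift
            if x.size < 100 or x.min() <= 0:
                continue
            res = minimize(giga_nll, [1.5, np.median(x), 1.5], args=(x,),
                           method="Nelder-Mead")
            a, b, g = res.x
            edges = np.quantile(x, np.linspace(0.0, 1.0, nbins + 1))
            obs, _ = np.histogram(x, edges)
            p = np.clip(np.diff(giga_cdf(edges, a, b, g)), 1e-12, None)
            exp = x.size * p / p.sum()       # renormalize over the observed range
            chi2 = np.sum((obs - exp) ** 2 / exp)
            if chi2 < best[0]:
                best = (chi2, (cut, shift, a, b, g))
    return best

# Demo on synthetic GIGa data: if Y ~ Gamma(alpha), X = beta * Y**(-1/gamma) is GIGa.
rng = np.random.default_rng(1)
rt = 400.0 * rng.gamma(1.5, size=5000) ** (-1.0 / 2.0)
chi2, (cut, shift, a, b, g) = fit_with_cut_and_shift(rt, cuts=[0.0], shifts=[0.0])
print(chi2, a * g)   # a*g estimates the tail parameter (true value 3 here)
```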
IV. TASK DIFFICULTY

In Fig. 8, we plot the power-law exponent from the best-fit GIGa above as a function of the distributions' half-widths. With the exception of Hick 6, there is clear tracking between the two (notice that, by eye, the HE PDFs seemingly show a decrease of the modal PDF and an increase of the PDF half-width
FIG. 3: Histogram and log-log plot of one-, two-, and four-word LDT. Fitted tail slopes: k = −3.53, −3.62, and −3.45 respectively.
with the increase of Hick's number). We speculate that the half-width of the distribution is a natural measure of task difficulty. This is easily analyzed in terms of the GIGa distribution, which we believe is well suited to the description of RT distributions. In Appendix A, it is explained that, due to GIGa's scaling property, it is sufficient to consider the γ = 1 case, that is, IGa. Furthermore, we can eliminate one more parameter by setting the mean to unity. (In some cognitive tasks, the mean may not be a good indicator of difficulty, since an easy cognitive task may elicit a more idiosyncratic response and vice versa.) For such an IGa, a single parameter α then defines both scale and shape; that is, the half-width relates directly to the exponent of the power-law tail. As seen in Fig. 9, the half-width has a maximum as a function of this parameter, which also marks a crossover between the IGa limiting behaviors. This opens up an interesting possibility: depending on the magnitude of α, an increase in task difficulty may either increase or decrease the magnitude of the power-law exponent. This subject, including collection of sufficient data to analyze the aforementioned scaling property, requires further investigation.
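The half-width of the unit-mean IGa can be evaluated numerically. The sketch below (ours, not part of the original analysis) scans the scaled IGa PDF of Appendix A on a grid and measures the full width at half the modal height, showing the non-monotonic dependence on α:

```python
import numpy as np
from scipy.special import gammaln

def scaled_iga_pdf(x, a):
    """Unit-mean IGa: (a-1)**a / Gamma(a) * x**(-1-a) * exp(-(a-1)/x)."""
    return np.exp(a * np.log(a - 1.0) - gammaln(a) - (1.0 + a) * np.log(x) - (a - 1.0) / x)

def half_width(a):
    """Full width of the PDF at half its modal height, on a dense grid."""
    xs = np.linspace(1e-3, 5.0, 200_001)
    p = scaled_iga_pdf(xs, a)
    above = xs[p >= p.max() / 2.0]
    return above[-1] - above[0]

for a in (2.0, 3.0, 3.48, 5.0, 10.0, 50.0):
    print(a, round(half_width(a), 3))
# Non-monotonic in a: the width grows, peaks at intermediate a,
# then shrinks toward 0 as a -> infinity (the delta-function limit).
```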
V. ACKNOWLEDGMENTS
J.G. Holden’s work was supported by the National Science Foundation Award BCS-0642718. We repeat a number of Appendices from [8] verbatim, given that the expected audiences for these two papers are vastly different.
FIG. 4: GIGa fitting of ELP. ELP1: GIGa(0.73, 396, 3.69) with αγ = 2.7. ELP2: GIGa(1.04, 345, 2.33) with αγ = 2.4. The p-values are both 0.
Appendix A: Properties of GIGa distribution
We begin with the γ = 1 limit of GIGa, namely the IGa distribution PDF

P_IGa(x) = [1/(βΓ(α))] (β/x)^(1+α) exp(−β/x).  (A1)

Setting the mean to unity, the scaled distribution is

P_IGa^Scaled(x) = [(α − 1)^α / (Γ(α) x^(1+α))] exp(−(α − 1)/x).  (A2)

The mode of the above distribution is x_mode = (α − 1)/(α + 1). The modal PDF is

P_IGa^Scaled(x_mode) = (1 + α)^(1+α) exp(−1 − α) / [Γ(α)(α − 1)],  (A3)
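These closed-form expressions are easy to check; the following sanity check (ours, not part of the original) compares a grid evaluation of the scaled PDF in Eq. (A2) with the mode and modal-PDF formulas:

```python
import numpy as np
from scipy.special import gammaln

a = 4.0
xs = np.linspace(1e-3, 3.0, 300_001)
# Scaled (unit-mean) IGa PDF, Eq. (A2), evaluated in log space for stability
pdf = np.exp(a * np.log(a - 1.0) - gammaln(a) - (1.0 + a) * np.log(xs) - (a - 1.0) / xs)

# Mode: x_mode = (a-1)/(a+1)
x_mode = xs[np.argmax(pdf)]
print(x_mode, (a - 1.0) / (a + 1.0))   # both ≈ 0.6

# Modal PDF, Eq. (A3)
modal = np.exp((1.0 + a) * np.log(1.0 + a) - (1.0 + a) - gammaln(a) - np.log(a - 1.0))
print(pdf.max(), modal)                # agree to grid precision
```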
which has a minimum at α ≈ 3.48, as shown in Fig. 9. The change in PDF behavior on transition through this value is clearly observed in Fig. 10. Also plotted in Fig. 9 is the half-width of the distribution; clearly, it is highly correlated with the modal PDF above. Both the minimum and the maximum above clearly separate the regime of small α, α → 1, where the approximate form of the scaled PDF is

P_IGa^Scaled(x) ≈ [(α − 1)/x²] exp(−(α − 1)/x),  (A4)
FIG. 5: GIGa fitting of one, two, four word LDT. One word LDT: GIGa(0.754, 357, 3.96) with αγ = 3.0. Two word LDT: GIGa(1.96, 1424, 2.39) with αγ = 4.7. Four word LDT: GIGa(25.1, 7.37 × 106 , 0.376) with αγ = 9.4. The p-values are 0.97, 0.82, and 0.87 respectively.
whose mode is (α − 1)/2 and whose maximum value is 4 exp(−2)/(α − 1) ∝ 1/(α − 1), from the regime of large α, α → ∞, where

P_IGa^Scaled(x) → δ(x − 1).  (A5)
We now turn to GIGa distribution and the effect of parameter γ. In Fig. 11 we give the contour plots of modal PDF and total half-widths in the (η, γ) plane, where η = αγ and −1−η is the exponent of the power law tail. We observe an interesting scaling property of GIGa: for γ ≈ 2.1/η, the dependence of the PDF on η is very weak, as demonstrated in Fig. 12, where it is plotted for integer η from 2 to 7. An alternative way to illustrate this is to plot PDF for a fixed η and variable γ, as shown
FIG. 6: GIGa fitting of Hick's experiment. The parameters {α, β, γ} of GIGa are {0.731, 115, 3.41}, {1.57, 275, 2.48}, {1.64, 430, 3.07}, and {7.80, 2922, 1.10} for 2, 4, 6, and 8 choices respectively. αγ is 2.5, 3.9, 5.0, and 8.6 respectively. The p-values are all 0.
FIG. 7: Best-fit GIGa αγ versus log-log fitted tail exponent; triangles: ELP, squares: LDT, diamonds: HE.

FIG. 8: Best-fit GIGa absolute power-law tail exponent αγ + 1 versus half-width; triangles: ELP, squares: LDT, diamonds: HE.

FIG. 9: Modal PDF and half-width of the scaled IGa as functions of α.

FIG. 10: PDF of IGa distributions. From left to right, α = 1.5, 2, 3, 3.48, 4, 5, and 6, corresponding to red, magenta, orange, green, cyan, blue, and purple lines.
in Fig. 13. Following the thick line, we notice that, for η > 3, the mode and half-width change very little with η. The key implication of the scaling property is that IGa contains all the essential features pertinent to GIGa.

Appendix B: Parametrization of the GIGa family of distributions
This Appendix is a self-contained re-derivation of the LN limit of GIGa [13]. The three-parameter GIGa distribution is given by

GIGa(x; α, β, γ) = [γ/(βΓ(α))] (β/x)^(1+αγ) exp(−(β/x)^γ)  (B1)

for x > 0, and 0 otherwise. We require that α, β, γ > 0. IGa is the γ = 1 case of GIGa:

IGa(x; α, β) = [1/(βΓ(α))] (β/x)^(1+α) exp(−β/x).  (B2)

Note that GIGa and IGa have power-law tails x^(−1−αγ) and x^(−1−α) respectively for x ≫ β.
We proceed to rewrite GIGa in the following form:

GIGa(x; α, β, γ) = [γ/(xΓ(α))] exp[α ln (x/β)^(−γ) − (x/β)^(−γ)].  (B3)

A re-parameterization

µ = ln β − (1/γ) ln(1/λ²),  (B4)
λ = 1/√α,  (B5)
σ = 1/(γ√α),  (B6)

with σ > 0 and λ > 0, allows us to express the old parameters in terms of the new:

α = 1/λ²,  (B7)
β = e^µ λ^(−2σ/λ),  (B8)
γ = λ/σ,  (B9)

leading, in turn, to

(x/β)^(−γ) = e^(−(λ/σ)(ln x − µ)) λ^(−2),  (B10)
ln (x/β)^(−γ) = −(λ/σ)(ln x − µ) + ln(λ^(−2)).  (B11)
FIG. 11: Top: contours of the modal PDF of GIGa distributions with mean 1; thin lines: contours of the modal PDF; thick line: γ = 2.1/η. Bottom: contours of the total half-widths of GIGa distributions with mean 1; thick line: γ = 2.5/η.

FIG. 12: Scaled PDF of GIGa distributions with mean 1. In the plots, γ = 2.1/η; the six lines correspond to η = 2, 3, ..., 7.

FIG. 13: Scaled PDF of GIGa distributions with mean 1. In each subplot with constant η, from left to right, γ = 0.5/η, 1/η, 1.5/η, 2/η, 2.5/η, 3/η, and 3.5/η, corresponding to red, magenta, orange, green, cyan, blue, and purple lines.
Using the Taylor expansion of the exponential term in Eq. (B10), which depends on λ/σ = γ → 0⁺, we obtain

α ln (x/β)^(−γ) − (x/β)^(−γ) ≈ [ln(λ^(−2)) − 1]/λ² − (ln x − µ)²/(2σ²).  (B12)

We can also prove that

[γ/Γ(α)] exp{[ln(λ^(−2)) − 1]/λ²} = 1/(√(2π) σ),  (B13)

based on Stirling's approximation, when we let λ^(−2) = α → +∞. Upon substitution of Eqs. (B12) and (B13) into Eq. (B1), we obtain the LN distribution

LN(x; µ, σ) = [1/(√(2π) σx)] exp[−(ln x − µ)²/(2σ²)].  (B14)

In conclusion, GIGa has the LN limit when λ tends to 0 in such a way that α tends to +∞ quadratically and γ tends to 0 linearly. GIGa (IGa) is also transparently related to the GGa (Ga) distribution: under γ ↔ −γ, GGa(x; α, β, −γ) = −GIGa(x; α, β, γ), and GGa(x; α, β, γ) ↔ GIGa(1/x; α, 1/β, γ). Note, finally, that Lawless [14] derived the LN limit of GGa in a manner similar to ours, which solidifies the concept of the "family" that unites these distributions.
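The LN limit can be verified numerically. In the sketch below (ours, not part of the original; the values of µ, σ, and the λ sequence are arbitrary choices), the GIGa log-density with parameters taken from Eqs. (B7)-(B9) converges to the LN log-density as λ → 0:

```python
import numpy as np
from scipy.special import gammaln

def giga_logpdf(x, a, b, g):
    """log of GIGa(x; a, b, g), Eq. (B1)."""
    return np.log(g) - np.log(b) - gammaln(a) + (1.0 + a * g) * np.log(b / x) - (b / x) ** g

def ln_logpdf(x, mu, sigma):
    """log of LN(x; mu, sigma), Eq. (B14)."""
    return -np.log(np.sqrt(2.0 * np.pi) * sigma * x) - (np.log(x) - mu) ** 2 / (2.0 * sigma ** 2)

mu, sigma = 0.3, 0.4
x = np.exp(mu)                                   # compare at the center of the LN
for lam in (0.5, 0.1, 0.02):
    a = lam ** -2                                # Eq. (B7)
    g = lam / sigma                              # Eq. (B9)
    b = np.exp(mu) * lam ** (-2.0 * sigma / lam) # Eq. (B8)
    print(lam, giga_logpdf(x, a, b, g), ln_logpdf(x, mu, sigma))
# The GIGa column converges to the LN value as lam -> 0.
```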
Appendix C: Stochastic "birth-death" model

Many natural and social phenomena fall into a stochastic "birth-death" model, described by the equation

dx = c₁x^(1−γ) dt − c₂x dt + σx dW,  (C1)

where x can alternatively stand for such additive quantities as wealth [6], the mass of a species [7], volatility variance [8], etc., and cognitive response times here. The second term on the rhs describes an exponentially fast decay, such as the loss of wealth and mass due to the use of one's own resources, the reduction of volatility in the absence of competing inputs, or the reduction of response times due to learning. The first rhs term may alternatively describe metabolic consumption, acquisition of wealth in economic exchange, the plethora of market signals, or the variability of cognitive inputs. The third, stochastic term is the one that changes the otherwise deterministic approach, characterized by saturation of the quantity to a final value, into a probabilistic distribution of values, namely GIGa in the steady-state limit. Furthermore, just as the wealth model has microscopic underpinnings in a network model of economic exchange [6], it is likely that stochastic ontogenetic mass growth [7] could be described by an analogous network model based on capillary exchange. A network analogy may be possible for cognitive response times and volatility as well.

Appendix D: Log-log plot of distribution tails

The exponent of a power-law tail can be easily calculated once we notice that

1 − CDF(x) = ∫_x^(+∞) PDF(x′) dx′.  (D1)

If PDF(x) ∝ Cx^(−1−ρ) for x ≫ 1, then

log(1 − CDF(x)) ∝ const − ρ log x.  (D2)

In Figs. 14 and 15, we show the log-log plots of the tails of the LN and IGa distributions respectively. Clearly, a straight-line fit is considerably better for the latter, even though the fitted slope does not coincide with the theoretical value. To this end, in Fig. 16, we show log-log plots of the tails of GIGa distributions for γ = 0.5 and γ = 2. The empirical trend emerging from the IGa and GIGa plots is that the straight-line fits of log-log plots become progressively better as γ gets larger. To understand this γ-dependence of the difference between the theoretical and fitted slopes, we consider the local slope of the log-log plot,

d log(1 − CDF(x)) / d log x.  (D3)

For GIGa (and IGa, γ = 1), the local slope is given by

−γ (β/x)^(αγ) e^(−(β/x)^γ) / {Γ(α)[1 − Q(α, (β/x)^γ)]},  (D4)

with the regularized gamma function Q(s, x) = Γ(s, x)/Γ(s), where Γ(s, x) ≡ ∫_x^∞ t^(s−1) e^(−t) dt is the incomplete gamma function. The local slopes are shown as functions of x in Figs. 17 and 18 respectively. It is clear that the local slope can differ substantially from its limiting (saturation) value; as γ becomes larger, the local slope tends closer to its limiting value.

FIG. 14: Top: PDF of LN(x; µ, σ) with mean 1. The left red, middle green, and right blue curves correspond to σ = 1, 0.5, and 0.2 respectively. Bottom: log-log plots of simulated data sampled from the LN distributions. Below −1 on the y-axis, the left blue, middle green, and right red curves correspond to σ = 0.2, 0.5, and 1 respectively. The dashed lines are fits of log₁₀(1 − CDF(x)) vs. log₁₀ x in a range of CDF from 0.9 to 0.99.

FIG. 15: Top: PDF of IGa(x; α, β) with mean 1. The left red, middle green, and right blue curves correspond to α = 3, 4, and 5 respectively. Bottom: log-log plots of simulated data sampled from the IGa distributions. Below −1 on the y-axis, the left blue, middle green, and right red curves correspond to α = 5, 4, and 3 respectively. The dashed lines with slopes −3.5, −3.0, and −2.5 respectively are fits of log₁₀(1 − CDF(x)) vs. log₁₀ x in a range of CDF from 0.9 to 0.99.
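Equation (D4) is straightforward to evaluate with the regularized gamma functions. The sketch below (ours, not part of the original) uses the fact that 1 − CDF(x) = 1 − Q(α, (β/x)^γ) is the lower regularized gamma function, and shows the local slope approaching its limit −αγ from above:

```python
import numpy as np
from scipy.special import gammainc, gammaln

def giga_local_slope(x, a, b, g):
    """Local slope d log(1-CDF)/d log x of GIGa, Eq. (D4).

    Uses 1 - CDF(x) = P(a, (b/x)**g), the lower regularized gamma
    function (scipy.special.gammainc)."""
    z = (b / x) ** g
    # log of the numerator g * (b/x)**(a*g) * exp(-z) / Gamma(a)
    log_num = np.log(g) - gammaln(a) + a * np.log(z) - z
    return -np.exp(log_num) / gammainc(a, z)

# IGa example (g = 1): the slope approaches -a*g = -3 from above as x grows
a, b, g = 3.0, 1.0, 1.0
for x in (2.0, 5.0, 20.0, 1000.0):
    print(x, giga_local_slope(x, a, b, g))
```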
For the LN distribution, the local slope is given by

−√(2/π) e^(−(ln x − µ)²/(2σ²)) / {σ[1 + erf(−(ln x − µ)/(√2 σ))]},  (D5)

which slowly decreases with x. As is clear from (D5) and Fig. 19, the local slope does not saturate as x → ∞.

FIG. 16: Log-log plots of simulated data sampled from the GIGa distributions GIGa(x; α, β, 0.5) (top) and GIGa(x; α, β, 2) (bottom) with mean 1. Below −1 on the y-axis, the left blue, middle green, and right red curves correspond to α = 2.5, 2, and 1.5 respectively. The dashed lines with slopes −2.8, −2.4, and −2.0 (top) and −4.3, −3.6, and −2.8 (bottom) are fits of log₁₀(1 − CDF(x)) vs. log₁₀ x in a range of CDF from 0.9 to 0.99.

[1] J. G. Holden and S. Rajaraman, Frontiers in Psychology 3, 209 (2012).
[2] G. C. Van Orden, J. G. Holden, and M. T. Turvey, Journal of Experimental Psychology: General 132, 331 (2003).
[3] G. C. Van Orden, J. G. Holden, and M. T. Turvey, Journal of Experimental Psychology: General 134, 117 (2005).
[4] C. T. Kello, G. G. Anderson, J. G. Holden, and G. C. Van Orden, Cognitive Science 32, 1217 (2010).
[5] E.-J. Wagenmakers, S. Farrell, and R. Ratcliff, Psychonomic Bulletin and Review 11, 579 (2004).
[6] T. Ma, J. G. Holden, and R. A. Serota, Physica A: Statistical Mechanics and its Applications 392, 2434 (2013).
[7] D. West and B. West, Int. J. Mod. Phys. B 26, 1230010 (2012).
[8] T. Ma and R. Serota, arXiv:1305.4173 (2013).
[9] D. A. Balota, M. J. Yap, K. A. Hutchison, M. J. Cortese, B. Kessler, B. Loftis, J. H. Neely, D. L. Nelson, G. B. Simpson, and R. Treiman, Behavior Research Methods 39, 445 (2007).
[10] http://elexicon.wustl.edu/ (2009).
[11] W. E. Hick, Quarterly Journal of Experimental Psychology 4, 11 (1952).
[12] T. Van Zandt, Psychonomic Bulletin & Review 7, 424 (2000).
[13] http://www.weibull.com/hotwire/issue15/hottopics15.htm (2002).
[14] J. F. Lawless, Statistical Models and Methods for Lifetime Data (John Wiley & Sons, New York, 1982).
FIG. 17: Local slope of the log-log plot of the IGa distribution IGa(x; α, β) with mean 1 (β = Γ(α)/Γ(α − 1)). The left column shows the log-log plot and the right column the local slope from Eq. (D4). α is 2, 3, 5, and 7 for the first through fourth rows respectively. The red lines are −α, the limit of the local slope as x → ∞.
FIG. 18: Local slope of the log-log plot of GIGa(x; α, β, γ) with mean 1 (β = Γ(α)/Γ(α − 1/γ)). The left column shows the log-log plot and the right column the local slope from Eq. (D4). {α, γ} is {6, 0.5}, {10, 0.5}, {1.5, 2}, and {2.5, 2} for the first through fourth rows respectively. The red lines are −αγ, the limit of the local slope as x → ∞.
FIG. 19: Local slope of the log-log plot of the lognormal distribution. The mean is set to 1 through µ = −σ²/2. The left column shows the log-log plot and the right column the local slope from Eq. (D5). σ is 0.2, 0.5, 1, and 2 for the first through fourth rows respectively. The jagged part of the top right plot is due to limited computational precision.