Generalized moment-independent importance measures based on Minkowski distance

Qingqing Zhai 1, Jun Yang 1,2,*, Min Xie 2 and Yu Zhao 1

1 School of Reliability and Systems Engineering, Beihang University, Beijing, China
[email protected], [email protected], [email protected]
2 Department of Systems Engineering and Engineering Management, City University of Hong Kong, Hong Kong, China
[email protected]

Abstract: Importance measures have been widely studied and applied in reliability and safety engineering. This paper presents a general formulation of moment-independent importance measures, in which several commonly discussed importance measures are unified on the basis of Minkowski distance (MD). Moment-independent importance measures can be categorized into three classes of MD importance measures: the probability density function based MD importance measure, the cumulative distribution function based MD importance measure and the quantile based MD importance measure. Some properties of the proposed MD importance measures are investigated. Several new importance measures are also derived as special cases of the generalized MD importance measures and illustrated with case studies.

* Corresponding author. Tel.: +86 10 82316003; fax: +86 10 82316003. E-mail address: [email protected] (Jun Yang).

Keywords: Reliability; Moment-independent importance measures; Minkowski distance; Probability density function; Cumulative distribution function; Quantile
1. Introduction

Importance measures are very useful for ranking the components of complex systems, especially when further improvement decisions have to be made. They are closely related to sensitivity analysis (Castillo, Mínguez, & Castillo, 2008; Helton, Johnson, Sallaberry, & Storlie, 2006; Kleijnen, 2005; Kleijnen & Helton, 1999; A. Saltelli, Chan, & Scott, 2000; Min Xie, 1987; M Xie & Bergman, 1992). With information on the relative importance of the parameters, one can quickly identify improvements to the system performance instead of directly searching for a global optimal solution (E Borgonovo, 2010; Kuo & Zhu, 2012a). Importance measures in reliability analysis can be classified into structure importance measures, reliability importance measures and lifetime importance measures (Kuo & Zhu, 2012b). A variety of importance measures have been proposed for different systems and different purposes, such as the traditional Birnbaum measure (Birnbaum, 1968) and Barlow-Proschan measure (Barlow & Proschan, 1975; Eryilmaz, 2013), the joint importance of components (Gao, Cui, & Li, 2007; Hong, Koo, & Lie, 2002), importance measures for multi-state systems (Levitin, Podofillini, & Zio, 2003; Si, Cai, Sun, & Zhang, 2010; Si, Dui, Zhao, Zhang, & Sun, 2012), as well as the more recent importance measures for Markovian systems (Do Van, Barros, & Bérenguer, 2008, 2009, 2010) and for semi-Markov systems (Distefano, Longo, & Trivedi, 2012; Hellmich & Berg, 2013). On the other hand, importance measures can be considered as "local" or "global" from the viewpoint of sensitivity analysis. Global sensitivity analysis techniques, including non-parametric techniques (Curtis B Storlie & Helton, 2008; Curtis B Storlie, Reich, Helton, Swiler, & Sallaberry, 2013; C.B. Storlie, Swiler, Helton, & Sallaberry, 2009), variance-based techniques (Iman, 1987; Li, Lu, & Zhou, 2011; I.M. Sobol, 2001; I. M. Sobol, 2003; Zhou, Lu, Li, Feng, & Wang, 2013) and moment-independent techniques (E. Borgonovo, 2007), provide an attractive perspective for identifying the influence of the inputs on the output.

The moment-independent importance measures, which consider the impact of a given parameter on the entire distribution of the output and have the merits of being model-free and moment-independent, have been proposed recently and have received considerable attention. Borgonovo (E. Borgonovo, 2007) introduced a probability density function (PDF)-based importance measure, whereas Liu and Homma (Liu & Homma, 2010) proposed a similar importance measure in terms of the cumulative distribution function (CDF). In order to carry out sensitivity analysis of the parameters inside the distribution function, Cui et al. (Cui, Lu, & Wang, 2012) suggested a CDF-based importance measure utilizing the $L^2$ distance between the original CDF and the conditional CDF instead of the $L^1$ distance. In fact, all these importance measures weigh the importance of a parameter in terms of the distance between the conditional distribution function and the original distribution function of the output, where the distance can be the $L^1$ distance or the $L^2$ distance. It is therefore natural to extend these measures to the more general $L^p$ distance, i.e. the Minkowski distance. On the other hand, since the distribution of the output can be characterized by its PDF, CDF or quantile function, it is reasonable to define importance measures on any one of these functions. With these considerations, three classes of generalized Minkowski distance based (MD) importance measures, i.e. the PDF-based MD importance measure, the CDF-based MD importance measure and the quantile-based MD importance measure, are proposed. The generalized MD importance measures have considerable flexibility, and the existing moment-independent importance measures can be seen as their special cases. The properties of the generalized MD importance measures are studied, and some new and promising moment-independent importance measures are derived from them.

The remainder of this paper is organized as follows. In Section 2, three unified MD importance measures are proposed. In Section 3, properties of the proposed MD importance measures are investigated. In Section 4, some new importance measures derived from the generalized MD importance measures are discussed and illustrated with case studies. Conclusions are given in the end.
2. Importance measures based on Minkowski distance

2.1 Minkowski distance (MD)

The Minkowski distance is derived from the well-known Minkowski inequality. In general, the Minkowski distance of order $p$ is defined as

$$d(g_1, g_2) = \left( \int_{\Omega} |g_1(x) - g_2(x)|^p \, dx \right)^{1/p}, \quad p \ge 1, \tag{1}$$

where $g_1(x)$ and $g_2(x)$ are functions of $x$ and $\Omega$ is the range of integration. For instance, if $\Omega$ is an index set, $\Omega = \{1, 2, \cdots, n\}$, and $g_1(x)$ and $g_2(x)$ are real, then Eq. (1) can be rewritten as

$$d(g_1, g_2) = \left( \sum_{i=1}^{n} |g_{1i} - g_{2i}|^p \right)^{1/p}, \tag{2}$$

which is the distance between the two points $(g_{11}, g_{12}, \cdots, g_{1n})$ and $(g_{21}, g_{22}, \cdots, g_{2n})$ in $R^n$. Alternatively, if $\Omega = [a, b]$ and $g_1(x)$ and $g_2(x)$ are $L^p$ integrable, i.e. $\int_a^b |g_1(x)|^p \, dx < \infty$ and $\int_a^b |g_2(x)|^p \, dx < \infty$, then

$$d(g_1, g_2) = \left( \int_a^b |g_1(x) - g_2(x)|^p \, dx \right)^{1/p} \tag{3}$$

is the $L^p$ distance between the functions $g_1(x)$ and $g_2(x)$. The Minkowski distance reduces to the rectilinear distance and the Euclidean distance when the order $p$ is set to 1 and 2, respectively. As $p$ approaches positive infinity, the Chebyshev distance is obtained:

$$\lim_{p \to \infty} \left( \int_{\Omega} |g_1(x) - g_2(x)|^p \, dx \right)^{1/p} = \sup_{x \in \Omega} |g_1(x) - g_2(x)|. \tag{4}$$
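To make Eqs. (1)-(4) concrete, the following Python sketch computes the Minkowski distance of several orders between two functions on a discretized domain; the two trigonometric functions and the grid are arbitrary illustrative choices, not taken from the paper. As the order grows, the result approaches the Chebyshev (sup) distance of Eq. (4).

```python
import numpy as np

def minkowski_distance(g1, g2, x, p):
    """Minkowski distance of order p between functions g1 and g2 on the grid x, Eq. (1)."""
    diff = np.abs(g1(x) - g2(x))
    if np.isinf(p):
        return diff.max()                        # Chebyshev distance, Eq. (4)
    return np.trapz(diff**p, x) ** (1.0 / p)     # numerical integration of |g1 - g2|^p

x = np.linspace(0.0, 2.0 * np.pi, 2001)
g1 = np.sin
g2 = lambda t: 0.5 * np.sin(t)

for p in (1, 2, 20, np.inf):
    # the order-20 result is already close to the sup distance of 0.5
    print(p, minkowski_distance(g1, g2, x, p))
```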
In this paper, we focus on the model $Y = h(\boldsymbol{X})$, where $\boldsymbol{X} = [X_1, X_2, \cdots, X_n]$ is the vector of random model inputs, described by the continuous PDF $f_{\boldsymbol{X}}(\cdot)$, and $h(\cdot)$ is a deterministic continuous scalar function. The uncertainty in $\boldsymbol{X}$ propagates through the function $h(\cdot)$, so that the model output $Y$ is also a random variable. As mentioned, the moment-independent importance measures concentrate on the discrepancy between the original distribution function and the conditional distribution function of the output, where the original distribution function is simply the distribution function of $Y$ and the conditional distribution function is the distribution function of $Y$ conditioned on $X_i$ being fixed at a certain value $x_{i0}$, i.e. $Y | X_i = x_{i0}$. The Minkowski distance can be used to measure the difference between the original distribution and the conditional distribution, which generalizes the existing moment-independent importance measures in a unified form. Since the distribution of the output can be characterized by its PDF, CDF or quantile function, three classes of generalized MD importance measures are proposed, based on these different distribution functions.
2.2 PDF-based MD importance measure

The PDF of a random variable characterizes the relative likelihood of this random variable taking on a certain value. For a random input $X_i$, if its uncertainty is removed, say $X_i = x_{i0}$, then the shape of the conditional PDF $f_{Y|X_i=x_{i0}}(y)$ will differ from that of the original PDF $f_Y(y)$. For any value of $y$, the likelihood that $Y$ takes $y$ is different, and the Minkowski distance $\left( \int |f_Y(y) - f_{Y|X_i=x_{i0}}(y)|^p \, dy \right)^{1/p}$ over the output space measures the extent of this difference. Since we do not know the true value of $X_i$, it is natural to use the expected distance to represent the shape change of the PDF due to $X_i$. The PDF-based MD importance measure is therefore defined as follows.

Definition 1. Suppose $\left( \int |f_Y(y) - f_{Y|X_i=x}(y)|^p \, dy \right)^{1/p} < \infty$ holds for $p \ge 1$ and $x \in \Omega_i$; then the PDF-based MD importance measure of parameter $X_i$ of order $p$ is defined as

$$I_i^{pdf}(p) = E_{X_i} \left( \int |f_Y(y) - f_{Y|X_i}(y)|^p \, dy \right)^{1/p} = \int_{\Omega_i} \left( \int |f_Y(y) - f_{Y|X_i=x}(y)|^p \, dy \right)^{1/p} f_{X_i}(x) \, dx, \tag{5}$$

where $\Omega_i$ is the domain of the random variable $X_i$, $f_{X_i}(x)$ is the marginal PDF of $X_i$ and $f_{Y|X_i=x}(y)$ is the conditional PDF of $Y$ given $X_i = x$.

Specifically, $I_i^{pdf}(\infty)$ is defined as

$$I_i^{pdf}(\infty) = E_{X_i} \left( \sup_{y} |f_Y(y) - f_{Y|X_i}(y)| \right) = \int_{\Omega_i} \left( \sup_{y} |f_Y(y) - f_{Y|X_i=x}(y)| \right) f_{X_i}(x) \, dx. \tag{6}$$

Remark 1. Since the Minkowski distance $\left( \int |f_Y(y) - f_{Y|X_i}(y)|^p \, dy \right)^{1/p}$ is actually a random variable, one can also use one of its quantiles, e.g. the lower quartile or the upper quartile, to represent the importance of $X_i$.

Remark 2. The variance of $\left( \int |f_Y(y) - f_{Y|X_i}(y)|^p \, dy \right)^{1/p}$ reflects the variation of the Minkowski distance between $f_Y(y)$ and $f_{Y|X_i}(y)$ due to the uncertainty in $X_i$, which gives an intuition of the range of the difference between $f_Y(y)$ and $f_{Y|X_i}(y)$. For a small variance, one only needs to consider whether it is necessary to remove the uncertainty in $X_i$, while a large variance requires the analyst to consider not only the necessity of removing the uncertainty in $X_i$, but also the extent to which the uncertainty in $X_i$ should be reduced (since the distance between $f_Y(y)$ and $f_{Y|X_i=x}(y)$ varies considerably for different $x$). Therefore, it can also serve as an importance indicator.

Borgonovo (E. Borgonovo, 2007) proposed a moment-independent importance measure $\delta_i$ as follows:

$$\delta_i = \frac{1}{2} E_{X_i} \left( \int_{-\infty}^{\infty} |f_Y(y) - f_{Y|X_i}(y)| \, dy \right). \tag{7}$$

Clearly, this is $I_i^{pdf}(1)$ except for a normalization constant.
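For illustration, the following Python sketch estimates the inner Minkowski distance appearing in Definition 1, i.e. the distance between the unconditional PDF $f_Y$ and one conditional PDF $f_{Y|X_i=x}$, from samples via kernel density estimation. The toy model, sample sizes and grid are assumptions made only for this sketch; averaging this quantity over sampled values of $x$ (weighted by $f_{X_i}$) would give a Monte Carlo estimate of $I_i^{pdf}(p)$ in Eq. (5).

```python
import numpy as np
from scipy.stats import gaussian_kde

def pdf_md_distance(y_all, y_cond, p=2.0, grid_size=512):
    """Minkowski distance of order p between the KDE of the unconditional
    output sample y_all and the KDE of the conditional sample y_cond."""
    lo = min(y_all.min(), y_cond.min())
    hi = max(y_all.max(), y_cond.max())
    grid = np.linspace(lo, hi, grid_size)
    f_y = gaussian_kde(y_all)(grid)          # estimate of f_Y(y)
    f_y_cond = gaussian_kde(y_cond)(grid)    # estimate of f_{Y|Xi=x}(y)
    diff = np.abs(f_y - f_y_cond)
    if np.isinf(p):                          # sup case, Eq. (6)
        return diff.max()
    return np.trapz(diff**p, grid) ** (1.0 / p)   # inner distance of Eq. (5)

# Toy usage: Y = X1 + X2 with X1, X2 ~ N(0, 1); condition on X1 = 1.
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=10_000), rng.normal(size=10_000)
y_all = x1 + x2
y_cond = 1.0 + rng.normal(size=10_000)       # sample of Y | X1 = 1
print(pdf_md_distance(y_all, y_cond, p=1), pdf_md_distance(y_all, y_cond, p=np.inf))
```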
2.3 CDF-based MD importance measure

With the notation unchanged, if we substitute CDFs for PDFs in Definition 1, the CDF-based MD importance measure is obtained.

Definition 2. Suppose $\left( \int |F_Y(y) - F_{Y|X_i=x}(y)|^p \, dy \right)^{1/p} < \infty$ holds for $p \ge 1$ and $x \in \Omega_i$; then the CDF-based MD importance measure of parameter $X_i$ of order $p$ is defined as

$$I_i^{cdf}(p) = E_{X_i} \left( \int |F_Y(y) - F_{Y|X_i}(y)|^p \, dy \right)^{1/p} = \int_{\Omega_i} \left( \int |F_Y(y) - F_{Y|X_i=x}(y)|^p \, dy \right)^{1/p} f_{X_i}(x) \, dx, \tag{8}$$

where $F_Y(y)$ is the CDF of $Y$ and $F_{Y|X_i=x}(y)$ is the conditional CDF of $Y$ given $X_i = x$.

Specifically, $I_i^{cdf}(\infty)$ is defined as

$$I_i^{cdf}(\infty) = E_{X_i} \left( \sup_{y} |F_Y(y) - F_{Y|X_i}(y)| \right) = \int_{\Omega_i} \left( \sup_{y} |F_Y(y) - F_{Y|X_i=x}(y)| \right) f_{X_i}(x) \, dx. \tag{9}$$

Apparently, the importance measure introduced in (Liu & Homma, 2010),

$$S_i^{(CDF)} = \frac{E_{X_i} \left( \int |F_Y(y) - F_{Y|X_i}(y)| \, dy \right)}{|E(Y)|}, \tag{10}$$

is $I_i^{cdf}(1)$ except for a normalization constant, whereas the importance measure introduced in (Cui, et al., 2012),

$$\varepsilon_i = E_{X_i} \left( \int |F_Y(y) - F_{Y|X_i}(y)|^2 \, dy \right), \tag{11}$$

can be seen as a variant of $I_i^{cdf}(2)$.
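A corresponding sketch for the inner distance of Eqs. (8)-(9) can work directly with empirical CDFs, which avoids density estimation altogether; the grid size below is an illustrative assumption.

```python
import numpy as np

def ecdf_on_grid(sample, grid):
    """Empirical CDF of `sample` evaluated on `grid`."""
    sorted_s = np.sort(sample)
    return np.searchsorted(sorted_s, grid, side="right") / sorted_s.size

def cdf_md_distance(y_all, y_cond, p=2.0, grid_size=512):
    """Inner Minkowski distance of Eqs. (8)-(9) between empirical CDFs."""
    lo = min(y_all.min(), y_cond.min())
    hi = max(y_all.max(), y_cond.max())
    grid = np.linspace(lo, hi, grid_size)
    diff = np.abs(ecdf_on_grid(y_all, grid) - ecdf_on_grid(y_cond, grid))
    if np.isinf(p):
        return diff.max()                      # Kolmogorov-type sup distance, Eq. (9)
    return np.trapz(diff**p, grid) ** (1.0 / p)
```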
2.4 Quantile-based MD importance measure

The distance $\left( \int |F_Y(y) - F_{Y|X_i=x}(y)|^p \, dy \right)^{1/p}$ measures the vertical shape shift of the conditional CDF from the original CDF. If we want to measure the shape shift horizontally, the quantile function (or inverse distribution function) can be applied. Let $G_Y(\mu) = \inf\{y: F_Y(y) > \mu\}$ and $G_{Y|X_i=x}(\mu) = \inf\{y: F_{Y|X_i=x}(y) > \mu\}$ be the quantile functions of $Y$ and $Y|X_i = x$, respectively; then the quantile-based MD importance measure can be defined as follows.

Definition 3. Suppose $\left( \int_0^1 |G_Y(\mu) - G_{Y|X_i=x}(\mu)|^p \, d\mu \right)^{1/p} < \infty$ holds for $p \ge 1$ and $x \in \Omega_i$; then the quantile-based MD importance measure of parameter $X_i$ of order $p$ is defined as

$$I_i^{quantile}(p) = E_{X_i} \left( \int_0^1 |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \, d\mu \right)^{1/p} = \int_{\Omega_i} \left( \int_0^1 |G_Y(\mu) - G_{Y|X_i=x}(\mu)|^p \, d\mu \right)^{1/p} f_{X_i}(x) \, dx. \tag{12}$$

Specifically, $I_i^{quantile}(\infty)$ is defined as

$$I_i^{quantile}(\infty) = E_{X_i} \left( \sup_{\mu \in [0,1]} |G_Y(\mu) - G_{Y|X_i}(\mu)| \right) = \int_{\Omega_i} \left( \sup_{\mu \in [0,1]} |G_Y(\mu) - G_{Y|X_i=x}(\mu)| \right) f_{X_i}(x) \, dx. \tag{13}$$

Remark 3. $I_i^{quantile}(1)$ is identical to $I_i^{cdf}(1)$. In fact, $\int |F_Y(y) - F_{Y|X_i=x}(y)| \, dy$ measures the area bounded by $F_Y(y)$ and $F_{Y|X_i=x}(y)$ vertically, whereas $\int_0^1 |G_Y(\mu) - G_{Y|X_i=x}(\mu)| \, d\mu$ measures it horizontally, as illustrated in Fig. 1.

Fig. 1 The distance between the original distribution function and the conditional distribution function, measured vertically or horizontally.
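Remark 3 can also be checked numerically. In the following sketch, two normal distributions (chosen purely for illustration, not taken from the paper) play the roles of the original and conditional distributions; the vertical area between their CDFs and the horizontal area between their quantile functions coincide up to discretization error.

```python
import numpy as np
from scipy.stats import norm

f1, f2 = norm(loc=0.0, scale=1.0), norm(loc=0.8, scale=1.5)

# Vertical area between the CDFs, integrated over y.
y = np.linspace(-12.0, 12.0, 20_001)
vertical = np.trapz(np.abs(f1.cdf(y) - f2.cdf(y)), y)

# Horizontal area between the quantile functions, integrated over mu in (0, 1).
mu = np.linspace(1e-6, 1.0 - 1e-6, 20_001)
horizontal = np.trapz(np.abs(f1.ppf(mu) - f2.ppf(mu)), mu)

print(vertical, horizontal)   # the two areas agree up to discretization error
```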
3. Some properties of the proposed MD importance measures

In this section, we focus on the properties of the proposed MD importance measures and present some analytical results. These results are useful for the application of the proposed MD importance measures.

Proposition 1. $I_i^{quantile}(p)$ is monotonically increasing in $p \ge 1$, i.e. $I_i^{quantile}(p_1) \le I_i^{quantile}(p_2)$ holds for any $1 \le p_1 < p_2$.

Proof. Let $h(p) = \left( \int_0^1 |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \, d\mu \right)^{1/p}$. The derivative with respect to $p$ is

$$\frac{\partial h(p)}{\partial p} = \frac{1}{p^2} \left( \int_0^1 |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \, d\mu \right)^{\frac{1}{p}-1} \left( \int_0^1 |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \ln|G_Y(\mu) - G_{Y|X_i}(\mu)|^p \, d\mu - \int_0^1 |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \, d\mu \; \ln \int_0^1 |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \, d\mu \right).$$

First, note that $\frac{1}{p^2} \left( \int_0^1 |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \, d\mu \right)^{\frac{1}{p}-1} > 0$. Second, the function $g(x) = x \ln x$ is convex for $x > 0$, since $\frac{dg(x)}{dx} = \ln x + 1$ and $\frac{d^2 g(x)}{dx^2} = \frac{1}{x} > 0$. Then, according to Jensen's inequality, we have

$$g \left( \int_0^1 |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \, d\mu \right) \le \int_0^1 g \left( |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \right) d\mu,$$

i.e.

$$\int_0^1 |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \, d\mu \; \ln \int_0^1 |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \, d\mu \le \int_0^1 |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \ln |G_Y(\mu) - G_{Y|X_i}(\mu)|^p \, d\mu.$$

Therefore, the derivative $\frac{\partial h(p)}{\partial p} \ge 0$ holds, from which one immediately concludes that $I_i^{quantile}(p)$ is monotonically increasing.

One may wonder whether a similar proposition holds for the PDF-based or CDF-based importance measures. In fact, the conclusion of Proposition 1 cannot simply be extended to the PDF-based or CDF-based measure (the first case study in Section 4 serves as a counterexample).
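Before turning to the PDF- and CDF-based cases, Proposition 1 can be illustrated numerically. The following sketch evaluates the inner quantile-based distance $h(p)$ for increasing orders, using an assumed toy pair of quantile functions chosen only for this illustration, and checks that the values are nondecreasing.

```python
import numpy as np
from scipy.stats import norm

# Toy setting: original output Y ~ N(0, 1), conditional output Y|X=x ~ N(0, 0.5).
G_y = norm(0.0, 1.0).ppf
G_cond = norm(0.0, 0.5).ppf

mu = np.linspace(1e-5, 1.0 - 1e-5, 100_001)
diff = np.abs(G_y(mu) - G_cond(mu))

def h(p):
    """Inner quantile-based Minkowski distance of order p (cf. Proposition 1)."""
    return np.trapz(diff**p, mu) ** (1.0 / p)

orders = [1.0, 1.5, 2.0, 4.0, 8.0]
values = [h(p) for p in orders]
print(values)
assert all(a <= b + 1e-9 for a, b in zip(values, values[1:]))  # nondecreasing in p
```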
However, for $I_i^{pdf}(p)$ and $I_i^{cdf}(p)$, we have the following proposition.

Proposition 2. If $\int \delta_0[f_Y(y)] \, dy \le 1$, where

$$\delta_0[f_Y(y)] = \begin{cases} 1, & \text{for } f_Y(y) > 0, \\ 0, & \text{otherwise}, \end{cases}$$

then $I_i^{pdf}(p)$ and $I_i^{cdf}(p)$ are monotonically increasing for $p \ge 1$.

Proof. We only prove the proposition for $I_i^{pdf}(p)$; the proof for $I_i^{cdf}(p)$ can be obtained similarly. Let $C = \int \delta_0[f_Y(y)] \, dy$; then $I_i^{pdf}(p)$ can be rewritten as

$$I_i^{pdf}(p) = E_{X_i} \left[ C^{1/p} \left( \int \frac{|f_Y(y) - f_{Y|X_i}(y)|^p}{C} \, dy \right)^{1/p} \right].$$

It can be proved that $\left( \int \frac{|f_Y(y) - f_{Y|X_i}(y)|^p}{C} \, dy \right)^{1/p}$ is monotonically increasing in $p$ by an argument similar to the proof of Proposition 1, and $C^{1/p}$ is monotonically increasing in $p$ for $C \le 1$. We can therefore conclude that $C^{1/p} \left( \int \frac{|f_Y(y) - f_{Y|X_i}(y)|^p}{C} \, dy \right)^{1/p}$ is monotonically increasing in $p$, which directly leads to Proposition 2.
Proposition 3. $I_i^{cdf}(\infty) < I_i^{pdf}(1)$.

Proof. Let $y_x^* = \arg \sup_{y} |F_Y(y) - F_{Y|X_i=x}(y)|$. Then we have

$$|F_Y(y_x^*) - F_{Y|X_i=x}(y_x^*)| = \left| \int_{-\infty}^{y_x^*} f_Y(y) \, dy - \int_{-\infty}^{y_x^*} f_{Y|X_i=x}(y) \, dy \right| \le \int_{-\infty}^{y_x^*} |f_Y(y) - f_{Y|X_i=x}(y)| \, dy < \int |f_Y(y) - f_{Y|X_i=x}(y)| \, dy,$$

which directly leads to

$$E_{X_i} \left( \sup_{y} |F_Y(y) - F_{Y|X_i}(y)| \right) < E_{X_i} \left( \int |f_Y(y) - f_{Y|X_i}(y)| \, dy \right),$$

i.e. $I_i^{cdf}(\infty) < I_i^{pdf}(1)$.
Proposition 4. For two parameters $X_i$ and $X_j$ and $p_1 < p_2$, if $\left( I_i^*(p_1) - I_j^*(p_1) \right) \left( I_i^*(p_2) - I_j^*(p_2) \right) < 0$, where "$*$" can be $pdf$, $cdf$ or $quantile$, then there exists $p_0 \in (p_1, p_2)$ such that $I_i^*(p_0) = I_j^*(p_0)$.

Proof. One only needs to note that $I_i^*(p)$ is a continuous function of $p$; applying the intermediate value theorem to $I_i^*(p) - I_j^*(p)$, one obtains the conclusion.
4. Some new measures derived from MD importance measures

The most commonly used cases of the Minkowski distance are those with the order set to 1, 2 and $\infty$, which correspond to the rectilinear distance, the Euclidean distance and the Chebyshev distance, respectively. Therefore, it is intuitive to use the MD importance measures with order 1, 2 or $\infty$ in practical applications. The MD importance measures with $p = 1$, i.e. $I_i^{pdf}(1)$, $I_i^{cdf}(1)$ and $I_i^{quantile}(1)$, which measure the area bounded by the original distribution function and the conditional distribution function, have a clear geometric meaning. The MD importance measures with $p = 2$, i.e. $I_i^{pdf}(2)$, $I_i^{cdf}(2)$ and $I_i^{quantile}(2)$, can be easily calculated, as argued in (Li, Lu, Feng, & Wang, 2012), and also admit further sensitivity analysis on the distribution parameters since differentiation is possible (Cui, et al., 2012). The MD importance measures with $p = \infty$, i.e. $I_i^{pdf}(\infty)$, $I_i^{cdf}(\infty)$ and $I_i^{quantile}(\infty)$, focus on the part where the original and conditional distributions differ the most, which provides more intuitive information on the shape change (e.g. the upper bound of the shape change) of the distribution function. They can therefore be applied when one is concerned with the extent to which the original distribution function and the conditional distribution function can differ.

Some new importance measures can be derived from the generalized MD importance measures, i.e. $I_i^{pdf}(2)$, $I_i^{pdf}(\infty)$, $I_i^{cdf}(\infty)$, $I_i^{quantile}(2)$ and $I_i^{quantile}(\infty)$. We first study the application of the PDF-based measures $I_i^{pdf}(2)$ and $I_i^{pdf}(\infty)$ to the Ishigami function; then the CDF-based $I_i^{cdf}(\infty)$ and the quantile-based $I_i^{quantile}(2)$ and $I_i^{quantile}(\infty)$ are illustrated with a fault tree model.
4.1 $I_i^{pdf}(2)$ and $I_i^{pdf}(\infty)$

We first study the PDF-based importance measures $I_i^{pdf}(2)$ and $I_i^{pdf}(\infty)$, and use the Ishigami function to illustrate their application (E. Borgonovo, 2007). The mathematical expression of the Ishigami function is

$$Y = g(\boldsymbol{X}) = \sin X_1 + a \sin^2 X_2 + b X_3^4 \sin X_1, \tag{14}$$

where the $X_i$ are assumed to be independently and uniformly distributed between $-\pi$ and $\pi$, and $a$ and $b$ are set to 5 and 0.1, respectively, as in (E. Borgonovo, 2007).
To estimate the importance measures of $X_i$, we first generate $N = 1000$ samples with the Latin Hypercube Sampling (LHS) technique (Helton & Davis, 2003; McKay, Beckman, & Conover, 2000). The corresponding realizations of $Y$ can thus be calculated, and its PDF, i.e. the original $f_Y(y)$, can be estimated by kernel density estimation (Castaings, Borgonovo, Morris, & Tarantola, 2012). Then we apply the substituted column design method (Andrea Saltelli, 2002; Andrea Saltelli, et al., 2010) to construct the conditional samples and estimate the conditional PDF $f_{Y|X_i}(y)$, again resorting to kernel density estimation. A detailed procedure for estimating $f_{Y|X_i}(y)$ is available in, e.g., (Wei, Lu, & Yuan, 2012). The PDF-based importance measures can then be obtained according to Eqs. (5) and (6); a sketch of this procedure is given below. $I_i^{pdf}(2)$ and $I_i^{pdf}(\infty)$ for each $X_i$ are listed in Table 1, and $I_i^{pdf}(1)$ is also given for comparison. The ranking of each parameter is given in parentheses.
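The following Python sketch outlines the estimation procedure described above for the Ishigami function. It uses plain Monte Carlo conditioning on sampled values of $X_i$ rather than the substituted column design of the cited references, and the grids, conditional sample sizes and random seed are illustrative assumptions; it is a sketch of the approach, not the exact computation behind Table 1.

```python
import numpy as np
from scipy.stats import gaussian_kde, qmc

def ishigami(x, a=5.0, b=0.1):
    """Ishigami function of Eq. (14); x has shape (n, 3)."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(1)
n, n_cond, n_grid = 1000, 50, 400

# Unconditional sample of Y via Latin Hypercube Sampling on [-pi, pi]^3.
u = qmc.LatinHypercube(d=3, seed=1).random(n)
x = -np.pi + 2.0 * np.pi * u
y_all = ishigami(x)
grid = np.linspace(y_all.min() - 1.0, y_all.max() + 1.0, n_grid)
f_y = gaussian_kde(y_all)(grid)              # kernel density estimate of f_Y(y)

def I_pdf(i, p):
    """Monte Carlo estimate of I_i^pdf(p) for the Ishigami model, Eqs. (5)-(6)."""
    dists = []
    for xi in rng.uniform(-np.pi, np.pi, n_cond):    # outer expectation over X_i
        x_c = rng.uniform(-np.pi, np.pi, (n, 3))
        x_c[:, i] = xi                               # condition on X_i = xi
        f_cond = gaussian_kde(ishigami(x_c))(grid)   # estimate of f_{Y|Xi=xi}(y)
        diff = np.abs(f_y - f_cond)
        dists.append(diff.max() if np.isinf(p)
                     else np.trapz(diff**p, grid) ** (1.0 / p))
    return np.mean(dists)

for i in range(3):
    print(f"X{i + 1}:", I_pdf(i, 1.0), I_pdf(i, 2.0), I_pdf(i, np.inf))
```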
Table 1 Importance of $X_i$ measured by PDF-based MD importance measures

            I_i^pdf(1)      I_i^pdf(2)      I_i^pdf(inf)
X_1         0.6444 (2)      0.2059 (2)      0.1753 (1)
X_2         0.6839 (1)      0.2264 (1)      0.1370 (2)
X_3         0.4956 (3)      0.1295 (3)      0.0740 (3)
From the second and third columns of Table 1, it can be seen that $X_2$ is the most important parameter, $X_1$ the second most important and $X_3$ the least important. On the other hand, according to $I_i^{pdf}(\infty)$, $X_1$ is the most important one. MD importance measures with different orders can thus produce different parameter rankings, and care should be taken when determining the most influential parameter. Although different rankings can be derived from different MD importance measures, combining the three measures gives a deeper insight into how each parameter affects the output if its uncertainty is removed. $I_1^{pdf}(1) < I_2^{pdf}(1)$ and $I_1^{pdf}(2) < I_2^{pdf}(2)$ indicate that, on average, the shape of the PDF of $Y$ changes the most over the whole range if the uncertainty in $X_2$ is removed; $I_1^{pdf}(\infty) > I_2^{pdf}(\infty)$ indicates that, on average, the PDF of $Y$ is most likely to suffer a large shift at certain points if the uncertainty in $X_1$ is removed. In any case, $X_3$ is the least important parameter and requires the least attention, since all three measures give it the same ranking.

Remark 4. One may note that $I_i^{pdf}(p)$ is decreasing in $p$ in the above example, which shows that the monotonically increasing property may not hold if $\int \delta_0[f_Y(y)] \, dy > 1$.

Remark 5. Different orders bring different rankings because different orders $p$ emphasize different parts of the distributional shift in $Y$. If $p$ is large, the part where $f_{Y|X_i}(y)$ differs from $f_Y(y)$ the most is emphasized, and in turn the part where $f_{Y|X_i}(y)$ is close to $f_Y(y)$ is weakened. The opposite happens if $p$ is small.

4.2 $I_i^{cdf}(\infty)$, $I_i^{quantile}(2)$ and $I_i^{quantile}(\infty)$
We use a fault tree model from (Iman, 1987; Liu & Homma, 2010) to illustrate the importance measures $I_i^{cdf}(\infty)$, $I_i^{quantile}(2)$ and $I_i^{quantile}(\infty)$, where the Boolean expression of the fault tree is

$$Y = X_1 X_3 X_5 + X_1 X_3 X_6 + X_1 X_4 X_5 + X_1 X_4 X_6 + X_2 X_3 X_4 + X_2 X_3 X_5 + X_2 X_4 X_5 + X_2 X_5 X_6 + X_2 X_4 X_7 + X_2 X_6 X_7. \tag{15}$$

In the above expression, $Y$ represents the occurrence frequency of the top event of the fault tree model. $X_1$ and $X_2$ are initiating events and are expressed as numbers of occurrences per year. $X_3, \ldots, X_7$ are basic events and are expressed as failure probabilities. All seven inputs are independent of each other and follow lognormal distributions. The corresponding mean values and error factors are listed in Table 2.

Table 2 Distribution parameters for the inputs of the fault tree model (Iman, 1987; Liu & Homma, 2010)
Parameter    Mean value    Error factor
X_1          2             2
X_2          3             2
X_3          0.001         2
X_4          0.002         2
X_5          0.004         2
X_6          0.005         2
X_7          0.003         2
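For illustration, the inputs of Table 2 can be sampled and propagated through Eq. (15) as in the following sketch. It assumes the common convention that the error factor is the ratio of the 95th percentile to the median, i.e. $EF = \exp(1.645\sigma)$, and treats the tabulated values as arithmetic means; these parameterization choices are assumptions of this sketch rather than statements from the cited sources.

```python
import numpy as np

def lognormal_from_mean_ef(mean, ef, size, rng):
    """Sample a lognormal variable given its arithmetic mean and error factor.
    Assumes EF = 95th percentile / median = exp(1.645 * sigma)."""
    sigma = np.log(ef) / 1.645
    mu = np.log(mean) - 0.5 * sigma**2          # chosen so that E[X] = mean
    return rng.lognormal(mean=mu, sigma=sigma, size=size)

def top_event(x):
    """Top-event frequency of the fault tree, Eq. (15); columns of x are X1..X7."""
    x1, x2, x3, x4, x5, x6, x7 = x.T
    return (x1*x3*x5 + x1*x3*x6 + x1*x4*x5 + x1*x4*x6 + x2*x3*x4
            + x2*x3*x5 + x2*x4*x5 + x2*x5*x6 + x2*x4*x7 + x2*x6*x7)

rng = np.random.default_rng(2)
means = [2, 3, 0.001, 0.002, 0.004, 0.005, 0.003]   # mean values of Table 2
n = 1000
x = np.column_stack([lognormal_from_mean_ef(m, 2.0, n, rng) for m in means])
y = top_event(x)
print(y.mean(), np.percentile(y, [5, 50, 95]))
```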
Similar to the Ishigami case, we use Monte Carlo simulation to obtain the importance measures of $X_i$. $N = 1000$ samples are generated by the LHS technique, and the CDF and the quantile function are estimated via kernel density estimation. The MD importance measures of each $X_i$ are given in Table 3 and Table 4, with the rankings of the parameters given in parentheses.

Table 3 CDF-based MD importance measures for the inputs of the fault tree model

Parameter    I_i^cdf(1)                 I_i^cdf(2)     I_i^cdf(inf)
X_1          1.8151 x 10^-5 (6)         0.0009 (6)     0.0763 (6)
X_2          5.7323 x 10^-5 (1)         0.0028 (1)     0.2143 (1)
X_3          1.1956 x 10^-5 (7)         0.0006 (7)     0.0476 (7)
X_4          2.8160 x 10^-5 (4)         0.0014 (4)     0.1090 (4)
X_5          4.0745 x 10^-5 (3)         0.0021 (3)     0.1581 (3)
X_6          4.6034 x 10^-5 (2)         0.0023 (2)     0.1718 (2)
X_7          2.1074 x 10^-5 (5)         0.0010 (5)     0.0806 (5)
Table 4 Quantile-based MD importance measures for the inputs of the fault tree model

Parameter    I_i^quantile(1)            I_i^quantile(2)            I_i^quantile(inf)
X_1          1.8380 x 10^-5 (6)         2.0692 x 10^-5 (6)         7.1607 x 10^-5 (5)
X_2          5.7552 x 10^-5 (1)         6.9605 x 10^-5 (1)         2.3810 x 10^-4 (1)
X_3          1.1979 x 10^-5 (7)         1.3097 x 10^-5 (7)         4.0158 x 10^-5 (7)
X_4          2.8554 x 10^-5 (4)         3.2289 x 10^-5 (4)         9.5791 x 10^-5 (4)
X_5          4.1008 x 10^-5 (3)         4.9882 x 10^-5 (3)         2.0278 x 10^-4 (2)
X_6          4.5865 x 10^-5 (2)         5.4149 x 10^-5 (2)         1.8586 x 10^-4 (3)
X_7          2.1067 x 10^-5 (5)         2.3866 x 10^-5 (5)         6.0815 x 10^-5 (6)
As can be seen from Table 3, the three CDF-based MD importance measures give the same importance ranking for the inputs. It is thus reasonable to believe that $X_2$ is the most important input and $X_3$ the least important. Table 4 lists the quantile-based MD importance measures of $X_i$ and the corresponding rankings. Its second column is identical to that of Table 3, since $I_i^{cdf}(1)$ and $I_i^{quantile}(1)$ are the same by definition. In contrast to the results of the other five importance measures, the importance rankings of $X_5$ and $X_6$, as well as those of $X_1$ and $X_7$, are swapped according to $I_i^{quantile}(\infty)$. This is possible, since the importance of $X_5$ and $X_6$ ($X_1$ and $X_7$) are fairly close to each other according to the other five measures, and different measures emphasize different aspects of importance. Combining the results of these six importance measures, the ranking of the inputs should be $X_2 \succ X_6 \succ X_5 \succ X_4 \succ X_7 \succ X_1 \succ X_3$.

In practice, one should choose an importance measure that suits the situation at hand, since different importance measures reflect different aspects of the influence on the output. Alternatively, one can calculate several MD importance measures and combine the results to achieve a balanced ranking (a simple aggregation sketch is given below). For instance, in this fault tree model, the CDF-based MD importance measures with orders 1, 2 and infinity give exactly the same ranking for the seven inputs, so we can be confident that this ranking is reasonable.
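One simple way to combine the rankings of several MD importance measures, shown below with the values of Tables 3 and 4, is average-rank aggregation; this particular aggregation rule is an illustrative assumption and is not prescribed by the paper.

```python
import numpy as np
from scipy.stats import rankdata

# Importance values from Tables 3 and 4 (rows: X1..X7; columns: the six measures).
scores = np.array([
    [1.8151e-5, 0.0009, 0.0763, 1.8380e-5, 2.0692e-5, 7.1607e-5],
    [5.7323e-5, 0.0028, 0.2143, 5.7552e-5, 6.9605e-5, 2.3810e-4],
    [1.1956e-5, 0.0006, 0.0476, 1.1979e-5, 1.3097e-5, 4.0158e-5],
    [2.8160e-5, 0.0014, 0.1090, 2.8554e-5, 3.2289e-5, 9.5791e-5],
    [4.0745e-5, 0.0021, 0.1581, 4.1008e-5, 4.9882e-5, 2.0278e-4],
    [4.6034e-5, 0.0023, 0.1718, 4.5865e-5, 5.4149e-5, 1.8586e-4],
    [2.1074e-5, 0.0010, 0.0806, 2.1067e-5, 2.3866e-5, 6.0815e-5],
])

# Rank within each measure (rank 1 = most important), then average across measures.
ranks = np.column_stack([rankdata(-scores[:, j]) for j in range(scores.shape[1])])
order = np.argsort(ranks.mean(axis=1))
print(" > ".join(f"X{i + 1}" for i in order))   # X2 > X6 > X5 > X4 > X7 > X1 > X3
```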
5. Conclusions

This paper presented a general formulation of the moment-independent importance measures used in system reliability analysis. It is noted that almost all the existing moment-independent importance measures are based on the Minkowski distance (MD). We have therefore generalized the definitions of the existing importance measures by means of the Minkowski distance. Three classes of MD importance measures are proposed: the PDF-based, the CDF-based and the quantile-based MD importance measures. Their properties are investigated, and some new and promising moment-independent importance measures are studied as special cases of the proposed MD importance measures.

The proposed importance measures quantify the significance of an input by the expected Minkowski distance between the original distribution function and the conditional distribution function of the output, which discards less information at the outset (the distribution function is apparently more informative than the variance, for instance) and is therefore more sensitive in identifying the important factors. Indeed, the variance-based measures are sometimes not able to identify the important input (Plischke, Borgonovo, & Smith, 2012). Additionally, the proposed measures place little restriction on the distribution of the inputs or the output (unlike some statistical test-based measures, e.g. the measures in Section 6.6 of (Helton, et al., 2006)), which indicates that the measures proposed in this paper are more widely applicable in real applications.

In practice, the joint effect resulting from variable interaction might be of concern, and the measures proposed in this paper can be readily extended to the multiple-variable case. For instance, the joint PDF-based importance measure for $\{X_{i_1}, \ldots, X_{i_r}\}$ can be defined as

$$I_{i_1,\ldots,i_r}^{pdf}(p) = E_{X_{i_1},\ldots,X_{i_r}} \left( \int |f_Y(y) - f_{Y|X_{i_1},\ldots,X_{i_r}}(y)|^p \, dy \right)^{1/p} = \int_{\Omega_{i_1,\ldots,i_r}} \left( \int |f_Y(y) - f_{Y|X_{i_1}=x_{i_1},\ldots,X_{i_r}=x_{i_r}}(y)|^p \, dy \right)^{1/p} f_{X_{i_1},\ldots,X_{i_r}}(x_{i_1}, \ldots, x_{i_r}) \, dx_{i_1} \cdots dx_{i_r},$$
which measures the joint influence of the uncertainty in the group $\{X_{i_1}, \ldots, X_{i_r}\}$ on the uncertainty of the output. The CDF-based and quantile-based importance measures for multiple inputs can be defined in the same way, and their properties are a direction for further study.

It has been pointed out that MD importance measures with different orders might lead to different rankings of the parameters, and suggestions for order selection have been given. Specifically, it is suggested to combine MD importance measures with different orders to achieve a reasonable parameter ranking. On the other hand, further study on the comparison between different MD importance measures should be carried out to provide more detailed guidance on the selection of a proper importance measure. In addition, efficient calculation of the MD importance measures is needed. Generally speaking, the computational effort required by the moment-independent importance measures consists of two parts: running the model to obtain samples for estimating the distribution functions, and calculating the Minkowski distance between the original distribution function and the conditional distribution function. The latter involves numerical integration techniques, and its computational cost is often negligible compared with the former, especially when the model is associated with long-running computer codes. Studies on efficiently estimating the moment-independent importance measures have been carried out, and the techniques therein can also be used to estimate the MD importance measures (Castaings, et al., 2012; Plischke, et al., 2012). How to further reduce the number of model runs required to estimate the MD importance measures will be our future study.
References

Barlow, R. E., & Proschan, F. (1975). Importance of system components and fault tree events. Stochastic Processes and Their Applications, 3, 153-173.
Birnbaum, Z. W. (1968). On the importance of different components in a multicomponent system. DTIC Document.
Borgonovo, E. (2007). A new uncertainty importance measure. Reliability Engineering & System Safety, 92, 771-784.
Borgonovo, E. (2010). The reliability importance of components and prime implicants in coherent and non-coherent systems including total-order interactions. European Journal of Operational Research, 204, 485-495.
Castaings, W., Borgonovo, E., Morris, M. D., & Tarantola, S. (2012). Sampling strategies in density-based sensitivity analysis. Environmental Modelling & Software, 38, 13-26.
Castillo, E., Mínguez, R., & Castillo, C. (2008). Sensitivity analysis in optimization and reliability problems. Reliability Engineering & System Safety, 93, 1788-1800.
Cui, L., Lu, Z., & Wang, Q. (2012). Parametric sensitivity analysis of the importance measure. Mechanical Systems and Signal Processing, 28, 482-491.
Distefano, S., Longo, F., & Trivedi, K. S. (2012). Investigating dynamic reliability and availability through state-space models. Computers & Mathematics with Applications, 64, 3701-3716.
Do Van, P., Barros, A., & Bérenguer, C. (2008). Reliability importance analysis of Markovian systems at steady state using perturbation analysis. Reliability Engineering & System Safety, 93, 1605-1615.
Do Van, P., Barros, A., & Bérenguer, C. (2009). Differential importance measure of Markov models using perturbation analysis. Reliability, Risk and Safety: Theory and Applications - Proc. ESREL 2009, 981-987.
Do Van, P., Barros, A., & Bérenguer, C. (2010). From differential to difference importance measures for Markov reliability models. European Journal of Operational Research, 204, 513-521.
Eryilmaz, S. (2013). Component importance for linear consecutive k-out-of-n and m-consecutive k-out-of-n systems with exchangeable components. Naval Research Logistics (NRL).
Gao, X., Cui, L., & Li, J. (2007). Analysis for joint importance of components in a coherent system. European Journal of Operational Research, 182, 282-299.
Hellmich, M., & Berg, H.-P. (2013). On the construction of component importance measures for semi-Markov systems. Mathematical Methods of Operations Research, 77, 15-32.
Helton, J. C., & Davis, F. J. (2003). Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69.
Helton, J. C., Johnson, J. D., Sallaberry, C. J., & Storlie, C. B. (2006). Survey of sampling-based methods for uncertainty and sensitivity analysis. Reliability Engineering & System Safety, 91, 1175-1209.
Hong, J., Koo, H., & Lie, C. (2002). Joint reliability importance of k-out-of-n systems. European Journal of Operational Research, 142, 539-547.
Iman, R. L. (1987). A matrix-based approach to uncertainty and sensitivity analysis for fault trees. Risk Analysis, 7, 21-33.
Kleijnen, J. P. (2005). An overview of the design and analysis of simulation experiments for sensitivity analysis. European Journal of Operational Research, 164, 287-300.
Kleijnen, J. P., & Helton, J. C. (1999). Statistical analyses of scatterplots to identify important factors in large-scale simulations, 1: Review and comparison of techniques. Reliability Engineering & System Safety, 65, 147-185.
Kuo, W., & Zhu, X. (2012a). Importance measures in reliability, risk, and optimization: principles and applications. Wiley.
Kuo, W., & Zhu, X. (2012b). Some recent advances on importance measures in reliability. IEEE Transactions on Reliability, 61, 344-360.
Levitin, G., Podofillini, L., & Zio, E. (2003). Generalised importance measures for multi-state elements based on performance level restrictions. Reliability Engineering & System Safety, 82, 287-298.
Li, L., Lu, Z., Feng, J., & Wang, B. (2012). Moment-independent importance measure of basic variable and its state dependent parameter solution. Structural Safety, 38, 40-47.
Li, L., Lu, Z., & Zhou, C. (2011). Importance analysis for models with correlated input variables by the state dependent parameters method. Computers & Mathematics with Applications, 62, 4547-4556.
Liu, Q., & Homma, T. (2010). A new importance measure for sensitivity analysis. Journal of Nuclear Science and Technology, 47, 53-61.
McKay, M., Beckman, R., & Conover, W. (2000). A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 42, 55-61.
Plischke, E., Borgonovo, E., & Smith, C. L. (2012). Global sensitivity measures from given data. European Journal of Operational Research.
Saltelli, A. (2002). Making best use of model evaluations to compute sensitivity indices. Computer Physics Communications, 145, 280-297.
Saltelli, A., Annoni, P., Azzini, I., Campolongo, F., Ratto, M., & Tarantola, S. (2010). Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index. Computer Physics Communications, 181, 259-270.
Saltelli, A., Chan, K., & Scott, E. M. (2000). Sensitivity analysis (Vol. 134). New York: Wiley.
Si, S., Cai, Z., Sun, S., & Zhang, S. (2010). Integrated importance measures of multi-state systems under uncertainty. Computers & Industrial Engineering, 59, 921-928.
Si, S., Dui, H., Zhao, X., Zhang, S., & Sun, S. (2012). Integrated importance measure of component states based on loss of system performance. IEEE Transactions on Reliability, 61, 192-202.
Sobol, I. M. (2001). Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Mathematics and Computers in Simulation, 55, 271-280.
Sobol, I. M. (2003). Theorems and examples on high dimensional model representation. Reliability Engineering & System Safety, 79, 187-193.
Storlie, C. B., & Helton, J. C. (2008). Multiple predictor smoothing methods for sensitivity analysis: Description of techniques. Reliability Engineering & System Safety, 93, 28-54.
Storlie, C. B., Reich, B. J., Helton, J. C., Swiler, L. P., & Sallaberry, C. J. (2013). Analysis of computationally demanding models with continuous and categorical inputs. Reliability Engineering & System Safety, 113, 30-41.
Storlie, C. B., Swiler, L. P., Helton, J. C., & Sallaberry, C. J. (2009). Implementation and evaluation of nonparametric regression procedures for sensitivity analysis of computationally demanding models. Reliability Engineering & System Safety, 94, 1735-1763.
Wei, P., Lu, Z., & Yuan, X. (2012). Monte Carlo simulation for moment-independent sensitivity analysis. Reliability Engineering & System Safety.
Xie, M. (1987). On some importance measures of system components. Stochastic Processes and Their Applications, 25, 273-280.
Xie, M., & Bergman, B. (1992). On a general measure of component importance. Journal of Statistical Planning and Inference, 29, 211-220.
Zhou, C., Lu, Z., Li, L., Feng, J., & Wang, B. (2013). A new algorithm for variance based importance analysis of models with correlated inputs. Applied Mathematical Modelling, 37, 864-875.