Learning Markov Models for Stationary System Behaviors

Yingke Chen, Hua Mao, Manfred Jaeger, Thomas D. Nielsen, Kim G. Larsen, Brian Nielsen
Department of Computer Science, Aalborg University, Denmark

NFM 2012, April 4, 2012

Motivation

- Constructing formal models manually can be time-consuming.
- Formal system models may not exist:
  - legacy software
  - 3rd-party components
  - black-box embedded system components
- Our proposal: learn models from observed system behaviors.

Overview of Our Approach

[Figure: the system is observed, producing data such as "idle, idle, coffee_request, idle, idle, cup, idle, idle, coffee, coffee, idle, idle, ..."; a probabilistic automaton is learned from this data; a model checker then answers yes/no for the learned model against a given specification.]

Related Work

- Learning probabilistic finite automata
  - Alergia — R. Carrasco and J. Oncina (1994)
  - Probabilistic Suffix Automata — D. Ron et al. (1996)
- Learning models for model checking
  - Learning CTMCs — K. Sen et al. (2004)
  - Learning DLMCs — H. Mao et al. (2011)

Limitations
- It is hard to restart the system any number of times.
- The system cannot be reset to a well-defined unique initial state.

Proposal
- Learn a model from a single observation sequence.

Labeled Markov Chain (LMC)

An LMC is a tuple M = ⟨Q, Σ, π, τ, L⟩, where
- Q: a finite set of states
- Σ: a finite alphabet
- π : Q → [0, 1] is an initial probability distribution
- τ : Q × Q → [0, 1] is the transition probability function
- L : Q → Σ is a labeling function
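The tuple above can be written down directly as data. A minimal sketch in Python, assuming a dictionary-based encoding; the example chain, the function name, and the coffee-machine labels are illustrative, not taken from the paper's implementation:

```python
import random

# A minimal LMC <Q, Sigma, pi, tau, L> as plain Python data
# (illustrative chain, not the paper's model).
Q = [0, 1, 2]                        # states
Sigma = ["idle", "cup", "coff"]      # alphabet
pi = {0: 1.0, 1: 0.0, 2: 0.0}        # initial distribution
tau = {                              # transition probabilities tau(q, q')
    0: {0: 0.7, 1: 0.3},
    1: {2: 1.0},
    2: {0: 1.0},
}
L = {0: "idle", 1: "cup", 2: "coff"}  # labeling function

def sample_sequence(n, rng=random.Random(0)):
    """Generate one observation sequence of length n from the LMC."""
    q = rng.choices(Q, weights=[pi[s] for s in Q])[0]
    out = [L[q]]
    for _ in range(n - 1):
        succs = list(tau[q].items())
        q = rng.choices([s for s, _ in succs],
                        weights=[p for _, p in succs])[0]
        out.append(L[q])
    return out

print(sample_sequence(10))
```

Sampling one long run like this is exactly the setting of the paper: the learner only ever sees a single such observation sequence.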

Probabilistic Suffix Automata (PSA)

A PSA is an LMC in which:
- H : Q → Σ^{≤N} is an extended labeling function representing the history of the most recently visited states.
- Each state q_i is associated with a string s_i = H(q_i)L(q_i).
- If τ(q_1, q_2) > 0, then H(q_2) ∈ suffix*(s_1).
- Let S be the set of strings associated with states in the PSA; then ∀ s ∈ S, suffix*(s) ∩ S = {s}.

Figure: A PSA over Σ = {idle, cup, milk, coff} [transition diagram omitted]

Prediction Suffix Tree (PST)

- A tree over the alphabet Σ = {idle, cup, milk, coff}.
- Each node is labeled by a pair (s, γ_s), and each edge is labeled by a symbol σ ∈ Σ.
- A parent's string is a suffix of its children's strings.

Figure: The PSA and the PST define the same distribution over strings from Σ [diagrams omitted]

Stationary Probabilistic LTL (SPLTL)

Syntax

The syntax of stationary probabilistic LTL is:

    φ ::= S_⋈r(ϕ)    (⋈ ∈ {≥, ≤, =}; r ∈ [0, 1]; ϕ ∈ LTL)

Semantics

For a model M, the stationary probability of an LTL property ϕ is defined by:

    M |= S_⋈r(ϕ) iff P^{π^s}_M({s ∈ Σ^ω | s |= ϕ}) ⋈ r

for all stationary distributions π^s.
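For the special case where ϕ is a single atomic proposition, the stationary probability reduces to the stationary mass on the states carrying that label. A rough sketch, assuming an ergodic chain and using plain power iteration; the example chain and the threshold 0.4 are illustrative, and general LTL properties would need a model checker such as PRISM:

```python
# Checking S_{>=r}(phi) for an atomic proposition phi: the stationary
# probability of phi is the stationary mass on states labeled phi.
# Illustrative sketch only; not the paper's implementation.

def stationary(tau, states, iters=10000):
    """Stationary distribution via power iteration (assumes ergodicity)."""
    p = {q: 1.0 / len(states) for q in states}
    for _ in range(iters):
        nxt = {q: 0.0 for q in states}
        for q, succ in tau.items():
            for q2, w in succ.items():
                nxt[q2] += p[q] * w
        p = nxt
    return p

states = [0, 1, 2]
L = {0: "idle", 1: "cup", 2: "coff"}
tau = {0: {0: 0.5, 1: 0.5}, 1: {2: 1.0}, 2: {0: 1.0}}

p = stationary(tau, states)
prob_idle = sum(p[q] for q in states if L[q] == "idle")
print(prob_idle)            # stationary probability of 'idle'
print(prob_idle >= 0.4)     # does M satisfy S_{>=0.4}(idle)?
```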

Outline

Introduction: Motivation; Overview; Related Work
Preliminaries: LMC; PSA & PST; SPLTL
PSA Learning: Construct PST; PST to PSA and PSA to LMC; Parameter Tuning
Experiment: PSA-equivalent; Non PSA-equivalent
Conclusion

Overview

[Figure: from the observation sequence "coff, idle, idle, cup, milk, milk, coff, idle, cup, milk, coff, ..." a PST is constructed, which is then transformed into a PSA.]

Construct PST

- Start with the tree T consisting only of the root node (e), and the candidate set S = {σ | σ ∈ Σ and P̃(σ) ≥ ε}.
- Each s ∈ S is included in T if

      P̃(s) · Σ_{σ∈Σ} P̃(σ|s) · log( P̃(σ|s) / P̃(σ|suffix(s)) ) ≥ ε

- For each s with P̃(s) ≥ ε and each σ' ∈ Σ, σ's is added to S.
- Loop until S is empty.
- Calculate the next-symbol distribution for each node in T.

[Figure: the PST grown step by step, ending with nodes e, idle, cup, milk, coff, cup·milk, and milk·milk, each annotated with its next-symbol distribution.]


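The loop above can be sketched as follows, with the empirical probabilities P̃ estimated from a single observation sequence. This is an illustrative approximation of the Ron et al. procedure, not the paper's implementation; the `max_depth` bound, the uniform fallback for unseen contexts, and the zero-guard in the logarithm are added assumptions:

```python
import math

def build_pst(seq, alphabet, eps, max_depth=3):
    """Grow a PST from one sequence (sketch of the slide's loop)."""
    def p_hat(s):  # empirical probability of the substring s
        n = len(seq) - len(s) + 1
        return sum(1 for i in range(n) if tuple(seq[i:i+len(s)]) == s) / max(n, 1)

    def p_next(s, a):  # empirical P(a | context s), uniform if s never seen
        hits = [seq[i+len(s)] for i in range(len(seq) - len(s))
                if tuple(seq[i:i+len(s)]) == s]
        return hits.count(a) / len(hits) if hits else 1.0 / len(alphabet)

    tree = {(): {a: p_next((), a) for a in alphabet}}      # root node "e"
    cands = [(a,) for a in alphabet if p_hat((a,)) >= eps]  # initial S
    while cands:
        s = cands.pop()
        suf = s[1:]  # suffix(s)
        # weighted divergence between P(.|s) and P(.|suffix(s))
        gain = p_hat(s) * sum(
            p_next(s, a) * math.log(p_next(s, a) / max(p_next(suf, a), 1e-12))
            for a in alphabet if p_next(s, a) > 0)
        if gain >= eps:
            tree[s] = {a: p_next(s, a) for a in alphabet}
        if len(s) < max_depth and p_hat(s) >= eps:
            cands.extend((b,) + s for b in alphabet)  # grow candidates
    return tree
```

On the strictly alternating sequence "abab..." this yields the root plus the two depth-one contexts, since deeper contexts add no predictive information.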

Transform the PST to the LMC

[Figure: the learned PST is transformed into a PSA (Ron et al., 1996), and the PSA is then relabeled to obtain the LMC.]


Parameter Tuning

A smaller ε induces a bigger model; ε appears in two places:

- the growth criterion  P̃(s) · Σ_{σ∈Σ} P̃(σ|s) · log( P̃(σ|s) / P̃(σ|suffix(s)) ) ≥ ε
- the candidate threshold  P̃(s) ≥ ε

Choosing ε too small leads to overfitting.

Bayesian Information Criterion (BIC)

- BIC(A | Seq) := log L(A | Seq) − (1/2) |A| log(|Seq|)

Here, |A| = |Q_A| · (|Σ| − 1).
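The BIC score gives a way to compare models learned with different ε: extra states must pay for themselves in likelihood. A sketch with made-up likelihoods and state counts (the candidate triples are illustrative, not experimental values):

```python
import math

# BIC score for a candidate automaton A given the observation sequence,
# using |A| = |Q_A| * (|Sigma| - 1) free parameters as on the slide.
def bic_score(log_likelihood, n_states, alphabet_size, seq_len):
    n_params = n_states * (alphabet_size - 1)
    return log_likelihood - 0.5 * n_params * math.log(seq_len)

# Pick the epsilon whose learned model scores best (values illustrative):
candidates = [
    # (epsilon, log L(A | Seq), |Q_A|)
    (0.100, -9500.0, 4),     # too coarse: poor fit
    (0.010, -9100.0, 14),
    (0.001, -9050.0, 120),   # much bigger model, little likelihood gain
]
best = max(candidates,
           key=lambda c: bic_score(c[1], c[2], alphabet_size=5, seq_len=10240))
print(best[0])  # the middle epsilon wins: the penalty term rejects overfitting
```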


Experiments Setting

- A single sequence is generated by a given LMC model.
- The difference between the generating model M_g and the learned model M_l is measured as the mean absolute difference D in stationary probability over a set Φ of randomly generated LTL formulas (computed by PRISM):

      D = (1/|Φ|) Σ_{φ∈Φ} |P^s_{M_g}(φ) − P^s_{M_l}(φ)|

- Two cases are considered: PSA-equivalent and non-PSA-equivalent generating models.
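Computing D itself is a one-liner once the stationary probabilities are in hand. The probability vectors below are illustrative stand-ins for values that PRISM would report on M_g and M_l:

```python
# Mean absolute difference in stationary probability over a formula set Phi.
def mean_abs_diff(p_gen, p_learned):
    assert len(p_gen) == len(p_learned)
    return sum(abs(a - b) for a, b in zip(p_gen, p_learned)) / len(p_gen)

p_gen     = [0.378, 0.512, 0.488, 0.424]   # P^s_{M_g}(phi), phi in Phi (illustrative)
p_learned = [0.378, 0.515, 0.489, 0.414]   # P^s_{M_l}(phi)               (illustrative)
print(mean_abs_diff(p_gen, p_learned))
```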

PSA-equivalent

An LMC M is called PSA-equivalent if there exists a PSA M' such that, for every string s,

    P_M(s) = P_{M'}(s)

[Figure: (a)–(c): an example LMC and related automata over states e, a, s, aa, sa, with their transition probabilities; diagrams omitted]

Phone Model

Figure: The generating phone LMC over Σ = {(r)ing, (i)dle, (t)alk, (p)ick-up, (h)ang-up} [transition diagram omitted]

Phone Model (cont.)

Table: D is based on 507 random LTL formulas. For reference: D_dummy = 0.1569.

|S|     |Ql|   D        t      rp|r   irp|ir  iirp|iir  ◇i
320     5      0.03200  0.344  0.310  0.309   0.309     0
1280    5      0.04900  0.385  0.446  0.446   0.446     0
5120    10     0.00590  0.379  0.490  0.490   0.490     0
10240   14     0.00160  0.381  0.506  0.477   0.409     0
20480   14     0.00049  0.378  0.515  0.489   0.414     0
Mg      14     -        0.378  0.512  0.488   0.424     0


Self-stabilizing Protocol

[Figure: a ring of processes P1, P2, ..., Pn passing tokens x1, ..., xn.]

- With 3 processes, the protocol generates the sequence "000,110,000,000,011,000,010,000,011,000,101,000,001,000,011,000,000,001,000,001,000,101,000,101,000,...", from which a full model is learned.
- Abstraction: 000, 111 → 3tokens; 010, 110, 011, 101, 001, 100 → stable. An abstract model is then learned from the abstracted sequence "tokens3,stable,tokens3,stable,tokens3,stable,tokens3,tokens3,tokens3,tokens3,stable,tokens3,stable,tokens3,...".
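The abstraction step is a symbol-by-symbol relabeling of the observation sequence before learning. A minimal sketch; the label names follow the slide:

```python
# Map each 3-process configuration to an abstract symbol:
# 000 and 111 keep all tokens; every other configuration is 'stable'.
ABSTRACT = {"000": "tokens3", "111": "tokens3"}

def abstract(seq):
    """Relabel a raw configuration sequence into the abstract alphabet."""
    return [ABSTRACT.get(s, "stable") for s in seq]

raw = "000,110,000,000,011,000,010,000,011,000,101,000".split(",")
print(abstract(raw)[:4])  # -> ['tokens3', 'stable', 'tokens3', 'tokens3']
```

Shrinking the alphabet this way is what keeps the learned abstract model at 4 states even as the number of processes grows.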

Self-stabilizing Protocol (cont.)

Table: Self-stabilizing protocol with 7 processes. D is based on 503 random LTL formulas. For reference: D_d = 0.1669.

                   Full model                    Abstract model
|Seq|    time(sec)  order  |Ql|  D         time(sec)  order  |Ql|  D
80       73.0       0      1     0.0192    1.6        1      4     0.0172
160      49.4       0      1     0.0325    2.1        1      4     0.0079
320      162.9      0      1     0.0292    3.3        1      4     0.0369
640      34.3       0      1     0.0234    2.3        1      4     0.0114
1280     37.2       0      1     0.0193    4.1        1      4     0.0093
2560     42.0       0      1     0.0204    5.0        1      4     0.0054
5120     47.9       0      1     0.0182    8.9        1      4     0.0018
10240    59.3       0      1     0.0390    16.3       1      4     0.0013
20480    80.7       0      1     0.0390    31.4       1      4     0.0016
50000    1904.4     1      128   0.00034   152.42     1      4     0.0011
100k     3435.5     1      128   0.00071   308.9      1      4     0.0007

Self-stabilizing Protocol (cont.)

Figure: P^s_M(true U≤L stable | tokens = N), comparing the real, full, and abstract models for 3, 7, 11, and 19 processes [plots omitted]

Self-stabilizing Protocol (cont.)

Figure: The time for calculating P^s_M(true U≤L stable | tokens = N) in the generating model and the abstract model, for 19 and 21 processes [plot omitted]

Non PSA-equivalent: Dice Model

Figure: Left: the generating model. Right: a model learned from a sequence with 1440 symbols. [transition diagrams omitted]

Dice Model (cont.)

Table: D is based on 501 random LTL formulas. For reference: D_dummy = 0.1014.

|S|     |Ql|  D        P^s(1)  P^s(2)  P^s(3)  P^s(4)  P^s(5)  P^s(6)
360     13    0.0124   0.137   0.17    0.182   0.103   0.205   0.203
720     13    0.0043   0.188   0.174   0.174   0.149   0.168   0.147
1440    13    0.0023   0.184   0.166   0.169   0.143   0.153   0.185
2880    17    0.0023   0.173   0.166   0.159   0.142   0.176   0.184
5760    17    0.0016   0.173   0.165   0.153   0.161   0.174   0.174
11520   19    0.00094  0.162   0.17    0.176   0.157   0.168   0.167
20000   21    0.00092  0.164   0.173   0.171   0.166   0.164   0.162
Mg      13    -        0.167   0.167   0.167   0.167   0.167   0.167

Even for a non-PSA-equivalent system, the learned model still provides a good approximation of SPLTL properties.

20000 symbols!

[Figure: the model learned from 20000 symbols, with states t1, h2, t3, h4, t5, h6 and H/T transitions; diagram omitted]


Conclusion

- Learning from a single observation sequence.
- Learning algorithms: construct a PST, transform the PST to a PSA, and the PSA to an LMC.
- SPLTL for specifying stationary behavior.
- Experimental validation on PSA-equivalent and non-PSA-equivalent systems.