
Published in IET Control Theory and Applications

Received on 11th June 2013

Revised on 25th November 2013

Accepted on 29th December 2013

doi: 10.1049/iet-cta.2013.0550

ISSN 1751-8644

Almost sure stability of discrete-time Markov jump linear systems

Yang Song1,2, Hao Dong1, Taicheng Yang3, Minrui Fei1,2

1 Department of Automation, Shanghai University, Shanghai 200072, People's Republic of China
2 Shanghai Key Laboratory of Power Station Automation Technology, Shanghai 200072, People's Republic of China
3 Department of Engineering and Design, University of Sussex, Brighton, BN1 9QT, UK

E-mail: y_song@shu.edu.cn

Abstract: This study deals with transient analysis and almost sure stability of discrete-time Markov jump linear systems (MJLS). First, the expectations of the sojourn time and activation number of any mode, and of the switching number between any two modes, of the discrete-time MJLS are presented. Then, an analysis of the transient behaviour is given. Finally, a new deterministically testable condition for exponential almost sure stability is proposed.

1 Introduction

Markov jump linear systems (MJLS) are composed of a set of linear subsystems (also called modes) and a switching sequence governed by a Markov stochastic process. MJLS are extensively used to model physical systems subject to abrupt changes or failures, for example, fault-tolerant systems [1], aerospace systems [2], networked control systems [3, 4] and so on. The stability of such stochastic systems is of fundamental importance. Several definitions of stability have been proposed, such as δ-moment stability, mean-square (MS) stability and almost sure (AS) stability [5–9], and their conservativeness differs considerably. δ-moment stability requires that the expectation of the δth moment of the state norm, E[‖x(t)‖^δ], converge to zero asymptotically; when δ = 2, it reduces to the special case of MS stability. AS stability, in contrast, requires that the state trajectory converge to zero with probability one. From the application point of view, convergence of the state trajectory with probability one is more relevant than the moment behaviour [10]. For MJLS, both MS and δ-moment stability imply AS stability, but not vice versa [5, 6, 11]. Most results for MJLS assume that the number of modes is finite. Stability of MJLS with infinitely many modes is studied in [12], and a unified approach to AS stability analysis that covers the infinite-mode case is proposed in [13]. In recent years, researchers have investigated more complex settings, for example, transition probabilities that are partially known [14] or piecewise constant [15], and switching governed jointly by a Markovian process and a deterministic dwell-time restriction [16, 17].
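The gap between moment stability and AS stability can be seen in a minimal numerical sketch (not taken from the paper, and not the paper's example): a scalar jump linear system whose switching is i.i.d. (a Markov chain with identical rows), with illustrative mode gains 0.3 and 2.0 chosen with equal probability. Almost every sample path decays, yet the second moment grows, so the system is AS stable but not MS stable.

```python
# Minimal sketch (illustrative, not from the paper): AS stable but not MS stable.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.3, 2.0])      # the two scalar "modes" (illustrative values)
p = np.array([0.5, 0.5])      # probability of each mode at every step (i.i.d. switching)

K, M = 200, 10_000            # horizon and number of sample paths
modes = rng.choice(2, size=(M, K), p=p)
log_xK = np.sum(np.log(a[modes]), axis=1)    # log|x_K| with x_0 = 1

# Sample-path estimate of the top Lyapunov exponent: negative => AS stable.
lyap_est = np.mean(log_xK) / K
print(f"estimated Lyapunov exponent : {lyap_est:+.3f} (exact: {p @ np.log(a):+.3f})")
print(f"paths with |x_K| < 1e-3     : {np.mean(log_xK < np.log(1e-3)):.3f}")

# Second moment: for i.i.d. switching E[x_{k+1}^2] = E[a^2] * E[x_k^2],
# so E[x_k^2] grows whenever E[a^2] > 1 => not MS stable.
print(f"second-moment growth factor : {p @ a**2:.3f} (> 1 => MS unstable)")
```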

The testability of these stability notions depends on their definitions. Results on MS stability are generally given in the form of coupled Lyapunov equations [2, 6, 18, 19], which can be solved effectively by linear matrix inequality (LMI) techniques. For AS stability, however, a general numerical approach is difficult to obtain. A necessary and sufficient condition for AS stability is that the top Lyapunov exponent, defined over an infinite time horizon, be negative [20]; this condition is hard to test by an effective algorithm. A sufficient condition for AS stability, based on the average norm contractivity of the state transition matrix over a finite, yet unknown, time interval, has been proposed for stochastic linear systems [10, 21] and for discrete-time MJLS [22]. The conditions in [10, 21, 22] are less restrictive than the top Lyapunov exponent method, but they are non-deterministic, and a solving technique based on the Monte Carlo algorithm has been developed to check them [10, 13, 21]. Recently, new deterministic sufficient conditions for the AS stability of continuous-time MJLS have been proposed [7, 17, 23]; these conditions are based on the statistics of the switching actions and the total sojourn time of each mode. For discrete-time MJLS, to the best of the authors' knowledge, no equivalent result has been reported, and this paper aims at bridging this gap. The main contributions of this paper are: (i) the expectations of the sojourn time and activation number of any mode, and of the switching number between any two modes, of the discrete-time MJLS are given for the first time; (ii) a transient analysis of the discrete-time MJLS is presented; and (iii) a new approach to test the AS stability of the discrete-time MJLS is obtained.
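As a rough sketch of the kind of sampling-based check cited above (in the spirit of [10, 21, 22], but not the exact criterion given there), one can estimate by Monte Carlo the average log-norm of the m-step state transition matrix under the Markov switching; a negative estimate over some finite horizon m suggests average contraction. The matrices A1, A2, the transition matrix P, the initial distribution F and the horizon m below are placeholders.

```python
# Rough Monte Carlo sketch of an average norm-contractivity check (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
A = [np.array([[0.9, 0.3], [0.0, 0.7]]),     # mode 1 (placeholder data)
     np.array([[1.1, 0.0], [0.4, 0.5]])]     # mode 2 (placeholder data)
P = np.array([[0.8, 0.2],                    # transition probability matrix
              [0.3, 0.7]])
F = np.array([0.5, 0.5])                     # initial mode distribution

def sample_log_norm(m):
    """log of the spectral norm of one random m-step state transition matrix."""
    i = rng.choice(2, p=F)                   # sigma(0) drawn from F
    Phi = np.eye(2)
    for _ in range(m):
        Phi = A[i] @ Phi                     # Phi(k+1, 0) = A_{sigma(k)} Phi(k, 0)
        i = rng.choice(2, p=P[i])            # next mode from row sigma(k) of P
    return np.log(np.linalg.norm(Phi, 2))

m, M = 20, 5000
avg = np.mean([sample_log_norm(m) for _ in range(M)])
print(f"E[log ||Phi(m,0)||] ~ {avg:.3f} (negative suggests contraction over m steps)")
```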

The paper is structured as follows. Section 2 presents some preliminaries and definitions. In Section 3, the expectations of the switching number between any two modes, and of the total sojourn time and activation number of each mode, are provided. In Section 4, a new transient analysis result and an AS stability condition for the discrete-time MJLS are presented. Section 5 gives two examples and Section 6 concludes the paper.

2 Preliminaries and definitions

Consider a discrete-time MJLS

x_{k+1} = A_{σ(k)} x_k,   k ∈ Z   (1)

where Z is the set of non-negative integers, x_k ∈ R^n is the state, the switching sequence {σ(k), k ∈ Z} is a Markov chain taking values in the finite set {1, 2, ..., N}, and N is the number of modes. The transition probability of the Markov chain {σ(k)} is given by p_{ij} = Pr{σ(k + 1) = j | σ(k) = i} and the transition probability matrix is denoted by P. The matrix F = [f_1 f_2 ··· f_N] is the initial probability distribution of this Markov chain, where f_i = Pr{σ(0) = i}.
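A minimal simulation sketch of system (1) is given below. The matrices A1, A2, the transition matrix P, the initial distribution F and the horizon K are illustrative placeholders, not data from the paper.

```python
# Minimal simulation sketch of system (1): x_{k+1} = A_{sigma(k)} x_k (illustrative data).
import numpy as np

rng = np.random.default_rng(2)
A = {1: np.array([[0.6, 0.2], [0.0, 0.8]]),   # mode 1 (placeholder)
     2: np.array([[1.05, 0.0], [0.3, 0.4]])}  # mode 2 (placeholder)
P = np.array([[0.9, 0.1],                     # p_ij = Pr{sigma(k+1) = j | sigma(k) = i}
              [0.2, 0.8]])
F = np.array([0.7, 0.3])                      # f_i = Pr{sigma(0) = i}

K = 50
x = np.array([1.0, -1.0])                     # initial state x_0
sigma = 1 + rng.choice(2, p=F)                # initial mode drawn from F
for k in range(K):
    x = A[sigma] @ x                          # state update x_{k+1} = A_{sigma(k)} x_k
    sigma = 1 + rng.choice(2, p=P[sigma - 1]) # next mode from row sigma(k) of P
print(f"||x_K|| after K = {K} steps: {np.linalg.norm(x):.3e}")
```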