A Luenberger observer is a system used to estimate the state of another system. The observer's inputs are the inputs and outputs of that system, and the observer's outputs are the estimated states of the system whose state is to be determined. The Luenberger observer is used in control by state feedback.
Typical state observer
The state of a physical discrete-time system is assumed to satisfy
$$\mathbf{x}(k+1) = A\,\mathbf{x}(k) + B\,\mathbf{u}(k)$$

$$\mathbf{y}(k) = C\,\mathbf{x}(k) + D\,\mathbf{u}(k)$$
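As a rough illustration (not part of the original text), the Python sketch below simulates a hypothetical two-state plant of this form together with a Luenberger observer in its usual form $\hat{\mathbf{x}}(k+1) = A\,\hat{\mathbf{x}}(k) + B\,\mathbf{u}(k) + L\bigl(\mathbf{y}(k) - C\,\hat{\mathbf{x}}(k) - D\,\mathbf{u}(k)\bigr)$. The matrices and the gain $L$ are arbitrary example values, chosen only so that $A - LC$ has its eigenvalues inside the unit circle.

```python
import numpy as np

# Hypothetical two-state discrete-time plant (illustrative values only).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Observer gain: any L that puts the eigenvalues of (A - L C) inside the unit
# circle makes the estimation error decay; this choice gives 0.7 +/- 0.2i.
L = np.array([[0.5],
              [0.8]])

x     = np.array([[1.0], [0.0]])   # true plant state (unknown to the observer)
x_hat = np.zeros((2, 1))           # observer estimate

for k in range(50):
    u = np.array([[np.sin(0.1 * k)]])              # arbitrary input signal
    y = C @ x + D @ u                              # measured output
    # Luenberger observer: copy of the model plus an output-error correction.
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat - D @ u)
    x = A @ x + B @ u                              # plant update

print("final estimation error:", (x - x_hat).ravel())
```

The gain $L$ plays the same role for estimation that a state-feedback gain plays for control: it sets the eigenvalues of $A - LC$, which govern how fast the error $\mathbf{x} - \hat{\mathbf{x}}$ decays.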
Sliding mode observer
$$\dot{\hat{\mathbf{x}}} = \left[\frac{\partial H(\hat{\mathbf{x}})}{\partial \mathbf{x}}\right]^{-1} M(\hat{\mathbf{x}})\,\operatorname{sgn}\bigl(V(t) - H(\hat{\mathbf{x}})\bigr)$$
where:
The $\operatorname{sgn}(\cdot)$ vector extends the scalar signum function to $n$ dimensions. That is,

$$\operatorname{sgn}(\mathbf{z}) = \begin{bmatrix} \operatorname{sgn}(z_1) \\ \operatorname{sgn}(z_2) \\ \vdots \\ \operatorname{sgn}(z_i) \\ \vdots \\ \operatorname{sgn}(z_n) \end{bmatrix}$$

for the vector $\mathbf{z} \in \mathbb{R}^n$.
The vector $H(\mathbf{x})$ has components that are the output function $h(\mathbf{x})$ and its repeated Lie derivatives. In particular,

$$H(\mathbf{x}) \triangleq \begin{bmatrix} h_1(\mathbf{x}) \\ h_2(\mathbf{x}) \\ h_3(\mathbf{x}) \\ \vdots \\ h_n(\mathbf{x}) \end{bmatrix} \triangleq \begin{bmatrix} h(\mathbf{x}) \\ L_f h(\mathbf{x}) \\ L_f^2 h(\mathbf{x}) \\ \vdots \\ L_f^{n-1} h(\mathbf{x}) \end{bmatrix}$$

where $L_f^i h$ is the $i$-th Lie derivative of the output function $h$ along the vector field $f$ (i.e., along the trajectories $\mathbf{x}$ of the nonlinear system). In the special case where the system has no input or has relative degree $n$, $H(\mathbf{x}(t))$ is a collection of the output $\mathbf{y}(t) = h(\mathbf{x}(t))$ and its first $n-1$ derivatives. Because the inverse of the Jacobian linearization of $H(\mathbf{x})$ must exist for this observer to be well defined, the transformation $H(\mathbf{x})$ is guaranteed to be a local diffeomorphism (a small worked example follows these definitions).
The diagonal matrix $M(\hat{\mathbf{x}})$ of gains is such that

$$M(\hat{\mathbf{x}}) \triangleq \operatorname{diag}\bigl(m_1(\hat{\mathbf{x}}), m_2(\hat{\mathbf{x}}), \ldots, m_n(\hat{\mathbf{x}})\bigr) = \begin{bmatrix} m_1(\hat{\mathbf{x}}) & & & & & \\ & m_2(\hat{\mathbf{x}}) & & & & \\ & & \ddots & & & \\ & & & m_i(\hat{\mathbf{x}}) & & \\ & & & & \ddots & \\ & & & & & m_n(\hat{\mathbf{x}}) \end{bmatrix}$$

where, for each $i \in \{1, 2, \ldots, n\}$, the element $m_i(\hat{\mathbf{x}}) > 0$ is sufficiently large to ensure reachability of the sliding mode.
The observer vector $V(t)$ is such that

$$V(t) \triangleq \begin{bmatrix} v_1(t) \\ v_2(t) \\ v_3(t) \\ \vdots \\ v_i(t) \\ \vdots \\ v_n(t) \end{bmatrix} \triangleq \begin{bmatrix} \mathbf{y}(t) \\ \{ m_1(\hat{\mathbf{x}})\operatorname{sgn}(v_1(t) - h_1(\hat{\mathbf{x}}(t))) \}_{\text{eq}} \\ \{ m_2(\hat{\mathbf{x}})\operatorname{sgn}(v_2(t) - h_2(\hat{\mathbf{x}}(t))) \}_{\text{eq}} \\ \vdots \\ \{ m_{i-1}(\hat{\mathbf{x}})\operatorname{sgn}(v_{i-1}(t) - h_{i-1}(\hat{\mathbf{x}}(t))) \}_{\text{eq}} \\ \vdots \\ \{ m_{n-1}(\hat{\mathbf{x}})\operatorname{sgn}(v_{n-1}(t) - h_{n-1}(\hat{\mathbf{x}}(t))) \}_{\text{eq}} \end{bmatrix}$$

where $\operatorname{sgn}(\cdot)$ here is the ordinary signum function defined for scalars, and $\{\ldots\}_{\text{eq}}$ denotes an "equivalent value operator" of a discontinuous function in sliding mode.
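As a small worked example (illustrative only; the particular system is an assumption, not taken from the text above): for a two-state system $\dot{x}_1 = x_2$, $\dot{x}_2 = f_2(x_1, x_2)$ with output $y = h(\mathbf{x}) = x_1$, the Lie derivatives give $h_1 = x_1$ and $h_2 = L_f h = x_2$, so

$$H(\mathbf{x}) = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad \frac{\partial H(\mathbf{x})}{\partial \mathbf{x}} = I,$$

and the observer above reduces to $\dot{\hat{x}}_1 = m_1(\hat{\mathbf{x}})\operatorname{sgn}(y(t) - \hat{x}_1)$ and $\dot{\hat{x}}_2 = m_2(\hat{\mathbf{x}})\operatorname{sgn}(v_2(t) - \hat{x}_2)$.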
The modified observation error can be written in the transformed states as $\mathbf{e} = H(\mathbf{x}) - H(\hat{\mathbf{x}})$. In particular,

$$\begin{aligned} \dot{\mathbf{e}} &= \frac{\mathrm{d}}{\mathrm{d}t} H(\mathbf{x}) - \frac{\mathrm{d}}{\mathrm{d}t} H(\hat{\mathbf{x}}) \\ &= \frac{\mathrm{d}}{\mathrm{d}t} H(\mathbf{x}) - M(\hat{\mathbf{x}})\,\operatorname{sgn}\bigl(V(t) - H(\hat{\mathbf{x}}(t))\bigr), \end{aligned}$$
and so

$$\begin{aligned} \begin{bmatrix} \dot{\mathbf{e}}_1 \\ \dot{\mathbf{e}}_2 \\ \vdots \\ \dot{\mathbf{e}}_i \\ \vdots \\ \dot{\mathbf{e}}_{n-1} \\ \dot{\mathbf{e}}_n \end{bmatrix} &= \overbrace{\begin{bmatrix} \dot{h}_1(\mathbf{x}) \\ \dot{h}_2(\mathbf{x}) \\ \vdots \\ \dot{h}_i(\mathbf{x}) \\ \vdots \\ \dot{h}_{n-1}(\mathbf{x}) \\ \dot{h}_n(\mathbf{x}) \end{bmatrix}}^{\tfrac{\mathrm{d}}{\mathrm{d}t} H(\mathbf{x})} - \overbrace{M(\hat{\mathbf{x}})\,\operatorname{sgn}\bigl(V(t) - H(\hat{\mathbf{x}}(t))\bigr)}^{\tfrac{\mathrm{d}}{\mathrm{d}t} H(\hat{\mathbf{x}})} = \begin{bmatrix} h_2(\mathbf{x}) \\ h_3(\mathbf{x}) \\ \vdots \\ h_{i+1}(\mathbf{x}) \\ \vdots \\ h_n(\mathbf{x}) \\ L_f^n h(\mathbf{x}) \end{bmatrix} - \begin{bmatrix} m_1\operatorname{sgn}(v_1(t) - h_1(\hat{\mathbf{x}}(t))) \\ m_2\operatorname{sgn}(v_2(t) - h_2(\hat{\mathbf{x}}(t))) \\ \vdots \\ m_i\operatorname{sgn}(v_i(t) - h_i(\hat{\mathbf{x}}(t))) \\ \vdots \\ m_{n-1}\operatorname{sgn}(v_{n-1}(t) - h_{n-1}(\hat{\mathbf{x}}(t))) \\ m_n\operatorname{sgn}(v_n(t) - h_n(\hat{\mathbf{x}}(t))) \end{bmatrix} \\ &= \begin{bmatrix} h_2(\mathbf{x}) - m_1(\hat{\mathbf{x}})\operatorname{sgn}\bigl(\overbrace{\overbrace{v_1(t)}^{v_1(t) = y(t) = h_1(\mathbf{x})} - h_1(\hat{\mathbf{x}}(t))}^{\mathbf{e}_1}\bigr) \\ h_3(\mathbf{x}) - m_2(\hat{\mathbf{x}})\operatorname{sgn}(v_2(t) - h_2(\hat{\mathbf{x}}(t))) \\ \vdots \\ h_{i+1}(\mathbf{x}) - m_i(\hat{\mathbf{x}})\operatorname{sgn}(v_i(t) - h_i(\hat{\mathbf{x}}(t))) \\ \vdots \\ h_n(\mathbf{x}) - m_{n-1}(\hat{\mathbf{x}})\operatorname{sgn}(v_{n-1}(t) - h_{n-1}(\hat{\mathbf{x}}(t))) \\ L_f^n h(\mathbf{x}) - m_n(\hat{\mathbf{x}})\operatorname{sgn}(v_n(t) - h_n(\hat{\mathbf{x}}(t))) \end{bmatrix}. \end{aligned}$$
Therefore:
As long as $m_1(\hat{\mathbf{x}}) \geq |h_2(\mathbf{x}(t))|$, the first row of the error dynamics, $\dot{\mathbf{e}}_1 = h_2(\mathbf{x}) - m_1(\hat{\mathbf{x}})\operatorname{sgn}(\mathbf{e}_1)$, will meet sufficient conditions to enter the $e_1 = 0$ sliding mode in finite time (a reaching-time estimate is sketched after these steps).
Along the $e_1 = 0$ surface, the corresponding equivalent control $v_2(t) = \{ m_1(\hat{\mathbf{x}})\operatorname{sgn}(\mathbf{e}_1) \}_{\text{eq}}$ will be equal to $h_2(\mathbf{x})$, and so $v_2(t) - h_2(\hat{\mathbf{x}}) = h_2(\mathbf{x}) - h_2(\hat{\mathbf{x}}) = \mathbf{e}_2$. Hence, so long as $m_2(\hat{\mathbf{x}}) \geq |h_3(\mathbf{x}(t))|$, the second row of the error dynamics, $\dot{\mathbf{e}}_2 = h_3(\mathbf{x}) - m_2(\hat{\mathbf{x}})\operatorname{sgn}(\mathbf{e}_2)$, will enter the $e_2 = 0$ sliding mode in finite time.
Along the $e_i = 0$ surface, the corresponding equivalent control $v_{i+1}(t) = \{\ldots\}_{\text{eq}}$ will be equal to $h_{i+1}(\mathbf{x})$. Hence, so long as $m_{i+1}(\hat{\mathbf{x}}) \geq |h_{i+2}(\mathbf{x}(t))|$, the $(i+1)$-th row of the error dynamics, $\dot{\mathbf{e}}_{i+1} = h_{i+2}(\mathbf{x}) - m_{i+1}(\hat{\mathbf{x}})\operatorname{sgn}(\mathbf{e}_{i+1})$, will enter the $e_{i+1} = 0$ sliding mode in finite time.
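To make the finite-time reaching claim in the first step concrete, here is a standard sliding-mode estimate (a sketch, not spelled out in the original text): from $\dot{\mathbf{e}}_1 = h_2(\mathbf{x}) - m_1(\hat{\mathbf{x}})\operatorname{sgn}(\mathbf{e}_1)$,

$$\frac{\mathrm{d}}{\mathrm{d}t}\,\tfrac{1}{2}\mathbf{e}_1^2 = \mathbf{e}_1 h_2(\mathbf{x}) - m_1(\hat{\mathbf{x}})\,|\mathbf{e}_1| \leq -\bigl(m_1(\hat{\mathbf{x}}) - |h_2(\mathbf{x}(t))|\bigr)\,|\mathbf{e}_1|,$$

so if $m_1(\hat{\mathbf{x}})$ exceeds $\sup_t |h_2(\mathbf{x}(t))|$ by some margin $\mu > 0$, then $|\mathbf{e}_1|$ decreases at rate at least $\mu$ and reaches zero no later than $t = |\mathbf{e}_1(0)|/\mu$. The same argument applies to each subsequent row once the previous sliding surface has been reached.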
So, for sufficiently large gains $m_i$, all of the observer's estimated states reach the actual states in finite time. In fact, increasing $m_i$ allows for convergence in any desired finite time so long as each $|h_i(\mathbf{x}(0))|$ function can be bounded with certainty. Hence, the requirement that the map $H : \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a diffeomorphism (i.e., that its Jacobian linearization is invertible) asserts that convergence of the estimated output implies convergence of the estimated state. That is, the requirement is an observability condition.
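For concreteness, the following Python sketch simulates the two-state worked example from above with an assumed plant $\dot{x}_1 = x_2$, $\dot{x}_2 = -\sin(x_1)$ and output $y = x_1$ (none of these choices come from the text). The equivalent-value operator $\{\cdot\}_{\text{eq}}$ is approximated by a first-order low-pass filter of the switching term, a common practical substitute rather than part of the definition, and the gains and filter constant are illustrative guesses.

```python
import numpy as np

# Assumed plant (illustration only): x1' = x2, x2' = -sin(x1), measured output y = x1.
def f(x):
    return np.array([x[1], -np.sin(x[0])])

dt  = 1e-4      # Euler integration step
m1  = 5.0       # gain of the first channel, larger than max |x2| on this trajectory
m2  = 5.0       # gain of the second channel, larger than max |dx2/dt|
tau = 5e-3      # time constant of the low-pass filter approximating {.}_eq

x     = np.array([1.0, 0.5])   # true state (unknown to the observer)
x_hat = np.zeros(2)            # observer estimate, deliberately started far away
v2    = 0.0                    # filtered switching term ~ {m1 sgn(y - h1(x_hat))}_eq

for _ in range(int(5.0 / dt)):
    y  = x[0]                            # measurement, v1(t) = y(t)
    s1 = m1 * np.sign(y - x_hat[0])      # switching term of the first channel
    v2 += (dt / tau) * (s1 - v2)         # low-pass filter -> equivalent control v2(t)
    # Here H(x) = (x1, x2) and dH/dx = I, so the observer is just M sgn(V - H(x_hat)).
    x_hat += dt * np.array([s1, m2 * np.sign(v2 - x_hat[1])])
    x     += dt * f(x)                   # Euler step of the plant

print("estimation error:", x - x_hat)
```

With these (assumed) values the first component locks onto $x_1$ almost immediately, the second follows once the filter settles, and the residual error is on the order of the chattering amplitude set by the gains, the step size, and the filter lag.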
In the case of a sliding mode observer for a system with an input, additional conditions are needed for the observation error to be independent of the input: for example, that

$$\frac{\partial H(\mathbf{x})}{\partial \mathbf{x}} B(\mathbf{x})$$

does not depend on time. The observer is then

$$\dot{\hat{\mathbf{x}}} = \left[\frac{\partial H(\hat{\mathbf{x}})}{\partial \mathbf{x}}\right]^{-1} M(\hat{\mathbf{x}})\,\operatorname{sgn}\bigl(V(t) - H(\hat{\mathbf{x}})\bigr) + B(\hat{\mathbf{x}})u.$$
See also