The Luenberger observer is a system used to estimate the state of another system. The observer's inputs are the inputs and outputs of that system, and the observer's outputs are the estimated states of the system whose state is to be determined. The Luenberger observer is used in control by state feedback.
The typical state observer
The state of a physical discrete-time system is assumed to satisfy
$$\mathbf{x}(k+1) = A\mathbf{x}(k) + B\mathbf{u}(k)$$
$$\mathbf{y}(k) = C\mathbf{x}(k) + D\mathbf{u}(k)$$
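For these discrete-time equations, a Luenberger observer runs a copy of the plant corrected by the output estimation error, $\hat{\mathbf{x}}(k+1) = A\hat{\mathbf{x}}(k) + B\mathbf{u}(k) + L(\mathbf{y}(k) - C\hat{\mathbf{x}}(k))$. A minimal numerical sketch follows; the matrices $A$, $B$, $C$ and the gain $L$ are assumptions of this example (chosen so that $A - LC$ is stable), with $D = 0$:

```python
import numpy as np

# Illustrative discrete-time plant; A, B, C and the observer gain L are
# assumptions of this sketch (L chosen so A - L C has stable eigenvalues).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.5]])

x = np.array([[1.0], [-1.0]])   # true state (unknown to the observer)
x_hat = np.zeros((2, 1))        # observer's estimate

for k in range(200):
    u = np.array([[np.sin(0.1 * k)]])
    y = C @ x                   # measured output y(k) = C x(k), D = 0
    # copy of the plant plus correction by the output estimation error
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)
    x = A @ x + B @ u

print(np.linalg.norm(x - x_hat))   # error contracts as (A - L C)^k e(0)
```

Because the error obeys $\mathbf{e}(k+1) = (A - LC)\mathbf{e}(k)$, the estimate converges to the true state regardless of the input signal.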
Sliding mode observer
The sliding mode observer updates the state estimate $\hat{\mathbf{x}}$ according to
$$\dot{\hat{\mathbf{x}}} = \left[\frac{\partial H(\hat{\mathbf{x}})}{\partial \mathbf{x}}\right]^{-1} M(\hat{\mathbf{x}})\, \operatorname{sgn}(V(t) - H(\hat{\mathbf{x}}))$$
where:
The $\operatorname{sgn}(\cdot)$ vector extends the scalar signum function to $n$ dimensions. That is,
$$\operatorname{sgn}(\mathbf{z}) = \begin{bmatrix} \operatorname{sgn}(z_1) \\ \operatorname{sgn}(z_2) \\ \vdots \\ \operatorname{sgn}(z_i) \\ \vdots \\ \operatorname{sgn}(z_n) \end{bmatrix}$$
for the vector $\mathbf{z} \in \mathbb{R}^n$.
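In code this componentwise extension is just an elementwise sign, e.g. with NumPy (note that `numpy.sign` returns 0 at 0):

```python
import numpy as np

z = np.array([-2.5, 0.0, 3.1])
s = np.sign(z)        # applies the scalar signum to each component
print(s)              # [-1.  0.  1.]
```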
The vector $H(\mathbf{x})$ has components that are the output function $h(\mathbf{x})$ and its repeated Lie derivatives. In particular,
$$H(\mathbf{x}) \triangleq \begin{bmatrix} h_1(\mathbf{x}) \\ h_2(\mathbf{x}) \\ h_3(\mathbf{x}) \\ \vdots \\ h_n(\mathbf{x}) \end{bmatrix} \triangleq \begin{bmatrix} h(\mathbf{x}) \\ L_f h(\mathbf{x}) \\ L_f^2 h(\mathbf{x}) \\ \vdots \\ L_f^{n-1} h(\mathbf{x}) \end{bmatrix}$$
where $L_f^i h$ is the $i$th Lie derivative of the output function $h$ along the vector field $f$ (i.e., along the trajectories of the nonlinear system). In the special case where the system has no input or has a relative degree of $n$, $H(\mathbf{x}(t))$ is a collection of the output $\mathbf{y}(t) = h(\mathbf{x}(t))$ and its $n-1$ derivatives. Because the inverse of the Jacobian linearization of $H(\mathbf{x})$ must exist for this observer to be well defined, the transformation $H(\mathbf{x})$ is guaranteed to be a local diffeomorphism.
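As an illustration, $H(\mathbf{x})$ and its Jacobian can be built symbolically. The pendulum-like vector field and the output below are assumptions of this sketch, not taken from the text:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1)])     # assumed drift: x1' = x2, x2' = -sin(x1)
h = x1                               # assumed measured output h(x) = x1

def lie_derivative(expr, f_vec, x_vec):
    # L_f expr = (d expr / d x) . f
    return (sp.Matrix([expr]).jacobian(x_vec) * f_vec)[0]

H = sp.Matrix([h, lie_derivative(h, f, x)])   # [h, L_f h]
J = H.jacobian(x)                             # Jacobian linearization of H
print(H.T)   # Matrix([[x1, x2]])
print(J)     # identity matrix: invertible everywhere, so H is a diffeomorphism here
```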
The diagonal matrix $M(\hat{\mathbf{x}})$ of gains is such that
$$M(\hat{\mathbf{x}}) \triangleq \operatorname{diag}(m_1(\hat{\mathbf{x}}), m_2(\hat{\mathbf{x}}), \ldots, m_n(\hat{\mathbf{x}})) = \begin{bmatrix} m_1(\hat{\mathbf{x}}) & & & & & \\ & m_2(\hat{\mathbf{x}}) & & & & \\ & & \ddots & & & \\ & & & m_i(\hat{\mathbf{x}}) & & \\ & & & & \ddots & \\ & & & & & m_n(\hat{\mathbf{x}}) \end{bmatrix}$$
where, for each $i \in \{1, 2, \dots, n\}$, the element $m_i(\hat{\mathbf{x}}) > 0$ is chosen suitably large to ensure reachability of the sliding mode.
The observer vector $V(t)$ is such that
$$V(t) \triangleq \begin{bmatrix} v_1(t) \\ v_2(t) \\ v_3(t) \\ \vdots \\ v_i(t) \\ \vdots \\ v_n(t) \end{bmatrix} \triangleq \begin{bmatrix} \mathbf{y}(t) \\ \{ m_1(\hat{\mathbf{x}}) \operatorname{sgn}(v_1(t) - h_1(\hat{\mathbf{x}}(t))) \}_{\text{eq}} \\ \{ m_2(\hat{\mathbf{x}}) \operatorname{sgn}(v_2(t) - h_2(\hat{\mathbf{x}}(t))) \}_{\text{eq}} \\ \vdots \\ \{ m_{i-1}(\hat{\mathbf{x}}) \operatorname{sgn}(v_{i-1}(t) - h_{i-1}(\hat{\mathbf{x}}(t))) \}_{\text{eq}} \\ \vdots \\ \{ m_{n-1}(\hat{\mathbf{x}}) \operatorname{sgn}(v_{n-1}(t) - h_{n-1}(\hat{\mathbf{x}}(t))) \}_{\text{eq}} \end{bmatrix}$$
where $\operatorname{sgn}(\cdot)$ here is the ordinary signum function defined for scalars, and $\{\ldots\}_{\text{eq}}$ denotes an "equivalent value operator" of a discontinuous function in sliding mode.
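The equivalent value of a switching signal can be recovered approximately by low-pass filtering it. A toy sketch (the scalar system $\dot{x} = a - \operatorname{sgn}(x)$ with $|a| < 1$, which slides on $x = 0$ where $\{\operatorname{sgn}(x)\}_{\text{eq}} = a$, and the filter constant are assumptions of this example):

```python
import numpy as np

# x' = a - sgn(x), |a| < 1, slides on x = 0, where {sgn(x)}_eq = a.
dt, tau, a = 1e-4, 1e-2, 0.5
x, v = 0.2, 0.0                 # state and filtered (equivalent) value
for _ in range(int(2.0 / dt)):
    s = np.sign(x)              # discontinuous switching signal
    v += dt / tau * (s - v)     # low-pass filter ~ equivalent value operator
    x += dt * (a - s)
print(v)                        # close to a = 0.5 once sliding is reached
```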
The modified observation error can be written in the transformed states as $\mathbf{e} = H(\mathbf{x}) - H(\hat{\mathbf{x}})$. In particular,
$$\begin{aligned} \dot{\mathbf{e}} &= \frac{\operatorname{d}}{\operatorname{d}t}H(\mathbf{x}) - \frac{\operatorname{d}}{\operatorname{d}t}H(\hat{\mathbf{x}}) \\ &= \frac{\operatorname{d}}{\operatorname{d}t}H(\mathbf{x}) - M(\hat{\mathbf{x}})\,\operatorname{sgn}(V(t) - H(\hat{\mathbf{x}}(t))), \end{aligned}$$
and so
$$\begin{aligned}
\begin{bmatrix} \dot{\mathbf{e}}_1 \\ \dot{\mathbf{e}}_2 \\ \vdots \\ \dot{\mathbf{e}}_i \\ \vdots \\ \dot{\mathbf{e}}_{n-1} \\ \dot{\mathbf{e}}_n \end{bmatrix}
&= \overbrace{\begin{bmatrix} \dot{h}_1(\mathbf{x}) \\ \dot{h}_2(\mathbf{x}) \\ \vdots \\ \dot{h}_i(\mathbf{x}) \\ \vdots \\ \dot{h}_{n-1}(\mathbf{x}) \\ \dot{h}_n(\mathbf{x}) \end{bmatrix}}^{\tfrac{\operatorname{d}}{\operatorname{d}t}H(\mathbf{x})}
- \overbrace{M(\hat{\mathbf{x}})\,\operatorname{sgn}(V(t) - H(\hat{\mathbf{x}}(t)))}^{\tfrac{\operatorname{d}}{\operatorname{d}t}H(\hat{\mathbf{x}})}
= \begin{bmatrix} h_2(\mathbf{x}) \\ h_3(\mathbf{x}) \\ \vdots \\ h_{i+1}(\mathbf{x}) \\ \vdots \\ h_n(\mathbf{x}) \\ L_f^n h(\mathbf{x}) \end{bmatrix}
- \begin{bmatrix} m_1 \operatorname{sgn}(v_1(t) - h_1(\hat{\mathbf{x}}(t))) \\ m_2 \operatorname{sgn}(v_2(t) - h_2(\hat{\mathbf{x}}(t))) \\ \vdots \\ m_i \operatorname{sgn}(v_i(t) - h_i(\hat{\mathbf{x}}(t))) \\ \vdots \\ m_{n-1} \operatorname{sgn}(v_{n-1}(t) - h_{n-1}(\hat{\mathbf{x}}(t))) \\ m_n \operatorname{sgn}(v_n(t) - h_n(\hat{\mathbf{x}}(t))) \end{bmatrix} \\
&= \begin{bmatrix} h_2(\mathbf{x}) - m_1(\hat{\mathbf{x}}) \operatorname{sgn}(\overbrace{\overbrace{v_1(t)}^{v_1(t) = y(t) = h_1(\mathbf{x})} - h_1(\hat{\mathbf{x}}(t))}^{\mathbf{e}_1}) \\ h_3(\mathbf{x}) - m_2(\hat{\mathbf{x}}) \operatorname{sgn}(v_2(t) - h_2(\hat{\mathbf{x}}(t))) \\ \vdots \\ h_{i+1}(\mathbf{x}) - m_i(\hat{\mathbf{x}}) \operatorname{sgn}(v_i(t) - h_i(\hat{\mathbf{x}}(t))) \\ \vdots \\ h_n(\mathbf{x}) - m_{n-1}(\hat{\mathbf{x}}) \operatorname{sgn}(v_{n-1}(t) - h_{n-1}(\hat{\mathbf{x}}(t))) \\ L_f^n h(\mathbf{x}) - m_n(\hat{\mathbf{x}}) \operatorname{sgn}(v_n(t) - h_n(\hat{\mathbf{x}}(t))) \end{bmatrix}.
\end{aligned}$$
Therefore:
As long as $m_1(\hat{\mathbf{x}}) \geq |h_2(\mathbf{x}(t))|$, the first row of the error dynamics, $\dot{\mathbf{e}}_1 = h_2(\mathbf{x}) - m_1(\hat{\mathbf{x}}) \operatorname{sgn}(\mathbf{e}_1)$, will meet sufficient conditions to enter the $e_1 = 0$ sliding mode in finite time.
Along the $e_1 = 0$ surface, the corresponding equivalent control $v_2(t) = \{ m_1(\hat{\mathbf{x}}) \operatorname{sgn}(\mathbf{e}_1) \}_{\text{eq}}$ will be equal to $h_2(\mathbf{x})$, and so $v_2(t) - h_2(\hat{\mathbf{x}}) = h_2(\mathbf{x}) - h_2(\hat{\mathbf{x}}) = \mathbf{e}_2$.
Hence, so long as $m_2(\hat{\mathbf{x}}) \geq |h_3(\mathbf{x}(t))|$, the second row of the error dynamics, $\dot{\mathbf{e}}_2 = h_3(\mathbf{x}) - m_2(\hat{\mathbf{x}}) \operatorname{sgn}(\mathbf{e}_2)$, will enter the $e_2 = 0$ sliding mode in finite time.
Along the $e_i = 0$ surface, the corresponding equivalent control $v_{i+1}(t) = \{\ldots\}_{\text{eq}}$ will be equal to $h_{i+1}(\mathbf{x})$.
Hence, so long as $m_{i+1}(\hat{\mathbf{x}}) \geq |h_{i+2}(\mathbf{x}(t))|$, the $(i+1)$th row of the error dynamics, $\dot{\mathbf{e}}_{i+1} = h_{i+2}(\mathbf{x}) - m_{i+1}(\hat{\mathbf{x}}) \operatorname{sgn}(\mathbf{e}_{i+1})$, will enter the $e_{i+1} = 0$ sliding mode in finite time.
So, for sufficiently large $m_i$ gains, all observer estimated states reach the actual states in finite time. In fact, increasing $m_i$ allows for convergence in any desired finite time so long as each $|h_i(\mathbf{x}(0))|$ function can be bounded with certainty. Hence, the requirement that the map $H : \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a diffeomorphism (i.e., that its Jacobian linearization is invertible) asserts that convergence of the estimated output implies convergence of the estimated state. That is, the requirement is an observability condition.
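A rough numerical sketch of this scheme for $n = 2$ follows; the harmonic-oscillator plant, the gains $m_1, m_2$, and the low-pass filter used to approximate the equivalent value operator $\{\cdot\}_{\text{eq}}$ are all assumptions of this example. Here $h_1 = x_1$ and $h_2 = L_f h = x_2$, so $H(\mathbf{x}) = \mathbf{x}$ and the Jacobian factor is the identity.

```python
import numpy as np

# Assumed plant: harmonic oscillator x1' = x2, x2' = -x1, output y = x1.
# Gains m1, m2 dominate |h2| = |x2| <= 1 and |h3| = |x1| <= 1 on this orbit.
dt, T = 1e-4, 5.0
m1, m2 = 2.0, 2.0
tau = 5e-3                      # filter constant approximating {.}_eq

x = np.array([1.0, 0.0])        # true state (unknown to the observer)
xh = np.array([0.5, -0.5])      # observer estimate
v2 = 0.0                        # filtered switching signal ~ v2(t)

for _ in range(int(T / dt)):
    y = x[0]                                   # v1(t) = y(t)
    s1 = m1 * np.sign(y - xh[0])               # m1 sgn(v1 - h1(x_hat))
    v2 += dt / tau * (s1 - v2)                 # {m1 sgn(e1)}_eq ~ h2(x) = x2
    xh = xh + dt * np.array([s1, m2 * np.sign(v2 - xh[1])])
    x = x + dt * np.array([x[1], -x[0]])       # Euler step of the plant

print(abs(x[0] - xh[0]), abs(x[1] - xh[1]))    # both errors small after sliding
```

Once $e_1$ reaches its sliding surface, the filtered signal $v_2$ tracks $x_2$, and the second estimate then converges as the argument above describes; the residual error reflects chattering and filter lag.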
In the case of the sliding mode observer for a system with an input, additional conditions are needed for the observation error to be independent of the input, for example, that $\frac{\partial H(\mathbf{x})}{\partial \mathbf{x}} B(\mathbf{x})$ does not depend on time. The observer is then
$$\dot{\hat{\mathbf{x}}} = \left[\frac{\partial H(\hat{\mathbf{x}})}{\partial \mathbf{x}}\right]^{-1} M(\hat{\mathbf{x}})\, \operatorname{sgn}(V(t) - H(\hat{\mathbf{x}})) + B(\hat{\mathbf{x}}) u.$$
See also