Chapter 1  Basics of Matrix Calculation
1.1 Fundamental Concepts
1.2 The Most Basic Matrix Decomposition
1.3 Singular Value Decomposition (SVD)
1.4 The Quadratic Form
1.1 Fundamental Concepts
The purpose of this chapter is to review important fundamental concepts in linear algebra, as a foundation for the rest of the course. We first discuss the fundamental building blocks, such as an overview of matrix multiplication from a “big block” perspective, linear independence, subspaces and related ideas, rank, etc., upon which the rigor of linear algebra rests. We then discuss vector norms, and various interpretations of the matrix multiplication operation.
1.1.1 Notation
Throughout this course, we shall indicate that a matrix A is of dimension m×n, and whose elements are taken from the set of real numbers, by the notation A ∈ R^(m×n). This means that the matrix A belongs to the Cartesian product of the real numbers, taken m×n times, one for each element of A. In a similar way, the notation A ∈ C^(m×n) means the matrix is of dimension m×n, and the elements are taken from the set of complex numbers. By the matrix dimension m×n, we mean A consists of m rows and n columns.
Similarly, the notation a ∈ R^m (C^m) implies a vector of dimension m whose elements are taken from the set of real (complex) numbers. By “dimension of a vector”, we mean its length, i.e., that it consists of m elements.
Also, we shall indicate that a scalar a is from the set of real (complex) numbers by the notation a ∈ R (C). Thus, an upper-case bold character denotes a matrix, a lower-case bold character denotes a vector, and a lower-case non-bold character denotes a scalar.
By convention, a vector by default is taken to be a column vector. Further, for a matrix A, we denote its i-th column as ai. We also imply that its j-th row is aj^T, even though this notation may be ambiguous, since it may also be taken to mean the transpose of the j-th column. The context of the discussion will help to resolve the ambiguity.
1.1.2 “Bigger-Block” Interpretations of Matrix Multiplication
Let us define the matrix product C as
C = AB,  where A ∈ R^(m×k), B ∈ R^(k×n), C ∈ R^(m×n).
1.1.2.1 Inner-Product Representation
If a and b are column vectors of the same length, then the scalar quantity a^T b is referred to as the inner product of a and b. If we define ai^T ∈ R^k as the i-th row of A and bj ∈ R^k as the j-th column of B, then the element cij of C is defined as the inner product ai^T bj. This is the conventional small-block representation of matrix multiplication.
1.1.2.2 Column Representation
This is the next bigger-block view of matrix multiplication. Here we look at forming the product one column at a time. The j-th column cj of C may be expressed as a linear combination of the columns ai of A, with coefficients which are the elements of the j-th column of B. Thus,
cj = Abj = b1j a1 + b2j a2 + … + bkj ak.  (1.1.2)
1.1.2.3 Outer-Product Representation
This is the largest-block representation. Let us define a column vector a ∈ R^m and a row vector b^T ∈ R^n. Then the outer product of a and b is an m×n matrix of rank one, defined as ab^T. Now let ai and bi^T be the i-th column and row of A and B respectively. Then the product C may also be expressed as
C = a1 b1^T + a2 b2^T + … + ak bk^T,
a sum of k rank-one matrices.
1.1.2.4 Matrix Pre- and Post-Multiplication
Let us now look at some fundamental ideas distinguishing matrix pre- and post-multiplication. In this respect, consider a matrix A pre-multiplied by B to give Y = BA. (All matrices are assumed to have conformable dimensions.) Then we can interpret this multiplication as B operating on the columns of A to give the columns of the product. This follows because each column yi of the product is a transformed version of the corresponding column of A; i.e., yi = Bai, i = 1, 2, …, n. Likewise, consider A post-multiplied by a matrix C to give X = AC. Then we interpret this multiplication as C operating on the rows of A, because each row xj^T of the product is a transformed version of the corresponding row of A; i.e., xj^T = aj^T C, j = 1, 2, …, m, where we define aj^T as the j-th row of A.
Example 1:
Consider an orthonormal matrix Q of appropriate dimension. We know that multiplication by an orthonormal matrix results in a rotation operation. The operation QA rotates each column of A. The operation AQ rotates each row.
There is another way to interpret pre-multiplication and post-multiplication. Again consider the matrix A pre-multiplied by B to give Y = BA. Then, according to equation (1.1.2), the j-th column yj of Y is a linear combination of the columns of B, whose coefficients are the j-th column of A. Likewise, for X = AB, we can say that the i-th row xi^T of X is a linear combination of the rows of B, whose coefficients are the i-th row of A.
Either of these interpretations is equally valid. Being comfortable with the representations of this section is a big step in mastering the field of linear algebra.
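The equivalence of the inner-product, column, and outer-product views can be checked numerically. The following is a minimal sketch in numpy (the library choice and the random test matrices are illustrative assumptions, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # A in R^(3x4)
B = rng.standard_normal((4, 2))   # B in R^(4x2)

# Inner-product view: c_ij = a_i^T b_j (i-th row of A, j-th column of B).
C_inner = np.array([[A[i, :] @ B[:, j] for j in range(B.shape[1])]
                    for i in range(A.shape[0])])

# Column view: c_j = A b_j, a linear combination of the columns of A.
C_col = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

# Outer-product view: C = sum_i a_i b_i^T, a sum of rank-one matrices.
C_outer = sum(np.outer(A[:, i], B[i, :]) for i in range(A.shape[1]))

assert np.allclose(C_inner, A @ B)
assert np.allclose(C_col, A @ B)
assert np.allclose(C_outer, A @ B)
```

All three constructions agree with the built-in product `A @ B`, as the section claims.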
1.1.3 Fundamental Linear Algebra
1.1.3.1 Linear Independence
Suppose we have a set of n m-dimensional vectors {a1, a2, …, an}, where ai ∈ R^m, i = 1, 2, …, n. This set is linearly independent under the condition
c1 a1 + c2 a2 + … + cn an = 0 if and only if c1 = c2 = … = cn = 0.  (1.1.4)
A set of n vectors is linearly independent if an n-dimensional space may be formed by taking all possible linear combinations of the vectors. If the dimension of the space is less than n, then the vectors are linearly dependent. The concepts of a vector space and the dimension of a vector space are made more precise later.
Note that a set of vectors {a1, a2, …, an}, where n > m, cannot be linearly independent.
Example 2:
This set is linearly independent. On the other hand, this set is not.
This follows because the third column is a linear combination of the first two: -1 times the first column plus -1 times the second column equals the third column. Thus, the coefficients cj in equation (1.1.4) resulting in zero are any scalar multiple of this combination.
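Linear independence of a set of columns can be tested numerically by comparing the rank of the matrix they form with the number of columns. A small numpy sketch (the specific matrices below are hypothetical stand-ins for the example's missing vectors):

```python
import numpy as np

# Columns of A1: linearly independent (rank equals number of columns).
A1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])

# Columns of A2: dependent -- col3 = -1*col1 + -1*col2.
A2 = np.array([[1.0, 0.0, -1.0],
               [0.0, 1.0, -1.0],
               [1.0, 1.0, -2.0]])

assert np.linalg.matrix_rank(A1) == A1.shape[1]   # independent
assert np.linalg.matrix_rank(A2) < A2.shape[1]    # dependent
```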
1.1.3.2 Span, Range and Subspaces
Span
The span of a vector set [a1, a2, …, an], written as span[a1, a2, …, an], where ai ∈ R^m, is the set of points mapped by
y = c1 a1 + c2 a2 + … + cn an,  ci ∈ R.
The set of vectors in a span is referred to as a vector space. The dimension of a vector space is the number of linearly independent vectors in the linear combination which forms the space. Note that the vector space dimension is not the dimension (length) of the vectors forming the linear combinations.
Example 3:
Consider the two vectors shown in Fig. 1.1.
Fig. 1.1 The span of these vectors is the (infinite extension of the) plane of the paper
Subspaces
Given a set (space) of vectors [a1, a2, …, an] ∈ R^m, m ≥ n, a subspace S is a vector subset that satisfies two requirements:
1. If x and y are in the subspace, then x + y is still in the subspace.
2. If we multiply any vector x in the subspace by a scalar c, then cx is still in the subspace.
Range
The range of a matrix A ∈ R^(m×n), denoted R(A), is a subspace (set of vectors) satisfying
R(A) = {y ∈ R^m : y = Ax, for x ∈ R^n}.
Example 4:
R(A) is the set of all linear combinations of any two columns of A. In the case when n < m (i.e., A is a tall matrix), it is important to note that R(A) is indeed a subspace of the m-dimensional “universe” R^m. In this case, the dimension of R(A) is less than or equal to n. Thus, R(A) does not span the whole universe, and therefore is a subspace of it.
1.1.3.3 Maximally Independent Set
This is a vector set which cannot be made larger without losing independence, and smaller without remaining maximal; i.e., it is a set containing the maximum number of independent vectors spanning the space.
1.1.3.4 A Basis
A basis for a subspace is any maximally independent set within the subspace. It is not unique.
Example 5:
A basis for the subspace S spanning the first 2 columns of
is
1.1.3.5 Orthogonal Complement Subspace
If we have a subspace S of dimension n consisting of vectors [a1, a2, …, an], ai ∈ R^m, i = 1, 2, …, n, for n ≤ m, the orthogonal complement subspace S⊥ of S, of dimension m−n, is defined as
S⊥ = {y ∈ R^m : y^T x = 0 for all x ∈ S},
i.e., any vector in S⊥ is orthogonal to any vector in S. The quantity S⊥ is pronounced “S-perp”.
Example 6:
Take the vector set defining S from Example 5; then, a basis for S⊥ is
1.1.3.6 Rank
Rank is an important concept which we will use frequently throughout this course. We briefly describe only a few basic features of rank here. The idea is expanded more fully in the following sections.
1. The rank of a matrix is the maximum number of linearly independent rows or columns. Thus, it is the dimension of a basis for the columns (rows) of a matrix.
2. The rank of A (denoted rank(A)) is the dimension of R(A).
3. If A = BC, and r1 = rank(B), r2 = rank(C), then rank(A) ≤ min(r1, r2).
4. A matrix A ∈ R^(m×n) is said to be rank deficient if its rank is less than min(m, n). Otherwise, it is said to be full rank.
5. If A is square and rank deficient, then det(A) = 0.
6. It can be shown that rank(A) = rank(A^T). More is said on this point later.
Example 7:
The rank of A in Example 6 is 3, whereas the rank of A in Example 4 is 2.
1.1.3.7 Null Space of A
The null space N(A) of A is defined as
N(A) = {x ∈ R^n : Ax = 0}.
Example 8:
Let A be as before in Example 4. Then N(A) = c(1, 1, -2)^T, where c is a real constant.
A further example is as follows. Take 3 vectors [a1, a2, a3], where ai ∈ R^3, i = 1, 2, 3, that are constrained to lie in a 2-dimensional plane. Then there exists a zero linear combination of these vectors. The coefficients of this linear combination define a vector x which is in the null space of A = [a1, a2, a3]. In this case, we see that A is rank deficient.
Another important characterization of a matrix is its nullity. The nullity of A is the dimension of the null space of A. In Example 8 above, the nullity of A is one. We then have the following interesting property:
rank(A) + nullity(A) = n.
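The rank-plus-nullity property can be verified numerically by extracting a null-space basis from the SVD. A minimal numpy sketch (the test matrix is a hypothetical rank-2 example, not from the text):

```python
import numpy as np

A = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0],
              [1.0, 1.0, -2.0]])          # rank 2, so the nullity should be 1

# A basis for N(A) from the SVD: right singular vectors whose
# singular values are (numerically) zero.
U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))
null_basis = Vt[rank:].T                  # columns span N(A)
nullity = null_basis.shape[1]

assert rank + nullity == A.shape[1]       # rank(A) + nullity(A) = n
assert np.allclose(A @ null_basis, 0)     # basis vectors satisfy Ax = 0
```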
1.1.4 Four Fundamental Subspaces of a Matrix
The four matrix subspaces of concern are: the column space, the row space, and their respective orthogonal complements. The development of these four subspaces is closely linked to N(A) and R(A). We assume for this section that A ∈ R^(m×n), r ≤ min(m, n), where r = rank(A).
1.1.4.1 The Column Space
This is simply R(A). Its dimension is r. It is the set of all linear combinations of the columns of A.
1.1.4.2 The Orthogonal Complement of the Column Space
This may be expressed as R(A)⊥, with dimension m−r. It may be shown to be equivalent to N(A^T), as follows: by definition, N(A^T) is the set of x ∈ R^m satisfying
A^T x = 0.
1.1.4.3 The Row Space
The row space is defined simply as R(A^T), with dimension r. The row space is the range of the rows of A, or the subspace spanned by the rows, or the set of all possible linear combinations of the rows of A.
1.1.4.4 The Orthogonal Complement of the Row Space
This may be denoted as R(A^T)⊥. Its dimension is n−r. This set must be that which is orthogonal to all rows of A; i.e., for x to be in this space, x must satisfy
Ax = 0.  (1.1.16)
Thus, the set x, which is the orthogonal complement of the row space satisfying equation (1.1.16), is simply N(A).
We have noted before that rank(A) = rank(A^T). Thus, the dimensions of the row and column subspaces are equal. This is surprising, because it implies the number of linearly independent rows of a matrix is the same as the number of linearly independent columns. This holds regardless of the size or rank of the matrix. It is not an intuitively obvious fact, and there is no immediately obvious reason why this should be so. Nevertheless, the rank of a matrix is the number of independent rows or columns.
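Bases for all four fundamental subspaces fall out of the SVD, and their dimensions can be checked against the r, m−r, r, n−r pattern above. A numpy sketch (the 2×3 test matrix is an illustrative assumption):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # 2x3, rank 2
m, n = A.shape
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))

col_space  = U[:, :r]    # basis for R(A),    dim r
left_null  = U[:, r:]    # basis for N(A^T),  dim m - r
row_space  = Vt[:r].T    # basis for R(A^T),  dim r
null_space = Vt[r:].T    # basis for N(A),    dim n - r

assert col_space.shape[1] == r and row_space.shape[1] == r
assert left_null.shape[1] == m - r and null_space.shape[1] == n - r
# N(A) is orthogonal to the row space, as equation (1.1.16) requires:
assert np.allclose(row_space.T @ null_space, 0)
```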
1.1.5 Vector Norms
A vector norm is a means of expressing the length or distance associated with a vector. A norm on a vector space R^n is a function f, which maps a point in R^n into a point in R. Formally, this is stated mathematically as f: R^n → R. We denote the function f(x) as ‖x‖.
The p-norms: This is a useful class of norms, generalizing on the idea of the Euclidean norm. They are defined by
‖x‖p = (|x1|^p + |x2|^p + … + |xn|^p)^(1/p),  p ≥ 1.  (1.1.17)
If p = 1:
‖x‖1 = |x1| + |x2| + … + |xn|,
which is simply the sum of absolute values of the elements.
If p = 2:
‖x‖2 = (x1^2 + x2^2 + … + xn^2)^(1/2),
which is the familiar Euclidean norm.
If p = ∞:
‖x‖∞ = max over i of |xi|,
which is the largest (in magnitude) element of x. This may be shown in the following way. As p → ∞, the largest term within the round brackets in equation (1.1.17) dominates all the others. Therefore equation (1.1.17) may be written as
‖x‖∞ = lim as p → ∞ of (|xk|^p)^(1/p) = |xk|,
where k is the index of the largest-magnitude element of x.
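The three special cases, and the convergence of the p-norm to the infinity norm as p grows, can be checked directly in numpy (the test vector is an illustrative assumption):

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])

one_norm = np.sum(np.abs(x))        # p = 1: sum of absolute values
two_norm = np.sqrt(np.sum(x**2))    # p = 2: Euclidean norm
inf_norm = np.max(np.abs(x))        # p = inf: largest |x_i|

assert np.isclose(one_norm, np.linalg.norm(x, 1))
assert np.isclose(two_norm, np.linalg.norm(x, 2))
assert np.isclose(inf_norm, np.linalg.norm(x, np.inf))
# As p grows, the p-norm approaches the infinity norm:
assert np.isclose(np.linalg.norm(x, 50), inf_norm, atol=0.2)
```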
1.1.6 Determinants
Consider a square matrix A ∈ R^(m×m). We can define the matrix Aij as the submatrix obtained from A by deleting the i-th row and j-th column of A. The scalar number det(Aij) (where det(·) denotes determinant) is called the minor associated with the element aij of A. The signed minor cij = (-1)^(i+j) det(Aij) is called the cofactor of aij.
The determinant of A is the m-dimensional volume contained within the columns (rows) of A. This interpretation of the determinant is very useful, as we see shortly. The determinant of a matrix may be evaluated by the expression
det(A) = ai1 ci1 + ai2 ci2 + … + aim cim,  for any i ∈ {1, …, m},
i.e., the cofactor expansion along any row (or, equivalently, any column).
1.1.7 Properties of Determinants
Before we begin this discussion, let us define the volume of a parallelepiped defined by the set of column vectors comprising a matrix as the principal volume of that matrix. We have the following properties of determinants, which are stated without proof:
1. det(AB) = det(A)det(B), A, B ∈ R^(m×m).
2. det(A) = det(A^T).
3. det(cA) = c^m det(A), c ∈ R, A ∈ R^(m×m).
4. det(A) = 0 ? A is singular.
5. det(A) = λ1 λ2 … λm, where the λi are the eigenvalues of A.
6. The determinant of an orthonormal matrix is ±1.
7. If A is nonsingular, then det(A^(-1)) = [det(A)]^(-1).
8. If B is nonsingular, then det(B^(-1)AB) = det(A).
9. If B is obtained from A by interchanging any two rows (or columns), then det(B) = -det(A).
10. If B is obtained from A by adding a scalar multiple of one row to another (or a scalar multiple of one column to another), then det(B) = det(A).
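Several of these properties are easy to confirm numerically on random matrices. A numpy sketch checking properties 1, 2, 3, 5, and 7 (random 3×3 matrices are an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
c = 2.5

assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))            # property 1
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))           # property 2
assert np.isclose(np.linalg.det(c * A), c**3 * np.linalg.det(A))  # property 3
# Property 5: det(A) equals the product of the eigenvalues
# (complex eigenvalues come in conjugate pairs, so the product is real).
assert np.isclose(np.prod(np.linalg.eigvals(A)).real, np.linalg.det(A))
assert np.isclose(np.linalg.det(np.linalg.inv(A)),
                  1.0 / np.linalg.det(A))                         # property 7
```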
1.2 The Most Basic Matrix Decomposition
1.2.1 Gaussian Elimination
In this section we discuss the concept of Gaussian elimination in some detail. But first we present a very quick review, by example, of the elementary approach to Gaussian elimination.
Given the system of equations
Ax = b,
where A ∈ R^(3×3) is nonsingular, the above system can be expanded into the form
To solve the system, we transform it into an upper triangular system by Gaussian elimination, using a sequence of elementary row operations as follows.
Once A has been triangularized, the solution x is obtained by applying backward substitution to the system Ux = b. With this procedure, xn is first determined from the last equation of (1.2.3). Then x(n-1) may be determined from the second-last row, etc. The algorithm may be summarized by the following schema:
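The back-substitution schema can be sketched in a few lines of numpy; the function name and the test system below are hypothetical, chosen only to illustrate solving the last equation first and working upward:

```python
import numpy as np

def back_substitute(U, b):
    """Solve U x = b for an upper-triangular, nonsingular U."""
    n = U.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                    # last equation first
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0,  2.0],
              [0.0, 0.0,  4.0]])
b = np.array([1.0, 8.0, 8.0])
x = back_substitute(U, b)
assert np.allclose(U @ x, b)
```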
What about the Accuracy of Back Substitution?
With operations on floating-point numbers we must be concerned about the accuracy of the result, since the floating-point numbers themselves contain error. We want to know whether the small errors in the floating-point representation of real numbers can lead to large errors in the computed result. In this vein, we can show that the computed solution x? obtained by back substitution satisfies the expression
1.2.2 The LU Decomposition
Suppose we can find a lower triangular matrix L ∈ R^(n×n) with ones along the main diagonal, and an upper triangular matrix U ∈ R^(n×n), such that
A = LU.
This decomposition of A is referred to as the LU decomposition. To solve the system Ax = b, or LUx = b, we define the variable z as z = Ux, and then solve Lz = b for z by forward substitution, followed by Ux = z for x by back substitution.
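The two-triangular-solve strategy can be sketched as follows. This is a minimal Doolittle-style LU without pivoting (it assumes no zero pivots, as the text's later LDM discussion also does); the function names and the symmetric test matrix are hypothetical:

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU without pivoting: A = L U, with unit diagonal on L.
    Assumes no zero pivots are encountered."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]        # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]     # zero out below the pivot
    return L, np.triu(U)

def lu_solve(L, U, b):
    z = np.zeros_like(b)                       # forward:  L z = b
    for i in range(len(b)):
        z[i] = b[i] - L[i, :i] @ z[:i]
    x = np.zeros_like(b)                       # backward: U x = z
    for i in range(len(b) - 1, -1, -1):
        x[i] = (z[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
L, U = lu_decompose(A)
assert np.allclose(L @ U, A)
assert np.allclose(A @ lu_solve(L, U, b), b)
```

Production code would use a pivoted factorization instead; this sketch mirrors the pivot-free setting assumed in the text.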
1.2.3 The LDM Factorization
If no zero pivots are encountered during the Gaussian elimination process, then there exist unit lower triangular matrices L and M and a diagonal matrix D such that
A = LDM^T.
Justification
Since A = LU exists, let U = DM^T, where M^T is unit upper triangular and di = uii; hence A = LDM^T, which was to be shown. Each row of M^T is the corresponding row of U divided by its diagonal element.
We then solve the system Ax = b, which is equivalent to LDM^T x = b, in three steps:
1. Let y = DM^T x and solve Ly = b (n^2 flops).
2. Let z = M^T x and solve Dz = y (n flops).
3. Solve M^T x = z (n^2 flops).
1.2.4 The LDL Decomposition for Symmetric Matrices
For a symmetric non-singular matrix A ∈ R^(n×n), the factors L and M are identical.
Proof
Let A = LDM^T. The matrix M^(-1)AM^(-T) = M^(-1)LD is symmetric (from the left-hand side) and lower triangular (from the right-hand side). Hence, it is diagonal. But D is nonsingular, so M^(-1)L is also diagonal. The matrices M and L are both unit lower triangular (ULT). It can easily be shown that the inverse of a ULT matrix is also ULT, and furthermore, that the product of ULTs is ULT. Therefore M^(-1) is ULT, and so is M^(-1)L. But a matrix that is both ULT and diagonal is the identity. Thus M^(-1)L = I; M = L.
1.2.5 Cholesky Decomposition
We now consider several modifications to the LU decomposition, which ultimately lead up to the Cholesky decomposition. These modifications are 1) the LDM decomposition, 2) the LDL decomposition on symmetric matrices, and 3) the LDL decomposition on positive definite symmetric matrices. The Cholesky decomposition is relevant only for square symmetric positive definite matrices and is an important concept in signal processing. Several examples of the use of the Cholesky decomposition are provided at the end of the section.
For A ∈ R^(n×n) symmetric and positive definite, there exists a lower triangular matrix G ∈ R^(n×n) with positive diagonal entries, such that A = GG^T.
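The existence statement is easy to exercise with numpy's built-in factorization; the symmetric positive definite test matrix below is a hypothetical example:

```python
import numpy as np

# A symmetric positive definite matrix (hypothetical example).
A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])

G = np.linalg.cholesky(A)       # lower triangular, positive diagonal

assert np.allclose(G, np.tril(G))   # G is lower triangular
assert np.all(np.diag(G) > 0)       # positive diagonal entries
assert np.allclose(G @ G.T, A)      # A = G G^T
```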
1.2.6 Applications and Examples of the Cholesky Decomposition
1.2.6.1 Generating Vector Processes with Desired Covariance
We may use the Cholesky decomposition to generate a random vector process with a desired covariance matrix Σ ∈ R^(n×n). Since Σ must be symmetric and positive definite, let
Σ = GG^T
be its Cholesky decomposition. Then, if w is a zero-mean white process with E(ww^T) = I, the process x = Gw has the desired covariance: E(xx^T) = G E(ww^T) G^T = GG^T = Σ.
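This coloring procedure can be sketched in numpy; the 2×2 target covariance and sample count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])            # desired covariance (symmetric, PD)

G = np.linalg.cholesky(Sigma)             # Sigma = G G^T
w = rng.standard_normal((2, 200_000))     # white process: E[w w^T] = I
x = G @ w                                 # colored: E[x x^T] = G G^T = Sigma

sample_cov = (x @ x.T) / x.shape[1]
assert np.allclose(sample_cov, Sigma, atol=0.05)
```

The sample covariance of the generated vectors matches Σ up to the usual finite-sample fluctuation.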
1.2.6.2 Whitening a Process
This example is essentially the inverse of the one just discussed. Suppose we have a stationary vector process xi ∈ R^n, i = 1, 2, …, n. This process could be the signals received from the elements of an array of n sensors, it could be sets of n sequential samples of any time-varying signal, or sets of data in a tapped-delay-line equalizer of length n, at time instants t1, t2, …, etc. Let the process x consist of a signal part si and a noise part vi:
xi = si + vi.
Since the received signal x = s + v, the joint probability density function p(x|s) of the received signal vector x, given the noiseless signal s, in the presence of Gaussian noise samples v with covariance matrix Σ, is simply the pdf of the noise itself, and is given by the multi-dimensional Gaussian probability density function discussed in Section 1.2.1:
1.2.7 Eigendecomposition
Eigenvalue decomposition decomposes a matrix into the following form:
A = QΣQ^(-1),
where the columns of Q are the eigenvectors of the matrix A, and this matrix of eigenvectors is invertible. Σ = diag(λ1, λ2, …, λn) is a diagonal matrix with each diagonal element being an eigenvalue.
1.2.7.1 Eigenvalues and Eigenvectors
Suppose we have a matrix A. We investigate its eigenvalues and eigenvectors. Suppose we take the product Ax1, where x1 = [0, 1]^T, as shown in Fig. 1.2.
Example 1:
Consider the matrix given by
It may be easily verified that any vector in span[e2, e3] is an eigenvector associated with the zero repeated eigenvalue.
Property 1
If the eigenvalues of a Hermitian symmetric matrix are distinct, then the eigenvectors are orthogonal.
Property 5
If v is an eigenvector of a matrix A, then cv is also an eigenvector, where c is any real or complex constant. The proof follows directly by substituting cv for v in Av = λv. This means that only the direction of an eigenvector can be unique; its norm is not unique.
1.2.7.2 Orthonormal Matrices
Before proceeding with the eigendecomposition of a matrix, we must develop the concept of an orthonormal matrix. This form of matrix has mutually orthogonal columns, each of unit norm. This implies that
qi^T qj = δij,
where δij is the Kronecker delta, and qi and qj are columns of the orthonormal matrix Q. With this in mind, we now consider the product Q^T Q. The result may be visualized with the aid of the diagram below:
Q^T Q = I.  (1.2.32)
Equation (1.2.32) follows directly from the fact that Q has orthonormal columns. It is not so clear that the quantity QQ^T should also equal the identity. We can resolve this question in the following way. Suppose that A and B are any two square invertible matrices such that AB = I. Then BAB = B. By parsing this last expression, we have
(BA)B = B,
which implies BA = I.
Property 6
The vector 2-norm is invariant under an orthonormal transformation. If Q is orthonormal, then
‖Qx‖2^2 = x^T Q^T Qx = x^T x = ‖x‖2^2.
1.2.7.3 The Eigendecomposition (ED) of a Square Symmetric Matrix
Almost all matrices on which EDs are performed (at least in signal processing) are symmetric. A good example is covariance matrices, which are discussed in some detail in the next section.
Let A ∈ R^(n×n) be symmetric. Then, for eigenvalues λi and eigenvectors vi, we have
Avi = λi vi,  i = 1, 2, …, n.
Let the eigenvectors be normalized to unit 2-norm. Then these n equations can be combined, or stacked side-by-side together, and represented in the following compact form:
AV = VΣ,
where V = [v1, v2, …, vn] (i.e., each column of V is an eigenvector) and Σ = diag(λ1, λ2, …, λn).
Corresponding columns from each side of this equation represent one specific value of the index i. Because we have assumed A is symmetric, from Property 1, the vi are orthogonal. Furthermore, since we have assumed ‖vi‖2 = 1, V is an orthonormal matrix. Thus, post-multiplying both sides by V^T and using VV^T = I, we get
A = VΣV^T.
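The stacked form AV = VΣ and the final factorization A = VΣV^T can be verified numerically with numpy's symmetric eigensolver (the test matrix is a hypothetical symmetric example):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # symmetric

lam, V = np.linalg.eigh(A)               # eigenvalues and unit eigenvectors

assert np.allclose(V.T @ V, np.eye(3))          # V is orthonormal
assert np.allclose(A @ V, V @ np.diag(lam))     # A V = V Sigma
assert np.allclose(V @ np.diag(lam) @ V.T, A)   # A = V Sigma V^T
```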
1.2.8.1 Matrix p-Norms
A matrix p-norm is defined in terms of a vector p-norm. The matrix p-norm of an arbitrary matrix A, denoted ‖A‖p, is defined as
‖A‖p = sup over x ≠ 0 of ‖Ax‖p / ‖x‖p,  (1.2.40)
where “sup” means supremum, i.e., the largest value of the argument over all values of x ≠ 0. Since a property of a vector norm is ‖cx‖p = |c| ‖x‖p for any scalar c, we can choose c in equation (1.2.40) so that ‖x‖p = 1. Then, an equivalent statement to equation (1.2.40) is
‖A‖p = max over ‖x‖p = 1 of ‖Ax‖p.
1.2.8.2 Frobenius Norm
The Frobenius norm is the 2-norm of the vector consisting of the 2-norms of the rows (or columns) of the matrix A:
‖A‖F = (sum over i and j of |aij|^2)^(1/2).
1.2.8.3 Properties of Matrix Norms
1. Consider the matrix A ∈ R^(m×n) and the vector x ∈ R^n. Then
‖Ax‖p ≤ ‖A‖p ‖x‖p.
This property follows by dividing both sides of the above by ‖x‖p, and applying equation (1.2.40).
2. If Q and Z are orthonormal matrices of appropriate size, then
‖QAZ‖2 = ‖A‖2
and
‖QAZ‖F = ‖A‖F.
Thus, we see that the matrix 2-norm and Frobenius norm are invariant to pre- and post-multiplication by an orthonormal matrix.
3. Further,
‖A‖F^2 = tr(A^T A),
where tr(·) denotes the trace of a matrix, which is the sum of its diagonal elements.
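All three properties can be checked numerically; the sketch below uses random matrices and QR-generated orthonormal factors, which are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

# 1. ||Ax||_2 <= ||A||_2 ||x||_2
assert np.linalg.norm(A @ x) <= np.linalg.norm(A, 2) * np.linalg.norm(x) + 1e-12

# 2. Orthonormal Q, Z leave the 2-norm and Frobenius norm unchanged.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Z, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.isclose(np.linalg.norm(Q @ A @ Z, 2), np.linalg.norm(A, 2))
assert np.isclose(np.linalg.norm(Q @ A @ Z, 'fro'), np.linalg.norm(A, 'fro'))

# 3. ||A||_F^2 = tr(A^T A)
assert np.isclose(np.linalg.norm(A, 'fro')**2, np.trace(A.T @ A))
```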
1.2.9 Covariance Matrices
Here, we investigate the concepts and properties of the covariance matrix Rxx corresponding to a stationary, discrete-time random process x[n]. We break the infinite sequence x[n] into windows of length m, as shown in Fig. 1.3. The windows generally overlap; in fact, they are typically displaced from one another by only one sample. The samples within the i-th window become an m-length vector xi, i = 1, 2, …, n. Hence, the vector corresponding to each window is a vector sample from the random process x[n]. Processing random signals in this way is the fundamental first step in many forms of electronic system which deal with real signals, such as process identification, control, or any form of communication system including telephones, radio, radar, sonar, etc.
The covariance matrix Rxx ∈ R^(m×m) corresponding to a stationary or WSS process x[n] is defined as
Rxx = E[(xi − μ)(xi − μ)^T],
where μ is the vector mean of the process and E(·) denotes the expectation operator over all possible windows of index i of length m in Fig. 1.3. Often we deal with zero-mean processes, in which case we have
Rxx = E(xi xi^T).
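The windowing-and-averaging construction can be sketched as follows; the white test process, window length, and sample count are illustrative assumptions (for white noise the covariance estimate should approach the identity):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(50_000)     # a white, zero-mean scalar process x[n]
m = 4                               # window length

# Break x[n] into overlapping windows displaced by one sample each.
X = np.lib.stride_tricks.sliding_window_view(x, m)   # each row is one x_i

# Zero-mean estimate: Rxx ~ average of x_i x_i^T over all windows.
Rxx = (X.T @ X) / X.shape[0]

# For white noise the covariance should be close to the identity.
assert np.allclose(Rxx, np.eye(m), atol=0.05)
```

`sliding_window_view` requires a reasonably recent numpy; an explicit loop over window start indices is equivalent.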
However, for the process shown in Fig. 1.4, adjacent sam