var graph = [{"group":"nodes","data":{"id":"def:Corr","name":"definition","text":"
\n
Definition 22 (Correlation). Every definition of the convolution can be turned into a definition of the correlation by $f \star g = \bar f^- * g$, where $\bar f^-(t) = \overline{f(-t)}$. For the case of functions of $L^2(\mathbb{R})$ this gives $(f \star g)(x) = \int_{\mathbb{R}} \overline{f(t)}\, g(x+t)\, dt$.
\n
\n
Remark: correlation is not commutative.
\n
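A minimal numerical sketch of both points, using numpy (the array values, and numpy's correlation convention, are illustrative assumptions and may differ from the course's convention by a conjugation or reflection):
import numpy as np
f = np.array([1 + 0j, 2, 3])
g = np.array([0 + 0j, 1, 0.5])
corr = np.correlate(f, g, mode='full')                      # cross-correlation of f and g
via_conv = np.convolve(f, np.conj(g)[::-1], mode='full')    # correlation as a convolution with the conjugated reversal of g
print(np.allclose(corr, via_conv))                          # True
print(np.allclose(corr, np.correlate(g, f, mode='full')))   # False: correlation is not commutative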
","parent":"sec:FCC","rank":"0","html_name":"def:Corr","summary":"\n
Definition 22 (Correlation).
\n
","hasSummary":true,"hasTitle":true,"title":"Correlation","height":276,"width":980},"classes":"l0","position":{"x":3344.664174192218,"y":8942.891041283066}},{"group":"nodes","data":{"id":"rem:GR","name":"remark","text":"","parent":"sec:FCC","rank":"0","html_name":"rem:GR","summary":"","hasSummary":true,"hasTitle":true,"title":"General remarks","height":273,"width":980},"classes":"l0","position":{"x":4606.020214642733,"y":8903.649734517832}},{"group":"nodes","data":{"id":"th:ExTI","name":"theorem","text":"\n
Theorem 9 (Exponential function and T.I. operators (!finish morph!)). Let $T$ be a T.I. operator.
\n
\nLet $e_a$ be an exponential function, that is to say $e_a(x) = e^{ax}$ for some $a$. Assume that $e_a$ is in the domain of definition of $T$. Since $e_a(x - h) = e^{-ah} e_a(x)$, we have $\tau_h e_a = e^{-ah} e_a$. In $T \tau_h = \tau_h T$ we get $(T e_a)(x - h) = e^{-ah} (T e_a)(x)$, hence with $x = h$, $(T e_a)(h) = (T e_a)(0)\, e_a(h)$, hence $e_a$ is an eigenfunction of eigenvalue $(T e_a)(0)$.
\nConversely, let $f$ be an eigenfunction of $\tau_h$ for all $h$: $\tau_h f = c(h) f$. We have $f(x - h) = c(h) f(x)$ for all $x$ and $h$; hence, with $x = h$, $f(0) = c(h) f(h)$. If $f(0) = 0$, then $f$ is null everywhere. If $f(0) \neq 0$, then $c(h) = f(0)/f(h)$, hence $f$ verifies $f(x - h) f(h) = f(0) f(x)$, and hence $x \mapsto f(x)/f(0)$ is a morphism from $(\mathbb{R}, +)$ to $(\mathbb{C}^*, \times)$.
\n
\n
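The following sketch checks the statement numerically for a T.I. operator given by a circular convolution on $\mathbb{Z}/N\mathbb{Z}$ (the kernel values are illustrative assumptions):
import numpy as np
N = 64
k = 5
w = 2 * np.pi * k / N
e = np.exp(1j * w * np.arange(N))            # exponential function on Z/NZ
g = np.array([0.5, 0.3, 0.2])                # kernel of a T.I. operator T f = g (*) f
Te = np.array([sum(g[m] * e[(n - m) % N] for m in range(len(g))) for n in range(N)])
lam = Te[0] / e[0]                           # candidate eigenvalue (T e)(0)
print(np.allclose(Te, lam * e))              # True: e is an eigenfunction of T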
","parent":"sec:TIOpe","rank":"0","html_name":"th:ExTI","summary":"\n
Theorem 9 (Exponential function and T.I. operators (!finish morph!)). The eigenfunctions of a T.I. operator $T$ are exponential functions which lie in the domain of definition of $T$.
\n
","hasSummary":true,"hasTitle":true,"title":"Exponential function and T.I. operators (!finish morph!)","height":395,"width":980},"classes":"l0","position":{"x":7322.731965135108,"y":5045.927110898508}},{"group":"nodes","data":{"id":"th:ConvTI","name":"theorem","text":"\n
Theorem 10 (Convolutions are T.I.). Let $g \in L^1(E)$ with $E = \mathbb{R}$, $\mathbb{Z}$, or $\mathbb{Z}/N\mathbb{Z}$. The operator $T$ defined when it exists by $Tf = g * f$ is a T.I. operator. Show it in the case $E = \mathbb{R}$. We have $\tau_h(g * f)(x) = (g * f)(x - h) = \int g(t) f(x - h - t)\, dt = (g * \tau_h f)(x)$. The proof is similar in the other cases.
\n
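A quick numerical check of the commutation $\tau_h(g * f) = g * (\tau_h f)$ in the case $E = \mathbb{Z}/N\mathbb{Z}$ (random test data, an assumption of the sketch):
import numpy as np
rng = np.random.default_rng(0)
N = 32
f, g = rng.standard_normal(N), rng.standard_normal(N)
def circ_conv(g, f):
    return np.array([sum(g[k] * f[(n - k) % N] for k in range(N)) for n in range(N)])
h = 7
tau = lambda f: np.roll(f, h)     # translation by h on Z/NZ
print(np.allclose(tau(circ_conv(g, f)), circ_conv(g, tau(f))))   # True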
","parent":"sec:TIOpe","rank":"0","html_name":"th:ConvTI","summary":"\n
Theorem 10 (Convolutions are T.I.). Given a function $g$, the operator $f \mapsto g * f$ is T.I.
\n
","hasSummary":true,"hasTitle":true,"title":"Convolutions are T.I.","height":276,"width":980},"classes":"l0","position":{"x":5431.71662161616,"y":4496.318151068295}},{"group":"nodes","data":{"id":"th:TrFunCon","name":"theorem","text":"\n
Theorem 11 (Transfer function of convolution operators). Let $g$ be a function with a Fourier transform and let $T$ be the operator $Tf = g * f$. We already know from Theorem 10 that $T$ is T.I., hence it has a transfer function $H$. Results of section 2.3 on convolution theorems tell us that the transfer function of $T$ evaluated on imaginary numbers is the Fourier transform of $g$: $H(i\omega) = \hat g(\omega)$. Similar results hold with Fourier series or the discrete Fourier transform when $g$ is periodic or defined on a finite set. The arguments should be adjusted to each setting and its conventions. Prove the result for the case of Fourier series. The proof for the Fourier transform and the discrete Fourier transform follows the same pattern, though the case of the Fourier transform requires the formalism of distributions.
\n
Let $g$ be a periodic function with Fourier series $(c_k(g))_k$, and $\circledast$ be the circular convolution. Consider the periodic exponential function $e_n(x) = e^{2i\pi n x / a}$ with $n \in \mathbb{Z}$. The Fourier series of $e_n$ is a sequence $(c_k(e_n))_k$ where $c_k(e_n) = 1$ if $k = n$ and $0$ otherwise.
\n
\n
The product theorem says that
\n
Note that
\n
Hence
\n
and $e_n$ is an eigenfunction of $T$ whose eigenvalue is the $n$-th Fourier coefficient of $g$, up to the normalization fixed by the convention.
\n
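In the discrete, finite setting the statement can be checked directly: the eigenvalue of $e_k$ under circular convolution with $g$ is the $k$-th DFT coefficient of $g$. A sketch (the kernel and sizes are illustrative assumptions):
import numpy as np
N = 16
g = np.exp(-np.arange(N) / 4.0)              # kernel on Z/NZ
G = np.fft.fft(g)                            # DFT of g: the transfer function values
for k in [1, 5, 12]:
    e = np.exp(2j * np.pi * k * np.arange(N) / N)
    Te = np.array([sum(g[m] * e[(n - m) % N] for m in range(N)) for n in range(N)])
    print(np.allclose(Te, G[k] * e))         # True: the eigenvalue of e_k is G[k]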
","parent":"sec:TIOpe","rank":"0","html_name":"th:TrFunCon","summary":"\n
Theorem 11 (Transfer function of convolution operators). Let $g$ be a function with a Fourier transform and let $T$ be the operator $Tf = g * f$. The transfer function of $T$ evaluated on imaginary numbers is the Fourier transform of $g$: $H(i\omega) = \hat g(\omega)$.
\n
","hasSummary":true,"hasTitle":true,"title":"Transfer function of convolutions operators","height":541,"width":980},"classes":"l0","position":{"x":6653.027474376811,"y":3861.922134151624}},{"group":"nodes","data":{"id":"def:TrInOp","name":"definition","text":"\n
Definition 31 (T.I. operators). Operators are usually defined as linear maps between spaces of functions. In our case, the set $E$ is $\mathbb{R}$, $\mathbb{Z}$, or $\mathbb{Z}/N\mathbb{Z}$, and we let $F$ be the vector space of functions from $E$ to $\mathbb{C}$. Let $\tau_h$ be the operator 'translation by $h$': $(\tau_h f)(x) = f(x - h)$. When $E = \mathbb{R}$, $\tau_h$ is defined for $h \in \mathbb{R}$, but when $E = \mathbb{Z}$ or $\mathbb{Z}/N\mathbb{Z}$, $h$ should belong to $E$. In our case, we always take $h \in E$. Let $F_1$ and $F_2$ be subspaces of $F$ left stable by all translations. An operator $T : F_1 \to F_2$ is said to be translation invariant (T.I.) if and only if $T \tau_h = \tau_h T$ for all $h$. In an informal way, we have $T(f(\cdot - h)) = (Tf)(\cdot - h)$. Still informally, the operator does not make distinctions between the different locations of $E$; it acts the same everywhere.
\n
","parent":"sec:TIOpe","rank":"0","html_name":"def:TrInOp","summary":"\n
Definition 31 (T.I. operators).
\n
","hasSummary":true,"hasTitle":true,"title":"T.I. operators","height":282,"width":980},"classes":"l0","position":{"x":5873.092485873402,"y":4937.334272728404}},{"group":"nodes","data":{"id":"def:TranFun","name":"definition","text":"\n
Definition 32 (Transfer functions). Let $T$ be a T.I. operator. Recall that exponential functions $e_a$ which are in the domain of definition of $T$ are eigenfunctions of $T$. Denote by $H(a)$ the eigenvalue of the eigenfunction $e_a$. In the context of signal processing, the function $H$ is called the 'transfer function'. Consider the simple case of a function $f$ that can be decomposed as a linear combination of eigenfunctions $e_a$, with $a$ varying in a finite set of complex numbers:
\n
\n
Then
\n
Applying the operator to a function amounts to multiplying the coefficients of the decomposition of $f$ over exponential functions by the transfer function $H$. This result relies on the linearity of the operator and the finiteness of the sum. In the case of an infinite sum or an integral, linearity is not enough, but the result often remains valid provided that the quantities exist. This happens in particular with Fourier series or the Fourier transform.
\n
The following informal calculation reproduces Eq.[eq:TranFun] in the context of the Fourier transform.
\n
\n
The equality is often true in practice when the Fourier transforms exist, but it should be checked case by case.
\n
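A sketch of Eq.[eq:TranFun] in the finite setting, for a function built as an explicit finite combination of exponentials (kernel and coefficients are illustrative assumptions):
import numpy as np
N = 16
g = np.array([0.5, 0.25, 0.25] + [0.0] * (N - 3))   # kernel of the T.I. operator T
def T(f):
    return np.array([sum(g[k] * f[(n - k) % N] for k in range(N)) for n in range(N)])
e = lambda k: np.exp(2j * np.pi * k * np.arange(N) / N)
H = lambda k: T(e(k))[0]                  # eigenvalue of e_k, since e_k(0) = 1
c = {2: 1.0, 5: -0.5j}                    # finite decomposition f = sum_k c_k e_k
f = sum(ck * e(k) for k, ck in c.items())
print(np.allclose(T(f), sum(ck * H(k) * e(k) for k, ck in c.items())))   # True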
","parent":"sec:TIOpe","rank":"0","html_name":"def:TranFun","summary":"\n
Definition 32 (Transfer functions). T.I. operators have transfer functions $H$, where $H(a)$ is defined as the eigenvalue of the eigenfunction $e_a$.
\n
","hasSummary":true,"hasTitle":true,"title":"Transfer functions","height":332,"width":980},"classes":"l0","position":{"x":7326.069599729622,"y":4469.610681516074}},{"group":"nodes","data":{"id":"rem:Cont","name":"remark","text":"","parent":"sec:TIOpe","rank":"0","html_name":"rem:Cont","summary":"","hasSummary":true,"hasTitle":true,"title":"General remarks","height":365,"width":980},"classes":"l0","position":{"x":5869.910069613311,"y":5364.724011420463}},{"group":"nodes","data":{"id":"rem:FinSup","name":"remark","text":"","parent":"sec:TIOpe","rank":"0","html_name":"rem:FinSup","summary":"","hasSummary":true,"hasTitle":true,"title":"The case ","height":848,"width":982},"classes":"l0","position":{"x":4745.357910248369,"y":3819.1752262180235}},{"group":"nodes","data":{"id":"def:SigmaA","name":"definition","text":"\n
Definition 43 ($\sigma$-algebra). Let $X$ be a set and $\mathcal{P}(X)$ be the set of its subsets. A $\sigma$-algebra $\mathcal{A}$ on $X$ is a subset of $\mathcal{P}(X)$ such that
\n
\n $X \in \mathcal{A}$,
\n $A \in \mathcal{A} \Rightarrow A^c \in \mathcal{A}$,
\n $A_1, A_2, \ldots \in \mathcal{A} \Rightarrow \bigcup_n A_n \in \mathcal{A}$,
\n
where $A^c$ refers to the complement of $A$ in $X$. The first two axioms imply that $\emptyset \in \mathcal{A}$.
\n
","parent":"sec:Meas","rank":"0","html_name":"def:SigmaA","summary":"\n
Definition 43 ($\sigma$-algebra). Let $X$ be a set and $\mathcal{P}(X)$ be the set of its subsets. A $\sigma$-algebra $\mathcal{A}$ on $X$ is a subset of $\mathcal{P}(X)$ such that
\n
\n $X \in \mathcal{A}$,
\n $A \in \mathcal{A} \Rightarrow A^c \in \mathcal{A}$,
\n $A_1, A_2, \ldots \in \mathcal{A} \Rightarrow \bigcup_n A_n \in \mathcal{A}$,
\n
where $A^c$ refers to the complement of $A$ in $X$. The first two axioms imply that $\emptyset \in \mathcal{A}$.
\n
","hasSummary":false,"hasTitle":true,"title":"-algebra","height":205,"width":980},"classes":"l0","position":{"x":21374.655408924926,"y":8617.81610704642}},{"group":"nodes","data":{"id":"def:Borel","name":"definition","text":"\n
Definition 44 (Borel $\sigma$-algebra !check the vector space case!). On $\mathbb{R}$, the Borel $\sigma$-algebra is defined by countable unions, intersections and complements of arbitrary open intervals $(a, b)$. This definition extends to $\mathbb{R}^n$ by taking products of intervals $(a_1, b_1) \times \cdots \times (a_n, b_n)$. This definition extends to an arbitrary vector space of finite dimension by checking that the Borel $\sigma$-algebra is independent of the choice of the basis.
\n
","parent":"sec:Meas","rank":"0","html_name":"def:Borel","summary":"\n
Definition 44 (Borel $\sigma$-algebra !check the vector space case!). On $\mathbb{R}$, the Borel $\sigma$-algebra is defined by countable unions, intersections and complements of arbitrary open intervals $(a, b)$. This definition extends to $\mathbb{R}^n$ by taking products of intervals $(a_1, b_1) \times \cdots \times (a_n, b_n)$. This definition extends to an arbitrary vector space of finite dimension by checking that the Borel $\sigma$-algebra is independent of the choice of the basis.
\n
","hasSummary":false,"hasTitle":true,"title":"Borel -algebra !check the vector space case!","height":303,"width":980},"classes":"l0","position":{"x":23402.866561490624,"y":7521.3676802287655}},{"group":"nodes","data":{"id":"def:Meas","name":"definition","text":"\n
Definition 45 (Measures). Let $X$ be a set with a $\sigma$-algebra $\mathcal{A}$. A measure is a function $\mu : \mathcal{A} \to \mathbb{R} \cup \{+\infty\}$ such that
\n
\n $\mu\left(\bigcup_n A_n\right) = \sum_n \mu(A_n)$ for a countable number of disjoint subsets $A_n \in \mathcal{A}$, $n \in \mathbb{N}$. In particular, for disjoint subsets $A$ and $B$ in $\mathcal{A}$, $\mu(A \cup B) = \mu(A) + \mu(B)$.
\n $\mu(\emptyset) = 0$,
\n $\mu(A) \geq 0$ for all $A \in \mathcal{A}$.
\n
\n
When the last requirement is dropped, the measure is called a \"signed measure\". $(X, \mathcal{A}, \mu)$ is called a measure space.
\n
","parent":"sec:Meas","rank":"0","html_name":"def:Meas","summary":"\n
Definition 45 (Measures). Let $X$ be a set with a $\sigma$-algebra $\mathcal{A}$. A measure is a function $\mu : \mathcal{A} \to \mathbb{R} \cup \{+\infty\}$ such that
\n
\n $\mu\left(\bigcup_n A_n\right) = \sum_n \mu(A_n)$ for a countable number of disjoint subsets $A_n \in \mathcal{A}$, $n \in \mathbb{N}$. In particular, for disjoint subsets $A$ and $B$ in $\mathcal{A}$, $\mu(A \cup B) = \mu(A) + \mu(B)$.
\n $\mu(\emptyset) = 0$,
\n $\mu(A) \geq 0$ for all $A \in \mathcal{A}$.
\n
\n
When the last requirement is dropped, the measure is called a \"signed measure\". $(X, \mathcal{A}, \mu)$ is called a measure space.
\n
","hasSummary":false,"hasTitle":true,"title":"Measures","height":198,"width":980},"classes":"l0","position":{"x":20265.49636334657,"y":7465.628481662133}},{"group":"nodes","data":{"id":"def:Measbl","name":"definition","text":"\n
Definition 46 (Measurable function). Let $X$ and $Y$ be two sets with $\sigma$-algebras $\mathcal{A}$ and $\mathcal{B}$. A function $f : X \to Y$ is measurable if and only if $f^{-1}(B) \in \mathcal{A}$ for all $B \in \mathcal{B}$.
\n
","parent":"sec:Meas","rank":"0","html_name":"def:Measbl","summary":"\n
Definition 46 (Measurable function). Let $X$ and $Y$ be two sets with $\sigma$-algebras $\mathcal{A}$ and $\mathcal{B}$. A function $f : X \to Y$ is measurable if and only if $f^{-1}(B) \in \mathcal{A}$ for all $B \in \mathcal{B}$.
\n
","hasSummary":false,"hasTitle":true,"title":"Measurable function","height":198,"width":980},"classes":"l0","position":{"x":21584.558365463217,"y":7710.896413858276}},{"group":"nodes","data":{"id":"def:LebInt","name":"definition","text":"\n
Definition 47 (Lebesgue integration !!deal with general case!!). Let $f$ be a measurable function from the measure space $(X, \mathcal{A}, \mu)$ to a vector space endowed with the Borel $\sigma$-algebra. The Lebesgue integral of $f$ is noted $\int_X f \, d\mu$.
\n
Give a precise definition in the case where $f$ is a \"step function\" taking a finite number of values $v_1, \ldots, v_n$. The general definition is obtained as a limit of integrals of step functions.
\n
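A minimal sketch of the step-function case: the integral is the sum of the values weighted by the measures of the sets on which they are taken (the pairs below are illustrative assumptions):
def lebesgue_step_integral(pieces):
    # pieces: list of pairs (v_i, mu(A_i)) for disjoint measurable sets A_i on which f equals v_i
    return sum(v * m for v, m in pieces)
# f takes the value 2 on a set of measure 0.5 and the value -1 on a set of measure 1.5
print(lebesgue_step_integral([(2.0, 0.5), (-1.0, 1.5)]))   # -0.5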
","parent":"sec:Meas","rank":"0","html_name":"def:LebInt","summary":"\n
Definition 47 (Lebesgue integration !!deal with general case!!). Let $f$ be a measurable function from the measure space $(X, \mathcal{A}, \mu)$ to a vector space endowed with the Borel $\sigma$-algebra. The Lebesgue integral of $f$ is noted $\int_X f \, d\mu$.
\n
Give a precise definition in the case where $f$ is a \"step function\" taking a finite number of values $v_1, \ldots, v_n$. The general definition is obtained as a limit of integrals of step functions.
\n
","hasSummary":false,"hasTitle":true,"title":"Lebesgue integration !!deal with general case!!","height":395,"width":980},"classes":"l0","position":{"x":21994.053917912363,"y":6855.319225843632}},{"group":"nodes","data":{"id":"rem:Dflt","name":"remark","text":"","parent":"sec:Meas","rank":"0","html_name":"rem:Dflt","summary":"","hasSummary":false,"hasTitle":true,"title":"Default -algebras","height":205,"width":980},"classes":"l0","position":{"x":22755.728060176196,"y":8475.307295579609}},{"group":"nodes","data":{"id":"def:VectSpace","name":"definition","text":"\n
Definition 1 (Vector space). Let $K = \mathbb{R}$ or $\mathbb{C}$. $(V, +, \cdot)$ is a vector space if $+ : V \times V \to V$ and $\cdot : K \times V \to V$ are such that
\n
\n $u + v = v + u$
\n $(u + v) + w = u + (v + w)$
\n there exists $0 \in V$ such that $v + 0 = v$
\n for each $v \in V$ there exists $-v$ such that $v + (-v) = 0$
\n $\lambda \cdot (u + v) = \lambda \cdot u + \lambda \cdot v$ (distributivity)
\n $(\lambda + \mu) \cdot v = \lambda \cdot v + \mu \cdot v$ (distributivity)
\n $\lambda \cdot (\mu \cdot v) = (\lambda \mu) \cdot v$
\n $1 \cdot v = v$
\n
\n
Elements of $V$ are called vectors, and elements of $K$ are called scalars.
\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:VectSpace","summary":"\n
Definition 1 (Vector space). Let $K = \mathbb{R}$ or $\mathbb{C}$. $(V, +, \cdot)$ is a vector space if $+ : V \times V \to V$ and $\cdot : K \times V \to V$ are such that
\n
\n $u + v = v + u$
\n $(u + v) + w = u + (v + w)$
\n there exists $0 \in V$ such that $v + 0 = v$
\n for each $v \in V$ there exists $-v$ such that $v + (-v) = 0$
\n $\lambda \cdot (u + v) = \lambda \cdot u + \lambda \cdot v$ (distributivity)
\n $(\lambda + \mu) \cdot v = \lambda \cdot v + \mu \cdot v$ (distributivity)
\n $\lambda \cdot (\mu \cdot v) = (\lambda \mu) \cdot v$
\n $1 \cdot v = v$
\n
\n
Elements of $V$ are called vectors, and elements of $K$ are called scalars.
\n
","hasSummary":false,"hasTitle":true,"title":"Vector space","height":198,"width":980},"classes":"l0","position":{"x":11579.396635227931,"y":11530.966101934808}},{"group":"nodes","data":{"id":"def:FreeSys","name":"definition","text":"\n
Definition 2 (Free system). Vectors $v_1, \ldots, v_n$ are free if and only if $\lambda_1 v_1 + \cdots + \lambda_n v_n = 0 \Rightarrow \lambda_1 = \cdots = \lambda_n = 0$. The word \"independent\" is sometimes used instead of \"free\".
\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:FreeSys","summary":"\n
Definition 2 (Free system). Vectors $v_1, \ldots, v_n$ are free if and only if $\lambda_1 v_1 + \cdots + \lambda_n v_n = 0 \Rightarrow \lambda_1 = \cdots = \lambda_n = 0$. The word \"independent\" is sometimes used instead of \"free\".
\n
","hasSummary":false,"hasTitle":true,"title":"Free system","height":198,"width":980},"classes":"l0","position":{"x":12728.226213056792,"y":10844.920114152294}},{"group":"nodes","data":{"id":"def:GenSys","name":"definition","text":"\n
Definition 3 (Generating system). Vectors $v_1, \ldots, v_n$ are generators of $V$ if and only if every $v \in V$ can be written as a linear combination $v = \lambda_1 v_1 + \cdots + \lambda_n v_n$.
\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:GenSys","summary":"\n
Definition 3 (Generating system). Vectors $v_1, \ldots, v_n$ are generators of $V$ if and only if every $v \in V$ can be written as a linear combination $v = \lambda_1 v_1 + \cdots + \lambda_n v_n$.
\n
","hasSummary":false,"hasTitle":true,"title":"Generating system","height":198,"width":980},"classes":"l0","position":{"x":13039.651455434217,"y":11250.710804583194}},{"group":"nodes","data":{"id":"def:Basis","name":"definition","text":"\n
Definition 4 (Basis). Vectors $e_1, \ldots, e_n$ are a basis of $V$ if and only if they are both a free system and a generating system of $V$. $n$ is called the dimension of $V$.
\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:Basis","summary":"\n
Definition 4 (Basis). Vectors $e_1, \ldots, e_n$ are a basis of $V$ if and only if they are both a free system and a generating system of $V$. $n$ is called the dimension of $V$.
\n
","hasSummary":false,"hasTitle":true,"title":"Basis","height":198,"width":980},"classes":"l0","position":{"x":13974.238751330833,"y":10803.711804944005}},{"group":"nodes","data":{"id":"def:SubSpace","name":"definition","text":"\n
Definition 5 (Subspace). $W$ is a sub vector space of $V$ if and only if $W \subset V$ and $\lambda u + \mu v \in W$ for all $u, v \in W$ and $\lambda, \mu \in K$.
\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:SubSpace","summary":"\n
Definition 5 (Subspace). $W$ is a sub vector space of $V$ if and only if $W \subset V$ and $\lambda u + \mu v \in W$ for all $u, v \in W$ and $\lambda, \mu \in K$.
\n
","hasSummary":false,"hasTitle":true,"title":"Subspace","height":198,"width":980},"classes":"l0","position":{"x":11013.555366699075,"y":10441.520276248375}},{"group":"nodes","data":{"id":"def:Span","name":"definition","text":"\n
Definition 6 (Span of a set of vectors). Let $v_1, \ldots, v_n \in V$. The span of $(v_1, \ldots, v_n)$ is the set generated by all the linear combinations: $\mathrm{span}(v_1, \ldots, v_n) = \{\lambda_1 v_1 + \cdots + \lambda_n v_n,\ \lambda_i \in K\}$. It can be checked that the span is a vector subspace of $V$.
\nAlternatively the span of a matrix $A$ is defined as $\mathrm{span}(A) = \{Ax\}$.
\n
When the columns of $A$ are the coordinate vectors of $v_1, \ldots, v_n$, the span of $A$ consists of the coordinates of the elements of $\mathrm{span}(v_1, \ldots, v_n)$.
\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:Span","summary":"\n
Definition 6 (Span of a set of vectors). Let $v_1, \ldots, v_n \in V$. The span of $(v_1, \ldots, v_n)$ is the set generated by all the linear combinations: $\mathrm{span}(v_1, \ldots, v_n) = \{\lambda_1 v_1 + \cdots + \lambda_n v_n,\ \lambda_i \in K\}$. It can be checked that the span is a vector subspace of $V$.
\nAlternatively the span of a matrix $A$ is defined as $\mathrm{span}(A) = \{Ax\}$.
\n
When the columns of $A$ are the coordinate vectors of $v_1, \ldots, v_n$, the span of $A$ consists of the coordinates of the elements of $\mathrm{span}(v_1, \ldots, v_n)$.
\n
","hasSummary":false,"hasTitle":true,"title":"Span of a set of vectors","height":198,"width":980},"classes":"l0","position":{"x":11856.999926963139,"y":9734.551872039543}},{"group":"nodes","data":{"id":"def:Coords","name":"definition","text":"\n
Definition 7 (Coordinates). When $(e_1, \ldots, e_n)$ is a basis of $V$, it can be proved that each vector $v$ has a unique decomposition
\n
$v = x_1 e_1 + \cdots + x_n e_n$. $(x_1, \ldots, x_n)$ is called the coordinate vector of $v$ in the basis $(e_1, \ldots, e_n)$. The decomposition is often noted $v = \sum_i x_i e_i$, and the coordinate vector becomes $x = (x_i)_i$. Warning: depending on the context, $x_i$ sometimes refers to a coordinate of $x$, or to a vector among a set of vectors $x_1, \ldots, x_n$.
\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:Coords","summary":"\n
Definition 7 (Coordinates). When $(e_1, \ldots, e_n)$ is a basis of $V$, it can be proved that each vector $v$ has a unique decomposition
\n
$v = x_1 e_1 + \cdots + x_n e_n$. $(x_1, \ldots, x_n)$ is called the coordinate vector of $v$ in the basis $(e_1, \ldots, e_n)$. The decomposition is often noted $v = \sum_i x_i e_i$, and the coordinate vector becomes $x = (x_i)_i$. Warning: depending on the context, $x_i$ sometimes refers to a coordinate of $x$, or to a vector among a set of vectors $x_1, \ldots, x_n$.
\n
","hasSummary":false,"hasTitle":true,"title":"Coordinates","height":198,"width":980},"classes":"l0","position":{"x":13145.822712594667,"y":10256.69575153816}},{"group":"nodes","data":{"id":"def:BasChg","name":"definition","text":"\n
Definition 8 (Basis change). Let $(e_1, \ldots, e_n)$ and $(e'_1, \ldots, e'_n)$ be bases of $V$. For $v \in V$, note $x$ its coordinates in the first basis and $x'$ its coordinates in the basis $(e'_i)$. They are related by $x = P x'$, where $P$ is a matrix whose $j$-th column is the coordinate vector of $e'_j$ in the basis $(e_1, \ldots, e_n)$. $P$ is invertible and $x' = P^{-1} x$.
\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:BasChg","summary":"\n
Definition 8 (Basis change). Let $(e_1, \ldots, e_n)$ and $(e'_1, \ldots, e'_n)$ be bases of $V$. For $v \in V$, note $x$ its coordinates in the first basis and $x'$ its coordinates in the basis $(e'_i)$. They are related by $x = P x'$, where $P$ is a matrix whose $j$-th column is the coordinate vector of $e'_j$ in the basis $(e_1, \ldots, e_n)$. $P$ is invertible and $x' = P^{-1} x$.
\n
","hasSummary":false,"hasTitle":true,"title":"Basis change","height":198,"width":980},"classes":"l0","position":{"x":13709.444224257108,"y":9660.009061561592}},{"group":"nodes","data":{"id":"rem:ExVectsp","name":"remark","text":"","parent":"subsec:VectSpaces","rank":"0","html_name":"rem:ExVectsp","summary":"","hasSummary":false,"hasTitle":true,"title":"Common examples of vector spaces","height":297,"width":980},"classes":"l0","position":{"x":13033.723852237516,"y":11823.87387793776}},{"group":"nodes","data":{"id":"def:Mat","name":"definition","text":"\n
Definition 9 (Matrix). A matrix is a $2$-dimensional array containing elements of $K$.
\n
","parent":"subsec:Mats","rank":"0","html_name":"def:Mat","summary":"\n
Definition 9 (Matrix). A matrix is a $2$-dimensional array containing elements of $K$.
\n
","hasSummary":false,"hasTitle":true,"title":"Matrix","height":198,"width":980},"classes":"l0","position":{"x":17282.57926982218,"y":8677.258398557675}},{"group":"nodes","data":{"id":"def:MatMul","name":"definition","text":"\n
Definition 10 (Matrix multiplication). Let $A$ be a matrix of size $m \times n$ and $B$ a matrix of size $n \times p$. The matrix multiplication of $A$ and $B$, noted $AB$, is a matrix of size $m \times p$ whose elements are defined by $(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$. Visual representation:
\n
When $AB = BA = I$, where $I$ is a matrix with only ones on the diagonal, $A$ and $B$ are inverse to each other: $B = A^{-1}$.
\n
","parent":"subsec:Mats","rank":"0","html_name":"def:MatMul","summary":"\n
Definition 10 (Matrix multiplication). Let $A$ be a matrix of size $m \times n$ and $B$ a matrix of size $n \times p$. The matrix multiplication of $A$ and $B$, noted $AB$, is a matrix of size $m \times p$ whose elements are defined by $(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$. Visual representation:
\n
When $AB = BA = I$, where $I$ is a matrix with only ones on the diagonal, $A$ and $B$ are inverse to each other: $B = A^{-1}$.
\n
","hasSummary":false,"hasTitle":true,"title":"Matrix multiplication","height":198,"width":980},"classes":"l0","position":{"x":16417.779790079614,"y":8170.263532408395}},{"group":"nodes","data":{"id":"def:Trans","name":"definition","text":"\n
Definition 11 (Transpose). Let $A$ be a matrix. The transpose of $A$ is noted $A^T$ and defined by $(A^T)_{ij} = A_{ji}$.
\n
Important properties: $(A^T)^T = A$, $(AB)^T = B^T A^T$, $(A^{-1})^T = (A^T)^{-1}$.
\n
","parent":"subsec:Mats","rank":"0","html_name":"def:Trans","summary":"\n
Definition 11 (Transpose). Let $A$ be a matrix. The transpose of $A$ is noted $A^T$ and defined by $(A^T)_{ij} = A_{ji}$.
\n
Important properties: $(A^T)^T = A$, $(AB)^T = B^T A^T$, $(A^{-1})^T = (A^T)^{-1}$.
\n
","hasSummary":false,"hasTitle":true,"title":"Transpose","height":198,"width":980},"classes":"l0","position":{"x":16950.37013182701,"y":7548.4583757792325}},{"group":"nodes","data":{"id":"def:IsoKn","name":"theorem","text":"\n
Theorem 1 (Bijection with $K^n$). Let $V$ be a vector space of finite dimension $n$. Take a basis $(e_1, \ldots, e_n)$ of $V$. The function which maps a vector to its coordinate vector is linear and bijective. Hence a basis identifies $V$ with $K^n$.
\n
","parent":"subsec:LinMaps","rank":"0","html_name":"def:IsoKn","summary":"\n
Theorem 1 (Bijection with $K^n$). Let $V$ be a vector space of finite dimension $n$. Take a basis $(e_1, \ldots, e_n)$ of $V$. The function which maps a vector to its coordinate vector is linear and bijective. Hence a basis identifies $V$ with $K^n$.
\n
","hasSummary":false,"hasTitle":true,"title":"Bijection with ","height":211,"width":980},"classes":"l0","position":{"x":15687.967979063893,"y":11244.954099266317}},{"group":"nodes","data":{"id":"th:CompLinMaps","name":"theorem","text":"\n
Theorem 2 (Composition of linear maps). Let $U$, $V$ and $W$ be vector spaces of finite dimension with bases $\mathcal{B}_U$, $\mathcal{B}_V$, and $\mathcal{B}_W$. Let $f : U \to V$ be a linear map of matrix $A$ and $g : V \to W$ be a linear map of matrix $B$. The matrix of the composition $g \circ f$ is the matrix product $BA$.
\n
","parent":"subsec:LinMaps","rank":"0","html_name":"th:CompLinMaps","summary":"\n
Theorem 2 (Composition of linear maps). Let $U$, $V$ and $W$ be vector spaces of finite dimension with bases $\mathcal{B}_U$, $\mathcal{B}_V$, and $\mathcal{B}_W$. Let $f : U \to V$ be a linear map of matrix $A$ and $g : V \to W$ be a linear map of matrix $B$. The matrix of the composition $g \circ f$ is the matrix product $BA$.
\n
","hasSummary":false,"hasTitle":true,"title":"Composition of linear maps","height":297,"width":980},"classes":"l0","position":{"x":16105.7337414555,"y":9952.548597540048}},{"group":"nodes","data":{"id":"def:LinMap","name":"definition","text":"\n
Definition 12 (Linear map). A linear map between the vector spaces $V$ and $W$ is a function $f : V \to W$ such that $f(\lambda u + \mu v) = \lambda f(u) + \mu f(v)$.
\n
","parent":"subsec:LinMaps","rank":"0","html_name":"def:LinMap","summary":"\n
Definition 12 (Linear map). A linear map between the vector spaces $V$ and $W$ is a function $f : V \to W$ such that $f(\lambda u + \mu v) = \lambda f(u) + \mu f(v)$.
\n
","hasSummary":false,"hasTitle":true,"title":"Linear map","height":198,"width":980},"classes":"l0","position":{"x":16984.969239969676,"y":11523.917573857172}},{"group":"nodes","data":{"id":"def:MatMap","name":"definition","text":"\n
Definition 13 (Matrix of a linear map). Let $(e_1, \ldots, e_n)$ be a basis of $V$ and $(f_1, \ldots, f_m)$ be a basis of $W$. The elements of the matrix $A$ of a linear map $f : V \to W$ are $A_{ij} = (f(e_j))_i$, where $(f(e_j))_i$ is the $i$-th coordinate of $f(e_j)$. Note that the matrix is of size $m \times n$.
\n
Let $(e'_j)$ and $(f'_i)$ be two new bases of $V$ and $W$. The matrix of $f$ in the new bases is $A' = Q^{-1} A P$, where the columns of $P$ are the coordinates of the vectors $e'_j$ in the basis $(e_j)$ and the columns of $Q$ are the coordinates of the vectors $f'_i$ in the basis $(f_i)$.
\n
","parent":"subsec:LinMaps","rank":"0","html_name":"def:MatMap","summary":"\n
Definition 13 (Matrix of a linear map). Let $(e_1, \ldots, e_n)$ be a basis of $V$ and $(f_1, \ldots, f_m)$ be a basis of $W$. The elements of the matrix $A$ of a linear map $f : V \to W$ are $A_{ij} = (f(e_j))_i$, where $(f(e_j))_i$ is the $i$-th coordinate of $f(e_j)$. Note that the matrix is of size $m \times n$.
\n
Let $(e'_j)$ and $(f'_i)$ be two new bases of $V$ and $W$. The matrix of $f$ in the new bases is $A' = Q^{-1} A P$, where the columns of $P$ are the coordinates of the vectors $e'_j$ in the basis $(e_j)$ and the columns of $Q$ are the coordinates of the vectors $f'_i$ in the basis $(f_i)$.
\n
","hasSummary":false,"hasTitle":true,"title":"Matrix of a linear map","height":198,"width":980},"classes":"l0","position":{"x":16830.563327358752,"y":10698.882412318088}},{"group":"nodes","data":{"id":"def:Eigen","name":"definition","text":"\n
Definition 14 (Eigenvectors and eigenvalues). Let $f$ be a linear map from $V$ to $V$. An eigenvector is a non null vector $v$ such that $f(v) = \lambda v$ for some scalar $\lambda$. $\lambda$ is called an eigenvalue. Alternatively, let $A$ be a matrix. A non null column vector $x$ is called an eigenvector when $Ax = \lambda x$ for some scalar $\lambda$. $\lambda$ is again called an eigenvalue.
\n
","parent":"subsec:LinMaps","rank":"0","html_name":"def:Eigen","summary":"\n
Definition 14 (Eigenvectors and eigenvalues). Let $f$ be a linear map from $V$ to $V$. An eigenvector is a non null vector $v$ such that $f(v) = \lambda v$ for some scalar $\lambda$. $\lambda$ is called an eigenvalue. Alternatively, let $A$ be a matrix. A non null column vector $x$ is called an eigenvector when $Ax = \lambda x$ for some scalar $\lambda$. $\lambda$ is again called an eigenvalue.
\n
","hasSummary":false,"hasTitle":true,"title":"Eigenvectors and eigenvalues","height":297,"width":980},"classes":"l0","position":{"x":18580.795461310972,"y":10255.738296787835}},{"group":"nodes","data":{"id":"def:DiagMap","name":"definition","text":"\n
Definition 15 (Diagonalizable map). A linear map $f$ is diagonalizable if there exists a basis of eigenvectors. In that basis, the matrix of $f$ is $\mathrm{diag}(\lambda_1, \ldots, \lambda_n)$, where the $\lambda_i$ are the eigenvalues of $f$. Alternatively, a matrix $A$ is said diagonalizable if there exist an invertible matrix $P$ and a matrix $D$ such that $A = P D P^{-1}$ with $D$ a diagonal matrix.
\n
","parent":"subsec:LinMaps","rank":"0","html_name":"def:DiagMap","summary":"\n
Definition 15 (Diagonalizable map). A linear map $f$ is diagonalizable if there exists a basis of eigenvectors. In that basis, the matrix of $f$ is $\mathrm{diag}(\lambda_1, \ldots, \lambda_n)$, where the $\lambda_i$ are the eigenvalues of $f$. Alternatively, a matrix $A$ is said diagonalizable if there exist an invertible matrix $P$ and a matrix $D$ such that $A = P D P^{-1}$ with $D$ a diagonal matrix.
\n
","hasSummary":false,"hasTitle":true,"title":"Diagonalizable map","height":198,"width":980},"classes":"l0","position":{"x":18743.398578871198,"y":9536.883217987297}},{"group":"nodes","data":{"id":"rem:MatMultVect","name":"remark","text":"","parent":"subsec:LinMaps","rank":"0","html_name":"rem:MatMultVect","summary":"","hasSummary":false,"hasTitle":true,"title":"Coordinates of the image","height":297,"width":980},"classes":"l0","position":{"x":17549.755638456132,"y":10218.094491526004}},{"group":"nodes","data":{"id":"def:Pyth","name":"theorem","text":"\n
Theorem 3 (Pythagoras theorem). Let $(e_1, \ldots, e_n)$ be an orthonormal basis. The norm of a vector $v = \sum_i x_i e_i$ is given by $\|v\|^2 = \sum_i |x_i|^2$. It is a direct consequence of the linearity (or semi-linearity) of the inner product in both arguments.
\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:Pyth","summary":"\n
Theorem 3 (Pythagoras theorem). Let $(e_1, \ldots, e_n)$ be an orthonormal basis. The norm of a vector $v = \sum_i x_i e_i$ is given by $\|v\|^2 = \sum_i |x_i|^2$. It is a direct consequence of the linearity (or semi-linearity) of the inner product in both arguments.
\n
","hasSummary":false,"hasTitle":true,"title":"Pythagore theorem","height":198,"width":980},"classes":"l0","position":{"x":12778.889078858281,"y":6938.557194336066}},{"group":"nodes","data":{"id":"th:OrthProj","name":"theorem","text":"\n
Theorem 4 (Orthogonal projection on the span of independent vectors). Let $(e_1, \ldots, e_n)$ be an orthonormal basis of the vector space $V$. Let $v_1, \ldots, v_k$ be independent vectors of $V$ and $W = \mathrm{span}(v_1, \ldots, v_k)$. The coordinate vector of the orthogonal projection of a vector $u$ on $W$ is $p = A (A^T A)^{-1} A^T x$, where $A$ is the matrix whose columns are the coordinates of the vectors $v_1, \ldots, v_k$, and $x$ is the coordinate vector of $u$.
\n
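A numerical sketch of the formula (the vectors below are illustrative assumptions):
import numpy as np
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])                 # columns: coordinates of two independent vectors
x = np.array([3.0, 0.0, 0.0])              # coordinate vector of the vector to project
p = A @ np.linalg.solve(A.T @ A, A.T @ x)  # p = A (A^T A)^{-1} A^T x
print(np.allclose(A.T @ (x - p), 0.0))     # True: the residual is orthogonal to the span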
","parent":"subsec:InnProd","rank":"0","html_name":"th:OrthProj","summary":"\n
Theorem 4 (Orthogonal projection on the span of independent vectors). Let $(e_1, \ldots, e_n)$ be an orthonormal basis of the vector space $V$. Let $v_1, \ldots, v_k$ be independent vectors of $V$ and $W = \mathrm{span}(v_1, \ldots, v_k)$. The coordinate vector of the orthogonal projection of a vector $u$ on $W$ is $p = A (A^T A)^{-1} A^T x$, where $A$ is the matrix whose columns are the coordinates of the vectors $v_1, \ldots, v_k$, and $x$ is the coordinate vector of $u$.
\n
","hasSummary":false,"hasTitle":true,"title":"Orthogonal projection on the span of independent vectors","height":395,"width":980},"classes":"l0","position":{"x":11869.63966158844,"y":8913.769158512221}},{"group":"nodes","data":{"id":"th:CauSch","name":"theorem","text":"\n
Theorem 5 (Cauchy-Schwarz inequality). Let $V$ be a vector space endowed with an inner product. For all $u, v \in V$, $|\langle u, v \rangle| \leq \|u\| \|v\|$. The inequality is an equality if and only if $u = \lambda v$ with $\lambda \in K$.
\n
","parent":"subsec:InnProd","rank":"0","html_name":"th:CauSch","summary":"\n
Theorem 5 (Cauchy-Schwarz inequality). Let $V$ be a vector space endowed with an inner product. For all $u, v \in V$, $|\langle u, v \rangle| \leq \|u\| \|v\|$. The inequality is an equality if and only if $u = \lambda v$ with $\lambda \in K$.
\n
","hasSummary":false,"hasTitle":true,"title":"Cauchy-Schwartz inequality","height":297,"width":980},"classes":"l0","position":{"x":11674.786509972117,"y":7932.232740648682}},{"group":"nodes","data":{"id":"def:InnProd","name":"definition","text":"\n
Definition 16 (Inner product). Let $V$ be a vector space on $\mathbb{R}$. An inner product is a map $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ such that
\n
\n $u \mapsto \langle u, v \rangle$ and $v \mapsto \langle u, v \rangle$ are linear (bi-linearity)
\n $\langle u, v \rangle = \langle v, u \rangle$ (symmetry)
\n $\langle v, v \rangle > 0$ for $v \neq 0$ (positive definiteness).
\n
\n
A real vector space of finite dimension with an inner product is called \"Euclidean\".
\nWhen $V$ is a vector space on $\mathbb{C}$, an inner product is a map $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{C}$ such that
\n
\n $u \mapsto \langle u, v \rangle$ is linear
\n $\langle u, v \rangle = \overline{\langle v, u \rangle}$ (Hermitian symmetry)
\n $\langle v, v \rangle > 0$ for $v \neq 0$ (positive definiteness).
\n
\n
The Hermitian symmetry implies that $\langle v, v \rangle$ is real, hence the last condition is meaningful. The linearity in the first argument and the Hermitian symmetry imply that $\langle u, \lambda v \rangle = \bar{\lambda} \langle u, v \rangle$ and $\langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle$. A complex vector space of finite dimension with an inner product is called \"Hermitian\".
\nThe inner product is often noted $\langle u, v \rangle$ or $u \cdot v$.
\nTwo vectors $u$ and $v$ are orthogonal when $\langle u, v \rangle = 0$.
\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:InnProd","summary":"\n
Definition 16 (Inner product). Let $V$ be a vector space on $\mathbb{R}$. An inner product is a map $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ such that
\n
\n $u \mapsto \langle u, v \rangle$ and $v \mapsto \langle u, v \rangle$ are linear (bi-linearity)
\n $\langle u, v \rangle = \langle v, u \rangle$ (symmetry)
\n $\langle v, v \rangle > 0$ for $v \neq 0$ (positive definiteness).
\n
\n
A real vector space of finite dimension with an inner product is called \"Euclidean\".
\nWhen $V$ is a vector space on $\mathbb{C}$, an inner product is a map $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{C}$ such that
\n
\n $u \mapsto \langle u, v \rangle$ is linear
\n $\langle u, v \rangle = \overline{\langle v, u \rangle}$ (Hermitian symmetry)
\n $\langle v, v \rangle > 0$ for $v \neq 0$ (positive definiteness).
\n
\n
The Hermitian symmetry implies that $\langle v, v \rangle$ is real, hence the last condition is meaningful. The linearity in the first argument and the Hermitian symmetry imply that $\langle u, \lambda v \rangle = \bar{\lambda} \langle u, v \rangle$ and $\langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle$. A complex vector space of finite dimension with an inner product is called \"Hermitian\".
\nThe inner product is often noted $\langle u, v \rangle$ or $u \cdot v$.
\nTwo vectors $u$ and $v$ are orthogonal when $\langle u, v \rangle = 0$.
\n
","hasSummary":false,"hasTitle":true,"title":"Inner product","height":198,"width":980},"classes":"l0","position":{"x":13309.174662116091,"y":8564.514983015952}},{"group":"nodes","data":{"id":"def:Norm","name":"definition","text":"\n
Definition 17 (Norm). An inner product defines a vector norm by $\|v\| = \sqrt{\langle v, v \rangle}$.
\n
The norm of a vector is its distance to $0$. The distance between $u$ and $v$ is the distance of $u - v$ to $0$: $d(u, v) = \|u - v\|$.
\n
Important formula: $\|u + v\|^2 = \|u\|^2 + 2\, \mathrm{Re}\langle u, v \rangle + \|v\|^2$.
\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:Norm","summary":"\n
Definition 17 (Norm). An inner product defines a vector norm by $\|v\| = \sqrt{\langle v, v \rangle}$.
\n
The norm of a vector is its distance to $0$. The distance between $u$ and $v$ is the distance of $u - v$ to $0$: $d(u, v) = \|u - v\|$.
\n
Important formula: $\|u + v\|^2 = \|u\|^2 + 2\, \mathrm{Re}\langle u, v \rangle + \|v\|^2$.
\n
","hasSummary":false,"hasTitle":true,"title":"Norm","height":198,"width":980},"classes":"l0","position":{"x":12030.956561613142,"y":8376.704379614777}},{"group":"nodes","data":{"id":"def:OrthBas","name":"definition","text":"\n
Definition 18 (Orthonormal basis). A basis $(e_1, \ldots, e_n)$ is orthonormal when $\langle e_i, e_j \rangle = 1$ if $i = j$ and $0$ otherwise.
\n
The $i$-th coordinate of the vector $v$ in the orthonormal basis is $x_i = \langle v, e_i \rangle$. Hence we have $v = \sum_i \langle v, e_i \rangle e_i$. To prove it, write $v = \sum_j x_j e_j$ and take the inner product with each $e_i$.
\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:OrthBas","summary":"\n
Definition 18 (Orthonormal basis). A basis $(e_1, \ldots, e_n)$ is orthonormal when $\langle e_i, e_j \rangle = 1$ if $i = j$ and $0$ otherwise.
\n
The $i$-th coordinate of the vector $v$ in the orthonormal basis is $x_i = \langle v, e_i \rangle$. Hence we have $v = \sum_i \langle v, e_i \rangle e_i$. To prove it, write $v = \sum_j x_j e_j$ and take the inner product with each $e_i$.
\n
","hasSummary":false,"hasTitle":true,"title":"Orthonormal basis","height":198,"width":980},"classes":"l0","position":{"x":12831.735871276205,"y":7495.149102832393}},{"group":"nodes","data":{"id":"def:OrthFree","name":"definition","text":"\n
Definition 19 (Orthogonal implies free). If non null vectors $v_1, \ldots, v_n$ are orthogonal, they are free. To prove it, assume that $\lambda_1 v_1 + \cdots + \lambda_n v_n = 0$. Taking the inner product with $v_1$, by linearity $\sum_i \lambda_i \langle v_i, v_1 \rangle = 0$. Also, since $\langle v_i, v_1 \rangle = 0$ for all $i \neq 1$, we get $\lambda_1 \|v_1\|^2 = 0$, hence $\lambda_1 = 0$. We can repeat for each $v_i$, which shows the result.
\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:OrthFree","summary":"\n
Definition 19 (Orthogonal implies free). If non null vectors $v_1, \ldots, v_n$ are orthogonal, they are free. To prove it, assume that $\lambda_1 v_1 + \cdots + \lambda_n v_n = 0$. Taking the inner product with $v_1$, by linearity $\sum_i \lambda_i \langle v_i, v_1 \rangle = 0$. Also, since $\langle v_i, v_1 \rangle = 0$ for all $i \neq 1$, we get $\lambda_1 \|v_1\|^2 = 0$, hence $\lambda_1 = 0$. We can repeat for each $v_i$, which shows the result.
\n
","hasSummary":false,"hasTitle":true,"title":"Orthogonal implies free","height":198,"width":980},"classes":"l0","position":{"x":13917.03077715624,"y":7833.184196598475}},{"group":"nodes","data":{"id":"def:InnMat","name":"definition","text":"\n
Definition 20 (Inner product matrix). Let $(v_1, \ldots, v_n)$ be a basis. Let $M$ be the matrix whose elements are $M_{ij} = \langle v_i, v_j \rangle$. $M$ is called the matrix of the inner product. The inner product between two vectors $u$ and $v$ is given by $\langle u, v \rangle = x^T M y$, where $x$ and $y$ are the coordinate vectors of $u$ and $v$. The proof is an exercise.
\nConsider a new basis $(v'_1, \ldots, v'_n)$ and let $P$ be the matrix whose columns are the coordinates of the vectors $v'_j$ in the original basis. Let $x'$ and $y'$ be the coordinates of $u$ and $v$ in the new basis. Recall that $x = P x'$. Hence we have $\langle u, v \rangle = x'^T P^T M P y'$, and the matrix of the inner product in the new basis is $M' = P^T M P$. Note the difference with the change of basis of linear maps.
\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:InnMat","summary":"\n
Definition 20 (Inner product matrix). Let $(v_1, \ldots, v_n)$ be a basis. Let $M$ be the matrix whose elements are $M_{ij} = \langle v_i, v_j \rangle$. $M$ is called the matrix of the inner product. The inner product between two vectors $u$ and $v$ is given by $\langle u, v \rangle = x^T M y$, where $x$ and $y$ are the coordinate vectors of $u$ and $v$. The proof is an exercise.
\nConsider a new basis $(v'_1, \ldots, v'_n)$ and let $P$ be the matrix whose columns are the coordinates of the vectors $v'_j$ in the original basis. Let $x'$ and $y'$ be the coordinates of $u$ and $v$ in the new basis. Recall that $x = P x'$. Hence we have $\langle u, v \rangle = x'^T P^T M P y'$, and the matrix of the inner product in the new basis is $M' = P^T M P$. Note the difference with the change of basis of linear maps.
\n
","hasSummary":false,"hasTitle":true,"title":"Inner product matrix","height":198,"width":980},"classes":"l0","position":{"x":14654.663734697007,"y":8283.979878399488}},{"group":"nodes","data":{"id":"def:OrthProj","name":"definition","text":"\n
Definition 21 (Orthogonal projection). Let $V$ be a vector space with an inner product and $W$ be a vector subspace of $V$. Let $v \in V$ and $p \in W$. The condition $\langle v - p, w \rangle = 0$ for all $w \in W$ is equivalent to $\|v - p\| = \min_{w \in W} \|v - w\|$. The proof is an exercise. When the condition is verified, $p$ is called the orthogonal projection of $v$.
\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:OrthProj","summary":"\n
Definition 21 (Orthogonal projection). Let $V$ be a vector space with an inner product and $W$ be a vector subspace of $V$. Let $v \in V$ and $p \in W$. The condition $\langle v - p, w \rangle = 0$ for all $w \in W$ is equivalent to $\|v - p\| = \min_{w \in W} \|v - w\|$. The proof is an exercise. When the condition is verified, $p$ is called the orthogonal projection of $v$.
\n
","hasSummary":false,"hasTitle":true,"title":"Orthogonal projection","height":198,"width":980},"classes":"l0","position":{"x":10762.815282086516,"y":8366.455920824794}},{"group":"nodes","data":{"id":"rem:InnOrthBas","name":"remark","text":"","parent":"subsec:InnProd","rank":"0","html_name":"rem:InnOrthBas","summary":"","hasSummary":false,"hasTitle":true,"title":"Inner product in orthogonal basis","height":297,"width":980},"classes":"l0","position":{"x":14510.027075352717,"y":7067.834403688812}},{"group":"nodes","data":{"id":"rem:OrthProjBas","name":"remark","text":"","parent":"subsec:InnProd","rank":"0","html_name":"rem:OrthProjBas","summary":"","hasSummary":false,"hasTitle":true,"title":"Projection in an orthonormal basis","height":297,"width":980},"classes":"l0","position":{"x":10782.596613956468,"y":6946.372554210607}},{"group":"nodes","data":{"id":"def:ConvCont","name":"definition","text":"\n
Definition 23 (Convolution of $L^1(\mathbb{R})$). Let $f, g \in L^1(\mathbb{R})$; the convolution is defined as $(f * g)(x) = \int_{\mathbb{R}} f(t) g(x - t)\, dt$.
\n
","parent":"subsec:Conv","rank":"0","html_name":"def:ConvCont","summary":"\n
Definition 23 (Convolution of ).
\n
","hasSummary":true,"hasTitle":true,"title":"Convolution of ","height":385,"width":980},"classes":"l0","position":{"x":3656.171782602364,"y":6611.864062671482}},{"group":"nodes","data":{"id":"def:ConvDisc","name":"definition","text":"\n
Definition 24 (Discrete convolution). Let $u$ and $v$ be complex bilateral sequences: $u, v : \mathbb{Z} \to \mathbb{C}$.
\n
$(u * v)_n = \sum_{k \in \mathbb{Z}} u_k v_{n-k}$
\n
This definition can be obtained from the continuous case using distributions. Let $f = \sum_n u_n \delta_{nT}$ and $g = \sum_n v_n \delta_{nT}$, where $f$ and $g$ play the role of the functions from the continuous definition and $T$ is a sampling period. Then
\n
\n
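A small sketch with finitely supported sequences, using numpy's convolution (the index offsets are an assumption of the sketch):
import numpy as np
u = np.array([1.0, 2.0, 3.0])     # u_{-1}, u_0, u_1 (finitely supported bilateral sequence)
v = np.array([0.5, 0.5])          # v_0, v_1
w = np.convolve(u, v)             # (u * v)_n = sum_k u_k v_{n-k}
print(w)                          # [0.5 1.5 2.5 1.5], carried by indices n = -1, 0, 1, 2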
","parent":"subsec:Conv","rank":"0","html_name":"def:ConvDisc","summary":"\n
Definition 24 (Discrete convolution ).
\n
","hasSummary":true,"hasTitle":true,"title":"Discrete convolution ","height":366,"width":980},"classes":"l0","position":{"x":3646.2432397097928,"y":7331.0800098771215}},{"group":"nodes","data":{"id":"def:ConvArr","name":"definition","text":"\n
Definition 25 (Convolution of arrays). (multiplication of polynomials):
\nLet $u$ and $v$ be two complex arrays of size $m$ and $n$, with first index $0$. Their convolution is an array of size $m + n - 1$, $(u * v)_k = \sum_{i + j = k} u_i v_j$. Remark: this definition coincides with the multiplication of polynomials.
\n
\n
\n
This definition can also be deduced from the discrete convolution by defining $\tilde{u}$ and $\tilde{v}$ to be the sequences extending $u$ and $v$ by $0$ on all integers:
\n
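The coincidence with polynomial multiplication can be checked directly (the coefficients are illustrative):
import numpy as np
u = np.array([1.0, 2.0, 3.0])                    # coefficients of 1 + 2x + 3x^2
v = np.array([4.0, 5.0])                         # coefficients of 4 + 5x
print(np.convolve(u, v))                         # [ 4. 13. 22. 15.], size 3 + 2 - 1
print(np.polynomial.polynomial.polymul(u, v))    # same array: product of the two polynomials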
","parent":"subsec:Conv","rank":"0","html_name":"def:ConvArr","summary":"\n
Definition 25 (Convolution of arrays ).
\n
","hasSummary":true,"hasTitle":true,"title":"Convolution of arrays ","height":435,"width":1235},"classes":"l0","position":{"x":3613.8255533884385,"y":8154.398100232943}},{"group":"nodes","data":{"id":"def:ConvCirc","name":"definition","text":"\n
Definition 26 (Circular convolution, or convolution of periodic functions). Let $f$ and $g$ be periodic complex functions of period $a$. Their convolution is defined as
\n
$(f \circledast g)(x) = \int_0^a f(t) g(x - t)\, dt$
\n
It is possible to express the circular convolution using the linear convolution. Let $f$ and $g$ be periodic functions and $\mathbb{1}_{[0,a)}$ the indicator function (gate function) of $[0, a)$,
\n
$(f \circledast g)(x) = ((f \cdot \mathbb{1}_{[0,a)}) * g)(x)$
\n
This circular convolution can also be expressed for functions $f, g$ defined on $[0, a)$, as
\n
","parent":"subsec:Conv","rank":"0","html_name":"def:ConvCirc","summary":"\n
Definition 26 (Circular convolution, or convolution of ).
\n
","hasSummary":true,"hasTitle":true,"title":"Circular convolution, or convolution of ","height":431,"width":980},"classes":"l0","position":{"x":4883.247163725782,"y":7043.810457638875}},{"group":"nodes","data":{"id":"def:ConvCircDisc","name":"definition","text":"\n
Definition 27 (Discrete circular convolution). Let $u$ and $v$ be two complex arrays of the same size $N$. Their circular convolution is defined as $(u \circledast v)_n = \sum_{k=0}^{N-1} u_k v_{(n - k) \bmod N}$.
\n
\n
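A direct implementation of the formula (the array values are illustrative):
import numpy as np
def circ_conv(u, v):
    N = len(u)
    return np.array([sum(u[k] * v[(n - k) % N] for k in range(N)) for n in range(N)])
u = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([1.0, 0.0, 0.0, 1.0])
print(circ_conv(u, v))    # [3. 5. 7. 5.]: each entry is u_n + u_{(n+1) mod 4}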
","parent":"subsec:Conv","rank":"0","html_name":"def:ConvCircDisc","summary":"\n
Definition 27 (Discrete circular convolution ).
\n
","hasSummary":true,"hasTitle":true,"title":"Discrete circular convolution ","height":422,"width":1114},"classes":"l0","position":{"x":4842.945542025731,"y":7960.554286134175}},{"group":"nodes","data":{"id":"rem:ConvArrCirc","name":"remark","text":"","parent":"subsec:Conv","rank":"0","html_name":"rem:ConvArrCirc","summary":"","hasSummary":true,"hasTitle":true,"title":"Array convolution as a circular convolution","height":322,"width":980},"classes":"l0","position":{"x":2287.4595593855984,"y":7685.409631665113}},{"group":"nodes","data":{"id":"def:FT","name":"definition","text":"\n
Definition 28 (Fourier transform). Remark: there exist different conventions for the normalization coefficients and the argument of the exponential.
\n
\nFourier
\n,
\nInverse Fourier
\n,
\n
\n
\n
For functions in $L^2(\mathbb{R})$ the Fourier transform can be expressed as a limit of Fourier coefficients,
\n
where $\lfloor \cdot \rfloor$ is the integer part and $f_a$ is the function $f$ restricted to the interval $[-a/2, a/2]$. Note that since the Fourier coefficients can be defined from the DFT, the Fourier transform can also be defined from the DFT.
\n
","parent":"subsec:F","rank":"0","html_name":"def:FT","summary":"\n
Definition 28 (Fourier transform).
\n
\nFourier
\n,
\nInverse Fourier
\n,
\n
\n
","hasSummary":true,"hasTitle":true,"title":"Fourier transform","height":808,"width":980},"classes":"l0","position":{"x":8575.29693013847,"y":6650.129022364675}},{"group":"nodes","data":{"id":"def:FS","name":"definition","text":"\n
Definition 29 (Fourier series).
\n
\nFourier
\nFor $n \in \mathbb{Z}$, let $e_n$ be the $a$-periodic functions defined by $e_n(x) = e^{2i\pi n x / a}$. They are normed and orthogonal to each other for the scalar product $\langle f, g \rangle = \frac{1}{a} \int_0^a f(t) \overline{g(t)}\, dt$. They form a \"Hilbert orthonormal basis\" of the vector space of complex functions $a$-periodic and such that $\int_0^a |f|^2 < \infty$. The adjective \"Hilbert\" refers to the fact that it has an infinite (countable) number of vectors, unlike traditional bases. The Fourier coefficients are the coefficients in that basis. Let $f$ be $a$-periodic,
\nInverse Fourier
\n,
\n
\n
The Fourier coefficients can also be defined for non periodic functions .
\n
\n
The Fourier coefficients can be obtained from the Fourier transform in the following way. Let $g$ be the indicator (gate function) of the interval $[0, a]$,
\n
\n
\n
The Fourier coefficients can also be obtained as a limit of the DFT. Let
\n
\n
Then
\n
The last line is obtained after noticing that and .
\n
","parent":"subsec:F","rank":"0","html_name":"def:FS","summary":"\n
Definition 29 (Fourier series).
\n
\nFourier
\n
\nInverse Fourier
\n
\n
\n
","hasSummary":true,"hasTitle":true,"title":"Fourier series","height":693,"width":980},"classes":"l0","position":{"x":7408.464074026597,"y":7271.025032550924}},{"group":"nodes","data":{"id":"def:DFT","name":"definition","text":"\n
Definition 30 (Discrete Fourier Transform (DFT)).
\n
\nFourier
\nLet $N \in \mathbb{N}$. Let $e_k$ be the function $e_k(n) = e^{2i\pi k n / N}$. The $e_k$ with $k \in \{0, \ldots, N-1\}$ form an orthonormal basis of the functions $\{0, \ldots, N-1\} \to \mathbb{C}$ for the scalar product $\langle u, v \rangle = \frac{1}{N} \sum_{n=0}^{N-1} u_n \overline{v_n}$.
\n\nUp to a scaling factor, the DFT is the decomposition in that basis. Let $u$ be an array of size $N$, $\hat{u}_k = \sum_{n=0}^{N-1} u_n e^{-2i\pi k n / N}$.
\nInverse Fourier
\nLet $\hat{u}$ be an array of size $N$, $u_n = \frac{1}{N} \sum_{k=0}^{N-1} \hat{u}_k e^{2i\pi k n / N}$.
\n
\n
Remarks: the normalization coefficient $\frac{1}{N}$ appears here in the inverse DFT. This is a question of convention; it could be put in the direct DFT instead, to be more consistent with the Fourier series formulation.
\n
\n
\n
The DFT can be defined from Fourier series or Fourier transform using distributions. Let be an arbitrary sampling period and be the distribution
\n
When is seen as supported in , we have the following relation with the Fourier series coefficients:
\n
Note that the factor appears only due to the difference of convention between the Fourier series and the DFT. When $f$ is seen as supported on $\mathbb{R}$, we can write
\n
\n
where the Fourier transform is generalized to distributions. Hence the frequency of the DFT relates to the frequency of the Fourier transform.
\n
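A sketch of the definition against numpy's implementation, which uses the same convention (no factor in the direct DFT, $1/N$ in the inverse):
import numpy as np
x = np.array([1.0, 2.0, 0.0, -1.0])
N = len(x)
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)   # matrix of the direct DFT
X = F @ x
print(np.allclose(X, np.fft.fft(x)))           # True
print(np.allclose(x, np.conj(F) @ X / N))      # True: the 1/N factor sits in the inverse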
","parent":"subsec:F","rank":"0","html_name":"def:DFT","summary":"\n
Definition 30 (Discrete Fourier Transform (DFT)).
\n
\nFourier
\nThe letter $a$ stands for \"array\".
\nInverse Fourier
\nLet ,
\n
\n
","hasSummary":true,"hasTitle":true,"title":"Discrete Fourier Transform (DFT)","height":781,"width":1278},"classes":"l0","position":{"x":8486.033031064639,"y":8179.06956864164}},{"group":"nodes","data":{"id":"rem:GenRemFour","name":"remark","text":"","parent":"subsec:F","rank":"0","html_name":"rem:GenRemFour","summary":"","hasSummary":true,"hasTitle":true,"title":"General remarks","height":492,"width":980},"classes":"l0","position":{"x":8593.468807795278,"y":8930.27275164237}},{"group":"nodes","data":{"id":"th:prodTF","name":"theorem","text":"\n
Theorem 6 (Fourier Transform). Let $f, g \in L^1(\mathbb{R})$. We have $\widehat{f * g} = \hat{f}\, \hat{g}$, up to a normalization constant fixed by the convention, where the convolution is defined as the convolution of $L^1(\mathbb{R})$.
\n
","parent":"subsec:CT","rank":"0","html_name":"th:prodTF","summary":"\n
Theorem 6 (Fourier Transform). The Fourier transform turns convolution on $\mathbb{R}$ into a product.
\n
","hasSummary":true,"hasTitle":true,"title":"Fourier Transform","height":276,"width":980},"classes":"l0","position":{"x":6261.952606954309,"y":6691.136811077134}},{"group":"nodes","data":{"id":"th:prodS","name":"theorem","text":"\n
Theorem 7 (Fourier series). Let $f$ and $g$ be periodic functions. Then the Fourier coefficients of the circular convolution $f \circledast g$ are, up to a normalization constant, the products $c_n(f)\, c_n(g)$, and the Fourier coefficients of the product $fg$ are the discrete convolution $(c(f) * c(g))_n$, where $\circledast$ is a circular convolution and $*$ is a discrete convolution.
\n
","parent":"subsec:CT","rank":"0","html_name":"th:prodS","summary":"\n
Theorem 7 (Fourier series). Fourier series: circular ($\circledast$) / discrete ($*$) convolution.
\n
","hasSummary":true,"hasTitle":true,"title":"Fourier series","height":279,"width":980},"classes":"l0","position":{"x":6258.240960505095,"y":7355.199286445368}},{"group":"nodes","data":{"id":"th:prodDFT","name":"theorem","text":"\n
Theorem 8 (DFT). Let $u$ and $v$ be complex arrays of size $N$. Then we have $\widehat{u \circledast v} = \hat{u}\, \hat{v}$, where the convolution is the discrete circular convolution.
\n
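A direct numerical check of the statement (random arrays assumed as test data):
import numpy as np
rng = np.random.default_rng(2)
N = 8
u, v = rng.standard_normal(N), rng.standard_normal(N)
circ = np.array([sum(u[k] * v[(n - k) % N] for k in range(N)) for n in range(N)])
print(np.allclose(np.fft.fft(circ), np.fft.fft(u) * np.fft.fft(v)))   # True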
","parent":"subsec:CT","rank":"0","html_name":"th:prodDFT","summary":"\n
Theorem 8 (DFT). Discrete Fourier transform: circular (discrete) convolution becomes a product.
\n
","hasSummary":true,"hasTitle":true,"title":"DFT","height":276,"width":980},"classes":"l0","position":{"x":6267.830374915837,"y":7848.467819309728}},{"group":"nodes","data":{"id":"rem:GenRemProd","name":"remark","text":"","parent":"subsec:CT","rank":"0","html_name":"rem:GenRemProd","summary":"","hasSummary":true,"hasTitle":true,"title":"General remarks","height":273,"width":980},"classes":"l0","position":{"x":6267.391601097236,"y":6287.057872604234}},{"group":"nodes","data":{"id":"th:TranFunDer","name":"theorem","text":"\n
Theorem 12 (Transfer function of the derivative). Consider the derivation operator $D$ on functions defined on $\mathbb{R}$. The derivative of $e_a(x) = e^{ax}$ exists for all $x$ and $a$, and $D e_a = a e_a$. Hence, the transfer function of $D$ is simply $H(a) = a$.
\n
It can be checked that the function $H$ does not admit an inverse Fourier transform (or inverse Fourier series), in the sense of functions. Hence we cannot write $Df = g * f$, at least in the sense of functions.
\n
","parent":"subsec:DiffOp","rank":"0","html_name":"th:TranFunDer","summary":"\n
Theorem 12 (Transfer function of the derivative). The transfer function of the derivation $D$ is $H(a) = a$.
\n
","hasSummary":true,"hasTitle":true,"title":"Transfer function of the derivative","height":332,"width":980},"classes":"l0","position":{"x":7387.452722020224,"y":2315.164086156882}},{"group":"nodes","data":{"id":"def:Der","name":"definition","text":"\n
Definition 33 (Derivative). We say that a function $f$ is differentiable at $x$ if $\lim_{h \to 0} \frac{f(x + h) - f(x)}{h}$ exists. We then write $f'(x)$ for this limit, and $f'(x)$ is called the derivative of $f$ at point $x$.
\nEquivalently, we can say that $f$ is differentiable at $x$ when there exist $l$ and $\epsilon$ with $f(x + h) = f(x) + l h + h\, \epsilon(h)$, where $\epsilon$ is such that $\epsilon(h) \to 0$ when $h \to 0$. We then have $l = f'(x)$. It can be checked that the derivation is a linear operator, invariant by translations. The derivative is often noted $\frac{df}{dx}$.
\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:Der","summary":"\n
Definition 33 (Derivative).
\n
","hasSummary":true,"hasTitle":true,"title":"Derivative","height":332,"width":980},"classes":"l0","position":{"x":8735.929814648183,"y":2176.035907257376}},{"group":"nodes","data":{"id":"def:DirDer","name":"definition","text":"\n
Definition 34 (Directional derivative). Let $f : \mathbb{R}^n \to \mathbb{R}$. The directional derivative of $f$ at $x$ in the direction $v$ is defined, when it exists, by $D_v f(x) = \lim_{h \to 0} \frac{f(x + h v) - f(x)}{h}$. It is also called a 'partial' derivative. For a function $f(x_1, \ldots, x_n)$, note $x_i$ the $i$-th coordinate of vectors. The directional derivative in the direction of the $i$-th basis vector is noted $\frac{\partial f}{\partial x_i}$.
\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:DirDer","summary":"\n
Definition 34 (Directional derivative ).
\n
","hasSummary":true,"hasTitle":true,"title":"Directional derivative ","height":343,"width":980},"classes":"l0","position":{"x":9163.85066889793,"y":1299.503587539544}},{"group":"nodes","data":{"id":"def:Differ1","name":"definition","text":"\n
Definition 35 (Differentials $\mathbb{R}^n \to \mathbb{R}$). Let $f : \mathbb{R}^n \to \mathbb{R}$. We say that $f$ is differentiable at $x$ if there exist a linear map $d_x f$ and a function $\epsilon$ such that $f(x + v) = f(x) + d_x f(v) + \|v\|\, \epsilon(v)$, where $\epsilon$ is such that $\epsilon(v) \to 0$ when $v \to 0$. $d_x f$ is called the differential of $f$ at $x$.
\nIt can be checked that $d_x f(v)$ is the directional derivative in the direction $v$. Hence the matrix of $d_x f$ in the canonical basis of $\mathbb{R}^n$ is given by $\left(\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}\right)$.
\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:Differ1","summary":"\n
Definition 35 (Differentials ).
\n
","hasSummary":true,"hasTitle":true,"title":"Differentials ","height":332,"width":980},"classes":"l0","position":{"x":8568.119567542746,"y":755.3430877744534}},{"group":"nodes","data":{"id":"def:DifferM","name":"definition","text":"\n
Definition 36 (Differentials $\mathbb{R}^n \to \mathbb{R}^m$). The definition of the differential in the case $f : \mathbb{R}^n \to \mathbb{R}$ can be directly generalized to the case $f : \mathbb{R}^n \to \mathbb{R}^m$. We say that $f$ is differentiable at $x$ if there exist a linear map $d_x f$ and a function $\epsilon$ such that $f(x + v) = f(x) + d_x f(v) + \|v\|\, \epsilon(v)$, where $\epsilon$ is such that $\epsilon(v) \to 0$ when $v \to 0$. $d_x f$ is called the differential of $f$ at $x$.
\nAssume that $f$ is differentiable. Note $f_i$ the $i$-th coordinate of $f$ in the canonical basis of $\mathbb{R}^m$. $f_i$ is a function $\mathbb{R}^n \to \mathbb{R}$. Let $(e_1, \ldots, e_n)$ be the canonical basis of $\mathbb{R}^n$. The matrix of $d_x f$ in the canonical bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ is $\left(\frac{\partial f_i}{\partial x_j}\right)_{ij}$.
\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:DifferM","summary":"\n
Definition 36 (Differentials ).
\n
","hasSummary":true,"hasTitle":true,"title":"Differentials ","height":298,"width":980},"classes":"l0","position":{"x":8558.562142572166,"y":194.07915788554078}},{"group":"nodes","data":{"id":"def:GradCon","name":"definition","text":"\n
Definition 37 (Gradient (!add Riesz to inner prod + cauchy schwartz!)). Let $\langle \cdot, \cdot \rangle$ be the canonical inner product of $\mathbb{R}^n$. If $f$ is differentiable at $x$, $d_x f$ is a linear map from $\mathbb{R}^n$ to $\mathbb{R}$. Hence there is a vector, noted $\nabla f(x)$, such that $d_x f(v) = \langle \nabla f(x), v \rangle$. $\nabla f(x)$ is called the 'gradient' of $f$ at $x$. In the canonical basis the gradient vector is $\left(\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}\right)$. Let $v$ be a vector of coordinates $(v_1, \ldots, v_n)$. We have that $d_x f(v) = \sum_i \frac{\partial f}{\partial x_i} v_i$.
\n
The gradient is the direction of steepest variation of the function. This can be shown as follows. The variation of the function between $x$ and $x + v$ is approximated by $d_x f(v) = \langle \nabla f(x), v \rangle$. The Cauchy-Schwarz inequality implies that the vectors of unit norm ($\|v\| = 1$) which maximize $|\langle \nabla f(x), v \rangle|$ are such that $v$ and $\nabla f(x)$ are colinear, that is to say $v = \nabla f(x) / \|\nabla f(x)\|$ or $v = -\nabla f(x) / \|\nabla f(x)\|$.
\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:GradCon","summary":"\n
Definition 37 (Gradient (!add Riesz to inner prod + cauchy schwartz!)).
\n
","hasSummary":true,"hasTitle":true,"title":"Gradient (!add Riesz to inner prod + cauchy schwartz!)","height":658,"width":980},"classes":"l0","position":{"x":7425.253545827187,"y":729.7808136044371}},{"group":"nodes","data":{"id":"def:nthDer","name":"definition","text":"\n
Definition 38 (n-th order derivative). Composing the derivation operator $n$ times gives the '$n$-th order derivative', noted $f^{(n)}$ or $\frac{d^n f}{dx^n}$. A function for which the $n$-th order derivative exists is called 'differentiable at the order $n$'.
\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:nthDer","summary":"\n
Definition 38 (n-th order derivative). Composing the derivation operator times gives the ’n-th order derivative’:
\n
","hasSummary":true,"hasTitle":true,"title":"n-th order derivative","height":440,"width":980},"classes":"l0","position":{"x":10366.270358909627,"y":1364.0686261068784}},{"group":"nodes","data":{"id":"def:Lap","name":"definition","text":"\n
Definition 39 (Laplacian). Let $f$ be a function $\mathbb{R}^n \to \mathbb{R}$, twice differentiable. The Laplacian operator is defined as $\Delta f = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x_i^2}$. In particular, in dimensions $1$ and $2$ we get $\Delta f = f''$ and $\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$.
\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:Lap","summary":"\n
Definition 39 (Laplacian). In dimensions $1$ and $2$ we get $\Delta f = f''$ and $\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$.
\n
","hasSummary":true,"hasTitle":true,"title":"Laplacian","height":530,"width":980},"classes":"l0","position":{"x":9801.78921427882,"y":-302.15845593828703}},{"group":"nodes","data":{"id":"th:TranFunDisDer","name":"theorem","text":"\n
Theorem 13 (Transfer function of discrete derivatives). In the following, we define the discrete derivative as $Df(n) = f(n+1) - f(n)$. The discussion can be adapted to the other definitions. Consider the function $e_\omega(n) = e^{i \omega n}$. We have that $e_\omega(n+1) = e^{i\omega} e_\omega(n)$, hence $D e_\omega = (e^{i\omega} - 1) e_\omega$, and the transfer function is
\n
$H(i\omega) = e^{i\omega} - 1$.
\n
Note that the transfer function of the continuous derivation, $H(i\omega) = i\omega$, seems very different. However, consider the case where the samples of the discrete function are separated by $T$ instead of $1$. The discrete derivation becomes $D_T f(n) = \frac{f(n+1) - f(n)}{T}$ and the transfer function $H_T(i\omega) = \frac{e^{i\omega T} - 1}{T}$. We then have $H_T(i\omega) \to i\omega$ when $T \to 0$. Hence for all $\omega$, the transfer function of the discrete case gets closer and closer to the one of the continuous case when $T \to 0$. It is interesting to note that for a fixed $T$, the values of $H_T(i\omega)$ and $i\omega$ are closer and closer when $\omega \to 0$, and farther and farther apart when $\omega$ grows. This means that discrete and continuous derivations tend to agree for functions containing only low frequencies and give very different results for functions containing high frequencies. This is not a surprising behavior.
\n
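A numerical check of the eigenvalue relation for the forward difference (the frequency grid is an assumption of the sketch):
import numpy as np
N = 64
k = 3
w = 2 * np.pi * k / N
e = np.exp(1j * w * np.arange(N))
De = np.roll(e, -1) - e          # forward difference f(n+1) - f(n), with periodic wrap-around
H = np.exp(1j * w) - 1           # claimed transfer function value
print(np.allclose(De, H * e))    # True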
","parent":"subsec:DisDifOp","rank":"0","html_name":"th:TranFunDisDer","summary":"\n
Theorem 13 (Transfer function of discrete derivatives). The transfer function of the discrete derivative $Df(n) = f(n+1) - f(n)$ is $H(i\omega) = e^{i\omega} - 1$.
\n
","hasSummary":true,"hasTitle":true,"title":"Transfer function of discrete derivatives","height":386,"width":980},"classes":"l0","position":{"x":6104.503000925931,"y":2361.81373466607}},{"group":"nodes","data":{"id":"def:DisDer","name":"definition","text":"\n
Definition 40 (Discrete derivative). The discrete derivative is the analog of the continuous derivation but for functions defined on $\mathbb{Z}$ or $\mathbb{Z}/N\mathbb{Z}$. There are several ways to define it. Let $f : \mathbb{Z} \to \mathbb{C}$; the most common definitions are $Df(n) = f(n+1) - f(n)$, $Df(n) = f(n) - f(n-1)$, and $Df(n) = \frac{f(n+1) - f(n-1)}{2}$.
\n
","parent":"subsec:DisDifOp","rank":"0","html_name":"def:DisDer","summary":"\n
Definition 40 (Discrete derivative). Let $f : \mathbb{Z} \to \mathbb{C}$. A possible definition of the derivative is $Df(n) = f(n+1) - f(n)$.
\n
","hasSummary":true,"hasTitle":true,"title":"Discrete derivative","height":332,"width":980},"classes":"l0","position":{"x":6153.406129183674,"y":1843.942960056564}},{"group":"nodes","data":{"id":"def:GradDis","name":"definition","text":"\n
Definition 41 (Discrete gradient). Remark: as there are many ways to define the discrete derivative, there are many different ways to define the gradient. They can all be seen as approximations of the continuous definition. We assume here that the discrete derivative is defined as $Df(n) = f(n+1) - f(n)$. The discrete gradient is defined in the same way as the continuous gradient: it is a vector that contains the directional derivatives in each coordinate.
\n
\none dimension
\nFor a function $f : \mathbb{Z} \to \mathbb{R}$, the discrete gradient is the same as the discrete derivative. When the function is defined on a finite set, $\{0, \ldots, N-1\}$, this gradient function is only defined without ambiguity on $\{0, \ldots, N-2\}$. If we want to define the gradient everywhere, the value at the last point depends on a border convention. The same remark holds for higher dimensions.
\ntwo dimensions
\nFor a function $f : \mathbb{Z}^2 \to \mathbb{R}$, the gradient is a function which associates a vector of size $2$ to each point of the domain of $f$:
\narbitrary dimensions
\nFor a function $f : \mathbb{Z}^d \to \mathbb{R}$, the gradient becomes
\n
\n
","parent":"subsec:DisDifOp","rank":"0","html_name":"def:GradDis","summary":"\n
Definition 41 (Discrete gradient).
\n
","hasSummary":true,"hasTitle":true,"title":"Discrete gradient","height":374,"width":980},"classes":"l0","position":{"x":6150.334606657032,"y":716.1554667660994}},{"group":"nodes","data":{"id":"def:GradMask","name":"definition","text":"\n
Definition 42 (Derivative masks). We address the case of functions defined on $\mathbb{Z}$. For functions defined on $\mathbb{Z}/N\mathbb{Z}$, see the remark Remark 10 from the section on translation invariant operators.
\nLet $f : \mathbb{Z} \to \mathbb{C}$, and let $d$ be the sequence with $d_{-1} = 1$, $d_0 = -1$, and $d_n = 0$ everywhere else:
\n
\n
It is possible to check that the discrete derivative of $f$, $Df(n) = f(n+1) - f(n)$, can be rewritten as
\n
$Df = d * f$
\n
where $*$ is the discrete convolution. The mask $d$ should be adapted to each definition of $D$. When $f$ is a function $\mathbb{Z}^2 \to \mathbb{C}$, the directional derivatives can also be obtained using convolution masks. In the case $Df(n) = f(n+1) - f(n)$, the horizontal directional derivative is given by the convolution mask
\n
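A quick check of the mask identity on sample values (the test function is illustrative):
import numpy as np
f = np.array([0.0, 1.0, 4.0, 9.0, 16.0])   # f(n) = n^2 on a few points
mask = np.array([1.0, -1.0])               # difference mask
conv = np.convolve(f, mask)                # (f * mask)_n = f(n) - f(n-1), up to an index shift
print(conv[1:-1])                          # [1. 3. 5. 7.]
print(f[1:] - f[:-1])                      # the same differences f(n+1) - f(n)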
","parent":"subsec:DisDifOp","rank":"0","html_name":"def:GradMask","summary":"\n
Definition 42 (Derivative masks). Let The construction can be adapted to the case and to the multi-dimensional case.
\n
","hasSummary":true,"hasTitle":true,"title":"Derivative masks","height":488,"width":980},"classes":"l0","position":{"x":4736.114697313205,"y":1752.8311378577023}},{"group":"nodes","data":{"id":"def:ConfSp","name":"definition","text":"\n
Definition 48 (Configuration space / universe). The configuration space is usually noted $\Omega$. As its name indicates, the set $\Omega$ contains all the possible configurations of a probabilistic model. Its elements are usually noted $\omega$, and we will call them 'elementary configurations'.
\nWarning: the name of the space and its elements varies across contexts and languages. $\Omega$ is often called the 'sample space' or 'universe', and the elements $\omega$ 'samples', 'outcomes' or 'realizations'.
\n
\n
Probabilistic models are very useful to analyse dice rolls and card games. What is a relevant configuration space
\n
\nto model a dice roll ?
\nto model dice rolls ?
\nto model $n$ dice rolls? What is the size of this configuration space? $\{1, \ldots, 6\}^n$, which is of size $6^n$.
\n
\n
What should we chose as configuration space when
\n
\nthe player draws one card ?
\nthe player draws one card, puts it back and draws another card ?
\nthe player draws two cards without putting them back? (ordered pairs of distinct cards, or unordered pairs)
\n
\n
Assume a player draws $k$ cards without putting them back in the deck. What is the size of the smallest configuration space describing the possible results? (A counting sketch is given at the end of this definition.)
\n
Note that when using Cartesian products, we have $|\Omega_1 \times \Omega_2| = |\Omega_1| \cdot |\Omega_2|$. The order between the card draws is taken into account. In card games, the order in which cards are drawn is often not important. In that case, when drawing $k$ cards among $n$, we have $\binom{n}{k}$ possible results.
\n
If we do not distinguish results up to permutations, what is the size of the configuration space when
\n
\nthe player draws one card ?
\nthe player draws cards, putting the card back in the deck before taking the next one ?
\nthe player draws cards without putting them back ?
\n
\n
In this course we are particularly interested in signal and image processing problems.
\n
What are the relevant configuration spaces when studying
\n
\nreal signals observed on points ?
\ncontinuous real signals of duration second ?
\n color images ?
\n
\n
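A counting sketch for the card questions above, using Python's math module (the deck size and number of draws are illustrative assumptions):
import math
deck, k = 52, 5
print(deck ** k)                      # ordered draws with replacement
print(math.perm(deck, k))             # ordered draws without replacement
print(math.comb(deck, k))             # unordered draws without replacement
print(math.comb(deck + k - 1, k))     # unordered draws with replacement (multisets)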
","parent":"subsec:ConfProb","rank":"0","html_name":"def:ConfSp","summary":"\n
Definition 48 (Configuration space / universe). The configuration space is usually noted . As its name indicates, the set contains all the possible configurations of a probabilistic model. Its elements are usually noted , and we will call them ’elementary configurations’.
\n
","hasSummary":true,"hasTitle":true,"title":"Configuration space / universe","height":468,"width":980},"classes":"l0","position":{"x":13393.03060830612,"y":5109.9275802798675}},{"group":"nodes","data":{"id":"def:Ev","name":"definition","text":"\n
Definition 49 (Events). When the configuration space $\Omega$ is endowed with a $\sigma$-algebra $\mathcal{A}$, the elements of $\mathcal{A}$ are called 'events'. For those unfamiliar with $\sigma$-algebras, 'events' can be defined as subsets of $\Omega$.
\nExamples:
\n
\nwhen is a card deck, ’the card is a diamond’, ’the card is a ’, are typical events.
\nwhen is a set of signals, the event describes all the signals starting with .
\n
\n
","parent":"subsec:ConfProb","rank":"0","html_name":"def:Ev","summary":"\n
Definition 49 (Events). When the configuration space is endowed with a -algebra , the elements of are called ’events’. For those unfamiliar with -algebras, ’events’ can be defined as subsets of .
\n
","hasSummary":true,"hasTitle":true,"title":"Events","height":425,"width":980},"classes":"l0","position":{"x":13427.542481071661,"y":4534.440071296909}},{"group":"nodes","data":{"id":"def:Prob","name":"definition","text":"\n
Definition 50 (Probability). A probability is a function which takes as input an event and returns a positive real number. It must verify the following axioms:
\n
\naxiom 1:
\naxiom 2:
\naxiom 3: for a countable number of disjoint events , .
\n
\n
As direct consequences, we have:
\n
\nconsequence 1:
\nconsequence 2:
\nconsequence 3: . Hence if then .
\n
\n
Axioms 2-3 and consequence 2 are precisely the axioms of measures, hence a probability is a measure.
\nInterpretation: the probability gives a notion of ’size’ to events. Given that the ’size’ of is , the size of events can be interpreted as the proportion they occupy in . Hence, the notion of probability can also be read as a notion of proportion.
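\nFor reference, a standard way to write such axioms and consequences (the numbering below may not match the lists above term for term): \[ P(\Omega) = 1, \qquad P(\emptyset) = 0, \qquad P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i) \ \text{ for disjoint events } A_i, \] \[ P(A^c) = 1 - P(A), \qquad A \subset B \implies P(A) \le P(B). \]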
\n
","parent":"subsec:ConfProb","rank":"0","html_name":"def:Prob","summary":"\n
Definition 50 (Probability). A probability is a measure on a set such that
\n
","hasSummary":true,"hasTitle":true,"title":"Probability","height":336,"width":980},"classes":"l0","position":{"x":13469.107194598806,"y":3928.411817425082}},{"group":"nodes","data":{"id":"def:ProbDen","name":"definition","text":"\n
Definition 51 (Probability density). Let be a probability on . We say that the function is the probability density of when
\n
\n
Note that does not always have a density. Take for instance such that : there is no function verifying the above condition.
\n
\n
The following concerns readers familiar with -algebras.
\nStrictly speaking, in the previous definition must be a measurable set. In general, the probability density is defined with respect to a reference measure. Let be a probability measure, and be a reference measure on a set . Then is the density of with respect to when where is the -algebra of . is noted .
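\nIn symbols, with $f$ the density (writing the -algebra as $\mathcal{A}$ and the reference measure as $\mu$; notation assumed): \[ P(A) = \int_A f(x)\,dx \quad \text{(Lebesgue measure on } \mathbb{R}^n\text{)}, \qquad P(A) = \int_A f\,d\mu \quad \text{for all } A \in \mathcal{A}, \] and the density is noted $f = \frac{dP}{d\mu}$.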
\n
","parent":"subsec:ConfProb","rank":"0","html_name":"def:ProbDen","summary":"\n
Definition 51 (Probability density).
\n
","hasSummary":true,"hasTitle":true,"title":"Probability density","height":226,"width":980},"classes":"l0","position":{"x":14615.25950129798,"y":3942.3523070315723}},{"group":"nodes","data":{"id":"rem:IntProb","name":"remark","text":"","parent":"subsec:ConfProb","rank":"0","html_name":"rem:IntProb","summary":"","hasSummary":true,"hasTitle":true,"title":"Introductory remark","height":504,"width":980},"classes":"l0","position":{"x":12202.683927367605,"y":5429.029516520528}},{"group":"nodes","data":{"id":"rem:Ev","name":"remark","text":"","parent":"subsec:ConfProb","rank":"0","html_name":"rem:Ev","summary":"","hasSummary":true,"hasTitle":true,"title":"Events and -algebra","height":607,"width":980},"classes":"l0","position":{"x":12196.553096064195,"y":4609.336702454983}},{"group":"nodes","data":{"id":"rem:ProbAss","name":"remark","text":"","parent":"subsec:ConfProb","rank":"0","html_name":"rem:ProbAss","summary":"","hasSummary":true,"hasTitle":true,"title":"Assignment of probabilities","height":412,"width":980},"classes":"l0","position":{"x":12258.913898509865,"y":3919.3695181741937}},{"group":"nodes","data":{"id":"rem:AbsConf","name":"remark","text":"","parent":"subsec:ConfProb","rank":"0","html_name":"rem:AbsConf","summary":"","hasSummary":true,"hasTitle":true,"title":"Abstract configuration space","height":514,"width":980},"classes":"l0","position":{"x":14700.381605505565,"y":4791.199530686157}},{"group":"nodes","data":{"id":"def:RV","name":"definition","text":"\n
Definition 52 (Random variables). Introduce first the idea behind the definition. Build a configuration space representing the possible ages of persons. A natural choice is where the first coordinate represents the possible age of the first person, and the second the possible age of the second person ( is also a natural choice).
\n
\n
The coordinate function ,
\n
\n
’extracts’ the age of the first person out of an elementary configuration . Similarly, the coordinate function ,
\n
\n
’extracts’ the age of the second person. The functions and are called random variables.
\nThe coordinate functions and extract interesting quantities from the elementary configuration. However, there are other interesting quantities which are not coordinate functions. For instance, for an elementary configuration , we can be interested in the average age of the persons. We can define the function ,
\n
\n
The function is also called a random variable.
\nDefinition:
\nMore generally, a random variable taking values in a space is a function . In particular, integer random variables are functions and real random variables are functions . The only requirement for a function on to be called a ’random variable’ is to be measurable: if is an event of , must be an event of . This restriction only has importance when not all subsets are events.
\nConsider the following dice roll game. If the number is even, the player gains euro, and if the number is odd he loses euro. The function , is a random variable.
\n
Assume that we roll a dice times. The configuration space that describes all the possible results is . Let be the -th coordinate function on . For instance,
\n
These random variables ’extract’ the result of the -th roll out of the elementary configuration. We can also construct the random variables : the gain at the -th roll.
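\nA small Python sketch of these ’extraction’ random variables (the stake of one euro per roll is an assumption, following the even/odd game above):
import itertools
n = 3  # number of rolls (illustrative)
omega = list(itertools.product(range(1, 7), repeat=n))  # configuration space: {1,...,6}^n
def D(i, w):
    # i-th coordinate function: extracts the result of the i-th roll
    return w[i]
def G(i, w):
    # gain at the i-th roll: +1 euro on an even number, -1 euro on an odd number (assumed stakes)
    return 1 if D(i, w) % 2 == 0 else -1
w = (2, 5, 6)            # one elementary configuration
print(D(1, w), G(1, w))  # second roll: 5, gain -1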
\n
","parent":"subsec:RV","rank":"0","html_name":"def:RV","summary":"\n
Definition 52 (Random variables). A random variable valued in is a ’measurable’ function defined on the configuration space and valued in :
\n
","hasSummary":true,"hasTitle":true,"title":"Random variables","height":423,"width":980},"classes":"l0","position":{"x":16543.63804116524,"y":4285.242966345903}},{"group":"nodes","data":{"id":"def:RVLaw","name":"definition","text":"\n
Definition 53 (Law of a random variable). Let be a configuration space with a probability , and be a random variable taking values in a set . Recall that is a function that takes as input an event of and returns its probability. The random variable transports the probability to the set . The probability of a set in is determined by the probability of the subset of whose image by lies in .
\nLet be a probability on defined as follows. For an event in , is called the law of the random variable . The measurability assumption on ensures that is an event of . Remarks:
\n
\ninstead of , we often write (the probability that the value of the random variable falls in ).
\n can be viewed as another configuration space with a probability
\n
\n
\n
Cumulative distribution
\n
\n
Assume is a real random variable. The probability can be represented by its cumulative distribution , which is the function defined by
\n
All the information on is contained in . Since the cumulative distribution is a function , it is sometimes conceptually simpler to manipulate than the probability itself, which is a function from the subsets of to .
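\nIn standard notation: \[ F(x) = P(X \le x) = P_X\big(\,(-\infty, x]\,\big). \]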
\n
\n
\nWhat do we know about ?
\nHow do we compute from ?
\nCan we have and ? No!
\nIf for , , what is ?
\n
\n
\n
Density of the random variable
\n
\n
When the law has a density , the density is called the density of the random variable and is often noted .
\nFor a real random variable ,
\n
\nshow that
\nshow that . Hence the result.
\n
\n
","parent":"subsec:RV","rank":"0","html_name":"def:RVLaw","summary":"\n
Definition 53 (Law of a random variable). Let be a random variable. For an event in , define is called the law of the random variable .
\n
","hasSummary":true,"hasTitle":true,"title":"Law of a random variables","height":442,"width":980},"classes":"l0","position":{"x":15903.778546829752,"y":3703.1538379753565}},{"group":"nodes","data":{"id":"def:JoinRV","name":"definition","text":"\n
Definition 54 (Joint random variables). Consider random variables and . The random variable , is called the joint random variable. The law of is called the joint law (or joint probability distribution).
\n
\n
Example.
\nConsider the case of dice rolls. Let and be the -th coordinate function, and let be the uniform probability distribution. Consider now the joint random variable
\n
What is the law of ? Hence .
\n
\n
is uniform on .
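\nThe uniformity can be checked by brute-force enumeration. A minimal Python sketch, assuming the joint variable in question is the pair of coordinates $(D_1, D_2)$ of two rolls:
from itertools import product
from collections import Counter
omega = list(product(range(1, 7), repeat=2))  # two rolls, each configuration has probability 1/36
law = Counter((w[0], w[1]) for w in omega)    # counts for the joint variable (D1, D2)
probs = {z: c / len(omega) for z, c in law.items()}
print(len(probs), set(probs.values()))        # 36 values, all equal to 1/36: the law is uniform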
\n
","parent":"subsec:RV","rank":"0","html_name":"def:JoinRV","summary":"\n
Definition 54 (Joint random variables). Consider random variables and . The random variable , is called the joint random variable. The law of is called the joint law (or joint probability distribution).
\n
","hasSummary":true,"hasTitle":true,"title":"Joint random variables","height":535,"width":980},"classes":"l0","position":{"x":16412.728588815913,"y":2993.8869510895343}},{"group":"nodes","data":{"id":"def:Marg","name":"definition","text":"\n
Definition 55 (Concept of marginal). Give an informal description of the concept of marginal. Some mathematical objects defined on the Cartesian product naturally give rise to similar objects defined on and by ’forgetting’ the other coordinate. The objects defined in this manner on and are called marginals.
\nAs we will see, ’forgetting’ the other coordinate is opposed to conditioning, where the other coordinate is fixed to a particular value.
\n
","parent":"subsec:Marg","rank":"0","html_name":"def:Marg","summary":"\n
Definition 55 (Concept of marginal). Give an informal description of the concept of marginal. Some mathematical objects defined on the Cartesian product naturally give rise to similar objects defined on and by ’forgetting’ the other coordinate. The objects defined in this manner on and are called marginals.
\n
","hasSummary":true,"hasTitle":true,"title":"Concept of marginal","height":514,"width":980},"classes":"l0","position":{"x":12166.850463373075,"y":2844.9599406535326}},{"group":"nodes","data":{"id":"def:MargProb","name":"definition","text":"\n
Definition 56 (Marginal probability). Let be a probability on the Cartesian product . The marginal probability on is defined by , and the marginal on by . When has a density with respect to a reference measure on , the marginal densities are defined by
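\nIn symbols, writing the product set as $E \times F$ (notation assumed): \[ P_E(A) = P(A \times F), \qquad P_F(B) = P(E \times B), \] and in the density case \[ f_E(x) = \int_F f(x, y)\,dy, \qquad f_F(y) = \int_E f(x, y)\,dx. \]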
\n
","parent":"subsec:Marg","rank":"0","html_name":"def:MargProb","summary":"\n
Definition 56 (Marginal probability). Let be a probability on the Cartesian product . The marginal probability on is defined by and the marginal on by
\n
","hasSummary":true,"hasTitle":true,"title":"Marginal probability","height":494,"width":980},"classes":"l0","position":{"x":12774.067271971502,"y":2160.2187117665944}},{"group":"nodes","data":{"id":"def:MargRV","name":"definition","text":"\n
Definition 57 (Marginal random variables). Given a random variable valued in a Cartesian product , the projections on and define ’marginals’. Let is called the marginal random variable on and the marginal random variable on . The laws of and are called the "marginal laws", or "marginal probabilities", of the law . The laws are given by . When the laws have densities,
\n
","parent":"subsec:Marg","rank":"0","html_name":"def:MargRV","summary":"\n
Definition 57 (Marginal random variables). Let be a random variable, and let and be the random variables corresponding to each coordinate: is called the marginal random variable on and the marginal random variable on .
\n
","hasSummary":true,"hasTitle":true,"title":"Marginal random variables","height":524,"width":980},"classes":"l0","position":{"x":12137.531799873219,"y":1425.528800121085}},{"group":"nodes","data":{"id":"th:Bayes","name":"theorem","text":"\n
Theorem 14 (Bayes Theorem). The definition of conditional densities can be rewritten as something called the \"Bayes theorem\":
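\nIn density form, a standard statement reads: \[ f_{X \mid Y}(x \mid y) = \frac{f_{Y \mid X}(y \mid x)\, f_X(x)}{f_Y(y)}, \qquad f_Y(y) > 0. \]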
\n
","parent":"subsec:Cond","rank":"0","html_name":"th:Bayes","summary":"\n
Theorem 14 (Bayes Theorem).
\n
","hasSummary":true,"hasTitle":true,"title":"Bayes Theorem","height":283,"width":980},"classes":"l0","position":{"x":17173.002711062174,"y":1012.9142665309903}},{"group":"nodes","data":{"id":"def:CondProb","name":"definition","text":"\n
Definition 58 (Conditional probability). Consider a probability on the set of configurations . Given an event with , the idea of conditioning with respect to is to focus on the configurations and forget about the configurations . We would like to define a new probability which respects the proportions of the events included in and gives zero probability to events which do not intersect .
\nLet be an event of , with . We define the probability conditional to the event by is called the probability of given , and is usually noted .
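\nIn symbols: \[ P_B(A) = P(A \mid B) = \frac{P(A \cap B)}{P(B)}. \]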
\nExercise: check that is a probability on
\n
","parent":"subsec:Cond","rank":"0","html_name":"def:CondProb","summary":"\n
Definition 58 (Conditional probability). We define the probability conditional to the event by is called the probability of given , and is usually noted .
\n
","hasSummary":true,"hasTitle":true,"title":"Conditional probability","height":502,"width":980},"classes":"l0","position":{"x":17123.201347644594,"y":1608.387855598545}},{"group":"nodes","data":{"id":"def:CondProbRV","name":"definition","text":"\n
Definition 59 (Conditional laws of a joint variable). We address here the particular case of conditional laws of a joint random variable. Let and be two random variables. The law of the joint random variable is a probability on . This joint probability on can be conditioned by imposing that one of the variables lies in a given set. In particular, when it is possible to condition by the event . We then obtain a probability on , interpreted as a probability on . The conditional probability knowing is usually written
\n
We recognize the term , which is the marginal distribution on evaluated on .
\n
When densities exist, the conditional density is as long as the marginal density .
\n
","parent":"subsec:Cond","rank":"0","html_name":"def:CondProbRV","summary":"\n
Definition 59 (Conditional laws of a joint variable).
\n
","hasSummary":true,"hasTitle":true,"title":"Conditional laws of a joint variable","height":345,"width":980},"classes":"l0","position":{"x":16121.82435700667,"y":742.0546539898808}},{"group":"nodes","data":{"id":"th:JLIV","name":"theorem","text":"\n
Theorem 15 (Joint law of independent variables). When and are independent, we have the two important results (written out in symbols after the list):
\n
\nthe joint probability of is
\nthe conditional probabilities are independent of : in other words, the marginal and conditional laws are the same for all . The same is true when conditioning on .
\n
\n
The proofs are exercises.
\n
","parent":"subsec:Ind","rank":"0","html_name":"th:JLIV","summary":"\n
Theorem 15 (Joint law of independent variables).
\n
","hasSummary":true,"hasTitle":true,"title":"Joint law of independent variables ","height":338,"width":980},"classes":"l0","position":{"x":14311.227641064226,"y":-1037.0753394813926}},{"group":"nodes","data":{"id":"def:IndEv","name":"definition","text":"\n
Definition 60 (Independent events). Two events and are called independent when
\n
Interpretation when : the proportion of in is the same as the proportion of in . In other words, conditioning the probability on the event does not change the probability of .
\n
\n
When , the same can be said about the proportion of in .
\n
","parent":"subsec:Ind","rank":"0","html_name":"def:IndEv","summary":"\n
Definition 60 (Independent events). Two events and are called independent when
\n
","hasSummary":true,"hasTitle":true,"title":"Independent events","height":336,"width":980},"classes":"l0","position":{"x":13940.301973467818,"y":396.39492473894427}},{"group":"nodes","data":{"id":"def:IndRV","name":"definition","text":"\n
Definition 61 (Independent random variables). We say that random variables are independent when information about one of them does not affect the other. This translates to the following definition.
\nTwo random variables and are independent when :
\n
Recall that this means that when , conditioning the probability on by does not affect the probability of . The other direction holds when .
\n
\n
","parent":"subsec:Ind","rank":"0","html_name":"def:IndRV","summary":"\n
Definition 61 (Independent random variables). Two random variables and are independent when :
\n
","hasSummary":true,"hasTitle":true,"title":"Independent random variables","height":379,"width":1232},"classes":"l0","position":{"x":14337.070950072977,"y":-98.7174819388315}},{"group":"nodes","data":{"id":"def:IndExp","name":"theorem","text":"\n
Theorem 16 (Independence and expectation). When and are two independent variables, we have that . When the laws have densities, the result is given by the following computation:
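\nIn the density case, the computation is: \[ E[XY] = \iint xy\, f_X(x) f_Y(y)\,dx\,dy = \Big(\int x f_X(x)\,dx\Big) \Big(\int y f_Y(y)\,dy\Big) = E[X]\, E[Y], \] using that the joint density of independent variables factorizes (Theorem 15).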
\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:IndExp","summary":"\n
Theorem 16 (Independence and expectation). When and are two independent variables, we have that
\n
","hasSummary":true,"hasTitle":true,"title":"Independence and expectation","height":379,"width":980},"classes":"l0","position":{"x":18372.05589195286,"y":41.302104565725244}},{"group":"nodes","data":{"id":"def:StdProp","name":"theorem","text":"\n
Theorem 17 (Properties of variance). Show as a first exercise that variance can also be written as .
\nShow that:
\n
\n
Hence, when are independent variables with identical distributions,
\n
This is a very important result: averaging independent and identically distributed (i.i.d.) variables reduces the variance in and the standard deviation in .
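\nIn symbols, for i.i.d. variables $X_1, \dots, X_n$ with common variance $\sigma^2$: \[ \mathrm{Var}\Big(\frac{1}{n}\sum_{i=1}^n X_i\Big) = \frac{\sigma^2}{n}, \qquad \sigma\Big(\frac{1}{n}\sum_{i=1}^n X_i\Big) = \frac{\sigma}{\sqrt{n}}. \]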
\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:StdProp","summary":"\n
Theorem 17 (Properties of variance). When and are independent, .
\n
","hasSummary":true,"hasTitle":true,"title":"Properties of variance","height":455,"width":980},"classes":"l0","position":{"x":18398.953449215493,"y":-619.6713064174974}},{"group":"nodes","data":{"id":"def:Expe","name":"definition","text":"\n
Definition 62 (Expectation). Let with and . Let be a real random variable, . For each elementary configuration , takes the value . The ’expectation’ of is the ’average’ value of the with respect to the probabilities of the . In our example:
\n
If , the average of a random variable becomes:
\n
\n
Now consider the case , with a probability with density . The formula for the expectation becomes . Note that in that case, the expectation exists only when the integral exists.
\n
There is a definition of the expectation which does not depend on the nature of . The expectation (average / mean) of is defined by
\n
The rigorous definition requires what is called ’Lebesgue integrals’. However, the intuitive meaning of the formula is clear. When is discrete, the integral becomes a sum over . When is continuous and has a density, becomes .
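\nIn symbols, the three forms of the definition are: \[ E[X] = \int_\Omega X(\omega)\,dP(\omega), \qquad E[X] = \sum_{\omega \in \Omega} X(\omega)\, P(\{\omega\}) \ \text{($\Omega$ discrete)}, \qquad E[X] = \int x\, f_X(x)\,dx \ \text{(density case)}. \]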
\nRemember that the random variable transports the probability defined on to a probability defined on .
\n
\n is in fact discrete: it takes values in . We have . Exercise: show this formula when .
\nThe probability has a density . We have . Exercise: show this formula when , has a density and is a diffeomorphism.
\n
\n
The set of random variables is a vector space (). The set of random variables such that exists is called ; it is again a vector space. Since integrals are linear, the expectation
\n
is a linear map from to . In other words, we have the fundamental properties:
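\nIn symbols, for $X, Y \in L^1$ and scalars $\lambda, \mu$: \[ E[\lambda X + \mu Y] = \lambda\, E[X] + \mu\, E[Y]. \]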
\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:Expe","summary":"\n
Definition 62 (Expectation). The general definition of the expectation of a random variable is in particular, when , and when ,
\n
","hasSummary":true,"hasTitle":true,"title":"Expectation","height":799,"width":980},"classes":"l0","position":{"x":18366.371663039306,"y":1162.4577397102996}},{"group":"nodes","data":{"id":"def:ScalRV","name":"definition","text":"\n
Definition 63 (Inner products on random variables). The expectation makes it possible to define an important inner product on random variables. It provides a norm and a notion of angles between random variables. Let and be random variables. When it exists, we can define where is understood as the product of the functions and .
\n
The set of random variables such that exists is called . is a vector space.
\nExercise: show that is an inner product on .
\nConsider the important case of independent random variables and with zero mean. Then Hence the strong link between independence and orthogonality. Note however that the converse is not always true: we can find centered random variables and with which are not independent.
\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:ScalRV","summary":"\n
Definition 63 (Inner products on random variables).
\n
","hasSummary":true,"hasTitle":true,"title":"Inner products on random variables","height":422,"width":980},"classes":"l0","position":{"x":19899.27127201591,"y":741.8116466487754}},{"group":"nodes","data":{"id":"def:Cov","name":"definition","text":"\n
Definition 64 (Covariance (dimension 1)). Given a random variable , note the ’centered’ variable. The covariance between two variables and is the scalar product between their centered versions.
\nWhen it exists, the covariance between and is defined by
\n
We have that .
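\nIn symbols: \[ \mathrm{Cov}(X, Y) = E\big[(X - E[X])(Y - E[Y])\big] = E[XY] - E[X]\, E[Y]. \]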
\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:Cov","summary":"\n
Definition 64 (Covariance (dimension 1)).
\n
","hasSummary":true,"hasTitle":true,"title":"Covariance (dimension 1)","height":297,"width":980},"classes":"l0","position":{"x":19930.787327169135,"y":160.3847155789902}},{"group":"nodes","data":{"id":"def:Std","name":"definition","text":"\n
Definition 65 (Standard deviation / variance). The variance and standard deviation measure how a random variable varies around its mean. The deviation from the mean is given by the Euclidean norm of the centered variable . When they exist, the variance is defined by
\n
\n
and the standard deviation by
\n
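\nIn symbols: \[ \mathrm{Var}(X) = \|X - E[X]\|^2 = E\big[(X - E[X])^2\big], \qquad \sigma(X) = \sqrt{\mathrm{Var}(X)}. \]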
\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:Std","summary":"\n
Definition 65 (Standard deviation / variance).
\n
","hasSummary":true,"hasTitle":true,"title":"Standard deviation / variance","height":416,"width":980},"classes":"l0","position":{"x":19621.80213564521,"y":-573.1885588548129}},{"group":"nodes","data":{"id":"def:CovM","name":"definition","text":"\n
Definition 66 (Covariance matrix). Consider random variables . The variables can be put in a column vector . is then a random variable . Such random variables are often called random vectors.
\nFor a random column vector , when it exists the covariance matrix is defined by
\n
\n
Hence we can see that .
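\nIn symbols, for a random column vector $X = (X_1, \dots, X_n)^T$: \[ \Sigma = E\big[(X - E[X])(X - E[X])^T\big], \qquad \Sigma_{ij} = \mathrm{Cov}(X_i, X_j), \qquad \Sigma_{ii} = \mathrm{Var}(X_i). \]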
\nWhen is a row vector, the definition becomes
\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:CovM","summary":"\n
Definition 66 (Covariance matrix). Consider random variables . The variables can be put in a column vector
\n
","hasSummary":true,"hasTitle":true,"title":"Covariance matrix","height":338,"width":980},"classes":"l0","position":{"x":20418.20378469191,"y":-1072.7316180751332}},{"group":"nodes","data":{"id":"def:WLLN","name":"theorem","text":"\n
Theorem 18 ((Weak) Law of large numbers !finish iid!). Let be an infinite sequence of i.i.d. real random variables with mean . The empirical means are the random variables . For all , we have
\n
We will not prove this result, but it can be intuitively understood in a simple way when the variables have a variance. First, note that . Then, remember that . Hence the empirical mean is more and more concentrated around its mean , which means that the probability should be smaller and smaller.
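\nA quick numerical illustration (a simulation sketch, not a proof): dice rolls have mean 3.5, and the empirical means concentrate around it as n grows.
import random
random.seed(0)
mu = 3.5  # mean of one fair dice roll
for n in (10, 1000, 100000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    m_n = sum(rolls) / n  # empirical mean M_n
    print(n, round(m_n, 4), round(abs(m_n - mu), 4))  # deviation shrinks roughly like 1/sqrt(n)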
\n
","parent":"subsec:LLN","rank":"0","html_name":"def:WLLN","summary":"\n
Theorem 18 ((Weak) Law of large numbers !finish iid!).
\n
","hasSummary":true,"hasTitle":true,"title":"(Weak) Law of large numbers !finish iid!","height":297,"width":980},"classes":"l0","position":{"x":16413.042621379984,"y":-2186.5815533798905}},{"group":"nodes","data":{"id":"sec:LA","name":"section","text":"","rank":"0","html_name":"sec:LA","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":14753.106930478858,"y":9770.0607479875}},{"group":"nodes","data":{"id":"titlesec:LA","name":"sectionTitle","text":"Linear algebra
","parent":"sec:LA","rank":"0","html_name":"sec:LA","hasSummary":false,"hasTitle":false,"height":561,"width":1250},"classes":"l0","position":{"x":15251.947598908304,"y":12488.748941764392}},{"group":"nodes","data":{"id":"sec:FCC","name":"section","text":"","rank":"0","html_name":"sec:FCC","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":5458.492997493175,"y":7893.861778089176}},{"group":"nodes","data":{"id":"titlesec:FCC","name":"sectionTitle","text":"Fourier, Convolution and Correlation
","parent":"sec:FCC","rank":"0","html_name":"sec:FCC","hasSummary":false,"hasTitle":false,"height":1022,"width":1516},"classes":"l0","position":{"x":6070.582696672855,"y":9153.16568357412}},{"group":"nodes","data":{"id":"sec:TIOpe","name":"section","text":"","rank":"0","html_name":"sec:TIOpe","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":6919.291003777886,"y":4471.199618819243}},{"group":"nodes","data":{"id":"titlesec:TIOpe","name":"sectionTitle","text":"Translation invariant operators
","parent":"sec:TIOpe","rank":"0","html_name":"sec:TIOpe","hasSummary":false,"hasTitle":false,"height":792,"width":1417},"classes":"l0","position":{"x":8883.724097307402,"y":4635.4707260975465}},{"group":"nodes","data":{"id":"sec:DiffOp","name":"section","text":"","rank":"0","html_name":"sec:DiffOp","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":7551.192528111416,"y":1286.450679389157}},{"group":"nodes","data":{"id":"titlesec:DiffOp","name":"sectionTitle","text":"Differential operators
","parent":"sec:DiffOp","rank":"0","html_name":"sec:DiffOp","hasSummary":false,"hasTitle":false,"height":561,"width":1395},"classes":"l0","position":{"x":8889.318218063834,"y":2886.559814716601}},{"group":"nodes","data":{"id":"sec:Meas","name":"section","text":"","rank":"0","html_name":"sec:Meas","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":22234.269516961438,"y":7896.252214189441}},{"group":"nodes","data":{"id":"titlesec:Meas","name":"sectionTitle","text":"Measures theory
","parent":"sec:Meas","rank":"0","html_name":"sec:Meas","hasSummary":false,"hasTitle":false,"height":561,"width":1273},"classes":"l0","position":{"x":24064.54267057631,"y":8862.18520253525}},{"group":"nodes","data":{"id":"sec:Prob","name":"section","text":"","rank":"0","html_name":"sec:Prob","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":16277.867792282563,"y":1748.0584128393234}},{"group":"nodes","data":{"id":"titlesec:Prob","name":"sectionTitle","text":"Probabilities
","parent":"sec:Prob","rank":"0","html_name":"sec:Prob","hasSummary":false,"hasTitle":false,"height":331,"width":1550},"classes":"l0","position":{"x":16356.192235909653,"y":5692.698379058537}},{"group":"nodes","data":{"id":"subsec:VectSpaces","name":"subsection","text":"","parent":"sec:LA","rank":"0","html_name":"subsec:VectSpaces","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":12493.897059014955,"y":10834.568404340866}},{"group":"nodes","data":{"id":"titlesubsec:VectSpaces","name":"subsectionTitle","text":"Vector spaces
","parent":"subsec:VectSpaces","rank":"0","html_name":"subsec:VectSpaces","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":11032.53029978813,"y":12031.627747120141}},{"group":"nodes","data":{"id":"subsec:Mats","name":"subsection","text":"","parent":"sec:LA","rank":"0","html_name":"subsec:Mats","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":17086.411266588824,"y":8265.076922654986}},{"group":"nodes","data":{"id":"titlesubsec:Mats","name":"subsectionTitle","text":"Matrices
","parent":"subsec:Mats","rank":"0","html_name":"subsec:Mats","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":17763.04274309803,"y":9004.19546953074}},{"group":"nodes","data":{"id":"subsec:LinMaps","name":"subsection","text":"","parent":"sec:LA","rank":"0","html_name":"subsec:LinMaps","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":17215.683278967546,"y":10711.510666246777}},{"group":"nodes","data":{"id":"titlesubsec:LinMaps","name":"subsectionTitle","text":"Linear maps
","parent":"subsec:LinMaps","rank":"0","html_name":"subsec:LinMaps","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":16890.61421535985,"y":11908.638114506259}},{"group":"nodes","data":{"id":"subsec:InnProd","name":"subsection","text":"","parent":"sec:LA","rank":"0","html_name":"subsec:InnProd","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":12708.739508391762,"y":7954.5708563614135}},{"group":"nodes","data":{"id":"titlesubsec:InnProd","name":"subsectionTitle","text":"Inner products
","parent":"subsec:InnProd","rank":"0","html_name":"subsec:InnProd","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":14495.95155335858,"y":8766.363082609456}},{"group":"nodes","data":{"id":"subsec:Conv","name":"subsection","text":"","parent":"sec:FCC","rank":"0","html_name":"subsec:Conv","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":3595.949252973721,"y":7460.855180934309}},{"group":"nodes","data":{"id":"titlesubsec:Conv","name":"subsectionTitle","text":"Convolutions
","parent":"subsec:Conv","rank":"0","html_name":"subsec:Conv","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":2273.952963921711,"y":8425.846299197136}},{"group":"nodes","data":{"id":"subsec:F","name":"subsection","text":"","parent":"sec:FCC","rank":"0","html_name":"subsec:F","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":8021.748552545618,"y":7711.200887003522}},{"group":"nodes","data":{"id":"titlesubsec:F","name":"subsectionTitle","text":"Fourier
","parent":"subsec:F","rank":"0","html_name":"subsec:F","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":7475.231691469583,"y":9011.496901521983}},{"group":"nodes","data":{"id":"subsec:CT","name":"subsection","text":"","parent":"sec:FCC","rank":"0","html_name":"subsec:CT","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":6263.035667710466,"y":7199.18445415821}},{"group":"nodes","data":{"id":"titlesubsec:CT","name":"subsectionTitle","text":"Convolution Theorems
","parent":"subsec:CT","rank":"0","html_name":"subsec:CT","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":6263.057121854839,"y":8171.311035712188}},{"group":"nodes","data":{"id":"subsec:DiffOp","name":"subsection","text":"","parent":"sec:DiffOp","rank":"0","html_name":"subsec:DiffOp","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":8876.861540464926,"y":957.0028151092976}},{"group":"nodes","data":{"id":"titlesubsec:DiffOp","name":"subsectionTitle","text":"Differential operators (continuous case)
","parent":"subsec:DiffOp","rank":"0","html_name":"subsec:DiffOp","hasSummary":false,"hasTitle":false,"height":239,"width":980},"classes":"l0","position":{"x":10265.349643524034,"y":2232.9755917479156}},{"group":"nodes","data":{"id":"subsec:DisDifOp","name":"subsection","text":"","parent":"sec:DiffOp","rank":"0","html_name":"subsec:DisDifOp","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":5444.760413248439,"y":1682.9165409097325}},{"group":"nodes","data":{"id":"titlesubsec:DisDifOp","name":"subsectionTitle","text":"Discrete differential operators
","parent":"subsec:DisDifOp","rank":"0","html_name":"subsec:DisDifOp","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":5349.19460384174,"y":2760.1776150533656}},{"group":"nodes","data":{"id":"subsec:ConfProb","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:ConfProb","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":13448.46735078488,"y":4698.229628990624}},{"group":"nodes","data":{"id":"titlesubsec:ConfProb","name":"subsectionTitle","text":"Configurations and probabilities
","parent":"subsec:ConfProb","rank":"0","html_name":"subsec:ConfProb","hasSummary":false,"hasTitle":false,"height":239,"width":980},"classes":"l0","position":{"x":13791.694952880101,"y":5571.589739807056}},{"group":"nodes","data":{"id":"subsec:RV","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:RV","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":16231.081813234416,"y":3736.742929949278}},{"group":"nodes","data":{"id":"titlesubsec:RV","name":"subsectionTitle","text":"Random Variables
","parent":"subsec:RV","rank":"0","html_name":"subsec:RV","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":16566.38507963908,"y":4670.598908809021}},{"group":"nodes","data":{"id":"subsec:Marg","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:Marg","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":12455.79953592236,"y":2247.726441018981}},{"group":"nodes","data":{"id":"titlesubsec:Marg","name":"subsectionTitle","text":"Marginals
","parent":"subsec:Marg","rank":"0","html_name":"subsec:Marg","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":12184.695106224906,"y":3255.424081916877}},{"group":"nodes","data":{"id":"subsec:Cond","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:Cond","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":16647.413534034422,"y":1351.3941439416058}},{"group":"nodes","data":{"id":"titlesubsec:Cond","name":"subsectionTitle","text":"Conditioning
","parent":"subsec:Cond","rank":"0","html_name":"subsec:Cond","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":17154.284822312693,"y":2056.733633893331}},{"group":"nodes","data":{"id":"subsec:Ind","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:Ind","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":14201.686461770398,"y":-195.4099234936357}},{"group":"nodes","data":{"id":"titlesubsec:Ind","name":"subsectionTitle","text":"Independence
","parent":"subsec:Ind","rank":"0","html_name":"subsec:Ind","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":13942.47290294537,"y":738.7554924941212}},{"group":"nodes","data":{"id":"subsec:MomRV","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:MomRV","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":19392.287723865607,"y":160.11306081758323}},{"group":"nodes","data":{"id":"titlesubsec:MomRV","name":"subsectionTitle","text":"Moments of random variables
","parent":"subsec:MomRV","rank":"0","html_name":"subsec:MomRV","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":19978.181943275064,"y":1429.0085676484198}},{"group":"nodes","data":{"id":"subsec:LLN","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:LLN","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":16391.615276470766,"y":-1865.428969093652}},{"group":"nodes","data":{"id":"titlesubsec:LLN","name":"subsectionTitle","text":"Laws of large numbers and central limit theorem
","parent":"subsec:LLN","rank":"0","html_name":"subsec:LLN","hasSummary":false,"hasTitle":false,"height":239,"width":980},"classes":"l0","position":{"x":16362.187931561548,"y":-1507.2763848074137}},{"data":{"id":"def:Corrsubsec:Conv","source":"subsec:Conv","target":"def:Corr","type":"strong","visibility":1}},{"data":{"id":"th:ExTIdef:TrInOp","source":"def:TrInOp","target":"th:ExTI","type":"strong","visibility":1}},{"data":{"id":"th:ConvTIdef:TrInOp","source":"def:TrInOp","target":"th:ConvTI","type":"strong","visibility":1}},{"data":{"id":"th:ConvTIsubsec:Conv","source":"subsec:Conv","target":"th:ConvTI","type":"strong","visibility":0}},{"data":{"id":"th:TrFunConsubsec:CT","source":"subsec:CT","target":"th:TrFunCon","type":"strong","visibility":0}},{"data":{"id":"th:TrFunConth:ConvTI","source":"th:ConvTI","target":"th:TrFunCon","type":"strong","visibility":1}},{"data":{"id":"th:TrFunCondef:TranFun","source":"def:TranFun","target":"th:TrFunCon","type":"strong","visibility":1}},{"data":{"id":"def:TranFundef:TrInOp","source":"def:TrInOp","target":"def:TranFun","type":"strong","visibility":1}},{"data":{"id":"def:TranFunth:ExTI","source":"th:ExTI","target":"def:TranFun","type":"strong","visibility":1}},{"data":{"id":"rem:Contdef:TrInOp","source":"def:TrInOp","target":"rem:Cont","type":"strong","visibility":1}},{"data":{"id":"rem:FinSupth:TrFunCon","source":"th:TrFunCon","target":"rem:FinSup","type":"strong","visibility":1}},{"data":{"id":"rem:FinSuprem:ConvArrCirc","source":"rem:ConvArrCirc","target":"rem:FinSup","type":"strong","visibility":0}},{"data":{"id":"def:Boreldef:SigmaA","source":"def:SigmaA","target":"def:Borel","type":"strong","visibility":1}},{"data":{"id":"def:Measdef:SigmaA","source":"def:SigmaA","target":"def:Meas","type":"strong","visibility":1}},{"data":{"id":"def:Measbldef:SigmaA","source":"def:SigmaA","target":"def:Measbl","type":"strong","visibility":1}},{"data":{"id":"def:LebIntdef:Meas","source":"def:Meas","target":"def:LebInt","type":"strong","visibility":1}},{"data":{"id":"def:LebIntdef:Measbl","source":"def:Measbl","target":"def:LebInt","type":"strong","visibility":1}},{"data":{"id":"def:LebIntdef:Borel","source":"def:Borel","target":"def:LebInt","type":"strong","visibility":1}},{"data":{"id":"rem:Dfltdef:SigmaA","source":"def:SigmaA","target":"rem:Dflt","type":"strong","visibility":1}},{"data":{"id":"rem:Dfltdef:Borel","source":"def:Borel","target":"rem:Dflt","type":"strong","visibility":1}},{"data":{"id":"def:FreeSysdef:VectSpace","source":"def:VectSpace","target":"def:FreeSys","type":"strong","visibility":1}},{"data":{"id":"def:GenSysdef:VectSpace","source":"def:VectSpace","target":"def:GenSys","type":"strong","visibility":1}},{"data":{"id":"def:Basisdef:GenSys","source":"def:GenSys","target":"def:Basis","type":"strong","visibility":1}},{"data":{"id":"def:Basisdef:FreeSys","source":"def:FreeSys","target":"def:Basis","type":"strong","visibility":1}},{"data":{"id":"def:SubSpacedef:VectSpace","source":"def:VectSpace","target":"def:SubSpace","type":"strong","visibility":1}},{"data":{"id":"def:Spandef:SubSpace","source":"def:SubSpace","target":"def:Span","type":"strong","visibility":1}},{"data":{"id":"def:Spansubsec:Mats","source":"subsec:Mats","target":"def:Span","type":"strong","visibility":1}},{"data":{"id":"def:Coordsdef:Basis","source":"def:Basis","target":"def:Coords","type":"strong","visibility":1}},{"data":{"id":"def:BasChgdef:Coords","source":"def:Coords","target":"def:BasChg","type":"strong","visibility":1}},{"data":{"id":"def:BasChgdef:MatMul","source":"def:MatM
ul","target":"def:BasChg","type":"strong","visibility":1}},{"data":{"id":"rem:ExVectspdef:VectSpace","source":"def:VectSpace","target":"rem:ExVectsp","type":"strong","visibility":1}},{"data":{"id":"def:MatMuldef:Mat","source":"def:Mat","target":"def:MatMul","type":"strong","visibility":1}},{"data":{"id":"def:Transdef:Mat","source":"def:Mat","target":"def:Trans","type":"strong","visibility":1}},{"data":{"id":"def:Transdef:MatMul","source":"def:MatMul","target":"def:Trans","type":"strong","visibility":1}},{"data":{"id":"def:IsoKndef:Basis","source":"def:Basis","target":"def:IsoKn","type":"strong","visibility":1}},{"data":{"id":"def:IsoKndef:LinMap","source":"def:LinMap","target":"def:IsoKn","type":"strong","visibility":1}},{"data":{"id":"th:CompLinMapsdef:MatMul","source":"def:MatMul","target":"th:CompLinMaps","type":"strong","visibility":1}},{"data":{"id":"th:CompLinMapsdef:MatMap","source":"def:MatMap","target":"th:CompLinMaps","type":"strong","visibility":1}},{"data":{"id":"def:LinMapdef:VectSpace","source":"def:VectSpace","target":"def:LinMap","type":"strong","visibility":1}},{"data":{"id":"def:MatMapdef:LinMap","source":"def:LinMap","target":"def:MatMap","type":"strong","visibility":1}},{"data":{"id":"def:MatMapdef:Coords","source":"def:Coords","target":"def:MatMap","type":"strong","visibility":1}},{"data":{"id":"def:MatMapsubsec:Mats","source":"subsec:Mats","target":"def:MatMap","type":"strong","visibility":1}},{"data":{"id":"def:Eigendef:LinMap","source":"def:LinMap","target":"def:Eigen","type":"strong","visibility":1}},{"data":{"id":"def:Eigensubsec:Mats","source":"subsec:Mats","target":"def:Eigen","type":"strong","visibility":1}},{"data":{"id":"def:DiagMapdef:Eigen","source":"def:Eigen","target":"def:DiagMap","type":"strong","visibility":1}},{"data":{"id":"rem:MatMultVectdef:MatMap","source":"def:MatMap","target":"rem:MatMultVect","type":"strong","visibility":1}},{"data":{"id":"def:Pythdef:OrthBas","source":"def:OrthBas","target":"def:Pyth","type":"strong","visibility":1}},{"data":{"id":"th:OrthProjdef:Span","source":"def:Span","target":"th:OrthProj","type":"strong","visibility":1}},{"data":{"id":"th:OrthProjdef:OrthProj","source":"def:OrthProj","target":"th:OrthProj","type":"strong","visibility":1}},{"data":{"id":"th:CauSchdef:Norm","source":"def:Norm","target":"th:CauSch","type":"strong","visibility":1}},{"data":{"id":"def:InnProddef:VectSpace","source":"def:VectSpace","target":"def:InnProd","type":"strong","visibility":1}},{"data":{"id":"def:InnProddef:LinMap","source":"def:LinMap","target":"def:InnProd","type":"strong","visibility":1}},{"data":{"id":"def:Normdef:InnProd","source":"def:InnProd","target":"def:Norm","type":"strong","visibility":1}},{"data":{"id":"def:OrthBasdef:InnProd","source":"def:InnProd","target":"def:OrthBas","type":"strong","visibility":1}},{"data":{"id":"def:OrthBasdef:Norm","source":"def:Norm","target":"def:OrthBas","type":"strong","visibility":1}},{"data":{"id":"def:OrthFreedef:InnProd","source":"def:InnProd","target":"def:OrthFree","type":"strong","visibility":1}},{"data":{"id":"def:InnMatdef:InnProd","source":"def:InnProd","target":"def:InnMat","type":"strong","visibility":1}},{"data":{"id":"def:InnMatsubsec:Mats","source":"subsec:Mats","target":"def:InnMat","type":"strong","visibility":1}},{"data":{"id":"def:OrthProjdef:Norm","source":"def:Norm","target":"def:OrthProj","type":"strong","visibility":1}},{"data":{"id":"def:OrthProjdef:SubSpace","source":"def:SubSpace","target":"def:OrthProj","type":"strong","visibility":1}},{"data":{"id":"rem:InnOrthBasdef
:OrthBas","source":"def:OrthBas","target":"rem:InnOrthBas","type":"strong","visibility":1}},{"data":{"id":"rem:InnOrthBasdef:InnMat","source":"def:InnMat","target":"rem:InnOrthBas","type":"strong","visibility":1}},{"data":{"id":"rem:OrthProjBasdef:OrthProj","source":"def:OrthProj","target":"rem:OrthProjBas","type":"strong","visibility":1}},{"data":{"id":"rem:OrthProjBasdef:OrthBas","source":"def:OrthBas","target":"rem:OrthProjBas","type":"strong","visibility":1}},{"data":{"id":"def:ConvDiscdef:ConvCont","source":"def:ConvCont","target":"def:ConvDisc","type":"strong","visibility":1}},{"data":{"id":"def:ConvArrdef:ConvDisc","source":"def:ConvDisc","target":"def:ConvArr","type":"strong","visibility":1}},{"data":{"id":"def:ConvCircdef:ConvCont","source":"def:ConvCont","target":"def:ConvCirc","type":"strong","visibility":1}},{"data":{"id":"def:ConvCircDiscdef:ConvCirc","source":"def:ConvCirc","target":"def:ConvCircDisc","type":"strong","visibility":1}},{"data":{"id":"def:ConvCircDiscdef:ConvDisc","source":"def:ConvDisc","target":"def:ConvCircDisc","type":"strong","visibility":1}},{"data":{"id":"rem:ConvArrCircdef:ConvCircDisc","source":"def:ConvCircDisc","target":"rem:ConvArrCirc","type":"strong","visibility":1}},{"data":{"id":"rem:ConvArrCircdef:ConvArr","source":"def:ConvArr","target":"rem:ConvArrCirc","type":"strong","visibility":1}},{"data":{"id":"def:FTdef:FS","source":"def:FS","target":"def:FT","type":"strong","visibility":1}},{"data":{"id":"def:FTdef:DFT","source":"def:DFT","target":"def:FT","type":"strong","visibility":1}},{"data":{"id":"def:FSdef:FT","source":"def:FT","target":"def:FS","type":"strong","visibility":1}},{"data":{"id":"def:FSdef:DFT","source":"def:DFT","target":"def:FS","type":"strong","visibility":1}},{"data":{"id":"def:FSdef:OrthBas","source":"def:OrthBas","target":"def:FS","type":"strong","visibility":0}},{"data":{"id":"def:DFTdef:FS","source":"def:FS","target":"def:DFT","type":"strong","visibility":1}},{"data":{"id":"def:DFTdef:FT","source":"def:FT","target":"def:DFT","type":"strong","visibility":1}},{"data":{"id":"def:DFTdef:OrthBas","source":"def:OrthBas","target":"def:DFT","type":"strong","visibility":0}},{"data":{"id":"th:prodTFdef:ConvCont","source":"def:ConvCont","target":"th:prodTF","type":"strong","visibility":1}},{"data":{"id":"th:prodTFdef:FT","source":"def:FT","target":"th:prodTF","type":"strong","visibility":1}},{"data":{"id":"th:prodSdef:ConvCirc","source":"def:ConvCirc","target":"th:prodS","type":"strong","visibility":1}},{"data":{"id":"th:prodSdef:ConvDisc","source":"def:ConvDisc","target":"th:prodS","type":"strong","visibility":1}},{"data":{"id":"th:prodSdef:FS","source":"def:FS","target":"th:prodS","type":"strong","visibility":1}},{"data":{"id":"th:prodDFTdef:ConvCircDisc","source":"def:ConvCircDisc","target":"th:prodDFT","type":"strong","visibility":1}},{"data":{"id":"th:prodDFTdef:DFT","source":"def:DFT","target":"th:prodDFT","type":"strong","visibility":1}},{"data":{"id":"th:TranFunDerdef:Der","source":"def:Der","target":"th:TranFunDer","type":"strong","visibility":1}},{"data":{"id":"th:TranFunDerdef:TranFun","source":"def:TranFun","target":"th:TranFunDer","type":"strong","visibility":0}},{"data":{"id":"def:DirDerdef:Der","source":"def:Der","target":"def:DirDer","type":"strong","visibility":1}},{"data":{"id":"def:Differ1def:Der","source":"def:Der","target":"def:Differ1","type":"strong","visibility":1}},{"data":{"id":"def:Differ1def:DirDer","source":"def:DirDer","target":"def:Differ1","type":"strong","visibility":1}},{"data":{"id":"def:DifferMdef:Dif
fer1","source":"def:Differ1","target":"def:DifferM","type":"strong","visibility":1}},{"data":{"id":"def:GradCondef:Differ1","source":"def:Differ1","target":"def:GradCon","type":"strong","visibility":1}},{"data":{"id":"def:nthDerdef:Der","source":"def:Der","target":"def:nthDer","type":"strong","visibility":1}},{"data":{"id":"def:Lapdef:nthDer","source":"def:nthDer","target":"def:Lap","type":"strong","visibility":1}},{"data":{"id":"def:Lapdef:DirDer","source":"def:DirDer","target":"def:Lap","type":"strong","visibility":1}},{"data":{"id":"th:TranFunDisDerdef:DisDer","source":"def:DisDer","target":"th:TranFunDisDer","type":"strong","visibility":1}},{"data":{"id":"th:TranFunDisDerth:TranFunDer","source":"th:TranFunDer","target":"th:TranFunDisDer","type":"strong","visibility":1}},{"data":{"id":"th:TranFunDisDerth:TrFunCon","source":"th:TrFunCon","target":"th:TranFunDisDer","type":"strong","visibility":0}},{"data":{"id":"def:DisDerdef:Der","source":"def:Der","target":"def:DisDer","type":"strong","visibility":1}},{"data":{"id":"def:GradDisdef:GradCon","source":"def:GradCon","target":"def:GradDis","type":"strong","visibility":1}},{"data":{"id":"def:GradDisdef:DisDer","source":"def:DisDer","target":"def:GradDis","type":"strong","visibility":1}},{"data":{"id":"def:GradMaskdef:DisDer","source":"def:DisDer","target":"def:GradMask","type":"strong","visibility":1}},{"data":{"id":"def:GradMaskrem:FinSup","source":"rem:FinSup","target":"def:GradMask","type":"strong","visibility":0}},{"data":{"id":"def:ConfSprem:IntProb","source":"rem:IntProb","target":"def:ConfSp","type":"strong","visibility":1}},{"data":{"id":"def:Evdef:ConfSp","source":"def:ConfSp","target":"def:Ev","type":"strong","visibility":1}},{"data":{"id":"def:Probdef:Meas","source":"def:Meas","target":"def:Prob","type":"strong","visibility":0}},{"data":{"id":"def:Probdef:Ev","source":"def:Ev","target":"def:Prob","type":"strong","visibility":1}},{"data":{"id":"def:ProbDendef:Prob","source":"def:Prob","target":"def:ProbDen","type":"strong","visibility":1}},{"data":{"id":"def:ProbDendef:LebInt","source":"def:LebInt","target":"def:ProbDen","type":"strong","visibility":0}},{"data":{"id":"rem:Evdef:Ev","source":"def:Ev","target":"rem:Ev","type":"strong","visibility":1}},{"data":{"id":"rem:ProbAssdef:Prob","source":"def:Prob","target":"rem:ProbAss","type":"strong","visibility":1}},{"data":{"id":"rem:AbsConfdef:ConfSp","source":"def:ConfSp","target":"rem:AbsConf","type":"strong","visibility":1}},{"data":{"id":"rem:AbsConfsubsec:RV","source":"subsec:RV","target":"rem:AbsConf","type":"strong","visibility":1}},{"data":{"id":"def:RVLawdef:RV","source":"def:RV","target":"def:RVLaw","type":"strong","visibility":1}},{"data":{"id":"def:JoinRVdef:RV","source":"def:RV","target":"def:JoinRV","type":"strong","visibility":1}},{"data":{"id":"def:JoinRVdef:RVLaw","source":"def:RVLaw","target":"def:JoinRV","type":"weak"}},{"data":{"id":"def:Margdef:CondProbRV","source":"def:CondProbRV","target":"def:Marg","type":"weak"}},{"data":{"id":"def:MargProbdef:Marg","source":"def:Marg","target":"def:MargProb","type":"strong","visibility":1}},{"data":{"id":"def:MargRVdef:Marg","source":"def:Marg","target":"def:MargRV","type":"strong","visibility":1}},{"data":{"id":"def:MargRVdef:MargProb","source":"def:MargProb","target":"def:MargRV","type":"weak"}},{"data":{"id":"def:MargRVdef:RVLaw","source":"def:RVLaw","target":"def:MargRV","type":"weak"}},{"data":{"id":"th:Bayesdef:CondProb","source":"def:CondProb","target":"th:Bayes","type":"strong","visibility":1}},{"data":{"id":"def:CondPro
bRVdef:JoinRV","source":"def:JoinRV","target":"def:CondProbRV","type":"strong","visibility":1}},{"data":{"id":"def:CondProbRVdef:CondProb","source":"def:CondProb","target":"def:CondProbRV","type":"strong","visibility":1}},{"data":{"id":"def:CondProbRVdef:MargRV","source":"def:MargRV","target":"def:CondProbRV","type":"strong","visibility":1}},{"data":{"id":"th:JLIVdef:IndRV","source":"def:IndRV","target":"th:JLIV","type":"strong","visibility":1}},{"data":{"id":"th:JLIVdef:MargRV","source":"def:MargRV","target":"th:JLIV","type":"strong","visibility":1}},{"data":{"id":"th:JLIVdef:CondProbRV","source":"def:CondProbRV","target":"th:JLIV","type":"strong","visibility":1}},{"data":{"id":"def:IndRVsubsec:RV","source":"subsec:RV","target":"def:IndRV","type":"strong","visibility":1}},{"data":{"id":"def:IndRVdef:IndEv","source":"def:IndEv","target":"def:IndRV","type":"strong","visibility":1}},{"data":{"id":"def:IndExpdef:Expe","source":"def:Expe","target":"def:IndExp","type":"strong","visibility":1}},{"data":{"id":"def:IndExpdef:IndRV","source":"def:IndRV","target":"def:IndExp","type":"strong","visibility":1}},{"data":{"id":"def:StdPropdef:Std","source":"def:Std","target":"def:StdProp","type":"strong","visibility":1}},{"data":{"id":"def:StdPropdef:IndRV","source":"def:IndRV","target":"def:StdProp","type":"weak"}},{"data":{"id":"def:Expedef:RV","source":"def:RV","target":"def:Expe","type":"strong","visibility":1}},{"data":{"id":"def:Expedef:LebInt","source":"def:LebInt","target":"def:Expe","type":"strong","visibility":0}},{"data":{"id":"def:ScalRVdef:Expe","source":"def:Expe","target":"def:ScalRV","type":"strong","visibility":1}},{"data":{"id":"def:ScalRVdef:IndExp","source":"def:IndExp","target":"def:ScalRV","type":"weak"}},{"data":{"id":"def:Covdef:ScalRV","source":"def:ScalRV","target":"def:Cov","type":"strong","visibility":1}},{"data":{"id":"def:Stddef:Cov","source":"def:Cov","target":"def:Std","type":"strong","visibility":1}},{"data":{"id":"def:CovMdef:Cov","source":"def:Cov","target":"def:CovM","type":"strong","visibility":1}},{"data":{"id":"def:WLLNdef:Std","source":"def:Std","target":"def:WLLN","type":"strong","visibility":1}},{"data":{"id":"sec:TIOpesec:LA","source":"sec:LA","target":"sec:TIOpe","type":"strong","visibility":1}},{"data":{"id":"sec:DiffOpsec:TIOpe","source":"sec:TIOpe","target":"sec:DiffOp","type":"strong","visibility":1}},{"data":{"id":"sec:Probsec:Meas","source":"sec:Meas","target":"sec:Prob","type":"strong","visibility":1}},{"data":{"id":"sec:Probsec:LA","source":"sec:LA","target":"sec:Prob","type":"strong","visibility":1}},{"data":{"id":"subsec:Fsec:LA","source":"sec:LA","target":"subsec:F","type":"strong","visibility":0}},{"data":{"id":"subsec:RVsubsec:ConfProb","source":"subsec:ConfProb","target":"subsec:RV","type":"strong","visibility":1}},{"data":{"id":"subsec:RVdef:Measbl","source":"def:Measbl","target":"subsec:RV","type":"strong","visibility":0}},{"data":{"id":"subsec:Margsubsec:ConfProb","source":"subsec:ConfProb","target":"subsec:Marg","type":"strong","visibility":1}},{"data":{"id":"subsec:Margdef:JoinRV","source":"def:JoinRV","target":"subsec:Marg","type":"strong","visibility":1}},{"data":{"id":"subsec:Margdef:RVLaw","source":"def:RVLaw","target":"subsec:Marg","type":"weak"}},{"data":{"id":"subsec:Condsubsec:ConfProb","source":"subsec:ConfProb","target":"subsec:Cond","type":"strong","visibility":1}},{"data":{"id":"subsec:Inddef:Prob","source":"def:Prob","target":"subsec:Ind","type":"strong","visibility":1}},{"data":{"id":"subsec:Inddef:CondProb","source":"def:CondPro
b","target":"subsec:Ind","type":"weak"}},{"data":{"id":"subsec:LLNsubsec:MomRV","source":"subsec:MomRV","target":"subsec:LLN","type":"strong","visibility":1}}];