var graph = [{"group":"nodes","data":{"id":"def:Corr","name":"definition","text":"
\n

Definition 22 (Correlation). Every definition of the convolution can be turned into a definition of the correlation by R_{fg} = f* \\tilde g where \\tilde g(x)=g(-x). For functions \\mathbb{R}\\rightarrow \\mathbb{C} this gives

\n

R_{fg}(x) = \\int_{-\\infty}^{\\infty} f(u)g(u-x)du

\n

Remark: correlation is not commutative.

\n
","parent":"sec:FCC","rank":"0","html_name":"def:Corr","summary":"
\n

Definition 22 (Correlation). R_{fg} = f* \\tilde g

\n
","hasSummary":true,"hasTitle":true,"title":"Correlation","height":276,"width":980},"classes":"l0","position":{"x":3344.664174192218,"y":8942.891041283066}},{"group":"nodes","data":{"id":"rem:GR","name":"remark","text":"
\n

Remark 5 (General remarks).

\n\n
","parent":"sec:FCC","rank":"0","html_name":"rem:GR","summary":"
\n

Remark 5 (General remarks). Arrow directions + Multidimensional case

\n
","hasSummary":true,"hasTitle":true,"title":"General remarks","height":273,"width":980},"classes":"l0","position":{"x":4606.020214642733,"y":8903.649734517832}},{"group":"nodes","data":{"id":"th:ExTI","name":"theorem","text":"
\n

Theorem 9 (Exponential function and T.I. operators (!finish morph!)). Let A be a T.I. operator.

\n\n
","parent":"sec:TIOpe","rank":"0","html_name":"th:ExTI","summary":"
\n

Theorem 9 (Exponential function and T.I. operators (!finish morph!)). The eigenfunctions of a T.I. operator A are the exponential functions t\\mapsto e^{i\\alpha t} which lie in the domain of definition of A.

\n
","hasSummary":true,"hasTitle":true,"title":"Exponential function and T.I. operators (!finish morph!)","height":395,"width":980},"classes":"l0","position":{"x":7322.731965135108,"y":5045.927110898508}},{"group":"nodes","data":{"id":"th:ConvTI","name":"theorem","text":"
\n

Theorem 10 (Convolutions are T.I.). Let h:E\\rightarrow \\mathbb{C} with E=\\mathbb{R}^n, (\\mathbb{R}/\\mathbb{Z})^n, \\mathbb{Z}^n or (\\mathbb{Z}/k\\mathbb{Z})^n. Let A be the operator defined, when it exists, by A(f)=f*h. A is a T.I. operator. Show it in the case E=\\mathbb{R}. We have \\begin{aligned}\n\\varphi_t(A(f))(x)&=&\\varphi_t(f*h)(x)\\\\\n&=& (f*h)(x+t)\\\\\n &=& \\int_{-\\infty}^{\\infty} f(u)h(x+t-u)du\\\\\n&=&\\int_{-\\infty}^{\\infty} f(v+t)h(x-v)dv\\\\\n&=& (\\varphi_t(f)*h)(x)=A(\\varphi_t(f))(x).\\\\\\end{aligned} The proof is similar in the other cases.
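The translation-invariance identity above can be checked numerically in the discrete case E=\\mathbb{Z}/N\\mathbb{Z} (a minimal sketch, not part of the notes; the helper names conv and translate and the values of N, t are illustrative):

```python
import numpy as np

N = 16
rng = np.random.default_rng(0)
f = rng.standard_normal(N)
h = rng.standard_normal(N)
t = 3  # translation amount

def conv(a, b):
    # circular convolution on Z/NZ: (a*b)(x) = sum_u a(u) b((x-u) mod N)
    return np.array([sum(a[u] * b[(x - u) % N] for u in range(N)) for x in range(N)])

def translate(a, t):
    # phi_t(a)(x) = a(x+t), indices taken mod N
    return np.roll(a, -t)

# A(phi_t(f)) == phi_t(A(f)): convolution commutes with translation
assert np.allclose(conv(translate(f, t), h), translate(conv(f, h), t))
```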

\n
","parent":"sec:TIOpe","rank":"0","html_name":"th:ConvTI","summary":"
\n

Theorem 10 (Convolutions are T.I.). Given a function h, the operator f\\mapsto f*h is T.I.

\n
","hasSummary":true,"hasTitle":true,"title":"Convolutions are T.I.","height":276,"width":980},"classes":"l0","position":{"x":5431.71662161616,"y":4496.318151068295}},{"group":"nodes","data":{"id":"th:TrFunCon","name":"theorem","text":"
\n

Theorem 11 (Transfer function of convolution operators). Let h be a function with a Fourier transform \\mathcal{F}(h) and let A be the operator A(f)=f*h. We already know from Theorem 10 that A has a transfer function H. Results of section 2.3 on convolution theorems tell us that the transfer function of A evaluated on imaginary numbers is the Fourier transform of h: H(2i\\pi \\nu)=\\mathcal{F}(h)(\\nu). Similar results hold with Fourier series or the discrete Fourier transform when h is periodic or defined on a finite set. The arguments 2i\\pi\\nu and \\nu should be adjusted to each setting and its conventions. Prove the result for the case of Fourier series. The proof for the Fourier transform and the discrete Fourier transform follows the same pattern, though the case of the Fourier transform requires the formalism of distributions.

\n

Let h be a T-periodic function with Fourier series (c_n(h))_{n\\in \\mathbb{Z}}, and let * be the circular convolution. Consider the T-periodic exponential function e_k(t)=e^{2i\\pi \\frac{k}{T}t } with k\\in \\mathbb{Z}. The Fourier series (c_n(e_k))_{n\\in \\mathbb{Z}} of e_k is the sequence where

\n

\\left\\{\n\\begin{array}{ll}\n c_n(e_k)=1, & \\text{if}\\ n=k \\\\\n c_n(e_k)=0, & \\text{otherwise.}\n\\end{array}\\right.

\n

The product theorem says that (e_k*h)(x) = \\sum_{n=-\\infty}^{n=+\\infty} c_n(e_k).c_n(h) e^{2i\\pi \\frac{n}{T}x }

\n

Note that

\n

\\left\\{\n\\begin{array}{ll}\n c_n(e_k).c_n(h)=1.c_k(h), & \\text{if}\\ n=k \\\\\n c_n(e_k).c_n(h)=0, & \\text{otherwise.}\n\\end{array}\\right. Hence

\n

(e_k*h)(x) = c_k(h) e^{2i\\pi \\frac{k}{T}x }=c_k(h)e_k(x) and e_k is an eigenfunction of eigenvalue c_k(h): H\\left(2i\\pi \\frac{k}{T}\\right)=c_k(h).
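A discrete analogue of this eigenvalue computation can be checked on \\mathbb{Z}/N\\mathbb{Z}, where the Fourier coefficient c_k(h) is replaced by the k-th DFT coefficient (a sketch, not from the notes; N, k and the variable names are illustrative):

```python
import numpy as np

N, k = 16, 3
rng = np.random.default_rng(0)
h = rng.standard_normal(N)
x = np.arange(N)
e_k = np.exp(2j * np.pi * k * x / N)  # discrete exponential e_k(x)

# circular convolution (e_k * h)(x) = sum_u e_k(u) h((x-u) mod N)
conv = np.array([sum(e_k[u] * h[(x_ - u) % N] for u in range(N))
                 for x_ in range(N)])

# e_k is an eigenfunction; its eigenvalue is the k-th DFT coefficient of h
eigenvalue = np.fft.fft(h)[k]
assert np.allclose(conv, eigenvalue * e_k)
```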

\n
","parent":"sec:TIOpe","rank":"0","html_name":"th:TrFunCon","summary":"
\n

Theorem 11 (Transfer function of convolution operators). Let h be a function with a Fourier transform \\mathcal{F}(h) and let A be the operator A(f)=f*h. The transfer function of A evaluated on imaginary numbers is the Fourier transform of h: H(2i\\pi \\nu)=\\mathcal{F}(h)(\\nu)

\n
","hasSummary":true,"hasTitle":true,"title":"Transfer function of convolutions operators","height":541,"width":980},"classes":"l0","position":{"x":6653.027474376811,"y":3861.922134151624}},{"group":"nodes","data":{"id":"def:TrInOp","name":"definition","text":"
\n

Definition 31 (T.I. operators). Operators are usually defined as linear maps between spaces of functions. In our case, the set E is \\mathbb{R}^n, (\\mathbb{R}/\\mathbb{Z})^n, \\mathbb{Z}^n or (\\mathbb{Z}/k\\mathbb{Z})^n, and \\mathcal{F}(E,\\mathbb{C}) is the vector space of functions from E to \\mathbb{C}. Let \\varphi_t:\\mathcal{F}(E,\\mathbb{C})\\rightarrow \\mathcal{F}(E,\\mathbb{C}) be the operator ’translation by t’: \\varphi_t(f)(x)=f(x+t). When E=\\mathbb{R}^n, \\varphi_t is defined for t\\in \\mathbb{R}^n, but when E=\\mathbb{Z}^n, t should belong to \\mathbb{Z}^n. In our case, we always take t\\in E. Let F and G be subspaces of \\mathcal{F}(E,\\mathbb{C}) left stable by all translations. An operator A:F\\rightarrow G is said to be translation invariant (T.I.) if and only if \\forall t,\\quad A \\circ \\varphi_t = \\varphi_t \\circ A. In an informal way, we have A(f(x+t))=A(f)(x+t). Still informally, the operator A does not make distinctions between the different locations of E; it acts the same everywhere.

\n
","parent":"sec:TIOpe","rank":"0","html_name":"def:TrInOp","summary":"
\n

Definition 31 (T.I. operators). A \\circ \\varphi_t = \\varphi_t \\circ A

\n
","hasSummary":true,"hasTitle":true,"title":"T.I. operators","height":282,"width":980},"classes":"l0","position":{"x":5873.092485873402,"y":4937.334272728404}},{"group":"nodes","data":{"id":"def:TranFun","name":"definition","text":"
\n

Definition 32 (Transfer functions). Let A be a T.I. operator. Recall that exponential functions which are in the domain of definition of A are eigenfunctions of A. Let H(z) be the eigenvalue of the eigenfunction x\\mapsto e^{z x}. In the context of signal processing, the function H(z) is called the ’transfer function’. Consider the simple case of a function f that can be decomposed as a linear combination of eigenfunctions x\\mapsto e^{z x}, with z varying in a finite set S of complex numbers:

\n

f = \\sum_{z \\in S} \\alpha(z)\\left( x\\mapsto e^{z x}\\right)=x\\mapsto \\sum_{z \\in S} \\alpha(z)e^{z x}.

\n

Then \\label{eq:TranFun}\nA(f)= \\sum_{z \\in S} \\alpha(z)A\\left(x\\mapsto e^{z x}\\right)=x\\mapsto \\sum_{z \\in S} \\alpha(z)H(z) e^{z x}.

\n

Applying the operator to a function f amounts to multiplying the coefficients of the decomposition of f over exponential functions by the transfer function H. This result relies on the linearity of the operator A and the finiteness of the sum. In the case of an infinite sum or an integral, linearity alone is not enough, but the result often remains valid provided that the quantities exist. This happens in particular with Fourier series or the Fourier transform, f(x) = \\int_{-\\infty}^{+\\infty} F(\\nu)e^{2i\\pi\\nu x} d\\nu.

\n

The following informal calculation reproduces Eq.[eq:TranFun] in the context of the Fourier transform.

\n

\\begin{aligned}\nA(f)&=&A\\left(x\\mapsto \\int_{-\\infty}^{+\\infty} F(\\nu)e^{2i\\pi\\nu x} d\\nu \\right) \\\\\n&=& \\int_{-\\infty}^{+\\infty} A\\left( x\\mapsto F(\\nu)e^{2i\\pi\\nu x} \\right)d\\nu \\\\\n&=& \\int_{-\\infty}^{+\\infty} H(2i\\pi\\nu)\\left( x\\mapsto F(\\nu)e^{2i\\pi\\nu x} \\right) d\\nu \\\\\n&=& x\\mapsto \\int_{-\\infty}^{+\\infty} H(2i\\pi\\nu) F(\\nu)e^{2i\\pi\\nu x} d\\nu \\\\\n&=& \\mathcal{F}^{-1}\\left(\\nu \\mapsto H(2i\\pi\\nu).F(\\nu) \\right).\\\\\\end{aligned}

\n

The equality A(f)=\\mathcal{F}^{-1}\\left(\\nu \\mapsto H(2i\\pi\\nu).F(\\nu) \\right) is often true in practice when the Fourier transforms exist, but it should be checked case by case.

\n
","parent":"sec:TIOpe","rank":"0","html_name":"def:TranFun","summary":"
\n

Definition 32 (Transfer functions). T.I. operators have transfer functions H(z) defined as the eigenvalues of the eigenfunctions t\\mapsto e^{z t}.

\n
","hasSummary":true,"hasTitle":true,"title":"Transfer functions","height":332,"width":980},"classes":"l0","position":{"x":7326.069599729622,"y":4469.610681516074}},{"group":"nodes","data":{"id":"rem:Cont","name":"remark","text":"
\n

Remark 9 (General remarks). For simplicity, results on T.I. operators are expressed in one dimension but are true in any finite dimension.

\n
","parent":"sec:TIOpe","rank":"0","html_name":"rem:Cont","summary":"
\n

Remark 9 (General remarks). For simplicity, results on T.I. operators are expressed in one dimension but are true in any finite dimension.

\n
","hasSummary":true,"hasTitle":true,"title":"General remarks","height":365,"width":980},"classes":"l0","position":{"x":5869.910069613311,"y":5364.724011420463}},{"group":"nodes","data":{"id":"rem:FinSup","name":"remark","text":"
\n

Remark 10 (The case \\{0,..,N-1 \\}\\rightarrow \\mathbb{C}). Consider a function Mask:\\mathbb{Z}\\rightarrow \\mathbb{C} and the operator A_{\\mathbb{Z}}(f)=f*Mask where * is the discrete convolution. Assume that the function Mask is non-null on an integer interval -M,..,M. It is often interesting in practice to define analogues of A_{\\mathbb{Z}} for functions defined on a finite set, for instance \\{0,..,N-1\\}. It is in particular the case for image and signal processing, where data are observed on finite domains. There are two standard ways to define an operator A_N mimicking A_{\\mathbb{Z}} for functions \\{0,..,N-1 \\}\\rightarrow \\mathbb{C}. Consider now

\n

f:\\{0,..,N-1 \\}\\rightarrow \\mathbb{C}.

\n
    \n
  1. Completion by 0. The function f:\\{0,..,N-1\\}\\rightarrow \\mathbb{C} can be extended by 0 before and after. The new function \\bar f is given by \\bar f(k\\notin \\{0,..,N-1 \\})=0 and \\bar f(k\\in \\{0,..,N-1 \\})=f(k). It is then possible to define an operator A^1_N by

    \n

    A^1_N(f)(k\\in \\{0,..,N-1 \\}) = (\\bar f * Mask)(k), where * is the discrete convolution.

  2. \n
  3. Periodization. We now interpret \\{0,..,N-1 \\} as \\mathbb{Z}/N\\mathbb{Z}. We would like first to turn Mask into a periodic function, and restrict it to a new function \\bar Mask:\\{0,..,N-1 \\}\\rightarrow \\mathbb{C}. The operator A^2_N is then defined as A^2_N(f)(k\\in \\{0,..,N-1 \\}) = (f * \\bar Mask)(k),

    \n

    but where * is the discrete circular convolution. In order to obtain a relevant operator, \\bar Mask should be constructed in the following way. Assume that the range of non-null values 2M+1 of Mask is smaller than N. Then we can write Mask_{per}(k) = \\sum_{n=-\\infty}^{+\\infty} Mask(k+nN), and define \\bar Mask(k\\in \\{0,..,N-1 \\})=Mask_{per}(k). In practice, when N is even, the indices -\\frac{N}{2},..,-1 are translated to the indices \\frac{N}{2},..,N-1.

  4. \n
\n

When k is far from the borders with respect to the support of the mask, that is to say M<k and k<N-M, it can be checked that all the different convolutions agree. Hence the operators A^1_N and A^2_N reproduce the behavior of A_{\\mathbb{Z}}. However, A^1_N and A^2_N differ in the way they deal with border effects. It can be argued that A^1_N has more meaning in practice than A^2_N. Indeed, A^2_N treats \\{0,..,N-1\\} as \\mathbb{Z}/N\\mathbb{Z} but the function f usually has no physical reason to be periodic.
\nThe main difference between these operators is their spectral properties. Note that when \\{0,..,N-1\\} is seen as \\mathbb{Z}/N\\mathbb{Z}, A^2_N is a translation invariant operator. Using the properties of the discrete circular convolution, we have

\n

A^2_N(f) = \\operatorname{DFT}^{-1}(\\operatorname{DFT}(f).\\operatorname{DFT}(\\bar Mask)).

\n

Using the fast Fourier transform algorithm (FFT) to compute the \\operatorname{DFT} can significantly decrease the computational complexity of A^2_N compared with using the convolution formula. This makes A^2_N the most interesting choice in many applications.
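As an illustrative check (a sketch under stated assumptions; mask_bar stands for the periodized mask and its values, like N, are arbitrary), the DFT route computed via the FFT agrees with the direct circular-convolution formula:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
f = rng.standard_normal(N)
mask_bar = rng.standard_normal(N)  # stands for the periodized mask Mask_per

# direct circular convolution: A^2_N(f)(k) = sum_u f(u) mask_bar((k-u) mod N)
direct = np.array([sum(f[u] * mask_bar[(k - u) % N] for u in range(N))
                   for k in range(N)])

# DFT route: A^2_N(f) = DFT^{-1}(DFT(f) . DFT(mask_bar))
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(mask_bar)).real

assert np.allclose(direct, via_fft)
```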
\nOn the other hand, A^1_N is, strictly speaking, not a translation invariant operator. This is because the convolution is with \\bar f and not f. Though A^1_N can still be expressed as

\n

A^1_N(f) = \\mathcal{F}^{-1}(\\mathcal{F}(\\bar f).\\mathcal{F}(Mask)),

\n

where \\mathcal{F} is a Fourier transform for functions \\mathbb{Z}\\rightarrow \\mathbb{C}. We will not show it here, but in that case the functions \\mathcal{F}(\\bar f) and \\mathcal{F}(Mask) have a continuous support in the frequency domain. Evaluating A^1_N(f) using this formula requires evaluating the function \\mathcal{F}(\\bar f) at an infinite number of points. Hence the Fourier transform is not an interesting strategy to compute A^1_N(f).
\nThe previous considerations can be informally interpreted as follows. In the ’completion by 0’ case, the function f:\\{0,..,N-1\\}\\rightarrow \\mathbb{C} is embedded in the bigger set of functions \\mathbb{Z}\\rightarrow \\mathbb{C}. The operator A^1_N is an interesting candidate to replace A_{\\mathbb{Z}}, but the Fourier transform of functions \\mathbb{Z}\\rightarrow \\mathbb{C} does not enable fast computations. On the other hand, in the ’periodization’ case, the arbitrary function f:\\{0,..,N-1\\}\\rightarrow \\mathbb{C} is seen in the smaller set of periodic functions. This point of view is usually not entirely accurate on the borders but leads to fast computations using the FFT.
\nZero-padding. As pointed out in Remark 6 from the section on convolutions, in order to avoid the undesired border effects of the circular convolution of A^2_N, the domain \\{0,..,N-1\\} can be extended to \\{0,..,2N-1\\} by adding zeros to f and Mask. Then the operators A^{1}_N and A^{2}_N are the same.
\nAll the discussion remains valid in the multidimensional case f:\\{0,..,N-1\\}^n\\rightarrow \\mathbb{C}. Convolutions are replaced by their multidimensional versions.

\n
","parent":"sec:TIOpe","rank":"0","html_name":"rem:FinSup","summary":"
\n

Remark 10 (The case \\{0,..,N-1 \\}\\rightarrow \\mathbb{C}). Let Mask:\\mathbb{Z}\\rightarrow \\mathbb{C} and consider the operator A_{\\mathbb{Z}}(f)=f*Mask. There are two standard ways to define an operator A_N mimicking A_{\\mathbb{Z}} for functions \\{0,..,N-1 \\}\\rightarrow \\mathbb{C}.

\n
    \n
  1. Completion by 0. * is replaced with the discrete convolution.

  2. \n
  3. Periodization. * is replaced with the discrete circular convolution.

  4. \n
\n

The second option leads to fast computations using the FFT.

\n
","hasSummary":true,"hasTitle":true,"title":"The case {0,..,N1}\\{0,..,N-1 \\}\\rightarrow \\mathbb{C}","height":848,"width":982},"classes":"l0","position":{"x":4745.357910248369,"y":3819.1752262180235}},{"group":"nodes","data":{"id":"def:SigmaA","name":"definition","text":"
\n

Definition 43 (\\sigma-algebra). Let \\Omega be a set and \\mathcal{P}(\\Omega) be the set of its subsets. A \\sigma-algebra \\mathcal{A} on \\Omega is a subset of \\mathcal{P}(\\Omega) such that

\n\n

where X^{c} refers to the complement of X in \\Omega. The first two axioms imply that \\Omega\\in \\mathcal{A}.

\n
","parent":"sec:Meas","rank":"0","html_name":"def:SigmaA","summary":"
\n

Definition 43 (\\sigma-algebra). Let \\Omega be a set and \\mathcal{P}(\\Omega) be the set of its subsets. A \\sigma-algebra \\mathcal{A} on \\Omega is a subset of \\mathcal{P}(\\Omega) such that

\n\n

where X^{c} refers to the complement of X in \\Omega. The first two axioms imply that \\Omega\\in \\mathcal{A}.

\n
","hasSummary":false,"hasTitle":true,"title":"σ\\sigma-algebra","height":205,"width":980},"classes":"l0","position":{"x":21374.655408924926,"y":8617.81610704642}},{"group":"nodes","data":{"id":"def:Borel","name":"definition","text":"
\n

Definition 44 (Borel \\sigma-algebra !check the vector space case!). On \\mathbb{R}, the Borel \\sigma-algebra is defined by countable unions, intersections and complements of arbitrary open intervals ]a,b[. This definition extends to \\mathbb{R}^n by taking products of intervals ]a_1,b_1[\\times ... \\times ]a_n,b_n[. It extends to an arbitrary vector space of finite dimension by checking that the Borel \\sigma-algebra is independent of the choice of the basis.

\n
","parent":"sec:Meas","rank":"0","html_name":"def:Borel","summary":"
\n

Definition 44 (Borel \\sigma-algebra !check the vector space case!). On \\mathbb{R}, the Borel \\sigma-algebra is defined by countable unions, intersections and complements of arbitrary open intervals ]a,b[. This definition extends to \\mathbb{R}^n by taking products of intervals ]a_1,b_1[\\times ... \\times ]a_n,b_n[. It extends to an arbitrary vector space of finite dimension by checking that the Borel \\sigma-algebra is independent of the choice of the basis.

\n
","hasSummary":false,"hasTitle":true,"title":"Borel σ\\sigma-algebra !check the vector space case!","height":303,"width":980},"classes":"l0","position":{"x":23402.866561490624,"y":7521.3676802287655}},{"group":"nodes","data":{"id":"def:Meas","name":"definition","text":"
\n

Definition 45 (Measures). Let \\Omega be a set with a \\sigma-algebra \\mathcal{A}. A measure is a function \\mu:\\mathcal{A}\\rightarrow \\mathbb{R} such that

\n\n

When the last requirement is dropped, the measure is called a \"signed measure\". (\\Omega,\\mathcal{A},\\mu) is called a measured space.

\n
","parent":"sec:Meas","rank":"0","html_name":"def:Meas","summary":"
\n

Definition 45 (Measures). Let \\Omega be a set with a \\sigma-algebra \\mathcal{A}. A measure is a function \\mu:\\mathcal{A}\\rightarrow \\mathbb{R} such that

\n\n

When the last requirement is dropped, the measure is called a \"signed measure\". (\\Omega,\\mathcal{A},\\mu) is called a measured space.

\n
","hasSummary":false,"hasTitle":true,"title":"Measures","height":198,"width":980},"classes":"l0","position":{"x":20265.49636334657,"y":7465.628481662133}},{"group":"nodes","data":{"id":"def:Measbl","name":"definition","text":"
\n

Definition 46 (Measurable function). Let E and F be two sets with \\sigma-algebras \\mathcal{A}_E and \\mathcal{A}_F. A function f:E\\rightarrow F is measurable if and only if \\forall Y\\in \\mathcal{A}_F, f^{-1}(Y)\\in \\mathcal{A}_E.

\n
","parent":"sec:Meas","rank":"0","html_name":"def:Measbl","summary":"
\n

Definition 46 (Measurable function). Let E and F be two sets with \\sigma-algebras \\mathcal{A}_E and \\mathcal{A}_F. A function f:E\\rightarrow F is measurable if and only if \\forall Y\\in \\mathcal{A}_F, f^{-1}(Y)\\in \\mathcal{A}_E.

\n
","hasSummary":false,"hasTitle":true,"title":"Measurable function","height":198,"width":980},"classes":"l0","position":{"x":21584.558365463217,"y":7710.896413858276}},{"group":"nodes","data":{"id":"def:LebInt","name":"definition","text":"
\n

Definition 47 (Lebesgue integration !!deal with general case!!). Let f be a measurable function from the measured space (\\Omega,\\mathcal{A},\\mu) to a vector space E endowed with the Borel \\sigma-algebra \\mathcal{B}_E. The Lebesgue integral of f is noted \\int_{\\Omega}f \\mathrm{d}\\mu.

\n

Give a precise definition in the case where f is a \"step function\" taking a finite number of values \\{a_1,..,a_n\\}. \\int_{\\Omega}f \\mathrm{d}\\mu= \\sum_{i=1}^n a_i\\mu\\left(f^{-1}(\\{a_i\\})\\right). The general definition is a limit case of the integration of step functions.
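For a concrete instance (a sketch with made-up values; the counting measure on a finite \\Omega plays the role of \\mu), the step-function formula reduces to a weighted sum over the distinct values of f:

```python
from collections import Counter

# a step function f on a finite Omega, with mu = counting measure
f = {'w1': 2.0, 'w2': 2.0, 'w3': 5.0, 'w4': 5.0, 'w5': 5.0}

# mu(f^{-1}({a_i})) is the number of points mapped to a_i
measures = Counter(f.values())
integral = sum(a * measures[a] for a in measures)

# for the counting measure, the integral equals the plain sum of f over Omega
assert integral == sum(f.values())
```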

\n
","parent":"sec:Meas","rank":"0","html_name":"def:LebInt","summary":"
\n

Definition 47 (Lebesgue integration !!deal with general case!!). Let f be a measurable function from the measured space (\\Omega,\\mathcal{A},\\mu) to a vector space E endowed with the Borel \\sigma-algebra \\mathcal{B}_E. The Lebesgue integral of f is noted \\int_{\\Omega}f \\mathrm{d}\\mu.

\n

Give a precise definition in the case where f is a \"step function\" taking a finite number of values \\{a_1,..,a_n\\}. \\int_{\\Omega}f \\mathrm{d}\\mu= \\sum_{i=1}^n a_i\\mu\\left(f^{-1}(\\{a_i\\})\\right). The general definition is a limit case of the integration of step functions.

\n
","hasSummary":false,"hasTitle":true,"title":"Lebesgue integration !!deal with general case!!","height":395,"width":980},"classes":"l0","position":{"x":21994.053917912363,"y":6855.319225843632}},{"group":"nodes","data":{"id":"rem:Dflt","name":"remark","text":"
\n

Remark 11 (Default \\sigma-algebras). In many cases the choice of \\sigma-algebra is not mentioned. Here are the default choices in standard situations:

\n\n
","parent":"sec:Meas","rank":"0","html_name":"rem:Dflt","summary":"
\n

Remark 11 (Default \\sigma-algebras). In many cases the choice of \\sigma-algebra is not mentioned. Here are the default choices in standard situations:

\n\n
","hasSummary":false,"hasTitle":true,"title":"Default σ\\sigma-algebras","height":205,"width":980},"classes":"l0","position":{"x":22755.728060176196,"y":8475.307295579609}},{"group":"nodes","data":{"id":"def:VectSpace","name":"definition","text":"
\n

Definition 1 (Vector space). Let K=\\mathbb{R} or K=\\mathbb{C}. (E,+,.) is a K vector space if + and . are such that

\n\n

Elements of E are called vectors, and elements of K are called scalars.

\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:VectSpace","summary":"
\n

Definition 1 (Vector space). Let K=\\mathbb{R} or K=\\mathbb{C}. (E,+,.) is a K vector space if + and . are such that

\n\n

Elements of E are called vectors, and elements of K are called scalars.

\n
","hasSummary":false,"hasTitle":true,"title":"Vector space","height":198,"width":980},"classes":"l0","position":{"x":11579.396635227931,"y":11530.966101934808}},{"group":"nodes","data":{"id":"def:FreeSys","name":"definition","text":"
\n

Definition 2 (Free system). Vectors x_1,...,x_n\\in E are free if and only if \\alpha_1.x_1+...+\\alpha_nx_n=0 \\Rightarrow \\forall i\\in \\{1,..,n\\},\\quad \\alpha_i = 0. The word \"independent\" is sometimes used instead of \"free\".

\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:FreeSys","summary":"
\n

Definition 2 (Free system). Vectors x_1,...,x_n\\in E are free if and only if \\alpha_1.x_1+...+\\alpha_nx_n=0 \\Rightarrow \\forall i\\in \\{1,..,n\\},\\quad \\alpha_i = 0. The word \"independent\" is sometimes used instead of \"free\".

\n
","hasSummary":false,"hasTitle":true,"title":"Free system","height":198,"width":980},"classes":"l0","position":{"x":12728.226213056792,"y":10844.920114152294}},{"group":"nodes","data":{"id":"def:GenSys","name":"definition","text":"
\n

Definition 3 (Generating system). Vectors x_1,...,x_n\\in E are generators of E if and only if \\forall x\\in E, \\exists \\alpha_i,\\quad \\alpha_1.x_1+...+\\alpha_nx_n=x

\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:GenSys","summary":"
\n

Definition 3 (Generating system). Vectors x_1,...,x_n\\in E are generators of E if and only if \\forall x\\in E, \\exists \\alpha_i,\\quad \\alpha_1.x_1+...+\\alpha_nx_n=x

\n
","hasSummary":false,"hasTitle":true,"title":"Generating system","height":198,"width":980},"classes":"l0","position":{"x":13039.651455434217,"y":11250.710804583194}},{"group":"nodes","data":{"id":"def:Basis","name":"definition","text":"
\n

Definition 4 (Basis). Vectors x_1,...,x_n\\in E are a basis of E if and only if they are both a free system and a generating system of E. n is called the dimension of E.

\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:Basis","summary":"
\n

Definition 4 (Basis). Vectors x_1,...,x_n\\in E are a basis of E if and only if they are both a free system and a generating system of E. n is called the dimension of E.

\n
","hasSummary":false,"hasTitle":true,"title":"Basis","height":198,"width":980},"classes":"l0","position":{"x":13974.238751330833,"y":10803.711804944005}},{"group":"nodes","data":{"id":"def:SubSpace","name":"definition","text":"
\n

Definition 5 (Subspace). V is a sub vector space of E if and only if V\\subset E and \\alpha.x+\\beta.y\\in V for all x,y\\in V and \\alpha,\\beta\\in K.

\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:SubSpace","summary":"
\n

Definition 5 (Subspace). V is a sub vector space of E if and only if V\\subset E and \\alpha.x+\\beta.y\\in V for all x,y\\in V and \\alpha,\\beta\\in K.

\n
","hasSummary":false,"hasTitle":true,"title":"Subspace","height":198,"width":980},"classes":"l0","position":{"x":11013.555366699075,"y":10441.520276248375}},{"group":"nodes","data":{"id":"def:Span","name":"definition","text":"
\n

Definition 6 (Span of a set of vectors). Let x_1,..,x_k\\in E. The span of x_1,..,x_k is the set generated by all the linear combinations: \\operatorname{span}(\\{x_1,..,x_k\\}) = \\{ \\alpha_1 x_1 +...+ \\alpha_k x_k, \\alpha_i \\in K \\}. It can be checked that the span is a vector subspace of E.
\nAlternatively, the span of a matrix A is defined as \\operatorname{span}(A) = \\{ AX, X\\in K^n\\}.

\n

When the columns of A are the coordinate vectors of x_1,..,x_k, the span of A is the set of coordinates of the elements of \\operatorname{span}(\\{x_1,..,x_k\\}).

\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:Span","summary":"
\n

Definition 6 (Span of a set of vectors). Let x_1,..,x_k\\in E. The span of x_1,..,x_k is the set generated by all the linear combinations: \\operatorname{span}(\\{x_1,..,x_k\\}) = \\{ \\alpha_1 x_1 +...+ \\alpha_k x_k, \\alpha_i \\in K \\}. It can be checked that the span is a vector subspace of E.
\nAlternatively, the span of a matrix A is defined as \\operatorname{span}(A) = \\{ AX, X\\in K^n\\}.

\n

When the columns of A are the coordinate vectors of x_1,..,x_k, the span of A is the set of coordinates of the elements of \\operatorname{span}(\\{x_1,..,x_k\\}).

\n
","hasSummary":false,"hasTitle":true,"title":"Span of a set of vectors","height":198,"width":980},"classes":"l0","position":{"x":11856.999926963139,"y":9734.551872039543}},{"group":"nodes","data":{"id":"def:Coords","name":"definition","text":"
\n

Definition 7 (Coordinates). When e_1,...,e_n\\in E is a basis of E, it can be proved that each vector x has a unique decomposition x= \\alpha_1.e_1 + ... + \\alpha_n.e_n.

\n

\\begin{pmatrix}\\alpha_1\\\\.\\\\.\\\\ \\alpha_n\\end{pmatrix}\\in K^n is called the coordinate vector of x in the basis e_1,...,e_n. \\alpha_i is often noted x_i, and the coordinate vector becomes \\begin{pmatrix}x_1\\\\.\\\\.\\\\ x_n\\end{pmatrix}. Warning: depending on the context, x_i sometimes refers to a coordinate of x, or to a vector among a set of vectors x_1,...,x_n\\in E.

\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:Coords","summary":"
\n

Definition 7 (Coordinates). When e_1,...,e_n\\in E is a basis of E, it can be proved that each vector x has a unique decomposition x= \\alpha_1.e_1 + ... + \\alpha_n.e_n.

\n

\\begin{pmatrix}\\alpha_1\\\\.\\\\.\\\\ \\alpha_n\\end{pmatrix}\\in K^n is called the coordinate vector of x in the basis e_1,...,e_n. \\alpha_i is often noted x_i, and the coordinate vector becomes \\begin{pmatrix}x_1\\\\.\\\\.\\\\ x_n\\end{pmatrix}. Warning: depending on the context, x_i sometimes refers to a coordinate of x, or to a vector among a set of vectors x_1,...,x_n\\in E.

\n
","hasSummary":false,"hasTitle":true,"title":"Coordinates","height":198,"width":980},"classes":"l0","position":{"x":13145.822712594667,"y":10256.69575153816}},{"group":"nodes","data":{"id":"def:BasChg","name":"definition","text":"
\n

Definition 8 (Basis change). Let B=(e_1,...,e_n) and B'=(e'_1,...,e'_n) be 2 bases of E. For x\\in E, note X its coordinates in the basis B and X' its coordinates in the basis B'. They are related by X = PX' where P is the matrix whose j-th column is the coordinate vector of e'_j in the basis B. P is invertible and X' = P^{-1}X.
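A small numerical illustration (a sketch only; the particular basis B' below is an arbitrary choice, not from the notes) of the relations X = PX' and X' = P^{-1}X:

```python
import numpy as np

# columns of P are the coordinates (in B) of the new basis vectors e'_j
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # e'_1 = e_1, e'_2 = e_1 + e_2
Xp = np.array([2.0, 3.0])       # coordinates X' of x in the basis B'
X = P @ Xp                      # X = P X' gives the coordinates in B

# recover X' via X' = P^{-1} X
assert np.allclose(np.linalg.solve(P, X), Xp)
```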

\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"def:BasChg","summary":"
\n

Definition 8 (Basis change). Let B=(e_1,...,e_n) and B'=(e'_1,...,e'_n) be 2 bases of E. For x\\in E, note X its coordinates in the basis B and X' its coordinates in the basis B'. They are related by X = PX' where P is the matrix whose j-th column is the coordinate vector of e'_j in the basis B. P is invertible and X' = P^{-1}X.

\n
","hasSummary":false,"hasTitle":true,"title":"Basis change","height":198,"width":980},"classes":"l0","position":{"x":13709.444224257108,"y":9660.009061561592}},{"group":"nodes","data":{"id":"rem:ExVectsp","name":"remark","text":"
\n

Remark 1 (Common examples of vector spaces).

\n\n
","parent":"subsec:VectSpaces","rank":"0","html_name":"rem:ExVectsp","summary":"
\n

Remark 1 (Common examples of vector spaces).

\n\n
","hasSummary":false,"hasTitle":true,"title":"Common examples of vector spaces","height":297,"width":980},"classes":"l0","position":{"x":13033.723852237516,"y":11823.87387793776}},{"group":"nodes","data":{"id":"def:Mat","name":"definition","text":"
\n

Definition 9 (Matrix). A matrix is a 2-dimensional array containing elements of K.

\n
","parent":"subsec:Mats","rank":"0","html_name":"def:Mat","summary":"
\n

Definition 9 (Matrix). A matrix is a 22-dimensional array containing elements of KK.

\n
","hasSummary":false,"hasTitle":true,"title":"Matrix","height":198,"width":980},"classes":"l0","position":{"x":17282.57926982218,"y":8677.258398557675}},{"group":"nodes","data":{"id":"def:MatMul","name":"definition","text":"
\n

Definition 10 (Matrix multiplication). Let AA be a matrix of size m×nm\times n and BB a matrix of size n×pn\times p. The matrix multiplication of AA and BB, noted C=ABC=AB, is a matrix of size m×pm\times p whose elements are defined by Cij=k=1nAikBkj.C_{ij} = \sum_{k=1}^n A_{ik}B_{kj}. Visual representation: (Ai1..Ain)(B1j..Bnj)=(Cij).\begin{pmatrix} \\ \n\\\nA_{i1}&..&A_{in} \\\n\\\n\end{pmatrix} \n\begin{pmatrix} \n&&B_{1j}&&\\\n&&.&&\\\n&&.&&\\\n&&B_{nj}&&\\\n\end{pmatrix}\n=\begin{pmatrix} \n&&&&\\\n&&&&\\\n&&C_{ij}&&\\\n&&&&\\\n\end{pmatrix}.
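The index formula can be checked with a short sketch in plain Python (the helper name `matmul` is ours, not part of the text):

```python
# Naive matrix product: C[i][j] = sum over k of A[i][k] * B[k][j]
def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "sizes must be m x n and n x p"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```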

\n

When AB=IAB=I, where II is the identity matrix (ones on the diagonal and zeros elsewhere), AA and BB are inverses of each other: A=B1A=B^{-1}.

\n
","parent":"subsec:Mats","rank":"0","html_name":"def:MatMul","summary":"
\n

Definition 10 (Matrix multiplication). Let AA be a matrix of size m×nm\times n and BB a matrix of size n×pn\times p. The matrix multiplication of AA and BB, noted C=ABC=AB, is a matrix of size m×pm\times p whose elements are defined by Cij=k=1nAikBkj.C_{ij} = \sum_{k=1}^n A_{ik}B_{kj}. Visual representation: (Ai1..Ain)(B1j..Bnj)=(Cij).\begin{pmatrix} \\ \n\\\nA_{i1}&..&A_{in} \\\n\\\n\end{pmatrix} \n\begin{pmatrix} \n&&B_{1j}&&\\\n&&.&&\\\n&&.&&\\\n&&B_{nj}&&\\\n\end{pmatrix}\n=\begin{pmatrix} \n&&&&\\\n&&&&\\\n&&C_{ij}&&\\\n&&&&\\\n\end{pmatrix}.

\n

When AB=IAB=I, where II is the identity matrix (ones on the diagonal and zeros elsewhere), AA and BB are inverses of each other: A=B1A=B^{-1}.

\n
","hasSummary":false,"hasTitle":true,"title":"Matrix multiplication","height":198,"width":980},"classes":"l0","position":{"x":16417.779790079614,"y":8170.263532408395}},{"group":"nodes","data":{"id":"def:Trans","name":"definition","text":"
\n

Definition 11 (Transpose). Let MM be a matrix. The transpose of MM is noted MTM^T and defined by MijT=Mji.M^T_{ij}=M_{ji}.

\n

Important properties: (A+B)T=AT+BT(A+B)^T = A^T+B^T, (AB)T=BTAT(AB)^T=B^TA^T (note the reversed order), (A1)T=(AT)1(A^{-1})^T=(A^T)^{-1}
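A quick numerical sanity check of the product rule for transposes (a sketch in plain Python; the helper names are ours):

```python
def transpose(M):
    # M^T[i][j] = M[j][i]
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
# (AB)^T = B^T A^T: the order of the factors is reversed
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
```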

\n
","parent":"subsec:Mats","rank":"0","html_name":"def:Trans","summary":"
\n

Definition 11 (Transpose). Let MM be a matrix. The transpose of MM is noted MTM^T and defined by MijT=Mji.M^T_{ij}=M_{ji}.

\n

Important properties: (A+B)T=AT+BT(A+B)^T = A^T+B^T, (AB)T=BTAT(AB)^T=B^TA^T (note the reversed order), (A1)T=(AT)1(A^{-1})^T=(A^T)^{-1}

\n
","hasSummary":false,"hasTitle":true,"title":"Transpose","height":198,"width":980},"classes":"l0","position":{"x":16950.37013182701,"y":7548.4583757792325}},{"group":"nodes","data":{"id":"def:IsoKn","name":"theorem","text":"
\n

Theorem 1 (Bijection with KnK^n). Let EE be a vector space of finite dimension. Take a basis e1,..,ene_1,..,e_n of EE. The function which maps a vector xEx\\in E to its coordinate vector XKnX\\in K^n is linear and bijective. Hence a basis e1,..,ene_1,..,e_n identifies EE with KnK^n.

\n
","parent":"subsec:LinMaps","rank":"0","html_name":"def:IsoKn","summary":"
\n

Theorem 1 (Bijection with KnK^n). Let EE be a vector space of finite dimension. Take a basis e1,..,ene_1,..,e_n of EE. The function which maps a vector xEx\\in E to its coordinate vector XKnX\\in K^n is linear and bijective. Hence a basis e1,..,ene_1,..,e_n identifies EE with KnK^n.

\n
","hasSummary":false,"hasTitle":true,"title":"Bijection with KnK^n","height":211,"width":980},"classes":"l0","position":{"x":15687.967979063893,"y":11244.954099266317}},{"group":"nodes","data":{"id":"th:CompLinMaps","name":"theorem","text":"
\n

Theorem 2 (Composition of linear maps). Let E,FE,F and GG be KK vector spaces of finite dimension with bases e1,..,ene_1,..,e_n, f1,..,fmf_1,..,f_m, and g1,..,gpg_1,..,g_p. Let f:EFf:E\rightarrow F be a linear map of matrix AA and g:FGg:F \rightarrow G be a linear map of matrix BB. The matrix of the composition h=gfh=g \circ f is the matrix product C=BA.C=BA. Note the order: ff is applied first, but its matrix appears on the right.
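The reversed order can be verified on a small example (a sketch in plain Python; `matmul` and `apply` are our hypothetical helpers):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def apply(M, X):
    # image of the coordinate vector X under the map of matrix M
    return [sum(m * x for m, x in zip(row, X)) for row in M]

A = [[1, 2], [0, 1]]    # matrix of f
B = [[2, 0], [1, 1]]    # matrix of g
X = [3, 5]
# applying f then g agrees with the single matrix BA (reversed order)
assert apply(B, apply(A, X)) == apply(matmul(B, A), X)
```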

\n
","parent":"subsec:LinMaps","rank":"0","html_name":"th:CompLinMaps","summary":"
\n

Theorem 2 (Composition of linear maps). Let E,FE,F and GG be KK vector spaces of finite dimension with bases e1,..,ene_1,..,e_n, f1,..,fmf_1,..,f_m, and g1,..,gpg_1,..,g_p. Let f:EFf:E\rightarrow F be a linear map of matrix AA and g:FGg:F \rightarrow G be a linear map of matrix BB. The matrix of the composition h=gfh=g \circ f is the matrix product C=BA.C=BA. Note the order: ff is applied first, but its matrix appears on the right.

\n
","hasSummary":false,"hasTitle":true,"title":"Composition of linear maps","height":297,"width":980},"classes":"l0","position":{"x":16105.7337414555,"y":9952.548597540048}},{"group":"nodes","data":{"id":"def:LinMap","name":"definition","text":"
\n

Definition 12 (Linear map). A linear map ff between the KK vector spaces EE and HH is a function f:EHf:E\\rightarrow H such that x,yE,α,βK,f(α.x+Eβy)=α.f(x)+Hβ.f(y)\\forall x,y\\in E,\\forall \\alpha,\\beta\\in K, \\quad f(\\alpha.x+_E\\beta y) = \\alpha.f(x) +_H \\beta.f(y)

\n
","parent":"subsec:LinMaps","rank":"0","html_name":"def:LinMap","summary":"
\n

Definition 12 (Linear map). A linear map ff between the KK vector spaces EE and HH is a function f:EHf:E\\rightarrow H such that x,yE,α,βK,f(α.x+Eβy)=α.f(x)+Hβ.f(y)\\forall x,y\\in E,\\forall \\alpha,\\beta\\in K, \\quad f(\\alpha.x+_E\\beta y) = \\alpha.f(x) +_H \\beta.f(y)

\n
","hasSummary":false,"hasTitle":true,"title":"Linear map","height":198,"width":980},"classes":"l0","position":{"x":16984.969239969676,"y":11523.917573857172}},{"group":"nodes","data":{"id":"def:MatMap","name":"definition","text":"
\n

Definition 13 (Matrix of a linear map). Let e1,..,ene_1,..,e_n be a basis of EE and f1,..,fmf_1,..,f_m be a basis of HH. The elements of the matrix MM of a linear map f:EHf:E\\rightarrow H are mij=f(ej)i,m_{ij} = f(e_j)_i, where f(ej)if(e_j)_i is the ii-th coordinate of f(ej)f(e_j). Note that the matrix MM is of size m×nm\\times n.

\n

Let e1,..,ene'_1,..,e'_n and f1,..,fmf'_1,..,f'_m be two new bases of EE and HH. The matrix of ff in the new bases is M=Q1MPM'= Q^{-1} M P where the columns of PP are the coordinates of the vectors eie'_i in the basis eie_i and the columns of QQ are the coordinates of the vectors fif'_i in the basis fif_i.

\n
","parent":"subsec:LinMaps","rank":"0","html_name":"def:MatMap","summary":"
\n

Definition 13 (Matrix of a linear map). Let e1,..,ene_1,..,e_n be a basis of EE and f1,..,fmf_1,..,f_m be a basis of HH. The elements of the matrix MM of a linear map f:EHf:E\\rightarrow H are mij=f(ej)i,m_{ij} = f(e_j)_i, where f(ej)if(e_j)_i is the ii-th coordinate of f(ej)f(e_j). Note that the matrix MM is of size m×nm\\times n.

\n

Let e1,..,ene'_1,..,e'_n and f1,..,fmf'_1,..,f'_m be two new bases of EE and HH. The matrix of ff in the new bases is M=Q1MPM'= Q^{-1} M P where the columns of PP are the coordinates of the vectors eie'_i in the basis eie_i and the columns of QQ are the coordinates of the vectors fif'_i in the basis fif_i.

\n
","hasSummary":false,"hasTitle":true,"title":"Matrix of a linear map","height":198,"width":980},"classes":"l0","position":{"x":16830.563327358752,"y":10698.882412318088}},{"group":"nodes","data":{"id":"def:Eigen","name":"definition","text":"
\n

Definition 14 (Eigenvectors and eigenvalues). Let ff be a linear map from EE to EE. An eigenvector xx is a nonzero vector such that f(x)=λxf(x)= \lambda x for some scalar λK\lambda\in K. λ\lambda is called an eigenvalue. Alternatively, let MM be a matrix. A nonzero column vector XX is called an eigenvector when MX=λXMX=\lambda X for some scalar λK\lambda\in K. λ\lambda is again called an eigenvalue.

\n
","parent":"subsec:LinMaps","rank":"0","html_name":"def:Eigen","summary":"
\n

Definition 14 (Eigenvectors and eigenvalues). Let ff be a linear map from EE to EE. An eigenvector xx is a nonzero vector such that f(x)=λxf(x)= \lambda x for some scalar λK\lambda\in K. λ\lambda is called an eigenvalue. Alternatively, let MM be a matrix. A nonzero column vector XX is called an eigenvector when MX=λXMX=\lambda X for some scalar λK\lambda\in K. λ\lambda is again called an eigenvalue.

\n
","hasSummary":false,"hasTitle":true,"title":"Eigenvectors and eigenvalues","height":297,"width":980},"classes":"l0","position":{"x":18580.795461310972,"y":10255.738296787835}},{"group":"nodes","data":{"id":"def:DiagMap","name":"definition","text":"
\n

Definition 15 (Diagonalizable map). A linear map ff is diagonalizable if there exists a basis of eigenvectors. In that basis, the matrix MM of ff is M=(λ1λ2.λn)M= \begin{pmatrix} \\ \n\lambda_1&&&\\\n&\lambda_2&&\\\n&&.&\\\n&&&\lambda_n\\\n\end{pmatrix} where the λi\lambda_i are the eigenvalues of ff. Alternatively, a matrix AA is said to be diagonalizable if there exists an invertible matrix PP such that A=PDP1A=PDP^{-1} with DD a diagonal matrix.

\n
","parent":"subsec:LinMaps","rank":"0","html_name":"def:DiagMap","summary":"
\n

Definition 15 (Diagonalizable map). A linear map ff is diagonalizable if there exists a basis of eigenvectors. In that basis, the matrix MM of ff is M=(λ1λ2.λn)M= \begin{pmatrix} \\ \n\lambda_1&&&\\\n&\lambda_2&&\\\n&&.&\\\n&&&\lambda_n\\\n\end{pmatrix} where the λi\lambda_i are the eigenvalues of ff. Alternatively, a matrix AA is said to be diagonalizable if there exists an invertible matrix PP such that A=PDP1A=PDP^{-1} with DD a diagonal matrix.

\n
","hasSummary":false,"hasTitle":true,"title":"Diagonalizable map","height":198,"width":980},"classes":"l0","position":{"x":18743.398578871198,"y":9536.883217987297}},{"group":"nodes","data":{"id":"rem:MatMultVect","name":"remark","text":"
\n

Remark 2 (Coordinates of the image). Let ff be a linear map of matrix MM and xx be a vector whose coordinate vector is XX. The coordinate vector YY of f(x)f(x) is given by the matrix product of MM and XX, Y=MX.Y = MX.

\n
","parent":"subsec:LinMaps","rank":"0","html_name":"rem:MatMultVect","summary":"
\n

Remark 2 (Coordinates of the image). Let ff be a linear map of matrix MM and xx be a vector whose coordinate vector is XX. The coordinate vector YY of f(x)f(x) is given by the matrix product of MM and XX, Y=MX.Y = MX.

\n
","hasSummary":false,"hasTitle":true,"title":"Coordinates of the image","height":297,"width":980},"classes":"l0","position":{"x":17549.755638456132,"y":10218.094491526004}},{"group":"nodes","data":{"id":"def:Pyth","name":"theorem","text":"
\n

Theorem 3 (Pythagorean theorem). Let e1,...,ene_1,...,e_n be an orthonormal basis. The norm of a vector xx is given by x2=ix,ei2.\|x\|^2 = \sum_i \langle x,e_i \rangle^2. It is a direct consequence of the linearity of the inner product in each argument (bilinearity, or sesquilinearity in the complex case).

\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:Pyth","summary":"
\n

Theorem 3 (Pythagorean theorem). Let e1,...,ene_1,...,e_n be an orthonormal basis. The norm of a vector xx is given by x2=ix,ei2.\|x\|^2 = \sum_i \langle x,e_i \rangle^2. It is a direct consequence of the linearity of the inner product in each argument (bilinearity, or sesquilinearity in the complex case).

\n
","hasSummary":false,"hasTitle":true,"title":"Pythagore theorem","height":198,"width":980},"classes":"l0","position":{"x":12778.889078858281,"y":6938.557194336066}},{"group":"nodes","data":{"id":"th:OrthProj","name":"theorem","text":"
\n

Theorem 4 (Orthogonal projection on the span of independent vectors). Let e1,..,ene_1,..,e_n be an orthonormal basis of the vector space EE. Let x1,..,xkx_1,..,x_k be kk independent vectors of EE and V=span({x1,..,xk})V=\operatorname{span}(\{x_1,..,x_k\}). The coordinate vector of the orthogonal projection of a vector xEx\in E on VV is Xp=A(ATA)1ATX,X_p = A(A^TA)^{-1}A^TX, where AA is the matrix whose columns are the coordinates of the vectors xix_i, and XX is the coordinate vector of xx.
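For the special case k=1k=1 the formula is easy to check by hand: AA has a single column aa, so A(ATA)1ATXA(A^TA)^{-1}A^TX reduces to a scalar coefficient times aa. A sketch in plain Python (the helper name is ours):

```python
# Special case k = 1 of X_p = A (A^T A)^{-1} A^T X:
# A has a single column a, so X_p = (<x,a> / <a,a>) a.
def project_onto_line(a, x):
    coeff = sum(ai * xi for ai, xi in zip(a, x)) / sum(ai * ai for ai in a)
    return [coeff * ai for ai in a]

a, x = [1.0, 1.0], [2.0, 0.0]
p = project_onto_line(a, x)            # [1.0, 1.0]
# the residual x - p is orthogonal to a, as expected of a projection
assert abs(sum((xi - pi) * ai for xi, pi, ai in zip(x, p, a))) < 1e-12
```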

\n
","parent":"subsec:InnProd","rank":"0","html_name":"th:OrthProj","summary":"
\n

Theorem 4 (Orthogonal projection on the span of independent vectors). Let e1,..,ene_1,..,e_n be an orthonormal basis of the vector space EE. Let x1,..,xkx_1,..,x_k be kk independent vectors of EE and V=span({x1,..,xk})V=\operatorname{span}(\{x_1,..,x_k\}). The coordinate vector of the orthogonal projection of a vector xEx\in E on VV is Xp=A(ATA)1ATX,X_p = A(A^TA)^{-1}A^TX, where AA is the matrix whose columns are the coordinates of the vectors xix_i, and XX is the coordinate vector of xx.

\n
","hasSummary":false,"hasTitle":true,"title":"Orthogonal projection on the span of independent vectors","height":395,"width":980},"classes":"l0","position":{"x":11869.63966158844,"y":8913.769158512221}},{"group":"nodes","data":{"id":"th:CauSch","name":"theorem","text":"
\n

Theorem 5 (Cauchy-Schwarz inequality). Let EE be a vector space endowed with an inner product. For all x,yEx,y\in E, |x,y|xy.|\langle x,y \rangle| \leq \|x\| \|y\|. The inequality is an equality if and only if x=αyx=\alpha y with αK\alpha \in K.

\n
","parent":"subsec:InnProd","rank":"0","html_name":"th:CauSch","summary":"
\n

Theorem 5 (Cauchy-Schwarz inequality). Let EE be a vector space endowed with an inner product. For all x,yEx,y\in E, |x,y|xy.|\langle x,y \rangle| \leq \|x\| \|y\|. The inequality is an equality if and only if x=αyx=\alpha y with αK\alpha \in K.

\n
","hasSummary":false,"hasTitle":true,"title":"Cauchy-Schwartz inequality","height":297,"width":980},"classes":"l0","position":{"x":11674.786509972117,"y":7932.232740648682}},{"group":"nodes","data":{"id":"def:InnProd","name":"definition","text":"
\n

Definition 16 (Inner product). Let EE be a vector space on K=K=\\mathbb{R}. An inner product φ\\varphi is a map E×EKE\\times E\\rightarrow K such that

\n\n

A real vector space of finite dimension with an inner product is called \"Euclidean\".
\nWhen EE is a vector space on K=K=\\mathbb{C}, an inner product φ\\varphi is a map such that

\n\n

The Hermitian symmetry implies that φ(x,x)\varphi(x,x) is real, hence the last condition is meaningful. The linearity in the first argument and the Hermitian symmetry imply that φ(x,y+z)=φ(x,y)+φ(x,z)\varphi(x,y+z)=\varphi(x,y)+\varphi(x,z) and φ(x,αy)=α¯φ(x,y)\varphi(x,\alpha y)=\overline \alpha\varphi(x,y). A complex vector space of finite dimension with an inner product is called \"Hermitian\".
\nThe inner product is often noted x,y\\langle x,y \\rangle.
\nTwo vectors xx and yy are orthogonal when x,y=0\\langle x,y \\rangle = 0.

\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:InnProd","summary":"
\n

Definition 16 (Inner product). Let EE be a vector space on K=K=\\mathbb{R}. An inner product φ\\varphi is a map E×EKE\\times E\\rightarrow K such that

\n\n

A real vector space of finite dimension with an inner product is called \"Euclidean\".
\nWhen EE is a vector space on K=K=\\mathbb{C}, an inner product φ\\varphi is a map such that

\n\n

The Hermitian symmetry implies that φ(x,x)\varphi(x,x) is real, hence the last condition is meaningful. The linearity in the first argument and the Hermitian symmetry imply that φ(x,y+z)=φ(x,y)+φ(x,z)\varphi(x,y+z)=\varphi(x,y)+\varphi(x,z) and φ(x,αy)=α¯φ(x,y)\varphi(x,\alpha y)=\overline \alpha\varphi(x,y). A complex vector space of finite dimension with an inner product is called \"Hermitian\".
\nThe inner product is often noted x,y\\langle x,y \\rangle.
\nTwo vectors xx and yy are orthogonal when x,y=0\\langle x,y \\rangle = 0.

\n
","hasSummary":false,"hasTitle":true,"title":"Inner product","height":198,"width":980},"classes":"l0","position":{"x":13309.174662116091,"y":8564.514983015952}},{"group":"nodes","data":{"id":"def:Norm","name":"definition","text":"
\n

Definition 17 (Norm). An inner product defines a vector norm by x=x,x.\\| x \\|=\\sqrt{\\langle x,x \\rangle}.

\n

The norm of a vector is its distance to 00. The distance between xx and yy is the distance of xyx-y to 00: d(x,y)=xy.d(x,y)=\\|x-y\\|.

\n

Important formula: xy2=x2+y22x,y\\|x-y\\|^2=\\|x\\|^2 + \\|y\\|^2 - 2\\langle x,y \\rangle

\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:Norm","summary":"
\n

Definition 17 (Norm). An inner product defines a vector norm by x=x,x.\\| x \\|=\\sqrt{\\langle x,x \\rangle}.

\n

The norm of a vector is its distance to 00. The distance between xx and yy is the distance of xyx-y to 00: d(x,y)=xy.d(x,y)=\\|x-y\\|.

\n

Important formula: xy2=x2+y22x,y\\|x-y\\|^2=\\|x\\|^2 + \\|y\\|^2 - 2\\langle x,y \\rangle

\n
","hasSummary":false,"hasTitle":true,"title":"Norm","height":198,"width":980},"classes":"l0","position":{"x":12030.956561613142,"y":8376.704379614777}},{"group":"nodes","data":{"id":"def:OrthBas","name":"definition","text":"
\n

Definition 18 (Orthonormal basis). A basis e1,...,ene_1,...,e_n is orthonormal when ij,ei,ej=0\forall i\neq j, \langle e_i,e_j\rangle = 0 and i,ei,ei=1.\forall i, \langle e_i,e_i\rangle = 1.

\n

The ii-th coordinate of the vector xx in the orthonormal basis is xi=x,ei.x_i = \langle x, e_i\rangle. Hence we have x=ix,eieix=\sum_i \langle x,e_i\rangle e_i. To prove it, write x=ixieix=\sum_i x_i e_i and take the inner product with each eje_j.
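The decomposition can be checked numerically on a basis that is orthonormal but not canonical (a sketch in plain Python; the names are ours):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# an orthonormal basis of R^2: the canonical basis rotated by 45 degrees
e1 = [math.sqrt(0.5), math.sqrt(0.5)]
e2 = [-math.sqrt(0.5), math.sqrt(0.5)]

x = [3.0, 4.0]
coords = [dot(x, e1), dot(x, e2)]       # x_i = <x, e_i>
# reconstruction x = sum_i <x, e_i> e_i
recon = [coords[0] * e1[k] + coords[1] * e2[k] for k in range(2)]
assert all(abs(a - b) < 1e-12 for a, b in zip(x, recon))
```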

\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:OrthBas","summary":"
\n

Definition 18 (Orthonormal basis). A basis e1,...,ene_1,...,e_n is orthonormal when ij,ei,ej=0\\forall i\\neq j, \\langle e_i,e_j\\rangle = 0 ei,ei=1.\\langle e_i,e_i\\rangle = 1.

\n

The ii-th coordinates of the vector xx in the orthonormal basis is xi=x,ei.x_i = \\langle x, e_i\\rangle. Hence we have x=ix,eieix=\\sum_i \\langle x,e_i\\rangle e_i. To prove it, write x=ixieix=\\sum_i x_i e_i and take the inner product with each eje_j.

\n
","hasSummary":false,"hasTitle":true,"title":"Orthonormal basis","height":198,"width":980},"classes":"l0","position":{"x":12831.735871276205,"y":7495.149102832393}},{"group":"nodes","data":{"id":"def:OrthFree","name":"definition","text":"
\n

Definition 19 (Orthogonal implies free). If kk nonzero vectors are pairwise orthogonal, they are free (linearly independent). To prove it, assume that α1x1+...+αkxk=0.\alpha_1 x_1 +...+ \alpha_k x_k = 0. Hence α1x1+...+αkxk,x1=0\langle \alpha_1 x_1 +...+ \alpha_k x_k, x_1 \rangle = 0 since by linearity 0,x=0\langle 0,x\rangle = 0. Also, since xi,x1=0\langle x_i,x_1\rangle = 0 for all i>1i>1, we get α1x1+...+αkxk,x1=α1x1,x1=α1x12=0\langle \alpha_1 x_1 +...+ \alpha_k x_k, x_1 \rangle=\langle \alpha_1 x_1 , x_1 \rangle = \alpha_1 \| x_1 \|^2=0, hence α1=0\alpha_1=0. We can repeat for each αi\alpha_i, which shows the result.

\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:OrthFree","summary":"
\n

Definition 19 (Orthogonal implies free). If kk nonzero vectors are pairwise orthogonal, they are free (linearly independent). To prove it, assume that α1x1+...+αkxk=0.\alpha_1 x_1 +...+ \alpha_k x_k = 0. Hence α1x1+...+αkxk,x1=0\langle \alpha_1 x_1 +...+ \alpha_k x_k, x_1 \rangle = 0 since by linearity 0,x=0\langle 0,x\rangle = 0. Also, since xi,x1=0\langle x_i,x_1\rangle = 0 for all i>1i>1, we get α1x1+...+αkxk,x1=α1x1,x1=α1x12=0\langle \alpha_1 x_1 +...+ \alpha_k x_k, x_1 \rangle=\langle \alpha_1 x_1 , x_1 \rangle = \alpha_1 \| x_1 \|^2=0, hence α1=0\alpha_1=0. We can repeat for each αi\alpha_i, which shows the result.

\n
","hasSummary":false,"hasTitle":true,"title":"Orthogonal implies free","height":198,"width":980},"classes":"l0","position":{"x":13917.03077715624,"y":7833.184196598475}},{"group":"nodes","data":{"id":"def:InnMat","name":"definition","text":"
\n

Definition 20 (Inner product matrix). Let e1,...,ene_1,...,e_n be a basis. Let MM be the matrix whose elements are Mi,j=ei,ej.M_{i,j} = \\langle e_i,e_j \\rangle. MM is called the matrix of the inner product. The inner product between two vectors xx and yy is given by x,y=XTMY (or XTMY¯when the vector space is complex),\\langle x,y \\rangle = X^TMY \\text{ (or } X^TM\\overline{Y} \\text{when the vector space is complex)}, where XX and YY are the coordinate vectors of xx and yy. The proof is an exercise.
\nConsider a new basis e1,...,ene'_1,...,e'_n and let PP be the matrix whose columns are the coordinates of the vectors eie'_i in the original basis. Let XX' and YY' be the coordinates of xx and yy in the new basis. Recall that X=PXX=PX'. Hence we have x,y=XTPTMPY (or XTPTMPY¯)\langle x,y \rangle = X'^T P^TMPY' \text{ (or } X'^T P^TM\overline{PY'}) and the matrix of the inner product in the new basis is M=PTMP (or PTMP¯).M' = P^TMP \text{ (or } P^TM\overline{P}). Note the difference with the change of basis of linear maps.
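In the real case, the formula x,y=XTMY\langle x,y \rangle = X^TMY is a double sum over the entries of MM, which can be sketched directly in Python (the helper name and the example matrix are ours):

```python
def inner(X, Y, M):
    # <x, y> = X^T M Y for a real inner-product matrix M
    return sum(X[i] * M[i][j] * Y[j]
               for i in range(len(X)) for j in range(len(Y)))

# hypothetical inner-product matrix: <e1,e1> = 2, <e2,e2> = 1, <e1,e2> = 0
M = [[2, 0], [0, 1]]
assert inner([1, 1], [1, -1], M) == 1   # 2*1*1 + 1*1*(-1)
```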

\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:InnMat","summary":"
\n

Definition 20 (Inner product matrix). Let e1,...,ene_1,...,e_n be a basis. Let MM be the matrix whose elements are Mi,j=ei,ej.M_{i,j} = \\langle e_i,e_j \\rangle. MM is called the matrix of the inner product. The inner product between two vectors xx and yy is given by x,y=XTMY (or XTMY¯when the vector space is complex),\\langle x,y \\rangle = X^TMY \\text{ (or } X^TM\\overline{Y} \\text{when the vector space is complex)}, where XX and YY are the coordinate vectors of xx and yy. The proof is an exercise.
\nConsider a new basis e1,...,ene'_1,...,e'_n and let PP be the matrix whose columns are the coordinates of the vectors eie'_i in the original basis. Let XX' and YY' be the coordinates of xx and yy in the new basis. Recall that X=PXX=PX'. Hence we have x,y=XTPTMPY (or XTPTMPY¯)\langle x,y \rangle = X'^T P^TMPY' \text{ (or } X'^T P^TM\overline{PY'}) and the matrix of the inner product in the new basis is M=PTMP (or PTMP¯).M' = P^TMP \text{ (or } P^TM\overline{P}). Note the difference with the change of basis of linear maps.

\n
","hasSummary":false,"hasTitle":true,"title":"Inner product matrix","height":198,"width":980},"classes":"l0","position":{"x":14654.663734697007,"y":8283.979878399488}},{"group":"nodes","data":{"id":"def:OrthProj","name":"definition","text":"
\n

Definition 21 (Orthogonal projection). Let EE be a vector space with an inner product .,.\langle .,. \rangle and VV be a vector subspace of EE. Let xEx\in E and pVp\in V. The condition p=argminyVxy.p=\text{argmin}_{y\in V} \| x-y \|. is equivalent to yV,xp,y=0.\forall y\in V, \langle x-p,y \rangle=0. The proof is an exercise. When the condition is verified, pp is called the orthogonal projection of xx on VV.

\n
","parent":"subsec:InnProd","rank":"0","html_name":"def:OrthProj","summary":"
\n

Definition 21 (Orthogonal projection). Let EE be a vector space with an inner product .,.\langle .,. \rangle and VV be a vector subspace of EE. Let xEx\in E and pVp\in V. The condition p=argminyVxy.p=\text{argmin}_{y\in V} \| x-y \|. is equivalent to yV,xp,y=0.\forall y\in V, \langle x-p,y \rangle=0. The proof is an exercise. When the condition is verified, pp is called the orthogonal projection of xx on VV.

\n
","hasSummary":false,"hasTitle":true,"title":"Orthogonal projection","height":198,"width":980},"classes":"l0","position":{"x":10762.815282086516,"y":8366.455920824794}},{"group":"nodes","data":{"id":"rem:InnOrthBas","name":"remark","text":"
\n

Remark 3 (Inner product in orthogonal basis). Let EE be a real vector space. In an orthonormal basis the scalar product of xx and yy is given by x,y=xiyi,\\langle x,y \\rangle = \\sum x_i y_i, where the xix_i and yiy_i are the coordinates of xx and yy. When EE is complex, the formula becomes x,y=xiyi¯.\\langle x,y \\rangle = \\sum x_i \\overline{y_i}. It can be shown directly using the linearity properties, or from the matrix expression of inner products.

\n
","parent":"subsec:InnProd","rank":"0","html_name":"rem:InnOrthBas","summary":"
\n

Remark 3 (Inner product in orthogonal basis). Let EE be a real vector space. In an orthonormal basis the scalar product of xx and yy is given by x,y=xiyi,\\langle x,y \\rangle = \\sum x_i y_i, where the xix_i and yiy_i are the coordinates of xx and yy. When EE is complex, the formula becomes x,y=xiyi¯.\\langle x,y \\rangle = \\sum x_i \\overline{y_i}. It can be shown directly using the linearity properties, or from the matrix expression of inner products.

\n
","hasSummary":false,"hasTitle":true,"title":"Inner product in orthogonal basis","height":297,"width":980},"classes":"l0","position":{"x":14510.027075352717,"y":7067.834403688812}},{"group":"nodes","data":{"id":"rem:OrthProjBas","name":"remark","text":"
\n

Remark 4 (Projection in an orthonormal basis). Let e1,..,ene_1,..,e_n be an orthonormal basis. Recall that for all xEx\\in E, x=i=1nx,eiei.x=\\sum_{i=1}^{n} \\langle x,e_i\\rangle e_i. The point y=i=1kx,eiei.y=\\sum_{i=1}^{k} \\langle x,e_i\\rangle e_i. is the orthogonal projection of xx on the subspace generated by e1,..,eke_1,..,e_k.

\n
","parent":"subsec:InnProd","rank":"0","html_name":"rem:OrthProjBas","summary":"
\n

Remark 4 (Projection in an orthonormal basis). Let e1,..,ene_1,..,e_n be an orthonormal basis. Recall that for all xEx\\in E, x=i=1nx,eiei.x=\\sum_{i=1}^{n} \\langle x,e_i\\rangle e_i. The point y=i=1kx,eiei.y=\\sum_{i=1}^{k} \\langle x,e_i\\rangle e_i. is the orthogonal projection of xx on the subspace generated by e1,..,eke_1,..,e_k.

\n
","hasSummary":false,"hasTitle":true,"title":"Projection in an orthonormal basis","height":297,"width":980},"classes":"l0","position":{"x":10782.596613956468,"y":6946.372554210607}},{"group":"nodes","data":{"id":"def:ConvCont","name":"definition","text":"
\n

Definition 23 (Convolution of \mathbb{R}). Let f:f:\mathbb{R}\rightarrow \mathbb{C} and g:g:\mathbb{R}\rightarrow \mathbb{C}, the convolution is defined as x,(f*g)(x)=+f(u)g(xu)du\forall x\in \mathbb{R},\quad (f*g)(x) = \int_{-\infty}^{+\infty} f(u)g(x-u)du

\n
","parent":"subsec:Conv","rank":"0","html_name":"def:ConvCont","summary":"
\n

Definition 23 (Convolution of \\mathbb{R}). x,(f*g)(x)=+f(u)g(xu)du\\forall x\\in \\mathbb{R},\\quad (f*g)(x) = \\int_{-\\infty}^{+\\infty} f(u)g(x-u)du

\n
","hasSummary":true,"hasTitle":true,"title":"Convolution of \\mathbb{R}","height":385,"width":980},"classes":"l0","position":{"x":3656.171782602364,"y":6611.864062671482}},{"group":"nodes","data":{"id":"def:ConvDisc","name":"definition","text":"
\n

Definition 24 (Discrete convolution ()(\\mathbb{Z})). Let (fn)n(f_n)_{n\\in \\mathbb{Z}} and (gn)n(g_n)_{n\\in \\mathbb{Z}} be complex bilateral sequences:
\nx,((fn)*(gn))(x)=u=fugxu\\forall x\\in \\mathbb{Z},\\quad \\left((f_n)*(g_n)\\right)(x) = \\sum_{u=-\\infty}^{\\infty} f_ug_{x-u}
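For sequences with finite support (zero outside a finite range), the sum is finite and the definition can be sketched directly in Python (the helper name is ours; lists are indexed from 0):

```python
def conv(f, g):
    # discrete convolution of two finitely supported sequences,
    # stored as lists indexed from 0 (zero outside)
    out = [0] * (len(f) + len(g) - 1)
    for u, fu in enumerate(f):
        for v, gv in enumerate(g):
            out[u + v] += fu * gv
    return out

assert conv([1, 1], [1, 1]) == [1, 2, 1]
assert conv([1, 2], [3, 4]) == [3, 10, 8]
```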

\n
\n

This definition can be obtained from the continuous case using distributions. Let fs=f.u=δuTs and gs=g.u=δuTs,f_s = f.\\sum_{u=-\\infty}^{\\infty} \\delta_{uT_s} \\quad \\text{ and }\\quad g_s = g.\\sum_{u=-\\infty}^{\\infty} \\delta_{uT_s}, where ff and gg are the functions from the continuous definition and TsT_s is a sampling period. Then

\n

((fn)*(gn))(x)=(f*g)(xTs)\left((f_n)*(g_n)\right)(x) = (f*g)(xT_s)

\n
","parent":"subsec:Conv","rank":"0","html_name":"def:ConvDisc","summary":"
\n

Definition 24 (Discrete convolution ()(\\mathbb{Z})). x,((fn)*(gn))(x)=u=fugxu\\forall x\\in \\mathbb{Z},\\quad \\left((f_n)*(g_n)\\right)(x) = \\sum_{u=-\\infty}^{\\infty} f_ug_{x-u}

\n
","hasSummary":true,"hasTitle":true,"title":"Discrete convolution ()(\\mathbb{Z})","height":366,"width":980},"classes":"l0","position":{"x":3646.2432397097928,"y":7331.0800098771215}},{"group":"nodes","data":{"id":"def:ConvArr","name":"definition","text":"
\n

Definition 25 (Convolution of arrays ({0,...,N1})(\\{0,...,N-1\\})). (multiplication of polynomials):
\nLet ff and gg be two complex arrays of size MM and NN, with first index 00. Their convolution is an array of size M+N1M+N-1, x{0,..,M+N2},(f*g)(x)=u=max(0,xN+1)min(x,M1)f(u)g(xu)\forall x\in \{0,..,M+N-2\},\quad (f*g)(x) = \sum_{u=\max(0,x-N+1)}^{\min(x,M-1)} f(u)g(x-u) Remark: this definition coincides with the multiplication of polynomials.
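The summation bounds and the link with polynomial multiplication can be checked with a short sketch in Python (the helper name is ours):

```python
def array_conv(f, g):
    # (f*g)(x) = sum over valid u of f(u) g(x-u); result has size M+N-1
    M, N = len(f), len(g)
    return [sum(f[u] * g[x - u]
                for u in range(max(0, x - N + 1), min(x, M - 1) + 1))
            for x in range(M + N - 1)]

# coincides with the multiplication of polynomials:
# (1 + 2t)(3 + 4t + 5t^2) = 3 + 10t + 13t^2 + 10t^3
assert array_conv([1, 2], [3, 4, 5]) == [3, 10, 13, 10]
```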
\n

\n
\n

This definition can also be deduced from the discrete convolution by defining ff_\\mathbb{Z} and gg_\\mathbb{Z} to be the sequences extending ff and gg by 00 on all integers:

\n

x{0,..,M+N1},(f*g)(x)=(f*g)(x)\\forall x\\in \\{0,..,M+N-1\\},\\quad (f*g)(x) = (f_\\mathbb{Z}*g_\\mathbb{Z})(x)

\n
","parent":"subsec:Conv","rank":"0","html_name":"def:ConvArr","summary":"
\n

Definition 25 (Convolution of arrays ({0,...,N1})(\{0,...,N-1\})). x{0,..,M+N2},(f*g)(x)=u=max(0,xN+1)min(x,M1)f(u)g(xu)\forall x\in \{0,..,M+N-2\},\quad (f*g)(x) = \sum_{u=\max(0,x-N+1)}^{\min(x,M-1)} f(u)g(x-u)

\n
","hasSummary":true,"hasTitle":true,"title":"Convolution of arrays ({0,...,N1})(\\{0,...,N-1\\})","height":435,"width":1235},"classes":"l0","position":{"x":3613.8255533884385,"y":8154.398100232943}},{"group":"nodes","data":{"id":"def:ConvCirc","name":"definition","text":"
\n

Definition 26 (Circular convolution, or convolution of /\\mathbb{R}/\\mathbb{Z}). Let ff and gg be periodic complex functions f:f:\\mathbb{R}\\rightarrow \\mathbb{C}, g:g:\\mathbb{R}\\rightarrow \\mathbb{C}, of period TT. Their convolution is defined as

\n

x,(f*g)(x)=0Tf(u)g(xu)du.\\forall x\\in \\mathbb{R},\\quad (f*g)(x)= \\int_0^T f(u)g(x-u)du.

\n
\n

It is possible to express the circular convolution using the linear convolution. Let ff and gg be TT periodic functions and pTp_T the indicator function (gate function) of [0,T[[0,T[,

\n

x,(circular)(f*g)(x)=(linear)((f.pT)*g)(x).\forall x\in \mathbb{R},\quad (circular)(f*g)(x)= (linear) \left((f.p_T)*g\right)(x).

\n

This circular convolution can also be expressed for functions f:[0,T[f:[0,T[\\rightarrow \\mathbb{C}, g:[0,T[g:[0,T[\\rightarrow \\mathbb{C} as x[0,T[,(f*g)(x)=0Tf(u)g((xu)modT)du.\\forall x\\in [0,T[,\\quad (f*g)(x)= \\int_0^T f(u)g((x-u) \\mod T)du.

\n
","parent":"subsec:Conv","rank":"0","html_name":"def:ConvCirc","summary":"
\n

Definition 26 (Circular convolution, or convolution of /\\mathbb{R}/\\mathbb{Z}). x,(f*g)(x)=0Tf(u)g(xu)du.\\forall x\\in \\mathbb{R},\\quad (f*g)(x)= \\int_0^T f(u)g(x-u)du.

\n
","hasSummary":true,"hasTitle":true,"title":"Circular convolution, or convolution of /\\mathbb{R}/\\mathbb{Z}","height":431,"width":980},"classes":"l0","position":{"x":4883.247163725782,"y":7043.810457638875}},{"group":"nodes","data":{"id":"def:ConvCircDisc","name":"definition","text":"
\n

Definition 27 (Discrete circular convolution ({0,...,N1})(\\{0,...,N-1\\})). Let ff and gg be two complex arrays of same size NN. Their circular convolution is defined as

\n

x{0,..,N1},(f*g)(x)=u=0N1f(u)g((xu)modN).\forall x\in \{0,..,N-1\},\quad (f*g)(x) = \sum_{u=0}^{N-1} f(u)g((x-u) \mod N).
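The wrapping of the index modulo NN can be sketched directly in Python, whose % operator returns a nonnegative remainder for a positive modulus (the helper name is ours):

```python
def circ_conv(f, g):
    # circular convolution of two arrays of the same size N
    N = len(f)
    return [sum(f[u] * g[(x - u) % N] for u in range(N)) for x in range(N)]

assert circ_conv([1, 2, 3], [1, 0, 0]) == [1, 2, 3]   # delta is the identity
assert circ_conv([1, 2, 3], [0, 1, 0]) == [3, 1, 2]   # circular shift by one
```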

\n
","parent":"subsec:Conv","rank":"0","html_name":"def:ConvCircDisc","summary":"
\n

Definition 27 (Discrete circular convolution ({0,...,N1})(\{0,...,N-1\})). x{0,..,N1},(f*g)(x)=u=0N1f(u)g((xu)modN).\forall x\in \{0,..,N-1\},\quad (f*g)(x) = \sum_{u=0}^{N-1} f(u)g((x-u) \mod N).

\n
","hasSummary":true,"hasTitle":true,"title":"Discrete circular convolution ({0,...,N1})(\\{0,...,N-1\\})","height":422,"width":1114},"classes":"l0","position":{"x":4842.945542025731,"y":7960.554286134175}},{"group":"nodes","data":{"id":"rem:ConvArrCirc","name":"remark","text":"
\n

Remark 6 (Array convolution as a circular convolution). Let ff and gg be two arrays of size MM and NN. The different convolutions differ in the way they treat border effects. However, when the arrays are padded with enough zeros, the circular convolution becomes identical to the array convolution. Note PfPf and PgPg the extensions of ff and gg by zeros such that PfPf and PgPg have size N+M1N+M-1 (the letter PP stands for zero-padded). Then circular convolution(Pf*Pg)=array convolution(f*g).\text{circular convolution}\quad (Pf * Pg) = \text{array convolution}\quad (f * g).
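The zero-padding identity can be checked numerically; with size M+N1M+N-1 the wrap-around of the circular convolution never overlaps the support. A sketch in plain Python (the helper names are ours):

```python
def array_conv(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for u, fu in enumerate(f):
        for v, gv in enumerate(g):
            out[u + v] += fu * gv
    return out

def circ_conv(f, g):
    N = len(f)
    return [sum(f[u] * g[(x - u) % N] for u in range(N)) for x in range(N)]

f, g = [1, 2, 3], [4, 5]
L = len(f) + len(g) - 1          # padded size M + N - 1
Pf, Pg = f + [0] * (L - len(f)), g + [0] * (L - len(g))
assert circ_conv(Pf, Pg) == array_conv(f, g)
```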

\n
","parent":"subsec:Conv","rank":"0","html_name":"rem:ConvArrCirc","summary":"
\n

Remark 6 (Array convolution as a circular convolution). Circular convolution of extended array \\leftrightarrow array convolution

\n
","hasSummary":true,"hasTitle":true,"title":"Array convolution as a circular convolution","height":322,"width":980},"classes":"l0","position":{"x":2287.4595593855984,"y":7685.409631665113}},{"group":"nodes","data":{"id":"def:FT","name":"definition","text":"
\n

Definition 28 (Fourier transform). Remark: there exist different conventions for the normalization coefficients and the argument of the exponential.

\n\n
\n

For a function ff in 1()\mathcal{L}^1(\mathbb{R}), the Fourier transform can be expressed as a limit of Fourier coefficients,

\n

(f)(ν)=limTT.cn(fT),with n=νT,\mathcal{F}(f)(\nu) = \lim_{T\rightarrow \infty} T.c_n \left(f_T\right),\quad \text{with } n=\lfloor \nu T \rfloor , where x\lfloor x \rfloor is the integer part of xx and fTf_T is the function ff restricted to the interval [T2,T2[[-\frac{T}{2},\frac{T}{2}[. Note that since the cnc_n can be defined from the DFT, the Fourier transform can also be defined from the DFT.

\n
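The limit above can be observed numerically. Below is a sketch (assuming NumPy, with the convention that the transform of the Gaussian exp(-pi x^2) is exp(-pi nu^2)); the c_n are approximated by a Riemann sum:

```python
import numpy as np

f = lambda x: np.exp(-np.pi * x ** 2)    # F(f)(nu) = exp(-pi nu^2)

def ft_via_fourier_coeffs(nu, T=40.0, N=2 ** 14):
    # Approximate F(f)(nu) by T * c_n(f_T) with n = floor(nu * T),
    # c_n itself being computed by a Riemann sum on [-T/2, T/2[.
    n = int(np.floor(nu * T))
    x = -T / 2 + np.arange(N) * T / N
    c_n = np.sum(f(x) * np.exp(-2j * np.pi * n * x / T)) / N
    return T * c_n

approx = ft_via_fourier_coeffs(1.0)
exact = np.exp(-np.pi)
```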
","parent":"subsec:F","rank":"0","html_name":"def:FT","summary":"
\n

Definition 28 (Fourier transform).

\n\n
","hasSummary":true,"hasTitle":true,"title":"Fourier transform","height":808,"width":980},"classes":"l0","position":{"x":8575.29693013847,"y":6650.129022364675}},{"group":"nodes","data":{"id":"def:FS","name":"definition","text":"
\n

Definition 29 (Fourier series).

\n\n

The Fourier coefficients can also be defined for non-periodic functions f:[a,a+T[f:[a,a+T[\rightarrow \mathbb{C}.

\n
\n

The Fourier coefficients can be obtained from the Fourier transform in the following way. Let pTp_T be the indicator (gate function) of the interval [0,T[[0,T[,

\n

cn(f)=1T(f.pT)(nT)c_n(f) =\\frac{1}{T} \\mathcal{F}(f.p_T)(\\frac{n}{T})

\n
\n

The Fourier coefficients can also be obtained as a limit of the DFT\operatorname{DFT}. Let

\n

fN(x)=f(xTN),x{0,...,N1}.f_N(x) = f\\left(x\\frac{T}{N}\\right), \\quad x \\in \\{0,...,N-1\\}.

\n

Then cn(f)=1T0Tf(x)e2iπnTxdx=limN1TTNx=0N1f(xTN)e2iπnTxTN=limN1Nx=0N1f(xTN)e2iπnNx=limN1NDFT(fN)(k),with k=nmodN\\begin{aligned}\nc_n(f) &=& \\frac{1}{T} \\int_{0}^{T} f(x)e^{-2i\\pi \\frac{n}{T} x}dx\\\\\n&=&\\lim_{N \\rightarrow \\infty} \\frac{1}{T}\\frac{T}{N} \\sum_{x=0}^{N-1} f\\left(x\\frac{T}{N}\\right)e^{-2i\\pi \\frac{n}{T} x\\frac{T}{N}}\\\\\n&=& \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\sum_{x=0}^{N-1} f\\left(x\\frac{T}{N}\\right)e^{-2i\\pi \\frac{n}{N} x}\\\\\n&=&\\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\operatorname{DFT}\\left( f_N \\right)(k),\\quad \\text{with } k=n \\mod N\\\\\\end{aligned}

\n

The last line is obtained after noticing that e2iπnNx=e2iπn+NNxe^{-2i\\pi \\frac{n}{N} x}=e^{-2i\\pi \\frac{n+N}{N} x} and e2iπnNx=e2iπnmodNNxe^{-2i\\pi \\frac{n}{N} x}=e^{-2i\\pi \\frac{n \\mod N}{N} x}.

\n
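The limit c_n(f) = lim (1/N) DFT(f_N)(n mod N) can be checked on a function whose coefficients are known. A sketch assuming NumPy, with f(x) = cos(2 pi x / T), for which c_1 = c_{-1} = 1/2 and every other coefficient vanishes:

```python
import numpy as np

T = 2.0
f = lambda x: np.cos(2 * np.pi * x / T)   # c_1 = c_{-1} = 1/2, other c_n = 0

N = 1024
samples = f(np.arange(N) * T / N)         # f_N(x) = f(x T / N)
dft = np.fft.fft(samples)

def c(n):
    # c_n(f) ~ (1/N) DFT(f_N)(n mod N)
    return dft[n % N] / N

c_1, c_minus_1, c_2 = c(1), c(-1), c(2)
```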
","parent":"subsec:F","rank":"0","html_name":"def:FS","summary":"
\n

Definition 29 (Fourier series).

\n\n
","hasSummary":true,"hasTitle":true,"title":"Fourier series","height":693,"width":980},"classes":"l0","position":{"x":7408.464074026597,"y":7271.025032550924}},{"group":"nodes","data":{"id":"def:DFT","name":"definition","text":"
\n

Definition 30 (Discrete Fourier Transform (DFT)).

\n\n

Remarks: the normalization coefficient 1N\frac{1}{N} appears here in the inverse DFT. This is a question of convention; it could be put in the direct DFT instead, to be more consistent with the Fourier series formulation.
\n

\n
\n

The DFT can be defined from Fourier series or Fourier transform using distributions. Let 0<Ts0<T_s be an arbitrary sampling period and f̃\tilde f be the distribution

\n

f̃=x=0N1f(x)δxTs.\tilde f=\sum_{x=0}^{N-1}f(x) \delta_{xT_s}. When f̃\tilde f is seen as supported in [0,T[[0,T[ with T=N.TsT=N.T_s, we have the following relation with the Fourier series coefficients:

\n

DFT(f)(k)=N.Ts.ck(f̃).\\operatorname{DFT}(f)(k) =N.T_s. c_k(\\tilde f). Note that the N.TsN.T_s factor appears only due to the difference of convention between the Fourier series and the DFT\\operatorname{DFT}. When f̃\\tilde f is seen as supported on [,[[-\\infty,\\infty[, we can write

\n

DFT(f)(k)=(f̃)(kNTs),\operatorname{DFT}(f)(k) = \mathcal{F}(\tilde f)\left( \frac{k}{N T_s} \right),

\n

where the Fourier transform \\mathcal{F} is generalized to distributions. Hence the frequency kk of the DFT relates to the frequency kNTs\\frac{k}{N T_s} of the Fourier transform.

\n
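The relation between the DFT and the Fourier transform of the distribution can be transcribed directly. A sketch assuming NumPy (the value of Ts is arbitrary, chosen only for the illustration):

```python
import numpy as np

f = np.array([1.0, -2.0, 0.5, 3.0])
N = len(f)
Ts = 0.01                                 # arbitrary sampling period
x = np.arange(N)

def dft(k):
    # DFT(f)(k) = sum_x f(x) e^{-2 i pi k x / N}
    return np.sum(f * np.exp(-2j * np.pi * k * x / N))

def ft_tilde(nu):
    # F(sum_x f(x) delta_{x Ts})(nu) = sum_x f(x) e^{-2 i pi nu x Ts}
    return np.sum(f * np.exp(-2j * np.pi * nu * x * Ts))

lhs = np.array([dft(k) for k in range(N)])
rhs = np.array([ft_tilde(k / (N * Ts)) for k in range(N)])
```

As expected, the Ts factor cancels at the frequencies k/(N Ts), so the result does not depend on the sampling period.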
","parent":"subsec:F","rank":"0","html_name":"def:DFT","summary":"
\n

Definition 30 (Discrete Fourier Transform (DFT)).

\n\n
","hasSummary":true,"hasTitle":true,"title":"Discrete Fourier Transform (DFT)","height":781,"width":1278},"classes":"l0","position":{"x":8486.033031064639,"y":8179.06956864164}},{"group":"nodes","data":{"id":"rem:GenRemFour","name":"remark","text":"
\n

Remark 7 (General remarks).

\n\n
","parent":"subsec:F","rank":"0","html_name":"rem:GenRemFour","summary":"
\n

Remark 7 (General remarks).

\n\n
","hasSummary":true,"hasTitle":true,"title":"General remarks","height":492,"width":980},"classes":"l0","position":{"x":8593.468807795278,"y":8930.27275164237}},{"group":"nodes","data":{"id":"th:prodTF","name":"theorem","text":"
\n

Theorem 6 (Fourier Transform). Let f:f:\mathbb{R}\rightarrow \mathbb{C} and g:g:\mathbb{R}\rightarrow \mathbb{C}. We have: (f*g)=(f).(g)\mathcal{F}(f*g) = \mathcal{F}(f).\mathcal{F}(g) (f.g)=(f)*(g)\mathcal{F}(f.g) = \mathcal{F}(f)*\mathcal{F}(g) 1(f*g)=1(f).1(g)\mathcal{F}^{-1}(f*g) = \mathcal{F}^{-1}(f).\mathcal{F}^{-1}(g) 1(f.g)=1(f)*1(g)\mathcal{F}^{-1}(f.g) = \mathcal{F}^{-1}(f)*\mathcal{F}^{-1}(g) where the convolution ** is defined as the convolution of \mathbb{R}.

\n
","parent":"subsec:CT","rank":"0","html_name":"th:prodTF","summary":"
\n

Theorem 6 (Fourier Transform). Fourier transform \\leftrightarrow convolution on \\mathbb{R}

\n
","hasSummary":true,"hasTitle":true,"title":"Fourier Transform","height":276,"width":980},"classes":"l0","position":{"x":6261.952606954309,"y":6691.136811077134}},{"group":"nodes","data":{"id":"th:prodS","name":"theorem","text":"
\n

Theorem 7 (Fourier series). Let f:f:\mathbb{R}\rightarrow \mathbb{C} and g:g:\mathbb{R}\rightarrow \mathbb{C} be TT periodic functions. Then we have, (cn(f*g))=T.(cn(f)).(cn(g))(c_n(f*g)) = T. (c_n(f)).(c_n(g)) (cn(f.g))=(cn(f))*(cn(g)).(c_n(f.g)) = (c_n(f))*(c_n(g)). where f*gf*g is a circular convolution and (cn(f))*(cn(g))(c_n(f))*(c_n(g)) is a discrete convolution. Denote by 1((cn))\mathcal{F}^{-1}((c_n)) the inverse Fourier series of (cn)(c_n), 1((un)*(vn))=1((un)).1((vn))\mathcal{F}^{-1}((u_n)*(v_n))=\mathcal{F}^{-1}((u_n)).\mathcal{F}^{-1}((v_n)) 1((un).(vn))=1T1((un))*1((vn))\mathcal{F}^{-1}((u_n).(v_n))=\frac{1}{T}\mathcal{F}^{-1}((u_n))*\mathcal{F}^{-1}((v_n)) where 1((un))*1((vn))\mathcal{F}^{-1}((u_n))*\mathcal{F}^{-1}((v_n)) is a circular convolution and (un)*(vn)(u_n)*(v_n) is a discrete convolution.

\n
","parent":"subsec:CT","rank":"0","html_name":"th:prodS","summary":"
\n

Theorem 7 (Fourier series). Fourier series \\leftrightarrow circular (/\\mathbb{R}/\\mathbb{Z}) / discrete convolution

\n
","hasSummary":true,"hasTitle":true,"title":"Fourier series","height":279,"width":980},"classes":"l0","position":{"x":6258.240960505095,"y":7355.199286445368}},{"group":"nodes","data":{"id":"th:prodDFT","name":"theorem","text":"
\n

Theorem 8 (DFT). Let AfAf and AgAg be complex arrays of size NN. Then we have DFT(Af*Ag)=DFT(Af).DFT(Ag)\\operatorname{DFT}(Af*Ag) = \\operatorname{DFT}(Af).\\operatorname{DFT}(Ag) DFT(Af.Ag)=1N.DFT(Af)*DFT(Ag)\\operatorname{DFT}(Af.Ag) = \\frac{1}{N}.\\operatorname{DFT}(Af)*\\operatorname{DFT}(Ag) DFT1(Af*Ag)=N.DFT1(Af).DFT1(Ag)\\operatorname{DFT}^{-1}(Af*Ag) = N. \\operatorname{DFT}^{-1}(Af).\\operatorname{DFT}^{-1}(Ag) DFT1(Af.Ag)=DFT1(Af)*DFT1(Ag)\\operatorname{DFT}^{-1}(Af.Ag) = \\operatorname{DFT}^{-1}(Af)*\\operatorname{DFT}^{-1}(Ag) where the convolution ** is the discrete circular convolution.

\n
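The four identities can be verified numerically on random arrays. A sketch assuming NumPy, whose fft/ifft pair follows the same normalization convention as the DFT here (the 1/N factor in the inverse):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
Af = rng.standard_normal(N) + 1j * rng.standard_normal(N)
Ag = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def circ(u, v):
    # discrete circular convolution
    return np.array([sum(u[m] * v[(x - m) % N] for m in range(N))
                     for x in range(N)])

F, Finv = np.fft.fft, np.fft.ifft
ok1 = np.allclose(F(circ(Af, Ag)), F(Af) * F(Ag))
ok2 = np.allclose(F(Af * Ag), circ(F(Af), F(Ag)) / N)
ok3 = np.allclose(Finv(circ(Af, Ag)), N * Finv(Af) * Finv(Ag))
ok4 = np.allclose(Finv(Af * Ag), circ(Finv(Af), Finv(Ag)))
```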
","parent":"subsec:CT","rank":"0","html_name":"th:prodDFT","summary":"
\n

Theorem 8 (DFT). Discrete Fourier transform \\leftrightarrow circular (discrete) convolution

\n
","hasSummary":true,"hasTitle":true,"title":"DFT","height":276,"width":980},"classes":"l0","position":{"x":6267.830374915837,"y":7848.467819309728}},{"group":"nodes","data":{"id":"rem:GenRemProd","name":"remark","text":"
\n

Remark 8 (General remarks). The results expressed in this section are true when the quantities exist, and when the order of integrals and sums can be changed. Hence they must be checked case by case.

\n
","parent":"subsec:CT","rank":"0","html_name":"rem:GenRemProd","summary":"
\n

Remark 8 (General remarks). Results must be checked case by case

\n
","hasSummary":true,"hasTitle":true,"title":"General remarks","height":273,"width":980},"classes":"l0","position":{"x":6267.391601097236,"y":6287.057872604234}},{"group":"nodes","data":{"id":"th:TranFunDer","name":"theorem","text":"
\n

Theorem 12 (Transfer function of the derivative). Consider a function f(x)=ezxf(x)=e^{zx} defined on \\mathbb{R}, or /T\\mathbb{R}/T\\mathbb{Z}. The derivative ff' exists for all xx and f(x)=zezx=z.f(x)f'(x)=ze^{zx}=z.f(x). Hence, the transfer function of DD is simply H(z)=z.H(z) = z.

\n

It can be checked that the function H(z)H(z) does not admit an inverse Fourier transform (or inverse Fourier series), in the sense of functions. Hence we cannot write D(f)=1(2iπν(f)(ν))=1(ν2iπν)*f,D(f)=\mathcal{F}^{-1}(2i\pi \nu \mathcal{F}(f)(\nu)) = \mathcal{F}^{-1}(\nu\mapsto 2i\pi \nu)*f, at least in the sense of functions.

\n
","parent":"subsec:DiffOp","rank":"0","html_name":"th:TranFunDer","summary":"
\n

Theorem 12 (Transfer function of the derivative). The transfer function of DD is H(z)=iz.H(z) = iz.

\n
","hasSummary":true,"hasTitle":true,"title":"Transfer function of the derivative","height":332,"width":980},"classes":"l0","position":{"x":7387.452722020224,"y":2315.164086156882}},{"group":"nodes","data":{"id":"def:Der","name":"definition","text":"
\n

Definition 33 (Derivative). We say that a function f:]a,b[f:]a,b[\rightarrow \mathbb{R} is differentiable at xx if limh0f(x+h)f(x)h\lim_{h\rightarrow 0} \frac{f(x+h)-f(x)}{h} exists. We then write f(x)=limh0f(x+h)f(x)h,f'(x)=\lim_{h\rightarrow 0} \frac{f(x+h)-f(x)}{h}, and f(x)f'(x) is called the derivative of ff at point xx.
\nEquivalently, we can say that ff is differentiable at xx when there exist α\alpha\in \mathbb{R} and a function εx\varepsilon_x such that f(x+h)=f(x)+α.h+εx(h)hf(x+h) = f(x) + \alpha.h + \varepsilon_x(h)h where εx\varepsilon_x is such that εx(h0)=0\varepsilon_x(h\rightarrow 0)=0. We then have f(x)=αf'(x)=\alpha. It can be checked that the derivation D:ffD:f\mapsto f' is a linear operator, invariant under translations. The derivative is often denoted dfdx(x)=f(x)\frac{\mathrm{d}f}{\mathrm{d}x}(x)=f'(x).

\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:Der","summary":"
\n

Definition 33 (Derivative). f(x)=dfdx(x)=limh0f(x+h)f(x)hf'(x)=\\frac{\\mathrm{d}f}{\\mathrm{d}x}(x)= \\lim_{h\\rightarrow 0} \\frac{f(x+h)-f(x)}{h}

\n
","hasSummary":true,"hasTitle":true,"title":"Derivative","height":332,"width":980},"classes":"l0","position":{"x":8735.929814648183,"y":2176.035907257376}},{"group":"nodes","data":{"id":"def:DirDer","name":"definition","text":"
\n

Definition 34 (Directional derivative (n)( \mathbb{R}^n \rightarrow \mathbb{R})). Let f:]a,b[nf:]a,b [^n\rightarrow \mathbb{R}. The directional derivative of ff at x]a,b[nx\in ]a,b [^n in the direction unu\in \mathbb{R}^n is defined when it exists by limh0f(x+hu)f(x)h.\lim_{h\rightarrow 0} \frac{f(x+hu)-f(x)}{h}. We then write fu(x)=limh0f(x+hu)f(x)h.\frac{\partial f}{\partial u}(x) = \lim_{h\rightarrow 0} \frac{f(x+hu)-f(x)}{h}. fu(x)\frac{\partial f}{\partial u}(x) is also called a ’partial’ derivative. For a function f:nf:\mathbb{R}^n \rightarrow \mathbb{R}, denote by xix_i the ii-th coordinate of vectors of n\mathbb{R}^n. The directional derivative in the direction of the ii-th basis vector is denoted fxi(x).\frac{\partial f}{\partial x_i}(x).

\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:DirDer","summary":"
\n

Definition 34 (Directional derivative (n)( \\mathbb{R}^n \\rightarrow \\mathbb{R})). fu(x)=limh0f(x+hu)f(x)h.\\frac{\\partial f}{\\partial u}(x) = \\lim_{h\\rightarrow 0} \\frac{f(x+hu)-f(x)}{h}.

\n
","hasSummary":true,"hasTitle":true,"title":"Directional derivative (n)( \\mathbb{R}^n \\rightarrow \\mathbb{R})","height":343,"width":980},"classes":"l0","position":{"x":9163.85066889793,"y":1299.503587539544}},{"group":"nodes","data":{"id":"def:Differ1","name":"definition","text":"
\n

Definition 35 (Differentials (n)( \mathbb{R}^n \rightarrow \mathbb{R} )). Let f:]a,b[nf:]a,b [^n\rightarrow \mathbb{R}. We say that ff is differentiable at xx if there exist a linear map dfx:n\mathrm{d}f_x: \mathbb{R}^n \rightarrow \mathbb{R} and a function εx\varepsilon_x such that f(x+u)=f(x)+dfx(u)+εx(u)uf(x+u) = f(x) + \mathrm{d}f_x(u) + \varepsilon_x(u)\|u\| where εx\varepsilon_x is such that εx(u0)=0\varepsilon_x(u\rightarrow 0)=0. dfx\mathrm{d}f_x is called the differential of ff at xx.
\nIt can be checked that dfx(u)\mathrm{d}f_x(u) is the directional derivative in the direction uu. Hence the matrix of dfx\mathrm{d}f_x in the canonical basis of n\mathbb{R}^n is given by dfx:(fe1(x)...fen(x))\mathrm{d}f_x:\n\begin{pmatrix}\n\frac{\partial f}{\partial e_1}(x)&...&\frac{\partial f}{\partial e_n}(x)\n\end{pmatrix}

\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:Differ1","summary":"
\n

Definition 35 (Differentials (n)( \\mathbb{R}^n \\rightarrow \\mathbb{R} )). dfx:(fe1(x)...fen(x))\\mathrm{d}f_x:\n\\begin{pmatrix}\n\\frac{\\partial f}{\\partial e_1}(x)&...&\\frac{\\partial f}{\\partial e_n}(x)\n\\end{pmatrix}

\n
","hasSummary":true,"hasTitle":true,"title":"Differentials (n)( \\mathbb{R}^n \\rightarrow \\mathbb{R} )","height":332,"width":980},"classes":"l0","position":{"x":8568.119567542746,"y":755.3430877744534}},{"group":"nodes","data":{"id":"def:DifferM","name":"definition","text":"
\n

Definition 36 (Differentials (nm)( \mathbb{R}^n \rightarrow \mathbb{R}^m )). The definition of the differential in the case f:]a,b[f:]a,b [\rightarrow \mathbb{R} can be directly generalized to the case f:]a,b[nmf:]a,b [^n\rightarrow \mathbb{R}^m. We say that f:]a,b[nmf:]a,b [^n\rightarrow \mathbb{R}^m is differentiable at xx if there exist a linear map dfx:nm\mathrm{d}f_x: \mathbb{R}^n \rightarrow \mathbb{R}^m and a function εx\varepsilon_x such that f(x+u)=f(x)+dfx(u)+εx(u)uf(x+u) = f(x) + \mathrm{d}f_x(u) + \varepsilon_x(u)\|u\| where εx\varepsilon_x is such that εx(u0)=0\varepsilon_x(u\rightarrow 0)=0. dfx\mathrm{d}f_x is called the differential of ff at xx.
\nAssume that ff is differentiable. Note fi(x)f_i(x) the ii-th coordinate of f(x)f(x) in the canonical basis of m\mathbb{R}^m. fif_i is a function ]a,b[n]a,b [^n\rightarrow \mathbb{R}. Let (ei)(e_i) be the canonical basis of n\mathbb{R}^n. The matrix of dfx\mathrm{d}f_x in the canonical bases of n\mathbb{R}^n and m\mathbb{R}^m is dfx:(f1e1(x)...f1en(x)...fme1(x)...fmen(x))\mathrm{d}f_x:\n\begin{pmatrix}\n\frac{\partial f_1}{\partial e_1}(x)&...&\frac{\partial f_1}{\partial e_n}(x)\\\n.\\\n.\\\n.\\\n\frac{\partial f_m}{\partial e_1}(x)&...&\frac{\partial f_m}{\partial e_n}(x)\\\n\end{pmatrix}

\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:DifferM","summary":"
\n

Definition 36 (Differentials (nm)( \\mathbb{R}^n \\rightarrow \\mathbb{R}^m )). f(x+u)=f(x)+dfx(u)+εx(u)uf(x+u) = f(x) + \\mathrm{d}f_x(u) + \\varepsilon_x(u)u

\n
","hasSummary":true,"hasTitle":true,"title":"Differentials (nm)( \\mathbb{R}^n \\rightarrow \\mathbb{R}^m )","height":298,"width":980},"classes":"l0","position":{"x":8558.562142572166,"y":194.07915788554078}},{"group":"nodes","data":{"id":"def:GradCon","name":"definition","text":"
\n

Definition 37 (Gradient (!add Riesz to inner prod + cauchy schwartz!)). Let .,.\\langle.,.\\rangle be the canonical inner product of n\\mathbb{R}^n. If f:]a,b[nf:]a,b [^n\\rightarrow \\mathbb{R} is differentiable at xx, dfx\\mathrm{d}f_x is a linear map from n\\mathbb{R}^n to \\mathbb{R}. Hence there is a vector, noted fx\\nabla f_x such that un,dfx(u)=fx,u.\\forall u\\in \\mathbb{R}^n,\\quad \\mathrm{d}f_x(u) = \\langle \\nabla f_x,u\\rangle. fx\\nabla f_x is called the ’gradient’ of ff at xx. In the canonical basis the gradient vector is fx=(fe1(x)...fen(x)).\\nabla f_x=\n\\begin{pmatrix}\n\\frac{\\partial f}{\\partial e_1}(x)\\\\\n.\\\\\n.\\\\\n.\\\\\n\\frac{\\partial f}{\\partial e_n}(x)\n\\end{pmatrix}. Let uu be a vector of coordinates (ui)(u_i). We have that dfx(u)=(fe1(x)...fen(x)).(u1...un)=(fx)T.(u1...un)\\mathrm{d}f_x(u) = \\begin{pmatrix}\n\\frac{\\partial f}{\\partial e_1}(x)&...&\\frac{\\partial f}{\\partial e_n}(x)\n\\end{pmatrix}.\n\\begin{pmatrix}\nu_1\\\\\n.\\\\\n.\\\\\n.\\\\\nu_n\\\\\n\\end{pmatrix}\n=\n(\\nabla f_x)^T.\n\\begin{pmatrix}\nu_1\\\\\n.\\\\\n.\\\\\n.\\\\\nu_n\\\\\n\\end{pmatrix}

\n

The gradient is the direction of steepest variation of the function. This can be shown as follows. The variation of the function between f(x)f(x) and f(x+u)f(x+u) is approximated by dfx(u)=fx,u\mathrm{d}f_x(u) = \langle \nabla f_x,u\rangle. The Cauchy-Schwarz inequality implies that the vectors unu\in \mathbb{R}^n of unit norm (u=1\|u\|=1) which maximize fx,u2\langle \nabla f_x,u\rangle^2 are those collinear with fx\nabla f_x, that is to say u=fx/fxu= \nabla f_x/\|\nabla f_x\| or u=fx/fxu= -\nabla f_x/\|\nabla f_x\|.

\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:GradCon","summary":"
\n

Definition 37 (Gradient (!add Riesz to inner prod + cauchy schwartz!)). fx=(fe1(x)...fen(x)).\\nabla f_x=\n\\begin{pmatrix}\n\\frac{\\partial f}{\\partial e_1}(x)\\\\\n.\\\\\n.\\\\\n.\\\\\n\\frac{\\partial f}{\\partial e_n}(x)\n\\end{pmatrix}.

\n
","hasSummary":true,"hasTitle":true,"title":"Gradient (!add Riesz to inner prod + cauchy schwartz!)","height":658,"width":980},"classes":"l0","position":{"x":7425.253545827187,"y":729.7808136044371}},{"group":"nodes","data":{"id":"def:nthDer","name":"definition","text":"
\n

Definition 38 (n-th order derivative). Composing the derivation operator nn times gives the ’nn-th order derivative’ noted f(n)(x)=dnfdxn(x)=ddx(...dfdx)(x)f^{(n)}(x)=\\frac{\\mathrm{d}^n f}{\\mathrm{d} x^n}(x)=\\frac{\\mathrm{d} }{\\mathrm{d} x}\\left(...\\frac{\\mathrm{d}f }{\\mathrm{d} x}\\right)(x) A function for which the nn-th order derivative exists is called ’differentiable at the order nn’.

\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:nthDer","summary":"
\n

Definition 38 (n-th order derivative). Composing the derivation operator nn times gives the ’n-th order derivative’: f(n)(x)=dnfdxn(x)=ddx(...dfdx)(x)f^{(n)}(x)=\frac{\mathrm{d}^n f}{\mathrm{d} x^n}(x)=\frac{\mathrm{d} }{\mathrm{d} x}\left(...\frac{\mathrm{d}f }{\mathrm{d} x}\right)(x)

\n
","hasSummary":true,"hasTitle":true,"title":"n-th order derivative","height":440,"width":980},"classes":"l0","position":{"x":10366.270358909627,"y":1364.0686261068784}},{"group":"nodes","data":{"id":"def:Lap","name":"definition","text":"
\n

Definition 39 (Laplacian). Let ff be a function f:nf:\mathbb{R}^n\rightarrow \mathbb{R}, twice differentiable. The Laplacian operator Δ\Delta is defined as Δf(x)=i=1n2fxi2(x).\Delta f (x) = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x^2_i}(x). In particular, in dimensions 11 and 22 we get Δf(x)=f(2)(x) and Δf(x)=2fx12(x)+2fx22(x).\Delta f (x) = f^{(2)}(x) \text{ and } \Delta f (x) = \frac{\partial^2 f}{\partial x^2_1}(x) + \frac{\partial^2 f}{\partial x^2_2}(x).

\n
","parent":"subsec:DiffOp","rank":"0","html_name":"def:Lap","summary":"
\n

Definition 39 (Laplacian). Δf(x)=i=1n2fxi2(x).\Delta f (x) = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x^2_i}(x). In dimensions 11 and 22 we get Δf(x)=f(2)(x) and Δf(x)=2fx12(x)+2fx22(x).\Delta f (x) = f^{(2)}(x) \text{ and } \Delta f (x) = \frac{\partial^2 f}{\partial x^2_1}(x) + \frac{\partial^2 f}{\partial x^2_2}(x).

\n
","hasSummary":true,"hasTitle":true,"title":"Laplacian","height":530,"width":980},"classes":"l0","position":{"x":9801.78921427882,"y":-302.15845593828703}},{"group":"nodes","data":{"id":"th:TranFunDisDer","name":"theorem","text":"
\n

Theorem 13 (Transfer function of discrete derivatives). In the following, we define the derivative as f(x)=f(x+1)f(x)f'(x)=f(x+1)-f(x). The discussion can be adapted to the other definitions. Consider the function f(k)=ezk,f(k)=e^{zk}, and denote by GG its discrete derivative. We have that G(k)=H(z).f(k)G(k)=H(z).f(k), hence G(0)=H(z).f(0)G(0)=H(z).f(0). And since f(0)=1f(0)=1, G(0)=H(z).G(0)=H(z).

\n

From the definition of the discrete derivative, G(0)=f(0+1)f(0)=ez.1ez.0=ez1G(0)=f(0+1)-f(0)=e^{z. 1}-e^{z. 0}=e^z-1. Hence H(z)=ez1.\begin{aligned}\n H(z)&=&e^z-1.\\\end{aligned}

\n

Note that the transfer function of the continuous derivation, Hcont(z)=z,H_{cont}(z)=z, seems very different. However, consider the case where the samples of the discrete function are separated by ϵ\epsilon instead of 11. The discrete derivation becomes G(kϵ)=f(kϵ+ϵ)f(kϵ)ϵ,G(k\epsilon)=\frac{f(k\epsilon+\epsilon)-f(k\epsilon)}{\epsilon}, and the transfer function Hϵ(z)=eϵ.z1ϵ.H_{\epsilon}(z)=\frac{e^{\epsilon.z}-1}{\epsilon}. We then have Hϵ(z)z=Hcont(z)H_{\epsilon}(z)\rightarrow z= H_{cont}(z) when ϵ0\epsilon \rightarrow 0. Hence for all zz, the transfer function of the discrete case gets closer and closer to the one of the continuous case when ϵ0\epsilon \rightarrow 0. It is interesting to note that for a fixed ϵ\epsilon, the values of Hϵ(z)H_{\epsilon}(z) and Hcont(z)H_{cont}(z) get closer and closer when z0z\rightarrow 0, and further and further apart when zz\rightarrow \infty. This means that discrete and continuous derivations tend to agree for functions ff containing only low frequencies and give very different results for functions containing high frequencies. This is not a surprising behavior.

\n
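Both claims can be checked numerically. A sketch assuming NumPy (the value of zz is arbitrary, chosen only for the illustration):

```python
import numpy as np

z = 0.3 + 0.7j
f = lambda k: np.exp(z * k)

# The discrete derivative G(k) = f(k+1) - f(k) applied to the
# eigenfunction e^{zk} multiplies it by H(z) = e^z - 1:
k = np.arange(10)
G = f(k + 1) - f(k)
H = np.exp(z) - 1

# With sample spacing eps, H_eps(z) = (e^{eps z} - 1) / eps -> z as eps -> 0:
H_eps = lambda eps: (np.exp(eps * z) - 1) / eps
gap = abs(H_eps(1e-6) - z)
```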
","parent":"subsec:DisDifOp","rank":"0","html_name":"th:TranFunDisDer","summary":"
\n

Theorem 13 (Transfer function of discrete derivatives). The transfer function of the discrete derivative f(x)=f(x+1)f(x)f'(x)=f(x+1)-f(x) is H(z)=ez1\\begin{aligned}\n H(z)&=&e^z-1\\\\\\end{aligned}

\n
","hasSummary":true,"hasTitle":true,"title":"Transfer function of discrete derivatives","height":386,"width":980},"classes":"l0","position":{"x":6104.503000925931,"y":2361.81373466607}},{"group":"nodes","data":{"id":"def:DisDer","name":"definition","text":"
\n

Definition 40 (Discrete derivative). The discrete derivative is the analog of the continuous derivation but for functions defined on \mathbb{Z}, or {0,...,N1}\{0,...,N-1\}. There are several ways to define it. Let f:f:\mathbb{Z}\rightarrow \mathbb{R}, the 33 most common definitions are f(x)=f(x+1)f(x)f'(x)=f(x+1)-f(x) f(x)=f(x)f(x1)f'(x)=f(x)-f(x-1) f(x)=f(x+1)f(x1)2.f'(x)= \frac{f(x+1)-f(x-1)}{2}.

\n
","parent":"subsec:DisDifOp","rank":"0","html_name":"def:DisDer","summary":"
\n

Definition 40 (Discrete derivative). Let f:f:\\mathbb{Z}\\rightarrow \\mathbb{R}. A possible definition of the derivative is f(x)=f(x+1)f(x)f'(x)=f(x+1)-f(x)

\n
","hasSummary":true,"hasTitle":true,"title":"Discrete derivative","height":332,"width":980},"classes":"l0","position":{"x":6153.406129183674,"y":1843.942960056564}},{"group":"nodes","data":{"id":"def:GradDis","name":"definition","text":"
\n

Definition 41 (Discrete gradient). Remark: as there are many ways to define the discrete derivative, there are many different ways to define the gradient. They can all be seen as approximations of the continuous definition. We assume here that the discrete derivative is defined as f(x)=f(x+1)f(x)f'(x)=f(x+1)-f(x). The discrete gradient is defined in the same way as the continuous gradient: it is a vector that contains the directional derivatives in each coordinates.

\n
    \n
  1. one dimension
\nFor a function f:f:\mathbb{Z}\rightarrow \mathbb{R}, the discrete gradient G:G:\mathbb{Z}\rightarrow \mathbb{R} is the same as the discrete derivative G(k)=f(k+1)f(k).G(k)=f(k+1)-f(k). When the function is defined on a finite set, f:{1,..,N}f:\{1,..,N\}\rightarrow \mathbb{R}, this gradient function is only defined without ambiguity on {1,..,N1}\{1,..,N-1\}. If we want to define the gradient on {1,..,N}\{1,..,N\}, the value in NN depends on a border convention. The same remark holds for higher dimensions.

  2. \n
  3. two dimensions
\nFor a function f:2f:\mathbb{Z}^2\rightarrow \mathbb{R}, the gradient is a function G:22G: \mathbb{Z}^2\rightarrow \mathbb{R}^2 which associates a vector of size 22 to each point of the domain of ff: G(i,j)=(G1(i,j)G2(i,j))=(f(i+1,j)f(i,j)f(i,j+1)f(i,j)).G(i,j)\n=\n\begin{pmatrix}\nG_1(i,j)\\\nG_2(i,j)\\\n\end{pmatrix}\n=\n\begin{pmatrix}\nf(i+1,j)-f(i,j)\\\nf(i,j+1)-f(i,j)\\\n\end{pmatrix}.

  4. \n
  5. arbitrary dimensions
    \nFor a function f:nf:\\mathbb{Z}^n\\rightarrow \\mathbb{R}, the gradient becomes G(k1,...,kn)=(G1(k1,...,kn)...Gn(k1,...,kn))=(f(k1+1,...,kn)f(k1,..,kn)...f(k1,...,kn+1)f(k1,..,kn)).G(k_1,...,k_n)\n=\n\\begin{pmatrix}\nG_1(k_1,...,k_n)\\\\\n.\\\\\n.\\\\\n.\\\\\nG_n(k_1,...,k_n)\\\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\nf(k_1+1,...,k_n)-f(k_1,..,k_n)\\\\\n.\\\\\n.\\\\\n.\\\\\nf(k_1,...,k_n+1)-f(k_1,..,k_n)\\\\\n\\end{pmatrix}.

  6. \n
\n
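A sketch of the two-dimensional case (assuming NumPy); following the remark on borders above, the gradient is only computed where no border convention is needed:

```python
import numpy as np

def discrete_gradient(f):
    # Forward differences, restricted to the (rows-1) x (cols-1) grid
    # on which the gradient is defined without a border convention.
    G1 = f[1:, :-1] - f[:-1, :-1]     # f(i+1, j) - f(i, j)
    G2 = f[:-1, 1:] - f[:-1, :-1]     # f(i, j+1) - f(i, j)
    return G1, G2

f = np.array([[0.0, 1.0, 4.0],
              [2.0, 3.0, 8.0],
              [6.0, 9.0, 16.0]])
G1, G2 = discrete_gradient(f)
```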
","parent":"subsec:DisDifOp","rank":"0","html_name":"def:GradDis","summary":"
\n

Definition 41 (Discrete gradient). G(i,j)=(G1(i,j)G2(i,j))=(f(i+1,j)f(i,j)f(i,j+1)f(i,j)).G(i,j)\n=\n\\begin{pmatrix}\nG_1(i,j)\\\\\nG_2(i,j)\\\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\nf(i+1,j)-f(i,j)\\\\\nf(i,j+1)-f(i,j)\\\\\n\\end{pmatrix}.

\n
","hasSummary":true,"hasTitle":true,"title":"Discrete gradient","height":374,"width":980},"classes":"l0","position":{"x":6150.334606657032,"y":716.1554667660994}},{"group":"nodes","data":{"id":"def:GradMask","name":"definition","text":"
\n

Definition 42 (Derivative masks). We address the case of functions defined on \\mathbb{Z}. For functions defined on {0,..,N1}\\{0,..,N-1\\}, see the remark Remark 10 from the section on translation invariant operators.
\nLet f:f:\\mathbb{Z}\\rightarrow \\mathbb{R}, and Mask:Mask:\\mathbb{Z}\\rightarrow \\mathbb{R} with Mask(0)=1Mask(0)=-1, Mask(1)=1Mask(-1)=1 and Mask(k)=0Mask(k)=0 everywhere else:

\n

Mask=(..01100...)Mask = \\begin{pmatrix} ..&0&1&-1&0&0&... \\end{pmatrix}

\n

It is possible to check that the discrete derivative of f:f:\\mathbb{Z}\\rightarrow \\mathbb{R}, f(x)=f(x+1)f(x)f'(x)=f(x+1)-f(x) can be rewritten as

\n

G=f*Mask,G = f*Mask,

\n

where ** is the discrete convolution. The mask should be adapted to each definition of ff'. When ff is a function n\mathbb{Z}^n\rightarrow \mathbb{R} the directional derivatives can also be obtained using convolution masks. In the case f:2f:\mathbb{Z}^2\rightarrow \mathbb{R}, the horizontal directional derivative is given by the convolution mask Maskx=(....00000.....01100.....00000.....)Mask_x = \begin{pmatrix}\n &&&.&&&\\\n &&&.&&&\\\n ..&0&0&0&0&0&...\\\n ..&0&1&-1&0&0&...\\\n ..&0&0&0&0&0&...\\\n &&&.&&&\\\n &&&.&&&\\\n \end{pmatrix}

\n
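A one-dimensional check of G = f*Mask, as a sketch assuming NumPy. Note that np.convolve indexes its kernel from 0, so the entries Mask(-1)=1, Mask(0)=-1 are written as the array [1, -1] and the output comes back shifted by one sample:

```python
import numpy as np

f = np.array([0.0, 1.0, 4.0, 9.0, 16.0])

# Direct forward difference, defined on {0, ..., N-2}.
direct = f[1:] - f[:-1]

mask = np.array([1.0, -1.0])          # Mask(-1) = 1, Mask(0) = -1
full = np.convolve(f, mask)           # size N + 1, with border effects
via_mask = full[1:-1]                 # interior values, shifted back by one
```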
","parent":"subsec:DisDifOp","rank":"0","html_name":"def:GradMask","summary":"
\n

Definition 42 (Derivative masks). Let f:f:\mathbb{Z}\rightarrow \mathbb{R} Mask=(..01100...)Mask = \begin{pmatrix} ..&0&1&-1&0&0&... \end{pmatrix} f=f*Mask.f' = f*Mask. The construction can be adapted to the case f:{0,...,N1}f:\{0,...,N-1\}\rightarrow \mathbb{R} and to the multi-dimensional case.

\n
","hasSummary":true,"hasTitle":true,"title":"Derivative masks","height":488,"width":980},"classes":"l0","position":{"x":4736.114697313205,"y":1752.8311378577023}},{"group":"nodes","data":{"id":"def:ConfSp","name":"definition","text":"
\n

Definition 48 (Configuration space / universe). The configuration space is usually noted Ω\\Omega. As its name indicates, the set Ω\\Omega contains all the possible configurations of a probabilistic model. Its elements are usually noted ω\\omega, and we will call them ’elementary configurations’.
\nWarning: the name of the space Ω\Omega and its elements ω\omega vary across contexts and languages. Ω\Omega is often called the ’sample space’ or ’universe’, and the elements ω\omega ’samples’, ’outcomes’ or ’realizations’.
\n

\n

Probabilistic models are very useful to analyse dice rolls and card games. What is a relevant configuration space in the following situations?

\n\n

What should we choose as the configuration space when

\n\n

Assume a player draws nn cards without putting them back in the deck. What is the size of the smallest configuration space describing the possible results ? 52×(521)×(522)×...×(52n+1)=52!(52n)!52\\times (52-1)\\times (52-2)\\times ... \\times (52-n+1)=\\frac{52!}{(52-n)!}

\n
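The count above can be reproduced in a few lines (a sketch using only the standard library):

```python
import math

def ordered_draws(n, deck=52):
    # 52 * 51 * ... * (52 - n + 1) = 52! / (52 - n)!
    return math.factorial(deck) // math.factorial(deck - n)

def unordered_draws(n, deck=52):
    # identifying results equal up to permutation divides the count by n!
    return ordered_draws(n, deck) // math.factorial(n)

two_cards_ordered = ordered_draws(2)       # 52 * 51
two_cards_unordered = unordered_draws(2)
```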

Note that when using Cartesian products, we have (a,b)(b,a)(a,b)\\neq (b,a). The order between the card draws is taken into account. In card games, the order in which cards are drawn is often not important. In that case, when drawing 22 cards, we have (a,b)(b,a)(a,b)\\sim (b,a).

\n

If we do not distinguish results up to permutations, what is the size of the configuration space when

\n\n

In this course we are particularly interested in signal and image processing problems.

\n

What are the relevant configuration spaces when studying

\n\n
","parent":"subsec:ConfProb","rank":"0","html_name":"def:ConfSp","summary":"
\n

Definition 48 (Configuration space / universe). The configuration space is usually noted Ω\\Omega. As its name indicates, the set Ω\\Omega contains all the possible configurations of a probabilistic model. Its elements are usually noted ω\\omega, and we will call them ’elementary configurations’.

\n
","hasSummary":true,"hasTitle":true,"title":"Configuration space / universe","height":468,"width":980},"classes":"l0","position":{"x":13393.03060830612,"y":5109.9275802798675}},{"group":"nodes","data":{"id":"def:Ev","name":"definition","text":"
\n

Definition 49 (Events). When the configuration space Ω\Omega is endowed with a σ\sigma-algebra 𝒜\mathcal{A}, the elements of 𝒜\mathcal{A} are called ’events’. For those unfamiliar with σ\sigma-algebras, ’events’ can be defined as subsets of Ω\Omega.
\nExamples:

\n\n
","parent":"subsec:ConfProb","rank":"0","html_name":"def:Ev","summary":"
\n

Definition 49 (Events). When the configuration space Ω\\Omega is endowed with a σ\\sigma-algebra 𝒜\\mathcal{A}, the elements of 𝒜\\mathcal{A} are called ’events’. For those unfamiliar with σ\\sigma-algebras, ’events’ can be defined as subsets of Ω\\Omega.

\n
","hasSummary":true,"hasTitle":true,"title":"Events","height":425,"width":980},"classes":"l0","position":{"x":13427.542481071661,"y":4534.440071296909}},{"group":"nodes","data":{"id":"def:Prob","name":"definition","text":"
\n

Definition 50 (Probability). A probability PP is a function which takes as input an event and returns a positive real number. It must verify the following axioms:

\n\n

As direct consequences, we have:

\n\n

Axioms 2-3 and consequence 2 are precisely axioms of measures, hence a probability is a measure.
\nInterpretation: the probability gives a notion of ’size’ to events. Given that the ’size’ of Ω\\Omega is 11, the size of events can be interpreted as the proportion they occupy in Ω\\Omega. Hence, the PP of probability can also be read as the PP of proportion.

\n
","parent":"subsec:ConfProb","rank":"0","html_name":"def:Prob","summary":"
\n

Definition 50 (Probability). A probability PP is a measure on a set Ω\\Omega such that P(Ω)=1.P(\\Omega) = 1.

\n
","hasSummary":true,"hasTitle":true,"title":"Probability","height":336,"width":980},"classes":"l0","position":{"x":13469.107194598806,"y":3928.411817425082}},{"group":"nodes","data":{"id":"def:ProbDen","name":"definition","text":"
\n

Definition 51 (Probability density). Let PP be a probability on \\mathbb{R}. We say that the function ff is the probability density of PP when

\n

A,P(A)=Af(x)dx.\forall A\subset \mathbb{R},\quad P(A)=\int_A f(x)\mathrm{d}x.

\n

Note that PP does not always have a density. Take for instance PP such that P(0)=1P({0})=1: there is no function ff verifying the above condition.

\n
\n

The following concerns readers familiar with σ\sigma-algebras.
\nStrictly speaking, in the previous definition AA must be a measurable set. In general the probability density is defined with respect to a reference measure. Let PP be a probability measure, and μ\\mu be a reference measure on a set EE. Then ff is the density of PP with respect to μ\\mu when A𝒜E,P(A)=Efdμ,\\forall A\\in \\mathcal{A}_E,\\quad P(A)=\\int_E f\\mathrm{d}\\mu, where 𝒜E\\mathcal{A}_E is the σ\\sigma-algebra of EE. ff is noted dPdμ\\frac{\\mathrm{d}P}{\\mathrm{d}\\mu}.

\n
","parent":"subsec:ConfProb","rank":"0","html_name":"def:ProbDen","summary":"
\n

Definition 51 (Probability density).

\n
","hasSummary":true,"hasTitle":true,"title":"Probability density","height":226,"width":980},"classes":"l0","position":{"x":14615.25950129798,"y":3942.3523070315723}},{"group":"nodes","data":{"id":"rem:IntProb","name":"remark","text":"
\n

Remark 12 (Introductory remark). General remark: in most situations the word probability refers to a proportion. For instance, the question \"what is the probability of this event?\" can usually be interpreted as \"what is the proportion of the possible configurations in which that event occurs?\".

\n
","parent":"subsec:ConfProb","rank":"0","html_name":"rem:IntProb","summary":"
\n

Remark 12 (Introductory remark). General remark: in most situations the word probability refers to a proportion. For instance, the question \"what is the probability of this event?\" can usually be interpreted as \"what is the proportion of the possible configurations in which that event occurs?\".

\n
","hasSummary":true,"hasTitle":true,"title":"Introductory remark","height":504,"width":980},"classes":"l0","position":{"x":12202.683927367605,"y":5429.029516520528}},{"group":"nodes","data":{"id":"rem:Ev","name":"remark","text":"
\n

Remark 13 (Events and σ\sigma-algebra). Most definitions and results of this section do not focus on the measure-theoretical aspects of probabilities. Hence, the section can be read without knowledge of measure theory. In order to make statements both rigorous and understandable without measure theory, we ’hide’ the σ\sigma-algebra behind the word ’event’. Throughout this section, an ’event’ AA of a set EE can be understood at 22 levels:

\n\n
","parent":"subsec:ConfProb","rank":"0","html_name":"rem:Ev","summary":"
\n

Remark 13 (Events and σ\sigma-algebra). Throughout this section, an ’event’ AA of a set EE can be understood at 22 levels:

\n\n
","hasSummary":true,"hasTitle":true,"title":"Events and σ\\sigma-algebra","height":607,"width":980},"classes":"l0","position":{"x":12196.553096064195,"y":4609.336702454983}},{"group":"nodes","data":{"id":"rem:ProbAss","name":"remark","text":"
\n

Remark 14 (Assignment of probabilities). To model real life situations, we assign probabilities to events which reflect what we know about the situation. The probabilities on the configuration space Ω\Omega are usually assigned according to one of the two following principles.

\n\n

Independently of how they are assigned, probability values should always verify the axioms of measures.
\n

\n
\n

Examples.
\nAssume that a dice is a perfect cube. All the faces are indistinguishable from each other, apart from the number they carry. In that case, it is relevant to assign probabilities according to the indistinguishability principle:

\n

P({1})=16,P({2})=16,P({3})=16,P(\\{1\\}) = \\frac{1}{6}, P(\\{2\\}) = \\frac{1}{6},P(\\{3\\}) = \\frac{1}{6}, P({4})=16,P({5})=16,P({6})=16.P(\\{4\\}) = \\frac{1}{6},P(\\{5\\}) = \\frac{1}{6},P(\\{6\\}) = \\frac{1}{6}.

\n

The distribution is called uniform. It is now possible to compute the probability of every event, using the axiom 3 (if AB=A\\cap B=\\emptyset then P(AB)=P(A)+P(B)P(A\\cup B)=P(A)+P(B)):

\n

P({2,4,6})=P({2}{4}{6})=P({2})+P({4})+P({6})=12P\\left(\\{2,4,6\\}\\right) = P\\left(\\{2\\}\\cup\\{4\\}\\cup \\{6\\}\\right)=P(\\{2\\})+P(\\{4\\})+P(\\{6\\})=\\frac{1}{2}

\n

Assume now that our dice is a bit damaged. It is no longer perfectly symmetrical, hence we cannot use the indistinguishability principle anymore. In that case a relevant approach is to perform many dice rolls and to assign probabilities according to the observed frequency of each number: if the number ii appears kik_i times out of NN rolls,

\n

P({1})=k1N,P({2})=k2N,P({3})=k3N,P(\\{1\\}) = \\frac{k_1}{N}, P(\\{2\\}) = \\frac{k_2}{N},P(\\{3\\}) = \\frac{k_3}{N}, P({4})=k4N,P({5})=k5N,P({6})=k6N.P(\\{4\\}) = \\frac{k_4}{N},P(\\{5\\}) = \\frac{k_5}{N},P(\\{6\\}) = \\frac{k_6}{N}.

\n

Exactly as before, the probability of other events is computed with the rule ’if AB=A\\cap B=\\emptyset then P(AB)=P(A)+P(B)P(A\\cup B)=P(A)+P(B)’.
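The frequency-based assignment can be sketched by simulating a hypothetical damaged die (the bias chosen below is an assumption for illustration only):

```python
import random
from collections import Counter

random.seed(0)
# Hypothetical damaged die: face 6 is twice as likely as each other face.
weights = [1, 1, 1, 1, 1, 2]
N = 100_000
rolls = random.choices([1, 2, 3, 4, 5, 6], weights=weights, k=N)

counts = Counter(rolls)
P_hat = {face: counts[face] / N for face in range(1, 7)}  # P({i}) ~ k_i / N

assert abs(sum(P_hat.values()) - 1.0) < 1e-9   # frequencies sum to 1
assert abs(P_hat[6] - 2 / 7) < 0.01            # true proportion is 2/7
```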

\n
","parent":"subsec:ConfProb","rank":"0","html_name":"rem:ProbAss","summary":"
\n

Remark 14 (Assignment of probabilities). Probability values are usually assigned either based on an indistinguishability criterion, or on observed frequencies of a repeated experiment.

\n
","hasSummary":true,"hasTitle":true,"title":"Assignment of probabilities","height":412,"width":980},"classes":"l0","position":{"x":12258.913898509865,"y":3919.3695181741937}},{"group":"nodes","data":{"id":"rem:AbsConf","name":"remark","text":"
\n

Remark 15 (Abstract configuration space). In many cases Ω\Omega is difficult to describe. In such cases, random variables are particularly useful. Consider a machine that produces cables. The cables produced are not all strictly identical: the length, the weight, the breaking load,... fluctuate from one cable to another. The number of parameters characterising a cable is very large, and might even not be known precisely. In that case it is not possible to properly characterize our configuration space Ω\Omega. We will not try to describe Ω\Omega, we only suppose its existence. Each parameter of the cables can be represented by a random variable. For example, the length is a variable L:ΩL:\Omega\rightarrow \mathbb{R}, the section a variable S, and so on. Then, we are not really interested in the full configuration space Ω\Omega and the probability on Ω\Omega, but in the joint law of the random variables corresponding to the interesting parameters.

\n
","parent":"subsec:ConfProb","rank":"0","html_name":"rem:AbsConf","summary":"
\n

Remark 15 (Abstract configuration space). In many cases Ω\Omega is difficult to describe. We will not try to describe Ω\Omega, we only suppose its existence. We are not really interested in the full configuration space Ω\Omega and the probability on Ω\Omega, but in the joint law of the random variables corresponding to the interesting parameters.

\n
","hasSummary":true,"hasTitle":true,"title":"Abstract configuration space","height":514,"width":980},"classes":"l0","position":{"x":14700.381605505565,"y":4791.199530686157}},{"group":"nodes","data":{"id":"def:RV","name":"definition","text":"
\n

Definition 52 (Random variables). We first introduce the idea behind the definition. Let us build a configuration space representing the possible ages of 22 persons. A natural choice is Ω=×\Omega = \mathbb{R} \times \mathbb{R} where the first coordinate represents the possible ages of the first person, and the second the possible ages of the second person (Ω=+×+\Omega = \mathbb{R}_+ \times \mathbb{R}_+ is also a natural choice).
\n

\n

The coordinate function age1:Ωage_1:\\Omega \\rightarrow \\mathbb{R},

\n

age1(ω=(ω1,ω2))=ω1age_1\\left(\\omega=(\\omega_1,\\omega_2)\\right)=\\omega_1

\n

’extracts’ the age of the first person out of an elementary configuration ω\\omega. Similarly, the coordinate function age2:Ωage_2:\\Omega \\rightarrow \\mathbb{R},

\n

age2(ω=(ω1,ω2))=ω2age_2\\left(\\omega=(\\omega_1,\\omega_2)\\right)=\\omega_2

\n

’extracts’ the age of the second person. The functions age1age_1 and age2age_2 are called random variables.
\nThe coordinate functions age1age_1 and age2age_2 extract interesting quantities from the elementary configuration. However, there are other interesting quantities which are not coordinate functions. For instance, for an elementary configuration ω\omega, we can be interested in the average age of the 22 persons. We can define the function average:Ωaverage:\Omega \rightarrow \mathbb{R},

\n

average(ω=(ω1,ω2))=ω1+ω22average\\left(\\omega=(\\omega_1,\\omega_2)\\right) = \\frac{\\omega_1+\\omega_2}{2}

\n

The function averageaverage is also called a random variable.
\nDefinition:
\nMore generally, a random variable XX taking values in a space EE is a function X:ΩEX:\\Omega\\rightarrow E. In particular, integer random variables are functions X:ΩX:\\Omega\\rightarrow \\mathbb{N} and real random variables are functions X:ΩX:\\Omega\\rightarrow \\mathbb{R}. The only requirement for a function on Ω\\Omega to be called a ’random variable’ is to be measurable: if AA is an event of EE, X1(A)X^{-1}(A) must be an event of Ω\\Omega. This restriction only has importance when not all subsets are events.
\nConsider the following dice roll game. If the number is even, the player gains 11 euro, and if the number is odd he loses 11 euro. The function Gain:Ω={1,2,3,4,5,6}{1,1}Gain:\Omega=\{1,2,3,4,5,6\}\rightarrow \{-1,1\}, writing GG for GainGain, with G(1)=G(3)=G(5)=1G(1)=G(3)=G(5)=-1 and G(2)=G(4)=G(6)=1G(2)=G(4)=G(6)=1, is a random variable.

\n

Assume that we roll a dice nn times. The configuration space that describes all the possible results is Ω={1,2,3,4,5,6}n\Omega = \{1,2,3,4,5,6\}^n. Let XiX_i be the ii-th coordinate function on Ω\Omega. For instance,

\n

X1((3,2,3,...,6))=3.X_1\\left( (3,2,3,...,6) \\right) = 3. These random variables ’extract’ the result of the ii-th roll out of the elementary configuration. We can also construct the random variables Gaini=GainXiGain_i = Gain \\circ X_i: the gain at the ii-th roll.
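The 'extraction' view of random variables can be sketched directly as functions on Ω (taking n = 4 rolls as an illustrative choice):

```python
# Sketch: random variables as plain functions on the configuration space.
# Omega = {1,...,6}^n : outcomes of n dice rolls (n = 4 here, an assumption).
omega = (3, 2, 3, 6)                 # one elementary configuration

def X(i):
    # i-th coordinate function: 'extracts' the result of the i-th roll
    return lambda w: w[i - 1]

def gain(face):
    # the Gain variable of the dice game: +1 on even faces, -1 on odd ones
    return 1 if face % 2 == 0 else -1

assert X(1)(omega) == 3              # X_1((3,2,3,6)) = 3
gain_2 = lambda w: gain(X(2)(w))     # Gain_2 = Gain o X_2
assert gain_2(omega) == 1            # the second roll is 2, an even face
```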

\n
","parent":"subsec:RV","rank":"0","html_name":"def:RV","summary":"
\n

Definition 52 (Random variables). A random variable XX valued in EE is a ’measurable’ function defined on the configuration space Ω\\Omega and valued in EE: X:ΩEX:\\Omega \\rightarrow E

\n
","hasSummary":true,"hasTitle":true,"title":"Random variables","height":423,"width":980},"classes":"l0","position":{"x":16543.63804116524,"y":4285.242966345903}},{"group":"nodes","data":{"id":"def:RVLaw","name":"definition","text":"
\n

Definition 53 (Law of a random variable). Let Ω\Omega be a configuration space with a probability PP, and XX be a random variable taking values in a set EE. Recall that PP is a function that takes as input an event of Ω\Omega and returns its probability. The random variable transports the probability PP to the set EE. The probability of a set AA in EE is determined by the probability of the subset of Ω\Omega whose images by XX are in AA.
\nLet PXP_X be a probability on EE defined as follows. For AA an event in EE, PX(A)=P(X1(A)).P_X(A) = P\left(X^{-1}(A)\right). PXP_X is called the law of the random variable XX. The measurability assumption on XX ensures that X1(A)X^{-1}(A) is an event of Ω\Omega. Remarks:

\n\n
\n

Cumulative distribution

\n
\n

Assume XX is a real random variable. The probability PXP_X can be represented by its cumulative distribution FXF_X, which is the function FX:F_X:\mathbb{R}\rightarrow \mathbb{R} defined by FX(a)=PX(],a]).F_X(a) = P_X\left(]-\infty,a]\right).

\n

All the information on PXP_X is contained in FXF_X. Since the cumulative distribution is a function \\mathbb{R}\\rightarrow \\mathbb{R}, sometimes it is conceptually simpler to manipulate FXF_X than the probability itself which is a function from the subsets of \\mathbb{R} to \\mathbb{R}.
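For a discrete variable such as a fair die, the cumulative distribution is obtained by summing point masses; a minimal sketch:

```python
from fractions import Fraction

# Sketch: cumulative distribution F_X(a) = P_X((-inf, a]) for a fair die.
P_X = {k: Fraction(1, 6) for k in range(1, 7)}

def F_X(a):
    return sum(p for k, p in P_X.items() if k <= a)

assert F_X(0) == 0                  # no mass below 1
assert F_X(3) == Fraction(1, 2)     # P_X({1,2,3}) = 1/2
assert F_X(6) == 1                  # all the mass is <= 6
```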
\n

\n\n
\n

Density of the random variable

\n
\n

When the law PXP_X has a density ff, the density is called the density of the random variable XX and is often noted fXf_X.
\nFor a real random variable XX,

\n\n
","parent":"subsec:RV","rank":"0","html_name":"def:RVLaw","summary":"
\n

Definition 53 (Law of a random variable). Let X:ΩEX:\Omega \rightarrow E be a random variable. For AA an event in EE, define PX(A)=P(X1(A)).P_X(A) = P\left(X^{-1}(A)\right). PXP_X is called the law of the random variable XX.

\n
","hasSummary":true,"hasTitle":true,"title":"Law of a random variables","height":442,"width":980},"classes":"l0","position":{"x":15903.778546829752,"y":3703.1538379753565}},{"group":"nodes","data":{"id":"def:JoinRV","name":"definition","text":"
\n

Definition 54 (Joint random variables). Consider 22 random variables X:ΩEX:\\Omega \\rightarrow E and Y:ΩFY:\\Omega \\rightarrow F. The random variable Z:ΩE×FZ:\\Omega \\rightarrow E\\times F, Z=(X,Y)Z=(X,Y) is called the joint random variable. The law PZ=P(X,Y)P_{Z}=P_{(X,Y)} of ZZ is called the joint law (or joint probability distribution).
\n

\n

Example.
\nConsider the case of nn dice rolls. Let Ω={1,2,3,4,5,6}n\\Omega = \\{1,2,3,4,5,6\\}^n and XiX_i be the ii-th coordinate function, and let PP be the uniform probability distribution. Consider now the joint random variable Z:Ω{1,2,3,4,5,6}2,Z=(X1,X2).Z:\\Omega \\rightarrow \\{1,2,3,4,5,6\\}^2,\\quad Z=(X_1,X_2).

\n

What is the law of ZZ? PZ({(a,b)})=P(Z1({(a,b)}))=P({a}×{b}×{1,2,3,4,5,6}n2)P_Z(\\{(a,b)\\})=P(Z^{-1}(\\{(a,b)\\}))=P(\\{a\\}\\times \\{b\\} \\times \\{1,2,3,4,5,6\\}^{n-2}) Hence PZ({(a,b)})=Card({1,2,3,4,5,6}n2)Card({1,2,3,4,5,6}n)=6n26n=1/36P_Z(\\{(a,b)\\})=\\frac{Card(\\{1,2,3,4,5,6\\}^{n-2})}{Card(\\{1,2,3,4,5,6\\}^{n})}= \\frac{6^{n-2}}{6^n}=1/36.
\n

\n

PZ\\rightarrow P_Z is uniform on {1,2,3,4,5,6}2\\{1,2,3,4,5,6\\}^2.

\n
","parent":"subsec:RV","rank":"0","html_name":"def:JoinRV","summary":"
\n

Definition 54 (Joint random variables). Consider 22 random variables X:ΩEX:\\Omega \\rightarrow E and Y:ΩFY:\\Omega \\rightarrow F. The random variable Z:ΩE×FZ:\\Omega \\rightarrow E\\times F, Z=(X,Y)Z=(X,Y) is called the joint random variable. The law PZ=P(X,Y)P_{Z}=P_{(X,Y)} of ZZ is called the joint law (or joint probability distribution).

\n
","hasSummary":true,"hasTitle":true,"title":"Joint random variables","height":535,"width":980},"classes":"l0","position":{"x":16412.728588815913,"y":2993.8869510895343}},{"group":"nodes","data":{"id":"def:Marg","name":"definition","text":"
\n

Definition 55 (Concept of marginal). We give an informal description of the concept of marginal. Some mathematical objects defined on the Cartesian product E×FE\times F naturally give rise to similar objects defined on EE and FF by ’forgetting’ the other coordinate. The objects defined in this manner on EE and FF are called marginals.
\nAs we will see, ’forgetting’ the other coordinate is opposed to conditioning, where the other coordinate is fixed to a particular value.

\n
","parent":"subsec:Marg","rank":"0","html_name":"def:Marg","summary":"
\n

Definition 55 (Concept of marginal). We give an informal description of the concept of marginal. Some mathematical objects defined on the Cartesian product E×FE\times F naturally give rise to similar objects defined on EE and FF by ’forgetting’ the other coordinate. The objects defined in this manner on EE and FF are called marginals.

\n
","hasSummary":true,"hasTitle":true,"title":"Concept of marginal","height":514,"width":980},"classes":"l0","position":{"x":12166.850463373075,"y":2844.9599406535326}},{"group":"nodes","data":{"id":"def:MargProb","name":"definition","text":"
\n

Definition 56 (Marginal probability). Let PE×FP_{E\times F} be a probability on the Cartesian product E×FE\times F. The marginal probability on EE is defined by PE(A)=PE×F(A,F)P_E(A) = P_{E\times F}(A,F) and the marginal on FF by PF(B)=PE×F(E,B).P_F(B) = P_{E\times F}(E,B). When PE×FP_{E\times F} has a density fE×Ff_{E\times F} with respect to a reference measure on E×F{E\times F}, the marginal densities are defined by fE(x)=x×FfE×F(x,y)dy and fF(y)=E×yfE×F(x,y)dx.f_E(x) = \int_{x\times F} f_{E\times F}(x,y)\mathrm{d}y \text{ and } f_F(y) = \int_{E\times y} f_{E\times F}(x,y)\mathrm{d}x.
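For a discrete joint probability, 'forgetting' a coordinate amounts to summing over it; a sketch with E = {0,1}, F = {0,1,2} and an arbitrary table of probabilities (the values are an assumption for illustration):

```python
from fractions import Fraction

# Sketch: a joint probability on E x F, and its marginals by summation.
P = {(0, 0): Fraction(1, 6), (0, 1): Fraction(1, 6), (0, 2): Fraction(1, 6),
     (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 8), (1, 2): Fraction(1, 8)}

def P_E(x):
    # marginal on E: 'forget' the F coordinate
    return sum(p for (a, b), p in P.items() if a == x)

def P_F(y):
    return sum(p for (a, b), p in P.items() if b == y)

assert P_E(0) == Fraction(1, 2) and P_E(1) == Fraction(1, 2)
assert P_F(0) == Fraction(5, 12)
assert P_E(0) + P_E(1) == 1
```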

\n
","parent":"subsec:Marg","rank":"0","html_name":"def:MargProb","summary":"
\n

Definition 56 (Marginal probability). Let PE×FP_{E\\times F} be a probability on the Cartesian product E×FE\\times F. The marginal probability on EE is defined by PE(A)=PE×F(A,F)P_E(A) = P_{E\\times F}(A,F) and the marginal on FF by PF(B)=PE×F(E,B).P_F(B) = P_{E\\times F}(E,B).

\n
","hasSummary":true,"hasTitle":true,"title":"Marginal probability","height":494,"width":980},"classes":"l0","position":{"x":12774.067271971502,"y":2160.2187117665944}},{"group":"nodes","data":{"id":"def:MargRV","name":"definition","text":"
\n

Definition 57 (Marginal random variables). Given a random variable ZZ valued in a Cartesian product E×FE\times F, the projections on EE and FF define ’marginals’. Let Z=(X,Y),Z=(X,Y), XX is called the marginal random variable on EE and YY the marginal random variable on FF. The laws of XX and YY are called the \"marginal laws\", or \"marginal probabilities\" of the law PZP_Z. The laws are given by PX(A)=PZ((A,F)) and PY(A)=PZ((E,A)).P_X(A) = P_{Z}((A,F)) \text{ and } P_Y(A) = P_{Z}((E,A)). When the laws have densities, fX(x)=x×FfZ(x,y)dy and fY(y)=E×yfZ(x,y)dx.f_X(x) = \int_{x\times F} f_{Z}(x,y)\mathrm{d}y \text{ and } f_Y(y) = \int_{E\times y} f_{Z}(x,y)\mathrm{d}x.

\n
","parent":"subsec:Marg","rank":"0","html_name":"def:MargRV","summary":"
\n

Definition 57 (Marginal random variables). Let Z:ΩE×FZ:\\Omega\\rightarrow E\\times F be a random variable, and let XX and YY be the random variables corresponding to each coordinate: Z=(X,Y).Z=(X,Y). XX is called the marginal random variable on EE and YY the marginal random variable on FF.

\n
","hasSummary":true,"hasTitle":true,"title":"Marginal random variables","height":524,"width":980},"classes":"l0","position":{"x":12137.531799873219,"y":1425.528800121085}},{"group":"nodes","data":{"id":"th:Bayes","name":"theorem","text":"
\n

Theorem 14 (Bayes Theorem). The definition of conditional probabilities can be rewritten as something called the \"Bayes theorem\": P(B|A)P(A)=P(A|B)P(B)=P(AB)P(B|A)P(A) = P(A|B)P(B) = P(A\cap B)
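The identity can be checked by counting on a fair die (the two events chosen are illustrative):

```python
from fractions import Fraction

# Check P(B|A) P(A) = P(A|B) P(B) = P(A n B) on a fair die.
Omega = {1, 2, 3, 4, 5, 6}
P = lambda S: Fraction(len(S & Omega), len(Omega))
cond = lambda B, A: P(B & A) / P(A)       # P(B | A)

A, B = {2, 4, 6}, {4, 5, 6}
assert cond(B, A) * P(A) == cond(A, B) * P(B) == P(A & B)
```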

\n
","parent":"subsec:Cond","rank":"0","html_name":"th:Bayes","summary":"
\n

Theorem 14 (Bayes Theorem). P(B|A)P(A)=P(A|B)P(B)=P(AB)P(B|A)P(A) = P(A|B)P(B) = P(A\\cap B)

\n
","hasSummary":true,"hasTitle":true,"title":"Bayes Theorem","height":283,"width":980},"classes":"l0","position":{"x":17173.002711062174,"y":1012.9142665309903}},{"group":"nodes","data":{"id":"def:CondProb","name":"definition","text":"
\n

Definition 58 (Conditional probability). Consider a probability PP on the set of configurations Ω\Omega. Given an event AA with P(A)>0P(A)>0, the idea of conditioning with respect to AA is to focus on the configurations ωA\omega \in A and forget about the configurations ωA\omega \notin A. We would like to define a new probability which respects the proportions of the events included in AA and gives zero probability to events which do not intersect AA.
\nLet AA be an event of Ω\\Omega, with P(A)>0P(A)>0. We define the probability conditional to the event AA by PA(B)=P(BA)P(A).P_A(B) = \\frac{P(B\\cap A)}{P(A)}. PA(B)P_A(B) is called the probability of BB given AA, and is usually noted P(B|A)P(B|A).
\nExercise: check that PAP_A is a probability on Ω\\Omega
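The definition can be illustrated on the uniform die probability, conditioning on the event 'the result is even':

```python
from fractions import Fraction

# Sketch: conditioning the uniform die probability on A = 'even result'.
Omega = {1, 2, 3, 4, 5, 6}
P = lambda S: Fraction(len(S & Omega), len(Omega))

A = {2, 4, 6}                               # the conditioning event, P(A) > 0
P_A = lambda B: P(B & A) / P(A)             # P(B | A) = P(B n A) / P(A)

assert P_A({2}) == Fraction(1, 3)           # one even face out of three
assert P_A({1, 3, 5}) == 0                  # events disjoint from A get 0
assert P_A(Omega) == 1                      # P_A is again a probability
```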

\n
","parent":"subsec:Cond","rank":"0","html_name":"def:CondProb","summary":"
\n

Definition 58 (Conditional probability). We define the probability conditional to the event AA by PA(B)=P(BA)P(A).P_A(B) = \\frac{P(B\\cap A)}{P(A)}. PA(B)P_A(B) is called the probability of BB given AA, and is usually noted P(B|A)P(B|A).

\n
","hasSummary":true,"hasTitle":true,"title":"Conditional probability","height":502,"width":980},"classes":"l0","position":{"x":17123.201347644594,"y":1608.387855598545}},{"group":"nodes","data":{"id":"def:CondProbRV","name":"definition","text":"
\n

Definition 59 (Conditional laws of a joint variable). We address here the particular case of conditional laws of a joint random variable. Let X:ΩEX:\\Omega \\rightarrow E and Y:ΩFY:\\Omega \\rightarrow F be two random variables. The law PZP_Z of the joint random variable Z=(X,Y)Z=(X,Y) is a probability on E×FE\\times F. This joint probability on E×FE\\times F can be conditioned by imposing that one of the variables lies in a given set. In particular, when P(X,Y)(E×{y})>0P_{(X,Y)}(E\\times \\{y\\})>0 it is possible to condition by the event E×{y}E\\times \\{y\\}. We obtain then a probability on E×{y}E×FE\\times \\{y\\}\\subset E\\times F, interpreted as a probability on EE. The conditional probability knowing Y=yY=y is usually written P(XA|Y=y)=P(X,Y)(A×{y})P(X,Y)(E×{y})=P(X,Y)(A×{y})PY({y}).P(X\\in A |Y=y)= \\frac{P_{(X,Y)}(A\\times \\{y\\})}{P_{(X,Y)}(E\\times \\{y\\})}=\\frac{P_{(X,Y)}(A\\times \\{y\\})}{P_Y(\\{y\\})}.

\n

We recognize the term P(X,Y)(E×{y})=PY({y})P_{(X,Y)}(E\times \{y\})=P_Y(\{y\}), which is the marginal distribution on FF evaluated on {y}\{y\}.

\n

When densities exist, the conditional density is f(x|Y=y)=f(X,Y)(x,y)fY(y)f(x|Y=y) = \\frac{f_{(X,Y)}(x,y)}{f_Y(y)} as long as the marginal density fY(y)>0f_Y(y)>0.

\n
","parent":"subsec:Cond","rank":"0","html_name":"def:CondProbRV","summary":"
\n

Definition 59 (Conditional laws of a joint variable). P(XA|Y=y)=P(X,Y)(A×{y})P(X,Y)(E×{y})=P(X,Y)(A×{y})PY({y}).P(X\\in A |Y=y)= \\frac{P_{(X,Y)}(A\\times \\{y\\})}{P_{(X,Y)}(E\\times \\{y\\})}=\\frac{P_{(X,Y)}(A\\times \\{y\\})}{P_Y(\\{y\\})}.

\n
","hasSummary":true,"hasTitle":true,"title":"Conditional laws of a joint variable","height":345,"width":980},"classes":"l0","position":{"x":16121.82435700667,"y":742.0546539898808}},{"group":"nodes","data":{"id":"th:JLIV","name":"theorem","text":"
\n

Theorem 15 (Joint law of independent variables ). When XX and YY are independent, we have the two important results:

\n\n

The proofs are exercises.

\n
","parent":"subsec:Ind","rank":"0","html_name":"th:JLIV","summary":"
\n

Theorem 15 (Joint law of independent variables ). P(X,Y)=PXPYP_{(X,Y)}=P_XP_Y P(XA|Y=y)=PX(A),P(X\\in A | Y = y) = P_X(A),

\n
","hasSummary":true,"hasTitle":true,"title":"Joint law of independent variables ","height":338,"width":980},"classes":"l0","position":{"x":14311.227641064226,"y":-1037.0753394813926}},{"group":"nodes","data":{"id":"def:IndEv","name":"definition","text":"
\n

Definition 60 (Independent events). Two events AA and BB are called independent when P(AB)=P(A)P(B)P(A\\cap B) =P(A)P(B)

\n

Interpretation when P(A)>0P(A)>0: the proportion of BAB\\cap A in AA is the same as the proportion of BB in Ω\\Omega P(BA)P(A)=P(B)P(Ω)=P(B).\\frac{P(B\\cap A)}{P(A)} = \\frac{P(B)}{P(\\Omega)}=P(B). In other words, conditioning probabilities to the event AA does not change the probability of BB.
\n

\n

When P(B)>0P(B)>0, the same can be said about the proportion of ABA\\cap B in BB.
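A concrete check on a fair die: the events 'even result' and 'result at most 4' are independent:

```python
from fractions import Fraction

# Sketch: on a fair die, A = 'even' and B = 'at most 4' are independent.
Omega = {1, 2, 3, 4, 5, 6}
P = lambda S: Fraction(len(S & Omega), len(Omega))

A, B = {2, 4, 6}, {1, 2, 3, 4}
assert P(A & B) == P(A) * P(B)          # 1/3 == 1/2 * 2/3
# conditioning on A leaves the probability of B unchanged:
assert P(B & A) / P(A) == P(B)
```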

\n
","parent":"subsec:Ind","rank":"0","html_name":"def:IndEv","summary":"
\n

Definition 60 (Independent events). Two events AA and BB are called independent when P(AB)=P(A)P(B).P(A\cap B) = P(A)P(B).

\n
","hasSummary":true,"hasTitle":true,"title":"Independent events","height":336,"width":980},"classes":"l0","position":{"x":13940.301973467818,"y":396.39492473894427}},{"group":"nodes","data":{"id":"def:IndRV","name":"definition","text":"
\n

Definition 61 (Independent random variables). We say that 22 random variables are independent when information about one of them does not affect the other. This translates to the following definition.
\nTwo random variables X:ΩEX:\\Omega \\rightarrow E and Y:ΩFY:\\Omega \\rightarrow F are independent when : AE,BF,X1(A) and Y1(B) are independent events in Ω.\\forall A\\subset E,B\\subset F,\\quad X^{-1}(A) \\text{ and } Y^{-1}(B) \\text{ are independent events in } \\Omega.

\n

Recall that it means that when P(Y1(B))>0P(Y^{-1}(B))>0, conditioning the probability PP on Ω\\Omega to Y1(B)Y^{-1}(B) does not affect the probability of X1(A)X^{-1}(A). The other direction holds when P(X1(A))>0P(X^{-1}(A))>0.
\n

\n
","parent":"subsec:Ind","rank":"0","html_name":"def:IndRV","summary":"
\n

Definition 61 (Independent random variables). Two random variables X:ΩEX:\\Omega \\rightarrow E and Y:ΩFY:\\Omega \\rightarrow F are independent when : AE,BF,X1(A) and Y1(B) are independent events in Ω.\\forall A\\subset E,B\\subset F,\\quad X^{-1}(A) \\text{ and } Y^{-1}(B) \\text{ are independent events in } \\Omega.

\n
","hasSummary":true,"hasTitle":true,"title":"Independent random variables","height":379,"width":1232},"classes":"l0","position":{"x":14337.070950072977,"y":-98.7174819388315}},{"group":"nodes","data":{"id":"def:IndExp","name":"theorem","text":"
\n

Theorem 16 (Independence and expectation). When XX and YY are two independent variables, we have that 𝔼(XY)=𝔼(X)𝔼(Y).\\mathbb{E}(XY) = \\mathbb{E}(X)\\mathbb{E}(Y). When the laws have densities, the result is given by the following computation: 𝔼(XY)=xyf(X,Y)dxdy=xyfX(x)fY(y)dxdy=xfX(x)dxyfY(y)dy\\begin{aligned}\n\\mathbb{E}(XY) &=& \\int \\int xy f_{(X,Y)}dxdy \\\\\n&=& \\int \\int xy f_X(x)f_Y(y)dxdy\\\\\n&=& \\int xf_X(x)dx \\int yf_Y(y)dy\\end{aligned}
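The result can also be checked by direct enumeration for the two coordinates of two fair dice rolls, which are independent by construction:

```python
from itertools import product
from fractions import Fraction

# Check E(XY) = E(X) E(Y) for the two (independent) coordinates of two
# fair dice rolls, by enumerating Omega = {1,...,6}^2.
Omega = list(product(range(1, 7), repeat=2))
p = Fraction(1, len(Omega))               # uniform probability

E = lambda f: sum(f(w) * p for w in Omega)
X = lambda w: w[0]
Y = lambda w: w[1]

assert E(lambda w: X(w) * Y(w)) == E(X) * E(Y) == Fraction(49, 4)
```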

\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:IndExp","summary":"
\n

Theorem 16 (Independence and expectation). When XX and YY are two independent variables, we have that 𝔼(XY)=𝔼(X)𝔼(Y).\\mathbb{E}(XY) = \\mathbb{E}(X)\\mathbb{E}(Y).

\n
","hasSummary":true,"hasTitle":true,"title":"Independence and expectation","height":379,"width":980},"classes":"l0","position":{"x":18372.05589195286,"y":41.302104565725244}},{"group":"nodes","data":{"id":"def:StdProp","name":"theorem","text":"
\n

Theorem 17 (Properties of variance). Show as a first exercise that variance can also be written as v(X)=𝔼(X2)𝔼(X)2v(X)=\\mathbb{E}(X^2)-\\mathbb{E}(X)^2.
\nShow that:

\n\n

Hence, when X1,...,XnX_1,...,X_n are independent variables with identical distributions, v(1niXi)=1n2v(iXi)=1n2iv(Xi)=1nv(X1)n0v\\left(\\frac{1}{n}\\sum_i X_i\\right)=\\frac{1}{n^2}v\\left(\\sum_i X_i\\right) = \\frac{1}{n^2}\\sum_i v(X_i)=\\frac{1}{n}v(X_1)\\xrightarrow[n\\rightarrow \\infty]{} 0

\n

This is a very important result: averaging nn independent and identically distributed (i.i.d.) variables reduces the variance by a factor 1n\frac{1}{n} and the standard deviation by a factor 1n\frac{1}{\sqrt{n}}.
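The 1/n decay of the variance of the average can be observed empirically; a Monte Carlo sketch with Uniform(0,1) variables (an illustrative choice, v(X_1) = 1/12):

```python
import random
import statistics

random.seed(1)

def var_of_mean(n, trials=4000):
    # empirical variance of the average of n i.i.d. Uniform(0,1) variables
    means = [statistics.fmean(random.random() for _ in range(n))
             for _ in range(trials)]
    return statistics.pvariance(means)

# v(average of n samples) should be close to v(X_1)/n = (1/12)/n
assert abs(var_of_mean(10) - 1 / 120) < 0.003
assert var_of_mean(100) < var_of_mean(10) / 5
```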

\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:StdProp","summary":"
\n

Theorem 17 (Properties of variance). v(X)=𝔼(X2)𝔼(X)2v(X)=\\mathbb{E}(X^2)-\\mathbb{E}(X)^2 v(aX+b)=a2v(X)v(aX+b)=a^2v(X) when XX and YY are independent, v(X+Y)=v(X)+v(Y)v(X+Y)=v(X)+v(Y).

\n
","hasSummary":true,"hasTitle":true,"title":"Properties of variance","height":455,"width":980},"classes":"l0","position":{"x":18398.953449215493,"y":-619.6713064174974}},{"group":"nodes","data":{"id":"def:Expe","name":"definition","text":"
\n

Definition 62 (Expectation). Let Ω={1,2}\Omega=\{1,2\} with P({1})=110P(\{1\})=\frac{1}{10} and P({2})=910P(\{2\})=\frac{9}{10}. Let XX be a real random variable, X:ΩX:\Omega \rightarrow \mathbb{R}, say X(ω)=ωX(\omega)=\omega. For each elementary configuration ω\omega, XX takes the value X(ω)X(\omega). The ’expectation’ of XX is the ’average’ value of the X(ω)X(\omega) with respect to the probabilities of the {ω}\{\omega\}. In our example: 𝔼(X)=X(1).P({1})+X(2).P({2})=1×110+2×910.\mathbb{E}(X) = X(1).P(\{1\}) + X(2).P(\{2\}) = 1\times \frac{1}{10} + 2\times \frac{9}{10}.

\n

If Ω={1,2,...,N}\\Omega=\\{1,2,...,N\\}, the average of a random variable X:ΩX:\\Omega \\rightarrow \\mathbb{R} becomes:

\n

𝔼(X)=iΩX(i)×P(i).\\mathbb{E}(X) = \\sum_{i\\in\\Omega} X(i)\\times P({i}).

\n

Now consider the case Ω=\Omega=\mathbb{R}, with a probability PP of density ff. The formula for the expectation becomes 𝔼(X)=+X(ω)f(ω)dω.\mathbb{E}(X) = \int_{-\infty}^{+\infty} X(\omega) f(\omega)d\omega. Note that in that case, the expectation exists only when the integral exists.

\n

There is a definition of the expectation which does not depend on the nature of Ω\\Omega. The expectation (average / mean) of XX is defined by 𝔼(X)=ΩX(ω)dP(ω).\\mathbb{E}(X) = \\int_{\\Omega} X(\\omega)dP(\\omega).

\n

The rigorous definition requires what are called ’Lebesgue integrals’. However, the intuitive meaning of the formula is clear. When Ω\Omega is discrete, the integral becomes a sum over Ω\Omega. When Ω\Omega is continuous and PP has a density, dP(ω)dP(\omega) becomes f(ω)dωf(\omega)d\omega.
\nRemember that the random variable X:ΩX:\\Omega \\rightarrow \\mathbb{R} transports the probability PP defined on Ω\\Omega to a probability PXP_{X} defined on \\mathbb{R}.

\n\n

The set of random variables X:ΩX:\\Omega \\rightarrow \\mathbb{R} is a vector space ((X+Y)(ω)=X(ω)+Y(ω)(X+Y)(\\omega)=X(\\omega)+Y(\\omega)). The set of random variables such that ΩX(ω)dP(ω)\\int_{\\Omega} X(\\omega)dP(\\omega) exists is called L1(Ω)L^1(\\Omega), it is again a vector space. Since integrals are linear, the expectation 𝔼:L1(Ω)\\mathbb{E}:L^1(\\Omega)\\rightarrow \\mathbb{R}

\n

X𝔼(X)=ΩX(ω)dP(ω)X \\mapsto \\mathbb{E}(X)=\\int_{\\Omega} X(\\omega)dP(\\omega) is a linear application from L1(Ω)L^1(\\Omega) to \\mathbb{R}. In other words, we have the fundamentals properties: 𝔼(αX)=α𝔼(X) and 𝔼(X+Y)=𝔼(X)+𝔼(Y).\\mathbb{E}(\\alpha X) = \\alpha \\mathbb{E}(X) \\quad \\text{ and } \\quad \\mathbb{E}(X+Y) = \\mathbb{E}(X) + \\mathbb{E}(Y).

\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:Expe","summary":"
\n

Definition 62 (Expectation). The general definition of the expectation of a random variable X:ΩX:\\Omega\\rightarrow \\mathbb{R} is 𝔼(X)=ΩX(ω)dP(ω).\\mathbb{E}(X) = \\int_{\\Omega} X(\\omega)dP(\\omega). in particular, when Ω={1,..,N}\\Omega = \\{1,..,N\\}, 𝔼(X)=iΩX(i)×P(i)=ii.PX({i}),\\mathbb{E}(X) = \\sum_{i\\in\\Omega} X(i)\\times P({i})=\\sum_{i\\in \\mathbb{Z}} i.P_X(\\{i\\}), and when Ω=\\Omega = \\mathbb{R}, 𝔼(X)=+X(ω)f(ω)dω=+xfX(x)dx.\\mathbb{E}(X) = \\int_{-\\infty}^{+\\infty} X(\\omega) f(\\omega)d\\omega=\\int_{-\\infty}^{+\\infty} xf_X(x)dx.

\n
","hasSummary":true,"hasTitle":true,"title":"Expectation","height":799,"width":980},"classes":"l0","position":{"x":18366.371663039306,"y":1162.4577397102996}},{"group":"nodes","data":{"id":"def:ScalRV","name":"definition","text":"
\n

Definition 63 (Inner products on random variables). The expectation makes it possible to define an important inner product on random variables. It provides a norm and a notion of angles between random variables. Let X:ΩX:\Omega \rightarrow \mathbb{R} and Y:ΩY:\Omega \rightarrow \mathbb{R} be 22 random variables. When it exists, we can define X,Y=𝔼(XY)=ΩX(ω)Y(ω)dP(ω),\langle X,Y \rangle = \mathbb{E}(XY)= \int_{\Omega} X(\omega)Y(\omega)dP(\omega), where XYXY is understood as the product of the functions XX and YY.

\n

The set of random variables XX such that ΩX(ω)2dP(ω)\\int_{\\Omega} X(\\omega)^2dP(\\omega) exists is called L2(Ω)L^2(\\Omega). L2(Ω)L^2(\\Omega) is a vector space.
\nExercise: show that .,.\\langle.,.\\rangle is an inner product on L2(Ω)L^2(\\Omega).
\nConsider the important case of 22 independent random variables XX and YY with zero mean. Then X,Y=0.\\langle X,Y\\rangle =0. Hence the strong link between independence and orthogonality. Note however that the converse is not always true: we can find centered random variables XX and YY with X,Y=0\\langle X,Y\\rangle =0 which are not independent.
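The last point can be made concrete: a sketch with X uniform on {-1, 0, 1} and Y the recentered square of X (an illustrative choice):

```python
from fractions import Fraction

# Sketch: centered X, Y with <X, Y> = 0 that are NOT independent.
# Omega = {-1, 0, 1} with the uniform probability.
P = {w: Fraction(1, 3) for w in (-1, 0, 1)}
E = lambda X: sum(X(w) * p for w, p in P.items())

X = lambda w: w                          # E(X) = 0
Y = lambda w: w * w - Fraction(2, 3)     # X^2 recentered, E(Y) = 0

assert E(X) == 0 and E(Y) == 0
assert E(lambda w: X(w) * Y(w)) == 0     # orthogonal: <X, Y> = E(XY) = 0
# yet not independent: {X = 0} forces {Y = -2/3}
PXY = sum(p for w, p in P.items() if X(w) == 0 and Y(w) == Fraction(1, 3))
assert PXY == 0 != Fraction(1, 3) * Fraction(2, 3)   # P(X=0) P(Y=1/3)
```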

\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:ScalRV","summary":"
\n

Definition 63 (Inner products on random variables). X,Y=𝔼(XY)=ΩX(ω)Y(ω)dP(ω),\langle X,Y \rangle = \mathbb{E}(XY)= \int_{\Omega} X(\omega)Y(\omega)dP(\omega),

\n
","hasSummary":true,"hasTitle":true,"title":"Inner products on random variables","height":422,"width":980},"classes":"l0","position":{"x":19899.27127201591,"y":741.8116466487754}},{"group":"nodes","data":{"id":"def:Cov","name":"definition","text":"
\n

Definition 64 (Covariance (dimension 1)). Given a random variable XX, write Xc=X𝔼(X)X_{c} = X-\mathbb{E}(X) for the ’centered’ variable. The covariance between two variables XX and YY is the inner product between their centered versions.
\nWhen it exists, the covariance between X:ΩX:\\Omega\\rightarrow \\mathbb{R} and Y:ΩY:\\Omega\\rightarrow \\mathbb{R} is defined by cov(X,Y)=Xc,Yc=𝔼((X𝔼(X))(Y𝔼(Y))).cov(X,Y)=\\langle X_{c},Y_{c}\\rangle = \\mathbb{E}\\left( (X-\\mathbb{E}(X))(Y-\\mathbb{E}(Y)) \\right).

\n

We have that v(X)=cov(X,X)v(X)=cov(X,X).

\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:Cov","summary":"
\n

Definition 64 (Covariance (dimension 1)). cov(X,Y)=Xc,Yc=𝔼((X𝔼(X))(Y𝔼(Y))).cov(X,Y)=\\langle X_{c},Y_{c}\\rangle = \\mathbb{E}\\left( (X-\\mathbb{E}(X))(Y-\\mathbb{E}(Y)) \\right).

\n
","hasSummary":true,"hasTitle":true,"title":"Covariance (dimension 1)","height":297,"width":980},"classes":"l0","position":{"x":19930.787327169135,"y":160.3847155789902}},{"group":"nodes","data":{"id":"def:Std","name":"definition","text":"
\n

Definition 65 (Standard deviation / variance). The variance and standard deviation measure how a random variable varies around its mean. The deviation from the mean is given by the Euclidean norm of the centered variable Xc=X𝔼(X)X_{c} = X-\\mathbb{E}(X). When they exist, the variance is defined by

\n

v(X)=𝔼((X𝔼(X))2)=cov(X,X)=Xc2,v(X)= \\mathbb{E}\\left((X-\\mathbb{E}(X))^2\\right)=cov(X,X)=\\|X_c\\|^2,

\n

and the standard deviation by

\n

σ(X)=v(X)=Xc.\\sigma(X)=\\sqrt{v(X)} =\\|X_c\\|.
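These two formulas can be checked numerically (a sketch; Uniform(0, 1), whose exact variance is 1/12, is an illustrative choice):

```python
import random

random.seed(1)
# X ~ Uniform(0, 1): E(X) = 1/2 and v(X) = 1/12
xs = [random.random() for _ in range(200_000)]

mean = sum(xs) / len(xs)
# v(X) = E((X - E(X))^2) = ||X_c||^2
var = sum((x - mean) ** 2 for x in xs) / len(xs)
# sigma(X) = sqrt(v(X)) = ||X_c||
std = var ** 0.5
print(var, std)
```

The empirical variance lands close to 1/12 ≈ 0.0833, and the standard deviation close to its square root.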

\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:Std","summary":"
\n

Definition 65 (Standard deviation / variance). v(X)=𝔼((X𝔼(X))2),v(X)= \\mathbb{E}\\left((X-\\mathbb{E}(X))^2\\right), σ(X)=v(X)=𝔼((X𝔼(X))2).\\sigma(X)=\\sqrt{v(X)} = \\sqrt{ \\mathbb{E}\\left((X-\\mathbb{E}(X))^2\\right)}.

\n
","hasSummary":true,"hasTitle":true,"title":"Standard deviation / variance","height":416,"width":980},"classes":"l0","position":{"x":19621.80213564521,"y":-573.1885588548129}},{"group":"nodes","data":{"id":"def:CovM","name":"definition","text":"
\n

Definition 66 (Covariance matrix). Consider nn random variables Xi:ΩX_i:\\Omega \\rightarrow \\mathbb{R}. The variables can be put in a column vector 𝐗=(X1,..,Xn)\\bm X=(X_1,..,X_n). 𝐗\\bm X is then a random variable 𝐗:Ωn\\bm X:\\Omega \\rightarrow \\mathbb{R}^n. Such random variables are often called random vectors.
\nFor a random column vector 𝐗:Ωn\\bm X:\\Omega \\rightarrow \\mathbb{R}^n, when it exists, the covariance matrix is defined by cov(𝐗)=𝔼(𝐗c𝐗cT)=𝔼((𝐗𝔼(𝐗))(𝐗𝔼(𝐗))T).cov(\\bm X)=\\mathbb{E}(\\bm X_{c} \\bm X_{c}^T)= \\mathbb{E}\\left((\\bm X-\\mathbb{E}(\\bm X))(\\bm X-\\mathbb{E}(\\bm X))^T\\right).

\n

𝔼((X1𝔼(X1)X2𝔼(X2)..Xn𝔼(Xn))(X1𝔼(X1)X2𝔼(X2)..Xn𝔼(Xn)))\\mathbb{E}\\left( \\begin{pmatrix} X_1-\\mathbb{E}(X_1)\\\\ X_2 -\\mathbb{E}(X_2)\\\\ . \\\\ . \\\\ X_n-\\mathbb{E}(X_n) \\end{pmatrix}\\begin{pmatrix} X_1-\\mathbb{E}(X_1)& X_2 -\\mathbb{E}(X_2)& . & . & X_n-\\mathbb{E}(X_n) \\end{pmatrix} \\right) == (𝔼((X1𝔼(X1))(X1𝔼(X1)))..𝔼((X1𝔼(X1))(Xn𝔼(Xn)))....𝔼((Xn𝔼(Xn))(X1𝔼(X1)))..𝔼((Xn𝔼(Xn))(Xn𝔼(Xn))))\\begin{pmatrix} \n\\mathbb{E}\\left( ( X_1-\\mathbb{E}(X_1) )( X_1-\\mathbb{E}(X_1) )\n\\right) &.&.& \\mathbb{E}\\left( ( X_1-\\mathbb{E}(X_1) )( X_n-\\mathbb{E}(X_n) ) \\right) \\\\\n.&&&.\\\\\n.&&&.\\\\\n\\mathbb{E}\\left( ( X_n-\\mathbb{E}(X_n) )( X_1-\\mathbb{E}(X_1) )\n\\right) &.&.& \\mathbb{E}\\left( ( X_n-\\mathbb{E}(X_n) )( X_n-\\mathbb{E}(X_n) ) \\right) \\\\\n\\end{pmatrix}

\n

Hence we can see that cov(X)ij=cov(Xi,Xj)=Xi,c,Xj,ccov(X)_{ij}=cov(X_{i},X_{j})=\\langle X_{i,c},X_{j,c} \\rangle.
\nWhen 𝐗\\bm X is a row vector, the definition becomes cov(𝐗)=𝔼(𝐗cT𝐗c).cov(\\bm X)=\\mathbb{E}(\\bm X_{c}^T\\bm X_{c}).
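A small empirical sketch of the entrywise formula cov(X)_{ij} = ⟨X_{i,c}, X_{j,c}⟩, here for a hypothetical two-dimensional vector whose coordinates are deliberately correlated (the Gaussian model and noise level are illustrative assumptions):

```python
import random

random.seed(2)
n = 100_000
# Random vector X = (X1, X2): X2 = X1 + independent noise, so the coordinates correlate
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [a + random.gauss(0, 0.5) for a in x1]

def cov(u, v):
    mu, mv = sum(u) / n, sum(v) / n
    # entry E((u - E(u))(v - E(v))) = <u_c, v_c>
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n

C = [[cov(x1, x1), cov(x1, x2)],
     [cov(x2, x1), cov(x2, x2)]]
# C is symmetric; for these parameters it is roughly [[1, 1], [1, 1.25]]
```

The diagonal entries are the variances v(X1) and v(X2), and the symmetry C[0][1] = C[1][0] is immediate from the definition.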

\n
","parent":"subsec:MomRV","rank":"0","html_name":"def:CovM","summary":"
\n

Definition 66 (Covariance matrix). Consider nn random variables Xi:ΩX_i:\\Omega \\rightarrow \\mathbb{R}. The variables can be put in a column vector 𝐗=(X1,..,Xn)\\bm X=(X_1,..,X_n)

\n
","hasSummary":true,"hasTitle":true,"title":"Covariance matrix","height":338,"width":980},"classes":"l0","position":{"x":20418.20378469191,"y":-1072.7316180751332}},{"group":"nodes","data":{"id":"def:WLLN","name":"theorem","text":"
\n

Theorem 18 ((Weak) Law of large numbers !finish iid!). Let X1,...,Xn,...X_1,...,X_n,... be an infinite sequence of i.i.d. real random variables with mean μ\\mu. The empirical means are the random variables Xn=X1+..+Xnn.\\bar X_n = \\frac{X_1+..+X_n}{n}. For all ϵ>0\\epsilon>0, we have P(|Xnμ|>ϵ)n0.P(|\\bar X_n-\\mu |>\\epsilon)\\xrightarrow[n\\rightarrow \\infty]{} 0.

\n

We will not prove this result, but it can be intuitively understood in a simple way when the variables have a variance. First, note that 𝔼(Xn)=𝔼(X1)=μ\\mathbb{E}(\\bar X_n)=\\mathbb{E}(X_1)=\\mu. Then, remember that v(Xn)=1nv(X1)n0.v\\left(\\bar X_n\\right)=\\frac{1}{n}v(X_1)\\xrightarrow[n\\rightarrow \\infty]{} 0. Hence the empirical mean Xn\\bar X_n is more and more concentrated around its mean μ\\mu, so the probability P(|Xnμ|>ϵ)P(|\\bar X_n-\\mu |>\\epsilon) must become smaller and smaller.
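This concentration can be illustrated numerically (a sketch with hypothetical parameters: fair coin flips, so μ = 1/2, and a fixed ε):

```python
import random

random.seed(3)

def empirical_mean(n):
    # X̄_n for n i.i.d. Bernoulli(1/2) variables
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Estimate P(|X̄_n - mu| > eps) by repeating the experiment many times
eps, trials = 0.05, 500
freq = {n: sum(abs(empirical_mean(n) - 0.5) > eps for _ in range(trials)) / trials
        for n in (10, 100, 1000)}
print(freq)  # the estimated probabilities shrink as n grows
```

For these parameters the estimated probability drops from roughly 0.75 at n = 10 to nearly 0 at n = 1000, matching the theorem's limit.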

\n
","parent":"subsec:LLN","rank":"0","html_name":"def:WLLN","summary":"
\n

Theorem 18 ((Weak) Law of large numbers !finish iid!).

\n
","hasSummary":true,"hasTitle":true,"title":"(Weak) Law of large numbers !finish iid!","height":297,"width":980},"classes":"l0","position":{"x":16413.042621379984,"y":-2186.5815533798905}},{"group":"nodes","data":{"id":"sec:LA","name":"section","text":"","rank":"0","html_name":"sec:LA","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":14753.106930478858,"y":9770.0607479875}},{"group":"nodes","data":{"id":"titlesec:LA","name":"sectionTitle","text":"

Linear algebra

","parent":"sec:LA","rank":"0","html_name":"sec:LA","hasSummary":false,"hasTitle":false,"height":561,"width":1250},"classes":"l0","position":{"x":15251.947598908304,"y":12488.748941764392}},{"group":"nodes","data":{"id":"sec:FCC","name":"section","text":"","rank":"0","html_name":"sec:FCC","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":5458.492997493175,"y":7893.861778089176}},{"group":"nodes","data":{"id":"titlesec:FCC","name":"sectionTitle","text":"

Fourier, Convolution and Correlation

","parent":"sec:FCC","rank":"0","html_name":"sec:FCC","hasSummary":false,"hasTitle":false,"height":1022,"width":1516},"classes":"l0","position":{"x":6070.582696672855,"y":9153.16568357412}},{"group":"nodes","data":{"id":"sec:TIOpe","name":"section","text":"","rank":"0","html_name":"sec:TIOpe","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":6919.291003777886,"y":4471.199618819243}},{"group":"nodes","data":{"id":"titlesec:TIOpe","name":"sectionTitle","text":"

Translation invariant operators

","parent":"sec:TIOpe","rank":"0","html_name":"sec:TIOpe","hasSummary":false,"hasTitle":false,"height":792,"width":1417},"classes":"l0","position":{"x":8883.724097307402,"y":4635.4707260975465}},{"group":"nodes","data":{"id":"sec:DiffOp","name":"section","text":"","rank":"0","html_name":"sec:DiffOp","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":7551.192528111416,"y":1286.450679389157}},{"group":"nodes","data":{"id":"titlesec:DiffOp","name":"sectionTitle","text":"

Differential operators

","parent":"sec:DiffOp","rank":"0","html_name":"sec:DiffOp","hasSummary":false,"hasTitle":false,"height":561,"width":1395},"classes":"l0","position":{"x":8889.318218063834,"y":2886.559814716601}},{"group":"nodes","data":{"id":"sec:Meas","name":"section","text":"","rank":"0","html_name":"sec:Meas","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":22234.269516961438,"y":7896.252214189441}},{"group":"nodes","data":{"id":"titlesec:Meas","name":"sectionTitle","text":"

Measure theory

","parent":"sec:Meas","rank":"0","html_name":"sec:Meas","hasSummary":false,"hasTitle":false,"height":561,"width":1273},"classes":"l0","position":{"x":24064.54267057631,"y":8862.18520253525}},{"group":"nodes","data":{"id":"sec:Prob","name":"section","text":"","rank":"0","html_name":"sec:Prob","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":16277.867792282563,"y":1748.0584128393234}},{"group":"nodes","data":{"id":"titlesec:Prob","name":"sectionTitle","text":"

Probabilities

","parent":"sec:Prob","rank":"0","html_name":"sec:Prob","hasSummary":false,"hasTitle":false,"height":331,"width":1550},"classes":"l0","position":{"x":16356.192235909653,"y":5692.698379058537}},{"group":"nodes","data":{"id":"subsec:VectSpaces","name":"subsection","text":"","parent":"sec:LA","rank":"0","html_name":"subsec:VectSpaces","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":12493.897059014955,"y":10834.568404340866}},{"group":"nodes","data":{"id":"titlesubsec:VectSpaces","name":"subsectionTitle","text":"

Vector spaces

","parent":"subsec:VectSpaces","rank":"0","html_name":"subsec:VectSpaces","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":11032.53029978813,"y":12031.627747120141}},{"group":"nodes","data":{"id":"subsec:Mats","name":"subsection","text":"","parent":"sec:LA","rank":"0","html_name":"subsec:Mats","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":17086.411266588824,"y":8265.076922654986}},{"group":"nodes","data":{"id":"titlesubsec:Mats","name":"subsectionTitle","text":"

Matrices

","parent":"subsec:Mats","rank":"0","html_name":"subsec:Mats","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":17763.04274309803,"y":9004.19546953074}},{"group":"nodes","data":{"id":"subsec:LinMaps","name":"subsection","text":"","parent":"sec:LA","rank":"0","html_name":"subsec:LinMaps","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":17215.683278967546,"y":10711.510666246777}},{"group":"nodes","data":{"id":"titlesubsec:LinMaps","name":"subsectionTitle","text":"

Linear maps

","parent":"subsec:LinMaps","rank":"0","html_name":"subsec:LinMaps","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":16890.61421535985,"y":11908.638114506259}},{"group":"nodes","data":{"id":"subsec:InnProd","name":"subsection","text":"","parent":"sec:LA","rank":"0","html_name":"subsec:InnProd","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":12708.739508391762,"y":7954.5708563614135}},{"group":"nodes","data":{"id":"titlesubsec:InnProd","name":"subsectionTitle","text":"

Inner products

","parent":"subsec:InnProd","rank":"0","html_name":"subsec:InnProd","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":14495.95155335858,"y":8766.363082609456}},{"group":"nodes","data":{"id":"subsec:Conv","name":"subsection","text":"","parent":"sec:FCC","rank":"0","html_name":"subsec:Conv","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":3595.949252973721,"y":7460.855180934309}},{"group":"nodes","data":{"id":"titlesubsec:Conv","name":"subsectionTitle","text":"

Convolutions

","parent":"subsec:Conv","rank":"0","html_name":"subsec:Conv","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":2273.952963921711,"y":8425.846299197136}},{"group":"nodes","data":{"id":"subsec:F","name":"subsection","text":"","parent":"sec:FCC","rank":"0","html_name":"subsec:F","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":8021.748552545618,"y":7711.200887003522}},{"group":"nodes","data":{"id":"titlesubsec:F","name":"subsectionTitle","text":"

Fourier

","parent":"subsec:F","rank":"0","html_name":"subsec:F","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":7475.231691469583,"y":9011.496901521983}},{"group":"nodes","data":{"id":"subsec:CT","name":"subsection","text":"","parent":"sec:FCC","rank":"0","html_name":"subsec:CT","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":6263.035667710466,"y":7199.18445415821}},{"group":"nodes","data":{"id":"titlesubsec:CT","name":"subsectionTitle","text":"

Convolution Theorems

","parent":"subsec:CT","rank":"0","html_name":"subsec:CT","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":6263.057121854839,"y":8171.311035712188}},{"group":"nodes","data":{"id":"subsec:DiffOp","name":"subsection","text":"","parent":"sec:DiffOp","rank":"0","html_name":"subsec:DiffOp","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":8876.861540464926,"y":957.0028151092976}},{"group":"nodes","data":{"id":"titlesubsec:DiffOp","name":"subsectionTitle","text":"

Differential operators (continuous case)

","parent":"subsec:DiffOp","rank":"0","html_name":"subsec:DiffOp","hasSummary":false,"hasTitle":false,"height":239,"width":980},"classes":"l0","position":{"x":10265.349643524034,"y":2232.9755917479156}},{"group":"nodes","data":{"id":"subsec:DisDifOp","name":"subsection","text":"","parent":"sec:DiffOp","rank":"0","html_name":"subsec:DisDifOp","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":5444.760413248439,"y":1682.9165409097325}},{"group":"nodes","data":{"id":"titlesubsec:DisDifOp","name":"subsectionTitle","text":"

Discrete differential operators

","parent":"subsec:DisDifOp","rank":"0","html_name":"subsec:DisDifOp","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":5349.19460384174,"y":2760.1776150533656}},{"group":"nodes","data":{"id":"subsec:ConfProb","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:ConfProb","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":13448.46735078488,"y":4698.229628990624}},{"group":"nodes","data":{"id":"titlesubsec:ConfProb","name":"subsectionTitle","text":"

Configurations and probabilities

","parent":"subsec:ConfProb","rank":"0","html_name":"subsec:ConfProb","hasSummary":false,"hasTitle":false,"height":239,"width":980},"classes":"l0","position":{"x":13791.694952880101,"y":5571.589739807056}},{"group":"nodes","data":{"id":"subsec:RV","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:RV","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":16231.081813234416,"y":3736.742929949278}},{"group":"nodes","data":{"id":"titlesubsec:RV","name":"subsectionTitle","text":"

Random Variables

","parent":"subsec:RV","rank":"0","html_name":"subsec:RV","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":16566.38507963908,"y":4670.598908809021}},{"group":"nodes","data":{"id":"subsec:Marg","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:Marg","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":12455.79953592236,"y":2247.726441018981}},{"group":"nodes","data":{"id":"titlesubsec:Marg","name":"subsectionTitle","text":"

Marginals

","parent":"subsec:Marg","rank":"0","html_name":"subsec:Marg","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":12184.695106224906,"y":3255.424081916877}},{"group":"nodes","data":{"id":"subsec:Cond","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:Cond","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":16647.413534034422,"y":1351.3941439416058}},{"group":"nodes","data":{"id":"titlesubsec:Cond","name":"subsectionTitle","text":"

Conditioning

","parent":"subsec:Cond","rank":"0","html_name":"subsec:Cond","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":17154.284822312693,"y":2056.733633893331}},{"group":"nodes","data":{"id":"subsec:Ind","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:Ind","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":14201.686461770398,"y":-195.4099234936357}},{"group":"nodes","data":{"id":"titlesubsec:Ind","name":"subsectionTitle","text":"

Independence

","parent":"subsec:Ind","rank":"0","html_name":"subsec:Ind","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":13942.47290294537,"y":738.7554924941212}},{"group":"nodes","data":{"id":"subsec:MomRV","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:MomRV","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":19392.287723865607,"y":160.11306081758323}},{"group":"nodes","data":{"id":"titlesubsec:MomRV","name":"subsectionTitle","text":"

Moments of random variables

","parent":"subsec:MomRV","rank":"0","html_name":"subsec:MomRV","hasSummary":false,"hasTitle":false,"height":169,"width":980},"classes":"l0","position":{"x":19978.181943275064,"y":1429.0085676484198}},{"group":"nodes","data":{"id":"subsec:LLN","name":"subsection","text":"","parent":"sec:Prob","rank":"0","html_name":"subsec:LLN","hasSummary":false,"hasTitle":false,"height":100,"width":980},"classes":"l0","position":{"x":16391.615276470766,"y":-1865.428969093652}},{"group":"nodes","data":{"id":"titlesubsec:LLN","name":"subsectionTitle","text":"

Laws of large numbers and central limit theorem

","parent":"subsec:LLN","rank":"0","html_name":"subsec:LLN","hasSummary":false,"hasTitle":false,"height":239,"width":980},"classes":"l0","position":{"x":16362.187931561548,"y":-1507.2763848074137}},{"data":{"id":"def:Corrsubsec:Conv","source":"subsec:Conv","target":"def:Corr","type":"strong","visibility":1}},{"data":{"id":"th:ExTIdef:TrInOp","source":"def:TrInOp","target":"th:ExTI","type":"strong","visibility":1}},{"data":{"id":"th:ConvTIdef:TrInOp","source":"def:TrInOp","target":"th:ConvTI","type":"strong","visibility":1}},{"data":{"id":"th:ConvTIsubsec:Conv","source":"subsec:Conv","target":"th:ConvTI","type":"strong","visibility":0}},{"data":{"id":"th:TrFunConsubsec:CT","source":"subsec:CT","target":"th:TrFunCon","type":"strong","visibility":0}},{"data":{"id":"th:TrFunConth:ConvTI","source":"th:ConvTI","target":"th:TrFunCon","type":"strong","visibility":1}},{"data":{"id":"th:TrFunCondef:TranFun","source":"def:TranFun","target":"th:TrFunCon","type":"strong","visibility":1}},{"data":{"id":"def:TranFundef:TrInOp","source":"def:TrInOp","target":"def:TranFun","type":"strong","visibility":1}},{"data":{"id":"def:TranFunth:ExTI","source":"th:ExTI","target":"def:TranFun","type":"strong","visibility":1}},{"data":{"id":"rem:Contdef:TrInOp","source":"def:TrInOp","target":"rem:Cont","type":"strong","visibility":1}},{"data":{"id":"rem:FinSupth:TrFunCon","source":"th:TrFunCon","target":"rem:FinSup","type":"strong","visibility":1}},{"data":{"id":"rem:FinSuprem:ConvArrCirc","source":"rem:ConvArrCirc","target":"rem:FinSup","type":"strong","visibility":0}},{"data":{"id":"def:Boreldef:SigmaA","source":"def:SigmaA","target":"def:Borel","type":"strong","visibility":1}},{"data":{"id":"def:Measdef:SigmaA","source":"def:SigmaA","target":"def:Meas","type":"strong","visibility":1}},{"data":{"id":"def:Measbldef:SigmaA","source":"def:SigmaA","target":"def:Measbl","type":"strong","visibility":1}},{"data":{"id":"def:LebIntdef:Meas","source":"def:Meas","target":"def:LebInt","type":"strong","visi
bility":1}},{"data":{"id":"def:LebIntdef:Measbl","source":"def:Measbl","target":"def:LebInt","type":"strong","visibility":1}},{"data":{"id":"def:LebIntdef:Borel","source":"def:Borel","target":"def:LebInt","type":"strong","visibility":1}},{"data":{"id":"rem:Dfltdef:SigmaA","source":"def:SigmaA","target":"rem:Dflt","type":"strong","visibility":1}},{"data":{"id":"rem:Dfltdef:Borel","source":"def:Borel","target":"rem:Dflt","type":"strong","visibility":1}},{"data":{"id":"def:FreeSysdef:VectSpace","source":"def:VectSpace","target":"def:FreeSys","type":"strong","visibility":1}},{"data":{"id":"def:GenSysdef:VectSpace","source":"def:VectSpace","target":"def:GenSys","type":"strong","visibility":1}},{"data":{"id":"def:Basisdef:GenSys","source":"def:GenSys","target":"def:Basis","type":"strong","visibility":1}},{"data":{"id":"def:Basisdef:FreeSys","source":"def:FreeSys","target":"def:Basis","type":"strong","visibility":1}},{"data":{"id":"def:SubSpacedef:VectSpace","source":"def:VectSpace","target":"def:SubSpace","type":"strong","visibility":1}},{"data":{"id":"def:Spandef:SubSpace","source":"def:SubSpace","target":"def:Span","type":"strong","visibility":1}},{"data":{"id":"def:Spansubsec:Mats","source":"subsec:Mats","target":"def:Span","type":"strong","visibility":1}},{"data":{"id":"def:Coordsdef:Basis","source":"def:Basis","target":"def:Coords","type":"strong","visibility":1}},{"data":{"id":"def:BasChgdef:Coords","source":"def:Coords","target":"def:BasChg","type":"strong","visibility":1}},{"data":{"id":"def:BasChgdef:MatMul","source":"def:MatMul","target":"def:BasChg","type":"strong","visibility":1}},{"data":{"id":"rem:ExVectspdef:VectSpace","source":"def:VectSpace","target":"rem:ExVectsp","type":"strong","visibility":1}},{"data":{"id":"def:MatMuldef:Mat","source":"def:Mat","target":"def:MatMul","type":"strong","visibility":1}},{"data":{"id":"def:Transdef:Mat","source":"def:Mat","target":"def:Trans","type":"strong","visibility":1}},{"data":{"id":"def:Transdef:MatMul","source":"de
f:MatMul","target":"def:Trans","type":"strong","visibility":1}},{"data":{"id":"def:IsoKndef:Basis","source":"def:Basis","target":"def:IsoKn","type":"strong","visibility":1}},{"data":{"id":"def:IsoKndef:LinMap","source":"def:LinMap","target":"def:IsoKn","type":"strong","visibility":1}},{"data":{"id":"th:CompLinMapsdef:MatMul","source":"def:MatMul","target":"th:CompLinMaps","type":"strong","visibility":1}},{"data":{"id":"th:CompLinMapsdef:MatMap","source":"def:MatMap","target":"th:CompLinMaps","type":"strong","visibility":1}},{"data":{"id":"def:LinMapdef:VectSpace","source":"def:VectSpace","target":"def:LinMap","type":"strong","visibility":1}},{"data":{"id":"def:MatMapdef:LinMap","source":"def:LinMap","target":"def:MatMap","type":"strong","visibility":1}},{"data":{"id":"def:MatMapdef:Coords","source":"def:Coords","target":"def:MatMap","type":"strong","visibility":1}},{"data":{"id":"def:MatMapsubsec:Mats","source":"subsec:Mats","target":"def:MatMap","type":"strong","visibility":1}},{"data":{"id":"def:Eigendef:LinMap","source":"def:LinMap","target":"def:Eigen","type":"strong","visibility":1}},{"data":{"id":"def:Eigensubsec:Mats","source":"subsec:Mats","target":"def:Eigen","type":"strong","visibility":1}},{"data":{"id":"def:DiagMapdef:Eigen","source":"def:Eigen","target":"def:DiagMap","type":"strong","visibility":1}},{"data":{"id":"rem:MatMultVectdef:MatMap","source":"def:MatMap","target":"rem:MatMultVect","type":"strong","visibility":1}},{"data":{"id":"def:Pythdef:OrthBas","source":"def:OrthBas","target":"def:Pyth","type":"strong","visibility":1}},{"data":{"id":"th:OrthProjdef:Span","source":"def:Span","target":"th:OrthProj","type":"strong","visibility":1}},{"data":{"id":"th:OrthProjdef:OrthProj","source":"def:OrthProj","target":"th:OrthProj","type":"strong","visibility":1}},{"data":{"id":"th:CauSchdef:Norm","source":"def:Norm","target":"th:CauSch","type":"strong","visibility":1}},{"data":{"id":"def:InnProddef:VectSpace","source":"def:VectSpace","target":"def:InnProd","
type":"strong","visibility":1}},{"data":{"id":"def:InnProddef:LinMap","source":"def:LinMap","target":"def:InnProd","type":"strong","visibility":1}},{"data":{"id":"def:Normdef:InnProd","source":"def:InnProd","target":"def:Norm","type":"strong","visibility":1}},{"data":{"id":"def:OrthBasdef:InnProd","source":"def:InnProd","target":"def:OrthBas","type":"strong","visibility":1}},{"data":{"id":"def:OrthBasdef:Norm","source":"def:Norm","target":"def:OrthBas","type":"strong","visibility":1}},{"data":{"id":"def:OrthFreedef:InnProd","source":"def:InnProd","target":"def:OrthFree","type":"strong","visibility":1}},{"data":{"id":"def:InnMatdef:InnProd","source":"def:InnProd","target":"def:InnMat","type":"strong","visibility":1}},{"data":{"id":"def:InnMatsubsec:Mats","source":"subsec:Mats","target":"def:InnMat","type":"strong","visibility":1}},{"data":{"id":"def:OrthProjdef:Norm","source":"def:Norm","target":"def:OrthProj","type":"strong","visibility":1}},{"data":{"id":"def:OrthProjdef:SubSpace","source":"def:SubSpace","target":"def:OrthProj","type":"strong","visibility":1}},{"data":{"id":"rem:InnOrthBasdef:OrthBas","source":"def:OrthBas","target":"rem:InnOrthBas","type":"strong","visibility":1}},{"data":{"id":"rem:InnOrthBasdef:InnMat","source":"def:InnMat","target":"rem:InnOrthBas","type":"strong","visibility":1}},{"data":{"id":"rem:OrthProjBasdef:OrthProj","source":"def:OrthProj","target":"rem:OrthProjBas","type":"strong","visibility":1}},{"data":{"id":"rem:OrthProjBasdef:OrthBas","source":"def:OrthBas","target":"rem:OrthProjBas","type":"strong","visibility":1}},{"data":{"id":"def:ConvDiscdef:ConvCont","source":"def:ConvCont","target":"def:ConvDisc","type":"strong","visibility":1}},{"data":{"id":"def:ConvArrdef:ConvDisc","source":"def:ConvDisc","target":"def:ConvArr","type":"strong","visibility":1}},{"data":{"id":"def:ConvCircdef:ConvCont","source":"def:ConvCont","target":"def:ConvCirc","type":"strong","visibility":1}},{"data":{"id":"def:ConvCircDiscdef:ConvCirc","source":"def
:ConvCirc","target":"def:ConvCircDisc","type":"strong","visibility":1}},{"data":{"id":"def:ConvCircDiscdef:ConvDisc","source":"def:ConvDisc","target":"def:ConvCircDisc","type":"strong","visibility":1}},{"data":{"id":"rem:ConvArrCircdef:ConvCircDisc","source":"def:ConvCircDisc","target":"rem:ConvArrCirc","type":"strong","visibility":1}},{"data":{"id":"rem:ConvArrCircdef:ConvArr","source":"def:ConvArr","target":"rem:ConvArrCirc","type":"strong","visibility":1}},{"data":{"id":"def:FTdef:FS","source":"def:FS","target":"def:FT","type":"strong","visibility":1}},{"data":{"id":"def:FTdef:DFT","source":"def:DFT","target":"def:FT","type":"strong","visibility":1}},{"data":{"id":"def:FSdef:FT","source":"def:FT","target":"def:FS","type":"strong","visibility":1}},{"data":{"id":"def:FSdef:DFT","source":"def:DFT","target":"def:FS","type":"strong","visibility":1}},{"data":{"id":"def:FSdef:OrthBas","source":"def:OrthBas","target":"def:FS","type":"strong","visibility":0}},{"data":{"id":"def:DFTdef:FS","source":"def:FS","target":"def:DFT","type":"strong","visibility":1}},{"data":{"id":"def:DFTdef:FT","source":"def:FT","target":"def:DFT","type":"strong","visibility":1}},{"data":{"id":"def:DFTdef:OrthBas","source":"def:OrthBas","target":"def:DFT","type":"strong","visibility":0}},{"data":{"id":"th:prodTFdef:ConvCont","source":"def:ConvCont","target":"th:prodTF","type":"strong","visibility":1}},{"data":{"id":"th:prodTFdef:FT","source":"def:FT","target":"th:prodTF","type":"strong","visibility":1}},{"data":{"id":"th:prodSdef:ConvCirc","source":"def:ConvCirc","target":"th:prodS","type":"strong","visibility":1}},{"data":{"id":"th:prodSdef:ConvDisc","source":"def:ConvDisc","target":"th:prodS","type":"strong","visibility":1}},{"data":{"id":"th:prodSdef:FS","source":"def:FS","target":"th:prodS","type":"strong","visibility":1}},{"data":{"id":"th:prodDFTdef:ConvCircDisc","source":"def:ConvCircDisc","target":"th:prodDFT","type":"strong","visibility":1}},{"data":{"id":"th:prodDFTdef:DFT","source":"de
f:DFT","target":"th:prodDFT","type":"strong","visibility":1}},{"data":{"id":"th:TranFunDerdef:Der","source":"def:Der","target":"th:TranFunDer","type":"strong","visibility":1}},{"data":{"id":"th:TranFunDerdef:TranFun","source":"def:TranFun","target":"th:TranFunDer","type":"strong","visibility":0}},{"data":{"id":"def:DirDerdef:Der","source":"def:Der","target":"def:DirDer","type":"strong","visibility":1}},{"data":{"id":"def:Differ1def:Der","source":"def:Der","target":"def:Differ1","type":"strong","visibility":1}},{"data":{"id":"def:Differ1def:DirDer","source":"def:DirDer","target":"def:Differ1","type":"strong","visibility":1}},{"data":{"id":"def:DifferMdef:Differ1","source":"def:Differ1","target":"def:DifferM","type":"strong","visibility":1}},{"data":{"id":"def:GradCondef:Differ1","source":"def:Differ1","target":"def:GradCon","type":"strong","visibility":1}},{"data":{"id":"def:nthDerdef:Der","source":"def:Der","target":"def:nthDer","type":"strong","visibility":1}},{"data":{"id":"def:Lapdef:nthDer","source":"def:nthDer","target":"def:Lap","type":"strong","visibility":1}},{"data":{"id":"def:Lapdef:DirDer","source":"def:DirDer","target":"def:Lap","type":"strong","visibility":1}},{"data":{"id":"th:TranFunDisDerdef:DisDer","source":"def:DisDer","target":"th:TranFunDisDer","type":"strong","visibility":1}},{"data":{"id":"th:TranFunDisDerth:TranFunDer","source":"th:TranFunDer","target":"th:TranFunDisDer","type":"strong","visibility":1}},{"data":{"id":"th:TranFunDisDerth:TrFunCon","source":"th:TrFunCon","target":"th:TranFunDisDer","type":"strong","visibility":0}},{"data":{"id":"def:DisDerdef:Der","source":"def:Der","target":"def:DisDer","type":"strong","visibility":1}},{"data":{"id":"def:GradDisdef:GradCon","source":"def:GradCon","target":"def:GradDis","type":"strong","visibility":1}},{"data":{"id":"def:GradDisdef:DisDer","source":"def:DisDer","target":"def:GradDis","type":"strong","visibility":1}},{"data":{"id":"def:GradMaskdef:DisDer","source":"def:DisDer","target":"def:GradM
ask","type":"strong","visibility":1}},{"data":{"id":"def:GradMaskrem:FinSup","source":"rem:FinSup","target":"def:GradMask","type":"strong","visibility":0}},{"data":{"id":"def:ConfSprem:IntProb","source":"rem:IntProb","target":"def:ConfSp","type":"strong","visibility":1}},{"data":{"id":"def:Evdef:ConfSp","source":"def:ConfSp","target":"def:Ev","type":"strong","visibility":1}},{"data":{"id":"def:Probdef:Meas","source":"def:Meas","target":"def:Prob","type":"strong","visibility":0}},{"data":{"id":"def:Probdef:Ev","source":"def:Ev","target":"def:Prob","type":"strong","visibility":1}},{"data":{"id":"def:ProbDendef:Prob","source":"def:Prob","target":"def:ProbDen","type":"strong","visibility":1}},{"data":{"id":"def:ProbDendef:LebInt","source":"def:LebInt","target":"def:ProbDen","type":"strong","visibility":0}},{"data":{"id":"rem:Evdef:Ev","source":"def:Ev","target":"rem:Ev","type":"strong","visibility":1}},{"data":{"id":"rem:ProbAssdef:Prob","source":"def:Prob","target":"rem:ProbAss","type":"strong","visibility":1}},{"data":{"id":"rem:AbsConfdef:ConfSp","source":"def:ConfSp","target":"rem:AbsConf","type":"strong","visibility":1}},{"data":{"id":"rem:AbsConfsubsec:RV","source":"subsec:RV","target":"rem:AbsConf","type":"strong","visibility":1}},{"data":{"id":"def:RVLawdef:RV","source":"def:RV","target":"def:RVLaw","type":"strong","visibility":1}},{"data":{"id":"def:JoinRVdef:RV","source":"def:RV","target":"def:JoinRV","type":"strong","visibility":1}},{"data":{"id":"def:JoinRVdef:RVLaw","source":"def:RVLaw","target":"def:JoinRV","type":"weak"}},{"data":{"id":"def:Margdef:CondProbRV","source":"def:CondProbRV","target":"def:Marg","type":"weak"}},{"data":{"id":"def:MargProbdef:Marg","source":"def:Marg","target":"def:MargProb","type":"strong","visibility":1}},{"data":{"id":"def:MargRVdef:Marg","source":"def:Marg","target":"def:MargRV","type":"strong","visibility":1}},{"data":{"id":"def:MargRVdef:MargProb","source":"def:MargProb","target":"def:MargRV","type":"weak"}},{"data":{"id":"
def:MargRVdef:RVLaw","source":"def:RVLaw","target":"def:MargRV","type":"weak"}},{"data":{"id":"th:Bayesdef:CondProb","source":"def:CondProb","target":"th:Bayes","type":"strong","visibility":1}},{"data":{"id":"def:CondProbRVdef:JoinRV","source":"def:JoinRV","target":"def:CondProbRV","type":"strong","visibility":1}},{"data":{"id":"def:CondProbRVdef:CondProb","source":"def:CondProb","target":"def:CondProbRV","type":"strong","visibility":1}},{"data":{"id":"def:CondProbRVdef:MargRV","source":"def:MargRV","target":"def:CondProbRV","type":"strong","visibility":1}},{"data":{"id":"th:JLIVdef:IndRV","source":"def:IndRV","target":"th:JLIV","type":"strong","visibility":1}},{"data":{"id":"th:JLIVdef:MargRV","source":"def:MargRV","target":"th:JLIV","type":"strong","visibility":1}},{"data":{"id":"th:JLIVdef:CondProbRV","source":"def:CondProbRV","target":"th:JLIV","type":"strong","visibility":1}},{"data":{"id":"def:IndRVsubsec:RV","source":"subsec:RV","target":"def:IndRV","type":"strong","visibility":1}},{"data":{"id":"def:IndRVdef:IndEv","source":"def:IndEv","target":"def:IndRV","type":"strong","visibility":1}},{"data":{"id":"def:IndExpdef:Expe","source":"def:Expe","target":"def:IndExp","type":"strong","visibility":1}},{"data":{"id":"def:IndExpdef:IndRV","source":"def:IndRV","target":"def:IndExp","type":"strong","visibility":1}},{"data":{"id":"def:StdPropdef:Std","source":"def:Std","target":"def:StdProp","type":"strong","visibility":1}},{"data":{"id":"def:StdPropdef:IndRV","source":"def:IndRV","target":"def:StdProp","type":"weak"}},{"data":{"id":"def:Expedef:RV","source":"def:RV","target":"def:Expe","type":"strong","visibility":1}},{"data":{"id":"def:Expedef:LebInt","source":"def:LebInt","target":"def:Expe","type":"strong","visibility":0}},{"data":{"id":"def:ScalRVdef:Expe","source":"def:Expe","target":"def:ScalRV","type":"strong","visibility":1}},{"data":{"id":"def:ScalRVdef:IndExp","source":"def:IndExp","target":"def:ScalRV","type":"weak"}},{"data":{"id":"def:Covdef:ScalRV","sou
rce":"def:ScalRV","target":"def:Cov","type":"strong","visibility":1}},{"data":{"id":"def:Stddef:Cov","source":"def:Cov","target":"def:Std","type":"strong","visibility":1}},{"data":{"id":"def:CovMdef:Cov","source":"def:Cov","target":"def:CovM","type":"strong","visibility":1}},{"data":{"id":"def:WLLNdef:Std","source":"def:Std","target":"def:WLLN","type":"strong","visibility":1}},{"data":{"id":"sec:TIOpesec:LA","source":"sec:LA","target":"sec:TIOpe","type":"strong","visibility":1}},{"data":{"id":"sec:DiffOpsec:TIOpe","source":"sec:TIOpe","target":"sec:DiffOp","type":"strong","visibility":1}},{"data":{"id":"sec:Probsec:Meas","source":"sec:Meas","target":"sec:Prob","type":"strong","visibility":1}},{"data":{"id":"sec:Probsec:LA","source":"sec:LA","target":"sec:Prob","type":"strong","visibility":1}},{"data":{"id":"subsec:Fsec:LA","source":"sec:LA","target":"subsec:F","type":"strong","visibility":0}},{"data":{"id":"subsec:RVsubsec:ConfProb","source":"subsec:ConfProb","target":"subsec:RV","type":"strong","visibility":1}},{"data":{"id":"subsec:RVdef:Measbl","source":"def:Measbl","target":"subsec:RV","type":"strong","visibility":0}},{"data":{"id":"subsec:Margsubsec:ConfProb","source":"subsec:ConfProb","target":"subsec:Marg","type":"strong","visibility":1}},{"data":{"id":"subsec:Margdef:JoinRV","source":"def:JoinRV","target":"subsec:Marg","type":"strong","visibility":1}},{"data":{"id":"subsec:Margdef:RVLaw","source":"def:RVLaw","target":"subsec:Marg","type":"weak"}},{"data":{"id":"subsec:Condsubsec:ConfProb","source":"subsec:ConfProb","target":"subsec:Cond","type":"strong","visibility":1}},{"data":{"id":"subsec:Inddef:Prob","source":"def:Prob","target":"subsec:Ind","type":"strong","visibility":1}},{"data":{"id":"subsec:Inddef:CondProb","source":"def:CondProb","target":"subsec:Ind","type":"weak"}},{"data":{"id":"subsec:LLNsubsec:MomRV","source":"subsec:MomRV","target":"subsec:LLN","type":"strong","visibility":1}}];