var graph = [{"group": "nodes", "data": {"id": "rem:AbsConf", "name": "remark", "text": "
", "parent": "sec:Prob", "rank": "0", "html_name": "rem:AbsConf", "summary": "", "hasSummary": true, "hasTitle": true, "title": "Abstract configuration space"}, "classes": "l0", "position": {"x": -5264.652624402453, "y": 5665.884246925858}}, {"group": "nodes", "data": {"id": "def:ConfSp", "name": "definition", "text": "\n
Definition 1 (Configuration space / universe).
\n
\n
\n
\n
\n
\nThe configuration space is usually denoted . As its name indicates, the set contains all the possible configurations of a probabilistic model.
\nIts elements are usually denoted , and we will call them \u2019elementary configurations\u2019.
\n
\n
Warning: the name of the space and its elements vary across contexts and languages. is often called the \u2019sample space\u2019 or \u2019universe\u2019, and the elements \u2019samples\u2019, \u2019outcomes\u2019 or \u2019realizations\u2019.
\n
\n
\n
\nProbabilistic models are very useful for analysing dice rolls and card games. What is a relevant configuration space?
\n
\n
\n
\nAssume that is a card deck and that we are interested in modeling an experiment where a player draws cards from the deck. What should we choose as the configuration space when
\n
\nthe player draws one card?
\nthe player draws one card, puts it back and draws another card?
\nthe player draws two cards without putting them back?
\n
\n
Assume a player draws cards without putting them back in the deck. What is the size of the smallest configuration space describing the possible results?
\n
Note that when using Cartesian products, we have . The order of the card draws is taken into account. In card games, the order in which cards are drawn is often not important. In that case, when drawing cards, we have .
\n
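As a sanity check, the ordered and unordered counts above can be computed with Python's `math` module. This is an illustrative sketch: the deck size and number of draws below (52 cards, 2 draws) are assumptions, since the original values are not specified here.

```python
from math import comb, perm

# Illustrative sizes (assumed, not from the course text): a 52-card deck, 2 draws.
n, k = 52, 2

ordered = perm(n, k)    # ordered draws without replacement: n * (n-1) * ...
unordered = comb(n, k)  # draws up to permutation: perm(n, k) / k!

print(ordered, unordered)  # 2652 1326
```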
If we do not distinguish results up to permutations, what is the size of the configuration space when
\n
\n
\n
\nIn this course we are particularly interested in signal and image processing problems.
\n
What are the relevant configuration spaces when studying
\n
\nreal signals observed on points?
\ncontinuous real signals of duration second?
\n color images?
\n
\n
", "parent": "subsec:ConfProb", "rank": "0", "html_name": "def:ConfSp", "summary": "\n
Definition 1 (Configuration space / universe). The configuration space is usually denoted . As its name indicates, the set contains all the possible configurations of a probabilistic model. Its elements are usually denoted , and we will call them \u2019elementary configurations\u2019.
\n
", "hasSummary": true, "hasTitle": true, "title": "Configuration space / universe"}, "classes": "l0", "position": {"x": -6591.287378112764, "y": 5808.483163577717}}, {"group": "nodes", "data": {"id": "def:Ev", "name": "definition", "text": "\n
Definition 2 (Events).
\n
\n
\n
\nAn event is a subset of the configuration space : .
\n
\n
Remark: a more precise definition requires the notion of -algebra. The next \u2019Remark\u2019 node gives the definition of -algebras, but the notion will not be used in the rest of the course.
\n
\n
\n
\n
\n
\nfor the dice roll configuration space , a possible event is \u2019the number is even\u2019.
\nwhen is a card deck, \u2019the card is a diamond\u2019, \u2019the card is a \u2019, are typical events.
\nwhen is a set of signals, the event describes all the signals starting with .
\n
\n
", "parent": "subsec:ConfProb", "rank": "0", "html_name": "def:Ev", "summary": "\n
Definition 2 (Events). An event is a subset of the configuration space : .
\n
", "hasSummary": true, "hasTitle": true, "title": "Events"}, "classes": "l0", "position": {"x": -6569.997740292376, "y": 5276.242218068017}}, {"group": "nodes", "data": {"id": "def:Prob", "name": "definition", "text": "\n
Definition 3 (Probability).
\n
\n
\n
\nA probability is a function which takes an event as input and returns a nonnegative real number. It must verify the following axioms:
\n
\naxiom 1:
\naxiom 2:
\naxiom 3: if then (also true for countable sums)
\n
\n
Direct consequences:
\n
\nconsequence 1:
\nconsequence 2 :
\nconsequence 3:
\n
\n
\n
\nThe probability gives a notion of \u2019size\u2019 to events. Given that the size of is , the size of events can be interpreted as the proportion they occupy in . Hence, the notion of probability can also be read as a notion of proportion.
\n
\n
Assume that a die is a perfect cube. All the faces are indistinguishable from each other, apart from the number they carry. In that case, it is relevant to assign probabilities according to the indistinguishability principle:
\n
\n
The distribution is called uniform. It is now possible to compute the probability of every event, using axiom 3 (if then ):
\n
\n
Assume now that our die is a bit damaged. It is no longer perfectly symmetrical, hence we cannot use the indistinguishability principle anymore. In that case a relevant approach is to perform many die rolls and to assign probabilities according to the frequency of appearance of each number.
\n
\n
Exactly as before, the probability of other events is computed with the rule \u2019if then \u2019.
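A minimal simulation sketch of this frequency-based assignment. The die below is an assumption: a 6-sided die whose weights are made up for illustration, with face 6 slightly favored.

```python
import random
from collections import Counter

# Assumed damaged die: the weights below are made up for illustration.
faces = [1, 2, 3, 4, 5, 6]
weights = [1.0, 1.0, 1.0, 1.0, 1.0, 1.5]  # face 6 slightly favored

rolls = random.choices(faces, weights=weights, k=100_000)
freq = Counter(rolls)

# Assign to each face the frequency of appearance of its number.
p_hat = {f: freq[f] / len(rolls) for f in faces}

# Probability of the event 'the number is even', by additivity (axiom 3).
p_even = p_hat[2] + p_hat[4] + p_hat[6]
```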
\n
", "parent": "subsec:ConfProb", "rank": "0", "html_name": "def:Prob", "summary": "\n
Definition 3 (Probability).
\n
\naxiom 1:
\naxiom 2:
\naxiom 3: if then (also true for countable sums)
\n
\n
", "hasSummary": true, "hasTitle": true, "title": "Probability"}, "classes": "l0", "position": {"x": -6588.173409433914, "y": 4695.7977448282445}}, {"group": "nodes", "data": {"id": "def:ProbDen", "name": "definition", "text": "\n
Definition 4 (Probability density).
\n
\nLet be a probability on . We say that the function is the probability density of when
\n
\n
Note that does not always have a density. Take for instance such that : there are no functions verifying the above condition.
\n
", "parent": "subsec:ConfProb", "rank": "0", "html_name": "def:ProbDen", "summary": "\n
Definition 4 (Probability density). is the density of , a probability on , when
\n
", "hasSummary": true, "hasTitle": true, "title": "Probability density"}, "classes": "l0", "position": {"x": -7411.798315229266, "y": 4025.4912117439035}}, {"group": "nodes", "data": {"id": "rem:IntProb", "name": "remark", "text": "", "parent": "subsec:ConfProb", "rank": "0", "html_name": "rem:IntProb", "summary": "", "hasSummary": true, "hasTitle": true, "title": "Introductory remark"}, "classes": "l0", "position": {"x": -7804.796733874878, "y": 5776.548706847134}}, {"group": "nodes", "data": {"id": "rem:Ev", "name": "remark", "text": "", "parent": "subsec:ConfProb", "rank": "0", "html_name": "rem:Ev", "summary": "", "hasSummary": true, "hasTitle": true, "title": "Events and -algebra"}, "classes": "l0", "position": {"x": -7794.151914964684, "y": 5212.373304606854}}, {"group": "nodes", "data": {"id": "rem:ProbAss", "name": "remark", "text": "", "parent": "subsec:ConfProb", "rank": "0", "html_name": "rem:ProbAss", "summary": "", "hasSummary": true, "hasTitle": true, "title": "Assignment of probabilities"}, "classes": "l0", "position": {"x": -7826.086371695265, "y": 4701.421996917542}}, {"group": "nodes", "data": {"id": "ex:RemSet", "name": "exercise", "text": "\n
Exercise 1 (Reminders of set theory).
\n
\n
\n
\nEnumerate all the elements of .
\nEnumerate all the subsets of .
\nRecall the definition of the Cartesian product of sets.
\nEnumerate all the elements of .
\nEnumerate all the subsets of .
\nEnumerate all the elements of
\nEnumerate all the subsets of
\n
\n
", "parent": "subsec:ConfProb", "rank": "0", "html_name": "ex:RemSet", "summary": "\n
Exercise 1 (Reminders of set theory).
\n
", "hasSummary": false, "hasTitle": true, "title": "Reminders of set theory"}, "classes": "l0", "position": {"x": -8929.518068117426, "y": 5857.966875248107}}, {"group": "nodes", "data": {"id": "ex:RemCom", "name": "exercise", "text": "\n
Exercise 2 (Reminders on combinatorics).
\n
\n
\n
\n
\nLet be a set of elements. We call a permutation when it is a bijection. How many different permutations can we construct?
\n
\n
\n
\n
Q2: drawing objects from a box
\n
\n
\nAssume that the elements of are physical objects contained in a box. We successively draw elements from the box, without looking inside, and without putting an element back before taking the next one. If we remember the order in which we take the elements, how many different configurations can we obtain? And if we don\u2019t remember the order in which we took the elements?
\n
", "parent": "subsec:ConfProb", "rank": "0", "html_name": "ex:RemCom", "summary": "\n
Exercise 2 (Reminders on combinatorics).
\n
", "hasSummary": false, "hasTitle": true, "title": "Reminders on combinatorics"}, "classes": "l0", "position": {"x": -8928.41412074155, "y": 5434.540693704344}}, {"group": "nodes", "data": {"id": "ex:Gauss", "name": "exercise", "text": "\n
Exercise 3 (Gaussian densities ()).
\n
\n
\n
\nWhat condition must a function verify to be interpreted as a probability density?
\nIt can be proved that the standardized Gaussian function has integral :
\nWhat can we say about
\n
\n
", "parent": "subsec:ConfProb", "rank": "0", "html_name": "ex:Gauss", "summary": "\n
Exercise 3 (Gaussian densities ()).
\n
", "hasSummary": false, "hasTitle": true, "title": "Gaussian densities ()"}, "classes": "l0", "position": {"x": -8921.777124083159, "y": 4104.367513229578}}, {"group": "nodes", "data": {"id": "def:RV", "name": "definition", "text": "\n
Definition 5 (Random variables).
\n
\n
\n
\nLet us first introduce the idea behind the definition. Build a configuration space representing the possible ages of persons. A natural choice is where the first coordinate represents the possible age of the first person, and the second the possible age of the second person ( is also a natural choice).
\n
\n
The coordinate function ,
\n
\n
\u2019extracts\u2019 the age of the first person out of an elementary configuration . Similarly, the coordinate function ,
\n
\n
\u2019extracts\u2019 age of the second person. The functions and are called random variables.
\nThe coordinate functions and extract interesting quantities from the elementary configuration. However, there are other interesting quantities which are not coordinate functions. For instance, for an elementary configuration , we may be interested in the average age of the persons. We can define the function ,
\n
\n
The function is also called a random variable.
\n
\n
\n
\nA random variable taking values in a space is a function . In particular, integer random variables are functions and real random variables are functions .
\n
\n
Remark: this is a simplified version of the true definition which requires that random variables are \u2019measurable functions\u2019. When is in the -algebra of , should be in the -algebra of .
\n
\n
\n
\nConsider the following dice roll game. If the number is even, the player gains euro, and if the number is odd he loses euro. The function , is a random variable.
\n
\n
\n
Example 2: coordinate random variables
\n
\n
\nAssume that we roll a die times. The configuration space that describes all the possible results is . Let be the -th coordinate function on . For instance,
\n
These random variables \u2019extract\u2019 the result of the -th roll out of the elementary configuration. We can also construct the random variables : the gain at the -th roll.
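These coordinate and gain random variables can be sketched as follows. The sketch assumes n = 3 rolls and a gain rule of +1 euro for an even number and -1 euro for an odd one; both are assumptions, as the exact values are not specified here.

```python
from itertools import product

# Assumed setup: n = 3 rolls of a 6-sided die.
n = 3
Omega = list(product(range(1, 7), repeat=n))  # all elementary configurations

def X(i, omega):
    """Coordinate random variable: extracts the result of the i-th roll."""
    return omega[i]

def G(i, omega):
    """Gain at the i-th roll (assumed rule: +1 euro if even, -1 if odd)."""
    return 1 if X(i, omega) % 2 == 0 else -1

omega = (4, 1, 6)                # one elementary configuration
print(X(0, omega), G(1, omega))  # 4 -1
```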
\n
", "parent": "subsec:RV", "rank": "0", "html_name": "def:RV", "summary": "\n
Definition 5 (Random variables). A random variable is a function defined on the configuration space . An integer random variable is valued in or and a real random variable is valued in .
\n
", "hasSummary": true, "hasTitle": true, "title": "Random variables"}, "classes": "l0", "position": {"x": -3993.986179509592, "y": 3935.377851810639}}, {"group": "nodes", "data": {"id": "def:RVLaw", "name": "definition", "text": "\n
Definition 6 (Law of a random variable).
\n
\n
\n
\nLet be a configuration space with a probability , and let be a random variable taking values in a set . Recall that is a function that takes as input an event of and returns its probability. The random variable transports the probability to the set . The probability of a set in is determined by the probability of the inverse image of in .
\n
\n
\n
\nThe law of the random variable is a probability on defined by It is called the law of the random variable .
\nRemarks:
\n
\ninstead of , we often write (the probability that the value of the random variable falls in ).
\n can be viewed as another configuration space with a probability
\n
\n
In the previous game, the random variable , induces a probability distribution on . We have
\n
\n
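The transport of the probability in the previous game can be sketched as follows. The gain rule (+1 euro for an even roll, -1 euro for an odd one) is an assumption, as the exact amounts are not specified here.

```python
from collections import defaultdict
from fractions import Fraction

# Law of the gain G under the uniform probability on {1,...,6}.
Omega = range(1, 7)

def G(w):
    # Assumed rule: +1 euro for an even number, -1 euro for an odd one.
    return 1 if w % 2 == 0 else -1

# Transport the probability: P_G({x}) = P(G^{-1}({x})).
law = defaultdict(Fraction)
for w in Omega:
    law[G(w)] += Fraction(1, 6)

# Each gain value, -1 and +1, ends up with probability 1/2.
```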
\n
Definition: cumulative distribution
\n
\n
\nLet be a real random variable (). The probability can be represented by its cumulative distribution , defined by
\n
\n
All the information on is contained in . Since the cumulative distribution is a function , it is sometimes conceptually simpler to manipulate than the probability itself, which is a function from the subsets of to .
\n
\nHow do we compute from ?
\nWhat do we know about ?
\nCan we have and ?
\nIf for , , what is ?
\n
\n
\n
\n
Definition: density of a real random variable
\n
\n
\nLet be a real random variable. Sometimes, the law of admits a density. When it exists, the density of is a function such that
\n
\n
Remark: we defined the cumulative distribution and density of a \u2019random variable\u2019 but note that it depends on only through the probability . Hence the density can be defined for a probability on even if it is not defined as the law of a random variable.
\n
\n
\nFor a real random variable ,
\n
\n
", "parent": "subsec:RV", "rank": "0", "html_name": "def:RVLaw", "summary": "\n
Definition 6 (Law of a random variable). A random variable transports the probability on to a probability on . For an event in , define is called the law of the random variable .
\n
", "hasSummary": true, "hasTitle": true, "title": "Law of a random variables"}, "classes": "l0", "position": {"x": -2761.1231158163064, "y": 3870.8379779883944}}, {"group": "nodes", "data": {"id": "def:JoinRV", "name": "definition", "text": "\n
Definition 7 (Joint random variables).
\n
\n
\n
\nConsider random variables and . The random variable ,
\n
\n
is called the joint random variable. The law of is called the joint law (or joint probability distribution).
\n
\n
\n
\nConsider the case of dice rolls. Let be a configuration space, with the uniform probability distribution . Let be the -th coordinate function. Consider the joint random variable
\n
\n
What is the space of values of ? What is the law of ?
\n
Consider now
\n
What is the law of ?
\n
", "parent": "subsec:RV", "rank": "0", "html_name": "def:JoinRV", "summary": "\n
Definition 7 (Joint random variables). Consider random variables and . The random variable , is called the joint random variable. The law of is called the joint law (or joint probability distribution).
\n
", "hasSummary": true, "hasTitle": true, "title": "Joint random variables"}, "classes": "l0", "position": {"x": -5158.705568517556, "y": 3888.000422320245}}, {"group": "nodes", "data": {"id": "ex:SpGM", "name": "exercise", "text": "\n
Exercise 4 (Simple game model).
\n
\nConsider the following game. A player flips a coin twice. If both flips are heads, the player wins euro. Otherwise he loses euro. Build a probabilistic model which describes the game.
\n
", "parent": "subsec:RV", "rank": "0", "html_name": "ex:SpGM", "summary": "\n
Exercise 4 (Simple game model).
\n
", "hasSummary": false, "hasTitle": true, "title": "Simple game model"}, "classes": "l0", "position": {"x": -2755.6188300699455, "y": 4768.891256628461}}, {"group": "nodes", "data": {"id": "ex:ImRV", "name": "exercise", "text": "\n
Exercise 5 (Image as random variable).
\n
\nLet be an image valued in . can be viewed as a random variable from the configuration space . Assume that is a uniform probability distribution over . The law of corresponds to something that we have already met in the image processing course. What is it? What is the formal difference?
\n
", "parent": "subsec:RV", "rank": "0", "html_name": "ex:ImRV", "summary": "\n
Exercise 5 (Image as random variable).
\n
", "hasSummary": false, "hasTitle": true, "title": "Image as random variable"}, "classes": "l0", "position": {"x": -2763.4777060587226, "y": 4388.939429960873}}, {"group": "nodes", "data": {"id": "ex:SumEq01", "name": "exercise", "text": "\n
Exercise 6 (Sums of equiprobable 0s and 1s).
\n
\n
\n
\n
\nConsider the configuration space ( can be interpreted as the vertices of a hypercube), with the uniform probability distribution. What is the cardinality of ?
\nConsider now the random variable , which counts the number of in each -tuple. What is the law of the random variable ?
\n
\n
\n
Application: probability of a binary image
\n
\n
\nPropose a very simple probabilistic model over the set of binary images of size . What can we say about the probability that
\n
\nan image contains only s (or s)?
\nit contains the number exactly times?
\nthe number of s it contains is exactly ?
\nthe number of s is smaller than or equal to ?
\n
\n
", "parent": "subsec:RV", "rank": "0", "html_name": "ex:SumEq01", "summary": "\n
Exercise 6 (Sums of equiprobable 0s and 1s).
\n
", "hasSummary": false, "hasTitle": true, "title": "Sums of equiprobable 0s and 1s"}, "classes": "l0", "position": {"x": -3966.066245820806, "y": 4379.791652955868}}, {"group": "nodes", "data": {"id": "ex:SimJoin", "name": "exercise", "text": "\n
Exercise 7 (Simple joint law).
\n
\nConsider with a uniform probability, and and the coordinate random variables. Let and be two random variables defined by
\n
\n
Give the laws of , , and the joint law of .
\n
", "parent": "subsec:RV", "rank": "0", "html_name": "ex:SimJoin", "summary": "\n
Exercise 7 (Simple joint law).
\n
", "hasSummary": false, "hasTitle": true, "title": "Simple joint law"}, "classes": "l0", "position": {"x": -5140.737592553508, "y": 4382.769647635065}}, {"group": "nodes", "data": {"id": "def:Marg", "name": "definition", "text": "\n
Definition 8 (Concept of marginal).
\n
\n
\n
\n
\n
Coordinate projection
\n
\n
\nBy definition, elements of are couples with and . Denote by and the maps which send to its first and second coordinates,
\n
\n
and are called the projections onto and .
\n
\n
\nSome mathematical objects (probabilities and random variables) defined on the Cartesian product naturally give rise to similar objects defined on or by projecting them on or , in a sense that will be made clear in subsequent nodes. The objects defined in this manner on and are called marginals of the object on .
\n
\n
By projecting on a coordinate, we \u2019forget\u2019 the other one. As we will see, \u2019forgetting\u2019 the other coordinate is opposed to conditioning, where the other coordinate is fixed to a particular value.
\n
", "parent": "subsec:Marg", "rank": "0", "html_name": "def:Marg", "summary": "\n
Definition 8 (Concept of marginal). Some mathematical objects defined on the Cartesian product naturally give rise to similar objects defined on and by \u2019forgetting\u2019 the other coordinate. The objects defined in this manner on and are called marginals.
\n
", "hasSummary": true, "hasTitle": true, "title": "Concept of marginal"}, "classes": "l0", "position": {"x": -7666.146928631908, "y": 3152.463522697209}}, {"group": "nodes", "data": {"id": "def:MargProb", "name": "definition", "text": "\n
Definition 9 (Marginal probability).
\n
\n
\n
\n
\nLet be a probability on the Cartesian product . The marginal probability on is defined by
\n
\n
and the marginal on by
\n
\n
\n
\n
\n
\nWhen is discrete,
\nwhen densities exist,
\n
\n
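For the discrete case, the marginals can be sketched as sums over the other coordinate. The joint table below is made up for illustration.

```python
# Marginals of a discrete joint probability given as a table.
# The joint values are illustrative, not from the course.
joint = {
    ("a", "x"): 0.1, ("a", "y"): 0.3,
    ("b", "x"): 0.2, ("b", "y"): 0.4,
}

# Marginal on one coordinate: sum the joint over the other coordinate.
p1, p2 = {}, {}
for (u, v), p in joint.items():
    p1[u] = p1.get(u, 0.0) + p
    p2[v] = p2.get(v, 0.0) + p
```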
", "parent": "subsec:Marg", "rank": "0", "html_name": "def:MargProb", "summary": "\n
Definition 9 (Marginal probability). Let be a probability on the Cartesian product . The marginal probability on is defined by
\n
", "hasSummary": true, "hasTitle": true, "title": "Marginal probability"}, "classes": "l0", "position": {"x": -7574.774146852916, "y": 2525.6983691605765}}, {"group": "nodes", "data": {"id": "def:MargRV", "name": "definition", "text": "\n
Definition 10 (Marginal random variables).
\n
\n
\n
\n
\nConsider a random variable . The marginal random variables and on and on are defined by
\n
\n
\n
\n
The laws of marginal variables
\n
\n
\nThe laws of and are called the marginal laws. They are of course the marginals of the joint probability (prove it). The notations and introduced in the general case are replaced by and . When the densities exist, they are denoted and .
\n
\n
\n
\n
\n
\nThe marginals of the joint variable are and .
\n
\nConsider the configuration space of dice rolls , with a probability . The law of the -th coordinate random variable is a marginal of .
\n
\nIn the same vein, consider a configuration space describing real signals observed on points: , with a probability . The value of the signal at the -th measurement is also a coordinate random variable and its law is a marginal of .
\n
\n
", "parent": "subsec:Marg", "rank": "0", "html_name": "def:MargRV", "summary": "\n
Definition 10 (Marginal random variables). Let be a random variable, and let and be the random variables corresponding to each coordinate: is called the marginal random variable on .
\n
", "hasSummary": true, "hasTitle": true, "title": "Marginal random variables"}, "classes": "l0", "position": {"x": -8825.065439679365, "y": 2569.562259332014}}, {"group": "nodes", "data": {"id": "ex:SimPExa", "name": "exercise", "text": "\n
Exercise 8 (A simple example).
\n
\nConsider the configuration space ,
\n
\n
\n(a,a) | (a,b)
\n(b,a) | (b,b)
\n
\n
\n
\n
\n
With probabilities
\n
\n
\n
\n\n
\n
\n
Denote by and the coordinate random variables on .
\n
\nExplain why the notation is not entirely rigorous.
\nCompute the marginal laws of and .
\nIf you know the laws of and , can you recover the joint law of ?
\n
\n
", "parent": "subsec:Marg", "rank": "0", "html_name": "ex:SimPExa", "summary": "\n
Exercise 8 (A simple example).
\n
", "hasSummary": false, "hasTitle": true, "title": "A simple example"}, "classes": "l0", "position": {"x": -10021.854885608795, "y": 3276.801731567818}}, {"group": "nodes", "data": {"id": "ex:MargGauss", "name": "exercise", "text": "\n
Exercise 9 (Marginals of Gaussians).
\n
\nLet and let be a probability with density
\n
\n
\nDraw approximately some level lines of .
\nShow that is a probability density using the results of Exercise\u00a03.
\nRecall the definition of the marginal distribution on the first coordinate.
\nCompute the marginal densities of each coordinate.
\n
\n
", "parent": "subsec:Marg", "rank": "0", "html_name": "ex:MargGauss", "summary": "\n
Exercise 9 (Marginals of Gaussians).
\n
", "hasSummary": false, "hasTitle": true, "title": "Marginals of Gaussians"}, "classes": "l0", "position": {"x": -10013.868622442598, "y": 2864.2827750796514}}, {"group": "nodes", "data": {"id": "ex:LandSp", "name": "exercise", "text": "\n
Exercise 10 (Landing a spaceship).
\n
\n
\n
\n
\nThe following function
\n
is called the \u2019Gaussian error function\u2019. Express this integral,
\n
\n
with the function. Ask Google for its value.
\n
\n
\nAssume that a spaceship wants to land on Earth at a certain location. Assume that in the region of the landing zone the Earth's surface can be assimilated to a plane. Given coordinate axes, points on the surface of the Earth can be identified with points of . The spaceship aims at the point of coordinates , but the wind introduces a perturbation on the landing point. In about % of landings the perturbation on each coordinate is smaller than meters.
\nPropose a reasonable probabilistic model of the landing problem in which is an abstract space, not entirely specified, and the landing position is a random variable.
\n
", "parent": "subsec:Marg", "rank": "0", "html_name": "ex:LandSp", "summary": "\n
Exercise 10 (Landing a spaceship).
\n
", "hasSummary": false, "hasTitle": true, "title": "Landing a spaceship"}, "classes": "l0", "position": {"x": -10013.868622442598, "y": 2480.1339980091607}}, {"group": "nodes", "data": {"id": "th:Bayes", "name": "theorem", "text": "\n
Theorem 1 (Bayes Theorem).
\n
\nThe definition of conditional densities can be rewritten as what is called "Bayes' theorem":
\n
\n
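A hedged numeric sketch of Bayes' theorem, P(A|B) = P(B|A) P(A) / P(B). All numbers below are illustrative, not from the course.

```python
# Numeric sketch of Bayes' theorem (illustrative values).
p_A = 0.01             # P(A)
p_B_given_A = 0.9      # P(B|A)
p_B_given_notA = 0.05  # P(B | complement of A)

# Total probability: P(B) = P(B|A) P(A) + P(B|~A) P(~A).
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B).
p_A_given_B = p_B_given_A * p_A / p_B
```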
", "parent": "subsec:Cond", "rank": "0", "html_name": "th:Bayes", "summary": "\n
Theorem 1 (Bayes Theorem).
\n
", "hasSummary": true, "hasTitle": true, "title": "Bayes Theorem"}, "classes": "l0", "position": {"x": -3698.7342091408714, "y": 2127.4049655263498}}, {"group": "nodes", "data": {"id": "def:CondProb", "name": "definition", "text": "\n
Definition 11 (Conditional probability).
\n
\n
\n
\n
\nConsider a probability on the set of configurations . Given an event with , the idea of conditioning with respect to is to focus on the configurations and forget about the configurations . We would like to define a new probability which respects the proportions of the events included in and gives zero probability to events which do not intersect .
\n
\n
\n
\nLet be an event of , with . We define the probability conditional to the event by is called the probability of given , or probability of knowing , and is usually denoted .
\nExercise: check that is a probability on
\n
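A small sketch of the definition P(A|B) = P(A ∩ B) / P(B), assuming two die rolls with the uniform distribution; the two events below are illustrative choices, not from the course.

```python
from fractions import Fraction
from itertools import product

# Two die rolls with the uniform distribution (assumed setup).
Omega = list(product(range(1, 7), repeat=2))

def P(event):
    """Uniform probability: proportion of the event in Omega."""
    return Fraction(len(event), len(Omega))

B = [w for w in Omega if w[0] == 6]          # 'the first roll is a 6'
A = [w for w in Omega if w[0] + w[1] >= 10]  # 'the sum is at least 10'
A_and_B = [w for w in A if w in B]

# Conditional probability: P(A | B) = P(A ∩ B) / P(B).
P_A_given_B = P(A_and_B) / P(B)
print(P_A_given_B)  # 1/2
```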
\n
\n
\nAssume that is the set of cards of a standard 52 card deck, and is the uniform distribution.
\n
\nWhat is the probability of the event knowing ?
\nWhat is the probability of knowing ?
\nWhat is the probability of knowing ?
\n
\n
\n
\nAssume that a die is rolled twice and that we have a uniform distribution on .
\n
\n
", "parent": "subsec:Cond", "rank": "0", "html_name": "def:CondProb", "summary": "\n
Definition 11 (Conditional probability). We define the probability conditional to the event by is called the probability of given , and is usually denoted .
\n
", "hasSummary": true, "hasTitle": true, "title": "Conditional probability"}, "classes": "l0", "position": {"x": -4871.671100695654, "y": 2254.936653287442}}, {"group": "nodes", "data": {"id": "def:CondCart", "name": "definition", "text": "\n
Definition 12 (Conditional probabilities on a Cartesian product).
\n
\n
\n
\n
\nLet be a probability on a Cartesian product . Assume that the marginal is non null. We can condition by the event , \u2019the second coordinate is \u2019. This gives a new probability on , which can be interpreted as a probability on as follows.
\n
\n
\nAssume that . For ,
\n
\n
When all the densities exist, the conditional density at is as long as the marginal density .
\n
\n
Hence, in general, when conditioning a joint law by a coordinate value, we have
\n
\n
\n
\n
Conditional probabilities of a joint law
\n
\n
\nAssume now and . The joint law , is a probability on . In this context we use particular notations to condition on some . When ,
\n
\n
Using shortened notation, we can write
\n
\n
And when it exists, the conditional density is noted
\n
", "parent": "subsec:Cond", "rank": "0", "html_name": "def:CondCart", "summary": "\n
Definition 12 (Conditional probabilities on a Cartesian product). Let be a probability on . The conditional probability knowing is given by where is an event of .
\n
", "hasSummary": true, "hasTitle": true, "title": "Conditional probabilities on a Cartesian product"}, "classes": "l0", "position": {"x": -4842.334127720824, "y": 1526.3454399792772}}, {"group": "nodes", "data": {"id": "rem:CondCart", "name": "remark", "text": "", "parent": "subsec:Cond", "rank": "0", "html_name": "rem:CondCart", "summary": "", "hasSummary": true, "hasTitle": true, "title": "Conditioning on a Cartesian product"}, "classes": "l0", "position": {"x": -3705.4533780122365, "y": 1581.3149748236115}}, {"group": "nodes", "data": {"id": "ex:RCRD", "name": "exercise", "text": "\n
Exercise 11 (Random card in random deck).
\n
\nConsider that a box contains card decks. decks contain cards (2 - Ace) and decks contain cards (7 - Ace). Without looking, a deck is chosen from the box, and a card is chosen from the deck. Given that the card is a , what is the probability that it comes from a card deck?
\n
", "parent": "subsec:Cond", "rank": "0", "html_name": "ex:RCRD", "summary": "\n
Exercise 11 (Random card in random deck).
\n
", "hasSummary": false, "hasTitle": true, "title": "Random card in random deck"}, "classes": "l0", "position": {"x": -2522.109017729677, "y": 2345.4030045895497}}, {"group": "nodes", "data": {"id": "ex:DWRPM", "name": "exercise", "text": "\n
Exercise 12 (Draws without replacement (probabilistic model)).
\n
\nThis exercise is similar to question 2 of Exercise\u00a02 \u2019Reminders on combinatorics\u2019, but we will re-express our previous reasonings in the language of probabilities and random variables.
\n
\n
Consider again that the elements of are physical objects contained in a box. Assume that we successively draw elements from the box, without looking inside, and without putting an element back before taking the next one. We now want to build a probabilistic model of the possible draws. We will model the experiment with the configuration space and call the coordinate random variables.
\n
\nGive a singleton which must have a null probability.
\nGive the law of .
\nGive the law of , given . Is there independence? Compute the joint law of and the marginal law of .
\nGive the law of , given .
\nExpress the probability on , from the law of and successive conditioning. What is the probability of a singleton which has a non null probability?
\n
\n
", "parent": "subsec:Cond", "rank": "0", "html_name": "ex:DWRPM", "summary": "\n
Exercise 12 (Draws without replacement (probabilistic model)).
\n
", "hasSummary": false, "hasTitle": true, "title": "Draws without replacement (probabilistic model)"}, "classes": "l0", "position": {"x": -2532.2100549601205, "y": 1876.5173886765588}}, {"group": "nodes", "data": {"id": "ex:CondGauss", "name": "exercise", "text": "\n
Exercise 13 (Conditionals of Gaussians).
\n
\nAgain, let and let be a probability with density
\n
\n
Give the conditional density on the first coordinate given that the second coordinate has a fixed value .
\n
", "parent": "subsec:Cond", "rank": "0", "html_name": "ex:CondGauss", "summary": "\n
Exercise 13 (Conditionals of Gaussians).
\n
", "hasSummary": false, "hasTitle": true, "title": "Conditionals of Gaussians"}, "classes": "l0", "position": {"x": -2536.920292020898, "y": 1410.897130613792}}, {"group": "nodes", "data": {"id": "th:JLIV", "name": "theorem", "text": "\n
Theorem 2 (Joint law of independent variables).
\n
\n
\n
\n
\nLet and be independent RVs.
\n
\nLet be the joint law on of the joint variable .
\n \nThe conditional probabilities do not dependent on and
\n\nin other words the marginal and conditional are the same (for all values of ).
\n
\n
When and are real random variables the results hold for densities (when they exist):
\n
\n
\n
\n
Proofs of 1) and 2)
\n
\n
\n) Call the joint variable . We have
\n
\n
Hence,
\n
\n
) When ,
\n
\n
\nLet with
\n
\n
\n\n
\n
\n
Call and the coordinate random variables. Are and independent?
\n
\n
", "parent": "subsec:Ind", "rank": "0", "html_name": "th:JLIV", "summary": "\n
Theorem 2 (Joint law of independent variables).
\n
", "hasSummary": true, "hasTitle": true, "title": "Joint law of independent variables"}, "classes": "l0", "position": {"x": -7044.081789564841, "y": 288.5955547839242}}, {"group": "nodes", "data": {"id": "def:IndEv", "name": "definition", "text": "\n
Definition 13 (Independent events).
\n
\n
\n
\n
\nConsider again dice rolls with a uniform probability distribution . Consider the two events
\n
\n\u2019\u2019: , the first roll is .
\n\u2019\u2019: , the second roll is .
\n
\n
Remember that conditional probabilities are given by
\n
\n
Hence : the probability of getting on the second roll is the same as the probability of getting on the second roll knowing that the first roll was a . We say in that case that the event does not depend on the event . Reversing the conditioning in the formula shows that is also independent of .
\n
\n
\nTwo events and are called independent when
\n
\n
\n
\nWhen : the proportion of in is the same as the proportion of in
\n
\n
In other words, conditioning probabilities to the event does not change the probability of .
\n
\n
When , the same can be said about the proportion of in .
\n
\n
\nLet be the card deck with uniform distribution . Show that the events \u2019the card is a diamond\u2019 and \u2019the card is an ace\u2019 are independent.
\n
\n \n \n \n
\n
and
\n
\n
they are independent.
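The diamond/ace computation can be reproduced exactly with fractions. A small Python check (illustrative only; the deck encoding is an assumption of ours):

```python
from fractions import Fraction
from itertools import product

# A 52-card deck with the uniform probability 1/52 on each card.
suits = ["hearts", "diamonds", "clubs", "spades"]
values = ["ace"] + [str(v) for v in range(2, 11)] + ["jack", "queen", "king"]
deck = list(product(suits, values))
p = Fraction(1, len(deck))

p_diamond = sum(p for s, v in deck if s == "diamonds")              # 13/52 = 1/4
p_ace = sum(p for s, v in deck if v == "ace")                       # 4/52 = 1/13
p_both = sum(p for s, v in deck if s == "diamonds" and v == "ace")  # 1/52

# P(diamond and ace) = P(diamond) * P(ace): the two events are independent.
assert p_both == p_diamond * p_ace
```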
\n
", "parent": "subsec:Ind", "rank": "0", "html_name": "def:IndEv", "summary": "\n
Definition 13 (Independent events). Two events and are called independent when
\n
", "hasSummary": true, "hasTitle": true, "title": "Independent events"}, "classes": "l0", "position": {"x": -7096.069015477114, "y": 1542.1955190811266}}, {"group": "nodes", "data": {"id": "def:IndRV", "name": "definition", "text": "\n
Definition 14 (Independent random variables).
\n
\n
\n
\n
\nThe notion of independence of two random variables and is based on the notion of independence of events.
\n
\n
Consider two consecutive dice rolls. Call and the 2 coordinates random variables on the configuration space endowed with the uniform distribution .
\n
\n
Check that events and are independent for all and . Rephrased in common language: obtaining an for the first roll is independent of obtaining a for the second roll.
\n
\n
More generally, we will require the independence of all events in described by a constraint on the values of and all events described by a constraint on the values of . In that case, information about one of the variables does not affect the other.
\n
\n
\nTwo random variables and are independent when
\n
\n
\n
\n
\n
\nRecall that it means that when , conditioning the probability on to does not change the probability of . The other direction holds when .
\n
\nIt is clear that independence is a symmetric relation.
\n
\n
\n
", "parent": "subsec:Ind", "rank": "0", "html_name": "def:IndRV", "summary": "\n
Definition 14 (Independent random variables). Two random variables and are independent when :
\n
", "hasSummary": true, "hasTitle": true, "title": "Independent random variables"}, "classes": "l0", "position": {"x": -6798.704393552654, "y": 1020.20838151672}}, {"group": "nodes", "data": {"id": "def:iid", "name": "definition", "text": "\n
Definition 15 (i.i.d variables).
\n
\n
\n
\n
\n random variables defined on a configuration space are independent and identically distributed when they are independent and have the same law.
\nKnowing the law of one of the variables determines the joint law. In the discrete case the joint law is given by
\n
\n
If has a density , the joint density is
\n
\n
\n
\nIn the modelling of the repetition of dice rolls, we have always used the uniform probability over all possible configurations. From this, we deduced the independence of the different rolls.
\nThe reasoning usually goes the other way. Let and be the coordinate random variables. It is usually reasonable to assume that the are independent, and that their law is uniform on . They are hence i.i.d. variables. This determines the probability on :
\n
Hence is uniform.
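This bottom-up construction can be carried out explicitly: starting from the uniform marginal of one roll, the product of marginals yields the uniform probability on the whole configuration space. A Python sketch (the names are illustrative):

```python
from fractions import Fraction
from itertools import product

# Marginal law of one roll: uniform on {1, ..., 6}.
marginal = {k: Fraction(1, 6) for k in range(1, 7)}

# For n i.i.d. rolls, the joint law is the product of the marginals.
n = 3
joint = {}
for w in product(marginal, repeat=n):
    q = Fraction(1)
    for k in w:
        q *= marginal[k]
    joint[w] = q

# The induced probability on the configuration space is uniform: 1/6^n.
assert all(q == Fraction(1, 6**n) for q in joint.values())
assert sum(joint.values()) == 1
```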
\n
", "parent": "subsec:Ind", "rank": "0", "html_name": "def:iid", "summary": "\n
Definition 15 (i.i.d variables). random variables defined on a configuration space are independent and identically distributed when they are independent and have the same law.
\n
\n
", "hasSummary": true, "hasTitle": true, "title": "i.i.d variables"}, "classes": "l0", "position": {"x": -7011.649719946043, "y": -244.86656227072086}}, {"group": "nodes", "data": {"id": "ex:Bin", "name": "exercise", "text": "\n
Exercise 14 (Binomial).
\n
\nLet be independent Bernoulli variables of parameter . A Binomial variable is a sum of independent Bernoulli variables of the same parameter. The law of a Binomial variable is denoted , where is the parameter of the Bernoulli, and is the number of Bernoulli variables in the sum.
\n
\nExplain why the are i.i.d. variables.
\nMake a parallel with Exercise\u00a06. Describe a configuration space and the probability on compatible with the current exercise.
\nLet . Give the law of .
\nLet . Give the law of .
\nLet , give the law of .
\n
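The law of a sum of independent Bernoulli variables can be built one variable at a time and compared with the closed-form binomial coefficients. A Python sketch (the helper `binomial_pmf_by_summing` is ours, not from the exercise):

```python
from fractions import Fraction
from math import comb

def binomial_pmf_by_summing(n, p):
    """Law of a sum of n independent Bernoulli(p) variables, built step by step."""
    pmf = {0: 1 - p, 1: p}  # law of a single Bernoulli
    for _ in range(n - 1):  # add one more independent Bernoulli each time
        new = {}
        for s, q in pmf.items():
            new[s] = new.get(s, 0) + q * (1 - p)    # next variable is 0
            new[s + 1] = new.get(s + 1, 0) + q * p  # next variable is 1
        pmf = new
    return pmf

p = Fraction(1, 3)
n = 5
pmf = binomial_pmf_by_summing(n, p)
# Agrees with the closed form P(S = k) = C(n, k) p^k (1-p)^(n-k).
for k in range(n + 1):
    assert pmf[k] == comb(n, k) * p**k * (1 - p)**(n - k)
```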
\n
", "parent": "subsec:Ind", "rank": "0", "html_name": "ex:Bin", "summary": "\n
Exercise 14 (Binomial).
\n
", "hasSummary": false, "hasTitle": true, "title": "Binomial"}, "classes": "l0", "position": {"x": -7024.526699601474, "y": -661.9589284740644}}, {"group": "nodes", "data": {"id": "def:IndExp", "name": "theorem", "text": "\n
Theorem 3 (Independence and expectation).
\n
\n
\n
\n
\nWhen and are two independent variables, we have that
\n
\n
\nWhen the laws have densities, the result is given by the following computation:
\n
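For discrete variables, the identity can also be verified by direct enumeration. A Python check on two independent dice rolls (illustrative, with exact fractions):

```python
from fractions import Fraction
from itertools import product

# Two independent fair dice rolls under the uniform product law.
omega = list(product(range(1, 7), repeat=2))
p = Fraction(1, 36)

E_X = sum(p * w[0] for w in omega)  # expectation of the first roll: 7/2
E_Y = sum(p * w[1] for w in omega)  # expectation of the second roll: 7/2
E_XY = sum(p * w[0] * w[1] for w in omega)

# Independence gives E[XY] = E[X] E[Y].
assert E_XY == E_X * E_Y
```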
", "parent": "subsec:MomRV", "rank": "0", "html_name": "def:IndExp", "summary": "\n
Theorem 3 (Independence and expectation). When and are two independent variables, we have that
\n
", "hasSummary": true, "hasTitle": true, "title": "Independence and expectation"}, "classes": "l0", "position": {"x": -1284.4874785919005, "y": 1204.6149245783558}}, {"group": "nodes", "data": {"id": "th:IndCov", "name": "theorem", "text": "\n
Theorem 4 (Independence and covariance).
\n
\nLet and be independent variables. It can be checked that and are also independent. We have
\n
\n
Hence, independence and covariance are related notions. If two variables and have nonzero covariance, knowing one gives information about the other. Note, however, that the converse is not true.
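A standard counterexample for the converse (not taken from these notes) is a symmetric variable together with its square: the covariance vanishes, yet the variables are clearly dependent. In Python:

```python
from fractions import Fraction

# X uniform on {-1, 0, 1} and Y = X^2: zero covariance without independence.
values = [-1, 0, 1]
p = Fraction(1, 3)

E_X = sum(p * x for x in values)      # 0, by symmetry
E_Y = sum(p * x**2 for x in values)   # 2/3
E_XY = sum(p * x**3 for x in values)  # E[X * X^2] = 0, by symmetry
cov = E_XY - E_X * E_Y
assert cov == 0

# Yet the variables are dependent: P(X = 1, Y = 1) = 1/3, while
# P(X = 1) P(Y = 1) = (1/3) * (2/3) = 2/9.
assert Fraction(1, 3) != Fraction(1, 3) * Fraction(2, 3)
```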
\n
", "parent": "subsec:MomRV", "rank": "0", "html_name": "th:IndCov", "summary": "\n
Theorem 4 (Independence and covariance).
\n
", "hasSummary": true, "hasTitle": true, "title": "Independence and covariance"}, "classes": "l0", "position": {"x": -1275.363933513836, "y": 708.3709925139169}}, {"group": "nodes", "data": {"id": "def:StdProp", "name": "theorem", "text": "\n
Theorem 5 (Properties of variance).
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n This is simply the Pythagorean theorem. Alternatively we can write,
\n
\n
\n
\n
Important consequence
\n
\n
\nHence, when are independent variables with identical distributions,
\n
This is a very important result: averaging independent and identically distributed (i.i.d.) variables reduces the variance in and the standard deviation in .
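The decay of the variance of the mean can be observed by simulation. A Python sketch using uniform(0, 1) variables, whose variance is 1/12 (the sample sizes and tolerance are arbitrary choices of ours):

```python
import random
import statistics

random.seed(0)

def empirical_mean_sample(n):
    """One realization of the mean of n i.i.d. uniform(0, 1) variables."""
    return sum(random.random() for _ in range(n)) / n

# The variance of the mean of n i.i.d. uniform(0, 1) variables should be
# close to 1/(12 n): multiplying n by 10 divides the variance by 10.
variances = {}
for n in (10, 100):
    samples = [empirical_mean_sample(n) for _ in range(20000)]
    variances[n] = statistics.pvariance(samples)
    assert abs(variances[n] - 1 / (12 * n)) < 0.2 / (12 * n)
```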
\n
", "parent": "subsec:MomRV", "rank": "0", "html_name": "def:StdProp", "summary": "\n
Theorem 5 (Properties of variance). when and are independent,
\n
", "hasSummary": true, "hasTitle": true, "title": "Properties of variance"}, "classes": "l0", "position": {"x": -1264.554042025985, "y": 179.87513751583037}}, {"group": "nodes", "data": {"id": "def:Expe", "name": "definition", "text": "\n
Definition 16 (Expectation).
\n
\n
\n
\n
\nLet with and . Let be a real random variable, . For each elementary configuration , takes the value . The \u2019expectation\u2019 of is the \u2019average\u2019 value of the with respect to the probabilities of the . In our example:
\n
\n
\n
\nIf , the average of a random variable becomes:
\n
\n
Now consider the case , with a probability of density . The formula for the expectation becomes (when the integral exists)
\n
\n
There is a definition of the expectation which does not depend on the nature of and . The expectation (average / mean) of is defined by
\n
\n
where the integral is a \u2019Lebesgue integral\u2019 (as opposed to a Riemann integral). When is discrete, the Lebesgue integral becomes a sum over , while when has a density, can be replaced by . Lebesgue integrals will not be used in this course, but it is useful to be familiar with this notation.
\n
\n
\n
\n
from the law
\n
\n
\nIn the discrete case, the previous definition is based on the sum of the over all the configurations , weighted by the probabilities . The same result can be obtained by summing all the possible values that can take, weighted by their probabilities .
\nThis approach leads to the following important equalities
\n
\nIf , we have that
\n \nIf and if the law has a density , we have
\n \n
\n
Exercise: Prove the result when .
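Both ways of computing the expectation can be compared on a concrete example, say the square of one fair die roll. A Python check (illustrative; exact fractions):

```python
from fractions import Fraction

# Expectation of X^2 for one fair die roll, computed in two equivalent ways.
omega = range(1, 7)
p = Fraction(1, 6)

# 1) Sum over the configurations w, weighted by P({w}).
over_configurations = sum(p * w**2 for w in omega)

# 2) Sum over the possible values x of X^2, weighted by P(X^2 = x).
law = {}
for w in omega:
    law[w**2] = law.get(w**2, 0) + p
over_values = sum(x * q for x, q in law.items())

assert over_configurations == over_values  # both equal 91/6
```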
\n
\n
\nThe set of random variables is a vector space (). The set of random variables such that exists is again a vector space. Since integrals are linear, the expectation
\n
\n
is a linear map valued in (i.e. a linear form). In other words, we have the fundamental properties:
\n
\n
", "parent": "subsec:MomRV", "rank": "0", "html_name": "def:Expe", "summary": "\n
Definition 16 (Expectation). The general definition of the expectation of a random variable is in particular, when , and when ,
\n
", "hasSummary": true, "hasTitle": true, "title": "Expectation"}, "classes": "l0", "position": {"x": -1256.2681048153534, "y": 2246.8829423217985}}, {"group": "nodes", "data": {"id": "def:ScalRV", "name": "definition", "text": "\n
Definition 17 (Inner products on random variables).
\n
\nThe expectation makes it possible to define an important inner product on random variables. It provides a norm and a notion of angle between random variables. Let and be random variables. When it exists, we can define
\n
\n
where is understood as the function .
\n
\n
Why is it an inner product?
\n
\nIt is easy to check that the set of random variables such that exists is a vector space.
\n is linear in and and .
\nExcept for pathological cases that we will not consider here, we can show that
\n
\n
Hence is an inner product.
\n
", "parent": "subsec:MomRV", "rank": "0", "html_name": "def:ScalRV", "summary": "\n
Definition 17 (Inner products on random variables).
\n
", "hasSummary": true, "hasTitle": true, "title": "Inner products on random variables"}, "classes": "l0", "position": {"x": 8.002538748805819, "y": 1434.73204758651}}, {"group": "nodes", "data": {"id": "def:Cov", "name": "definition", "text": "\n
Definition 18 (Covariance).
\n
\nGiven a random variable , denote the \u2019centered\u2019 variable by . The covariance between two variables and is the inner product between their centered versions: it is defined by
\n
\n
when it exists.
\n
", "parent": "subsec:MomRV", "rank": "0", "html_name": "def:Cov", "summary": "\n
Definition 18 (Covariance).
\n
", "hasSummary": true, "hasTitle": true, "title": "Covariance"}, "classes": "l0", "position": {"x": -117.1408484185086, "y": 765.9280894870162}}, {"group": "nodes", "data": {"id": "def:Std", "name": "definition", "text": "\n
Definition 19 (Variance /Standard deviation).
\n
\n
\n
\n
\nThe variance and standard deviation measure how a random variable varies around its mean. The deviation from the mean is given by the Euclidean norm of the centered variable . When it exists, the variance is defined by,
\n
\n
and the standard deviation by
\n
\n
\n
\n
Alternative formula
\n
\n
\nWe have the important equality
\n
Proof:
\n
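The equality can be confirmed exactly on one fair die roll, computing the variance both from the centered variable and from the alternative formula. In Python (illustrative):

```python
from fractions import Fraction

# Check Var(X) = E[X^2] - E[X]^2 for one fair die roll.
omega = range(1, 7)
p = Fraction(1, 6)

E = sum(p * w for w in omega)       # E[X] = 7/2
E2 = sum(p * w**2 for w in omega)   # E[X^2] = 91/6
var_centered = sum(p * (w - E)**2 for w in omega)

assert var_centered == E2 - E**2    # both equal 35/12
```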
", "parent": "subsec:MomRV", "rank": "0", "html_name": "def:Std", "summary": "\n
Definition 19 (Variance /Standard deviation).
\n
", "hasSummary": true, "hasTitle": true, "title": "Variance /Standard deviation"}, "classes": "l0", "position": {"x": -76.99749540434607, "y": 216.19300869132803}}, {"group": "nodes", "data": {"id": "def:CovM", "name": "definition", "text": "\n
Definition 20 (Covariance matrix).
\n
\n
\n
\n
\nConsider random variables . The variables can be put in a column vector . is then a random variable . Such random variables are often called random vectors.
\nFor a random column vector , when it exists the covariance matrix is defined by
\n
When is a row vector, the definition becomes
\n
\n
\n
Entries of the matrix
\n
\n
\n
\n
Hence we can see that .
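The entry-by-entry description of the covariance matrix can be checked on a small random vector, e.g. the two coordinates of independent die rolls. A Python sketch (exact arithmetic with fractions; the names are ours):

```python
from fractions import Fraction
from itertools import product

# Covariance matrix of Z = (X1, X2) for two independent fair die rolls:
# C[i][j] = E[(Z_i - E[Z_i]) (Z_j - E[Z_j])].
omega = list(product(range(1, 7), repeat=2))
p = Fraction(1, 36)

means = [sum(p * w[i] for w in omega) for i in range(2)]
C = [[sum(p * (w[i] - means[i]) * (w[j] - means[j]) for w in omega)
      for j in range(2)] for i in range(2)]

# Diagonal entries are the variances (35/12 for a die); off-diagonal
# entries are the covariances (0 here, by independence).
assert C[0][0] == C[1][1] == Fraction(35, 12)
assert C[0][1] == C[1][0] == 0
```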
\n
\n
", "parent": "subsec:MomRV", "rank": "0", "html_name": "def:CovM", "summary": "\n
Definition 20 (Covariance matrix). Consider random variables . The variables can be put in a column vector
\n
", "hasSummary": true, "hasTitle": true, "title": "Covariance matrix"}, "classes": "l0", "position": {"x": 1120.0630833685873, "y": 223.501005840664}}, {"group": "nodes", "data": {"id": "ex:Bern", "name": "exercise", "text": "\n
Exercise 15 (Bernoulli).
\n
\nA Bernoulli random variable is a random variable valued in . The law is determined by , denoted .
\n
\n
", "parent": "subsec:MomRV", "rank": "0", "html_name": "ex:Bern", "summary": "\n
Exercise 15 (Bernoulli).
\n
", "hasSummary": false, "hasTitle": true, "title": "Bernoulli"}, "classes": "l0", "position": {"x": -82.9913469130289, "y": 2489.8505672414767}}, {"group": "nodes", "data": {"id": "ex:ExpSum", "name": "exercise", "text": "\n
Exercise 16 (Expected sum).
\n
\nConsider rolls of a fair die. What is the configuration space and the probability describing the rolls? Call the random variable \u2019sum of the rolls\u2019. Compute .
\n
", "parent": "subsec:MomRV", "rank": "0", "html_name": "ex:ExpSum", "summary": "\n
Exercise 16 (Expected sum).
\n
", "hasSummary": false, "hasTitle": true, "title": "Expected sum"}, "classes": "l0", "position": {"x": 1094.3155288304629, "y": 2507.4109660435925}}, {"group": "nodes", "data": {"id": "ex:GaussMean", "name": "exercise", "text": "\n
Exercise 17 (Mean of a Gaussian RV).
\n
\n
\n
\nAssume that the law of has a density Compute .
\nAssume that the law of has a density
\nCompute .
\n
\n
", "parent": "subsec:MomRV", "rank": "0", "html_name": "ex:GaussMean", "summary": "\n
Exercise 17 (Mean of a Gaussian RV).
\n
", "hasSummary": false, "hasTitle": true, "title": "Mean of a Gaussian RV"}, "classes": "l0", "position": {"x": -96.57582900774878, "y": 2114.7793235130284}}, {"group": "nodes", "data": {"id": "ex:GaussVar", "name": "exercise", "text": "\n
Exercise 18 (Variance on a uni-dimensional Gaussian).
\n
\nLet and let be a probability with Gaussian density
\n
\n
Knowing that
\n
\n
Compute the variance of the probability distribution.
\n
\n
", "parent": "subsec:MomRV", "rank": "0", "html_name": "ex:GaussVar", "summary": "\n
Exercise 18 (Variance on a uni-dimensional Gaussian).
\n
", "hasSummary": false, "hasTitle": true, "title": "Variance on a uni-dimensional Gaussian"}, "classes": "l0", "position": {"x": 2250.5009496629027, "y": 731.5549544819082}}, {"group": "nodes", "data": {"id": "ex:GaussCov", "name": "exercise", "text": "\n
Exercise 19 (Covariance of a bi-dimensional Gaussian).
\n
\nLet be a random variable whose law has the following density
\n
\n
where .
\n
\nDraw approximate contours of the density for different values of and .
\nCompute the covariance matrix of the probability distribution.
\nExpress in terms of matrix multiplications. Rewrite the density accordingly.
\n
\n
Let be a rotation matrix, and let be another random vector.
\n
\nWhat is the covariance matrix of ? Use the fact that for a matrix , .
\nWhat is the density of ?
\n
\n
Express the general relation between the covariance of a Gaussian density and the term in the exponential.
\n
", "parent": "subsec:MomRV", "rank": "0", "html_name": "ex:GaussCov", "summary": "\n
Exercise 19 (Covariance of a bi-dimensional Gaussian).
\n
", "hasSummary": false, "hasTitle": true, "title": "Covariance of a bi-dimensional Gaussian"}, "classes": "l0", "position": {"x": 2243.324799893895, "y": 204.58760177480008}}, {"group": "nodes", "data": {"id": "def:WLLN", "name": "theorem", "text": "\n
Theorem 6 ((Weak) Law of large numbers).
\n
\n
\n
\n
\n
Statement of the theorem
\n
\n
\nWe are now ready to state an important result of probability theory, relating empirical means and expectations.
\n
\n
Let be an infinite sequence of i.i.d. real random variables of mean . Let be the empirical mean. For all , we have
\n
In simple words: when is large the values of are almost always close to .
\n
\n
\n
\nWe will not prove this result, but it can be understood intuitively when the variables have finite variance. First, note that . Then, remember that Hence the law of the empirical mean is more and more concentrated around its expectation , which means that the probability should be smaller and smaller.
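The concentration described by the theorem is easy to observe by simulation: the frequency of a fixed deviation from the mean drops as the number of terms grows. A Python sketch (sample sizes and tolerance are arbitrary choices of ours):

```python
import random

random.seed(1)

# Frequency of the event |empirical mean - 1/2| > eps for means of n
# uniform(0, 1) variables: it should decrease towards 0 as n grows.
eps = 0.05

def deviation_frequency(n, trials=2000):
    count = 0
    for _ in range(trials):
        mean = sum(random.random() for _ in range(n)) / n
        if abs(mean - 0.5) > eps:
            count += 1
    return count / trials

freqs = [deviation_frequency(n) for n in (10, 100, 1000)]
assert freqs[0] > freqs[1] > freqs[2]
```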
\n
", "parent": "subsec:LLN", "rank": "0", "html_name": "def:WLLN", "summary": "\n
Theorem 6 ((Weak) Law of large numbers). Let be i.i.d. For all , we have
\n
", "hasSummary": true, "hasTitle": true, "title": "(Weak) Law of large numbers"}, "classes": "l0", "position": {"x": -3002.3955704605337, "y": -886.4056204438808}}, {"group": "nodes", "data": {"id": "def:EmpM", "name": "definition", "text": "\n
Definition 21 (Empirical mean).
\n
\n
\n
\n
\nIn general, the adjective \u2019empirical\u2019 is understood as \u2019coming from observations\u2019, as opposed to a computation made on the configuration space . Assume that performing a certain experiment led to the observation of numbers , modeled by random variables . The mean of a particular observation is
\n
which is described by the random variable
\n
\n
is called the \u2019empirical mean\u2019.
\n
\n
By linearity, the expectation of is given by
\n
\n
\n
Link with expectation
\n
\n
\nThe notion of empirical mean is connected to, but different from, the notion of expectation, which is a mean over configurations from . Remember that when has a uniform probability over a finite number of elements,
\n
Hence,
\n
\nthe empirical mean is itself a random variable, computed as a sum over different random variables.
\nthe expectation is a number, computed for a single random variable, as a sum over configurations .
\n
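The distinction can be made concrete: the expectation of one die roll is the fixed number 7/2, while each simulated batch of rolls produces its own empirical mean near that number. A Python illustration (batch size and tolerance are our choices):

```python
import random

random.seed(2)

# The expectation of one fair die roll is a fixed number: 7/2.
expectation = sum(range(1, 7)) / 6

def empirical_mean(n):
    """One realization of the empirical mean of n simulated rolls."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# The empirical mean is itself random: each batch gives its own value,
# all close to the expectation when n is large.
means = [empirical_mean(1000) for _ in range(5)]
assert all(abs(m - expectation) < 0.25 for m in means)
```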
\n
", "parent": "subsec:LLN", "rank": "0", "html_name": "def:EmpM", "summary": "\n
Definition 21 (Empirical mean). Let be RV. Their \u2019empirical mean\u2019 is the following RV,
\n
", "hasSummary": true, "hasTitle": true, "title": "Empirical mean"}, "classes": "l0", "position": {"x": -3008.780282009238, "y": -210.97656654692923}}, {"group": "nodes", "data": {"id": "sec:Prob", "name": "section", "text": "", "parent": "", "rank": "0", "html_name": "sec:Prob", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -3885.676967972946, "y": 2710.2816977060456}}, {"group": "nodes", "data": {"id": "titlesec:Prob", "name": "sectionTitle", "text": "Probabilities
", "parent": "sec:Prob", "rank": "0", "html_name": "sec:Prob", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -3942.777393921063, "y": 6409.4690158559715}}, {"group": "nodes", "data": {"id": "subsec:ConfProb", "name": "subsection", "text": "", "parent": "sec:Prob", "rank": "0", "html_name": "subsec:ConfProb", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -7749.757904204901, "y": 5121.845864314333}}, {"group": "nodes", "data": {"id": "titlesubsec:ConfProb", "name": "subsectionTitle", "text": "Configurations and probabilities
", "parent": "subsec:ConfProb", "rank": "0", "html_name": "subsec:ConfProb", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -7814.212974665837, "y": 6300.700516884763}}, {"group": "nodes", "data": {"id": "subsec:RV", "name": "subsection", "text": "", "parent": "sec:Prob", "rank": "0", "html_name": "subsec:RV", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -3957.1621992937507, "y": 4274.1448945349875}}, {"group": "nodes", "data": {"id": "titlesubsec:RV", "name": "subsectionTitle", "text": "Random Variables
", "parent": "subsec:RV", "rank": "0", "html_name": "subsec:RV", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -4063.0302596209003, "y": 4845.789366749729}}, {"group": "nodes", "data": {"id": "subsec:Marg", "name": "subsection", "text": "", "parent": "sec:Prob", "rank": "0", "html_name": "subsec:Marg", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -8798.314516230856, "y": 2897.418165008762}}, {"group": "nodes", "data": {"id": "titlesubsec:Marg", "name": "subsectionTitle", "text": "Marginals
", "parent": "subsec:Marg", "rank": "0", "html_name": "subsec:Marg", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -8780.535452312293, "y": 3384.2740706855107}}, {"group": "nodes", "data": {"id": "subsec:Cond", "name": "subsection", "text": "", "parent": "sec:Prob", "rank": "0", "html_name": "subsec:Cond", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -3696.8900592126656, "y": 1908.1243223472811}}, {"group": "nodes", "data": {"id": "titlesubsec:Cond", "name": "subsectionTitle", "text": "Conditioning
", "parent": "subsec:Cond", "rank": "0", "html_name": "subsec:Cond", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -3696.9254900614355, "y": 2476.8515140807704}}, {"group": "nodes", "data": {"id": "subsec:Ind", "name": "subsection", "text": "", "parent": "sec:Prob", "rank": "0", "html_name": "subsec:Ind", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -6905.508163704335, "y": 599.5812383016362}}, {"group": "nodes", "data": {"id": "titlesubsec:Ind", "name": "subsectionTitle", "text": "Independence
", "parent": "subsec:Ind", "rank": "0", "html_name": "subsec:Ind", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -7131.811933856016, "y": 1877.6214050773367}}, {"group": "nodes", "data": {"id": "subsec:MomRV", "name": "subsection", "text": "", "parent": "sec:Prob", "rank": "0", "html_name": "subsec:MomRV", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": 483.0067355355011, "y": 1423.3745759249136}}, {"group": "nodes", "data": {"id": "titlesubsec:MomRV", "name": "subsectionTitle", "text": "Moments of random variables
", "parent": "subsec:MomRV", "rank": "0", "html_name": "subsec:MomRV", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": 359.56828538700586, "y": 2809.374014333997}}, {"group": "nodes", "data": {"id": "subsec:LLN", "name": "subsection", "text": "", "parent": "sec:Prob", "rank": "0", "html_name": "subsec:LLN", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -3242.911850592072, "y": -419.59974154820395}}, {"group": "nodes", "data": {"id": "titlesubsec:LLN", "name": "subsectionTitle", "text": "Laws of large numbers
", "parent": "subsec:LLN", "rank": "0", "html_name": "subsec:LLN", "hasSummary": false, "hasTitle": false}, "classes": "l0", "position": {"x": -3485.42813072361, "y": 211.70613734747283}}, {"data": {"id": "rem:AbsConfsubsec:ConfProb", "source": "subsec:ConfProb", "target": "rem:AbsConf", "type": "strong", "visibility": 1}}, {"data": {"id": "rem:AbsConfsubsec:RV", "source": "subsec:RV", "target": "rem:AbsConf", "type": "strong", "visibility": 1}}, {"data": {"id": "def:ConfSprem:IntProb", "source": "rem:IntProb", "target": "def:ConfSp", "type": "strong", "visibility": 1}}, {"data": {"id": "def:Evdef:ConfSp", "source": "def:ConfSp", "target": "def:Ev", "type": "strong", "visibility": 1}}, {"data": {"id": "def:Probdef:Ev", "source": "def:Ev", "target": "def:Prob", "type": "strong", "visibility": 1}}, {"data": {"id": "def:Probrem:ProbAss", "source": "rem:ProbAss", "target": "def:Prob", "type": "strong", "visibility": 1}}, {"data": {"id": "def:ProbDendef:Prob", "source": "def:Prob", "target": "def:ProbDen", "type": "strong", "visibility": 1}}, {"data": {"id": "rem:Evdef:Ev", "source": "def:Ev", "target": "rem:Ev", "type": "strong", "visibility": 1}}, {"data": {"id": "rem:ProbAssdef:Ev", "source": "def:Ev", "target": "rem:ProbAss", "type": "strong", "visibility": 1}}, {"data": {"id": "def:RVLawdef:RV", "source": "def:RV", "target": "def:RVLaw", "type": "strong", "visibility": 1}}, {"data": {"id": "def:JoinRVdef:RV", "source": "def:RV", "target": "def:JoinRV", "type": "strong", "visibility": 1}}, {"data": {"id": "def:JoinRVdef:RVLaw", "source": "def:RVLaw", "target": "def:JoinRV", "type": "weak"}}, {"data": {"id": "def:MargProbdef:Marg", "source": "def:Marg", "target": "def:MargProb", "type": "strong", "visibility": 1}}, {"data": {"id": "def:MargRVdef:Marg", "source": "def:Marg", "target": "def:MargRV", "type": "strong", "visibility": 1}}, {"data": {"id": "def:MargRVdef:MargProb", "source": "def:MargProb", "target": "def:MargRV", "type": "weak"}}, {"data": {"id": 
"def:MargRVdef:RVLaw", "source": "def:RVLaw", "target": "def:MargRV", "type": "weak"}}, {"data": {"id": "ex:MargGaussex:Gauss", "source": "ex:Gauss", "target": "ex:MargGauss", "type": "weak"}}, {"data": {"id": "th:Bayesdef:CondProb", "source": "def:CondProb", "target": "th:Bayes", "type": "strong", "visibility": 1}}, {"data": {"id": "def:CondCartrem:CondCart", "source": "rem:CondCart", "target": "def:CondCart", "type": "strong", "visibility": 1}}, {"data": {"id": "def:CondCartdef:CondProb", "source": "def:CondProb", "target": "def:CondCart", "type": "strong", "visibility": 1}}, {"data": {"id": "def:CondCartdef:MargProb", "source": "def:MargProb", "target": "def:CondCart", "type": "strong", "visibility": 1}}, {"data": {"id": "def:CondCartdef:JoinRV", "source": "def:JoinRV", "target": "def:CondCart", "type": "weak"}}, {"data": {"id": "def:CondCartdef:MargRV", "source": "def:MargRV", "target": "def:CondCart", "type": "weak"}}, {"data": {"id": "rem:CondCartdef:CondProb", "source": "def:CondProb", "target": "rem:CondCart", "type": "strong", "visibility": 1}}, {"data": {"id": "ex:DWRPMex:RemCom", "source": "ex:RemCom", "target": "ex:DWRPM", "type": "weak"}}, {"data": {"id": "th:JLIVdef:IndRV", "source": "def:IndRV", "target": "th:JLIV", "type": "strong", "visibility": 1}}, {"data": {"id": "th:JLIVdef:MargRV", "source": "def:MargRV", "target": "th:JLIV", "type": "strong", "visibility": 1}}, {"data": {"id": "th:JLIVdef:CondCart", "source": "def:CondCart", "target": "th:JLIV", "type": "strong", "visibility": 1}}, {"data": {"id": "def:IndRVsubsec:RV", "source": "subsec:RV", "target": "def:IndRV", "type": "strong", "visibility": 1}}, {"data": {"id": "def:IndRVdef:IndEv", "source": "def:IndEv", "target": "def:IndRV", "type": "strong", "visibility": 1}}, {"data": {"id": "def:iidth:JLIV", "source": "th:JLIV", "target": "def:iid", "type": "strong", "visibility": 1}}, {"data": {"id": "ex:Binex:Bern", "source": "ex:Bern", "target": "ex:Bin", "type": "weak"}}, {"data": {"id": 
"ex:Binex:SumEq01", "source": "ex:SumEq01", "target": "ex:Bin", "type": "weak"}}, {"data": {"id": "def:IndExpdef:Expe", "source": "def:Expe", "target": "def:IndExp", "type": "strong", "visibility": 1}}, {"data": {"id": "def:IndExpdef:IndRV", "source": "def:IndRV", "target": "def:IndExp", "type": "strong", "visibility": 1}}, {"data": {"id": "th:IndCovdef:IndExp", "source": "def:IndExp", "target": "th:IndCov", "type": "strong", "visibility": 1}}, {"data": {"id": "th:IndCovdef:Cov", "source": "def:Cov", "target": "th:IndCov", "type": "strong", "visibility": 1}}, {"data": {"id": "def:StdPropdef:Std", "source": "def:Std", "target": "def:StdProp", "type": "strong", "visibility": 1}}, {"data": {"id": "def:StdPropth:IndCov", "source": "th:IndCov", "target": "def:StdProp", "type": "strong", "visibility": 1}}, {"data": {"id": "def:Expedef:RV", "source": "def:RV", "target": "def:Expe", "type": "strong", "visibility": 1}}, {"data": {"id": "def:Expedef:RVLaw", "source": "def:RVLaw", "target": "def:Expe", "type": "strong", "visibility": 1}}, {"data": {"id": "def:ScalRVdef:Expe", "source": "def:Expe", "target": "def:ScalRV", "type": "strong", "visibility": 1}}, {"data": {"id": "def:ScalRVdef:IndExp", "source": "def:IndExp", "target": "def:ScalRV", "type": "weak"}}, {"data": {"id": "def:Covdef:ScalRV", "source": "def:ScalRV", "target": "def:Cov", "type": "strong", "visibility": 1}}, {"data": {"id": "def:Stddef:Cov", "source": "def:Cov", "target": "def:Std", "type": "strong", "visibility": 1}}, {"data": {"id": "def:CovMdef:Cov", "source": "def:Cov", "target": "def:CovM", "type": "strong", "visibility": 1}}, {"data": {"id": "def:WLLNdef:iid", "source": "def:iid", "target": "def:WLLN", "type": "strong", "visibility": 1}}, {"data": {"id": "def:WLLNdef:EmpM", "source": "def:EmpM", "target": "def:WLLN", "type": "strong", "visibility": 1}}, {"data": {"id": "def:WLLNdef:StdProp", "source": "def:StdProp", "target": "def:WLLN", "type": "strong", "visibility": 1}}, {"data": {"id": 
"def:EmpMdef:Expe", "source": "def:Expe", "target": "def:EmpM", "type": "strong", "visibility": 1}}, {"data": {"id": "subsec:RVsubsec:ConfProb", "source": "subsec:ConfProb", "target": "subsec:RV", "type": "strong", "visibility": 1}}, {"data": {"id": "subsec:Margsubsec:ConfProb", "source": "subsec:ConfProb", "target": "subsec:Marg", "type": "strong", "visibility": 1}}, {"data": {"id": "subsec:Margdef:JoinRV", "source": "def:JoinRV", "target": "subsec:Marg", "type": "strong", "visibility": 1}}, {"data": {"id": "subsec:Margdef:RVLaw", "source": "def:RVLaw", "target": "subsec:Marg", "type": "weak"}}, {"data": {"id": "subsec:Condsubsec:ConfProb", "source": "subsec:ConfProb", "target": "subsec:Cond", "type": "strong", "visibility": 1}}, {"data": {"id": "subsec:Inddef:Prob", "source": "def:Prob", "target": "subsec:Ind", "type": "strong", "visibility": 1}}, {"data": {"id": "subsec:Inddef:CondProb", "source": "def:CondProb", "target": "subsec:Ind", "type": "weak"}}];