
The first ‘brother’ is likely the less popular of the two, mostly because he is less widely applicable. However, he’s also the oldest, and it’s important to talk about him first because we will eventually define Correlation in terms of Covariance.

Let’s start with a qualitative framework: you can probably already guess what Covariance ‘essentially means.’ We know that Variance measures the spread of a random variable, so Covariance measures how two random variables vary together. A positive value of Covariance means that two random variables tend to vary in the same direction, a negative value means that they vary in opposite directions, and a 0 means that they don’t vary together. Unlike Variance, which is non-negative, Covariance can be negative or positive (or zero, of course). If two random variables are independent, their Covariance is 0, which makes sense because they don’t affect each other and thus don’t vary together (this relation doesn’t necessarily hold in the opposite direction, though, which we will see later on).

This might seem a little confusing, so let’s actually define Covariance and continue from there. For two random variables \(X\) and \(Y\), you can define the Covariance \(Cov(X,Y)\) as:

\[Cov(X, Y) = E\big[\big(X - E(X)\big)\big(Y - E(Y)\big)\big]\]

So, in words, the Covariance is the expected value of the product of the random variables minus their respective means. Now that we can actually see this definition, let’s think about what positive and negative Covariances mean. Recall that \(X\) and \(Y\) are random variables, while \(E(X)\) and \(E(Y)\) are averages and thus are constants.
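To see this sign behavior numerically, here is a minimal simulation sketch, assuming Python with NumPy (not something this text otherwise relies on). It estimates \(Cov(X,Y)\) straight from the definition for a pair that varies in the same direction, a pair that varies in opposite directions, and an independent pair.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def cov(x, y):
    """Estimate Cov(X, Y) = E[(X - E(X)) * (Y - E(Y))] from samples."""
    return np.mean((x - x.mean()) * (y - y.mean()))

x = rng.normal(size=n)

y_same = x + rng.normal(size=n)       # varies with X        -> positive Covariance
y_opposite = -x + rng.normal(size=n)  # varies against X     -> negative Covariance
y_indep = rng.normal(size=n)          # independent of X     -> Covariance near 0

print(cov(x, y_same))      # roughly  1
print(cov(x, y_opposite))  # roughly -1
print(cov(x, y_indep))     # roughly  0
```

The first two estimates land near \(1\) and \(-1\) (the Variance of \(X\)), while the independent pair hovers near 0, matching the qualitative picture above.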
