In the last lesson, and the one before it, we found trendlines for some data.

What about measuring *how well* a trendline meshes with the data? This is what the $r$-value is for. For a data set, the $r$-value can be computed from

$$r=\frac{\Sigma (x_i - {\bar x})(y_i - {\bar y})}{\sqrt{\Sigma(x_i-{\bar x})^2\,\Sigma(y_i-{\bar y})^2}}.$$

If the $r$-value is close to $\pm 1$, then the trendline and data mesh well. If it's close to $0$, then the points do not mesh with the line very well. Here the sums all run from $1$ to $N$, where $N$ is the number of data points.
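As a sketch, the formula above can be translated directly into Python (the function name `r_value` is ours, not from any particular library):

```python
import math

def r_value(xs, ys):
    # Pearson r: the sum of (x - x_bar)(y - y_bar) divided by the
    # square root of the product of the two sums of squared deviations
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    numerator = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    denominator = math.sqrt(
        sum((x - x_bar) ** 2 for x in xs) * sum((y - y_bar) ** 2 for y in ys)
    )
    return numerator / denominator

# Perfectly linear data should give r = 1
print(r_value([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```

For data that lies exactly on a line with positive slope, the numerator and denominator are equal, so $r = 1$.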


Fill in the `r=` line to compute the $r$-value.

We got `r=0.98`.