The past lesson on GCDs computed them with a very inefficient algorithm. Looping through
all possible numbers simply cannot be the best way to proceed. It worked,
but it is not the best way to compute a GCD. This comes up often in computer programming: you find an algorithm
that works, but it is slow or long (in lines of code). How do you make it better?

So enter the idea of "properly" implementing computer algorithms. If you go to the GCD page on Wikipedia, you'll find the algorithm outlined by the `gcd(a,b)` function shown below. Look how short it is! No loops at all!

Interestingly, the function itself is recursive. Can you see why? It's because the `gcd(a,b)` function is called *within* the `gcd(a,b)` function. That is, this version of `gcd(a,b)` calls itself. Is this legal? Yes, it is, and recursion is often a very elegant way of solving problems. But you always have to be extra sure that you have some "terminal condition," a condition that will always eventually become true, that ends the process of the function eternally calling itself. Here it's the `if b==0 then` condition.
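A minimal sketch of this recursive idea, written here in Python for illustration (the lesson's own snippet uses a different syntax, e.g. `if b==0 then`, but the logic is the same):

```python
def gcd(a, b):
    # Euclid's algorithm: gcd(a, b) = gcd(b, a mod b).
    # Terminal condition: gcd(a, 0) = a. The second argument strictly
    # shrinks toward 0, so the recursion always stops.
    if b == 0:
        return a
    return gcd(b, a % b)

print(gcd(48, 18))  # prints 6
```

Notice how the terminal condition does the job a loop's exit test would normally do: each call passes a smaller second argument, so eventually `b == 0` and the chain of calls unwinds.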

The GCD has some interesting properties. Can you verify these using the GCD algorithm here? The `print` statement verifies the first one in the list below.
- Is $m\cdot \gcd(a,b)=\gcd(m\cdot a,m\cdot b)$?
- Is $\gcd(a + m\cdot b, b) = \gcd(a, b)$?
- Is $\gcd(a,b)=\gcd(b,a)$?
- If you want the GCD of more than two numbers you can use $\gcd(a, b, c) = \gcd(\gcd(a, b), c)$. Define a function called `gcd3(a,b,c)` that uses this fact to compute the GCD of 3 numbers.
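One way these checks and the `gcd3` exercise could be sketched, again in Python for illustration (the sample values `12, 18, 30` and `m = 5` are my own, not from the lesson):

```python
def gcd(a, b):
    # Recursive Euclidean algorithm, as above.
    return a if b == 0 else gcd(b, a % b)

# Try the three properties for one sample choice of a, b, m:
a, b, m = 12, 18, 5
print(m * gcd(a, b) == gcd(m * a, m * b))  # True
print(gcd(a + m * b, b) == gcd(a, b))      # True
print(gcd(a, b) == gcd(b, a))              # True

# One possible gcd3: reduce three numbers pairwise.
def gcd3(a, b, c):
    return gcd(gcd(a, b), c)

print(gcd3(12, 18, 30))  # prints 6
```

A single sample does not prove the properties, of course, but wrapping these checks in a loop over random values is a good further exercise.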