Harvard Professor Explains Algorithms in 5 Levels of Difficulty
Released on 11/08/2023
Hello world.
My name is David J. Malan
and I'm a professor of computer science
at Harvard University.
Today, I've been asked to explain algorithms
in five levels of increasing difficulty.
Algorithms are important
because they really are everywhere,
not only in the physical world,
but certainly in the virtual world as well.
And in fact, what excites me about algorithms
is that they really represent an opportunity
to solve problems.
And I dare say, no matter what you do in life,
all of us have problems to solve.
So, I'm a computer science professor,
so I spend a lot of time with computers.
But if someone had never seen one before,
how would you define a computer for them?
Well, a computer is electronic,
like a phone but it's a rectangle,
and you can type like tick, tick, tick.
And you work on it.
Nice. Do you know any of the parts
that are inside of a computer?
No.
Can I explain a couple of them to you?
Yeah.
So, inside of every computer is some kind of brain
and the technical term for that is CPU,
or central processing unit.
And that's the piece of hardware
that knows how to respond to instructions,
like moving up or down, or left or right,
and knows how to do math like addition and subtraction.
And then there's at least one other type of
hardware inside of a computer called memory
or RAM, if you've heard of this?
I know of memory because you have to memorize stuff.
Yeah, exactly.
And computers even have different types of memory.
They have what's called RAM, random access memory,
which is where your games, where your programs
are stored while they're being used.
But then it also has a hard drive,
or a solid state drive, which is where your data,
your high scores, your documents go,
once you start writing essays and stories in the future.
It stays there.
Stays permanently.
So, even if the power goes out,
the computer can still remember that information.
It's still there because
the computer can't just like delete all of the words itself.
Hopefully not.
Because your fingers could only do that.
Like you have to use your finger to delete
all of the stuff. Exactly.
You have to write.
Yeah, have you heard of an algorithm before?
Yes. Algorithm is a list of instructions to tell people
what to do or like a robot what to do.
Yeah, exactly.
It's just step by step instructions for doing something,
for solving a problem, for instance.
Yeah, so like if you have a bedtime routine,
then at first you say, I get dressed, I brush my teeth,
I read a little story, and then I go to bed.
All right.
Well how about another algorithm?
Like what do you tend to eat for lunch?
Any types of sandwiches you like?
I eat peanut butter.
Let me get some supplies from the cupboard here.
So, should we make an algorithm together?
Yeah.
Why don't we do it this way?
Why don't we pretend like I'm a computer
or maybe I'm a robot, so I only understand your instructions
and so I want you to feed me, no pun intended, an algorithm.
So, step-by-step instructions for solving this problem.
But remember, algorithms, you have to be precise,
you have to give...
The right instructions.
[David] The right instructions.
Just do it for me. So, step one was what?
Open the bag.
[David] Okay. Opening the bag of bread.
[David] Stop. Now what?
Grab the bread and put it on the plate.
[David] Grab the bread and put it on the plate.
Take all the bread back and put it back in there.
[David laughing]
So, that's like an undo command.
Yeah.
Little control Z? Okay.
Take one bread and put it on the plate.
Okay.
Take the lid off the peanut butter.
[David] Okay, take the lid off the peanut butter.
Put the lid down.
[David] Okay. Take the knife.
[David] Take the knife.
[Addison] Put the blade inside the peanut butter
and spread the peanut butter on the bread.
I'm going to take out some peanut butter
and I'm going to spread the peanut butter on the bread.
I put a lot of peanut butter on
because I love peanut butter.
Oh, apparently. I thought I was messing with you here...
No, no it's fine.
But I think you're apparently happy with this.
[Addison] Put the knife down,
and then grab one bread and put it on top
of the second bread, sideways.
Sideways.
Like put it flat on.
Oh, flat ways, okay.
[Addison] And now, done. You're done with your sandwich.
Should we take a delicious bite?
Yep. Let's take a bite.
[David] Okay, here we go.
What would the next step be here?
Clean all this mess up.
[David laughing]
Clean all this mess up, right.
We made an algorithm, step by step instructions
for solving some problem.
And if you think about now,
how we made peanut butter and jelly sandwiches,
sometimes we were imprecise and you didn't give me
quite enough information to do the algorithm correctly,
and that's why I took out so much bread.
Precision, being very, very correct with your instructions
is so important in the real world
because for instance, when you're using the worldwide web
and you're searching for something on Google or Bing...
You want to do the right thing.
[David] Exactly.
So, like if you type in just Google,
then you won't find the answer to your question.
Pretty much everything we do in life is an algorithm,
even if we don't use that fancy word to describe it.
Because you and I are sort of following instructions
either that we came up with ourselves
or maybe our parents told us how to do these things.
And so, those are just algorithms.
But when you start using algorithms in computers,
that's when you start writing code.
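To make that concrete, here is a minimal sketch (my own, not from the video) of the sandwich routine written as code: just an ordered list of steps that the computer, like our pretend robot, follows one at a time.

```python
# A minimal sketch, not from the video: the sandwich "algorithm" as an
# ordered list of steps that a computer (or pretend robot) follows in order.
steps = [
    "Open the bag of bread.",
    "Take exactly one slice and put it on the plate.",
    "Take the lid off the peanut butter and put the lid down.",
    "Pick up the knife and scoop out some peanut butter.",
    "Spread the peanut butter on the slice.",
    "Put a second slice flat on top of the first.",
    "Clean all this mess up.",
]

for number, step in enumerate(steps, start=1):
    print(f"Step {number}: {step}")
```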
[upbeat music]
What do you know about algorithms?
Nothing really, at all honestly.
I think it's just probably a way to store information
in computers.
And I dare say, even though you might not have
put this word on it, odds are you, as a human,
executed multiple algorithms even before you came here today.
Like what were a few things that you did?
I got ready.
Okay. And get ready. What does that mean?
Brushing my teeth, brushing my hair.
[David] Okay.
Getting dressed.
Okay, so all of those, frankly, if we really
dove more deeply, could be broken down into
step-by-step instructions.
And presumably your mom, your dad, someone in the past
sort of programmed you as a human to know what to do.
And then after that, as a smart human,
you can sort of take it from there
and you don't need their help anymore.
But that's kind of what we're doing
when we program computers.
Something maybe even more familiar nowadays,
like odds are you have a cell phone.
Your contacts or your address book
are kept in alphabetical order.
But let me ask you why that is.
Like why does Apple or Google or anyone else
bother alphabetizing your contacts?
I just assumed it would be easier to navigate.
What if your friend happened to be at the very bottom
of this randomly organized list?
Why is that a problem? Like he or she's still there.
I guess it would take a while to get to
while you're scrolling.
That, in and of itself, is kind of a problem,
or it's an inefficient solution to the problem.
So, it turns out that back in my day,
before there were cell phones, everyone's numbers
from my schools were literally printed in a book,
and everyone in my town and my city, my state
was printed in an actual phone book.
Even if you've never seen this technology before,
how would you propose verbally to find John
in this phone book?
I would just flip through
and just look for the J's, I guess.
Yeah. So, let me propose that we start that way.
I could just start at the beginning
and step by step I could just look at each page,
looking for John, looking for John.
Now even if you've never seen this here technology before,
it turns out this is exactly what your phone could be doing
in software, like someone from Google or Apple or the like,
they could write software that uses a technique
in programming known as a loop,
and a loop, as the word implies,
is just sort of do something again and again.
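As a rough sketch (mine, not the video's code), that first page-by-page algorithm is just a loop over the names; the contact list below is made up.

```python
# A minimal sketch of the first algorithm: a loop that checks one name
# at a time, front to back, until it finds the one we want.
def linear_search(names, target):
    for index, name in enumerate(names):
        if name == target:
            return index        # found it: report where
    return None                 # reached the end without finding it

contacts = ["Alice", "Bob", "John", "Zoe"]   # made-up contact list
print(linear_search(contacts, "John"))       # prints 2
```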
What if instead of starting from the beginning
and going one page at a time,
what if I, or what if your phone goes like two pages
or two names at a time?
Would this be correct do you think?
Well you could skip over John, I think.
In what sense?
If he's in one of the middle pages that you skipped over.
Yeah, so sort of accidentally and frankly
with like 50/50 probability,
John could get sandwiched in between two pages.
But does that mean I have to throw
that algorithm out altogether?
Maybe you could use that strategy until you get close
to the section and then switch to going one by one.
Okay, that's nice.
So, you could kind of like go twice as fast
but then kind of pump the brakes as you near your exit
on the highway, or in this case near the J section
of the book.
Exactly.
And maybe alternatively, if I get to like
A, B, C, D, E, F, G, H, I, J, K,
if I get to the K section,
then I could just double back like one page
just to make sure John didn't get sandwiched
between those pages.
So, the nice thing about that second algorithm
is that I'm flying through the phone book
like two pages at a time.
So, 2, 4, 6, 8, 10, 12.
It's not perfect, it's not necessarily correct
but it is if I just take one extra step.
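In code, that second, two-at-a-time idea with the extra fix-up step might look roughly like this (a sketch of my own, assuming the names are in sorted order):

```python
# A hedged sketch of the second algorithm: skip ahead two names at a time,
# and if we overshoot, double back one to check the name we may have skipped.
def two_at_a_time_search(names, target):     # assumes names are sorted
    i = 0
    while i < len(names):
        if names[i] == target:
            return i
        if names[i] > target:                # overshot: target may be the skipped name
            if i > 0 and names[i - 1] == target:
                return i - 1
            return None
        i += 2                               # fly through two names at a time
    if names and names[-1] == target:        # the very last name may have been skipped
        return len(names) - 1
    return None
```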
So, I think it's fixable,
but what your phone is probably doing
and frankly what I and like my parents and grandparents
used to do back in the day was we'd probably go roughly
to the middle of the phone book here,
and just intuitively, if this is an alphabetized phone book
in English, what section am I probably going to
find myself in roughly?
K?
Okay. So, I'm in the K section.
Is John going to be to the left or to the right?
To the left.
Yeah.
So, John is going to be to the left or the right
and what we can do here, though your phone
does something smarter, is tear the problem in half,
throw half of the problem away,
being left with just 500 pages now.
But what might I next do?
I could sort of naively just start at the beginning again,
but we've learned to do better.
I can go roughly to the middle here.
And you can do it again. Yeah, exactly.
So, now maybe I'm in the E section,
which is a little to the left.
So, John is clearly going to be to the right,
so I can again tear the problem roughly in half,
throw this half of the problem away,
and I claim now that if we started with a thousand pages,
now we've gone to 500, 250,
now we're really moving quickly.
Yeah.
[David] And so, eventually I'm hopefully dramatically
left with just one single page
at which point John is either on that page
or not on that page, and I can call him.
Roughly how many steps might this third algorithm take
if I started with a thousand pages
then went to 500, 250, 125,
how many times can you divide 1,000 in half? Maybe?
10.
That's right, roughly 10.
Because in the first algorithm,
looking again for someone like Zoe in the worst case
might have to go all the way through a thousand pages.
But the second algorithm you said was 500,
maybe 501, essentially the same thing.
So, twice as fast.
But this third and final algorithm is sort of fundamentally
faster because you're sort of dividing and conquering it
in half, in half, in half,
not just taking one or two bites out of it at a time.
So, this of course is not how we used to use phone books
back in the day since otherwise they'd be single use only.
But it is how your phone is actually searching for Zoe,
for John, for anyone else, but it's doing it in software.
Oh, that's cool.
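In software, that third, divide-and-conquer algorithm might look roughly like this (a minimal sketch of my own, assuming the names are already sorted alphabetically):

```python
# A minimal sketch of the third algorithm: open to the middle, decide
# left or right, and throw half of the remaining "pages" away each time.
def binary_search(names, target):
    low, high = 0, len(names) - 1
    while low <= high:
        mid = (low + high) // 2
        if names[mid] == target:
            return mid                 # found it on this "page"
        elif names[mid] < target:
            low = mid + 1              # target is to the right; discard the left half
        else:
            high = mid - 1             # target is to the left; discard the right half
    return None

contacts = ["Alice", "Bob", "John", "Zoe"]   # already in sorted order
print(binary_search(contacts, "John"))       # prints 2
# 1,000 pages -> 500 -> 250 -> ... -> 1 in about log2(1000) ≈ 10 steps.
```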
So, here we've happened to focus on searching algorithms,
looking for John in the phone book.
But the technique we just used
can indeed be called divide and conquer,
where you take a big problem and you divide and conquer it,
that is you try to chop it up into smaller,
smaller, smaller pieces.
A more sophisticated type of algorithm,
at least depending on how you implement it,
is something known as a recursive algorithm.
A recursive algorithm is essentially an algorithm
that uses itself to solve the exact same problem
again and again, but chops it smaller, and smaller,
and smaller ultimately.
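A recursive version of the same phone-book search might look like this (again a sketch of my own, not code from the video):

```python
# A hedged sketch of a recursive algorithm: the function calls itself on a
# smaller and smaller half of the problem until only one "page" is left.
def recursive_search(names, target, low=0, high=None):
    if high is None:
        high = len(names) - 1
    if low > high:                       # nothing left to search
        return None
    mid = (low + high) // 2
    if names[mid] == target:
        return mid
    if names[mid] < target:              # search the right half
        return recursive_search(names, target, mid + 1, high)
    return recursive_search(names, target, low, mid - 1)   # search the left half
```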
[upbeat music]
Hi, my name's Patricia.
Patricia, nice to meet you.
Where are you a student at?
I am starting my senior year now at NYU.
Oh nice. And what have you been studying
the past few years?
I studied computer science and data science.
If you were chatting with a non-CS,
non-data science friend of yours,
how would you explain to them what an algorithm is?
Some kind of systematic way of solving a problem,
or like a set of steps to kind of solve
a certain problem you have.
So, you probably recall learning topics
like binary search versus linear search, and the like.
Yeah.
So, I've come here complete with an actual chalkboard
with some magnetic numbers on it here.
How would you tell a friend to sort these?
I think one of the first things we learned was
something called bubble sort.
It was kind of like focusing on smaller bubbles
I guess I would say of the problem,
like looking at smaller segments rather than
the whole thing at once.
What is, I think, very true about what you're hinting at
is that bubble sort really focuses on local, small problems
rather than taking a step back and trying to fix
the whole thing: let's just fix the obvious problems
in front of us. So, for instance, when we're trying to get
from smallest to largest,
and the first two things we see are eight followed by one,
this looks like a problem 'cause it's out of order.
So, what would be the simplest fix,
the least amount of work we can do
to at least fix one problem?
Just switch those two numbers
'cause one is obviously smaller than eight.
Perfect. So, we just swap those two then.
You would switch those again.
Yeah, so that further improves the situation
and you can kind of see it,
that the one and the two are now in place.
How about eight and six?
[Patricia] Switch it again.
Switch those again. Eight and three?
Switch it again.
[fast forwarding]
And conversely, now the one and the two are closer to
where we want them, and coincidentally are exactly
where we want them to be.
So, are we done?
No.
Okay, so obviously not, but what could we do now
to further improve the situation?
Go through it again, but you don't need
to check the last one anymore because we know
that number has bubbled up to the top.
Yeah, because eight has indeed bubbled all the way
to the top. So, one and two?
[Patricia] Yeah, keep it as is.
Okay, two and six?
[Patricia] Keep it as is.
Okay, six and three?
Then you switch it.
Okay, we'll switch or swap those.
Six and four?
[Patricia] Swap it again.
Okay, so four, six and seven?
[Patricia] Keep it.
Okay. Seven and five?
[Patricia] Swap it.
Okay. And then I think per your point,
we're pretty darn close.
Let's go through once more.
[Patricia] One and two? Keep it.
[Patricia] Two three? Keep it.
[Patricia] Three four? Keep it.
[Patricia] Four six? Keep it.
Six five?
[Patricia] And then switch it.
All right, we'll switch this. And now to your point,
we don't need to bother with the ones
that already bubbled their way up.
Now we are a hundred percent sure it's sorted.
Yeah.
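Here is a minimal sketch (my own) of the bubble sort we just walked through on the board, including the optimization of not re-checking values that have already bubbled up:

```python
# A minimal sketch of bubble sort: repeatedly swap adjacent out-of-order
# pairs; after each pass, the largest remaining value has bubbled into place.
def bubble_sort(values):
    for end in range(len(values) - 1, 0, -1):    # ignore values already in place
        swapped = False
        for i in range(end):
            if values[i] > values[i + 1]:        # adjacent pair out of order?
                values[i], values[i + 1] = values[i + 1], values[i]
                swapped = True
        if not swapped:                          # a pass with no swaps: it's sorted
            break
    return values

print(bubble_sort([8, 1, 2, 6, 3, 4, 7, 5]))     # the numbers from the board
```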
And certainly the search engines of the world,
Google and Bing and so forth,
they probably don't keep webpages in sorted order
'cause that would be a crazy long list
when you're just trying to search the data.
But there's probably some algorithm underlying what they do
and they probably similarly, just like we did,
do a bit of work upfront to get things organized
even if it's not strictly sorted in the same way
so that people like you and me and others
can find that same information.
So, how about social media?
Can you envision where the algorithms are in that world?
Maybe for example like TikTok, like the For You page,
'cause those are like recommendations, right?
It's sort of like Netflix recommendations
except more constant because it's just every video
you scroll, it's like that's a new recommendation basically.
And it's based on what you've liked previously,
what you've saved previously, what you search up.
So, I would assume there's some kind of algorithm there
kind of figuring out like what to put on your For You page.
Absolutely. Just trying to keep you presumably
more engaged.
So, the better the algorithm is,
the better your engagement is,
maybe the more money the company then makes on the platform
and so forth.
So, it all sort of feeds together.
But what you're describing really is more
artificially intelligent, if I may,
because presumably there's not someone at TikTok
or any of these social media companies saying,
If Patricia likes this post, then show her this post.
If she likes this post, then show her this other post.
Because the code would sort of grow infinitely long
and there's just way too much content for a programmer
to be having those kinds of conditionals,
those decisions being made behind the scenes.
So, it's probably a little more artificially intelligent.
And in that sense you have topics like neural networks,
and machine learning which really describe
taking as input things like what you watch,
what you click on, what your friends watch,
what they click on, and sort of trying to infer
from that instead, what should we show Patricia
or her friends next?
Oh, okay. Yeah. Yeah.
That makes like the distinction more...
Makes more sense now.
Nice. Yeah.
[upbeat music]
I am currently a fourth year PhD student at NYU.
I do robot learning, so that's half and half
robotics and machine learning.
Sounds like you've dabbled with quite a few algorithms.
So, how does one actually research algorithms
or invent algorithms?
The most important way is just trying to think about
inefficiencies, and also think about connecting threads.
The way I think about it is that an algorithm, for me,
is not just about the way of doing something,
but it's about doing something efficiently.
Learning algorithms are practically everywhere now.
Google, I would say for example,
is learning every day about like,
Oh what articles, what links might be better than others?
And re-ranking them.
There are recommender systems all around us, right?
Like content feeds and social media,
or you know, like YouTube or Netflix.
What we see is in large part determined by these kinds of
learning algorithms.
Nowadays there's a lot of concerns
around some applications of machine learning
like deep fakes where it can kind of learn how I talk
and learn how you talk and even how we look,
and generate videos of us.
We're doing this for real, but you could imagine
a computer synthesizing this conversation eventually.
Right.
But how does it even know what I sound like
and what I look like, and how to replicate that?
All of these learning algorithms that we talk about, right?
What goes in there is just
lots and lots of data.
So, data goes in, something else comes out.
What comes out depends on whatever objective function
you optimize for.
Where is the line between algorithms that
play games with and without AI?
I think when I started off my undergrad,
AI and machine learning were not nearly
as synonymous as they are now.
Okay.
And even in my undergraduate AI class,
they taught a lot of classical algorithms for gameplay.
Like for example, the A star search, right?
That's a very simple example of how you can play a game
without having anything learned.
This is very much, oh you are at a game state,
you just search down, see what are the possibilities
and then you pick the best possibility that it can see,
versus what you think about when you think about,
ah yes, gameplay like AlphaZero for example,
or AlphaStar, or there are a lot of, you know,
like fancy new machine learning agents that are
even learning very difficult games like Go.
And those are learned agents, as in they are getting better
as they play more and more games.
And as they get more games, they kind of
refine their strategy based on the data that they've seen.
And once again, this high level abstraction
is still the same.
You see a lot of data and you'll learn from that.
But the question is, what is the objective function
that you're optimizing for?
Is it winning this game?
Is it forcing a tie or is it, you know,
opening a door in a kitchen?
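To make the classical, non-learned kind of search concrete, here is a toy A* sketch of my own (the grid, walls, and costs are all made up, not anything from this conversation): from a state, expand the possible next states and always explore the one that looks best by cost so far plus a heuristic estimate.

```python
# A toy sketch of classical A* search on a small grid (not the speaker's code).
import heapq

def a_star(start, goal, walls, width, height):
    def h(cell):                                   # heuristic: Manhattan distance to goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]              # (estimated total cost, cost so far, cell)
    best_cost = {start: 0}
    while frontier:
        _, cost, cell = heapq.heappop(frontier)
        if cell == goal:
            return cost                            # length of a shortest path
        x, y = cell
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in walls:
                new_cost = cost + 1
                if new_cost < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost + h(nxt), new_cost, nxt))
    return None                                    # no path exists

print(a_star((0, 0), (3, 3), walls={(1, 1), (2, 1)}, width=4, height=4))   # prints 6
```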
So, if the world is very much focused on supervised,
unsupervised, and reinforcement learning now,
what comes next five, ten years, where is the world going?
I think that this is just going to be more and more,
I don't want to use the word encroachment,
but that's what it feels like of algorithms
into our everyday life.
Like even when I was taking the train here, right?
The trains are being routed with algorithms,
but this has existed for you know, like 50 years probably.
But as I was coming here, as I was checking my phone,
those are different algorithms,
and you know, they're kind of getting all around us,
getting there with us all the time.
They're making our life better in most places, in most cases.
And I think that's just going to be a continuation
of all of those.
And it feels like they're even in places
you wouldn't expect, and there's just so much data
about you and me and everyone else online
and this data is being mined and analyzed,
and influencing things we see and hear it would seem.
So, there is sort of a counterpoint which might be good
for the marketers, but not necessarily good for you and me
as individuals.
We are human beings, but to someone
we might be just a pair of eyes
carrying a wallet, there to buy things.
But there is so much more potential for these algorithms
to just make our life better without
changing much about our life.
[upbeat music]
I'm Chris Wiggins. I'm an associate professor
of Applied Mathematics at Columbia.
I'm also the chief data scientist of the New York Times.
The data science team at the New York Times
develops and deploys machine learning
for newsroom and business problems.
But I would say the things that we do mostly, you don't see,
but it might be things like personalization algorithms,
or recommending different content.
And do data scientists, which is a phrase rather distinct
from computer scientists,
still think in terms of algorithms
as driving a lot of it?
Oh absolutely, yeah.
In fact, in data science in academia,
often the role of the algorithm is
the optimization algorithm that helps you find the best
model or the best description of a data set.
And in data science in industry, the goal
is often centered around an algorithm
which becomes a data product.
So, a data scientist in industry might be
developing and deploying the algorithm,
which means not only understanding the algorithm
and its statistical performance,
but also all of the software engineering
around systems integration, making sure that that algorithm
receives input that's reliable and has output that's useful,
as well as I would say the organizational integration,
which is how does a community of people
like the set of people working at the New York Times
integrate that algorithm into their process?
Interesting. And I feel like AI based startups
are all the rage and certainly within academia.
Are there connections between AI
and the world of data science?
Oh, absolutely.
The algorithms therein,
can you connect those dots for...
You're right that AI as a field has really exploded.
I would say particularly many people experienced a ChatBot
that was really, really good.
Today, when people say AI,
they're often thinking about large language models,
or they're thinking about generative AI,
or they might be thinking about a ChatBot.
One thing to keep in mind is a ChatBot is a special case
of generative AI, which is a special case of using
large language models, which is a special case of using
machine learning generally,
which is what most people mean by AI.
You may have moments that are what John McCarthy called
'Look Ma, no hands' results,
where you do some fantastic trick and you're not quite sure
how it worked.
I think it's still very much early days.
Large language models are still at the point of
what might be called alchemy, in that people are building
large language models without a real clear,
a priori sense of what the right design is
for a given problem.
Many people are trying different things out,
often in large companies where they can afford
to have many people trying things out,
seeing what works, publishing that,
instantiating it as a product.
And that itself is part of the scientific process
I would think too.
Yeah, very much. Well, science and engineering,
because often you're building a thing
and the thing does something amazing.
To a large extent we are still looking for
basic theoretical results around why
deep neural networks generally work.
Why are they able to learn so well?
They're huge models, with billions of parameters,
and it's difficult for us to interpret
how they're able to do what they do.
And is this a good thing, do you think?
Or an inevitable thing that we, the programmers,
we, the computer scientists, the data scientists
who are inventing these things,
can't actually explain how they work?
Because I feel like friends of mine in industry,
even when it's something simple and relatively familiar
like auto complete, they can't actually tell me
why that name is appearing at the top of the list.
Whereas years ago when these algorithms were more
deterministic and more procedural,
you could even point to the line that made that name
bubble up to the top.
[Chris] Absolutely.
So, is this a good thing, a bad thing,
that we're sort of losing control perhaps in some sense
of the algorithm?
It has risks.
I don't know that I would say that it's good or bad,
but I would say there's lots of scientific precedent.
There are times when an algorithm works really well
and we have finite understanding of why it works
or a model works really well
and sometimes we have very little understanding
of why it works the way it does.
In the classes I teach, we certainly spend a lot of time on
fundamentals, algorithms that have been taught in classes
for decades now, whether it's binary search,
linear search, bubble sort, selection sort, or the like,
but if we're already at the point where I can pull up
ChatGPT, copy-paste a whole bunch of numbers or words
and say, Sort these for me,
does it really matter how ChatGPT is sorting it?
Does it really matter to me as the user
how the software is sorting it?
Do these fundamentals become more dated and less important
do you think?
Now you're talking about the ways in which code
and computation are a special case of technology, right?
So, for driving a car, you may not necessarily need
to know much about organic chemistry,
even though the organic chemistry is how the car works.
So, you can drive the car and use it in different ways
without understanding much about the fundamentals.
So, similarly with computation, we're at a point
where the computation is so high level, right?
You can import scikit-learn and you can go from zero
to machine learning in 30 seconds.
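For instance, a few lines like these (a hedged sketch using scikit-learn's built-in iris dataset, not anything from this interview) really do go from zero to a fitted machine learning model, with the underlying optimization algorithm hidden inside the library:

```python
# A minimal "zero to machine learning in 30 seconds" sketch with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                    # a small built-in dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)            # the solver inside is the "algorithm"
model.fit(X_train, y_train)                          # optimization happens here, out of sight
print(model.score(X_test, y_test))                   # accuracy on held-out data
```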
It depends on what level you want to understand
the technology at, where in the stack, so to speak.
It's possible to understand it and make wonderful things
and advance the world without understanding it
at the particular level of somebody who actually might have
originally designed the actual optimization algorithm.
I should say though, for many of the optimization
algorithms, there are cases where an algorithm
works really well and we publish a paper,
and there's a proof in the paper,
and then years later people realize
actually that proof was wrong and we're really
still not sure why that optimization works,
but it works really well or it inspires people
to make new optimization algorithms.
So, I do think that the goal of understanding algorithms
is loosely coupled to our progress
in advancing great algorithms, but they don't always
necessarily have to require each other.
And for those students especially,
or even adults who are thinking of now steering into
computer science, into programming,
who were really jazzed about heading in that direction
up until, for instance, November of 2022,
when all of a sudden for many people
it looked like the world was now changing
and now maybe this isn't such a promising path,
this isn't such a lucrative path anymore.
Are LLMs, are tools like ChatGPT a reason not to perhaps
steer into the field?
Large language models are a particular architecture
for predicting, let's say the next word,
or a set of tokens more generally.
The algorithm comes in when you think about
how is that LLM to be trained or also how to be fine tuned.
So, the P of GPT is a pre-trained algorithm.
The idea is that you train a large language model
on some corpus of text, could be encyclopedias,
or textbooks, or what have you.
And then you might want to fine tune that model
around some particular task or
some particular subset of texts.
So, both of those are examples of training algorithms.
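As a deliberately tiny illustration (my own, and nothing like a real GPT), here is a toy next-word predictor "trained" by counting which word follows which in a made-up corpus; real large language models learn billions of parameters, but the flavor of the objective, predicting the next token, is the same.

```python
# A toy sketch of next-word prediction: count word pairs in a corpus
# ("pre-training"), then predict the most common continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the food".split()   # made-up corpus

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1                    # count how often nxt follows word

def predict_next(word):
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))                       # prints 'cat'
```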
So, I would say people's perception
of artificial intelligence has really changed a lot
in the last six months, particularly around November of 2022
when people experienced a really good ChatBot.
The technology, though, had already been around before.
Academics had already been working with GPT-3
before that, and GPT-2 and GPT-1.
And for many people it sort of opened up this conversation
about what is artificial intelligence
and what could we do with this?
And what are the possible good and bad, right?
Like any other piece of technology.
Kranzberg's first law of technology,
technology is neither good, nor bad, nor is it neutral.
Every time we have some new technology,
we should think about its capabilities
and the good, and the possible bad.
[David] As with any area of study,
algorithms offer a spectrum from the most basic
to the most advanced.
And even if right now, the most advanced of those algorithms
feels out of reach because you just
don't have that background,
with each lesson you learn, with each algorithm you study,
that end game becomes closer and closer,
such that it will, before long, be accessible to you,
and you will be at the most advanced end of that spectrum.