Oh my god Bret Victor is SO SMART. He's one of the smartest humans I've ever met.
In 2006 he wrote Magic Ink, a paper that changed the way I thought about UI design forever.
We tried to get him to work for us at Humanized, and then later at Mozilla, but a guy like that can work anywhere he wants. He worked at Apple for a while but has recently gone freelance again.
You sad about the death of last generation's computer pioneers? Well, here's a guy in our generation who is going to be world-famous for doing something amazing. I'm not sure what it is, yet, but it might be this:
He doesn't really mean to kill math, of course. He means to kill the user interface of math.
Which is to say: you know that activity where you represent a problem by writing some squiggles on a piece of paper... and then you pick one of several arcane rules that you know, and apply the rule to the squiggles, to come up with a slightly different set of squiggles, and you write that set beneath the first set... and you keep doing this until you either somehow produce an answer or you give up?
Yeah, that activity is not math. That activity is math's user interface, and it's a terrible one.
Bret points out that in the Roman empire people thought multiplication was this incredibly arcane and difficult task that only a very few initiates would ever understand. But it turned out this wasn't because multiplication was conceptually difficult, it's just because Roman numerals totally suck for multiplying. Once people switched to Arabic numerals, multiplication became something we could teach to second graders.
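The grade-school algorithm works precisely because Arabic numerals are positional. As a toy illustration (my own sketch, not Bret's), here's long multiplication reduced to its digit-by-digit essence:

```python
def long_multiply(a, b):
    """Grade-school long multiplication, digit by digit.

    This only works because Arabic numerals are positional: the digit
    at position i stands for digit * 10**i, so every partial product
    is just a shift-and-add. Roman numerals have no such structure,
    which is why the same operation was arcane for them.
    """
    digits_a = [int(d) for d in str(a)][::-1]  # least-significant first
    digits_b = [int(d) for d in str(b)][::-1]
    total = 0
    for i, da in enumerate(digits_a):
        for j, db in enumerate(digits_b):
            total += da * db * 10 ** (i + j)
    return total

print(long_multiply(123, 456))  # 56088
```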
So maybe the squiggles on paper are holding us back in the same way. I know that I was often frustrated by the arbitrariness of the notation when I was doing higher math in college. More than the notation, what was frustrating was what they didn't teach us.
For instance they walked us through lots of famous proofs, but they never taught us how mathematicians came up with those proofs, or how to invent one ourselves (other than for trivial and contrived textbook problems). Sure there are a few well-known tricks like induction and contradiction, but mostly I remember a feeling of blind groping as I tried one random technique after another, never knowing whether I was getting closer or farther away. Same thing for doing integrals: there's no general method other than "try to think of a function that has a derivative similar to what you're looking at". So mostly integration is a matter of repeated guess-and-derive, or of applying rules at random looking for anything that works. A lot of higher math (and therefore physics) always had this flavor for me. Whatever real mathematicians and physicists were doing, it was apparent that most of the work was in the gap between one step of the derivation and the next, the gap where they had some mysterious flash of insight that told them what to do next. But since that flash of insight never made it onto the paper, none of the teachers ever talked about it.
(I think this is why so many people learn to hate math: in math you rise as far as your innate mathematical intuition can take you, then you get stuck because everything past that is a black art that nobody knows how to teach you.)
Bret doesn't have the answer yet about what a new user interface for math would look like, but he's asking some verrrrry interesting questions. His writings include some cool interactive visualizations, which you can play with to help yourself gain that oh-so-vital intuition about how systems behave.
A lot of people have apparently looked at these and said, "So this is to help people understand the equations?" but Bret is like "no, you're missing the point, this REPLACES the equations". A set of equations is an abstraction to describe the behavior of a system; an interactive drawing is also an abstraction to describe the behavior of a system, but it's a better one because you can poke it and it moves, and the human brain is evolved to be really good at figuring out what something is based on how it moves when we poke it. It's not evolved to manipulate squiggles on paper.
Moving from squiggle manipulation to interactive visualization forces us to reconsider what math is for. Is the goal always to "solve" something and get a number or equation which we call an "answer"? Or do we just fall into thinking that way because looking for an "answer" gives us a convenient stopping-point for the squiggle-manipulation method? With an interactive drawing, finding the point that makes some quantity equal some other quantity -- "solving" -- is a trivial matter of swiping around until something matches up the way you want. But there's so much more you can do beyond just "solving"; you can keep exploring, discovering new questions to ask: what happens if I do this? Oh, did you see what happened when I cranked this input all the way up and the other one all the way down? Why was that?
Relatedly, does it matter if we can't solve the gravitational three-body problem in closed form, if we can have an arbitrarily accurate computer simulation based on iterative methods, that any child can play with and gain a delightful intuition of how a three-body gravitational system behaves?
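To make that concrete, here's a minimal sketch of such an iterative simulation -- naive Euler integration of Newtonian gravity in 2D, with made-up units and masses. It's crude, but it's exactly the kind of engine you could hang an interactive visualization off of:

```python
import math

def step(bodies, dt=0.001, G=1.0):
    """Advance a list of gravitating bodies by one Euler step.

    Each body is a dict with mass "m", position ("x", "y"), and
    velocity ("vx", "vy"). No closed-form solution needed: just
    accumulate each body's acceleration from every other body,
    then nudge velocities and positions forward by dt.
    """
    for b in bodies:
        ax = ay = 0.0
        for other in bodies:
            if other is b:
                continue
            dx = other["x"] - b["x"]
            dy = other["y"] - b["y"]
            r = math.hypot(dx, dy)
            # Newtonian gravity: acceleration toward the other body,
            # proportional to its mass, falling off as 1/r**2.
            ax += G * other["m"] * dx / r**3
            ay += G * other["m"] * dy / r**3
        b["vx"] += ax * dt
        b["vy"] += ay * dt
    for b in bodies:
        b["x"] += b["vx"] * dt
        b["y"] += b["vy"] * dt
```

Wrap that loop in a canvas that redraws each frame and lets you drag the bodies around, and a kid can develop an intuition for three-body chaos that no closed-form formula would ever give them.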
If we could create a new UI for math, what would people do with it? Well, maybe it's nothing beyond a curiosity. Maybe people already know all the math they have any use for in daily life. How often do you say "if only I had a way to solve a differential equation right now?"
But maybe not. The Romans probably thought that the plebeians would never have a use for multiplication. But it turns out that if you understand multiplication then you find uses for it every day. IBM famously didn't think anybody would have a use for a computer in their home, until Apple proved otherwise. When tools get cheaper and easier, people find whole new uses for them. They apply them not just to different problems but to different types of problems.
The basic concept of a system of differential equations isn't hard to grasp at all. "This jet is burning fuel to accelerate. But the faster it goes, the more air resistance it runs into, which slows it down. The amount it speeds up or slows down also depends on how much it weighs, which decreases as it burns fuel." Differential equations are a way of describing scenarios where several interrelated variables are changing over time and the value of one variable at any instant affects the rate of change of another variable. The concept is easy, but answering questions like "how much fuel does the jet have to burn to reach Mach 1" turns out to require some astoundingly difficult squiggle-manipulation.
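To show how little machinery the iterative approach needs, here's that jet scenario sketched as a few lines of Python. All the numbers are invented for illustration; the point is the structure -- each variable's rate of change depending on the current value of the others:

```python
def jet_speed(thrust=200_000.0, drag_coeff=0.5, burn_rate=50.0,
              mass=40_000.0, dt=0.1, steps=600):
    """Euler integration of the jet scenario (all numbers made up):

        dv/dt = (thrust - drag_coeff * v**2) / mass
        dm/dt = -burn_rate

    Instead of solving the differential equations symbolically, just
    step time forward: at each instant, the current speed sets the
    drag, and the current mass sets how much the thrust accelerates
    the jet. Returns (speed, remaining mass) after steps * dt seconds.
    """
    v = 0.0
    for _ in range(steps):
        drag = drag_coeff * v * v          # air resistance grows with speed
        v += (thrust - drag) / mass * dt   # speed up, fighting drag
        mass -= burn_rate * dt             # burn fuel, get lighter
    return v, mass
```

Answering "how fast is it going after a minute?" this way is just arithmetic in a loop -- no astoundingly difficult squiggle-manipulation required.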
But situations with variable, interdependent rates of change are everywhere. If you were really good at differential equations -- or whatever the post-Kill-Math replacement is for them -- you might find reasons to use them every day. You would instinctively see them under the surface of the physical, ecological, and economic systems all around you. You'd feed them (somehow) into your interactive visualizer, or whatever it is, and play with the simulation until you understood how to subtly prod the real system to elicit the results you want.
You would, in short, have what would seem like superhuman intelligence to the people of today. You'd do things we could barely dream of.
Imagine if we could use computers to make that level of understanding accessible to people of merely average brainpower.
Man. That's the kind of thing Silicon Valley should be working on. Not "how do we get users to report everything they do to us through their cell phones so we can sell behaviorally targeted advertising".
Knowing and Teaching Elementary Mathematics
Speaking of math education...
Sushu's mom is the author of a book, Knowing and Teaching Elementary Mathematics. The other night I decided I should try reading it and find out what the deal is.
She was interested in the question of why American kids score so much lower on math than Chinese kids do, despite American math teachers having more years of college and formal training. So she went out and did original research, interviewing American and Chinese teachers of elementary-school math to compare their methods.
Again and again, what she found out is that the difference was not in the children, nor in the teaching methods, but in how deep the teacher's understanding was of the material. The Chinese teachers had "a profound understanding of elementary mathematics"; they knew arithmetic backwards and forwards and were able to demonstrate lots of alternate methods of reaching the same answer, to invent lots of real-world examples, and to prove mathematically why a certain procedure worked.
The American teachers had only a shallow understanding focused on rote procedure. E.g. they could tell a kid what to write down in order to multiply two three-digit numbers, but they couldn't explain why executing those steps produces a correct answer. So no wonder they would run into trouble as soon as students started asking questions or showing signs of misunderstanding. No wonder so many kids grow up dreading e.g. long division -- they experience it as an arbitrary algorithm they are made to execute, without rhyme or reason or internal logic.
The most head-slappingly awful bit was when Liping asked teachers to demonstrate how they would teach division of one fraction by another fraction. (One and three quarters divided by one half = ?)
Only 9 out of 21 American math teachers could even do the problem correctly themselves! And only one out of those 21 was able to come up with a story or real-world example that correctly demonstrated the meaning of dividing by one-half. The rest of the teachers were hopelessly confused and started talking about dividing one-and-three-quarters pizzas between two people or something similar. That's not division by 1/2, that's division by 2 or multiplication by 1/2.
Twenty out of twenty-one American elementary school teachers didn't know the difference between dividing by 1/2 and dividing by 2! This was just one of several places in this book that made my jaw drop.
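The distinction the teachers missed is easy to state with exact arithmetic (a quick check using Python's fractions module):

```python
from fractions import Fraction

one_and_three_quarters = Fraction(7, 4)

# Dividing BY one half asks: "how many halves fit into 1 3/4?"
by_half = one_and_three_quarters / Fraction(1, 2)

# Dividing by two (= multiplying by 1/2) asks: "what's half of 1 3/4?"
# -- the pizza-between-two-people story the teachers kept telling.
by_two = one_and_three_quarters / 2

print(by_half)  # 7/2, i.e. three and a half
print(by_two)   # 7/8
```

Three and a half versus seven eighths: not remotely the same question.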
TL;DR: American kids suck at math because their teachers suck at math.
Godel, Escher, Bach
I just finished it and I looooooove this book. Love it! It has earned a place as one of my top three or four favorite books ever, for sure.
The cover of my edition says Gödel, Escher, Bach: An Eternal Golden Braid: A metaphorical fugue on minds and machines in the spirit of Lewis Carroll by Douglas Hofstadter. What a mouthful! What a pretentious title! What the hell is this book, anyway?
The title is three dudes' names, so I opened the book thinking it was going to be some kind of triple biography. Nope. It's... something else. A very strange book, not like anything else I've ever read. There's not an easy word to describe GEB, either its structure or its subject matter.
I started the book years ago. 2008, I think, and I just finished it today. This is not the kind of book you try to tackle in one sitting. It is thick, dense, demanding. It is for math/computer-science geeks what Ulysses is for literature geeks. It's like a whole year of college squeezed between two covers.
My favorite books are often the ones that feel like an invitation to come live inside the author's brain for a while. GEB is certainly one of these: not so much a single-subject book of nonfiction as it is a tour through Douglas Hofstadter's obsessions, following the connections that he sees between seemingly unrelated topics.
Specifically: GEB is about the deep connections between mathematics, music, and art. It's focused on concepts of formal systems, self-referentiality, self-contradiction, infinite loops, and paradoxes, and how they're expressed in math by Gödel, drawings by Escher, and music by Bach.
Along the way, there's a series of puzzles, exercises, brain-teasers, and Zen koans to ponder; this book is very interactive. If you're into that kind of thing, you don't just read this book, you do it, like a kind of Activity Book for grown-ups. All of these exercises tie back into number theory and the proof of Gödel's Incompleteness Theorem (yes, even the koans!). So do the music and the art - he's not just throwing in Escher drawings and Bach fugues because he thinks they're cool, but in order to draw analogies with the mathematics, to shine a light on it from many different angles, and to get you to think about it in a creative and intuitive way, not just a mechanistic logical way.
Structurally, GEB alternates between nonfiction "chapters" and fictional "dialogues". The chapters teach you about math, music, and art in a rambling, digressive, conversational, but basically straightforward way. The dialogues are something else. They star Achilles and the Tortoise (borrowed from Zeno's Paradox) and occasionally also a crab, a sloth, and an ant colony, having absurdist adventures and arguing about logical paradoxes.
Some of these dialogues structurally replicate certain musical forms, such as a six-voice fugue or a "crab canon" (a piece of music that harmonizes with itself when played backwards). Most of the dialogues involve seriously lateral thinking; some are shaggy dog stories; some are setups for elaborate multilevel puns; usually the content of the dialogue reflects somehow on the structure of the dialogue or of the book as a whole -- this is a self-referential book about self-referentiality.
The dialogues are kind of like the Shadow Play Girls in episodes of Shojo Kakumei Utena, who seem nonsensical on the surface but serve to cast some metaphorical light on the meaning of the other events in the episode. When you get to each chapter of GEB you're ready to learn a new math concept because you've already been primed for it by some completely ridiculous Tortoise/Achilles shenanigans.
Hofstadter really wants you to understand Gödel's Incompleteness Theorem - not just in an approximate and hand-wavy way, and not just in a rote mathematical recitation way. He wants you to understand it on a deep and intuitive level. He wants this so badly that he's willing to spend seven hundred pages on math lessons, music theory lessons, thought experiments, riddles, and bizarre digressions about a smartass tortoise all in order to build up your intuition about number theory, just so that when you finally get to Gödel's proof you will have the background you need to get your mind COMPLETELY FUCKING BLOWN by it.
And it's pretty mind-blowing stuff. By encoding a self-referential paradox into a mathematical theorem, Gödel proved that any consistent formal system complex enough to express arithmetic will contain statements which are true, but can never be proven within that system. Gödel destroyed every mathematician's dream of ever having a perfectly complete and consistent theory of mathematics. He did it in the 1930s, around the same time that quantum mechanics gave us the Heisenberg Uncertainty Principle; it was a time when humanity discovered that there are limits to our knowledge, certain things the universe just doesn't allow us to know completely. Western philosophy is still trying to recover.
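For a taste of how Gödel smuggles self-reference into arithmetic, here's a toy sketch of Gödel numbering, the encoding trick at the heart of the proof. The symbol table below is invented for this sketch; Gödel's real scheme encodes the symbols of formal number theory. But the mechanism -- primes raised to symbol codes, uniquely decodable thanks to the fundamental theorem of arithmetic -- is the same:

```python
# A made-up symbol table for illustration only. Gödel's actual scheme
# assigns codes to the symbols of formal number theory.
SYMBOLS = "0S+*=()"

def primes():
    """Yield 2, 3, 5, 7, ... by trial division. Slow but clear."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula):
    """Encode a string of symbols as ONE integer: the i-th prime
    raised to (code of the i-th symbol). Now statements ABOUT
    formulas become statements about numbers."""
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** (SYMBOLS.index(ch) + 1)
    return g

def decode(g):
    """Recover the formula by factoring: unique factorization
    guarantees the encoding is reversible."""
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(SYMBOLS[e - 1])
    return "".join(out)

print(godel_number("0=0"))  # 2430 = 2**1 * 3**5 * 5**1
```

Once every formula is a number, arithmetic can talk about formulas -- including, with enough cleverness, the very formula doing the talking. That's the self-referential hook the whole proof hangs on.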
Hofstadter has a lot of thoughts about how the Incompleteness theorem applies to other areas. Art and music, of course, but also computer science - there's a lot in here about computability and the Halting Problem. There's a chapter on genetics: is the genetic code a "sufficiently complex system of mathematics"? There's a bunch of stuff about the meaning of "meaning" -- how much of a proof's meaning is in the symbols of the proof, and how much is in the mind interpreting those symbols? How about the meaning of a painting, or a symphony? But mostly, he wants to talk about how the Incompleteness theorem relates to artificial intelligence and human consciousness. It seems that formal rule-based systems like computer programs and mathematical proofs are vulnerable to getting caught in paradoxes like the one in Gödel's proof, or infinite loops like the ones Turing discovered. Yet humans have the ability to deal with infinite loops and paradoxes -- we can recognize when we're caught in one, and make a creative leap out of the rule system to resolve it.
Does this ability to leap outside of a rule system mean that the human mind is fundamentally different from a computer program? And does that mean that AI is impossible? But how can that be, when the brain is made of neurons which operate according to predictable physical laws? Or is our mind governed by a higher-level rule system that gives us the ability to leap outside of lesser rule systems? If so, what happens when we try to leap outside of that one? Are the sensations of consciousness and free will related to the brain's capacity for self-referential thoughts? What is the equivalent of Gödel for the brain - are there things we're not allowed to know about ourselves, or thoughts we're incapable of thinking?
Central to Hofstadter's thesis is the difference between logical, mechanistic thought and creative, intuitive leaps. So it's appropriate that he leads you to his point using both methods. The chapters walk you through the logical explanation while the dialogues encourage you to make the leaps of intuition for yourself. Again, it's a self-referential book about self-reference: the form reflects the content.
I don't know any other book that has blown my mind so many times.