If you didn't already see it on Not The User's Fault, here's the preview video for my HTML5-based, multitouch-interface, comic-drawing webapp. (Now called "Pencilbox" - thanks Googleshng and Ben!)
Not shown in the video: the fact that it's got unlimited undo/redo history; you undo by making a counterclockwise circle gesture with your thumb, and redo with a clockwise circle. Also, your history is backed up to a database on the web server, so you can close the page, reopen it, and not only still have your picture, but still be able to undo stuff.
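The undo/redo behavior described above is the classic two-stack pattern. Here's a minimal sketch in Python (my own toy illustration; Pencilbox's actual code is a webapp and surely differs) showing why persisting history server-side preserves redo as well as undo: save both stacks, reload them, and both gestures still work.

```python
class History:
    """Unlimited undo/redo as two stacks of drawing actions."""

    def __init__(self):
        self.undo_stack = []   # actions already applied to the picture
        self.redo_stack = []   # actions undone, candidates for redo

    def do(self, action):
        self.undo_stack.append(action)
        self.redo_stack.clear()  # a new action invalidates the redo branch

    def undo(self):
        if self.undo_stack:
            self.redo_stack.append(self.undo_stack.pop())

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(self.redo_stack.pop())


h = History()
for stroke in ["line", "circle", "fill"]:
    h.do(stroke)
h.undo()   # counterclockwise thumb circle
h.undo()
h.redo()   # clockwise thumb circle
# picture now reflects ["line", "circle"]; "fill" is still redoable
```

Since both stacks are plain lists of actions, writing them to a database and reading them back restores the full history, which is why closing and reopening the page doesn't cost you your undo depth.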
He doesn't really mean to kill math, of course. He means to kill the user interface of math.
Which is to say: you know that activity where you represent a problem by writing some squiggles on a piece of paper... and then you pick one of several arcane rules that you know, and apply the rule to the squiggles, to come up with a slightly different set of squiggles, and you write that set beneath the first set... and you keep doing this until you either somehow produce an answer or you give up?
Yeah, that activity is not math. That activity is math's user interface, and it's a terrible one.
Bret points out that in the Roman empire people thought multiplication was this incredibly arcane and difficult task that only a very few initiates would ever understand. But it turned out this wasn't because multiplication was conceptually difficult; it was just that Roman numerals totally suck for multiplying. Once people switched to Arabic numerals, multiplication became something we could teach to second graders.
So maybe the squiggles on paper are holding us back in the same way. I know that I was often frustrated by the arbitrariness of the notation when I was doing higher math in college. More than the notation, what was frustrating was what they didn't teach us.
For instance, they walked us through lots of famous proofs, but they never taught us how mathematicians came up with those proofs, or how to invent one ourselves (other than for trivial and contrived textbook problems). Sure there are a few well-known tricks like induction and contradiction, but mostly I remember a feeling of blind groping as I tried one random technique after another, never knowing whether I was getting closer or farther away. Same thing for doing integrals: there's no general method other than "try to think of a function that has a derivative similar to what you're looking at". So mostly integration is a matter of repeated guess-and-derive, or of applying rules at random looking for anything that works. A lot of higher math (and therefore physics) always had this flavor for me. Whatever real mathematicians and physicists were doing, it was apparent that most of the work was in the gap between one step of the derivation and the next, the gap where they had some mysterious flash of insight that told them what to do next. But since that flash of insight never made it onto the paper, none of the teachers ever talked about it.
(I think this is why so many people learn to hate math: in math you rise as far as your innate mathematical intuition can take you, then you get stuck because everything past that is a black art that nobody knows how to teach you.)
Bret doesn't have the answer yet about what a new user interface for math would look like, but he's asking some verrrrry interesting questions. His writings include some cool interactive visualizations, which you can play with to help yourself gain that oh-so-vital intuition about how systems behave.
A lot of people have apparently looked at these and said, "So this is to help people understand the equations?" but Bret is like "no, you're missing the point, this REPLACES the equations". A set of equations is an abstraction to describe the behavior of a system; an interactive drawing is also an abstraction to describe the behavior of a system, but it's a better one because you can poke it and it moves, and the human brain is evolved to be really good at figuring out what something is based on how it moves when we poke it. It's not evolved to manipulate squiggles on paper.
Moving from squiggle manipulation to interactive visualization forces us to reconsider what math is for. Is the goal always to "solve" something and get a number or equation which we call an "answer"? Or do we just fall into thinking that way because looking for an "answer" gives us a convenient stopping-point for the squiggle-manipulation method? With an interactive drawing, finding the point that makes some quantity equal some other quantity -- "solving" -- is a trivial matter of swiping around until something matches up the way you want. But there's so much more you can do beyond just "solving"; you can keep exploring, discovering new questions to ask: what happens if I do this? Oh did you see what happened when I cranked this input all the way up and the other one all the way down? Why was that?
Relatedly, does it matter if we can't solve the gravitational three-body problem in closed form, if we can have an arbitrarily accurate computer simulation based on iterative methods, that any child can play with and gain a delightful intuition of how a three-body gravitational system behaves?
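To make "iterative methods" concrete, here's a minimal sketch in Python (my own toy example; the post names no particular implementation) of the approach: instead of hunting for a closed-form solution, step Newtonian gravity forward in small time increments. A real simulator would use a better integrator and units, but the idea is exactly this simple.

```python
G = 1.0     # gravitational constant in toy units
DT = 0.001  # time step

def step(bodies, dt=DT):
    """Advance a list of bodies one time step (semi-implicit Euler).

    Each body is a dict with mass "m", position ("x", "y"),
    and velocity ("vx", "vy").
    """
    # Accumulate pairwise gravitational accelerations.
    acc = []
    for i, a in enumerate(bodies):
        ax = ay = 0.0
        for j, b in enumerate(bodies):
            if i == j:
                continue
            dx, dy = b["x"] - a["x"], b["y"] - a["y"]
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * b["m"] * dx / r3
            ay += G * b["m"] * dy / r3
        acc.append((ax, ay))
    # Update velocities first, then positions from the new velocities.
    for body, (ax, ay) in zip(bodies, acc):
        body["vx"] += ax * dt
        body["vy"] += ay * dt
        body["x"] += body["vx"] * dt
        body["y"] += body["vy"] * dt


bodies = [
    {"m": 1.0, "x": -1.0, "y": 0.0, "vx": 0.0, "vy": -0.3},
    {"m": 1.0, "x": 1.0, "y": 0.0, "vx": 0.0, "vy": 0.3},
    {"m": 0.5, "x": 0.0, "y": 2.0, "vx": 0.4, "vy": 0.0},
]
for _ in range(1000):
    step(bodies)
```

Hook `step` up to a canvas and a couple of draggable sliders and you have the kind of thing a child can poke at, no closed-form solution required.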
If we could create a new UI for math, what would people do with it? Well, maybe it's nothing beyond a curiosity. Maybe people already know all the math they have any use for in daily life. How often do you say "if only I had a way to solve a differential equation right now?"
But maybe not. The Romans probably thought that the plebeians would never have a use for multiplication. But it turns out that if you understand multiplication then you find uses for it every day. IBM famously didn't think anybody would have a use for a computer in their home, until Apple proved otherwise. When tools get cheaper and easier, people find whole new uses for them. They apply them not just to different problems but to different types of problems.
The basic concept of a system of differential equations isn't hard to grasp at all. "This jet is burning fuel to accelerate. But the faster it goes, the more air resistance it runs into, which slows it down. The amount it speeds up or slows down also depends on how much it weighs, which decreases as it burns fuel." Differential equations are a way of describing scenarios where several interrelated variables are changing over time and the value of one variable at any instant affects the rate of change of another variable. The concept is easy, but answering questions like "how much fuel does the jet have to burn to reach Mach 1" turns out to require some astoundingly difficult squiggle-manipulation.
But situations with variable, interdependent rates of change are everywhere. If you were really good at differential equations -- or whatever the post-Kill-Math replacement is for them -- you might find reasons to use them every day. You would instinctively see them under the surface of the physical, ecological, and economic systems you interact with every day. You'd feed them (somehow) into your interactive visualizer, or whatever it is, and play with the simulation until you understood how to subtly prod the real system to elicit the results you want.
You would, in short, have what would seem like superhuman intelligence to the people of today. You'd do things we could barely dream of.
Imagine if we could use computers to make that level of understanding accessible to people of merely average brainpower.
Man. That's the kind of thing Silicon Valley should be working on. Not "how do we get users to report everything they do to us through their cell phones so we can sell behaviorally targeted advertising".
I saw an amazing talk yesterday by my favorite computer-idea-guy, Bret Victor. (The same Bret Victor who wrote "Kill Math".)
This time, he's talking to an audience of engineering students about the possibilities available to them in their careers. The talk is called Inventing on Principle. The video is almost an hour long but worth watching all the way to the end.
There are two parts. First he talks about the principle that guides his inventions, which is that creative people need tools that give them a direct connection to their work. Ideas are precious and fragile, and there are all sorts of ideas that you'll never even think of if your tools are keeping you disconnected from your work.
He has some jaw-dropping demos of what programming, circuit design, and animation might be like if we had tools that truly connected our hands to the essence of what we're doing. If we didn't have to spend most of our brainpower guessing how the computer is going to interpret our instructions. "This is what it might be like to design an algorithm without a blindfold on," he says. Bret's a very humble person. He doesn't say things like that lightly. He really means it: once you've caught a glimpse of how things could be in a better world, coming back to the tools we have now feels so primitive.
The second half of the talk is about other people who invented things according to their own guiding principles, like Larry Tesler, who went on a personal crusade against modes in software. Bret suggests this path to the students as an alternative to the career paths that are usually offered to engineers (e.g. "define yourself by the skill that you're good at"). He points out that it's more like being a social activist, except that you try to change things by inventing instead of by organizing people. He talks about how you might try to find a principle of your own, if you choose this path.
This is inspiring, and it comes at just the right time for me since over the last year I've gotten increasingly disillusioned with the software industry. I spent 2008-2012 trying to make things according to Mozilla's principles, not my own. Before that, I spent 2005-2008 trying to make things according to Aza's principles, or more accurately according to Aza's dad's principles. Working on other people's dreams isn't enough to motivate me anymore. I want to do my own thing. This might involve leaving the software industry or it might involve starting my own company. Either way, examining my guiding principle(s) will have to be part of it.
The short version: I wrote a Thunderbird add-on to make the email interface I've always wanted -- one that helps me remember to stay in touch with people I really care about, instead of always distracting me with the newest incoming trivia.
About a year ago, I wrote a post about how much I hate email. I was frustrated that the few relevant messages from people I care about quickly get buried under a flood of distractions and nonsense. Not spam, even; just trivia.
There's a saying that "Life consists of what you choose to pay attention to."
Software encodes values, biases, assumptions, often unconscious, of the people who create it. The more that software becomes our filter on the world, the more that the unconscious biases of the software determine what we pay attention to.
There's one bias that's so prevalent it's invisible - noticing it is like a fish noticing the water. It's the assumption that the newest thing goes on top.
Twitter! Newest thing on top. News website! Newest thing on top. Blogs! Newest thing on top. Email! Newest thing on top. RSS feeds! Aggregators! What's new! what just happened now? I don't care about that thing it's so 20 minutes ago, get that off the front page.
The newest thing usually isn't the most important. It's usually a distraction from what's most important. Obsessive focus on the newest thing is a sickness in our culture. Not just the culture of software developers, but modern 21st century culture as a whole. Software didn't create distraction, but its bias towards showing you the newest thing is contributing to the constant distraction of modern life.
If life consists of what you choose to pay attention to; and what you pay attention to is increasingly not a choice you make consciously but is dictated by the software lens that you see the world through; then you are giving up control over the contents of your life to decisions made by that software.
And if the software is always focusing your attention on the newest thing just because it's newest, then you're allowing what your life consists of to be decided by who's noisiest.
Does that horrify you? It horrifies me.
Meanwhile, the stuff you don't pay attention to gets pushed out of your life.
Email is the way I talk to more people more often than any other technology -- more than telephone, more than face-to-face contact.
My email interface should be helping me remember to stay in touch with old friends and distant family. But instead, email buries the important conversations under a flood of auto-generated GitHub and eBay notifications, political mailing list ACTION ALERTS, charities begging for money, etc. etc.
Maybe I opened my email interface with a thought in mind about what email I wanted to write. But my thought is soon lost as the interface bombards me with distractions -- all the newest, unread stuff.
Meanwhile that thoughtful, in-depth conversation from a friend I haven't seen in years is down on the third or fourth page. I didn't respond right away because it deserved a considered, crafted response. I starred it, sure, but... I guess I star a lot of things, most of which rapidly lose their relevance.
Unless I make a concerted effort, that conversation's going to get buried forever and I'm gonna forget about it. Now I'm gonna die with regrets because my email interface focuses my attention on what's new instead of what's important!
So I decided to do something about it. I started hacking around with an idea for an email client that would put that conversation with the old friend front and center of my interface, keeping it in my attention.
I built it as a Thunderbird add-on. Since its purpose is to help me stay in touch with the people I love, I named it "Lovebird".
Since it's people I care about, not messages, the Lovebird UI is built around a list of people, not a list of emails.
Everybody thinks they have the right to take up space in my inbox, but not everybody gets into the Lovebird interface. It's a privilege, not a right. No mailing list or notification-bot should ever be allowed in the Lovebird list. Humans only.
And you only get there if I explicitly add you. I don't want my computer trying to be too smart and guessing who should go in the Lovebird list. That creates the wrong kind of feedback loop.
For everyone else, I can still check my inbox. Lovebird isn't meant to replace the inbox entirely.
I can have Lovebird sort my list of people in a couple different ways, none of which are based on putting the newest stuff on top. The default sort order shows me who's been waiting the longest for me to respond to a conversation. Whatever I've been procrastinating about writing becomes the top item in my interface. Hopefully this will make it harder for me to forget to answer people.
I can also have it show me who I haven't talked to in the longest time, even if they're not expecting a response from me. Maybe I just want to reach out to them and ask about their lives.
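The two sort orders above amount to sorting people, not messages, by timestamps. Here's a sketch in Python of the idea; the data model (`awaiting_my_reply`, `last_msg_from_them`) is hypothetical, invented for this example -- Lovebird's actual Thunderbird internals are different.

```python
from datetime import datetime

# Hypothetical per-person records; field names are my own invention.
people = [
    {"name": "Ana",   "last_msg_from_them": datetime(2013, 1, 5),
     "awaiting_my_reply": True},
    {"name": "Ben",   "last_msg_from_them": datetime(2013, 2, 1),
     "awaiting_my_reply": False},
    {"name": "Carol", "last_msg_from_them": datetime(2012, 11, 20),
     "awaiting_my_reply": True},
]

# Default order: whoever has been waiting longest for my reply comes
# first -- oldest unanswered message at the top.
waiting_longest = sorted(
    (p for p in people if p["awaiting_my_reply"]),
    key=lambda p: p["last_msg_from_them"],
)

# Alternate order: whoever I've been out of touch with longest,
# whether or not they're expecting anything from me.
out_of_touch_longest = sorted(
    people,
    key=lambda p: p["last_msg_from_them"],
)
```

Note that neither order ever puts the newest thing on top; both keys sort ascending by date, so the person you've neglected longest wins the top slot.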
I've been hacking on Lovebird, on and off, for the past couple of months. If you've read an email from me lately, I probably sent it to you from Lovebird.