Moore's Historical Observation
Well, Time magazine ran a cover story about the Singularity (boldly predicting a date of 2045, even), which means a lot of generally non-tech-y people I know have been bringing it up, asking me if I believe it will happen, expressing various misunderstandings about it, etc.
By the way, here's Vernor Vinge's essay, which more or less kicked off the whole Singularity idea. It's short and worth a look if you haven't read it before.
One thing to keep in mind is that Vinge is both a scientist and a science-fiction writer, so which hat is he wearing when he writes this? At least part of it seems to be expressing a writer's frustration: "Oh drat, this approaching singularity makes it really hard to extrapolate a plausible future history for a galactic empire setting". It's also worth noting that he regards the idea with dread, i.e. "Can the Singularity be avoided?".
Singularity dorks have taken this idea and turned it into the equivalent of an apocalyptic religion for atheistic nerds: it describes an approaching end to the human era (as superintelligent machines will take over), to occur at some unspecified but rapidly-approaching date, assumed to be within the believer's lifetime. It's got a god, except instead of creating man the god is created by man. It's got the same abdication of responsibility: Why worry about politics or the environment when the Singularity is about to happen and fix all the mistakes that humans have made? It's even got an afterlife (we all upload our consciousness into computers). The only thing it's missing is a Judgment where, you know, only the True Believers get to be uploaded while the heathens (Windows users) will toil in the server farms for all eternity.
Anyway, the reason I'm writing about this is that I keep seeing a certain logical fallacy and I want to point it out. The singularity guys say the Singularity is inevitable "because of Moore's Law".
Well Moore's Law is just an observation (made by Intel co-founder Gordon Moore) that the speed (or power or number of transistors) of microchips doubles every X months, where X is in the 18-24 range. It's held true since the 60s. The logical fallacy is when people assume that it will continue to hold true indefinitely, and therefore extrapolate that a single microchip will have more processing power than all the world's human brains put together by the year 20X6, therefore POW! Singularity!
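Just to make the extrapolation explicit, here's the back-of-the-envelope arithmetic it rests on. Every number in this sketch is a placeholder I'm making up for illustration -- nobody actually agrees on how many FLOPS a brain is "worth" -- the point is the shape of the argument, not the figures:

```python
# Toy Moore's Law extrapolation. All figures below are made-up
# placeholders for illustration, not real estimates.
from math import log2

chip_flops = 1e11             # pretend: one circa-2011 chip, in FLOPS
brain_flops = 1e16            # pretend: one human brain "equivalent"
brains = 7e9                  # world population, roughly
months_per_doubling = 18      # the usual Moore's Law figure

doublings = log2(brain_flops * brains / chip_flops)
years = doublings * months_per_doubling / 12
print(f"{doublings:.0f} doublings, i.e. around the year {2011 + years:.0f}")
# -> 49 doublings, around the year 2085... IF the doubling never, ever stops.
```

Notice that the entire conclusion hangs on the assumption in that last comment.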
But Moore's Law isn't a law of nature or anything. It's not like entropy or gravity or nuclear decay. It's just an observation of economic trends within a particular historical period. It depends on competition between semiconductor companies and on the ability to make transistors smaller. At some point we run up against the fact that a transistor won't work unless it's a certain number of atoms across. Or, at some point people decide "Hey, my computer is actually good enough for anything I'd ever want to do with it, I think I'll spend my extra income on something besides a new laptop" and then the economic conditions that created the incentives for competition between semiconductor companies will change. Either way, the conditions that created Moore's Law belong to a particular period of history and they won't last forever. Even Moore himself says so.
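And here's how fast "just keep making the transistors smaller" runs into the atoms under naive halving. The starting numbers are rough round figures I'm assuming for illustration (current chips are around a 32 nm process, and silicon atoms sit very roughly 0.2 nm apart):

```python
# Naive feature-size halving vs. the size of an atom.
# 32 nm and 0.2 nm are rough assumed figures, not precise ones.
feature_nm = 32.0   # assumed circa-2011 process size
atom_nm = 0.2       # assumed rough spacing of silicon atoms
year = 2011
while feature_nm > atom_nm:
    feature_nm /= 2          # one halving per Moore cycle...
    year += 2                # ...call it every two years
print(year, feature_nm)      # -> 2027 0.125: "transistors" thinner than an atom
```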
I guess "Moore's Law" just sounds better than "Moore's Historical Observation". Call it whatever you want, but if your predictions are based on assuming that processor power must always grow exponentially, then I'm not gonna take them seriously.
Even if we do have an exponential increase in processing speed and network bandwidth, that doesn't mean that the Internet will somehow magically achieve self-awareness on its own someday. AI is hard. We don't know how our own brains work, nor do we know how to duplicate them. We don't even know the right questions to ask. All the processing power in the world is useless if you can't figure out how to express your problem in terms of binary arithmetic.
That said, I watched IBM's Watson beat the human champs at Jeopardy and it was fairly impressive. Not impressive that it knows the answers -- it did have, like, all of Wikipedia and half of the Web stored in its database -- but impressive that it understood the questions as often as it did. What we now know, that the early AI pioneers didn't, is that knowing answers is easy but understanding questions is hard. When Watson got one wrong, it didn't make the kind of mistake a human would. Like, the clue specified an author and Watson named the book instead. Another clue, in the category "U.S. Cities", got the response "Toronto", leading to great LOLs. In both cases Watson's response correctly matched the rest of the data in the clue, but it misparsed or overlooked something obvious that a human never would have missed - a human would have recognized "this is asking for a person's name" immediately and ruled out everything that isn't a person's name, for instance.
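To give a flavor of that "what type of thing is this clue asking for?" problem, here's a toy sketch. Everything in it (the keyword table, the candidate list) is invented for illustration; the real Watson used far fancier statistical machinery than a lookup table, but the idea of constraining answers by expected type is the same:

```python
# Toy "answer type" filtering. The hints and candidates are made up.
TYPE_HINTS = {
    "this author": "person",
    "this city": "city",
    "this novel": "book",
}

CANDIDATES = [
    ("Herman Melville", "person"),
    ("Moby-Dick", "book"),
    ("Chicago", "city"),
]

def answer(clue):
    # Guess what type of thing the clue wants...
    wanted = next((t for hint, t in TYPE_HINTS.items() if hint in clue), None)
    # ...and rule out every candidate of the wrong type.
    matches = [name for name, t in CANDIDATES if wanted is None or t == wanted]
    return matches[0] if matches else "no idea"

print(answer("this author wrote a famous novel about a white whale"))
# -> Herman Melville. The book matches the clue's keywords too, but a book
#    isn't a person, so it gets ruled out -- the check Watson kept fumbling.
```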
We've come a long way in natural language processing, but there's still a long way to go, in other words.
Gödel, Escher, Bach
I just finished it and I looooooove this book. Love it! It has earned a place as one of my top three or four favorite books ever, for sure.
The cover of my edition says Gödel, Escher, Bach: An Eternal Golden Braid: A metaphorical fugue on minds and machines in the spirit of Lewis Carroll by Douglas Hofstadter. What a mouthful! What a pretentious title! What the hell is this book, anyway?
The title is three dudes' names, so I opened the book thinking it was going to be some kind of triple biography. Nope. It's... something else. A very strange book, not like anything else I've ever read. There's no easy way to describe GEB, either its structure or its subject matter.
I started the book years ago. 2008, I think, and I just finished it today. This is not the kind of book you try to tackle in one sitting. It is thick, dense, demanding. It is for math/computer-science geeks what Ulysses is for literature geeks. It's like a whole year of college squeezed between two covers.
My favorite books are often the ones that feel like an invitation to come live inside the author's brain for a while. GEB is certainly one of these: not so much a single-subject book of nonfiction as it is a tour through Douglas Hofstadter's obsessions, following the connections that he sees between seemingly unrelated topics.
Specifically: GEB is about the deep connections between mathematics, music, and art. It's focused on concepts of formal systems, self-referentiality, self-contradiction, infinite loops, and paradoxes, and how they're expressed in math by Gödel, drawings by Escher, and music by Bach.
Along the way, there's a series of puzzles, exercises, brain-teasers, and Zen koans to ponder; this book is very interactive. If you're into that kind of thing, you don't just read this book, you do it, like a kind of Activity Book for grown-ups. All of these exercises tie back into number theory and the proof of Gödel's Incompleteness Theorem (yes, even the koans!). So do the music and the art - he's not just throwing in Escher drawings and Bach fugues because he thinks they're cool, but to draw analogies with the mathematics, to shine a light on it from many different angles, to get you to think about it in a creative and intuitive way and not just a mechanistic, logical way.
Structurally, GEB alternates between nonfiction "chapters" and fictional "dialogues". The chapters teach you about math, music, and art in a rambling, digressive, conversational, but basically straightforward way. The dialogues are something else. They star Achilles and the Tortoise (borrowed from Zeno's Paradox), occasionally joined by a crab, a sloth, and an ant colony, having absurdist adventures and arguing about logical paradoxes.
Some of these dialogues structurally replicate certain musical forms, such as a six-voice fugue or a "crab canon" (a piece of music that harmonizes with itself when played backwards). Most of the dialogues involve seriously lateral thinking; some are shaggy dog stories; some are setups for elaborate multilevel puns; usually the content of the dialogue reflects somehow on the structure of the dialogue or of the book as a whole -- this is a self-referential book about self-referentiality.
The dialogues are kind of like the Shadow Play Girls in episodes of Shojo Kakumei Utena, whose skits seem nonsensical on the surface but cast some metaphorical light on the meaning of the other events in the episode. When you get to each chapter of GEB you're ready to learn a new math concept because you've already been primed for it by some completely ridiculous Tortoise/Achilles shenanigans.
Hofstadter really wants you to understand Gödel's Incompleteness Theorem - not just in an approximate and hand-wavy way, and not just in a rote mathematical recitation way. He wants you to understand it on a deep and intuitive level. He wants this so badly that he's willing to spend seven hundred pages on math lessons, music theory lessons, thought experiments, riddles, and bizarre digressions about a smartass tortoise all in order to build up your intuition about number theory, just so that when you finally get to Gödel's proof you will have the background you need to get your mind COMPLETELY FUCKING BLOWN by it.
And it's pretty mind-blowing stuff. By encoding a self-referential paradox into a mathematical theorem, Gödel proved that any sufficiently complex system of mathematics will contain statements which are true but can never be proven within that system. Gödel destroyed every other mathematician's dream of ever having a perfectly complete and consistent theory of mathematics. He did it in the 1930s, around the same time that quantum mechanics gave us the Heisenberg Uncertainty Principle; it was a time when humanity discovered that there are limits to our knowledge, certain things the universe just doesn't allow us to know completely. Western philosophy is still trying to recover.
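If you want a taste of the central trick, here's a toy version of Gödel numbering. This is my own illustrative sketch, not Gödel's actual encoding, though his real scheme was a prime-exponent construction in the same spirit, over the symbols of formal logic. The punchline: once every statement is just a number, statements about numbers can end up talking about statements -- including themselves:

```python
# Toy Goedel numbering: turn a sequence of symbol codes into one integer
# (2**s0 * 3**s1 * 5**s2 * ...) and recover it again. Illustrative only.

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def goedel_number(symbols):
    """Encode a sequence of (nonzero) symbol codes as a single integer."""
    n = 1
    for prime, s in zip(PRIMES, symbols):
        n *= prime ** s
    return n

def decode(n):
    """Recover the symbol codes by counting each prime's exponent."""
    symbols = []
    for prime in PRIMES:
        exp = 0
        while n % prime == 0:
            n //= prime
            exp += 1
        if exp == 0:
            break
        symbols.append(exp)
    return symbols

# Pretend the formula "0=0" is the symbol sequence [6, 5, 6]
# (say, 6 means '0' and 5 means '=' in our little alphabet):
n = goedel_number([6, 5, 6])
print(n)          # 64 * 243 * 15625 = 243000000
print(decode(n))  # [6, 5, 6] -- the formula is fully recoverable from the number
```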
Hofstadter has a lot of thoughts about how the Incompleteness theorem applies to other areas. Art and music, of course, but also computer science - there's a lot in here about computability and the Halting Problem. There's a chapter on genetics: is the genetic code a "sufficiently complex system of mathematics"? There's a bunch of stuff about the meaning of "meaning" -- how much of a proof's meaning is in the symbols of the proof, and how much is in the mind interpreting those symbols? How about the meaning of a painting, or a symphony? But mostly, he wants to talk about how the Incompleteness theorem relates to artificial intelligence and human consciousness. It seems that formal rule-based systems like computer programs and mathematical proofs are vulnerable to getting caught in paradoxes like the one in Gödel's proof, or infinite loops like the ones Turing discovered. Yet humans have the ability to deal with infinite loops and paradoxes -- we can recognize when we're caught in one, and make a creative leap out of the rule system to resolve it.
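The computability side of that is worth sketching, because the Halting Problem argument fits in a few lines. Suppose somebody claims to have written a magic halts(f, x) that tells you whether f(x) would halt; this is the classic proof by contradiction (due to Turing, here in Python dress) that no such function can exist:

```python
# Turing's diagonalization, sketched in Python. halts() is hypothetical:
# assume it returns True iff f(x) would eventually halt.

def make_trouble(halts):
    def trouble(f):
        if halts(f, f):      # if f(f) would halt...
            while True:      # ...then loop forever
                pass
        return "done"        # ...otherwise, halt immediately
    return trouble

# Now ask whether trouble(trouble) halts:
#   - if halts(trouble, trouble) returns True, trouble loops forever;
#   - if it returns False, trouble halts right away.
# Either way halts() gave the wrong answer, so no correct halts() exists.
```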
Does this ability to leap outside of a rule system mean that the human mind is fundamentally different from a computer program? And does that mean that AI is impossible? But how can that be, when the brain is made of neurons which operate according to predictable physical laws? Or is our mind governed by a higher-level rule system that gives us the ability to leap outside of lesser rule systems? If so, what happens when we try to leap outside of that one? Are the sensations of consciousness and free will related to the brain's capacity for self-referential thoughts? What is the equivalent of Gödel for the brain - are there things we're not allowed to know about ourselves, or thoughts we're incapable of thinking?
Central to Hofstadter's thesis is the difference between logical, mechanistic thought and creative, intuitive leaps. So it's appropriate that he leads you to his point using both methods. The chapters walk you through the logical explanation while the dialogues encourage you to make the leaps of intuition for yourself. Again, it's a self-referential book about self-reference: the form reflects the content.
I don't know any other book that has blown my mind so many times.