[P&N] Chapter 12: Final Thoughts
Citation: Edward A. Lee, 2017: Plato and the Nerd - the creative partnership of humans and technology. MIT Press, Cambridge, MA
I have finally reached the end, and what a book it is! I would like to say “Thank you!” to Lee for his wonderful and inspiring book, which made me think a lot and learn a lot.
Reviews and Conclusions
Let’s begin wrapping up by reviewing the most important conclusions we drew from this book. Some of them are in my own words and some are in Lee’s:
- All models are wrong, but some are useful.
- Hawking argues that all the models we have of the physical world today are both incomplete and inconsistent.
- Scientists and engineers treat models differently. Scientists construct models to emulate the physical world, while engineers construct things that did not previously exist to emulate the properties of an existing model.
- The very incompleteness of self-referential systems is what enables creativity. It ensures that we will never be finished.
- Technologies should never be treated as dry Platonic facts that have always existed somewhere waiting to be discovered. Instead, they are cultural, dynamic ideas, subject to fashion, politics, and human foibles.
- “Paradigms” are what enable people to be effective; within a paradigm, people share a common mental framework. Science progresses more through paradigm shifts than through accretion of knowledge.
- Layers of paradigms reduce the freedom of choice, which makes us able to deal with complicated hierarchical systems.
- “Digital physics” is extremely unlikely to be true. If we are to accept this point of view, we have to provide strong evidence, according to Bayes’ rule. But the evidence is not there, and we don’t even know yet what sorts of experiments would supply it.
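To make the Bayes’ rule point concrete, here is a minimal numerical sketch of my own (the numbers are made up for illustration, not from the book): an extraordinary hypothesis starts with a tiny prior, so unless the evidence is strongly diagnostic, the posterior barely moves.

```python
def posterior(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical numbers: an extraordinary hypothesis (prior 0.001)
# and evidence that is only weakly diagnostic (P(E|H)=0.5, P(E)=0.4).
p = posterior(prior=0.001, likelihood=0.5, evidence_prob=0.4)
print(round(p, 5))  # 0.00125 -- the belief barely budges
```

The arithmetic illustrates why “extraordinary claims require extraordinary evidence”: with no strongly diagnostic experiment available, the posterior for digital physics stays close to its small prior.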
- The real power of technology comes from the powerful partnership between man and machine.
Obstacles
The most serious obstacle to overcome is that, despite its amazing capabilities, the human brain is really quite limited. The fact is, we cannot remember as much as computers can. We cannot read as fast. We cannot perform calculations as fast, and we make many errors.
I have been dreaming about having a computer in my head since I was 10 years old. It is the partnership with computers that makes us more effective and even smarter. But if layers of abstraction tame the complexity of complicated paradigms, does this imply that our brains can handle ever more complex systems without bound? If layers create interfaces and interfaces create stability, then in some sense we can. But as systems grow bigger and bigger, specialization arrives.
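The idea that “layers create interfaces and interfaces create stability” can be sketched in code. This is my own toy illustration (the `Storage` interface and its implementation are invented for the example, not from the book): the upper layer depends only on an abstract interface, so the implementation beneath it can change without disturbing anything above.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The interface: the only thing the upper layer is allowed to know."""
    @abstractmethod
    def get(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    """One implementation hidden below the interface; it could be
    swapped for a file- or network-backed one without touching callers."""
    def __init__(self) -> None:
        self._data = {"greeting": "hello"}

    def get(self, key: str) -> str:
        return self._data[key]

def upper_layer(store: Storage) -> str:
    # Stable code: it never names a concrete implementation.
    return store.get("greeting").upper()

print(upper_layer(InMemoryStorage()))  # HELLO
```

Because `upper_layer` is written against `Storage` rather than `InMemoryStorage`, the layer above stays fixed while the layer below is free to evolve, which is exactly how layered paradigms reduce the complexity each layer must cope with.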
But our brains can hold only so much, so specialization leads to fragmentation, where insights in one specialty become inaccessible to the others.
… specialists know more and more about less and less, until they eventually know everything about nothing. Then they become professors, and the courses they teach become barriers, weeding out unsuspecting undergraduates who simply aren’t prepared for the sophistication of the specialty. The professors love their specialty, they want to teach it, and they cannot see that it’s esoteric; the arcane and complex analytical methods they have developed are neither easily learned nor easily applied to practical problems. Their discipline fragments into further specialties, and each professor loses the big picture. None is qualified to teach the big picture, and anyway, his or her colleagues would consider any such big picture to be “Mickey Mouse”, too easy and unsophisticated to be worthy of their time.
This trend is inevitable in some sense, but being aware of it is very important.
Not only technologies but also consumers are resistant to change. One reason is that unlearning something is often harder than learning something new.
A prevalent but misleading view is that human-computer interfaces should be “intuitive”. There is nothing intuitive about the pedals in a car. There is nothing intuitive about interacting with a computer. All of the interaction mechanisms we use today are learned. I am reminded of an episode of the TV series “Star Trek”, where the crew of the Enterprise travels back in time to the late 1980s. The engineer, Scotty, needs to use a vintage 1980s computer to solve a problem. So he starts talking to the computer: “Computer. Computer. Computer!” The computer, of course, does not respond. One of the 1980s engineers picks up a mouse and hands it to Scotty, saying, “You have to use this.” Scotty, looking embarrassed, says, “Oh yes, of course.” He picks up the mouse and begins speaking into it as if it were a microphone: “Computer. Computer. Computer!” The computer mouse was a brilliant invention, and we have assimilated the paradigm, but there is nothing intuitive about it.
I couldn’t help laughing when I read the paragraph above, and then couldn’t help sinking into deep thought.
However, humans don’t typically choose paradigms. Paradigms are assimilated slowly, often subconsciously, or are drummed in by educators who are likely too specialized to know the alternatives. As a consequence, engineers typically build models using the paradigms they know, regardless of whether these are the right choices. This may explain why so many projects fail.
Last but not least
I believe it is imperative for nonnerds to understand technology better so they can help guide its evolution in society, and it is imperative for the nerds to understand the cultural context of their work. These are the main reasons I wrote this book.