What comes after minds? by Marvin Minsky
From The New Humanists: Science at the Edge (2003). Edited by John Brockman. Barnes & Noble Books: New York.
- "No uniform scheme will lead to machines as resourceful as the human brain. Instead, I'm convinced that this will require many different 'ways to think' - along with bodies of knowledge about how and when to use them".
- "Computer science has helped us envision a far wider range of ways to represent different types and forms of knowledge,..."
- "I see each emotional state as a distinctly different way to think".
What I like best about this brief introduction to some of Marvin Minsky's thinking is his reminder of the value of mind modeling. In the early days of cognitive science, before neuroscience's explosion onto the scene and the advances of computer modeling, we were left with modeling mental processes and testing theories against what could be observed. While this approach had its limitations, it did yield some rather free thinking as to what might be happening inside our heads; thinking, one can argue (as Minsky does), that has perhaps dried up in recent years, closing off research paths worthy of investigation.
This piece includes a reminder of Freud's suite of collaborating and competing mental functions, which offers insights into the kind of multiple agents at play in the mind. This collection of action figures, including the id, the ego, and the superego, may establish a point of departure for AI programming. For the purpose of this writing, we must not let our knowledge of Freud's shameful rewriting of his patients' abuse testimonies interfere with our appraisal of his mental models. While his work cast the public into decades of darkness concerning the damaging effects of trauma and the prevalence of sexual assault, those layers of his lineage will need to be quarantined in our minds if we are to move forward with this review. Everybody breathe; the quarantining lock is on.
Minsky then spells out some of the cool things humans can do that we haven't been able to get computers to do. These include our facility with using multiple representations, our emotions, and our shifts between vast collections of knowledge. My curiosity was piqued by his theories regarding emotional states, and I will be following up on that research in my review of his book The Emotion Machine.
And finally, the author's brief nod to computers perhaps one-upping us by carrying recorded data across generations of development was exciting. In general, I find myself taking away from this article a sense of "hey, we're not as dumb as we look", while at the same time re-committing to the program of exploring computer modeling of skills that extend beyond our capability, particularly with regard to ideating the sub-strata of areas of innovation we deem most necessary for addressing climate change. If AI processing is still skewed in favor of single functions, let's make sure we've got our labs focused on the most useful single functions.