The fig tree is pollinated only by the insect Blastophaga grossorum. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree…. This co-operative “living together in intimate association, or even close union, of two dissimilar organisms” is called symbiosis.
Nature’s degree of specialization is even more extreme than the quotation implies: different fig trees are uniquely pollinated by different fig wasps, and, in fact, the wasp that appears in the image above is not the same species Licklider referred to. (More information is available at this figweb page, or the Wikipedia page on fig wasps.) Licklider envisioned a similar degree of specialization in the human-computer interface: each operator might have a distinct way of interacting with the computer, so each member of the group could perform their tasks in parallel, each talking to the central machine. In this vision, however, there is at most one computer per person: “symbiotic cooperation between a computer and a team of men.” What happens when there is more than one computer per person in the world? What if, for each person in the world, there are thousands, or millions, of computational devices? David Tennenhouse termed the outnumbering of humans by computers a “breakpoint,” and argued in the year 2000 that we were rapidly approaching that breakpoint, making it essential to reinvent computer science.
Licklider’s hope was “that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling we know today.” Indeed, many of his hopes have been realized. He analyzed how much time he spent on different research tasks, and concluded that his “thinking” time “was devoted mainly to activities that were essentially clerical or mechanical: searching, calculating, plotting….” His full list is a series of tasks that we now accomplish with search engines, desktop publishing programs, and tools like MATLAB. (This does not mean the scientist of today has been somehow liberated from clerical or mechanical activities, but I will defer that contradiction to a future post.)
Wikipedia has placed Licklider’s article in a section on Intelligence Amplification, which may not be entirely accurate. Licklider stated on the first page, “Man-computer symbiosis is probably not the ultimate paradigm for complex technological systems.” To use a buzzword, he focused his article on what we should do with computers between now and the Singularity, but he acknowledged there may be computer science beyond that. Licklider was remarkably prescient: in addition to predicting mathematical graphing programs, etc., he pointed out the need for better memory organization, and discussed one “promising idea”: trie memory, which at the time was still being written up for publication! (There is much more to say about this article, and I expect to refer to it again in future posts.)
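Licklider’s “trie memory” survives today as the trie, or prefix tree. As a rough modern sketch (my own illustration, not the 1960 formulation), a trie stores strings character by character along shared paths, so lookup time is proportional to the length of the key rather than to the number of items stored:

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # maps a character to the next node
        self.is_word = False  # marks the end of a stored string

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            # follow the existing branch, or create one
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

t = Trie()
t.insert("symbiosis")
t.insert("symbol")
print(t.contains("symbol"))    # True
print(t.contains("symbiote"))  # False
```

Here “symbiosis” and “symbol” share the path s-y-m-b before branching, which is the storage economy Licklider found promising.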
Tennenhouse, in an influential article that appeared in the May 2000 Communications of the ACM, argued that it was time for a fundamental rethinking of computer science, away from man-computer symbiosis and toward proactive computing, a computational paradigm that is human-supervised rather than human-centered. He articulated three “breakpoints,” changes in the empirical reality of computing that he believed would demand a change in computer science theory:
1. The human/machine/network breakpoint: the point at which the number of interactive computers surpasses the number of people on the planet.
2. The realtime breakpoint: the point at which proactive computing environments operate at faster-than-human speeds.
3. The robotic control breakpoint: the point at which unmanned vehicles outnumber their human supervisors.
Worldwide, we have not yet passed any of these breakpoints, but in many industrialized countries, we have left (1) and (2) in the dust. More than 25 countries now have more than one cell phone per person. (The USA is not one of them.) These aren’t all smartphones, of course, but each contains what Licklider would have considered a computational device.
Passing (3) is just a matter of time. Tennenhouse wrote in 2000:
Robotics systems might outnumber humans by only thousands to one over the coming decade. However, the multiplier obtainable by software-based “knowbots” is virtually unlimited, and thinking in terms of millions of active agents per user is not unreasonable…. Since these agents will interact with each other as they go about our business, we need to invent technologies that sustain human control over agent-based systems, yet allow agents to autonomously negotiate with each other in ways that honor overall systems objectives and constraints.
The first thing I thought of when I read that was, “Like how my brain tells my arm to move without knowing how muscles work.” (You knew this was getting back to biocomputation somehow, didn’t you?) I plan to discuss Tennenhouse’s article in more detail in the future, but, in the meantime, I don’t think it’s hard to imagine that a bio-inspired computational approach might be useful when one human being is supervising — and being served by — millions of agents. That approach provides a computational economy of scale.