Knowledge Machine
Piotr Wozniak, November 1994
This text was taken from P.A. Wozniak, Economics of Learning, Doctoral Dissertation, University of Economics, Wroclaw, 1995, and adapted for publication as an independent article on the Web in 1998. Please note that the World Wide Web explosion was yet to begin at the moment these words were written.

Knowledge Machine

The great appeal of science fiction comes from the fact that it forms an uninhibited forum for developing the vision of the future society; a vision that is free of peer pressure, peer review and the debilitating compliance with the publish-or-perish principle. Those who are familiar with the plot of Demon Seed must have already experienced the overwhelming fascination with (or perhaps fear of) the possibilities of artificial intelligence combined with unlimited access to all sources of knowledge available on the planet. How would the course of history change with the development of a super-brain able to surpass humans in optimally responding to all imaginable questions?

This as yet untapped potential of unlimited access to knowledge may not be as far away as it might seem at first. On the one hand, the greatest obstacles on the way to developing the natural language interface to computers have already been overcome; computational power and storage space remain the last major factors. However, other branches of artificial intelligence did not fare that well (often because of the common underestimation of the extent of knowledge needed for implementing intelligence). Though the overall progress in artificial intelligence seems to be far from the point envisioned in super-brain fiction, a different sort of tools and techniques is creeping into place with the potential to change the world as we know it, still with the human factor playing the central role. The tools and techniques in question are global hypermedia [currently growing as the World Wide Web]. Navigation in a convoluted hyperspace forms a relatively simple paradigm of immeasurable potential that is likely to drive us towards the future system for providing unified global knowledge.

Seymour Papert from the MIT Media Laboratory has proposed the term Knowledge Machine to describe the ultimate product of the present efforts oriented towards modern knowledge access systems (Kantrowitz 1994). Primitive harbingers of the Knowledge Machine have already arrived in the form of the World Wide Web, multimedia CD-ROM knowledge systems, interactive television, and other components that will ultimately bring us all to the same focal point.

The early hypertext systems have now been extended by the addition of audio and video. Virtual reality comes next, with touch, smell and even taste coming over the same channels.

Technological and economic feasibility of global hyperspace

In the meantime, the Information Highway is a hot topic on both sides of the Atlantic. If all goes as planned, Microsoft and McCaw Cellular will launch 840 communications satellites by the year 2001 to provide a quick link to all spots around the world. These are exemplary components of the nervous network of the future global brain. What is now available as a hypertext page will, thanks to the expansion of bandwidth, in the future be a great literary work, a documentary photograph, a movie, the latest video of George Clinton, and whatever pieces of global information resources one could imagine. This will also include all papers published around the globe, weather reports for all regions and for all days available in history records, family trees, stock market quotations, etc. NB: Wall Street on a desktop is already a reality at a moderate price through on-line financial bulletin boards and databases.

The question arises about the feasibility of handling this magnitude of data with present technology. As far as storage capacity is concerned, there seems to be no major bottleneck. Consider, for example, the fact that the entire Library of Congress has been estimated to comprise 25 terabytes of information, while standard optical library units (the popular "optical jukeboxes") are already approaching the capacity of one terabyte. Moreover, the concept of distributed processing seems to provide the ultimate answer to the problem of resources, with the exception of the bandwidth aspect. Present telephone systems are fully switched two-way networks, but they lack the bandwidth to support, for example, multiple video channels. On the other hand, broadcast and cable television provide the necessary bandwidth, but these are not two-way links (though cable TV is soon to get into the picture). Two-thirds of all U.S. homes are already wired for cable TV (1993), and the entire suite of hardware necessary to put the visions into effect is already there.

The aspects of information system architecture are in place as well. Some concepts have already appeared that beat all records of popular appeal. For example, the Internet, founded in 1969 by the Pentagon, at present connects more than 2.5 million computers across the globe (1993), and the number is doubling every year. The most spectacular application of hypermedia among the systems that obscure the underlying Internet computer network is the already mentioned World Wide Web, which can be viewed as a globally distributed knowledge access system; a sort of multipurpose, multitopic, extensible and never-obsolete encyclopedia (see Schatz 1994 for a review).

The addictive power of solutions such as the World Wide Web, and their ability to penetrate the popular ranks of society, is well illustrated by the success story of the Minitel system in France. The French have already got hooked on their primitive, memoryless systems plugged into the ordinary phone system.

All in all, the hardware and software solutions are there, and the demand and appeal are more than sufficient; the last remaining question is the financial aspect of the entire enterprise. It has been predicted that the global network is likely to cost more than a trillion dollars worldwide over the next 20 years (McGrath 1994). It is interesting to note that, since 1991, U.S. capital spending on computers and communications has exceeded the outlays for heavy industry. These and related expenditures are expected to rise dramatically as demand and benefits enter the self-propagating spiral.

Infosociety or global infobabble

Let us go a step further into the future and imagine fully decentralized global hypermedia. One of the premises of success is extensibility. By driving extensibility to the extreme, every privileged user of the network should be able to contribute to the contents of the global hyperspace; a seemingly very attractive and democratic concept of plug-in-and-go modules of the global nervous system. Call it information democracy; critics say that the consequences may be disastrous. Put simply, we will get buried in a glut of unprocessed information.

Opinions are split on the question of whether users are competent to build the cyberspace by themselves. The often-quoted example is that of medical diagnosis, which, in the eyes of many, cannot be entrusted to the resources of the discussed information community. The solution is to introduce information reliability categories and to differentiate sources into those that are respectively subject to global expert panel review, peer review, popular review, etc., not excluding uninhibited freelance publication. This would result in the distinction between knowledge, facts, opinions and free informational art.

The concept of information democracy would then include the global ability to rank sources, topics, keywords, hypertext links, etc. by popular or expert vote. The user might be in a position to impose a reliability filter on the accessed hyperspace, and individually balance the reliance on popular and expert opinion. By extending the reasoning of the skeptics, one might equally fear informational glut as a result of the global proliferation and dissemination of printed matter. The only rational answer to the infobabble dilemma is that the ultimate outcome will depend on the overall architecture and organization of the system, and it is only a question of time before all the checks and balances fall into place.
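The reliability-filter idea described above can be sketched in a few lines of code. This is a minimal illustration only; the category names, the `Source` record, and the linear blending of expert and popular scores are all hypothetical assumptions, not part of any existing system.

```python
from dataclasses import dataclass

# Hypothetical reliability categories, ordered from most to least vetted
CATEGORIES = ["expert-panel", "peer-review", "popular-review", "freelance"]

@dataclass
class Source:
    title: str
    category: str         # one of CATEGORIES
    expert_score: float   # 0..1, from expert vote
    popular_score: float  # 0..1, from popular vote

def reliability(src, expert_weight=0.5):
    """Blend expert and popular opinion; the user chooses the balance."""
    return expert_weight * src.expert_score + (1 - expert_weight) * src.popular_score

def filter_hyperspace(sources, min_category, threshold, expert_weight=0.5):
    """Keep sources at least as vetted as min_category and above a
    user-chosen reliability threshold."""
    cutoff = CATEGORIES.index(min_category)
    return [s for s in sources
            if CATEGORIES.index(s.category) <= cutoff
            and reliability(s, expert_weight) >= threshold]
```

A user distrustful of freelance publication would set `min_category="peer-review"`; one who trusts the crowd over the panel would lower `expert_weight`. The point is only that such a filter is individually tunable, as the text argues.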

Processing attributes and repetition spacing tools incorporated in the hyperspace

I devoted a great deal of the presented dissertation to discussing the applications of repetition spacing in learning, education, and more generally, in personal management of individually possessed knowledge. Earlier in the chapter, I also introduced the concept of semantic, processing and ordinal attributes that would enhance human navigation in hyperspace. These attributes seem necessary to counteract the knowledge access problems encountered in the design of just-in-time learning systems advocated by Roger Schank (Williamson 1994).

I hope that I earlier succeeded in demonstrating that the concept of the Knowledge Machine will gain greatly in attractiveness by adapting to the known properties of human memory and cognition. After all, humans will continue playing the key role in managing the resources of the Knowledge Machine and tapping its potential for the global benefit (at least until the arrival of true artificial intelligence).

The encapsulation of processing attributes will account for the limits of human perception, processing speed and memory. In other words, processing attributes are absolutely necessary to optimize access to knowledge sources.

On the other hand, self-instruction systems based on repetition spacing form another crucial extension to Papert's Knowledge Machine. After all, the progress of science, technological innovation and social change all originate in the human brain. The outcome of information processing in the brain depends on its factual and inferential information contents. These, in vast proportion, come from what we learn in the course of our lives. That is why the conscious choice of the learned material, together with the fully rational management of the remembered knowledge, forms such a decisive component. As I showed earlier, repetition spacing is the only way towards this conscious choice and rational management.
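For readers unfamiliar with repetition spacing, the core of such a scheduler can be sketched as follows. This is a simplified, illustrative variant loosely based on the SM-2 scheduling principle (intervals grow with each successful recall, scaled by an item-specific easiness factor); the exact constants and function shape here are an illustration, not a definitive specification.

```python
def sm2_update(quality, reps, interval, ef):
    """One review step of an SM-2-style spaced-repetition scheduler.

    quality  -- self-graded recall on a 0..5 scale
    reps     -- number of successful repetitions so far
    interval -- current inter-repetition interval in days
    ef       -- easiness factor of the item (>= 1.3)
    Returns the updated (reps, interval, ef).
    """
    if quality < 3:
        # Failed recall: restart the repetition cycle for this item
        return 0, 1, ef
    # Adjust easiness: poor (but passing) grades shrink it, perfect grades grow it
    ef = max(1.3, ef + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if reps == 0:
        interval = 1          # first successful repetition: review after 1 day
    elif reps == 1:
        interval = 6          # second repetition: review after 6 days
    else:
        interval = round(interval * ef)  # intervals then grow geometrically
    return reps + 1, interval, ef
```

Starting from `reps=0, interval=0, ef=2.5`, a run of perfect recalls yields intervals of 1, 6, then roughly 17 days and growing; a single lapse resets the item to a 1-day interval. It is this exponential stretching of intervals, tuned per item, that makes rational management of remembered knowledge economically feasible.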

The main difference between the two proposed extensions, processing attributes and repetition spacing tools, is that only the latter is knowledge-dependent. In other words, processing attributes are established by standard tools and are characteristic of an individual user of the system. Repetition spacing, on the other hand (which is also based on standard tools and includes an individual component in the form of the learning process), makes use of knowledge items that belong to the knowledge system itself. Consequently, the formulation of knowledge items associated with particular topics in the hyperspace should comply with principles analogous to those governing the topics themselves. This includes informational democracy, extensibility, reliability rankings, etc.

Though processing attributes pertain entirely to individual needs and preferences, it is not so with the structure of the hyperspace itself. After all, for the same collection of topics, there can be a number of alternative approaches to the selection of hypertext and hotspot links. As a consequence, the organization of the hyperspace might also undergo individual change. This could be accomplished by differentiating between individual and global links. The establishment of links might also be subject to popular preference rankings, as in the case of the contents of particular topics.
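The distinction between individual and global links amounts to a per-user overlay on the shared link structure. A minimal sketch, assuming hypothetical dictionary shapes for the global link table and the user's private additions and removals:

```python
def effective_links(topic, global_links, user_links):
    """Merge the global hyperspace links for a topic with a user's
    private overlay; the user's additions and removals take precedence
    over the globally established links."""
    links = set(global_links.get(topic, ()))
    overlay = user_links.get(topic, {})
    links |= set(overlay.get("add", ()))     # individually added links
    links -= set(overlay.get("remove", ()))  # individually hidden links
    return sorted(links)
```

Popular preference rankings could then operate on the global table alone, while each user still navigates a hyperspace shaped to individual taste.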

Global impact of the Knowledge Machine

There seems to be little doubt in the field that Papert's Knowledge Machine will arrive sooner rather than later, perhaps limited in some aspects as compared with the presented discussion. It should bring untold benefits to global welfare, and promote what UNESCO calls the culture of peace. Will it impoverish interpersonal relations? That is up to people themselves. It is also up to socially accepted systemic solutions. Indeed, the arrival of wireless phones resulted in secretaries carrying their mobile phones down to the ladies' room. That, however, is just an element of corporate culture, which is a fluid concept. Telecommuting will definitely become a mass reality. The appearance of powerful multipurpose PDAs will let anyone, at any place and time, access any piece of information located anywhere in the world. Conversely, it might also become harder for humans to find niches of individual freedom and contemplation. Again, systemic solutions might prevent the negative aspects of the latter.

And finally, will the flash of human civilization wane with the advent of artificial superintelligence? Will humans play a secondary role in the eternal quest for understanding, or will they transform themselves into a new quality that will remain at the center of cosmic events?