Natural Extensions for Artificial Cognition


Speaker:
Oliver Selfridge

Abstract:

My overall recommendation is that we should find out how to produce cognitive software that can be at least partly educated instead of having to be carefully programmed. The software must be able to learn not only how to accomplish the top-level desired task, but also how to check and improve its performance on a continuing basis at many different levels. In Artificial Intelligence (AI), Machine Learning (ML) is a vigorous and flourishing field. Note also that education here means much more than what is learnt in school.

Here I am proposing that we extend, modify, and expand our efforts in Cognitive Learning (CL), especially on the kinds of problems we tackle and on the techniques and goals we use. Much of what I propose derives from considering how people do their learning, both in and out of school; and from looking at the kinds of problems that are being analyzed in current work in ML.

There are three main topics in human learning that are mostly not even considered in ML. The first is what has been termed purpose structures, which were discussed in some detail in our DARPA contract [1]. The idea of purpose structures is to build software out of modules that each have a success function, so that changes in them can be assessed to assure continuing improvement.
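The idea can be made concrete with a minimal sketch: each module carries its own success function, and a proposed change is adopted only if that function scores it higher than the current behavior. The names, the scoring rule, and the example task below are illustrative assumptions, not part of the DARPA work cited above.

```python
from typing import Callable

class Module:
    """A module in a purpose structure: a behavior paired with its own success function."""

    def __init__(self, name: str,
                 behavior: Callable[[float], float],
                 success: Callable[[Callable[[float], float]], float]):
        self.name = name
        self.behavior = behavior  # the module's current implementation
        self.success = success    # scores a candidate behavior (higher is better)

    def propose_change(self, candidate: Callable[[float], float]) -> bool:
        """Adopt the candidate only if it improves the module's success score."""
        if self.success(candidate) > self.success(self.behavior):
            self.behavior = candidate
            return True
        return False

# Illustrative module whose purpose is to approximate doubling on sample inputs.
samples = [1.0, 2.0, 3.0]

def score(f: Callable[[float], float]) -> float:
    return -sum(abs(f(x) - 2 * x) for x in samples)  # negative total error

m = Module("doubler", lambda x: x + x / 2, score)
adopted = m.propose_change(lambda x: 2 * x)   # better approximation: adopted
rejected = m.propose_change(lambda x: x)      # worse: rejected
```

Because the success function lives inside the module, an improvement at one level can be assessed locally, without consulting the top-level task, which is the point of the proposal.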

The second topic is: how are the conclusions of CL in a piece of cognitive software to be remembered, so as to be applicable again in later and different circumstances? Note that there are many different ways of remembering what is learnt: learning and remembering the meaning of a past participle is very different from learning and remembering the tones of many Asian languages.
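One hypothetical answer to this question is a memory indexed by context features, so that a conclusion learned in one situation can be retrieved later in a partially matching one. Everything in this sketch, including the overlap-based matching rule and the example entries, is an assumption for illustration only.

```python
class LearnedMemory:
    """Stores learned conclusions keyed by the context features in which they arose."""

    def __init__(self):
        self.entries = []  # list of (context_features, conclusion) pairs

    def remember(self, context: frozenset, conclusion: str) -> None:
        self.entries.append((context, conclusion))

    def recall(self, context: frozenset):
        """Return the stored conclusion whose context overlaps the query most."""
        best, best_overlap = None, 0
        for stored, conclusion in self.entries:
            overlap = len(stored & context)
            if overlap > best_overlap:
                best, best_overlap = conclusion, overlap
        return best  # None when no stored context shares any feature

mem = LearnedMemory()
mem.remember(frozenset({"grammar", "english"}), "past participle uses -ed")
mem.remember(frozenset({"phonology", "mandarin"}), "tone changes meaning")

found = mem.recall(frozenset({"english", "verbs"}))  # matches the grammar entry
```

The two example entries echo the contrast in the text: a grammatical rule and a phonological one are remembered under very different context features, and a later query retrieves whichever kind the new circumstances resemble.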

The third topic is: how are the conclusions of CL in a piece of cognitive software to be shared with other cognitive agents? Infants (and people generally!) obviously use many different kinds of communication.

None of those general topics has been much faced at all in AI, let alone in CL. On top of that, the cognitive software must work in environments that are continually changing at all levels, including the overall standards of success. We note that "Mama" may mean one thing to a two-year-old, but it may mean something very different to a five-year-old, in context, circumstances, responses and usefulness.

We need to analyze these points and order them, so as to propose a program that will diverge ... and take the one less traveled by ... perhaps that will make all the difference! And perhaps we can then break new boundaries in Artificial Cognition.

References:
[1] Final Report, DARPA Contract F30602-00-C-0216: Creation and Modeling of Adaptive Agent Systems.