Reactions - MIT MAS962

Greg Detre

Sunday, November 10, 2002

 

Mueller (2002) "ThoughtTreasure: A natural language/commonsense platform"

Mueller describes ThoughtTreasure as a "comprehensive platform for natural language processing and commonsense reasoning". It amounts to a pretty enormous knowledge base of assertions, covering both lexical and world knowledge and employing a number of different representations. ThoughtTreasure can be seen as enlarging the scope of WordNet, adding knowledge like "the sky is blue" as well as showing connections across syntactic categories. It's estimated to have 25,000 hierarchical concepts, amounting to approximately 55,000 words and phrases (including both English and French synonyms).

ThoughtTreasure uses the following representation schemes, sketched in toy form after the list:

finite automata - like scripts, representing rules of thumb, device behaviour and mental/emotional behaviour procedurally

logical assertions - facts and linguistic knowledge

grids - like frames, ASCII art used to represent the geography of familiar places
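
To make the division concrete, here is a toy sketch in Python (entirely my own invention: the class and field names are not ThoughtTreasure's actual data structures or API) of how these three styles of representation might sit together in one knowledge base:

```python
# Illustrative sketch only; invented names, not ThoughtTreasure's real structures.
from dataclasses import dataclass


@dataclass
class Assertion:
    """A logical assertion, e.g. (colour-of sky blue)."""
    predicate: str
    args: tuple


@dataclass
class Script:
    """A finite-automaton-like script: ordered states with allowed transitions."""
    name: str
    states: list          # e.g. ["enter", "order", "eat", "pay", "leave"]
    transitions: dict     # state -> list of permissible next states


@dataclass
class Grid:
    """An ASCII-art grid for the geography of a familiar place."""
    name: str
    rows: list            # equal-length strings; characters mark walls and objects


kb = {
    "assertions": [Assertion("colour-of", ("sky", "blue"))],
    "scripts": [Script("restaurant",
                       ["enter", "order", "eat", "pay", "leave"],
                       {"enter": ["order"], "order": ["eat"],
                        "eat": ["pay"], "pay": ["leave"]})],
    "grids": [Grid("studio-flat",
                   ["##########",
                    "#bed  tbl#",
                    "#    door#",
                    "##########"])],
}
```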

The example of a natural language query of an assertion that Mueller gives illustrates ThoughtTreasure's basic workings. When asked "Who created Bugs Bunny?", the question works its way down through the levels: words and phrases are recognised by the text agent and ambiguities flagged, syntactic parse trees are built and converted into multiple logical assertions, these are given "make sense" ratings, and finally the answer to the most sensical interpretation is returned. When stories and more complicated tasks are involved, the system tries to produce an interpretation combining different representations that satisfies as many of the different variables/roles/slots as possible. ThoughtTreasure employs very lucid, concrete models of the world, rather than relying on abstract or propositional assertions.
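
That pipeline can be caricatured in a few lines. The following is purely my own toy sketch, with invented function names and a one-entry fact base; it only gestures at the stages Mueller describes and is not ThoughtTreasure's actual code or answer:

```python
# Toy sketch of the question-answering stages described above.
KB = {("created", "Bugs Bunny"): "Tex Avery"}   # invented toy entry


def recognise_phrases(text):
    # Text-agent stage: spot known phrases (where ambiguities would be flagged).
    return [phrase for (_, phrase) in KB if phrase in text]


def to_assertions(text, phrases):
    # Parsing/semantics stage: convert the parse into candidate logical assertions.
    if text.lower().startswith("who created") and phrases:
        return [("created", p) for p in phrases]
    return []


def make_sense_rating(assertion):
    # Plausibility stage: readings that match the knowledge base rate highest.
    return 1.0 if assertion in KB else 0.0


def answer(text):
    candidates = to_assertions(text, recognise_phrases(text))
    if not candidates:
        return None
    best = max(candidates, key=make_sense_rating)   # most sensical reading wins
    return KB.get(best)


print(answer("Who created Bugs Bunny?"))
```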

Currently, ThoughtTreasure is the gargantuan work of one man. It certainly seems to work pretty impressively in the examples he gives, although they get recycled frequently, e.g. the potential problem of scheduling a meal with a vegetarian in a steakhouse. However, he barely discusses the issue of scalability, other than to say he'd like to continue building more grids, adding more assertions and perhaps incorporating some of Open Mind's knowledge. Ultimately though, without a built-in, robust, automated means of aggregating new knowledge, ThoughtTreasure is subscribing to Cyc's vain hope that reaching some "critical mass" of common sense knowledge will allow it to harvest the rest from newspapers. On a related note, the knowledge that ThoughtTreasure does have is pretty rich, but seems extremely brittle. I don't see how it could deal with an unexpected turn of events in its scripts, unless that turn of events was also represented as a script. Presumably, it could try to understand the new events in terms of its models of mental processes and knowledge about the world, but without a notion of embodiment and the huge variation of possibilities for interaction that the real world brings, I don't see how it could fit the world into its schemas.

There is a further problem of fitting the different representations together. I think he says that all of the grids are, at the lowest level, represented as a series of assertions, but that doesn't make it much easier to see how the knowledge from the grids and scripts should tussle and settle with uncertain assertions in a rare or highly-contextualised situation.

 

There is an "easy" problem of getting together enough assertions like "the sky is blue", along with some basic scripts and geography, which presumably cannot be fully generated from any model of the world but which comprise and constrain it. I contend that the "hard" problem is that even millions of such assertions will never fully specify how to behave in the world, what to expect, or how to understand things in the way that we do.

At the risk of simplifying, I can see two aspects to the hard problem. To at least some degree, the way we view the world and the assumptions that we make about it are a product of the way we are. This is not an issue that we can easily deal with in common sense reasoning systems, other than perhaps garnering their knowledge the long, slow way through an embodied system. But this misses the short-term point of trying to build common sense knowledge bases, which is to aim for low-hanging fruit: software that would benefit from even a thin veneer of basic knowledge (like SensiCal, word-sense disambiguation in NLP, etc.).

The second hard problem is context. Only a small proportion of common sense facts can be stated unequivocally. Even statements like "the sky is blue" are only true during the day, when it's not an eclipse, when it's not grey or overcast, when you're not wearing funny-coloured glasses, etc.

Cyc's approach is to try to codify the most general statements that generate such assertions, and then relativise them in some way, either through micro-theories or meta-assertions. Open Mind does things slightly differently, as far as I can tell, hoping for (or prompting) someone to provide an assertion that highlights exceptions or contextualises the original assertion. ThoughtTreasure seems to rely on some explicit contextualising assertions, as well as its grids and finite-automaton scripts, to gain contextual information.
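
As a minimal sketch of what relativising an assertion to a context might look like (my own invented representation, assuming nothing about how Cyc, Open Mind or ThoughtTreasure actually encode it):

```python
# Invented representation of context-relative assertions; illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class ContextualAssertion:
    predicate: str
    args: tuple
    context: frozenset        # conditions under which the assertion holds


sky_is_blue = ContextualAssertion(
    "colour-of", ("sky", "blue"),
    frozenset({"daytime", "clear-weather", "no-eclipse", "unfiltered-vision"}))


def holds(assertion, situation):
    """An assertion holds only if every one of its contextual conditions is met."""
    return assertion.context <= situation


print(holds(sky_is_blue, {"daytime", "clear-weather",
                          "no-eclipse", "unfiltered-vision"}))   # True
print(holds(sky_is_blue, {"daytime", "overcast"}))               # False
```

The trouble, as the next paragraph argues, is that any finite list of conditions like this is already a reduction of the context.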

Context is not something that can be discretised. If you want to capture everything about a context that could potentially be relevant to a common sense understanding, then you have to capture the entire context, because you can never predict which minute, unexpected aspects of the environment, goal, other agents, internal state, past history, chance molecular events or whatever, might affect the way a human would commonsensically respond. You cannot reduce the context if you want to respond commonsensically.

My fundamental reservation with all of the above approaches is that they try to make all of common sense knowledge explicit, whereas I think that our common sense is based on rich, underlying models that generate the knowledge that gets taken as first-class by current common sense systems. Knowledge like "You can usually see people's noses, but not their hearts" (an example which happens to be from Cyc) should be knowledge that never gets made explicit. I'm pretty sure that we don't have any assertions like this in our brains, and that our brains probably wouldn't be big enough to store the amount of knowledge that we do have, if it was in such a form. Rather, we have a vague anatomical model of the body, a naive understanding of optics and line of sight, and knowledge about clothes and the opacity of skin, and we can generate an assertion like the one above, but only when the situation demands it.
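
To make this concrete, here is a toy sketch (entirely my own invention, not anything proposed by Mueller or Cyc) of how a fact like the nose/heart one could be generated on demand from a crude anatomical model plus a crude notion of opacity, rather than stored as an explicit assertion:

```python
# Toy generative model: derive visibility facts instead of storing them.
BODY_MODEL = {
    # part: layers lying between the part and an outside observer
    "nose":  [],                      # on the surface of the face
    "heart": ["skin", "ribcage"],     # buried inside the torso
    "knee":  ["trousers"],            # usually clothed
}

OPAQUE = {"skin", "ribcage", "trousers"}   # crude optics: these block line of sight


def usually_visible(part):
    """A part is visible if nothing opaque lies between it and the viewer."""
    return all(layer not in OPAQUE for layer in BODY_MODEL[part])


for part in BODY_MODEL:
    verb = "can usually see" if usually_visible(part) else "cannot usually see"
    print(f"You {verb} people's {part}s.")
```

The point is not that this particular model is right, but that the explicit assertion falls out of a small underlying model only when the situation asks for it.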

The same complaint could be made of treating scripts and assertions as first-level items of knowledge. It may be that, to some degree, we do have some sort of script-like knowledge in our brains, telling us the order of procedures to expect in a given situation, but this is built on top of more basic knowledge telling us why that procedure is the way it is. It is because ThoughtTreasure lacks this knowledge that it's so brittle. In fairness, I think Mueller does a credible job of recognising this problem, and mitigates it with his deepening layers of understanding, incorporating planning and trying to fit the different parameters of a situation within a story. Unfortunately, because the story model itself is hand-coded, rather than self-organising within the world, the approach can't scale. I think this is one of the points that Brooks makes best: he says that you can't hope to train a system in a simplified world and slowly add complexity, hoping the system will scale, because the assumptions that the system will have made to operate successfully in the simple world may be based around clean-cut certainties and simplifications that don't exist in the real world. This is another way of arguing for gathering common sense by interacting with the real world, and for building large-scale embodied common sense models tolerant of that noise.