Abstract:
In most work on causal reasoning, an agent's knowledge assigns one of three values to domain facts: yes, no, or maybe. These values are not sufficient, however, to represent the statistical information available in many interesting domains (arguably including most realistic domains). Thus some recent approaches to causal reasoning have concentrated on representation and inference with probabilistic degrees of belief. We have found that probabilistic approaches to causality suffer from some of the same hard problems as traditional approaches. In particular, the frame and qualification problems arise in subtle ways, and it is important to realize when such profound representational problems exist. The problems implicitly motivate the representational primitives of Dean and Kanazawa's approach, but we find fault with their choice of primitives. In this paper, we first describe the persistence and qualification problems in a probabilistic setting, then explain and criticize Dean and Kanazawa's solutions from a more traditional non-probabilistic causal framework.
Abstract:
There are two profoundly different (though not exclusive) approaches to uncertain inference. According to one, uncertain inference leads from one distribution of (non-extreme) uncertainties over a set of propositions to another distribution of uncertainties over those propositions. According to the other, uncertain inference is like deductive inference in that the conclusion is detached from the premises (the evidence) and accepted as practically certain; it differs in being non-monotonic: an augmentation of the premises can lead to the withdrawal of conclusions already accepted. We show here, first, that probabilistic inference is what both traditional inductive logic (ampliative inference) and non-monotonic reasoning are designed to capture; second, that acceptance is legitimate and desirable; third, that statistical testing provides a model of probabilistic acceptance; and fourth, that a generalization of this model makes sense in AI.
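The contrast the abstract draws, between graded degrees of belief and detached-but-retractable acceptance, can be illustrated with a minimal sketch. The Bayes update, the 0.95 acceptance threshold, and the likelihood values below are illustrative assumptions, not details taken from the abstract:

```python
# Sketch: acceptance as thresholding a posterior probability.
# Threshold and evidence model are illustrative assumptions.

def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes update for a binary hypothesis H given one observation."""
    num = prior * likelihood_h
    return num / (num + (1 - prior) * likelihood_not_h)

THRESHOLD = 0.95  # "practically certain"

def accepted(p):
    return p >= THRESHOLD

p = 0.90                    # prior belief in H
p = posterior(p, 0.9, 0.2)  # a supporting observation
assert accepted(p)          # H is detached from the evidence and accepted
p = posterior(p, 0.1, 0.8)  # an augmentation of the premises
assert not accepted(p)      # the conclusion is withdrawn: non-monotonicity
```

The point of the sketch is only the qualitative shape: the first approach tracks the number `p` itself, while the second tracks the boolean `accepted(p)`, which can flip back off as evidence accumulates.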
Abstract:
Goal Reasoning (GR) concerns actors that assume the responsibility for dynamically selecting the goals they pursue. Our focus is on modelling an actor's decision making when they encounter notable events. We model GR as an iterative refinement process, where constraints introduced for each abstraction layer shape the solutions for successive layers. Our model provides a conceptual framework for robotics researchers and practitioners. We present a goal lifecycle and define a formal model for GR that (1) relates distinct disciplines concerning actors that operate on goals, and (2) provides a way to evaluate actors. We introduce GR using an example on waypoint navigation and outline its application, in three projects, for controlling simulated and real-world vehicles. We emphasize the relation of GR to planning, and encourage PlanRob researchers to collaborate in exploring this exciting frontier.
Abstract:
Our overall goal is to develop the estimation, planning, and control techniques necessary to enable robots to perform robustly and intelligently in complex uncertain domains. Robots operating in complex, unknown environments have to deal explicitly with uncertainty. Sensing is increasingly reliable, but inescapably local: robots cannot see, immediately, inside cupboards, under collapsed walls, or into nuclear containment vessels. Task planning, whether in household or disaster-relief domains, requires explicit consideration of uncertainty and the selection of actions at both the task and motion levels to support gathering information. Our approach to robust behavior in uncertain domains is founded on the notion of integrating estimation, planning, and execution in a feedback loop. A plan is made, based on the current belief state; the first step is executed; an observation is obtained; the belief state is updated; the plan is recomputed, if necessary, etc. We call this online replanning. Our work in this grant has developed an initial version of such a planner and demonstrated it for controlling the behavior of an autonomous mobile-manipulation robot.
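The plan-execute-observe-update cycle described in this abstract can be sketched as a small loop. The component functions (planner, executor, sensor, belief update, goal test) are hypothetical stand-ins; the abstract specifies only the loop structure, not these components:

```python
# Sketch of the online-replanning loop: replan from the current belief,
# execute only the first step, observe, update the belief, repeat.
# All component functions are illustrative placeholders.

def replan_loop(belief, plan_fn, execute_fn, observe_fn, update_fn, done_fn,
                max_steps=100):
    """Run the estimate-plan-execute feedback loop until done_fn holds."""
    for _ in range(max_steps):
        if done_fn(belief):
            return belief
        plan = plan_fn(belief)           # plan against the current belief state
        execute_fn(plan[0])              # commit only to the first step
        obs = observe_fn()               # local, possibly noisy observation
        belief = update_fn(belief, obs)  # belief-state (filtering) update
    return belief

# Toy usage: drive an integer position to 0; here the belief is exact.
state = {"pos": 3}
final = replan_loop(
    belief={"pos": 3},
    plan_fn=lambda b: ["step"] * b["pos"],             # remaining steps to goal
    execute_fn=lambda a: state.update(pos=state["pos"] - 1),
    observe_fn=lambda: state["pos"],
    update_fn=lambda b, obs: {"pos": obs},
    done_fn=lambda b: b["pos"] == 0,
)
assert final["pos"] == 0
```

Replanning every iteration is what lets the loop absorb surprises: if `execute_fn` had a different effect than the plan assumed, the next `plan_fn` call starts from the updated belief rather than the stale plan.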
Abstract:
Agents with incomplete environment models are likely to be surprised, and this represents an opportunity to learn. We investigate approaches for situated agents to detect surprises, discriminate among different forms of surprise, and hypothesize new models for the unknown events that surprised them. We instantiate these approaches in a new goal reasoning agent (named FOOLMETWICE), investigate its performance in simulation studies, and report that it produces plans with significantly reduced execution cost in comparison to not learning models for surprising events.
Abstract:
During the past several years, the senior investigator has been attempting to develop a unified theory of human reasoning. This research has proceeded along two major fronts, one involving the formulation of a subtheory of inductive reasoning, the other involving the formulation of a subtheory of deductive reasoning. This article discusses work done on deduction. Although the theory of deductive reasoning is not yet completely formulated or tested, work on the theory is far enough along to merit a progress report. So far, models of deduction for the three main kinds of syllogisms that have been investigated by students of human reasoning have been formulated and tested: categorical, conditional, and linear syllogisms. The theory and data for each of the three kinds of syllogisms are summarized and some conclusions are drawn.
Abstract:
Metaphor can be studied in many different ways and at many different levels. In his lucid and enlightening analysis of 'Generative metaphor: A perspective on problem-setting in social policy', Donald A. Schon has reached through macroscopic analysis many of the same conclusions the authors have reached through microscopic analyses of metaphor and induction. This paper discusses the sources of convergence. In the first section, they describe the motivation, approach, theory, and methods underlying their research on metaphor and its relationship to induction. In the second, they point out the convergences between Schon's viewpoint and their own. Finally, they draw some conclusions, showing in particular how their proposed theories of structure and process in metaphor address fundamental questions about the nature of metaphor. The apparent convergence of Schon's views and their own suggests that an understanding of metaphor can be attained that is independent of the means used to attain that understanding.
Abstract:
Tractability aspects of modal fixed point theories have proven to be among the thorniest issues ever since their introduction in the field of non-monotonic reasoning. This is not entirely misplaced, since the prime concern of non-monotonic reasoning is to speed up knowledge bases by providing them with exception-ignoring inference mechanisms. Remarkably, many proposed logics, notably the first-order ones, seem to have failed at just this point. Intractability typically arises from recurrent proof systems which, in the propositional case, always lead to laborious consistency checks. Problems of tractability may, however, sometimes be evaded by shifting the emphasis slightly towards notions related to provability. In the paper, the semantic notion of autoepistemic membership is axiomatized. The proof system is used to formally derive some well-known propositions in the literature of autoepistemic reasoning.
Abstract:
The ability to generate explanations plays a central role in human cognition and is essential for intelligent problem solving and decision making. Generating explanations requires a deep understanding of the domain and tremendous flexibility in the way concepts are accessed, combined, and used. Together, the joint requirements of deep understanding and flexibility in conceptual access and use constitute challenging design requirements for a model of explanation. The PIs developed a systematic program of computational modeling to elucidate the mental representations and processes underlying the generation of explanations in the service of problem solving and decision making. They developed and implemented a detailed process model of explanation, one that is capable of flexibly using its knowledge in the service of explaining novel explananda, and demonstrated that it provides a good qualitative account of patterns of explanations generated by human adults in laboratory settings. The same principles the model uses to generate explanations (i.e., reasoning 'backward' to infer causes of known effects) are also applicable to planning and problem solving (i.e., reasoning 'forward' to infer useful actions for problem solving).
Abstract:
Agents, whether biological or artificial, have bounded reasoning capabilities. As a result, they cannot make reasoned decisions instantaneously; reasoning takes time. Agents in dynamic environments face a potential difficulty when they must make decisions about what to do. They run the risk that the world may change in ways that undermine the very assumptions upon which their reasoning is proceeding. Dynamic environments and computational resource bounds thus pose a challenge that has led some researchers in Artificial Intelligence (AI) to propose that artificial agents be designed to avoid execution-time practical reasoning. In this paper, the author argues that there is a way in which an agent's plans can be used to constrain practical reasoning: they can suggest solutions to means-end reasoning problems that the agent subsequently encounters. Moreover, such solutions can often be accepted without further deliberation about possible alternatives. An agent will often be able to guide its search for a way to achieve some goal G by looking for an action A that it already intends that can also subserve G, or by looking for an intention that can be overloaded. If it is successful in this, it can typically avoid attempting to find alternative ways of achieving G; it need not weigh the solution involving A against competing options. The author argues that such a strategy, fine-tuned in appropriate ways, is rational, despite the fact that it may sometimes lead to suboptimal behavior.
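The strategy this abstract describes, checking existing intentions before deliberating from scratch, can be sketched in a few lines. The action and goal representations below are illustrative assumptions, not the paper's formalism:

```python
# Sketch: prefer an already-intended action that also subserves the new goal;
# fall back to full means-end deliberation only when none exists.
# The dict-based action/goal encoding is an illustrative assumption.

def means_for(goal, intentions, deliberate):
    """Return an action achieving `goal`, reusing existing intentions first."""
    for action in intentions:
        if goal in action["achieves"]:  # an intended action subserves the goal
            return action               # accept it without weighing alternatives
    return deliberate(goal)            # otherwise, deliberate from scratch

intentions = [{"name": "go_to_store", "achieves": {"buy_milk", "buy_bread"}}]
deliberations = []

def deliberate(goal):
    deliberations.append(goal)  # record that costly reasoning was needed
    return {"name": "plan_for_" + goal, "achieves": {goal}}

a = means_for("buy_bread", intentions, deliberate)
assert a["name"] == "go_to_store" and deliberations == []  # intention reused
b = means_for("mail_letter", intentions, deliberate)
assert deliberations == ["mail_letter"]                    # fallback engaged
```

The `deliberations` list makes the paper's cost argument concrete: every goal served by an existing intention skips `deliberate` entirely, trading a possibly suboptimal choice for reasoning time saved.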