NSF Award Abstract - #0412830

Hypothesis Formation and Testing in an Interpretive Domain: a Model and Intelligent Tutoring System


NSF Org IIS
Latest Amendment Date September 3, 2004
Award Number 0412830
Award Instrument Continuing grant
Program Manager Kenneth C. Whang
IIS Division of Information & Intelligent Systems
CSE Directorate for Computer & Information Science & Engineering
Start Date September 15, 2004
Expires February 28, 2006 (Estimated)
Awarded Amount to Date $345,111
Investigator(s) Kevin Ashley ashley+@pitt.edu (Principal Investigator)
Sponsor University of Pittsburgh
350 Thackeray Hall
Pittsburgh, PA 15260
412/624-7400
NSF Program(s) ADVANCED LEARNING TECHNOLOGIES
Field Application(s) 0104000 Information Systems,
0116000 Human Subjects
Program Reference Code(s)
Program Element Code(s) 1707

Abstract

Since the days of Bacon and Galileo, formulating hypotheses about natural phenomena and testing them against empirical data have been cornerstones of the natural sciences. As a cognitive framework, hypothesis formation and testing are also important in legal reasoning. The legal domain, however, differs from natural science and mathematics in a significant respect: determining whether a hypothesized rule and proposed outcome are consistent with past legal decisions is much more a matter of interpretation. The aims of this project are to (1) design and evaluate an Artificial Intelligence (AI) cognitive model of framing and testing hypotheses in an interpretive domain, legal reasoning, and (2) incorporate the model in an intelligent tutoring system (ITS) to teach law students the process.

The project builds upon two recent developments: (1) a newly invented means to frame and evaluate hypotheses predicting the outcomes of new cases based on an AI database of existing precedents; and (2) a convenient, on-line corpus of U.S. Supreme Court oral arguments in aural and written form, including many concrete examples of legal hypothesis framing and testing. In response to an advocate's proposed hypothesis of how the case should be decided, the Justices often challenge it by posing hypotheticals, sometimes forcing the advocate to modify or abandon the hypothesis. By studying these examples, the researchers, participating law students, and law faculty will schematize and model the process of framing and testing legal hypotheses, implement it computationally, evaluate it empirically, and use it to design the ITS.

The tutor will implement the model in various legal domains, each with a body of legal rules, issues, precedents, and principles, operationalized in a way that supports hypothesis formulation, prediction, testing, and explanation. Using the model, it will guide and challenge students' arguments. It will predict outcomes of cases, help students construct tests and rationales justifying the prediction, and help them evaluate the hypothesis by posing or responding to hypothetical challenges.

The researchers will evaluate the project's success in terms of: (1) the accuracy of the model's predictions for new cases and the extent to which it improves case retrieval; (2) how well model-generated arguments compare to those in the Supreme Court oral arguments or to those generated by law students; (3) how well ITS-trained students compare to a control group taught the same process using conventional law school methods; and (4) whether ITS-trained students generate more accurate self-explanations of the Supreme Court oral arguments.

This work extends AI techniques to a much less well-structured domain than natural science and mathematics, one more like the common-sense domains AI has yet to address. By using AI to investigate empirically a cognitive phenomenon, framing and testing hypotheses in an interpretive domain, it will contribute to research in AI & Law, Case-based Reasoning, AI & Education, and Cognitive Science.
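The frame-and-test cycle the abstract describes (propose a rule, check it against a precedent base, and narrow or abandon it when a contrary precedent or hypothetical defeats it) can be illustrated with a toy sketch. The sketch below is purely illustrative: the factor-based case representation, the example cases and factors, and the single refinement strategy are assumptions made for exposition, not the project's actual model.

from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    name: str
    facts: frozenset   # factors present in the case
    outcome: str       # e.g., "plaintiff" or "defendant"

@dataclass
class Hypothesis:
    # Proposed rule: if every required factor holds, predict `outcome`.
    required: frozenset
    outcome: str

    def covers(self, case):
        return self.required <= case.facts

def counterexamples(h, precedents):
    # Precedents the rule covers but that were decided the other way.
    return [c for c in precedents if h.covers(c) and c.outcome != h.outcome]

def refine(h, cex, current):
    # Narrow the rule with a factor that distinguishes the current case
    # from the counterexample (one simple refinement strategy among many).
    distinguishing = current.facts - cex.facts
    if not distinguishing:
        raise ValueError("no distinguishing factor; abandon the hypothesis")
    return Hypothesis(h.required | {sorted(distinguishing)[0]}, h.outcome)

def frame_and_test(current, h, precedents):
    # Test the hypothesis against the precedent base, refining it until
    # no precedent contradicts it (or it must be abandoned).
    while True:
        cexs = counterexamples(h, precedents)
        if not cexs:
            return h
        h = refine(h, cexs[0], current)

# Usage: an over-broad rule gets narrowed by a distinguishing factor.
p1 = Case("P1", frozenset({"secret", "disclosed-in-confidence"}), "plaintiff")
p2 = Case("P2", frozenset({"secret"}), "defendant")
new = Case("Current", frozenset({"secret", "disclosed-in-confidence"}), "plaintiff")
rule = frame_and_test(new, Hypothesis(frozenset({"secret"}), "plaintiff"), [p1, p2])
print(rule.required)  # frozenset({'disclosed-in-confidence', 'secret'})

In this reading, a Justice's hypothetical plays the same role as the counterexample case: it exposes a fact situation the proposed rule covers but should not, forcing the advocate to refine the rule or give it up.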
