The following are abstracts from representative LIS Research Awards for FY 1997.
These examples are provided for illustrative purposes only, and are not intended to
be restrictive.
Understanding and Fostering Spatial Competence
PI:
Janellen Huttenlocher, University of Chicago
Co-PIs:
Dedre Gentner, Northwestern University
Nora S. Newcombe, Temple University
A group of nine senior investigators will study spatial competence and its
emergence over time at the cognitive, computational, and neural levels. Topics
to be studied include how people form spatial representations; how people
communicate about spatial information using external symbol systems such as maps,
diagrams, graphs, and linguistic descriptions; the role of the educational input
received in American schools in supporting spatial learning; the optimal
computational model of spatial learning; and evidence of neural plasticity
for spatial learning, based on both neuroanatomical study and neuropsychological
evaluation.
The common purpose of this group of related research endeavors is to examine the
nature of environmentally sensitive growth in spatial competence and how spatial
learning can be maximized in the American population. Innovations for educational
practice and educational software resulting from our research will be evaluated
with the help of collaborating teachers.
Simulating Tutors with Natural Dialog and Pedagogical Strategies
PI:
Arthur C. Graesser, University of Memphis
Co-PIs:
Stanley P. Franklin, University of Memphis
Max Garzon, University of Memphis
Roger J. Kreuz, University of Memphis
William Marks, University of Memphis
The long-term practical objective of the research is to develop a fully
automated computer tutor. The tutor would be able to
- extract meaning from the contributions that the student types at the keyboard, and
- formulate dialog contributions with pedagogical value and conversational appropriateness.
The tutor's discourse moves include: pumping, prompting, hinting, questioning,
answering, summarizing, splicing in correct information, providing immediate
feedback, and rewording student contributions. The dialog contributions of
the tutor would be in different formats and media: printed text, synthesized
speech, simulated facial movements, graphic displays, and animation. Such an
achievement will require an interdisciplinary integration of theory and
empirical research from the fields of cognitive psychology, discourse
processing, computational linguistics, artificial intelligence, human-computer
interaction, and education. The tutoring topics will be in the domains of computer
literacy and introductory medicine.
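As a concrete illustration of how such discourse moves might be chosen, the
sketch below selects among a few of the moves named above based on a crude
assessment of the student's typed contribution. It is purely illustrative:
the quality and completeness scores, thresholds, and selection rules are
invented here, not part of the proposed tutor.

```python
# Illustrative sketch only: a rule-based selector over discourse moves
# named in the abstract. The scores and thresholds are hypothetical.

def choose_tutor_move(quality: float, completeness: float) -> str:
    """Pick a discourse move from a crude assessment of the student's
    typed contribution (both scores assumed to lie in [0, 1])."""
    if quality < 0.3:
        # Likely misconception: correct it directly.
        return "splice in correct information"
    if completeness < 0.4:
        # Little content yet: nudge the student to keep talking.
        return "pump"          # e.g., "Uh-huh, tell me more."
    if completeness < 0.7:
        # Partial answer: point toward the missing piece.
        return "hint"
    # Good, nearly complete answer: wrap up the exchange.
    return "summarize"

print(choose_tutor_move(quality=0.8, completeness=0.5))  # -> "hint"
```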
Previous attempts to develop a fully automated tutor have been seriously
hindered by several technical and theoretical barriers. These include:
- the problem of interpreting natural language when it is not well-formed
semantically and grammatically,
- the problem of world knowledge being immense, open-ended and incomplete, and
- the lack of research on human tutorial dialog.
Recent advances have dramatically reduced these barriers, so it is time
to revisit the mission of developing an automated tutor. According to
recent research on human tutoring, a key feature of effective tutoring lies
in generating discourse contributions that assist learners in actively
constructing explanations, elaborations, and mental models of the material.
The proposed research will advance scientific understanding of how a tutor
can manage a smooth, polite dialog that promotes deep learning of the material.
An Integrated Approach to Concept Learning in Humans and Machines
PI:
Brian H. Ross, University of Illinois at Urbana-Champaign
Co-PIs:
Gerald F. DeJong, University of Illinois at Urbana-Champaign
Gregory L. Murphy, University of Illinois at Urbana-Champaign
Leonard Pitt, University of Illinois at Urbana-Champaign
Karl S. Rosengren, University of Illinois at Urbana-Champaign
Concepts are essential for intelligent thought and action. The goal of
the project is an integrated view of concept learning in humans and machines.
The primary focus will be combining psychological experimentation with
artificial intelligence modeling to examine the interaction of world
knowledge and empirical information during concept learning.
The representation of concepts consists of feature regularities observed
in the instances and of features inferred from world knowledge. However,
current theories focus on only one type of feature and do not consider how
learning each might affect the other. Additional work will examine how the
use of concepts (such as those used for problem solving) may affect learning,
how prior knowledge may be restructured to accommodate new information, and
how concepts may change with age and experience. Computational learning
theory will be adapted to provide a mathematical characterization of the
learning process.
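One way to picture the interaction of the two feature types, as a minimal
sketch rather than the project's actual model: a category learner whose
weight for each feature combines counts observed in instances with a prior
contribution from world knowledge. All names and numbers below are invented.

```python
# Minimal sketch (not the project's model): a concept's feature weights
# combine empirical regularities from observed instances with prior
# weight contributed by world knowledge.
from collections import Counter

def learn_concept(instances, knowledge_features, prior_strength=2.0):
    """instances: list of feature sets; knowledge_features: features that
    world knowledge says the concept should have (hypothetical input)."""
    counts = Counter(f for inst in instances for f in inst)
    n = len(instances)
    weights = {}
    for f in set(counts) | set(knowledge_features):
        prior = 1.0 if f in knowledge_features else 0.0
        # Knowledge acts like prior_strength pseudo-observations,
        # so rarely observed but knowledge-consistent features persist.
        weights[f] = (counts[f] + prior_strength * prior) / (n + prior_strength)
    return weights

# Toy "bird" instances plus the knowledge-derived feature "lays eggs":
instances = [{"flies", "feathers"}, {"feathers", "sings"}, {"flies", "feathers"}]
print(learn_concept(instances, knowledge_features={"lays eggs"}))
```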
The view of concept learning that results from this work will be integrated
in that it will:
- investigate and account for a wide variety of concept learning results that
are often studied separately, and
- pool the research strengths of psychology, machine learning in
artificial intelligence, and computational learning theory.
The first goal will place greater constraints on theoretical accounts,
suggest new possibilities, and help to decide among competing explanations.
The second goal will lead to a theory that is psychologically and
computationally plausible, yet sufficiently rigorous to be analyzed
with the mathematical tools of computational learning theory. Such a theory
will contribute to the generation of new knowledge by broadening the
understanding of concept learning in each of the fields, and by promoting
new research issues and approaches in each field through interdisciplinary work.
Structured Statistical Learning
PI:
Mark E. Johnson, Brown University
Co-PIs:
Eugene Charniak, Brown University
John P. Donoghue, Brown University
Stuart A. Geman, Brown University
David Mumford, Brown University
Learning in many cognitive domains, including language and vision,
involves recognition of complex hierarchical structure that is hidden
or only indirectly reflected in the input data. In this project a
multi-disciplinary group of applied mathematicians, cognitive scientists,
computer scientists, linguists, and neuroscientists will study the learning
of compositional structure in language, vision, and planning, and will also
probe the neural mechanisms for identifying and exploiting such structure.
The research involves three interacting lines of work. The first refines
and extends statistical learning models; the second applies these models
to language, vision, and planning; and the third develops and applies new
experimental and analysis techniques for probing the neural mechanisms
that learn and exploit compositional structure.
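To make "hidden hierarchical structure" concrete in the language case: a
probabilistic context-free grammar is a standard statistical model in which
the hierarchical structure (the parse tree) is latent and only the word
string is observed. The toy grammar and probabilities below are invented
for illustration, not drawn from the project.

```python
# Toy PCFG sketch: the parse tree is the hidden hierarchical structure;
# only the word string is observed. Rules and probabilities are invented.
RULES = {
    ("S",  ("NP", "VP")): 1.0,
    ("NP", ("dogs",)):    0.5,
    ("NP", ("cats",)):    0.5,
    ("VP", ("V", "NP")):  0.6,
    ("VP", ("V",)):       0.4,
    ("V",  ("chase",)):   1.0,
}

def tree_prob(tree):
    """Probability of a parse tree written as (label, child, ...);
    leaves are plain strings."""
    label, *children = tree
    kids = tuple(c if isinstance(c, str) else c[0] for c in children)
    p = RULES[(label, kids)]
    for c in children:
        if not isinstance(c, str):
            p *= tree_prob(c)       # multiply in each subtree's rules
    return p

# One parse of "dogs chase cats":
tree = ("S", ("NP", "dogs"), ("VP", ("V", "chase"), ("NP", "cats")))
print(tree_prob(tree))  # 1.0 * 0.5 * 0.6 * 1.0 * 0.5 = 0.15
```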
The results of the project should significantly increase our
understanding of complex learning, and should have implications
for a wide range of topics in education (e.g., learning of complex
knowledge structures in science and math) and technology (e.g., automated
speech recognition, computer vision, robotics).
This project is being funded through the Learning & Intelligent Systems
Initiative, and is supported in part by the NSF Office of Multidisciplinary
Activities in the Directorate for Mathematical & Physical Sciences.
Learning Minimal Representations for Visual Navigation and Recognition
PI:
William H. Warren, Brown University
Co-PIs:
Leslie P. Kaelbling, Brown University
Michael J. Tarr, Brown University
This project is concerned with the intelligence exhibited in interactions
among sensory-motor activities and cognitive capacities such as reasoning,
planning and learning, in both organisms and machines. Such interaction is
regularly shown in the act of navigation, which is engaged in by humans and
other animals from an early age, and seems almost effortless in normal
circumstances thereafter. Whatever aspects of navigation are innate
and whatever are learned, it is important to understand the
interaction of the various cognitive, perceptual, and motor systems
that are involved. The complexity of these interactions becomes clear
in the development of mobile robots, such as the one recently deployed
on Mars, not to mention the more autonomous ones planned for the future.
It is still a major and imperfectly understood task to create programs
that will coordinate sensors, keep an internal "map" of the area, and
allow the robot to cross a space efficiently without colliding with
obstacles.
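As a minimal sketch of one piece of that task, assuming the internal "map"
is a simple occupancy grid (the grid and coordinates below are invented):
breadth-first search finds a shortest collision-free path across the space.

```python
# Minimal sketch of one sub-task: given an internal "map" as an occupancy
# grid (1 = obstacle), find a shortest collision-free path with
# breadth-first search. Grid and coordinates are invented for illustration.
from collections import deque

def shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None  # no collision-free route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 0)))  # routes around the obstacles
```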
An interdisciplinary approach is being taken in this research project,
exploring human capabilities through experiments, developing models based
on the experimental results and what is already known about human navigation,
implementing these models in programs for robot control, and then testing
these programs in robotic navigation experiments for their efficacy and their
reasonableness as models of human navigation. The goals are both to
understand the phenomena in humans and machines and to develop robust
algorithms to be used in mobile robots. This alliance of researchers
studying psychophysics, cognition, computation, and robotics will
lead to gains in knowledge across many disciplines and will enhance
our understanding of spatial cognition and visual navigation in agents,
both artificial and natural.
Optimization in Language and Language Learning
PI:
Paul Smolensky, Johns Hopkins University
Co-PIs:
Michael R. Brent, Johns Hopkins University
Robert E. Frank, Johns Hopkins University
Peter W. Jusczyk, Johns Hopkins University
Geraldine Legendre, Johns Hopkins University
This project is interdisciplinary research in the knowledge, processing,
and learning of language. It proceeds from a framework that draws on
results from mathematical statistics, adaptive systems, and formal
learning theory to treat language as a kind of statistical optimization.
Previous work by the principal investigator on the integration of
linguistic theory with optimization principles in neural networks has
led to a new grammar formalism, Optimality Theory, which has had
considerable impact on many aspects of the study of human language,
including learning. Recently developed methods of psychological
experimentation now provide reliable data on the process of language learning,
even in infants.
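The core mechanism of Optimality Theory can be sketched in a few lines:
candidate output forms compete, the constraints are ranked and violable,
and the winner is the candidate whose violation profile is best on the
ranked constraint list. The candidates and constraints below are invented
toy examples, not material from the project.

```python
# Toy sketch of Optimality Theory's core mechanism (candidates and
# constraints invented): ranked, violable constraints pick the winner
# by lexicographic comparison of violation counts.

def syllables(form):
    return form.split(".")

def onset(form):      # penalize syllables lacking an initial consonant
    return sum(1 for s in syllables(form) if s[0] in "aeiou")

def no_coda(form):    # penalize syllables ending in a consonant
    return sum(1 for s in syllables(form) if s[-1] not in "aeiou")

RANKING = [onset, no_coda]   # highest-ranked constraint first

def optimal(candidates):
    # Lexicographic comparison of violation vectors = ranked evaluation.
    return min(candidates, key=lambda f: [c(f) for c in RANKING])

# Candidate syllabifications of a hypothetical input /apto/:
print(optimal(["ap.to", "a.pto"]))  # "a.pto": same onset cost, no coda
```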
This research brings together these experimental methods for observing
real-time processing and learning of language, computational methods of
research on optimization and automatic language processing, and linguistic
methods for studying the structure of the representations essential for human
language.
The investigators bring not only expertise in the contributing
disciplines, but also considerable experience in interdisciplinary
collaboration. The results of this research may help to explain the
mystery of how humans - and possibly artificial systems - can learn
to use and understand languages.
Developmental Motor Control in Real and Artificial Systems
PI:
Neil E. Berthier, University of Massachusetts
Co-PIs:
Andrew G. Barto, University of Massachusetts
Rachel K. Clifton, University of Massachusetts
Richard S. Sutton, University of Massachusetts
A key aim of this initiative is to understand how highly complex intelligent
systems could arise from simple initial knowledge through interactions with the
environment. The best real-world example of such a system is the human infant
who progresses from relatively simple abilities at birth to quite sophisticated
abilities by two years of age. This research focuses on the development of
reaching by infants because:
- only rudimentary reaching ability is present at birth;
- older infants use their arms in a sophisticated way to exploit
and explore the world; and
- the problems facing the infant are similar to those an artificial
system would face.
The project brings together two computer scientists who are experts
on learning control algorithms and neural networks, and two psychologists
who are experts on the behavioral and neural aspects of infant reaching,
to investigate and test various algorithms by which infants might gain
control over their arms.
The proposed research focuses on the control strategies that infants
use in executing reaches, how infants develop appropriate and adaptive
modes of reaching, the mechanisms by which infants improve their ability
to reach with age, the role of sensory information in controlling the
reach, and how such knowledge might be stored in psychologically appropriate
and computationally powerful ways. Preliminary results suggest that
computational models that are appropriate for modeling the development
of human reaching are different in significant ways from traditional
computational models. Understanding the mechanisms by which intelligence
can develop through learning could have significant impact in many scientific
and engineering domains: building such systems would be simpler and faster
than engineering a system whose intelligence is fully specified by the
engineer, and systems based on interactive learning could rapidly adapt
to changing environmental conditions.
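To give the flavor of control improving through interaction with the
environment, here is a generic trial-and-error sketch, not the project's
model: a one-dimensional "reach" whose motor command is refined by keeping
variations that reduce the reach error. The target, step sizes, and
schedule are all invented.

```python
# Generic sketch, not the project's model: a one-dimensional "reach"
# improved by trial and error, in the spirit of learning control
# through interaction rather than pre-specified engineering.
import random

TARGET = 0.7          # hand position being reached for (invented)
command = 0.0         # motor command, initially crude
step = 0.3            # exploration magnitude

random.seed(0)
for trial in range(30):
    # Try a perturbed command and keep it if the reach error shrinks.
    candidate = command + random.uniform(-step, step)
    if abs(candidate - TARGET) < abs(command - TARGET):
        command = candidate       # successful variation is retained
    step *= 0.9                   # explore less as reaches improve

print(f"learned command {command:.3f} for target {TARGET}")
```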
Knowledge-based Action Planning and Control Problems in Engineering and Biology
PI:
Bijoy K. Ghosh, Washington University (St. Louis)
Co-PIs:
Wijesuriya P. Dayawansa, Texas Tech University
Alberto Isidori, Washington University
Clyde F. Martin, Texas Tech University
Philip S. Ulinski, University of Chicago
Biological systems have an innate capability for learning and representation
of dynamical cues from the environment and a navigational and gaze control
capability to sustain themselves in an unstructured environment. Such a
paradigm is largely lacking in engineered and artificial navigational systems.
Man-made systems (e.g., a walking or a mobile robot) are designed and
manufactured on the basis of a specific task objective, with little emphasis
on the design of a feedback control system.
The proposed research topics include microcircuits of motion detection
in the visual cortex of a turtle; pattern recognition and visual attention
in the primate visual system; muscle dynamics, head-eye coordination, and
asymptotic tracking; and information feedback and learning. We would develop
a suitable algorithm that would learn from visual cues and visually predict
the motion of a target in a cluttered environment; we would determine the
biological mechanisms that allow us to attend to a selected region of
visual space; we would develop an algorithm to coordinate the motion
of head and eye for the purpose of gaze control and asymptotic tracking;
and we hope to understand "dynamical problems in perception" by introducing
dynamical systems with perspective observation geometry. The goal is to
derive algorithms that would visually estimate the motion parameters in
a dynamically changing scene using biologically inspired models of
the retina and information coding.
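As a minimal illustration of estimating motion parameters from visual
observations, a generic alpha-beta tracker sketch, not the proposed
biologically based algorithm: position and velocity estimates are corrected
from each new observation and used to predict the target's next position.
The gains and data below are invented.

```python
# Generic alpha-beta tracker sketch (not the proposed biologically based
# algorithm): estimate a target's position and velocity from noisy visual
# observations and predict where it will appear next.

def track(observations, dt=1.0, alpha=0.85, beta=0.005):
    pos, vel = observations[0], 0.0
    for z in observations[1:]:
        # Predict forward one time step, then correct from the observation.
        pred = pos + vel * dt
        residual = z - pred
        pos = pred + alpha * residual
        vel = vel + (beta / dt) * residual
    return pos + vel * dt          # predicted next position

# Target moving ~2 units/step with small measurement noise (invented data):
obs = [0.0, 2.1, 3.9, 6.2, 8.0, 10.1]
print(f"predicted next position: {track(obs):.2f}")
```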
An intelligent system needs to control the flow of information through
judicious choice of its scarce resources. This team proposes to introduce
and investigate a new Information-Guided Feedback Paradigm for improved
perception, learning, action planning, and control, an important research
problem with tremendous potential for education. Engineers would learn
from biological systems how machines (robots) of the future could
integrate sensory (visual) knowledge, build an internal representation
(based on neural coding), and be guided by information feedback toward
improved man/machine interaction.