 
 
Research Topics
 
 
My students, collaborators and I are interested in the representation and processing of relations: How can neural architectures (brains or artificial neural networks) generate, represent and manipulate relational structures? How does the human mind's solution to this problem manifest itself in observable behavior? More generally, we are interested in fundamental issues in knowledge representation and the relation between knowledge representation and cognitive performance. We investigate these issues in several domains.


JIM3 architecture (Hummel, 2001)
Shape perception and object recognition
How do we represent the relations among an object's parts or features? Under what circumstances, and in what form, will we explicitly encode these relations, and how does our encoding affect the manner in which we recognize and categorize objects? What is the role of visual attention in the representation of object shape? We address these questions both empirically, by conducting experiments on object perception, recognition and categorization with human subjects, and theoretically, by developing and testing computational (neural network) models of object perception and recognition. More recently, Collin Green (now at NASA Ames) and I have begun to explore aspects of scene perception and comprehension, as well as the effects of object interactions (e.g., as when a pitcher appears poised to pour liquid into a glass) on the perception and identification of those objects. If you are interested in these topics, search in the Publications section of this site for papers co-authored with Biederman, Green, Saiki, Stankiewicz or Thoma. There are also a few relevant single-author papers.




LISA mapping of restrain (man, dog) onto restrain (tree, dog)
Analogy, analogical inference and schema induction
Working with Keith Holyoak (at UCLA), I have developed a neural network model--LISA (Learning and Inference with Schemas and Analogies; Hummel & Holyoak, 1997, 2003, Psychological Review)--of analogical mapping, analogy- and rule-based inference, and schema induction. We have applied (and continue to apply) the model to simulate reasoning in normal human adults, aspects of cognitive development, cognitive aging, and various cognitive impairments resulting from fronto-temporal dementia. We are also continuing to develop and refine the model. If you are interested in these topics, search in the Publications section of this site for papers co-authored with Doumas, Holyoak, Krawczyk, Kroger, Kubose, Morrison, Pedone or Viskontas.

Relation discovery and predication
Alex Doumas (now a post-doc in Linda Smith's lab at Indiana University) and I have developed a model--DORA (Discovery Of Relations by Analogy)--of the process whereby people acquire new relational concepts and come to represent those concepts as explicit structures (predicates) that can take arguments. Alex and Linda are in the process of testing some of DORA's fundamental assumptions and predictions, and Alex and I continue to explore the model's properties and account of various phenomena in the literature. If you are interested in this topic, search in the Publications section of this site for papers co-authored with Doumas.

Learning of relational categories
Niki Kittur (a grad student at UCLA), Keith Holyoak (UCLA) and I are investigating how people learn categories defined, not by the features of category exemplars, but by the relations among those features. Our findings suggest that relational categories may be learned and used in ways qualitatively different from the learning and use of more traditional feature-based categories (i.e., more traditional in the category learning literature, which is not to say more common outside the laboratory). If you are interested in this topic, search in the Publications section of this site for papers co-authored with Kittur. Currently, all of these are conference proceedings, but we are preparing a number of papers for submission to journals.

Explanation and problem solving
People routinely generate explanations. From the mundane ('How did this coffee get spilled here?') to the excruciatingly difficult ('How do people generate explanations of things?'), explanation is an activity we all engage in daily, starting from the time we are able to speak (and even before). Explanations allow us to fit new observations into our existing knowledge, aid us in problem solving, allow us to predict the future, and generally allow us to understand our world and the objects and events in it. Brian Ross, David Landy, Derek Devnich, Eric Taylor and I are working on an Air Force-sponsored project to develop and test a detailed computational model of the cognitive processes involved in generating, evaluating and using explanations in the service of problem solving. This work is still in an early stage of development, so we have not yet published any papers on it.

Category representation and use
Brian Ross and I are currently collaborating to simulate aspects of categorization and category-based inductive inference, especially the effects of category coherence on category-based inductive inference.

Similarity
Eric Taylor and I are currently collaborating to use the LISA model to simulate aspects of similarity judgment, including Tversky's well-known violations of the metric axioms in similarity judgment and the relations between relational and more featural aspects of similarity.

Reflexive inference and on-line language comprehension
Duane Watson, Gary Dell and I are in the (very) early stages of attempting to use LISA -- in particular, LISA's algorithm for reflexive (i.e., effortless, automatic) inference (see Hummel & Choplin, 2000) -- to account for aspects of on-line language comprehension, especially the effects of context on the interpretation of word meaning.