How can you tell a puppy from a kitten? Ask any first-grade class and you’ll get a range of answers. Some might start with visual information, like whether the animal has whiskers, how its ears look, or what kind of tail it has. Others might use auditory information, like the different sounds of barks and meows. The more creative might want to see if the animals smell or even taste different.
Though none of these answers is any more correct than the others, each reflects a different approach to the problem of categorization. In cognitive neuroscience, the ability to categorize objects is thought to require a distinct “concept” for each category, containing information about the typical properties, or features, of items in that category. The exact neural mechanisms by which concepts are formed and maintained, however, are not fully understood. According to one set of theories, concept representations are distributed across sensorimotor areas, with each area representing activation of a specific feature; knowledge of the concept is then a direct result of connections between different feature areas. Other theories instead propose the existence of integration areas that connect to multiple sensorimotor regions and contain intermediate representations of concept knowledge. Though the set of features used to define a concept can span multiple sensory modalities (e.g., vision, touch), the intermediate representation of a concept appears to exist in a theoretical semantic space, independent of any individual modality.
In a recent paper from the lab of Sharon Thompson-Schill, the role of these intermediate areas was examined using functional magnetic resonance imaging (Coutanche & Thompson-Schill, 2014). Specifically, this paper tested whether there is evidence to support the anterior temporal lobe (ATL) as a “convergence zone”, an area where various feature fragments are consolidated into a coherent representation of the object concept. During scanning, subjects viewed a screen of visual noise while attempting to detect the presence of a specific fruit or vegetable within the noise. The researchers analyzed brain activity during the time period before the fruit or vegetable actually appeared on screen, meaning the subject was actively thinking about the concept for the fruit or vegetable but was not actually viewing it. These memory-driven neural activation patterns were used to determine the type of information represented by the ATL during concept retrieval, and how it interacts with the feature information represented in other cortical areas. The study reports three main findings:
1. Activation in the ATL is specific to the identity of the retrieved concept, i.e. the specific fruit or vegetable being recalled. Using MVPA and a roving searchlight (https://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging#Statistical_analysis), the authors identified a cluster near the left ATL with above-chance classification accuracy (below).
2. When a concept is retrieved, areas that represent specific features of the object (in this case, color and shape) are specifically activated. Again using MVPA, the authors identified a bilateral region of lateral occipital cortex (LOC) and right V4 as areas that encode feature fragments corresponding to the concept being retrieved.
3. Successful retrieval of the object identity is concurrent with activation of the feature fragments corresponding to that object. Concurrent decoding of color and shape within feature regions (LOC and V4) was found to be specifically predictive of successful object identity decoding in the ATL. This result further supports the importance of the ATL in binding the information stored in distributed feature areas into a coherent representation of an object concept.
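The core analytic move in all three findings is multivariate pattern analysis (MVPA): training a classifier on multi-voxel activity patterns and asking whether it can predict, above chance, which concept the subject was retrieving. As a rough illustration of that logic (not the authors' actual pipeline, which used real fMRI data and different classifiers), here is a minimal sketch on simulated voxel patterns. The voxel counts, trial counts, signal strengths, and the nearest-centroid rule with leave-one-trial-out cross-validation are all illustrative assumptions.

```python
# Illustrative MVPA decoding sketch on simulated data -- NOT the
# authors' pipeline. Two "concepts" each evoke a weak, distributed
# pattern across a hypothetical searchlight of voxels; we test whether
# a cross-validated classifier can recover the concept label.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40                    # assumed searchlight size / trial count
labels = np.repeat([0, 1], n_trials // 2)      # two concepts, e.g. "apple" vs "carrot"

# Each concept weakly activates a different subset of voxels.
signal = np.zeros((2, n_voxels))
signal[0, :25] = 0.8
signal[1, 25:] = 0.8
patterns = signal[labels] + rng.normal(size=(n_trials, n_voxels))  # add noise

def decode_accuracy(X, y):
    """Leave-one-trial-out nearest-centroid classification accuracy."""
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        centroids = np.array([X[train & (y == c)].mean(axis=0) for c in (0, 1)])
        pred = np.argmin(np.linalg.norm(centroids - X[i], axis=1))
        correct += pred == y[i]
    return correct / len(y)

acc = decode_accuracy(patterns, labels)
print(f"decoding accuracy: {acc:.2f}")  # above chance (0.50) when patterns carry concept info
```

In a searchlight analysis, a test like this is repeated at every location in the brain, and regions where cross-validated accuracy exceeds chance (here 0.5) are taken to carry information about the retrieved concept; the third finding additionally relates trial-by-trial decoding success in feature regions to decoding success in the ATL.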
These results demonstrate that the ATL is likely to function as a convergence zone, providing a plausible neural substrate for the formation and retrieval of feature-based object concepts. They also open the door to further speculation about the role of top-down processing in identifying objects by their features, and about how features may be weighted differently depending on the type of sensory information available. Current and future work by the Thompson-Schill lab may address these questions and contribute further insights into the neural mechanisms underlying conceptualization.
Coutanche, M.N., & Thompson-Schill, S.L. (2014). Creating concepts from converging features in human cortex. Cerebral Cortex.