
Murray MM, Wallace MT, editors. The Neural Bases of Multisensory Processes. Boca Raton (FL): CRC Press/Taylor & Francis; 2012.


Chapter 31: Visual–Vestibular Integration for Self-Motion Perception

Gregory C. DeAngelis and Dora E. Angelaki.

31.1. THE PROBLEM OF SELF-MOTION PERCEPTION AND THE UTILITY OF VISUAL–VESTIBULAR INTEGRATION

How do we perceive our direction of self-motion through space? To navigate effectively through a complex three-dimensional (3-D) environment, we must accurately estimate our own motion relative to objects around us. Self-motion perception is a demanding problem in sensory integration, requiring the neural combination of visual signals (e.g., optic flow), vestibular signals regarding head motion, and perhaps also somatosensory and proprioceptive cues (Hlavacka et al. 1992, 1996; Dichgans and Brandt 1974). Consider a soccer player running downfield to intercept a pass and head the ball toward the goal. This athlete must be able to accurately judge the trajectory of the ball relative to the trajectory of his/her self-motion, in order to precisely time his/her head thrust to meet the ball. Optic flow and vestibular signals are likely the two most sensitive cues for judging self-motion (Gu et al. 2007, 2008; Fetsch et al. 2009). To understand the need for multisensory integration of these cues, it is useful to consider the strengths and weaknesses of each cue. Although self-motion generally involves both translations and rotations of the observer, we shall limit the scope of this review to translational movements, such that we focus on visual and vestibular cues that determine our perceived direction of heading.

31.1.1. Optic Flow

It has long been recognized that visual cues provide a rich source of information about self-motion (Gibson 1950). As we move through the environment, the resulting pattern of full-field retinal motion (optic flow) can be used to estimate heading. In the simplest case, involving an observer with stationary eyes and head moving through a stationary scene, the location of the focus of radial expansion in the optic flow field provides a direct indicator of heading. Many visual psychophysical and theoretical studies have examined how heading can be computed from optic flow (see Warren 2003 for review). The notion that optic flow contributes to self-motion perception is further supported by the fact that optic flow, by itself, can elicit powerful illusions of self-motion. As early as 1875, Ernst Mach described self-motion sensations (i.e., circular and linear vection) induced by optic flow. Numerous studies have subsequently characterized the behavioral observation that large-field optic flow stimulation induces self-motion perception (e.g., Berthoz et al. 1975; Brandt et al. 1973; Dichgans and Brandt 1978).

Interpretation of optic flow, however, becomes considerably complicated under more natural conditions. Specifically, optic flow is substantially altered by movements of the eyes and head (Banks et al. 1996; Crowell et al. 1998; Royden et al. 1992, 1994), and by motion of objects in the visual field (Royden and Hildreth 1996; Gibson 1954; Warren and Saunders 1995). An extensive literature, including studies cited above, has been devoted to perceptual mechanisms that compensate for eye and/or head rotation during translational self-motion, making use of both retinal and extraretinal signals (reviewed by Warren 2003). Perceptual compensation for eye and head movements is largely successful, and is likely aided by the fact that the brain contains internal signals related to eye and head movements (e.g., efference copy) that can be used to transform visual signals. The neural basis of this compensation for eye and head movements has been explored considerably (Bradley et al. 1996; Page and Duffy 1999; Shenoy et al. 1999), although our understanding of these compensatory mechanisms is far from complete.

Motion of objects in the world presents an even greater challenge to interpretation of optic flow because the brain contains no internal signals related to object motion. In general, the brain needs to solve a source separation problem because optic flow on the retina at any moment in time includes two major components: flow resulting from self-motion along with the static 3-D structure of the environment, and flow resulting from the movement of objects relative to the observer. Some psychophysical studies have suggested that this source separation problem can be solved through purely visual analysis of optic flow (Rushton and Warren 2005; Warren and Rushton 2007, 2008; Matsumiya and Ando 2009), whereas other studies indicate that nonvisual signals may be essential for interpretation of optic flow in the presence of object motion (Wexler 2003; Wexler et al. 2001; Wexler and van Boxtel 2005). Although interactions between object and background motion have been studied physiologically (Logan and Duffy 2006), the neural mechanisms that solve this problem remain unclear. Vestibular signals may be of particular importance in dealing with object motion because the vestibular system provides an independent source of information about head movements that may help to identify optic flow that is inconsistent with self-motion (induced by moving objects).

31.1.2. Vestibular Signals

The vestibular system provides a powerful independent source of information about head motion in space. Specifically, vestibular sensors provide information about the angular rotation and linear acceleration of the head in space (Angelaki 2004; Angelaki and Cullen 2008), and thus provide important inputs to self-motion estimation. A role of the vestibular system in the perception of self-motion has long been acknowledged (Guedry 1974, 1978; Benson et al. 1986; Telford et al. 1995).

With regard to heading perception, the limitations of optic flow processing might be overcome by making use of inertial motion signals from the vestibular otolith organs (Benson et al. 1986; Fernandez and Goldberg 1976a, 1976b; Guedry 1974). The otoliths behave much like linear accelerometers, and otolith afferents provide the basis for directional selectivity that could in principle be used to guide heading judgments. Indeed, with a sensory organ that signals real inertial motion of the head, one might ask why the nervous system should rely on visual information at all. Part of the answer is that even a reliable linear accelerometer has shortcomings, such as the inability to encode constant-velocity motion and the inability to distinguish between translation and tilt relative to gravity (due to Einstein’s equivalence principle). The latter problem may be resolved using angular velocity signals from the semicircular canals (Angelaki et al. 1999, 2004; Merfeld et al. 1999), but the properties of the canals render this strategy ineffective during low-frequency motion or static tilts. In fact, in the absence of visual cues, linear acceleration is often misperceived as tilt (the somatogravic illusion; Previc et al. 1992; Wolfe and Cramer 1970). This illusion can be quite dangerous for aviators, who feel compelled to pitch the nose of their aircraft downward to compensate for a nonexistent upward tilt, when in fact what they experienced was linear inertial acceleration.
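The tilt–translation ambiguity can be stated compactly. As a brief aside (a standard accelerometer identity, not an equation from the studies cited here), the otolith organs, like any linear accelerometer, transduce the specific force

$$\mathbf{f} = \mathbf{a} - \mathbf{g},$$

where a is the translational acceleration of the head and g is gravitational acceleration (sign conventions vary across the literature). A sustained forward acceleration and a static nose-up tilt can yield the same f, and hence the same otolith activation, which is the basis of the somatogravic illusion described above; disambiguating the two requires independent rotation information from the semicircular canals or from vision.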

In summary, both the visual and vestibular systems are limited in their ability to unambiguously signal self-motion. A sensible approach for heading estimation would thus be to combine visual and vestibular information to overcome the limitations of each modality on its own. As discussed further below, this cross-modal integration can also improve perceptual discrimination of heading over what is possible for each modality alone. Thus, we suggest that multisensory integration of visual and vestibular inputs provides dual benefits: it overcomes important limitations of each sensory system alone and it provides increased sensitivity when both systems are active.

31.2. POTENTIAL NEURAL SUBSTRATES FOR VISUAL–VESTIBULAR INTEGRATION

Where should one look in the brain to find neurons that integrate visual and vestibular signals for self-motion perception? One possibility is to look in portions of “visual” cortex that are known to carry selective responses to optic flow stimuli. Another possibility is to look in regions of “vestibular” cortex that may integrate otolith inputs with visual signals. Here, we briefly consider what is known about each of these possibilities.

Optic flow–sensitive neurons have been found in the dorsal portion of the medial superior temporal area (MSTd; Tanaka et al. 1986; Duffy and Wurtz 1991, 1995), ventral intraparietal area (VIP; Bremmer et al. 2002a, 2002b; Schaafsma and Duysens 1996), posterior parietal cortex (7a; Siegel and Read 1997), and the superior temporal polysensory area (STP; Anderson and Siegel 1999). Among these areas, MSTd and VIP (Figure 31.1) currently stand out as good candidates for integrating visual and vestibular signals to subserve heading perception because (1) they have large receptive fields and selectivity for complex optic flow patterns that simulate self-motion (Duffy and Wurtz 1991, 1995; Tanaka et al. 1986; Tanaka and Saito 1989; Schaafsma and Duysens 1996; Bremmer et al. 2002a), (2) they show some compensation for shifts in the focus of expansion due to pursuit eye movements (Bradley et al. 1996; Zhang et al. 2004; Page and Duffy 1999), and (3) they have been causally linked to heading judgments based on optic flow in microstimulation studies (Britten and van Wezel 1998, 2002; Zhang and Britten 2003). Perhaps most importantly, MSTd and VIP also contain neurons sensitive to physical translation in darkness (Bremmer et al. 1999, 2002b; Duffy 1998; Gu et al. 2006; Chen et al. 2007; Schlack et al. 2002; Takahashi et al. 2007; Chowdhury et al. 2009). This suggests the presence of vestibular signals that may be useful for heading perception, and thus the potential for integration with optic flow signals.

FIGURE 31.1. (See color insert.) Illustration of some of the areas thought to be involved in processing of visual and/or vestibular signals for self-motion perception (see text for details). A partially inflated surface of cerebral cortex of a macaque monkey is shown.

In addition to regions conventionally considered to be largely visual in nature, there are several potential loci within the vestibular system where otolith-driven signals regarding translation could be combined with optic flow signals. Putative visual-vestibular convergence has been reported as early as one or two synapses from the vestibular periphery, in the brainstem vestibular nuclei (Daunton and Thomsen 1979; Henn et al. 1974; Robinson 1977; Waespe and Henn 1977) and vestibulo-cerebellum (Markert et al. 1988; Waespe et al. 1981; Waespe and Henn 1981). However, responses to visual (optokinetic) stimuli within these subcortical circuits are more likely related to gaze stabilization and eye movements [optokinetic nystagmus (OKN), vestibulo-ocular reflex (VOR), and/or smooth pursuit] rather than self-motion perception per se. This conclusion is supported by recent experiments (Bryan and Angelaki 2008) showing a lack of optic-flow responsiveness in the vestibular and deep cerebellar nuclei when animals were required to fixate a head-fixed target (suppressing OKN).

At higher stages of vestibular processing, several interconnected cortical areas have traditionally been recognized as “vestibular cortex” (Fukushima 1997; Guldin and Grusser 1998), and are believed to receive multiple sensory inputs, including visual, vestibular, and somatosensory/proprioceptive signals. Specifically, three main cortical areas (Figure 31.1) have been characterized as exhibiting responses to vestibular stimulation and/or receiving short-latency vestibular signals (trisynaptic, via the vestibular nuclei and the thalamus). These include: (1) area 2v, located in the transition zone of areas 2, 5, and 7 near the lateral tip of the intraparietal sulcus (Schwarz and Fredrickson 1971a, 1971b; Fredrickson et al. 1966; Buttner and Buettner 1978); (2) the parietoinsular vestibular cortex (PIVC), located between the auditory and secondary somatosensory cortices (Grusser et al. 1990a, 1990b); and (3) area 3a, located within the central sulcus extending into the anterior bank of the precentral gyrus (Odkvist et al. 1974; Guldin et al. 1992). In addition to showing vestibular responsiveness, neurons in PIVC (Grusser et al. 1990b) and 2v (Buttner and Buettner 1978) were reported to show an influence of visual/optokinetic stimulation, similar to subcortical structures. However, these studies did not conclusively demonstrate that neurons in any of these areas provide robust information about self-motion from optic flow. Indeed, we have recently shown that PIVC neurons generally do not respond to brief (2-second) optic flow stimuli with a Gaussian velocity profile (Chen et al. 2010), whereas these same visual stimuli elicit very robust directional responses in areas MSTd and VIP (Gu et al. 2006; Chen et al. 2007). Thus far, we also have not encountered robust optic flow selectivity in area 2v (unpublished observations).

In summary, the full repertoire of brain regions that carry robust signals related to both optic flow and inertial motion remains to be further elaborated, and other areas that serve as important players in multisensory integration for self-motion perception may yet emerge. However, two aspects of the available data are fairly clear. First, extrastriate areas MSTd and VIP contain robust representations of self-motion direction based on both visual and vestibular cues. Second, traditional vestibular cortical areas (PIVC, 2v) do not appear to have sufficiently robust responses to optic flow to be serious candidates for the neural basis of multimodal heading perception. In the remainder of this review, we shall therefore focus on what is known about visual–vestibular integration in area MSTd, as this area has been best studied so far.

31.3. HEADING TUNING AND SPATIAL REFERENCE FRAMES IN AREA MSTD

31.3.1. Heading Tuning

The discovery of vestibular translation responses in MSTd, first reported by Duffy (1998), was surprising because this area is traditionally considered part of the extrastriate visual cortex. The results of Duffy’s groundbreaking study revealed a wide variety of visual–vestibular interactions in MSTd, including enhancement and suppression of responses relative to single-cue conditions, as well as changes in cells’ preferred direction with anticongruent stimulation.

Building upon Duffy’s findings, we used a custom-built virtual reality system (Figure 31.2a) to examine the spatial tuning of MSTd neurons in three dimensions (Figure 31.2b), making use of stimuli with a Gaussian stimulus velocity profile (Figure 31.2c) that is well suited to activating the otolith organs (Gu et al. 2006; Takahashi et al. 2007). Heading tuning was measured under three stimulus conditions: visual only, vestibular only, and a combined condition in which the stimulus contained precisely synchronized optic flow and inertial motion. We found that about 60% of MSTd neurons show significant directional tuning for both visual and vestibular heading cues. MSTd neurons showed a wide variety of heading preferences, with individual neurons being tuned to virtually all possible directions of translation in 3-D space. Notably, however, there was a strong bias for MSTd neurons to respond best to lateral motions within the frontoparallel plane (i.e., left/right and up/down), with relatively few neurons preferring fore–aft directions of motion. This was true for both visual and vestibular tuning separately (Gu et al. 2006, 2010).

FIGURE 31.2. (a–c) Apparatus and stimuli used to examine visual–vestibular interactions in rhesus monkeys: (a) 3-D virtual reality system, (b) heading trajectories, and (c) velocity and acceleration profiles used by Gu et al. (2006). (d, e) 3-D heading tuning of example MSTd neurons (see text for details).

Interestingly, MSTd neurons seemed to fall into one of two categories based on their relative preferences for heading defined by visual and vestibular cues. For congruent cells, the visual and vestibular heading preferences are closely matched, as illustrated by the example neuron shown in Figure 31.2d. This neuron preferred rightward motion of the head in both the visual and vestibular conditions. In contrast, opposite cells have visual and vestibular heading preferences that are roughly 180° apart (Gu et al. 2006). For example, the opposite cell in Figure 31.2e prefers rightward and slightly upward motion in the vestibular condition, but prefers leftward and slightly downward translation in the visual condition. For this neuron, responses in the combined stimulus condition (Figure 31.2e, right panel) were very similar to those elicited by optic flow in the visual condition. This pattern of results was common in the study of Gu et al. (2006). However, as discussed further below, this apparent visual dominance was because high-coherence visual stimuli were used. We shall consider this issue in considerably more detail in the next section.

The responses of MSTd neurons to translation in the vestibular condition were found to be very similar when responses were recorded during translation in complete darkness (as opposed to during viewing of a fixation target on a dim background), suggesting that spatial tuning seen in the vestibular condition (e.g., Figure 31.2d, e) was indeed of labyrinthine origin (Gu et al. 2006; Chowdhury et al. 2009). To verify this, we examined the responses of MSTd neurons after a bilateral labyrinthectomy. After the lesion, MSTd neurons did not give significant responses in the vestibular condition, and spatial tuning was completely abolished (Gu et al. 2007; Takahashi et al. 2007). Thus, responses observed in MSTd during the vestibular condition arise from otolith-driven input.

31.3.2. Reference Frames

Given that neurons in MSTd show spatial tuning for both visual and vestibular inputs, a natural question arises regarding the spatial reference frames of these signals. Vestibular signals regarding translation must initially be coded by the otolith afferents in head-centered coordinates, because the vestibular organs are fixed in the head. In contrast, visual motion signals must initially be coded in retinal (eye-centered) coordinates. Since these two signals arise in different spatial frames of reference, how are they coded when they are integrated by MSTd neurons? Some researchers have suggested that signals from different sensory systems should be expressed in a common reference frame when they are integrated (Groh 2001). On the other hand, computational models show that neurons can have mixed and intermediate reference frames while still allowing signals to be decoded accurately (Deneve et al. 2001; Avillac et al. 2005).

To investigate this issue, we tested whether visual and vestibular heading signals in MSTd share a common reference frame (Fetsch et al. 2007). To decouple head-centered and eye-centered coordinates, we measured visual and vestibular heading tuning while monkeys fixated on one of three target locations: straight ahead, 20–25° to the right, and 20–25° to the left. If heading is coded in eye-centered coordinates, the heading preference of the neuron should shift horizontally (in azimuth) by the same amount as the gaze is deviated from straight ahead. If heading is coded in head-centered coordinates, then the heading preference should remain constant as a function of eye position.

Figure 31.3a shows the effect of eye position on the vestibular heading preference of an MSTd neuron. In this case, heading preference (small white circles connected by dashed line) remains quite constant as eye position varies, indicating head-centered tuning. Figure 31.3b shows the effect of eye position on the visual heading tuning of another MSTd neuron. Here, the heading preference clearly shifts with eye position, such that the cell signals heading in an eye-centered frame of reference. A cross-correlation technique was used to measure the amount of shift of the heading preference relative to the change in eye position. This yields a metric, the displacement index, which will be 0.0 for head-centered tuning and 1.0 for eye-centered tuning. As shown in Figure 31.3c, we found that visual heading tuning was close to eye-centered, with a median displacement index of 0.89. In contrast, vestibular heading tuning was found to be close to head-centered, with a median displacement index of 0.24. This value for the vestibular condition was significantly larger than 0.0, indicating that vestibular heading tuning was slightly shifted toward eye-centered coordinates.
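To make the displacement index concrete, the sketch below illustrates one way such a metric can be computed (our simplified illustration with hypothetical tuning curves and function names, not the analysis code of Fetsch et al. 2007): circularly shift the tuning curve measured at one eye position until it best correlates with the curve measured at the other, and divide the best-aligning shift by the change in eye position.

```python
import numpy as np

def displacement_index(tuning_a, tuning_b, azimuths, gaze_shift_deg):
    """Shift (deg) of a heading tuning curve between two fixation positions,
    expressed per degree of gaze shift: ~0 => head-centered, ~1 => eye-centered.
    tuning_a, tuning_b: firing rates on the circular grid `azimuths` (deg),
    measured at the first and second fixation; gaze_shift_deg = second - first."""
    step = azimuths[1] - azimuths[0]
    best_k, best_r = 0, -np.inf
    for k in range(len(azimuths)):                  # try every circular shift of curve A
        r = np.corrcoef(np.roll(tuning_a, k), tuning_b)[0, 1]
        if r > best_r:
            best_r, best_k = r, k
    shift_deg = ((best_k * step + 180.0) % 360.0) - 180.0   # wrap to (-180, 180]
    return shift_deg / gaze_shift_deg

# Hypothetical cell: von Mises-like tuning whose preferred heading follows the eye.
az = np.arange(0.0, 360.0, 5.0)
curve = lambda pref: 10 + 40 * np.exp(np.cos(np.deg2rad(az - pref)))
print(displacement_index(curve(90), curve(130), az, gaze_shift_deg=40.0))  # ~1.0 (eye-centered)
print(displacement_index(curve(90), curve(90), az, gaze_shift_deg=40.0))   # 0.0 (head-centered)
```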

FIGURE 31.3. Reference frames of visual and vestibular heading signals in MSTd. Tuning functions are plotted for two example cells in (a) vestibular and (b) visual conditions, measured separately at three static eye positions along the horizontal meridian (see text for details).

These data show that visual and vestibular signals in MSTd are not expressed in a common reference frame. By conventional thinking, this might cast doubt on the ability of this area to perform sensory integration for heading perception. However, computational modeling suggests that sensory signals need not explicitly occupy a common reference frame for integration to occur (Avillac et al. 2005; Fetsch et al. 2007; Deneve et al. 2001). Moreover, as we will see in a later section, MSTd neurons can account for improved behavioral sensitivity under cue combination. Thus, the conventional and intuitive notion that sensory signals need to be expressed in a common reference frame for multisensory integration to occur may need to be discarded.

The results of the study by Fetsch et al. (2007) also provide another challenge to conventional ideas regarding multisensory integration and reference frames. To our knowledge, all previous studies on reference frames of sensory signals have only examined responses during unisensory stimulation. Also relevant is the reference frame exhibited by neurons during combined, multimodal stimulation, and how this reference frame depends on the relative strengths of responses to the two sensory modalities. To examine this issue, Fetsch et al. (2007) measured the reference frame of activity during the combined (visual–vestibular) condition, as well as the unimodal conditions. Average displacement index values were computed as a function of the relative strength of unimodal visual and vestibular responses [visual/vestibular ratio (VVR)]. For the visual (circles) and vestibular (squares) conditions, the average displacement index did not systematically depend on VVR (Figure 31.3d), indicating that the reference frame in the unimodal conditions was largely independent of the relative strengths of visual and vestibular inputs to the neuron under study. In contrast, for the combined condition (diamonds), the average displacement index changed considerably as a function of VVR, such that the reference frame of combined responses was more head-centered for neurons with low VVR and more eye-centered for neurons with high VVR (Figure 31.3d). Thus, the reference frame of responses to multimodal stimuli can vary as a function of the relative strengths of the visual and vestibular inputs. This has potentially important implications for understanding how multisensory responses are decoded, and deserves further study.

31.4. THE NEURONAL COMBINATION RULE AND ITS DEPENDENCE ON CUE RELIABILITY

An issue of great interest in multisensory integration has been the manner in which neurons combine their unimodal sensory inputs. Specifically, how is the response to a bimodal stimulus related to the responses to the unimodal components presented separately? Traditionally, this issue has been examined by computing one of two metrics: (1) a multisensory enhancement index, which compares the bimodal response to the largest unimodal response, and (2) an additivity index, which compares the bimodal response to the sum of the unimodal responses (Stein and Stanford 2008).

In classic studies of visual–auditory integration in the superior colliculus (Stein and Meredith 1993), bimodal responses were often found to be superadditive (larger than the sum of the unimodal responses) and this was taken as evidence for a nonlinear cue combination rule such as multiplication (Meredith and Stein 1983, 1986). In contrast, a variety of studies of multisensory integration in cortical areas have reported subadditive interactions (Avillac et al. 2007; Morgan et al. 2008; Sugihara et al. 2006). Some of this variation is likely accounted for by variations in the efficacy of unimodal stimuli, as recent studies in the superior colliculus have demonstrated that superadditive interactions become additive or even subadditive as the strength of unimodal stimuli increases (Perrault et al. 2003, 2005; Stanford et al. 2005).

Although many studies have measured additivity and/or enhancement of multisensory responses, there has been a surprising lack of studies that have directly attempted to measure the mathematical rule by which multisensory neurons combine their unimodal inputs (hereafter the “combination rule”). Measuring additivity (or enhancement) for a limited set of stimuli is not sufficient to characterize the combination rule. To illustrate this point, consider a hypothetical neuron whose bimodal response is the product (multiplication) of its unimodal inputs. The response of this neuron could appear to be subadditive (e.g., 2 × 1 = 2), additive (2 × 2 = 4), or superadditive (2 × 3 = 6) depending on the magnitudes of the two inputs to the neuron. Thus, to estimate the combination rule, it is essential to examine responses to a wide range of stimulus variations in both unimodal domains.

Recently, we have performed an experiment to measure the combination rule by which neurons in area MSTd integrate their visual and vestibular inputs related to heading (Morgan et al. 2008). We asked whether bimodal responses in MSTd are well fit by a weighted linear summation of unimodal responses, or whether a nonlinear (i.e., multiplicative) combination rule is required. We also asked whether the combination rule changes with the relative reliability of the visual and vestibular cues. To address these questions, we presented eight evenly spaced directions of motion (45° apart) in the horizontal plane (Figure 31.4, inset). Unimodal tuning curves (Figure 31.4a–c, margins) were measured by presenting these eight headings in both the vestibular and visual stimulus conditions. In addition, we measured a full bimodal interaction profile by presenting all 64 possible combinations of these 8 vestibular and 8 visual headings, including 8 congruent and 56 incongruent (cue-conflict) conditions. Figure 31.4a–c shows data from an exemplar “congruent” cell in area MSTd. The unimodal tuning curves (margins) show that this neuron responded best to approximately rightward motion (0°) in both the visual and vestibular conditions. When optic flow at 100% coherence was combined with vestibular stimulation, the bimodal response profile of this neuron (grayscale map in Figure 31.4a) was dominated by the visual input, as indicated by the horizontal band of high firing rates. When the optic flow stimulus was weakened by reducing the motion coherence to 50% (Figure 31.4b), the bimodal response profile showed a more balanced, symmetric peak, indicating that the bimodal response now reflects roughly equal contributions of visual and vestibular inputs. When the motion coherence was further reduced to 25% (Figure 31.4c), the unimodal visual tuning curve showed considerably reduced amplitude and the bimodal response profile became dominated by the vestibular input, as evidenced by the vertical band of high firing rates. Thus, as the relative strengths of visual and vestibular cues to heading vary, bimodal responses of MSTd neurons range from visually dominant to vestibularly dominant.

FIGURE 31.4. Effects of cue strength (motion coherence) on weighted summation of visual and vestibular inputs by MSTd neurons. (a–c) Comparison of unimodal and bimodal tuning for a congruent MSTd cell, tested at three motion coherences; grayscale maps show bimodal response profiles (see text for details).

To characterize the combination rule used by MSTd neurons in these experiments, we attempted to predict the bimodal response profile as a function of the unimodal tuning curves. We found that bimodal responses were well fit by a weighted linear summation of unimodal responses (Morgan et al. 2008). On average, this linear model accounted for ~90% of the variance in bimodal responses, and adding various nonlinear components to the model (such as a product term) accounted for only 1–2% additional variance. Thus, weighted linear summation provides a good model for the combination rule used in MSTd, and the weights are typically less than 1 (Figure 31.4d, e), indicating that subadditive interactions are commonplace.
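As an illustration of what such a model fit involves, the sketch below fits a weighted linear summation model to a simulated bimodal response matrix (hypothetical data and variable names; a sketch of the general approach, not the actual fitting procedure of Morgan et al. 2008).

```python
import numpy as np

def fit_linear_combination(r_bimodal, r_ves, r_vis):
    """Fit R(theta_ves, theta_vis) ~ w_ves * f_ves(theta_ves) + w_vis * f_vis(theta_vis) + C.
    r_bimodal: (n_ves, n_vis) matrix of bimodal responses (rows: vestibular heading);
    r_ves, r_vis: unimodal tuning curves of length n_ves and n_vis."""
    n_ves, n_vis = r_bimodal.shape
    X = np.column_stack([
        np.repeat(r_ves, n_vis),      # vestibular heading varies across rows
        np.tile(r_vis, n_ves),        # visual heading varies across columns
        np.ones(n_ves * n_vis),       # constant offset
    ])
    y = r_bimodal.ravel()
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - np.sum((y - X @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
    return {"w_ves": coef[0], "w_vis": coef[1], "offset": coef[2], "R2": r2}

# Hypothetical congruent cell: 8 headings per modality, 45 deg apart, both prefer 0 deg.
theta = np.deg2rad(np.arange(0, 360, 45))
f_ves = 20 + 15 * np.cos(theta)
f_vis = 25 + 30 * np.cos(theta)
true_bimodal = 0.7 * f_ves[:, None] + 0.5 * f_vis[None, :] + 5
noisy = true_bimodal + np.random.default_rng(0).normal(0, 2, true_bimodal.shape)
print(fit_linear_combination(noisy, f_ves, f_vis))   # recovered weights near 0.7 and 0.5
```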

How does the weighted linear summation model of MSTd integration depend on the reliability of the cues to heading? As the visual cue varies in reliability due to changes in motion coherence, the bimodal response profile clearly changes shape (Figure 31.4a–c). There are two basic possible explanations for this change in shape. One possibility is that the bimodal response profile changes simply because lower coherences elicit visual responses that are more weakly modulated as a function of heading. In this case, the weights with which each neuron combines its vestibular and visual inputs remain constant and the decreased visual influence in the bimodal response profile is simply due to weaker visual inputs at lower coherences. In this scenario, each neuron has a combination rule that is independent of cue reliability. A second possibility is that the weights given to the vestibular and visual inputs could change with the relative reliabilities of the two cues. This outcome would indicate that the neuronal combination rule is not fixed, but changes with cue reliability. This is a fundamental issue of considerable importance in multisensory integration.

To address this issue, we obtained the best fit of the weighted linear summation model separately for each motion coherence. At all coherences, the linear model provided a good fit to the bimodal responses. The key question then becomes whether the visual and vestibular weights attributed to each neuron remain constant as a function of coherence or whether they change systematically. Figure 31.4d, e shows the distributions of weights obtained at 100% (black bars) and 50% (gray bars) coherence. The average visual weight is significantly higher at 100% coherence than 50% coherence, whereas the average vestibular weight shows the opposite effect. For all neurons that were tested at multiple coherences, Figure 31.4f, g shows how the vestibular and visual weights, respectively, change with coherence for each neuron. There is a clear and significant trend for vestibular weights to decline with coherence, whereas visual weights increase (Morgan et al. 2008). A model in which the weights are fixed across coherences does not fit the data as well as a model in which the weights vary with coherence, for the majority of neurons (Morgan et al. 2008). The improvement in model fit with variable weights (although significant) is rather modest for most neurons, however, and it remains to be determined whether these weight changes have large or small effects on population codes for heading.

The findings of Morgan et al. (2008) could have important implications for understanding the neural circuitry that underlies multisensory integration. Whereas the neuronal combination rule is well described as weighted linear summation for any particular values of stimulus strength/energy, the weights in this linear combination rule are not constant when stimulus strength varies. If MSTd neurons truly perform a simple linear summation of their visual and vestibular inputs, then this finding would suggest that the synaptic weights of these inputs change as a function of stimulus strength. Although this is possible, it is not clear how synaptic weights would be dynamically modified from moment to moment when the stimulus strength is not known in advance. Yet, it is well established that human cue integration behavior involves a dynamic, trial-by-trial reweighting of cues. A recent neural theory of cue integration shows that neurons that simply sum their multisensory inputs can account for dynamic cue reweighting at the perceptual level, if their spiking statistics fall into a Poisson-like family (Ma et al. 2006). In this theory, neurons need not change their combination rule with stimulus strength, yet such a change is precisely what the results of Morgan et al. (2008) demonstrate.

One possible resolution to this conundrum is that multisensory neurons linearly sum their inputs with fixed weights, at the level of membrane potential, but that some network-level nonlinearity makes the weights appear to change with stimulus strength. A good candidate mechanism that may account for the findings of Morgan et al. (2008) is divisive normalization (Carandini et al. 1997; Heeger 1992). In a divisive normalization circuit, each cell performs a linear weighted summation of its inputs at the level of membrane potential, but the output of each neuron is divided by the summed activity of all neurons in the circuit (Heeger 1992). This model has been highly successful in accounting for how the responses of neurons in the primary visual cortex (V1) change with stimulus strength (i.e., contrast; Carandini et al. 1997) and how neurons in visual area MT combine multiple motion signals (Rust et al. 2006), and has also recently been proposed as an explanation for how selective attention modifies neural activity (Lee and Maunsell 2009; Reynolds and Heeger 2009). Recent modeling results (not shown) indicate that divisive normalization can account for the apparent changes in weights with coherence (Figure 31.4f, g), as well as a variety of other classic findings in multisensory integration (Ohshiro et al. 2011). Evaluating the normalization model of multisensory integration is a topic of current research in our laboratories.
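The sketch below illustrates how divisive normalization can make fixed input weights look like reliability-dependent weights (a toy model with hypothetical parameters, loosely inspired by, but not reproducing, the Ohshiro et al. 2011 model). Each model neuron sums its visual and vestibular drives with fixed weights and is then divided by the population-average drive; because the unimodal and bimodal conditions have different normalization pools, refitting the purely linear model at each coherence recovers a visual weight that falls, and a vestibular weight that rises, as coherence is reduced.

```python
import numpy as np

# Hypothetical model population: 100 neurons with cosine heading tuning, probed
# with 8 headings per modality as in the experiments described above.
headings = np.deg2rad(np.arange(0, 360, 45))
prefs = np.deg2rad(np.arange(0, 360, 3.6))                         # 100 preferred headings
tuning = lambda th: 10 + 8 * np.cos(th[None, :] - prefs[:, None])  # (neurons, headings)

def norm(drive, sigma=3.0):
    """Divisive normalization: each neuron's drive divided by the population-average
    drive for that stimulus condition (plus a semisaturation constant)."""
    return drive / (sigma + drive.mean(axis=0, keepdims=True))

def fitted_weights(coh, cell=0):
    """Fit the linear summation model to one cell's normalized responses; the input
    weights are fixed, but the fitted weights depend on coherence."""
    f_ves = tuning(headings)                 # vestibular drive
    f_vis = coh * tuning(headings)           # visual drive scales with motion coherence
    r_ves = norm(f_ves)[cell]                # measured unimodal tuning curves
    r_vis = norm(f_vis)[cell]
    drive_bi = f_ves[:, :, None] + f_vis[:, None, :]          # all 8 x 8 combinations
    r_bi = norm(drive_bi.reshape(len(prefs), -1))[cell]
    X = np.column_stack([np.repeat(r_ves, 8), np.tile(r_vis, 8), np.ones(64)])
    w_ves, w_vis, _ = np.linalg.lstsq(X, r_bi, rcond=None)[0]
    return round(w_ves, 2), round(w_vis, 2)

for coh in (1.0, 0.5, 0.25):
    print(coh, fitted_weights(coh))   # vestibular weight rises, visual weight falls
```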

31.5. LINKING NEURONAL AND PERCEPTUAL CORRELATES OF MULTISENSORY INTEGRATION

Most physiological studies of multisensory integration have been performed in animals that are anesthetized or passively experiencing sensory stimuli. Ultimately, to understand the neural basis of multisensory cue integration, we must relate neural activity to behavioral performance. Because cue integration may only occur when cues have roughly matched perceptual reliabilities (Alais and Burr 2004; Ernst and Banks 2002), it is critical to address the neural mechanisms of sensory integration under conditions in which cue combination is known to take place perceptually. As a first major step in this direction, we have developed a multisensory heading discrimination task for monkeys (Gu et al. 2008; Fetsch et al. 2009). This task enabled us to ask two fundamental questions that had remained unaddressed: (1) Can monkeys integrate visual and vestibular cues near-optimally to improve heading discrimination performance? (2) Can the activity of MSTd neurons account for the behavioral improvement observed?

31.5.1. Behavioral Results

Monkeys were trained to report their perceived heading relative to straight ahead in a two-alternative forced choice task (Figure 31.5a). In each trial of this task, the monkey experienced a forward motion with a small leftward or rightward component, and the animal’s task was to make a saccade to one of two choice targets to indicate its perceived heading. Again, three stimulus conditions (visual, vestibular, and combined) were examined, except that the heading angles during the task were limited to a small range around straight forward. Psychometric functions were plotted as the proportion of rightward choices as a function of heading angle (negative, leftward; positive, rightward) and fit with a cumulative Gaussian function (Wichmann and Hill 2001). The standard deviation (σ) of the fitted function was taken as the psychophysical threshold, corresponding to the heading at which the subject was approximately 84% correct.
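A minimal sketch of this fitting step is shown below (hypothetical choice data and parameter names; Wichmann and Hill 2001 describe a more complete procedure that also includes lapse rates). The mean of the cumulative Gaussian captures any left/right bias and its standard deviation σ is the threshold; because the normal CDF reaches ≈0.84 one σ from its mean, σ corresponds to roughly 84% correct.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(heading_deg, mu, sigma):
    """Cumulative Gaussian: probability of a 'rightward' choice at each heading."""
    return norm.cdf(heading_deg, loc=mu, scale=sigma)

# Hypothetical data: heading angles (deg; negative = leftward) and the proportion
# of rightward choices observed at each heading.
headings = np.array([-6.4, -3.2, -1.6, -0.8, 0.0, 0.8, 1.6, 3.2, 6.4])
p_right = np.array([0.02, 0.10, 0.30, 0.42, 0.55, 0.63, 0.75, 0.90, 0.99])

(mu, sigma), _ = curve_fit(psychometric, headings, p_right, p0=[0.0, 2.0])
print(f"bias = {mu:.2f} deg, threshold (sigma) = {sigma:.2f} deg")
```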

FIGURE 31.5. Heading discrimination task and behavioral performance. (a) After fixating a visual target, the monkey experienced forward motion (real and/or simulated with optic flow) with a small leftward or rightward component, and subsequently reported his perceived heading (see text for details).

Optimal cue-integration models (e.g., Alais and Burr 2004; Ernst and Banks 2002; Knill and Saunders 2003) predict that the threshold in the combined condition (σ_comb) should be lower than either single-cue threshold (σ_ves, σ_vis), as given by the following expression:

$$\sigma_{\mathrm{comb}}^{2} = \frac{\sigma_{\mathrm{ves}}^{2}\,\sigma_{\mathrm{vis}}^{2}}{\sigma_{\mathrm{ves}}^{2} + \sigma_{\mathrm{vis}}^{2}} \tag{31.1}$$

To maximize the predicted improvement in performance, the reliability of the visual and vestibular cues (as measured by thresholds in the single-cue conditions) was matched by adjusting the motion coherence of optic flow in the visual display (for details, see Gu et al. 2008). Psychometric functions for one animal are plotted in Figure 31.5b. The vestibular (filled symbols, dashed curve) and visual (open symbols, solid curve) functions are nearly overlapping, with thresholds of 3.5° and 3.6°, respectively. In the combined condition (gray symbols and curve), the monkey’s heading threshold was substantially smaller (2.3°), as evidenced by the steeper slope of the psychometric function.
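As a quick numerical check (our arithmetic, using the single-cue thresholds just quoted), Equation 31.1 predicts

$$\sigma_{\mathrm{comb}} = \sqrt{\frac{(3.5)^{2}(3.6)^{2}}{(3.5)^{2} + (3.6)^{2}}} \approx 2.5^{\circ},$$

comparable to the measured combined threshold of 2.3°.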

Figure 31.5c, d summarizes the psychophysical data from two monkeys. For both animals, psychophysical thresholds in the combined condition were significantly lower than thresholds in the visual and vestibular conditions, and were quite similar to the optimal predictions generated from Equation 31.1 (Gu et al. 2008). Thus, monkeys integrate visual and vestibular cues near-optimally to improve their sensitivity in the heading discrimination task. Similar results were also found for human subjects (Fetsch et al. 2009).

31.5.2. Neurophysiological Results

Having established robust cue integration behavior in macaques, we recorded from single neurons in area MSTd while monkeys performed the heading discrimination task (Gu et al. 2008). Figure 31.6a, b shows tuning curves from two example neurons tested with heading directions evenly spaced in the horizontal plane. The neuron in Figure 31.6a preferred leftward (negative) headings for both visual and vestibular stimuli, and was classified as a congruent cell. In contrast, the neuron in Figure 31.6b preferred leftward headings under the visual condition (solid line) and rightward headings under the vestibular condition (dashed line), and was classified as an opposite cell.

FIGURE 31.6. Heading tuning and heading sensitivity in area MSTd. (a–b) Heading tuning curves of two example neurons with (a) congruent and (b) opposite visual–vestibular heading preferences. (c–d) Responses of the same neurons to the narrow range of headings used in the discrimination task (see text for details).

Figure 31.6c and d shows the tuning of these example neurons over the much narrower range of headings sampled during the discrimination task. For the congruent cell (Figure 31.6c), heading tuning became steeper in the combined condition, whereas for the opposite cell (Figure 31.6d) it became flatter. To allow a more direct comparison between neuronal and behavioral sensitivities, we used signal detection theory [receiver operating characteristic (ROC) analysis; Bradley et al. 1987; Green and Swets 1966; Britten et al. 1992] to quantify the ability of an ideal observer to discriminate heading based on the activity of a single neuron (Figure 31.6e and f, symbols). As with the psychometric data, we fitted these neurometric data with cumulative Gaussian functions (Figure 31.6e and f, smooth curves) and defined the neuronal threshold as the standard deviation of the Gaussian. For the congruent neuron in Figure 31.6e, the neuronal threshold was smallest in the combined condition (gray symbols and lines), indicating that the neuron could discriminate smaller variations in heading when both cues were provided. In contrast, for the opposite neuron in Figure 31.6f, the reverse was true: the neuron became less sensitive in the presence of both cues (gray symbols and lines).
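The ideal-observer computation can be sketched as follows (hypothetical spike counts and function names; the published analyses follow the approach of Britten et al. 1992). The ROC area is the probability that, given one trial drawn from each of two response distributions (e.g., a rightward heading and its leftward mirror image), an observer comparing the two firing rates picks the correct one; plotting these values against heading and fitting a cumulative Gaussian yields the neuronal threshold.

```python
import numpy as np

def roc_area(rates_a, rates_b):
    """Probability that a randomly drawn trial from `rates_a` exceeds one from
    `rates_b` (ties count as 0.5); equivalent to the area under the ROC curve."""
    diffs = np.subtract.outer(rates_a, rates_b)
    return (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / diffs.size

# Hypothetical spike counts from one neuron for +2 deg vs. -2 deg headings.
rng = np.random.default_rng(2)
plus2 = rng.poisson(22, 50)      # 50 trials at +2 deg
minus2 = rng.poisson(18, 50)     # 50 trials at -2 deg
print(roc_area(plus2, minus2))   # roughly 0.7-0.8: above-chance single-neuron discrimination
```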

The effect of visual–vestibular congruency on neuronal sensitivity in the combined condition was robust across the population of recorded MSTd neurons. To summarize this effect, we defined a congruency index (CI) that ranged from +1 (when visual and vestibular tuning functions have a consistent slope, e.g., Figure 31.6c) to –1 (when they have opposite slopes; Figure 31.6d) (for details, see Gu et al. 2008). We then computed, for each neuron, the ratio of the neuronal threshold in the combined condition to the expected threshold if neurons combine cues optimally according to Equation 31.1. A significant correlation was seen between the combined threshold/predicted threshold ratio and CI (Figure 31.7a), such that neurons with large positive CIs (congruent cells, black circles) had thresholds close to the optimal prediction (ratios near unity). Thus, neuronal thresholds for congruent MSTd cells followed a pattern similar to the monkeys’ behavior. In contrast, combined thresholds for opposite cells were generally much higher than predicted from optimal cue integration (Figure 31.7a, open circles), indicating that these neurons became less sensitive during cue combination.

FIGURE 31.7. Neuronal thresholds and choice probabilities as a function of visual–vestibular congruency in the combined condition. (a) The ordinate of the scatter plot is the ratio of the threshold measured in the combined condition to the prediction from optimal cue integration; the abscissa is the congruency index. (b) Choice probability as a function of congruency index (see text for details).

31.5.3. Correlations with Behavioral Choice

If monkeys rely on area MSTd for heading discrimination, the results of Figure 31.7a suggest that they selectively monitor the activity of congruent cells and not opposite cells. To test this hypothesis, we used the data from the recording experiments (Gu et al. 2007, 2008) to compute “choice probabilities” (CPs) (Britten et al. 1996). CPs are computed by ROC analysis similar to neuronal thresholds, except that the ideal observer is asked to predict the monkey’s choice (rather than the stimulus) from the firing rate of the neuron. This analysis is performed after the effect of heading on response has been removed, such that it isolates the effect of choice on firing rates. Thus, CPs quantify the relationship between trial-to-trial fluctuations in neural firing rates and the monkeys’ perceptual decisions. A CP significantly greater than 0.5 indicates that the monkey tended to choose the neuron’s preferred sign of heading (leftward or rightward) when the neuron fires more strongly. Such a result is thought to reflect a functional link between the neuron and perception (Britten et al. 1996; Krug 2004; Parker and Newsome 1998). Notably, although MSTd is classically considered visual cortex, CPs significantly larger than 0.5 (mean = 0.55) were seen in the vestibular condition (Gu et al. 2007), indicating that MSTd activity is correlated with perceptual decisions about heading based on nonvisual information.
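A simplified sketch of the choice-probability computation is given below (hypothetical data and function names; the published analyses follow the balanced z-scoring procedure of Britten et al. 1996, and the ROC helper is redefined here so the snippet is self-contained). Firing rates are z-scored within each heading to remove the stimulus effect, pooled across headings, and the ROC area is then computed between trials grouped by the monkey's choice.

```python
import numpy as np

def roc_area(a, b):
    """P(random sample from a > random sample from b), ties counted as 0.5."""
    d = np.subtract.outer(a, b)
    return (np.sum(d > 0) + 0.5 * np.sum(d == 0)) / d.size

def choice_probability(rates, headings, choices):
    """Z-score rates within each heading (removing the stimulus effect), pool trials,
    then compute ROC area between choice groups. choices: +1 when the monkey chose
    the neuron's preferred heading sign, -1 otherwise."""
    z = np.empty_like(rates, dtype=float)
    for h in np.unique(headings):
        sel = headings == h
        sd = rates[sel].std()
        z[sel] = (rates[sel] - rates[sel].mean()) / (sd if sd > 0 else 1.0)
    return roc_area(z[choices == 1], z[choices == -1])

# Hypothetical session in which firing is slightly elevated on preferred-sign choices.
rng = np.random.default_rng(3)
headings = np.repeat([-2.0, 0.0, 2.0], 60)
choices = rng.choice([1, -1], size=180)
rates = rng.poisson(20, 180) + 2 * (choices == 1)
print(choice_probability(rates, headings, choices))   # > 0.5, e.g., ~0.6
```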

It is of particular interest to examine the relationship between CP and CI in the combined condition, where the monkey makes use of both visual and vestibular cues. Given that opposite cells become insensitive during cue combination and congruent cells increase sensitivity, we might expect CP to depend on congruency in the combined condition. Indeed, Figure 31.7b shows that there is a robust correlation between CP and CI (Gu et al. 2008). Congruent cells (black symbols) generally have CPs greater than 0.5, often much greater, indicating that they are robustly correlated with the animal’s perceptual decisions during cue integration. In contrast, opposite cells (unfilled symbols) tend to have CP values near 0.5, and the mean CP for opposite cells does not differ significantly from 0.5 (t-test, p = .08). This finding is consistent with the idea that the animals selectively monitor congruent cells to achieve near-optimal cue integration.

These findings suggest that opposite cells are not useful for visual–vestibular cue integration during heading discrimination. What, then, is the functional role of opposite cells? We do not yet know the answer to this question, but we hypothesize that opposite cells, in combination with congruent cells, are important for dissociating object motion from self-motion. In general, the complex pattern of image motion on the retina has two sources: (1) self-motion combined with the 3-D layout of the scene and (2) objects moving in the environment. It is important for estimates of heading not to be biased by the presence of moving objects, and vice versa. Note that opposite cells will not be optimally stimulated when a subject moves through a static environment, but may fire more robustly when retinal image motion is inconsistent with self-motion. Thus, the relative activity of congruent and opposite cells may help identify (and perhaps discount) retinal image motion that is not produced by self-motion. Indeed, ongoing modeling work suggests that decoding a mixed population of congruent and opposite cells allows heading to be estimated with much less bias from moving objects.

In summary, by simultaneously monitoring neural activity and behavior, it has been possible to study neural mechanisms of multisensory processing under conditions in which cue integration is known to take place perceptually. In addition to demonstrating near-optimal cue integration by monkeys, a population of neurons has been identified in area MSTd that could account for the improvement in psychophysical performance under cue combination. These findings implicate area MSTd in sensory integration for heading perception and establish a model system for studying the detailed mechanisms by which neurons combine different sensory signals.

31.6. CONCLUSION

These studies indicate that area MSTd is one important brain area where visual and vestibular signals might be integrated to achieve robust perception of self-motion. It is likely that other areas also integrate visual and vestibular signals in meaningful ways, and a substantial challenge for the future will be to understand the specific roles that various brain regions play in multisensory perception of self-motion and object motion. In addition, these studies raise a number of important general questions that may guide future studies on multisensory integration in multiple systems and species. What are the respective functional roles of neurons that have congruent or incongruent tuning for two sensory inputs? Do the spatial reference frames in which multiple sensory signals are expressed constrain the contribution of multisensory neurons to perception? Do multisensory neurons generally perform weighted linear summation of their unimodal inputs, or do the mathematical combination rules used by neurons vary across brain regions and across stimuli/tasks within a brain region? How can we account for the change in the weights that neurons apply to their unimodal inputs as the strength of the sensory inputs varies? Does this require dynamic changes in synaptic weights or can this phenomenology be explained in terms of nonlinearities (such as divisive normalization) that operate at the level of the network? During behavioral discrimination tasks involving cue conflict, do single neurons show correlates of the dynamic cue reweighting effects that have been seen consistently in human perceptual studies of cue integration? How do populations of multimodal sensory neurons represent the reliabilities (i.e., variance) of the sensory cues as they change dynamically in the environment? Most of these questions should be amenable to study within the experimental paradigm of visual–vestibular integration that we have presented thus far. Thus, we expect that this will serve as an important platform for tackling critical questions regarding multisensory integration in the future.

ACKNOWLEDGMENTS

We thank Amanda Turner and Erin White for excellent monkey care and training. This work was supported by NIH EY017866 and EY019087 (to DEA) and by NIH EY016178 and an EJLB Foundation grant (to GCD).

REFERENCES

  1. Alais D, Burr D. The ventriloquist effect results from near-optimal bimodal integration. Curr Biol. 2004;14:257–262. [PubMed: 14761661]
  2. Anderson K.C, Siegel R. M. Optic flow selectivity in the anterior superior temporal polysensory area, STPa, of the behaving monkey. J Neurosci. 1999;19:2681–2692. [PMC free article: PMC6786053] [PubMed: 10087081]
  3. Angelaki D.E. Eyes on target: What neurons must do for the vestibuloocular reflex during linear motion. J Neurophysiol. 2004;92:20–35. [PubMed: 15212435]
  4. Angelaki D.E, Cullen K. E. Vestibular system: The many facets of a multimodal sense. Annu Rev Neurosci. 2008;31:125–150. [PubMed: 18338968]
  5. Angelaki D.E, Mchenry M. Q, Dickman J. D, Newlands S. D, Hess B. J. Computation of inertial motion: Neural strategies to resolve ambiguous otolith information. J Neurosci. 1999;19:316–327. [PMC free article: PMC6782388] [PubMed: 9870961]
  6. Angelaki D.E, Shaikh A. G, Green A. M, Dickman J. D. Neurons compute internal models of the physical laws of motion. Nature. 2004;430:560–564. [PubMed: 15282606]
  7. Avillac M, Ben Hamed S, Duhamel J. R. Multisensory integration in the ventral intraparietal area of the macaque monkey. J Neurosci. 2007;27:1922–1932. [PMC free article: PMC6673547] [PubMed: 17314288]
  8. Avillac M, Deneve S, Olivier E, Pouget A, Duhamel J. R. Reference frames for representing visual and tactile locations in parietal cortex. Nat Neurosci. 2005;8:941–949. [PubMed: 15951810]
  9. Banks M.S, Ehrlich S. M, Backus B. T, Crowell J. A. Estimating heading during real and simulated eye movements. Vision Res. 1996;36:431–443. [PubMed: 8746233]
  10. Benson A.J, Spencer M. B, Stott J. R. Thresholds for the detection of the direction of whole-body, linear movement in the horizontal plane. Aviat Space Environ Med. 1986;57:1088–1096. [PubMed: 3790028]
  11. Berthoz A, Pavard B, Young L. R. Perception of linear horizontal self-motion induced by peripheral vision (linearvection) basic characteristics and visual-vestibular interactions. Exp Brain Res. 1975;23:471–489. [PubMed: 1081949]
  12. Bradley A, Skottun B. C, Ohzawa I, Sclar G, Freeman R. D. Visual orientation and spatial frequency discrimination: A comparison of single neurons and behavior. J Neurophysiol. 1987;57:755–772. [PubMed: 3559700]
  13. Bradley D.C, Maxwell M, Andersen R. A, Banks M. S, Shenoy K. V. Mechanisms of heading perception in primate visual cortex. Science. 1996;273:1544–1547. [PubMed: 8703215]
  14. Brandt T, Dichgans J, Koenig E. Differential effects of central versus peripheral vision on egocentric and exocentric motion perception. Exp Brain Res. 1973;16:476–491. [PubMed: 4695777]
  15. Bremmer F, Duhamel J. R, Ben Hamed S, Graf W. Heading encoding in the macaque ventral intraparietal area (VIP). Eur J Neurosci. 2002a;16:1554–1568. [PubMed: 12405970]
  16. Bremmer F, Klam F, Duhamel J. R, Ben Hamed S, Graf W. Visual–vestibular interactive responses in the macaque ventral intraparietal area (VIP). Eur J Neurosci. 2002b;16:1569–1586. [PubMed: 12405971]
  17. Bremmer F, Kubischik M, Pekel M, Lappe M, Hoffmann K. P. Linear vestibular self-motion signals in monkey medial superior temporal area. Ann N Y Acad Sci. 1999;871:272–281. [PubMed: 10372078]
  18. Britten K.H, Newsome W. T, Shadlen M. N, Celebrini S, Movshon J. A. A relationship between behavioral choice and the visual responses of neurons in macaque MT. Vis Neurosci. 1996;13:87–100. [PubMed: 8730992]
  19. Britten K.H, Shadlen M. N, Newsome W. T, Movshon J. A. The analysis of visual motion: A comparison of neuronal and psychophysical performance. J Neurosci. 1992;12:4745–4765. [PMC free article: PMC6575768] [PubMed: 1464765]
  20. Britten K.H, Van Wezel R. J. Electrical microstimulation of cortical area MST biases heading perception in monkeys. Nat Neurosci. 1998;1:59–63. [PubMed: 10195110]
  21. Britten K.H, Van Wezel R. J. Area MST and heading perception in macaque monkeys. Cereb Cortex. 2002;12:692–701. [PubMed: 12050081]
  22. Bryan A.S, Angelaki D. E. Optokinetic and vestibular responsiveness in the macaque rostral vestibular and fastigial nuclei. J Neurophysiol. 2008;101:714–720. [PMC free article: PMC2657057] [PubMed: 19073813]
  23. Buttner U, Buettner U. W. Parietal cortex (2v) neuronal activity in the alert monkey during natural vestibular and optokinetic stimulation. Brain Res. 1978;153:392–397. [PubMed: 99209]
  24. Carandini M, Heeger D. J, Movshon J. A. Linearity and normalization in simple cells of the macaque primary visual cortex. J Neurosci. 1997;17:8621–8644. [PMC free article: PMC6573724] [PubMed: 9334433]
  25. Chen A, Deangelis G. C, Angelaki D. E. Macaque parieto-insular vestibular cortex: Responses to self-motion and optic flow. J Neurosci. 2010;30:3022–3042. [PMC free article: PMC3108058] [PubMed: 20181599]
  26. Chen A, Henry E, Deangelis G. C, Angelaki D. E. Comparison of responses to three-dimensional rotation and translation in the ventral intraparietal (VIP) and medial superior temporal (MST) areas of rhesus monkey. Program No. 715.19. 2007 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience, 2007. Online.
  27. Chowdhury S.A, Takahashi K, Deangelis G. C, Angelaki D. E. Does the middle temporal area carry vestibular signals related to self-motion? J Neurosci. 2009;29:12020–12030. [PMC free article: PMC2945709] [PubMed: 19776288]
  28. Crowell J.A, Banks M. S, Shenoy K. V, Andersen R. A. Visual self-motion perception during head turns. Nat Neurosci. 1998;1:732–737. [PubMed: 10196591]
  29. Daunton N, Thomsen D. Visual modulation of otolith-dependent units in cat vestibular nuclei. Exp Brain Res. 1979;37:173–176. [PubMed: 488213]
  30. Deneve S, Latham P. E, Pouget A. Efficient computation and cue integration with noisy population codes. Nat Neurosci. 2001;4:826–831. [PubMed: 11477429]
  31. Dichgans J, Brandt T. The Neurosciences. Cambridge, MA: MIT Press; 1974. The psychophysics of visually-induced perception of self motion and tilt; pp. 123–129.
  32. Dichgans J, Brandt T. Visual–vestibular interaction: Effects on self-motion perception and postural control. In: Held R, Leibowitz H. W, Teuber H. L, editors. Handbook of sensory physiology. Berlin: Springer-Verlag; 1978.
  33. Duffy C.J. MST neurons respond to optic flow and translational movement. J Neurophysiol. 1998;80:1816–1827. [PubMed: 9772241]
  34. Duffy C.J, Wurtz R. H. Sensitivity of MST neurons to optic flow stimuli: I. A continuum of response selectivity to large-field stimuli. J Neurophysiol. 1991;65:1329–1345. [PubMed: 1875243]
  35. Duffy C.J, Wurtz R. H. Response of monkey MST neurons to optic flow stimuli with shifted centers of motion. J Neurosci. 1995;15:5192–5208. [PMC free article: PMC6577859] [PubMed: 7623145]
  36. Ernst M.O, Banks M. S. Humans integrate visual and haptic information in a statistically optimal fashion. Nature. 2002;415:429–433. [PubMed: 11807554]
  37. Fernandez C, Goldberg J. M. Physiology of peripheral neurons innervating otolith organs of the squirrel monkey: I. Response to static tilts and to long-duration centrifugal force. J Neurophysiol. 1976a;39:970–984. [PubMed: 824412]
  38. Fernandez C, Goldberg J. M. Physiology of peripheral neurons innervating otolith organs of the squirrel monkey: II. Directional selectivity and force-response relations. J Neurophysiol. 1976b;39:985–995. [PubMed: 824413]
  39. Fetsch C.R, Turner A. H, Deangelis G. C, Angelaki D. E. Dynamic reweighting of visual and vestibular cues during self-motion perception. J Neurosci. 2009;29:15601–15612. [PMC free article: PMC2824339] [PubMed: 20007484]
  40. Fetsch C.R, Wang S, Gu Y, Deangelis G. C, Angelaki D. E. Spatial reference frames of visual, vestibular, and multimodal heading signals in the dorsal subdivision of the medial superior temporal area. J Neurosci. 2007;27:700–712. [PMC free article: PMC1995026] [PubMed: 17234602]
  41. Fredrickson J.M, Scheid P, Figge U, Kornhuber H. H. Vestibular nerve projection to the cerebral cortex of the rhesus monkey. Exp Brain Res. 1966;2:318–327. [PubMed: 4959658]
  42. Fukushima K. Corticovestibular interactions: Anatomy, electrophysiology, and functional considerations. Exp Brain Res. 1997;117:1–16. [PubMed: 9386000]
  43. Gibson J.J. The perception of the visual world. Boston: Houghton-Mifflin; 1950.
  44. Gibson J.J. The visual perception of objective motion and subjective movement. Psychol Rev. 1954;61:304–314. [PubMed: 13204493]
  45. Green D.M, Swets J. A. Signal detection theory and psychophysics. New York: Wiley; 1966.
  46. Groh J.M. Converting neural signals from place codes to rate codes. Biol Cybern. 2001;85:159–165. [PubMed: 11561817]
  47. Grusser O.J, Pause M, Schreiter U. Localization and responses of neurones in the parieto-insular vestibular cortex of awake monkeys (Macaca fascicularis). J Physiol. 1990a;430:537–557. [PMC free article: PMC1181752] [PubMed: 2086773]
  48. Grusser O.J, Pause M, Schreiter U. Vestibular neurones in the parieto-insular cortex of monkeys (Macaca fascicularis): Visual and neck receptor responses. J Physiol. 1990b;430:559–583. [PMC free article: PMC1181753] [PubMed: 2086774]
  49. Gu Y, Angelaki D. E, Deangelis G. C. Neural correlates of multisensory cue integration in macaque MSTd. Nat Neurosci. 2008;11:1201–1210. [PMC free article: PMC2713666] [PubMed: 18776893]
  50. Gu Y, Deangelis G. C, Angelaki D. E. A functional link between area MSTd and heading perception based on vestibular signals. Nat Neurosci. 2007;10:1038–1047. [PMC free article: PMC2430983] [PubMed: 17618278]
  51. Gu Y, Fetsch C. R, Adeyemo B, Deangelis G. C, Angelaki D. E. Decoding of MSTd population activity accounts for variations in the precision of heading perception. Neuron. 2010;66:596–609. [PMC free article: PMC2889617] [PubMed: 20510863]
  52. Gu Y, Watkins P. V, Angelaki D. E, Deangelis G. C. Visual and nonvisual contributions to three-dimensional heading selectivity in the medial superior temporal area. J Neurosci. 2006;26:73–85. [PMC free article: PMC1538979] [PubMed: 16399674]
  53. Guedry F.E. Psychophysics of vestibular sensation. In: Kornhuber H. H, editor. Handbook of sensory physiology. The vestibular system. New York: Springer-Verlag; 1974.
  54. Guedry F. E Jr. Visual counteraction of nauseogenic and disorienting effects of some whole-body motions: A proposed mechanism. Aviat Space Environ Med. 1978;49:36–41. [PubMed: 304720]
  55. Guldin W.O, Akbarian S, Grusser O. J. Cortico-cortical connections and cytoarchitectonics of the primate vestibular cortex: A study in squirrel monkeys (Saimiri sciureus). J Comp Neurol. 1992;326:375–401. [PubMed: 1281845]
  56. Guldin W.O, Grusser O. J. Is there a vestibular cortex? Trends Neurosci. 1998;21:254–259. [PubMed: 9641538]
  57. Heeger D.J. Normalization of cell responses in cat striate cortex. Vis Neurosci. 1992;9:181–197. [PubMed: 1504027]
  58. Henn V, Young L. R, Finley C. Vestibular nucleus units in alert monkeys are also influenced by moving visual fields. Brain Res. 1974;71:144–149. [PubMed: 4206917]
  59. Hlavacka F, Mergner T, Bolha B. Human self-motion perception during translatory vestibular and proprioceptive stimulation. Neurosci Lett. 1996;210:83–86. [PubMed: 8783278]
  60. Hlavacka F, Mergner T, Schweigart G. Interaction of vestibular and proprioceptive inputs for human self-motion perception. Neurosci Lett. 1992;138:161–164. [PubMed: 1407657]
  61. Knill D.C, Saunders J. A. Do humans optimally integrate stereo and texture information for judgments of surface slant? Vision Res. 2003;43:2539–2558. [PubMed: 13129541]
  62. Krug K. A common neuronal code for perceptual processes in visual cortex? Comparing choice and attentional correlates in V5/MT. Philos Trans R Soc Lond B Biol Sci. 2004;359:929–941. [PMC free article: PMC1693376] [PubMed: 15306408]
  63. Lee J, Maunsell J. H. A normalization model of attentional modulation of single unit responses. PLoS ONE. 2009;4:e4651. [PMC free article: PMC2645695] [PubMed: 19247494]
  64. Logan D.J, Duffy C. J. Cortical area MSTd combines visual cues to represent 3-D self-movement. Cereb Cortex. 2006;16:1494–1507. [PubMed: 16339087]
  65. Ma W.J, Beck J. M, Latham P. E, Pouget A. Bayesian inference with probabilistic population codes. Nat Neurosci. 2006;9:1432–1438. [PubMed: 17057707]
  66. Markert G, Buttner U, Straube A, Boyle R. Neuronal activity in the flocculus of the alert monkey during sinusoidal optokinetic stimulation. Exp Brain Res. 1988;70:134–144. [PubMed: 3261254]
  67. Matsumiya K, Ando H. World-centered perception of 3D object motion during visually guided self-motion. J Vis. 2009;9:151–153. [PubMed: 19271885]
  68. Meredith M.A, Stein B. E. Interactions among converging sensory inputs in the superior colliculus. Science. 1983;221:389–391. [PubMed: 6867718]
  69. Meredith M.A, Stein B. E. Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. J Neurophysiol. 1986;56:640–662. [PubMed: 3537225]
  70. Merfeld D.M, Zupan L, Peterka R. J. Humans use internal models to estimate gravity and linear acceleration. Nature. 1999;398:615–618. [PubMed: 10217143]
  71. Morgan M.L, Deangelis G. C, Angelaki D. E. Multisensory integration in macaque visual cortex depends on cue reliability. Neuron. 2008;59:662–673. [PMC free article: PMC2601653] [PubMed: 18760701]
  72. Odkvist L.M, Schwarz D. W, Fredrickson J. M, Hassler R. Projection of the vestibular nerve to the area 3a arm field in the squirrel monkey (Saimiri sciureus). Exp Brain Res. 1974;21:97–105. [PubMed: 4213802]
  73. Ohshiro T, Angelaki D. E, Deangelis G. C. A normalization model of multisensory integration. Nat Neurosci. 2011. In press. [PMC free article: PMC3102778] [PubMed: 21552274]
  74. Page W.K, Duffy C. J. MST neuronal responses to heading direction during pursuit eye movements. J Neurophysiol. 1999;81:596–610. [PubMed: 10036263]
  75. Parker A.J, Newsome W. T. Sense and the single neuron: Probing the physiology of perception. Annu Rev Neurosci. 1998;21:227–277. [PubMed: 9530497]
  76. Perrault T. J Jr., Vaughan J. W, Stein B. E, Wallace M. T. Neuron-specific response characteristics predict the magnitude of multisensory integration. J Neurophysiol. 2003;90:4022–4026. [PubMed: 12930816]
  77. Perrault T. J Jr., Vaughan J. W, Stein B. E, Wallace M. T. Superior colliculus neurons use distinct operational modes in the integration of multisensory stimuli. J Neurophysiol. 2005;93:2575–2586. [PubMed: 15634709]
  78. Previc F.H, Varner D. C, Gillingham K. K. Visual scene effects on the somatogravic illusion. Aviat Space Environ Med. 1992;63:1060–1064. [PubMed: 1456916]
  79. Reynolds J.H, Heeger D. J. The normalization model of attention. Neuron. 2009;61:168–185. [PMC free article: PMC2752446] [PubMed: 19186161]
  80. Robinson D.A. Linear addition of optokinetic and vestibular signals in the vestibular nucleus. Exp Brain Res. 1977;30:447–450. [PubMed: 413730]
  81. Royden C.S, Banks M. S, Crowell J. A. The perception of heading during eye movements. Nature. 1992;360:583–585. [PubMed: 1461280]
  82. Royden C.S, Crowell J. A, Banks M. S. Estimating heading during eye movements. Vision Res. 1994;34:3197–3214. [PubMed: 7975351]
  83. Royden C.S, Hildreth E. C. Human heading judgments in the presence of moving objects. Percept Psychophys. 1996;58:836–856. [PubMed: 8768180]
  84. Rushton S.K, Warren P. A. Moving observers, relative retinal motion and the detection of object movement. Curr Biol. 2005;15:R542–R543. [PubMed: 16051158]
  85. Rust N.C, Mante V, Simoncelli E. P, Movshon J. A. How MT cells analyze the motion of visual patterns. Nat Neurosci. 2006;9:1421–1431. [PubMed: 17041595]
  86. Schaafsma S.J, Duysens J. Neurons in the ventral intraparietal area of awake macaque monkey closely resemble neurons in the dorsal part of the medial superior temporal area in their responses to optic flow patterns. J Neurophysiol. 1996;76:4056–4068. [PubMed: 8985900]
  87. Schlack A, Hoffmann K. P, Bremmer F. Interaction of linear vestibular and visual stimulation in the macaque ventral intraparietal area (VIP). Eur J Neurosci. 2002;16:1877–1886. [PubMed: 12453051]
  88. Schwarz D.W, Fredrickson J. M. Rhesus monkey vestibular cortex: A bimodal primary projection field. Science. 1971a;172:280–281. [PubMed: 4994138]
  89. Schwarz D.W, Fredrickson J. M. Tactile direction sensitivity of area 2 oral neurons in the rhesus monkey cortex. Brain Res. 1971b;27:397–401. [PubMed: 4994681]
  90. Shenoy K.V, Bradley D. C, Andersen R. A. Influence of gaze rotation on the visual response of primate MSTd neurons. J Neurophysiol. 1999;81:2764–2786. [PubMed: 10368396]
  91. Siegel R.M, Read H. L. Analysis of optic flow in the monkey parietal area 7a. Cereb Cortex. 1997;7:327–346. [PubMed: 9177764]
  92. Stanford T.R, Quessy S, Stein B. E. Evaluating the operations underlying multisensory integration in the cat superior colliculus. J Neurosci. 2005;25:6499–6508. [PMC free article: PMC1237124] [PubMed: 16014711]
  93. Stein B.E, Meredith M. A. The merging of the senses. Cambridge, MA: MIT Press; 1993.
  94. Stein B.E, Stanford T. R. Multisensory integration: Current issues from the perspective of the single neuron. Nat Rev Neurosci. 2008;9:255–266. [PubMed: 18354398]
  95. Sugihara T, Diltz M. D, Averbeck B. B, Romanski L. M. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex. J Neurosci. 2006;26:11138–11147. [PMC free article: PMC2767253] [PubMed: 17065454]
  96. Takahashi K, Gu Y, May P. J, Newlands S. D, Deangelis G. C, Angelaki D. E. Multimodal coding of three-dimensional rotation and translation in area MSTd: Comparison of visual and vestibular selectivity. J Neurosci. 2007;27:9742–9756. [PMC free article: PMC2587312] [PubMed: 17804635]
  97. Tanaka K, Hikosaka K, Saito H, Yukie M, Fukada Y, Iwai E. Analysis of local and wide-field movements in the superior temporal visual areas of the macaque monkey. J Neurosci. 1986;6:134–144. [PMC free article: PMC6568626] [PubMed: 3944614]
  98. Tanaka K, Saito H. Analysis of motion of the visual field by direction, expansion/contraction, and rotation cells clustered in the dorsal part of the medial superior temporal area of the macaque monkey. J Neurophysiol. 1989;62:626–641. [PubMed: 2769351]
  99. Telford L, Howard I. P, Ohmi M. Heading judgments during active and passive self-motion. Exp Brain Res. 1995;104:502–510. [PubMed: 7589301]
  100. Waespe W, Buttner U, Henn V. Visual–vestibular interaction in the flocculus of the alert monkey: I. Input activity. Exp Brain Res. 1981;43:337–348. [PubMed: 6266856]
  101. Waespe W, Henn V. Neuronal activity in the vestibular nuclei of the alert monkey during vestibular and optokinetic stimulation. Exp Brain Res. 1977;27:523–538. [PubMed: 404173]
  102. Waespe W, Henn V. Visual–vestibular interaction in the flocculus of the alert monkey: II. Purkinje cell activity. Exp Brain Res. 1981;43:349–360. [PubMed: 6266857]
  103. Warren P.A, Rushton S. K. Perception of object trajectory: Parsing retinal motion into self and object movement components. J Vis. 2007;7(11):2.1–11. [PubMed: 17997657]
  104. Warren P.A, Rushton S. K. Evidence for flow-parsing in radial flow displays. Vision Res. 2008;48:655–663. [PubMed: 18243274]
  105. Warren W.H. Optic flow. In: Chalupa L. M, Werner J. S, editors. The visual neurosciences. Cambridge, MA: MIT Press; 2003.
  106. Warren W.H, Saunders J. A. Perceiving heading in the presence of moving objects. Perception. 1995;24:315–331. [PubMed: 7617432]
  107. Wexler M. Voluntary head movement and allocentric perception of space. Psychol Sci. 2003;14:340–346. [PubMed: 12807407]
  108. Wexler M, Panerai F, Lamouret I, Droulez J. Self-motion and the perception of stationary objects. Nature. 2001;409:85–88. [PubMed: 11343118]
  109. Wexler M, Van Boxtel J. J. Depth perception by the active observer. Trends Cogn Sci. 2005;9:431–438. [PubMed: 16099197]
  110. Wichmann F.A, Hill N. J. The psychometric function: I. Fitting, sampling, and goodness of fit. Percept Psychophys. 2001;63:1293–1313. [PubMed: 11800458]
  111. Wolfe J.W, Cramer R. L. Illusions of pitch induced by centripetal acceleration. Aerosp Med. 1970;41:1136–1139. [PubMed: 5458192]
  112. Zhang T, Britten K. H. Microstimulation of area VIP biases heading perception in monkeys. Program No. 339.9. 2003 Abstract Viewer/Itinerary Planner. New Orleans, LA: Society for Neuroscience; 2003.
  113. Zhang T, Heuer H. W, Britten K. H. Parietal area VIP neuronal responses to heading stimuli are encoded in head-centered coordinates. Neuron. 2004;42:993–1001. [PubMed: 15207243]
Copyright © 2012 by Taylor & Francis Group, LLC.
Bookshelf ID: NBK92839; PMID: 22593867
