
Persaud KC, Marco S, Gutiérrez-Gálvez A, editors. Neuromorphic Olfaction. Boca Raton (FL): CRC Press/Taylor & Francis; 2013.


Chapter 1. Engineering Aspects of Olfaction


ABSTRACT

The biological components of vertebrate and invertebrate chemical sensing systems are now becoming well understood in terms of their structure and function. As a result, it is now possible to consider biomimetic models of olfaction and how they may be translated into functional engineering devices. This chapter outlines some of the developments in sensor array technology that can enable such devices. An array of odor sensors, each possessing slightly different response selectivity and sensitivity toward the sample odors, produces responses that, when combined by suitable mathematical methods, can provide information to discriminate between many sample odors. Arrays of gas and odor sensors, made using different technologies, have become known as electronic noses and consist of three elements: a sensor array that is exposed to the volatiles, conversion of the sensor signals to a readable format, and software analysis of the data to produce characteristic outputs related to the odor encountered. The output from the sensor array may be interpreted via a variety of methods—such as pattern recognition algorithms, principal component analysis, discriminant function analysis, cluster analysis, and artificial neural networks—to discriminate between samples. This chapter also introduces some of the more biologically oriented algorithms that may transform traditional multivariate data analysis, feature extraction, and pattern recognition.

1.1. INTRODUCTION

Animals have evolved an incredible diversity of sensory systems to extract information from the environment. Of these, the chemosensory systems allow them to extract information from their chemical environments, so that behavioral preferences are elicited in response to stimuli that may be aversive or attractive. Animals live in complex environments where an infinite variety of chemical molecules may be encountered. These may be present as single chemicals, or as complex mixtures, where the relative concentrations of individual components differ. The tasks commonly carried out by the olfactory system include detection of odors, estimating their strength, identifying their source, and recognizing a specific odor in the background of another. The olfactory system in mammals is involved in physiological regulation, emotional responses (e.g., anxiety, fear, pleasure), reproductive functions (e.g., sexual and maternal behaviors), and social behaviors (e.g., recognition of members of the same species, family, clan, or outsiders). In insects such as the honeybee, it has been shown that scents modify behaviors associated with mating, foraging, recognition of kin, brood care, swarming, alarm, and defense (Reinhard and Srinivasan 2009).

Figure 1.1 shows a diagram of the olfactory epithelium of a mammal. Olfactory receptor neurons are bipolar, and from the apical side, cilia containing membrane-bound olfactory receptor proteins lie in an aqueous environment (mucus) overlying the epithelium. Odorant molecules need to partition from air into water before they can reach the transduction sites in the epithelium. Soluble odorant binding proteins are secreted into the aqueous mucus layer, and these may have an odorant carrier and preconcentration role. Over the last century, ideas that several classes of olfactory receptors exist, selective to chemical species on the basis of molecular size, shape, and charge, were based on evidence from chemistry (Beets 1978), olfactory psychophysics, and structure-activity relationships of odorants (Boelens 1974), together with the examination of “specific anosmias” in the human population, which all supported the definition of selectivity and specificity of putative olfactory receptors initiated by Amoore (1962a, 1962b, 1967). These ideas were confirmed by developments in olfactory neurobiology and molecular genetics (Buck and Axel 1991; Buck 1997a, 1997b; Chess et al. 1992; Mombaerts et al. 1996a), and they also pointed to individual olfactory receptors being rather broad in their selectivity to molecules within certain classes. The important molecular parameters of an odorant determining the olfactory response would include the adsorption and desorption energies of the molecule from air to a receptor interface, partition coefficients, and electron donor-acceptor interactions, depending on the polarizability of the molecule and its molecular size and shape. The plethora of chemicals that an animal can sense, as well as their combinatorial and temporal variability, has made it difficult to understand how the brain processes the incoming information so that an animal can make sense of its chemical environment. Polak (1973) proposed a multiple profile–multiple receptor site model for vertebrate olfaction, anticipating some of the combinatorial coding mechanisms later discovered. The identification of odorant receptor (OR) genes in rodents (Buck and Axel 1991), in Caenorhabditis elegans (Sengupta et al. 1996), and in Drosophila melanogaster (Clyne et al. 1999; Gao and Chess 1999) has given us a fundamental understanding of olfactory coding, especially at the olfactory receptor neuron (ORN) level. Individual ORs are proteins that traverse the cell membrane of the cilia of the olfactory neuron. There may be hundreds of odorant receptors, but only one (or at most a few) is expressed in each olfactory receptor neuron. These families of proteins may be encoded by as many as 1000 different genes in humans, a large number that accounts for about 2% of the human genome. In humans, however, most are inactive pseudogenes, and only around 350 code for functional receptors. There are many more functional genes in macrosmatic animals like rats. These receptor proteins are members of a well-known receptor family, the seven-transmembrane domain G-protein-coupled receptors (GPCRs) (see Figure 1.2). The hydrophobic (transmembrane) regions show the greatest sequence homology to other members of the G-protein-coupled receptor family.
There are some notable features of these olfactory receptors, like the divergence in sequence in the third, fourth, and fifth transmembrane domains, that suggest how a large number of different odorants may be discriminated (Pilpel and Lancet 1999).

FIGURE 1.1 Olfactory epithelium of vertebrates. The interface between air and the olfactory receptors is a layer of mucus, where soluble odorant binding proteins are found. These may have an odorant carrier function. The olfactory cells of the epithelium are bipolar.

FIGURE 1.2 Olfactory receptor and transduction mechanism. Odorants in the mucus bind directly (or are transported via odorant binding proteins) to receptor molecules located in the membranes of the cilia. The conformational change occurring on binding activates a G-protein-mediated transduction cascade.

As crystallographic information on olfactory receptors is lacking, they have been modeled based on their resemblance to rhodopsin. Gelis et al. (2012) have published models of putative binding sites of some human olfactory receptors. On the inner side of the cell membrane, proteins called G-proteins are associated with the olfactory receptor. These bind the guanine nucleotides guanosine triphosphate (GTP) and guanosine diphosphate (GDP). They are made up of three subunits, are located on the inner surface of the plasma membrane, and are closely associated with the transmembrane receptor protein. When an odorant binds, it is thought that an allosteric change in conformation occurs, in turn causing a conformational change in the Gα subunit of the G-protein, displacing bound GDP and allowing it to bind GTP. This produces an activated subunit that dissociates from the other subunits and activates an effector molecule, triggering a cascade of events that leads to the opening of an ion channel and a change of electrical potential across the cell membrane. As this electrical potential propagates to the basal side of the cell, it triggers voltage-gated ion channels so that a series of electrical spikes results, which are transmitted to the processing centers in the brain via the axon of the olfactory neuron.

Our understanding is that mammalian and insect olfactory systems are combinatorial in nature—instead of activating a single specialized receptor, each chemical stimulus induces a complex pattern of responses across the olfactory receptor array. The investigation of OR expression patterns has made it possible to dissect the major circuits underlying olfaction (Hoare et al. 2011; Imai et al. 2010; Leinwand and Chalasani 2011; Ressler 1994; Ressler et al. 1993; Su et al. 2009; Vassar et al. 1993). The evidence obtained confirmed previous concepts of a common design of mammalian and insect olfactory systems that are discussed by Hildebrand and coworkers (Hildebrand and Shepherd 1997; Hildebrand 2001; Martin et al. 2011). The consequence of the combinatorial design of the olfactory system is that the number of unique odor representations is not limited to the number of different receptor types, but can be estimated as m^n, where n corresponds to the number of receptor types available and m the number of possible response states that each sensor can assume. In practice this is limited by the signal-to-noise performance of the working system (Cleland and Linster 2005).
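As a purely illustrative check of this estimate, the snippet below evaluates m^n for hypothetical values of m and n; the numbers chosen are assumptions for illustration, not figures taken from the works cited above.

```python
# Illustrative arithmetic only: the m^n capacity estimate discussed above,
# evaluated for hypothetical values of m (response states) and n (receptor types).
n_receptor_types = 350     # assumed, e.g., roughly the number of functional human ORs
m_response_states = 4      # assumed number of distinguishable states per receptor

capacity = m_response_states ** n_receptor_types
print(f"distinct representations (upper bound): about 10^{len(str(capacity)) - 1}")
```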

Vertebrate or invertebrate life surviving in complex, changing environments requires sophisticated sensory systems to detect, classify, and interpret patterns of input stimulation. Inherent in such systems are coding mechanisms by which a certain pattern of stimulation may be described. Such codes may be defined as sets of symbols that can be used to represent patterns of organization, together with the sets of rules that govern the selection and use of these symbols. Sensory coding mechanisms in biological systems appear to project some representation of sensory inputs as a pattern at a high level of the nervous system, and the resulting neural activity is then related to previous experience with that pattern or associated patterns. Two fundamental concepts of pattern classification appear to be common in biological systems: template matching, whereby the pattern to be classified is compared with a set of templates, one for each class, with the closest match determining the classification; and feature detection, in which a number of measurements are taken on the input pattern and the resulting data are combined to reach a decision. Feature detection systems may involve either a sequential approach, whereby information from the evaluation of some features is used to decide which features to evaluate next, or a parallel approach, where information about all features is evaluated at the same time with no weight being placed on any particular feature.

The remarkable capabilities of biological chemosensory systems in detection, recognition, and discrimination of complex mixtures of chemicals, together with rapid advances in understanding how these systems operate, have stimulated the imagination and interest of many researchers and commercial organizations in the development of electronic analogs. The dream of emulating biological olfaction using artificial devices was conceptually realized by Persaud and Dodd (1982), who demonstrated that an array of electronic chemical sensors with partial specificity could be used to discriminate between simple and complex odors; i.e., the combinatorial aspects of olfactory receptors could be emulated, achieving remarkable flexibility in the number of types of analytes that can be discriminated. This led to a burgeoning of the “electronic nose” field of research and the formation of many commercial enterprises interested in exploiting a wide range of applications, including environmental, food, medical, security, and others. These researchers and companies have produced instruments that combine gas sensor arrays and pattern analysis techniques for the detection, identification, or quantification of volatile compounds. The multivariate response of an array of chemical gas sensors with broad and partially overlapping selectivities can be processed as a pattern or “fingerprint” to discriminate a wide range of odors or volatile compounds using pattern recognition algorithms. The instruments typically consist of a gas sensor array based on one or more sensing technologies, a sample delivery system, and the appropriate electronics for signal processing, data acquisition, and storage. Processing of data from such systems can be split into four sequential stages: signal preprocessing, dimensionality reduction, prediction, and validation. The numbers of sensors incorporated into the devices are relatively small, and the data handling approaches have been based on traditional chemometric or neural network methods for processing multivariate data. Applications using such chemosensory arrays at present face issues such as sensor drift, poor sensitivity compared to biological systems, and interference from background odors. With further understanding of biological processes, some of these engineering limitations may be reduced by the adaptation of biologically plausible models for signal processing.
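The following sketch shows what these four stages can look like in practice, assuming a scikit-learn implementation operating on synthetic responses from a hypothetical 32-sensor array; the scaler, PCA, and k-NN classifier are illustrative choices, not the configuration of any instrument described in this chapter.

```python
# A minimal sketch of the four processing stages named above (preprocessing,
# dimensionality reduction, prediction, validation) on synthetic sensor-array data.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_sensors, n_odors = 120, 32, 3          # hypothetical 32-sensor array, 3 odor classes
labels = rng.integers(0, n_odors, n_samples)        # odor class of each sample
prototypes = rng.uniform(0.5, 2.0, (n_odors, n_sensors))
X = prototypes[labels] + 0.1 * rng.standard_normal((n_samples, n_sensors))

pipeline = Pipeline([
    ("preprocess", StandardScaler()),      # stage 1: signal preprocessing
    ("reduce", PCA(n_components=5)),       # stage 2: dimensionality reduction
    ("predict", KNeighborsClassifier(3)),  # stage 3: prediction
])
scores = cross_val_score(pipeline, X, labels, cv=5)  # stage 4: validation
print("cross-validated accuracy:", scores.mean())
```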

This chapter gives an introduction to biological chemoreception, going on to the field of artificial olfaction, and discussing some of the signal processing concepts that may be useful in mimicking biological olfactory systems.

1.2. ODORANTS, CHEMICAL STIMULI, AND ODORANT BINDING SITES

Odorants are relatively small molecules, of the order of 10⁻⁸ to 10⁻¹⁰ m in size. Many aspects of a molecule, such as shape and charge distribution, determine its properties and in turn whether it acts as an odorant and what its particular scent is. Odor stimuli typically found in nature tend to be vast, complex mixtures of many different chemical species at various concentrations rather than one type of molecule. Varying the concentration of compounds in the aromatic mixture, changing the functional group chemistry or position, and altering the stereochemistry or conformation of a molecule, among other changes that can be made, all affect the smell of an odorant or odorous mixture (Beets 1978). Multiple molecular properties of the stimulus molecules, including these individual molecular properties or determinants, population properties of a single odor substance, and appropriate mixtures of different odor substances, all affect the perception of a volatile chemical stimulus (Shepherd 1990). Although there are examples of simple single molecules with distinct aromas, the discovery of the olfactory code is an extremely complicated task. Odor molecules may vary in structure from simple aliphatic short-chain organic molecules to complex aromatic molecules with multiple functional groups, as shown in Figure 1.3, which illustrates that small changes in the nature of the functional group, the shape, and the size of a molecule can induce great changes in our perception of odor quality (how we describe our perception of an odor). In many cases a change of a single carbon atom can give rise to different odor qualities in terms of human perception, while in other cases apparently unrelated chemicals produce similar odor perceptions, as shown in the case of almond odor in Figure 1.4, where it can also be seen that the shapes of these chemically different molecules may be very similar.

FIGURE 1.3 Structure and space filling models of a range of odorants. The size, shape, and functional groups determine the quality of the odorant molecules perceived.

FIGURE 1.4 Different chemicals may produce similar odor perceptions: changing the functional groups on the ring does not change the perception of almond odor from benzaldehyde, nitrobenzene, and salicylaldehyde.

Odorants must be volatile in order to be carried to the nasal cavity. Their molecular weight is therefore confined to roughly 30 to 300 g mol⁻¹. Other important factors are charge distribution and polarizability. Nonpolar, symmetrical molecules tend to be gaseous at room temperature but are generally not odorants. Introduction of one or two functional groups establishes some polarity and better interaction between molecules. Hydrogen bonding is also expected to play an important role, particularly in the interaction of the odorant molecules with the olfactory receptor proteins. These characteristics are typical of odorants and relate to their affinity for, and interaction with, the olfactory receptors. The recognition of an odor also depends on the threshold detection limit, which varies between different people and varies greatly between different animals.

Molecules containing covalently bonded sets of atoms with a delocalized conjugated system of π-orbitals in a coplanar arrangement with one or more rings have a special place in organic chemistry. If the number of delocalized π-electrons is even but not a multiple of 4 (Hückel’s rule), the molecule is called aromatic. Although this term has a precise definition, the name arose from the fact that many of these types of molecules have a distinctive odor.

It is important to understand what properties of a molecule are mapped by the olfactory system. These include the presence of functional groups, geometry (e.g., molecular length, position of functional groups, geometry of double bonds), and connectivity (e.g., number and sizes of rings, branching). Weyerstahl (1994) reviewed structure-odor correlations, and some of the concepts described have been developed further and updated by Sell (2006). Many odorants contain only one strongly polar function in the molecular structure, and this polar group is thought to form a hydrogen bond or some other dipolar attachment to a polar site on an olfactory receptor, with the remainder of the molecule occupying a hydrophobic space in the receptor. Polar groups such as aldehydes, ketones, esters, and ethers, and also some heteroatoms, such as nitrogen and sulfur, are called osmophores (Rupe and von Majewski 1900), and they are used as molecular reference points when mapping structure-activity relationships (SARs). A second, usually weaker, electron donor or acceptor may also sometimes be involved.

Relating the structure of an odorant molecule to the sensation that is perceived is not an easy task, and there are many unexplained findings in work so far published. The interaction of a molecule with an olfactory receptor translates into a mixture of repulsive and attractive forces that a molecule encounters when bound to a receptor, and it is clear that simplistic shape models of odorant-receptor interaction do not quite account for the observations. Different types of molecules can have similar odors, while similar molecules can have dissimilar odors; e.g., a series of molecules of different chemical families all have an almond odor to humans (Figure 1.4), while two very similar molecules, such as D-carvone and L-carvone, or vanillin and isovanillin (Figure 1.5), elicit very different sensory perceptions. These are discussed in some detail by Turin and Yoshii (2003) in terms of possible mechanisms that are involved in transduction of these molecules. Some olfactory receptors appear to be tuned to certain families of chemicals; e.g., when the receptive range of OR-I7 was mapped (Araneda et al. 2000), this indicated good selectivity to a range of aliphatic aldehydes up to chain size C11. New methods allowing expression of olfactory receptors on the surface of human embryo kidney cells (HEK293), which are then loaded with fluorescent dyes allowing calcium imaging to be carried out, have given us a new tool to map the selectivity of olfactory receptors (Sanz et al. 2005). Unlike the OR-I7 receptor, human OR-1G1 receptor seems to be quite selective within a class of substrates (for example, alcohols or acids), but not selective between classes, as it responds to alcohols, aldehydes, acids, esters, lactones, and a variety of heterocyclic systems (Sanz et al. 2005). Spehr and Leinders-Zufall (2005) showed that activation of human olfactory receptor (hOR-17) by bourgeonal is inhibited by undecanal, a nonagonist, suggesting allosteric interactions and hence multiple binding sites. Triller et al. (2008) suggest that receptor activation rather than odor should be used as input data in order to understand odorant-receptor interaction.

FIGURE 1.5 Similar molecules may produce different sensory perceptions. Isomers of carvone have different odor qualities: L-carvone, spearmint; D-carvone, caraway. Similarly, vanillin and isovanillin have different odors—vanilla and a weak phenolic odor, respectively.

Objective description of odor perception in humans is a complex task. Methods used include

  • 1. The use of profiles of reference odors where an odorous substance is described in terms of a similarity profile that is related to a certain number of reference substances covering the olfactory space.
  • 2. The use of profiles of semantic descriptors: An odorous substance is described by means of a list of semantic descriptors, the values of which can vary in intensity.
  • 3. The use of odor similarities: The likeness between two substances is ranked on a numerical scale fixed a priori (Callegari et al. 1997). They deduce from an analysis of published data that the estimation of a similarity between odors is based only on the distinctive elements; the common elements do not add any information. They also indicate that 25 well-chosen descriptors are enough to describe the perceptual olfactory space.

To model the molecular characteristics that might account for our perceptions, quantitative structure-activity relationship (QSAR) studies are useful tools. It can be considered that a molecule makes a random walk from a very dilute solution until it hits the appropriate receptor or binding site that elicits a response. The rate at which this biological response occurs depends on the properties of the molecule in question:

R = A · k_x · C_x    (1.1)

where R is the response, A is the probability of a molecule reaching a binding site in a defined time interval, k_x is a rate constant or equilibrium constant, and C_x is the extracellular concentration of the molecules. The properties most commonly correlated with biological activity are electronic properties and lipophilicity, measured by the Hammett substituent constant σ and the lipophilicity constant π, respectively (Hansch and Fujita 1964). Although other parameters have been investigated in QSAR equations, σ and π are the most widely accepted. With these two parameters, a typical QSAR equation takes the form of Equation 1.2:

log(1/C) = −k·π² + k′·π + ρ·σ + k″    (1.2)

Log 1/C is a term representing the concentration of a drug needed to achieve a desired level of effect, and k, k′, ρ, and k″ are regression coefficients. The π term is present as both π and π² since lipophilicity tends to follow a parabolic relationship relative to activity. In a study of structurally similar odorless and odoriferous benzenoid musks, Yoshii and coworkers (Yoshii et al. 1991, 1992) searched for the best molecular parameters to discriminate these groups. They found that the best three parameters were the log P value (the octanol-water partition coefficient, indicating the balance of lipophilicity and hydrophilicity), the longest side length of a hexahedron circumscribing the molecule, and a parameter expressing the structural hindrance to the functional group when a molecule approaches the receptor site.
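As a hedged illustration of how Equation 1.2 is used in practice, the sketch below fits the Hansch-type relationship to synthetic substituent data by ordinary least squares; the π and σ values and the coefficients are invented placeholders, not data from the studies cited above.

```python
# Fit log(1/C) = -k*pi^2 + k'*pi + rho*sigma + k'' by least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
pi = rng.uniform(-1.0, 2.0, 40)       # hypothetical lipophilicity constants
sigma = rng.uniform(-0.5, 0.8, 40)    # hypothetical Hammett constants
true_k, true_kp, true_rho, true_kpp = 0.4, 1.2, 0.9, 2.0
log_inv_C = (-true_k * pi**2 + true_kp * pi + true_rho * sigma + true_kpp
             + 0.05 * rng.standard_normal(40))   # synthetic "activities" with noise

# Design matrix [pi^2, pi, sigma, 1] and ordinary least-squares fit
A = np.column_stack([pi**2, pi, sigma, np.ones_like(pi)])
coef, *_ = np.linalg.lstsq(A, log_inv_C, rcond=None)
print("fitted (-k, k', rho, k''):", np.round(coef, 3))
```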

Similar QSAR techniques have been adapted by Guo and Kim (2010) to map the electrophysiological responses of Drosophila to 108 odorants. It is worth dwelling on some of the important findings. Drosophila olfactory receptors are thought to use a combinatorial mechanism to encode olfactory information at the receptor level, and odors may either stimulate or inhibit receptor responses. It was found that linear odorants with five to eight nonhydrogen atoms in the main chain and a hydrogen bond acceptor or donor at their ends stimulate a strong excitatory response. Guo and Kim compared the sequences of 90 ORs in 15 orthologous groups and identified 15 putative specificity-determining residues (SDRs) and 15 globally conserved residues that were postulated to be functionally key residues. On mapping these residues to models of secondary structure, it was found that 12 residues were located in transmembrane domains, many of them in the extracellular halves of these domains. As a result, it was hypothesized that the odorant binding pocket lies in this extracellular region. Evidence from QSAR modeling indicates that the binding pocket is about 15 angstroms deep and about 6 angstroms wide, with 12 key polar or charged residues located in this pocket that distinguish between docked odorants on the basis of geometrical fitting of the molecule into the binding pocket and hydrogen bond interactions. It seems that many of the original concepts of receptor site binding outlined by Amoore (1962a) are now being verified.

Many researchers have attempted to model the binding sites of individual olfactory receptors. The QSAR approach can give valuable models of the ligand structure and information on the form of the binding pocket. Receptor-based approaches such as homology modeling create a model of the protein and the binding site explicitly, and from this give information on ligand binding (Crasto 2009). The two techniques can also be combined. Gelis and coworkers (2012) have used a molecular dynamics approach to predict which amino acid residues play a crucial role in binding odorants. They combined dynamic homology modeling with site-directed mutagenesis and functional analysis to produce a molecular model of the ligand binding niche of hOR-2AG1 within a receptor model. From computed hydrogen bond contact frequencies to the amino acids forming the ligand binding site, they deduced the basis for receptor activation by ligands, with these residues acting as a ligand selectivity filter. This information gives new insight into the interaction of volatile, highly hydrophobic, and flexible ligands with olfactory receptors.

1.3. ODOR SAMPLE

What is perceived by an animal is a stimulus that is fluctuating constantly in terms of its composition as well as its instantaneous concentration. The native environment of an animal provides varied samples of odor stimuli in space and time (see Figure 1.6). It is thought that insect olfactory systems work as flux detectors rather than concentration analyzers, since they react rapidly to changes in the local environment (Kaissling and Rospars 2004; Rospars et al. 2007). Odor samples that are being perceived consist of short bursts where the instantaneous concentration may vary widely. Odor plumes are often meandering and subject to molecular and turbulent diffusion. Near a source the peak-to-mean ratio is smaller, and farther away from the source the bursts are on average weaker but longer, with greater gaps between them, although there are always exceptions. Insects are capable of resolving bursts of odor at 10 s⁻¹. Murlis and coworkers (Murlis et al. 1992; Murlis and Jones 1981) describe mathematical models that allow a description of the wisps of odor that are emitted from a source. A plume can be described in terms of its large-scale structure, where the shape and average odor strength help an animal to orient; in terms of the small-scale structure within the plume itself, where fluctuating concentrations influence the behavior of an animal when it is within the plume; or in terms of a time-averaged structure, where the animal may contact the plume at different points downwind from the source.

FIGURE 1.6 Odor plumes. An odor plume can be thought of as being sheared off as a strong single strand from the source of emission. Air turbulence shreds the odor plume into many substrands as it is transported by larger-scale turbulence out into the environment.

As a rat sniffs an odor, the air in the nasal passages is conditioned and changes in temperature and humidity, so that by the time molecules arrive at the olfactory epithelium, the air is at fairly constant temperature and humidity. Sniffing dynamics may also direct the stimulus to optimal portions of the sensory epithelium (Schoenfeld and Cleland 2006), and head and body movements help an animal detect and locate an odor. Once an odor is detected, attentional behavior associated with any sensory stimulus will occur. However, we know relatively little about the dynamical interaction between brains and their environments.

1.4. OLFACTORY SYSTEMS

Insect and vertebrate olfactory systems show many analogies in terms of organization (Hildebrand and Shepherd 1997), as shown in Figure 1.7. It is thought that these olfactory systems evolved independently or that there was a common olfactory ancestor with subsequent drastic divergence in the development of olfactory receptors (Benton 2009), with these evolutionary sequences converging on similar types of algorithms (Ache and Young 2005; Kay and Stopfer 2006). Both vertebrate and invertebrate olfactory systems need to perform similar tasks. They need to detect potential signals of interest from chemically noisy environments. They have the task of feature extraction to extract these signals from a complex and changing odor background to form internal representations of the chemical stimuli, and then to compare these patterns to those of previously experienced odors. They further need to differentiate relevant from irrelevant stimuli, which may be context dependent, and then make an appropriate behavioral or other response. From anatomical, physiological, and behavioral evidence, many of the neural circuit elements comprising the olfactory system have been proposed to contribute to these processes in particular ways; for example, multiple feedback and feed-forward interactions among olfactory structures, as well as between olfactory and nonolfactory areas, are thought to contribute to the filtering and construction of olfactory representations (Ache and Young 2005).

FIGURE 1.7 Comparative olfactory systems of insects and vertebrates. In insects, ORNs are located and compartmentalized in olfactory sensilla, while in vertebrates ORNs are found in the olfactory epithelium. In insects there are different types of sensilla.

From the periphery of the olfactory system to the higher processing areas of the brain there are several key components. In considering the design of artificial or biomimetic chemosensory systems, some of the important features are discussed in the following sections.

1.4.1. Peripheral Systems

As discussed above, odorants are small molecules that must partition from air to an aqueous phase in order to reach the binding sites of the receptor cells. In vertebrates the mucus layer overlying the olfactory epithelium contains an ionic medium bathing the olfactory receptors (Figure 1.1). This medium contains small water-soluble proteins that are called odorant binding proteins because they are capable of binding several types of odorant molecules with low affinity. Similar proteins are also found in insects. Odorant binding proteins (OBPs) constitute one of the largest groups of proteins involved in olfaction (Vincent et al. 2000, 2004; Xu 2005; Xu et al. 2009). In vertebrates OBPs form two groups: OBPs and pheromone binding proteins (PBPs). Vertebrate OBPs belong to a superfamily of proteins called lipocalins, which are commonly involved in the transport of hydrophobic molecules. Insect OBPs can be divided into three groups: PBPs, general odor binding proteins (gOBPs), and antennal specific proteins (ASPs) (Pelosi and Maida 1995; Forêt and Maleszka 2006). The vertebrate and insect OBPs are not homologous and are thought to have arisen by convergent evolution (Forêt and Maleszka 2006). However, the similarities between these groups of proteins make solid distinctions and specific role allocations difficult.

OBPs require a significant amount of energy in order to maintain their very high turnover rate, which implies that they have an important physiological function. They are secreted in high concentrations by nonneuronal cells into the fluid surrounding the olfactory dendrites. This aqueous fluid provides a barrier between primarily hydrophobic odorant molecules and the olfactory receptors (ORs). In invertebrates this fluid is the sensillum lymph, and in vertebrates it is a layer of mucus. OBPs are thought to solubilize and shuttle odorants through this aqueous fluid to allow physical contact with ORs, and this is proposed to be their primary role (Pelosi 2001). This is supported by the fact that OBPs of many species have been shown to bind reversibly and selectively to a wide range of odorants (Xu 2005; Kim et al. 1998). Besides this shuttle role, OBPs have been suggested to have many other roles, including odor recognition (Forêt and Maleszka 2006), odor release (Hekmat-Scafe et al. 2002), concentration of odorants (Pelosi and Maida 1995), and protection of the nasal mucosa and odorant scavenging to prevent receptor saturation (Tegoni et al. 2000). Figure 1.8 shows a schematic diagram of porcine OBP with the ligand 2-isobutyl-3-methoxypyrazine interacting with the binding site. The protein belongs to the lipocalin family and is conformationally very stable.

FIGURE 1.8 Porcine OBP. The secondary structure of the OBP is characterized by a central antiparallel β-barrel and an α-helix, which are characteristic of the lipocalin superfamily. A large buried cavity in the β-barrel forms the ligand binding site.

From an engineering perspective, the function of these proteins has interesting consequences. Focusing on the pheromone binding protein (PBP), it is thought that it may have the following functions in insects:

  • 1. Solubilization of the pheromone (which is a lipophilic molecule)—transport through the sensillar lymph, which prevents it from entering the cell membrane
  • 2. Protection of the pheromone from enzymatic degradation
  • 3. Pheromone receptor activation
  • 4. Pheromone deactivation (scavenging function)
  • 5. Provision of organic ions to the sensillar lymph

There is controversy over these functions, and Vogt and coworkers (Vogt 2006; Vogt and Riddiford 1981, 1984) differ in opinion from Kaissling (2009) over the details of these steps, but the latter has modeled the kinetics of the various pathways, explored a variety of models that have evolved over time, and arrived at some interesting numbers (see Figure 1.9). The consequence of the sequestration of pheromone by the pheromone binding protein is that it acts as a preconcentrator. After adsorption from the air space at the surface of the olfactory hair, the pheromone (F) passes through the hair wall via the pore tubules. The pheromone is then transported to the receptor neuron while bound to the PBP. It is thought that most of the pheromone entering the hair lumen (the fraction Q1) binds to the PBP. A portion of the incoming pheromone (1 − Q1) encounters a pheromone-degrading enzyme (E) within the sensillum lymph, is rapidly degraded to a metabolite (M), and no longer functions as a stimulus compound. It is assumed that the pheromone-PBP complex rather than the free pheromone interacts with the receptor molecules (R). A single activation of the pheromone-PBP-receptor complex is thought to elicit an elementary receptor potential (Minor and Kaissling 2003). From experimental data and modeling, Kaissling arrives at the temporal characteristics shown in Figure 1.9. After entering the hair lumen, the free pheromone F may bind to PBP or be degraded enzymatically. It has a half-life of about 3 ms due to binding to the PBP and the formation of the pheromone-PBP complex (reaction 2). This half-life is much shorter than expected from enzymatic degradation alone (13 ms). Taking PBP binding and degradation together, the overall half-life of free pheromone is about 2 ms. The direct formation of FA is comparatively slow (267 ms), and FA is much more readily formed via FB (reactions 2 and 4). The formation of the activated receptor molecule FAR′ via the complexes FA and FAR is relatively slow, altogether in the range of about 400 ms. Hence, it would appear that the PBP acts as a preconcentrator, allowing quite high concentrations of the pheromone to be delivered to the receptor.

FIGURE 1.9 Temporal model characteristics (model N) by Kaissling (2009). Model of the time constants associated with PBP binding in Bombyx mori. The pheromone is adsorbed by the sensillum hair (reaction 1) and diffuses along the hair surface, through the hair wall via the pore tubules.
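The overall half-life of free pheromone quoted above can be checked with a small calculation, assuming (as the model implies) that PBP binding and enzymatic degradation act as parallel first-order losses whose rates add; the sketch below is a worked check rather than part of Kaissling's model.

```python
# Combined half-life of free pheromone under two parallel first-order losses.
import math

t_half_pbp = 3.0      # ms, loss of free pheromone by binding to PBP
t_half_enzyme = 13.0  # ms, loss by enzymatic degradation alone

k_pbp = math.log(2) / t_half_pbp
k_enz = math.log(2) / t_half_enzyme
t_half_combined = math.log(2) / (k_pbp + k_enz)
print(f"combined half-life of free pheromone: {t_half_combined:.1f} ms")  # about 2 ms
```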

Pheromone perception in insects is a useful model for making comparisons with vertebrate olfaction. In contrast to general odor perception, pheromone perception can involve one known molecule that elicits a specific and measurable response. It is easier to measure electrophysiological and behavioral responses in vivo in insects than in vertebrates, and it is easier to work with the insect genome and produce mutant strains. Common characteristics of OBPs and PBPs suggest that they may fulfill similar functions. Nevertheless, important differences exist between the olfactory systems of these two groups of animals. Unlike that of mammals, the structure of the olfactory system in insects allows independent regulation of the aqueous environment (Steinbrecht 1998; Xu 2005; Kim et al. 1998). Furthermore, in contrast to the mammalian nose, which carries out a variety of functions, the antennae of insects are specialized for odor and pheromone perception (Pelosi and Maida 1990, 1995). Genome sequencing shows that mammals and nematodes express a large number of ORs and very few OBPs. Consequently, it has been proposed that in these animals OBPs play a much smaller role, while the combinatorial use of ORs constitutes the main mechanism of odorant discrimination. Conversely, insects have a smaller number of ORs and a larger number of OBPs; thus, OBPs may play a more important role. In each sensillum a subset of OBPs and ORs could work together in the discrimination of odorants (Forêt and Maleszka 2006; Hekmat-Scafe et al. 2002).

1.4.2. Olfactory Systems and Biological Algorithms

The olfactory epithelium in vertebrates and the antennae in insects are the interface between the olfactory systems and the external environment. Odorant molecules bind to receptor proteins in the ciliary membranes of olfactory receptor neurons (ORNs). An individual odor interacts with a subset of the huge number of receptors present, activating the ORNs linked to that subset. An ORN produces a series of electrical spikes whose frequency is a monotonically increasing function of the odorant concentration, dependent on the receptor-odorant binding affinity. As each neuron expresses one or a few receptor types (Kauer and White 2001; Korsching 2001), a pattern of active and inactive ORNs is associated with the odorant. Different odors interact with different but overlapping sets of receptors, generating different but overlapping ORN patterns. Hence, a combinatorial coding of the odor is performed by the olfactory system in this first stage of perception (Firestein 2001; Malnic et al. 1999).
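A minimal sketch of this combinatorial recruitment is given below; it assumes, purely for illustration, a saturating Hill-type dependence of spike rate on concentration, with hypothetical affinities, receptor counts, and activity threshold.

```python
# Toy model: each ORN type fires at a rate that increases monotonically with
# concentration; the set of "active" types grows with concentration, giving a
# concentration-dependent combinatorial pattern.
import numpy as np

rng = np.random.default_rng(2)
n_receptor_types = 20
affinity = rng.lognormal(mean=0.0, sigma=1.0, size=n_receptor_types)  # hypothetical affinities

def orn_rates(concentration, max_rate=100.0, n_hill=1.5):
    """Spike rate per ORN type, saturating but monotonically increasing in concentration."""
    drive = (affinity * concentration) ** n_hill
    return max_rate * drive / (1.0 + drive)

pattern_weak = orn_rates(0.1) > 20.0     # which ORN types exceed an arbitrary activity threshold
pattern_strong = orn_rates(10.0) > 20.0
print("active ORN types at low concentration: ", int(pattern_weak.sum()))
print("active ORN types at high concentration:", int(pattern_strong.sum()))
```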

The odor response pattern, distributed across a large number of ORNs, is then processed by the olfactory bulb (OB) in mammals or the antennal lobe (AL) in insects. These second layers are composed of spheroidal structures, the glomeruli, made up of the synapses between ORN axons and the dendrites of second-order neurons: mitral and tufted (M/T) cells in vertebrates, and projection neurons (PNs) in insects. M/T cells and PNs are the output channels of the olfactory bulb/antennal lobe (see Figure 1.10).

FIGURE 1.10 Parallels in olfactory processing between mammals and insects. Odorants emitted from a stimulus activate distinct subsets of ORNs in both mammals and insects (first-order neurons), and these converge on glomeruli in either the olfactory bulb or the antennal lobe.

Each ORN type projects into one or two glomeruli (Wilson and Mainen 2006), and a single M/T cell (or PN) receives input from just one type of ORN, expressing the same type of receptor, and sends its axon to the olfactory cortex. The first representation of the odor is therefore transformed at the glomerular level into a second, spatially ordered pattern, representing its molecular features and distributed among a much smaller number of output cells (chemotopic coding) (Mombaerts et al. 1996b; Mori et al. 1999; Raman et al. 2006a; Ressler et al. 1994; Vassar et al. 1994; Wang et al. 1998).

The convergence ratio of ORNs to M/T cells and PNs is very high (in mammals it is around 25,000 ORNs per glomerulus), and it allows the amplification of weak signals as well as an averaging of the receptor input. In computational terms, this averages out uncorrelated noise and leads to an increase in the signal-to-noise ratio (S/N) and, consequently, in the sensitivity of the OB/AL compared to that of a single ORN (Duchamp-Viret et al. 1989). It is thus possible to detect odorants below the detection threshold of individual ORNs.
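The gain in sensitivity from convergence can be illustrated numerically, as in the sketch below; 1,000 converging inputs are simulated for speed (the ~25,000:1 ratio quoted above scales the same way), and all values are illustrative rather than physiological measurements.

```python
# Averaging many noisy ORN inputs onto one glomerulus improves S/N roughly as sqrt(N).
import numpy as np

rng = np.random.default_rng(3)
signal, noise_sd, n_orns, n_trials = 0.2, 1.0, 1000, 2000   # weak signal buried in unit noise

single = signal + noise_sd * rng.standard_normal(n_trials)
pooled = signal + noise_sd * rng.standard_normal((n_trials, n_orns)).mean(axis=1)

print("S/N of a single ORN:        ", round(signal / single.std(), 2))
print("S/N after convergence:      ", round(signal / pooled.std(), 2))
print("sqrt(N) improvement factor: ", round(float(np.sqrt(n_orns)), 2))
```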

In the OB there are also interneurons, the periglomerular (PG) and granule cells; the corresponding interneurons in the antennal lobe of insects are the local neurons. Interneurons allow communication within and between glomeruli and regulate the activities of the M/T cells and PNs. Periglomerular cells receive input from the ORN axons and form inhibitory synapses with M/T and PN dendrites within the same glomerulus. Granule cells form inhibitory synapses with M/T cells and PNs of different glomeruli. Local neurons in the antennal lobe connect glomeruli and are mainly inhibitory (Shepherd et al. 2004). While the evidence is not conclusive, the interaction through PG cells may serve as a gain control mechanism, enabling the identification of odorants over several log units of concentration. Local inhibition introduces time as an additional dimension for odor coding by generating temporal patterning of the spatial code available at the glomerular (GL) layer.

The representation of odors at the glomerular level is sent to the olfactory cortex, where different odorants elicit distinct, sparse, and distributed but partially overlapping activity patterns of the pyramidal cells, the main neurons of the cortex. These cells receive synapses from multiple M/T units, and their activity, consisting of action potentials, is observed only if a defined combination of second-order neurons is synchronously stimulated. Since an M/T cell is activated by only one type of receptor, an individual odor must excite a precise combination of receptors to generate activity in a few pyramidal neurons, implementing a coding that is sparse (Poo and Isaacson 2009). This organization leads to a more compact odorant representation than that available at the epithelium, and decouples odor quality from odor intensity.

This mechanism increases the discrimination of odors that are structurally similar. In fact, although two comparable odors generate an analogous pattern at the glomeruli level, they activate different and multiplexed pyramidal cells, generating a diverse pattern at the cortex level. Moreover, in this region, the olfactory information is combined together with the information from other sensory organs and from previous memories to give perception of the odor.

Similar information processing mechanisms are observed in the mushroom body of insects. The projection neurons of the antennal lobe synapse with Kenyon cells (KCs), the main neurons of the mushroom body. In the locust, 50,000 KCs have been identified. Each of these receives input from several PNs, and each PN synapses with about 600 KCs. In this way, the chemotopic representation of an odor, obtained at the glomerular level by the broadly tuned PNs, is transformed into a sparse code generated by the highly specific KCs (Szyszka et al. 2005). Schematic representations of the excitatory responses of distinct second- and third-order neurons to different odors are shown in Figure 1.11. Perez-Orive et al. (2002) observed that even though a sparse coding mechanism makes the system more vulnerable to damage, it introduces important benefits. With respect to a nonsparse coding mechanism, it increases the discrimination power, enhances the memory capacity of the olfactory system, and makes it easier to associate patterns generated by a few neurons with previously stored memories.
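A toy sketch of this dense-to-sparse transformation is shown below: a small set of broadly tuned "PN" activities is randomly projected onto a much larger "KC" population and thresholded so that only a small fraction of cells fire. The dimensions, connectivity, and threshold are arbitrary illustrative choices, not parameters of the locust circuit.

```python
# Random expansion followed by thresholding yields sparse, well-separated codes.
import numpy as np

rng = np.random.default_rng(4)
n_pn, n_kc = 50, 5000
weights = (rng.random((n_kc, n_pn)) < 0.1).astype(float)  # sparse random PN->KC connectivity

def kc_response(pn_activity, active_fraction=0.02):
    drive = weights @ pn_activity
    threshold = np.quantile(drive, 1.0 - active_fraction)  # keep roughly the top 2% of KCs
    return drive > threshold

odor_a = rng.random(n_pn)
odor_b = odor_a + 0.1 * rng.standard_normal(n_pn)          # a structurally similar odor
code_a, code_b = kc_response(odor_a), kc_response(odor_b)
overlap = (code_a & code_b).sum() / code_a.sum()
print(f"active KCs per odor: {int(code_a.sum())}, overlap between similar odors: {overlap:.2f}")
```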

FIGURE 1.11 Excitatory responses of second- and third-order neurons. The second-order neurons, M/T cells in mammals and PNs in insects, generate a second representation of odors at the glomerular level and send it to the olfactory cortex or mushroom body.

1.5. ARTIFICIAL OLFACTION

There has been a large growth of interest in building systems that imitate the five senses of mammals. The sense of smell is perhaps the least appreciated in our daily lives, while many animals rely heavily on their acute sense of smell. Nevertheless, it is an invaluable tool in many industries, past and present. An artificial sensing system that mimics the human sense of smell is desired in a range of fields, such as food and drink production, the tobacco and cosmetics industries, and environmental monitoring in chemical plants. The human sense of smell is influenced by many factors, such as age, gender, state of health, and mood, and communicating what is smelled from one person to another is limited by the linguistic ability of the individual. The response of subjects to different odors and odor concentrations is highly subjective, as the human sense of smell is relatively weak and can vary dramatically from person to person. An instrument that could perform simple odor discrimination and provide measurement of odor intensity, without the influences mentioned above, would be very useful in modern industry. The concept of an electronic nose has systematic similarities to the way humans sense odors. Sensors in the system act as the receptors in the nasal mucosa do, and the responses given by the sensors form a pattern (a set of signals), like the resulting event in the olfactory bulb due to a stimulus. As outlined in the previous sections, the biological olfactory code is not yet well understood, but the basic criteria of a nose for odor recognition are broadly defined. This information was sufficient for designing a machine with which it was possible to discriminate and recognize odors.

One of the first instruments specifically designed to detect odors, developed in 1961 (Moncrieff 1961), was really a mechanical nose. An electronic nose was reported in 1964 by Wilkens and Hartman (1964), based on redox reactions of odorants at an electrode. In the following year, Buck et al. (1965) and Dravnieks and Trotter (1965) showed the potential of electronic noses based on the modulation of conductivity and the modulation of contact potential by odorants, respectively. Research in this field showed little progress until 1982, when Persaud and Dodd (1982) reported successful discrimination of a wide variety of odors using multiple semiconductor transducers; the concept of an electronic nose, as an intelligent chemical array sensor system for odor classification, was practically modeled for the first time. The heart of any electronic nose is the type of sensors used in the particular system. Applications of the electronic nose are typically restricted by the sensors' characteristics to the target gases or odors and the operating environment. Many types of sensor have been developed to detect specific gases and vapors since the 1970s. The human nose can identify many odors that may contain hundreds of individual chemical components, and therefore the sensors for an electronic nose should show broad, generalized selectivity at the molecular level. The desired properties for sensors are high sensitivity, rapid response, good reproducibility, and reversibility to large numbers of chemicals. It is also desirable for an electronic nose to be small in size, flexible, and able to adapt to many environments as well as operate at ambient temperatures.

There are numerous types of gas sensor arrays used in electronic noses. These include metal oxide semiconductor (MOS) sensors, catalytic gas sensors, solid electrolyte gas sensors, conducting polymers, mass-sensitive devices, and fiber-optic devices based on Langmuir–Blodgett films (see Table 1.1). The oxide materials, which have been popular for use in electronic noses, operate on the basis of modulation of conductivity when the odorant molecules react with chemisorbed oxygen species. There are commercially available metal oxide sensors that operate at elevated temperatures, between 100 and 600°C (Arshak et al. 2012). They are sensitive to combustible materials (0.1–100 ppm), such as alcohols, but are generally poor at detecting sulfur- or nitrogen-based odors and have a major problem of irreversible contamination with these compounds. Integrated thin-film metal oxide sensors have been designed using planar integrated microelectronic technology that has advantages of lower power consumption, reduction in size, and improved reproducibility; however, they tend to suffer from poor stability. There are a number of advantages in employing organic materials in electronic noses. A wide variety of materials are available for such devices, and they operate close to or at room temperature (20–60°C) with a typical sensitivity of around 0.1–100 ppm. Furthermore, functional groups that interact with different classes of odorant molecules can be built into the active material, and the processing of organic materials is easier than that of oxides.

TABLE 1.1 Sensor Devices: Example Technologies Used in Gas Sensor Arrays

Persaud and Pelosi (1985) proposed an electronic nose using conducting polymers after investigating the properties of a number of such materials. They found several organic conducting polymers that respond to gases with reversible changes in conductivity, fast recovery, and high selectivity toward different compounds. In an experiment with an array of 5 different conducting polymer sensors and 28 odorants, they observed 20 different sets of responses and showed possible discrimination of 14 of the odorants by measuring changes in electrical resistance. These results led Persaud (2005) to produce arrays of gas-sensitive polymers that showed reversible changes in conductivity and rapid adsorption/desorption kinetics at ambient temperatures when exposed to volatile chemicals. The concentration-response profiles of such sensors are almost linear over a wide concentration range for single chemicals. This is advantageous, as simple computational methods may be used for information processing. The raw signals from a sensor array consisting of 32 conducting polymers are shown in Figure 1.12. It can be seen that all the sensors respond to the pulse of the odorant, but there is a range of responses, each sensor having a different degree of response to the same concentration of odorant. This depends on the selectivity and sensitivity of each element of the sensor array to the analyte.

FIGURE 1.12 Raw data from an array of conducting polymer sensors. The raw data responses over time to an odorant stimulus (in this case acetone) from an array of 32 conducting polymer sensors are shown. The array is exposed to clean air and a baseline is taken (A).
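A simple feature often derived from such traces is the fractional resistance change relative to the clean-air baseline. The sketch below computes it from synthetic traces standing in for raw data of the kind shown in Figure 1.12; the trace shape, baseline window, and noise level are assumptions for illustration, not the processing used with the instrument described above.

```python
# Extract a per-sensor dR/R0 feature vector from idealized response traces.
import numpy as np

rng = np.random.default_rng(5)
n_sensors = 32
R0_true = 10.0 + rng.random(n_sensors)              # clean-air baseline resistance per sensor
dR_true = R0_true * 0.05 * rng.random(n_sensors)    # odor-induced resistance change

# Idealized traces: 100 baseline points, then an exponential rise to a steady state
t = np.arange(300)
rise = np.where(t < 100, 0.0, 1.0 - np.exp(-(t - 100) / 30.0))
traces = (R0_true[:, None] + dR_true[:, None] * rise[None, :]
          + 0.002 * rng.standard_normal((n_sensors, 300)))

R0 = traces[:, :100].mean(axis=1)      # baseline window (region A in Figure 1.12)
Rss = traces[:, -50:].mean(axis=1)     # steady-state window during the odor pulse
features = (Rss - R0) / R0             # fractional change dR/R0, one value per sensor
print("feature vector (first five sensors):", np.round(features[:5], 4))
```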

1.5.1. Data Processing

A set of output signals from individual sensors within an electronic nose must be converted into signals appropriate for computer processing. Electronic circuits within a sensing instrument usually perform an analog-to-digital conversion of output signals from sensors, feature extraction of useful information, and interfacing to an external computer for pattern analysis. For unattended automated sampling, it is common to have a mass flow control system that delivers the odor from the source to the sensor array. The final response vectors generated by the sensors are then analyzed using various pattern recognition techniques.

Early researchers in the field used various multisensor systems for identification of different types of gases, often in parallel with analytical methods. Zaromb and Stetter (1984) provided a theoretical basis for the selection and effective use of an array of chemical sensors for a particular application. Bott and Jones (1986) attempted to build a multisensor system to monitor hazardous gases in a mine using six sensors of three different types in combination with oxidizing layers and absorbent traps. The system was able to distinguish between gases evolved from a fire and those evolved from diesel engines or explosives. Müller and Lange (1986) demonstrated possible identification and concentration measurement of six different gases by means of four MOS gas sensors with layers of different types of zeolite filters. Gall and Müller (1989) adapted similar methods for identifying gas mixtures using partial least-squares and transformed least-squares methods for analysis, which led Sundgren and coworkers (1990) to attempt improvements using three pairs of Pd-gate and Pt-gate MOSFETs. Kaneyasu et al. (1987) modeled an early version of an electronic nose using six integrated sensor elements with a single-chip microcomputer. Odor-identifying systems containing two or more different types of sensors are often an attempt to enhance the different dimensional characteristics of the responses. Stetter and coworkers (1986) demonstrated a combined system with a hydrocarbon sensor and an electrochemical sensor (Stetter et al. 1990). They obtained responses from both the hydrocarbon sensor and the electrochemical sensors after passing the vapors through a combustible gas sensor, and the resulting data were successfully analyzed using pattern recognition methods based on neural networks.

Despite a number of disadvantages, including high power consumption, elevated operating temperature, poisoning by sulfur-containing compounds, and poor long-term stability, MOS gas sensors are the most widely used in gas and odor detection. The main reason is that commercial MOS products have been available for a number of years. Abe et al. examined an automated odor sensing system based on plural semiconductor sensors to measure 30 substances (Abe et al. 1987) and 47 compounds (Abe et al. 1988). They analyzed the sensor outputs using pattern recognition techniques: the Karhunen–Loève (K-L) projection for visual display, and the k-nearest neighbor (k-NN) method and potential function method for classification. Shurmer et al. (1989) worked on discrimination of alcohols and tobaccos using tin oxide sensors based on the correlation coefficient method. Weimar et al. (1990) demonstrated the possibility of determining single gas components, such as H2, CH4, and CO, in air from specific patterns of chemically modified tin oxide-based sensors by using two different multicomponent analysis approaches. Until the late 1980s, most methods applied to identification, classification, and prediction from gas sensor outputs were based on conventional pattern recognition techniques. In the 1990s, Sleight (1990) and Gardner et al. (1990) suggested the possible application of artificial neural networks (ANNs) to electronic nose systems. Gardner and coworkers implemented a three-layer back-propagation network with a 12-input, 5-output architecture for the discrimination of several alcohols, and reported that it performed better than their previous work (Shurmer et al. 1990) carried out using analysis of variance (ANOVA). Cluster analysis and principal component analysis (PCA) were used to test 5 alcohols and 6 beverages with 12 tin oxide sensors (Gardner 1991; Gardner et al. 1992a). The results were presented for raw and normalized responses, and showed that the theoretically derived data normalization substantially improved the classification of chemical vapors and beverages. Further investigations were carried out to discriminate the blend and roasting levels of coffees, the differences between tobacco blends in cigarettes, and three different types of beer. The results confirmed the potential application in an electronic instrument for on-line quantitative process control in the food industry (Shurmer and Gardner 1992; Gardner et al. 1992a, 1992b). Hines and Gardner (1994) also developed a stand-alone microprocessor-based instrument that can classify the signals from an array of odor-sensitive sensors. Data from the odor sensor array were initially used to train a neural network on a personal computer, and the neuronal weights were then transferred to an artificial neural emulator (ANE), which consisted of a microprocessor, ADC chips, read-only memory (ROM), and random access memory (RAM). The group also worked on improving the performance of oxide semiconductor sensors with respect to long-term stability and poisoning (Gardner 1995), on a multisensor system using an array of conducting polymers (Gardner et al. 1994), and on an adaptation of fuzzy neural networks for classification (Singh et al. 1996).
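In the spirit of the 12-input, 5-output back-propagation network mentioned above, the sketch below trains a small multilayer perceptron on synthetic sensor data using scikit-learn; the hidden layer size and the data are illustrative assumptions and do not reconstruct the original study.

```python
# Small supervised classifier for a hypothetical 12-sensor array and 5 odor classes.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_samples, n_sensors, n_classes = 500, 12, 5
centers = rng.uniform(0.0, 1.0, (n_classes, n_sensors))   # synthetic class prototypes
y = rng.integers(0, n_classes, n_samples)
X = centers[y] + 0.05 * rng.standard_normal((n_samples, n_sensors))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)                                  # back-propagation training
print("test accuracy:", net.score(X_test, y_test))
```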

Another approach to odor sensing used a quartz resonator sensor array, in which odor detection is based on the changes in oscillation frequency that occur when gas molecules are adsorbed onto sensing membranes. Nakamoto et al. employed pattern recognition methods, including a three-layer back-propagation network and principal component analysis, for the discrimination of several different types of alcoholic drinks using a selection of sensing membranes (Ema et al. 1989; Nakamoto et al. 1990, 1991). They also proposed a new processing element model based on fuzzy theory and Kohonen’s learning vector quantization (LVQ) for the discrimination of known and unknown odors (Moriizumi et al. 1991).

Pattern recognition in the electronic nose system may be regarded as a branch of artificial intelligence that involves the mimicking of human intelligence to solve chemical problems. Figure 1.13 summarizes methods commonly used in the field, some of which are discussed in more detail here. There are two main approaches to pattern recognition: parametric and nonparametric. Parametric methods rely upon obtaining or estimating the probability density function of the parameters used to characterize the response of a system. Conversely, nonparametric methods require no assumption about the underlying statistical distributions of the data. Two types of nonparametric learning or classification methods are available: supervised and unsupervised. Supervised methods involve the learning of data based on advance knowledge of the classification, whereas unsupervised methods make no prior assumption about the sample classes but try to separate groups or clusters. Mapping methods in pattern recognition are an unsupervised way of analyzing chemical inputs. Many authors since these early days of electronic nose research have used pattern recognition techniques for a number of practical applications. These include wine discrimination (Garcia et al. 2006; Lozano et al. 2008), identification of microorganisms (Moens et al. 2006), fish freshness assessment (GholamHosseini et al. 2007), detection of urinary tract infections (Kodogiannis et al. 2008), and identification of components in mixtures (Penza and Cassano 2003; Penza et al. 2002).

FIGURE 1.13. Data processing methods. The diagram illustrates a number of methodologies used in processing multidimensional data from sensor arrays.

1.5.2. Algorithms

Data projection or mapping is a highly desirable feature in gas and odor sensing, as it allows an approximate visual examination of multidimensional input data. The patterns from the electronic nose system due to chemical stimuli are described in a multidimensional space (the dimensionality depends on the number of features used). Human vision is very good at recognizing patterns in low dimensions (≤3), but cannot perceive higher-dimensional (>3) relationships. Data projection enables us to visualize high-dimensional data to better understand the underlying structure, explore the intrinsic dimensionality, and analyze the clustering tendency of multivariate data. There are a number of ways to achieve this, but the basic objective is the same: to reduce the original high dimension to a lower dimension while preserving the structure of the input patterns as well as possible.

Two general approaches are available for the mapping methods: linear and nonlinear. In linear mapping, each of the new features described in a low dimension (usually either two- or three-dimensional space) is a linear combination of the original features that are described in a higher dimension. A linear mapping function is well defined, and its mathematical properties are well understood. Linear projection methods have been commonly used in the pattern recognition parts of many gas and odor sensing systems. Stetter et al. (1986) applied the K-L transformation (reviewed by Wold et al. 1987) to project 20-dimensional data from hazardous gases and vapors onto a two-dimensional space for analysis. Gardner and coworkers (Gardner 1991; Shurmer and Gardner 1992) applied PCA for the discrimination of chemical data from their multisensor array and a hybrid electronic nose. Abe et al. (1987, 1988) used K-L projection for the visualization of the data collected with their plural gas sensors and attempted classification by applying additional pattern recognition techniques such as cluster analysis. Nakamoto et al. (1991) used the first and second principal components in a two-dimensional diagram to show the fine differences among whisky aromas.

Simplicity and generality are the main advantages of linear mappings, whereas nonlinear mapping algorithms have more complicated mathematical formulations. Nonlinear mappings have rarely been applied in gas and odor sensing; however, they can be useful when linear mappings fail to preserve complex data structures. Kowalski and Bender (1972, 1973) applied nonlinear as well as linear mappings to display chemical data in either two or three dimensions. They also studied various mapping methods and suggested a combined linear and nonlinear mapping method. Self-organizing maps (SOMs), based on the Kohonen network, have also been shown to be very useful for discrimination of odor classes (Bona et al. 2012).

If we look at the raw data shown in Figure 1.12, we can extract features from it. By normalizing the raw data so as to express each individual sensor response as a fraction of the total response of the entire array of sensors, we can produce a series of patterns that represent individual classes of analytes presented to the array. Such a series of patterns is shown in Figure 1.14. These patterns represent vectors that may be used as the inputs for further processing of data for classification.
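As a minimal illustration of this fractional normalization, the sketch below assumes the steady-state responses of an array are held in a NumPy matrix with one row per sample and one column per sensor; the function name and the numerical values are hypothetical.

```python
import numpy as np

def normalize_patterns(responses):
    """Express each sensor response as a fraction of the total array response.

    responses: array of shape (n_samples, n_sensors), e.g., steady-state
    resistance changes of each sensor for each analyte exposure.
    Each returned row sums to 1, making the pattern largely independent
    of analyte concentration.
    """
    responses = np.asarray(responses, dtype=float)
    totals = responses.sum(axis=1, keepdims=True)
    return responses / totals

# Hypothetical responses of a four-sensor array to three analytes
raw = np.array([[2.1, 0.4, 1.2, 0.3],
                [0.5, 1.8, 0.6, 1.1],
                [1.0, 1.0, 2.0, 4.0]])
patterns = normalize_patterns(raw)  # each row now sums to 1.0
```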

FIGURE 1.14. Patterns generated from an array of sensors. The raw responses (Figure 1.12) to different analytes can be normalized to be relatively independent of concentration.

1.5.2.1. Sammon Mapping

The nonlinear mapping method proposed by Sammon (1969, 1970) is based upon a point mapping of N L-dimensional vectors from the L-space to a lower-dimensional space such that the inherent data structure is approximately preserved. Therefore, the resultant data configuration can be easily evaluated by human observations in either two or three dimensions.

For N patterns in L dimensions designated Xi (i = 1, 2, …, N) and corresponding N patterns in d dimensions (d = 2 or 3) designated Yi (i = 1, 2, …, N), let the distance between the patterns Xi and Xj in L-space be defined by dij* = dist[Xi, Xj] and the distance between the corresponding patterns Yi and Yj in d-space be defined by dij = dist[Yi, Yj]. Any distance measure function could be used for the interpattern distances, such as the generalized Mahalanobis distance,

d_{ij}^{*} = \left[ (X_i - X_j)^{T}\, C^{-1}\, (X_i - X_j) \right]^{1/2},

Hamming metric,

d_{ij}^{*} = \sum_{k=1}^{L} \left| x_{ik} - x_{jk} \right|,

and the most common Euclidean distance,

d_{ij}^{*} = \left[ \sum_{k=1}^{L} \left( x_{ik} - x_{jk} \right)^{2} \right]^{1/2}.

The Y patterns have the following configuration, where the initial values of the Y patterns in d-space are generated with random values: Y1 = [y11, y12, …, y1d]T, Y2 = [y21, y22, …, y2d]T, …, YN = [yN1, yN2, …, yNd]T, where T denotes the transpose of the vector. All the d-space interpoint distances dij are then used to define an error E, which represents how well the present configuration of N points in the d-space fits the N points in the L-space, i.e.,

E = \frac{1}{\sum_{i<j} d_{ij}^{*}} \sum_{i<j} \frac{\left( d_{ij}^{*} - d_{ij} \right)^{2}}{d_{ij}^{*}} \qquad (1.3)

The error term is a function of the d × N variables ypq (p = 1, 2, …, N; q = 1, 2, …, d), where the ypq variables are adjusted or, equivalently, the d-space configuration is changed so as to decrease the error. A steepest descent procedure, which is a gradient method, is used to search for a minimum of the error E. It is clear from Equation 1.3 that the error is zero only when dij* = dij, which can only be accomplished when the intrinsic dimensionality is actually two or less (three or less for a three-dimensional mapping). In general, the nonlinear mapping emphasizes the fit to small L-space distances, since the dij* term in the denominator of the error function acts as a weighting function, which is large when dij* is small. Once the error is minimized to an acceptable level, the resultant patterns may then be plotted on a two- or three-dimensional graph for visualization.
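A minimal sketch of this procedure is given below, assuming plain first-order gradient descent on the stress of Equation 1.3 rather than the diagonal second-order update used in Sammon's original algorithm; all function and parameter names are illustrative.

```python
import numpy as np

def sammon(X, d=2, n_iter=500, lr=0.3, eps=1e-12, seed=0):
    """Project N L-dimensional patterns into d dimensions by gradient descent
    on Sammon's stress E (Equation 1.3)."""
    X = np.asarray(X, dtype=float)
    N = X.shape[0]
    rng = np.random.default_rng(seed)

    # Interpattern Euclidean distances d*_ij in the original L-space
    Dstar = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)) + eps
    c = Dstar[np.triu_indices(N, 1)].sum()      # normalizing constant in E

    Y = rng.normal(scale=1e-2, size=(N, d))     # random initial configuration
    for _ in range(n_iter):
        D = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)) + eps
        W = (Dstar - D) / (Dstar * D)           # weighting (d*_ij - d_ij) / (d*_ij d_ij)
        np.fill_diagonal(W, 0.0)
        # Gradient of E with respect to each projected point
        grad = (-2.0 / c) * (W[:, :, None] * (Y[:, None, :] - Y[None, :, :])).sum(axis=1)
        Y -= lr * grad
    return Y
```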

1.5.2.2. Combination of Sammon’s Nonlinear Mapping with PCA Linear Projection Method

The original Sammon’s nonlinear mapping algorithm initializes the configuration of Y patterns with small random values, and the configuration of Y patterns is moved closer to the optimum position each time an iteration is completed. It is clear that different initial configurations will result in different output configurations of patterns in the projection space, as Sammon’s nonlinear mapping is based only on the preservation of interpoint distances of patterns. In addition, a rotation of the resulting map may be observed when new patterns are introduced into the data set. This makes it difficult for the observer to relate one map to another. An attempt was made to tackle these problems by combining principal component analysis (PCA) linear projection with Sammon’s nonlinear mapping method, which also provides a better-justified initialization for practical applications.

PCA is a popular and very widely used linear projection method applied in pattern recognition and data analysis. It is based on the statistical Karhunen–Loeve transformation, which creates new variables as linear combinations of the original variables. A linear mapping can be conveniently expressed as Yi = AXi; i = 1, 2, …, N; where Yi is a d × 1 vector, Xi is an L × 1 vector, and A is a d × L matrix. This involves a linear orthogonal transform from an L-dimensional input space to a d-dimensional space, d ≤ L, such that the coordinates of the data in the new d-dimensional space are uncorrelated and a maximal amount of the variance of the original data is preserved by only a small number of coordinates. The rows of the matrix A are the eigenvectors corresponding to the d largest eigenvalues of the L × L covariance matrix C; so, in the case of a two-dimensional projection, the eigenvectors corresponding to the two largest eigenvalues are the rows of matrix A.

The output patterns from the PCA algorithm are then scaled down to smaller values to be used as initial values for Sammon’s nonlinear mapping. The output patterns from the above steps could also be used alone as a linear projection method, as they are linear combinations of all the original coordinates, and this is often the case in papers published in the electronic nose field. Since each eigenvalue is proportional to the variance along its corresponding eigenvector, a measure is available of the percent variance retained by the chosen principal components. Equation 1.4 shows the percent variance for the two-dimensional case:

\%\,\text{variance} = \frac{\lambda_{1} + \lambda_{2}}{\sum_{k=1}^{L} \lambda_{k}} \times 100 \qquad (1.4)

The value indicates how well the interpoint structure is preserved by the principal components from the original multidimensional space. Once the N L-dimensional patterns are related to the patterns in d-dimensional space (d = 2 or 3), the resultant patterns are presented on a scatter graph for observation.
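The sketch below illustrates this linear projection and the percent variance of Equation 1.4, with the eigendecomposition of the covariance matrix computed in NumPy; the function name and the idea of scaling the scores before passing them to a Sammon mapping routine are illustrative assumptions.

```python
import numpy as np

def pca_project(X, d=2):
    """Project N L-dimensional patterns onto the d principal components and
    report the percent variance retained (Equation 1.4 for d = 2)."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)                # mean-centre the data
    C = np.cov(Xc, rowvar=False)           # L x L covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)   # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]      # sort descending
    A = eigvecs[:, order[:d]].T            # d x L projection matrix
    Y = Xc @ A.T                           # N x d principal component scores
    percent_variance = 100.0 * eigvals[order[:d]].sum() / eigvals.sum()
    return Y, percent_variance

# The scaled-down scores can replace the random initialization of the
# Sammon mapping, e.g.  Y0, _ = pca_project(patterns);  Y_init = 0.01 * Y0
```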

It has been shown that relatively simple implementations of biologically justified neural rules for Hebbian learning can produce PCA; hence, there has been much interest in these applications (Karhunen and Joutsensalo 1995). Linear approaches have been shown to have several limitations, and introduction of nonlinearities in mapping between input and output data is advantageous.

Figure 1.15 shows a principal component analysis of patterns taken from Figure 1.14. It can be seen that discrete clusters appear representing individual analytes presented to the array. If this is used to initialize a Sammon map, the resultant graph is shown in Figure 1.16, where now the axes are associated with a Euclidean distance measure between each pattern input into the map.

FIGURE 1.15. Principal component analysis of patterns.

FIGURE 1.16. Sammon map. The Sammon map allows visualization of the relationship between one pattern and another based on a distance measure such as the Euclidean distance.

1.5.2.3. Competitive Learning Neural Networks

Classification of input chemicals is one of the main functions that pattern recognition methods provide in an electronic nose system. The visualization of sampled chemicals gives information on the interrelationship between input patterns via human observation. However, it is important in an intelligent system that the ability to classify does not rely on human judgment, so that the system can be truly automated.

There are many ways to achieve classification using pattern recognition methods. However, parametric methods of pattern recognition (classical statistical pattern recognition methods), which assume the probability density function is known or can be estimated, do not consider the many problems associated with practical applications and are rarely used in chemical analysis. In most cases there are two stages used in a pattern recognition process. First, the response patterns from an electronic nose are trained by a pattern recognition method using mathematical rules that relate the patterns to a known class (training or learning stage). Then the response from an unknown chemical pattern is tested against the knowledge base and the classification is given (testing or recognition stage). This kind of process is based on supervised learning, which is a nonparametric pattern recognition method. Some of the supervised learning methods are linear techniques and assume that the response vectors are well described in Euclidean space. Distance functions are often used to describe a measure of similarity between patterns through their proximity. However, this kind of measure is only useful when the sensor outputs are linearized or concentration independent.
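As one simple example of a distance-based supervised classifier of the kind described above, the sketch below assigns an unknown, concentration-normalized pattern to the class whose training centroid is nearest in Euclidean distance; the class and variable names used are illustrative, and practical systems typically use more elaborate schemes.

```python
import numpy as np

class NearestCentroidClassifier:
    """Each class is represented by the mean of its training patterns; an
    unknown pattern is assigned to the class with the nearest centroid."""

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        dists = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[np.argmin(dists, axis=1)]

# clf = NearestCentroidClassifier().fit(train_patterns, train_labels)
# predicted = clf.predict(test_patterns)
```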

Artificial neural networks (ANNs) are more sophisticated methods of solving chemical problems, which had not been applied to gas and odor sensing using multisensor arrays until late in the 1980s (Gardner et al. 1990, 1992a; Sleight 1990). The ANNs were developed to provide models that were able to represent some aspects of the working principles of the brain, in particular, learning from experience. These methods can handle nonlinear data and offer potential advantages, such as fault tolerance to sensor drift or noise, adaptability, and high data processing rates, over classical pattern recognition methods (Gardner et al. 1990, 1992a; Persaud and Pelosi 1992). A neural network is characterized by three basic elements: the process units that represent the neurons, the network architecture (neuron connections) where the inputs are individually weighted and the output determined by an activation function, and a learning rule that provides the law according to which the network can learn from experience. Figure 1.17 shows a typical neuron model with synaptic connections and a simple processing unit that is capable of performing nonlinear transformations. This type of network is called the generalized perceptron. The neuron in the network is a processor unit with multiple inputs and one output, where the processor unit is divided into two parts: the first part is a weighted sum of the inputs, and the second is a nonlinear transformation of the sum. The weights are optimized according to the learning rule as a function of the network’s experience.

FIGURE 1.17. Perceptron model. A typical neuron with synaptic connections and a simple processing unit capable of performing nonlinear transformations is shown.

A multilayer perceptron network is probably the most widely used architecture, as it can be applied to many problems (Gardner et al. 1992a; Kodogiannis et al. 2008). Figure 1.18 shows a three-layer feed-forward network where the neurons are arranged in three layers: input, hidden, and output. The neurons in one layer are connected to all the neurons in the following layer, and a weight is associated with every connection between two layers. The number of neurons in the input and output layers depends on the respective number of inputs (dimensions of the input patterns) and outputs (number of classes) being considered; however, the number of neurons in the hidden layer is chosen by the user, which essentially defines the topology of the network. Each interconnection has an associated weight that modifies the strength of the signal flowing along that path. Thus, with the exception of the neurons in the input layer, the input to each neuron is a weighted sum of the outputs from neurons in the previous layer. The output of each node is obtained by passing the weighted sum through a nonlinear operator, which is typically a sigmoidal function.

FIGURE 1.18. Multilayer perceptron network. Each perceptron may be interconnected with other perceptrons to produce a network of processing elements.

Once the network topology is defined, a set of input-output data is used to train the network to determine appropriate values for the weights associated with each interconnection. The data are propagated forward through the network to produce an output that is compared with the corresponding output (target) in the data set to obtain an error. This error is minimized by updating the weights through an iterative process until the network converges. The last set of weights is retained as the parameters of the neural network model. Process modeling using ANNs is very similar to identifying the coefficients of a parametric model of specified order, where the magnitudes of the weights define the characteristics of the network. However, unlike conventional parametric model forms, which have an a priori assigned structure, the weights of an ANN also define the structural properties of the model. Thus, an ANN has the capability to represent complex systems whose structural properties are unknown. The back-propagation learning method of Rumelhart et al. (1986) has been used widely in various applications, as it was one of the first effective learning techniques introduced for neural network models. Typically for the back-propagation training algorithm, the mean square error is used as the measure of error during the iterative learning process, and the gradient descent (steepest descent) method minimizes the total network error by adjusting the weights. The negative gradient of the error function with respect to the weights points in the direction that most quickly reduces the error, and a minimum is reached when the gradient becomes zero.
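A minimal sketch of such a three-layer network trained by back-propagation of the mean square error is given below, using sigmoidal units and batch gradient descent; the class name, learning rate, and initialization are illustrative choices rather than those of any of the systems cited above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ThreeLayerPerceptron:
    """Input-hidden-output feed-forward network trained by back-propagation."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)       # hidden layer outputs
        self.o = sigmoid(self.h @ self.W2 + self.b2)  # network outputs
        return self.o

    def train(self, X, T, epochs=2000):
        X, T = np.asarray(X, float), np.asarray(T, float)
        for _ in range(epochs):
            O = self.forward(X)
            # Error signals (deltas) for the output and hidden layers
            delta_o = (O - T) * O * (1.0 - O)
            delta_h = (delta_o @ self.W2.T) * self.h * (1.0 - self.h)
            # Steepest descent weight updates
            self.W2 -= self.lr * self.h.T @ delta_o / len(X)
            self.b2 -= self.lr * delta_o.mean(axis=0)
            self.W1 -= self.lr * X.T @ delta_h / len(X)
            self.b1 -= self.lr * delta_h.mean(axis=0)
```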

In contrast to supervised learning, pattern recognition methods with unsupervised learning do not require a separate training stage. They learn to discriminate between the response vectors automatically, making no prior assumption about the sample classes but trying to separate groups or clusters. Self-organizing artificial neural networks or self-organizing feature maps (SOFMs) (Kohonen 1982) are unsupervised pattern recognition methods whose learning is based on a competitive mechanism called winner takes all. A SOM is a network of neurons arranged at the nodes of a planar square lattice, where each neuron has four immediate logical neighbors. Each neuron is located by a vector whose components are its node coordinates in the lattice. It has multiple input channels and one output channel whose value can be either 1 (active) or 0 (inactive). The SOM technique tries to transform multidimensional input patterns into one- or two-dimensional discrete maps, and to perform this transformation adaptively in a topologically ordered fashion.

SOM networks are usually applied to show the input pattern distribution maps and their scatter plots. A probability density function is normally adopted for the classification of input patterns. Di Natale and coworkers (1995a, 1995b) applied the SOM technique for the discrimination of chemical patterns, proposing a new classification method that generalizes the potential function method to a neural implementation in an unsupervised environment. Distante and coworkers (Distante et al. 2000, 2002; Zuppa et al. 2004) have taken this concept further to introduce multiple self-organizing maps (mSOMs) as a powerful method for classification and feature extraction. The mSOM concept has been used to counteract the effects of drift over time, which is a common feature of chemical sensing systems.

1.5.2.4. Self-Organizing Feature Maps

In a competitive learning neural network, the output neurons compete among themselves to be activated or fired, with the result that only one output neuron is on at any one time. Figure 1.19 shows the architecture of Kohonen’s SOFM network in one dimension, and Figure 1.20 shows it in two dimensions; both consist of two layers: an input layer and an output layer. Each input layer neuron has a feed-forward connection to each output layer neuron (note that only one input neuron’s connections are shown in Figure 1.19). The output neurons that win the competition are called winner-takes-all neurons, where a winner-takes-all neuron is chosen by selecting the neuron whose weight vector has the minimum Euclidean distance from (or maximum similarity to) the input vector.

FIGURE 1.19. Kohonen SOFM networks.

FIGURE 1.20. Kohonen networks—intralayer connections.

The Kohonen network performs clustering through competitive learning. The node with the largest activation level is declared the winner of the competition. This node is the only node that generates an output signal; all other nodes are suppressed to zero activation level. Furthermore, this node and its neighbors are the only nodes permitted to learn for the current input pattern. The Kohonen network uses intralayer connections (see Figure 1.21a, where these connections are shown only for the neuron at the center of the array) to moderate this competition. The output of each node acts as an inhibitory input to the other nodes but is excitatory in its neighborhood. Thus, even though there is only one winner node, more than one node is allowed to change its weights. This scheme for moderating competition within a layer is known as lateral feedback. The inhibitory effect of a node decreases with distance from it, so the lateral interaction takes on the appearance of a Mexican hat, as seen in Figure 1.21b. The size of the neighborhood varies as learning continues: it starts large and is gradually reduced, making the region of change progressively sharper.

FIGURE 1.21. (a) Lateral feedback in a one-dimensional lattice Kohonen layer. (b) Mexican hat function showing inhibition and excitation areas.

In an SOFM, the neurons are placed at the nodes of a lattice that is usually one- or two-dimensional, although higher-dimensional maps are also possible. The neurons become selectively tuned to various input patterns or classes of input patterns in the course of a competitive learning process. The locations of the neurons so tuned, i.e., the winning neurons, tend to become ordered with respect to each other in such a way that a meaningful coordinate system for different input features is created over the lattice. An SOFM is therefore characterized by the formation of a topographic map of the input patterns, in which the spatial locations of the neurons in the lattice correspond to intrinsic features of the input patterns. This map has similarities with the information processing infrastructure of the nervous system. The self-organization model is effective for dealing with complex problems whose mathematical forms are too complicated to define. In practice, the self-organization model compensates for inaccuracies and noise in the sensors.

In pattern classification, the requirement is to classify the input data sets into a finite number of classes such that the average probability of misclassification is minimized. An important part of this task is to delineate the class boundaries at which decisions are made. A parametric method, typically assuming a Gaussian distribution, is commonly adopted for classical pattern classification, whereas in a nonparametric method such as the SOFM, classification is achieved by exploiting the density-matching property of the map. It should be emphasized, however, that the feature map is intended only to visualize metric-topological relationships of input patterns; the SOFM is not recommended for classification on its own, as recognition accuracy can be significantly increased if the map is fine-tuned with a supervised learning scheme (Kangas et al. 1990).

1.5.2.5. Self-Organizing Map Algorithm

The SOM may be categorized as a nonlinear projection of the probability density function of n-dimensional input data into a one- or two-dimensional lattice of output layer neurons, which comprises the output space such that a meaningful topological ordering exists within the output space. The weight vector (reference vector) associated with each output layer neuron is regarded as an exemplar of the kind of input vector to which the neuron will respond. Let the input vector, X, be defined as

X = [X1, X2, …, Xn]T

and the weight vector, mi corresponding to output layer neuron i can be written as

mi = [wi1, wi2, …, win]T

where n is the dimension of the input pattern and i = 1, 2, …, N.

Determination of the winning output layer neuron amounts to selecting the output layer neuron whose weight vector mi best matches the input vector X. Therefore, the input vector is compared with all the mi to find the smallest Euclidean distance ‖X – mi‖ between vectors. If the best match is at the unit with index c, then c is determined by

\| X - m_c \| = \min_{i} \left\{ \| X - m_i \| \right\}

or

c = \arg \min_{i} \left\{ \| X - m_i \| \right\}

Thus, X is mapped onto the node c relative to the parameter value mi (or simply onto mc).

For the lateral feedback operation, a function is needed that defines the size of the neighborhood surrounding the winning neuron. This function Nc(t) is a function of discrete time (i.e., iteration), as seen in Figure 1.22, where two possible topological neighborhoods are shown. The amount of lateral feedback can be varied over the course of network training. Larger neighborhoods mean more positive feedback, and training takes place at a more global level. It is via a large value of the neighborhood function during the early training of the network that the topological ordering of the network is achieved. Subsequent reduction of the neighborhood then makes the clusters sharper so that the cluster response may be refined. The neighborhood function is utilized to modify the learning process as indicated in Equation 1.5:

FIGURE 1.22. Rectangular and hexagonal topological neighborhoods Nc of cell c. The radius decreases with time (t1 < t2 < t3).

m_{i}(t+1) =
\begin{cases}
m_{i}(t) + \alpha(t)\,\left[ X(t) - m_{i}(t) \right], & i \in N_{c}(t) \\
m_{i}(t), & i \notin N_{c}(t)
\end{cases}
\qquad (1.5)

where α(t) is the learning-rate parameter (0 < α(t) < 1), which decreases with time t.
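A minimal training loop along these lines is sketched below; it assumes a two-dimensional lattice and replaces the shrinking rectangular or hexagonal neighborhood Nc(t) of Figure 1.22 with a Gaussian neighborhood of decreasing radius, a common simplification. Grid size, learning rate, and decay schedules are illustrative.

```python
import numpy as np

def train_som(X, grid=(8, 8), n_iter=2000, alpha0=0.5, sigma0=3.0, seed=0):
    """Train a two-dimensional SOFM on patterns X (n_samples x n_features)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    rows, cols = grid
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    M = rng.uniform(X.min(), X.max(), size=(rows * cols, X.shape[1]))  # weight vectors m_i

    for t in range(n_iter):
        x = X[rng.integers(len(X))]                    # present one training pattern
        c = np.argmin(np.linalg.norm(M - x, axis=1))   # winner: smallest ||X - m_i||
        alpha = alpha0 * (1.0 - t / n_iter)            # decreasing learning rate alpha(t)
        sigma = max(sigma0 * (1.0 - t / n_iter), 0.5)  # shrinking neighborhood radius
        h = np.exp(-np.sum((coords - coords[c]) ** 2, axis=1) / (2.0 * sigma ** 2))
        M += alpha * h[:, None] * (x - M)              # Equation 1.5 with a soft neighborhood
    return M, coords
```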

1.5.2.6. Learning Vector Quantization

Learning vector quantization (Kohonen 1995) is a supervised learning extension of the Kohonen network methods (Kohonen 1982). It allows specification of the categories into which inputs will be classified; the designated categories are known in advance and form part of the training set. The LVQ network architecture has a very similar structure to the SOFM, with the exception that each neuron in the output layer is designated as belonging to one of several classification categories, as shown in Figure 1.23. In general, several output neurons are assigned to each class. The weight vector (codebook vector) of a given output unit represents an exemplar of the input vectors to which it will most strongly respond. When an input pattern, X, is presented to the network, the neuron with the closest (Euclidean distance) codebook vector is declared the winner. The training procedure is similar to that of the SOFM, but only the winning neuron is modified in LVQ.
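The sketch below shows the basic LVQ1 update in this spirit: only the winning codebook vector is modified, moving toward the input when its class label matches the target and away from it otherwise. The initial codebook, learning-rate schedule, and function name are illustrative; in the combined scheme described below, the codebook could be initialized from a trained SOFM.

```python
import numpy as np

def train_lvq1(X, y, codebook, labels, n_iter=2000, alpha0=0.1, seed=0):
    """Fine-tune labeled codebook vectors with the LVQ1 rule.

    X: training patterns (n_samples x n_features); y: their class labels.
    codebook: initial codebook vectors; labels: the class assigned to each.
    """
    X, y = np.asarray(X, float), np.asarray(y)
    M, labels = np.array(codebook, float), np.asarray(labels)
    rng = np.random.default_rng(seed)
    for t in range(n_iter):
        i = rng.integers(len(X))
        x, target = X[i], y[i]
        c = np.argmin(np.linalg.norm(M - x, axis=1))   # winning codebook vector
        alpha = alpha0 * (1.0 - t / n_iter)            # decreasing learning rate
        sign = 1.0 if labels[c] == target else -1.0    # attract if correct, repel if not
        M[c] += sign * alpha * (x - M[c])
    return M, labels
```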

FIGURE 1.23. Learning vector quantizer (LVQ). The figure shows the architecture of the LVQ network.

In our own research we have utilized these concepts to produce a two-stage adaptive classification system, shown in Figure 1.23, where an SOFM acts as a preprocessor and LVQ as a fine-tuning classifier. Vector quantization (VQ) is concerned with how to divide the input space into disjoint subspaces so that each input vector can be represented by the reproduction vector of the subspace to which it belongs, as shown in Figure 1.24. The collection of possible reproduction vectors is called the codebook of the quantizer. In the combined method, the SOFM algorithm provides an approximate method for computing the codebook vectors in an unsupervised manner, with the approximation being specified by the synaptic weight vectors of the neurons in the feature map. The supervised LVQ algorithm then uses class information to move the codebook vectors slightly, so as to improve the quality of the classifier decision regions.

FIGURE 1.24. Adaptive classification system. A simple diagram showing decision boundaries and reproduction vectors for each region.

1.5.2.7. Approaches to Odor Quantification

The quantification of odors or chemicals is a highly desirable capability in practice. For example, the quality of a foodstuff may depend on the concentration of a particular component in a mixture, specific levels of particular chemicals may need to be identified during environmental monitoring, and the concentration of a chemical may be critical in the cosmetic and drink industries. The human nose and tongue are employed in many industries as basic tools for determining concentrations of single chemicals or mixtures, although these senses vary with the condition of the human sensory panel at the time of measurement. Predicting concentration levels of single chemicals or mixtures is much more difficult than classifying different chemicals. The properties of a single chemical or mixture at different concentration levels are similar to each other, and hence the response patterns from the electronic nose system do not show the variation perceived by the human nose. Although each level of a single chemical or mixture can give a unique response pattern, discrimination between these patterns is difficult to achieve.

The neural network approach in pattern recognition enables the automated classification of chemical vapors. However, classification is limited to known classes, where a class label is given as the result of recognition. The widely available multilayer perceptron network based on back-propagation learning may be applied to the problem of predicting concentration levels, where the average network output values are given as the result of recognition for each class. However, these networks suffer from local minima, the global optimum is not always reached during the learning process, and the training process can take a long time, adding an extra burden due to the complexity of weight adjustment. Radial basis function (RBF) networks have attracted interest due to advantages over multilayer perceptrons: they are universal approximators but achieve faster learning due to their simple architecture, and they exhibit none of back-propagation’s training pathologies, such as paralysis or local minima problems. An RBF network is a supervised neural network that is often compared with a three-layer feed-forward back-propagation network due to the similarity in network topology; however, its operation is fundamentally different (Figure 1.25). The back-propagation learning algorithm may be viewed as an optimization method known in statistics as stochastic approximation, whereas the RBF network may be viewed as a curve-fitting (approximation) problem in high-dimensional space. Learning in an RBF network is equivalent to finding a surface in multidimensional space that provides a best fit to the training data, and generalization is equivalent to the use of this multidimensional surface to interpolate the test data. RBF networks are universal approximators, i.e., a network with enough hidden layer neurons can approximate any continuous function with arbitrary accuracy (Girosi and Poggio 1990; Hartman et al. 1990). However, the main aim in designing an RBF network for practical applications is good generalization ability with a minimum number of nodes, to reduce unnecessary calculation and processing time.

FIGURE 1.25. Radial basis function (RBF) network. The architecture of the RBF network consists of an input layer, a hidden layer, and an output layer. The input vector to the network is passed to the hidden layer nodes via unit connection weights.

RBF methods have their origins in techniques that performed exact interpolation of data set points in multidimensional space (Micchelli 1986; Powell 1987). However, those methods were not practical, as the exact interpolation problem requires every input vector to be mapped exactly onto the corresponding target vector. The exact interpolation function is not desired in RBF neural network applications, as it typically forms a highly oscillatory function when there is noise present on the data. The interpolating function that gives the best generalization is one that typically provides much smoother transformation and averages over the noise on the data. In the late 1980s, Broomhead and Lowe (1988) were the first to exploit the use of RBFs in the design of neural networks. Moody and Darken (1989) made a major contribution to the theory, design, and application of the RBF networks. Girosi and Poggio (1990) emphasized the use of regularization theory applied to the neural network as a method for improved generalization of new data. A network with a finite basis was also developed as a natural approximation of the regularization network. In addition, other extensions, such as moving centers, weighted norm, and networks with different types of basis functions and multiple scales, were also considered.

The modifications introduced for the exact interpolation procedure to obtain the RBF neural network model are as follows: the number of basis functions in the neurons need not be equal to the number of data points; the centers of the basis functions are no longer constrained to be given by the input data vectors, but instead, the determination of suitable centers becomes part of the training process; each basis function is given its own width, whose value is also determined during training; and bias parameters are included in the linear sum that compensate for the difference between the average value of the basis function activations and the corresponding average value of the targets.

The RBF method is one of the possible solutions to real multivariate interpolation problems. A nonlinear function Φ(x, z), where x is the independent variable and z is a constant parameter, is called a radial basis function when it depends only on the radial distance r = ‖x – z‖, where z is its “center.” For m given input vectors xi {xi ∈ Rn | i = 1, 2, …, m} in n-dimensional space and m real numbers fi {fi ∈ R | i = 1, 2, …, m}, a function F from Rn to R satisfying the interpolation conditions can be described as F(xi) = fi, i = 1, 2, …, m. The RBF approach consists of choosing the function F to be an expansion of the form

F(x) = \sum_{j=1}^{m} \lambda_{j} \, \Phi\left( \left\| x - z_{j} \right\| \right)

where the centers of the expansion zj = xj must be the known data points, and λj {λj ∈ R | j = 1, 2, …, m} are the corresponding weights.

A variety of approaches for training RBF networks have been developed; most operate in two stages: learning the centers and widths of the hidden layer, and learning the connection weights from the hidden layer to the output layer (Chen et al. 1992; Freeman and Saad 1995, 1996; Moody and Darken 1989). In these learning algorithms, the network structures are predetermined. Learning algorithms that incorporate structure selection mechanisms have also been developed (Poggio and Girosi 1990a, 1990b). Chen et al. (1990, 1991) trained an RBF network using an orthogonal least-squares algorithm, which provided a compromise between network performance and network complexity, with the number of hidden layer nodes determined automatically. A training algorithm proposed by Lee and Kil (1991) was based on a supervised clustering method where learning started with one hidden layer node with a large width; additional nodes were created as needed, causing changes in the associated widths and locations. The work of Musavi et al. (1992) was also based on a clustering method; however, a larger number of nodes were set at initial learning and were then merged during processing, with the associated widths and locations of the nodes updated accordingly. Billings and Zheng (1995) proposed genetic algorithms to produce RBF networks that showed an ability to determine appropriate network structures and parameters automatically according to given objective functions. In addition, they claimed that the network had a lower probability of becoming trapped at structural local minima due to the properties of the genetic algorithm. Automatic construction of an RBF network during the training process would be useful; however, the efficiency of the RBF algorithm may be lost in practical applications. Sherstinsky and Picard (1996) investigated the efficiency of orthogonal least-squares (OLS) training for RBF networks and reported that although the OLS method had been believed to find a more efficient selection of the RBF centers than a random-based approach (Chen et al. 1991), it did not produce the smallest RBF network for a given approximation accuracy. In practice, the centers are often chosen arbitrarily from data points. Such a method cannot guarantee satisfactory performance because it may not satisfy the requirement that centers should suitably sample the input domain. Chen (1995) obtained RBF centers by means of a k-means clustering algorithm, while the network weights were learned using recursive least squares. He also discussed the problems with the conventional k-means clustering algorithm and suggested an enhanced k-means clustering algorithm for the selection of centers.
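A minimal sketch of this widely used two-stage recipe is given below: a simple k-means pass selects a reduced set of Gaussian centers, a spacing-based heuristic sets a common width, and the linear output weights (including a bias term) are then obtained by least squares. The function names, the width heuristic, and the use of batch least squares instead of recursive least squares are illustrative assumptions, not a reconstruction of any specific algorithm cited above.

```python
import numpy as np

def train_rbf(X, T, n_centers=4, n_kmeans_iter=50, seed=0):
    """Two-stage RBF training: k-means centers, then least-squares weights.

    Returns a function that predicts targets for new input patterns."""
    X, T = np.asarray(X, float), np.asarray(T, float)
    rng = np.random.default_rng(seed)

    # Stage 1: choose the centers with a simple k-means pass
    centers = X[rng.choice(len(X), n_centers, replace=False)].copy()
    for _ in range(n_kmeans_iter):
        assign = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_centers):
            if np.any(assign == k):
                centers[k] = X[assign == k].mean(axis=0)

    # Common Gaussian width from the average spacing between centers (heuristic)
    d = np.linalg.norm(centers[:, None] - centers[None], axis=2)
    width = d[d > 0].mean() / np.sqrt(2.0 * n_centers) + 1e-6

    def design(Xq):
        r2 = ((Xq[:, None, :] - centers[None]) ** 2).sum(-1)
        Phi = np.exp(-r2 / (2.0 * width ** 2))
        return np.hstack([Phi, np.ones((len(Xq), 1))])   # bias column

    # Stage 2: linear output weights by least squares
    W, *_ = np.linalg.lstsq(design(X), T, rcond=None)

    def predict(Xq):
        return design(np.asarray(Xq, float)) @ W
    return predict
```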

In our laboratories we have tested many of these algorithms for training and testing RBF networks. If an RBF network is trained with a random initialization, the optimum result is not always guaranteed, so training may have to be repeated several times before an acceptable recognition level is reached. In addition, the result cannot be reproduced unless the same initial random values are used. Fuzzy c-means algorithms (FCMAs) for locating and initializing centers failed to find optimum centers, especially for higher-dimensional data. The performance of an RBF network can be improved by employing extra centers for each class. The objective in practical applications, where a minimum number of nodes is desirable, is to select centers that represent the training data set well. When the input data have a high dimension, the information becomes more complex, and one center may not be enough to represent the whole class. In the process of assigning multiple centers to a class in the RBF application, optimization with fuzzy c-means algorithms cannot be used, as they were designed to choose a single optimum cluster center. However, optimization with a learning vector quantization (LVQ) algorithm was shown to be more effective than any of the FCMA techniques. In addition, the iteration time for the LVQ method was several orders of magnitude shorter than for any of the FCMA methods, especially in the higher-dimensional cases. The combination of the LVQ algorithm and the RBF network was very effective and provided improved results over other methods. The performance of the RBF network depended on the number of centers used for each class. Generally, a better recognition result may be achieved with a higher number of centers, although this also depends on the application and input data. However, it is desirable to keep the number of centers as small as possible, since the computation required during training is directly related to the complexity of the network. When a large number of centers is assigned to each class, the trained network will also have a large number of related parameters, and consequently a large weight file has to be stored for recognition sessions.

We tested such classifiers with mixtures of two alcohols, methanol and ethanol, with the compositions shown in Table 1.2. In experiments with seven methanol and ethanol mixtures, four centers were identified as the ideal number for each class after considering the computational burden, iteration time, and performance. The RBF application could successfully discriminate between the seven mixtures, and could recognize individual mixtures at the correct quantifying target levels. During the experiments, 4 optimized centers were chosen from each training set, which had 11 patterns, to train the RBF network. Consequently, the 7 testing data sets, consisting of 33 patterns each, were all predicted to the corresponding quantifying targets. The capability of the RBF network was further examined by investigating its response to previously unseen data. Training of the network was carried out with the same training sets and target values as in Table 1.2, but the 1:2 mixture data set was omitted from the training sets. Figure 1.26 shows the results, which indicate that the network, although not perfect, was able to interpolate unknown values.

TABLE 1.2. Input Data Preparation for the RBF Network Application for the Quantification of Methanol and Ethanol Mixtures in Fixed Ratios

FIGURE 1.26. The figure shows the output display of the RBF application when previously unseen patterns obtained from seven mixtures of analytes were presented. The training data files listed in Table 1.2 were used, with the exception of mixture class 3.

1.6. TOWARD BIOLOGICALLY INSPIRED ARTIFICIAL OLFACTION

All animals continually adapt to changing environments, and plasticity is a characteristic of odor-mediated behavior. This can range from alterations in levels of responsiveness such as sensitization and habituation to more complex forms of associative learning. Computational models of the olfactory system have been developed from the point of view of understanding how the system processes sensory information, and some of these concepts may be applied to artificial olfaction (Davis and Eichenbaum 1991). As described in Section 1.4.2, distributed patterns of activity in response to chemical stimuli are transmitted to the olfactory bulb via olfactory neuron axons that terminate in the glomeruli of its input layer. Electrophysiological measurements indicate that the OB filters and transforms these incoming sensory data, so that, for example, signal-to-noise enhancement, normalization, and contrast enhancement operations occur before the processed olfactory information diverges to several different secondary olfactory structures. Many computational models attempt to combine anatomy with function. Signals from ORNs are sent directly to the OB or AL, where they are further processed. OB and AL circuits contain two broad classes of neurons (excitatory projection cells and, for the most part, inhibitory local neurons). Because the mitral and tufted (M/T) cells in mammals have one primary dendrite within one glomerulus or a few glomeruli, and because inhibitory neurons (granule cells) contact nearby M/T cells through their secondary dendrites, this connectivity is often interpreted as underlying a form of lateral inhibition that sharpens M/T cell tuning, conceptually similar to the Kohonen self-organizing maps described earlier. Schild (1988) elegantly brought together many essential concepts of the olfactory system from a biophysical basis and described a practical model of the stimulus responses of all receptor cells by the use of vector spaces. In this case the morphological convergence pattern between receptor cells and glomeruli is given in the same vector space as the receptor cell activities. It is concluded that sets of mitral cells encoding similar odors work very much in the manner of mutually inhibited matched filters. The approach is relatively easy to implement in terms of artificial nose concepts, using the self-organizing map principles described earlier. Laurent (1999) approached olfactory coding from a systems point of view, considering that information is encoded by assemblies of neurons and that inhibition regulates the global dynamics of these assemblies. Again, while differing in detail, conceptually these ideas converge to similar computational concepts.

Cleland and Linster (2005) discuss in detail many of the challenging computational aspects of the olfactory system and methods that are currently used to model these functions. Without going into detail, several computational models of the OB have suggested that the temporal pattern of spiking among mitral cells may play a role in odor representation. Spiking models have been reviewed and investigated (Brette et al. 2007) and appropriate simulation tools have been selected. Dynamic oscillatory activity patterns are observed in the bulb in response to odor stimulation, and it has been suggested that odor quality may be encoded in terms of dynamic attractors formed in the OB. Models of these phenomena focus on coupled oscillator circuits with feedback occurring between different connections. The dynamics of the olfactory bulb are also tightly coupled to mutual feedback from the higher processing centers in the piriform cortex. Although some models exist that would allow for associative memory to exist in the olfactory bulb so that certain patterns associated with a range of odors become embedded in the circuitry of the olfactory bulb, most models attribute associative memory functions to the piriform cortex. This area of the brain has afferent inputs from the mitral cells of the olfactory bulb, and the neural circuitry is thought to mediate context-dependent learning of odor patterns.

Kauer and White (Kauer and White 1998, 2003; White and Kauer 1999) used a fiber-optic array of sensors and designed a spiking neuron model of the peripheral olfactory system to process signals. In their model, the response of each sensor is converted into a pattern of spikes across a population of ORNs, which then projects to a unique mitral cell. Different odors produce unique spatiotemporal activation patterns across mitral cells, which are then discriminated with a delay line neural network (DLNN). Their model encodes odor quality by the spatial activity across units and odor intensity by the response latency of the units.

Many researchers started out modeling the Hodgkin-Huxley neuron computationally. Such a model is able to reproduce the shape of the action potential of a neuron accurately by taking into account the ionic currents involved, but it is computationally expensive. Several other, computationally more efficient models have evolved, and Izhikevich (2003, 2004, 2010) developed a simple model for an artificial neuron that is capable of simulating almost all the firing behaviors of a neuron, but is computationally about a hundred times faster than simulation of the Hodgkin-Huxley neuron. It consists of two differential equations with four parameters and accounts for the membrane potential of the neuron, the activation of potassium ion currents, and the inactivation of sodium ion currents, together with other bias currents. These various models underline the fact that the morphological and biophysical properties of OB neurons and their connections define their computational capabilities.
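For illustration, a minimal simulation of the Izhikevich two-variable model is sketched below, using simple Euler integration; the parameter values shown are those commonly quoted for regular-spiking behavior, and the function name, time step, and input current are illustrative choices.

```python
import numpy as np

def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.25):
    """Simulate the Izhikevich (2003) neuron:
        v' = 0.04 v^2 + 5 v + 140 - u + I
        u' = a (b v - u)
    with the reset v <- c, u <- u + d whenever v reaches 30 mV.
    I gives the input current at each time step; returns the membrane potential trace."""
    v, u = c, b * c
    v_trace = np.empty(len(I))
    for t, i_t in enumerate(I):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: record the peak and reset
            v_trace[t] = 30.0
            v, u = c, u + d
        else:
            v_trace[t] = v
    return v_trace

# Example: constant input current applied for 4000 steps of 0.25 ms (1 s)
trace = izhikevich(np.full(4000, 10.0))
```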

This concept has been used to implement a cortical-based artificial neural network (CANN) architecture design inspired by the anatomical structure found in the mammalian cortex (Pioggia et al. 2008). A single CANN of 1000 artificial neurons consisting of 200 inhibitory neurons and 800 excitatory neurons was implemented. This uses a principle of spike timing synchrony that allows a neuron to be activated in correspondence with synchronous input spikes. The researchers used an electronic nose based on an array of conducting polymers together with a composite array-based e-tongue to measure 60 samples of olive oil, correlating these with human panel descriptors. They tested the performance of the cortical-based artificial neural network, a multilayer perceptron, a Kohonen self-organizing map, and a fuzzy Kohonen self-organizing map, and demonstrated that the CANN outperformed the other techniques in terms of its classification and generalization capability.

Raman et al. (2006b) adopted a chemotopic convergence model, in which they considered that the signals from many olfactory neurons converge onto a few neighboring glomerular (GL) cells. An ORN is characterized by a vector of log affinities to odorants or molecules in a given chemical problem space. A self-organizing model of convergence based on the Kohonen map described previously was constructed whereby a large number of inputs from ORNs are projected onto a two-dimensional lattice of GL cells. They assume that ORNs converge onto given GL cells on the basis of their selectivity to different molecules. Simulation experiments with this model using a large number of sensory inputs as well as data from a gas sensor array indicate that each analyte evokes a unique glomerular image with higher activity in those GL cells that receive projections from ORNs with high affinity to that analyte. If the analyte concentration increases, additional ORNs with lower affinity are recruited, resulting in an increased activation level and a larger spread of the analyte-specific loci. They found that if the receptive field was too broad, or conversely too narrow, separability between odorant classes deteriorated. In this work, a homogeneous field width was applied, but Alkasab et al. (1999, 2002) indicate that maximum mutual information is achieved with a heterogeneous population of receptors/sensors. Hence, sensors displaying a nonuniform distribution of receptive field widths may produce rather different characteristics when combined in an array.

It is now possible to integrate many olfactory concepts into electronic hardware. For example, a proposed neuron-synapse integrated circuit (IC) chip set (Horio et al. 2003) makes it possible to construct a scalable and reconfigurable large-scale chaotic neural network with 10,000 neurons and 10,000² synaptic connections. Bioinspired and neuromorphic designs are emerging. Covington et al. (2007) describe an artificial olfactory mucosa. Koickal et al. (2006, 2007) describe an analog very large scale integration (VLSI) implementation of an adaptive neuromorphic olfaction chip. Guerrero-Rivera and Pearce (2007) described attractor-based pattern classification in a field programmable gate array (FPGA) implementation of the olfactory bulb. It is now also possible to make hybrid systems that incorporate olfactory receptor cells, and a bioelectronic nose based on olfactory sensory neural networks in a culture of cells has been described (Liu et al. 2010a, 2010b; Wu et al. 2009). With advances in gene expression, Song and coworkers (2009) reported the expression of a human olfactory receptor in Escherichia coli bacteria.

This chapter has attempted to paint with a broad brush the breadth and marvels of chemosensory systems, and our puny attempts to emulate what nature does so well. By painstakingly breaking down the complex tasks associated with olfaction, we are surely making progress, not just in the artificial olfaction field, but toward understanding brain function and cognition.

ACKNOWLEDGMENT

The results presented on radial basis function neural networks were obtained as part of PhD research by Dr. Dong-Hung Lee, Manchester, UK.

REFERENCES

  1. Abe H, Kanaya S, Takahashi Y, Sasaki S.I. Extended studies of the automated odor-sensing system based on plural semiconductor gas sensors with computerized pattern-recognition techniques. Analytica Chimica Acta. 1988;215(1–2):155–168.
  2. Abe H, Yoshimura T, Kanaya S, Takahashi Y, Miyashita Y, Sasaki S.I. Automated odor-sensing system based on plural semiconductor gas sensors and computerized pattern-recognition techniques. Analytica Chimica Acta. 1987;194:1–9.
  3. Ache B.W, Young J.M. Olfaction: Diverse species, conserved principles. Neuron. 2005;48(3):417–430. [PubMed: 16269360]
  4. Alkasab T.K, Bozza T.C, Cleland T.A, Dorries K.M, Pearce T.C, White J, Kauer J.S. Characterizing complex chemosensors: Information-theoretic analysis of olfactory systems. Trends in Neurosciences. 1999;22(3):102–108. [PubMed: 10199633]
  5. Alkasab T.K, White J, Kauer J.S. A computational system for simulating and analyzing arrays of biological and artificial chemical sensors. Chemical Senses. 2002;27(3):261–275. [PubMed: 11923188]
  6. Amoore J.E. The stereochemical theory of olfaction. 1. Identification of the seven primary odours. Proceedings of the Scientific Section, Toilet Goods Association. 1962a;37 Suppl.:1–12.
  7. Amoore J.E. The stereochemical theory of olfaction. 2. Elucidation of the stereochemical properties of the olfactory receptor sites. Proceedings of the Scientific Section, Toilet Goods Association. 1962b;37 Suppl.:13–23.
  8. Amoore J.E. Specific anosmia: A clue to the olfactory code. Nature. 1967;214:1095–1098. [PubMed: 4861233]
  9. Araneda R.C, Kini A.D, Firestein S. The molecular receptive range of an odorant receptor. Nature Neuroscience. 2000;3(12):1248–1255. [PubMed: 11100145]
  10. Arshak K, Moore E, Lyons G.M, Harris J, Clifford S. A review of gas sensors employed in electronic nose applications. Sensor Review. 2012;24(2):181–198.
  11. Beets M.J.G. Structure-activity relationships in human chemoreception. London: Applied Science Publishers Ltd.; 1978.
  12. Benton R. Eppendorf winner: Evolution and revolution in odor detection. Science. 2009;326(5951):382–383. [PubMed: 19833953]
  13. Billings S.A, Zheng G.L. Radial basis function network configuration using genetic algorithms. Neural Networks. 1995;8(6):877–890.
  14. Boelens H. Relationship between the chemical structure of compounds and their olfactive properties. Cosmetics and Perfumery. 1974;89:1–7.
  15. Bona E, dos Santos Ferreira da Silva R.S, Borsato D, Bassoli D.G. Self-organizing maps as a chemometric tool for aromatic pattern recognition of soluble coffee. Acta Scientiarum—Technology. 2012;34(1):111–119.
  16. Bott B, Jones T.A. The use of multisensor systems in monitoring hazardous atmospheres. Sensors and Actuators. 1986;9(1):19–25.
  17. Brette R, Rudolph M, Carnevale T, Hines M, Beeman D, Bower J.M, Diesmann M, Morrison A, Goodman P.H, Harris F.C, Zirpe M, Natschlaeger T, Pecevski D, Ermentrout B, Djurfeldt M, Lansner A, Rochel O, Vieville T, Muller E, Davison A.P, El Boustani S, Destexhe A. Simulation of networks of spiking neurons: A review of tools and strategies. Journal of Computational Neuroscience. 2007;23(3):349–398. [PMC free article: PMC2638500] [PubMed: 17629781]
  18. Broomhead D, Lowe D. Multivariable functional interpolation and adaptive network. Complex Systems. 1988;2:321–355.
  19. Buck L, Axel R. A novel multigene family may encode odorant receptors—A molecular-basis for odor recognition. Cell. 1991;65(1):175–187. [PubMed: 1840504]
  20. Buck L.B. Information coding in the olfactory system. Journal of Neurochemistry. 1997a;69(S210)
  21. Buck L.B. Molecular mechanisms of odor and pheromone detection in mammals. Molecular Biology of the Cell. 1997b;8(739)
  22. Buck T, Allen F, Dalton M. Detection of chemical species by surface effects on metals and semiconductors. In: Bregman J.I, Dravnieks A, editors. Surface effects in detection. Washington, DC: Spartan Books; 1965. pp. 1–27.
  23. Callegari P, Rouault J, Laffort P. Olfactory quality: From descriptor profiles to similarities. Chemical Senses. 1997;22(1):1–8. [PubMed: 9056081]
  24. Chen S. Nonlinear time series modelling and prediction using Gaussian RBF networks with enhanced clustering and RLS learning. Electronics Letters. 1995;31(5):117–118.
  25. Chen S, Billings S.A, Cowan C.F.N, Grant P.M. Practical identification of Narmax models using radial basis functions. International Journal of Control. 1990;52(6):1327–1350.
  26. Chen S, Billings S.A, Grant P.M. Recursive hybrid algorithm for nonlinear-system identification using radial basis function networks. International Journal of Control. 1992;55(5):1051–1070.
  27. Chen S, Cowan C.F.N, Grant P.M. Orthogonal least squares learning algorithm for radial basis function networks. IEEE Transactions on Neural Networks. 1991;2(2):302–309. [PubMed: 18276384]
  28. Chess A, Buck L, Dowling M.M, Axel R, Ngai J. Molecular-biology of smell—Expression of the multigene family encoding putative odorant receptors. Cold Spring Harbor Symposia on Quantitative Biology. 1992;57:505–516. [PubMed: 1339687]
  29. Cleland T.A, Linster C. Computation in the olfactory system. Chemical Senses. 2005;30(9):801–813. [PubMed: 16267161]
  30. Clyne P.J, Warr C.G, Freeman M.R, Lessing D, Kim J.H, Carlson J.R. A novel family of divergent seven-transmembrane proteins: Candidate odorant receptors in Drosophila. Neuron. 1999;22(2):327–338. [PubMed: 10069338]
  31. Covington J.A, Gardner J.W, Hamilton A, Pearce T.C, Tan S.L. Towards a truly biomimetic olfactory microsystem: An artificial olfactory mucosa. IET Nanobiotechnology. 2007;1(2):15–21. [PubMed: 17428120]
  32. Crasto C.J. Computational biology of olfactory receptors. Current Bioinformatics. 2009;4(1):8–15. [PMC free article: PMC3187719] [PubMed: 21984880]
  33. Davis J.L, Eichenbaum H. Olfaction: A model system for computational neuroscience. Cambridge, MA: MIT Press; 1991.
  34. Di Natale C, Davide F, Damico A. Pattern-recognition in gas-sensing—Well-stated techniques and advances. Sensors and Actuators B—Chemical. 1995a;23(2–3):111–118.
  35. Di Natale C, Davide F.A.M, Damico A. A self-organizing system for pattern-classification—Time-varying statistics and sensor drift effects. Sensors and Actuators B—Chemical. 1995b;27(1–3):237–241.
  36. Distante C, Anglani A, Siciliano P, Vasanelli L. mSOM: A SOM based architecture to track dynamic clusters. Sensors and Microsystems. 2000:270–273.
  37. Distante C, Siciliano P, Persaud K.C. Dynamic cluster recognition with multiple self-organising maps. Pattern Analysis and Applications. 2002;5(3):306–315.
  38. Dravnieks A, Trotter P.J. Polar vapour detection based on thermal modulation of contact potential. Journal of Scientific Instruments. 1965;42:624.
  39. Duchamp-Viret P, Duchamp A, Vigouroux M. Amplifying role of convergence in olfactory system: A comparative study of receptor cell and second-order neuron sensitivities. Journal of Neurophysiology. 1989;61(5):1085–1094. [PubMed: 2723731]
  40. Ema K, Yokoyama M, Nakamoto T, Moriizumi T. Odor-sensing system using a quartz-resonator sensor array and neural-network pattern-recognition. Sensors and Actuators. 1989;18(3–4):291–296.
  41. Firestein S. How the olfactory system makes sense of scents. Nature. 2001;413(6852):211–218. [PubMed: 11557990]
  42. Foret S, Maleszka R. Function and evolution of a gene family encoding odorant binding-like proteins in a social insect, the honey bee (Apis mellifera). Genome Research. 2006;16(11):1404–1413. [PMC free article: PMC1626642] [PubMed: 17065610]
  43. Freeman J.A.S, Saad D. Learning and generalization in radial basis function networks. Neural Computation. 1995;7(5):1000–1020. [PubMed: 7584888]
  44. Freeman J.A.S, Saad D. Radial basis function networks: Generalization in over-realizable and unrealizable scenarios. Neural Networks. 1996;9(9):1521–1529. [PubMed: 12662550]
  45. Gall M, Müller R. Investigation of gas-mixtures with different MOS gas sensors with regard to pattern-recognition. Sensors and Actuators. 1989;17(3–4):583–586.
  46. Gao Q, Chess A. Identification of candidate Drosophila olfactory receptors from genomic DNA sequence. Genomics. 1999;60(1):31–39. [PubMed: 10458908]
  47. Garcia A, Aleixandre A, Gutierrez J, Horrillo M.C. Electronic nose for wine discrimination. Sensors and Actuators B—Chemical. 2006;113(2):911–916.
  48. Gardner J.W. Detection of vapors and odors from a multisensor array using pattern-recognition. 1. Principal component and cluster-analysis. Sensors and Actuators B—Chemical. 1991;4(1–2):109–115.
  49. Gardner J.W. Intelligent gas-sensing using an integrated sensor pair. Sensors and Actuators B—Chemical. 1995;27(1–3):261–266.
  50. Gardner J.W, Hines E.L, Tang H.C. Detection of vapors and odors form a multisensor array using pattern-recognition techniques. 2. Artificial neural networks. Sensors and Actuators B—Chemical. 1992a;9(1):9–15.
  51. Gardner J.W, Hines E.L, Wilkinson M. Application of artificial neural networks to an electronic olfactory system. Measurement Science and Technology. 1990;1(5):446–451.
  52. Gardner J.W, Pearce T.C, Friel S, Bartlett P.N, Blair N. A multisensor system for beer flavor monitoring using an array of conducting polymers and predictive classifiers. Sensors and Actuators B—Chemical. 1994;18(1–3):240–243.
  53. Gardner J.W, Shurmer H.V, Tan T.T. Application of an electronic nose to the discrimination of coffees. Sensors and Actuators B—Chemical. 1992b;6(1–3):71–75.
  54. Gelis L, Wolf S, Hatt H, Neuhaus E.M, Gerwert K. Prediction of a ligand-binding niche within a human olfactory receptor by combining site-directed mutagenesis with dynamic homology modeling. Angewandte Chemie—International Edition. 2012;51(5):1274–1278. [PubMed: 22144177]
  55. GholamHosseini H, Luo D, Liu H, Xu G. Intelligent processing of E-nose information for fish freshness assessment. Proceedings of the 2007 International Conference on Intelligent Sensors, Sensor Networks and Information Processing. 2007:173–177.
  56. Girosi F, Poggio T. Networks and the best approximation property. Biological Cybernetics. 1990;63(3):169–176.
  57. Guerrero-Rivera R, Pearce T.C. Attractor-based pattern classification in a spiking FPGA implementation of the olfactory bulb. In: Proceedings of the 3rd International IEEE/EMBS Conference on Neural Engineering (CNE ’07). 2007. DOI: 10.1109/CNE.2007.369742.
  58. Guo S, Kim J. Dissecting the molecular mechanism of Drosophila odorant receptors through activity modeling and comparative analysis. Proteins—Structure Function and Bioinformatics. 2010;78(2):381–399. [PubMed: 19714770]
  59. Hansch C, Fujita T. Rho-sigma-pi analysis. Method for correlation of biological activity + chemical structure. Journal of the American Chemical Society. 1964;86(8):1616.
  60. Hartman E.J, Keeler J.D, Kowalski J.M. Layered neural networks with Gaussian hidden units as universal approximations. Neural Computation. 1990;2(2):210–215.
  61. Hekmat-Scafe D.S, Scafe C.R, McKinney A.J, Tanouye M.A. Genome-wide analysis of the odorant-binding protein gene family in Drosophila melanogaster. Genome Research. 2002;12(9):1357–1369. [PMC free article: PMC186648] [PubMed: 12213773]
  62. Hildebrand J.G. From molecule to perception: Five hundred million years of olfaction. Biology International. 2001;41:41–52.
  63. Hildebrand J.G, Shepherd G.M. Mechanisms of olfactory discrimination: Converging evidence for common principles across phyla. Annual Review of Neuroscience. 1997;20:595–631. [PubMed: 9056726]
  64. Hines E.L, Gardner J.W. An artificial neural emulator for an odor sensor array. Sensors and Actuators B—Chemical. 1994;19(1–3):661–664.
  65. Hoare M.J.G, Humble D, Jin J, Gilding N, Petersen R, Cobb M, McCrohan C. Modeling peripheral olfactory coding in Drosophila larvae. PLoS One. 2011;6(8) [PMC free article: PMC3153476] [PubMed: 21857978]
  66. Horio Y, Aihara K, Yamamoto O. Neuron-synapse IC chip-set for large-scale chaotic neural networks. IEEE Transactions on Neural Networks. 2003;14(5):1393–1404. [PubMed: 18244585]
  67. Imai T, Sakano H, Vosshall L.B. Topographic mapping—The olfactory system. Cold Spring Harbor Perspectives in Biology. 2010;2(8) [PMC free article: PMC2908763] [PubMed: 20554703]
  68. Izhikevich E.M. Simple model of spiking neurons. IEEE Transactions on Neural Networks. 2003;14(6):1569–1572. [PubMed: 18244602]
  69. Izhikevich E.M. Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks. 2004;15(5):1063–1070. [PubMed: 15484883]
  70. Izhikevich E.M. Hybrid spiking models. Philosophical Transactions of the Royal Society A—Mathematical Physical and Engineering Sciences. 2010;368(1930):5061–5070. [PubMed: 20921012]
  71. Kaissling K.E. Olfactory perireceptor and receptor events in moths: A kinetic model revised. Journal of Comparative Physiology A—Neuroethology Sensory Neural and Behavioral Physiology. 2009;195(10):895–922. [PMC free article: PMC2749182] [PubMed: 19697043]
  72. Kaissling K.E, Rospars J.P. Dose-response relationships in an olfactory flux detector model revisited. Chemical Senses. 2004;29(6):529–531. [PubMed: 15269125]
  73. Kaneyasu M, Ikegami A, Arima H, Iwanaga S. Smell identification using a thick-film hybrid gas sensor. IEEE Transactions on Components Hybrids and Manufacturing Technology. 1987;10(2):267–273.
  74. Kangas J.A, Kohonen T.K, Laaksonen J.T. Variants of self-organizing maps. IEEE Transactions on Neural Networks. 1990;1(1):93–99. [PubMed: 18282826]
  75. Karhunen J, Joutsensalo J. Generalizations of principal component analysis, optimization problems, and neural networks. Neural Networks. 1995;8(4):549–562.
  76. Kauer J, White J. A portable artificial nose based on olfactory principles. Society for Neuroscience Abstracts. 1998;24(1–2):652.
  77. Kauer J.S, White J. Imaging and coding in the olfactory system. Annual Review of Neuroscience. 2001;24:963–979. [PubMed: 11520924]
  78. Kauer J.S, White J. Representation of odor information in the olfactory system: From biology to an artificial nose. Sensors and Sensing in Biology and Engineering. 2003:305–322.
  79. Kay L.M, Stopfer M. Information processing in the olfactory systems of insects and vertebrates. Seminars in Cell and Developmental Biology. 2006;17:433–442. [PubMed: 16766212]
  80. Kim M.S, Repp A, Smith D.P. LUSH odorant-binding protein mediates chemosensory responses to alcohols in Drosophila melanogaster. Genetics. 1998;150(2):711–721. [PMC free article: PMC1460366] [PubMed: 9755202]
  81. Kodogiannis V.S, Lygouras J.N, Tarczynski A, Chowdrey H.S. Artificial odor discrimination system using electronic nose and neural networks for the identification of urinary tract infection. IEEE Transactions on Information Technology in Biomedicine. 2008;12(6):707–713. [PubMed: 19000949]
  82. Kohonen T. Self-organized formation of topologically correct feature maps. Biological Cybernetics. 1982;43(1):59–69.
  83. Kohonen T. Learning vector quantization. In: Arbib M.A, editor. The handbook of brain theory and neural networks. Cambridge, MA: MIT Press; 1995. pp. 537–540.
  84. Koickal T.J, Hamilton A, Pearce T.C, Tan S.L, Covington J.A, Gardner J.W. Analog VLSI design of an adaptive neuromorphic chip for olfactory systems. 2006
  85. Koickal T.J, Hamilton A, Tan S.L, Covington J.A, Gardner J.W, Pearce T.C. Analog VLSI circuit implementation of an adaptive neuromorphic olfaction chip. IEEE Transactions on Circuits and Systems I—Regular Papers. 2007;54(1):60–73.
  86. Korsching S.I. Odor maps in the brain: Spatial aspects of odor representation in sensory surface and olfactory bulb. Cellular and Molecular Life Sciences. 2001;58(4):520–530. [PubMed: 11361087]
  87. Kowalski B.R, Bender C.F. Pattern-recognition—Powerful approach to interpreting chemical data. Journal of the American Chemical Society. 1972;94(16):5632.
  88. Kowalski B.R, Bender C.F. Pattern-recognition. 2. Linear and nonlinear methods for displaying chemical data. Journal of the American Chemical Society. 1973;95(3):686–693.
  89. Laurent G. A systems perspective on early olfactory coding. Science. 1999;286(5440):723–728. [PubMed: 10531051]
  90. Lee S, Kil R.M. A Gaussian potential function network with hierarchically self-organizing learning. Neural Networks. 1991;4(2):207–224.
  91. Leinwand S.G, Chalasani S.H. Olfactory networks: From sensation to perception. Current Opinion in Genetics and Development. 2011;21(6):806–811. [PubMed: 21889328]
  92. Liu Q, Ye W, Xiao L, Du L, Hu N, Wang P. Extracellular potentials recording in intact olfactory epithelium by microelectrode array for a bioelectronic nose. Biosensors and Bioelectronics. 2010b;25(10):2212–2217. [PubMed: 20356727]
  93. Liu Q.J, Ye W.W, Hu N, Cai H, Yu H, Wang P. Olfactory receptor cells respond to odors in a tissue and semiconductor hybrid neuron chip. Biosensors and Bioelectronics. 2010a;26(4):1672–1678. [PubMed: 20943368]
  94. Lozano J, Arroyo T, Santos J, Cabellos J, Horrillo M. Electronic nose for wine ageing detection. Sensors and Actuators B—Chemical. 2008;133(1):180–186.
  95. Malnic B, Hirono J, Sato T, Buck L.B. Combinatorial receptor codes for odors. Cell. 1999;96(5):713–723. [PubMed: 10089886]
  96. Martin J.P, Beyerlein A, Dacks A.M, Reisenman C.E, Riffell J.A, Lei H, Hildebrand J.G. The neurobiology of insect olfaction: Sensory processing in a comparative context. Progress in Neurobiology. 2011;95(3):427–447. [PubMed: 21963552]
  97. Micchelli C.A. Interpolation of scattered data—Distance matrices and conditionally positive definite functions. Constructive Approximation. 1986;2(1):11–22.
  98. Minor A.V, Kaissling K.E. Cell responses to single pheromone molecules may reflect the activation kinetics of olfactory receptor molecules. Journal of Comparative Physiology A: Neuroethology Sensory Neural and Behavioral Physiology. 2003;189(3):221–230. [PubMed: 12664098]
  99. Moens M, Smet A, Naudts B, Verhoeven J, Ieven M, Jorens P, Geise H.J, Blockhuys F. Fast identification of ten clinically important micro-organisms using an electronic nose. Letters in Applied Microbiology. 2006;42(2):121–126. [PubMed: 16441375]
  100. Mombaerts P, Wang F, Dulac C, Chao S.K, Nemes A, Mendelsohn M, Edmondson J, Axel R. Visualizing an olfactory sensory map. Cell. 1996b;87(4):675–686. [PubMed: 8929536]
  101. Mombaerts P, Wang F, Dulac C, Vassar R, Chao S.K, Nemes A, Mendelsohn M, Edmondson J, Axel R. The molecular biology of olfactory perception. Cold Spring Harbor Symposia on Quantitative Biology. 1996a;61:135–145. [PubMed: 9246442]
  102. Moncrieff R. Instrument for measuring and classifying odors. Journal of Applied Physiology. 1961;16(4):742. [PubMed: 13771984]
  103. Moody J, Darken C.J. Fast learning in networks of locally-tuned processing units. Neural Computation. 1989;1(2):281–294.
  104. Mori K, Nagao H, Yoshihara Y. The olfactory bulb: Coding and processing of odor molecule information. Science. 1999;286(5440):711–715. [PubMed: 10531048]
  105. Moriizumi T, Nakamoto T, Sakuraba Y. Pattern recognition in electronic noses by artificial neural network methods. In: Gardner J.W, Bartlett P.N, editors. Sensors and sensory systems for an electronic nose. NATO ASI Series E, Vol. 212. Dordrecht, The Netherlands: Kluwer Academic Publishers; 1991. pp. 217–236.
  106. Müller R, Lange E. Multidimensional sensor for gas-analysis. Sensors and Actuators. 1986;9(1):39–48.
  107. Murlis J, Elkinton J.S, Carde R.T. Odor plumes and how insects use them. Annual Review of Entomology. 1992;37:505–532.
  108. Murlis J, Jones C.D. Fine-scale structure of odor plumes in relation to insect orientation to distant pheromone and other attractant sources. Physiological Entomology. 1981;6(1):71–86.
  109. Musavi M.T, Ahmed W, Chan K.H, Faris K.B, Hummels D.M. On the training of radial basis function classifiers. Neural Networks. 1992;5(4):595–603.
  110. Nakamoto T, Fukuda A, Moriizumi T, Asakura Y. Improvement of identification capability in an odor-sensing system. Sensors and Actuators B—Chemical. 1991;3(3):221–226.
  111. Nakamoto T, Fukunishi K, Moriizumi T. Identification capability of odor sensor using quartz-resonator array and neural-network pattern-recognition. Sensors and Actuators B—Chemical. 1990;1(1–6):473–476.
  112. Pelosi P. The role of perireceptor events in vertebrate olfaction. Cellular and Molecular Life Sciences. 2001;58(4):503–509. [PubMed: 11361085]
  113. Pelosi P, Maida R. Odorant-binding proteins in vertebrates and insects: Similarities and possible common function. Chemical Senses. 1990;15(2):205–215.
  114. Pelosi P, Maida R. The physiological functions of odorant-binding proteins. Biophysics (English translation of Biofizika). 1995;40(1):143–151.
  115. Penza M, Cassano G. Application of principal component analysis and artificial neural networks to recognize the individual VOCs of methanol/2-propanol in a binary mixture by SAW multi-sensor array. Sensors and Actuators B—Chemical. 2003;89(3):269–284.
  116. Penza M, Cassano G, Tortorella F. Identification and quantification of individual volatile organic compounds in a binary mixture by SAW multisensor array and pattern recognition analysis. Measurement Science and Technology. 2002;13(6):846–858.
  117. Perez-Orive J, Mazor O, Turner G.C, Cassenaer S, Wilson R.I, Laurent G. Oscillations and sparsening of odor representations in the mushroom body. Science. 2002;297:359–365. [PubMed: 12130775]
  118. Persaud K.C. Polymers for chemical sensing. Materials Today. 2005;8(4):38–44.
  119. Persaud K, Dodd G. Analysis of discrimination mechanisms in the mammalian olfactory system using a model nose. Nature. 1982;299(5881):352–355. [PubMed: 7110356]
  120. Persaud K.C, Pelosi P. An approach to an artificial nose. Transactions of the American Society for Artificial Internal Organs. 1985;31:297–300. [PubMed: 3837460]
  121. Persaud K.C, Pelosi P. Sensor arrays using conducting polymers for an artificial nose. In: Gardner J.W, Bartlett P.N, editors. Sensors and sensory systems for an electronic nose. Berlin: Springer-Verlag; 1992. pp. 237–256.
  122. Pilpel Y, Lancet D. The variable and conserved interfaces of modeled olfactory receptor proteins. Protein Science. 1999;8(5):969–977. [PMC free article: PMC2144322] [PubMed: 10338007]
  123. Pioggia G, Ferro M, Di Francesco F, Ahluwalia A, De Rossi D. Assessment of bioinspired models for pattern recognition in biomimetic systems. Bioinspiration and Biomimetics. 2008;3(1) [PubMed: 18364563]
  124. Poggio T, Girosi F. Networks for approximation and learning. Proceedings of the IEEE. 1990a;78(9):1481–1497.
  125. Poggio T, Girosi F. Regularization algorithms for learning that are equivalent to multilayer networks. Science. 1990b;247(4945):978–982. [PubMed: 17776454]
  126. Polak E.H. Multiple profile-multiple receptor site model for vertebrate olfaction. Journal of Theoretical Biology. 1973;40(3):469–484. [PubMed: 4754892]
  127. Poo C, Isaacson J.S. Odor representations in olfactory cortex: “Sparse” coding, global inhibition, and oscillations. Neuron. 2009;62(6):850–861. [PMC free article: PMC2702531] [PubMed: 19555653]
  128. Powell M.J.D. Radial basis functions for multivariable interpolation: A review. In: Mason J.C, Cox M.G, editors. Algorithms for approximation. New York: Clarendon Press; 1987. pp. 143–167.
  129. Raman B, Sun P.A, Gutiérrez-Gálvez A, Gutiérrez-Osuna R. Processing of chemical sensor arrays with a biologically inspired model of olfactory coding. IEEE Transactions on Neural Networks. 2006a;17(4):1015–1024. [PubMed: 16856663]
  130. Raman B, Sun P.A, Gutiérrez-Gálvez A, Gutiérrez-Osuna R. Processing of chemical sensor arrays with a biologically inspired model of olfactory coding. IEEE Transactions on Neural Networks. 2006b;17(4):1015–1024. [PubMed: 16856663]
  131. Reinhard J, Srinivasan M.V. The role of scents in honey bee foraging and recruitment. In: Jarau S, Hrncir M, editors. Food exploitation by social insects: Ecological, behavioral, and theoretical approaches. Boca Raton, FL: CRC Press; 2009. pp. 165–182.
  132. Ressler K.J. Information coding in the olfactory system—Evidence for a stereotyped and highly organized epitope map in the olfactory-bulb. Cell. 1994;79(7):1245–1255. [PubMed: 7528109]
  133. Ressler K.J, Sullivan S.L, Buck L.B. A zonal organization of odorant receptor gene-expression in the olfactory epithelium. Cell. 1993;73(3):597–609. [PubMed: 7683976]
  134. Ressler K.J, Sullivan S.L, Buck L.B. Information coding in the olfactory system: Evidence for a stereotyped and highly organized epitope map in the olfactory bulb. Cell. 1994;79(7):1245–1255. [PubMed: 7528109]
  135. Rospars J.P, Lucas P, Coppey M. Modelling the early steps of transduction in insect olfactory receptor neurons. Biosystems. 2007;89(1–3):101–109. [PubMed: 17284344]
  136. Rumelhart D.E, Hinton G.E, Williams R.J. Learning representations by back-propagating errors. Nature. 1986;323(6088):533–536.
  137. Rupe H, von Majewski K. Notizen. Berichte der deutschen chemischen Gesellschaft. 1900;33:3401–3410.
  138. Sammon J.W. A nonlinear mapping for data structure analysis. IEEE Transactions on Computers C. 1969;18(5):401.
  139. Sammon J.W. Interactive pattern analysis and classification. IEEE Transactions on Computers C. 1970;19(7):594.
  140. Sanz G, Schlegel C, Pernollet J.C, Briand L. Comparison of odorant specificity of two human olfactory receptors from different phylogenetic classes and evidence for antagonism. Chemical Senses. 2005;30(1):69–80. [PubMed: 15647465]
  141. Schild D. Principles of odor coding and a neural network for odor discrimination. Biophysical Journal. 1988;54(6):1001–1011. [PMC free article: PMC1330413] [PubMed: 3233263]
  142. Schoenfeld T.A, Cleland T.A. Anatomical contributions to odorant sampling and representation in rodents: Zoning in on sniffing behavior. Chemical Senses. 2006;31(2):131–144. [PubMed: 16339266]
  143. Sell C. On the unpredictability of odor. Angewandte Chemie—International Edition. 2006;45(38):6254–6261. [PubMed: 16983730]
  144. Sengupta P, Chou J.H, Bargmann C.I. odr-10 encodes a seven transmembrane domain olfactory receptor required for responses to the odorant diacetyl. Cell. 1996;84(6):899–909. [PubMed: 8601313]
  145. Shepherd G.M. Frank Allison Linville’s R. H. Wright lectures on olfactory research. Colbow K, editor. Burnaby, British Columbia, Canada: Simon Fraser University; 1990. pp. 61–109.
  146. Shepherd G.M, Chen W.R, Greer C.A. Olfactory bulb. In: Shepherd G.M, editor. Synaptic organization of the brain. New York: Oxford University Press; 2004. pp. 165–216.
  147. Sherstinsky A, Picard R.W. On the efficiency of the orthogonal least squares training method for radial basis function networks. IEEE Transactions on Neural Networks. 1996;7(1):195–200. [PubMed: 18255570]
  148. Shurmer H.V, Gardner J.W. Odor discrimination with an electronic nose. Sensors and Actuators B—Chemical. 1992;8(1):1–11.
  149. Shurmer H.V, Gardner J.W, Chan H.T. The application of discrimination techniques to alcohols and tobaccos using tin-oxide sensors. Sensors and Actuators. 1989;18(3–4):361–371.
  150. Shurmer H.V, Gardner J.W, Corcoran P. Intelligent vapor discrimination using a composite 12-element sensor array. Sensors and Actuators B—Chemical. 1990;1(1–6):256–260.
  151. Singh S, Hines E.L, Gardner J.W. Fuzzy neural computing of coffee and tainted-water data from an electronic nose. Sensors and Actuators B—Chemical. 1996;30(3):185–190.
  152. Sleight R. Evolutionary strategies and learning for neural networks. MSc thesis, UMIST; 1990.
  153. Song H.S, Lee S.H, Oh E.H, Park T.H. Expression, solubilization and purification of a human olfactory receptor from Escherichia coli. Current Microbiology. 2009;59(3):309–314. [PubMed: 19506949]
  154. Spehr M, Leinders-Zufall T. One neuron—multiple receptors: Increased complexity in olfactory coding? Science’s STKE: Signal Transduction Knowledge Environment. 2005;2005(285):e25. [PubMed: 15914726]
  155. Steinbrecht R.A. Odorant-binding proteins: Expression and function. Annals of the New York Academy of Sciences (Olfaction and Taste XII). 1998:323–332. [PubMed: 10049226]
  156. Stetter J.R, Findlay M.W, Maclay G.J, Zhang J, Vaihinger S, Gopel W. Sensor array and catalytic filament for chemical-analysis of vapors and mixtures. Sensors and Actuators B—Chemical. 1990;1(1–6):43–47.
  157. Stetter J.R, Jurs P.C, Rose S.L. Detection of hazardous gases and vapors—Pattern-recognition analysis of data from an electrochemical sensor array. Analytical Chemistry. 1986;58(4):860–866.
  158. Su C.Y, Menuz K, Carlson J.R. Olfactory perception: Receptors, cells, and circuits. Cell. 2009;139(1):45–59. [PMC free article: PMC2765334] [PubMed: 19804753]
  159. Sundgren H, Lundstrom I, Winquist F, Lukkari I, Carlsson R, Wold S. Evaluation of a multiple gas-mixture with a simple MOSFET gas sensor array and pattern-recognition. Sensors and Actuators B—Chemical. 1990;2(2):115–123.
  160. Szyszka P, Ditzen A, Galkin A, Galizia C.G, Menzel R. Sparsening and temporal sharpening of olfactory representations in the honeybee mushroom bodies. Journal of Neurophysiology. 2005;94(5):3303–3313. [PubMed: 16014792]
  161. Tegoni M, Pelosi P, Vincent F, Spinelli S, Campanacci V, Grolli S, Ramoni R, Cambillau C. Mammalian odorant binding proteins. Biochimica et Biophysica Acta-Protein Structure and Molecular Enzymology. 2000;1482(1–2):229–240. [PubMed: 11058764]
  162. Triller A, Boulden E.A, Churchill A, Hatt H, Englund J, Spehr M, Sell C.S. Odorant-receptor interactions and odor percept: A chemical perspective. Chemistry and Biodiversity. 2008;5(6):862–886. [PubMed: 18618409]
  163. Turin L, Yoshii F. Structure-odor relations: A modern perspective. In: Doty R.L, editor. Handbook of olfaction and gustation. New York: Marcel Dekker; 2003. pp. 457–492.
  164. Vassar R, Chao S.K, Sitcheran R, Nuñez J.M, Vosshall L.B, Axel R. Topographic organization of sensory projections to the olfactory bulb. Cell. 1994;79(6):981–991. [PubMed: 8001145]
  165. Vassar R, Ngai J, Axel R. Spatial segregation of odorant receptor expression in the mammalian olfactory epithelium. Cell. 1993;74(2):309–318. [PubMed: 8343958]
  166. Vincent F, Ramoni R, Spinelli S, Grolli S, Tegoni M, Cambillau C. Crystal structures of bovine odorant-binding protein in complex with odorant molecules. European Journal of Biochemistry. 2004;271(19):3832–3842. [PubMed: 15373829]
  167. Vincent F, Spinelli S, Ramoni R, Grolli S, Pelosi P, Cambillau C, Tegoni M. Complexes of porcine odorant binding protein with odorant molecules belonging to different chemical classes. Journal of Molecular Biology. 2000;300(1):127–139. [PubMed: 10864504]
  168. Vogt R. Odorant/pheromone metabolism in insects. Chemical Senses. 2006;31(5):A7–A8.
  169. Vogt R.G, Riddiford L.M. Pheromone binding and inactivation by moth antennae. Nature. 1981;293(5828):161–163. [PubMed: 18074618]
  170. Vogt R.G, Riddiford L.M. The biochemical design of pheromone reception—Transport and inactivation. Chemical Senses. 1984;8(3):268.
  171. Wang F, Nemes A, Mendelsohn M, Axel R. Odorant receptors govern the formation of a precise topographic map. Cell. 1998;93(1):47–60. [PubMed: 9546391]
  172. Weimar U, Schierbaum K.D, Gopel W, Kowalkowski R. Pattern-recognition methods for gas-mixture analysis—Application to sensor arrays based upon SnO2. Sensors and Actuators B—Chemical. 1990;1(1–6):93–96.
  173. Weyerstahl P. Odor and structure. Journal für Praktische Chemie-Chemiker-Zeitung. 1994;336(2):95–109.
  174. White J, Kauer J.S. Odor recognition in an artificial nose by spatio-temporal processing using an olfactory neuronal network. Neurocomputing. 1999;26–27:919–924.
  175. Wilkens W.F, Hartman J.D. Electronic analog for olfactory processes. Annals of the New York Academy of Sciences. 1964;116(A2):608. [PubMed: 14220555]
  176. Wilson R.I, Mainen Z.F. Early events in olfactory processing. Annual Review of Neuroscience. 2006;29(1):163–201. [PubMed: 16776583]
  177. Wold S, Esbensen K, Geladi P. Principal component analysis. Chemometrics and Intelligent Laboratory Systems. 1987;2(1–3):37–52.
  178. Wu C.S, Chen P.H, Yuan Q, Wang P. Response enhancement of olfactory sensory neurons-based biosensors for odorant detection. Journal of Zhejiang University—Science B. 2009;10(4):285–290. [PMC free article: PMC2666205] [PubMed: 19353747]
  179. Xu P.X. Eppendorf 2005 Winner—A Drosophila OBP required for pheromone signaling. Science. 2005;310(5749):798–799. [PubMed: 16272108]
  180. Xu Y.L, He P, Zhang L, Fang S.Q, Dong S.L, Zhang Y.Z, Li F. Large-scale identification of odorant-binding proteins and chemosensory proteins from expressed sequence tags in insects. BMC Genomics. 2009;10 [PMC free article: PMC2808328] [PubMed: 20034407]
  181. Yoshii F, Hirono S, Liu Q, Moriguchi I. 3-Dimensional structure model for benzenoid musks expressed by computer-graphics. Chemical Senses. 1992;17(5):573–582.
  182. Yoshii F, Liu Q, Hirono S, Moriguchi I. Quantitative structure-activity-relationships of structurally similar odorless and odoriferous benzenoid musks. Chemical Senses. 1991;16(4):319–328.
  183. Zaromb S, Stetter J.R. Theoretical basis for identification and measurement of air contaminants using an array of sensors having partly overlapping selectivities. Sensors and Actuators. 1984;6(4):225–243.
  184. Zuppa M, Distante C, Siciliano P, Persaud K.C. Drift counteraction with multiple self-organising maps for an electronic nose. Sensors and Actuators B—Chemical. 2004;98(2–3):305–317.
© 2013 by Taylor & Francis Group, LLC.
Bookshelf ID: NBK298822  PMID: 26042329
