
Persaud KC, Marco S, Gutiérrez-Gálvez A, editors. Neuromorphic Olfaction. Boca Raton (FL): CRC Press/Taylor & Francis; 2013.


Chapter 4. The Synthetic Moth: A Neuromorphic Approach toward Artificial Olfaction in Robots


4.1. INTRODUCTION

Olfaction is a sense vital to many living organisms. Animals rely on smell to sample the environment and gather information from it. Olfaction enables the identification of food, mates, and predators, as well as communication (Mykytowycz 1985), not only between members of the same or different species but also between animals and their environment.

Nevertheless, olfaction has not been studied as widely as vision or audition. A deeper understanding of the biological olfactory system would allow us to develop novel artificial olfactory systems for real-world robotic applications such as environmental monitoring (Trincavelli et al. 2008), land mine detection (Bermudez i Badia et al. 2007), and detection of explosives and other hazardous substances (Rachkov et al. 2005; Distante et al. 2009). Although there have been several attempts to implement the sense of smell on robots, biological olfaction outperforms its artificial counterparts in robustness, size, response time, precision, and complexity. Animals, and more specifically insects with relatively simple nervous systems, are able to solve the problem of odor localization and classification with great efficiency: bees use odor to localize nests, ants use pheromone trails to organize foraging in swarms, male moths use olfaction to locate mates (Baker and Haynes 1987), and so on.

Despite the technological advances in the field of artificial olfaction, a robust solution for the task of odor source localization and classification utilizing a fully autonomous robot has not yet been demonstrated. The main challenge is thus to develop an intelligent system able to robustly encode and decode odors as well as navigate autonomously in natural environments and successfully locate an odor source.

Artificial olfaction remains a challenging field of research, as it requires the development of chemical sensors that can reliably capture information from the environment. In the field of artificial chemical sensing there is a wide diversity of technologies; however, the most widely used chemical sensors are made of thin-film metal oxide (MOX). These chemical sensors provide a broad spectrum of sensitivity to volatile chemical compounds with low power consumption. When employed on a robotic platform, they are usually structured in arrays of different types of chemical sensors, widely known as e-noses, which provide lower error rates and detection of a broader range of chemicals. Nevertheless, they are still less efficient than the sensory modalities of animals. As an alternative to artificial chemical sensors, Kuwana et al. (1999) used their biological counterpart, the antennae of a living silkworm moth, connected to a mobile robot so as to perform a pheromone search.

Equipping a robot with reliable chemical sensors is not enough to perform the odor classification and localization task. This task requires the development of robust odor classification models as well as odor source localization strategies that handle and exploit the information acquired from both the classification model and other sensory modalities. Early attempts at odor localization were demonstrated with Braitenberg vehicles (Gomez-Marin et al. 2010; Lilienthal and Duckett 2003) or with high-level processes that include a planner and symbolic reasoning (Loutfi and Coradeschi 2008). In the past two decades, several attempts have been made to model animal behaviors and techniques in order to achieve a robust odor localization and classification system. For instance, to determine the direction of a gas source, Ishida and Kohnotoh (2008) based their model on the dog's nose. Grasso et al. (2009) modeled the behavior of a lobster and built a robot that performs the odor localization task in an underwater environment. The list of studies that approach artificial olfaction by modeling animal olfaction is constantly increasing, with an emphasis on insect chemolocalization and, most specifically, on chemical search based on the behavior and neural substrates of the male moth (Pyk et al. 2006). In fact, in a comparative study of robot-based odor source localization strategies (Bermudez i Badia and Verschure 2009), the authors compare reactive approaches with strategies employed by the male moth, concluding that the latter yield more correct localizations.

Nonetheless, locating the source of a chemical compound in real-world applications is a rather difficult task. Odors are volatile chemicals in the atmosphere that are mainly transported by airflow, creating a plume. The dispersion dynamics of the plume vary greatly depending on the medium, as the interaction of the airflow with surfaces produces turbulence. The flow regime governing this dispersion is characterized by the Reynolds number. In fluid mechanics, the Reynolds number describes the conditions of a fluid in relative motion to a surface; it incorporates the fluid's density and viscosity and measures the ratio of inertial forces to viscous forces. At low Reynolds numbers, where viscosity prevails, the fluid motion is smooth and steady, with a monotonic decrease of the chemical concentration away from the source. At medium or high values, however, turbulence dominates, producing flow instabilities. To address the problem of odor localization and classification, Kowadlo and Russell (2008) proposed dividing the task of odor localization into three general steps: (1) search for and identify the chemical compound of interest, (2) track the odor using several sensory modalities (such as chemical sensing), and (3) identify the source of the odor (by either vision or olfaction). Consequently, different search and classification strategies need to be employed for different environments.
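For reference, the standard definition of the Reynolds number for a flow with characteristic velocity $u$, characteristic length $L$, fluid density $\rho$, and dynamic viscosity $\mu$ (kinematic viscosity $\nu = \mu/\rho$) is

$$ Re = \frac{\rho u L}{\mu} = \frac{u L}{\nu}, $$

so a low $Re$ (viscous forces dominate) corresponds to smooth, laminar transport of the odor, whereas a high $Re$ (inertial forces dominate) corresponds to the turbulent, filamentous plumes discussed in this chapter.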

Our aim is to achieve a novel olfactory-based system that will allow an autonomous mobile robot to navigate within a given environment and locate the source of the desired odor. We propose two models for classification and localization based on the neural substrates and mechanisms employed by a biological system that is known to perform the task of odor localization and classification in a robust way—the male moth. To assess our models, we have conducted experiments using two different chemical compounds: ethanol and ammonia. Our results show the first steps toward a stable odor localization and classification system.

4.2. THE MOTH

4.2.1. Moth Behavior

Insects in general are particularly good at using chemical cues to analyze the environment and achieve key objectives such as locating food, finding mates, or communicating with each other. In particular, moths have been widely studied due to the males' ability to locate the female from a large distance, up to several hundred meters. The odor stimuli that moths detect are specific pheromone blends. These pheromones are mixed with a complex chemical background and are dispersed in turbulent plumes. Nevertheless, male moths are able to detect minute concentrations of pheromone and locate the female that is emitting them. Thus, moths are able to solve the odor classification and localization task by combining active sampling with specific behavioral and information processing strategies.

The female moth releases a species-specific pheromone blend that acts as a sex attractant for the male moths. This blend flows downwind, creating a plume with a filamentous structure. Once the male moth detects pheromone molecules within the plume, it flies slowly upwind, tracing the filaments of the plume. This stereotypical behavior is called surge (Pearce et al. 2004). However, due to the dynamics of the plume and the complexity of its structure, the moth often loses track of the pheromone plume during surging. To reacquire the plume, moths have developed the cast behavior, which is basically a zigzag movement orthogonal to the wind direction (Pearce et al. 2004) (Figure 4.1). Interestingly, when the male moth loses track of the pheromone plume and finds it again after casting, the point at which it reacquires the plume is usually closer to the source than the point where it lost it.

FIGURE 4.1 Illustration showing the pheromone plume and the male moth cast-and-surge behaviors. (Courtesy of Lopez, L. L. et al., in On Biomimetics, ed. L. D. Pramatarova, InTech, 2011. Accessed from http://www.intechopen.com/books/on-biomimetics/moth-like-chemo-source-localization-and-classification-on-an-indoor-autonomous-robot.)

As a result, the behavior employed by the male moth when it tries to locate the female by tracking the pheromone plume relies on information acquired through both anemotaxis (the orientation of the moth's movement in response to the wind) and chemotaxis (the orientation of the moth's movement in response to a chemical compound).

Given this background and in order to understand the neural substrates and mechanisms that endow male moths with such robustness and high performance, we have developed a model that is based on behavioral mechanisms employed by the male moth, the so-called cast-and-surge behavior, and implemented the resulting model in an autonomous robot. By testing the behavior of our robot in real-world experiments, we want to be able to verify and strengthen our models, and therefore push forward our understanding of the mechanisms employed by the male moth.

4.2.2. Olfactory Pathway

The main components of the insect olfactory pathway are the olfactory receptor neurons (ORNs) in the antenna, the antennal lobe (AL), and the mushroom body (MB) (Hansson 2002) (Figure 4.2). The ORNs are located in the olfactory epithelium of the antenna and project their axons through the olfactory nerve to the insect antennal lobe. They respond to the different chemical stimuli present in the air. Within the AL, ORNs of the same type converge onto structures called glomeruli, so the number of glomeruli is closely related to the number of ORN types. This organization is likely to help the AL deal with noisy conditions and dynamic input (Laurent 1999). The glomerular signals are sent to two different types of neurons: projection neurons (PNs) and local neurons (LNs). The projection neurons are the output of the AL to the MB and will spike simultaneously in the presence of a specific odor. LNs laterally interconnect with the PNs and modify their activity by means of inhibition. Finally, the MB is responsible for odor memory and learning.

FIGURE 4.2 Functional representation of the insect's olfactory pathway. ORNs belonging to the same class converge onto the same glomerulus. LNs interconnect with the PNs, which are then connected to higher brain areas such as the MB. (Courtesy of Lopez, L. L. et al., in On Biomimetics, ed. L. D. Pramatarova, InTech, 2011.)

4.2.2.1. Olfactory Receptor Neurons

The function of the ORNs is to send precise information to the nervous system about the amount of individual odorants present in the air. When an odorant comes into contact with an ORN, it interacts with the receptor proteins in the membrane of the neuron and increases the membrane potential, eventually generating a spike. The spike amplitude may differ from ORN to ORN, but it is thought not to carry any useful information (Todd and Baker 1999). What is definitely important is the frequency of the spikes, and also the temporal pattern they create. This translates into a continuous firing rate transmitted from the ORN to the AL that indicates the odorant concentration. Although different ORNs respond differently to different odorants, such differences are in some cases very small and not easy to observe, which makes odor classification a nontrivial task. ORNs are present all over the antenna, also providing spatial information on where the odorant is located in the environment. This information is of fundamental importance for the flying strategy of the insect when locating a plume. The distribution of the different types of ORNs varies between species, but generally they are found homogeneously distributed along the antenna.

A key factor for classification is the response of the ORNs over time to a constant stimulus. As is to be expected, ORNs do not react immediately to an odorant; for any given odor concentration, a certain time is needed to reach the maximum firing rate, and a much longer time is needed to reduce their activity when the odor is removed (Purves et al. 2001). Curiously, this behavior has been shown to be very useful for odor blend identification, since these times differ between odorants.
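As an illustration of this asymmetry, the sketch below models an ORN firing rate as a first-order filter with a fast rise time constant and a much slower decay time constant. The time constants, time step, and step stimulus are assumptions chosen only for illustration; they are not parameters taken from the chapter.

```python
# Minimal sketch of an ORN-like temporal response: fast rise to a step
# stimulus, slow return toward baseline when the odor is removed.
# Time constants and stimulus values are illustrative assumptions.

def orn_response(stimulus, dt=0.01, tau_rise=0.2, tau_decay=2.0):
    """Return the ORN activity trace for a list of stimulus samples."""
    activity = 0.0
    trace = []
    for s in stimulus:
        tau = tau_rise if s > activity else tau_decay  # asymmetric dynamics
        activity += (s - activity) * dt / tau
        trace.append(activity)
    return trace

# Odor present for 5 s, then removed (dt = 10 ms).
stimulus = [1.0] * 500 + [0.0] * 1500
trace = orn_response(stimulus)
print(round(trace[499], 2), round(trace[699], 2))  # ~1.0 at odor offset, still ~0.37 two seconds later
```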

4.2.2.2. Antennal Lobe

In most species the AL has the shape of a sphere and is well demarcated from other parts of the brain. It is composed of a number of neuropilar compartments called glomeruli. Glomeruli are spheroidal structures that compile the activity coming from ORNs of the same type, although in some cases different kinds of ORNs are mixed. Moreover, the male moth has a pheromone-specific macroglomerulus for reproductive purposes.

There are basically two types of neurons in the AL: the projection neurons (PNs), which filter the activity from the ORNs and then send a processed signal to higher cognitive areas, and the local neurons (LNs), which shape this activity by extracting the most significant characteristics of the signal. The LNs can in turn be of two types: heterogeneous LNs (local neurons whose inputs and outputs remain in the same glomerulus) or homogeneous LNs (local neurons whose inputs and outputs interact with different glomeruli).

The AL shows a high level of interconnectivity between LNs and PNs. The number of LNs is at least four times that of the PNs. An LN may receive input from one or more glomeruli, and may inhibit another LN or a PN. This inhibition depends on two different types of GABA receptor: GABA-A (fast) and GABA-B (slow) (Waldrop et al. 1987). It is still not clear whether direct connections from the glomeruli to the PNs exist. It is believed that the inhibition arising from LNs modifies the activity of the ORNs and shapes the response of the PNs, which project to the MB (Christensen et al. 1993).

4.2.2.3. Mushroom Body

The mushroom body (MB) is known to be involved in the learning and memory of odors. This work focuses on the role of the AL in signal processing for the classification task; the role of the MB is secondary and will be treated simply as a linear classifier.

4.3. METHODS

This section describes the technical characteristics of the robotic platform and sensors employed for this study, as well as the technical specifications of the embedded computer and the software used. We also provide an outline of our experimental setup and the evaluation tasks we applied to test our system.

4.3.1. The Robot

The autonomous robot described here was developed within the European project NEUROCHEM, supported by Bio-ICT, and was the product of a collaboration between UPF (Universitat Pompeu Fabra) and UPC (Universitat Politecnica de Catalunya). The robot is composed of two main parts: the mobile robotic platform developed at SPECS-UPF and an embedded computer assembled at UPC. The basic requirements for the robot included a fully functioning interface with chemical and other sensors, full autonomy, and demonstration capabilities.

4.3.1.1. Sensory Modalities—Hardware

We provided our robot with several sensory modalities so that it can navigate freely and explore the environment. To receive information on the distance of objects to the left, middle, and right of the robot, three SRF08 ultrasonic sensors (Devantech Ltd. [Robot Electronics], Norfolk, England) were placed, equally spaced, on the front part of the robotic platform. We also used a CMPS03 compass (Devantech Ltd. [Robot Electronics], Norfolk, England), specially designed for robotic navigation, as it produces a unique number to represent the robot's direction. The wind direction was measured with a custom-built wind vane that also produces a unique number for the direction of the wind relative to the direction of the robot. Finally, we equipped the robot with an array of 16 MOX chemical sensors. The board is placed in the middle of the platform so that the sensor readings are not affected by rotations (Figure 4.3).

FIGURE 4.3 Picture of the robotic platform. The chemosensor board is placed in the middle of the platform, and the ultrasonic sensors are placed, equally spaced, on the front part of the robot.

Additionally, a GPS and a two-axis accelerometer were fitted to the robotic platform, but were not utilized in the current set of experiments. The mobile platform is based on an Arduino board with a Bluetooth interface and is interconnected with the embedded computer via a Bluetooth dongle. The array of chemical sensors is directly connected to the embedded computer, allowing us to acquire real-time data from the environment. In addition, a wireless LAN adapter has been used, allowing connections between the embedded computer and a local network created for the purpose of our experiments. This ensures communication between the embedded computer and any other computer connected to the network.

4.3.1.2. Chemical Array

The success of the odor classification and localization task depends highly on the odor-sensing instrumentation of the robot. The robot's design is able to host three types of gas sensor arrays. The first type is a large-scale array of 64K polymeric sensors (Beccherelli et al. 2009), consisting of 16 modules of 64 × 64 elements each and approximately 8 sensor types. The second sensor array is composed of four types of thin-film metal oxide (MOX) Figaro sensors (Figaro USA, Inc., Arlington Heights, IL, United States). All four types of Figaro sensors have low power consumption, small size, and long life expectancy. The four types of MOX sensors are TGS 2442, TGS 2600, TGS 2610, and TGS 2612; each of them is a broadly selective gas sensor. Figure 4.4 shows the spatial arrangement of each sensor. Finally, the third type of gas sensor array is a virtual sensor array, which is basically a software abstraction of sensor signals that was used to test various models of insect olfaction. The results presented in this work were obtained with the second type, the MOX sensor array.

FIGURE 4.4 (a) The board of the 4 × 4 chemical sensor array. (b) The spatial arrangement of the four types of Figaro chemical sensors: TGS 2442, TGS 2600, TGS 2610, and TGS 2612.

4.3.1.3. Robotic Platform

A first version of the robotic platform used in our experiments had a tank structure with two motors, one in the front and one in the back, and caterpillar tracks on each side (Figure 4.5). With this platform, the robot operated at 80% of nominal speed, advancing at 1.8 m/s. However, this speed was considered inconsistent with the response time of the sensors, and it was necessary to reduce it. Ideally, the robot should be able to move at a speed of 3 cm/s. Nevertheless, due to the nature of the motors, it was not possible to lower the speed significantly, as they required a minimum of 60% of the nominal voltage to begin to move. After configuring the robot for the minimum possible speed, we tested the platform inside the wind tunnel by performing one simple cast. Although this design favored movement over different terrains and supported the weight of the embedded device as well as the batteries, it did not allow the controlled movements and slow maneuvers that were considered necessary for chemo-search inside the wind tunnel.

FIGURE 4.5 Image of the autonomous robotic platform when operating with caterpillar tracks. The embedded computer, batteries, and sensory modalities, as well as the chemosensor array, are placed on top of the robotic platform. (Courtesy of Lopez, L. L. et al., in On Biomimetics, ed. L. D. Pramatarova, InTech, 2011.)

Therefore, we decided to redesign the robot to improve maneuverability in relation to speed, as well as the ability to carry the weight of the embedded computer and its batteries. The new robot differed from the previous one in the design of the platform, as it was equipped with two wheels instead of caterpillar tracks; the rest of the sensors employed for the chemo-search were mounted onto the new platform. The design of the new robot was based on the structure of tractor-carrying aircraft, due to its maneuverability. The structure consists of two wheels, one on each side, each controlled independently by its own motor (Figure 4.6). To allow fine-tuning of the robot's movements, we placed a set of three omnidirectional passive wheels at the back of the platform, while the load of the robot (embedded computer, batteries) is placed at the front. To reduce the speed of the robot further, we applied a reduction gear system to each motor. As with the previous robot, we tested the new structure by performing a simple cast task. The results show that the new platform improves both maneuverability and performance.

FIGURE 4.6 Image showing the robotic platform after the caterpillar tracks were replaced with wheels. This new design allowed better maneuverability of the robot.

4.3.1.4. Embedded System

Research on biomimetic algorithms for artificial olfaction imposes new technical requirements on the hardware and software of sniffing robots. The embedded technology implemented in the synthetic moth robot offers several benefits in this area. The modular structure of the embedded platform assigns to each part its respective tasks, including system control, data acquisition, biologically inspired processing, and visualization. The system runs a GNU/Linux image that can be operated either headless or with a standard graphical user interface (GUI), with the iqr simulator for large-scale neural systems embedded in the software (Bernardet and Verschure 2010).

The architecture of the embedded computer is based on the PC104 standard, which typically targets rugged industrial embedded applications, where this technology permits data acquisition in extreme environments. The PC104 bus offers additional benefits in terms of compact form factor (3.6 by 3.8 in.), a low number of components and internal connectors, and a low power consumption (1–2 W per module).

The embedded computer is composed of four PC104 component boards: the CPU board PCM-3372F-S0A1E (Advantech), the data acquisition board PC104-DAS16Jr/16 (Measurement Computing), the power supply unit HESC104, and the battery pack BAT-NiMh45 (Tri-M Systems). The CPU board is a single-board computer that provides performance similar to that of a small laptop computer. The board contains a VIA Eden V4 1.0 GHz processor, 1 GB of DDR2 RAM at 533 MHz, and the VIA CX700 system chipset with 64 MB VRAM. Running the models in the iqr neuronal simulator achieves 50 cycles per second, which was the target speed in the design of the system.

The I/O periphery of the CPU board consists of two serial ports, six USB 2.0 ports, keyboard/mouse ports, audio and 8-bit GPIO ports, a 10/100 Mbps Ethernet interface, and a slot for a flash Type I card. The data acquisition unit is a 16-channel board with 16-bit analog-to-digital converters (ADCs). The system is configured for parallel 16-channel ADC operation at a 100 kHz sampling frequency. Such a configuration is able to interface all the polymer gas sensor boards developed within the NEUROCHEM project, described in Section 4.3.2.2.

The power supply unit is a DC-DC converter with a wide range of input voltages, from 6 V to 40 V DC, and an output power of 60 W. An uninterruptible power supply (UPS) mode is supported, with the board configuration stored in the power board's EEPROM memory. The power consumption of the embedded computer in the complete configuration is typically 9 W (maximum of 15.5 W). The NEUROCHEM polymeric sensor array with 64K elements and its associated electronics requires from 4 W to 10 W. Given the maximum power consumption of 25.5 W, the system includes a battery pack of 4500 mAh that guarantees autonomous operation for around 1.1 h.
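As a rough check of the quoted autonomy, the arithmetic below assumes a nominal NiMH pack voltage of 7.2 V and a DC-DC conversion efficiency of about 87%; both figures are assumptions, since the chapter specifies only the 4500 mAh capacity and the 25.5 W worst-case load.

```python
# Back-of-the-envelope autonomy estimate for the worst-case power budget.
# Assumed values (not given in the chapter): nominal pack voltage, converter efficiency.
capacity_ah = 4.5          # 4500 mAh battery pack
pack_voltage_v = 7.2       # assumed nominal NiMH pack voltage
efficiency = 0.87          # assumed DC-DC conversion efficiency
load_w = 15.5 + 10.0       # max embedded computer + max 64K sensor array = 25.5 W

energy_wh = capacity_ah * pack_voltage_v      # ~32.4 Wh stored
autonomy_h = energy_wh * efficiency / load_w  # ~1.1 h, consistent with the figure above
print(f"{autonomy_h:.2f} h")                  # -> 1.11 h
```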

The software includes a software emulator of a large-scale sensor array. This module permits us to work, test, benchmark, and prototype a complete neuromorphic signal processing tool chain without requiring a physical sensor array. The module includes means for designing the experiment and for generating a large number of sensors/receptors that behave realistically, like a large array of polymeric sensors.

The virtualization of the hardware system has also been implemented in a custom GNU/Linux image based on the Debian (Debian Live project) operating system (OS). The released OS image includes drivers for the PC104 boards, custom data acquisition software, iqr modules of the developed neuromorphic models, and the model of the chemosensory array. End users can therefore choose to develop models targeting a physical sensor array mounted on the robotic platform, or to test these models in a simulated experiment on a desktop computer without the need for any specific hardware.

4.3.2. Software

4.3.2.1. iqr

Our system consists of two main models: classification and localization. In order to design and simulate the neural networks of both models, a solid software framework was needed. The tool we used is the open-source large-scale neural network simulator iqr (Bernardet and Verschure 2010), available under the GPL license. iqr provides a multilevel neuronal simulation environment that allows us to visualize and analyze data in real time and, thanks to its modular structure, supports interfacing with external devices such as robots. A great feature of iqr is that it is able to simulate biological nervous systems by using standard neural models, such as linear threshold or integrate-and-fire neurons. In this way, all elicited behaviors originate from inhibitory and excitatory interactions among such neurons. Given our system's needs, specific modules allowing communication between the robot and different computers on the network, as well as custom neuron types, have been developed in C++. By implementing our system in iqr, we were able to acquire data from the sensors of the robot, process them using the localization and classification models, and send the output commands to the robot, all in real time.
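Since iqr neuron types are configured within the simulator rather than hand-coded, the sketch below is not iqr code; it is only a minimal illustration, with assumed parameter names and values, of the linear threshold neuron model mentioned above, in which activity is a thresholded, clipped sum of inputs rather than a train of spikes.

```python
# Minimal sketch of a linear threshold neuron update (not the iqr API).
# Activity is a continuous value: excitation minus inhibition, thresholded
# at a lower bound and clipped to an upper bound.

def linear_threshold_step(exc_inputs, inh_inputs, threshold=0.1, v_max=1.0):
    """One update step: return the clipped net activity of the neuron."""
    net = sum(exc_inputs) - sum(inh_inputs)
    if net < threshold:
        return 0.0
    return min(net, v_max)

# Example: excitation from two sources, lateral inhibition from one neighbor.
print(round(linear_threshold_step([0.6, 0.3], [0.2]), 2))  # -> 0.7
print(round(linear_threshold_step([0.05], [0.0]), 2))      # -> 0.0 (below threshold)
```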

4.3.2.2. Data Acquisition Software

The main purpose of the acquisition software is to deliver the stream of real-time chemosensory signals to the classification and localization models implemented in the iqr framework. Figure 4.7 shows a diagram of the acquisition flow, which consists of three levels: hardware, software, and user level. The neuromorphic models of the moth robot are located at the user level and interconnect with several components on the other two levels via the iqr modules. The chemosensory readings end up in the iqr modules after passing through several stages at the software level. The low-level data acquisition is partly controlled by the Comedi-based driver.

FIGURE 4.7 A scheme of the acquisition flow from three chemosensor arrays of different types: the polymeric sensor array of up to 64K elements designed in the NEUROCHEM project, commercial metal oxide (MOX) sensors, and virtual sensors allowing generation of synthetic data (R package chemosensors).

The Comedi project develops open-source drivers, tools, and libraries for data acquisition (Schleef et al. 2003). The project provides a collection of drivers for a variety of common data acquisition boards. The drivers are implemented with the help of a Linux kernel module offering common functionality and individual low-level driver tools. The functions are accessed via the Comedi user interface library, Comedilib. The available functions perform asynchronous and triggered acquisition, configuration of analog and digital channels, and Direct Memory Access (DMA) data transfer to memory. The NEUROCHEM library is a shared library that communicates with the acquisition electronics of the sensor arrays. It implements a custom signal protocol targeting the compatible sensor arrays by means of a developer-friendly interface wrapping Comedilib.

The NEUROCHEM acquisition software supports three types of arrays. The first is a polymeric sensor array developed by the CNR Institute for Microelectronics and Microsystems, Rome, Italy, and the University of Manchester, UK (Beccherelli et al. 2009). This array is the first very large sensor array available, providing 64K sensing elements distributed across 16 sensor dies. Each sensor die has an area of 4 cm² and contains 4096 active sensor surfaces. Thirty-one different polymer types were used in the deposition of the 64K sensing elements. This array is the most demanding stage for the software system. However, the slow dynamics of the chemical reactions in the sensor device limit the required acquisition speed, which is close to 1 s for a complete scan of all available sensors (64K).

The second array is a general purpose array made of 16 commercial metal oxide (MOX) sensors from Figaro Engineering, Inc., which has been developed at the University of Barcelona, Spain. This array is composed of 16 sensors of four different types (TGS 2442, TGS 2600, TGS 2610, and TGS 2612).

Both arrays, polymeric and metal oxide, are compatible with the same data acquisition protocol implemented in the NEUROCHEM driver, which takes care of every detail of the ADC process, including acquisition control, communication with the electronics in each sensor array, and signal filtering. The two arrays are available via the same iqr module designed for the end user.

As introduced in Section 4.3.1.4, the third array is a built-in software-based abstraction of a real polymer sensor array. This virtual sensor array in fact exposes the functionality of an R package (R Development Core Team 2011) named chemosensors, developed by A. Ziyatdinov and A. Perera (R package chemosensors). The virtual array allows the design of synthetic experiments that simulate real-time signals, which have been used to test neuromorphic models within iqr in the NEUROCHEM project. It allows control of the generation of chemosensory stimuli with a variety of characteristics: an unlimited number of sensors, support of multicomponent gas mixtures, and full parametric control of the noise in the sensors, including drift and nonlinearity. The R package chemosensors is included in the OS image released for the moth robot in the NEUROCHEM project.

4.3.3. Environmental Setup

4.3.3.1. Wind Tunnel

For the needs of our experimental setup, we constructed a wind tunnel inside which the robot is placed. The wind tunnel is located at the SPECS lab in Barcelona, Spain. It consists of a wooden skeleton covered with a transparent low-density polyethylene sheet (Figure 4.8). This solution allows us to have a controlled indoor environment in which the robot can move freely. A constant airflow is generated by four ventilators located at one end of the wind tunnel. Each ventilator is a 4.4 W centrifugal fan that creates a negative pressure, producing an airflow velocity of up to 1.0 m/s.

FIGURE 4.8 Layout of the wind tunnel. The camera is located 3 m above the wind tunnel. The arrows indicate the flow direction from the odor source to the exhaust ventilators. (Courtesy of Lopez, L. L. et al., in On Biomimetics, ed. L. D. Pramatarova, InTech, 2011.)

An odor source is placed on the upwind end of the tunnel. Therefore, the plume that is created moves across the whole wind tunnel from the point of the odor source to the four ventilators where the air is extracted out of the room. The wind tunnel is approximately 4 m long, 3 m wide, and 54 cm high.

For the odor localization experiment, the robot was placed in the middle of the wind tunnel, in front of the fans, facing upwind. To create the odor maps, the robot was placed at equally spaced positions throughout the wind tunnel, facing upwind.

4.3.3.2. Vision-Based Tracking System (AnTS)

The trajectory of the robot was acquired with the general purpose video-based tracking system AnTS. The tracking system consists of a monochrome camera that is placed approximately 3 m above the wind tunnel. To track the robot independently of light conditions, an IR filter was applied to the camera and 3 IR LEDs were placed on the robot so that they could be identified by the camera. The AnTS tracking application is able to record not only the orientation and absolute position of the robot inside the wind tunnel, but also one trace per tracked element.

4.3.4. System Architecture

The developed models of odor localization and odor classification are based on the behavior and neural substrates of the male moth. The classification model consists of three main stages, just like the olfactory pathway of the moth. The first stage is the ORN model, which groups the input from the sensors. The second stage is the AL model with custom modifications, which represents the stimulus information in a way that makes it relatively easy to classify. Finally, an MB model acts as a linear classifier and obtains the identity of the blend. This information is then passed to the localization model, which is based on the two basic behaviors observed in the male moth: upwind movement (surge) and crosswind movement (cast). Our system combines active sampling of the environment with the moth's search strategy. This means that the system receives and processes in real time the information acquired from the environment (such as odors and obstacles), decides which action to take, and then outputs the desired action to the motors (Figure 4.9).

FIGURE 4.9 Overall system architecture. Inputs of the robot are represented at the top, while actuators (motors) are at the bottom of the picture. The classification system receives the input of the chemical sensors, which is grouped and normalized in the ORN model.

4.3.4.1. ORN Model

The ORN model, in accordance with its analog in nature, takes the input from the chemical sensors and translates it into neural activity. The four different types of sensors on the robot are grouped into four different glomeruli, whose activity varies from 0 to 1 (Figure 4.10). There are, however, some important differences from the biological system.

FIGURE 4.10 ORN model. First, similar sensor input, represented by action potential, is grouped into different glomeruli. Second, the signal is normalized to give values between 0 and 1.

The presence of a stimulus is not represented by the firing rate of a spiking neuron, but by the continuous activity level (membrane potential) of the neuron. This choice is due to computational constraints in the embedded robot: the neuron's activity summarizes the spiking at each time step rather than the time of each spike, saving a considerable number of cycles by reducing the required simulation speed. Second, biological ORNs fire at a constant baseline rate that decreases only in the presence of an odorant. Although the sensors show a similar behavior, the output of the ORNs has been normalized to give a value between 0 and 1: the absence of odorant is represented by 0, and the saturation of the sensor at a given concentration of particles by 1. Finally, each glomerulus in the AL receives input from only one type of sensor; in order to simplify the system, no sensor-mixed glomeruli were implemented.
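The sketch below illustrates this grouping and normalization step under assumed details: a hypothetical mapping of the 16 sensors onto the four types and per-type calibration bounds chosen for illustration. It is not the actual iqr implementation of the ORN model.

```python
# Sketch of the ORN model stage: group the 16 MOX sensor readings by sensor
# type (four glomeruli) and normalize each glomerulus to the [0, 1] range.
# The sensor-to-type mapping and the calibration bounds are illustrative.

SENSOR_TYPES = ["TGS2442", "TGS2600", "TGS2610", "TGS2612"] * 4  # hypothetical layout
BOUNDS = {  # assumed (min, max) raw readings per sensor type
    "TGS2442": (0.1, 3.0),
    "TGS2600": (0.2, 2.5),
    "TGS2610": (0.1, 2.0),
    "TGS2612": (0.1, 2.2),
}

def orn_model(raw_readings):
    """Map 16 raw sensor values to 4 glomerulus activations in [0, 1]."""
    grouped = {t: [] for t in BOUNDS}
    for sensor_type, value in zip(SENSOR_TYPES, raw_readings):
        grouped[sensor_type].append(value)
    activations = {}
    for sensor_type, values in grouped.items():
        lo, hi = BOUNDS[sensor_type]
        mean = sum(values) / len(values)
        activations[sensor_type] = min(max((mean - lo) / (hi - lo), 0.0), 1.0)
    return activations

print(orn_model([1.0] * 16))  # mid-range activity in all four glomeruli
```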

4.3.4.2. Antennal Lobe Model

The classification model used in the robot is an adaptation of the one proposed by Knüssel (2006). This model is based on the so-called temporal population code (TPC).

Starting from physiological measurements of PN activity in the moth, Knüssel proposed a theoretical model of the AL that takes into account not only the instantaneous firing rate of a neuron, but also the evolution of its spike trains over time. As Knüssel's main interest was to study temporal dynamics, his model is set up to receive static sensory input. The model consists of four types of neurons: olfactory receptor neurons (ORNs), heterogeneous local neurons, homogeneous local neurons, and projection neurons. Except for the homogeneous local neurons, which are shared by all glomeruli, the other types occur at a ratio of one per glomerulus. Each glomerulus receives input from only one olfactory receptor neuron, which represents the average activity of all receptor neurons of the same type. Glomeruli are arranged in a ring to avoid boundary effects in the connections. The input from the olfactory receptor neuron excites the projection neuron and the heterogeneous local neuron in the same glomerulus, as well as the homogeneous local neuron; it also provides excitation to the heterogeneous local neurons of the neighboring glomeruli. The homogeneous local neuron inhibits every heterogeneous local neuron with a fast synapse. In this way, the homogeneous local neuron keeps the average firing rate of all the olfactory receptor neurons, while the heterogeneous local neuron in each glomerulus represents the difference between the firing rate of that glomerulus's receptor neuron (or its neighborhood) and the average. Projection neurons are in turn inhibited with a slow synapse by the heterogeneous local neuron in the same glomerulus. This slow synapse corresponds to a standard exponential kernel, which increases the inhibition over time.

In this configuration, the model reacts to a static stimulus at the receptor neuron with a high, fast peak followed by a slow decrease, generating a so-called alpha function (Figure 4.11).
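The following sketch reproduces this qualitative behavior with a two-variable discrete-time model: a PN driven directly by a static ORN input and inhibited by a slowly accumulating term standing in for the slow synapse from the heterogeneous LN. The gains and time constants are assumptions chosen only to make the alpha-like shape visible; this is not Knüssel's full model.

```python
# Minimal sketch of the TPC idea: a constant ORN input produces a fast PN peak
# followed by a slow decay, because a slow inhibitory term builds up over time.
# Gains and time constants are illustrative assumptions.

def pn_response(orn_input=1.0, steps=300, dt=0.02, tau_inh=1.0, gain_inh=1.2):
    inhibition = 0.0
    trace = []
    for _ in range(steps):
        inhibition += (orn_input - inhibition) * dt / tau_inh  # slow build-up of inhibition
        pn = max(orn_input - gain_inh * inhibition, 0.0)       # fast excitation minus slow inhibition
        trace.append(pn)
    return trace

trace = pn_response()
print(round(max(trace), 2), round(trace[-1], 2))  # early peak (~0.98), later decay to 0.0
```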

FIGURE 4.11 Illustration of the AL model proposed by Knüssel. Constant input from ORNs is passed to the PN in the glomerulus, the heterogeneous LN in the glomerulus and the neighboring ones, and a common homogeneous LN. Heterogeneous LNs in each glomerulus inhibit the PN of their glomerulus through a slow synapse.

This function encodes information about the stimulus in two dimensions: amplitude and duration. While the initial amplitude (the peak) is defined by the direct excitation from the receptor neuron, the decay (and thus the duration) is conditioned by the slow inhibition of the heterogeneous local neuron (that is, by the position of the glomerulus's receptor neuron firing rate relative to the average of all of them). In this way, both dimensions are critically important for odor classification, in agreement with the physiological measurements of real moth projection neurons reported in Knüssel et al. (2007). In that paper the alpha function (Figure 4.12) was used as a fit to the projection neurons' firing rate, and not only the amplitude but also the duration was shown to carry important information, improving classification by up to 40% in comparison with simple spatial coding.

FIGURE 4.12 The alpha function is characterized by four parameters: baseline B, offset O, amplitude A, and duration D. According to Knüssel's model, if not only the amplitude but also the duration of this function is considered, classification can be improved.
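The chapter does not give the explicit equation, but one common alpha-function parameterization consistent with the four parameters named in Figure 4.12 is the following, where D is taken as the time from onset to peak (this particular form is an assumption):

$$ f(t) = B \quad \text{for } t < O, \qquad f(t) = B + A\,\frac{t - O}{D}\,\exp\!\left(1 - \frac{t - O}{D}\right) \quad \text{for } t \ge O, $$

which starts at the baseline B, rises to a peak of height A above the baseline at t = O + D, and then decays back toward B.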

The model presented by Knüssel was implemented and adapted to make it viable to run on the embedded computer, while preserving the concept of the temporal population code (Figure 4.13).

FIGURE 4.13 TPC model adaptations of Knüssel's model. Glomeruli are no longer interconnected, and the firing rate is represented by the neuron's activity.

The original model was designed to respond to a constant, static input. Since Knüssel's objective was to show a dynamic response of the PNs to a static input from the ORNs, this was ideal for his purpose. In this research, however, a model able to receive dynamic and turbulent input is needed: the robot does not receive controlled air puffs at its sensors, but moves inside the arena, coping with turbulence.

By observing the reaction of the sensors in the wind tunnel, one realizes that their response over time already takes the shape of the previously described alpha function. The idea of this adaptation is to extract these noisy alpha functions when they are significant enough, clean them, and pass them to the MB to be identified.

To achieve this, a conditioning network was prepared for each glomerulus. The typical basic conditioning chain is described in Figure 4.14, although it can be made more complex by combining signals from several glomeruli and mixing them in different ways. In a first stage, the positive derivative of the signal coming from the glomerulus is obtained by feeding a target neuron with the immediate signal and inhibiting it with a delayed copy of the same signal. This allows the system to recognize whether the signal changes and by how much, and thus to know if there is a significant alpha function at the input. Second, this neuron excites another neuron, which has both a threshold and a membrane persistence value that keeps in memory, for some milliseconds, whether there was a significant alpha function at the input, acting as an accumulator for the derivative. This last neuron is, however, limited so that its membrane potential never exceeds its input; the membrane persistence is only used to gradually decrease the activity of the neuron over time. Subsequently, this value is normalized between 0 and 1 in the next layer, with the boundaries set by observing the common limits the sensors reached in the experiments. The activity of this last layer can then be used to modulate the (previously normalized) input from the glomeruli, making the noise of most alpha functions disappear so that the system reacts only when a significant change occurs at its input.
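The sketch below walks through the same chain in plain code: a delayed-inhibition derivative stage, an accumulator with membrane persistence, a normalization stage, and the final gating of the glomerulus signal. The threshold, persistence factor, delay, and normalization bound are illustrative assumptions, and the accumulator is simplified with respect to the neuron described above.

```python
# Illustrative sketch of the per-glomerulus conditioning chain (not iqr code).
# (1) positive derivative via immediate excitation and delayed inhibition,
# (2) accumulator with membrane persistence,
# (3) normalization to [0, 1] using an assumed bound,
# after which the result gates (multiplies) the normalized glomerulus signal.

def condition(signal, delay=5, persistence=0.95, threshold=0.02, norm_max=0.3):
    """Return the gated (cleaned) glomerulus signal."""
    accumulator = 0.0
    gated = []
    for t, value in enumerate(signal):
        delayed = signal[t - delay] if t >= delay else signal[0]
        derivative = max(value - delayed, 0.0)        # (1) positive derivative only
        if derivative > threshold:                    # significant change at the input
            accumulator = max(accumulator, derivative)
        else:
            accumulator *= persistence                # (2) decay through membrane persistence
        scale = min(accumulator / norm_max, 1.0)      # (3) normalize to [0, 1]
        gated.append(scale * value)                   # gate the glomerulus input
    return gated

# A sharp onset opens the gate; in the absence of changes the gate slowly closes.
step = [0.0] * 20 + [0.5] * 40
print(round(max(condition(step)), 2))  # -> 0.5
```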

FIGURE 4.14 TPC model conditioning chain. First, the derivative is obtained via an immediate excitatory connection and a slow inhibitory connection (1). The accumulator group (2) keeps the signal of the derivative for some time. The S group (3) obtains a scale factor that modulates the normalized glomerular input.

Another point is that the original model treats the firing rate as the main quantity to be calculated. Although this is a more realistic approach, simulating all the individual spikes on the computationally limited system of the robot would require a notably high simulation speed, in addition to transforming the sensor input, which is continuous in nature, into spikes. To avoid this and save considerable computational power, the firing rate of each neuron is represented in our model by the activity of a linear threshold neuron, instead of using the integrate-and-fire neurons that a spiking model would require. The implications of this are, however, minimal: spike timing precision is not critical for the shaping of the alpha function, and both amplitude and duration can be accurately represented.

Finally, in the simulations of the original model, 10 glomeruli are used in a ring, with each receptor neuron providing input not only to its corresponding glomerulus, but also to the ones on its left and right. Since the robot has only four glomeruli, if we followed this strategy in our model, the deviation of each glomerulus from the average, as represented by the heterogeneous local neuron, would be almost insignificant and very sensitive to noise. Because of this, each receptor neuron in our model excites only the projection and local neurons of its own glomerulus.

4.3.4.3. Mushroom Body Model

The MB model used to classify the output of the AL is the one developed within the NEUROCHEM framework at the University of Barcelona. It is a linear classifier that takes as input a group of neurons, some active and some not, and maps them to the nearest activity pattern for which it was trained. In other words, it cleans the input and, if the input resembles one of the patterns in its collection, outputs the matching one.

The model is implemented as a C++ module for iqr and has been ported to the robot in order to work with the input of both AL models. The output of the TPC model of the antennal lobe needs to be adapted, transforming it from amplitude and time into a purely spatial spiking pattern. This is done by translating the response of every single projection neuron into a special array of neurons that retains the activity of the group for a specified time. In this way, each of these groups representing a PN acts like a barcode characteristic of the odor blend. The task of the mushroom body is then to match this train of pulses to the patterns for which it was previously trained.
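A minimal sketch of this idea is given below: each trained odor is stored as a binary "barcode" pattern, and an incoming pattern is assigned to the stored pattern with the highest overlap, provided the match exceeds a threshold. The example patterns, the similarity measure, and the threshold are assumptions for illustration and do not reproduce the NEUROCHEM MB implementation.

```python
# Sketch of a nearest-pattern classifier over binary "barcode" patterns.
# Stored prototypes, similarity measure, and acceptance threshold are illustrative.

PROTOTYPES = {
    "ethanol": [1, 0, 1, 1, 0, 0, 1, 0],
    "ammonia": [0, 1, 0, 0, 1, 1, 0, 1],
}

def classify(barcode, min_similarity=0.6):
    """Return the best-matching trained odor, or None if nothing is close enough."""
    best_label, best_score = None, 0.0
    for label, proto in PROTOTYPES.items():
        matches = sum(1 for a, b in zip(barcode, proto) if a == b)
        score = matches / len(proto)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= min_similarity else None

print(classify([1, 0, 1, 0, 0, 0, 1, 0]))  # -> 'ethanol' (7 of 8 bits match)
print(classify([1, 1, 1, 1, 0, 1, 0, 1]))  # -> None (no stored pattern matches well)
```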

4.3.4.4. Localization Model

The localization model is based on the behavioral strategies employed by the male moth when it is trying to locate the female. This model is responsible for receiving the information sent by the classification model as well as by the different sensory modalities of the robotic platform. Based on the information the system receives, it decides which action to take (such as avoid collision, cast, or surge). As this model outputs commands directly to the motors, the behavior of the robot mainly relies on its decisions.

Figure 4.15 shows the localization model developed using iqr. Models in iqr are organized on two different levels: the top level represents the system and contains several processes, and at the process level each model is divided into units that allow us to interact with external devices. The main units of each process are groups of neurons that interact with each other through inhibitory and excitatory connections. As iqr is a large-scale neural simulator, each group represents a population of similar neurons, each connection represents synapses of the same type connecting these groups, and each box in the figure represents a process. Each process marked with an M represents a module, developed in C++, that allows us to exchange data with an external device.

FIGURE 4.15. Image of the main system implemented in iqr.

FIGURE 4.15

Image of the main system implemented in iqr. Each box represents a process. Each box with an M marking represents a module. The arrows indicate exchange of information between processes.

The main module responsible for the communication between the robotic platform and iqr is called Moth. This module receives information from the sensors (compass, wind vane, etc.) and makes it available to all other processes, as well as outputting the desired commands to the robot's motors. The process that gets information from the ultrasonic sensors and decides whether there is a collision or not is the collision detection process; depending on the readings of the sensors, it decides the appropriate direction for the motors so as to avoid collisions. To get the robot's position inside the testing arena, multitracking gets the x, y coordinates of the three tracked points of the robot and transmits them to iqr, while HDA outputs the heading direction of the robot. PID takes as inputs the desired direction of the robot and the current direction and decides the corresponding movement of the motors (left, right, forward, backward) to reach the desired direction. The process that receives the information from the classification model is the classifier. It basically receives information regarding the odors the classification model has detected and informs the cast-and-surge process. It also relays a stop command to the motors, so as to allow the classifier some more time to detect a smell. Cast and surge is the process that decides, based on the information received from the classifier, whether a detected odor is the desired one, and thus performs cast (if no odor, or not the desired one, is detected) or surge (if the target odor is detected by the classifier). Finally, one of the most important processes of the system is the director, as it sets the priority of each process over the motors.

4.3.4.4.1. Director

When two or more processes that run independently in parallel output commands to the same group of neurons (the motors), there is a chance that two or more neurons will command the motors at the same time. To avoid such conflicts between processes that control the robot's motors, we have assigned a special arbitration process; only the process with the highest priority finally sends commands to the motors.

The architecture of the process is displayed in Figure 4.16. The neuronal group Final Motor Output is the group that directly commands the motors of the robot. In our system, the highest priority over all others (including collision avoidance and cast and surge) is given to the joystick, with which a human controls the robot's movements. The second process in the hierarchy is the stop command sent by the classifier, then collision detection, followed by the PID.
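A compact way to express this arbitration is sketched below: command sources are ordered by priority (joystick, classifier stop, collision detection, PID), and the first source that currently issues a command wins access to the Final Motor Output group. The function and source names are illustrative; this is not the iqr director process itself.

```python
# Sketch of the director's priority arbitration over motor commands.
# Sources are checked in fixed priority order; the first active one wins.

PRIORITY = ["joystick", "classifier_stop", "collision_detection", "pid"]

def director(commands):
    """commands: dict mapping source name -> motor command, or None if inactive."""
    for source in PRIORITY:
        command = commands.get(source)
        if command is not None:
            return source, command
    return None, "idle"

# Collision avoidance overrides the PID, but a classifier stop overrides both.
print(director({"joystick": None, "classifier_stop": None,
                "collision_detection": "turn_left", "pid": "forward"}))
print(director({"joystick": None, "classifier_stop": "stop",
                "collision_detection": "turn_left", "pid": "forward"}))
```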

FIGURE 4.16 An iqr scheme of the director process. Each box represents a group of neurons. The group Final Motor Output outputs the corresponding command to the robot's motors. Light gray arrows indicate excitatory connections and dark gray arrows indicate inhibitory connections.

4.3.4.4.2. Collision Detection

One of the most important processes of the system is that of collision detection. It receives as input the readings of the ultrasonic sensors and checks if there is a collision or not. The desired action (turn right or left) is based on both the sensors’ readings and the current direction of the robot.

Figure 4.17 illustrates the iqr scheme of the collision detection process. The ultrasonic sensors measure the distance between the robot and an obstacle; the readings may vary from 0 to 60 cm. We normalize these values in the group Sensors 0–1 and set a threshold above which a collision is detected. For that we need to invert those values (so that 0 means no object in the surrounding area); the neuronal group Collision contains the inverted values of Sensors 0–1.

FIGURE 4.17 An iqr scheme of the collision detection process. Each box represents a group of neurons. The group Sensor Input receives the data acquired from the ultrasonic sensors, and the group Motor Collision outputs the corresponding command to the robot's motors.

If there is no collision, the neural group Decision Compass is inhibited, and thus no command is output to the motors. However, if there is a collision, the system decides, based on the robot's direction, whether to turn left or right and outputs the corresponding command to the motors.
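The logic can be summarized as in the sketch below: distances from the three ultrasonic sensors are normalized to [0, 1], inverted so that larger values mean closer obstacles, and compared against a threshold; on a detected collision the turn direction is chosen from the relative readings. The threshold value and the turning rule (which here ignores the compass) are simplifying assumptions.

```python
# Sketch of the collision detection logic (not the iqr implementation).
# Ultrasonic readings are in centimeters (0-60 cm range).

MAX_RANGE_CM = 60.0
COLLISION_THRESHOLD = 0.7   # assumed: inverted proximity above this triggers avoidance

def collision_command(left_cm, middle_cm, right_cm):
    """Return an avoidance command, or None if the path ahead is clear."""
    proximities = [1.0 - min(d, MAX_RANGE_CM) / MAX_RANGE_CM   # invert: 1 = touching, 0 = clear
                   for d in (left_cm, middle_cm, right_cm)]
    if max(proximities) < COLLISION_THRESHOLD:
        return None                                            # no collision: motors untouched
    left, _, right = proximities
    return "turn_right" if left > right else "turn_left"       # steer away from the closer side

print(collision_command(55, 58, 50))   # clear path -> None
print(collision_command(10, 15, 45))   # obstacle on the left -> 'turn_right'
```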

4.3.4.4.3. Classifier

The classifier process is the bridge between the localization and classification models. It is important to be able to detect that something is an odor, classify it, and distinguish it from other odors. In this way we can define whether an odor is of interest, and therefore develop a form of attraction or repulsion to it.

The classifier process is illustrated in Figure 4.18. All necessary information is received by the 10-neuron group Odor_id. The first five cells map basic odors, while the last five cells are reserved for future use. When the classifier detects an odor, it may need some time to “smell” in order to classify it; thus, it sends the system a stop command for a few seconds. The duration of the stop command varies according to the signal of the detected odor. The stop command is passed to the Stop Motors group, which commands the motors through the director. If the desired smell is detected (in this scheme the desired smell is ammonia), it activates the odor detected cell of the cast-and-surge process and elicits an attractive behavior of the robot toward that smell.

FIGURE 4.18 An iqr scheme of the classifier process. Each box represents a group of neurons. The group Odor_id receives the data acquired from the classifier. Light gray arrows indicate excitatory connections and dark gray arrows indicate inhibitory connections.

4.3.4.4.4. Cast and Surge

The cast-and-surge process is responsible for demonstrating the basic behavior of the robot, based on the complex behavior of the male moth when it is trying to locate the female. The male moth exhibits a specific upwind behavior called surge when it detects a pheromone plume and a crosswind behavior called cast when it loses track of the plume. We have managed to implement the same behavior on our robot by switching from casting to surging every time it encounters the desired odor.

As indicated in Figure 4.19, the robot switches between behaviors (casting, surging) depending on the activity displayed by the neuron odor detected. This neuron receives information from the classifier process and is activated if the target odor is detected. Thus, the default behavior of the robot is casting, which is always active; when the target odor is detected and odor detected displays activity, Begin Surge is enabled and Begin Cast is inhibited. In this way, we exclude the possibility of having both the cast and surge actions active at the same time.
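Reduced to its essentials, the behavioral switch is a small piece of logic, sketched below: casting (a crosswind zigzag) is the always-active default, and activity on the odor detected neuron enables surging (upwind movement) while inhibiting casting. The wind-relative headings and the representation of the wind direction are simplifying assumptions.

```python
# Sketch of the cast-and-surge switch (not the iqr process).
# Cast is the default behavior; detecting the target odor enables surge
# and inhibits cast, so the two are never active at the same time.

def cast_and_surge(odor_detected, upwind_heading_deg, cast_side):
    """Return the active behavior and the desired heading in degrees."""
    if odor_detected:
        return "surge", upwind_heading_deg                     # move straight upwind
    crosswind = 90 if cast_side == "right" else -90            # zigzag orthogonal to the wind
    return "cast", (upwind_heading_deg + crosswind) % 360

print(cast_and_surge(False, 180, "right"))  # -> ('cast', 270)
print(cast_and_surge(True, 180, "right"))   # -> ('surge', 180)
```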

FIGURE 4.19 An iqr scheme of the cast-and-surge process. Each box represents a group of neurons. The group Odor detected receives information from the classifier regarding the detection of the target odor. If it is detected, surge is activated and cast is inhibited.

4.3.5. Experimental Protocol

4.3.5.1. Locomotion and Maneuverability

To assess the maneuverability and locomotion of the robotic platform we placed the robot inside the testing arena with no odors present to perform a simple cast.

4.3.5.2. Static Classification

To appraise the classification performance, we divided the experimental arena into a grid of points with 50 cm resolution. The robot was placed at each point facing upwind, for 1 min without moving, with only the classification model running. The experiment involved two odors: ethanol 20% and ammonia 5%. Measurements of the identified odors during that period were taken, and an odor map was reconstructed for each of the two compounds separately. We called the resulting measure classifications per minute (CPM); it represents the number of simulation cycles during which the classification model identified an odor within the 1 min period at each point.

4.3.5.3. Overall System

To evaluate the integration of both the classification and localization models in our system, the robot was placed inside the experimental arena in front of the ventilators, facing upwind. The task was to perform odor localization in the presence of a target chemical compound placed at the other end of the arena. Our aim was to see whether the robot is able to correctly locate the source of the desired chemical compound, both when only the desired odor is present and when a second chemical compound acts as a distracter (Figure 4.20). The two odors used were ethanol and ammonia in various concentrations, as shown in Tables 4.1 and 4.2.

FIGURE 4.20 Image showing the robot while performing an odor localization and classification task inside the wind tunnel. Two odor sources are placed at the end of the wind tunnel. In this experiment, we used ethanol 20% (left) as the target odor and ammonia 5% (right) as the distracter.

TABLE 4.1. Concentrations Used for Assessing the System with One Odor and No Distracter

TABLE 4.2. Concentrations Used When Assessing the System with Two Odors

4.4. RESULTS

4.4.1. Cast Performance—Sensor Validation

When no odor is present, the default action is casting, a crosswind zigzag. Figure 4.21 shows the robot’s trajectory when no odor is present and it is therefore casting. Our results show a correct crosswind casting movement, as no chemicals were detected. To perform a complete cast of the whole arena, the robot needed 4 min and 51 s.

FIGURE 4.21. Image of the trajectory of the robot when no odor is present (casting). The starting point of the robot, marked by the circle, is located in the central point of the arena, in front of the ventilators facing upwind.

We also assessed the instrumentation capabilities of the robot. Two key sensors that needed assessment were the compass and the wind vane. The compass outputs the robot’s heading as a single active cell, and therefore a single number, where each cell represents 10° of heading. As in the static classification assessment, we recorded the compass readings at each point and compared them with the simulated compass derived from the AnTS tracking system. The heat map in Figure 4.22 displays the deviation of the compass readings from the values they should display. Areas in red indicate the highest deviation and areas in blue the lowest. As the compass is quite sensitive to magnetic fields, the readings may be affected by the electrical wires passing under the wind tunnel. Furthermore, we observe an excessive deviation (of more than 150°) close to the ventilators, which can be explained by the ventilators themselves, as their motion may generate a magnetic field. For this reason we concluded that we would not be able to run successful experiments using the compass readings, and we instead simulated the compass from the AnTS tracking system.
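
A minimal sketch of the deviation computation we describe, assuming the compass cell index encodes 10° steps and the reference heading comes from the tracking system (hypothetical helper, not the chapter's code):

    def compass_deviation(compass_cell, simulated_heading_deg):
        """Angular deviation in degrees between a 10-degree compass cell and a reference heading."""
        compass_deg = compass_cell * 10.0                 # cell index -> heading in degrees
        diff = abs(compass_deg - simulated_heading_deg) % 360.0
        return min(diff, 360.0 - diff)                    # wrap-around: deviation is at most 180 degrees

    print(compass_deviation(compass_cell=1, simulated_heading_deg=350.0))   # 20.0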

FIGURE 4.22. (See color insert.) Heat plot illustrating the deviation of the compass readings in comparison to the simulated compass. The areas of high deviation are marked in red, and of low deviation in blue.

Finally, we wanted to evaluate the readings of the wind vane. We observed that when the robot is moving, the wind vane is affected by the robot’s own movement and thus outputs false readings. We therefore conducted experiments in which the robot stood still at each point in the wind tunnel, facing upwind. The wind vane outputs the direction of the wind relative to the robot as a single number, which corresponds to a single cell in the iqr system. We compared the wind vane’s output with the simulated wind vane data acquired from the tracking system; Figure 4.23 shows a large deviation in the wind vane readings.
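
A minimal sketch of the comparison, under the assumption that the simulated wind vane value is the tunnel airflow direction expressed relative to the robot's heading, both taken from the tracking system (hypothetical names):

    def simulated_wind_vane(wind_direction_deg, robot_heading_deg):
        """Wind direction relative to the robot, in [0, 360)."""
        return (wind_direction_deg - robot_heading_deg) % 360.0

    def wind_vane_error(vane_reading_deg, wind_direction_deg, robot_heading_deg):
        expected = simulated_wind_vane(wind_direction_deg, robot_heading_deg)
        diff = abs(vane_reading_deg - expected) % 360.0
        return min(diff, 360.0 - diff)

    # A robot facing straight upwind should read 0 degrees; a reading of 30 gives a 30-degree error.
    print(wind_vane_error(vane_reading_deg=30.0, wind_direction_deg=180.0, robot_heading_deg=180.0))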

FIGURE 4.23. Error map of the wind vane. On each point marked the robot was facing upwind.

4.4.2. Odor Maps—Classification

The odor maps reconstructed from the static experiments can be seen in Figures 4.24 and 4.25. The X and Y axes represent the surface of the tunnel in centimeters, while the color scale indicates the classifications per minute value recorded at that position. We observe that the reconstructed odor plume corresponds to the right side of the tunnel, where the source was placed.

FIGURE 4.24. Odor map—ethanol detected: CPM in this case are considered as the number of cycles in which the classifier detected ethanol, with both odors in the air. The concentration used was 20% of ethanol in water.

FIGURE 4.25. Odor map—ammonia detected: CPM in this case are considered as the number of cycles in which the classifier detected ammonia, with both odors in the air. The concentration used was 5% of ammonia in water.

At first glance we can see that the detection of ammonia is much better than that of ethanol. While false readings for ammonia are always lower than 500 CPM, those for ethanol reach values around 1000 CPM. This translates into a lower localization success rate for ammonia, since ammonia is misclassified as ethanol roughly twice as often as ethanol is misclassified as ammonia when working together with the localization model.

In any case, the CPM readings for the odor actually present were always higher than those for the wrong one, making the system a positive classifier.
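
This observation amounts to a simple decision rule; a minimal sketch with hypothetical CPM values:

    def classify_by_cpm(cpm_by_odor):
        """Return the odor with the highest classifications-per-minute count."""
        return max(cpm_by_odor, key=cpm_by_odor.get)

    print(classify_by_cpm({"ammonia": 2400, "ethanol": 950}))   # 'ammonia'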

4.4.3. Localization Assessment

We defined a trial as successful when the robot reaches the target odor source while surging and the distance between the robot and the odor source does not exceed the length of the robot. A trial is successful even if the robot reaches the odor source laterally, as long as it is surging. We adopted this criterion because the robot’s compass has a small error (around 20°), and in some cases a surging robot reached the odor source laterally due to either its position or this compass error.
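
A minimal sketch of this success criterion, with a hypothetical robot length and positions:

    import math

    ROBOT_LENGTH_M = 0.4   # assumed value; the robot length is not restated here

    def trial_successful(robot_xy, source_xy, is_surging, robot_length=ROBOT_LENGTH_M):
        """A trial succeeds if the robot is surging and ends within one robot length of the source."""
        return is_surging and math.dist(robot_xy, source_xy) <= robot_length

    print(trial_successful((1.0, 0.3), (1.2, 0.3), is_surging=True))   # True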

The robot had to perform the cast-and-surge task under two conditions: with only the target odor present, and with the target odor plus a distracter. For the single-odor set of experiments, we conducted 30 trials for ethanol and 30 trials for ammonia (10 trials per concentration). Our results show an overall success rate of 80% for ammonia and 86% for ethanol. Individually, the robot correctly located the ammonia source in 70% of trials at the 1% concentration, 80% at 2%, and 90% at 5%. Similar results were found for ethanol, with rates of 90% for ethanol 5%, 80% for 10%, and 90% for 20%. Figure 4.26 shows the trajectory of the robot during a successful trial for ethanol and ammonia, respectively.

FIGURE 4.26. Image of a successful trial of the robot when it is locating ethanol 20% (a) and ammonia 5% (b). The starting point of the robot is marked by a circle and the location of the odor source is marked by a square. The moments where the robot has classified …

The same task was then performed in the presence of both a target odor and a distracter. In total, we conducted 30 trials with ethanol as the target (10 trials per concentration) and ammonia 5% as the distracter, and 30 trials with ammonia as the target and ethanol 20% as the distracter. Our results show an overall success rate of 90% for ammonia and 80% for ethanol. For ammonia, there was a constant 90% rate of correct localizations across all three concentrations, whereas for ethanol, there were 70, 80, and 90% successful localizations for the 5, 10, and 20% concentrations, respectively. Figure 4.27 illustrates examples of correct localizations for ethanol and ammonia in the presence of a distracter.

FIGURE 4.27. Image of a successful trial of the robot when it is locating ethanol 10% (a) and ammonia 5% (b) in the presence of a distractor. The starting point of the robot is marked by a circle and the location of the odor source is marked by a square. The distractor’s …

Our results indicate success rates well above the 60% random success rate, both for a single odor and for a target odor with a distracter. These results suggest that our models are able not only to correctly classify a chemical compound but also to locate its source.

4.5. CONCLUSIONS

In this chapter we have demonstrated the implementation of odor localization and classification models on an autonomous robot. The biological system on which we based our design is the male moth. Our main goal was to design a novel robotic system able to employ moth-like chemo-search strategies. The localization model therefore follows the principles of the so-called cast-and-surge behavior of the male moth when it is trying to locate its mate. The classification model is based on the underlying neural structures of the first stages of the insect’s olfactory pathway. Following the structure of the antenna, antennal lobe, and mushroom body, we have shown that the temporal population code (TPC) technique facilitates the discrimination of odors, is of practical use when processing real-time signals, and may also be present in the insect’s brain. Both localization and classification models were implemented using the neuronal simulator iqr.

To maximize the autonomy of the robotic system, we created a custom-made robotic platform with an embedded computer. Early experiments showed that the platform’s initial design did not favor maneuverability, movement control, or speed. We therefore redesigned the platform, introducing a reduction gear system to achieve optimal speed. From the first set of experiments, we observed an offset in the readings of both the custom-made wind vane and the compass. The wind vane was too sensitive when the robot was moving and not accurate enough when the robot was still, while the compass may have been affected by the electrical wires passing under the wind tunnel arena. To overcome these problems, we simulated the robot’s orientation and the airflow direction based on information received from the tracking system.

We have shown that our system is able to perform moth-like behavior. In the absence of an odor, the robot casts, whereas when the target odor is detected, it switches to surging. In almost all trials we observed both types of behavior, cast and surge, in the presence of a single target odor. Similar behavior was observed in the presence of both a target odor and a distracter, where the robot successfully identified and located the target odor while ignoring the distracter, suggesting that our model is quite similar to the techniques employed by the male moth.

Although some early steps have been taken toward adding multimodal capabilities to the existing system, by introducing vision and landmark navigation, further refinements of the model and additional experiments are needed. In this way, our system will not only be able to successfully identify an odor and locate its source, but it will also be able to navigate through a complex background of visual landmarks, remember the path it followed by recognizing specific visual cues, and successfully return to its “nest.” A robot’s task may thus be to locate the source of a leakage of a hazardous gas and then return safely to its starting point.

ACKNOWLEDGMENTS

This work was supported by the European Community’s Seventh Framework Program (FP7/2007-2013) under grant agreement 216916, Biologically Inspired Computation for Chemical Sensing (NEUROCHEM).

REFERENCES

  1. Baker T.C, Haynes K.F. Manoeuvres used by flying male oriental fruit moths to relocate a sex pheromone plume in an experimentally shifted wind-field. Physiological Entomology. 1987;12:263–279.
  2. Beccherelli R, Zampetti E, Pantalei S, Bernabei M, Persaud K.C. Very large chemical sensor array for mimicking biological olfaction. Olfaction and Electronic Nose: Proceedings of the 13th International Symposium on Olfaction and Electronic Nose. 2009;1137(1):155–158.
  3. Bermudez i Badia S, Bernardet U, Guanella A, Pyk P, Verschure P.F. A biologically based chemo-sensing UAV for humanitarian demining. International Journal of Advanced Robotic Systems. 2007;4(2):187–198.
  4. Bermúdez i Badia S, Verschure P.F.M.J. Learning from the moth: A comparative study of robot based odor source localization strategies. AIP Conference Proceedings. 2009;1137:163–166. DOI: http://dx.doi.org/10.1063/1.3156498.
  5. Bernardet U, Verschure P.F.M.J. iqr: A tool for the construction of multi-level simulations of brain and behaviour. Neuroinformatics. 2010;8:113–134. [PubMed: 20502987]
  6. Christensen T.A, Waldrop B.R, Harrow I.D, Hildebrand J.G. Local interneurons and information processing in the olfactory glomeruli of the moth Manduca sexta. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology. 1993;173(4):385–399. DOI: 10.1007/BF00193512. [PubMed: 8254565]
  7. Distante C, Indiveri G, Reina G. An application of mobile robotics for olfactory monitoring of hazardous industrial sites. Industrial Robot: An International Journal. 2009;36(1):51–59.
  8. Gomez-Marin A, Duistermars B, Frye M.A, Louis M. Mechanisms of odor-tracking: Multiple sensors for enhanced perception and behavior. Frontiers in Cellular Neuroscience. 2010;4(6) [PMC free article: PMC2854573] [PubMed: 20407585]
  9. Grasso F.W, Consi T.R, Mountain D.C, Atema J. Biomimetic robot lobster performs chemo-orientation in turbulence using a pair of spatially separated sensors: Progress and challenges. Robotics and Autonomous Systems. 2000;30(1–2):115–131.
  10. Hansson B.S. A bug’s smell – research into insect olfaction. Trends in Neurosciences. 2002;25(5):270–274. [PubMed: 11972965]
  11. Knüssel P. Dynamic neuronal representations of static sensory stimuli. PhD thesis. ETH, Switzerland; 2006.
  12. Knüssel P, Carlsson M.A, Hansson B.S, Pearce T.C, Verschure P.F.M.J. Time and space are complementary encoding dimensions in the moth antennal lobe. Network (Bristol, England). 2007;18(1):35–62. [PubMed: 17454681]
  13. Kohnotoh A, Ishida H. Active stereo olfactory sensing system for localization of gas/odor source. In: Proceedings of the 2008 Seventh International Conference on Machine Learning and Applications (ICMLA ’08). Washington, DC: IEEE Computer Society; 2008. pp. 476–481.
  14. Kowadlo G, Russell R.A. Robot odor localization: A taxonomy and survey. International Journal of Robotics Research. 2008;27(8):869–894.
  15. Kuwana Y, Nagasawa S, Shimoyama I, Kanzaki R. Synthesis of the pheromone-oriented behaviour of silkworm moths by a mobile robot with moth antennae as pheromone sensors. Biosensors and Bioelectronics. 1999;14(2):195–202.
  16. Laurent G. A systems perspective on early olfactory coding. Science. 1999;286(5440):723–728. [PubMed: 10531051]
  17. Lilienthal A, Duckett T. Experimental analysis of smelling Braitenberg vehicles. In: Proceedings of the IEEE International Conference on Advanced Robotics (ICAR 2003). Coimbra, Portugal; 2003. pp. 375–380.
  18. Live Debian. Official website for Debian Live. http://live.debian.net/
  19. Lopez L.L, Vouloutsi V, Escuredo Chimeno A, Marcos E, Bermúdez i Badia S, Mathews Z, Verschure P.F.M.J, Ziyatdinov A, Perera i Lluna A. Moth-like chemo-source localization and classification on an indoor autonomous robot. In: Pramatarova L.D, editor. On Biomimetics. InTech; 2011. http://www.intechopen.com/books/on-biomimetics/moth-like-chemo-source-localization-and-classification-on-an-indoor-autonomous-robot.
  20. Loutfi A, Coradeschi S. Odor recognition for intelligent systems. IEEE Intelligent Systems. 2008;23(1):41–48.
  21. Mykytowycz R. Olfaction—A link with the past. Journal of Human Evolution. 1985;14(1):75–90.
  22. Pearce T.C, Chong K, Verschure P.F.M.J, Bermúdez i Badia S, Carlsson M.A, Chanie E, Hansson B.S. Chemotactic search in complex environments. In: Electronic Noses & Sensors for the Detection of Explosives. Vol. 159 of NATO Science Series II: Mathematics, Physics and Chemistry. Dordrecht, The Netherlands: Springer; 2004. pp. 181–207.
  23. Purves D, Augustine G.J, Fitzpatrick D. Neuroscience. 2nd ed. Sunderland, MA: Sinauer Associates; 2001.
  24. Pyk P, Bermudez i Badia S, Bernardet U, Knusel P, Carlsson M, Gu J, Chanie E, Hansson B.S, Pearce T.C, Verschure P.F.M.J. An artificial moth: Chemical source localization using a robot based neuronal model of moth optomotor anemototactic search. Autonomous Robots. 2006;20(3):197–213.
  25. R Development Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2011.
  26. Rachkov M.Y, Marques L, de Almeida A. Multisensor demining robot. Autonomous Robots. 2005;18(3):275–291.
  27. Schleef D, Hess F.M, Bruyninckx H. The control and measurement device interface handbook. 2003. http://www.comedi.org/doc/
  28. Todd J.L, Baker T.C. Function of peripheral olfactory organs. In: Hansson B.S, editor. Insect olfaction. Berlin: Springer-Verlag; 1999. pp. 67–96.
  29. Trincavelli M, Reggente M, Coradeschi S, Loutfi A, Ishida H, Lilienthal A.J. Towards environmental monitoring with mobile robots. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008). Nice, France; 2008. pp. 2210–2215.
  30. Waldrop B, Christensen T.A, Hildebrand J.G. GABA-mediated synaptic inhibition of projection neurons in the antennal lobes of the sphinx moth Manduca sexta. Journal of Comparative Physiology A: Sensory, Neural, and Behavioral Physiology. 1987;161(1):23–32. [PubMed: 3039128]
© 2013 by Taylor & Francis Group, LLC.