NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Toxicology Program. NTP Technical Report on the Toxicology and Carcinogenesis Studies in Sprague Dawley (Hsd:Sprague Dawley® SD®) Rats Exposed to Whole-body Radio Frequency Radiation at a Frequency (900 MHz) and Modulations (GSM and CDMA) Used by Cell Phones: Technical Report 595 [Internet]. Research Triangle Park (NC): National Toxicology Program; 2018 Nov.


Introduction

All consumer cell phone devices function through the transmission of radio waves on a cellular network. The cellular network itself is composed of a collection of individual “cells” that include a fixed-location transceiver (a device that transmits and receives radio signals), also referred to as a cell tower. The collection of adjacent smaller “cells” in the cellular network enables cell phones and towers to use low-power transmitters, thereby allowing the same frequencies to be reused in non-adjacent cells without interference. Together the individual “cells” comprise the cellular network that provides coverage over a large geographical area. In the United States, two major nationwide cellular technologies in use are CDMA (Code Division Multiple Access) and GSM (Global System for Mobile Communications). Technologies are rapidly evolving to meet consumers’ demand for better coverage, increased call quality, faster data transfer rates, and increased accessibility; in the context of this report, however, the terms CDMA and GSM group together multiple, sometimes successive, technologies that are implemented by the service providers that maintain the service networks. In the United States, Sprint® and Verizon® networks use CDMA; AT&T® and T-Mobile® use GSM.

For both the GSM and CDMA technologies, transmissions occur at specific radio frequencies, which are allocated and regulated by the Federal Communications Commission (FCC). While the transmission of radio signals (radiofrequency radiation) can occur at the same frequencies for both technologies, they differ in the method by which information is incorporated and transmitted within frequency bands. In telecommunications, these are referred to as signal modulations. Because this process differs for CDMA and GSM, cell phones are not interchangeable between the two network technologies and will only function on one or the other.

The constantly evolving cellular technologies are commonly referred to by their successive generations (G). The first generation (1G) devices were analog phones, as opposed to the digital phones of today. Digital voice systems of the second generation (2G) replaced the analog system of 1G. At the time that these studies were being designed, 2G technology was the primary technology in use and 3G technologies were emerging. Therefore, the current studies were conducted using modulated signals that replicated the 2G and 3G technology in use at the time. Over the course of the studies, however, more advanced 4G technologies were developed. Currently, all of these technologies (2G, 3G, and 4G) are still actively in use for mobile communication applications. 2G and 3G are still the basis for voice calling applications, while 3G and 4G technologies were primarily developed to offer faster access to the internet. Some of the 3G technology is based on 2G technology. While 2G technology is being phased out in the United States, this technology will remain in use in other places throughout the world. More advanced and efficient technologies that are currently in development and not yet deployed, termed 5G, will utilize higher frequencies than existing technologies.

Radio Frequency Radiation (RFR) Measurement and Applications

RFR is a form of nonionizing electromagnetic energy that consists of propagating electromagnetic waves of oscillating electric (E-) and magnetic (H-) fields that move together at the speed of light. RF waves are characterized by their wavelength (the distance covered by one complete cycle of the electromagnetic wave) and their frequency (the number of electromagnetic waves passing a given point in 1 second). The frequency of an RF signal is expressed in terms of Hertz (Hz), where one Hz is equivalent to one cycle per second. RF radiation refers to the region of the electromagnetic spectrum from 3 kilohertz (3 kHz) to 300 gigahertz (300 GHz) (Figure 1). As opposed to ionizing radiation, which contains enough energy when passing through matter to break chemical bonds or remove an electron from an atom or molecule to produce charged ions, nonionizing radiation has at most sufficient energy for excitation of an electron to a higher energy state.
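
The frequency and wavelength defined above are linked by the speed of light (wavelength = c / frequency). A short illustrative calculation (not part of the original report) shows the span of wavelengths covered by the 3 kHz to 300 GHz RFR region, including the 900 MHz band used in these studies:

```python
# Illustrative arithmetic: wavelength = c / frequency for the RFR region.

C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in meters for a given frequency in hertz."""
    return C / frequency_hz

print(wavelength_m(3e3))     # ~100 km at the 3 kHz lower bound
print(wavelength_m(900e6))   # ~0.33 m in the 900 MHz band used in these studies
print(wavelength_m(300e9))   # ~1 mm at the 300 GHz upper bound
```

The six-orders-of-magnitude spread in wavelength is one reason different frequency bands suit different applications, as discussed below.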

Figure 1

Figure 1

Electromagnetic Spectrum

The intensity of an RF field can be expressed by its electric and magnetic components and is measured in volts per meter (V/m) for electric fields and amperes per meter (A/m) for magnetic fields. Another measure of RFR is the power density, which is defined as the power per unit area and is expressed in watts per square meter (W/m2). The quantity used to describe the amount of RFR energy absorbed by the body is referred to as the specific absorption rate (SAR), which is expressed in watts per kilogram (W/kg). SAR is a function of the geometry and the dielectric loss properties of biological tissues absorbing the energy (which results from the interaction of electro-magnetic radiation with constituents at the cellular and molecular level), the square of the strength of the induced E-field, and the mass density of the exposed tissue. The SAR value is derived by averaging the absorbed energy over a specific volume (typically 1 gram, 10 grams, or the whole body for regulatory purposes).
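
The dependence of SAR on the induced E-field, tissue properties, and mass density described above is commonly written as SAR = σ|E|²/ρ (a standard point-SAR expression, not a formula given in this report). A minimal sketch, with assumed muscle-like tissue values at roughly 900 MHz used purely for illustration:

```python
# Sketch of the standard point-SAR expression consistent with the text:
# SAR = sigma * |E|^2 / rho.
# The tissue values in the example are illustrative assumptions, not measured data.

def point_sar(sigma_s_per_m: float, e_rms_v_per_m: float, rho_kg_per_m3: float) -> float:
    """Local specific absorption rate in W/kg.

    sigma_s_per_m  -- tissue conductivity (S/m), reflecting dielectric loss
    e_rms_v_per_m  -- RMS induced electric field strength (V/m)
    rho_kg_per_m3  -- mass density of the exposed tissue (kg/m^3)
    """
    return sigma_s_per_m * e_rms_v_per_m**2 / rho_kg_per_m3

# Assumed muscle-like properties near 900 MHz:
sar = point_sar(sigma_s_per_m=0.94, e_rms_v_per_m=40.0, rho_kg_per_m3=1040.0)
print(round(sar, 3))  # -> 1.446 W/kg
```

For regulatory purposes this local quantity is then averaged over 1 gram, 10 grams, or the whole body, as noted above.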

Different applications utilize different frequency bands within the RF portion of the electromagnetic spectrum. RF frequencies for radio and television are in the 145 kHz to 850 MHz range. Wireless communications and networking typically utilize frequencies between 800 MHz and 6 GHz. Cell phone networks that are currently in use (2G, 3G, and 4G) utilize frequencies in the range of 600 MHz to 5.7 GHz. In the United States, wireless telecommunications networks and devices operate in bands at frequencies of nominally 800 MHz, 850 MHz, or 1,900 MHz for 2G; 850 MHz, 1,700 MHz, 1,900 MHz, or 2,100 MHz for 3G; and 600 MHz, 700 MHz, 800 MHz, 850 MHz, 1,700 MHz, 1,900 MHz, 2,100 MHz, 2,300 MHz, 2,500 MHz, 5,200 MHz, or 5,700 MHz for 4G. The next generation, i.e., the 5th generation of wireless communications, will also utilize the RFR spectrum above 6 GHz. Other terms are also used in the literature for parts of the RFR spectrum, e.g., microwaves for frequencies above 1 GHz and millimeter waves for frequencies above 30 GHz.

Cell Phones and RFR

Cell phones and other commonly used wireless communication devices are essentially two-way radios that contain both a receiver and a transmitter. When a user makes a call, voice sound is converted into digital information. The information is imposed on to RFR and transmitted to the nearest base station, commonly referred to as a cell tower, that receives and transmits RF signals and forms a bridge to the rest of the communications infrastructure. The base station receives and transmits radio signals in its area or “cell.” As the user moves around, the radio signal can be relayed within the communications network from one “cell” of coverage to another, maintaining call connection. The call is routed through the communications network either through a landline phone or another wireless phone again using radio signals. To conserve energy and minimize interference, mobile phones automatically regulate the RFR signal strength, and hence the emitted field, to the lowest power level possible for a connection to be made. However, in a poor transmission environment (caused by, e.g., a distant base station, presence of obstacles between the base station and the mobile phone, or interference from adjacent cells), there is a higher output power and emission from the mobile phone in order to make a connection. Therefore, the better the connection, the lower the power output of the wireless device.

Cell Phone RFR Signal Modulation

In wireless telecommunications, modulation is the process of conveying digital or analog signals or information (the message) by varying one or more parameters of another signal (the carrier), typically at a much higher frequency. The modulated carrier contains complete information about the message signal and the original message can be recovered by suitable signal processing of the signal when received at a remote location (base station). One of the main goals of the modulation used in mass wireless communications systems is to transfer as much data as possible in the least amount of spectrum. Over the years, multiple modulation techniques have emerged to achieve and improve spectral efficiency, either when considering a single user in isolation or multiple users simultaneously using the same spectrum.

The first generation (1G) of wireless technology, introduced in the 1980s, used analog frequency modulation for voice calls. This technology was replaced by second-generation (2G) networks that were digital, provided encryption, were significantly more efficient, and introduced data services [i.e., text messages, picture messages, and Multimedia Message Service (MMS)] in addition to voice calls. The 2G networks became commercially available in 1992 and used three common multiple access technologies for accommodating multiple simultaneous users:

  • Frequency Division Multiple Access (FDMA): The available spectrum is split into a number of distinct parts (channels), each large enough to accommodate a single user or call without overlap; each user utilizes his or her channel 100% of the time for the duration of the call or message. The channels are normally of equal bandwidth.
  • Time Division Multiple Access (TDMA): The available spectrum is allocated to a single channel, and each user or call is assigned a certain portion of time.
  • Code Division Multiple Access (CDMA): The available spectrum is allocated to a single channel, and each user or call is assigned a unique sequence code that spreads the message over the available spectrum. All users use the whole of the spectrum all of the time. At the receiver, the same unique sequence code is used to recover the desired signal from the sum of all the user calls.
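
The CDMA principle above can be made concrete with a toy example (illustrative only, not a real air interface): two users transmit simultaneously in the same spectrum, and each message is recovered at the receiver by correlating the summed signal with that user's spreading code.

```python
# Toy CDMA sketch: two users share the channel at the same time; orthogonal
# spreading codes let the receiver separate their messages from the sum.

code_a = [1, 1, -1, -1]   # user A's spreading code
code_b = [1, -1, 1, -1]   # user B's spreading code (orthogonal to A's)

def spread(bits, code):
    """Multiply each data bit (+1/-1) by the faster spreading code."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate the received chips with one user's code to recover its bits."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(ch * c for ch, c in zip(chips[i:i + n], code))
        out.append(1 if corr > 0 else -1)
    return out

bits_a, bits_b = [1, -1, 1], [-1, -1, 1]
# The "channel" carries the sum of both users' spread signals simultaneously:
channel = [a + b for a, b in zip(spread(bits_a, code_a), spread(bits_b, code_b))]

print(despread(channel, code_a))  # -> [1, -1, 1]   (user A recovered)
print(despread(channel, code_b))  # -> [-1, -1, 1]  (user B recovered)
```

Because the codes are orthogonal, each correlation cancels the other user's contribution entirely, which is why all users can occupy the whole spectrum all of the time.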

2G systems used a combination of FDMA/TDMA for GSM or various versions of CDMA, for example, cdmaOne (IS-95). While the 2G technology continues to operate, subsequent third and fourth generations of network technologies were introduced in 1998 (3G), 2006 (4G), and 2011 [4G-Long Term Evolution (LTE)]. These technologies were developed to support increased data demands for multimedia access with increased bandwidth and transfer rates to accommodate internet-based broadband applications, including video conferencing, streaming video, sending and receiving faxes, and downloading e-mail messages with attachments. With the introduction of 3G technology, “smartphones” were developed. With these devices, the newer technologies were overlaid with 2G to support multiple access modes (2G, 3G, and 4G).2 Although the 2G technologies will be phased out over time and replaced by newer technologies, the current wireless communication networks continue to utilize 2G for voice and text.

All 3G systems utilize CDMA/WCDMA technology and fall into two groups, complying with either the 3rd Generation Partnership Project (3GPP) or the 3GPP2 family of standards. Universal Mobile Telecommunications Service (UMTS), Wideband Code Division Multiple Access (WCDMA), and Time Division-Synchronous Code Division Multiple Access (TD-SCDMA) are 3GPP variants; CDMA2000 (which is based on 2G cdmaOne) is 3GPP2. 4G systems use Orthogonal Frequency Division Multiplexing (OFDM) within the E-UTRA (LTE-Advanced) or Worldwide Interoperability for Microwave Access (WiMAX) standards.

Modulation Schemes (GSM and CDMA)

The Global System for Mobile Communications (originally Groupe Spécial Mobile; GSM) was developed to establish a digital standard for compatibility throughout Europe. GSM is a circuit-switched system that uses both FDMA and TDMA technologies. The frequency division mechanism divides the GSM band into 200 kHz-wide channels. The time division mechanism enables up to eight different time slots (voice channels) per frequency channel, wherein a single cell phone transmits in only one out of eight available time slots during a voice communication. This introduces a pulsed signal shape with a pulse repetition rate of 217 Hz. Such a TDMA frame has a length of 4.6 milliseconds (ms) (Figure 2), and 26 TDMA frames make up a multiframe with a 120 ms duration (Figure 3). During a multiframe, a mobile phone transmits in 25 out of 26 possible time slots. This TDMA frame structure causes significant low frequency amplitude modulation components to be superimposed on the RF carrier at 8.3 and 217 Hz. Furthermore, as a direct consequence of the TDMA structure, the peak power and instantaneous SAR are 8.3× higher than the average power and SAR; note that the average power is the metric of importance for SAR determination within the context of the current safety standards.
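
The timing numbers quoted above are mutually consistent, which a few lines of arithmetic (illustrative only) can confirm: the 4.6 ms frame yields the 217 Hz burst rate, the 120 ms multiframe yields the 8.3 Hz component, and transmitting in 1 of 8 slots over 25 of 26 frames yields the 8.3× peak-to-average factor.

```python
# Sanity-checking the GSM TDMA timing figures quoted in the text.

frame_ms = 120 / 26               # one TDMA frame: ~4.615 ms (text rounds to 4.6 ms)
pulse_rate_hz = 1000 / frame_ms   # burst repetition rate: ~216.7 Hz (~217 Hz)
multiframe_hz = 1000 / 120        # multiframe modulation component: ~8.33 Hz

# A handset transmits in 1 of 8 slots per frame, and in 25 of 26 frames of a
# multiframe, so peak power exceeds the time-averaged power by:
peak_to_avg = 8 * 26 / 25         # ~8.32, the "8.3x" factor in the text

print(round(frame_ms, 3))         # -> 4.615
print(round(pulse_rate_hz, 1))    # -> 216.7
print(round(multiframe_hz, 2))    # -> 8.33
print(round(peak_to_avg, 2))      # -> 8.32
```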

Figure 2. GSM Frame Showing Peak and Average Transmit Powers

Figure 3. GSM Multiframe Showing Missing 26th Frame

With GSM, the duplexing between uplink (when the handset transmits to the base station) and downlink (when the base station transmits to the handset) is implemented in the frequency and time domain. Constant frequency spacing is maintained between up and downlink frequencies; in the United States the uplink is 1,850 to 1,910 MHz, and the downlink 1,930 to 1,990 MHz. The uplink and downlink frequencies are chosen according to the cell (area that is covered by a base station) into which the mobile is registered. In order to minimize interference between neighboring cells, a frequency reuse policy is applied. In this approach, when a mobile phone moves from one cell into an adjacent cell, frequencies used for data uplink and downlink change in association with this movement (i.e., transmission frequencies change at handover from one cell to another).

CDMA technology uses a form of coded transmission known as Direct Sequence Spread Spectrum (DSSS), in which data are multiplied by a much faster pseudorandom code before being modulated onto the carrier. The effect of the multiplication is to spread the message across the whole of the frequency band available for use at a given time in a given cell, but with very specific characteristics. CDMA signal access technology is based on code division separation of mobile stations as well as base stations. This implies differences in signal structure compared to GSM. For example, in the IS-95 forward link (downlink), a set of 64 Walsh codes (which are deterministic and orthogonal) is applied to spread/separate the individual channels in the downlink of a cell. After the orthogonal spreading, a short (16-bit) Pseudo Noise code is applied to further spread the signal and identify the cell. Hence, a separation of neighboring cells in the frequency domain is no longer necessary, and there is no need for the mobile station to change its transmission frequency during the transition from one cell into another. As with GSM systems, the duplexing between the forward and reverse links is implemented in the frequency domain. In CDMA systems, efficient power control is crucial. Because all mobile stations transmit and interfere in the same frequency channel, each mobile device decreases the signal-to-noise ratio of all the other mobile devices. Hence, the output power of a mobile phone should be kept at the minimum that guarantees good transmission quality.
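
The 64 orthogonal Walsh codes mentioned above can be constructed by the standard Sylvester/Hadamard recursion; the sketch below (illustrative, not the IS-95 implementation) builds them and verifies the orthogonality that makes channel separation possible.

```python
# Building the 64 Walsh codes by the Sylvester/Hadamard recursion and
# checking that distinct codes are orthogonal (illustrative sketch only).

def walsh_matrix(n: int):
    """Return the n x n Walsh-Hadamard matrix with +1/-1 entries.

    n must be a power of 2; each row is one Walsh code.
    """
    h = [[1]]
    while len(h) < n:
        # Double the matrix: [[H, H], [H, -H]]
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

w = walsh_matrix(64)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Distinct rows are orthogonal (dot product 0); a row with itself gives 64.
print(dot(w[3], w[17]))  # -> 0
print(dot(w[5], w[5]))   # -> 64
```

Orthogonality is what lets the receiver null out every other downlink channel exactly, so neighboring cells need only differ by their Pseudo Noise overlay rather than by frequency.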

IS-95, also known as cdmaOne, was developed by Qualcomm (San Diego, CA) as the first 2G CDMA-based digital cellular technology. The term IS-95 generally applies to a protocol revision (P_REV = 1) that was adopted as a standard (TIA-EIA-95) by the Telecommunications Industry Association (TIA) in 1995. Over time, subsequent iterations of the IS-95 protocol such as IS-95A, TSB-74, and IS-95B were developed, each with incremental improvements over the previous protocols. Later, more advanced versions of the CDMA technology evolved to include IS-2000, which incorporated much higher transfer rates than the previous 2G versions. For a further explanation of these technologies and how the NTP exposure system was designed to reproduce similar GSM and CDMA cell phone RFR exposures, please see the video presentation (day 1 at 54 minutes) by Dr. Myles Capstick.3

Sources, Use, and Human Exposure

The predominant source of exposure to RFR for the majority of the population is through use of telecommunications and mobile internet access applications for wireless devices, and the highest human exposure to cell phone RFR occurs through the use of cellular phone handsets and other wireless devices such as tablets and laptop computers held in close proximity to the human body. Aside from telecommunications, there are other man-made applications of RFR, which include microwave ovens, radar, industrial heating and sealing, medical diagnostics [Magnetic Resonance Imaging (MRI)] and therapy (surgical diathermy and ablation), and remote tracking or detection of objects [anti-theft, Radio Frequency Identification (RFID)]. There are also natural sources of RFR such as atmospheric electrical discharges (lightning) and solar and cosmic radiation. RFR exposures from natural sources are much smaller and tend to be spread over a much wider range of frequencies compared to exposures to fields from man-made radiation sources.4

The use of cell phones has become widespread over the last two decades, and concern has been expressed regarding the potential health risks associated with use specifically by children. According to a Pew Research poll,5 approximately 95% of adult Americans own a cell phone. As of December 2015, the number of active wireless subscriber connections was 377.9 million, which exceeded the population of the United States.6 According to the same survey, 49.3% of households in the United States utilize only a wireless phone, and not a landline.

There has been a great deal of focus on the possibility of increased risk of brain cancer because of the traditional use of these devices in close proximity (0 to 2 cm) to the head. In general (apart from the case when very close to the antenna), the level of RFR exposure from a cell phone is inversely proportional to the square of the distance of the body from the device’s antenna, resulting in the highest SAR levels in the parts of the body nearest to the antenna.
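
The inverse-square relation described above implies that exposure falls off very steeply with distance from the antenna. A small illustrative calculation (far-field, free-space assumption; not valid very close to the antenna, as noted in the text):

```python
# Illustrative inverse-square scaling of far-field exposure with distance
# from a device's antenna. Not applicable in the near field very close
# to the antenna.

def relative_exposure(d_ref_cm: float, d_cm: float) -> float:
    """Exposure at distance d_cm relative to the exposure at d_ref_cm."""
    return (d_ref_cm / d_cm) ** 2

# Moving a handset from 2 cm to 20 cm from the head cuts exposure ~100-fold:
print(round(relative_exposure(2.0, 20.0), 4))  # -> 0.01
```

This steep falloff is why the tissues nearest the antenna, such as the head during traditional handset use, receive the highest SAR.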

Accurate and detailed measurements of RFR exposure in humans are difficult to estimate because the output power of wireless devices constantly varies depending on several factors. Overall, the network carrier adjusts the output power of each connected device to the lowest level that is still compatible with a good quality signal. This adaptive power control occurs continuously and is achieved by a logarithmic downscaling of the time-averaged power from the maximum of 0.125 or 0.25 W to a level as low as 1 mW. When in use, the output power (and subsequent exposure to cell phone RFR) from the device is increased compared to that in “standby” mode. Therefore, exposures are related to the amount of active time a user spends on the device. The output power of a device changes based on the signal received at the base station. Decreases in signal strength result in higher output powers. Therefore, there are increases in the output power as the distance between the device and the base station increases, if there are physical obstacles between the device and the base station, reflections off buildings or other structures, and during handovers from one cell to another in the case of GSM. The proximity of the device to the body and the type, number, and position of antennas in the device are other important factors affecting the amount of exposure to RFR.
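
The adaptive power-control range quoted above, from a 0.125 or 0.25 W time-averaged maximum down to about 1 mW, spans roughly 21 to 24 dB, as this illustrative arithmetic shows:

```python
# Arithmetic for the adaptive power-control range described in the text:
# time-averaged output scaled down logarithmically from its maximum to ~1 mW.
import math

def db_range(p_max_w: float, p_min_w: float) -> float:
    """Power ratio expressed in decibels."""
    return 10 * math.log10(p_max_w / p_min_w)

print(round(db_range(0.25, 0.001), 1))   # -> 24.0 dB from a 0.25 W maximum
print(round(db_range(0.125, 0.001), 1))  # -> 21.0 dB from a 0.125 W maximum
```

In other words, a handset near a base station with a clear signal can emit two orders of magnitude less power than the same handset struggling for a connection.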

Potential exposure to RFR used in cell phones also occurs from the cell phone towers that form the network. While modern towers emit substantially more power than devices, exposures from base station antennas are considerably lower to users than from the handheld device. Typically, base station antennas are placed at heights of 50 to 200 feet, in order to adequately cover a cell. The antennas direct RF energy toward the horizon, with some downward tilt. As with all forms of radiation (ionizing and nonionizing), the RF energy level decreases rapidly as the distance from the antenna increases. As a result, the level of exposure to RFR at ground level is very low compared to the level close to the antenna.

Some base station antennas are installed on rooftops and at the top of lamp poles that are in close proximity or adjacent to office space and residential buildings. Occupational exposure can occur during maintenance of base stations. As a result, the FCC established guidelines for occupational exposures. Safety guidelines and regulatory compliance are discussed below.

The levels of RFR inside buildings with base station antennas mounted on the roof or on the side of the building are typically much lower than the level outside, depending on the construction materials of the building. Wood or cement block reduces the exposure to RFR by a factor of about 10. Due to the directional nature of the signals, the energy level behind an antenna is orders of magnitude lower than in front of the antenna.

Safety Guidelines for Exposure

The FCC and U.S. Food and Drug Administration (FDA) are jointly responsible for the regulation of wireless communication devices.

Federal Communications Commission

The FCC is required by its responsibilities under the National Environmental Policy Act of 1969 to evaluate the impact of emissions from FCC-regulated transmitters on the quality of the human environment.7 As a result, the FCC regulates both the wireless devices and the base stations. Since 1996, the FCC has required that all wireless communication devices (transmitting in the 100 kHz to 6 GHz frequency range) sold in the United States comply with its minimum guidelines for safety and maximum RFR absorption standards based on SAR. The FCC requires a formal approval process for all devices sold in the United States. FCC approval is contingent on the demonstration that the device does not exceed the maximum allowable SAR level when operating at its maximum power. The SAR limits adopted by the FCC for exposure of the general population are 0.08 W/kg averaged over the whole body (wbSAR) and a peak spatial-average SAR (psSAR) of 1.6 W/kg averaged over any 1 gram of tissue,8 when averaged over 6 minutes. Exceptions are made for the extremities (hands, wrists, feet, ankles, and pinnae), where the psSAR limit is 4 W/kg, averaged over any 10 grams of tissue for an exposure period of no longer than 30 minutes. For occupational exposures, the wbSAR limit is 0.4 W/kg and the psSAR limit is 8 W/kg, averaged over any 1 gram of tissue. For the hands, wrists, feet, ankles, and pinnae, the psSAR limit for occupational exposure is 20 W/kg, averaged over any 10 grams of tissue for an exposure period not to exceed 6 minutes.
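
The SAR limits quoted above can be summarized as a small lookup table; the sketch below encodes them for an illustrative comparison (the values are taken from the text, but the check itself is a simplification, not the FCC test procedure):

```python
# FCC SAR limits as quoted in the text (W/kg), keyed by (population, metric).
# Illustrative lookup only; real compliance testing follows FCC procedures.

FCC_SAR_LIMITS_W_PER_KG = {
    ("general", "whole_body"): 0.08,        # wbSAR, 6-minute average
    ("general", "peak_1g"): 1.6,            # psSAR over any 1 g of tissue
    ("general", "extremities_10g"): 4.0,    # psSAR over any 10 g, extremities
    ("occupational", "whole_body"): 0.4,
    ("occupational", "peak_1g"): 8.0,
    ("occupational", "extremities_10g"): 20.0,
}

def within_limit(population: str, metric: str, measured_sar_w_per_kg: float) -> bool:
    """True if the measured SAR does not exceed the quoted limit."""
    return measured_sar_w_per_kg <= FCC_SAR_LIMITS_W_PER_KG[(population, metric)]

print(within_limit("general", "peak_1g", 1.2))  # -> True
print(within_limit("general", "peak_1g", 1.9))  # -> False
```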

The FCC rules and guidelines for cell phone RFR exposure are based upon standards initially developed by the Institute of Electrical and Electronics Engineers (IEEE) and the National Council on Radiation Protection and Measurements (NCRP). These standards for RF exposure in workers and the general population are based on protection against adverse effects that might occur due to increases in tissue or body temperature in excess of 1°C (wbSAR of approximately 4 W/kg) or less (after applying safety factors). Because RF-energy absorption and any induced effects are dependent on the frequency of incident-field parameters and the composition of exposed tissues, it has been suggested that quantifying SARs in small averaging regions is more relevant for evaluations of human health effects.

Food and Drug Administration

The FDA does not currently regulate the use of wireless communication devices or the devices themselves. The FDA also does not require safety evaluations for radiation-emitting wireless communication devices. It does, however, maintain the authority to take regulatory action if exposure to the emitted cell phone RFR from these devices is demonstrated to be hazardous to the user.

Absorption of RFR

RFR interacts with the human body via inductive or capacitive coupling or a combination of both. The absorption of the coupled RFR is dependent on the frequency of the signal and the dielectric properties of the exposed tissue. It generates oscillating currents in the tissue, which in turn give rise to induced E-fields. The energy is transferred into molecular motion of polar molecules like water, a strongly dipolar molecule and major component of biological tissues. Resonant oscillations in polar subgroups of cellular macromolecules are damped by collisions with surrounding water molecules that disperse the energy of the RF signal into random molecular motion. Tissue heating occurs as the energy is transferred to the surrounding aqueous environment as heat.4

Toxicity

A comprehensive review of the toxicity of RFR in in vitro models, laboratory animals, and humans was conducted and published in the International Agency for Research on Cancer (IARC) Monograph series.4

Thermal Effects

Given the ability of RFR to heat tissues, the toxic effects of RFR are often considered due to thermal effects. The most well-established and biologically plausible mechanism for RFR-induced effects is through tissue heating. At sufficiently high levels of RFR exposure, the absorption of energy could overwhelm an organism’s ability to thermoregulate and maintain an acceptable body temperature. Typical human exposures to RFR occur at intensities that are not anticipated to cause significant tissue heating if handsets are used according to the manufacturers’ recommendations for use, and assuming the phones are not emitting more RFR than permitted by FCC regulations.

Nonthermal RFR effects refer to biological changes that occur with body temperature increases that are below 1°C. Changes of temperature up to 1°C are considered in the range of thermal noise.4 There is an ongoing debate regarding whether nonthermal biological effects can occur as a result of exposures to low-intensity RFR. It has been suggested that there is no plausible nonthermal mechanism by which exposure to low-intensity RFR could induce significant biological effects.9-11 However, there are numerous reports of specific biological effects associated with RFR exposures at levels considered below those expected to result in a measurable amount of tissue heating. Other than tissue heating, the mechanisms of interaction between RFR and biological systems have not been well characterized, but several mechanisms have been proposed, including the generation of reactive oxygen species, induction of ferromagnetic resonance, and the alteration of ligand binding to hydrophobic sites in receptor proteins.4 Additionally, low levels of exposure to RFR may result in small temperature changes in localized areas of exposed tissues that cause conformational changes in temperature-sensitive proteins and induce the expression of heat-shock or stress-response proteins.

Experimental Animals

Toxic effects have been reported in RFR-exposed laboratory animals and in vitro systems.4,12 Many studies investigating the potential toxicity of RFR have focused on genotoxicity and related effects and are reviewed in the Genetic Toxicity section. However, studies have been conducted to evaluate a variety of other aspects of toxicity, particularly those potentially related to cancer development or surveillance, including specific studies on gene and protein expression, immunotoxicity, and permeability of the blood-brain barrier. The results of these studies have not led to a clear understanding of the interactions of RFR with biological systems, but it is important to note that many of these studies were conducted with RFR of differing parameters (frequency, power density, continuous wave versus amplitude-modulated signals, etc.).

Several effects on the humoral and cell-mediated responses of the immune system have been reported at various frequencies of RFR in rats and mice. These include effects on the activity of natural killer (NK) cells, plaque-forming cell response to sheep erythrocytes, production of tumor necrosis factor (TNF) in peritoneal macrophages and splenic T-cells, mitogenic response in T lymphocytes, phagocytic activity of neutrophils, leukocyte profile, and thymic and splenic cellularity.13-18 However, many of these effects were observed in studies conducted with RFR at frequencies greater than 10 GHz. Other studies have demonstrated no exposure-related effects on the immune system.17,19-23

A few studies have investigated the impact of RFR at frequencies between 800 and 1,900 MHz on gene and protein expression. Several studies have demonstrated that RFR can alter the expression of certain genes in the brain,24-26 while others have failed to find changes in gene expression.27-29 The expression of various proteins has also been investigated in rats and mice. These studies have primarily yielded negative results for the specific proteins being evaluated in the rat brain.24,25,30-32 Similarly, no effects of RFR on protein expression have been reported in the testis33 or in the skin.34-36 Liu et al.37 reported adverse effects on sperm following exposure for 2 hours/day to 900 MHz RFR at 0.66 W/kg for 50 days. Changes in the expression of bone morphogenic protein and bone morphogenic protein receptors have been reported in the kidney of newborn rats.38 A study by Eşmekaya et al.39 also demonstrated increased expression and activity of caspase 3 and caspase 9 in the thyroid gland of Wistar rats. Ohtani et al.40 observed induction of expression of some heat shock protein genes in the cerebral cortex and cerebellum of rats exposed to 2.14 GHz WCDMA RFR at 4 W/kg, but not in rats exposed at that level for 3 hours, or in rats exposed to 0.4 W/kg for 3 or 6 hours.

Exposure to RFR induces changes in markers for oxidative stress in multiple tissues, including the brain,30,41-44 heart,45 kidney,46,47 eye,48 liver,49,50 endometrium,51,52 and testis and epididymis.53 Yakymenko et al.54 reviewed oxidative mechanisms reported in a number of in vitro and in vivo experiments with “low intensity” RFR. A few studies have also demonstrated RFR-mediated effects on differentiation and apoptosis in the endometrium51,52 and brain.32,55 Changes have also been noted in the permeability of the blood-brain barrier in some studies.56-58 However, other studies conducted under similar experimental conditions failed to demonstrate any effect of RFR exposure on the permeability of the blood-brain barrier.59-62

Humans

Numerous epidemiology studies have investigated the association between exposure to RFR and health effects in humans. However, many of these studies examined small groups exposed to RFR signals with different characteristics (frequencies, modulations, intensities, etc.) such as microwaves, extremely low frequency (ELF) fields, and radar rather than the specific frequency bands and modulated RFR signals used in wireless communication.

There is limited research investigating the general toxicity of RFR in humans because most of the focus has been on the potential for carcinogenic effects. There are reports of individuals who complain of acute, subjective effects following exposure to RFR, including headaches, fatigue, skin itching, and sensations of heat.63-68 These have primarily been reported in people who consider themselves electrosensitive. It has been suggested that these subjective symptoms likely have causes other than RFR.69 Variable results have been observed in the electroencephalogram (EEG) of volunteers exposed to RFR during sleep. Some studies indicate that exposure to RFR induces changes in sleep latency and sleep EEG.70-80 Glucose metabolism in the brain, a marker of metabolic activity, has been reported to increase in the region of the brain closest to the antenna.81 While these results demonstrate exposure-related effects, the toxicologic significance of these findings is unclear.

Carcinogenicity

A comprehensive review of the carcinogenicity of RFR in laboratory animals and humans was conducted and published in the IARC Monograph series.4 Additional reviews of animal cancer studies have been published by Lin,82 and of human studies by Repacholi et al.83 and Yang et al.84

Experimental Animals

Studies published to date have not demonstrated consistently increased incidences of tumors at any site associated with exposure to RFR in rodents.82 No increases in tumor incidences were observed in B6C3F1 mice exposed to GSM-modulated RFR for 24 months,85 F344 rats exposed to CDMA-modulated RFR for 24 months,86 or Wistar rats exposed to GSM-modulated RFR for 24 months.87 In studies conducted in transgenic and tumor-prone mouse strains, exposure to RFR has not been consistently associated with an increased incidence of tumors at any site.88-92 While these studies have advanced the knowledge of the potential toxicity of RFR, critical shortcomings in the design of many of these studies severely restrict the utility of the information for adequately evaluating the carcinogenicity of RFR. These shortcomings include very short daily exposure durations (≤ 2 hours per day) in heavily restrained animals or levels of RFR exposure too low to adequately assess carcinogenic potential. The focus of many of the studies conducted in genetically altered and tumor-susceptible mice was not to evaluate the overall carcinogenicity of RFR, but to investigate the effects in the specific predisposed tissues of those models.

Based on the constraints in the designs of the existing studies, it is difficult to definitively conclude that these negative results adequately establish that RFR is not carcinogenic. To adequately evaluate the potential chronic toxicity and carcinogenicity of RFR, further studies with enhanced study designs and improved exposure paradigms were needed.

Humans

As a result of the IARC review conducted in 2011,93 RF electromagnetic fields were classified as possibly carcinogenic to humans (Group 2B). This classification was based on limited evidence of carcinogenicity in humans, derived from positive associations between exposure to RFR from wireless phones and increased risk for gliomas and acoustic neuromas, specifically in users with the greatest amount of cell phone usage. The IARC Working Group acknowledged that the findings were affected by potential selection and information bias, weakness of associations, and inconsistencies between study results.93

While several other studies were considered, the IARC evaluation was based primarily on reports from the INTERPHONE Study, the largest research effort conducted to date examining the potential association between exposure to RFR and cancer in humans. INTERPHONE was an IARC-coordinated research effort that included a series of studies conducted with a common core protocol at 16 study centers in 13 countries: Australia, Canada, Denmark, Finland, France, Germany, Israel, Italy, Japan, New Zealand, Norway, Sweden, and the United Kingdom.94 The studies were specifically designed to investigate the association between RFR and tumors of the brain (glioma and meningioma), acoustic nerve (schwannoma), and parotid gland. The final report for the INTERPHONE studies was published in 2011.93

The results of these studies seemingly demonstrated an elevated risk of glioma and acoustic neuroma in the group in the highest decile for exposure (cumulative phone call time). However, the INTERPHONE study group concluded that recall and selection biases and implausible values for usage reported by the participants in the study may explain the increased risk.95,96

Other studies have compared time trends in cell phone usage and the incidences of different types of cancers to investigate indirect evidence of an association between RFR used in cell phones and cancer. These studies were conducted across several different countries,97 and in a group of European countries,98-102 the United States,103-105 Japan,106 New Zealand,107 and Israel.108 Overall, the evaluations suggest that there was no significant change in the trends of cancer incidences. Any increases in cancer rates that were observed in these studies were attributed to enhanced detection capabilities resulting from advances in diagnostic medical equipment, such as computerized tomography (CT) and magnetic resonance imaging (MRI).

Several cohort studies have been conducted, but also failed to establish a clear association between cell phone RFR and the development of any of the investigated cancer types.109-111 Additional studies have demonstrated that there was no association between cell phone usage and pituitary gland tumors,112,113 testicular tumors,110,114 parotid gland tumors,115,116 uveal melanoma in the eye,110,117 and cutaneous melanoma.118 Some studies have demonstrated that there was no association between cell phone usage and leukemia109,110 and non-Hodgkin’s lymphoma,119 whereas others have reported increased risk of non-Hodgkin’s lymphoma120 and leukemia.121

Since the 2011 IARC Working Group evaluation, few additional epidemiological studies have examined mobile phone use and risk of cancer. A case-control study of children and adolescents from four European countries did not find an association between overall mobile phone use and brain cancer.122 A pooled analysis of multiple Swedish case-control studies by Hardell, Carlberg, and colleagues found a significantly increased risk of glioma and acoustic neuroma, particularly among analog phone, ipsilateral, and long-term or high-frequency mobile phone users.123-126 No increased risk of meningioma was found with overall mobile phone use.123,124,127 Other case-control studies did not report an increased risk of glioma128,129 or meningioma130 with regular mobile phone use; however, Coureau et al.128 did find a significantly increased risk of glioma and meningioma among heavy mobile phone users. A prospective cohort study of UK women did not find an association with glioma, meningioma, or acoustic neuroma.131,132

Numerous systematic reviews of the epidemiology literature have been conducted in addition to the 2011 IARC evaluation, with conflicting conclusions. Some systematic reviews have found an association between cell phone use and increased risk of brain tumors,124,133 while others did not find an association with brain tumors.83,134 These contrasting results have been attributed, in part, to differences in study eligibility criteria, the number of studies included, when the review was conducted, and how studies were evaluated.135

Genetic Toxicity

Extensive reviews of the literature on the genotoxicity of various frequencies and modulations of RFR, covering experimental systems ranging broadly from cell-free DNA preparations to cells of exposed animals and humans, have concluded that evidence for cell phone RFR-associated genotoxicity is inconsistent and weak.83,136-138 Interpretations of the genotoxicity studies and the ability to draw definitive conclusions based on weight-of-evidence from the large number of studies that have been reported have been hampered by inadequacies in experimental design, especially related to exposure standards and radiation-measuring procedures.136 Although the majority of studies report a lack of effect, the several reports of a positive response are concentrated among experiments assessing chromosomal or DNA damage in mammalian cell systems in vitro and in vivo. Some key studies reporting RFR-associated genotoxicity in human cell lines, including DNA damage and chromosomal effects, could not be replicated.139,140 A critical complicating factor in the study of the genotoxic effects of cell phone RFR is that under certain conditions, RFR is sufficiently energetic to heat cells and tissues, and not all studies have considered this factor in their design.

Heating of cells in vivo and in vitro has produced positive results in tests for genotoxicity, such as the comet assay and micronucleus assay.141-143 The mode of action whereby heat induces these effects may be through induction of protein denaturation and aggregation, which can interfere with chromatin structure and slow the kinetics of DNA repair or interfere with mitosis by disrupting microtubule function.144,145 Thus, heat-induced increases in DNA migration seen in the comet assay may reflect slowed repair of endogenous lesions, and similarly, activity in the micronucleus assay may be due to aneugenic rather than clastogenic events.141-143 Therefore, it is important to control thermal conditions when studying measures of genotoxicity following exposure to cell phone RFR.

Study Rationale

The FDA nominated cell phone RFR emissions of wireless communication devices for toxicology and carcinogenicity testing. Current exposure guidelines are based on protection from acute injury from thermal effects, and little is known about the potential for health effects from long-term exposure to RFR below the thermal hazard threshold. Epidemiology studies conducted to date have demonstrated possible, but not yet causal, links between cell phone RFR and some health problems in humans; however, the results of these studies are complicated by confounding factors and potential biases. Additionally, exposures in the general population may not have occurred for a long enough period to account for the long latency period of some types of cancers in humans. Similar to the challenges faced in epidemiological studies, studies in laboratory animals have been complicated by limitations that researchers have faced in conducting robust studies designed to characterize the toxicity and carcinogenicity of cell phone RFR.

For years, the primary concern regarding the potential health risk of chronic exposure to cell phone RFR was brain cancer, based on the proximity of wireless devices to the head during use. While the brain is an organ of concern, understanding the potential toxicity and carcinogenicity of whole-body exposure is critical. RFR is constantly emitted from wireless devices to communicate with base stations, regardless of whether the user is on a call or not. As the public has become more aware of the uncertainty regarding the potential effects of RFR on the brain, more emphasis has been placed on the use of wired or wireless headsets (such as Bluetooth), which minimize RFR exposure to the head. In recent years, the density of cell towers has increased to cope with the increasing demand for capacity, resulting in installations closer to residential neighborhoods and schools. Other technologies, such as SmartMeters used by power companies, also transmit data in real time using RFR. These existing and emerging technologies may potentially increase the level of exposures in human populations. These and other additional sources also expose different parts of the body, not only the head.

In 2011, RFR was classified by the IARC as possibly carcinogenic to humans based on limited evidence of an association between exposure to RFR from heavy wireless phone use and glioma and vestibular schwannoma (acoustic neuroma) in human epidemiology studies and limited evidence for the carcinogenicity of RFR in experimental animals.4 While ionizing radiation is a well-accepted human carcinogen, theoretical arguments have been raised against the possibility that nonionizing radiation could induce tumors (discussed in IARC4). Given the extremely large number of people who use wireless communication devices, even a very small increase in the incidence of disease resulting from exposure to the RFR generated by those devices would translate to a large number of affected individuals, which would have broad implications for public health. Due to the changing exposure patterns and use of cell phones by pregnant women and women of childbearing age, RFR exposures to the whole body, and exposures during the perinatal period (rat studies only) were selected for inclusion in these studies.

In the current studies, male and female Hsd:Sprague Dawley® SD® rats were exposed to GSM- or CDMA-modulated RFR at 900 MHz in utero, during lactation, and after weaning, 5 or 7 days per week, for 28 days or 2 years. Exposures were delivered in 10-minute on/10-minute off intervals over a period of 18 hours and 20 minutes per day, for a total daily exposure of 9 hours and 10 minutes. Exposures were 0 (sham control), 3, 6, or 9 W/kg in the 28-day studies and 0 (sham control), 1.5, 3, or 6 W/kg in the 2-year studies for each modulation. Exposure energy levels were selected based on pilot studies of body temperature changes at these RFR power levels, reported in Wyde et al.146 The selection of 900 MHz as the frequency for the rat studies was based on dosimetry studies by Gong et al.147
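The intermittent schedule above can be verified with a short calculation: alternating 10-minute on and 10-minute off intervals across an 18-hour-20-minute daily window yields 55 "on" intervals, or 9 hours and 10 minutes of actual RFR exposure per day. A minimal Python sketch of this arithmetic (illustrative only; the variable names are not from the report):

```python
# Daily exposure window: 18 hours and 20 minutes, expressed in minutes.
WINDOW_MIN = 18 * 60 + 20            # 1,100 minutes
ON_MIN, OFF_MIN = 10, 10             # alternating 10 min on / 10 min off

cycles = WINDOW_MIN // (ON_MIN + OFF_MIN)  # 55 full on/off cycles
total_on = cycles * ON_MIN                 # 550 minutes of RFR "on" time

print(f"{total_on // 60} h {total_on % 60} min")  # prints: 9 h 10 min
```

This confirms that the 9-hour-10-minute daily exposure and the 18-hour-20-minute window describe the same schedule, not two different exposure durations.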

Copyright Notice

This is a work of the US government and is distributed under the terms of the Public Domain.

Bookshelf ID: NBK561723
