Maier A, Steidl S, Christlein V, et al., editors. Medical Imaging Systems: An Introductory Guide [Internet]. Cham (CH): Springer; 2018. doi: 10.1007/978-3-319-96520-8_10


Chapter 10. Emission Tomography



Published online: August 3, 2018.


10.1. Introduction

In contrast to the structural imaging used to visualize tissues in the body, functional imaging is used to observe biological processes. In the field of nuclear medicine, functional imaging relies on radioisotopes that are tagged to tracers whose biochemical properties cause them to congregate at regions of diagnostic interest in the body. As opposed to transmission tomography with X-ray CT, where the source of imaging radiation is a part of the imaging device, the radiation source in this case is located within the patient. For this reason, functional imaging methods in the field of nuclear medicine – also known as molecular imaging – belong to a family of modalities called emission tomography, whose differing physical properties make them quite distinct from the transmission case.

The process begins with radioactive decay, which results when an unstable isotope ejects particles from its nucleus while transitioning to a stable state. Although a very complicated process, two modes are of interest to molecular imaging: γ and β. In the former case, gamma rays ejected directly from the nucleus can be imaged with a so-called gamma camera. 3-D images can then be reconstructed from 2-D projections in a process called single photon emission computed tomography (SPECT). In the second case, a positron is emitted which travels a small distance until an electron (its antiparticle) is encountered. The ensuing annihilation produces a pair of 511 keV photons traveling in opposite directions that, when detected simultaneously, yield lines of response that can be reconstructed into an image in a process known as positron emission tomography (PET).

Although X-rays produced by bombarding targets with electrons had been in use since their discovery by Röntgen in 1895, the use of naturally decaying radioisotopes for medical imaging did not occur until 1935, when George de Hevesy investigated rats injected with radioactive 32P. Using a Geiger counter, de Hevesy measured the relative amount of radioactivity in different organs after dissection and found that the skeleton had a disproportionately high level of uptake. In doing so, he not only settled once and for all the ongoing medical question of whether or not bones have an active metabolism (they do, otherwise they would not have taken up the 32P atoms), but he was also the first to use radioisotopes and imaging equipment to investigate the body’s biochemistry. Thus, the so-called tracer principle was born. For his work in the field of radiotracers, de Hevesy was awarded the Nobel Prize for Chemistry in 1943, and a variant of his original bone-imaging methodology based on phosphates is still in wide use today. In the decades following de Hevesy’s discovery, research in the fields of radiochemistry and molecular biology has yielded a plethora of tracers with desirable uptake characteristics. Complementary technical advances have provided imaging devices capable of aiding physicians in answering a range of diagnostic questions.

10.2. Physics of Emission Tomography

10.2.1. Photon Emission

Although the properties of γ and β decay are different in many respects, they follow the same basic decay law. Namely, the amount of radioactivity S (expressed in Becquerel, or decays per second) in a given sample of radioactive material will decrease until all atoms in the sample reach a stable state. This process follows an exponential curve, and the amount of activity in the sample at a given time t can be expressed as

$S(t) = S_0 \, e^{-\ln 2 \cdot t / t_{1/2}},$
(10.1)
where $S_0$ is the initial activity, and $t_{1/2}$ is the isotope’s half-life, or the time it takes for S(t) to decrease to half of $S_0$. This process is illustrated in Fig. 10.2, where the blue curve depicts the amount of activity remaining in a sample that initially contained 100 MBq. It can be seen from inspection that the isotope’s half-life is six hours, which is the same as that of 99mTc, the most commonly used isotope for SPECT imaging.
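The decay law is straightforward to evaluate numerically. Below is a minimal sketch in Python (NumPy), using the 100 MBq initial activity and six-hour half-life from Fig. 10.2; the function name and the printed time points are our own choices.

```python
import numpy as np

def activity_mbq(t_hours, s0_mbq=100.0, t_half_hours=6.0):
    """Remaining activity S(t) = S0 * exp(-ln(2) * t / t_half), Eq. (10.1)."""
    return s0_mbq * np.exp(-np.log(2.0) * t_hours / t_half_hours)

# A few points on the curve of Fig. 10.2 (a 99mTc-like sample):
for t in (0, 6, 12, 24):
    print(f"t = {t:2d} h -> S = {activity_mbq(t):6.2f} MBq")
# t =  0 h -> S = 100.00 MBq
# t =  6 h -> S =  50.00 MBq   (one half-life)
# t = 12 h -> S =  25.00 MBq
# t = 24 h -> S =   6.25 MBq
```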

Figure 10.2. Exponential decay curve for a 100 MBq sample of a radioisotope having a half-life of six hours (the same as 99mTc).

Although Eq. (10.1) represents a sample’s aggregate decay properties, the emission of individual photons (or photon pairs for β decay) within a particular time window is a discrete process and follows a Poisson distribution with a mean ν proportional to the amount of radioactivity present. Note that we can assume independence between the voxels and describe an entire image in a vectorized format, using νn for a single voxel and ν for an entire image. Similarly, the number of photons counted in a particular observation of this process, such as a pixel of a SPECT projection, is a Poisson-distributed random variable as well, provided that the image formation chain is linear.1 If we represent the projection pixels and image voxels as vectors, the distribution of photon counts on the detector Ɗ is related to the activity distribution being imaged ν via the following relation:

$\mathcal{D} \sim \text{Poisson}(A\nu),$
(10.2)
where $A \in \mathbb{R}^{M \times N}$ is known as the system matrix and is composed of elements am,n representing the probability that a photon emitted from voxel n is detected at pixel m (cf. Geek Box 7.3). M and N are the numbers of detector pixels and image voxels, respectively. Multiplying an image vector ν by A thus accomplishes a forward projection into the projection space. Acquired projection data d from an emission tomography scan therefore represent a single sample, or observation, drawn from the distribution Ɗ.

Eq. (10.2) implies that detected images will always be perturbed by random noise, particularly for small numbers of counts. This effect is shown in Fig. 10.3, where simulated observations are shown for time points t = 0, t = 2t1/2, t = 4t1/2, and t = 6t1/2. For each simulation, the total activity from the blue curve in Fig. 10.2 corresponding to the time point t was distributed uniformly inside the ellipsoidal object, yielding an amount of activity at each pixel n that corresponds to the mean value of a Poisson process νn(t). A random number was then drawn from the Poisson distribution at each pixel to create the images d(t). This is equivalent to applying Eq. (10.2) with A set equal to the identity matrix.
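A simulation in this spirit takes only a few lines. The Python sketch below draws Poisson counts over an elliptical mask; the grid size and the assumed mean of 100 counts per object pixel at t = 0 are illustrative values, not those used for Fig. 10.3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Elliptical object on a small grid (illustrative dimensions).
ny, nx = 64, 96
y, x = np.mgrid[0:ny, 0:nx]
mask = ((x - nx / 2) / 40.0) ** 2 + ((y - ny / 2) / 24.0) ** 2 <= 1.0

mean_t0 = 100.0  # assumed mean counts per object pixel at t = 0
for k in (0, 2, 4, 6):  # elapsed half-lives
    nu = np.where(mask, mean_t0 * 0.5 ** k, 0.0)  # Poisson means nu_n(t)
    d = rng.poisson(nu)                           # Eq. (10.2) with A = I
    inside = d[mask]
    print(f"{k} half-lives: mean = {inside.mean():7.3f}, variance = {inside.var():7.3f}")
```

The printed mean and variance track each other closely, as expected for a Poisson process.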

Figure 10.3. Simulated images (left) and central horizontal profiles (right) from an object filled with the activity described in Fig. 10.2. The images were simulated after zero, two, four, and six half-lives (a, b, c, and d, respectively). The mean value of the object is shown with a dashed red line through each profile. Note how the images become noisier as the mean decreases.

Central profiles drawn from each image along the blue line on the left of Fig. 10.3(a) are shown at the right. At the aggregate level, the simulated mean across all the pixels d̄(t) is almost exactly equivalent to the true ν̄(t) and follows the predictable decay curve in Fig. 10.2. However, the noise level in the images and profiles appears to increase with t. This behavior is due to the fact that the mean of a Poisson distribution is equal to its variance. But if the variance decreases with the mean, then why does the noise appear to increase? To answer this, we can define a signal-to-noise ratio SNR within our homogeneous ellipsoid’s boundaries to use as a noise measure. In this case, our signal is simply the mean d̄ over this object, and the noise is the standard deviation σd:

$\text{SNR} = \frac{\bar{d}}{\sigma_d} = \frac{\bar{d}}{\sqrt{\bar{d}}} = \sqrt{\bar{d}}.$
(10.3)
The SNR is thus simply the square root of the mean number of photon counts in the object and increases monotonically, albeit with diminishing returns, with the number of counts in the image.
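This square-root behavior is easy to verify empirically. A short Python check, with sample sizes and mean counts chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical check of Eq. (10.3): the SNR of a homogeneous Poisson region is sqrt(mean).
for mean_counts in (1, 10, 100, 1000):
    d = rng.poisson(mean_counts, size=100_000)
    snr = d.mean() / d.std()
    print(f"mean = {mean_counts:4d}: empirical SNR = {snr:6.2f}, sqrt(mean) = {np.sqrt(mean_counts):6.2f}")
```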

In X-ray CT imaging, where the radiation source is located outside the body and can be easily controlled by the system, large numbers of photons are easily attainable, as the patient can be irradiated with a high flux for a short period of time. However, in molecular imaging the radioactive source is located within the body and will continue to bombard tissue with potentially harmful ionizing radiation until it either decays or is excreted by the body.

Therefore, to limit patient dose, relatively small amounts of activity are usually injected, typically ranging from 100 to 1,000 MBq. The activity is then distributed throughout the body, leading to low numbers of emitted photons at any given area. The imaging task is thus similar to taking a photograph in a very dark room. A long exposure time can yield a better SNR, but comes with problems of motion blur and patient discomfort. A typical SPECT projection lasts 15 seconds, resulting in a total scan duration of 30 minutes for the usual 120 projections! Despite this effort, projections typically have only about 20 or fewer useful photons per pixel in diagnostically interesting areas. A representative projection from a skeletal SPECT scan is shown in Fig. 10.4. The mean pixel value is a measly 0.6, and the maximum is only 17, significantly less than even the noisiest simulation in Fig. 10.3(d). Due to higher scanner sensitivities, PET statistics are slightly better, with roughly a factor of 10 more counts per pixel at typical scan durations of 4–6 minutes for an equivalent field of view.

Figure 10.4. Typical 15 second projection from a skeletal SPECT acquisition. Even pixels with the highest counts have only roughly ten photons. Image courtesy of the University Hospital Erlangen, Clinic of Nuclear Medicine.

The fundamental challenge in emission tomography is therefore to produce reconstructed images of the activity distribution ν with acceptable image quality from noisy acquired data. The following sections describe other physical issues encountered as well.

10.2.2. Photon Interactions

Aside from the fundamental problem of noisy data, the second most important physical factor affecting emission tomography is photon attenuation. Photons traveling through a medium may interact with atoms and eventually be absorbed, resulting in a detected flux I less than that originally emitted. In Chapter 8, we learned how to describe this principle using Beer’s law and exploit it for imaging. For transmission tomography like X-ray CT, this phenomenon is imaged directly to yield reconstructed images of the medium’s (i.e. patient’s) structure. This is possible because the location and intensity of the emitted flux I0 are known. In emission tomography, however, I0 is determined by the activity distribution in the body ν, which is unknown. Attenuation is therefore a hindrance that leads to errors if it is not accounted for.

Amongst the photon-matter interactions, Compton scatter is very important for emission tomography (cf. Sec. 7.3). In this interaction, the photon is not absorbed as in attenuation, but merely deflected. The relationship between the deflection angle θ and the pre- and post-collision energies E0 and Escat is described by the Klein-Nishina formula:

$E_{\text{scat}} = \frac{E_0}{1 + (E_0 / 511\,\text{keV})(1 - \cos\theta)}.$
(10.4)
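As a quick numerical illustration of Eq. (10.4), the Python snippet below evaluates the scattered photon energy at a few angles for a 140.5 keV photon, the emission energy of 99mTc; the function name is our own.

```python
import numpy as np

def compton_scattered_energy_kev(e0_kev, theta_rad):
    """Post-collision photon energy from Eq. (10.4); 511 keV is the electron rest energy."""
    return e0_kev / (1.0 + (e0_kev / 511.0) * (1.0 - np.cos(theta_rad)))

# Energy of a 140.5 keV photon (99mTc) after deflection by a few angles:
for deg in (0, 30, 90, 180):
    e = compton_scattered_energy_kev(140.5, np.deg2rad(deg))
    print(f"theta = {deg:3d} deg -> E_scat = {e:6.1f} keV")
```

Energy-resolving detectors exploit this energy loss: photons arriving well below the emission energy can be rejected as probable scatter.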

Scatter is an important component of emission tomography due to its role in the degradation of image quality. Specifically, deflected photons may be erroneously assumed to come from locations in the image volume along their scattered trajectories, rather than their original paths. This process is illustrated in Fig. 10.5, where a photon originating in the image is scattered and counted at a detector pixel corresponding to a trajectory other than its original (vertical) path. This has the effect of reducing resolution, contributing to image noise, and reducing contrast.

Figure 10.5. A photon is deflected from its original path (vertical) by a scatter event and detected at an erroneous location to the left of its ideal position.

10.3. Acquisition Systems

10.3.1. SPECT

Early methods for detecting photons emitted from radiotracers focused on scanning probes (e.g. Geiger counters) over the patient. Scanning a field of view of any reasonable size was therefore a painstaking process, and 3-D reconstruction was out of the question. In 1957, Hal Anger solved this problem with the invention of the gamma camera, shown schematically in Fig. 10.6. A classical gamma camera consists of three components: a collimator, a scintillator, and an array of photomultiplier tubes (PMTs).

Figure 10.6. Simplified schematic representation of a gamma camera showing three primary components.

The collimator is composed of a lattice of holes separated by septa made of some highly attenuating material (usually lead). Its role is to restrict the angle of acceptance at each point on the detector surface and provide an (ideally) parallel projection of the object being imaged onto the scintillator. In, e.g., optical imaging equipment, this is usually accomplished by means of a small aperture known as a pinhole. A single pinhole, however, accepts photons from only a limited field of view and passes very few of those emitted towards it. For this reason, collimators with a parallel-hole geometry consisting of a large array of narrow, parallel bores are the most commonly used type for SPECT imaging.2

Ideally, a point source placed in front of the detector would yield a perfect point in the image. However, the bores of a collimator are neither infinitely long nor infinitely narrow, leading to a finite acceptance angle that allows photons traveling from the point to reach the detector via a range of rays about the ideal one (i.e. the shortest path from point to detector). The structure of these alternate paths is described by the collimator’s point spread function (PSF) and effectively blurs more complicated objects being imaged, which can be thought of as collections of many points.

This effect can be seen in Fig. 10.7, which shows a schematic representation of a trio of 1-D parallel collimator bores in front of a detector. A virtual point source placed at the intersection of the red arrows would be able to reach the detector along a number of rays. Photons reaching the detector on direct paths through air are termed geometric, because their PSF is only a function of the detector and collimator geometry. On an infinitely precise detector, the resulting response would be an array of indicator functions, but due to pixelation in the acquired image and other factors, the PSF has a roughly conical shape.

Figure 10.7. Schematic representation of collimator and PSF (yellow). The acceptance angle of a bore is dependent on bore length and width, leading to a widening of the PSF with depth.

In many applications the PSF is modeled as a Gaussian, and the resolution is characterized by its full width at half maximum (FWHM) rPSF, which may be approximated by the following equation:

$r_{\text{PSF}} \approx \frac{D_b (L_b + z)}{L_b},$
(10.5)
where Db is the bore diameter, Lb its length, and z the distance between the source plane and the face of the collimator. From Eq. (10.5), it can be seen that the resolution is depth-dependent: the PSF becomes wider with increasing z for given collimator dimensions.
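To get a feel for the numbers in Eq. (10.5), the Python sketch below evaluates the FWHM at several depths. The bore dimensions are assumed, illustrative values for a low-energy parallel-hole collimator, not those of a specific product.

```python
def psf_fwhm_mm(bore_diameter_mm, bore_length_mm, depth_mm):
    """Approximate geometric collimator resolution r_PSF from Eq. (10.5)."""
    return bore_diameter_mm * (bore_length_mm + depth_mm) / bore_length_mm

Db, Lb = 1.5, 25.0  # assumed bore diameter and length in mm
for z in (0.0, 50.0, 100.0, 200.0):
    print(f"z = {z:5.0f} mm -> r_PSF ~ {psf_fwhm_mm(Db, Lb, z):5.1f} mm")
```

Even at a modest 10 cm source-to-collimator distance, the geometric resolution is several millimeters wide, which is why the collimator dominates the resolution budget of a SPECT system.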

An image of a point source showing a measured PSF is shown in Fig. 10.8. The image is saturated to highlight the complex structure. In the bright central area outlined in red, primarily geometric photons are present. In the region immediately adjacent to it outlined in blue, photons passing through a portion of a single septum are detected. The long “spider”-like legs are due to septal penetration across multiple walls, which is most probable in a direction perpendicular to the edges of the hexagonal collimator bores. A faint background between these streaks is caused by Compton scattering in the collimator and contaminates the entire function. The magnitude of the spider legs is up to 1.5% of the maximum PSF value, and for 99mTc up to 10% of photons may be extra-geometric and thus not accounted for by ideal models. Therefore, some in the field have begun to use PSF models based on measured data rather than ideal calculations.

Figure 10.8. Measured PSF from a 99mTc point source imaged at a distance of 10 cm, shown saturated to emphasize low-intensity regions. The bright geometric region is outlined in red, and most extra-geometric counts lie between the red and blue hexagons, where a single partial septal wall is penetrated.

Issues of resolution and septal penetration are important when designing a collimator. The collimator efficiency ρ is also significant, as it describes the ratio of geometric photons passed through the collimator to the total number emitted towards it. It is ideally constant over z for the parallel-hole case and can be approximated as

$\rho \approx K^2 \left( \frac{D_b}{L_b} \right)^2 \frac{D_b^2}{(D_b + T_s)^2},$
(10.6)
where Ts is the width of the septal wall and K is a constant based on hole geometry. A typical value of ρ is on the order of 10−4, making it a major, but necessary, limiting factor in the sensitivity of SPECT systems.
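A quick plausibility check of Eq. (10.6) in Python. K = 0.26 is the value commonly quoted for hexagonal holes, and the dimensions are the same illustrative ones assumed above:

```python
def collimator_efficiency(Db_mm, Lb_mm, Ts_mm, K=0.26):
    """Geometric collimator efficiency rho from Eq. (10.6)."""
    return (K * Db_mm / Lb_mm) ** 2 * Db_mm ** 2 / (Db_mm + Ts_mm) ** 2

rho = collimator_efficiency(Db_mm=1.5, Lb_mm=25.0, Ts_mm=0.2)
print(f"rho ~ {rho:.1e}")  # on the order of 1e-4, as stated in the text
```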

In Eq. (10.6), it can be seen that ρ increases as bores are either shortened or widened. However, from Eq. (10.5), we see that these same changes degrade resolution. Taking Eq. (10.5) and Eq. (10.6) together, it becomes apparent that the task of collimator design is a compromise between collimator sensitivity and resolution. The former directly impacts the quality of counting statistics, and therefore noise, in an acquired image. The latter is related to the accuracy with which the detector can localize detected photons and properly reproduce small features such as edges. A third consideration appears via the septal thickness which, when increased, limits the star artifacts shown in Fig. 10.8 at the expense of a smaller ρ.

Once a photon has passed through the collimator, it impacts the system’s scintillator (typically composed of NaI), releasing several lower energy photons in the visible range. These photons then travel to the PMTs, where they initiate an electron avalanche that is detected as a current signal at the PMT output. To determine the 2-D location of a photon, a type of centroid is computed by the output electronics of the PMT array in a process known as Anger Logic, after its inventor. In the 1-D case, the estimated location of the photon detection x̂ can be calculated as

$\hat{x} = \frac{\sum_q x_q G_q}{\sum_q G_q},$
(10.7)
where Gq and xq are the signal strength at and location of the q-th PMT. Applied directly in this fashion, images will suffer from nonuniformities and pincushion distortions. These are removed by replacing Gq with some nonlinear function thereof. Even after this correction, the method is not exact, and the resulting finite resolution rDET adds in quadrature with that of the PSF to yield a total system resolution $r_{\text{SYS}} = \sqrt{r_{\text{PSF}}^2 + r_{\text{DET}}^2}$. Another important property of the PMT output is that the value of Σq Gq is proportional to the energy of the initial photon. This makes SPECT cameras energy-resolving as well, allowing the effects of Compton scatter to be mitigated.
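The centroid computation of Eq. (10.7) is essentially a one-liner. Below is a toy Python example with five hypothetical PMTs; the positions and signal values are invented for illustration:

```python
import numpy as np

def anger_position_mm(pmt_positions_mm, pmt_signals):
    """1-D Anger logic, Eq. (10.7): signal-weighted centroid of the PMT outputs."""
    g = np.asarray(pmt_signals, dtype=float)
    x = np.asarray(pmt_positions_mm, dtype=float)
    return float((x * g).sum() / g.sum())

# Hypothetical event: five PMTs spaced 50 mm apart; the scintillation light
# pulse is centered between the second and third tubes.
x_q = [0.0, 50.0, 100.0, 150.0, 200.0]
g_q = [5.0, 60.0, 55.0, 8.0, 1.0]
print(f"estimated position: {anger_position_mm(x_q, g_q):.1f} mm")
print(f"energy-proportional signal sum: {sum(g_q):.1f}")  # used for energy windowing
```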

10.3.2. PET

As shown in Fig. 10.1, the β+ decay that forms the basis of PET produces two photons that travel in opposing directions away from each other. This is exploited for imaging purposes by using a ring detector and looking for coincidences in the observed data. This coincidence detection principle is illustrated in Fig. 10.9, where a PET ring composed of many small detector blocks is shown. Extremely high speed electronics monitor each detector’s output signal and record a detection event when two impulses are detected simultaneously. The detector blocks themselves are traditionally composed of a scintillator crystal mated to a small PMT array as with the Anger camera. However, no collimator is needed to restrict the scintillator’s acceptance angle in this case because the photon’s incidence angle is implicitly provided by the detector block at the opposite side of the ring. Nevertheless, inaccuracies in the scintillator blocks and PMTs still induce a finite PSF in PET whose geometrical properties vary widely depending on the source’s location in the field of view.

Figure 10.1. Simplified representation of both modes of decay relevant to emission tomography. On the left is a nucleus undergoing γ decay and emitting a single photon directly. On the right is an example of β+ decay, where a positron is ejected from the nucleus. The positron travels a short distance before coming in contact with an electron. The resulting annihilation produces a pair of 511 keV photons traveling in opposite directions.

Figure 10.9. PET ring detector and coincidence detection principle. The detector electronics simultaneously monitor signals from each detector block and record counts when an impulse is detected from two blocks at the same time.

The ray connecting the two detection points (red line in Fig. 10.9) is known as the line of response. Integrating along all parallel lines of response at a particular rotation angle will produce a row of a sinogram at that angle that can be used for reconstruction. Early PET systems treated each axial ring of detector blocks as independent slices and thus ignored lines of response with oblique axial angles. This strategy, shown in Fig. 10.10(a), reduces the computational burden on detector electronics (coincidences from fewer blocks must be monitored simultaneously), but sacrifices many counts.

Figure 10.10. 2-D (left) and 3-D (right) detection configurations for PET. The latter offers better sensitivity at the expense of more scatter events.

Newer systems utilize a 3-D detection configuration (cf. Fig. 10.10(b)), where lines of response across a finite axial range are observed. This provides an increase in sensitivity due to the fact that, for a given source location, counts can be registered at a greater number of detectors. However, by the same token, it is more probable that false (random) coincidences or pairs of scattered photons will be detected. Also, an extra step of axial rebinning is needed to produce a sinogram. Spatial and Fourier domain strategies exist, but the common goal is to transform the acquired oblique lines of response into approximate virtual lines of response perpendicular to the axial direction.

PET has a number of advantages over SPECT due to more favorable physics. Sensitivity is roughly an order of magnitude higher due to the absence of a collimator, and the ring detector offers better tomographic consistency (i.e. all angles are acquired simultaneously). Furthermore, the reconstruction problem is better defined than with SPECT due to the (ideally) 1-D search space along each line of response. Mathematically, this translates into a system matrix that is better conditioned. By using time-of-flight (TOF) information derived from slight delays between coincidence detections, the range of possible emission locations can be even further reduced.
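The TOF principle reduces to simple arithmetic: if one photon of a pair arrives Δt earlier, the annihilation occurred c·Δt/2 closer to that detector. A minimal Python sketch; the 400 ps timing resolution used in the example is an assumed, plausible figure for a modern TOF system:

```python
C_MM_PER_NS = 299.792  # speed of light in mm/ns

def tof_offset_mm(delta_t_ns):
    """Displacement of the annihilation point from the midpoint of the
    line of response, given the arrival-time difference of the two photons."""
    return C_MM_PER_NS * delta_t_ns / 2.0

# A coincidence timing resolution of ~0.4 ns localizes the emission to within roughly:
print(f"{tof_offset_mm(0.4):.0f} mm along the line of response")
```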

TOF PET systems with 3-D detection thus typically offer superior resolution and noise characteristics compared to SPECT, but this comes at a price. 18F, the most common isotope used in PET, has a half-life of only 110 minutes and is more difficult to produce than 99mTc, requiring a complex logistical network to minimize the time between production and injection. Furthermore, the higher energy photons imaged in PET require costly exotic scintillator materials. This, combined with highly specialized detector electronics, makes PET systems more expensive to procure and operate than their SPECT counterparts.

10.4. Reconstruction

10.4.1. Filtered Back-Projection

In Chapter 8, we presented the filtered back-projection method of reconstruction in the context of X-ray CT. The advantages of this reconstruction are its speed and simplicity, as well as reconstructed image properties, such as resolution, that are relatively easy to determine. However, while filtered back-projection works quite well for high-count data, it fails to take into account the Poisson statistics outlined in Sec. 10.2.1. This leads to very noisy images in SPECT and PET, where detected counts are several orders of magnitude lower than those seen in CT.

Furthermore, filtered back-projection operates by inverting the Radon transform – the purely geometrical relationship between voxels in the image and their projected pixels at the detector. This ignores all of the other physical factors, such as attenuation, scatter, and the PSF, that play a vital role in the emission tomography image formation chain. This oversight leads to artifacts in reconstructed images that greatly degrade image quality. For these reasons, filtered back-projection is generally no longer used in clinical situations.

10.4.2. Iterative Reconstruction

In order to improve upon the noise performance of filtered back-projection, we must use the statistical relationship in Eq. (10.2) between the mean activity distribution in each voxel and the observed counts at the detector. Filtered back-projection implicitly assumes a deterministic relationship, but we can take stochastic effects into account by using the definition of the Poisson probability mass function.

Geek Box 10.1 describes how probable observed detector data are given a set of model parameters, which in our case take the form of a vector of Poisson means ν, one for each voxel. Obviously, in emission tomography, these parameters are unknown. However, the likelihood function provides us with a tool to estimate them by searching for the $\hat{\nu}^*$ that maximizes P(Ɗ = d) and thus yields the most likely estimate given our data:

$\hat{\nu}^* = \arg\max_{\hat{\nu}} P(\mathcal{D} = d).$
(10.8)
The relationship described in Geek Box 10.1 is quite complex, and it is not immediately clear how to maximize the likelihood. However, a seminal paper by Shepp and Vardi in 1982 showed that this can be accomplished via the Expectation Maximization (EM) algorithm, whose general framework involves the estimation of the “complete” information, given a set of observations and hidden, “latent”, information. Although a detailed description is outside the scope of this text, it is worth outlining that for emission tomography, the complete information is the actual emission distribution ν, and the observations are the counts in the projections d. The latent information is comprised of all of the photons originating in the image that escape detection.

As shown by Shepp and Vardi, EM’s methodology of alternatingly forming a conditional expectation via marginalization over a particular variable and then maximizing the resulting likelihood provides a convenient framework for attacking Eq. (10.8) as encountered in emission tomography. This expectation/maximization cycle is repeated until a suitable image is obtained, and each one of these repetitions is referred to as one iteration k. The algorithm begins by initializing some estimate of the activity distribution $\hat{\nu}^0$. The process then proceeds at each iteration by forward projecting the current estimate $\hat{\nu}^k$, comparing it to the measured data, backprojecting the result, and applying a weight to the estimate to create a new $\hat{\nu}^{k+1}$.

Geek Box 10.1: Total Likelihood Function

a)

For the simple case of counts from one voxel being emitted directly into a single pixel detector, the probability of a particular observation d given the true mean ν is

$P(\mathcal{D} = d) = \frac{e^{-\nu} \nu^d}{d!},$
which is known as the likelihood of the observation.

b)

Moving one step further, an array of observations d is formed by photons emitted from a vector of voxels with means ν. This is the same scenario we examined in the example in Fig. 10.3. Here, the system matrix is equivalent to the identity matrix A = I, and each voxel contributes to a single detector element. As each observation is independent of the others, we can multiply their likelihoods together to obtain the total likelihood:

$P(\mathcal{D} = d) = \prod_i \frac{\exp(-\nu_i) \, \nu_i^{d_i}}{d_i!},$
where i represents the index of the detector and image elements, which are equivalent in this case.

c)

In a true imaging scenario, A ≠ I, and multiple image voxels contribute to a single detector element. To account for this, we must subdivide the total detected counts in each pixel dm into contributions from each image voxel: $d_m = \sum_n d_m^{(n)}$. The probabilities contained in the system matrix must also be included. The total likelihood is therefore the product over each of these possible scenarios:

$P(\mathcal{D} = d) = \prod_{m,n} \frac{\exp(-\nu_n a_{m,n}) \, (\nu_n a_{m,n})^{d_m^{(n)}}}{d_m^{(n)}!}$

The double product has the effect of incorporating the contribution from each voxel to each pixel.

The update mechanism for the algorithm can be expressed using the following equation:

$\hat{\nu}_n^{k+1} = \frac{\hat{\nu}_n^k}{\sum_m a_{m,n}} \sum_m a_{m,n} \frac{d_m}{\sum_{n'} a_{m,n'} \hat{\nu}_{n'}^k}.$
(10.9)
Collectively, this method is known as the maximum likelihood expectation maximization (MLEM) algorithm and is widely used in many commercial and research applications.
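Eq. (10.9) translates almost line for line into code. Below is a minimal dense-matrix MLEM sketch in Python; real systems use sparse or on-the-fly projectors, and the tiny two-voxel example at the end is our own toy problem:

```python
import numpy as np

def mlem(A, d, n_iters=50, eps=1e-12):
    """MLEM reconstruction via the update rule of Eq. (10.9).

    A: (M, N) system matrix, d: (M,) measured counts.
    """
    nu = np.ones(A.shape[1])               # uniform initial estimate
    sens = A.sum(axis=0)                   # sensitivity: sum_m a_{m,n}
    for _ in range(n_iters):
        proj = A @ nu                      # forward project current estimate
        ratio = d / np.maximum(proj, eps)  # compare with measured data
        nu *= (A.T @ ratio) / np.maximum(sens, eps)  # backproject and weight
    return nu

# Toy problem: two voxels seen by three detector pixels.
A = np.array([[0.8, 0.1],
              [0.1, 0.8],
              [0.1, 0.1]])
true_nu = np.array([100.0, 20.0])
d = np.random.default_rng(0).poisson(A @ true_nu)
print(mlem(A, d))  # approaches true_nu, up to noise, for this well-posed toy problem
```

Note that the multiplicative update automatically preserves the non-negativity of the estimate, one of MLEM’s practical advantages.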

The iterative reconstruction process for MLEM thus consists of an objective function that describes the quality of the current estimate (the likelihood function) and a way of optimizing it (EM). Within the field of image reconstruction, a wide range of objective function/optimizer pairs are available. Another objective function that has found wide use is weighted least squares:

$\hat{\nu}^* = \arg\min_{\hat{\nu}} \| d - A\hat{\nu} \|_w^2 = \arg\min_{\hat{\nu}} \sum_m w_m \left( d_m - [A\hat{\nu}]_m \right)^2,$
(10.10)
where $[A\hat{\nu}]_m$ is the m-th pixel of the forward projected estimate and w is a vector of weights. The weights are often used to take noise into account by, for example, setting each element of w inversely proportional to an estimate of the variance at the corresponding detector pixel. This has the effect of adjusting each pixel’s contribution to the objective function depending on its noise properties. The weighted least squares objective function is convex and can be solved with gradient-based optimization techniques such as the conjugate gradient method.
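As a sketch of this approach, the Python snippet below minimizes Eq. (10.10) with a hand-rolled conjugate gradient solver applied to the normal equations (AᵀWA)ν = AᵀWd. No non-negativity constraint or regularization is applied, and the weights follow the inverse-variance heuristic mentioned above:

```python
import numpy as np

def wls_cg(A, d, w, n_iters=20):
    """Weighted least squares, Eq. (10.10), via conjugate gradients."""
    W = np.diag(w)
    H = A.T @ W @ A          # Hessian of the quadratic objective
    b = A.T @ W @ d
    nu = np.zeros(A.shape[1])
    r = b - H @ nu           # residual
    p = r.copy()             # search direction
    for _ in range(n_iters):
        if r @ r < 1e-12:    # converged
            break
        Hp = H @ p
        alpha = (r @ r) / (p @ Hp)
        nu = nu + alpha * p
        r_new = r - alpha * Hp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return nu

A = np.array([[0.8, 0.1], [0.1, 0.8], [0.1, 0.1]])
d = np.array([82.0, 23.0, 13.0])
w = 1.0 / np.maximum(d, 1.0)   # weight ~ inverse of a (Poisson) variance estimate
print(wls_cg(A, d, w))
```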

10.4.3. Quantitative Reconstructions

Although iterative reconstruction is motivated by the underlying statistics of photon emission, another major advantage is its ability to model the physics of the imaging system. This is accomplished via the system matrix. In addition to geometric information, the system matrix can include the effects of attenuation and scatter to allow the reconstruction to correct for them. Furthermore, resolution lost due to PSF blurring may be regained to some extent if this is modeled as well.

Aside from image quality improvements such as contrast enhancement and noise reduction, proper system modeling enables PET and SPECT systems to become quantitative as well. In other words, instead of reconstructing images in arbitrary or relative units, absolute units such as activity concentration in kBq/ml are produced. This important distinction allows scans across different patients, scanners, and time points to be meaningfully compared. This is not only useful for individual patient management, but enables larger, multi-center clinical studies as well.

Assuming an accurate system model is available, the cornerstone of a quantitative imaging system is the calibration. This anchors the counts observed during an acquisition to a physical amount of radioactivity in the detector’s field of view. A common way of doing this is to perform an acquisition on a homogeneous phantom with a known activity concentration in kBq/ml. A volume of interest may then be defined in the reconstructed image, and a count density in units of counts per ml may be determined. Time must then be taken into account by correcting for decay and normalizing by the acquisition duration. After these steps, a volumetric sensitivity factor αVOL may be defined as follows:

$\alpha_{\text{VOL}} = \frac{\text{counts} / (\text{min} \cdot \text{ml})}{\text{kBq} / \text{ml}}.$
(10.11)
With this factor in hand, subsequent acquisitions may be quantified, provided they are acquired with the same isotope and reconstructed in the same way. The procedure for this is straightforward and consists of obtaining the count rate density from a volume of interest in units of counts/(min·ml) and dividing by αVOL, thus producing the desired absolute units of kBq/ml.
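In code, the calibration and its later application are two small functions. The following Python sketch assumes decay correction has already been applied, and all numbers in the example are invented for illustration:

```python
def volumetric_sensitivity(counts, scan_minutes, voi_ml, known_conc_kbq_per_ml):
    """alpha_VOL from Eq. (10.11): (counts / min / ml) per (kBq / ml)."""
    count_rate_density = counts / scan_minutes / voi_ml
    return count_rate_density / known_conc_kbq_per_ml

def quantify_kbq_per_ml(counts, scan_minutes, voi_ml, alpha_vol):
    """Convert a measured count-rate density into absolute kBq/ml."""
    return counts / scan_minutes / voi_ml / alpha_vol

# Calibrate on a phantom of known concentration, then quantify a patient VOI:
alpha = volumetric_sensitivity(counts=1.2e6, scan_minutes=30, voi_ml=5000,
                               known_conc_kbq_per_ml=50.0)
print(quantify_kbq_per_ml(counts=9.0e4, scan_minutes=20, voi_ml=150, alpha_vol=alpha))
```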

This solution is not without drawbacks. It requires the filling of a phantom for calibration and is vulnerable to errors and inconsistencies that come from user-defined volumes of interest. A more elegant method is to utilize a planar sensitivity αplanar in units of counts/(min·kBq). This value is then incorporated into the system matrix to relate the forward projected counts to absolute activity in the reconstructed volume. The result is a reconstruction that is inherently quantitative and dependent on a calibration factor that can be obtained from less tedious planar acquisitions of a point source.

In the medical community, it is also of interest to normalize for factors such as patient weight and injected dose. The commonly used Standardized Uptake Value (SUV) is an example of this. It is based on the assumptions that a) a tracer in healthy tissue will distribute uniformly throughout the body and b) that the body has a uniform density equal to that of water (i.e. 1 kg/l). Combining these assumptions yields the following relation:

$\text{SUV} = \frac{\text{kBq}_{\text{VOI}} / \text{ml}_{\text{VOI}}}{\text{MBq}_{\text{INJ}} / \text{kg}},$
(10.12)
where the subscripts VOI and INJ refer to quantities drawn from a reconstructed volume of interest (e.g. a region surrounding a suspect tumor) and total injected dose, respectively.

Despite the somewhat unintuitive units of g/ml, the logic behind the SUV is sound: a value significantly greater than unity indicates a disproportionate amount of uptake and a potential abnormality. This is particularly the case for tracers where assumption (a) from above holds. Furthermore, by normalizing for two factors that vary across acquisitions (injected dose and patient weight), the SUV allows for easier comparison between different patients and time points.

Numerous variations on the SUV exist. One of the most popular is the SUVmax, which simply places the maximum activity concentration found in a volume of interest in the numerator of Eq. (10.12) to guard against partial volume effects. Other extensions normalize by lean body mass or body surface area to better account for anatomical variations.
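A direct Python transcription of Eq. (10.12) and the SUVmax variant; the patient numbers below are invented for illustration:

```python
def suv(voi_conc_kbq_per_ml, injected_mbq, weight_kg):
    """SUV per Eq. (10.12): tissue concentration normalized by injected dose
    per body weight (water-equivalent tissue density of 1 kg/l assumed)."""
    return voi_conc_kbq_per_ml / (injected_mbq / weight_kg)

def suv_max(voi_concs_kbq_per_ml, injected_mbq, weight_kg):
    """SUVmax variant: evaluate Eq. (10.12) on the hottest voxel in the VOI."""
    return suv(max(voi_concs_kbq_per_ml), injected_mbq, weight_kg)

# 250 MBq injected into a 75 kg patient; a VOI mean of 10 kBq/ml gives SUV = 3.0,
# well above unity and hence suspicious of disproportionate uptake.
print(suv(10.0, injected_mbq=250.0, weight_kg=75.0))
print(suv_max([4.2, 9.5, 12.1], injected_mbq=250.0, weight_kg=75.0))
```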

10.4.4. Practical Considerations

Although superior to analytical methods, iterative reconstructions are not without their own complications. Namely, the inclusion of a system model and optimization scheme adds a plethora of parameters that must be tailored to the imaging task at hand. Poor judgment in selecting these values may degrade image quality.

To illustrate this concept, we use a simple 1-D signal with two step functions blurred by a Gaussian. The original signal represents the truth, and its blurred version our observed data. If we initialize a constant function and apply the MLEM algorithm from Eq. (10.9), we can attempt to “reconstruct” the truth from the data. In Fig. 10.11, the results are shown for the case where the blurring function is not modeled (similar to an emission tomography reconstruction without PSF compensation). As expected, the best our method can do is to adjust the constant initialization until it matches the blurred observations: the two curves are identical.

Figure 10.11. 1-D signal (green) convolved with a Gaussian to yield “observed data” that is reconstructed (i.e. deconvolved) using the MLEM algorithm (blue curve). In this case, the blurring function is not modeled, and the reconstruction cannot improve upon the observed data (the two curves are equal). Figure courtesy of Siemens Molecular Imaging Inc., USA.

Fig. 10.12 shows six MLEM iterations with the blur incorporated into the system matrix. This is equivalent to adding a deconvolution problem to our reconstruction, and we see that the edges have become sharpened as frequencies suppressed by the blur are recovered by the reconstruction. However, ringing artifacts have also become visible due to the fact that the original spectrum is only partially recovered. The results after 300 iterations are shown in Fig. 10.13, where even better edge resolution is achieved, albeit with more severe ringing as well.
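The whole 1-D experiment fits in a short script. The Python sketch below mirrors the setup described above, using a column-normalized Gaussian blur as the system matrix; the signal widths, amplitudes, and blur width are assumed values, not those used for the figures:

```python
import numpy as np

n = 200
truth = np.zeros(n)
truth[30:70] = 50.0       # narrow object
truth[100:180] = 100.0    # wide object

x = np.arange(n)
sigma = 4.0               # assumed blur width
A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
A /= A.sum(axis=0)        # column-normalized Gaussian blur as system matrix

data = A @ truth          # noiseless "observed data"

def mlem(A, d, n_iters, eps=1e-12):
    """MLEM / Richardson-Lucy style deconvolution, per Eq. (10.9)."""
    nu = np.full(A.shape[1], d.mean())
    sens = A.sum(axis=0)
    for _ in range(n_iters):
        nu *= (A.T @ (d / np.maximum(A @ nu, eps))) / sens
    return nu

for k in (6, 300):
    rec = mlem(A, data, k)
    # The error shrinks with iterations as edges sharpen; ringing appears near edges.
    print(f"{k:3d} iterations: mean |rec - truth| = {np.abs(rec - truth).mean():.2f}")
```

Replacing data with a Poisson draw, e.g. np.random.default_rng(0).poisson(data), reproduces the noisy case of Figs. 10.14 and 10.15, where late iterations begin to fit the noise.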

Figure 10.12. Reconstruction after six MLEM iterations where the blurring function is modeled in the system matrix. Edge resolution is improved over Fig. 10.11, but ringing artifacts also become visible. Figure courtesy of Siemens Molecular Imaging Inc., USA.

Figure 10.13. MLEM reconstruction after 300 iterations. The edges have become sharper, and artifacts amplified relative to Fig. 10.12. Figure courtesy of Siemens Molecular Imaging Inc., USA.

If we incorporate Poisson noise into the observed data, we can make our experiment more realistic. How does this change the results? The reconstruction after six iterations shown in Fig. 10.14 indicates that they are broadly comparable to the noiseless case in Fig. 10.12, although slight irregularities inside the wide object can be seen. However, the case after 300 iterations shown in Fig. 10.15 is starkly different from its noiseless counterpart, with the interior of the wide object becoming very inhomogeneous.

Figure 10.14. Six MLEM iterations on data perturbed with Poisson noise. The result is slightly irregular but comparable to Fig. 10.12. Figure courtesy of Siemens Molecular Imaging Inc., USA.

Figure 10.15. 300 MLEM iterations on noisy data. The interior of the large object is highly irregular, and quality is noticeably worse than Fig. 10.13. Figure courtesy of Siemens Molecular Imaging Inc., USA.

The noise in the reconstructed signal in Fig. 10.15 is a result of the ill-conditioned nature of the reconstruction problem and can be generalized to the case of emission tomography. During early iterations, low frequencies are recovered that correspond mostly to signal information, such as high-contrast, large objects. However, at higher iterations, the algorithm turns its attention to higher frequencies where the signal and noise energies are comparable. The result is an overfitting of the noise and degradation of image quality.

By iterating further, we can increase resolution and thus reduce quantitative bias due to edge roll-off. However, as seen in Fig. 10.15, this runs the risk of introducing too much noise into the image. The use of image post-smoothing or smoothness regularization can reduce noise while sacrificing some resolution and thus entails making the same type of compromise. In practice, the choice of many reconstruction parameters is therefore another example of the bias/variance trade-off already discussed above with respect to SPECT collimator design.

Another complication for iterative reconstruction in emission tomography is that there are source-dependent factors such as attenuation and scatter in the system matrix. This implies that the properties of the reconstruction will vary from patient to patient, even if the same acquisition and reconstruction parameters are used. Furthermore, depth- and position-dependent PSFs in SPECT and PET lead to shift-variant properties within a given image as well. These factors should be taken into account when reconstructing and interpreting images.

10.5. Clinical Applications

Molecular imaging is used in various fields of medicine such as neurology, oncology, cardiology, and orthopedics. Its application areas can be broadly subdivided into two fields: diagnostics and therapy.

10.5.1. Diagnostics

As the most common use for emission tomography, diagnostics is also the most diverse. In the field of neurology, both PET and SPECT offer perfusion tracers that give physicians insight into the amount of blood flow in the brain during the scan, which is proportional to brain activity. An example of a SPECT brain perfusion procedure using 99mTc-ethyl cysteinate dimer (ECD) is shown in Fig. 10.16(a). An epileptic patient is scanned immediately following a seizure and during a neutral state. The reconstructed images are subtracted and fused with an MR of the patient’s brain to localize the focus of the seizure. More specialized applications, including imaging of amyloid plaques linked to Alzheimer’s disease (PET) and dopamine receptor imaging (SPECT), are also available.

Figure 10.16. Examples of diagnostic procedures in molecular imaging. a) Differential SPECT scan using 99mTc-ECD to localize seizure epicenter. b) FDG-PET scan for a patient with melanoma. Several small lesions are visible below liver and beside heart. Images courtesy of the University Hospital Erlangen, Clinic of Nuclear Medicine.

For oncology, 18F, the most commonly used PET isotope, may be bonded to a glucose-like molecule, resulting in so-called fludeoxyglucose (FDG). Using these FDG-PET scans, doctors can search for areas with high glucose metabolism – a sign of rapidly growing metastatic tumors. Fig. 10.16(b) shows an FDG-PET scan of a patient with melanoma. Malignant metastases are visible below the liver and beside the heart. A common oncological use for SPECT is skeletal imaging with 99mTc bonded to phosphate compounds. High uptake of these tracers is often indicative of secondary lesions from e.g. prostate or breast cancer.

In addition to oncology, bone SPECT is also used in the field of orthopedics to localize and diagnose the source of pain felt by patients with faulty prosthetics, small fractures, or degenerative disease. PET and SPECT also both offer myocardial perfusion tracers, which allow cardiologists to assess the viability of the heart muscle and diagnose various heart diseases.

Using quantitative imaging, physicians are also able to monitor disease over time. By comparing metrics such as the SUV across scans taken at different time points, they can track the progression of e.g. metastatic lesions and better assess response to therapy. An example of this is shown in Fig. 10.17, where a breast cancer patient was imaged with 99mTc-labelled 3,3-diphosphono-1,2-propanodicarboxylic acid (DPD) at three different time points, roughly six months apart. In the first scan (cf. Fig. 10.17(a)), a region was seen in the skull with uptake suspicious of a metastatic bone tumor. In a follow-up study, the SUVmax was seen to increase, and treatment with bisphosphonates was begun. In the final scan shown in Fig. 10.17(c), the SUVmax decreased, indicating a response to therapy.

Figure 10.17. Same breast cancer patient imaged on three different dates with 99mTc-DPD. The calculated SUVmax from the volume of interest at the posterior right area of the skull is also shown. A decrease in uptake was noted between the second and third scans, indicating a response to therapy. Images courtesy of the University Hospital Erlangen, Clinic of Nuclear Medicine.

10.5.2. Therapy

In addition to purely diagnostic imaging, emission tomography plays an integral role in radioisotope therapy as well. Such procedures utilize the tracer principle to target malignant tissue with radiation. This radiation then eliminates or stems the growth of unwanted cells. However, these positive effects must be weighed against the negative side effects on healthy tissue. To accomplish this, physicians must estimate the dose that a course of therapy will deposit in sensitive organs.

This process, known as dosimetry, is quite complex. It relies on quantifying the activity distribution within the patient, determining how long it will remain there, and estimating how much energy will be deposited in healthy tissue. As therapy agents typically involve higher energy emissions and more complicated spectra, the system matrix in such cases becomes more difficult to define. Furthermore, post-processing such as organ segmentation and biological modeling become necessary.

10.6. Hybrid Imaging

10.6.1. Clinical Need

SPECT and PET offer excellent sensitivity for the detection of disease due to the functional information they provide. However, pathological regions of an image may be difficult to localize in the body in the absence of structural information.

Take, for example, a hypothetical surgeon who is planning a biopsy and needs to find the specific Sentinel Lymph Nodes (SLNs) draining a tumor in a breast cancer patient. Prior to surgery, a SPECT scan has clearly shown the presence of an SLN with high uptake in the underarm area, but it is known that there are multiple possible lymph nodes here. This stand-alone SPECT might appear similar to the left pane of Fig. 10.18, where only a single bright spot is visible. How will the surgeon proceed?

Figure 10.18. SPECT data after labeling of a Sentinel Lymph Node (SLN) with 99mTc-Nanocoll. The corresponding structural information from a complementary X-ray CT scan helps provide proper localization of the activity. Images courtesy of the University Hospital Erlangen, Clinic of Nuclear Medicine.

Historically, during planar acquisitions, a technologist might trace the outside edge of a patient’s body with a radioactive “pen” to provide a rough anatomical point of reference in the image. The advantages of this method are limited, and it is in any case not possible for SPECT. For SPECT, one may instead attempt to register a previously acquired CT to the current SLN study. However, human anatomy is non-rigid, and shifts in posture and the time elapsed between scans may lead to errors. Our surgeon would therefore be left with the option to operate in the general area of the suspected SLN and rely on tedious scanning with gamma counting probes to find the exact node.

10.6.2. Advent and Acceptance of Hybrid Scanners

In 2000, David Townsend and Ronald Nutt of the University of Geneva, working together with CTI (now a division of Siemens Molecular Imaging), introduced the first hybrid PET/CT scanner. This device offered a PET ring detector and multi-slice spiral CT scanner integrated into the same gantry. Patients could therefore receive PET and CT studies in quick succession without moving, greatly reducing registration errors and providing both structural and functional information in one fell swoop. Six years later, Siemens Molecular Imaging introduced the Symbia SPECT/CT scanner, bringing the same advantages to the field of SPECT. Other manufacturers quickly developed similar hybrid imaging systems as well.

With the advent of hybrid imaging, our hypothetical surgeon can now use the CT acquired with the SPECT study to pinpoint the location of the SLN prior to surgery, reducing both the time needed to perform the operation and the risk of misidentification. This is illustrated in the center and right panes of Fig. 10.18. In the center, a CT acquired immediately after the SPECT to the left is shown, and in the right, the two fused datasets are displayed. As the patient was lying on the same SPECT/CT gantry in the same position during both acquisitions, the surgeon can be sure of the accuracy in the registration between the two datasets.

The integration of a PET or SPECT system with an X-ray CT scanner represents a complex engineering task. On the hardware side, care must be taken to ensure that the physical mating of the two systems does not affect their individual performance. On the firm- and software side, separate data transmission protocols, formats, and user interfaces must be unified as well. After the devices are physically complete, system engineers must work with others to develop new calibration and quality control routines. These might, for example, provide the reconstruction software with updated transformations between the SPECT and CT coordinate systems, as these parameters vary over time due to parts wearing down or being replaced.

In addition to the clinical benefits of hybrid imaging, the reconstruction process itself can be improved as well. In Sec. 10.4.3, we briefly discussed the importance of CT attenuation correction for emission tomography. The advent of hybrid devices has made these corrections the standard in most clinics rather than a research topic, reducing attenuation artifacts and paving the way for quantitative imaging.

Hybrid PET/CT and SPECT/CT devices represented a major step forward in medical imaging. However, CT as a structural modality has its own weaknesses. Namely, soft tissue contrast in regions of interest for molecular imaging, such as the liver and brain, is poor. Also, the presence of extra radiation dose from the CT is obviously undesirable. For this reason, MR imaging was proposed as a structural imaging modality for use in hybrid scanners.

Although clinically exciting, e.g. for neurology applications, due to MR’s unparalleled contrast between different brain tissues and PET’s array of sensitive neurological tracers, the mating of PET and MR presented a host of new physical challenges. The most important of these was how to eliminate PMTs from the design, as they are unusable in MR’s strong magnetic fields, which deflect the electrons moving inside the tubes. Engineers were able to overcome this by substituting the standard scintillator-PMT setup with semiconductor detectors that convert photons directly to electrical signals. However, another issue is how to derive attenuation maps from the MR data, which do not have the same direct physical meaning that CT’s Hounsfield units have and therefore must be processed further to obtain a μ-map. In this case, pattern recognition methods may be used to estimate the density map based on atlas data and segmentation/classification of the patient’s tissue.

With these and other issues largely overcome, each of the major manufacturers has, beginning in 2011, released a commercial PET/MR system. Much research is currently being performed to both improve their performance and define new clinical applications.

10.6.3. Further Benefits of Hybrid Imaging

In addition to incorporating the μ-map into the system matrix to improve the physical model of the projection process, CT or MR information may be integrated into the reconstruction in other ways as well. We could assume, for example, that a sharp boundary in an MR brain image should have a correspondingly sharp boundary in the nuclear image, because we expect that the radioactivity concentration across two types of tissue (e.g. white and gray matter) will be discontinuous. As the low resolution SPECT or PET reconstructions are not capable of reproducing this high resolution on their own, we could work this prior knowledge from MR into the objective function of an iterative reconstruction algorithm.

Geek Box 10.2: Maximum a posteriori Estimation

In order to understand the idea of MAP estimation, we have to understand that our model, i.e., the Poisson distribution, introduces conditions on our probability. Thus P(Ɗ = d) is actually conditioned on the Poisson means ν. As such, we need to denote it as P(Ɗ = d|ν), or P(d|ν) in short. Next, we realize that we are actually interested in P(ν|d), as d is observable in our case and we seek to maximize the probability of ν given d. Fortunately, Bayes’ rule applies for conditional probabilities:

$\hat{\nu}^* = \arg\max_{\hat{\nu}} P(\nu | d) = \arg\max_{\hat{\nu}} \frac{P(d | \nu) \, P(\nu)}{P(d)},$
(10.13)
where P(d|ν) is known from physics, P(d) is independent of the optimization and can therefore be neglected, and P(ν) is the prior term. Now P(ν) is independent of the actual observation d and can therefore be used to model any prior knowledge on the distribution of ν. For hybrid applications, P(ν) is chosen based on the CT or MR information. Of course, MAP methods may also be used with purely PET or SPECT data to enforce e.g. smoothness, but their greatest potential benefit is the incorporation of information from other modalities.


The family of maximum a posteriori (MAP) algorithms is capable of doing exactly this by building upon the maximum likelihood method with a term representing some prior information known about the object. As the name implies, the MAP method seeks to maximize the posterior probability of the distribution being imaged given the observed data. We explain this principle in Geek Box 10.2.
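As one concrete (assumed) instantiation of Eq. (10.13), the Python sketch below implements Green’s one-step-late MAP-EM variant of MLEM with a simple quadratic smoothness prior on a 1-D voxel chain. This is a generic illustration of the MAP idea, not the algorithm of any particular scanner; in a hybrid setting, the prior gradient would instead be derived from the CT or MR boundaries.

```python
import numpy as np

def map_osl_em(A, d, beta=0.1, n_iters=100, eps=1e-12):
    """One-step-late MAP-EM with a quadratic smoothness prior.

    beta controls the prior strength; beta = 0 recovers plain MLEM.
    Circular boundary handling (np.roll) is used for brevity.
    """
    nu = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iters):
        # Gradient of the prior -log P(nu): penalizes differences between neighbors.
        grad_prior = 2.0 * nu - np.roll(nu, 1) - np.roll(nu, -1)
        ratio = d / np.maximum(A @ nu, eps)
        nu *= (A.T @ ratio) / np.maximum(sens + beta * grad_prior, eps)
    return nu

# Noisy step signal observed through an identity system matrix:
truth = np.concatenate([np.full(16, 5.0), np.full(16, 50.0)])
d = np.random.default_rng(2).poisson(truth).astype(float)
print(map_osl_em(np.eye(32), d, beta=0.5))  # smoothed, MAP-regularized estimate
```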

Another example of this higher level of integration, although one not relying on the MAP principle, is the xSPECT Bone algorithm from Siemens, which is currently used for reconstructing SPECT skeletal scans. This method works by segmenting the CT into several different tissue classes and forward projecting them separately at each iteration. In addition to voxel-wise image updates, the classes themselves are also allowed to be scaled independently while optimizing the objective function. This scaling allows the SPECT reconstruction to have very sharp edges at the boundaries between tissue classes (e.g. if a cortical bone class has much more uptake than a neighboring lung region), while maintaining a typical SPECT-like resolution within each class.

An example of this method is shown in Fig. 10.19 below. Note how the edges of the vertebrae in the xSPECT image (center) are much sharper than the standard MLEM SPECT reconstruction (left) due to the boundary information provided by the CT (right). However, the bladder appears very similar in the two SPECT reconstructions, as there is little additional CT boundary information here.

Figure 10.19. MLEM (left) and xSPECT Bone (center) reconstructions. The latter achieves sharper resolution at the edge of tissue classes with the help of extra-modal CT data (right). Images courtesy of the University Hospital Erlangen, Clinic of Nuclear Medicine.

Despite the advantages of methods such as MAP and xSPECT Bone, there are risks as well. For example, a MAP method may assume that bone density is always positively correlated to tracer uptake and enforce this behavior to improve quantitative accuracy. This is indeed generally the case, but in the early stages of a bone infarction, there is little or no blood supply to the bone and, hence, no tracer uptake, despite a normal CT. Our example MAP algorithm would then try to allocate activity here during the reconstruction and potentially provide the physician with a false negative. One should therefore be very careful when designing priors, as they are generally based on assumptions about anatomy or biochemistry that may not be true in all cases.

Footnotes

1

In practical scanners, this is not strictly the case, but it is assumed to be here.

2

Advanced reconstruction algorithms can take advantage of the benefits of non-parallel projection methods, provided they are accomplished by means of multi-hole collimators (fan-beam, convergent, divergent).

Further Reading

[1]
Adam Alessio and Paul Kinahan. “PET Image Reconstruction”. In: Nuclear Medicine 2 (2006).
[2]
Joachim Zeintl et al. “Quantitative Accuracy of Clinical 99mTc SPECT/CT Using Ordered-Subset Expectation Maximization With 3-Dimensional Resolution Recovery, Attenuation, and Scatter Correction”. In: Journal of Nuclear Medicine 51.6 (2010), pp. 921–928. [PubMed: 20484423]
[3]
AM Alessio et al. “PET/CT Scanner Instrumentation, Challenges, and Solutions”. In: Radiologic Clinics of North America 42 (2004), pp. 1017–32. [PubMed: 15488555]
[4]
H.O. Anger. “Scintillation Camera with Multichannel Collimators”. In: Journal of Nuclear Medicine 5.7 (1964), pp. 515–531. [PubMed: 14216630]
[5]
H.H. Barrett and K.J. Myers. Foundations of Image Science. 1st. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2007.
[6]
H.H. Barrett and W. Swindell. Radiological Imaging: The Theory of Image Formation, Detection, and Processing. New York, NY, USA: Academic Press, 1981.
[7]
S.R. Cherry, J.A. Sorenson, and M.E. Phelps. Physics in Nuclear Medicine. Philadelphia, PA, USA: Saunders, 2003.
[8]
A.P. Dempster, N.M. Laird, and D.B. Rubin. “Maximum Likelihood from Incomplete Data via the EM Algorithm”. In: Journal of the Royal Statistical Society 39.1 (1977), pp. 1–38.
[9]
J.A. Fessler and W.L. Rogers. “Spatial Resolution Properties of Penalized-Likelihood Image Reconstruction: Space-Invariant Tomographs”. In: IEEE Transactions on Image Processing 5.9 (Sept. 1996), pp. 1346–1358. [PubMed: 18285223]
[10]
R.J. Jaszczak, R.E. Coleman, and C.B. Lim. “SPECT: Single Photon Emission Computed Tomography”. In: IEEE Transactions on Nuclear Science 27.3 (June 1980), pp. 1137–1153. issn: 0018-9499.
[11]
Bharath Navalpakkam et al. “Magnetic Resonance-Based Attenuation Correction for PET/MR Hybrid Imaging Using Continuous Valued Attenuation Maps”. In: Investigative Radiology 48.5 (2013), pp. 323–332. [PubMed: 23442772]
[12]
Hamamatsu Photonics. Photomultiplier Tube: Principle to Application. 1st. Hamamatsu City, Japan: Hamamatsu Photonics K.K., Electron Tube Center, 1994.
[13]
J. Prekeges. Nuclear Medicine Instrumentation. 2nd. Burlington, MA, USA: Jones & Bartlett Learning, 2013.
[14]
Philipp Ritt et al. “Absolute Quantification in SPECT”. In: European Journal of Nuclear Medicine and Molecular Imaging 38.1 (2011), pp. 69–77.
[15]
James Sanders et al. “Fully Automated Data-Driven Respiratory Signal Extraction from SPECT Images Using Laplacian Eigenmaps”. In: IEEE Transactions on Medical Imaging 35.11 (2016), pp. 2425–2435. doi: 10.1109/TMI.2016.2576899. [PubMed: 27295657]
[16]
James Sanders et al. “Quantitative SPECT/CT Imaging of (177)Lu with In Vivo Validation in Patients Undergoing Peptide Receptor Radionuclide Therapy”. In: Molecular Imaging and Biology 17.4 (2015), pp. 585–593. doi: 10.1007/s11307-014-0806-4. [PubMed: 25475521]
[17]
L.A. Shepp and Y. Vardi. “Maximum Likelihood Reconstruction for Emission Tomography”. In: IEEE Transactions on Medical Imaging 1.2 (Oct. 1982), pp. 113–122. issn: 0278-0062. [PubMed: 18238264]
[18]
B.M.W. Tsui et al. “Comparison Between ML-EM and WLS-CG Algorithms for SPECT Image Reconstruction”. In: IEEE Transactions on Nuclear Science 38.6 (Dec. 1991), pp. 1766–1772.
[19]
M.N. Wernick and J.N. Aarsvold. Emission Tomography: The Fundamentals of PET and SPECT. 1st. London, UK: Elsevier Academic Press, 2004.
Copyright 2018, The Author(s)

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Bookshelf ID: NBK546150; PMID: 31725224; DOI: 10.1007/978-3-319-96520-8_10
