
Cylus J, Papanicolas I, Smith PC, editors. Health system efficiency: How to make measurement matter for policy and management [Internet]. Copenhagen (Denmark): European Observatory on Health Systems and Policies; 2016. (Health Policy Series, No. 46.)


5. Health system efficiency: measurement and policy*

5.1. Introduction: data envelopment analysis and stochastic frontier analysis

There are many indicators available to assess whether scarce resources for health are used in the most efficient manner, ranging from measures that compare activity (see Chapters 2 and 3) to measures that compare costs (see Chapter 4). Performance indicators, league tables and cost ratios have all been used, but they have been criticized for lacking a conceptual basis and for disregarding the needs of the health service staff who must use them. If measures are constructed and weighted in ways that do not reflect activities accurately, or that do not benchmark effectively, they will give misleading impressions and may create perverse incentives. High-level summary indicators that are not transparent are often underused for similar reasons (Hollingsworth & Parkin, 2003).

Methods based on sound economic concepts can provide transparent and potentially useful information on efficiency comparisons. Data envelopment analysis (DEA) and stochastic frontier analysis (SFA) are two of the most common methods used to estimate the efficiency of health services. Over 400 published applications have used these methods within health care settings over the past 30 years (Hollingsworth, 2003, 2008, 2012).

Both methods see efficiency as essentially a simple relationship between health care inputs and the outputs they produce, and assess how effectively a unit of production, such as a hospital, uses its own inputs, such as staff and drugs, to produce outputs, such as patients treated. To measure the efficiency of these processes, comparison is made against other units undertaking similar activities. DEA has been used a great deal more than SFA, making up the majority of applications in health care settings (>90%) and can account for multiple inputs and outputs, varying weights and returns to scale.

This chapter describes how we can measure the relationship between inputs and outputs using these methodological tools, and how we can provide information that improves the efficiency of how health services are delivered. The relative merits of each method, how useful they have been, and how useful they really could become are discussed by following a set of simple guidelines.

5.2. Efficiency measurement methods

As explained in Chapter 1, technical efficiency (TE) indicates that the organization is minimizing the use of inputs in producing its chosen outputs, regardless of the value placed on those outputs. An alternative but equivalent formulation is to say that it is maximizing its outputs given its chosen level of inputs. In contrast, allocative efficiency (AE) indicates whether the value of the chosen outputs creates the maximum value to society (or alternatively, the costs of the chosen inputs are the minimum feasible).

These are illustrated with reference to Figure 5.1, where we have a hospital using its combination of labour and capital to produce output at point C. We can see that this point is not on the efficiency curve, which is constructed from the other hospitals in the sample based on how they would use their combinations of labour and capital to produce the same level of output as the hospital at point C. To get to the efficient frontier, this hospital must use less labour and capital. We could measure this hospital’s TE as a ratio:

TE = OA/OC    (1)

Figure 5.1. Hospital efficiency frontier.

where TE must take a value >0 and ≤1. If TE = 1, the hospital is technically efficient and is operating on the efficient frontier. If TE is <1, the hospital is technically inefficient, and its degree of inefficiency is measured by its distance from the frontier.

AE can be measured by:

AE = OB/OA    (2)

where similarly AE must take a value >0 and ≤1. AE can be interpreted as a measure of excess costs arising from using inputs in inappropriate proportions.

If producing at Q in Figure 5.1, the hospital would be technically and allocatively efficient; otherwise, there may be some trade-off between the two.
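The ratios in equations (1) and (2) can be computed directly once the radial distances along the ray OC are known. The sketch below uses hypothetical coordinates for points A, B and C lying on a common ray from the origin; the numbers are illustrative and are not taken from Figure 5.1.

```python
import math

def dist(point):
    """Euclidean distance from the origin O to a (labour, capital) point."""
    return math.hypot(*point)

# Hypothetical points on a single ray from the origin (illustrative only):
C = (8.0, 6.0)    # observed input use
A = (4.0, 3.0)    # radial projection onto the technical-efficiency frontier
B = (3.2, 2.4)    # intersection of the ray with the isocost line

TE = dist(A) / dist(C)   # equation (1): technical efficiency
AE = dist(B) / dist(A)   # equation (2): allocative efficiency
OE = TE * AE             # overall (cost) efficiency = OB/OC

print(round(TE, 3), round(AE, 3), round(OE, 3))
```

Note that overall efficiency multiplies out naturally: OB/OC = (OB/OA) × (OA/OC), so a hospital can be technically efficient yet still allocatively inefficient.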

How can we measure these concepts and distances in applied terms that would produce information of use to those who have to deliver services? There are two main frontier-based methods: DEA and SFA.

5.2.1. Data envelopment analysis

DEA is by far the most common method for analysing efficiency in health care. It has now been applied over 400 times within health care settings. DEA makes use of linear programming methods to place weights on the inputs and outputs to show the hospital in the best possible light relative to how the other hospitals in the sample are using their inputs and outputs. In the simple case of a single output/single input firm, a measure of TE3 can be defined as:

TE = outputs/inputs    (3)

The greater this ratio, the greater the quantity of output for a certain amount of input. For a multiple output/multiple input firm, like a hospital treating different types of cases using staff of different types, various equipment, and so on, an overall measure of a hospital’s TE requires summing these different inputs and outputs in some way. The problem with this is that inputs and outputs cannot be simply summed as they usually measure very different things, for example, the number of doctors and operating theatres. Rather, we must give weights to each of the inputs and outputs. The weights are chosen so that TE lies between 0 and 1. If the weights are fully flexible, TE is defined for each firm as the ratio of a weighted sum of its outputs relative to a weighted sum of its inputs.

Important here is that the weights are unknown a priori; they must be calculated. Of all of the possible sets of weights (conditional on a set of constraints), the linear program optimizes the ones that give the most favourable view of the firm. This is the highest efficiency score, which shows the firm in the best possible light. The efficiency of any firm or unit, say a hospital (or nursing home, GP practice, and so on), is assessed relative to other firms within a peer group that form the efficiency frontier (that is, the firms that are deemed efficient).
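In the multiple-input/multiple-output case those weights must come from a linear programme, but in the degenerate single-input/single-output case under constant returns to scale the programme collapses to a simple ratio comparison, which is easy to sketch. The staff and patient numbers below are hypothetical.

```python
def dea_crs_scores(inputs, outputs):
    """Single-input/single-output DEA under CRS.

    In this special case the linear programme reduces to comparing each
    unit's output/input ratio against the best observed ratio in the sample.
    """
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

staff = [10, 20, 30, 50]         # hypothetical: clinical staff per hospital
treated = [100, 150, 300, 400]   # hypothetical: patients treated

scores = dea_crs_scores(staff, treated)
# Hospitals 1 and 3 (10 patients per staff member) define the frontier;
# the others are scored relative to that best observed practice.
```

This also makes the "relative efficiency" point concrete: a score of 1 only means no sampled hospital does better, not that no improvement is possible.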

Most areas of the economy, and in turn the health sector, are not linear in terms of the relationship between input use and outputs produced. So it is useful to account for possible increasing or decreasing returns to the inputs used. Figure 5.2 illustrates the DEA frontiers under constant returns to scale (CRS) and under variable returns to scale (VRS).

Figure 5.2. Constant and VRS under DEA. Note: CRS = constant returns to scale; DEA = data envelopment analysis; VRS = variable returns to scale.

The AB section of the VRS frontier exhibits increasing returns to scale (output increases proportionately more than inputs), BC exhibits CRS and CD decreasing returns to scale (output changes proportionately less than the change in inputs). For a given hospital (G), the distance EF measures the effects of economies of scale in production, and FG measures pure inefficiency.

Thus, results under CRS (where returns to scale are part of the inefficiency) or VRS can be very different. The VRS frontier draws in more hospitals to the frontier, so more are given a score as efficient. Often, this means both CRS and VRS are useful to conduct – the latter showing the tendencies related to returns to scale, the former being more discriminatory as to efficiency differences. These techniques can be very sensitive to the assumptions made, so the models and relationships being considered need to be very carefully thought through — something returned to when looking at the guidelines for their use.
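Given scores from paired CRS and VRS runs, the decomposition described above follows by simple division: overall technical efficiency (CRS) equals "pure" technical efficiency (VRS) times scale efficiency. The hospital scores below are hypothetical.

```python
def decompose(crs_score, vrs_score):
    """Split a CRS score into pure technical efficiency and scale efficiency.

    In Figure 5.2's terms, VRS inefficiency corresponds to the distance FG
    (pure inefficiency) and scale inefficiency to the distance EF.
    """
    scale = crs_score / vrs_score
    return vrs_score, scale

# Hypothetical scores from paired CRS and VRS runs of the same model:
crs = {"H1": 0.60, "H2": 0.90, "H3": 1.00}
vrs = {"H1": 0.80, "H2": 0.90, "H3": 1.00}

scale_eff = {h: decompose(crs[h], vrs[h])[1] for h in crs}
# H1 is partly scale-inefficient; H2 and H3 operate at an efficient scale.
```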

In analyses of this type, it is important to account for influences of the distribution of medical case complexity (case mix) on producer efficiency in the production of health care. One approach to modelling the effects of case mix is to include an aggregated measure of patient characteristics at each hospital as a type of input in the production frontier. However, patients are not inputs that are transformed to make the final product (which in this case is health care interventions). Instead, patients consume treatments to (hopefully) produce improvements in their health status.

The characteristics of patients and their illness (or illnesses) will influence the production of health. DEA models can incorporate patient case mix by first adjusting outputs to reflect variations in case severity (see Chapter 2). Not accounting for the mix of cases in some way would produce results that may not be useful comparisons – however, the method of adjustment needs careful consideration. For example, it is now common to incorporate DRG weights into output measures (for example, case mix-adjusted inpatient admissions) in developed countries, where such data are often collected for payment systems. However, it is less common to have such accurate case mix adjustments for outpatients or primary care. This often makes comparisons of efficiency a lot cruder in these areas.

Another method to account for such characteristics involves adding a second stage of analysis to the DEA approach. The first stage involves running a DEA model based on inputs and outputs to yield efficiency scores for units (say hospitals again), as shown earlier. The second stage then takes these efficiency scores and statistically regresses them against hospital-level case mix variables to assess the impact of the patients’ sociodemographic and clinical characteristics on the production process and efficiency. This allows the inclusion of variables that do not fall neatly into the input–output analysis to potentially see if they have a significant impact on the efficiency scores obtained in the first stage, but there are many statistical issues with undertaking such second-stage analysis (see Simar & Wilson, 2008 for further reading).
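A second-stage regression of this kind can be sketched with ordinary least squares, though the caveats above (bounded scores, correlation among DEA estimates; see Simar & Wilson, 2008) mean this is illustrative only. Both the case-mix index and the scores below are hypothetical.

```python
def ols(x, y):
    """One-regressor ordinary least squares: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Stage 1 (assumed already done): DEA efficiency scores per hospital.
# Stage 2: regress those scores on a hospital-level case-mix index.
casemix = [1.0, 1.2, 1.4, 1.6]      # hypothetical severity index
scores = [0.90, 0.80, 0.70, 0.60]   # hypothetical first-stage DEA scores

slope, intercept = ols(casemix, scores)
# A negative slope would suggest that hospitals with a heavier case mix
# receive lower unadjusted efficiency scores.
```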

5.2.2. Limitations of DEA

Before proceeding, it is important to note that DEA has several major limitations that require some care on the part of those constructing models and others interpreting the results. There are statistical issues to account for. The technique is deterministic and outlying observations can be important in determining the frontier (which is made up of the most efficient units). Closer investigation of these outliers is often warranted to ensure the sample is actually uniform in nature, that is, you really are comparing like-with-like.

Care must be taken in interpreting the results, as the DEA efficiency frontier may be influenced by stochastic variation, measurement error or unobserved heterogeneity in the data. DEA makes the strong and non-testable assumption of no measurement error or random variation in output. Small random variation for inefficient hospitals will affect the magnitude of the inefficiency estimate for that hospital. Larger random variation may move the frontier itself, thereby affecting efficiency estimates for a range of hospitals.

DEA is sensitive to the number of input and output variables used in the analysis. Overestimates of efficiency scores can occur if the number of units relative to the number of variables used is small. A general rule of thumb is that the number of units used should be at least three times the combined number of input and output variables.
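The rule of thumb above can be expressed as a quick screening check before a model is run:

```python
def enough_units(n_units, n_inputs, n_outputs):
    """Rule of thumb: units should number at least 3 x (inputs + outputs)."""
    return n_units >= 3 * (n_inputs + n_outputs)

# A model with 3 inputs and 2 outputs needs at least 15 units:
print(enough_units(44, 3, 2))   # a sample of 44 is comfortable
print(enough_units(12, 3, 2))   # 12 units risks overestimated scores
```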

DEA only provides a measure of relative efficiency in the sense that a hospital which is deemed efficient by DEA is only efficient given the observed practices in the sample which is being analysed. Therefore, it is possible that greater efficiency than that observed could be achieved in the sample.

DEA can be used to measure efficiency changes over time (often referred to as a Malmquist Index). Measuring changes over time, rather than a snapshot of efficiency, gives a more accurate picture of what is really happening in efficiency terms. The interested reader is referred to Thanassoulis, Portela & Despić (2008) for a much more technical explanation of these methods.
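In the single-input/single-output CRS case the Malmquist index and its standard decomposition into efficiency change ("catching up") and technical change (frontier shift) can be written down directly. All figures below are hypothetical.

```python
import math

def malmquist(x0, y0, x1, y1, f0, f1):
    """Malmquist productivity index for one unit between two periods.

    x0, y0 / x1, y1: the unit's input and output in periods 0 and 1.
    f0, f1: the best observed output/input ratio (the CRS frontier) in
    each period. Returns (efficiency change, technical change, index).
    """
    d00 = (y0 / x0) / f0   # distance to own-period frontier, period 0
    d11 = (y1 / x1) / f1   # distance to own-period frontier, period 1
    d01 = (y1 / x1) / f0   # period-1 data against the period-0 frontier
    d10 = (y0 / x0) / f1   # period-0 data against the period-1 frontier
    ec = d11 / d00                              # catching up
    tc = math.sqrt((d01 / d11) * (d00 / d10))   # frontier shift
    return ec, tc, ec * tc

# Hypothetical hospital: output rises from 60 to 80 cases on 10 staff,
# while the best observed ratio moves from 10 to 12.5 cases per staff.
ec, tc, m = malmquist(10, 60, 10, 80, 10.0, 12.5)
# m > 1 indicates productivity growth; here both components contribute.
```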

5.2.3. Stochastic frontier analysis

SFA has been used in a much smaller number of efficiency analyses in health care than DEA, but the number of papers is increasing. SFA uses statistical regression analysis rather than mathematical programming to do basically the same thing as DEA in terms of measuring the distance a hospital is from a calculated efficient frontier. In SFA, the usual statistical error term in such regression equations is split into inefficiency and error. Some researchers see this as a more precise measure of efficiency, as it accounts for statistical noise, which DEA does not do. However, other researchers recommend using both techniques and looking at the direction both point in (for example, Varabyova & Schreyögg, 2013). If both methods indicate inefficiency in a hospital, then a closer investigation is perhaps warranted.

The use of SFA in the production of health care has received increasing attention in recent years. This is partly because of greater interest in efficiency measurement in general in health and health care, but also because of advances in modelling techniques and increased computing capabilities. As with DEA, there are several limitations. Estimation of an SFA production frontier requires that all outputs can be meaningfully aggregated into a single measure. This assumption is questionable within the health context. To allow multiple outputs to be modelled (as outputs in health care are typically heterogeneous) researchers often estimate costs rather than production frontiers. Costs can be easily aggregated into a single measure using common monetary units such as dollars.

The inclusion of variables capturing case mix and producer characteristics in the model allows statistical testing of hypotheses concerning the relationship between these factors and producer efficiency. Assumptions concerning the error term in SFA may also be important. In technical terms, if an assumption of normality in the error term does not hold, and its distribution is skewed, inefficiency may be under- or overestimated. Also, the functional form of such models is a source of potential error. (The interested reader is pointed to Greene, 2008 for further and more technical reasoning.)
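Full SFA requires maximum-likelihood estimation of a composed error term, which is beyond a short sketch. A self-contained deterministic cousin, corrected OLS (COLS), conveys the frontier idea: fit an average line, shift it to envelop the data, and read each unit's residual gap as inefficiency. Everything below, data included, is an illustrative sketch and not an SFA estimator.

```python
import math

def cols(log_x, log_y):
    """Corrected OLS production frontier (deterministic sketch).

    Fits OLS of log output on log input, shifts the intercept up by the
    largest residual so the line envelops every observation, and reads
    each unit's inefficiency as its distance below the shifted line.
    """
    n = len(log_x)
    mx, my = sum(log_x) / n, sum(log_y) / n
    sxx = sum((a - mx) ** 2 for a in log_x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(log_x, log_y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [b - (intercept + slope * a) for a, b in zip(log_x, log_y)]
    shift = max(resid)
    ineff = [shift - r for r in resid]   # zero for the frontier unit(s)
    return slope, intercept + shift, ineff

# Hypothetical hospitals: log staff numbers and log patients treated.
log_staff = [math.log(v) for v in (10, 20, 30, 50)]
log_treated = [math.log(v) for v in (100, 150, 300, 400)]

slope, frontier_intercept, ineff = cols(log_staff, log_treated)
# At least one unit sits on the shifted line and defines the frontier.
```

Unlike SFA proper, COLS attributes the entire residual to inefficiency, with no allowance for statistical noise; that is precisely the gap SFA's composed error is designed to fill.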

5.3. The application of DEA

Illustrative use is made here of an example from a hospital setting in the United Kingdom. The sample in this study is relatively small, 44 United Kingdom hospitals over two years. The full study has been published (Hollingsworth & Parkin, 2003), and the interested reader is referred to that article for more detail.

When undertaking any applied empirical work, it is useful to first specify a model based on the data and variables available. This involves choices, as rarely are data so comprehensive that they represent perfectly any theoretical model. There are few criteria for choosing between different efficiency models (Parkin & Hollingsworth, 1997; Smith, 1997), but there are some practical considerations. For example, the greater the number of variables in the model, the more information produced on which variables impact on efficiency. However, the more variables, the greater the number of efficient hospitals on the frontier; a balance must then be struck.

In the case illustrated here, sensitivity analyses of different models with different variables were undertaken using correlation analyses to test the robustness of results to changes in the models. They were all based on the same theoretical model, but aggregated in different ways to test for information trade-offs. The final model arrived at is shown in Table 5.1.

Table 5.1. Inputs and outputs in a DEA model.
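One simple way to carry out such a robustness check is to ask whether the ranking of hospitals is stable across specifications, for example with a Spearman rank correlation of the two sets of scores. The sketch below ignores tied scores, and the score lists are hypothetical.

```python
def spearman(a, b):
    """Spearman rank correlation between two score lists (no tie handling)."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical efficiency scores from two model specifications:
model_a = [0.95, 0.70, 0.85, 0.60, 1.00]
model_b = [0.90, 0.75, 0.80, 0.65, 0.98]

rho = spearman(model_a, model_b)
# rho close to 1 suggests the ranking is robust to the specification change.
```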

DEA was used with an input orientation – that is, the analysis shows the minimum level of inputs that could feasibly be used given a hospital’s chosen outputs. Of course, the equivalent output-oriented model could also be estimated. The results obtained are shown in Table 5.2.

Table 5.2. Results from DEA example.

The overall results for all hospitals and by group of hospital are shown. The minimum score is useful because it demonstrates the range of results (scores ranged from 3.57% up to 100%, the latter being the hospitals on the frontier). Overall, the hospitals were operating on average with more than 10% inefficiency.

We can illustrate these results in many ways. Figure 5.3 contains the ranked scores by hospital.

Figure 5.3. Hospital DEA ranking. Note: DEA = data envelopment analysis.

Figure 5.4 looks at the changes in scores from one year to the next. Information of this nature demonstrates which hospitals are outliers, which have potential for efficiency gains and which are most useful as benchmarking units.

Figure 5.4. Changes in DEA efficiency scores from 1994–1995 to 1995–1996, shown by hospital. Note: DEA = data envelopment analysis.

Information can also be fed back to each inefficient hospital on the improvements that could be made to increase efficiency. This is done by calculating the reduced level of resources that would be used if the hospital was on the efficient frontier calculated by the DEA for its chosen level of outputs. Figure 5.5 contains an example for one such hospital (anonymized as NY11 here – but a real hospital in this real data set) in terms of reducing input use to get to the efficiency frontier. (Any one overall reduction, or a combination of lesser reductions, could result in a move to the frontier.)

Figure 5.5. Input reduction targets to improve efficiency in hospital NY11.

Based on Figure 5.5, this hospital can see that in 1994–1995 it was using its medical staff inefficiently in terms of using a lot more staff to produce the same outputs as its comparator units (those similar in input/output mix and size). However, in the following year, we can see this hospital did a much better job of using its medical staff (and in fact most other inputs) in producing outputs. This is reflected in its overall efficiency score, which improved dramatically by more than 20%. One area of concern that remains is that this hospital appears to have high other costs (in this example, these include all non-staff and capital costs) relative to similar hospitals that are more efficient. Policymakers may find it useful to go to hospital NY11 and ask their management just how they changed procedures on staff use, or how capital spending was changed over this time period, so that other hospitals can learn in benchmarking terms from this best practice.
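Targets of the kind fed back in Figure 5.5 follow directly from an input-oriented score: scaling every input by the efficiency score gives the radial target. The sketch below ignores any input-specific slacks, and the figures are hypothetical, not NY11's actual data.

```python
def radial_targets(theta, inputs):
    """Radial input targets for an input-oriented DEA score theta.

    An inefficient unit reaches the frontier by scaling every input down
    to theta times its observed level (input-specific slacks ignored).
    """
    return {name: theta * level for name, level in inputs.items()}

# Hypothetical hospital with an input-oriented efficiency score of 0.75:
observed = {"medical staff": 250, "beds": 400, "other costs": 12_000_000}
targets = radial_targets(0.75, observed)
# Each input must fall by 25% for this hospital to reach the frontier.
```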

This particular project used results similar to these to feed back to these hospitals, with great success. Hospital managers and those commissioning care found the results to be of great interest. On occasion, they could explain the results in terms of data deficiencies, but there were also efficiency gains to be made by looking at benchmark examples of best practice. All of the participants found the results useful based on the amount of detail that could be presented in a practical manner. They were all made aware of the limitations of the work, and that often it is not the exact scores that are relevant, but rather, the areas pointed out for potential improvement and the comparators available as benchmarks.

5.4. Setting out the protocol

Given all that has been said above, how can we make these methods even more useful? There are published guidelines for the application of the DEA and SFA techniques. A slightly modified version of these guidelines is reproduced here. They set out clearly how these methods can be of use to those who need to undertake such analyses (that is, suppliers), and, perhaps most importantly, those who need to interpret and make use of the results generated (that is, demanders).

5.4.1. Suppliers

Suppliers should consider how to make their studies more effective. In other words, are there specific criteria or guidelines that would make efficiency measurement more user-friendly, for example, to those involved in using such information to make policy choices? Here, we establish some initial non-exhaustive criteria as a starting-point, in both macro and micro terms. By macro, we mean the overall process of undertaking the study in terms of set-up and management, in a way that helps ensure that the information provided will be of use in policy terms. By micro, we mean the actual production of the efficiency scores.

Macro issues include the following:

  1. Applied research needs to be placed within a policy context. One important element of any efficiency analysis is to get potential end users involved early on. This helps ownership of the research from the users’ perspective and keeps the researcher on track. This may initially involve finding the right person or group of people. (Having a number of people involved reduces risks, for example, staff moving positions.) Meetings held to feed back results at the various stages and to different levels of users (for example, hospital managers, health department staff, those involved with policy development) will help make sure information is provided to those who want to use it. An advisory group featuring such participants to initially help set up the model specification may be useful.
  2. Hospital managers may have concerns about health authorities using efficiency measures as big sticks and are generally interested in more detailed information within their specific unit; health authority staff tend to be more interested in comparisons between hospitals; government policymakers may be more interested in the overall picture of how care is delivered in different sectors, perhaps primary compared to secondary care. The researcher has to balance these views; providing all of the information to everyone may help. Also, it is important to ask what information would be useful that the data/modelling is not already providing. Analysts should try and accommodate this, or suggest means (for example, extra data) that could help. It is essential to identify what value is being added to the way efficiency is already being measured.
  3. End users should be given the information that was intended. Surveying end users may help in refining measures. Results should also be disseminated as widely as possible. Users should know the limitations of efficiency measures: they are a useful policy tool, not the useful policy tool. Results can be manipulated so full provision of information to all may be helpful.

Micro issues include the following:

  1. Are the right questions being asked?
  2. What is the underlying economic theory of production or cost? Do duality theory and the requirement for cost minimization as an objective really apply?
  3. Is the model specified correctly? Has extensive sensitivity analysis been undertaken? An advisory group can point out if there are any obvious omitted variables.
  4. Are the data really good enough to answer the questions, particularly the output data?
  5. Are there any data on quality? What will results using just quantity (throughput) data really show? Will any inefficiency be just made up of omitted quality data?
  6. If there are quality data, how will they be weighted relative to quantity data to avoid being swamped by relatively large numbers of throughput information? Unless carefully weighted, potentially vital information on quality may have little impact on results.
  7. Is the sample inclusive enough, comparing like-with-like? Exploratory analyses are useful; even if all hospitals in the sample have the same categorization, there may be a rogue specialist unit or teaching hospital which will confound the results, as frontier techniques are very susceptible to outliers. Sample size is also an issue.
  8. What techniques should be used: parametric, non-parametric or both? If there are multiple inputs/outputs, non-parametric techniques have an advantage (when comparing DEA and SFA) in terms of disaggregation.4 They provide more detailed information on specific areas of inefficiency. Panel data techniques will also provide more information, not only on what happens between units, but also on what happens over time. Looking at trends over time is more useful than a snapshot.
  9. Is it useful to do two-stage analyses, and if so, how can any statistical problems be accounted for (see Simar & Wilson, 2008)?
  10. Is it necessary to generate confidence intervals? Unless the sample is all-inclusive, it may be prudent to account for sampling variation.5
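Item 10's sampling-variation concern is usually addressed with the smoothed bootstrap of Simar & Wilson; the naive resampling sketch below only illustrates the mechanics (and understates the bias), using the single-input/single-output CRS ratio form and hypothetical data.

```python
import random

def crs_score(unit, sample):
    """Single-input/single-output CRS DEA score of `unit` against `sample`."""
    best = max(y / x for x, y in sample)
    x, y = unit
    return (y / x) / best

def naive_bootstrap_ci(unit, sample, reps=2000, alpha=0.05, seed=1):
    """Naive percentile-bootstrap interval for one unit's CRS score.

    Resamples the comparator set with replacement and recomputes the score;
    the unit itself is always kept in so scores stay in (0, 1].
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(reps):
        resample = [rng.choice(sample) for _ in sample]
        scores.append(crs_score(unit, resample + [unit]))
    scores.sort()
    lo = scores[int(reps * alpha / 2)]
    hi = scores[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi

hospitals = [(10, 100), (20, 150), (30, 300), (50, 400)]  # hypothetical
lo, hi = naive_bootstrap_ci(hospitals[3], hospitals)
# The interval shows how the last hospital's score moves as the
# frontier-defining comparators drop in and out of the resamples.
```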

5.4.2. Demanders

Table 5.3 contains a checklist for assessing if an efficiency analysis is useful. This is a starting-point, based on the list by Drummond et al. (2005) for assessing economic evaluations. Suppliers of efficiency studies may also wish to take note of these points.6 The two assessment questions asked by Drummond et al. (see Chapter 3 of their book) are also pertinent here: is the methodology appropriate and are the results valid; and if the answer to this is yes, do the results apply to the setting being evaluated? As Drummond et al. acknowledge, it is unlikely every study can fulfil every criterion, but criteria are useful as screening devices to identify the strengths and weaknesses of studies, and of course to identify the value added by comprehensive extra analysis of this nature.

Table 5.3. A checklist for assessing efficiency measurement studies.

From a policymaker’s perspective, the same questions should be asked, but some will be more useful than others in assessing how useful a particular study will be if looking at the bigger picture. For example, under checklist item 1, what is the perspective of the study – if it is to look at efficiency within a single hospital, this may be of interest to a local authority policymaker, but not someone at the WHO wanting to make comparisons between hospitals funded in different ways in different countries. Again, when looking at the samples used, is a policymaker interested in teaching hospitals, hospitals that have merged, hospitals that are over a certain size, and so on? This is information that should be clearly provided by those undertaking the study to make its impact as useful as possible. Whether analyses should be undertaken over time is another key question policymakers may find useful – a snapshot of efficiency may be useful at a certain level, but looking at how previous policy changes have impacted on efficient production of health care over time in a similar setting, with a similar sample, may be very informative to planning processes.

5.5. Conclusions

There are important lessons to be learnt from prior experiences using frontier-based methods to measure efficiency in health care, particularly with regard to how best to implement and interpret such measures. Frontier-based metrics are clearly useful when based on sound data, robust and valid models, and when the limitations of the methods are well understood. In many cases, it makes the most sense to try a mix of both DEA and SFA model specifications, with the hope of finding consistent results.

Additionally, these metrics are most relevant when the results are presented in a manner that is easily understood by those tasked with making changes in policy or service delivery. The use of guidelines is one way forward to ensure that the data produced and presented are pertinent to these end users. Guidelines have made a huge difference to the quality of economic evaluation, and could do the same in terms of efficiency measurement, both in terms of the provision of better information, and the interpretation of such information by end users.

References

  • Coelli T, et al. An introduction to efficiency and productivity analysis. New York: Springer; 2005. http://facweb.knowlton.ohio-state.edu/pviton/courses/crp394/coelli_Intro_effic.pdf, accessed 22 July 2016.
  • Drummond M, et al. Methods for the economic evaluation of health care programmes. Oxford: Oxford University Press; 2005.
  • Greene WH. The econometric approach to efficiency analysis. In: Fried HO, Knox Lovell CA, Schmidt SS, et al., editors. The measurement of productive efficiency and productivity growth. Oxford: Oxford University Press; 2008.
  • Hollingsworth B. Non-parametric and parametric applications measuring efficiency in health care. Health Care Management Science. 2003;6(4):203–218. [PubMed: 14686627]
  • Hollingsworth B. The measurement of efficiency and productivity of health care delivery. Health Economics. 2008;17(10):1107–1128. [PubMed: 18702091]
  • Hollingsworth B. Revolution, evolution, or status quo? Guidelines for efficiency measurement in health care. Journal of Productivity Analysis. 2012;37(1):1–5.
  • Hollingsworth B, Parkin D. Efficiency and productivity change in the English National Health Service: can data envelopment analysis provide a robust and useful measure? Journal of Health Services Research & Policy. 2003;8(4):230–236. [PubMed: 14596758]
  • Hollingsworth B, Peacock S. Efficiency measurement in health and health care. London: Routledge; 2008.
  • Hollingsworth B, Street A. The market for efficiency analysis of health care organisations. Health Economics. 2006;15(10):1055–1059. [PubMed: 16991208]
  • Kumbhakar S, Knox Lovell CA. Stochastic frontier analysis. Cambridge: Cambridge University Press; 2000.
  • Parkin D, Hollingsworth B. Measuring production efficiency of acute hospitals in Scotland, 1991–1994: validity issues in data envelopment analysis. Applied Economics. 1997;29(11):1425–1434.
  • Simar L, Wilson PW. Statistical inference in non parametric frontier models: recent developments and perspectives. In: Fried HO, Knox Lovell CA, Schmidt SS, et al., editors. The measurement of productive efficiency and productivity growth. Oxford: Oxford University Press; 2008.
  • Smith P. Model misspecification in data envelopment analysis. Annals of Operations Research. 1997;73:233–252.
  • Thanassoulis E, Portela MCS, Despić O. Data envelopment analysis: the mathematical programming approach to efficiency analysis. In: Fried HO, Knox Lovell CA, Schmidt SS, et al., editors. The measurement of productive efficiency and productivity growth. Oxford: Oxford University Press; 2008.
  • Varabyova Y, Schreyögg J. International comparisons of the TE of the hospital sector: panel data analysis of OECD countries using parametric and non-parametric approaches. Health Policy. 2013;112(1–2):70–79. [PubMed: 23558262]

Footnotes

*

Permission has been granted by Elsevier, Springer, and John Wiley & Sons Ltd, respectively, to reproduce some of the content of this chapter.

3

Other forms of efficiency can be measured using DEA, including, as noted earlier, allocative efficiency (for example, by comparing firms using identical weights); here we concentrate on TE for ease of exposition.

4

A single output stochastic production frontier can be adapted to the multiple output case, making use of distance functions. There is a growing technical literature in the area of multiple output distance functions, see, for example, Kumbhakar & Knox Lovell (2000) or Coelli et al. (2005).

5

See Coelli et al. (2005: pp 202–203) for a discussion about concerns with sampling distributions, that is, DEA is measuring the frontier when all the hospitals in a country are in the sample, but it is estimating the frontier if not.

6

This refers to applied efficiency measurement. See Hollingsworth & Street (2006) for a discussion of this.

7

This checklist relies heavily on Box 3.1 in Drummond et al. (2005).

© World Health Organization 2016 (acting as the host organization for, and secretariat of, the European Observatory on Health Systems and Policies)
Bookshelf ID: NBK436889
