
Bhui K, Aslam RW, Palinski A, et al. Interventions designed to improve therapeutic communications between black and minority ethnic people and professionals working in psychiatric services: a systematic review of the evidence for their effectiveness. Southampton (UK): NIHR Journals Library; 2015 Apr. (Health Technology Assessment, No. 19.31.)


Appendix 14 Core quality score for all quantitative studies (0–12)

Numbers in brackets refer to the quality score awarded.

How clearly each study indicates that there is an intervention to improve therapeutic communication, as a quality indicator (1–4)

  1. Intervention clearly shown, and it clearly is to improve TC, for relevant outcomes of interest. (4)
  2. Intervention clearly shown and relevant to outcomes of interest. (3)
  3. Intervention vague and/or multicomponent, so it is difficult to discern whether the mediating effect is truly through improved TC. (2)
  4. Inferred by reviewer given elements of decision-making, assessment and conversation needed and efforts to improve these through adaptation of interventions. (1)

Outcome of therapeutic communication (1–3)

  1. Direct measure of TC by a reliable and valid scale, for example alliance, reduced conflict, greater trust. (3)
  2. Proxy measure: attendance, premature termination. (2)
  3. Narrative outcome. (1)
  4. No outcome: exclude study.

Ethnic groups (0–5)

  1. Groups of relevance to the UK and described in a manner consistent with a specific classification scheme for ethnicity (not just race). (5)
  2. Groups of relevance to the UK. (3)
  3. Not of relevance to the UK but well described in terms of ethnicity. (1)
  4. Not of relevance to the UK. (0)

Types of study (0–4)

  1. Quantitative: randomised controlled trial. (4)
  2. Observational: a series, or service evaluation data, or pre–post type evaluations. (3)
  3. Qualitative/narrative evaluation of a series. (2)
  4. Case study/studies: in-depth narrative information. (1)
  5. Experiences and personal reports with no methodological framework. (0)

Economic evaluation (0–4)

  1. Cost-effectiveness. (4)
  2. Impact and intervention costs/benefits. (3)
  3. Intervention costs. (1)
  4. Financial gains and losses. (1)
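
The component maxima above suggest how the 0–12 core score is tallied: the first three components (intervention clarity, TC outcome, ethnic groups) have maxima 4 + 3 + 5 = 12, with study type and economic evaluation apparently recorded separately. The sketch below illustrates that tally; the summation rule and all names are assumptions made for illustration, not part of the published rubric.

```python
# Hypothetical tally of the core quality score (0-12), assuming it sums the
# first three components of the rubric, whose maxima (4 + 3 + 5) equal 12.
COMPONENT_RANGES = {
    "tc_intervention": (1, 4),  # clarity of the TC intervention (1-4)
    "tc_outcome": (1, 3),       # outcome of therapeutic communication (1-3)
    "ethnic_groups": (0, 5),    # relevance/description of ethnic groups (0-5)
}

def core_quality_score(scores: dict) -> int:
    """Validate each component against its stated range and return the total."""
    total = 0
    for name, (low, high) in COMPONENT_RANGES.items():
        value = scores[name]
        if not low <= value <= high:
            raise ValueError(f"{name} must be in [{low}, {high}], got {value}")
        total += value
    return total

# Example: a study with a clearly described TC intervention (4), a proxy
# outcome such as attendance (2), and groups of relevance to the UK (3).
print(core_quality_score({"tc_intervention": 4, "tc_outcome": 2, "ethnic_groups": 3}))  # 9
```

Because the "no outcome" category excludes a study outright, studies that reach scoring always have at least 1 on the outcome component.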

Quality of randomised controlled trial (0–30)

  1. Adequate sample size (n per group): 0 = inadequate, 1 = moderate and 2 = large or specified by power calculations.
  2. Appropriate duration of trial including follow up: 0 = too short, 1 = reasonable length and 2 = long enough for assessment of long-term outcomes.
  3. Power calculation: 0 = not reported, 1 = mentioned without details and 2 = details of calculations provided.
  4. Method of allocation: 0 = unrandomised and likely to be biased, 1 = partially or quasi-randomised with some bias possible and 2 = randomised allocation.
  5. Source of subjects described and representative sample recruitment: 0 = source of subjects not described, 1 = source of subjects given but no information on sampling or use of unrepresentative sample (e.g. volunteers) and 2 = source of subjects described plus representative sample taken (e.g. all consecutive admissions or referrals, or random sample taken).
  6. Use of diagnostic criteria (or clear specification of inclusion criteria): 0 = none, 1 = diagnostic criteria or clear inclusion criteria and 2 = diagnostic criteria plus specification of severity.
  7. Record of exclusion criteria and number of exclusions and refusals reported: 0 = criteria and number not reported, 1 = criteria or number of exclusions and refusals not reported and 2 = criteria and number of exclusions and refusals reported.
  8. Blinding of assessor: 0 = not done, 1 = done but no test of blind and 2 = done and integrity of blind tested.
  9. Assessment of compliance with experimental treatments (including attendance for therapy): 0 = not assessed, 1 = assessed for some experimental treatments and 2 = assessed for all experimental treatments.
  10. Record of number and reasons for withdrawal: 0 = no information on withdrawals by group, 1 = withdrawals by group reported without reason and 2 = withdrawals and reason by group reported.
  11. Information on comparability and adjustment for differences in analysis: 0 = no information on comparability, 1 = some information on comparability with appropriate adjustment and 2 = sufficient information on comparability with appropriate adjustment.
  12. Inclusion of all subjects in analyses (intention-to-treat analysis): 0 = less than 95% of subjects included and 2 = 95% or more included.
  13. Presentation of results with inclusion of data for re-analysis of main outcomes (e.g. standard deviations): 0 = little information presented, 1 = adequate information and 2 = comprehensive.
  14. Appropriate statistical analysis (including correction for multiple tests where applicable): 0 = inadequate, 1 = adequate and 2 = comprehensive and appropriate.
  15. Conclusions justified: 0 = no, 1 = partially and 2 = yes.

Quality assessment for case series

Scoring: yes = 2, unclear = 1 and no = 0 (total score = 0–38).

  • Is the hypothesis/aim/objective of the study clearly described?
    • Yes: the hypothesis/aim/objective of the study is clearly reported.
    • Unclear: the hypothesis/aim/objective of the study is vague or unclearly reported.
    • No: the hypothesis/aim/objective is not reported.
  • Are the characteristics of the participants included in the study described?
    • Yes: the most relevant characteristics of the participants are reported (e.g. the total number, age, and gender distribution). Ethnicity, severity of disease/condition, comorbidity, or aetiology should also be included, if relevant.
    • Partially reported: only the number of participants was reported.
    • No: none of the relevant characteristics of the participants is reported.
  • Were the cases collected in more than one centre?
    • Yes: cases are collected in more than one centre (multicentre study).
    • Unclear: unclear where the patients come from (i.e. single or multicentre study).
    • No: cases are collected from one centre.
  • Are the eligibility criteria (i.e. inclusion and exclusion criteria) for entry into the study clearly stated?
    • Yes: both inclusion and exclusion criteria are reported.
    • Partially reported: only one of the two (inclusion or exclusion criteria) is reported.
    • No: neither inclusion nor exclusion criteria are reported.
  • Were participants recruited consecutively?
    • Yes: there is a clear statement or it is clear from the context that the participants were recruited consecutively or study stated that all eligible patients were recruited.
    • Unclear: the method used to recruit participants is not clearly stated or no information is provided about the method used to recruit participants in the study.
    • No: the cases studied were a subgroup of those treated, with no evidence to show that they were selected consecutively, or the participants were recruited based on other criteria such as access to the intervention determined by distance or availability of resources.
  • Did participants enter the study at a similar point in the disease?
    • Yes: there is a clear description about all participants entering the study at a similar point in the condition/disease based on their clinical status, duration of condition or exposure before the intervention, severity of disease, and presence of comorbidities or complications.
    • Unclear: there is no description of the characteristics of participants before entering the study or there is no statement about entering the study at a similar point in the disease.
    • No: participants did not enter the study at a similar point in the condition/disease. This can be revealed by a wide range of disease durations before entering the study or different levels of severities or comorbidities or complications due to progression of their condition/disease.
  • Were additional interventions (co-interventions) reported in the study?
    • Yes: participants received additional co-intervention(s).
    • Unclear: it is suspected that a co-intervention was administered but the information is not reported.
    • No: there is a clear statement or it is clear from the context that a co-intervention was not administered.
  • Are the outcome measures established a priori?
    • Yes: all relevant outcome measures are reported in the introduction or methods section (e.g. accomplished, measurable improvements or effects, symptoms relieved, improved function, improved test scores, and quality-of-life measures).
    • Partially reported: some of the relevant outcomes are briefly reported in the introduction or methods section.
    • No: the outcome measures are reported for the first time in the results, discussion, or conclusion section of the study.
  • Were the relevant outcomes measured with appropriate objective and/or subjective methods?
    • Yes: all relevant outcomes are measured with appropriate methods, which are described in the methods section. These measures might be objective (e.g. gold standard tests or standardised clinical tests), subjective (e.g. self-administered questionnaires, standardised forms, or patient symptoms interview forms), or both.
    • Unclear: it is unclear how the relevant outcomes were measured. No information is provided on the methods used to measure the study’s relevant outcomes.
    • No: the methods used to measure outcomes were inappropriate.
  • Were the relevant outcomes measured before and after the intervention?
    • Yes: the relevant outcomes are measured before and after applying the intervention.
    • Unclear: it is unclear when the outcomes were measured.
    • No: the study reported only outcomes measured after applying the intervention.
  • Was the study conducted prospectively?
    • Yes: it is clearly stated that the study was conducted prospectively.
    • Unclear: the design of the study is not mentioned or it is unclear if the study was conducted prospectively.
    • No: the authors clearly stated that it was a retrospective study.
  • Were the relevant outcomes assessed blinded to intervention status?
    • Yes: the relevant outcomes were analysed by individuals who were not aware of the intervention status.
    • Unclear: the study did not report whether the outcome assessors were aware of the intervention status.
    • No: it is clearly stated or obvious that the relevant outcomes were analysed by individuals who were aware of the intervention status.
  • Were the statistical tests used to assess the relevant outcomes appropriate?
    • Yes: the statistical tests are clearly described in the methods section of the study and are used appropriately (e.g. parametric test for normally distributed population vs. nonparametric test for non-Gaussian population). The reviewer should assign a yes score if no statistical analysis was performed but reasons for this were stated.
    • Unclear: the statistical tests are not described in the methods section of the study or there is no information about the statistical analysis.
    • No: the statistical tests were used inappropriately.
  • Was the length of follow-up reported?
    • Yes: the length of follow-up is clearly reported (mean, median, range, standard deviation).
    • Unclear: the duration of follow-up is not clearly reported.
    • No: the length of follow-up is not reported.
  • Was the loss to follow-up reported?
    • Yes: the number or proportion of participants lost to follow-up is clearly reported or authors report outcome results on all participants included initially, or number lost to follow-up can be subtracted from the number enrolled and number analysed.
    • Unclear: it is not clear from the information provided how many participants were lost to follow-up, or the reporting of loss to follow-up is inconsistent (e.g. discrepancies between information in tables and text).
    • No: the number or proportion of participants lost to follow-up is not reported.
  • Does the study provide estimates of the random variability in the data analysis of relevant outcomes?
    • Yes: the study reports estimates of the random variability (e.g. standard error, standard deviation, confidence interval for parametric data, and range and interquartile range for nonparametric data) for all relevant outcomes.
    • Unclear or partially reported: the presentation of the random variability is unclear (e.g. the measure of dispersion is reported without indicating if it is a standard deviation or standard error). Estimates of the random variability are not reported for all relevant outcomes.
    • No: the study does not report estimates of the random variability.
  • Are the adverse events related with the intervention reported?
    • Yes: the undesirable or unwanted consequences of the intervention during the study period or within a pre-specified time period are reported. The absence of adverse event(s) is acknowledged in the study.
    • Partially reported: it is deducible that only some but not all potential adverse events are reported.
    • No: there is no statement about the presence or absence of adverse events.
  • Are the conclusions of the study supported by results?
    • Yes: the conclusions of the study (in terms of patient, intervention, outcomes) are supported by the evidence presented in the results and discussion sections.
    • Partially reported: not all components of the patient, intervention, outcomes are supported by the evidence presented in the results and discussion section.
    • No: the conclusions are not supported by the evidence presented in the results and discussion section.
  • Are both competing interests and sources of support for the study reported?
    • Yes: both competing interests and sources of support (financial or other) received for the study are reported, or the absence of any competing interest and source of support is acknowledged.
    • Partially reported: only one of these elements is reported.
    • No: neither competing interests nor sources of support were reported.
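
The checklist above has 19 items, each scored yes = 2, unclear/partially reported = 1 and no = 0, giving the stated total of 0–38. A minimal sketch of that tally (the function and rating labels are illustrative assumptions, not part of the checklist):

```python
# Hypothetical tally for the 19-item case-series checklist:
# yes = 2, unclear or partially reported = 1, no = 0 (total 0-38).
ITEM_SCORES = {"yes": 2, "unclear": 1, "partially reported": 1, "no": 0}

def case_series_score(ratings: list) -> int:
    """Sum the per-item ratings for the 19 checklist items."""
    if len(ratings) != 19:
        raise ValueError(f"expected 19 item ratings, got {len(ratings)}")
    return sum(ITEM_SCORES[r.lower()] for r in ratings)

# Example: 10 items rated yes, 5 unclear, 4 no -> 10*2 + 5*1 + 4*0 = 25.
ratings = ["yes"] * 10 + ["unclear"] * 5 + ["no"] * 4
print(case_series_score(ratings))  # 25
```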

Quality score for qualitative studies

One point for each endorsed statement based on judgement (total score = 0–87).

  1. Findings: how credible are the findings?
    1. Findings/conclusions are supported by data/study evidence (i.e. the reader can see how the researcher arrived at his/her conclusions; the ‘building blocks’ of analysis and interpretation are evident).
    2. Findings/conclusions ‘make sense’/have a coherent logic.
    3. Findings/conclusions are resonant with other knowledge and experience (this might include peer or member review).
    4. Use of corroborating evidence to support or refine findings (i.e. other data sources have been used to examine phenomena; other research evidence has been evaluated: see also statement 14, Reporting: how clear are the links between data, interpretation and conclusions, i.e. how well can the route to any conclusions be seen?).
  2. Findings: how has knowledge/understanding been extended by the research?
    1. Literature review (where appropriate) summarising knowledge to date/key issues raised by previous research; aims and design of study set in the context of existing knowledge/understanding; identifies new areas for investigation (for example, in relation to policy/practice/substantive theory).
    2. Credible/clear discussion of how findings have contributed to knowledge and understanding (e.g. of the policy, programme or theory being reviewed); might be applied to new policy developments, practice or theory.
    3. Findings presented or conceptualised in a way that offers new insights/alternative ways of thinking.
    4. Discussion of limitations of evidence and what remains unknown/unclear or what further information/research is needed.
  3. Findings: how well does the evaluation address its original aims and purpose?
    1. Clear statement of study aims and objectives; reasons for any changes in objectives.
    2. Findings clearly linked to the purposes of the study – and to the initiative or policy being studied.
    3. Summary or conclusions directed towards aims of study.
    4. Discussion of limitations of study in meeting aims (e.g. are there limitations because of restricted access to study settings or participants, gaps in the sample coverage, missed or unresolved areas of questioning; incomplete analysis; time constraints?).
  4. Findings: scope for drawing wider inference – how well is this explained?
    1. Discussion of what can be generalised to wider population from which sample is drawn/case selection has been made.
    2. Detailed description of the contexts in which the study was conducted to allow applicability to other settings/contextual generalities to be assessed.
    3. Discussion of how hypotheses/propositions/findings may relate to wider theory; consideration of rival explanations.
    4. Evidence supplied to support claims for wider inference (either from study or from corroborating sources).
    5. Discussion of limitations on drawing wider inference (e.g. re-examination of sample and any missing constituencies: analysis of restrictions of study settings for drawing wider inference).
  5. Findings: how clear is the basis of evaluative appraisal?
    1. Discussion of how assessments of effectiveness/evaluative judgements have been reached (i.e. whose judgements are they and on what basis have they been reached?).
    2. Description of any formalised appraisal criteria used, when generated and how and by whom they have been applied.
    3. Discussion of the nature and source of any divergence in evaluative appraisals.
    4. Discussion of any unintended consequences of intervention, their impact and why they arose.
  6. Design: how defensible is the research design?
    1. Discussion of how overall research strategy was designed to meet aims of study.
    2. Discussion of rationale for study design.
    3. Convincing argument for different features of research design (e.g. reasons given for different components or stages of research; purpose of particular methods or data sources, multiple methods, time frames, etc.).
    4. Use of different features of design/data sources evident in findings presented.
    5. Discussion of limitations of research design and their implications for the study evidence.
  7. Sample: how well defended is the sample design/target selection of cases/documents?
    1. Description of study locations/areas and how and why chosen.
    2. Description of population of interest and how sample selection relates to it (e.g. typical, extreme case, diverse constituencies, etc.).
    3. Rationale for basis of selection of target sample/settings/documents (e.g. characteristics/features of target sample/settings/documents, basis for inclusions and exclusions, discussion of sample size/number of cases/setting selected, etc.).
    4. Discussion of how sample/selections allowed required comparisons to be made.
  8. Sample: sample composition/case inclusion – how well is the eventual coverage described?
    1. Detailed profile of achieved sample/case coverage.
    2. Maximising inclusion (e.g. language matching or translation; specialised recruitment; organised transport for group attendance).
    3. Discussion of any missing coverage in achieved samples/cases and implications for study evidence (e.g. through comparison of target and achieved samples, comparison with population, etc.).
    4. Documentation of reasons for non-participation among sample approached/non-inclusion of selected cases/documents.
    5. Discussion of access and methods of approach and how these might have affected participation/coverage.
  9. Data collection: how well was the data collection carried out?
    1. Discussion of who conducted data collection, the procedures/documents used for collection/recording, and checks on the origin/status/authorship of documents.
    2. Audio or video recording of interviews/discussions/conversations (if not recorded, were justifiable reasons given?).
    3. Description of conventions for taking field notes (e.g. to identify what form of observations were required/to distinguish description from researcher commentary/analysis).
    4. Discussion of how fieldwork methods or settings may have influenced data collected.
    5. Demonstration, through portrayal and use of data, that depth, detail and richness were achieved in collection.
  10. Analysis: how well has the approach to and formulation of the analysis been conveyed?
    1. Description of form of original data (e.g. use of verbatim transcripts, observation or interview notes, documents, etc.).
    2. Clear rationale for choice of data management method/tool/package.
    3. Evidence of how descriptive analytic categories, classes, labels, etc., have been generated and used (i.e. either through explicit discussion or portrayal in the commentary).
    4. Discussion, with examples, of how any constructed analytic concepts/typologies, etc. have been devised and applied.
  11. Analysis: contexts of data sources – how well are they retained and portrayed?
    1. Description of background or historical developments and social/organisational characteristics of study sites or settings.
    2. Participants’ perspectives/observations placed in personal context (e.g. use of case studies/vignettes/individual profiles, textual extracts annotated with details of contributors).
    3. Explanation of origins/history of written documents.
    4. Use of data management methods that preserve context (i.e. facilitate within case description and analysis).
  12. Analysis: how well has diversity of perspective and content been explored?
    1. Discussion of contribution of sample design/case selection in generating diversity.
    2. Description and illumination of diversity/multiple perspectives/alternative positions in the evidence displayed.
    3. Evidence of attention to negative cases, outliers or exceptions.
    4. Typologies/models of variation derived and discussed.
    5. Examination of origins/influences on opposing or differing positions.
    6. Identification of patterns of association/linkages with divergent positions/groups.
  13. Analysis: how well has detail, depth and complexity (i.e. richness) of the data been conveyed?
    1. Use and exploration of contributors’ terms, concepts and meanings.
    2. Unpacking and portrayal of nuance/subtlety/intricacy within data.
    3. Discussion of explicit and implicit explanations.
    4. Detection of underlying factors/influences.
    5. Identification and discussion of patterns of association/conceptual linkages within data.
    6. Presentation of illuminating textual extracts/observations.
  14. Reporting: how clear are the links between data, interpretation and conclusions, i.e. how well can the route to any conclusions be seen?
    1. Clear conceptual links between analytic commentary and presentations of original data (i.e. commentary and cited data relate; there is an analytic context to cited data, not simply repeated description).
    2. Discussion of how/why particular interpretation/significance is assigned to specific aspects of data – with illustrative extracts of original data.
    3. Discussion of how explanations/theories/conclusions were derived – and how they relate to interpretations and content of original data (i.e. how warranted); whether alternative explanations explored.
    4. Display of negative cases and how they lie outside main proposition/theory/hypothesis, etc.; or how proposition, etc. revised to include them.
  15. Reporting: how clear and coherent is the reporting?
    1. Demonstrates link to aims of study/research questions.
    2. Provides a narrative/story or clearly constructed thematic account.
    3. Has structure and signposting that usefully guide reader through the commentary.
    4. Provides accessible information for intended target audience(s).
    5. Key messages highlighted or summarised.
  16. Reflexivity and neutrality: how clear are the assumptions/theoretical perspectives/values that have shaped the form and output of the evaluation?
    1. Discussion/evidence of the main assumptions/hypotheses/theoretical ideas on which the evaluation was based and how these affected the form, coverage or output of the evaluation (the assumption here is that no research is undertaken without some underlying assumptions or theoretical ideas).
    2. Discussion/evidence of the ideological perspectives/values/philosophies of research team and their impact on the methodological or substantive content of the evaluation (again, may not be explicitly stated).
    3. Evidence of openness to new/alternative ways of viewing subject/theories/assumptions (e.g. discussion of learning/concepts/constructions that have emerged from the data; refinement and restatement of hypotheses/theories in light of emergent findings; evidence that alternative claims have been examined).
    4. Discussion of how error or bias may have arisen in design/data collection/analysis and how addressed, if at all.
    5. Reflections on the impact of the researcher on the research process.
  17. Ethics: what evidence is there of attention to ethical issues?
    1. Evidence of thoughtfulness/sensitivity about research contexts and participants.
    2. Documentation of how research was presented in study settings/to participants (including, where relevant, any possible consequences of taking part).
    3. Documentation of consent procedures and information provided to participants.
    4. Discussion of confidentiality of data and procedures for protecting.
    5. Discussion of how anonymity of participants/sources was protected.
    6. Discussion of any measures to offer information/advice/services, etc. at end of study (i.e. where participation exposed the need for these).
    7. Discussion of potential harm or difficulty through participation, and how avoided.
  18. Auditability: how adequately has the research process been documented?
    1. Discussion of strengths and weaknesses of data sources and methods.
    2. Documentation of changes made to design and reasons; implications for study coverage.
    3. Documentation and reasons for changes in sample coverage/data collection/analytic approach; implications.
    4. Reproduction of main study documents (e.g. letters of approach, topic guides, observation templates, data management frameworks, etc.).

Criteria for quantitative observational studies (0–34)

Published in Reisch JS, Tyson JE, Mize SG. Aid to the evaluation of therapeutic studies. Pediatrics 1989;84:815–27.

Copyright © Queen’s Printer and Controller of HMSO 2015. This work was produced by Bhui et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.

Included under terms of UK Non-commercial Government License.

Bookshelf ID: NBK285970
