Coid JW, Ullrich S, Kallis C, et al. Improving risk management for violence in mental health services: a multimethods approach. Southampton (UK): NIHR Journals Library; 2016 Nov. (Programme Grants for Applied Research, No. 4.16.)
Background
Previous chapters have outlined the creation and validation of Bayesian networks as the basis for decision support tools. When assessing the utility of such a tool, however, the focus should be on the needs of the professionals who will apply it in their practice. It would therefore be valuable to obtain feedback from professionals who have used such a tool, both on the accuracy and usefulness of the information it generates and to inform the development of a user interface that would make the tool of practical value in day-to-day practice. A clinical utility study was accordingly conducted with a sample of professionals working in forensic, clinical or criminal justice roles to explore responses to a prototype version of the DSVM-P.
Objectives
The objectives of this project were to:
- demonstrate the DSVM-P model to professionals with gatekeeping responsibilities for forensic patients or offenders
- obtain feedback from professionals about the usefulness of the DSVM-P and the need for further development to enable the model to have practical utility, specifically:
  - whether or not the software was easy to use and, if not, how it could be improved
  - whether or not clinicians believed that the questions asked by the software were the most important ones for the future management of violence
  - whether or not the software made a decision about violence risk that corresponded to that made by professionals and, if so, to what degree
  - whether or not the recommendations in terms of intervention targets made by the software corresponded to those that the professionals would make given similar information and, if not, whether they were helpful or unhelpful recommendations
  - what the most useful aspects of the software were, including:
    - whether or not it saves time in making decisions about future management
    - whether or not it provides clinicians with additional information that is useful in making those decisions
    - whether it is more or less useful in guiding practice than SPJ models such as the HCR-20.287
Methods
Design
This was an evaluation study intended to capture and collate feedback from professionals about a prototype of the DSVM-P violence risk decision support tool. It used both quantitative and qualitative approaches to collect structured feedback on what were deemed to be the most pertinent features of the measure, but also less structured feedback on what areas needed development.
Sample and recruitment
The sample for this study consisted of 20 professional staff – either clinicians or criminal justice staff – with gatekeeping roles (forensic psychiatrists, forensic psychologists or probation staff) for patients or offenders. Participants were recruited purposively at two conferences for medical or psychological professionals with a criminal justice focus. The participants attended a stall where the DSVM-P decision support tool was demonstrated and the purpose and remit of the study were explained to them. They were then offered the opportunity to apply the tool to a clinical case (either one of their own or one of a sample of three artificial cases) and asked to provide feedback about the software’s assessment of risk and recommended interventions, together with feedback about a sensitivity analysis of the likely impact of these interventions (see Chapter 21), and to complete a questionnaire about their experience. The interview typically lasted 20–30 minutes.
All data were collected anonymously; however, participants were asked to specify their disciplinary affiliation and were invited to leave their e-mail address with the researchers if they were interested in receiving updates on the progress of the development of the measure.
Materials
The materials used in this study were:
- The DSVM-P decision support tool, deployed onto a laptop computer running the AgenaRisk development software and operated by a researcher (AC or MF). The software was set up in a ‘risk table’ setting showing the risk factors relevant to the individual used to calculate the risk of reoffending, the number of reconvictions and a sensitivity analysis of appropriate interventions, with respondents able to select values for risk variables from drop-down menus. A screenshot of the network is shown in Figure 22.
- A semistructured questionnaire designed specifically for the study to collect structured feedback on the most useful aspects of the tool in line with the study objectives; identify any shortcomings or lacking features; and elicit feedback on how the tool might be developed further in the future to meet the needs of professionals (see Appendix 12). Structured questionnaire items were coded on a Likert scale from 1 ‘not at all’ to 5 ‘a great deal’.
- Three sample cases from which professionals could choose to assess the performance of the DSVM-P against their own judgement of risk.
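To make the tool’s mechanics concrete, the sketch below illustrates how a Bayesian network of this general kind updates the probability of reoffending when a clinician enters evidence, and how a sensitivity analysis compares risk with and without a modifiable risk factor. The network structure, variable names and probabilities here are invented for illustration only; they are not the DSVM-P’s, and the DSVM-P itself runs in the AgenaRisk software.

```python
from itertools import product

# Illustrative (NOT DSVM-P) conditional probability tables.
# Parents: prior violence (V), substance misuse (S); child: reoffending (R).
p_v = {True: 0.4, False: 0.6}    # P(V) — hypothetical prior
p_s = {True: 0.3, False: 0.7}    # P(S) — hypothetical prior
p_r_given = {                    # P(R = True | V, S) — hypothetical CPT
    (True, True): 0.55,
    (True, False): 0.35,
    (False, True): 0.25,
    (False, False): 0.10,
}

def p_reoffend(evidence=None):
    """Marginal P(R = True) by enumeration, given optional evidence on V/S."""
    evidence = evidence or {}
    num = den = 0.0
    for v, s in product([True, False], repeat=2):
        if "V" in evidence and evidence["V"] != v:
            continue
        if "S" in evidence and evidence["S"] != s:
            continue
        w = p_v[v] * p_s[s]          # weight of this parent configuration
        num += w * p_r_given[(v, s)]
        den += w
    return num / den

baseline = p_reoffend()                        # no evidence entered
with_misuse = p_reoffend({"S": True})          # 'substance misuse: present' selected
after_intervention = p_reoffend({"S": False})  # sensitivity analysis: misuse resolved
```

The gap between `with_misuse` and `after_intervention` is the kind of pre/post-intervention comparison the tool’s sensitivity analysis presents; in the real network there are many more variables and the arithmetic is handled by the software.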
Data analysis
As this was a simple evaluation design to gather feedback on an initial prototype, data analysis was limited to descriptive statistics summarising responses to questionnaire items and a simple thematic synthesis of unstructured feedback items.
Given the small sample size, and the freedom given to respondents to rate their own cases instead of the sample cases, we elected not to use inferential statistics to compare clinician ratings of risk with those generated by the network for the same cases.
Results
Nature of respondents
In total, 15 of 20 individuals (75%) who took part in the demonstration of the software agreed to complete the questionnaire; a further two provided qualitative comments only. The main reason for declining to complete the questionnaire was lack of time (four respondents); one respondent said that they preferred not to complete the questionnaire but would give unstructured verbal feedback, which was recorded on a blank questionnaire form.
Of the respondents, 10 (66%) reported being a psychiatrist, five of whom (33%) were specialised forensic psychiatrists. Five other respondents (33%) identified themselves as psychologists, of whom three were specialised forensic psychologists and the remaining two were clinical psychologists. Of the remaining two respondents, one was an occupational therapist and one was a probation officer.
Quantitative responses
Responses to the quantitative aspects of the questionnaire were in general positive, with all items being rated on average ≥ 3; these responses are summarised in Figure 23.
The most highly rated items were item 3 (‘I thought the questions asked by the software were relevant to risk management’; mean 4.50, SD 0.52), item 6 (‘The software recommended similar targets for management to those that I would have’; mean 4.27, SD 0.90) and item 7 (‘The intervention targets identified were helpful’; mean 4.25, SD 1.14). The lowest rated items were item 8 (‘I thought that using the software might save me time in my professional practice’; mean 3.39, SD 1.36) and item 10 (‘I thought that the software was more useful than SPJ tools like the HCR-20’; mean 3.46, SD 1.57). Responses to the less positively rated items were also more variable, as evidenced by their higher SDs.
All respondents endorsed item 3 (relevance of items) and item 4 (‘I thought the software asked all the right questions to enable violence risk management’) with a score of ≥ 3 (‘a fair amount’) and all except one respondent (93% of the sample) rated item 9 (useful information provided) as ≥ 3. In total, 80% of respondents rated the accuracy of the judgement about the risk of the case entered as representing their own judgement ‘a fair amount’ or better.
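The summary statistics reported above (per-item mean, SD, and the proportion of respondents scoring ≥ 3) can be reproduced with a few lines of standard code. The ratings below are hypothetical stand-ins, not the study’s raw data; only the summary logic is intended to mirror the analysis.

```python
from statistics import mean, stdev

# Hypothetical 5-point Likert ratings (1 'not at all' .. 5 'a great deal').
# These values are illustrative only, not the study's actual responses.
ratings = {
    "item3_relevance": [5, 4, 5, 4, 5, 4, 5, 5, 4, 5],
    "item8_saves_time": [2, 5, 3, 1, 4, 5, 3, 2, 4, 5],
}

def summarise(scores):
    """Mean, sample SD, and proportion endorsing >= 3 ('a fair amount' or better)."""
    return {
        "mean": round(mean(scores), 2),
        "sd": round(stdev(scores), 2),
        "prop_at_least_3": sum(s >= 3 for s in scores) / len(scores),
    }

summary = {item: summarise(scores) for item, scores in ratings.items()}
```

Note how a lower mean tends to co-occur with a larger SD, the pattern observed for the less positively rated items above.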
Qualitative responses
The responses to the unstructured questions provided substantial information about professional perceptions of the potential merits of and problems with the network.
Positive comments included:
Far more useful than existing risk assessment tools.
An easy and simple method to get likelihood of reconviction.
Minimises the need for lengthy assessments of personality (PCL-R etc.).
[It is] fairly quick to answer the risk questions.
[It is helpful] being able to see reconviction change over different time periods.
The ability to make predictions post intervention; we cannot normally estimate the drop in risk after individuals complete interventions.
Positive comments broadly endorsed the short completion time of the DSVM-P and the detail and accuracy of the information it provided. Some of the comments were not entirely accurate, however; for example, the DSVM-P does specifically require a PCL-R and clinical assessments of personality to be completed and relies on these for improved accuracy.
Feedback which suggested that further work might be required was:
What about measurement of institutional violence?
Consider risk scenarios; should consider a sexual offence version.
Can you get the interface right?
Numbers confuse me; explanation of the risk % figure would be helpful.
[What about] circumstances of crime, or trigger?
A limited number of interventions modelled.
I was sceptical about the reliability of the model to make specific predictions of violence for an individual, e.g. 67% likelihood of reconviction is higher than most other predictive tools.
Lack of evidence for medical treatment; no medication.
One individual had also written the following in the margin of the questionnaire next to item 10 asking whether or not the software was more useful than SPJ tools like the HCR-20: ‘Only if completed instead of, not in addition to’.
Some of these comments reflected the developers’ own concerns, specifically with respect to the unfriendliness of the interface and the possible need for additional interpretation of raw probabilities of reoffending (e.g. categorical ratings such as high, medium or low risk). The suggestion that the number of interventions modelled should be expanded, possibly to include medication, was largely valid, although the modelling of ‘psychiatric treatment’ (described in Chapter 20) does relate primarily to psychopharmacological treatment rather than psychological interventions.
Other comments related to areas of need within forensic settings that were intentionally not addressed by the DSVM-P, for example suggestions of alternative networks covering institutional violence and/or sexual reoffending. However, they do appear to illustrate an enthusiasm for the approach adopted. One comment relating to the inclusion of specific violence triggers and contexts was well taken, but modelling work on this has not yet proved possible with existing data.464
Discussion
This project obtained feedback from a small sample of ‘likely users’ of a Bayesian network for the risk management of violent offenders, specifically a computer-based prototype version of such a tool. Responses to structured questions eliciting feedback about the prototype were for the most part very positive, with mean ratings above the median scale value for all items on the questionnaire. However, some respondents expressed reservations about whether or not the software would in fact save them time in their practice, specifically – as illustrated by the responses to unstructured questions – if they were asked to complete it in addition to existing risk assessment measures rather than as an alternative. This perhaps reflects a more general concern in forensic services about the array of available risk assessment instruments for different populations and outcomes (violence, general offending, sexual offending, institutional violence, etc.) in a climate of increasing pressure on the time and resources of skilled and appropriately trained clinicians. In particular, participants did not overwhelmingly endorse the prototype as a replacement for existing standard risk assessment instruments such as the HCR-20.
However, respondents did identify specific strengths and limitations of the DSVM-P prototype. The strengths included the speed of completion; the high accuracy of risk probability (if only prima facie); the detailed recommendations about the efficacy of interventions; and the ability to model risk over different specific time periods. The limitations of the prototype identified included the limited user interface; results that are hard to interpret (e.g. risk probabilities); and the limited modelling of a large potential range of interventions. Some of these restrictions are the result of data limitations (e.g. only four interventions are accurately captured in the PCS data set used to develop the DSVM-P) and others could be modelled more clearly with more development of the network.
Future revisions of the prototype are likely to take the majority of this feedback on board, specifically with respect to the generation of a user interface; wider modelling of potential interventions based on ‘risk targets’, that is, identified drivers of violence risk that should be addressed irrespective of the availability of data relating to suitable intervention(s); and more clinically interpretable feedback from the software, such as categorical risk judgements.
Limitations
This was a simple evaluation study designed to gather feedback and reflections on the possible utility of the DSVM-P from a sample of potential users of the decision support tool. It cannot take the place of a full, naturalistic evaluation of the tool or even an evaluation of inter-rater reliability, exploring the accuracy of the ratings generated by the software relative to those of clinician ‘experts’. However, it has provided important end-user input that will make the future conduct of such studies more productive.
More specifically, the pilot study did not include representatives from the discipline of nursing. Community practice nurses would be the most likely health workers supporting patients on discharge from mental health services and would therefore be among the key potential users of any Bayesian network used with this population.
As stated, this study was an initial evaluation of clinicians’ views on the acceptability of the prototype tool. Further investigative work is needed to ensure that the decision-making processes modelled by the Bayesian network are complementary to those of clinicians and professionals in practical settings, possibly involving further iteration of the model.