Incomplete Obstacle Resolution in Patient Monitoring System

Patient monitoring is one of the oldest case studies in software engineering and it has been used extensively to illustrate a variety of modelling and analysis techniques. Yet, incidents with patient monitoring systems still happen and we can still learn by looking at them.

This story was reported on the RISKS Forum this week.

Patient Died at New York VA Hospital After Alarm Was Ignored

Charles Ornstein and Tracy Weber, ProPublica, 15 May 2012

Registered nurses at a Manhattan Veterans Affairs hospital failed to notice a patient had become disconnected from a cardiac monitor until after his heart had stopped and he could not be revived, according to a report Monday from the VA inspector general.

The incident from last June was the second such death at the hospital involving a patient connected to a monitor in a six-month period. The first, along with two earlier deaths at a Denver VA hospital, raised questions about nursing competency in the VA system, ProPublica reported last month.

The deaths also prompted a broader review of skills and training of VA nurses. Only half of 29 VA facilities surveyed by the inspector general in a recent report had adequately documented that their nurses had skills to perform their duties. Even though some nurses “did not demonstrate competency in one or more required skills,” the government report stated, there was no evidence of retraining. …

http://www.propublica.org/article/patient-died-at-new-york-va-hospital-after-alarm-was-ignored

So it seems to be the nurses’ fault, and the solution to prevent this from happening again is better nurse training. But is this the only way to look at the problem? Blaming the operator is a common reaction when this sort of incident occurs, but very often the operator’s error is caused, or made more likely, by poor decisions in the system design.

Out of curiosity, I had a quick look at the more detailed report on these incidents; this is how it describes the different kinds of alarm in this system:

The telemetry monitoring equipment at the system triggers three types of audible alarms:

- Red Alarm is an audible critical alarm that is loud and continuous. It indicates the need to immediately check on a patient’s status and vital signs.

- Yellow Alarm is a quieter and intermittent audible alarm that stops after several minutes. It indicates a temporary irregularity in the heart rate or rhythm that is not immediately critical.

- Blue Alarm is similar to the yellow alarm and indicates a problem with the system itself or an improperly connected, or disconnected, telemetry lead.

I’m sure you can all see what the problem might have been: having similar-sounding alarms for potentially critical and non-critical events is probably not a good idea.
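To make the design flaw concrete, here is a minimal sketch in Python. All the names are hypothetical and purely illustrative (the monitor’s firmware is obviously not written this way); it simply encodes the three alarm types as the report describes them, treats a disconnection as critical (as argued above), and checks whether any critical alarm is indistinguishable from a non-critical one:

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    CRITICAL = "critical"          # requires an immediate check on the patient
    NON_CRITICAL = "non-critical"  # temporary irregularity, not immediately critical


class SoundProfile(Enum):
    LOUD_CONTINUOUS = "loud, continuous"
    QUIET_INTERMITTENT = "quiet, intermittent, stops after a few minutes"


@dataclass
class Alarm:
    name: str
    severity: Severity
    sound: SoundProfile


# Alarm types as described in the inspector general's report.
RED = Alarm("red", Severity.CRITICAL, SoundProfile.LOUD_CONTINUOUS)
YELLOW = Alarm("yellow", Severity.NON_CRITICAL, SoundProfile.QUIET_INTERMITTENT)
# A disconnected lead means vital signs are no longer monitored at all,
# so we classify it as critical -- yet it sounds like the yellow alarm.
BLUE = Alarm("blue", Severity.CRITICAL, SoundProfile.QUIET_INTERMITTENT)


def ambiguous_pairs(alarms):
    """Return (critical, non-critical) alarm pairs that sound alike."""
    return [
        (a.name, b.name)
        for a in alarms if a.severity is Severity.CRITICAL
        for b in alarms if b.severity is Severity.NON_CRITICAL
        if a.sound is b.sound
    ]


print(ambiguous_pairs([RED, YELLOW, BLUE]))  # -> [('blue', 'yellow')]
```

The non-empty result is exactly the ambiguity at stake: a nurse hearing a quiet, intermittent alarm cannot tell a monitoring failure from a benign, transient irregularity.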

This incident provides a good example to illustrate the technique of goal-oriented obstacle analysis.

An obstacle is something that could go wrong and prevent a system from satisfying its goals. Here, the system designers had identified that the obstacle “Patient is accidentally disconnected from the monitor” would obstruct the goal “Maintain Accurate Measures of Patient Vital Signs”, which itself contributes, along with many other goals, to the goal “Keep Patient Alive”. The resolution of that obstacle was to include an alarm to warn the nurses when this happens (technically, in our framework we would call this an obstacle mitigation).

Unfortunately, they may not have paid enough attention to the goal that this obstacle resolution was meant to achieve. The goal is not just to send an alarm when the patient gets disconnected (which, in our jargon, we would call a weak mitigation); it is to get the nurses to react, reconnect the patient, and keep him alive (what we would call a strong mitigation). To achieve this latter goal, the system relies on the assumption that nurses will react to the disconnection alarm. This assumption itself has an obstacle, “Nurse Does Not React to Disconnection Alarm”, which is all the more likely to occur if the critical disconnection alarm sounds similar to another, non-critical alarm. It is this obstacle that probably received insufficient attention, or was not considered at all, during system design and that led to the incidents. The resolution being proposed now is “Give the nurses better training” (an instance of obstacle reduction in our framework). But an alternative resolution, one that could have been chosen at design time, would of course have been to make the “blue alarm” signalling an instrument malfunction or a disconnection sound similar to the critical “red alarm” rather than to the non-critical yellow one.
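For readers who like things spelled out, here is a rough sketch, again in Python and again with purely hypothetical names, of the chain of goals, obstacles and resolutions discussed above. Obstacle analysis is normally carried out on goal models rather than in code; this is just one way of making the structure of the argument explicit:

```python
from dataclasses import dataclass, field


@dataclass
class Obstacle:
    description: str
    resolutions: list = field(default_factory=list)  # recorded ways to resolve it


@dataclass
class Goal:
    description: str
    obstacles: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)  # domain assumptions the goal relies on


# The goal the designers did consider, and the obstacle they did resolve.
vital_signs = Goal("Maintain Accurate Measures of Patient Vital Signs")
disconnected = Obstacle("Patient is accidentally disconnected from the monitor")
vital_signs.obstacles.append(disconnected)
disconnected.resolutions.append("Raise a disconnection ('blue') alarm")  # weak mitigation

# The strong mitigation relies on a domain assumption...
reacts = Goal("Nurses react to the disconnection alarm and reconnect the patient")
vital_signs.assumptions.append(reacts)

# ...which has its own obstacle, the one that was apparently overlooked.
no_reaction = Obstacle("Nurse does not react to the disconnection alarm")
reacts.obstacles.append(no_reaction)

# Candidate resolutions for the overlooked obstacle.
no_reaction.resolutions.append("Give the nurses better training (obstacle reduction)")
no_reaction.resolutions.append(
    "Make the disconnection alarm sound like the critical red alarm (design-time resolution)"
)


# A completeness check: obstacles with no recorded resolution.
def unresolved(goal):
    return [o.description
            for g in [goal, *goal.assumptions]
            for o in g.obstacles
            if not o.resolutions]


print(unresolved(vital_signs))
# -> [] now, but before the two candidate resolutions above were recorded it
#    would have flagged "Nurse does not react to the disconnection alarm".
```

The point of the sketch is simply that the overlooked obstacle only becomes visible once the assumption “nurses react to the alarm” is made explicit and examined for obstacles of its own.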

The technique of goal-oriented obstacle analysis provides a modelling and analysis framework for carrying out this kind of reasoning in a systematic and rigorous way. You can check the paper for details.

2 thoughts on “Incomplete Obstacle Resolution in Patient Monitoring System”

  1. What is the real problem with the deaths at the VA hospital where patients died from heart failure? This article mentions that the nurses were not trained properly. So is it their fault for not having enough staff to monitor patients? Or was it just the patient’s time to die?
    Edwin

  2. The patient records were incomplete, so the investigation could neither confirm nor refute that anything went wrong with the devices or that the nurses failed to respond to an alarm. But it also noted that the nurses were confused about what the device would do if a patient became disconnected (some thought it would generate a critical “red alarm”). So it seems the obstacle “Nurse does not respond to disconnection alarm” is quite likely to have happened from time to time, even if we don’t know whether it did in these cases.