The Value of Requirements Uncertainty

On 4 October, I participated in an event in honour of Axel van Lamsweerde, my former PhD supervisor, who became emeritus this fall. The event had an impressive list of speakers and generated many interesting discussions in and around the warm auditorium SUD11, a lecture room where I spent many hours as a student years ago.

I talked about the value of requirements uncertainty. I made the point that uncertainty is inevitable in requirements and architecture decisions and we should therefore embrace uncertainty instead of ignoring it. I argued that our past focus on precise specification is misguided when requirements are uncertain and called for a goal-based scientific approach to software decisions under uncertainty. Important and fascinating research lies ahead of us.

The slides are on my new slideshare account and here:

A Call to Action

Incomplete Obstacle Resolution in Patient Monitoring System

Patient monitoring is one of the oldest case studies in software engineering and it has been used extensively to illustrate a variety of modelling and analysis techniques. Yet, incidents with patient monitoring systems still happen and we can still learn by looking at them.

This story was reported on the RISKS forum this week.

Patient Died at New York VA Hospital After Alarm Was Ignored

Charles Ornstein and Tracy Weber, ProPublica, 15 May 2012

Registered nurses at a Manhattan Veterans Affairs hospital failed to notice a patient had become disconnected from a cardiac monitor until after his heart had stopped and he could not be revived, according to a report Monday from the VA inspector general.

The incident from last June was the second such death at the hospital involving a patient connected to a monitor in a six-month period. The first, along with two earlier deaths at a Denver VA hospital, raised questions about nursing competency in the VA system, ProPublica reported last month.

The deaths also prompted a broader review of skills and training of VA nurses. Only half of 29 VA facilities surveyed by the inspector general in a recent report had adequately documented that their nurses had skills to perform their duties. Even though some nurses “did not demonstrate competency in one or more required skills,” the government report stated, there was no evidence of retraining. …

http://www.propublica.org/article/patient-died-at-new-york-va-hospital-after-alarm-was-ignored

So it seems to be the nurses’ fault, and the solution to prevent this from happening again is better nurse training. But is this the only way to look at the problem? Blaming the operator is a common reaction when this sort of incident occurs, but very often the operator’s error is caused, or made more likely, by poor decisions in the system design.

Being curious, I’ve had a quick look at the more detailed report on these incidents, and this is how it describes the different kinds of alarm in this system:

The telemetry monitoring equipment at the system triggers three types of audible alarms:

- Red Alarm is an audible critical alarm that is loud and continuous. It indicates the need to immediately check on a patient’s status and vital signs.

- Yellow Alarm is a quieter and intermittent audible alarm that stops after several minutes. It indicates a temporary irregularity in the heart rate or rhythm that is not immediately critical.

- Blue Alarm is similar to the yellow alarm and indicates a problem with the system itself or an improperly connected, or disconnected, telemetry lead.

I’m sure you all see what the problem might have been: having similar-sounding alarms for potentially critical and for non-critical events is probably not a good idea.

This incident provides a good example to illustrate the technique of goal-oriented obstacle analysis.

An obstacle is something that could go wrong and prevent a system from satisfying its goals. Here, the system designers had identified that the obstacle “Patient is accidentally disconnected from the monitor” would obstruct the goal “Maintain Accurate Measures of Patient Vital Signs”, which itself contributes, along with many other goals, to the goal “Keep Patient Alive”. The resolution of that obstacle was to include an alarm to warn the nurses when this happens (technically, in our framework, we would call this an obstacle mitigation).

Unfortunately, they may not have paid enough attention to the goal that this obstacle resolution was meant to achieve. The goal is not just to send an alarm when the patient gets disconnected (which, technically, in our jargon, we would call a weak mitigation); it is to get the nurses to react, reconnect the patient and keep him alive (what we would call a strong mitigation). To achieve this latter goal, the system relies on the assumption that nurses will react to the disconnection alarm. This assumption has its own obstacle, “Nurse Does Not React to Disconnection Alarm”, which is all the more likely to happen if the critical disconnection alarm sounds like another, non-critical alarm. It is this obstacle that probably did not receive sufficient attention, or was not considered at all, during the system design and that led to the incidents. The resolution being proposed now is “Give the nurses better training” (an instance of obstacle reduction in our framework). But an alternative resolution, one that could have been chosen at design time, would of course have been to make the “blue alarm” signalling an instrument malfunction sound like the critical “red alarm” rather than like the non-critical yellow one.
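To make the structure of that reasoning a bit more concrete, here is a minimal sketch, in plain Python rather than in any actual goal-modelling tool, of how the goals, obstacles, resolutions and the assumptions they introduce could be recorded and mechanically checked. All the class and function names are invented for illustration; only the goal and obstacle labels come from the example above.

```python
# Minimal sketch (not an actual KAOS tool) of the goal/obstacle structure
# in this example, with a check for assumptions that were never examined.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Resolution:
    kind: str                       # e.g. "weak mitigation", "strong mitigation", "reduction"
    description: str
    assumptions: List[str] = field(default_factory=list)

@dataclass
class Obstacle:
    name: str
    obstructs: str                  # name of the goal it obstructs
    resolution: Optional[Resolution] = None

@dataclass
class Goal:
    name: str
    refines: Optional[str] = None   # parent goal, if any
    obstacles: List[Obstacle] = field(default_factory=list)

# The goal model of the incident, as described above.
keep_alive = Goal("Keep Patient Alive")
accurate_vitals = Goal("Maintain Accurate Measures of Patient Vital Signs",
                       refines="Keep Patient Alive")
accurate_vitals.obstacles.append(Obstacle(
    name="Patient is accidentally disconnected from the monitor",
    obstructs=accurate_vitals.name,
    resolution=Resolution(
        kind="weak mitigation",
        description="Raise a blue alarm when a telemetry lead is disconnected",
        assumptions=["Nurses react to the disconnection alarm"])))

def unexamined_assumptions(goals, examined):
    """Assumptions introduced by obstacle resolutions that have not themselves
    been examined for obstacles."""
    return [a for g in goals for o in g.obstacles if o.resolution
            for a in o.resolution.assumptions if a not in examined]

# The unexamined assumption is exactly the one that failed in the incident.
print(unexamined_assumptions([keep_alive, accurate_vitals], examined=set()))
# -> ['Nurses react to the disconnection alarm']
```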

The technique of goal-oriented obstacle analysis provides a modelling and analysis framework for carrying out this kind of reasoning in a systematic and rigorous way. You can check the paper for details.

Requirements are never perfect. When are they good enough?

We know it’s impossible to write perfect requirements. Requirements for complex systems will always be incomplete, contain defects such as ambiguities and inconsistencies, and fail to anticipate all changes that will happen during and after development. This creates requirements risks – the risks of developing wrong requirements. Good requirements engineering can limit these risks but they can never be entirely eliminated. Requirements must therefore be good enough so that the remaining risks are acceptable to the project.

But how do we decide when the requirements are good enough and their risks acceptable?

In our paper on “Early Failure Predictions in Feature Request Management Systems”, we’ve experimented with one approach to address this question. The context for this experiment is large-scale open source projects, such as Firefox, Thunderbird and four others, where features are regularly added, modified, and removed in response to stakeholders’ change requests that are submitted and discussed online. Our idea was to apply machine learning to past change requests to build predictive models that give early warnings of the risks associated with new change requests. Camilo Fitzgerald, the lead author of the paper, has developed an automated technique that gives early warnings about the following risks:

  • Product failure: the risk that there will be bug reports or change requests associated with the implementation of a change request;
  • Abandoned development: the risk that the implementation of a change request will be abandoned before completion;
  • Late development: the risk that the implementation of a change request will not be completed in time;
  • Removed feature: the risk that the implementation of a change request will have to be removed from the product (typically because it conflicts with other features or requirements);
  • Rejection reversal: the risk that a decision to reject a change request is premature and will have to be reverted.

The risk warnings are generated by analysing a few simple characteristics of the discussion associated with a change request, such as the number of people who participated in the discussion, its length, and the words being used.
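Purely as an illustration of the kind of pipeline involved (the paper’s actual features, models and evaluation are far richer), here is a minimal sketch of how such discussion characteristics could be extracted and fed to an off-the-shelf classifier, one per risk type. The field names, the feature choices and the use of scikit-learn are my assumptions, not something taken from the paper.

```python
# Sketch: extract simple discussion features from past change requests and
# train one binary classifier per risk (e.g. "product_failure").
from sklearn.ensemble import RandomForestClassifier

def discussion_features(change_request):
    """Simple characteristics of the online discussion: number of
    participants, length of the discussion, and crude lexical counts."""
    comments = change_request["comments"]        # assumed: list of {"author", "text"}
    text = " ".join(c["text"] for c in comments).lower()
    return [
        len({c["author"] for c in comments}),    # number of participants
        len(comments),                           # number of comments
        len(text.split()),                       # discussion length in words
        text.count("?"),                         # open questions in the discussion
        sum(text.count(w) for w in ("unclear", "maybe", "not sure")),  # hedging words
    ]

def train_risk_model(past_requests, risk_label):
    """Fit a classifier predicting one risk from change requests whose
    outcome is already known (True/False under cr["outcomes"][risk_label])."""
    X = [discussion_features(cr) for cr in past_requests]
    y = [cr["outcomes"][risk_label] for cr in past_requests]
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def risk_probability(model, new_request):
    """Early warning score for a new change request."""
    return model.predict_proba([discussion_features(new_request)])[0][1]
```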

The predictive model is coupled with a simple model of the likely cost and benefit of performing additional requirements analysis on a change request before its implementation. This model takes into account the cost of additional requirements analysis relative to the cost of the failure associated with the risk, and the probability that the additional analysis actually prevents the failure (currently, these factors are estimated from general empirical data and best judgement rather than calibrated with project-specific data). The cost-benefit model is used to decide the threshold at which to generate a risk warning and recommend that further requirements analysis take place on a change request before deciding whether or not to implement it.
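The threshold logic can be pictured with a back-of-the-envelope calculation. The function, the variable names and the numbers below are invented for illustration and are not the values used in the paper:

```python
# Sketch of the cost-benefit threshold: recommend extra requirements analysis
# on a change request only when its expected benefit outweighs its cost.
# All figures are invented for illustration.

def should_analyse_further(p_failure,
                           cost_of_failure=10.0,     # cost of the failure, in relative units
                           cost_of_analysis=1.0,     # cost of extra requirements analysis
                           p_analysis_prevents=0.5): # chance the extra analysis prevents the failure
    """Expected saving from extra analysis vs. its cost."""
    expected_saving = p_failure * p_analysis_prevents * cost_of_failure
    return expected_saving > cost_of_analysis

# Implied warning threshold on the predicted failure probability:
# p_failure > cost_of_analysis / (p_analysis_prevents * cost_of_failure) = 0.2
print(should_analyse_further(p_failure=0.15))  # False: below the threshold, no warning
print(should_analyse_further(p_failure=0.35))  # True: warn and recommend more analysis
```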

The results of this experiment are promising; they show that the approach is effective at discovering which change requests have the highest risk of failure later in development. This information can help project managers and requirements engineers decide whether a change request needs more analysis before being assigned for implementation, and monitor the development of the riskiest change requests more carefully.

But this was really a first step. The accuracy of the risk warnings might be improved in several ways, for example by applying more sophisticated natural language techniques to automatically detect ambiguities in change requests, or by taking into account characteristics of the software components affected by a change request. These improvements might also yield better explanations of why a change request carries a specific risk. One of the biggest limitations of our current technique is that it identifies correlations between change request characteristics and failures, but it does not explain why some characteristics lead to a certain type of failure. Much better models will be needed for this approach to be really useful.
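By way of contrast with those more sophisticated techniques, the crudest possible lexical version of an ambiguity check might look like the sketch below; the word list is entirely ad hoc and only meant to illustrate the idea:

```python
# Naive sketch of lexical ambiguity flagging in a change request description.
# The word list is ad hoc; real ambiguity detection would need proper NLP.
VAGUE_TERMS = {"should", "could", "may", "might", "appropriate", "adequate",
               "user-friendly", "fast", "flexible", "etc", "as needed",
               "if possible", "and/or", "some", "several"}

def flag_ambiguities(description: str):
    """Return the vague terms found in a change request description."""
    text = description.lower()
    return sorted(term for term in VAGUE_TERMS if term in text)

print(flag_ambiguities(
    "The export should be fast and support several formats as needed."))
# -> ['as needed', 'fast', 'several', 'should']
```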

It would also be interesting to try this approach in contexts other than online change request management systems.

It could, for example, be applied to traditional requirements documents. As long as we can relate individual requirements items to their outcomes (e.g. how long they take to implement, whether they have associated bug reports and change requests, etc.), we could apply the same machine learning techniques to construct similar predictive models.

It would also be interesting to apply it in the context of agile development, where the approach could be viewed as a form of extensive automated retrospective analysis. It would consist of automatically analysing all data and meta-data associated with epics, user stories, acceptance tests, and their outcomes, to try to learn the characteristics of good and bad user stories with respect to requirements risks.

The long-term objective of this work is to make requirements risks much more visible and central to software development than they are today. Instead of viewing requirements primarily as documentation, we should give more importance to their role as risk mitigation tools, and develop techniques that treat them as such.

Note: A preprint of the paper is available below and on my publication page.