Goal Modelling for DNA Programming

Sometimes, your research can have influence in unexpected places. A couple of months ago, Google Scholar alerted me to a paper by Robyn Lutz and colleagues at NASA JPL and Iowa State University that applies goal-oriented requirements modelling to the design and analysis of programmable DNA nanotechnology. This is part of the fascinating field of synthetic biology: the idea that we could engineer new biological systems performing useful functions that can't be performed by natural biological systems, much in the same way that materials engineers have developed new materials, such as alloys or reinforced concrete, that exhibit properties not found in natural metals and stones. These biological systems are composed of huge numbers of nanoscale living organisms that, when put together, have global emergent properties and behaviours. The individual organisms are designed by programming their DNA, long sequences of A-T-C-G letters, which are the equivalent of machine-language instructions for a computer program. In fact, this period in the development of synthetic biology is often compared to the early days of computer science, with plenty of stories and scares about people hacking systems in their garage. That's about how much I understand about it at the moment.

Interestingly, software engineering concepts appear to play a significant role in the development of this field. This comic published in Nature, for example, refers to concepts such as abstraction, information hiding, and interfaces that are at the heart of software architecture. The research still appears to be laying out the basic technological foundations and, apart from important general concerns about safety and security, there doesn't seem to be much explicit consideration of requirements and of the relations between requirements and architectures for such systems. As the technology progresses, this will certainly change. The paper by Robyn Lutz and her colleagues is probably the first to investigate what requirements engineering methods for such systems might look like. It will be presented in the New Ideas and Emerging Results track at ICSE this week, and it is one of the many talks I'm looking forward to.

Incomplete Obstacle Resolution in Patient Monitoring System

Patient monitoring is one of the oldest case studies in software engineering, and it has been used extensively to illustrate a variety of modelling and analysis techniques. Yet incidents with patient monitoring systems still happen, and we can still learn by looking at them.

This story was reported on the RISKS forum this week.

Patient Died at New York VA Hospital After Alarm Was Ignored

Charles Ornstein and Tracy Weber, ProPublica, 15 May 2012

Registered nurses at a Manhattan Veterans Affairs hospital failed to notice a patient had become disconnected from a cardiac monitor until after his heart had stopped and he could not be revived, according to a report Monday from the VA inspector general.

The incident from last June was the second such death at the hospital involving a patient connected to a monitor in a six-month period. The first, along with two earlier deaths at a Denver VA hospital, raised questions about nursing competency in the VA system, ProPublica reported last month.

The deaths also prompted a broader review of skills and training of VA nurses. Only half of 29 VA facilities surveyed by the inspector general in a recent report had adequately documented that their nurses had skills to perform their duties. Even though some nurses “did not demonstrate competency in one or more required skills,” the government report stated, there was no evidence of retraining. …

http://www.propublica.org/article/patient-died-at-new-york-va-hospital-after-alarm-was-ignored

So it seems to be the nurses' fault, and the solution to prevent this from happening again is better nurse training. But is this the only way to look at the problem? Blaming the operator is a common reaction when this sort of incident occurs, but very often the operator's error is caused, or made more likely, by poor decisions in the system design.

Being curious, I've had a quick look at the more detailed report on these incidents, and this is how it describes the different kinds of alarm in this system:

The telemetry monitoring equipment at the system triggers three types of audible alarms:

- Red Alarm is an audible critical alarm that is loud and continuous. It indicates the need to immediately check on a patient’s status and vital signs.

- Yellow Alarm is a quieter and intermittent audible alarm that stops after several minutes. It indicates a temporary irregularity in the heart rate or rhythm that is not immediately critical.

- Blue Alarm is similar to the yellow alarm and indicates a problem with the system itself or an improperly connected, or disconnected, telemetry lead.

I'm sure you can all see what the problem might have been: having similar-sounding alarms for possibly critical and non-critical events is probably not a good idea.

This incident provides a good example to illustrate the technique of goal-oriented obstacle analysis.

An obstacle is something that could go wrong and prevent a system from satisfying its goals. Here, the system designers had identified that the obstacle "Patient is accidentally disconnected from the monitor" would obstruct the goal "Maintain Accurate Measures of Patient Vital Signs", which itself contributes, along with many other goals, to the goal "Keep Patient Alive". The resolution of that obstacle was to include an alarm to warn the nurses when this happens (technically, in our framework, we would call this an obstacle mitigation).

Unfortunately, they may not have paid enough attention to the goal that this obstacle resolution was meant to achieve. The goal is not just to send an alarm when the patient gets disconnected (which, in our jargon, we would call a weak mitigation); it is to get the nurses to react, reconnect the patient, and keep him alive (what we would call a strong mitigation). To achieve this latter goal, the system relies on the assumption that nurses will react to the disconnection alarm. This assumption itself has an obstacle, "Nurse Does Not React to Disconnection Alarm", which is even more likely to occur if the critical disconnection alarm sounds similar to another, non-critical alarm. It is this obstacle that probably received insufficient attention, or was not considered at all, during the system design, and that led to the incidents. The resolution now being proposed is "give the nurses better training" (an instance of obstacle reduction in our framework). But an alternative resolution, which could have been chosen at design time, would of course have been to make the "blue alarm" signalling an instrument malfunction sound similar to the critical "red alarm" rather than to the non-critical yellow one.
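To make the structure of this reasoning explicit, here is a minimal sketch, in Python, of how the goal/obstacle/resolution chain of this incident could be encoded and checked. The class names and the simple propagation rule are illustrative choices of mine, not the notation or API of any actual obstacle-analysis tool; in particular, the assumption behind the mitigation is modelled, for brevity, as just another obstacle.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    contributes_to: list["Goal"] = field(default_factory=list)

@dataclass
class Resolution:
    name: str
    kind: str  # "weak-mitigation", "strong-mitigation", or "reduction"
    assumptions: list["Obstacle"] = field(default_factory=list)

@dataclass
class Obstacle:
    name: str
    obstructs: Goal
    resolutions: list[Resolution] = field(default_factory=list)

# Goal hierarchy from the incident.
keep_alive = Goal("Keep Patient Alive")
vital_signs = Goal("Maintain Accurate Measures of Patient Vital Signs",
                   contributes_to=[keep_alive])

# The obstacle to the mitigation's assumption (modelled here, for brevity,
# as an obstacle on the same goal).
nurse_ignores = Obstacle("Nurse Does Not React to Disconnection Alarm",
                         obstructs=vital_signs)

# First-order obstacle and the resolution the designers chose.
disconnection = Obstacle("Patient Accidentally Disconnected from Monitor",
                         obstructs=vital_signs)
disconnection.resolutions.append(
    Resolution("Raise Blue Alarm on Disconnection",
               kind="weak-mitigation",
               assumptions=[nurse_ignores]))

def review(obstacle: Obstacle) -> list[str]:
    """Flag unresolved obstacles, weak mitigations, and unresolved
    obstacles to the assumptions that mitigations rely on."""
    issues = []
    if not obstacle.resolutions:
        issues.append(f"'{obstacle.name}' has no resolution")
    for r in obstacle.resolutions:
        if r.kind == "weak-mitigation":
            issues.append(f"'{r.name}' is only a weak mitigation")
        for assumed in r.assumptions:
            issues.extend(review(assumed))
    return issues

for issue in review(disconnection):
    print(issue)
# 'Raise Blue Alarm on Disconnection' is only a weak mitigation
# 'Nurse Does Not React to Disconnection Alarm' has no resolution
```

The check surfaces both problems described above: the blue alarm is only a weak mitigation of the disconnection, and the obstacle to the assumption that nurses react to it was left unresolved. Better training, or a redesigned alarm, would be recorded as a resolution of that second obstacle.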

Goal-oriented obstacle analysis provides a modelling and analysis framework for carrying out this kind of reasoning in a systematic and rigorous way. You can check the paper for details.

Goal Modelling with Pelorus

The first paper on goal-driven requirements acquisition was published in 1993. Since then, it has been followed by a very large number of research papers, a comprehensive book, and a growing number of industrial applications, but goal modelling is still something that is mostly done in universities and not so much elsewhere. In industry, current practice is based on use cases, process modelling, and user stories; you will rarely meet a requirements engineer or business analyst who knows about goal modelling. This is despite broad agreement that existing requirements engineering practices don't work so well, precisely because they focus too much on the processes and the software (the how) and not enough on the goals (the why). So how can we fix this? How can we make goal modelling more broadly used? Ian Alexander, one of the early adopters, says we'll know we've been successful when we regularly hear analysts saying things like "this use case relates to this goal, which contributes to this other goal and relies on this domain assumption." To achieve this, we'll need to simplify the most essential ideas contained in our technical research papers and communicate them better.

With this in mind, I was captivated by Vic Stenning's talk "Pelorus: a Tool for Business Change" at a meeting of the BCS Business Change Specialist Group last Thursday. The talk consisted purely of discussions about goals: how to elaborate goal models by asking WHY and HOW questions, how to avoid confusing goals and activities, how to identify stakeholders by asking WHO questions, how to define goals with measurable factors and targets, and how to anchor risk analysis on specific goals instead of doing it in a vacuum. Every concept had an almost direct relation to some of our research papers: the goal-driven elaboration process, goal refinement, measures of partial goal satisfaction, obstacle analysis. The only ideas I could see missing from the discussions were conflict analysis and reasoning about alternatives.

And Pelorus even brings something new and exciting to goal modelling. It is a lightweight tool for collaborative modelling. Its target domain is business change initiatives. Studies show that about 50% of all such initiatives fail. There are many reasons for this, but one of the main factors is a failure to engage key stakeholders early enough in the design of the changes. Vic quoted Rosabeth Kanter: "Change is disturbing when done to us, and exhilarating when done by us", a view I agree with entirely. The focus in Pelorus is on providing a platform that allows large groups of stakeholders to collectively define and manage their goals for a change project. Unlike other goal modelling tools such as Objectiver or jUCMNav, it is not meant to be used for a detailed goal-oriented elaboration of complete software requirements specifications.

Pelorus is a web-based tool with a deliberately minimalistic design. The main concept is that of a goal: a goal can support other goals, goals must be well defined and measurable, and, apart from a few other things such as the goal-based risk analysis, that's it. This is goal modelling stripped to its bare minimum! Keeping it simple is essential to allow a diverse group of stakeholders to contribute directly to the model's elaboration, but it also ensures that everyone focuses on the goals and nothing else. This minimalistic design is reflected in a clean and simple user interface that you can see in their videos. Oh, maybe it's a detail, but in Pelorus goal models are not called models, they are called maps. This is a term that was also used by another company selling goal-oriented techniques, so there might be something to it.
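As a rough illustration of how little machinery such a stripped-down goal map actually needs, here is a hypothetical sketch in Python. It reflects only my reading of the concepts presented in the talk; the names, structure, and example goals are mine and say nothing about Pelorus's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    factor: str  # what is measured
    target: str  # the measurable target to reach

@dataclass
class Risk:
    description: str
    likelihood: str  # e.g. "low", "medium", "high"

@dataclass
class Goal:
    name: str
    measures: list[Measure] = field(default_factory=list)
    risks: list[Risk] = field(default_factory=list)       # risks anchored on this goal
    supports: list["Goal"] = field(default_factory=list)  # goals this one contributes to

# A tiny, invented goal map: one strategic goal supported by an operational one.
retention = Goal("Improve customer retention",
                 measures=[Measure("annual churn rate", "below 5%")])
response = Goal("Answer support tickets faster",
                measures=[Measure("median first response time", "under 2 hours")],
                risks=[Risk("support team is under-staffed", "medium")],
                supports=[retention])

# In this stripped-down sense, a goal is well defined when it has a measure.
for goal in (retention, response):
    status = "measurable" if goal.measures else "NOT measurable"
    print(f"{goal.name}: {status}")
```

Everything else found in richer goal modelling frameworks (refinement semantics, conflict links, formal definitions) is deliberately absent, which seems to be precisely the point.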

Once the goal map has been defined, stakeholders continue to use Pelorus to supervise the delivery and harvesting of the changes. This transforms the goal model into a form of "living" business case that, unlike traditional business cases that are written once and then forgotten, can evolve throughout the change delivery. This is another interesting idea, and one that resonates with my current research interests in system evolution.

Pelorus could be a good example of how research ideas transfer to practice, except that Vic Stenning had never heard about goal-oriented requirements engineering. The main influence behind Pelorus, he says, is critical systems thinking, although, as far as I know, critical systems thinking doesn't include the kind of goal modelling approach present in Pelorus. Sure enough, the concepts of goals, goal decomposition, measurable objectives, and perhaps even obstacles are so common that one doesn't need to have read or heard about goal modelling to come up with the same ideas. Yet, during the talk, the resemblance to the research papers and tutorials on goal-oriented requirements engineering was striking. One person in the audience observed that these ideas are very much in the air at the moment. As a researcher who sometimes has to justify the public money spent on his research by showing that it has an impact, I like to believe that we have played at least a small role in putting these ideas in the air. I hope Pelorus and other similar tools will continue to push them forward.