Requirements are never perfect. When are they good enough?

We know it’s impossible to write perfect requirements. Requirements for complex systems will always be incomplete, contain defects such as ambiguities and inconsistencies, and fail to anticipate all changes that will happen during and after development. This creates requirements risks – the risks of developing wrong requirements. Good requirements engineering can limit these risks but they can never be entirely eliminated. Requirements must therefore be good enough so that the remaining risks are acceptable to the project.

But how do we decide when the requirements are good enough and their risks acceptable?

In our paper on “Early Failure Predictions in Feature Request Management Systems”, we experimented with one approach to addressing this question. The context for this experiment is large-scale open source projects, such as Firefox, Thunderbird and four others, where features are regularly added, modified, and removed in response to stakeholders’ change requests that are submitted and discussed online. Our idea was to apply machine learning to past change requests to build predictive models that would give early warnings of the risks associated with new change requests. Camilo Fitzgerald, the lead author of the paper, developed an automated technique that gives early warnings about the following risks:

  • Product failure: the risk that there will be bug reports or change requests associated with the implementation of a change request;
  • Abandoned development: the risk that the implementation of a change request will be abandoned before completion;
  • Late development: the risk that the implementation of a change request will not be completed in time;
  • Removed Feature: the risk that the implementation of a change request will have to be removed from the product (typically because it conflicts with other features or requirements);
  • Rejection Reversal: the risk that a decision to reject a change request is premature and will have to be reversed.

The risk warnings are generated by analysing a series of simple characteristics of the discussion associated with each change request, such as the number of people who participated in the discussion, the length of the discussion, and the words being used.
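To make these characteristics concrete, here is a minimal sketch of how such features might be extracted from a change request discussion. The record format, the risk-word list, and the feature names are all illustrative assumptions, not taken from the paper; in a real pipeline these features would then be fed to a learned classifier.

```python
from collections import Counter

# Illustrative list of words that might signal requirements risk
# (hypothetical, not from the paper).
RISK_WORDS = {"unclear", "conflict", "depends", "maybe", "blocker"}

def extract_features(discussion):
    """Compute simple characteristics of a discussion, modelled as a
    list of (author, comment) pairs: number of distinct participants,
    discussion length in words, and frequency of risk-indicative words."""
    authors = {author for author, _ in discussion}
    words = [w.lower().strip(".,") for _, text in discussion for w in text.split()]
    counts = Counter(words)
    risk_hits = sum(counts[w] for w in RISK_WORDS)
    return {
        "participants": len(authors),
        "length_words": len(words),
        "risk_word_freq": risk_hits / max(len(words), 1),
    }

# Toy example discussion between two stakeholders.
discussion = [
    ("alice", "This feature request is unclear and may conflict with search."),
    ("bob", "Agreed, requirements are unclear. Maybe split it?"),
]
print(extract_features(discussion))
```

A feature vector like this, computed for many historical change requests with known outcomes, is the kind of input on which a standard classifier could be trained.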

The predictive model is coupled with a simple model of the likely cost and benefit of performing additional requirements analysis on a change request before its implementation. The model takes into account the relative cost of additional requirements analysis with respect to the cost of the failure associated with the risk, and the probability that such additional analysis actually succeeds in preventing the failure (currently, these factors are estimated from general empirical data and best judgement rather than calibrated with project-specific data). This cost-benefit model is used to decide the threshold at which to generate the risk warnings and recommend that further requirements analysis take place on a change request before deciding whether to implement it.
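The threshold idea can be sketched with a simple expected-value calculation. This is an assumed simplification for illustration, not the exact model in the paper: extra analysis pays off when the expected prevented loss (predicted failure probability × failure cost × probability that analysis prevents the failure) exceeds the analysis cost.

```python
def warning_threshold(analysis_cost, failure_cost, prevention_prob):
    """Predicted failure probability above which extra requirements
    analysis is worthwhile. Expected benefit of analysing a change
    request with failure probability p is p * failure_cost * prevention_prob;
    setting this equal to analysis_cost and solving for p gives the threshold."""
    return analysis_cost / (failure_cost * prevention_prob)

# Illustrative numbers (not from the paper): analysis costs 1 unit,
# the failure would cost 20 units, and analysis prevents it half the time.
threshold = warning_threshold(1.0, 20.0, 0.5)
print(threshold)  # 0.1 -> warn on change requests with predicted risk above 10%
```

Under these assumptions, cheaper analysis or costlier failures lower the threshold, so more change requests trigger a warning, which matches the intuition behind the cost-benefit model described above.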

The results of this experiment are promising; they show that the approach is effective at discovering which change requests have the highest risk of failure later in development. This information can help project managers and requirements engineers decide whether a change request needs more analysis before being assigned for implementation, and monitor the development of the riskiest change requests more carefully.

But this was really a first step. The accuracy of the risk warnings might be improved in several ways, for example by applying more sophisticated natural language techniques for automatically detecting ambiguities in change requests, or by taking into account characteristics of the software components affected by a change request. These improvements might also give us better explanations of why a change request carries a specific risk. One of the biggest limitations of our current technique is that it identifies correlations between change requests and failure, but it does not give us useful explanations of why some change request characteristics lead to a certain type of failure. Much better models will be needed for this approach to be really useful.

It would also be interesting to try this approach in other contexts than online change request management systems.

It could for example be applied to traditional requirements documents. All we need is to be able to relate individual requirements items to their outcomes (e.g. how long they take to implement, whether they have associated bug reports and change requests, etc.) and we would be able to apply the same machine learning techniques to construct similar predictive models.

It would also be interesting to apply it in the context of agile development. There, the approach could be viewed as a form of extensive automated retrospective analysis. It would consist of automatically analyzing all data and metadata associated with epics, user stories, acceptance tests, and their outcomes to learn the characteristics of good and bad user stories with respect to requirements risks.

The long-term objective of this work is to make requirements risks much more visible and central to software development than they are today. Instead of viewing requirements primarily as documentation, we should give more importance to their role as risk mitigation tools, and develop techniques that treat them as such.

Note: A preprint of the paper is available below and on my publication page.

Goal Modelling with Pelorus

The first paper on goal-driven requirements acquisition was published in 1993. Since then, it has been followed by a very large number of research papers, a comprehensive book, and a growing number of industrial applications, but goal modelling is still something that is mostly done in universities and not so much elsewhere. In industry, current practices are based on use cases, process modelling, and user stories; you will rarely meet a requirements engineer or business analyst who knows about goal modelling. This is despite a broad agreement that existing requirements engineering practices don’t work so well, precisely because they focus too much on the processes and the software (the how) and not enough on the goals (the why). So how can we fix this? How can we make goal modelling more broadly used? Ian Alexander, one of the early adopters, says we’ll know we’ve been successful when we regularly hear analysts saying things like “this use case relates to this goal that contributes to this other goal and relies on this domain assumption.” To achieve this, we’ll need to simplify the most essential ideas contained in our technical research papers and communicate them better.

With this in mind, I was captivated by Vic Stenning’s talk “Pelorus: a Tool for Business Change” at a meeting of the BCS Business Change Specialist Group last Thursday. The talk consisted purely of discussions about goals: how to elaborate goal models by asking WHY and HOW questions, how to avoid confusing goals and activities, how to identify stakeholders by asking WHO questions, how to define goals with measurable factors and targets, and how to anchor risk analysis on specific goals instead of doing it in a vacuum. Every concept had an almost direct relation to some of our research papers: goal-driven elaboration process, goal refinements, measures of partial goal satisfaction, obstacle analysis. The only ideas I could see missing from the discussions were conflict analysis and reasoning about alternatives.

And Pelorus even brings something new and exciting to goal modelling. It is a lightweight tool for collaborative modelling. The target domain for Pelorus is the domain of business change initiatives. Studies show that about 50% of all such initiatives fail. There are many reasons for this, but one of the main factors is a failure to engage key stakeholders early enough in the design of the changes. Vic quoted Rosabeth Kanter: “Change is disturbing when done to us, and exhilarating when done by us” – a view I agree with entirely. The focus in Pelorus is on providing a platform that allows large groups of stakeholders to collectively define and manage their goals for a change project. Unlike other goal modelling tools such as Objectiver or jUCM-Nav, it is not meant to be used for a detailed goal-oriented elaboration of complete software requirements specifications.

Pelorus is a web-based tool with a deliberately minimalistic design. The main concept is that of the goal: a goal can support other goals, goals must be well-defined and measurable, and, apart from a few other things such as the goal-based risk analysis, that’s it. This is goal modelling stripped to its bare minimum! Keeping it simple is essential to allow a diverse group of stakeholders to contribute directly to its elaboration, but it also ensures that everyone focuses on the goals and nothing else. This minimalistic design is reflected in a clean and simple user interface that you can see in their videos. Oh, maybe it’s a detail, but in Pelorus goal models are not called models, they are maps. This is a term that was also used by another company selling goal-oriented techniques, so there might be something here.

Once the goal map has been defined, stakeholders continue to use Pelorus to supervise the delivery and harvesting of the changes. This transforms the goal model into a form of “living” business case that, unlike traditional business cases, which are written once and then forgotten, can evolve throughout the change delivery. This is another interesting idea that resonates with my current research interests in system evolution.

Pelorus could be a good example of how research ideas transfer to practice, except that Vic Stenning had never heard of goal-oriented requirements engineering. The main influence behind Pelorus, he says, is critical systems thinking – although, as far as I know, critical systems thinking doesn’t include the kind of goal modelling approach present in Pelorus. Sure enough, the concepts of goals, goal decomposition, measurable objectives, and perhaps even obstacles are so common that one doesn’t need to have read or heard about goal modelling to come up with the same ideas. Yet, during the talk, the resemblance to the research papers and tutorials on goal-oriented requirements engineering was striking. One person in the audience observed that these ideas are very much in the air at the moment. As a researcher who sometimes has to justify the public money spent on his research by showing it has an impact, I like to believe that we have played at least a small role in putting these ideas in the air. I hope Pelorus and other similar tools will continue to push these ideas forward.