Requirements are never perfect. When are they good enough?

We know it’s impossible to write perfect requirements. Requirements for complex systems will always be incomplete, contain defects such as ambiguities and inconsistencies, and fail to anticipate all the changes that will happen during and after development. This creates requirements risks – the risks of developing the wrong requirements. Good requirements engineering can limit these risks, but it can never eliminate them entirely. Requirements must therefore be good enough that the remaining risks are acceptable to the project.

But how do we decide when the requirements are good enough and their risks acceptable?

In our paper “Early Failure Predictions in Feature Request Management Systems”, we experimented with one approach to this question. The context for this experiment is large-scale open source projects (Firefox, Thunderbird, and four others) where features are regularly added, modified, and removed in response to stakeholders’ change requests that are submitted and discussed online. Our idea was to apply machine learning to past change requests to build predictive models that give early warnings of the risks associated with new change requests. Camilo Fitzgerald, the lead author of the paper, developed an automated technique that gives early warnings about the following risks:

  • Product failure: the risk that there will be bug reports or change requests associated with the implementation of a change request;
  • Abandoned development: the risk that the implementation of a change request will be abandoned before completion;
  • Late development: the risk that the implementation of a change request will not be completed in time;
  • Removed feature: the risk that the implementation of a change request will have to be removed from the product (typically because it conflicts with other features or requirements);
  • Rejection reversal: the risk that a decision to reject a change request is premature and will have to be reverted.

The risk warnings are generated by analysing a series of simple characteristics of the discussion associated with each change request, such as the number of people who participated in the discussion, the length of the discussion, and the words being used.
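To make this concrete, here is a minimal sketch of how such discussion characteristics could be extracted and fed to a per-risk classifier. This is not the paper’s implementation: the field names and the tiny inline dataset are hypothetical, and scikit-learn is just one plausible choice of toolkit.

```python
# Minimal sketch (not the paper's implementation): extract simple
# discussion characteristics and train one binary classifier per risk.
# The record fields and the two example requests are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack, csr_matrix

requests = [
    {"comments": ["Please add dark mode", "Agreed, this is needed"],
     "authors": {"ann", "bob"}, "failed": 0},
    {"comments": ["Rewrite the rendering engine", "Too vague", "What scope?"],
     "authors": {"cat", "dan", "eve"}, "failed": 1},
]

texts = [" ".join(r["comments"]) for r in requests]
words = TfidfVectorizer().fit_transform(texts)           # words being used
meta = csr_matrix([[len(r["authors"]),                   # participants
                    sum(len(c) for c in r["comments"])]  # discussion length
                   for r in requests], dtype=float)
X = hstack([words, meta])
y = [r["failed"] for r in requests]      # past outcome for one risk type

model = LogisticRegression().fit(X, y)   # repeat for each of the five risks
risk = model.predict_proba(X)[:, 1]      # estimated probability of failure
```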

The predictive model is coupled with a simple model of the likely cost and benefit of performing additional requirements analysis on a change request before its implementation. The model takes into account the relative cost of additional requirements analysis with respect to the cost of the failure associated with the risk, and the probability that the additional analysis actually prevents the failure (currently, these factors are estimated from general empirical data and best judgement rather than calibrated with project-specific data). This cost-benefit model determines the threshold at which to generate the risk warnings and recommend further requirements analysis of a change request before deciding whether to implement it.
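In other words, extra analysis pays off when its expected saving exceeds its cost. The sketch below is my paraphrase of that trade-off, with assumed variable names rather than the paper’s notation:

```python
# Sketch of the cost-benefit trade-off (variable names are assumptions).
# Additional requirements analysis pays off when the expected failure
# cost it prevents exceeds its own cost.
def warn(p_failure: float,      # predicted probability of failure
         cost_analysis: float,  # cost of extra requirements analysis
         cost_failure: float,   # cost incurred if the risk materialises
         p_prevent: float       # chance the extra analysis prevents failure
         ) -> bool:
    expected_saving = p_failure * cost_failure * p_prevent
    return expected_saving > cost_analysis

# Equivalently, warn when p_failure exceeds the threshold
# cost_analysis / (cost_failure * p_prevent).
def threshold(cost_analysis: float, cost_failure: float, p_prevent: float) -> float:
    return cost_analysis / (cost_failure * p_prevent)
```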

The results of this experiment are promising: they show that the approach is effective at discovering which change requests have the highest risks of failure later in development. This information can help project managers and requirements engineers decide whether a change request needs more analysis before being assigned for implementation, and monitor the development of the riskiest change requests more closely.

But this was really a first step. The accuracy of the risk warnings might be improved in several ways, for example by applying more sophisticated natural language techniques to automatically detect ambiguities in change requests, or by taking into account characteristics of the software components affected by a change request. These improvements might also give us better explanations of why a change request carries a specific risk. One of the biggest limitations of our current technique is that it identifies correlations between change request characteristics and failures, but it does not explain why those characteristics lead to certain types of failure. Much better models will be needed for this approach to be really useful.

It would also be interesting to try this approach in contexts other than online change request management systems.

It could, for example, be applied to traditional requirements documents. All we would need is the ability to relate individual requirements items to their outcomes (e.g. how long they took to implement, whether they have associated bug reports and change requests, etc.); we could then apply the same machine learning techniques to construct similar predictive models.
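As a minimal sketch, a traceability record of this kind could look as follows (all field names are hypothetical); once each requirements item is linked to its outcomes, the same feature extraction and training pipeline would apply:

```python
# Hypothetical record linking a requirements item to its outcomes, so
# the same learning pipeline could be reused on requirements documents.
from dataclasses import dataclass, field

@dataclass
class RequirementOutcome:
    req_id: str
    text: str
    days_to_implement: int | None            # None if never completed
    bug_reports: list[str] = field(default_factory=list)
    change_requests: list[str] = field(default_factory=list)

    @property
    def product_failure(self) -> bool:
        # Label analogous to the "product failure" risk above.
        return bool(self.bug_reports or self.change_requests)
```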

It would also be interesting to apply it in the context of agile development, where the approach could be viewed as a form of extensive automated retrospective analysis. It would consist of automatically analysing all the data and metadata associated with epics, user stories, acceptance tests, and their outcomes, to learn the characteristics of good and bad user stories with respect to requirements risks.

The long-term objective of this work is to make requirements risks much more visible and central to software development than they are today. Instead of viewing requirements primarily as documentation, we should give more importance to their role as risk mitigation tools, and develop techniques that treat them as such.

Note: A preprint of the paper is available below and on my publication page.
