Part 1: Why Software Requirements In The Real World Are Hard

Written by matthartley (Co-Founder @ModifyHQ) | Published by HackerNoon on 2019/09/03
Tech Story Tags: agile | architecture | user-stories | documentation | programming | software-development | scrum | latest-tech-stories


This is the first in a series of posts about my experiences developing software in healthcare with my team over the last few years. For most of that time we’ve worked in eye care, with doctors and patients in major centres in Europe, North America and Australia, as well as with global life science companies, on projects aimed at improving care delivery and patient outcomes.
On the way, I’ve learned a few things I’d like to share. The thread through all of this will be requirements.

TL;DR

Why are requirements important? Because poorly managed requirements result in confusion, waste and failure (not the good type), and they're where countless software projects go wrong almost as soon as they've started.
In this post, I outline different views of requirements - objective, idealised, real world, constructive and yours. The real world view describes the challenges my team and I have experienced doing requirements over the years, and the constructive view is a rough model we developed to deal with them, which you can use for your own requirements.
These views would be incomplete without yours. Have you worked on a project that failed because of poor requirements? Do you have ideas on how this could have been avoided? If so, get in touch - I'd love to hear them!

The objective view

Software requirements emphasise the ‘why’ and the ‘what’ of your software (e.g. what it does, what characteristics it has) and are used by a development team to come up with designs that emphasise the ‘how’. They are usually, but not always, documented, can be described at different levels of abstraction (e.g. business objectives, user requirements, system requirements), and can use different templates (e.g. user stories, use cases, system capabilities) with defined attributes. Processes for collecting them can differ wildly.

The idealised view

The idealised view tells us requirements are contributed by a mixed bunch of stakeholders, can be reduced to a singular and consistent set of specifications, and are rational, mostly complete and described in sufficient detail for developers to get on with implementation and everyone else to understand what's going on.
The idealised view utilises idealised models (Figure 1). You'll probably recognise the statements to which they're applied:
  1. Agile user stories: "As a [user type], I want [some goal], so that [some reason]" (INVEST model)
  2. System capabilities: "The system shall provide the ability to..." (INCOSE guidelines on good requirements)
Some teams go so far as to use a formal specification language like TLA+ to model and test their specifications before any code is written.
Figure 1. INVEST & INCOSE requirements attribute models.
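To make the idealised templates concrete, here's a minimal sketch in Python of how a team might hold them as structured data. The classes, field names and the eye-care flavoured example are my own invention for illustration, not part of INVEST or INCOSE:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """Agile user story in the canonical 'As a..., I want..., so that...' form."""
    user_type: str
    goal: str
    reason: str

    def render(self) -> str:
        return f"As a {self.user_type}, I want {self.goal}, so that {self.reason}."

@dataclass
class SystemCapability:
    """System requirement in the INCOSE-style 'shall' form."""
    capability: str

    def render(self) -> str:
        return f"The system shall provide the ability to {self.capability}."

print(UserStory("patient", "to record my vision score each day",
                "my care team can spot deterioration early").render())
print(SystemCapability("store vision scores against a patient record").render())
```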
And change? Change is inevitable, of course! Requirements change over time just like everything else in the development cycle, and so long as your development process can accommodate this easily, change is as welcome as it is inevitable. 

The real world view

It won’t surprise you to know that the real world view is at odds with the idealised view. What follows is a summary of the challenges we have encountered over the years, often inside, or working with, big enterprises, though always as a small, self-contained team, and these days as an independent startup.
Different people need different information at different levels of abstraction. Requirements are best done with input from heterogeneous stakeholder groups - users, customers, internal / external stakeholders from a range of functions (e.g. Development, QA, Ops, UX & Design, Product, PMO, IT, Procurement, Legal, Infosec, Compliance, Regulatory), and other external stakeholders (e.g. regulators, suppliers, industry bodies). Each group has different information needs and expectations as to what constitutes a relevant requirement.
Getting input like this means defining, presenting and relating different information at different levels of abstraction. This may be formal or informal, depending on how you work, but tends to concern the 'what' at the higher levels and the 'how' at the lower ones. With different levels of abstraction come a host of additional stakeholder preferences. One of these is working with different document storage formats - plain text, markdown, XML, Excel, Word, Powerpoint, etc.
Though this variety can lead to better requirements, the more stakeholders you get input from, the greater the risk of too many cooks diluting decision-making. Keeping decision-making centralised can be hard where lots of people are involved, but it is essential for maintaining a workable process.
Language, templates and attributes are rarely consistent. Inconsistency has different sources and types. One is the language used by heterogeneous stakeholder groups to describe their requirements. This mismatch, usually between business and technical domains, can lead to misunderstanding and deliverables that don't address the problem at hand. Developing a ubiquitous language that all stakeholders agree on can help, but the larger the group of stakeholders, the greater the investment required for developing, agreeing on and propagating it.
Another is that heterogeneous groups need heterogeneous information, usually at different levels of abstraction. System requirements aren't useful to users, just as user requirements without system requirements are only partially useful to developers. This entails using heterogeneous templates and attributes for heterogeneous groups.
Still another is the inability to define attributes on individual requirements from their template's attributes, because the underlying model doesn't always hold. For example, INVEST says user stories should always be "independent" of internal and external dependencies - the latter can be pretty hard to satisfy when you're forced to architect in dependencies on a client's processes, people or technology. Similarly, INCOSE tells us requirements should be "complete" - for complex projects, this is very hard to achieve.
And sometimes people just make honest mistakes. Figure 2 shows an example of inconsistent requirements on a project we worked on recently, where the author decided to use two different templates, one agreed by the team (user story + acceptance criteria + wireframes + designs), one not.
Figure 2. Inconsistent requirements for Vision Coach, an app we built for patients with diabetic eye disease.
Always incomplete, sometimes avoidably so. Although complete requirements are hard to achieve on complex projects, there are often gaps that can be closed, albeit sometimes with difficulty.
For example, requirements can be geared towards the priorities and skills of the individual/group responsible for them, providing an incomplete picture of the whole. Similarly, customers and stakeholders can be wrong about their needs and it's on you to elicit them, analyse them and turn them into sensible requirements to be implemented. For both, working collaboratively from the start with the right stakeholders can help a good deal.
Stakeholders can also be slow, especially in enterprise, where it can be hard to get engagement until a decision point is reached and accountability looms. Since this can come late in the cycle, so too can requirements. Part of the answer here is a mixture of persisting in your efforts to involve these people earlier in the requirements development process and offloading risk where you can.
De-prioritising or excluding the dev team's internal requirements, or important non-functional ones, is also common. How many times have you seen a system implemented without proper logging or security, for example? Time, money and decision-maker focus on users and features usually explain this, but the consequences are material for the dev team, and usually for software quality too.
Coming up with requirements isn’t rational or linear. You address ‘why’, ‘what’ and ‘how’ in that order with equal emphasis, right? Pah! Not in the real world. Sometimes they happen out of sequence, other times they are given unequal emphasis, or are missed out altogether.
In a rational world, answers to these questions would be captured at different levels of granularity, becoming more detailed as you move from the vision through scope to implementation. In the real world, however, over-specification and solutioneering at the higher levels are rife, which makes life harder for developers when it comes to implementation and they want (often rightly) to do things another way.
Lastly, the progression from ‘why’ to ‘how’ is rarely linear. Requirements are iterative, and often you get to discussing implementation details with your dev team only to realise your requirements are ambiguous, incomplete or perhaps just plain wrong, necessitating going round the loop again.
Capturing “just the right amount” of detail is hard. Applying the Goldilocks principle - not too much, not too little, but just the right amount - is easier said than done.
Over-specifying requirements is common in waterfall, and it often happens at the wrong level of abstraction in the requirements pyramid, causing problems for developers during implementation, as described. Under-specifying them is common in agile, where user stories alone are sometimes deemed sufficient for developers to get on with coding, usually causing disagreement, confusion and re-work. Both can be hugely wasteful and negatively impact the time and cost of getting to market.
Striking a balance between the two extremes, however, is hard. Part of the solution lies in defining a process for capturing requirements (e.g. elicitation, analysis, specification, verification) with conventions for what information gets captured where in your hierarchy. Part of it may also involve agreeing on templates with defined attributes for capturing this information, which your team can evolve over time.
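As a sketch of what such conventions might look like when enforced, here's a hypothetical "definition of ready" check in Python - the template fields and the rules are invented for illustration, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    """A story captured against the team's agreed template."""
    story_id: str
    narrative: str                                 # "As a..., I want..., so that..."
    acceptance_criteria: list = field(default_factory=list)
    wireframe_links: list = field(default_factory=list)
    implementation_notes: str = ""                 # detail that belongs lower down

def readiness_issues(story: Story) -> list:
    """Flag under- and over-specification against the hypothetical convention."""
    issues = []
    if not story.acceptance_criteria:
        issues.append("under-specified: no acceptance criteria")
    if story.implementation_notes:
        issues.append("over-specified: implementation detail at story level")
    return issues

draft = Story("VC-12", "As a patient, I want reminders, so that I don't miss check-ins")
print(readiness_issues(draft))  # ['under-specified: no acceptance criteria']
```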
Change and conflict are par for the course (of course). Like everything else in the development cycle, requirements are iterative. The most important source of change comes from putting your software in the hands of users and customers - this usually produces new requirements and changes to old ones as latent needs are surfaced and fed into new iterations. This is good and healthy change. There is also unhealthy change due to things like recurring errors or demonstrably bad decisions. This change is not welcome (at least, not to me), and iteration worship shouldn’t be a fig leaf for poor decision making or execution.
Managing change formally or semi-formally means spending time keeping your requirements up-to-date: content changes, version control, review and approvals, managing dependencies and test cases, and so on. Choosing the right place, tools and methods for managing this can make the difference between happiness and eternal sorrow.
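If requirements live as plain-text files under version control, some of that housekeeping can be automated. A rough sketch, assuming a hypothetical layout where each markdown file in requirements/ carries an `ID:` line and a `Tests:` line linking its test cases:

```python
import re
from pathlib import Path

REQ_ID = re.compile(r"^ID:\s*(REQ-\d+)", re.MULTILINE)
TESTS = re.compile(r"^Tests:\s*\S", re.MULTILINE)

def untested_requirements(root: str = "requirements") -> list:
    """Return IDs of requirements with no linked test cases."""
    missing = []
    for path in Path(root).glob("*.md"):
        text = path.read_text()
        req = REQ_ID.search(text)
        if req and not TESTS.search(text):
            missing.append(req.group(1))
    return missing

if __name__ == "__main__":
    for req_id in untested_requirements():
        print(f"{req_id}: no linked test cases")
```

Run in CI, a check like this stops the dependency and test-case chores from silently slipping, while reviews and approvals happen through ordinary pull requests.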
Requirements also conflict with each other - users want one thing, internal stakeholders want another, both are mutually exclusive. This is just a normal part of product management, of course, and sometimes you may be able to resolve the conflict by establishing precedence rules that recognise one group as more important than another (in healthcare, for example, the person with the biggest rule book usually wins).
Where you can't do this - e.g. where you have two conflicting must-have requirements - there needs to be a mechanism for identifying where the conflict occurs and where it is best solved.
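One way to make precedence workable is to write the ordering down and apply it mechanically. A minimal sketch - the ordering and group names below are a made-up example of the "biggest rule book wins" heuristic, not a recommendation:

```python
# Highest precedence first: in healthcare, the regulator usually wins.
PRECEDENCE = ["regulator", "customer", "user", "internal_stakeholder"]

def resolve(conflicting: dict) -> str:
    """Given {stakeholder_group: requirement}, return the requirement of the
    highest-precedence group. Conflicts within a single group can't be
    expressed here, and an unknown group raises ValueError - both cases
    have to be escalated and solved by hand."""
    winner = min(conflicting, key=PRECEDENCE.index)
    return conflicting[winner]

print(resolve({
    "user": "stay logged in for 90 days",
    "regulator": "force re-authentication after 30 minutes idle",
}))  # the regulator's requirement wins
```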

The constructive view  

To leave you with something constructive, I’ll outline some high-level attributes for what I believe makes good requirements. Distilling the preceding, and keeping the idealised view's rose-tinted glasses well out of reach, good requirements are:
  1. Tailored to their audience. Heterogeneous audiences require different data, at different levels of abstraction, and need to be able to work in formats suited to their preferences.
  2. Worked on collaboratively. Right from the start, good requirements require input from a range of people, including users, customers and internal/external stakeholders. Working with them to elicit, analyse and define requirements, whilst maintaining sensible decision-making, helps to improve requirements, designs and software.
  3. Consistent (mostly). Whatever language, template and attributes you choose, near consistency in how you use them will provide the requisite intelligibility and comparability benefits. Borrowing from INVEST and INCOSE, a decent set of attributes for your user stories (for example) might be: independent, negotiable, valuable, estimable, small, testable, traceable and clear (see the sketch after this list).
  4. Described in the right amount of detail. Abstraction decreases as you go from vision through scope to implementation. If you define a hierarchy (e.g. vision, user requirements, features, architecture, system requirements), decide what information to collect at each level and use a template with defined attributes to aid alignment.
  5. Subject to (the right) change. Since requirements are iterative, you need a process for accommodating change (probably some flavour of agile), tools to manage their lifecycle and ways of avoiding unnecessary or wasteful change.
  6. Resolvable where they conflict (mostly). When requirements conflict, come up with rough guidelines for establishing precedence to resolve the conflict quickly and effectively. Where conflict resolution isn't possible, identify at what level the conflict occurs, and where it is best solved.
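Here's the sketch promised in point 3: the merged attribute set expressed as a simple review checklist in Python. Only the attribute names come from INVEST and INCOSE; the class and usage are hypothetical:

```python
from dataclasses import dataclass, fields

@dataclass
class StoryReview:
    """Tick off each attribute as a story is reviewed."""
    independent: bool = False
    negotiable: bool = False
    valuable: bool = False
    estimable: bool = False
    small: bool = False
    testable: bool = False
    traceable: bool = False
    clear: bool = False

    def failing(self) -> list:
        """Attributes the story doesn't yet satisfy."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

review = StoryReview(valuable=True, testable=True, clear=True)
print("Needs work on:", ", ".join(review.failing()))
```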

Your view

How does your view compare? How much time do you spend on requirements? How do you do them? What challenges have you encountered? How have you dealt with them? Were you successful? I'd love to hear your thoughts - it'd be interesting to see how your experiences compare to ours. Please tell us about your views by tweeting us @getfluxapp or emailing on flux@healthforge.io!

What’s next

In my next posts, I look at a case study of managing requirements on a software project - Vision Coach - with many of the challenges described here. I look at process and tooling in detail, and provide practical examples of good, and not so good, practice that can be used to improve how you do requirements.
