The Latest FDD Processes
The FDD processes that we use on all our projects (internal and for clients) are described in this article. They don't differ greatly from the original FDD processes. They correct the versions published in the JMCU book (see footnote) and add some enhancements based on experience training people in this work. In general, they also contain more "how to" detail, and this is detail I intend to keep adding to the processes.
A few of the differences are noted here. Some of the differences require more elaboration than is possible in this one topic. Those will be covered in future articles.
Process 1: Develop an Overall Model
The features list is not produced here; we've always done it as a separate and discrete step. You can't really merge processes 1 and 2, as you are effectively doing two discrete things: modelling and feature elicitation. Furthermore, in process 1 the participation of the domain experts is crucial and mandatory. This is not the case in process 2, where the chief programmers producing the features list is the norm.
Process 2: Build a Features List
The category names are much closer (almost identical) to the originals. This is more explicit and helps communicate that functional decomposition of the domain is what you are doing in this process. We've found in training people that process 2 is often hard to grasp at first, but very easy once demonstrated. Changing the terminology back to the domain-oriented names of Subject Area, Business Activity and Business Activity Step makes it clearer and easier for people to understand. Thinking of the steps in each business activity is an easier mental model for people to follow during this functional decomposition. It is only afterwards that you think of the steps as features.
This was the way it was originally done. In fact, in the original processes the Business Activity Steps were further categorised into major and minor. This helped the first time through the decomposition, but once it had been done we found the extra level of categorisation added no value, so we dropped it and it was never reported on. It might help you in your first experiences with process 2 to use the five levels of categorisation (i.e. including major and minor). After some practice you'll find you can dispense with the extra level, as we have.
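The Subject Area → Business Activity → Business Activity Step hierarchy described above can be sketched as a simple data structure. This is only an illustration of the shape of a features list; the class fields and the sample content (an invented "Order Management" area) are my assumptions, not part of the FDD processes themselves.

```python
# A minimal sketch of the features-list hierarchy: Subject Areas contain
# Business Activities, whose steps become the features. Sample data is
# invented purely for illustration.
from dataclasses import dataclass, field


@dataclass
class BusinessActivity:
    name: str
    steps: list = field(default_factory=list)  # each step is thought of as a feature afterwards


@dataclass
class SubjectArea:
    name: str
    activities: list = field(default_factory=list)


features_list = [
    SubjectArea("Order Management", [
        BusinessActivity("Making a Sale to a Customer", [
            "Calculate the total of a sale",
            "Generate the unique number for an order",
        ]),
    ]),
]

# Counting the steps gives the total number of features.
total_features = sum(
    len(activity.steps)
    for area in features_list
    for activity in area.activities
)
print(total_features)  # 2
```

The point of the three-level shape is that decomposition proceeds top-down by domain, and the "features" only emerge as the leaves at the end.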
Another important point is weighting and prioritisation: we don't do this. It was attempted with the original processes and we found it added no value; in fact, it made things worse. Weighting features is a very labour-intensive task, consuming quite a bit of chief programmer time at the start of the project (where the chief programmers are already a bottleneck). Also, features are already so granular (most are less than a week's work), and each is broken down into six discrete milestones, that the data is already amazingly fine-grained and accurate. What seems intuitively obvious, that weighting features would make for even more accuracy, actually decreases the accuracy. Why? Because for this to work you have to very accurately assess the relative weights of hundreds of features against each other, collapsing them into a much smaller set of weightings such as "complex, medium and simple". The error grows as you go, and overall the accuracy of the reporting decreases.
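To see why the six milestones per feature already give fine-grained progress data, here is a sketch of per-feature percent-complete. The milestone names and percentage weights used below are the commonly published FDD defaults, not something stated in this article; treat them as an assumption and substitute your own if your process description differs.

```python
# Sketch: per-feature progress from the six FDD milestones.
# Names and weights are the commonly published FDD defaults (assumption).
MILESTONES = [
    ("Domain Walkthrough", 1),
    ("Design", 40),
    ("Design Inspection", 3),
    ("Code", 45),
    ("Code Inspection", 10),
    ("Promote to Build", 1),
]


def percent_complete(completed: set) -> int:
    """Sum the weights of the milestones a feature has passed."""
    return sum(weight for name, weight in MILESTONES if name in completed)


# A feature that has been walked through, designed and design-inspected:
print(percent_complete({"Domain Walkthrough", "Design", "Design Inspection"}))  # 44
```

Because every feature reports one of only seven possible completion values, project-level roll-ups stay accurate without anyone estimating relative feature weights.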
The guideline that a feature should take no more than two weeks seems to have morphed into "features always take, or should take, two weeks". This is not the case. Two weeks is an upper limit to guide you in the decomposition: if, during decomposition, a feature looks bigger than two weeks, then decompose it further. Most features take much less than two weeks. Consider the feature "generate the unique number for an order". Hardly two weeks' worth of work! FDD doesn't run in rigid two-week cycles. The cadence in an FDD project is the weekly release meeting, and at these points large numbers of "less than two weeks" features are at differing stages of their development.
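The decomposition rule above is an upper bound, not a target, which a one-line check makes plain. The ten-working-day figure and the sample estimates are my illustrative assumptions.

```python
# Sketch of the rule: two weeks is an upper limit for a feature,
# not a typical or target size. Estimates are in working days (invented).
TWO_WEEKS = 10  # working days


def needs_further_decomposition(estimate_days: float) -> bool:
    """A feature bigger than the two-week limit gets decomposed further."""
    return estimate_days > TWO_WEEKS


print(needs_further_decomposition(0.5))  # False: "generate the unique number for an order"
print(needs_further_decomposition(15))   # True: too big, split it
```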
Finally, prioritisation of the right kind is done as part of the planning in process 3; see the "Determine the Development Sequence" task in process 3. We don't prioritise every single feature as "must have" or "nice to have" at this stage, as that is for scope reduction. Of course, there is the usual common-sense approach of looking for low-value function that is a large amount of work or is highly pervasive (i.e. significantly affects the complexity of design, build, test and maintenance). Challenging and filtering such function is done as a matter of course in process 1. Techniques for scope reduction once into the construction phase of a project are another matter entirely; scope reduction there is typically applied at the level of business activities, not features.
Process 3: Plan By Feature
As described above, we don't do "must have" and "nice to have" prioritisation at this stage, so the future features board is not a mechanism we use. The kind of prioritisation done in this process is described in the "Determine the Development Sequence" task: it is about bringing forward complex or high-risk business activities, and about external (visible) milestones such as betas, feedback checkpoints and previews. These are important, and this kind of sequencing (a different form of prioritisation) is often required and very valuable.
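The sequencing described in "Determine the Development Sequence" can be sketched as a simple sort: activities needed for an external milestone come first, then higher-risk activities are brought forward. The field names, risk scale and sample activities below are invented for illustration; they are not part of the FDD process definition.

```python
# Sketch of development sequencing: pin activities needed for an external
# milestone (e.g. a beta) to the front, then bring high-risk/complex
# business activities forward. All data here is invented.
activities = [
    {"name": "Making a Sale", "risk": 2, "needed_for_beta": True},
    {"name": "Reporting", "risk": 1, "needed_for_beta": False},
    {"name": "Pricing Engine", "risk": 3, "needed_for_beta": False},
]

sequence = sorted(
    activities,
    # Tuple key: beta-critical first (False sorts before True),
    # then descending risk within each group.
    key=lambda a: (not a["needed_for_beta"], -a["risk"]),
)

print([a["name"] for a in sequence])  # ['Making a Sale', 'Pricing Engine', 'Reporting']
```

Note the sequencing operates on business activities, not individual features, matching the level at which scope decisions are made.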
Process 4: Design By Feature
Design packages and work packages are made explicit again. Design packages are implied in the JMCU book, but work packages and the workflow of features through development are not. These will be discussed in detail in an article dedicated to FDD workflow. Also, the shared Feature Team Area from the original processes is included again, as it is required in practice for FDD to work. A future article will discuss this in more detail across various development environments and toolsets.
Process 5: Build By Feature
Unit testing is not necessarily class-level. There are a number of approaches to testing; the point is that we don't stipulate just one of them as part of the FDD processes.
Thanks are due to our friends at Sausage Interactive and Open Telecommunications for their questioning and their "nudging" me to republish these processes.
Jeff De Luca