A good software estimate should:

- Give you an understanding of risk and unknowns.
- Quantify the known work.
- Be something that you can base a big decision on.
- Be refined as new information becomes available.
- Be a range with a 90% confidence interval.
In part 1, I explained how to account for risk and unknowns in your software estimates. In this post, I'll explain how to quantify the work that is known.
Quantifying The Known Work
All software has to be designed, architected, implemented, tested, debugged, and deployed, and each of these areas needs to be accounted for in the software estimate. Here is a set of questions to ask for each area to help you understand the level of effort involved in building your software.
Using these questions, you can break the known work into smaller buckets and then estimate each piece independently.
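As a concrete sketch of that idea, the per-bucket ranges can be rolled up into an overall range. The task names and numbers below are hypothetical, and simply summing the low and high bounds is an assumption; in practice the combined 90% range is usually narrower than the straight sum:

```python
# Roll up per-task estimate ranges (in days) into an overall range.
# Task names and numbers are hypothetical examples.
tasks = {
    "design": (3, 8),
    "implementation": (10, 25),
    "testing": (5, 12),
}

def roll_up(estimates):
    """Sum the low and high bounds of each task's range."""
    low = sum(lo for lo, hi in estimates.values())
    high = sum(hi for lo, hi in estimates.values())
    return low, high

print(roll_up(tasks))  # (18, 45)
```

The point is that each small bucket is far easier to bound honestly than the project as a whole.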
- Has the user experience and user interaction been defined? Do people interact directly via a user interface you provide or indirectly from another user interface? Have all the transitions and states been defined?
- Will your software be available in multiple regions and/or locales? Does the design take regionalization and localization into account? Who provides the translations?
- What platform will the software run on? Are there platform-specific requirements or constraints? Does hardware need to be purchased or provisioned?
- What abstractions need to be defined? What is the hierarchy and/or composition of the abstractions?
- Does the software rely on persistent data? How is it generated? How is it updated? What data-specific workflows are there?
- What metrics do you need to track?
- How is the software going to scale? Are there single points of failure? Is the software going to be memory bound and/or CPU bound?
- What is the software NOT going to do?
- Is your software released incrementally? How will the system handle data structure changes?
- How are functionality updates handled in your system? Do updates need to be isolated from each other, or can they co-exist? Is there a need to maintain backwards compatibility?
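To make the data-structure question concrete, here is a minimal sketch of one common answer: version persisted records and upgrade old formats on read, so old and new releases can co-exist. The field names and version scheme are hypothetical:

```python
# Sketch: tolerate old record formats when data structures change
# between incremental releases. Field names are hypothetical.
def upgrade_record(record):
    """Bring a persisted record up to the current schema version."""
    version = record.get("version", 1)
    if version < 2:
        # Version 2 split "name" into "first_name"/"last_name".
        first, _, last = record.pop("name", "").partition(" ")
        record["first_name"] = first
        record["last_name"] = last
        record["version"] = 2
    return record

old = {"name": "Ada Lovelace"}
print(upgrade_record(old))
# {'first_name': 'Ada', 'last_name': 'Lovelace', 'version': 2}
```

Estimating this kind of migration work up front is much cheaper than discovering it after the first incremental release.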
- What is the source control mechanism? Is it already set up? Does everyone have access? Is there a branching strategy?
- Who is going to implement the software? Do they have all the hardware and software they need? Do they know the platforms or technologies that your project uses? Do they require time to ramp up on the platform or tools? Is external training required?
- Is your project going to use continuous integration? If not, why? Is your project going to use continuous deployment? If so, how do you deal with a failed deployment?
- How are bugs going to be tracked?
- How are the units of software defined? Are there tests for all units? A good software project has tests for each unit of work as well as tests for the integration of multiple units.
- Are your tests automated? How often are the tests run and by whom?
- Do you need black box testing, white box testing, or a combination of both for your software? Are you testing more than just the happy path, or are you doing negative case testing as well? Does your software need stress or load tests?
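The happy-path versus negative-case distinction is worth a concrete illustration. The function below is a hypothetical example; the pattern is what matters: valid input must produce the right answer, and invalid input must fail loudly rather than silently.

```python
# Sketch: testing the happy path and the negative cases for a small
# function. parse_port is a hypothetical example.
def parse_port(value):
    """Parse a TCP port number from a string, validating its range."""
    port = int(value)  # raises ValueError for non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# Happy path: valid input produces the expected value.
assert parse_port("8080") == 8080

# Negative cases: invalid input raises instead of returning garbage.
for bad in ("-1", "70000", "http"):
    try:
        parse_port(bad)
        assert False, f"expected ValueError for {bad!r}"
    except ValueError:
        pass
```

Negative cases typically outnumber happy paths, which is exactly why the testing bucket is so easy to underestimate.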
- Does your testing require specific hardware or software? Do these need to be procured?
- What tools are going to be used to debug the software? How are you going to profile performance?
- What needs to be logged? How often and by what components? How are logs collected?
- How are errors reported? What work is required to make errors in the system reproducible?
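Answering "what components log, and where do the logs go" often amounts to setting up per-component loggers that share one sink. A minimal sketch with Python's standard `logging` module; the component names are hypothetical, and an in-memory stream stands in for a real log sink:

```python
import io
import logging

# Sketch: per-component loggers sharing one collection point, so each
# line records which component emitted it and at what severity.
# Component names are hypothetical; StringIO stands in for a real sink.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))

root = logging.getLogger("app")
root.addHandler(handler)
root.setLevel(logging.INFO)

# Child loggers propagate to the "app" logger's handler.
logging.getLogger("app.api").info("request received")
logging.getLogger("app.db").error("connection lost")

print(stream.getvalue())
# app.api INFO request received
# app.db ERROR connection lost
```

Deciding the format and collection point up front is estimable work; retrofitting logging onto a running system rarely is.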
- How is the software going to be released? How is your software going to be monitored? How is your infrastructure going to be built out? Can you automate the infrastructure build-out using tools like Chef or Puppet?
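Tools like Chef and Puppet work by declaring a desired state and converging the machine to it idempotently: applying the same configuration twice changes nothing the second time. A toy sketch of that model in Python, managing a single file (the path and content are hypothetical; real tools manage packages, services, users, and much more):

```python
import os
import tempfile

# Toy sketch of the idempotent, declarative model used by tools like
# Chef and Puppet: describe the desired state, converge to it, and
# report whether anything had to change. Path/content are hypothetical.
def ensure_file(path, content):
    """Converge a file to the desired content; True if it changed."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return False  # already converged, nothing to do
    with open(path, "w") as f:
        f.write(content)
    return True

path = os.path.join(tempfile.mkdtemp(), "motd.txt")
print(ensure_file(path, "welcome\n"))  # True on the first run...
print(ensure_file(path, "welcome\n"))  # ...False once converged
```

If the build-out can be expressed this way, it becomes repeatable and therefore far easier to estimate than hand-provisioned infrastructure.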
- How is the software documented? How is the documentation updated?