Monday, May 30, 2016

You probably need more data to make your decision

In most software projects I've been involved in, there comes at least one point during the project where we hit a crossroads and the right next step isn't clear. Usually there are two or three competing paths that each have merit on their own.

In these situations it's typical for people to start relying on conjecture. The conversation turns from "we should" to "I think." What's the problem with "I think"? It means:

  • The data behind the decision is coming from you, not your users.
  • The conversation starts becoming more subjective and less objective.
  • There is more risk that you build the wrong thing.

In many situations our experience is enough to make a decision with incomplete information. But you need to recognize when the cost of deciding on conjecture could affect the success of the project itself. When there's a lack of data about something that affects the direction of the software, the phrase "I think" should be a red flag. Here are three key questions you should ask before proceeding:

If I make the wrong decision, am I able to go back and change it?

If you're building software that others build on top of, then it's likely that their business now relies on the decision you've made. In that case, once you've released the software it's going to cost a lot of money and time to go back and change it. If you have a large user base, it's likely that some users will be unwilling to change how they use it at all.

What's the cost associated with the decision?

Does your decision require significant purchases of hardware or other software? Do you have to bring additional people onto the project to complete it? How much additional time will be spent building the proposed solution?

If I build this solution, what am I not able to build?

It's possible that deciding to do X means that you cannot do Y or Z. It's important to understand these trade-offs before you start down a particular path.

So how do you turn the conversation from "I think" back to a confident "we should"? Answering those three questions is a good start. But you also need to start the conversation with your users. In agile, the product owner is the proxy between you and your users. Product owners are usually in touch with the actual users and can provide a more objective view of what your users really need or want. Bring your product owners up to speed on the context and the ramifications of the decision. Let them help you get the right data to minimize your risk of building the wrong thing.


Monday, May 23, 2016

Generic Software Solutions Are Unicorns

How many times have you been working on a platform and recognized the need for some underlying technology that ties together several disparate components in your system? The conversation usually goes something like this:
"We've been building component A for some time and we need to build components B and C. A's been doing well but has a lot of warts and tech debt that we'd like to fix. If we only had a generic system X, then we could build B and C faster and reduce redundancy between A, B, and C."
On the surface that reasoning seems sound. You're not dreaming up an ideal system and looking for problems it can solve. You have real world problems you're trying to solve for components A, B, and C. So what's wrong with setting out to build a generic system to solve your problems?

You aren't providing business value along the way


By building a new generic system instead of refactoring your existing system, you're increasing your opportunity cost. Every day you spend building software that doesn't get into customers' hands is a day of missed opportunity for feedback. You aren't able to get feedback on whether your solution actually fits the customer need. You aren't able to get feedback on the unknowns in your system that only come out in production. You aren't able to get feedback to validate the many assumptions you've had to make along the way. Building successful software requires a closed feedback loop between you and your customers. The longer you go without closing that loop, the more risk you add that you're building the wrong thing.

Generic systems come with too many unknowns


A generic system needs to be able to solve problems it doesn't even know about yet. But often, these aren't problems that you have today or are likely to have in the near future. Building your system so it can be delivered in small pieces that provide immediate business value lets you make sure you're solving the correct problems today, while laying a foundation that can be refactored and abstracted tomorrow to solve the problems that are unknown today.

It's easier to do the wrong thing than it is to do the right thing


When building a system for which you don't know the full requirements, it's easier to do the wrong thing than the correct thing. That's because the possibilities for what you could build are almost infinite, and at best you have an educated guess as to what you will need to build in the future. This means you're going to have to make assumptions that can't be validated. It's very possible that you make design decisions based on those assumptions that are difficult to go back and change. Building only the components you need today ensures that your system only needs to change for the problems you have, not the ones you think you're going to have.

Refactoring existing systems allows you to get the more generic solution that you need 


As you build the systems and components you need, you'll be able to start identifying the parts of your system that are needed across components. These common components are the real foundation of the more generic solution you need. Building them to solve your specific problems today means that they will only do what they need to do (i.e. you won't have spent time building features that aren't used). This will create a virtuous cycle of refactoring and adding more functionality as you need it. You'll get the correct level of abstraction for your components because you'll only be abstracting them when you have a real world need for the abstraction.
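
To make that concrete, here's a minimal sketch (the components and function names are made up for illustration): components A and B each grew their own retry loop, and only once that duplication was real did the shared helper get extracted.

    import time

    def with_retries(operation, attempts=3, delay_seconds=0.5):
        # Shared helper extracted only after components A and B each grew
        # their own copy of this loop; it does no more than they need today.
        last_error = None
        for _ in range(attempts):
            try:
                return operation()
            except Exception as error:
                last_error = error
                time.sleep(delay_seconds)
        raise last_error

    # Both components now call the shared helper instead of duplicating the loop.
    def fetch_report(http_get):
        return with_retries(lambda: http_get("/reports/latest"))

    def fetch_inventory(http_get):
        return with_retries(lambda: http_get("/inventory"))

The helper stays small precisely because it was pulled out of real call sites rather than designed up front.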

Monday, May 16, 2016

Successfully Shipping Software

It's unfortunate how much software gets written and never actually ships or gets used. This can't be wholly avoided, because priorities do change. There are times when projects need to get shelved due to more pressing needs. And if you're working at a company that values risk taking, then some projects will get canceled in an effort to fail fast.

As a software engineer, I found not shipping due to changing priorities, or having to shelve something in favor of a more pressing project, to be two of the most demoralizing things, especially when I had worked on the project for N weeks or (even worse) N months.

Being able to ship software fast is one of the big benefits of Agile software development. Agile encourages you to ship at the end of every sprint. In the ideal world this would mean putting something usable in users' hands at the end of every sprint. In reality, this is usually not the case. Often, you ship a series of smaller end-to-end pieces into production, and at some point you tell your users about them.

Here are some things to consider which will increase your likelihood of shipping software your users will use and decrease the chance that you'll build the wrong thing.

Understand your problem


While this sounds like a no-brainer, it's actually one of the things I see missed most often. Understanding your problem is not about requirements gathering; you gather requirements about a known problem space. One reason software projects fail and never ship is that the problem was based on an idea that was never validated. You should always be able to tie your problem back to something a user has asked for.

Identify your milestones


Many software teams identify milestones at the beginning and move forward linearly in time until shipping. This is not a pit of success: it's easier to miss a requirement this way than to catch every one. It's better to start with the last milestone and ask the question "what needs to happen right before this to achieve this milestone?" Then you keep going backwards in time, asking that question and defining new milestones. Eventually you'll get to a requirements-gathering milestone. By then you'll have either defined all your milestones or identified ambiguity that would likely have derailed or delayed your project had you defined milestones the other way around.

Know your risks and how they can be mitigated


Every project comes with risk. Being able to identify what risks are associated with your project will help you determine how your project will succeed in the event that one or more of your risks are realized. Some common risks to software projects include:
  • Upstream/downstream software dependencies (often internal software being developed by another team) being late or failing to ship altogether.
  • Hardware/software purchases that need to be made.
  • Making sure you're able to deploy to production.
  • Not having user documentation.
  • Not having a marketing plan for a new feature or product.
  • Getting started with performance testing or security reviews late (or not realizing you need to do them at all).

Make sure you know what you need to measure


Milestones are great for helping you know whether your project is on track. But they won't help you once your software is released. You need to identify the ways you should measure your software to understand its health. You should understand both your customer-facing metrics (like the number of failures or the latency of requests) and your internal metrics (cost, performance, system health, etc.). Talk to your product owners and your sales teams to understand what they need to measure about the software to aid them in their jobs.
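
As a rough sketch of what that can look like (the metric names and the emit sink below are invented for illustration), the point is simply to decide up front which customer-facing numbers you'll record on every request:

    import time

    def emit(metric_name, value):
        # Placeholder sink; in a real system this would feed your metrics pipeline.
        print(metric_name, value)

    def handle_request(process):
        # Customer-facing metrics: did the request succeed, and how long did it take?
        start = time.monotonic()
        try:
            result = process()
            emit("requests.succeeded", 1)
            return result
        except Exception:
            emit("requests.failed", 1)
            raise
        finally:
            emit("requests.latency_ms", (time.monotonic() - start) * 1000)

Internal metrics (cost, queue depth, cache hit rate, and so on) would be emitted through the same sink from the parts of the system your users never see.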

Have an exit criteria for success


What is your definition of done? This doesn't mean you won't maintain or make changes. It just means that you've successfully delivered what you wanted to deliver. This should be a checklist of items that's been validated on both the business and technical sides of the house.

Monday, May 9, 2016

Writing code that will last: Identifying Dependencies

In my previous post, writing code that will last: understanding bottlenecks, I talked about how bottlenecks can come in the form of vertical scalability, horizontal scalability, or statefulness. In this post I'll explore how identifying upstream and downstream dependencies in your software can make or break it from a longevity perspective.

Upstream Dependencies


There are several upstream dependencies that you need to consider in your code. These come in the form of your operating system, the platform your application or service runs on, the external applications and services you use, and the contracts you depend on for data.

Operating systems and software platforms can affect the performance, security, and capabilities of your service. Writing code that lasts means abstracting your code enough from the OS and underlying platform that you can upgrade either without changes to your application or service. This lets you protect yourself and your customers from security vulnerabilities, and lets you take advantage of performance enhancements that come from kernel or library updates.
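
Here's a rough sketch of one way to keep that separation (the interface and class names are hypothetical): platform-specific details live behind a small interface, and application code never looks past it.

    import platform
    from abc import ABC, abstractmethod

    class ProcessLimits(ABC):
        # Application code depends on this interface, never on a specific OS.
        @abstractmethod
        def open_file_limit(self):
            ...

    class LinuxProcessLimits(ProcessLimits):
        def open_file_limit(self):
            import resource  # POSIX-only standard-library module
            return resource.getrlimit(resource.RLIMIT_NOFILE)[0]

    class DefaultProcessLimits(ProcessLimits):
        def open_file_limit(self):
            return None  # unknown on this platform

    def process_limits():
        # Upgrading or swapping the OS only touches this factory, not its callers.
        if platform.system() == "Linux":
            return LinuxProcessLimits()
        return DefaultProcessLimits()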

You should also consider the dependencies you have on external applications or services. Do you depend on them in a version-specific way? How will your application need to change if they change the schema of the contract you depend on, or deprecate an API you use? Building code that will last doesn't mean prematurely optimizing your code to defend against these changes; it means understanding how to encapsulate them so that the effect these changes have on your code is minimal.
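
A minimal sketch of that kind of encapsulation (the external directory service, its fields, and the client class are all hypothetical): the rest of your code depends on a small wrapper, so a schema change or a deprecated endpoint gets absorbed in one place.

    from dataclasses import dataclass

    @dataclass
    class Customer:
        # The shape our own code depends on, independent of the vendor's schema.
        customer_id: str
        email: str

    class CustomerDirectoryClient:
        # Thin wrapper around a hypothetical external directory service.
        def __init__(self, http_get):
            self._http_get = http_get  # injected transport that returns parsed JSON

        def get_customer(self, customer_id):
            raw = self._http_get(f"/v2/customers/{customer_id}")
            # If the vendor renames fields or moves to /v3, only this mapping changes.
            return Customer(customer_id=raw["id"], email=raw["contact"]["email"])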

Downstream Dependencies


Downstream dependencies are the people, applications, and services that depend on your software. From a downstream-dependency perspective, writing code that will last means understanding when you're making a choice that you can't go back and change after the fact. To write code that lasts we have to minimize the number of these choices that we make. They often come in the form of the APIs we expose, the contracts we create with other services, the way we represent objects, and so on. Once these choices are made, sometimes they can't be taken back without some sort of negative side effect.
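
One hedged illustration (the field names are invented): keep your internal representation separate from the contract you expose, so that internal refactors can't silently change what downstream consumers depend on.

    from dataclasses import dataclass

    @dataclass
    class Order:
        # Internal model; free to change as the implementation evolves.
        identifier: int
        total_cents: int
        internal_notes: str = ""

    def to_public_v1(order):
        # The contract downstream consumers depend on. Once published, fields can
        # be added here but not renamed or removed without a new version.
        return {
            "id": order.identifier,
            "total_cents": order.total_cents,
        }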



Monday, May 2, 2016

Writing code that will last: Understanding Bottlenecks

In my previous post, writing code that will last: avoiding potential roadblocks, I talked about how roadblocks can come in the form of cost, re-inventing the wheel, or under-defined requirements. In this post I'll explore how understanding potential bottlenecks in your software can make or break it from a longevity perspective.

Understanding Bottlenecks


The first bottleneck you should consider is how your application scales vertically. Vertical scale typically correlates directly with your application's performance characteristics. It is achieved by increasing the CPU, memory, network capacity, or disk your application has available to it. Is your application going to be CPU- or memory-bound? Will you require a lot of disk I/O? How much data will you be reading and writing over the network? You need to understand each of these areas in order to know how your application will scale vertically.
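
If it helps to make that concrete, here's a crude sketch (not a substitute for a real profiler) of getting a first read on whether a piece of work leans on CPU time or memory:

    import time
    import tracemalloc

    def profile(workload):
        # A crude first look at whether a piece of work leans on CPU time or
        # memory: CPU seconds consumed and peak bytes allocated while it ran.
        tracemalloc.start()
        start = time.process_time()
        workload()
        cpu_seconds = time.process_time() - start
        _, peak_bytes = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        return cpu_seconds, peak_bytes

    cpu, peak = profile(lambda: sum(i * i for i in range(1_000_000)))
    print(f"CPU seconds: {cpu:.3f}, peak memory: {peak / 1024:.1f} KiB")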

How your application scales horizontally is the next potential bottleneck you should understand. Horizontal scaling is the ability to add additional machines running your software into an existing pool to add computing capacity. It lets you increase the throughput of your application by having more capacity to process the requests or jobs the application performs. Your software needs to take this into account from the beginning, or it won't last without requiring rework.
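
A small sketch of the property that makes horizontal scaling work (the process pool below stands in for whatever queue or load balancer you actually use): each worker holds no state of its own, so adding workers adds throughput.

    from multiprocessing import Pool

    def handle_job(job):
        # No worker-local state: the result depends only on the job itself,
        # so any worker on any machine could pick it up.
        return job["value"] * 2

    if __name__ == "__main__":
        jobs = [{"value": n} for n in range(100)]
        # Adding throughput here is just adding processes; with a real queue or
        # load balancer it would be adding machines to the pool.
        with Pool(processes=4) as pool:
            results = pool.map(handle_job, jobs)
        print(sum(results))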

Another area to consider when understanding your application's bottlenecks is how stateful it is. Does your application require some external state in order to run? How the application gets and maintains this state can lead to potential bottlenecks. For example, maybe your application needs some specific information about the user making the request that it has to look up somewhere, or maybe it needs information about previous requests. An example of a stateful application is one that uses a shared database. The reason state causes bottlenecks is that your application is now dependent on something external that it does not control.
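
As a sketch (the in-memory dictionary stands in for a shared store such as a database or cache), the useful property is that the application reads its state through one narrow seam, which makes the external dependency visible and replaceable:

    class SessionStore:
        # Stands in for shared external state (a database, cache, etc.).
        def __init__(self):
            self._sessions = {}

        def get(self, session_id):
            return self._sessions.get(session_id)

        def put(self, session_id, data):
            self._sessions[session_id] = data

    def handle_request(store, session_id):
        # The handler itself stays stateless; everything it knows about the user
        # comes from the store, which is the external dependency to watch.
        session = store.get(session_id) or {"visits": 0}
        session["visits"] += 1
        store.put(session_id, session)
        return session["visits"]

    store = SessionStore()
    print(handle_request(store, "user-1"))  # 1
    print(handle_request(store, "user-1"))  # 2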