Monday, May 25, 2015

Testing your software properly

Here's a general checklist of the types of tests you can use to make sure you're testing your software properly before you ship it. This isn't an exhaustive list, but it can serve as a starting point for writing a more complete checklist that's right for your software.

Compile before you commit


Good testing starts with making sure your code actually compiles. Yes, people really do check in code without compiling it first. This is a dumb mistake and 100% avoidable.

Run a clean build before you commit


A somewhat non-obvious thing to check when compiling is that you're not using a stale cached object. Compilers often cache compiled objects, track their dependencies, and recompile a dependent object only when the compiler believes it has actually changed. Those cached objects are then linked against your code, which can make your software appear to work even though an object it depends on has changed and broken functionality. Doing a clean build before you commit ensures that an object you depend on hasn't changed in a way that breaks your code integration.

Happy Path Tests


Happy path tests should be your minimum bar when committing code. Happy path tests ensure that your code works as intended when used in the way it was designed. These tests can be thought of as functional tests: they test the functionality of the software and ensure that it meets the business requirements.
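As a minimal sketch (JUnit is assumed as the test framework, and the Calculator class is hypothetical), a happy path test exercises the code exactly the way it was designed to be used:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorHappyPathTest {
    // The documented, intended use of the code: add two numbers.
    @Test
    public void addsTwoPositiveNumbers() {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
    }
}
```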

Negative Path Tests


Negative path tests ensure that your software is resilient to misuse. Negative path testing means exercising your code in ways for which it was not intended. Common tests include passing in null object parameters and testing the upper and lower bounds of parameters. Negative path testing also includes verifying that your software handles exceptions properly and throws the appropriate exceptions.
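A sketch of the negative path for the same hypothetical Calculator (the method names are illustrative, not from any particular library):

```java
import org.junit.Test;

public class CalculatorNegativePathTest {
    // Null input should fail fast with a meaningful exception.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNullInput() {
        new Calculator().addAll(null);
    }

    // Probe the upper bound of the valid input range.
    @Test(expected = ArithmeticException.class)
    public void failsCleanlyOnOverflow() {
        new Calculator().addExact(Integer.MAX_VALUE, 1);
    }
}
```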

White Box Tests


White box tests ensure that your code works from the inside out. These tests require knowledge of, and access to, the internals of the object. White box tests usually verify the fitness of particular private methods and algorithms that aren't visible to callers.
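A sketch of a white box test (the CsvParser class and its package-private splitLine helper are hypothetical; the test lives in the same package precisely so it can reach the internals):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Lives in the same package as CsvParser so package-private internals are reachable.
public class CsvParserWhiteBoxTest {
    @Test
    public void splitLineKeepsQuotedCommasTogether() {
        CsvParser parser = new CsvParser();
        // splitLine is an internal helper that ordinary callers never see.
        String[] fields = parser.splitLine("\"a,b\",c");
        assertEquals(2, fields.length);
    }
}
```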

Black Box Tests


Black box tests ensure that your code works from the outside in. These are a set of tests that ensure your objects work from the consumer's perspective, with no knowledge of the internals. These tests include making sure the object can be created and initialized, that method calls work according to spec, and that the code does not misbehave from the caller's perspective.
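By contrast, a black box test for the same hypothetical parser touches only the public API and asserts on observable behavior:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CsvParserBlackBoxTest {
    // Only the public contract is exercised; no knowledge of the internals.
    @Test
    public void parsesASimpleRowIntoThreeFields() {
        CsvParser parser = new CsvParser();
        assertEquals(3, parser.parse("a,b,c").size());
    }
}
```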

Life-cycle Tests


You should test how your objects function at the various stages of their life-cycle. The key to life-cycle tests is making sure that your objects manage state properly. Life-cycle tests are also useful for making sure that you don't have any memory leaks caused by life-cycle changes.

Life-cycle testing includes testing the creation, destruction, concurrency, and serialization of your objects. Two life-cycle areas that tend to produce bugs are saving and restoring object state, and using the object in a multi-threaded environment.
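As one concrete sketch, a serialization round trip checks that state survives being saved and restored (the Session class is hypothetical and assumed to implement java.io.Serializable):

```java
import static org.junit.Assert.assertEquals;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import org.junit.Test;

public class SessionLifecycleTest {
    @Test
    public void stateSurvivesSerializationRoundTrip() throws Exception {
        Session original = new Session("user-42");

        // Save: serialize the object's state to a byte stream.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }

        // Restore: deserialize and verify the state is intact.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            Session restored = (Session) in.readObject();
            assertEquals(original.getUserId(), restored.getUserId());
        }
    }
}
```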

Integration Tests


Do you understand how your software works in the context of the larger system of components that use, and are used by, your software? Integration tests allow you to make sure that your software works end-to-end in the system as a whole.
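A minimal sketch of one such end-to-end check (the URL is hypothetical, and this assumes the dependent service is actually running rather than mocked):

```java
import static org.junit.Assert.assertEquals;
import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Test;

public class OrderServiceIntegrationTest {
    // Exercises the real network, routing, and serialization path,
    // not an in-process stand-in.
    @Test
    public void orderServiceRespondsEndToEnd() throws Exception {
        URL url = new URL("http://localhost:8080/orders/health");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        assertEquals(200, connection.getResponseCode());
    }
}
```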


Monday, May 18, 2015

When Not To Refactor

Refactoring software is a crucial part of extending the life of software. Refactoring enhances maintainability by incrementally improving the design, readability, and modularity of components. But not much has been said about when not to refactor software.

Don't refactor code unless you need to change the code for a business reason.


One of the common mistakes I see with regard to refactoring is people refactoring code that doesn't need it, under the guise of making it better. The argument usually goes something like "this needs to be more abstract", "I wrote this code a long time ago and it is crappy", "this code is too complex", or something along those lines.

You should only refactor code when you are already in the code to make a change that supports the business. That may sound counterintuitive, but one of the worst things we can do is change code that has no reason to change, however crappy, unreadable, or complex it may be.

Valid business reasons to change code include (but are not limited to):

  • Adding new functionality
  • Extending existing functionality
  • Making measurable performance improvements
  • Adding a layer of abstraction in order to support a new use case
  • Modularizing a particular object so that it can be reused in another part of the system

Adding new functionality or extending existing functionality


This is where the Boy Scout rule comes into play. If you are already in the code for another reason, then you should clean up the code even if you didn't make the mess.

Making measurable performance improvements


This one is probably self-explanatory, but it's important to note that performance improvements will usually require some level of refactoring.

Adding a layer of abstraction in order to support a new use case


This is an important one to understand. People often over-generalize code at the beginning, which leads to overly complex designs and less readable code. If we follow the rule of not creating a layer of abstraction until we have at least two or three use cases for the code, then there will come a point when you need to refactor the code in order to provide a layer of abstraction that doesn't already exist.

Until that second or third use case comes about, the code should not be generalized. You don't have enough information about future uses of the code to get the abstraction right. You may get lucky and guess the future abstraction correctly, but you don't want to run your business on guesses and luck.
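As a hypothetical sketch of that progression, suppose a report exporter originally had exactly one consumer; only when a second real use case shows up does the interface get extracted:

```java
import java.util.List;

// Before: one concrete class serving the only known use case.
class CsvReportExporter {
    String export(List<String> rows) {
        return String.join("\n", rows);
    }
}

// After a second real use case (JSON) arrives, the abstraction is justified.
interface ReportExporter {
    String export(List<String> rows);
}

class CsvExporter implements ReportExporter {
    public String export(List<String> rows) {
        return String.join("\n", rows);
    }
}

class JsonExporter implements ReportExporter {
    public String export(List<String> rows) {
        // Minimal illustration; a real implementation would escape values.
        return "[\"" + String.join("\",\"", rows) + "\"]";
    }
}
```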

Modularizing a particular object so that it can be reused in another part of the system


Code reuse is one of the most important tenets of object-oriented programming. When we identify code that is not specific to a particular object or package AND is needed in some other part of the system, we should refactor this code into its own module. It's important to ONLY do this when the code is actually needed in another part of the system.

Don't refactor code without tests


In order to refactor code safely, you should have unit and integration tests for the existing functionality. I would also argue that you should write tests for the new functionality before you refactor. This helps you understand the proper way to refactor the code, because it forces you to define how the refactored code will be used from a consumer's standpoint.

If tests don't exist for the existing functionality, write them before you start refactoring. This helps ensure that you don't introduce a new bug or regress an old one when refactoring.
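As a minimal sketch (JUnit assumed, the PriceCalculator class hypothetical), a test that pins down existing behavior before a refactor might look like this; the expected values come from running the code as it behaves today:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {
    // Pin down current observable behavior before touching the implementation.
    @Test
    public void discountBehaviorIsPreservedByTheRefactor() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.priceWithDiscount(100.0, 0.10), 0.001);
        assertEquals(100.0, calc.priceWithDiscount(100.0, 0.0), 0.001);
    }
}
```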


Monday, May 11, 2015

The Engineers Cloud

In my previous post in this series I explained the aspect of The Cloud that I like to call The Consumers Cloud. I explained how The Consumers Cloud breaks down into data management services, social media, and streaming media. In this post I'll talk about the second aspect of The Cloud.

The Engineers Cloud


I call this type of Cloud use The Engineers Cloud because this aspect of The Cloud isn't something you as a consumer interact with directly. Instead, engineers are taking advantage of Cloud services to enhance how you interact with their content and services.

What The Cloud Means To Engineers


While there are almost no limits to the things you can do in The Cloud from an engineer's perspective, there are two main areas I'd like to focus on here. The first is as a means of distributed computing. The second is improving the reliability of services.

Distributed Computing


The Engineers Cloud allows you to take advantage of the virtually limitless server resources in The Cloud. In the days before The Cloud, server resources were finite: you only had the amount of resources you could afford to keep running all the time, and those resources lived in data centers.

Companies like Facebook, Netflix, Amazon, and Google use The Cloud to do a variety of tasks that would be nearly impossible with a fixed set of resources. The ability to spin up an (almost) unlimited number of servers running your services means that you can parallelize computing to a degree that was not possible a decade ago.

Some examples of engineers using The Cloud as a means of distributed computing:

  • NASA's Jet Propulsion Laboratory (JPL) uses the cloud to capture and store images and metadata collected from the Mars Exploration Rover and the Mars Science Laboratory missions. They operate the mars.jpl.nasa.gov website out of The Cloud without having to build that infrastructure themselves.
  • AccuWeather is using the cloud to serve over 4 billion requests a day.
  • Evite is using the cloud to send more than 250 million party invitations each year.
  • Netflix is using the cloud to stream videos to its streaming customers. It's able to take advantage of The Cloud's distributed computing to analyze very large amounts of data and turn them into recommendations and personalization.

Better Reliability


This is going to sound counterintuitive, but one of the reasons that The Cloud is more reliable is that when planning to put your software and services in The Cloud, you have to plan for failure. The best example of this in practice that I'm aware of is Netflix's Simian Army.

The Cloud allows you to plan for failure and provide better reliability because it allows for:
  • Redundancy through geo-distributing services.
  • Redundancy through clustering your services.
  • Reduced latency through DNS services.

Redundancy through geo-distributing services


Most Cloud providers offer the ability to deploy your software and services to many different regions around the world. This allows you to keep your software and services running even during a regional data center outage, like the Northeast blackout of 2003, by having your services fall back from one geographic region to another when the initial region is down.

Redundancy through clustering your services


Most Cloud providers give you the ability to cluster your services behind some sort of virtual load balancer. Most of these load balancers will automatically stop sending traffic to a machine that stops responding, or that returns a particular error, at a predefined URL on the machine.

While clustering your services behind a load balancer allows you to remove or replace a machine that isn't functioning properly, it is also the primary means by which you can quickly scale up your service to meet demand. If your service is experiencing higher-than-expected load, you can spin up new servers in your cluster and scale proportionally with your traffic.
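As a minimal sketch of the service side of that health check, using the JDK's built-in HTTP server (the port and the /health path are assumptions, not any particular provider's convention):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class HealthCheck {
    public static void main(String[] args) throws Exception {
        // The load balancer polls this URL; a 200 keeps the machine in rotation.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```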

Reduced latency through DNS services


DNS is how the internet turns the name of a service into the address where that service resides. For example, when you type http://paul.oremland.net into your browser, your computer does a DNS lookup of paul.oremland.net and is given an IP address. It then uses that IP address to talk directly with the service.

Many Cloud providers allow you to virtually control DNS based on characteristics of the request. Some services allow you to route traffic based on the latency to, or the load on, the receiving services. This allows you to distribute your traffic more evenly and provide a better customer experience. Instead of simply pointing your users at a specific machine, you can point them to different machines based on the current state of your system and what gives the users a great experience.
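To make the lookup step concrete, here's a small sketch using the JDK's standard resolver; a latency- or load-aware DNS service can hand different clients different answers to this same question:

```java
import java.net.InetAddress;

public class DnsLookup {
    public static void main(String[] args) throws Exception {
        // Ask DNS for every address registered for the name; steering traffic
        // means returning different addresses based on latency or load.
        for (InetAddress address : InetAddress.getAllByName("paul.oremland.net")) {
            System.out.println(address.getHostAddress());
        }
    }
}
```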

Monday, May 4, 2015

The Consumers Cloud

In my previous post in this series I gave a basic overview of what The Cloud is, its benefits, its high level infrastructure, and why you should care about it. In this post I will go into more detail about what I call The Consumers Cloud.

The Consumers Cloud


Often when people talk about The Cloud, what they're really talking about are the applications that are built on top of, and enabled by, the infrastructure of The Cloud. At a high level, those applications are what I would call The Consumers Cloud.

The main purpose of The Consumers Cloud is to provide distributed access to your data and the services that provide that data. Your data typically consists of images, videos, and documents, but really it can be any files you need to put to, or get from, a variety of machines in a variety of locations.

In The Consumers Cloud you don't interact with The Cloud directly. Instead, you interact with services that are built in The Cloud. Those services are the means by which your data is moved around and presented to you on a variety of devices (mobile, desktop, etc.).

The Consumers Cloud breaks down into three high level areas. The first is data management services, the second is social media, and the third is streaming media.

Your Hard Drive Is Everywhere


You can think of The Consumers Cloud as your hard drive that is everywhere. Services like Dropbox, Amazon Cloud Drive, Microsoft OneDrive, and Apple iCloud all provide the ability to store your files on their servers so that they're accessible from anywhere, on almost any machine. They take your data and, using very sophisticated algorithms, distribute it in such a way as to make reading and writing it from anywhere in the world possible and fast.

Nowadays when you purchase a mobile phone, it usually comes with some sort of Cloud backup. That means the pictures and videos you capture on your phone are uploaded to one of these services and made accessible to you from your many different devices. You can share this media with others much more easily, since it isn't stored only locally on your phone, tablet, laptop, or desktop.

Your Social Life Is Nowhere


The second high level area that The Consumers Cloud breaks into is social networking. Facebook, Twitter, Instagram, Pinterest, and the like all exist in The Cloud. Sometimes these social networks need very few servers to serve their users' traffic. Sometimes they need thousands of servers to meet peak demand. Without The Cloud they wouldn't be able to efficiently scale up or down to handle their large volume of traffic in a cost-effective way. The Cloud also allows them to distribute data and load in such a way as to connect their users to servers that are closer to them, or that have less load, at any given time. The Cloud lets them handle the ebbs and flows of their traffic patterns so that their services are always there.

Your Entertainment Is Just There


The last high level area that The Consumers Cloud breaks into is streaming media. Good examples of this are Amazon Prime, Netflix, and YouTube. All of these services are major players in the online entertainment business, and all of them rely on The Cloud as the backbone of their services. They use The Cloud to optimize the distribution of media so that it can be accessed by millions of people without each of those people hitting the underlying data stores for every piece of media, every time.

The Consumers Cloud is about you, your data, and your online life. In my next post in this series I'll detail The Engineers Cloud.