Monday, December 30, 2013

What's your legacy?

Today what I do is urgent

1 week from today what I do today will be embraced

1 month from today what I do today will be an afterthought

1 year from today what I do today will be diminished

5 years from today what I do today will be diluted

20 years from today what I do today will be forgotten

100 years from today what I do today will be of no consequence

Unless I leave a legacy

Sounds pretty dismal, right? I'm here to tell you it's not dismal; it's focusing.

Over the holidays I've taken some time to step back and consider where I'm investing my time and energy and what the return is on that investment. It's hard not to get caught up in the everydayness of what you do. You have deadlines, meetings, expectations, and so on. If you're doing agile (which I hope you are), everything is broken up into nice bite-sized chunks for you, which puts even more stress on focusing on the tree at hand and not the forest you're in.

What I'm not saying is that what you do today doesn't matter. It does in the short term at your job and in the medium term of your career. But in the long term of your legacy it doesn't matter at all if you're not focused on leaving a legacy.

You've probably heard the old saying: give a man a fish and you'll feed him for a day; teach a man to fish and you'll feed him for life. That is legacy building: taking what you know and have learned and passing it on.

So what's this got to do with your current deadlines or your current project or even your current job? If you focus on mentoring and teaching others what you're learning along the way and teach them to do the same, then you're leaving a legacy that will outlast you many times over.

There are tons of ways to teach others what you're learning. You can blog. You can become a mentor at work. You can speak at a conference. You can write a white paper. It really doesn't matter how you choose to teach others; it just matters that you choose to.




Monday, December 16, 2013

Migrating from Subversion to Git

Today I thought I would pass along a helpful version control migration tip. I've been writing software both personally and professionally for 14 years. As the years go by, the way I use version control changes, and every couple of years I end up migrating from one version control system to another. A few years ago I migrated from Subversion to Git.

I decided to move to Git for a couple of reasons, the biggest of which is that Git is a distributed version control system. What this means is that when you check out a repo you have a full copy of the repo, including its entire history, on that machine. This allows you to work completely disconnected from any remote server. This becomes extremely useful if you work in a coffee shop, on an airplane, or somewhere else that has no WiFi.
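
As a quick sketch, once the repository is on your machine, everyday operations like browsing history, branching, and committing all work with no network connection (the branch name and commit message here are just examples):

$ git log --graph --oneline
$ git checkout -b airplane-work
$ git commit -am "Work done offline; push it whenever you're back online"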

One thing that was pretty important to me was being able to keep my SVN history intact. I don't believe in checking in commented-out code or leaving classes around that aren't being used. It clutters the system and makes the code more error prone. So I rely heavily on my version control history to help me remember how or why I've done something that I've since removed from the system.

So here are some pretty simple instructions that will allow you to migrate from SVN to Git and maintain your commit history. Make sure you have both git and git-svn installed on your machine before attempting these instructions.
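
If you're not sure whether git-svn is installed, a quick sanity check looks something like this on most systems:

$ git --version
$ git svn --version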

The first step in the migration process is to go to where you have the SVN repo checked out that you want to migrate.

$ cd /path/to/svn/repo

The next step is to find the URL of the remote repository where the SVN database with all of your revisions is hosted.

$ svn info

The output of that command will give you the URL of the remote repository, which I will later refer to as /path/to/remote/svn/repository.
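
If you only want the relevant lines, you can filter the output; this is just a small convenience and not required for the migration:

$ svn info | grep -E '^(URL|Repository Root)'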

The next thing we have to do is create the directory for our git repository.

$ mkdir -p /path/to/local/git/repository

$ cd /path/to/local/git/repository

Now that we have a place to put our new Git repository, we need to initialize the repository from the SVN repository.

$ git svn init /path/to/remote/svn/repository
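
One assumption worth calling out: if your SVN repository uses the standard trunk/branches/tags layout, git-svn can map SVN branches and tags to Git for you if you pass the --stdlayout flag instead:

$ git svn init --stdlayout /path/to/remote/svn/repository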

After we've initialized the repository we can fetch all the revisions from the remote SVN repository.

$ git svn fetch
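
If you also want your SVN usernames mapped to real names and email addresses in the Git history, git-svn accepts an authors file. Here authors.txt is a hypothetical file you'd create yourself, with one line per SVN user in the form "svnuser = Full Name <email@example.com>":

$ git svn fetch --authors-file=authors.txt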

At this point you have now migrated your SVN repository to Git. Pretty easy, huh? Check out your commit history while you're here.

$ git log --graph

If you're using a remote server as your canonical source for your Git repositories (like GitHub), you should also push the local Git repository to the remote one on the master branch. In order to do that you need to first add the remote Git repository as a remote called origin.

$ git remote add origin git@your.remote.server:username/repository.git
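
A quick sanity check that the remote was added correctly:

$ git remote -v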

Now that your local Git repository knows about the remote, you can simply push your changes to the remote's master branch.

$ git push origin master

Monday, December 2, 2013

Why I don't use a traditional IDE...unless I have to

I don't really think I'm a luddite when it comes to technology, but there is one piece of technology that I don't think I will ever truly embrace: the modern IDE.

I feel I need to make a disclaimer from the start. I don't think it's wrong to use IDEs, and in fact I'd argue that there are certain times when it's important to use an IDE. But I do think that becoming dependent on the modern IDE makes you a poor software engineer.

I think I should start with why I keep saying modern IDE and not just IDE. In general, an IDE is any development environment that gives you the ability to write, build, and debug software. With this definition a text editor wouldn't be considered an IDE, but an argument could be made that Vim is because of its built-in ability to run shell commands.

In contrast, here are a few features that most modern IDEs provide:
  • Intelligent code completion
  • WYSIWYG UI editors
  • Proprietary solutions/projects

Those all sound like great add-ons, don't they? While most of them can be very useful tools, here's why I think some of them are more harmful than helpful to the average software engineer.

Intelligent code completion

Most developers I know will argue day and night that if you take away their intelligent code completion they'll slow down significantly as engineers. Intelligent code completion allows you to start typing a word and the IDE will provide you a drop-down of options applicable to your context. Sounds great, doesn't it? Here's why it's not.

It discourages the engineer from learning the API, framework, SDK, or toolkit that they're working with. All you have to know is the keyword you're looking for and the code completion will do the rest. The designers of the API, SDK, or framework you're using put things in certain places for a reason. The more you understand about the structure of the API, SDK, or framework, the better you understand how its designers want you to use it. This becomes very important when trying to do something new with a framework, as you have a better understanding of how to compose objects in the framework to build more robust pieces. If you don't understand the intended use of objects in the API, SDK, or framework you may, as is often the case, try to shove a square peg into a round hole.

WYSIWYG UI Editor

This is actually the least harmful feature modern IDEs provide. I added it to my list because, like intelligent code completion, it discourages the engineer from learning how the UI framework works. I do believe that after the engineer has really learned the UI framework and understood why things are done the way they are for that framework, a WYSIWYG editor can be very helpful, as it abstracts away a lot of the annoyances of UI layout and composition.

Proprietary Solutions/Projects

This is one of the more evil features of a modern IDE. Most modern IDEs have their own way of organizing the resources and dependencies required to build a project. This in and of itself is not a bad thing. The problem arises when the rules for organizing, and the file format used to describe the organization, are closed. What this means is that if you use the IDE's built-in solutions/projects you're making it so that you can ONLY use this IDE with your project.

Software engineers are a fickle group. I challenge you to find a team of developers whose development environments are exactly the same. That is, the same tools, the same configuration, the same defaults, and so on. You won't find it. Even the best organizations that try to standardize their development environments are usually fighting an uphill battle. This is because, as engineers, we all approach software engineering slightly differently. The configuration we use, the defaults we choose, and the tools we have installed are things that help us, individually, become better engineers. But that doesn't mean that a tool that makes me a better engineer is going to make you a better engineer.

Another big problem with proprietary solutions/projects is the lack of flexibility in using best-in-class continuous integration servers. When the IDE has a closed solution/project structure it becomes more difficult to use third-party tools to build your application. You may find plugins that allow the server to integrate with your particular IDE, but it's a hack at best unless it's an official plugin provided by the vendor of your IDE. I say it's a hack at best because the closed nature of the solution/project organization format means that anything can change between versions of the IDE. This poses a problem if you want to stay current with the latest version of your IDE as well as the latest version of your integration server.

An IDE should be a means to an end and not the end itself. If you can't build and distribute your program without a particular IDE you will be fighting an uphill battle when trying to use best-in-class build integration servers, onboard new engineers, share your code with people outside your group, or open source your software.
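
In concrete terms, that means keeping a build that any engineer or CI server can run from the command line, independent of anyone's IDE. The exact tool doesn't matter; as a rough sketch, the whole build and test cycle should reduce to something like:

$ make test

(or mvn package, rake, ant, msbuild, and so on, depending on your stack).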

What this means in the real world

My goal with this post isn't to get you to stop using IDEs. It's to get you to start understanding the trade-offs of using a particular IDE so you're prepared to handle the downsides. At the end of the day there are going to be certain frameworks (like iOS) which are not built to be developed outside of a particular IDE. But if we raise enough awareness about what we want to do outside of these IDEs, we can get the framework designers to provide a more robust development environment, and we will be better software engineers.

Monday, November 25, 2013

Splitting a Git repository into multiple repositories

Today I thought I would pass along a helpful code organization tip. Occasionally I've run across the need to split an existing Git repository into multiple repositories and have wanted to keep the histories intact for each split-out repository. One common scenario where this arises is when you want to refactor a piece of code or submodule out of an existing project into its own library for reuse.

Splitting an existing Git repository into multiple repositories is actually pretty straightforward if you use Git's subtree command. A Git subtree is simply a subfolder within the existing repository that you can commit to, branch, and merge. The easiest way to explain how to do this is with an example.


Let's pretend we have a project called MyProj that is really made up of two sub-projects ProjA and ProjB that we want to split into their own repositories. The first thing we need to do is make sure we're in the directory of the git repository that we want to split up.

$ cd /path/to/MyProj

I like to remove the origin remote so I don't accidentally push something to origin. This allows me to always start over if I mess something up.

$ git remote rm origin

Now we can split ProjA and ProjB into their own subtrees. We're going to use the -b argument, which tells Git to create a new branch for the split subtree with its own complete history.

$ git subtree split -P relative/folder/for/ProjA -b ProjA
$ git subtree split -P relative/folder/for/ProjB -b ProjB
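
Before going any further, it's worth a quick sanity check that the new branches exist and carry their own history (ProjA and ProjB are the branch names we just created):

$ git branch
$ git log --oneline ProjA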

For me the easiest thing to do at this point is to create a new empty Git repository for ProjA and ProjB where I can fetch the new branch, add my new remote repository, and then push it to the origin remote's master branch. This presupposes that you've created new empty remote repositories for ProjA and ProjB.

Here I'm going to create the new local repository for ProjA as a sibling folder to the original project. Creating the new ProjB repository is the exact same process.

$ cd ..
$ mkdir ProjARepo
$ cd ProjARepo

Before we do anything with the ProjA subtree, we need to initialize our new empty Git repository.

$ git init

Now that we have an empty Git repository, we can fetch the ProjA branch from the original MyProj repository.

$ git fetch ../MyProj ProjA
$ git checkout -b master FETCH_HEAD
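
It's worth confirming that the full ProjA history came across before you push anything:

$ git log --oneline --graph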

The last thing we have to do is add the origin remote for the repository and push our changes to its master branch.

$ git remote add origin git@github.com:ProjA.git
$ git push -u origin master

And there you have it. You now have separate repositories for ProjA and ProjB. At this point you can remove them from MyProj, or remove MyProj altogether if ProjA and ProjB were the only things in the original repository.
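
If you decide to keep MyProj around, removing the now split-out folders is just a normal commit (using the same placeholder paths as above):

$ cd /path/to/MyProj
$ git rm -r relative/folder/for/ProjA relative/folder/for/ProjB
$ git commit -m "Move ProjA and ProjB into their own repositories"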

Monday, November 18, 2013

What's in a title?

Throughout my career I've seen a common detrimental pattern that I think needs some clarification. That pattern is an engineer who has great technical ability focusing on getting a specific title in order to exert more influence in the organization. They seek a specific title in order to justify their place in an organization or amongst a specific peer group.

In essence, they're attempting to use a title to define who they are rather than to reflect who they have been. What I mean by that is that they need to already be meeting the expectations of a title before they actually earn that title.

Anytime you interact with someone else professionally you need a common understanding of what a reasonable expectation is of that person's capabilities. Many times this starts with the person's role in the organization, which is defined by their title. A person's title sets the bar for what a reasonable level of expectation of duty or performance is.

Because of this, many young engineers seek specific titles in order to change or expand their role within the organization. Often they believe their title is a limiting factor in their ability to influence decisions being made on their team or in their organization. They believe that, if they only had a different title, they would have more influence or more ability to set direction within an organization.

A person's title is meant to reflect their current capabilities and their current sphere of influence. In order for them to be successful, their title must be a reflection of who they have been, not who they are working to become. That's why focusing on achieving a title to provide that clarity is actually detrimental to their success.

The concrete bar for a titled position will be different from organization to organization and business unit to business unit, but each titled position should have a general bar associated with it. That general bar should be determined by the characteristics of the position, the ability of the individual, and the influence the individual should be able to exert amongst their peers and those who are senior to them.

Characteristics Of The Position

Each position has ideal characteristics associated with it, the depth of which is determined by the seniority of the position. Those characteristics define expectations around communication, self management, team interaction, the ability to manage up, and so on.

Ability Of The Individual

Every engineer should be able to implement solutions using industry patterns and best practices. Every engineer should be able to grasp what's needed in order to validate their solutions. The more senior an engineer is, the higher the expectations should be that they can identify and implement those patterns and practices, design and architect the correct solutions, and understand and implement validation of those solutions at the right level.

Influence

Each position has a level of expectation around what their sphere of influence should be. The more senior the position the broader the sphere of influence within the team, the organization, and the business unit.

A title case study: Senior Engineer

Common misconceptions of what it means to be a senior engineer

Being a senior engineer has a general bar associated with it. A common misconception I've seen in our industry associated with the senior title is that it's based on the number of years a person has been in the industry. There are a lot of engineers who believe that after they put in N years they're automatically a senior. Another common misconception is that the senior designation is based mostly on technical ability. I've seen countless engineers who believe that just because they're technically more proficient than their peers, and often as technically proficient as other seniors, they also deserve a senior title.

Why being a senior isn't only about how much time you've put in

Time doesn't provide any accurate assessment of where a person is in achieving the characteristics of their position. Time alone doesn't make you a better communicator. It doesn't teach you how to be effective on a team. It doesn't provide you the discipline necessary to be a good self manager. Time doesn't make you more or less successful in managing up.

Time can be both helpful and detrimental in building the characteristics needed to be successful at the next level. For example, time can help you learn to communicate better by giving you more opportunity to practice your communication skills in different scenarios with different audiences. It can also give you more opportunity to reinforce the bad communication habits that you've been using your entire career.

Time, if used wisely, can be a real opportunity to learn, grow, and start practicing performing at the next level.

Why being a senior isn't only about technical proficiency

Technical proficiency is about a person's ability to solve the problems they face as an individual contributor. These problems may be domain specific, specific to the technology stack they're working on, or general to their industry. What technical proficiency doesn't measure is a person's ability to influence their peers, their seniors, or those they're senior to. It doesn't provide a measure of how well the person inspires others or communicates. It doesn't provide a measure of their understanding of the business or their ability to manage trade-offs or make tough decisions.

What it really means to be a senior

So what does it mean to be a senior?

It means you've grown in the characteristics necessary to make you successful in your current role. You've looked for and taken advantage of opportunities presented to you to practice working with people at all levels within your organization. You've practiced communicating differently and effectively depending on your audience. It means that you're able to manage your time well and not get rat-holed in a specific problem or sidetracked by the next shiny object.

It means being technically proficient. Being able to identify best practices and patterns in the industry. Being able to implement those patterns and practices yourself, but also being able to let someone else implement them differently than you would have. It means being able to understand that there is a difference between the minimum viable product and the ideal product with all the bells and whistles.

It means being able to understand your organization's landscape. You have to be able to identify the key influencers and build alliances with them. It means being able to become one of those key influencers. It means being able to get people to want to follow you without you having to ask or tell them to follow you.

Monday, November 11, 2013

Putting your Software Development Lifecycle on a diet

I was re-reading Mary Poppendieck's Lean Programming essay and I thought I would take the opportunity, while it was fresh in my mind, to walk through a few points that really stuck out to me and that are commonly misunderstood. I've often seen developers who want to write lean software struggle when trying to apply OOP principles like abstraction.


Lean Rule #3: Maximize Flow (Drive Down Development Time)
The basic premise of iterative development is that small but complete portions of a system are designed and delivered throughout the development cycle, with each iteration adding an additional set of features.


In Lean, abstractions aren't bad. But they're a by-product of building small, complete portions of a system. In one iteration you build a complete feature. At this point in the process the YAGNI principle (You Aren't Going To Need It) is applied to any abstractions: there isn't a business case which says that a feature is incomplete without the abstraction. At some later date (either right after the first iteration or some other point), in another iteration, you build a feature that brings to light, during design, analysis, or some other stage, that an abstraction is necessary in order to apply the DRY (Don't Repeat Yourself) and YAGNI principles. In order to build the "complete" second feature it becomes a requirement to refactor and add the abstraction layer. At this point you only add things to the abstraction which are 100% used and required by the first and second iteration features. Adding any additional functionality or abstraction would be a violation of YAGNI. The key Lean practice of refactoring becomes the driving force for the abstraction during the second iteration.


Lean Rule #4: Pull from Demand (Decide as Late as Possible)
Software systems should be designed to respond to change, not predict it.


In the context of abstractions, building an abstraction before you have at least two uses is an attempt to predict change. When you only have one required use case for something you don't have enough information to make a decision about how the second, third, fourth, or Nth use case will use the abstraction. In my experience I would actually argue that you don't really have enough information to abstract until you have a third case, but definitely not before you have a second.

Any abstraction that isn't built as a direct result of refactoring is either built to guide use of the functionality or feature (which goes against the Lean principle of allowing requirements to change) or to anticipate future use. Anticipating future use is tantamount to trying to predict change, which also means you're putting a limit on how the requirements can change. The ability to constantly change requirements is something that Lean values tremendously.


Lean Rule #8: Abolish Local Optimization (Sub-Optimized Measurements are the Enemy)
“Do it Right” requires that we provide for change.


Lean Programming employs two key techniques that make change easy. Just as Lean Manufacturing builds tests into the process so as to detect when the process is broken, Lean Programming builds tests into the development process in order to ensure that changes don't inadvertently break the code. In fact, the best approach is to write the tests first, and then write the code. An excellent unit and regression testing capability is the best way to encourage change late in the development process.


The second technique for allowing change to happen late in development is refactoring, or improving the design of existing software in a controlled and rapid manner. When refactoring is an accepted practice, early designs can focus on the issue at hand rather than speculate as to what additional design elements will be needed. As the additional features are actually added, refactoring provides a new, simplified design to handle the new reality. When refactoring is a part of the process, we reduce speculation as to what will be needed in the future by making it easy to accommodate the future if and when it becomes the present.


This last rule really ties #3 and #4 together. If we have tests for the functionality and regression, then we have the ability to change the system at will (including adding abstractions) while only having to refactor the tests for the functionality that's changing (adding an abstraction is a change). It allows you to be explicit about the change and the need for the change. If you use TDD, you're ensured not only that your functionality works as expected after the refactor, but also that the system is interacted with as expected. The second technique mentioned is really the key to the guidance on when and how to add abstractions. The sentence "As the additional features are actually added, refactoring provides a new, simplified design to handle the new reality." becomes the key takeaway.

Hopefully this helps you feel good about putting off adding an abstraction to your system until you actually have a need for that abstraction. Doing so will allow you to write software that is more adaptable to change.

Monday, November 4, 2013

I finally understand the why of ChromeOS

When I attended Google I/O in May 2013 I had no expectation that I'd be receiving a Chromebook Pixel. I had thoughts (hopes) that they'd announce Android Key Lime Pie (now known to be called KitKat) with a new tablet. My original Samsung Galaxy Tab 10.1 had been trucking along well since July of 2011 with heavy use. I commute 1-1.5 hours each way on the bus to work and I use that time mostly to read Kindle books but also to catch up on blogs, email, and the internet as a whole. It's pretty safe to say that I used my Galaxy Tab 10.1 for 3-5 hours a day, every day.

I feel a little privileged saying I expected Google to give away free tablets, and I probably was being a little privileged. But their history had been to give away free phones and tablets at their events (especially Google I/O). Much to my disappointment, Google didn't announce Android KitKat and didn't announce a new tablet. But much to my surprise, they did announce that they were giving away Chromebook Pixels to everyone in attendance.

I was super curious about this because, frankly, I've been really skeptical of Chrome OS since it was first announced. I just couldn't imagine a world in which Chrome OS and Android could live in the same ecosystem. At the time I couldn't understand why Google wasn't putting more of an effort into bringing Android to more traditional devices instead of creating a completely new operating system.

Android vs. Chrome OS

After having my Pixel for 6 months I believe I finally understand why Google created Chrome OS as opposed to trying to bring Android to more traditional devices. This realization only came after using my Pixel as my everyday (home) laptop for the last 6 months. The problems that Android is trying to solve are very different from the problems that need to be solved on more traditional, less transient devices.

In my opinion, mobile devices are trying to (primarily) solve these problems:
  • Hard-line to the rest of the world (phone, email, text)
  • Critical content at a glance (calendar, contacts, events, places, transportation, activities).
  • Activity based content with a focus on location and movement (maps, running/biking/etc)
  • Bite-sized consumable content (Facebook, Twitter, Pinterest, etc)
  • Content consumable in the in-between time (games, apps, news, weather, etc). 
  • Media consumption (music, movies, books, etc) typically in a short form. 

In my opinion, traditional devices are trying to (primarily) solve these problems:
  • Communication (email, instant message)
  • Research and Planning
  • Work (documents, spreadsheets, presentations, blogging/writing, development, shared content access, etc)
  • Media consumption (music, movies, books, etc) typically in long form.

There are overlaps between the two, but for the most part traditional devices tend to be the canonical source of your data whereas mobile devices tend to be a transient source of your data. For instance, your traditional device may have all your email on it whereas your mobile device may have only your recent email on it. Your traditional device may have all your music and movies whereas your mobile device may have a subset of them.

Once I started to look at Chrome OS as a replacement for more traditional devices like desktops and laptops, and less as a replacement for your mobile device, Chrome OS started to make a lot more sense. Chrome OS actually seems more like Google's attempt to redefine traditional devices. The big change isn't in how you use the device but in how the device interacts with your content.

Currently, traditional devices are the place where you create, store, and consume your content. Your mobile device tends to be a place where your content is transiently stored, occasionally created, and mostly consumed. This doesn't lend itself well to an environment where we're constantly switching between our traditional device and our mobile device. We're being forced to be aware of the context switch and, in some cases, take active action before and after the context switch. Currently, the hand-off between traditional devices and mobile is clunky and awkward.

Google, Apple, Microsoft, and others have all been making strides to make it easier to store your content outside of your traditional device in The Cloud (I still cringe saying that). But that's only part of the solution. It's a solution that's been grafted onto an existing ecosystem that wasn't designed to solve that problem. So it sticks out and forces you to at least be partially aware of its existence. Chrome and Firefox are making good strides to blur the lines with their bookmark and tab syncing across devices, but again that's only a partial solution.

The real solution must involve changing the culture around traditional devices. It must involve freeing the user from thoughts or knowledge of where their content is and on what devices that content is available.  And that culture change needs to happen on traditional devices first.

I think that's the why of Chrome OS.

Are they there yet? I think that depends on who you are. If you're the average consumer who blogs, surfs the web, possibly uses Netflix, Rdio, Hulu, Office/Google Docs, Facebook, LinkedIn, Twitter, and so on, then yes, I think you'd be able to replace your traditional device with a Chromebook.

If you're a power user, a developer, or someone who relies on open standards, then no, Chrome OS won't work out of the box for you. I store a lot of my files on a shared WebDAV drive encrypted using OpenSSL. I use KeePass to store my important account info. There's no elegant solution to getting those three things working out of the box on Chrome OS. Thankfully David Schneider created crouton, a set of scripts for Chromium OS that allows you to create chroots. With crouton I can get a full Ubuntu Linux chroot running on my Chromebook Pixel. What that means is that I can write software, access my WebDAV/encrypted/KeePass files, and do everything I'm used to doing on a traditional device. But there are two critical failures with this. The first is that I have to keep my Chromebook in developer mode, which sucks. The second, and most important, is that I have to go back to the traditional paradigm of interacting with a traditional device. To me that defeats the purpose of Chrome OS and turns it into just another pretty user experience.
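
For the curious, getting a basic Ubuntu chroot going with crouton is only a couple of commands once the Chromebook is in developer mode. This is just a sketch; the target name is one example and crouton's documentation lists the rest:

$ sudo sh ~/Downloads/crouton -t cli-extra
$ sudo enter-chroot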

Only time will tell if Google can solve this problem for both the average consumer and the techie. I do think it's possible to solve if you really understand the problem.

Thursday, October 31, 2013

Hello, it's nice to meet you

Well hello there inter-nets. I'm Paul, just another technology guy.

I don't usually like talking about myself much. But seeing as this is a new blog, I thought I would take this opportunity to tell you a little about myself, my relationship to technology, and my brand affinity. I think the last one is especially important because this is my first post here and you're still getting to know me. I want you to understand the basis of how I come to my conclusions instead of having you read into or between the lines.

I also think I need to take a minute to explain that this blog represents my own personal views and is in no way affiliated with my employer, their brands, or their views. If I say something stupid, it's on me.

I spent the first 28 years of my life on the east coast in the D.C. area, which means I'm a little blunt, a lot impatient, and I tend to speak and then think. Actually, writing is a good cure for that last one because it forces me to examine my thoughts before I make them public. At 28 I moved to Seattle; I've been here for the last 8 years and have loved every minute of it. I'm pretty sure that the laid-back atmosphere of the west coast has softened me up a bit. But I still stick out like a sore thumb.

I'm not your classic nerd or geek. I like sports, I'm in shape, and I don't play a lot of video games; especially not first person shooters. I built my first computer around age 10 (around 1987) and have been in love with them ever since.

I have no particular brand affinity, only preferences, which change frequently. I'm not a Microsoft/Apple/Google fan-boy and don't wait in line for products in general, though I'll wait in looooooooooooooong lines for movies.

I've been a professional software engineer on Unix, Linux, Windows, OS X, iOS, and Android for the past 14+ years and am fluent in the C, C++, Objective-C, C#, Java, Ruby, Python, and PHP languages; iOS, Android, Rails, and .NET frameworks; Unix, Windows, and OS X operating systems.  I use Windows and OS X at work and Linux (Slackware mostly but also Debian and Ubuntu) at home.

I tend to make my technology choices based on what I want to do and not what I'm most familiar with doing. I tend to favor Linux at home for my critical software (email, calendar, shared data access, etc.) because it's more open-standards based, which means I can find clients to access my data on my other operating systems, it's free, and it's extremely easy to maintain and administer after you've got it set up and running. I can honestly say I've had to reboot my Linux server(s) less than 20 times in the past 14 years, and the majority of those reboots were for kernel updates.

I currently use an Android phone (GS3) but have also used the Palm Treo, the original iPhone, iPhone 3G, iPhone 3GS, and Palm Pre as my primary phones over the past 6 years.  Sorry Windows Phone, I hate the Windows Phone UI so it's a non-starter for me.

I've written high scale/high performance/low latency enterprise server side software, desktop software, and mobile software throughout my career. I have written hundreds of apps for iOS and Android professionally for several different companies over my career and a few for Windows Phone 7.

Everyone has their biases and now hopefully you know enough about me to filter mine out. :)