Personally, 2014 was a good year for me. I switched jobs mid-year and have been working on enjoyable projects with really, really smart people. My wife and I traveled to Budapest and Prague, which was absolutely amazing, as well as San Francisco, Portland, Iowa, and Richmond (VA). We got to see one of my lovely cousins get married and meet the new 2nd cousins. We spent Thanksgiving with my sister-in-law and got to meet our new and very beautiful niece Eleanor. I've kept up with blogging once a week which, honestly, I wasn't sure I would actually be able to do. But the best thing that happened in 2014 for me was that my wife and I found out we are expecting our first child. I'm both terrified and excited at the thought of being a dad, but I can probably predict the future and tell you that having a child is going to be the highlight of 2015.
2014 was an interesting year for technology, and a lot of new products were announced and released. But from the perspective of what affects my everyday life most, I've been most interested in what's been happening in the streaming media segment.
While Apple has had a media streaming solution for several years, their competitors (other than Roku) have struggled to come up with any interesting alternatives. There have been many solutions over the years for home media centers, like Windows Media Center, XBMC on a Raspberry Pi (Raspbmc), or the Xbox, to name a few. But in my opinion, none of these solutions other than Apple TV or Roku have had the teeth to take off in the mass market. And, unfortunately, without mass adoption the quality of content is not very good.
2014 saw two entries into the streaming market which I believe will help drive competition and innovation in a category of software and hardware that has been pretty stagnant. Amazon announced its Fire TV, which was received very well, and Google announced the Nexus Player, which seems to be a legitimate reboot of their efforts to get into the streaming TV market. Amazon also announced the Fire TV Stick.
We've got a Roku, a Chromecast, and a Fire TV Stick in our household. We don't have an Apple TV because we don't own any Apple mobile products, Apple is a pretty closed ecosystem, and I really, really don't want to encourage that. That's not to say they don't make beautiful products, because they do.
The Roku is a simple and easy to use device with a pretty decent interface. But to me something just doesn't feel right. When I use the Roku I put down my everyday tech (phone/tablet) and pick up their remote and use their software. I'm aware that I'm using something that isn't customized to me. Their interface doesn't feel like "home" to me like my own tech does. The Roku feels slightly foreign. Also, streaming media from my phone or tablet directly to the Roku has been clunky at best.
Up until December the Chromecast has been my favorite device of 2014. I really like how easy it is to use. I can just open the YouTube app on my phone and start queuing up clips. Or I can open Netflix, WatchESPN, Comedy Central, or HBO Go, find what I want to watch, and then just fling it to my TV. Really, my biggest complaint with the Chromecast is that developers have to integrate it directly into their mobile apps, and apps have been slow to adopt this additional API.
I say that the Chromecast has been my favorite device up until December. That's because I got my Fire TV Stick this month and so far it's been pretty incredible. The interface is great and very intuitive if you're already familiar with Amazon Instant Video. I downloaded the free remote app and the voice search is very very accurate and fast. One of the things I really liked about the Roku is that the apps are right on the device and the Fire TV Stick followed a similar path. It has all of the apps I use today with my Chromecast except HBO Go. But I can even stream HBO Go directly from my phone to the Fire TV Stick using display mirroring.
While display mirroring is a battery drain on your phone/tablet, it's pretty useful for me. Traditionally I've used it to stream to the Chromecast when an app hasn't implemented the Chromecast API. It's been nice being able to use it with the Fire TV Stick out of the box. I think my biggest complaint is that I have to enable it on the Fire TV Stick each time I want to mirror.
I have big hopes that 2015 will bring more innovation to streaming media.
Monday, December 22, 2014
Monday, December 15, 2014
DIY Xen: Shrinking a Linux Disk
If you've been a reader of my blog for a while you've probably picked up two important details about me. First, I'm a huge fan of open source. Second, I'm a big DIY'er when it comes to software and services. Not that they have to, but I think the two tend to go hand in hand. My guess as to why is that most people who gravitate to open source seem to do so out of a desire to learn.
I've been running my own mail and web server(s) for well over a decade now. Not because I think I can do it better than what's out there, but because I was truly interested in understanding the nitty-gritty of what makes the internet run. Historically I had always done this on a single Slackware Linux box. This served its purpose but did come with a few side effects: I was using my email service as my primary email, my RSS aggregator as my primary source for news, and my CalDAV server as my primary calendar.
One big problem I started to run into with my single-server setup was that every so often, while tinkering with some new software I wanted to learn about, I'd inadvertently take down my server for a bit. Which basically meant I was dead in the water in terms of my email, calendar, and RSS feeds. So I decided to let my curiosity about "The Cloud" turn into working knowledge by setting up my own Xen server.
My initial impression (which still holds true today) is that Xen is awesome. Within just a few hours I was able to get Xen running on my hand-built server (16GB RAM, 700GB hard drive, Intel quad-core i5). Slackware has never let me down, so I decided to stick with it for my guest OSes, and I set up separate servers for my production services and my tinkering. It's been great.
One problem I ran into while I was trying to find the optimal setup was shrinking a Linux disk that I had made too big to start with. So I thought I would document the process in case anyone out there runs into the same issue.
- Shut down your existing Linux virtual machine (VM).
- Create and attach a new, smaller storage device in Xen.
- Start the Linux VM.
- Create a partition on the new drive.
- Create a filesystem on the new partition.
- Create a temporary mount point so that you can copy the existing partition over.
- Mount the smaller partition you just created.
- Copy the contents of the existing partition that you want to shrink onto the new, smaller partition.
- After the copy has completed, unmount the new, smaller partition.
- Shut down your Linux VM.
- Detach the original, larger drive from the VM in Xen.
- Restart the Linux VM and verify everything was copied and is working as expected.
- Delete the no-longer-needed storage device in Xen.
$ sudo /sbin/fdisk /dev/sdX (where sdX is the new drive, i.e. sdb, sdc, etc.)
$ sudo /sbin/mkfs.ext4 /dev/sdX1
$ sudo mkdir /temp_mount
$ sudo mount /dev/sdX1 /temp_mount
$ sudo cp -ax /old/drive/. /temp_mount/ (copying "." instead of "*" picks up hidden files at the top level; -a preserves permissions and ownership, -x stays on one filesystem)
$ sudo umount /temp_mount
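For the attach/detach steps on the Xen side, the change boils down to editing the guest's disk configuration. The guest config path, image file names, and sizes below are hypothetical; this is just a sketch of the before and after:

```
# /etc/xen/guest.cfg (hypothetical paths and sizes)

# Before: original oversized disk as xvda, new smaller disk attached as xvdb
disk = [
    'file:/srv/xen/guest-700G.img,xvda,w',
    'file:/srv/xen/guest-100G.img,xvdb,w',
]

# After the copy: only the new, smaller disk, promoted to xvda
disk = [
    'file:/srv/xen/guest-100G.img,xvda,w',
]
```

Because the new disk changes device names (xvdb becomes xvda), double-check the guest's /etc/fstab and bootloader configuration if they reference devices by name rather than by UUID or label.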
Monday, December 8, 2014
Willing, capable, and nearby
As I was getting ready to graduate college and enter the career world my grandmother gave me possibly the best piece of advice anyone has ever given me. She told me to always remember that there's someone that lives nearby that's just as capable and willing to work for less.
This piece of advice may sound cold or negative on the surface, but in reality it was meant to bring perspective, inspire humility, and make me ask myself why I'm doing what I'm doing. My grandmother, who worked 40+ years at The Washington Post, recognized that given enough time we all feel unappreciated at work. She also realized that when we feel unappreciated we tend to over-inflate our value and contribution.
Part of her point is that money is a means to an end. If money becomes an end in and of itself and your only motivation for feeling appreciated then you're going to be disappointed. Maybe it's getting less of a raise or bonus than expected or finding out that your co-worker, who works half as hard as you, makes more than you. Whatever the reason, relying on money to provide motivation at work will eventually fail and you'll find yourself unsatisfied and unfulfilled.
So what's the key then? The folks over at RSA have a great 10-minute video explaining how research has shown that money isn't a good enough motivator. That's not to say that money isn't important; it's just that there's a point where money as a motivator peaks. Once people make enough money that they're not constantly worrying about it, there are three main motivators: autonomy, mastery, and purpose.
I really think that's the underlying point my grandmother was trying to make all those years ago. I don't know that she'd have been able to name those three areas specifically but I am absolutely sure that she understood that you need a combination of those three to feel appreciated and valued and to be motivated in your career.
I believe that she wanted me to understand that if I didn't search out and understand what it was about my job that motivated me then I would never really be happy in my career. For me this has translated into asking myself the question of whether or not I would do my job outside of work in my spare time.
When I was an individual contributor the answer to this question for me was really simple. It was a resounding yes. I would work 9 - 12 hours a day writing software at work to come home and write more software for personal use for another 4 - 6 hours. Writing software was, and still is, a hobby. It's a way I relax. It's something that helps me grow and keep my mind sharp. I really like solving problems and I like adding utility.
But once I entered middle management I had to ask myself: "What motivates me now?" I think the answer to that question is actually one of the same reasons I started this blog. I really like investing in people. I enjoy mentoring and helping others grow. Not because I believe I know more than they do or that I have all the answers. Actually, it's quite the opposite: one thing I have learned in my career is that I don't know it all and there is always more I can learn. What motivates me is going through the process of learning with someone else.
Monday, December 1, 2014
Why is getting your data on a new phone so much work?
Recently my wife upgraded her phone after finishing her two year contract with our mobile provider. She transitioned between phones on the same carrier made by the same manufacturer.
For some context, my wife's primary email comes from a standard IMAP server. She gets her calendars from a standard CalDAV enabled server. She gets her contacts from a standard CardDAV enabled server. She downloads her music and files from a standard WebDAV server. She installs her applications from two app stores, Google Play and Amazon Appstore.
It took us over 4 hours to transition everything from her old phone to her new phone. Why in 2014 is this still so cumbersome?
What transferred/setup without any work
- The applications installed from the Google Play Store.
- Gmail.
- Home screen background image.
What we had to manually transfer/setup
- Applications that were NOT installed from Google Play Store.
- IMAP email.
- CalDAV calendars.
- CardDAV contacts.
- Lock screen background image.
- Phone PIN.
- Phone home screens.
- Widgets.
- Application Shortcuts.
- Alarms.
- Application Settings.
- Her camera pictures.
- Her downloaded music.
- Her downloaded files.
- 3rd party application data (Instagram, Facebook, Pinterest, etc.).
There's nothing on the second list that couldn't have been automatically transferred. I'm not sure what the right solution is to this problem, but I do know this shouldn't be as much work as it was.
As technologists we put way too much on the shoulders of our users. We expect them to do the heavy lifting for things that we can do easily through software. I think part of this problem is that we, as an industry, don't think enough about the import/export scenarios for our mobile products. And that's sad given that most people are on two-year contracts with their carriers and have an opportunity to upgrade their phones if they can afford it.
In my opinion this is a real opportunity lost.
Monday, November 24, 2014
Ubiquity of data
I've been thinking a lot lately about ubiquity of data, or really the lack of it in today's modern technology.
In the 90's and 2000's most of our information lived on our primary machines, whether that was a desktop or a laptop. As an industry we spent a lot of time and resources trying to make that information portable. In the 80's it was the floppy disk. In the early/mid 90's it was the Iomega Zip drive. In the late 90's/early 2000's it was the recordable compact disc. In the mid-to-late 2000's it was flash memory in the form of a USB stick. All of these technologies focused on one thing: making it easier to move data from one place to another.
In the late 2000's/early 2010's we started to talk about shifting our data to the cloud. The thought was that if we put our data in services like Amazon S3, Amazon Cloud Drive, Dropbox, Box, Microsoft OneDrive, and so on, our information would be ubiquitous. In a way we were right, in that we can now access that information in the cloud from anywhere. But fundamentally we're still thinking about and interacting with data as something that we move from place to place.
I think as an industry we need to stop thinking about data as a thing that we move from place to place and instead solve the problems that prevent us from accessing our data from anywhere. So what are the problems that we need to solve to make this a reality? This list is by no means exhaustive, but it's where I think we need to start.
- Federated Identity Management.
- Data Access Standards.
- Networks that do not model their business on whether you're a data consumer or provider.
Federated Identity Management
In the world we live in today, each service (i.e., each company) owns authenticating who we are. That is, they keep a proprietary set of information about us that they use to test us. If we pass the test, they consider us authenticated. Most of these tests come in the form of two questions: what's your name and what's your password.
The problem with this is that it takes identity authentication out of the hands of those being identified and puts it into the hands of those wanting to authenticate. There's nothing inherently wrong with wanting/needing third-party validation. The problem comes when we have hundreds of places we need to authenticate with, each with its own proprietary method of authentication. Not to mention that it passes the buck to the user to remember how each one of these services authenticates them.
Tim Bray has a good discussion on federation that you should read if you're interested in the deeper discussion of the problems of identity federation.
Data Access Standards
We need data access standards that any group (for-profit or not) or individual can implement on top of their data, allowing any other system (using the federated identity management) to interact with it. These standards would define CRUD operations (create, retrieve, update, and delete) in such a way that any other system can interact with the data on that system on the user's behalf.
We have a good start to this with standards like OPML, RSS, WebDAV, CalDAV, and CardDAV, but these standards aren't cohesive. On top of that, we don't have a real way to query a service to see what types of CRUD operations it supports. If services had the ability to state what they serve, then clients could interact with them more intelligently. Currently we put the onus on the user to know what a service offers.
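As a rough sketch of what capability discovery could look like: WebDAV-style servers already advertise some of this through the HTTP OPTIONS response (the Allow header lists the methods a resource accepts), and a client could translate that into a CRUD capability check. The mapping below is my own illustrative assumption, not part of any standard:

```python
# Sketch: derive a service's supported CRUD operations from the kind of
# "Allow" header a WebDAV-style server returns for an OPTIONS request.
# The HTTP-method-to-CRUD mapping here is an assumption for illustration.

HTTP_TO_CRUD = {
    "GET": "retrieve",
    "PUT": "create",       # PUT on a new resource creates it
    "POST": "create",
    "PATCH": "update",
    "PROPPATCH": "update",  # WebDAV property update
    "DELETE": "delete",
}

def crud_capabilities(allow_header: str) -> set:
    """Map an OPTIONS 'Allow' header value (e.g. 'GET, PUT, DELETE')
    to the set of CRUD operations the service appears to support."""
    methods = {m.strip().upper() for m in allow_header.split(",") if m.strip()}
    return {HTTP_TO_CRUD[m] for m in methods if m in HTTP_TO_CRUD}

# A read-only service versus a full read/write one:
print(crud_capabilities("GET, OPTIONS, HEAD"))           # retrieve only
print(crud_capabilities("GET, PUT, DELETE, PROPPATCH"))  # full CRUD
```

With something like this, a client could decide up front whether a service supports, say, deleting data, instead of making the user find out by trial and error.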
Networks that do not model their business on whether you're a data consumer or provider
Right now the people who provide us access to the internet think about us in two categories. The first category I'll call "data consumers" and the second category I'll call "data providers".
Data consumers have the ability to get things from the internet and put things somewhere else on the internet. But data consumers don't have the ability to provide things to the internet without putting it somewhere else. A good example of this is email. A customer with a standard "data consumer" internet connection cannot run a mail server for two reasons.
First, they get a dynamic IP address from their ISP (internet service provider). This means that the address from which they connect to the internet is always changing. Think about this analogy to a dynamic IP address: what if your home address was constantly changing, either daily, weekly, or monthly? It would be impossible for anyone to contact you via mail reliably, because any time your address changed, mail sent to the previous address would be delivered to the wrong house. It's the same way on the internet. If you want people to be able to talk to you, you need a static address for them to contact you.
Second, ISPs block the ports necessary for others to talk to you. Even if you had a static address, often your ISP blocks standard email ports (25, 993, 143, 587, and 465) because they're trying to stop spammers from easily distributing their spam. But as anyone with an email address knows, the spammers are doing just fine even with the ISPs not allowing incoming connections. So I don't buy this as a valid reason to block these ports.
Data providers have all the same access as data consumers, except they pay more to have static IP addresses and to not have the ports blocked. Notice anything wrong with this situation? The ability to fully participate in the internet is based on how much you pay your ISP. ISPs hide behind the fallacy that they're trying to protect you in order to charge you more for the ability to truly participate on the internet. Does that extra money you pay actually protect you or anyone else on the internet better? No. Most ISPs will probably tell you that you're also paying for more reliability. But you're running on the same system as the data consumers, so I don't buy that argument either.
I truly believe that we're not quite moving in the right direction when it comes to solving these problems. Until we do, you will constantly be battling moving your data from one place to the next when any new interesting service comes into existence.
Monday, November 17, 2014
Transitioning to a professional software development role: part 3
In my first post in this series, Transitioning to a professional software development role: part 1, I started to outline some of the gaps I've seen in people's preparation for entering a career in the software development industry. I started off by focusing on what software development is not about.
In my second post in this series, Transitioning to a professional software development role: part 2, I took a look at what software development IS about. In the final post in this series I'd like to talk about the tools available that make us more efficient.
Being a good software developer means understanding how to apply agile

For a long time developing software was very much like developing a product on an assembly line. Assembly lines are very rigid and not well suited to respond to change. They run on the assumption that what happens upstream in the assembly line can be built upon and won't change. The moment change is introduced most of the product on the assembly line is ruined and must be thrown away.
Software's assembly line is called Waterfall. Over time we've come to understand Waterfall's major flaw: it's very rigid in the face of change. That rigidity was okay when the primary delivery mechanism for software was the compact disc. But as software has grown to allow near real-time delivery of features and functionality, Waterfall's rigidity has become a hindrance to delivering high-quality software in smaller but more frequent updates.
That's where Agile comes in. Agile software development is about being able to respond to change in a rapid manner. It teaches us to think about software not as a monolith but as a group of features that can be delivered in small chunks frequently over time.
I wrote a post several months ago called Software Craftsmanship: Project Workflow. If you're new to agile it's a good introduction to the anatomy of a project and what I've found useful. While the project workflow I've outlined isn't something you'll see in official Agile books, it is something that I have found extremely useful.
The concept of Lean Manufacturing was invented at Toyota. The primary goal was to reduce waste in the manufacturing cycle. This was done by re-thinking the manufacturing process to identify and remove waste. On example of waste could is parts sitting in a queue waiting to be processed. Toyota was able to show that by re-engineering their manufacturing process they could improve quality, efficiency, and overall satisfaction of customers.
The concepts behind Lean Manufacturing can also be applied to software development. Unfortunately these concepts often are applied incorrectly and have lead to many misconceptions and misunderstandings of Lean Software development. I wrote a post several months ago which outlined common misunderstandings in applying Lean to software development.
As a professional software developer it's important to understand Lean and how to apply it to developing software.
In my second post in this series, Transitioning to a professional software development role: part 2, I took a look at what software development IS about. In the final post in this series I'd like to talk about the tools available that make us more efficient.
Being a good software developer means understanding how to apply agile
For a long time, developing software was much like building a product on an assembly line. Assembly lines are very rigid and not well suited to responding to change. They run on the assumption that what happens upstream in the assembly line can be built upon and won't change. The moment change is introduced, most of the product on the line is ruined and must be thrown away.
Software's assembly line is called Waterfall. Over time we've come to understand Waterfall's major flaw: it's very rigid in the face of change. That rigidity was acceptable when the primary delivery mechanism for software was the compact disc. But as software has grown to allow near-real-time delivery of features and functionality, Waterfall's rigidity has become a hindrance to delivering high-quality software in smaller but more frequent updates.
That's where Agile comes in. Agile software development is about being able to respond to change rapidly. It teaches us to think about software less monolithically and instead as a group of features that can be delivered in small chunks, frequently, over time.
I wrote a post several months ago called Software Craftsmanship: Project Workflow. If you're new to agile, it's a good introduction to the anatomy of a project and what I've found useful. While the project workflow I've outlined isn't something you'll see in official Agile books, it is something that I have found extremely useful.
Being a good software developer means understanding how to use Lean
The concept of Lean Manufacturing was invented at Toyota. The primary goal was to reduce waste in the manufacturing cycle, which was done by rethinking the manufacturing process to identify and remove waste. One example of waste is parts sitting in a queue waiting to be processed. Toyota was able to show that by re-engineering its manufacturing process it could improve quality, efficiency, and overall customer satisfaction.
The concepts behind Lean Manufacturing can also be applied to software development. Unfortunately, these concepts are often applied incorrectly, which has led to many misconceptions and misunderstandings of Lean software development. I wrote a post several months ago which outlined common misunderstandings in applying Lean to software development.
As a professional software developer it's important to understand Lean and how to apply it to developing software.
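To make the idea of queue time as waste concrete, here's a small Python sketch. The numbers and stage names are purely hypothetical; the point is that an item's elapsed time is dominated by waiting between stages, not by the work itself:

```python
# One work item moving through a two-stage process, timestamps in hours.
# The gap between stage1_end and stage2_start is time spent in a queue.
item = {
    "stage1_start": 0, "stage1_end": 2,    # 2h of actual work
    "stage2_start": 14, "stage2_end": 15,  # 1h of actual work
}

work_time = (item["stage1_end"] - item["stage1_start"]) + \
            (item["stage2_end"] - item["stage2_start"])
queue_time = item["stage2_start"] - item["stage1_end"]  # waiting = waste

# Flow efficiency: fraction of elapsed time spent actually working.
efficiency = work_time / (work_time + queue_time)
print(f"{queue_time}h waiting vs {work_time}h working "
      f"({efficiency:.0%} flow efficiency)")
```

In this made-up example the item spends 12 of its 15 hours sitting in a queue. Lean's insight is that attacking that queue time usually improves delivery far more than speeding up the work itself.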
Being a good software developer means understanding how to make trade-offs
The last area I want to briefly cover is understanding how to make trade-offs. As a professional software developer you're going to be asked to make trade-offs all the time. Sometimes they will come in the form of quality (a bad trade-off IMO). Other times they will come in the form of features.
The key to understanding how to make trade-offs is learning to ask a few questions.
- What am I gaining by making this trade-off?
- What do I give up that I would have gotten had the trade-off not been made?
- What downstream effects will this decision have on my long-term strategy or roadmap?
- What additional work will be required later as a result of this trade-off?
The ultimate goal in software development is to provide business value in every part of the process. Understanding how to make trade-offs will help you provide the right business value at each step in the process.
Monday, November 10, 2014
Transitioning to a professional software development role: part 2
In my previous post, Transitioning to a professional software development role: part 1, I began to outline some of the gaps I've seen in people's preparation for entering a career in the software development industry, focusing first on what software development is not about.
In this post I want to take a look at what software development IS about.
Being a good software developer is about understanding data structures
The foundation of a good software developer is an understanding of data structures and object-oriented programming. Data structures like binary trees, hash tables, arrays, and linked lists are core to writing software that is functional, scalable, and efficient.
It's not enough just to understand what the data structures are and how they're used. It's crucial that you also understand WHEN to use them. Using the right data structure in the right place comes with a few benefits. First, it helps others intuitively understand your code, because they'll share your frame of reference. Second, it helps you avoid "having a hammer and making everything a nail" syndrome: when you're learning something new, it's tempting to look for places to apply your new knowledge, often shoehorning it into places it doesn't belong.
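As a simple illustration (my own sketch, not tied to any particular codebase), consider membership testing in Python. A list and a set can both answer "is this value present?", but the choice of structure changes the cost of every lookup:

```python
# A list answers membership by scanning elements one by one: O(n) per lookup.
# A set hashes the value and probes directly: O(1) on average.
emails_list = [f"user{i}@example.com" for i in range(100_000)]
emails_set = set(emails_list)

target = "user99999@example.com"

# Same question, very different amounts of work per call.
assert target in emails_list  # may walk all 100,000 entries
assert target in emails_set   # roughly a single hash probe
```

For a one-off check the list is fine; inside a loop, the hash-based structure is the choice other developers will intuitively expect, which is exactly the "when" that matters.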
Being a good software developer is about being able to estimate your work
I can't stress enough how important this is. Your team, your managers, and your customers are going to rely on you for consistency. They're going to make plans around what you do, and because of this, learning to estimate your work is crucial to helping you and them meet commitments. Estimating your work well also helps you build a regular cadence in what you deliver, which is helpful for your customers.
There are three concepts that really helped me learn to estimate my work well. The first is the Cone of Uncertainty. This concept is helpful because it helps you tease out what you know you don't know, as well as what you don't know you don't know. Understanding the cone of uncertainty helps you remove ambiguity in what you're working on, which in turn helps you better understand the level of effort it will take.
Once you've teased out the uncertainty in your work, you can use Planning Poker to quantify how much work something is. It's important not to tie your poker points to a time scale, as that will tend to skew your pointing exercise. Instead, as you get better at quantifying how much work something is relative to your other work, you'll naturally start to see how much time it takes. For instance, let's say you use the Fibonacci numbers 1, 2, 3, 5, 8, and 13 to quantify your work. Over time, as you get better at pointing your work, you'll also see a trend in how much time certain point values take. Only then can you accurately associate a timescale with your pointing.
The last concept that I've found very helpful in learning to estimate how much work I can do in a given period is tracking my velocity. If you're using Planning Poker to determine how big the chunks of work are, and you're using agile to set a cadence or rhythm for when you deliver your work, then velocity tracking can help you be more predictable about how much work you can deliver in any given sprint. Understanding your velocity helps you set reasonable expectations about what you can deliver, and helps those planning for the future understand what it would take to shorten a project or to make sure a project is on track and will meet its delivery dates.
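As a rough sketch of how these pieces fit together (all numbers here are hypothetical), velocity is just the average of the story points you completed in recent sprints, and a backlog forecast falls out of simple division:

```python
import math

# Hypothetical history: story points completed in the last five sprints.
completed_points = [21, 18, 23, 19, 22]

# Velocity: average points delivered per sprint.
velocity = sum(completed_points) / len(completed_points)  # 20.6

# Forecast: sprints needed to burn down a 103-point backlog at this pace.
backlog_points = 103
sprints_remaining = math.ceil(backlog_points / velocity)

print(f"velocity ~= {velocity:.1f} points/sprint")
print(f"~{sprints_remaining} sprints to finish the backlog")
```

Note the order of operations: points stay abstract while you estimate, and only the observed history converts them into a timescale, which is exactly why tying points to time up front skews the exercise.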
Being a good software developer is about re-use in order to avoid re-inventing the wheel
As newer engineers we want to solve problems that we find interesting and challenging. Often, as we get into the depths of a particular problem space, it becomes evident that we're trying to solve an already-solved problem. At this point we're at a crossroads: we can continue down that path and re-invent the wheel. Often this is the result of both curiosity and mistrust. We're curious about how to solve the problem, or about whether we could solve it better than those who came before us. It also happens when we don't trust that a particular library actually solves the problem at hand, or, because another solution solves a slightly different but compatible problem, we don't trust that our problem is in the same problem space.
This is very detrimental to a project for a few reasons. First, the problem has already been solved, so you're going to waste time solving it again. Second, it's likely that the problem is more nuanced than you're aware of. It's also likely that the people who have already solved the problem have dedicated themselves to it; that is, it's the entirety of their problem domain. This means they're going to be the subject matter experts in the area. Because it's only one part of your overall problem, you won't be able to dedicate the same amount of time to solving it as well.
I would encourage you to first look to see if someone has already solved your problem, either in part or in whole. There are plenty of high-quality open source projects on GitHub and SourceForge, with people who are eager for you to use and incorporate their work into your project.
Being a good software developer is about knowing the limits of your understanding
There are several aspects to understanding the limits of your understanding. One is knowing that knowledge of any particular domain has both a breadth and a depth to it. It is impossible to gain both breadth and depth of understanding in all areas of software development across all subject domains. Because of this, it's important to be aware of where you have breadth of understanding but lack depth, and where you have depth but lack breadth. Over time you'll develop both in a few particular subject areas. But it's important to know that this takes time, theory, and practice; without all three you won't gain both the breadth and the depth.
Knowing the limits of your understanding also involves being able to say you were wrong. There are going to be plenty of times when you thought you had a depth of understanding or breadth of understanding of something only to find out you didn't fully understand or misunderstood the subject. Being able to say you were wrong is the first step to correcting your understanding and being able to build on your new knowledge.