I've been thinking a lot about open source software recently. It's amazing how much of our lives runs on open source software, how little awareness people have of it, and the downstream effect it has on our lives. Are you an Android, iPhone, or Kindle user? Do you use Gmail or Google search? If so, you're using open source software. In fact, Amazon, Google, Samsung, Apple, Netflix, TiVo, Comcast, Wikipedia, WordPress, and many more companies have built core parts of their businesses on open source technologies.
In a way, you not being aware of open source software in your everyday life is a win. It means that open source is overcoming some of the usability problems that plagued the community through most of the 90's and 2000's. The fact that Android is the dominant mobile operating system means that people are learning to build usable graphical user interfaces (GUIs) on top of Linux. It also means that by using the products we love, we're improving products we aren't even aware of, simply because they're built on the same open source technologies.
Yet in another way, you not being aware of open source software in your everyday life is bad. When people aren't aware of the possibilities of what they're using, they don't know what to expect or even demand. Simply using open source software doesn't mean that your favorite app or website or service is interoperable with anything else. To make software work together we also need open standards.
Here's an analogy to make this more tangible. What if, in order to fill up your car with gas, you could only go to your manufacturer's shop, and you couldn't pump gas from a competing manufacturer because the nozzles were different or the cars used different fuel? That would make things very difficult for you, because your manufacturer's decisions and processes would have a much bigger effect on your everyday life. You'd have a much more rigid schedule for getting gas. But because there are agreed-upon ways that gas pumps work, you are able to use any number of competing services. Your manufacturer is free to innovate and make their car better, but you have flexibility in how you consume and use gas. The competitors have to earn your business.
Now let's use a technology example. Do you use Dropbox? What would happen if tomorrow you wanted to move everything in your Dropbox to Amazon's, Apple's, Microsoft's, or Google's cloud? Would you be able to have that stuff migrated automatically just by choosing a new service? Or would you have to do all the work yourself? There are open standards for folder sharing, like WebDAV. There's no reason you couldn't just enter your other service's credentials into Dropbox and click a button to migrate your data, other than Dropbox not wanting to make that easy for you.
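To get a sense of what that migration button would do under the hood, here's a minimal sketch over WebDAV. Everything in it is hypothetical: the server URLs and credential variables are placeholders, and a real migration would also walk the folder tree with PROPFIND; this only moves a single file.

```shell
# Sketch: copy one file from a source WebDAV share to a destination share.
# SRC_BASE, DST_BASE, and the credential variables are placeholders.
SRC_BASE="https://dav.source.example/files"
DST_BASE="https://dav.dest.example/files"

migrate_file() {
  local path="$1" tmp rc
  tmp=$(mktemp)
  # Download from the source share, then upload to the destination share.
  curl -fsS -u "$SRC_USER:$SRC_PASS" "$SRC_BASE/$path" -o "$tmp" &&
    curl -fsS -u "$DST_USER:$DST_PASS" -T "$tmp" "$DST_BASE/$path"
  rc=$?
  rm -f "$tmp"
  return $rc
}
```

Looping that over a folder listing is all the "migrate" button would really be; the hard part isn't technical, it's that no provider is motivated to offer it.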
Right now in the software world, it's like we're driving around in cars that all use gas, but because the nozzles are all different shapes and sizes, we can only get gas from one manufacturer. As a society we seem to have accepted owning the burden of interoperability in our technology. So we choose the services that cause the least pain rather than the services that bring the most joy.
It doesn't have to be this way. If we demanded that our data be migratable, and we were easily able to move from Brand A to Brand B, then both would be forced to earn our trust. Both would have to work hard to provide us with services and support that delight us. Both would have to listen when we say their products are difficult or unintuitive to use.
How do you make these demands, you may be asking? With your wallet. Buy software and services from companies that use open source and open standards. Don't buy from those that don't. Talk to your friends and family about this problem and educate them. Ask a computer-literate friend to help you choose products that will remove some of the burden from you.
Monday, December 29, 2014
Monday, December 22, 2014
2014: The year for streaming media
Personally, 2014 was a good year for me. I switched jobs mid-year and have been working on enjoyable projects with really, really smart people. My wife and I traveled to Budapest and Prague, which was absolutely amazing, as well as to San Francisco, Portland, Iowa, and Richmond (VA). We got to see one of my lovely cousins get married and meet the new 2nd cousins. We spent Thanksgiving with my sister-in-law and got to meet our new and very beautiful niece Eleanor. I've kept up with blogging once a week, which, honestly, I wasn't sure I would actually be able to do. But the best thing that happened in 2014 was that my wife and I found out we are expecting our first child. I'm both terrified and excited at the thought of being a dad, but I can already predict that having a child is going to be the highlight of 2015.
2014 was an interesting year for technology, and there were a lot of new products announced and released. But from the perspective of what affects my everyday life most, I've been following what's been happening in the streaming media segment.
While Apple has had a media streaming solution for several years, its competitors (other than Roku) have struggled to come up with any interesting alternatives. There have been many home media center solutions over the years, like Windows Media Center, XBMC on a Raspberry Pi (Raspbmc), and the Xbox, to name a few. But in my opinion, none of these solutions other than the Apple TV and Roku have had the teeth to take off in the mass market. And, unfortunately, without mass adoption the quality of content is not very good.
2014 saw two entries into the streaming market which I believe will help drive competition and innovation in a category of software and hardware that has been pretty stagnant. Amazon announced its Fire TV, which was received very well, and Google announced the Nexus Player, which seems to be a legitimate reboot of its efforts to get into the streaming TV market. Amazon also announced the Fire TV Stick.
We've got a Roku, a Chromecast, and a Fire TV Stick in our household. We don't have an Apple TV because we don't own any Apple mobile products, and Apple is a pretty closed ecosystem that I really don't want to encourage. That's not to say they don't make beautiful products, because they do.
The Roku is a simple and easy to use device with a pretty decent interface. But to me something just doesn't feel right. When I use the Roku I put down my everyday tech (phone/tablet) and pick up their remote and use their software. I'm aware that I'm using something that isn't customized to me. Their interface doesn't feel like "home" to me like my own tech does. The Roku feels slightly foreign. Also, streaming media from my phone or tablet directly to the Roku has been clunky at best.
Up until December, the Chromecast was my favorite device of 2014. I really like how easy it is to use. I can just open the YouTube app on my phone and start queuing up clips. Or I can open Netflix, WatchESPN, Comedy Central, or HBO Go, find what I want to watch, and fling it to my TV. Really, my biggest complaint with the Chromecast is that developers have to integrate it directly into their mobile apps, and apps have been slow to adopt this additional API.
I say the Chromecast was my favorite device up until December because I got my Fire TV Stick this month, and so far it's been pretty incredible. The interface is great and very intuitive if you're already familiar with Amazon Instant Video. I downloaded the free remote app, and the voice search is very accurate and fast. One of the things I really liked about the Roku is that the apps are right on the device, and the Fire TV Stick follows a similar path. It has all of the apps I use today with my Chromecast except HBO Go. But I can even stream HBO Go directly from my phone to the Fire TV Stick using display mirroring.
While display mirroring is a battery drain on your phone or tablet, it's pretty useful for me. Traditionally I've used it to stream to the Chromecast when an app hasn't implemented the Chromecast API. It's been nice being able to use it with the Fire TV Stick out of the box. My biggest complaint is that I have to enable it on the Fire TV Stick each time I want to mirror.
I have big hopes that 2015 will bring more innovation to streaming media.
Monday, December 15, 2014
DIY Xen: Shrinking a Linux Disk
If you've been a reader of my blog for a while you've probably picked up two important details about me. First, I'm a huge fan of open source. Second, I'm a big DIY'er when it comes to software and services. Not that they have to, but I think the two tend to go hand in hand. My guess as to why is that most people who gravitate to open source seem to do so out of a desire to learn.
I've been running my own mail and web server(s) for well over a decade now. Not because I think I can do it better than what's out there, but because I was truly interested in understanding the nitty-gritty of what makes the internet run. Historically I had always done this on a single Slackware Linux box. This served its purpose but did come with a few side effects. I was using my email service as my primary email, my RSS aggregator as my primary source for news, and my CalDAV server as my primary calendar.
One big problem I started to run into with my single-server setup was that every so often, while tinkering with some new software I wanted to learn about, I'd inadvertently take down my server for a bit, which basically meant I was dead in the water in terms of my email, calendar, and RSS feeds. So I decided to let my curiosity about "The Cloud" turn into working knowledge by setting up my own Xen server.
My initial impression (which still holds true today) is that Xen is awesome. Within just a few hours I was able to get Xen running on my hand-built server (16GB RAM, 700GB hard drive, Intel quad-core i5). Slackware has never let me down, so I decided to stick with it for my guest OSes, and I set up separate servers for my production services and my tinkering. It's been great.
One problem I ran into while trying to find the optimal setup was shrinking a Linux disk that I had made too big to start with. So I thought I would document the process in case anyone out there runs into the same issue.
- Shut down your existing Linux virtual machine (VM).
- Create and attach a new, smaller storage device in Xen.
- Start the Linux VM.
- Create a partition on the new drive.
- Create a filesystem on the new partition.
- Create a temporary mount point so that you can copy the existing partition over.
- Mount the smaller partition you just created.
- Copy the contents of the partition you want to shrink onto the new, smaller partition.
- After the copy has completed, unmount the new, smaller partition.
- Shut down your Linux VM.
- Detach the original, larger drive from the VM in Xen.
- Restart the Linux VM and verify everything was copied and is working as expected.
- Delete the no-longer-needed storage device in Xen.
# Partition the new drive (replace sdX with the new device, e.g. sdb or sdc)
$ sudo /sbin/fdisk /dev/sdX
# Create an ext4 filesystem on the new partition
$ sudo /sbin/mkfs.ext4 /dev/sdX1
# Mount the new partition at a temporary mount point
$ sudo mkdir /temp_mount
$ sudo mount /dev/sdX1 /temp_mount
# Copy everything over: -a preserves ownership/permissions, -x stays on one
# filesystem, and "/old/drive/." (unlike "/old/drive/*") includes dotfiles
$ sudo cp -ax /old/drive/. /temp_mount/
$ sudo umount /temp_mount
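One thing the steps gloss over: before the final restart, the VM's Xen configuration has to present the new, smaller disk in place of the old one, and if the device name seen by the guest changes, /etc/fstab inside the guest needs updating to match. A sketch of the config change; the file path and volume names here are hypothetical:

```
# /etc/xen/myvm.cfg (hypothetical path and volume names)

# Before: the oversized volume is the boot disk, the new one attached second
# disk = ['phy:/dev/vg0/myvm-big,xvda,w', 'phy:/dev/vg0/myvm-small,xvdb,w']

# After: the smaller volume takes over as xvda and the old one is dropped
disk = ['phy:/dev/vg0/myvm-small,xvda,w']
```

Because the new disk takes over the old device name (xvda), the guest's fstab usually doesn't need to change; verify with df -h after the restart.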
Monday, December 8, 2014
Willing, capable, and nearby
As I was getting ready to graduate college and enter the career world my grandmother gave me possibly the best piece of advice anyone has ever given me. She told me to always remember that there's someone that lives nearby that's just as capable and willing to work for less.
This piece of advice may sound cold or negative on the surface, but in reality it was meant to bring perspective, inspire humility, and make me ask myself why I'm doing what I'm doing. My grandmother, who worked 40+ years at The Washington Post, recognized that given enough time we all feel unappreciated at work. She also realized that when we feel unappreciated we tend to overinflate our value and contribution.
Part of her point is that money is a means to an end. If money becomes an end in and of itself and your only motivation for feeling appreciated then you're going to be disappointed. Maybe it's getting less of a raise or bonus than expected or finding out that your co-worker, who works half as hard as you, makes more than you. Whatever the reason, relying on money to provide motivation at work will eventually fail and you'll find yourself unsatisfied and unfulfilled.
So what's the key then? The folks over at RSA have a great 10-minute video explaining how research has shown that money isn't a good enough motivator. That's not to say that money isn't important; it's just that there's a point where money as a motivator peaks. Once people make enough money that they're not constantly worrying about it, there are three main motivators: autonomy, mastery, and purpose.
I really think that's the underlying point my grandmother was trying to make all those years ago. I don't know that she'd have been able to name those three areas specifically but I am absolutely sure that she understood that you need a combination of those three to feel appreciated and valued and to be motivated in your career.
I believe that she wanted me to understand that if I didn't search out and understand what it was about my job that motivated me then I would never really be happy in my career. For me this has translated into asking myself the question of whether or not I would do my job outside of work in my spare time.
When I was an individual contributor the answer to this question for me was really simple. It was a resounding yes. I would work 9 - 12 hours a day writing software at work to come home and write more software for personal use for another 4 - 6 hours. Writing software was, and still is, a hobby. It's a way I relax. It's something that helps me grow and keep my mind sharp. I really like solving problems and I like adding utility.
But once I entered middle management I had to ask myself a new question: "What motivates me now?" I think the answer is actually one of the reasons I started this blog. I really like investing in people. I enjoy mentoring and helping others grow. Not because I believe I know more than them or that I have all the answers. It's quite the opposite: one thing I have learned in my career is that I don't know it all and there is always more I can learn. What motivates me is going through the process of learning with someone else.
Monday, December 1, 2014
Why is getting your data on a new phone so much work?
Recently my wife upgraded her phone after finishing her two year contract with our mobile provider. She transitioned between phones on the same carrier made by the same manufacturer.
For some context, my wife's primary email comes from a standard IMAP server. She gets her calendars from a standard CalDAV enabled server. She gets her contacts from a standard CardDAV enabled server. She downloads her music and files from a standard WebDAV server. She installs her applications from two app stores, Google Play and Amazon Appstore.
It took us over 4 hours to transition everything from her old phone to her new phone. Why in 2014 is this still so cumbersome?
What transferred or was set up without any work
- The applications installed from the Google Play Store.
- Gmail.
- Home screen background image.
What we had to manually transfer or set up
- Applications that were NOT installed from Google Play Store.
- IMAP email.
- CalDAV calendars.
- CardDAV contacts.
- Lock screen background image.
- Phone PIN.
- Phone home screens.
- Widgets.
- Application Shortcuts.
- Alarms.
- Application Settings.
- Her camera pictures.
- Her downloaded music.
- Her downloaded files.
- 3rd party application data (Instagram, Facebook, Pinterest, etc.).
There's nothing on the second list that couldn't have been automatically transferred. I'm not sure what the right solution is to this problem, but I do know this shouldn't be as much work as it was.
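Some of the second list is even scriptable today. For example, camera pictures, music, and downloaded files can be pulled off the old phone and pushed onto the new one with adb; a sketch, where the directory names are common Android defaults and not guaranteed on every device:

```shell
# Sketch: move camera pictures, music, and downloads between phones with adb.
# The directory names are common Android defaults, not guaranteed everywhere.
DIRS="DCIM Music Download"

# Run with the old phone connected
pull_from_old() {
  for d in $DIRS; do
    adb pull "/sdcard/$d" "./phone-backup/$d"
  done
}

# Then run with the new phone connected
push_to_new() {
  for d in $DIRS; do
    adb push "./phone-backup/$d" "/sdcard/$d"
  done
}
```

But that's exactly the point: an ordinary user shouldn't need a shell and a USB cable for this. The platform could do it for them.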
As technologists we put way too much on the shoulders of our users. We expect them to do the heavy lifting for things that we could do easily through software. I think part of this problem is that we, as an industry, don't think enough about the import/export scenarios for our mobile products. That's sad given that most people are on two-year contracts with their carriers and have an opportunity to upgrade their phones, if they can afford it, every couple of years.
In my opinion this is a real opportunity lost.
Monday, November 24, 2014
Ubiquity of data
I've been thinking a lot lately about ubiquity of data, or really the lack of it in today's modern technology.
In the 90's and 2000's most of our information lived on our primary machines, whether that was a desktop or a laptop. As an industry we spent a lot of time and resources trying to make that information portable. In the 80's it was the floppy disk. In the early-to-mid 90's it was the Iomega Zip drive. In the late 90's/early 2000's it was the recordable compact disc. In the mid-to-late 2000's it was flash memory in the form of a USB stick. All of these technologies focused on one thing: making it easier to move data from one place to another.
In the late 2000's/early 2010's we started to talk about shifting our data to the cloud. The thought was that if we put our data in services like Amazon S3, Amazon Cloud Drive, Dropbox, Box, Microsoft OneDrive, etc., our information would be ubiquitous. In a way we were right, in that we can now access that information in the cloud from anywhere. But fundamentally we're still thinking about and interacting with data as something that we move from place to place.
I think as an industry we need to stop thinking about data as a thing that we move from place to place and instead solve the problems that prevent us from accessing our data from anywhere. So what are the problems that we need to solve to make this a reality? This list is by no means exhaustive, but it's where I think we need to start.
Federated Identity Management
In the world we live in today, each service (i.e. each company) owns authenticating who we are. That is, they keep a proprietary set of information about us that they use to test us with. If we pass the test, they consider us authenticated. Most of these tests come in the form of two questions: what's your name and what's your password?
The problem with this is that it takes identity authentication out of the hands of those being identified and puts it into the hands of those wanting to authenticate. There's nothing inherently wrong with wanting or needing third-party validation. The problem comes when we have hundreds of places we need to authenticate with, each with its own proprietary method of authentication. Not to mention that it passes the buck to the user to remember how each one of these services authenticates them.
Tim Bray has a good discussion on federation that you should read if you're interested in the deeper discussion of the problems of identity federation.
Data Access Standards
We need data access standards that any group (for-profit or not) or individual can implement on top of their data, allowing any other system (using federated identity management) to interact with it. These standards would define CRUD operations (create, retrieve, update, and delete) in such a way that any other system can interact with the data on that system on the user's behalf.
We have a good start with standards like OPML, RSS, WebDAV, CalDAV, CardDAV, etc., but these standards aren't cohesive. On top of that, we don't have a real way to query a service to see what types of CRUD operations it supports. If services could state what they serve, then clients could interact with them more intelligently. Currently we put the onus on the user to know what a service offers.
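To be fair, WebDAV sketches a partial answer to the discovery problem: an OPTIONS request reports which operations a resource supports. A minimal probe, where the server URL is a placeholder:

```shell
# Ask a WebDAV server which operations it supports: the Allow header lists
# the HTTP methods (the CRUD surface) and the DAV header lists the
# compliance classes the server claims to implement.
dav_capabilities() {
  local url="$1"
  curl -s -i -X OPTIONS "$url" | grep -i -E '^(allow|dav):'
}
```

The methods a server lists map directly onto CRUD: PUT and MKCOL for create, GET and PROPFIND for retrieve, PUT for update, and DELETE for delete. What's missing is this kind of self-description being cohesive and universal across all the standards, not just WebDAV.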
Networks that do not model their business on whether you're a data consumer or provider
Right now the people who provide us access to the internet think about us in two categories. The first category I'll call "data consumers" and the second category I'll call "data providers".
Data consumers have the ability to get things from the internet and put things somewhere else on the internet. But data consumers don't have the ability to provide things to the internet without putting it somewhere else. A good example of this is email. A customer with a standard "data consumer" internet connection cannot run a mail server for two reasons.
First, they get a dynamic IP address from their ISP (internet service provider). This means that the address from which they connect to the internet is always changing. Think about this analogy: what if your home address was constantly changing, daily, weekly, or monthly? It would be impossible for anyone to contact you reliably via the mail, because anytime your address changed, mail sent to the previous address would be delivered to the wrong house. It's the same way on the internet: if you want people to be able to talk to you, you need a static address where they can contact you.
Second, ISPs block the ports necessary for others to talk to you. Even if you had a static address, often your ISP blocks standard email ports (25, 993, 143, 587, and 465) because they're trying to stop spammers from easily distributing their spam. But as anyone with an email address knows, the spammers are doing just fine even with the ISPs not allowing incoming connections. So I don't buy this as a valid reason to block these ports.
Data providers have all the same access as data consumers, except they pay more to have static IP addresses and to not have the ports blocked. Notice anything wrong with this situation? The ability to fully participate in the internet is based on how much you pay your ISP. ISPs hide behind the fallacy that they're trying to protect you in order to charge you more for the ability to truly participate on the internet. Does that extra money you pay actually protect you or anyone else on the internet better? No. Most ISPs will probably tell you that you're also paying for more reliability. But you're running on the same system as the data consumers, so I don't buy that argument either.
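Anyone curious can see this port blocking for themselves with a quick reachability probe. A sketch using bash's built-in /dev/tcp; the host name is a placeholder, and note that from the outside a blocked port and a simply closed port look the same:

```shell
# Probe whether a TCP port is reachable from this connection; returns
# 0 if the connection succeeds, non-zero if it's closed, blocked, or times out.
port_open() {
  local host="$1" port="$2"
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Example, against a mail server you control (placeholder host):
# port_open mail.example.com 25 && echo open || echo "closed or blocked"
```

Running this from a residential connection against your own server on ports 25, 143, 465, 587, and 993 is a quick way to find out exactly which ones your ISP filters.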
I truly believe that we're not quite moving in the right direction when it comes to solving these problems. Until we do, you will constantly be battling to move your data from one place to the next whenever an interesting new service comes into existence.
In the 90's and 2000's most of our information lived on our primary machines, whether that was a desktop or a laptop. As an industry we spent a lot of time and resources trying to make that information portable. In the 80's it was the Floppy drive. In the early/mid 90's it was the Iomega Zip Drive. In the late 90's/early 2000's it was the recordable compact disk. In the mid to late 2000's it was flash memory in the form of a USB stick. All of these technologies focused on one thing, making it easier to move data from one place to another.
In the late 2000's/early 2010's we started to talk about shifting our data to the cloud. The thought was that if we put our data in services like Amazon S3, Amazon Cloud Drive, Dropbox, Box, and Microsoft OneDrive, our information would be ubiquitous. In a way we were right, in that we can now access that information in the cloud from anywhere. But fundamentally we're still thinking about and interacting with data as something that we move from place to place.
I think as an industry we need to stop thinking about data as a thing that we move from place to place and instead solve the problems that prevent us from accessing our data from anywhere. So what are the problems that we need to solve to make this a reality? This list is by no means exhaustive, but it's where I think we need to start.
- Federated Identity Management.
- Data Access Standards.
- Networks that do not model their business on whether you're a data consumer or provider.
Federated Identity Management
In the world we live in today, each service (i.e. company) owns authenticating who we are. That is, each keeps a proprietary set of information about us that it uses to test us. If we pass the test, we're considered authenticated. Most of these tests come in the form of two questions: what's your name and what's your password?
The problem with this is that it takes identity authentication out of the hands of those being identified and puts it into the hands of those wanting to authenticate. There's nothing inherently wrong with wanting or needing third-party validation. The problem comes when we have hundreds of places we need to authenticate with, each with its own proprietary method of authentication. Not to mention that it passes the buck to the user to remember how each one of these services authenticates them.
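To make the alternative concrete, here's a minimal sketch in Python of what federation could look like from a service's point of view. The names and the shared-secret scheme are illustrative only (real federation protocols use public-key cryptography and standardized token formats); the point is that the service verifies a signed assertion from an identity provider the user chose, instead of keeping its own password database:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret between the service and the identity
# provider (illustrative only; real protocols use public-key signatures).
PROVIDER_SECRET = b"demo-secret"

def issue_assertion(user_id):
    """The identity provider vouches for a user by signing a claim."""
    claim = json.dumps({"user": user_id}).encode()
    signature = hmac.new(PROVIDER_SECRET, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "signature": signature}

def verify_assertion(assertion):
    """A service accepts the user without ever storing a password for them."""
    claim = assertion["claim"].encode()
    expected = hmac.new(PROVIDER_SECRET, claim, hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, assertion["signature"]):
        return json.loads(claim)["user"]
    return None

assertion = issue_assertion("alice")
print(verify_assertion(assertion))  # alice
```

A service that trusts the provider never needs to store a password for the user at all, and hundreds of services can rely on one authentication method that the user actually controls.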
Tim Bray has a good discussion on federation that you should read if you're interested in the deeper discussion of the problems of identity federation.
Data Access Standards
We need data access standards that any group (for-profit or not) or individual can implement on top of their data, allowing any other system (using federated identity management) to interact with it. These standards would define CRUD operations (create, retrieve, update, and delete) in such a way that any other system can interact with the data on that system on the user's behalf.
We have a good start on this with standards like OPML, RSS, WebDAV, CalDAV, and CardDAV, but these standards aren't cohesive. On top of that, we don't have a real way to query a service to see what types of CRUD operations it supports. If services could state what they serve, then clients could interact with them more intelligently. Currently we put the onus on the user to know what a service offers.
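As a sketch of what querying a service could look like, imagine a hypothetical capability document that a service publishes at a well-known URL. A client fetches it first and discovers which CRUD operations the service supports, instead of the user having to know (the format below is invented purely for illustration):

```python
import json

# A hypothetical capability document a service might publish, stating
# which CRUD operations it supports for each resource it hosts.
capability_doc = json.loads("""
{
  "resources": {
    "contacts": {"operations": ["create", "retrieve", "update", "delete"]},
    "feed":     {"operations": ["retrieve"]}
  }
}
""")

def supports(doc, resource, operation):
    """Check the service's stated capabilities before attempting a call."""
    ops = doc["resources"].get(resource, {}).get("operations", [])
    return operation in ops

print(supports(capability_doc, "contacts", "update"))  # True
print(supports(capability_doc, "feed", "delete"))      # False
```

With something like this in place, a client could gray out or hide operations a service doesn't offer, rather than failing after the fact.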
Networks that do not model their business on whether you're a data consumer or provider
Right now the people who provide us access to the internet think about us in two categories. The first category I'll call "data consumers" and the second category I'll call "data providers".
Data consumers have the ability to get things from the internet and put things somewhere else on the internet. But data consumers don't have the ability to provide things to the internet without putting it somewhere else. A good example of this is email. A customer with a standard "data consumer" internet connection cannot run a mail server for two reasons.
First, they get a dynamic IP address from their ISP (internet service provider). This means that the address from which they connect to the internet is always changing. Think about this analogy to a dynamic IP address: what if your home address was constantly changing, whether daily, weekly, or monthly? It would be impossible for anyone to contact you reliably via mail, because any time your address changed, mail sent to the previous address would be delivered to the wrong house. It's the same way on the internet. If you want people to be able to talk to you, you need a static address at which they can contact you.
Second, ISPs block the ports necessary for others to talk to you. Even if you had a static address, often your ISP blocks standard email ports (25, 993, 143, 587, and 465) because they're trying to stop spammers from easily distributing their spam. But as anyone with an email address knows, the spammers are doing just fine even with the ISPs not allowing incoming connections. So I don't buy this as a valid reason to block these ports.
Data providers have all the same access as data consumers, except they pay more to have static IP addresses and to not have the ports blocked. Notice anything wrong with this situation? The ability to fully participate in the internet is based on how much you pay your ISP. ISPs hide behind the fallacy that they're trying to protect you in order to charge you more for the ability to truly participate on the internet. Does that extra money you pay actually protect you or anyone else on the internet? No. Most ISPs will probably tell you that you're also paying for more reliability. But you're running on the same system as the data consumers, so I don't buy that argument either.
I truly believe that we're not quite moving in the right direction when it comes to solving these problems. Until we do, you will constantly be battling to move your data from one place to the next whenever a new interesting service comes into existence.
Monday, November 17, 2014
Transitioning to a professional software development role: part 3
In my first post in this series, Transitioning to a professional software development role: part 1, I started to outline some of the gaps I've seen in people's preparation for entering a career in the software development industry. I started off by focusing on what software development is not about.
In my second post in this series, Transitioning to a professional software development role: part 2, I took a look at what software development IS about. In the final post in this series I'd like to talk about the tools available that make us more efficient.
Being a good software developer means understanding how to apply agile
For a long time developing software was very much like developing a product on an assembly line. Assembly lines are very rigid and not well suited to respond to change. They run on the assumption that what happens upstream in the assembly line can be built upon and won't change. The moment change is introduced most of the product on the assembly line is ruined and must be thrown away.
Software's assembly line is called Waterfall. Over time we've come to understand the downfall of Waterfall: its major flaw is that it's very rigid to change. Rigidity to change was okay when the primary delivery mechanism for software was the compact disc. But as software has grown to allow near-real-time delivery of features and functionality, Waterfall's rigidity has become a hindrance to delivering high-quality software in smaller but more frequent updates.
That's where Agile comes in. Agile software development is about being able to respond to change rapidly. It teaches us to think about software not as a monolith but as a group of features that can be delivered in small chunks frequently over time.
I wrote a post several months ago called Software Craftsmanship: Project Workflow. If you're new to agile it's a good introduction to the anatomy of a project and what I've found useful. While the project workflow I've outlined isn't something you'll see in official Agile books, it is something that I have found extremely useful.
Being a good software developer means understanding how to use Lean
The concept of Lean Manufacturing was invented at Toyota. The primary goal was to reduce waste in the manufacturing cycle. This was done by rethinking the manufacturing process to identify and remove waste. One example of waste is parts sitting in a queue waiting to be processed. Toyota was able to show that by re-engineering their manufacturing process they could improve quality, efficiency, and overall customer satisfaction.
The concepts behind Lean Manufacturing can also be applied to software development. Unfortunately these concepts are often applied incorrectly and have led to many misconceptions and misunderstandings of Lean software development. I wrote a post several months ago which outlined common misunderstandings in applying Lean to software development.
As a professional software developer it's important to understand Lean and how to apply it to developing software.
Being a good software developer means understanding how to make trade-offs
The last area I want to briefly cover is understanding how to make trade-offs. As a professional software developer you're going to be asked to make trade-offs all the time. Sometimes the trade-off will come in the form of quality (a bad trade-off, in my opinion). Other times it will come in the form of features.
The key to understanding how to make trade-offs is learning to ask a few questions.
- What am I gaining by making this trade-off?
- What would I have gotten if the trade-off was not made?
- What downstream effects will this decision have on my long-term strategy or roadmap?
- What additional work will be required later as a result of this trade-off?
The ultimate goal in software development is to provide business value in every part of the process. Understanding how to make trade-offs will help you provide the right business value at each step in the process.
Monday, November 10, 2014
Transitioning to a professional software development role: part 2
In my previous post, Transitioning to a professional software development role: part 1, I started to outline some of the gaps I've seen in people's preparation for entering a career in the software development industry. I started off by focusing on what software development is not about.
In this post I want to take a look at what software development IS about.
Being a good software developer is about understanding data structures
The foundation of a good software developer is understanding data structures and object-oriented programming. Data structures like binary trees, hash tables, arrays, and linked lists are core to writing software that is functional, scalable, and efficient.
It's not enough just to understand what the data structures are and how they're used. It's crucial that you also understand WHEN to use them. Using particular data structures appropriately comes with a few benefits. First, it helps others intuitively understand your code; they'll grasp your frame of reference better. Second, it helps you avoid "having a hammer and making everything a nail" syndrome. That's when you're learning something new and looking for places to apply your new knowledge, often shoehorning it into places it doesn't belong.
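A tiny illustration of the "when" question: if the work is dominated by membership tests, a hash-based set beats a list, and choosing the list anyway is a hammer-and-nail mistake. A quick sketch in Python:

```python
import timeit

# Membership tests are O(n) on a list but O(1) on average for a
# hash-based set, so the right choice depends on how the data is used.
items_list = list(range(100_000))
items_set = set(items_list)

# Look up a value near the end of the list (the worst case for a scan).
list_time = timeit.timeit(lambda: 99_999 in items_list, number=100)
set_time = timeit.timeit(lambda: 99_999 in items_set, number=100)

print(set_time < list_time)  # True: the set wins by orders of magnitude
```

The same reasoning applies in reverse: if you mostly iterate in order and rarely test membership, the list is the natural fit.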
Being a good software developer is about being able to estimate your work
I can't stress enough how important this is. Your team, your managers, and your customers are going to rely on you for consistency. They're going to make plans around what you do. Because of this, learning to estimate your work is crucial in helping you and them meet commitments. Understanding how to estimate your work well also helps you build a regular cadence in what you deliver, which is helpful for your customers.
There are three concepts I've found that really helped me learn to estimate my work well. The first is the Cone of Uncertainty. This concept is really helpful because it helps you tease out what you know you don't know, as well as what you don't know you don't know. Understanding the cone of uncertainty helps you remove ambiguity in what you're working on, which in turn helps you better understand the level of effort it will take.
Once you've teased out the uncertainty in your work, you can use Planning Poker as a way to quantify how much work something is. It's important that you try not to tie your poker points to a time scale, as that will tend to skew your pointing exercise. Instead, as you get better at quantifying how much work something is relative to your other work, you'll start to naturally see how much time it takes. For instance, let's say you use the Fibonacci numbers 1, 2, 3, 5, 8, and 13 to quantify your work. Over time, as you get better at pointing your work, you'll also see a trend in how much time certain points take. Only then can you accurately associate a timescale with your pointing.
The last concept that I've found very helpful in learning to estimate how much work I can do in any given period is tracking my velocity. If you're using Planning Poker to determine how big the chunks of work are, and you're using Agile to set a cadence or rhythm for when you deliver your work, then velocity tracking can help you be more predictable in how much work you can deliver in any given sprint. Understanding your velocity helps you set reasonable expectations on what you can deliver, and helps those planning for the future understand what it would take to shorten a project or make sure that a project is on track and will meet its delivery dates.
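The arithmetic behind velocity is simple. As a hypothetical example, given the point totals from your last five sprints you can estimate both your velocity and how long a backlog will take:

```python
# Velocity: average story points completed per sprint
# (the sprint totals below are hypothetical).
completed_points = [21, 18, 24, 19, 23]

velocity = sum(completed_points) / len(completed_points)
print(velocity)  # 21.0

# A 105-point backlog at this velocity suggests about 5 sprints of work.
backlog = 105
sprints_remaining = backlog / velocity
print(sprints_remaining)  # 5.0
```

This is also why tying points directly to hours is counterproductive: the timescale falls out of the measured trend, not the other way around.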
Being a good software developer is about re-use in order to avoid re-inventing the wheel
As newer engineers we want to solve problems that we find interesting and challenging. Often, as we get into the depths of a particular problem space, it becomes evident that we're trying to solve an already-solved problem. At this point we're at a crossroads where we can continue down the path of solving the problem ourselves and reinvent the wheel. Often this is the result of both curiosity and mistrust. We're curious about how to solve a particular problem, or about whether we could solve it better than those who have come before us. It also happens when we don't trust that a particular library actually solves the problem we're trying to solve, or, because another solution solves a slightly different but compatible problem, we don't trust that our problem is in the same problem space.
This is very detrimental to a project for a few reasons. First, the problem has already been solved, so you're going to waste time solving an already-solved problem. Second, it's likely that the problem is more nuanced than you're aware of. It's also likely that the people who have already solved the problem have dedicated themselves to solving it; it's the entirety of their problem domain. This means they're going to be the subject matter experts in this area. Because this is only one part of your overall problem, you won't be able to dedicate the same amount of time to solving it as well.
I would encourage you to first look to see if someone has already solved your problem, either in part or in whole. There are plenty of high-quality open source projects on GitHub and SourceForge, with people who are eager for you to use and incorporate their work into your project.
Being a good software developer is about knowing the limits of your understanding
There are several aspects to understanding the limits of your understanding. One aspect is knowing that knowledge in any particular domain has both breadth and depth. It is impossible to gain both breadth and depth of understanding in all areas of software development across all subject domains. Because of this, it's important to be aware of where you have breadth of understanding but lack depth, and where you have depth but lack breadth. Over time you'll develop both depth and breadth of understanding in a few particular subject areas. But it's important to know that this takes time, theory, and practice. Without all three, you won't gain both the breadth and the depth.
Knowing the limits of your understanding also involves being able to say you were wrong. There are going to be plenty of times when you thought you had a depth of understanding or breadth of understanding of something only to find out you didn't fully understand or misunderstood the subject. Being able to say you were wrong is the first step to correcting your understanding and being able to build on your new knowledge.
Monday, November 3, 2014
Transitioning to a professional software development role: part 1
I've spent 14+ years in the software industry in either an IC (individual contributor) role, as an engineering lead, or as a manager. I've worked in both the public and private sector. I've worked at companies as large as 100,000+ people and as small as 19 people. One thing that's been pretty consistent over time is that people first entering the software industry are ill-prepared for what it means to be a professional software developer. This is equally true for those coming out of college as it is for those transitioning to software from another industry.
What I'd like to do in this post is outline some of the gaps I've seen in people's preparation and try to pave the way toward helping those interested in software development understand what's expected of them in the industry and how to be prepared.
While the following post will focus on being a good software developer, most of what I outline is applicable to other roles in the software industry such as project/program/product management.
This will be a multi-part post. In part one I will focus on what software development is not.
Being a good software developer is not just being able to code
You're part of a team. Software development isn't just about solving problems with efficient algorithms. You're part of a team, which is part of a larger ecosystem. There are product people trying to manage the vision of the software. There are project people trying to manage the cadence of the software life-cycle. There are other engineers consuming the output of your work. There are internal and external customers trying to use your software to make their lives more meaningful, whether by being more efficient, participating in some sort of community, or just goofing off playing a game you've written.
Because of this people are relying on you to be an effective communicator. They're relying on you to be effective with time management. They're relying on you to ask for help when you get stuck. They expect you not to go dark. And they're relying on you to help them out when they get stuck.
Essentially you're part of a new tribe, each person having different but overlapping responsibilities. It's important to remember to grow your skills both technically AND with soft skills.
Being a good software developer is not about being clever
One of the biggest mistakes I see newer folks in the software industry make is trying to be too clever in their solutions. Writing software that lasts is about simplicity. Learning to write simple code that clearly communicates its intentions and intended purpose(s) means that it will be used effectively. Writing code that is clear means that it's readable.
In the software industry you're going to spend more of your time reading other people's code than actually writing code. It's important to learn what it means to write readable code. I would highly recommend reading the book Clean Code: A Handbook of Agile Software Craftsmanship.
Being a good software developer is not about personal style
Every industry has its own DSL (domain-specific language). That DSL helps people communicate more effectively within the industry by removing ambiguity and subjectivity. Software development has several different layers of DSLs that are important to learn.
There are language-specific idioms and standards that it's important to be familiar with. There are also platform-specific standards. For instance, standard *nix programs tend to do one thing and can be chained (or composed) with other programs (by piping) to serve some larger purpose. Windows programs, on the other hand, tend to be monolithic in nature and self-contained. It's important to know what the standards are for the platform you're working on.
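As an illustration of the *nix style, here's a small hypothetical filter written in Python: it reads lines from stdin and writes transformed lines to stdout, so it composes with other programs via pipes (e.g. `cat names.txt | python3 upper.py | sort`):

```python
#!/usr/bin/env python3
# A program in the Unix style: do one thing (uppercase lines), reading
# from stdin and writing to stdout so it can be chained with pipes.
import sys

def to_upper(lines):
    """Uppercase each input line; small tools like this compose via pipes."""
    return [line.upper() for line in lines]

if __name__ == "__main__":
    for line in to_upper(sys.stdin):
        sys.stdout.write(line)
```

The program itself knows nothing about what comes before or after it in the pipeline; the shell does the composition.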
In the same way, there are general coding standards that are industry accepted, as well as coding standards specific to your new organization. Your organization will also likely have its own set of standard tooling for development, deployment, and distribution.
Monday, October 27, 2014
What I've learned in a year
I have now been blogging for one full year. In the past year I've done one installment of the Starting From Scratch series on Android, a series about building a mobile app, 2 Back To The Basics installments covering Binary Trees and Hash Tables, and a whole lot of random posts about software development. So what have I learned over the past year of writing a technology blog?
Ideas aren't as hard to come by as I would have thought
Over the past year I've learned that I have a lot more to talk about than I originally thought I would. When I originally started this blog in Oct 2013 I wasn't sure what the heck I was going to talk about each week. While I wouldn't say I have a book of ideas just laying around, I haven't had trouble coming up with a topic each week. I've probably got about a dozen or so post ideas sitting in the queue just waiting to be written.
A weekly post is a good pace
When I set out to start this blog I wasn't sure on which end of the spectrum it was going to fall. On one side of the spectrum is the Twitter-like blog: the type with frequent but short (sometimes as short as one sentence) posts. On the other side of the spectrum are article-like blogs. These read like a magazine or newspaper article; they're usually chock full of information and often require multiple sittings to read through.
I've found myself somewhere in the middle, skewed slightly toward article length. I really like doing the multi-part series as well as the little nuggets of things I've learned.
Write a lot and then take time to think
I will usually write three or four blog posts at a time and then stew on my thoughts for a couple of weeks. I feel like it really helps me understand HOW I want to write about WHAT I write. Often I'll have an idea which I think about one way, and then after writing about it will come back and edit it from a different perspective. It's almost been like a conversation with myself.
I still don't know who my target audience is
And I'm okay with that. Sometimes I want to write a really technical article; I'll go deep into an algorithm and feel great about it. Sometimes I want to write a high-level piece about something that's applicable to life outside of software development (even if it's a post about software).
I'm just happy to be writing.
PC marketshare
Over the life of my blog, 38% of visitors have come from Windows, 37% from Mac, a smaller-than-expected 6% from Linux, and a hodgepodge of OS's (including mobile) makes up the rest.
I consume the blogs I follow almost entirely on my mobile devices (90% phone and 10% tablet), so it was a bit surprising to me that the vast majority of my blog's readers are on desktop machines. I think a lot of this has to do with the sources of my traffic, but I'm not wholly convinced.
I'm just glad to be here
Whether you've been reading this blog from the beginning or this is your first week, I hope you're enjoying what you're finding here. Writing is a form of art to me. I enjoy it, it relaxes me, and it makes me feel connected to humanity.
Thanks for taking the time out of your day to read my blog :)
Monday, October 20, 2014
Conditional logic In Ant
Every so often I find myself needing some conditional logic in my Ant build files, based either on an automated build property or on some property set by the current state.
There is an open source library, ant-contrib, which gives you if/else statements in your Ant build files, but I tend not to use ant-contrib for three reasons. First, it adds bloat to my project because of the requirement to include its jar in my project's classpath. Second, you have to mess around with defining tasks in your build files, which I just don't find very intuitive. Lastly, Ant already includes the ability to perform conditional logic by taking advantage of the Ant target's if attribute.
Performing conditional logic in Ant without an additional library is pretty easy. You simply need to define three targets. The first is the target you want to run in the TRUE scenario. The second is the target you want to run in the FALSE scenario. The third is the target that sets a property (or properties) based on some condition and calls the other targets.
Let's take a look at a very simple build file. This will print This build IS *nix if the isUnix property is set to true, otherwise it will print This build is NOT *nix.
<?xml version="1.0" encoding="UTF-8"?>
<project name="example">
    <condition property="isUnix"><os family="unix"/></condition>
    <target name="-unix-build" if="performUnixBuild">
        <echo>This build IS *nix</echo>
    </target>
    <target name="-non-unix-build" if="performNonUnixBuild">
        <echo>This build is NOT *nix</echo>
    </target>
    <target name="build">
        <condition property="performUnixBuild"><istrue value="${isUnix}" /></condition>
        <condition property="performNonUnixBuild"><isfalse value="${isUnix}" /></condition>
        <antcall target="-unix-build" />
        <antcall target="-non-unix-build" />
    </target>
</project>
You can see this in action by copying that file to a machine with Ant on it and running:
$ ant build
If you're on a Unix-like machine it will print This build IS *nix, otherwise it will print This build is NOT *nix.
You can test the else logic by overriding the isUnix property at the command line using:
$ ant -DisUnix=false build
Monday, October 13, 2014
The fallacy of the re-write
I've been in the software industry for a decade and a half and have worked on dozens of projects. Many of the systems I have worked on were considered legacy systems. As with any system, but even more so with legacy systems, developers get frustrated with the system's inflexibility. Inevitably this leads to the developers decreeing that if they could only re-write the system, all the problems would be solved. Unfortunately most product owners will eventually give in to these cries and commission a re-write.
I'm here to tell you today (as both a developer and a manager) that giving in to this urge IS NOT going to solve your problems. What it is going to do is grind your production to a halt and make your customers unhappy. This will have downstream effects on the team as the pressure to produce builds and builds and builds.
So why is a re-write not a viable solution?
Re-writes are usually based on a few commonly held (but false) beliefs in the software industry.
- If we start the project over from scratch we won't carry the problems from the old system into the new.
- If we start the project over from scratch we can use the latest and greatest technologies that are incompatible with our current technology stack.
- If we start the project over from scratch we can move faster and produce results quicker.
Why are these fallacies? If we dig a little deeper we will see that a ground up re-write means you are more likely to introduce problems in the new system than you are to solve problems in the old system. What is typically glossed over is the fact that the current architecture is doing a lot of stuff correct. How do I know this? Because it's the architecture that is in production right now running your business.
Let's take each of these fallacies one by one.
If we start the project over from scratch we won't carry the problems from the old system into the new.
This statement can really be broken down into two parts. The first part says that there are problems in the architecture that prevent you from extending the code, and because you're now aware of those problems you can re-architect the software so that they no longer exist. The second part says that you won't carry over existing bugs into the new system. The second part of this statement is really related to the second fallacy, so we'll cover it when we cover that fallacy.
Because it is true that re-writing a system with knowledge of the current architectural problems can help you avoid current pain points, most people are quick to accept this statement without challenge. But problems arise at many different times in the life-cycle of a product. Some arise as bugs while writing the software; these can typically be rooted out with some sort of unit testing. The next class of problems crops up when integrating the pieces of the system together. You can create integration tests to help reduce the number of integration bugs, but often there are integration bugs that don't show up in pre-production environments. These tend to be caused by the dynamic nature of content. Because the new system is a re-write of the old system, it will be more difficult to use real inputs/outputs from the old system to test the integration of the new system. Because of this you're likely to introduce problems in the new system that don't exist in the old system. And because the new system won't be in production until it's done, these new architectural problems are not likely to be found until your new system is in production.
If we start the project over from scratch we can use the latest and greatest technologies that are incompatible with our current technology stack.
On the surface this statement is likely true. What it hides is similar to what's hidden in the previous statement. New technologies mean new bugs and new problems. Again, it is likely that many of these problems won't surface until the new system is in production because, as anyone who has worked in the industry for at least a few years knows, production traffic is always different from simulated traffic. You run into different race conditions and bugs simply because of the random nature of production traffic.
If we start the project over from scratch we can move faster and produce results quicker.
The final fallacy is usually the one most companies hang their hat on, even if they acknowledge that a re-write from the ground up will introduce new bugs and problems and re-introduce existing ones. The reason is that they believe their knowledge of the existing system will help them solve only the problems that need to be solved, which leads to the system being built much faster.
The fallacy in this statement is more subtle but much more severe than the others. The reason is that until your new system performs all the functions of your old system, the old system is superior from a business value perspective. In fact it isn't until the new system has 100% feature parity with the old system that it starts to provide the same business value as the legacy system, let alone more. Some will try to gain business value from the new system earlier by switching over to the new system before there is 100% feature parity with the old system. But by doing this you're offering your customers less value for the same amount of money, time, and/or investment.
This visual does a good job of illustrating the feature parity problem.
What is the solution then?
Are you saying I'm stuck with my current architecture and technology stack? NO! The best way to upgrade your technology stack is to do an in-place re-write. By doing this you help mitigate the problems presented by a ground up re-write. What does an in-place re-write look like?
By segregating and replacing parts of your architecture you're reducing the surface area of change. This allows you to have a well defined contract for both the input and output of the system as well as the workflow.
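One way to picture the segregation step is a routing facade in the style of the "strangler" pattern. This is a hypothetical sketch, not taken from any real system: callers depend on one contract while each piece of the system is routed to either the legacy or the re-written implementation, one flag at a time.

```java
// Hypothetical contract both implementations honor; all names are illustrative.
interface PriceService {
    int priceInCents(String sku);
}

class LegacyPriceService implements PriceService {
    public int priceInCents(String sku) {
        return 1000; // stand-in for the old production logic
    }
}

class RewrittenPriceService implements PriceService {
    public int priceInCents(String sku) {
        return 1000; // must reach parity with the legacy output before the flag flips
    }
}

// The facade is the only thing callers see, so replacing one piece of the
// system means flipping one flag, not touching every call site.
class RoutingPriceService implements PriceService {
    private final PriceService legacy = new LegacyPriceService();
    private final PriceService rewritten = new RewrittenPriceService();
    private final boolean useRewrite;

    RoutingPriceService(boolean useRewrite) {
        this.useRewrite = useRewrite;
    }

    public int priceInCents(String sku) {
        return (useRewrite ? rewritten : legacy).priceInCents(sku);
    }
}
```

Because both implementations sit behind the same contract, you can also run them side by side in production and compare outputs before trusting the new one.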
In-place re-write has another huge benefit over ground up re-write. It allows you to validate your new system in production as you would any new feature of the system. This allows you to find bugs sooner as well as validate the workflow and feature parity.
Another benefit of an in-place re-write is that you can decommission parts of the legacy system as you go without ever having to do a big (and scary) "flip of the switch" from the old system to the new system.
Most importantly, your customers do not suffer when you do an in-place re-write as you are not ever taking away features from your customers. Even better, you can prioritize giving your customers new features earlier by implementing them on the new system even before you've finished porting the entire old system over.
Monday, October 6, 2014
Starting From Scratch: Android - Creating A Release Build
This week we're finishing the Starting From Scratch series with a look at how to create a release build of our app. I'll show you how to create a release key for your app, secure your release key via encryption, and integrate automatic decryption (and clean up) of your encrypted key into the normal Android Ant build process.
Creating a release key for your app
Your key is what identifies your app as being published by you. This is what ensures that only official versions of your app can be released. It's ABSOLUTELY important that no one gets access to your key. DO NOT commit this keystore to your source control repo as-is.
Create a release keystore
$ keytool -genkey -v -keystore my.release.keystore -alias myalias -keyalg RSA -keysize 2048 -validity 10000
Securing your release key
Not having the keystore in source control doesn't create a pit of success as you have to manage your key separately from your project. Furthermore, anyone with access to your key can sign an app as you. In order to safely create a pit of success we're going to encrypt our keystore and delete the original so it's not lying around anywhere for someone to abuse.
To encrypt the keystore we'll use openssl and DES3 encryption.
$ openssl des3 -salt -in my.release.keystore -out my.release.keystore.encrypted
The next thing you want to do is put your encrypted keystore in the provisioning directory.
$ rm my.release.keystore
$ mkdir provisioning
$ mv my.release.keystore.encrypted provisioning/
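Before deleting anything in a real project, you may want to convince yourself that the encryption round-trips. Here's a self-contained sketch using a throwaway file in place of a real keystore; the file name and password are purely illustrative.

```shell
# Create a throwaway file standing in for the keystore.
echo "fake keystore contents" > demo.keystore

# Encrypt, then decrypt, supplying the password non-interactively.
openssl des3 -salt -in demo.keystore -out demo.keystore.encrypted -pass pass:s3cret
openssl des3 -d -salt -in demo.keystore.encrypted -out demo.decrypted.keystore -pass pass:s3cret

# The decrypted copy should be byte-for-byte identical to the original.
cmp demo.keystore demo.decrypted.keystore && echo "round trip OK"
```

The -pass pass:... form is the same mechanism the automated build below uses, so this also verifies the password-on-the-command-line path.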
Integrating into the Android Ant build process
Now that we have a key that can be used to sign our application, and we've secured that key from unauthorized access, we need to integrate it into the standard Android Ant build process.
The first thing we need to do is create an Ant target that will decrypt the keystore. We also want to create a target to clean up the decrypted keystore immediately after the build. Note that the -decrypt-keystore target supports both prompting the builder for the password or getting the password from an Ant property in the case of an automated release build.
Here's what our encryption.xml file looks like. Create this file in the same directory as your project's build.xml file.
<?xml version="1.0" encoding="UTF-8"?>
<project name="encryption">
    <target name="-decrypt-keystore" depends="" if="isRelease">
        <echo>Decrypting keystore</echo>
        <if>
            <condition>
                <and>
                    <isset property="key.store.password"/>
                </and>
            </condition>
            <then>
                <exec executable="openssl">
                    <arg value="des3"/>
                    <arg value="-d"/>
                    <arg value="-salt"/>
                    <arg value="-in"/>
                    <arg value="provisioning/${assets.keystore}"/>
                    <arg value="-out"/>
                    <arg value="provisioning/release.keystore"/>
                    <arg value="-pass"/>
                    <arg value="pass:${key.store.password}"/>
                </exec>
            </then>
            <else>
                <exec executable="openssl">
                    <arg value="des3"/>
                    <arg value="-d"/>
                    <arg value="-salt"/>
                    <arg value="-in"/>
                    <arg value="provisioning/${assets.keystore}"/>
                    <arg value="-out"/>
                    <arg value="provisioning/release.keystore"/>
                </exec>
            </else>
        </if>
    </target>
    <target name="-clean-keystore" depends="">
        <echo>Cleaning up decrypted keystore</echo>
        <delete file="provisioning/release.keystore"/>
    </target>
</project>
In order to support automated release builds we need to add a few Ant properties to our project's local.properties file. DO NOT CHECK THIS FILE IN TO YOUR SOURCE CONTROL. Access to it should be restricted as much as possible because it contains the password used to decrypt your keystore. You do not have to put your password in this file; if you don't, you'll be prompted to enter it during the release build.
assets.keystore=my.release.keystore.encrypted
key.store=provisioning/release.keystore
key.alias=myalias
key.alias.password=PASSWORD_YOU_USED_WHEN_CREATING_YOUR_KEYSTORE
key.store.password=PASSWORD_YOU_USED_WHEN_CREATING_YOUR_KEYSTORE
The last thing we need to do is wire up the decryption and cleanup of our key into the existing Android Ant build process. To do this we'll implement the -pre-build, -pre-clean, and -post-build targets in our custom_rules.xml file. Note that we only want the decryption to happen during a release build, so we define an isRelease property; our -decrypt-keystore target checks for this property before execution.
<?xml version="1.0" encoding="UTF-8"?><project name="custom_rules">Finally, the last thing we need to do is update our projects build.xml file to include our encryption.xml and custom_rules.xml files. Add the following two import statements ABOVE the existing ant/build.xml import. For example:
<condition property="isRelease"><contains string="${ant.project.invoked-targets}" substring="release"/></condition>
<target name="-pre-build">
<antcall target="-decrypt-keystore" />
</target>
<target name="-pre-clean" depends="-clean-keystore"></target>
<target name="-post-build" depends="-clean-keystore"></target></project>
<import file="encryption.xml" optional="false" />You can now build a signed release version of your app on the command line with the following command.
<import file="custom_rules.xml" optional="false" />
<import file="${sdk.dir}/tools/ant/build.xml" />
$ ant release
Monday, September 29, 2014
Starting From Scratch: Android - Action Bar
This week we're continuing the Starting From Scratch series. Today we're going to take a look at the Action Bar. The action bar UI was first defined in Android 3.x (API 11+). It provided a consistent navigation paradigm for applications as well as a convenient way to present context specific calls to action (like sharing and search).
Because Android only supported the Action Bar APIs on API 11+, an open source project called ActionBarSherlock provided support for earlier versions of Android. ActionBarSherlock is a great project and very useful, and eventually Android added a similar Appcompat project to its v7 support library. While I'm a big fan of ActionBarSherlock I'm going to be using Android's Appcompat project in my example code for consistency.
Creating a project that has an Action Bar.
Set up the support appcompat project as a library. Note, for the purposes of this series I'm using android-14 as my target version. You'll want this to match whatever Android version your app is targeting.
$ cd /path/to/android-sdk/extras/android/support/v7/appcompat
$ android update project -p . -t android-14
Make sure that the project.properties file has the android library project property set.
android.library=true
Now that we have our pre-requisites complete let's create an app that will use an Action Bar.
$ cd ~
$ mkdir MyActionBarProject
$ cd MyActionBarProject
$ android create project -t android-14 -k com.example.myactionbarproject -p . -n MyActionBarProject -a MainActivity
$ cp /path/to/android-sdk/extras/android/support/v4/android-support-v4.jar libs/
In order to actually use the support library you need to add a reference to the Android Support Appcompat Library in your project.properties. Note that the path is relative NOT absolute.
android.library.reference.1=relative/path/to/android-sdk/extras/android/support/v7/appcompat
At this point you've got your application set up so that it can use an Action Bar, but it's not using one yet. In order to actually use an Action Bar you'll need to update your Android manifest to use a theme with an Action Bar. This can be done by using either Theme.AppCompat.Light or Theme.AppCompat (dark theme) as the theme of your activity in your AndroidManifest.xml. For example:
<activity android:name="MainActivity"
    android:label="@string/app_name"
    android:theme="@style/Theme.AppCompat.Light">
You'll also want to hint Android that you're going to support an older version of Android. You can do that by adding the following to your AndroidManifest.xml.
<uses-sdk android:minSdkVersion="14" android:targetSdkVersion="19" />
At this point you can compile your app and install it on your test device or emulator and you'll see an Action Bar. Make sure you either have an emulator running or a device attached in debug mode and run:
$ ant clean && ant debug && ant installd
Action Bar Menu
One of the great things about the Action Bar is that you can provide menu items directly in the action bar. This allows you to provide context specific menu options in a place that is convenient and easy for your users to access.
If you are planning on supporting the Action Bar via the Support Library then the first thing to do is update your Activity to extend ActionBarActivity instead of Activity.
Android provides an Action Bar Icon Pack which you can download and use in your application. Simply copy the resources from the theme your app is using into your res/drawable folders. For this example we'll use the refresh icon.
$ mkdir ./res/drawable-xxhdpi
$ cp /path/to/icons/holo_light/01_core_refresh/drawable-xxhdpi/ic_action_refresh.png ./res/drawable-xxhdpi/
$ cp /path/to/icons/holo_light/01_core_refresh/drawable-xhdpi/ic_action_refresh.png ./res/drawable-xhdpi/
$ cp /path/to/icons/holo_light/01_core_refresh/drawable-hdpi/ic_action_refresh.png ./res/drawable-hdpi/
$ cp /path/to/icons/holo_light/01_core_refresh/drawable-mdpi/ic_action_refresh.png ./res/drawable-mdpi/
The first thing you need to do to add a menu to your Action Bar is define the menu layout. In my layout I'll be referencing an icon. Here's what our main_activity_menu.xml looks like.
$ mkdir res/menu
$ vim res/menu/main_activity_menu.xml
<menu xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:myapp="http://schemas.android.com/apk/res-auto">
    <item android:id="@+id/menu_refresh"
        android:icon="@drawable/ic_action_refresh"
        android:title="Refresh"
        android:showAsAction="ifRoom"
        myapp:showAsAction="ifRoom" />
</menu>
Now that we've defined the Action Bar menu we need to inflate it into our Action Bar. This is done via the Activity's onCreateOptionsMenu event.
@Override
public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.main_activity_menu, menu);
    return super.onCreateOptionsMenu(menu);
}
When a user selects a menu item the onOptionsItemSelected method will be called. This method is called regardless of which Menu Item was selected. So you'll need to check the id of the item before you handle the action. Here's an example of handling our refresh action.
@Override
public boolean onOptionsItemSelected(MenuItem item) {
    if (item.getItemId() == R.id.menu_refresh) {
        Toast.makeText(this, "Refresh selected", Toast.LENGTH_LONG).show();
        return true;
    }
    return super.onOptionsItemSelected(item);
}
At this point you can compile your app and install it on your test device or emulator and you'll see an Action Bar with a refresh menu button. Selecting the refresh button will display a message saying Refresh selected.
$ ant clean && ant debug && ant installd
Monday, September 22, 2014
Starting From Scratch: Android - Fragments
This week we're continuing the Starting From Scratch series. Today we're going to take a look at Android Fragments. I'll discuss what a Fragment is, the Fragment life-cycle, creating a Fragment's options menu, and finally I'll give a few Fragment tips I've found along the way.
What Is A Fragment
Simply put, a Fragment is a way to encapsulate a piece of your application's UI and UI interactions into a reusable set of resources and code. A Fragment is not tied to any particular Activity, but instead can be used within many Activities.
One example of where I've used this modularity in my applications is with lists. Lists are common in mobile applications and often only differ by what they show. Using Fragments it's easy to encapsulate creating, displaying, and interacting with a list into a set of reusable code and resources. You can then display multiple lists in a variety of places throughout your app using the same code and resources.
Fragment Life-cycle
The Fragment life-cycle is very similar to the Activity life-cycle we've already gone through in this series. You still have creating, starting, resuming, pausing, stopping, and destroying life-cycle events. But in addition to those you have life-cycle events that are associated with creating your Fragment's view and attaching to/detaching from an Activity. I'm not going to go through every Fragment life-cycle method but instead will call out two key life-cycle differences from Activities.
The first big difference is in creating the Fragment's view. In an Activity this is done in the onCreate method; in a Fragment it's done in the onCreateView method, which is expected to return the view to use with this Fragment. Creating this view is pretty simple: just inflate your Fragment's layout using the LayoutInflater passed into this method.
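As a sketch of what that looks like (the class name and layout resource are hypothetical, not from this series' project):

```java
import android.app.Fragment;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;

public class MyListFragment extends Fragment {
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        // Inflate this Fragment's layout. attachToRoot is false because
        // the framework attaches the returned view to the container itself.
        return inflater.inflate(R.layout.my_list_fragment, container, false);
    }
}
```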
The next difference comes with the onActivityCreated method. The main purpose of this method is to allow the Fragment to restore state. There are two places that state can be stored. The first is internally within the Fragment via a Bundle. The second is in any arguments that were passed into the Fragment by its composing Activity.
The internal saved state is passed into the onActivityCreated method in the form of a Bundle. The state that was passed in via arguments can be retrieved via a call to the getArguments method. It's important to restore Fragment state from the correct source of information. For instance, if you get the initial state from the arguments but then update that state and save it in your Fragment's Bundle, you need some logic that determines the correct place to get the saved state from.
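A sketch of that restore logic, where the "query" key and field are hypothetical stand-ins for whatever state your Fragment tracks: prefer the Bundle the Fragment saved itself, and fall back to the arguments.

```java
import android.app.Fragment;
import android.os.Bundle;

public class MyListFragment extends Fragment {
    private String query; // hypothetical piece of Fragment state

    @Override
    public void onActivityCreated(Bundle savedInstanceState) {
        super.onActivityCreated(savedInstanceState);
        // Prefer state this Fragment saved itself; fall back to the
        // arguments supplied by the composing Activity.
        if (savedInstanceState != null && savedInstanceState.containsKey("query")) {
            query = savedInstanceState.getString("query");
        } else if (getArguments() != null) {
            query = getArguments().getString("query", "");
        }
    }

    @Override
    public void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putString("query", query); // persist the updated state
    }
}
```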
Creating a Fragment's Options Menu
Creating menu items for your Fragment is a four step process. The first step is declaring your Fragment's options menu in an XML file. The second step is declaring that your Fragment has an options menu. The third step is inflating your options menu layout. The last step is handling users selecting an option in your Fragment's menu.
First, create res/menu/my_first_fragment_menu.xml. The important thing to call out here is that each menu item needs to have an id. This id is important because there is one method that is called when the user selects a menu item. So we need a way to differentiate the desired action the user wishes to perform.
Declaring that your Fragment has an options menu is done via a call to setHasOptionsMenu. This call should be made in the Fragments default constructor.
Inflating your Fragments options menu is done by overriding the onCreateOptionsMenu method. This is done by passing the MenuInflater's inflate method the id of the menu XML file you created. If you want to use an ActionProvider in your Fragment, like the ShareActionProvider, this is the right time to set that provider up.
Finally, handling the selection of a menu option is done by overriding the onOptionsItemSelected method. This method is called when any menu item is selected. It's a good idea to encapsulate the menu item action into it's own method and just call that method when the item has been selected. It's important to remember to return true in the onOptionsItemSelected method if you did handle the menu item selection.
Fragment Tips
Retaining
One thing that often causes people to stumble is putting a video, web browser, or any other stateful object inside a Fragment. The reason is that when the device changes orientation the Activity (and it's child Fragments) are torn down and recreated. This causes problems when the user isn't expecting it. One way to solve this problem is to tell Android to retain the Fragment instance across Activity re-creation. This is done via a call to setRetainInstance. This call should be made in the Fragments default constructor.
Cross Fragment Coordination
Cross Fragment coordination is done by declaring an interface in your Fragment for any events you want to allow others Fragments to respond to. The Activities that compose your fragment will implement your Fragments interface and can then dispatch messages to other Fragments that it is composing. This allows you to keep your concerns separated correctly by NOT tightly coupling your Fragment with any other Fragment. It's okay to tightly couple your Activity with your Fragment because your Activity is composing that Fragment.
What Is A Fragment
Simply put, a Fragment is a way to encapsulate a piece of your application's UI and UI interactions into a reusable set of resources and code. A Fragment is not tied to any particular Activity; instead, it can be used within many Activities.
One example of where I've used this modularity in my applications is with lists. Lists are common in mobile applications and often only differ by what they show. Using Fragments it would be easy to encapsulate creating, displaying, and interacting with a list into a set of reusable code and resources. You could then display multiple lists in a variety of places throughout your app using the same code and resources.
Fragment Life-cycle
The Fragment life-cycle is very similar to the Activity life-cycle we've already gone through in this series. You still have creating, starting, resuming, pausing, stopping, and destroying life-cycle events. But in addition to those you have life-cycle events associated with creating your Fragment's view and attaching to/detaching from an Activity. I'm not going to go through every Fragment life-cycle method but will instead call out two key life-cycle differences from Activities.
The first big difference is in creating the Fragment's view. In an Activity this is done in the onCreate method; in a Fragment it's done in the onCreateView method. The onCreateView method is expected to return the view to use with this Fragment. Creating this view is pretty simple: just inflate your Fragment's layout using the LayoutInflater passed into this method.
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState)
{
    return inflater.inflate(R.layout.my_first_fragment_layout, container, false);
}
The next difference comes with the onActivityCreated method. The main purpose of this method is to allow the Fragment to restore state. There are two places that state can be stored. The first is internally within the Fragment via a Bundle. The second is in any arguments that were passed into the Fragment by its composing Activity.
The internal saved state is passed into the onActivityCreated method in the form of a Bundle. The state that was passed in via arguments can be retrieved via a call to the getArguments method. It's important to restore Fragment state from the correct source of information. For instance, if you get the initial state from the arguments but then update that state and save it in your Fragment's Bundle, you need some logic that determines the correct place to get the saved state from.
@Override
public void onActivityCreated(Bundle savedInstanceState)
{
    super.onActivityCreated(savedInstanceState);
    if (savedInstanceState != null)
    {
        // prefer the internally saved state, since it may have
        // been updated after the Fragment was first created
        this.someVariable = savedInstanceState.getString("SomeVariable");
    }
    else
    {
        Bundle arguments = this.getArguments();
        if (arguments != null && arguments.size() > 0)
        {
            // otherwise set any initial state that was passed in
            // via the arguments bundle
            this.someVariable = arguments.getString("SomeVariable");
        }
    }
}
Creating a Fragment's Options Menu
Creating menu items for your Fragment is a four step process. The first step is declaring your Fragment's options menu in an XML file. The second step is declaring that your Fragment has an options menu. The third step is inflating your options menu layout. The last step is handling the user selecting an option in your Fragment's menu.
First, create res/menu/my_first_fragment_menu.xml. The important thing to call out here is that each menu item needs to have an id. This id matters because a single method is called when the user selects any menu item, so the id is how we differentiate which action the user wishes to perform.
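A minimal sketch of what my_first_fragment_menu.xml might look like; the item id matches the one used in onOptionsItemSelected below, while the title string is a hypothetical resource:

```xml
<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- each item needs an id so onOptionsItemSelected can tell them apart -->
    <item
        android:id="@+id/my_menu_item"
        android:title="@string/my_menu_item_title" />
</menu>
```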
Declaring that your Fragment has an options menu is done via a call to setHasOptionsMenu. This call should be made in the Fragment's default constructor.
public MyFirstAndroidFragment()
{
    setHasOptionsMenu(true);
}
Inflating your Fragment's options menu is done by overriding the onCreateOptionsMenu method and passing the MenuInflater's inflate method the id of the menu XML file you created. If you want to use an ActionProvider in your Fragment, like the ShareActionProvider, this is the right time to set that provider up.
@Override
public void onCreateOptionsMenu(Menu menu, MenuInflater inflater)
{
    inflater.inflate(R.menu.my_first_fragment_menu, menu);
    // this is a good time to set up a share action provider
}
Finally, handling the selection of a menu option is done by overriding the onOptionsItemSelected method. This method is called when any menu item is selected. It's a good idea to encapsulate the menu item action into its own method and just call that method when the item has been selected. It's important to remember to return true from onOptionsItemSelected if you did handle the menu item selection.
@Override
public boolean onOptionsItemSelected(MenuItem item)
{
    if (item.getItemId() == R.id.my_menu_item)
    {
        this.handleMyMenuItem();
        return true;
    }
    // let the framework handle any items we don't recognize
    return super.onOptionsItemSelected(item);
}
Fragment Tips
Retaining
One thing that often causes people to stumble is putting a video, web browser, or any other stateful object inside a Fragment. The reason is that when the device changes orientation the Activity (and its child Fragments) are torn down and recreated. This causes problems when the user isn't expecting it. One way to solve this problem is to tell Android to retain the Fragment instance across Activity re-creation. This is done via a call to setRetainInstance. This call should be made in the Fragment's default constructor.
public MyFirstAndroidFragment()
{
    setRetainInstance(true);
}
Cross Fragment Coordination
Cross Fragment coordination is done by declaring an interface in your Fragment for any events you want to allow other Fragments to respond to. The Activity that composes your Fragment implements your Fragment's interface and can then dispatch messages to the other Fragments it is composing. This allows you to keep your concerns separated correctly by NOT tightly coupling your Fragment with any other Fragment. It's okay to tightly couple your Activity with your Fragment because your Activity is composing that Fragment.
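Here's a minimal sketch of that pattern using plain Java stand-ins (all class and method names are hypothetical) so the flow is easy to follow without the Android framework. The "list" Fragment declares the interface, the composing Activity implements it, and the Activity forwards the event to a "detail" Fragment; in real Android code the Fragment would typically grab its listener by casting the Activity in onAttach.

```java
// The Fragment declares an interface for the events it exposes.
interface OnItemSelectedListener {
    void onItemSelected(String item);
}

// Stand-in for the Fragment that fires events. It only knows about
// the interface, never about any other Fragment.
class ListFragmentSketch {
    private OnItemSelectedListener listener;

    // In real Android code this cast would happen in onAttach(Activity).
    void attach(OnItemSelectedListener listener) {
        this.listener = listener;
    }

    void selectItem(String item) {
        if (listener != null) {
            listener.onItemSelected(item);
        }
    }
}

// Stand-in for a second Fragment that reacts to the event.
class DetailFragmentSketch {
    String currentItem;

    void show(String item) {
        this.currentItem = item;
    }
}

// Stand-in for the composing Activity: it implements the Fragment's
// interface and dispatches the message to the other Fragment.
class HostActivitySketch implements OnItemSelectedListener {
    final DetailFragmentSketch detail = new DetailFragmentSketch();

    @Override
    public void onItemSelected(String item) {
        detail.show(item);
    }
}
```

Note that ListFragmentSketch compiles against OnItemSelectedListener alone, so it can be reused in any Activity that implements that interface.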