The Case for Refactoring

In my career as a programmer, I have done a lot of maintenance development. Some of that maintenance was on what are referred to as “legacy systems”. For those who don’t know, a legacy system is an application that is fragile enough that no developer wants to touch it.

According to Wikipedia, one of the characteristics of a legacy system (i.e. legacy application) is:

These systems can be hard to maintain, improve, and expand because there is a general lack of understanding of the system; the staff who were experts on it have retired or forgotten what they knew about it, and staff who entered the field after it became “legacy” never learned about it in the first place. This can be worsened by lack or loss of documentation.

I would add that a big problem besides lack or loss of documentation is incorrect documentation.

From a blog post on the subject:

  • The app was written some time ago, its original authors moved on taking knowledge with them (lost knowledge).
  • Subsequent authors changed the application based on insufficient / lost knowledge, introducing bugs and de-stabilizing it.
  • The app is now brittle, and people are reluctant to change it for fear of breaking it.

The thing I remember most about working on these systems is that making changes was scary; you never knew what your change might break. That made me hesitant to make even the simplest of changes. It also forced me to overestimate the amount of time the change would take; I simply had to account for the amount of time I was going to spend testing the application after making my change.

The problem could be made worse by how critical these legacy systems were. Imagine making a mistake that causes people’s paychecks to be wrong (this didn’t happen to me, but it happened to a co-worker). Imagine trying to explain to the executives how you let that bug through your testing. If you’re lucky, your management understands what maintaining these systems means. But, more often than not, management will think you were lazy or careless.

How Did We Get Here?

How did we let things get so bad that we can’t maintain an application?

At the time many of these legacy systems were created, many of the practices and toolsets we take for granted today didn’t exist. There was no such thing as automated unit testing. Refactoring was avoided unless absolutely necessary because it was expensive, no matter how bad a piece of code was. Instead, comments were added to describe the functionality. Over time, this made the problem worse because the comments weren’t maintained and got out of sync with the code.

So, How Do We Stop This?

We avoid creating legacy systems by actively and aggressively maintaining the code base. We do this by refactoring constantly.

When code has good, thorough unit tests, refactoring is straightforward. You know that if the code passes the unit tests, it works. This also means that stinky code can be fixed immediately; it doesn’t have to age and get stinkier. This is how we avoid creating a legacy system.
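As a minimal sketch of what that safety net looks like (the cart-total function and its data are made up to illustrate the point, not taken from any real system), the test pins down behavior so the refactored version can be swapped in with confidence:

```javascript
// The original, harder-to-follow version.
function totalBefore(items) {
  var t = 0;
  for (var i = 0; i < items.length; i++) {
    if (items[i].taxable) {
      t = t + items[i].price + items[i].price * 0.05;
    } else {
      t = t + items[i].price;
    }
  }
  return t;
}

// The refactored version: same behavior, clearer intent.
function totalAfter(items) {
  return items.reduce(function (sum, item) {
    var tax = item.taxable ? item.price * 0.05 : 0;
    return sum + item.price + tax;
  }, 0);
}

// The unit test: both versions must agree, so the refactor is safe to ship.
var cart = [{ price: 10, taxable: true }, { price: 4, taxable: false }];
console.assert(totalBefore(cart) === totalAfter(cart), "refactor changed behavior");
```

If the assertion holds for representative inputs, the old version can be deleted without fear; that is the whole trick.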

What Are the Benefits of Refactoring?

Here are the direct benefits of refactoring that I see:

  1. Code doesn’t get harder to maintain as it gets older.
  2. Code is cleaner, making it more likely we can add new features.
  3. Because of the above points, changes occur at a constant pace instead of slowing as the code base ages.
  4. It is easier to add functionality allowing the application to remain in use longer.

All of these add up to a lower cost of ownership for an application.

Why Don’t All Programmers Embrace Refactoring?

Refactoring isn’t a panacea; it does have costs.

Refactoring Makes Changes Take Longer

A change that includes refactoring will take longer, period. But it shouldn’t take much longer unless the code has gotten stinky because refactoring isn’t getting done.

However, without refactoring, an application can quickly turn into a legacy application where any change is ridiculously expensive. So while refactoring does incur a cost in the short-term, it incurs significant savings in the long term.

Refactoring Can Introduce Bugs

Any change we make to code can introduce bugs. That means refactoring can too. The one thing that helps refactoring though is quality unit tests. If the tests are thorough, you can be confident that the refactored code is functionally equivalent to the code being replaced. This seems to me to be a small price so that I can have an application I don’t mind maintaining.

Refactoring Doesn’t Add Value

It is true that refactoring doesn’t add new functionality. Some programmers equate this with refactoring adding no value. That is a short-term, if not immediate-term view. If you look at the long term, refactoring makes an application easier to maintain which means future changes will cost less. This can also lead to an extended life for an application which can result in significant cost savings.

Embrace Refactoring

So, embrace refactoring. Used properly it is a tool that will benefit you, your project team, the applications you write, and the customers your applications support.

Reflecting on How Programming Has Changed

My birthday was a few weeks ago. That got me nostalgic, and I started thinking about how much has changed since I started programming nearly 25 years ago at Arthur Andersen. I also realized that many things really haven’t changed. And that, unfortunately, we seem to have to keep learning the same lessons.

The Internet

Easily, the biggest change is the existence of the Internet. There isn’t an aspect of programming the Internet hasn’t helped. In thinking back, one of the biggest impacts I think it’s had is making documentation and assistance easily and readily available. In my first decade or so of development, if you had questions, the only sources tended to be the manufacturer manuals (yes, I have read many an IBM COBOL manual) and, if you were lucky (or maybe not so lucky), the corporate “expert”. Now, documentation is readily available on-line and there is tons of help.

REST Programming Architecture

Designing programs using a REST architecture has started to become very common for good reason. However, when I look back, I don’t think REST is a big change. Instead I think it has made incremental improvements over what was learned programming mainframes 40 years ago.

I started as a COBOL, CICS programmer. CICS used an architecture called pseudoconversational which meant that the program serves requests from the terminal and then shuts down. Sound familiar? It is an “old-fashioned” REST architecture. Granted the responses provided are much more limited in CICS than in REST. But CICS was created more than 50 years ago and was revolutionary in how many people it allowed a single mainframe to support.

Open Source

Open source is a great thing; it impacts so many areas of programming and computer use. Open source has been around for decades, though. CICS, the aforementioned IBM product, started life as an open-source program. So what has changed with open source? The Internet, which has made open-source programs and utilities readily available and made it just as easy to contribute.

Programming Languages

Tons of new programming languages have come on the scene, and many new capabilities have arrived with them. When I started, procedural programming languages were about it. There was some variation, but the capabilities and the way things were accomplished were pretty much the same in most languages.

Now, we still have some procedural languages though their role is much smaller. Additionally, we have object-oriented languages, functional languages, dynamic languages, and more.

This is the area that I feel has improved the most since I started programming. The improvements in encapsulation and abstraction are nothing short of fantastic, allowing us to build more complex systems without the code itself becoming more complex. In fact, in many ways, I believe systems today can be simpler to understand than equivalent systems from 20 years ago because of the extensive encapsulation and abstraction support in modern development languages.

In Conclusion

Being a computer programmer today is vastly different than it was when I graduated college. I wonder how different it will be when I look back in another 20 years? Will we be looking back on the quaint days of object-oriented and functional languages the way I’m looking back at procedural languages today?

Shutting Things Down

This isn’t a post about technology or about development; it is about the emotional toll getting laid off can take. As you can probably guess, I’m getting laid off at the end of December. Having been lucky enough to have gotten through more than 20 years without being laid off, I’m surprised how much this is weighing on me emotionally.

In addition to the lay offs, the company is transitioning to what I’d describe as a holding company at the same time. As of January 1, it will no longer have any staff or conduct any business. From an IT perspective, this means turning off all IT services and systems at the end of the month.

I have known this change was coming for a few months now. I thought that knowing would make dealing with this easier. I was wrong. When I’m up late at night, I start thinking about all of the work we did over the past eight years and how it must have no value. After all, if it had value, wouldn’t it be kept?

I do know this isn’t the case. The work we did provided tremendous value. It allowed us to compete with much larger players and let us force some positive changes in our industry. That’s something we need to be proud of.

I also know that the decision to shut down operations and lay off all staff was not a personal one. The company is changing because it had to. We were at a point where competing was more difficult and more expensive. Something had to change.

A while back, I read (sorry, I’m paraphrasing because I can’t find the source for this quote – if you know the source, please let me know):

Behind every business decision that forces personnel changes are people feeling the personal impacts of that decision.

In other words, the fact that I am losing my job makes the business decision personal to me. In an odd way, this makes me feel better; it validates what I’m feeling and makes me believe what I’m feeling is normal.

I know I will get through this and will come out the other side a stronger and better person. But it is not a fun road to go through.

User Interfaces Are Complicated

I’ve been doing some work on a food diary site of mine. One of the items I capture is the time food was eaten. I never thought capturing time in a user interface was so difficult until I started to work on it.

My first step was to figure out what I needed to capture. I decided I didn’t need an exact time; rather, an approximate time would be good enough (within a 15 or 30 minute window). I looked for a jQuery plug-in to do this. I found some that used drop downs to capture hours, minutes and seconds. I found some that used spinners. I didn’t like any of those.

I found a couple that provide a type of drop-down (more like an auto-complete than a true drop-down) and I liked that approach. But none of them were quite what I was looking for so I spun my own. So far, it’s okay but I still need to do some tweaking on it.
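For the data behind such a control, generating the choices themselves is the easy part. A sketch of that piece (the function name and 12-hour format are my own choices here, not from any plug-in):

```javascript
// Build the list of selectable times at a given granularity
// (e.g. 15- or 30-minute windows).
function buildTimeOptions(stepMinutes) {
  var options = [];
  for (var m = 0; m < 24 * 60; m += stepMinutes) {
    var h24 = Math.floor(m / 60);
    var minute = m % 60;
    var h12 = (h24 % 12 === 0) ? 12 : h24 % 12;          // 0 and 12 display as 12
    var suffix = h24 < 12 ? "AM" : "PM";
    options.push(h12 + ":" + (minute < 10 ? "0" : "") + minute + " " + suffix);
  }
  return options;
}

var choices = buildTimeOptions(30);
console.log(choices.length);  // 48 half-hour slots in a day
console.log(choices[0]);      // "12:00 AM"
```

The hard part, as the rest of this post shows, is presenting that list so it works with both a mouse and a thumb.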

I decided to make a mobile web version of the site. After doing some research, I decided to create the mobile version using jQuery Mobile. Its feature set is pretty cool and it seems rather stable even though it is only an alpha release.

Then I got to time entry. For my control, I display a scrollable div below a textbox so that time can be typed in or selected. When my scrollable div displays in phone browsers, it displays but it doesn’t scroll. Plus, given the assumptions I made about data entry, the approach really doesn’t work for phone/touch-based browsers. For example, when you hit tab or click on the next field, the scrollable div auto-hides. But who clicks tab on a phone? And because of the window size, it’s hard to click on the next textbox. So the approach that works decently on a computer browser really doesn’t fit the mobile browser.

For now, I pulled back on the mobile version of my site. While jQuery Mobile is really slick, there are a few too many things missing. Though I did decide that, when I’m ready, I’ll do the date and time with spinners like Android does it natively (separate text boxes for each entry item with the up and down arrows above and below the text box, respectively).

It’s amazing how complicated a single user interface element can become. 

New Phone

I just got a new Android phone, the T-Mobile G2. And I love it. It’s fast, it’s responsive, and the download speeds are incredibly fast (for a phone). The phone is a little on the heavy side, but it feels so solid that the weight doesn’t bother me. In fact, I would say this is one of the best-feeling electronic devices I’ve had in years.

This phone replaces my T-Mobile G1 that I’ve had for close to 2 years. The G1 was nice, but it was getting long in the tooth. I was disappointed when they didn’t push Android 2.1 to the G1. And it was starting to feel really slow with some of the applications I use, like the Google Navigation app.

So, the first question that comes up is why I didn’t get an iPhone. There are two primary reasons for that. First, I don’t really want to go back to AT&T. I was with AT&T for years, originally with AT&T Wireless, and their customer service had gotten to the point where I thought it was terrible. That is what made me switch to T-Mobile (who seems to have some of the best customer service in wireless around). Plus, a family plan similar to my T-Mobile plan would cost me a bit more per month. My second reason for no iPhone is that I hate iTunes and don’t want to install that beast on my computer.

I am not anti-Apple, though. I believe that Apple, with the iPhone, has taken the user experience to a new, higher level than it was at previously. And that has forced changes at other manufacturers that have made all cell phones better. I’m guessing that the iPad will have a similar effect on the netbook market.

But back to my G2. I knew I wanted another Android phone, and with T-Mobile, I had a few to choose from. For me, it came down to two phones: the G2 and the Samsung Vibrant (the T-Mobile Galaxy S phone). And there were a few things that made me select the G2 over the Vibrant:

  1. The G2 is running Android 2.2 today; the Vibrant is still on 2.1.
  2. The G2 uses the new HSPA+ connection, giving 4G connectivity speeds.
  3. The G2 is a pretty vanilla Android install (which is closer to what I was looking for); the Vibrant includes the Samsung TouchWiz interface. One problem I see with custom interfaces is that they slow down Android updates to the phone (which is why I believe the Vibrant is still on Android 2.1).

That said, there are a few things I wish the G2 had:

  1. More than 4GB of built-in flash memory (only 1.2GB is available; what happened to the rest?).
  2. The ability to uninstall some of the pre-installed Google apps. For example, Google Goggles and Google Earth are cool apps that I don’t see myself using on my phone. But I cannot uninstall them.

And there are a few really cool features that I get to take advantage of now because of upgrading to a G2:

  1. Chrome to Phone – This is a WOW feature. I look up an address, click the Chrome to Phone button and, presto, the map shows up on my phone where, with a simple click, I can use it in the Google Navigation app. Very cool.
  2. The email, calendar and contact integration with Exchange now exists and is fantastic. On my G1 I had to use a 3rd-party app. With my G2, I set up the Exchange server as an email account, and everything just integrated automatically.
  3. The performance and responsiveness of this phone are phenomenal. It responds to touch instantly and everything opens very quickly. Yes, it is “only” an 800MHz chip instead of the 1GHz chips in a lot of other phones (like the iPhone 4 and Samsung Vibrant), but it is also a next-generation chip. And most of the comparisons I’ve seen between the G2 and the Nexus One, running Android 2.2 with the 1GHz chip, have the G2 being the faster phone.
Overall, I couldn’t be happier with my choice though I’m sure some new phone will come out in another couple of months that will make me wish I had waited. 😉

Spam and Social Engineering

I continue to be amazed not by the quantity of spam but by the social engineering aspects and how well they seem to work.  And by how we tend to treat the people who fall for them.

In my full-time job, my responsibilities include providing desktop support (we are a small shop, so we all have a lot of roles).  In that role, I’ve seen how well some of these spam and nasty emails seem to work.  For example:

  • We’ve seen a lot of “fake” retail invoices going out.  I’ve had people click on the links contained in those emails, which take advantage of some IE holes and install some nasty software.  I’m personally surprised that the emails work, given there are issues with them that make me spot them as fakes almost instantly.
  • We’ve had a few emails arrive claiming we are in violation of copyright.  The email is “sent” from a real law firm.  But again, the content of the email makes me believe it is a fake almost instantly.  This email, in fact, has been a big enough problem that the law firm had to put a message on its website letting people know that they did not send the copyright violation email.

These instances got me thinking.  How am I able to spot these fakes when many other people can’t?  Granted, I am a much more sophisticated computer user than most.  But why, when I see these issues, do I conclude the email is fake while so many other people don’t?

For example, many of these emails were sent to an email address that didn’t match the name in the message: Jane Public would receive an email that was addressed to John Smith.  To me, this mismatch says “fake”.  But John Smith sees it and sends it to Jane Public because he is worried her order has a problem and she won’t know about it otherwise.
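As a toy illustration of that mismatch heuristic (the function, names, and address here are made up for this example, not from any real filter), a mail tool could flag messages whose display name shares nothing with the address it was sent to:

```javascript
// Return true when some part of the display name appears in the
// local part of the email address (e.g. "jane" in "jane.public").
function nameMatchesAddress(displayName, email) {
  var local = email.split("@")[0].toLowerCase();
  return displayName.toLowerCase().split(/\s+/).some(function (part) {
    return local.indexOf(part) !== -1;
  });
}

console.assert(nameMatchesAddress("Jane Public", "jane.public@example.com"));
console.assert(!nameMatchesAddress("John Smith", "jane.public@example.com"));
```

A mismatch isn’t proof of fraud, of course, but it is exactly the kind of cue the tools could surface for users instead of leaving them to notice it on their own.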

So, why do these types of emails work?  And what can we do to make them not work as well?

We’ve all given the “be suspicious of emails” talk.  Everybody has heard not to click on links in emails you don’t recognize.  So the spammers get around this by sending emails from places people do recognize.  When the email is from a place people do business with, many people will overlook minor issues and believe the email is legitimate.

How can we change the tools to help people identify a legitimate email from Amazon versus the fake?  The spam filters don’t catch them, at least not right away.  The mail programs display the email as legitimate.  The email looks legitimate.  But it’s not.  And the tools do nothing to help people identify these as fake.

We in IT don’t help the situation when we blame the user for clicking on these links.  We act like the people that click on these links don’t listen or don’t understand when we tell them how diligent they need to be.  If this problem was caused by another person instead of an email, we’d call the person who fell for the plea too trusting or gullible.  So why do we deride these people for believing an email that looks legitimate?


Today, I read the Criminal Over-engineering entry on the coderoom blog.

I couldn’t agree more.

I believe we not only over-engineer the code, we over-engineer the solution too.  We spend time adding features that aren’t needed or extending features with little bits that aren’t needed.  All this extra code makes things bigger to code, bigger to test, and takes longer to deploy.  I’m purposely ignoring the case where we don’t realize the feature is not needed; that’s a topic for a different blog post.

We also run into problems when these features are needed down the road.  Why?  Because we were usually not 100% right when we added the feature.  If we got 90% of the feature correct, we need to change 10% of it.  In other words, we are changing code instead of creating code.  And the code we are changing has never really been used.  So we pay the price of adding the feature too early by having to change it to make it useful.  And that means we are changing code, which is a slower and more error-prone exercise (not to mention a whole lot less exciting too).

How much time would we have saved if we just left it out in the first place?  We need to remember to code only the features needed today.  Don’t code for tomorrow because we don’t know what tomorrow holds.

ASP.NET MVC v2 Validations – Part 2

So, according to the comments on the first answer in this StackOverflow post, it appears that there is nothing included to perform client-side validation of model-level attributes.  This seems like a much bigger oversight than the other things I’ve seen missing.

And now I don’t believe I can use the built-in validations.  They are just too limited, have too many capabilities missing, and too many of the work-arounds feel so much like hacks that I’m going to go back to the jQuery validation library.


Today is the last day for six co-workers of mine, five from my IT team.  I’m very sorry to see them go, but business is what business is.  I wish them all the best.

Deployed First ASP.NET MVC Application

Last week, I deployed my first ASP.NET MVC application at work.  Besides being our first ASP.NET MVC app, it is also the first app where we are testing for multiple browsers: IE7 (using IE8 emulation), IE8, Firefox and Chrome.  And that has taught me a number of things to watch for between the browsers.  The most recent item to catch me was a difference between IE and Firefox in handling “empty” nodes.

If you have a div with the id “node” and you have the opening and closing tags on separate lines, the jQuery statement $('#node').html() will return different results in different browsers.  IE returns an empty string while Firefox and Chrome return the carriage return and any spaces that might be present.  Since I was checking for an empty string to trigger some processing, it wasn’t working in anything but IE.  Putting the open and close tags for the div next to each other fixed Firefox and Chrome.
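Another fix is to normalize the whitespace before comparing, so the check no longer depends on which browser produced the string. Sketched here without a browser (in real jQuery code, $.trim($('#node').html()) does the same normalization):

```javascript
// Treat markup as "empty" if it contains only whitespace, regardless of
// how the browser serialized the node's contents.
function isEffectivelyEmpty(html) {
  return html.replace(/^\s+|\s+$/g, "") === "";
}

console.assert(isEffectivelyEmpty(""));        // what IE returned
console.assert(isEffectivelyEmpty("\n    "));  // what Firefox/Chrome returned
console.assert(!isEffectivelyEmpty("<p>hi</p>"));
```

Normalizing in code is more robust than relying on markup layout, since a later reformat of the HTML would silently reintroduce the bug.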

I get why Firefox didn’t work.  In fact, I would argue that Firefox and Chrome worked correctly and that IE should not have.  My point is that just because you test and have the site working, it doesn’t mean the site works everywhere.  If you want true cross-browser compatibility, all web interface tests have to be run against all browsers.

And that begets the real problem.  How do you figure out what browsers and what versions to test?  The list will get very big very quickly and that gets expensive.  While unit tests will help with some of this, they do have limits when it gets to exercising the interface through multiple browsers unless you buy some very expensive tools.  And that’s before even considering the problem of multiple versions of IE or Firefox running side-by-side.

It shouldn’t be this hard to build an “ajaxy” web site that is compatible with the major browsers.