Validate for People

Lately, I seem to have found more web forms that are doing a ridiculously crappy job of validating the input data. It has, on occasion, gotten me yelling at my computer because I can’t understand what the owners of the site were thinking. Some of them are bad enough that they make me stop using the site, assuming that’s an option.

The other day, I had to enter a phone number. My password manager entered the phone number as


I submitted the form. The app told me my phone number was invalid. I manually changed the phone number to:


I submitted the form again. Again, the app told me my phone number was invalid. I read the entire error message this time. I switched my phone number to:


I submitted the form and, voila, the stupid application finally accepted it.
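
All three attempts carried the same ten digits, which is the point: the app could have normalized the input itself. Here is a sketch of what a forgiving normalizer might look like (the class and method names are mine, and it assumes US numbers):

```csharp
using System;
using System.Linq;

static class PhoneInput
{
    // Hypothetical normalizer: accept any common format, keep only the digits.
    // The app, not the user, does the formatting work.
    public static string Normalize(string input)
    {
        var digits = new string(input.Where(char.IsDigit).ToArray());

        // Assumption: US numbers; drop a leading country code if present.
        if (digits.Length == 11 && digits[0] == '1')
            digits = digits.Substring(1);

        return digits;
    }
}

// All of these normalize to the same ten digits:
// PhoneInput.Normalize("(123) 456-7890")   -> "1234567890"
// PhoneInput.Normalize("123-456-7890")     -> "1234567890"
// PhoneInput.Normalize("+1 123 456 7890")  -> "1234567890"
```

Store the ten digits, then format them however you like when displaying them back to the user.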

On my bank’s website, I would paste in payment amounts. Most of the time, those amounts were formatted (e.g. $1,234.55). When I submitted the payment, my bank would tell me I entered an invalid dollar amount. I had to remove the dollar sign and the comma to get the site to accept the amount. (My bank has since fixed this problem.)
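
For dollar amounts, the .NET standard library already knows how to do this: `NumberStyles.Currency` accepts exactly the symbol and separators the form rejected. A sketch (the wrapper class name is mine):

```csharp
using System;
using System.Globalization;

static class MoneyInput
{
    // NumberStyles.Currency accepts "$1,234.55", "1,234.55", and "1234.55"
    // alike -- the parsing work the form could have done instead of
    // rejecting the input.
    public static bool TryParse(string input, out decimal amount) =>
        decimal.TryParse(input, NumberStyles.Currency,
                         CultureInfo.GetCultureInfo("en-US"), out amount);
}

// MoneyInput.TryParse("$1,234.55", out var a)  -> true, a == 1234.55m
```

One line of parsing on the server beats an error message telling the user to reformat their own data.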

Also on my bank’s website, I have some bills go straight to the bank. Occasionally, I’ll submit a payment for a bill the bank received, making no changes. When I submit the payment, the site tells me I provided an invalid date. That makes me yell, since I didn’t provide the date. Then I realize the date falls on a non-banking day, so I fix it and submit the payment again.

These problems have one big thing in common: the web app can’t recognize valid data, and it makes that my problem. It shouldn’t be my problem that the site can’t recognize a valid phone number. It shouldn’t be my problem that the site can’t recognize a valid dollar amount. It shouldn’t be my problem that the site can’t fix a date, or tell the difference between an actually invalid date and a date that simply isn’t an option.

Validations are necessary to ensure an app can do stuff with the data. Sometimes that stuff is for me; sometimes it is for the owner of the app. Validations are critical to keeping applications running.

But validations that require following a strict formatting guide are not. Make validations flexible. Computers are good at parsing data. Support all valid formats. Support unformatted input. Fix data before you display it. Make things easy for your user. Because a happy user is the best reference you can get.


The Need for Technical Discipline

Technical discipline is a critical part of any development team. To me, technical discipline is the process of continually improving the code; in other words, paying down technical debt as it is created. Doing this can mean the difference between an enjoyable project and a hated one.

Every story completed by every developer adds technical debt. Sometimes the debt is realized immediately; other times it appears because of a future change request. The problem with debt is that it makes change more difficult. And the more debt that is allowed to build up, the more difficult changes become.

When the debt builds to a critical mass, the project is in trouble. Simple changes that used to take two days take twice as long. Complex changes take three or four times as long. It’s difficult to explain why.

This is when the pressure to deliver stories starts to weigh down on everyone. And everybody gets unhappy because this great project has just become the latest debacle.

How Did We Get Here?

Most projects don’t start intending to build up lots of technical debt. Usually the road to having mountains of technical debt is paved with good intentions.


It starts with small things. Sometimes the sacrifices begin with a decision to deliver a time-sensitive item quickly, sacrificing whatever we have to. More commonly, it is a lot of little decisions made by the developers as they are developing stories:

  • Avoiding refactoring work to hit the estimate
  • Limiting automated testing to hit the estimate
  • Not removing unused code
  • Leaving ugly, complicated code as-is
  • Duplicating common code instead of extracting it

For a more thorough explanation of the types of technical debt, check out

When the pile of technical debt gets big enough, the developers try to start paying it down. But this leads to a different problem. The business team starts to pressure the developers to cut out those unnecessary activities because delivery is getting slower. After all, if it was okay to skip these things up until now, why isn’t it okay now? And if the development team gives in to that pressure, things on the project get worse quickly.

The development team has to realize that they are mostly responsible for this situation. By avoiding the activities that extended delivery times early on, they set a precedent that those activities are totally unnecessary.

One way I’ve seen teams attempt to fix this is to create refactoring stories. That is, create a story to fix the mess made while developing another story. I’ve seen lots of projects with refactoring stories, but I’ve never seen a project fix a technical debt problem with them. The approach doesn’t work because refactoring stories, aka stories that take valuable development time and deliver nothing visible, always end up with the lowest priority.

How Do We Fix It? And Keep It From Recurring?

The fix is simple but hard to implement. Don’t cut corners during development except when there are pressing reasons to do so. And those exceptions should be few. In other words, the developers should strive to slightly improve the code base with every story.

This is a difficult transition, and the longer the team has let discipline lapse, the more difficult it is. But after just a few painful weeks, the team should start seeing improvements in delivery times. Things will continue to improve, slowly, for quite a while.

When making the transition, it’s important not to try to fix everything at once; that’s impossible. Instead, focus on the areas of the codebase causing the most difficulty for the stories being played. Other areas will be addressed later, when other stories change them.

When technical discipline lapses, the development team can feel like it is being a better business partner by delivering faster. The reality is the opposite: delivering faster now means paying dearly later. Being the better partner means retaining discipline all the time.

Change – Embrace It

I’m a developer. I’ve been a developer for more years than I care to think about. Thinking back, I realize that how I code today is different enough from what I did after graduating that it is almost a different profession.

I started writing COBOL on a mainframe. That COBOL had SQL embedded in it to talk to the database. CICS commands were also added to send and get data from the user.

Today, I write code in a few different languages. My current project uses C#, JavaScript, HTML, CSS, and a large number of libraries like React, Entity Framework, and Unity (Unity for IoC).

When I look at the work at a low level, not much has changed:

  • I’m still using a relational database to save and retrieve data
  • Data continues to be presented by “screens”
  • Those “screens” are stateless

But as soon as you go above that lowest level, nothing is the same:

  • Database activity is done through an ORM; there’s very little SQL coding
  • Getting data to and from the server is taken care of by libraries; this only shows up as some attributes/markers within the code
  • Development environments provide instant feedback about compilation errors
  • Automated tests identify what broke almost instantly
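
To make the ORM point concrete, here is an illustrative sketch. Entity Framework exposes this same LINQ query surface over a DbContext; a plain in-memory list stands in here (all type and member names are mine) so the snippet is self-contained:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical entity; with EF this would map to a database table.
record Customer(int Id, string Name, bool Active);

static class CustomerQueries
{
    public static IEnumerable<string> ActiveNames(IEnumerable<Customer> customers) =>
        customers.Where(c => c.Active)   // the ORM turns this into a SQL WHERE
                 .OrderBy(c => c.Name)   // ... and this into ORDER BY
                 .Select(c => c.Name);   // ... and this into a column list
}
```

The query is ordinary C#; the SQL is generated for you, which is why there’s so little SQL coding left.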

When you look at the applications themselves, there is no resemblance between the applications I built 20 years ago and the ones I build today. To me, the changes are profound.

Getting from then to today was challenging. The change I struggled with the most was the move to TDD. It was such a different take on how I was used to approaching development that I struggled quite a bit. But once I got it, I realized its power, and it has influenced every aspect of my development, even when I’m not following TDD. I also think it was the single biggest personal improvement I have made. And I regret that I didn’t start it sooner.

And that’s how some changes go. You adopt them and slowly realize that they significantly improve how you do things. But not all changes are like this. Some don’t amount to much. Others are not a fit at all. The most difficult category, though, is the changes that fail because they were never a good idea.

  • 4GL (4th Generation Languages) were going to eliminate the need for developers; they would allow business users to code their own programs. While some of the 4GL concepts have made it into the mainstream, this whole idea has disappeared from the market.
  • The Client-Server model was the direction of the future. It was a way to move off of the mainframe and provide better interfaces to the business users. Then the scaling issues became apparent and all but killed this approach. While it continues to exist as layered architectures deployed on servers, the original approach has all but disappeared.

Change is difficult. Some changes are more difficult than others. But we need to embrace change because changing is what allows us to continually improve our skills. And that means we can deliver better software. And that is what we all want.

Exception Handling Anti-Patterns

Exception handling seems easy. But, done poorly, it can cause problems of its own or worse, it can cause problems that hide other problems.

Eating Exceptions

The worst anti-pattern is eating exceptions. This is when an exception is caught, nothing (or minimal logging) is done to handle the exception, and processing continues.

try {
    ... Do Stuff ...
}
catch {
    Debug.WriteLine("Oh no, an exception occurred.");
}

This pattern causes some of the hardest to find bugs. Especially when the try block calls code in other classes.

Why are these bugs so nasty? The failure occurs, leaving an object (or objects) in an invalid state. The failure is ignored and processing continues, assuming everything is good. This can lead to corrupted data and other hard-to-reproduce exceptions.

Finding these exceptions is onerous; usually requiring some luck as well as skill.

What’s the fix for this pattern? 99+% of the time, the best fix is to remove the try/catch block. Let the exception be handled by the application-level handlers; let the application stop once an object is in an invalid state.
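
What an application-level handler might look like, sketched as a console app (web frameworks provide their own equivalent hooks; `DoStuff` is a placeholder for real work):

```csharp
using System;

static class Program
{
    static void Main()
    {
        // Last-chance handler: log everything we know, then let the process
        // die. Anything less risks continuing with objects in an invalid state.
        AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
        {
            var ex = (Exception)args.ExceptionObject;
            Console.Error.WriteLine(ex.ToString()); // message + full stack trace
        };

        DoStuff(); // no local try/catch -- failures surface here, loudly
    }

    static void DoStuff() { /* application work (placeholder) */ }
}
```

The point is that there is exactly one place that catches everything, and it never pretends the failure didn’t happen.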

One of the first things I do when I start on a new codebase is to remove try/catch blocks that are not:
1. Application-level exception handlers
2. Handling an exception
This sometimes causes some angst among the other developers. But it will also slowly (or, sometimes, not so slowly) expose those hard-to-find bugs that have been lurking so that they can be fixed.

Losing Exception Information

Another common problem is losing track of some of the critical information about the exception, most commonly, the stack trace.

Let’s assume we are doing some database update code, and we need to handle exceptions to issue a rollback.

var transaction = new Transaction(connection);
try {
    ... update the database ...
}
catch (Exception e) {
    transaction.Rollback();
    throw new Exception($"Issued rollback: {e.Message}");
}

In this example, the stack trace is lost. Does that matter?

It can when you are trying to find and fix the bug. Assume you are using Entity Framework and SQL Server. One common exception is an EntityException (or a descendant) with the message:

String or binary data would be truncated. The statement has been terminated.

Finding the problem behind this error is a challenge in the best of circumstances. Finding it without knowing which update caused the problem is that much more difficult.

Fortunately, fixing this problem is easy. Instead of appending the original exception message, pass the entire exception:

var transaction = new Transaction(connection);
try {
    ... update the database ...
}
catch (Exception e) {
    transaction.Rollback();
    throw new Exception("Issued rollback", e);
}

Now, assuming the application level exception handler provides stack traces, finding the statement that caused the problem is easy.
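
A small self-contained demo of what wrapping preserves. The inner message mimics the SQL Server error; the exception types are simplified for illustration:

```csharp
using System;

static class Demo
{
    public static string CaptureFailure()
    {
        try
        {
            try
            {
                throw new InvalidOperationException(
                    "String or binary data would be truncated.");
            }
            catch (Exception e)
            {
                // Wrap, passing the original exception as the inner exception.
                throw new Exception("Issued rollback", e);
            }
        }
        catch (Exception outer)
        {
            // ToString() walks the inner-exception chain, so the original
            // message and both stack traces survive into the log.
            return outer.ToString();
        }
    }
}

// Demo.CaptureFailure() contains both "Issued rollback" and
// "String or binary data would be truncated."
```

Had the first version been used, only "Issued rollback: String or binary data..." would remain, with no stack trace pointing at the offending update.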

And, like the previous anti-pattern, I search for references to the Message property of any exception when working on a new codebase. And I refactor to pass the entire exception because the information being lost is too important.

C# #region Tags and Why I Don’t Like Them

I’m back on the Microsoft stack (ASP.NET MVC, C#, etc.) for the first time in a while. When I worked with it previously, I hated #region markers in any code file. Now, since I’m the lead, I have a policy of deleting them wherever I find them. No exceptions. I told my team I would do this, but I never really explained why.

The #region tag was originally used to segregate generated code from the code written by the developer. Those days are over; partial classes totally eliminated this usage.
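
A quick sketch of the partial-class split that made those regions unnecessary (the file names and member names are hypothetical):

```csharp
// OrderForm.Designer.cs -- the generated half, rewritten by tooling.
public partial class OrderForm
{
    private void InitializeComponent()
    {
        // generated layout code lives here, in its own file
    }
}

// OrderForm.cs -- the hand-written half. No #region is needed to keep
// the two apart, because they never share a file.
public partial class OrderForm
{
    public void Save()
    {
        // developer-written logic; it can still use generated members
    }
}
```

The compiler merges the halves into one class, so generated and hand-written code stay physically separate without any folding markers.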

Today, I see two main uses of #region tags.

First, they are used to group types of things. Region tags are put around:

  • Properties
  • Public Methods
  • Private Methods

The second common usage is grouping related things:

  • Around an event and its delegate
  • Around all the methods that talk to the database

So why don’t I like #region markers?

Regions Hide Code

When a class with #region tags is opened in Visual Studio, the regions are collapsed by default. In other words, at least some of the implementation of the class is hidden when the class is opened. This can make it difficult to find what you are looking for. And expanding the regions breaks your train of thought by preventing you from going straight to it.

Regions Hide Size

One of the more common design problems is a class that does too much. The feedback from scrolling and navigating a large class is one of the things that helps you decide it is time for a refactor. Regions eliminate this feedback; they can make a class that is far too large not feel that way. But that feeling is an illusion: as soon as any serious work needs to be done, the size of the class will quickly become an obstacle to maintaining it.

Regions Hide Complexity

The hidden code can also hide complexity. If a class is doing too much, when its responsibilities are muddled, regions can be used to group that functionality and make things appear more maintainable. But again, it is an illusion. As soon as something significant needs to change, especially if that change crosses regions, the regions will make the change more difficult.

Let’s Make Maintainable Code

One of our goals when developing most systems should be delivering maintainable code. Regions work against this.

Microsoft has acknowledged that #region tags should no longer be used. Today, the default settings for StyleCop do not allow #region tags. In other words, Microsoft no longer believes #region tags need to be used. So neither should you.

The Case for Refactoring

In my career as a programmer, I have done a lot of maintenance development. Some of that maintenance was on what are referred to as “legacy systems”. For those who don’t know, a legacy system is an application that is fragile enough that no developer wants to touch it.

According to Wikipedia, one of the characteristics of a legacy system (i.e. legacy application) is:

These systems can be hard to maintain, improve, and expand because there is a general lack of understanding of the system; the staff who were experts on it have retired or forgotten what they knew about it, and staff who entered the field after it became “legacy” never learned about it in the first place. This can be worsened by lack or loss of documentation.

I would add that a big problem besides lack or loss of documentation is incorrect documentation.

From a blog on

  • The app was written some time ago, its original authors moved on taking knowledge with them (lost knowledge).
  • Subsequent authors changed the application based on insufficient / lost knowledge, introducing bugs and de-stabilizing it.
  • The app is now brittle, and people are reluctant to change it for fear of breaking it.

The thing I remember most about working on these systems is that making changes was scary; you never knew what your change might break. That made me hesitant to make even the simplest of changes. It also forced me to overestimate the amount of time the change would take; I simply had to account for the amount of time I was going to spend testing the application after making my change.

The problem could be made worse by how critical these legacy systems were. Imagine making a mistake that causes people’s paychecks to be wrong (this didn’t happen to me, but it happened to a co-worker). Imagine trying to explain to the executives how you let that bug through your testing. If you’re lucky, your management understands what maintaining these systems means. But, more often than not, management will think you were lazy or careless.

How Did We Get Here

How did we let things get so bad that we can’t maintain an application?

At the time many of these legacy systems were created, many of the practices and toolsets we take for granted today didn’t exist. There was no such thing as automated unit testing. Refactoring was avoided unless absolutely necessary because it was expensive, regardless of how bad a piece of code was. Instead, comments were added to describe the functionality. Over time, this made the problem worse because the comments weren’t maintained and got out of sync with the code.

So, How Do We Stop This?

We avoid creating legacy systems by actively and aggressively maintaining the code base. We do this by refactoring constantly.

When code has good, thorough unit tests, refactoring is straightforward. You know that if the code passes the unit tests, it works. This also means that stinky code can be fixed immediately; it doesn’t have to age and get stinkier. This is how we avoid creating a legacy system.
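
As a sketch of that safety net, here is a behavior-pinning test written with plain assertions for self-containment (a real project would use a test framework like xUnit; the `InvoiceCalculator` class and its numbers are hypothetical):

```csharp
using System;
using System.Diagnostics;

public class InvoiceCalculator
{
    // The internals here can be restructured freely -- extracted, renamed,
    // simplified -- as long as the pinned behavior below still holds.
    public decimal Total(decimal subtotal, decimal taxRate) =>
        subtotal + subtotal * taxRate;
}

public static class InvoiceCalculatorTests
{
    public static void Total_AppliesTaxToSubtotal()
    {
        var calc = new InvoiceCalculator();

        // Pins today's behavior; any refactor that changes it fails fast.
        Debug.Assert(calc.Total(subtotal: 100.00m, taxRate: 0.07m) == 107.00m);
    }
}
```

With tests like this in place, a refactor that passes them is, by definition, functionally equivalent to the code it replaced.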

What Are the Benefits of Refactoring?

Here are the direct benefits of refactoring that I see:

  1. Code doesn’t get harder to maintain as it gets older.
  2. Code is cleaner, making it more likely we can add new features.
  3. Because of the above points, changes occur at a constant pace instead of slowing as the code base ages.
  4. It is easier to add functionality allowing the application to remain in use longer.

All of these add up to a lower cost of ownership for an application.

Why Don’t All Programmers Embrace Refactoring?

Refactoring isn’t a panacea; it does have costs.

Refactoring Makes Changes Take Longer

A change that includes refactoring will take longer, period. But it shouldn’t take much longer unless the code has gotten stinky because refactoring isn’t getting done.

However, without refactoring, an application can quickly turn into a legacy application where any change is ridiculously expensive. So while refactoring does incur a cost in the short-term, it incurs significant savings in the long term.

Refactoring Can Introduce Bugs

Any change we make to code can introduce bugs. That means refactoring can too. The one thing that helps refactoring though is quality unit tests. If the tests are thorough, you can be confident that the refactored code is functionally equivalent to the code being replaced. This seems to me to be a small price so that I can have an application I don’t mind maintaining.

Refactoring Doesn’t Add Value

It is true that refactoring doesn’t add new functionality. Some programmers equate this with refactoring adding no value. That is a short-term, if not immediate-term view. If you look at the long term, refactoring makes an application easier to maintain which means future changes will cost less. This can also lead to an extended life for an application which can result in significant cost savings.

Embrace Refactoring

So, embrace refactoring. Used properly it is a tool that will benefit you, your project team, the applications you write, and the customers your applications support.

Feedback – Don’t Overlook It

One of the biggest advantages of agile development is the constant feedback built into the process. It is built into every step:

  • Developers give feedback to BA’s
  • QA’s/Testers give feedback to BA’s and Developers
  • Customers/Business Users give feedback to the team

In the process I use with ThoughtWorks, when a developer or pair picks up a story, they read the story and discuss it with the BA before starting to code. This gets everyone on the same page before coding has begun, and it helps eliminate issues that arise because of how imprecise written specifications can be. It means any change to the story happens at the cheapest possible time.

The same is true for all feedback, when you get feedback quickly, your response can be immediate and cheap. When feedback is old, your response must be measured, might require additional research, and it will be much more expensive.

I continue to be surprised when I see or hear about feedback being treated as a non-critical piece of application development:

  • Feedback withheld because management is concerned the feedback will have a negative impact on velocity.
  • Customer feedback is gathered and documented by a specific person or group. The feedback is provided to the development team if the product owner believes the feedback needs to be addressed via a story.

This isn’t good enough. Feedback needs to be provided to the application team no matter what. Relatedly, it is okay to decide not to act on feedback. But hearing it can help evolve the overall design in subtle ways that better fit user needs.

Feedback isn’t and shouldn’t be unique to agile. It can be and should be worked into any and every development process.

Embrace feedback. It will help you better understand your customers and provide a better application.

Reflecting on How Programming Has Changed

My birthday was a few weeks ago. That got me nostalgic, and I started thinking about how much has changed since I started programming nearly 25 years ago at Arthur Andersen. I also realized that many things really haven’t changed. And that, unfortunately, we do seem to have to keep learning the same lessons.

The Internet

Easily, the biggest change is the existence of the Internet. There isn’t an aspect of programming the Internet hasn’t helped. Thinking back, one of its biggest impacts has been making documentation and assistance easily and readily available. In my first decade or so of development, if you had questions, the only sources tended to be the manufacturer’s manuals (yes, I have read many an IBM COBOL manual) and, if you were lucky (or maybe not so lucky), the corporate “expert”. Now, documentation is readily available online and there is tons of help.

REST Programming Architecture

Designing programs using a REST architecture has started to become very common for good reason. However, when I look back, I don’t think REST is a big change. Instead I think it has made incremental improvements over what was learned programming mainframes 40 years ago.

I started as a COBOL, CICS programmer. CICS used an architecture called pseudoconversational which meant that the program serves requests from the terminal and then shuts down. Sound familiar? It is an “old-fashioned” REST architecture. Granted the responses provided are much more limited in CICS than in REST. But CICS was created more than 50 years ago and was revolutionary in how many people it allowed a single mainframe to support.

Open Source

Open source is a great thing; it impacts so many areas of programming and computer use. Open source has been around for decades, though. CICS, the aforementioned IBM product, started life as an open-source program. So what has changed with open source? It is the Internet, making open-source programs and utilities readily available and making it just as easy to contribute.

Programming Languages

Tons of new programming languages have come on the scene, and many new capabilities have arrived with them. When I started, procedural programming languages were about it. There was some variation, but the capabilities and the way things were accomplished were pretty much the same in most languages.

Now, we still have some procedural languages though their role is much smaller. Additionally, we have object-oriented languages, functional languages, dynamic languages, and more.

This is the area that I feel has improved the most since I started programming. The improvements in encapsulation and abstraction are nothing short of fantastic, allowing us to build more complex systems without the code becoming more complex. In fact, in many ways, I believe systems today can be simpler to understand than equivalent systems from 20 years ago because of the extensive encapsulation and abstraction support in modern development languages.

In Conclusion

Being a computer programmer today is vastly different than it was when I graduated college. I wonder how different it will be when I look back in another 20 years? Will we be looking back on the quaint days of object-oriented and functional languages the way I’m looking back at procedural languages today?

Comparing Android and Apple iOS

I’ve had an Android phone for nearly three years now. A few days ago, I bought an iPad. I can’t help but compare them when I use them. So I thought I’d write an article on the differences I see. I’m also going to mention the differences I expected but don’t see.

I’m going to try and avoid things that are better on the iPad because of the larger screen. For example, surfing on the iPad is much more pleasant but that is mostly because of the much larger screen.

Also, my Android phone is a T-Mobile G2 phone running Android 2.2. This phone is vanilla Android; it does not include any of the crappy interfaces pushed by some of the phone manufacturers.

General Interface Stuff

I’ve heard that iOS is more polished than Android but didn’t really believe it. Yes, Android has some inconsistencies and some annoying behavior but overall it was pretty good. Now that I’ve used my iPad for a few days, I believe it. The polish shows throughout the whole product:

  • The interface is more consistent app to app (considering the bundled apps)
  • The interface for settings is consistent setting to setting

I think this makes iOS devices easier to learn than Android devices.

Winner – Apple

Touch Screen

Just like the polish, I’d heard that the Apple touch screens were superior but didn’t really believe it. My Android touch screens are responsive and easy to use. But after using the iPad, I definitely agree that the Apple touch screen is better.

I don’t know if the improvements are because of the hardware, software or both but the iPad is definitely a more responsive screen.

Winner – Apple

MobileMe


The MobileMe app included with the iPad is really nice. It lets you see where your iPad is (assuming it is on and connected), lets you remotely wipe your iPad or iPhone, and is integrated right into the device. Android does not include anything comparable.

Winner – Apple

Keyboards


I have big fingers and find typing on touch keyboards tough. From what I can judge, Apple’s keyboard is better than the default Android keyboard. I think this has more to do with design and the better touch screen than with the form factor.

However, Android allows third-party keyboard apps. And that means you can get an Android device with the fantastic Swype keyboard (which is on my G2). Swype is the best touch keyboard I have ever used. It is easy and fast to type on. I never use the physical keyboard on my phone because the Swype keyboard is better. And it is easier to use than the iPad keyboard.

If you ever have the opportunity to try a Swype keyboard, take it. You will be surprised how easy it is to use.

Winner – Android

The Hardware Back Button

Most Android devices include a back button implemented in the hardware. This button returns you to where you were previously. For example, if you opened the browser by clicking on a link in an email, the back button returns you to the email. A much better description of the hardware back button functionality can be found on the Bump Developer Blog.

I find myself looking for that button on the iPad. The iPad lets me do the same thing via the task bar but it just isn’t the same.

Winner – Android

Email Accounts

I’m a big Gmail fan. The Priority Inbox functionality added about a year ago is a big leap forward in helping me manage my email, and I use it regularly. That leads me to my dilemma.

Android incorporates Priority Inbox into its Gmail client. I don’t like the iPad email client because it doesn’t integrate Priority Inbox. But for other email accounts, the email clients are similar and both are capable. Both include solid Exchange support and good support for regular email accounts.

The one other difference between the two is how they manage contacts. With Android, Gmail contacts automatically show up as contacts in the phone. If I make a change to a contact, the change will automatically sync with Gmail. With my iPad, I have to sync contacts through iTunes.

Winner – Android 

iTunes


I’m not an iTunes fan. I installed it on my computer for the first time in a long time because I had to for my iPad. Why can’t Apple make this a better product?

Android, on the other hand, doesn’t force you to install any software. That also means Android doesn’t provide a way to sync content nor does it provide a way to backup your device.

Winner – Apple (barely)

Android Store vs. Apple Store

I had heard a lot about the Apple Store and how much better it was than the Android store. So far, I don’t agree with that. I feel that the experience in the stores is about the same. And both stores need to improve.

Winner – Draw

Installing Apps from the Store

Installing applications from either store is a simple process. Both devices have some quirks though:

  1. Apple forces you to reenter your Apple ID password every session you install applications from the store even if the app is free. I think this is a pain. And it’s enough of a pain that I’m thinking of changing my password to something simpler.
  2. When you install an app from the Android store, the second step is a page that tells you what resources the app will use. It sounds like a good idea, but the execution stinks. I believe these pages display but are never read.
  3. When you purchase apps (as opposed to installing free apps) from the Android store, the flow is different. Apple’s flow is identical regardless of whether you are paying for the app (which, I realize, is probably the reason for reentering your Apple ID password). The Apple approach is much better.

Winner – Apple


There are a few things I didn’t cover. First, I didn’t talk about the browsers: surfing is better on the iPad simply because the screen is bigger, and I felt that would prevent any kind of objective comparison.

Second, I didn’t talk about Flash support because I don’t think it matters. My G2 includes Flash support but I haven’t installed it. And I don’t think it is a very big deal.

Overall, I am happy with my Android phone and my iPad. Each has its own strengths and weaknesses, and both work great. It will be interesting to see if Apple can hold its lead in the phone and tablet arena.

Epsilon Security Breach

Since my last post on the Epsilon security breach, its impact has gotten bigger. In fact, in an article posted on Yahoo news, a Reuters writer stated:

In what could be one of the biggest such breaches in U.S. history, a diverse swath of companies that did business with Epsilon stepped forward over the weekend to warn customers some of their electronic information could have been exposed.

So the breach has grown since it was first announced last week. I wonder if it will get bigger still, since the same article goes on to say:

“While we are cooperating with authorities and doing a thorough investigation, we cannot say anything else,” said Epsilon spokeswoman Jessica Simon. “We can’t confirm any impacted or non-impacted clients, or provide a list (of companies) at this point in time.”

The problem for Epsilon seems to keep growing. As developers, we need to remember this, because the last thing any of us wants is to be the developer responsible for letting the hackers in.