The Need for Technical Discipline

Technical discipline is a critical part of any development team. To me, technical discipline is the practice of continually improving the code; in other words, paying down technical debt as it is created. Doing this can mean the difference between an enjoyable project and a dreaded one.

Every story completed by every developer adds technical debt. Sometimes the debt is realized immediately; other times it only appears because of a future change request. The problem with debt is that it makes change more difficult. And the more debt that is allowed to build up, the more difficult changes become.

Once the debt builds to a critical mass, the project is in trouble. Simple changes that used to take two days take twice as long. Complex changes take three or four times as long. And it’s difficult to explain why.

This is when the pressure to deliver stories starts to weigh down on everyone. And everybody gets unhappy because this great project has just become the latest debacle.

How Did We Get Here?

Most projects don’t start intending to build up lots of technical debt. Usually the road to having mountains of technical debt is paved with good intentions.

It starts with small things. Sometimes the sacrifices begin with a decision to deliver a time-sensitive item quickly, cutting whatever corners we have to. More commonly, it is a lot of little decisions made by the developers as they are developing stories:

  • Avoiding refactoring work to hit the estimate
  • Limiting automated testing to hit the estimate
  • Not removing unused code
  • Leaving ugly, complicated code as-is
  • Duplicating common code instead of extracting it

For a more thorough explanation of the types of technical debt, check out http://martinfowler.com/bliki/TechnicalDebtQuadrant.html.
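As one example of the kind of debt these little decisions create, consider the duplication case. The sketch below is contrived (the `Orders` class and its rule are hypothetical, not from any real project): the validation rule is extracted once instead of being copied into both call sites, so a future rule change happens in one place.

```csharp
using System;

static class Orders
{
    // The shared rule, extracted once instead of copy-pasted into both
    // call sites below. A change to the rule now happens in one place.
    public static bool IsValidQuantity(int quantity) =>
        quantity > 0 && quantity <= 100;

    public static void PlaceOrder(int quantity)
    {
        if (!IsValidQuantity(quantity))
            throw new ArgumentOutOfRangeException(nameof(quantity));
        // ... place the order ...
    }

    public static void AmendOrder(int quantity)
    {
        if (!IsValidQuantity(quantity))
            throw new ArgumentOutOfRangeException(nameof(quantity));
        // ... amend the order ...
    }
}
```

Skipping the extraction saves a few minutes today; the debt comes due the first time the rule changes and only one of the copies gets updated.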

When the pile of technical debt gets big enough, the developers try to start paying it down. But this leads to a different problem. The business team starts to pressure the developers to cut out those “unnecessary” activities because delivery is getting slower. After all, if it was okay to skip these things up until now, why isn’t it okay now? And if the development team gives in to that pressure, things get worse on the project quickly.

The development team has to realize that it is largely responsible for this situation. By skipping the activities that extended delivery times early on, it set the precedent that those activities are unnecessary.

One way I’ve seen teams attempt to fix this is with refactoring stories. That is, creating a story to fix the mess made while developing another story. I’ve seen lots of projects with refactoring stories, but I’ve never seen a project fix a technical debt problem with them. The approach doesn’t work because refactoring stories (stories that consume valuable development time and deliver nothing visible) have a way of always having the lowest priority.

How Do We Fix It? And Keep It From Recurring?

The fix is simple but hard to implement. Don’t cut corners during development except when there are pressing reasons to do so. And those exceptions should be few. In other words, the developers should strive to slightly improve the code base with every story.

This is a difficult transition. The longer the team has let discipline lapse, the more difficult it is. But after just a few painful weeks, the team should start seeing improvements in delivery times. Things will continue to improve, slowly, for quite a while.

When making the transition, it’s important not to try to fix everything at once; that’s impossible. Instead, focus on the areas of the codebase that cause the most difficulty with the stories currently being played. Other areas can be addressed later, when other stories change them.

When technical discipline lapses, the development team can feel like it is being a better business partner by delivering faster. The reality is the opposite: delivering faster now means paying dearly later. Being the better partner means maintaining discipline all the time.

Change – Embrace It

Change is constant in development. Learn to embrace it.

I’m a developer. I’ve been a developer for more years than I care to think about. Thinking back, I realize that how I code today is different enough from what I did after graduating that it is almost a different profession.

I started writing COBOL on a mainframe. That COBOL had SQL embedded in it to talk to the database. CICS commands were also added to send and get data from the user.

Today, I write code in a few different languages. My current project uses C#, JavaScript, HTML, and CSS, plus a large number of libraries like React, Entity Framework, and Unity (Unity for IoC).

When I look at the work at a low level, not much has changed:

  • I’m still using a relational database to save and retrieve data
  • Data continues to be presented by “screens”
  • Those “screens” are stateless

But as soon as you move above that lowest level, nothing is the same:

  • Database activity is done through an ORM; there’s very little SQL coding
  • Getting data to and from the server is taken care of by libraries; this only shows up as some attributes/markers within the code
  • Development environments provide instant feedback about compilation errors
  • Automated tests identify what broke almost instantly

When you look at the applications themselves, there is no resemblance between the applications I built 20 years ago and the ones I build today. To me, the changes are profound.

Getting from then to today was challenging. The change I struggled with the most was the move to TDD. It was such a different take on how I was used to approaching development that I struggled quite a bit. But once I got it, I realized its power, and it has influenced every aspect of my development, even when I’m not following TDD. I also think it was the single biggest personal improvement I have made. And I regret that I didn’t start it sooner.

And that’s how some changes go. You adopt them and slowly realize they are a significant improvement in how you do things. But not all changes are like this. Some don’t amount to much. Others are not a fit at all. The most difficult category, though, is the changes that fail because they were never a good idea in the first place:

  • 4GLs (4th-generation languages) were going to eliminate the need for developers; they would allow business users to code their own programs. While some 4GL concepts have made it into the mainstream, the idea itself has disappeared from the market.
  • The client-server model was the direction of the future, a way to move off of the mainframe and provide better interfaces to the business users. Then the scaling issues became apparent and all but killed the approach. While it lives on as layered architectures deployed on servers, the original client-server model has essentially disappeared.

Change is difficult. Some changes are more difficult than others. But we need to embrace change because changing is what allows us to continually improve our skills. And that means we can deliver better software. And that is what we all want.

Exception Handling Anti-Patterns

Avoid these common exception handling issues.

Exception handling seems easy. But, done poorly, it can cause problems of its own or worse, it can cause problems that hide other problems.

Eating Exceptions

The worst anti-pattern is eating exceptions. This is when an exception is caught, nothing (or minimal logging) is done to handle the exception, and processing continues.

try {
    // ... do stuff ...
}
catch {
    Debug.WriteLine("Oh no, an exception occurred.");
}

This pattern causes some of the hardest to find bugs. Especially when the try block calls code in other classes.

Why are these bugs so nasty? The failure leaves an object (or objects) in an invalid state, the failure is ignored, and processing continues assuming everything is fine. This can lead to corrupted data and other hard-to-reproduce exceptions.

Finding these exceptions is onerous, usually requiring some luck as well as skill.

What’s the fix for this pattern? More than 99% of the time, the best fix is to remove the try/catch block. Let the exception be handled by the application-level handlers; let processing stop as soon as an object is in an invalid state.

One of the first things I do when I start on a new codebase is to remove try/catch blocks that are not:

  1. Application-level exception handlers
  2. Actually handling an exception

This sometimes causes some angst among the other developers. But it will also slowly (or, sometimes, not so slowly) expose those hard-to-find bugs that have been lurking, so that they can be fixed.
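As a sketch of what “let the application-level handler deal with it” can look like: a single handler at the top of the call stack logs the full exception and stops processing, so the code below it needs no local try/catch blocks. The `GlobalHandler` class and its shape are illustrative, not from any particular framework.

```csharp
using System;

static class GlobalHandler
{
    // Runs the given work under a single application-level handler.
    // Returns null on success, or the full exception text on failure.
    public static string Run(Action work)
    {
        try
        {
            work();
            return null;
        }
        catch (Exception e)
        {
            // ToString() includes the type, the message, the stack trace,
            // and any inner exceptions -- nothing is swallowed or lost.
            return e.ToString();
        }
    }
}
```

With a handler like this at the outermost layer, a failure deep in the business code stops processing immediately and surfaces with its full context, instead of being eaten halfway up the stack.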

Losing Exception Information

Another common problem is losing track of some of the critical information about the exception, most commonly, the stack trace.

Let’s assume we are doing some database update code, and we need to handle exceptions to issue a rollback.

var transaction = new Transaction(connection);
try {
    // ... update the database ...
    transaction.Commit();
}
catch (Exception e) {
    transaction.Rollback();
    throw new Exception($"Issued rollback {e.Message}");
}

In this example, the stack trace is lost. Does that matter?

It can when you are trying to find and fix the bug. Assume you are using Entity Framework and SQL Server. One common exception is an EntityException (or a descendant) with the message:

String or binary data would be truncated. The statement has been terminated.

Finding the problem behind this error is a challenge in the best of circumstances. Finding it without knowing which update caused the problem is that much more difficult.

Fortunately, fixing this problem is easy. Instead of appending the original exception message, pass the entire exception as the inner exception:

var transaction = new Transaction(connection);
try {
    // ... update the database ...
    transaction.Commit();
}
catch (Exception e) {
    transaction.Rollback();
    throw new Exception("Issued rollback", e);
}

Now, assuming the application level exception handler provides stack traces, finding the statement that caused the problem is easy.
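To see concretely what passing the inner exception preserves, here is a small self-contained sketch (the `RollbackDemo` class and its methods are contrived stand-ins for the database code above): the wrapping exception still carries the original message, type, and stack trace through its `InnerException`.

```csharp
using System;

static class RollbackDemo
{
    // Stand-in for the failing update statement.
    static void UpdateDatabase() =>
        throw new InvalidOperationException(
            "String or binary data would be truncated.");

    // Wraps the failure the recommended way and returns what an
    // application-level handler would receive.
    public static Exception FailWithRollback()
    {
        try
        {
            UpdateDatabase();
            return null;
        }
        catch (Exception e)
        {
            // The original exception rides along as InnerException,
            // so its message and stack trace survive intact.
            return new Exception("Issued rollback", e);
        }
    }
}
```

Calling `FailWithRollback()` and printing the result with `ToString()` shows both exceptions, which is exactly the detail the `$"Issued rollback {e.Message}"` version throws away.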

As with the previous anti-pattern, when I start on a new codebase I search for references to the Message property of any exception. And I refactor them to pass the entire exception, because the information being lost is too important.

C# #region Tags and Why I Don’t Like Them

I’m back on the Microsoft stack (ASP.NET MVC, C#, etc.) for the first time in a while. When I worked with it previously, I hated seeing #region markers in any code file. Now, since I’m the lead, I have a policy of deleting them wherever I find them. No exceptions. I told my team I would do this, but I never really explained why.

The #region tag was originally used to segregate generated code from the code written by the developer. Those days are over; partial classes have totally eliminated this usage.
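Partial classes let the generated and hand-written halves of a class live in separate files, so no region is needed to wall off generated code. A minimal sketch (the `Order` class and file names are illustrative; the two declarations would normally sit in separate files):

```csharp
// Order.Designer.cs -- the generated half (illustrative)
public partial class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// Order.cs -- the hand-written half
public partial class Order
{
    // Hand-written logic lives in its own file, untouched when
    // the designer regenerates the other half.
    public bool IsLarge() => Total > 1000m;
}
```

The compiler merges the declarations into one class, so the separation costs nothing at runtime.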

Today, I see two main uses of #region tags.

First, they are used to group types of things. Region tags are put around:

  • Properties
  • Public Methods
  • Private Methods

The second common usage is grouping related things:

  • Around an event and its delegate
  • All methods that talk to the database
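The first usage typically looks something like this contrived example (the `CustomerService` class is hypothetical):

```csharp
public class CustomerService
{
    #region Properties
    public string ConnectionString { get; set; }
    #endregion

    #region Public Methods
    public void Save()
    {
        Validate();
        // ... persist the customer ...
    }
    #endregion

    #region Private Methods
    private void Validate()
    {
        // ... check invariants ...
    }
    #endregion
}
```

Each region collapses to a single line in the editor, which is exactly the behavior the next sections take issue with.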

So why don’t I like #region markers?

Regions Hide Code

When a class with #region tags is opened in Visual Studio, the regions are collapsed by default. In other words, at least some of the implementation of the class is hidden when the class is opened. This can make it difficult to find what you are looking for. And expanding the regions breaks your train of thought by preventing you from going straight to the code you wanted.

Regions Hide Size

One of the more common design problems is a class that does too much. The feedback from scrolling and navigating a large class is one of the signals that it is time for a refactor. Regions eliminate this feedback; they can make a class that is way too large not feel that way. But that feeling is an illusion. As soon as any serious work needs to be done, the size of the class quickly becomes an obstacle to maintaining it.

Regions Hide Complexity

The hidden code can also hide complexity. If a class is doing too much, or its responsibilities are muddled, regions can be used to group the functionality to make things appear more maintainable. But again, it is an illusion. As soon as anything significant needs to change, especially if that change crosses regions, the regions make the change more difficult.

Let’s Make Maintainable Code

One of our goals when developing most systems should be delivering maintainable code. Regions work against this.

Microsoft has acknowledged that #region tags should no longer be used. Today, the default settings for StyleCop do not allow #region tags. In other words, Microsoft no longer believes #region tags need to be used. So neither should you.

The Case for Refactoring

In my career as a programmer, I have done a lot of maintenance development. Some of that maintenance was on what are referred to as “legacy systems”. For those who don’t know, a legacy system is an application that is fragile enough that no developer wants to touch it.

According to Wikipedia, one of the characteristics of a legacy system (i.e. legacy application) is:

These systems can be hard to maintain, improve, and expand because there is a general lack of understanding of the system; the staff who were experts on it have retired or forgotten what they knew about it, and staff who entered the field after it became “legacy” never learned about it in the first place. This can be worsened by lack or loss of documentation.

I would add that a big problem besides lack or loss of documentation is incorrect documentation.

From a blog on Forrester.com:

  • The app was written some time ago, its original authors moved on taking knowledge with them (lost knowledge).
  • Subsequent authors changed the application based on insufficient / lost knowledge, introducing bugs and de-stabilizing it.
  • The app is now brittle, and people are reluctant to change it for fear of breaking it.

The thing I remember most about working on these systems is that making changes was scary; you never knew what your change might break. That made me hesitant to make even the simplest of changes. It also forced me to overestimate the amount of time the change would take; I simply had to account for the amount of time I was going to spend testing the application after making my change.

The problem could be made worse by how critical these legacy systems were. Imagine making a mistake that causes people’s paychecks to be wrong (this didn’t happen to me, but it happened to a co-worker). Imagine trying to explain to the executives how you let that bug through your testing. If you’re lucky, your management understands what maintaining these systems means. But more often than not, management will think you were lazy or careless.

How Did We Get Here?

How did we let things get so bad that we can’t maintain an application?

At the time many of these legacy systems were created, many of the practices and toolsets we take for granted today didn’t exist. There was no such thing as automated unit testing. Refactoring was avoided unless absolutely necessary because it was expensive, regardless of how bad a piece of code was. Instead, comments were added to describe the functionality. Over time, this made the problem worse because the comments weren’t maintained and got out of sync with the code.

So, How Do We Stop This?

We avoid creating legacy systems by actively and aggressively maintaining the code base. We do this by refactoring constantly.

When code has good, thorough unit tests, refactoring is straightforward. You know that if the code passes the unit tests, it works. This also means that stinky code can be fixed immediately; it doesn’t have to age and get stinkier. This is how we avoid creating a legacy system.
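As a small illustration of how a unit test makes refactoring safe (the `Invoice` class, its method, and the refactor itself are contrived): the test pins the behavior down, so the implementation can be rewritten freely. If the test passes before and after the change, the refactor preserved the functionality.

```csharp
using System;
using System.Linq;

static class Invoice
{
    // Refactored version: a hand-rolled accumulation loop was replaced
    // with the equivalent declarative form. Behavior is unchanged.
    public static decimal Total(decimal[] lineAmounts) => lineAmounts.Sum();

    // The unit test that pins the behavior down across the refactor.
    public static void Test_Total()
    {
        if (Total(new[] { 1.5m, 2.5m }) != 4.0m)
            throw new Exception("Total changed behavior");
        if (Total(new decimal[0]) != 0m)
            throw new Exception("Total of an empty invoice should be zero");
    }
}
```

The test is the safety net: without it, even this trivial rewrite would need manual re-testing, which is exactly the cost that discourages refactoring on untested codebases.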

What Are the Benefits of Refactoring?

Here are the direct benefits of refactoring that I see:

  1. Code doesn’t get harder to maintain as it gets older.
  2. Code is cleaner, making it more likely we can add new features.
  3. Because of the above points, changes occur at a constant pace instead of slowing as the code base ages.
  4. It is easier to add functionality allowing the application to remain in use longer.

All of these add up to a lower cost of ownership for an application.

Why Don’t All Programmers Embrace Refactoring?

Refactoring isn’t a panacea; it does have costs.

Refactoring Makes Changes Take Longer

A change that includes refactoring will take longer, period. But it shouldn’t take much longer unless the code has gotten stinky because refactoring isn’t getting done.

However, without refactoring, an application can quickly turn into a legacy application where any change is ridiculously expensive. So while refactoring does incur a cost in the short-term, it incurs significant savings in the long term.

Refactoring Can Introduce Bugs

Any change we make to code can introduce bugs, and that means refactoring can too. What protects refactoring, though, is quality unit tests. If the tests are thorough, you can be confident that the refactored code is functionally equivalent to the code it replaces. That seems to me a small price to pay for an application I don’t mind maintaining.

Refactoring Doesn’t Add Value

It is true that refactoring doesn’t add new functionality. Some programmers equate this with refactoring adding no value. That is a short-term, if not immediate-term view. If you look at the long term, refactoring makes an application easier to maintain which means future changes will cost less. This can also lead to an extended life for an application which can result in significant cost savings.

Embrace Refactoring

So, embrace refactoring. Used properly it is a tool that will benefit you, your project team, the applications you write, and the customers your applications support.

Feedback – Don’t Overlook It

One of the biggest advantages of agile development is the constant feedback built into the process. It is built into every step:

  • Developers give feedback to BAs
  • QAs/testers give feedback to BAs and developers
  • Customers/business users give feedback to the team

In the process I use with ThoughtWorks, when a developer or pair picks up a story, they read the story and discuss it with the BA before starting to code. This gets everyone on the same page before coding has begun, and it helps eliminate issues that arise from how imprecise written specifications can be. It means the story is changed at the cheapest possible time.

The same is true for all feedback: when you get feedback quickly, your response can be immediate and cheap. When feedback is old, your response must be measured, might require additional research, and will be much more expensive.

I continue to be surprised when I see or hear about feedback being treated as a non-critical piece of application development:

  • Feedback withheld because management is concerned the feedback will have a negative impact on velocity.
  • Customer feedback is gathered and documented by a specific person or group. The feedback is provided to the development team if the product owner believes the feedback needs to be addressed via a story.

This isn’t good enough. Feedback needs to reach the application team no matter what. It is also okay to decide not to act on feedback. But hearing it can help the team evolve the overall design in subtle ways so that it better fits the users’ needs.

Feedback isn’t and shouldn’t be unique to agile. It can be and should be worked into any and every development process.

Embrace feedback. It will help you better understand your customers and provide a better application.

Reflecting on How Programming Has Changed

My birthday was a few weeks ago. That got me nostalgic, and I started thinking about how much has changed since I started programming nearly 25 years ago at Arthur Andersen. I also realized that many things really haven’t changed. And that, unfortunately, we do seem to have to keep learning the same lessons.

The Internet

Easily, the biggest change is the existence of the Internet. There isn’t an aspect of programming the Internet hasn’t helped. Thinking back, one of its biggest impacts has been making documentation and assistance easily and readily available. In my first decade or so of development, if you had questions, the only sources tended to be the manufacturer’s manuals (yes, I have read many an IBM COBOL manual) and, if you were lucky (or maybe not so lucky), the corporate “expert”. Now documentation is readily available online, and there is tons of help.

REST Programming Architecture

Designing programs using a REST architecture has become very common, for good reason. However, when I look back, I don’t think REST is a big change. Instead, I think it makes incremental improvements over what was learned programming mainframes 40 years ago.

I started as a COBOL/CICS programmer. CICS used an architecture called pseudo-conversational, which meant the program served a request from the terminal and then shut down. Sound familiar? It is an “old-fashioned” REST architecture. Granted, the responses CICS could provide are much more limited than REST’s. But CICS was created more than 50 years ago, and it was revolutionary in how many users it allowed a single mainframe to support.

Open Source

Open source is a great thing; it impacts so many areas of programming and computer use. But open source has been around for decades. CICS, the aforementioned IBM product, started life as an open source program. So what has changed? The Internet has made open source programs and utilities readily available, and it has made contributing just as easy.

Programming Languages

Tons of new programming languages have come on the scene, and many new capabilities have arrived with them. When I started, procedural programming languages were about it. There was some variation, but the capabilities, and the way things were accomplished, were pretty much the same in most languages.

Now we still have some procedural languages, though their role is much smaller. Additionally, we have object-oriented languages, functional languages, dynamic languages, and more.

This is the area that I feel has improved the most since I started programming. The improvements in encapsulation and abstraction are nothing short of fantastic, allowing us to build more complex systems without the code itself becoming more complex. In fact, in many ways I believe systems today can be simpler to understand than equivalent systems from 20 years ago because of the extensive encapsulation and abstraction support in modern languages.

In Conclusion

Being a computer programmer today is vastly different from what it was when I graduated college. I wonder how different it will be when I look back in another 20 years. Will we be looking back on the quaint days of object-oriented and functional languages the way I’m looking back at procedural languages today?