The so-called ‘Lava Flow’ anti-pattern

This post is simply a comment on another blog post that grew too long to submit there. For context, please read Mike Hadlow’s original blog post first.

I would disagree with the statement “The Lava Layer (or Lava Flow) anti-pattern is well documented.” Of the two references given [in the original blog article], one (the Wikipedia entry) provides no examples or citations, and the other provides a mass of confusing and contradictory examples that cannot usefully be coerced into a single concept called ‘Lava Flow’. And how can you take seriously an article on software engineering that gives as the root cause of the problem: ‘Root Causes: Avarice, Greed, Sloth’?

You say in your article that one of the causes is multiple developers working on the project; they say it is often caused by ‘Single developer (lone wolf) written code’. Which is it?

Your examples refer to the adoption of new technologies (mostly in the data layer) over time; the references you cite mostly describe the phenomenon commonly known as ‘dead code’ or ‘cruft’. These are two completely different things and typically have different (and multiple) causes.

The lava metaphor implies something that is produced quickly (violently, even) and then ossifies quickly, producing unmaintainable but unremovable code. The examples you give are of code developed carefully over time but superseded by later technologies that coexist with it in the same project. These, too, are different things.

So it boils down to three separate ‘patterns’:

1. Dead code whose purpose people have forgotten, but which they are scared to change.

2. Code produced quickly at the start of the project that then cannot be refactored because it is being used by customers.

3. The adoption of new technologies (external libraries, in effect) that coexist with previous ones.

Pattern 1 is very well known and there are strategies for dealing with it, including documentation. Also, just because code is old doesn’t mean it is dead or crufty – it may be a C module or a Perl program still fulfilling a very useful function. It’s a shame that it wasn’t well documented, or that the documentation got lost, but… For code within the same project, modern analysis tools should be able to tell you whether that code is actually being used.

Pattern 2 shouldn’t happen these days if you are using good programming practices and good refactoring tools. If it’s legacy code, you need to deal with it as part of normal housekeeping.

Pattern 3 is not always done for the negative reasons you give. In my experience, the decision to use a new data access library is made for good reasons: performance, ease of development, or compatibility. We moved from OLEDB and ODBC to ADO for ease of development (no more manually managing drivers or low-level connectivity); we moved from ADO DataSets to NetTiers (or similar code generators) for ease of development (no need to write all those stored procedures and DTO classes yourself); we moved from code generators to an ORM to get rid of the thousands of lines of generated code clogging up our solutions; we moved part of our application to something lightweight like Dapper (or, in my case, back to ADO!) to avoid the performance overhead of overly fat ORM queries – the sketch below contrasts the two styles. Sometimes you are forced into an upgrade because your old code is no longer compatible with some new dependency of another part of the system; it is sometimes easier to upgrade the old code to use your new ‘slim’ library than to patch it for compatibility.
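To make the trade-off concrete, here is a minimal sketch contrasting hand-rolled ADO.NET with the same query through Dapper. The connection string, the Products table and the Product class are all hypothetical, invented for illustration:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ProductQueries
{
    // Hypothetical connection string - adjust for your environment.
    const string ConnStr = "Server=.;Database=Shop;Integrated Security=true";

    // Raw ADO.NET: you open the connection and map each row by hand.
    public static List<Product> GetAllWithAdo()
    {
        var products = new List<Product>();
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("SELECT Id, Name FROM Products", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    products.Add(new Product
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1)
                    });
                }
            }
        }
        return products;
    }

    // Dapper: the same query, with row-to-object mapping done for you.
    public static List<Product> GetAllWithDapper()
    {
        using (var conn = new SqlConnection(ConnStr))
        {
            return conn.Query<Product>("SELECT Id, Name FROM Products").AsList();
        }
    }
}
```

The ADO version gives you full control over the reader (and no mapping overhead); the Dapper version costs a few lines less per query, which is exactly the ease-of-development argument that drives these migrations.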

In my experience of large enterprise projects, the phenomenon you describe typically affects different projects within a suite of applications. I have rarely, if ever, seen what you describe inside the same application, unless it is a conscious decision to use ADO/Dapper for querying and an ORM for persistence – a good choice in my opinion (sketched below). For example, one application (let’s call it Interchange) was written using a code generator; there is one developer in the department who is an expert with that code generator. Other developers know how to use it but prefer to leave it to him. He actually quite likes that code and it has never let him down, albeit it is a bit quirky to deal with. A second application comes along (let’s call it Aster) which has to talk to a different database; the new team leader has just learned about ORM and doesn’t want to use the old data layer from the other project, so he implements that. It’s a steep learning curve for the other developers, but they like to do the latest stuff. That team leader moves on to work on some OSS project and another one comes in and says ‘Hey, Entity Framework is the dog’s bollocks! Let’s write the next project with that.’ So you now have three separate projects using three methods of accessing data. They are hermetically sealed, so in itself that is not a bad thing – apart from all your developers having to know enough to work with all three, which is not a bad thing either, because they will come to understand the object-relational impedance mismatch for themselves rather than reading about it on blogs.
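By way of illustration, here is a minimal sketch of that querying/persistence split, assuming NHibernate as the ORM; the Orders schema and class names are hypothetical, not from the original post:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;
using NHibernate;

// Writes go through the ORM (change tracking, cascades, transactions);
// reads go straight to SQL via Dapper (no session, no mapping machinery).
public class OrderRepository
{
    private readonly ISessionFactory _sessionFactory;
    private readonly string _connStr;

    public OrderRepository(ISessionFactory sessionFactory, string connStr)
    {
        _sessionFactory = sessionFactory;
        _connStr = connStr;
    }

    // Persistence via NHibernate.
    public void Save(Order order)
    {
        using (var session = _sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            session.SaveOrUpdate(order);
            tx.Commit();
        }
    }

    // Querying via Dapper: one SQL statement into a flat read model.
    public IEnumerable<OrderSummary> RecentSummaries(int days)
    {
        using (var conn = new SqlConnection(_connStr))
        {
            return conn.Query<OrderSummary>(
                @"SELECT Id, CustomerName, Total FROM Orders
                  WHERE PlacedOn > DATEADD(day, -@Days, GETDATE())",
                new { Days = days });
        }
    }
}

// Hypothetical types: entity members are virtual so NHibernate can proxy them.
public class Order
{
    public virtual int Id { get; set; }
    public virtual string CustomerName { get; set; }
    public virtual decimal Total { get; set; }
}

public class OrderSummary
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}
```

The two data-access methods coexist deliberately and each does the job it is best at, which is quite different from layers left behind by accident.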

The sociological aspect of the problem (‘not invented here’) is also well known, and a pain, but it is normally caused by team leaders and dev managers, not lowly developers. As for developers coming and going, possibly bringing with them different preferences or prejudices, I don’t think this is much more risky to a project than an individual developer changing over time. ‘Je est un autre’ (‘I is another’), as Rimbaud said. I look at my own code one year later and curse myself for writing it that way or, even worse, cannot understand what it is doing. I have even been known to curse the developer who wrote it, then look through the source control history and discover it was me. If I can deduce any kind of moral or pattern from this, it is: try to write self-documenting code to the best standards, and if the intention of the developer is not easily revealed by the code, document it, inline or externally.

Converting a large project from one technology to another is normally so painful that I prefer to leave it as it is. Since most of my own ‘helper’ libraries are by then oriented to that stack (NHibernate, for example), it exerts a gravitational force on my future choices: I am less likely to adopt Entity Framework when all those useful NH helpers would need to be rewritten. The sketch below shows the kind of helper I mean.
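For instance, a stack-specific helper might look like this – a hypothetical paging extension invented for illustration, not something from the original post. Its very signature is tied to NHibernate’s IQueryOver, which is what creates the gravitational pull:

```csharp
using NHibernate;

public static class NhHelpers
{
    // Pages any IQueryOver query. Handy, but the parameter and return types
    // are NHibernate-specific, so this helper (and every caller of it) would
    // have to be rewritten to move to Entity Framework.
    public static IQueryOver<T, T> Page<T>(this IQueryOver<T, T> query,
                                           int page, int pageSize)
    {
        return query.Skip((page - 1) * pageSize).Take(pageSize);
    }
}
```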

In place of the ‘Lava Flow’ analogy I would just say this: the world is a dynamic place, and you need to develop philosophies and methods that allow you to maintain stability and make progress while things (and people) change around you. Or, in three words: Read Bertrand Meyer.