Software morphs. Market realities, shifts in technology, new features, the removal of things no longer needed, refactoring: these are all valid reasons for initiating a change. However, change is a very costly affair where large software systems are concerned.
LARGE SOFTWARE AND IMPORTANCE OF KNOWING ITS HISTORY
The software I am employed to work on is humongous. I am talking millions of SLOC. How huge is that? Two top Google sources[1, 2] list the word counts (yes, the word count, not the line count) of some of the most acclaimed large novels:
311,596 – The Fountainhead – Ayn Rand
316,059 – Middlemarch – George Eliot
349,736 – Anna Karenina – Leo Tolstoy
364,153 – The Brothers Karamazov – Fyodor Dostoyevsky
365,712 – Lonesome Dove – Larry McMurtry
418,053 – Gone with the Wind – Margaret Mitchell
455,125 – The Lord of the Rings – J. R. R. Tolkien
561,996 – Atlas Shrugged – Ayn Rand
587,287 – War and Peace – Leo Tolstoy
591,554 – A Suitable Boy – Vikram Seth
If each line holds, say, 10 words, the line counts of the above novels fall between roughly 30,000 and 60,000. I know comparing a novel with software is not exactly logical, but it is a funny comparison nevertheless.
I have all these beasts at home (except the one crossed out). Reading them is an undertaking in itself. The software I am talking about has none of the emotional or philosophical underpinnings these novels carry, and no meandering yet arresting plot. It is pure logic. Yes, certain logical blocks may meander and cause performance hits, but those can be located in the source code and culled.
In large software, you cannot ignore history. Every line that makes no sense to you may have a historical reason for its existence. The reason may be valid or otherwise; it is the job of whoever changes the line to make sure they are not awakening an ancient beast. I have seen code written to work around a .NET bug where the developer forgot to add a comment near the code block, because he had already added one to the change-set while checking in to RTC Jazz or whatever the source control was. Or his comment may not make much sense because he wrote poorly or was plain lazy. Or he may not have left a comment anywhere. There is not much one can do in such a situation other than:
- Scan the history of the file(s) in the version control system.
- Do your basic research, then ask the developer. If he has left the company, ask other experts. Doing your own research first reduces the burden on others and makes them more willing to help.
- If nobody seems to have any idea, perform the change in a private branch and make sure that nothing unintended occurs – proper regression testing.
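The first step above can be sketched with git. This is a minimal, runnable illustration (it builds a throwaway repository so the commands have something to act on); the same idea applies to RTC Jazz, SVN, or any other version control system, with their own history commands. The file name and commit message are made up for the example.

```shell
# Throwaway repo, only so the history commands below are runnable end to end.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email dev@example.com
git config user.name "Dev"
printf '/* Sleep(0): works around a scheduler quirk */\n' > module.c
git add module.c
git commit -qm "Work around scheduler quirk seen under load"

# Who last touched each line, and in which commit?
git blame module.c

# Full change history of the file, following renames.
git log --follow --oneline -- module.c

# History of just a line range (line 1 here) -- often the fastest way
# to find the commit message that explains a mysterious line.
git log -L 1,1:module.c
```

Even when the code itself carries no comment, `git log -L` frequently surfaces a check-in message that does.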
WHY MICROSOFT OFFICE FORMATS ARE SO COMPLICATED AND OTHER THINGS
Microsoft Office, too, is an extremely large and complex piece of software. LibreOffice has a hard time keeping up with it; all moving targets are difficult to shoot at. Imagine shooting at a moving target blindfolded and you will start to appreciate how hard the folks at LibreOffice work. Joel Spolsky explains why the Office binary formats look the way they do:
A normal programmer would conclude that Office’s binary file formats:
- are deliberately obfuscated
- are the product of a demented Borg mind
- were created by insanely bad programmers
- and are impossible to read or create correctly.
The reasons he mentions for the complexity are:
- They were designed to use libraries
- They were not designed with interoperability in mind
The assumption, and a fairly reasonable one at the time, was that the Word file format only had to be read and written by Word
- They have to reflect all the complexity of the applications
- They have to reflect the history of the applications
A lot of the complexities in these file formats reflect features that are old, complicated, unloved, and rarely used. They’re still in the file format for backwards compatibility, and because it doesn’t cost anything for Microsoft to leave the code around. But if you really want to do a thorough and complete job of parsing and writing these file formats, you have to redo all that work that some intern did at Microsoft 15 years ago.
So, LibreOffice suffers because interoperability was not a concern for Microsoft. This happens in every field, because you cannot design anything that covers all bases; we humans are simply not capable of it. Maintaining backward compatibility also makes it difficult to shed historical baggage.
All this adds to the complexity. Then there are the dark forces that cannot be ignored:
“Embrace, extend, and extinguish”, also known as “Embrace, extend, and exterminate”, is a phrase that the U.S. Department of Justice found was used internally by Microsoft to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences to disadvantage its competitors.
UNDOCUMENTED API AND BACKWARD COMPATIBILITY
An application developer should stay away from what are popularly known as undocumented APIs:
First of all: undocumented API is a wrong term. API means Application Programming Interface. But the function calls that usually get the title undocumented API are not intended to be used to program against. A better name would be undocumented program internal interface or short undocumented interface. That’s the term I will use here in this article.
These are interfaces that are supposed to be used only by Microsoft and can be removed at any time. However, some of them are very powerful, and power corrupts.
Raymond Chen also describes how difficult it is to get rid of undocumented APIs once they get used in some popular software:
Suppose you’re the IT manager of some company. Your company uses Program X for its word processor and you find that Program X is incompatible with Windows XP for whatever reason. Would you upgrade?
Charles Petzold also explains why switching standards is so hard:
The computer industry has always been subject to upheavals in standards, with new standards replacing old standards and everyone hoping for backward compatibility.
THE COST OF SOFTWARE CHANGE
Eric Lippert explains the prohibitive cost of fixing a bug or adding a new feature:
That initial five minutes of dev time translates into many person-weeks of work and enormous costs, all to save one person a few minutes of whipping up a one-off VB6 control that does what they want. Sorry, but that makes no business sense whatsoever.
IBM’s research shows that the cost of fixing a defect rises steeply the later in the life cycle it is discovered. When fixing a bug or adding a feature can be so costly on complex software systems, carrying historical baggage makes more sense than switching standards.
When trying to fix a bug, enhancing software or adding a new feature:
- Do not forget history.
- Do not forget the cost involved.
- Do not forget that we all are humans and mistakes are unavoidable. Be humble and move on.