The Rule of Transparency is an important characteristic of software design that emerged decades ago from the Unix community. The concept was introduced and well described by the developer and open source advocate Eric Raymond, who studied the modus operandi of open source projects and their ability to produce reliable and maintainable software against all odds by engaging armies of developers all around the globe.
This article is meant as an introduction to transparency in software design and will only scratch the surface of its many implications. The main purpose is to provide the reader with some directions on the subject, a high level map of well-known techniques and strategies that best serve the goal of obtaining a transparent design.
What is the Rule of Transparency?
The Rule of Transparency is the combination of two observable qualities of design that can be described as follows:
“A software system is transparent when you can look at it and immediately understand what it is doing and how. It is discoverable when it has facilities for monitoring and display of internal state so that your program not only functions well but can be seen to function well.”
Unfortunately, in commercial software, the Rule of Transparency is rarely taken into consideration as a whole, often reduced to a security or legal concern. Its original purpose, however, is to create a favorable ground for constantly monitoring and ensuring the quality of software craftsmanship, even in challenging scenarios.
Why Transparency is Important
Design problems and flaky solutions are often concealed behind complexity.
While transparency itself does not guarantee good design, the ability to see the design clearly and explicitly in the first place is a natural spotlight that makes problems harder to ignore.
Code transparency is the first pragmatic step towards quality, a requirement for long-lasting software that is capable of outliving functional, technological and organizational changes.
Designing Software for Transparency
Transparency is about designing software to make obvious how the code works, for the purpose of code inspection, understanding, monitoring, and debugging. Before we dig into the details, there are a couple of premises that are worth mentioning:
- Transparency is not the opposite of encapsulation in OO design. In fact, internal implementation details can at the same time be hidden and yet accessible to inspection. Transparency not only can coexist with encapsulation but in many ways supports it by promoting isolation between components.
- Transparency is a well-defined goal, but many aspects of software design can contribute to its achievement. We will review these aspects and examine how each of them contributes, with particular focus on OO languages.
Simple code contributes to transparency because it is easy to follow and reason about. The basic suggestion for simplicity is to design small, focused and independent components connected by thin layers of glue code. While this is good advice, there is much more to be said on the topic. In one of his talks, Rich Hickey (creator of the Clojure language) made some insightful points about simplicity:
- Simple does not mean easy or familiar. Simplicity is quite hard to achieve; it requires strong analytical skills, and it aims to make change easier and correctness easier to verify. Simplicity has nothing to do with making developers comfortable or making their work dull and easily replaceable.
- Simplicity/Complexity is not relative to how smart or dumb we are. Compared to the complexity that we can create, we are all mentally very limited.
- Simplicity is not so much related to the number of components of a system, but more to the way the components interact with each other. The more components are entwined with each other, the more factors we have to keep in our mind when we think of them: this is what cripples our ability to quickly comprehend what is going on in the code, not to mention the increased impact of changes.
So what concrete principles and techniques can help us design simple code?
Here are some technical tips:
| Technique | Why it helps |
| --- | --- |
| Single Responsibility Principle | Cohesion and orthogonality help us think about components in isolation; it also reduces intimacy (entwining) between components |
| Rule engines, polymorphism, generalized algorithms | Can replace complex specificity and intricate conditional logic |
| Prefer data complexity over code complexity; declarative data manipulation (LINQ, lambda expressions, etc.) | It is much easier to manipulate and reason about plain data than it is with code |
| Message queues and publisher/subscriber patterns | Break complex interactions between components into simpler, decoupled communication |
| Flat and explicit composition over inheritance hierarchies | Deep inheritance chains make the code hard to follow and debug; inheritance for the sole purpose of reusing code is a very questionable design choice |
| Separate/isolate all forms of state (files, database, shared memory, time and date, etc.) from the business logic | Mixing state and logic can engender side effects, and therefore complexity |
| Create consistency by following the same policies, conventions and patterns across the whole system | Consistency decreases the number of factors that we need to keep in mind |
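One of these suggestions, preferring data complexity over code complexity, can be sketched in a few lines of Python. The shipping-fee example below is hypothetical: it first expresses a pricing policy as entwined conditional logic, then restates the same policy as plain data plus one generic lookup.

```python
# Hypothetical pricing policy, first as conditional logic.
def fee_conditional(region: str, express: bool) -> float:
    if region == "EU":
        return 12.0 if express else 5.0
    elif region == "US":
        return 15.0 if express else 7.0
    raise ValueError(f"unknown region: {region}")

# Data-driven version: the policy is a plain dictionary that is easy to
# inspect, extend, and even load from configuration; the code shrinks to
# one generic lookup.
FEES = {
    ("EU", False): 5.0, ("EU", True): 12.0,
    ("US", False): 7.0, ("US", True): 15.0,
}

def fee_from_data(region: str, express: bool) -> float:
    try:
        return FEES[(region, express)]
    except KeyError:
        raise ValueError(f"unknown region: {region}") from None
```

The second version is more transparent: adding a region means adding a row of data, not another branch of logic, and the whole policy can be read at a glance.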
Finally, Domain Driven Design teaches us a very important lesson: complexity is not always caused by technical factors. In particular, there are two fundamental concepts of DDD that facilitate a more natural and straightforward thinking when reading the code:
- Adopt a ubiquitous language, a clear and unambiguous terminology that is shared by engineers (and used in the code) and business people.
- Invest time in iterative analysis and problem exploration to build clean and expressive software models that effectively reflect the aspects of the business domain that need to be managed.
There is one final interesting piece of advice from Eric Raymond that I find very helpful: don’t be clever.
When designing software, leave the ego at the front door. The world can live just fine without the mountain of unnecessary complexity generated by developers and architects showing off with their coolest tricks.
We cannot immediately see what code does unless we can rely on it behaving predictably. We are sometimes so absorbed in “getting things done” that we disregard the reasonable expectations of other developers (sometimes even our own) when reading and using our code.
How do we make our code more predictable? Here are a few suggestions:
An important part of predictability is related to the order in which operations are called, particularly when designing APIs and interfaces. To make the flow more predictable, the following two rules can help:
- If a component exposes operations but nothing is enforcing the fact that operations have to be called in a certain order, then it should be valid to call the methods in any order.
- If a component exposes operations that need to be called in a specific order, then the order must be enforced by requiring each operation to have as input parameters the result of the operations that must precede it, or by wrapping the component in a façade that would expose only valid call sequences.
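The second rule can be sketched in Python: each operation takes as input the result of the operation that must precede it, so an out-of-order call cannot even be expressed. All names here (`Connection`, `Session`, `connect`, `authenticate`, `query`) are illustrative, not a real API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    host: str

@dataclass(frozen=True)
class Session:
    connection: Connection
    user: str

def connect(host: str) -> Connection:
    return Connection(host)

def authenticate(conn: Connection, user: str) -> Session:
    # A Session can only exist if a Connection was obtained first.
    return Session(conn, user)

def query(session: Session, sql: str) -> list:
    # Querying requires a Session, which in turn required a Connection:
    # the signatures enforce connect -> authenticate -> query.
    return []  # placeholder result
```

There is no way to call `query` before `authenticate`: the required order is encoded in the types rather than documented in a comment.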
We can sometimes guess that these rules have been violated when suffixes or prefixes of method names implicitly suggest an order (e.g., `Close`, etc.). Another usual sign of flow unpredictability is a class that exposes many methods that do not return anything but produce internal state changes.
The core business logic of our applications is usually where we spend most of our time when changes occur, and it is also the place where malfunctions are hardest to spot. Predictability here pays the most, and overall it is not so hard to achieve when taken into consideration in the design phase.
- One of the strongest principles of logical transparency is referential transparency. A function is referentially transparent when, given the same input, it always returns the same result at any point in time. This independence from time gives high confidence to any developer who needs to understand and change the code, by eliminating the nasty surprises and unmanageable complexity that often come with non-determinism.
- An effective OOP technique for obtaining logical predictability is to use immutable objects. An immutable object has a read-only internal state that is defined when the instance is created and never changes throughout its lifetime. Consequently, immutable objects have no setters and do not expose any method that could change their internal state. If a new state is required, the old immutable object is discarded and a new one is created with the new state. Validating an immutable object is usually very simple, since it has to be done only at creation. Immutable objects are naturally thread-safe, and the absence of state transitions greatly reduces the chances of logical bugs.
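A minimal sketch of an immutable object in Python, using a frozen dataclass; the `Account` type is illustrative. State is fixed and validated at construction time, and “changing” the object means creating a new instance.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

    def __post_init__(self):
        # Validation happens once, at creation: afterwards the state
        # can never become invalid, because it can never change.
        if self.balance < 0:
            raise ValueError("balance cannot be negative")

    def deposit(self, amount: int) -> "Account":
        # No setter: returns a new Account instead of mutating this one.
        return replace(self, balance=self.balance + amount)
```

Calling `deposit` leaves the original instance untouched, and any attempt to assign to a field raises `dataclasses.FrozenInstanceError`, so there are no hidden state transitions to reason about.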
Interaction between components can be unpredictable for many reasons, most of which are related to poor design choices:
- General-purpose code is placed into specific components, or vice versa; hence, components end up being invoked in the most unsuspected places. Generic code should be refactored into separate, easily reusable components that support specific functionality instead of absorbing it.
- Too many ways to do the same thing, too many ways to change the same variables, too many method overloads: low-level components that have complex APIs designed for general flexibility should not be used directly, but wrapped into safe, context-aware proxies that collapse and constrain the usage and accessibility of a component to what is really needed and makes sense in each specific scenario.
- Lack of well-encapsulated architecture layers and services. One fundamental characteristic of a well-encapsulated layer is that it communicates with the layers above or below it using high-level APIs that exchange only data structures (weak coupling), using whatever representation is most convenient (XML, JSON, POJOs/POCOs, etc.).
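The context-aware proxy idea can be sketched as follows; both classes are hypothetical. A flexible low-level component is wrapped so that callers see only the few operations that make sense in one specific scenario, with the dangerous knobs fixed.

```python
class GenericStore:
    """Low-level component: flexible, but easy to misuse."""
    def __init__(self):
        self._data = {}

    def put(self, key, value, overwrite=True, ttl=None, namespace=""):
        self._data[(namespace, key)] = value

    def get(self, key, default=None, namespace=""):
        return self._data.get((namespace, key), default)

class UserProfileStore:
    """Context-aware facade: collapses the generic API to exactly what
    the user-profile scenario needs, with the namespace fixed."""
    def __init__(self, store: GenericStore):
        self._store = store

    def save_profile(self, user_id: str, profile: dict) -> None:
        self._store.put(user_id, profile, namespace="profiles")

    def load_profile(self, user_id: str) -> dict:
        return self._store.get(user_id, default={}, namespace="profiles")
```

Code that deals with user profiles now has exactly one way to read and one way to write, which makes its interactions with the storage layer predictable.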
Looking at a component of your application, how long does it take to find out which other components it depends on? How much code do you need to read before finding the answer?
When collaborators are buried in the implementation details, an important piece of information is concealed from our eyes, forcing us to dig into the code to understand the interactions with the rest of the system.
- Dependency injection is the most effective design pattern for revealing the collaborators of classes, by disclosing them as parameters of constructors or methods. The result is that, just by looking at the signatures and APIs, we can understand a lot more about how a class works and what we need in order to use it, reuse it or test it.
- Another fundamental principle that guarantees true collaborator transparency is the Law of Demeter, also known as the “don’t talk to strangers” rule. This principle forbids reaching through collaborators that are not really used directly but merely act as middlemen for other components, a practice that obscures the relationships and interactions between elements.
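Constructor injection can be sketched in a few lines; all types here (`Clock`, `ReportRepository`, `ReportService`) are illustrative. The collaborators appear in the constructor signature, so a reader, or a test, knows immediately what the class depends on.

```python
from datetime import date

class Clock:
    def today(self) -> str:
        return date.today().isoformat()

class ReportRepository:
    def save(self, report: str) -> None:
        pass  # persistence elided in this sketch

class ReportService:
    # Collaborators are declared up front instead of being constructed
    # deep inside method bodies.
    def __init__(self, clock: Clock, repository: ReportRepository):
        self._clock = clock
        self._repository = repository

    def publish(self, body: str) -> str:
        report = f"{self._clock.today()}: {body}"
        self._repository.save(report)
        return report

# In a test, a fake can be injected without touching the real system clock:
class FixedClock(Clock):
    def today(self) -> str:
        return "2024-01-01"
```

Because the clock is injected rather than read directly, `publish` becomes deterministic under test: the same input always yields the same output.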
The rule of transparency also emphasizes the quality of discoverability, intended as the introspective capability of the software to provide useful internal state information while it is running, to help developers monitor and debug. Investing time and effort in introspection can save a tremendous amount of time in troubleshooting problems.
Here are some suggestions on the topic:
- Accurate Error Messages: Among the goals of error handling is to provide meaningful error messages with enough information to help us pinpoint the causes. Many exception messages are not very useful (e.g., `null pointer exception`); it therefore helps to capture these exceptions as early as possible and re-throw more meaningful exception types with more informative, contextualized messages.
- Detailed Logging: Make the application log the important steps and operations it performs. Good logging modules allow you to define different levels of verbosity (in particular, a debug level that logs everything) and to easily change the medium where the log is stored. The production log file is often the only way to figure out malfunctions that are caused by data issues and are therefore very hard to replicate elsewhere.
- Textualization: Create a human-readable textual representation of the state and flow of the application, so that it can be easily dumped to files, shells, consoles, etc. For example, overriding the `toString()` method of objects to output a well-formatted textual summary of their internal state can be very useful in the debugger console, avoiding the time-consuming inspection of hierarchical object structures.
- Debug Options: Add a debug mode to your application that provides extra information and troubleshooting capabilities. For instance, a debug mode can let you simulate signing in to a web application with different security roles to easily reproduce different authorization scenarios, display rich debug information directly in the UI, enable a testing console (much like the cheat shells we see in many videogames), etc.
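The textualization suggestion translates directly to Python, where `__repr__` plays the role of `toString()`. The `Order` and `Item` types below are illustrative; the point is that nested state collapses into one readable line.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    qty: int

    def __repr__(self):
        # One-line summary instead of the default field-by-field dump.
        return f"{self.qty}x {self.name}"

@dataclass
class Order:
    order_id: str
    items: list = field(default_factory=list)

    def __repr__(self):
        # A whole object graph becomes a single readable line,
        # convenient for logs and the debugger console.
        inner = ", ".join(repr(i) for i in self.items)
        return f"Order {self.order_id} [{inner}]"
```

Printing an order now yields something like `Order A1 [3x bolt]` instead of a hierarchy of objects that has to be expanded node by node in the debugger.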
Documentation is a key aspect of transparency, but if not done correctly, it could also be a colossal waste of time, and even misleading and counter-productive. Documentation must not only be useful; it has to be realistically easy to update and maintain over time.
- Some developers write rivers of completely useless comments just to comply with organization standards. Others have a genuine hatred for comments and refuse to write them, maintaining that the code is good enough and self-explanatory. Both extremes have their faults. Comments are quite useful to complement what the code itself cannot express, and the truth is that there is no developer in heaven or earth who can write code that is 100% self-explanatory. Comments are usually good for summarizing the purpose of a class, or explaining workarounds and the history of an issue to better understand the solution; they can reveal the extensibility points of the design, make explicit the assumptions about input and output parameters of methods, and provide suggestions for further improvements.
- External documentation can also be very useful when written in an informal and friendly style (a tutorial), to help developers get started with the code before opening source files.
- Automated tests can also be a good form of documentation: they help identify the formal specifications and requirements that the code is meant to comply with, while providing useful usage examples at the same time.
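As a small illustration of tests doubling as documentation, consider the hypothetical `slugify` function below: the test name states the requirement in plain language, and its body is a minimal usage example.

```python
def slugify(title: str) -> str:
    # Hypothetical function under test: lowercase the title and join
    # its words with hyphens.
    return "-".join(title.lower().split())

def test_slugify_lowercases_and_joins_words_with_hyphens():
    # Anyone reading this test learns both the rule the code must
    # satisfy and how to call the API.
    assert slugify("Hello World") == "hello-world"
```

A suite of such tests reads like a specification of the component, and unlike external documents it fails loudly the moment it falls out of date.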
Passion, positive attitude and a culture open to sharing knowledge and collaboration are essential environmental elements to produce high quality software. Creating the right culture could be very difficult in some organizations and teams: changes need to be made gradually and carefully, to make sure that these core values are understood and accepted. Transparency, in particular, requires developers capable of giving and accepting constructive criticism, and willing to learn from each other.
In the right cultural context, code reviews become the primary educational tool towards transparency: sharing the code for reading and understanding is the ultimate test for identifying bad naming, overdesign and all the other aforementioned characteristics that obscure the source code.
To conclude, the successful attitude is perhaps better described in these few lines from the book The Art of Unix Programming:
“[…] You have to believe that software design is a craft worth all the intelligence, creativity, and passion you can muster. Otherwise you won’t look past the easy, stereotyped ways of approaching design and implementation; you’ll rush into coding when you should be thinking. You’ll carelessly complicate when you should be relentlessly simplifying — and then you’ll wonder why your code bloats and debugging is so hard. […]”