Making mistakes is part of programming. Spotting them early can save you time. I’ve started to notice a common set of ‘Domain-Driven Design Mistakes’ which many of us seem to make. And yes, I’ve made them all at some point in my career. This is by no means the definitive list – I’m sure there are more (scroll to the bottom for the infographic version).
1. Allowing Persistence and Databases to Influence your Models
This is a common 'mistake' when following a DDD approach. Many of the tactical patterns, like Aggregate Roots, exist to simplify your models. They achieve that simplicity by isolating your solution from infrastructure concerns like databases. The real starting point of a DDD approach is always the domain experts. If you find yourself starting with a schema or data model, alarm bells should be ringing. Your final solution may end up using stored procs over a relational model, but a database should have no part in the early stages of modelling.
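As a minimal sketch of the idea (the `Order` aggregate and its invariants are hypothetical, not from the article), here is a model that enforces its own rules without knowing anything about tables, schemas or ORMs:

```python
from dataclasses import dataclass


@dataclass
class OrderLine:
    sku: str
    quantity: int


class Order:
    """A hypothetical aggregate root: invariants live here, not in DB constraints."""

    def __init__(self, order_id: str, max_lines: int = 10):
        self.order_id = order_id
        self._max_lines = max_lines
        self._lines: list[OrderLine] = []

    def add_line(self, sku: str, quantity: int) -> None:
        # The model protects its own consistency; no persistence concern in sight.
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        if len(self._lines) >= self._max_lines:
            raise ValueError("order is full")
        self._lines.append(OrderLine(sku, quantity))

    @property
    def line_count(self) -> int:
        return len(self._lines)
```

How this object is eventually persisted (relational tables, documents, events) is a separate decision made later, behind a repository.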
2. Not Immersing Yourself With Domain Experts
At the heart of Domain Modeling is the desire to bridge the communications gap. The better you understand the problem domain, the better your solution. Now when I say understand the problem, I mean from the perspective of the domain expert. So if you are modeling a circuit board testing system, spend time with the electronics engineers. If it is an aircraft refueling coordination system, spend time with the refueling coordinators (or whatever they are called).
3. Ignoring the Language of the Domain Experts
A key concept in DDD is something called the Ubiquitous Language. The idea is simple. First understand the language of the experts. Then infuse the language of the domain experts into all your code and discussions. Right down to the method, class and variable names.
Health Warning: The Ubiquitous Language is only ubiquitous within a bounded context. For example, the word 'Client' may mean something different in the accounts BC than it does in the warehouse BC.
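A quick sketch of that health warning (the classes and attributes are illustrative assumptions, not from the article): the same word, 'Client', is modelled separately in each bounded context because it means something different in each.

```python
class AccountsClient:
    """In the accounts bounded context, a client is a party that owes money."""

    def __init__(self, client_id: str, outstanding_balance: float):
        self.client_id = client_id
        self.outstanding_balance = outstanding_balance

    def is_in_arrears(self) -> bool:
        return self.outstanding_balance > 0


class WarehouseClient:
    """In the warehouse bounded context, a client is a delivery destination."""

    def __init__(self, client_id: str, delivery_address: str):
        self.client_id = client_id
        self.delivery_address = delivery_address

    def shipping_label(self) -> str:
        return f"{self.client_id}: {self.delivery_address}"
```

Trying to force both meanings into one shared `Client` class couples the two contexts and muddies both languages.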
4. Not Identifying Bounded Contexts
How do you solve a complex problem? A common approach is to break it down into smaller parts. Isolating and solving smaller problems makes resolving the bigger ones more likely. This is one of the key benefits of a bounded context. Identifying them has a direct impact on the likelihood of building a successful solution.
5. Maintaining Bounded Contexts Despite Deeper Domain Insights
When you have found a context which seems to work, it's tempting to stick with it. But this rigidity can become a problem. The process of domain discovery evolves over time. Your code should evolve with those discoveries. To do this, your code needs to be supple. You need to cover your code in the right kind of tests. This gives you the freedom to rework your code to reflect those discoveries. It allows you to take advantage of your growing understanding of the domain.
6. Using Anemic Domain Models
This is a common symptom of a modelling process gone wrong. It is also a sign you are not doing 'DDD'. But what is an anaemic domain model? It is a domain made up of classes full of public properties with getters and setters – classes with no behaviour of their own. These have become prevalent due to ORMs mapping database schemas directly to code. You may still have this kind of class in your system, but they are not your domain objects. Instead, encapsulate state and provide behaviour.
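To make the contrast concrete, here is a minimal sketch (the `Account` example is my own illustration, not from the article) of an anemic class next to a rich one that owns its behaviour:

```python
# Anemic: just data. Behaviour (and the invariants) end up scattered
# across 'service' classes, and any caller can put it in a bad state.
class AnemicAccount:
    def __init__(self):
        self.balance = 0  # public and freely mutable


# Rich: the domain object owns its behaviour and protects its invariants.
class Account:
    def __init__(self, balance: int = 0):
        self._balance = balance

    def withdraw(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self) -> int:
        return self._balance
```

With the rich version, "you cannot overdraw an account" is stated once, in the model, rather than re-checked (or forgotten) at every call site.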
7. Assuming All Logic is Domain Logic
Not all logic in your system is domain logic. Things like request validation, formatting, caching and retry policies are application or infrastructure concerns. Pushing them into the domain model bloats it and obscures the real business rules. Keep the model focused on the behaviour the domain experts actually describe.
8. Over Using Interaction Tests
It is important to keep your code supple. This is particularly true at the early stages. But to allow for major refactors, you need a safety net. This net is a good suite of tests. They should be a help to the development process rather than a trip hazard. I've noticed interaction tests have a tendency to hinder refactors. This is because interaction testing expects certain methods to have been called, with particular parameters, and so on. Here's the problem: if we change how the work is done but not the end result, the tests will fail. A much more robust approach is to test final state. Given a certain scenario, when a specific input is received, a specific final state is expected. The key here is that the test doesn't care how that state was arrived at. This frees you to monkey with the innards and be confident the system still behaves correctly.
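A small sketch of the difference (the `Basket` class is a made-up example): the interaction-style test pins down *which method was called*, while the state-style test only checks the observable end state.

```python
from unittest.mock import Mock


class Basket:
    def __init__(self):
        self._items: dict[str, int] = {}

    def add(self, sku: str, qty: int) -> None:
        self._items[sku] = self._items.get(sku, 0) + qty

    def quantity_of(self, sku: str) -> int:
        return self._items.get(sku, 0)


def test_interaction_style():
    # Brittle: this breaks if we later refactor callers to use a
    # different method (say, a bulk add) even though behaviour is unchanged.
    basket = Mock()
    basket.add("WIDGET", 2)
    basket.add.assert_called_once_with("WIDGET", 2)


def test_state_style():
    # Robust: only the final state matters, not how we got there.
    basket = Basket()
    basket.add("WIDGET", 2)
    basket.add("WIDGET", 3)
    assert basket.quantity_of("WIDGET") == 5
```

The state-style test survives any internal rework of `Basket` (a list of lines, a dict, an event log) so long as the answer stays correct.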
9. Treating Security as Part of the Domain (Unless It Actually Is)
Security is an important part of a lot of the systems we build today. Rarely, however, is it part of the domain. The result of a risk calculation doesn't change depending on whether a superuser ran it. So while security is an important part of the system, it shouldn't play a part in the modeling of the domain. Unless it is actually part of the domain!
10. Focusing on Infrastructure
I've already mentioned the common mistake of focusing on the database at the start. Another mistake is to focus on infrastructure concerns at the modeling stage. An example of this could be taking a data feed from a device. You should define the shape of the data for your system. Then use adapters to transform whatever the feeding service sends into that format.
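As a rough sketch of that adapter idea (the `Reading` type, the vendor payload keys and the Fahrenheit conversion are all assumptions for illustration): our system defines the shape it wants, and a thin adapter translates the vendor's feed into it.

```python
from dataclasses import dataclass


# The shape *our* system wants, defined by us, not by the feed.
@dataclass
class Reading:
    device_id: str
    celsius: float


def vendor_feed_adapter(raw: dict) -> Reading:
    """Translate a hypothetical vendor payload (Fahrenheit, vendor-specific
    key names) into our domain's canonical Reading."""
    return Reading(
        device_id=raw["dev"],
        celsius=(raw["temp_f"] - 32) * 5 / 9,
    )
```

Swapping vendors later means writing a new adapter; the domain model and everything downstream of it stay untouched.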
If all that wasn't enough, here is a bonus one...
BONUS: 11. Skipping the EventStorming Process
EventStorming is often overlooked by developers. It involves a different kind of skillset, it can be hard to get the right people in the room, and there are no guarantees you will come up with something useful. The reason I've added it is because it is well worth doing: it speeds up the design process, gives you the best chance of spotting those 'seams' in the domain as early as possible, and it generates buy-in and involvement – both crucial elements of a successful project. So don't miss out the EventStorm.
Health Warning: It may be that DDD is not the right approach for the system you are building. In which case the advice above would not necessarily apply.