Challenges implementing DDD

Dmytro Stepanyshchenko · Published in CodeX · Jul 11, 2021

I have finished reading the DDD books by Eric Evans (the blue book) and Vaughn Vernon (the red book) and would like to write down my personal takeaways on the challenges of implementing DDD.

Although all the original ideas come from the blue book, I must admit that the red one gives a better understanding of how to apply the DDD approach. That is not a surprise: the blue book was published in 2003, so its examples naturally feel a bit dated. Nevertheless, the core ideas of DDD are the same in both books.

DDD is a genuinely complex topic with many approaches and patterns, so I will not even try to cover them all here. In short, DDD is all about understanding the business domain and reflecting it appropriately in the code. Omitting many details, the approach requires a dedicated domain layer in the code that describes business logic in the language of the domain experts. To benefit from DDD you should have complex domain logic and, ideally, access to domain experts, because the approach comes at a price.

As I mentioned, the DDD approach requires a domain layer described in the business language (the ubiquitous language) as much as possible. This naturally moves us towards the so-called "Clean Architecture": business experts do not speak in technical terms, so the domain layer should be as framework-agnostic as possible.

A small remark: DDD is often implemented with Hexagonal Architecture, but that is not a mandatory requirement; it can be implemented with Layered Architecture as well.

Now let's highlight the first challenge:

Removing frameworks from the domain layer requires a lot of additional abstractions. Just look at the DDD diagram.

Please note that Entity and Repository here are not JPA abstractions; they are domain abstractions. This means you need to put the data into the domain objects when you create them (using a factory or a repository) and extract the data when you save them (using a repository).

You may want to use a Java mapper library for copying information to/from the domain layer. I would recommend looking at the ReMap library: it gives type-safe mapping with a nice DSL and guarantees that no field is missed during the mapping (though only at runtime).
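To make the idea concrete, here is a minimal hand-written sketch of such mapping between a persistence-layer entity and a framework-agnostic domain entity. The class names are hypothetical, not from the article; a library like ReMap can generate this kind of copying code for you, but the shape is the same.

```java
// Hypothetical JPA-style entity living in the persistence layer.
class CustomerJpaEntity {
    Long id;
    String name;
    CustomerJpaEntity(Long id, String name) { this.id = id; this.name = name; }
}

// Framework-agnostic domain entity: no JPA annotations, no persistence concerns.
class Customer {
    private final Long id;
    private final String name;
    Customer(Long id, String name) { this.id = id; this.name = name; }
    Long id() { return id; }
    String name() { return name; }
}

// The mapper copies data between the layers in both directions, so the
// domain object holds its own copy of the data rather than a managed entity.
final class CustomerMapper {
    static Customer toDomain(CustomerJpaEntity e) {
        return new Customer(e.id, e.name);
    }
    static CustomerJpaEntity toJpa(Customer c) {
        return new CustomerJpaEntity(c.id(), c.name());
    }
}
```

The obvious downside is exactly the overhead the rule of thumb below warns about: every field has to be copied twice, and a forgotten field is only caught if your mapper library validates the mapping.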

As a rule of thumb: if you do not have complex domain logic, you probably do not need to implement DDD, as it brings more overhead than benefit.

The second challenge is transaction management. Before going into the explanation, I would strongly recommend reading Chapter 12 of the "red book" about repositories. I personally think it is one of the most useful chapters, as it gives a different way of looking at the persistence layer.

Basically, there are 2 types of repositories:

  • Collection-Oriented Repositories — the more traditional way of implementing JPA repositories; it usually requires a "Unit of Work" (a transaction) and keeping track of changes to the entities
  • Persistence-Oriented Repositories — each repository operation is executed separately (without a "Unit of Work")
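The difference in the two styles can be sketched as two interface shapes (all names below are illustrative, not from the books):

```java
import java.util.List;
import java.util.Optional;

// Domain aggregate used by both repository styles (illustrative).
class Order {
    private final String id;
    Order(String id) { this.id = id; }
    String id() { return id; }
}

// Collection-Oriented: behaves like an in-memory collection. There is no
// explicit save(); changes to already-added aggregates are expected to be
// tracked and flushed by a surrounding Unit of Work (transaction).
interface CollectionOrientedOrderRepository {
    void add(Order order);
    Optional<Order> ofId(String id);
    List<Order> all();
}

// Persistence-Oriented: every mutation goes through an explicit save();
// each call is an independent, atomic operation with no Unit of Work.
interface PersistenceOrientedOrderRepository {
    void save(Order order); // insert or update in one atomic step
    Optional<Order> findById(String id);
}

// A trivial in-memory implementation of the persistence-oriented style,
// just to show that each save() stands on its own.
class InMemoryOrderRepository implements PersistenceOrientedOrderRepository {
    private final java.util.Map<String, Order> store = new java.util.HashMap<>();
    public void save(Order order) { store.put(order.id(), order); }
    public Optional<Order> findById(String id) { return Optional.ofNullable(store.get(id)); }
}
```

Note how the collection-oriented interface has no `save` at all — that is precisely what forces change tracking onto the surrounding infrastructure.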

And now let's come back to the challenge. Domain repositories should return only domain entities and aggregates, not JPA entities. This means the Collection-Oriented approach is much harder to implement, because domain entities hold only a copy of the data (in the case of a framework-agnostic domain layer). Tracking the changes is possible but requires additional effort.

It is much easier to implement the Persistence-Oriented approach, as DDD Aggregates represent an abstraction that can be committed atomically. The Persistence-Oriented approach also brings us closer to "Clean Architecture": we do not depend on a "Unit of Work", and only in that case can we easily swap an RDBMS implementation for a NoSQL implementation. Using the Persistence-Oriented approach, we can even move transaction management from the Application Layer into the Persistence Layer.

To conclude this point: if you deal with a data-centric system rather than a domain-centric one, DDD might not be the best choice, as you will often break rules like "one transaction per Aggregate" and have to move some logic into DB queries (due to performance requirements). Remark: here I am talking only about Tactical Design; in my opinion, Strategic Design is useful in any case.

The last challenge is implementing reliable events. There is a simple idea behind DDD: entity persistence has a dedicated layer, and communication with other systems happens through events, which decouple the domain layer from dependencies on external systems. Ideally, we would like to commit DB changes and send events atomically, but unfortunately there is no easy way of doing it. "Two-phase commit" has a terrible reputation (due to performance issues and the coordinator-failure problem), and most modern systems do not support the protocol.

This leaves us with the choice of what to do first: commit the transaction or send the event? Sending the event before committing the transaction may lead to an inconsistent state, as there is no guarantee the transaction will be committed afterwards (an application may be killed at any time, especially in the era of cloud-native applications). Most likely we want to send the event after the DB changes have been committed. In that case we need to make a choice:

  • sending an event with “at most once” guarantee
  • sending an event with “at least once” guarantee

Sending the event with the "at most once" guarantee is pretty straightforward: nothing special is needed, we just accept that the event may be lost if the application fails right before the attempt to send. Unfortunately, not all requirements accept this guarantee.

Implementing the "at least once" guarantee is more complicated, as it requires saving the event and the database changes in the same storage within the same transaction. Delivering the event can then be done by a dedicated job or by special tools like Debezium and Kafka Connect. This is known as the "Change Data Capture" pattern, and many cloud providers offer out-of-the-box solutions for streaming changes. But we still need the same transaction for saving the primary changes and the event. That pushes us back to the previous topic: either we have transaction management in the Application Layer (with Collection-Oriented Repositories), or we need a "consistent" save method that accepts an Aggregate and an Event (with Persistence-Oriented Repositories). With Persistence-Oriented Repositories, things get complicated when we deal with NoSQL databases that do not support transactions, as there is no way to implement a "consistent" save method. We can only monitor changes to the database data itself and generate events outside of the domain layer (see this article for more details).
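The "consistent" save method with Persistence-Oriented Repositories is usually realized as a transactional outbox: the aggregate and the event are written in the same transaction, and a separate relay (a dedicated job, or CDC tooling like Debezium) publishes the event afterwards. Here is an in-memory sketch of that contract; all class and method names are hypothetical, and a real implementation would open an actual database transaction where the comments indicate.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Domain event to be delivered with an at-least-once guarantee (illustrative).
class OrderPlacedEvent {
    final String orderId;
    OrderPlacedEvent(String orderId) { this.orderId = orderId; }
}

class Order {
    final String id;
    Order(String id) { this.id = id; }
}

// Persistence-oriented repository with a "consistent" save: the aggregate
// and the outbox record are committed atomically.
class OrderRepository {
    final java.util.Map<String, Order> orders = new java.util.HashMap<>();
    final Queue<OrderPlacedEvent> outbox = new ArrayDeque<>();

    // save(aggregate, event): both writes succeed or fail together.
    void save(Order order, OrderPlacedEvent event) {
        // BEGIN TRANSACTION (in a real RDBMS-backed implementation)
        orders.put(order.id, order);
        outbox.add(event);
        // COMMIT
    }
}

// A relay reads the outbox after commit and publishes events downstream.
// Retrying failed publishes is what yields at-least-once delivery.
class OutboxRelay {
    static int publish(OrderRepository repo) {
        int published = 0;
        while (!repo.outbox.isEmpty()) {
            OrderPlacedEvent event = repo.outbox.poll();
            // here the event would be sent to a message broker
            published++;
        }
        return published;
    }
}
```

Because the event is stored before it is published, consumers may see the same event more than once after a relay retry, so they need to be idempotent.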

Thus there is no "silver bullet" for solving this issue, and as a result an application cannot be fully persistence-agnostic, and the database cannot be swapped easily. You still need to know at least the "family" of the database and what guarantees it provides to properly implement the persistence layer with respect to reliable events.

To sum up, I would say that DDD has a lot of bright ideas, but any software contains technical challenges that need to be taken into account, and not all of them can be easily represented or hidden in the domain layer.
