By Daryl Olson
In this post, I am going to discuss the Clean Architecture and provide background on a foundational principle used in its implementation, the Dependency Inversion Principle.
I’ll explain why the Clean Architecture can help developers produce applications that exhibit low coupling and independence from technical details such as databases and other infrastructure concerns.
These characteristics lead to code that is more maintainable, adaptable and significantly easier to test.
The Dependency Inversion Principle (DIP)
Before describing the Clean Architecture, it is useful to review the Dependency Inversion Principle, as this principle is fundamental to the implementation of this architectural pattern. It is the "D" in the SOLID principles, which are:
- Single Responsibility Principle: This is related to the Separation of Concerns. It states that a class should only have a single responsibility.
- Open–Closed Principle: Software entities should be open for extension and closed for modification.
- Liskov Substitution Principle: The correctness of a program should not be affected when objects are replaced with their subtypes.
- Interface Segregation Principle: Keep interfaces specific to the client requirements rather than creating a general-purpose interface.
- Dependency Inversion Principle: Described in the following section.
To keep the scope of this blog reasonable, I will not be discussing the other SOLID principles. For a high-level overview of these other principles see:
The Dependency Inversion Principle was articulated by Robert C. Martin in the mid-1990s. He described it as:
- High-level modules should not depend on low-level modules. Both should depend on abstractions.
- Abstractions should not depend on details. Details should depend on abstractions.
Let’s look at what this means in a more practical sense.
The terms “low-level” and “high-level” are relative; they refer to how close a module (class or method) is to a concrete implementation or infrastructure concern. Lower-level modules are those closer to the concrete implementation of an abstraction, or those that reference infrastructure directly.
Infrastructure concerns include file systems, databases, and network resources. In C#, even the use of the property DateTime.Now represents the use of a low-level module as it explicitly references an infrastructure concern, the system clock. Code that references infrastructure is inherently more difficult to unit test.
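To illustrate, the system clock can be hidden behind an abstraction so that time-dependent code becomes testable. The names below (IClock, SystemClock, FakeClock, OrderTimestamper) are illustrative, not from any particular library:

```csharp
using System;

// Abstraction over the system clock.
public interface IClock
{
    DateTime Now { get; }
}

// Low-level implementation that touches the real infrastructure concern.
public sealed class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

// Test double that returns a fixed, predictable time.
public sealed class FakeClock : IClock
{
    private readonly DateTime _fixed;
    public FakeClock(DateTime fixedTime) => _fixed = fixedTime;
    public DateTime Now => _fixed;
}

// High-level code depends on the abstraction, not on DateTime.Now.
public class OrderTimestamper
{
    private readonly IClock _clock;
    public OrderTimestamper(IClock clock) => _clock = clock;
    public DateTime MarkReceived() => _clock.Now;
}
```

In production, SystemClock would be registered with the dependency injection container; in a unit test, FakeClock makes time-dependent behavior deterministic.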
Another way of thinking about DIP is to prefer configurable dependencies for lower-level concerns.
Lower-level classes do not have to be infrastructure-related. For example, suppose that sales orders need to have their sales tax calculated. Let’s look at two possible ways of meeting this requirement.
Option 1: We can add a method in the Order class called CalculateSalesTax(). This method contains the specific logic to perform the calculation, returning the resulting value.
Option 2: Alternatively, we could create an interface, ISalesTaxCalculator, with a method CalculateSalesTax. We would then create a specific implementation of this interface containing our domain logic for performing the calculation. The Order class would reference ISalesTaxCalculator and call the CalculateSalesTax method. Dependency injection would be used to provide a specific implementation (concrete class) at runtime.
Comparing the two options through the lens of DIP, Option 2 more closely follows the Dependency Inversion Principle: its Order class depends on an abstraction (ISalesTaxCalculator) rather than on the concrete implementation details used in Option 1.
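A minimal sketch of Option 2 follows. The type names and the flat-rate tax logic are illustrative placeholders, not a real tax implementation:

```csharp
using System;

public interface ISalesTaxCalculator
{
    decimal CalculateSalesTax(decimal orderSubtotal);
}

// One concrete implementation; the flat rate stands in for real domain logic.
public sealed class FlatRateSalesTaxCalculator : ISalesTaxCalculator
{
    private readonly decimal _rate;
    public FlatRateSalesTaxCalculator(decimal rate) => _rate = rate;
    public decimal CalculateSalesTax(decimal orderSubtotal) => orderSubtotal * _rate;
}

// Order depends only on the abstraction; DI supplies the concrete class at runtime.
public class Order
{
    private readonly ISalesTaxCalculator _taxCalculator;
    public decimal Subtotal { get; set; }

    public Order(ISalesTaxCalculator taxCalculator) => _taxCalculator = taxCalculator;

    public decimal TotalWithTax() => Subtotal + _taxCalculator.CalculateSalesTax(Subtotal);
}
```

In a unit test, a stub ISalesTaxCalculator returning a fixed value isolates the Order behavior from the tax calculation entirely.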
So why is the Dependency Inversion Principle important? Classes that follow DIP have lower levels of coupling and are more modular. These factors have several advantages including:
- It makes classes easier to unit test.
- The application is more able to adapt to changing requirements as a component can be changed independently of the whole. In less modular, tightly coupled code, a maintenance developer may have to understand a large part of the complete system to be able to determine the implications of modifying a single component.
The Clean Architecture
One of the most widely known tenets of software development is the separation of concerns. Wikipedia defines the separation of concerns as:
“In computer science, separation of concerns (SoC) is a design principle for separating a computer program into distinct sections, so that each section addresses a separate concern.”
Wikipedia further states:
“Layered designs in information systems are another embodiment of separation of concerns (e.g., presentation layer, business logic layer, data access layer, persistence layer).”
Most developers have used layered structures as described above. Unfortunately, while the traditional layered design is noble in its intent, it often falls short in keeping concerns separate. It can be unclear where logic should go, especially for new or less experienced developers. This can result in the leakage of business logic into the presentation or infrastructure layers. Business logic in these layers results in tight coupling and reduces modularity.
What if there was a way to structure your code so that it helped stop concerns from leaking across your layers? What if we could help developers place functionality in the correct layer?
This is where the Clean Architecture can help. This architecture pattern follows the Dependency Inversion Principle to help maintain the separation of concerns.
This architecture was first proposed by Robert C. Martin with the purpose of encouraging modularity and low coupling, and keeping the business design independent of technical details.
An example of a Clean Architecture approach is shown in the figure below.
The key points to note from the Clean Architecture diagram are:
- The arrows depict software code dependencies. Software code refers to classes, methods or any other named software entities.
- All software code dependencies point inward.
- Nothing in an inner layer can know anything about software code in an outer layer.
- Outer layers have a lower level of abstraction when compared to inner layers.
- It places the Domain as the core layer in the application. In many traditional layered designs, the persistence layer is the bottom layer. This can lead to implementations where the database is the core of the application.
An important concept in the Clean Architecture is what data crosses the layer boundaries. To avoid violating the rule that “nothing in an inner circle can know anything about software code in an outer circle,” data that crosses the boundaries must consist of simple data structures. This keeps the inner layers from having a dependency on any outer-layer software code.
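Such a boundary-crossing structure might look like the following hypothetical DTO; note that it carries data only, with no behavior and no references to outer-layer types:

```csharp
// A plain data structure suitable for crossing a layer boundary.
// It has no behavior and no dependency on any outer-layer code.
public sealed record OrderSummaryDto(int OrderId, decimal Total, string Status);
```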
Using these concepts to structure your code will provide the following benefits:
- Testable application and domain code that does not require presentation logic, databases or any other specific infrastructure.
- Domain logic that is independent of the specific persistence being used.
- Presentation (UI) independence.
- Framework independence.
You will also notice that the Clean Architecture adheres to the Dependency Inversion Principle, with the lower-level concerns (outer layers) depending on the higher-level concerns (inner layers).
The Domain Layer
The Domain layer is where the core concepts representing the real-world items being modeled are constructed. This includes both behavior (business logic) and data (entities). This is where the value delivered to the end user is created.
This layer will contain:
- Custom Exceptions
- Domain Logic
- Value Objects
Typically, this is where:
- the highest rate of change occurs
- the largest amount of testing is required.
It must not refer to any technical implementation details. This makes the domain behavior much easier to test and to change in response to new requirements.
The Domain Layer should also not have any framework-specific details. For example, if you are using Entity Framework, you should not be decorating your class with Data Annotations. It is better to move this to the Infrastructure Layer using the Fluent Configuration.
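For example, with Entity Framework Core the mapping can live in the Infrastructure layer via the fluent API. This sketch assumes EF Core and a hypothetical Order entity with Id and CustomerName properties; it is a configuration fragment, not a complete program:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

// Lives in the Infrastructure layer; the Order entity itself stays free of
// [Required], [MaxLength], and other persistence-specific annotations.
public sealed class OrderConfiguration : IEntityTypeConfiguration<Order>
{
    public void Configure(EntityTypeBuilder<Order> builder)
    {
        builder.HasKey(o => o.Id);
        builder.Property(o => o.CustomerName)
               .HasMaxLength(100)
               .IsRequired();
    }
}
```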
The Application Layer
This layer should be kept thin and focused on coordinating tasks, delegating to the Domain layer when business behavior is required. As business logic belongs in the Domain layer, no business behavior should be contained in this layer.
The Presentation layer uses this layer to interact with the Domain layer. These interactions are typically based on a use case, which may involve one or more subtasks. For example, a “Place an Order” use case could involve multiple steps, such as checking and reducing inventory, persisting order details, and sending a confirmation email, all of which require coordination and together represent a transactional boundary.
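The “Place an Order” coordination described above can be sketched as an Application-layer service. All interface and class names here are illustrative; the fakes are included only to show how the coordination logic can be unit tested without infrastructure:

```csharp
using System;

// Abstractions owned by the Application layer; implemented in Infrastructure.
public interface IInventoryService { void Reserve(string sku, int quantity); }
public interface IOrderRepository { void Save(PlacedOrder order); }
public interface IEmailSender { void SendConfirmation(string customerEmail, Guid orderId); }

public sealed record PlacedOrder(Guid Id, string Sku, int Quantity, string CustomerEmail);

// Application-layer service: coordinates the use case, holds no business rules.
public class PlaceOrderService
{
    private readonly IInventoryService _inventory;
    private readonly IOrderRepository _orders;
    private readonly IEmailSender _email;

    public PlaceOrderService(IInventoryService inventory, IOrderRepository orders, IEmailSender email)
    {
        _inventory = inventory;
        _orders = orders;
        _email = email;
    }

    public Guid PlaceOrder(string sku, int quantity, string customerEmail)
    {
        var order = new PlacedOrder(Guid.NewGuid(), sku, quantity, customerEmail);
        _inventory.Reserve(sku, quantity);                 // subtask 1: inventory
        _orders.Save(order);                               // subtask 2: persistence
        _email.SendConfirmation(customerEmail, order.Id);  // subtask 3: notification
        return order.Id;
    }
}

// Minimal in-memory fakes for unit testing the coordination logic.
public sealed class FakeInventory : IInventoryService
{
    public int Reserved;
    public void Reserve(string sku, int quantity) => Reserved += quantity;
}
public sealed class FakeOrders : IOrderRepository
{
    public PlacedOrder Saved;
    public void Save(PlacedOrder order) => Saved = order;
}
public sealed class FakeEmail : IEmailSender
{
    public bool Sent;
    public void SendConfirmation(string customerEmail, Guid orderId) => Sent = true;
}
```

Because the service depends only on abstractions, the entire use case can be exercised in memory, with real implementations supplied by the DI container at runtime.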
While this layer should not maintain any domain-related state, it can maintain state related to tasks which it is coordinating.
This layer will contain:
- Application Logic
- Custom Exceptions
- Commands and Queries if the CQRS pattern is being used
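If CQRS is used, a command and its handler might look like the following sketch. The names are illustrative; real projects often rely on a library such as MediatR for the handler plumbing, which is not shown here:

```csharp
using System;

// A command: a simple, immutable request to change state.
public sealed record PlaceOrderCommand(string Sku, int Quantity);

// A query: a request that returns data and changes nothing.
public sealed record GetOrderStatusQuery(Guid OrderId);

// Hypothetical handler interface; CQRS libraries provide similar abstractions.
public interface ICommandHandler<in TCommand>
{
    void Handle(TCommand command);
}

public sealed class PlaceOrderCommandHandler : ICommandHandler<PlaceOrderCommand>
{
    public int LastQuantity { get; private set; }

    public void Handle(PlaceOrderCommand command)
    {
        // Coordinate domain and infrastructure abstractions here;
        // this placeholder just records what it received.
        LastQuantity = command.Quantity;
    }
}
```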
- Models and Data Transfer Objects
For information on the CQRS pattern, see:
As with the Domain Layer, the Application Layer must not refer to any technical or framework implementation details. Once again, this will make unit tests easier to create.
The Infrastructure Layer
This layer implements technical details such as database persistence, use of the file system, and any network-related concerns.
Dependencies Between Layers
The following diagram shows the dependencies between the layers.
When to Choose the Clean Architecture
The Clean Architecture is most suitable for applications when there is a fair amount of business complexity or where support for changing business requirements is important.
If all you require is a simple CRUD application, this pattern might be overkill.
The Clean Architecture provides a pattern for application development that can help provide code that is more maintainable, adaptable and significantly easier to test in an automated manner. If you have any doubts about taking the time and effort to implement a pattern such as the Clean Architecture, I suggest you read Martin Fowler’s post “Is High Quality Software Worth the Cost?”: