Microservices: Consider both the Benefits and the Drawbacks

Introduction 

Microservices are becoming one of the more popular architectural styles. As with any architectural choice, there are trade-offs to consider. Before deciding to adopt microservices, one should make an informed choice by weighing the benefits against the drawbacks in the context of the business and technical problems being addressed. 

In this article, I am going to present a brief overview of some of the benefits and drawbacks to be considered before settling on a microservice architecture.  

In business-related scenarios, software development is about adding business value, not about adopting technology for its own sake. Sam Newman, a respected author and speaker on microservices, states this clearly: 

“You don’t win by doing microservices. The only person that wins if you do microservices is me because I sell books about microservices. You’re implementing microservices to achieve something. What is it you’re actually trying to achieve? It’s amazing how many people I chat to who can’t give me a clear articulation of why they’re doing microservices. It starts about having a real clear vision for what it is you’re trying to achieve. Then you can say, is microservices the right approach for this? Are we using microservices in the right way?”

(See https://www.infoq.com/podcasts/monolith-microservices/ and https://samnewman.io/)  

Another well-known software architect, Chris Richardson, has identified several microservice anti-patterns that you should be aware of:

  • Magic pixie dust: believing that microservices will cure all of your development woes
  • Microservices as the goal: focusing on adopting microservices rather than improving the velocity and reliability of software delivery
  • Scattershot adoption: numerous groups within an organization adopt microservices and implement supporting infrastructure without any coordination
  • Trying to fly before you can walk: attempting to adopt microservices while lacking key development skills such as automated testing and the ability to write clean code
  • Focusing on technology: focusing on cool (deployment) technology rather than the essence of the microservice architecture: service decomposition and definition
  • The more the merrier: developing an excessively fine-grained architecture
  • Red flag law: keeping in place policies and processes that obstruct rapid, frequent, and reliable software delivery.

(See https://microservices.io/microservices/general/2018/11/04/potholes-in-road-from-monolithic-hell.html )

What are Microservices and Monolith Systems?  

In this article, I’m considering the benefits and drawbacks of microservices compared to those of a monolithic system. Before beginning, let’s understand what we mean by these terms. 

The following definitions are from an article by Martin Fowler and James Lewis, “Microservices – a definition of this new architectural term.”  

Microservices   

“In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.” 

Monolith  

“To start explaining the microservice style it’s useful to compare it to the monolithic style: a monolithic application built as a single unit. Enterprise Applications are often built in three main parts: a client-side user interface (consisting of HTML pages and JavaScript running in a browser on the user’s machine), a database (consisting of many tables inserted into a common, and usually relational, database management system), and a server-side application. The server-side application will handle HTTP requests, execute domain logic, retrieve and update data from the database, and select and populate HTML views to be sent to the browser. This server-side application is a monolith – a single logical executable. Any changes to the system involve building and deploying a new version of the server-side application.”

(See https://martinfowler.com/articles/microservices.html )

Microservice Benefits 

Independent service scalability 

In most systems, the load and throughput characteristics are not uniform. Microservices give you the ability to scale each service independently as needed. A monolithic system, by contrast, must be scaled as a single unit, which is an inefficient use of resources. 

Independent development and deployment

Microservice architectures are well suited to continuous integration, continuous delivery, and continuous deployment.  

Small autonomous teams can develop, test, and deploy microservices independently and in parallel. With a monolithic system, more coordination across teams is needed. Releases of a monolithic system therefore tend to be larger in scope and done on a slower cadence. And because each release is larger in scope, deploying a monolith carries a higher risk of introducing issues than releasing a single microservice. 

The improved development and deployment features of microservices enable a faster time to market which can be a critical factor for many systems. 

Changes are easier in a microservice architecture 

Across a system, some services are more prone to change than others. A microservice architecture allows these services to be changed and deployed at their own pace.   

When compared to a monolithic system, each service in a microservice architecture is easier to understand and change.   

When making changes to a monolithic system:  

  • The team must have a deeper understanding of the entire system. 
  • It is more difficult to figure out which parts need updating.
  • It is more difficult to understand the impacts across the system when code in one area requires changes. This increases the chance of introducing breaking changes.  

Another factor in favor of microservices is service boundary stability. Service boundaries in a microservice architecture are easier to keep consistent because they are explicit.

In a monolithic system, the in-process service boundaries are hard to enforce. Over time, these boundaries tend to break down, leading to functionality being arbitrarily spread across the system. This makes maintenance more difficult and error-prone. As it becomes harder to understand how to correctly implement a change, you can end up in a downward spiral of declining code quality.

Technology Flexibility 

Microservices give you the ability to choose technology based on the specific requirements of each service. For instance, it might make sense to use a document database for one service, a graph database for another and a relational database for another.   

It should be noted that there is a cost to having a more heterogeneous technology stack. As with any design decision, do not make this choice until you have carefully weighed the costs against the benefits. 

Using a monolithic architecture requires a long-term commitment to a particular technology stack. Adopting a microservice architecture allows the internal architecture of each service to evolve independently. If required, a service may be swapped for a new implementation or even removed if it is no longer needed.  

Resilience 

In a microservice architecture, if one service fails and the failure does not cascade, other services may be able to continue to operate.  

Note that stopping a failure from cascading does not happen intrinsically; the system must be designed to handle failure situations. This includes adopting design patterns such as Circuit Breaker, Bulkhead, Retry (with backoff), and Timeout.
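
As a rough illustration of the Circuit Breaker pattern, here is a minimal sketch in C#. The class and parameter names are my own and the logic is deliberately simplified; in production you would typically use a hardened resilience library such as Polly.

using System;

// A minimal, illustrative circuit breaker. After too many consecutive
// failures it "opens" and fails fast, giving the struggling service
// time to recover instead of letting the failure cascade to callers.
public class SimpleCircuitBreaker
{
  private readonly int _failureThreshold;
  private readonly TimeSpan _openDuration;
  private int _consecutiveFailures;
  private DateTime _openedAtUtc;

  public SimpleCircuitBreaker(int failureThreshold, TimeSpan openDuration)
  {
    _failureThreshold = failureThreshold;
    _openDuration = openDuration;
  }

  public T Execute<T>(Func<T> call)
  {
    // While the circuit is open, fail fast rather than calling a
    // service that is known to be struggling.
    if (_consecutiveFailures >= _failureThreshold &&
        DateTime.UtcNow - _openedAtUtc < _openDuration)
    {
      throw new InvalidOperationException("Circuit is open; failing fast.");
    }

    try
    {
      T result = call();
      _consecutiveFailures = 0; // a success closes the circuit
      return result;
    }
    catch
    {
      _consecutiveFailures++;
      if (_consecutiveFailures >= _failureThreshold)
        _openedAtUtc = DateTime.UtcNow; // (re)open the circuit
      throw;
    }
  }
}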

Microservice Drawbacks 

Complexity 

Microservices form a distributed system, and distributed systems are, by their nature, unpredictable. With that comes complexity in developing, testing, operating, and understanding the system. Examples of this complexity include:  

  • When a failure occurs, it can be difficult to find the root cause and understand how other services are affected.  
  • Transactions that span service boundaries add complexity to the design and operation of the system.  
  • End-to-end testing is more difficult to set up and execute compared to a monolithic system.

Monitoring and observability strategies need to be factored into the design right from the beginning. Having a robust monitoring strategy is a requirement for microservices.  

Determining Service Boundaries is Difficult  

Deciding on the service boundaries is one of the most important and difficult design decisions in a microservice architecture.

If your services are too fine-grained, you risk having:   

  • Excessive coupling between services. Higher coupling means that a change in one service can potentially affect multiple services, reducing the advantages of “Independent development and deployment” and “Changes are easier in a microservice architecture” described above.    
  • Performance issues due to network latency. If a service needs to call many other services to fulfill its use case, this causes an excessive number of network calls, each of which adds latency overhead.  
  • Cyclical dependencies, which can cause a so-called distributed stack overflow. This is where a transaction gets stuck in a loop of services calling each other.

If you make your service boundaries too wide, that is, making your services too coarse-grained, then you lose many of the benefits of microservices, such as “Independent service scalability” and “Independent development and deployment,” described above in the Benefits section.  

When deciding on service boundaries, you must also consider data consistency requirements. Transactional boundaries should be understood upfront. If transactions need to span multiple services, you need to consider how these transactions will be handled. Some options are compensating transactions, messaging with eventual consistency, the Saga Pattern, or redrawing your service boundaries.  
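
To make the compensating-transaction option concrete, here is a highly simplified orchestration sketch in C#. All interface and class names here are hypothetical, and a real implementation would also have to handle failures of the compensating action itself.

using System;

// Hypothetical services participating in a transaction that spans
// service boundaries.
public interface IInventoryService
{
  void ReserveStock(Guid orderId);
  void ReleaseStock(Guid orderId); // the compensating action
}

public interface IPaymentService
{
  void ChargeCustomer(Guid orderId);
}

// Coordinates the steps and applies a compensating transaction when a
// later step fails, rather than relying on a distributed lock.
public class PlaceOrderOrchestrator
{
  private readonly IInventoryService _inventory;
  private readonly IPaymentService _payments;

  public PlaceOrderOrchestrator(IInventoryService inventory, IPaymentService payments)
  {
    _inventory = inventory;
    _payments = payments;
  }

  public void PlaceOrder(Guid orderId)
  {
    _inventory.ReserveStock(orderId);
    try
    {
      _payments.ChargeCustomer(orderId);
    }
    catch
    {
      // Undo the earlier step so the system converges back to a
      // consistent state.
      _inventory.ReleaseStock(orderId);
      throw;
    }
  }
}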

For information on compensating transactions, see: https://docs.microsoft.com/en-us/azure/architecture/patterns/compensating-transaction 

For information on Saga Pattern, see: https://microservices.io/patterns/data/saga.html

Also, Microsoft has a good article on data considerations for microservices. See: https://docs.microsoft.com/en-us/azure/architecture/microservices/design/data-considerations 

One recognized strategy for determining service boundaries is to use Domain Driven Design (DDD). In DDD, the concept of the Bounded Context is useful for service boundary determination.

For more information on DDD Bounded Contexts, see the article “Identify domain-model boundaries for each microservice”: https://docs.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/identify-microservice-domain-model-boundaries

Network Overhead 

Even if you get your service boundaries correct, you may still face issues with network latency. This needs to be considered in your design. You should also plan for performance testing and latency monitoring on your production systems.  

Organizational Considerations 

In addition to the technical challenges presented by microservices, organizational structure should be considered. 

One of the main benefits of microservices is “Independent development and deployment” as noted in the Benefits section above. This is centered around small cross-functional teams (5 to 10 people) working independently. These teams should be organized around delivering business value, not around technical capabilities. If your organizational structure and culture do not support these types of teams, you will not realize the full benefits of a microservice architecture.   

You should also consider Conway’s Law which states:  

“Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.”  

There is evidence, both anecdotal and empirical, that an organization’s structure influences the nature and quality of the systems it produces. 

For more information, see the article “Microservices: Organizational Practices, Part 2” https://dzone.com/articles/microservices-organizational-practices-part-2

Do your teams have the required skills to develop and operate a microservice? 

A microservice architecture is a distributed system. You need to evaluate whether your teams have the skills and experience to be successful in building and operating a distributed system. If not, you may be better off with a modular monolith.   

If you are not familiar with the modular monolith, I suggest you watch Simon Brown’s video presentation available on YouTube at https://www.youtube.com/watch?v=5OjqD-ow8GE 

Conclusion 

While microservices have many compelling benefits, before starting your journey to this architectural style, you need to understand the potential problems. As with any architectural decision, there are trade-offs.  

If your microservices are not designed with enough care and attention, you may end up walking into one of the worst architectures there is, the distributed monolith. This is a system that has all the drawbacks of a distributed system with none of its benefits.  

One potential path towards microservices is first building a well-structured modular monolith. If at a later point you are facing issues that need to be solved using a microservice architecture, you can break out one or two services from the modular monolith and slowly move towards microservices. This also gives you and your teams a chance to incrementally learn what it takes to build and operate a microservice.  

As a final warning about the pitfalls that may lie ahead, I suggest you watch the following videos:

C# 9.0: The New Record Keyword

Introduction

In this post, I am going to discuss one of the new features included in C# 9.0: records. I’ll cover when and why you would want to use a C# record.

To see a full list of what’s new in C# 9.0 see:

What’s new in C# 9.0: https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-9

C# 9.0 is being released with .NET 5. Before I get into the theme of this post, I want to mention the upcoming release of .NET 5. This release is a major milestone. It represents the unification of .NET Core and .NET Framework. Going forward, there will be one .NET which can target Linux, macOS, iOS, Android, tvOS, watchOS, WebAssembly, and more.

(Figure: .NET - A Unified Platform. Source: Microsoft)

.NET 5 will be released at the .NET Conf which runs from Nov 10th to Nov 12th, 2020.

If you wish to sign up for the conference, details can be found here: https://www.dotnetconf.net/

If you want to get a better understanding of what’s new in .NET 5, I suggest you watch the following video from Microsoft Ignite 2020:

The Future of .NET is .NET 5: https://www.youtube.com/watch?v=kX0eX5tDGgc

What is the new record keyword and when would you use it?

Records provide a quick and straightforward way to write code for value-based types. Note: everything provided by a record can be implemented with C# classes. However, C# records make it far easier to do and significantly reduce the amount of boilerplate code. Later, we will see how much boilerplate code disappears when using a record versus a class.

So, what do I mean by value-based types? These are types where the data contained in the object is the primary concern, not the behavior of the object; value-based types are simply collections of data. Typical uses include:

  • Data Transfer Objects (DTOs). DTOs are types that contain no business logic. DTOs are used for transferring data over the wire or where isolation between service layers is required.

For more information on DTOs, see: https://deviq.com/anemic-model/

  • Value Objects. Value objects are used in the Domain Driven Design (DDD) pattern of the same name. These are immutable objects that are only identifiable by the values of their properties. Value Objects are for types that only exist within the context of another object, that is, they have no identity of their own. An example of this would be an Address object associated with a Customer Order. The Address has no identity on its own; it exists only in the context of the Order. If the Order is deleted, the Address is also deleted, as it makes no sense for it to exist on its own.

Further, two value objects are considered equal only if all of their properties are equal. In contrast, two entities may be considered equal if their identifying values are equal. For instance, two Customer Orders may be considered equal if their Order Numbers are equal, regardless of the state of their other properties.

For more information on Value Objects, see: https://deviq.com/value-object/

  • View Models. A View Model is a type that includes just the data a View requires for display or sending back to the server.

For more information on View Models, see: https://deviq.com/kinds-of-models/

The characteristics common to the value-based types are:

  • All property values determine equality between instances.
  • They do not contain business logic. These objects act as containers for data.
  • They should be immutable; changing initialized properties should not be allowed.

Immutability provides several benefits:

  • When you pass objects around, you should not have to worry about another method or class modifying the object. Unexpected property modifications can result in unexpected bugs.
  • Immutable objects are inherently thread safe. With the rise of async programming, this becomes more important.
  • The use of immutability is part of a defensive programming strategy.

While DTOs, Value Objects, and View Models are closely related and share the same characteristics, the intent of their usage is what distinguishes them.

Example of using the new record keyword

To see the basic functionality provided by a C# record, we will first create a class-based Value Object. We will then replace the class-based Value Object with one based on a C# record.

In the following example, we will create a basic console app and an Address Value Object. First, let’s code the Address as a class. After that, we can refactor it to make the Address a C# record. From this, we will see how much boilerplate code is removed by making the Address a C# record.

If you wish to try this yourself, you will need:

  • Visual Studio 2019 (version 16.8 or later)
  • The .NET 5 SDK

From Visual Studio, create a project named “RecordTypesCSharp9” as a “Console App (.NET Core)”.

In Visual Studio, change the Project properties to target “.NET 5.0”.

Now let’s create an Address class. The Address type will have the following characteristics:

  • It can be considered a DDD Value Object.
  • Immutable.
  • Supports equality comparison based on the properties of the object.
  • Overrides ToString() to list out the Address properties.
  • Implements Deconstruct() in order to support pattern matching.

Add the following code to the Address class. Note that most of this code is boilerplate and there is no business logic.

using System;
namespace RecordTypesCSharp9
{
  public class Address : IEquatable<Address>
  {
    public string StreetAddress { get; }
    public string City { get; }
    public string State { get; }

    public Address(string streetAddress, string city, string state)
    {
      StreetAddress = streetAddress;
      City = city;
      State = state;
    }

    public override string ToString()
    {
      return $"StreetAddress:{StreetAddress}, City:{City}, State:{State}";
    }

    public static bool operator == (Address left, Address right) =>
      left is object ? left.Equals(right) : right is null;

    public static bool operator != (Address left, Address right) =>
      !(left == right);

    public override bool Equals(object obj) => 
      obj is Address a && Equals(a);

    public bool Equals(Address other) =>
      other is object &&
      StreetAddress == other.StreetAddress &&
      City == other.City &&
      State == other.State;

    public override int GetHashCode()
    {
      return HashCode.Combine(StreetAddress, City, State);
    }

    public void Deconstruct(out string streetAddress, out string city, out string state)
    {
      streetAddress = StreetAddress;
      city = City;
      state = State;
    }

  }
}

Now copy the following code into our Program.cs. It exercises the functionality we incorporated in our Address type:

using System;
using System.Text.Json;

namespace RecordTypesCSharp9
{
  class Program
  {
    static void Main(string[] _)
    {
      Console.WriteLine($"Try out the Address (class based).");

      Address address1 = new Address
      ("200 Riverfront Ave SW", "Calgary", "AB");
      Console.WriteLine($"1) address1: {address1}");

      Address address2 = new Address
      ("200 Riverfront Ave SW", "Calgary", "AB");
      Console.WriteLine($"2) address2: {address2}");

      Console.WriteLine($"3) address1 == address2: {address1 == address2}");

      string jsonAddress1 = JsonSerializer.Serialize(address1);
      Address address1a = JsonSerializer.Deserialize<Address>(jsonAddress1);
      Console.WriteLine($"4) address1a == address1: {address1a == address1}");

      var isInCalgary = IsInCalgary(address1);
      Console.WriteLine($"5) isInCalgary: {isInCalgary}");
    }

    static bool IsInCalgary(Address address) => address switch
      {
        (_, "Calgary", _) => true,
        _ => false
      };
  }
}

When we run the Console App, output along the following lines is produced:
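
Try out the Address (class based).
1) address1: StreetAddress:200 Riverfront Ave SW, City:Calgary, State:AB
2) address2: StreetAddress:200 Riverfront Ave SW, City:Calgary, State:AB
3) address1 == address2: True
4) address1a == address1: True
5) isInCalgary: True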

From this we can see the following functioned as expected:

  • The ToString() override: 1) and 2) in the output above.
  • Equality for constructed objects: 3) in the output above.
  • Equality for serialized/deserialized objects: 4) in the output above.
  • Deconstruction and pattern matching: 5) in the output above.

None of this is either surprising or even that interesting. The thing to note is the amount of boilerplate code in the Address class required to provide this basic functionality. Consider that if you added a new property to the Address class, say a second address line, the boilerplate code must be updated, possibly introducing errors. Also, if you have many of these value-based types in your application, the amount of boilerplate code can be significant.

Now let’s replace the Address class with an Address type based on a C# record.

Go to the Address.cs file and replace the existing code with the code shown below.

namespace RecordTypesCSharp9
{
  public record Address(string StreetAddress, string City, string State);
}

That’s it; this is all that is required to provide the same functionality as the previous implementation of the class-based Address.

Before we run the Console app again, let’s also change the following line in Program.cs from:

Console.WriteLine($"Try out the Address (class based).");

To:

Console.WriteLine($"Try out the Address (record based).");

Running the console app again produces output along these lines:
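
Try out the Address (record based).
1) address1: Address { StreetAddress = 200 Riverfront Ave SW, City = Calgary, State = AB }
2) address2: Address { StreetAddress = 200 Riverfront Ave SW, City = Calgary, State = AB }
3) address1 == address2: True
4) address1a == address1: True
5) isInCalgary: True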

We can see that the new Address implementation based on the C# record behaves the same as before. The only visible difference is the format of the compiler-generated ToString() output.

The “With” Expression

By default, C# records are immutable. Records can be made mutable by defining properties and constructors with the same semantics as classes, but in general you should prefer the default immutability that records provide.
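
For instance, a record declared with ordinary get/set properties is mutable, unlike the positional form we used earlier (a brief, illustrative declaration):

// A mutable record: declared with class-style get/set properties
// rather than the positional syntax, so its properties can be
// reassigned after construction.
public record MutableAddress
{
  public string StreetAddress { get; set; }
  public string City { get; set; }
  public string State { get; set; }
}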

To help make immutability easier to work with, the C# record semantics provide a way to copy an existing record and update one or more values in a single statement.

For example, to create a new address from an existing instance and update the StreetAddress to a new value, we can do this using the “with” expression as follows:

var newAddress = address1 with { StreetAddress = "804-3 Ave SW" };

The expression above would give us a new Address instance with the same City and State as contained in address1 but with a new value for StreetAddress.

Summary

Under the covers, a C# record is compiled to a C# class that implements the boilerplate code that we had in our class-based Address type.

As C# records are compiled to classes, records can use many of the same semantics as classes, including:

  • Inheritance
  • Methods
  • Property getters and setters
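
For example, one record can inherit from another. A hypothetical extension of our Address record might look like this:

// A record inheriting from the Address record defined earlier.
public record InternationalAddress(string StreetAddress, string City,
  string State, string Country) : Address(StreetAddress, City, State);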

By using a C# record to replace value-based classes, we can remove a significant amount of boilerplate code. This helps to reduce the size of our codebase and more importantly, reduces the probability of errors and makes our code easier to change.

For more information on C# records, see: https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-9#record-types

Dependency Inversion and the Clean Architecture

By Daryl Olson

Overview

In this post, I am going to discuss the Clean Architecture and provide background on a foundational principle used in its implementation, the Dependency Inversion Principle.

I’ll explain why the Clean Architecture can help developers produce applications that exhibit low coupling and independence from technical details such as databases and other infrastructure concerns.

These characteristics lead to code that is more maintainable, adaptable and significantly easier to test.

The Dependency Inversion Principle (DIP)

Before describing the Clean Architecture, it is useful to review the Dependency Inversion Principle as this principle is fundamental to the implementation of this architectural pattern.

The Dependency Inversion Principle is the “D” in the SOLID principles of object-oriented programming.

The SOLID Principles are:

  • Single Responsibility Principle: This is related to the Separation of Concerns. It states that a class should have only a single responsibility.
  • Open–Closed Principle: Software entities should be open for extension and closed for modification.
  • Liskov Substitution Principle: The correctness of a program should not be affected when objects are replaced with their subtypes.
  • Interface Segregation Principle: Keep interfaces specific to the client requirements rather than creating a general-purpose interface.
  • Dependency Inversion Principle: Described in the following section.

To keep the scope of this blog reasonable, I will not be discussing the other SOLID principles. For a high-level overview of these other principles see:

https://en.wikipedia.org/wiki/SOLID

The Dependency Inversion Principle was articulated by Robert C. Martin in the mid-1990s. He described it as:

  • High-level modules should not depend on low-level modules. Both should depend on abstractions.
  • Abstractions should not depend on details. Details should depend on abstractions.

Let’s look at what this means in a more practical sense.

The terms “low-level” and “high-level” are relative; they refer to how close a module (class or method) is to a concrete implementation or an infrastructure concern. Lower-level modules are those that provide the concrete implementation of an abstraction or that reference infrastructure.

Infrastructure concerns include file systems, databases, and network resources. In C#, even the use of the property DateTime.Now represents the use of a low-level module as it explicitly references an infrastructure concern, the system clock. Code that references infrastructure is inherently more difficult to unit test.
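
As a hypothetical illustration (the IClock name is mine, not a .NET type), the system clock can be hidden behind an abstraction so that time-dependent logic remains unit testable:

using System;

// An abstraction over the system clock. High-level code depends on
// this interface instead of on DateTime.Now directly.
public interface IClock
{
  DateTime UtcNow { get; }
}

// The production implementation is the only code that touches the
// infrastructure concern (the real system clock).
public class SystemClock : IClock
{
  public DateTime UtcNow => DateTime.UtcNow;
}

// Example high-level policy: easy to unit test by supplying a fake
// IClock that returns a fixed time.
public class OrderExpiryPolicy
{
  private readonly IClock _clock;

  public OrderExpiryPolicy(IClock clock) => _clock = clock;

  public bool IsExpired(DateTime orderPlacedUtc) =>
    _clock.UtcNow - orderPlacedUtc > TimeSpan.FromDays(30);
}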

Another way of thinking about DIP is to prefer configurable dependencies for lower-level concerns.

Lower-level classes do not have to be infrastructure-related. For example, suppose that sales orders need to have their sales tax calculated. Let’s look at two possible ways of meeting this requirement.

Option 1: We can add a method in the Order class called CalculateSalesTax(). This method contains the specific logic to perform the calculation, returning the resulting value. 

Option 2: Alternatively, we could create an interface, ISalesTaxCalculator, with a method CalculateSalesTax. We would then create a specific implementation of this interface containing our domain logic for performing the calculation. The Order class would reference ISalesTaxCalculator and call the CalculateSalesTax method. Dependency Injection would be used to provide a specific implementation (concrete class) at runtime.

Using the concepts of DIP to compare the Order class described in Option 1 with that described in Option 2, we can see that Option 2 more closely follows the Dependency Inversion Principle. The Option 2 Order depends on an abstraction (ISalesTaxCalculator) rather than the concrete implementation details used in Option 1.
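
A minimal sketch of Option 2 might look like the following (the Alberta calculator and its 5% rate are purely illustrative):

// The abstraction the Order class depends on.
public interface ISalesTaxCalculator
{
  decimal CalculateSalesTax(decimal orderTotal);
}

// One concrete implementation; others could be swapped in for other
// jurisdictions without changing the Order class.
public class AlbertaSalesTaxCalculator : ISalesTaxCalculator
{
  public decimal CalculateSalesTax(decimal orderTotal) => orderTotal * 0.05m;
}

public class Order
{
  private readonly ISalesTaxCalculator _salesTaxCalculator;

  // The concrete calculator is supplied via Dependency Injection at runtime.
  public Order(ISalesTaxCalculator salesTaxCalculator) =>
    _salesTaxCalculator = salesTaxCalculator;

  public decimal OrderTotal { get; set; }

  public decimal GetSalesTax() =>
    _salesTaxCalculator.CalculateSalesTax(OrderTotal);
}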

So why is the Dependency Inversion Principle important? Classes that follow DIP have lower levels of coupling and are more modular. These factors have several advantages including:

  • It makes classes easier to unit test.
  • The application is more able to adapt to changing requirements as a component can be changed independently of the whole. In less modular, tightly coupled code, a maintenance developer may have to understand a large part of the complete system to be able to determine the implications of modifying a single component.

The Clean Architecture

One of the most widely known tenets of software development is the separation of concerns. Wikipedia defines the separation of concerns as:

“In computer science, separation of concerns (SoC) is a design principle for separating a computer program into distinct sections, so that each section addresses a separate concern.”

Wikipedia further states:

“Layered designs in information systems are another embodiment of separation of concerns (e.g., presentation layer, business logic layer, data access layer, persistence layer).”

Most developers have used layered structures as described above. Unfortunately, while the traditional layered design is noble in its intent, it often falls short in keeping concerns separate. It can be unclear where logic should go, especially for new or less experienced developers. This can result in the leakage of business logic into the presentation or infrastructure layers. Business logic in these layers results in tight coupling and reduces modularity.

What if there was a way to structure your code so that it helped stop concerns from leaking across your layers?  What if we could help developers place functionality in the correct package?

This is where the Clean Architecture can help. This architecture pattern follows the Dependency Inversion Principle to help maintain the separation of concerns.

This architecture was first proposed by Robert C. Martin with the purpose of encouraging modularity, low coupling and keeping the business design independent from the technical details.

A typical depiction of the Clean Architecture shows the system as a series of concentric layers, with the Domain at the center.

The key points to note from the Clean Architecture diagram are:

  • The arrows depict software code dependencies. Software code refers to classes, methods or any other named software entities.
  • All software code dependencies point inward.
  • Nothing in an inner layer can know anything about software code in an outer layer.
  • Outer layers have a lower level of abstraction when compared to inner layers.
  • It places the Domain as the core layer in the application. In many traditional layered designs, the persistence layer is the bottom layer. This can lead to implementations where the database is the core of the application.

An important concept in the Clean Architecture is the form of the data that crosses layer boundaries. To avoid violating the rule that “Nothing in an inner circle can know anything about software code in an outer circle,” data that crosses the boundaries must consist of simple data structures. This keeps the inner layers from having a dependency on any outer-layer software code.

Using these concepts to structure your code will provide the following benefits:

  • Testable application and domain code that does not require presentation logic, databases or any other specific infrastructure.
  • Domain logic that is independent of the specific persistence being used.
  • Presentation (UI) independence.  
  • Framework independence.

You will also notice that the Clean Architecture adheres to the Dependency Inversion Principle with the lower-level concerns (outer levels) dependent on the high-level concerns (inner levels).

Domain Layer

The Domain layer is where the core concepts representing the real-world items being modeled are constructed. This includes both behavior (business logic) and data (entities).  This is where the value being delivered to the end-user is created.

This layer will contain:

  • Custom Exceptions
  • Domain Logic
  • Entities
  • Enumerations
  • Value Objects

Typically, this is where:

  • the highest rate of change occurs
  • the largest amount of testing is required.

It must not refer to any technical implementation details. This makes the domain behavior much easier to test and to adapt to changing requirements.

The Domain Layer should also not contain any framework-specific details. For example, if you are using Entity Framework, you should not be decorating your domain classes with Data Annotations. It is better to move this configuration to the Infrastructure Layer using Entity Framework’s fluent configuration.
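
For example, with Entity Framework Core, a rule such as a required, length-limited property can be expressed in the Infrastructure Layer rather than with [Required] or [MaxLength] attributes on the entity (the Customer entity here is hypothetical):

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

// A domain entity that stays free of persistence concerns.
public class Customer
{
  public int Id { get; set; }
  public string Name { get; set; }
}

// Lives in the Infrastructure Layer and holds the Entity Framework
// mapping details for the Customer entity.
public class CustomerConfiguration : IEntityTypeConfiguration<Customer>
{
  public void Configure(EntityTypeBuilder<Customer> builder)
  {
    builder.HasKey(c => c.Id);

    // The fluent equivalent of [Required] and [MaxLength(100)].
    builder.Property(c => c.Name)
      .IsRequired()
      .HasMaxLength(100);
  }
}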

Application Layer

This layer should be kept thin and focused on coordinating tasks, delegating to the Domain layer whenever business behavior is required. Because business logic belongs in the Domain layer, no business behavior should be contained in this layer.

The Presentation layer will use this layer to interact with the Domain Layer. These interactions are typically based on a use case, which may involve one or more subtasks. For example, a “Place an Order” use case could involve multiple steps, such as checking and reducing inventory, persisting order details, and sending a confirmation email, all of which require coordination and together represent a transactional boundary.
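
A sketch of what such a coordinating application service could look like (all names here are hypothetical, and transaction handling is omitted for brevity):

// Hypothetical abstractions the application layer coordinates; their
// concrete implementations live in outer layers.
public interface IInventoryAdjuster { void CheckAndReduceInventory(int orderId); }
public interface IOrderRepository { void SaveOrderDetails(int orderId); }
public interface IConfirmationEmailSender { void SendConfirmation(int orderId); }

// Application-layer service for the "Place an Order" use case. It
// coordinates the subtasks but contains no business rules of its own.
public class PlaceOrderService
{
  private readonly IInventoryAdjuster _inventory;
  private readonly IOrderRepository _orders;
  private readonly IConfirmationEmailSender _email;

  public PlaceOrderService(IInventoryAdjuster inventory,
    IOrderRepository orders, IConfirmationEmailSender email)
  {
    _inventory = inventory;
    _orders = orders;
    _email = email;
  }

  public void PlaceOrder(int orderId)
  {
    _inventory.CheckAndReduceInventory(orderId); // subtask 1
    _orders.SaveOrderDetails(orderId);           // subtask 2
    _email.SendConfirmation(orderId);            // subtask 3
  }
}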

While this layer should not maintain any domain-related state, it can maintain state related to tasks which it is coordinating.

This layer will contain:

  • Application Logic
  • Custom Exceptions
  • Commands and Queries if the CQRS pattern is being used
  • Interfaces
  • Models and Data Transfer Objects
  • Validators

For information on the CQRS pattern, see:

https://docs.microsoft.com/en-us/azure/architecture/patterns/cqrs

As with the Domain Layer, the Application Layer must not refer to any technical or framework implementation details. Once again, this will make unit tests easier to create.

Infrastructure Layer

This layer contains technical implementation details such as database persistence, use of the file system, and any network-related concerns.

Dependencies Between Layers

In this structure, the Presentation and Infrastructure layers depend on the Application layer, and the Application layer depends only on the Domain layer. The Domain layer has no dependencies on any other layer.

When to Choose the Clean Architecture

The Clean Architecture is most suitable for applications when there is a fair amount of business complexity or where support for changing business requirements is important.

If all you require is a simple CRUD application, this pattern might be overkill.

Summary

The Clean Architecture provides a pattern for application development that can help provide code that is more maintainable, adaptable and significantly easier to test in an automated manner. If you have any doubts about taking the time and effort to implement a pattern such as the Clean Architecture, I suggest you read Martin Fowler’s post “Is High Quality Software Worth the Cost?”:

https://martinfowler.com/articles/is-quality-worth-cost.html