How To Ensure Idempotency In An Eventually Consistent DDD/CQRS Application

Written on 26 August 2015

If you want a reliable DDD/CQRS app (distributed or not), you need to treat idempotency as a first-class concept. And while not all DDD apps use a message driven architecture, most of them do and, in my opinion, that's the best way to end up with a maintainable codebase. Idempotency is required because of the eventual consistency 'feature' of a message driven system.

But before we talk about solutions, let's understand the problem. An eventually consistent app treats a business process as a sequence of business cases. An aggregate is changed, one or more events are published, the event handlers trigger other business cases and so on. We have a chain of independent operations, each with its own state, which together define one process (a domain unit of work). That chain can be broken at any time: the app crashes, the server recycles the app, we deploy a newer version etc. We end up with a state where some aggregates were changed while others never made it.

It sure looks like the app is in an inconsistent state; however, the use of a durable service bus makes sure that we are in an eventually consistent state i.e. things will become consistent very soon (it's just a matter of time). If you're used to the lower level db transaction, where everything is either committed or rolled back, eventual consistency looks quite dangerous and you might feel unsafe. Luckily, it's just the typical "out of the comfort zone" feeling. Eventual consistency might look fragile and complicated to handle, but once you get used to it, it becomes second nature.

The main 'ingredient' in dealing with eventual consistency is proper idempotency handling. Considering a process can be interrupted at any time and then executed again from the beginning, we need a way to ensure that a repeated operation changes a model only once. I'm going to show you 2 ways of doing this, but first there are some "rules":

  • Every operation has a unique id (a guid);
  • The aggregate root needs the operation id to be set before any changes (see the sketch below).
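
In code, the first rule usually means commands carry their own id, and the second maps to an interface like the IOperationId one implemented later in this post. A minimal sketch:

public interface IOperationId
{
    //must be called before the aggregate is changed
    void SetOperationId(Guid id);
    Guid GetOperationId();
}

public class DoStuffCommand
{
    //rule 1: every operation has a unique id
    public Guid Id { get; private set; }

    public DoStuffCommand()
    {
        Id = Guid.NewGuid();
    }
}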

Handling Idempotency at the Aggregate Level

The easiest scenario is when you're using Event Sourcing, because the entity already has all (or at least the latest) events, and we can make the operation id part of the event itself. All my events (and commands) have a SourceId property, which is the id of the operation that generated them (for commands, in most cases, the command id is also the source id). Then it goes like this:

var foo = _repo.Get(fooId);
IEnumerable<IEvent> evnts = new IEvent[0];

try
{
  foo.SetOperationId(cmd.SourceId);
  foo.ChangeStuff();
  evnts = foo.GetGeneratedEvents();
}
catch(DuplicateOperationException)
{
  //the operation was already applied; republish the events it generated the first time
  evnts = foo.GetEventsForOperation(cmd.SourceId);
}

_bus.Publish(evnts);

So, the aggregate is aware of the most recent (or all) operation Id and if you try to set one that was used before, it throws. Pretty simple.
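A minimal sketch of how an event-sourced aggregate might implement this check, assuming every stored event exposes the SourceId of the operation that produced it (member names are illustrative):

public abstract class EventSourcedEntity
{
    //events loaded from the event store plus those generated in the current session
    private readonly List<IEvent> _events = new List<IEvent>();
    private Guid? _operationId;

    public void SetOperationId(Guid id)
    {
        //the operation history is implicit in the stored events
        if (_events.Any(e => e.SourceId == id))
        {
            throw new DuplicateOperationException(id, null);
        }
        _operationId = id;
    }

    public IEnumerable<IEvent> GetEventsForOperation(Guid operationId)
    {
        return _events.Where(e => e.SourceId == operationId);
    }
}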

But what happens if you're not using event sourcing? Well, we can make the entity keep a history of operations and check it every time the operation id is set. The problem is that this will increase the entity size a lot if the object changes often. However, for objects that rarely change it might be a solution. Here's a base class that may be used for that:

public abstract class AIdempotentEntity : IOperationId, IEntityId
{
    protected int MaxHistory = 2;

    private List<Guid> _history = new List<Guid>();
    protected Guid? _operationId;

    //json serialization friendly
    protected List<Guid> History
    {
        get { return _history; }
        set { _history = value; }
    }

    public virtual void SetOperationId(Guid id)
    {
        if (_history.Contains(id)) throw new DuplicateOperationException(id, null);
        _operationId = id;
        if (MaxHistory == 0) return;
        //evict the oldest operation id when the history is full
        if (_history.Count == MaxHistory)
        {
            _history.RemoveAt(0);
        }
        _history.Add(id);
    }

    public Guid GetOperationId()
    {
        return _operationId.Value;
    }

    public Guid EntityId { get; protected set; }
}

So, what can we do when we have objects that change often, or when we need to keep the entire operation id history? We handle idempotency at the persistence level.

Handling Idempotency at the Persistence Level

I must say this is my preferred solution. It's also a generic solution that can be used any time you need to ensure idempotency, but it does have a small flaw: it requires an ACID compliant db.

The main idea is this: we have a table (OperationId: Guid, Hash: string) with a unique constraint spanning both columns. When we persist an object, we have this transaction:

using (var t = db.BeginTransaction())
{
    if (db.IsDuplicateOperation(item.ToIdempotencyId()))
    {
        return;
    }

    //save object

    //if anything fails before this point, disposing the transaction
    //rolls back the idempotency id as well
    t.Commit();
}


//ext method
public static bool IsDuplicateOperation(this DbConnection db, IdempotencyId data)
{
    data.MustNotBeNull();
    try
    {
        //Idems is the name of the idempotency table
        db.Insert("Idems", data);
    }
    catch (DbException ex)
    {
        //a unique constraint violation means a duplicate operation
        if (ex.IsUniqueViolation()) return true;
        throw;
    }
    return false;
}

public class IdempotencyId
{
    public static IdempotencyId Empty = new IdempotencyId();
    public bool IsEmpty() => OperationId == Guid.Empty && Hash.IsNullOrEmpty();

    public Guid OperationId { get; set; }

    /// <summary>
    /// It's actually an identifier of the entity/model/event involved
    /// </summary>
    public string Hash { get; set; }

    public IdempotencyId(Guid operationId, string hash)
    {
        OperationId = operationId;
        Hash = hash;
    }

    private IdempotencyId()
    {
    }
}
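
For reference, the idempotency table itself is trivial. A sketch, assuming SQL Server syntax and the 'Idems' table name used in the extension method above:

//executed once, e.g. in a schema migration (SQL Server flavour assumed)
const string CreateIdemsTable = @"
create table Idems (
    OperationId uniqueidentifier not null,
    Hash nvarchar(200) not null,
    constraint UX_Idems unique (OperationId, Hash)
)";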

Basically, we try to insert an idempotency id that uniquely identifies both the object and the operation; if the insert fails, it means it's a duplicate operation. If it succeeds, we can persist the entity and commit the transaction. If an error occurs, the transaction is rolled back and the idempotency id is not saved.

The idempotency id is the combination of the operation id and some other specific identifier (I call it a hash for lack of a better name) of the object. For domain objects, the hash can be the entity id. For read models, the hash can be the read model category and/or the domain entity id involved. Obviously it depends; there are no rules here. The only 'rule' is that you need to be aware that one operation can affect multiple aggregates or read models. For example, if you compute metrics in an event handler and this involves incrementing a counter, the read model specific hash should be something like "statsusercount".
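To make it concrete, here are some hypothetical idempotency ids created while handling a single ProjectCreated event (the property names are made up):

//in the domain repository, when persisting the Project aggregate
var domainIdem = new IdempotencyId(evnt.SourceId, "project-" + evnt.AggregateId);

//in the read model updater
var readIdem = new IdempotencyId(evnt.SourceId, "projectread");

//in the metrics updater which increments a counter
var statsIdem = new IdempotencyId(evnt.SourceId, "projectcount");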

Note that for this to work we always need a transaction involving both the idempotency storage and the domain storage, so they need to share the same storage. But you don't need one global idempotency store; each component can have its very own (also the case if you're using multiple databases/engines).

Conclusion

There you have it: 2 approaches to ensure idempotency for your eventually consistent app. Both work and I'm using both at the same time; it's not about choosing one over the other upfront, but choosing the best one for your specific case. As an example, I have a "Project" entity that uses event sourcing, so it makes sense to use the first approach, while the read model updater (an event handler) and the metrics updater (another event handler) subscribing to the ProjectCreated event use the second approach, with the idempotency id hashes 'projectread' and 'projectcount'.

To Null Or Not To Null aka Is Null Really Evil?

Written on 10 August 2015

It's not a secret that some devs consider null to be 'evil', and even the 'father' of null himself said that introducing null was his billion dollar mistake. But is null that evil? Some 'purists' (for lack of a better word) think so, and they advocate that we should stay away from nulls no matter what!

Personally, I don't care much about null. For me null means 'lack of value' or 'nothing' and I treat it as such. 'Nothing' is a valid result in many scenarios, and the alternative is to have a Maybe<T> type which would be used exactly like null. For me it would be just a string replacement; my code flow would stay the same. The thing is, from a developer's point of view, null is just a tool. The code isn't more buggy because C# has nulls. And null reference exceptions are great signals that you have an unhandled use case (read: bug) there.
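A contrived sketch of what I mean; Maybe<T> here is a hypothetical option type, and the calling code barely changes:

//with null as 'no value'
var user = repo.GetUser(userId);
if (user == null) return NotFound();

//with a hypothetical Maybe<User>: same flow, different spelling
var maybeUser = repo.GetUser(userId);
if (!maybeUser.HasValue) return NotFound();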

I think that adding an option type to replace null is more of a cosmetic change. We'll still be checking if a result exists instead of checking for null, and the Null Object pattern will still be used. Personally, I don't see this pattern as a 'necessary evil' (and some people call it an anti-pattern altogether) because it really makes sense in cases where you always expect to have a value. External services like logging, caching and even persistence are use cases where we expect some instance but none is configured (and in this case 'none' is a valid, and the default, configuration option). So we simply need a value which looks and acts like the expected service but does nothing, hence the null object.
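For example, a minimal null object for a logging service might look like this (ILogger and its members are illustrative):

public class NullLogger : ILogger
{
    public static readonly NullLogger Instance = new NullLogger();

    //looks and acts like a logger, does nothing
    public void Info(string message, params object[] args) { }
    public void Error(Exception ex, string message = null) { }
}

//the default when no logger is configured
ILogger logger = config.Logger ?? NullLogger.Instance;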

I really think that most devs 'screaming' that nulls are evil are nitpicking. While null might not be the best technical solution, it's not nearly as bad as some people claim it to be. Yes, null is the easiest way; yes, bad programmers write crappy code; and knives can kill people. In the end the tool is less important: the tool wielder makes or breaks something.

Null is just a simple tool and I would welcome a better tool if one became available in the future. But better should translate into tangible (yeah...) benefits like less code required, not replacing a word with another word.

Functional Programming vs OOP

Written on 30 July 2015

I have little experience with Functional Programming (FP) and even less with an FP language like F#. The FP I've done was mainly Linq stuff in C# and my attempt to write a build script for FAKE. I must say I wanted to see how I'd fare with F#, considering the number of its very vocal fans. Well, it was a painful experience, and not only because of the strange syntax. The main problem is the mindset. Really, an FP mindset is required when using an FP language, which is pretty logical. But the big question is: why would I want to use the FP mindset?

I follow devs on twitter who are very passionate about FP in general and F# in particular. For them, FP is the greatest thing ever, the ultimate golden hammer. Everything would be better if FP were used (with F#, of course). FP makes your code simpler, cleaner, less buggy etc. All hail FP, the greatest hammer of them all. Thor is green with envy now.

Based on my experience as a developer, I beg to differ. While I've used the FP approach, I did it only where it makes sense. I'd say the majority of problems/use cases aren't a natural fit for FP. But I know the FP crowd says the opposite. My opinion is that they've found their perfect solution and now they force every problem into it.

Let's see some examples. There's a pretty famous site full of articles about why FP and F# rulz and OOP and C# sux (not said directly, but clearly implied). I read it in order to try and do something with F#, but after reading some very biased articles I decided it's more of a propaganda center than a useful resource (I'm sure others disagree, but that's my opinion). It seems OOP is such a sin for the FP cult (some devs are really behaving like zealots) that they need to come up with articles like this. In the article, F# source code is compared to the same code decompiled to C# and, surprise, surprise, the C# code is ugly.

But let me show you some other code decompiled to C#:

private void ExecuteTasks(TaskDictionary list)
    {
      foreach (KeyValuePair<int, List<Func<string>>> keyValuePair in (IEnumerable<KeyValuePair<int, List<Func<string>>>>) Enumerable.OrderBy<KeyValuePair<int, List<Func<string>>>, int>((IEnumerable<KeyValuePair<int, List<Func<string>>>>) list.Items, (Func<KeyValuePair<int, List<Func<string>>>, int>) (d => d.Key)))
      {
        string str1 = keyValuePair.Key.ToString();
        if (keyValuePair.Key == int.MaxValue)
          str1 = "no order";
        foreach (Func<string> func in keyValuePair.Value)
        {
          string str2 = func();
          string message = "Executed task ({0}) '{1}' ";
          object[] objArray = new object[2];
          int index1 = 0;
          string str3 = str1;
          objArray[index1] = (object) str3;
          int index2 = 1;
          string str4 = str2;
          objArray[index2] = (object) str4;
          LogManager.LogInfo<AStrapbootWithContainer<T>>(this, message, objArray);
        }
      }
    }

Yes, it's ugly. And here is the original source of the above code

private void ExecuteTasks(TaskDictionary list)
       {
           foreach (var tasks in list.Items.OrderBy(d=>d.Key))
           {
               var pos = tasks.Key.ToString();
               if (tasks.Key == int.MaxValue)
               {
                   pos = "no order";
               }
               foreach (var task in tasks.Value)
               {
                   var name = task();
                   this.LogInfo("Executed task ({0}) '{1}' ", pos,name);
               }

           }
       }

A bit better, right? By the article's logic, hand written C# is better than decompiled C#, therefore C# is better than C#.

But wait, F# has immutable types and the C# equivalent is more verbose and uglier (FP people really care about source code beauty). Yes, it might be, but... WHY would I want everything to be immutable and compared by value (cough C# structs cough)? Well, the FP crowd will tell you how immutability means fewer bugs etc. I get that immutability is great, but should it be used everywhere? Even F# has mutable support. The point is: if something exists and it's great in F# or any FP language, it doesn't mean I'll need it in an OOP language.

And this brings me to the real issue. The FP mindset requires that every operation be expressed as a mathematical function. But we, normal people, don't think like that (unless you're a mathematician, and not everyone is). In 'human' language we think imperatively, very similarly to an OOP language. Doing object oriented design is like being a manager in the 'real' world: you decide who does what, which are the teams and what their projects are. And unlike the real world, you can create the perfect professionals/employees, specialists at their job.

OOP is quite natural. When I design something or try to come up with an algorithm, I think in a mix of Romanian and English. Only after I have a solution do I express it in a programming language. My thought process is unrelated to any programming language. I'm using a 'natural' mindset which can then easily be expressed in an OOP way. To change my mindset to functional just because someone says it's "much better" would be stupid. Most problems can be solved and expressed easily in OOP. Some problems are naturally functional and those can be easily expressed using an FP language.

It's simply a matter of how many use cases can be solved using a 'normal' mindset and how many fit the functional paradigm better. My experience says the functional use cases are more limited, and I can use C# for those, even if F# would have made them easier. There are simply too few to matter.

But some of the FP crowd don't think that way. For them, everything is better if it's expressed in a functional way. Ok, let's see how it goes with the famous FizzBuzz test. The code below is taken from F# for fun and profit.

Let's start with the imperative version but expressed in F#.

module FizzBuzz_IfPrime =

    let fizzBuzz i =
        let mutable printed = false

        if i % 3 = 0 then
            printed <- true
            printf "Fizz"

        if i % 5 = 0 then
            printed <- true
            printf "Buzz"

        if not printed then
            printf "%i" i

        printf "; "

    // do the fizzbuzz
    [1..100] |> List.iter fizzBuzz

Very similar to a C# version. But wait, we know FP is the best paradigm ever, so let's solve the problem in a functional way, 'cause everyone knows FP means less code, fewer bugs, beauty etc.

module FizzBuzz_Pipeline_WithTuple =

    // type Data = int * string option

    let carbonate factor label data =
        let (i,labelSoFar) = data
        if i % factor = 0 then
            // pass on a new data record
            let newLabel =
                labelSoFar
                |> Option.map (fun s -> s + label)
                |> defaultArg <| label
            (i,Some newLabel)
        else
            // pass on the unchanged data
            data

    let labelOrDefault data =
        let (i,labelSoFar) = data
        labelSoFar
        |> defaultArg <| sprintf "%i" i

    let fizzBuzz i =
        (i,None)   // use tuple instead of record
        |> carbonate 3 "Fizz"
        |> carbonate 5 "Buzz"
        |> labelOrDefault     // convert to string
        |> printf "%s; "      // print

    [1..100] |> List.iter fizzBuzz

Hmm, so this is what shorter, more concise code looks like. For whatever reason I prefer the bloated and uglier (and of course hard to maintain) imperative version. But wait, there's more!!

OOP has design patterns because it's such an imperfect mindset and we need workarounds. FP is so great it doesn't need patterns. Until it needs them. Enter the Railway pattern. And here's FizzBuzz using this pattern:

module FizzBuzz_RailwayOriented_UsingCustomChoice =

    open RailwayCombinatorModule

    let (|Uncarbonated|Carbonated|) =
        function
        | Choice1Of2 u -> Uncarbonated u
        | Choice2Of2 c -> Carbonated c

    /// convert a single value into a two-track result
    let uncarbonated x = Choice1Of2 x
    let carbonated x = Choice2Of2 x

    // carbonate a value
    let carbonate factor label i =
        if i % factor = 0 then
            carbonated label
        else
            uncarbonated i

    let connect f =
        function
        | Uncarbonated i -> f i
        | Carbonated x -> carbonated x

    let connect' f =
        either f carbonated

    let fizzBuzz =
        carbonate 15 "FizzBuzz"
        >> connect (carbonate 3 "Fizz")
        >> connect (carbonate 5 "Buzz")
        >> either (printf "%i; ") (printf "%s; ")

    // test
    [1..100] |> List.iter fizzBuzz

Is it just me, or is this version even longer than the previous one? And to me, being a n00b when it comes to F#, it looks quite complicated and ugly. At least compared to the imperative version.

The point of this is to show that you can force anything to work with the FP golden hammer, but the result isn't what you'd expect. Clearly, in this case, the imperative approach is the simplest one. But does this mean only simple problems fit the imperative (OOP) mindset? Not really; remember that we normally think with an imperative mindset. The domain expert will communicate according to that mindset. It takes explicit effort to 'convert' everything to a functional approach. And as seen with FizzBuzz, you end up with abominations.

I'm really annoyed by FP zealots who claim that FP is the answer to everything. As the saying goes, "a real programmer writes assembler in any language", and in the end it's the developer who's in charge of code quality. Your codebase isn't more bug free just because you're using F#; it's because you wrote good code. You may like FP and your brain may easily convert every problem to an FP representation. Ok, but mine and others' don't work that way. This doesn't mean our code is crap or our codebase huge.

For me OOP is natural and I have no problem using it. For the little FP I do, when it's the case, C# is enough. Maybe if I had enough use cases fitting the FP mindset, I would write them in F#. But that's it. OOP is not automatically inferior for everything; FP is not superior for everything. And the fact that very few devs use a functional language is a sign that it's a niche mindset for niche problems. Zealotry won't change things.

P.S: I'm one of the devs who say "always use DDD" regardless of the app. But I'm talking about the strategic aspect. Most of the time you're building an app that provides some business value. It's natural for the app to be designed according to the business needs, hence domain driven design. But that doesn't mean you should always use the DDD tactical tools. And obviously for experiments, toys, learning some new tool, there's no need for anything DDD.

CQ(R)S And Providing Immediate Feedback In A Web App Using A Service Bus

Written on 20 July 2015

Whenever I'm developing a DDD web app I use CQRS and Domain Events, which means I have a lot of command and event handlers. I'm therefore using a durable service bus, so if the server crashes, all the unhandled messages are delivered again to be handled. But I have a problem: all my commands are handled asynchronously, because that's how my service bus works. And it works this way because a message can be handled anywhere in the process (usually on a background thread), or in another process if we have a distributed app.

Asynchrony means I can only 'send' a command; I can't expect an answer. Basically, it's only this:

 bus.Send(new DoSomething());

and this sucks when application services are implemented as command handlers, because the user expects some feedback. If the command fails, you want to return some error, and that's quite tricky in this scenario. The usual solution is client-side polling, checking the state of the operation or waiting for a read model to appear. This is obviously cumbersome.

However, we can apply a similar concept on the server side using a mediator. Simply put, we'll have an object in charge of delivering the command result to our controller. It goes like this in the controller:

//constructor: both dependencies are stored in fields (_bus, _mediator)
public MyController(IServiceBus bus, ICommandResultMediator mediator) {}


//action
var cmd = new DoSomething();

//register the listener before sending the command,
//so a fast result can't arrive before anyone is waiting for it
var listener = _mediator.GetListener(cmd.Id);

await _bus.SendAsync(cmd);

try
{
    var result = await listener.GetResult<CommandResult>(cancel);

    return Response.AsJson(result);
}
catch (TimeoutException)
{
    return HttpStatusCode.RequestTimeout;
}
catch (OperationCanceledException)
{
    return HttpStatusCode.InternalServerError;
}

This is NancyFx code but it's easy to understand: instead of returning an ActionResult or the model directly, I'm returning a Nancy Response, which can be an HttpStatusCode as well. That's not the important part though; the important part is the use of the mediator. We ask the mediator for a listener, send the command to be handled, then use the listener to await the command result. If the result doesn't arrive within a couple of seconds, or if the operation is cancelled, an exception is thrown.

Now, here's what the command handler looks like:

class DoSomethingHandler : IExecute<DoSomething>
{
  private readonly ICommandResultMediator _mediator;

  public DoSomethingHandler(ICommandResultMediator mediator)
  {
    _mediator = mediator;
  }

  public void Execute(DoSomething cmd)
  {
    //do stuff
    //get result
    var result = new CommandResult();
    result.AddError("stuff","Stuff happened");
    _mediator.AddResult(cmd.Id, result);
  }
}

In the handler we do things as usual; the only change is that the result is now sent to the mediator. And here is the mediator itself:

public class CommandResultMediator : ICommandResultMediator
{
    ConcurrentDictionary<Guid, Listener> _items = new ConcurrentDictionary<Guid, Listener>();

    public CommandResultMediator()
    {
        ResultCheckPeriod = 100;
    }

    /// <summary>
    /// How often to check if a result has arrived, in ms.
    /// Default is 100 ms
    /// </summary>
    public int ResultCheckPeriod { get; set; }

    public int ActiveListeners
    {
        get { return _items.Count; }
    }

    public void AddResult<T>(Guid cmdId, T result) where T : class
    {
        Listener l = null;
        if (_items.TryRemove(cmdId, out l))
        {
            l.Result = result;
        }
    }

    public IResultListener GetListener(Guid cmdId, TimeSpan? timeout = null)
    {
        timeout = timeout ?? TimeSpan.FromSeconds(5);
        var listener = new Listener(timeout.Value, cmdId, this);
        listener.UpdatePeriod = ResultCheckPeriod;
        _items.TryAdd(cmdId, listener);
        return listener;
    }

    void Remove(Guid cmdId)
    {
        Listener l = null;
        _items.TryRemove(cmdId, out l);
    }

    class Listener : IResultListener
    {
        private readonly Guid _cmdId;
        private readonly CommandResultMediator _parent;

        public Listener(TimeSpan timeout, Guid cmdId, CommandResultMediator parent)
        {
            _cmdId = cmdId;
            _parent = parent;
            TimeoutTime = DateTime.Now.Add(timeout);
        }

        private DateTime TimeoutTime { get; set; }

        public object Result { get; set; }

        public int UpdatePeriod = 100;

        public async Task<T> GetResult<T>(CancellationToken cancel = default(CancellationToken)) where T : class
        {
            //poll until the handler delivers a result, the timeout passes or the request is cancelled
            while (Result == null)
            {
                if (DateTime.Now >= TimeoutTime)
                {
                    _parent.Remove(_cmdId);
                    throw new TimeoutException();
                }
                if (cancel.IsCancellationRequested)
                {
                    _parent.Remove(_cmdId);
                    throw new OperationCanceledException();
                }
                await Task.Delay(UpdatePeriod, cancel).ConfigureAwait(false);
            }
            return Result as T;
        }
    }
}

Btw, the mediator should be a singleton managed by your favourite DI Container.
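
For example, with Autofac (any container with singleton support works) the registration is a one-liner:

builder.RegisterType<CommandResultMediator>()
       .As<ICommandResultMediator>()
       .SingleInstance();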

Note that the mediator is part of my CavemanTools library, so if you're using it (or SqlFu or MvcPowertools) you only need to update to the latest version.

So pretty much that's it! The easiest way to return a result from an async command handler.

Make Your CQRS Web App More Maintainable With a Generic Controller

Written on 18 July 2015

From a design point of view, if you want a proper separation of concerns (SoC) in a web app, it looks like this:

  • You have a controller/action "in charge" of a http request
  • The controller maps the request to a command (input data)
  • The controller invokes an application service, passing the input data. In a CQRS app, you might have a command sent to a command handler, which is the implementation of an application use case, ergo an application service.
  • The service updates the domain objects then persists them. Domain events are published if you're using the domain events pattern.
  • The service returns a result which is mapped by the controller to a http response.

You want to keep the controllers very slim, because their purpose is just to act as an adapter between the web and the application services. The actual app behaviour is in the application service. So for each use case we'd have an application service and a controller/action which invokes it. And this can become very boring very quickly.

Problem: Repetitive boilerplate code required to respect SoC

Some devs consider DRY a good reason to violate SoC, so they merge a controller with an application service: everything that should be in the service is put in the controller. While it works, it has a big drawback: it's very easy to couple yourself to the MVC framework. Add an ORM into the mix and the controller becomes part of the persistence as well. This is even more prevalent for Query controllers.

While some apps are trivial enough that we don't need to bother with abstractions and SOLID principles, if you're actually building a business app, or at least an app that will evolve in time and that you want maintainable, the above "pragmatic" solution will slowly lead to a ball of mud.

We want SoC, but we don't want hundreds of repetitive controllers of 1-2 lines that in the end amount to "call some service and return a result".

Solution: Enter the Generic Controller

What we want is one controller in which to put all the repetitive bits (validation, specific authorization, common error scenario handling) and which will act as a "host" for any app service (the Command part of CQRS) or query handler (the Query part of CQRS). Yes, one controller to rule'm all. Or several, because you may want specific customizations for some parts of your app.

In a nutshell it looks like this (although this list is common to both commands and queries, there would be a specific implementation for each case):

  • The controller is invoked for a route pattern which captures a command/query name
  • Based on the name (or another convention), the request is mapped to the corresponding model (a command/query) i.e. AddUser or GetUser. Note that these are inputs, not handlers.
  • For commands, basic validation can be performed
  • Other common stuff, like ensuring the user is authenticated (if you need it) or populating the model with common information like the user id
  • Use case specific behaviour. This is basically aspect oriented programming: we annotate the model with attributes that trigger specific behaviour in the controller. An example is checking that the current user is Admin or has the CanManageUsers permission. Or that the model expects a certain property populated.
  • Based on the input model, a service/query handler is selected, then executed, and a result is returned.
  • A non-null result is then returned as Json (for an API) or a view model.
  • A convention can say that if a result is null then return a 406 status (Not Acceptable) for an API, or some error page.

Implementation

Be aware that the implementation varies according to your app's needs. The only thing generic (sic) enough is the principle; the code needs to be adapted to your situation. This means the example below is not meant to be copy/pasted, it just shows a concrete implementation. The code is an excerpt from the project I'm working on (while my project uses NancyFx, I'm gonna show you asp.net webapi/mvc implementations).

But first, let's see what we have and what we need. I'm using CQRS all the way up (or down), so my application services are command handlers, while querying is done by query handlers (which work directly with the db). For both, my convention is one unique model in, one model out. I also have specific interfaces for each case:

class AddUserService:IHandleCommand<AddUser,CommandResult>
{
    public CommandResult Handle(AddUser cmd){ }
}

class GetUsersQuery:IHandleQueryAsync<GetUser,UserInfo[]>
{
  public Task<UserInfo[]> Handle(GetUser query){ }
}

Each service handles one or more related commands; the name isn't important but the abstraction is, because this is how we'll ask the DI Container for the service/query handler we need. But remember that in our controller we only have a name extracted from the url. We need a way to map that name to our input model and to detect the correct service that will handle the command/query.

For that, I've come up with the following class (which needs to be adapted to your specific needs):

public class ServiceHandlersCache
{
    Dictionary<string, ServiceHandlerItem> _items = new Dictionary<string, ServiceHandlerItem>();

    //scan and register the input/result models from the found command/query handlers
    public void AddHandlers(Assembly asm)
    {
        var types = asm.GetPublicTypes(t => !t.IsAbstract && (t.ImplementsGenericInterface(typeof(IHandleCommand<,>)) || t.ImplementsGenericInterface(typeof(IHandleQueryAsync<,>)))).ToArray();
        asm.GetTypesDerivedFrom<RequestInput>()
            .ForEach(t =>
            {
                var result = FindResultModel(t, types);
                if (result == null) return;
                var item = new ServiceHandlerItem(t, result);
                _items.Add(item.CmdId, item);
            });
    }

    public ServiceHandlerItem GetModels(string cmdId)
    {
        return _items.GetValueOrDefault(cmdId);
    }

    static Type FindResultModel(Type input, Type[] types)
    {
        var result = types.Select(t =>
        {
            var interAll =
                t.GetInterfaces()
                    .Where(i => i.Name.StartsWith("IHandleCommand") || i.Name.StartsWith("IHandleQueryAsync"));
            var inter = interAll.FirstOrDefault(i => i.GetGenericArgument() == input);
            if (inter == null) return null;
            return inter.GetGenericArgument(1);
        }).FirstOrDefault(d => d != null);
        return result;
    }
}

public class ServiceHandlerItem
{
    public Type Input { get; set; }
    public Type ResultModel { get; set; }

    public string CmdId
    {
        get { return Input.Name; }
    }

    public AccountRights[] RequiredRights { get; private set; }

    public ServiceHandlerItem(Type input, Type resultModel)
    {
        Input = input;
        RequiredRights = input.GetAttributeValue<RequiresAttribute, AccountRights[]>(a => a.Right);
        ResultModel = resultModel;
    }

    public RequestInput CreateInputInstance()
    {
        return Input.CreateInstance() as RequestInput;
    }
}

ServiceHandlersCache, used as a singleton and injected into the controller, maintains a cache of the input/result models. Given an identifier, it returns the input/result models the controller will use. While I'm using this mainly for Api purposes, it can easily be adapted to web sites.

[Requires(AccountRights.ManageUsers)]
public class AddUser : RequestInput
{
}

//controller constructor (the cache is injected and stored in a field)
private readonly ServiceHandlersCache _cache;

public GenericController(ServiceHandlersCache cache)
{
    _cache = cache;
}


//actions
[Route("api/{cmdId}")]
[HttpPost]
[Authorize]
public CommandResult Post(string cmdId)
{
    var models = _cache.GetModels(cmdId);

    //handle null scenario {}

    var cmd = models.CreateInputInstance();

    this.TryUpdateModel((dynamic)cmd);

    if (!ModelState.IsValid)
    {
        var result = ModelState.ToCommandResult(); //ext method
        return result;
    }

    //ext method to check for specific AccountRights if the model has them
    if (!this.HasAnyOfRights(models.RequiredRights))
    {
        return null;
    }

    return cmd.ExecuteAndReturn(models.ResultModel) as CommandResult;
}

ExecuteAndReturn is an extension method that uses the DI Container to create the service implementing IHandleCommand<Input,Output>, then executes it and returns the result.
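
As a rough sketch of what such an extension method could look like (the static Container property is an assumption; how you access the container depends on your setup):

public static class ExecutionExtensions
{
    //hypothetical access to the DI Container
    public static IContainer Container { get; set; }

    public static object ExecuteAndReturn(this RequestInput cmd, Type resultModel)
    {
        //build IHandleCommand<TInput,TResult> for the concrete input/result types
        var handlerType = typeof(IHandleCommand<,>).MakeGenericType(cmd.GetType(), resultModel);
        dynamic handler = Container.Resolve(handlerType);
        //dynamic dispatch selects the right Handle overload at runtime
        return handler.Handle((dynamic)cmd);
    }
}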

For query handlers we have a similar action:

  [Route("api/{queryId}")]
  [HttpGet]
public Task<object> Get(string queryId)
{
  var models = _cache.GetModels(queryid);

  if (!this.HasAnyOfRights(models.RequiredRights)){
    return null;
  };

    var query = models.CreateInputInstance();

    var result = await cmd.QueryAsyncTo(models.ResultModel, cancel);

    return result;  
}

I'm using POSTs to identify commands and GETs to identify queries. We can create additional attributes to annotate the input model and, based on them, invoke specific behaviour in our controller. Those behaviours are basically decorators that need to be generic enough to be reusable. So far, in my app, I haven't needed more than permission checking.

The point here is to extract the repetitive infrastructural behaviour into a controller and action filters. This allows us to focus only on the actual app behaviour. Adding a new use case means adding a new command, handler, validator i.e. all the genuinely new things, but not boilerplate like controllers/actions. Instead of having 10 services, 15 queries and 25 controller actions, I now have 10 services, 15 queries and 2 controller actions. Less code, DRY code.

Of course, the great benefit here is the decoupling of the app service from the api/mvc framework. Our app service knows only about its input (nothing http or UI framework related) and the controller just prepares that input from the request. If the service needs a user id or an IP, the input model can be decorated with an attribute that tells the controller to populate that specific property, as sketched below.
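
As an illustration, here's a hypothetical marker attribute plus the controller-side step that honours it (all names are made up):

[AttributeUsage(AttributeTargets.Property)]
public class CurrentUserIdAttribute : Attribute { }

public class ChangeUserName : RequestInput
{
    [CurrentUserId]
    public Guid UserId { get; set; }
}

//inside the generic controller, invoked after model binding
static void PopulateCommonData(RequestInput cmd, Guid currentUserId)
{
    var props = cmd.GetType().GetProperties()
        .Where(p => p.GetCustomAttribute<CurrentUserIdAttribute>() != null);
    foreach (var prop in props)
    {
        prop.SetValue(cmd, currentUserId);
    }
}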

Note that I'm not using a service bus in this example although I am using one in my app, but that's a special case that I'll handle (pun intended) in the next post.