The Unit of Work and Transactions In Domain Driven Design

Written on 02 September 2015

As a principle, the Unit of Work (UoW) pattern is about ensuring consistency by persisting a group of changes as a unit (all at once). Basically, it's an all or nothing operation and the point is that we don't want to end up with inconsistencies. This is the main idea and it's implemented by a db transaction or by the ORM's DbContext, ISession etc, which themselves will in the end use a db transaction.

As you can see, the UoW is very useful and very persistence related. The problem is that many devs trying to do DDD end up needing something that surely looks like a UoW. However, it's not.

Let's take a very simple example, the old 'I want to move an amount from one account to another', which from the Domain point of view is a domain transaction involving 2 instances of the same aggregate (the Account). It doesn't matter if the aggregates are different; what's important is that for the transaction to complete, we need to change 2 aggregates. This looks very similar to the db's BeginTransaction -> Commit usage and naturally, devs decide this should be a UoW and try to implement it as one.

And that's how we end up with abominations like the UoW interface which 'holds' all the repositories that would be involved in the transaction. Too bad those devs are trying to implement a domain use case with a persistence related design pattern.

Here's the thing: first of all, in a real world business process (a sequence of chained business use cases) there's very little of the 'all changes or nothing' mindset. Furthermore, a business process can involve different bounded contexts (BC) and can run for days until it completes. Using the UoW pattern here is a very naive solution and worse, from a technical point of view, it complicates things a lot when you're dealing with different db types or a distributed app.

Obviously, we need consistency in our process and at the end things will be consistent, but not immediately. As I've said above, a business process can be a series of use cases. Consistency is achieved when all the use cases involved in the process are completed. One at a time. Because, in the end, it's all about manipulating aggregates, and the only thing we need to care about is maintaining each involved aggregate's consistency.

If you look at the real world, you'll see that very few things are immediately consistent, and 'thing' is the proper term, because all real world processes are eventually consistent, meaning they achieve consistency one operation at a time. Only whole objects are immediately consistent, because otherwise they wouldn't exist. Picture a chair (with 4 legs): if we remove a leg then it's not consistent anymore and it becomes a broken (not valid/usable) chair. The chair needs immediate consistency in order to exist as a valid object.

In DDD, the concepts are implemented as aggregates, therefore only aggregates need to be immediately consistent. But any use case, regardless of how many operations it has, is basically a smaller process which is naturally eventually consistent.

Btw, when dealing with a business process involving more than one BC, i.e. one that involves aggregates from different BCs, it means first of all that we have at least one use case per bounded context. Each use case will publish domain events that will be used by the interested BCs to 'trigger' their own use cases. If at the end we need to publish a notification that the process ended, then we're dealing with a saga, i.e. a long running process.

From a strategic DDD point of view, a domain use case encapsulates one BC specific business operation. This means that a domain use case is limited to the model of the containing BC. If your use case needs to use an aggregate which is part of another BC, then the modelling is wrong (either the use case or the BC).

So, a domain use case implementation is responsible for keeping each involved aggregate consistent, but one at a time, and this can be a problem from a technical point of view if the app goes down for whatever reason. That's why we need to ensure that a started use case/process completes, and we can use a durable service bus for that purpose (it will re-execute the use case). We also need every use case to be idempotent in order to prevent duplicate operations, which means that every contained aggregate change needs to be idempotent as well. The basic principle here is that we achieve idempotency by making every component idempotent.

In conclusion, a domain process or a domain use case should be considered eventually consistent and thus the UoW doesn't apply there. But aggregates need to be immediately consistent and thus the UoW is usable as a repository implementation detail. But only for one aggregate (one consistency group) at a time.
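
Here's a minimal sketch of what that implementation detail could look like (Account and the persistence details are placeholders, not a prescribed API): the db transaction, i.e. the real UoW, lives inside the repository and covers exactly one aggregate.

using System.Data.Common;

public class AccountRepository
{
    private readonly DbConnection _db;

    public AccountRepository(DbConnection db)
    {
        _db = db;
    }

    public void Save(Account account)
    {
        // the UoW: all of this one aggregate's rows/documents are
        // persisted in a single transaction, or none of them are
        using (var t = _db.BeginTransaction())
        {
            // inserts/updates for the aggregate's data go here
            t.Commit();
        }
    }
}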

P.S: The "transferring money from one account to another" example gives the impression that somehow this is a 'all or nothing' type of operation and we need to change both accounts in the same time to keep things consistent. Not really. It's still an eventual consistent operation (actually there are 2, one for each aggregate) and if the use case is interrupted midway because of a technical reason, it's not a problem, because the aggregates themselves are always in consistent state. What is 'inconsistent' is the use case itself, because it hasn't completed yet, but this is a non issue because a use case is technically a service, so it doesn't have its own persisted state therefore there's no consistency to care about. Other use cases depending on our case completion will wait for the 'TransferCompleted' event in order to start.

An In-Depth Look At CQRS

Written on 01 September 2015

While in theory CQRS simply means that the model for reads (query) is different from, usually a simpler version of, the model for writes (command), in practice things are a bit more complex. Add Event Sourcing and CQS into the equation and things become quite confusing. But first things first.

Know Thy Enemy

You shouldn't use it just because you've heard it's cool and this is how apps should be developed. You need to be aware of its benefits (and its drawbacks). The main benefit is that it enables an app to perform very fast reads while keeping a maintainable domain model. The drawbacks are that it requires a bit of experience in order to be implemented properly and it adds more technical complexity. Maintaining 2 models (and the read model can be further specialized into more models) means more code and might require a certain infrastructure. That's why it might be overkill for a glorified db UI app, like many CRUD apps are.

CQRS is a different concept than CQS

CQRS not only has a very similar name to CQS, it's actually inspired by the latter. But what's CQS?

CQS (Command Query Separation) is a design principle which states that an operation should do either a Write (Command) or a Read (Query), but not both at the same time (we're talking about the high level here, not a low level implementation which can do both reads and writes). At first it was a principle used when defining functions, but it can be used as an architectural principle too: the application is "partitioned" into Commands (services that change something) and Queries (services which only read the model). It's great for maintainability, because each command/query encapsulates one use case.
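
To illustrate CQS at the function level, here's a small sketch (the class is invented for the example); the classic violation would be a Pop() that both removes and returns the top item:

using System.Collections.Generic;

public class CqsStack<T>
{
    private readonly List<T> _items = new List<T>();

    // command: changes state, returns nothing
    public void Push(T item) => _items.Add(item);

    // query: returns data, changes nothing
    public T Peek() => _items[_items.Count - 1];

    // command: removal is a separate, void operation so that no
    // member both changes state and returns data
    public void RemoveTop() => _items.RemoveAt(_items.Count - 1);
}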

If we're using a message driven architecture then we're gonna have commands and command handlers, and queries and query handlers. Obviously, in a DDD app there will be events and event handlers too, but those have nothing to do with CQS. Note that a command/query handler is just an implementation detail of the Command/Query part of CQS. Basically, I can decide that all my command services will be implemented as command handlers, in order to take advantage of a service bus. Again, this is an implementation detail.
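
One possible shape for those handlers (the interface and class names are illustrative, not a specific framework's API):

public class RegisterAccount            // command message: data only
{
    public string Email { get; set; }
    public string Password { get; set; }
}

public interface ICommandHandler<in T>
{
    void Handle(T command);
}

public class RegisterAccountHandler : ICommandHandler<RegisterAccount>
{
    public void Handle(RegisterAccount cmd)
    {
        // the write-only use case lives here; a service bus dispatches
        // the RegisterAccount message to this handler and can retry it
    }
}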

These have little to do with CQRS, which deals with a Command model and a Query model. To clear up the confusion, you can consider that CQRS is about separate models while CQS is about separate behaviour responsibility (although these can overlap).

To make things even more fun, CQRS and CQS work very well together, however they're still different concepts and should be treated as such. So, an app can use CQS without touching CQRS and vice-versa. However, in practice, especially in DDD apps, they're often used together in combination with a message driven architecture.

CQRS Can Be Used With Event Sourcing (ES)

...but one doesn't always imply the other. And speaking about events, an event driven architecture or the Domain Events pattern doesn't imply ES. They're all different things that are used together because they have great synergy. In probably 99.99% of cases, an app using ES will also use CQRS (and a message driven architecture), because it needs a queryable read model, different from the event store model (hence CQRS).

CQRS: The Command Model

Well, "command model" is not a very accurate term, because in practice there are 2 models: application(as in non-persistence) and persistence. For now on, when I'm referring to the application, I'll say "business or domain" because this is where things make more sense. But it's good to know that we're talking about principles that work and can be used outside the domain layer (if you're thinking in layers).

So, we have 2 points of view of a command model: the model which encapsulates business rules and the model used to store the aggregates (business objects). The command application model is always about enforcing business rules. This is the 'behaviour rich' domain model. It's used only for changes and it defines the "domain model".

But we do need to persist/restore this model, so we need a command persistence model fit for this purpose. And you can store it any way you like, however the easiest ways are to use an event store (if you're using ES, which I highly recommend for DDD apps), to just serialize the entities or to use a document database. Obviously, you can use an ORM, but this is the complicated solution: you'll need to define mappings for each entity and tell it how to deal with Value Objects and other encapsulated details that might be required.

As I've said, it's complicated and it's the least maintainable solution. A generic storage like the event store or a document db, basically anything that doesn't care about your entity structure, is the desired solution. And once you have that storage it will work with any entity automatically, and in a way it's a kind of generic repository, but the good kind, not the anti-pattern abstraction on top of an ORM.

We can get away with a generic, non-queryable store because we only need basic CRUD functionality from it: Add, Get, Save, Delete. For the use cases which require more than one entity, the service will use a domain query which will use both the query persistence model and the command persistence model to do its job.
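
Here's a sketch of what such a store's contract could look like (the names are mine, assuming an IEntityId marker interface exposing the entity's id):

using System;

public interface IEntityId
{
    Guid EntityId { get; }
}

// basic CRUD, no per-type mappings; an implementation can simply
// (de)serialize the aggregate as a document keyed by its id, which is
// why it works with any entity automatically
public interface IAggregateStore
{
    void Add<T>(T aggregate) where T : IEntityId;
    T Get<T>(Guid id) where T : IEntityId;
    void Save<T>(T aggregate) where T : IEntityId;
    void Delete<T>(Guid id) where T : IEntityId;
}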

CQRS: The Query Model

As with the command model, we still have the query application model and the query persistence model. I'll start with persistence because it's the easiest one: this is your queryable model, plain and simple. With a RDBMS this is where you're writing SQL, doing joins and other fun stuff. With a document db you might have query indexes. You can also have very specific models, i.e. serialized pre-computed view models or query application models. Pretty much anything that will allow you to query things fast.

The application model has the following 'feature': it doesn't contain/enforce any business rules. It can use encapsulation, can be immutable and can even have some technical behaviour (methods), but nothing domain related. In many cases it's just a data structure or a DTO (Data Transfer Object). If we're using CQS, then the query application model is in fact the query handler and its result.
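
A read-side sketch (the table and class names are invented, and the connection is assumed to be open): the handler queries the persistence read model directly and returns a dumb DTO, no business rules anywhere.

using System;
using System.Data.Common;

public class VipMembersCount            // just data, no behaviour
{
    public int Count { get; set; }
}

public class CountVipMembersHandler
{
    private readonly DbConnection _db;

    public CountVipMembersHandler(DbConnection db)
    {
        _db = db;
    }

    public VipMembersCount Handle()
    {
        using (var cmd = _db.CreateCommand())
        {
            cmd.CommandText = "select count(*) from Members where IsVip = 1";
            return new VipMembersCount { Count = Convert.ToInt32(cmd.ExecuteScalar()) };
        }
    }
}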

Note that a domain (which is on the Command side of CQRS) has queries too. However, in this case it's the CQS principle at work: the query service can use a query persistence model, but it can return business objects. This is a matter of using CQS 'inside' the Command part of CQRS. It is a bit confusing, but if you understand that these principles are different yet work together, then you won't have a problem.


It's always important to understand the context we're working in. You might call these layers, components, vertical slices etc., but the important part is that they're a criterion to group related concepts and use cases. And we can mix CQRS and CQS. So we can have a UI Query model which consists of query handlers querying a persistence read model and returning view models (whole or bits of them), an Application context with Commands (Register Account, Login etc) and Queries (GetUserForLogin, GetUserRights etc), or a Domain context which has its very own Command Model and/or Command Handlers as well as Queries (CountVipMembers, GetMembersWithoutOrders etc).

As a fun fact, once you see the app as a group of contexts which themselves are a group of use cases, the traditional 'layer' approach doesn't make much sense anymore. It's all about specialized models for changes or for reads and clean services which either change something or read something. And these lead to a better (more maintainable and scalable) architecture, but that's a subject for a future post.

One more thing: there are no rules for CQRS besides the separation of the write/read concerns. What matters most is the principle and the problem it solves. In some use cases, applying the principle means there's no clear separation between the writes and the reads from one point of view (you might be using only one persistence model). This doesn't invalidate the whole principle; it's just a local decision to implement things differently. As long as the app is maintainable, there is no problem. Like many other principles, CQRS is not a recipe with a strict "how to" dogma. But everyone using CQRS should first understand it properly before deciding on a specific implementation.

How To Ensure Idempotency In An Eventually Consistent DDD/CQRS Application

Written on 26 August 2015

If you want a reliable DDD/CQRS app (distributed or not) you need to treat idempotency as a first class concept. And while not all DDD apps use a message driven architecture, most of them do and, in my opinion, that's the best way to end up with a maintainable codebase. Idempotency is required because of the eventual consistency 'feature' of a message driven system.

But before we talk about solutions, let's understand the problem. An eventually consistent app treats a business process as a sequence of business cases. An aggregate is changed, one or more events are published, the event handlers trigger other business cases and so on. We have a chain of independent operations, each with their own state, which together define one process (a domain unit of work). That chain can be broken at any time, because the app crashes, the server recycles the app, we deploy a newer version etc. We end up with a state where some aggregates were changed, while others never made it.

It sure looks like the app is in an inconsistent state; however, the use of a durable service bus will make sure that we end up in an eventually consistent state, i.e. things will be consistent very soon (it's just a matter of time). If you're used to the lower level db transaction, where everything is either committed or rolled back, eventual consistency looks quite dangerous and you might feel unsafe. Luckily, it's just the typical "out of the comfort zone" feeling. Eventual consistency might look fragile and complicated to handle, but once you get used to it, it becomes second nature.

The main 'ingredient' in dealing with eventual consistency is proper idempotency handling. Considering a process can be interrupted at any time and then executed again from the beginning, we need a way to ensure that a repeated operation changes a model only once. I'm going to show you 2 ways of doing this, but first there are some "rules":

  • Every operation has a unique id (guid);
  • The aggregate root needs the operation id to be set before any changes (see the interfaces sketched below).
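
As a minimal sketch, the two rules translate to something like this (matching the members used by the base class later in this post):

public interface IOperationId
{
    // throws DuplicateOperationException if the id was already used
    void SetOperationId(Guid id);
    Guid GetOperationId();
}

public interface IEntityId
{
    Guid EntityId { get; }
}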

Handling Idempotency at the Aggregate Level

The easiest scenario is when you're using Event Sourcing, and that's because the entity already has all (or at least the latest) events, and we can make the operation id part of the event itself. All my events (and commands) have a SourceId property which is the id of the operation that generated them (by default it's the command id, which in most cases is also the source id). Then it goes like this:

var foo = _repo.Get(fooId);
IEnumerable<IEvent> evnts = new IEvent[0];
try {
    foo.SetOperationId(cmd.SourceId); // throws for a repeated operation id
    evnts = foo.DoStuff(); // hypothetical change that generates events
    _repo.Save(foo, evnts);
} catch (DuplicateOperationException ex) {
    // duplicate command, the change was already applied - nothing to do
}

So, the aggregate is aware of the most recent (or all) operation Id and if you try to set one that was used before, it throws. Pretty simple.

But what happens if you're not using event sourcing? Well, we can make the entity keep a history of operations and check it every time the operation id is set. The problem with that is it will increase the entity size a lot if the object changes often. However, for objects that rarely change it might be a solution. Here's a base class that may be used for that.

public abstract class AIdempotentEntity : IOperationId, IEntityId
{
    protected int MaxHistory = 2;

    private List<Guid> _history = new List<Guid>(10);
    protected Guid? _operationId;

    // json serialization friendly
    protected List<Guid> History
    {
        get { return _history; }
        set { _history = value; }
    }

    public virtual void SetOperationId(Guid id)
    {
        if (_history.Contains(id)) throw new DuplicateOperationException(id, null);
        _operationId = id;
        if (MaxHistory == 0) return;
        // keep the history bounded: drop the oldest id once the limit is reached
        if (_history.Count == MaxHistory)
        {
            _history.RemoveAt(0);
        }
        _history.Add(id);
    }

    public Guid GetOperationId()
    {
        return _operationId.Value;
    }

    public Guid EntityId { get; protected set; }
}

So, what can we do when we have objects that change often or we need to keep the entire operation id history? We'll handle idempotency at the persistence level.

Handling Idempotency at the Persistence Level

I must say this is my preferred solution. It's also a generic solution that can be used any time you need to ensure idempotency, but it does have a small flaw: it requires an ACID compliant db.

The main idea is this: we have a table (OperationId:Guid, Hash:string) with a unique constraint on both columns. When we persist an object, we use this transaction:

using (var t = db.BeginTransaction())
{
    if (db.IsDuplicateOperation(item.ToIdempotencyId())) return;

    //save object

    t.Commit();
}

//ext method
public static bool IsDuplicateOperation(this DbConnection db, IdempotencyId data)
{
    try
    {
        //Idems is the name of the idempotency table
        db.Insert("Idems", data);
    }
    catch (DbException ex)
    {
        //duplicate operation
        if (ex.IsUniqueViolation()) return true;
        //any other db error should surface, not be swallowed
        throw;
    }
    return false;
}

public class IdempotencyId
{
    public static IdempotencyId Empty = new IdempotencyId();

    public bool IsEmpty() => OperationId == Guid.Empty && Hash.IsNullOrEmpty();

    public Guid OperationId { get; set; }

    /// <summary>
    /// It's actually an identifier of the entity/model/event involved
    /// </summary>
    public string Hash { get; set; }

    public IdempotencyId(Guid operationId, string hash)
    {
        OperationId = operationId;
        Hash = hash;
    }

    private IdempotencyId()
    {
    }
}

Basically, we try to insert an idempotency id that uniquely identifies the object and the operation, and if it fails it means it's a duplicate operation. If it succeeds, then we can persist the entity and commit the transaction. If an error occurs, the transaction is rolled back and the idempotency id is not saved.

The idempotency id is the combination of the operation id and some other specific identifier (I call it a hash for lack of a better name) of the object. For domain objects, the hash can be the entity id. For read models, the hash can be the read model category and/or the domain entity id involved. Obviously, it depends; there are no rules here, the only 'rule' is that you need to be aware that one operation can affect multiple aggregates or read models. For example, if you compute metrics in an event handler and this involves incrementing a counter, the read model specific hash should be something like "statsusercount".
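
To make the counter example concrete, here's a sketch reusing the pieces above (db.Execute is a hypothetical micro-ORM helper and the Stats table is invented):

var idem = new IdempotencyId(evnt.SourceId, "statsusercount");
using (var t = db.BeginTransaction())
{
    if (!db.IsDuplicateOperation(idem))
    {
        // the fixed hash ensures the counter is incremented exactly
        // once per operation, no matter how many times the event is handled
        db.Execute("update Stats set UserCount = UserCount + 1");
    }
    t.Commit();
}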

Note that for this to work we always need a transaction involving both the idempotency storage and the domain storage, so they need to share the same db. But you don't need one global idempotency store; each component can have its very own (also the case if you're using multiple databases/engines).


There you have it, 2 approaches to ensure idempotency for your eventually consistent app. They both work and I'm using both at the same time; it's not about choosing one over the other upfront, but about choosing the best one for your specific case. As an example, I have a "Project" entity that uses event sourcing, so it makes sense to use the first approach, but the read model updater (an event handler) and the metrics updater (another event handler) subscribing to the ProjectCreated event will use the second approach, with the idempotency id hashes 'projectread' and 'projectcount'.

To Null Or Not To Null aka Is Null Really Evil?

Written on 10 August 2015

It's not a secret that some devs consider null to be 'evil' and even the 'father' of null himself said that introducing null was his billion dollar mistake. But is null that evil? Some 'purists' (for lack of a better word) think so and they advocate that we should stay away from nulls no matter what!

Personally, I don't care much about null. For me null means 'lack of value' or 'nothing' and I treat it as such. 'Nothing' is a valid result in many scenarios and the alternative is to have a Maybe<T> type which will be used exactly like null. For me, it would be just a string replacement and my code flow would be the same. Thing is, from a developer point of view, null is just a tool. The code isn't more buggy because C# has nulls. And null reference exceptions are great signals that you have an unhandled use case (read: bug) there.
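
Here's what I mean by a string replacement, side by side (Maybe<T> is a hypothetical option type, not a BCL class, and _repo is an invented repository):

public string GetUserName(Guid id)
{
    // with an option type
    Maybe<User> maybeUser = _repo.GetById(id);
    if (!maybeUser.HasValue) return "unknown";
    return maybeUser.Value.Name;
}

public string GetUserNameUsingNull(Guid id)
{
    // with plain null - the code flow is identical
    User user = _repo.GetByIdOrNull(id);
    if (user == null) return "unknown";
    return user.Name;
}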

I think that adding an option type to replace null is more of a cosmetic change. We'd still be checking if a result exists instead of checking for null, and the Null Object pattern would still be used. Personally, I don't see this pattern as a 'necessary evil' (and some people call it an anti-pattern altogether) because it really makes sense in cases where you always expect to have a value. External services like logging, caching and even persistence are use cases where we expect some instance, but there may be none configured (and in this case 'none' is a valid - and the default - configuration option). So we simply need a value which looks and acts like the expected service but does nothing, hence the null object.
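
A minimal sketch of the null object for a logging service (the ILogger interface is invented for the example):

public interface ILogger
{
    void Info(string message);
}

public class NullLogger : ILogger
{
    public static readonly ILogger Instance = new NullLogger();

    // looks like the expected service, intentionally does nothing
    public void Info(string message) { }
}

public class OrderProcessor
{
    private readonly ILogger _log;

    // no logger configured ('none' is the default)? use the null object,
    // so the rest of the code never has to check for null
    public OrderProcessor(ILogger log = null)
    {
        _log = log ?? NullLogger.Instance;
    }
}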

I really think that most devs 'screaming' that nulls are evil are nitpicking. While null might not be the best technical solution, it's not nearly as bad as some people claim it to be. Yes, null is the easiest way, yes, bad programmers write crappy code, and knives can kill people. In the end the tool is less important; the tool wielder makes or breaks something.

Null is just a simple tool and I would welcome a better tool if one becomes available in the future. But better should translate into tangible (yeah...) benefits like less code required, not replacing a word with another word.

Functional Programming vs OOP

Written on 30 July 2015

I have little experience with Functional Programming (FP) and even less with a FP language like F#. The FP I've done was mainly Linq stuff in C# and my attempt to write a build script for FAKE. I must say that I wanted to see how I'd fare with F#, considering the number of its very vocal fans. Well, it was a painful experience and not only because of the strange syntax. The main problem is the mindset. Really, a FP mindset is required when using a FP language, which is pretty logical. But the big question is: why would I want to use the FP mindset?

I follow devs on twitter who are very passionate about FP in general and F# in particular. For them, FP is the greatest thing ever, the ultimate golden hammer. Everything would be better if FP were used (with F#, of course). FP makes your code simpler, cleaner, less buggy etc. All hail FP, the greatest hammer of them all. Thor is green with envy now.

Based on my experience as a developer, I beg to differ. While I've used the FP approach, I did it only where it makes sense. I'd say that the majority of problems/use cases aren't naturally fit for FP. But I know that the FP crowd says the opposite. And my opinion is that they found their perfect solution and now they force every problem into it.

Let's see some examples. There's a pretty famous site full of articles about why FP and F# rulz and OOP and C# sux (not said directly, but clearly implied). I've read it in order to try and do something with F#, but after reading some very biased articles I decided it's more of a propaganda center than a useful resource (I'm sure others disagree, but that's my opinion). It seems that OOP is such a sin for the FP cult (some devs really behave like zealots) that they need to come up with articles like this. In the article, F# source code is compared to the same code decompiled to C# and surprise, surprise, the C# code is ugly.

But let me show you some other code, decompiled to C#:

private void ExecuteTasks(TaskDictionary list)
{
  foreach (KeyValuePair<int, List<Func<string>>> keyValuePair in (IEnumerable<KeyValuePair<int, List<Func<string>>>>) Enumerable.OrderBy<KeyValuePair<int, List<Func<string>>>, int>((IEnumerable<KeyValuePair<int, List<Func<string>>>>) list.Items, (Func<KeyValuePair<int, List<Func<string>>>, int>) (d => d.Key)))
  {
    string str1 = keyValuePair.Key.ToString();
    if (keyValuePair.Key == int.MaxValue)
      str1 = "no order";
    foreach (Func<string> func in keyValuePair.Value)
    {
      string str2 = func();
      string message = "Executed task ({0}) '{1}' ";
      object[] objArray = new object[2];
      int index1 = 0;
      string str3 = str1;
      objArray[index1] = (object) str3;
      int index2 = 1;
      string str4 = str2;
      objArray[index2] = (object) str4;
      LogManager.LogInfo<AStrapbootWithContainer<T>>(this, message, objArray);
    }
  }
}

Yes, it's ugly. And here is the original source of the above code:

private void ExecuteTasks(TaskDictionary list)
{
    foreach (var tasks in list.Items.OrderBy(d => d.Key))
    {
        var pos = tasks.Key.ToString();
        if (tasks.Key == int.MaxValue)
            pos = "no order";
        foreach (var task in tasks.Value)
        {
            var name = task();
            this.LogInfo("Executed task ({0}) '{1}' ", pos, name);
        }
    }
}

A bit better, right? By the article's logic, hand written C# is better than decompiled C#, therefore C# is better than C#.

But wait, F# has immutable types and the C# equivalent is more verbose and uglier (FP people really care about source code beauty). Yes, it might be, but... WHY would I want everything to be immutable and compared by value (cough C# structs cough)? Well, the FP crowd will tell you how immutability means fewer bugs etc. I get that immutability is great, but should it be used everywhere? Even F# has mutable support. The point is, if something exists and is great in F# or any FP language, it doesn't mean I'll need it in an OOP language.

And this brings me to the real issue. The FP mindset requires that every operation be expressed as a mathematical function. But we, normal people, don't think like that. Unless you're a mathematician, but not everyone is. In 'human' language we think imperatively, very similarly to an OOP language. Doing object oriented design is like being a manager in the 'real' world: you decide who does what, which are the teams and what are their projects. And unlike the real world, you can create the perfect professionals/employees, specialists at their job.

OOP is quite natural. When I design something or try to come up with an algorithm, I think in a mix of Romanian and English. Only after I have a solution do I express it in a programming language. But my thought process is unrelated to a programming language. I'm using a 'natural' mindset which can then be easily expressed in an OOP way. To change my mindset to functional just because someone says it's "much better" would be stupid. Most problems can be solved and expressed easily in OOP. Some problems are naturally functional and those can be easily expressed using a FP language.

It's simply a matter of how many use cases can be solved using a 'normal' mindset and how many use cases fit the functional paradigm better. My experience says the functional use cases are more limited and I can use C# for those cases, even if in F# I could have done them more easily. There are simply too few to matter.

But some of the FP crowd don't think that. For them, everything is better if it's expressed in a functional way. Ok, let's see how it goes with the famous FizzBuzz test. The code below is taken from F# for fun and profit.

Let's start with the imperative version but expressed in F#.

module FizzBuzz_IfPrime =

    let fizzBuzz i =
        let mutable printed = false

        if i % 3 = 0 then
            printed <- true
            printf "Fizz"

        if i % 5 = 0 then
            printed <- true
            printf "Buzz"

        if not printed then
            printf "%i" i

        printf "; "

    // do the fizzbuzz
    [1..100] |> List.iter fizzBuzz

Very similar to a C# version. But wait, we know that FP is the best paradigm ever, so let's solve the problem in a functional way, cause everyone knows FP means less code, fewer bugs, beauty etc.

module FizzBuzz_Pipeline_WithTuple =

    // type Data = int * string option

    let carbonate factor label data =
        let (i,labelSoFar) = data
        if i % factor = 0 then
            // pass on a new data record
            let newLabel =
                labelSoFar
                |> Option.map (fun s -> s + label)
                |> defaultArg <| label
            (i,Some newLabel)
        else
            // pass on the unchanged data
            data

    let labelOrDefault data =
        let (i,labelSoFar) = data
        labelSoFar
        |> defaultArg <| sprintf "%i" i

    let fizzBuzz i =
        (i,None)   // use tuple instead of record
        |> carbonate 3 "Fizz"
        |> carbonate 5 "Buzz"
        |> labelOrDefault     // convert to string
        |> printf "%s; "      // print

    [1..100] |> List.iter fizzBuzz

Hmm, so this is what less, more concise code looks like. For whatever reason I prefer the bloated and uglier (and of course hard to maintain) imperative version. But wait, there's more!!

OOP has design patterns because it's such an imperfect mindset and we need workarounds. FP is so great it doesn't need patterns. Until it needs them. Enter the Railway pattern. And here's FizzBuzz using this pattern.

module FizzBuzz_RailwayOriented_UsingCustomChoice =

    open RailwayCombinatorModule

    let (|Uncarbonated|Carbonated|) =
        function
        | Choice1Of2 u -> Uncarbonated u
        | Choice2Of2 c -> Carbonated c

    /// convert a single value into a two-track result
    let uncarbonated x = Choice1Of2 x
    let carbonated x = Choice2Of2 x

    // carbonate a value
    let carbonate factor label i =
        if i % factor = 0 then
            carbonated label
        else
            uncarbonated i

    let connect f =
        function
        | Uncarbonated i -> f i
        | Carbonated x -> carbonated x

    let connect' f =
        either f carbonated

    let fizzBuzz =
        carbonate 15 "FizzBuzz"
        >> connect (carbonate 3 "Fizz")
        >> connect (carbonate 5 "Buzz")
        >> either (printf "%i; ") (printf "%s; ")

    // test
    [1..100] |> List.iter fizzBuzz

Is it just me, or is this version even longer than the previous one? And to me, being a n00b when it comes to F#, it looks quite complicated and ugly. At least compared to the imperative version.

The point of this is to show you that you can force anything to work with the FP golden hammer, but the result isn't the expected one. Clearly, in this case the imperative approach is the simplest one. But does this mean that only simple problems are fit for the imperative (OOP) mindset? Not really; remember that we normally think in an imperative mindset. The Domain expert will communicate according to that mindset. It takes explicit effort to 'convert' everything to a functional approach. And as seen with FizzBuzz, you can end up with abominations.

I'm really annoyed by FP zealots who claim that FP is the answer to everything. As the saying goes, "a real programmer writes assembler in any language", and in the end it's the developer who's in charge of the code quality. Your codebase isn't more bug free just because you're using F#; it's because you wrote good code. You may like FP and your brain may easily convert every problem to a FP representation. Ok, but mine and others' don't work that way. This doesn't mean our code is crap or that the codebase is huge.

For me OOP is natural and I have no problem using it. For the little FP I do, when it's the case, C# is enough. Maybe if I had enough use cases that fit the FP mindset, I could write them in F#. But that's it. OOP is not automatically inferior for everything, FP is not superior for everything. And the fact that very few devs are using a functional language is a sign that it's a niche mindset for niche problems. And zealotry won't change things.

P.S: I'm one of the devs who say "always use DDD" regardless of the app. But I'm talking about the strategic aspect. Most of the time you're building an app that provides some business value. It's natural for the app to be designed according to the business needs, hence domain driven design. But that doesn't mean you should always use the DDD tactical tools. And obviously, for experiments, toys, learning some new tool, there's no need for anything DDD.