Organizing App Startup Tasks With StartItUp

Written on 30 May 2016

Every time I start a new project I go through the same ritual: configure logging, set up the DI Container, configure the data mapper, service bus, event store etc. OK, that’s not really a problem, ’cause I don’t do that every day, but I do need to change some of this stuff often enough to require a more organized approach.

For some time, I’ve had a small library that allowed me to organize the startup tasks into classes, but I needed a bit more flexibility: the ability to pass values/context between tasks without much ceremony. And this is how version 2 of StartItUp came to be. But let’s see how it makes things easier.

Well, here’s a self-explanatory screenshot:

[screenshot: the startup task classes in the project tree]

All tasks are classes named according to the “ConfigTask_[order]_[TaskName]” convention.
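Concretely, the tasks covered in this post sit next to each other in the project like this:

    ConfigureLogging.cs
    ConfigTask_0_RegisterServices.cs
    ConfigTask_0_SqlFu.cs
    ConfigTask_2_EventStore.cs
    ConfigTask_6_WebFramework.cs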

`AppSettings` is the… settings object that in most apps would be a static class, but I prefer an instance that I can register in the DI Container as a singleton if I need to inject it somewhere. It holds options like email addresses, the logging directory, the db connection string etc. It’s also used as state shared by the config tasks.


public class AppSettings : StartupContext
{
    public AppSettings(IAppBuilder owin)
    {
        //keep the OWIN app builder around for the web setup task
        ContextData["owin"] = owin;
        LogDirectory = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "App_Data") + Path.DirectorySeparatorChar;
#if DEBUG
        IsDebug = true;
#endif
    }

    public bool IsDebug { get; set; }

    //LocalDb is a connection string constant defined elsewhere in the app
    public string DbConnection { get; set; } = LocalDb;

    public string LogDirectory { get; set; }
}

I should have private setters, but meh… Let’s talk about some interesting stuff: the class inherits from StartupContext, defined in StartItUp, which is useful for passing values between tasks or using helpers like the Autofac extension methods (see below). It also has a dependency on IAppBuilder because we are in an OWIN app and it will be needed to set up the web stuff.

Let’s see a bit of logging configuration (with Serilog):

public class ConfigureLogging
{
    public void Run(AppSettings cfg)
    {
        var tpl = "{Timestamp:G} |{Level}|{Message}{NewLine:l}{Exception:l}";

        var logConfig = new LoggerConfiguration();
        if (cfg.IsDebug)
        {
            logConfig.MinimumLevel.Debug();
        }
        else
        {
            logConfig.MinimumLevel.Information();
        }

        logConfig
            .WriteTo.Trace(outputTemplate: tpl, restrictedToMinimumLevel: LogEventLevel.Debug)
            .WriteTo.RollingFile(cfg.LogDirectory + "log-{Date}.log", outputTemplate: tpl, restrictedToMinimumLevel: LogEventLevel.Warning);
        LogManager.SetWriter(new SerilogAdapter(logConfig.CreateLogger()));
    }
}

The interesting part here is that this class is automatically invoked by StartItUp before all the other tasks. When the app is ready to go to production, I’ll add integration with a 3rd party logging service like LogEntries.

public class ConfigTask_0_RegisterServices
{
    public void Run(AppSettings cfg)
    {
        cfg.ConfigureContainer(cb =>
        {
            cb.RegisterServices(asm: cfg.AppAssemblies);

            //AppSettings can be injected now
            cb.Register(c => cfg).AsSelf().SingleInstance();
        });
    }
}

We’re configuring Autofac based on conventions: classes ending in “Service”, “Query” or “Manager” are automatically added by the RegisterServices method defined in StartItUp.Autofac, and I can easily add other suffixes if I need to.
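For context, here’s roughly what such a convention-based registration amounts to in plain Autofac (my approximation, not the actual StartItUp.Autofac code):

    //approximation of RegisterServices using plain Autofac (needs System.Linq)
    var suffixes = new[] { "Service", "Query", "Manager" };
    cb.RegisterAssemblyTypes(cfg.AppAssemblies)
        .Where(t => suffixes.Any(s => t.Name.EndsWith(s)))
        .AsImplementedInterfaces()
        .AsSelf();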

Let’s set up the data mapper configuration, SqlFu obviously.

public class ConfigTask_0_SqlFu
{
    public void Run(AppSettings cfg)
    {
        SqlFuManager.Configure(c =>
        {
            c.AddProfile(new SqlServer2012Provider(SqlClientFactory.Instance.CreateConnection), cfg.DbConnection);
            c.AddNamingConvention(t => t.Name.EndsWith("Item"), t => new TableName(t.Name.SubstringUntil("Item")));
            c.RegisterConverter(o => o == null ? null : new UserId((Guid)o));
            c.RegisterConverter(o => o != DBNull.Value && ((int)o) > 0);

            c.OnCommand = cmd => this.LogDebug(cmd.FormatCommand());
        });

        cfg.ConfigureContainer(cb => cb.RegisterInstance(SqlFuManager.GetDbFactory<MainDb>("default")).As<IMainDb>().SingleInstance());
    }
}

See how I configure the mapper and update the DI Container configuration. We keep related things in the same place.

Now, the (N)EventStore.


public class ConfigTask_2_EventStore
{
    public void Run(AppSettings cfg)
    {
        var es = Wireup.Init()
            .UsingSqlPersistence(cfg.Container().Resolve<IMainDb>())
            .WithDialect(new MsSqlDialect())
            .InitializeStorageEngine()
            .UsingJsonSerialization()
            .HookIntoPipelineUsing(new DispatchMessages(cfg.Container().Resolve<IDomainBus>().GetDispatcher()))
            .LogTo(t => new LogAdapter(t))
            .Build();

        cfg.ConfigureContainer(cb =>
        {
            cb.RegisterInstance(es).As<IStoreEvents>().SingleInstance();
        });
    }

    class DispatchMessages : IPipelineHook
    {
        private readonly IDispatchMessages _bus;

        public DispatchMessages(IDispatchMessages bus)
        {
            _bus = bus;
        }

        public void Dispose()
        {
        }

        public ICommit Select(ICommit committed)
            => committed;

        public bool PreCommit(CommitAttempt attempt)
            => true;

        public void PostCommit(ICommit committed)
        {
            var events = committed.Events.Select(d => d.Body).Cast<DomainEvent>().ToArray();
#if DEBUG
            this.LogDebug($"Publishing {events.Length} events: {events.Select(d => d.GetType().ToString()).StringJoin()}");
#endif
            _bus.Publish(events);
        }

        public void OnPurge(string bucketId)
        {
        }

        public void OnDeleteStream(string bucketId, string streamId)
        {
        }
    }
}

As you can see, there is a bit of config here, including the definition of an event store hook that will be automatically invoked by NEventStore to publish the persisted events. Also, we’re using the DI Container while updating its configuration in the same task. Once again, we keep relevant things in one place.

My current project is a Nancy app, so besides other stuff I do need to bootstrap it, and Nancy has its own bootstrapper (ha!). So, I need to configure OWIN but also integrate with Nancy’s bootstrapper. How hard can it be?

public class ConfigTask_6_WebFramework
{
    public void Run(AppSettings cfg)
    {
        var owin = cfg.ContextData["owin"] as IAppBuilder;
        owin.UseCookieAuthentication(new CookieAuthenticationOptions()
        {
            CookieName = "_dfauth",
            SlidingExpiration = true,
            ExpireTimeSpan = TimeSpan.FromDays(7)
        }, PipelineStage.Authenticate);

        owin.UseNancy(opt => opt.Bootstrapper = new NancyBoot(cfg));

        owin.UseStageMarker(PipelineStage.MapHandler);
    }
}

//nancy bootstrapper
public class NancyBoot : AutofacNancyBootstrapper
{
    private readonly AppSettings _cfg;

    public NancyBoot(AppSettings cfg)
    {
        _cfg = cfg;
    }

    /* removed other settings */

    protected override ILifetimeScope GetApplicationContainer() => _cfg.Container();
}
    

When adding Nancy, we manually specify which bootstrapper to use, and the bootstrapper will use our configuration object to return the Autofac instance when Nancy needs it. That wasn’t that hard, was it? OK, now that we have our tasks, we need one more thing:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        StartIt.Up(new AppSettings(app));
    }
}

Yep, you need to invoke StartItUp manually, no automagic involved. This is required in order to pass things to AppSettings. Later, I’ll have to add some code to read the settings from web.config.
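That could look something like this (a hypothetical sketch; the “main” connection string name is made up, and it needs System.Configuration):

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var settings = new AppSettings(app);
            //hypothetical: override the default connection string with the web.config value, if present
            var cs = ConfigurationManager.ConnectionStrings["main"];
            if (cs != null) settings.DbConnection = cs.ConnectionString;
            StartIt.Up(settings);
        }
    }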

So, you’ve seen how we can organize the startup tasks in a maintainable manner. It’s very easy to add/change stuff, easy enough to integrate with other libraries, and you can even unit test the tasks. Go on, Install-Package StartItUp!

Using Discriminated Unions In C#

Written on 14 May 2016

I’ve reached the point where I needed discriminated unions, but in C#, which doesn’t have support for them (yet). So, I’ve created a couple of classes (if you want a nuget, they’re part of my CavemanTools library) that simulate a basic discriminated union with 2 or 3 options.


var fruits = new AnyOf<Apple, Orange>(new Apple());

fruits.When<Apple>(a => { });
fruits.When<Orange>(o => { });

//compile time type
var apple = fruits.As<Apple>();

//dynamic type
var value = fruits.Value;

if (fruits.Is<Orange>()) { }
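If you’re curious how such a union can be simulated, here’s a bare-bones sketch (my illustration, not the actual CavemanTools implementation):

    //bare-bones 2-option union; the real class has more guards and features
    public class AnyOf<T1, T2>
    {
        public object Value { get; private set; }

        public AnyOf(T1 value) { Value = value; }
        public AnyOf(T2 value) { Value = value; }

        public bool Is<T>() => Value is T;

        public T As<T>() => (T)Value;

        public void When<T>(Action<T> handler)
        {
            if (Value is T) handler((T)Value);
        }
    }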

Even if C# 7 will offer pattern matching, I still find it useful to have a type that specifies and enforces only a couple of types; a dynamic field is just too generic.
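For comparison, matching on the union’s value in C# 7 would look something like this (going by the proposed syntax):

    //C# 7 type patterns over the union's value
    switch (fruits.Value)
    {
        case Apple a: /* handle the apple */ break;
        case Orange o: /* handle the orange */ break;
    }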

And since option types won’t be available in C# 7, I’ve also included an Optional struct for those moments when you really don’t want to deal with nulls.

private Optional<User> GetUser() { /* ... */ }

var user = GetUser();

if (user.IsEmpty) { }

var other = user.ValueOr(new User());
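And a bare-bones sketch of such an option type (again, my illustration, not the CavemanTools code):

    //a struct, so the option itself can never be null
    public struct Optional<T>
    {
        private readonly T _value;
        private readonly bool _hasValue;

        public Optional(T value)
        {
            _value = value;
            _hasValue = value != null;
        }

        public bool IsEmpty => !_hasValue;

        public T ValueOr(T other) => _hasValue ? _value : other;
    }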

P.S.: Inb4 “Why don’t you just use F# already?”. Because I know C# better, my F# knowledge is limited at this point, and I need these features NOW. Besides, I only want a couple of features, not the whole language.

DDD Is Expensive Is A Myth

Written on 12 May 2016

If only I had a dollar for every time I read something similar to this: “DDD is expensive and it should be used ONLY if the domain is BIG, behaviour rich and only for the core domain, not for the whole application”.

Well… today, I would say DDD is mainstream. Sure, from the crowd of people “doing DDD” only a minority actually does DDD, but nevertheless, there are plenty of experienced developers or architects who can properly apply DDD. Now, DDD isn’t expensive by its intrinsic nature, but the people who can apply it aren’t that cheap and if they deal with a complex domain, it could take some time. So, in the end you can say it’s expensive.

However, the thing is, a company is already paying those people, and it doesn’t pay them more because they’re using DDD; that’s just another development tool. So, it’s not about the money. But what about time? Can we afford the time spent understanding the domain BEFORE we start coding? Well, can we afford NOT to? In what universe is it a better idea to just start coding functionality you don’t fully understand?

“Hey, DDD is too complex for a CRUD app” you might say. How do you know it’s a CRUD app without having a domain model? For any app, simple or complex, you need to gather all the information required to implement it. As I’ve said in the past, DDD is not programming; it’s just a methodology to get that information, to understand the functionality that needs to be implemented. Only after you know the model can you say: it’s a CRUD app or it’s a rich-behaviour app, but not before.

What about the tactical patterns? Guess what, they’re still not code, but ways to group information. When I’m identifying an aggregate I’m interested in concepts and business rules. If I end up with some state and some validation, this is what the domain needs. And based on that I will choose an implementation which will be CRUDy in nature.

I think the problem lies in how devs look at DDD: as if it were a way to design objects (OOP), instead of a way to understand the relevant parts of the domain. And they think they have to use a certain architecture or ‘behaviour-rich’ objects or the Repository pattern. They view it as a complex solution which requires a complex problem (but how can they know upfront how complex the problem is?!).

Once someone understands the nature and purpose of DDD, they can apply it as the first step in building (any) software. And the result will be a specific (relevant) domain model (abstraction) which will contain all the information developers need in order to write the code. And btw, CRUD apps are the easiest and fastest to model with DDD.

Let's Fix The Broken Hiring Process

Written on 01 May 2016

Posts about the stupidity of the hiring process aren’t news in the developer community. Most readers agree with the author that the process is broken, often humiliating and, more importantly, not that effective. Very few people actually defend the process, but for whatever reason it remains the norm in the industry. Like many others, I’ve read and agreed with Sahat Yalkabov. We have a problem and seemingly very few solutions. Some say the process, like democracy, might not be the best, but it’s the best we have now; after all, the big, successful companies use it and they’re doing well. However, in my opinion the process is not even remotely decent, and if this is “the best we have now”, that’s sad considering it’s a field populated by many smart people.

What are you looking for?

But let’s start at the beginning. What’s the purpose of hiring a developer? Note the word developer; I didn’t say ‘programmer’, and the distinction is important. A developer builds products that represent value for a business; a programmer writes code. Any business wants to lease, rent or buy assets that provide functionality it needs. From this point of view, developers are just rented assets. And you need them to deliver business value. No manager (or stakeholder) actually cares about the code itself, they care about the functionality the company gets from that code. The code is just a production detail.

When your hiring process targets programmers, you want people who can write code. When you target developers, you want people who deliver business value (using code). Do you think that after you’ve made it through a byzantine process, played your part in the interview theater and finally got the job, your boss (and their boss) will care about how efficient or cool or even maintainable your code is? No, they care about you implementing feature X. You fixing bug Y. You don’t get a bonus if you are the author of that great algorithm. Actually, they couldn’t care less if you’ve just copy-pasted it from Stack Overflow. They just want you to deliver business value.

Sure, it’s important to come up with your own solutions, but the most important thing is to come up with solutions. Maintainable, reliable solutions. Your boss doesn’t care who spent 3 nights at home thinking about the solution; they care about who implemented the solution, regardless of who did the hard work. So, yes, if someone on SO designs an architecture for you, from your team’s/boss’s/company’s point of view YOU are the problem solver. Now, if you fail to understand the solution, the value you bring is not going to be enough, but the point here is that stakeholders want a valid result and are not concerned about how that result came to be.

The current hiring process looks for people who can theoretically solve problems on their own. The issue here is that many problems have already been solved, and reinventing the wheel is just wasting time (read: money). And this is exactly the kind of problem a candidate is asked to solve in an interview. And if you deliver a working solution found on SO, it’s bad because it wasn’t you who thought of it. As if the original source of the solution were the most important thing, not the solution itself.

Is it about showing off (ego) or delivering value?

You’re not tested on whether you can bring value; you’re tested on whether you can think up a solution from scratch, regardless of whether it’s needed or not. The reasoning here is “I want to see how you think, how you solve problems”. But wait, the solution needs to come from you directly; you can’t use already existing and available solutions that can deliver value fast. No, you have to prove that “YOU’re the (wo)man!”. After all, in your daily job you’ll be inventing the wheel every time, ’cuz using an already invented wheel is not cool. And if you can prove you can build a wooden wheel, it’s so obvious that you can build a car, too!

Most interviews are about showing how smart or knowledgeable you are, instead of showing how quickly you can deliver business value TODAY. And the logical fallacy is that if you know how to build a hammer, clearly you know how to be a good carpenter and you can also build a house. An interviewer tries to guess today, by means of artificial scenarios or, worse, puzzles, how ‘good’ you’ll be in the future, in an unknown scenario. A good developer would know about the principle of YAGNI. It seems that people conducting technical interviews don’t know that it can be applied outside the code as well. Hint: the point of YAGNI is that you shouldn’t waste time trying to guess future functionality.

An interviewer should care about the ability of a candidate to deliver solutions. If you ask someone to reverse a string and they invoke a function from a library, that’s a valid solution. It does what it is supposed to do, in a maintainable manner. Writing a new function is a waste of time. You could build a better string reversal, but the question is: do you really need it, or is the current solution good enough? Doing ‘what if’ katas doesn’t bring value in this context.

You can ask the candidate to solve a simple, easy to explain and understand domain-related problem: a realistic scenario where there aren’t libraries available or the problem is too specific to google an answer, and the candidate must deliver a working, maintainable solution.

A business doesn’t have time to waste; it wants value as fast as possible. And it’s the job of a developer to know when they can use an existing solution and when they need to customize or build one from scratch. If the interviewer asks about problems that have been solved 1000 times already, expect the candidate NOT to reinvent the wheel and to use a library or a tool like google to solve it.

“He who doesn’t have wise men should buy them” - Romanian Saying

Another problem is that it seems very trendy to disregard the practical experience of a candidate. Not only is this insulting, it’s a sign of the process’ stupidity. In any field (except IT?!) experience is a highly prized asset. You know why? Because it can’t be taught and it can’t be faked. I’ve seen this reasoning from some interviewers: “people lie all the time on their CV” or “I don’t have 2 hours to spend reading your blog, your StackOverflow profile or your OSS work [therefore I will ignore it]”.

Well, if YOU, the technical interviewer, a senior developer, can’t tell if someone lies when they’re talking about their experience, it means that YOU don’t have experience and you shouldn’t conduct technical interviews. Any developer worth their salt will detect in 2-5 minutes if a candidate tries to bullshit their way out of a question. Only people who don’t have a clue think you can fake experience. Fakers are detected so fast it’s not even funny.

And you don’t need hours to assess someone’s technical level. A quick read of their CV or, better, their code, technical blog (just look at the titles) or SO profile (the answers/questions ratio for the relevant tags) and in 10 minutes you have a rough idea (junior, middle, senior). Then you can just talk to the candidate about their work and you can (gasp!) solve a real problem together. Using a computer, writing code, delivering value.

Things like “every engineer should know this [trivia bit that shows how smart I am]” are also pointless, unless your daily work requires everyone to use it. Don’t ask about things that “anyone who took a 101 CS class should know” if they aren’t needed. Remember that TV show “Are You Smarter Than a 5th Grader”? Well, are you? After all, those are things every 5th grader knows. How come YOU don’t know them?

A simple solution

If we apply a bit of common sense (I’ve heard Walmart sells it, limited offer, get it while it lasts), a technical interview can be a very efficient and cost-effective affair. A candidate who seems good enough to be invited to an on-site interview should be treated for 1 day like a new member of the team. They will have to work with the team to solve some problems that don’t require a deep understanding of the existing code base. For example: ask them to fix a simple bug, implement a minor feature, refactor some technical debt etc. The point here is to put them into a context where they can deliver business value.

As an interviewer you get these benefits:

  • Simple and natural to implement: you don’t need special training or to read up on what questions to ask
  • See how the candidate communicates
  • See how the candidate works in a team
  • See the actual worth of the candidate. Potential is good, delivering value on the spot is better. Maybe they will tell you about a new library you didn’t know about. Maybe they show you a different way to approach a problem. Maybe they spot (and fix) some bugs for you. If the candidate is good, it will be very beneficial for the company even if you don’t make an offer (but you should). A weak candidate will be weeded out in a couple of hours at most, and in that case don’t waste time, just send them home.

As a candidate, you won’t be the subject of an idiotic, artificial process designed by people who either don’t care or are simply incompetent. No 4-5 interviews, no 2-3 weeks/months process, no repeating the same answers to the same (basic) questions asked by different people. No wasted time, and you get to show what you know and what value you can bring. In return, you get an idea about the quality of the code base, the management process, the tools, the working environment etc. It’s WIN-WIN!

Any developer worth their salt will LOVE this. And I know there are some (very few…) companies who are using a similar approach and they are very satisfied with the results. Others favour a homework-style approach (sometimes paying the candidate for their work), which I can’t say I prefer, but it’s still way better than the standard whiteboarding crap we have today.

Conclusion

Want to hire good developers, with minimum effort and cost?

  • Test the ability to deliver business value, instead of raw knowledge
  • Don’t discard experience, try to use it to get more insight from the candidate
  • When a candidate shows you their Open Source code or StackOverflow profile or their technical blog posts, they’re making your life EASIER. They give you fuel for your questions
  • See how the candidate performs in your day-to-day working environment, solving real business problems.

To quote a redditor:

If you want to hire a truck driver to take your goods to New York, you put him in a truck and see how fast and safely he can reach New York; you don’t test his truck-driving-to-New-York skills by asking him to do stunts on a bicycle.

A CQS Api Client With Aurelia and TypeScript

Written on 19 March 2016

Every SPA needs a way to communicate with the server, and Aurelia features an HttpClient that implements the Fetch API. But this client is a bit too low-level for my taste; I want my app not to be concerned with how a client communicates with the server. I want my app to take a Command/Query approach, i.e. when something needs to change I’ll send a command request, and when I just want some data without changing anything I’ll send a query request.

On my server, I’m using the following conventions:

  • For commands: [POST] baseUrl/{command}. For every command I’m returning a CommandResult, which looks like this:

export interface ICommandResult {
    errors: any;
    hasErrors: boolean;
    data: any;
}

  • For queries: [GET] baseUrl/{query}. It returns the query result, usually an object or a list of objects.

So, my API client interface looks like this:

interface IApiClient {
    execute(cmd: string, data: any, func: (res: ICommandResult) => void);
    query<T>(query: string, func: (result: T) => void, params?: any);
}

Considering the http client returns a promise and I want to handle request errors in one place, I’ve decided to process any result inside a function that is automatically invoked by the API client; that’s why I’m always passing a function when invoking execute or query.

Let’s see some implementation details:

execute(cmd: string, data: any, func: (res: ICommandResult) => void) {

    return this.http.fetch(cmd, {
        method: "POST",
        body: json(data)
    })
        .then(resp => resp.json().then(d => func(d)))
        .catch(er => this.handleError(er));
}

Pretty straightforward, I’d say. We send the command as a POST in JSON format, receive a response which is parsed as JSON, then we invoke the result processor, while any errors are handled inside handleError (we’ll see its implementation a bit later). Any command result processor will check for validation messages (the hasErrors field), has the error messages available in the errors field, and can get things like an id or something else generated on the server side from the data field.

If you know the original CQS (which applies to objects), returning a command result with some data looks like a violation of the principle. However, this is not that CQS, and we’d only complicate things if we had to query specifically for that data later.

Here’s an example of how to use it:

 this.api.execute(/*command name */"createAsset"
                 ,/* command params*/ { name:"test" }
                 ,/* result processor */ r => {
                    if (r.hasErrors) {
                        //this will be used by an <errors> element
                        this.error = r.errors.name;
                        return;
                    }
                    //do something with the new id
                    console.log("new id:"+r.data.id); 
                });

The query is similar, only that it returns a typed result:

query<T>(query: string, func: (result: T) => void, params?: any) {

    if (params) {
        var args = $.param(params);
        query = query + "?" + args;
    }

    return this.http.fetch(query)
        .then(r => r.json().then(b => func(b)))
        .catch(err => this.handleError(err));
}

I’m using jQuery to create the query string. The query result is assumed to be of the specified <T> type.

this.api.query<MyResult>("getapicommanddata", result => {
           // do something with the result  
         this.data=result;
       });

Plain and simple.

Here’s the full ApiClient class in all its glory:


import { HttpClient, HttpClientConfiguration, json } from "aurelia-fetch-client";
import 'fetch'; //polyfill
import * as $ from 'jquery';

export interface ICommandResult {
    errors: any;
    hasErrors: boolean;
    data: any;
}

export abstract class ApiClient {
    constructor(private http: HttpClient) {
        this.http.configure((cf: HttpClientConfiguration) => {
            cf.useStandardConfiguration()
                .defaults.headers = {
                    'Accept': 'application/json',
                    'X-Requested-With': 'Fetch'
                };
        });
    }

    configure(baseUrl: string) {
        this.http.baseUrl = baseUrl + (baseUrl.endsWith("/") ? "" : "/");
    }

    query<T>(query: string, func: (result: T) => void, params?: any) {

        if (params) {
            var args = $.param(params);
            query = query + "?" + args;
        }

        return this.http.fetch(query)
            .then(r => r.json().then(b => func(b)))
            .catch(err => this.handleError(err));
    }

    protected handleError(err) {
        //todo send to raygun
        console.debug("Server err. ");
        console.debug(err);
    }

    execute(cmd: string, data: any, func: (res: ICommandResult) => void) {

        return this.http.fetch(cmd, {
            method: "POST",
            body: json(data)
        })
            .then(resp => resp.json().then(d => func(d)))
            .catch(er => this.handleError(er));
    }
}

The code is more maintainable and allows us to change how we communicate with the server in the future. The class is abstract because I want to be able to talk to multiple endpoints (on the same server), so I create a class for each endpoint and, optionally, semantic methods that make the client look like a normal app service or even a repository.


@autoinject
export class FooApi extends ApiClient {
    constructor(http: HttpClient) {
        super(http);

        //setup base api url
        this.configure("api/foo-endpoint");
    }

    //override error handling
    protected handleError(err) {
        Notifier.instance.error();
    }

    //a nice facade for the command
    delete(item: IdName, success: Function) {
        this.execute("deleteFoo", { assetId: item.id }, r => success(r));
    }
}

As a trivia fact, when I started building my app, Aurelia’s HttpClient was a wrapper over XMLHttpRequest, but now it implements the Fetch API. Since I was using an abstraction like the one above, I just needed to change the ApiClient implementation and nothing else. Still, I recommend using this kind of abstraction primarily for developer friendliness, as we don’t need to change how we communicate with the server very often; the easy swap is a nice side effect.

While I’m still a novice when it comes to developing single page apps, I can’t help but notice how often I can use the same approaches I’m using on the server side. In the end, we’re still building an app, so all the design patterns and SOLID stuff apply, but we’re using Aurelia/TypeScript instead of WPF/C#.