Let's Fix The Broken Hiring Process

Written on 01 May 2016

Posts about the stupidity of the hiring process aren’t news in the developer community. Most readers agree with the author that the process is broken, often humiliating and, more importantly, not that effective. Very few people actually defend the process, but for whatever reason it remains the norm in the industry. Like many others, I’ve read and agreed with Sahat Yalkabov. We have a problem and seemingly very few solutions. Some say the process, like democracy, might not be the best, but it’s the best we have now; after all, the big, successful companies use it and they’re doing well. However, in my opinion the process is not even remotely decent, and if this is “the best we have now”, that’s sad considering the field is populated by many smart people.

What are you looking for?

But let’s start at the beginning. What’s the purpose of hiring a developer? Note the word developer, I didn’t say ‘programmer’, and the distinction is important. A developer builds products that represent value for a business; a programmer writes code. Any business wants to lease, rent or buy assets that provide the functionality it needs. From this point of view, developers are just rented assets. And you need them to deliver business value. No manager (or stakeholder) actually cares about the code itself; they care about the functionality the company gets from that code. The code is just a production detail.

When your hiring process targets programmers, you want people who can write code. When you target developers, you want people who deliver business value (using code). Do you think that after you get through a byzantine process and you’ve played your part in the interview theater and finally got the job, your boss (and their boss) will care about how efficient or cool or even maintainable your code is? No, they care about you implementing feature X. You fixing bug Y. You don’t get a bonus if you are the author of that great algorithm. Actually, they couldn’t care less if you’ve just copy pasted it from Stack Overflow. They just want you to deliver business value.

Sure, it’s important to come up with your own solutions, but the most important thing is to come up with solutions. Maintainable, reliable solutions. Your boss doesn’t care who spent 3 nights at home thinking about the solution, they care about who implemented the solution, regardless of who did the hard work. So yes, if someone on SO designs an architecture for you, from your team’s/boss’s/company’s point of view YOU are the problem solver. Now, if you fail to understand the solution, the value you’ll bring is not going to be enough, but the point here is that stakeholders want a valid result and are not concerned about how that result came to be.

The current hiring process looks for people who can theoretically solve problems on their own. The issue here is that many problems have already been solved, and reinventing the wheel is just wasting time (read: money). Yet this is exactly the kind of problem a candidate is asked to solve in an interview. And if you deliver a working solution found on SO, it’s bad because it wasn’t you who thought of it. As if the original source of the solution were the most important thing, not the solution itself.

Is it about showing off (ego) or delivering value?

You’re not tested on whether you can bring value; you’re tested on whether you can think up a solution from scratch, regardless of whether that’s needed or not. The reasoning here is “I want to see how you think, how you solve problems”. But wait, the solution needs to come from you directly; you can’t use already existing and available solutions that can deliver value fast. No, you have to prove that “YOU’re the (wo)man!”. After all, in your daily job you’ll be reinventing the wheel every time, ‘cuz using an already invented wheel is not cool. And if you can prove you can build a wooden wheel, it’s so obvious that you can build a car, too!

Most interviews are about showing how smart or knowledgeable you are, instead of how quickly you can deliver business value TODAY. And the logical fallacy is that if you know how to build a hammer, clearly you know how to be a good carpenter and you can also build a house. An interviewer tries to guess today, by means of artificial scenarios or, worse, puzzles, how ‘good’ you’ll be in the future, in an unknown scenario. A good developer would know about the principle of YAGNI. It seems that people conducting technical interviews don’t know that it can be applied outside the code as well. Hint: The point of YAGNI is that you shouldn’t waste time trying to guess the future functionality.

An interviewer should care about the ability of a candidate to deliver solutions. If you ask someone to reverse a string and they invoke a function from a library, that’s a valid solution, because it does what it’s supposed to do, in a maintainable manner. Writing a new function is a waste of time. You can build a better string reversal, but the question is: do you really need it, or is the current solution good enough? Doing ‘what if’ katas doesn’t bring value in this context.
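To make the string example concrete, here’s what the ‘library’ answer looks like in TypeScript (an illustrative sketch, not a quote from any interview — composing the built-ins and moving on is the whole point):

```typescript
// The "use what already exists" solution: built-in string/array methods, no hand-rolled loop.
function reverse(s: string): string {
    return s.split("").reverse().join("");
}

console.log(reverse("developer")); // "repoleved"
```

It’s maintainable, obviously correct, and took ten seconds. Unless profiling shows this exact function is a bottleneck, anything fancier is a ‘what if’ kata.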

You can ask the candidate to solve a simple, easy-to-explain and easy-to-understand domain-related problem: a realistic scenario where there are no libraries, or where the problem is too specific to google an answer, and where the candidate must deliver a working, maintainable solution.

A business doesn’t have time to waste, it wants value as fast as possible. And it’s the job of a developer to know when they can use an existing solution or when they need to customize or to build one from scratch. If the interviewer asks about problems that have been solved 1000 times already, expect the candidate to NOT reinvent the wheel and to use a library or a tool like google to solve it.

“He who doesn’t have wise men should buy them” - Romanian Saying

Another problem is that it seems very trendy to disregard the practical experience of a candidate. Not only is this insulting, it’s a sign of the process’ stupidity. In any field (except IT?!) experience is a highly prized asset. You know why? Because it can’t be taught and it can’t be faked. I’ve seen this reasoning from some interviewers: “people lie all the time on their CV” or “I don’t have 2 hours to spend reading your blog, your StackOverflow profile or your OSS work [therefore I will ignore it]”.

Well, if YOU, the technical interviewer, a senior developer, can’t tell if someone lies when they’re talking about their experience, it means that YOU don’t have experience and you shouldn’t conduct technical interviews. Any developer worth their salt will detect in 2-5 minutes if a candidate tries to bullshit their way out of a question. Only people who don’t have a clue think you can fake experience. They’re detected so fast it’s not even funny.

And you don’t need hours to assess someone’s technical level. A quick read of their CV or, better, their code, technical blog (just look at the titles) or SO profile (answers/questions ratio for the relevant tags), and in 10 minutes you have a vague idea (junior, middle, senior). Then you can just talk to the candidate about their work and you can (gasp!) solve a real problem together. Using a computer, writing code, delivering value.

Things like “every engineer should know this [trivia bit that shows how smart I am]” are also pointless, unless your daily work actually requires everyone to use it. Don’t ask about things that “anyone who took a 101 CS class should know” if they aren’t needed. Remember that TV show “Are You Smarter Than a 5th Grader”? Well, are you? After all, these are things every 5th grader knows. How come YOU don’t know them?

A simple solution

If we apply a bit of common sense (I’ve heard Walmart sells it, limited offer, get it while it lasts), a technical interview can be a very efficient and cost-effective affair. A candidate who seems good enough to be invited to an on-site interview should be treated for 1 day like a new member of the team. They will have to work with the team to solve some problems that don’t require a deep understanding of the existing code base. For example: ask them to fix a simple bug, implement a minor feature, refactor some technical debt etc. The point here is to put them into a context where they can deliver business value.

As an interviewer you get these benefits:

  • Simple and natural to implement: you don’t need special training or a list of questions to ask
  • See how the candidate communicates
  • See how the candidate works in a team
  • See the actual worth of the candidate. Potential is good, delivering value on the spot is better. Maybe they will tell you about a new library you didn’t know about. Maybe they show you a different way to approach a problem. Maybe they spot (and fix) some bugs for you. If the candidate is good, it will be very beneficial for the company even if you don’t make an offer (but you should). A weak candidate will be weeded out in a couple of hours at most; in that case don’t waste time, just send them home.

As a candidate, you won’t be the subject of an idiotic, artificial process designed by people who either don’t care or are simply incompetent. No 4-5 interviews, no 2-3 week/month process, no repeating the same answers to the same (basic) questions asked by different people. No wasted time, and you get to show what you know and what value you can bring. In return, you get an idea about the quality of the code base, the management process, the tools, the working environment etc. It’s WIN-WIN!

Any developer worth their salt will LOVE this. And I know there are some (very few…) companies who are using a similar approach and they are very satisfied with the results. Others favour a homework style approach (sometimes paying the candidate for their work) which I can’t say I prefer but it’s still way better than the standard whiteboarding crap we have today.


Want to hire good developers, with minimum effort and cost?

  • Test the ability to deliver business value, instead of raw knowledge
  • Don’t discard experience, try to use it to get more insight from the candidate
  • When a candidate shows you their Open Source code or StackOverflow profile or their technical blog posts, they’re making your life EASIER. They give you fuel for your questions
  • See how the candidate performs in your day-to-day working environment, solving real business problems.

To quote a redditor:

If you want to hire a truck driver to take your goods to New York, you put them in a truck and see how fast and safely they can reach New York; you don’t test their truck-driving-to-New-York skills by asking them to do stunts on a bicycle.

A CQS Api Client With Aurelia and TypeScript

Written on 19 March 2016

Every SPA needs a way to communicate with the server, and Aurelia features an HttpClient that implements the Fetch API. But this client is a bit too low level for my taste; I want my app not to be concerned with how a client communicates with the server. I want my app to take a Command/Query approach, i.e. when something needs to change I’ll send a command request, and when I just want some data but don’t want to change anything I’ll send a query request.

On my server, I’m using the following conventions:

  • For commands: [POST] baseUrl/{command}. For every request I’m returning a CommandResult which looks like this

export interface ICommandResult {
    errors: any;
    hasErrors: boolean;
    data: any;
}
  • For queries: [GET] baseUrl/{query}. It returns the query result, usually an object or a list of objects.

So, my API client interface looks like this

interface IApiClient {
     execute(cmd: string, data: any, func: (res: ICommandResult) => void);
     query<T>(query: string, func: (result: T) => void, params?: any);
}

Considering the http client returns a promise and I want to handle request errors in one place, I’ve decided to process any result inside a function that is automatically invoked by the API client; that’s why I always pass a function when invoking execute or query.

Let’s see some implementation details

    execute(cmd: string, data: any, func: (res: ICommandResult) => void) {
        return this.http.fetch(cmd, {
            method: "POST",
            body: json(data)
        })
            .then(resp => resp.json().then(d => func(d)))
            .catch(er => this.handleError(er));
    }

Pretty straightforward, I’d say. We send the command as a POST in JSON format, receive a response which needs to be processed as JSON, then we invoke the result processor, while any errors are handled inside handleError (we’ll see its implementation a bit later). Any command result processor will check for validation messages (the hasErrors field), has the error messages available in the errors field, and can get things like an id generated on the server side from the data field.

If you know the original CQS (which applies to objects), returning a command result with some data looks like a violation of the principle. However, this is not that CQS, and we’d only complicate things if we had to query specifically for that data later.

Here’s an example of how to use it

 this.api.execute(/* command name */ "createAsset"
                 , /* command params */ { name: "test" }
                 , /* result processor */ r => {
                    if (r.hasErrors) {
                        //this will be used by an <errors> element
                        this.error = r.errors.name;
                        return;
                    }
                    //do something with the new id
                    console.log("new id: " + r.data.id);
                });

The query is similar, except that it returns a typed result

    query<T>(query: string, func: (result: T) => void, params?: any) {
        if (params) {
            var args = $.param(params);
            query = query + "?" + args;
        }
        return this.http.fetch(query)
            .then(r => r.json().then(b => func(b)))
            .catch(err => this.handleError(err));
    }

I’m using jQuery to create the query string. The query result is assumed to be of the specified <T> type.
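As an aside, if you’d rather skip the jQuery dependency for this one job, the standard URLSearchParams API builds the same kind of query string. A minimal sketch (toQueryString is my own name, not part of the client above):

```typescript
// Build "name=test&page=2" style query strings without jQuery.
// Values are coerced to strings and URL-encoded by URLSearchParams.
function toQueryString(params: Record<string, any>): string {
    return new URLSearchParams(params).toString();
}

console.log(toQueryString({ name: "test", page: 2 })); // "name=test&page=2"
```

Note that $.param handles nested objects and arrays with its own bracket convention, so the two are only interchangeable for flat parameter objects like the ones used here.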

this.api.query<MyResult>("getapicommanddata", result => {
           // do something with the result
        });
Plain and simple.

Here’s the full ApiClient class in all its glory

import {HttpClient, HttpClientConfiguration, json} from "aurelia-fetch-client";
import 'fetch'; //polyfill
import * as $ from 'jquery';

export interface ICommandResult {
    errors: any;
    hasErrors: boolean;
    data: any;
}

export abstract class ApiClient {
    constructor(private http: HttpClient) {
        this.http.configure((cf: HttpClientConfiguration) => {
            cf.defaults.headers = {
                'Accept': 'application/json',
                'X-Requested-With': 'Fetch'
            };
        });
    }

    configure(baseUrl: string) {
        this.http.baseUrl = baseUrl + (baseUrl.endsWith("/") ? "" : "/");
    }

    query<T>(query: string, func: (result: T) => void, params?: any) {
        if (params) {
            var args = $.param(params);
            query = query + "?" + args;
        }
        return this.http.fetch(query)
            .then(r => r.json().then(b => func(b)))
            .catch(err => this.handleError(err));
    }

    protected handleError(err) {
        //todo send to raygun
        console.debug("Server err. ");
    }

    execute(cmd: string, data: any, func: (res: ICommandResult) => void) {
        return this.http.fetch(cmd, {
            method: "POST",
            body: json(data)
        })
            .then(resp => resp.json().then(d => func(d)))
            .catch(er => this.handleError(er));
    }
}

The code is more maintainable and allows us to change how we communicate with the server in the future. The class is abstract because I want to be able to talk to multiple endpoints (on the same server), so I’d create a class for each endpoint and, optionally, semantic methods that make the client look like a normal app service or even a repository.

export class FooApi extends ApiClient {
    constructor(http: HttpClient) {
        super(http);
        //setup base api url
        this.configure("foo");
    }

    //override error handling
    protected handleError(err) {
    }

    //a nice facade for the command
    delete(item: IdName, success: Function) {
        this.execute("deleteFoo", { assetId: item.id }, r => success(r));
    }
}
As a trivia fact, when I started building my app, Aurelia’s HttpClient was a wrapper over XMLHttpRequest, but now it implements the Fetch API. Since I was using an abstraction like the above, I just needed to change the ApiClient implementation and nothing else. That said, I recommend this kind of abstraction primarily for developer friendliness, since we don’t need to change how we communicate with the server very often; the easy swap is a nice side effect.

While I’m still a novice when it comes to developing single page apps, I can’t help but notice how often I can use the same approaches I’m using on the server side. In the end, we’re still building an app, so all the design patterns and SOLID stuff apply; we’re just using Aurelia/TypeScript instead of WPF/C#.

The Myth of Manual POCO Mapping Performance

Written on 02 March 2016

We are in 2016 and I still see developers proud that they’re using Ado.Net and writing all that boring mapping code because “it’s faster” and they want maximum performance. Well, the phrase “stealing from your employer” comes to mind, and in this post I’m going to tell you why.

As the author of an object mapper a.k.a. micro-ORM called SqlFu, for version 3 I decided to change how I do the benchmarks. In previous versions, I took the easy way and benchmarked whole queries, but this meant I also benchmarked the server configuration and Ado.Net performance. So, I wanted to test only the mapping part, from a DbDataReader to a POCO. This way, I could measure exactly the difference between manual mapping and code-generated mapping.

The results were pretty much what I expected them to be:

  • manually mapping to a POCO is 80-100%+ faster than code-generated mapping
  • manually mapping to an anonymous object (projection) is similar to the previous case
  • manually mapping to dynamic is ~50% slower than code generated.

Yes, it’s faster to use SqlFu to map to dynamic, but that’s not the point. In fairness, when doing dynamic mappings we’re cheating a bit, by using specially crafted objects. However, the interesting part wasn’t this. The interesting part was transforming the relative numbers into absolute numbers.

Sure, manual mapping is 2x faster, but you see, it’s the difference between 4 microseconds and 8 microseconds. Let me put that into perspective: assuming you have a 1 ms query, which is a very fast query, the performance gain you get by doing things manually is… 0.4%. For a 1 ms query. Let that sink in for a bit.
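To make the arithmetic explicit, here’s the back-of-the-envelope calculation as a runnable sketch (the 4 µs and 8 µs figures are the rough per-row numbers from the benchmarks above):

```typescript
// Rough per-row mapping times, in microseconds.
const manualMappingUs = 4;
const generatedMappingUs = 8;

// A "very fast" 1 ms query, expressed in microseconds.
const queryUs = 1000;

// Viewed in isolation, manual mapping looks great...
const relativeSpeedup = generatedMappingUs / manualMappingUs;

// ...but as a share of the whole query, the saving is tiny.
const savingPercent = ((generatedMappingUs - manualMappingUs) * 100) / queryUs;

console.log(relativeSpeedup); // 2
console.log(savingPercent);   // 0.4
```

Same numbers, two very different stories: a 2x speedup on the mapping step is a 0.4% speedup on the query, and far less than that on the request as a whole.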

You’re spending at least 30 seconds (that’s 30 000 000 microseconds) writing boring, repetitive and error-prone code for a 0.4% performance gain (and that’s when your query is already fast). Sure, you might say that “every microsecond counts”, but if you need that kind of performance, I’m afraid you’ve chosen the wrong platform to start with. .Net is fast, but in a scenario where every microsecond matters, carefully written C code is much better.

Thing is, if you have slow queries, then manual object mapping should be last on your optimization list. Ensure the query is good, the server is well configured, the network has low latency etc. The 0.4% impact the code-generated mapping has is simply insignificant. You’re wasting time (money) and maintainability on an optimization that doesn’t really matter in the big picture. And you should always properly benchmark things to assess whether it’s worth spending time on ‘optimizations’.

In 2016, the only excuse you have to use Ado.Net directly is that you’re building your own (micro)ORM. For the other use cases, you’re just wasting time and money. And every time you’re tempted to map to POCOs manually, ask yourself why StackOverflow, which is obsessed with performance, prefers to use an object mapper. After all, the main and unique feature of dapper.net is speed. Why didn’t they go with manual mapping, when even in their (old and kinda flawed and obsolete) benchmarks manual is still faster?

C# (.Net) Password Hashing The Easy Way

Written on 01 March 2016

We know we shouldn’t store passwords at all, not even encrypted, never mind in plain text. And by encrypted I mean something like AES; Base64 encoding is not encryption. In a nutshell, we should store only salted hashes, so that a password can’t be reversed and an attacker can’t find a common pattern that leads to breaking the password. As any developer should know, we want a hash which is slow to generate, so fast algorithms like MD5, Murmur or the SHA family shouldn’t be used directly. For best results, something like bcrypt or Pbkdf2 should be used with a high number of iterations.
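The class below is C#, but the recipe itself is portable. Purely as an illustration, here’s a minimal PBKDF2 sketch using Node’s built-in crypto module; the salt size, iteration count, key length and function names are my own example values, not the CavemanTools API:

```typescript
import { pbkdf2Sync, randomBytes, timingSafeEqual } from "crypto";

// Example parameters - tune the iteration count until hashing
// takes a noticeable fraction of a second on your hardware.
const SALT_SIZE = 32;
const ITERATIONS = 64000;
const KEY_SIZE = 32;

// Hash a password with a fresh random salt; store both salt and hash.
function hashPassword(password: string) {
    const salt = randomBytes(SALT_SIZE);
    const hash = pbkdf2Sync(password, salt, ITERATIONS, KEY_SIZE, "sha256");
    return { salt, hash };
}

// Recompute the hash with the stored salt and compare in constant time.
function isValidPassword(password: string, salt: Uint8Array, hash: Uint8Array): boolean {
    const candidate = pbkdf2Sync(password, salt, ITERATIONS, KEY_SIZE, "sha256");
    return timingSafeEqual(candidate, hash);
}
```

The constant-time comparison matters: a naive byte-by-byte equality check can leak, through timing, how many leading bytes of a guess are correct.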

In order to make things easy, I’ve come up with a value object that does the ‘hard’ work and is versatile enough to allow you to customize things. It’s a class called PasswordHash, available as part of my CavemanTools library, or you can copy/paste the code directly from here.

The first thing is to decide on the hash length, and I suggest setting a value > 32. This value should vary from one app to another and should be secret. I also recommend setting a (secret) salt size and a (secret) iteration count.

PasswordHash.KeySize = 40; //default is 32
PasswordHash.DefaultSaltSize = 32; //default is 32 
PasswordHash.DefaultIterations = 70000; //default is 64000

Be aware that the number of iterations affects the hash generation time. The default is a good value (it takes around 200 ms on my PC) but you might want to increase it for sensitive apps.

Let’s use it

var pwd = new PasswordHash("some pwd");

//store as bytes

//store as string

//restore from string
var pwd = PasswordHash.FromHash(hash, PasswordHash.DefaultSaltSize, PasswordHash.DefaultIterations);

//restore from bytes
 var pwd = new PasswordHash(hash, PasswordHash.DefaultSaltSize, PasswordHash.DefaultIterations);
 //check password
 if (pwd.IsValidPassword("other pwd")) { }

Customizing how the final hash is generated

Regardless of how you keep it in the database, it’s important to be aware that the final hash is a combination of the salt and the password hash. By default, the salt is stored first, then the password hash. That’s why we want to change the password hash length and/or the salt size: so that an attacker has a hard time guessing where the salt ends and the hash begins. If an attacker knows either, they already have 2 pieces of the puzzle.

Another way is to change how the salt and hash are stored.

PasswordHash.PackBytes = (data) => /* return a byte[] containing the salt/hash stored in a creative manner*/;
PasswordHash.UnpackBytes = (packed,saltSize) =>  /* 'recover' the salt and password hash */;

Basically, we have these options to ensure an attacker won’t have an easy job:

  • Variable/Secret key (password hash) length
  • and/or Variable/Secret iterations count
  • and/or Variable/Secret salt size
  • and/or Your own ‘pack/unpack’ algorithm

This way, even if an attacker knows you’re using PasswordHash they still have to guess the above.

Submit on Enter With Aurelia

Written on 25 January 2016

We have a form, or at least an input field, and we want to submit or execute some behaviour when the user presses Enter. In this tutorial I’m going to show you how to build an Aurelia custom attribute that can be used wherever you want this feature.

Our goal is to use it like this

<input type="text" on-enter.call="submit()">

Let’s create the attribute

import {customAttribute, autoinject} from "aurelia-framework";

@customAttribute("on-enter")
@autoinject
export class OnEnter {
    private element: Element;
    private action: Function;
    private onEnter: (ev: KeyboardEvent) => void;

    constructor(element: Element) {
        this.element = element;
        this.onEnter = (ev: KeyboardEvent) => {
            //Enter keyCode is 13
            if (ev.keyCode !== 13) return;
            this.action();
        };
    }

    attached() {
        this.element.addEventListener("keyup", this.onEnter);
    }

    valueChanged(func) {
        this.action = func;
    }

    detached() {
        this.element.removeEventListener("keyup", this.onEnter);
    }
}

Yes, that’s all, but now let me explain it. The autoinject decorator tells Aurelia to use the metadata generated by TypeScript to determine the dependency type (in this case a DOM Element). This is normal usage in Aurelia; we do this everywhere we need to inject dependencies (when using TypeScript). The customAttribute decorator is self-explanatory.

In the constructor we need the html element where the attribute is used, and that’s what gets injected here. Next, we set up the onEnter function. This part is quite important, so pay attention.

We need that function as an event handler for a keyup event. You see that we have the attached() and detached() methods, which are part of the Aurelia custom element lifecycle, i.e. they are invoked automatically when the view is attached/detached from the DOM. In this case we don’t have a view, but the methods are still invoked by the framework (I suppose when the parent element is attached/detached), allowing us to set up or remove the event handler. That’s why we keep the handler in the onEnter field instead of creating an inline function each time: removeEventListener needs the same function reference that was passed to addEventListener.

Now, you may wonder why onEnter isn’t defined as a regular function or method. It’s because of javascript and the this affair. In a plain function, this would be the calling context, i.e. the window. But we need this to point at our attribute’s view-model, and using lambdas is the easiest way to do it (TypeScript takes care of the details).

The valueChanged method is automatically invoked by Aurelia when the value of the attribute changes, and in this case we get the function that should be invoked on Enter. Note how the binding syntax in html is not .bind but .call. This tells Aurelia to create and pass a delegate (which will wrap the specified function) as the value. So, when we invoke this.action() we’re actually invoking the delegate, which in turn invokes the specified function.

And that’s pretty much it. Once you’re aware of the this gotcha and the .call binding, you can create all sorts of ‘event handler as an attribute’ behaviours. For example, it’s trivial to have an on-escape attribute (hint: replace 13 with 27).
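The hint above generalizes into a tiny handler factory. An illustrative sketch (keyFilter is my own name, not an Aurelia API); the same handler shape works for on-enter, on-escape or any other key:

```typescript
// Builds a keyup handler that only fires the callback for one specific keyCode.
function keyFilter(keyCode: number, action: () => void) {
    return (ev: { keyCode: number }) => {
        if (ev.keyCode !== keyCode) return;
        action();
    };
}

// Enter is 13, Escape is 27.
let submitted = false;
const onEnter = keyFilter(13, () => { submitted = true; });

onEnter({ keyCode: 27 }); // ignored
onEnter({ keyCode: 13 }); // fires
console.log(submitted);   // true
```

Inside the custom attribute, you’d assign the result of keyFilter to the onEnter field in the constructor; everything else (attached/detached/valueChanged) stays the same.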

Custom attributes still need to be referenced in a view, so you either need a <require from="myAttribute"></require> or you set the attribute up as a global resource or a feature. You can read more about those in the Aurelia docs.