Fixing Your Scrum

Practical Solutions to Common Scrum Problems – by Ryan Ripley and Todd Miller

This is a book intended for the Scrum Master, but in my humble opinion it should be read by everyone involved in anything Scrum related, because chances are quite high that if you think you are doing Scrum, you probably are not.

This book gives the authors’ best shots at fixing common problems that typically occur in Scrum teams and their organizations.

In general I think it’s a good book which dutifully works through all parts of Scrum, addressing particular problems which the authors have discovered through their years as Scrum Masters. One caveat, though, is that the book was released – and therefore, of course, written – before the current version of the Scrum Guide. The astute reader will notice that some of the fixes to the problems might not be what the Scrum Guide itself dictates. Those solutions are therefore – by definition – not Scrum.

One chapter where this shines through is in the section “Trust is Missing” (Chapter 2). It is somewhat funny – and rather weird – that trust is not a Scrum Value. You could argue, of course, that team members naturally must trust each other for team work to actually work. But you could argue that about the other Scrum Values, as well. Not that I think that those five values are a bad choice, definitely not. But trust should have been there, too.

Chapter 3 goes in depth with the Scrum Values and – I think – does a markedly better job of describing them than the Scrum Guide does. Let’s take Focus, which the Scrum Guide describes like so:

Their [the scrum team] primary focus is on the work of the Sprint to make the best possible progress toward these goals.

Fixing Your Scrum describes it like this:

Focus allows us to do our very best. Valuing focus means that we give people the time they need to think about their work. After all, creativity is hard enough without being constantly interrupted. Allowing development team members to focus just on one product, the current sprint, and the current sprint goal gives them the best chance of succeeding. Encourage the product owner to focus on the future value of the product while you, the Scrum master, focus on upholding Scrum.

I really wish the Scrum Guide were this detailed about the Scrum Values, because this actually makes sense. But the Scrum Guide is the one source of Scrum truth, so you could probably not use the definition from Fixing Your Scrum and still claim to be doing Scrum.

On it goes in chapter 4 about the Product Owner, where there definitely are some sensible takeaways. It does state that being a Product Owner is a full-time job. And it also states that:

A development team that’s distracted by PO-type work is an impediment that you need to address.

Chapter 5 – The Product Backlog – has a lot of gold nuggets, but one I really like is this:

Bottom line: The product backlog is the single source of truth for every kind of work in Scrum. There should only be one product owner and one product backlog on a product-development effort, regardless of the type of work or number of teams.

Chapter 6 – The Development Team – is about all the problems that can arise on a team that is not cross-functional, that is, a team that does not collectively have the skills needed for creating finished work:

in Scrum, a development team needs to have all the skills necessary to create a done increment every sprint

There is so much more in this chapter, make sure you read it. Lots of common sense, even if you are on a team that is not doing Scrum. This is actually true for most parts of the book.

Chapter 7 is about the Scrum Master role. One interesting nitpick in this chapter is the discussion about whether a Scrum Master needs to have technical skills:

Being a Scrum master requires so many different skills that technical proficiency belongs near the bottom of the list of things to look for when evaluating candidates.

I personally think it’s a hard pill to swallow, but I get the intent behind it. Being a Scrum Master is about team dynamics, organizational issues and a lot of the softer, even softest, skills. And as with the Product Owner, it is a full-time job:

Organizations that use a part-time Scrum master likely won’t realize the full value and potential of a high-performing Scrum team.

Then the book moves on to chapter 8 – about managing management, which takes a whole new set of skills. The whole chapter is great and I had a hard time picking one essential quote, but ended up with this one:

Staying curious (by asking questions instead of passing judgment) is how you fulfill your role as a servant leader.

Servant leadership is nothing new, of course, and most definitely not a Scrum invention. Still, having anything called a leader in a team without hierarchies (according to the Scrum Guide) is an oxymoron.

Chapter 9 is about sprints, and my personal guess (and it is nothing more than a guess) is that this is one of the hardest parts of Scrum for many teams. Scrum insists that you must have a production-ready increment each sprint, but some tasks/jobs/PBIs/stuff are very difficult to squash into a sprint, especially when a lot of setup is needed or when an architectural rewrite is needed. But Scrum insists, so you have to find a way.

Unsurprisingly the chapter really doesn’t offer much help on this. It is, after all, a book for Scrum Masters, who aren’t technical people and therefore cannot really help the team. Only demand that they deliver.

Chapter 10 is about Sprint Planning. An activity that can be both painful and exhilarating, although probably not at the same time. A useful quote here is:

A painful sprint planning event is a sign of an unrefined product backlog.

There is also serious and thorough talk about the Sprint Goal, which in Scrum is an essential part of a Sprint.

I will, though, offer this valuable quote about Sprint Planning:

The purpose of sprint planning isn’t to fill the development team’s “plate” so that the plate can be “cleared” during the sprint. Rather, sprint planning is meant to create a starting plan for the sprint which includes a sprint backlog, forecast, and a sprint goal. The plan that is created in sprint planning will change. The sprint backlog emerges: When complex work is started, additional work is found. The point at which a team knows finitely how long it will take to complete a PBI is when the PBI has been completed.

It is, of course, important to note that the Sprint Goal is the essential part of a sprint. The team may have additional PBIs on the Sprint Backlog, but those are secondary; the Sprint Goal comes first.

Chapter 11 is about the Sprint Backlog. The core of the sprint machinery. Where all the work of a sprint is captured. Ideally, that is. The chapter is clearly written in pre-covid times, but that is what it is. There are some really good pointers in this chapter, and this is not the only one:

In Scrum, the development team commits to achieving the sprint goal. But given the complexity of the work that dev teams face, it’s impossible for them to commit to completing everything in the sprint backlog.

This is really important! The Sprint Goal surfaces here again as the most important thing in a sprint. All other PBIs are bonuses.

There is also a very good (and funny) note on estimation:

A Forecast: An estimate of how much total work exists in this sprint and the amount of work remaining. Scrum is agnostic on how you estimate work. You can use PBI counts, task counts, Fibonacci numbers, zebra stripes, or whatever.

Chapter 12 discusses the Daily Scrum. How it is usually a status meeting instead of an actual mini-planning for the day. The chapter has tips and tricks for how to draw out the quieter team members and, correspondingly, how to quiet down the more outspoken ones.

In chapter 13 the Definition of Done is put on the table. According to this chapter it is the development team that owns the definition, but the Scrum Guide states that it is the entire team. This chapter, too, suffers from the assumption that the Scrum Master lacks technical skills.

Chapter 14 is about the Sprint Review, which in my opinion is inappropriately named, since the idea is that it is NOT a review, but rather a discussion between the team and the stakeholders about the project and how it should progress forward. The Sprint Review must include the PO, the developers and the stakeholders, and it must be a collaboration.

The chapter does have a very good suggestion for an agenda which will turn the review into a collaborative session. Read it and weep!

It mentions the concept of “mob programming”, which I had never heard of before.

The final chapter – 15 – is about the Sprint Retrospective. Again the Scrum Master is on home turf: team dynamics. The Scrum Master is held accountable for positive outcomes:

When used correctly, sprint retrospectives are a great tool to help keep your team moving forward in a positive way. It’s vital that they happen and are fruitful in outcomes that create solid improvements for the team. You, as a Scrum master, are accountable for making sure that happens.

I think this is – in general – a very good book. There is much more to it than mentioned above. And probably most teams – and most team members – can benefit from it. You cannot do it all at once, of course not. But focus on one Scrum Event/Role/Artifact at a time, read the corresponding chapter and make adjustments.

Mr. Legacy

At my current (at the time of writing) workplace I am often referred to as Mr. Legacy. This is a title that I’m honoured to have earned. But how did that come about?

I was hired to do some firefighting. My boss knew me from a previous job, and it was initially meant to be a short assignment, but I liked it there and they seemed to like me, and one thing led to another, so I have stayed on board for – so far – more than 7 years.

For some reason I have a knack for messing around (in the good way) with old crap – that’s just saying it as it is. I’m very good at surgical incisions, “spot-improving” the code and adding features without otherwise ruining the system. And I’m not just good at it. I actually like it!

But there was a time when even I was bested. I’m not proud of it, for several reasons, all of which will become clear.

It all started in 2008 when I got a call from my job broker: Would I like a small, cosy 120-hour project, implementing a few features in an existing application written in Borland C++? Of course I would, since I hadn’t been working for a couple of months.

The application was an automatic duty-roster planner, managing duty rosters for employees who work different shifts, taking into account various union rules, employees’ wishes, overtime, etc. The built-in auto-planning feature worked surprisingly well.

At the time they had made some promises to a customer about the application. It should be able to not only plan the shifts but also record which hours the employees were actually there.

My job in that respect was to implement an export of those hours to a third party accounting system. A simple job, you would think, but I was in for a surprise.

My first action was to take a look at the database. The data obviously had to be there somewhere to make an export happen. There were some tables that had the right smell, but they did not fit the description exactly. After a few hours I had to give in – I could not figure out how this was supposed to be done. And at that point I was pretty certain that it would take more than the 120 hours to implement, because the application did not even seem to have the features for recording the hours.

I had a meeting with the boss and told him my view of the lay of the land. He was somehow both shocked and not surprised at the same time.

It was not thought through at that moment, but I suggested we make some kind of partnership deal, because I really did think there was something valuable in that product. Great ideas, great functionality and existing customers. We came to an agreement, the details of which are not relevant here.

Then the ordeal started. Another requirement from the customer was to enable multi-user support. Had I known about that requirement, I would definitely not have entered into the agreement.

The application used a proprietary database – EasyTable (nothing to do with the name of the application) – which is really quite a decent database. And it sure was easy to use.

But in the application there was no sign of a data access layer. SQL was littered all over it. Concatenated SQL. No parameter bindings. At all. Table and field names were repeated all over, too.

To enable multi-user support we had to use another database, for which we needed to become more or less database agnostic. In other words: a DAL was needed. A database-independent one. A task that in itself is not that difficult.

I wrote some basic Entity and EntityList classes and quickly figured out how to implement some C++ template classes to help me out. Boring job, but it had to be done.

The really hard part was to change the existing embedded SQL statements and field names, scattered around as magic strings, to use the Entities and EntityLists.

The application used a DataModule form on which TQuery components (Delphi had the same, if I remember correctly) were placed, more or less resembling the actual tables in the database. Mostly less. These queries were generally abused all over the application to hold some random SQL and return the corresponding recordset, or to update various data in the database.

The main form in the application had about 10,000 lines of code behind it. A staggering number of lines, but had it been well written it would have been manageable. It was not. Global variables were as abundant as parentheses in LISP. And they were referenced from everywhere, even other forms.

The application was a nightmare; it was surprising that it worked as well as it did.

Eventually I had to throw in the towel. After I had sunk hundreds of hours into the application, I gave up. The application was cleaned up exceptionally well by then, but admittedly I ran out of money, since I wasn’t earning anything while working on it.

Lessons learned

First I’ll take a look at what went wrong for the application in the first place. What thoughts went into choosing C++ Builder for the task? At the time, in 2008, it would have been considered insane, but we have to look at it in the context of 2003, when they started the application. What were the options back then?

.NET 1.1 came out officially on 9 April 2003, and .NET really was not useful for much before then. I do not remember the state of the WinForms designer, but I guess it was just as good as the VB6 designer, which was horrible, but was used to create loads of applications.

The other option was Delphi, also from Borland. I guess the form designers were at about the same level, and I was pretty fond of the designers in Delphi 1 to 6.

The people making the choice were not very experienced in the business. They were just out of school and chose what they had used in class: Borland C++ Builder, which anyone who has used it will readily admit is not a good product. But still, I understand why they made the choice.

Whatever tool they had chosen, it would not have saved them from the flood of mistakes they continued to make. Again, it was probably understandable. The application started as a proof of concept, continued to grow and was in such demand that they never could get around to refactoring it.

But did I learn something from this quest? I have to say yes. It was hard earned, so having learned nothing would have been sad.

In hindsight I should, of course, have left the assignment after the first few hours, but what could have warned me back then?

Firstly, I probably should have spent a considerable number of hours trying to understand the innards of the application before entering any agreement about it. That I did not do. Today I would – in the same situation – simply state that the wanted functionality could not be implemented given the current state of the application.

I should also have talked to some customers – or at least the boss – about expected and known feature requests, so I could have judged whether they were feasible.

F#: Making illegal states unrepresentable

Following Property-based testing with F# by Mark Seemann, it annoyed me a bit that the generator did not involve the type itself in restricting what could be generated, and I tried to figure out a nice way to do this. Then one day I needed a string which would always contain a value – a NonEmptyString – because I like to make illegal states unrepresentable – which is no new idea. Scott Wlaschin is brilliant. Period.

I eventually came up with this little piece of code:

open System

type NonEmptyString = private NonEmptyString of string
    with static member create value =
            match NonEmptyString.validate value with
            | true ->
                NonEmptyString value
                |> Some
            | false -> None

         static member validate value =
            String.IsNullOrEmpty value
            |> not

         static member primitiveValue value =
            let (NonEmptyString primitiveValue) = value
            primitiveValue

The code represents a string which cannot be null or empty, ever. You simply cannot construct it with an empty value. This also means, of course, that you always have to handle the result of create, e.g. with a match:

let s =
    match NonEmptyString.create "somestring" with
    | None -> ... // you have to think of some error handling here!
    | Some value -> value

Now, having the validate function makes it very easy to create a generator for FsCheck:

type Generators =
    static member NonEmptyString() =
        { new Arbitrary<NonEmptyString>() with
            override x.Generator =
                Arb.Default.String() // start from the default string arbitrary
                |> Arb.filter (fun s -> NonEmptyString.validate s)
                |> Arb.toGen
                |> Gen.map (fun s -> (NonEmptyString.create s).Value) }

If you haven’t used FsCheck, I strongly suggest that you give it a go. There is quite a steep learning curve, but it is very much worth it.
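To tie it together, registering the generator and running a property could look something like the sketch below. It assumes the NonEmptyString and Generators definitions above are in scope; Arb.register and Check.Quick are standard FsCheck API, while the property itself is a made-up example.

```fsharp
open FsCheck

// Register the custom generators so FsCheck uses them for NonEmptyString
Arb.register<Generators>() |> ignore

// A made-up property: every generated NonEmptyString passes its own validation
let ``a NonEmptyString is never empty`` (s: NonEmptyString) =
    NonEmptyString.primitiveValue s
    |> NonEmptyString.validate

Check.Quick ``a NonEmptyString is never empty``
```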

Error handling – part 4 – now on to the REST

After the slippery SOAP adventure it is time to look at a more forward-looking way of exposing your API to the world: REST. Again the discussion has its roots in the .NET world and uses Web API as the underlying engine for explaining the principles. And again the principles are equally applicable to other languages, frameworks and platforms.

Returning an error from a REST service is basically as simple as returning an HTTP status code. How you do that is naturally implementation dependent, but we still have to figure out which status code to return when. Taking the first article as a starting point, we could produce a mapping to status codes like the following table:

Exception                       HTTP Status Code           Description
DuplicateKeyException           409 Conflict               Duplicate key
DeadLockedException             409 Conflict               Deadlock
DataUpdatedException            409 Conflict               Concurrency conflict
TimeoutException                504 Gateway Timeout        Timeout
AuthorizationException          403 Forbidden              Not authorized
InvalidDataException            400 Bad Request            Invalid data
TruncatedDataException          400 Bad Request            Truncated data
ProviderInaccessibleException   502 Bad Gateway            Provider inaccessible
System.Exception                500 Internal Server Error  Internal server error

This is a mapping that might have been more natural if it had been created the other way around, starting with the HTTP status codes and then inventing exceptions to match. And it is a mapping that might not always fit your scenario. But remember that it reflects a general view of the status codes.
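As a sketch, the table can be expressed as a simple type-to-status-code lookup. The stub exception classes below stand in for the hierarchy from the first article, and StatusCodeMap is a hypothetical name; the real mapper in the project does more than this (logging, building the response body).

```csharp
using System;
using System.Collections.Generic;
using System.Net;

// Stubs standing in for the exception hierarchy from part 1 of this series.
public class DuplicateKeyException : Exception { }
public class AuthorizationException : Exception { }
public class InvalidDataException : Exception { }

// A minimal sketch of the table above as code: map the exception's type
// to an HTTP status code, falling back to 500 for plain System.Exception.
public static class StatusCodeMap
{
    private static readonly Dictionary<Type, HttpStatusCode> Map =
        new Dictionary<Type, HttpStatusCode>
        {
            { typeof(DuplicateKeyException),  HttpStatusCode.Conflict },       // 409
            { typeof(TimeoutException),       HttpStatusCode.GatewayTimeout }, // 504
            { typeof(AuthorizationException), HttpStatusCode.Forbidden },      // 403
            { typeof(InvalidDataException),   HttpStatusCode.BadRequest },     // 400
            // ... the remaining rows of the table follow the same pattern
        };

    public static HttpStatusCode ToStatusCode(Exception exception)
    {
        HttpStatusCode code;
        return Map.TryGetValue(exception.GetType(), out code)
            ? code
            : HttpStatusCode.InternalServerError; // the System.Exception fallback
    }
}
```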

Besides returning an HTTP status code to the client it can be helpful to return some more information, which can help the client figure out what went wrong and maybe retry with a corrected request. In my most recent project we returned an error structure like this:

/// <summary>
/// The response structure returned in the body from the service when an error occurs.
/// </summary>
public class ErrorResponse
{
    /// <summary>
    /// The http response code.
    /// </summary>
    public int code { get; set; }

    /// <summary>
    /// Will contain the value "error".
    /// </summary>
    public string status { get; set; }

    /// <summary>
    /// Gets or sets further information about the error.
    /// </summary>
    public object data { get; set; }

    /// <summary>
    /// Gets or sets the message id of the log message which was logged in connection with the error.
    /// </summary>
    public Guid? messageId { get; set; }

    /// <summary>
    /// Gets or sets the correlation id of the log message which was logged in connection with the error.
    /// </summary>
    public Guid? correlationId { get; set; }

    /// <summary>
    /// Gets or sets a short description of the error.
    /// </summary>
    public string error { get; set; }
}

This would, in our project, be converted to JSON, but it could as well be converted to XML. The main point of the structure is that the code, status and data properties are available whether the response was a success or a failure.

We also had an ExtendedErrorResponse, which – in development and test – could be requested by sending along a custom header. This response would include detailed information about any exceptions, which would ease debugging.

The project used Web API, and we had to figure out a way to generalize the above handling of exceptions and map them to responses. This turned out to be fairly easy in Web API: create a class derived from ApiControllerActionInvoker and implement the InvokeActionAsync method in a manner similar to this:

/// <summary>
/// Asynchronously invokes the specified action by using the specified controller context.
/// </summary>
/// <param name="actionContext">The controller context.</param>
/// <param name="cancellationToken">The cancellation token.</param>
/// <returns>A task representing the invoked action.</returns>
public override async Task<HttpResponseMessage> InvokeActionAsync(HttpActionContext actionContext, CancellationToken cancellationToken)
{
    Contract.Requires(actionContext != null);
    Contract.Ensures(Contract.Result<Task<HttpResponseMessage>>() != null);

    StatisticsContext statisticsContext = null;
    OperationContext operationContext = null;
    var request = actionContext.Request;

    try
    {
        statisticsContext = StatisticsContextSession.GetStatisticsContext(request);
        StatisticsStartRequest(request, statisticsContext);

        var commonRequestParameters = GetCommonRequestParameters(actionContext);

        operationContext = SetControllerOperationContext(commonRequestParameters, request);

        var filterTrees = GetServicelagController(request).CreateODataFilterTrees().ToList();
        var allowedTreeNames = GetAllowedODataFilterTreeNames(actionContext, filterTrees);
        ODataFilterVerifier.ParseAndCheckODataFilter(commonRequestParameters, filterTrees, allowedTreeNames);

        ParseAndCheckAuthorization(request, commonRequestParameters, operationContext, applicationIdChecker);

        // Here the task that executes the actual ApiController method is created
        Task<HttpResponseMessage> actionTask =
            InvokeControllerActionAsync(operationContext, actionContext, statisticsContext, cancellationToken);

        // and executed
        var response = await actionTask;

        StatisticsEndRequest(statisticsContext, request, response);

        return response;
    }
    catch (Exception exception)
    {
        var response = ErrorResponseMapper.LogErrorAndCreateErrorResponse(
            exception,
            configuration.AllowExtendedErrorInformation == AllowExtendedErrorInformation.Yes);

        StatisticsEndRequest(statisticsContext, request, response);
        return response;
    }
}

To register this class in Web API you would – e.g. in WebApiConfig – do something like this:

config.Services.Replace(
    typeof(IHttpActionInvoker),
    new MyActionInvoker(...parameters...));

I have left in quite a few details not necessary for understanding the actual error handling, just to suggest that this method is a nice place to do a lot of pre- and post-request handling. We needed, for instance, statistics on which services were called, when, and by whom. Call timings were also quite easy to do like this.

The error handling is, of course, done in catch (Exception exception), which here delegates the responsibility to an ErrorResponseMapper, which basically takes care of mapping the exception to the HTTP Status Code as described in the table above.

To allow for custom, non-generalized status codes we created an HttpResponseMessageException (derived from UnrecoverableException), which allows the individual API controllers to throw a custom exception requesting that a specific HTTP status code be returned:

/// <summary>
/// An exception thrown when a Controller wants to respond with a specific <see cref="HttpStatusCode"/>.
/// </summary>
public class HttpResponseMessageException : UnrecoverableException
{
    /// <summary>
    /// Initializes a new instance of the <see cref="HttpResponseMessageException"/> class.
    /// </summary>
    /// <param name="code">The <see cref="HttpStatusCode"/> to return to the client.</param>
    /// <param name="message">A message with details.</param>
    public HttpResponseMessageException(HttpStatusCode code, string message)
        : base(message)
    {
        Code = code;
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="HttpResponseMessageException"/> class.
    /// </summary>
    /// <param name="code">The <see cref="HttpStatusCode"/> to return to the client.</param>
    /// <param name="innerException">The inner exception, which caused this exception to be created.</param>
    public HttpResponseMessageException(HttpStatusCode code, Exception innerException)
        : base(string.Empty, innerException)
    {
        Code = code;
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="HttpResponseMessageException"/> class.
    /// </summary>
    /// <param name="code">The <see cref="HttpStatusCode"/> to return to the client.</param>
    /// <param name="message">A message with details.</param>
    /// <param name="innerException">The inner exception, which caused this exception to be created.</param>
    public HttpResponseMessageException(HttpStatusCode code, string message, Exception innerException)
        : base(message, innerException)
    {
        Code = code;
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="HttpResponseMessageException"/> class.
    /// </summary>
    /// <param name="info">The <see cref="T:System.Runtime.Serialization.SerializationInfo" /> that holds the serialized object data about the exception being thrown.</param>
    /// <param name="context">The <see cref="T:System.Runtime.Serialization.StreamingContext" /> that contains contextual information about the source or destination.</param>
    protected HttpResponseMessageException(SerializationInfo info, StreamingContext context)
        : base(info, context)
    {
        Contract.Requires(info != null);

        var code = info.GetInt32(CodeKey);
        Contract.Assume(Enum.IsDefined(typeof(HttpStatusCode), code));
        Code = (HttpStatusCode)code;
    }

    /// <summary>
    /// When overridden in a derived class, sets the <see cref="T:System.Runtime.Serialization.SerializationInfo" /> with information about the exception.
    /// </summary>
    /// <param name="info">The <see cref="T:System.Runtime.Serialization.SerializationInfo" /> that holds the serialized object data about the exception being thrown.</param>
    /// <param name="context">The <see cref="T:System.Runtime.Serialization.StreamingContext" /> that contains contextual information about the source or destination.</param>
    /// <PermissionSet>
    ///   <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="*AllFiles*" PathDiscovery="*AllFiles*" />
    ///   <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="SerializationFormatter" />
    ///   </PermissionSet>
    public override void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        base.GetObjectData(info, context);
        info.AddValue(CodeKey, Code);
    }

    /// <summary>
    /// Gets the <see cref="HttpStatusCode"/> intended for the client.
    /// </summary>
    public HttpStatusCode Code { get; private set; }

    private const string CodeKey = "Code";
}

It could be used something like this:

throw new HttpResponseMessageException(
    HttpStatusCode.BadRequest,
    "Unsupported $Filter query: " + commonRequestParameters.Filter);

The ErrorResponseMapper then takes care of mapping this to an ErrorResponse.


I have in this series tried to shed some light on how to build a generic error-handling (micro-)framework into your application. If you do this you will always have some default error handling in place, which can then be individualized when necessary along the way.

I hope you see that the process is really not that complicated; you just have to use the features that the languages and frameworks give you.

Having the default error handling in place, allows you to REST peacefully at night 😉

Learning F#

I have started on a new quest. I want to learn F#. And I want to be as good in F# as I am in C#. Well, you might say, how hard can it be? It’s just another language. That was my initial thought, sort of. But there is one huge difference compared to C#: F# is (primarily) a functional programming language, whereas C# is an object oriented programming language. And to me that is a considerable obstacle. Let me be honest here: the one course in my computer science studies that concentrated on a functional language (Miranda) took me three tries to pass. Not something I’m particularly proud of, but it sure is an indicator of how hard it was for me – I am – honestly – not that stupid 😉

There are several issues with learning a new language.

1. The libraries and frameworks available

Since F# runs on the .NET (and Mono) platforms this is an issue that can mostly be overlooked. F# can call everything .NETish, and F# is a hybrid language, so various object-oriented frameworks (in C#) can be used, too.

2. The eco-system, IDE, tools etc.

Some people think that an IDE is not a good idea. I’ll leave them to struggle with that. I like it when my computer is helpful, especially when learning something new. F# is supported in Visual Studio, but the support is light years behind that of C#. There is not, for instance, something as simple as rename-refactoring support.

The tools available are vastly different as well. I’m used to ReSharper and use its test runner when running unit tests. Since ReSharper doesn’t have the faintest idea what F# is, this limits the usefulness of the test runner quite a bit. It can be used, though.

The F# compiler is far more intelligent than the C# compiler. It almost does, out of the box, what Roslyn is supposed to give to C#. The editor in Visual Studio uses the F# compiler to continuously compile the code and underlines (with red squiggly lines) problematic code, without you hitting compile at all. Something like what ReSharper does when “Whole solution analysis” is enabled. Quite neat.

3. The paradigm and syntax

This is where I struggle the most. Functional programming is a vastly different paradigm than object oriented. And even though Microsoft has sneaked functional constructs into C# over the years, I still struggle with going all the way.

And in addition to having to think in a wholly different way about how to solve a problem, I also have to get used to a wholly new syntax – both reading it and writing it. Here I am truly happy about using a fairly smart IDE, because even with it, I can sometimes ponder for minutes over some error before getting it.

4. Structuring the code

After using mantras like “one class, one responsibility“, one class per file, SOLID, Design Patterns, libraries, layers and loads of other principles to keep me out of harm’s way, I’m now looking at a totally blank slate. Since functional programming still does not have the benefit of being mainstream (getting there, but not yet), the collective wisdom of the very good functional programmers has not yet been distilled into sound advice, so everyone is more or less on their own. This fact is intensified by something else I have noticed when meeting other functional programmers:

5. The milieu

The functional milieu seems to be – in some respects – a very mathematically oriented, elitist milieu, which – in its approach to programming – is very far from the many sound principles which seem to proliferate in the object-oriented milieu. And this is strange. A lot of the things that good programmers do in object-oriented languages, like using meaningful names and structuring the code in meaningful ways, functional programmers seem to regard with contempt. They seem to think along the lines of: Why write let CalculateTax income = ... when you can simply write let ct i = ....

This is, of course, overstated a bit, but the write once, read many mantra still has a long way to go in the functional milieu, and the general elitist attitude will work against adopting functional languages, since the average programmer will simply look at it and say: It’s too hard. And to make a programming language mainstream it has to be adopted by the average programmer.

How am I going about it?

I have been wanting to get into F# for a few years now. It started to get serious when, in 2013, I attended the Skillsmatter Fast Track to F# with Tomas Petricek and Phil Trelford. I then went to New York to attend the F# Tutorials, which I also plan to attend in 2014.

Then I bought the book (along with others): Functional Programming with F#, which has exercises in it and seems to be a good book to teach me the building blocks of F#.

In Copenhagen a functional meetup group has started where we, amongst other things, meet monthly to work through the book in groups. In connection with that I am exposing myself by uploading my feeble attempts at solving the exercises to GitHub.

I’m also trying to work on a more or less real-world problem: URL routing and HTML templates, where I try to focus on creating a nice API for the user of the library and on structuring the code in a nice, consistent and readable manner. Definitely a work in progress!

So all in all I’m attacking the problem from various angles and I feel I’m getting there – slowly, but surely!

I really want to be a skilled, versatile FSharp Dressed Man :D!

Error handling – part 3 – as easy as sliding in SOAP

One way to expose your API to the world is via a SOAP interface. Yes, I know it’s old and out of fashion, but there are fantasillions of installations using SOAP, and for some bizarre reason new installations are popping up every day. So for good measure I will discuss how you keep the slick error handling introduced previously and generalize it in your SOAP API.

I will discuss this in the light of the Microsoft WCF beast, but the discussion is – I guess – equally applicable for other SOAP frameworks.

SOAP Faults

SOAP has an official way of signalling error conditions to its users – the SOAP Fault – and WCF has the FaultContract attribute for this. It is used on the methods of a ServiceContract to signal which types/classes can be returned with error information.

In my endeavours with SOAP I have chosen to return one of two FaultContracts: RecoverableFault and UnrecoverableFault. In some situations it might be useful to have a UserRecoverableFault, as well.
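To make this concrete, declaring these faults on a service contract could look something like the sketch below. The interface and operation names are made up for the example; the fault classes are the ones discussed in this post:

```csharp
using System.ServiceModel;

// A made-up service contract. The FaultContract attributes declare, per
// operation, which fault types a client can expect inside a SOAP Fault.
[ServiceContract]
public interface IAccountService
{
    [OperationContract]
    [FaultContract(typeof(RecoverableFault))]
    [FaultContract(typeof(UnrecoverableFault))]
    void CreateAccount(string username, string password);
}
```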

I want to show you a sample implementation of RecoverableFault here and discuss its pros and cons. Please note that it has been a while since I have used this, so it may not 100% reflect the content of the previous articles, and it is definitely not as matured:

/// <summary>
/// Describes a recoverable fault, which is a fault that can be recovered from by correcting data and/or retrying.
/// </summary>
public class RecoverableFault
{
    /// <summary>
    /// Initializes a new instance of the <see cref="RecoverableFault"/> class.
    /// </summary>
    /// <param name="faultType">Type of the fault.</param>
    /// <param name="exceptionDetails">The exception details.</param>
    public RecoverableFault(RecoverableFaultType faultType, string exceptionDetails)
    {
        FaultType = faultType;
        ExceptionDetails = exceptionDetails;
    }

    /// <summary>
    /// Gets the type of the fault.
    /// </summary>
    /// <see cref="RecoverableFaultType"/>
    public RecoverableFaultType FaultType { get; private set; }

    /// <summary>
    /// Gets or sets a generic description of the id of the entity to which the fault refers, if available. Can be left empty.
    /// </summary>
    public string EntityId { get; set; }

    /// <summary>
    /// Gets or sets the name of the entity to which the fault refers, if available. Can be left empty.
    /// </summary>
    public string EntityName { get; set; }

    /// <summary>
    /// Gets or sets the name of the key in cases of the DuplicateData FaultType.
    /// </summary>
    public string KeyName { get; set; }

    /// <summary>
    /// Gets the exception details, if any were available. Can be used for debugging and logging purposes.
    /// </summary>
    public string ExceptionDetails { get; private set; }

    /// <summary>
    /// Returns a <see cref="System.String"/> that represents this instance.
    /// </summary>
    /// <returns>
    /// A <see cref="System.String"/> that represents this instance.
    /// </returns>
    public override string ToString()
    {
        return "RecoverableFaultType.{1}{0}EntityId:{2}{0}EntityName:{3}{0}KeyName:{4}{0}Details:{5}".FormatInvariant(
            Environment.NewLine, FaultType, EntityId, EntityName, KeyName, ExceptionDetails);
    }
}

Note the ExceptionDetails property. This is something I would only leave in if the SOAP API is for internal use only. Having exception details may risk exposing implementation details, such as usernames, connection strings, etc. to the client, which is something that we really don’t want 😉

The EntityId, EntityName and KeyName properties surely suggest that this Fault has been used primarily for database-related problems.

The RecoverableFaultType is a simple enum, which underlines the above assumption:

/// <summary>
/// A categorization of Recoverable Faults.
/// </summary>
public enum RecoverableFaultType
{
    /// <summary>
    /// The data being updated/deleted was already deleted by another user.
    /// </summary>
    DataDeleted,

    /// <summary>
    /// The data being updated/deleted was already updated by another user.
    /// </summary>
    DataUpdated,

    /// <summary>
    /// The data being added already exists.
    /// </summary>
    DuplicateData,

    /// <summary>
    /// The database dead locked while applying the changes.
    /// </summary>
    Deadlocked,

    /// <summary>
    /// There was a time out while applying the changes.
    /// </summary>
    Timeout
}

This enum can be used for simple case-switching in the client when trying to determine a valid action for the error.
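A minimal sketch of such case-switching could look like this. Note that the enum is re-declared locally to keep the example self-contained, and the member names are assumptions inferred from the doc comments above:

```csharp
using System;

// Re-declared locally for the example; member names are assumed.
public enum RecoverableFaultType
{
    DataDeleted,
    DataUpdated,
    DuplicateData,
    Deadlocked,
    Timeout
}

public static class FaultHandling
{
    // Deadlocks and timeouts can be retried automatically;
    // the other fault types need the user to correct something first.
    public static bool CanRetryAutomatically(RecoverableFaultType faultType)
    {
        switch (faultType)
        {
            case RecoverableFaultType.Deadlocked:
            case RecoverableFaultType.Timeout:
                return true;
            default:
                return false;
        }
    }
}
```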

The UnrecoverableFault could be modeled along these lines:

/// <summary>
/// Description of an unrecoverable error.
/// </summary>
public class UnrecoverableFault
{
    /// <summary>
    /// Initializes a new instance of the <see cref="UnrecoverableFault"/> class.
    /// </summary>
    /// <param name="message">The message.</param>
    /// <param name="errorDetails">The error details.</param>
    /// <param name="parameters">The description of parameters to the method where it all went wrong.</param>
    /// <param name="logId">The log id.</param>
    public UnrecoverableFault(string message, string errorDetails, string parameters, Guid logId)
    {
        Message = message;
        ErrorDetails = errorDetails;
        Parameters = parameters;
        LogId = logId;
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="UnrecoverableFault"/> class.
    /// Use this constructor when you really do not want to provide the client with any detail information. When
    /// given the logId the client can instruct support to look for the given logId in the log.
    /// </summary>
    /// <param name="logId">The log id.</param>
    public UnrecoverableFault(Guid logId)
    {
        LogId = logId;
    }

    /// <summary>
    /// A friendly description of the error. May be empty.
    /// </summary>
    public string Message { get; private set; }

    /// <summary>
    /// Details about the error. Could be exception details.
    /// </summary>
    public string ErrorDetails { get; private set; }

    /// <summary>
    /// Gets a description of the Parameters to the method where the error happened. Typically obtained with
    /// MethodDescriptor.Describe().
    /// </summary>
    public string Parameters { get; private set; }

    /// <summary>
    /// An id for a potential error log.
    /// </summary>
    public Guid LogId { get; private set; }

    /// <summary>
    /// Returns a <see cref="System.String"/> that represents this instance.
    /// </summary>
    /// <returns>
    /// A <see cref="System.String"/> that represents this instance.
    /// </returns>
    public override string ToString()
    {
        return "LogId:{1}{0}Message:{2}{0}Details:{3}{0}Parameters:{4}".FormatInvariant(
            Environment.NewLine, LogId, Message, ErrorDetails, Parameters);
    }
}

The only really interesting thing about this is the LogId property, which in this specific system refers to a Log Entry in the log database. So this can be used for correlating errors. There are more effective ways to do this, but this was how it was done here.

Generalizing the SOAP service implementations

Having these rather generic Faults is all well and good, but we have to have an easy way of applying them. This was handled by two classes, ServiceMethodHandler and ServiceMethodHandlerImplementation:

public static class ServiceMethodHandler
{
    public static void Execute(Func<string> methodDescriptor, Action action)
    {
        Execute(
            methodDescriptor,
            () =>
            {
                action();
                return true;
            });
    }

    public static TReturn Execute<TReturn>(Func<string> methodDescriptor, Func<TReturn> action)
    {
        return new ServiceMethodHandlerImplementation<TReturn>(methodDescriptor, action).Execute();
    }
}

internal class ServiceMethodHandlerImplementation<TReturn>
{
    public ServiceMethodHandlerImplementation(Func<string> methodDescriptor, Func<TReturn> action)
    {
        this.methodDescriptor = methodDescriptor;
        this.action = action;
    }

    public TReturn Execute()
    {
        Log.ThreadCorrelationId = Guid.Empty;
        try
        {
            return action();
        }
        catch (RecoverableException exception)
        {
            var methodDescription = methodDescriptor();
            Log.Warning(exception, methodDescription);
            throw new FaultException<RecoverableFault>(ExceptionToFault.RecoverableFaultFromRecoverableException(exception));
        }
        catch (UnrecoverableException exception)
        {
            var methodDescription = methodDescriptor();
            Log.Error(exception, methodDescription);
            throw new FaultException<UnrecoverableFault>(
                new UnrecoverableFault(
                    "UnrecoverableException executing the action.",
                    exception.ToString(),
                    methodDescription,
                    Log.ThreadCorrelationId));
        }
        catch (Exception exception)
        {
            var methodDescription = methodDescriptor();
            Log.Error(
                exception,
                "This exception here suggests a program logic error.{0}{1}",
                Environment.NewLine + Environment.NewLine,
                methodDescription);

            throw new FaultException<UnrecoverableFault>(
                new UnrecoverableFault(
                    "This exception here suggests a program logic error.",
                    exception.ToString(),
                    methodDescription,
                    Log.ThreadCorrelationId));
        }
    }

    private readonly Func<string> methodDescriptor;
    private readonly Func<TReturn> action;
}

This is also fairly old code and therefore does not have any checks for the UserRecoverableException. The only really quirky thing about this class is the Func<string> methodDescriptor. This is used to describe the method calling ServiceMethodHandler in case of errors, for logging purposes. The ServiceMethodHandler class is used in the SOAP service implementations like this:

    public void PublishAndSubscribe(Message message, Subscription subscription)
    {
        ServiceMethodHandler.Execute(
            () => MethodDescriptor.Describe(message, subscription),
            () => manager.Publish(message, subscription));
    }

MethodDescriptor.Describe(...) is a helper method which formats the given parameters in a nice, pseudo-readable way, which can then be logged. It can be very nice to have the values of parameters in the log when debugging problems in production.
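A hypothetical stand-in for such a helper could be as simple as this. This is not the original implementation, just a sketch of the idea: format each parameter as "Type: value" on its own line, so the values end up readable in the log:

```csharp
using System;
using System.Linq;

// A made-up stand-in for MethodDescriptor.Describe: it formats each
// parameter as "Type: value" on its own line for logging purposes.
public static class MethodDescriptor
{
    public static string Describe(params object[] parameters)
    {
        return string.Join(
            Environment.NewLine,
            parameters.Select(p => p == null
                ? "<null>"
                : p.GetType().Name + ": " + p));
    }
}
```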

The actual work of this specific service is in the line () => manager.Publish(message, subscription). All the error handling has been generalized with the ServiceMethodHandler class.

It is really that simple to generalize your error handling!

Error handling – part 2 – keeping it easy

In the last part I discussed error handling in the repository/data access layer. The ideas in there are equally valid for any lower layer in the application. When moving up the ladder to higher levels, the principle is the same. Let me consider the layer which in Domain Driven Design is called Application Services, where the orchestration of ports, adapters, repositories and domain objects is handled. I will also touch on handling errors in the domain objects/aggregate roots. Please note that when I use the phrase domain object it can mean either a value object or an entity.

Aggregate Roots

This is a concept from Domain Driven Design – an Aggregate Root is a so-called consistency boundary and very often also a transaction boundary (in the database-sense of the word). Note: The transaction is not controlled from within the Aggregate Root, which (ideally) knows nothing of the database. Rather, the transaction is controlled by the Application Service.

Of course, errors may happen in a domain object. You could feed it incorrect data, call a method when the object is in an incorrect state for that method to be called, and so on.

To my knowledge there are two main ways of signalling errors in a domain object/an aggregate root.

  1. Throw an exception
  2. Publish an (error) event

And of course any combination of the two.

1. Throwing an exception

If you are in the habit of using e.g. Code Contracts you will get either a Code Contract exception, which can only be caught by catching System.Exception or by using the generic variant, where you’d normally choose to throw a System.ArgumentException or System.ArgumentNullException. You could, of course, choose your own InvalidDataException to standardise the handling. These kinds of errors are rarely recoverable.

You can also have other kinds of errors, e.g. trying to withdraw too large an amount from an Account. This is a domain logic error, which should be clearly separated from the above, which can rather be said to be a program logic error.

The main objective of signalling errors in a domain object is to avoid domain events published within the transaction/consistency boundary (as started by the Application Service) being persisted. That could very well result in corruption of state. Exceptions are a very effective and easy-to-understand way of doing this. If the Application Service catches any exception, it simply won’t commit the transaction it has opened.

If you have any kind of argument validation – using Code Contracts or manual approaches – on your domain objects (and you probably really ought to), you will eventually run into validation exceptions. This means that your Application Service somehow should catch these, if not for anything else, then for logging purposes. The population of these domain objects, which are used as arguments for methods on the Aggregate Root, should happen outside the transaction boundary.

It can, as can so much else, be discussed whether argument validation is domain logic or not. As a rule of thumb, I’d say that argument validation on aggregate root methods is considered domain logic, but the validation should rather be kept inside other domain objects or simple DTO’s. This also makes it possible to safely populate the objects outside the transaction boundary, without worrying about domain events being published when not applicable.

2. Publish an (error) event

Since it’s quite normal to use domain events from within domain objects it makes sense to extend this to also use domain events to signal domain logic errors. The hard part of this is to make sure that we have not published domain events which would be invalidated by the domain logic error.

This is where the common sense of using small aggregate roots comes in. If your Application Service calls more than one method on your aggregate root, you might consider a redesign. If it only calls one method, it is up to that single method to make sure that when a domain event is published it is valid in a global sense. In general this is done by publishing only one of two events: one on success and one on failure. To be fair, there might be more than one failure type, though.

An interesting approach is to return the domain event from the method instead of publishing it. This totally relieves the domain object of any responsibility related to infrastructure.
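The event-returning approach could be sketched like this. The Account aggregate and event names are invented for the example; the point is that the method returns exactly one event (success or failure) and touches no infrastructure:

```csharp
using System;

// Made-up domain events and aggregate, for illustration only.
public abstract class DomainEvent { }

public class WithdrawalMade : DomainEvent
{
    public decimal Amount { get; private set; }
    public WithdrawalMade(decimal amount) { Amount = amount; }
}

public class WithdrawalRejected : DomainEvent
{
    public string Reason { get; private set; }
    public WithdrawalRejected(string reason) { Reason = reason; }
}

public class Account
{
    private decimal balance;

    public Account(decimal openingBalance) { balance = openingBalance; }

    // Exactly one event is returned: success or failure, never both.
    // The aggregate publishes nothing itself; the caller decides what
    // to do with the event.
    public DomainEvent Withdraw(decimal amount)
    {
        if (amount > balance)
            return new WithdrawalRejected("Insufficient funds.");

        balance -= amount;
        return new WithdrawalMade(amount);
    }
}
```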

Application Services

There are (at least) two parts to the handling of errors in an Application Service. The one part is where it’s calling your own code. Here you have total control over what exceptions you want to throw, or even whether you want to use specialized return codes instead of exceptions. You can also choose to use events to signal errors. This suggests the need for some kind of Saga for each service the Application Service exhibits.

I will suggest using exceptions, since it’s easier to generalize the handling of these, and the generalization is important, if you want to have consistent and manageable error handling.

If you’re using third-party components, where the error handling may be different and possibly widely inconsistent, I think you should wrap the component in a thin layer, which normalizes and standardizes the errors which leaves the layer.

You will need to standardize the exceptions you want to exit the Application Service. It may be that you need more exceptions than previously discussed, because different scenarios need different detail information passed on to the user of the Application Service. But you should keep the top hierarchy to the minimum, like this:

Exception Classes - Top Hierarchy

Having just a small number of errors at the top of the pyramid makes it considerably more manageable for the user of the layer in question to handle most errors in the same way, and then give special attention only to a very specific small number of errors.

There are, in general, two ways of generalizing the error handling in an Application Service. One is to use Commands and inheritance, having the generalized error handling in a base class. The other is to use methods with nice, meaningful names and, in these methods, wrap the executing code in a Func<> parameter to an error handler method. And then there are any number of combinations of these two methods.

I prefer the one with Commands and inheritance, since you then cannot forget to add the error handling code. It’s there; it’s been written once for all your Application Services. Job well done. If you want to return any error information from your application service – and most likely you will – both approaches will work. Catching an exception or subscribing to an error domain event is comparatively the same amount of work. But as suggested above, you cannot fully avoid a try-catch in your application service; that part is the part you will want to generalize in an ApplicationService base class.

And if you boil down your error handling approach to a few common cases, as described in this and the previous post, that generalized code can be written once and for all. Then you only have to specially handle some very specific cases for each aggregate root.
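A minimal sketch of the base-class approach could look like this. All names here (ApplicationService, Handle, the On... hooks, RecoverableException) are made up for the example; a real base class would also manage the transaction and logging:

```csharp
using System;

// Assumed base exception type, as discussed in the previous post.
public class RecoverableException : Exception
{
    public RecoverableException(string message) : base(message) { }
}

// Every concrete service inherits Execute(), so the generalized
// error handling cannot be forgotten.
public abstract class ApplicationService<TCommand>
{
    public void Execute(TCommand command)
    {
        try
        {
            Handle(command);
        }
        catch (RecoverableException exception)
        {
            OnRecoverableError(exception);
        }
        catch (Exception exception)
        {
            OnUnrecoverableError(exception);
            throw;
        }
    }

    // The only part each concrete service has to write.
    protected abstract void Handle(TCommand command);

    protected virtual void OnRecoverableError(RecoverableException exception) { }

    protected virtual void OnUnrecoverableError(Exception exception) { }
}
```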

As noted above you can choose to signal errors out of the Application Service with exceptions, return codes or with events. Exceptions are easy to understand, you don’t need an event infrastructure of any kind. Return codes are messy, I suggest not to use them.

Events are a very elegant way of signalling anything. It decouples everything and you can have other parts of the code listening in on the events, without being the instigator of the action causing the event.

You will need some kind of event infrastructure, though. This can be a simple in-memory thingy or a mean beast like NServiceBus. The various implications of using events are a discussion for another post. It’s not that hard, you just need another mindset.

Hey! Catch! 😉

Error handling – the easy way

Error handling

Error handling is boring. Error handling is hard. Error handling is tedious. There is a lot of work involved. You have to think about it everywhere. And you will never be quite finished with it. But by standardizing and generalizing error handling you will save a lot of work, because all in all – a lot of errors should be handled in exactly the same way.

The discussion here is rooted in Windows, .NET and SQL Server, but the principles and considerations are equally valid on any platform.

There are two kinds of audience to an error: computers and people. And there are basically two kinds of errors: recoverable and unrecoverable. And the recoverable errors can as well be divided into two kinds: One requiring user intervention and one that can basically be recovered by trying again. This means we end up having three main groups of errors:

  1. Unrecoverable errors
  2. Recoverable errors requiring user intervention
  3. Recoverable errors not requiring user intervention

Unrecoverable errors

This is the kind of error that neither a user nor a client application can recover from in any way. It will sometimes be a configuration or a program logic error, but it could also be a network which is down. There is nothing the client can currently do about it.

Recoverable errors requiring user intervention

Typical causes can be a user-entered value which caused e.g. a database unique constraint on e.g. a username column to act up. The user can correct the problem by entering another username.

Recoverable errors not requiring user intervention

Here we have situations like timeout errors and database deadlock problems. All these can (usually) be corrected by automatically retrying the operation. The user need not be involved (at least, not at first).
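The automatic retry can be generalized once and reused. Here is a minimal sketch; the Retrier name is made up, and the RecoverableException base class is the one this post suggests further down:

```csharp
using System;
using System.Threading;

// Assumed base exception type for recoverable errors.
public class RecoverableException : Exception
{
    public RecoverableException(string message) : base(message) { }
}

public static class Retrier
{
    // Retries the action on recoverable errors, with a simple linear
    // backoff; the last attempt lets the exception escape to the caller.
    public static T Execute<T>(Func<T> action, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return action();
            }
            catch (RecoverableException) when (attempt < maxAttempts)
            {
                Thread.Sleep(100 * attempt); // back off a bit before retrying
            }
        }
    }
}
```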

Exceptions versus return codes

I really don’t want to poke the religious war here, I simply want to state my opinion: if a method call returns and doesn’t blow up in your face, the call is considered to have succeeded and no error occurred. Simple as that.

You can forget to check return codes, which will usually result in your application failing in spectacular ways, far away from where the error originated. You can also, of course, forget to catch an exception, but an unhandled exception will, usually, cause your application to crash disgracefully, but at least near where the error occurred.

So I use exceptions, even for error conditions, which are not only not exceptional but are actually expected to happen.

The joy of layering

An application typically has several layers, no matter whether it is a classic desktop application or a web-application with service and database layers and what have you. The layering actually helps us standardize error handling, because we can choose a set of errors that we’ll allow crossing the layer boundary.

Let’s consider some typical layers.

Database layer

A huge number of errors can occur when working with a database. The discussion here should not in any way be considered exhaustive, Murphy is way more inventive than I am.

When using the word client here, I typically mean the code that is actually using the layer.

Duplicate Key

When you want a column (or several columns) in a database table to be unique, you create a unique constraint. If you happen to violate this constraint when trying to insert or update data, you will – if using SQL Server and the .NET platform – receive a SqlException, which indirectly informs you of the problem through a number of properties, as shown below:

Property   Value                 Description
Number     2627                  SQL Server error code
Class      14                    Severity level in SQL Server
Errors     SqlErrorCollection    List of detailed error descriptions
Message    Violation of UNIQUE KEY constraint ‘*name of constraint*’. Cannot insert duplicate key in object ‘*name of table*’. The statement has been terminated.

If you name your constraints consistently you can retrieve the root of the problem from the Message property. SQL Server cannot effectively figure out – in the general case – which column caused the problem, since unique constraints can span several columns.
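Pulling the constraint name out of the message could be sketched like this. The helper is hypothetical, and it relies on the message format shown above for error 2627:

```csharp
using System;
using System.Text.RegularExpressions;

// A made-up helper that extracts the violated constraint's name from
// the SQL Server error message for a unique key violation (error 2627).
public static class ConstraintNameParser
{
    public static string Parse(string sqlErrorMessage)
    {
        var match = Regex.Match(
            sqlErrorMessage,
            @"Violation of UNIQUE KEY constraint '(?<name>[^']+)'");
        return match.Success ? match.Groups["name"].Value : null;
    }
}
```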

It’s worth noting that if you happen to use Entity Framework, the SqlException will be the InnerException of a System.Data.UpdateException.

The gist of this example is that this kind of error is probably a user recoverable error, if you can give your user the right information. I’d suggest that you have a base RecoverableException, inherit a UserRecoverableException from this, and again inherit a DuplicateKeyException with properties EntityName and DuplicateFieldName, which would detail where the problem lies.
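The suggested hierarchy could be sketched out like this. The class and property names follow the suggestion above; the message text is made up:

```csharp
using System;

public class RecoverableException : Exception
{
    public RecoverableException(string message) : base(message) { }
}

public class UserRecoverableException : RecoverableException
{
    public UserRecoverableException(string message) : base(message) { }
}

// Carries enough information for the UI to tell the user what to fix.
public class DuplicateKeyException : UserRecoverableException
{
    public string EntityName { get; private set; }
    public string DuplicateFieldName { get; private set; }

    public DuplicateKeyException(string entityName, string duplicateFieldName)
        : base("Duplicate value for " + duplicateFieldName + " on " + entityName + ".")
    {
        EntityName = entityName;
        DuplicateFieldName = duplicateFieldName;
    }
}
```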

The UI could then use this information to tell the user how to proceed to correct the error.

Invalid Data

When checking the data for e.g. consistency or field lengths before entering them into the database, you may encounter some problems with the data. It could also be the database throwing a constraint violation error. This would cause a SqlException with the Number property set to 547.

Since it’s in general hard to figure out exactly what went wrong, we will consider this a program logic error and throw an InvalidDataException, which inherits from UnrecoverableException.

String or binary data would be truncated

When you try to put too much data into a column, SQL Server will naturally complain about this. In .NET this will result in a SqlException with the Number property having the telling value 8152, but no other information is available. SQL Server is not kind enough to tell us which column we’re happily trying to stuff too much information into. This seems rather unhelpful, and since we don’t have this information we don’t have any other choice than making this an UnrecoverableException. The problem is of course a design flaw in the software, which would be hard to rectify at run time anyway. We will throw a TruncatedDataException.


OptimisticConcurrencyException

This exception is from Entity Framework, which can detect this situation if the table(s) involved have a rowversion column. The error occurs when the user tries to update (or delete) data that has been modified (or deleted) by another user.

In general, this must be considered a UserRecoverableException and I would suggest specializing this situation into a DataUpdatedException. Your usage scenario determines what kind of properties you would put onto this exception. The UI-handler could handle this situation by loading the new data and doing a compare to discover what data has been changed.

If you are able to detect it you can additionally specialize into a DataDeletedException.

Transaction was deadlocked

In a highly concurrent (and probably badly designed) environment it is likely that enough locking contention occurs in the database that SQL Server has to kill a session, so a command doesn’t get to finish. This will result in a SqlException with the Number property set to 1205.

These situations can often be resolved automatically by the client by retrying the command. It is therefore a RecoverableException, which I would specialize into a DeadlockedException.


Timeouts

Timeouts can be said to be a close relative of deadlocks, but the error is not detected by SQL Server, which, as such, does not care how long it takes to run a query. In .NET the situation is brought to our attention by the ADO.NET layer, which has a command timeout setting for the connection. When this timeout is reached, the command is cancelled and a SqlException is abused to signal the problem, with the Number property set to -2 (yes, minus two!).

If you are using Entity Framework you should catch a System.Data.EntityCommandExecutionException and look at InnerException, where the Number property is -2, as above.

Usually this problem can be rectified by running the command again, but always remember to log the error (with the time spent on the command).

This is a RecoverableException. Specialize it with TimeoutException.

Other weird Entity Framework exceptions

A few other Entity Framework exceptions worth mentioning: System.Data.EntityException typically suggests an authorization problem. The account does not have access to the database. This can also be caused by an incorrect connection string.

System.Data.EntityCommandExecutionException usually means that there is a mismatch between the database and the Entity Framework model (the .edmx). Can be caused by forgetting to update the model after changing the database.

System.Data.EntityCommandCompilationException also suggests some kind of mismatch between the database and the Entity Framework model.

All these exceptions are, of course, unrecoverable at run time, and this should be signalled by throwing a ProviderInaccessibleException, which is a specialization of UnrecoverableException.

Putting it under one roof

I usually create or reuse a DataAccessHandler class, which handles all the various database problems I come across and map them to one of these exceptions – or a subset of them:

  • DataDeletedException
  • DataUpdatedException
  • DeadlockedException
  • DuplicateKeyException
  • InvalidDataException
  • AuthorizationException
  • ProviderInaccessibleException
  • TimeoutException
  • TruncatedDataException

And since all these exceptions inherit from UnrecoverableException, RecoverableException and UserRecoverableException, the catching and handling of them can be done quite nicely and tidily.
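The heart of such a DataAccessHandler is the mapping from SQL Server error number to exception. Here is a sketch of just that dispatch; a real implementation would inspect the caught SqlException (and unwrap the Entity Framework wrapper exceptions first), and the exception types here are simplified placeholders for the hierarchy described above:

```csharp
using System;

// Simplified placeholder hierarchy for the example.
public class RecoverableException : Exception
{
    public RecoverableException(string message) : base(message) { }
}

public class UnrecoverableException : Exception
{
    public UnrecoverableException(string message) : base(message) { }
}

public class DuplicateKeyException : RecoverableException
{
    public DuplicateKeyException() : base("Duplicate key.") { }
}

public class DeadlockedException : RecoverableException
{
    public DeadlockedException() : base("Deadlocked.") { }
}

public class TimeoutException : RecoverableException
{
    public TimeoutException() : base("Timeout.") { }
}

public class TruncatedDataException : UnrecoverableException
{
    public TruncatedDataException() : base("Data truncated.") { }
}

public static class DataAccessHandler
{
    // Maps the SqlException.Number values discussed in this post to
    // the corresponding exception from the hierarchy.
    public static Exception MapSqlErrorNumber(int number)
    {
        switch (number)
        {
            case 2627: return new DuplicateKeyException();     // unique constraint violation
            case 1205: return new DeadlockedException();       // deadlock victim
            case 8152: return new TruncatedDataException();    // string or binary data truncated
            case -2:   return new TimeoutException();          // ADO.NET command timeout
            default:   return new UnrecoverableException("Unmapped SQL error " + number + ".");
        }
    }
}
```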

Wrapping up – for now

I have discussed error handling in a data access layer – and especially error reporting out of the layer using exceptions. In coming posts I will discuss the corresponding issues in Application Services, SOAP service layers and REST service layers.

I will end this post with a smashing beautiful class diagram of the exceptions suggested:

Exception Classes

Specifying Business Rules by Example

One of the (many) benefits of using Specification by Example is that we get the business logic out in the open, being able to discuss it among developers, analysts, domain experts and users, instead of having it buried deep down in the code, wherefrom it can be rather hard to extract. And especially hard to extract non-manually.
But I think it is really hard to figure out from which perspective (some of) these business rules should be written. I mean, some business rules do belong in the abyss of the code, but other rules should most definitely be surfaced and be easy to extract as documentation.
To encourage some thoughts about this, I will describe my thoughts about a more general problem, which many systems will face. The problem, I think, is generic enough that the discussion it might spur will be useful in other situations.
We have some system (The System), which should be accessible by web (The Web Site) and mobile apps (The App). The System demands that Members do Create an Account. And for (more or less) obvious reasons The System has some requirements as to what makes up a Valid Username and a Valid Password.
I see the action of creating an account via The Website and The App as two different features (e.g. stories) of The System. So we would have two features for describing this, very much alike:
Feature: Create Account on The Web site
    As a potential Member of The System
    I want to Create an Account on The Web Site 
    so that I can login and use the features of The System.
Feature: Create Account via The App
    As a potential Member of The System 
    I want to Create an Account in The App 
    so that I can login and use the features of The System.

I believe I have managed to describe these two features in a way that opens up for writing Scenarios suitable for automated testing. A Scenario for the first Feature could be:

Scenario: Successfully Create Account on The Web Site
    Given a Valid Username that is not already in use and a Valid Password
    When The User asks The Web Site to Create the Account
    Then The Account is created by The System and the User is logged in as a Member of The System    


The Scenario for The App will be very much the same. And I figure that these Scenarios should be automatically tested through their corresponding UI.


My brain starts to heat up, though, when I think of the actual business requirements as to what makes up a Valid Username and a Valid Password. It should be obvious that these rules should be upheld no matter whether we create an account using The Web Site or The App. And since I will want the Create Account Scenarios above to be tested through the UI, which is (very) slow, I don't want to list several examples of Usernames and Passwords, which would then all have to be tested through the UI.


The App will (obviously?) use a REST API for creating and accessing account information. The REST API will use an Account Application Service, which will also be used by The Web Site's server-side code (The Web Site could also use the REST API AJAX-style, but for now we assume it doesn't). Note that this also means that we want to – somehow – test the REST API using the same business rules, which would result in one more Feature describing that Scenario.


So what I’m actually aiming at here is that I want the Account Application Service to be Specified by Example, as well. And I want this because the rules that describe what constitutes a Valid Username and Valid Password are business rules, open for discussion. They should not be buried in the code. At the same time we don’t want to repeat the description of the business rules all over Features and Scenarios.


I cannot see a clear-cut way to do this nicely, and it may be that I'm using the wrong hammer (in this case SpecFlow, which may be hindering me in thinking outside the famous box), or it may just be my inexperience with Specification by Example shining through.


Since what I want to Specify is the Account Application Service, I will call the users of the service Consumers. What I have come up with is this:


Feature: Validate Username and Password
    As a potential Consumer of The System's Account Application Service 
    I want The Account Application Service to Validate my Username and Password according to the Business Rules
    so that I can create an account with a Valid Username and Valid Password.

A Valid Username:
  1. Cannot begin with whitespace.
  2. Cannot end with whitespace.
  3. Must start with a Unicode letter.
  4. Must consist of letters, digits and spaces.
  5. Is case insensitive.
  6. Is at least 3 characters long.
  7. Is at most 30 characters long.

A Valid Password:
  1. Can contain any Unicode character.
  2. Is at least 6 characters long.
  3. Is at most 40 characters long.
  4. Must have at least 6 different characters.

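As a thought experiment, the rules above can be written down directly as code. This is only a sketch under my own assumptions: the post's actual implementation would live in the (C#) Account Application Service, and the function names here are invented. Note also that rule 5 (case insensitivity) is not a validity check but a comparison rule, which I model as a canonicalization step.

```python
# Sketch of the Username/Password business rules above (names invented).

def is_valid_username(name: str) -> bool:
    if name != name.strip():           # rules 1-2: no leading/trailing whitespace
        return False
    if not (3 <= len(name) <= 30):     # rules 6-7: length between 3 and 30
        return False
    if not name[0].isalpha():          # rule 3: must start with a Unicode letter
        return False
    # rule 4: only letters, digits and spaces
    return all(c.isalpha() or c.isdigit() or c == " " for c in name)

def canonical_username(name: str) -> str:
    # rule 5: Usernames are case insensitive, so compare them case-folded.
    return name.casefold()

def is_valid_password(password: str) -> bool:
    # rule 1: any Unicode character is allowed, so no per-character check.
    if not (6 <= len(password) <= 40):  # rules 2-3: length between 6 and 40
        return False
    return len(set(password)) >= 6      # rule 4: at least 6 different characters
```

Writing the rules out like this is exactly what I want to avoid leaving *only* in the code, though; the Feature above is meant to keep them visible and discussable.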

The details of the business rules are not what's at stake here. They should emerge from the discussion of what constitutes a Valid Username and Password, which is helped on the way by some examples:


Scenario Outline: Validating Username
    When I give a <username>
    Then The Account Application Service responds with an answer detailing whether the Username is <valid>.

    Examples:
    | username:string                            | valid:bool |
    | user1                                      | yes        |
    | aVeryLongUserNameToBeAccepted              | yes        |
    | æøåÆØÅ1234567890                           | yes        |
    |  user with space in beginning              | no         |


The examples here might not be that exhaustive; I've only discussed them with myself 😉 But testing this validation can be automated quite easily. And this will be the only Feature describing and testing the actual business rules regarding Valid Usernames and Passwords.
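In SpecFlow the examples table would be wired up through generated step definitions; to show the mechanics without assuming a particular tool, here is the same idea reduced to a plain table-driven loop. The `validate_username` stand-in (my own invention) plays the role of the call into the Account Application Service.

```python
# Table-driven sketch of the Scenario Outline above (Python and the
# validate_username stand-in are my assumptions, not the post's code).

def validate_username(name: str) -> bool:
    # Stand-in for the Account Application Service call; implements the
    # whitespace/letter/character/length rules discussed above.
    return (name == name.strip()
            and 3 <= len(name) <= 30
            and name[0].isalpha()
            and all(c.isalpha() or c.isdigit() or c == " " for c in name))

# Rows mirroring the examples table: (username, expected validity).
EXAMPLES = [
    ("user1", True),
    ("aVeryLongUserNameToBeAccepted", True),
    ("æøåÆØÅ1234567890", True),
    (" user with space in beginning", False),
]

def run_examples() -> None:
    for username, expected in EXAMPLES:
        assert validate_username(username) == expected, username
```

The Gherkin table and this loop carry the same information; the point of keeping it in the Feature file is that analysts and domain experts can read and extend the table without touching the code.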


The Scenario Outline for Validating a Password would look just about the same. There should also be Scenario Outlines for Invalid Usernames and Passwords.


It is noteworthy, though, that the validation of Usernames and Passwords is not a User Story; it is – I think – more of an implementation detail. The same goes for the description of the REST API.


For all the other tests (The Web Site, The App, the REST API) that need a Valid Username and Password (or invalid ones, for that matter), it would be a good idea to have a TestHelper that supplies these, instead of dispersing the knowledge of Valid Usernames and Passwords over more tests than absolutely necessary.
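A minimal sketch of what such a TestHelper could look like (the class and method names are invented; the real helper would of course live alongside the post's SpecFlow tests):

```python
# Hypothetical TestHelper: the single place that knows how to produce
# valid and invalid credentials, so UI, App and REST API tests don't
# each encode the business rules themselves.

import random
import string

class AccountTestHelper:
    @staticmethod
    def valid_username() -> str:
        # Starts with a letter, letters/digits only, within 3-30 characters.
        return "user" + "".join(random.choices(string.digits, k=4))

    @staticmethod
    def invalid_username() -> str:
        return " leading space"   # violates the no-leading-whitespace rule

    @staticmethod
    def valid_password() -> str:
        # 10 distinct letters: well within 6-40 chars, >= 6 different ones.
        return "".join(random.sample(string.ascii_letters, 10))

    @staticmethod
    def invalid_password() -> str:
        return "aaaaaa"           # only one distinct character
```

If the business rules change, only the helper (and the one Feature describing the rules) needs updating, not every test that happens to create an account.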


I have chosen validation of Usernames and Passwords only as an example here. The business rules could have been anything else: financial rules, whatever. But where and how to describe the rules is still an issue.

Thoughts on usability

Designing software for ease of use is hard, and as developers we seldom use our own software extensively once it has been delivered. But we are still expected to be able to put ourselves in the user's seat and know how they work, to have the application work as seamlessly as possible and actually be a help in their work. This can be hard!

We are, however, not the only people with this problem. To give some food for thought, we can take a look at how ordinary household appliances work. I'm pretty certain that the typical developers of washing machines, tumble dryers, stoves and, to some extent, even TVs are not their typical users at home.

Be aware of dangerous default settings.

I had a washing machine which, when turned on, had a default washing program where the temperature was 60 degrees (Celsius). If you weren’t aware of this you could easily ruin some clothes.

Don’t disable settings for no particular reason.

My current washing machine has a quick wash program, which takes 15 minutes at 30 degrees. But for some peculiar reason the machine will only spin the clothes at 1200 RPM on this program, instead of the normal maximum setting of 1600 RPM; on the quick wash program it is simply not possible to select anything higher than 1200 RPM. So in other words: I want my clothes washed in a hurry, but I have plenty of time to wait for them to dry?

Make sure that settings have obvious names.

My tumble dryer has two buttons, which – (probably inaccurately) translated from Danish – read Delicate and Protect. And I cannot for the life of me remember what the difference between the two is. I'm sure at least one of them makes the dryer run at a lower temperature. But to figure out which one, I have to look in the manual. Every time.

The user should be able to see what the application is doing.

My stove, which is a ceramic cooktop, has some lights that indicate whether the cooking zones are "too hot" (to touch). But it has no indicator telling me whether any of the cooking zones are actually turned on – or off, for that matter. I have to inspect all the knobs manually to determine this.

Let the user multi-task if the scenario allows for it.

My LG TV has an Electronic Program Guide, which is fairly normal these days. But the brilliant designers have decided that I, when using the EPG, don't want to listen to the program I was watching: the sound is muted when I open the EPG. This seems like an odd decision (and it is very likely rooted in something technical), since the sound would allow me to follow along in the program for a few seconds while reading up on what I'm actually watching 😉


These few examples show that we can spot usability problems simply by looking around. But how can we avoid them? Well, we will probably have to talk with our users about that… 😉