Mr. Legacy

At my current (at the time of writing) workplace I am often referred to as Mr. Legacy. This is a title that I’m honoured to have earned. But how did that come about?

I was hired to do some firefighting. My boss knew me from a previous assignment, and it was initially meant to be a short stint, but I liked it there, they seemed to like me, and one thing led to another, so I have stayed on board for – so far – more than 7 years.

For some reason I have a knack for messing around (in the good way) with old crap; that’s just saying it as it is. I’m very good at surgical incisions, “spot-improving” the code and adding features without otherwise ruining the system. And I’m not just good at it. I actually like it!

But there was a time when even I was bested. I’m not proud of it, for several reasons, which will all become clear.

It all started in 2008 when I got a call from my job-broker: Would I like a small, cosy 120 hour project, implementing a few features in an existing project, written in Borland C++? Of course I would since I hadn’t been working for a couple of months.

The application was an automatic duty roster planner, managing duty rosters for employees who work different shifts, taking into account various union rules, employees’ wishes, overtime etc. The built-in autoplanning feature worked surprisingly well.

At the time they had made some promises to a customer about the application. It should be able to not only plan the shifts but also record which hours the employees were actually there.

My job in that respect was to implement an export of those hours to a third party accounting system. A simple job, you would think, but I was in for a surprise.

My first action was to take a look at the database. The data obviously had to be there somewhere to make an export happen. There were some tables that had the right smell, but they did not fit the description exactly. After a few hours I had to give in – I could not figure out how this was supposed to be done. And at that time I was pretty certain that it would take more than the 120 hours to implement, because the application did not seem to even have the features for recording the hours.

I had a meeting with the boss and told him my view of the lay of the land. He was both shocked and, at the same time, not surprised.

It was not thought through at the time, but I suggested we make some kind of partnership deal, because I really did think there was something valuable in that product: great ideas, great functionality and existing customers. We came to an agreement, the details of which are not relevant here.

Then the ordeal started. Another requirement from the customer was to enable multi-user support. Had I known about that requirement, I would most definitely not have entered into the agreement.

The application used a proprietary database – EasyTable (nothing to do with the name of the application) – which is really a quite decent database. And it sure was easy to use.

But in the application there was no sight of a data access layer. SQL was littered all over it. Concatenated SQL. No parameter bindings. At all. Table and field names were repeated all over, too.

To enable multi-user support we had to use another database, which meant we needed to become more or less database agnostic. In other words: a DAL was needed, and a database-independent one at that. A task that in itself is not that difficult.

I wrote some basic Entity and EntityList classes and quickly figured out how to implement some C++ template classes to help me out. Boring job, but it had to be done.

The really hard part was changing the existing embedded SQL statements and magic-string field names to use Entities and EntityLists.

The application used a DataModule form on which TQuery thingies (Delphi did the same, if I remember correctly) were placed, more or less resembling the actual tables in the database. Mostly less. These queries were generally abused all over the application to hold some random SQL and return the corresponding recordset, or to update various data in the database.

The main form in the application had about 10,000 lines of code-behind. A staggering number of lines, but if it had been well written it would have been manageable. It was not. Global variables were as abundant as parentheses in LISP. And they were referenced from everywhere, even other forms.

The application was a nightmare; it was surprising it worked as well as it did.

Eventually I had to throw in the towel. After I had sunk hundreds of hours into the application, I gave up. The code had been cleaned up exceptionally well by then, but admittedly I ran out of money, since I was not earning anything while working on it.

Lessons learned

First I’ll take a look at what went wrong for the application in the first place. What thoughts went into choosing C++ Builder for the task? At the time, in 2008, it would have been considered insane, but we have to look at it in the context of 2003, when they started the application. What were the options back then?

.NET 1.1 came out officially on 9 April 2003 (https://en.wikipedia.org/wiki/.NET_Framework_version_history) and .NET really was not useful for much before then. I do not remember the state of the WinForms designer, but I guess it was about as good as the VB6 designer, which was horrible but was used to create loads of applications.

The other option was Delphi, also from Borland. I guess the form designers were at about the same level, and I was pretty fond of the designers in Delphi 1 to 6.

The people making the choice were not very experienced in the business. They were just out of school and chose what they had used in class: Borland C++ Builder, which anyone who has used it will readily admit is not a good product. But still, I understand why they made the choice.

Whatever tool they had chosen, it would not have saved them from the flood of mistakes they continued to make. Again, it was probably understandable. The application started as a proof of concept, continued to grow, and was in such demand that they never got around to refactoring it.

But did I learn something from this quest? I have to say yes. It was hard-earned, so having learned nothing would be sad.

In hindsight I should, of course, have left the assignment after the first few hours, but what could have warned me of that back then?

Firstly, I probably should have spent a considerable number of hours trying to understand the innards of the application before entering any agreement about it. That I did not do. Today I would – in the same situation – simply state that the wanted functionality could not be implemented given the current state of the application.

I should also have talked to some customers – or at least the boss – about expected and known feature requests, so I could have judged whether they were feasible.

F#: Making illegal states unrepresentable

Following Property-based testing with F# by Mark Seemann, it annoyed me a bit that the generator did not involve the type itself in restricting what could be generated. I tried to figure out a nice way to do this. Then one day the need arrived for a string which would always contain a value – a NonEmptyString – because I like to make illegal states unrepresentable, which is no new idea. Scott Wlaschin is brilliant. Period.

I eventually came up with this little piece of code:

open System

type NonEmptyString = private NonEmptyString of string
    with static member create value =
            match NonEmptyString.validate value with
            | true -> 
                NonEmptyString value 
                |> Some
            | false -> None

         static member validate value =
            String.IsNullOrEmpty value
            |> not

         static member primitiveValue value =
            let (NonEmptyString primitiveValue) = value
            primitiveValue

The code represents a string which cannot be null or empty, ever. You simply cannot construct it with an empty value. This also means, of course, that you always have to unwrap the result when constructing it, e.g. with a match:

let s =
    match NonEmptyString.create "somestring" with
    | None -> ... // you have to think of some error handling here!
    | Some value -> value

Now having the validate function makes it very easy to create a generator for FsCheck:

type Generators =
    static member NonEmptyString() =
        {
            new Arbitrary<NonEmptyString>() with
                override x.Generator =
                    Arb.Default.String()
                    |> Arb.filter (fun s -> NonEmptyString.validate s)
                    |> Arb.toGen
                    |> Gen.map (fun s -> (NonEmptyString.create s).Value)
        }

If you haven’t used FsCheck I strongly suggest that you give it a go. There is quite a steep learning curve, but it is very much worth it.

Error handling – part 4 – now on to the REST

After the slippery SOAP adventure it is time to look at a more forward-looking way of exposing your API to the world: REST. Again the discussion is rooted in the .NET world and uses WEB.API as the underlying engine for explaining the principles. And again the principles are equally applicable to other languages, frameworks and platforms.

Returning an error from a REST Service is basically as simple as returning an HTTP status code. How you do that is naturally implementation dependent, but we still have to figure out what status code to return when. Taking the first article as a starting point we could produce a mapping to status codes like the following table:

Exception                      HTTP Status Code           Description
DuplicateKeyException          409 Conflict               Duplicate key
DeadlockedException            409 Conflict               Deadlock
DataUpdatedException           409 Conflict               Concurrency conflict
TimeoutException               504 Gateway Timeout        Timeout
AuthorizationException         403 Forbidden              Not authorized
InvalidDataException           400 Bad Request            Invalid data
TruncatedDataException         400 Bad Request            Truncated data
ProviderInaccessibleException  502 Bad Gateway            Provider inaccessible
System.Exception               500 Internal Server Error  Internal server error

This is a mapping that might be more natural if it had been created the other way around, starting with HTTP Status Codes and then inventing exceptions matching those. And it is a mapping that might not always fit your scenario. But remember that it reflects a general view of the status codes.
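To make the mapping concrete, here is a minimal sketch of how it could be centralized in code. The exception types are the ones from the table; the StatusCodeMapper class and method names are mine, not the project’s actual implementation, and the sketch assumes using System and System.Net:

public static class StatusCodeMapper
{
    // Maps an exception to the HTTP status code from the table above.
    public static HttpStatusCode MapToStatusCode(Exception exception)
    {
        if (exception is DuplicateKeyException ||
            exception is DeadlockedException ||
            exception is DataUpdatedException)
        {
            return HttpStatusCode.Conflict;             // 409
        }

        if (exception is TimeoutException)
        {
            return HttpStatusCode.GatewayTimeout;       // 504
        }

        if (exception is AuthorizationException)
        {
            return HttpStatusCode.Forbidden;            // 403
        }

        if (exception is InvalidDataException ||
            exception is TruncatedDataException)
        {
            return HttpStatusCode.BadRequest;           // 400
        }

        if (exception is ProviderInaccessibleException)
        {
            return HttpStatusCode.BadGateway;           // 502
        }

        return HttpStatusCode.InternalServerError;      // 500
    }
}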

Besides returning an HTTP status code to the client it can be helpful to return some more information, which can help the client figure out what went wrong and maybe retry with a corrected request. In my most recent project we returned an error structure like this:

/// <summary>
/// The response structure returned in the body from the service when an error occurs.
/// </summary>
public class ErrorResponse
{
    /// <summary>
    /// The http response code.
    /// </summary>
    public int code { get; set; } 

    /// <summary>
    /// Will contain the value "error".
    /// </summary>
    public string status { get; set; }

    /// <summary>
    /// Gets or sets further information about the error.
    /// </summary>
    public object data { get; set; }

    /// <summary>
    /// Gets or sets the message id of the log message which was logged in connection with the error.
    /// </summary>
    public Guid? messageId { get; set; }

    /// <summary>
    /// Gets or sets the correlation id of the log message which was logged in connection with the error.
    /// </summary>
    public Guid? correlationId { get; set; }

    /// <summary>
    /// Gets or sets a short description of the error.
    /// </summary>
    public string error { get; set; }
}

This would, in our project, be converted to JSON, but it could just as well be converted to XML. The main point of the structure is that the code, status and data properties are available whether the response was a success or a failure.

We also had an ExtendedErrorResponse, which – in development and test – could be requested by sending along a custom header. This response would include detailed information about any exceptions, which would ease debugging.
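Purely as an illustration – the actual class in the project was richer – such an extended response could simply add the exception details on top of the ordinary response (the exceptionDetails property name is my own):

/// <summary>
/// Illustrative sketch only: the error response extended with exception
/// details, returned when the custom header is present in development and test.
/// </summary>
public class ExtendedErrorResponse : ErrorResponse
{
    /// <summary>
    /// Detailed exception information (type, message, stack trace).
    /// </summary>
    public string exceptionDetails { get; set; }
}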

The project used WEB.API and we had to figure out a way to generalize the above handling of exceptions and map them to responses. This turned out to be fairly easy in WEB.API: create a class derived from ApiControllerActionInvoker and implement the InvokeActionAsync method in a manner similar to this:

    /// <summary>
    /// Asynchronously invokes the specified action by using the specified controller context. 
    /// </summary>
    /// <param name="actionContext">The controller context.</param>
    /// <param name="cancellationToken">The cancellation token.</param>
    /// <returns>A task representing the invoked action.</returns>
    public override async Task<HttpResponseMessage> InvokeActionAsync(HttpActionContext actionContext, CancellationToken cancellationToken)
    {
        Contract.Requires(actionContext != null);
        Contract.Ensures(Contract.Result<Task<HttpResponseMessage>>() != null);

        StatisticsContext statisticsContext = null;
        OperationContext operationContext = null;
        var request = actionContext.Request;
        try
        {

            statisticsContext = StatisticsContextSession.GetStatisticsContext(request);
            StatisticsStartRequest(request, statisticsContext);

            var commonRequestParameters = GetCommonRequestParameters(actionContext);

            operationContext = SetControllerOperationContext(commonRequestParameters, request);

            var filterTrees = GetServicelagController(request).CreateODataFilterTrees().ToList();
            var allowedTreeNames = GetAllowedODataFilterTreeNames(actionContext, filterTrees);
            ODataFilterVerifier.ParseAndCheckODataFilter(commonRequestParameters, filterTrees, allowedTreeNames);

            ParseAndCheckAuthorization(request, commonRequestParameters, operationContext, applicationIdChecker);

            // Here the task that executes the actual APIController method is created
            Task<HttpResponseMessage> actionTask =
                InvokeControllerActionAsync(operationContext, actionContext, statisticsContext, cancellationToken);

            // and executed
            var response = await actionTask;

            StatisticsEndRequest(statisticsContext, request, response);

            return response;
        }
        catch (Exception exception)
        {
            var response = ErrorResponseMapper.LogErrorAndCreateErrorResponse(
                actionContext.Request,
                operationContext,
                statisticsContext,
                exception,
                configuration.AllowExtendedErrorInformation == AllowExtendedErrorInformation.Yes);

            StatisticsEndRequest(statisticsContext, request, response);
            return response;
        }
    }

To register this class in WEB.API you would – e.g. in WebApiConfig – do something like this:

        config.Services.Replace(
            typeof(IHttpActionInvoker), 
            new MyActionInvoker(...parameters...));

I have left in quite a few details not necessary for understanding the actual error handling, just to suggest that this method is a nice place to do a lot of pre- and post-request handling. We needed, for instance, statistics on which services were called, when, and by whom. Call timings were also quite easy to do like this.

The error handling is, of course, done in catch (Exception exception), which here delegates the responsibility to an ErrorResponseMapper, which basically takes care of mapping the exception to the HTTP Status Code as described in the table above.

To allow for custom, non-generalized status codes we created an HttpResponseMessageException (derived from UnrecoverableException), which allows the individual API controllers to throw a custom exception, requesting that a specific HTTP Status Code is returned:

/// <summary>
/// An exception thrown when a Controller wants to respond with a specific <see cref="HttpStatusCode"/>.
/// </summary>
[Serializable]
public class HttpResponseMessageException : UnrecoverableException
{
    /// <summary>
    /// Initializes a new instance of the <see cref="HttpResponseMessageException"/> class.
    /// </summary>
    /// <param name="code">The <see cref="HttpStatusCode"/> to return to the client.</param>
    /// <param name="message">A message with details.</param>
    public HttpResponseMessageException(HttpStatusCode code, string message)
        :base(message)
    {
        Code = code;            
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="HttpResponseMessageException"/> class.
    /// </summary>
    /// <param name="code">The <see cref="HttpStatusCode"/> to return to the client.</param>
    /// <param name="innerException">The inner exception, which caused this exception to be created.</param>
    public HttpResponseMessageException(HttpStatusCode code, Exception innerException)
        : base(string.Empty, innerException)
    {
        Code = code;
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="HttpResponseMessageException"/> class.
    /// </summary>
    /// <param name="code">The <see cref="HttpStatusCode"/> to return to the client.</param>
    /// <param name="message">A message with details.</param>
    /// <param name="innerException">The inner exception, which caused this exception to be created.</param>
    public HttpResponseMessageException(HttpStatusCode code, string message, Exception innerException)
        : base(message, innerException)
    {
        Code = code;
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="HttpResponseMessageException"/> class.
    /// </summary>
    /// <param name="info">The <see cref="T:System.Runtime.Serialization.SerializationInfo" /> that holds the serialized object data about the exception being thrown.</param>
    /// <param name="context">The <see cref="T:System.Runtime.Serialization.StreamingContext" /> that contains contextual information about the source or destination.</param>
    protected HttpResponseMessageException(SerializationInfo info, StreamingContext context)
        : base(info, context)
    {
        Contract.Requires(info != null);

        var code = info.GetInt32(CodeKey);
        Contract.Assume(Enum.IsDefined(typeof(HttpStatusCode), code));
        Code = (HttpStatusCode)code;
    }

    /// <summary>
    /// When overridden in a derived class, sets the <see cref="T:System.Runtime.Serialization.SerializationInfo" /> with information about the exception.
    /// </summary>
    /// <param name="info">The <see cref="T:System.Runtime.Serialization.SerializationInfo" /> that holds the serialized object data about the exception being thrown.</param>
    /// <param name="context">The <see cref="T:System.Runtime.Serialization.StreamingContext" /> that contains contextual information about the source or destination.</param>
    /// <PermissionSet>
    ///   <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="*AllFiles*" PathDiscovery="*AllFiles*" />
    ///   <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="SerializationFormatter" />
    ///   </PermissionSet>
    [SecurityCritical]
    public override void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        base.GetObjectData(info, context);
        info.AddValue(CodeKey, Code);
    }

    /// <summary>
    /// Gets the <see cref="HttpStatusCode"/> intended for the client.
    /// </summary>
    public HttpStatusCode Code { get; private set; }

    private const string CodeKey = "Code";
}

It could be used something like this:

throw new HttpResponseMessageException(
    HttpStatusCode.BadRequest,
    "Unsupported $Filter query: " + commonRequestParameters.Filter,
    exception);

The ErrorResponseMapper then takes care of mapping this to an ErrorResponse.

Summary

I have in this series tried to shed some light on how to build a generic error handling (micro-)framework into your application. If you do this you will always have some default error handling in place, which then can be individualized when necessary along the way.

I hope you see that the process is really not that complicated; you just have to use the features that the languages and frameworks give you.

Having the default error handling in place allows you to REST peacefully at night 😉

Learning F#

I have started on a new quest. I want to learn F#. And I want to be as good in F# as I am in C#. Well, you might say, how hard can it be? It’s just another language. That was my initial thought, sort of. There is one huge difference compared to C#: F# is (first and foremost) a functional programming language, whereas C# is an object-oriented programming language. And to me that is a considerable obstacle. Let me be honest here: the one course in my computer science study that concentrated on a functional language (Miranda) took me three tries to pass. Not something I’m particularly proud of, but it sure is an indicator of how hard it was for me – I am, honestly, not that stupid 😉

There are several issues with learning a new language.

1. The libraries and frameworks available

Since F# runs on the .NET (and Mono) platforms this is an issue that can mostly be overlooked. F# can call everything .NETish, and F# is a hybrid language, so various object-oriented frameworks (in C#) can be used, too.

2. The eco-system, IDE, tools etc.

Some people think that an IDE is not a good idea. I’ll leave them to struggle with that. I like it when my computer is helpful, especially when learning something new. F# is supported in Visual Studio, but the support is light years behind that of C#. There is not, for instance, something as simple as rename-refactoring support.

The tools available are vastly different, as well. I’m used to Resharper and use its test runner when running unit tests. Since Resharper doesn’t have the faintest idea what F# is, this limits the usefulness of the test runner quite a bit. It can be used, though.

The F# compiler is wildly more intelligent than the C# compiler. It almost does, out of the box, what Roslyn is supposed to give to C#. The editor in Visual Studio uses the F# compiler to continuously compile the code and underlines (with red squiggly lines) problematic code, without you hitting compile at all. Something like what Resharper does when “Whole solution analysis” is enabled. Quite neat.

3. The paradigm and syntax

This is where I struggle the most. Functional programming is a vastly different paradigm from object-oriented programming. And even though Microsoft has sneaked functional constructs into C# over the years, I still struggle with going all the way.

And in addition to having to think in a wholly different way about how to solve a problem, I also have to get used to a wholly new syntax – both reading it and writing it. Here I am truly happy about using a fairly smart IDE, because even with it, I can sometimes ponder for minutes over some error before getting it.

4. Structuring the code

After using mantras like “one class, one responsibility“, one class per file, SOLID, Design Patterns, libraries, layers and loads of other principles to keep me out of harm’s way, I’m now looking at a totally blank slate. Since functional programming still does not have the benefit of being mainstream (getting there, but not yet), the collective wisdom of the very good functional programmers still hasn’t been distilled into sound advice, so everyone is more or less on their own. This fact is intensified by something else I have noticed when meeting other functional programmers:

5. The milieu

The functional milieu seems to be – in some respects – a very mathematically oriented, elitist milieu, which – in its approach to programming – is very far from the many sound principles which seem to proliferate in the object-oriented milieu. And this is strange. A lot of the things that good programmers do in object-oriented languages, like using meaningful names and structuring the code in meaningful ways, functional programmers seem to regard with contempt. They seem to think along the lines of: Why write let CalculateTax income = ... when you can simply write let ct i = ....

This is, of course, overstated a bit, but the write once, read many mantra still has a long way to go in the functional milieu, and the general elitist attitude will work against the adoption of functional languages, since the average programmer will simply look at it and say: it’s too hard. And to make a programming language mainstream it has to be adopted by the average programmer.

How am I going about it?

I have been wanting to get into F# for a few years now. It started to get serious when I attended the Skillsmatter Fast Track to F# with Tomas Petricek and Phil Trelford in 2013. I then went to New York to attend the F# Tutorials, which I also plan to attend in 2014.

Then I bought the book (along with others): Functional Programming with F#, which has exercises in it and seems to be a good book to teach me the building blocks of F#.

In Copenhagen a functional meetup group has started, where we, amongst other things, have a monthly meetup at which we work through the book in groups. In connection with that I am exposing myself by uploading my feeble attempts at solving the exercises to GitHub.

I’m also trying to work on a more or less real-world problem: URL routing and HTML templates, where I try to focus on creating a nice API for the user of the library and on structuring the code in a nice, consistent and readable manner. Definitely a work in progress!

So all in all I’m attacking the problem from various angles and I feel I’m getting there – slowly, but surely!

I really want to be a skilled, versatile FSharp Dressed Man :D!

Error handling – part 3 – as easy as sliding in SOAP

One way to expose your API to the world is via a SOAP interface. Yes, I know it’s old and out of fashion, but there are fantasillions of installations using SOAP, and for some bizarre reason new installations are popping up every day. So for good measure I will discuss how to keep the slick error handling introduced previously and generalize it in your SOAP API.

I will discuss this in the light of the Microsoft WCF beast, but the discussion is – I guess – equally applicable to other SOAP frameworks.

SOAP Faults

SOAP has an official way of signalling error conditions to its users – the SOAP Fault – and WCF has the FaultContract attribute for this. It is used on the methods of a ServiceContract to declare which types/classes can be returned with error information.

In my endeavours with SOAP I have chosen to return one of two FaultContracts: RecoverableFault and UnrecoverableFault. In some situations it might be useful to have a UserRecoverableFault, as well.

I want to show you a sample implementation of RecoverableFault here and discuss its pros and cons. Please note that it has been a while since I have used this, so it may not reflect the content of the previous articles 100%, and it is definitely not as mature:

/// <summary>
/// Describes a recoverable fault, which is a fault than can be recovered by correcting data and/or retrying.
/// </summary>
[DataContract]
public class RecoverableFault
{
    /// <summary>
    /// Initializes a new instance of the <see cref="RecoverableFault"/> class.
    /// </summary>
    /// <param name="faultType">Type of the fault.</param>
    /// <param name="exceptionDetails">The exception details.</param>
    public RecoverableFault(RecoverableFaultType faultType, string exceptionDetails)
    {
        FaultType = faultType;
        ExceptionDetails = exceptionDetails;
    }

    /// <summary>
    /// Gets the type of the fault.
    /// </summary>
    /// <see cref="RecoverableFaultType"/>
    [DataMember]
    public RecoverableFaultType FaultType { get; private set; }

    /// <summary>
    /// Gets or sets a generic description of the id of the entity which the fault refers to, if available. Can be left empty.
    /// </summary>
    [DataMember]
    public string EntityId { get; set; }

    /// <summary>
    /// Gets or sets the name of the entity to which the fault refers, if available. Can be left empty.
    /// </summary>
    [DataMember]
    public string EntityName { get; set; }

    /// <summary>
    /// Gets or sets the name of the key in cases of DuplicateData FaultType.
    /// </summary>
    [DataMember]
    public string KeyName { get; set; }

    /// <summary>
    /// Gets the exception details, if any were available. Can be used for debugging and logging purposes.
    /// </summary>
    [DataMember]
    public string ExceptionDetails { get; private set; }

    /// <summary>
    /// Returns a <see cref="System.String"/> that represents this instance.
    /// </summary>
    /// <returns>
    /// A <see cref="System.String"/> that represents this instance.
    /// </returns>
    public override string ToString()
    {
        return "RecoverableFaultType.{1}{0}EntityId:{2}{0}EntityName:{3}{0}KeyName:{4}{0}Details:{5}".FormatInvariant(
                Environment.NewLine,
                FaultType,
                EntityId,
                EntityName,
                KeyName,
                ExceptionDetails);
    }
}

Note the ExceptionDetails property. This is something I would only leave in if the SOAP API is for internal use only. Having exception details may risk exposing implementation details, such as usernames, connection strings, etc. to the client, which is something that we really don’t want 😉

The EntityId, EntityName and KeyName properties surely suggest that this Fault has been used primarily for database-related problems.

The RecoverableFaultType is a simple enum, which underlines the above assumption:

/// <summary>
/// A categorization of Recoverable Faults.
/// </summary>
public enum RecoverableFaultType
{
    /// <summary>
    /// The data being updated/deleted was already deleted by another user.
    /// </summary>
    DataDeletedByAnotherUser,

    /// <summary>
    /// The data being updated/deleted was already updated by another user.
    /// </summary>
    DataUpdatedByAnotherUser,

    /// <summary>
    /// The data being added already exists.
    /// </summary>
    DuplicateData,

    /// <summary>
    /// The database dead locked while applying the changes.
    /// </summary>
    Deadlocked,

    /// <summary>
    /// There was a time out while applying the changes.
    /// </summary>
    Timeout    
}

This enum can be used for simple case-switching in the client when trying to determine a valid action for the error.
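On the client side that could look roughly like this. This is a sketch only: the client proxy, the operation name and the Retry, ShowDuplicateKeyDialog and ReloadAndCompare helpers are assumptions, not part of the code above:

try
{
    client.SomeOperation(request);
}
catch (FaultException<RecoverableFault> fault)
{
    switch (fault.Detail.FaultType)
    {
        case RecoverableFaultType.Deadlocked:
        case RecoverableFaultType.Timeout:
            // Recoverable without user intervention - simply retry.
            Retry(() => client.SomeOperation(request));
            break;

        case RecoverableFaultType.DuplicateData:
            // Recoverable with user intervention - ask for another key value.
            ShowDuplicateKeyDialog(fault.Detail.EntityName, fault.Detail.KeyName);
            break;

        default:
            // DataUpdatedByAnotherUser / DataDeletedByAnotherUser:
            // reload the entity and let the user decide.
            ReloadAndCompare(fault.Detail.EntityId);
            break;
    }
}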

The UnrecoverableFault could be modeled along these lines:

/// <summary>
/// Description of a unrecoverable error.
/// </summary>
[DataContract]
public class UnrecoverableFault
{
    /// <summary>
    /// Initializes a new instance of the <see cref="UnrecoverableFault"/> class.
    /// </summary>
    /// <param name="message">The message.</param>
    /// <param name="errorDetails">The error details.</param>
    /// <param name="parameters">The description of parameters to the method where it all went wrong.</param>
    /// <param name="logId">The log id.</param>
    public UnrecoverableFault(string message, string errorDetails, string parameters, Guid logId)
    {
        Message = message;
        ErrorDetails = errorDetails;
        Parameters = parameters;
        LogId = logId;
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="UnrecoverableFault"/> class.
    /// Use this constructor when you really do not want to provide the client with any detail information. When
    /// given the logId the client can instruct support to look for the given logId in the log.
    /// </summary>
    /// <param name="logId">The log id.</param>
    public UnrecoverableFault(Guid logId)
    {
        LogId = logId;
    }

    /// <summary>
    /// A friendly description of the error. May be empty.
    /// </summary>
    [DataMember]
    public string Message { get; private set; }

    /// <summary>
    /// Details about the error. Could be exception details.
    /// </summary>
    [DataMember]
    public string ErrorDetails { get; private set; }

    /// <summary>
    /// Gets a description of the Parameters to the method where the error happened. Typically obtained with
    /// MethodDescriptor.Describe().
    /// </summary>
    [DataMember]
    public string Parameters { get; private set; }

    /// <summary>
    /// An id for a potential error log.
    /// </summary>
    [DataMember]
    public Guid LogId { get; private set; }

    /// <summary>
    /// Returns a <see cref="System.String"/> that represents this instance.
    /// </summary>
    /// <returns>
    /// A <see cref="System.String"/> that represents this instance.
    /// </returns>
    public override string ToString()
    {
        return "LogId:{1}{0}Message:{2}{0}Details:{3}{0}Parameters:{4}".FormatInvariant(
            Environment.NewLine,
            LogId,
            Message,
            ErrorDetails,
            Parameters);
    }
}

The only really interesting thing about this is the LogId property, which in this specific system refers to a log entry in the log database. So this can be used for correlating errors. There are more effective ways to do this, but this was how it was done here.

Generalizing the SOAP service implementations

Having these rather generic Faults is all well and good, but we have to have an easy way of applying them. This was handled by two classes, ServiceMethodHandler and ServiceMethodHandlerImplementation:

public static class ServiceMethodHandler
{
    public static void Execute(Func<string> methodDescriptor, Action action)
    {
        ServiceMethodHandler.Execute<bool>(
            methodDescriptor,
            () =>
            {
                action();
                return true;
            });
    }

    public static TReturn Execute<TReturn>(Func<string> methodDescriptor, Func<TReturn> action)
    {
        return new ServiceMethodHandlerImplementation<TReturn>(methodDescriptor, action).Execute();
    }         
}

internal class ServiceMethodHandlerImplementation<TReturn>
{
    public ServiceMethodHandlerImplementation(Func<string> methodDescriptor, Func<TReturn> action)
    {
        this.methodDescriptor = methodDescriptor;
        this.action = action;
    }

    public TReturn Execute()
    {
        Log.ThreadCorrelationId = Guid.Empty;
        try
        {
            return action();
        }
        catch (RecoverableException exception)
        {
            var methodDescription = methodDescriptor();
            Log.Warning(exception, methodDescription);
            throw new FaultException<RecoverableFault>(ExceptionToFault.RecoverableFaultFromRecoverableException(exception));
        }
        catch (UnrecoverableException exception)
        {
            var methodDescription = methodDescriptor();
            Log.Error(exception, methodDescription);
            throw new FaultException<UnrecoverableFault>(
                new UnrecoverableFault(
                    "UnrecoverableException executing the action.",
                    exception.ToString(),
                    methodDescription,
                    Log.ThreadCorrelationId));
        }
        catch (Exception exception)
        {
            var methodDescription = methodDescriptor();
            Log.Error(
                exception,
                "This exception here suggest a program logic error.{0}{1}",
                Environment.NewLine + Environment.NewLine,
                methodDescription);

            throw new FaultException<UnrecoverableFault>(
                new UnrecoverableFault(
                    "This exception here suggest a program logic error.",
                    exception.ToString(),
                    methodDescription,
                    Log.ThreadCorrelationId));
        }
    }

    private readonly Func<string> methodDescriptor;
    private readonly Func<TReturn> action;
}

This is also fairly old code, which therefore does not have any checks for the UserRecoverableException. The only really quirky thing about this class is the Func<string> methodDescriptor. This is used to describe the method calling ServiceMethodHandler in the case of errors, for logging purposes. The ServiceMethodHandler class is used in the SOAP service implementations like this:

    public void PublishAndSubscribe(Message message, Subscription subscription)
    {
        ServiceMethodHandler.Execute(
            () => MethodDescriptor.Describe(message, subscription),
            () => manager.Publish(message, subscription));
    }

MethodDescriptor.Describe(...) is a helper method which formats the given parameters in a nice, pseudo-readable way, which can then be logged. It can be very nice to have the values of parameters in the log when debugging problems in production.
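I will not show the real MethodDescriptor here, but the core idea can be sketched in a few lines. The real helper formats things more nicely and presumably includes the method name as well; this sketch assumes using System and System.Linq:

public static class MethodDescriptor
{
    // Formats the given parameter values, one per line, for logging.
    public static string Describe(params object[] parameters)
    {
        var lines = parameters.Select(
            (parameter, index) => string.Format("  [{0}]: {1}", index, parameter ?? "<null>"));

        return string.Join(Environment.NewLine, lines);
    }
}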

The actual work of this specific service is the line () => manager.Publish(message, subscription). All the error handling has been generalized with the ServiceMethodHandler class.

It is really that simple to generalize your error handling!

Error handling – part 2 – keeping it easy

In the last part I discussed error handling in the repository/data access layer. The ideas in there are equally valid for any lower layer in the application. When moving up the ladder to higher levels, the principle is the same. Let me consider the layer which in Domain Driven Design is called Application Services, where the orchestration of ports, adapters, repositories and domain objects is handled. I will also touch on handling errors in the domain objects/aggregate roots. Please note that when I use the phrase domain object, it can mean either a value object or an entity.

Aggregate Roots

This is a concept from Domain Driven Design – an Aggregate Root is a so-called consistency boundary and very often also a transaction boundary (in the database-sense of the word). Note: The transaction is not controlled from within the Aggregate Root, which (ideally) knows nothing of the database. Rather, the transaction is controlled by the Application Service.

Of course, errors may happen in a domain object. You could feed it incorrect data, call a method, when the object is in an incorrect state for that method to be called, and so on.

To my knowledge there are two main ways of signalling errors in a domain object/an aggregate root.

  1. Throw an exception
  2. Publish an (error) event

And of course any combination of the two.

1. Throwing an exception

If you are in the habit of using e.g. Code Contracts, you will get either a Code Contract exception, which can only be caught by catching System.Exception, or – if you use the generic variant – an exception of your own choosing, where you’d normally throw a System.ArgumentException or System.ArgumentNullException. You could, of course, choose your own InvalidDataException to standardise the handling. These kinds of errors are rarely recoverable.

You can also have other kinds of errors, e.g. trying to withdraw too large an amount from an Account. This is a domain logic error, which should be clearly separated from the above, which can rather be said to be program logic errors.

The main objective of signalling errors in a domain object is to prevent domain events published within the transaction/consistency boundary (as started by the Application Service) from being persisted. That could very well result in corruption of state. Exceptions are a very effective and easy-to-understand way of doing this. If the Application Service catches any exception, it simply won’t commit the transaction it has opened.
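In code the pattern is as simple as it sounds. A sketch, where the unit of work, the repository and the ChangeAddress method are made-up names:

// Sketch of the Application Service controlling the transaction.
public void ChangeAddress(Guid customerId, Address newAddress)
{
    using (var unitOfWork = unitOfWorkFactory.Create())
    {
        var customer = customerRepository.Get(customerId);

        // May throw a validation or domain logic exception...
        customer.ChangeAddress(newAddress);

        customerRepository.Save(customer);

        // ...in which case we never get here, and the transaction is
        // rolled back when the unit of work is disposed.
        unitOfWork.Commit();
    }
}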

If you have any kind of argument validation – using Code Contracts or manual approaches – on your domain objects (and you probably really ought to), you will eventually run into validation exceptions. This means that your Application Service should somehow catch these, if for nothing else, then for logging purposes. The population of these domain objects, which are used as arguments for methods on the Aggregate Root, should happen outside the transaction boundary.

It can, as can so much else, be discussed whether argument validation is domain logic or not. As a rule of thumb, I’d say that argument validation on aggregate root methods is considered domain logic, but the validation should rather be kept inside other domain objects or simple DTOs. This also makes it possible to safely populate the objects outside the transaction boundary, without worrying about domain events being published when not applicable.

2. Publish an (error) event

Since it’s quite normal to use domain events from within domain objects it makes sense to extend this to also use domain events to signal domain logic errors. The hard part of this is to make sure that we have not published domain events which would be invalidated by the domain logic error.

This is where the common sense of using small aggregate roots comes in. If your Application Service calls more than one method on your aggregate root, you might consider a redesign. If it only calls one method, it is up to that single method to make sure that when a domain event is published it is valid in a global sense. In general this is done by publishing only one of two events: one on success and one on failure. To be fair, there might be more than one failure type, though.

An interesting approach is to return the domain event from the method instead of publishing it. This totally relieves the domain object of any responsibility related to infrastructure.
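A sketch of what that could look like; the Account example, the IDomainEvent interface and the event types are mine, not from any particular framework:

public class Account
{
    public decimal Balance { get; private set; }

    // The aggregate returns the domain event instead of publishing it.
    // The Application Service decides what to do with the event.
    public IDomainEvent Withdraw(decimal amount)
    {
        if (amount > Balance)
        {
            // Domain logic error signalled as an event, not an exception.
            return new WithdrawalRejected(amount, Balance);
        }

        Balance -= amount;
        return new AmountWithdrawn(amount);
    }
}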

Application Services

There are (at least) two parts to the handling of errors in an Application Service. One part is where it’s calling your own code. Here you have total control over what exceptions you want to throw, or even whether you want to use specialized return codes instead of exceptions. You can also choose to use events to signal errors. This suggests the need for some kind of Saga for each service the Application Service exhibits.

I will suggest using exceptions, since it’s easier to generalize the handling of these, and the generalization is important, if you want to have consistent and manageable error handling.

If you’re using third-party components, where the error handling may be different and possibly widely inconsistent, I think you should wrap the component in a thin layer, which normalizes and standardizes the errors which leaves the layer.

You will need to standardize the exceptions you want to let escape the Application Service. It may be that you need more exceptions than previously discussed, because different scenarios need different detail information passed on to the user of the Application Service. But you should keep the top of the hierarchy to a minimum, like this:

Exception Classes - Top Hierarchy

Having just a small number of errors at the top of the pyramid makes it considerably more manageable for the user of the layer in question to handle most errors in the same way, and then give special attention only to a very specific small number of errors.

There are, in general, two ways of generalizing the error handling in an Application Service. One is to use Commands and inheritance, having the generalized error handling in a base class. The other is to use methods with nice, meaningful names and, in these methods, wrap the executing code in a Func<> parameter to an error handler method. And then there are any number of combinations of these two methods.

I prefer the one with Commands and inheritance, since you then cannot forget to add the error handling code. It’s there, and it’s been written once for all your Application Services. Job well done. If you want to return any error information from your application service – and most likely you will – both approaches will work. Catching an exception or subscribing to an error domain event is comparatively the same amount of work. But as suggested above you cannot fully avoid a try-catch in your application service; this is the part you will want to generalize in an ApplicationService base class.
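A minimal sketch of the Commands-and-inheritance approach. The exception types are the ones from this series; the rest of the names (and the Log calls) are illustrative:

// Base class that centralizes the error handling; derived commands
// only implement ExecuteCore().
public abstract class ApplicationServiceCommand
{
    public void Execute()
    {
        try
        {
            ExecuteCore();
        }
        catch (UserRecoverableException exception)
        {
            Log.Warning(exception, "User intervention required.");
            throw;
        }
        catch (RecoverableException exception)
        {
            Log.Warning(exception, "Recoverable error - the caller may retry.");
            throw;
        }
        catch (Exception exception)
        {
            Log.Error(exception, "Unrecoverable error.");
            throw new UnrecoverableException("Command failed.", exception);
        }
    }

    protected abstract void ExecuteCore();
}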

And if you boil down your error handling approach to a few common cases, as described in this and the previous post, that generalized code can be written once and for all. Then you only have to specially handle some very specific cases for each aggregate root.

As noted above you can choose to signal errors out of the Application Service with exceptions, return codes or events. Exceptions are easy to understand, and you don’t need an event infrastructure of any kind. Return codes are messy; I suggest not using them.

Events are a very elegant way of signalling anything. It decouples everything and you can have other parts of the code listening in on the events, without being the instigator of the action causing the event.

You will need some kind of event infrastructure, though. This can be a simple in-memory thingy or a mean beast like NServiceBus. The various implications of using events are a discussion for another post. It’s not that hard; you just need another mindset.

Hey! Catch! 😉

Error handling – the easy way

Error handling

Error handling is boring. Error handling is hard. Error handling is tedious. There is a lot of work involved. You have to think about it everywhere. And you will never be quite finished with it. But by standardizing and generalizing error handling you will save a lot of work, because, all in all, a lot of errors should be handled in exactly the same way.

The discussion here is rooted in Windows, .NET and SQL Server, but the principles and considerations are equally valid on any platform.

There are two kinds of audience for an error: computers and people. And there are basically two kinds of errors: recoverable and unrecoverable. The recoverable errors can in turn be divided into two kinds: those requiring user intervention and those that can basically be recovered from by trying again. This means we end up with three main groups of errors:

  1. Unrecoverable errors
  2. Recoverable errors requiring user intervention
  3. Recoverable errors not requiring user intervention

Unrecoverable errors

This is the kind of error that neither a user nor a client application can recover from in any way. It will sometimes be a configuration or a program logic error, but it could also be a network that is down. There is nothing the client can currently do about it.

Recoverable errors requiring user intervention

A typical cause can be a user-entered value which caused e.g. a database unique constraint on a username column to act up. The user can correct the problem by entering another username.

Recoverable errors not requiring user intervention

Here we have situations like timeout errors and database deadlock problems. All of these can (usually) be corrected by automatically retrying the operation. The user need not be involved (at least, not at first).

Exceptions versus return codes

I really don’t want to poke the religious war here; I simply want to state my opinion: if a method call returns and doesn’t blow up in your face, the call is considered to have succeeded and no error occurred. Simple as that.

You can forget to check return codes, which will usually result in your application failing in spectacular ways, far away from where the error originated. You can also, of course, forget to catch an exception, but an unhandled exception will usually cause your application to crash disgracefully, and at least near where the error occurred.

So I use exceptions, even for error conditions, which are not only not exceptional but are actually expected to happen.

The joy of layering

An application typically has several layers, no matter whether it is a classic desktop application or a web application with service and database layers and what have you. The layering actually helps us standardize error handling, because we can choose a set of errors that we’ll allow to cross the layer boundary.

Let’s consider some typical layers.

Database layer

A huge number of errors can occur when working with a database. The discussion here should not in any way be considered exhaustive; Murphy is way more inventive than I am.

When using the word client here, I typically mean the code that is actually using the layer.

Duplicate Key

When you want a column (or a set of columns) in a database table to be unique, you create a unique constraint. If you happen to violate this constraint when trying to insert or update data, you will – if using SQL Server and the .NET platform – receive a SqlException, which will indirectly inform you of the problem through a number of properties, as shown below:

Property  Value               Description
Number    2627                SQL Server error code
Class     14                  Severity level in SQL Server
Errors    SqlErrorCollection  List of detailed error descriptions
Message   Violation of UNIQUE KEY constraint ‘*name of constraint*’. Cannot insert duplicate key in object ‘*name of table*’. The statement has been terminated.

If you name your constraints consistently you can retrieve the root of the problem from the Message property. SQL Server cannot effectively figure out – in the general case – which column caused the problem, since unique constraints can span several columns.
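If, for instance, you name all unique constraints UQ_<table>_<column>, a crude way of getting the information back out could look like this. This is a sketch; the naming convention and the use of a regular expression are assumptions (using System.Text.RegularExpressions):

// Extract the constraint name from the 2627 message and split it
// according to a UQ_<table>_<column> naming convention.
var match = Regex.Match(
    sqlException.Message,
    @"Violation of UNIQUE KEY constraint '(?<constraint>[^']+)'");

if (match.Success)
{
    var parts = match.Groups["constraint"].Value.Split('_');
    if (parts.Length == 3 && parts[0] == "UQ")
    {
        var entityName = parts[1];
        var duplicateFieldName = parts[2];
        // ... use these to build an exception detailing where the problem lies.
    }
}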

It’s worth noting that if you happen to use Entity Framework, the SqlException will be the InnerException of a System.Data.UpdateException.

The gist of this example is that this kind of error is probably a user recoverable error, if you can give your user the right information. I’d suggest that you have a base RecoverableException, inherit a UserRecoverableException from this, and again inherit a DuplicateKeyException with properties EntityName and DuplicateFieldName, which would detail where the problem lies.

The UI could then use this information to tell the user how to proceed to correct the error.
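In code that hierarchy is tiny. A sketch, with constructors and serialization plumbing omitted:

public class RecoverableException : Exception
{
}

public class UserRecoverableException : RecoverableException
{
}

public class DuplicateKeyException : UserRecoverableException
{
    // Details of where the duplicate key problem lies.
    public string EntityName { get; set; }
    public string DuplicateFieldName { get; set; }
}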

Invalid Data

When checking the data for e.g. consistency or field lengths before entering them into the database, you may encounter some problems with the data. It could also be the database throwing a constraint violation error. This would cause a SqlException with the Number property set to 547.

Since it’s in general hard to figure out exactly what went wrong, we will consider this a program logic error and throw an InvalidDataException, which is inherited from UnrecoverableException.

String or binary data would be truncated

When you try to put too much data into a column, SQL Server will naturally complain about this. In .NET this will result in a SqlException with the Number property having the telling value 8152, but no other information is available. SQL Server is not kind enough to tell us which column we’re happily trying to stuff too much information into. This seems rather unhelpful, and since we don’t have this information we don’t have any other choice than making this an UnrecoverableException. The problem is of course a design flaw in the software, which would be hard to rectify at run time anyway. We will throw a TruncatedDataException.

OptimisticConcurrencyException

This exception is from Entity Framework, which can detect this situation if the table(s) involved have a rowversion column. The error occurs when the user tries to update (or delete) data that has been modified (or deleted) by another user.

In general, this must be considered a UserRecoverableException and I would suggest specializing this situation into a DataUpdatedException. Your usage scenario determines what kind of properties you would put onto this exception. The UI-handler could handle this situation by loading the new data and doing a compare to discover what data has been changed.

If you are able to detect it you can additionally specialize into a DataDeletedException.

Transaction was deadlocked

In a highly concurrent (and probably badly designed) environment it is likely that enough locking contention occurs in the database that SQL Server has to kill a session, so a command doesn’t get to finish. This will result in a SqlException with the Number property set to 1205.

These situations can often be resolved automatically by the client by retrying the command. It is therefore a RecoverableException, which I would specialize into a DeadlockedException.
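Since a DeadlockedException (and the TimeoutException below) derives from RecoverableException, the automatic retry can be written once. A sketch; the retry count and back-off delay are arbitrary choices (using System and System.Threading):

public static T ExecuteWithRetry<T>(Func<T> action, int maxAttempts)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            return action();
        }
        catch (RecoverableException)
        {
            if (attempt >= maxAttempts)
            {
                throw;
            }

            // Back off a little before retrying.
            Thread.Sleep(TimeSpan.FromMilliseconds(200 * attempt));
        }
    }
}

It could be used e.g. as var order = ExecuteWithRetry(() => repository.Load(orderId), 3); where the repository call is, of course, hypothetical.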

Timeouts

Timeouts can be said to be a close relative of deadlocks, but the error is not detected by SQL Server, which, as such, does not care how long it takes to run a query. In .NET the situation is brought to our attention by the ADO.NET layer, which has a command timeout setting for the connection. When this timeout is reached, the command is cancelled and a SqlException is abused to signal the problem, with the Number property set to -2 (yes, minus two!).

If you are using Entity Framework you should catch a System.Data.EntityCommandExecutionException and look at InnerException, where the Number property is -2, as above.

Usually this problem can be rectified by running the command again, but always remember to log the error (with the time spent on the command).

This is a RecoverableException. Specialize it with TimeoutException.

Other weird Entity Framework exceptions

A few other Entity Framework exceptions worth mentioning: System.Data.EntityException typically suggests an authorization problem. The account does not have access to the database. This can also be caused by an incorrect connection string.

System.Data.EntityCommandExecutionException usually means that there is a mismatch between the database and the Entity Framework model (the .edmx). Can be caused by forgetting to update the model after changing the database.

System.Data.EntityCommandCompilationException also suggests some kind of mismatch between the database and the Entity Framework model.

All these exceptions are, of course, unrecoverable at run time, and this should be signalled by throwing a ProviderInaccessibleException, which is a specialization of UnrecoverableException.

Putting it under one roof

I usually create or reuse a DataAccessHandler class, which handles all the various database problems I come across and maps them to one of these exceptions – or a subset of them:

  • DataDeletedException
  • DataUpdatedException
  • DeadlockedException
  • DuplicateKeyException
  • InvalidDataException
  • AuthorizationException
  • ProviderInaccessibleException
  • TimeoutException
  • TruncatedDataException

And since all these exceptions inherit from UnrecoverableException, RecoverableException and UserRecoverableException, the catching and handling of them can be done quite nicely and tidily.
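The core of such a DataAccessHandler is just a translation from SqlException numbers to the exceptions above. A sketch; the real class also unwraps the Entity Framework exceptions mentioned earlier, and the exception constructors taking a message and an inner exception are assumptions:

public static class DataAccessHandler
{
    // Translates a SqlException into one of the exceptions listed above.
    public static Exception Translate(SqlException exception)
    {
        switch (exception.Number)
        {
            case 2627:  // Unique constraint violation.
                return new DuplicateKeyException(exception.Message, exception);
            case 547:   // Constraint violation.
                return new InvalidDataException(exception.Message, exception);
            case 8152:  // String or binary data would be truncated.
                return new TruncatedDataException(exception.Message, exception);
            case 1205:  // Transaction was deadlocked.
                return new DeadlockedException(exception.Message, exception);
            case -2:    // Command timeout.
                return new TimeoutException(exception.Message, exception);
            default:
                return new UnrecoverableException(exception.Message, exception);
        }
    }
}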

Wrapping up – for now

I have discussed error handling in a data access layer – and especially error reporting out of the layer using exceptions. In coming posts I will discuss the corresponding issues in Application Services, SOAP service layers and REST service layers.

I will end this post with a smashing beautiful class diagram of the exceptions suggested:

Exception Classes

Serialization in one line

I have this neat static helper class, which makes it easy to serialize an instance of a class marked with [DataContract] and [DataMember]. E.g.

var xml = DataContractSerialization.Serialize(instance);

Where instance is an instance of any class which is serializable with DataContractSerializer.

Deserialization is equally easy:

var instance = DataContractSerialization.Deserialize(typeof(MyClass), xml);

or:

var instance = DataContractSerialization.Deserialize<MyClass>(xml);

The code for that class, which can also be found on GitHub:

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Text;
using System.Xml;

namespace SoftwarePassion.Common.Core.Serialization
{
    /// <summary>
    /// One-liners for the DataContractSerializer class.
    /// </summary>
    public static class DataContractSerialization
    {
        /// <summary>
        /// Serializes the specified value.
        /// </summary>
        /// <typeparam name="TType">The type of the value.</typeparam>
        /// <param name="value">The value.</param>
        /// <param name="knownTypes">The known types.</param>
        /// <returns>A string (xml) with the serialized value.</returns>
        public static string Serialize<TType>(TType value, params Type[] knownTypes)
        {
            if (knownTypes == null)
            {
                knownTypes = new Type[] { };
            }

            var serializer = new DataContractSerializer(typeof(TType), knownTypes);
            var stringBuilder = new StringBuilder(1024);
            using (var stream = XmlWriter.Create(stringBuilder))
            {
                serializer.WriteObject(stream, value);
                stream.Flush();
            }

            return stringBuilder.ToString();
        }

        /// <summary>
        /// Deserializes the specified XML into an instance of the given TType.
        /// </summary>
        /// <typeparam name="TType">The type of the data expected to have been serialized in the given xml.</typeparam>
        /// <param name="xml">The XML.</param>
        /// <param name="knownTypes">The known types.</param>
        /// <returns>A deserialized instance of the given TType.</returns>
        public static TType Deserialize<TType>(string xml, params Type[] knownTypes) where TType : class
        {
            if (string.IsNullOrEmpty(xml))
                return null;

            if (knownTypes == null)
            {
                knownTypes = new Type[] { };
            }

            var serializer = new DataContractSerializer(typeof(TType), knownTypes);

            using (var reader = new StringReader(xml))
            using (var stream = XmlReader.Create(reader))
            {
                return serializer.ReadObject(stream) as TType;
            }
        }

        /// <summary>
        /// Deserializes the specified XML into an instance of the given dataType.
        /// </summary>
        /// <param name="dataType">The type of the data expected to have been serialized in the given xml.</param>
        /// <param name="xml">The XML.</param>
        /// <param name="knownTypes">The known types.</param>
        /// <returns> A deserialized instance of the given dataType. </returns>
        public static object Deserialize(Type dataType, string xml, params Type[] knownTypes)
        {
            if (string.IsNullOrEmpty(xml))
                return null;

            if (knownTypes == null)
            {
                knownTypes = new Type[] { };
            }

            var serializer = new DataContractSerializer(dataType, knownTypes);

            using (var reader = new StringReader(xml))
            using (var stream = XmlReader.Create(reader))
            {
                return serializer.ReadObject(stream);
            }
        }
    }
}
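
As a small usage sketch (the Person class is made up for the occasion and assumes a using directive for System.Runtime.Serialization):

[DataContract]
public class Person
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public int Age { get; set; }
}

// Round trip: serialize to xml and back again.
var xml = DataContractSerialization.Serialize(new Person { Name = "Ada", Age = 36 });
var person = DataContractSerialization.Deserialize<Person>(xml);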

Testable Time

I was inspired by Mark Seemann's TimeProvider class from Dependency Injection in .NET, which is an Ambient Context for resolving the current date and time, but without the most prevalent shortcoming of DateTime.Now and DateTime.UtcNow: the lack of testability.

If you need to test how a class reacts to changes in date and/or time, there is no way around having a testable TimeProvider. For this purpose I created a set of four classes which interact nicely to allow for faking the time during a test. The most prominent is the TimeProvider class itself, because it's the one that allows you to query the time, simply by reading UtcNow:

var now = TimeProvider.UtcNow;

It is possible to change what TimeProvider returns from UtcNow by changing its Current TimeProvider with the help of the TimeSetter class:

using (new TimeSetter(new DateTime(2012, 12, 12, 12, 12, 12)))
{
    // interesting test code…
    var fakedNow = TimeProvider.UtcNow;   // will now return 2012-12-12 12:12:12
}

var realNow = TimeProvider.UtcNow;        // will now return whatever the actual current date and time is

TimeSetter implements IDisposable to allow for an easy way to reset which TimeProvider is used by UtcNow. The default provider is:

    /// <summary>
    /// DefaultTimeProvider implementation, which simply wraps a standard DateTime.UtcNow.
    /// </summary>
    internal class DefaultTimeProvider : TimeProvider
    {
        private DefaultTimeProvider()
        {
        }

        public static DefaultTimeProvider Instance
        {
            get { return DefaultTimeProvider.instance; }
        }

        protected override DateTime RetrieveUtcNow()
        {
            return DateTime.UtcNow;
        }

        private static readonly DefaultTimeProvider instance = new DefaultTimeProvider();
    }

TimeSetter also has a static SetNow method for changing the DateTime returned while a test is running. Here is the full class:

    /// <summary>
    /// Used for encapsulating a change of the DefaultTimeProvider. Very useful in tests.
    /// </summary>
    /// <remarks>
    /// Implements IDisposable so that the previous TimeProvider is automatically restored after use.
    /// Example:
    /// <code>
    ///   var timeProviderMock = new TimeProviderMock();
    ///   using (var timeSetter = new TimeSetter(timeProviderMock))
    ///   {
    ///     timeProviderMock.Now = new DateTime(2010, 1, 1);
    /// 
    ///     var now = TimeProvider.UtcNow; // Returns 2010-01-01.
    ///   }
    /// </code>
    /// </remarks>
    public sealed class TimeSetter : IDisposable
    {
        /// <summary>
        /// Initializes a new instance of the <see cref="TimeSetter"/> class.
        /// </summary>
        /// <param name="timeProvider">The time provider to set as current TimeProvider.</param>
        public TimeSetter(TimeProvider timeProvider)
        {
            if (timeProvider == null)
            {
                throw new ArgumentNullException("timeProvider");
            }

            previousProvider = TimeProvider.Current;
            TimeProvider.Current = timeProvider;
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="TimeSetter" /> class with a TimeProviderMock as the
        /// provider.
        /// </summary>
        /// <param name="currentTime">The current time.</param>
        public TimeSetter(DateTime currentTime)
        {
            previousProvider = TimeProvider.Current;
            TimeProvider.Current = new TimeProviderMock(currentTime);
        }

        /// <summary>
        /// Restores the previous TimeProvider.
        /// </summary>
        public void Dispose()
        {
            TimeProvider.Current = this.previousProvider;
        }

        /// <summary>
        /// Sets the current time, if the provider is a TimeProviderMock.
        /// </summary>
        public static void SetNow(DateTime now)
        {
            var mock = TimeProvider.Current as TimeProviderMock;
            if (mock != null)
                mock.DateTime = now;
        }

        private readonly TimeProvider previousProvider;
    }

The mock class used by TimeSetter:

    /// <summary>
    /// Convenience class for testing scenarios. Use TimeSetter to encapsulate it in a using statement.
    /// </summary>
    public class TimeProviderMock : TimeProvider
    {
        /// <summary>
        /// Initializes a new instance of the <see cref="TimeProviderMock" /> class, setting the DateTime to DateTime.UtcNow.
        /// </summary>
        public TimeProviderMock()
        {
            DateTime = DateTime.UtcNow;
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="TimeProviderMock" /> class, setting the DateTime to the given value.
        /// </summary>
        public TimeProviderMock(DateTime now)
        {
            DateTime = now;
        }

        /// <summary>
        /// Gets or sets the date time.
        /// </summary>
        public DateTime DateTime { get; set; }

        protected override DateTime RetrieveUtcNow()
        {
            return DateTime;
        }
    }

TimeProvider itself has some convenience methods for creating DateTimes:

    /// <summary>
    /// Ambient TimeProvider with support for property injection to change the actual
    /// TimeProvider implementation, which is very useful in testing scenarios.
    /// </summary>
    public abstract class TimeProvider
    {
        /// <summary>
        /// Gets the current TimeProvider.
        /// </summary>
        public static TimeProvider Current
        {
            get
            {
                return TimeProvider.current;
            }

            internal set
            {
                if (value == null)
                {
                    throw new ArgumentNullException("value");
                }

                TimeProvider.current = value;
            }
        }

        /// <summary>
        /// Gets the current time in UTC.
        /// </summary>
        public static DateTime UtcNow
        {
            get
            {
                return TimeProvider.Current.RetrieveUtcNow();
            }
        }

        /// <summary>
        /// Creates a DateTime of DateTimeKind.Utc
        /// </summary>
        /// <param name="year">The year.</param>
        /// <param name="month">The month.</param>
        /// <param name="day">The day.</param>
        /// <returns>The created DateTime.</returns>
        public static DateTime CreateUtc(int year, int month, int day)
        {
            return DateTime.SpecifyKind(new DateTime(year, month, day), DateTimeKind.Utc);
        }

        /// <summary>
        /// Creates a DateTime of DateTimeKind.Utc
        /// </summary>
        /// <param name="year">The year.</param>
        /// <param name="month">The month.</param>
        /// <param name="day">The day.</param>
        /// <param name="hour">The hour.</param>
        /// <param name="minute">The minute.</param>
        /// <param name="second">The second.</param>
        /// <returns>The created DateTime.</returns>
        public static DateTime CreateUtc(int year, int month, int day, int hour, int minute, int second)
        {
            return DateTime.SpecifyKind(new DateTime(year, month, day, hour, minute, second), DateTimeKind.Utc);
        }

        /// <summary>
        /// Creates a DateTime of DateTimeKind.Utc
        /// </summary>
        /// <param name="year">The year.</param>
        /// <param name="month">The month.</param>
        /// <param name="day">The day.</param>
        /// <param name="hour">The hour.</param>
        /// <param name="minute">The minute.</param>
        /// <param name="second">The second.</param>
        /// <param name="millisecond">The millisecond.</param>
        /// <returns>The created DateTime.</returns>
        public static DateTime CreateUtc(int year, int month, int day, int hour, int minute, int second, int millisecond)
        {
            return DateTime.SpecifyKind(new DateTime(year, month, day, hour, minute, second, millisecond), DateTimeKind.Utc);
        }

        /// <summary>
        /// Resets to using the default TimeProvider (which simply wraps DateTime.UtcNow).
        /// </summary>
        public static void ResetToDefault()
        {
            TimeProvider.current = DefaultTimeProvider.Instance;
        }

        /// <summary>
        /// Override to implement the specific way to return a current DateTime of DateTimeKind.Utc.
        /// </summary>
        /// <returns>A DateTime.</returns>
        protected abstract DateTime RetrieveUtcNow();

        private static TimeProvider current = DefaultTimeProvider.Instance;
    }

These four classes together should solve most of the needs for testable time.
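
To show how the classes play together, here is a small sketch of a test. The Invoice class is invented for the example and the [Test]/Assert parts assume a typical unit test framework such as NUnit:

public class Invoice
{
    public Invoice()
    {
        CreatedUtc = TimeProvider.UtcNow;
    }

    public DateTime CreatedUtc { get; private set; }

    public bool IsOverdue
    {
        get { return TimeProvider.UtcNow > CreatedUtc.AddDays(30); }
    }
}

[Test]
public void Invoice_is_overdue_after_30_days()
{
    using (new TimeSetter(TimeProvider.CreateUtc(2012, 1, 1)))
    {
        var invoice = new Invoice();

        // Fast-forward the fake clock a month and a half.
        TimeSetter.SetNow(TimeProvider.CreateUtc(2012, 2, 15));

        Assert.IsTrue(invoice.IsOverdue);
    }
}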

WCF Logging Configuration

WCF is a great framework, one of the few really brilliant frameworks from Microsoft. It can be a bit hard, though, to figure out what goes wrong when it doesn't work. One of the general drawbacks of high-level frameworks is that errors are typically swallowed deep down or are at least very hard to understand.

Fortunately WCF has some great support for logging very detailed information about what happens. Unfortunately it is a bit like black magic to figure it out. That is why I here provide a simple logging configuration for reuse. It hasn't failed yet to help me figure out what was wrong when WCF didn't work.

It is divided into two parts, one in <system.serviceModel>:

  <system.serviceModel>

    <diagnostics>
      <messageLogging logEntireMessage="true"
                      logMalformedMessages="true"
                      logMessagesAtServiceLevel="true"
                      logMessagesAtTransportLevel="true"
                      maxMessagesToLog="50000"
                      maxSizeOfMessageToLog="2147483647" />
    </diagnostics>

  </system.serviceModel>

And one in <system.diagnostics>:

  <system.diagnostics>
    <sources>
      <source name="System.ServiceModel" switchValue="Verbose, ActivityTracing" propagateActivity="true">
        <listeners>
          <add name="corr"/>
        </listeners>
      </source>

      <source name="System.ServiceModel.MessageLogging">
        <listeners>
          <add name="corr"/>
        </listeners>
      </source>

      <source name="System.Net" tracemode="includehex" maxdatasize="1024">
        <listeners>
          <add name="System.Net"/>
        </listeners>
      </source>

      <source name="System.Net.Sockets">
        <listeners>
          <add name="System.Net"/>
        </listeners>
      </source>

      <source name="System.Net.Cache">
        <listeners>
          <add name="System.Net"/>
        </listeners>
      </source>

    </sources>

    <switches>
      <add name="System.Net" value="Verbose"/>
      <add name="System.Net.Sockets" value="Verbose"/>
      <add name="System.Net.Cache" value="Verbose"/>
    </switches>

    <sharedListeners>
      <add name="System.Net" initializeData="c:logApplicationName-network.log" type="System.Diagnostics.TextWriterTraceListener"/>
      <add name="corr" initializeData="c:logApplicationName.svclog" type="System.Diagnostics.XmlWriterTraceListener"/>
    </sharedListeners>

    <trace autoflush="true"/>
  </system.diagnostics>

Notice the paths to the two log files; you will probably want to adjust them to your own application.

One of the neat things about this configuration is that it also logs messages at the network level, which especially helps when looking into HTTP PUT/GET/POST problems.

The .svclog file can be read by the Microsoft Service Trace Viewer (simply double-click on the file and it opens automatically). If you have a .svclog file on both the client side and the server side, the tool can correlate the two files, so the entire communication can be examined.

You have to have a little patience with the tool. The logs contain a lot of information and it can take a little time to learn to dig out the data needed to fix a problem.

Also watch out for the size of the log files. They can grow very big, very fast (and will then take a long time to load in the log viewer). Don’t leave a configuration like this on a production server!
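
If you want to keep the configuration around but quiet it down temporarily, my understanding is that you can switch the trace sources off and set the messageLogging attributes to false instead of deleting the whole section. A minimal sketch:

  <!-- In <system.diagnostics>: turn off the most verbose source. -->
  <source name="System.ServiceModel" switchValue="Off" propagateActivity="true">
    <listeners>
      <add name="corr"/>
    </listeners>
  </source>

  <!-- In <system.serviceModel><diagnostics>: stop logging message bodies. -->
  <messageLogging logEntireMessage="false"
                  logMalformedMessages="false"
                  logMessagesAtServiceLevel="false"
                  logMessagesAtTransportLevel="false" />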