Mr. Legacy

At my current (at the time of writing) workplace I am often referred to as Mr. Legacy. It's a title I'm honoured to have earned. But how did that come about?

I was hired to do some firefighting. My boss knew me from a previous assignment, and it was initially meant to be a short stint, but I liked it there, they seemed to like me, and one thing led to another, so I have stayed on board for – so far – more than 7 years.

For some reason I have a knack for messing around (in the good way) with old crap – that's just calling it what it is. I'm very good at surgical incisions: "spot-improving" the code and adding features without otherwise ruining the system. And I'm not just good at it. I actually like it!

But there was a time when even I was bested. I'm not proud of it, for several reasons, all of which will become clear.

It all started in 2008 when I got a call from my job broker: Would I like a small, cosy 120-hour project, implementing a few features in an existing project written in Borland C++? Of course I would, since I hadn't been working for a couple of months.

The application was an automatic duty roster planner, managing rosters for employees who work different shifts while taking into account various union rules, employees' wishes, overtime etc. The built-in autoplanning feature worked surprisingly well.

At the time they had made some promises to a customer about the application. It should be able to not only plan the shifts but also record which hours the employees actually worked.

My job in that respect was to implement an export of those hours to a third party accounting system. A simple job, you would think, but I was in for a surprise.

My first action was to take a look at the database. The data obviously had to be there somewhere to make an export happen. There were some tables that had the right smell, but they did not fit the description exactly. After a few hours I had to give in – I could not figure out how this was supposed to be done. And at that time I was pretty certain that it would take more than the 120 hours to implement, because the application did not even seem to have the features for recording the hours.

I had a meeting with the boss and told him my view of the lay of the land. He was both shocked and not surprised at the same time.

It was not thought through at the time, but I suggested we make some kind of partnership deal, because I really did think there was something valuable in that product: great ideas, great functionality and existing customers. We came to an agreement, the details of which are not relevant here.

Then started the ordeal. Another requirement from the customer was to enable multi-user support. Had I known about that requirement, I most likely would not have entered into the agreement.

The application used a proprietary database – EasyTable (nothing to do with the name of the application) – which is really a quite decent database. And it sure was easy to use.

But in the application there was no sign of a data access layer. SQL was littered all over it. Concatenated SQL. No parameter bindings. At all. Table and field names were repeated all over, too.

To enable multi-user support we had to switch to another database, which meant we needed to become more or less database agnostic. In other words: a DAL was needed. A database-independent one. A task that in itself is not that difficult.

I wrote some basic Entity and EntityList classes and quickly figured out how to implement some C++ template classes to help me out. Boring job, but it had to be done.

The really hard part was changing the existing embedded SQL statements and magic-string field names over to using Entities and EntityLists.

The application used a DataModule form on which TQuery thingies (Delphi did the same, if I remember correctly) were placed, more or less resembling the actual tables in the database. Mostly less. These queries were generally abused all over the application to hold some random SQL and return the corresponding recordset, or to update various data in the database.

The main form in the application had about 10,000 lines of code-behind. A staggering number of lines, but if it had been well written it would have been manageable. It was not. Global variables were as abundant as parentheses in Lisp. And they were referenced from everywhere, even from other forms.

The application was a nightmare; it was surprising it worked as well as it did.

Eventually I had to throw in the towel. After I had sunk hundreds of hours into the application I gave up. It had been cleaned up exceptionally well by then, but admittedly I ran out of the money I was not earning while working on it.

Lessons learned

First I'll take a look at what went wrong for the application in the first place. What thoughts went into choosing C++ Builder for the task? At the time, in 2008, it would have been considered insane, but we have to look at it in the context of 2003, when they started the application. What were the options back then?

.NET 1.1 came out officially on 9 April 2003 (https://en.wikipedia.org/wiki/.NET_Framework_version_history), and .NET really was not useful for much before then. I do not remember the state of the WinForms designer, but I guess it was about as good as the VB6 designer, which was horrible but was used to create loads of applications.

The other option was Delphi, also from Borland. I guess the form designers were at about the same level, and I was pretty fond of the designers in Delphi 1 through 6.

The people making the choice were not very experienced in the business. They were just out of school and chose what they had used in class: Borland C++ Builder, which anyone who has used it will readily admit is not a good product. But still, I understand why they made the choice.

Whatever tool they had chosen, it would not have saved them from the flood of mistakes they continued to make. Again, it was probably understandable. The application started as a proof of concept, kept growing, and was in such demand that they never could get around to refactoring it.

But did I learn something from this quest? I have to say yes. It was hard earned, so it would be sad to have learned nothing.

In hindsight I should, of course, have left the assignment after the first few hours, but what could have warned me back then?

Firstly, I probably should have spent a considerable number of hours trying to understand the innards of the application before entering any agreement about it. That I did not do. Today I would – in the same situation – simply state that the wanted functionality could not be implemented given the current state of the application.

I should also have talked to some customers – or at least the boss – about expected and known feature requests, so I could have judged whether they were feasible.

Error handling – part 4 – now on to the REST

After the slippery SOAP adventure it is time to look at a more forward-looking way of exposing your API to the world: REST. Again the discussion is rooted in the .NET world and uses Web API as the underlying engine for explaining the principles. And again the principles are equally applicable to other languages, frameworks and platforms.

Returning an error from a REST service is basically as simple as returning an HTTP status code. How you do that is naturally implementation-dependent, but we still have to figure out which status code to return when. Taking the first article as a starting point, we could produce a mapping to status codes like the following table:

| Exception | HTTP Status Code | Description |
| --- | --- | --- |
| DuplicateKeyException | 409 Conflict | Duplicate key |
| DeadlockedException | 409 Conflict | Deadlock |
| DataUpdatedException | 409 Conflict | Concurrency conflict |
| TimeoutException | 504 Gateway Timeout | Timeout |
| AuthorizationException | 403 Forbidden | Not authorized |
| InvalidDataException | 400 Bad Request | Invalid data |
| TruncatedDataException | 400 Bad Request | Truncated data |
| ProviderInaccessibleException | 502 Bad Gateway | Provider inaccessible |
| System.Exception | 500 Internal Server Error | Internal server error |

This is a mapping that might have been more natural if it had been created the other way around, starting with HTTP status codes and then inventing exceptions matching those. And it is a mapping that might not always fit your scenario. But remember that it reflects a general view of the status codes.
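
Expressed in code, the table could look something like the following sketch. The exception types are the ones used throughout this series; the class name, the dictionary approach and the exact-type lookup are illustration choices here, not necessarily how a real mapper would be structured:

using System;
using System.Collections.Generic;
using System.Net;

/// <summary>
/// Sketch: resolves the HTTP status code for an exception according to the table above.
/// </summary>
internal static class ExceptionStatusCodeMap
{
    private static readonly Dictionary<Type, HttpStatusCode> Map =
        new Dictionary<Type, HttpStatusCode>
        {
            { typeof(DuplicateKeyException), HttpStatusCode.Conflict },
            { typeof(DeadlockedException), HttpStatusCode.Conflict },
            { typeof(DataUpdatedException), HttpStatusCode.Conflict },
            { typeof(TimeoutException), HttpStatusCode.GatewayTimeout },
            { typeof(AuthorizationException), HttpStatusCode.Forbidden },
            { typeof(InvalidDataException), HttpStatusCode.BadRequest },
            { typeof(TruncatedDataException), HttpStatusCode.BadRequest },
            { typeof(ProviderInaccessibleException), HttpStatusCode.BadGateway }
        };

    public static HttpStatusCode Resolve(Exception exception)
    {
        // An exact type lookup keeps the sketch simple; a real implementation would
        // probably also want to match subclasses of the registered exception types.
        HttpStatusCode code;
        return Map.TryGetValue(exception.GetType(), out code)
            ? code
            : HttpStatusCode.InternalServerError; // System.Exception and anything unknown
    }
}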

Besides returning an HTTP status code to the client it can be helpful to return some more information, which can help the client figure out what went wrong and maybe retry with a corrected request. In my most recent project we returned an error structure like this:

/// <summary>
/// The response structure returned in the body from the service when an error occurs.
/// </summary>
public class ErrorResponse
{
    /// <summary>
    /// The http response code.
    /// </summary>
    public int code { get; set; } 

    /// <summary>
    /// Will contain the value "error".
    /// </summary>
    public string status { get; set; }

    /// <summary>
    /// Gets or sets further information about the error.
    /// </summary>
    public object data { get; set; }

    /// <summary>
    /// Gets or sets the message id of the log message which was logged in connection with the error.
    /// </summary>
    public Guid? messageId { get; set; }

    /// <summary>
    /// Gets or sets the correlation id of the log message which was logged in connection with the error.
    /// </summary>
    public Guid? correlationId { get; set; }

    /// <summary>
    /// Gets or sets a short description of the error.
    /// </summary>
    public string error { get; set; }
}

This would, in our project, be converted to JSON, but it could just as well be converted to XML. The main point of the structure is that the code, status and data properties are available whether the response was a success or a failure.

We also had an ExtendedErrorResponse, which – in development and test – could be requested by sending along a custom header. This response would include detailed information about any exceptions, which would ease debugging.
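
A sketch of how that check might look. The header name used here (X-Extended-Error) is made up for the example – the real project used its own custom header – and the allowExtendedErrorInformation flag corresponds to the configuration setting handed to the mapper in the code further down:

// Sketch: decide whether the caller should get the ExtendedErrorResponse.
// "X-Extended-Error" is a hypothetical header name, not the one we actually used.
private static bool WantsExtendedError(
    System.Net.Http.HttpRequestMessage request,
    bool allowExtendedErrorInformation)
{
    // Only honour the header when the environment (dev/test) allows extended error information.
    return allowExtendedErrorInformation
        && request.Headers.Contains("X-Extended-Error");
}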

The project used Web API, and we had to figure out a way to generalize the above handling of exceptions and map them to responses. This turned out to be fairly easy: create a class derived from ApiControllerActionInvoker and implement the InvokeActionAsync method in a manner similar to this:

    /// <summary>
    /// Asynchronously invokes the specified action by using the specified controller context. 
    /// </summary>
    /// <param name="actionContext">The controller context.</param>
    /// <param name="cancellationToken">The cancellation token.</param>
    /// <returns>A task representing the invoked action.</returns>
    public override async Task<HttpResponseMessage> InvokeActionAsync(HttpActionContext actionContext, CancellationToken cancellationToken)
    {
        Contract.Requires(actionContext != null);
        Contract.Ensures(Contract.Result<Task<HttpResponseMessage>>() != null);

        StatisticsContext statisticsContext = null;
        OperationContext operationContext = null;
        var request = actionContext.Request;
        try
        {

            statisticsContext = StatisticsContextSession.GetStatisticsContext(request);
            StatisticsStartRequest(request, statisticsContext);

            var commonRequestParameters = GetCommonRequestParameters(actionContext);

            operationContext = SetControllerOperationContext(commonRequestParameters, request);

            var filterTrees = GetServicelagController(request).CreateODataFilterTrees().ToList();
            var allowedTreeNames = GetAllowedODataFilterTreeNames(actionContext, filterTrees);
            ODataFilterVerifier.ParseAndCheckODataFilter(commonRequestParameters, filterTrees, allowedTreeNames);

            ParseAndCheckAuthorization(request, commonRequestParameters, operationContext, applicationIdChecker);

            // Here the task that executes the actual APIController method is created
            Task<HttpResponseMessage> actionTask =
                InvokeControllerActionAsync(operationContext, actionContext, statisticsContext, cancellationToken);

            // and executed
            var response = await actionTask;

            StatisticsEndRequest(statisticsContext, request, response);

            return response;
        }
        catch (Exception exception)
        {
            var response = ErrorResponseMapper.LogErrorAndCreateErrorResponse(
                actionContext.Request,
                operationContext,
                statisticsContext,
                exception,
                configuration.AllowExtendedErrorInformation == AllowExtendedErrorInformation.Yes);

            StatisticsEndRequest(statisticsContext, request, response);
            return response;
        }
    }

To register this class in Web API you would – e.g. in WebApiConfig – do something like this:

        config.Services.Replace(
            typeof(IHttpActionInvoker), 
            new MyActionInvoker(...parameters...));

I have left in quite a few details not necessary for understanding the actual error handling, just to suggest that this method is a nice place to do a lot of pre- and post-request handling. We needed, for instance, statistics on which services were called, when, and by whom. Call timings were also quite easy to do like this.

The error handling is, of course, done in catch (Exception exception), which delegates the responsibility to an ErrorResponseMapper that basically takes care of mapping the exception to an HTTP status code as described in the table above.

To allow for custom, non-generalized status codes we created an HttpResponseMessageException (derived from UnrecoverableException), which allows the individual API controllers to throw an exception requesting that a specific HTTP status code be returned:

/// <summary>
/// An exception thrown when a Controller wants to respond with a specific <see cref="HttpStatusCode"/>.
/// </summary>
[Serializable]
public class HttpResponseMessageException : UnrecoverableException
{
    /// <summary>
    /// Initializes a new instance of the <see cref="HttpResponseMessageException"/> class.
    /// </summary>
    /// <param name="code">The <see cref="HttpStatusCode"/> to return to the client.</param>
    /// <param name="message">A message with details.</param>
    public HttpResponseMessageException(HttpStatusCode code, string message)
        :base(message)
    {
        Code = code;            
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="HttpResponseMessageException"/> class.
    /// </summary>
    /// <param name="code">The <see cref="HttpStatusCode"/> to return to the client.</param>
    /// <param name="innerException">The inner exception, which caused this exception to be created.</param>
    public HttpResponseMessageException(HttpStatusCode code, Exception innerException)
        : base(string.Empty, innerException)
    {
        Code = code;
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="HttpResponseMessageException"/> class.
    /// </summary>
    /// <param name="code">The <see cref="HttpStatusCode"/> to return to the client.</param>
    /// <param name="message">A message with details.</param>
    /// <param name="innerException">The inner exception, which caused this exception to be created.</param>
    public HttpResponseMessageException(HttpStatusCode code, string message, Exception innerException)
        : base(message, innerException)
    {
        Code = code;
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="HttpResponseMessageException"/> class.
    /// </summary>
    /// <param name="info">The <see cref="T:System.Runtime.Serialization.SerializationInfo" /> that holds the serialized object data about the exception being thrown.</param>
    /// <param name="context">The <see cref="T:System.Runtime.Serialization.StreamingContext" /> that contains contextual information about the source or destination.</param>
    protected HttpResponseMessageException(SerializationInfo info, StreamingContext context)
        : base(info, context)
    {
        Contract.Requires(info != null);

        var code = info.GetInt32(CodeKey);
        Contract.Assume(Enum.IsDefined(typeof(HttpStatusCode), code));
        Code = (HttpStatusCode)code;
    }

    /// <summary>
    /// When overridden in a derived class, sets the <see cref="T:System.Runtime.Serialization.SerializationInfo" /> with information about the exception.
    /// </summary>
    /// <param name="info">The <see cref="T:System.Runtime.Serialization.SerializationInfo" /> that holds the serialized object data about the exception being thrown.</param>
    /// <param name="context">The <see cref="T:System.Runtime.Serialization.StreamingContext" /> that contains contextual information about the source or destination.</param>
    /// <PermissionSet>
    ///   <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="*AllFiles*" PathDiscovery="*AllFiles*" />
    ///   <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="SerializationFormatter" />
    ///   </PermissionSet>
    [SecurityCritical]
    public override void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        base.GetObjectData(info, context);
        info.AddValue(CodeKey, Code);
    }

    /// <summary>
    /// Gets the <see cref="HttpStatusCode"/> intended for the client.
    /// </summary>
    public HttpStatusCode Code { get; private set; }

    private const string CodeKey = "Code";
}

It could be used something like this:

throw new HttpResponseMessageException(
    HttpStatusCode.BadRequest,
    "Unsupported $Filter query: " + commonRequestParameters.Filter,
    exception);

The ErrorResponseMapper then takes care of mapping this to an ErrorResponse.
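
Inside the mapper, the special-casing could look something like the following sketch. CreateErrorResponse is a placeholder for whatever actually builds the HttpResponseMessage from an ErrorResponse, and ExceptionStatusCodeMap refers to the mapping sketch shown earlier:

// Sketch: a controller-requested status code takes priority over the generic mapping.
var custom = exception as HttpResponseMessageException;
var statusCode = custom != null
    ? custom.Code                                // the status code requested by the controller
    : ExceptionStatusCodeMap.Resolve(exception); // fall back to the generic mapping table

return CreateErrorResponse(request, statusCode, exception); // placeholder helper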

Summary

I have in this series tried to shed some light on how to build a generic error handling (micro-)framework into your application. If you do this you will always have some default error handling in place, which then can be individualized when necessary along the way.

I hope you see that the process is really not that complicated; you just have to use the features that the languages and frameworks give you.

Having the default error handling in place, allows you to REST peacefully at night 😉

Error handling – part 3 – as easy as sliding in SOAP

One way to expose your API to the world is via a SOAP interface. Yes, I know it's old and out of fashion, but there are fantasillions of installations using SOAP, and for some bizarre reason new installations are popping up every day. So for good measure I will discuss how you keep the slick error handling introduced previously and generalize it in your SOAP API.

I will discuss this in the light of the Microsoft WCF beast, but the discussion is – I guess – equally applicable to other SOAP frameworks.

SOAP Faults

SOAP has an official way of signalling error conditions to its users – the SOAP Fault – and WCF has the FaultContract attribute for this. It is used on the methods of a ServiceContract to declare which types/classes can be returned with error information.

In my endeavours with SOAP I have chosen to return one of two FaultContracts: RecoverableFault and UnrecoverableFault. In some situations it might be useful to have a UserRecoverableFault as well.
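
For completeness, this is roughly how the faults are declared on a ServiceContract. The contract and method names here are made up for illustration; the FaultContract attributes are the relevant part:

using System.ServiceModel;

[ServiceContract]
public interface IAccountService
{
    // Declares which fault types this operation may return to the client.
    [OperationContract]
    [FaultContract(typeof(RecoverableFault))]
    [FaultContract(typeof(UnrecoverableFault))]
    void CreateAccount(string username, string password);
}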

I want to show you a sample implementation of RecoverableFault here and discuss its pros and cons. Please note that it has been a while since I have used this, so it may not reflect the content of the previous articles 100%, and it is definitely not as mature:

/// <summary>
/// Describes a recoverable fault, which is a fault that can be recovered from by correcting data and/or retrying.
/// </summary>
[DataContract]
public class RecoverableFault
{
    /// <summary>
    /// Initializes a new instance of the <see cref="RecoverableFault"/> class.
    /// </summary>
    /// <param name="faultType">Type of the fault.</param>
    /// <param name="exceptionDetails">The exception details.</param>
    public RecoverableFault(RecoverableFaultType faultType, string exceptionDetails)
    {
        FaultType = faultType;
        ExceptionDetails = exceptionDetails;
    }

    /// <summary>
    /// Gets the type of the fault.
    /// </summary>
    /// <see cref="RecoverableFaultType"/>
    [DataMember]
    public RecoverableFaultType FaultType { get; private set; }

    /// <summary>
    /// Gets or sets a generic description of the id of the entity which the fault refers to, if available. Can be left empty.
    /// </summary>
    [DataMember]
    public string EntityId { get; set; }

    /// <summary>
    /// Gets or sets the name of the entity to which the fault refers, if available. Can be left empty.
    /// </summary>
    [DataMember]
    public string EntityName { get; set; }

    /// <summary>
    /// Gets or sets the name of the key in cases of DuplicateData FaultType.
    /// </summary>
    [DataMember]
    public string KeyName { get; set; }

    /// <summary>
    /// Gets the exception details, if any were available. Can be used for debugging and logging purposes.
    /// </summary>
    [DataMember]
    public string ExceptionDetails { get; private set; }

    /// <summary>
    /// Returns a <see cref="System.String"/> that represents this instance.
    /// </summary>
    /// <returns>
    /// A <see cref="System.String"/> that represents this instance.
    /// </returns>
    public override string ToString()
    {
        return "RecoverableFaultType.{1}{0}EntityId:{2}{0}EntityName:{3}{0}KeyName:{4}{0}Details:{5}".FormatInvariant(
                Environment.NewLine,
                FaultType,
                EntityId,
                EntityName,
                KeyName,
                ExceptionDetails);
    }
}

Note the ExceptionDetails property. This is something I would only leave in if the SOAP API is for internal use only. Having exception details may risk exposing implementation details, such as usernames, connection strings, etc. to the client, which is something that we really don’t want 😉

The EntityId, EntityName and KeyName properties surely suggest that this Fault has been used primarily for database-related problems.

The RecoverableFaultType is a simple enum, which underlines the above assumption:

/// <summary>
/// A categorization of Recoverable Faults.
/// </summary>
public enum RecoverableFaultType
{
    /// <summary>
    /// The data being updated/deleted was already deleted by another user.
    /// </summary>
    DataDeletedByAnotherUser,

    /// <summary>
    /// The data being updated/deleted was already updated by another user.
    /// </summary>
    DataUpdatedByAnotherUser,

    /// <summary>
    /// The data being added already exists.
    /// </summary>
    DuplicateData,

    /// <summary>
    /// The database dead locked while applying the changes.
    /// </summary>
    Deadlocked,

    /// <summary>
    /// There was a time out while applying the changes.
    /// </summary>
    Timeout    
}

This enum can be used for simple case-switching in the client when trying to determine a valid action for the error.
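
A sketch of such case-switching on the client side – the service proxy (client) and the chosen reactions are made up for the example:

try
{
    client.CreateAccount(username, password);
}
catch (FaultException<RecoverableFault> faultException)
{
    switch (faultException.Detail.FaultType)
    {
        case RecoverableFaultType.Deadlocked:
        case RecoverableFaultType.Timeout:
            // Retry the call, possibly after a short delay.
            break;

        case RecoverableFaultType.DuplicateData:
            // Ask the user for e.g. another value (see faultException.Detail.KeyName).
            break;

        case RecoverableFaultType.DataUpdatedByAnotherUser:
        case RecoverableFaultType.DataDeletedByAnotherUser:
            // Reload the data and let the user decide how to proceed.
            break;
    }
}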

The UnrecoverableFault could be modeled along these lines:

/// <summary>
/// Description of an unrecoverable error.
/// </summary>
[DataContract]
public class UnrecoverableFault
{
    /// <summary>
    /// Initializes a new instance of the <see cref="UnrecoverableFault"/> class.
    /// </summary>
    /// <param name="message">The message.</param>
    /// <param name="errorDetails">The error details.</param>
    /// <param name="parameters">The description of parameters to the method where it all went wrong.</param>
    /// <param name="logId">The log id.</param>
    public UnrecoverableFault(string message, string errorDetails, string parameters, Guid logId)
    {
        Message = message;
        ErrorDetails = errorDetails;
        Parameters = parameters;
        LogId = logId;
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="UnrecoverableFault"/> class.
    /// Use this constructor when you really do not want to provide the client with any detail information. When
    /// given the logId the client can instruct support to look for the given logId in the log.
    /// </summary>
    /// <param name="logId">The log id.</param>
    public UnrecoverableFault(Guid logId)
    {
        LogId = logId;
    }

    /// <summary>
    /// A friendly description of the error. May be empty.
    /// </summary>
    [DataMember]
    public string Message { get; private set; }

    /// <summary>
    /// Details about the error. Could be exception details.
    /// </summary>
    [DataMember]
    public string ErrorDetails { get; private set; }

    /// <summary>
    /// Gets a description of the Parameters to the method where the error happened. Typically obtained with
    /// MethodDescriptor.Describe().
    /// </summary>
    [DataMember]
    public string Parameters { get; private set; }

    /// <summary>
    /// An id for a potential error log.
    /// </summary>
    public Guid LogId { get; private set; }

    /// <summary>
    /// Returns a <see cref="System.String"/> that represents this instance.
    /// </summary>
    /// <returns>
    /// A <see cref="System.String"/> that represents this instance.
    /// </returns>
    public override string ToString()
    {
        return "LogId:{1}{0}Message:{2}{0}Details:{3}{0}Parameters:{4}".FormatInvariant(
            Environment.NewLine,
            LogId,
            Message,
            ErrorDetails,
            Parameters);
    }
}

The only really interesting thing about this is the LogId property, which in this specific system refers to a log entry in the log database. So it can be used for correlating errors. There are more effective ways to do this, but this was how it was done here.

Generalizing the SOAP service implementations

Having these rather generic Faults is all well and good, but we need an easy way of applying them. This was handled by two classes, ServiceMethodHandler and ServiceMethodHandlerImplementation:

public static class ServiceMethodHandler
{
    public static void Execute(Func<string> methodDescriptor, Action action)
    {
        ServiceMethodHandler.Execute<bool>(
            methodDescriptor,
            () =>
            {
                action();
                return true;
            });
    }

    public static TReturn Execute<TReturn>(Func<string> methodDescriptor, Func<TReturn> action)
    {
        return new ServiceMethodHandlerImplementation<TReturn>(methodDescriptor, action).Execute();
    }         
}

internal class ServiceMethodHandlerImplementation<TReturn>
{
    public ServiceMethodHandlerImplementation(Func<string> methodDescriptor, Func<TReturn> action)
    {
        this.methodDescriptor = methodDescriptor;
        this.action = action;
    }

    public TReturn Execute()
    {
        Log.ThreadCorrelationId = Guid.Empty;
        try
        {
            return action();
        }
        catch (RecoverableException exception)
        {
            var methodDescription = methodDescriptor();
            Log.Warning(exception, methodDescription);
            throw new FaultException<RecoverableFault>(ExceptionToFault.RecoverableFaultFromRecoverableException(exception));
        }
        catch (UnrecoverableException exception)
        {
            var methodDescription = methodDescriptor();
            Log.Error(exception, methodDescription);
            throw new FaultException<UnrecoverableFault>(
                new UnrecoverableFault(
                    "UnrecoverableException executing the action.",
                    exception.ToString(),
                    methodDescription,
                    Log.ThreadCorrelationId));
        }
        catch (Exception exception)
        {
            var methodDescription = methodDescriptor();
            Log.Error(
                exception,
                "This exception here suggest a program logic error.{0}{1}",
                Environment.NewLine + Environment.NewLine,
                methodDescription);

            throw new FaultException<UnrecoverableFault>(
                new UnrecoverableFault(
                    "This exception here suggest a program logic error.",
                    exception.ToString(),
                    methodDescription,
                    Log.ThreadCorrelationId));
        }
    }

    private readonly Func<string> methodDescriptor;
    private readonly Func<TReturn> action;
}

This is also fairly old code, which is why it does not have any checks for UserRecoverableException. The only really quirky thing about this class is the Func<string> methodDescriptor. It is used to describe the method calling ServiceMethodHandler when an error occurs, for logging purposes. The ServiceMethodHandler class is used in the SOAP service implementations like this:

    public void PublishAndSubscribe(Message message, Subscription subscription)
    {
        ServiceMethodHandler.Execute(
            () => MethodDescriptor.Describe(message, subscription),
            () => manager.Publish(message, subscription));
    }

MethodDescriptor.Describe(...) is a helper method which formats the given parameters in a nice, pseudo-readable way, which can then be logged. It can be very nice to have the values of parameters in the log when debugging problems in production.
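
I won't show the real MethodDescriptor here, but a very simplified sketch of such a helper could look like this (the real one also captures the calling method's name and formats the values more elaborately):

using System.Linq;

public static class MethodDescriptor
{
    // Simplified sketch: format the parameter values in a readable way for logging.
    public static string Describe(params object[] parameters)
    {
        return "(" + string.Join(
            ", ",
            parameters.Select(p => p == null ? "null" : p.ToString())) + ")";
    }
}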

The actual work of this specific service is the line () => manager.Publish(message, subscription). All the error handling has been generalized with the ServiceMethodHandler class.

It is really that simple to generalize your error handling!

Error handling – part 2 – keeping it easy

In the last part I discussed error handling in the repository/data access layer. The ideas in there are equally valid for any lower layer in the application. When moving up the ladder to higher levels, the principle is the same. Let me consider the layer which in Domain Driven Design is called Application Services, where the orchestration of ports, adapters, repositories and domain objects is handled. I will also touch on handling errors in the domain objects/aggregate roots. Please note that when I use the phrase domain object it can mean either a value object or an entity.

Aggregate Roots

This is a concept from Domain Driven Design – an Aggregate Root is a so-called consistency boundary and very often also a transaction boundary (in the database-sense of the word). Note: The transaction is not controlled from within the Aggregate Root, which (ideally) knows nothing of the database. Rather, the transaction is controlled by the Application Service.

Of course, errors may happen in a domain object. You could feed it incorrect data, call a method when the object is in an incorrect state for that method to be called, and so on.

To my knowledge there are two main ways of signalling errors in a domain object/an aggregate root.

  1. Throw an exception
  2. Publish an (error) event

And of course any combination of the two.

1. Throwing an exception

If you are in the habit of using e.g. Code Contracts you will get either a Code Contract exception, which can only be caught by catching System.Exception, or – if you use the generic variant – whatever exception you choose to throw, normally a System.ArgumentException or System.ArgumentNullException. You could, of course, choose your own InvalidDataException to standardise the handling. These kinds of errors are rarely recoverable.

You can also have other kinds of errors, e.g. trying to withdraw too large an amount from an Account. This is a domain logic error, which should be clearly separated from the above, which is more of a program logic error.

The main goal of signalling errors in a domain object is to avoid persisting domain events published within the transaction/consistency boundary (as started by the Application Service). That could very well result in corruption of state. Exceptions are a very effective and easy-to-understand way of doing this. If the Application Service catches any exception it simply won't commit the transaction it has opened.

If you have any kind of argument validation – using Code Contracts or manual approaches – on your domain objects (and you probably really ought to), you will eventually run into validation exceptions. This means that your Application Service should somehow catch these, if for nothing else, then for logging purposes. The population of these domain objects, which are used as arguments for methods on the Aggregate Root, should happen outside the transaction boundary.

It can, as can so much else, be discussed whether argument validation is domain logic or not. As a rule of thumb, I'd say that argument validation on aggregate root methods is considered domain logic, but the validation should rather be kept inside the other domain objects or simple DTOs. This also makes it possible to safely populate the objects outside the transaction boundary, without worrying about domain events being published when they should not be.

2. Publish an (error) event

Since it’s quite normal to use domain events from within domain objects it makes sense to extend this to also use domain events to signal domain logic errors. The hard part of this is to make sure that we have not published domain events which would be invalidated by the domain logic error.

This is where the common sense of using small aggregate roots comes in. If your Application Service calls more than one method on your aggregate root, you might consider a redesign. If it only calls one method, it is up to that single method to make sure that when a domain event is published it is valid in a global sense. In general this is done by publishing only one of two events: one on success and one on failure. To be fair, there might be more than one failure type, though.

An interesting approach is to return the domain event from the method instead of publishing it. This totally relieves the domain object of any responsibility related to infrastructure.
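
A small sketch of that approach, with made-up types – the aggregate returns the event, and the Application Service decides what to do with it after the transaction has been handled:

public interface IDomainEvent { }

public class MoneyWithdrawn : IDomainEvent
{
    public MoneyWithdrawn(decimal amount) { Amount = amount; }
    public decimal Amount { get; private set; }
}

public class WithdrawalRejected : IDomainEvent
{
    public WithdrawalRejected(decimal requested) { Requested = requested; }
    public decimal Requested { get; private set; }
}

public class Account
{
    private decimal balance;

    // The method returns the domain event instead of publishing it, so the
    // aggregate knows nothing about the event infrastructure.
    public IDomainEvent Withdraw(decimal amount)
    {
        if (amount > balance)
        {
            return new WithdrawalRejected(amount); // domain logic failure
        }

        balance -= amount;
        return new MoneyWithdrawn(amount);         // success
    }
}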

Application Services

There are (at least) two parts to the handling of errors in an Application Service. One part is where it's calling your own code. Here you have total control over which exceptions you want to throw, or even whether you want to use specialized return codes instead of exceptions. You can also choose to use events to signal errors, which suggests the need for some kind of Saga for each service the Application Service exposes.

I suggest using exceptions, since it's easier to generalize the handling of these, and the generalization is important if you want consistent and manageable error handling.

If you're using third-party components, where the error handling may be different and possibly widely inconsistent, I think you should wrap each component in a thin layer which normalizes and standardizes the errors that leave it.

You will need to standardize the exceptions you want to let exit the Application Service. It may be that you need more exceptions than previously discussed, because different scenarios need different detail information passed on to the user of the Application Service. But you should keep the top of the hierarchy to a minimum, like this:

Exception Classes - Top Hierarchy

Having just a small number of errors at the top of the pyramid makes it considerably more manageable for the user of the layer in question to handle most errors in the same way, and then give special attention only to a small number of very specific errors.

There are, in general, two ways of generalizing the error handling in an Application Service. One is to use Commands and inheritance, having the generalized error handling in a base class. The other is to use methods with nice, meaningful names and, in these methods, wrap the executing code in a Func<> parameter to an error handler method. And then there are any number of combinations of these two methods.

I prefer the one with Commands and inheritance, since you then cannot forget to add the error handling code. It's there, written once for all your Application Services. Job well done. If you want to return any error information from your application service – and most likely you will – both approaches will work. Catching an exception or subscribing to an error domain event is comparatively the same amount of work. But as suggested above you cannot fully avoid a try-catch in your application service; this is the part you will want to generalize in an ApplicationService base class.
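
A bare-bones sketch of the Command/inheritance approach – the names, the ITransaction abstraction and the Log calls are assumptions; the point is that the transaction and the try-catch live in the base class, and derived commands only implement Act():

public interface ITransaction : System.IDisposable
{
    void Commit();
}

public abstract class ApplicationServiceCommand
{
    public void Execute()
    {
        // The transaction is rolled back on Dispose unless Commit has been called.
        using (var transaction = BeginTransaction())
        {
            try
            {
                Act();
                transaction.Commit();
            }
            catch (RecoverableException exception)
            {
                Log.Warning(exception, GetType().Name);
                throw; // let the caller decide whether to retry or involve the user
            }
            catch (System.Exception exception)
            {
                Log.Error(exception, GetType().Name);
                throw;
            }
        }
    }

    // The actual use case, implemented by each concrete command.
    protected abstract void Act();

    // Placeholder for whatever unit-of-work/transaction mechanism is used.
    protected abstract ITransaction BeginTransaction();
}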

And if you boil down your error handling approach to a few common cases, as described in this and the previous post, that generalized code can be written once and for all. Then you only have to specially handle some very specific cases for each aggregate root.

As noted above you can choose to signal errors out of the Application Service with exceptions, return codes or events. Exceptions are easy to understand, and you don't need an event infrastructure of any kind. Return codes are messy; I suggest not using them.

Events are a very elegant way of signalling anything. They decouple everything, and you can have other parts of the code listening in on the events without being the instigator of the action causing the event.

You will need some kind of event infrastructure, though. This can be a simple in-memory thingy or a mean beast like NServiceBus. The various implications of using events are a discussion for another post. It's not that hard; you just need another mindset.

Hey! Catch! 😉

Error handling – the easy way

Error handling

Error handling is boring. Error handling is hard. Error handling is tedious. There is a lot of work involved. You have to think about it everywhere. And you will never be quite finished with it. But by standardizing and generalizing error handling you will save a lot of work, because, all in all, a lot of errors should be handled in exactly the same way.

The discussion here is rooted in Windows, .NET and SQL Server, but the principles and considerations are equally valid on any platform.

There are two kinds of audience for an error: computers and people. And there are basically two kinds of errors: recoverable and unrecoverable. The recoverable errors can again be divided into two kinds: those requiring user intervention and those that can basically be recovered from by trying again. This means we end up with three main groups of errors:

  1. Unrecoverable errors
  2. Recoverable errors requiring user intervention
  3. Recoverable errors not requiring user intervention

Unrecoverable errors

This is the kind of error that neither a user nor a client application can recover from in any way. It will sometimes be a configuration or program logic error, but it could also be a network that is down. There is nothing the client can currently do about it.

Recoverable errors requiring user intervention

A typical cause could be a user-entered value, which caused e.g. a database unique constraint on a username column to act up. The user can correct the problem by entering another username.

Recoverable errors not requiring user intervention

Here we have situations like timeout errors and database deadlock problems. All of these can (usually) be corrected by automatically retrying the operation. The user need not be involved (at least, not at first).

Exceptions versus return codes

I really don't want to poke the religious war here, I simply want to state my opinion: if a method call returns and doesn't blow up in your face, the call is considered to have succeeded and no error occurred. Simple as that.

You can forget to check return codes, which will usually result in your application failing in spectacular ways, far away from where the error originated. You can also, of course, forget to catch an exception; an unhandled exception will usually cause your application to crash disgracefully, but at least near where the error occurred.

So I use exceptions, even for error conditions which are not exceptional at all but actually expected to happen.

The joy of layering

An application typically has several layers, whether it is a classic desktop application or a web application with service and database layers and what have you. The layering actually helps us standardize error handling, because we can choose the set of errors that we allow to cross each layer boundary.

Let’s consider some typical layers.

Database layer

A huge number of errors can occur when working with a database. The discussion here should not in any way be considered exhaustive; Murphy is way more inventive than I am.

When using the word client here, I typically mean the code that is actually using the layer.

Duplicate Key

When you want a column (or a set of columns) in a database table to be unique, you create a unique constraint. If you happen to violate this constraint when trying to insert or update data, you will – if using SQL Server and the .NET platform – receive a SqlException, which indirectly informs you of the problem through a number of properties, as shown below:

| Property | Value | Description |
| --- | --- | --- |
| Number | 2627 | SQL Server error code |
| Class | 14 | Severity level in SQL Server |
| Errors | SqlErrorCollection | List of detailed error descriptions |
| Message | Violation of UNIQUE KEY constraint ‘*name of constraint*’. Cannot insert duplicate key in object ‘*name of table*’. The statement has been terminated. | |

If you name your constraints consistently you can retrieve the root of the problem from the Message property. SQL Server cannot effectively figure out – in the general case – which column caused the problem, since unique constraints can span several columns.

It's worth noting that if you happen to use Entity Framework, the SqlException will be the InnerException of a System.Data.UpdateException.

The gist of this example is that this kind of error is probably a user recoverable error, if you can give your user the right information. I’d suggest that you have a base RecoverableException, inherit a UserRecoverableException from this, and again inherit a DuplicateKeyException with properties EntityName and DuplicateFieldName, which would detail where the problem lies.

The UI could then use this information to tell the user how to proceed to correct the error.
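
A bare-bones sketch of that hierarchy (serialization constructors and other ceremony omitted for brevity):

using System;

/// <summary>The base class for errors the client can somehow recover from.</summary>
public class RecoverableException : Exception
{
    public RecoverableException(string message, Exception innerException)
        : base(message, innerException)
    {
    }
}

/// <summary>A recoverable error that requires the user to correct something.</summary>
public class UserRecoverableException : RecoverableException
{
    public UserRecoverableException(string message, Exception innerException)
        : base(message, innerException)
    {
    }
}

/// <summary>Thrown when a unique constraint was violated.</summary>
public class DuplicateKeyException : UserRecoverableException
{
    public DuplicateKeyException(string entityName, string duplicateFieldName, Exception innerException)
        : base("Duplicate key on " + entityName + "." + duplicateFieldName, innerException)
    {
        EntityName = entityName;
        DuplicateFieldName = duplicateFieldName;
    }

    /// <summary>The entity (table) on which the constraint was violated.</summary>
    public string EntityName { get; private set; }

    /// <summary>The field (or constraint) causing the violation.</summary>
    public string DuplicateFieldName { get; private set; }
}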

Invalid Data

When checking the data for e.g. consistency or field lengths before entering them into the database, you may encounter some problems with the data. It could also be the database throwing a constraint violation error. This would cause a SqlException with the Number property set to 547.

Since it's in general hard to figure out exactly what went wrong, we will consider this a program logic error and throw an InvalidDataException, which inherits from UnrecoverableException.

String or binary data would be truncated

When you try to put too much data into a column, SQL Server will naturally complain. In .NET this will result in a SqlException with the Number property having the telling value 8152, but no other information is available. SQL Server is not kind enough to tell us which column we're happily trying to stuff too much information into. This seems rather unhelpful, and since we don't have this information we have no other choice than making this an UnrecoverableException. The problem is of course a design flaw in the software, which would be hard to rectify at run time anyway. We will throw a TruncatedDataException.

OptimisticConcurrencyException

This exception is from Entity Framework, which can detect this situation if the table(s) involved have a rowversion column. The error occurs when the user tries to update (or delete) data that have been modified (or deleted) by another user.

In general, this must be considered a UserRecoverableException and I would suggest specializing this situation into a DataUpdatedException. Your usage scenario determines what kind of properties you would put onto this exception. The UI-handler could handle this situation by loading the new data and doing a compare to discover what data has been changed.

If you are able to detect it you can additionally specialize into a DataDeletedException.

Transaction was deadlocked

In a highly concurrent (and probably badly designed) environment it is likely that enough locking contention occurs in the database that SQL Server has to kill a session, so a command doesn't get to finish. This will result in a SqlException with the Number property set to 1205.

These situations can often be resolved automatically by the client by retrying the command. It is therefore a RecoverableException, which I would specialize into a DeadlockedException.

Timeouts

Timeouts can be said to be close relatives of deadlocks, but the error is not detected by SQL Server, which, as such, does not care how long it takes to run a query. In .NET the situation is brought to our attention by the ADO.NET layer, which has a command timeout setting for the connection. When this timeout is reached, the command is cancelled and a SqlException is abused to signal the problem, with the Number property set to -2 (yes, minus two!).

If you are using Entity Framework you should catch a System.Data.EntityCommandExecutionException and look at InnerException, where the Number property is -2, as above.

Usually this problem can be rectified by running the command again, but always remember to log the error (with the time spent on the command).

This is a RecoverableException. Specialize it with TimeoutException.
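
Since both deadlocks and timeouts are recoverable without user intervention, the calling code can generalize the retry. A small sketch of what that might look like (the retry count and delay are arbitrary example values):

using System;

public static T ExecuteWithRetry<T>(Func<T> command, int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            return command();
        }
        catch (RecoverableException exception)
        {
            // Only deadlocks and timeouts are worth retrying blindly; user-recoverable
            // errors need the user, so they are rethrown immediately.
            if (exception is UserRecoverableException || attempt >= maxAttempts)
            {
                throw;
            }

            // Give the competing work a moment to finish before retrying.
            System.Threading.Thread.Sleep(100 * attempt);
        }
    }
}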

Other weird Entity Framework exceptions

A few other Entity Framework exceptions worth mentioning: System.Data.EntityException typically suggests an authorization problem. The account does not have access to the database. This can also be caused by an incorrect connection string.

System.Data.EntityCommandExecutionException usually means that there is a mismatch between the database and the Entity Framework model (the .edmx). Can be caused by forgetting to update the model after changing the database.

System.Data.EntityCommandCompilationException also suggests some kind of mismatch between the database and the Entity Framework model.

All these exceptions are, of course, unrecoverable at run time, and this should be signalled by throwing a ProviderInaccessibleException, which is a specialization of UnrecoverableException.

Putting it under one roof

I usually create or reuse a DataAccessHandler class, which handles all the various database problems I come across and maps them to one of these exceptions – or a subset of them:

  • DataDeletedException
  • DataUpdatedException
  • DeadlockedException
  • DuplicateKeyException
  • InvalidDataException
  • AuthorizationException
  • ProviderInaccessibleException
  • TimeoutException
  • TruncatedDataException

And since all these exceptions inherit from UnrecoverableException, RecoverableException or UserRecoverableException, the catching and handling of them can be done quite nicely and tidily.
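
A rough sketch of what such a DataAccessHandler translation could look like, keyed on the SQL Server error numbers discussed above. The constructors used here are assumptions, and extracting entity and field names from the exception message is left out:

using System;
using System.Data.SqlClient;

public static class DataAccessHandler
{
    public static void Execute(Action databaseAction)
    {
        try
        {
            databaseAction();
        }
        catch (SqlException exception)
        {
            // Translate raw provider errors into the exception hierarchy of the layer.
            switch (exception.Number)
            {
                case 2627: throw new DuplicateKeyException("<entity>", "<field>", exception);
                case 1205: throw new DeadlockedException("Transaction was deadlocked.", exception);
                case -2:   throw new TimeoutException("The command timed out.", exception);
                case 8152: throw new TruncatedDataException("String or binary data would be truncated.", exception);
                case 547:  throw new InvalidDataException("A constraint was violated.", exception);
                default:   throw new UnrecoverableException("Unhandled database error.", exception);
            }
        }
    }
}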

Wrapping up – for now

I have discussed error handling in a data access layer – and especially error reporting out of the layer using exceptions. In coming posts I will discuss the corresponding issues in Application Services, SOAP service layers and REST service layers.

I will end this post with a smashing beautiful class diagram of the exceptions suggested:

Exception Classes

Specifying Business Rules by Example

One of the (many) benefits of using Specification by Example is that we get the business logic out in the open, where it can be discussed among developers, analysts, domain experts and users, instead of having it buried deep down in the code, from where it can be rather hard to extract. And especially hard to extract non-manually.
 
But I think it is really difficult to figure out from which perspective (some of) these business rules should be written. I mean, some business rules do belong in the abyss of the code, but other rules should most definitely be surfaced and be easy to extract as documentation.
 
To encourage some thoughts about this, I will describe a more general problem, which many systems will face. The problem, I think, is generic enough that the discussion it might spur will be useful in other situations.
 
We have some system (The System), which should be accessible by web (The Web Site) and mobile apps (The App). The System demands that Members do Create an Account. And for (more or less) obvious reasons The System has some requirements as to what makes up a Valid Username and a Valid Password.
 
I see the action of creating an account via The Web Site and The App as two different features (e.g. stories) of The System. So we would have two features describing this, very much alike:
 
Feature: Create Account on The Web site
    As a potential Member of The System
    I want to Create an Account on The Web Site 
    so that I can login and use the features of The System.

Feature: Create Account via The App
    As a potential Member of The System 
    I want to Create an Account in The App 
    so that I can login and use the features of The System.

I believe I have managed to describe these two features in a way that opens up for writing Scenarios suitable for automatic testing. A Scenario for the first Feature could be:

Scenario: Successfully Create Account on The Web Site
    Given a Valid Username that is not already in use and a Valid Password
    When The User asks The Web Site to Create the Account
    Then The Account is created by The System and the User is logged in as a Member of The System    

 

The Scenario for The App will be very much the same. And I figure that these Scenarios should be automatically tested through their corresponding UI.

 

My brain starts to heat up, though, when I think of the actual business requirements as to what makes up a Valid Username and a Valid Password. It should be obvious that these rules should be upheld whether we create an account using The Web Site or The App. And since I want the Create Account Scenarios above to be tested through the UI, which is (very) slow, I don't want to list several examples of Usernames and Passwords which would then all have to be tested through the UI.

 

The App will (obviously?) use a REST API for creating and accessing account information. The REST API will use an Account Application Service, which will also be used by The Web Site's server-side code (The Web Site could also use the REST API AJAX-style, but for now we assume it doesn't). Note that this also means that we want to – somehow – test the REST API using the same business rules, which would result in one more Feature describing that Scenario.

 

So what I’m actually aiming at here is that I want the Account Application Service to be Specified by Example, as well. And I want this because the rules that describe what constitutes a Valid Username and Valid Password are business rules, open for discussion. They should not be buried in the code. At the same time we don’t want to repeat the description of the business rules all over Features and Scenarios.

 

I cannot see a clear-cut way to do this nicely, and it may be that I'm using the wrong hammer (in this case SpecFlow, which may be hindering me in thinking outside the famous box), or it may just be my inexperience with Specification by Example shining through.

 

Since what I want to Specify is the Account Application Service, I will call the users of the service Consumers. What I have come up with is this:

 

Feature: Validate Username and Password
    As a potential Consumer of The System's Account Application Service 
    I want The Account Application Service to Validate my Username and Password according to the Business Rules
    so that I can create an account with a Valid Username and Valid Password.

A Valid User Name:
  1. Cannot begin with white space.
  2. Cannot end with white space.
  3. Must start with a Unicode letter.
  4. Must consist of letters, digits and spaces.
  5. Is case insensitive.
  6. Is at least 3 characters long.
  7. Is at most 30 characters long.

A Valid Password:
  1. Can contain any Unicode character.
  2. Is at least 6 characters long.
  3. Is at most 40 characters long.
  4. Must have at least 6 different characters.

 

The details of the business rules are not what's at stake here. They should emerge out of the discussion of what constitutes a Valid Username and Password, which is helped along by some examples:

 

Scenario Outline: Validating Username
    When I give a <username>
    Then The Account Application Service responds with an answer detailing whether the Username is <valid>.

    Examples: 
    | username:string                            | valid:bool |
    | user1                                      | yes        |
    | aVeryVeryLongUserNameWhichShouldBeAccepted | yes        |
    | æøåÆØÅ1234567890                           | yes        |
    |  user with space in beginning              | no         |

 

The examples here might not be that exhaustive – I've only discussed them with myself 😉 But the testing of this validation can be automated quite easily. And this will be the only Feature describing and testing the actual business rules regarding Valid Usernames and Passwords.
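
Just to give an idea, a sketch of SpecFlow step bindings for the Scenario Outline could look like the following. AccountApplicationService and its IsValidUsername method are assumed names, and the step patterns would of course have to match the final wording of the steps exactly:

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class UsernameValidationSteps
{
    private string username;

    [When(@"I give a (.*)")]
    public void WhenIGiveAUsername(string username)
    {
        this.username = username;
    }

    [Then(@"The Account Application Service responds with an answer detailing whether the Username is (.*)\.")]
    public void ThenTheServiceAnswers(string valid)
    {
        var expected = valid == "yes";
        var actual = new AccountApplicationService().IsValidUsername(username); // assumed API

        Assert.AreEqual(expected, actual);
    }
}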

 

The Scenario Outline for Validating a Password would look just about the same. There should also be Scenario Outlines for Invalid Usernames and Passwords.

 

It is noteworthy, though, that the validation of Usernames and Passwords is not a User Story; it is – I think – more of an implementation detail. The same goes for the description of the REST API.

 

For all the other tests (The Web Site, The App, the REST API) that need a Valid Username and Password (or an invalid one, for that matter), it would be a good idea to have a TestHelper that supplies these, instead of dispersing the knowledge of Valid Usernames and Passwords over more tests than absolutely necessary.
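
A sketch of such a TestHelper with assumed values – the point being that only this class (and the validation Feature) knows the details of what makes a Username or Password valid:

using System;

public static class AccountTestHelper
{
    public static string ValidUsername()
    {
        // Unique enough for tests that need a username which is not already in use.
        return "user" + Guid.NewGuid().ToString("N").Substring(0, 8);
    }

    public static string ValidPassword()
    {
        return "s3cret!X"; // satisfies the length and distinct-character rules above
    }

    public static string InvalidUsername()
    {
        return " startsWithASpace"; // violates rule 1
    }
}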

 

I have chosen validation of Usernames and Passwords only as an example here. The business rules could have been anything else: financial rules, whatever. But where and how to describe the rules is still an issue.

Domain Driven Design – Concepts

Last week I attended the IDDD Tour held by TigerTeam and Vaughn Vernon. I like to do mind maps when I’m at these kinds of events. It helps me to get an overview of the subject and is a good help for reviewing after the event. I therefore tried to create a mind map of the various DDD concepts and how they are related, which I hope can be of use to others.

You will need the book Implementing Domain-Driven Design by Vaughn Vernon, which is a good book, to make sense of the concepts here (or you could google your way through).

July 8th 2013: Updated map – using MindMup. Added a few more concepts.

Domain Driven Concepts

Testable Time

I was inspired by Mark Seemann's TimeProvider class from Dependency Injection in .NET, which is an Ambient Context for resolving the current date and time, but without the most prevalent shortcoming of DateTime.Now and DateTime.UtcNow: the lack of testability.

If you need to test how a class reacts to changes in date and/or time there is no way around having a testable TimeProvider. For this purpose I created a set of four classes, which interact nicely to allow for faking the time during a test. The most prominent is the TimeProvider class itself, because it's the one that allows you to query the time, simply by reading UtcNow:

var now = TimeProvider.UtcNow;
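
The TimeProvider base class is the Ambient Context. A minimal sketch of how it fits together with the classes shown below could look like this (the actual implementation has a bit more to it):

/// <summary>
/// Sketch of the Ambient Context: static access to the time, with a replaceable provider.
/// </summary>
public abstract class TimeProvider
{
    private static TimeProvider current = DefaultTimeProvider.Instance;

    /// <summary>
    /// Gets or sets the provider used to resolve UtcNow. Defaults to DefaultTimeProvider.
    /// </summary>
    public static TimeProvider Current
    {
        get { return current; }
        set { current = value ?? DefaultTimeProvider.Instance; }
    }

    /// <summary>
    /// Gets the current UTC date and time as seen by the Current provider.
    /// </summary>
    public static DateTime UtcNow
    {
        get { return Current.RetrieveUtcNow(); }
    }

    /// <summary>
    /// When overridden in a derived class, returns the date and time to report as UtcNow.
    /// </summary>
    protected abstract DateTime RetrieveUtcNow();
}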

It is possible to change what TimeProvider returns from UtcNow by changing its Current TimeProvider with the help of the TimeSetter class:

using (new TimeSetter(new DateTime(2012, 12, 12, 12, 12, 12)))
{
    // interesting test code...
    var frozen = TimeProvider.UtcNow; // will now return 2012-12-12 12:12:12
}

var actualNow = TimeProvider.UtcNow; // will now return whatever the actual current date and time is

TimeSetter implements IDisposable to allow for an easy way to reset which TimeProvider is used by UtcNow. The default provider is:

    /// <summary>
    /// DefaultTimeProvider implementation, which simply wraps a standard DateTime.UtcNow.
    /// </summary>
    internal class DefaultTimeProvider : TimeProvider
    {
        private DefaultTimeProvider()
        {
        }

        public static DefaultTimeProvider Instance
        {
            get { return DefaultTimeProvider.instance; }
        }

        protected override DateTime RetrieveUtcNow()
        {
            return DateTime.UtcNow;
        }

        private static readonly DefaultTimeProvider instance = new DefaultTimeProvider();
    }

TimeSetter has a method – SetNow – for changing the DateTime returned:

    /// <summary>
    /// Used for encapsulating a change of the DefaultTimeProvider. Very useful in tests.
    /// </summary>
    /// <remarks>
    /// Implements IDisposable so it figures out to restore the DefaultTimeProvider after use.
    /// Example:
    /// <code>
    ///   var timeProviderMock = new TimeProviderMock();
    ///   using (var timeSetter = new TimeSetter(timeProviderMock))
    ///   {
    ///     timeProviderMock.DateTime = new DateTime(2010, 1, 1);
    /// 
    ///     var now = TimeProvider.UtcNow; // Returns 2010-01-01.
    ///   }
    /// </code>
    /// </remarks>
    public sealed class TimeSetter : IDisposable
    {
        /// <summary>
        /// Initializes a new instance of the <see cref="TimeSetter"/> class.
        /// </summary>
        /// <param name="timeProvider">The time provider to set as current TimeProvider.</param>
        public TimeSetter(TimeProvider timeProvider)
        {
            if (timeProvider == null)
            {
                throw new ArgumentNullException("timeProvider");
            }

            previousProvider = TimeProvider.Current;
            TimeProvider.Current = timeProvider;
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="TimeSetter" /> class with a TimeProviderMock as the
        /// provider.
        /// </summary>
        /// <param name="currentTime">The current time.</param>
        public TimeSetter(DateTime currentTime)
        {
            previousProvider = TimeProvider.Current;
            TimeProvider.Current = new TimeProviderMock(currentTime);
        }

        /// <summary>
        /// Restores the previous TimeProvider.
        /// </summary>
        public void Dispose()
        {
            TimeProvider.Current = this.previousProvider;
        }

        /// <summary>
        /// Sets the current time, if the provider is a TimeProviderMock.
        /// </summary>
        public static void SetNow(DateTime now)
        {
            var mock = TimeProvider.Current as TimeProviderMock;
            if (mock != null)
                mock.DateTime = now;
        }

        private readonly TimeProvider previousProvider;
    }

The mock class used by TimeSetter:

    /// <summary>
    /// Convenience class for testing scenarios. Use TimeSetter to encapsulate it in a using statement.
    /// </summary>
    public class TimeProviderMock : TimeProvider
    {
        /// <summary>
        /// Initializes a new instance of the <see cref="TimeProviderMock" /> class, setting the DateTime to DateTime.UtcNow.
        /// </summary>
        public TimeProviderMock()
        {
            DateTime = DateTime.UtcNow;
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="TimeProviderMock" /> class, setting the DateTime to the given value.
        /// </summary>
        public TimeProviderMock(DateTime now)
        {
            DateTime = now;
        }

        /// <summary>
        /// Gets or sets the date time.
        /// </summary>
        public DateTime DateTime { get; set; }

        protected override DateTime RetrieveUtcNow()
        {
            return DateTime;
        }
    }

TimeProvider itself has some convenience methods for creating DateTimes:

    /// <summary>
    /// Ambient TimeProvider with support for property injection to change the actual
    /// TimeProvider implementation, which is very useful in testing scenarios.
    /// </summary>
    public abstract class TimeProvider
    {
        /// <summary>
        /// Gets the current TimeProvider.
        /// </summary>
        public static TimeProvider Current
        {
            get
            {
                return TimeProvider.current;
            }

            internal set
            {
                if (value == null)
                {
                    throw new ArgumentNullException("value");
                }

                TimeProvider.current = value;
            }
        }

        /// <summary>
        /// Gets the current time in UTC.
        /// </summary>
        public static DateTime UtcNow
        {
            get
            {
                return TimeProvider.Current.RetrieveUtcNow();
            }
        }

        /// <summary>
        /// Creates a DateTime of DateTimeKind.Utc
        /// </summary>
        /// <param name="year">The year.</param>
        /// <param name="month">The month.</param>
        /// <param name="day">The day.</param>
        /// <returns>The created DateTime.</returns>
        public static DateTime CreateUtc(int year, int month, int day)
        {
            return DateTime.SpecifyKind(new DateTime(year, month, day), DateTimeKind.Utc);
        }

        /// <summary>
        /// Creates a DateTime of DateTimeKind.Utc
        /// </summary>
        /// <param name="year">The year.</param>
        /// <param name="month">The month.</param>
        /// <param name="day">The day.</param>
        /// <param name="hour">The hour.</param>
        /// <param name="minute">The minute.</param>
        /// <param name="second">The second.</param>
        /// <returns>The created DateTime.</returns>
        public static DateTime CreateUtc(int year, int month, int day, int hour, int minute, int second)
        {
            return DateTime.SpecifyKind(new DateTime(year, month, day, hour, minute, second), DateTimeKind.Utc);
        }

        /// <summary>
        /// Creates a DateTime of DateTimeKind.Utc
        /// </summary>
        /// <param name="year">The year.</param>
        /// <param name="month">The month.</param>
        /// <param name="day">The day.</param>
        /// <param name="hour">The hour.</param>
        /// <param name="minute">The minute.</param>
        /// <param name="second">The second.</param>
        /// <param name="millisecond">The millisecond.</param>
        /// <returns>The created DateTime.</returns>
        public static DateTime CreateUtc(int year, int month, int day, int hour, int minute, int second, int millisecond)
        {
            return DateTime.SpecifyKind(new DateTime(year, month, day, hour, minute, second, millisecond), DateTimeKind.Utc);
        }

        /// <summary>
        /// Resets to using the default TimeProvider (which uses DateTime).
        /// </summary>
        public static void ResetToDefault()
        {
            TimeProvider.current = DefaultTimeProvider.Instance;
        }

        /// <summary>
        /// Override to implement the specific way to return a current DateTime of DateTimeKind.Utc.
        /// </summary>
        /// <returns>A DateTime.</returns>
        protected abstract DateTime RetrieveUtcNow();

        private static TimeProvider current = DefaultTimeProvider.Instance;
    }

These four classes together should solve most of the needs for testable time.
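
To round it off, here is a sketch of how a test could use the classes together. The Order class and its IsOverdue property are made-up placeholders (assumed to compare DueDate against TimeProvider.UtcNow), and the test attribute is NUnit-style:

    [Test]
    public void Order_is_overdue_when_the_due_date_has_passed()
    {
        using (new TimeSetter(TimeProvider.CreateUtc(2013, 1, 1)))
        {
            var order = new Order { DueDate = TimeProvider.CreateUtc(2013, 1, 15) };
            Assert.IsFalse(order.IsOverdue);

            // Move the fake clock past the due date - only TimeProvider.UtcNow changes.
            TimeSetter.SetNow(TimeProvider.CreateUtc(2013, 2, 1));

            Assert.IsTrue(order.IsOverdue);
        }

        // Outside the using block the real DateTime.UtcNow is back in charge.
    }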

Review: Elemental Design Patterns

Jason McC. Smith has jotted down a few clever words, not so much about design patterns as about Elemental Design Patterns. There is quite a difference. If you haven’t read the original Design Patterns: Elements of Reusable Object-Oriented Software by the now so famous Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides) you really should. It’s an important book.

 

Elemental Design Patterns is an important book, too. It may, eventually, end up being just as famous and having even more influence on development of object oriented software. I hope it will.

 

What is it about? The GoF book was about abstractions that could be hard to recognize in software without knowing what to look for. And even then it could be hard if the software was not built from the ground up using Design Patterns. Jason McC. Smith realized this difficulty while building a system aimed at finding Design Patterns in software. He finally figured out that something more, ehm elemental, was needed. Hence, Elemental Design Patterns.

 

Elemental Design Patterns can be something as simple as a method call, but it is important to acknowledge that we are talking about concepts here. The patterns are language independent (although they obviously feel really at home in object oriented languages). The neat thing about these patterns is that you can use them to reason about structure in software on a conceptual level, and all the higher-level design patterns can be built from the elemental ones.

 

Elemental Design Patterns can even be used to reason about refactoring of code, which is one of the reasons I predict they will have a great influence in the years to come.

 

I can’t say that I think the book is a light read. It is very well written, but if you are a stranger to sigma- and rho-calculus (I am), you will likely gain something from reading Jason McC. Smith’s dissertation about the subject. And you’ll probably want to read the most important reference: A Theory of Objects by Martín Abadi and Luca Cardelli.

 

I say: If you have the slightest interest in the theoretical foundation of software development you should read this book.

Understanding Variable Scope in Workflow Foundation 4

When creating your own activities in WF4 you can sometimes be surprised by how variable scope works. At times it seems dysfunctional. One thing that really bit me is that an activity cannot touch its own public variables. This seems strangely illogical, but that is really how it is. Try creating a simple Activity like this:

 

using System.Activities;
using System.Collections.ObjectModel;

 

public class TestActivity : NativeActivity
{
    public InArgument<string> Value { get; set; }

    public Collection<Variable> Variables
    {
        get
        {
            if (variables == null)
                variables = new Collection<Variable>();

            return variables;
        }
    }

    protected override void Execute(NativeActivityContext context)
    {}

    private Collection<Variable> variables;
}

 

 

Drop it as the only activity in a workflow. Add a variable called test of type string on the activity. In the Properties window, try to use the test variable for the Value property. You will get: Compiler error(s) encountered processing expression “test”. ‘test’ is not declared. It may be inaccessible due to its protection level.

 

[Image: the compiler error shown when binding Value to the test variable]

 

The variable test is not in the scope of the activity on which it is declared!

 

An activity can, on the other hand, access the variables of its parents. To see this, drop a Sequence and let TestActivity be a part of the Sequence. Then create test as a variable on the Sequence instead. Again use test for the Value property. Now there is no problem. So an activity cannot access its own public variables, but it can access its parent’s public variables.

 

[Image: TestActivity inside a Sequence, with test declared on the Sequence and bound to Value]
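
The same parent/child scoping applies when the workflow is composed in code instead of in the designer. A small sketch, binding TestActivity’s Value argument to a variable declared on the enclosing Sequence:

using System.Activities;
using System.Activities.Statements;

public static class ParentScopeDemo
{
    public static void Run()
    {
        // test is declared on the parent Sequence, just like in the designer.
        var test = new Variable<string>("test", "some value");

        var workflow = new Sequence
        {
            Variables = { test },
            Activities =
            {
                // The child activity binds its InArgument to the parent's variable.
                new TestActivity { Value = new InArgument<string>(test) }
            }
        };

        WorkflowInvoker.Invoke(workflow);
    }
}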

Now hear this: Implementation children cannot access any public variables. At all. To see this, create an activity like this:

 

using System;
using System.Activities;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Linq;

[Designer(typeof(TestActivityDesigner))]
public class TestActivity : NativeActivity
{
    public InArgument<string> Value { get; set; }

    public Collection<Variable> Variables
    {
        get
        {
            if (variables == null)
                variables = new Collection<Variable>();

            return variables;
        }
    }

    public Activity Body { get; set; }

    protected override void CacheMetadata(NativeActivityMetadata metadata)
    {
        metadata.SetVariablesCollection(Variables);
        metadata.AddImplementationChild(Body);
    }

    protected override void Execute(NativeActivityContext context)
    {
        context.ScheduleActivity(Body);
    }

    private Collection<Variable> variables;
}

Also create an ActivityDesigner with this body:

    <sap:ActivityDesigner.Resources>
        <DataTemplate x:Key="Collapsed">
            <StackPanel>
                <TextBlock>Drop activity here...</TextBlock>
            </StackPanel>
        </DataTemplate>
        <DataTemplate x:Key="Expanded">
            <StackPanel>
                <sap:WorkflowItemPresenter Item="{Binding Path=ModelItem.Body, Mode=TwoWay}"
                                        HintText="Drop activity here..." />
            </StackPanel>
        </DataTemplate>
        <Style x:Key="ExpandOrCollapsedStyle" TargetType="{x:Type ContentPresenter}">
            <Setter Property="ContentTemplate" Value="{DynamicResource Collapsed}"/>
            <Style.Triggers>
                <DataTrigger Binding="{Binding Path=ShowExpanded}" Value="true">
                    <Setter Property="ContentTemplate" Value="{DynamicResource Expanded}"/>
                </DataTrigger>
            </Style.Triggers>
        </Style>
    </sap:ActivityDesigner.Resources>
    <Grid>
        <ContentPresenter Style="{DynamicResource ExpandOrCollapsedStyle}" Content="{Binding}" />
    </Grid>

 

Create a workflow with the new TestActivity, and put a test variable on the TestActivity. Place an Assign activity on the TestActivity as shown:

 

[Image: the Assign activity inside TestActivity unable to reference the public variable test]

As you can see, the Body of TestActivity cannot access the public variable test. You can even try to enclose all of it in a Sequence and move the test variable to the Sequence scope:

 

[Image: the same error after moving test to an enclosing Sequence]

 

So an implementation child cannot access a public variable. Period. If you change the line:

 

metadata.AddImplementationChild(Body);

 

to:

 

metadata.AddChild(Body);

 

it will work, because Body is now a public child instead.

 

Alternatively, you can keep Body as an implementation child and declare an implementation variable called test instead – that will also work:

 

protected override void CacheMetadata(NativeActivityMetadata metadata)
{
    metadata.SetVariablesCollection(Variables);
    metadata.AddImplementationChild(Body);
    testVariable = new Variable<string>("test", "42");
    metadata.AddImplementationVariable(testVariable);
}

// Field on TestActivity that holds the implementation variable.
private Variable<string> testVariable;

The reason the Workflow Foundation variable scope works like this is probably that public variables are the means to pass values from one activity to another, by way of OutArgument and InArgument. This is actually very flexible, but at the same time rather cumbersome. Not much flow in that.
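
To show what that passing of values looks like, here is a minimal sketch of two sibling activities communicating through a public variable on their parent Sequence – one writes to it through an OutArgument, the other reads it through an InArgument:

using System.Activities;
using System.Activities.Statements;

public static class ArgumentFlowDemo
{
    public static void Run()
    {
        // The parent's public variable is the pipe between the two siblings.
        var greeting = new Variable<string>("greeting");

        var workflow = new Sequence
        {
            Variables = { greeting },
            Activities =
            {
                // The first sibling writes to the variable through an OutArgument...
                new Assign<string>
                {
                    To = new OutArgument<string>(greeting),
                    Value = "Hello from the first activity"
                },
                // ...and the second sibling reads it through an InArgument.
                new WriteLine { Text = new InArgument<string>(greeting) }
            }
        };

        WorkflowInvoker.Invoke(workflow);
    }
}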