May 18 2010

Just tap me on the shoulder when it's time

Many applications have a requirement to execute code on a recurring schedule. In cases like this, there is a tendency to create dedicated background threads to perform this function. This pattern certainly works, but it can carry some inefficiencies when you have large numbers of threads that spend the vast majority of their lives sleeping.
As an alternative, consider using an invocation callback pattern.

To implement this pattern we need to start with an interface to define the invoker.

using System;

public interface IInvocationTimer : IDisposable
{
    /// <summary>
    /// The interval at which the invocation targets are to be called
    /// </summary>
    TimeSpan Interval { get; set; }

    /// <summary>
    /// Starts the invocations
    /// </summary>
    void Start();

    /// <summary>
    /// Stops the invocations
    /// </summary>
    void Stop();

    /// <summary>
    /// Subscription point for invocations
    /// </summary>
    event EventHandler<EventArgs> InvocationTarget;

    /// <summary>
    /// Subscription point for all invocation exceptions
    /// </summary>
    event EventHandler<InvocationExceptionEventArgs> OnInvocationException;
}

Next we need to define the concrete implementation. Under the covers this uses System.Timers.Timer, which manages schedule-based invocations via the ThreadPool. This approach is lightweight, easy to manage, and testable in isolation.

using System;
using System.Runtime.Remoting.Messaging;
using System.Timers;

public class InvocationTimer : IInvocationTimer
{
    private readonly object _lock = new object();
    private readonly Timer _timer;

    public InvocationTimer()
        : this(TimeSpan.Zero)
    {
    }

    public InvocationTimer(TimeSpan interval)
    {
        _timer = new Timer { AutoReset = true, Enabled = false };

        if (interval > TimeSpan.Zero)
            _timer.Interval = interval.TotalMilliseconds;

        _timer.Elapsed += TimerNotificationChain;
    }

    public event EventHandler<EventArgs> InvocationTarget;
    public event EventHandler<InvocationExceptionEventArgs> OnInvocationException;

    public TimeSpan Interval
    {
        get { return TimeSpan.FromMilliseconds(_timer.Interval); }
        set { _timer.Interval = value.TotalMilliseconds; }
    }

    public void Start()
    {
        if (_timer.Interval < 1)
            throw new InvalidOperationException("The invocation 'Interval' must be set to a value above zero before the timer can be started.");

        _timer.Start();
    }

    public void Stop()
    {
        _timer.Stop();
    }

    public void Dispose()
    {
        _timer.Dispose();
    }

    private void TimerNotificationChain(object sender, ElapsedEventArgs e)
    {
        EventHandler<EventArgs> invocationTarget = InvocationTarget;
        if (invocationTarget == null) return;

        foreach (EventHandler<EventArgs> handler in invocationTarget.GetInvocationList())
            handler.BeginInvoke(this, new EventArgs(), InvocationCompletionCallback, null);
    }

    private void InvocationCompletionCallback(IAsyncResult ar)
    {
        lock (_lock)
        {
            var asyncResult = (AsyncResult)ar;
            var handler = (EventHandler<EventArgs>)asyncResult.AsyncDelegate;

            try
            {
                handler.EndInvoke(ar);
            }
            catch (Exception exp)
            {
                RaiseInvocationExceptionEvent(new InvocationExceptionEventArgs { Exception = exp });
            }
        }
    }

    private void RaiseInvocationExceptionEvent(InvocationExceptionEventArgs e)
    {
        EventHandler<InvocationExceptionEventArgs> handler = OnInvocationException;
        if (handler == null) return;

        handler(this, e);
    }
}

Finally, let's take a look at how to use this.

public class MyClass
{
    private readonly IInvocationTimer _timer;
    private volatile bool _inInvocation;

    public MyClass()
        : this(new InvocationTimer())
    {
    }

    internal MyClass(IInvocationTimer timer)
    {
        _timer = timer;
        _timer.Interval = TimeSpan.FromSeconds(5);
        _timer.InvocationTarget += _timer_InvocationTarget;
        _timer.OnInvocationException += _timer_OnInvocationException;
        _timer.Start();
    }

    void _timer_InvocationTarget(object sender, EventArgs e)
    {
        // Simple re-entrancy guard: skip this tick if the last one is still running
        if (_inInvocation) return;

        _inInvocation = true;

        //TODO: Do Work

        _inInvocation = false;
    }

    void _timer_OnInvocationException(object sender, InvocationExceptionEventArgs e)
    {
        //TODO: Log or otherwise handle e.Exception
    }
}

Other Considerations / Selection Criteria

  • The InvocationTimer implementation of this pattern runs out of the ThreadPool. Consequently, when the pool is under pressure, things may not fire exactly on schedule. If you have a strict schedule to keep, consider an alternative pattern
  • The InvocationTimer is not blocked while waiting for invoked members to complete an invocation. Consequently, the subscriber must manage concerns with regard to re-entrancy. The sample above shows one simple way to do this
  • The InvocationTimer is backed by the IInvocationTimer interface to make it mockable. Doing so allows this concern to be removed from consuming layers for more isolated testing
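For comparison, the same invoke-on-a-schedule idea can be sketched outside of .NET. Here is a minimal Python analogue built on threading.Timer; the class and member names are this sketch's own inventions, not a standard API:

```python
import threading

class InvocationTimer:
    """Minimal recurring-callback timer; a rough analogue of the C# version."""

    def __init__(self, interval_seconds):
        self.interval = interval_seconds
        self.targets = []         # subscriber callbacks (the "InvocationTarget" event)
        self.error_handlers = []  # exception subscribers ("OnInvocationException")
        self._timer = None
        self._running = False

    def start(self):
        if self.interval <= 0:
            raise ValueError("interval must be set above zero before starting")
        self._running = True
        self._schedule()

    def stop(self):
        self._running = False
        if self._timer is not None:
            self._timer.cancel()

    def _schedule(self):
        # threading.Timer fires once, so we re-arm after each tick (AutoReset)
        self._timer = threading.Timer(self.interval, self._fire)
        self._timer.daemon = True
        self._timer.start()

    def _fire(self):
        for target in self.targets:
            try:
                target()
            except Exception as exc:
                # Route subscriber failures to the exception subscribers
                for handler in self.error_handlers:
                    handler(exc)
        if self._running:
            self._schedule()
```

As in the C# version, a failing subscriber does not stop the timer; its exception is simply forwarded to whoever subscribed for errors.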



Apr 20 2010

SCRUM-tastic Tuesday - Break it down

Whether you're following the SCRUM process or not, at some point you've been asked to "estimate" your work. Not surprisingly, the answer to this question often comes in the form of days, weeks, or months. If you're providing a Rough Order of Magnitude (ROM) (a.k.a. ballpark estimate) for some proposal this may be fine, although this kind of estimation can get you in trouble if you don't add that all-important padding. At this point, you're probably saying... Duhhh!... and you surely recognize why. We add padding because we've not taken the time to think about the details.

Unfortunately, it's all too easy to follow this same pattern in Sprint Planning. I can't count the number of times I've witnessed Backlog estimation with the same cavalier days/weeks estimation approach. To be fair, I've been guilty of it myself and usually pay the price of over-committing because I skipped the details. According to SCRUM process tenets, the second half of the Sprint Planning session is dedicated to breaking down "how" we'll deliver the requirements identified in each Product Backlog Item. This is the time for us to think about the detailed tasks that must be accomplished as part of any piece of work. Look around, and you'll find no shortage of guidance on how to structure these, but let's see what Ken Schwaber has to say.

While designing, the Team identifies tasks. These tasks are the detailed pieces of work needed to convert the Product Backlog into working software. Tasks should have been decomposed so they can be done in less than one day.

Notice there is no specific number mentioned, although it's implied we should break tasks into sizes no larger than one productive day (whatever that is). This is really clever because, if achieved, team members can easily assess real progress from one day to the next and can both report and observe this progress in the Daily Scrum meetings. If we combine the desire to avoid over-commitment with making progress more observable for the team, this seems like a no-brainer... Then why do we keep skipping the all-too-critical break-down step? Assuming it's not plain old laziness, maybe we just need some ideas on how to break down tasking. For me, I often use my definition of "Done" to start guiding my detailed task list.

  • Author Code - Try separating your target layers into separate tasks (Ex. Domain Repository, Dal, etc.). This isn't meant to represent a list of classes, but it can represent layers of discrete behaviors/functionality. If you're developing visual interfaces this could reflect components, user controls, or other visualizations
  • Author Tests - As a complement to authoring the code, create separate tasks for testing each developed layer or component. The previous tasks allow you to report on implementation of required functionality while these tasks allow you to report on confidence with respect to completeness, reliability, etc...
  • Integration - These tasks identify work often required for larger products and may represent dependencies you have on other resources such as graphic arts, localization teams, etc. Additionally, you may also want to set aside tasks to integrate new products and/or components into your build process, source control, and/or CI tool-sets
  • Reviews - This reflects interactions you inevitably need to have with other people, and can serve as a nice reminder that you need to coordinate/review material you've authored/updated. This could include things like peer reviews with other developers, component introduction with the test team, etc...
  • Deployment - Set aside tasks specifically targeted to implement/update those things you need to deploy your product. This may include installers, configuration profiles, monitoring resources, data stores, etc...
  • Documentation - This reflects tasks related to documenting what you are going to do or have already done. This could range from release notes, code documentation, test cases, user documentation, etc... 
  • Demo/Review - Don't forget to set aside a task to prepare product scenarios for the Sprint Review/Demo

This isn't meant to be a complete list and I'm sure you can think of additional items. If you think about it, this really isn't hard, and making a quick bullet list of items such as I've done above doesn't take much time. Conducting exercises like this may help you realize that your shoot-from-the-hip estimation may not be enough to produce a "complete" product. As an added bonus, this kind of detail in your estimate can really help facilitate discussions with those who complain... "What do you mean that takes two weeks?". As with most things, the devil is in the details, so don't just assume you'll remember them, break them down...


Apr 15 2010

A ghostly proxy

Take a look at the following pseudocode:

START to SaveSomething
    CALL Logger with 'Entering SaveSomething'
    BEGIN
        IF User Has Permission
            CALL PersistSomething
    EXCEPTION
        CALL Logger with 'Procedure Failed'
    END
    CALL Logger with 'Exiting SaveSomething'
END SaveSomething

If you stand back and squint your eyes, this will probably look a little familiar. It represents that all-too-familiar pattern where we blur the lines of responsibility because it's the most convenient way to accomplish things like logging, permission checks, tracing, etc... Unfortunately, this can lead to things like the following, making your code much harder to read, debug, etc...

#if (DEBUG)
    CALL Logger with 'Procedure Entered'
#endif

The slippery slope should be obvious, but the more important issue is that we've mixed completely separate areas of concern, making our code more inflexible. In this case, authorization, logging, and entity-specific domain logic are all jumbled together. Putting this specific, and possibly terrible, example aside... this may be reasonable in some cases, but there are patterns that can provide us more flexibility. Ideally we'd be able to aggregate concerns such as these without being so tightly coupled. In this case, we'll explore a combination of the Proxy Pattern and the Interceptor Pattern, which are commonly found in Aspect-Oriented Programming (AOP). The combination of these patterns creates what is loosely defined as a Ghost Proxy Object, although some usages do not include interception. Let's start with a simple Domain Object with a simple operation.

public class DomainObject : IDomainObject
{
    private readonly IDal _dal;

    public DomainObject()
        : this(IoC.Resolve<IDal>())
    {
    }

    internal DomainObject(IDal injectedDal)
    {
        _dal = injectedDal;
    }

    public virtual void SaveSomething(object thing)
    {
        _dal.SaveSomething(thing);
    }
}

The class is pretty straightforward... get an object, and then save it. From a Domain perspective, that is the only concern this component needs to have. Now let's wrap this in a proxy and add methods to wire up externally supplied delegates. The actual code to wire up the delegates is not really important and would be pretty trivial to implement. Suffice it to say, registered delegates will be called before and after the proxied method is called. The goal is to have a class that represents the base DomainObject while providing us with opportunities to execute externally registered logic. Additionally, all this should be accomplished without any changes to, or accommodations by, the base type aside from it being inheritable.

public sealed class ProxyObject : DomainObject, IInterceptable
{
    /* The real domain object instance to be called */
    readonly IDomainObject _realInstance;

    static IDictionary<string, BeginAction[]> _beginActions = new Dictionary<string, BeginAction[]>();
    static IDictionary<string, EndAction[]> _endActions = new Dictionary<string, EndAction[]>();

    public ProxyObject()
        : this(IoC.Resolve<IDal>())
    {
    }

    internal ProxyObject(IDal injectedDal)
    {
        _realInstance = new DomainObject(injectedDal);
    }

    public override void SaveSomething(object thing)
    {
        this.InvokeBeginMethod(ref _beginActions, "Void SaveSomething(System.Object)", new[] { thing });

        _realInstance.SaveSomething(thing);

        this.InvokeEndMethod(ref _endActions, "Void SaveSomething(System.Object)", new[] { thing }, null);
    }

    public void AddInterceptMethod(MemberInfo memberInfo, BeginAction beginAction, EndAction endAction)
    {
        /* Extension Method for instances of IInterceptable */
        this.AddInterceptMethod(ref _beginActions, ref _endActions, memberInfo, beginAction, endAction);
    }

    public void RemoveInterceptMethod(MemberInfo memberInfo)
    {
        /* Extension Method for instances of IInterceptable */
        this.RemoveInterceptMethod(ref _beginActions, ref _endActions, memberInfo);
    }
}

Let's start by calling the unproxied DomainObject to see the basic behavior in action.

IDomainObject realObject = new DomainObject();
realObject.SaveSomething("Foo");

Calling an instance of the proxy would produce the same result, but we need to inject additional functionality such as logging, tracing, security, etc... To do this, we must first create an instance of the Proxy and register the Delegates which will be called before and after proxied methods are called.

MethodInfo methodToIntercept = typeof(IDomainObject).GetMethod("SaveSomething");

ProxyObject proxyObject = new ProxyObject();
proxyObject.AddInterceptMethod(methodToIntercept, LogBeginAction, LogEndAction);
proxyObject.AddInterceptMethod(methodToIntercept, PermissionCheckBeginAction, null);

Once registered, any calls to new instances of the proxy type will be wrapped with the intercepting delegate calls.

IDomainObject proxyDomainObject = new ProxyObject();
proxyDomainObject.SaveSomething("Foo");

As you can see this works fine, but this could be improved by using an IoC container. This gives us the flexibility to control this behavior externally, which provides a wide range of new options. In this case, my IoC Container returns an instance of ProxyObject which is mapped to the IDomainObject interface in an external configuration file.

IDomainObject proxyDomainObject2 = IoC.Resolve<IDomainObject>();
proxyDomainObject2.SaveSomething("Foo");

These patterns can provide some really interesting capabilities although, as you can see, they require some investment in additional code. These days, proxies such as this can be automatically created using Code Generation or Dynamic Proxy tools (here is a sample). Another important consideration is performance... Profile your application and you'll see more objects in memory, which means more work for the GC and, in the end, slower execution. Like anything else, this is all about options, so take a look and see where this pattern might fit into your tool bag. Enjoy...
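As a postscript, the intercept idea itself is not C#-specific. Here is a minimal dynamic sketch in Python; the class and method names are invented for illustration and it ignores the static registration and reflection plumbing of the C# version:

```python
class InterceptingProxy:
    """Wraps a target object and runs registered begin/end actions
    around named methods: a tiny proxy + interceptor combination."""

    def __init__(self, target):
        self._target = target
        self._begin = {}  # method name -> list of begin actions
        self._end = {}    # method name -> list of end actions

    def add_intercept(self, name, begin=None, end=None):
        if begin is not None:
            self._begin.setdefault(name, []).append(begin)
        if end is not None:
            self._end.setdefault(name, []).append(end)

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr

        def wrapped(*args, **kwargs):
            for action in self._begin.get(name, []):
                action(*args, **kwargs)           # e.g. permission check, log entry
            result = attr(*args, **kwargs)        # call the real instance
            for action in self._end.get(name, []):
                action(*args, **kwargs)           # e.g. log exit
            return result

        return wrapped
```

A domain object stays completely unaware of the logging or security actions wrapped around its methods, which is the whole point of the pattern.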


Apr 12 2010

Take a byte out of time

Category: Tips and Tricks | JoeGeeky @ 00:01

Recently I was working on a custom transport and at some point I realized my payload had grown to an unacceptable size. After some analysis I discovered that DateTime structures represented the bulk of the problem. On reflection this was completely obvious, since the default binary representation of a DateTime is a long (i.e. 8 bytes). This allows us to store date representations across just over 10,000 years.

Normal 8-Byte DateTime: ~10005 Years

In my case, and probably many of yours, I don't need to work with such a wide range of dates. This means I can shrink my dates by half and still have a sufficient range of dates to meet the needs of most applications.

4-Byte DateTime: ~136 Years
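As a quick sanity check on that figure, a 32-bit unsigned count of seconds works out to roughly 136 years (sketched here in Python for convenience):

```python
# Seconds in a Julian year (365.25 days)
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60

# Range of an unsigned 32-bit (4-byte) count of seconds
four_byte_years = 2**32 / SECONDS_PER_YEAR
print(round(four_byte_years, 1))  # roughly 136 years
```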

To do this, we can take a lesson from the POSIX community, which has been converting Unix Time to and from other formats for a long time. This is a simple pattern that allows us to reuse the built-in DateTime structures. What we need are methods to convert to and from our offset 4-byte representation.

public static class TimeOffsetExtensions
{
    static readonly DateTime _timeReference = new DateTime(2010, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);

    public static DateTime ToDateTime(this int timestamp)
    {
        return _timeReference.AddSeconds(timestamp);
    }

    public static uint ToOffsetTimestamp(this DateTime instance)
    {
        TimeSpan diff = instance - _timeReference;
        return (uint)Math.Floor(diff.TotalSeconds);
    }
}

One of the keys to this is agreeing on a common time reference. Above you will see that I set my reference to start at 01/01/2010. You can adjust this as needed to meet your particular requirements, but once data has been written the reference must never change. With these methods in place we can apply them when reading/writing to transports, stores, or whatever. Here is an example of writing a DateTime in its short form:

DateTime time = DateTime.UtcNow;
byte[] offsetDateBytes;

using (var stream = new MemoryStream())
using (var writer = new BinaryWriter(stream))
{
    writer.Write(time.ToOffsetTimestamp());
    offsetDateBytes = stream.ToArray();
}

Here is an example of reading the offset bytes back into a normal DateTime object:

DateTime recoveredDateTime;

using (var stream = new MemoryStream(offsetDateBytes))
using (var reader = new BinaryReader(stream))
{
    recoveredDateTime = reader.ReadInt32().ToDateTime();
}

That's all there is to it... Apply this in good health. 
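For readers outside the .NET world, the same round trip is easy to sketch in Python; the epoch below mirrors the article's 01/01/2010 reference date, and struct packs the offset into exactly 4 bytes:

```python
import struct
from datetime import datetime, timedelta, timezone

# The agreed-upon reference date; must never change once data is written
EPOCH = datetime(2010, 1, 1, tzinfo=timezone.utc)

def to_offset_timestamp(dt):
    """Whole seconds elapsed since the reference date."""
    return int((dt - EPOCH).total_seconds())

def from_offset_timestamp(ts):
    """Rebuild a full datetime from the 4-byte offset."""
    return EPOCH + timedelta(seconds=ts)

when = datetime(2010, 4, 12, 12, 0, 0, tzinfo=timezone.utc)
packed = struct.pack("<I", to_offset_timestamp(when))  # 4 bytes on the wire
recovered = from_offset_timestamp(struct.unpack("<I", packed)[0])
```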


Apr 10 2010

Wrap config settings for speedier access

Category: Tips and Tricks | JoeGeeky @ 08:11

Some time ago I wrote about creating custom configuration sections as an alternative to using appSettings. At the time, the focus revolved largely around the need for hierarchical and strongly typed configuration points, neither of which are really a feature of sections like appSettings. Although I feel like I'm picking on the poor little appSettings a bit, if they're used incorrectly they can really hurt products that demand high performance. Let's look at a common pattern for using appSettings:

string aString = ConfigurationManager.AppSettings["AString"];

The thing to remember is that these calls are bound to interactions with the application's configuration file, and the ConfigurationManager does very little to mitigate the expense of this type of call. As an I/O-bound resource, repeated calls can be very costly. Whether you're using appSettings or another config section, the only way to mitigate I/O-bound costs is to cache your settings. There are lots of patterns to do this; here is a quick example of just one.

Let's start with an interface representing our configuration settings. Strictly speaking an interface is not required, but it makes it easier to inject configurations using the Dependency Injection pattern later on.

public interface IConfig
{
    string AString { get; }
    bool BBoolean { get; }
    int CInt { get; }
    DateTime DDate { get; }
}

Now lets implement the interface. In doing so we need to accomplish the following:

  • Load configuration points from file
  • Cache the configuration results
  • Provide a mechanism to refresh the cached configuration

public class Config : IConfig
{
    private static IConfig _config;

    public Config()
    {
        AString = ConfigurationManager.AppSettings["AString"];
        BBoolean = Convert.ToBoolean(ConfigurationManager.AppSettings["BBoolean"]);
        CInt = Convert.ToInt32(ConfigurationManager.AppSettings["CInt"]);
        DDate = Convert.ToDateTime(ConfigurationManager.AppSettings["DDate"]);
    }

    #region IConfig Members

    public string AString { get; private set; }
    public bool BBoolean { get; private set; }
    public int CInt { get; private set; }
    public DateTime DDate { get; private set; }

    #endregion

    public static IConfig Current
    {
        get
        {
            if (_config == null)
                _config = new Config();

            return _config;
        }
    }

    public static void Refresh()
    {
        _config = new Config();
    }
}

As you can see this is essentially a singleton pattern. While this may seem a little boring, it's clean, testable, and can be extended to provide a wide range of related functionality. The real test is whether or not this is actually any faster... Creating a simple loop test, we can get a sense of how this stacks up.

private static void Main(string[] args)
{
    const int MaxReads = int.MaxValue / 2;
    var watch = new Stopwatch();

    watch.Start();
    for (int i = 0; i < MaxReads; i++)
    {
        IConfig config = Config.Current;
        string AString = config.AString;
        bool BBoolean = config.BBoolean;
        int CInt = config.CInt;
        DateTime DDate = config.DDate;
    }
    watch.Stop();
    Console.WriteLine("Using Cached Config - Rate/Sec: \t{0}", MaxReads / watch.Elapsed.TotalSeconds);

    watch.Restart();
    for (int i = 0; i < MaxReads; i++)
    {
        string AString = ConfigurationManager.AppSettings["AString"];
        bool BBoolean = Convert.ToBoolean(ConfigurationManager.AppSettings["BBoolean"]);
        int CInt = Convert.ToInt32(ConfigurationManager.AppSettings["CInt"]);
        DateTime DDate = Convert.ToDateTime(ConfigurationManager.AppSettings["DDate"]);
    }
    watch.Stop();
    Console.WriteLine("Using ConfigurationManager - Rate/Sec: \t{0}", MaxReads / watch.Elapsed.TotalSeconds);
}

Here are the results...

As you can see, the difference in performance can be dramatic. Although I used appSettings as the center of this example, the same basic problem exists for any configuration section (Ex. connection strings, custom sections, etc...). Hope this helps.
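To make the load-once idea concrete outside of .NET, here is a minimal Python sketch of the same lazy singleton; a throwaway JSON file stands in for app.config, and all names are illustrative:

```python
import json
import os
import tempfile

# Write a throwaway settings file to stand in for the app's config file.
path = os.path.join(tempfile.mkdtemp(), "settings.json")
with open(path, "w") as fh:
    json.dump({"AString": "hello", "CInt": 42}, fh)

class Config:
    """Reads every setting once, then serves them from memory."""

    _current = None

    def __init__(self):
        with open(path) as fh:  # the only I/O hit
            data = json.load(fh)
        self.a_string = data["AString"]
        self.c_int = int(data["CInt"])

    @classmethod
    def current(cls):
        if cls._current is None:  # lazy singleton: build on first access
            cls._current = cls()
        return cls._current

    @classmethod
    def refresh(cls):
        cls._current = cls()  # re-read after the config file changes
```

Every call to Config.current() after the first returns the same in-memory instance, so the file is touched exactly once until refresh() is called.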


Apr 6 2010

SCRUM-tastic Tuesday - Production Ready-ish?

Category: Intellectual Pursuits | JoeGeeky @ 00:01

SCRUM strives to have developers release 'production ready' code with each iteration, or at least that's what popular guidance tells us. Unfortunately, these two words can lead to some teams questioning their compliance with the process, or worse, questioning whether or not the process is even viable. I was reminded of this recently when someone claimed they weren't following "real" SCRUM because they always had to wait for testing after the Sprint ended.

Depending on how you define "production ready", you may be setting expectations too high. "If" you can actually implement changes, test, and release them into production with each iteration, then great. However, many shops can't achieve this, and that doesn't mean they're not following the SCRUM process correctly. To understand this, we need to go to the source and see how some popular guidance may have misled us. In this case, we need to hear from Ken Schwaber, who is credited as one of the main authors of the SCRUM process. Luckily for us he's published the details on this particular issue.

The Team consists of developers with all the skills to turn the Product Owner’s requirements into a potentially releasable piece of the product by the end of the Sprint.

If you compare "production ready" and "potentially releasable", you can see the difference. This may seem like splitting hairs, but the expectations for each are different. If you're going to question whether or not the process is viable based on this tenet then you need to make sure you scope it correctly. The SCRUM process doesn't really define what this means, leaving its criteria and measurement up to the implementor. With that said, the process does try to encourage Sprint definitions that lead to complete features, which is consistent with an iterative approach. This lack of detail is actually a good thing, since it gives us (e.g. the implementors) the flexibility to define criteria that are appropriate for our environment. Let's face it, if you've worked in more than a couple of dev shops then you know the culture related to product releases can be quite varied.

The decision on whether or not to release a product can be informed by a number of factors, many of which can require resources outside the Team. Here are a few examples:

  • Product Testing (e.g. verification and validation of functional requirements)
  • End user acceptance testing
  • Security review
  • Compliance review (e.g. validation of regulatory issues such as PCI, Privacy Act, Section 508, etc.)
  • Alignment with marketing initiatives
  • Enterprise integration testing
  • Etc...  

Consequently... while the Team may feel they're "Production Ready", it may be some time before the larger organization makes the same assessment. Thinking more generically about testing, let's take a look at a few common patterns seen in dev shops today.

This first pattern represents what would be required to produce an iteration of a product and test it within a single Sprint. This approach requires that the testers and dev team work very closely together. Since both are pigs in the same Sprint, communication between the two parties can be very efficient, allowing devs to quickly address bugs as they're found. On the down side, the team must reduce their development capacity to stabilize the product earlier in the Sprint life-cycle. For small product changes this capacity reduction can be fine, but it presents challenges for larger changes. While this may work in some teams it won't be practical for many. Many organizations have dedicated test teams supporting multiple Teams and Products. This makes commitment in any one Team's Sprint less practical, so these organizations may employ patterns that look more like the following:

In this pattern, the dev team communicates readiness for test as features are completed throughout the Sprint. This typically occurs near the end of the Sprint allowing the test team to start building/refining test plans, start testing, etc... In this case, the test team is not a pig, so testing will often continue well into the next Sprint. 

Similar to the previous pattern, this version acknowledges those environments where the test team's first introduction to product changes is the Sprint Review.

No matter which pattern your organization follows, any of these can reasonably, and correctly, be applying the SCRUM process. If the dev team is producing complete changes, based on what was known during Sprint Planning, they're not in violation of the process. So the next time someone says you're not following the "real" SCRUM process because of your release schedule, you can point them to Ken. Maybe they can try and convince him. Good Luck.


Apr 1 2010

Big time for a short meeting

Category: Cool Products | Just for fun | JoeGeeky @ 22:00

I walked into a meeting earlier this week, only to find a huge count-down clock projected on the wall. There was no mistaking what this meant... We were going to have time-boxed discussions. I wrote about a similar pattern in my post Your Time is Up, but I've never seen anything quite like this. As it turns out, this tool is called "Big Time", which is appropriate. But the real hidden gem is the voice-over, which carries a distinct measure of desperation. As if to say... "I got a Computer Science Degree and THIS is what I am doing?"

With that said, the real credit goes to my Scrum Master... This worked brilliantly... Simple and effective... Nicely done Jon.


Mar 30 2010

SCRUM-tastic Tuesday - Going Dark

Category: Intellectual Pursuits | JoeGeeky @ 00:04

In my last SCRUM-tastic post, Manager Interruptus, I picked on managers a little and blamed them for hampering the team's momentum. As I mentioned, it was a little unfair, but it served to illustrate something we all know is true... Despite "your" best efforts, constant interruptions can really destroy your ability to be productive. From time to time, no matter what you do, you'll find yourself, or your team, in danger of not meeting commitments. When this happens, you generally have four choices...

  • Cancel the Sprint - If something has happened that is a complete game-changer, then you should approach your Product Owners and discuss canceling and re-planning the Sprint. In Scrum terms this is totally acceptable, although it should be done only when absolutely needed and should not become a regular occurrence 
  • Ask for more time - This may work, although it creates a slippery slope and could have down-stream impacts that may not be obvious to the development team. Putting aside the issue of breaking the Scrum time-box rules, this could impact things like resource scheduling for other teams, which is common in the Scrum-of-Scrums scenarios often seen in larger organizations. This could impact business commitments to customers or suppliers, which could have costly legal implications. It could affect marketing campaigns, which often have substantial investments and may be time sensitive, such as holiday offerings. In some markets licensing restrictions may only give a company a small window of opportunity to get into a market, and delays could cause them to miss the window. In any case, since we burned the managers in the last post they probably won't be keen on this for a while
  • Let it fail - I covered this in the last post, and this may be an option in some cases, but is generally frowned upon. Assuming it doesn't require super-human effort, striving for success is often better than outright failure
  • Go Dark! - Given the title of this post, I'm sure you guessed we would end up here. This isn't the most ideal solution, but sometimes this is the only "reasonable" option. Let's explore this a little further

When individuals cut themselves off from the team, this could be an indication they're thrashing, which is generally a bad thing. In this case, I'm talking about the larger team going dark together for part or all of the Sprint. The goal is to limit the number of interruptions to mitigate the risk of the team not meeting their commitments. Depending on how far behind the team is, you may only need to Go Dark for part of each day or for several days straight. Assuming you can see the danger coming, you should start with the former and only go fully dark if you absolutely need to. The techniques vary widely and really need to be adapted for your team, but here are a few suggestions:

Level 1 - Nominate a stand-in

When people start to fall behind you'll often hear complaints about having to attend too many meetings. If this is your case, then you'll need to nominate someone else to attend these meetings and represent the team. The Scrum Master or Team Manager may be ideal candidates for this. Since they are likely aware of your predicament, they should be more than ready to step up and support the team.

Level 2 - Drop your connections

Whether we like to admit it or not, there are a lot of ambient distractions that tend to draw our eyes away from what we should be focusing on. While it will pain most of us to admit it, our distractions are likely online in one form or another. A few examples include, but are not limited to, the following:

  • Close the Browsers pointing to your favorite social networking site, news/sports cast, and/or blogs (except mine of course). At the risk of stating the obvious... you probably shouldn't be doing this at work anyway
  • Shut down Instant Messengers... It's not enough to set "Away" or "Invisible". To avoid those attention-getting beeps, bleeps and honks you'll need to say goodbye to your friends and loved ones, shut down IM, and get to work
  • Close Outlook, or whatever email client you're using. When you see or hear emails arrive there is a tendency to stop, read, and answer them. In most cases, they can wait a little while
  • Disable Mobile device "Push" Notifications. In some ways, this feature should be an indication we've gone one step too far in being constantly connected. Let go of your Twitter, Stock Alerts, Facebook, and Weather updates for a few hours so you can get something accomplished
  • Etc...

Level 3 - Keep the lines busy

If you work in one of those shops where people just love calling you, then you may need to unplug your desk line, close your VoIP client, and put your mobile phone in Airplane mode. If you're on call or still need to be reached by your manager, make other arrangements by having them contact the team through a third party, and make sure these details aren't advertised to anyone else. This could be through the Scrum Master, the receptionist, or someone else you can trust to hold back the floodgates of distraction.

Level 4 - Leave the building

This may seem a little over the top, but it may be necessary. We've all experienced those moments when no one was around and we got a tremendous amount of work accomplished. In this case, we're just trying to recreate that experience. One of my old teams liked going to the local coffee shop for a few hours each day. They had fuel (e.g. coffee and scones), wireless internet (often free), most patrons treated it like a library (e.g. quiet), and most importantly, they had no drive-by interruptions from colleagues. There are many variations on this technique. Assuming you trust your team, they could work from home from time to time. You could book a conference room for a long stretch and all work from laptops; in most cases, people will avoid bothering anyone who appears to be in a meeting. Your company may even have other facilities you can use.

At the moment, you might be trying to figure out what you can get away with where you work. Whatever your case, you'll need the support of your Scrum Master and Team Manager. Ideally, they'll run interference for the team to make success as achievable as possible. Even if you disagree with my specific examples, hopefully you can see how techniques like this can be useful in varying degrees. Although I implied this is needed to avoid interruptions from people, it is worth pointing out that it can also be useful for those high-value, last-minute business "requests" that need quick solutions. In any case, give it a think, adapt it to your team, and hopefully you'll make it over that hump...


Mar 26 2010

Structuring my storage

Category: Intellectual Pursuits | JoeGeeky @ 00:04

In the late '90s, a term was born which spawned a whole range of non-relational storage facilities. What is this term? NoSQL

NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases and ACID guarantees.
- Wikipedia

There is a wide range of purpose-built solutions out there, ranging from document storage systems to Tuple Space data grids. Each targets a specific niche, sacrificing more traditional SQL-like patterns for other advantages (e.g. speed, portability, structure). Even with so many differences between them, these architectures generally share a number of common characteristics, and while they may sound a lot like traditional databases, their implementations can be quite different. Consider the following common Structured Storage components:

  • Store - This is the storage medium. This is commonly a file (or many files). However, in this modern distributed world of ours this can be virtualized in many different ways
  • Index - This can be one or many different representations of part or all of the stored data. This pattern is generally used to optimize search or data location routines. Any one store can have many different Indexes to facilitate its needs
  • Cursor - This is used to iterate through the store generally pointing at one record at a time. This is similar to an Enumerator or an Iterator although one store could have multiple cursors at any one time. The cursor is often the point from which data is written to the store or read from it
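To make these three components concrete, here is a minimal sketch in Python. All class and method names are illustrative, not from any particular product, and the "store" is just an in-memory list standing in for a file:

```python
class SimpleStore:
    """A toy structured store: a Store (append-only record list), an
    Index (key -> record position), and a Cursor (record iterator)."""

    def __init__(self):
        self._records = []   # the Store: an append-only list of records
        self._index = {}     # the Index: maps a key to a record position

    def put(self, key, value):
        # Append to the store and remember the position in the index
        self._index[key] = len(self._records)
        self._records.append((key, value))

    def get(self, key):
        # The index lets us jump straight to a record without scanning
        return self._records[self._index[key]][1]

    def cursor(self):
        # The Cursor walks the store one record at a time
        for record in self._records:
            yield record

store = SimpleStore()
store.put("a", 1)
store.put("b", 2)
print(store.get("b"))         # indexed lookup -> 2
print(list(store.cursor()))   # full scan -> [('a', 1), ('b', 2)]
```

In a real store the records would live in one or more files and the index would map keys to byte offsets, but the division of labor is the same.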

Understanding these basic principles can make it easy(ier) to create your own purpose-built store to meet any specific needs you might have. I recently built a custom point-to-point queue and needed it to be durable. In this case, I wrote a store to guarantee delivery of queued messages on both the sending and receiving ends of the queue. In doing so, I was reminded of a few valuable lessons:

  • Technical Debt - To make a custom store suitable for highly available and/or performant applications, you will need to employ a number of advanced techniques. These include asynchronous and possibly distributed processing, replication strategies for failover, etc... These issues are not trivial and can require large investments in time and money to get right. If you have these needs, it may be better to go with an established technology
  • Disk thrash - It may not seem obvious, but high-performance persistence technologies need to be aware of what a storage medium, such as a disk, can and cannot do well. Think about how disk heads move. If your cursors, data readers, and/or data writers behave in a manner that causes the disk heads to jump around, you will lose precious time just due to the mechanical limitations of the disk. Do a little research and you'll find patterns to help mitigate this kind of performance hit
  • Describe your data - When you're architecting your store, keep in mind that you need to store metadata along with the data you intend to store. This could include data used to generate metrics, versioning, structure details, arbitrary headers, or whatever. While it may cost more, make sure you give yourself room to grow
  • Familiarity - Take a sampling of developers and show them a database, tables, sprocs, etc... The vast majority will know exactly what to do if change is needed. Compare that to showing the same developers a proprietary storage solution. While they may be able to figure it out, it will take a great deal more time and energy to make changes, isolate bugs, etc. Like it or not, most of us recognize the classic database model. Having something people recognize can be worth a lot, so don't underestimate the value of older patterns
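Two of these lessons, disk thrash and describing your data, show up naturally in an append-only log: records are only ever written to the end of the file (so the disk head moves sequentially), and each record carries a small metadata header. Here is a Python sketch under those assumptions; the length-prefix format and function names are my own illustration, not the format of any particular queue product:

```python
import os
import struct
import tempfile

# Each record is a 4-byte little-endian length prefix (the metadata)
# followed by the payload, always appended to the end of the file so
# writes stay sequential instead of seeking around the disk.

def append_record(f, payload: bytes):
    f.write(struct.pack("<I", len(payload)))
    f.write(payload)
    f.flush()
    os.fsync(f.fileno())   # push the write through the OS cache for durability

def read_records(path):
    records = []
    with open(path, "rb") as f:
        while True:
            header = f.read(4)
            if len(header) < 4:
                break                       # end of log
            (length,) = struct.unpack("<I", header)
            records.append(f.read(length))
    return records

path = os.path.join(tempfile.mkdtemp(), "queue.log")
with open(path, "ab") as f:
    append_record(f, b"first message")
    append_record(f, b"second message")
print(read_records(path))   # [b'first message', b'second message']
```

A real durable queue would add things like checksums, record versioning, and an acknowledgment/compaction story, but the sequential-write discipline is the part that keeps the disk happy.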

In today's complex environments, experience with the aforementioned patterns can really come in handy. Investing a little energy in this area can be worth it, even if it is just done as an intellectual pursuit. Just remember, this is all about purpose-built stores. Don't feel like you need to copy or replicate functions from other tool sets. If you are, then maybe you should just use those tools... Wink

You're welcome to take a look at an early version of one of my stores. Although it's not the best example, it met my needs. Here is some SmellyStorage.


Mar 23 2010

SCRUM-tastic Tuesday - Manager Interruptus

Category: Intellectual Pursuits | JoeGeeky @ 01:12

In the last couple of posts, Dealing with naysayers and Big projects and Epic tasking, I focused on the team members that seem to love complaining. Unfortunately, this isn't completely fair to those that have bought-in to the process and are just trying to meet their commitments. For them, let's turn the discussion to a type of impediment that can make any process or committed team fail... Managers... or at least some of them. Again, this may not be fair, but let's explore this a little more and I think you'll get my point.

For the sake of discussion, let's just assume everyone is bought into the process. This buy-in assumes everyone knows that one of the key process tenets in Scrum is that once the Team has made a commitment, they need to be left alone to meet that commitment. Unfortunately, managers are vulnerable to acute cases of amnesia, which often manifest shortly after the Sprint Planning session. Whether or not this is truly a medical condition, it can have detrimental effects on the team's ability to stay focused and maintain momentum. Treatments for this affliction can require a delicate touch and have mixed results, but before we discuss treatments, let's look at some of the symptoms and explain how they can affect the Team.

Multi-Tasking, Context Switching, Task Switching

Some would say the ability to multi-task is a crucial skill for any IT professional. The problem is... we humans just don't do it very well, although many of us think we can. Do a little Googling and read what the productivity experts have to say. Their agreement on the matter is almost unanimous... we often become less productive when we try to multi-task. Managers need to be aware of this more than anyone, since they're the first line of defence. Most teams will have some ability to absorb unplanned tasking, which is often represented in the team's productivity rate, but managers need to make sure people don't take advantage of this "potential" capacity.

Here's the main problem... when someone is working on a task and they're interrupted, it takes them considerably more time to regain the momentum they had before the interruption. This includes drive-by questions, phone calls, email, visits from the manager, and meetings, meetings, meetings. Depending on the task, this can be really costly if the developer has to reload complex or nuanced information before they can restart their work. Some interruptions are inevitable, but constant interruptions put the team at risk of not meeting their "real" commitments. The manager needs to be aware of the amount of context switching occurring in the team, especially when it's their fault, and work with the Scrum Master to deal with excessive interruptions. Some might say the interruptions need to be "managed", but that's just me trying to be cheeky.

One reason this might be the manager's fault is the typical manager/employee dynamic surrounding delegation. This is fine to a point, but if managers aren't careful, they can delegate the team right into failure. We can assume this isn't conscious neglect. Maybe they're naively oblivious to the damage left in the wake of them trying to be "good" managers... What?... It could happen!... At some point, managers, and other supporting staff, need to become self-sufficient and carry the load without the Team.

Let's just add this "one" more "little" thing...

If your Sprints are the average three-week length, there's a tendency to think that with so much time we can add just "one" more "little" thing. This is another famous manager line, but it's a slippery slope. If the manager weren't afflicted with 'the illness', they would realize they're asking for trouble, or at the very least, adding risk. The reality is, there are very few "little" things that can be added without a larger knock-on effect. New stuff, whatever the size, needs to be designed, coded, integrated, tested, documented, etc... Before you know it, the dev has lost valuable time on something that wasn't originally a commitment. As a rule, there should be no new or altered commitments once the Sprint has started.

Possible Treatments

Communicate the problem - The best thing to do is have a team meeting with your manager(s) and tell them what the concerns are. The Retrospective Meeting is a great place to start this discussion. Depending on your relationship with your manager, this may or may not work, and you may need a more subtle approach. In that case, try reducing your team's productivity rate and subsequently reduce your commitment at the next Planning session. If you keep doing this, your manager is bound to ask what is going on. This will give you another opportunity to articulate how interruptions are endangering your team's ability to meet their commitments. To be effective in making your point, you'll need some evidence. As they happen, try adding unplanned/uncommitted tasks to whatever tracking system you're using. This will give you a valuable point of reference and may also show up on the burn-down chart, which can help reinforce your point.

Learn to say "No" without saying "No" - This can be hard in some shops, but may be necessary to prevent the team's failure. Since "No" can be received negatively, try approaching this from another angle. This is a bit Freudian, but it works.

  • Start by suggesting the new task be a "priority" for the next Sprint. This sounds like "Yes" but implies a "No" for the current Sprint without saying it
  • Say "Yes" with the proviso that the manager identifies which commitment should not be done as a result of adding the new task

Batch your interruptions - As valuable resources, it makes sense that some questions can only be answered by the Team. If you're suffering from frequent drive-by questions, ask your manager to save them up and ask more questions with fewer interruptions. Maybe they can agree to meet the team once a day, or every couple of days, for a time-boxed Q/A session. For this, I recommend following stand-up rules so things don't take too long.

Let it fail - No one likes to fail, and I'm not suggesting sabotage. With that said, there's no sense fighting the inevitable. Everyone has to be made aware of the consequences of not keeping the larger commitment to the team. Some Sprints can require extraordinary effort, but they can't be planned with the expectation of Herculean efforts on the part of the development team. That's a recipe for disaster.

This isn't meant to be a complete list of treatment options, but it may give you a few ideas on how to handle this. Whatever you do, try to remember that everyone has their own perspective, and odds are good they're busy as well. In most cases, the problem comes down to communication, or the lack of it. Start with the direct approach, and resort to Freudian manipulation only if you need to.

I picked on managers a little bit, but the truth is, they're a valuable part of any team's success. With that said, we can all slip into bad habits and sometimes need to do a little Freudian introspection to reflect on our own behavior. For full disclosure... I've done my share of managing, and I've certainly been guilty of doing the things mentioned above. Like anything else, the Team and their manager(s) need to find a balance to ensure the larger business objectives can be met without killing your team in the process (pun intended).

Before wrapping things up, there is one more thing... What about the Scrum Master?... Great question!... Since they facilitate every daily Scrum and should be the first to hear about impediments, they should be well aware of the kind of stress the Team is under. Consequently, they should be the first to tell the manager when it's time to back away and start keeping outside interruptions away from the Team. So get to work, you Scrum Masters... Wink