Apr 15 2010

A ghostly proxy

Take a look at the following pseudocode:

START to SaveSomething
    CALL Logger with 'Entering SaveSomething'
    BEGIN
        IF User Has Permission
            CALL PersistSomething
    EXCEPTION
        CALL Logger with 'Procedure Failed'
    END
    CALL Logger with 'Exiting SaveSomething'
END SaveSomething

If you stand back and squint your eyes, this will probably look a little familiar. It represents that all-too-familiar pattern where we blur the lines of responsibility because it's the most convenient way to accomplish things like logging, permission checks, tracing, etc... Unfortunately, this can lead to things like the following, making your code much harder to read, debug, etc...

#if (DEBUG)
    CALL Logger with 'Procedure Entered'
#endif

The slippery slope should be obvious, but the more important issue is that we've mixed completely separate areas of concern, making our code more inflexible. In this case, authorization, logging, and entity-specific domain logic are all jumbled together. Putting this specific (and possibly terrible) example aside, this may be reasonable in some cases, but there are patterns that can provide us more flexibility. Ideally we'd be able to aggregate concerns such as these without being so tightly coupled. In this case, we'll explore a combination of the Proxy Pattern and the Interceptor Pattern, which are commonly found in Aspect-Oriented Programming (AOP). The combination of these patterns creates what is loosely defined as a Ghost Proxy Object, although some usages do not include interception. Let's start with a simple Domain Object with a simple operation.

public class DomainObject : IDomainObject
{
    private readonly IDal _dal;

    public DomainObject() : this(IoC.Resolve<IDal>()) { }

    internal DomainObject(IDal injectedDal)
    {
        _dal = injectedDal;
    }

    public virtual void SaveSomething(object thing)
    {
        _dal.SaveSomething(thing);
    }
}

The class is pretty straightforward... get an object, and then save it. From a Domain perspective, that is the only concern this component needs to have. Now let's wrap this in a proxy and add methods to wire up externally supplied delegates. The actual code to wire up the delegates is not really important and would be pretty trivial to implement; suffice it to say, registered delegates will be called before and after the proxied method is called. The goal is to have a class that represents the base DomainObject while providing us with opportunities to execute externally registered logic. Additionally, all this should be accomplished without any changes to (or accommodations by) the base type, aside from it being inheritable.

public sealed class ProxyObject : DomainObject, IInterceptable
{
    /* The real domain object instance to be called */
    readonly IDomainObject _realInstance;

    static IDictionary<string, BeginAction[]> _beginActions = new Dictionary<string, BeginAction[]>();
    static IDictionary<string, EndAction[]> _endActions = new Dictionary<string, EndAction[]>();

    public ProxyObject() : this(IoC.Resolve<IDal>()) { }

    internal ProxyObject(IDal injectedDal)
    {
        _realInstance = new DomainObject(injectedDal);
    }

    public override void SaveSomething(object thing)
    {
        this.InvokeBeginMethod(ref _beginActions, "Void SaveSomething(System.Object)", new[] { thing });

        _realInstance.SaveSomething(thing);

        this.InvokeEndMethod(ref _endActions, "Void SaveSomething(System.Object)", new[] { thing }, null);
    }

    public void AddInterceptMethod(MemberInfo memberInfo, BeginAction beginAction, EndAction endAction)
    {
        /* Extension Method for instances of IInterceptable */
        this.AddInterceptMethod(ref _beginActions, ref _endActions, memberInfo, beginAction, endAction);
    }

    public void RemoveInterceptMethod(MemberInfo memberInfo)
    {
        /* Extension Method for instances of IInterceptable */
        this.RemoveInterceptMethod(ref _beginActions, ref _endActions, memberInfo);
    }
}
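I'm not showing the IInterceptable plumbing here since it really is trivial, but for the curious, a minimal sketch of the delegate types and extension methods used above might look something like this. The signatures are inferred from the calls in ProxyObject and are illustrative only, not lifted from a real library.

using System.Collections.Generic;
using System.Reflection;

public delegate void BeginAction(object target, object[] arguments);
public delegate void EndAction(object target, object[] arguments, object returnValue);

public interface IInterceptable { }

public static class InterceptableExtensions
{
    public static void AddInterceptMethod(this IInterceptable proxy,
        ref IDictionary<string, BeginAction[]> beginActions,
        ref IDictionary<string, EndAction[]> endActions,
        MemberInfo memberInfo, BeginAction beginAction, EndAction endAction)
    {
        /* MethodInfo.ToString() yields keys like "Void SaveSomething(System.Object)" */
        string key = memberInfo.ToString();

        if (beginAction != null)
        {
            var begins = new List<BeginAction>();
            if (beginActions.ContainsKey(key)) begins.AddRange(beginActions[key]);
            begins.Add(beginAction);
            beginActions[key] = begins.ToArray();
        }

        if (endAction != null)
        {
            var ends = new List<EndAction>();
            if (endActions.ContainsKey(key)) ends.AddRange(endActions[key]);
            ends.Add(endAction);
            endActions[key] = ends.ToArray();
        }
    }

    public static void RemoveInterceptMethod(this IInterceptable proxy,
        ref IDictionary<string, BeginAction[]> beginActions,
        ref IDictionary<string, EndAction[]> endActions,
        MemberInfo memberInfo)
    {
        beginActions.Remove(memberInfo.ToString());
        endActions.Remove(memberInfo.ToString());
    }

    public static void InvokeBeginMethod(this IInterceptable proxy,
        ref IDictionary<string, BeginAction[]> beginActions, string methodKey, object[] arguments)
    {
        BeginAction[] actions;
        if (beginActions.TryGetValue(methodKey, out actions))
            foreach (BeginAction action in actions) action(proxy, arguments);
    }

    public static void InvokeEndMethod(this IInterceptable proxy,
        ref IDictionary<string, EndAction[]> endActions, string methodKey, object[] arguments, object returnValue)
    {
        EndAction[] actions;
        if (endActions.TryGetValue(methodKey, out actions))
            foreach (EndAction action in actions) action(proxy, arguments, returnValue);
    }
}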

Let's start by calling the un-proxied DomainObject to see the basic behavior in action.

IDomainObject realObject = new DomainObject();
realObject.SaveSomething("Foo");

Calling an instance of the proxy would produce the same result, but we need to inject additional functionality such as logging, tracing, security, etc... To do this, we must first create an instance of the Proxy and register the Delegates which will be called before and after proxied methods are called.

MethodInfo methodToIntercept = typeof(IDomainObject).GetMethod("SaveSomething");

ProxyObject proxyObject = new ProxyObject();
proxyObject.AddInterceptMethod(methodToIntercept, LogBeginAction, LogEndAction);
proxyObject.AddInterceptMethod(methodToIntercept, PermissionCheckBeginAction, null);
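The delegates registered here might be nothing more than static methods on the calling code that match the BeginAction/EndAction signatures. The bodies below are purely illustrative; real logging and permission checks would obviously do something more useful.

static void LogBeginAction(object target, object[] arguments)
{
    Console.WriteLine("Entering SaveSomething with {0} argument(s)", arguments.Length);
}

static void LogEndAction(object target, object[] arguments, object returnValue)
{
    Console.WriteLine("Exiting SaveSomething");
}

static void PermissionCheckBeginAction(object target, object[] arguments)
{
    /* Illustration only -- a real check would consult your own authorization rules */
    if (!System.Threading.Thread.CurrentPrincipal.IsInRole("CanSave"))
        throw new System.Security.SecurityException("User lacks permission to save");
}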

Once registered, any calls to new instances of the proxy type will be wrapped with the intercepting delegate calls.

IDomainObject proxyDomainObject = new ProxyObject();
proxyDomainObject.SaveSomething("Foo");

As you can see, this works fine, but it could be improved by using an IoC container. That gives us the flexibility to control the behavior externally, which opens up a wide range of new options. In this case, my IoC container returns an instance of ProxyObject, which is mapped to the IDomainObject interface in an external configuration file.

IDomainObject proxyDomainObject2 = IoC.Resolve<IDomainObject>();
proxyDomainObject2.SaveSomething("Foo");

These patterns can provide some really interesting capabilities although, as you can see, they require some investment in additional code. These days, proxies such as this can be automatically created using Code Generation or Dynamic Proxy tools (here is a sample). Another important consideration is performance... Profile your application and you'll see more objects in memory, which means more to be GC'd, plus an extra layer of delegate calls around every proxied method. In the end, this means things will be a little slower. Like anything else, this is all about options, so take a look and see where this pattern might fit into your tool bag. Enjoy...


Mar 26 2010

Structuring my storage

Category: Intellectual Pursuits | JoeGeeky @ 00:04

In the late 90's, a term was born which spawned a whole range of non-relational storage facilities. What is this term? NoSQL

NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases and ACID guarantees.
- Wikipedia

There are a wide range of purpose-built solutions out there, ranging from document storage systems to Tuple Space data grids. Each of these targets a specific niche, sacrificing more traditional SQL-like patterns for other advantages (Ex. speed, portability, structure, etc...). Even with so many differences between them, these architectures generally share a number of common characteristics, and while they may sound a lot like traditional databases, their implementations can be quite different. Consider the following common Structured Storage components:

  • Store - This is the storage medium. This is commonly a file (or many files). However, in this modern distributed world of ours this can be virtualized in many different ways
  • Index - This can be one or many different representations of part or all of the stored data. This pattern is generally used to optimize search or data location routines. Any one store can have many different Indexes to facilitate its needs
  • Cursor - This is used to iterate through the store, generally pointing at one record at a time. This is similar to an Enumerator or an Iterator, although one store could have multiple cursors at any one time. The cursor is often the point from which data is written to the store or read from it (see the sketch after this list)
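To make these ideas concrete, here is a minimal sketch of the three pieces working together: an append-only, length-prefixed file acting as the Store, an in-memory list of record offsets acting as the Index, and a forward-only Cursor for reading records back. The names and record layout are illustrative only and are not taken from my own store.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

public sealed class SimpleStore : IDisposable
{
    private readonly FileStream _file;                         /* the Store: a single data file    */
    private readonly List<long> _offsets = new List<long>();   /* the Index: offset of each record */

    public SimpleStore(string path)
    {
        _file = new FileStream(path, FileMode.OpenOrCreate, FileAccess.ReadWrite);

        long position = 0;
        while (position < _file.Length)                        /* rebuild the index on open */
        {
            _offsets.Add(position);
            _file.Position = position;
            byte[] lengthBytes = new byte[4];
            _file.Read(lengthBytes, 0, 4);
            position += 4 + BitConverter.ToInt32(lengthBytes, 0);
        }
    }

    public void Append(string record)
    {
        byte[] payload = Encoding.UTF8.GetBytes(record);
        _file.Seek(0, SeekOrigin.End);
        _offsets.Add(_file.Position);
        _file.Write(BitConverter.GetBytes(payload.Length), 0, 4);
        _file.Write(payload, 0, payload.Length);
        _file.Flush();                                         /* durability matters for a queue */
    }

    /* The Cursor: walks the index one record at a time */
    public IEnumerable<string> ReadAll()
    {
        foreach (long offset in _offsets)
        {
            _file.Position = offset;
            byte[] lengthBytes = new byte[4];
            _file.Read(lengthBytes, 0, 4);
            byte[] payload = new byte[BitConverter.ToInt32(lengthBytes, 0)];
            _file.Read(payload, 0, payload.Length);
            yield return Encoding.UTF8.GetString(payload);
        }
    }

    public void Dispose() { _file.Dispose(); }
}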

Understanding these basic principles can make it easy(ier) to create your own purpose-built store for meeting any specific needs you might have. I recently built a custom point-to-point queue and needed it to be durable. In this case, I wrote a store to guarantee delivery of queued messages on both the sending and receiving ends of the queue. In doing so, I was reminded of a few valuable lessons:

  • Technical Debt - To make a custom store suitable for highly available and/or performant applications, you will need to employ a number of advanced techniques. This includes asynchronous and possibly distributed processing, replication strategies for fail-over, etc... These issues are not trivial, and can require large investments in time and money to make them work correctly. If you have these needs then it may be better to go with an established technology
  • Disk thrash - It may not seem obvious, but high-performance persistence technologies need to be aware of what a storage medium, such as a disk, can and cannot do well. Think about how disk heads move. If your cursors, data readers, and/or data writers behave in a manner that would cause the disk heads to jump around, you will lose precious time just due to the mechanical limitations of the disk. Do a little research and you'll find patterns to help mitigate this kind of performance hit
  • Describe your data - When you're architecting your store, keep in mind that you need to store meta-data along with the data you intend to store. This could include data used to generate metrics, versioning, structure details, arbitrary headers, or whatever. While it may cost more, make sure you give yourself room to grow 
  • Familiarity - Take a sampling of developers and show them a database, tables, sprocs, etc... The vast majority will know exactly what to do if change is needed. Compare that to showing the same developers a proprietary storage solution. While they may be able to figure it out, it will take a great deal more time and energy to make change, isolate bugs, etc. Like it or not, most of us recognize the classic database model. Having something people recognize can be worth a lot, so don't underestimate the value of older patterns  

In today's complex environments, experience with the aforementioned patterns can really come in handy. Investing a little energy in this area can be worth it, even if it is just done as an intellectual pursuit. Just remember, this is all about purpose-built stores. Don't feel like you need to copy or replicate functions from other tool sets. If you are, then maybe you should just use those tools... ;-)

You're welcome to take a look at an early version of one of my stores. Although it's not the best example, it met my needs. Here is some SmellyStorage.


Apr 11 2009

Inverting my control

Although not a new pattern, the Alt.NET Community has been cheerleading Inversion of Control (IoC) for a few years now. Generally speaking, I think this has been good for all of us. It has reminded us old-timers that tightly coupled architectures are just not good for us, not to mention teaching this lesson to the noobs. Also good has been the development of many community tools like Spring.Net, Castle, and others, each of which helps us (re)embrace this pattern.

Whenever this topic comes up it is inevitably linked with Test Driven Development (TDD). This is unfortunate because, no matter how you look at it, people seem to miss the point of both when linking the two so closely together. Now that I have said that, this debate is a post for another day. Let's get back to IoC... I have tried working with a number of different open-source tools to implement IoC-based solutions. Some work very well, although the good ones have become so large it begs the question: do I really want to absorb such a large code-base for small projects? For me the answer is no, and since I am not willing to abandon the IoC pattern, I had to write my own.

If you think about it, an IoC container is really just a Factory pattern, so creating one to map an interface to a concrete implementation will be easy. Here is what we need:

  • A Factory to resolve interfaces to concrete implementations
    • To give us some flexibility and separate the areas of concern, we will also want to separate the actual resolution process so we can plug in new ones (Ex. Castle) as our products mature
  • Configuration Section to tell the factory how to map types

Let's start with an interface to define the contract for our resolver component. This will be responsible for understanding any one mapping implementation (Ex. configuration-based, discovered via reflection, etc...). This can also be used to connect you to other containers like Castle at a later date, in case you change your mind and don't want to recode everything.

public interface IDependencyResolver
{
    T Resolve<T>();
}

Now we need to create at least one implementation of the resolver.

using System;
using System.Collections.Generic;
using System.Configuration;

public sealed class DependencyResolver : IDependencyResolver
{
    private readonly Dictionary<Type, Type> types = new Dictionary<Type, Type>();

    public DependencyResolver()
    {
        MyCustomConfigSection configuration = MyCustomConfigSection.Current;

        foreach (ComponentConfigElement component in configuration.Components)
        {
            try
            {
                Register(Type.GetType(component.Contract, true, true), 
                Type.GetType(component.Implementation, true, true));
            }
            catch (Exception ex)
            {
                string message = "A configured component is not valid. Contract='{0}', Implementation={1}";
                message = string.Format(message, component.Contract, component.Implementation);

                var configurationErrorsException = new ConfigurationErrorsException(
                    message, ex);

                throw configurationErrorsException;
            }
        }
    }

    #region IDependencyResolver Members

    public T Resolve<T>()
    {
        return (types.ContainsKey(typeof(T))) ? 
                    (T)Activator.CreateInstance(types[typeof(T)]) : default(T);
    }

    #endregion

    public void Register(Type contractType, Type implementationType)
    {
        if (contractType.IsAssignableFrom(implementationType) == false)
        {
            throw new InvalidOperationException(string.Format(
                "The supplied instance does not implement {0}", contractType.FullName));
        }

        if (types.ContainsKey(contractType))
            types.Remove(contractType);

        types.Add(contractType, implementationType);
    }
}

Now lets create the Factory class. This will serve user requests for resolution and will load a configured resolver.

using System;
using System.Configuration;

public static class IoC
{
    private static IDependencyResolver _resolver;

    static IoC()
    {
        Initialize();
    }

    private static void Initialize()
    {
        MyCustomConfigSection configuration = MyCustomConfigSection.Current;

        if (configuration == null || string.IsNullOrEmpty(configuration.Components.Resolver))
        {
            var exception = new ConfigurationErrorsException(
                "No IoC resolver configuration found in the configuration file");

            throw exception;
        }

        Type configuredResolver = Type.GetType(configuration.Components.Resolver, false, true);

        if (configuredResolver == null)
        {
            var exception = new ConfigurationErrorsException(
                "The IoC resolver found in the configuration file is either an invalid type or could not be found");

            throw exception;
        }

        Initialize((IDependencyResolver)Activator.CreateInstance(configuredResolver));
    }

    public static void Initialize(IDependencyResolver resolver)
    {
        _resolver = resolver;
    }

    public static T Resolve<T>()
    {
        return _resolver.Resolve<T>();
    }
}

If we skip the configuration details, let's see how callers would get class instances using the Factory:

IPhoneNumber phoneNumber = IoC.Resolve<IPhoneNumber>();
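For this call to resolve anything, there must of course be a contract and at least one implementation to map it to. Hypothetically, something as simple as this would do:

public interface IPhoneNumber
{
    string Format(string rawNumber);
}

public sealed class PhoneNumber : IPhoneNumber
{
    public string Format(string rawNumber)
    {
        /* Illustrative only -- any concrete type mapped in configuration will do */
        return string.Format("+{0}", rawNumber);
    }
}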

Remember, this pattern forces us to think in terms of the contract represented by the interface (e.g. contract/specification) as opposed to what any one concrete implementation provides. That's nice, but the reality is that it will never work without configuration. Here is a list of what we will need:

  • Configuration Element defining a mapping between an interface and an implementing class
  • Configuration Element Collection to contain a list of mappings

First the config element for the mappings:

using System.Configuration;

public sealed class ComponentConfigElement : ConfigurationElement
{
    public ComponentConfigElement()
    {
        Init();
    }

    #region Contract Property

    internal const string ContractPropertyName = "contract";

    [ConfigurationProperty(ContractPropertyName,
        Options = ConfigurationPropertyOptions.IsRequired | ConfigurationPropertyOptions.IsKey)]
    public string Contract
    {
        get { return (string)this[ContractPropertyName]; }
    }

    #endregion

    #region Implementation Property

    internal const string ImplementationPropertyName = "implementation";

    [ConfigurationProperty(ImplementationPropertyName, Options = ConfigurationPropertyOptions.IsRequired)]
    public string Implementation
    {
        get { return (string)this[ImplementationPropertyName]; }
    }

    #endregion
}

The following represents a collection of the previous mappings and provides for the configuration of a resolver implementation:

using System.Configuration;

public sealed class ComponentConfigElementCollection : ConfigurationElementCollection
{
    public override ConfigurationElementCollectionType CollectionType
    {
        get { return ConfigurationElementCollectionType.BasicMap; }
    }

    internal const string ElementPropertyName = "component";

    protected override string ElementName
    {
        get { return ElementPropertyName; }
    }

    protected override ConfigurationElement CreateNewElement()
    {
        return new ComponentConfigElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        return ((ComponentConfigElement)element).Contract;
    }

    #region Resolver Property

    internal const string ResolverPropertyName = "resolver";

    [ConfigurationProperty(ResolverPropertyName)]
    public string Resolver
    {
        get { return (string)this[ResolverPropertyName]; }
    }

    #endregion
}
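One piece not shown above is the MyCustomConfigSection class referenced by both the resolver and the factory. Its exact shape will depend on how your application's configuration is laid out, but a minimal sketch might look like this (the section name is just an example and would need a matching entry under <configSections>):

using System.Configuration;

public sealed class MyCustomConfigSection : ConfigurationSection
{
    internal const string SectionName = "myCustomSection";
    internal const string ComponentsPropertyName = "components";

    /* Assumes the section is registered in <configSections> with the name above */
    public static MyCustomConfigSection Current
    {
        get { return (MyCustomConfigSection)ConfigurationManager.GetSection(SectionName); }
    }

    [ConfigurationProperty(ComponentsPropertyName)]
    public ComponentConfigElementCollection Components
    {
        get { return (ComponentConfigElementCollection)this[ComponentsPropertyName]; }
    }
}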

These classes allow us to add the following to any config section you may have already defined for your application:

<components resolver="MyAssembly.DependencyResolver, MyAssembly">
    <component contract="MyAssembly.IPhoneNumber, MyAssembly" implementation="MyAssembly.PhoneNumber, MyAssembly" />
</components>

Well that's it...  This is pretty slim and can be extended to do a wide range of things. In fact, if you are following a Dependency Injection pattern with your constructors, this is a VERY nice fit, but that's a post for another day.  Enjoy.
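Just as a quick teaser of that pairing, here is a hypothetical example: the public constructor resolves its dependency through the container, while an internal constructor lets tests (or anything else) inject their own implementation. The Dialer class is made up purely for illustration.

public class Dialer
{
    private readonly IPhoneNumber _phoneNumber;

    /* Callers get container-resolved dependencies by default... */
    public Dialer() : this(IoC.Resolve<IPhoneNumber>()) { }

    /* ...while tests can hand in a fake */
    internal Dialer(IPhoneNumber phoneNumber)
    {
        _phoneNumber = phoneNumber;
    }
}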

If you would like a full set of source code, feel free to download the SmellyContainer, which provides a full demonstration of this pattern.


Apr 30 2006

Automatic Code Generation, Code that writes itself!

Category: Selfish Motivation | JoeGeeky @ 01:02

Billy Hollis once said that developers have an addiction... Writing code... After much soul searching and a little Freudian introspection, I realized that I was a code addict. Having faced my demon, I set out to address my addiction head-on.  What was the answer? Simple… write code that writes itself.

As usual Microsoft was thinking ahead, and provided us with a very powerful .NET namespace containing tons of great classes to facilitate this process. That namespace is System.CodeDom…

Consider for a moment, how many times you’ve had to produce an information class filled with public properties and their private members. This is tedious work and really doesn’t lend itself to making you a better programmer. Like the first step in a 10-step program, my first application was geared towards making this portion of my job obsolete, allowing me to provide basic property information in a language neutral short-hand and then producing a complete code set in the language of my choice. I don’t normally post my code but this is something that more developers need to explore. To that end, the below links are provided to help proliferate this type of practice… 
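To give you a taste of what this looks like, here is a small, illustrative CodeDom sample (not the generator itself) that emits a class containing one private member and its matching public property:

using System;
using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

class CodeDomTaste
{
    static void Main()
    {
        CodeCompileUnit unit = new CodeCompileUnit();
        CodeNamespace ns = new CodeNamespace("Generated");
        unit.Namespaces.Add(ns);

        CodeTypeDeclaration type = new CodeTypeDeclaration("Person");
        type.IsClass = true;
        ns.Types.Add(type);

        /* private string _name; */
        type.Members.Add(new CodeMemberField(typeof(string), "_name"));

        /* public string Name { get { return this._name; } set { this._name = value; } } */
        CodeMemberProperty property = new CodeMemberProperty();
        property.Name = "Name";
        property.Type = new CodeTypeReference(typeof(string));
        property.Attributes = MemberAttributes.Public | MemberAttributes.Final;

        CodeFieldReferenceExpression field =
            new CodeFieldReferenceExpression(new CodeThisReferenceExpression(), "_name");
        property.GetStatements.Add(new CodeMethodReturnStatement(field));
        property.SetStatements.Add(
            new CodeAssignStatement(field, new CodePropertySetValueReferenceExpression()));
        type.Members.Add(property);

        /* Swap CSharpCodeProvider for another provider to emit a different language */
        using (StringWriter writer = new StringWriter())
        {
            new CSharpCodeProvider().GenerateCodeFromCompileUnit(unit, writer, new CodeGeneratorOptions());
            Console.WriteLine(writer.ToString());
        }
    }
}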

Thanks Billy, you saved my soul….

Joes Property Code Generator - Binaries.zip (19.49 kb)

Joes Property Code Generator - VS 2K5 Source Code.zip (26.81 kb)

Note: You must have both the .NET Runtime and the J# 2.0 Runtime installed to use this.


Feb 16 2006

My DOTNETROCKS! Media Center Plugin

Category: Just for fun | JoeGeeky @ 05:00

I remember the day when I first saw a TIVO...  Besides being one of the best implementations of Linux I had ever seen, I saw an appliance approach that just made sense.  Shortly after that, I was sitting in a Microsoft WinHEC session when I saw Microsoft Media Center for the first time.  It was at that moment I realized what TIVO was missing...  What was it? What every good geek longs for...  Extensibility!

That's right... I could make it do what I wanted.  Microsoft had empowered me.  AGAIN!  Ok... I admit it... I drank the Microsoft Kool-Aid long ago, and yes, I know you can hack TIVO.  With all this new 'supported' power, what did I do? I made a DNR plugin of course.  Microsoft has written a lot about the 10-foot user experience, and Media Center appears to be the embodiment of all that user interface research.  If you have not explored this space, it is worth a look.  Want to know more...  Check out this episode of MSDN TV - http://msdn.microsoft.com/theshow/episode.aspx?xml=theshow/en/Episode053/manifest.xml


UPDATE!...  In the 21 Feb 2006 episode of DotNetRocks, Carl Franklin was kind enough to give me a small mention about seven minutes into the show...  Thanks guys...  You Rock!

Another UPDATE!...  On 01 April 2006 I completed the construction of my very own media center...  This was a lot of fun and a real challenge!  Now I will never miss an episode of Good Eats!

Click Here to see some of the construction work.
 


Jan 16 2006

My Windows Cleaner

Category: Selfish Motivation | JoeGeeky @ 12:49

During my time as a systems integrator I found myself manually cleaning out system refuse prior to every system delivery. This is an extremely tedious process that can really eat up a lot of time. With no shortage of things to do, I needed to get a little selfish and make an app to deal with this. Enter the Windows Cleaner… A nifty little .NET application that is extensible for my fellow integrators. For this project I even created a little SDK that added a cleaner module project type to Microsoft Visual Studio.NET, allowing developers to easily create new/custom cleaner modules. As a side note, if you are not aware of the VS.NET extensibility model, then you really need to look into that.  Ahhh…  you have to love the little things in life…  Why not just buy a commercial product?...  Ummm... have you not been paying attention?  I'm a GEEK and this is free!

BTW, if you want something like this for use at home, I recommend Windows Washer from Webroot.

Application Screen Capture


Dec 15 2005

Audit Assistant

Category: Selfish Motivation | JoeGeeky @ 23:16

This one was a lot of fun... We needed something to help audit our systems. On the surface a simple task, but this is one of those cases where one thing leads to another and pretty soon you’re on your way to developing enterprise services (boggle). There are a lot of nifty features here, but since this is for the geeks in the house, this included tons of owner-drawn stuff using GDI+ and more brute-force pagination than I ever want to repeat again :-)... To round things out, there are sprinkles of XML/XSD, serialization, interoperability with the Windows Instrumentation Engine, SOA-based services, and a partridge in a pear tree. This also gave me a chance to explore some IUI concepts in human engineering and process automation. Although I developed this on my own time, it has gotten some attention and may grow up to be something larger in the future.


Dec 2 2005

processMonkey

Category: Selfish Motivation | JoeGeeky @ 03:22

This is another case where 'sometimes' the best solution is the one you create yourself. In this scenario, I was faced with the crippling process of gathering and analyzing metrics for our SW-CMM initiatives. After doing this by hand a few times I realized that I had to find a way to automate the process and define more objective measuring criteria for the team. Here is where the geek part comes in... All of the data was stuck in a proprietary portal engine that had no 'advertised' interfaces... 'They' said I was only allowed to view content via the browser. You know 'They', because we all work for 'Them'. There are those of us who understand what we do not manage, which is often managed by those that do not understand what 'They' manage... Got it?... Good!... Ok, I will play... In this application I embedded a number of browsers and created bots that would crawl through the portal, scrape the data out, normalize it, model it, cache it, and then apply it to a charting engine I developed for the front-end. This process eventually grew into a server-based product with a service layer allowing access to scraped data by clients.  One thing led to another, and I found myself integrating with Microsoft Outlook, implementing full-screen briefing support, chart image capturing and extraction, and the aforementioned implementation of a service-oriented layer to support multiple client access. Ok... A little out of control?... Maybe... but it made my job (in this area) super easy and I had a lot of fun working with the embedded browsers, DOMs, bot methods, service layers, owner-drawn charting, etc. Hey, it’s a geek thing...  :-)

As a side note...  Sadly...  Sometimes I find that I am one of 'Them', ohhh the horror!  Nevermind, it comes with a pay raise...  :-)


Nov 11 2005

MD5 Calculator Add-in for Visual Studio.NET

Category: Selfish Motivation | JoeGeeky @ 11:35

Got an email one day that said, '...all files that must be audited, shall be submitted with an MD5 signature...'. Hmmm, what to do then? I could (a) use an external product to calculate the signatures, transfer them to a file, and email the file to all the interested parties, or (b) do the geek thing and write an app to do it for me... Duhhh, I have to do the geek thing. In this case, I explored the VS.NET Extensibility Framework. If you have not used this before, it is worth investigating. This little gem stays out of sight and out of mind, silently calculating MD5 signatures for all the compiled elements of your project, and drops an XML file with all the details in your project directory, updating it when you recompile. Add this artifact to your source repository and you can find your file signatures quick and easy. I added a search module to the aforementioned audit tool that could perform audits based on this output file. Ahhh, I love it when a plan comes together.
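The hashing itself is the easy part. Something along these lines (a simplified sketch, not the add-in's actual code) is all it takes to compute a signature for a compiled file; the add-in just wraps this with build events and the XML output described above:

using System;
using System.IO;
using System.Security.Cryptography;

class Md5Signature
{
    static string ComputeSignature(string path)
    {
        using (MD5 md5 = MD5.Create())
        using (FileStream stream = File.OpenRead(path))
        {
            /* Render the hash as the familiar hex string */
            return BitConverter.ToString(md5.ComputeHash(stream)).Replace("-", string.Empty);
        }
    }

    static void Main(string[] args)
    {
        Console.WriteLine("{0}  {1}", ComputeSignature(args[0]), args[0]);
    }
}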


Sep 22 2005

Joe's Simple Port Scanner

Category: Selfish Motivation | JoeGeeky @ 16:39

One day, a friend came to me and asked how to scan a TCP or UDP port.  With that question, I wrote a simple little port scanner to demonstrate at least one approach to the problem. Once written, I found out that it wasn't one port but thousands. Well, now it is time for some metrics... As it turns out, my little sample took 2 minutes and 5 seconds to scan 1024 ports. While it worked, it needed to be faster.  After scratching my head a bit, I wrote a new port scanner that performed scans on a pool of threads.  Now it scanned the same port range in 2.2 seconds. What can I say, multithreading is groovy stuff.  :-)
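For the curious, the thread-pooled TCP version boiled down to something like the following sketch. This is not the original code, and a real scanner would also want a connect timeout, but it shows the basic idea of fanning the connect attempts out across the pool.

using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading;

class PortScanSketch
{
    static void Main()
    {
        string host = "localhost";
        List<int> openPorts = new List<int>();
        ManualResetEvent done = new ManualResetEvent(false);
        int remaining = 1024;

        for (int port = 1; port <= 1024; port++)
        {
            int candidate = port;   /* capture a copy for the anonymous delegate */
            ThreadPool.QueueUserWorkItem(delegate
            {
                try
                {
                    using (TcpClient client = new TcpClient())
                    {
                        client.Connect(host, candidate);   /* throws if the port is closed */
                        lock (openPorts) openPorts.Add(candidate);
                    }
                }
                catch (SocketException) { /* closed or filtered -- ignore */ }
                finally
                {
                    if (Interlocked.Decrement(ref remaining) == 0) done.Set();
                }
            });
        }

        done.WaitOne();
        openPorts.Sort();
        foreach (int port in openPorts)
            Console.WriteLine("Port {0} appears to be open", port);
    }
}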

 
