Jun 9 2011

Lost Priorities

A common practice when developing real-time systems is to elevate the priority of the host process. A common misconception is that raising a process's priority implicitly raises the priority of threads spawned within it. In truth, this only holds for the main thread and has no impact on threads spawned during host execution. To ensure new threads are aligned with the host process's priority, you'll need to perform the following:

Create a means of mapping between the process and thread priority options. Below is an example of a simple mapping which could serve a number of application needs:

public static ThreadPriority GetThreadPriority(this Process process)
{
    switch (process.PriorityClass)
    {
        case ProcessPriorityClass.AboveNormal:
            return ThreadPriority.AboveNormal;
        case ProcessPriorityClass.BelowNormal:
            return ThreadPriority.BelowNormal;
        case ProcessPriorityClass.High:
            return ThreadPriority.Highest;
        case ProcessPriorityClass.Idle:
            return ThreadPriority.Normal;
        case ProcessPriorityClass.Normal:
            return ThreadPriority.Normal;
        case ProcessPriorityClass.RealTime:
            return ThreadPriority.Highest;
        default:
            return ThreadPriority.Normal;
    }
}

Set the priority of each thread you expect to be mapped to the host process's priority:

var _workerThread = new Thread(WorkerStart)
{
    IsBackground = true,
    Priority = Process.GetCurrentProcess().GetThreadPriority(),
    Name = "My Worker Thread"
};

That's it, use it in good health.


Apr 16 2011

A few principles for writing blazing fast code in .NET

When authoring high performance applications, the following generalized rules can serve as valuable guides:

Share nothing across threads, even at the expense of memory

  • Sharing across thread boundaries leads to increased preemptions, costly thread contention, and can introduce other less obvious expenses in L2 cache behavior, and more
  • When working with shared state that is seldom or never updated, give each thread its own copy, even at the expense of memory (see the sketch after this list)
  • Create thread affinity if the workload represents sagas for object state, but keep in mind this may limit scalability within a single instance of a process
  • Where possible, isolating threads is ideal
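
As a hedged illustration of the copy-per-thread idea, here is a minimal sketch using ThreadLocal<T> (available in .NET 4); LoadLookup is a hypothetical stand-in for however your application builds its shared state

using System.Collections.Generic;
using System.Threading;

public static class PerThreadLookup
{
    // Each thread lazily builds (or copies) its own dictionary,
    // so lookups never cross a thread boundary
    private static readonly ThreadLocal<Dictionary<int, string>> _local =
        new ThreadLocal<Dictionary<int, string>>(LoadLookup);

    public static string Find(int key)
    {
        string value;
        _local.Value.TryGetValue(key, out value);
        return value;
    }

    // Hypothetical loader; in practice this might clone a master copy
    private static Dictionary<int, string> LoadLookup()
    {
        return new Dictionary<int, string> { { 1, "one" }, { 2, "two" } };
    }
}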

Embrace lock-free architectures

  • The fewer locks the better, which is obvious to most people
  • Understanding how to achieve thread-safety using lock-free patterns can be somewhat nuanced, so digging into the details of how the primitive/native locking semantics work and the concepts behind memory fencing can help ensure you have leaner execution paths; a minimal sketch follows this list
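
To make the lock-free idea concrete, here is a minimal compare-and-swap sketch built on Interlocked.CompareExchange; the running total is illustrative, but the retry-loop shape applies to most lock-free read-modify-write operations

using System.Threading;

public class LockFreeAccumulator
{
    private int _total;

    public void Add(int amount)
    {
        int snapshot, updated;

        // Classic CAS retry loop: read, compute, attempt to swap,
        // and retry if another thread won the race
        do
        {
            snapshot = _total;
            updated = snapshot + amount;
        }
        while (Interlocked.CompareExchange(ref _total, updated, snapshot) != snapshot);
    }

    public int Total
    {
        get { return Thread.VolatileRead(ref _total); }
    }
}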

Number of dedicated long-running Threads == Number of processing cores

  • It's easy to just spin up another thread. Unfortunately, the more threads you create, the more contention you are likely to create with them. Eventually, you may find your application is spending so much time jumping between threads that there is no time to do any real work. This is known as a 'Live Lock' scenario and is somewhat challenging to debug
  • Test the performance of your application using different threading patterns on hardware that is representative of the production environment to ensure the number of threads you've chosen is actually optimal (a sizing sketch follows this list)
  • For background tasks that have more flexibility in how often they can run, when, and how much work can be done at any given time, consider Continuation or Task Scheduler patterns and collapse them onto fewer threads, or a single one
  • Consider using patterns that utilize the ThreadPool instead of using dedicated long-running threads
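
As a rough sizing sketch, assuming the workload can be partitioned up front (WorkerHost and workerBody are illustrative names, not a framework API), one dedicated long-running thread is created per processing core

using System;
using System.Threading;

public static class WorkerHost
{
    public static Thread[] StartWorkers(Action<int> workerBody)
    {
        // One dedicated long-running thread per processing core
        var workers = new Thread[Environment.ProcessorCount];

        for (int i = 0; i < workers.Length; i++)
        {
            int workerId = i; // capture a stable copy for the closure

            workers[i] = new Thread(() => workerBody(workerId))
            {
                IsBackground = true,
                Name = "Worker " + workerId
            };

            workers[i].Start();
        }

        return workers;
    }
}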

Stay in-memory and avoid or batch I/O where possible

  • File, Database, and Network I/O can be costly
  • Consider batching updates when I/O is required. This includes buffering file writes, batching message transmissions, etc… (see the sketch after this list)
  • For database interactions, try using bulk inserts, even if it's only to temp tables. You can use Stored Procedures to signal submission of data, which can then perform ETL-like functions in the database
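
Here is a minimal sketch of the buffering idea; the flush threshold and file path are arbitrary placeholders, and a single writer thread is assumed

using System.Collections.Generic;
using System.IO;

public class BatchedFileWriter
{
    private const int FlushThreshold = 500; // arbitrary; tune against measured I/O costs

    private readonly List<string> _buffer = new List<string>();

    public void Write(string line)
    {
        _buffer.Add(line);

        if (_buffer.Count >= FlushThreshold)
            Flush();
    }

    public void Flush()
    {
        if (_buffer.Count == 0) return;

        // One large sequential write instead of hundreds of small ones
        File.AppendAllLines(@"C:\logs\output.log", _buffer);
        _buffer.Clear();
    }
}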

Avoid the Heap

  • Objects placed on the Heap carry with them the burden of being garbage collected. If your application produces a large number of objects with very short lives, the cost of collection can weigh heavily on the overall performance of your application
  • Consider switching to Server GC (a.k.a. multi-core GC)
  • Consider switching to Value Types maintained on the call stack, and don't box them
  • Consider reusing object instances. Samples of each of these can be found in the below Coding Guidelines; a brief value-type sketch also follows this list
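
As a brief sketch of the value-type point (PriceTick is an invented example; the comment shows the standard gcServer configuration switch)

// Server GC is enabled via app.config:
//   <configuration>
//     <runtime>
//       <gcServer enabled="true" />
//     </runtime>
//   </configuration>

public struct PriceTick // a value type: no per-instance GC burden when used as a local
{
    public int InstrumentId;
    public double Price;
}

public static class TickMath
{
    // Passing and returning the struct keeps it on the stack;
    // assigning it to an object variable would box it onto the heap
    public static PriceTick Adjust(PriceTick tick, double delta)
    {
        tick.Price += delta;
        return tick;
    }
}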

Use method-level variables during method execution and merge results with class-level variables after processing

  • Shared variables that are frequently updated can create inefficiencies in how the call stack is managed and how L2 caches behave
  • When working with relatively small variables, follow a pattern of copy-local, do work, merge changes (see the sketch after this list)
  • For shared state that is updated frequently by multiple threads, be aware of 'False Sharing' concerns and code accordingly
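
A minimal copy-local, do-work, merge sketch, assuming each worker folds its result into the class-level total exactly once

using System.Threading;

public class Aggregator
{
    private long _grandTotal; // class-level state, touched once per chunk

    public void ProcessChunk(int[] chunk)
    {
        long localTotal = 0; // method-level: no contention during the hot loop

        for (int i = 0; i < chunk.Length; i++)
            localTotal += chunk[i];

        // Merge once, atomically, after the work is done
        Interlocked.Add(ref _grandTotal, localTotal);
    }
}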

Avoid polling patterns

  • Blind polling can lead to inefficient use of resources and can reduce a system's ability to scale, or reduce overall performance. Where possible, apply publish-subscribe patterns (a minimal sketch follows)
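
A bare-bones sketch of the push model; OrderFeed is an invented name, and in practice the event would carry real payload data rather than EventArgs.Empty

using System;

public class OrderFeed
{
    // Subscribers are pushed new data; nobody spins asking "anything yet?"
    public event EventHandler<EventArgs> OrderArrived;

    public void Publish()
    {
        var handler = OrderArrived; // copy to a local to avoid racing unsubscribes

        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}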

Know what things cost

  • If you dig a little deeper into the Framework you may find some surprises with regard to what things cost. Take a look at Know What Things Cost.

Develop layers with at least two consumers in mind… Production & Tests

  • Developing highly performant systems requires a fair amount of testing. As such, each new layer/component needs to be testable in isolation so that we can better isolate performance bottlenecks, measure thresholds and capacity, and model/validate behavior under various load scenarios
  • Consider using Dependency Injection patterns to allow injection of alternate dependencies, mocks, etc… (see the sketch after this list)
  • Consider using Provider patterns to make selection of separate implementations easier. It's not uncommon for automated test systems to configure alternate implementations to suit various test cases
    • Ex. Layers that replicate network instability, layers that accommodate Bot-users that drive change in the system, layers that replicate external resources with predictable behaviours, etc…
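
As a minimal constructor-injection sketch, with IMessageTransport as a hypothetical dependency that a test harness could replace with a fake simulating network instability

public interface IMessageTransport
{
    void Send(byte[] payload);
}

public class OrderDispatcher
{
    private readonly IMessageTransport _transport;

    // Production wires in the real transport; tests inject a fake
    // that can simulate latency, drops, or instability
    public OrderDispatcher(IMessageTransport transport)
    {
        _transport = transport;
    }

    public void Dispatch(byte[] order)
    {
        _transport.Send(order);
    }
}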


Jan 1 2011

Dropping the locks

When you're working on high performance products you'll eventually find yourself creating some form of a Near-Cache. If things really need to go fast, you may start thinking about going lock-free. This may be perfectly reasonable, but let's explore this a little to see where it makes sense.

For the purposes of this article, a Near-Cache is defined as any collection of state from durably persistent mediums that is maintained in-process and services requests concurrently. This may be a direct projection of state from a single medium or a custom View of state from many different mediums.

Typically, implementations of the Near-Cache pattern fall into a couple of broad categories:

Write-Once/Read-Many

  • As the name suggests, this is a cache that is seldom written to, and spends most of its time servicing read/lookup requests. A common example would be maintaining an in-memory View of static (or seldom updated) state in a database to offset the expense of calling back to the database for each individual lookup

Volatile Read-Write

  • This type of cache typically has data written-to or altered as frequently as it's read. In many cases, caches of this type maintain state that is discovered through more dynamic processes, although it could also be backed by persistent or other semi-volatile mediums

The benefits and/or consequences of selecting either pattern vary depending on the needs of your application. Generally speaking, both approaches will save costs as compared to calling out to more expensive external resources. In C# there are several patterns which could be used to create a Thread-Safe Near-Cache. Exploring two samples that demonstrate opposite ends of the spectrum will help reveal their relative merits. Along the way I'll identify some criteria that may be useful in determining which is the most appropriate for a given scenario.

The Full-lock Cache

This pattern is suitable for either Write-Once/Read-Many or Volatile Read-Write caches

using System.Collections.Generic;

public static class FullLockCache
{
    private static IDictionary<int, string> _cache;
    private static object _fullLock;

    static FullLockCache()
    {
        _cache = new Dictionary<int, string>();
        _fullLock = new object();
    }

    public static string GetData(int key)
    {
        lock (_fullLock)
        {
            string returnValue;
            _cache.TryGetValue(key, out returnValue);
            return returnValue;
        }
    }

    public static void AddData(int key, string data)
    {
        lock (_fullLock)
        {
            if (!_cache.ContainsKey(key))
                _cache.Add(key, data);
        }
    }
}

Benefits
  • The locking and Thread-Safety semantics are easily understood
  • Unless you have high numbers of threads accessing the cache concurrently or have extremely high performance demands, the relative performance of this pattern can be quite reasonable
  • This can be relatively easy to test and could be made easier with subtle extensions
Consequences
  • The main consequence of this cache pattern is the full lock taken on every call. This can start to manifest as higher contention rates in applications that have large numbers of threads, or fewer threads with very high performance demands

The Lock-Free Cache

This pattern is only suitable for a Write-Once/Read-Many cache

using System.Collections.Generic;
using System.Threading;

public static class NoLockDictionaryCache
{
    private static IDictionary<int, string> _cache;

    static NoLockDictionaryCache()
    {
        _cache = new Dictionary<int, string>();
    }

    public static string GetData(int key)
    {
        string returnValue;
        _cache.TryGetValue(key, out returnValue);
        return returnValue;
    }

    public static void LoadCacheFromSource()
    {
        IDictionary<int, string> tempCache = new Dictionary<int, string>();

        //TODO: Load Cache

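        // Atomically publish the fully built dictionary; readers see either
        // the old cache or the new one, never a partially loaded cache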
        Interlocked.Exchange(ref _cache, tempCache);
    }
}

Benefits
  • No locking is required
  • Read performance of a cache is significantly improved
  • This can be relatively easy to test and could be made easier with subtle extensions
Consequences
  • Writes can be relatively slow when compared to other patterns. This is a result of reloading the entire cache, or cloning and then updating the existing cache
Variations
  • The Hashtable and ConcurrentDictionary types work very well in place of the Dictionary, and their read performance is almost the same (a ConcurrentDictionary sketch follows)
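
For the ConcurrentDictionary variation, a minimal sketch; TryAdd preserves the add-if-absent semantics of the full-lock sample without an explicit lock

using System.Collections.Concurrent;

public static class ConcurrentCache
{
    private static readonly ConcurrentDictionary<int, string> _cache =
        new ConcurrentDictionary<int, string>();

    public static string GetData(int key)
    {
        string returnValue;
        _cache.TryGetValue(key, out returnValue);
        return returnValue;
    }

    public static void AddData(int key, string data)
    {
        // Mirrors the add-if-absent behavior of the full-lock sample
        _cache.TryAdd(key, data);
    }
}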


May 18 2010

Just tap me on the shoulder when it's time

Many applications have a requirement to execute code on a recurring schedule. In cases such as this, there is a tendency to create dedicated background threads to perform this function. This pattern certainly works, but can carry some inefficiencies when you have large numbers of threads that spend the vast majority of their lives sleeping.
As an alternative, consider using an invocation callback pattern.

To implement this pattern we need to start with an interface to define the invoker.

using System;

public interface IInvocationTimer : IDisposable
{
    /// <summary>
    /// The interval at which the invocation targets are to be called
    /// </summary>
    TimeSpan Interval { get; set; }

    /// <summary>
    /// Starts the invocations
    /// </summary>
    void Start();

    /// <summary>
    /// Stops the invocations
    /// </summary>
    void Stop();

    /// <summary>
    /// Subscription point for invocations
    /// </summary>
    event EventHandler<EventArgs> InvocationTarget;

    /// <summary>
    /// Subscription point for all invocation exceptions
    /// </summary>
    event EventHandler<InvocationExceptionEventArgs> OnInvocationException;
}

Next we need to define the concrete implementation. Under the covers this uses the System.Timers.Timer which manages schedule-based invocations via the ThreadPool. This approach is lightweight, easy to manage, and testable in isolation.

using System;
using System.Runtime.Remoting.Messaging;
using System.Timers;

public class InvocationTimer : IInvocationTimer
{
    private readonly object _lock = new object(); // initialized here so both constructors get a usable lock
    private readonly Timer _timer;

    public InvocationTimer()
        : this(TimeSpan.Zero)
    { }

    public InvocationTimer(TimeSpan interval)
    {
        _timer = new Timer { AutoReset = true, Enabled = false };

        if (interval > TimeSpan.Zero)
            _timer.Interval = interval.TotalMilliseconds;

        _timer.Elapsed += TimerNotificationChain;
    }

    public event EventHandler<EventArgs> InvocationTarget;
    public event EventHandler<InvocationExceptionEventArgs> OnInvocationException;

    public TimeSpan Interval
    {
        get
        {
            return TimeSpan.FromMilliseconds(_timer.Interval);
        }
        set
        {
            _timer.Interval = value.TotalMilliseconds;
        }
    }

    public void Start()
    {
        if (_timer.Interval < 1)
            throw new InvalidOperationException("The invocation 'Interval' must be set to a value above zero before the timer can be started.");

        _timer.Start();
    }

    public void Stop()
    {
        _timer.Stop();
    }

    public void Dispose()
    {
        _timer.Stop();
        _timer.Dispose();
    }

    private void TimerNotificationChain(object sender, ElapsedEventArgs e)
    {
        // Copy the delegate to a local; locking the event itself is unsafe because
        // subscribe/unsubscribe swaps in a new delegate instance, and the event
        // may be null when there are no subscribers
        EventHandler<EventArgs> invocationTarget = InvocationTarget;

        if (invocationTarget == null) return;

        foreach (EventHandler<EventArgs> handler in invocationTarget.GetInvocationList())
        {
            handler.BeginInvoke(this, EventArgs.Empty, InvocationCompletionCallback, null);
        }
    }

    private void InvocationCompletionCallback(IAsyncResult ar)
    {
        lock (_lock)
        {
            var asyncResult = (AsyncResult)ar;
            var handler = (EventHandler<EventArgs>)asyncResult.AsyncDelegate;

            try
            {
                handler.EndInvoke(asyncResult);
            }
            catch (Exception exp)
            {
                RaiseInvocationExceptionEvent(new InvocationExceptionEventArgs
                                                    {
                                                        Exception = exp
                                                    });
            }
        }
    }

    private void RaiseInvocationExceptionEvent(InvocationExceptionEventArgs e)
    {
        if (OnInvocationException == null) return;

        OnInvocationException(this, e);
    }
}

Finally, let's take a look at how to use this.

using System;

public class MyClass
{
    private IInvocationTimer _timer;
    private volatile bool _inInvocation;

    public MyClass()
        : this(new InvocationTimer())
    { }

    internal MyClass(IInvocationTimer timer)
    {
        _timer = timer;
        _timer.Interval = TimeSpan.FromSeconds(5);
        _timer.InvocationTarget += _timer_InvocationTarget;
        _timer.OnInvocationException += _timer_OnInvocationException;
    }

    void _timer_OnInvocationException(object sender, InvocationExceptionEventArgs e)
    {
        //TODO: Log or otherwise handle the exception
    }

    void _timer_InvocationTarget(object sender, EventArgs e)
    {
        // Simple re-entrancy guard: skip this tick if the previous one is still running
        if (_inInvocation) return;

        _inInvocation = true;

        //TODO: Do Work

        _inInvocation = false;
    }
}

Other Considerations / Selection Criteria

  • The InvocationTimer implementation of this pattern runs out of the ThreadPool. Consequently, when the pool is under pressure, things may not fire exactly on schedule. If you have a strict schedule to keep, consider an alternate pattern
  • The InvocationTimer is not blocked while waiting for Invoked members to complete an invocation. Consequently, the subscriber must manage concerns with regard to re-entrancy. The sample above shows one simple way to do this
  • The InvocationTimer is backed by the IInvocationTimer interface to make it mockable. Doing so, allows this concern to be removed from consuming layers for more isolated testing

Nice!!!


Mar 26 2010

Structuring my storage

Category: Intellectual Pursuits (JoeGeeky @ 00:04)

In the late 90's, a term was born which spawned a whole range of non-relational storage facilities. What is this term? NoSql

NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases and ACID guarantees.
- Wikipedia

There are a wide range of purpose-built solutions out there, ranging from document storage systems to Tuple Space data grids. Each of these targets a specific niche, sacrificing more traditional SQL-like patterns for other advantages (Ex. speed, portability, structure, etc...). Even with so many differences between them, these architectures generally share a number of common characteristics, and while they may sound a lot like traditional databases, their implementations can be quite different. Consider the following common Structured Storage components (a small sketch follows the list):

  • Store - This is the storage medium. This is commonly a file (or many files). However, in this modern distributed world of ours this can be virtualized in many different ways
  • Index - This can be one or many different representations of part or all of the stored data. This pattern is generally used to optimize search or data location routines. Any one store can have many different Indexes to facilitate its needs
  • Cursor - This is used to iterate through the store generally pointing at one record at a time. This is similar to an Enumerator or an Iterator although one store could have multiple cursors at any one time. The cursor is often the point from which data is written to the store or read from it
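
As a hedged sketch of how these three components might be expressed in code (the names and shapes below are illustrative, not drawn from any particular product):

public interface IRecordStore
{
    long Append(byte[] record);       // returns the record's position in the store
    byte[] Read(long position);
}

public interface IRecordIndex
{
    void Map(int key, long position); // key -> location in the store
    long Locate(int key);
}

public interface IRecordCursor
{
    bool MoveNext();                  // advance one record at a time
    byte[] Current { get; }
}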

Understanding these basic principles can make it easy(ier) to create your own purpose-built store for meeting any specific needs you might have. I recently built a custom point-to-point queue and needed it to be durable. In this case, I wrote a store to guarantee delivery of queued messages on both the sending and receiving ends of the queue. In doing so, I was reminded of a few valuable lessons:

  • Technical Debt - To make a custom store suitable for highly available and/or performant applications, you will need to employ a number of advanced techniques. This includes asynchronous and possibly distributed processing, replication strategies for fail-over, etc... These issues are not trivial, and can require large investments in time and money to make them work correctly. If you have these needs then it may be better to go with an established technology
  • Disk thrash - It may not seem obvious, but high-performance persistence technologies need to be aware of what a storage medium, such as a disk, can and cannot do well. Think about how disk heads move. If your cursors, data readers, and/or data writers behave in a manner that would cause the disk heads to jump around, you will lose precious time just due to the mechanical limitations of the disk. Do a little research and you'll find patterns to help mitigate this kind of performance hit
  • Describe your data - When you're architecting your store, keep in mind that you need to store meta-data along with the data you intend to store. This could include data used to generate metrics, versioning, structure details, arbitrary headers, or whatever. While it may cost more, make sure you give yourself room to grow 
  • Familiarity - Take a sampling of developers and show them a database, tables, sprocs, etc... The vast majority will know exactly what to do if change is needed. Compare that to showing the same developers a proprietary storage solution. While they may be able to figure it out, it will take a great deal more time and energy to make changes, isolate bugs, etc. Like it or not, most of us recognize the classic database model. Having something people recognize can be worth a lot, so don't underestimate the value of older patterns

In today's complex environments, experience with the aforementioned patterns can really come in handy. Investing a little energy in this area can be worth it, even if it is just done as an intellectual pursuit. Just remember, this is all about purpose-built stores. Don't feel like you need to copy or replicate functions from other tool sets. If you are, then maybe you should just use those tools...

You're welcome to take a look at an early version of one of my stores. Although it's not the best example, it met my needs. Here is some SmellyStorage.


Feb 19 2010

What the heck is a torn-read?

If you've written any high performance asynchronous applications you may have found yourself putting up Memory Barriers to avoid those costly thread locks. More commonly known as Memory Fences, this technique can allow you to create lock-free read-modify-write operations, which can give you that little extra bit of performance that some applications need. To be fair, this is an advanced technique, so if you are embarking on this sort of thing, be very careful and do lots of testing.

The problem with using lock-free approaches is that you now have to deal with Atomicity on your own, and this may not always be as straightforward as you might think. Let's look at a couple of simple examples of what is and is not Atomic.

Single reads and writes of 32-bit (and smaller) types are atomic:

Int32 number = 123; /* single writes are atomic */
number.ToString(); /*single reads are atomic */

Increment and compound assignment operations are another story, since each consists of two operations (one read and one write). As a result, using them in a lock-free environment can lead to unexpected results:

number++;
number += 2;

This means that another thread or process could alter the value of 'number' in between its read and write. When this happens you will certainly be left scratching your head wondering how 1 += 1 could possibly equal 1294.

64-bit types present an even more subtle problem. Single reads or writes of 64-bit types 'may' not be atomic the way their 32-bit counterparts are. That's right, I said 'may' not... Here's the issue. To store a 64-bit value in a 32-bit environment, the runtime needs two separate memory locations, and consequently there are two separate instructions to read or write a value. This creates a vulnerability, similar to the read-modify-write problem above, called a torn-read. When this occurs, one of the memory locations reflects a value from one thread or process while the other reflects a value from another thread or process. In the end, this will cause you no end of pain and is next to impossible to debug.

Thread or Process A writes 5,000,000,000, which is stored across two 32-bit words as follows:

00000000000000000000000000000001 -- 00101010000001011111001000000000

Thread or Process B writes 10,000,000,000, which is stored in the same two 32-bit words as follows:

00000000000000000000000000000010 -- 01010100000010111110010000000000

A read by either could result in the following combination (the high word of one value paired with the low word of the other):

00000000000000000000000000000001 -- 01010100000010111110010000000000

In 64-bit environments this is not an issue, because a 64-bit value occupies a single memory location and can be read or written with a single instruction.

So how can you use a lock-free pattern and avoid this problem on 32-bit platforms? There are a couple of ways to deal with this, but really only one worth mentioning... Interlocked.

Interlocked.Increment(ref number);

This class provides a number of methods to help ensure that interaction with types such as these is done atomically.
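
For 64-bit fields on a 32-bit platform, Interlocked.Read and Interlocked.Exchange provide tear-free access; here is a minimal sketch

using System.Threading;

public class SafeCounter64
{
    private long _value;

    public void Set(long newValue)
    {
        Interlocked.Exchange(ref _value, newValue); // atomic 64-bit write
    }

    public long Get()
    {
        return Interlocked.Read(ref _value); // atomic 64-bit read, no torn halves
    }

    public void Increment()
    {
        Interlocked.Increment(ref _value); // atomic read-modify-write
    }
}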

So what is the moral of this story?  If you are going lock-free... Test, test, test...


Jan 16 2010

Exceptional Threading

Category: Rabbit Trails | Tips and Tricks (JoeGeeky @ 21:27)

When multi-threading applications it can be easy to lose the plot from time to time. Sometimes it can take all your energy just to remember what is running when, and how to sync, lock, join, etc... Often, exception handling takes a back seat, or loses consideration with respect to where exceptions should, or will, be communicated and how they may be handled. Even if you assume you are the greatest developer who ever lived, exceptions are inevitable, and when they occur in a multi-threaded application the root cause can be very hard to isolate. In fact, depending on the type of feature being executed on a thread, you may have silent failures leading to no end of rabbit-trails as dependent behaviors and/or components exhibit who knows what.

With that in mind, there are a number of patterns that can keep you out of trouble, or at least help you isolate problems when trouble strikes. Let's tackle one of the most commonly used threading patterns first, the QueueUserWorkItem.

ThreadPool.QueueUserWorkItem(DoSomethingFeature, null);

This is something I see a lot of, and unfortunately it can lead to disappointment. Any unhandled exception that occurs in the aforementioned DoSomethingFeature() method will reach the AppDomain and crash your application. There are, at least, two patterns we can employ to deal with this kind of problem. The first pattern focuses on catching exceptions. Thanks to lambda support, we can easily wrap our feature methods with some basic try {} catch {} blocks.

ThreadPool.QueueUserWorkItem(state =>
    {
        try
        {
            DoSomethingFeature(state);
        }
        catch (Exception ex)
        {
            //Handle the exception
        }
    });

The above approach will provide you an opportunity to catch unhandled exceptions but does not provide an elegant means of communicating to other threads so they can take action if needed. To achieve that, you could employ the Observer Pattern using static Events... Here is a simplified example:   

Define a delegate and EventArgs implementation to communicate whatever is needed to facilitate your exception handling needs...  For this sample, all we need is the Exception itself.

public delegate void CustomExceptionHandler(object sender, ExceptionArgs e);

public sealed class ExceptionArgs : EventArgs
{
    public Exception Exception { get; set; }
}

Next, define a static Event in a location that is accessible to all required areas of concern.

public static event CustomExceptionHandler OnCustomException;

With that in place, we can now queue our threads as we did before, but this time we will wire up the new event/delegate created previously to communicate exception details.

ThreadPool.QueueUserWorkItem(state =>
    {
        try
        {
            DoSomethingFeature(state);
        }
        catch (Exception ex)
        {
            if (OnCustomException != null)
                OnCustomException(null, new ExceptionArgs { Exception = ex });
        }
    });

For those layers charged with handling or responding to unhandled exceptions, they just need to subscribe to the Events. 

OnCustomException += ((sender, e) => Console.WriteLine(e.Exception.Message));

Now let's address a second commonly used pattern for catching unhandled exceptions. You may have seen code such as the following:

AppDomain.CurrentDomain.UnhandledException += (sender, e) => { /* catch and continue */ };

This approach is often misunderstood... On the surface, it may appear as a method of catching an unhandled exception and preventing your application from crashing, but testing will show that this is not true starting with .NET 2.0. This delegate is provided to allow the application to save state, log exception details, etc. but will not prevent a terminal Exception from bringing down the AppDomain. Using this for the stated purposes is still a good idea, but you will need to employ other methods such as the ones above to prevent total failure.  


Dec 10 2008

How to make a thread-safe Singleton web friendly

Category: Tips and Tricks (JoeGeeky @ 13:32)

Here is a quick one... There is often a lot of debate about the use of Singletons in Web environments, but they can make a lot of sense. With that said, using a Per-Request Singleton in a Web Application without considering thread-safety will lead to results that are... well... dubious. A typical Singleton pattern such as the one below is thread-safe under normal circumstances, but in ASP.NET Applications the threading behaviour per request is unique. The consequence of this "uniqueness" is that the below pattern is no longer safe, at least insofar as having a single instance per request:

public sealed class TypicalSingleton
{
    static TypicalSingleton _instance;

    static readonly object _padlock = new object();

    public static TypicalSingleton Current
    {
        get
        {
            lock (_padlock)
            {
                if (_instance == null)
                    _instance = new TypicalSingleton();

                return _instance;
            }
        }
    }
}

If you wish to have an instance per web request, you can do so with the following pattern. The instance can easily be created, read, and/or modified within the HTTP pipeline using HTTP Modules, as well as used within the application itself:

using System.Web;

public sealed class WebContextSingleton
{
    private const string ContextKey = "WebContextSingleton";

    public static WebContextSingleton Current
    {
        get
        {
            var instance = HttpContext.Current.Items[ContextKey] as WebContextSingleton;

            if (instance == null)
            {
                instance = new WebContextSingleton();
                HttpContext.Current.Items.Add(ContextKey, instance);
            }

            return instance;
        }
    }
}

See... I told you... a quick one.


Oct 10 2007

Parameterized threading for your own good

Category: Tips and Tricks (JoeGeeky @ 06:14)

The benefits of multithreading an application's functions are numerous and well documented, so I will not bore you with all of that. However, setting up a parameterized threading routine is not all that obvious, and you don't see much of this in the blogosphere. So, with that in mind, I will walk you through a simple example of how to set up a parameterized thread, which I believe is far more practical for most applications today. First, let's look at a standard thread invocation without any parameters. This is pretty straightforward...

1.  Write the method that you would like to call via an alternate thread.  

 

private void MyAsyncMethod()
{
    //Save the world here...
}

2.  Write the method that will invoke your new AsyncMethod. In this method you will need to instantiate the thread, point it at the method to be invoked, and then start the thread. If you read the details of the Thread class on MSDN you will see there are tons of things you can do, but to keep this simple we will just start the thread.

public void InvokeMyAsyncMethod()
{
    Thread myThread = new Thread(MyAsyncMethod);
    myThread.Start();
}

That's it... Call your invoke method and you are all set. If you look at the Thread class constructors you will quickly see there is no overload that lets you point a thread at a method with an arbitrary parameter list. In the real world, we often need to execute threads based on some set of parameters. To accomplish our goal we need to employ a delegate named "System.Threading.ParameterizedThreadStart". Essentially, this delegate allows us to point a Thread at a method matching a specific signature, supplying one parameter via the Thread's Start method. Let's walk through an example.

1.  Write the parameterized method that you would like to call via an alternate thread. In this case we have to follow the signature x(object y).

private void MyAsyncParamMethod(object paramObject)
{
    //Save the world here with parameters...
}

2.  Now let's write our parameter-aware invocation method.

public void InvokeMyAsyncParamMethod(object paramObject)
{
    ParameterizedThreadStart pts = new ParameterizedThreadStart(MyAsyncParamMethod);
    Thread myParamThread = new Thread(pts);
    myParamThread.Start(paramObject);
}

That's it... At this point you can pass in a single object for the parameters and parse the object internally to process whatever you need.

This is where a little old-time developer religion comes into play. So often I see people sending dictionary objects, collections, or worse yet an ArrayList. While this can get the job done, I would recommend creating your own strongly typed parameter class, as shown below. This is easier to parse, maintain, extend, document, etc... Also, you can place tighter controls on validating what can be submitted via these parameters... and it's not so sloppy.
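
For example, here is a hedged sketch of a strongly typed parameter class; the property names are invented for illustration.

using System;

public class ReportJobParameters
{
    public int ReportId { get; set; }
    public DateTime RunDate { get; set; }
    public string OutputPath { get; set; }
}

private void MyAsyncParamMethod(object paramObject)
{
    // One cast up front, then strongly typed access from here on
    var parameters = (ReportJobParameters)paramObject;

    //Save the world here with parameters.ReportId, parameters.RunDate, etc...
}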

 
