I’m using Unity for the first time on my current project (a WCF service), and the project is now fairly large, with a significant amount of container configuration.  I’m taking advantage of lifetime managers to optimise object reuse; PerThreadLifetimeManager is particularly attractive in a service, since it cuts garbage through object reuse with no locking to worry about.
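For context, a per-thread registration looks something like this (IOrderParser and OrderParser are hypothetical names for illustration, not types from my project):

```csharp
using Microsoft.Practices.Unity;

var container = new UnityContainer();

// One OrderParser instance is created per thread and then reused on that
// thread, so the instance itself needs no locking.
// IOrderParser/OrderParser are hypothetical example types.
container.RegisterType<IOrderParser, OrderParser>(new PerThreadLifetimeManager());
```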

I decided that it might be nice to see counts of the number of objects that Unity is creating for each type, to confirm that my lifetime configuration is optimal.  It turned out to be straightforward to do this with a Unity extension.

Here’s how the extension is used.  First, the extension is added to the container:

    var objectCounterExtension = new ObjectCounterExtension();
    container.AddExtension(objectCounterExtension);


At application shutdown a list of types and their associated instance counts can be output like this:

    IDictionary<Type,int> objectCounts = objectCounterExtension.ObjectCounts;

    foreach (var objectCount in objectCounts.OrderBy(oc => oc.Value))
    {
        Type type = objectCount.Key;
        int count = objectCount.Value;
        Console.WriteLine("{0}: {1}", type, count);
    }


The extension is quite simple, once you know how:

    class ObjectCounterExtension : UnityContainerExtension
    {
        ObjectCounterStrategy objectCounterStrategy;

        protected override void Initialize()
        {
            // Insert our strategy at the Creation stage so that it sees
            // every object the container builds.
            objectCounterStrategy = new ObjectCounterStrategy();
            Context.Strategies.Add(objectCounterStrategy, UnityBuildStage.Creation);
        }

        public IDictionary<Type, int> ObjectCounts
        {
            get { return objectCounterStrategy.ObjectCounts; }
        }
    }

    class ObjectCounterStrategy : IBuilderStrategy
    {
        readonly Dictionary<Type, int> objectCounts = new Dictionary<Type, int>();

        public void PostBuildUp(IBuilderContext context)
        {
            IBuildKey buildKey = context.BuildKey as IBuildKey;
            if (buildKey == null)
                return;

            Type type = buildKey.Type;

            // Increment the count for this type.  The lock makes the
            // strategy safe to use from multiple threads.
            lock (objectCounts)
            {
                int count;
                objectCounts.TryGetValue(type, out count); // count is 0 if absent
                objectCounts[type] = count + 1;
            }
        }

        public Dictionary<Type, int> ObjectCounts
        {
            get
            {
                // Return a snapshot so that callers can enumerate it safely
                // while objects are still being created.
                lock (objectCounts)
                {
                    return new Dictionary<Type, int>(objectCounts);
                }
            }
        }

        public void PreBuildUp(IBuilderContext context) {}
        public void PreTearDown(IBuilderContext context) {}
        public void PostTearDown(IBuilderContext context) {}
    }

In a previous post I described how to host a COM server in a managed process using RegistrationServices.RegisterTypeForComClients.  I’ve been using this approach successfully for a while, but today I hit a snag.   I changed my C# server process from 32-bit to 64-bit, and immediately my 32-bit C++ client could no longer connect.

In theory it shouldn’t matter to the client whether the server is 32-bit or 64-bit – everything is out-of-process so there is no compatibility issue.   But I could see that COM was refusing to allow my client to connect to the running 64-bit server process, and instead was trying to launch a new server process (which was failing because I don’t allow that).

I have seen this type of problem many times before with COM, and it’s almost always due to security configuration – specifically the ‘run as’ configuration of the server.  So I spent a lot of time investigating that, but it turned out to be something much simpler.  Since Windows Server 2003 SP1, COM on 64-bit Windows has a rule that if a 32-bit client CoCreates an out-of-proc server, COM will try to connect to a 32-bit server; if the client is 64-bit, COM will try to connect to a 64-bit server.  So in my case, COM could see that the 64-bit server was running, but because the client was 32-bit it decided to launch a new (hopefully 32-bit) server process to service the request.

Fortunately there are two easy ways around the problem.  The first option is to modify the client to specify  CLSCTX_ACTIVATE_64_BIT_SERVER in the CoCreateInstance call.  The other (probably better) option is to add a PreferredServerBitness flag to the AppID registry entry for the server.
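Here’s a sketch of the first option – CLSID_MyServer is a placeholder for your server’s actual CLSID:

```cpp
#include <windows.h>

// Sketch only: CLSID_MyServer is a placeholder.
HRESULT Connect64(REFCLSID clsid, IUnknown** ppUnk)
{
    // CLSCTX_ACTIVATE_64_BIT_SERVER tells COM to connect to (or launch)
    // the 64-bit version of the out-of-proc server, even though this
    // client is 32-bit.
    return CoCreateInstance(clsid,
                            NULL,
                            CLSCTX_LOCAL_SERVER | CLSCTX_ACTIVATE_64_BIT_SERVER,
                            IID_IUnknown,
                            reinterpret_cast<void**>(ppUnk));
}
```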

Both CLSCTX_ACTIVATE_64_BIT_SERVER and PreferredServerBitness are documented on MSDN.
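A sketch of the registry option as a .reg file – the GUID is a placeholder for your server’s AppID.  A value of 3 means ‘prefer a 64-bit server’, 2 means 32-bit, and 1 means ‘match the client’s bitness’:

```
Windows Registry Editor Version 5.00

; {YOUR-APPID-GUID} is a placeholder -- substitute your server's AppID.
[HKEY_CLASSES_ROOT\AppID\{YOUR-APPID-GUID}]
"PreferredServerBitness"=dword:00000003
```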

I really like my Pentax K20D DSLR, but when I first got it the autofocus was not accurate, particularly with my favourite FA 50mm 1.4 lens.   Fortunately the camera has an autofocus adjustment in the standard menu, which fixed the problem, but I recently discovered that there is a hidden ‘debug’ menu that offers even more control over autofocus.   However the procedure to turn on the debug menu is a little fiddly, so I thought I’d document it here.

[Note that these instructions only apply to the K20D.   Other Pentax and Samsung cameras also have a debug mode, but the instructions are slightly different for each camera – see the end of the post for links.]

Step 1: Create a text file called MODSET.442 in the root of your SD card.   The file should contain the following single line:

[OPEN_DEBUG_MENU]

You can use any plain text editor, for example Windows Notepad.  Make sure that the file is saved as MODSET.442 and not MODSET.442.txt (Notepad likes to append a .txt extension).
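If you prefer the command line, here’s a sketch – the SDCARD path is an assumption, so set it to wherever your card is actually mounted:

```shell
# Create MODSET.442 containing the single required line.
# SDCARD is an assumed mount point for the card -- adjust for your
# system (it defaults to the current directory in this sketch).
SDCARD="${SDCARD:-.}"
printf '[OPEN_DEBUG_MENU]\n' > "$SDCARD/MODSET.442"
```

Creating the file this way also sidesteps the accidental .txt extension problem.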

Step 2: Put the SD card into the camera but leave the card door open.

Step 3: Hold down the Menu button and turn on the camera.  Keep holding down the Menu button until the debug menu appears.

[Screenshot: debugmode1]

Step 4:  Close the SD card door.

Step 5: To enable debug mode, press the right arrow to change Debug Mode from DIS to EN, then press the OK button.

[Screenshot: debugmode2]

Step 6: Press the MENU button to get to the standard camera menu.   (If nothing happens, make sure you have closed the SD card door.   If necessary, turn the camera off and on again after closing the door.)

Press the right arrow twice to get to the Setup menu:

[Screenshot: debugmode3]

This is just the standard Setup menu, but some new items have been added to the bottom.  Press the up arrow to quickly jump to the bottom to see them.

[Screenshot: debugmode4]

Select AF TEST, then press the right arrow button.

[Screenshot: debugmode5]

You can now set the global focus correction.  In this screenshot the correction is set to –90 and I am about to increase it by 20 (making it –70).   Press the OK button and you are ready to take a test shot with your new AF setting.  (Some reports suggest that the camera must be restarted after setting the correction value, but it seems to apply immediately on my camera.)

Step 7:  Once you have finished playing with the AF correction, you will want to turn off the debug menu.  To do so, just turn the camera on and, when the debug menu appears, set DEBUG MODE to DIS and press OK.   Your camera should now be completely back to normal.

You can leave the MODSET.442 file on your SD card and re-enable the debug menu at any time by starting from step 2, or just delete the file if you prefer.

If some part of the instructions doesn’t seem to work, check your SD card door.   For some steps it must be open, and for others it must be closed – follow the instructions exactly as written above.

The debug menu is also available on several other Pentax and Samsung cameras – see the pentax-hack site for details.

There are also detailed instructions for the Pentax K-x, which I found very useful even though the steps are slightly different from the K20D’s.

After many, many test shots I think I’ve settled on –90 as my correction value.   That allows me to run my FA 50mm 1.4 with no ‘standard’ correction, and my other lenses seem happy too.