Thursday, November 20, 2014

Dell XPS 15 screen has a blue or red tint

I just got a brand new Dell XPS 15 and I noticed the screen taking on a blue or red tint, seemingly at random. If you see this happening, check your system tray for the True Color icon.
If you see it, right-click the icon and select "Disable True Color". The True Color app attempts to adjust your screen's color profile to be more pleasing based on your laptop's ambient light sensor. In my brief experience, it does a horrible job at this.

Friday, January 25, 2013

Script Bundling doesn't work for SignalR Release Candidates

With SignalR now available as a Release Candidate from Microsoft, we found that our script bundling no longer worked.  Our BundleConfig contained a statement to register the bundle and include the version-based file.

bundles.Add(new ScriptBundle("~/bundles/signalr").Include(
            "~/Scripts/jquery.signalR-{version}.js"));

After upgrading from 0.5.3 to 1.0.0-rc2 via NuGet, the bundle request returned an empty response.  Downgrading back to 0.5.3 fixed the issue and our bundle request returned as expected.
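
For context, the bundle gets requested when the layout view renders it, along the lines of the snippet below (our actual layout markup isn't shown here, so treat this as representative rather than exact):

@Scripts.Render("~/bundles/jquery")
@Scripts.Render("~/bundles/signalr")

When the {version} match fails, that second call still emits a bundle URL, but the response behind it comes back empty.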

Hardcoding the entry by replacing "{version}" with "1.0.0-rc2" also resolved the issue, so the {version} parsing was not working for some reason.  Presumably, {version} was supposed to be replaced with the installed version of the product during bundling.  A check of packages.config showed that the version for SignalR was correctly set to "1.0.0-rc2".

It turns out that {version}, rather than being replaced with the installed version number, is simply a regex matching pattern for version numbers in a format similar to #.#.#.  It does not look for a particular version number, but rather anything that looks like a version number.  In this case, it is getting tripped up on the "-rc2" in the script name.  You can work around this without breaking future (non-Prerelease) upgrades by adding the following to the ScriptBundle configuration.

bundles.Add(new ScriptBundle("~/bundles/signalr").Include(
            "~/Scripts/jquery.signalR-{version}.js",
            "~/Scripts/jquery.signalR-1.0.0-rc2.js"));

It's not the most dynamic solution, but it does allow bundling to immediately work with any upgrades.  The only caveat is that upgrading to a different Prerelease version (rc3, for example) would require another update to the config.  While there is wildcard (*) support, Microsoft advises against using it in most cases, preferring that scripts be hardcoded.  So for now, this is probably the least painful solution.
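
For reference, the wildcard form would look something like the snippet below.  We haven't verified it against the -rc2 naming, so consider it a sketch rather than a recommendation:

bundles.Add(new ScriptBundle("~/bundles/signalr").Include(
            "~/Scripts/jquery.signalR-*"));

The trade-off is that the wildcard picks up anything matching that prefix, which is exactly why Microsoft steers people toward explicit entries.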

RequireJS 2.0 dependencies seemingly ignored

After struggling for months with RequireJS 2.0 and why it seemed to ignore our dependencies for ordered script loading, we finally found the issue today and it was a facepalmable, teachable moment.  When setting up a module to use something like SignalR, jQuery must be loaded first.  Searches quickly led to StackOverflow posts that suggested configuring dependency shims in RequireJS with something like:

require.config({
  shims: {
    'signalr': ['jquery'],
    'backbone': ['jquery']
  }
});

In theory, when later requiring 'signalr', RequireJS will see that it needs to load 'jquery' first and all is well:

define(['signalr'], function() { 
  return {
    initialize: function() { $.connection.doSomething; }
  };
});

In practice, what we (and many others, based on the abundance of posts) saw was that SignalR complained about needing jQuery loaded first.  In fact, Developer Tools showed that not only was the request for jQuery not being prioritized before SignalR, but it wasn't being requested at all.

Maybe Dependencies Don't Autoload?

A quick check of the RequireJS API confirmed that this should be working.  Some more searches led to StackOverflow posts where others were inexplicably having similar problems with dependencies.  My first logical guess was that RequireJS doesn't automatically load specified dependencies for you, but rather just orders them correctly while still requiring you to specify every dependency.  This seemed to defeat the purpose of RequireJS, but what the heck.  I tried adding 'jquery' into our define:


define(['jquery', 'signalr'], function() { 
  return {
    initialize: function() { $.connection.doSomething; }
  };
});

This seemed to resolve the issue, though we'd still occasionally see transient "jQuery must be loaded first" errors that went away whenever we tried to debug them.  Eventually, we discovered that dependencies were still not working and that RequireJS was not waiting for jQuery to finish loading before trying to load SignalR.  Whenever the browser cache was cleared and the page reloaded, the same "jQuery must be loaded first" error would appear.  However, subsequent refreshes would fetch the jQuery script from the browser cache fast enough that it had finished loading by the time RequireJS moved on to SignalR.  At this point, it became obvious that something was fundamentally wrong with how we were using RequireJS.

Face, meet palm

After looking through the RequireJS API more closely, it became painfully clear why our shims were being ignored.  The RequireJS API config option for dependencies is *shim*, not *shims*.  The extra "s" in our config was causing our dependency chains to be completely ignored.  Searching online again, it looks like many of the examples on StackOverflow make the same "shims" typo, thus adding to the confusion.  Removing the "s" allowed RequireJS to suddenly start working as expected.

require.config({
  shim: {
    'signalr': ['jquery'],
    'backbone': ['jquery']
  }
});

Developer Tools now shows jQuery always being loaded to completion before the SignalR script is even requested.  Our define went back to just containing 'signalr'; 'jquery' no longer needs to be explicitly specified because it is pulled in automatically.  Modules suddenly make sense again.

So if you see issues where your RequireJS dependencies don't load as expected, make sure you're not making the same mistake we did.  Configure a shim, not shims.  One little character makes all the difference.

Tuesday, December 21, 2010

Firefox 4 Beta static cursor

After upgrading to Firefox 4 Beta, I found that I was clicking on things multiple times trying to get them to work. For some reason, the default in FF4 is to not have a busy cursor when you click something. Your cursor remains the default arrow with no indication anything is happening. You can restore the FF3-standard (and really Windows-standard) behavior of having a "Busy" hourglass cursor with the following steps:

1) Open a new tab.
2) Browse to the URL "about:config".
3) Click through the warning about voiding your warranty.
4) In the "Filter" box at the top, type "cursor".
5) Double-click on the item "ui.use_activity_cursor". It should turn bold and the value should change from "false" to "true".
6) Close the "about:config" tab.

That's it, no restart required. The hourglass cursor should now be back when you click on a link and the page is loading.

Sunday, August 1, 2010

Socket connection aborted when calling a WCF service method.

We are currently converting our project from .NET Remoting to WCF as the preferred method of remote service calls. One issue we ran into is that some methods worked perfectly while others bombed with a CommunicationException containing very little useful information. There seemed to be no rhyme or reason as to why some methods worked but others failed. The exception we always got back on the consumer after trying to make the service call was:

The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:00:59.2350000'.

The various levels of nested inner exceptions (the innermost being "An existing connection was forcibly closed by the remote host") were no more helpful. Further debugging showed that the call did make it from the consumer to the service and that the service did successfully execute the method in under a second and return the results, so we definitely weren't running into a socket timeout. After turning on the trace writers for WCF and digging through the event output, we saw issues where some of the types we were returning were unrecognized, despite being decorated with [DataContract] attributes:

Type '(type name)' with data contract name '(contract name)' is not expected. Add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer.

In our scenario, the culprit ended up being abstract classes our objects inherited from. When an object is passed over the wire using WCF, it is instantiated and rehydrated on the other side. If the object is typed as its abstract base, the consuming end does not know what concrete type it should reconstitute it as. To work around this, you can decorate the abstract class with the [KnownType] attribute, specifying the known concrete types it could be. Why WCF doesn't automatically infer this from both types being [DataContract]'d, I'm not sure.

For example, if concrete type Person inherits abstract type NamedItemBase and you attempt to call the WCF service method "List<NamedItemBase> GetAllPeople()", you would need to decorate your Person class with [DataContract] and your NamedItemBase class with both [DataContract] and [KnownType(typeof(Person))].
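
A minimal sketch of that arrangement is below; the service contract name and the extra members are made up for illustration:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
[KnownType(typeof(Person))]  // tells the serializer which concrete types can stand in for the base
public abstract class NamedItemBase {
  [DataMember]
  public string Name { get; set; }
}

[DataContract]
public class Person : NamedItemBase {
  [DataMember]
  public int Age { get; set; }  // illustrative member
}

[ServiceContract]
public interface IPeopleService {  // hypothetical contract name
  [OperationContract]
  List<NamedItemBase> GetAllPeople();
}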

Sunday, July 11, 2010

Deserialization considerations when renaming objects, namespaces, or assemblies

In one of my current projects, we are considering refactoring some legacy code into new assemblies and namespaces. The product is an enterprise backup solution and the objects we're moving are serialized to disk during a backup. To restore, the objects are deserialized from disk and read in for processing.

No types like that around here

The issue we ran into almost immediately was supporting legacy backups. We wanted customers to still be able to use their existing backups to restore, even if they upgraded to the new product. However, moving objects into new namespaces or assemblies breaks deserialization.

Take, as an example, an Objects assembly which contains a Square class in the Bluey.Objects namespace. When a Square is serialized to disk, both the assembly FullName:
  Objects, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
and the object's type name (including namespace):
  Bluey.Objects.Square
are saved to the byte stream.
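
You can see both values for yourself with something like:

  // The two pieces of information described above:
  Console.WriteLine(typeof(Square).Assembly.FullName);
  //   Objects, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
  Console.WriteLine(typeof(Square).FullName);
  //   Bluey.Objects.Square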

If you later decide to rename the Objects assembly to Shapes, any objects that were previously serialized will fail to deserialize because the Objects assembly no longer exists and cannot be loaded. Likewise, if you were to leave the assembly name alone and just rename the Bluey.Objects namespace to Bluey.Shapes, the object would still fail to deserialize because Bluey.Objects.Square no longer exists.

Enter SerializationBinder

This problem can be solved by injecting custom SerializationBinder logic into the formatter used to deserialize the objects. The SerializationBinder is responsible for taking a type name and assembly name and binding it to the corresponding type. By injecting our own binder, we can override the default behavior and handle special cases. Here is a very simplistic example binder we can use for the Objects->Shapes rename scenario above:

  public class ObjectsToShapesBinder : SerializationBinder {
    public override Type BindToType(string assemblyName, string typeName) {
      if (typeName == "Bluey.Objects.Square") {
        typeName = "Bluey.Shapes.Square";
      }
      return Type.GetType(typeName);
    }
  }

To use this custom binder, we simply attach it to the formatter we're going to use for deserialization:

  var binFormatter = new BinaryFormatter();
  binFormatter.Binder = new ObjectsToShapesBinder();

At this point, any deserialization done with binFormatter will use our binder to determine the representative Types to create. Because the assembly is likely to be already loaded in memory at runtime, we do not need to worry about the assembly name parameter in our custom BindToType method. However, if that were not the case, we could use the assembly name to determine the correct assembly to load before trying to Type.GetType.
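
If the old assembly were not already loaded, the BindToType override could lean on the assemblyName parameter as well.  A rough sketch, assuming the renamed Shapes assembly keeps the same version and public key (and with System.Reflection imported for Assembly.Load):

  public override Type BindToType(string assemblyName, string typeName) {
    // Map the old assembly and namespace onto their new names before resolving.
    if (assemblyName.StartsWith("Objects,")) {
      assemblyName = assemblyName.Replace("Objects,", "Shapes,");
    }
    typeName = typeName.Replace("Bluey.Objects.", "Bluey.Shapes.");

    // Load (or locate) the renamed assembly, then resolve the type inside it.
    var assembly = Assembly.Load(assemblyName);
    return assembly.GetType(typeName);
  }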

The use of the custom binder applies to any object in the object graph, not just the root object being deserialized. So deserializing a Hashtable containing a Square will work just as well as deserializing a lone Square.
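
Putting it together, a minimal restore-side sketch; the file name here is made up, and System.IO plus System.Runtime.Serialization.Formatters.Binary are assumed to be imported:

  var binFormatter = new BinaryFormatter();
  binFormatter.Binder = new ObjectsToShapesBinder();

  using (var stream = File.OpenRead("legacy-backup.dat")) {
    // Anything serialized as Bluey.Objects.Square comes back as Bluey.Shapes.Square,
    // whether it is the root object or nested inside something like a Hashtable.
    var restored = binFormatter.Deserialize(stream);
  }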

Thursday, July 1, 2010

Pluralsight's On-Demand training library

The next phase of the project I'm on will likely move us from .NET Remoting to WCF services. I wanted to get a quick introduction to WCF beyond the basic user-group sessions I found online. I eventually noticed that, as a BizSpark start-up, I had a code for a free month of access to Pluralsight's on-demand training library among my listed MSDN benefits.

I signed up and started on their very in-depth 15-hour course on the fundamentals of WCF. So far, these courses are fantastic and if the rest of their library is of the same caliber, this seems like a goldmine of useful information. In the first two hours alone, most of the questions I had on what we'd need to do to drop WCF in to replace .NET Remoting have been answered. The questions that have come up while watching (can I keep my existing domain objects?) or spots where I thought the code was sloppy (why is the channel being closed before the async call completes?) were addressed and were part of the curriculum.

The only downside is that the cost is too high for me ($500/yr to stream the content, $1000/yr if you want to download). If you can get your company to spring for a subscription, though, Pluralsight is much cheaper (and, in my opinion, much more valuable and less inconvenient) than a 5-day classroom training session.