Deleted origin git branches aren’t removed from VS2015 git

We use a gitflow-esque branching strategy where a new branch is created for each feature being worked on. Once the code is sync’d to the server and the PR is approved, the code is merged down into our main development branch and the feature branch is automatically deleted from origin by Visual Studio Team Services. This workflow produces a lot of feature branches over time. One issue that popped up was that despite the branches being deleted from origin, we’d still see the branches within Visual Studio 2015’s git Branches pane.
Too many branches

Of all the branches listed in the remotes/origin/feature folder, only 2 of them still exist at origin. Fetching from within VS2015 does not remove the entries which are no longer valid. The issue stems from VS2015 not supporting remote origin pruning. An easy workaround is to go to the command prompt and run:

git config remote.origin.prune true

From that point on, any fetch performed by git will also prune from the remotes/origin list any branches that no longer exist on the remote. Once this option is turned on, we can perform a Fetch from within Visual Studio and voila!
Much more concise

We now only see the branches that still exist on origin.

Three Must-Have Traits for Agile Team Members

During our most recent Agile Orlando Lean Lunch meetup, one interesting topic we discussed was what traits a recruiter should look for when vetting whether candidates are a good fit for an Agile environment and culture. This naturally evolved into a discussion of common characteristics we’d observed in successful team members on Agile projects. Beyond the obvious technical or professional skills required for a particular project, there are some key character traits and behaviors that can mean the difference between adding (or being) a rock star and just another run-of-the-mill team member.

  • Ownership of your deliverables
    One difficult transition for team members jumping from traditional development teams (or not-actually-Agile teams) to the Agile mindset is the need for all team members to be owners. Each team member should feel responsible for the success of the work product being delivered. The deliverable is a result of every team member’s effort. Team members who learn to embrace that ownership and become invested in the product begin to naturally look for ways to improve the product, the process of developing the product, and how to get the best out of their fellow team members. Those who remain stuck in the grind of simply working and closing tasks tend to remain stuck in their silos and disengaged from the team.
  • Engagement within the team
    A team member must be willing to engage with the team on a regular, consistent basis. Many team members new to Agile mistakenly assume the common Daily Standup is the limit of team engagement they need to worry about. The Daily Standup is seen as nothing more than a replacement for the traditional Status Report, albeit shorter and more frequent. The focus becomes “What status do I need to give?” rather than “What is everyone else working on and are there any problems I can help resolve?” A team member who is able to engage with the rest of the team is better able to build trust, know when their help might be beneficial to someone stuck, and feel that they can depend on their fellow team members to help when needed.
  • Resilience and the willingness to fail
    Agile is an iterative process, built on the idea of trying things out, discarding the things that don’t work, and keeping what does. Whether it’s a new way of approaching a business process, a new technology stack, or a team ceremony to try, something will inevitably not work the way you or your team hoped. A rock star team member learns what they can from this failure and moves on to their next experiment, which could mean scrapping an idea completely or simply modifying the previous experiment. Failure is not something to celebrate, but neither is it something to shy away from. When team members meet with failure of one sort or another, they should reflect on how they can avoid such a failure in the future rather than dwelling on why and how it failed in the first place.

What other traits have you noticed in the members who seem to excel in Agile environments?

Event Store on Azure VM Quick Start Guide

We recently started work on an event-sourced application that will run in Azure and we needed to get a quick Event Store instance up for development. We had a hard time finding information about the details of standing up Event Store in Azure, so this post will walk through the various steps required to go from nothing to a working dev instance.

Note that this post does not cover best practices for an instance of Event Store, simply getting Event Store going for development and testing. As such, considerations such as authentication and performance are not covered.

Create and configure Azure VM

Creating the VM
As a first step, we’ll create a new Windows Server 2012 R2 Datacenter VM that uses the Resource Manager deployment model. Since this is for development use, we don’t need much in the way of performance or scalability, so we’ll size the VM as a small Basic A1 instance. We do want to be able to easily reach the Event Store repo from our development boxes, so we’ll also change the Public IP address to static. The rest of the options will be left at their defaults. Once the VM is created, the server blade will open. We’ll take note of the Public IP assigned to the VM for later use.

While still in the portal, let’s configure the network to allow Event Store traffic. Connections to the Event Store repo take place over TCP port 1113, and the Event Store administration website is hosted on port 2113. We need to be able to read and write to the Event Store repo from external boxes, so let’s open TCP port 1113 to our new VM.

Inbound port rule
We’ll click on All resources from the main menu blade. When the esdemo VM was created, it automatically created an esdemo network security group. We’ll select that from the resources list, click on Inbound security rules, and click Add to create the new rule. The Protocol should be set to TCP and the Destination port range to 1113.

If we wanted to be able to remotely administer the Event Store instance via the web interface, we would add a second rule for TCP port 2113 as well. Since I plan to only administer the instance locally by remoting into the VM, we will only apply the first rule and continue on.

Note that since we are not covering security and authentication for Event Store, you may wish to limit which IP addresses can access Event Store through the network configuration. This can be done using the CIDR block or Tag options of the Source setting when creating the inbound security rule.

Installing and Configuring Event Store on our VM

Now that the VM and network configuration is ready, let’s RDP in and install Event Store as a service. Before installing, we’ll need to create a local, non-admin user to run the service under (hereafter referred to as the “service user account”). Since the user needs no special rights yet, let’s create a standard account and make sure to uncheck the User must change password option so the account doesn’t have any issues starting the service.

Next, we will create a C:\EventStore directory to hold the Event Store installation and repository. Download the latest release of Event Store and unzip it into the directory. Event Store does not have any built-in support for running as a service, so we’ll also download the latest release of NSSM to run Event Store as a service and unzip it into the same Event Store directory. We’ll need to grant the service user account we just created Full Control access to this new C:\EventStore directory.

Installing the service
To install Event Store as a service, let’s open a command prompt and change to the C:\EventStore directory where NSSM has been unzipped. Running nssm install EventStore lets us specify the executable to run as a new service.

For Path, we’ll browse to the EventStore_ClusterNode executable where we unzipped Event Store. Startup directory will be the directory where the executable is located. By default, Event Store will only listen for connections on the local interfaces (localhost and 127.0.0.1). Since we want Event Store to be externally accessible, we will tell it to listen on all interfaces by specifying --ext-ip "0.0.0.0" in the Arguments field.

Lastly, switch to the Log on tab and have the service log in as the service user account we previously created. Click Install service and we’re almost ready to start up Event Store.

When Event Store starts up, it will attempt to register with HTTP.SYS to listen on port 2113 for the administration website. We need to grant our service user account permissions to register that port. To do so, we’ll run netsh http add urlacl url=http://*:2113/ user=esuser (substituting your own service user account name) from the Command Prompt.

Windows Firewall rule
And finally, we need to make sure port 1113 is open in the Windows Firewall by creating a new TCP port rule to allow 1113 inbound. We would also create a rule for 2113 if we wanted the administration website to be remotely accessible.

Testing our service

Event Store admin site
At this point, the service is ready to go! We can start it up using net start EventStore from the command prompt or using the Services control panel. Once started, we’ll open a browser on the VM and attempt to browse the web administration website locally. We go to http://localhost:2113/ and are greeted by the Event Store administration page.

The default admin username is admin and the default password is changeit. The first order of business will be to change that password, especially if you’ve opened port 2113 for external access.

Browsing the web administration site shows that the Event Store service is running and accessible internally, but we need to confirm that Event Store is available externally as well. For this, we’ll open IE on our local desktop and browse to our public IP address, port 1113.
Remotely accessing Event Store with a browser

We’re sending a simple GET request and not a proper Event Store request, so we will receive back an Invalid TCP frame received error, but this is confirmation that we’re able to reach the Event Store service externally. Note that if trying this test with Chrome, the error does not show up in the browser window, but instead triggers a download of a text file that contains the error text. Just save yourself the trouble and test in IE.
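
If we want to verify the TCP endpoint with an actual client rather than a browser, a small console test using the Event Store .NET client (the EventStore.ClientAPI namespace) might look something like the sketch below. This is not from the original walkthrough; the IP address and stream name are placeholders, and depending on your client version and security settings you may need to pass credentials to the write call.

  using System;
  using System.Net;
  using System.Text;
  using EventStore.ClientAPI;

  class ConnectionSmokeTest {
    static void Main() {
      // Placeholder: substitute the static public IP assigned to your VM.
      var endpoint = new IPEndPoint(IPAddress.Parse("203.0.113.10"), 1113);

      using (var connection = EventStoreConnection.Create(endpoint)) {
        connection.ConnectAsync().Wait();

        // Append a throwaway event to a test stream to prove round-trip connectivity.
        var payload = Encoding.UTF8.GetBytes("{\"hello\":\"world\"}");
        var evt = new EventData(Guid.NewGuid(), "TestEvent", true, payload, null);
        connection.AppendToStreamAsync("smoke-test", ExpectedVersion.Any, evt).Wait();

        Console.WriteLine("Connected and wrote to the smoke-test stream.");
      }
    }
  }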

That’s it! At this point, Event Store is ready to be a repository for your event-sourced dev applications. I’ll cover some best practices for production deployments and configuration in a future post.

Script Bundling doesn’t work for SignalR Release Candidates

With SignalR now available as a Release Candidate from Microsoft, we found that our script bundling no longer worked.  Our BundleConfig contained a statement to register the bundle and include the version-based file.

bundles.Add(new ScriptBundle("~/bundles/signalr").Include(
            "~/Scripts/jquery.signalR-{version}.js"));

After upgrading from 0.5.3 to 1.0.0-rc2 via NuGet, the bundle request returned an empty response.  Downgrading back to 0.5.3 fixed the issue and our bundle request returned as expected.

Hardcoding the entry by replacing “{version}” with “1.0.0-rc2” also resolved the issue, so the {version} parsing was not working for some reason.  Presumably, {version} was supposed to be replaced with the installed version of the product during bundling.  A check of packages.config showed that the version for SignalR was correctly set to “1.0.0-rc2”.

It turns out that {version}, rather than being replaced with the installed version number, is simply a regex matching pattern for version numbers in a format similar to #.#.#.  It does not look for a particular version number, but rather anything that looks like a version number.  In this case, it is getting tripped up by the “-rc2” in the script name.  You can work around this without breaking future (non-Prerelease) upgrades by adding the following to the ScriptBundle configuration.

bundles.Add(new ScriptBundle("~/bundles/signalr").Include(
            "~/Scripts/jquery.signalR-{version}.js",
            "~/Scripts/jquery.signalR-1.0.0-rc2.js"));

It’s not the most dynamic solution, but it does allow bundling to immediately work with any upgrades.  The only caveat is that upgrading to a different Prerelease version (rc3, for example) would require another update to the config.  While there is wildcard (*) support, Microsoft advises against using it in most cases, preferring that scripts be hardcoded.  So for now, this is probably the least painful solution.
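
For reference, the wildcard form would look something like the sketch below. We did not go this route; among other things, the pattern can match more files than intended (for example, leftover scripts from older versions still sitting in the Scripts folder).

bundles.Add(new ScriptBundle("~/bundles/signalr").Include(
            "~/Scripts/jquery.signalR-*"));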

RequireJS 2.0 dependencies seemingly ignored

After struggling for months with RequireJS 2.0 and why it seemed to ignore our dependencies for ordered script loading, we finally found the issue today, and it was a facepalm-inducing teachable moment.  When setting up a module to use something like SignalR, jQuery must be loaded first.  Searches quickly led to StackOverflow posts that suggested configuring dependency shims in RequireJS with something like:

require.config({
  shims: {
    'signalr': ['jquery'],
    'backbone': ['jquery']
  }
});

In theory, when later requiring ‘signalr’, RequireJS will see that it needs to load ‘jquery’ first and all is well:

define(['signalr'], function() {
  return {
    initialize: function() { $.connection.doSomething; }
  };
});

In practice, what we (and many others, based on the abundance of posts) saw was that SignalR complained about needing jQuery loaded first.  In fact, Developer Tools showed that not only was the request for jQuery not being prioritized before SignalR, but it wasn’t being requested at all.

Maybe Dependencies Don’t Autoload?

A quick check of the RequireJS API confirms that this should be working.  Some more searches led to StackOverflow posts where others were inexplicably having similar problems with dependencies.  My first logical guess was that RequireJS doesn’t automatically load specified dependencies for you, but rather it just orders them correctly while still requiring you to specify every dependency.  This seemed to defeat the purpose of RequireJS, but what the heck.  I tried adding ‘jquery’ to our define:

define(['jquery', 'signalr'], function() {
  return {
    initialize: function() { $.connection.doSomething; }
  };
});

This seemed to resolve the issue, though we’d still occasionally see transient “jQuery must be loaded first” errors that went away whenever we tried to debug them.  Eventually, we discovered that dependencies were still not working and that RequireJS was not waiting for jQuery to finish loading before trying to load SignalR.  Whenever the browser cache was cleared and the page reloaded, the same “jQuery must be loaded first” error would appear.  However, subsequent refreshes would fetch the jQuery script from the browser cache fast enough that it had finished loading by the time RequireJS moved on to SignalR.  At this point, it became obvious that something was fundamentally wrong with how we were using RequireJS.

Face, meet palm

After looking through the RequireJS API more closely, it became painfully clear why our shims were being ignored.  The RequireJS API config option for dependencies is *shim*, not *shims*.  The extra “s” in our config was causing our dependency chains to be completely ignored.  Searching online again, it looks like many of the examples on StackOverflow make the same “shims” typo, thus adding to the confusion.  Removing the “s” allowed RequireJS to suddenly start working as expected.

require.config({
  shim: {
    'signalr': ['jquery'],
    'backbone': ['jquery']
  }
});

Developer Tools now shows jQuery always being loaded to completion before SignalR’s script is even requested.  Our define went back to just containing ‘signalr’, with ‘jquery’ no longer needing to be explicitly specified since it is pulled in automatically.  Modules suddenly make sense again.

So if you see issues where your RequireJS dependencies don’t load as expected, make sure you’re not making the same mistake we did. Configure a shim, not shims.  One little character makes all the difference.

Firefox 4 Beta static cursor

After upgrading to Firefox 4 Beta, I found that I was clicking on things multiple times trying to get them to work. For some reason, the default in FF4 is to not have a busy cursor when you click something. Your cursor remains the default arrow with no indication anything is happening. You can restore the FF3-standard (and really Windows-standard) behavior of having a “Busy” hourglass cursor with the following steps:

1) Open a new tab.
2) Browse to the URL “about:config”.
3) Click through the warning about voiding your warranty.
4) In the “Filter” box at the top, type “cursor”.
5) Double-click on the item “ui.use_activity_cursor”. It should turn bold and the value should change from “false” to “true”.
6) Close the “about:config” tab.

That’s it, no restart required. The hourglass cursor should now be back when you click on a link and the page is loading.

Socket connection aborted when calling a WCF service method

We are currently converting our project from .NET Remoting to WCF as the preferred method of remote service calls. One issue we ran into is that some methods worked perfectly while others bombed with a CommunicationException with very little useful information in it. There seemed to be no rhyme or reason as to why some methods worked, but others failed. The exception we always got back on the consumer after trying to make the service call was:

The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:00:59.2350000'.

The various levels of nested inner exceptions (the innermost being An existing connection was forcibly closed by the remote host) were no more helpful. Further debugging showed that the call did make it from the consumer to the service and that the service did successfully execute the method in under a second and return the results, so we definitely weren’t running into a socket timeout. After turning on the trace writers for WCF and digging through the event output, we saw issues where some of the types we were returning were unrecognized, despite being decorated with [DataContract] attributes:

Type ‘(type name)‘ with data contract name ‘(contract name)‘ is not expected. Add any types not known statically to the list of known types – for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer.

In our scenario, the culprit ended up being the abstract classes our objects inherited from. When an object is passed over the wire using WCF, it is instantiated and rehydrated on the other side. If the type being passed is declared as the abstract base, the consuming end does not know which concrete type it should reconstitute the object as. To work around this, you can decorate the abstract class with the [KnownType] attribute, specifying which known concrete types it could be. Why WCF doesn’t automatically infer this based on both types being [DataContract]’d, I’m not sure.

For example, if concrete type Person inherits abstract type NamedItemBase and you attempt to call the WCF service method List<NamedItemBase> GetAllPeople(), you would need to decorate your Person class with [DataContract] and your NamedItemBase with both [DataContract] and [KnownType(typeof(Person))].
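
A minimal sketch of that decoration is below. The Name and Age members are just illustrative placeholders (they are not part of the original example), and the attributes come from System.Runtime.Serialization.

  using System.Runtime.Serialization;

  [DataContract]
  [KnownType(typeof(Person))]
  public abstract class NamedItemBase {
    [DataMember]
    public string Name { get; set; }
  }

  [DataContract]
  public class Person : NamedItemBase {
    [DataMember]
    public int Age { get; set; }
  }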

Deserialization considerations when renaming objects, namespaces, or assemblies

In one of my current projects, we are considering refactoring some legacy code into new assemblies and namespaces. The product is an enterprise backup solution and the objects we’re moving are serialized to disk during a backup. To restore, the objects are deserialized from disk and read in for processing.

No types like that around here

The issue we ran into almost immediately was supporting legacy backups. We wanted customers to still be able to use their existing backups to restore, even if they upgraded to the new product. However, moving objects into new namespaces or assemblies breaks deserialization.

Take, as an example, an Objects assembly which contains a Square class in the Bluey.Objects namespace. When a Square is serialized to disk, both the assembly FullName:
  Objects, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
and the object’s type name (including namespace):
  Bluey.Objects.Square
are saved to the byte stream.

If you later decide to rename the Objects assembly to Shapes, any objects that were previously serialized will fail to deserialize because the Objects assembly no longer exists and cannot be loaded. Likewise, if you were to leave the assembly name alone and just rename the Bluey.Objects namespace to Bluey.Shapes, the object would still fail to deserialize because Bluey.Objects.Square no longer exists.

Enter SerializationBinder

This problem can be solved by injecting custom SerializationBinder logic into the formatter used to deserialize the objects. The SerializationBinder is responsible for taking a type name and assembly name and binding it to the corresponding type. By injecting our own binder, we can override the default behavior and handle special cases. Here is a very simplistic example binder we can use for the Objects->Shapes rename scenario above:

  public class ObjectsToShapesBinder : SerializationBinder {
    public override Type BindToType(string assemblyName, string typeName) {
      if (typeName == "Bluey.Objects.Square") {
        typeName = "Bluey.Shapes.Square";
      }
      return Type.GetType(typeName);
    }
  }

To use this custom binder, we simply attach it to the formatter we’re going to use for deserialization:

  var binFormatter = new BinaryFormatter();
  binFormatter.Binder = new ObjectsToShapesBinder();

At this point, any deserialization done with binFormatter will use our binder to determine the representative Types to create. Because the assembly is likely to be already loaded in memory at runtime, we do not need to worry about the assembly name parameter in our custom BindToType method. However, if that were not the case, we could use the assembly name to determine the correct assembly to load before calling Type.GetType.
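
To illustrate that last point, a hedged variant of the BindToType method above could map the assembly name as well and pass an assembly-qualified name to Type.GetType, which will load the assembly if necessary. The Shapes version and key token below are assumed for the sake of the example:

  public override Type BindToType(string assemblyName, string typeName) {
    if (typeName == "Bluey.Objects.Square") {
      // Map both the old type name and the old assembly name to their new homes.
      typeName = "Bluey.Shapes.Square";
      assemblyName = "Shapes, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null";
    }
    // An assembly-qualified name lets Type.GetType load the assembly if it is not loaded yet.
    return Type.GetType(string.Format("{0}, {1}", typeName, assemblyName));
  }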

The use of the custom binder applies to any object in the object graph, not just the root object being deserialized. So deserializing a Hashtable containing a Square will work just as well as deserializing a lone Square.
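
For example, continuing with the binFormatter from above (the backup file path and the Hashtable contents are hypothetical; File lives in System.IO and Hashtable in System.Collections):

  using (var stream = File.OpenRead(@"C:\Backups\legacy.bak")) {
    // Square entries nested inside the Hashtable are remapped by the binder too.
    var restored = (Hashtable)binFormatter.Deserialize(stream);
  }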

Pluralsight’s On-Demand training library

The next phase of the project I’m on will likely move us from .NET Remoting to WCF services. I wanted to get a quick introduction to WCF beyond the basic user-group sessions I found online. I eventually noticed that, as a BizSpark start-up, I had a code for a free month of access to Pluralsight’s on-demand training library among my listed MSDN benefits.

I signed up and started on their very in-depth 15-hour course on the fundamentals of WCF. So far, these courses are fantastic and if the rest of their library is of the same caliber, this seems like a goldmine of useful information. In the first two hours alone, most of the questions I had on what we’d need to do to drop WCF in to replace .NET Remoting have been answered. The questions that have come up while watching (can I keep my existing domain objects?) or spots where I thought the code was sloppy (why is the channel being closed before the async call completes?) were addressed and were part of the curriculum.

The only downside is that the cost is too high for me ($500/yr to stream the content, $1000/yr if you want to download). If you can get your company to spring for a subscription, though, Pluralsight is much cheaper (and, in my opinion, much more valuable and convenient) than a 5-day classroom training session.

SPSite.Exists leaks SPSites

The Sharepoint Object Model is a painful, leak-prone API to work with. Roger Lamb’s MSDN blog article is a great reference for deciding when you need to dispose of objects and when doing so will break things, but its very necessity is pretty powerful evidence of how clunky disposal is in this managed-code API.

In dev testing, QA found an issue where SPWeb objects were being leaked left and right. It turned out that some disposable objects (like SPLimitedWebPartManager) themselves contained disposable objects you had to account for (in this case, the SPLimitedWebPartManager’s Web property).

Today, I ran into another case with a very minimal set of code I had changed. A quick look in Reflector revealed that as of SP2, even Microsoft can’t always remember when to dispose of objects. The static method SPSite.Exists instantiates a new SPSite object that needs to be disposed of. However, there is no disposal code in the method. There’s no way to work around this issue other than not calling SPSite.Exists. Luckily, no other code in the object model seems to call SPSite.Exists.
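
If you still need an existence check, one option is to write your own version that disposes what it creates. The sketch below is an approximation, not the exact logic of SPSite.Exists; it assumes the SPSite constructor throws FileNotFoundException when the URL cannot be resolved at all and otherwise returns the closest matching site collection, which is why the resolved URL is compared against the requested one.

  using System;
  using System.IO;
  using Microsoft.SharePoint;

  public static class SiteCollectionHelper {
    // Rough, disposal-friendly stand-in for SPSite.Exists.
    public static bool Exists(Uri uri) {
      try {
        using (var site = new SPSite(uri.ToString())) {
          // The constructor resolves to the nearest site collection, so confirm
          // the resolved URL actually matches the one we asked about.
          return string.Equals(site.Url.TrimEnd('/'),
                               uri.ToString().TrimEnd('/'),
                               StringComparison.OrdinalIgnoreCase);
        }
      }
      catch (FileNotFoundException) {
        // Assumed: thrown when the URL cannot be resolved to any site collection.
        return false;
      }
    }
  }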