Conveniently Customized Instance Mappings in NServiceBus

[Illustration of Mapping]

Jason Reardon

02/08/2018

A lot of great changes came out of the upgrade to NServiceBus version 6. One in particular was the move of endpoint mappings from app.config to a separate instance-mapping file. This change finally allowed us to ditch the distributor in favor of sender-side routing, eliminating the performance bottleneck the distributor caused. While it is convenient to drop all your instance mappings into one XML file and have each endpoint load it at startup, this can become a maintenance nightmare. As the number of endpoints grows, pulling a server out of the pool becomes an involved process. Maintaining such a file also requires collaboration across every team that uses it, which can cause issues with versioning and ownership. Now, you might be thinking that the alternative of hard-coding the instances in a custom instance mapping would make matters worse, and with that I would have to agree. However, there is another way.

First, a little background for those who might not be completely familiar with NServiceBus' instance mappings. Instance mappings tell each endpoint about all the other endpoint instances that are available to receive messages. This happens when an endpoint starts up: the endpoint loads a list of endpoints and the machines where each instance is deployed. These mappings are also used in NServiceBus' message distribution. An endpoint can either be configured with only the endpoints it needs to send messages to, or with all endpoints. In the latter case, the endpoint simply ignores the instance mappings it does not need.
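To make this concrete, here is a minimal sketch of what such a mapping looks like in the default XML file format (the endpoint and machine names are made up for illustration):

```xml
<endpoints>
  <!-- Two instances of the Sales endpoint, deployed to two machines -->
  <endpoint name="Sales">
    <instance machine="ServerA" />
    <instance machine="ServerB" />
  </endpoint>
  <!-- A single instance of the Billing endpoint -->
  <endpoint name="Billing">
    <instance machine="ServerC" />
  </endpoint>
</endpoints>
```

With a mapping like this, any endpoint sending to Sales will distribute its messages across ServerA and ServerB.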

By default, when an endpoint starts, it looks for a file called "instance-mapping.xml" in AppDomain.BaseDirectory. However, this can be changed in the endpoint's configuration when defining the transport. Because of this, each endpoint can be configured to use one centralized instance-mapping file maintained in a shared directory, or each endpoint can receive its own instance-mapping file. Each of these solutions has its benefits. The centralized instance-mapping file makes maintenance a little easier, since all mappings can be added to one large file and each endpoint will only use the mappings it cares about; this makes adding and removing endpoints a simple process. Giving each endpoint its own mapping file, on the other hand, makes configuration and deployment a little simpler, but the distribution of files can make it difficult to make changes or track down misconfigurations. Regardless of the implementation, the instance-mapping file presents an issue whenever a server needs to be temporarily pulled from the pool, or endpoints need to be added to or removed from an ever-expanding centralized file or from each of a myriad of per-endpoint files. Luckily, NServiceBus provides us with the ability to create our own custom instance mappings.
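For example, pointing an endpoint at a centralized file on a share can be sketched roughly like this (the transport, endpoint name, and path here are assumptions for illustration):

```csharp
var endpointConfiguration = new EndpointConfiguration("Sales");
var transport = endpointConfiguration.UseTransport<MsmqTransport>();

// Override the default AppDomain.BaseDirectory location and load the
// centralized instance-mapping file from a shared directory instead.
var routing = transport.Routing();
var instanceMappingFile = routing.InstanceMappingFile();
instanceMappingFile.FilePath(@"\\fileshare\nservicebus\instance-mapping.xml");
```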

Custom instance mappings are built with NServiceBus' Feature API, which allows the creation of a custom extension that runs when an endpoint starts. The feature can be configured to be enabled by default, which means it will automatically run in every endpoint that references it. Now, the custom instance mapping in and of itself is not a superior solution to the instance-mapping file; after all, it's never a good idea to hard-code configuration information. However, it does give us the tools to build a more feature-rich, easier-to-maintain, and easier-to-scale instance-mapping solution.
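As a rough sketch, such a feature looks something like the following. The class name, source key, and hard-coded instances are placeholders; a real implementation would load the instances from somewhere other than code:

```csharp
using System.Collections.Generic;
using NServiceBus.Features;
using NServiceBus.Routing;

public class CustomInstanceMappingFeature : Feature
{
    public CustomInstanceMappingFeature()
    {
        // Run automatically in every endpoint that references this assembly.
        EnableByDefault();
    }

    protected override void Setup(FeatureConfigurationContext context)
    {
        // Register the known instances with NServiceBus' routing.
        var endpointInstances = context.Settings.Get<EndpointInstances>();
        endpointInstances.AddOrReplaceInstances("CustomMappings",
            new List<EndpointInstance>
            {
                new EndpointInstance("Sales").AtMachine("ServerA"),
                new EndpointInstance("Sales").AtMachine("ServerB")
            });
    }
}
```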

To achieve this, we developed a solution that combines the centralized nature of the single instance-mapping file with the customizability of code: we moved the configuration to the database and use a custom instance mapping to read and load it when an endpoint starts. Because we're doing this in code and maintaining the instance mappings in SQL, adding and removing endpoints becomes easier. Need to pull a server from the pool? Rather than deleting the machine's listing from the instance-mapping file of every endpoint, all you need is a simple update statement that flips an Enabled flag in the database for that machine. Need to remove an endpoint? Again, just disable it. This ensures that if an instance of an endpoint is left installed by mistake but is no longer running, it won't continue to receive messages and fill up its queue. Finally, since we're dealing with code and not an XML file that needs manual editing, we can have each endpoint instance scan the instance mappings for its own entry and, if it doesn't exist, register itself with the pool.
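Assuming a simple table such as EndpointInstances(EndpointName, MachineName, Enabled) — the schema here is purely illustrative — pulling a server or retiring an endpoint becomes a one-line statement:

```sql
-- Temporarily pull a machine out of the pool
UPDATE EndpointInstances SET Enabled = 0 WHERE MachineName = 'ServerB';

-- Retire an endpoint everywhere
UPDATE EndpointInstances SET Enabled = 0 WHERE EndpointName = 'Billing';
```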


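The refresh behavior can be sketched as a FeatureStartupTask, registered from the feature's Setup method via context.RegisterStartupTask. The LoadEnabledInstances call below is a stand-in for the real database query:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using NServiceBus;
using NServiceBus.Features;
using NServiceBus.Routing;

class AutoRefresher : FeatureStartupTask
{
    readonly EndpointInstances endpointInstances;
    Timer timer;

    public AutoRefresher(EndpointInstances endpointInstances)
    {
        this.endpointInstances = endpointInstances;
    }

    protected override Task OnStart(IMessageSession session)
    {
        // Re-read the mappings every minute so changes made in the
        // database take effect without restarting the endpoint.
        timer = new Timer(_ => Refresh(), null,
            TimeSpan.Zero, TimeSpan.FromMinutes(1));
        return Task.CompletedTask;
    }

    protected override Task OnStop(IMessageSession session)
    {
        timer?.Dispose();
        return Task.CompletedTask;
    }

    void Refresh()
    {
        List<EndpointInstance> instances = LoadEnabledInstances();
        endpointInstances.AddOrReplaceInstances("CustomMappings", instances);
    }

    static List<EndpointInstance> LoadEnabledInstances() =>
        new List<EndpointInstance>(); // Placeholder for the database query.
}
```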
Our solution registers an AutoRefresher as a startup task. This creates a timer that periodically rescans and reloads the instance mappings from the database, solving one of the biggest issues with the default instance mappings: the need to restart an endpoint to reload the mappings every time something changes.


Here at Afterman Software, we have developed such a solution and have successfully implemented it for multiple clients. If you're interested in checking out our solution, feel free to grab it from our GitHub. I hope you found this post useful and informative. Leave us a comment if you have any questions, or create a pull request if you would like to contribute!