Enhancing Application Insights Request Telemetry

This post is a continuation of my series about using Application Insights in ASP.NET Core. Today we will take a deeper dive into Request telemetry.

Request Telemetry

For an ASP.NET Core process, the Application Insights SDK will automatically collect data about every request that the server process receives. This specific type of telemetry is called Request telemetry and it contains a ton of very useful data, including the request path, the HTTP verb, the response status code, the duration, and the timestamp when the request was received.

Sample Request Telemetry

The default data is great, but I often find myself wanting more information. For example, in a multi-tenant application, it would be very useful to track the tenant id as part of the request telemetry. This would allow us to filter data more effectively in the Application Insights portal and craft some very useful log analytics queries.

Adding custom data to Request Telemetry

All types of telemetry in Application Insights provide an option to store custom properties. In the previous post, we saw how to create an ITelemetryInitializer to set properties on a particular telemetry instance. We could easily add custom properties to our Request telemetry using a telemetry initializer.

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class CustomPropertyTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (telemetry is RequestTelemetry requestTelemetry)
        {
            requestTelemetry.Properties["MyCustomProperty"] = "Some Useful Value";
        }
    }
}

Any custom properties you add will be listed under Custom Properties in the Application Insights portal.

Sample Request Telemetry with Custom Properties

But telemetry initializers are singletons and often don’t have access to the useful data that we want to add to request telemetry. Typically the data we want is related in some way to the current request and that data wouldn’t be available in a singleton service. Fortunately, there is another easy way to get an instance of the request telemetry for the current request.

var requestTelemetry = HttpContext.Features.Get<RequestTelemetry>();
requestTelemetry.Properties["TenantId"] = "ACME_CORP";

You can do this anywhere you have access to an HttpContext. Some examples I have seen include middleware, action filters, controller action methods, OnActionExecuting in a base Controller class, and PageModel classes in Razor Pages.
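For example, here is a minimal middleware sketch. Everything below is illustrative: ResolveTenantId is a hypothetical helper standing in for however your application identifies the current tenant.

using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.AspNetCore.Http;
using System.Threading.Tasks;

public class TenantTelemetryMiddleware
{
    private readonly RequestDelegate _next;

    public TenantTelemetryMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // Get<RequestTelemetry>() returns null if Application Insights
        // isn't tracking this request
        var requestTelemetry = context.Features.Get<RequestTelemetry>();
        if (requestTelemetry != null)
        {
            requestTelemetry.Properties["TenantId"] = ResolveTenantId(context);
        }

        await _next(context);
    }

    private static string ResolveTenantId(HttpContext context)
    {
        // Hypothetical: read the tenant from a request header, for illustration only
        return context.Request.Headers["X-Tenant-Id"].ToString();
    }
}

Register it in Startup.Configure with app.UseMiddleware<TenantTelemetryMiddleware>() so it runs for every request.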

Filtering by Custom Properties in the Portal

Once you’ve added custom properties to Request Telemetry, you can use those custom properties to filter data in the Application Insights portal. For example, you might want to investigate failures that are occurring for a specific tenant or investigate performance for a particular tenant.

Filtering by Custom Property

This type of filtering can be applied almost anywhere in the portal and can help narrow things down when investigating problems.

Writing Useful Log Analytics Queries

Now this is where things get really interesting for me. What if we had one particular tenant complaining about performance? Wouldn’t it be interesting to plot out the average request duration for all tenants? We can easily accomplish this using a log analytics query.

requests
| summarize avg(duration) by tostring(customDimensions.TenantId), bin(timestamp, 15m)
| render timechart

This simple query will produce the following chart:

Log Analytics Query Summarize by Custom Property

Small variations on this query can be extremely useful in comparing response times, failure rates, usage, and pretty much anything else you can think of.
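For instance, a small tweak to the query above charts failed request counts per tenant instead of average duration (a sketch; it assumes the same TenantId custom property):

requests
| where success == false
| summarize failedRequests = count() by tostring(customDimensions.TenantId), bin(timestamp, 15m)
| render timechart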

Wrapping it up

TenantId is just an example of a custom property. The custom properties that are useful for a particular application tend to emerge naturally as you’re investigating issues and sifting through telemetry in Application Insights. You will eventually find yourself saying “I wish I knew what xxx was for this request”. When that happens, stop and add that as a custom property to the request telemetry. You’ll thank yourself later.

Setting Cloud Role Name in Application Insights

This post is a continuation of my series about using Application Insights in ASP.NET Core. Today we will explore the concept of Cloud Role and why it’s an important thing to get right for your application.

In any application that involves more than a single server process/service, the concept of Cloud Role becomes really important in Application Insights. A Cloud Role roughly represents a process that runs somewhere on a server, or possibly on a number of servers. A cloud role is made up of two things: a cloud role name and a cloud role instance.

Cloud Role Name

The cloud role name is a logical name for a particular process. For example, I might have a cloud role name of “Front End” for my front end web server and a name of “Weather Service” for a service that is responsible for providing weather data.

When a cloud role name is set, it will appear as a node in the Application Map. Here is an example showing a Front End role and a Weather Service role.

Application Map when Cloud Role Name is set

However, when Cloud Role Name is not set, we end up with a misleading visual representation of how our services communicate.

Application Map when Cloud Role Name is not set

By default, the Application Insights SDK attempts to set the cloud role name for you. For example, when you’re running in Azure App Service, the name of the web app is used. However, when you are running in an on-premises VM, the cloud role name is often blank.

Cloud Role Instance

The cloud role instance tells us which specific server the cloud role is running on. This is important when scaling out your application. For example, if my Front End web server was running 2 instances behind a load balancer, I might have a cloud role instance of “frontend_prod_1” and another instance of “frontend_prod_2”.

The Application Insights SDK sets the cloud role instance to the name of the server hosting the service: for example, the name of the VM, or the name of the underlying compute instance hosting the app in App Service. In my experience, the SDK does a good job here and I don’t usually need to override the cloud role instance.

Setting Cloud Role Name using a Telemetry Initializer

Telemetry Initializers are a powerful mechanism for customizing the telemetry that is collected by the Application Insights SDK. By creating and registering a telemetry initializer, you can overwrite or extend the properties of any piece of telemetry collected by Application Insights.

To set the Cloud Role Name, create a class that implements ITelemetryInitializer and in the Initialize method set the telemetry.Context.Cloud.RoleName to the cloud role name for the current application.

public class CloudRoleNameTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // set custom role name here
        telemetry.Context.Cloud.RoleName = "Custom RoleName";
    }
}

Next, in the Startup.ConfigureServices method, register that telemetry initializer as a singleton.

services.AddSingleton<ITelemetryInitializer, CloudRoleNameTelemetryInitializer>();

For those who learn by watching, I have recorded a video talking about using telemetry initializers to customize application insights.

Using a NuGet Package

Creating a custom telemetry initializer to set the cloud role name is simple enough, but it’s something I’ve done so many times that I decided to publish a NuGet package to simplify it even further.

First, add the AspNetMonsters.ApplicationInsights.AspNetCore NuGet package:

dotnet add package AspNetMonsters.ApplicationInsights.AspNetCore

Next, call AddCloudRoleNameInitializer in your application’s Startup.ConfigureServices method:

services.AddCloudRoleNameInitializer("WeatherService");

Filtering by Cloud Role

Setting the Cloud Role Name / Instance is about a lot more than seeing your services laid out properly in the Application Map. It’s also really important when you start digging into the performance and failures tabs in the Application Insights portal. In fact, on most sections of the portal, you’ll see this Roles filter.

Roles pill

The default setting is all. When you click on it, you have the option to select any combination of your application’s role names / instances. For example, maybe I’m only interested in the FrontEnd service and WeatherService that were running on the dave_yoga920 instance.

Roles filter

These filters are extremely useful when investigating performance or errors on a specific server or within a specific service, and they really help you focus in on specific areas of an application within the Application Insights portal. The more services your application is made up of, the more useful and essential this filtering becomes.

Next Steps

In this post, we saw how to customize telemetry data using telemetry initializers. Setting the cloud role name is a simple customization that can help you navigate the massive amount of telemetry that Application Insights collects. In the next post, we will explore a more complex example of using telemetry initializers.

Getting the Most Out of Application Insights for .NET (Core) Apps

Application Insights is a powerful and surprisingly flexible application performance monitoring (APM) service hosted in Azure. Every time I’ve used Application Insights on a project, it has opened the team’s eyes to what is happening with our application in production. In fact, this might just be one of the best named Microsoft products ever. It literally provides insights into your applications.

Application Map provides a visual representation of your app's dependencies

Application Insights has built-in support for .NET, Java, Node.js, Python, and client-side JavaScript applications. This blog post is specifically about .NET applications. If your application is built in another language, head over to the docs to learn more.

Codeless Monitoring vs Code-based Monitoring

With codeless monitoring, you configure a monitoring tool to run on the server (or service) that is hosting your application. The monitoring tool watches running processes and collects whatever information is available for that particular platform. There is built-in support for Azure VMs and scale sets, Azure App Service, Azure Cloud Services, Azure Functions, Kubernetes applications, and on-premises VMs. Codeless monitoring is a good option if you want to collect information for applications that have already been built and deployed, but you are generally going to get more information using code-based monitoring.

With code-based monitoring, you add the Application Insights SDK to your application. The steps for adding the SDK are well documented for ASP.NET Core, ASP.NET, and .NET console applications so I don’t need to re-hash them here.
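For reference, the heart of the ASP.NET Core setup is a single registration in Startup.ConfigureServices once the Microsoft.ApplicationInsights.AspNetCore package is installed (a minimal sketch; see the docs for the full steps):

public void ConfigureServices(IServiceCollection services)
{
    // Registers the Application Insights SDK and starts collecting telemetry;
    // the instrumentation key is read from configuration
    services.AddApplicationInsightsTelemetry();
}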

If you prefer, I have recorded a video showing how to add Application Insights to an existing ASP.NET Core application.

Telemetry

Once you’ve added the Application Insights SDK to your application, it will start collecting telemetry data at runtime and sending it to Application Insights. That telemetry data is what feeds the UI in the Application Insights portal. The SDK will automatically collect information about your dependencies: calls to SQL Server, HTTP calls, and calls to many popular Azure services. It’s the dependencies that are often the most insightful. In a complex system, it’s difficult to know exactly which dependencies your application calls in order to process an incoming request. With App Insights, you can see exactly what dependencies are called by drilling in to the End-to-End Transaction view.

End-to-end transaction view showing an excess number of calls to SQL Server

In addition to dependencies, the SDK will also collect requests, exceptions, traces, customEvents, and performanceCounters. If your application has a web front-end and you add the JavaScript client SDK, you’ll also find pageViews and browserTimings.

Separate your Environments

The SDK decides which Application Insights instance to send the collected telemetry to, based on the configured Instrumentation Key.

In the ASP.NET Core SDK, this is done through app settings:

{
  "ApplicationInsights": {
    "InstrumentationKey": "ccbe3f84-0f5b-44e5-b40e-48f58df563e1"
  }
}

When you’re diagnosing an issue in production or investigating performance in your production systems, you don’t want any noise from your development or staging environments. I always recommend creating an Application Insights resource per environment. In the Azure Portal, you’ll find the instrumentation key in the top section of the Overview page for your Application Insights resource. Just grab that instrumentation key and add it to your environment-specific configuration.
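For example, using the standard ASP.NET Core configuration conventions, an environment-specific file like appsettings.Production.json can carry the production key (the value below is a placeholder):

{
  "ApplicationInsights": {
    "InstrumentationKey": "<production-instrumentation-key>"
  }
}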

Use a single instance for all your production services

Consider a micro-services type architecture where your application is composed of a number of services, each hosted within its own process. It might be tempting to point each service at a separate instance of Application Insights.

Contrary to the guidance of separating your environments, you’ll actually get the most value from Application Insights if you point all your related production services to a single Application Insights instance. The reason for this is that Application Insights automatically correlates telemetry, so you can track a particular request across a series of separate services. That might sound a little like magic, but it’s not as complicated as it sounds: each outgoing HTTP call carries a correlation header, and the receiving service’s SDK picks it up so that all the telemetry for one operation shares the same operation ID.

It’s this correlation that allows the Application Map in App Insights to show exactly how all your services interact with each other.

Application Map showing multiple services

It also enables the end-to-end transaction view to show a timeline of all the calls between your services when you are drilling in to a specific request.

This is all contingent on all your services sending telemetry to the same Application Insights instance. The Application Insights UI in the Azure Portal has no ability to display these visualizations across multiple Application Insights instances.

You don’t need to be on Azure

I’ve often heard developers say “I can’t use Application Insights because we’re not on Azure”. Well, you don’t need to host your application on Azure to use Application Insights. Yes, you will need an Azure subscription for the Application Insights resource, but your application can be hosted anywhere. That includes your own on-premises servers, AWS, or any other public/private cloud.

Next Steps

Out of the box, Application Insights provides a tremendous amount of value but I always find myself having to customize a few things to really get the most out of the telemetry. Fortunately, the SDK provides some useful extension points. My plan is to follow up this post with a few more posts that go over those customizations in detail. I also have started to create a NuGet package to simplify those customizations so stay tuned!

Update

Other posts in this series:
Setting Cloud Role Name
Enhancing Application Insights Request Telemetry

Using NodaTime with Dapper

This is a part of a series of blog posts on data access with Dapper. To see the full list of posts, visit the Dapper Series Index Page.

After my recent misadventures attempting to use Noda Time with Entity Framework Core, I decided to see what it would take to use Dapper in the same scenario.

A quick recap

In my app, I needed to model an Event that occurs on a particular date. It might be initially tempting to store the date of the event as a DateTime in UTC, but that’s not necessarily accurate unless the event happens to be held at the Royal Observatory Greenwich. I don’t want to deal with time at all, I’m only interested in the date the event is being held.

NodaTime provides a LocalDate type that is perfect for this scenario so I declared a LocalDate property named Date on my Event class.

public class Event
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public LocalDate Date { get; set; }
}

Querying using Dapper

I modified my app to query for the Event entities using Dapper:

using (var connection = new SqlConnection(myConnectionString))
{
    await connection.OpenAsync();
    Events = await connection.QueryAsync<Event>(
        @"SELECT [e].[Id], [e].[Date], [e].[Description], [e].[Name]
          FROM [Events] AS [e]");
}

The app started up just fine, but gave me an error when I tried to query for events.

System.Data.DataException: Error parsing column 1 (Date=3/26/19 12:00:00 AM - DateTime) ---> System.InvalidCastException: Invalid cast from 'System.DateTime' to 'NodaTime.LocalDate'.

Likewise, if I attempted to query for events using a LocalDate parameter, I got another error:

var queryDate = new LocalDate(2019, 3, 26);
using (var connection = new SqlConnection(myConnectionString))
{
    await connection.OpenAsync();
    Events = await connection.QueryAsync<Event>(
        @"SELECT [e].[Id], [e].[Date], [e].[Description], [e].[Name]
          FROM [Events] AS [e]
          WHERE [e].[Date] = @Date", new { Date = queryDate });
}

NotSupportedException: The member Date of type NodaTime.LocalDate cannot be used as a parameter value

Fortunately, both these problems can be solved by implementing a simple TypeHandler.

Implementing a Custom Type Handler

Out of the box, Dapper already knows how to map to the standard .NET types like Int32, Int64, string and DateTime. The problem we are running into here is that Dapper doesn’t know anything about the LocalDate type. If you want to map to a type that Dapper doesn’t know about, you can implement a custom type handler. To implement a type handler, create a class that inherits from SqlMapper.TypeHandler<T>, where T is the type that you want to map to. In your type handler class, implement the Parse and SetValue methods. These methods will be used by Dapper when mapping to and from properties that are of type T.

Here is an example of a type handler for LocalDate.

using System;
using System.Data;
using Dapper;
using NodaTime;

public class LocalDateTypeHandler : SqlMapper.TypeHandler<LocalDate>
{
    public override LocalDate Parse(object value)
    {
        if (value is DateTime dateTime)
        {
            return LocalDate.FromDateTime(dateTime);
        }

        throw new DataException($"Unable to convert {value} to LocalDate");
    }

    public override void SetValue(IDbDataParameter parameter, LocalDate value)
    {
        parameter.Value = value.ToDateTimeUnspecified();
    }
}

Finally, you need to tell Dapper about your new custom type handler. To do that, register the type handler somewhere in your application’s startup class by calling Dapper.SqlMapper.AddTypeHandler.

Dapper.SqlMapper.AddTypeHandler(new LocalDateTypeHandler());

There’s a NuGet for that

As it turns out, someone has already created a helpful NuGet package containing TypeHandlers for many of the NodaTime types so you probably don’t need to write these yourself. Use the Dapper.NodaTime package instead.
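Adding it is the usual one-liner; the package’s handlers are then registered with Dapper.SqlMapper.AddTypeHandler in the same way as the hand-rolled one above (check the package’s README for the exact handler types it ships):

dotnet add package Dapper.NodaTime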

Wrapping it up

TypeHandlers are a simple extension point that allows Dapper to handle types it doesn’t know about out of the box. You can write your own type handlers, but you might also want to check if someone has already published a NuGet package that handles your types.

Using Noda Time with Entity Framework Core

If you have ever dealt with dates/times in an environment that crosses time zones, you know how difficult it can be to handle all scenarios properly. This situation isn’t made any better by .NET’s somewhat limited representation of date and time values through the one DateTime class. For example, how do I represent a date in .NET when I don’t care about the time? There is no type that represents a date on its own. That’s why the Noda Time library was created, billing itself as a better date and time API for .NET.

Noda Time is an alternative date and time API for .NET. It helps you to think about your data more clearly, and express operations on that data more precisely.

An example using NodaTime

In my app, I needed to model an Event that occurs on a particular date. It might be initially tempting to store the date of the event as a DateTime in UTC, but that’s not necessarily accurate unless the event happens to be held at the Royal Observatory Greenwich. I don’t want to deal with time at all, I’m only interested in the date the event is being held.

NodaTime provides a LocalDate type that is perfect for this scenario so I declared a LocalDate property named Date on my Event class.

public class Event
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public LocalDate Date { get; set; }
}

Using Entity Framework

This app was using Entity Framework Core and there was a DbSet for the Event class.

public class EventContext : DbContext
{
    public EventContext(DbContextOptions<EventContext> options) : base(options)
    {
    }

    public DbSet<Event> Events { get; set; }
}

This is where I ran into my first problem. Attempting to run the app, I was greeted with a friendly InvalidOperationException:

InvalidOperationException: The property 'Event.Date' could not be mapped, because it is of type 'LocalDate' which is not a supported primitive type or a valid entity type. Either explicitly map this property, or ignore it using the '[NotMapped]' attribute or by using 'EntityTypeBuilder.Ignore' in 'OnModelCreating'.

This first problem was actually easy enough to solve using a ValueConverter. By adding the following OnModelCreating code to my EventContext, I was able to tell Entity Framework Core to store the Date property as a DateTime with the Kind set to DateTimeKind.Unspecified. This has the effect of avoiding any unwanted shifts in the date time based on the local time of the running process.

public class EventContext : DbContext
{
    public EventContext(DbContextOptions<EventContext> options) : base(options)
    {
    }

    public DbSet<Event> Events { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        var localDateConverter =
            new ValueConverter<LocalDate, DateTime>(
                v => v.ToDateTimeUnspecified(),
                v => LocalDate.FromDateTime(v));

        modelBuilder.Entity<Event>()
            .Property(e => e.Date)
            .HasConversion(localDateConverter);
    }
}

With that small change, my application now worked as expected. The value conversions all happen behind the scenes so I can just use the Event entity and deal strictly with the LocalDate type.
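As a quick illustration (a sketch built on the Event entity above), saving an event works directly with LocalDate values; the converter handles the DateTime translation invisibly:

var gameNight = new Event
{
    Id = Guid.NewGuid(),
    Name = "Game Night",
    Description = "Board games at the community hall",
    Date = new LocalDate(2019, 3, 25) // a date with no time component
};
context.Events.Add(gameNight);
await context.SaveChangesAsync();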

But what about queries?

I actually had this application running in a test environment for a week before I noticed a serious problem in the log files.

In my app, I was executing a simple query to retrieve the list of events for a particular date.

var queryDate = new LocalDate(2019, 3, 25);
Events = await context.Events.Where(e => e.Date == queryDate).ToListAsync();

In the app’s log file, I noticed the following warning:

Microsoft.EntityFrameworkCore.Query:Warning: The LINQ expression 'where ([e].Date == __queryDate_0)' could not be translated and will be evaluated locally.

Uh oh, that sounds bad. I did a little more investigation and confirmed that the query was in fact executing SQL without a WHERE clause.

SELECT [e].[Id], [e].[Date], [e].[Description], [e].[Name]
FROM [Events] AS [e]

So my app was retrieving EVERY ROW from the Events table, then applying the where filter in the .NET process. That’s really not what I intended and it would most certainly cause me some performance trouble when I get to production.

So, the first thing I did was modify my EF Core configuration to throw an error when a client-side evaluation like this occurs. I don’t want this kind of thing accidentally creeping into this app again. Over in Startup.ConfigureServices, I added the following option to ConfigureWarnings.

services.AddDbContext<EventContext>(options =>
    options.UseSqlServer(myConnectionString)
        .ConfigureWarnings(warnings =>
            warnings.Throw(RelationalEventId.QueryClientEvaluationWarning)));

Throwing an error by default is the correct behavior here, and this is actually something that will be fixed in Entity Framework Core 3.0. The default behavior in EF Core 3 will be to throw an error any time a LINQ expression results in client-side evaluation. You will then have the option to allow those client-side evaluations.

Fixing the query

Now that I had the app throwing an error for this query, I needed to find a way for EF Core to properly translate my simple e.Date == queryDate expression to SQL. After carefully re-reading the EF Core documentation related to value converters, I noticed a bullet point under Limitations:

Use of value conversions may impact the ability of EF Core to translate expressions to SQL. A warning will be logged for such cases. Removal of these limitations is being considered for a future release.

Well, that just plain sucks. It turns out that when you use a value converter for a property, Entity Framework Core just gives up trying to translate any LINQ expression that references that property. The only solution I found was to query for my entities using raw SQL.

var queryDate = new LocalDate(2019, 3, 25);
Events = await context.Events
    .FromSql(
        @"SELECT [e].[Id], [e].[Date], [e].[Description], [e].[Name]
          FROM [Events] AS [e]
          WHERE [e].[Date] = {0}", queryDate.ToDateTimeUnspecified())
    .ToListAsync();

Wrapping it up

NodaTime is a fantastic date and time library for .NET and you should definitely consider using it in your app. Unfortunately, Entity Framework Core has some serious limitations when it comes to using value converters, so you will need to be careful. I almost got myself into some problems with it. While there are work-arounds, writing custom SQL for any query that references a NodaTime type is less than ideal. Hopefully these limitations will be addressed in Entity Framework Core 3.

Optimistic Concurrency Tracking with Dapper and SQL Server

This is a part of a series of blog posts on data access with Dapper. To see the full list of posts, visit the Dapper Series Index Page.

In today’s post, we explore a pattern to prevent multiple users (or processes) from accidentally overwriting each other’s changes. Given our current implementation for updating the Aircraft record, there is potential for data loss when multiple active sessions attempt to update the same Aircraft record at the same time. In the example shown below, Bob accidentally overwrites Jane’s changes without even knowing that Jane made changes to the same Aircraft record.

Concurrent Updates

The pattern we will use here is Optimistic Offline Lock, which is often also referred to as Optimistic Concurrency Control.

Modifying the Database and Entities

To implement this approach, we will use a rowversion column in SQL Server. Essentially, this is a column that automatically version-stamps a row in a table. Any time a row is modified, the rowversion column is automatically incremented for that row. We will start by adding the column to our Aircraft table.

ALTER TABLE Aircraft ADD RowVer rowversion

Next, we add a RowVer property to the Aircraft class. The property is a byte array; when we read the RowVer column from the database, we will get an array of 8 bytes.

public class Aircraft
{
    public int Id { get; set; }
    public string Manufacturer { get; set; }
    public string Model { get; set; }
    public string RegistrationNumber { get; set; }
    public int FirstClassCapacity { get; set; }
    public int RegularClassCapacity { get; set; }
    public int CrewCapacity { get; set; }
    public DateTime ManufactureDate { get; set; }
    public int NumberOfEngines { get; set; }
    public int EmptyWeight { get; set; }
    public int MaxTakeoffWeight { get; set; }
    public byte[] RowVer { get; set; }
}

Finally, we will modify the query used to load Aircraft entities so it returns the RowVer column. We don’t need to change any of the Dapper code here.

public async Task<Aircraft> Get(int id)
{
    Aircraft aircraft;
    using (var connection = new SqlConnection(_connectionString))
    {
        await connection.OpenAsync();
        var query = @"
            SELECT
                 Id
                ,Manufacturer
                ,Model
                ,RegistrationNumber
                ,FirstClassCapacity
                ,RegularClassCapacity
                ,CrewCapacity
                ,ManufactureDate
                ,NumberOfEngines
                ,EmptyWeight
                ,MaxTakeoffWeight
                ,RowVer
            FROM Aircraft WHERE Id = @Id";

        aircraft = await connection.QuerySingleAsync<Aircraft>(query, new { Id = id });
    }
    return aircraft;
}

Adding the Concurrency Checks

Now that we have the row version loaded in to our model, we need to add the checks to ensure that one user doesn’t accidentally overwrite another user’s changes. To do this, we simply need to add the RowVer to the WHERE clause of the UPDATE statement. By adding this constraint, we ensure that the update will only be applied if the RowVer has not changed since this user originally loaded the Aircraft entity.

public async Task<IActionResult> Put(int id, [FromBody] Aircraft model)
{
    if (id != model.Id)
    {
        return BadRequest();
    }

    using (var connection = new SqlConnection(_connectionString))
    {
        await connection.OpenAsync();
        var query = @"
            UPDATE Aircraft
            SET Manufacturer = @Manufacturer
               ,Model = @Model
               ,RegistrationNumber = @RegistrationNumber
               ,FirstClassCapacity = @FirstClassCapacity
               ,RegularClassCapacity = @RegularClassCapacity
               ,CrewCapacity = @CrewCapacity
               ,ManufactureDate = @ManufactureDate
               ,NumberOfEngines = @NumberOfEngines
               ,EmptyWeight = @EmptyWeight
               ,MaxTakeoffWeight = @MaxTakeoffWeight
            WHERE Id = @Id
              AND RowVer = @RowVer";

        await connection.ExecuteAsync(query, model);
    }

    return Ok();
}

So, the WHERE clause stops the update from happening, but how do we know if the update was applied successfully? We need to let the user know that the update was not applied due to a concurrency conflict. To do that, we add OUTPUT inserted.RowVer to the UPDATE statement. The effect of this is that the query will return the new value for the RowVer column if the update was applied. If not, it will return null.

public async Task<IActionResult> Put(int id, [FromBody] Aircraft model)
{
    byte[] rowVersion;
    if (id != model.Id)
    {
        return BadRequest();
    }

    using (var connection = new SqlConnection(_connectionString))
    {
        await connection.OpenAsync();
        var query = @"
            UPDATE Aircraft
            SET Manufacturer = @Manufacturer
               ,Model = @Model
               ,RegistrationNumber = @RegistrationNumber
               ,FirstClassCapacity = @FirstClassCapacity
               ,RegularClassCapacity = @RegularClassCapacity
               ,CrewCapacity = @CrewCapacity
               ,ManufactureDate = @ManufactureDate
               ,NumberOfEngines = @NumberOfEngines
               ,EmptyWeight = @EmptyWeight
               ,MaxTakeoffWeight = @MaxTakeoffWeight
            OUTPUT inserted.RowVer
            WHERE Id = @Id
              AND RowVer = @RowVer";

        rowVersion = await connection.ExecuteScalarAsync<byte[]>(query, model);
    }

    if (rowVersion == null)
    {
        throw new DBConcurrencyException("The entity you were trying to edit has changed. Reload the entity and try again.");
    }
    return Ok(rowVersion);
}

Instead of calling ExecuteAsync, we call ExecuteScalarAsync<byte[]>. Then we can check if the returned value is null and raise a DBConcurrencyException if it is null. If it is not null, we can return the new RowVer value.
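Throwing works, but in an ASP.NET Core API you might prefer to translate the conflict into a 409 response instead. Here is a sketch of that alternative for the end of the Put action above:

if (rowVersion == null)
{
    // Someone else changed (or deleted) the row since this client loaded it
    return Conflict("The entity you were trying to edit has changed. Reload the entity and try again.");
}
return Ok(rowVersion);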

Wrapping it up

Using SQL Server’s rowversion column type makes it easy to implement optimistic concurrency checks in a .NET app that uses Dapper.

If you are building a REST API, you should really use the ETag header to represent the current RowVer for your entity. You can read more about this pattern here.
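A minimal sketch of that idea, assuming the controller actions shown earlier: Base64-encode the row version on the way out, and decode the If-Match header on the way back in.

// In the GET action: expose the rowversion as an ETag (illustration only)
Response.Headers["ETag"] = $"\"{Convert.ToBase64String(aircraft.RowVer)}\"";

// In the PUT action: recover the rowversion from the If-Match header
var etag = Request.Headers["If-Match"].ToString().Trim('"');
model.RowVer = Convert.FromBase64String(etag);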