Markdown in your ASP.NET Core Razor Pages

What? Markdown in your Razor code? Yeah…and it was totally easy to build too.

Taylor Mullen demoed the idea of a Markdown Tag Helper at Orchard Harvest and I thought it would be a nice addition to my Tag Helper Samples project.

How to use it

This tag helper allows you to write Markdown directly in Razor and have it automatically converted to HTML at runtime. There are two options for using this tag helper. The first option is to use a <markdown> element.

<markdown>This is some _simple_ **markdown**.</markdown>

The tag helper will take this and convert it to the following HTML:

<p>This is some <em>simple</em> <strong>markdown</strong>.</p>

The other option is to use a <p> element that has the markdown attribute:

<p markdown>This is some _simple_ **markdown** in a _p_ element.</p>

The tag helper uses MarkdownSharp, which supports most of the markdown syntax supported by Stack Overflow.

How it works

The implementation of this tag helper is surprisingly simple. All we do is grab the child content of the tag and use MarkdownSharp to convert it to HTML.

[HtmlTargetElement("markdown")]
[HtmlTargetElement("p", Attributes = "markdown")]
public class MarkdownTagHelper : TagHelper
{
    public override async Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (output.TagName == "markdown")
        {
            output.TagName = null;
        }

        var content = await output.GetChildContentAsync();
        var markdown = content.GetContent();
        var html = new MarkdownSharp.Markdown().Transform(markdown);

        output.Content.SetHtmlContent(html ?? "");
    }
}

Try it yourself

You can grab the code from GitHub or install the package using NuGet.

Install-Package TagHelperSamples.Markdown
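After installing the package, the tag helper needs to be registered in your view imports. A minimal sketch, assuming the assembly name matches the package id:

```cshtml
@* _ViewImports.cshtml — register all tag helpers from the samples assembly *@
@addTagHelper *, TagHelperSamples.Markdown
```

With that in place, any <markdown> element or <p markdown> element in your Razor pages will be processed by the tag helper.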

Give it a try and let me know what you think.

Custom ASP.NET Core Tag Helper Samples

A group of us who have been exploring ASP.NET Core MVC Tag Helpers have created a repository of Tag Helper Samples. The repository contains a set of real-world samples that can help you understand how to build your own custom tag helpers.

So far, we have been focusing on Tag Helpers that make it easier to use various Bootstrap components. We chose Bootstrap because Bootstrap components are often verbose and it can be easy to miss a particular class or a specific attribute. I find this is especially true when you consider all the accessibility aria-* attributes. So far, we have implemented tag helpers for Bootstrap Alerts, Progress Bars and most recently Modals.


Alert

The alert tag helper, contributed by Rick Strahl, makes it easy to display Bootstrap alerts containing Font Awesome icons.

<alert message="Payment has been processed." icon="success" />

Will output the following HTML:

<div class="alert alert-success" role="alert">
    <i class="fa fa-check"></i> Payment has been processed.
</div>

Progress Bar

Displaying a progress bar in Bootstrap is a rather verbose set of elements and attributes:

<div class="progress">
    <div class="progress-bar" role="progressbar" aria-valuenow="60" aria-valuemin="0" aria-valuemax="100" style="width: 60%;">
        <span class="sr-only">60% Complete</span>
    </div>
</div>

The progress bar tag helper provides a much cleaner syntax:

<div bs-progress-value="66"></div>

Modal

Bootstrap modals are also rather convoluted. The simplest possible modal consists of too many nested divs and is, in my opinion, hard to read:

<div class="modal fade">
    <div class="modal-dialog">
        <div class="modal-content">
            <div class="modal-header">
                <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button>
                <h4 class="modal-title">Modal title</h4>
            </div>
            <div class="modal-body">
                <p>One fine body...</p>
            </div>
            <div class="modal-footer">
                <button type="button" class="btn btn-default" data-dismiss="modal">Close</button>
                <button type="button" class="btn btn-primary">Save changes</button>
            </div>
        </div><!-- /.modal-content -->
    </div><!-- /.modal-dialog -->
</div><!-- /.modal -->

The same modal using the modal tag helper is much easier to read and will produce the same output:

<modal id="simpleModal" title="Modal Title">
    <p>One fine body...</p>
    <button type="button" class="btn btn-primary">Save changes</button>
</modal>

Wrapping it up

Feel free to browse the sample code or see the samples in action on Azure. If you have ideas for other Tag Helpers, feel free to log an issue in the repo. Better yet, you could also submit a pull request.

A big thank you to Rick Anderson for suggesting this and getting us started and to Rick Strahl for contributing.

Adding Prefixes to Tag Helpers in ASP.NET Core MVC

Some people have said that they would prefer all Tag Helper elements in ASP.NET Core MVC to be prefixed. I honestly don’t see myself doing this but it is easy to turn on if you prefer tag helper elements to be prefixed with some special text.

Simply add the @tagHelperPrefix directive to the _ViewImports.cshtml file in your project:

@tagHelperPrefix "th:"

Now, Razor will only recognize elements as Tag Helpers if they are prefixed with "th:".
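As a sketch, using the built-in environment tag helper as an example:

```cshtml
@* Processed as a tag helper *@
<th:environment names="Development">
    <script src="~/app/app.js"></script>
</th:environment>

@* Without the prefix, this is now treated as a plain HTML element *@
<environment names="Development"></environment>
```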

You can choose whatever prefix you want for your project. As I said, I probably won’t be using this myself but at least there is an easy way to turn on tag helper prefixes for those who want to be very explicit about tag helpers.

One nice thing about prefixes is that they enable a quick way to identify which tag helpers exist in a project. When you type the prefix, IntelliSense will show you a list of elements that can be processed by tag helpers:

What do you think? Prefix or no prefix?

Why Gulp?

I recently made some updates to my blog post on How to Use Gulp in Visual Studio. I don’t usually go back and update old blog posts, but this one receives a fair amount of daily traffic. There was a minor mistake in the way I had set up my gulp watch and I wanted to fix that to avoid confusion. I also get a lot of questions about why using a task runner like Gulp is a ‘better approach’ than the way things are done in ASP.NET 4.x. I have addressed some of those questions in the original post but I will go into more detail here.

A Quick Example

Let’s start with a quick example comparing the two approaches.


System.Web.Optimization

In previous versions of ASP.NET, optimizations such as bundling and minification are done using the System.Web.Optimization package. With this approach, we configure our bundles in C#:

public class BundleConfig
{
    // For more information on bundling, visit
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new ScriptBundle("~/bundles/js").Include(
            "~/Scripts/app/*.js")); // example source files for the bundle
    }
}


Those bundles are referenced in our Razor views as follows:
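A sketch of how that reference typically looks in a layout page, using the bundle name from the example above:

```cshtml
@Scripts.Render("~/bundles/js")
```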


When running in Release mode, the server combines the files in a bundle into a single minified file and renders a single <link> or <script> tag for the bundle. When running in Debug mode, the server renders individual <link> or <script> tags for each file in the bundle. The file optimization step is done at runtime. A version hash is added to the bundle URL to support aggressively caching the asset on the client side.

Task Runners

When using a task runner like Gulp (or Grunt), optimizations like bundling and minification are done at build/compile time. The bundles and any steps related to bundling are configured in a JavaScript file that is executed by the task runner. Here is a simple example of a gulp file that does the same optimizations as the example above:

// include plug-ins
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

var config = {
    //Include all js files but exclude any min.js files
    src: ['app/**/*.js', '!app/**/*.min.js']
};

gulp.task('scripts', function () {
    return gulp.src(config.src)
        .pipe(concat('all.min.js'))
        .pipe(uglify())
        .pipe(gulp.dest('app/'));
});

//Set a default task
gulp.task('default', ['scripts'], function () { });

Note that this is a simplified example. For a more complete example see my original post.

By running the scripts task, all the JS files in my app folder are combined and minified into a single all.min.js file. In ASP.NET 5, we can decide, based on the current environment, whether to include references to the individual files or to the single combined and minified file.

<environment names="Development">
    <script asp-src-include="~/app/**/*.js" asp-src-exclude="~/app/**/*.min.js"></script>
</environment>
<environment names="Staging,Production">
    <script src="~/app/all.min.js" asp-append-version="true"></script>
</environment>
In this case, the files are combined and minified at build/compile time. The minified version of the file is published to the server. At runtime, Razor tag helpers are responsible for deciding which script tags to include. The tag helpers also append the file version hash to support aggressively caching the files on the client side. As was covered in my original post, we can use the Task Runner Explorer to link the Scripts task to the build event in Visual Studio. Using a watch, I can automatically run the Scripts task anytime a JS file changes.
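That watch can be sketched as an extra task in the same gulp file, reusing the config object from the example above:

```javascript
// Re-run the scripts task whenever a JS file in the app folder changes
gulp.task('watch', function () {
    gulp.watch(config.src, ['scripts']);
});
```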


Why I prefer the Task Runner approach

Now let’s get into the details of why I prefer using a task runner like Gulp over the runtime optimization approach taken by System.Web.Optimization.

Runtime vs. Compile-Time Optimizations

System.Web.Optimization takes the approach of bundling/minifying your assets at runtime. The first time a request comes in for a bundle, it will combine and minify all the files in that bundle and cache the results for the next request. While the cost of this is minimal, it has always seemed to me that it is strange to use server resources for this task. By the time we publish our application to the server, we already know what the code is. To me it makes more sense to do this step on the build server or on the developer machine BEFORE publishing the application. Task runners like Gulp take the approach of doing these asset optimization steps at compile/build time.

This becomes a bigger advantage when we start doing more than just bundling and minification. My typical scripts task takes all the TypeScript files from my app, compiles them to JavaScript, combines the output into a single minified JS file, and writes out source maps. Gulp allows me to easily automate all of this with a single task. Compiling TypeScript and generating source maps is just not possible with System.Web.Optimization, and I don’t think anyone would argue that doing all those steps on the web server at runtime would make sense anyway. Yes, some of these steps could be handled using Visual Studio plugins…more on that later.

For the vast majority of applications, I think the task runner approach is more logical. You are shipping known, pre-optimized assets to your production server. Don’t make your server do more than it needs to.

Note that there are some specific use cases such as CMS tools that require runtime optimizations because the assets might not be known at compile time.

Extensibility and Consistency

There is no question that the runtime bundling in MVC 5 provides a better ‘out-of-the-box’ experience. When you create a new project, bundling and minification are set up and working. It is easy to add new files. People generally understand the concepts and don’t need to spend a lot of time fiddling with the bundle configuration. As I alluded to in the TypeScript example, where System.Web.Optimization starts to fall apart for me is when you want to take things one step further.

Let’s consider another example. What if I want to start using a CSS pre-processor like LESS or SASS? There is no built-in way to tie CSS pre-processors into System.Web.Optimization. Now you need to start looking for VS plugins to do this task. If we’re lucky, these work well. In my experience, they have some problems, are often out-of-date or are simply not available. One big problem with using VS plugins is that I can’t make use of them on the build server, which means I now need to check my generated CSS files into source control. I much prefer to check in only my LESS or SASS source files and have the build server generate the CSS files. (Checking in generated files pollutes the commit logs and makes code reviews a lot less effective.)

Another problem is trying to make sure that everyone on the team has the right plugins installed. There are ways to enforce this, but it is not very easy.

With Gulp, all we need to do is include a gulp plugin (e.g. gulp-less) and add the LESS compilation step to our stylesheet task. It is a one- or two-line change to the gulp file. The node package manager is able to ensure that everyone on the team has the right gulp plugins installed. Since everything is command-line based, it is also very easy to call the same tasks from the build server.
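A sketch of what that change might look like, assuming the gulp-less plugin and illustrative file paths:

```javascript
var less = require('gulp-less');

gulp.task('styles', function () {
    return gulp.src('styles/site.less')  // check in only the LESS source
        .pipe(less())                    // compile LESS to CSS at build time
        .pipe(gulp.dest('css'));
});
```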

So the big advantages that I see are extensibility and consistency. System.Web.Optimization is very good at doing a couple of things, but it is also limited to doing those couple of things. When we want to take things a little further, we start to run into pain points with ensuring a consistent development environment. Gulp, on the other hand, is extremely flexible and extensible in a way that makes it easy to provide a consistent environment and consistent builds across your entire team.

Wrapping it up

In small and simple MVC 5 projects, I still use System.Web.Optimization for its simplicity. For more complex projects where I want to use some newer web dev tooling, I use Gulp. Gulp gives me a lot more options and the opportunity to design a better workflow for my team.

The File-New Project experience in the current release candidate of ASP.NET Core MVC uses Gulp. I’m excited about this, but the default gulp file is in need of some work. It is difficult to extend and contains some errors that will cause problems for those who are new to Gulp. Of course, this is a pre-release version and the team is still working on this. I am hopeful that the experience will improve before the official release of ASP.NET Core MVC. In the meantime, don’t be afraid to learn about Gulp and all the amazing things it can do. I find the Gulp Recipes to be a very valuable learning tool.

Cancelling Long Running Queries in ASP.NET MVC and Web API

A lot has been written about the importance of using async controller actions and async queries in MVC and Web API when dealing with long running queries. If done properly, async can improve the throughput of your ASP.NET applications. While async won’t solve the problem of your database being a bottleneck, it can help ensure that your web server is still able to process other smaller/shorter requests, and it will especially help ensure that requests that do not require access to that database are processed in a timely fashion.

There is one very important aspect that is often missed in the tutorials that talk about async and that is cancellation.

NOTE: For the purpose of this article, I am referring to long running queries in terms of read queries (those that are returning data but not modifying data). Cancelling queries that have modified data might not be a good choice for your application. Do you really want to cancel a Save because the user navigated to another page in your application? Maybe you do but probably not. Aside from the data issue, this also won’t likely help performance because the database server will need to rollback that transaction which could be a costly operation.

What is cancellation and how does it work?

Cancellation is a way to signal to an async task that it should stop doing whatever it happens to be doing. In .NET, this is done using a CancellationToken. An instance of a cancellation token is passed to the async task and the async task monitors the token to see if a cancellation has been requested. If a cancellation is requested, it should stop doing what it is doing. Most Async methods in .NET will have an overload that accepts a cancellation token.

Here is a simple console application that illustrates how cancellation tokens work. In this example, a cancellation token is created (via a CancellationTokenSource) and passed along to an async task that does some work. When the user presses the ‘z’ key, the Cancel method is called on the CancellationTokenSource. This sets the IsCancellationRequested property to true for the token which will cause the async task to stop doing the work.

static void Main(string[] args)
{
    Console.WriteLine("Silly counter: Press Z to Stop");
    var tokenSource = new CancellationTokenSource();
    var cancellationToken = tokenSource.Token;
    Task.Run(() =>
    {
        long n = 0;
        while (!cancellationToken.IsCancellationRequested)
        {
            n = n + 1;
        }
    }, cancellationToken);

    while (true)
    {
        if (Console.Read() == 'z')
        {
            tokenSource.Cancel();
            break;
        }
    }
}

Why is this so important?

Think about this in the context of a long-running query in a web application. Somewhere on the other side of that long-running query is a frustrated user. WHY IS THIS TAKING SO LONG??? What is that user likely to do? Will they sit there diligently waiting for the query to finish running? The longer the query runs, the less likely they are to wait. More likely, they will give up and navigate to another page, or hit the browser refresh button in hopes that it will load faster next time.

So, what happens now? You have a query running on the server and the user has moved on to another page (or another site altogether). Unfortunately, both your database server and web server will continue processing the request, wasting resources executing a query that likely no one cares about anymore.

Cancellation and the Browser

When I was first looking into this, I assumed there was no way for my MVC controller to be notified when the user has moved on to another page. It turns out I was wrong. If a user navigates to a new page or refreshes the browser, any HTTP requests that are in progress will be cancelled.

Here is an example where the user visited the MyReallySlowReport page. After waiting for nearly 5 seconds, they gave up and went to the Contact page (I assume to look for a phone number to call and complain about how slow that report is). You can see the status of the original page request is canceled.

Both MVC and Web API will recognize the cancelled request and signal a cancellation to the async action method for that request. All you need to do for this to work is add a CancellationToken parameter to your controller action method and pass that token on to whatever async task is doing the work. In this case, Entity Framework:

public async Task<ActionResult> MyReallySlowReport(CancellationToken cancellationToken)
{
    List<ReportItem> items;
    using (ApplicationDbContext context = new ApplicationDbContext())
    {
        items = await context.ReportItems.ToListAsync(cancellationToken);
    }
    return View(items);
}

If your action method already has parameters, just add the CancellationToken parameter to the end of your parameter list. That’s it. Now if the browser cancels the HTTP Request, MVC will set the CancellationToken to cancelled and Entity Framework will cancel the SQL query that was executing as part of that request.

That was easy! One simple change to make sure there are fewer server resources wasted processing canceled requests.

Cancelling Additional Work

Okay, so it’s easy to cancel the SQL query, but what if we were doing some long-running task ourselves in the controller method? In that case, all we need to do is check the IsCancellationRequested property of the cancellation token and stop processing if it is set to true.

public async Task<ActionResult> MyReallySlowReport(CancellationToken cancellationToken)
{
    List<ReportItem> items;
    using (ApplicationDbContext context = new ApplicationDbContext())
    {
        items = await context.ReportItems.ToListAsync(cancellationToken);
    }

    foreach (var item in items)
    {
        if (cancellationToken.IsCancellationRequested)
        {
            break;
        }

        //Do some fairly slow operation
    }
    return View(items);
}

By exiting the foreach loop when a cancellation is requested, we can avoid using server CPU resources for HTTP requests that no longer matter.

SPAs and Ajax Requests

If you are working in a Single Page Application, your app will likely spawn a number of ajax requests to get data from the server. In this case, the browser will not automatically cancel requests when the user navigates to another ‘page’ in your app. That’s because you are handling page navigation yourself in JavaScript rather than using traditional web page navigation. In a single page app, you will need to cancel HTTP requests yourself. This is usually fairly easy to do. For example, in jQuery, all you need to do is call the abort() method on the XMLHttpRequest instance:

var xhr = $.get("/api/myslowreport", function(data){
    //show the data
});

//If the user navigates away from this page
xhr.abort();

You will of course need to tie in to the page/component lifecycle of whatever framework you are using. This varies a lot from framework to framework so I won’t specifically cover it here.

UPDATE: Making this work in MVC 5

A huge thank you to Muhammad Rehan Saeed for pointing out that in MVC 5, the cancellation token is not actually being signaled when the browser cancels the request. Everything works as expected in both ASP.NET Core MVC and in Web API, but for some reason, MVC 5 only supports cancellation if you use the AsyncTimeout attribute. I was able to reproduce this and even cloned the MVC 5 / Web API and ASP.NET Core MVC repositories to confirm that the implementations are in fact different.

I did find a workaround that should achieve the desired results. It involves grabbing the ClientDisconnectedToken from the Response property and creating a linked token source.

public async Task<ActionResult> MyReallySlowReport(CancellationToken cancellationToken)
{
    CancellationToken disconnectedToken = Response.ClientDisconnectedToken;
    var source = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken, disconnectedToken);

    List<ReportItem> items;
    using (ApplicationDbContext context = new ApplicationDbContext())
    {
        items = await context.ReportItems.ToListAsync(source.Token);
    }
    return View(items);
}

Using this workaround, the linked token will be signaled when the browser cancels the request, and also when a timeout expires if you choose to use the AsyncTimeout attribute.

Wrapping it up

If you are using Web API or ASP.NET Core MVC in combination with any modern data access layer, it should be extremely easy to pass along your cancellation token and cancel long-running queries when a request from the client is aborted. While this does not work out of the box in MVC 5, I have provided a workaround that should help. This is a simple approach that can help avoid situations where a small number of users accidentally overload your web server and database server.

ASP.NET Core Image Tag Helper

ASP.NET 5 Beta 5 shipped yesterday and it includes a new tag helper: the Image tag helper. While this is a very simple tag helper, it has special meaning for me. Implementing this tag helper was my first pull request submitted to the aspnet/mvc repo.

So, what does this tag helper do? If you add the asp-append-version="true" attribute to an image tag, the tag helper will automatically append a version to the image file path. This allows you to aggressively cache an image without worrying about updated images not being sent to the client.

Using it is simple. Just add asp-append-version="true" to a standard img tag:

<img src="~/images/logo.png" 
alt="company logo"
asp-append-version="true" />

which will generate something like this:

<img src="/images/logo.png?v=W2F5D366_nQ2fQqUk3URdgWy2ZekXjHzHJaY5yaiOOk" 
alt="company logo"/>

The value of the v parameter is calculated based on the contents of the image file. If the contents of the image change, the value of the parameter will change. This forces the browser to download the new version of the file, even if the old version was cached locally. This technique is often called cache busting.

As I said, this is a very simple tag helper, but I find it to be very useful. I have been caught more than once with updated images not showing up for clients that had older versions cached locally. One recent example was when I was iterating quickly through logo designs for a site that was live in production. I could have changed the logo filename every time I updated the logo, but this would have been tedious. Cache busting with the image tag helper allows me to update the image contents without having to rename the file or worry about manually changing the references to that image.

August 29th: Updated code samples to use new tag helper attribute name from Beta 6