Integration Testing with Entity Framework Core and SQL Server

Entity Framework Core makes it easy to write tests that execute against an in-memory store. Using an in-memory store is convenient since we don’t need to worry about setting up a relational database. It also ensures our unit tests run quickly so we aren’t left waiting hours for a large test suite to complete.

While Entity Framework Core’s in-memory store works great for many scenarios, there are some situations where it might be better to run our tests against a real relational database. Some examples include loading entities using raw SQL or using SQL Server specific features that cannot be tested with the in-memory provider. In these cases, the tests are considered integration tests since we are no longer testing our Entity Framework context in isolation; we are testing how it will work in the real world when connected to SQL Server.

The Sample Project

For this example, I used the following simple model and DbContext classes.

public class Monster
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsScary { get; set; }
    public string Colour { get; set; }
}

public class MonsterContext : DbContext
{
    public MonsterContext(DbContextOptions<MonsterContext> options)
        : base(options)
    {
    }

    public DbSet<Monster> Monsters { get; set; }
}

In an ASP.NET Core application, the context is configured to use SQL Server in the Startup.ConfigureServices method.

services.AddDbContext<MonsterContext>(options =>
{
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"));
});

The DefaultConnection is defined in appsettings.json which is loaded at startup.

{
  "ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=monsters_db;Trusted_Connection=True;MultipleActiveResultSets=true"
  }
}

The MonsterContext is also configured to use Migrations which were initialized using the dotnet ef migrations add InitialCreate command. For more on Entity Framework Migrations, see the official tutorial.

As a simple example, I created a query class that loads scary monsters from the database using a SQL query instead of querying the Monsters DbSet directly.

public class ScaryMonstersQuery
{
    private MonsterContext _context;

    public ScaryMonstersQuery(MonsterContext context)
    {
        _context = context;
    }

    public IEnumerable<Monster> Execute()
    {
        return _context.Monsters
            .FromSql("SELECT Id, Name, IsScary, Colour FROM Monsters WHERE IsScary = {0}", true);
    }
}

To be clear, a better way to write this query is _context.Monsters.Where(m => m.IsScary == true), but I wanted a simple example. I also wanted to use FromSql because it is inherently difficult to unit test. The FromSql method doesn’t work with the in-memory provider since it requires a relational database. It is also an extension method, which means we can’t simply mock the context using a tool like Moq. We could of course create a wrapper service that calls the FromSql extension method and mock that service, but this only shifts the problem. The wrapper approach would let us verify that FromSql is called in the way we expect, but it could not verify that the query will actually run successfully and return the expected results.
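For illustration, here is a minimal sketch of that wrapper approach (the IRawSqlQueries interface and its members are hypothetical names, not part of the sample project):

public interface IRawSqlQueries
{
    IQueryable<Monster> MonstersFromSql(string sql, params object[] parameters);
}

public class RawSqlQueries : IRawSqlQueries
{
    private readonly MonsterContext _context;

    public RawSqlQueries(MonsterContext context)
    {
        _context = context;
    }

    public IQueryable<Monster> MonstersFromSql(string sql, params object[] parameters)
    {
        // Thin pass-through to the FromSql extension method
        return _context.Monsters.FromSql(sql, parameters);
    }
}

A unit test could mock IRawSqlQueries to verify that the SQL string is passed as expected, but only an integration test against a real database can prove the SQL is valid and returns the right rows.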

An integration test is a good option here since it will ensure that the query runs exactly as expected against a real SQL Server database.

The Test

I used xunit as the test framework in this example. The constructor, which is the setup method for any tests in the class, configures an instance of the MonsterContext connecting to a localdb instance using a database name containing a random guid. Using a guid in the database name ensures the database is unique to this test. Uniqueness is important when running tests in parallel because it ensures these tests won’t impact any other tests that are currently running. After creating the context, a call to _context.Database.Migrate() creates a new database and applies any Entity Framework migrations that are defined for the MonsterContext.

public class SimpleIntegrationTest : IDisposable
{
    MonsterContext _context;

    public SimpleIntegrationTest()
    {
        var serviceProvider = new ServiceCollection()
            .AddEntityFrameworkSqlServer()
            .BuildServiceProvider();

        var builder = new DbContextOptionsBuilder<MonsterContext>();

        builder.UseSqlServer($"Server=(localdb)\\mssqllocaldb;Database=monsters_db_{Guid.NewGuid()};Trusted_Connection=True;MultipleActiveResultSets=true")
            .UseInternalServiceProvider(serviceProvider);

        _context = new MonsterContext(builder.Options);
        _context.Database.Migrate();
    }

    [Fact]
    public void QueryMonstersFromSqlTest()
    {
        // Add some monsters before querying
        _context.Monsters.Add(new Monster { Name = "Dave", Colour = "Orange", IsScary = false });
        _context.Monsters.Add(new Monster { Name = "Simon", Colour = "Blue", IsScary = false });
        _context.Monsters.Add(new Monster { Name = "James", Colour = "Green", IsScary = false });
        _context.Monsters.Add(new Monster { Name = "Imposter Monster", Colour = "Red", IsScary = true });
        _context.SaveChanges();

        // Execute the query
        ScaryMonstersQuery query = new ScaryMonstersQuery(_context);
        var scaryMonsters = query.Execute();

        // Verify the results
        Assert.Equal(1, scaryMonsters.Count());
        var scaryMonster = scaryMonsters.First();
        Assert.Equal("Imposter Monster", scaryMonster.Name);
        Assert.Equal("Red", scaryMonster.Colour);
        Assert.True(scaryMonster.IsScary);
    }

    public void Dispose()
    {
        _context.Database.EnsureDeleted();
    }
}

The actual test itself happens in the QueryMonstersFromSqlTest method. I start by adding some sample data to the database. Next, I create and execute the ScaryMonstersQuery using the context that was created in the setup method. Finally, I verify the results, ensuring that the expected data is returned from the query.

The last step is the Dispose method which in xunit is the teardown for any tests in this class. We don’t want all these test databases hanging around forever so this is the place to delete the database that was created in the setup method. The database is deleted by calling _context.Database.EnsureDeleted().

Use with Caution

These tests are slow! The very simple example above takes 13 seconds to run on my laptop. My advice here is to use this sparingly and only when it really adds value for your project. If you end up with a large number of these integration tests, consider splitting them into a separate test suite and potentially running them on a different schedule than your unit test suite (e.g. nightly instead of on every commit).
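One lightweight way to do that split with xunit is to tag integration tests with a trait and then include or exclude that trait in each test run (the category name below is just an example):

// Hedged example: marking a test as an integration test with an xunit trait
[Fact]
[Trait("Category", "Integration")]
public void QueryMonstersFromSqlTest()
{
    // ...test body as shown above
}

Most test runners can filter on that trait, so the integration suite can run on its own schedule.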

The Code

You can browse or download the source on GitHub.

Creating a New View Engine in ASP.NET Core

Earlier in November, the ASP.NET Monsters had the opportunity to take part in the ASP.NET Core hackathon at the Microsoft MVP Summit. In past years, we have used the hackathon as an opportunity to spend some time working on GenFu. This year, we wanted to try something a little different.

The Crazy Idea

A few months ago, we had Taylor Mullen on The Monsters Weekly to chat about Razor in ASP.NET Core. At some point during that interview, it was pointed out that MVC is designed in a way that a new view engine could easily be plugged into the framework. It was also noted that implementing a view engine is a really big job. This got us thinking…what if we could find an existing view engine of some sort? How easy would it be to actually plug a new view engine into MVC?

And so, that was our goal for the hackathon. Find a way to replace Razor with an alternate view engine in a single day of hacking.

Finding a Replacement

We wanted to pick something that in no way resembled Razor. Simon suggested Pug (previously known as Jade), a popular view template engine used in Express. In terms of syntax, Pug is about as different from Razor as it could possibly be. Pug uses whitespace to indicate nesting of elements and does away with angle brackets altogether. For example, the following template:

div
    a(href='google.com') Google

would generate this HTML:

<div>
    <a href="google.com">Google</a>
</div>

Calling Pug from ASP.NET Core

The first major hurdle for us was figuring out a way to compile Pug templates from within an ASP.NET Core application. Pug is a JavaScript-based template engine, and we only had a single day to pull this off, so a full port of the engine to C# was not feasible.

Our first thought was to use Edge.js to call Pug’s JavaScript compile function. Some quick prototyping showed us that this worked, but Edge.js doesn’t have support for .NET Core. This led us to explore the JavaScriptServices packages created by the ASP.NET Core team, specifically the Node Services package which allows us to easily call out to a JavaScript module from within an ASP.NET Core application.

To our surprise, this not only worked, it was also easy! We created a very simple file called pugcompile.js.

var pug = require('pug');

module.exports = function (callback, viewPath, model) {
    var pugCompiledFunction = pug.compileFile(viewPath);
    callback(null, pugCompiledFunction(model));
};

Calling this JavaScript from C# is easy thanks to the Node Services package. Assuming model is the view model we want to bind to the template and mytemplate.pug is the name of the file containing the pug template:

var html = await _nodeServices.InvokeAsync<string>("pugcompile", "mytemplate.pug", model);

Now that we had proven this was possible, it was time to integrate this with MVC by creating a new MVC View Engine.

Creating the Pugzor View Engine

We decided to call our view engine Pugzor which is a combination of Pug and Razor. Of course, this doesn’t really make much sense since our view engine really has nothing to do with Razor but naming is hard and we thought we were being funny.

Keeping in mind our goal of implementing a view engine in a single day, we wanted to do this in the simplest way possible. After spending some time digging through the source code for MVC, we determined that we needed to implement the IViewEngine interface as well as a custom IView.

The IViewEngine is responsible for locating a view based on an ActionContext and a view name. When a controller returns a View, it is the IViewEngine’s FindView method that is responsible for finding a view based on some conventions. The FindView method returns a ViewEngineResult, which is a simple class containing a boolean Success property indicating whether or not a view was found and an IView View property containing the view if it was found.

/// <summary>
/// Defines the contract for a view engine.
/// </summary>
public interface IViewEngine
{
/// <summary>
/// Finds the view with the given <paramref name="viewName"/> using view locations and information from the
/// <paramref name="context"/>.
/// </summary>
/// <param name="context">The <see cref="ActionContext"/>.</param>
/// <param name="viewName">The name of the view.</param>
/// <param name="isMainPage">Determines if the page being found is the main page for an action.</param>
/// <returns>The <see cref="ViewEngineResult"/> of locating the view.</returns>
ViewEngineResult FindView(ActionContext context, string viewName, bool isMainPage);

/// <summary>
/// Gets the view with the given <paramref name="viewPath"/>, relative to <paramref name="executingFilePath"/>
/// unless <paramref name="viewPath"/> is already absolute.
/// </summary>
/// <param name="executingFilePath">The absolute path to the currently-executing view, if any.</param>
/// <param name="viewPath">The path to the view.</param>
/// <param name="isMainPage">Determines if the page being found is the main page for an action.</param>
/// <returns>The <see cref="ViewEngineResult"/> of locating the view.</returns>
ViewEngineResult GetView(string executingFilePath, string viewPath, bool isMainPage);
}

We decided to use the same view location conventions as Razor. That is, a view is located in Views/{ControllerName}/{ActionName}.pug.
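In other words, the engine needs a list of view location format strings much like Razor’s. Here is a hedged sketch of what that might look like (the PugzorViewEngineOptions type and property names are illustrative, not necessarily what the real project uses):

public class PugzorViewEngineOptions
{
    // {0} is the view (action) name, {1} is the controller name
    public IList<string> ViewLocationFormats { get; } = new List<string>
    {
        "Views/{1}/{0}.pug",
        "Views/Shared/{0}.pug"
    };
}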

Here is a simplified version of the FindView method for the PugzorViewEngine:

public ViewEngineResult FindView(
    ActionContext actionContext,
    string viewName,
    bool isMainPage)
{
    var controllerName = GetNormalizedRouteValue(actionContext, ControllerKey);

    var checkedLocations = new List<string>();
    foreach (var location in _options.ViewLocationFormats)
    {
        var view = string.Format(location, viewName, controllerName);
        if (File.Exists(view))
            return ViewEngineResult.Found("Default", new PugzorView(view, _nodeServices));
        checkedLocations.Add(view);
    }
    return ViewEngineResult.NotFound(viewName, checkedLocations);
}

You can view the complete implementation on GitHub.

Next, we created a class called PugzorView which implements IView. The PugzorView takes in a path to a pug template and an instance of INodeServices. The MVC framework calls the IView’s RenderAsync method when it wants the view to be rendered. In this method, we call out to pugcompile and then write the resulting HTML to the view context.

public class PugzorView : IView
{
    private string _path;
    private INodeServices _nodeServices;

    public PugzorView(string path, INodeServices nodeServices)
    {
        _path = path;
        _nodeServices = nodeServices;
    }

    public string Path
    {
        get
        {
            return _path;
        }
    }

    public async Task RenderAsync(ViewContext context)
    {
        var result = await _nodeServices.InvokeAsync<string>("./pugcompile", Path, context.ViewData.Model);
        context.Writer.Write(result);
    }
}

The only thing left was to configure MVC to use our new view engine. At first, we thought we could simply add the new view engine using the AddViewOptions extension method when adding MVC to the service collection.

services.AddMvc()
    .AddViewOptions(options =>
    {
        options.ViewEngines.Add(new PugzorViewEngine(nodeServices));
    });

This is where we got stuck. We can’t add a concrete instance of the PugzorViewEngine to the ViewEngines collection in the Startup.ConfigureServices method because the view engine needs to take part in dependency injection. The PugzorViewEngine has a dependency on INodeServices and we want that to be injected by ASP.NET Core’s dependency injection framework. Luckily, the all-knowing Razor master Taylor Mullen was on hand to show us the right way to register our view engine.

The recommended approach for adding a view engine to MVC is to create a custom setup class that implements IConfigureOptions<MvcViewOptions>. The setup class takes in an instance of our IPugzorViewEngine via constructor injection. In the Configure method, that view engine is added to the list of view engines in the MvcViewOptions.

public class PugzorMvcViewOptionsSetup : IConfigureOptions<MvcViewOptions>
{
    private readonly IPugzorViewEngine _pugzorViewEngine;

    /// <summary>
    /// Initializes a new instance of <see cref="PugzorMvcViewOptionsSetup"/>.
    /// </summary>
    /// <param name="pugzorViewEngine">The <see cref="IPugzorViewEngine"/>.</param>
    public PugzorMvcViewOptionsSetup(IPugzorViewEngine pugzorViewEngine)
    {
        if (pugzorViewEngine == null)
        {
            throw new ArgumentNullException(nameof(pugzorViewEngine));
        }

        _pugzorViewEngine = pugzorViewEngine;
    }

    /// <summary>
    /// Configures <paramref name="options"/> to use <see cref="PugzorViewEngine"/>.
    /// </summary>
    /// <param name="options">The <see cref="MvcViewOptions"/> to configure.</param>
    public void Configure(MvcViewOptions options)
    {
        if (options == null)
        {
            throw new ArgumentNullException(nameof(options));
        }

        options.ViewEngines.Add(_pugzorViewEngine);
    }
}

Now all we need to do is register the setup class and view engine in the Startup.ConfigureServices method.

services.AddTransient<IConfigureOptions<MvcViewOptions>, PugzorMvcViewOptionsSetup>();
services.AddSingleton<IPugzorViewEngine, PugzorViewEngine>();

Like magic, we now have a working view engine. Here’s a simple example:

Controllers/HomeController.cs

public IActionResult Index()
{
    ViewData.Add("Title", "Welcome to Pugzor!");
    ModelState.AddModelError("model", "An error has occurred");
    return View(new { People = A.ListOf<Person>() });
}

Views/Home/Index.pug

block body
	h2 Hello
	p #{ViewData.title} 
	table(class='table')
		thead
			tr
				th Name
				th Title
				th Age
		tbody
			each val in people
				tr
					td= val.firstName
					td= val.title
					td= val.age

Result

<h2>Hello</h2>
<p>Welcome to Pugzor! </p>
<table class="table">
<thead>
<tr>
<th>Name</th>
<th>Title</th>
<th>Age</th>
</tr>
</thead>
<tbody>
<tr><td>Laura</td><td>Mrs.</td><td>38</td></tr>
<tr><td>Gabriel</td><td>Mr. </td><td>62</td></tr>
<tr><td>Judi</td><td>Princess</td><td>44</td></tr>
<tr><td>Isaiah</td><td>Air Marshall</td><td>39</td></tr>
<tr><td>Amber</td><td>Miss.</td><td>69</td></tr>
<tr><td>Jeremy</td><td>Master</td><td>92</td></tr>
<tr><td>Makayla</td><td>Dr.</td><td>15</td></tr>
<tr><td>Sean</td><td>Mr. </td><td>5</td></tr>
<tr><td>Lillian</td><td>Mr. </td><td>3</td></tr>
<tr><td>Brandon</td><td>Doctor</td><td>88</td></tr>
<tr><td>Joel</td><td>Miss.</td><td>12</td></tr>
<tr><td>Madeline</td><td>General</td><td>67</td></tr>
<tr><td>Allison</td><td>Mr. </td><td>21</td></tr>
<tr><td>Brooke</td><td>Dr.</td><td>27</td></tr>
<tr><td>Jonathan</td><td>Air Marshall</td><td>63</td></tr>
<tr><td>Jack</td><td>Mrs.</td><td>7</td></tr>
<tr><td>Tristan</td><td>Doctor</td><td>46</td></tr>
<tr><td>Kandra</td><td>Doctor</td><td>47</td></tr>
<tr><td>Timothy</td><td>Ms.</td><td>83</td></tr>
<tr><td>Milissa</td><td>Dr.</td><td>68</td></tr>
<tr><td>Lekisha</td><td>Mrs.</td><td>40</td></tr>
<tr><td>Connor</td><td>Dr.</td><td>73</td></tr>
<tr><td>Danielle</td><td>Princess</td><td>27</td></tr>
<tr><td>Michelle</td><td>Miss.</td><td>22</td></tr>
<tr><td>Chloe</td><td>Princess</td><td>85</td></tr>
</tbody>
</table>

All the features of pug work as expected, including template inheritance and inline JavaScript code. Take a look at our test website for some examples.

Packaging it all up

So we reached our goal of creating an alternate view engine for MVC in a single day. We had some time left so we thought we would try to take this one step further and create a NuGet package. There were some challenges here, specifically related to including the required node modules in the NuGet package. Simon is planning to write a separate blog post on that topic.

You can give it a try yourself. Add a reference to the pugzor.core NuGet package then call .AddPugzor() after .AddMvc() in the Startup.ConfigureServices method.

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc().AddPugzor();
}

Razor still works as the default but if no Razor view is found, the MVC framework will try using the PugzorViewEngine. If a matching pug template is found, that template will be rendered.
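Under the hood, an extension method like AddPugzor can be little more than the two registrations shown earlier. Here is a minimal sketch, assuming the types described in this post (the actual package may differ in the details):

public static class PugzorServiceCollectionExtensions
{
    public static IMvcBuilder AddPugzor(this IMvcBuilder builder)
    {
        // Register the options setup and the view engine itself
        builder.Services.AddTransient<IConfigureOptions<MvcViewOptions>, PugzorMvcViewOptionsSetup>();
        builder.Services.AddSingleton<IPugzorViewEngine, PugzorViewEngine>();
        return builder;
    }
}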

Pugzor

Wrapping it up

We had a blast working on this project. While this started out as a silly exercise, we sort of ended up with something that could be useful. We were really surprised at how easy it was to create a new view engine for MVC. We don’t expect that Pugzor will be wildly popular but since it works we thought we would put it out there and see what people think.

We have some open issues and some ideas for how to extend the PugzorViewEngine. Let us know what you think or jump in and contribute some code. We accept pull requests :-)

Loading View Components from a Class Library in ASP.NET Core MVC

In a previous post we explored the new View Component feature of ASP.NET Core MVC. In today’s post we take a look at how view components can be implemented in a separate class library and shared across multiple web applications.

Creating a class library

First, add a new .NET Core class library to your solution.

Add class library

This is the class library where we will add our view components but before we can do that we have to add a reference to the MVC and Razor bits.

"dependencies": {
"NETStandard.Library": "1.6.0",
"Microsoft.AspNetCore.Mvc": "1.0.0",
"Microsoft.AspNetCore.Razor.Tools": {
"version": "1.0.0-preview2-final",
"type": "build"
}
},
"tools": {
"Microsoft.AspNetCore.Razor.Tools": "1.0.0-preview2-final"
}

Now we can add a view component class to the project. I created a simple example view component called SimpleViewComponent.

[ViewComponent(Name = "ViewComponentLibrary.Simple")]
public class SimpleViewComponent : ViewComponent
{
    public IViewComponentResult Invoke(int number)
    {
        return View(number + 1);
    }
}

By convention, MVC would have assigned the name Simple to this view component. This view component is implemented in a class library with the intention of using it across multiple web apps which opens up the possibility of naming conflicts with other view components. To avoid naming conflicts, I overrode the name using the [ViewComponent] attribute and prefixed the name with the name of my class library.

Next, I added a Default.cshtml view to the ViewComponentLibrary in the Views\Shared\Components\Simple folder.

@model Int32

<h1>
Hello from an external View Component!
</h1>
<h3>Your number is @Model</h3>

For this view to be recognized by the web application, we need to include the cshtml files as embedded resources in the class library. Currently, this is done by adding the following setting to the project.json file.

"buildOptions": {
"embed": "Views/**/*.cshtml"
}

Referencing external view components

The first step in using the external view components in our web application project is to add a reference to the class library. Once the reference is added, we need to tell the Razor view engine that views are stored as resources in the external class library. We can do this by adding some additional configuration code to the ConfigureServices method in Startup.cs. The additional code creates a new EmbeddedFileProvider for the class library, then adds that file provider to the RazorViewEngineOptions.

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddApplicationInsightsTelemetry(Configuration);

    services.AddMvc();

    // Get a reference to the assembly that contains the view components
    var assembly = typeof(ViewComponentLibrary.ViewComponents.SimpleViewComponent).GetTypeInfo().Assembly;

    // Create an EmbeddedFileProvider for that assembly
    var embeddedFileProvider = new EmbeddedFileProvider(
        assembly,
        "ViewComponentLibrary"
    );

    // Add the file provider to the Razor view engine
    services.Configure<RazorViewEngineOptions>(options =>
    {
        options.FileProviders.Add(embeddedFileProvider);
    });
}

Now everything is wired up and we can invoke the view component just like we would for any other view component in our ASP.NET Core MVC application.

<div class="row">
@await Component.InvokeAsync("ViewComponentLibrary.Simple", new { number = 5 })
</div>

Wrapping it up

Storing view components in a separate assembly allows them to be shared across multiple projects. It also opens up the possibility of creating a simple plugin architecture for your application. We will explore the plugin idea in more detail in a future post.

You can take a look at the full source code on GitHub.

ASP.NET Core Distributed Cache Tag Helper

The anxiously awaited ASP.NET Core RC2 has finally landed and with it we have a shiny new tag helper to explore.

We previously talked about the Cache Tag Helper and how it allows you to cache the output from any section of a Razor page. While the Cache Tag Helper is powerful and very useful, it is limited in that it uses an instance of IMemoryCache, which stores cache entries in memory in the local process. If the server process restarts for some reason, the contents of the cache will be lost. Also, if your deployment consists of multiple servers, each server would have its own cache, each potentially containing different contents.

Distributed Cache Tag Helper

The cache tag helper left people wanting more. Specifically they wanted to store the cached HTML in a distributed cache like Redis. Instead of complicating the existing Cache Tag Helper, the ASP.NET team enabled this use-case by adding a new Distributed Cache Tag Helper.

Using the Distributed Cache Tag Helper is very similar to using the Cache Tag Helper:

<distributed-cache name="MyCache">
<p>Something that will be cached</p>
@DateTime.Now.ToString()
</distributed-cache>

The name property is required and the value should be unique. It is used as a prefix for the cache key. This differs from the Cache Tag Helper, which uses an automatically generated unique id based on the location of the cache tag helper in your Razor page. The auto-generated approach cannot be used with a distributed cache because Razor would generate different unique ids for each server. You will need to make sure that you use a unique name each time you use the distributed-cache tag helper. If you unintentionally use the same name in multiple places, you might get the same cached results in both places.

For example, see what happens when 2 distributed-cache tag helpers are given the same name:

<distributed-cache name="MyCache">
<p>Something that will be cached</p>
@DateTime.Now.ToString()
</distributed-cache>

<distributed-cache name="MyCache">
<p>This should be different</p>
@DateTime.Now.ToString()
</distributed-cache>

Accidental Cache Key Collision

If you are really curious about the how cache keys are generated for both tag helpers, take a look at the CacheTagKey Class.

The vary-by-* and expires-* attributes all work the same as the Cache Tag Helper. You can review those in my previous post.

Configuring the Distributed Cache

Unless you specify some additional configuration, the distributed cache tag helper actually uses a local, in-process memory cache. This might seem a little strange but it does help with the developer workflow. As a developer, I don’t need to worry about standing up a distributed cache like Redis just to run the app locally. The intention of course is that a true distributed cache would be used in staging/production environments.

The simplest approach to configuring the distributed cache tag helper is to configure an IDistributedCache service in the Startup class. ASP.NET Core ships with 2 distributed cache implementations out of the box: SQL Server and Redis.

As a simple test, let’s try specifying a SqlServerCache in the Startup.ConfigureServices method:

services.AddSingleton<IDistributedCache>(serviceProvider =>
    new SqlServerCache(new SqlServerCacheOptions()
    {
        ConnectionString = @"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=DistributedCacheTest;Integrated Security=True;",
        SchemaName = "dbo",
        TableName = "MyAppCache"
    }));

Of course, the ConnectionString should be stored in a configuration file, but for demonstration purposes I have in-lined it here.
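For example, a version of the same registration that reads the value from configuration instead (the "DistributedCache" connection string name is just an example):

services.AddSingleton<IDistributedCache>(serviceProvider =>
    new SqlServerCache(new SqlServerCacheOptions()
    {
        // Read the connection string from appsettings.json instead of in-lining it
        ConnectionString = Configuration.GetConnectionString("DistributedCache"),
        SchemaName = "dbo",
        TableName = "MyAppCache"
    }));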

You will need to create the database and table manually. Here is a script for creating the table, which I extracted from here:

CREATE TABLE MyAppCache(            
Id nvarchar(449) COLLATE SQL_Latin1_General_CP1_CS_AS NOT NULL,
Value varbinary(MAX) NOT NULL,
ExpiresAtTime datetimeoffset NOT NULL,
SlidingExpirationInSeconds bigint NULL,
AbsoluteExpiration datetimeoffset NULL,
CONSTRAINT pk_Id PRIMARY KEY (Id))

CREATE NONCLUSTERED INDEX Index_ExpiresAtTime ON MyAppCache(ExpiresAtTime)

Now when I visit the page that contains the distributed-cache tag helper, I get the following error:

InvalidOperationException: Either absolute or sliding expiration needs to be provided.

The SQL Server implementation requires us to specify some form of expiry. No problem, let’s just add those attributes to the tag helper:

<distributed-cache name="MyCacheItem1" expires-after="TimeSpan.FromHours(1)">
<p>Something that will be cached</p>
@DateTime.Now.ToString()
</distributed-cache>


<distributed-cache name="MyCacheItem2" expires-sliding="TimeSpan.FromMinutes(30)">
<p>This should be different</p>
@DateTime.Now.ToString()
</distributed-cache>

Now the page renders properly and we can see the contents in SQL Server:

SQL Server Cache Contents

Note that since the key is hashed and the value is stored in binary, the contents of the table in SQL server are not human readable.

For more details on working with a SQL Server or Redis distributed cache, see the official ASP.NET docs.

Even more configuration

In some cases, you might want more control over how values are serialized or even how the distributed cache is used by the tag helper. In those cases, you could implement your own IDistributedCacheTagHelperFormatter and/or IDistributedCacheTagHelperStorage.

In cases where you need complete control, you could implement your own IDistributedCacheTagHelperService.

I suspect that this added level of customization won’t be needed by most people.

Conclusion

The Distributed Cache Tag Helper provides an easy path to caching HTML fragments in a distributed cache. Out of the box, Redis and SQL Server are supported. Over time, I expect that a number of alternative distributed cache implementations will be provided by the community.

Submitting Your First Pull Request

Originally posted to http://blogs.msdn.com/b/cdndevs/archive/2016/01/06/submitting-your-first-pull-request.aspx

Over the last few years, we have seen a big shift in the .NET community towards open source. In addition to a huge number of open source community led projects, we have also seen Microsoft move major portions of the .NET framework over to GitHub.

With all these packages out in the wild, the opportunities to contribute are endless. The process however can be a little daunting for first timers, especially if you are not using git in your day-to-day work. In this post I will guide you through the process of submitting your first pull request. I will show examples from my experience contributing to the Humanitarian Toolbox’s allReady project. As with all things git related, there is more than one way to do everything. This post will outline the workflow I have been using and should serve as a good starting point for most .NET developers who are interested in getting started with open source projects hosted on GitHub.

Installing GitHub for Windows

The first step is to install GitHub for Windows. GitHub’s Windows desktop app is great, but the installer also installs the excellent posh-git command line tools. We will be using a combination of the desktop app and the command line tools.

Forking a Repo

The next step is to fork the repository (repo for short) to which you are hoping to contribute. It is very unlikely that you will have permissions to check in code directly to the actual repo. Those permissions are reserved for project owners. The process instead is to fork the repo. A fork is a copy of the repo that you own and can do whatever you want with. Create a fork by clicking the Fork button on the repo.

Forking a repo

This will create the fork for you. This is where you will be making changes and then submitting a pull request to get your changes merged in to the original repo.

Your forked repo

Notice on my fork’s master branch where it says This branch is even with HTBox:master. The branch HTBox:master is the master branch from the original repo and is the upstream for my master branch. When GitHub tells me my branch is even with master that means no changes have happened to HTBox:master and no changes have happened to my master branch. Both branches are identical at this point.

Cloning your fork to your local machine

Next up, you will want to clone the repo to your local machine. Launch GitHub for Windows and sign in with your GitHub account if you have not already done so. Click on the + icon in the top right, select Clone and select the repo that you just forked. Click the big checkmark button at the bottom and select a location to clone the repo on your local disk.

Cloning your fork

Create a local branch to do your work in

You could do all your work in your master branch, but this might be a problem if you intend to submit more than one pull request to the project. You will have trouble working on your second pull request until after your first pull request has been accepted. Instead it is best practice to create a new branch for each pull request you intend to submit.

As a side note, it is also considered best practice to submit pull requests that solve 1 issue at a time. Don’t fix 10 separate issues and submit a single pull request that contains all those fixes. That makes it difficult for the project owners to review your submission.

We could use GitHub for Windows to create the branch, but we’re going to drop down to the command line here instead. Using the command line to do git operations will give you a better appreciation for what is happening.

To launch the git command line, select your fork in GitHub for Windows, click on the Settings menu in the top right and select Open in Git Shell.

Open Git shell

This will open a posh-git shell. From here, type the command git checkout -b MyNewBranch, where MyNewBranch is a descriptive name for your new branch.

Create new branch

This command will create a new branch with the specified name and switch you to that branch. Notice how posh-git gives you a nice indication of what branch you are currently working on.

Advanced Learning: Learn more about git branching with this interactive tutorial http://pcottle.github.io/learnGitBranching/

Pro tip: posh-git has auto-complete. Typing git ch + tab will autocomplete to git checkout. Press tab multiple times to cycle through available options. This is a great learning tool!

Committing and publishing your changes

The next step is to commit and publish your changes to GitHub. Make your changes just like you normally would (Enjoy…this is the part where you actually get to write code!). When you are done making your changes, you can view a list of your changes by typing the git status command.

git status

To commit your changes, you first need to add them to your current set of changes. To add all your changes, enter the git add -A command. Note that the git add command doesn’t actually do anything other than get those changes ready to commit. Once your changes have been added, you can commit your changes using the git commit -m "Your commit message" command.

git commit

If you wanted to commit only some of the files you changed, you would need to add each of the files individually before doing the commit. This can be a little tedious. In this case, you might want to use the GitHub for Windows app. Simply select the files that you want to include, enter your commit message and click the Commit to YourBranch button. This will do both the add and commit operations as a single operation. The GitHub for Windows app also shows you a diff for each file which makes it a great tool for reviewing your changes.

Review changes

Now your changes have been committed locally, but they have not been published to GitHub yet. To do this, you need to push your branch to a copy on GitHub. You can do this from the command line by using the git push command.

git push

Notice that git detected this branch does not exist on GitHub yet and very kindly tells me the command I need to use to create the upstream branch. Alternatively, you could simply use the Publish button in GitHub for Windows.

Publish from GitHub for Windows

Now the branch containing your changes should show up in your fork on the GitHub website.

Published branch

GitHub says my branch is 1 commit ahead of HTBox:master. That’s what I want to see. I made 1 commit in my branch and no one has made any commits to HTBox:master since I created my fork. That should make my pull request clean and easy to merge. In some cases, HTBox:master will have changed since the time you started working on your branch. We’ll take a look at how to handle that situation later. For now let’s proceed with creating this pull request.

Creating your pull request

The next step is to create a pull request so your code can (hopefully) be merged into the original repo.

To create your pull request, click on the Compare & pull request button that is displayed when viewing your branch on the GitHub website. If for some reason that button is not visible, click the Pull Request link on your branch.

Create pull request

On the Pull Request page, you can scroll down to review the changes you are submitting. For some projects, you will also see a link to guidelines for contributing. Be descriptive in your pull request. Provide information on the change you made so the project owners know exactly what you were trying to accomplish. If there is an issue you are addressing with this pull request, you should reference it by number (e.g. #124) in the description of your pull request. If everything looks good, click the Create Pull Request button.

Enter pull request details

Your pull request has now been created and is ready for the project owners to review (and hopefully accept). Some projects will have automated checks that happen for each pull request. allReady has an AppVeyor build that compiles the application and runs unit tests. You should monitor this and ensure that all the checks pass.

Automated checks on pull requests

If all goes as planned, your pull request will be accepted and you will feel a great sense of accomplishment. Of course, things don’t always go as planned. Let’s explore how to handle a few common scenarios.

Making changes to an existing pull request

Often, the project owners will make comments on your pull request and ask you to make some changes. Don’t feel bad if this happens…my first pull request to a large project had 59 comments and required a fair bit of rework before it was finally merged in to the master branch. When this happens, don’t close the pull request. Simply make your changes locally, commit them to your local branch, then push those changes to GitHub.

Push changes to an existing pull request

The push can be done using the GitHub for Windows app by clicking the Sync button.

Push changes to an existing pull request

As soon as your changes have been pushed to GitHub the new commit will appear in the pull request. Any required checks will be re-run and the conversation with the project owners can continue. Really that’s what a pull request is: An ongoing conversation about a set of proposed changes to the code base.

Pull request with multiple changes

Keeping your fork up to date

Another common scenario is that your fork (and branches) become out of date. This happens any time changes are made to the original repo. You can see in this example that 4 commits have been made to HTBox:master since I created my pull request.

Branch out of date

It is a good idea to make sure that your branch is not behind the branch that your pull request will be merged into (in this case HTBox:master). When your branch gets behind, you increase the chances of having merge conflicts in your pull request. Keeping your branch up to date is actually fairly simple but not entirely obvious. A common approach is to click the Update from upstream button in GitHub for Windows. Clicking this button will merge the commits from master into your local branch.

Merging changes from master

This works, but it’s not very clean. When using this approach, you get these strange “merge remote tracking branch” commits in your branch. I find this can get confusing and messy pretty quickly, as these additional commits make it difficult to read through your commit history to understand the changes you made in this branch. It is also strange to see a commit with your name on it that doesn’t actually relate to any real changes you made to the code.

Merge commit message

I find a better approach is to do a git rebase. Don’t be scared by the new terminology. A rebase is the process of rewinding the changes you made, updating the branch to include the missing commits from another branch, then replaying your commits after those. In my mind this more logically mirrors what you actually want for your pull request. This should also make your changes much easier to review.

Before you can rebase, you first need to fetch the changes from the upstream (in this case HTBox). Run git fetch HTBox. The fetch itself won’t change your branch. It simply ensures that your local git repo has a copy of the changes from HTBox/master. Next, execute git rebase HTBox/master. This will rewind all your changes and then replay them after the changes that happened to HTBox/master.

git rebase

Luckily, we had no merge conflicts to deal with here, so we can proceed with pushing our changes up to GitHub with the git push -f command.

Force push

Now when we look at this branch on GitHub, we can see that it is no longer behind the HTBox/master branch.

Updated branch

Over time, you will also want to keep your master branch up to date. The process here is the same but you usually don’t need to use the force flag to push. The force flag is only necessary when you have made changes in that branch.

Updating fork

Caution: When you rebase, then push -f, you are rewriting the history for your branch. This normally isn’t a problem if you are the only person working on your branch. It can however be a big problem if you are collaborating with another developer on your branch. If you are collaborating with others, the merge approach mentioned earlier (using the Update from upstream button in GitHub for Windows) is a safer option than the rebase option.

Dealing with Merge Conflicts

Dealing with conflicts is the worst part of any source control system, including git. When I run into this problem I use a combination of the command line and the git tooling built-in to Visual Studio. I like to use Visual Studio for this because the visualization used for resolving conflicts is familiar to me.

If a merge conflict occurs during a rebase, git will spew out some info for you.

Merge conflict

Don’t panic. What happens here is that the rebase stops at the commit where the merge conflict happened. It is now up to you to decide how you want to handle this merge conflict. Once you have completed the merge, you can continue the rebase by running the git rebase --continue command. Alternatively, you can cancel everything by running the git rebase --abort command.

As I said earlier, I like to jump over to Visual Studio to handle the merge conflicts. In Visual Studio, with the solution file for the project open, open the file that has a conflict.

File with conflict

Here, we can see the conflicted area. You could merge it manually here, but there is a much better way. In Visual Studio, open the Team Explorer and select Changes.

Visual Studio Team Explorer

Visual Studio knows that you are in the middle of a rebase and that you have conflicts.

Visual Studio Show Conflicts

Click the Conflicts warning and then click the Merge button to resolve merge conflicts for the conflicted file.

Open merge tool

This will open the Merge window where I can select the changes I want to keep and then click the Accept Merge button.

Three way merge tool in Visual Studio

Now, we can continue the rebase operation with git rebase --continue:

git rebase --continue

Finally, a git push -f to push the changes to GitHub and our merge is complete! See…that wasn’t so bad, was it?

Squashing Commits

Some project owners will ask you to squash your commits before they will accept your changes. Squashing is the process of combining all your commits into a single commit. Some project owners like this because it keeps the commit log on the master branch nice and clean with a single commit per pull request. Squashing is the subject of much debate but I won’t get into that here. If you got through the merging you can handle this too.

To squash your commits, start by rebasing as described above. Squashing only works if all your commits are replayed AFTER all the changes in the branch that the pull request will be merged into. Next, rebase again with the interactive (-i) flag, specifying the number of commits you will be squashing using HEAD~x. In my case, that is 2 commits. This will open Notepad with a list of the last x commits and some instructions on how to specify the commits you will be squashing.

Squashing commits

Edit the file, save it and close it. Git will continue the rebase process and open a second file in Notepad. This file will allow you to modify the commit messages.

Modify commit messages

I usually leave this file alone and close it. This completes the squashing.

Squash complete

Finally, run the git push -f command to push these changes to GitHub. Your branch (and associated pull request) should now show a single commit with all your changes.

Results of squashing

Pull request successfully merged and closed!

Mission Accomplished

Congrats! You now have the tools you need to handle most scenarios you might encounter when contributing to an open source project on GitHub. It’s time to impress your friends with your new-found knowledge of rebasing, merging and squashing! Get out there and start contributing. If you’re looking for a project to get started on, check out the list at http://up-for-grabs.net.

Goodbye Child Actions, Hello View Components

Updated May 22, 2016: Updated to match component invocations changes in ASP.NET Core RC2 / RTM

In previous versions of MVC, we used Child Actions to build reusable components / widgets that consisted of both Razor markup and some backend logic. The backend logic was implemented as a controller action and typically marked with a [ChildActionOnly] attribute. Child actions are extremely useful but as some have pointed out, it is easy to shoot yourself in the foot.

Child Actions do not exist in ASP.NET Core MVC. Instead, we are encouraged to use the new View Component feature to support this use case. Conceptually, view components are a lot like child actions, but they are lighter weight and no longer involve the lifecycle and pipeline related to a controller. Before we get into the differences, let’s take a look at a simple example.

A simple View Component

View components are made up of 2 parts: A view component class and a razor view.

To implement the view component class, inherit from the base ViewComponent and implement an Invoke or InvokeAsync method. This class can be anywhere in your project. A common convention is to place them in a ViewComponents folder. Here is an example of a simple view component that retrieves a list of articles to display in a What’s New section.

namespace MyWebApplication.ViewComponents
{
    public class WhatsNewViewComponent : ViewComponent
    {
        private readonly IArticleService _articleService;

        public WhatsNewViewComponent(IArticleService articleService)
        {
            _articleService = articleService;
        }

        public IViewComponentResult Invoke(int numberOfItems)
        {
            var articles = _articleService.GetNewArticles(numberOfItems);
            return View(articles);
        }
    }
}

Much like a controller action, the Invoke method of a view component simply returns a view. If no view name is explicitly specified, the default Views\Shared\Components\ViewComponentName\Default.cshtml is used. In this case, Views\Shared\Components\WhatsNew\Default.cshtml. Note there are a ton of conventions used in view components. I will be covering these in a future blog post.
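If you ever need something other than the default view, you can return an explicitly named view. A hedged example (the "Compact" view name is hypothetical):

public IViewComponentResult Invoke(int numberOfItems)
{
    var articles = _articleService.GetNewArticles(numberOfItems);
    // Looks for Views\Shared\Components\WhatsNew\Compact.cshtml instead of Default.cshtml
    return View("Compact", articles);
}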

Views\Shared\Components\WhatsNew\Default.cshtml
@model IEnumerable<Article>

<h2>What's New</h2>
<ul>
@foreach (var article in Model)
{
<li><a asp-controller="Article"
asp-action="View"
asp-route-id="@article.Id">@article.Title</a></li>

}
</ul>

To use this view component, simply call @Component.InvokeAsync from any view in your application. For example, I added this to the Home/Index view:

Views\Home\Index.cshtml
<div class="col-md-3">
@await Component.InvokeAsync("WhatsNew", new { numberOfItems = 3})
</div>

The first parameter to @Component.InvokeAsync is the name of the view component. The second parameter is an object specifying the names and values of arguments matching the parameters of the Invoke method in the view component. In this case, we specified a single int named numberOfItems, which matches the Invoke(int numberOfItems) method of the WhatsNewViewComponent class.

What's New View Component

How is this different?

So far this doesn’t really look any different from what we had with Child Actions. There are however some major differences here.

No Model Binding

With view components, parameters are passed directly to your view component when you call @Component.Invoke() or @Component.InvokeAsync() in your view. There is no model binding needed here since the parameters are not coming from the HTTP request. You are calling the view component directly using C#. No model binding means you can have overloaded Invoke methods with different parameter types. This is something you can’t do in controllers.
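As a hedged sketch of what that allows, the WhatsNewViewComponent from earlier could expose two Invoke overloads (GetArticlesSince is a hypothetical service method used only for illustration):

public class WhatsNewViewComponent : ViewComponent
{
    private readonly IArticleService _articleService;

    public WhatsNewViewComponent(IArticleService articleService)
    {
        _articleService = articleService;
    }

    // Show the latest n articles
    public IViewComponentResult Invoke(int numberOfItems)
    {
        return View(_articleService.GetNewArticles(numberOfItems));
    }

    // Show articles published after a given date (hypothetical overload)
    public IViewComponentResult Invoke(DateTime since)
    {
        return View(_articleService.GetArticlesSince(since));
    }
}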

No Action Filters

View components don’t take part in the controller lifecycle. This means you can’t add action filters to a view component. While this might sound like a limitation, it is actually an area that caused problems for a lot of people. Adding an action filter to a child action would sometimes have unintended consequences when the child action was called from certain locations.

Not reachable from HTTP

A view component never directly handles an HTTP request so you can’t call directly to a view component from the client side. You will need to wrap the view component with a controller if your application requires this behaviour.
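Here is a hedged sketch of that wrapping, using the ViewComponent helper method on Controller (the controller and action names are hypothetical, and the exact argument-passing form of the helper has varied slightly between versions):

public class WhatsNewController : Controller
{
    // Exposes the WhatsNew view component over HTTP
    public IActionResult Latest(int numberOfItems)
    {
        return ViewComponent("WhatsNew", new { numberOfItems });
    }
}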

What is available?

Common Properties

When you inherit from the base ViewComponent class, you get access to a few properties that are very similar to controllers:

[ViewComponent]
public abstract class ViewComponent
{
protected ViewComponent();
public HttpContext HttpContext { get; }
public ModelStateDictionary ModelState { get; }
public HttpRequest Request { get; }
public RouteData RouteData { get; }
public IUrlHelper Url { get; set; }
public IPrincipal User { get; }

[Dynamic]
public dynamic ViewBag { get; }
[ViewComponentContext]
public ViewComponentContext ViewComponentContext { get; set; }
public ViewContext ViewContext { get; }
public ViewDataDictionary ViewData { get; }
public ICompositeViewEngine ViewEngine { get; set; }

//...
}

Most notably, you can access information about the current user from the User property and information about the current request from the Request property. Also, route information can be accessed from the RouteData property. You also have the ViewBag and ViewData. Note that the ViewBag / ViewData are shared with the controller. If you set a ViewBag property in your controller action, that property will be available in any view component that is invoked by that controller action’s view.
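For example, a value placed in the ViewBag by the controller action is visible in a view component invoked from that action’s view (a hedged sketch):

// In the controller action
public IActionResult Index()
{
    ViewBag.Section = "News";
    return View();
}

// In a view component invoked from the Index view
public IViewComponentResult Invoke(int numberOfItems)
{
    string section = ViewBag.Section; // "News"
    return View(_articleService.GetNewArticles(numberOfItems));
}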

Dependency Injection

Like controllers, view components also take part in dependency injection, so any other information you need can simply be injected into the view component. In the example above, we injected the IArticleService that allows us to access articles from some remote source. Anything that you can inject into a controller can also be injected into a view component.
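The only requirement is that the dependency is registered with the container, for example in Startup.ConfigureServices (ArticleService here is a hypothetical implementation of IArticleService):

// Register the service that the WhatsNewViewComponent depends on
services.AddScoped<IArticleService, ArticleService>();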

Wrapping it up

View components are a powerful new feature for creating reusable widgets in ASP.NET Core MVC. Consider using View Components any time you have complex rendering logic that also requires some backend logic.