MapKit JS with ASP.NET Core

Apple’s MapKit JS isn’t currently as widely used as some alternative web mapping solutions, but it’s a reasonable option, depending on your needs. While there’s no free version, the usage limits are very high, and it’s priced more competitively than many rivals, since it’s included with an Apple Developer Membership. Integrating MapKit JS with an ASP.NET Core application can feel a little different at first if you’ve never worked with JSON Web Tokens (JWTs), but it’s actually pretty straightforward. The example below uses only standard .NET libraries and doesn’t require any large third-party libraries or additional NuGet packages. The code should also work on Windows, macOS, and Linux. The full sample that goes along with this post is available on GitHub. (Note that there are some alternative Microsoft Cng libraries you could use, but they’re Windows only.)

MapKit JS uses a JWT with an authorization callback rather than the API key you might see with some competitors, so we’re going to create an API method that generates a JWT for authorization. There are some steps you need to take on your Apple Developer account to get set up, so you’ll also want to read Apple’s documentation on creating a maps identifier and a private key, and about some of the other values you need to supply to generate a token.

The example has spots in appsettings.json and appsettings.Development.json for these values, but it does not include valid values, so it will not run as-is. You’ll want to put in your own values for PrivateKey (as a Base64 string: you can take the .p8 file you download from Apple and copy/paste the private key out of the file with the line breaks removed), KeyIdentifier from the key you set up in your Apple Developer account, and TeamID, which you can also grab from your Apple Developer account. The final value you’ll want to change is Origin. MapKit JS compares this value to the location where the maps are hosted, and throws an error if it doesn’t match. Note that you can omit Origin and MapKit JS won’t validate your URL, but you probably don’t want to forgo that extra security check in production.
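For reference, that configuration section might look something like the sketch below. The section and key names here are assumptions that mirror the _mapKitSettings values used in the code further down; check the GitHub sample for the exact names it uses, and the placeholder values are obviously not real credentials:

```json
{
  "MapKitSettings": {
    "PrivateKey": "BASE64_PRIVATE_KEY_COPIED_FROM_YOUR_P8_FILE",
    "KeyIdentifier": "ABC123DEFG",
    "TeamID": "TEAM123456",
    "Origin": "https://www.example.com",
    "TokenExpirationMinutes": 20
  }
}
```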

One of the other fields in the payload is the expiration; our example has tokens that expire after 20 minutes. You can set the expiration shorter or longer, and MapKit JS will call your authorization callback again if the token times out. Note that you will definitely want to create the JWT on your server, since you don’t want your private key exposed anywhere on the web.
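The settings above can be bound to a simple POCO and injected into the controller. This is a minimal sketch, assuming a class named MapKitSettings and the standard options pattern; the GitHub sample may wire this up differently:

```csharp
// A minimal settings class matching the fields referenced via _mapKitSettings
// in the controller code below.  The class name is an assumption.
public class MapKitSettings
{
    public string PrivateKey { get; set; }
    public string KeyIdentifier { get; set; }
    public string TeamID { get; set; }
    public string Origin { get; set; }
    public int TokenExpirationMinutes { get; set; }
}

// In Startup.ConfigureServices, bind the section from appsettings.json:
// services.Configure<MapKitSettings>(Configuration.GetSection("MapKitSettings"));
```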

Below is the main piece of code in an API method which creates the JWT token:

[Route("api/mapkit/gettoken")]
public string GetMapKitToken()
{
    var header = new
    {
        // The signing algorithm (alg) used to sign the token. ES256 should be
        // used, so the value for this field should be "ES256".
        alg = "ES256",
        // A 10-character key identifier (kid) key, obtained from your Apple Developer account.
        kid = _mapKitSettings.KeyIdentifier,
        // A type parameter (typ), with the value "JWT".
        typ = "JWT"
    };
    var payload = new
    {
        // The Issuer (iss) registered claim key. This key's value is your 10-character Team ID,
        // obtained from your developer account.
        iss = _mapKitSettings.TeamID,
        // The Issued At (iat) registered claim key. The value of this key indicates the time at
        // which the token was generated, in terms of the number of seconds since UNIX Epoch, in UTC.
        iat = DateTimeOffset.UtcNow.ToUnixTimeSeconds(),
        // The Expiration Time (exp) registered claim key, in terms of the number of seconds since
        // UNIX Epoch, in UTC.
        exp = DateTimeOffset.UtcNow.AddMinutes(_mapKitSettings.TokenExpirationMinutes).ToUnixTimeSeconds(),
        // This key's value is a fully qualified domain that should match the Origin header passed
        // by a browser.  Apple compares this to your requests for verification.  Note that you can
        // omit this and get warnings, though it's definitely not recommended.
        origin = _mapKitSettings.Origin
    };
    var headerBytes = Encoding.UTF8.GetBytes(JsonSerializer.Serialize(header));
    var payloadBytes = Encoding.UTF8.GetBytes(JsonSerializer.Serialize(payload));
    var message = EncodingHelper.JwtBase64Encode(headerBytes)
        + "." + EncodingHelper.JwtBase64Encode(payloadBytes);
    var messageBytes = Encoding.UTF8.GetBytes(message);
    // ECDsa holds native key material and is IDisposable, so use a using declaration
    using var crypto = ECDsa.Create();
    crypto.ImportPkcs8PrivateKey(Convert.FromBase64String(_mapKitSettings.PrivateKey), out _);
    var signature = crypto.SignData(messageBytes, HashAlgorithmName.SHA256);
    return message + "." + EncodingHelper.JwtBase64Encode(signature);
}

The helper class below has a single method that produces a Base64 string version of a byte array per the JWT (base64url) specification, which requires a couple of character replacements and trimming the padding beyond the stock Base64 string conversion.

public static class EncodingHelper
{
    // Base64 Encode Per the JWT specifications
    public static string JwtBase64Encode(byte[] bytes)
    {
        if (bytes == null)
            throw new ArgumentNullException(nameof(bytes));
        if (bytes.Length == 0)
            throw new ArgumentOutOfRangeException(nameof(bytes));
        return Convert.ToBase64String(bytes)
            .Replace('+', '-')
            .Replace('/', '_')
            .TrimEnd('=');
    }
}
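As a quick sanity check, here’s what the helper does to a pair of bytes whose standard Base64 form happens to contain every character that needs special handling:

```csharp
// 0xFF 0xEF encodes to "/+8=" in standard Base64.  The JWT-safe version
// swaps '/' for '_' and '+' for '-', then drops the '=' padding.
var encoded = EncodingHelper.JwtBase64Encode(new byte[] { 0xFF, 0xEF });
// encoded is "_-8"
```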

And below is a razor view with the MapKit initialization that takes advantage of our API method that generates a JWT token, sets up a map with a single marker annotation (pin) at Apple’s headquarters, and finally fits the map to that location with a little bit of padding.

@{
    ViewData["Title"] = "Sample Home";
}
<div class="text-center">
    <h1 class="display-4">Sample Map</h1>
    <div id="mapContainer" style="height:400px"></div>
</div>
@section Scripts
{
    <script src="https://cdn.apple-mapkit.com/mk/5.x.x/mapkit.js"></script>
    <script>
        mapkit.init({
            authorizationCallback: function(done) {
                fetch("/api/mapkit/gettoken")
                    .then(res => res.text())
                    .then(done);
            },
            language: "en"
        });
        var map = new mapkit.Map('mapContainer');
        // Create a balloon marker for Apple Park
        var appleParkCoordinates = new mapkit.Coordinate(37.28808, -122.01982);
        var annotation = new mapkit.MarkerAnnotation(appleParkCoordinates,
            {
                title: 'Apple Park'
            });
        map.addAnnotation(annotation);
        // Center and fit map on the annotation
        map.showItems([annotation],
            {
                padding: new mapkit.Padding(100, 100, 100, 100)
            });
    </script>
}

After replacing the configuration values with your own, you should be able to run the project and get a sample map like the screenshot below. The full sample is also posted on GitHub.

Elasticsearch with ASP.NET Core using NEST

Adding search functionality to a site can be deceptively complex. There’s a lot more than basic string matching that goes into a decent search function — e.g. fuzziness, stemming, synonyms, etc. Luckily, there are several software services available that provide a lot of functionality out of the box. I’m currently a fan of Elasticsearch for its ease of use and feature set.

Elasticsearch provides two .NET clients: a low-level client, Elasticsearch.Net, and a high-level client, NEST. This post is about using NEST with ASP.NET Core. NEST is quite powerful and easy to get into. Note that there’s also a working sample .NET Core 3.1 project in C# that goes along with these code examples.

There are several terms around search that are important, and can improve your search performance if you apply them thoughtfully to your fields:

  • Fuzziness – not every search term will be spelled or typed exactly like the words that are indexed. Making a query fuzzy means that it will match results that aren’t spelled exactly the same. E.g., you probably want to return results for matrix when the user has typed matrx. Elasticsearch supports a number of options, such as edit distance, which is based on the number of single-character edits to a string, as well as phonetic fuzziness, where a term may be misspelled but the misspelling sounds like the correct spelling. We’ll show an example of a fuzzy search below.
  • Stemming – the process of reducing words to their base, or stem. Your text will typically use multiple forms of a word, e.g., learn, learned, learning. You may want a query for any of the forms to match any of the others, so a query for learning can return results that have learn and learned in them. Elasticsearch has multiple options here, from algorithmic stemmers that automatically determine word stems, to dictionary stemmers. Stemming can also decrease index size by storing only the stems, and thus fewer words. We’ll show an example of using algorithmic stemmers below.
  • Stop Words – these are words that aren’t used in the index. Short, frequently used words that don’t add meaning to a sentence, e.g., a, and, the, or, are typical choices. Somewhat like stemming, using stop words can improve the size and performance of indices by limiting the number of words stored. Elasticsearch has some built-in stop word lists, and we’ll show the use of one below.
  • Synonyms – to produce better search results, you can also define lists of synonyms for Elasticsearch. Synonyms are what they sound like: words that have the same or nearly the same meanings. E.g., you may want a query for huge to bring back results that have big in them. Synonyms can also be particularly useful with industry terms. We’ll show an example of using synonyms below.

Let’s get started. The example below is pretty simple: we’re going to use small objects that describe books and their authors. With NEST, you need to map your objects. The mapping can be inferred, or you can use attribute mapping or a fluent API with NEST. Note that you must use AutoMap() as we do below when using attribute mapping. I personally prefer to decorate classes with attributes, since it’s a similar pattern to data annotations:

public enum BookGenre
{
    None = 0,
    Adventure,
    Biography,
    Drama,
    HistoricalFiction,
    Science
}
[ElasticsearchType(RelationName = "author")]
public class Author
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
[ElasticsearchType(RelationName = "book", IdProperty = nameof(Id))]
public class Book
{
    public int Id { get; set; }
    public Author Author { get; set; }
    [Text(Boost = 1.5)]
    public string Title { get; set; }
    public string Opening { get; set; }
    [StringEnum]
    public BookGenre Genre { get; set; }
    [Ignore]
    public int InitialPublishYear { get; set; }
}

A couple of things to note here: we’ve specified our Id field for our Book class, and you can use this to change what’s used as an id in your index. We’ve also specified the type on our Title field, though NEST could have figured out on its own that string maps to Text — the interesting thing here is that we’re also boosting the field, which means it will count for more of the score when we search against this data. (We’re basically assuming the Title is the most important piece of the text here.) On the enumerated type, we’re using [StringEnum], which tells NEST to serialize the enumerated values as strings, e.g. "Adventure", "Biography", etc. If we didn’t do this, it would store those as "1", "2", "3", etc. in the index. Finally, we’re using [Ignore] to exclude a field from serialization and keep it out of our index.

Next we’ll create an extension that you can use to add the client to the services collection for dependency injection:

public static class ElasticsearchExtensions
{
    public static void AddElasticsearch(this IServiceCollection services, IConfiguration configuration)
    {
        var settings = new ConnectionSettings(new Uri(configuration["ElasticsearchSettings:uri"]));
        var defaultIndex = configuration["ElasticsearchSettings:defaultIndex"];
        if (!string.IsNullOrEmpty(defaultIndex))
            settings = settings.DefaultIndex(defaultIndex);
        var client = new ElasticClient(settings);
        services.AddSingleton<IElasticClient>(client);
    }
}

The ElasticClient is thread-safe, so a singleton is the correct pattern here. There are also quick code examples for ApiKey and BasicAuth in the linked code sample.
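If your cluster requires credentials, the same ConnectionSettings object is where they go. A minimal sketch using NEST’s authentication helpers — the configuration key names here are assumptions, so see the linked sample for the exact versions:

```csharp
// Basic authentication on the connection settings (fluent, returns settings).
var settings = new ConnectionSettings(new Uri(configuration["ElasticsearchSettings:uri"]))
    .BasicAuthentication(
        configuration["ElasticsearchSettings:username"],
        configuration["ElasticsearchSettings:password"]);

// ...or, for API key authentication:
// settings = settings.ApiKeyAuthentication(
//     configuration["ElasticsearchSettings:apiKeyId"],
//     configuration["ElasticsearchSettings:apiKey"]);
```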

Your appsettings.json might then look something like this for the Uri of your Elasticsearch instance and your default index:

{
  "ElasticsearchSettings": {
    "uri": "http://localhost:9200/",
    "defaultIndex": "books"
  }
}

We also need to create the index we’ll search against in Elasticsearch. We could put this in our ElasticsearchExtensions class, but index creation can take a little while, so we really want it to run asynchronously when the application starts up. To do that, we create a class that implements IHostedService:

public class ElasticsearchHostedService : IHostedService
{
    private readonly IElasticClient _elasticClient;
    public ElasticsearchHostedService(IElasticClient elasticClient)
    {
        _elasticClient = elasticClient;
    }
    public async Task StartAsync(CancellationToken cancellationToken)
    {
        var booksIndexName = "books";
        // The check for whether this index exists and subsequently deleting
        // it if it does is for demo purposes!  This is so we can make changes
        // in our code and have them reflected in the index.  In production,
        // you would not want to do this.
        if ((await _elasticClient.Indices.ExistsAsync(booksIndexName)).Exists)
            await _elasticClient.Indices.DeleteAsync(booksIndexName);
        var createBooksIndexResponse = await _elasticClient.Indices.CreateAsync(booksIndexName, c => c
            .Settings(s => s
                .Analysis(a => a
                    .TokenFilters(tf => tf
                        .Stop("english_stop", st => st
                            .StopWords("_english_")
                        )
                        .Stemmer("english_stemmer", st => st
                            .Language("english")
                        )
                        .Stemmer("light_english_stemmer", st => st
                            .Language("light_english")
                        )
                        .Stemmer("english_possessive_stemmer", st => st
                            .Language("possessive_english")
                        )
                        .Synonym("book_synonyms", st => st
                            // If you have a lot of synonyms, it's probably better to create a synonyms
                            // text file and use .SynonymsPath here instead.
                            .Synonyms(
                                "haphazard,indiscriminate,erratic",
                                "incredulity,amazement,skepticism")
                        )
                    )
                    .Analyzers(aa => aa
                        .Custom("light_english", ca => ca
                            .Tokenizer("standard")
                            .Filters("light_english_stemmer", "english_possessive_stemmer", "lowercase", "asciifolding")
                        )
                        .Custom("full_english", ca => ca
                            .Tokenizer("standard")
                            .Filters("english_possessive_stemmer",
                                    "lowercase",
                                    "english_stop",
                                    "english_stemmer",
                                    "asciifolding")
                        )
                        .Custom("full_english_synopsis", ca => ca
                            .Tokenizer("standard")
                            .Filters("book_synonyms",
                                    "english_possessive_stemmer",
                                    "lowercase",
                                    "english_stop",
                                    "english_stemmer",
                                    "asciifolding")
                        )
                    )
                )
            )
            .Map<Book>(m => m
                .AutoMap()
                .Properties(p => p
                    .Text(t => t
                        .Name(n => n.Title)
                        .Analyzer("light_english")
                    )
                    .Text(t => t
                        .Name(n => n.Opening)
                        .Analyzer("full_english_synopsis")
                    )
                )
            )
        );
    }
    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}

There are a couple of things to unpack here:

  • The first thing to note is that we’re always deleting and recreating the index when the app starts. This makes it easy to test, since we can make changes to the data or indexing that only take effect when documents are indexed. However, this is something you really don’t want to do in a production app.
  • Our index creation call is roughly split into three sections: setting up some filters to use in Settings.Analysis, creating analyzers that use them in Settings.Analyzers, and then making use of them when we map our object.
  • Under our TokenFilters section, we’re setting up some of the things we briefly mentioned above: a StopWords filter, a couple Stemmers (“light_english” is less aggressive than “english”), and our Synonym list.
  • At the bottom of StartAsync, we’re doing the most important bit, which is calling AutoMap to make use of the attribute mapping we applied on our data classes above, and then applying Analyzers that we set up immediately before. We’re letting the defaults take effect for the rest of the fields, but we’re setting the more aggressive Analyzer (with the “english” stemmer and making use of our “book_synonyms”) on the Opening field, and the more straightforward Analyzer on the Title field.

Then, in our Startup.cs, we’re going to make use of our extension and hosted service:

public void ConfigureServices(IServiceCollection services)
{
    // Add the configured ElasticClient to our service collection
    services.AddElasticsearch(Configuration);
    // Add our hosted service which will create our indices and mapping
    // asynchronously on startup
    services.AddHostedService<ElasticsearchHostedService>();
    services.AddControllersWithViews();
}

For demonstration purposes, we’ve also added some static values into Index() on HomeController.cs where we have a few Book items which we’ll insert into our index (the examples below are excerpts from books that are in the public domain; you can download them from many places such as the Digital Public Library of America):

public async Task<IActionResult> Index()
{
    var books = new List<Book>()
    {
        new Book
        {
            Id = 1,
            Title = "Narrative of the Life of Frederick Douglass",
            Opening = "I was born in Tuckahoe, near Hillsborough, and about twelve miles from Easton, in Talbot county, Maryland. I have no accurate knowledge of my age, never having seen any authentic record containing it. By far the larger part of the slaves know as little of their ages as horses know of theirs, and it is the wish of most masters within my knowledge to keep their slaves thus ignorant.",
            Genre = BookGenre.Biography,
            Author = new Author
            {
                FirstName = "Frederick",
                LastName = "Douglass"
            },
            InitialPublishYear = 1845
        },
        new Book
        {
            Id = 2,
            Title = "A Tale of Two Cities",
            Opening = "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way—in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.",
            Genre = BookGenre.HistoricalFiction,
            Author = new Author
            {
                FirstName = "Charles",
                LastName = "Dickens"
            },
            InitialPublishYear = 1859
        },
        new Book
        {
            Id = 3,
            Title = "On the Origin of Species",
            Opening = "When we compare the individuals of the same variety or sub-variety of our older cultivated plants and animals, one of the first points which strikes us is, that they generally differ more from each other than do the individuals of any one species or variety in a state of nature. And if we reflect on the vast diversity of the plants and animals which have been cultivated, and which have varied during all ages under the most different climates and treatment, we are driven to conclude that this great variability is due to our domestic productions having been raised under conditions of life not so uniform as, and somewhat different from, those to which the parent species had been exposed under nature.",
            Genre = BookGenre.Science,
            Author = new Author
            {
                FirstName = "Charles",
                LastName = "Darwin"
            },
            InitialPublishYear = 1859
        },
        new Book
        {
            Id = 4,
            Title = "Oh Pioneers!",
            Opening = "One January day, thirty years ago, the little town of Hanover, anchored on a windy Nebraska tableland, was trying not to be blown away. A mist of fine snowflakes was curling and eddying about the cluster of low drab buildings huddled on the gray prairie, under a gray sky. The dwelling-houses were set about haphazard on the tough prairie sod; some of them looked as if they had been moved in overnight, and others as if they were straying off by themselves, headed straight for the open plain.",
            Genre = BookGenre.HistoricalFiction,
            Author = new Author
            {
                FirstName = "Willa",
                LastName = "Cather"
            },
            InitialPublishYear = 1913
        },
        new Book
        {
            Id = 5,
            Title = "Moby Dick",
            Opening = "Call me Ishmael. Some years ago—never mind how long precisely—having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world. It is a way I have of driving off the spleen and regulating the circulation.",
            Genre = BookGenre.Adventure,
            Author = new Author
            {
                FirstName = "Herman",
                LastName = "Melville"
            },
            InitialPublishYear = 1851
        }
    };
    foreach (var book in books)
    {
        var existsResponse = await _elasticClient.DocumentExistsAsync<Book>(book);
        // If the document already exists, we're going to update it; otherwise insert it
        // Note:  You may get existsResponse.IsValid = false for a number of issues
        // ranging from an actual server issue, to mismatches with indices (e.g. a
        // mismatch on the datatype of Id).
        if (existsResponse.IsValid && existsResponse.Exists)
        {
            var updateResponse = await _elasticClient.UpdateAsync<Book>(book, u => u.Doc(book));
            if (!updateResponse.IsValid)
            {
                var errorMsg = "Problem updating document in Elasticsearch.";
                _logger.LogError(updateResponse.OriginalException, errorMsg);
                throw new Exception(errorMsg);
            }
        }
        else
        {
            var insertResponse = await _elasticClient.IndexDocumentAsync(book);
            if (!insertResponse.IsValid)
            {
                var errorMsg = "Problem inserting document to Elasticsearch.";
                _logger.LogError(insertResponse.OriginalException, errorMsg);
                throw new Exception(errorMsg);
            }
        }
    }
    var vm = new HomeViewModel
    {
        InsertedData = JsonConvert.SerializeObject(books, Formatting.Indented)
    };
    return View(vm);
}

Above, we’re checking whether the Book already exists in the index, updating it if it does, and inserting it if not. We’re then serializing our object list to JSON to display to the user of the sample application.
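The exists/update/insert loop is fine for a handful of documents, but for larger datasets you’d typically switch to a bulk operation. A sketch using NEST’s bulk helper — note this replaces the whole per-document loop above:

```csharp
// Index the whole list in one round trip.  Because our Book mapping uses
// IdProperty = nameof(Id), documents with the same Id are overwritten
// rather than duplicated.
var bulkResponse = await _elasticClient.IndexManyAsync(books);
if (!bulkResponse.IsValid || bulkResponse.Errors)
    _logger.LogError(bulkResponse.OriginalException, "Problem bulk indexing documents.");
```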

And we’ve implemented a basic search in our SearchController.cs:

[HttpGet]
public async Task<IActionResult> Index(string q)
{
    if (string.IsNullOrEmpty(q))
    {
        var noResultsVM = new SearchViewModel { Term = "[No Search]" };
        return View(noResultsVM);
    }
    var response = await _elasticClient.SearchAsync<Book>(s =>
        s.Query(sq =>
            sq.MultiMatch(mm => mm
                .Query(q)
                .Fuzziness(Fuzziness.Auto)
            )
        )
    );
    var vm = new SearchViewModel
    {
        Term = q
    };
    if (response.IsValid)
        vm.Results = response.Documents?.ToList();
    else
        _logger.LogError(response.OriginalException, "Problem searching Elasticsearch for term {0}", q);
    return View(vm);
}

There’s a good comparison of the types of queries available on qbox.io. Here we’re using MultiMatch; the other important piece is Fuzziness(Fuzziness.Auto), which makes this a fuzzy search.
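With no fields specified, the MultiMatch query runs against all fields. If you want to scope it, or apply a query-time boost on top of the index-time boost we set with attributes, you can list the fields explicitly. A sketch of how the query portion could look:

```csharp
var response = await _elasticClient.SearchAsync<Book>(s =>
    s.Query(sq =>
        sq.MultiMatch(mm => mm
            // Restrict the search to Title and Opening, with an extra
            // query-time boost on Title via Field(field, boost).
            .Fields(f => f
                .Field(b => b.Title, 2.0)
                .Field(b => b.Opening))
            .Query(q)
            .Fuzziness(Fuzziness.Auto)
        )
    )
);
```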

Let’s see some examples from our sample application:

  • Example of fuzziness, where I’ve misspelled “Easton” as “eston”.
  • Example of stemming, where I search for “drove” and get back a result with “driving” in it.
  • Example of synonyms, where I search for “erratic” and get back a result with “haphazard”.

Check out the sample application on GitHub at https://github.com/adam-russell/elasticsearch-aspnet-core-sample. Feel free to let me know if you notice any bugs or omissions.

ASP.NET Core: Section Scripts in a Partial View

A @section Scripts block does not work when placed in a partial view in ASP.NET Core, which matches the behavior of ASP.NET MVC. Unfortunately, you don’t get any error message if you try to add the section to a partial — it just does nothing. In many cases, having a scripts section in a partial view would be an anti-pattern, since the partial can be rendered an unknown number of times. However, there are times when I believe a scripts section is warranted in a partial, particularly when you’re trying to create dynamic JavaScript based on the model passed into the partial view. While you can’t use the actual @section Scripts in a partial view, you can add some HTML helper extensions to accomplish the same thing.

Below is the code to accomplish this functionality — here are the helper extension methods that you’d add into a C# file in your project:

using System;
using System.Linq;
using System.Text.Encodings.Web;
using System.Text.RegularExpressions;
using Microsoft.AspNetCore.Html;
using Microsoft.AspNetCore.Mvc.Razor;
using Microsoft.AspNetCore.Mvc.Rendering;
public static class HtmlHelperExtensions
{
    private const string _partialViewScriptItemPrefix = "scripts_";
    public static IHtmlContent PartialSectionScripts(this IHtmlHelper htmlHelper, Func<object, HelperResult> template)
    {
        htmlHelper.ViewContext.HttpContext.Items[_partialViewScriptItemPrefix + Guid.NewGuid()] = template;
        return new HtmlContentBuilder();
    }
    public static IHtmlContent RenderPartialSectionScripts(this IHtmlHelper htmlHelper)
    {
        var partialSectionScripts = htmlHelper.ViewContext.HttpContext.Items.Keys
            .Where(k => Regex.IsMatch(
                k.ToString(),
                "^" + _partialViewScriptItemPrefix + "([0-9A-Fa-f]{8}[-][0-9A-Fa-f]{4}[-][0-9A-Fa-f]{4}[-][0-9A-Fa-f]{4}[-][0-9A-Fa-f]{12})$"));
        var contentBuilder = new HtmlContentBuilder();
        foreach (var key in partialSectionScripts)
        {
            var template = htmlHelper.ViewContext.HttpContext.Items[key] as Func<object, HelperResult>;
            if (template != null)
            {
                var writer = new System.IO.StringWriter();
                template(null).WriteTo(writer, HtmlEncoder.Default);
                contentBuilder.AppendHtml(writer.ToString());
            }
        }
        return contentBuilder;
    }
}

PartialSectionScripts is called in the partial view in place of where you would otherwise be using @section Scripts.

RenderPartialSectionScripts would typically be called in your shared layout, e.g. _Layout.cshtml in the standard scaffolded projects, and will render any scripts added in partials via the PartialSectionScripts method call.

Here’s an example from a partial view of using PartialSectionScripts:

@Html.PartialSectionScripts(
    @<script>
        alert('Hello from the partial view!');
    </script>
)

And the example with the RenderPartialSectionScripts line added in your shared layout, where you would likely want to place it after your RenderSection and before the end of the body:

    @*...*@
    @RenderSection("Scripts", required: false)
    @Html.RenderPartialSectionScripts()
</body>
</html>

ASP.NET Core: Invalid non-ASCII or control character in header

I’m running some ASP.NET Core applications under Kestrel on Linux proxied by NGINX. I ran into an issue recently where I was getting the following exception when I redirected with certain strings:

System.InvalidOperationException: Invalid non-ASCII or control character in header: 0x00ED

The problem is that Kestrel does not support non-ASCII characters in the HTTP header, and I was redirecting with a string containing í, though many Unicode characters would cause the same problem. Here’s what a section of the problematic HTTP response looks like:

HTTP/1.1 302 Found
Location: /locations/juana-díaz
[Rest of the Response Here]

The solution is pretty simple: you need to encode strings that contain non-ASCII characters, the same way you’d encode strings containing any other characters with special meaning in a URL. The easiest thing to do is to use WebUtility.UrlEncode from System.Net; for example:

using System.Net;
// ...
var encodedLocationName = WebUtility.UrlEncode(locationName);
return Redirect("~/locations/" + encodedLocationName);

Which will then produce a redirect that Kestrel is happier with, and looks something like this:

HTTP/1.1 302 Found
Location: /locations/juana-d%C3%ADaz
[Rest of the Response Here] 
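One caveat worth knowing (my own observation, not part of the original issue): WebUtility.UrlEncode does form-style encoding, so it turns spaces into +, whereas Uri.EscapeDataString percent-encodes them as %20, which is usually what you want in a path segment. Both handle the non-ASCII case the same way:

```csharp
using System;
using System.Net;

Console.WriteLine(WebUtility.UrlEncode("juana díaz"));   // juana+d%C3%ADaz
Console.WriteLine(Uri.EscapeDataString("juana díaz"));   // juana%20d%C3%ADaz
```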

ASP.NET Core: Logging with Serilog to MongoDB Using Configuration

This is likely a bit esoteric, but I was recently setting up an ASP.NET Core 2.2 project where I wanted to store logs in MongoDB. I wanted to try Serilog versus some of the alternatives. I also wanted to use a configuration file, and I wasn’t able to find any good examples online, so I figured I’d create a quick example in this post. This example is applicable to ASP.NET Core 2.2; it may work for other versions, but isn’t tested with them.

First you’ll want to install the following NuGet packages:

Serilog
Serilog.AspNetCore
Serilog.Settings.Configuration
Serilog.Sinks.MongoDB

Next, go to your appsettings.json/appsettings.Development.json files. You can delete the Logging section in the default scaffolded files and add a Serilog section as below. Here is a basic example of my Development version with only the AllowedHosts section left from the scaffolded version:

{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Debug",
      "Override": {
        "Microsoft": "Warning",
        "System": "Warning"
      }
    },
    "WriteTo": [
      {
        "Name": "MongoDBCapped",
        "Args": {
          "databaseUrl": "mongodb://localhost/logs",
          "collectionName": "log",
          "cappedMaxSizeMb": "50",
          "cappedMaxDocuments": "1000"
        }
      }
      // Add other sinks here if desired...
    ]
  },
  "AllowedHosts": "*"
}

A lot of this is straightforward, such as the database URL and the collection name. In the example above, I’m using a capped collection ("Name": "MongoDBCapped" in the WriteTo sinks section) because in my application I don’t care about keeping logs forever. If you want to use a normal collection, change the name to "MongoDB" and remove the capped* settings from "Args".
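For comparison, the uncapped version of the sink entry would look something like this (reusing the placeholder databaseUrl and collectionName values from the example above):

```json
{
  "Name": "MongoDB",
  "Args": {
    "databaseUrl": "mongodb://localhost/logs",
    "collectionName": "log"
  }
}
```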

For “MinimumLevel”, you’ll likely want to raise the values in your production settings. Microsoft has a nice explanation of ASP.NET Core Log Levels.
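For example, a production Serilog section might raise the levels to something like the following (the exact levels here are a judgment call for your application, not a recommendation):

```json
"MinimumLevel": {
  "Default": "Warning",
  "Override": {
    "Microsoft": "Error",
    "System": "Error"
  }
}
```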

Now you’re going to want to set Serilog up in code. There are a couple of similar ways to do this, but I prefer adding the .UseSerilog call in Program.cs inside CreateWebHostBuilder. Right after .UseStartup, add the following code:

.UseSerilog((context, config) =>
{
    config.ReadFrom.Configuration(context.Configuration);
});

The whole section in Program.cs might look like this, if you have no other builder calls:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseSerilog((context, config) =>
        {
            config.ReadFrom.Configuration(context.Configuration);
        });

And that’s really it from a setup perspective. At this point, you can add an ILogger<T> constructor parameter to get a working logger from dependency injection. For example:

public class HomeController : Controller
{
    private readonly ILogger<HomeController> _logger;
    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }
    public IActionResult Index()
    {
        _logger.LogInformation("Hello from inside HomeController.Index!");
        return View();
    }
    // Rest of the class here...
}

MongoDB C# Aggregating and Grouping By Distinct Fields

This seems to be a difficult solution to search for at the moment. The MongoDB .NET driver documentation links to the unit test files on GitHub for examples of aggregation-related items, but the links are broken. Browsing the repos on GitHub is useful for finding the actual tests, but for the specific problem I was trying to solve, they weren’t entirely helpful.

Basically, what I was trying to do would be relatively easy in SQL: a SELECT DISTINCT from a table/collection based on multiple fields. I really just wanted to get the unique combinations of several fields at the database layer, without having to retrieve all the documents from the collection and then figure out which were distinct.

Below is a bit of a contrived example using an IMongoCollection<Location> named Locations, where the code groups on State, StateShortName, StateSlug, and CountrySlug and retrieves the distinct combinations into a List<State>.

var uniqueStates = await _context
        .Locations
        .Aggregate()
        .Group(
            i => new { i.State, i.StateShortName, i.StateSlug, i.CountrySlug },
            g => new State
            {
                Name = g.First().State,
                ShortName = g.First().StateShortName,
                Slug = g.First().StateSlug,
                CountrySlug = g.First().CountrySlug
            })
        .ToListAsync();

The first argument to the Group method is the new _id for the group, and the second projects each group into the output object. The output fields need to use accumulator operators, such as First, Distinct, Max, Min, Sum, etc. This is also possible using BsonDocument, but I wanted to use expressions and keep it strongly typed.
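Under the hood, this expression-based Group should translate to a single $group stage on the server. Roughly, the pipeline looks like the following sketch in mongo shell syntax (field names assume the example above; the driver may generate slightly different field names):

```javascript
db.Locations.aggregate([
  {
    "$group": {
      // The compound _id defines the distinct combination of fields.
      "_id": {
        "State": "$State",
        "StateShortName": "$StateShortName",
        "StateSlug": "$StateSlug",
        "CountrySlug": "$CountrySlug"
      },
      // $first pulls each value from the first document in the group.
      "Name": { "$first": "$State" },
      "ShortName": { "$first": "$StateShortName" },
      "Slug": { "$first": "$StateSlug" },
      "CountrySlug": { "$first": "$CountrySlug" }
    }
  }
])
```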

It’s more or less the equivalent of something like the below in SQL:

SELECT DISTINCT
	l.State, l.StateShortName, l.StateSlug, l.CountrySlug
	FROM Locations l

There may be a more straightforward way to do this, and if you know a better way, please feel free to let me know.

Appsettings.json in a .NET Core Console Application

This is applicable to .NET Core 2.0 and is the equivalent of the old app.config and ConfigurationManager pattern. Below are the steps to set it up.
This is a modified version of Ray Vega’s answer on Stack Overflow.

  1. Create a file named appsettings.json at the project root. (The filename can actually be anything, and is referenced below, but appsettings.json is a good convention.)
  2. Add your settings to that file in JSON format. Note that the file is much more flexible than the old app.config, and this is just one option. I currently prefer to have an AppSettings section here:
    {
      "AppSettings": {
        "Key1": "KeyValue1",
        "Key2": "KeyValue2"
      }
    }
  3. Set appsettings.json to be copied to the output directory if there have been any changes, since we’ll load it from the default path below (in Visual Studio, right-click the file and choose Properties; then, under the advanced section in the window, set Copy to output directory to Copy if newer):
    [Screenshot: the Copy if newer setting in Visual Studio]
  4. Install these two NuGet packages:
    • Microsoft.Extensions.Configuration.Json
    • Microsoft.Extensions.Options.ConfigurationExtensions
  5. Add an AppSettings.cs file to your project with a class and properties that match the names you added in the JSON file above. For example:
        public class AppSettings
        {
            public string Key1 { get; set; }
            public string Key2 { get; set; }
        }
  6. Add the following code to Main in Program.cs, or factor it out into a separate function (note the filename "appsettings.json" below; it should match whatever you created above):
    using System;
    using System.IO;
    using Microsoft.Extensions.Configuration;

    namespace ConsoleApplication
    {
        class Program
        {
            static AppSettings appSettings = new AppSettings();

            static void Main(string[] args)
            {
                var builder = new ConfigurationBuilder()
                    .SetBasePath(Directory.GetCurrentDirectory())
                    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);
                var configuration = builder.Build();

                // Bind the AppSettings section onto the static instance above.
                ConfigurationBinder.Bind(configuration.GetSection("AppSettings"), appSettings);

                // The rest of your program here
            }
        }
    }
  7. You can now access the configuration values using properties of the appSettings field, for example:
    Console.WriteLine(appSettings.Key1);
    Console.WriteLine(appSettings.Key2);
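As an aside, if you’d rather not pre-create the instance, the Get<T> extension from Microsoft.Extensions.Configuration.Binder (pulled in transitively by the packages above) can build the bound object for you. A minimal sketch, assuming the same AppSettings class and the configuration object from step 6:

```csharp
// Builds and binds a new AppSettings instance from the "AppSettings" section,
// instead of binding onto an existing object with ConfigurationBinder.Bind.
var appSettings = configuration.GetSection("AppSettings").Get<AppSettings>();
Console.WriteLine(appSettings.Key1);
```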

'Microsoft.SqlServer.Types' version 10 or higher could not be found

If you have a connection to SQL Server where you’re using spatial types with Entity Framework, and deploying to an Azure service/server, you’re likely to come across the exception:

Spatial types and functions are not available for this provider because the assembly ‘Microsoft.SqlServer.Types’ version 10 or higher could not be found.

While you could install SQL Server on the target machine, a far easier way around it is to install the NuGet package Microsoft.SqlServer.Types.
After you install that package, you’ll want to call SqlServerTypes.Utilities.LoadNativeAssemblies before making any actual Entity Framework calls.
ASP.NET (e.g. in Global.asax.cs -> Application_Start):

SqlServerTypes.Utilities.LoadNativeAssemblies(Server.MapPath("~/bin"));

Standalone Application:

SqlServerTypes.Utilities.LoadNativeAssemblies(AppDomain.CurrentDomain.BaseDirectory);

Additionally, current versions of Entity Framework contain a hardcoded reference that only looks for versions 10 and 11 of the Microsoft.SqlServer.Types assembly. You can fix this in code as follows (the version number should match the version you installed via the NuGet package above):

SqlProviderServices.SqlServerTypesAssemblyName = "Microsoft.SqlServer.Types, Version=14.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

Putting it all together, the final versions look like this:
ASP.NET:

SqlProviderServices.SqlServerTypesAssemblyName = "Microsoft.SqlServer.Types, Version=14.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
SqlServerTypes.Utilities.LoadNativeAssemblies(Server.MapPath("~/bin"));

Standalone Application:

SqlProviderServices.SqlServerTypesAssemblyName = "Microsoft.SqlServer.Types, Version=14.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
SqlServerTypes.Utilities.LoadNativeAssemblies(AppDomain.CurrentDomain.BaseDirectory);