"Swap with preview" should be used when upgrade a production database

I wrote previously about how I used EF Core migrations to upgrade a production database. I have two slots, production and staging, and I swap them every time I push new code that doesn't involve database changes.  This week I had to make a data change to the production database, which was a good reminder of when to use the Swap with preview feature of Azure App Service.

I recently decided to change the versioning scheme a bit in the hope of turning out releases more often, so the upcoming release originally planned as v1.1 has now become v2.0.  As a result, I opened issue #235, "EF Migration and SQL upgrade files got wrong version numbers", to fix the outdated text in these files.

Migration code changes

In addition, I need to update a record in the __EFMigrationsHistory table; specifically, the second row's MigrationId needs to become "20180530163323_FanV2_0".

__EFMigrationsHistory table needs data update

To update this record I can run a simple SQL update statement while the site is still running, because EF checks this table only when the application starts up, to see if there are new migrations to apply.  The challenge is that I don't want a request to hit the database before the new code gets there, because the existing code looks for the "20180530163323_FanV1_1" record and, if it's not there, will apply the migration, causing problems.
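The data change itself is a one-liner. A sketch of the statement I'd run, using the migration IDs mentioned above (verify the existing row before updating):

```sql
UPDATE [__EFMigrationsHistory]
SET    [MigrationId] = '20180530163323_FanV2_0'
WHERE  [MigrationId] = '20180530163323_FanV1_1';
```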

When using Swap with preview, these are the steps that take place:

1. First, get the staging database and server ready with the new data and code (since it's staging, I can just stop the server and do whatever, so that's easy).

2. Update the production database with the data change (the production site is still running).

3. Start the swap with "Swap with preview". This applies all production configuration to the staging slot while keeping production running. It seems you can now do this from either slot; I recall the option was previously available only in the production slot.

4. Azure then restarts staging using this production configuration; staging now runs against the production database, which is fine since staging has the new code.

5. Finally, do "Complete swap"; Azure moves the already warmed-up staging slot into production.
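The same preview flow can also be driven from the Azure CLI; a sketch, where the app name myapp and resource group myrg are placeholders of mine:

```shell
# Start the swap in preview mode: staging picks up production's settings
az webapp deployment slot swap -g myrg -n myapp --slot staging --action preview

# After verifying the staging slot, finish the swap...
az webapp deployment slot swap -g myrg -n myapp --slot staging --action swap

# ...or back out and restore staging's own settings
az webapp deployment slot swap -g myrg -n myapp --slot staging --action reset
```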

In conclusion, I could do this because it was only a data change.  If there were schema changes, you might not be able to update the production database first while the production service is still running; in that case I'd refer to my previous post.

Structured Logging with Serilog and Seq

Fanray uses Serilog and practices structured logging.  One of the output sinks is Seq.


The wiring of Serilog happens inside Program.cs

  public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
      WebHost.CreateDefaultBuilder(args)
          .UseStartup<Startup>()
          .UseSerilog(); // plugs Serilog into the host

And its configuration with Seq happens in appsettings.json and appsettings.Production.json. Below is the snippet from appsettings.json, with MinimumLevel set to Debug for local development.

  "Serilog": { 
   "Using": [ 
   "WriteTo": [ 
       "Name": "Console" 
       "Name": "Seq", 
       "Args": { 
         "serverUrl": "http://localhost:5341" 
   "MinimumLevel": { 
     "Default": "Debug", 
     "Override": { 
       "Microsoft": "Information", 
       "System": "Information" 

Structured Logging

Structured logging enables you to write entire objects as JSON into the logs and later search for them using their properties as criteria.

For example, I have this code in BlogService.cs that logs a BlogPost object.  Notice the special syntax {@BlogPost}:

_logger.LogDebug("Show {@BlogPost}", blogPost);

That outputs the following; notice it also includes the nested objects inside BlogPost, like Category, Tags, etc.

   Type: "BlogPost", 
   CategoryTitle: "Technology", 
   Tags: […], 
   TagTitles: […], 
   Body: "<p>This is a test post.</p>", 
   BodyMark: null, 
   Category: {…}, 
   CategoryId: 2, 
   CommentCount: 0, 
   CommentStatus: "NoComments", 
   CreatedOn: "2018-09-22T02:23:40.0000000+00:00", 
   CreatedOnDisplay: "yesterday", 
   UpdatedOnDisplay: "09/21/2018", 
   Excerpt: "This is a test post.", 
   ParentId: null, 
   RootId: null, 
   Slug: "this-is-a-test", 
   Status: "Draft", 
   Title: "This is a test", 
   UpdatedOn: "2018-09-22T02:23:41.0002676+00:00", 
   User: {…}, 
   UserId: 1, 
   ViewCount: 0, 
   PostTags: […], 
   Id: 5, 
   _typeTag: "BlogPost" 


Seq is a tool that lets you search through what you have logged, taking full advantage of structured logging.  It is my recommended way to view logs while developing Fanray.

To get started with Seq, first download and install it from https://getseq.net/, then during development simply go to http://localhost:5341.

Search for Objects

To search for an object, say, all BlogPost objects whose Body contains the word "Welcome", case insensitive.

Seq filters BlogPost with Body that contains word Welcome

Here are the docs on Seq's filter syntax and cheat sheet: https://docs.getseq.net/docs/query-syntax

Search for a Request

A request is a common thing to search for in the logs, and ASP.NET Core outputs detailed debug information during development.

For example, when I publish a new post from Open Live Writer, I want to see exactly what happens during that publish request.  Since ASP.NET Core provides this detailed info for each request, you can search for one by its id, "0HLH1ICD1KGGF:00000001".  The screenshot below shows the entirety of that request; the bottom rectangle marks the start of the request and the top rectangle marks its finish.  From this you can see things like all the SQL EF Core executed, the call to the MetaWeblog API newPost endpoint and the XML it received, and other good stuff.

Seq shows a request from Open Live Writer

The type or namespace name could not be found in your ".g.cshtml.cs" file

Last night I changed the namespace of my User class from Fan.Models to Fan.Membership (I probably should have used a refactoring tool but didn't), and the web project Fan.Web stopped building.

The type or namespace name could not be found in my .g.cshtml.cs file

These are CS0246 compiler errors, all saying "type or namespace name could not be found" and pointing to my ".g.cshtml.cs" files.

The actual issues are in the .cshtml files, but Visual Studio points to the compiled versions of those .cshtml files. Moreover, clicking on these errors in Visual Studio opens neither the .cshtml nor the .g.cshtml.cs file, so for a moment I was left guessing at what to do.

My first thought was that the compiled Razor pages had been cached and not cleaned out.  Then I went all over looking for these files; maybe it was just getting late at night.  Finally I right-clicked and copied the error out to see that they are right in my local folder, src\Fan.Web\obj\Debug\netcoreapp2.1\Razor\Pages\Admin\Categories.g.cshtml.cs

To sum up, Razor pages are compiled into these C# .g.cshtml.cs files, located in your web project under "...\obj\Debug\netcoreapp2.1\Razor\Pages\...".  If your .cshtml pages use namespaces that you just renamed, Visual Studio won't point you to the .cshtml but to the compiled version instead.

Gotcha: Time zone was not found on the local computer

A unit test failed when I built my code on Bitbucket, throwing a System.TimeZoneNotFoundException.  It happens where I try to convert a UTC time to local time in a specific time zone. Interestingly, the same unit test passes on my local Windows 10 machine and on AppVeyor.

The exact error message is:

The time zone ID 'Pacific Standard Time' was not found on the local computer. 

The exception occurred on the following line of code; its parameter timeZoneId was passed the value "Pacific Standard Time".

TimeZoneInfo userTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);

The issue comes from the fact that there is more than one source of time zone identifiers, notably the Windows registry and the IANA time zone database.

And they use different values: what Windows calls "Eastern Standard Time", IANA calls "America/New_York".

I found a cool library, TimeZoneConverter, that solves exactly this problem with minimal code change. My previous line becomes

TimeZoneInfo userTimeZone = TZConvert.GetTimeZoneInfo(timeZoneId);

It can work with time zone IDs from either source.

// Either of these will work on any platform: 
TimeZoneInfo tzi = TZConvert.GetTimeZoneInfo("Eastern Standard Time"); 
TimeZoneInfo tzi = TZConvert.GetTimeZoneInfo("America/New_York"); 

Alternatively, I've read that the Noda Time library doesn't have these issues; for right now, just before a release, I simply want to resolve this with the least code change.

Production deployment with Azure App Service slots and Entity Framework Core migrations

I recently made some progress on this blog and wanted to push the latest changes live. I've been using EF Core migrations to handle any data-related changes, and they have worked out very well for me during development. I expected them to work just as well come production time, but nonetheless I wanted to first see what others think about using EF migrations against a production database.  I found this StackOverflow question, Is it OK to update a production database with EF migrations? The short answer is yes! Now, having done this myself, I can say that EF migrations used in conjunction with Azure deployment slots make production deployment really easy. In this post I explain my setup and the simple steps I followed to upgrade this site.

Deployment slots

Azure App Service deployment slots give you the ability to set up multiple environments, like production, staging and test, and to quickly swap between them.

My Azure deployment slots and databases

This feature is only available on the Standard service plan or higher. This site was running on an Azure B1 App Service plan, so I had to scale up to an S1. With the Standard plan I created a Staging slot.  Also, I previously had an S0 SQL Database as my production database; this time I created a Basic SQL Database to serve as the staging database.

Here I'm trying to save money, so I created only two environments, production and staging, and let staging double as testing.  Ideally, staging should closely mimic production and even share the same database.

In addition to the app services and databases, I also have separate Application Insights and Blob Storage accounts for each of the production and staging environments; they are not shown in the diagram.

Deployment steps

My Azure deployment flow


Step 1

Last year I deployed my v1.0 app from GitHub to the production app service, and EF Core automatically populated the production database. This week I did the same for the Staging app service and database.

By this point the Production and Staging environments have exactly the same code and DB schema.

Step 2

I deployed the v1.1 alpha to the staging slot and let EF upgrade the Staging DB to the newer v1.1 schema. Here I actually had a choice between running a SQL upgrade script myself and letting EF do it; more on this later.

This step is critical, because if it works, pushing v1.1 to production should work too.  I did encounter a BadImageFormatException right after pushing v1.1 to staging; this error had never occurred during development and it took me a while to track down.  The point is that deploying to staging first gave me the chance to fix it.

At this moment the entire Staging web app and database had the latest code and were working well.

Step 3

Remember to check mark your slot-specific settings before swapping!


It was time to do the swap, but before that I made sure the slot-specific settings in both production's and staging's Application Settings were checked. Each environment has its own database, Application Insights and Blob Storage; check-marking these settings keeps them from going along with the swap.

Before swapping setup Source and Destination

Now I was ready to swap, and I had a choice between a plain Swap and Swap with preview. In either case, make staging the Source and production the Destination.

A plain Swap starts swapping the source and destination immediately; the first time I did this it took about 1 minute and 15 seconds to complete.  Basically, Azure first warms up the staging slot and then completes the actual swap.

Swap with preview, on the other hand, is not immediate. According to the Azure docs, the following happens:

  • It keeps production unchanged so existing workload on that slot is not impacted.
  • It applies configurations of production slot to staging slot, including the production slot-specific settings!
  • It restarts the worker processes on the staging slot using these production configuration elements.
  • When you complete the swap: it moves the pre-warmed-up staging slot into the production slot, and production slot into the staging slot as in a manual swap.
  • When you cancel the swap: it reapplies the settings of the staging slot to the staging slot.

Below is a screenshot showing that when you do Swap with preview, the connection strings used by both slots are the same.

Azure slot swap Preview Changes shows both Production and Staging using same connection strings


Step 4

When you preview the swap, the staging site comes up with production data; in other words, Azure warms up the staging slot.  So when you are ready to Complete swap, it finishes more efficiently than my earlier direct swap did; the complete-swap step took only about 20 to 30 seconds.

Complete swap after previewing staging


EF Core Migrations

For the upcoming Fanray v1.1 I made a series of changes to the database schema, including adding and renaming columns, making a column nullable, updating existing data and inserting new data.  As I mentioned in step 2, I had a choice of how to do the database upgrade: either let EF do it automatically, or use EF to generate an upgrade SQL script first and run that against the database. Some organizations prefer to have a DBA look over any SQL upgrade script first.  I have tried both ways and both worked perfectly.

To generate the SQL script, you can run either the EF PowerShell command Script-Migration or the dotnet CLI command dotnet ef migrations script.
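A sketch of the CLI variant; the FromMigration and ToMigration names and the output file name are placeholders of mine:

```shell
# Script everything after FromMigration up to ToMigration as idempotent SQL
# that a DBA can review before it runs against production
dotnet ef migrations script FromMigration ToMigration --idempotent --output upgrade.sql
```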

Final thoughts on what happens during the upgrade 

Remember that back in step 3, during the swap preview, staging actually gets all of production's settings.  As a result, EF is already upgrading your production database at that point.  Based on the success of step 2 I knew this upgrade should work, but while it's happening, remember that the production website is still running v1.0 of the app against the very database staging is upgrading to v1.1!

I tested locally that my v1.0 app does work with the v1.1 DB schema, but this is obviously an issue if there are breaking changes: site visitors may see errors during that one-minute swap on the live site.  The first remedy that comes to mind is the old app_offline.htm trick, as discussed in this SO question, How to use IIS app_offline.htm file with Azure. The downside is that, even though the swap happens quickly, your site is still down to visitors during that time.

One of the answers on that SO question says "you should be able to virtually eliminate down time with Azure by running multiple instances".  As I explained above, I'm not sure that's the case.  The comment left below that answer is more in line with what I observed.

My co-founder is actually the Azure expert on our team, and we are already running multiple instances with SQL Azure. However, earlier today, he needed to update the DB schema which meant that part of the site was down for several minutes. When I hit the site, I was redirected to my main ErrorPage. But I would have preferred to have had the app_offline.htm file in the root during those few minutes. I was just under the impression that it's non trivial to be doing file I/O related things on an Azure deployment.

Azure also provides SQL Database backups, so if upgrading the production database fails you can restore it from a backup.  This has been my deployment flow so far. Is there a better way, and how are you approaching the deployment and upgrade of production databases?  Please let me know what you think.

How to update git commit messages (single or multiple, local or remote)

Whether you want to update a single commit message or several, local or already pushed, this post shows you how.

To update the most recent local commit message:

$ git commit --amend

The text editor opens; edit your commit message, then save and close the file.

To update multiple local commit messages:

$ git rebase -i HEAD~3 # Modify the last 3 commits

You will see something like the following:

pick e499d89 Delete CNAME
pick 0c39034 Better README
pick f7fde4a Change the commit message but push the same commit.

# … with some instructions in the comments here …

Replace pick with reword, then save and close the file.

Git opens the first commit in the text editor; type the new commit message, save and close the file. Git then opens the next commit, and so on until you have updated, saved and closed the last one.

To update commits that have been pushed

Do exactly as explained above, whether it's the last commit or the last several commits. Then run

$ git push --force

One thing to note is that the commit hashes will change as well.

Reference: Changing a commit message

Angular 5 vs React 16

After releasing Fanray v1 I took some time to research what to learn and build next.  A blog roughly has two parts: the public-facing site, the part visitors see, which is normally themed; and an admin console, where blog owners and writers log in, write posts and manage the entire site. The public site is normally an MPA, Multi-Page Application, meaning a full browser reload when you go from one page to another, whereas the admin console is a good candidate for a SPA, Single Page Application.

The question is which front-end framework or library to use. I had Angular experience in the past: I built the Chef.me project with AngularJS 1.x and used Angular 2 in hackathons. But since I have the luxury of building something entirely from the ground up, I want to experiment with what's out there. I considered four: Angular, React, Vue and Ember. Tough choices, but I had to make my picks, and eventually I came down to two: Angular vs React.

There are numerous articles out there comparing these technologies; a couple stood out to me.

Here is some basic info I compiled from my research.

                             Angular                                              React
CLI                          Angular CLI                                          create-react-app
Binding                      Two way                                              One way
DOM                          Regular DOM                                          Virtual DOM
Dominant Language            TypeScript                                           ES6
Static Type Checking         TypeScript with DefinitelyTyped                      Flow
Html Template                Either html file or inline in the component ts file  JSX
Recommended Editor           Visual Studio Code                                   Atom with Nuclide
Native Mobile Development    NativeScript (by Progress)                           ReactNative
Material Design              Angular Material                                     Material-UI


Below is how Angular and React each work, in a simple example.


The best way to get an Angular project started is through its CLI (v1.6.1 as of this writing): ng new my-angular-app. After you build it for production with ng build --prod, below is your Angular app's index.html.

It includes three JavaScript bundles: inline (the webpack loader), polyfills and main (your code plus styles and vendor). The main bundle is about 147 KB. All builds use bundling and limited tree-shaking, while --prod builds also run limited dead-code elimination via UglifyJS. The CLI also has experimental service worker support for production builds that you can enable manually; I mention it because, as you will see, React has this support too. For more information see the ng build documentation.

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <base href="/">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <link href="styles.d41d8cd98f00b204e980.bundle.css" rel="stylesheet"/>
</head>
<body>
  <app-root></app-root>
  <script type="text/javascript" src="inline.19f3f7885ab6e4e2dee3.bundle.js"></script>
  <script type="text/javascript" src="polyfills.f039bbc7aaddeebcb9aa.bundle.js"></script>
  <script type="text/javascript" src="main.5f6465ddee537c95d12a.bundle.js"></script>
</body>
</html>

The index.html also includes your Angular root selector, <app-root></app-root>. When your website starts, the Angular app's main entry point is main.ts.

import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';
import { environment } from './environments/environment';

if (environment.production) {
  enableProdMode();
}

platformBrowserDynamic().bootstrapModule(AppModule)
  .catch(err => console.log(err));

Then main.ts bootstraps an Angular module, AppModule; each Angular app must have at least one module. Here is app.module.ts.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

After that, AppModule bootstraps a very simple Angular component, AppComponent.  Here is what that component looks like in app.component.ts.

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'app';
}

Finally, the component has an HTML template that replaces <app-root></app-root> in index.html and is shown to users in the browser.

So the Angular component flow is like this:

An HTML page with some angular directives –> Module loader calls main.ts –> bootstraps AppModule –> bootstraps AppComponent –> replaces the angular directive with its template content.


React's CLI is called create-react-app (v1.4.3 as of this writing); running create-react-app my-react-app creates a starter project for you.  After you build it for production with the react-scripts build command, below is your React app's index.html.

It includes one main JavaScript bundle with everything except styles, about 113 KB. Notice that React does not provide polyfills out of the box; you need to add them manually.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,initial-scale=1,shrink-to-fit=no">
  <meta name="theme-color" content="#000000">
  <link rel="manifest" href="/manifest.json">
  <link rel="shortcut icon" href="/favicon.ico">
  <title>React App</title>
  <link href="/static/css/main.9a0fe4f1.css" rel="stylesheet">
</head>
<body>
  <noscript>You need to enable JavaScript to run this app.</noscript>
  <div id="root"></div>
  <script type="text/javascript" src="/static/js/main.656db2cf.js"></script>
</body>
</html>

The index.html also has a <div id="root"></div>.  When your website starts, the main entry point for the React app is index.js.

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import registerServiceWorker from './registerServiceWorker';

ReactDOM.render(<App />, document.getElementById('root'));
registerServiceWorker();

Then index.js calls ReactDOM.render, which renders your App component and attaches its output to the root div. Notice that it also calls registerServiceWorker() from registerServiceWorker.js.  This serves assets from a local cache, letting the app load faster on subsequent visits in production and giving it offline capabilities. However, it also means that developers (and users) will only see deployed updates on the "N+1" visit to a page, since previously cached resources are updated in the background. For more information see the create-react-app documentation.

The App component looks like this.

import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

class App extends Component {
  render() {
    return (
      <div className="App">
        <header className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          <h1 className="App-title">Welcome to React</h1>
        </header>
        <p className="App-intro">
          To get started, edit <code>src/App.js</code> and save to reload.
        </p>
      </div>
    );
  }
}

export default App;

The React component flow is like this:

An HTML page with a div placeholder –> loader calls index.js –> calls ReactDOM.render –> calls your component –> attaches the HTML output to the div placeholder.


This post jots down my thoughts from brief research on Angular and React.  It only scratches the surface of comparing these two technologies, but it does give a glimpse of how each works with components, since components are the building blocks of both Angular and React.  Just by looking at the code, Angular seems to take more turns to render a component, but that is because it has its own concept of a module, which is basically used to group components. React feels more straightforward in the sense that you have an HTML tag and one piece of JavaScript that works on that tag.

Angular is a full-blown framework while React is a library; both can achieve exactly the same thing, and with React you add in everything else you need from other libraries. I love both technologies based on my experimentation.  In my view, Angular is better suited for a SPA, and I intend to build the admin console with it. Since React is more lightweight, I'd like to try it out on certain pages of the public site, replacing jQuery.

Fanray 1.0.0 released

From 8/14/2017 to 11/30/2017, it took me three and a half months to go from the initial commit to today's v1 release. I'm right on track with what I set out to do: learning in the open, building something I can use every day, and sharing all aspects of the process with the community.

It’s an MVP

V1 is not much, but it’s useful enough to bring you these words on this page. It was intended to be an MVP.

A Minimum Viable Product (MVP) is a product with just enough features to satisfy early customers, and to provide feedback for future product development.[1][2] Some experts suggest that in business to business transactions an MVP also means saleable: "it’s not an MVP until you sell it. Viable means you can sell it".

Here I myself am the early customer, and for it to be saleable to myself it has to have the basic features I think a blog should have (posts, categories, tags, comments, SEO considerations, RSS feeds, etc.), and on top of these it must be performant and stable.

Blogs have been around since the 90s, and their features vary greatly, from a static page with text to complex systems like WordPress. Ambition could easily kick in, scope could get out of hand, and I could start something I'd never finish on time. To avoid this I decided early on to support the MetaWeblog API, which dictates the set of features a blog needs to implement so that desktop clients like Open Live Writer can talk to it. This strategy has proven helpful: it limited my scope without ambiguity. It also gives me a rich client to start posting with before a full-blown admin console exists, which takes more time to develop and is coming in v1.1.


I’ve designed the app using n-tier architecture, a very typical presentation to business logic to data access setup. Below diagram also shows some of the clients the blog could potentially support and how they communicate.  For example, a desktop client talks to Fanray through MetaWeblog API which is built on XML-RPC, so the content type of communication is XML, whereas the browser talks to MVC controllers that return them HTML, CSS and JavaScript.

  • desktop (MetaWeblog API – XML)
  • browser (MVC – HTML)
  • mobile (Web API – JSON)
Fanray blog architecture


On top of the basic architecture, the practice of Skinny Controllers, Fat Models and Dumb Views is a very effective strategy for achieving Separation of Concerns. The web tier handles traffic and presentational logic only.  The business logic layer does most of the heavy lifting: validation, calculation, caching and much more. The data access layer does just data-access operations.  Of course there are grey areas (for example, validation can happen at any tier; that deserves a post of its own), but the basic idea is that each tier (or layer; I use the terms interchangeably) has a very specific concern. The different clients talk to different kinds of endpoints: the browser calls MVC controllers and Open Live Writer calls MetaWeblog API endpoints; both ask for the same business logic to be carried out, and the results are returned to the clients in different formats.


I’m making steady improvements to this app and hopefully others who happen to come across this project could find it useful as well. Any feedback is welcoming and if you would like to participate, please check out the GitHub repo on how to contribute. Thank you.

How to ask Google to re-crawl your site, and tips on avoiding broken links when you post

I’ve been testing my live site extensively this past month, posting, reposting and then I found my site’s Google search results come up with broken links. If you ever need to update your site with new URLs on existing resources, you want to think about the SEO implications first.

Google search results


Now, the first broken link is due to beta software: the code wasn't finalized on what URL to use to show the list of posts for a particular tag.  That has since been finalized; for example, posts tagged with azure are shown at www.fanray.com/posts/tagged/azure. But the second broken link is due to my updating an existing post.

How you may break your post link

By updating your post Slug

When you publish a post, Fanray automatically comes up with a slug based on your post title.  For example, my first post, Welcome to Fanray, yielded the slug welcome-to-fanray.  However, you can manually set this value to anything you want.

If you use Open Live Writer, go find the View all link.

Open Live Writer "View all" to see all properties


Clicking on it will open Post Properties.

Open Live Writer - Post Properties


All the post properties can be set here. If you set the slug manually, please follow the convention and make it an all-lowercase, hyphen-separated, alphanumeric string.

By updating your post Publish Date

Furthermore, updating the post slug is not the only thing that yields a new URL (and thus broken links); updating a post's publish date may do it too. A Fanray blog post uses the URL template "/post/{year}/{month}/{slug}"; if you first published a post in, say, October 2017 and later manually set its date to a different month, say November 2017, that results in a new URL.
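The template above can be illustrated with a quick substitution; the year, month and slug values here are example data of mine:

```shell
# Fill the "/post/{year}/{month}/{slug}" template with sample values
year=2017; month=10; slug=welcome-to-fanray
url="/post/$year/$month/$slug"
echo "$url"   # /post/2017/10/welcome-to-fanray
```

Changing either the slug or the year/month portion produces a different URL, which is exactly how search results end up pointing at a page that no longer exists.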

What I mostly do

The best way to avoid any of this is to never change a post's slug or publish date once the post has been out for a while, unless you have to.  When you first publish a post you can do whatever: enter these values yourself or leave them blank and let the blog set them.

I mostly only enter the Category and Keywords (tags), and sometimes the Excerpt if I want a different message than the one the blog generates; there is a setting to show excerpts instead of full posts. By default, if you leave Excerpt blank, the blog takes the first 55 words of your post as the excerpt.

How to ask Google to Re-crawl

Log on to Google Search Console and check the Dashboard to see if you have any URL Errors.

Google Search Console - Crawl Errors


Click on Crawl Errors and you will see the URL Errors listed.

Google Search Console - URL Errors


On the left, click on Fetch as Google, enter the new URL for the one that errored, and click FETCH.

Google Search Console - Fetch as Google


You have a quota on how many URLs you can fetch in a given period; for more info see Use Fetch as Google for websites.

Preferred Domain and URL Redirect

Last time I set up a Custom Domain and HTTPS for my Azure web app, but one issue remained: my website can be accessed from both the root domain fanray.com and the www.fanray.com subdomain.  This is bad for SEO; we need to tell search engines which one we prefer, hence we have to decide on a Preferred Domain, either www or non-www. The one you choose will be used to index your site's pages and to represent your site in search results.

So, www or non-www?

This is like a religious debate and there are numerous resources out there arguing both sides.

Luckily, Google does not care, so you can just pick one and stick with it. I chose www because of its ability to restrict cookies and its greater flexibility with DNS.

Set up Preferred Domain on Google

To set up the preferred domain, go to Google Search Console and add a website property for each of the URL variations your site supports: https, http, www, and non-www.  You will go through a verification process to prove you own the site; I chose to add a TXT record at my registrar.  You will also receive an email titled “Preferred domain changed for site …” for each property you set up.

Google Search Console properties


You set the preferred domain by going to the Gear icon > Site Settings.

Set Preferred domain


One common question is why Google only gives you the option to set up the http version but not the https. I found the question How to set preferred domain with https in Google Webmaster Tools, and according to one of its answers,

Google takes this automatically from your canonical link tag.

<link rel="canonical" href="https://example.com/">

So whenever the Google spider sees this line in your head section, Google automatically indexes the HTTPS version of your site.

If you go to any post page on this blog and view source, you will see something like the following tag with https.


<link rel="canonical" href="https://www.fanray.com/post/2017/11/26/custom-domain-and-https-for-azure-web-app" />

Note that the canonical link does not appear on the blog’s main page but on individual posts; I visited other sites like StackOverflow and TechCrunch, and this seems to be a common practice.

URL Redirect on Azure

Now that I’ve told Google what my preferred domain is, I still need to actually redirect requests going to the less preferred domain over to the preferred one. Two of the most commonly needed redirects for websites are

  • HTTP to HTTPS
  • Non-www to www, or www to non-www

There are many ways to get both done; you can even do domain forwarding from your registrar, but that is not reliable and thus not recommended.

In the last post I took care of the HTTP-to-HTTPS redirect by turning on HTTPS Only in the Azure portal; this is the easiest way to achieve it on Azure, but there is also a community-written Azure extension that can do it.

Turn HTTPS Only On


For the preferred domain, if you decide to go from www to non-www, there is also an Azure extension that does it. But currently I haven’t seen an extension that goes the other way, from non-www to www.

URL Redirect in ASP.NET Core

To do URL rewriting in ASP.NET Core, the common way is to use the RewriteMiddleware, which is part of the ASP.NET Core BasicMiddleware project.

To use this middleware, wire it up inside your Startup.cs Configure() method; the regex matching rules typically live in a separate config file.

// In Startup.Configure(); env is the injected IHostingEnvironment
app.UseRewriter(new RewriteOptions()
     .AddIISUrlRewrite(env.ContentRootFileProvider, "urlRewrite.config"));

For example, to redirect http to https in urlRewrite.config:

<?xml version="1.0" encoding="utf-8"?>
<rewrite>
  <rules>
    <rule name="Redirect to https" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTPS}" pattern="Off" />
        <add input="{HTTP_HOST}" negate="true" pattern="localhost" />
      </conditions>
      <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" />
    </rule>
  </rules>
</rewrite>

URL Redirect with HttpWwwRewriteMiddleware

Above I explained some of the options for doing URL redirects on Azure and in ASP.NET Core; the RewriteMiddleware is actually quite powerful and can handle very complex redirect rules.

However, I intended for Fanray to run anywhere .NET Core can run, not just on Azure; furthermore, I wanted the easiest configuration experience possible to get these two common scenarios done for users.  Therefore I’ve written a middleware called HttpWwwRewriteMiddleware that does only two redirects:

  • HTTP to HTTPS
  • Non-www to www, or www to non-www

To use this middleware, in your Startup.cs Configure() method add this line of code
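The extension method name below is an assumption based on the middleware's class name (check Fanray's source for the exact call); the wiring would look roughly like this:

```csharp
// In Startup.cs. UseHttpWwwRewrite() is an assumed extension method name
// for HttpWwwRewriteMiddleware; verify against Fanray's source.
public void Configure(IApplicationBuilder app)
{
    app.UseHttpWwwRewrite(); // reads PreferredDomain / UseHttps from AppSettings
    // ... the rest of the pipeline (static files, MVC, etc.)
}
```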


Then in appsettings.Production.json there are two settings

"AppSettings": {
   // The preferred domain to use: "auto" (default), "www" or "nonwww".
   // - "auto" will use whatever the url is given, will not do forward
   // - "www" will forward root domain to www subdomain, e.g. fanray.com -> www.fanray.com
   // - "nonwww" will forward www subdomain to root domain, e.g. www.fanray.com -> fanray.com
   "PreferredDomain": "www",  // Whether to use https: false (default) or true.
   // - false, will not forward http to https
   // - true, will forward http to https
   "UseHttps": true

When you deploy to Azure, if you would like the non-www version, simply update “PreferredDomain” to “nonwww” and all requests to the www subdomain will redirect to your root domain.
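Conceptually, middleware like this boils down to a host/scheme check followed by a permanent redirect. A minimal sketch of that decision logic, assuming the setting values above (this is not Fanray's actual code):

```csharp
public static class RedirectSketch
{
    // Returns the URL to redirect to, or null if the request already
    // matches the preferred scheme and domain.
    // preferredDomain: "auto", "www" or "nonwww"; useHttps forwards http -> https.
    public static string GetRedirectUrl(string scheme, string host, string path,
                                        string preferredDomain, bool useHttps)
    {
        var newScheme = useHttps ? "https" : scheme;
        var newHost = host;
        if (preferredDomain == "www" && !host.StartsWith("www."))
            newHost = "www." + host;
        else if (preferredDomain == "nonwww" && host.StartsWith("www."))
            newHost = host.Substring(4);

        if (newScheme == scheme && newHost == host)
            return null; // already on the preferred scheme and domain
        return $"{newScheme}://{newHost}{path}";
    }
}

// GetRedirectUrl("http", "fanray.com", "/", "www", true)
//   -> "https://www.fanray.com/"
```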


At the end of this post, I have Fanray deployed to Azure App Service, my Custom Domain and HTTPS working, and all requests now redirecting to the Preferred Domain I chose.
