Production deployment with Azure App Service slots and Entity Framework Core migrations

I recently made some progress on this blog and wanted to push the latest changes live. I've been using EF Core migrations to handle any data-related changes, and they have worked out very well for me during development. When it came time to push to production I expected them to work just as well, but I nonetheless wanted to first see what others think about running EF migrations against a production database. I found the StackOverflow question Is it OK to update a production database with EF migrations? The short answer is YES! Now having done this myself, I can say that EF migrations used in conjunction with Azure deployment slots make production deployment really easy. In this post I want to explain my setup and the simple steps I followed to upgrade this site.

Deployment slots

Azure App Service deployment slots give you the ability to set up multiple environments, like production, staging and test, and quickly swap between them.

My Azure deployment slots and databases

This feature is only available on the Standard service plan or higher. This site was running on an Azure B1 App Service plan, so I had to scale it up to an S1. With the Standard plan I created a Staging slot. Also, I previously had an S0 SQL Database as my production database; this time I created a Basic SQL Database to serve as the Staging database.

In addition to the app services and databases, I also have separate Application Insights and Blob Storage accounts for each of the production and staging environments; they are not shown in the diagram.

Deployment steps

My Azure deployment flow

Step 1

Last year I had my v1.0 app deployed from GitHub onto the production app service, and EF Core automatically populated the production database. This week I did the same thing for the Staging app service and database.

By this point the Production and Staging environments had exactly the same code and DB schema.

Step 2

I deployed the v1.1 alpha to the staging slot and let EF upgrade the Staging DB to the newer v1.1 schema. Here I actually had a choice between running a SQL upgrade script myself to upgrade the database or letting EF do it; more on this later.

This step is critical, because if it works here, pushing v1.1 to production should work too. I did encounter a BadImageFormatException right after I pushed v1.1 to staging; this error had never occurred during development and it took me a while to track down what was happening. The point is that deploying to staging first gave me the chance to fix it before it reached production.

At this point the entire Staging web app and database had the latest code and were working well.

Step 3

Remember to check mark your slot-specific settings before swapping!

It was time to do the swap, but before doing that I made sure the slot-specific settings in both the production and staging slots' Application Settings were checked. Each environment has its own database, Application Insights and blob storage; marking these settings as slot-specific keeps them from moving with the swap.

Before swapping setup Source and Destination

Now I was ready to do the swap, and I had a choice between Swap and Swap with preview. In either case, staging is the Source and production the Destination.

Doing Swap will start swapping the source and destination immediately, and the first time I did this it took about 1 minute and 15 seconds to complete.  Basically Azure first warms up the staging slot and then completes the actual swap. 

Swap with preview, on the other hand, is not immediate; according to the Azure documentation, the following happens:

  • It keeps production unchanged so existing workload on that slot is not impacted.
  • It applies configurations of production slot to staging slot, including the production slot-specific settings!
  • It restarts the worker processes on the staging slot using these production configuration elements.
  • When you complete the swap: it moves the pre-warmed-up staging slot into the production slot, and production slot into the staging slot as in a manual swap.
  • When you cancel the swap: it reapplies the settings of the staging slot to the staging slot.

Below is a screenshot showing that when you do Swap with preview, the connection strings used by both slots are the same.

Azure slot swap Preview Changes shows both Production and Staging using same connection strings

Step 4

When you preview the swap, the staging site shows up with production data; in other words, Azure warms up the staging slot. So when you are ready to Complete swap, it finishes faster than my earlier direct swap did; the complete-swap step took only about 20 to 30 seconds.

Complete swap after previewing staging

EF Core Migrations

For the upcoming Fanray v1.1 I made a series of changes to the database schema, including adding and renaming columns, making a column nullable, updating existing data and inserting new data. As I mentioned in step 2, I had a choice on how to do the database upgrade: I could either let EF do it automatically, or use EF to generate an upgrade SQL script first and then run that against the database. Some organizations prefer to have a DBA look over any SQL upgrade script first. I have tried both ways and both worked perfectly.

To generate the SQL script, you can either run the EF PowerShell command Script-Migration or the dotnet CLI command dotnet ef migrations script.

Final thoughts on what happens during the upgrade 

Remember that back in step 3, during the swap preview, staging actually gets all of production's settings. As a result, EF is already upgrading your production database at that point. Based on the success of step 2 I knew this upgrade should work, but while it's happening, remember that the production website is still running v1.0 of the app against the same database that staging is upgrading to v1.1!

I tested locally that my v1.0 app does work with the v1.1 DB schema, but this obviously becomes an issue if there are breaking changes. Site visitors may experience errors during that one-minute swap on the live site. The first remedy that comes to mind is the old app_offline.htm trick, as discussed in the SO question How to use IIS app_offline.htm file with Azure. The downside is that, even though the swap happens quickly, your site is still down to visitors during that time.

One of the answers on that SO question mentioned "you should be able to virtually eliminate down time with Azure by running multiple instances". As I explained above, I'm not sure that's the case. The comment left below this answer is more in line with what I have observed.

My co-founder is actually the Azure expert on our team, and we are already running multiple instances with SQL Azure. However, earlier today, he needed to update the DB schema which meant that part of the site was down for several minutes. When I hit the site, I was redirected to my main ErrorPage. But I would have preferred to have had the app_offline.htm file in the root during those few minutes. I was just under the impression that it's non trivial to be doing file I/O related things on an Azure deployment.

Azure also provides SQL Database backups, so if upgrading the production database fails, you can restore it from your backup. This has been my deployment flow so far, but is there a better way? How do you approach deploying and upgrading a production database? Please let me know what you think.

How to update git commit messages (single or multiple, local or remote)

Whether you want to update a single commit message or several, local or already pushed, this post shows you how.

To update the most recent local commit message

$ git commit --amend

The text editor opens; edit your commit message, then save and close the file.

To update multiple local commit messages

$ git rebase -i HEAD~3 # Modify the last 3 commits

You will see something like the following

pick e499d89 Delete CNAME
pick 0c39034 Better README
pick f7fde4a Change the commit message but push the same commit.
# … with some instructions in the comments here …

Replace pick with reword for each commit whose message you want to change, then save and close the file.

Git will open the first of those commits in the text editor; type the new commit message, then save and close the file. Git will then open the next commit, and so on until you have updated, saved and closed the last one.

To update commits that have been pushed

Do exactly as explained above, whether it's the last commit or the last several commits. Then do this

$ git push --force

One thing to note is that the commit hashes will change as well, since amending or rebasing rewrites history; anyone who has already pulled the old commits will need to re-sync.

Reference: Changing a commit message

Angular 5 vs React 16

After releasing Fanray v1 I took some time to research what to learn and build next. A blog roughly has two parts: the public-facing site, which visitors see and which is normally themed, and an admin console, where blog owners and writers log in, write posts and manage the entire site. The public site is normally an MPA, a Multi-Page Application, meaning that going from one page to another causes a full browser reload. The admin console, on the other hand, is a good candidate for a SPA, a Single-Page Application.

The question is which front-end framework or library to use. I had Angular experience in the past; I’ve built projects using AngularJS 1.x and used Angular 2 in hackathons. But since I have the luxury of building something entirely from the ground up, I wanted to experiment with what is out there. I’ve considered four: Angular, React, Vue and Ember. Tough choices really, but I had to make my picks, and eventually I came down to two: Angular vs React.

There are numerous articles out there that compare these technologies; a couple stood out to me.

Here is some basic info I put together based on my research.

                           Angular                                       React
Classification             Framework                                     Library
Version                    5                                             16
CLI                        Angular CLI                                   create-react-app
Binding                    Two-way                                       One-way
DOM                        Regular DOM                                   Virtual DOM
Dominant Language          TypeScript                                    ES6
Static Type Checking       TypeScript with DefinitelyTyped               Flow
HTML Template              HTML file or inline in the component ts file  JSX
Recommended Editor         Visual Studio Code                            Atom with Nuclide
Native Mobile Development  NativeScript (by Progress)                    React Native
Material Design            Angular Material                              Material-UI
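To make the Binding row concrete, below is a small sketch in plain TypeScript (not actual Angular or React code; all names are mine) of the difference between one-way flow and two-way binding:

```typescript
// One-way (React-style): state flows to the view; view changes go
// back through an explicit event handler that sets new state.
type State = { name: string };

function render(state: State): string {
  return `<input value="${state.name}">`;
}

let state: State = { name: "Ada" };
function onInput(newValue: string) {
  state = { ...state, name: newValue }; // explicit state update
}

onInput("Grace");
console.log(render(state)); // view re-rendered from the new state

// Two-way (Angular ngModel-style): a binding keeps the model and
// the view value in sync automatically, in both directions.
class TwoWayBinding {
  constructor(public model: { name: string }) {}
  viewChanged(v: string) { this.model.name = v; } // view -> model
  viewValue(): string { return this.model.name; } // model -> view
}

const binding = new TwoWayBinding({ name: "Ada" });
binding.viewChanged("Grace");
console.log(binding.viewValue()); // "Grace"
```

Either style gets the job done; the difference is whether the framework writes view input back into the model for you or you do it yourself in a handler.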

Below is how each of Angular and React works, in a simple example.


The best way to get an Angular project started is through its CLI (v1.6.1 as of this writing): ng new my-angular-app. After you build it for production with ng build --prod, below is your Angular app's index.html.

It includes three JavaScript bundle files: inline (the webpack loader), polyfills, and main (your code plus styles and vendor). The main bundle is about 147k. All builds make use of bundling and limited tree-shaking, while --prod builds also run limited dead code elimination via UglifyJS. There is also experimental service worker support for production builds in the CLI, which you can enable manually; I mention this because you will see React has this support too. For more information, see the ng build documentation.

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <base href="/">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <link href="styles.d41d8cd98f00b204e980.bundle.css" rel="stylesheet"/>
</head>
<body>
  <app-root></app-root>
  <script type="text/javascript" src="inline.19f3f7885ab6e4e2dee3.bundle.js"></script>
  <script type="text/javascript" src="polyfills.f039bbc7aaddeebcb9aa.bundle.js"></script>
  <script type="text/javascript" src="main.5f6465ddee537c95d12a.bundle.js"></script>
</body>
</html>

The index.html also includes your Angular directive <app-root></app-root>. When your website starts, the Angular app’s main entry point is main.ts.

import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';

import { AppModule } from './app/app.module';
import { environment } from './environments/environment';

if (environment.production) {
  enableProdMode();
}

platformBrowserDynamic().bootstrapModule(AppModule)
  .catch(err => console.log(err));

main.ts then bootstraps an Angular module, AppModule; each Angular app must have at least one module. Here is app.module.ts.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

After that, AppModule bootstraps a very simple Angular component, AppComponent. Here is what that component looks like in app.component.ts.

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'app';
}

Finally, the component has an HTML template whose content replaces the <app-root></app-root> directive in index.html and is shown to users in the browser.

So the Angular component flow is like this:

An HTML page with some angular directives –> Module loader calls main.ts –> bootstraps AppModule –> bootstraps AppComponent –> replaces the angular directive with its template content.


React's CLI is called create-react-app (v1.4.3 as of this writing); running create-react-app my-react-app creates a starter project for you. After you build it for production with the react-scripts build command, below is your React app's index.html.

It includes one main bundle JavaScript file that has everything except styles, and it is about 113k. Notice React does not provide polyfills out of the box; you need to add them manually.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,initial-scale=1,shrink-to-fit=no">
  <meta name="theme-color" content="#000000">
  <link rel="manifest" href="/manifest.json">
  <link rel="shortcut icon" href="/favicon.ico">
  <title>React App</title>
  <link href="/static/css/main.9a0fe4f1.css" rel="stylesheet">
</head>
<body>
  <noscript>You need to enable JavaScript to run this app.</noscript>
  <div id="root"></div>
  <script type="text/javascript" src="/static/js/main.656db2cf.js"></script>
</body>
</html>

The index.html also has a <div id="root"></div>. When your website starts, the main entry point for the React app is index.js.

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import registerServiceWorker from './registerServiceWorker';

ReactDOM.render(<App />, document.getElementById('root'));

index.js then calls ReactDOM.render, which renders your App component and attaches its output to the root div. Notice it also calls registerServiceWorker() from registerServiceWorker.js. This serves assets from the local cache; it lets the app load faster on subsequent visits in production and gives it offline capabilities. However, it also means that developers (and users) will only see deployed updates on the "N+1" visit to a page, since previously cached resources are updated in the background. For more information see the create-react-app documentation.

The App component looks like this.

import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

class App extends Component {
  render() {
    return (
      <div className="App">
        <header className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          <h1 className="App-title">Welcome to React</h1>
        </header>
        <p className="App-intro">
          To get started, edit <code>src/App.js</code> and save to reload.
        </p>
      </div>
    );
  }
}

export default App;

The React component flow is like this:

An html page with a div placeholder –> Loader calls index.js –> calls ReactDOM.render –> calls your component –> attaches the html output to the div placeholder.


This post jots down my thoughts based on brief research into Angular and React. It only scratches the surface of comparing these two technologies, but it does give a glimpse of how each works with components, as the component is the building block of both Angular and React. Just looking at the code, Angular does seem to take more turns to render a component, but that is because it has its own concept of a module, which is basically used to group components. React feels more straightforward in the sense that you have an HTML tag and one piece of JavaScript that works on that tag.

Angular is a full-blown framework while React is a library; both can achieve exactly the same thing, and with React you can add everything else you need from other libraries. I love both technologies based on my experimentation. In my view, Angular is more suited for SPAs, and I intend to build the admin console with it. React is more lightweight, and I’d like to try it out on certain pages of the public site, replacing jQuery.

Fanray 1.0.0 released

From 8/14/2017 to 11/30/2017, it took me three and a half months to go from the initial commit to today's v1 release. I’m right on track to achieve what I started out to do: learning in the open, building something I can use every day, and sharing all aspects of the process with the community.

It’s an MVP

V1 is not much, but it’s useful enough to bring you these words on this page. It was intended to be an MVP.

A Minimum Viable Product (MVP) is a product with just enough features to satisfy early customers, and to provide feedback for future product development.[1][2] Some experts suggest that in business to business transactions an MVP also means saleable: "it’s not an MVP until you sell it. Viable means you can sell it".

Here I myself am the early customer, and for it to be saleable to me it has to have the basic features I think a blog should have: posts, categories, tags, comments, SEO considerations, RSS feeds, etc. On top of these it must be performant and stable.

The blog has been around since the 90s, and its features vary greatly, from a static page with text to complex systems like WordPress. Ambition could easily kick in, the scope could get out of hand, and I could end up starting something I never finish on time. To avoid this I decided early on to support the MetaWeblog API, which dictates a set of features a blog needs to implement so that desktop clients, like Open Live Writer, can talk to it. This strategy has proven helpful; it limited the scope of what needed to be built without ambiguity. It also gives me a rich client so I can at least start posting without a full-blown admin console, which takes more time to develop and is coming in 1.1.
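For a taste of what that client traffic looks like, here is a sketch (TypeScript; metaWeblog.newPost is the real XML-RPC method name, but the helper and sample values are my own, not Fanray code) of building a newPost request body:

```typescript
// Build a minimal XML-RPC request body for the MetaWeblog newPost call.
// The envelope shape follows XML-RPC; helper and values are illustrative.
function xmlEscape(s: string): string {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

function newPostBody(blogId: string, user: string, pass: string,
                     title: string, description: string): string {
  const creds = [blogId, user, pass]
    .map(p => `<param><value><string>${xmlEscape(p)}</string></value></param>`)
    .join("");
  const post =
    `<param><value><struct>` +
    `<member><name>title</name><value><string>${xmlEscape(title)}</string></value></member>` +
    `<member><name>description</name><value><string>${xmlEscape(description)}</string></value></member>` +
    `</struct></value></param>` +
    `<param><value><boolean>1</boolean></value></param>`; // publish = true
  return `<?xml version="1.0"?><methodCall>` +
    `<methodName>metaWeblog.newPost</methodName>` +
    `<params>${creds}${post}</params></methodCall>`;
}

const body = newPostBody("1", "admin", "secret", "Hello", "<p>First post</p>");
console.log(body.includes("metaWeblog.newPost")); // true
```

A desktop client POSTs a body like this to the blog's MetaWeblog endpoint, and the server answers with an XML-RPC response, which is why the content type of that communication is XML.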


I’ve designed the app using an n-tier architecture, a very typical presentation-to-business-logic-to-data-access setup. The diagram below also shows some of the clients the blog could potentially support and how they communicate. For example, a desktop client talks to Fanray through the MetaWeblog API, which is built on XML-RPC, so the content type of that communication is XML, whereas the browser talks to MVC controllers that return HTML, CSS and JavaScript.

  • desktop (MetaWeblog API – XML)
  • browser (MVC – HTML)
  • mobile (Web API – JSON)


On top of the basic architecture, the practice of Skinny Controllers, Fat Models and Dumb Views is a very effective strategy for achieving Separation of Concerns. The Web Tier handles traffic and presentational logic only. The Business Logic Layer does most of the heavy lifting: validation, calculation, caching and much more. The Data Access Layer does just data-access operations. Of course there are grey areas; validation, for example, can happen at any tier, and that deserves a post of its own. But the basic idea is that each tier (or layer; I use the terms interchangeably) has a very specific concern. The different clients talk to different kinds of endpoints, the browser to MVC controllers and Open Live Writer to MetaWeblog API endpoints; both ask for the same business logic to be carried out, and when they get the results they return them to the clients in different formats.


I’m making steady improvements to this app, and hopefully others who come across this project will find it useful as well. Any feedback is welcome, and if you would like to participate, please check out the GitHub repo on how to contribute. Thank you.

How to ask Google to re-crawl your site and tips on how to avoid broken link when you post

I’ve been testing my live site extensively this past month, posting and reposting, and then I found that my site’s Google search results come up with broken links. If you ever need to update your site with new URLs on existing resources, you want to think about the SEO implications first.

Google Search Results

Now let me point out that the first broken link is due to beta software; the code wasn’t finalized on what URL to use to show the list of posts for a particular tag. That has since been finalized, for example for showing posts tagged with azure. The second broken link, though, is due to my update of an existing post.

How you may break your post link

By updating your post Slug

When you publish a post, Fanray automatically comes up with a Slug based on your post title. For example, my first post Welcome to Fanray yielded the slug welcome-to-fanray. However, you can choose to manually enter this value to be anything you want.

If you use Open Live Writer, go find the View all link.

OLW View all

Clicking on it will open Post Properties.

OLW Post Properties

All the post properties can be set here. If you are going to set the slug manually, please follow the convention and make it an all-lowercase, hyphen-separated, alphanumeric string.

By updating your post Publish Date

Furthermore, updating the post slug is not the only thing that will yield a new URL for the post and thus result in broken links; updating a post’s Publish Date may also do it. A Fanray blog post uses the URL template “/post/{year}/{month}/{slug}”, so if you first published a post in, say, October 2017, and later manually set its publish date to a date in a different month, say November 2017, that results in a new URL.
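To make this concrete, here is a sketch (TypeScript; only the “/post/{year}/{month}/{slug}” template comes from the post above, while the helper function and the zero-padding are my assumptions) of how a publish-date change yields a different URL:

```typescript
// Build a post URL from the template "/post/{year}/{month}/{slug}".
// Illustrative only; the zero-padded month is an assumption.
function buildPostUrl(published: Date, slug: string): string {
  const year = published.getFullYear();
  const month = String(published.getMonth() + 1).padStart(2, "0");
  return `/post/${year}/${month}/${slug}`;
}

// Same slug, publish date moved from October to November 2017:
const original = buildPostUrl(new Date(2017, 9, 15), "welcome-to-fanray");
const moved    = buildPostUrl(new Date(2017, 10, 15), "welcome-to-fanray");

console.log(original); // "/post/2017/10/welcome-to-fanray"
console.log(moved);    // "/post/2017/11/welcome-to-fanray" - the old link now breaks
```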

What I mostly do

The best way to avoid any of this is to never change a post’s slug or publish date after it has been published for a while, if you don’t have to. When you first publish your post you can do whatever you like: either enter these values yourself, or leave them blank and let the blog take care of setting them.

I mostly enter only the Category and Keywords (tags), and sometimes the Excerpt if I want a different message from the one the blog comes up with; there is a setting to show excerpts instead of full posts. By default, if you leave the Excerpt blank, the blog takes the first 55 words of your post and uses that as the excerpt.
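The default excerpt behavior can be sketched like this (TypeScript; the 55-word count comes from above, but the helper itself is my assumption of how the behavior works, not Fanray code):

```typescript
// Take the first 55 words of a post's plain text as the excerpt,
// appending an ellipsis when the post is longer than that.
function makeExcerpt(text: string, wordLimit = 55): string {
  const words = text.trim().split(/\s+/);
  if (words.length <= wordLimit) return text.trim();
  return words.slice(0, wordLimit).join(" ") + "…";
}

const short = makeExcerpt("Just a few words.");
const long = makeExcerpt(Array(100).fill("word").join(" "));

console.log(short);                    // "Just a few words."
console.log(long.split(/\s+/).length); // 55 (ellipsis attached to the last word)
```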

How to ask Google to Re-crawl

Log on to Google Search Console and check the Dashboard to see if you have any URL Errors.

Google Search Console Crawl Errors

Click on Crawl Errors and you will see the URL Errors listed.

URL Errors

On the left, click on Fetch as Google, input the new URL for the one that resulted in an error, and click FETCH.

Fetch as Google

You have a quota on how many URLs you can fetch in a given period; for more info see Use Fetch as Google for websites.

Preferred Domain and URL Redirect

Last time I set up a Custom Domain and HTTPS for my Azure web app, one issue remained: my website can be accessed from both the root domain and the www subdomain. This is bad for SEO; we need to tell search engines which one we prefer, hence we have to decide on a Preferred Domain, either www or non-www. The one you choose will be used to index your site's pages and to represent your site in the search results.

So, www or non-www?

This is like a religious debate, and there are numerous resources out there on it.

Luckily, Google does not care, and you can just pick one and stick to it. I chose www because of its ability to restrict cookies and because it’s more flexible with DNS.

Set up Preferred Domain on Google

To set up the preferred domain, go to Google Search Console and add a website property for each of the URL variations your site supports: https, http, www and non-www. You will go through a verification process to prove you own the site; I chose to add a TXT record at my registrar. You will then receive an email titled “Preferred domain changed for site …” for each property you set up.


You set the preferred domain by going to the Gear icon > Site Settings.


One common question is why Google only gives you the option to set up the http version but not https. I found the question How to set preferred domain with https in Google Webmaster Tools, and according to one of its answers:

Google takes this automatically from your canonical link tag.

<link rel="canonical" href="">

So whenever the Google spider sees this line in your head section, Google automatically indexes the HTTPS version of your site.

If you go to any one of the post pages on this blog and view source, you will see, for example, something like the following tag with https.

<link rel="canonical" href="" />

Note the canonical link does not appear on the blog’s main page, only in the individual posts; I visited other sites like Stack Overflow and TechCrunch, and this seems to be common practice.

URL Redirect on Azure

Now that I have told Google what my preferred domain is, I still need to make requests to the less-preferred domain actually redirect to the preferred one. Two of the most commonly needed redirects for websites are

  • HTTP to HTTPS
  • Non-www to www, or www to non-www

There are many ways to get both done; you can even do domain forwarding from your registrar, however that is not reliable and thus not recommended.

In the last post I took care of the HTTP to HTTPS redirect by turning on HTTPS Only in the Azure portal; this is the easiest way to achieve it on Azure, but there is also an Azure extension someone wrote that can do it.


For the preferred domain, if you decide to go from www to non-www, there is also an Azure extension that does it. But I currently didn’t see an extension that goes the other way around, from non-www to www.

URL Redirect in ASP.NET Core

To do URL rewriting in ASP.NET Core, the common way is to use the RewriteMiddleware class; it’s part of the ASP.NET Core BasicMiddleware project.

To use this middleware, wire it up inside your Startup.cs Configure() method, and typically keep the regex matching rules in a separate config file.

app.UseRewriter(new RewriteOptions()
     .AddIISUrlRewrite(env.ContentRootFileProvider, "urlRewrite.config"));

For example, to redirect HTTP to HTTPS, put this in urlRewrite.config:

<?xml version="1.0" encoding="utf-8"?>
<rewrite>
  <rules>
    <rule name="Redirect to https">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTPS}" pattern="Off" />
        <add input="{HTTP_HOST}" negate="true" pattern="localhost" />
      </conditions>
      <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" />
    </rule>
  </rules>
</rewrite>

URL Redirect with HttpWwwRewriteMiddleware

Above I explained some of the options for doing URL redirects on Azure and in ASP.NET Core; the RewriteMiddleware is actually quite powerful and can handle very complex redirect rules.

However, I intended for Fanray to run anywhere .NET Core can run, not just on Azure; furthermore, I wanted the easiest possible configuration experience for these two common scenarios. Therefore I’ve written a middleware called HttpWwwRewriteMiddleware that does only two things:

  • HTTP to HTTPS
  • Non-www to www, or www to non-www

To use this middleware, add this line of code in your Startup.cs Configure() method.


Then in appsettings.Production.json there are two settings.

"AppSettings": {
   // The preferred domain to use: "auto" (default), "www" or "nonwww".
   // - "auto" will use whatever the url is given, will not do forward
   // - "www" will forward root domain to www subdomain, e.g. ->
   // - "nonwww" will forward www subdomain to root domain, e.g. ->
   "PreferredDomain": "www",  // Whether to use https: false (default) or true.
   // - false, will not forward http to https
   // - true, will forward http to https
   "UseHttps": true

When you deploy to Azure, if you would like the non-www version, simply update “PreferredDomain” to “nonwww”; all requests to the www subdomain will then redirect to your root domain.
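The middleware's decision logic can be sketched as a pure function (TypeScript rather than the actual C# middleware; the names and structure are mine, only the two settings come from the config above):

```typescript
// Decide whether a request should be redirected, based on the
// PreferredDomain and UseHttps settings; returns the target URL or null.
// A sketch of the idea, not the actual HttpWwwRewriteMiddleware code.
type PreferredDomain = "auto" | "www" | "nonwww";

function redirectTarget(url: URL, preferred: PreferredDomain,
                        useHttps: boolean): string | null {
  let host = url.hostname;
  let scheme = url.protocol.replace(":", "");

  if (useHttps && scheme === "http") scheme = "https";
  if (preferred === "www" && !host.startsWith("www.")) host = "www." + host;
  if (preferred === "nonwww" && host.startsWith("www.")) host = host.slice(4);

  const target = `${scheme}://${host}${url.pathname}${url.search}`;
  return target === url.href ? null : target; // null = already canonical
}

console.log(redirectTarget(new URL("http://example.com/about"), "www", true));
// "https://www.example.com/about"
console.log(redirectTarget(new URL("https://www.example.com/"), "www", true));
// null - already on the preferred scheme and host
```

Both fixes happen in one hop, so a visitor typing the non-preferred scheme and host gets a single redirect straight to the canonical URL.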


By the end of this post, I have deployed Fanray to Azure App Service, gotten my Custom Domain and HTTPS working, and now all requests redirect to the Preferred Domain I chose.

Custom Domain and HTTPS for Azure Web App

When you create an Azure web app, you are given a default Azure website URL. In this post I will

  • Use my custom domain instead of the default azurewebsites.net URL
  • Buy an SSL certificate so my site can use HTTPS instead of HTTP

Custom Domain

To map a custom domain, the App Service you chose cannot be in the Free tier; in my last post, Set up Fanray on Azure App Service, I chose the Basic tier.

Start by finding your site's IP address. Go to Azure Portal > your App Service > Settings > Custom domains.

01 Custom Domain

Mapping a custom domain basically requires you to create three DNS records at your domain registrar:

  • an A record, where A stands for Address; it deals with IP addresses, and there should be one that maps your root domain to your site's IP
  • either another A record that maps all subdomains to your IP, or a CNAME record, where C stands for Canonical; it's used as an alias, often pointing the www subdomain to the root domain
  • a TXT record, commonly used for verification purposes; App Service uses this record only at configuration time to verify that you own the custom domain

After all three records had been created at my registrar, my DNS looked like this:

02 Fanray DNS records

Go back to Azure Portal > Custom domains, click on Add hostname, then enter and validate both the root domain and the www subdomain.


HTTPS is important not only for security but also because Google uses HTTPS as a ranking signal.

Buy an SSL Certificate on Azure

You can buy an SSL certificate directly on Azure for $69.99/yr (Standard) or $299.99/yr (Wild Card). Both cover only a single domain: Standard covers both the root domain and the www subdomain, while Wild Card also covers any other subdomain you may want.

If you need a certificate that covers multiple domains, you currently have to buy it elsewhere; one option would be Digicert's Multi-Domain (SAN) Certificates. You would then need to manually upload the certificate to Azure.

Also be aware that if you buy the certificate on Azure and you are using a subscription with monthly credit, your purchase will be charged against that credit. And if your credit is less than the cost of the certificate, the purchase will cause your subscription to be disabled.

To buy one on Azure, go to App Service Certificates in the portal to get started.


Store Cert in Azure Key Vault

It takes a few minutes for the purchase to complete; Azure then opens the App Service Certificate blade for you. Go to Certificate Configuration and click on Step 1 to store this certificate in Key Vault. During this process you can choose an existing Key Vault or create a new one; the Standard cost is $0.03/mo.

Verify Domain Ownership

Click on Step 2: Verify

If you bought your domain with Azure you can simply click on verify; otherwise you can verify through an email you receive. The email contains a link that takes you to GoDaddy and asks you to approve the certificate. Step 2 takes 5 to 10 minutes to complete on its own. After this completes you will see steps 1 to 3 all check-marked.

Import Certificate and Create Binding

Finally, assign the certificate to your app: go to App Service > SSL certificates and click on Import App Service Certificate.

07 Import App Service Certificate

After that, add SSL bindings to both the root domain and the www subdomain.

SSL Bindings

Turn on HTTPS Only

Lastly, go back to your App Service > Settings > Custom domains, and turn on the HTTPS Only option, which redirects all HTTP traffic to HTTPS.
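If you prefer scripting your setup, the same switch can be flipped with the Azure CLI; a sketch, assuming hypothetical resource and app names:

```shell
# Turn on the HTTPS Only option (redirect HTTP -> HTTPS); names are hypothetical
az webapp update --resource-group fanray-rg --name fanray --set httpsOnly=true
```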

01 Custom Domain

Thus far I have launched the site live and gotten my custom domain and HTTPS working.  But there is an issue: the website can be accessed from both the root domain and the www subdomain; for SEO purposes I will want to set up a preferred domain and a URL redirect.

Set up Fanray on Azure App Service

Fanray can be deployed to any environment .NET Core runs on, but Azure App Service is a great choice for most web apps. The setup is straightforward; I start by creating the necessary Azure resources.

Create Web App + SQL

At minimum an App Service and a SQL database are required.  Go to Azure Portal, click on New and choose the Web App + SQL template.  Following the instructions, the portal will create the web app and database in one step and put the database connection string in the web app’s Application settings. During this process a Resource Group and a Service plan will also be created.

Azure organizes resources like this,

  • Web: Subscription > Resource Group > Service plan > App services
  • SQL: Subscription > Resource Group > SQL server > SQL databases

Normally one gets started on Azure with a subscription. Under a subscription there are Resource Groups, containers for your resources; they enable you to, say, delete a Resource Group or transfer it to a different subscription, and all the resources the group contains are deleted or transferred together.

A Service plan contains one or more app services, like web apps, mobile apps, etc., and you scale these apps up or out at the service plan level.

A SQL server contains one or more SQL databases; unlike a service plan, you scale individual SQL databases up and down in units of DTUs. The article Tuning performance in Azure SQL Database explains the different database service tiers and their performance.

I created one Resource Group, one Service plan (Basic, B1), one App Service, one SQL server and one SQL database (Standard, S0).  After setting up my resources I see the following, all living in my Resource Group.

01 Fanray Azure Resources
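The same resources can also be created with the Azure CLI instead of the portal template; a sketch, assuming hypothetical names and the tiers used in this post:

```shell
# Resource Group, Service plan (B1) and App Service
az group create --name fanray-rg --location westus
az appservice plan create --name fanray-plan --resource-group fanray-rg --sku B1
az webapp create --name fanray --resource-group fanray-rg --plan fanray-plan

# SQL server and a Standard S0 database
az sql server create --name fanray-sql --resource-group fanray-rg \
    --admin-user sqladmin --admin-password '<strong-password>'
az sql db create --name fanray-db --resource-group fanray-rg \
    --server fanray-sql --service-objective S0
```

Note that unlike the Web App + SQL template, the CLI route does not add the connection string to the web app for you; you would set that yourself as shown below.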

After creating all the resources, go to your App Service > Application settings; under Connection strings there should be an entry named “defaultConnection” pointing to the SQL server and database you just created.

02 Application settings defaultConnection
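The defaultConnection value follows the standard Azure SQL connection string format; here is a sketch that assembles one from hypothetical server and database names:

```shell
# Hypothetical values for illustration
SERVER="fanray-sql.database.windows.net"
DB="fanray-db"
USER="sqladmin"
PASS="<your-password>"

# Standard Azure SQL connection string shape
CONN="Server=tcp:${SERVER},1433;Initial Catalog=${DB};User ID=${USER};Password=${PASS};Encrypt=True;Connection Timeout=30;"
echo "$CONN"
```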

Create Storage account and Application Insights

I also recommend creating an Azure storage account and an Application Insights resource, though these are not required.

Fanray can use Azure Blob storage to store uploaded files, and it can log to Application Insights in addition to files. These can be configured in appsettings.Production.json; for example, you can instead store uploaded files on the App Service’s file system (not recommended).
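A sketch of what such a configuration might look like; the values are placeholders, and you should check the project’s own appsettings.json for the exact key names it expects:

```json
{
  "ApplicationInsights": {
    "InstrumentationKey": "<your-instrumentation-key>"
  },
  "ConnectionStrings": {
    "BlobStorageConnectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
  }
}
```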

Below are the options I chose when creating my storage account for Blob Storage. I wanted the cheapest option possible, thus Standard performance, LRS replication and the Cool access tier. As of this writing, the cost for Cool LRS in the West US region is $0.0152 per GB for the first 50 TB/month. I also enabled the Secure transfer required option, which requires all requests to use HTTPS. I’ll set up SSL for my site in the next post.
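The equivalent Azure CLI command, matching the choices above; the account and group names are hypothetical:

```shell
# Blob storage account: Standard performance, LRS replication, Cool tier, HTTPS only
az storage account create --name fanraystorage --resource-group fanray-rg \
    --sku Standard_LRS --kind BlobStorage --access-tier Cool --https-only true
```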

03 Create storage account

Then I created Application Insights for an ASP.NET web application. For Location I tried to find the closest region; currently West US is not available, though West US 2 is.  With Application Insights pricing, the first 1 GB for each app is free, so if you're just experimenting or developing, you're unlikely to have to pay.

04 Create Application Insights

After creating these two resources, you need to add your Application Insights Instrumentation Key and your Blob storage connection string to your App Service settings.  Go to your Application Insights resource and find the Instrumentation Key.

05 Application Insights Instrumentation Key

Then go to your Storage account > Access keys and copy one of the connection strings under key1 or key2.  Keep this key secret and don’t share it. It’s also recommended to regenerate the keys with new values from time to time; there are two keys, key1 and key2, exactly for this purpose: while you are regenerating one, your app can still function with the other.

06 Azure Storage account Access keys

Go back to your App Service > Application settings, scroll down to the App settings section and add an “ApplicationInsights:InstrumentationKey” property.  Then, in the Connection strings section, add a “BlobStorageConnectionString” property.
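These two portal steps can also be scripted with the Azure CLI; a sketch with hypothetical names and placeholder values:

```shell
# App setting for Application Insights
az webapp config appsettings set --resource-group fanray-rg --name fanray \
    --settings "ApplicationInsights:InstrumentationKey=<your-key>"

# Connection string for Blob storage
az webapp config connection-string set --resource-group fanray-rg --name fanray \
    --connection-string-type Custom \
    --settings BlobStorageConnectionString="<your-blob-connection-string>"
```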

07 Add Blob ConnStr and AppInsights Key

Disqus and Google Analytics

Before we deploy code, there are two more resources I recommend: Disqus and Google Analytics. On the blog setup page later, you can optionally put in your Disqus Shortname and Google Analytics Tracking ID.

To find your Disqus Shortname, go to Disqus > Admin > select your site > Settings > General.

08 Disqus Shortname

To find your Google Analytics Tracking ID, log on to Google Analytics > Admin > Tracking Code.

09 Google Analytics Tracking ID

Deploy from GitHub

Code can be deployed to Azure in many ways; deploying directly from GitHub is a super easy one, and you can start by forking the Fanray repo on GitHub. Then go into App Service > Deployment options > Choose Source and select GitHub, then authenticate and choose the project and branch.  Click OK and it will start the initial deployment; every subsequent push of commits will trigger the deployment process again.
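The same hookup can be done with the Azure CLI; a sketch, assuming hypothetical resource names and your own fork’s URL:

```shell
# Point the App Service at your fork; pushes to the branch trigger redeploys
az webapp deployment source config --resource-group fanray-rg --name fanray \
    --repo-url https://github.com/<your-account>/Fanray --branch master
```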

10 GitHub Deployment options

Launch your site

At this point my Azure website is up and running!  When you visit the site for the first time, Entity Framework creates the database and populates the tables for you, and then the blog setup page shows up.

11 Fanray Blog Setup


By now you should have a Fanray blog running live on Azure. The next step I recommend is to set up Custom Domain and HTTPS for your Azure Web App.

Welcome to Fanray

Thank you for trying out the Fanray project. A blog is like the Hello World program for a real-world application; I created Fanray to learn new technologies and share their best practices. I hope this app is useful to you as well on your journey of learning and building!

Start posting

Fanray 1.0 is pretty bare-bones, and to start posting you have to use a client that supports the MetaWeblog API; I recommend Open Live Writer.

To make the blog more useful, I’ve created two shortcodes for easily posting source code and YouTube videos; they are documented on the project’s GitHub page.


When you are ready to run this app on Azure, I have a few posts that may be of interest to you.


Any participation from the community is welcome; please see the contributing guidelines.

Happy coding :)