Fix slow login response and verify email using regex

This week I got my first PR, thanks to Flyznex. He helped fix issue #234 "Login has a very slow response" and along the way implemented a better way to verify whether a string is a valid email.

The issue came up when I noticed that logging in sometimes took a long time.  I'd put my email/username and password in, click the login button, and nothing would happen; after a few seconds it would finally let me in.

PasswordSignInAsync

As Flyznex pointed out in the PR, the SignInManager<TUser> class in ASP.NET Core Identity has two PasswordSignInAsync methods: one takes a string userName while the other takes a user object.  If you call the one that takes the userName, the method will call UserManager.FindByNameAsync(userName) to look the user up by its userName first, hence you make an extra call to the database.

In my Login method I had already looked the user up by userName, calling my UserService method FindByEmailOrUsernameAsync(loginUser.UserName), so I could just pass the user object to PasswordSignInAsync.  My mistake was that I passed in user.UserName instead, thus incurring the extra db call.
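In code the fix looks roughly like this (a sketch: FindByEmailOrUsernameAsync and loginUser come from my Login method, the other parameter values are illustrative):

// The user is already loaded here, one db call.
var user = await _userService.FindByEmailOrUsernameAsync(loginUser.UserName);

// Before: passing the string makes SignInManager call
// UserManager.FindByNameAsync(userName) internally, a second db call.
// var result = await _signInManager.PasswordSignInAsync(
//     user.UserName, loginUser.Password, loginUser.RememberMe, lockoutOnFailure: false);

// After: pass the user object, no extra lookup.
var result = await _signInManager.PasswordSignInAsync(
    user, loginUser.Password, loginUser.RememberMe, lockoutOnFailure: false);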

Verify if a string is a valid email

On my login I allow users to log in with either their user name or email.  When a user is logging in, I first check whether the input is a valid email.  If the string is not a valid email, then the user is passing in a user name.

So checking if a string is an email sounds easy, but in reality it is very complex!  If you don't believe me, just take a look at Phil Haack's post I Knew How To Validate An Email Address Until I Read The RFC.  Luckily there is a solution provided to us in the .NET documentation, How to: Verify that Strings Are in Valid Email Format.  And when you look at that implementation, it's roughly 60 lines of C# code just to do this verification, for good reasons.

Before this PR I was using the System.Net.Mail.MailAddress class constructor to verify the validity of an email:

// bad code, do not use!
// requires: using System.Net.Mail;
bool isEmail;
try
{
    // The constructor throws FormatException for strings it can't parse...
    new MailAddress(emailOrUsername);
    isEmail = true;
}
catch (FormatException)
{
    // ...but it happily accepts strings like "a@a".
    isEmail = false;
}

I originally thought, good, less than 10 lines of code to get the same thing done.  But this implementation is actually not reliable.  Try passing in the string a@a, which is not a valid email: it actually returns true!

Another thing to notice is that verifying an email feels like a utility feature, as the name of the class in the .NET doc, RegexUtilities, also suggests.  For that we'd normally want a static method, but the doc's implementation depends on an instance field, bool invalid = false;, that is shared between the two methods inside the class.  Flyznex was able to fix that by removing this variable, and as a result our implementation is now really simple to use: you can just call Util.IsValidEmail("email.to.verify@test.com").
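For reference, here is a condensed sketch of what such a static helper can look like, modeled on the .NET doc's approach (the actual code from the PR may differ in its regex details):

using System;
using System.Globalization;
using System.Text.RegularExpressions;

public static class Util
{
    public static bool IsValidEmail(string email)
    {
        if (string.IsNullOrWhiteSpace(email)) return false;
        try
        {
            // Normalize the domain part to ASCII so international
            // domain names are handled, as the .NET doc's version does.
            email = Regex.Replace(email, @"(@)(.+)$",
                m => m.Groups[1].Value + new IdnMapping().GetAscii(m.Groups[2].Value),
                RegexOptions.None, TimeSpan.FromMilliseconds(200));

            // Require a dot in the domain, so "a@a" fails.
            return Regex.IsMatch(email,
                @"^[^@\s]+@[^@\s]+\.[^@\s]{2,}$",
                RegexOptions.IgnoreCase, TimeSpan.FromMilliseconds(250));
        }
        catch (RegexMatchTimeoutException) { return false; }
        catch (ArgumentException) { return false; }
    }
}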

Version scheme and Roadmap

It's been a year since v1.0 was released; to be able to release more often I need to put more effort into issue management and planning.  As part of that I want a Roadmap that highlights roughly what's coming in the foreseeable future, but first I need a versioning scheme that works for me.

I came across the Wikipedia page on Software Versioning and found that people use all kinds of schemes to version their releases; it can be anything really.

So I decided to adopt a semver major.minor.patch-(a|b|rc)[1-9] kind of scheme starting with v2.0. To get releases to users faster, a release is planned every milestone. Milestones are roughly weekly or bi-weekly, give or take (or longer if I get stuck :)

This table shows an example of this version scheme for the next full release cycle. 

Life Cycle | Version Example | Description
alpha | v2.0-a1 | Active development stage. Each alpha release contains one or more new features or fixes. During alpha, breaking changes like DB schema updates are common.
beta | v2.0-b1 | Planned features are complete. Each beta release contains fixes only, no breaking changes.
rc | v2.0-rc1 | An rc is a beta that is potentially production ready. Each rc release contains fixes only, no breaking changes.
rtm | v2.0 | An official release.
patch | v2.0.1 | Bug fixes after an official release. A patch can go through alpha, beta and rc as well.
minor | v2.1 | Next minor release. About 20 milestones away from v2.0.
major | v3.0 | Next major release. It should bring major value to the product; it is not time bound.

Based on that I came up with a Roadmap

  • v1.0 - Minimal Viable Product (Oct 2017)
  • v2.0 - Admin Panel (Dec 2018)
    • v2.0-a1 - initial v2 alpha
    • v2.0-a2 - feat: preview post in composer
    • v2.0-a3 - feat: dnd images in composer
    • v2.0-a4 - feat: social links
    • v2.0-b1 - fixes
    • v2.0-b2 - fixes
    • v2.0-rc - fixes
    • v2.0-rtm
  • v2.1 - Featured image and a second theme
  • v2.2 - Navigation
  • v2.3 - Pages
  • v2.4 - User roles

Finally, I'm trying these out to see how it goes; as a result, the time frame and planned features can change.

"Swap with preview" should be used when upgrade a production database

I wrote previously about how I use EF Core migrations to upgrade the production database. I have two slots, production and staging, and I swap them every time I push new code that doesn't involve database changes.  This week I had to make a data change to the production database, and it was a reminder of when to use the Swap with preview feature provided by Azure App Services.

I recently decided to change the versioning scheme a bit in hopes of turning out releases more often, so the upcoming release originally planned as v1.1 has now become v2.0.  As a result, I opened issue #235 "EF Migration and SQL upgrade files got wrong version numbers" to fix the outdated text in these files.

Migration code changes

In addition I need to update a record in the __EFMigrationsHistory table: specifically, the second row's MigrationId needs to become "20180530163323_FanV2_0".

__EFMigrationsHistory table needs a data update

To update this record I can run a simple SQL update statement while the site is still running (e.g. UPDATE __EFMigrationsHistory SET MigrationId = '20180530163323_FanV2_0' WHERE MigrationId = '20180530163323_FanV1_1'), because EF only checks this table at application startup to see if there are new migrations to apply.  The challenge is that I don't want a request to hit the database before the new code gets there: the existing code looks for the "20180530163323_FanV1_1" record, and if it's not there it will apply the migration, causing problems.

When using Swap with preview, these are the steps that take place:

1. First, get the staging database and server ready with the new data and code (since it's staging I can just stop the server and do whatever, so that's easy).

2. Update the production database with the data change (the production site is still running).

3. Start the swap with “Swap with preview”. This applies all production configurations to the staging slot while keeping production running. It seems you can now initiate this from either slot; I somehow remember the option previously being available only on the production slot.

4. Azure then restarts staging using these production configurations; now staging runs against the production database, which is OK since staging has the new code.

5. Finally, do “Complete swap”: Azure moves the already warmed-up staging slot into production.

In conclusion, I could do this because it was only a data change.  If there were schema changes, you might not be able to update the production database first while the production service is still running; in that case I'd refer to my previous post.

Structured Logging with Serilog and Seq

Fanray uses Serilog and practices Structured Logging.  One of the output sinks is Seq.

Setup

The wiring of Serilog happens inside Program.cs

  public static IWebHostBuilder CreateWebHostBuilder(string[] args) => 
     WebHost.CreateDefaultBuilder(args) 
       .UseApplicationInsights() 
       .UseSerilog() 
       .UseStartup<Startup>(); 

And its configuration with Seq happens in appsettings.json and appsettings.Production.json. Below is the snippet from appsettings.json, with MinimumLevel set to Debug for local development.

  "Serilog": { 
   "Using": [ 
     "Serilog.Sinks.Console", 
     "Serilog.Sinks.Seq" 
   ], 
   "WriteTo": [ 
     { 
       "Name": "Console" 
     }, 
     { 
       "Name": "Seq", 
       "Args": { 
         "serverUrl": "http://localhost:5341" 
       } 
     } 
   ], 
   "MinimumLevel": { 
     "Default": "Debug", 
     "Override": { 
       "Microsoft": "Information", 
       "System": "Information" 
     } 
   } 
 } 
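In appsettings.Production.json only the differences need to be overridden; for example, raising the minimum level for production might look like this (a sketch, my actual production overrides may differ):

  "Serilog": {
    "MinimumLevel": {
      "Default": "Information"
    }
  }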

Structured Logging

Structured Logging enables you to dump entire objects as JSON into the logs, and later search for them using their properties as search criteria.

For example, I have this code in BlogService.cs that logs a BlogPost object.  Notice the special syntax {@BlogPost}:

_logger.LogDebug("Show {@BlogPost}", blogPost);

That outputs the following.  Notice it also includes the nested objects inside BlogPost, like Category, Tags, etc.

{ 
   Type: "BlogPost", 
   CategoryTitle: "Technology", 
   Tags: […], 
   TagTitles: […], 
   Body: "<p>This is a test post.</p>", 
   BodyMark: null, 
   Category: {…}, 
   CategoryId: 2, 
   CommentCount: 0, 
   CommentStatus: "NoComments", 
   CreatedOn: "2018-09-22T02:23:40.0000000+00:00", 
   CreatedOnDisplay: "yesterday", 
   UpdatedOnDisplay: "09/21/2018", 
   Excerpt: "This is a test post.", 
   ParentId: null, 
   RootId: null, 
   Slug: "this-is-a-test", 
   Status: "Draft", 
   Title: "This is a test", 
   UpdatedOn: "2018-09-22T02:23:41.0002676+00:00", 
   User: {…}, 
   UserId: 1, 
   ViewCount: 0, 
   PostTags: […], 
   Id: 5, 
   _typeTag: "BlogPost" 
}
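By contrast, string interpolation would flatten the object into plain message text, so none of these properties would be searchable; only the message template form captures BlogPost as a structured property.

// Don't do this: blogPost is stringified into the message and
// Seq sees only flat text, no BlogPost properties to filter on.
_logger.LogDebug($"Show {blogPost}");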

Seq

Seq is the tool that lets you search through what you have logged, taking advantage of Structured Logging.  It is my recommended way to view logs during the development of Fanray.

To get started with Seq, first download and install it from https://getseq.net/, then during development simply go to http://localhost:5341.

Search for Objects

To search for an object, say: give me all BlogPost objects whose Body contains the word "Welcome", case insensitive.

Seq filters BlogPost with Body that contains word Welcome

Here are the docs on Seq's Filter Syntax and Cheat Sheet: https://docs.getseq.net/docs/query-syntax

Search for a Request

A request is a common thing to search the log for; ASP.NET Core outputs detailed debug information per request during development.

For example, when I publish a new post from Open Live Writer, I want to see exactly what happens during that publish request.  ASP.NET Core provides this detailed info per request, so you can search for a request by its id, here "0HLH1ICD1KGGF:00000001".  The screenshot below shows the entirety of that request: the bottom rectangle marks the start of the request, the top rectangle the finish.  From it you can see things like all the SQL statements EF Core executed, the MetaWeblog API newPost endpoint being called and the XML it received, and other good stuff.

Seq shows a request from Open Live Writer

The type or namespace name could not be found in your ".g.cshtml.cs" file

Last night I updated the namespace of my User class from Fan.Models to Fan.Membership (I probably should have used a refactoring tool but didn't) and the web project Fan.Web stopped building.

The type or namespace name could not be found in my .g.cshtml.cs file

These are CS0246 compiler errors, all saying "type or namespace name could not be found" and all pointing to my ".g.cshtml.cs" files.

The actual issues are in the .cshtml files, but Visual Studio points to the compiled versions of those .cshtml files. Moreover, clicking on these errors in Visual Studio does not open either the .cshtml or the .g.cshtml.cs file.  So for a moment I was left hanging, guessing at what to do.

My first thought was that the compiled Razor pages had been cached and not cleaned out.  Then I went all over the place trying to find these files; maybe it was getting late at night.  Finally I right-clicked and copied an error out, and saw the files are just local, in my folder src\Fan.Web\obj\Debug\netcoreapp2.1\Razor\Pages\Admin\Categories.g.cshtml.cs

To sum up, Razor pages are compiled into these C# .g.cshtml.cs files, located right in your web project's "...\obj\Debug\netcoreapp2.1\Razor\Pages\..." folder.  If your .cshtml pages reference namespaces you just renamed, Visual Studio won't point you to the .cshtml but to the compiled version instead.
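The fix itself was then trivial: update the stale namespace in the offending .cshtml files, for example (a hypothetical page; the model name is illustrative):

@page
@using Fan.Membership  @* was: @using Fan.Models, the rename that broke the build *@
@model CategoriesModel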

Gotcha: Time zone was not found on the local computer

A unit test failed when I built my code on Bitbucket; it threw a System.TimeZoneNotFoundException.  It happens where I try to convert a UTC time to local time in a specific time zone. Interestingly, the same unit test passes on my local machine running Windows 10 and on AppVeyor.

The exact error message is

The time zone ID 'Pacific Standard Time' was not found on the local computer. 

The exception occurred on the following line of code; its parameter timeZoneId was passed the value "Pacific Standard Time".

TimeZoneInfo userTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);

The issue comes from the fact that there is more than one source of time zone IDs: Windows has its own list, while Linux and macOS use the IANA time zone database.

And they name the same zone differently; for example, what Windows calls "Eastern Standard Time", IANA calls "America/New_York".

I found this cool library, TimeZoneConverter, that solves exactly this problem with minimal code change; my previous line of code becomes:

TimeZoneInfo userTimeZone = TZConvert.GetTimeZoneInfo(timeZoneId);

It has the ability to work with time zone IDs from any source:

// Either of these will work on any platform: 
TimeZoneInfo tzi = TZConvert.GetTimeZoneInfo("Eastern Standard Time"); 
TimeZoneInfo tzi = TZConvert.GetTimeZoneInfo("America/New_York"); 

Alternatively, I've read that the Noda Time library doesn't have these issues, but right now, before a release, I just want to resolve this with the least code change.

Production deployment with Azure App Service slots and Entity Framework Core migrations

I recently made some progress on this blog and wanted to push the latest changes live. I've been using EF Core migrations to assist with any data-related changes, and it has worked out very well for me during development. When it came time to push to production I expected it to work just as well, but nonetheless I first wanted to see what others think about using EF migrations against a production database.  I found this StackOverflow question, Is it OK to update a production database with EF migrations? The short answer is YES! Now having done this myself, I can say that EF migrations used in conjunction with Azure deployment slots make production deployment really easy. In this post I explain my setup and the simple steps I followed to upgrade this site.

Deployment slots

Azure App Service deployment slots give you the ability to set up multiple environments, like production, staging and test, and quickly swap between them.

My Azure deployment slots and databases

This feature is only available on the Standard service plan or higher. This site was running on an Azure B1 App Service plan, so I had to scale it up to an S1. With the Standard plan I created a Staging slot.  Also, I previously had an S0 SQL Database as my production database; this time I created a Basic SQL Database to serve as the staging database.

Here I'm trying to save money, so I only created two environments, production and staging, and let staging also serve as testing.  Theoretically, staging should closely mimic production and even share the same database.

In addition to the app services and databases, I also have separate Application Insights and Blob Storage accounts for each of the production and staging environments; they are not shown in the diagram.

Deployment steps

My Azure deployment flow

Step 1

Last year I had my v1.0 app deployed from GitHub onto the production app service, and EF Core automatically populated the production database. This week I did the same thing for the staging app service and database.

So by this time the production and staging environments had the exact same code and DB schema.

Step 2

I deployed the v1.1 alpha to the staging slot and let EF upgrade the staging DB to the newer v1.1 schema. Here I actually have a choice between running a SQL upgrade script myself to upgrade the database or letting EF do it; more on this later.
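For the "letting EF do it" route, here is a minimal sketch of applying pending migrations at app startup (Fanray's actual wiring may differ; FanDbContext is an assumed name):

using Microsoft.AspNetCore.Builder;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public void Configure(IApplicationBuilder app)
{
    // Apply any migrations not yet recorded in __EFMigrationsHistory.
    using (var scope = app.ApplicationServices.CreateScope())
    {
        scope.ServiceProvider.GetRequiredService<FanDbContext>()
             .Database.Migrate();
    }
    // ... rest of the middleware pipeline
}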

This step is critical, because if it works, then when I push v1.1 to production it should work too.  And indeed I encountered a BadImageFormatException right after I pushed v1.1 to staging; this error had never occurred during development, and it took me a while to track down what was happening.  The point is that deploying to staging first gave me the chance to fix it.

At this moment the entire staging web app and database had the latest code and were working well.

Step 3

Remember to check mark your slot-specific settings before swapping!

It was time to do the swap, but before that I made sure the slot-specific settings in both production's and staging's Application Settings were check-marked. Each environment has its own database, Application Insights and blob storage; marking these settings as slot-specific means they do not travel with the swap.

Before swapping, set up Source and Destination

Now I was ready to do the swap, and I had a choice between a Swap and a Swap with preview. In either case, make staging the Source and production the Destination.

Doing a Swap starts swapping the source and destination immediately; the first time I did this it took about 1 minute and 15 seconds to complete.  Basically, Azure first warms up the staging slot and then completes the actual swap.

Swap with preview, on the other hand, is not immediate; according to the Azure docs the following happens:

  • It keeps production unchanged so existing workload on that slot is not impacted.
  • It applies configurations of production slot to staging slot, including the production slot-specific settings!
  • It restarts the worker processes on the staging slot using these production configuration elements.
  • When you complete the swap: it moves the pre-warmed-up staging slot into the production slot, and production slot into the staging slot as in a manual swap.
  • When you cancel the swap: it reapplies the settings of the staging slot to the staging slot.

Below is a screenshot showing that when you do Swap with preview, the connection strings used by both slots are the same.

Azure slot swap Preview Changes shows both Production and Staging using the same connection strings

Step 4

When you preview the swap, the staging site shows up with production data; in other words, Azure warms up the staging slot.  So when you are ready to Complete swap, it actually finishes faster than the direct swap I did earlier: the complete-swap step took only about 20 to 30 seconds.

Complete swap after previewing staging

EF Core Migrations

For the upcoming Fanray v1.1 I made a series of changes to the database schema, including adding and renaming columns, making a column nullable, updating existing data and inserting new data.  As I mentioned in step 2, I had a choice in how to do the database upgrade: I could either let EF do it automatically, or use EF to generate an upgrade SQL script first and then run that against the database. Some organizations prefer to have a DBA look over any SQL upgrade script first.  I have tried both ways and both worked perfectly.

To generate the SQL script you can run either the EF PowerShell command Script-Migration or the dotnet CLI command dotnet ef migrations script.
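For example, with the dotnet CLI (the output file name here is just an illustration; the --idempotent flag produces a script that checks __EFMigrationsHistory, so it can run against a database at any schema version):

$ dotnet ef migrations script --idempotent --output upgrade.sql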

Final thoughts on what happens during the upgrade 

Remember, back in step 3 during the swap preview, staging actually gets all of production's settings.  As a result, EF is at that point already upgrading your production database.  Based on the success of step 2 I know this upgrade should work, but while that's happening remember the production website is still running v1.0 of the app against the same database that staging is upgrading to v1.1!

I tested locally that my v1.0 app does work with the v1.1 DB schema, but this obviously is a problem if there are breaking changes: site visitors may see errors during that one-minute swap on the live site.  The first remedy that comes to mind is the old app_offline.htm trick, as discussed in this SO question, How to use IIS app_offline.htm file with Azure. The downside is that even though the swap happens pretty quickly, during that time your site is still down for your visitors.

One of the answers on that SO question mentioned "you should be able to virtually eliminate down time with Azure by running multiple instances".  As I explained above, I'm not sure that's the case.  The comment left below this answer is more in line with what I have observed:

My co-founder is actually the Azure expert on our team, and we are already running multiple instances with SQL Azure. However, earlier today, he needed to update the DB schema which meant that part of the site was down for several minutes. When I hit the site, I was redirected to my main ErrorPage. But I would have preferred to have had the app_offline.htm file in the root during those few minutes. I was just under the impression that it's non trivial to be doing file I/O related things on an Azure deployment.

Also, Azure provides SQL Database backups, so if upgrading the production database fails you can restore it from a backup.  This has been my deployment flow so far. Is there a better way, or how are you approaching the deployment and upgrade of production databases?  Please let me know what you think.

How to update git commit messages (single or multiple, local or remote)

Whether you want to update a single commit message or several, local or already pushed, this post shows you how.

To update the most recent local commit message

$ git commit --amend

The text editor opens; edit your commit message, then save and close the file.

To update multiple local commit messages

$ git rebase -i HEAD~3 # Modify the last 3 commits

You will see something like the following

pick e499d89 Delete CNAME
pick 0c39034 Better README
pick f7fde4a Change the commit message but push the same commit.

# … with some instructions in the comments here …

Replace pick with reword for each commit you want to rename, then save and close the file.
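For example, to reword all three commits shown above:

reword e499d89 Delete CNAME
reword 0c39034 Better README
reword f7fde4a Change the commit message but push the same commit.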

Git will open the first commit above in the text editor; type the new commit message, save and close the file. Git then opens the second commit, and so on until you have updated, saved and closed the last one.

To update commits that have been pushed

Do exactly as explained above, whether it's the last commit or the last several commits. Then do this:

$ git push --force

One thing to note is that the commit hashes will be updated as well.

Reference: Changing a commit message

Angular 5 vs React 16

After releasing Fanray v1 I took some time to research what to learn and build next.  A blog roughly has two parts. The public-facing site is the part visitors see, and it is normally themed. The blog also has an admin console, where blog owners and writers log in, write posts and manage the entire site. The public site is normally an MPA (Multi-Page Application), meaning when you go from one page to another you see a full browser reload, whereas the admin console is a good candidate for a SPA (Single-Page Application).

The question is which front-end framework or library to use. I have Angular experience from the past: I built the Chef.me project using AngularJS 1.x and used Angular 2 in hackathons. But since I have the luxury of building something entirely from the ground up, I want to experiment with what's out there. I considered four: Angular, React, Vue and Ember. Tough choices really, but I had to make my picks, and eventually I came down to two: Angular vs React.

There are numerous articles out there that compare these technologies; a couple of them stood out to me.

Here is some basic info I came up with based on my research:

 | Angular | React
Classification | Framework | Library
Version | 5 | 16
CLI | Angular CLI | create-react-app
Binding | Two way | One way
DOM | Regular DOM | Virtual DOM
Dominant Language | TypeScript | ES6
Static Type Checking | TypeScript with DefinitelyTyped | Flow
Html Template | Either html file or inline in the component ts file | JSX
Recommended Editor | Visual Studio Code | Atom with Nuclide
Native Mobile Development | NativeScript (by Progress) | ReactNative
Material Design | Angular Material | Material-UI

Below is how each of Angular and React works in a simple example.

Angular

The best way to get an Angular project started is through its CLI (v1.6.1 as of this writing): ng new my-angular-app. After you build it for production with ng build --prod, below is your Angular app's index.html.

It includes three JavaScript bundle files: inline (the webpack loader), polyfills, and main (your code plus styles and vendor). The main bundle is about 147k. All builds make use of bundling and limited tree-shaking, while --prod builds also run limited dead code elimination via UglifyJS. There is also experimental service worker support for production builds available in the CLI, which you can enable manually; I mention this because you will see React has this support too. For more information see the ng build documentation.

<!doctype html>
<html lang="en">
    <head>
       <meta charset="utf-8">
       <title>my-angular-app</title>
       <base href="/">
       <meta name="viewport" content="width=device-width,initial-scale=1">
       <link rel="icon" type="image/x-icon" href="favicon.ico">
       <link href="styles.d41d8cd98f00b204e980.bundle.css" rel="stylesheet"/>
    </head>
    <body>
       <app-root></app-root>
       <script type="text/javascript" src="inline.19f3f7885ab6e4e2dee3.bundle.js"></script><script type="text/javascript" src="polyfills.f039bbc7aaddeebcb9aa.bundle.js"></script><script type="text/javascript" src="main.5f6465ddee537c95d12a.bundle.js"></script>
    </body>
</html>

The index.html also includes your Angular directive <app-root></app-root>. When your website starts, the Angular app’s main entry point is main.ts.

import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';
import { environment } from './environments/environment';

if (environment.production) {
   enableProdMode();
}

platformBrowserDynamic().bootstrapModule(AppModule)
   .catch(err => console.log(err));

Then main.ts bootstraps an Angular module, AppModule; each Angular app must have at least one module. Here is app.module.ts.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
@NgModule({
   declarations: [
     AppComponent
   ],
   imports: [
     BrowserModule
   ],
   providers: [],
   bootstrap: [AppComponent]
})
export class AppModule { }

After that, AppModule bootstraps a very simple Angular component, AppComponent.  Here is what that component looks like in app.component.ts.

import { Component } from '@angular/core';

@Component({
   selector: 'app-root',
   templateUrl: './app.component.html',
   styleUrls: ['./app.component.css']
})
export class AppComponent {
   title = 'app';
}

Finally, the component has a template whose html replaces the <app-root></app-root> directive in index.html and is shown to users in the browser.

So the Angular component flow is like this:

An HTML page with some Angular directives –> module loader calls main.ts –> bootstraps AppModule –> bootstraps AppComponent –> replaces the Angular directive with its template content.

React

React's CLI is called create-react-app (v1.4.3 as of this writing); running create-react-app my-react-app creates a starter project for you.  After you build it for production with the react-scripts build command, below is your React app's index.html.

It includes one main bundle JavaScript file that has everything except styles, and it is about 113k. Notice React does not provide polyfills out of the box; you need to add them manually.

<!DOCTYPE html>
<html lang="en">
    <head>
       <meta charset="utf-8">
       <meta name="viewport" content="width=device-width,initial-scale=1,shrink-to-fit=no">
       <meta name="theme-color" content="#000000">
       <link rel="manifest" href="/manifest.json">
       <link rel="shortcut icon" href="/favicon.ico">
       <title>React App</title>
       <link href="/static/css/main.9a0fe4f1.css" rel="stylesheet">
    </head>
    <body>
       <noscript>You need to enable JavaScript to run this app.</noscript>
       <div id="root"></div>
       <script type="text/javascript" src="/static/js/main.656db2cf.js"></script>
    </body>
</html>

The index.html also has a <div id="root"></div>.  When your website starts, the main entry point for the React app is index.js.

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import registerServiceWorker from './registerServiceWorker';
ReactDOM.render(<App />, document.getElementById('root'));
registerServiceWorker();

Then index.js calls ReactDOM.render, which renders your App component and attaches its output to the root div. Notice it also calls registerServiceWorker() from registerServiceWorker.js.  This serves assets from a local cache, letting the app load faster on subsequent visits in production and giving it offline capabilities. However, it also means that developers (and users) will only see deployed updates on the "N+1" visit to a page, since previously cached resources are updated in the background. For more information see the create-react-app documentation.

The App component looks like this.

import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

class App extends Component {
   render() {
     return (
       <div className="App">
         <header className="App-header">
           <img src={logo} className="App-logo" alt="logo" />
           <h1 className="App-title">Welcome to React</h1>
         </header>
         <p className="App-intro">
           To get started, edit <code>src/App.js</code> and save to reload.
         </p>
       </div>
     );
   }
}

export default App;

The React component flow is like this:

An html page with a div placeholder –> Loader calls index.js –> calls ReactDOM.render –> calls your component –> attaches the html output to the div placeholder.

Conclusion

This post jots down my thoughts based on brief research into Angular and React.  It only scratches the surface of comparing these two technologies, but it does give a glimpse of how each works with components, as components are the building blocks of both Angular and React.  Just by looking at the code, Angular does look like it takes more turns to render a component, but that is because it has its own concept of a module, which basically is used to group components. React feels more straightforward, in the sense that you have an html tag and one piece of JavaScript that works on that tag.

Angular is a full-blown framework while React is a library; both can achieve exactly the same thing, and with React you can add in everything else you need from other libs. I love both technologies based on my experimentation.  In my view Angular is more suited for SPA apps, and I intend to build the admin console with it. Since React is more lightweight, I'd like to try it out on certain pages of the public site, replacing jQuery.

Fanray 1.0.0 released

From 8/14/2017 to 11/30/2017, it took me three and a half months to go from the initial commit to the v1 release today. I'm right on track with what I set out to do: learning in the open, building something I can use every day, and sharing all aspects of this process with the community.

It’s an MVP

V1 is not much, but it’s useful enough to bring you these words on this page. It was intended to be an MVP.

A Minimum Viable Product (MVP) is a product with just enough features to satisfy early customers, and to provide feedback for future product development.[1][2] Some experts suggest that in business to business transactions an MVP also means saleable: "it’s not an MVP until you sell it. Viable means you can sell it".

Here I myself am the early customer, and for it to be saleable to myself it has to have the basic features I think a blog should have (posts, categories, tags, comments, SEO considerations, RSS feeds etc.), and on top of these it must be performant and stable.

Blogs have been around since the 90s, and their features vary greatly, from a static page with text to complex systems like WordPress. Ambition could easily kick in, scope could get out of hand, and I could end up starting something I'd never finish on time. To avoid this I decided early on to support the MetaWeblog API, which dictates a set of features a blog needs to implement so that desktop clients, like Open Live Writer, can talk to it. This strategy has proven helpful: it limited my scope to what needed to be built, without ambiguity. It also gives me a rich client to start posting with, without a full-blown admin console, which takes more time to develop and is coming in 1.1.

Architecture

I've designed the app using an n-tier architecture, a very typical presentation-to-business-logic-to-data-access setup. The diagram below also shows some of the clients the blog could potentially support and how they communicate.  For example, a desktop client talks to Fanray through the MetaWeblog API, which is built on XML-RPC, so the content type of the communication is XML, whereas the browser talks to MVC controllers that return HTML, CSS and JavaScript.

  • desktop (MetaWeblog API – XML)
  • browser (MVC – HTML)
  • mobile (Web API – JSON)
Fanray blog architecture

On top of the basic architecture, the practice of Skinny Controllers, Fat Models and Dumb Views is a very effective strategy for achieving Separation of Concerns. The web tier handles traffic and presentational logic only.  The business logic layer does most of the heavy lifting: validation, calculation, caching and much more. The data access layer does just data-access operations.  Of course there are grey areas; validation, for example, can happen at any tier, and that deserves a post of its own.  But the basic idea is that each tier (or layer, I use the two terms interchangeably) has a very specific concern. The different clients talk to different kinds of endpoints: the browser calls MVC controllers while Open Live Writer calls MetaWeblog API endpoints; both ask for the same business logic to be carried out, and when they get the results they return them to the clients in different formats.

Onward

I'm making steady improvements to this app, and hopefully others who come across this project will find it useful as well. Any feedback is welcome, and if you would like to participate, please check out the GitHub repo on how to contribute. Thank you.
