Sunday, April 23, 2017

Setup Application Gateway & the Internal Front-End Load Balancer for an application


This post follows on from the one on creating a service fabric environment in Azure http://jonlanceley.blogspot.co.uk/2017/04/3-node-service-fabric-environment-with.html

After you have deployed an application to Service Fabric you need to add its port to the cluster's front-end load balancer and then to the Application Gateway.

1. Front End load balancer

The port is the one you have defined in your ServiceManifest.xml e.g.

  <Resources>
    <Endpoints>
      <Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="8702" />
    </Endpoints>
  </Resources>
</ServiceManifest>

Create a new Health Probe

e.g. 8702Probe


Create a new Load balancing rule

e.g. 8702Rule; make sure the port, backend port, backend pool and health probe are set correctly.

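The probe and rule can also be scripted. A minimal AzureRM PowerShell sketch, assuming a resource group and load balancer name that match your own cluster (the names below are illustrative, not from this deployment):

```powershell
# Illustrative names: change the resource group and load balancer to match your cluster
$lb = Get-AzureRmLoadBalancer -ResourceGroupName "mycluster" -Name "LB-sfcluster-FrontEnd"

# Health probe for the service port defined in ServiceManifest.xml
Add-AzureRmLoadBalancerProbeConfig -LoadBalancer $lb -Name "8702Probe" `
    -Protocol Tcp -Port 8702 -IntervalInSeconds 15 -ProbeCount 2

# Load balancing rule forwarding port 8702 to the backend pool, using that probe
Add-AzureRmLoadBalancerRuleConfig -LoadBalancer $lb -Name "8702Rule" `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe (Get-AzureRmLoadBalancerProbeConfig -LoadBalancer $lb -Name "8702Probe") `
    -Protocol Tcp -FrontendPort 8702 -BackendPort 8702

# Push the updated configuration back to Azure
Set-AzureRmLoadBalancer -LoadBalancer $lb
```

This does the same as the portal steps above; the portal is just a friendlier way to fill in the same fields.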


2. Application Gateway

Add a new Health Probe


The path just needs to be an endpoint that can return a response, so the health probe knows whether the application is alive.
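For example, a minimal Web API controller the probe path could point at (this controller is a hypothetical sketch, not part of the original application):

```csharp
// Hypothetical ASP.NET Web API controller; any endpoint returning 200 satisfies the probe.
public class HealthController : ApiController
{
    [HttpGet]
    public IHttpActionResult Get()
    {
        // A 200 OK response tells the Application Gateway probe the service is alive.
        return Ok("Healthy");
    }
}
```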



Add a new Http Setting



Add a new Multi-Site Listener



Add a new Basic rule

Make sure you choose the HTTP setting you created earlier.


Remove rule1 for the appGatewayHttpListener



Check the backend health

Before connecting via a browser it’s worth checking that the Backend health report shows Healthy; otherwise you have missed something.


If it’s healthy, try opening your application in a browser e.g.

http://sfapptest.com/api/values/get

Wednesday, April 19, 2017

3 Node Service Fabric Environment with an Azure Application Gateway

This is an article I put together as I was experimenting with Service Fabric for a real-world solution to a problem we had.

In it we will create a Service Fabric environment in Azure which contains 3 node types, FrontEnd, BackEnd and Management, plus an Application Gateway in front, through which all internet traffic can be routed to the FrontEnd node. We will also be using an existing Virtual Network and Subnets into which we will put the Service Fabric cluster.

This post helped me a lot with producing this solution:

https://brentdacodemonkey.wordpress.com/2016/08/01/network-isolationsecurity-with-azure-service-fabric/

My template originally came from the Azure Portal: when creating a new Service Fabric cluster there is the option of saving it as a template. It was then customised, as the portal wizard does not let you do certain things. Most of the customisations came from this site:

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-patterns-networking

This is what we will build:

The FrontEnd node type is where we put any stateless services.

The BackEnd node type is where we would put any stateful services.

The Azure service fabric services will run on the Management / Primary node type.

· This has a public static outbound IP address, so we can connect to view the status of the cluster.

· It can also host services which need to connect out to a third party that has IP security on their firewall. The third party then only needs to add this IP address to their firewall.

· We can also use this to securely access an Azure SQL database that has IP-restricted access.

 

The steps below are my notes for creating the service fabric environment.  All the scripts and ARM template are available on Github:

https://github.com/jonlanceley/jonlanceley/tree/master/CreateServiceFabricEnvironment

1. Create Service Fabric dependencies.

· Public Static IP (for Management nodeType)

· Key Vault (for service fabric certificates)

· Active Directory Application (for authentication)

· Resource Group to put service fabric cluster in

· Existing Virtual Network with 4 subnets for:

    o FrontEnd

    o BackEnd

    o Management

    o WAF / Application Gateway

Edit & change the parameters as required in this script:

Azure-CreateDependanciesForServiceFabricPlatform.ps1

Execute the script

Note: this script will prompt you yes/no to create each of the above items.

If you’re creating a non-development environment you do not want to use a self-signed certificate, so say ‘no’ when prompted. After the script has run you then need to manually add certificates to the key vault. Details here:

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-via-portal#add-certificates-to-key-vault

https://blogs.technet.microsoft.com/kv/2016/09/26/get-started-with-azure-key-vault-certificates/


2. Create the Service Fabric Environment

This will create a Service Fabric environment with 3 nodeTypes: FrontEnd, BackEnd and Management.

The management node type is set as the Primary.

Note: the NSGs are created by the script but are not assigned to the subnets.

Go to folder:

secureTemplateAnd3NodeTypeWithApplicationGatewayAndExistingSubnet

Copy the parameters.json file and then change the parameters.

Note: By default the script creates the minimum number of VMs, all at Standard A0 size. If this is a non-development environment you will want to change:

    · Minimum number of instances:

        o Set to 5 on Management/Primary node type

        o Set to 5 on Backend node type (stateful)

        o Set to 2 on Frontend node type (stateless)

    · Size (set to Standard D1_V2 the minimum supported spec for all node types)

    · Reliability Level of the cluster should be minimum of Silver in production (default is Bronze)

o Static IP parameters (change to match those you just setup):

    • existingStaticIPResourceGroup
    • existingStaticIPName
    • existingStaticIPDnsFQDN

o Specify the existing Virtual Network and subnet names:

    • virtualNetworkName
    • existingVNetRGName
    • subnet0Name
    • subnet1Name
    • subnet2Name
    • subnetWAFName

o Active Directory parameters (change to match those you just setup):

    • aadTenantId
    • aadClusterApplicationId
    • aadClientApplicationId

o Certificate parameters (change to match those you just setup):

    • SourceVaultValue
    • certificateUrlValue
    • certificateThumbprint

o VM login parameters (used if you ever need to RDP into a cluster machine):

    • adminUserName
    • adminPassword

o Other parameters
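Taken together, a sketch of the copied parameters file might look like this — all values below are placeholders for your own environment, only the parameter names come from the template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "existingStaticIPResourceGroup": { "value": "sf-dependencies" },
    "existingStaticIPName": { "value": "sfManagementIP" },
    "virtualNetworkName": { "value": "myVNet" },
    "existingVNetRGName": { "value": "myVNetRG" },
    "subnet0Name": { "value": "Management" },
    "certificateThumbprint": { "value": "<your thumbprint here>" },
    "adminUserName": { "value": "clusteradmin" }
  }
}
```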

Execute the deploy script:

.\deploy.ps1 -subscriptionId <yourAzureSubscriptionIdHere> -resourceGroupName mycluster -deploymentName mycluster -parametersFilePath .\parameters.json

If after a long time it errors with the message ‘Monitoring Agent not reporting success after launch’, you should be fine: Service Fabric will automatically recover the nodes that this failed for.

 

3. After deployment

Go to the Azure portal and find your Service Fabric cluster; you should eventually see the nodes (they may take some time to appear).


Once the deployment has finished and you can see in the Azure Portal that the nodes in the cluster are running, you should be able to view the cluster, e.g.

https://jonscluster.northeurope.cloudapp.azure.com:19080/Explorer

This should prompt you to log in. If you see a message:

AADSTS50105: The signed in user 'jon.lanceley_xxxxxxxxx.com#EXT#@jonlanceleyxxxxxxxx.onmicrosoft.com' is not assigned to a role for the application '9df93f43-6682-4004-addd-1522a4e13439'.

Go to Azure Active Directory -> Enterprise Applications -> All Applications


Find the cluster server application (not the client one)

Add the user as an Admin


That’s it, you should now have a running Service Fabric Cluster. 

You now just need to deploy some code to it, and then open the Front End internal load balancer and the Application Gateway ports for the application: http://jonlanceley.blogspot.co.uk/2017/04/setup-application-gateway-internal.html

Tuesday, April 12, 2016

Release Management in TFS 2015.2

With the TFS 2015.2 update we now have the ability to use Release Manager in TFS on premise. This is how I have set up our branching structure, gated check-in and Release Manager to control the deployment (with approvals at some stages) of an MVC5 web site with Entity Framework into multiple Azure Web Application environments.

We have our branching structure set as:

Dev\developer name 1

Dev\developer name 2 etc

Main

Developers take a branch from main into the dev folder under their name and work on changes. When development is complete they check-in to their dev branch and then merge the changes into the Main branch. This allows the developer to pull in other changes from main into their dev branch, and be able to check-in at least daily into their dev branch so their code is backed up overnight.

We are using the new TFS build tasks; set up against the Main branch are:

  • A gated check-in.


  • And a number of build tasks that compile the code in Release mode, run some unit tests and finally copy the files/build artifacts we need to deploy the web site into the drop folder (essentially a web deploy zip file is created).


 

The key to creating a web deploy zip file is the MS Build Arguments:

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true

In the Copy and Publish Build Artifacts task we have:


This automatically builds all changes checked into the Main branch, with the final step producing the web deploy files. Assuming the build passes, Release Manager will take over to deploy the web site into an Azure Web Application.

We have 4 environments in Azure that we need to deploy to: ‘UAT Staging’, ‘UAT’, ‘Production Staging’ and ‘Production’. For us these are split across 2 Azure subscriptions, one for UAT and one for Production. Within each subscription is 1 web application (e.g. UAT) with 1 ‘staging’ slot, and each has its own Azure SQL database. This gives us 2 websites and 2 databases. We also make use of sticky slot settings so the connection string / app settings stay with their environment, because in production we use the Azure web application swap-slot functionality.

So we configure Release Manager to deploy the code to UAT Staging, UAT and Production Staging; for Production, Release Manager just initiates a swap-slot PowerShell command.

In Release Manager we will need to set up the deployment process, or tasks, for each of these environments.

But first, Release Manager is set up to watch the Main branch for a new build artifact, which is done by setting the release trigger to Continuous Deployment.


Setting the UAT Staging environment trigger to ‘Automated after release creation’ will initiate a deployment into that environment automatically as soon as a new version is checked in (for us with no approvals required, because that is the first environment in Azure which our developers can test against).

On the environments tab we define the deployment steps for each environment so for UAT Staging we have:

  • Stop the Azure web application
  • Deploy the new code/web site
  • Start the Azure web application


We are making use of the new TFS Market Place extension ‘Run Inline Azure Powershell’ task which allows us to stop the Azure web application with:

Stop-AzureWebsite -Name $(WebAppName) -Slot stage

And below are the properties of the Azure Web App Deployment task, with the main one being the path to the Web Deploy Package.

Note: Our web applications are always running in Azure; we are not creating them on demand.


The UAT and Production Staging environments all have the same 3 tasks. Deployment into an environment takes about 1 minute 20 seconds.

The Production environment has 1 task which swaps Production Staging with Production by executing this powershell script:

Switch-AzureWebsiteSlot -Name $(WebAppName) -Slot1 stage -Slot2 production -Force -Verbose


That allows us to do a quick production deployment (saving us that 1 minute 20).

UAT, Production Staging and Production all have Pre-Deployment approvers setup.


So deployment to the 1st Azure environment ‘UAT Staging’ happens automatically upon a successful check-in to the Main branch.

The developer has the chance now to manually test the site. When they want to deploy to the next environment ‘UAT’ they would open the release in release manager and start the deployment:


Using the ‘Deploy’ button to request that the code is deployed into the UAT environment sends an email to the approvers, who will ‘hopefully’ approve the release; if they do, Release Manager will execute the 3 deployment tasks set up for the UAT environment.

The same process is followed for the remaining environments: the developer tests, then uses the Deploy button to move the same web application code to the next environment.

At any point we can check what release is in what environment by looking at the overview tab.


Database changes are done via Entity Framework Code First Migrations, which are executed upon web site startup by running this code, added to the MVC site’s startup.cs class:

var efConfiguration = new Configuration();
var dbMigrator = new System.Data.Entity.Migrations.DbMigrator(efConfiguration);
dbMigrator.Update();

This will execute any schema changes and then run the seed data method. The key to keeping the seed data updated is using the extension method ‘AddOrUpdate’.
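As a sketch of how the seed method can use AddOrUpdate (SiteDBContext and Country are illustrative names here, not necessarily those of the real project):

```csharp
// A sketch only: a Migrations Configuration Seed method using AddOrUpdate
// so re-running the seed updates existing rows instead of duplicating them.
internal sealed class Configuration : DbMigrationsConfiguration<SiteDBContext>
{
    protected override void Seed(SiteDBContext context)
    {
        context.Countries.AddOrUpdate(
            c => c.ID,   // match on the primary key to decide insert vs update
            new Country { ID = 1, Name = "United Kingdom" },
            new Country { ID = 2, Name = "United States" });
    }
}
```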

 

Useful Links

https://msdn.microsoft.com/library/vs/alm/release/overview

Sunday, August 30, 2015

Unit Testing (part 4) - Faking Entity Framework code first DbContext & DbSet


This is the 4th in a series of posts about unit testing:

Unit Testing (part 1) - Without using a mocking framework

Unit Testing (part 2) - Faking the HttpContext and HttpContextBase

Unit Testing (part 3) - Running Unit Tests & Code Coverage

Unit Testing (part 4) - Faking Entity Framework code first DbContext & DbSet

 

Following on from the last 3 articles, we can use the same approach of faking with test doubles on our database repository methods.

I’m using Entity Framework 6 code first and I want to be able to call the code in my repository layer so I can test the where clauses etc., but I do not want to actually call the database. Entity Framework has a DbContext and a DbSet; we just need to fake them.

All of our model classes implement this interface, IDbEntity.

public interface IDbEntity<TPrimaryKey>
{
    /// <summary>
    /// Unique identifier for this entity.
    /// </summary>
    TPrimaryKey ID { get; set; }
}

What this says is that we must have a primary key called ID on each model. We will use this later to implement a fast generic DbSet Find method.

public class Country : IDbEntity<Int32>
{
    [Key, DatabaseGenerated(DatabaseGeneratedOption.None)]
    public Int32 ID { get; set; }

    [Required]
    [MaxLength(100)]
    public String Name { get; set; }
}

First we need to set up our DbContext, so we start with our interface, which just contains a list of DbSets (which is what we need in order to begin faking it for the unit tests).

public interface ISiteDBContext
{
    DbSet<Country> Countries { get; set; }
}


Our concrete class that the MVC web site uses looks like this:

public class SiteDBContext : DbContext, ISiteDBContext
{
    public SiteDBContext()
        : base()
    {
        // Disable database initialisation (e.g. when the site is first run)
        Database.SetInitializer<SiteDBContext>(null);
    }

    public SiteDBContext(string nameOrConnectionString)
        : base(nameOrConnectionString)
    {
        // Disable database initialisation (e.g. when the site is first run)
        Database.SetInitializer<SiteDBContext>(null);
    }

    public DbSet<Country> Countries { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Fluent API commands go here e.g.
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
        modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
        base.OnModelCreating(modelBuilder);
    }
}

 

And our fake one for unit testing looks like this. We are implementing the interface, but then in the constructor setting the Countries DbSet to use our FakeDbSet instead:

public class FakeSiteDBContext : DbContext, ISiteDBContext
{
    public FakeSiteDBContext() : base()
    {
        // Disable code first auto creation of a database
        Database.SetInitializer<FakeSiteDBContext>(null);
        Countries = new FakeDbSet<Country>();
    }

    public DbSet<Country> Countries { get; set; }

    public override DbSet<TEntity> Set<TEntity>()
    {
        foreach (PropertyInfo property in typeof(FakeSiteDBContext).GetProperties())
        {
            if (property.PropertyType == typeof(DbSet<TEntity>))
            {
                var value = property.GetValue(this, null) as DbSet<TEntity>;
                return value;
            }
        }

        // If the above fails fall back to the base default
        return base.Set<TEntity>();
    }
}

And this is the FakeDbSet:

public sealed class FakeDbSet<TEntity> : DbSet<TEntity>, IQueryable, IEnumerable<TEntity>, IDbAsyncEnumerable<TEntity>
    where TEntity : class
{
    ObservableCollection<TEntity> _data;
    IQueryable _query;

    public FakeDbSet()
    {
        _data = new ObservableCollection<TEntity>();
        _query = _data.AsQueryable();
    }

    public override TEntity Find(params object[] keyValues)
    {
        // Find by the primary key (ID) as defined in the interface IDbEntity,
        // which is set on all of our model classes. This is a fast generic way
        // to implement Find.
        // There is currently only 1 primary keyValue that can be passed in,
        // so we use [0] to find it.
        var result = _data.OfType<IDbEntity<Int32>>().Where(m => m.ID == (Int32)keyValues[0]);
        var myEntity = (TEntity)result.SingleOrDefault();
        return myEntity;
    }

    public override TEntity Add(TEntity item)
    {
        // In our FakeDbSet, when an item is added to the context we increment its
        // primary key (ID column), otherwise it would always be 0.
        // All our model classes inherit IDbEntity, which defines an ID column as
        // the primary key.
        // But note this will not update navigation properties; apparently there
        // is no way in EF to do that yet (so you have to work around it).
        if (item is IDbEntity<Int32>)
        {
            var myItem = (IDbEntity<Int32>)item;
            if (myItem.ID == 0)
            {
                // Get the last record entered, so we can take its ID and add 1
                // to it for the new record
                var lastItem = _data.LastOrDefault();
                if (lastItem == null)
                    myItem.ID = 1;
                else
                {
                    var myLastItem = (IDbEntity<Int32>)lastItem;
                    myItem.ID = myLastItem.ID + 1;
                }
            }
        }

        _data.Add(item);
        return item;
    }

    public override TEntity Remove(TEntity item)
    {
        _data.Remove(item);
        return item;
    }

    public override TEntity Attach(TEntity item)
    {
        _data.Add(item);
        return item;
    }

    public override TEntity Create()
    {
        return Activator.CreateInstance<TEntity>();
    }

    public override TDerivedEntity Create<TDerivedEntity>()
    {
        return Activator.CreateInstance<TDerivedEntity>();
    }

    public override ObservableCollection<TEntity> Local
    {
        get { return _data; }
    }

    Type IQueryable.ElementType
    {
        get { return _query.ElementType; }
    }

    Expression IQueryable.Expression
    {
        get { return _query.Expression; }
    }

    IQueryProvider IQueryable.Provider
    {
        get { return new TestDbAsyncQueryProvider<TEntity>(_query.Provider); }
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        return _data.GetEnumerator();
    }

    IEnumerator<TEntity> IEnumerable<TEntity>.GetEnumerator()
    {
        return _data.GetEnumerator();
    }

    IDbAsyncEnumerator<TEntity> IDbAsyncEnumerable<TEntity>.GetAsyncEnumerator()
    {
        return new TestDbAsyncEnumerator<TEntity>(_data.GetEnumerator());
    }
}

 

internal class TestDbAsyncQueryProvider<TEntity> : IDbAsyncQueryProvider
{
    private readonly IQueryProvider _inner;

    internal TestDbAsyncQueryProvider(IQueryProvider inner)
    {
        _inner = inner;
    }

    public IQueryable CreateQuery(Expression expression)
    {
        return new TestDbAsyncEnumerable<TEntity>(expression);
    }

    public IQueryable<TElement> CreateQuery<TElement>(Expression expression)
    {
        return new TestDbAsyncEnumerable<TElement>(expression);
    }

    public object Execute(Expression expression)
    {
        return _inner.Execute(expression);
    }

    public TResult Execute<TResult>(Expression expression)
    {
        return _inner.Execute<TResult>(expression);
    }

    public Task<object> ExecuteAsync(Expression expression, CancellationToken cancellationToken)
    {
        return Task.FromResult(Execute(expression));
    }

    public Task<TResult> ExecuteAsync<TResult>(Expression expression, CancellationToken cancellationToken)
    {
        return Task.FromResult(Execute<TResult>(expression));
    }
}

 

internal class TestDbAsyncEnumerable<T> : EnumerableQuery<T>, IDbAsyncEnumerable<T>, IQueryable<T>
{
    public TestDbAsyncEnumerable(IEnumerable<T> enumerable)
        : base(enumerable)
    { }

    public TestDbAsyncEnumerable(Expression expression)
        : base(expression)
    { }

    public IDbAsyncEnumerator<T> GetAsyncEnumerator()
    {
        return new TestDbAsyncEnumerator<T>(this.AsEnumerable().GetEnumerator());
    }

    IDbAsyncEnumerator IDbAsyncEnumerable.GetAsyncEnumerator()
    {
        return GetAsyncEnumerator();
    }

    IQueryProvider IQueryable.Provider
    {
        get { return new TestDbAsyncQueryProvider<T>(this); }
    }
}

 

internal class TestDbAsyncEnumerator<T> : IDbAsyncEnumerator<T>
{
    private readonly IEnumerator<T> _inner;

    public TestDbAsyncEnumerator(IEnumerator<T> inner)
    {
        _inner = inner;
    }

    public void Dispose()
    {
        _inner.Dispose();
    }

    public Task<bool> MoveNextAsync(CancellationToken cancellationToken)
    {
        return Task.FromResult(_inner.MoveNext());
    }

    public T Current
    {
        get { return _inner.Current; }
    }

    object IDbAsyncEnumerator.Current
    {
        get { return Current; }
    }
}

 

Then using dependency injection in your unit test you register the FakeSiteDBContext.  Using Unity it would be:

container.RegisterType<DbContext, FakeSiteDBContext>(new PerRequestLifetimeManager());

And in the website you’d do:

container.RegisterType<DbContext, SiteDBContext>(new PerRequestLifetimeManager());

 

The unit test would look like this. In this example I’m calling a basketService method which makes all the same calls as if we were running the MVC web site, except that in the test it calls our FakeDbSet and FakeSiteDBContext to avoid hitting a database, because that is what we told our dependency injection to do: swap out every instance of DbContext with our FakeSiteDBContext.

 

[TestMethod]
public void AddToBasket_AddUSDItemToNewBasket()
{
    HttpContext.Current = new FakeHttpContext().CreateFakeHttpContext();
    unityContainer = UnityConfig.GetConfiguredContainer();
    var basketService = unityContainer.Resolve<IBasketService>();
    var httpContextWrapper = new FakeHttpContextWrapper(httpContext: HttpContext.Current);

    var model = basketService.AddSubscriptionItemToBasket(httpContextWrapper, params go here…);

    Assert.AreEqual("en-US", model.CurrencyFormat.CurrencyCulture, "CurrencyCulture");
}

 

If you want, you can also seed the Entity Framework models with the same seed data you’d use in the real database; remember it’s all in memory and it’s fast. It’s also useful as you’re working with the same data rather than creating fake data for every test.

There is nothing stopping you creating specific test data for one test as well, all you have to do is add populated models to the Entity Framework DbContext at the start of a unit test.

var order = new Order()
{
    UserBasketID = userBasketId,
    OrderItems = new Collection<OrderItem>(),
    DateCreated = DateTime.Now
};

var orderItem1 = new OrderItem
{
    Price = 2.00M,
    Quantity = 1,
    DateCreated = DateTime.Now,
    Order = order
};
order.OrderItems.Add(orderItem1);

dbContext.Orders.Add(order);
dbContext.OrderItems.Add(orderItem1);
No need to save (remember it’s in memory; all you have to do is .Add).
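As a quick sketch of the fake’s behaviour in isolation, a test like this (the test name is illustrative) exercises the FakeDbSet ID assignment and generic Find directly:

```csharp
// Hypothetical test: FakeDbSet.Add assigns ID = 1 to the first item,
// and Find locates it via the IDbEntity primary key.
[TestMethod]
public void Find_ReturnsEntityByAssignedId()
{
    var countries = new FakeDbSet<Country>();
    countries.Add(new Country { Name = "France" });   // ID starts at 0, Add sets it to 1

    var found = countries.Find(1);

    Assert.AreEqual("France", found.Name);
}
```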

The only downside I’ve found with this approach so far is that your site will run but the tests may fail, because:

  • A navigation property might be null.
  • Not all the data is added to the context.

Remember we are faking out the DbContext and DbSet so we do not get all the Entity Framework functionality. 

To address both of these points:

  • If you have reference/navigation properties, make sure you set them as in the example above (we assign the order to the orderItem). This way your repository methods’ navigation properties will work in your unit test and won’t be null.
  • If you think back to EF v1 days, you had to add every item you wanted saved to the context. So in the example above EF would be fine with just the dbContext.Orders.Add(order) line; it would know there are orderItems that also need saving. But the fake DbContext won’t! So if we are testing a lookup directly for orderItems, our test would show 0 records.

    We just have to also attach the orderItems to the dbContext. It won’t affect the way the site runs, and our tests will pass. So there is a compromise to be made here, but in my opinion a small one. Some people will argue that we are changing our site code to make the tests pass, and yes we are (slightly, and only when we hit this scenario, which has not happened to me much so far).

You should always follow up with UI integration tests (Selenium, Microsoft’s Coded UI, etc.) to test that all the site functions and 3rd-party calls work. These will be slower, but at least now we’ve got a way to run lots of fast unit tests before the code leaves Visual Studio.

Using this approach here are just some of my unit tests so you can see how quick they are:
