Posted 20 May 2015


Amazon Web Services part 4 - Cloudformation template to create a load balanced IIS/SQL Server web site using EC2, RDS and Route53

A fully functional sample Cloudformation template that generates a load balanced IIS/SQL Server based web site, plus PowerShell scripts to automatically deploy the web site on your EC2 instances

Introduction

This is part 4 of a series of articles about deploying your site to Amazon Web Services (AWS), Amazon's counterpart to Windows Azure.

In parts 1, 2 and 3, you saw how to use the AWS service Elastic Beanstalk to deploy a load balanced IIS based web application with a SQL Server database and its own domain name. This involved defining your EC2 servers, database server, etc. by completing a series of menus.

This part introduces the AWS service CloudFormation. This allows you to express your entire environment in a text file called a template.

The solution presented in this article uses a template to define all the web servers, database server, load balancer, name servers, etc. needed to host a web application. It also takes care of deploying the web application itself. Running a single Publish.ps1 PowerShell script will create/update all the bits of infrastructure and deploy the web application, so when it's done, you have a running web site.

Why CloudFormation

Specifying your environment by filling out a few forms is very convenient. However, once your environment becomes a bit more complicated, you'll quickly run into some drawbacks with this method:

  • Repeatability - You or a co-worker needs a carbon copy of your infrastructure in some other region. Or you need a test environment that is the same as your live environment. With menus, you're reduced to writing down every step and manually creating the copy.

  • Source control - There is no system to track changes by you or your co-workers. No rolling back if you make a mistake.

  • Documentation - To find out what's in an environment you have to hunt around the AWS dashboards.

The solution is to express your environment in a text file. This is easy to store, give to others and source control.

The AWS service CloudFormation lets you do exactly that. You define everything as a single large JSON object in a template file. CloudFormation takes that file and creates all the servers, load balancers, etc. you specified. This environment is called a stack.
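To give an idea of the format, a minimal but complete template looks like this (the resource name CodeBucket is just an example; a real template adds Parameters, many more Resources and Outputs):

```json
{
    "AWSTemplateFormatVersion" : "2010-09-09",
    "Description" : "Minimal example stack: a single S3 bucket",
    "Resources" : {
        "CodeBucket" : {
            "Type" : "AWS::S3::Bucket"
        }
    }
}
```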

When it comes time to modify your stack - remove/add servers, upgrade/downgrade servers, etc. - you change your template and invoke CloudFormation again. It will figure out how to modify your existing stack to your desired new stack making the fewest changes and causing the least disruption.

Stacks have their own name, such as "Test" or "Live". You can have multiple stacks and manage them independently. If you want 2 identical stacks, you'd use the same template to create them both.

In addition to infrastructure, CloudFormation will also deploy software for you, such as a web application.

About this article

CloudFormation is a low level tool. Whereas Elastic Beanstalk takes care of many details for you, such as creating the right security groups, with CloudFormation you have to define all this in your template yourself. There are many details to get right - and many ways to get it wrong. I found that the learning curve can be quite steep.

When I set out to create an MVC web application with SQL Server using CloudFormation, I found that:

  • To even get something simple working required getting many bits in place.
  • The AWS documentation is extensive, but it only gets you 90% there. You still have to figure many things out on your own.
  • Although AWS and others provide sample code, it was too simple or sketchy for my purposes.

So I'm publishing the fully fledged working solution here that I arrived at. You should be able to use this as is for your own web application, or as a basis for your own solution.

Note that this is not an introduction to CloudFormation or AWS. The focus of this article is on describing the solution I came up with. However, it has many links to the relevant parts of the AWS documentation, where you can find all the details. This is not too hard and you'll learn a lot.

Goals for this solution

The solution described in this article sets out to achieve 2 goals: to deploy certain infrastructure and to deploy a web application and its updates.

Infrastructure Goal

The infrastructure goal was to deploy a web application that uses a database and that has its own domain:

[Image: diagram of the infrastructure - web servers, database server, auto scaling group, load balancer and name servers]

Let's start from the center of the picture:

  • Web servers on EC2 instances - The web application will run on one or more EC2 instances - virtual machines in the AWS cloud acting as web servers.

  • SQL Server database server - The web application accesses a SQL Server database hosted on an RDS (Relational Database Service) instance. RDS is a fully managed database server as a service with automatic backups, etc.

  • SQL Server standby - For enhanced availability, you can use a Multi-AZ deployment where a standby database server automatically takes over when the main database server fails (only available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) regions).

    Although reasonably priced for a company, this feature is probably too expensive for a personal site. So I created both a CloudFormation template without a standby and a template with a standby.

  • Auto Scaling Group - The auto scaling group starts more EC2 instances when the existing instances get overloaded, and terminates instances when there is little to do.

  • Code package stored in S3 bucket - Seeing that the auto scaling group starts and terminates EC2 instances at will, there is no point in deploying software to individual servers. Instead, a package file with the code making up the web application is stored in an S3 bucket (the S3 service lets you store files in the cloud). When the auto scaling group starts a new EC2 instance, it retrieves the package from the S3 bucket and installs it on the new instance.

  • Load Balancer - The ELB (Elastic Load Balancer) receives web requests from the Internet and parcels them out to the available web servers.

  • DNS Name Servers - Used to translate your domain name to the IP address of the load balancer. Provided by the Route 53 service.

Software deployment goal

The solution also achieves some software deployment related goals:

  • Packaging - Packages up an ASP.NET web application and stores the package in an S3 bucket.

  • Deployment - Installs IIS, etc. on new EC2 instances with a Deploy.ps1 script.

  • Version tags - Adds tags with the current software version to the EC2 instances, to make it easy to see what version a particular instance is running.

  • Terminate and Replace - EC2 instances are never updated with new software versions. Instead, instances running old software are terminated and new EC2 instances created with the new software. This ensures that all EC2 instances are in the same known state, and that they all have the latest Windows patches.

  • Rolling updates - EC2 instances are replaced one by one, so there are always instances running to serve requests. If there is only one EC2 instance, a temporary second instance is created.

  • Web.config updated - During deployment, the web.config file is updated with the software version, and the server name, user name and user password of the database server.

I didn't look into database schema updates. If you use Entity Framework Code First, the web application itself will update the schema for you. However, I appreciate more work would be needed if you can't use Entity Framework or Code First.

Solution Components

The solution consists of a number of templates, scripts, etc., which are in the downloads. The next section shows how to use those components.

  • CloudFormation template without database server standby - specifies the infrastructure without a standby database server.

  • CloudFormation template with database server standby - specifies the infrastructure with a standby database server.

  • Very simple sample web application that uses a database. You may want to use this to experiment before moving to something more complicated.

    If you have a look at its web.config file, you'll find that it has a web.release.config transform to generate the Release version. That Release version has the placeholders {{DbUsername}}, {{DbPassword}}, {{DbServer}} and {{Version}}. During deployment these will be replaced by the actual database server details and the software version.

  • Deploy.ps1 PowerShell script.
    • After a new web server is created by the auto scaling group, any Deploy.ps1 script inside the source tree of your web application is executed on the new web server by the CloudFormation template.
    • However, this particular Deploy.ps1 file is a bit more restrictive as to where it sits. It has to sit in the root directory of the web application, where the main web.config is located.
    • Receives software version, database server details, etc. via parameters.
    • Responsible for starting IIS, replacing placeholders in the web.config, etc.

  • Publish.ps1 PowerShell script that packages up your web application, stores it in an S3 bucket and calls the CloudFormation service with a template to deploy the infrastructure and web application. Also updates an existing stack.

  • Stack policy that determines which servers can be replaced during an update. Use this to prevent CloudFormation from, for example, replacing a database server (more about stack policies).

  • User policy that provides the Publish.ps1 script all the permissions it needs to spin up the infrastructure.
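To illustrate the placeholder mechanism mentioned above, a Web.Release.config transform along these lines would produce the placeholders (the connection string name DefaultConnection and database name MyDb are made up for this sketch):

```xml
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings xdt:Transform="Replace">
    <!-- Deploy.ps1 later replaces the {{...}} placeholders with real values -->
    <add name="DefaultConnection" providerName="System.Data.SqlClient"
         connectionString="Data Source={{DbServer}};Initial Catalog=MyDb;User Id={{DbUsername}};Password={{DbPassword}}" />
  </connectionStrings>
</configuration>
```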

Costs

Using AWS services is not free. This section is here to spell this out. Personally, I found the amounts involved to be tiny, except for Multi-AZ deployments - do not leave these running overnight.

Each AWS service used by this solution has its own pricing page:

  • CloudFormation - deployment configuration
  • EC2 instances - virtual machines running the web servers
  • Elastic Load Balancing - load balancer
  • Auto Scaling - spinning web servers up/down in response to load
  • Route53 - DNS name servers
  • S3 - storing software packages
  • RDS, plus the SQL Server licence - database server (see below)


Multi-AZ deployment

Multi-AZ deployments are much more expensive than a database server without standby.

Amazon has implemented failover for SQL Server using the SQL Server Mirroring feature. This feature is not available in SQL Server Express or SQL Server Web Edition.

As of March 2015, your cheapest option that supports mirroring is a SQL Server Standard Edition database on a db.m1.small instance (pricing).

Keep in mind that AWS will charge you for the SQL Server Standard Edition licence, while a SQL Server Express licence is free.

Also, you'll be paying for 2 database servers - the main server and the standby.

If you're just playing around, I wouldn't keep a Multi-AZ deployment running overnight.

Detailed installation steps

This section shows how to spin up the infrastructure plus deploy a web application. First we need to get some once-off preliminaries out of the way, and then we can run the Publish.ps1 script to create the stack. You can use the same script to update your stack later on.

1. AWS Account

I'm assuming you have an AWS account (get one), and a key pair (get one).

2. Download

Download all the components to a directory on your computer. It may be easiest to simply download the entire Github project with all the articles in this series and then grab the components.

3. Set user policy

The user (probably yourself) that will run the Publish.ps1 script needs to have all the permissions required to spin up all the different infrastructure elements.

You express these permissions as an IAM policy. You then attach that policy to the user.

First create the policy:

  1. Sign into the AWS Console;
  2. Click Services (top of the screen) | IAM | Policies | Create Policy;
  3. Click Select for Create your own policy;
  4. Make up a nice name for your new policy, such as "Publish-CloudFormation-Stack". Whatever name you choose, write it down somewhere.
  5. Open the user policy file you just downloaded and copy and paste its content to the Policy Document. You'll see that a policy is simply a JSON object.
  6. Click Create Policy;

Then attach your new policy to the user:

  1. Click Users;
  2. Click the user that will be running the Publish.ps1 script;
  3. Click Attach Policy;
  4. Enter the policy you just created in the filter. Select your policy. Click Attach Policy;

4. Store credentials in credentials store

When you run an AWS PowerShell command, it sends a message to AWS to make an update or retrieve information. Just as you need to log into AWS before you can do this manually, the command must carry credentials, so it can be authenticated by AWS.

You could pass your credentials to each command, but that means your credentials are exposed in your PowerShell scripts.

A better way is to store your credentials in a credentials store, which is simply a file somewhere on your machine:

  1. Get the access key and secret key of the user that will run the Publish.ps1 script (how).

  2. Think of a name for your new credentials store, such as "mycredentials".

  3. Open a PowerShell command line and run:

    Set-AWSCredentials -AccessKey <access key> -SecretKey <secret key> -StoreAs <store name>

As an aside, if you have a look at the Publish.ps1 script, you'll see that it uses this command to tell the PowerShell commands in the current session to use the credentials in the credentials store:

Set-AWSCredentials -ProfileName <store name> 

5. Find your CIDR

You don't want everybody in the world to be able to try and RDP into your EC2 instances or SSMS into your database servers.

To lock this down, the Publish.ps1 script lets you pass in the CIDR of the machine that can RDP or SSMS into your servers.

To find the CIDR of your machine:

  1. Get your external IP address;
  2. Add /32 at the end. So if your IP address is 203.0.113.25, your CIDR is 203.0.113.25/32.

6. Create RDS option group to switch on SQL Server mirroring

You only need to do this if you plan to use a Multi-AZ deployment.

AWS' implementation of Multi-AZ deployments for SQL Server is based on the SQL Server Mirroring feature. To switch on this feature, you need to associate your RDS instance with an option group that has the mirroring option enabled.

Because this option group is simply a bit of infrastructure, you'd think you can create this in a CloudFormation template. Oddly, that cannot be done. You have to create it manually once off. When you run Publish.ps1, you'll pass in the name of the option group that you create here via a parameter.

To create the option group:

  1. In the AWS console, click Services | RDS | Option Groups | Create Group;
  2. Think of a name for your option group, such as "sqlserver-mirroring". Fill in the name and a description. Write down the name, you'll need it later on;
  3. Set Engine to sqlserver-se (Standard Edition) or sqlserver-ee (Enterprise Edition);
  4. Set Major Engine Version to version 11 (which is the same as the EngineVersion specified in the CloudFormation template);
  5. Click Create. This creates your option group and takes you back to the option groups list;
  6. Select your new option group;
  7. Click Add Option (near top of the page);
  8. Mirroring is the only option. You probably want to set Apply Immediately to Yes. Click Add Option.

7. Run Publish.ps1 script

With the preliminaries done, you can now run the Publish.ps1 script to spin up the infrastructure and deploy the web application.

Time to run

Running this script takes a while. Spinning up a new stack and deploying the site takes about 20 minutes in my experience. If you go for a stack with a Multi-AZ database deployment, 50 minutes is more like it.

Luckily, updates (running the script again for the same stack) tend to take far less time because CloudFormation does the minimum necessary to do the updates.

Domain

If you own a spare domain name that you want to use here, you'll pass that to the script via the websiteDomain parameter.

If you don't have a domain to use right now, just use a bogus domain such as "mybogusdomain.com".

Usage of the Publish.ps1 script

.\publish.ps1 -version <code version> -stackName <stack name> -websiteDomain <web site domain> 
    -keyName <key name> -credentialsStoreName <credential store name> -adminCidr <admin cidr> 
    -dbMasterUsername <database username> -dbMasterUserPassword <database password> 
    -bucketName <S3 bucket to store code> -templatePath <template file> 
    -csProjPath <.csproj file of your web application> -dbOptionGroupName <option group name> 
    -stackPolicyPath <file with stack policy>

Example

This is just an example that spins up infrastructure without a Multi-AZ deployment. It assumes you stored the downloaded files in a directory "c:\aws".

Do not copy this blindly, but use your own parameter values.

.\publish.ps1 -version 1.0.0 -stackName teststack -websiteDomain mybogusdomain.com -keyName mykeyname 
    -credentialsStoreName mycredentials -adminCidr 203.0.113.25/32 
    -dbMasterUsername dbusername -dbMasterUserPassword 'MustBe$uperHardToGu3ss!' 
    -bucketName must-be-unique-across-all-S3-buckets-in-AWS 
    -templatePath 'c:\aws\Template with SQL Server Express\Ec2RdsRoute54.template' 
    -csProjPath 'c:\aws\SimpleSiteWithDb\SimpleSiteWithDb\SimpleSiteWithDb.csproj'
    -stackPolicyPath 'c:\aws\stackpolicy.txt'

PowerShell special characters in parameter values

If any parameter value contains a $, enclose the parameter value in single quotes ('), not double quotes. That way, PowerShell won't regard the $ as the start of a variable and try to expand it.
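A quick demonstration of the difference:

```powershell
# Double quotes: PowerShell tries to expand $uperHardToGu3ss as a variable,
# which is undefined, so the password silently becomes "MustBe!"
"MustBe$uperHardToGu3ss!"

# Single quotes: the string is taken literally, as intended
'MustBe$uperHardToGu3ss!'
```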

Parameters

version

Version of the code you're deploying. For example "2.1.0". The version doubles as the name of the code package in the S3 bucket. If you're running Publish.ps1 to update infrastructure whilst keeping the web application the same, use the same version.

stackName

For example, "Test" or "Live". The set of servers, etc. you create with a template is called a stack. You can create a different set of servers by passing in a different stack name.

websiteDomain

Domain of the web application that will be associated with the name servers. For example, "mydomain.com" for your Live stack, or "test-domain.com" for your Test stack.

keyName

Name of the key pair that you created earlier. You'll use this to log into your EC2 instances. Remember that key pairs are tied to a region.

credentialsStoreName

Name of the credentials store where you stored your credentials earlier.

adminCidr

Use the CIDR you found earlier.

dbMasterUsername
dbMasterUserPassword

RDS only supports SQL Server Authentication. These are the username and password that will be able to access your new database server via SSMS, and that will be inserted into your web.config.

bucketName

S3 bucket where the code packages will be stored. This name must be unique across all S3 buckets hosted by AWS. To make sure a given S3 bucket name is available, it would be easiest to create the S3 bucket yourself and then pass your chosen name in via this parameter. To open the S3 dashboard, click Services | S3.
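If you prefer PowerShell over the dashboard, the AWS Tools provide a Cmdlet for this (the bucket name below is a placeholder; pick your own):

```powershell
# Creates the bucket, or fails if the name is already taken anywhere in S3
New-S3Bucket -BucketName "my-unique-code-bucket" -Region "us-east-1"
```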

templatePath

Path to the CloudFormation template to use. Use one of the templates in the downloads or use your own.

If you use your own template and it has parameters, be aware that the Publish.ps1 script sets only a limited set of parameters (in the method launch-stack). All other parameters must be optional and/or have defaults.

csProjPath

Path of the .csproj file of the web application you're deploying. The Publish.ps1 script will build the site in release mode, package up the code and store the package in the S3 bucket.

dbOptionGroupName (optional)

Name of the option group you created earlier that has mirroring enabled. Only use this if you're using a template that creates a SQL Server Standard Edition (or higher) database server (such as this template).

If you try to use such an option group with an engine that does not support mirroring, such as SQL Server Express or SQL Server Web Edition, CloudFormation will fail to create your stack.

stackPolicyPath (optional)

File path of a stack policy to protect for example your database during stack updates (example). If you do not use a stack policy, do not use this parameter.

8. Set nameservers

After you've created your stack, you need to change the configuration of your domain name so it uses your new Route53 name servers.

Part 3 of this series describes how to do that.

9. Copy data and schema from old database

If you have an existing database, you'll want to copy its schema and data to your new RDS database. Part 2 of this series describes how to do that.

Unless you delete your database server, this needs to be done only once after the initial creation of your stack.

Implementation notes

This section isn't a detailed discussion of every file in the downloads - that would be very boring. I also provided a lot of comments, and apart from the templates, the files are not very complicated.

Instead, this section discusses some of the more interesting things I learned when building this solution.

CloudFormation template

At a high level, a CloudFormation template is simply a long list of infrastructure resources. The introduction to building CloudFormation templates is well worth reading, as is the user guide. There are also the Pseudo Parameters Reference and the Intrinsic Function Reference.

I gained a lot from looking at sample templates and snippets. Also, each resource has a type, such as AWS::RDS::DBInstance. Googling those types and understanding what they do taught me a lot about how AWS works.

Having said that, there were a few tricky bits I wanted to discuss here.

Loading a code package onto a new EC2 instance

The auto scaling group is responsible for creating new EC2 instances. This includes installing the web application, installing IIS, etc.

This installation work is handled by a launch configuration - a separate resource. If you search for AWS::AutoScaling::AutoScalingGroup in the template, you'll find that it is associated with the auto scaling group via the auto scaling group's LaunchConfigurationName property.

If you now search for LaunchConfig, you'll find that it knows everything about configuring a new EC2 instance, including the AMI image id, its instance type and everything related to installing the web application. The installation story is spread over two properties: the UserData property and the AWS::CloudFormation::Init object in the Metadata property.

The UserData property contains scripts that will be run on the new EC2 instance after it has been created. Oddly, it is Base64 encoded. Luckily, the built-in intrinsic function Fn::Base64 does the Base64 encoding for us:

"LaunchConfig" : {
    "Type" : "AWS::AutoScaling::LaunchConfiguration",
    "Properties" : {
        "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
            "<script>\n",
                "... MS-DOS commands ... \n",
            "</script>\n",
            "<powershell>\n",
                "... PowerShell commands ... \n",
            "</powershell>\n"
        ]] } }
    }
}

Looking at the template, you'll see that the one MS-DOS command calls the cfn-init.exe program. This CloudFormation Helper Script is always available on an EC2 instance. It actions the metadata in the AWS::CloudFormation::Init object. You'll see below that the AWS::CloudFormation::Init object used in this template gets cfn-init.exe to load a package file (a .zip file) from the S3 bucket and extract the files into the c:\inetpub\deploy directory.

After cfn-init.exe has executed, the PowerShell script kicks in. This finds all files called Deploy.ps1 within the code package and invokes them as PowerShell scripts. Note that it has to set the execution policy to Bypass temporarily to allow this to happen. It then cleans up the c:\inetpub\deploy directory.

The AWS::CloudFormation::Init documentation is well worth reading. One of its many tricks is the sources section, which allows you to download a .zip file from a URL and extract it into a directory on the EC2 instance. Here that URL is the package in the S3 bucket, but you could also load a .zip file from Github.
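A sources section along these lines downloads the package into c:\inetpub\deploy. The parameter names BucketName and Version, and the URL format, are assumptions for this sketch; check the actual template for the exact form:

```json
"Metadata" : {
    "AWS::CloudFormation::Init" : {
        "config" : {
            "sources" : {
                "c:\\inetpub\\deploy" : { "Fn::Join" : ["", [
                    "https://", { "Ref" : "BucketName" }, ".s3.amazonaws.com/",
                    { "Ref" : "Version" }, ".zip"
                ]] }
            }
        }
    }
}
```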

In principle you can extract the files in any directory. However, keep in mind that:

  • c:\inetpub\temp is used by IIS for its own purposes. Deleting it does break things.
  • c:\inetpub\wwwroot is used by the default web site in IIS.
  • %temp% gets expanded into the temporary directory by Windows Explorer, but not by AWS::CloudFormation::Init.

This is why in the end I used c:\inetpub\deploy as a temporary directory to extract the web application.

Starting the software update process

You can run the Publish.ps1 script to purely install a new version of your web application, without changes to the CloudFormation template. As part of this, Publish.ps1 uploads a new package to the S3 bucket. The question is how to get the launch configuration to pick up this new version and replace EC2 instances running the old version with EC2 instances running the new version. Luckily, AWS makes this fairly easy.

Firstly, when you update a property of a launch configuration, that launch configuration gets replaced by a new configuration. Then when its auto scaling group creates new EC2 instances, those instances receive the latest version of your web application.

If you have a look at the AWS::CloudFormation::Init object used in the launch configuration, you see that the sources property changes when you update the Version parameter. This is because the name of every package is based on the version. So that solves how to give new EC2 instances the latest web application version.

Basing the names of packages on their version means that packages of previous versions are preserved (unless you manually delete them). Obviously this takes space. In order to save space, you could give all packages the same name, so old packages get overwritten (you would have to modify the Publish.ps1 script to make this happen). However, in that case the sources property no longer changes with the version.

In that case, to still have the launch configuration replaced when the version changes, you could set the Version property on the Metadata:

"LaunchConfig" : {
    "Type" : "AWS::AutoScaling::LaunchConfiguration",
    "Metadata" : {
        "Version" : { "Ref" : "Version" },
        ...
    },
    ...
}
Having made sure that new EC2 instances receive the latest code, we still have to get rid of the EC2 instances running old code. You can do this by specifying an UpdatePolicy Attribute for your auto scaling group (in the template, search for "UpdatePolicy"). In addition to causing the auto scaling group to replace EC2 instances in a rolling update, this also lets you specify how many EC2 instances get replaced in one go, etc.
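An UpdatePolicy for rolling updates looks like this (the batch size and pause time here are choices, not requirements):

```json
"WebServerGroup" : {
    "Type" : "AWS::AutoScaling::AutoScalingGroup",
    "UpdatePolicy" : {
        "AutoScalingRollingUpdate" : {
            "MinInstancesInService" : "1",
            "MaxBatchSize" : "1",
            "PauseTime" : "PT15M"
        }
    },
    "Properties" : { ... }
}
```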

Allowing EC2 instances to read a file from an S3 bucket

For the EC2 instances to load the package from the S3 bucket, they have to have permission to do so.

This can be done by creating a role. A role is a collection of permissions that can be assigned to, for example, a user on a temporary basis. The permissions themselves are contained in a policy, which needs to be attached to the role.

In the template, the permission to execute a GetObject action (that is, file load) on the package is encoded in a policy in the RolePolicies resource of type AWS::IAM::Policy. Its Roles property attaches the policy to the InstanceRole resource of type AWS::IAM::Role.

An added twist is that it isn't the EC2 instance itself that will be loading the package, but software running under Windows on that virtual machine. Because of this, InstanceRole needs to be first added to an InstanceProfile (details). In the template this InstanceProfile is imaginatively called InstanceProfile.

Finally, we can now attach InstanceProfile to the launch configuration (resource LaunchConfig) via its IamInstanceProfile property.
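Putting the three resources together, the wiring looks roughly like this (the bucket ARN is a placeholder):

```json
"InstanceRole" : {
    "Type" : "AWS::IAM::Role",
    "Properties" : {
        "AssumeRolePolicyDocument" : {
            "Statement" : [ {
                "Effect" : "Allow",
                "Principal" : { "Service" : [ "ec2.amazonaws.com" ] },
                "Action" : [ "sts:AssumeRole" ]
            } ]
        },
        "Path" : "/"
    }
},
"RolePolicies" : {
    "Type" : "AWS::IAM::Policy",
    "Properties" : {
        "PolicyName" : "S3Download",
        "PolicyDocument" : {
            "Statement" : [ {
                "Effect" : "Allow",
                "Action" : [ "s3:GetObject" ],
                "Resource" : "arn:aws:s3:::my-code-bucket/*"
            } ]
        },
        "Roles" : [ { "Ref" : "InstanceRole" } ]
    }
},
"InstanceProfile" : {
    "Type" : "AWS::IAM::InstanceProfile",
    "Properties" : {
        "Path" : "/",
        "Roles" : [ { "Ref" : "InstanceRole" } ]
    }
}
```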

Setting credentials for retrieving resources listed in AWS::CloudFormation::Init

In addition to allowing EC2 instances to read a file from an S3 bucket, you also need to provide credentials for accessing those files (unless they are publicly accessible).

You do this by adding an AWS::CloudFormation::Authentication object to the Metadata of your launch configuration (search for "AWS::CloudFormation::Authentication" in the template). You can use this to provide credentials for any resources or files listed in your AWS::CloudFormation::Init object. If you were to retrieve files or resources from, for example, Github, this would be the place to provide your Github credentials.

As far as providing credentials for the S3 bucket is concerned, the simplest way here is to refer to the InstanceRole object that had already been created by the template.
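The resulting Metadata entry looks roughly like this (S3AccessCreds is just a label; BucketName is assumed to be a template parameter):

```json
"Metadata" : {
    "AWS::CloudFormation::Authentication" : {
        "S3AccessCreds" : {
            "type" : "S3",
            "roleName" : { "Ref" : "InstanceRole" },
            "buckets" : [ { "Ref" : "BucketName" } ]
        }
    }
}
```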


Adding version tags to EC2 instances

One goal was to have a version tag on each EC2 instance, to make it easy to find out what software version it runs. Of course, EC2 instances get created and terminated by the auto scaling group, so you can't add tags directly to the EC2 instances.

A simple solution is to add the version tag to the auto scaling group itself (details). If you look at the WebServerGroup (of type AWS::AutoScaling::AutoScalingGroup) in the template, you'll find that it has a Tags property containing the version tag. Setting PropagateAtLaunch to true ensures that the tag will be copied to newly launched EC2 instances.
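The relevant part of the auto scaling group looks like this:

```json
"WebServerGroup" : {
    "Type" : "AWS::AutoScaling::AutoScalingGroup",
    "Properties" : {
        "Tags" : [ {
            "Key" : "Version",
            "Value" : { "Ref" : "Version" },
            "PropagateAtLaunch" : "true"
        } ],
        ...
    }
}
```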

Publish.ps1

You run this PowerShell script to create or update a stack.

This script interacts with AWS using PowerShell Cmdlets provided by AWS Tools for Windows PowerShell. I found that this works well and is very well documented.

Some things I learned while writing this script:

  • Some sample CloudFormation templates on the Internet hard code the ID of the Amazon Machine Image (AMI) to use with EC2 instances. I followed their lead, until one day I found that the AMI I had hardcoded no longer existed.

    In fact, AWS regularly provides new updated AMIs with new IDs, so hard coding IDs won't work long term. The solution was to get the latest EC2 instance AMI each time you create or update a stack:

    $imageId = (Get-EC2ImageByName -Names WINDOWS_2012R2_BASE | Select -ExpandProperty ImageId)

  • At the end of the method launch-stack, the script waits until the stack has been completely created or updated. It does this by polling the status of the stack every 5 seconds (in the method waitfor-stack-status). I had to interpret the AWS stack status (a string) to find out whether the create/update had succeeded, failed, or was still in progress. This was made easier by having the full set of stack status codes.
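The core of that polling loop can be sketched like this (a simplified sketch of the idea, not the script's exact code):

```powershell
function waitfor-stack-status($stackName)
{
    while ($true)
    {
        # Get-CFNStack is part of the AWS Tools for Windows PowerShell
        $status = (Get-CFNStack -StackName $stackName).StackStatus

        # All terminal statuses end in _COMPLETE or _FAILED
        if (($status -like "*_COMPLETE") -or ($status -like "*_FAILED"))
        {
            return $status
        }

        Start-Sleep -Seconds 5
    }
}
```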

  • The Cmdlets New-CFNStack and Update-CFNStack are used to create/update a stack using a template. When your template creates an IAM Instance Profile, for example to give EC2 instances access to an S3 bucket, you have to allow the Cmdlets to use that template by passing in "CAPABILITY_IAM" via the Capability parameter:
    New-CFNStack -Capability @( "CAPABILITY_IAM" ) ....
  • In method upload-deployment, I used MSBuild and more specifically its Package target to build and package up the web application into a single .zip file using the line:
    msbuild $csProjPath /t:Package /p:Configuration=Release /p:PackageLocation=$releaseZip /p:AutoParameterizationWebConfigConnectionStrings=False

    I looked at alternatives such as OctoPack, but this is the simplest way and it works well.

    By default, the Package target replaces the web.config's connection string with a replaceable token, which is not what I want here. Setting AutoParameterizationWebConfigConnectionStrings to False suppresses this behaviour.

Deploy.ps1 PowerShell script

This PowerShell script runs on the EC2 instance after the web application package has been copied there and unzipped. It isn't complicated and is well documented.

A few interesting things:

  • You'll see that I used appcmd.exe to work with IIS, rather than the easier-to-use IIS Cmdlets.

    The reason is that the IIS Cmdlets require the WebAdministration module to have been loaded, and I found I couldn't count on this always being the case. So I used appcmd.exe, which always works.

  • I was sometimes confronted with the error message HTTP Error 500.21 - Internal Server Error Handler "ExtensionlessUrlHandler-Integrated-4.0" has a bad module "ManagedPipelineHandler" in its module list.

    The solution was to re-install the .Net framework (Stack Overflow discussion), using

    cmd /c %systemroot%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i

  • Be careful with the PowerShell Cmdlet Out-File. By default it produces a file with Unicode encoding, and IIS chokes on this when reading, for example, web.config. I believe that in most cases you're better off with UTF8 anyway:
    Out-File ..... -encoding UTF8

Trial site Web.Release.config

This is the web.config transform that generates the release version of the web.config. You can see the placeholders that will be replaced by Deploy.ps1.

Some connection strings use Integrated Security=SSPI. Do not use this with an AWS hosted site. It will cause AWS to try to log into your database with the NT AUTHORITY\ANONYMOUS LOGON account rather than your own account - which will fail.
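Instead, use SQL Server authentication, with the username and password you passed to Publish.ps1 in the connection string. For example (the connection string name, server and database names here are placeholders):

```xml
<!-- SQL Server authentication: username and password in the connection string -->
<add name="DefaultConnection" providerName="System.Data.SqlClient"
     connectionString="Data Source=myserver.example.com;Initial Catalog=MyDb;User Id=dbusername;Password=dbpassword" />
```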

Next Parts

In future parts, I am intending to write about deploying a stack from TeamCity, blue/green deployments and breaking large templates into manageable pieces.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

Matt Perdeck
Architect
Australia Australia
Twitter: @MattPerdeck
LinkedIn: au.linkedin.com/in/mattperdeck
Current project: JSNLog JavaScript Logging Package

Matt has over 9 years .NET and SQL Server development experience. Before getting into .Net, he worked on a number of systems, ranging from the largest ATM network in The Netherlands to embedded software in advanced Wide Area Networks and the largest ticketing web site in Australia. He has lived and worked in Australia, The Netherlands, Slovakia and Thailand.

He is the author of the book ASP.NET Performance Secrets (www.amazon.com/ASP-NET-Site-Performance-Secrets-Perdeck/dp/1849690685) in which he shows in clear and practical terms how to quickly find the biggest bottlenecks holding back the performance of your web site, and how to then remove those bottlenecks. The book deals with all environments affecting a web site - the web server, the database server and the browser.

Matt currently lives in Sydney, Australia. He recently worked at Readify and the global professional services company PwC. He now works at SP Health, a global provider of weight loss web sites such as CSIRO's TotalWellBeingDiet.com and BiggestLoserClub.com.
