
Campus Classifieds using Windows Azure

23 Jun 2013
Learn how to get started in Windows Azure by following the progress of an actual web application as it is constructed.

Please note

This article is an entry in our Windows Azure Developer Challenge. Articles in this sub-section are not required to be full articles so care should be taken when voting. Create your free Azure Trial Account to Enter the Challenge.

Introduction

As part of the Windows Azure Developer Challenge, this article will serve a dual purpose. First, it will show the reader how to get started with Windows Azure in the five specified areas. Second, it will show how I used those skills to develop a set of web applications to support my project. Each challenge will have its own section, with each section containing both parts.

Project Overview  

My project is focused on Campus Classifieds sites. The idea is that sites like Craigslist do not focus specifically enough on serving college and university campuses. For instance, how do you look for a ride home from campus? How do you sell your books to other people on the same campus who might need them? What if you are holding a party on campus and want to advertise (but don't want to invite the entire metro area)?

For this article, I will be building three websites. The first will be a sales site where college administrators can purchase a pre-built campus classifieds site for their college/university. The second will be a demo of an actual Campus Classifieds site. It will be SQL (on Azure) driven, and it will use Azure blob storage for image uploads. The final site will be a support forum for all Campus Classifieds sites to use. This will be managed by a third-party company, and it will be built on open source support software running on an Azure virtual machine. The final piece will be to develop mobile applications that can access the Campus Classifieds site where you are signed up.

Contest Sites 

To keep things simple, here are the links to each challenge entry in one place. They are also below in their respective sections.

  1. Challenge One: <no specific site, just the article>
  2. Challenge Two: http://ccsales.azurewebsites.net/ 
  3. Challenge Three: http://samplecc.azurewebsites.net/ 
  4. Challenge Four:  http://ccappsrv.cloudapp.net/login 
  5. Challenge Five: http://samplecc.azurewebsites.net/Service/Item/4 


First Challenge: Getting Started

How Azure might benefit or change what I do today: Azure will allow me to rapidly develop complex, inter-related web applications that can scale to meet my growing needs. That takes the worry out of rapid growth or a slow start, since I only pay for what I use, yet the power is there when I need it.

Azure Tutorial 

Getting started with Windows Azure is fairly straightforward. However, there are a couple of parts that might trip you up, so it is best if we just walk through each step. This will ensure that no one gets lost or sidetracked.

To get started, head over to www.windowsazure.com. While the page may be different depending on where you are located, you should see something similar to this screen: 

From here, click the green button that says "try it free". This will lead you to the next page, which, quite redundantly, has a "try it free" button on it:

 

Once you click that second "try it free" button, you will get a login screen. You will need to use a Microsoft account for this login:

 

Don't get this confused with a Microsoft email account. You do not need a hotmail/outlook/live account to have a Microsoft account. A Microsoft account is simply any email address you have registered with Microsoft. For instance, I am using the email address azure@timothycorey.com for this project. Everything related to the Campus Classifieds project will go through this email, including my Azure hosting information. At the time, however, this email address was not registered with Microsoft as a Microsoft account. To register the account, I simply went to signup.live.com and filled in the appropriate information. 

The interesting thing I found about the above login screen is that it did not allow me to register my email address via a "sign up" link. This page assumed I had a Microsoft account already in place. However, there is a way to get that "sign up" link to appear. Simply put your email address in the box and hit the blue "Sign in" button without putting in a password. This will bring up the next screen:

 

As you can see, there is now a "get a new account" link that you can follow to register your email address. If you don't have an email address already, this page will also allow you to create a Microsoft email account. Once you get through the sign-up/login process, you will come to the Azure signup screen:

 

This three-step wizard will walk you through what you get in your trial and what you need to provide to get signed up. Review the initial page to familiarize yourself with what the trial offers and then hit the arrow in the lower right-hand corner. This will bring up step two of the wizard:

 

Windows Azure will want to verify your account by calling or texting your phone. This just verifies that you are a real person. This is a one-time verification. The number you give them will not be stored (at least not yet). Once you put in your number, click the "Send text message" or "Call me" button next to your number. This will trigger an automated text/call to your number. You will be given a verification code, which you put into the box below the phone number. Once you fill in the verification code, click the "Verify code" button to process the number. Only then will the arrow in the lower right-hand corner be enabled to allow you to proceed to the next step.

The final step is credit card verification: 

 

A lot of people get concerned at this step. A free trial shouldn't need a credit card, right? What am I going to be charged for? Does this convert into a paid service automatically? The basic answer is this: don't worry. The credit card entry serves two purposes. First, it verifies who you are specifically. Second, it makes it easier for you to purchase things if you authorize them. By default, your Windows Azure account will not charge your credit card for anything. If your site reaches its given limits before the end of the month, the site will shut down rather than exceed the limitations. You can change this if you want, but that would mean your credit card would be charged for the overages.

Once you complete the final step in the wizard, you will be signed up for a Windows Azure trial account. You will then be redirected to a page like this:

 

This is the account overview page. However, my assumption is that you are going to want to dive right into using Windows Azure. To do so, click the blue "Portal" button in the upper right-hand corner. This will bring you to the portal dashboard. The first time you go here a quick tutorial wizard will pop up:

Take a minute to walk through the five steps. It will give you a good overview of the portal and what is available to you. From there, you will see the actual portal dashboard screen, which should look like this:

Now you are ready to get started using Windows Azure. On the left you will see which services you are currently using. In the lower left-hand corner you will see the "New" button, which will allow you to add new services. Play around for a while to familiarize yourself with what is available and all that Windows Azure can do for you. Once you start doing Windows Azure development, this portal will become your home. 

Practical Example 

There wasn't a lot to do for this step. I signed up for a new trial account of Windows Azure following the steps I laid out above. Now that I have my portal set up for the Campus Classifieds sites and services, I've familiarized myself with the many different offerings that make up Windows Azure. Knowing what is available helps me plan out how best to accomplish my goals.

Second Challenge: Build a Website

Azure Tutorial

Creating a Website in Windows Azure cannot get much easier. When you are in your portal, click the "NEW" button in the lower left-hand corner and choose Compute -> Web Site -> Custom Create. You also have the option of Quick Creating a site or choosing a site template from the gallery. Here is a screenshot of that menu:

 

When you choose the Custom Website option, you will be presented with a wizard like so:

Note that you can choose your region (choose the closest to you or your customers), set up or select an existing database, and even choose the option to publish your site from source control. If you choose to publish your site from source control, you will get this next screen:

 

These are the different source code repositories that Windows Azure supports. Since I am using TFS for my project, I initially chose Team Foundation Service to publish my source code automatically. That brings up a screen like this where I can put in the information about my TFS repository:

 

Unfortunately, when I attempt to complete this step, I get the following error:

It turns out that while the option is listed in Windows Azure and while it seems to all work (right up to the moment when it doesn't), publishing from TFS to Windows Azure is not yet supported. The latest news is that Microsoft hopes to support it by this summer.

However you decide to configure your website, once you have completed the process you should see an entry like this show up in your portal:

When you click on your new website for the first time, you will get the following quick start screen:

You can also get back to this screen by clicking on the cloud with the lightning bolt in it (next to "dashboard" near the top). One thing I recommend doing here is clicking the "Set up deployment credentials" link. You will get the following pop-up if you do:

 

This will allow you to set up the username and password that will be used for Git and FTP for your site.

Back on the getting started screen, you will notice that you can also hook your site up to source control here, get the SDK, or even install WebMatrix. Since I am using Visual Studio for my development, I won't show that off, but the process for getting WebMatrix is fairly simple.

When you proceed to the dashboard, you will see an overview for your website:

 

This is where all of your site administration will be done. There are some useful tools here as well. Note the "quick glance" section on the right. This is where you will find your FTP information (if you are going to upload your site via FTP), your site URL, and even your publish profile for Visual Studio and WebMatrix. This profile makes setting up these two tools painless and simple. In my practical example section you will see how to set up Visual Studio using this publish profile option.

Now that you have your site configured and your files uploaded, you have a working Windows Azure website. Keep an eye on that dashboard, since it will tell you when you are getting close to your limits.

Practical Example 

For this example, I am going to create a sales website for Campus Classifieds. This sales site will target school administrators who want to purchase a Campus Classifieds site for their school. The site will be simple, with no database back-end or login system needed. It will just have the sales information and a way for administrators to contact our company. I will call this site CCSales.

While it is not quite feature complete yet, I am going to use Team Foundation Service (http://tfs.visualstudio.com/) to host the source code for all of my projects. This will make it easy to bring other developers on board at a later date. Currently TFS online is free for unlimited users, although in the future it will only be free for the first five users. I have also chosen to use Git as my source control type, since I prefer Git.

Getting Set Up 

Once I set up my project in TFS for CCSales, I get this screen:

Note that I did accidentally include a space in the project name. That is not a good thing. It will cause errors and issues with setting up Visual Studio integration (and the errors are not clear). My advice is to not put spaces in your project names. It makes life simpler for all involved. The bad news is that I have not yet found a way to delete or rename a project once it has been created.

The first thing I do is click on the "Open new instance of Visual Studio" link. For you, you might need to install Visual Studio Tools for Git before taking this step (note: I believe this step requires Visual Studio 2012). If you aren't sure, just take the step. Visual Studio is great about prompting you about what you need.

When you are set up for Git in Visual Studio and you click on the link on the dashboard, you will get the following screen in your newly-opened instance of Visual Studio:

 

Click the Clone this repo link, which will bring up the following screen:

This sets up where your local copy of the repository will be located. By default, Visual Studio will point to your root repository folder. For my specific repo, you will notice that I renamed it to CCSales. Originally it was CC%20Sales, which matches the URL of the remote repo. If I had left the name that way, it would have caused problems in Visual Studio. 

Once I had everything set up between Visual Studio, TFS, and Windows Azure, I created a new ASP.NET MVC 4 application in my local repo folder:

 

Next, I created a Home controller and a View for the Index action. This was just to get the MVC project to a working state. Once that was complete, I right-clicked on my solution and chose "Commit". I got the following screen:

I named my commit "initial commit" and hit the "Commit" button. This committed my files locally. To send that commit to my TFS server, I had to select the commits option and hit "Push" on the Outgoing Commits. This sends the local commits to the remote repository.

At this point, my site was ready to be published (just to see that it works). I right-clicked on my project and selected "Publish". When I did that, this wizard popped up:

 

I took the "publish profile" file I had downloaded from Windows Azure and imported it here (the step of downloading a file is no longer needed with the latest update - now you can directly point to an Azure instance if you want). That brought me to this next screen:

 

Here I basically just validated the connection and hit next. There was no need to modify anything (the file took care of setting all this up).

 

The settings screen was the next to come up. Here I got to make some rather obvious choices. I decided to publish my release build and to not include additional files that were not necessary to my project. I did not mess with the databases section since I am not using a database for this project.

This brought me to the preview screen, where I could preview the changes that would be published to my Windows Azure web site. After previewing the files (just so I knew what was going to happen - it is not necessary to do this in order to publish successfully), my site was live.

This last set of steps to publish my project will go away once TFS allows for automatic publishing of specified branches to Windows Azure. At that point, whenever I push my changes to the remote master branch, those same changes will be pushed live. The great thing about this, besides keeping everything in sync automatically, is that I can also publish other branches to testing sites. That way I can have a test server and a live server, each automatically refreshed from my repository.

One final note on using Git with ASP.NET MVC: your NuGet packages will not (and should not) upload with your project to your Git repository. That means that when you pull the project down to a new machine, nothing will work; all of your references will be missing. Visual Studio can fix this for you. In the Options menu, you can change the following setting:

 

Check the box to allow NuGet to pull down the missing packages when you build. From then on, you shouldn't have a problem pulling down projects from source control.

Developing the CC Sales Site   

Here is a link to the completed site: http://ccsales.azurewebsites.net/ 

The actual development of the CC Sales site was fairly straightforward. I used Twitter Bootstrap as my starting point, which gave me a number of benefits including responsive layouts. For the main page, I even started from one of their example sites. This gave me a foundation from which I could start tweaking the site. Having something working makes it much easier to figure out what you like and don't like compared to trying to start from scratch.

I intentionally kept this site very simple. It has three pages (index, features, and contact) and very little "stuff". The point of the site is to keep pointing the user back to buying a Campus Classifieds site (or, in this case, signing up to buy one when it is available). The site is still in "Coming Soon" mode, but once the Campus Classifieds sites are launched, the Coming Soon banner will be replaced with a slider control that will show off different parts of a CC site. 

There are a couple of things to notice as you look over the site. First, the entire site will work on practically any screen size. Most of this comes from using the responsive grid system of Twitter Bootstrap. For instance, most of my "row" divs have a class called "row-fluid". This means that out of the box, when the site is too small for the normal 12-column grid, the cells will be stacked on top of each other. The one tweak I added was on the features page. When the page was shrunk down to phone size, the images were rather distracting. They took up a lot of space and didn't really add to the page. I solved this issue by adding the class "hidden-phone" to their div. Twitter Bootstrap uses media queries to identify different resolution ranges. When the resolution shrinks to the approximate size of a phone, my images will disappear. Now my features page looks good even at small sizes.
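As a sketch, a features-page row along those lines might look like this (the markup is illustrative; the class names are Bootstrap 2's, and the image path is a placeholder):

```html
<div class="row-fluid">
  <!-- hidden-phone: Bootstrap 2's media queries hide this cell at phone widths -->
  <div class="span4 hidden-phone">
    <img src="feature-screenshot.png" alt="Feature screenshot" />
  </div>
  <!-- On phone-sized screens this cell stacks and takes the full width -->
  <div class="span8">
    <p>Feature description goes here...</p>
  </div>
</div>
```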

The next thing to notice about the site is that it is simple. I intentionally created a stand-alone site that does not use a lot of advanced features. For one, this type of site doesn't need many advanced features. For another, I didn't want to distract from the use of Windows Azure Web Sites. This means that my contact page does not write out to a SQL database. Instead, it uses the .NET mail classes to email me a copy of the completed form. No matter how I handle this form (email, database, etc.), I am opening myself up to exploitation. I decided not to put a CAPTCHA on my form since those are annoying for the end user and can turn people away. I will live with the spam that comes through the form. However, I did want to limit the vulnerability of my form. That is why I used an AntiForgeryToken. To do this in .NET is really easy. On your form, you add @Html.AntiForgeryToken(). Then on the controller action that handles the POST, you simply add [ValidateAntiForgeryToken] as an attribute. 
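As a sketch, the two pieces fit together like this (the Contact action and model names here are placeholders, not the live site's code):

```
@* Razor view: AntiForgeryToken renders a hidden field and sets a matching cookie *@
@using (Html.BeginForm("Contact", "Home", FormMethod.Post))
{
    @Html.AntiForgeryToken()
    <!-- form fields here -->
    <input type="submit" value="Send" />
}

// Controller: the attribute rejects any POST whose token is missing or mismatched
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Contact(ContactModel model)
{
    // send the email, show a confirmation, etc.
    return View();
}
```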

I did not add any complicated JavaScript to my site. In fact, except for what Twitter Bootstrap does, I only have three JavaScript functions, all of which perform the contact form validation and processing. I did add the toastr JavaScript library by John Papa as a cleaner way of alerting users than an alert box popping up. 

So that is it. The Campus Classifieds Sales site is fully up and operational. The "Coming Soon" banner refers to the sales of Campus Classifieds sites, not to the sales site itself. The sales site will take potential customers' information and deliver it to the sales inbox for processing. 

Third Challenge: Using SQL on Azure 

Azure Tutorial 

Setting up SQL on Azure is even simpler than setting up a web site. In fact, if you are setting up a new web site, one of the options is to set up an instance of SQL as part of the process. If you go down this route, you can either connect your new web site to an existing database (as I'm doing below) or you can create a new database:

If you decide instead to create your database on its own, you will need to go to the following screen to get started:

Either way, once you start the database creation process, there aren't many options, as you can see below. In our case, we are going to use a Web edition database with a max size of 1 GB. This is what comes free with the Azure trial.  

Once everything is set up, you will be able to go to your database from the main Azure dashboard.  The first screen you will see on your database will look like this:

To get back to this screen, click on the little cloud with the lightning bolt on it near the upper left. This page isn't technically necessary, since you can do all of these tasks elsewhere. However, it is a great way to get started in database management. The common tasks are here, and I use this screen as a type of checklist to make sure I've configured the server before I try to use it. The designer tools only need to be downloaded once, and the starter project isn't really necessary, but the next option is very important. The Windows Azure firewall prevents any machine from accessing your database directly, so I always come in here and unblock my home IP address. This couldn't be simpler: click the link and follow the instructions. It will discover your IP address for you, so there is no complicated setup.

Gotcha: If you have a problem connecting to your database (even via the web client), most likely you forgot to unblock your IP address or your IP address changed (home IP addresses do change from time to time). 
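The portal link is the easiest route, but the same firewall rules can also be managed with T-SQL against your server's master database once at least one rule lets you in (the rule name and IP address below are placeholders):

```sql
-- Run against the master database of your Azure SQL server
EXECUTE sp_set_firewall_rule
    @name = N'HomeOffice',
    @start_ip_address = '203.0.113.42',
    @end_ip_address = '203.0.113.42';

-- List the current rules, or remove one
SELECT * FROM sys.firewall_rules;
EXECUTE sp_delete_firewall_rule @name = N'HomeOffice';
```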

The next task is to actually do something with the database you just created. This is where you could do things in a number of different ways. For instance, you could upload an existing database (certain restrictions apply), you could run a series of T-SQL commands to set up your database structure and populate your tables, you could use the online editor to create your objects and populate them, or you could use a tool such as SQL Server Management Studio or Visual Studio.  Each of these options has a situation where it makes sense to use that option over the others. I have personally used each method and found each to be fairly simple. In this example, I'll show you a couple of screenshots from using Visual Studio since that is the environment we are already in. First, to create a table, you can use either the visual editor or edit the T-SQL directly:

When you are done making your edits (and yes, you can edit either and your changes will be reflected in the other window), hit the Update button. That will trigger a preview of your changes. It will also allow you to generate a script to make these changes and/or it will allow you to push these changes directly to your database. 
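If you prefer scripts, the same kind of table can be defined in plain T-SQL. A hypothetical example (the table and column names are made up for illustration; note that an Azure SQL table needs a clustered index, which the primary key provides here, before you can insert rows):

```sql
CREATE TABLE dbo.Posts
(
    Id          INT IDENTITY(1,1) NOT NULL PRIMARY KEY, -- clustered by default
    Title       NVARCHAR(100)     NOT NULL,
    Body        NVARCHAR(MAX)     NULL,
    CreatedDate DATETIME2         NOT NULL DEFAULT SYSUTCDATETIME()
);
```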

And that is it. Once you create all of your tables, stored procedures and the like, you have a working database in Azure SQL.  

Practical Example 

Here is the sample site I'll be referring to (and my challenge entry): http://samplecc.azurewebsites.net/ 

This is where things really get interesting. You have a database in the nebulous cloud and you have a web site in that same cloud. How do you connect them together, both at design time and runtime? Quite frankly I have found the answer to be so simple it is hard to believe.

Connecting to an Azure SQL database instance is no different than connecting to a SQL instance in a server room right next to you. You use a connection string to handle letting the code know where the database is located. 

Gotcha: Do NOT store your connection string in your web.config file, especially if you have that file in source control. This leaves your password open for discovery. You could encrypt it, but the better solution is to use the connection string option in the Azure configuration. It puts the information in your config at runtime as needed without exposing your password to prying eyes. 

What I do is put my connection string in my Web.Debug.config file. This allows me to test my site locally, but when I upload the release version of the site, this information is not included. I also make sure my Web.Debug.config file is excluded from source control. For instance, your Web.Debug.config file might have a section like this to swap out the connection string info:

<connectionStrings>
  <add name="AzureConnection" providerName="System.Data.SqlClient" 
        connectionString="Server=tcp:server.database.windows.net,1433;Database=CampusClassifiedsDB;User ID=user@server;Password={pwd};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;" 
        xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
</connectionStrings> 

In Azure, you can set up a connection string replacement value in the Configuration section (of your web site) like so:

Now when you upload your site, your password will be safe, yet your site will work both locally and remotely. Another option is to use a different database locally than when the site is live. This allows you to mess around with test data without affecting your live site. To do this, just change your Web.Debug.config file to point to a different database (even a local SQL Server or SQL Compact database, for example). 
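Either way, the code only ever asks for the connection string by name, so it does not know or care which environment it is running in. A minimal sketch, assuming the "AzureConnection" name used above:

```csharp
using System.Configuration;

// Locally this resolves to the value from the config transform;
// on Azure, the value from the portal's Configuration page is
// swapped in at runtime under the same name.
string connectionString =
    ConfigurationManager.ConnectionStrings["AzureConnection"].ConnectionString;
```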

I am not a big fan of Entity Framework, so I use straight ADO.NET connections to get my data in and out. That allows me to more easily use stored procedures, and it gives me easier customization of the data access layer (I like control). As such, one of the things I have to do for my application is create stored procedures for everything (reads, inserts, updates, and deletes). Creating a stored procedure in Azure SQL is just the same as on a local SQL Server: I just run my T-SQL scripts and then call them from my code.
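For illustration, a search procedure along the lines of the dbo.up_SearchSalesEntries one I call later might look like this (the body and the table/column names here are simplified stand-ins, not the production version):

```sql
CREATE PROCEDURE dbo.up_SearchSalesEntries
    @TimeElapsed INT OUTPUT,
    @SearchValue NVARCHAR(50)
AS
BEGIN
    DECLARE @start DATETIME2 = SYSUTCDATETIME();

    -- Table and column names are placeholders
    SELECT Id, Title, Body
    FROM dbo.Posts
    WHERE Title LIKE '%' + @SearchValue + '%';

    -- Report how long the search took via the output parameter
    SET @TimeElapsed = DATEDIFF(MILLISECOND, @start, SYSUTCDATETIME());
END
```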

To do that, I don't even bother going through a designer or tool. I just use the web interface of Azure SQL. I click on the link to design my database (the link is on the opening page of my database). That takes me to a login box (don't forget to unblock your IP address first) like this:

I put in my credentials and it comes to a database designer page. I select "Stored Procedure" from the list:

From there, I create a new procedure, add the parameters as necessary and the T-SQL code. I save it and I can even test it right there.

I won't bore you with how I created each procedure, since you probably understand that already. Once I had all of my procedures done (and obviously my tables created as well), it was time to get started hooking everything up.

For this practical example, I chose to develop a sample of what a Campus Classifieds site might look like. This would be what I would show potential customers as an example of how the site could work. The link above takes you to that site. I've intentionally not hooked up the login screen or any of the site update pages. This site is a demo. I don't want to manage it like I would a real Campus Classifieds site.

So, to access the data, I use a data access layer that I am comfortable with. It is standardized and simple. To read data, I use the following method:

/// <summary>
/// Reads the data out into a DataTable
/// </summary>
/// <param name="connectionString">The connection string</param>
/// <param name="sqlQuery">The SQL string to execute</param>
/// <param name="parameters">Any SQL parameters to add</param>
/// <returns>A DataTable of the returned data</returns>
public System.Data.DataTable ReadData(string connectionString, string sqlQuery, ref List<SqlParam> parameters)
{
    DataTable myTable = new DataTable();
            
    using (SqlConnection cnn = new SqlConnection(connectionString))
    {
        cnn.Open();
        using (SqlCommand myCommand = new SqlCommand())
        {
            SqlParameter param;
            myCommand.CommandText = sqlQuery;
            foreach (var entry in parameters)
            {
                if (entry.ValueLength > 0)
                {
                    param = new SqlParameter(entry.Name, entry.ValueType, entry.ValueLength);
                }
                else
                {
                    param = new SqlParameter(entry.Name, entry.ValueType);
                }
                param.Direction = entry.Direction;
                param.Value = entry.Value;
                param.TypeName = entry.TypeName;
                            
                myCommand.Parameters.Add(param);
            }
            myCommand.Connection = cnn;
            using (SqlDataReader reader = myCommand.ExecuteReader())
            {
                myTable.Load(reader);
            }
            // Output parameter values are only populated once the reader
            // is closed, so copy them back to the caller's list here.
            foreach (SqlParameter item in myCommand.Parameters)
            {
                parameters.Find(p => p.Name == item.ParameterName).Value = item.Value;
            }
        }
    }
    return myTable;
} 

Note that I'm passing in a custom parameter object. That object describes each parameter. This way we can pass in an int differently than an nvarchar. When dealing with an ADO.NET connection, that is a big deal; otherwise there are behind-the-scenes conversions of data types that can cause serious overhead. Here is that parameter class:

public class SqlParam
{
    /// <summary>
    /// Loads the default values for each item so
    /// they don't all need to be set to be used.
    /// </summary>
    public SqlParam()
    {
        this.ValueType = SqlDbType.VarChar;
        this.ValueLength = 0;
        this.Direction = ParameterDirection.Input;
        this.Value = string.Empty;
        this.TypeName = string.Empty;
    }
    public string Name { get; set; }
    public object Value { get; set; }
    public SqlDbType ValueType { get; set; }
    public int ValueLength { get; set; }
    public ParameterDirection Direction { get; set; }
    public string TypeName { get; set; }
} 

Now that you have seen both of these, and before I lose you totally, let me explain what I am doing and why. What I want is a way to call the data access layer to get data. I want it to be simple, quick, and easy to replicate. I also want something that lets me swap out database types easily. For example, if I want to use MySQL tomorrow for this same project, I don't want to have to change much. The above code meets that need. I use an interface to call the ReadData method (I also have a WriteData method that does an ExecuteNonQuery, but it uses the same parameter types and follows the same pattern). If I want to change the code out, I would simply have my factory give out the MySQL version of this code instead of the SQL version.

As for the ease of use, let me show you how I get data:

IDataAccess data = Factory.DataAccess();
List<SqlParam> parameters = new List<SqlParam>();

parameters.Add(new SqlParam { Direction = System.Data.ParameterDirection.Output, Name = "@TimeElapsed", Value = 0, ValueType = System.Data.SqlDbType.Int });
parameters.Add(new SqlParam { Direction = System.Data.ParameterDirection.Input, Name = "@SearchValue", Value = searchText, ValueType = System.Data.SqlDbType.NVarChar, ValueLength = 50 });

DataTable results = data.ReadData(Factory.ConnectionString("A"), "exec dbo.up_SearchSalesEntries @TimeElapsed OUTPUT, @SearchValue", ref parameters);

That's it. I ask my factory for a DataAccess instance and it gives me back one that works with SQL. I then create two parameters (one an output parameter, one an input parameter). I pass all that in (my factory also gives me a connection string) and out pops a DataTable that has all my data in it. The parameter list also carries the output information now (it is passed in as a ref, which means the same parameter is updated with the output value). I take that data and map it to my model (for some things I just do it manually, but AutoMapper is your friend here). 
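The mapping step itself is mechanical. Here is a minimal, self-contained sketch (the SalesEntry model and the column names are assumptions for illustration, not the site's real model):

```csharp
using System;
using System.Collections.Generic;
using System.Data;

// Hypothetical model; the real site's models carry more fields.
public class SalesEntry
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public static class SalesEntryMapper
{
    // Manually map each DataRow to a model instance. AutoMapper can do
    // the same by convention, which pays off for larger models.
    public static List<SalesEntry> Map(DataTable table)
    {
        var entries = new List<SalesEntry>();
        foreach (DataRow row in table.Rows)
        {
            entries.Add(new SalesEntry
            {
                Id = Convert.ToInt32(row["Id"]),
                Title = Convert.ToString(row["Title"])
            });
        }
        return entries;
    }
}
```
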

It seems like I haven't shared much here, but there isn't anything more to tell really. Once your database is in Azure SQL, and if you already have your code working with a SQL database, the "conversion" process takes about 30 seconds. Just point your connection string to the Azure SQL connection string (which is provided - how cool is that?) and you are done.

If you look at my demo site, all of the data you see is pulled from the database. All of the posts, the lists, and even the featured sections come from the database directly via stored procedures. I convert the data into my models and then just display the models.  

Fourth Challenge: Virtual Machines 

Azure Tutorial

If you have gotten this far in this article, you understand the Microsoft Azure portal pretty well. We've created web sites and SQL databases with very little effort. Now we are going to get into a more advanced topic: virtual machines. Technically, we've already used Azure virtual machines. Each time you spin up a web site (or SQL database, etc.), you are actually creating a virtual machine. The difference is that those virtual machines are controlled by Microsoft, which is responsible for their updates, patches, and configuration.

We are going to stray out of this protected harbor into the turbulent ocean of raw virtual machines. This will allow us to control everything about the server, but it also means we need to maintain it. If a patch comes out for the server, we need to apply it ourselves.

The benefits of this approach are numerous, but they apply to specific situations. They include:

  • Test servers - You can create test servers that mimic your production environment so that you can try out different configurations or changes without affecting your live servers.
  • Hot spare servers - You can configure a hot spare in the cloud to support your on-site server. This will become more important as more server products support the scenario, SQL Server 2014 included.
  • Custom App Hosting - If you have a custom application that won't run on a standard web server, you can get a virtual machine to still run your app in the cloud.
  • Long-running processes - If you have a web (or desktop) application that takes a long time to complete, moving it to a virtual machine might be a good solution.

While there are other possible reasons to use a raw virtual machine in Windows Azure, these are the most common. The big key, though, is to be sure that your situation is best solved by a virtual machine before committing to this path.

Once you identify that a virtual machine is the right path, getting started is rather easy (what would you expect from Azure, right?). First, click on the "New" link at the bottom of the portal, and select "Virtual Machine" under the Compute section like so:

Again, you can Quick Create a VM, but we are going to choose one from the Gallery so we can get the full experience: 

Note that there is an extensive list of Windows (and Linux) VMs to choose from as a base, but we can also choose from an image we already uploaded. This would allow us to take an image of an existing machine and upload it. We could also prepare an image offline and then upload the finished product instead of relying on a pre-built one. For our purposes, we are going to choose a pre-built image of Windows Server 2008 R2. 

Once you have chosen your image, you will get the following screen, which will allow you to set up some basic information about the server, including your admin login name and password:

This next step will allow you to hook this virtual machine up to a load balanced set of virtual machines. If you are going to use a virtual machine in production and you believe the load will be too heavy for one machine, you should add a second machine in the same region for load-balancing purposes:

Note that you can select your storage account, if you have already configured one, or you can have one created automatically (how great is that!). For our purposes, I'm going to select East US for the region and have the storage created for me.

Next I can add this virtual machine to an availability set. An availability set spreads your virtual machines across separate fault and update domains within the data center, so a single hardware failure or host update won't take down every instance at once. For protection against a full regional outage, I also recommend keeping at least one server in a different region for any service that must maintain a high level of availability. On this screen, you can also allow PowerShell remoting if you are configuring a pre-built Windows server. My recommendation is to go ahead and allow it; it will make administration much easier:

Once the server(s) are built (two if you created a storage account at the same time), you will see them show up in your list of items in your Azure portal like so (the two I created are below the highlighted line):

From here, you can jump right into your two servers to take a peek. For example, here is the quick glance for my storage server:

Note that this server has a backup region and geo-replication is enabled. This ensures that my data will be accessible in another region if the East US region fails. I could then just attach a VM to the storage in the West US and be up and running in no time (if I didn't have one already configured and running). 

When on the dashboard for your virtual machine, the most important thing to note, and one which I initially missed, is the menu at the bottom that looks like this:

This is how you control your server. For instance, if you are done using your virtual machine for a while, you can shut the instance down and save yourself the cost of running a VM (this is a new change as of June 3rd, 2013). Once you shut down your virtual machine, the menu will change like so:

Now you can capture an image of this machine. Use this when you want to back up an existing machine for later use (or for replication). Before you take this step, make sure you sysprep the machine (or run the Azure Linux agent tools on it if it is a Linux machine):

Note that this process will delete the machine when the image is captured.

When you start the machine back up, and if you are running a Windows virtual machine, you can use Remote Desktop to access the full machine for administration (if you don't want to use remote PowerShell for the task). To do so, go back to the command bar and click Connect:

This will trigger the download of an RDP file. Open the file and you will get a message similar to this:

This is normal. Click the "Don't ask me again for connections to this computer" check box to select it and then hit "Connect". You will then be prompted to enter your credentials. These are the credentials you entered in step one of creating your virtual machine:

When you hit OK, you will get yet another warning message. Again, check the box that says "Don't ask me again for connections to this computer" and hit "Yes":

At this point, you should be in. You now have full access to the entire virtual machine. You can change settings, run updates, install applications, and do practically anything else you would with a normal server. 

Practical Example 

For Campus Classifieds, I chose to have my (imaginary) staff continue to use the helpdesk software they are used to working with. However, to add better availability and scalability to my installation, I chose to host this software on a Windows Azure virtual machine. To do so, I followed the exact steps listed above. When I had completed them and logged into my Windows Azure virtual machine, I uploaded and ran the installer for my helpdesk application (Spiceworks), which sets up a website on port 80 by default. I configured the firewall to allow port 80 traffic through, and that was all the configuration it took. I now have a helpdesk up and running at http://ccappsrv.cloudapp.net (the URL given to me by Azure on my virtual machine's dashboard).

It seems like a simple setup, but that is the beauty of it. Microsoft Azure virtual machines, while more complex than web sites, are simple to set up and configure. Even if you are unsure of a step, there is most likely a little help dialog next to each item to clue you in as to what the step is for and what you should do. 

Fifth Challenge: Mobile Access 

Azure Tutorial

Setting up mobile access is the last stop on our Azure tour, and it is the most complicated. The cool part is that the complication lies not in getting the service working but in deciding what to do with the resulting power. The title of Mobile Services can be a bit of a misnomer: you can use it with iOS, Android, and Windows Phone apps, sure, but you can also use it with web pages. So let's get started setting up an Azure Mobile Service.

The first step, as in all of our previous examples, is to find the Azure Mobile Service option in the menu:

That wasn't hard, right? So, select the Create link and you will get the following screen:  

Note that the database option that is shown is a new feature. You can now get a free 20MB database with your mobile service. That doesn't seem like a lot, but it actually is for development. Once you go live with your site, you can upgrade the database as needed to a larger (and paid) version. You also have the option of pointing to an existing database (make sure to match the regions up to avoid data transfer costs and potential latency issues).

If you do decide to create a database specific to this mobile service, you will see the following screen: 

Name the database appropriately, set the login credentials and pick your region. If this looks familiar, it should. We did this same step when we created our database in stage three.

And, that is it for getting a mobile service set up in Azure: 

The above dashboard will be the starting point for using your mobile service. Just like other services, this page gives you the rundown of how to get up and running quickly. The really cool part is that this page gives you sample code specific to your platform (and there are five offered). Here is where Microsoft really shakes off the past: this is no badly-written, practically useless MSDN article, but a set of ultra-practical code snippets. To go even more in-depth, follow the link that offers to teach you how to add authentication. That will show you how to set up any of the four standard authentication methods (including Facebook and Google). In my practical example I'll show the code for how to authenticate, but know that it came straight from the linked page.

Before you jump into using the code on the dashboard, you should probably create a table. Go to the Data tab and add a table: 

When you click "Add a Table", you get this: 

While this page isn't that complicated, it is all you need to get started with a basic table. You decide what level of permissions is needed for each of the CRUD actions and you name the table. I'm sure you savvy SQL developers out there have figured out what is missing: the schema. Where are the columns? I'm glad you asked. Mobile Services database tables are dynamic by default, meaning columns are added as needed based on the incoming data. You can create them the old-fashioned way, but for now we are going to leave things as they are. When we put an app in production, we will flip the flag that disables dynamic modification of our table structures.

The next stop on our Mobile Service tour is the scheduler: 

Creating a new job gives you the option of how often to run the job: 

That's great. Another fun feature, but what does it mean? Well, when you set up a scheduled job, you can put in a section of JavaScript that can call services, get data, change your database, etc. This can be helpful for cleaning up data, creating totals (once instead of every time a person requests them), or doing any number of other interesting things. The point here is that the options are practically endless.
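To make that concrete, here is a sketch of what such a job might look like: a hypothetical task that removes signup rows more than 30 days old. The job name, the `signup` table, and the cutoff logic are all illustrative, and in a real Mobile Services scheduler script the global `tables` object is supplied by the runtime; the minimal stub below exists only so the sketch runs standalone.

```javascript
// Hypothetical scheduled job: remove signup rows more than 30 days old.
// In Azure Mobile Services the `tables` global is provided by the runtime;
// this minimal stub stands in for it so the sketch is self-contained.
var deletedIds = [];
var tables = {
    getTable: function (name) {
        var rows = [
            { id: 1, createdAt: new Date(2013, 0, 1) },   // stale row
            { id: 2, createdAt: new Date() }              // fresh row
        ];
        return {
            read: function (options) { options.success(rows); },
            del: function (row) { deletedIds.push(row.id); }
        };
    }
};

function cleanupOldSignups() {
    var signups = tables.getTable("signup");
    var cutoff = new Date();
    cutoff.setDate(cutoff.getDate() - 30);    // anything older than this goes
    signups.read({
        success: function (results) {
            results.forEach(function (row) {
                if (row.createdAt < cutoff) {
                    signups.del(row);         // delete each stale row
                }
            });
        }
    });
}

cleanupOldSignups();
```

In the portal, you would paste the body of a function like this into the job's script editor and let the schedule you configured drive how often it fires.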

Speaking of endless wonder, note that I haven't even touched on the API, push notifications, identity information, and a number of other interesting features. And while you are right (as you have been all article), I'm not going to go into these advanced areas just yet. If I show you too many options all at once, you will get feature overload, and I want this to be a practical article that you can use. I've shown you how to get sample data access code, how to create a table, and even how to set up a schedule to do something to that data. That should be enough to get you started. Once you feel comfortable with that information, read on into my practical example, where you will see how I applied this basic information in a real-world instance.

Practical Example 

I'm assuming that if you are reading this, you've got a basic handle on the capabilities of Azure Mobile Services. If not, read the section just above this one; that will catch you up to speed. Now let's dig into what I actually did with Azure Mobile Services in my Campus Classifieds site, and then I will show you how I did it.

The Campus Classifieds site is designed to be a hub for students, faculty and staff to share information including campus events. One key to any good event is to know who is coming. I decided to implement a rudimentary sign-up sheet for campus events. In this iteration, we will just allow a person to indicate that yes, they are coming to a particular event. They should not be able to see the registrations of others, they should not be able to sign up anyone but themselves, and they should know when they are signed up.

After evaluating the use case, I determined that we first needed to be able to identify a user. That means authentication. I remember when the idea of authenticating users used to make me run for the hills. Fortunately, I don't need to "accidentally" format my computer to avoid the pain of authentication anymore. Instead I just need to follow the simple step-by-step directions found here: http://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started-with-users-html/ 

Simple, right? I thought so. I would share a really awesome piece of code that I created to hook this up, but...well, it is actually a copy/paste of Microsoft's code. Don't overthink it, just do likewise. You won't regret it.
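For completeness, the shape of that copied code is worth seeing. The sketch below mimics the tutorial's `client.login` call; the stub `client` stands in for the real object you get from `new WindowsAzure.MobileServiceClient(appUrl, appKey)` (which opens the provider's sign-in page), so only the calling pattern here is authoritative, not the stub's behavior.

```javascript
// Shape of the login flow from the tutorial. The real client is created
// with `new WindowsAzure.MobileServiceClient(appUrl, appKey)` and opens
// the provider's sign-in page; this stub simulates a successful Google
// sign-in so the calling pattern can run standalone.
var client = {
    currentUser: null,
    login: function (provider) {
        var self = this;
        return {
            done: function (onSuccess, onError) {
                // Stub: pretend the provider authenticated the user.
                self.currentUser = { userId: provider + ":12345" };
                onSuccess(self.currentUser);
            }
        };
    }
};

client.login("google").done(
    function (user) {
        console.log("Logged in as " + user.userId);
    },
    function (error) {
        console.log("Login failed: " + JSON.stringify(error));
    });
```

The key takeaway is how little ceremony there is: one `login` call with a provider name, and `client.currentUser` is populated on success.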

Now that I had hooked up authentication, I figured setting up the table and connecting to it would be just as simple. I mean, that code snippet looked simple enough. This is where I quickly went off the rails. I wanted to check for a specific user's records in a safe manner. I also wanted to try out that dynamic table creation thing. Finally, I wanted to have a system that was dead simple to use and understand. Let's fast forward past the part where I repeatedly bash my head on the keyboard and get to the part where I look like I know what I'm doing.

First, here is the code I developed for writing to the table: 

var data = {
    eventId: 1,
    username: 'Tim'
};
client.getTable("signup").insert(data).done(
    function (results) {
        //Figure out what you want to do on success
    },
    function (error) {
        //This is a quick and dirty error message
        alert(JSON.stringify(error));
    }); 

Let's look at that code for a second. First, I'm creating a data object that holds two pieces of test data. These two pieces of data will become two columns in the table. Next, I call client.getTable('signup').insert(data), which actually puts the data into the signup table. After that, I've passed a couple of callbacks to the done method: the first handles success, and the second handles the potential case where we have an error. In my case, I kick off a method inside the success callback that re-evaluates which DIVs to hide and show. Note that if you want to set up your table, you could run code like this with dummy data once to get your columns created. Then you could go into the Azure portal and delete the dummy data (the table schema will remain).

The next step is to read data out of the database. Here is my code to do so:

var query = client.getTable("signup").where({ eventId: id });
query.read().done(
    function (results) {
        if (results.length > 0) {
            //You have data
        }
        else {
            //You don't have data
        }
    },
    function (error) {
        alert("Error occurred while reading data: " + JSON.stringify(error));
    }); 

This code is very similar to the code that we used to write to the database. The biggest difference is probably the where clause in the first line. This is how you filter a recordset request.
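The object that comes back from where() can also be refined further before read() is called. The sketch below shows that chaining style; orderBy and take are methods the Mobile Services JavaScript query object provides, but the tiny recording stub used in place of the real client (so the chain runs standalone) is purely illustrative.

```javascript
// Query composition sketch: where() returns a query that can be refined
// with orderBy, take, etc. before read(). The stub client below only
// records the chain so this runs without the Mobile Services SDK.
var chain = [];
var client = {
    getTable: function (name) {
        var query = {
            where: function (f) { chain.push("where"); return query; },
            orderBy: function (c) { chain.push("orderBy:" + c); return query; },
            take: function (n) { chain.push("take:" + n); return query; },
            read: function () {
                chain.push("read");
                return { done: function (onSuccess) { onSuccess([]); } };
            }
        };
        return query;
    }
};

// First ten signups for an event, ordered by id:
client.getTable("signup")
    .where({ eventId: 1 })
    .orderBy("id")
    .take(10)
    .read()
    .done(function (results) {
        // results would hold at most ten matching rows
    });
```

Against the real service, that same chain translates into a filtered, sorted, limited query on the server, so only the rows you asked for come over the wire.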

So, now you can read from the table and write to it. We are done, right? Not so fast. Right now it looks like we will get all the records from every user. Where is the code that limits the results to a specific user? In fact, I don't see anything user-specific here at all. What is to prevent an anonymous user from trying to add data to our table?

I'm glad you asked. When we set up our authentication, we got a snippet similar to this one:

client.currentUser !== null; 

This allows us to check whether a user is logged in (it will be true if someone is logged in, false if no one is). That could let us do some filtering on the client side, but a client-side variable is easy to overwrite, which would leave our data open to attack. The real enforcement, then, needs to happen server-side, and Mobile Services supports exactly that. If you go to your table, you will see an option called "Script" like so:

As you may notice, your script looks different than mine. I have added the following line:

item.userId = user.userId; 

This line stamps the authenticated user's id onto every record as it is inserted, so I know that all inserted records will be accurately tagged before they are stored. How do I know that I'll have a user id for every user who tries to insert a record? Well, remember when we created our table, we were asked what permission level each of the CRUD operations should have? I made sure that every action on this particular table requires authentication. Because of this extra line, I'll always have a server-verified, tamper-proof value. Next, I put a similar line on the read operation (switch to it using the dropdown):

query.where({ userId: user.userId }); 

This line ensures that we read only the records relating to this particular user. This is how I can figure out if a user has an entry in the signup list or not.
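Putting those two one-liners in context, the full table scripts end up looking like the sketch below. This matches the standard shape Mobile Services generates for table scripts (an insert function taking item, user, and request, and a read function taking query, user, and request); only the two body lines are the additions discussed above.

```javascript
// The complete server-side table scripts once the two lines are in place.
// Mobile Services invokes these for every insert/read on the table; `user`
// is the server-verified identity of the caller, so it cannot be spoofed
// from the browser.
function insert(item, user, request) {
    item.userId = user.userId;             // stamp the row before it is written
    request.execute();                     // continue with the (now tagged) insert
}

function read(query, user, request) {
    query.where({ userId: user.userId });  // restrict results to this user's rows
    request.execute();                     // run the filtered query
}
```

Because both scripts run on the server and the table requires authentication, no client-side trickery can insert untagged rows or read another user's signups.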

And that is it. I wired all of that code up with a little JavaScript plumbing and away I went. You can see the results of this work by going to the Campus Classifieds site and opening any event. First it will tell you that you need to sign in before you sign up (I only wired up Google authentication for now). Once you sign in, the page will change and give you the option to sign up; hitting the button signs you up, and your basic information (from your login) is stored in the table. If you come back to the site later, it will remember who you are (if you are logged in) and tell you that you are already signed up for that particular event.

You can try it out here: http://samplecc.azurewebsites.net/Service/Item/4 

Conclusion

Well, I've written this article in two parts, so I'll break my conclusion up into those same two parts. On the Azure side, the conclusion is simple: Azure is AWESOME! If you can't see the potential in Azure, your eyes must be closed. The power, flexibility, scalability, and ease of entry make this platform an all-around winner. I'm often hard on Microsoft for their documentation, but they have really outdone themselves getting the documentation for Azure just right. Sure, there are places where it could be expanded or improved, but those are the exception, not the rule. From a personal standpoint, I can't wait to dive further into Azure.

As to the practical side, I've had a lot of fun creating Campus Classifieds. Yes, it is a real project that I'm working on. While I have more of a working storyboard right now than a true alpha or beta product, I'm pleased with the results and with the potential. When I finally have a working product, I will definitely consider hosting it on Azure. The scalability ensures that I'll be able to grow the site as the business grows (and not before, which will save me money). The scriptability will allow me to spin up new sites (both test and production) very quickly. Every new plan has a number of issues to overcome, but I feel that hosting is not one of them. I know I can count on Azure to see me through.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

Tim Corey
Software Developer (Senior) Epicross
United States United States
I am currently a Lead Technical Consultant for a consulting company called Epicross. My primary skills are in .NET, SQL, JavaScript, and other web technologies although I have worked with PowerShell, C, and Java as well.
 
In my previous positions, I have worked as a lead developer and IT Director. As such, I have been able to develop software on a number of different types of systems and I have learned how to correctly oversee the overall direction of technology for an organization. I've developed applications for everything from machine automation to complete ERP systems.
 
My current position is mainly focused on making our clients more efficient and effective. I use custom software (desktop, mobile, and web) to help facilitate this goal. When I'm not working for the company, I'm usually developing applications to fill the needs of the organizations I volunteer for.
