I recently had a need to consume a private NuGet feed in one of our Azure DevOps build pipelines, specifically the build pipeline for our Xamarin Forms mobile app. We wanted to use a Telerik UI NuGet package in the app. To add a reference to this package to your project, you first need to add your Telerik credentials to Visual Studio. This verifies that you are a fully paid-up Telerik subscriber with access to the NuGet package.
I therefore needed to update the build pipeline to fetch this private NuGet package. After a bit of trial and error (and a few failed builds) I got it working. In Azure DevOps I needed to update the NuGet restore build task to also fetch the Telerik NuGet package.
- Add a NuGet restore task to your build pipeline (if you don't already have one). This task needs to come before you build the project.
- Set the path to the project in the relevant textbox.
- Select the option Feeds in my NuGet.config (this is important, as it allows you to specify credentials for consuming external NuGet packages).
You should now see a Manage link which allows you to configure the credentials for your private NuGet feed. Clicking this link opens the Service Connections that are available to your build pipeline. Add a new service connection of type NuGet. In the dialog box that is displayed, select the option for Basic Authentication and enter the following information:
- Connection name
- Feed URL
- Username
- Password
Click OK to save these credentials.
Back in your build pipeline's NuGet restore task, you should now be able to select these credentials from the dropdown. Azure DevOps will then merge them into its default nuget.config file (or into the one you have specified under Path to NuGet.config). Either way, whatever credentials you have specified will be merged into the nuget.config file.
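To make this concrete, here is a minimal sketch of what a nuget.config with the extra feed might look like (the feed name is arbitrary and the Telerik v2 feed URL is an assumption based on their documentation; the credentials themselves are injected by the service connection at build time rather than stored in the file):
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="Telerik" value="https://nuget.telerik.com/nuget" />
  </packageSources>
</configuration>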
And that's basically all there is to it. Your build pipeline is now able to consume nuget packages from private feeds.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I have been setting up the Azure DevOps builds required for our new mobile app: one for Android and one for iOS. In this article I will focus on the iOS app, as this is the one that caused me the most difficulty. There is a degree more difficulty when developing for the Apple platform, as you need a Mac, certificates and provisioning profiles, so configuring a build for iOS is a little more complex. This is definitely borne out by the number of Stackoverflow posts I found on the various issues I encountered.
Before proceeding, I want to clarify something that caught me out. This may seem self-evident but, judging by the posts I came across on this issue, perhaps not so much. When running your iOS app from Visual Studio there are two methods of provisioning the app.
- Automatic provisioning - This is useful during development. You need a Mac on your network that is visible to your Visual Studio environment, and to pair with it. Your Visual Studio environment will then read the necessary provisioning information directly from the Mac (be sure to disable the screen-saver on the Mac or else you'll lose your pairing with it).
- Manual provisioning - This is needed when you intend to build your app from a build server. Unlike automatic provisioning (where your Visual Studio environment just fetches what it needs from the paired Mac), you instead enter the necessary signing identity and provisioning profile information into Visual Studio.
So if you are setting up your iOS app to be built on a build server such as Azure DevOps, you will need to use manual provisioning.
When setting up an iOS build you firstly need to select the correct agent pool from Azure DevOps. In this case select the Hosted macOS agent pool. Selecting this provides you with a template consisting of the core tasks necessary for building your iOS app.
- Install an Apple certificate
- Install an Apple provisioning profile
- Build Xamarin.iOS solution
- Copy files to the artifacts staging directory
- Publish the build artifacts
We are also using Visual Studio App Center so I have the following task defined too.
- Deploy to Visual Studio App Center
We intend to use App Center for testing but we haven't set this up just yet.
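I set all of this up in the classic (visual) pipeline editor, but for anyone defining the pipeline as YAML instead, the same sequence of tasks might be sketched roughly as follows (the task versions, secure file names, service connection name and app slug are assumptions; substitute your own and check the current task reference):

pool:
  vmImage: 'macOS-latest'

steps:
- task: InstallAppleCertificate@2
  inputs:
    certSecureFile: 'distribution.p12'
    certPwd: '$(P12Password)'

- task: InstallAppleProvisioningProfile@1
  inputs:
    provisioningProfileLocation: 'secureFiles'
    provProfileSecureFile: 'MyApp.mobileprovision'

- task: XamariniOS@2
  inputs:
    solutionFile: '**/*.sln'
    configuration: 'Release'
    packageApp: true

- task: CopyFiles@2
  inputs:
    SourceFolder: '$(system.defaultworkingdirectory)'
    Contents: '**/*.ipa'
    TargetFolder: '$(build.artifactstagingdirectory)'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'

- task: AppCenterDistribute@3
  inputs:
    serverEndpoint: 'MyAppCenterConnection'
    appSlug: 'myOrganisation/MyAppName'
    appFile: '$(build.artifactstagingdirectory)/**/*.ipa'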
Installing the Apple certificate and provisioning profile
=========================================================
The Apple certificate and provisioning profile can both be downloaded from your Apple developer account and uploaded to your Azure DevOps build pipeline. The certificate should be in the form of a .p12 file, which differs from the .cer file you download from Apple; you may need to open the certificate in Xcode on a Mac to generate the required .p12 file. Either way, once you have these files they need to be uploaded to Azure DevOps. Your build will fail without them.
Build the Xamarin.iOS solution
==============================
Before you proceed to this step, ensure you have set your Xamarin.Forms iOS project to use Manual Provisioning, and set values for the Signing Identity and Provisioning Profile (these must match the certificate and provisioning profile you uploaded earlier). On the build task, check the box Create app package if you want to create an .ipa file (which is the file that is actually installed onto the devices). If you intend to test your app in any way, then this needs to be checked.
The output from this task should be the required .ipa file.
Copy files to the artifacts staging directory
=============================================
The template does a good job of this, so the task should need very little configuration. Basically, all the task does is copy the generated .ipa file from the build folder to the artifacts folder, from where it can be used by subsequent build tasks.
- Source folder - $(system.defaultworkingdirectory)
- Contents - **/*.ipa
- Target folder - $(build.artifactstagingdirectory)
Publish the build artifacts
===========================
This task simply publishes the contents of the artifacts folder from above - $(build.artifactstagingdirectory)
At this point we have a complete build process that has generated an .ipa file using the latest code changes, and published that .ipa file so that it is available for subsequent build processes such as testing and / or deployment. From here you can use your preferred testing and deployment tools. In my case, I have deployed the generated .ipa file to App Center for testing and deployment.
Deploy to Visual Studio App Center
==================================
You will need to configure your build with an App Center token. This authorises your build process to access App Center on your behalf. I will write a future article on App Center, but for now it is sufficient to know that I have two apps configured in App Center - one for iOS and one for Android. Once configured, enter the name of the App Center connection into your Azure DevOps task.
If you are using App Center as part of a team then it's a good idea to create an organisation, and assign your developers to the organisation. Then in the App slug you would enter {organisation-name}/{app-name} e.g. myOrganisation/MyAppName.
Now, for each build that is triggered, we have a full build pipeline that builds the iOS app and deploys it to App Center, from where we can deploy it to actual physical devices (allowing us to monitor analytics, crashes and push notifications).
Setting up this build process has been far from straight-forward. I encountered several problems along the way, and didn't always find answers to my questions. Many times it was down to good old fashioned trial and error, along with a dash of perseverance.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
As part of our new mobile app offering that we are busy developing, I need to deploy the backend Azure mobile app services. These are the backend services that will provide all the main business logic that the app will need to function and provide value to the user. The app itself will essentially be a dumb set of screens that will have no smarts in and of themselves. The smarts will come from the backend services that the app will consume. And these services will be hosted on Azure in the form of mobile app services.
I have previously written about how I set up the build pipeline using Azure DevOps[^]. The next step was therefore to deploy the build artifacts to Azure using the same Azure DevOps pipeline.
The main steps needed to deploy your app to Azure are actually defined in your build pipeline:
- create a zip file containing the deployed build artifacts
- publish the zip file so it is available for the release pipeline
In my build pipeline I have these two tasks defined as the last tasks in the pipeline. To create the zip file I use MSBUILD with the following parameters:
WebPublishMethod=Package;
PackageFileName=$(Build.ArtifactStagingDirectory)\package.zip;
DesktopBuildPackageLocation=$(Build.ArtifactStagingDirectory)\package.zip;
PackageAsSingleFile=true;
PackageLocation=$(Build.ArtifactStagingDirectory)\package.zip;
DeployOnBuild=true;
DeployTarget=Package

I therefore added an MSBUILD task to the build pipeline. You may also need to add other build parameters for specifying the OutputPath, Configuration and Platform, and any other parameters as necessary.
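Putting those parameters together, the arguments passed to the MSBUILD task end up looking roughly like this (the configuration and platform values are placeholders for whatever your project uses):
/p:configuration="release";platform="any cpu";WebPublishMethod=Package;PackageFileName="$(Build.ArtifactStagingDirectory)\package.zip";DesktopBuildPackageLocation="$(Build.ArtifactStagingDirectory)\package.zip";PackageAsSingleFile=true;PackageLocation="$(Build.ArtifactStagingDirectory)\package.zip";DeployOnBuild=true;DeployTarget=Package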
You will then need to add a Publish Build Artifacts task to your build pipeline. This makes your zip file available to the release pipeline. In the textbox for Path to publish I have entered $(Build.ArtifactStagingDirectory), as this is where I want the zip file to be published.
There are various templates you can use for setting up your release pipeline. For the purposes of this article I will keep it simple and refer to the Deploy Azure App Service template. Here you will need to authorise your Azure subscription. Once this has been completed you will need to enter other details including:
- app type
- app service name
- package folder (the filename and path where the zip file is located)
- optionally you can specify a slot if you are deploying to slots (which I highly recommend you do)
There are some subtle differences between how TFS handles deployments to Azure and how Azure DevOps handles them, which threw me when I first set up the release pipeline. For example, ensuring the release pipeline has access to the zip file confused me at first, until I discovered that you need to publish the file for it to be available to the release pipeline.
Other than that, the process itself is fairly straight-forward and I didn't encounter any major problems.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I have long had an interest in the DevOps side of the software development lifecycle. As much as I love to write software, I love to build and deploy it too. After all, it doesn't really matter how great your code is; if people aren't using it then it's an irrelevance. I love to set up and configure build processes. DevOps automates and simplifies the process by which software is deployed onto users' machines. From the developer checking in their code, through versioning the software, executing and publishing unit tests and analysing code coverage, to deploying the final artifact onto a release server, it is all part of the DevOps process. These steps can (and should) all be automated. The developer shouldn't have to worry about any of this (unless, like me, they actually enjoy setting up these processes). I have previously used CruiseControl, TeamCity, Team Foundation Server and most recently Azure DevOps.
We have recently begun the task of re-building our next generation mobile app. For this we are using Xamarin in conjunction with Azure services. We currently use Team Foundation Server (TFS) for all of our DevOps processes. It is a brilliantly simple, yet very flexible and powerful build tool; I haven't found anything that I haven't been able to do with it. For our new project, though, I wanted to make use of Microsoft's successor to Visual Studio Team Services (VSTS), which is now branded as Azure DevOps.
This seemed the perfect time to start using Azure DevOps - with a new project. I have no intention of migrating our existing projects, so it would take a new project to let me get my hands on Azure DevOps.
First off, for anyone who has previously used TFS or VSTS, Azure DevOps (which is really a re-branding of VSTS) should look and feel very familiar. As its name suggests, it is powered by Azure infrastructure, meaning it will scale up and out as your build process grows.
We have separated our new mobile app into two distinct solutions. One is the Xamarin Forms app, and this will unsurprisingly contain the actual app itself. The other is the Azure backend that will provide all the functionality to the app (business logic, notifications, service requests etc). It is this latter solution that I have been focussed on moving into Azure DevOps. At the time of writing, I have set up the pipeline to include:
- versioning the assembly
- restoring the NuGet packages
- building the solution
- executing the unit tests
- publishing the code coverage
There are literally hundreds of built-in tasks for building, testing, packaging and deploying your software. You also have access to the Marketplace where you can find hundreds more tasks developed by the community. Even big players such as JetBrains have free tasks available in the Marketplace. So if you can't find the task you want, you can probably find one that matches in the Marketplace. If not, you can always develop your own and publish it in the Marketplace yourself.
My first impression of Azure DevOps is that it's quite simply a brilliant tool. It reduces our reliance on our on-premise infrastructure and allows us to fully build and deploy our applications on rock-solid infrastructure in the Microsoft cloud. If you currently use TFS then it's worth spending the time to explore Azure DevOps. Unless your business already has a large investment in IT infrastructure, you'll be very hard pushed to beat the Azure stack. If you're currently using VSTS then you'll be automatically migrated to Azure DevOps anyway. Even if you don't currently use TFS or VSTS, it doesn't matter; you can build, package and deploy your application using Azure DevOps regardless. It has support for every platform and technology. So whether you're brand new to DevOps and don't have anything currently configured, or you're currently using an alternative, it's worth checking out Azure DevOps.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Having developed several mobile apps and deployed them to both Google Play and the Apple App Store, we have been forced to re-write them. The reason is that our development platform - Telerik Platform - has been retired, probably so that Progress can focus its development efforts on its other cross-platform mobile technology - NativeScript.
That leaves us in the position of having our apps in the app stores, but with no means by which to update them. We can update the RESTful services used by the apps, as these are completely separate from the apps (thank goodness for good architecture), but we can't make any changes to the apps themselves. That puts us in a slightly vulnerable position, as we can't respond to customer suggestions or changes in market forces (or even change the branding / look and feel).
We have therefore been forced to re-evaluate the technologies that are available to us for developing the mobile apps. We have looked at several technologies. I have previously written why I think Building native enterprise apps is (probably) the wrong approach[^]. In relation to enterprise apps, there is very little (if any) benefit to going native (longer development cycles, greater expense, bigger teams with broader skill sets). Cross-platform therefore is the only approach on the table.
We first looked at NativeScript (the natural successor to Telerik Platform, as both are owned by Progress). This looked like a great development platform. Progress have made big strides in easing the migration path of existing Telerik Platform users to their newer NativeScript platform. You can choose from JavaScript, Angular or TypeScript as the language in which to build your apps. It comes with a companion application called Sidekick to simplify many of the development processes, it has the support of a large community, and it is backed by Progress (giving peace of mind). Also, it renders native components on the device, making it a truly cross-platform development environment.
The only other alternative that I considered seriously was Xamarin. I had used this previously (before the Microsoft acquisition) and so was already familiar with it. I was intrigued as to how it may have changed since the acquisition. The first thing I noticed when looking through the documentation and examples was the tight integration with Azure. We already make substantial use of Azure with our other mobile and web apps, so it was great to see the same design philosophy applied to Xamarin. In fact, the overall architecture used by Xamarin is not too dissimilar to the one I developed for our existing apps and current web app. This was a huge benefit to us right out of the box, as I was already familiar with the architecture and the key moving parts of building a mobile app with Xamarin. Like NativeScript, Xamarin also renders truly native components on the device.
I spent considerable time looking at both offerings, as well as taking into consideration the skill set of the team. In the end we decided to go with Xamarin. I am far more familiar with C# and Azure (as well as the architecture of Xamarin apps) and this played a part in the final decision. NativeScript would have required us to learn TypeScript. Although this is not necessarily a barrier on its own, the reality is that I will be up and running far quicker with Xamarin than with NativeScript.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I recently had to figure out how to execute an AJAX request upon loading an ASP.NET Core Razor page whose URL contained a querystring parameter. On one of our Razor pages, we have a Kendo UI TreeView. Clicking on this treeview populates the page with data for the selected item: the click event on the treeview executes a JavaScript function, which in turn makes an AJAX request to a service to fetch the data for the selected item and then populates the Razor page with the returned data.
When launching the Razor page interactively from the menu the URL looks like this.
https://localhost/DocumentManager/DocumentManager
When launching the Razor page for a specific item programmatically the URL looks like this.
https://localhost/DocumentManager/DocumentManager?documentid=1234
So basically, the page is populated with data from an AJAX request which is fired from a JavaScript event. The problem I had was that we needed to open this page and load up the data for a specific item. Whereas currently the item is selected by the user interactively clicking on an item in the Kendo UI TreeView, I now had to figure out how to load the page data for an item programmatically.
So here's how I did it.
I first needed to figure out whether the Razor page was being launched interactively (with no querystring parameters) or programmatically (with querystring parameters). I did this using URLSearchParams (URLSearchParams - Web APIs | MDN[^]). This is an interface that allows JavaScript to manipulate and work with querystring parameters, and it offers a far simpler and more elegant mechanism than horrible string manipulation and / or regex queries.
I was passing a document ID to the Razor page in the form of:
https://localhost/DocumentManager/DocumentManager?documentid=1234
<div>
rest of the Razor page goes here
</div>
<script>
$(document).ready(function () {
var queryparams = window.location.search;
if (queryparams && typeof (queryparams) === "string" && queryparams.length > 0) {
var urlParams = new URLSearchParams(window.location.search);
var documentid = urlParams.get('documentid');
loadDocumentManagerForDocument(documentid);
}
});
</script>

In our JS file site.js, the function that makes the AJAX request is defined as follows.
function loadDocumentManagerForDocument(documentid) {
if (documentid) {
$.ajax({
type: "GET",
url: `/DocumentManager/DocumentManager?handler=document&documentid=${documentid}`,
contentType: "application/json",
dataType: "json",
success: function (response) {
},
error: function (response) {
}
});
}
}

Finally, here is the Razor page handler that fetches the data. Remember, this is the same Razor page handler that is used both for loading the data interactively (as the user clicks items in the Kendo UI TreeView) and programmatically.
public async Task<JsonResult> OnGetDocument(int documentid)
{
JsonResult result = null;
var response = await new DocumentManagerPageModelService().GetDocumentById(documentid);
result = new JsonResult(response);
return result;
}

Here is yet another example demonstrating the incredible flexibility and power of ASP.NET Core. This solved what I thought might be a really tough problem, but in the end it wasn't that difficult; with a bit of thinking the problem through, the solution was quite straightforward. I hope this solution helps someone else.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
The Document Manager was intended to be a web-based application that would allow users to upload documents (reports, spreadsheets etc) and assign subscribers to them. A subscriber would then be able to log in to the application and download any documents assigned to them. The premise was to build an application along the lines of a collaborative Dropbox.
The entire application was a proof-of-concept for building the next generation fleet management system. The new application would be a replacement for the current one. To ensure that the technical choices we had made were sound, and to reduce the risk to the business, we decided to develop a single module first. If this went well and we were satisfied that the technologies were sound, then we would create the rest of the application.
The technologies we had selected (and therefore used to build the Document Manager) included the following:
- ASP.NET Core 2.1 Razor pages (for the front-end application)
- ASP.NET Web API (for building the RESTful services that would be consumed by the application)
- Azure (for hosting, SQL and blob storage)
The only unknown was the use of ASP.NET Core and Razor Pages; we had used the other technologies previously on our mobile apps. We didn't want to use full-blown MVC for this project, as we intended to create a suite of RESTful services to provide the business logic. The architecture was service-oriented (SOA), so the client application only needed to be a lightweight front-end; we didn't need anything as complicated as MVC or a single-page application (SPA).
ASP.NET Core 2.0 comes with a project template that lets you create an application based on Razor Pages, without the added complication of MVC. This seemed the perfect fit for our needs, and after experimenting for a few days it fitted very well with the rest of our architecture.
Part way through the application lifecycle we upgraded from ASP.NET Core 2.0 to 2.1, and upgraded Visual Studio at the same time. Apart from making a minor change to one of our build scripts, this upgrade was seamless and without problems.
We are now nearing completion of this project. We have developed the Minimum-Viable-Product (MVP) as our proof-of-concept. The application allows for the uploading, downloading, editing and deleting of documents. You are able to add / delete the subscribers to a document. Subscribers are notified of their subscription via our email service (so a subscriber is alerted to the fact that they need to login to the application and download a document). There is also administration functionality (maintaining companies, users and roles).
I have found ASP.NET Core 2.1 in conjunction with Razor Pages to be the perfect choice. ASP.NET Core is an incredibly powerful development platform; the support for AJAX and the Razor page handlers alone make it fantastic. There are multiple ways of achieving the same objective, making it incredibly flexible.
I am very pleased with how the project went. The technical choices were justified and sound, and we are now extremely confident of building out the rest of the next generation fleet management software using these technologies.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I came across this design strategy many years ago when writing client APIs, but it's a strategy that is worth considering when designing any client API. It's one I have made extensive use of in our current suite of ASP.NET Web APIs.
The surface area is what the client interacts with when consuming an API. Reducing the surface area is therefore a strategy for reducing what the client needs to interact with. By reducing what the client needs to interact with, your API is simpler and easier to consume by client applications.
Let's say you have a fairly simple data-entry application that allows the client to add/update/get/delete items such as customers, orders and deliveries (basic CRUD functionality). If we wanted to develop a client API to implement this functionality, we might quite reasonably do so by implementing a customer API, an order API and a delivery API. There is nothing wrong with this approach. But let's say that six months later we have added more functionality: we can now add/update/get/delete stock, suppliers and materials. The number of APIs the client needs to interact with has doubled. From the client's point of view, the API is getting increasingly complex to use, as there are a greater number of APIs to learn and use.
But wait. Don't all those APIs do roughly the same sort of thing? They all provide the same CRUD functionality, just to different entities (customer, order, delivery, stock, supplier and material).
What if we condensed all those CRUD APIs for all those different entities into a single API? That would provide the same level of functionality to the client application, but would also be easier to learn and understand, as there is only one API to interact with.
This is the concept behind reducing the surface area of the API.
In a web application I have been developing, we have a very similar scenario. We have a data-entry web application that provides CRUD functionality to the user. All the functionality has been implemented using ASP.NET Web API. However, the web application only consumes a single API. All POST, PUT, GET and DELETE requests are reduced down to a single API that performs all operations across the entire web application. Not only that, but all the APIs work in the same, consistent manner.
For example, the POST controllers (API) work in the following manner. I pass a single string key on the querystring. This tells the API what type of data is being passed. In the request body is the data itself in JSON format (other formats are available).
Example values for the querystring key could be "addcustomer", "addorder", "addsupplier". Then in the body of the request would be the actual data that represented the entity (customer, order, supplier etc).
Here is example code from the POST request controller.
[HttpPost]
[EnableCors(origins: "*", headers: "*", methods: "*")]
public async Task<HttpResponseMessage> WebPostData(string formname, [FromBody] JToken postdata)
{
base.LogMessage(string.Format(CultureInfo.InvariantCulture, "{0}.{1}", GetType().Name, "WebPostData"));
base.LogMessage($"Formname={formname}");
if (!base.IsRequestAuthenticated())
{
base.LogMessage("Request failed authentication");
throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.Forbidden));
}
return await this.AddWebTaskToServiceBus(formname, postdata);
}

This same concept can be applied to PUT, GET and DELETE requests. As long as you have a string key parameter that determines the type of the data, you are able to implement the appropriate logic to process it (e.g. if you know you are adding a new customer, then you de-serialise the customer data and pass it to the database, service bus etc).
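To make that dispatch concrete, here is a hypothetical sketch of what a method like AddWebTaskToServiceBus might do with the key (the Customer and Order types and the service fields are illustrative assumptions, not the actual implementation):
private async Task<HttpResponseMessage> AddWebTaskToServiceBus(string formname, JToken postdata)
{
    switch (formname)
    {
        case "addcustomer":
            // De-serialise the body into the strongly-typed entity and process it
            var customer = postdata.ToObject<Customer>();   // Customer is illustrative
            await _customerService.AddAsync(customer);      // hypothetical service
            return Request.CreateResponse(HttpStatusCode.OK);
        case "addorder":
            var order = postdata.ToObject<Order>();         // Order is illustrative
            await _orderService.AddAsync(order);            // hypothetical service
            return Request.CreateResponse(HttpStatusCode.OK);
        default:
            // Unknown key - reject the request
            return Request.CreateResponse(HttpStatusCode.BadRequest, $"Unknown form '{formname}'");
    }
}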
This makes your API surface much smaller, which in turn makes it far easier to consume, learn and comprehend. Surely that's better for everyone.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
This may seem a self-evident statement, but apparently not. I recently worked with a colleague (who shall remain nameless to spare their blushes) who was young and very inexperienced. When given a particular task, their approach was not to spend time understanding the problem and investigating several solutions before implementing something that would hopefully solve it. Instead, this particular individual would ask Google for answers, and then use whatever solution was at the top of the list, no matter how inappropriate that solution was.
I have no problem with anybody using Google or any of the technical forums such as Stackoverflow. We all get stuck sometimes, and it's useful to look to these forums for advice, suggestions or possible answers. But that's when you get stuck. I wouldn't go straight to Google from the get-go like this person did. If you don't fully understand the problem, let alone the answer, you're headed for big trouble further down the road. It's only a matter of when, not if.
I have encountered many problems where I have been genuinely stuck. What I do then is start researching. I spend time reading around the problem, and reading around the various tools / technologies that may help me resolve it. What I most definitely don't do is start coding. And I wouldn't expect other members of the team to have to make changes to the code to accommodate my ill-researched solution. Changing code that has been stable for a period of time is not a good idea, and it's especially not a good idea if you are introducing changes because you don't understand the problem yourself. That's akin to asking your fellow mechanic to put tractor wheels on your sports car because a solution you read on the internet suggested it. Had you fully understood the problem and the solution, you would have worked out that this was a silly idea.
So before plunging headlong into implementing a solution to a given problem, spend the time to fully understand what the problem is. Spend the time researching various solutions, and look at the bigger picture. Don't just focus on the immediate problem, but how your solution may impact the other moving parts of the application.
1. Take a breath
2. Understand the problem
3. Research and ask questions
4. Take another breath
5. Walk through your solutions with colleagues, who may have valuable knowledge that can help
6. Propose a solution
Downloading code and copying & pasting solutions from the internet just doesn't work. You'll grow into a far more valuable resource by taking the time to understand the problems you're trying to solve.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I recently had a need to set up different environments for our ASP.NET Core 2.1 web application. When we run the app from our development machines, we need an environment where we can diagnose and debug the code. We also use a different endpoint for the services (we have development, staging and production endpoints for our ASP.NET Web API services, which are consumed by the development, staging and production versions of the web app respectively). On top of that, we also use different Azure AD B2C (Azure Active Directory Business-to-Consumer) directories for our identity provisioning: one each for development, staging and production.
So we need to separate out these different environments when running the application so that the development, staging and production settings are consumed appropriately by the application.
Thankfully, ASP.NET Core makes this very straight-forward. Right out of the box, ASP.NET Core supports three environments: Development, Staging and Production. The values for these environments are contained in JSON files called appsettings.<environment>.json, e.g. appsettings.Development.json, appsettings.Staging.json and appsettings.Production.json.
If you have common settings that apply irrespective of the environment, then these can be specified in the default appsettings.json file. The environment specific settings will then be merged into this default file at runtime. So for example, if you use the same instance of Application Insights across all the environments, then specify these once in the default appsettings.json file. Then in the Development, Staging and Production versions of the appsettings.json file, specify those settings that are specific to that environment.
Next you need to tell the ASP.NET Core runtime execution engine what environment to use. For development, you set this inside Visual Studio (right click on the project -> Properties -> Debug). You will see an environment variable called ASPNETCORE_ENVIRONMENT. This will be set to Development. This tells the ASP.NET Core runtime to use the Development environment settings. So any settings that are contained within your default appsettings.json file will be merged with those of appsettings.Development.json.
N.B. specific settings overwrite general ones, so if a setting appears in both files, the environment-specific value takes precedence.
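In ASP.NET Core 2.x, WebHost.CreateDefaultBuilder wires this merging up for you, but written out explicitly the configuration chain looks roughly like this (a sketch; later sources override earlier ones, which is why the environment-specific file is added after the default one):
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Startup
{
    public IConfiguration Configuration { get; }

    public Startup(IHostingEnvironment env)
    {
        // Load the defaults first, then layer the environment-specific file on top
        Configuration = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true)
            .AddEnvironmentVariables()
            .Build();
    }
}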
Setting the environment for development is straight forward, as it's done within Visual Studio. How do you set the environment for a deployed application on a Windows server or Azure?
This is also straight forward and can be found in this article[^].
N.B. It is the value of ASPNETCORE_ENVIRONMENT that determines which environment the ASP.NET Core runtime uses.
For the current ASP.NET Core web application I am developing, I have an appsettings.json (which stores our Azure Application Insights settings), plus appsettings.Development.json, appsettings.Staging.json and appsettings.Production.json. The latter three store the values that are specific to those particular environments (debugging settings, logging settings etc).
ASP.NET Core makes it simple and easy to configure your application for different environments, and these can be easily set for Windows / IIS environments and / or Azure environments.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
When I deployed the ASP.NET Web API services to our Azure hosting endpoint, I needed to create a zipped deployment file to do this. Azure then unzips the contents of this file and deploys it into your hosting slot. I needed to do the same thing recently with our ASP.NET Core 2.0 web application.
Just to clarify, I am using Team Foundation Server (TFS) for all our builds and releases. I much prefer using a build server to deploying straight from Visual Studio. Yes, I know deploying from Visual Studio is entirely possible, but I prefer to keep the build and deployment lifecycle separate from the development lifecycle. I find this leads to greater efficiency, particularly when you have a team of developers who need to collaborate on the same code.
When I did this previously with the ASP.NET Web API project, I used an MSBUILD task from TFS with the argument /t:publish,package to force the creation of the zipped deployment file. However, the /t:package argument does not exist for ASP.NET Core projects. So how do you create the zip file needed to deploy your web application to Azure?
Well, it seems that there are a couple of ways to achieve this (although they don't seem to be fully documented anywhere that I can find); I had to resort to reading through Stackoverflow to find the answer. You can use either MSBUILD or dotnet build. As the arguments that are passed to dotnet build are ultimately passed into MSBUILD (yes, it is good old MSBUILD that sits underneath dotnet build), I decided to opt for MSBUILD. I am also much more familiar with MSBUILD, having used it for many years building many other applications.
The MSBUILD statement that worked for me is the following.
"Path\To\MSBuild\MSBuild.exe" /p:configuration="release";platform="any cpu";WebPublishMethod=Package;PackageFileName="\MyFolder\package.zip";DesktopBuildPackageLocation="\MyFolder\package.zip";PackageAsSingleFile=true;PackageLocation="\MyFolder\package.zip";DeployOnBuild=true;DeployTarget=Package I have this command in a batch file which I then run under TFS as a build step. This build step is one of the last steps in the build process because it only needs to run prior to the release process.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
First of all, just to be absolutely perfectly clear, I do not work for Microsoft and have received nothing in return for writing this article. I just want to get that out of the way before going any further.
Throughout my nearly 20-year career as a professional software developer, I have always used Microsoft products, services and languages to develop the various applications I have helped build. These have included Visual FoxPro, Visual Basic, Xamarin, C#, SQL Server, Azure, Visual Studio, Visual Studio Code and ASP.NET (Core), to name a few.
Naturally, I have liked some of these better than others. What is becoming very apparent to me is that I am genuinely loving the new development ecosystem that has been coming out of Microsoft over the last few years. Under the leadership of Satya Nadella, the company has completely transformed. Their products, tools and services just keep getting better and better. As a developer, this is fantastic news. For any regular readers of my articles, none of this should come as a surprise; I regularly praise the Microsoft tooling I use.
I started using Azure over a year ago and can't believe how awesome it is. I use it for everything, including SQL storage, blob storage, hosting, service bus, webjobs, functions, identity provisioning and Application Insights. It allows me to build modern, scalable, highly available, secure and robust applications. All of the Azure services can be leveraged from within your .NET apps as well as from the Azure portal itself.
This year I started building a web app using ASP.NET Core 2.0. It brings the joy back into building web applications. It is very obvious that a lot of thought went into the architecture and design of ASP.NET Core 2.0. I have always enjoyed working with ASP.NET, but ASP.NET Core lifts this to entirely new levels. The team behind it have a clear understanding of the sorts of problems that developers face, and have solved these in simple yet elegant ways.
They have embraced open-source, they are open and transparent, their tools are no longer closed but integrate with practically every other tool (whether they are Microsoft or not). They are a completely different company to the one I carved out my career with. Credit where credit is due, they have listened to their customers and have responded accordingly. They are now building tools that developers need, want and can enjoy using.
Being a Microsoft developer these days is great fun, and I hope it stays that way for a very long time.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
The solution to this particular problem was not as difficult as I at first thought. Our web app contains dynamic toolbars that are in fact built from Razor forms, each bound to a page handler. Inside each form is a button that invokes the Razor page handler and thus services the request.
For example, a toolbar may contain buttons for adding, deleting, updating a particular entity.
<form asp-page-handler=@toolbar.PageHandler method=@toolbar.Method>
<button class="btn btn-default">@toolbar.DisplayText</button>
</form>

As can be seen from the example code, the toolbar is built entirely from dynamic data. This is because each toolbar menu item is tied to a permission, and the ASP.NET Web API service that returns the toolbar items contains all the business logic for deciding which toolbar items to return based on the user's permissions.
This all works perfectly. However, I ran into a problem when I needed to add a confirmation dialog to the Delete toolbar menu item. Adding a confirmation dialog using jQuery was simple enough; the problem was that all toolbar items are linked to a Razor page handler. For example, the toolbar menu item for Edit was linked to the OnPostEdit page handler, which would then implement whatever code was necessary to service the user's request to edit the entity. None of these toolbar items required a confirmation dialog. The toolbars are defined entirely by the service and all are linked to a single Razor page handler.
The Delete toolbar item needed a confirmation ("Are you sure you want to delete this item?"), and the most obvious solution was to implement this using JQuery, but this would break the pattern I was using for the other toolbar items.
I eventually came across a simple solution to the problem: I could add an onclick attribute to the form for any toolbar item that needed a confirmation dialog.
<form asp-page-handler=@toolbar.PageHandler method=@toolbar.Method onclick="return confirm('Are you sure you want to delete this?')">
<button class="btn btn-default">@toolbar.DisplayText</button>
</form>

Clicking on the toolbar item now brings up a confirmation dialog. If the user selects the Yes option the Razor page handler is still invoked - which is exactly what I wanted. If the user selects No, then nothing happens.
This very simple addition to the Razor form page handler gives you a confirmation before the page handler code is invoked. No need for JQuery. Just a very simple solution that I have now implemented and which works perfectly.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
As part of the web app I'm currently developing using ASP.NET Core 2.0, I needed to allow the user to upload files to the server. The file upload control I am using is the Kendo UI Upload control, but the underlying process will be similar irrespective of the underlying UI control that you are using.
To achieve this I am using an ASP.NET Core page handler. Within the form for this page handler I have placed the Kendo UI Upload control. The form specifies the name of the page handler and the HTTP method to use.
The code below is the (simplified) Razor (.cshtml) syntax that demonstrates how I have created the page handler and the Kendo UI Upload control.
<form asp-page-handler="upload" method="post">
@(Html.Kendo().Upload()
.Name("files")
)
<button>Save</button>
</form>

In the example above, it should be noted that the name of the page handler is "upload" and the HTTP method is POST. Without going into a full description of ASP.NET Core page handlers, the name of the form's page handler and the HTTP method dictate the name of the handler method in the Razor code-behind.
When the user clicks the Save button the files that have been specified for upload will be posted to the ASP.NET Core "upload" page handler.
public void OnPostUpload(IEnumerable<IFormFile> files)
{
}

Note: the name of the Kendo UI Upload control MUST be the same as the name of the parameter received by the page handler, i.e. "files" in this example. The files get posted to the page handler when the form is submitted; you can then process them in any way you want. Note also that the parameter type differs from previous versions of ASP.NET, which used HttpPostedFileBase. With ASP.NET Core the posted files are of type IEnumerable<IFormFile>.
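The empty handler above is where you process the posted files. As a minimal sketch, here is a version that writes them to disk (the uploads folder is a placeholder; in a real application the files might go to Azure blob storage instead - note that the runtime also recognises the Async-suffixed handler name):
public async Task OnPostUploadAsync(IEnumerable<IFormFile> files)
{
    foreach (var file in files)
    {
        if (file.Length > 0)
        {
            // Never trust the client-supplied path; keep the file name only
            var path = Path.Combine("uploads", Path.GetFileName(file.FileName));
            using (var stream = new FileStream(path, FileMode.Create))
            {
                await file.CopyToAsync(stream);
            }
        }
    }
}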
ASP.NET Core makes handling file uploads simple and straightforward. Doing so is even easier using the Kendo UI Upload control, which reduced the amount of code I had to write. Files can be uploaded asynchronously and in multiples too.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I have been using Razor page handlers recently in our web application to handle the interaction between client-side code and server-side code. There have previously been several ways of achieving this using AJAX. And this is still possible using ASP.NET Core 2.0 as I will demonstrate later in this article. There is also a TagHelper that allows you to invoke a page handler from within your Razor page. I will demonstrate both of these within this article.
The purpose of this article is not to give a detailed, step-by-step account of both of these methods. Instead, I will give an overview of them so you can see how they both work, then investigate them further if you want to implement them in your own application.
By way of introduction, a page handler is a page method that responds to a particular HTTP method, e.g. GET, POST, DELETE, PUT etc. Page handler names are prefixed with On, so the default page handlers include OnGet, OnPost etc. The name of the handler is appended to this prefix, e.g. OnGetCustomer would be a page handler that is invoked to retrieve a specific customer, and OnPostOrder would be used to post an order.
Page handlers are definitely worth looking at in more depth. I will assume the reader is familiar with them; if this is not the case, please read up on them first. Okay, with that out of the way, let's dive into the detail of the article and describe how Razor page handlers can be used within an application.
I will start by looking at the ASP.NET Core TagHelper method, which is the most straight-forward. It allows you to bind a client-side event, such as a button click, to a server-side page handler. Here is a very simple form that you would define in the .cshtml page.
<form asp-page-handler="Customers" method="GET">
<button class="btn btn-default">List Customers</button>
</form>

In the example above I have created a simple Razor form that invokes a page handler to fetch a list of customers. The name of the page handler implied by the TagHelper syntax would therefore be OnGetCustomers. Here is the definition of the page handler in the .cshtml.cs file.
public void OnGetCustomers()
{
}

ASP.NET Core 2.0 allows you to add multiple page handlers to the same page. You could therefore add page handlers for adding, editing, deleting and viewing customers all on the same page.
You can also pass parameters to page handlers. So if you wanted to fetch a specific customer you could achieve this using the following code example.
<form asp-page-handler="Customer" method="GET">
<button class="btn btn-default">Get Customer</button>
<input id="handler_parameter" type="hidden" name="selectedCustomerID" value="0"/>
</form>
public void OnGetCustomer(int selectedCustomerID)
{
}

Obviously you would need to set the value of the input control to some meaningful value. In my particular case, I set the value when the click event of a Kendo UI TreeView is raised: in the click event for the Kendo control I use jQuery to set the value to the ID of the currently selected item in the TreeView. Then, when the user wants to perform an action on the item (edit, delete, view etc), the ID is passed to the page handler.
Here is the Kendo UI TreeView click event that fires when an item is selected.
function onDocumentViewNode(data) {
$("#handler_parameter")[0].value = data.id;
}

It is worth noting that the name of the input control must be the same as the name of the parameter on the page handler. In the above example they are set to "selectedCustomerID". If they do not match, then nothing is passed to the page handler.
Another way to use Razor page handlers is by using AJAX. With AJAX you are able to invoke requests using GET, POST etc. These can be RESTful requests for example. With ASP.NET Core 2.0 they can also be Razor page handlers.
Here is a simple example of invoking a Razor page handler using AJAX.
$.ajax({
type: "GET",
url: "/Customer?handler=customer&selectedCustomerID="+ data.id,
contentType: "application/json",
dataType: "json",
success: function (response) {
},
error: function (response) {
console.log(JSON.stringify(response));
}
});

Razor page handlers open up massive opportunities for creating highly flexible applications. The interaction between client code and server code is baked into the very fabric of ASP.NET Core. To achieve such seamless interaction previously would have involved writing a lot of custom code, much of it probably spaghetti-like or very clunky. With ASP.NET Core, interaction between the client and server is now totally seamless and extremely easy to achieve. Razor page handlers are highly flexible (you can respond to any HTTP verb) and very performant. They also lead to cleaner code (there is far less code to write), and can be unit-tested (by separating the code into further layers). Using them is really a no-brainer.
I use both of these methods in my web applications, and they allow me to write very flexible code. I have recently used both of them to interact with Kendo UI controls which give the application a much higher degree of responsiveness and flexibility.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Within our web application, we are using the HttpContext.Session object to store certain items of information. Although we make minimal use of this object (it is, after all, global data), there are times when it just makes sense to store certain kinds of information in the session. For example, when the user logs into the application, we grab their email address and store it in the session. This obviously won't change, and so is a prime candidate for storing within the session.
All of our services require that the user's email address is passed as a parameter so we can determine the user making the request. Our functions therefore retrieve the email address from session storage. This works great, but it led to a problem when I came to unit test the functions, as I was unable to access session storage from my unit tests. After a bit of trial and error I came up with the following solution. Googling the problem revealed that there are several ways in which this can be achieved. I didn't want to use a mocking framework, as I wanted to keep the unit tests as small and simple as possible; although a mocking framework would have given me a lot more functionality to play with, I was only interested in mocking the HttpContext.Session object. The solution I have used is a vanilla approach that doesn't use any external frameworks, thus making it possible for anyone to implement it.
First off, I created a class that implemented the ISession interface. This is the same interface that the HttpContext.Session object implements.
public class MockHttpSession : ISession
{
readonly Dictionary<string, object> _sessionStorage = new Dictionary<string, object>();
string ISession.Id => throw new NotImplementedException();
bool ISession.IsAvailable => throw new NotImplementedException();
IEnumerable<string> ISession.Keys => _sessionStorage.Keys;
void ISession.Clear()
{
_sessionStorage.Clear();
}
Task ISession.CommitAsync(CancellationToken cancellationToken)
{
throw new NotImplementedException();
}
Task ISession.LoadAsync(CancellationToken cancellationToken)
{
throw new NotImplementedException();
}
void ISession.Remove(string key)
{
_sessionStorage.Remove(key);
}
void ISession.Set(string key, byte[] value)
{
_sessionStorage[key] = Encoding.UTF8.GetString(value);
}
bool ISession.TryGetValue(string key, out byte[] value)
{
// Use the dictionary's TryGetValue so a missing key doesn't throw,
// and decode with UTF8 so values round-trip correctly with Set
if (_sessionStorage.TryGetValue(key, out var stored) && stored != null)
{
value = Encoding.UTF8.GetBytes(stored.ToString());
return true;
}
value = null;
return false;
}
}

My functions have an optional parameter which takes an instance of an ISession object. If one is not passed as an argument, the function simply uses HttpContext.Session instead.
public void MyFunction(ISession context = null)
{
if (context == null)
{
context = HttpContext.Session;
}
string email = context.Get("UserEmail");
}

Then in my unit tests I create an instance of the mock session class from above and pass it as an argument to the functions I wish to unit test. In the example below I am unit testing a ViewComponent that retrieves the user's email from the HttpContext.Session object.
[TestMethod]
public async Task InvokeAsyncTest()
{
MainMenuViewComponent component = new MainMenuViewComponent();
var mockContext = MockHttpContext();
var result = await component.InvokeAsync(mockContext);
Assert.IsNotNull(result);
}
private static ISession MockHttpContext()
{
    MockHttpSession httpContext = new MockHttpSession();
    // SetString is also an extension method from Microsoft.AspNetCore.Http.
    httpContext.SetString("UserEmail", "unittest@mycompany.com");
    return httpContext;
}
I have now implemented this same pattern for all my ViewComponents, and it works perfectly. I can easily unit test any code that makes use of the HttpContext.Session object without a problem.
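For completeness, here's a sketch of what a ViewComponent using this pattern might look like. This is illustrative only; the real component contains our menu-building logic, and the GetMenuForUser call below is a hypothetical placeholder. The key point is the optional ISession parameter that falls back to HttpContext.Session, exactly as in MyFunction above.
public class MainMenuViewComponent : ViewComponent
{
    public async Task<IViewComponentResult> InvokeAsync(ISession context = null)
    {
        // Fall back to the real session when no mock has been supplied by a test.
        context = context ?? HttpContext.Session;
        string email = context.GetString("UserEmail");

        // Hypothetical call standing in for the real menu-building logic.
        var model = await new MenuServices().GetMenuForUser(email);
        return View(model);
    }
}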
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
While developing our new web application, we wanted to take a component-based approach and build up the user interface from small, discrete UI components. So instead of having monolithic Razor Pages containing many different controls, we thought it would be a far better design approach to develop the UI from smaller, discrete components that we could then re-use in other parts of the application.
I initially looked into the concept of partials in ASP.NET Core, and while these are great for re-using static markup, they're not so great for building dynamic, data-driven content such as menus (which happened to be the first component I needed to develop).
Where your requirement is to re-use dynamic and/or data-driven content, the correct design approach is to use a ViewComponent. From the Microsoft documentation[^]:
Quote: New to ASP.NET Core MVC, view components are similar to partial views, but they're much more powerful. View components don't use model binding, and only depend on the data provided when calling into it. A view component:
- Renders a chunk rather than a whole response.
- Includes the same separation-of-concerns and testability benefits found between a controller and view.
- Can have parameters and business logic.
- Is typically invoked from a layout page.
View components are intended anywhere you have reusable rendering logic that's too complex for a partial view, such as:
- Dynamic navigation menus (bingo! This is exactly what we're looking for!)
I won't copy the entire list here; I've posted the link to the documentation so you can read it for yourself.
So our menu tree structure is handled by a ViewComponent. All the business logic for building a user-specific menu is contained within the ViewComponent, and the ViewComponent returns the menu tree structure. This is then displayed by the Razor view that is associated with the ViewComponent.
So building our application's menu is encapsulated in a re-usable, discrete and unit-testable ViewComponent. Going forwards, we will use ViewComponents for all of our UI components, and build up our Razor Pages from multiple ViewComponents.
This gives us huge benefits:
- Encapsulate the underlying business logic for a Razor Page in a separate component
- Allow for the business logic to be unit-tested
- Allow for the UI component to be re-used across different forms
- Leads to cleaner code with separation of concerns
Here's a (very) simplified example of how we've used a ViewComponent to build our menu tree structure. Note that all exception handling, logging etc. has been removed for brevity.
public class MenuItemsViewComponent : ViewComponent
{
    public async Task<IViewComponentResult> InvokeAsync(int parentId, string email)
    {
        // Calls one of our ASP.NET Web API services to fetch the menu items
        // for the given menu level and user.
        var response = await new MenuServices().GetModulesItemsForUser(parentId, email);
        return View(response);
    }
}
The ViewComponent calls one of our ASP.NET Web API services to retrieve the menu tree for the specified menu level and user. It then returns this wrapped inside an instance of IViewComponentResult, which is one of the supported result types returned from a ViewComponent.
Here is the (very) simplified Razor Page that displays the output from the ViewComponent. Note that all styling has been removed for brevity.
@model Common.Models.MainMenuModels
@if (Model != null && Model.MenuItems != null && Model.MenuItems.Any())
{
foreach (var menuitem in Model.MenuItems)
{
<a asp-page="@menuitem.Routing">@menuitem.DisplayText</a>
}
}
And finally, here's how we invoke the ViewComponent from our layout page.
@await Component.InvokeAsync("MenuItems", new { parentId = 0, email = "myemail@company.co.uk" })
Note that the arguments are passed as a single anonymous object whose property names match the parameters of the component's InvokeAsync method.
I am very impressed with the ViewComponent concept. From a design point of view, it is the correct approach if you are building pages that contain any sort of dynamic content. By allowing for clean separation of concerns and supporting unit testing, you can ensure your applications are far more robust and less likely to fail in production. These are just a couple of reasons why you should consider using ViewComponents in your own ASP.NET Core 2.0 applications. Why not give them a try?
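One more tip before closing: if you prefer tag helper syntax, a view component can also be invoked as a tag helper once its assembly is registered with @addTagHelper. A minimal sketch, assuming the component above lives in an assembly called MyWebApp (substitute your own assembly name):
@addTagHelper *, MyWebApp
<vc:menu-items parent-id="0" email="myemail@company.co.uk"></vc:menu-items>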
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
I previously wrote about Azure Application Insights[^] in an article where I talked about how we would be using it within our ASP.NET Web API services. We use it in that context to monitor our services for availability and performance, and to record metrics such as the number of requests our services are processing. Azure Application Insights gives me all of this, and a whole lot more besides.
This time round we were exploring logging engines for our latest ASP.NET Core 2.0 Razor Pages web application. We wanted something that would record the various events that our application would generate, as well as enable us to debug and diagnose errors and / or exceptions as they occurred.
We looked at various logging engines such as ELMAH and log4net. Each would have satisfied our requirements, but when we looked into the feature set of Application Insights, there was absolutely no comparison: Application Insights won the contest hands down. Straight out of the box it measures practically everything you need without you writing a single line of code. On top of that, you can write your own custom events, traces, exception handlers etc.
I've added several custom logging methods for monitoring and measuring our application whilst it's running, as well as for exception logging. All of this helps us to debug and diagnose issues, lets us add traces throughout the application during development, allows us to monitor and measure the health of the application, and generates exception reports when the application encounters errors.
Within the Azure portal, you can open your Application Insights blade and dice and slice this data any way you want. You can drill down from management-summary data (showing broad trends and metrics) right into the nitty-gritty detail of an individual request or exception. The data can be filtered in an almost infinite number of ways and presented in multiple formats (or downloaded for offline use). And despite the vast amount of data that is collected, filtering and querying it is surprisingly fast, even for complex queries.
To use Application Insights you firstly need to install the Application Insights package from NuGet. Once installed, you can start using it in your application.
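For an ASP.NET Core 2.0 application, the wiring-up is minimal. Here's a sketch of the kind of setup involved in Program.cs, assuming the Microsoft.ApplicationInsights.AspNetCore package is installed and the instrumentation key is held in configuration (your key location and configuration layout may differ):
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseApplicationInsights() // picks up the instrumentation key from configuration
        .UseStartup<Startup>()
        .Build();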
Here's a simple example of a custom event I have implemented. I use it to trace the requests to our ASP.NET Web API services. We pass an event name, the duration of the request (so we can monitor performance) and a dictionary of custom properties (which I use to pass the request arguments).
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public class LoggingService
{
    private readonly TelemetryClient _telemetryClient = new TelemetryClient();

    public void TrackEvent(string eventName, TimeSpan timespan, IDictionary<string, string> properties = null)
    {
        var telemetry = new EventTelemetry(eventName);

        // Record the duration of the request as a metric so we can monitor performance.
        telemetry.Metrics.Add("Elapsed", timespan.TotalMilliseconds);

        if (properties != null)
        {
            foreach (var property in properties)
            {
                telemetry.Properties.Add(property.Key, property.Value);
            }
        }

        // Send a single event carrying both the metric and the custom properties.
        _telemetryClient.TrackEvent(telemetry);
    }

    public void TrackException(Exception ex)
    {
        _telemetryClient.TrackException(ex);
    }
}
And here's the method being invoked within the application.
var service = new LoggingService();
var stopwatch = Stopwatch.StartNew();
try
{
    var response = await new MyService().GetMyData(param1, param2);
}
catch (Exception ex)
{
    // Log the exception to Application Insights, then let it propagate.
    service.TrackException(ex);
    throw;
}
finally
{
    stopwatch.Stop();
    var properties = new Dictionary<string, string>
    {
        { "param1", param1 },
        { "param2", param2 }
    };
    service.TrackEvent("GetMyData", stopwatch.Elapsed, properties);
}
You can add traces for events, exceptions, diagnostics etc. All of this data is recorded and available for you to filter, dice and slice in any way you need.
If you're looking for a logging engine in your application, then you need to check out Application Insights. It does everything we need and a whole bunch more, and is highly configurable and fast. The question is not why you should use it, but why you shouldn't use it!
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
In a previous article I demonstrated how to write flexible code[^] for n-tier designed applications. In this article, I want to describe how I approached designing my code for our ASP.NET Core 2.0 Razor Pages application. My key goal was to separate out the various concerns, and in particular keep the UI code separate from the business logic code.
We are using Razor Pages in our current app, and all the business logic is encapsulated within our ASP.NET Web API services which are invoked by the Razor Pages. A Razor Page is backed by a PageModel class which supplies much of the "plumbing" logic behind the Razor Page. For example, the PageModel class contains such things as the Response, the Request, ViewData, PageContext, HttpContext. But no business logic. So this article will describe how I have approached surfacing business logic within my Razor Pages.
It is worth noting that I have deliberately used a very simple example for clarity and to keep the article nice and simple.
The first thing I did was to create a base PageModel class for the Razor Pages. As stated earlier, all Razor Pages are backed by a PageModel class as in the following code.
public class IndexModel : PageModel
{
}
So I created a base PageModel class that I would use instead. My base class inherits from the default Microsoft.AspNetCore.Mvc.RazorPages.PageModel, but adds the ability to specify a separate class which will contain all the business logic.
public class PageModelBase<T> : PageModel where T : PageModelService, new()
{
    // Each Razor Page gets an instance of its backing service via this field.
    public readonly T Service = new T();
}
I wanted consistency in my design, so I created a base service class that the PageModel classes instantiate. I have called this class PageModelService. In the code above, the PageModelBase class creates an instance of this backing service. The PageModelService class is where I place all my business logic code (which in my case calls our ASP.NET Web API services). This separation ensures that the business logic is kept out of the UI code, and is therefore also unit-testable.
Here's my PageModelService class definition.
public abstract class PageModelService
{
    protected abstract string ModuleName { get; }
}
I only have one property defined (the name of the module to which the Razor Page belongs), but you can define as many properties and methods as your application needs. Remember, this is the base PageModelService class, so only place code here that is applicable to all your Razor Pages.
Here's an example class definition for a PageModelService that sets the ModuleName property in the constructor.
public class ExamplePageModelService : PageModelService
{
    public ExamplePageModelService()
    {
        ModuleName = "Example Module";
    }

    protected override string ModuleName { get; }
}
Finally, here's a Razor Page using the new PageModelBase class, specifying the backing service (which the base class will instantiate) as the generic type parameter.
public class ExamplePageModel : PageModelBase<ExamplePageModelService>
{
    public void OnPost()
    {
    }

    public void OnGet()
    {
    }
}
This very simple design pattern allows us to separate the UI code from the business logic within our Razor Pages, and allows the business logic to be unit tested. So if you're building Razor Pages and want to keep your UI code separate from your business logic, give this design pattern a try. Feel free to use the code as a starting point and amend it to suit your own specific requirements.
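To show where the unit-testing benefit comes in, here's a sketch that extends the example service with a hypothetical GetPageTitle method (added purely for illustration; it is not part of the pattern itself). Because the logic lives in the service rather than the PageModel, it can be tested without any Razor plumbing:
public class ExamplePageModelService : PageModelService
{
    protected override string ModuleName { get; } = "Example Module";

    // Hypothetical business logic method, added for illustration only.
    public string GetPageTitle() => ModuleName + " - Home";
}

[TestMethod]
public void GetPageTitleTest()
{
    // No PageModel, HttpContext or Razor infrastructure is required.
    var service = new ExamplePageModelService();
    Assert.AreEqual("Example Module - Home", service.GetPageTitle());
}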
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
As part of all my builds I provide code coverage. After executing the unit tests, I then perform code coverage over the source code. I won't go into the reasons why this is necessary (I've written about code coverage in previous articles and almost certainly will again), but for me it's a vital part of every build. Without code coverage you have no idea how effective your unit testing strategy is.
I use a tool called dotCover to provide code coverage. This is a utility from JetBrains (the same people who create ReSharper) and is a genuinely brilliant tool. It integrates into Visual Studio and can be run from the command-line as part of a script (which is exactly how I use it in our builds).
Unfortunately I couldn't get it working with our ASP.NET Core 2.0 application. The documentation states that it should work with .NET Core, but after many attempts I just couldn't get it working. There is a command-line utility called dotnet that ships with .NET Core that can be used to perform all manner of functions, such as creating projects, testing projects, adding references to projects etc. I was already using dotnet to execute my unit tests, and thought I'd try using it for my code coverage as well.
As it happens, there is an open-source utility that integrates directly with dotnet and performs code coverage. It's called coverlet and is available on GitHub here[^]. There are examples of using the utility here[^].
One issue that I came across was executing this from Team Foundation Server (TFS). Despite working without error from the command-line on the build server, I was getting an error when executing it as part of the build from TFS. I eventually resolved this by explicitly specifying the name of the project and suppressing the build and restore that dotnet test performs by default.
dotnet test "MyProject.Tests.csproj" --no-build --no-restore /p:CollectCoverage=true
This generates a JSON file containing the code coverage results. Whilst this is not as pretty as the output from dotCover (which is simply exceptional), it at least works and gives me code coverage for our .NET Core 2.0 project.
Another problem encountered, another problem solved.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
Code coverage is great and all, but we don't get too hung up on it in a fast-paced, ever-changing environment with frequent releases.
I guess what I am saying is that really good unit tests and code coverage are desirable but, in my experience, rarely obtained.
In my projects, unit testing and code coverage are always obtained.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
As I have mentioned in previous articles, I am using Team Foundation Server 2015 (TFS2015) to build our apps, and our latest ASP.NET Core 2.0 web app is no different. I've already run into the issue of versioning the app from the build process (which I have covered here[^]).
The next problem I ran into was getting the projects to build. By default, TFS2015 will use the latest version of MSBuild it knows about unless you specify a different version, where "different" means an earlier version (VS2013 or VS2012), not a later one. To enable TFS2015 to build the project, you need to specify the exact location of the version of MSBuild you need to use. Thankfully, TFS2015 gives you this option (under the Advanced tab on the MSBuild task).
Before you can do this though, you need to install the Visual Studio 2017 SDK tools and APIs, which will then install the required version of MSBuild (which at the time of writing is version 15).
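For reference, the path you end up entering looks something like the following (the exact path depends on which Visual Studio 2017 edition or Build Tools SKU you have installed):
C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\MSBuild.exe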
The next problem I ran into was executing the unit tests within our TFS2015 pipeline. The TFS2015 Visual Studio Test task wasn't producing the required test output. I tried several tweaks and variations, but none of them worked. After some reading around and looking at posts on Stack Overflow, it was suggested that using the dotnet command-line tool would allow me to execute our unit tests. After some playing around with the various settings I eventually got this working and was finally able to publish our test results to the TFS2015 dashboard.
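I won't reproduce our exact build step here, but the key was getting dotnet test to emit its results in a format that the TFS2015 Publish Test Results task understands. A sketch of the kind of command involved (the trx logger ships with dotnet test; the project and log file names below are placeholders):
dotnet test "MyProject.Tests.csproj" --logger "trx;LogFileName=testresults.trx"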
The biggest problem I have had so far is that I have really struggled to find solutions to the problems I have faced. This is largely because .NET Core 2.0 is still (relatively) new. As uptake increases, I am sure many of these issues will become better known and be addressed, or have workarounds provided. That said, I've learned a great deal about how .NET Core 2.0 works under the covers, because I've had to dig deep to find these solutions.
.NET Core 2.0 is a different beast to standard .NET in many ways, and each day I find one (or more) of these differences. It's a pleasure to work with though, and is a genuinely fantastic environment in which to develop applications. A happy developer is a productive one.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
This is definitely something that caught me out. We are using Azure Active Directory Business to Consumer (Azure AD B2C) in our latest web app for all user identity, including signup, signin and password reset. After configuring and setting up the required policies (specifying what information we wanted returned in the token upon success), I then set about trying to retrieve the JWT token that is returned from Azure AD B2C so that I would know the identity of the logged-in user.
Retrieving this token proved a bit more difficult than I originally thought. I checked the response headers and couldn't find the token. I checked through the documentation and couldn't find any examples or explanation of how to retrieve the token.
Using the browser's built-in debugging tools and Telerik Fiddler, I could see that the token was being posted to the /signin-oidc endpoint (which is the default endpoint for OpenId Connect applications).
I did eventually come across this article[^] which seemed a likely candidate. Unfortunately, when attempting to follow the instructions I got an error when running the application. Our configuration didn't seem to work with the example code given in the article.
Eventually, I came across this article[^]. The important part of the article is the code snippet below.
@{
ViewData["Title"] = "Security";
}
<h2>Secure</h2>
<dl>
@foreach (var claim in User.Claims)
{
<dt>@claim.Type</dt>
<dd>@claim.Value</dd>
}
</dl>
Basically, the claims returned from Azure AD B2C are contained within the User object's Claims property.
User.Claims
By iterating through this collection I was able to retrieve all the claims that I had configured in our Azure AD B2C policies.
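And if you need a single claim rather than the whole list, you can pull it out directly from the same collection. A sketch, assuming your policy returns the user's email in a claim of type "emails" (the claim type depends entirely on what your policy is configured to return, so inspect the full list first):
// User is the ClaimsPrincipal exposed by the PageModel / Controller.
// Requires System.Linq.
var email = User.Claims.FirstOrDefault(c => c.Type == "emails")?.Value;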
I don't know why this critical piece of the jigsaw is so sparsely documented. Without knowing which user has logged into our web app, we are pretty much at a loss to provide any functionality. Determining the identity of the user is the critical piece of functionality provided by any identity provider.
I hope this article helps out at least a few other developers.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
When creating our ASP.NET Core 2.0 application, one of the first tasks I had was to create a build for the application. The build will mature and grow over time and acquire additional tasks (such as unit testing steps, deployments to our Azure web hosting etc). But for the time being, I was only creating the most basic of builds for the application to perform Continuous Integration and to deploy to a testing endpoint.
The first problem I encountered was how to version the application using our Team Foundation Server (TFS) build process. Versioning in .NET Core 2.0 does not work the same way as it does in earlier versions of .NET, so I couldn't just reuse my previous PowerShell script (which I use for versioning my other .NET applications).
After reading through lots of documentation and Stack Overflow posts, I came across a solution that works and which I have now implemented within the build.
Here's a link to a utility called dotnet-setversion[^] that will version your .NET Core 2.0 application. After adding the reference to your project, you simply invoke the utility and pass it the version number as a parameter. I achieved this within our build process by adding a new step which invokes a Windows batch file. This batch file invokes the utility which then versions our application.
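For context, dotnet-setversion of that era was wired up as a CLI tool reference in the project file, something along these lines (the version number below is an assumption; use whichever version you have installed):
<ItemGroup>
  <DotNetCliToolReference Include="dotnet-setversion" Version="1.0.3" />
</ItemGroup>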
Within TFS you have the ability to pass arguments to your Windows batch files. I am passing the build version number $(Build.BuildNumber) as the argument.
I then invoke my Windows batch file (called setversion.bat)
@echo off
cls
ECHO Setting version number to %1
REM Change to the project folder before running the dotnet commands.
cd <projectFolder>
dotnet restore
dotnet setversion %1
This all works perfectly, and the deployed application assemblies are stamped with the correct version number.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|