There's an approach that I have been using for several years now that has helped me improve and simplify my stored procedures. This is for stored procedures that return data, i.e. SELECT stored procedures as opposed to INSERT or UPDATE stored procedures. The approach is particularly useful where a stored procedure needs to reference more than one table, i.e. where there is a JOIN between two or more tables.
Firstly I create a VIEW of the data that I want to query. The VIEW contains all the tables, columns, JOINs etc. as necessary, and it is from this VIEW that the stored procedure will SELECT its data. All the stored procedure then needs to do is filter the data from the VIEW with a WHERE clause.
The advantage of this approach is that the VIEW hides the underlying details of all the JOINs. The stored procedures become simple affairs, as they just SELECT from the VIEW, and the same VIEW can be reused across multiple stored procedures. You therefore don't need to repeat the same complicated JOINs in each of your stored procedures.
Example VIEW
CREATE VIEW [dbo].[v_CardDefinitions] AS
SELECT
CardDefinitions.*,
Cards.ID AS CardID,
Cards.ParentID,
Cards.[Index],
Cards.UserID,
Cards.CardDefinitionID,
Users.Email AS UserEmail,
Modules.Name AS ModuleName
FROM
CardDefinitions
LEFT JOIN
Cards ON CardDefinitions.ID = Cards.CardDefinitionID
JOIN
Modules ON CardDefinitions.ModuleID = Modules.ID
LEFT JOIN
Users ON Cards.UserID = Users.ID
WHERE
CardDefinitions.Active = 1
Example stored procedure
CREATE PROCEDURE [dbo].[Cards_GetById]
@cardId INT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
SELECT
DISTINCT ID, Name, [Permissions]
FROM
v_CardDefinitions
WHERE
ID = @cardId
END
So, to summarise the approach:
- Create a VIEW of the data that JOINs all the necessary tables
- Create a stored procedure that SELECTs data from the VIEW by filtering the VIEW using WHERE clauses
This is an approach that I use regularly as it simplifies the stored procedures I need to create.
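As an illustration of the reuse this enables, here is a second, hypothetical stored procedure (the Cards_GetByUser name and the UserID filter are illustrative, not taken from the original code). It filters the same VIEW by a different column without repeating any of the JOINs.
CREATE PROCEDURE [dbo].[Cards_GetByUser]
@userId INT
AS
BEGIN
SET NOCOUNT ON;
SELECT
DISTINCT ID, Name, [Permissions]
FROM
v_CardDefinitions
WHERE
UserID = @userId
END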
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I recently had a requirement to update multiple tables with the same value. We have a table that stores information about documents (Excel documents, Word documents, text documents, images, reports etc). Every document has an owner associated with it. This person has admin privileges over the document. After a discussion with one of our users, they wanted the ability to change the owner of a document. Doing this at the level of a single document is straightforward. However, the user wanted this for multiple documents. For example, if a user is due to leave the business, they wanted the ability to change the owner of all their documents to a new owner.
I therefore needed the ability to pass a list of document IDs into a stored procedure. The stored procedure would then change the owner for all the documents in the list to the specified owner. Passing in the comma-delimited list of document IDs wouldn't be difficult, as this is essentially a long string. The tricky part would be to iterate through the items in the list i.e. to fetch each document ID from the comma-delimited list so that the owner can be updated.
The first thing I needed to do was to create a function that could iterate through the list. I created a Table-Valued-Function (TVF) called Split to achieve this. If you don't already know, a TVF is a function that returns a table (as the name suggests). In our case, we will return a two-column table containing a unique ID and an item from the list. So if there are 10 items in the list, then there will be 10 rows in the table returned by our TVF.
CREATE FUNCTION [dbo].[Split]
(
@List nvarchar(2000),
@SplitOn nvarchar(5)
)
RETURNS @RtnValue table
(
Id int identity(1,1),
Value nvarchar(100)
)
AS
BEGIN
While (Charindex(@SplitOn,@List)>0)
Begin
Insert Into @RtnValue (value)
Select
Value = ltrim(rtrim(Substring(@List,1,Charindex(@SplitOn,@List)-1)))
Set @List = Substring(@List,Charindex(@SplitOn,@List)+len(@SplitOn),len(@List))
End
Insert Into @RtnValue (Value)
Select Value = ltrim(rtrim(@List))
Return
END
The function has two parameters. The first is the comma-delimited list of document IDs.
@List = '1, 2, 3, 4, 5'
The second parameter is the delimiter. In this case we are passing a comma-delimited list, hence the delimiter is a comma.
@SplitOn = ','
The function loops through the list, locating the next item by searching for the next occurrence of the delimiter. It keeps doing this until it cannot find any more occurrences of the delimiter. Each item it finds between the current and next delimiter is inserted into the table that will be returned by the TVF.
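As a quick sanity check, the function can be queried directly; with the example values above it returns five rows containing the values 1 to 5.
SELECT Id, Value
FROM dbo.Split('1, 2, 3, 4, 5', ',')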
We next need to write a stored procedure that invokes our Split Table-Valued-Function.
CREATE PROCEDURE [dbo].[Documents_UpdateOwner]
@owner INT,
@documentids NVARCHAR(1000)
AS
BEGIN
UPDATE
Documents
SET
UploadedBy = @owner
WHERE
ID IN (SELECT CONVERT(INT, Value) FROM Split(@documentids, ','))
END
There are two parameters to the stored procedure. The first one is the ID of the new owner for the documents. The second parameter is a comma-delimited list of document IDs for which we wish to change the owner. The items returned from the Split TVF are stored in string format. Therefore if we need to update data in another format we need to do a conversion. In our case, we are updating an INT and therefore need to convert the item from an NVARCHAR to an INT. Obviously we wouldn't need to do any conversion if we were comparing against string data.
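Calling the stored procedure then looks something like this (the owner ID and document IDs here are purely illustrative):
EXEC [dbo].[Documents_UpdateOwner] @owner = 42, @documentids = '101, 102, 103'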
I have since used this Table-Valued-Function in other stored procedures where I need to iterate through a list of items. It's a very efficient way of updating multiple rows in one go. Instead of having to make multiple calls to a stored procedure to update each document owner, I can instead make one call to a stored procedure and update all of them at once. This is a neat way to allow for those scenarios where you need to update data from a list of items.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I recently had some plumbing work done in my house that made me think of a similarity between software development and plumbing. I realise they are fundamentally different beasts, but bear with me. Whilst talking to my plumber, he was showing me the differences between the work he had done, and the work done on one of the other houses in the street where I live. Even as a complete novice I could see the differences he was describing. He wasn't trying to be disrespectful or mean to the other plumber (he didn't know him as he had never met him), but merely demonstrating how high his quality of work was using a direct example.
- The holes made in the brickwork in my house were neat and the pipes fitted tightly through with no gaps. In the other house they were rough and there were gaps where the pipes came through.
- Where brickwork needed replacing outside my house, it had been replaced with identically coloured bricks, and you couldn't see any difference when looking at the wall. On the other house, the bricks had been replaced with differently coloured ones, and in a way that broke the interlacing (bricks are laid in an overlapping manner vertically for strength).
- There were no pipes running outside my house. The pipes running outside the other house were left totally exposed to the elements as they were not protected with lagging.
I'm sure there were similar differences inside the houses too.
The point I am making is that my plumber showed care. His work was of a very high standard and demonstrated diligence and work ethic. The other plumber was satisfied with far lower standards. For him, close was good enough.
This same comparison can also be made with software development. When I write code, I take care to ensure that my code is well organised, structured and readable. I ensure that there are unit tests that exercise an adequate level of code coverage. I implement best practices and aim to be consistent.
When I look at a piece of code, I can very quickly determine if there was care put into it. Sloppy, ill-thought-out code that is inconsistent and unstructured is amongst the signals that reveal such a lack of care. Even as a novice, you can still demonstrate a level of care within your work. This is not about how knowledgeable or experienced you are, but how diligent you are. It is still entirely possible to write code with care and attention to detail despite being inexperienced.
As a professional software engineer, I want others to look at my code and think "Hey this guy has put a lot of effort and care into writing this". It will have my name against it. I have high standards, and I expect the same from every other developer on the team. I have taken it upon myself to write the coding standards document that we all follow as a team. Not by dictatorship, but by democracy.
When you have checked in your code, take a moment to reflect on what another developer would think of it. What would they think when looking at your code? What does your code say about you and your work ethic? Our bread and butter is our code. The care, love and diligence that we use to craft it speaks volumes about us as professional software developers. Make sure that when another developer looks at your code, they will at the very least say that you cared about what you were doing.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Following on from an earlier article[^] I wrote about versioning a .NET Core 2.0 application, I have now had to revise my approach, since the method I used for that version of the application is not supported in .NET Core 2.2. In that article, I demonstrated how to use a tool called setversion[^] for versioning a .NET Core 2.0 application. After upgrading our application to .NET Core 2.2, I found that this is no longer supported.
Instead of using the setversion tool, I am using the dotnet publish command-line utility. When using this command-line utility, you are able to specify a version number.
I am still using the same build script as described in my previous article, and this is invoked from our TFS build server in the same manner. Just to reiterate, within TFS you have the ability to pass arguments to your Windows batch files. I am passing the build version number $(Build.BuildNumber) as the argument.
I then invoke my Windows batch file (called setversion.bat)
@echo off
cls
ECHO Setting version number to %1
cd <projectFolder>
dotnet restore
dotnet publish <project>.csproj --configuration Release /p:Version=%1
This all works perfectly, and the deployed application assemblies are stamped with the correct version number.
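For example, if TFS passes a build number of 1.4.0.123 (an illustrative value), the batch file is invoked as
setversion.bat 1.4.0.123
which expands the final line of the script to dotnet publish <project>.csproj --configuration Release /p:Version=1.4.0.123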
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
In my previous article Sending Push Notifications with Azure Notification Hub[^] I briefly described our rationale for selecting Azure Notification Hub over alternatives. I have now fully implemented an ASP.NET Web API service for sending push notifications as well as managing their associated tags.
The service provides the following functionality.
- Send push notifications to either Android or iOS devices (with or without tags)
- Add tags
- Remove tags
If you aren't familiar with the concept of tags where push notifications are concerned, you aren't alone. I hadn't heard of them either until I started working with push notifications. The concept is surprisingly simple, yet provides great flexibility in how you target where your push notifications are sent.
When a device is registered for push notifications (via code running on the device), you can optionally assign tags with the device registration. These are a list of characteristics (or interests) that the device wishes to receive push notifications about. Tags can either be set by the user (perhaps via a preferences page where they can tick boxes to select the items they wish to receive push notifications about) or by the backend (where we can set characteristics that allow us to target specific device(s) when sending push notifications).
In our case, we have implemented the latter, i.e. we are adding tags that relate to the user's device to allow us to send targeted push notifications. For example, we have added tags that specify the user's ID, their company ID etc. This allows us to send a push notification to a specific user's device (by specifying the user's ID) or to all the users for a specific company (by specifying the company ID).
When a push notification is sent, you can specify a tag alongside your push notification message. The push notification is then only sent to any registered devices that have expressed an interest in that particular tag. So in our case, we can send a message to a specific user by supplying their ID as the tag. Or we can send a push notification and supply the company ID, thus ensuring that the push notification is only sent to users of that specific company. We can slice and dice the demographics of our user base in any way that we find meaningful by simply registering the device with the desired tag(s).
This is a powerful way of decomposing the demographics of your user base. You can now explicitly categorise your user base by the tags they have registered with. This then allows us to send targeted push notifications, right the way down to a specific user's device.
The service that I have implemented manages these tags, as well as providing the ability to send the push notifications themselves. The service therefore allows the backend to add and / or remove tags from a user's device. For example, when a user logs in on a device, the service is invoked to register them with various tags according to the information we hold on them. Likewise, we will remove those tags when they sign out.
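The Web API code itself isn't shown in this post, but as a rough sketch of the sending side, the Microsoft.Azure.NotificationHubs package lets you create a hub client and send a platform-specific payload together with a tag expression. The connection string, hub name, payload shapes and tag values below are illustrative assumptions rather than the actual service code.
using System.Threading.Tasks;
using Microsoft.Azure.NotificationHubs;

public class PushSender
{
    private readonly NotificationHubClient _hub;

    public PushSender(string connectionString, string hubName)
    {
        // The connection string and hub name would come from configuration
        _hub = NotificationHubClient.CreateClientFromConnectionString(connectionString, hubName);
    }

    // Sends to every device registered with the given tag, e.g. "user:1234" or "company:42"
    public async Task SendToTagAsync(string message, string tag)
    {
        // Android payload (GCM-style JSON)
        var androidPayload = "{\"data\":{\"message\":\"" + message + "\"}}";
        await _hub.SendGcmNativeNotificationAsync(androidPayload, tag);

        // iOS payload (APNS-style JSON)
        var applePayload = "{\"aps\":{\"alert\":\"" + message + "\"}}";
        await _hub.SendAppleNativeNotificationAsync(applePayload, tag);
    }
}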
This process is very straightforward, yet gives us an incredible level of flexibility for sending targeted push notifications to our users. If you haven't already looked into the concept of push notification tags, then I'd definitely have a look at them. They're a great idea.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
In the latest version of the Xamarin Forms app that I am working on, we wanted to send push notifications to the devices. There were a couple of approaches that we could have taken. The key ones being Twilio (which we are already using for sending SMS messages) and Azure Notification Hub. After some initial exploration, the clear choice was Azure Notification Hub. Unsurprisingly it had tight integration with Xamarin Forms and the Microsoft ecosystem, and was very straight-forward to configure and get working.
There were also very good examples of how to make the necessary code changes to the respective Android and iOS projects to ensure we got this working quickly.
The beauty of working with Azure Notification Hub is that it abstracts us away from the underlying details of the Android and iOS platforms. Once we had made the necessary configuration and setup changes to enable push notifications for each platform, we integrated the platform-specific push notification engines into Azure Notification Hub. From this point onwards, we only have to work with Azure Notification Hub. This gives us a far simpler and cleaner abstraction over our notification setup.
It is very simple to set up and send test push notifications to your registered devices using Azure Notification Hub. We have also integrated App Center event tracking for all device registrations and sending of push notifications. This gives us a helicopter view of what our code is doing under the hood, and helps us diagnose any errors should they arise.
The step-by-step tutorials I used can be found here[^].
So if you're looking to implement push notifications in your mobile app, give Azure Notification Hub a try.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
With the imminent release of our latest mobile app, I thought I'd summarise how we ensured high levels of quality and proved that the software was correct. I'm not going to write an article justifying the case for unit testing (it should go without saying that unit testing is a fundamental part of the development process - if not, you're doing it wrong), but rather explain how we implemented unit testing within the software for the app.
The architecture I favour when designing an application is to firstly reduce the surface area of the client[^]. Simply put, this entails keeping the UI code as sparse as possible, and removing any / all code that is involved with the domain. The UI should ONLY contain code that relates to the UI. While this sounds straightforward, I have lost count of the number of times I've come across code bases where the UI contains code from the domain and / or the data layer.
In relation to a Xamarin Forms mobile app, you should keep the code in the Views as sparse as possible. The UI code should only invoke your domain code, it should NEVER implement it. Your Xamarin Views should contain code for manipulating the various UI controls, populating them with data etc. As soon as there is a need for anything beyond this, then refactor the code and place this code in a completely separate layer of the app. Within the context of a Xamarin Forms app, I created separate folders for such things as the models, services, entities etc. These were completely separate to the Views.
To enforce this separation of concerns, we adopted the MVVM design pattern. I won't go into great detail here about this pattern (as there are many articles out there already). The MVVM pattern stands for
Model -> View -> View-Model
More correctly it could be named VVMM (View -> View-Model -> Model) as this is the order in which they relate to each other (in terms of dependency). The Model should have no knowledge of the View-Model. The View-Model should have no knowledge of the View. This is important when implementing an MVVM application, as it reduces the dependencies between the various parts of the application.
The View in a MVVM designed app is the UI element, or in the case of a Xamarin Forms app, they are the Views. Only UI code should be placed in the Views.
The View-Model is the place where domain logic will reside. All UI controls should be bound to properties in the View-Model. The code that provides your UI controls with data, hides/shows the UI element etc should all be implemented here. This way, you can unit test those rules and ensure that they are correct. And this is done without the need for the UI to be present. This means you don't have to keep using the simulator or physical device to test the domain rules of your app. You should be able to unit test these rules in the absence of the UI, and in complete isolation from other parts of the application. The unit tests should require minimal setup, and any dependencies should be injected into the methods to remove hard-wired dependencies. This is good old fashioned Dependency-Injection, and it is a vital design pattern when implementing unit tests. This ensures the correctness of your domain.
The Model is concerned with the data, and therefore maps your data entities into classes. The Model will contain such things as definitions for customer, order, supplier etc. The Model should not be concerned with how it is used by the View-Model or View. For example, you may have an Order class which contains an Order-date. This is stored within the Model as a Date type. The fact that this date is displayed as a string in the UI is of no concern to the Model. Any conversions needed to map Model properties into UI elements should be implemented by the View-Model (you may have a conversion needed by several elements or Views, so it makes sense to place this conversion code within a View-Model where it can be invoked from multiple places). Again, these conversions can be unit tested with complete independence from the UI by placing them in the View-Model. You can also write unit tests against the Model to ensure that the values you set against it match those that are returned. So if you set the Order-date of your Order to a specific date, you can assert that this date is returned by the unit test. This ensures the correctness of your underlying data.
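As a minimal sketch of what this looks like in practice (the Order class, OrderViewModel, date format and xUnit test below are illustrative, not the app's actual code), the conversion lives in the View-Model and can be asserted against without any UI, simulator or device.
using System;
using System.Globalization;
using Xunit;

// Model: holds the raw data
public class Order
{
    public DateTime OrderDate { get; set; }
}

// View-Model: exposes the UI-ready representation of the Model
public class OrderViewModel
{
    private readonly Order _order;

    public OrderViewModel(Order order) => _order = order;

    // The View binds to this string; the conversion rule lives here, not in the View
    public string OrderDateText =>
        _order.OrderDate.ToString("dd MMM yyyy", CultureInfo.InvariantCulture);
}

public class OrderViewModelTests
{
    [Fact]
    public void OrderDateText_FormatsTheModelDate()
    {
        var order = new Order { OrderDate = new DateTime(2019, 3, 8) };
        var viewModel = new OrderViewModel(order);

        Assert.Equal("08 Mar 2019", viewModel.OrderDateText);
    }
}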
Unit testing a mobile app need not be difficult as long as you have carefully designed and architected the various moving parts and separated the key concerns. Implementing an architecture that supports separating out the various concerns is vital (layering). It's also useful to implement a design pattern that enforces such layering (such as MVC, MVVM). You should aim to keep your UI as sparse as possible, and place all code that is not involved in the UI elsewhere within the application.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I've been developing mobile apps for the Android and iOS platforms for several years now. I have used both Telerik Platform (now retired) and Xamarin Forms. Both of these are excellent development platforms. Most recently, I have been developing apps using Xamarin Forms. Most of the code in a Xamarin Forms app is contained within a single, shared project. This code is shared between both the Android and iOS apps. When you require platform-specific behaviour, you place this code in the Android or iOS specific project as required.
During the development of the latest app, we have hit several issues, as you would expect. Some small, some not so small. Android development is pretty painless and intuitive, and conforms to well-defined best practices and standards. We have hit a few snags with Android, but these have been relatively small and easy to fix.
Apple however is a whole different can of worms. Nothing they do seems to conform to any well defined standard or best practice. They have this habit of almost deliberately ignoring the well defined and understood patterns and practices from other development platforms, and doing it "their way". It's fair to say that the "Apple way" is usually vastly more time consuming, complicated and error prone. The Apple motto seems to be the total inverse of Occam's razor.
When given two or more ways of solving a problem, always choose the worst option.
From provisioning profiles and certificates to asset catalogues (I have never encountered a worse way of storing images than this), the "Apple way" is never simple, straight-forward or intuitive.
Nearly every issue or bug we have encountered has been with the iOS version of the app (on both Telerik Platform and Xamarin Forms). The Apple platform just doesn't seem as robust as Android (which just works).
I am assuming that the majority of Apple developers don't get much exposure to other development environments, and probably build mainly Apple apps. They therefore never get to experience how things "should" be. If you only know the "Apple way" of doing things, then you have nothing else for comparison.
I have worked within development for approaching 20 years now, and in that time have used pretty much every platform, tool and technology at some point. I therefore have a broad knowledge of what is considered "best practice" by my exposure to the huge number of technologies over the years. I know what works, and how things ought to work. I can spot efficiency, good design, simplicity and elegance from afar.
This is why I am of the opinion that the Apple way just sucks. Doing something differently merely for the sake of it is not innovative. There are very good reasons why certain ideas become best practice within the development field. It's because they work. And not just work, but are well understood and accepted by those working within the industry. They have been put to the test, and been successful.
In all my years as a professional software developer, engineer and architect, I can honestly say that I have never come across a development platform as poor as that provided by Apple. If you genuinely think Apple make great development products, then I'd suggest having a look at how everyone else builds their development tools. Microsoft and Google for example build excellent development tools, and they employ industry best practices and standards in their processes and workflows.
Unfortunately, while Apple remains a player in the mobile app space, developers such as myself will just have to put up with the "Apple way" of doing things. I think Apple would do well to take a look around at the other players in their industry and take some inspiration from them. Until they do, they will continue to frustrate developers who find the "Apple way" cumbersome, time consuming and inefficient.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Whenever I hear discussions relating to the prevalent censorship and bias at the hands of the tech giants (Facebook, Twitter, Google et al), an argument I hear repeated is that they're private companies and can do whatever they want. Yes, they are private companies, but I don't think that's a sufficiently powerful or persuasive argument for letting them off the hook. If you're unaware of the bias and censorship within Silicon Valley then read my article[^] where I cover these issues.
Here's why I think anyone proposing that particular argument is wrong.
- Google is the number one search engine across the entire planet, and as such has a large share of the internet-search market. They can control (and censor / filter) their searches to disseminate their own political narrative with ease. This is unlike going to the local baker's to buy a cake: if you get refused there for some reason, you can just go to the baker next door and try again. Saying Google is a private company and can therefore have total control over what they do is a little naive. Google are very secretive about how their algorithms work and will no doubt refute any claim that their searches are biased. But you only need to compare the results from Google with those of a neutral search engine (such as DuckDuckGo) and you will see the stark contrast when comparing searches for political terms (I covered this in my previous article).
- The tech giants are more than just tech companies. They are highly influential agents that shape our cultural, political and social landscapes. They step far outside the technical arena in how they shape and influence our day-to-day lives. Many people today get their news from their social media platform of choice e.g. Facebook, Twitter or via organic search via Google. This places them in very influential positions. Rather than merely informing us about the state of current events, they can influence them to fit their own political agenda. This is no longer acting as a neutral observer, but an agent of change and influence.
- As we have recently seen with the de-platforming of Gab.com, the tech giants will collude to crush their competitors. Gab has been de-platformed by (amongst others) Microsoft, Apple, Google, PayPal and Patreon. If this happened in any other industry, there would quite rightly be a public outcry. For some reason, this behaviour seems to be accepted within the tech industry (but only if you have the "right" politics). You can't have choice in the marketplace when the tech oligarchs of Silicon Valley actively crush the competition. So the argument that "private companies can do what they want" only really applies when there is true competition and an open and fair marketplace. Silicon Valley provides neither of these.
So stating that the tech giants are private companies, for me at least, doesn't constitute a valid argument when considered against the points I've made here. They do not operate within the boundaries of a market where there is anything approaching competition. They have huge power and influence that they wield to perpetuate their political agenda. It is this same power that they use (in collusion with other tech giants) to silence and crush their competitors.
I'll keep posting my usual technical articles, but from time to time I will continue to delve into the political side of things with articles such as these. I'm genuinely interested to hear other people's opinions on these matters so feel free to share and discuss your own views on these topics.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
The latest version of the app (which will replace the current app that is in the app stores) is nearing completion. We are into user-acceptance testing with key stakeholders from around the business. The journey from beginning the app several months ago to now has involved a great deal of learning. Although we had an existing app on which to base our development efforts, that's where the similarities ended. Many of the technologies used for the new app were either brand new, or very different from when we last used them.
- Xamarin: Although I have used Xamarin previously (long before Microsoft decided to acquire it), it is vastly different now than it was then. It's fair to say that in its current Microsoft incarnation, much of the Android and iOS specifics are abstracted away from the developer, and it bears little resemblance to the version I used all those years ago. So whilst I needed to refresh my knowledge of Xamarin, as it had changed substantially since I had last used it, it was brand new to the rest of the development team.
- App Center: This is Microsoft's build / test / deploy center for mobile apps. This is an absolutely brilliant tool. We used it throughout our development lifecycle for all of our diagnostics and debugging. We added tracking for all our events, service calls and exception handling (a minimal sketch of this kind of tracking appears after this list). App Center allows you to set up and configure analytics for your crash reporting as well as for event tracking. This was very useful when we needed to diagnose exceptions and errors during the development cycle. We also configured our Azure DevOps build to deploy to App Center. So with each code check-in, upon a successful build, we would have an Android and iOS release ready for testing.
- Telerik DataForm: This is a means of simplifying the development of your data-entry forms. You define the properties of your data-entry form in your model class (and decorate your properties with the necessary validation rules and label text). This model then forms the basis of your data-entry form. Telerik DataForm takes your model and generates the necessary UI controls for it, and hence generates your data-entry form, including the validation rules and label text. Your UI is therefore built from the programmatic definition of the underlying model. This is an incredibly powerful paradigm. It frees up the developer to focus on the model's rules and validation, and delegates the building of the UI to Telerik. This paradigm is not suitable for every form, but for simple, static data-entry forms it is perfect. Telerik DataForm implements the MVVM design pattern, thus your forms consist of the following logical pieces.
- View (the XAML layout and code-behind)
- View-Model (where you define the rules for your data-entry form)
- Model (where you define the data to which your UI elements are to be bound)
- Azure AD B2C (Identity Provision): We have previously set up Azure AD B2C (Business-to-Consumer) for one of our line-of-business web apps. This allowed us to delegate the login functionality to Azure. Rather than implementing our own login functionality, we configured the web app to use Azure AD B2C instead. This gives us an incredibly secure app, as you would expect; we are leveraging the same login functionality that is used daily by millions of Office 365 users. We decided to use the same Azure AD B2C functionality in our mobile app. This gives us far higher security and scalability, and we don't have to write a single line of code. Perfect!
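As referenced in the App Center item above, here is a minimal sketch of the kind of event and error tracking described. The helper class, event names and properties are illustrative assumptions rather than the app's actual code, and it assumes AppCenter.Start() has already been called with the Analytics and Crashes modules.
using System;
using System.Collections.Generic;
using Microsoft.AppCenter.Analytics;
using Microsoft.AppCenter.Crashes;

public static class Telemetry
{
    // Track a named event with some contextual properties (names are illustrative)
    public static void TrackServiceCall(string serviceName, int statusCode)
    {
        Analytics.TrackEvent("ServiceCall", new Dictionary<string, string>
        {
            { "Service", serviceName },
            { "StatusCode", statusCode.ToString() }
        });
    }

    // Report a handled exception so it shows up in App Center diagnostics
    public static void TrackError(Exception ex, string context)
    {
        Crashes.TrackError(ex, new Dictionary<string, string>
        {
            { "Context", context }
        });
    }
}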
We also trialled Azure DevOps for this project. All our source code, build and release definitions were defined there. Although I have used Team Foundation Server previously, this was my first time using Azure DevOps, and my first time defining builds and releases for Android and iOS.
So it's fair to say that we had many (steep) learning curves on this project. Despite that though, they were the right decisions, as the new app puts us in a far stronger position both technically and strategically. From the development platform to the technology ecosystem, the new app is a far stronger proposition.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
For the record, and before I embark on this article, I would like it noted that I am a professional software engineer who works within the field of software development. I have done so for nearly two decades. I am a geek with a genuine passion for technology. I get enthused by technology, and wouldn't want to be in any other field.
With that out of the way, let's get on with the article. I don't generally write about politics, and for very good reason. Like religion, politics can be a very controversial subject. It can be polemical and can often escalate into hyperbolic arguments. I have my political views, but don't wish to use this platform to air them. I do, however, from time to time, voice them over on my Twitter and Gab feeds. Over the last decade, I have seen many small, incremental changes from many of the tech giants that have made me question whether they provide a net positive for the world. Unless you have lived under a rock for the past few decades, you cannot have failed to realise how immersive technology is in our everyday lives. We use technology for our personal lives, social lives, communications, gaming, entertainment, searching for news and information and so on.
Over the past decade, the tech giants including Google, Facebook and Twitter have come to dominate not just the technical arena, but the social, cultural and political ones as well. It is no secret that these technical corporations are liberal and left leaning in their political makeup. How can an organisation that is composed of thousands of people be said to have a single political bias? Surely with so many people working for them, you would think there would be large variation in political diversity? It would seem that this is far from the case. Despite being told that "Diversity is our strength" by those on the political left, this doesn't apply to political diversity. Yes there may be gender, religious and racial diversity, but there is very little in the way of political diversity. And herein lies the problem.
Twitter CEO Jack Dorsey has openly admitted that there is 'left leaning bias' within Twitter, but then goes on to state that this doesn't influence company policy. I think Jack is being more than a little economical with the truth if he thinks Twitter's left leaning bias doesn't affect company policy. If you're a conservative, a Trump supporter, Republican, or right-of-centre in your political compass, it is fair to say that Twitter can be a very unwelcoming place. In fact, it can often be a downright hostile place. Many right leaning Twitter users have faced bans, shadow bans or been outright kicked off the platform (Alex Jones, Milo Yiannopoulos, Gavin McInnes, James Woods (the actor - although he has since been reinstated) and Jesse Kelly) to name just a few. Even President Trump is not immune from the threat of being kicked off the platform[^].
New York Times editorial board hire Sarah Jeong made many openly anti-white, anti-male tweets[^] earlier this year but didn't receive a ban or even a suspension. Some of her tweets included:
- “#cancelwhitepeople”
- “1. White men are bulls—. 2. No one cares about women. 3. You can threaten anyone on the internet except cops.”
- “Oh man. It’s sick how much joy I get from being cruel to old white men”
- “Dumba— f—ing white people marking up the internet with their opinions like dogs pissing on fire hydrants.”
It should be noted that Sarah Jeong's account is a verified, blue check-marked account. So whilst Twitter bans people from its platform for wrong-think in many other areas (particularly identity politics), it rewards people like Sarah Jeong by verifying their accounts. As long as your racism is towards white people, and your sexism is towards men, then you're all good. In the world of Twitter, hate speech does not include white men.
Back in 2017 Google sacked one of its software engineers - James Damore - for sending out a memo that related to Google's diversity policies. Specifically, it related to the gender differences between men and women, and why women were under-represented in the field of software engineering. To anyone who has read (and understood) the science of gender differences, it won't come as any surprise that men have a greater interest in this field than women. Men (on average) have a greater interest in "things" (cars, computers etc) and tend to gravitate towards professions such as STEM (science, technology, engineering, mathematics), whereas women (on average) have a greater interest in "people" and tend to gravitate towards professions such as law, medicine and social care. There is nothing inherently wrong with any of this. If you accept that men and women are different (and there are many who don't accept this self-evident premise), then it stands to reason that their biological differences will lead to differences in their average proclivities and interests. Google, it would seem, does not accept this. It is this hive mind that has been referred to as Google's Ideological Echo Chamber[^].
Other examples of Google's bias include the fact that they recognise International Women's Day (by displaying an appropriate image on their home page), but don't recognise International Men's Day. There are more virtue signalling points to be gained from recognising the former than the latter.
Google searches are notoriously biased in the search results they return. In just one specific example, when asked to define the term "nationalism", the contrast between the results from Google (politically biased) and DuckDuckGo (politically neutral) couldn't be more stark[^]. This was just for a single term. Imagine scaling this up to the millions of searches carried out on the Google platform every day. At this point Google stops being a search engine, and instead becomes a political tool. Giving you the results it wants you to have. To me this is terrifying. Google is the most powerful internet platform on the planet (forget Twitter, Facebook, Microsoft). Google owns the internet. The fact that it is so blatantly partisan reminds me of Big Brother in 1984. I no longer use Google for my search engine. I now use DuckDuckGo.
In the US, free speech is protected under the First Amendment. This covers speech that could be defined by some as offensive. However, none of the tech giants allow free speech on their platforms. All of them have very strict policies that set out rules for what is permissible speech. These are, in fact, rules for policing speech. I am an ardent advocate of free speech. I would much rather all ideas (both good and bad) were transparent, and out in the open in the marketplace of ideas. Not all ideas or ideologies are equal, and the best way to counter the bad ideas is to subject them to public criticism and ridicule. I think the US First Amendment protecting free speech is one of the greatest inventions of our time. I would dearly love to see something similar in the UK (where I live).
The problem with defining hate speech and / or offensive speech is that hate and offence are very subjective terms. And who gets to decide what is hateful / offensive? What one person may find offensive, another person may not. To my mind at least, the best way to counter this is to let all speech be accepted (apart from speech that directly advocates violence). Then allow people to exercise their free speech to criticise and ridicule that idea or ideology. Protecting certain ideas whilst allowing criticism of others is both prejudicial and counter to free speech, not to mention utterly hypocritical. But this is exactly where all social media platforms are right now. The worst offender for this is surely Twitter.
Enter Gab. Gab is a social media platform not too dissimilar to Twitter. It hit the headlines recently when it came to light that the Pittsburgh shooter had vented many of his extreme views on the platform before going on his shooting rampage[^] at a synagogue, killing 11 people. Gab attracted a lot of controversy over these events. The entire tech industry promptly rounded on Gab: its hosting providers (including Microsoft) dropped it, its app was de-platformed by both Google and Apple, it was cut off by payment processor PayPal, and the list goes on. Gab advocates free speech (and is the only social media platform that does), but it certainly does NOT advocate violence. Its creator Andrew Torba is very clear on this. I suspect that many of the tech giants were simply looking for a reason to de-platform Gab, and the shootings played right into their hands. It is worth noting that the shooter also had accounts on Facebook and Twitter too. Having a competitor that advocated free speech (when they don't) was always going to end in a retaliatory strike from the elites at Silicon Valley. In my opinion, the (over) reaction from the Silicon Valley tech giants was unfair, unjust and completely unfounded.
There's a famous phrase that states "If you're not the one paying for the service, then you're not the customer". And this phrase could almost be Facebook's mission statement. What started as an ambitious social media platform with some great features and concepts has, over the years, transformed into little more than a marketing tool for businesses to sell us their products and services. It's impossible to scroll through your timeline without being bombarded with ads. Many of these ads, it is worth noting, come directly from your Google searches. It was reported in early 2018 that the big data company Cambridge Analytica had harvested the personal data of millions of Facebook profiles without their consent[^] and used the data for political purposes. The scandal eventually led to Facebook founder Mark Zuckerberg appearing before the United States Congress to testify. However, as this was a voluntary agreement on his part, many simply dismissed the hearing as a dog and pony show which was never going to trigger any criminal proceedings. Are social media giants held to different standards than everyone else? I wonder what the outcome would have been had the scandal involved a tobacco company, for example. It's easy to see how coming down hard on a tobacco company could generate much kudos and back-patting.
In a recent survey it was found that a majority of Americans don’t think social networks are good for the world[^]:
Quote: the number of people who think social media is a net positive for society is down to 40 percent.
This is not entirely unexpected. Many people are beginning to see how much power these tech giants wield, and how much influence they hold. Not just politically, but socially and culturally. They dominate our landscape and every part of our lives. I recognise and appreciate the technical advances made by the tech giants, but I have genuine concerns that they are now overstepping their boundaries of responsibility. We are slowly and inexorably sleepwalking into a dystopian, Orwellian world where we are under constant surveillance. Where our personal data reduces us to a mere commodity. Where we are told what to think and what to say. Where the social, cultural and political norms are dictated to us. Free thought and free expression are being eroded by the tech industry. They promulgate their own political narratives, and destroy all those they disagree with. They don't take kindly to any form of competition, and will beat into submission anyone that dares to create a competitive technology. Is this really where we want to be? Technology naturally has a part to play in shaping our social and cultural fabric, but that should not include dictating it by force. We are giving far too much power and influence to the Silicon Valley elites. It is high time we put ourselves back in charge.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
We are currently in the middle of re-building our existing mobile app. Probably the most important form in the app is the Vehicle Inspection form. This form allows a driver to fill out a vehicle inspection from their mobile device and to submit the results. In our current app (which is an Apache Cordova hybrid app developed using Javascript in conjunction with Kendo UI controls) we generate an HTML page from the inspection metadata. This allows us to use all the HTML controls such as
- textboxes
- checkboxes
- dates
- radiobuttons
- dropdowns
We then capture the driver's responses using Javascript, and submit these responses to our backend system.
The new mobile app, however, is being developed using Xamarin Forms. All of our form controls use Telerik UI controls. We knew we wanted to replicate as closely as possible the implementation of the current app. The vehicle inspection is a critical piece of functionality, and it works extremely well. The challenge therefore would be to find something that replicated this same implementation in Xamarin Forms.
Whilst investigating how we would reproduce this, I came across the WebView. This is a view for displaying HTML content inside the app. Unlike the OpenUri() method, which navigates the user to a web page using the app's in-built browser, the WebView displays HTML content "inside" the app. This sounded like exactly what I needed.
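For reference, assigning generated HTML to a Xamarin Forms WebView is done via an HtmlWebViewSource. This is only a rough sketch; the page class and the way the markup is passed in are illustrative, not the actual inspection form.
using Xamarin.Forms;

public class InspectionPage : ContentPage
{
    public InspectionPage(string generatedHtml)
    {
        // Display the generated inspection markup inside the app rather than in the browser
        Content = new WebView
        {
            Source = new HtmlWebViewSource { Html = generatedHtml }
        };
    }
}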
Generating the HTML to render the vehicle inspection was the easy part. I had this working quite quickly. Using the same logic for creating the HTML controls as our existing app (which uses Javascript), I was able to mimic this in C# to achieve exactly the same output in the new app. The problem came when I wanted to submit my responses. I looked at the simple example in the Microsoft documentation, but this didn't provide nearly enough clarity on how to proceed. I tried injecting Javascript functions into the generated HTML, but this only seemed to work for functions that didn't interact with the DOM. However, retrieving the responses required interaction with the DOM.
There doesn't seem to be much information anywhere on this particular topic. I looked through the usual suspects (Stack Overflow, Xamarin forums) but to no avail.
I then stumbled across an article that went into a lot more detail on how to Use Javascript with a WebView[^]. Reading through this and looking at the example code gave me sufficient knowledge to work out how to retrieve the responses from the HTML generated vehicle inspection.
Here are the functions I wrote that enable me to retrieve the responses.
private async Task<string> GetValueFromTextbox(string controlId)
{
return await WebView.EvaluateJavaScriptAsync($"document.getElementById('{controlId}').value;");
}
private async Task<string> GetValueFromCheckbox(string controlId)
{
return await WebView.EvaluateJavaScriptAsync($"document.getElementById('{controlId}').checked;");
}
private async Task<string> GetValueFromRadioButton(string controlname)
{
return await WebView.EvaluateJavaScriptAsync($"document.querySelector(\'input[name=\"{controlname}\"]:checked\').value;");
}
private async Task<string> GetValueFromDropdown(string controlId)
{
return await WebView.EvaluateJavaScriptAsync($"document.getElementById(\'{controlId}\').options[document.getElementById(\'{controlId}\').selectedIndex].value;");
}
I have now got this working and am able to submit the responses that have been entered into the HTML-generated vehicle inspection.
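Collecting the responses is then just a matter of awaiting these helpers with the IDs / names of the generated controls (the control identifiers below are illustrative):
var comments = await GetValueFromTextbox("comments");
var lightsOk = await GetValueFromCheckbox("lightsOk");
var tyreCondition = await GetValueFromRadioButton("tyreCondition");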
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
This article assumes that the reader is already familiar with the MVVM software design pattern. If you are not familiar with this design pattern, then it's worth reading up on it first, before proceeding with this article. There are many descriptions of this pattern, including this one[^]. It is useful to understand the design pattern from a purely conceptual perspective before looking at the various technical implementations of it. By understanding the design pattern at a conceptual level, you will find it far easier to comprehend its implementation details.
I have used the MVVM design pattern previously. In fact, I have used the MVVM pattern within our current mobile app. For this, I used Kendo UI controls in conjunction with Javascript. That particular implementation uses what is known as an observable. An observable (which is based on the Observer design pattern[^]) is an object that maintains a list of dependents (called observers) and notifies them of any changes in state. It is this notification system that provides the two-way notification (or binding) that is essential to the MVVM design pattern.
With our latest incarnation of the mobile app now well underway, we have come to the point where we can start building our data entry forms. I have so far implemented the underpinning infrastructure and architecture which enables the app to consume our services, save data to local storage using SQLite and send emails from the app. All of this is now fully implemented and working.
We have several data entry forms within our app that allow the user to submit data to our backend services. These include forms for submitting:
- mileages
- service, repair and MOT bookings
- vehicle inspections
As we have already done so in our previous mobile app, we will be using the MVVM design pattern to implement these data entry forms.
We will implement the data entry forms using XAML and Telerik controls. We could have used the native Xamarin UI controls, but there is a greater selection of Telerik controls, they provide a consistent API and they are easily themeable. Although the implementation uses Telerik controls and XAML, the underlying concepts can be applied with any UI technology.
I'll use an example that refers to a simple data entry form that allows a user to enter a message which is sent to the backend service. The message may be to request information for example. This trivial example containing just the one UI control should suffice to demonstrate how the MVVM pattern can be implemented.
I tend to begin the development of a new data entry form from the Model and work backwards from there i.e. Model -> ViewModel -> View.
All Models inherit from the same base Model class. This base Model class inherits from NotifyPropertyChangedBase, which is a Telerik class that supports behaviour similar to INotifyPropertyChanged.
public class BaseFormModel : NotifyPropertyChangedBase
{
}
This ensures that all Models used by the data entry forms will support the ability to raise events when a property on the Model changes. These changes to the Model will be notified to the ViewModel.
Models used by the data entry forms also inherit from the following interface.
public interface IFormData<T>
{
T CreateDefaultModel();
}
By implementing this interface, the Model must therefore contain the method CreateDefaultModel(). This method is used by the ViewModel to supply a default Model (containing default values) which can be used when the View (the XAML form) is first displayed to the user. The interface uses generics, which allows it to work with any type of Model.
Here's the Model for the "Message Us" data entry form. For the purposes of this simple example I have removed much of the code for clarity.
public class MessageUsModel : BaseFormModel, IFormData<MessageUsModel>
{
private string _messageToSend;
[DisplayOptions(Header = MessageUsModelConstants.MessageHeader)]
[NonEmptyValidator(MessageUsModelConstants.MessageError)]
public string MessageToSend
{
get => _messageToSend;
set
{
if (_messageToSend == value) return;
_messageToSend = value;
OnPropertyChanged();
}
}
public MessageUsModel CreateDefaultModel()
{
return new MessageUsModel
{
_messageToSend = ""
};
}
}
The decorations on the public property MessageToSend are Telerik specific and define the validation rules / messages for the property. These rules / messages are then enforced by the View. Using this particular implementation of MVVM, the data rules are therefore defined at the level of the Model (which makes sense). Whenever a new value is set on the MessageToSend property, the OnPropertyChanged() event is raised. This updates the state of the ViewModel that is bound to the Model.
Moving onto the ViewModel, we define the base behaviour for all our ViewModels in our base class.
public abstract class ViewModelBase<T> : NotifyPropertyChangedBase where T : new()
{
public T FormModel = new T();
public abstract Task PostCompleteTask();
}
I have used an abstract class that inherits from the same Telerik class as the base Model class, i.e. NotifyPropertyChangedBase. The public property FormModel is a reference to the Model. This property is used by the ViewModel when it needs to refer to the Model. The method PostCompleteTask() is invoked by the ViewModel when the form is ready to be submitted. As this is an abstract method, it must therefore be implemented by each inheriting subclass. This provides consistency to all of our ViewModels. The actual work performed by each ViewModel will always be defined within this method.
Here's the ViewModel for the "Message Us" class. For the purposes of this simple example I have removed much of the code for clarity.
public class MessageUsViewModel : ViewModelBase<MessageUsModel>
{
public MessageUsModel MessageUsModel;
public MessageUsViewModel()
{
this.MessageUsModel = this.FormModel.CreateDefaultModel();
}
public override async Task PostCompleteTask()
{
}
}
The public member MessageUsModel is the reference to our Model. This is initially populated with a default instance in the class constructor by invoking the method CreateDefaultModel() (which we saw earlier) via the FormModel member (which we also saw earlier).
this.MessageUsModel = this.FormModel.CreateDefaultModel();
When the user has finished entering their message and is ready to submit the form, clicking the form's submit button invokes the PostCompleteTask() method, which performs whatever processing is necessary (in our case all form data is submitted to our backend services using RESTful Web API services).
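The body of PostCompleteTask() has been stripped out above, but to make the flow concrete, here is a minimal sketch of what a submission to a backend RESTful Web API might look like. The endpoint URL, the use of HttpClient / Newtonsoft.Json and the payload shape are all illustrative assumptions, not the actual service code.
public override async Task PostCompleteTask()
{
    // Sketch only: serialise the Model and POST it to a hypothetical backend endpoint.
    // Requires System.Net.Http, System.Text and Newtonsoft.Json.
    // (A real app would typically reuse a single HttpClient instance.)
    using (var client = new HttpClient())
    {
        var json = JsonConvert.SerializeObject(this.MessageUsModel);
        var content = new StringContent(json, Encoding.UTF8, "application/json");
        var response = await client.PostAsync("https://example.com/api/messages", content);
        response.EnsureSuccessStatusCode();
    }
}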
Finally, here's the XAML for the View and the code-behind.
[XamlCompilation(XamlCompilationOptions.Compile)]
public partial class MessageUsView : ContentPage
{
public MessageUsViewModel Muvm;
public MessageUsView()
{
InitializeComponent();
this.Muvm = new MessageUsViewModel();
this.BindingContext = this.Muvm;
}
private async void DataFormValidationCompleted(object sender, FormValidationCompletedEventArgs e)
{
dataForm.FormValidationCompleted -= this.DataFormValidationCompleted;
if (e.IsValid)
{
await this.Muvm.PostCompleteTask();
}
}
private void CommitButtonClicked(object sender, EventArgs e)
{
dataForm.FormValidationCompleted += this.DataFormValidationCompleted;
dataForm.CommitAll();
}
} And the XAML code.
<input:RadDataForm x:Name="dataForm" CommitMode="Immediate" />
<input:RadButton x:Name="CommitButton" Text="Save" Clicked="CommitButtonClicked" IsEnabled="True"/>
The important parts to note are the setting up of the binding between the View and the ViewModel in the constructor. This sets up the two-way binding, such that any changes in the View are reflected in the ViewModel and vice-versa. These changes are also reflected in the underlying Model (if that wasn't already clear).
this.Muvm = new MessageUsViewModel();
this.BindingContext = this.Muvm;
When the user clicks the Submit button, the actions implemented within the ViewModel's PostCompleteTask() method are invoked.
This is a fairly simple example. In a real world use case there would undoubtedly be more complexity, but it should serve as a useful demonstration of the MVVM design pattern within a Xamarin mobile app. The fact that we are using Telerik UI controls doesn't change the core concepts discussed. MVVM is a very powerful pattern that is a perfect fit for data entry forms.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I recently had a need to consume a private nuget feed in one of our Azure DevOps build pipelines. This was for our Xamarin Forms mobile app's build pipeline. We wanted to use a Telerik UI nuget package in our app. In order to add a reference to this nuget package to your project, you firstly need to add your Telerik credentials into Visual Studio. This ensures that you are a fully paid up Telerik subscriber with access to the nuget package.
I therefore needed to update the build pipeline to fetch this private nuget package. After a bit of trial and error (and a few failed builds) I got this working. In Azure DevOps I needed to update the nuget restore build task to also fetch the Telerik nuget package.
- Add a Nuget restore task to your build pipeline (if you don't already have one). This task needs to come before you build the project.
- Set the path to the project in the relevant textbox
- Set the option for Feeds in my Nuget.config (this is important as this allows you to specify credentials for consuming external nuget packages)
You should now see a Manage link which will allow you to configure the credentials to your private nuget package. Clicking on this link opens up the Service Connections that are available for your build pipeline. Add a new service connection of type Nuget. In the dialog box that is now displayed click the option for Basic Authentication and enter the following information.
- Connection name
- Feed URL
- Username
- Password
Click OK to save these credentials.
Back in your build pipeline's nuget restore task, you should now be able to select these credentials in the dropdown. What Azure DevOps will now do is merge these credentials into its default nuget.config file (or into the one you have specified under the Path to Nuget.config). Either way, whatever credentials you have specified will be merged into the nuget.config file.
And that's basically all there is to it. Your build pipeline is now able to consume nuget packages from private feeds.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I have been setting up the Azure DevOps builds required for our new mobile app: one for Android and one for iOS. In this article I will focus on the iOS app as this is the one that caused me the most difficulty. There is an extra degree of difficulty when developing for the Apple platform, as you need a Mac, certificates and provisioning profiles, so configuring a build for iOS is a little more complex. This is definitely borne out by the number of Stackoverflow posts I found on the various issues I encountered.
Before proceeding, I want to fully clarify something that caught me out. This may seem self evident, but judging by the posts I came across on this issue, perhaps not so much. When running your iOS app from Visual Studio there are two methods of provisioning the app.
- Automatic provisioning - This is useful during development. You need a Mac on your network that is visible to your Visual Studio environment, and you pair with it. Your Visual Studio environment will then read the necessary provisioning information directly from the Mac (be sure to disable the screen-saver on the Mac or else you'll lose your pairing with it).
- Manual provisioning - This is needed when you intend to build your app from a build server. Unlike automatic provisioning (where your Visual Studio environment just fetches what it needs from the paired Mac), you instead enter the necessary signing identity and provisioning profile information into Visual Studio.
So if you are setting up your iOS app to be built on a build server such as Azure DevOps, you will need to use manual provisioning.
When setting up an iOS build you firstly need to select the correct agent pool from Azure DevOps. In this case select the Hosted macOS agent pool. Selecting this provides you with a template consisting of the core tasks necessary for building your iOS app.
- Install an Apple certificate
- Install an Apple provisioning profile
- Build Xamarin.iOS solution
- Copy files to the artifacts staging directory
- Publish the build artifacts
We are also using Visual Studio App Center so I have the following task defined too.
- Deploy to Visual Studio App Center
We intend to use App Center for testing but we haven't set this up just yet.
Installing the Apple certificate and provisioning profile
=========================================================
The Apple certificate and provisioning profile can both be downloaded from your Apple developer account and uploaded to your Azure DevOps build pipeline. The certificate needs to be in the form of a .p12 file, which differs from the .cer file; you may need to open the certificate on a Mac (typically exporting it from Keychain Access) to generate the required .p12 file. Either way, once you have these files, they need to be uploaded to Azure DevOps. Your build will fail without them.
Build the Xamarin.iOS solution
==============================
Before you proceed to this step, ensure you have set your Xamarin.Forms iOS project to use Manual Provisioning, and set values for the Signing Identity and Provisioning Profile (these must match the certificate and provisioning profile uploaded earlier). On the build task, check the box Create app package if you want to create an .ipa file (which is the file that is actually installed onto the devices). If you intend to test your app in any way, then presumably this needs to be checked.
The output from this task should be the required .ipa file.
Copy files to the artifacts staging directory
=============================================
The template does a good job of this, so this task should need very little configuration. Basically, all the task is doing is copying the generated .ipa file from the build folder to the artifacts folder, from where it can be used by subsequent build tasks.
- Source folder - $(system.defaultworkingdirectory)
- Contents - **/*.ipa
- Target folder - $(build.artifactstagingdirectory)
Publish the build artifacts
===========================
This task simply publishes the contents of the artifacts folder from above - $(build.artifactstagingdirectory)
At this point we now have a complete build process that has generated an .ipa file using the latest code changes, and published that .ipa file so that it is available for subsequent build processes such as testing and / or deployment. So at this point you can use your preferred testing / deployment tools of choice. In my case, I have deployed the generated .ipa file to App Center for testing and deployment.
Deploy to Visual Studio App Center
==================================
You will need to configure your build with an App Center token. This authorises your build process to access App Center on your behalf. I will write a future article on App Center, but for now it is sufficient to know that I have two apps configured in App Center - one for iOS and one for Android. Once configured, enter the name of the App Center connection into your Azure DevOps task.
If you are using App Center as part of a team then it's a good idea to create an organisation, and assign your developers to the organisation. Then in the App slug you would enter {organisation-name}/{app-name} e.g. myOrganisation/MyAppName.
Now for each build that is triggered, we have a full build pipeline that builds the iOS app and deploys it to App Center, from where we can deploy it to actual physical devices (allowing us to monitor analytics, crashes and push notifications).
Setting up this build process has been far from straight-forward. I encountered several problems along the way, and didn't always find answers to my questions. Many times it was down to good old fashioned trial and error, along with a dash of perseverance.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
As part of our new mobile app offering that we are busy developing, I need to deploy the backend Azure mobile app services. These are the backend services that will provide all the main business logic that the app will need to function and provide value to the user. The app itself will essentially be a dumb set of screens that will have no smarts in and of themselves. The smarts will come from the backend services that the app will consume. And these services will be hosted on Azure in the form of mobile app services.
I have previously written about how I set up the build pipeline using Azure DevOps. The next step was therefore to deploy the build artifacts to Azure using the same Azure DevOps pipeline.
The main steps needed to deploy your app to Azure are actually defined in your build pipeline:
- create a zip file containing the build artifacts to be deployed
- publish the zip file so it is available for the release pipeline
In my build pipeline I have these two tasks defined as the last tasks in the pipeline. To create the zip file I use MSBUILD with the following parameters:
WebPublishMethod=Package;
PackageFileName=$(Build.ArtifactStagingDirectory)\package.zip;
DesktopBuildPackageLocation=$(Build.ArtifactStagingDirectory)\package.zip;
PackageAsSingleFile=true;
PackageLocation=$(Build.ArtifactStagingDirectory)\package.zip;
DeployOnBuild=true;
DeployTarget=Package
I therefore added an MSBUILD task to the build pipeline. You may also need to add other build parameters for specifying the OutputPath, Configuration and Platform, and any other parameters as necessary.
You will then need to add a Publish Build Artifacts task to your build pipeline. This makes your zip file available to the release pipeline. In the textbox for Path to publish I have entered $(Build.ArtifactStagingDirectory), as this is where I want the zip file to be published.
There are various templates you can use for setting up your release pipeline. For the purposes of this article I will keep it simple and refer to the Deploy Azure App Service template. Here you will need to authorise your Azure subscription. Once this has been completed you will need to enter other details including:
- app type
- app service name
- package folder (the filename and path where the zip file is located)
- optionally you can specify a slot if you are deploying to slots (which I highly recommend you do)
There are some subtle differences between how TFS handles deployments to Azure and how Azure DevOps handles them, which caught me out when I first set up the release pipeline. For example, ensuring the release pipeline has access to the zip file threw me at first, until I discovered that the build pipeline needs to publish the zip file for it to be available to the release pipeline.
Other than that, the process itself is fairly straight-forward and I didn't encounter any major problems.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I have long had an interest in the DevOps side of the software development lifecycle. As much as I love to write software, I love to build and deploy it too. After all, it doesn't really matter how great your code is; if people aren't using it then it's an irrelevance. The process of building and deploying software has long been a passion of mine. I love to set up and configure build processes. DevOps automates and simplifies the process by which software can be deployed onto users' machines. From the developer checking in their code, through versioning the software, executing and publishing unit tests and analysing code coverage, to deploying the final artifact onto a release server - these are all part of the DevOps process. These can (and should) all be automated. The developer shouldn't have to worry about any of this (unless, like me, they actually enjoy setting up these processes). I have previously used CruiseControl, TeamCity, Team Foundation Server and most recently Azure DevOps.
We have recently begun the task of re-building our next generation mobile app. For this we are using Xamarin in conjunction with Azure services. We currently use Team Foundation Server (TFS) for all of our DevOps processes. This is a brilliantly simple, yet very flexible and powerful build tool; I haven't found anything that I haven't been able to do with it. For our new project though, I wanted to make use of Microsoft's replacement for Visual Studio Team Services (VSTS), which is now branded as Azure DevOps.
This seemed the perfect time to start using Azure DevOps - with a new project. I don't have any intentions of migrating our existing projects, so it would require a new project to allow me to get my hands on Azure DevOps.
First off, for anyone who has previously used TFS or VSTS, Azure DevOps (which is really a re-branding of VSTS) should look and feel very familiar. As its name suggests, it is powered by Azure infrastructure, meaning it will scale up and out as your build process grows.
We have separated our new mobile app into two distinct solutions. One is the Xamarin Forms app, and this will unsurprisingly contain the actual app itself. The other will be the Azure backend server that will provide all the functionality to the app (business logic, notifications, service requests etc). It is this latter solution that I have been focussed on moving into Azure DevOps. At the time of writing, I have setup the pipeline to include:
- versioning the assembly
- restoring the Nuget packages
- building the solution
- executing the unit tests
- publishing the code coverage
There are literally hundreds of built-in tasks for building, testing, packaging and deploying your software. You also have access to the Marketplace where you can find hundreds more tasks developed by the community. Even big players such as JetBrains have free tasks available in the Marketplace. So if you can't find the task you want out of the box, you can probably find one that matches in the Marketplace. If not, you can always develop your own and publish it in the Marketplace yourself.
My first impression of Azure DevOps is that it's quite simply a brilliant tool. It reduces our reliance on our on-premise infrastructure and allows us to fully build and deploy our applications on rock solid infrastructure in the Microsoft cloud. If you currently use TFS then it's worth spending the time to explore Azure DevOps. Unless your business already has a large investment in IT infrastructure, you'll be very hard pushed to beat the Azure stack. If you're currently using VSTS then you'll be automatically migrated to Azure DevOps anyway. Even if you don't currently use TFS or VSTS, it doesn't matter; you can build, package and deploy your application using Azure DevOps regardless. It has support for every platform and technology. So whether you're brand new to DevOps and don't have anything currently configured, or you're currently using an alternative, it's worth checking out Azure DevOps.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Having developed several mobile apps and deployed them to both the Google Play and Apple app stores, we have been forced to re-write the apps. The reason is that our development platform, Telerik Platform, has been retired, probably so that Progress can focus its development efforts on its other cross-platform mobile technology, NativeScript.
That leaves us in the position of having our apps in the app stores, but with no means by which to update them. We can update the RESTful services used by the apps as these are completely separate from the apps (thank goodness for good architecture), but we can't make any changes to the apps themselves. That puts us in a slightly vulnerable position, as we can't respond to customer suggestions or changes in market forces (or even change the branding / look and feel).
We have therefore been forced to re-evaluate the technologies that are available to us for developing the mobile apps. We have looked at several technologies. I have previously written why I think Building native enterprise apps is (probably) the wrong approach. In relation to enterprise apps, there is very little (if any) benefit to going native (longer development cycles, greater expense, bigger teams with bigger skill sets, little overall benefit). Cross-platform therefore is the only approach on the table.
We firstly looked at NativeScript (the natural successor to Telerik Platform), as both are owned by Progress. This looked like a great development platform. Progress have made big strides in trying to ease the migration path of existing Telerik Platform users to their newer NativeScript platform. You can choose from JavaScript, TypeScript or Angular as the approach for building your apps. It comes with a companion application called Sidekick to simplify many of the development processes. It has the support of a large community and is backed by Progress (giving peace of mind). Also, it renders native components on the device, making it a truly cross-platform development environment.
The only other alternative that I considered seriously was Xamarin. I have used this previously (before the Microsoft acquisition) and so was already familiar with it. I was intrigued as to how it may have changed since the Microsoft acquisition. The first thing I noticed when looking through the documentation and examples was the tight integration with Azure. We already make substantial use of Azure with our other mobile and web apps, so it was great to see the same design philosophy applied to Xamarin. In fact, the overall architecture used by Xamarin is not too dissimilar to the one I developed for our existing apps and current web app. This was a huge benefit to us right out of the box, as I was already familiar with the architecture and the key moving parts of building a mobile app with Xamarin. Like NativeScript, Xamarin also renders truly native components on the device.
I spent considerable time looking at both offerings, as well as taking into consideration the skill set of the team. In the end we have decided to go with Xamarin. I am far more familiar with C# and Azure (as well as the architecture of their apps) and this played a part in the final decision. NativeScript would have required us to learn TypeScript. Although this is not necessarily a barrier on its own, the reality is that I will be up and running far quicker with Xamarin than with NativeScript.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I recently had to figure out how to execute an AJAX request upon loading an ASP.NET Core Razor page whose URL contained a querystring parameter. On one of our Razor pages, we have a Kendo UI Treeview. Clicking on this treeview populates the page with data for the selected item. The click event on the treeview executes a JavaScript function, which in turn makes an AJAX request to a service to fetch the data for the selected item. This JavaScript function then populates the Razor page with the returned data.
When launching the Razor page interactively from the menu the URL looks like this.
https://localhost/DocumentManager/DocumentManager
When launching the Razor page for a specific item programmatically the URL looks like this.
https://localhost/DocumentManager/DocumentManager?documentid=1234
So basically, the page is populated with data from an AJAX request which is fired from a JavaScript event handler. The problem I had was that we needed to open this page and load up the data for a specific item. Whereas currently the item is selected by the user interactively clicking on an item in the Kendo UI Treeview, I now had to figure out how to load the page data for an item programmatically.
So here's how I did it.
I firstly needed to figure out whether the Razor page was being launched interactively (with no querystring parameters) or programmatically (with querystring parameters). I did this using URLSearchParams (see URLSearchParams - Web APIs | MDN). This is an interface that allows a JavaScript client to manipulate and work with querystring parameters. It offers a far simpler and more elegant mechanism for working with querystring parameters than horrible string manipulation and / or regex queries.
I was passing a document ID to the Razor page in the form of:
https://localhost/DocumentManager/DocumentManager?documentid=1234
<div>
rest of the Razor page goes here
</div>
<script>
$(document).ready(function () {
var queryparams = window.location.search;
if (queryparams && typeof (queryparams) === "string" && queryparams.length > 0) {
var urlParams = new URLSearchParams(window.location.search);
var documentid = urlParams.get('documentid');
loadDocumentManagerForDocument(documentid);
}
});
</script>
In our JS file site.js, the function that makes the AJAX request is defined as follows.
function loadDocumentManagerForDocument(documentid) {
    if (documentid) {
        $.ajax({
            type: "GET",
            url: `/DocumentManager/DocumentManager?handler=document&documentid=${documentid}`,
            contentType: "application/json",
            dataType: "json",
            success: function (response) {
                // populate the page with the returned document data
            },
            error: function (response) {
                // handle a failed request e.g. display an error message
            }
        });
    }
}
Finally, here is the Razor page handler that fetches the data. Remember, this is the same Razor page handler that is used for both loading the data interactively (as the user clicks items from the Kendo UI Treeview) and programmatically.
public async Task<JsonResult> OnGetDocument(int documentid)
{
    // fetch the document via the page model service
    var document = await new DocumentManagerPageModelService().GetDocumentById(documentid);
    // return the document data as JSON to the calling JavaScript
    return new JsonResult(document);
}
Here is yet another example demonstrating the incredible flexibility and power of ASP.NET Core. This solved what I thought might be a really tough problem, but in the end it wasn't that difficult. With a bit of thinking the problem through, the solution is quite straightforward. I hope this solution helps someone else.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
The Document Manager was intended to be a web based application that would allow users to upload documents (reports, spreadsheets etc) and assign subscribers to them. A subscriber would then be able to login to the application and download any documents assigned to them. The premise was to build an application along the lines of a collaborative Dropbox.
The entire application was a proof-of-concept for building the next generation fleet management system. The new application would be a replacement for the current one. To ensure that the technical choices we had made were sound, and to reduce the risk to the business, we decided to develop a single module first. If this went well and we were satisfied that the technologies were sound, then we would create the rest of the application.
The technologies we had selected (and therefore used to build the Document Manager) included the following:
- ASP.NET Core 2.1 Razor pages (for the front-end application)
- ASP.NET Web API (for building the RESTful services that would be consumed by the application)
- Azure (for hosting, SQL and blob storage)
The only unknown was the use of ASP.NET Core and Razor pages. We had used the other technologies previously on our mobile apps. We didn't want to use full blown MVC for this project, as we intended to create a suite of RESTful services to provide the business logic. The architecture was service-oriented architecture (SOA), so the client application only needed to be a lightweight front-end. Hence we didn't need anything as complicated as MVC or a single-page application (SPA) for the client application.
ASP.NET Core 2.0 comes with a project template whereby you can create an application based on Razor pages, without the added complication of MVC. This seemed the perfect fit for our needs. After experimenting with Razor pages for a few days, they seemed to fit very well with the rest of our architecture.
Part way through the application lifecycle we upgraded from ASP.NET Core 2.0 to 2.1, and upgraded Visual Studio at the same time. Apart from making a minor change to one of our build scripts, this upgrade was seamless and without problems.
We are now nearing completion of this project. We have developed the Minimum-Viable-Product (MVP) as our proof-of-concept. The application allows for the uploading, downloading, editing and deleting of documents. You are able to add / delete the subscribers to a document. Subscribers are notified of their subscription via our email service (so a subscriber is alerted to the fact that they need to login to the application and download a document). There is also administration functionality (maintaining companies, users and roles).
I have found using ASP.NET Core 2.1 in conjunction with Razor pages to have been the perfect choice. ASP.NET Core is an incredibly powerful development platform. The support for AJAX and the Razor page handlers alone make this a fantastic platform. There are multiple ways of achieving the same objective, making it incredibly flexible.
I am very pleased with how the project went. The technical choices were justified and sound, and we are now extremely confident of building out the rest of the next generation fleet management software using these technologies.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I came across this design strategy many years ago when writing client APIs, but it's a strategy that is worth considering when designing any client API. It's one I have made extensive use of in our current suite of ASP.NET Web APIs.
The surface area is what the client interacts with when consuming an API. Reducing the surface area is therefore a strategy for reducing what the client needs to interact with. By reducing what the client needs to interact with, your API is simpler and easier to consume by client applications.
Let's say you have a fairly simple data-entry application that allows the client to add/update/get/delete items such as customers, orders and deliveries (basic CRUD functionality). If we wanted to develop a client API to implement this functionality, we could quite reasonably do so by implementing a customer API, an order API and a delivery API. Nothing wrong with this approach. Let's say that six months later we have added more functionality. We can now add/update/get/delete stock, suppliers and materials. The number of APIs the client now needs to interact with has doubled. From the point of view of the client, the API is getting increasingly more complex to use, as there are a greater number of APIs to learn and use.
But wait. Don't all those APIs do roughly the same sort of thing? They all provide the same CRUD functionality, just to different entities (customer, order, delivery, stock, supplier and material).
What if we condensed all those CRUD APIs for all those different entities into a single API? That would provide the same level of functionality to the client application, but would also be easier to learn and understand, as there is only the one API to interact with.
This is the concept behind reducing the surface area of the client.
In a web application I have been developing, we have a very similar scenario. We have a data-entry web application that provides CRUD functionality to the user. All the functionality has been implemented using ASP.NET Web API. However, the web application only consumes a single API. All POST, PUT, GET and DELETE requests are reduced down to a single API that performs all operations across the entire web application. Not only that, but all the APIs work in the same, consistent manner.
For example, the POST controllers (API) work in the following manner. I pass a single string key on the querystring. This tells the API what type of data is being passed. In the request body is the data itself in JSON format (other formats are available).
Example values for the querystring key could be "addcustomer", "addorder", "addsupplier". Then in the body of the request would be the actual data that represented the entity (customer, order, supplier etc).
Here is example code from the POST request controller.
[HttpPost]
[EnableCors(origins: "*", headers: "*", methods: "*")]
public async Task<HttpResponseMessage> WebPostData(string formname, [FromBody] JToken postdata)
{
base.LogMessage(string.Format(CultureInfo.InvariantCulture, "{0}.{1}", GetType().Name, "WebPostDataToServiceBus"));
base.LogMessage($"Formname={formname}");
if (!base.IsRequestAuthenticated())
{
base.LogMessage("Request failed authentication");
throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.Forbidden));
}
return await this.AddWebTaskToServiceBus(formname, postdata);
}
This same concept can be applied to PUT, GET and DELETE requests. As long as you have a string key parameter that determines the type of the data, then you are able to implement the appropriate logic to process it (e.g. if you know you are adding a new customer, then you de-serialise the customer data and pass it to the database, service bus etc).
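To make the idea concrete, the processing behind such a controller could branch on the string key and de-serialise the body into the appropriate entity. The following is a rough sketch only; the entity types and persistence calls are hypothetical and this is not the actual implementation behind AddWebTaskToServiceBus.
private async Task<HttpResponseMessage> ProcessWebPostData(string formname, JToken postdata)
{
    // Sketch only: the string key on the querystring determines how the JSON body
    // is de-serialised and which downstream logic handles it.
    switch (formname?.ToLowerInvariant())
    {
        case "addcustomer":
            var customer = postdata.ToObject<Customer>();   // hypothetical entity type
            await this.SaveCustomerAsync(customer);         // hypothetical persistence call
            break;
        case "addorder":
            var order = postdata.ToObject<Order>();         // hypothetical entity type
            await this.SaveOrderAsync(order);               // hypothetical persistence call
            break;
        default:
            return Request.CreateResponse(HttpStatusCode.BadRequest, $"Unknown form name: {formname}");
    }
    return Request.CreateResponse(HttpStatusCode.OK);
}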
This makes your API surface much smaller, which in turn makes it far easier to consume, learn and comprehend. Surely that's better for everyone.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
This may seem a self evident statement, but apparently not. I recently worked with a colleague (who shall remain nameless to spare their blushes), who was young and very inexperienced. When given a particular task, their approach was not to spend time trying to understand the problem and investigate several solutions before implementing something that would hopefully solve it. Instead, the approach taken by this particular individual was to ask Google for answers, and then use whatever solution was at the top of the list, no matter how inappropriate that solution was.
I have no problem with anybody using Google or any of the technical forums such as Stackoverflow. We all get stuck sometimes, and it's useful to look to these forums for advice, suggestions or possible answers. But that's when you get stuck. I wouldn't go straight to Google from the get go like this person did. If you don't fully understand the problem, let alone the answer, you're headed for big trouble further down the road. It's only a matter of when, not if.
I have encountered many problems where I have been genuinely stuck. What I do then is start researching. I spend time reading around the problem, and reading around the various tools / technologies that may help me in resolving it. What I most definitely don't do is start coding. And I wouldn't expect other members of the team to have to make changes to the code to accommodate my ill-researched solution. Changing code that has been stable for a period of time is not a good idea, and it's especially not a good idea if you intend to introduce changes because you don't understand the problem yourself. That's akin to asking your fellow mechanic to put tractor wheels on your sports car because the solution you read on the internet suggested it. Had you fully understood the problem and the solution, you would have worked out that this was a silly idea.
So before plunging headlong into implementing a solution to a given problem, spend the time to fully understand what the problem is. Spend the time researching various solutions, and look at the bigger picture. Don't just focus on the immediate problem, but how your solution may impact the other moving parts of the application.
1. Take a breath
2. Understand the problem
3. Research and ask questions
4. Take another breath
5. Walk through your solutions with your colleagues who may have valuable knowledge that may help
6. Propose a solution
Downloading code and copying & pasting solutions from the internet just doesn't work. You'll grow into a far more valuable developer by taking the time to understand the problems you're trying to solve.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I recently had a need to set up different environments for our ASP.NET Core 2.1 web application. When we run the app from our development machines, we need an environment where we can diagnose and debug the code. We also use a different endpoint for the services (we have development, staging and production endpoints for our ASP.NET Web API services, which are consumed by the development, staging and production versions of the web app respectively). On top of that we also use different Azure AD B2C (Azure Active Directory Business-to-Consumer) directories for our identity provisioning. We have one each for development, staging and production.
So we need to separate out these different environments when running the application so that the development, staging and production settings are consumed appropriately by the application.
Thankfully ASP.NET Core makes this very straight-forward. Right out of the box, ASP.NET Core supports three environments: Development, Staging and Production. The values for these environments are contained in JSON files called appsettings.<environment>.json e.g. appsettings.Development.json, appsettings.Staging.json and appsettings.Production.json.
If you have common settings that apply irrespective of the environment, then these can be specified in the default appsettings.json file. The environment specific settings will then be merged into this default file at runtime. So for example, if you use the same instance of Application Insights across all the environments, then specify these once in the default appsettings.json file. Then in the Development, Staging and Production versions of the appsettings.json file, specify those settings that are specific to that environment.
Next you need to tell the ASP.NET Core runtime execution engine what environment to use. For development, you set this inside Visual Studio (right click on the project -> Properties -> Debug). You will see an environment variable called ASPNETCORE_ENVIRONMENT. This will be set to Development. This tells the ASP.NET Core runtime to use the Development environment settings. So any settings that are contained within your default appsettings.json file will be merged with those of appsettings.Development.json.
N.B. specific settings overwrite general ones. So if there are any settings that are in both files, the environment specific ones will be taken.
Setting the environment for development is straight forward, as it's done within Visual Studio. How do you set the environment for a deployed application on a Windows server or Azure?
This is also straight-forward and is described in this article.
N.B. It is by setting the ASPNETCORE_ENVIRONMENT to the appropriate value that determines which environment the ASP.NET Core runtime uses.
For the current ASP.NET Core web application I am developing, I have an appsettings.json (which stores our Azure Application Insights settings), plus appsettings.Development.json, appsettings.Staging.json and appsettings.Production.json. The latter three store the values that are specific to those particular environments (debugging settings, logging settings etc).
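For reference, the layering described above is roughly equivalent to building the configuration up by hand as follows (ASP.NET Core's default WebHost builder does this for you); this is a sketch of the standard pattern rather than our actual Startup code.
public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        // appsettings.json is loaded first, then the environment-specific file.
        // Later files override earlier ones, which is why environment-specific
        // settings win over the general ones.
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true)
            .AddEnvironmentVariables();

        Configuration = builder.Build();
    }

    public IConfiguration Configuration { get; }
}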
ASP.NET Core makes it simple and easy to configure your application for different environments, and these can be easily set for Windows / IIS environments and / or Azure environments.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
When I deployed the ASP.NET Web API services to our Azure hosting endpoint, I needed to create a zipped deployment file to do this. Azure then unzips the contents of this file and deploys it into your hosting slot. I needed to do the same thing recently with our ASP.NET Core 2.0 web application.
Just to clarify, I am using Team Foundation Server (TFS) for all our builds and releases. I much prefer using a build server to deploying straight from Visual Studio. Yes, I know deploying from Visual Studio is entirely possible, but I prefer to keep the build and deployment lifecycle separate from the development lifecycle. I find this leads to greater efficiency, particularly when you have a team of developers who need to collaborate on the same code.
When I did this previously with the ASP.NET Web API project, I used an MSBUILD task from TFS and used the argument /t:publish,package to force the creation of the zipped deployment file. However, the /t:package argument does not exist for ASP.NET Core projects. So how do you create the zip file needed to deploy your web application to Azure?
Well it seems that there are a couple of ways to achieve this (although they don't seem to be fully documented anywhere that I can find); I had to resort to reading through Stackoverflow to find the answer. You can either use MSBUILD or dotnet build. As the arguments that are passed to dotnet build are ultimately passed into MSBUILD (yes, it is good old MSBUILD that sits underneath dotnet build), I decided to opt for using MSBUILD. I am also much more familiar with MSBUILD, having used it for many years building many other applications.
The MSBUILD statement that worked for me is the following.
"Path\To\MSBuild\MSBuild.exe" /p:configuration="release";platform="any cpu";WebPublishMethod=Package;PackageFileName="\MyFolder\package.zip";DesktopBuildPackageLocation="\MyFolder\package.zip";PackageAsSingleFile=true;PackageLocation="\MyFolder\package.zip";DeployOnBuild=true;DeployTarget=Package I have this command in a batch file which I then run under TFS as a build step. This build step is one of the last steps in the build process because it only needs to run prior to the release process.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
First of all, just to be absolutely perfectly clear, I do not work for Microsoft and have received nothing in return for writing this article. I just want to get that out of the way before going any further.
Throughout my nearly 20-year career as a professional software developer, I have always used Microsoft products to develop the various applications I have helped build. This includes their products, services and languages, among them Visual FoxPro, Visual Basic, Xamarin, C#, SQL Server, Azure, Visual Studio, Visual Studio Code and ASP.NET (Core), to name a few.
Naturally I have liked some of these better than others. What is becoming very apparent to me is that I am genuinely loving the new development ecosystem that has been coming out of Microsoft over the last few years. Under the leadership of Satya Nadella, the company has completely transformed. Their products, tools and services just keep getting better and better. As a developer, this is fantastic news. For any regular readers of my articles, none of this should come as a surprise; I regularly praise the Microsoft tooling I use on a regular basis.
I started using Azure over a year ago, and can't believe how awesome it is. I use it for everything, including SQL storage, blob storage, hosting, service bus, webjobs, functions, identity provision and application insights, to name a few. It allows me to build modern, scalable, highly available, secure and robust applications. All of the Azure services can be leveraged from within your .NET apps as well as from the Azure portal itself.
This year I started building a web app using ASP.NET Core 2.0. It brings the joy back into building web applications. It is very obvious that a lot of thought went into the architecture and design of ASP.NET Core 2.0. I have always enjoyed working with ASP.NET, but ASP.NET Core lifts this to entirely new levels. The team behind it have a clear understanding of the sorts of problems that developers face, and have solved these in simple yet elegant ways.
They have embraced open-source, they are open and transparent, their tools are no longer closed but integrate with practically every other tool (whether they are Microsoft or not). They are a completely different company to the one I carved out my career with. Credit where credit is due, they have listened to their customers and have responded accordingly. They are now building tools that developers need, want and can enjoy using.
Being a Microsoft developer these days is great fun, and I hope it stays that way for a very long time.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare