Test Driving MongoDB with .NET Core
Introduction

So, what’s the difference between an apple and an orange? Other than the fact that both are fruits and both can be juiced, they don't have much in common. Apples contain more fiber, but oranges contain more vitamin C. The idiom "comparing apples and oranges" refers to the apparent differences between items that are popularly thought to be incomparable or incommensurable.

But as it turns out, through science, you can compare apples to oranges. In 1995, NASA researcher Scott Sandford performed a research project in which he dried out a Granny Smith apple and a Sunkist navel orange in an oven, ground them up, and put each into a mass spectrometer. In the end, he found that apples and oranges can in fact be compared and are quite similar. As Sandford put it, “the comparing of apples and oranges defense should no longer be considered valid”. Leave it to science to figure things out.

Software Technology Comparisons

In the software engineering field, we are often comparing things and making decisions based on those comparisons. The software engineering world is a community of professionals sharing software reviews and best practices. We also have many tools at our disposal for making technology decisions, including reading software comparison reports, test driving software, and developing proofs of concept. In the end, software decisions are sometimes based on a development philosophy, the overall ecosystem of a software technology, its fit for a needed solution, and, of course, its overall popularity.

Relational Databases and NoSQL Databases

When it comes to choosing a database, one of the biggest decisions these days is picking a relational (SQL) or non-relational (NoSQL) data structure for your application. While both are viable options, there are certain key differences between the two that must be kept in mind when deciding.

Relational database management systems (Oracle, MySQL, Microsoft SQL Server) were developed in the 1970s to handle the increasing flood of data being produced. They have a solid foundational theory and have influenced nearly every database system in use today. They were designed with full concurrency and transactional support for a wide range of applications.

NoSQL databases are non-relational and store data in JSON, BSON, or XML formatted documents. They feature a flexible schema. Unlike SQL databases, where users must declare a table’s schema before inserting data, NoSQL document stores don’t enforce document structure; documents can contain any data desired. Reviewing their history, NoSQL databases seem to have been born from the needs of social media applications (Facebook, Twitter), where large amounts of data need to be processed quickly. NoSQL databases provide high operational speed and increased flexibility.

There are more than two dozen open source and commercial NoSQL databases on the market that support various use cases. Some of the popular NoSQL databases include MongoDB, Couchbase, Redis Enterprise, Amazon DynamoDB, Cosmos DB, RavenDB, and MarkLogic. Some of these NoSQL databases support both JSON document-oriented storage and key-value storage. Some even provide high availability and security features at the data level, including transactional ACID compliance.

MongoDB

For this article, I chose to demonstrate the NoSQL database MongoDB with Microsoft .NET Core using the MongoDB .NET Core Driver. The front-end web application will use Angular 6.

MongoDB offers both Enterprise and Community versions of its powerful non-relational database. MongoDB is the most popular NoSQL database that can be deployed on your local servers. A free and open source, cross-platform, document-oriented database, MongoDB uses JSON-like documents with schemas. The company behind MongoDB also offers a cloud-based offering that handles database management, setup, configuration, software patching, monitoring, and backups, and operates as a distributed database cluster. Starting with version 4.0, MongoDB provides multi-document concurrency and transactional support.

Common use cases for MongoDB include big data, content management, and social and collaboration applications.

Sample Application

The sample application for this article will be a shopping cart application. The sample application will have the following data and transactional requirements:

  • Run product searches against product information by product number and product description.
  • Return results of a product search in paged data sets of a maximum 20 rows per page.
  • Provide the ability to sort the product results by various columns in either ascending or descending order.
  • Create a sales order with a unique sequentially generated order number.
  • Create a sales order within an ACID transaction.
  • When creating a sales order, execute start transaction, commit transaction and abort transaction within an ACID transaction.
  • Reduce product quantities available for sale for each product detail in the shopping cart when creating a sales order.
  • Rollback the creation of a sales order and all updates to product information when any product in the shopping cart would reduce the quantity available for sale below zero.
  • Generate a list of sales orders and perform a join to product information for display purposes.

Shopping cart and ecommerce applications have generally not been considered good use cases for NoSQL databases, because NoSQL databases have historically not provided enough transactional support for these types of applications. The sample shopping cart application for this article will be a good test of the latest features of MongoDB, including its new support for transactions.

Getting Started with the MongoDB Community Edition and Compass GUI

To get started, you can go to the MongoDB web site and download the free MongoDB Community Server Edition. Starting with MongoDB 4.0, you can set up MongoDB to run as a Windows service during the install. MongoDB also comes with a GUI tool called Compass that you can install as well. Compass allows you to analyze and understand the contents of your data without formal knowledge of the MongoDB query syntax. In addition to exploring your data in a visual environment, you can use Compass to interact with documents, collections, and databases with full CRUD functionality. You can also create and execute queries and aggregation pipelines, create and delete indexes, and view and optimize query performance with visual explain plans.

MongoDB .NET Core Driver

The sample application for this article will use Microsoft .NET Core 2.2 for the back-end Web API, business and data access layers. The .NET Core projects for the back-end application include:

  • CodeProject.Mongo.WebApi
  • CodeProject.Mongo.Business.Service
  • CodeProject.Mongo.Data.Models
  • CodeProject.Mongo.Data.MongoDb
  • CodeProject.Mongo.Interfaces
  • CodeProject.Mongo.Functions

To get started interacting with the MongoDB database from .NET Core, you must first install the MongoDB .NET Core Driver through a NuGet package. For this project, I installed the driver's NuGet package in the CodeProject.Mongo.Data.MongoDb project. This application will use version 2.7.2 of the MongoDB .NET Core driver.

<PackageReference Include="MongoDB.Driver" Version="2.7.2" />

What’s really cool is that once the driver is installed, you can immediately start developing your application. All database collections will be created on-the-fly as they are referenced by the application.
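As a minimal sketch of that behavior (assuming a local mongod listening on the default port), the following lines are enough to reference a database and a collection; neither needs to exist beforehand, and both are created on the first insert:

using MongoDB.Bson;
using MongoDB.Driver;

// Neither the "OnlineStore" database nor the "Products" collection
// needs to exist yet; both are created lazily on first use.
var client = new MongoClient("mongodb://localhost:27017");
var database = client.GetDatabase("OnlineStore");
var products = database.GetCollection<BsonDocument>("Products");

// This first insert is what actually materializes the collection.
products.InsertOne(new BsonDocument("ProductNumber", "P-1000"));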

Data Models

Now the first thing you can do is design your MongoDB collections. Unlike the fixed schemas in traditional relational databases like SQL Server, the schema in MongoDB is dynamic. You are not locked into a particular structure. Collections do not enforce document structure by default. This flexibility gives you data-modeling choices to match your application and its performance requirements. This flexibility is also great as you develop your application while following an Agile methodology.

The two main collections for the sample application will be a Products collection and an Orders collection. Collections are like tables but in MongoDB the structure of a collection can be flexible. The documents in a collection can have many different formats. As a developer, you are not locked into a physical schema like a relational database.

The Product class below defines the layout of the Products collection. This Product class has the same look and feel as defining an entity model using Entity Framework Code First. Like Entity Framework, MongoDB also includes data annotations that provide additional information to the MongoDB driver when creating documents.

using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;

namespace CodeProject.Mongo.Data.Models.Entities
{
   public class Product
   {
       [BsonId]
       public ObjectId Id { get; set; }
       public string ProductNumber { get; set; }
       public string Description { get; set; }
       public string LongDescription { get; set; }
       [BsonRepresentation(BsonType.Decimal128)]
       public decimal UnitPrice { get; set; }
       public int QuantityOnHand { get; set; }
       public int QuantitySold { get; set; }
   }
}

Object ID and BSON ID Annotation

By default, MongoDB generates a unique ObjectId identifier that is assigned to the _id field in a new document before writing that document to the database. In many cases, the default unique identifiers assigned by MongoDB will meet application requirements. The unique identifier ensures uniqueness at the database level.

Due to the multi-threaded and distributed nature of modern applications, it is not always a straightforward task to generate unique identifiers that satisfy application requirements on your own. Later in this article, custom unique order numbers will be generated for sales orders.

ObjectId is a 96-bit (12-byte) number which is composed as follows:

  • a 4-byte value representing the seconds since the Unix epoch
  • a 3-byte machine identifier (usually derived from the MAC address)
  • a 2-byte process id
  • a 3-byte counter, starting with a random value

You can map the ObjectId in your data model by adding the [BsonId] annotation in your data model and using the ObjectId MongoDB data type.
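As a small illustration (my own, using the MongoDB.Bson types), a new ObjectId can be generated on the client, and the embedded timestamp can be read back out of it:

// Generate a new ObjectId and inspect its embedded timestamp.
ObjectId id = ObjectId.GenerateNewId();
Console.WriteLine(id);              // e.g., 5c96b0d0c11fcf3180cd38d4
Console.WriteLine(id.CreationTime); // the Unix-epoch timestamp portion (UTC)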

MongoDB Decimal BSON Type

Another data annotation added to the Product model is on the UnitPrice property. Applications that handle monetary data often require the ability to capture fractional units of currency and need to emulate decimal rounding with exact precision when performing arithmetic.

Using the Decimal BSON type annotation provides a decimal-based floating-point format capable of exact precision. In this case, we are telling MongoDB to store the UnitPrice as a 128-bit decimal value with exact decimal precision.

[BsonRepresentation(BsonType.Decimal128)]
public decimal UnitPrice { get; set; }
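A quick illustration of why this matters (plain C#, independent of MongoDB):

// Binary floating point cannot represent 0.1 or 0.2 exactly, so the
// sum is not exactly 0.3 (it is 0.30000000000000004).
double d = 0.1 + 0.2;

// System.Decimal (and BSON Decimal128) uses a decimal representation,
// so the same sum is exactly 0.3.
decimal m = 0.1m + 0.2m;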

One-To-Many Relationships

In a traditional relational database, we define a one-to-many relationship where one row in a table is associated with one or more rows in another table. For example, in this sample application, orders will be created, and each order can have many order details. This is where things get interesting when designing your NoSQL collections in MongoDB. With a relational database, you would most likely store order header and order detail information in separate tables, but with a MongoDB document, the JSON representation of an order can be combined into a single document that is stored in a single collection, which for the sample application is the Orders collection. This provides a great deal of flexibility when it comes to storing your application data.

Order and Order Details Model

In designing the Orders collection, I created both an Order class and an OrderDetail class. Adding a generic list of OrderDetail to the Order class and adding order details to that list will create a single JSON document with a one-to-many relationship. This is the flexibility of a NoSQL database: your documents don’t have to have a flat structure and may contain many nested nodes.

public class Order
{
       [BsonId]
       public ObjectId Id { get; set; }
       public int OrderNumber { get; set; }
       public string FirstName { get; set; }
       public string LastName { get; set; }
       public string AddressLine1 { get; set; }
       public string AddressLine2 { get; set; }
       public string City { get; set; }
       public string State { get; set; }
       public string ZipCode { get; set; }
       public string EmailAddress { get; set; }
       [BsonRepresentation(BsonType.DateTime)]
       public DateTime OrderDate { get; set; }
       [BsonRepresentation(BsonType.Decimal128)]
       public decimal OrderTotal { get; set; }
       public List<OrderDetail> OrderDetails { get; set; }
}

public class OrderDetail
{
       [BsonId]
       public ObjectId ProductId { get; set; }
       [BsonRepresentation(BsonType.Decimal128)]
       public decimal UnitPrice { get; set; }
       public int OrderQuantity { get; set; }
}

The resulting JSON document will end up with a format like this:

{
  "_id": "5c96b0d0c11fcf3180cd38d4",
  "OrderNumber": 100000,
  "FirstName": "Bill",
  "LastName": "Gates",
  "AddressLine1": "One Microsoft Way",
  "AddressLine2": "",
  "City": "Redmond",
  "State": "WA",
  "ZipCode": "98052",
  "EmailAddress": "bgates@microsoft.com",
  "OrderDate": 1553379518520,
  "OrderTotal": 64.36
  "OrderDetails": [
    {
      "_id": "5c96a3d06a5ce1572c7bac22",
      "UnitPrice": 6.76,
      "OrderQuantity": 1
    },
    {
      "_id": 5c96a3d16a5ce1572c7bae07",
      "UnitPrice": 4.7,
      "OrderQuantity": 5
    },
    {
      "_id": "5c96a3d06a5ce1572c7bacc1",
      "UnitPrice": 6.82,
      "OrderQuantity": 5
    }
  ]
}

Now that I have a couple of data models defined for my MongoDB collections, the next thing I want to do is create a data repository class that can be used to create a MongoDB connection. This is done by creating a custom Settings object that contains properties for both the connection string and the name of the MongoDB database. After populating these properties, the Settings object can then be passed into another class that sets up and initializes all the plumbing for the MongoDB database.
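The Settings class itself isn't shown in this excerpt; a minimal sketch, assuming it only carries the two properties just described, would be:

// Hypothetical sketch of the Settings class described above; the actual
// class ships with the article's source code download.
public class Settings
{
    public string ConnectionString { get; set; }
    public string Database { get; set; }
}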

/// <summary>
/// MongoDB Repository
/// </summary>
public class MongoDbRepository : IDataRepository, IDisposable
{
    private OnlineStoreDatabase _context;
    private IClientSessionHandle _session;
    private MongoClient _mongoClient;
    private DateTime _transactionStartDateTime;

    /// <summary>
    /// Open Connection
    /// </summary>
    public void OpenConnection()
    {
        Settings options = new Settings();
        options.ConnectionString = "mongodb://localhost:27017";
        options.Database = "OnlineStore";

        _context = new OnlineStoreDatabase(options);
        _mongoClient = _context.GetMongoClient();
    }
    
    /// <summary>
    /// Start Transaction
    /// </summary>
    public async Task StartTransaction()
    {
        _transactionStartDateTime = DateTime.Now;

        _session = await _mongoClient.StartSessionAsync();
        _session.StartTransaction();
    }

    /// <summary>
    /// Commit Transaction
    /// </summary>
    public async Task CommitTransaction()
    {
        await _session.CommitTransactionAsync();
    }

    /// <summary>
    /// Abort Transaction
    /// </summary>
    public async Task AbortTransaction()
    {
        await _session.AbortTransactionAsync();
    }

    /// <summary>
    /// Database Context
    /// </summary>
    public OnlineStoreDatabase dbConnection
    {
        get { return _context; }
    }

    /// <summary>
    /// Session property
    /// </summary>
    public IClientSessionHandle Session
    {
        get { return _session; }
    }

    /// <summary>
    /// Transaction Start Date & Time
    /// </summary>
    public DateTime TransactionStartDateTime
    {
        get { return _transactionStartDateTime; }
    }

    /// <summary>
    /// Close Connection
    /// </summary>
    public void CloseConnection()
    {
        _context = null;
    }

    /// <summary>
    /// Dispose of the session (required by the IDisposable contract)
    /// </summary>
    public void Dispose()
    {
        _session?.Dispose();
        _context = null;
    }
}

The OnlineStoreDatabase class below acts as the database context for creating the MongoDB client connection and all the database references that set up the interfaces for each collection in the database. Those familiar with Entity Framework will notice this mimics, and looks very similar to, the Entity Framework initialization process.

/// <summary>
/// Database Context
/// </summary>
public class OnlineStoreDatabase : IOnlineStoreDatabaseContext
{
    private readonly IMongoDatabase _db;
    private readonly MongoClient _client;

    public OnlineStoreDatabase(Settings options)
    {
        _client = new MongoClient(options.ConnectionString);
        _db = _client.GetDatabase(options.Database);
    }

    // Exposes the underlying client so the repository can start sessions.
    public MongoClient GetMongoClient() => _client;

    public IMongoCollection<Product> Products => _db.GetCollection<Product>("Products");
    public IMongoCollection<Order> Orders => _db.GetCollection<Order>("Orders");
}

/// <summary>
/// Online Store Database Context
/// </summary>
public interface IOnlineStoreDatabaseContext
{
    IMongoCollection<Product> Products { get; }
    IMongoCollection<Order> Orders { get; }
}
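Putting these pieces together, a caller can open a connection, run work inside a transaction, and clean up, roughly like this (a hedged usage sketch based only on the repository methods shown above; error handling omitted):

// Usage sketch for the MongoDbRepository shown above.
MongoDbRepository repository = new MongoDbRepository();

repository.OpenConnection();
await repository.StartTransaction();

// ... read and write collections through repository.dbConnection ...

await repository.CommitTransaction();
repository.CloseConnection();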

Seeding the Products Collection

Now that we have the plumbing set up to interact with the MongoDB database, we can seed the Products collection with test data. Included in the source code download is a project called CodeProject.Mongo.Import that will import a text file of product data and insert documents into the Products collection. You can execute this program by opening it in Visual Studio and pressing the run button.

Inserting data into MongoDB collections is similar to Entity Framework, too. All you have to do is populate a POCO (Plain Old CLR Object) class of your data model, in this case the Product class, and add it to the collection.

/// <summary>
/// Upload Products
/// </summary>
/// <param name="products"></param>
/// <returns></returns>
public async Task<ResponseModel<List<ProductDataTransformation>>> UploadProducts(
                                      List<ProductDataTransformation> products)
{
    ResponseModel<List<ProductDataTransformation>> returnResponse =
                       new ResponseModel<List<ProductDataTransformation>>();

    try
    {
        _onlineStoreDataService.OpenConnection();

        foreach (ProductDataTransformation productItem in products)
        {
                    
            Product product = new Product();
                
            product.ProductNumber = productItem.ProductNumber;
            product.Description = productItem.Description;
            product.LongDescription = productItem.LongDescription;
            product.UnitPrice = productItem.UnitPrice;
            product.QuantityOnHand = productItem.QuantityOnHand;
            product.QuantitySold = 0;
            
            await _onlineStoreDataService.CreateProduct(product);
        }

        returnResponse.ReturnStatus = true;
    }
    catch (Exception ex)
    {
        returnResponse.ReturnStatus = false;
        returnResponse.ReturnMessage.Add(ex.Message);
    }
    finally
    {
        _onlineStoreDataService.CloseConnection();
    }

    return returnResponse;
}

MongoDB methods support both synchronous and asynchronous processing. In the CreateProduct method below, the product object is inserted into the Products collection asynchronously. Upon insertion into the Products collection, the ObjectId for the newly inserted document is automatically populated by MongoDB.

/// <summary>
/// Create Product
/// </summary>
/// <param name="product"></param>
/// <returns></returns>
public async Task CreateProduct(Product product)
{
    await dbConnection.Products.InsertOneAsync(product);
}

Product Inquiry

The first MongoDB exercise for the sample application is to support a Product Inquiry page. The Product Inquiry page will have options to search and filter products by product number and/or by product description. There will also be options to sort the grid by selected columns in ascending or descending order. By default, product data will be returned from the server as paged data sets of 20 rows per page. There will also be options to page through the results by selecting the next page, previous page, first page or last page.

When the end user executes a request to search for products, the request will go through the usual Web API controller, through the business service layer, and ultimately to the data access layer, where the MongoDB Products collection will be filtered based on the information submitted by the user. Below is the complete ProductInquiry method in the data access layer. While coding this data access method, I found that MongoDB was able to handle all aspects needed to fulfill the search requirements of the Product Inquiry page, including paging, filtering, and sorting data in ascending or descending order.

/// <summary>
/// Product Inquiry
/// </summary>
/// <param name="productNumber"></param>
/// <param name="description"></param>
/// <param name="paging"></param>
/// <returns></returns>
public async Task<List<Product>> ProductInquiry(string productNumber, 
                                                string description, 
                                                DataGridPagingInformation paging)
{            
    string sortExpression = paging.SortExpression;
    string sortDirection = paging.SortDirection;

    int sortOrder = 1;
    if (sortDirection == "desc")
    {
        sortOrder = -1;
    }

    if (string.IsNullOrEmpty(sortExpression))
    {
        sortExpression = "{Description: 1}";
    }
    else
    {
        sortExpression = "{" + sortExpression + ": " + sortOrder + "}";
    }

    //
    //    initialize filter
    //

    FilterDefinition<Product> filter = Builders<Product>.Filter.Empty;

    productNumber = productNumber.Trim();
    description = description.Trim();

    //
    //    filter by product number
    //
    
    if (productNumber.Length>0)
    {
        filter = filter & Builders<Product>.Filter.Regex(m => m.ProductNumber,
                 new BsonRegularExpression(string.Format("^{0}", productNumber), "i"));
    }
            
    //
    //    filter by description
    //
            
    if (description.Length > 0)
    {
        filter = filter & Builders<Product>.Filter.Regex(m => m.Description,
                 new BsonRegularExpression(string.Format("{0}", description), "i"));
    }

    long numberOfRows = await dbConnection.Products.CountDocumentsAsync(filter);

    var productCollection = await dbConnection.Products.Find(filter)
            .Skip(paging.CurrentPageNumber * paging.PageSize)
            .Limit(paging.PageSize)
            .Sort(sortExpression)
            .Project(p => new
            {
                p.Id,
                p.ProductNumber,
                p.Description,
                p.LongDescription,
                p.UnitPrice,
                p.QuantityOnHand
            })
            .ToListAsync();

    List<Product> products = new List<Product>();

    foreach (var productDocument in productCollection)
    {
        Product product = new Product();

        product.Id = productDocument.Id;
        product.ProductNumber = productDocument.ProductNumber;
        product.Description = productDocument.Description;
        product.LongDescription = productDocument.LongDescription;
        product.UnitPrice = productDocument.UnitPrice;
        product.QuantityOnHand = productDocument.QuantityOnHand;

        products.Add(product);
    }

    paging.TotalRows = numberOfRows;
    paging.TotalPages = Utilities.CalculateTotalPages(numberOfRows, paging.PageSize);

    return products;
}

Filter Definitions and Builders

While working on the Product Inquiry data access method, I came across the MongoDB FilterDefinition object and its associated filter builder API. The FilterDefinition object provides the ability to apply both simple and complex filtering queries when executing a Find command against a MongoDB collection.

The FilterDefinition object also comes with a type-safe filter builder to aid in the creation of MongoDB queries. Each builder has a generic type parameter of TDocument which represents the type of document you are working with.

FilterDefinition builders are powerful. You can concatenate several builders together if you need to filter on multiple columns. The builder also accepts a BSON Regular Expression object that can be used to build queries when you want to execute traditional SQL Server-style WHERE statements that require typical searches such as LIKE, STARTS WITH, and CONTAINS search patterns. You basically supply a regular expression pattern to execute these types of queries.

Additionally, the filter builder has full support for case-insensitive searches through the standard "i" regular expression option.
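As a rough illustration (field names taken from the Product model above, search values hypothetical), here is how those SQL-style patterns might map onto the filter builder:

// STARTS WITH, like SQL: WHERE ProductNumber LIKE 'AB%' (case insensitive)
FilterDefinition<Product> startsWith = Builders<Product>.Filter.Regex(
        p => p.ProductNumber, new BsonRegularExpression("^AB", "i"));

// CONTAINS, like SQL: WHERE Description LIKE '%widget%' (case insensitive)
FilterDefinition<Product> contains = Builders<Product>.Filter.Regex(
        p => p.Description, new BsonRegularExpression("widget", "i"));

// Filters can be concatenated with the & operator.
FilterDefinition<Product> combined = startsWith & contains;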

Paging, Sorting, Projections and Document Counts

The MongoDB .NET Core driver also comes with full LINQ support for returning a page of data, setting a page size, indicating a sort expression and returning a projection of data using Skip, Limit, Sort and Project statements.

MongoDB collections also support a method called CountDocumentsAsync that allows you to count the total number of documents that match the filter criteria before you return a page of results by supplying the filter definition to the CountDocumentsAsync command.
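The Utilities.CalculateTotalPages helper referenced in the ProductInquiry method isn't shown in this excerpt; a plausible implementation, assuming a simple ceiling division, would be:

// Hypothetical implementation of the CalculateTotalPages helper.
public static int CalculateTotalPages(long totalRows, int pageSize)
{
    // Ceiling division: e.g., 45 rows at 20 per page yields 3 pages.
    return (int)Math.Ceiling((double)totalRows / pageSize);
}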

The Checkout Page and Creating a Sales Order

Once we can search for products in the sample application, we can go ahead and select items, add them to a shopping cart, check out, and place an order. On the backend, MongoDB will create a sales order document in the Orders collection while also reducing available on-hand quantities in the Products collection. This is where things start to get interesting when making things work in the MongoDB world.

Creating a Sequential Sales Order Number

When a user places an order, I wanted to create a unique, sequentially generated sales order number with each order. In Microsoft SQL Server, this can be achieved by creating an Identity column on a table, whose value increases automatically and is managed by the server.

MongoDB does not have out-of-the-box auto-increment functionality like SQL databases. By default, it uses the 12-byte ObjectId field as the primary key to uniquely identify documents.

Since this is not a default feature in MongoDB, I had to programmatically achieve this functionality by using a SequenceNumber collection as suggested by the MongoDB documentation. The data model for the SequenceNumber collection will contain two primary columns, one for a sequence key and one for the incremented sequence value. This will allow the application to use one collection for various sequence numbers that are needed for the sample application.

public class SequenceNumber
{
    [BsonId]
    public ObjectId Id { get; set; }
    public string SequenceKey { get; set; }
    public int SequenceValue { get; set; }
}

To implement an auto incrementing function, I created a GetNextSequenceNumber method in the MongoDB data access layer. The method accepts a sequence key and increments the sequence value by one for the sequence key and saves it back to the SequenceNumbers collection. During the seeding of the Products collection, the order number sequence value was also seeded and defaulted to a value of 100000.

The MongoDB .NET Core driver provides a FindOneAndUpdateAsync method that can be used to find a specific document and update it all in one atomic operation, which also takes a write lock so that only one user instance at a time can increment the order number counter.

A FindOneAndUpdateOptions object is also provided that allows you to set option values for the FindOneAndUpdateAsync method. In the sample code below, two options are set: one to tell MongoDB whether or not to insert a new document if the document doesn't exist (the IsUpsert property), and one to indicate whether to return the document from before or after the update. Since the order number sequence was already seeded, we can turn off the upsert functionality and just return the new sequence number back to the caller.

The MongoDB .NET Core driver comes with a built-in increment function that provides the functionality of incrementing document property values while updating a document. This is configured through the Update method of a builder object. This is efficient because only the incremented value is updated on the document instead of the entire document.

/// <summary>
/// Get Next Sequence Number
/// </summary>
/// <returns></returns>
private async Task<int> GetNextSequenceNumber(string sequenceKey)
{

    FilterDefinition<SequenceNumber> filter = Builders<SequenceNumber>
                .Filter.Eq(m => m.SequenceKey, sequenceKey);

    FindOneAndUpdateOptions<SequenceNumber> options = 
                            new FindOneAndUpdateOptions<SequenceNumber>();

    options.IsUpsert = false;
    options.ReturnDocument = ReturnDocument.After;

    SequenceNumber updatedSequence = await dbConnection.SequenceNumbers.
                FindOneAndUpdateAsync(filter, 
                Builders<SequenceNumber>.Update.Inc(x => x.SequenceValue, 1), options);

    return updatedSequence.SequenceValue;
}

Unique Index Keys

For additional concurrency protection, I created a unique index on the order number property in the Orders collection to ensure that a duplicate order number is never generated. This is also useful for speeding up order searches based on the order number, because the search will use the index. The sample code below uses another builder to create an IndexKeysDefinition that allows you to configure an index. There are various settings available when configuring an index; in the sample below, the index is configured to be unique. All MongoDB collections expose a collection of indexes that can be used to create indexes on the collection programmatically. The method below was executed during the initial seed program.

/// <summary>
/// Create Unique Index for Order/Order Number 
/// </summary>
public async Task CreateOrderOrderNumberUniqueIndex()
{
    IndexKeysDefinition<Order> keys = Builders<Order>.IndexKeys.Ascending("OrderNumber");

    var options = new CreateIndexOptions { Unique = true };
    var indexModel = new CreateIndexModel<Order>(keys, options);

    await dbConnection.Orders.Indexes.CreateOneAsync(indexModel);
}

Transactions and ACID Support in MongoDB 4.0

Version 4.0 of MongoDB introduced multi-document transactions. The lack of a transactional capability had been a key limiting factor for MongoDB, restricting the range of applications to which the database could be applied. The initial implementation of MongoDB transactions is limited in capability when compared to familiar relational systems, and those adopting MongoDB transactions need to be aware of these limitations. MongoDB 4.0 does provide full ACID transaction support, but multi-document transactions are available for replica set deployments only. You can use transactions on a standalone server, but you need to configure it as a replica set.

Converting a MongoDB Standalone Server to a MongoDB Replica Set

A replica set in MongoDB is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability, and are the basis for all production deployments. One and only one mongod instance is deemed the primary node, while the other nodes are deemed secondary nodes in a MongoDB replica set. In a production environment with multiple copies of data on different database servers, replication provides a high level of fault tolerance against the loss of a single database server.

When I initially installed the MongoDB Community Edition Server, it was installed as a standalone server. Standalone MongoDB servers do not support ACID transactions. I wanted to implement full commit and rollback concurrency and transaction support for the sample application, so I needed to convert my standalone MongoDB server to a replica set implementation. The first thing I needed to do was figure out how the MongoDB Server Windows service was executing. Opening up the properties of the Windows service, I discovered the following command:

"C:\Program Files\MongoDB\Server\4.0\bin\mongod.exe"
 --config "C:\Program Files\MongoDB\Server\4.0\bin\mongod.cfg"
 --service

Researching the details of the mongod.cfg configuration file, I found that I could begin converting the standalone server to a replica set by adding the replication option to the configuration file as follows:

replication:
  replSetName: OnlineStoreSet

When creating a replica set, you need to give the replica set a name, and all mongod instances must use the same replica set name. After updating the configuration file, I just needed to restart the MongoDB Server Windows service, completing the first step of creating a replica set. The next steps require executing the following in a DOS command window and then executing commands inside a MongoDB shell session:

mongod --port 27020 --dbpath "C:\MyFiles\MongoDBOnlineStoreReplication" --replSet OnlineStoreSet

// connect to the primary MongoDB instance
mongo --port 27017
rs.initiate()  // initiate the replica set on the primary (required once)
rs.add("localhost:27020")
rs.slaveOk();

// connect to the secondary MongoDB instance
mongo --port 27020
rs.slaveOk();

With the first MongoDB instance running in replica set mode, you need to start a second instance of a MongoDB server as a secondary member of the replica set using mongod on another port.

Since the primary MongoDB server runs on port 27017 by default, I needed to run the secondary server on a different port; in this case, I used port 27020, while also telling the mongod instance the folder path for the secondary MongoDB database and the name of the replica set. The cool thing that happens is that the new mongod instance will automatically create a new MongoDB database in the folder specified.

Now with two MongoDB instances running, I just needed to configure the primary instance to replicate to the secondary instance. To do this, I needed to start another MongoDB session and connect to the primary instance on port 27017 in a DOS window.

Once connected to the primary instance, I needed to execute an rs.add command and an rs.slaveOk command in the MongoDB session of the primary instance. The rs.add command adds a member to a replica set. In this case, the secondary member is running on localhost on port 27020.

When executing the rs.slaveOk command, you are indicating to the MongoDB replica set that you acknowledge that reads will be eventually consistent and you wish to be able to query the secondary instance. Now, we just need to connect to the secondary instance using the mongo command on port 27020. Once connected to the secondary instance, we issue another rs.slaveOk command on the secondary instance.

Replication set-up is now complete, and if you start creating documents in the primary database, you'll be able to see those documents replicated in the secondary database when connected to the secondary instance. For convenience, I created a DOS batch command file called StartReplicationInstance.bat that can be used to kick off the secondary instance. The primary instance will continue to start up as a Windows service, as it did when it was initially created as a standalone instance.

Now we can add full ACID transactional support to the sample application.

Creating a Sales Order and Updating Inventory - The ACID Test

Now comes the ACID test: creating a sales order and reducing the available quantity for sale for each product submitted in the shopping cart. The MongoDB documentation states that MongoDB allows multiple clients to read and write the same data, and that in order to ensure consistency, it uses locking and other concurrency control measures to prevent multiple clients from modifying the same piece of data simultaneously.

The documentation sounds good, but it seems that MongoDB only supports optimistic locking. What if a document is returned from a collection back to the data access layer, modified in the scope of a complex transaction in the memory space of the data access layer, and then written back to the MongoDB collection in a separate update operation during the transaction? It seems like two simultaneous users could still end up updating the same data, causing a loss of data integrity.

Database Isolation Levels and Acquiring an Exclusive Pessimistic Lock

Most relational database management systems offer a number of transaction isolation levels, which control the degree of locking that occurs when selecting data. For many database applications, the majority of database transactions can be constructed to avoid requiring high isolation levels (e.g., SERIALIZABLE level), thus reducing the locking overhead for the system. The programmer must carefully analyze database access code to ensure that any relaxation of isolation does not cause software bugs that are difficult to find. Conversely, if higher isolation levels are used, the possibility of a deadlock is increased, which also requires careful analysis and programming techniques to avoid.

A key aspect of database transaction processing is exclusive pessimistic row locking. MongoDB implements a locking mechanism that is somewhat different from what RDBMS programmers are used to. In a typical RDBMS transaction, an attempt to modify a record currently being changed by an uncommitted transaction results in a blocking lock. MongoDB does lock a document that is being modified by a transaction; however, other sessions that attempt to modify that document do not block. Rather, their transaction is aborted, and they are required to retry the transaction.

One of the business rules for creating a sales order in the sample application is that there must be enough available inventory for each product line item in the shopping cart before a sales order can be created. To support this, what is needed is an exclusive pessimistic document-level lock inside the business transaction, where each product document is read and locked, and the lock is held for the duration of the transaction to ensure that the quantity available for each product is properly reduced in the Products collection. This is needed because multiple users could place orders for the same products at the same time.

After reviewing the transactional and document-level locking support in MongoDB, I felt the need to implement my own SERIALIZABLE pessimistic locking strategy, so I went down the path of creating a collection named Locks, dedicated to creating and holding document-level locks. The data model for the Locks collection includes properties for the name of the collection, the entity id of the document being locked, and a date-time stamp of the lock.

public class LockRow
{
    [BsonId]
    public ObjectId Id { get; set; }
    public string Table { get; set; }
    public string EntityId { get; set; }
    [BsonRepresentation(BsonType.DateTime)]
    public DateTime CreatedAt { get; set; }
}

Adding a compound unique index on both the Table property and the EntityId property to the Locks collection will provide the controls needed for implementing a document level locking strategy. Each locked document requested will have a unique single document inserted into this collection for the duration of a transaction.

/// <summary>
/// Create Lock Index
/// </summary>
public async Task CreateLockIndex()
{
     IndexKeysDefinition<LockRow> keys =
                  Builders<LockRow>.IndexKeys
                       .Ascending("Table")
                       .Ascending("EntityId");

     CreateIndexOptions options = new CreateIndexOptions();
     options.Unique = true;

     var indexModel = new CreateIndexModel<LockRow>(keys, options);

     await dbConnection.Locks.Indexes.CreateOneAsync(indexModel);
}

The AcquireLock method below will be called prior to accessing and updating each product item. This method checks to see whether an existing lock is currently held by another process. If the document is not locked, a document will be inserted into the Locks collection, essentially taking an exclusive document-level lock.

If the document is already held and locked by another process, the AcquireLock method will wait for one second before retrying the lock request. MongoDB transactions are designed to exist only for relatively short time periods; by default, a transaction must span no more than one minute of clock time. This limitation results from the underlying MongoDB implementation.

As a result, the AcquireLock method will raise an error and force an AbortTransaction when the transaction time span has exceeded 60 seconds. MongoDB transactions are designed to be short in nature, so if you have a high-volume ecommerce application with hundreds of users, you might need to think carefully about how you implement your transactions; otherwise, it might be best to use a traditional relational database like SQL Server or Oracle.

Of course, my custom locking strategy is not a literal lock on a MongoDB document: other areas of the application that update the same documents will not know that a lock exists unless they cooperate by calling the same AcquireLock method.

/// <summary>
/// Acquire Lock
/// </summary>
/// <param name="table"></param>
/// <param name="entityId"></param>
/// <returns></returns>
public async Task<Boolean> AcquireLock(string table, string entityId)
{
    FilterDefinition<LockRow> filter = Builders<LockRow>.Filter.Eq(m => m.Table, table);

    filter = filter & Builders<LockRow>.Filter.Eq(m => m.EntityId, entityId);

    Boolean returnStatus = false;

    while (returnStatus == false)
    {
        DateTime currentDateTime = DateTime.Now;
        double secondsElapsed = (currentDateTime - this.TransactionStartDateTime).TotalSeconds;
        if (secondsElapsed >= 60)
        {
            throw new Exception("Transaction Expired");
        }

        try
        {
            LockRow lockedRow = await dbConnection.Locks.Find(filter).FirstOrDefaultAsync();
            if (lockedRow == null)
            {
                LockRow lockrow = new LockRow();
                lockrow.Table = table;
                lockrow.EntityId = entityId;
                lockrow.CreatedAt = DateTime.Now;

                await dbConnection.Locks.InsertOneAsync(lockrow);
                returnStatus = true;
            }
            else
            {
                await Task.Delay(1000);
            }
        }
        catch (Exception)
        {
            await Task.Delay(1000);
        }
    }

    return returnStatus;
}

Expire Documents after a Specified Number of Seconds

One of the cool features of MongoDB is its TTL (“time to live”) collection feature. TTL collections make it possible to store data in MongoDB and have the mongod process automatically remove data after a specified number of seconds or at a specific clock time. The TTL feature relies on a background thread in the mongod process that reads the date-typed values in the index and removes expired documents from the collection.

For the Locks collection, I added a TTL index to expire documents in the Locks collection after 60 seconds. This is a great feature and the sample application can use it to clean up the Locks collection in cases where an unhandled exception occurred during a transaction. It basically becomes a catch-all to automatically unlock all documents that have been locked for 60 seconds or more.

In the CreateLockExpiredIndex method below, the CreatedAt DateTime property of the Locks collection is indexed. The CreatedAt property gets set with the DateTime value at which the lock document was created. When a document is inserted into the Locks collection, this property is included in the TTL index, and its value is used to auto-delete the document if it still exists after 60 seconds.

/// <summary>
/// Create Lock Expired Index
/// </summary>
/// <returns></returns>
public async Task CreateLockExpiredIndex()
{
    IndexKeysDefinition<LockRow> keys = Builders<LockRow>.IndexKeys.Ascending("CreatedAt");

    CreateIndexOptions options = new CreateIndexOptions();
    options.ExpireAfter = TimeSpan.FromSeconds(60);
            
    var indexModel = new CreateIndexModel<LockRow>(keys, options);

    await dbConnection.Locks.Indexes.CreateOneAsync(indexModel);
}

Transaction Deadlocks

Using a pessimistic locking strategy like the one I’ve implemented in the sample application is vulnerable to deadlocks. When two or more sessions are waiting to acquire a lock on a document already locked by another session, a 'circular chain' can occur in which no session can complete, creating a deadlock situation.

Relational databases like SQL Server and Oracle have built-in functionality to intervene and clear up deadlocks by automatically choosing one of the sessions as the 'victim', killing it, and rolling back its transaction, allowing the other sessions to proceed.

Ideally, no user should ever be confronted with a transaction that is automatically rolled back, even if a deadlock does occur. Any application code that issues database queries should be equipped with error handling that deals with the problem gracefully, and sends the calling client a "user friendly" message. It should also have retry logic that allows the victim transaction to rerun, once the deadlock clears.
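As a rough sketch of such retry logic (the MaxRetries constant and delay values are my own illustrative choices, not part of the article's source code), a transaction body could be wrapped in a helper like this:

// Hedged sketch: rerun an aborted transaction a few times before giving up.
private const int MaxRetries = 3;

public async Task<T> RunWithRetry<T>(Func<Task<T>> transactionBody)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return await transactionBody();
        }
        catch (MongoCommandException) when (attempt < MaxRetries)
        {
            // The transaction was aborted (e.g., chosen as a deadlock
            // victim or hit a write conflict); back off and rerun it.
            await Task.Delay(250 * attempt);
        }
    }
}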

Create Sales Order with an Atomic Transaction

With some pessimistic locking functionality in place, we are now ready to create a sales order under the control of an atomic transaction using the transactional support functionality that now comes with MongoDB.

The CreateOrder method of the business service layer below performs the following to create a sales order and reduce the available quantity on hand for the products coming in from the shopping cart:

  • Open a MongoDB connection.
  • Start a MongoDB transaction.
  • Populate the Sales Order model.
  • For each product in the shopping cart, a document lock will be acquired and held until the transaction is committed.
  • For each product in the shopping cart, the quantity on hand will be reduced by the quantity sold.
  • When the available on hand quantity for any product on the order goes below zero, the entire transaction is rolled back and aborted.
  • Each line item on the shopping cart is appended to the sales order data model.
  • After all line items have been processed, the sales order is added to the Orders collection.
  • Commit the transaction.
  • Release all locks in the Locks collection.
  • When an exception occurs, the transaction is rolled back and aborted, and all locks are released.

/// <summary>
/// Create Order
/// </summary>
/// <param name="orderDataTransformation"></param>
/// <returns></returns>
public async Task<ResponseModel<OrderDataTransformation>> CreateOrder(
                                OrderDataTransformation orderDataTransformation)
{
    ResponseModel<OrderDataTransformation> returnResponse = 
                                           new ResponseModel<OrderDataTransformation>();

    PessimisticLocking pessimisticLocking = new PessimisticLocking(_onlineStoreDataService);

    try
    {
        _onlineStoreDataService.OpenConnection();

        await _onlineStoreDataService.StartTransaction();

        //
        //    Create Order
        //

        Order order = new Order();

        order.FirstName = orderDataTransformation.FirstName;
        order.LastName = orderDataTransformation.LastName;
        order.AddressLine1 = orderDataTransformation.AddressLine1;
        order.AddressLine2 = orderDataTransformation.AddressLine2;
        order.City = orderDataTransformation.City;
        order.State = orderDataTransformation.State;
        order.ZipCode = orderDataTransformation.ZipCode;
        order.EmailAddress = orderDataTransformation.EmailAddress;
        order.OrderDate = DateTime.Now;
        order.OrderTotal = 0;

        order.OrderDetails = new List<OrderDetail>();

        foreach (OrderDetailDataTransformation orderDetailTransformation in 
                 orderDataTransformation.OrderDetails.OrderBy(x=>x.ProductNumber))
        {
            string productNumber = orderDetailTransformation.ProductNumber; 
            string productId = orderDetailTransformation.ProductId;

            OrderDetail orderDetail = new OrderDetail();
            orderDetail.OrderQuantity = orderDetailTransformation.OrderQuantity;
            orderDetail.UnitPrice = orderDetailTransformation.UnitPrice;

            // compute the line item value after the detail is populated
            decimal lineItemValue = orderDetail.OrderQuantity * orderDetail.UnitPrice;

            order.OrderTotal = order.OrderTotal + lineItemValue;

            await pessimisticLocking.AcquireLock("Product", productId);

            Product product = await 
                    _onlineStoreDataService.UpdateStock(productId, orderDetail.OrderQuantity);

            if (product.QuantityOnHand < 0)
            {
                returnResponse.ReturnStatus = false;
                returnResponse.ReturnMessage.Add("Quantity not available for product " + 
                                                  productNumber);

                await _onlineStoreDataService.AbortTransaction();

                return returnResponse;
            }
            
            orderDetail.ProductId = product.Id;

            order.OrderDetails.Add(orderDetail);
        }

        await _onlineStoreDataService.CreateOrder(order);

        await _onlineStoreDataService.CommitTransaction();

        returnResponse.ReturnStatus = true;

    }
    catch (Exception ex)
    {
        await _onlineStoreDataService.AbortTransaction();

        returnResponse.ReturnStatus = false;
        returnResponse.ReturnMessage.Add(ex.Message);
    }
    finally
    {
        await pessimisticLocking.UnlockRows();

        _onlineStoreDataService.CloseConnection();
    }

    return returnResponse;
}

After successfully creating a connection to the primary MongoDB database instance, a MongoDB session can be created and started from a MongoClient instance. A session is used to group together a series of operations that are related to each other. Sessions are also used for transactions. Transactions are started, committed, or aborted using methods of IClientSession. A session can only execute one transaction at a time, but a session can execute more than one transaction as long as each transaction is committed or aborted before the next one is started. After a session has been created, a transaction can be started by executing the StartTransaction method on the session object.

private OnlineStoreDatabase _context;
private IClientSessionHandle _session;
private MongoClient _mongoClient;
private DateTime _transactionStartDateTime;

/// <summary>
/// Session property
/// </summary>
public IClientSessionHandle Session
{
    get { return _session; }
}

/// <summary>
/// Open Connection
/// </summary>
public void OpenConnection()
{
    Settings options = new Settings();
    options.ConnectionString = "mongodb://localhost:27017";
    options.Database = "OnlineStore";

    _context = new OnlineStoreDatabase(options);
    _mongoClient = _context.GetMongoClient();
}
    
/// <summary>
/// Start Transaction
/// </summary>
public async Task StartTransaction()
{
    _transactionStartDateTime = DateTime.Now;

    _session = await _mongoClient.StartSessionAsync();
    _session.StartTransaction();
}

One thing you will notice with the MongoDB .NET Core driver is that there is a builder for just about everything. In the UpdateStock method, two builders are being used: one for building a FilterDefinition and one for an UpdateDefinition. The UpdateDefinition object provides several options for updating documents in a collection. In the sample UpdateStock method, the UpdateDefinition is built with two increment updates, one for the QuantityOnHand property and one for the QuantitySold property. Increment updates make it easy to supply a value that will either be added to or subtracted from the original value stored in the document when the update executes.

Additionally, a FindOneAndUpdateOptions object is created for the FindOneAndUpdateAsync method. This method will be used to update the product document.

The FindOneAndUpdateAsync method can be configured to insert a new document if the document doesn’t exist when trying to update it. In this case, the Create Order transaction just wants to update an existing document and not insert any new documents, so the IsUpsert property is set to false.

For the Create Order transaction, the application also needs to return the document after it has been updated. Setting the ReturnDocument property to ReturnDocument.After implements this functionality. Optionally, the document could be returned before the document is updated by setting the ReturnDocument property to ReturnDocument.Before.

Finally, when executing the FindOneAndUpdateAsync method, the session object must be passed to the method, which tells it to participate within the scope of the existing transaction; thus, the updated document will not be written to the database until the transaction is committed.

The nice thing about the FindOneAndUpdateAsync method is that it will find the specified document and update it all within one trip to the MongoDB server, while also managing any write conflicts.

/// <summary>
/// Update Stock
/// </summary>
/// <param name="productId"></param>
/// <param name="quantitySold"></param>
/// <returns></returns>
public async Task<Product> UpdateStock(string productId, int quantitySold)
{            
    int quantityReduced = quantitySold * -1;

    ObjectId productIdentifier = new ObjectId(productId);

    FilterDefinition<Product> filter = Builders<Product>
                             .Filter.Eq(m => m.Id, productIdentifier);

    FindOneAndUpdateOptions<Product> options = new FindOneAndUpdateOptions<Product>();
    options.IsUpsert = false;
    options.ReturnDocument = ReturnDocument.After;

    UpdateDefinition<Product> update = Builders<Product>.Update
                           .Inc("QuantitySold", quantitySold)
                           .Inc("QuantityOnHand", quantityReduced);

    Product updatedProduct = await dbConnection.Products
                .FindOneAndUpdateAsync(this.Session, filter, update, options);

    return updatedProduct;
}

As a test to exercise a transaction rollback, you can place an order quantity for a product greater than the available on hand quantity. After updating the product document, the Create Order transaction will check to see if the quantity on hand went below zero. When this occurs, the transaction will get aborted and the document update will get rolled back and not committed to the database. Exercising this condition will verify that your MongoDB environment is properly set-up as a replica set and that transactions are working properly.

Product product = await _onlineStoreDataService.UpdateStock(productId, orderQuantity);

if (product.QuantityOnHand < 0)
{
    returnResponse.ReturnStatus = false;
    returnResponse.ReturnMessage.Add("Quantity not available for product " +  productNumber);

    await _onlineStoreDataService.AbortTransaction();

    return returnResponse;
}

After updating all the product quantities for each product in the shopping cart, the sales order can be created. Before adding a sales order to the Orders collection, a sequential order number (as previously detailed) is generated for the order. When inserting a new sales order into the Orders collection, be sure to pass the session object to the InsertOneAsync method so that the insert is included in the transaction, in case it needs to be rolled back.

Finally, the Create Order transaction can be committed to the database by issuing a CommitTransaction command on the session object, and all the documents inserted into the Locks collection can be deleted, freeing up those documents to be updated by other users.

/// <summary>
/// Create Order
/// </summary>
/// <param name="order"></param>
/// <returns></returns>
public async Task CreateOrder(Order order)
{
    order.OrderNumber = await GetNextOrderNumber();
    await dbConnection.Orders.InsertOneAsync(this.Session, order);
}

/// <summary>
/// Commit Transaction
/// </summary>
public async Task CommitTransaction()
{
    await _session.CommitTransactionAsync();
}

/// <summary>
/// Abort Transaction
/// </summary>
public async Task AbortTransaction()
{
    await _session.AbortTransactionAsync();
}

/// <summary>
/// Unlock row
/// </summary>
/// <param name="table">the logical table name of the locked entity</param>
/// <param name="entityId">the id of the locked entity</param>
/// <returns></returns>
public async Task<bool> UnLockRow(string table, string entityId)
{
    // Remove the lock document so other users can update the entity.
    await dbConnection.Locks
          .DeleteOneAsync(filter: p => p.Table == table && p.EntityId == entityId);

    return true;
}
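
Putting these pieces together, the overall flow looks roughly like the sketch below. This is a simplified, hypothetical composition of the data service methods shown above; it assumes the data service also exposes a StartTransaction method that begins the session transaction (the session set-up was covered earlier in this article), and the "Products" lock-table name is illustrative.

/// <summary>
/// Hypothetical sketch: composing the Create Order transaction
/// </summary>
public async Task<ResponseModel<Order>> PlaceOrder(Order order)
{
    ResponseModel<Order> returnResponse = new ResponseModel<Order>();

    _onlineStoreDataService.OpenConnection();
    _onlineStoreDataService.StartTransaction();   // assumed session helper

    try
    {
        foreach (OrderDetail orderDetail in order.OrderDetails)
        {
            Product product = await _onlineStoreDataService
                .UpdateStock(orderDetail.ProductId.ToString(), orderDetail.OrderQuantity);

            if (product.QuantityOnHand < 0)
            {
                // Roll back every stock update made so far.
                await _onlineStoreDataService.AbortTransaction();
                returnResponse.ReturnStatus = false;
                returnResponse.ReturnMessage.Add(
                    "Quantity not available for product " + product.ProductNumber);
                return returnResponse;
            }
        }

        await _onlineStoreDataService.CreateOrder(order);
        await _onlineStoreDataService.CommitTransaction();

        returnResponse.Entity = order;
        returnResponse.ReturnStatus = true;
    }
    catch (Exception ex)
    {
        await _onlineStoreDataService.AbortTransaction();
        returnResponse.ReturnStatus = false;
        returnResponse.ReturnMessage.Add(ex.Message);
    }
    finally
    {
        // Release the pessimistic locks taken while the order was built.
        foreach (OrderDetail orderDetail in order.OrderDetails)
        {
            await _onlineStoreDataService.UnLockRow(
                "Products", orderDetail.ProductId.ToString());
        }

        _onlineStoreDataService.CloseConnection();
    }

    return returnResponse;
}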

Sales Order Inquiry and SQL JOINs

One of the biggest differences between SQL and NoSQL databases is the SQL JOIN clause. In relational databases, the SQL JOIN clause allows you to combine rows from two or more tables using a common field between them.

Document-oriented databases such as MongoDB are designed to store denormalized data. Ideally, there should be no relationships between collections; if the same data is required in two or more documents, it must be repeated or duplicated. Of course, with careful planning and design, you can still mitigate much of this duplication in a NoSQL database by performing your own joins in memory, selecting from one collection only those documents needed by another.
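
It is worth noting that MongoDB is not entirely without a join facility: the aggregation pipeline's $lookup stage can perform a left outer join on the server. The sample application joins in memory instead, but for reference, here is a minimal sketch; the collection and field names are illustrative, and the results come back as BsonDocuments:

// Illustrative server-side left outer join with $lookup.
// Each order gains a "ProductDocs" array of matching product documents.
var ordersWithProducts = await dbConnection.Orders
    .Aggregate()
    .Lookup("Products", "OrderDetails.ProductId", "_id", "ProductDocs")
    .ToListAsync();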

In the sample application, a Sales Order Inquiry page displays a list of sales orders. The page executes the GetOrders method of the business service layer, shown below, to return the list of sales orders to display to the user. This method makes two separate calls to the MongoDB server: one to return the list of sales orders and one to get the product information for the products appearing in those orders. Once both collections are returned from MongoDB, the sales order list is joined in memory with the product list, and the combined data is returned in a response object.

/// <summary>
/// Get Orders
/// </summary>
/// <returns></returns>
public async Task<ResponseModel<List<OrderInquiryDataTransformation>>> GetOrders()
{
    ResponseModel<List<OrderInquiryDataTransformation>> returnResponse =
                       new ResponseModel<List<OrderInquiryDataTransformation>>();

    List<OrderInquiryDataTransformation> orderList = new List<OrderInquiryDataTransformation>();

    try
    {
        _onlineStoreDataService.OpenConnection();

        List<Order> orders = await _onlineStoreDataService.GetOrders();

        List<ObjectId> productList = (
                       from o in orders
                       from od in o.OrderDetails
                       select od.ProductId).Distinct().ToList();

        List<Product> products = await _onlineStoreDataService.GetProducts(productList);

        foreach (Order order in orders)
        {
            foreach (OrderDetail orderDetail in order.OrderDetails)
            {
                OrderInquiryDataTransformation orderInquiryDataTransformation = 
                                               new OrderInquiryDataTransformation();

                orderInquiryDataTransformation.CustomerName = order.CustomerName;
                orderInquiryDataTransformation.OrderDate = order.OrderDate;
                orderInquiryDataTransformation.OrderNumber = order.OrderNumber;
                orderInquiryDataTransformation.UnitPrice = orderDetail.UnitPrice;
                orderInquiryDataTransformation.OrderQuantity = orderDetail.OrderQuantity;

                Product product = products
                                 .FirstOrDefault(x => x.Id == orderDetail.ProductId);

                if (product != null)
                {
                    orderInquiryDataTransformation.ProductNumber = product.ProductNumber;
                    orderInquiryDataTransformation.Description = product.Description;
                }

                orderList.Add(orderInquiryDataTransformation);
            }
        }

        returnResponse.Entity = orderList;
        returnResponse.ReturnStatus = true;
    }
    catch (Exception ex)
    {
        returnResponse.ReturnStatus = false;
        returnResponse.ReturnMessage.Add(ex.Message);
    }
    finally
    {
        _onlineStoreDataService.CloseConnection();
    }

    return returnResponse;
}

In the design of the Orders collection, I combined the order header information with the order detail information, forming a one-to-many relationship within the same document. This design deviates from the traditional relational database design of having two separate tables (one for the header and one for the details) to join on. Combining these two pieces of data into the same document saves a separate trip to the MongoDB server when retrieving sales orders. I stopped short of also duplicating product detail information, such as product number and description, in the sales order details node; had I done so, there would have been no need for a second trip to the MongoDB server at all. In MongoDB, document design is a trade-off between storing extra data on one side and improved performance and reduced complexity on the other.
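
In code, this embedded one-to-many shape looks roughly like the model sketch below. The property names follow the code in this article, while any property types not shown earlier (such as OrderNumber) are assumptions:

// Sketch of the embedded order document: the order header and its
// detail lines live together in a single Orders document.
public class Order
{
    public ObjectId Id { get; set; }
    public long OrderNumber { get; set; }   // assumed type for the sequential number
    public string CustomerName { get; set; }
    public DateTime OrderDate { get; set; }

    // One-to-many: detail lines are embedded, not stored in a second collection.
    public List<OrderDetail> OrderDetails { get; set; } = new List<OrderDetail>();
}

public class OrderDetail
{
    public ObjectId ProductId { get; set; }
    public int OrderQuantity { get; set; }
    public decimal UnitPrice { get; set; }
}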

Careful design of your document structure is still required, however. In MongoDB, the default maximum BSON document size is 16 megabytes, so you need to understand the nature of your data and its potential growth in terms of document size. The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. Of course, MongoDB provides options, such as GridFS, that allow you to store content larger than the maximum document size.
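
For example, GridFS (in the MongoDB.Driver.GridFS package) chunks large content across documents behind a simple upload/download API. A minimal sketch, assuming an IMongoDatabase reference named database and a byte array payload named largePayload:

// GridFS stores the payload as chunked documents (fs.files / fs.chunks),
// sidestepping the 16 MB single-document limit.
IGridFSBucket bucket = new GridFSBucket(database);

ObjectId fileId = await bucket.UploadFromBytesAsync("catalog.pdf", largePayload);
byte[] roundTripped = await bucket.DownloadAsBytesAsync(fileId);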

/// <summary>
/// Get Orders
/// </summary>
/// <returns></returns>
public async Task<List<Order>> GetOrders()
{
    // JSON sort expression: newest orders first.
    string sortExpression = "{OrderDate: -1}";
    FilterDefinition<Order> filter = Builders<Order>.Filter.Empty;
    List<Order> orders = await dbConnection.Orders.Find(filter)
                               .Sort(sortExpression).ToListAsync();
    return orders;
}
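
The JSON string passed to Sort works because the driver converts it to a SortDefinition. The same sort can also be expressed with the typed sort builder, which is checked at compile time; a small equivalent sketch:

// Equivalent typed sort: newest orders first.
SortDefinition<Order> sort = Builders<Order>.Sort.Descending(o => o.OrderDate);

List<Order> orders = await dbConnection.Orders
                           .Find(Builders<Order>.Filter.Empty)
                           .Sort(sort)
                           .ToListAsync();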

After retrieving the list of orders from the MongoDB server, each distinct product id is added to a generic list of unique product keys, which is passed to the GetProducts method of the data access layer.

A FilterDefinition object is then created using the Filter.In method; this essentially simulates a SQL WHERE IN clause over the ObjectIds of all the products appearing on sales orders. The FilterDefinition is then used in a Find command on the Products collection to retrieve the product documents needed to complete the sales order data returned to the client for display.

/// <summary>
/// Get Products
/// </summary>
/// <param name="productList"></param>
/// <returns></returns>
public async Task<List<Product>> GetProducts(List<ObjectId> productList)
{    
    FilterDefinition<Product> filter = Builders<Product>.Filter.In(p => p.Id, productList);

    List<Product> products = await dbConnection.Products.Find(filter).ToListAsync();
    return products;
}

Summary

So, there you have it: MongoDB in a nutshell, at a high level. MongoDB is an interesting NoSQL database with a fast, lightweight look and feel. Of course, MongoDB comes with many features not covered in this article that make it a great solution for storing, processing and retrieving large amounts of data.

MongoDB also comes with an Aggregation Framework that allows you to perform aggregation operations efficiently. A major and very common problem for a growing web application is scaling; to address this, MongoDB offers sharding, one of its key features. Sharding is a method for distributing data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high-throughput operations, making horizontal scalability possible.
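
As a small taste of what the Aggregation Framework looks like from the C# driver, the sketch below groups the Orders collection by customer and counts the orders per customer. It is illustrative only and reuses the field names from this article:

// Illustrative aggregation: number of orders placed per customer.
var ordersPerCustomer = await dbConnection.Orders
    .Aggregate()
    .Group(o => o.CustomerName,
           g => new { Customer = g.Key, OrderCount = g.Count() })
    .ToListAsync();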

MongoDB supports field queries, range queries and regular expression searches. Queries can return specific fields of documents as projections. Additionally, MongoDB supports document indexing, data replication and geospatial indexing, which allows users to store and search on x and y coordinates within documents.
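
Creating an index through the C# driver is a one-liner. The seed program described later creates the sample application's actual indexes; the sketch below is simply an illustration of the API, and the unique option is an assumption:

// Illustrative index: ascending on ProductNumber, enforced unique.
IndexKeysDefinition<Product> keys =
    Builders<Product>.IndexKeys.Ascending(p => p.ProductNumber);

await dbConnection.Products.Indexes.CreateOneAsync(
    new CreateIndexModel<Product>(keys, new CreateIndexOptions { Unique = true }));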

At the end of the day, picking a database solution for your application is a lot like picking between apples and oranges. You need to understand your application's use case and its potential data, performance and storage requirements. With its new concurrency and transactional support, valid use cases for MongoDB will expand. But if your application requires the strongest transactional guarantees, a traditional relational database management system may still be the way to go.

Prerequisites and Running the Sample Application

Between Angular 6, .NET Core and MongoDB, there are a lot of moving parts to install and configure to get the sample application up and running. The sample application also consists of three Visual Studio 2017 projects.

In an attempt to make getting the sample application up and running in your local development environment as painless as possible, I have outlined below the prerequisites and installation steps.

Software installation prerequisites:

  • MongoDB 4.0 or greater
  • Visual Studio 2017 Professional or Community Edition
  • .NET Core 2.2 - SDK version 2.2.106
  • NodeJS 10.13.0 or greater
  • Angular CLI 6

To get the sample application running, the following steps need to be executed:

  • Install the MongoDB Server as a Service - Download The MongoDB Community Server version 4.0 from their download page. Starting in version 4.0, you can install and configure MongoDB as a Windows Service during the install, and the MongoDB service is started upon successful installation. MongoDB is configured using the configuration file in the installation bin directory.
  • Convert the MongoDB Standalone Server to a Replica Set - To support transactions, convert the MongoDB installation to a replica set; following the instructions mentioned previously in this article.
  • Download the Sample Application Source Code - The source code for the sample application can be downloaded from the download source code link at the top of this article.
  • .NET Core 2.2 - When you download and install Visual Studio 2017 Professional or Community Edition, .NET Core 2.2 should be installed automatically with Visual Studio. If you already have Visual Studio 2017, you can verify your installation by going to the Tools menu and selecting Get Tools and Features, which starts the Visual Studio Installer; from the installer options, you can verify that .NET Core 2.2 has been installed. If you need to download the .NET Core SDK for Visual Studio 2017, choose SDK version 2.2.106 for Windows 64-bit operating systems from the .NET Core SDK download page.
  • Angular CLI 6.0.8 - The Angular 6 front-end application is built and served through the Angular CLI. You can verify your Angular CLI installation by running the Angular CLI command: ng version. If the Angular CLI is not installed, you can install it from the command window by typing npm install -g @angular/cli@6.0.8.
  • Build and Run the Sample Application .NET Core Web API Project - To verify everything has been installed correctly, compile the sample application's Web API project, CodeProject.Mongo.WebAPI. Be prepared to wait a minute or two when first opening and building these projects, because Visual Studio 2017 needs to restore the packages required to compile them.
  • SSL - The Web API project was configured to use SSL. To avoid SSL issues, run the project by selecting the IISExpress profile and pressing the run button; ASP.NET Core will create an SSL certificate. Depending on your development machine, Visual Studio may ask whether you would like to trust the self-signed certificate that ASP.NET Core has generated; choose yes to trust the certificate. Because Visual Studio is Visual Studio, you might have to run the project a second or third time, or exit and reload Visual Studio, before everything works. When running the project from Visual Studio, the browser should launch and display output from the Values controller.
  • Build and Run the Seed Program - To populate the Products collection with test data for the sample application, build and run the .NET Core console application CodeProject.Mongo.Import. This program also creates the indexes needed to support the functionality of the sample application.
  • Build the Angular 6 front-end application - The Angular 6 front-end application depends on node modules being installed in the project's node_modules folder. To install them, go to the Portal folder and open the Visual Studio 2017 solution file CodeProject.Mongo.Portal.sln. After opening the project, right-click on the package.json file and select Restore Packages. If there is a package-lock.json file, delete it first; otherwise, the packages will not get restored. Alternatively, you can execute npm install in a command window from the application's portal folder. Once the packages have been installed, build the Angular 6 project with the Angular CLI: open a command window, navigate to the CodeProject.Mongo.Portal folder and execute ng build.
  • Troubleshooting and fixing Node.js and npm issues - From time to time, Node.js and the Node Package Manager (npm) end up having strange issues. These issues can sometimes be resolved by simply running npm cache clean, or by running npm install with the --verbose option to see more details. As a last resort, you can uninstall Node.js, delete the npm and npm-cache folders under %APPDATA%, and reinstall the latest Long-Term Support (LTS) version of Node.js.
  • Start-up the Angular 6 CLI Web Server - Once the Angular 6 front-end application is built, you can start the project using the Angular CLI ng serve command. Since the Angular CLI now uses webpack for website development, ng serve starts a Node.js Express server in the background.
  • Launch the sample application - To run the sample application, enter http://localhost:4200 in your browser.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


About the Author

Mark J. Caplin
Software Developer, Joey Software Solutions
United States
Mark Caplin has specialized in Information Technology solutions for the past 30 years, focusing on full life-cycle development projects for both enterprise-wide systems and Internet/Intranet based solutions.

For the past fifteen years, Mark has specialized in the Microsoft .NET framework using C# as his tool of choice. For the past four years Mark has been implementing Single Page Applications using the Angular platform.

When not coding, Mark enjoys playing tennis, listening to U2 music, watching Miami Dolphins football and watching movies in Blu-Ray technology.

In between all this, his wife of over 25 years feeds him well with some great home-cooked meals.

You can contact Mark at mark.caplin@gmail.com
