Instrumentation, the process of auditing and logging in your applications, is a vital part of any enterprise solution. When you are developing applications with Service Level Agreements and specific auditing requirements, logging is a big deal. Add in the complexity of the decoupled nature of the cloud and Service Oriented Architecture, and piecing together a clear view of a business process from end to end can be a challenge.
Microsoft has recently added the Semantic Logging Application Block (SLAB) to the Enterprise Library. This article forms the first part of a mini-series detailing the benefits, pitfalls and examples of how to make use of it in your enterprise. Although new to the Enterprise Library, semantic logging is not a new concept. It has been adopted for years under different names such as "structured logging", "strongly typed logging" and "schematised logging", to name a few.
What is it?
SLAB inherits its event structure from the Event Tracing for Windows (ETW) pipeline, allowing it to store the payload of an event along with its metadata; in other words, the data travels with a description of its own format. ETW has built-in support for logging a stack trace along with the event and can also be extended, enabling you to add your own event data. SLAB's secret sauce is that it makes light work of using a reliable and robust logging infrastructure that has been around since Windows 2000, abstracting us as developers from the messy details of manifests and publishing to ETW (which are typically overly complex) so that we can focus on what we want to log and where we want to store those logs.
What you want to log
The abstraction of what you want to log comes in the form of an EventSource. You create a class which inherits from the EventSource class and decorate a set of methods with attributes which determine the following:
- The parameters the log method expects
- The Logging Level
- The Event Id
- The Task (optional)
- Operational Codes (optional)
ETW will also provide a TimeStamp for you.
This upfront design approach allows you to separate your design of what will be logged into a discrete, specific set of methods that will be called from your business logic.
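To illustrate, a single log method decorated with this information might look like the following. This is a sketch, not taken from the article's sample; the class, method and task names are hypothetical, and it assumes the standard .NET `EventSource` API from `System.Diagnostics.Tracing`:

```csharp
using System.Diagnostics.Tracing;

public class OrderProcessingEventSource : EventSource
{
    // Everything the list above describes is declared up front in the attribute:
    // Event Id = 100, Level, Task (optional) and Opcode (optional).
    [Event(100, Level = EventLevel.Informational,
           Task = Tasks.SubmitOrder, Opcode = EventOpcode.Start,
           Message = "Order {0} submitted for customer {1}")]
    public void OrderSubmitted(int orderId, string customerName)
    {
        // The first argument to WriteEvent must match the Event Id in the attribute.
        if (IsEnabled()) WriteEvent(100, orderId, customerName);
    }

    // By EventSource convention, custom tasks live in a nested class named Tasks.
    public class Tasks
    {
        public const EventTask SubmitOrder = (EventTask)1;
    }
}
```

The business logic then calls `OrderSubmitted(42, "Contoso")` and never touches ids, levels or opcodes; ETW supplies the timestamp.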
The downside to this approach, as you will find when you start using it in earnest, is the number of different methods you have to create in order to log each and every log or audit event. The current docs recommend splitting your EventSources into partial classes to help with manageability. My current preference is to create an EventSource for each business process, which I will demonstrate shortly.
Where you want to log
To store a log with SLAB you need to configure a listener and one or more Event Sinks. An event sink receives events, by default as an EventEntry type, and has the responsibility of storing the event data in a text file, event trace, database or other storage medium. Out of the box the SLAB contains the following sinks:
- Flat File
- Rolling File
- SQL Database
- Windows Azure Table
You can configure sinks either in code or in XML configuration. Which you use is determined by whether you are listening for events 'In Process' or 'Out of Process'.
When using the SLAB in-process, all configuration is performed in code. You first create an ObservableEventListener (an IObservable<EventEntry> by default) and subscribe to the listener with one or more event sinks.
Note: if you are using the block in-process, the ETW infrastructure is not used to collect events, but the block does use the same semantic approach to logging as ETW.
For the purposes of our sample, we will use in process.
When using the SLAB out-of-process, all configuration is performed in XML. A Windows service acts as a listener which receives events from ETW, and configuration sets up the subscriptions for the event sinks. The event sinks are then hosted by the Windows service.
Note: the next article will highlight the configuration and benefits of using out-of-process.
The code sample associated with this article contains a console application which makes use of the SLAB in process and logs to the console.
At this time I envisage at least three parts consisting of:
- Part 1 - Semantic Logging Application Block - Microsoft Enterprise Library 6 (In Process)
- Part 2 - Semantic Logging in Windows Azure (Out of Process)
- Part 3 - Extending Semantic Logging - Creating Custom Sinks for Windows Azure
Additional Reading - What is the Enterprise Library?
If you have used other blocks from the Enterprise Library before, feel free to skip this section.

"Enterprise Library is made up of a series of application blocks, each aimed at managing specific crosscutting concerns. In case this concept is unfamiliar, crosscutting concerns are those annoying tasks that you need to accomplish in several places in your application. When trying to manage crosscutting concerns there is often the risk that you will implement slightly different solutions for each task at each location in your application, or that you will just forget them altogether. Writing entries to a system log file or Windows Event Log, and validating user input are typical crosscutting concerns. While there are several approaches to managing them, the Enterprise Library application blocks make it a whole lot easier by providing generic and configurable functionality that you can centralise and manage. What are application blocks? The definition we use is "pluggable and reusable software components designed to assist developers with common enterprise development challenges." Application blocks help address the kinds of problems developers commonly face from one line-of-business project to the next. Their design encapsulates the Microsoft recommended practices for Microsoft .NET Framework-based applications, and developers can add them to .NET-based applications and configure them quickly and easily." - Direct quote: Enterprise Library Developers Guide.

The diagram below, taken from the Enterprise Library Developers Guide (preview), shows the different blocks, their dependencies, and highlights the block we will be focusing on (Figure 1):
Why another Logging Framework?
Like me, you have probably been using logging frameworks such as log4net, NLog and the Logging Application Block for many years. So why another framework, when Microsoft in particular already has the Logging Application Block? There are a lot of similarities in terms of features when compared to other frameworks, including logging events to multiple destinations, filtering, and formatting. The real power of semantic logging comes from the way SLAB events get written to the logs. Your events are strongly typed, which ensures a consistent format and structure; there is no chance of a developer making a coding error when the log event is written, such as passing in the wrong event ID or verbosity. It is easier to analyse and query the log results afterwards due to their strongly typed nature and formatting. It becomes easier to consume log data from other applications. You can more easily see the big picture in terms of correlation between business processes. Automation can take place in a much cleaner way, as the available sinks can utilise database and cloud storage mechanisms, preserving the actual state, that are already in use and being actively consumed by existing applications.
Responsibility of concerns
While traditional frameworks are great and easy to adopt in your applications, much of the responsibility for logging rests with the developers. For example, setting the logging level, or deciding what gets logged, is constrained by what the developer writing a piece of logic at the time deemed to be appropriate. In a world of auditing, where systems have specific checkpoints and requirements, this responsibility can very easily miss some vital information, purely because a new developer adjusting some existing code never fully understood the context, or simply made a mistake in choosing the type of event to be logged. Semantic logging removes this responsibility from the developers consuming the logging object and places it with the design of the logging requirements.
Rich Data - keeping the context
One of the biggest problems I have experienced when dealing with logs is their flattened nature and the different types of information recorded for each log entry. Some class with some method has logged an event containing information that is very specific to the operation, and very often not generalised enough to give an end-to-end view of a business process when viewing the log data that has been recorded.
Putting some meat on the bone (what do I need to do as a dev?)
I created a simple console app to demonstrate using the SLAB in process.
The first thing you will want to do is add references to the SLAB. The recommended approach is to install a package via NuGet as shown below:
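If you prefer the Package Manager Console to the NuGet dialog, the equivalent command is shown below (the package id is correct at the time of writing):

```powershell
PM> Install-Package EnterpriseLibrary.SemanticLogging
```

This brings in the core block; the SQL and Windows Azure sinks ship as separate packages.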
Click 'Accept' to the license agreement and after a few moments you will see the following added to your references:
Designing an EventSource (The What)
We first create a class which derives from EventSource. EventSource can be found in System.Diagnostics.Tracing, so you will need to add a using statement as well.
The next step is to define a method that will represent a log. This is simply a method with a void return type that takes as arguments one or more of the supported CLR types (primitives such as int, long and string, plus a few others such as Guid and enums). Surprisingly, as of writing this article, the DateTime type is not supported, and neither are complex types.
After you have defined your method(s), you need to decorate them with the [Event] attribute and add the necessary values. The minimum requirement is to specify the Event ID.
Note: the same value needs to be passed to the WriteEvent() method call inside the body of the method, as shown in the sample.
The EventSource from the sample code looks like the following:
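The original listing is an image in the source, so here is a reconstruction from the description of the sample (the class name, event source name and method names are illustrative):

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "SemanticLoggingDemo")]
public class DemoEventSource : EventSource
{
    // Singleton instance exposed via a public static property (see below).
    private static readonly DemoEventSource Instance = new DemoEventSource();
    public static DemoEventSource Log { get { return Instance; } }

    [Event(1, Message = "Application starting", Level = EventLevel.Informational)]
    public void ApplicationStarting()
    {
        // The event id passed to WriteEvent must match the [Event] attribute.
        if (IsEnabled()) WriteEvent(1);
    }

    [Event(2, Message = "Application ending", Level = EventLevel.Informational)]
    public void ApplicationEnding()
    {
        if (IsEnabled()) WriteEvent(2);
    }
}
```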
The sample demonstrates two log events: Start and End application. Although the sample is very basic, the real challenge starts to emerge as your event requirements grow. Future articles will address this in the samples.
The next step is to enable the EventSource to be consumed by providing an instance of it. The simplest way of achieving this is to create a singleton instance of the EventSource and expose it via a public static property, as shown in the code sample.
Logging an Event
Now that the design of our events is complete, it's time to put on the developer's hat and log some events on our event source, as shown below:
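A minimal sketch of the calling code, assuming an EventSource class named DemoEventSource that exposes a Log singleton with ApplicationStarting and ApplicationEnding event methods (all hypothetical names):

```csharp
class Program
{
    static void Main(string[] args)
    {
        // DemoEventSource.Log is the singleton exposed by the event source design.
        DemoEventSource.Log.ApplicationStarting();

        // ... application work ...

        DemoEventSource.Log.ApplicationEnding();
    }
}
```

Notice that the caller supplies no event ids, levels or destinations; those were all fixed at design time.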
At this point, we have performed all the upfront tasks that are required. A developer could start logging the points in the application that are appropriate for each event without needing to know any of the internals, such as what logging levels have been set, the event id that is associated with the event, or even where the events will be stored. All this is taken care of by the design of the EventSource. It may be that this level of detail has not yet been decided upon, and yet development can move ahead seamlessly.
Consuming events (The where)
Before any events can be stored anywhere, an event source listener needs to be set up and sinks need to subscribe to it. For our sample we will use the Console sink, but first we need to create a listener, as shown below:
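A sketch of the in-process wiring, assuming the SLAB console sink extension method LogToConsole and the hypothetical DemoEventSource from earlier:

```csharp
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

class Program
{
    static void Main(string[] args)
    {
        // In-process: create an observable listener and attach the console sink.
        var listener = new ObservableEventListener();
        listener.LogToConsole();

        // Enable the events we designed; level and keywords act as a filter.
        listener.EnableEvents(DemoEventSource.Log, EventLevel.LogAlways, EventKeywords.All);

        DemoEventSource.Log.ApplicationStarting();
        // ... application work ...
        DemoEventSource.Log.ApplicationEnding();

        // Detach and flush the listener before exit.
        listener.DisableEvents(DemoEventSource.Log);
        listener.Dispose();
    }
}
```

Swapping the Console sink for a Flat File or SQL Database sink is just a different extension method on the same listener.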
After running the console application we can see two events being output, showing the Event Id, Message, Event Name and Level.
The Console output can be seen below:
(Figure: Semantic Logging Console, Part 1)
References (recommended reading)
Does Semantic Logging solve all our logging desires?
In two words: kinda, and it depends. It goes a long way toward separating concerns, enabling a true design of auditing and logging needs, and making that process a first-class citizen. If you have specific auditing requirements that can map individually to specific logging methods, it gets you a long way out of the box. If you want a more generic solution, it's not as flexible yet, although you can create custom event sources to map onto logging frameworks which take a level and an exception, for example. That said, you can always come up with creative ways to bend any technology to suit where necessary. Semantic logging has enough extensibility points to customise what you need, and I aim to highlight those in future articles as mentioned.
It's definitely a shift in thinking and forces more thought into designing logs before consumption, which I am certainly in favour of from certain design perspectives. Also it separates instrumentation from your business logic and enables the two tasks to run in parallel, useful for larger teams.
Deriving an EventSource from an interface caused me a few problems, so I left it out of the sample code; the ordering of the methods appears to be crucial. This makes testing a bit of a pain, although there are some specific test methods for an EventSource which help to validate it out of the box. These will be detailed in the next article.
Bear in mind this is a version 1 addition to the block, so we can hopefully expect some improvements to authoring, configuration and features.
More to come soon...