Remember my post “Five reasons not to directly query SharePoint databases”? Throughout it, I cautioned you more than once NOT to directly query SharePoint databases, and I listed the disadvantages and problems of doing so. If you haven’t read that post, I encourage you to do so before carrying on.
I’m not contradicting myself! That first post was all about SharePoint 2007, but now I’m talking about the new and amazing 2010 version of SharePoint.
So you might be asking, “What the hell is the SharePoint Logging Database?” Good question!
To answer you, please fire up SQL Server Management Studio and expand your databases. You’ll notice a new one named WSS_Logging.
SharePoint 2010 keeps track of everything it does by logging to the WSS_Logging database. It aggregates all of the raw logging data accumulated in the text files under the 14 hive and imports it into this wonderful logging database. This is the ONLY database in SharePoint that Microsoft is happy to let developers directly read, query and build reports against. There is a bunch of useful views at your disposal; the one I will show you now is the “RequestUsage” view.
Every time a user visit generates a page request, a record is inserted into one of the partitioned tables in this database, and the “RequestUsage” view is kind enough to union all the data in those partitioned tables and present it for you to consume in your custom solutions (Web Parts, reports, application pages, …). An example is shown below:
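For instance, a quick look at the view might go like this (a minimal sketch: WSS_Logging is the default database name and may differ in your farm, and the view’s exact columns are best discovered by expanding it in Management Studio before picking the ones you need):

```sql
-- "WSS_Logging" is the default logging database name; yours may differ.
USE [WSS_Logging];

-- Peek at recent page-request records from the RequestUsage view.
-- SELECT * is used here only because the available columns vary;
-- inspect the view's definition and select the columns you need.
SELECT TOP (10) *
FROM [dbo].[RequestUsage];
```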
Let’s dive a little bit deeper to see what happens behind the scenes and where this data comes from.
- Navigate to SharePoint 2010 Central Administration > Monitoring > Configure usage and health data collection.
- Now let’s configure the data collection by specifying which events to log to the text files under the 14 hive. Use the screenshots below to configure your own SharePoint system.
- Did you notice the “Log Collection Schedule” section? This implies that there is a timer job that collects the log files located under the 14 hive and copies the events you specified into your logging database, which can later be used for reporting purposes. You can even schedule this timer job based on the load patterns of your server, as you will see in the next step.
- I have opened up my favorite troubleshooting tool (SharePoint Manager 2010) to track this job. As you can see in the figure below, I have configured the “Microsoft SharePoint Foundation Usage Data Import” job from Central Administration to run every minute.
- Out of curiosity, I decided to use .NET Reflector to check out how this timer job works, and I noticed two things.
The first is that the job lock type specified in the constructor is SPJobLockType.None, which instructs the Timer Service to run this job on all the Web Front Ends in the farm. This makes sense! The difference between the Job and None lock types is that Job ensures the timer job runs on only one server in the farm, while None lets the job run on every server.
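To make the lock types concrete, here is a minimal sketch of a custom timer job definition (the class name and job title are hypothetical; only the SPJobDefinition base class, its constructor, and the SPJobLockType values come from the SharePoint object model):

```csharp
using System;
using Microsoft.SharePoint.Administration;

// Hypothetical timer job, shown only to illustrate the lock types.
public class SampleUsageJob : SPJobDefinition
{
    // Parameterless constructor required for serialization.
    public SampleUsageJob() : base() { }

    public SampleUsageJob(string name, SPWebApplication webApplication)
        // SPJobLockType.None: the job runs on every Web Front End.
        // SPJobLockType.Job would restrict it to a single server.
        : base(name, webApplication, null, SPJobLockType.None)
    {
        Title = "Sample Usage Job";
    }

    public override void Execute(Guid targetInstanceId)
    {
        // The Usage Data Import job does its copying from here;
        // a custom job would do its own work in this method.
    }
}
```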
The second is the ImportUsageData() method inside the Execute method, which is called when the timer job runs; this method is responsible for copying the events from the log files into your logging database. You can drill into this method further if you need to know more.
So, what are those four benefits I’m talking about?
- It’s fully supported by Microsoft to directly read, query and build reports from the logging database. Third-party applications can even write their own data back to it.
- It’s enabled by default on all SharePoint deployments.
- The retention policy is customizable, allowing you to control how much data you accumulate (14 days by default, but this can be modified using PowerShell).
- The schema will be documented, which will make it much easier to work with.
I would also like to point out that the Logging Database forms the basis for a lot of usage and health reporting. For example, the Web Analytics features rely heavily on it: they take data out of the logging database, do some additional processing on it, put the results into the analytics database, and generate reports from that.
- 6th April, 2010: Initial post