This is a showcase review for our sponsors at CodeProject. These reviews are intended to provide you with information on products and services that we consider useful and of value to developers.
DBAs and database developers are a breed apart from other software engineers. For an application software engineer, a deadline can often be viewed as the flag in a three-way tug-of-war, typically between development, testing, and marketing. As long as you pull (or dig your heels in) hard enough, you can expect to shift the deadline in your favour.
But for DBAs and Development DBAs, deadlines retain their original meaning. As with Time and Tide, databases, in general, wait for no man. The rate at which customer (or indeed supplier) data flows into a database is not up to the DBA. It’s up to the customer; and although the precise timing of when to execute an upgrade to the production database may be up to the DBA, this decision is driven largely by the rate and volume of business transactions taking place.
If you are a software company selling over the web, the contents of your customer database are undoubtedly more valuable than the source-control database where you stash your products. After all, it might take a year to re-engineer the software product line from scratch. It could easily take five years to re-grow the customer data. It’s this data that is the company’s real crown jewels – the river of gold – that is entrusted to the steady hands of the DBA.
The way the commercial world tends to work is that the more responsibility you shoulder, the more executive autonomy you can demand in return.
Let’s say you’re a deep sea engineer, doing precision welds on oil platforms at thirty fathoms. As a professional with your life in your hands, you would never accept a regulation issue BCD (buoyancy control device) from the company, just because they happened to get a good discount deal on it. You will contractually insist on getting the equipment – the tanks, the dry-suit, the BCD, the regulators, maybe even the brand of compressed air you’re going to breathe - that you have personally tested and chosen because you can trust it with your life; and for that matter, the lives of your fellow team divers too.
By the same token, if the security of your job is dependent on the integrity of the database you look after, then you can expect to make the decisions about how you carry out the job. No one in their right mind would accept that level of responsibility without getting to name exactly which tools they want within reach at all times.
Responsibility is often seen as making decisions about the way other people work in the organisation. But this obscures the real equation: Responsibility = power to make decisions that affect the way you work.
Over the last six years, with the widespread adoption of powerful and reliable tools like SQL Compare and SQL Data Compare, the work of SQL Server professionals, whether you’re developing or administering databases, has evolved naturally around a ‘lock ladder’ model. This is the way engineers for many centuries have routed canals up and down steep gradients using a staircase of lock gates – sometimes 10 or 20 or more - to control the flow and ensure the safety of all the boats and their freight.
In the SQL world, a ladder of databases from development to production enables a series of stepped changes to be made, tested, and implemented without compromising the production database. Each of the lock gates in the ladder is a live database, in the sense that it is an actual instance on a server. Confusion arises only because the term ‘live database’ is also sometimes used to refer to the ‘production database’, the publicly visible database at the head of the staircase. Pushing changes upstream means moving changes in locked steps – typically across testing and staging databases – up to the final, visible instance at the top.
The ‘lock ladder’ model is Occam’s favourite: the simplest and least roundabout way of developing and updating databases because it incurs the smallest number of format switches. This means that only the changes you intend to make are carried out, without exposing the Master (i.e., the instance carrying the most recent changes) to unnecessary human error through switching various selections of DBOs in and out of a .sql file source control system.
As long as integrity can be assured, then the simpler development process should always be preferred over the more complex, precisely because of the need to keep opportunities for corrupting the database to a minimum. This model has become the de facto working method employed by hundreds of thousands of database professionals because it is secure and efficient.
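As a toy illustration of this stepped workflow, the sketch below applies one schema change gate by gate, halting at the first failed verification so the production instance is never exposed to an untested change. The instance names, schema, and migration are invented for this example (using in-memory SQLite as a stand-in for SQL Server instances), not drawn from any Red Gate tool.

```python
# Toy sketch of the 'lock ladder' model: in-memory SQLite databases act
# as stand-ins for a ladder of live instances. The table, column, and
# stage names are invented for illustration.
import sqlite3

MIGRATION = "ALTER TABLE customers ADD COLUMN email TEXT"

def make_instance():
    # Every rung starts from the current production schema.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    return db

def verify(db):
    # Gate check: the new column exists and the schema still loads.
    cols = [row[1] for row in db.execute("PRAGMA table_info(customers)")]
    return "email" in cols

def promote(stages):
    # Apply the change one gate at a time; stop at the first failure,
    # so nothing unverified ever reaches the top of the ladder.
    passed = []
    for name in stages:
        db = make_instance()
        db.execute(MIGRATION)
        if not verify(db):
            break
        passed.append(name)
    return passed

print(promote(["dev", "test", "staging", "production"]))
```

Because each gate is a real, queryable instance, a breaking change is caught at the rung where it is introduced rather than at deployment time.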
Visual Studio Team System Team Edition for Database Professionals (VSTSTEforDP) is a new offering from Microsoft that aims to replace the ‘lock ladder’ model with a ‘dry dock’ model. In a dry dock, the ship is removed entirely from the water for refitting before returning it to the water for test trials and back again into dry dock for further adjustments, e.g., caulking up any remaining leaks in the hull of the vessel, or realigning the propeller shaft.
Applied to databases, this ‘dry dock’ model offers an offline workflow system for developing remotely from a live server, using a database project built up from .sql files that are checked in and out of a source control application as and when they are required. Once you (and your team members) have made the changes to the SQL scripts, you can check them back into source control and hope to generate a live database instance from the aggregate of those scripts.
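The following sketch shows what that offline workflow amounts to in miniature: a project is just an aggregate of .sql files, and any problem in them only surfaces at the extra deployment step. The file names and script contents are invented for this example, and in-memory SQLite stands in for a generated SQL Server instance.

```python
# Toy sketch of the 'dry dock' model: a database project is a pile of
# .sql files in source control, validated only when an instance is
# generated from them. All paths and scripts are invented.
import sqlite3

# Scripts as they might sit in source control, checked in by different
# team members, with no live database enforcing their consistency.
project_scripts = {
    "tables/customers.sql":
        "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);",
    "views/customer_names.sql":
        "CREATE VIEW customer_names AS SELECT name FROM customers;",
}

def deploy(scripts):
    # Generate a live instance from the aggregate of the scripts.
    # Any ordering or dependency error surfaces only here, not when
    # the individual files were edited and checked in.
    db = sqlite3.connect(":memory:")
    for path in sorted(scripts):
        db.executescript(scripts[path])
    return db

db = deploy(project_scripts)
print(db.execute("SELECT count(*) FROM customer_names").fetchone()[0])
```

If the view script had been checked in before the table existed, nothing in source control would flag it; the error would appear only when `deploy` runs, which is the "untestable" and "corruptible" weakness discussed below.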
Over the next two sections, we present the arguments for and against both “online” and “offline” development models.
The offline “Dry dock” model
| Pros | Cons |
|---|---|
| Management: it’s arguably easier to manage the database guys as part of the engineering department. | Inaccessible: you cannot query the database you are developing directly, but have to generate a new instance from the scripts you’re working on. |
| Source control: schema can more easily be stored in source control, which provides a ready-made solution for team-working, versioning, and object-level history. | Untestable: since there is no real database, it cannot be ‘tested’ until a database is deployed, which is an extra step. |
| Workflow: database schema can be developed in any order, not in an order constrained by object dependencies. Objects can be added or removed in any order. | Corruptible: since database integrity isn’t enforced, it is easier to end up with a ‘broken’ database in source control, as developers can break the database not only for themselves, but also for others in their team. |
| | Inefficient: adds an unnecessary intermediate “Database Project” step when making changes, thereby slowing down processes and introducing further opportunities for human error. |
| | Closed: lacks a simple and easy method for remote database deployment and updates. |
| | Unclear: maintaining transparent and accurate documentation is hampered by fragmented development through offline projects. |
| | Occam’s razor: it goes squarely against the general principle of ‘If it ain’t broke, don’t fix it’. |
The online “Lock ladder” model
| Pros | Cons |
|---|---|
| Accuracy: database integrity is guaranteed – i.e., if a change breaks the database, then you can’t make it. | Source control: currently does not provide the ideal method of source control and versioning. |
| Speed: it’s fast and efficient – depending on the criticality of the data, developers can set up as many locks in the ladder as they prefer. | Auditing: database snapshots provide only a limited mechanism for keeping an audit trail. |
| Flexibility: a wide range of productivity tools are available for working with the ‘lock ladder’ model. | |
| Access: for most developers, it is essential to be able to run queries on the database they’re developing. | |
| Simplicity: it keeps things as simple as possible, minimizing transfers from script to instance and back again, thereby reducing opportunities for unforced database corruption. | |
| Familiarity: far and away the most widely used development model in the industry, therefore easier for training, scaling up, and sharing best practices. | |
The balance of favour falls on the side of the tried, tested, and widely employed online model of database development. This isn’t to say it’s perfect, but it is currently far closer to the daily needs of database professionals, and offers greater opportunity for satisfying their as yet unmet needs over the long term. Because of its widespread adoption, there is a wide and established range of vendors already busily competing to provide the simplest and most reliable tools for the sector.
But the single most compelling reason why SQL Server database professionals prefer to develop and update using a ladder of live instances is because every time you make or migrate a schema change or data update, you know with absolute clarity that database integrity is being maintained. You could argue that it means more pressure, particularly when reaching the final synchronization from staging to production. But ask any professional in whatever field of endeavor, and they’ll tell you that pressure – an awareness of the significance of your actions – is essential for ensuring optimal performance. Whether you’re mountaineering or cutting sheet steel, it’s when the pressure is off and your attention wanders that mistakes and accidents occur.
Dry docking your database means that the pressure is turned off at source. You have no feedback on whether the changes you (or another member of your team) are making will generate a broken database when you check the scripts back out of source control.
Database professionals do it right first time because they do it for real.
Appendix: Feature comparison between SQL Toolbelt & VSTSTEforDP
| Feature | SQL Toolbelt | VSTSTEforDP |
|---|---|---|
| Schema compare & synch | SQL Compare (view object dependencies; object owner mapping) | Schema compare |
| Data compare & synch | SQL Data Compare (object owner mapping; custom comparison keys; indexed views; WHERE clause comparisons; comparisons on primary keys and unique indexes) | Data compare |
| IntelliSense™ | SQL Prompt | – |
| Visual modeling | SQL Dependency Tracker | – |
| API automation | SQL Toolkit | – |
| DB deployment | SQL Packager | – |
| Compressed, encrypted backups | SQL Backup | – |
| Refactoring | SQL Refactor (12 refactorings) | Object rename (1 refactoring) |
| Log analysis | SQL Log Rescue | – |
| Unit testing | – | Unit testing |
| Data generation | – | Data generation |
| Source control compatibility | (available from Q2, 2007) | Database projects |
| Price – 20-user license over 3 years | $2,391 per seat | $6,066 per seat |