ORM as Vietnam of Computer Science: A Response

12 Oct 2007
A response to Ted Neward's 2004 article comparing Object Relational Mapping to the American debacle in Vietnam.


Ted Neward wrote an article in 2004 comparing Object Relational Mapping to the American debacle in Vietnam. Jeff Atwood revived the article in 2006. He pasted a giant photo at the top of his blog (Vietnam grunts-in-the-fields-with-helicopters), copied Neward's conclusions verbatim, and pronounced his favorable opinion in a few sentences. The revival provoked some extended commentary and breathed life back into an article that might justly have been forgotten. Apparently it has struck a chord with some recent readers, and so I feel compelled to review it.

Neward's article and a brief follow-up can be found on his blog; Atwood's post can be found on his.

Where's the Evidence?

The first three pages are dedicated to thoughts on the Vietnam War. His remarks are irrelevant to computer science or ORM and the comparison, to my mind, is hyperbolic. Rhetorically, it intends to prepare for a horrifying tale of ORM disaster — carnage, squalor, and waste. In his apocalyptic vision:

"Object/Relational Mapping is the Vietnam of Computer Science. It represents a quagmire which starts well, gets more complicated as time passes, and before long entraps its users in a commitment that has no clear demarcation point, no clear win conditions, and no clear exit strategy."

From this beginning, surely we can expect tales of ORM disaster — some woeful story of failed projects, misspent millions, and broken careers. Nope. We get plenty of theory but not a single account of any organization suffering this fate.

  • No quagmire
  • No "more complicated as time passes"
  • No entrapment
  • No one screaming for the exits

Not one example.

Perhaps some of the folks who commented on his blog (or Jeff Atwood's blog) had such experiences. I found one engineer who tried ORM a few times (not our product) and hated it (Rob Conery — for no concrete reasons) and a couple of sane voices who have used ORM successfully (Wesley Shephard, Martin Marconcini).

There is no evidence — not from Neward, not from Atwood, not from Conery — that anything bad has actually happened.

With one exception. Neward himself claims that he "built three ORM's in his lifetime, and refuses to build another one because they all faced the same end, despite very different beginnings." To what fateful "end" we never learn. I'll take him at his word and suggest he refrain from writing another ORM.

Praise for ORM

On the plus side of this ledger stands solid evidence for success. I can't speak for other ORM vendors but I can report that our files are full of praise for ORM.

Our product has been on the market since 2002. There have been many successes and there have been some project failures and projects not yet finished. Never … not once … has there been a hint of disappointment with the ORM approach. No developer — not even of a failed project — has ever blamed ORM. In fact, we get letter after letter saying how ORM has been responsible for faster development and improved application quality. Developers associated with flops are not shy about laying blame and casting aspersions. In five years, surely someone would have said something about the "quagmire". Not yet.

I'm not suggesting that everything is sweetness and light. As with all software tools, we've had bugs to fix. Developers make design errors. It's easy to forget performance consequences when you are so well insulated from the persistence operations, particularly by client-side caching. Such adverse consequences are easily detected and easily addressed.

What Kinds of Applications Are We Talking About?

ORM is not ideal for every project. If we have to grind through simple calculations on millions of records, with no user experience to worry about, we gain little from expressing the data as objects for a few microseconds; the cost of transformation from raw bytes to business objects is never repaid. We're better off with the fastest, native data manipulation techniques available.

An object-oriented approach is better suited to highly interactive, line-of-business applications. Data records are not nearly so numerous in such applications. They live in client space for minutes or hours, not microseconds. During their unpredictable stay, they appear in various guises on multiple screens. Users change them. They're roiled in complex business logic. The consensus of architects is that such data records are best represented as business objects. The open question is how best to shuttle data between database and business object form.

Our answer is ORM. Neward vehemently disagrees.

What is Neward Recommending?

Toward the end of the piece, Neward recommends five alternatives. However, these alternatives are so briefly considered — a paragraph each — that it is difficult to see meaningful contrasts. Neward offers a sixth suggestion — "Acceptance of ORM limitations" — a choice he derided relentlessly unto this very moment.

While some [choices] are more attractive to others [sic], which are "better" is a value judgment that every developer and development team must make for themselves. It is conceivable that the object/relational problem can be "won" through careful and judicious application of a strategy that is clearly aware of its own limitations.

Where Did That Quagmire Go?

So it is "conceivable" that ORM might work. Of course we'll have to be "careful" and "judicious" and have a "strategy that is clearly aware of its own limitations." This is sound generic advice but impractical in the absence of details. Neward offers none as the article races to a close. What do other experts say?

There is much written about representing data within an application — about the application's "domain." Among the most respected voices is Martin Fowler. Let's turn to the Fowler book that Neward references approvingly: Patterns of Enterprise Application Architecture (PEAA).

Early on, Fowler contrasts two fundamental approaches to "Domain Logic": Transaction Script and Domain Model.

Transaction Script organizes all [application] logic primarily as a single procedure, making calls directly to the database or through a thin database wrapper. Each transaction will have its own Transaction Script … [110]

This is the approach recommended for processing a large volume of records with simple calculations … as one might do in a batch payroll run.

The line of business application is a different animal.

At its worst business logic can be very complex. Rules and logic describe many different cases and slants of behavior, and it's this complexity that objects were designed to work with. A Domain Model creates a web of interconnected objects, where each object represents some meaningful individual, whether as large as a corporation or as small as a single line on an order form. [116]

How do you choose between approaches?

It all comes down to the complexity of the behavior in your system. If you have complicated and ever-changing business rules involving validation, calculations, and derivations, chances are that you'll want an object model to handle them. On the other hand, if you have simple not-null checks and a couple of sums to calculate, a Transaction Script is a better bet. …

It certainly takes practice and coaching to get used to a Domain Model, but once used to it I've found that few people want to go back to a Transaction Script for any but the simplest problems. [119]

OK … that sounds like a call for Domain Model. Fowler continues:

If you're using Domain Model, my first choice for database interaction is Data Mapper (165). This will help keep your Domain Model independent from the database and is the best approach to handle cases where the Domain Model and database schema diverge. [119]

Data Mapper (which is ORM) isn't the only way to support a Domain Model architecture.

If the domain model is pretty simple, and the database is under the domain model developers' control, then it's reasonable for the domain objects to access the database directly with Active Record (106) [a simpler pattern]. Effectively this puts the mapper behavior discussed here into the domain objects themselves. [171]

These words deserve commentary:

"If the domain model is pretty simple": Line-of-business application domain models rarely start or stay simple.
"The database is under the domain model developers' control": The developer rarely controls the database. He may define it initially but, in time, a crowd of other applications and sub-applications (e.g., reports) surrounds the database, freezing the database schema. A DBA takes primary ownership and the developer learns to live with joint custody.
"this puts the mapper behavior … into the domain objects themselves": Every application, no matter how constructed, commits to a view of the database and thus develops its own schema. Because the database and application evolve semi-autonomously at different rates, there is always some kind of mapping between the two. The question becomes not "should I map?" but "should I map explicitly, and how do I manage that mapping?" ORM has a clear answer.

Data Mapper appears to be the safe choice. Why not use a Data Mapper?

The price … is the extra layer that you don't get with Active Record (160). [170]

Neward will try to make a big deal out of this; but Fowler continues:

Remember that you don't have to build a full-featured database-mapping layer. It's a complicated beast to build, and there are products available that do this for you. For most cases I recommend buying a database-mapping layer rather than building one yourself. [171]

There is no shame in this. A Data Mapper is a "complicated beast" to build. It's harder still to build a mapper that both does a great job and is easy to learn and use. We are wise to take heed of Neward's multiple failures. Such caution need not dissuade us from using a good one.

Into the Quagmire

Impedance Mismatch

The analysis begins in earnest with Neward's recap of the widely known "Object-Relational Impedance Mismatch." One can learn more about this topic (and learn it more clearly) from other sources but Neward's rendition is unobjectionable.

Object-to-Table Mapping Problem

Class-to-Table mapping — the fundamental act of object mapping — works well most of the time but poorly models certain business object families in which several concrete objects share common characteristics and common state. This is the "inheritance" or "generalization hierarchy" problem discussed in plain and neutral language by Fowler (PEAA, pp. 45-7).

Neward blames the object world whereas the fault — if there is fault — lies more obviously on the side of the SQL. In any case, there is certainly a mismatch to be resolved.

In Neward's example, there exists a base class, Person, with a cascade of subclasses: Student inherits from Person, GraduateStudent inherits from Student. Other derived classes follow.

This is actually a terrible example (as is Fowler's which suffers from the same fault). We should never model persons this way for the simple reason that the people we want to model are not pinned to any of the subtypes. A student can become a non-student or become a graduate student in the course of an application session. Unfortunately, an object can't change its type; once instantiated, it is what it is. An instance of Student can't suddenly become an instance of GraduateStudent or stop being a Student altogether. Type assignment is permanent.

Moreover, a Person could belong to more than one of the subtypes; he could be both a Student and an Employee.

Student, GraduateStudent, Employee … these are "roles" or "facets" of a person. A person can have them or not. A person may have more than one facet. A person may gain and lose a facet.

Neward asserts "it's only natural that a well-trained object-oriented developer will seek to leverage inheritance in the object system, and seek ways to do the same in the relational model."

Actually, this is the natural inclination of the "poorly-trained" object-oriented developer. The justly famous Design Patterns book (Gamma, et al.) states: "Favor object composition over class inheritance." [20]

Inheritance is clearly the wrong mechanism for this use case; we should turn instead to a compositional approach.

Perhaps more objectionable is the assertion that the object-oriented developer will try to pervert the relational schema to satisfy ORM. We will see that this is not so.

Let me substitute another example that might be more plausibly represented through inheritance. Imagine that the application concerns groceries of two basic kinds, Fruit and Vegetable.

We'll suppose that Fruits and Vegetables have many common Produce properties and a significant number of distinct properties of their own. We'd like to make Produce a base class and extend it with derived classes called Fruit and Vegetable.

I do not favor a base Produce class and derivative Fruit and Vegetable subclasses but at least the example does not suffer from the concern that a piece of produce might transform itself from an apple to a carrot nor is there much chance that a given item will be both apple and carrot.

Neward correctly observes that SQL doesn't support inheritance. If we choose to represent produce in a class inheritance hierarchy, there is going to be a mismatch between the business objects and the tables in the database.

There are three typical approaches:

Table per type: A Produce table holds the data common to all kinds of produce. There are two additional tables for the two kinds of produce: a Fruit and a Vegetable table. In order to fully realize an Apple in our object world, we need to retrieve its "base" row from the Produce table and its "derived" row from the Fruit table. The three tables are "normalized" in relational-database parlance.
Table per concrete type: There is a table for each "final" type; the base type fields are duplicated across the two type tables. There is a Fruit table with Produce columns and Fruit-specific columns. There is a Vegetable table with the same Produce columns and its own Vegetable-specific columns.
Table per type family: There is one Produce table with Produce columns and all the columns of all of the "derived" types. There are both Fruit and Vegetable columns in this one table. A discriminator column helps us distinguish Fruit rows from Vegetable rows. The vegetable columns of Fruit rows are null (or some irrelevant value); the fruit columns of Vegetable rows are null.
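To make the three layouts concrete, here is a small sketch, in Python with invented rows and column names, of how the same apple (id 123) would be stored under each scheme:

```python
# Hypothetical rows showing how one apple (id=123) is stored under each scheme.

# 1. Table per type: normalized; the Produce and Fruit rows are joined on the key.
produce_row = {"id": 123, "name": "Apple", "price": 0.50}
fruit_row = {"id": 123, "sweetness": 7}  # joined to produce_row on id

# 2. Table per concrete type: Produce columns duplicated into each concrete table.
fruit_row_concrete = {"id": 123, "name": "Apple", "price": 0.50, "sweetness": 7}

# 3. Table per type family: one wide table; a discriminator tells rows apart,
#    and the columns belonging to the "other" type are left null.
family_row = {"id": 123, "type": "Fruit", "name": "Apple", "price": 0.50,
              "sweetness": 7, "crunchiness": None}  # crunchiness is Vegetable-only

def is_fruit(row):
    """Under the type-family scheme, the discriminator column decides the type."""
    return row["type"] == "Fruit"

print(is_fruit(family_row))  # True
```

The sketch makes the trade-offs visible at a glance: the first scheme splits one object across tables, the second duplicates columns, and the third wastes columns and relies on a discriminator.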

Neward writes as if ORM developers are free to choose from among these alternatives. That's rare in my experience. We get what we're given and deal with it.

Neward claims that ORM developers always prefer the second or third because he thinks these (theoretically inferior) designs are easier for ORM developers to handle. The implication is that they will be at war with the DBAs and antagonistic to good database design.

ORM practitioners are more nuanced. Here's Fowler:

There's no clear-cut winner here. You need to take into account your own circumstances and preferences ... My first choice tends to be [Table per type family] as it is easy to do and resilient … I tend to use the other two as needed to help solve the inevitable issues with irrelevant and wasted columns. Often it is best to talk to the DBAs; they often have good advice as to the sort of access that makes the most sense for the database.[PEAA, 47]

Neward's argument rests on three false claims.

  1. Generalization hierarchies are only a problem for ORM developers.
  2. Generalization hierarchies are common.
  3. ORM developers are stuck.

Generalization hierarchies aren't just problematic for ORM developers; they are trouble for everyone — ORM and non-ORM developers alike. I suspect that a survey of actual databases would show that the second and third designs were promoted by non-ORM developers! It seems they too find these alternatives easier to handle. The folks who write reports, for example, are notoriously averse to normalized databases.

Any developer who must represent inheritance hierarchies in a relational database is going to have to make some uncomfortable design decisions and is going to have to write some code to access, present, and save data relating to these groceries.

I invite you to pause for a moment and think about how you addressed this situation in the past — without an ORM. If it was challenging without ORM, why are we alarmed when the situation provokes some discomfort for ORM developers?

Second, consider if this problem is common or rare. Look at your own data and take an informal count of the number of generalization hierarchies. Most of us will have none; some of us will have one; and a few of us will have more than one. The larger the database, the smaller the percentage of tables involved in generalization hierarchies.

How can this be a crisis if the one-class-per-table mapping is perfectly satisfactory for more than 90% of our data?

Perhaps those few cases are so critical that the application will fail catastrophically if we don't find a solution. After all, we wouldn't fly with an airline that failed even 1% of the time.

We've already seen that generalization hierarchies are tricky … for any developer. Do they break ORM development? We'll let Neward try to make this case when the data are normalized, as in the "Table per type" arrangement.

Relating these [Produce, Fruit, and Vegetable] tables together … requires each to have an independent primary key (one whose value is not actually stored in the object entity) so that each derived class can have a foreign key.

This is not correct. Each table must have a primary key but the three tables can share the same primary key. An apple may be represented by a row in the Apple table with id=123 and a row in the Produce table with id=123. If the Apple table row has its own independent primary key, it should still have a foreign key column (value = 123) that refers to the matching Produce row; this foreign key column is stored in the Apple object.
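The shared-key arrangement is easy to sketch. The following Python fragment (with invented table and column names, standing in for what a mapper does internally) shows a full Fruit object reassembled from a base row and a derived row that carry the same primary key value:

```python
# Table-per-type rows keyed by the SAME primary key value (a sketch, not DevForce code).
produce_table = {123: {"name": "Apple", "price": 0.50}}
fruit_table   = {123: {"sweetness": 7}}  # shares primary key 123 with its Produce row

def load_fruit(pk):
    """Reassemble a full Fruit object from its base (Produce) and derived (Fruit) rows."""
    row = dict(produce_table[pk])   # base-class state
    row.update(fruit_table[pk])     # derived-class state, joined on the shared key
    row["id"] = pk
    return row

apple = load_fruit(123)
print(apple["name"], apple["sweetness"])  # Apple 7
```

No independent surrogate key, and no extra foreign key column, is required: the shared primary key is the join.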

Foreign keys are artifacts of relational designs; their existence and value are independent of their utility in support of object-oriented domain models, whose objects always store foreign keys.

This means that when querying for a particular instance at the relational level, at least three JOINs must be made in order to bring all of the object's state into the object program's working memory.

Why would I want to get all object state for all groceries? If I'm interested in all Produce items, regardless of their Fruit or Vegetable nature, I retrieve only the Produce data. There is no reason for me to go after the Fruit and Vegetable tables. When I need to know about them as Fruits or as Vegetables, I'll get those data.

Suppose I bring all grocery data into memory anyway. I don't have to do a three-way join. I could make three separate queries: one for selected Produce rows and one for each of the related Fruit and Vegetable rows. Our testing proves that three queries can be as efficient as a single query with three outer joins. As the number of related tables increases, multiple queries become faster still and produce less data.
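The "three separate queries" alternative can be demonstrated with Python's built-in sqlite3 module (the tables and columns are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Produce   (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Fruit     (id INTEGER PRIMARY KEY, sweetness INTEGER);
    CREATE TABLE Vegetable (id INTEGER PRIMARY KEY, crunchiness INTEGER);
    INSERT INTO Produce VALUES (1, 'Apple'), (2, 'Carrot');
    INSERT INTO Fruit VALUES (1, 7);
    INSERT INTO Vegetable VALUES (2, 9);
""")

# Three simple queries instead of a three-way outer join; each result set
# carries only its own columns, with no duplicated or null-padded values.
produce    = conn.execute("SELECT id, name FROM Produce").fetchall()
fruits     = conn.execute("SELECT id, sweetness FROM Fruit").fetchall()
vegetables = conn.execute("SELECT id, crunchiness FROM Vegetable").fetchall()

print(len(produce), len(fruits), len(vegetables))  # 2 1 1
```

Each statement is trivial for the database to plan, and a persistence layer can submit all three in one round trip, as described below.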

Round trips to the database might be expensive if bandwidth is poor. But I don't have to make three trips to the server if my object persistence layer stacks those queries and sends all query results over the wire in a single bundle.

Finally, if I am caching the results on the client, I won't have to make those queries to the database again.

That doesn't sound so bad. Is raw ADO really better than ORM? If I need to get all grocery data into memory and I don't have ORM, what would I do differently?

If you can think of something, rest assured that you can apply such wizardry within any high quality ORM product. DevForce, for example, offers several alternative persistence mechanisms: PassThru Query, Stored Procedure Query, Dynamic Entities, and Remote Procedure Call (RPC). They all return (or can return) first-class business objects that are resident in cache. When the queries complete, the unpleasantness is behind us. Our in-memory model is in good shape.

In a postscript Neward claims without evidence that workarounds such as I've described won't use the cache. That's not true. The retrieved entities are merged into the entity cache regardless of query technique.

We cover generalization hierarchies in our training class, in our documentation, and in our tutorials. This is not an exotic topic and nothing to be afraid of. There is some extra care and work involved … but no more than you'd expend in some rival paradigm. The effort is probably less and you'll have a nicely encapsulated method to hide the details from the UI developers.

Most ORM products can handle data stored according to one of the other two formats: Table-per-concrete-type and Table-per-type-family. DevForce has an optional "where clause" feature that is particularly useful for Table-per-type-family designs.

Schema-Ownership Conflict

At heart, many object-relational mapping tools assume that the [database] schema is something that can be defined according to schemes that help optimize the O/R-M's queries against the relational data.

This assertion is not supported by either our experience or that of our clients and consultants.

Most quality ORM products assume the opposite; they assume that the relational database schema is inviolate. The schema doesn't adapt to us. We adapt to it.

In the real world, the database can often be extended but rarely can it be radically transformed. There are too many external forces (reports for example) that resist change. Fortunately, commercial object mapping products offer features for working around inconvenient and non-performant database designs.

All database schemas (like all code) tend to rot with time. Refactoring the database typically improves database integrity and performance for everyone, not just the ORM developers. ORM and DBA interests are aligned; there is no merit to the insinuation that ORM developers wish to contort the schema for their own purposes.

I agree that application domain schemas evolve more rapidly than relational database schemas. Business requirements are often easier to satisfy by changing the application than by changing the database. Neward seems to think this difference is a special problem for ORM.

In fact, ORMs tend to mitigate the problem, which is far more severe for non-ORM applications that are tightly coupled to the database schema. ORM frees the application's domain model from a lock-step dependence on the database schema, as Fowler explained earlier and on many pages of PEAA. When Neward writes "before too long, the schema must be 'frozen', thereby potentially creating a barrier to object model refactoring," he has it exactly backwards. ORM helps us overcome the limitations of a database that is frozen in some unsupportive state.

Database schemas are not permanently frozen. Changing business requirements eventually break the ice. The problem is that database changes usually wreck several applications at once. The developer who needs the database change for his application module is happy while everyone else suffers. The widespread suffering chills the schema again. The DBAs keep it cool … until business pressures crack it once more.

ORM-based applications are insulated from this dynamic.

  • The ORM mapping is an exact description of how application data structures correspond to database schema. It is easy to identify where a schema change will affect the application.
  • Adding tables and columns usually has no effect on the application; DevForce domain model assemblies do not have to be recompiled; they adapt at runtime.
  • Renaming or removing tables and columns usually forces a re-mapping session and recompilation. This is a trivial process (because the mapper round-trips). With a few mapping adjustments or a little custom code, the developer can preserve the surface of the domain model so that the dependent application layers are unaware that anything has changed.

An ORM application is far more resilient than a non-ORM application that has fixed the database schema in its DNA … as data structures dependent upon outdated table structures or as SQL DML commands with obsolete column names hidden in strings.

Finally, contrary to Neward's claim, ORMs do not promote turf wars with DBAs by trying to impose an ORM-favorable structure on the database. In our experience, they do just the opposite — they protect existing applications from DBA whims such as the sudden compulsion to impose a new naming convention across the tables and columns.

Dual-Schema Problem

A related issue to the question of schema ownership is that, in an O/R-M solution, the metadata to the system is held fundamentally in two different places: once in the database schema, and once in the object model.

Of course there are two schemas. This verity has nothing to do with ORM. When you develop a line-of-business application, you have some kind of data model. That model may mimic the database schema at first but it is sure to diverge over time. The only serious question is whether the application schema is tacit and unmanaged or if it is explicit and well managed.

Non-ORM applications have implied application schemas. Their schemas are hidden in the data structures that hold data retrieved from the database. Their schemas are hidden in the SQL commands that fetch the data. There is no central authority. Commands and containers are invented and reinvented in all corners of the application. The compiler can't help you find them; you'll have to search for them, line by line.

The ORM mapping file declares an explicit application schema and describes precisely how that schema corresponds to the database schema. The Data Mapper tool helps you manage the mapped application schema. Every ORM-driven persistence operation conforms to it. Every business object conforms to it.
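The difference is easy to see in miniature. Here is a sketch (class, table, and column names are invented) of an explicit mapping declaration that can be checked mechanically against the live database schema, something scattered SQL strings cannot offer:

```python
# An explicit application-schema-to-database-schema mapping (illustrative only).
EMPLOYEE_MAPPING = {
    "Employee": {                       # application-side class
        "table": "EMP",                 # database-side table
        "properties": {"Id": "EMP_ID", "LastName": "LAST_NM"},
    }
}

def check_mapping(mapping, live_columns):
    """Report every mapped column that no longer exists in the database."""
    missing = []
    for cls, spec in mapping.items():
        for prop, col in spec["properties"].items():
            if col not in live_columns.get(spec["table"], set()):
                missing.append((cls, prop, col))
    return missing

# Simulate a DBA renaming LAST_NM to SURNAME: the mapping pinpoints the break.
live = {"EMP": {"EMP_ID", "SURNAME"}}
print(check_mapping(EMPLOYEE_MAPPING, live))  # [('Employee', 'LastName', 'LAST_NM')]
```

Because the correspondence is declared in one place, a schema change surfaces as a precise list of broken mappings rather than as runtime failures scattered across the application.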

When the database schema changes, which approach has a problem?

The ORM developer has two options: he can make the application schema conform to the database or he can adjust the mapping to preserve the application schema. The non-ORM developer has only the first option; he must modify the application to conform to the database because he is tightly coupled to it.

The ORM developer gets help from the Data Mapper, refactoring tools, and the compiler. The Data Mapper tells him what has changed. The mapper regenerates domain model code. The compiler catches most mistakes because there is much greater use of strong-typing.

The non-ORM developer must rely predominantly on text search and unit testing. He has no tool to tell him both what changed in the database schema and how the changes affect his application. The compiler can't help because there is little strong typing; field value indexers are usually strings as are the table and column names buried in query commands. Unit testing can catch omissions … if there is any unit testing.

Clearly the explicit and managed application schema improves maintainability. Yet Neward writes about it as if it were sinister:

As the system grows over time, there will be increasing pressure on the developers to "tie off" the object model from the database schema, such that schema changes won't require similar object model refactorings, and vice versa.

What is wrong with that? We call that reduced dependency. Most architects think it's a good thing.

In some cases, where the O/R-M doesn't permit such disconnection, an entirely private database instance may have to be deployed, with the exact schema the O/R-M-based solution was built against, creating yet another silo of data in an IT environment where pressure is building to reduce such silos.

I am unaware of an ORM product with this nearly-fatal limitation.

ORMs didn't create the problem of database schema change. If the database schema is evolving in one direction while the application schema is moving in another, the blame lies not with ORM. ORM is a solution that can facilitate these divergent tectonic shifts without suffering an earthquake.

Entity Identity Issues

Object systems use an implicit sense of identity.

Two objects that contain precisely identical bit patterns in two different locations of memory are in fact separate objects. (This is the reason for the distinction between "==" and ".equals()" in Java or C#.)

This is a genuine problem for many ORM products. In fact, it's a concern for all applications, whether they use ORM or not. But ORMs with some form of caching can resolve it comfortably.

Business objects built with a caching ORM are identified by the same primary key that identifies the corresponding row in its database table. An application does not use the object's reference — its location in memory — to identify the object. It identifies an object by its primary key.

It is critically important that there be one object and only one object with a given primary key. This is what we mean by "Entity Identity." My company's ORM product, DevForce, is an example of a caching ORM. Every business object read into or created in memory resides in an entity cache. The consumer of a business object holds a reference to an object in the cache, not some free-floating object. In a given entity cache, there is at most one instance of an entity of a particular type and primary key, and that key is the same as the primary key of its corresponding database table row.

For example, suppose we query for the employee whose name is "Nancy Davolio." We then query for the employee whose id = 1. We'll get the same employee object instance because the employee named "Nancy" also has id=1. If she appears in two lists, she appears in each list as a reference to the one and only "Nancy" object in the cache. If we change her name to "Sally" in one list, her name is "Sally" in all lists.
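This behavior is what Fowler calls an Identity Map. A minimal sketch in Python (the EntityCache class and sample data are invented for illustration, not DevForce code):

```python
class EntityCache:
    """A toy identity map: at most one instance per (type, primary key)."""
    def __init__(self):
        self._entities = {}

    def get_or_add(self, entity_type, pk, loader):
        key = (entity_type, pk)
        if key not in self._entities:          # first request hits the "database"
            self._entities[key] = loader(pk)
        return self._entities[key]             # later requests return the same object

# Invented sample data standing in for the Employees table.
rows = {1: {"Id": 1, "FirstName": "Nancy", "LastName": "Davolio"}}

cache = EntityCache()
by_id   = cache.get_or_add("Employee", 1, lambda pk: dict(rows[pk]))
by_name = cache.get_or_add("Employee", 1, lambda pk: dict(rows[pk]))

assert by_id is by_name                 # one and only one "Nancy" object
by_id["FirstName"] = "Sally"
print(by_name["FirstName"])             # Sally: every reference sees the change
```

However many queries return her, there is one "Nancy" in the cache, so an edit made on one screen is instantly visible on every other.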

To be precise, there can be only one object with a given primary key in a particular entity cache.

Most applications only need one entity cache. The one cache is shared throughout the application among all screens. The employee appearing on an HR form is the same employee appearing in a grid on the Company Contacts form.

You are free to create as many caches as you like and there are reasonable scenarios for doing so, in which case, you make a conscious and controlled decision to enable multiple versions of the same "thing" in your application session.

Entity identity is difficult to achieve, whether you take an ORM approach or use something else such as native ADO. It's not an exclusively ORM issue. It is a critical ORM feature.

The lesson: only use an ORM that supports entity identity.

In a related diatribe, Neward faults the object-oriented system's supposed inability to support concurrency and ACID transactions. He must have been thinking about the ORMs he built without object identity. Concurrency and ACID transactions (including distributed transactions) should be fully supported by the object persistence layer and should be largely transparent to the developer.


Neward makes some claims that have no bearing on ORM systems with client-side caching schemes.

When does the actual "flush" to the database take place, and what does this say about transactional integrity if the application code believes the write to have occurred when in fact it hasn't?

The "flush" takes place when the application tells the object manager to save. The save is transactional by default. If the save fails, the objects remain in their current cached state, exactly as they were before the save attempt. If the save succeeds, the saved objects are adjusted to reflect (a) their currently "unmodified" state and (b) any property changes resulting from database triggers (e.g., updates to modification timestamps within the objects). From the application's perspective, the database reality and the session reality are the same at this moment.
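That contract can be sketched in a few lines of Python (the Entity and save names are invented; this is the shape of the behavior, not the DevForce API): a failed save leaves cached state untouched, while a successful save marks the entities unmodified.

```python
class Entity:
    """A toy cached business object that tracks whether it has pending changes."""
    def __init__(self, data):
        self.data = dict(data)
        self.modified = False

    def set(self, key, value):
        self.data[key] = value
        self.modified = True

def save(entities, write):
    """All-or-nothing save: only on success are entities marked unmodified."""
    try:
        write([e.data for e in entities])   # one transactional write
    except Exception:
        return False                        # cached state is exactly as before
    for e in entities:
        e.modified = False                  # reflect the new "unmodified" state
    return True

e = Entity({"Id": 1, "Name": "Apple"})
e.set("Name", "Gala Apple")

def failing_write(rows):
    raise IOError("database unavailable")

print(save([e], failing_write), e.modified)        # False True  (unchanged on failure)
print(save([e], lambda rows: None), e.modified)    # True False  (clean after success)
```

In either outcome the application never believes a write occurred when it did not: the cache and the database agree at the moment the save returns.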

The balance of his caching critique raises concerns about a caching strategy unknown to me. I suspect he is talking about some kind of server-side caching.

Data Retrieval Mechanism Concerns

In all honesty, a purely object-oriented approach would make use of object approaches for retrieval, ideally using constructor-style syntax identifying the object(s) desired

A purely object-oriented approach prefers a Factory method. See Robert Martin's many lessons on dependency inversion in Agile Principles, Patterns and Practices in C#.

I agree that a purely object-oriented syntax for retrieval is strongly preferable. It isn't achievable without changing the language syntax because SQL and most object-oriented languages are incompatible in this respect. Or, rather, they were incompatible. Microsoft LINQ (Language Integrated Query), due in early 2008, brings a strongly-typed, SQL-like syntax to the major .NET languages. Let's return to the present.

Query Syntax

Neward describes three object-oriented query syntaxes. My company's product, DevForce, offers an Object Query Language (OQL) that approximates what he calls Query-by-API.

Using OQL, the developer defines a query object, adds elements to it (in the manner of StringBuilder), passes the finished object to a PersistenceManager (PM), and the PM returns a specialized collection of (cached) business objects.

Simple OQL queries are easy to understand:

RdbQuery q = new RdbQuery(typeof(Person));
q.AddClause(Person.LastNameEntityColumn, EntityQueryOp.EQ, "Smith");
EntityList<Person> oc = aPM.GetEntities<Person>(q);

The query object, "q", is pinned to the Person business object type. We add a clause that restricts the query to returning persons whose last name equals "Smith." We ask a PersistenceManager to perform the query described by "q"; it returns a strongly typed collection of Person objects.

The query can be made more complex by adding additional information to the query object as in:

q.AddClause(Person.LastNameEntityColumn, EntityQueryOp.EQ, "Jones");
q.AddClause(Person.FirstNameEntityColumn, EntityQueryOp.EQ, "John");

Now the query will return persons named either "John Smith" or "John Jones."

I agree that this approach is "much more verbose than the traditional SQL approach."
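The clause semantics are worth spelling out. One way to realize the behavior described above, sketched in Python with an invented helper, is to OR together clauses against the same column and AND the resulting groups across columns:

```python
from collections import defaultdict

def matches(row, clauses):
    """Clauses against the *same* column are OR'd together; the resulting
    per-column groups are AND'd across columns. Only EQ clauses appear in
    this sketch. (Invented helper, not the DevForce implementation.)"""
    groups = defaultdict(list)
    for column, value in clauses:
        groups[column].append(value)
    return all(row[column] in values for column, values in groups.items())

people = [
    {"first_name": "John", "last_name": "Smith"},
    {"first_name": "John", "last_name": "Jones"},
    {"first_name": "Mary", "last_name": "Smith"},
]

# The three clauses from the example: LastName EQ Smith, LastName EQ Jones,
# FirstName EQ John. The result is "John Smith" or "John Jones".
clauses = [("last_name", "Smith"), ("last_name", "Jones"), ("first_name", "John")]
result = [p for p in people if matches(p, clauses)]
```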

The developer could write object queries in raw SQL via the "PassThru SQL" facility. I recommend only limited use of this feature; PassThru should be reserved for queries that cannot be expressed in OQL.

Why the strong OQL preference?

  • OQL is largely immune to the "fat finger" problem. There are no strings to mistype, a problem more likely to afflict the non-ORM developer than the ORM developer.
  • There is considerable type safety throughout the query construction process. The column names are strongly typed; correct syntax is assured; the query is guaranteed to return fully-formed entities of the specified type. Raw SQL affords none of these design-time protections.
  • OQL guards against SQL injection attacks.
  • OQL abstracts the query from dependence upon a particular data source technology. The vendor-neutral data access layer transforms the query object into the vendor-specific (e.g., SQL Server, Oracle, Informix) database query.
  • The same abstraction should facilitate development of custom query objects and custom translations so that an OQL query can manipulate almost any kind of data repository.
  • If the object-relational system supports an in-memory cache, the persistence layer should be able to use the same OQL query object to retrieve objects either from the entity cache (for speed) or from the database (for freshness). The mechanics of retrieval should be invisible to the developer.
  • Query objects are inherently easier to compose and deconstruct than SQL query strings.

Dynamically Constructed Queries

Why would I care about the ability to compose a query?

We tend to think of queries as static expressions known by the developer at design time. Many of them are. But many are not.

We often ask the user to supply search criteria. What is the last name we want? Should the name be exactly as specified, begin with certain characters, or be "like" a given string?

Oh … there's more than one last name?

Oh … you want to limit the result to persons living in a particular zip code?

What's that? Role-based security says that this user can only search for Persons in his own department?

Unfortunately this business requirement surfaced in a module downstream from the UI that gathered the user's search criteria. How do we merge the security restriction into the previously prepared query?

Constructing correct SQL dynamically in response to user input is simple with OQL. The query object is easy to inspect and modify as it moves along a pipeline to its execution point. Inserting the security restriction outside the search view (but prior to query execution) is just another operation on the query object.

On the other hand, constructing and merging into raw SQL strings is nontrivial.
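A minimal sketch of such a composable query object (Python, with invented names, not DevForce's RdbQuery) shows why downstream modification is cheap: the query stays structured data until execution, and values travel as parameters rather than string fragments:

```python
class Query:
    """Clauses accumulate as the query moves down a pipeline; the finished
    object is rendered once, with parameters, at execution time."""
    def __init__(self, entity_type):
        self.entity_type = entity_type
        self.clauses = []  # (column, operator, value) triples

    def add_clause(self, column, op, value):
        self.clauses.append((column, op, value))
        return self

    def to_sql(self):
        """Render to parameterized SQL; values travel separately, so there is
        no opportunity for injection."""
        where = " AND ".join(f"{col} {op} ?" for col, op, _ in self.clauses)
        params = [value for _, _, value in self.clauses]
        sql = f"SELECT * FROM {self.entity_type}"
        if where:
            sql += f" WHERE {where}"
        return sql, params

# The UI builds the search; a downstream security module amends the same
# query object before execution. No string surgery required.
q = Query("Person").add_clause("last_name", "=", "Smith")
q.add_clause("department_id", "=", 7)  # role-based restriction, added later
sql, params = q.to_sql()
```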

OQL is More Verbose and Less Capable than SQL

We've made the case in favor of OQL but there is no denying that OQL syntax is "unnatural" for most of us. We don't want to write a lot of these queries.

We have to write some queries. While we can wrap our heads around the simple queries, the complex queries are hard to read and write.

Fortunately, static analysis of existing applications reveals that ORM applications contain far fewer explicit queries than non-ORM applications. Fewer queries; less to go wrong.

Several ORM vendors will be able to support something closer to Neward's Query-by-Language syntax with their LINQ-based products. Thanks to Microsoft's control over language definition and the .NET common runtime, LINQ syntax is both strongly-typed and much closer to SQL.

What about Joins?

Examine the typical SQL-laced application and you'll find hundreds of handcrafted SQL statements sprinkled throughout the UI. Many queries appear to select for the same thing … but who can be sure?

Do we know if a query will still work when the database schema changes? ADO query commands are strings, often constructed from shorter pieces of string. There's no type safety in strings and no easy way to determine a query's assumptions about the schema. We probably won't find out whether the query still works until crash time.

Notice that there are a lot of joins in those queries. Most joins exist solely to flesh out the query result with column values from multiple tables.

For example, suppose we intend to display a grid of orders with columns for order number, customer name, order status, order date, shipping date, the shipping company name, and the ship-to street, city and state. In typical SQL fashion, we'll write a five-way join of Order, Customer, OrderStatus, Shipper, and Address.

On another page we'll display a different order grid, this time with order number, customer name, order status, and delivery date. We don't need the address fields so even though we want to see the same orders we'll write a different query with four joins.

ORM Queries Don't Use Joins!

With ORM, we need only a simple query for Orders to support both displays. We'll acquire the Customer, Status, Shipper, and Address attributes when we need them … if we need them … as we need them.

Need to show the customer name? We get it from anOrder.Customer.Name.

This syntax is known as "object navigation." Calling the order's Customer property causes the object persistence layer to fetch the order's related Customer object just in time. The first time, we have to go to the database; subsequent requests for the customer will be served from the cache.
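A toy Python sketch (invented names; real ORMs generate this plumbing for you) captures the mechanics: the first access triggers a database trip, and every later access is a cache hit:

```python
class LazyReference:
    """The related object is fetched from the database on first access,
    then served from the local cache thereafter."""
    def __init__(self, fetch):
        self._fetch = fetch        # how to load the related object
        self._value = None
        self.trips_to_database = 0

    def get(self):
        if self._value is None:
            self._value = self._fetch()  # just-in-time load
            self.trips_to_database += 1
        return self._value

class Order:
    def __init__(self, customer_id, database):
        self._customer_ref = LazyReference(lambda: database[customer_id])

    @property
    def customer(self):
        return self._customer_ref.get()

# One trip to the database no matter how many times we navigate.
database = {1: {"id": 1, "name": "Acme Corp."}}
order = Order(1, database)
name_first = order.customer["name"]
name_again = order.customer["name"]
trips = order._customer_ref.trips_to_database
```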

Neward will raise objections to such "lazy loading" in a few pages; we'll tackle the objections then.

Since our two displays rely on a simple order query, maybe we can use one query — and one query result — to drive both screens. We won't have to issue new queries as we jump between the screens. We won't have to make extra trips to the database either.

In sum,

  • Our queries tend to be simple because we don't join to get related data.
  • We write fewer queries thanks to object navigation and query reuse.

Bet You Can't Do That!

Certain styles of queries (particularly the more unconventional joins, such as outer joins) are much more difficult — if not impossible — to represent in the QBA approach.

Joins are usually unnecessary in an ORM world because we rely on object navigation to deliver data from related objects.

Of course we must be able to query related objects for reasons other than displaying the data from those related objects. For example, we might need to query for orders shipped to California. Such a query depends upon "joining" the Order table to the Address table.

Most OQLs support such queries with syntax that, while different from SQL, is logical and easy to learn.

There are less common types of queries that OQLs may not support. Some OQLs don't support the fancier grouping queries and the proprietary constructs that search XML and geographical data.

That's why ORM vendors offer alternative query mechanisms. Our product, for example, has PassThru, StoredProcedure, Web Service, and Dynamic Entity queries. The programmer can create any kind of query at all using a feature dubbed "Remote Procedure Call (RPC)."

These alternatives break with the usual ORM paradigm. Is that bad? It is bad if we resort to them frequently. But occasional use is bearable. I think it is ridiculous to chastise ORM when it both acknowledges and facilitates workarounds for edge conditions.

Awareness of the data source representation

The O/R layer has now lost an important "selling point", that of the "objects and only objects" mantra that begat it in the first place; using a SQL-like language is almost just like using SQL itself, so how can it be more "objectish"?

We're faced with the basic problem that greater awareness of the logical — or physical — data representation is required on the part of the developer — instead of simply focusing on how the objects are related to one another … the developer must now have greater awareness of the form in which the objects are stored, leaving the system somewhat vulnerable to database schema changes.

True, every query requires some awareness of a persistent data repository. We can specify selection criteria for some properties (e.g., Person.LastName) but not others (Person.Age). We make that manifest by reference to special objects representing the persistable properties (e.g., Person.LastNameEntityColumn).

It might be nice to write a query directly against the object's own properties, asking, say, for every Person whose Age exceeds 21.

We can't do that kind of thing yet. To retrieve objects, we must pull back the curtain and expose some of the machinery.

True, this essential awareness marks the querying apparatus as a point in which the "system is somewhat vulnerable to database schema changes."

Is this the quagmire we've been waiting for? I say it is merely inconvenient. We dedicate a tiny fraction of the application code to retrieving business objects; perhaps less than a hundredth of a percent of all handwritten lines of code concern queries. The rest of the time, we are indeed "focusing on [the objects themselves and] how the objects are related to one another."

Strongly-typed OQL improves our ability to detect problems caused by schema changes and to address them quickly. Although ORM systems are not invulnerable, they are less vulnerable to schema changes than non-ORM systems.

The Partial-Object Problem

Neward's critique turns to performance. He alleges that ORM's commitment to business objects of fixed shape ensures inferior performance.

He argues that we should "optimize" our application by retrieving only the subset of table columns that we actually need "right now".

Sounds reasonable, doesn't it? Is it true? Is there really a measurable penalty to retrieving more columns than you need at the moment you issue the query?

The answer: no one knows and no one could know.

The problem with Neward's argument — as with most performance arguments — is that it lacks context. There is no abstract quantity called "performance." There is only measured performance coupled to a judgment about whether the measured performance is good or bad.

Consider his example:

SELECT id, first_name, last_name FROM person;

Neward argues that this will perform better than the object system equivalent:

EntityList<Person> persons = aPM.GetEntities<Person>();

How does he know? Surely it matters what other fields define Person, how many persons are in the database, and what it costs to communicate between client and server.

Let's be generous and grant that the size of a full person is 10 times that of the {id, first_name, last_name} tuple. Let's be generous again and grant that there are 10,000 persons. Does the SQL query deliver the data any faster?

I submit that we don't know until we test under conditions that approximate production use.

Let's stipulate that the object system query takes four times as long as the straight SQL query. Does it matter? If the SQL query completes in 2/10ths of a second and the object query in 8/10ths, I submit that it does not matter. If the times were 2 seconds versus 8 seconds, it might matter, depending upon how often you made this query. If the timings are 20 versus 80 seconds, we want to reconsider grabbing so much data at once, regardless of the approach.

In any case, it is foolish to judge a system based on the performance of a single query. The application will make many person queries. Most of us care about how users judge the responsiveness of the application over the course of a typical session.

Following Neward, we turn next to a two-query application:

SELECT id, first_name, last_name FROM person;

// time passes

SELECT * FROM person WHERE id = 1;

Here's the object system equivalent:

EntityList<Person> persons = aPM.GetEntities<Person>();

// time passes

Person aPerson = aPM.GetEntity<Person>(new PrimaryKey(typeof(Person), 1));
Assume that our object system takes 8 seconds to fetch all persons and Neward's takes just two seconds. So what is the speed of the second query?

The object-oriented client system cached all Persons, so it takes no measurable time to get the person.

Neward's client system could not have cached so it goes to the database. How long did that take? Let's say we measured and discovered that the latency for any trip to the server is one second regardless of the number of records returned.

That's not bad. But it's not as good as the object system. In fact, for all Person queries, the object system will always be faster than Neward's system after the first 8 seconds. The seconds lost to lots of small queries are going to add up. Neward's system will seem sluggish by comparison. There could be end user productivity consequences.
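A back-of-envelope calculation makes the crossover concrete. Under the stipulated timings (one 8-second cached fetch versus a 2-second first query plus 1 second of latency per follow-up round trip), the cache-first approach wins outright from the eighth query onward:

```python
def cached_total(n_queries):
    """Object system: one 8-second fetch, then every query served from cache."""
    return 8.0

def per_query_total(n_queries):
    """Neward's system: a 2-second first query, then 1 second per round trip."""
    return 2.0 + 1.0 * (n_queries - 1)

# Find the first query count at which the cache-first approach is strictly faster.
crossover = next(n for n in range(1, 100) if cached_total(n) < per_query_total(n))
```

The exact numbers are stipulations, of course; the point is only that per-query latency accumulates while the one-time cache-fill cost does not.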

Can Neward add caching? Not easily. A query can return any shape, and it's virtually impossible to cache results for queries that differ in both shape and criteria.

The simplicity of a reliable business object shape makes caching possible, which in turn wins back the performance lost to fetching supposedly unnecessary data columns.

Notice that I haven't said anything about the benefits of encapsulating data and behavior in business objects. We don't get those benefits unless the object shapes are fixed. Free form queries can only return raw data. Logic we would like to see inscribed inside the business objects must be reproduced in code surrounding the query result. Does anyone really want to go back to procedural programming?

Varying the Column Composition of Business Objects

Neward claims:

[M]ost SQL experts will eschew the "*" wildcard column syntax, preferring instead to name each column in the query, both for performance and maintenance reasons--performance, since the database will better optimize the query, and maintenance, because there will be less chance of unnecessary columns being returned as DBAs or developers evolve and/or refactor the database table(s) involved.

It just isn't so. Naming the columns is not inherently faster than the wildcard selection. Column selection is only faster if it results in a significant reduction of the data transmitted to the client, and "significant data reduction" cannot be determined a priori.

There are no maintenance savings to be had. Good object persistence systems do not break when the query returns "unnecessary" columns; they don't care if there are unrecognized columns.

The application could throw an exception if asked to insert a new record. This is sure to happen when one of the new, unrecognized columns is required and the database cannot supply a default value. This is a problem for everyone, not just ORM systems.

Neward's fundamental point is that we want to change the column selection from one query to the next and we may want to return just a few of the table columns. This is not the normal practice of an object relational approach and he is almost correct in saying that

An object-oriented system … cannot return just "parts" of an object — an object is an object, and if the Person object consists of 12 fields, then all 12 fields will be present in every Person returned.

It would be more accurate to say that object-oriented systems 'prefer' to return the complete Person object with all twelve fields, exactly as it was mapped.

Many object systems can return Person-like objects with fewer columns. We can map the Person table to a PersonInBrief type that includes only some of the Person columns.

This is a different type than the Person object; a Person object is not the same object as a PersonInBrief object, even when their primary key values match. Clearly the developer must exercise caution with this technique. But if the application demands both a brief form and an expanded form of an object, it can be done, and there are ways to prevent entity identity mistakes.

We are not limited to design time decisions about the shapes of our business objects. The DevForce dynamic entity query can return an object of any shape, the shape can be determined at runtime, and the object resides in cache just like any other business object.

There are also schemes for lazy loading fields — for retrieving them only when an object consumer asks for them. Such schemes are difficult to implement. We've not tried — because years of experience demonstrate that there is little need and, in those rare cases of need, our recommended workarounds suffice.

So the question is not "can" the object-system support partial objects, but is it "desirable" to do so in other than special circumstances.

Special circumstances do arise. The developer may improve performance dramatically by the careful and tactical use of "partial object" techniques.

It's a good thing that these techniques are available. ORM would be a quagmire if the developer was unable to step outside of the ORM paradigm. However, the developer should only step outside the paradigm when measured bottlenecks justify that step.

The Load-Time Paradox

Neward's final shot aims at how ORM solutions manage the object graph. An object graph consists of a root object and the set of all objects related to it by some form of association. In practice, we usually want a pruned graph consisting of some subset of the most often needed related objects.

Neward claims

[O]bjects are frequently associated with other objects, in various cardinalities (one-to-one, one-to-many, many-to-one, many-to-many), and an O/R mapping has to make some up-front decisions about when to retrieve these associated objects.

This is true. The safe decision is to never retrieve related objects automatically. Is that a bad decision? For Neward it's a trick question. He doesn't actually care what we decide.

[T]here will always be common use-cases where the decision made will be exactly the wrong thing to do. Most O/R-M's offer some kind of developer-driven decision-making support, usually some kind of configuration or mapping file, to identify exactly what kind of retrieval policy will be, but this setting is global to the class, and as such can't be changed on a situational basis.

Neward asserts that there is a paradox in the impossibility of predetermining whether objects should always be loaded eagerly or only loaded when needed. A paradox is a seemingly self-contradictory proposition. There is no paradox here, but there is futility: the futility of attempts to predetermine load behavior.

Maybe some ORMs are trapped in this way, but others enable the developer to vary the retrieval policy on a situational basis by means of span queries.

For example, an OQL query within the DevForce product returns one or more objects of a 'single' type; it doesn't try to load related objects. However, the query can be decorated with spans that instruct the persistence layer to fetch related objects at the same time.

Suppose we know that, on a particular page after retrieving a select group of customers, we will display each customer's orders, the line items on those orders, and the sales rep who placed those orders.

We don't have to issue separate queries for each customer's order, order detail, and sales rep objects. We can retrieve all of the selected customers with all of their related objects in a single shot by decorating the customer query with the "spans" that identify these associative relationships. The object persistence machinery will acquire these related objects for us. It will get all of the orders of these customers (but only their orders). It will get all the order details of those orders (but only of those orders). And so on.

In a distributed application, the server sends all of these objects to the client in one package. The persistence machinery pours them into the client's entity cache where subsequent navigation expressions (e.g., aCustomer.Order[0].OrderDetails) will be satisfied entirely from the cache.

This multi-entity, object graph fetching happens because the developer made a conscious decision to retrieve the auxiliary information at the same time that he was querying for customers.

Now suppose there is another "Customer Browser" page displaying only the customer names. The query driving this view won't include spans and will retrieve only customers. It's the developer's choice.
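The span mechanism can be sketched in a few lines of Python (a hypothetical API; DevForce expresses spans differently). The same root query serves both pages, and only the spans differ:

```python
def get_customers(database, keys, spans=()):
    """Fetch the root customers and, optionally, the named related
    collections in the same round trip. (Hypothetical illustration.)"""
    result = {"customers": [database["customers"][k] for k in keys]}
    if "orders" in spans:
        result["orders"] = [o for o in database["orders"] if o["customer"] in keys]
    if "order_details" in spans:
        order_ids = {o["id"] for o in result.get("orders", [])}
        result["order_details"] = [d for d in database["order_details"]
                                   if d["order"] in order_ids]
    return result

database = {
    "customers": {1: {"id": 1, "name": "Acme"}, 2: {"id": 2, "name": "Zenith"}},
    "orders": [{"id": 10, "customer": 1}, {"id": 11, "customer": 2}],
    "order_details": [{"id": 100, "order": 10}, {"id": 101, "order": 11}],
}

# The Customer Browser page: customers only, no spans.
browse = get_customers(database, [1, 2])

# The detail page: the same root query decorated with spans. One round trip
# returns the customer, its orders, and those orders' details (and only those).
detail = get_customers(database, [1], spans=("orders", "order_details"))
```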


Every one of Neward's critical observations concerns an edge condition. There are problems to surmount but they lurk in the rarely visited corners of our application.

Generalization hierarchies, for example, are uncommon and present difficult design choices for everyone. ORM has a decent if not wonderful answer. The "partial object" issue is a bogeyman we need not fear when we have caching to restore performance. There doesn't have to be an entity identity problem for ORM systems — and it's just as serious an issue for non-ORM systems.

If the Vietnam comparison made sense, he should have found disaster and devastation everywhere, not just discomfort at the periphery. I contend that a 95% solution with a safe landing for the remaining 5% is astonishingly good technology — as good as it gets. Neward thinks anything less than 100% satisfaction is a crushing defeat for ORM.

"[ORM] developers simply accept that there is no way to … close the loop on the O/R mismatch."

At the end of the day, does Neward produce any evidence to substantiate his charge that ORM adoption is a Slippery Slope leading to a Vietnam-like quagmire?

No. Not a single case study. Fortunately thousands of happy ORM users are unperturbed by Neward's dark fears.

I close with an extended quote from one of the blog responses:

I read, and even understood the article. However, I find it overblown because I'm one of those who have accepted a data centric universe. In a large enterprise, if you accept a data centric universe (and for interoperability, that would seem wise) then you don't try to shoehorn objects into relations. You use objects generated by your ORM to encapsulate your relations and build your business logic objects on top of those building blocks.

I guess I'm confused by the commentary that makes this seem "unsolved". Systems I have designed are used by tens of thousands of users every day, with high concurrency and various access patterns (reporting, transactions and multi-screen edits) and it just hasn't become a problem. Neither web products nor desktop products have made me curse my choice of ORM tool, nor have I encountered situations where I have been unable (or even hard pressed) to create a solution. Nor do I feel like I'm writing tedious and buggy code, since the only code I write at the business layer is to handle, you know, business rules.

Wesley Shephard on June 29, 2006

Postscript: "The ORM Smackdown"

Ted Neward and Oren Eini squared off on a "Dot Net Rocks" show on 24 May 2007. The show was billed as "The ORM Smackdown" and there were plenty of verbal punches thrown.

The transcript can be found here.

Neward again failed to produce a serious example of disaster. Eini, on the other hand, managed to refer to several extremely challenging situations in which an ORM solution worked well.

Neward identified one ORM failure wherein the developers used a tool to generate a database from the object model. The resulting relational schema was unworkable. This proves little other than that we should be wary of database generation tools. Schema generation is, at most, an optional ORM-related feature.

Neward restated his belief that object-oriented developers and DBAs are in some kind of war. They each think they own the data. There is some truth to this and, being a developer, I side with the developers. But no sensible minds in either camp seriously believe they can — or should — dictate the database schema to the other.

There was an interesting exchange about object-oriented databases in which Eini observed — correctly I think — that they succeed only in relocating the object relational mapping to a different tier: a replication layer that translates the object database into a relational database for reporting and analytic tools.

At the end of the show, Neward arrives at a startlingly sensible conclusion:

I guess my big thing would simply be, an ORM is not gonna save you from the object/relational impedance mismatch; It may make things easier for you to manage; but this is a leaky abstraction. We need to just accept that. You need to basically decide where you wanna be on the continuum. And, I mean, to a large degree, you can use an ORM, NHibernate or Hibernate or whatever, to simplify your development life, to take the easiest 80% and then use the remaining 20% to just write straight SQL. And do those hard things that an HQL or something else may not be able to accomplish for you. But don't expect that an ORM is going to completely close the loop and remove the relational database from view. It's never going to go away. You just have to accept that it's a leaky abstraction.

I can live with the "leaky abstraction." I might quarrel with the arithmetic but I fundamentally agree. ORM is a great tool that carries you most of the way home. You can never forget that there is a relational database behind it and you will have to compromise the object model and break the object-oriented paradigm on occasion.


This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.



About the Author

Ward Bell
Web Developer
Ward Bell is V.P. of Product Management at IdeaBlade, makers of DevForce, a .NET enterprise application development product.

Comments and Discussions

Other alternatives...
Simon Mourier, 23-Oct-12 2:20

Schema changes at runtime
metator, 17-Jun-10 2:15

Thanks for your article
hacey, 7-Sep-08 11:20
SQL: "select *"
Christian G. Becker, 17-Oct-07 15:36

Re: SQL "select *"
Ward Bell, 17-Oct-07 16:17

Re: SQL "select *"
Christian G. Becker, 17-Oct-07 22:21
> But does [performance in DB] matter? Is it "significant".
> ...
> For example, I would argue that it doesn't matter at all if, in a particular
> case, selection-by-column happens to be 2x faster, so long as the time
> difference experienced by all users is nil. Is that really possible? Could
> users not notice a 2x difference in performance at the data tier? Absolutely!

Of course it can happen. On average, it can also happen that a user does not notice if you bubble-sort data instead of quick-sorting it. But will you use bubble sort for an average sorting problem, given that it is mostly a line of (lib-call) difference?

> The seemingly dramatic difference in data tier performance often is masked by
> the noise of drag elsewhere in the system. We'll get a better return on our
> energy output (not to mention $$) if we apply our energies to improving these
> other balky areas rather than fine tuning our select statements.

That does not invalidate the rule "tailoring the query to your needs saves time in the DB tier". Fine-tuning your queries comes much later. Column selection, a.k.a. 'projection', is essential.

> While the data tier may be the bottleneck, I find that it rarely is in Line of
> Business Apps. Usually it is some other process that's hanging things up. Like
> all of those calculations to optimize the number of desired columns.

Which will be calculated in an entirely different tier and are as fast as finding out the rules for calculating the select arguments. Or should be - if not, the point 'impedance mismatch' would become interesting again.

> If the query can be satisfied from the cache, no select needed. As it happens,
> always selecting for the same columns ("select *" for example) simplifies the
> cache management logic and can reduce the number of trips to the database that
> you'd otherwise make with hand-crafted queries. For example, it's one trip
> only for the orders shipped to California no matter how many views ask for
> those orders. But it's at least three trips if three views ask about
> California orders with three distinct selects; probably more as the user
> bounces from view to view.

Self-written caching sounds like the premature optimization you mention below. DBMSes already cache data for you (MS, IBM, Oracle). Queries and data. And they can partially match queries, meaning they can reuse plans for slightly different queries.

The more you specify, the smaller the cached data. (Here we reach again the transfer limits - if you hit these, you might have requested more data than you actually need.)
Yes, if you have to make 3 queries instead of 1 "select *", you will surely (transfer limits ignored) get a faster response from the DB overall. But the idea is to combine the query into one fine SPJ query and let the optimizer of the DB (a) match the query in its query cache, (b) match the data in its data cache, and (c) return you the cached plus the meanwhile-changed data (see multi-user env).

> Measure, measure, measure.

See "query rewriting" and "view selection"-problems in the literature of the big DBMS-vendors for measurements.

> The "beginner mistake" is not using "select *". The "beginner mistake" - which
> is also the "expert's mistake" - is to anticipate performance problems and
> correct them prematurely. More often than not this introduces complexity,
> wastes time, and does nothing to improve overall application performance.

Now we are talking about different things. I say "write a query according to your needs" and not "try to optimize your query beforehand". Optimizing a (still) slow query will take place in DB tier and APP tier and will take place later, yes.

> If you've done the job well (and ORM helps), it is easy to refactor away a
> poorly performing query. It's really hard to refactor away a rat's nest of
> select statements.

If you had a bunch of "select *" queries, that would be easy to 'refactor': a performance gain where you didn't have to lose performance in the first place. To me that is like choosing bubble sort over quick sort for an average sorting problem.
And here things do start to become complicated: when your sorting problem grows, you refactor away the slow sort, redeploy one APP, done.
When your DB load grows, you refactor away a lot of sloppy "select *" queries - from _all_ APPs that access the DB layer - before you can even measure whether you need a bigger server or still have to rewrite, view-support, or index-support that one page-spanning query. And you do it all again when your schema is extended and the table gains 5 new columns that APP1, with its multiple "select *" statements, does not _need_.
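One way to keep that refactoring localized is to define the column list in one place instead of scattering "select *" across every APP. A minimal sketch, with hypothetical names - this is not from the article, just an illustration of the idea:

```python
# Hypothetical sketch: centralize the column list so a schema change
# (e.g. 5 new columns in the table) touches one definition, not every APP.
ORDER_COLUMNS = ["id", "state", "total"]  # the columns this APP actually needs

def orders_query(where_clause: str) -> str:
    """Build an explicit-column query instead of a sloppy 'select *'."""
    return f"SELECT {', '.join(ORDER_COLUMNS)} FROM orders WHERE {where_clause}"

sql = orders_query("state = 'CA'")
print(sql)  # SELECT id, state, total FROM orders WHERE state = 'CA'
```

When the schema gains columns the APP does not need, nothing here changes; when the APP's needs change, only `ORDER_COLUMNS` does.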