|
Yeah, that basically sums it up
|
johnsyd wrote: Inserts, updates and deletes had to go through the same process: XML blob -> temp table -> apply insert/update/delete -> convert back to XML blob, save back in the "table".
Presumably without any locking to ensure that concurrent changes don't get lost?
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
Enterprisey. It's like the modern version of emulating a 1-M relationship using a delimited string in a text column.
|
I work for a large company which has an odd habit of splitting logically related tables into separate databases. So instead of one database ABC, you have databases ABC_CLIENT, ABC_PORTFOLIO, ABC_PRICING, etc. To join client data to their own portfolio data you need a cross-database query, and including pricing data means yet another cross-database connection.
A friend who works for another large company in the same industry says his company considers this "best practice". No one I've talked to thinks it's a good idea, let alone "best practice". What do you think?
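The pain described above is easy to reproduce in miniature. Here is a minimal sketch using Python's `sqlite3`, where an attached second database plays the role of ABC_PORTFOLIO; all table and column names are invented for illustration, not taken from the poster's system:

```python
import sqlite3

# One connection, two "databases": the main one plays ABC_CLIENT,
# an attached in-memory database plays ABC_PORTFOLIO.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS portfolio")

conn.execute("CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE portfolio.holding (client_id INTEGER, symbol TEXT)")

conn.execute("INSERT INTO client VALUES (1, 'Acme')")
conn.execute("INSERT INTO portfolio.holding VALUES (1, 'MSFT')")

# Every query relating a client to their holdings now has to be
# written as a cross-database join against the attached database.
rows = conn.execute(
    "SELECT c.name, h.symbol "
    "FROM client AS c JOIN portfolio.holding AS h ON h.client_id = c.id"
).fetchall()
print(rows)  # [('Acme', 'MSFT')]
```

The join works, but note that the relationship between `client` and `holding` now exists only in the query text: nothing in either database knows the other one exists.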
|
Client and pricing information in separate databases.
Wait until the company grows and the transaction volume reaches several million per day.
|
Exactly -- they get away with it because this particular product has a limited number of clients and low transaction volumes.
|
Our company (banking industry; over 500'000 credit cards at the time I worked in that area, so it could be double by now; bank accounts; etc.) splits data between credit card, bank and hybrid (for both) databases, but that's as far as it goes in the DB2 area. There are some Oracle DBs used mainly internally and some MS SQL DBs used for internal intranet web sites, but that's OK because they are totally unrelated.
The signature is in building process.. Please wait...
|
I've already faced a similar problem with SOA architectures.
If you split your business logic across several self-contained services, sooner or later you'll need to show data that comes from several services on a grid or in a report.
You shouldn't do joins across the services' databases, even if it's possible, because that breaks the decoupling principle -- but when performance starts to be an issue... you know how the story goes from there, don't you?!
So the only reason I can see here (even if it's not a good one) is an attempt to implement this "SOA" concept at the DB level only (say... Service Oriented Databases?), separating "services" by database.
Now I'm curious to know if I'm right!
|
I think you are right, but I hope you are wrong. The key to SOA is to implement it throughout the vertical; if not everything is in SOA, it becomes an inhibitor and a PAIN.
____________________________________________________________
Be brave little warrior, be VERY brave
|
I would put them in the same DB, but in different schemas.
|
Yep, MS's AdventureWorks sample does exactly this.
I don't understand the reasons, but on SQL Server, how much of a penalty is there to having multiple databases?
|
If the databases are all on the same server instance there is little performance impact; if they are on different server instances, there is a big impact.
It also complicates issues like disaster recovery -- say one of the databases fails over but the others don't. You have to keep all the database permissions in sync, which can become quite onerous: if a stored procedure accesses tables in 5 different databases, it needs the right permissions on all of them. You also need to make sure the database-level settings are consistent, and backups need to be coordinated so that, if a restore becomes necessary, you're using backups taken at the same time.
My issue is when you have related tables and someone splits them into multiple databases. I can't see any benefit in the splitting, and there are ongoing maintenance problems you introduce by doing it; you also have multiple points of failure. You may want to keep tables which are often JOINed on different physical disks for performance, but another member mentioned that you can do this within the same database using FILEGROUPs.
|
"Best practice" or "common pattern" -- people tend to confuse the two. A best practice will have justifications; a common pattern probably won't. A best practice is best for specific reasons and conditions, and those will tell you whether it applies to you. Try to get the details behind a "best practice" and you are likely to find out that it is really just a common pattern.
Something like this sounds more like a common pattern than a best practice, and I still don't see the advantage or point of doing it.
IT people (including me) tend to overcomplicate things; this sounds like one of those cases.
____________________________________________________________
Be brave little warrior, be VERY brave
|
Are they putting the databases on different physical drives? If so, they might not know about filegroups[^], which would allow them to put different tables from the same database on different drives.
Splitting related tables between different databases is a terrible idea. Apart from anything else, you can't have DRI between the tables. You end up using triggers - or worse, external application code - to enforce referential integrity.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
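The DRI point above is the crux: within one database the engine enforces the relationship for you. A minimal sketch in Python's `sqlite3` (schema invented for illustration; note SQLite needs the `foreign_keys` pragma enabled per connection):

```python
import sqlite3

# Within a single database, declarative referential integrity (DRI)
# is a one-line REFERENCES clause; split the tables into separate
# databases and this option disappears.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.execute("CREATE TABLE client (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE portfolio ("
    "  id INTEGER PRIMARY KEY,"
    "  client_id INTEGER NOT NULL REFERENCES client(id)"
    ")"
)
conn.execute("INSERT INTO client VALUES (1)")
conn.execute("INSERT INTO portfolio VALUES (10, 1)")  # fine: client 1 exists

try:
    conn.execute("INSERT INTO portfolio VALUES (11, 999)")  # no such client
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)  # True: the engine rejected the orphan row for us
```

With the tables in separate databases, that rejection has to be re-implemented by hand in triggers or application code, with all the race conditions that implies.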
|
I've seen a system spec written up that required the developer to use SMO to create user-specific tables for each new user in a SQL database, e.g.
[dbo].[JohnSmith_Orders]
[dbo].[JohnSmith_Profile]
[dbo].[JohnSmith_Settings]
I think you get the idea. The reasoning behind it?
"Each user gets their own sub-schema so that no user data leaks into another user"
Surely the best way of preventing this would be to have a separate app for each user?
There are people out there who should not be trusted with crayons. Stuff like this is proof that yes, you really can get your dick caught in a ceiling fan. You just need to try hard enough.
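For contrast, the conventional alternative to per-user tables is one shared table with a user column, with every query filtered by it. A minimal sketch in Python's `sqlite3` (the `orders` schema and names are invented for illustration):

```python
import sqlite3

# One shared table instead of [JohnSmith_Orders], [JaneDoe_Orders], ...
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_name TEXT NOT NULL, item TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    ("JohnSmith", "widget"),
    ("JaneDoe", "gadget"),
])

def orders_for(user):
    # A parameterized filter keeps each user's rows separate without
    # stamping the user name into the table name via SMO.
    return conn.execute(
        "SELECT item FROM orders WHERE user_name = ?", (user,)
    ).fetchall()

print(orders_for("JohnSmith"))  # [('widget',)]
```

"No user data leaks into another user" then reduces to getting one WHERE clause right, rather than generating and maintaining N copies of the schema.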
|
I've seen one worse: an events management system that created a new database for each event.
Fortunately, the only involvement I had with the system was recommending it be replaced. I feel sorry for the guys who were trying to report across all the events -- especially as the SQL Server was having to constantly unload databases to load new ones...
|
Another magnificent specimen for my scrapbook of SQL horrors!!
Why create new rows for an event when you can create a whole new database?
|
Yikes!! The wrongness of that approach is almost awe-inspiring!
I'll add it to my scrapbook of SQL horrors.
|
I'm rewriting a legacy ASP application. There's an expensive call to an external web service and a database to determine a calculated setting specific to the person logged in.
On the first page this calculation is done and the result is stored in a session variable.
On every subsequent page the session variable is ignored, the value is recalculated, and then it is stored in the session variable again!
Some days I don't know whether to laugh or cry.
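The fix the original author skipped is a one-line cache check. A minimal sketch, with the session modeled as a plain dict and `expensive_setting()` standing in for the web-service-plus-database call (both names are invented for illustration):

```python
# Compute the expensive per-user setting once per session, then reuse it.
calls = 0

def expensive_setting(user):
    global calls
    calls += 1            # stands in for the slow web-service + DB work
    return f"setting-for-{user}"

def get_setting(session, user):
    # Only recalculate when the session doesn't already hold the value;
    # this is the check the legacy pages were missing.
    if "setting" not in session:
        session["setting"] = expensive_setting(user)
    return session["setting"]

session = {}
for _ in range(5):        # five page views in the same session
    value = get_setting(session, "alice")
print(value, calls)  # setting-for-alice 1  -- computed once, not five times
```

The legacy code did the equivalent of calling `expensive_setting()` unconditionally on every page and then writing the result into a session variable nobody ever read back.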
|
Sometimes, you just want to go into a code review with a baseball bat and a spiked glove...
The universe is composed of electrons, neutrons, protons and......morons. (ThePhantomUpvoter)
|
Cool -- whoever wrote that code invented the write-only variable. What shall we call it? A "wrariable"?
|
That kind of innovation must be why this guy is now a manager. My only solace is that he's no longer writing "code".
|
He's a manager now? Well, that explains a lot. I know quite a few managers who also write (or wrote) code, and it was among the most horrible code I've ever seen. I think you might still find examples of it here in TWATW.
|