December 28, 2010 at 2:04 pm
We have a server with 30+ databases on it for different external clients; the databases are all similar in structure. I need to update one field in one table, but I need to propagate that update through to all of the databases. Here is the code I use for one database:
USE VP_Client1_V05
DECLARE @sql VARCHAR(4000)
SET @sql = ('SELECT DISTINCT CompanyID AS HierarchyCoID,
ISNULL(Company1Name, '''')
+ ISNULL(''--> '' + Company2Name, '''')
+ ISNULL(''--> '' + Company3Name, '''')
+ ISNULL(''--> '' + Company4Name, '''')
+ ISNULL(''--> '' + Company5Name, '''') AS HierarchyName
FROM vwAllLevelsHierarchyData
WHERE MasterGroupID = @MasterGroupID
AND ( ( EXISTS ( SELECT *
FROM tblClientSetup
WHERE ClientSetupFieldName = ''ClientHierarchyLevel''
AND ClientSetupFieldValue in (''Lease'',''Both'') )
AND CompanyID IN ( SELECT la.TenantID
FROM tblLeaseAbstract la )
)
OR CompanyID IN ( SELECT b.TenantID
FROM tblBuilding b )
)')
UPDATE dbo.tblCriteria
SET CriteriaSourceSQL = @sql
WHERE CriteriaSetID=30
AND CriteriaDesc='Select Hierarchy'
I need to provide a script that does this automatically. Has anyone done this before?
December 28, 2010 at 2:08 pm
You can do this with something like sp_MSForEachDB.
What I recommend, however, is if the data is the same in each database, move it to one database and have the rest reference it. Then you only have to maintain it in one place.
I recommend a "Common" database that holds tables/data that are common to all customers.
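For the immediate one-off update, a rough sketch of the sp_MSForEachDB approach might look like this (the VP_Client% naming filter is an assumption based on the original post, and sp_MSForEachDB is an undocumented, unsupported system procedure):

```sql
-- sp_MSForEachDB substitutes each database name for the ? placeholder.
-- The name filter below is an assumed convention; adjust to match your databases.
EXEC sp_MSForEachDB '
IF ''?'' LIKE ''VP_Client%''
BEGIN
    UPDATE [?].dbo.tblCriteria
    SET    CriteriaSourceSQL = ''...new SQL text here...''
    WHERE  CriteriaSetID = 30
      AND  CriteriaDesc  = ''Select Hierarchy'';
END';
```

One caveat: because the text you're storing in CriteriaSourceSQL already contains doubled-up quotes, every single quote has to be doubled *again* inside the sp_MSForEachDB string, which gets unwieldy fast.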
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
December 29, 2010 at 6:28 am
GSquared (12/28/2010)
You can do this via means like sp_MSForEachDB. What I recommend, however, is if the data is the same in each database, move it to one database and have the rest reference it. Then you only have to maintain it in one place.
I recommend a "Common" database that holds tables/data that are common to all customers.
That approach can work, but it can also lead to issues if you have to move databases between servers. I hate having cross-database dependencies between separate applications. In the past it kept us from upgrading a particular app because it depended on another database, which was hooked to yet another database for an app that couldn't be upgraded... a giant pain in the tuchus. Then you have to create multiple "common" databases and maintain them all over the place.
However, it really can work.
Another alternative, and one with its own set of issues, is to set up a service for common data. Then you can have a single database, completely independent of the other databases, and just provide the data through a service call. We've done that fairly successfully.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
December 29, 2010 at 7:37 am
Yes, there are a number of ways to go about the mechanics of storing common data in one place. They all have advantages/drawbacks.
I'd say that, under most circumstances, maintaining a Common database on multiple servers (probably through replication) will still be easier and more efficient than maintaining common data in each database.
If, however, you need to keep it in each database, replication would probably allow for a reasonably efficient means of keeping the tables synchronized.
December 29, 2010 at 7:46 am
Apologies. I think that might have sounded more argumentative than I meant it to.
I agree. There are a number of ways to skin this particular cat, and none of them are perfect. I prefer to avoid cross-database queries because of the many issues I've had with them in the past, but that's just an opinion.
December 29, 2010 at 8:09 am
Thanks for all your input. I agree with having everything in a common DB, but alas, I just started here and this is what I have to work with 😉
Still trying to wrap my head around sp_MSforeachdb; I'm going to give it a shot. If that doesn't work, I'll build a WHILE loop to take care of my entries 😉
December 30, 2010 at 6:52 am
The for-each procs use cursors; they just hide it a bit. If you're comfortable doing the update that way, then write a simple cursor that selects from sys.databases and dynamically updates the table in each one. That's the easiest way to handle this once, but do look into consolidation or replication as a longer-term solution.
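A sketch of that cursor approach, using sp_executesql with a parameter so the new SQL text doesn't need an extra layer of quote-doubling (the VP_Client% filter is an assumed naming convention from the original post):

```sql
-- One-off update across all client databases via a cursor over sys.databases.
DECLARE @db     sysname,
        @cmd    NVARCHAR(MAX),
        @newSql NVARCHAR(MAX) = N'...the new CriteriaSourceSQL text...';

DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM   sys.databases
    WHERE  name LIKE N'VP_Client%'   -- assumed client-DB naming convention
      AND  state_desc = N'ONLINE';   -- skip offline/restoring databases

OPEN db_cur;
FETCH NEXT FROM db_cur INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build a three-part name for each database; QUOTENAME guards the name.
    SET @cmd = N'UPDATE ' + QUOTENAME(@db) + N'.dbo.tblCriteria
                 SET    CriteriaSourceSQL = @sql
                 WHERE  CriteriaSetID = 30
                   AND  CriteriaDesc  = N''Select Hierarchy'';';

    -- Pass the new text as a parameter instead of embedding it in the string.
    EXEC sp_executesql @cmd, N'@sql NVARCHAR(MAX)', @sql = @newSql;

    FETCH NEXT FROM db_cur INTO @db;
END;

CLOSE db_cur;
DEALLOCATE db_cur;
```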
December 30, 2010 at 7:01 am
Grant Fritchey (12/29/2010)
Apologies. I think that might have sounded more argumentative than I meant for it to. I agree. There are a number of ways to skin this particular cat. None of them are perfect. I do have a preference to avoid cross-database queries, just because of so many issues in the past, but that's an opinion.
Actually, per my tests and data I've seen published elsewhere, it's not just an opinion on your part. There's a 3-5% performance hit just for doing a cross-database query in every test I've done on it, and others have had similar results.
This can be compensated for by keeping the Common database set to Read Only most of the time. In a read-only database, SQL Server doesn't even try to figure out what kind of locks to take, which can usually make up for that cross-database hit by saving resources on every query that touches it. Read-only filegroups are treated similarly, but they add the maintenance overhead of multiple files per database, and backups become more complex in that scenario.
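Toggling that read-only state around maintenance windows might look like this (the database name CommonData is illustrative):

```sql
-- Put the shared database into read-only mode; reads then take no locks.
ALTER DATABASE CommonData SET READ_ONLY WITH ROLLBACK IMMEDIATE;

-- Reopen it briefly to apply an update, then lock it back down.
ALTER DATABASE CommonData SET READ_WRITE WITH ROLLBACK IMMEDIATE;
-- ...apply the common-data updates here...
ALTER DATABASE CommonData SET READ_ONLY WITH ROLLBACK IMMEDIATE;
```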
Overall, I've found the trade-off of having a Common database to be worth the effort. It saves on backup/restore complexity, I can make the performance issues disappear, and if I reference it via synonyms, I can move resources into a stand-alone database if I need to. And it avoids the "whoops, we updated that table everywhere except our most important customer's database" problem.
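The synonym trick is worth spelling out: each client database references the shared table through a synonym, so if the common data ever moves, only the synonym needs repointing (CommonData and CommonCriteria are illustrative names):

```sql
-- In each client database, reference the shared table through a synonym
-- instead of hard-coding the three-part name in views and procedures.
CREATE SYNONYM dbo.CommonCriteria
    FOR CommonData.dbo.tblCriteria;

-- Later, repoint it without touching any code that uses dbo.CommonCriteria,
-- e.g. after moving the common database to another server via a linked server:
DROP SYNONYM dbo.CommonCriteria;
CREATE SYNONYM dbo.CommonCriteria
    FOR OtherServer.CommonData.dbo.tblCriteria;
```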
And, no, you didn't come across as argumentative. Every DBA situation has that "it depends" factor, and a very heavy "your mileage may vary" factor. You just came across as raising those issues.