June 29, 2017 at 9:30 am
Thanks, everybody. We're getting some promising-looking results from a very similar idea - that of making #InsertedDeleted from a full join of #Inserted and #Deleted and then using it in the final query, i.e. along the lines of:
Insert Into #InsertedDeleted
Select I.*, D.*
From #Inserted I
Full Join #Deleted D On (D.ServiceLevelID = I.ServiceLevelID)
and then
When 'DefaultFundingPaidByProvider' Then 703028
When 'FinancialReturnSubcategoryID' Then 717012
End ColumnID
From #InsertedDeleted ID
Join sys.columns SC On (SC.Object_ID = @ObjectID)
Left Join dbo.TRef_AgeBandPricingScheme New10 With (NoLock) On (New10.[AgeBandPricingSchemeID] = ID.[IAgeBandPricingSchemeID])
Left Join dbo.TRef_AgeBandPricingScheme Old10 With (NoLock) On (Old10.[AgeBandPricingSchemeID] = ID.[DAgeBandPricingSchemeID])
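Fleshed out a bit, the final query is roughly of this shape (the audit table dbo.TAudit_ColumnChange, its column list and the Old/New value expressions below are placeholders rather than the actual names, and the real query has many more Case branches and reference-table joins):
-- dbo.TAudit_ColumnChange and the OldValue/NewValue expressions are placeholders
Insert Into dbo.TAudit_ColumnChange (ServiceLevelID, ColumnID, OldValue, NewValue)
Select IsNull(ID.IServiceLevelID, ID.DServiceLevelID),
       Case SC.Name
         When 'DefaultFundingPaidByProvider' Then 703028
         When 'FinancialReturnSubcategoryID' Then 717012
         -- ... one branch per audited column ...
       End,
       Case SC.Name
         When 'AgeBandPricingSchemeID' Then Old10.[Description]  -- old value, per column
         -- ...
       End,
       Case SC.Name
         When 'AgeBandPricingSchemeID' Then New10.[Description]  -- new value, per column
         -- ...
       End
From #InsertedDeleted ID
Join sys.columns SC On (SC.Object_ID = @ObjectID)  -- one output row per column of the table
Left Join dbo.TRef_AgeBandPricingScheme New10 With (NoLock) On (New10.AgeBandPricingSchemeID = ID.IAgeBandPricingSchemeID)
Left Join dbo.TRef_AgeBandPricingScheme Old10 With (NoLock) On (Old10.AgeBandPricingSchemeID = ID.DAgeBandPricingSchemeID)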
We've also tried an earlier version of this code, in which each column is handled by a separate insert statement. (A few years ago we foolishly thought that it would be better to do "everything in one go" rather than one-column-at-a-time. SQL prefers to do set-based operations, no?)
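For illustration, a one-column-at-a-time version of the same audit would be a series of statements roughly like this, one per audited column (placeholder audit table again; 703028 is just the example ColumnID from above, and the changed-value filter may or may not be present in the original):
Insert Into dbo.TAudit_ColumnChange (ServiceLevelID, ColumnID, OldValue, NewValue)
Select IsNull(I.ServiceLevelID, D.ServiceLevelID), 703028,
       D.DefaultFundingPaidByProvider, I.DefaultFundingPaidByProvider
From #Inserted I
Full Join #Deleted D On (D.ServiceLevelID = I.ServiceLevelID)
-- optional null-safe filter: only rows where the value actually changed
Where Exists (Select D.DefaultFundingPaidByProvider Except Select I.DefaultFundingPaidByProvider)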
June 29, 2017 at 9:37 am
julian.fletcher - Thursday, June 29, 2017 9:30 AM
Thanks, everybody. We're getting some promising-looking results from a very similar idea - that of making #InsertedDeleted from a full join of #Inserted and #Deleted and then using it in the final query.
Julian, I think a significant contributor to the problem is the combination of all those joins and the unpivot resulting from the cross join to all the columns of the table, which is precisely why I've put them into physically separated queries.
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and Using APPLY (I) and (II) - Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop - Jeff Moden
June 29, 2017 at 9:40 am
julian.fletcher - Thursday, June 29, 2017 9:30 AM
(A few years ago we foolishly thought that it would be better to do "everything in one go" rather than one-column-at-a-time. SQL prefers to do set-based operations, no?)
Usually you want set-based and single statements, but it is possible to go too far and have to break things up for good performance.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 29, 2017 at 12:48 pm
GilaMonster - Thursday, June 29, 2017 9:40 AM
julian.fletcher - Thursday, June 29, 2017 9:30 AM
(A few years ago we foolishly thought that it would be better to do "everything in one go" rather than one-column-at-a-time. SQL prefers to do set-based operations, no?)
Usually you want set-based and single statements, but it is possible to go too far and have to break things up for good performance.
Actually I haven't thought that for about 20 years now. I'm about as atypical as you can get though.
I have lost track of the times I have broken down "Godzilla Queries" at clients to both improve and normalize query performance. And it is actually necessary in more than just the "way too many things" scenarios. Constructs where either widely-varying inputs ACTUALLY lead to vast differences in rowcounts, OR the optimizer simply ESTIMATES significantly out-of-whack rowcounts, need to be restructured to incorporate one or more temporary tables. Note that "temp table" is correct there - NOT table variable - due to the lack of proper statistics on the latter.
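A generic illustration of that kind of restructuring (hypothetical tables, columns and parameter, nothing to do with the schema in this thread):
-- Stage 1: materialise the volatile intermediate result in a temp table.
-- A temp table (NOT a table variable) gets real column statistics, so the
-- optimizer sees an accurate rowcount for the next step.
Select O.OrderID, O.CustomerID, O.OrderTotal
Into #FilteredOrders
From dbo.Orders O
Where O.OrderDate >= @StartDate   -- rowcount can vary wildly with @StartDate

-- Optionally index the staged data if the next step joins on it heavily
Create Clustered Index IX_FilteredOrders On #FilteredOrders (CustomerID)

-- Stage 2: the final query now starts from a small, statistics-backed input
Select C.CustomerName, Sum(F.OrderTotal) As Total
From #FilteredOrders F
Join dbo.Customers C On (C.CustomerID = F.CustomerID)
Group By C.CustomerName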
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
June 30, 2017 at 6:23 am
I guess I just don't understand the use case that requires this. Foreign Keys exist to enforce a relationship, which you are effectively overriding and creating a means of soft-deleting. Why bother to have the Foreign Key in the first place? What would you do if you made a mistake in tying one record to another, and needed to change that relationship to not actually exist? If knowing the history of the data changes is important, you can always create a history table and use an AFTER DELETE trigger to populate it. That lets the foreign key constraints do what they need to do, and doesn't require any calculated fields. Soft-deleting records tends to gum up the table with no longer needed data that is better kept in a history table. Just my two cents...
It has been a very longstanding requirement, but if you have a better solution I'd love to hear it. Essentially, it's to enable a user to "get rid of" something added in error. We can't hard-delete for the usual and obvious reasons, so we simply added a Deleted flag and take that into account where necessary and appropriate.
We do have FKs to enforce relationships in the usual way - i.e., columns referencing the PK column of another table. That's obviously a way of preventing invalid data from entering the database. But then we decided to use FKs (in some cases) to prevent other kinds of "invalid data" from getting in - not where one row was trying to reference a non-existent PK value in another table, but where a calculated column didn't match a calculated column in a different table. The details might be different but the result is the same; the FK prevents "invalid data" from getting into the database. (It's just a different type of invalid.)
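For readers wondering what an FK over calculated columns can look like, one well-known shape of the technique is sketched below (hypothetical tables, not necessarily the schema being discussed): a persisted computed column on the child pairs with the parent's Deleted flag, so the FK only "matches" parents that have not been soft-deleted.
Create Table dbo.Parent
(
    ParentID int Not Null Primary Key,
    Deleted  bit Not Null Default (0),
    -- (ParentID, Deleted) is trivially unique, so a foreign key can reference it
    Constraint UQ_Parent_ParentID_Deleted Unique (ParentID, Deleted)
)

Create Table dbo.Child
(
    ChildID  int Not Null Primary Key,
    ParentID int Not Null,
    -- persisted computed constant that pairs with the parent's Deleted flag
    ParentNotDeleted As Cast(0 As bit) Persisted,
    -- rejects a child that points at a soft-deleted parent, and blocks
    -- soft-deleting a parent that still has children
    Constraint FK_Child_Parent_Live
        Foreign Key (ParentID, ParentNotDeleted)
        References dbo.Parent (ParentID, Deleted)
)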
June 30, 2017 at 7:32 am
I just noticed the part about previously using triggers to enforce complex logic and switching to using FKs on complex computed persisted columns. Without details, my gut tells me the triggers would be better for that, although digging into it is way beyond a free forum task.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
June 30, 2017 at 10:20 am
julian.fletcher - Friday, June 30, 2017 6:23 AM
It has been a very longstanding requirement, but if you have a better solution I'd love to hear it. Essentially, it's to enable a user to "get rid of" something added in error. We can't hard-delete for the usual and obvious reasons, so we simply added a Deleted flag and take that into account where necessary and appropriate.
"We can't hard delete for the usual and obvious reasons" sounds like an excuse not to build an audit or history table and the associated trigger logic, which, to be honest, isn't really all that challenging. As to computing something that can be referenced as a foreign key, the only way I'm usually willing to accept that is when it's a concatenation of two (or maybe 3 or 4) character fields. Otherwise, I have to wonder why that would be necessary. The larger issue with attempting to help with that is that the analysis necessary is well beyond the scope of free forum help.
Table design can go in a lot of different directions, but using a feature just because you can, rather than coming up with alternatives that work just as well without the risks, doesn't usually result in longer-term happiness. Again, just my two cents, for whatever that's worth. I've come across a lot of funky designs in my day, many of which were in the "too heavily invested in it to be willing to change it" category, but often it was because the design just wasn't very thorough.
Steve (aka sgmunson)
Rent Servers for Income (picks and shovels strategy)