November 21, 2011 at 9:28 pm
Comments posted to this topic are about the item Dealing with Incomplete Data - A T-SQL Puzzle 1
November 22, 2011 at 2:28 am
What a really nicely written article to start the day off with, very enjoyable.
Suresh, have you tried using CROSS APPLY to partition the rows into "records" as an alternative to calculating the upper and lower bounds? You can then sequence the rows within each "record"; by joining the result to itself (staggered by one row), identifying the missing data becomes quite simple:
;WITH OrderedData AS (
    SELECT
        p.Seq_No,
        p.CData,
        x.RowID,
        rn = ROW_NUMBER() OVER (PARTITION BY x.RowID ORDER BY p.Seq_No)
    FROM #Test_Table p
    CROSS APPLY (
        -- RowID = Seq_No of the nearest delimiter row at or before this row,
        -- which effectively partitions the rows into "records"
        SELECT RowID = MAX(Seq_No)
        FROM #Test_Table
        WHERE CData = '*********************************************'
          AND Seq_No <= p.Seq_No
    ) x
)
SELECT t.*
FROM OrderedData t
WHERE t.RowID IN (
    -- a "record" is flagged when the row immediately after 'RateQuote' is blank
    SELECT r1.RowID
    FROM OrderedData r1
    INNER JOIN OrderedData r2
        ON r2.RowID = r1.RowID
       AND r2.rn = r1.rn + 1
    WHERE r1.CData = 'RateQuote'
      AND r2.CData = ' '
);
I wonder how this would perform against the original?
Cheers
ChrisM
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY (I) and (II) by Paul White
Hidden RBAR: Triangular Joins and The "Numbers" or "Tally" Table: What it is and how it replaces a loop, both by Jeff Moden
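For readers who want to try the snippet above on its own, here is a minimal sketch of the test table it assumes. Only the column names (Seq_No, CData), the delimiter row and the 'RateQuote' / blank values are taken from the queries in this thread; the data types and sample rows are illustrative guesses, and the article's actual setup may differ.

-- Hypothetical setup for the CROSS APPLY snippet above (not the article's exact DDL)
CREATE TABLE #Test_Table (
    Seq_No INT NOT NULL,       -- row sequence
    CData  VARCHAR(100) NULL   -- raw line contents
);

-- One complete record followed by one incomplete record
-- (the blank row after 'RateQuote' marks the second record as bad)
INSERT INTO #Test_Table (Seq_No, CData) VALUES
    (1, '*********************************************'),
    (2, 'RateQuote'),
    (3, 'Rate details for the first quote'),
    (4, '*********************************************'),
    (5, 'RateQuote'),
    (6, ' ');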
November 22, 2011 at 3:06 am
Dear Sir,
Very good use of CTEs. I came to know the real strength of CTEs after reading this article.
The best thing about CTEs is that they keep the code readable. It's very easy to understand a developer's approach in solutions based on CTEs.
Also, the real-life scenario you have given here helps me understand the kinds of problems and real-world situations that a DBA can face.
I am new to this forum and I want to be a good DBA.
Now I know that I am going to learn a lot here.
Thanks...
November 22, 2011 at 4:02 am
Dear Sir,
This is a really good alternative solution.
Can't I delete the selected bad rows in the same statement, like this?
WITH OrderedData AS (
    SELECT
        p.Seq_No,
        p.CData,
        x.RowID,
        rn = ROW_NUMBER() OVER (PARTITION BY x.RowID ORDER BY p.Seq_No)
    FROM #Test_Table p
    CROSS APPLY (
        SELECT RowID = MAX(Seq_No)
        FROM #Test_Table
        WHERE CData = '*********************************************'
          AND Seq_No <= p.Seq_No
    ) x
)
DELETE FROM t
OUTPUT deleted.Seq_No, deleted.CData
    INTO dbo.Test_Table_Bad_Records
FROM OrderedData t
WHERE t.RowID IN (
    SELECT r1.RowID
    FROM OrderedData r1
    INNER JOIN OrderedData r2
        ON r2.RowID = r1.RowID
       AND r2.rn = r1.rn + 1
    WHERE r1.CData = 'RateQuote'
      AND r2.CData = ' '
);
Thanks..
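One small note on the OUTPUT clause in that statement: unlike SELECT ... INTO, OUTPUT ... INTO does not create its target table, so dbo.Test_Table_Bad_Records has to exist before the DELETE runs. A minimal sketch, with the data types guessed to match the temp table:

-- Target for the OUTPUT clause; column types are assumptions matching #Test_Table
CREATE TABLE dbo.Test_Table_Bad_Records (
    Seq_No INT NOT NULL,
    CData  VARCHAR(100) NULL
);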
November 22, 2011 at 3:04 pm
Hi Chris,
Thanks for the nice feedback.
I appreciate the solution you have provided, and your code is shorter too, which makes me like it even more.
Just to see the performance perspective, I tried executing both code snippets in the same batch. The query cost (relative to the batch) was 17% for the one in the article and 83% for the alternative version.
Notwithstanding the minor performance difference (the data volume in the example is very limited), I still like your code for its brevity.
------Thanks and Best Regards,
Suresh.
November 22, 2011 at 10:07 pm
Suresh Kumar Maganti (11/22/2011)
The Query cost (relative to the batch) was 17% for the one in the article. The same for the alternative version was 83%.
The query cost relative to the batch actually doesn't mean much. It's quite easy to write two queries that do the same thing where one will be 100% of the batch and the other 0%, and yet, when you run them, the 0% query can take several seconds while the 100% query runs so fast it's almost not measurable.
--Jeff Moden
Change is inevitable... Change for the better is not.
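One way to compare the two versions more reliably than the plan's "cost relative to the batch" figure is to capture the actual execution statistics for each run, for example:

-- Turn on per-statement runtime measurements before running either version
SET STATISTICS TIME ON;   -- actual CPU time and elapsed time
SET STATISTICS IO ON;     -- actual logical reads per table

-- ...run the article's version here, then the CROSS APPLY version,
-- and compare the CPU/elapsed/reads figures printed on the Messages tab...

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

With only the handful of rows in the sample data the difference will be lost in the noise, so a larger test table would be needed for a meaningful comparison.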
November 23, 2011 at 9:43 am
Jeff Moden (11/22/2011)
Suresh Kumar Maganti (11/22/2011)
The Query cost (relative to the batch) was 17% for the one in the article. The same for the alternative version was 83%.
The query cost relative to the batch actually doesn't mean much. It's quite easy to write two queries that do the same thing where one will be 100% of the batch and the other 0%, and yet, when you run them, the 0% query can take several seconds while the 100% query runs so fast it's almost not measurable.
I'm relatively new to SQL tuning. I haven't heard of that happening. Can you explain how or why that might happen? Thank you.