March 19, 2010 at 5:25 am
I've learned quite a bit about indexing from all your posts, but the point I was originally getting at is that when using a parameter the query took over a minute to return zero rows, whereas with the value hard-coded it returned in less than a second.
Now, if the query were returning 800,000 rows out of 6 million, I can see that a table scan would be fastest. I guess I was getting hung up on the fact that it was doing a table scan just to discover that there were no matching rows at all. With the value hard-coded, it was able to determine that the index was the way to go, based on the frequency of values in the table. With the parameter, it apparently didn't sniff the value, and therefore chose the table scan based on the average statistics.
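A sketch of the behaviour described above (table and column names taken from the snippet later in this thread; the OPTION (RECOMPILE) hint is one common workaround, not necessarily what was used here):

```sql
-- Literal value: the optimizer sees the value at compile time and can use
-- the column histogram to estimate zero matching rows -> index seek.
SELECT *
FROM [dq].[TableX]
WHERE BatchExecutionID = '{00000000-0000-0000-0000-000000000000}';

-- Local variable: the value is unknown at compile time, so the optimizer
-- falls back to an average-density estimate -> table scan.
DECLARE @execid nvarchar(50);
SET @execid = '{00000000-0000-0000-0000-000000000000}';

SELECT *
FROM [dq].[TableX]
WHERE BatchExecutionID = @execid;

-- One workaround: force a recompile at execution time so the runtime
-- value of the variable is visible to the optimizer.
SELECT *
FROM [dq].[TableX]
WHERE BatchExecutionID = @execid
OPTION (RECOMPILE);
```

Note that local variables in an ad-hoc batch are never sniffed; stored-procedure parameters are sniffed, but only using the values supplied on the first compilation.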
Also, I can see that the nvarchar column needs to be changed: 72 bytes versus the 16 bytes of a uniqueidentifier makes the latter much better. That is something I will be resolving.
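The size difference is easy to confirm (a quick sketch; the 72 bytes corresponds to the 36-character GUID string without braces, at 2 bytes per nvarchar character):

```sql
DECLARE @s nvarchar(50)       = N'00000000-0000-0000-0000-000000000000';
DECLARE @g uniqueidentifier   = @s;

SELECT DATALENGTH(@s) AS nvarchar_bytes,  -- 72 (36 characters x 2 bytes)
       DATALENGTH(@g) AS guid_bytes;      -- 16
```

The smaller key also means more index rows per page, so the non-clustered index stays shallower and cheaper to seek.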
March 19, 2010 at 5:35 am
Tom Van Harpen (3/19/2010)
...I guess I was getting hung up with the fact it was doing a table scan to figure out that there were not any rows. With the value hardcoded it was able to determine that the index was the way to go based on the frequency of values in the table. With the parameter it apparently didn't sniff the parameter to know and therefore used the table scan based on the statistics.
Exactly right. 🙂
If you're interested in even more of the gory details, see Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005.
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
March 19, 2010 at 5:39 am
If all you wanted to do was to quickly check if a certain value was in the table you could have used something like this:
DECLARE @execid nvarchar(50);
SET @execid = '{00000000-0000-0000-0000-000000000000}';

SELECT TOP (1) *
FROM [dq].[TableX]
WHERE BatchExecutionID = @execid;
The TOP 1 means that the optimizer knows it will never have to read more than one row, so it will always use the non-clustered index and return immediately.
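An equivalent existence check can also be written with EXISTS, which likewise stops at the first matching row (a sketch using the same table, column, and variable names as the snippet above):

```sql
IF EXISTS (SELECT 1
           FROM [dq].[TableX]
           WHERE BatchExecutionID = @execid)
    PRINT 'Batch found';
ELSE
    PRINT 'Batch not found';
```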
March 19, 2010 at 5:44 am
Tom, you might also consider adding an Execute SQL Task to your package to grab a NEWSEQUENTIALID() value from the database instead of using the ExecutionInstanceGUID. It looks like you would need to add a table with this value as the primary key. This would make a nice batch run history table.
Regards
Piotr
...and your only reply is slàinte mhath
March 19, 2010 at 5:47 am
Piotr.Rodak (3/19/2010)
Tom, you might also consider adding an Execute SQL Task to your package to grab a NEWSEQUENTIALID() value from the database instead of using the ExecutionInstanceGUID. It looks like you would need to add a table with this value as the primary key. This would make a nice batch run history table.
That is an excellent idea, and it would make checking for the existence of a batch GUID easy and fast. Probably with a non-clustered primary key, though 😉
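A minimal sketch of such a batch history table (the table and constraint names are hypothetical; note that NEWSEQUENTIALID() can only appear in a DEFAULT constraint, not be called directly in a SELECT, which is why the GUID is generated on insert and read back with an OUTPUT clause):

```sql
CREATE TABLE dbo.BatchRunHistory
(
    BatchExecutionID uniqueidentifier NOT NULL
        CONSTRAINT DF_BatchRunHistory_ID DEFAULT NEWSEQUENTIALID(),
    StartedAt datetime NOT NULL
        CONSTRAINT DF_BatchRunHistory_Started DEFAULT GETDATE(),
    CONSTRAINT PK_BatchRunHistory
        PRIMARY KEY NONCLUSTERED (BatchExecutionID)
);

-- The Execute SQL Task inserts one row per package run and captures
-- the generated sequential GUID for use in the rest of the package.
INSERT dbo.BatchRunHistory
OUTPUT inserted.BatchExecutionID
DEFAULT VALUES;
```

Sequential GUIDs reduce page splits compared with random NEWID() values, while keeping the 16-byte key Paul mentioned.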
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
March 19, 2010 at 6:26 am
Piotr, that is a good suggestion. I used the ExecutionInstanceGUID since it was readily available. If I did generate my own batch keys, I might consider an identity column at that point, which would certainly reduce the size of the index.
And a note: I'm not checking for the existence of records. I just noticed, when I did a test run with the GUID of {000000-000000....}, that it took that long to return nothing. The objective at that point was not testing data movement, just validating that all the metadata was correct. That's what got me going down this road in the first place. I'm glad I did, though; it was very educational.
Thanks - Tom