September 19, 2010 at 8:52 am
ColdCoffee (9/19/2010)
And I have attached the "Text Mode" results. I promise at least 5 distinct environments' results tomorrow morning once I reach the office. Please tell me if I need to provide any further information...
That'll be awesome, CC. And that last run was just what I needed. Thanks.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 19, 2010 at 8:56 am
gah (9/19/2010)
Hi Jeff, happy to help.
Desktop
SQL Server 2008 R2 Dev ed. 10.50.1600.1
OS Name: Microsoft® Windows Vista Business
Version: 6.0.6002 Service Pack 2 Build 6002
System Manufacturer: Dell Inc.
System Model: Precision WorkStation 390
System Type: x64-based PC
Processor: Intel(R) Core(TM)2 CPU 6300 @ 1.86GHz, 1862 MHz, 2 Core(s), 2 Logical Processor(s)
Installed Physical Memory (RAM): 2.00 GB
Good luck
Regards Graham
Very cool... thanks, Graham. It doesn't appear that Vista is the dog they claimed it to be on these kinds of things.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 19, 2010 at 8:59 am
Jeff.....will run the revised code for you in a moment.
Note on my machine I have SQL throttled at 1GB.
regards Graham
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day
September 19, 2010 at 9:04 am
Jeff Moden (9/19/2010)
Nah... my apologies, Paul. I wouldn't make anyone read the whole script to figure out what they need to do to do ME a favor. That's an artifact from my previous test code and I'll remove it so it doesn't confuse anyone. Thanks for reading the code, though! I love a good peer review.
No worries - just thought I'd better check.
Also, I'm going to add two of Brad's... the one you posted will be commented as "XML-Brad1 (Split8KXMLBrad1 iTVF)". The other one is what I believe Brad meant to be the fastest and will be commented as XML-Brad (Split8KXMLBrad iTVF). Almost done with that. Just running a sanity check before I update the code.
Yeah, sadly (perhaps) I think the XML approach is doomed to be slower than the Tally - the cost of the XML shredding (nodes method) is just too high. That's to be expected I suppose - it does much more than just string splitting after all.
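For readers following along, the XML-shredding pattern being discussed generally looks something like the sketch below. This is illustrative only, not Brad's actual code; the function name and the single-comma delimiter are assumptions, and real input would also need &amp;, &lt;, and &gt; escaped before the CAST to XML.

```sql
-- Minimal sketch of an XML-based splitter (illustrative only).
-- The cost Paul mentions comes from the .nodes() shred of the converted XML.
CREATE FUNCTION dbo.SplitXmlSketch (@List VARCHAR(8000))
RETURNS TABLE
AS RETURN
SELECT Item = x.i.value('(./text())[1]', 'VARCHAR(8000)')
FROM (SELECT CAST('<r>' + REPLACE(@List, ',', '</r><r>') + '</r>' AS XML) AS ItemXml) a
CROSS APPLY a.ItemXml.nodes('/r') x(i);
```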
And thanks for jumping in on this thread. I always appreciate your comments and your code.
You're welcome! BTW, you know this, but for other people's benefit: I am not a SQLCLR zealot - Jeff and I agree to a very large extent on the different use cases for Tally and SQLCLR. 🙂
Paul
September 19, 2010 at 9:06 am
Mark-101232 (9/19/2010)
Hi Jeff,
Results attached for my creaky Fujitsu Amilo laptop running Vista Home Premium, 2GB RAM, Intel Core 2 Duo
with SQL Server 2008 Express R2.
Query took 27 mins 49 secs to complete.
Cheers
Mark
Thanks for the test results, Mark. And thanks for jumping in on the CLR discussion. Even though we know SQLCLR splitters tend to be faster, it always makes for interesting conversation. I really appreciate the "times" you posted on the subject.
It wasn't my intent to compare SQLCLR on this particular test but if folks want to, I think everyone will appreciate it. I know I will... I'm both a fan and one who expresses caution. Heh... of course, I do that with every bloody thing! 🙂
--Jeff Moden
Change is inevitable... Change for the better is not.
September 19, 2010 at 9:10 am
gah (9/19/2010)
Jeff.....will run the revised code for you in a moment.
Note on my machine I have SQL throttled at 1GB.
regards Graham
That will actually show up in the configuration output. Thanks for your time, Graham.
As a bit of a side bar, I'm in awe and a bit humbled that so many good folks like yourself have jumped in on this test. Thank you one and all.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 19, 2010 at 9:22 am
version 2 results attached.
regards Graham
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day
September 19, 2010 at 9:28 am
Paul White NZ (9/19/2010)
Mark-101232 (9/19/2010)
The difference on mine is far less pronounced.
Tally:
Table 'Tally'. Scan count 10000, logical reads 30000, physical reads 0
Table 'CsvTest'. Scan count 1, logical reads 774, physical reads 0
CPU time = 7719 ms, elapsed time = 7791 ms.
SQLCLR:
Table 'CsvTest'. Scan count 1, logical reads 774, physical reads 0
CPU time = 5610 ms, elapsed time = 5815 ms.
1. The logical reads difference seems about the same 🙂
2. CLR uses JIT (just-in-time) compilation, so the first few times you (ever) use the SQLCLR function it won't have been fully compiled to native machine code. Run the test several times to ensure the code is fully optimized.
3. If running on a laptop, ensure that your CPU is running at full clock speed (not on battery/ check power plan settings etc.) If in doubt, run something like CPU-Z to check.
Paul
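For context, the Tally-based split that produces statistics like the ones quoted above typically follows the pattern below. This is a sketch, not the exact test harness; it assumes a dbo.Tally table of sequential integers 1..N and a CsvTest table with RowNum and CsvParameter columns.

```sql
-- Sketch of a Tally-table splitter (illustrative; not the exact test code).
-- Each comma position found in the padded string marks the start of an item.
SELECT c.RowNum,
       Item = SUBSTRING(',' + c.CsvParameter + ',', t.N + 1,
                        CHARINDEX(',', ',' + c.CsvParameter + ',', t.N + 1) - t.N - 1)
FROM dbo.CsvTest c
JOIN dbo.Tally t
  ON t.N < LEN(',' + c.CsvParameter + ',')
 AND SUBSTRING(',' + c.CsvParameter + ',', t.N, 1) = ',';
```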
Re-ran a few times, seems to be consistent now
Tally:
Table 'Tally'. Scan count 10000, logical reads 30000, physical reads 0
Table 'CsvTest'. Scan count 1, logical reads 774, physical reads 0
CPU time = 8328 ms, elapsed time = 8177 ms.
SQLCLR:
Table 'CsvTest'. Scan count 1, logical reads 774, physical reads 0
CPU time = 2625 ms, elapsed time = 2668 ms.
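The IO and time figures being traded in these posts come from SQL Server's session statistics. Anyone wanting to reproduce them can wrap a run like this (a sketch; dbo.YourSplitterHere and the table names are placeholders for whichever splitter is under test):

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Run the splitter under test; assign results to variables
-- so client display time does not pollute the measurements.
DECLARE @RowNum INT, @Item VARCHAR(8000);
SELECT @RowNum = c.RowNum, @Item = split.Item
FROM dbo.CsvTest c
CROSS APPLY dbo.YourSplitterHere(c.CsvParameter) split;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```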
____________________________________________________
Deja View - The strange feeling that somewhere, sometime you've optimised this query before
How to get the best help on a forum
http://www.sqlservercentral.com/articles/Best+Practices/61537
September 19, 2010 at 9:29 am
Paul White NZ (9/19/2010)
And thanks for jumping in on this thread. I always appreciate your comments and your code.
You're welcome! BTW, you know this, but for other people's benefit: I am not a SQLCLR zealot - Jeff and I agree to a very large extent on the different use cases for Tally and SQLCLR. 🙂
Absolutely spot on. As you say... "you know this, but for other people's benefit:" I'll add that although it sometimes seems that I'm an Anti-SQLCLR Zealot, I'm not. Paul is correct in saying that he and I agree to a very large extent on the different use cases for Tally and SQLCLR (and a whole bunch of other things, to boot!) 🙂
Heh... if you want to see a "zealot", tell me you want to write an SQLCLR function to do a modulus because you don't know how to do it in T-SQL (it really happened). 🙂
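(For the record, the modulus in question needs no CLR at all; T-SQL has a native operator:)

```sql
-- T-SQL's native modulus operator: no SQLCLR required.
SELECT 17 % 5 AS Remainder;  -- returns 2
```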
--Jeff Moden
Change is inevitable... Change for the better is not.
September 19, 2010 at 9:32 am
Mark-101232 (9/19/2010)
Paul White NZ (9/19/2010)
Mark-101232 (9/19/2010)
The difference on mine is far less pronounced.
Tally:
Table 'Tally'. Scan count 10000, logical reads 30000, physical reads 0
Table 'CsvTest'. Scan count 1, logical reads 774, physical reads 0
CPU time = 7719 ms, elapsed time = 7791 ms.
SQLCLR:
Table 'CsvTest'. Scan count 1, logical reads 774, physical reads 0
CPU time = 5610 ms, elapsed time = 5815 ms.
1. The logical reads difference seems about the same 🙂
2. CLR uses JIT (just-in-time) compilation, so the first few times you (ever) use the SQLCLR function it won't have been fully compiled to native machine code. Run the test several times to ensure the code is fully optimized.
3. If running on a laptop, ensure that your CPU is running at full clock speed (not on battery/ check power plan settings etc.) If in doubt, run something like CPU-Z to check.
Paul
Re-ran a few times, seems to be consistent now
Tally:
Table 'Tally'. Scan count 10000, logical reads 30000, physical reads 0
Table 'CsvTest'. Scan count 1, logical reads 774, physical reads 0
CPU time = 8328 ms, elapsed time = 8177 ms.
SQLCLR:
Table 'CsvTest'. Scan count 1, logical reads 774, physical reads 0
CPU time = 2625 ms, elapsed time = 2668 ms.
Because I honestly don't know, let me ask if the SQLCLR will "uncache" like an old execution plan might and, if it does, will it "revert" back to needing to be executed a couple of times to "re-optimize"?
--Jeff Moden
Change is inevitable... Change for the better is not.
September 19, 2010 at 9:34 am
Jeff, this is from environment 2.
~Edit: all my tests will be on desktop machines; I don't have a laptop yet, Jeff. The rest of the environments I will give you tomorrow.
September 19, 2010 at 9:39 am
Jeff Moden (9/19/2010)
Because I honestly don't know, let me ask if the SQLCLR will "uncache" like an old execution plan might and, if it does, will it "revert" back to needing to be executed a couple of times to "re-optimize"?
Excellent question - and the answer is no. Once compiled to native machine code, the assembly only needs re-compiling if it is replaced by a new implementation (ALTER ASSEMBLY). The native code version is written to the file system, and survives server/SQL restarts/service packs etc. This is a .NET feature, and completely independent of SQL Server.
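For anyone new to SQLCLR deployment, the re-compile trigger Paul mentions is the ALTER ASSEMBLY statement. A typical registration looks roughly like this (a sketch with placeholder assembly, class, file, and function names):

```sql
-- Registering a SQLCLR splitter (all names here are placeholders).
CREATE ASSEMBLY SplitterAssembly
FROM 'C:\clr\Splitter.dll'
WITH PERMISSION_SET = SAFE;

CREATE FUNCTION dbo.SplitClr (@List NVARCHAR(MAX), @Delim NCHAR(1))
RETURNS TABLE (Item NVARCHAR(4000))
AS EXTERNAL NAME SplitterAssembly.[UserDefinedFunctions].SplitClr;

-- Replacing the DLL later is what forces .NET to re-compile the new code:
-- ALTER ASSEMBLY SplitterAssembly FROM 'C:\clr\Splitter_v2.dll';
```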
September 19, 2010 at 9:42 am
gah (9/19/2010)
version 2 results attached.
regards Graham
Now THAT's interesting! Even though the "XML-1 (Split8KXML1 mlTVF)" multi-line table valued function still uses more logical reads, it still beat the new additions for CPU and Duration by a factor of more than 3:1.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 19, 2010 at 9:44 am
Paul White NZ (9/19/2010)
Jeff Moden (9/19/2010)
Because I honestly don't know, let me ask if the SQLCLR will "uncache" like an old execution plan might and, if it does, will it "revert" back to needing to be executed a couple of times to "re-optimize"?
Excellent question - and the answer is no. Once compiled to native machine code, the assembly only needs re-compiling if it is replaced by a new implementation (ALTER ASSEMBLY). The native code version is written to the file system, and survives server/SQL restarts/service packs etc. This is a .NET feature, and completely independent of SQL Server.
Again, still in the "I don't know so be gentle with me" mode, why is it then that you have to run it a couple of times to get it to "sink in"?
--Jeff Moden
Change is inevitable... Change for the better is not.
September 19, 2010 at 9:45 am
Version 2 results...:satisfied: