September 23, 2009 at 1:42 pm
Great article, Lynn! I might finally be ready to stop cross-joining master..syscolumns. 🙂
-- RBarryYoung, (302)375-0451, blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
September 23, 2009 at 1:50 pm
I might finally be ready to stop cross-joining master..syscolumns.
I can't stop either, Barry. It's a sickness, I tell you. :w00t:
Maybe we should form a support group?
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
September 23, 2009 at 2:26 pm
RBarryYoung (9/23/2009)
Great article, Lynn! I might finally be ready to stop cross-joining master..syscolumns. 🙂
That's just SICK!! Blasphemy I tell you!!! 😀
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
September 23, 2009 at 4:02 pm
RBarryYoung (9/23/2009)
Great article, Lynn! I might finally be ready to stop cross-joining master..syscolumns. 🙂
I already quit... heh... I use Master.sys.All_Columns now. 😛
--Jeff Moden
Change is inevitable... Change for the better is not.
September 23, 2009 at 4:42 pm
Jeff Moden (9/23/2009)
RBarryYoung (9/23/2009)
Great article, Lynn! I might finally be ready to stop cross-joining master..syscolumns. 🙂
I already quit... heh... I use Master.sys.All_Columns now. 😛
Heh. I try to stick to only either syscolumns or system_columns because their number and content are pretty reliably consistent within a version.
-- RBarryYoung, (302)375-0451, blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
September 23, 2009 at 5:17 pm
Jeff Moden (9/22/2009)
Keep in mind that the display is the "great equalizer"... you cannot judge real performance when returning result sets to the screen.
Jeff - excellent advice. My question is: how do you do this when you're working on tuning a stored procedure?
Lynn - SUPERB article. And as is typical for such a great article, an excellent discussion has sprung forth. Congratulations.
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
September 23, 2009 at 5:26 pm
WayneS (9/23/2009)
how do you do this when you're working on tuning a stored procedure?
INSERT {tablename} EXEC {procname}
🙂
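A minimal sketch of that INSERT ... EXEC pattern, for capturing a procedure's result set so the rows never hit the SSMS grid (the table and procedure names here are hypothetical placeholders):

```sql
-- Capture a stored procedure's result set into a temp table so display
-- overhead is removed from the timing; STATISTICS TIME then reflects
-- the work of the procedure itself.
-- dbo.usp_GetNumbers is a hypothetical procedure returning one int column.
CREATE TABLE #Results (N int NOT NULL);

SET STATISTICS TIME ON;

INSERT #Results (N)
EXEC dbo.usp_GetNumbers;

SET STATISTICS TIME OFF;

DROP TABLE #Results;
```

Note the temp table's column list must match the shape of the procedure's result set, and INSERT ... EXEC cannot be nested.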
Atlantis Interactive - SQL Server Tools
My blog
Why I wrote a sql query analyzer clone
September 23, 2009 at 5:57 pm
WayneS (9/23/2009)
Jeff Moden (9/22/2009)
Keep in mind that the display is the "great equalizer"... you cannot judge real performance when returning result sets to the screen.
Jeff - excellent advice. My question is: how do you do this when you're working on tuning a stored procedure?
Lynn - SUPERB article. And as is typical for such great article, an excellent discussion has sprung forth. Congratulations.
1) there is an option in SSMS to discard results on execution
2) often you don't actually NEED the results to figure out what needs to be done to tune a sproc
3) the INSERT ... EXEC mentioned by another poster is a possibility
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
September 23, 2009 at 6:12 pm
It still seems to me like a solution in search of a problem. If you can't get a simple static table approved for a perfectly defensible purpose, why would The Force with the authority to deny it, but not the sense to understand it, ever consider allowing something like this instead?
September 23, 2009 at 7:00 pm
I think both the static table and the on-the-fly function have merits. Even when a static table is allowed I would still favor the dynamic solution when performance-requirements are not too rigid.
What I like about the dynamic solution is that when implemented correctly it will guarantee correctness for every possible range of numbers (see especially my earlier posted solution which WILL return every conceivable range of numbers that can be represented by BIGINT) while performing well for the majority of cases that require only small ranges.
With a static-table solution you need to choose a predetermined size that caters for all use cases. The routine will not raise any error or other notification and will simply return incorrect results when a situation requires more rows (unless you explicitly test your input parameters for "overflow" at an additional cost, while you would have chosen this solution for performance reasons in the first place).
What I like about the static table solution (especially with a dedicated table with just one bigint-column as a clustered index) is that you can squeeze out the last little bit of performance, which in certain cases could make the difference between success and failure of a project (I've once had a number-table with 100,000,000 values taking a few Gigabyte of space, put in a separate database so it wouldn't eat up precious room in the full backups).
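For reference, one common way to build the dedicated static table described above; the table name, row count, and source views here are illustrative, not a prescription:

```sql
-- A one-column static Tally table with a clustered primary key,
-- which keeps the numbers ordered and the table as narrow as possible.
CREATE TABLE dbo.Tally
(
    N bigint NOT NULL
        CONSTRAINT PK_Tally PRIMARY KEY CLUSTERED
);

-- 100,000 rows shown here; size it for the largest range you expect,
-- since (as noted above) an undersized table silently returns too few rows.
INSERT dbo.Tally (N)
SELECT TOP (100000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM master.sys.all_columns c1
CROSS JOIN master.sys.all_columns c2;
```

A quick sanity check such as SELECT MAX(N) FROM dbo.Tally is the cheap way for a future user to validate its size before relying on it.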
September 23, 2009 at 11:36 pm
As to the size of the table, you'll presumably write it to be large enough for the purpose to which you are applying it at the time. Validating its size is easy enough for any future user. And I would think its nature is clear enough that it's reasonable to expect another developer will be inclined to reuse it for other purposes.
What would be reasonable to expect about the reusability of a substantially more complex dynamic solution? I would not be surprised if future developers, given responsibility for something they don't understand very well, were tempted to replace it. Or at least were unlikely to make use of it for some new purpose.
What is your experience with the life-cycle of complex esoteric modules when passed from developer to developer?
September 24, 2009 at 6:56 am
doofledorfer (9/23/2009)
As to the size of the table, you'll presumably write it to be large enough for the purpose to which you are applying it at the time. Validating its size is easy enough for any future user. And I would think it's nature is clear enough that it's reasonable to expect another developer will be inclined to reuse it for other purposes.
What would be reasonable to expect about the reusability of a substantially more complex dynamic solution? I would expect that it's not unlikely that future developers who are given responsibility for something they don't understand very well will be tempted to replace it. Or at least be unlikely to make use of it for some new purpose.
What is your experience with the life-cycle of complex esoteric modules when passed from developer to developer?
With good developers, I would expect the dynamic tally table to be used just as much as a static table, if one were available. You called the dynamic tally table a solution looking for a problem. I don't see it that way. As others have indicated in this thread, it is a recurring theme among people seeking solutions to problems easily solved with a tally table that they can't change the schema or build a static tally table. A dynamic tally table is the appropriate solution in those cases.
October 1, 2009 at 11:10 pm
Jeff Moden (9/22/2009)
Actually, it doesn't come close. To generate 1 to 10,000,000 numbers, Lynn's method pretty much blows the doors off that type of cross join (ie: more than a pair of tables cross joined) when inserting into a temp table...
Are you sure, Jeff? The unnecessary sort in the OVER clause is the main cost. I only have 2 minutes for a quick post, but do try this:
go
set statistics time on
declare @bb int;
select @bb = N from [dbo].[ufn_Tally2] (1, 10000000, 1)
set statistics time off
go
set statistics time on
declare @bb int;
select top (10000000)
@bb = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
from master.sys.all_columns C1,
master.sys.all_columns C2,
master.sys.all_columns C3;
set statistics time off
go
The performance difference becomes more marked with higher row counts. Back later.
Paul
edit: I hate the way this site only shows one page when you come to it from the email. So - there are 3 pages of discussion 😀
Well never mind, I'll leave this here anyway.
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
October 2, 2009 at 2:00 am
So, for a "classic" tally table (i.e. start with 1, increment by 1), Paul's solution performs best.
If you actually want a dynamic tally table (i.e. start with X, increment by Y), Lynn's solution beats it all!
Just don't combine both in a single multi-statement TVF, or it will suffer massive IO overhead from the insert into the @Tally result table itself!
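The start-with-X, increment-by-Y variant can be sketched as a single inline query by rescaling ROW_NUMBER(); the parameter names below are illustrative, and this is a sketch rather than Lynn's exact function:

```sql
-- Generate @Count values: @Start, @Start + @Step, @Start + 2*@Step, ...
-- by mapping the 1..n row numbers onto the desired arithmetic sequence.
DECLARE @Start bigint = 5,
        @Step  bigint = 10,
        @Count bigint = 1000;

SELECT TOP (@Count)
       @Start + (ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1) * @Step AS N
FROM master.sys.all_columns c1
CROSS JOIN master.sys.all_columns c2;
```

Keeping this as a single SELECT (or an inline TVF) avoids the IO overhead of materializing the rows into a table variable that the multi-statement TVF incurs.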
Johan
Learn to play, play to learn!
Don't drive faster than your guardian angel can fly...
but keeping both feet on the ground won't get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me