June 27, 2008 at 3:04 pm
Ok... so basically, when you ask it to operate a command on a column, it identifies the command once and enacts it on every row, instead of asking for the command again for each row, thus reducing the number of redundant calls?
yes? no? maybe so?
June 27, 2008 at 4:26 pm
David Jackson (6/27/2008)
Insert this into the top of the script
--===== Conditionally drop
IF OBJECT_ID('dbo.Tally') IS NOT NULL
DROP TABLE dbo.Tally
--===== Create and populate the Tally table on the fly
SELECT TOP 11000 --equates to more than 30 years of dates
IDENTITY(INT,1,1) AS N
INTO dbo.Tally
FROM Master.dbo.SysColumns sc1,
Master.dbo.SysColumns sc2
--===== Add a Primary Key to maximize performance
ALTER TABLE dbo.Tally
ADD CONSTRAINT PK_Tally_N
PRIMARY KEY CLUSTERED (N) WITH FILLFACTOR = 100
No loop required! 🙂
On my old-ish server it runs in 80 ms
Hats off to Jeff M for teaching us this trick
Edit: I see Matt beat me to it, but he is spot on, forget loops if you can.
Dave
Also, depending on need, you can adapt the above to dynamic SQL should you want to work with input parameters.
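For anyone wondering what to do with the Tally table once it exists, here's a minimal sketch (it assumes dbo.Tally was built by the script above, with its N column starting at 1) that turns it into a run of dates, no loop required:

```sql
-- Generate the next 30 days as a date series using the Tally table.
DECLARE @StartDate DATETIME
SET @StartDate = '20080627'

SELECT DATEADD(dd, t.N - 1, @StartDate) AS TheDate
  FROM dbo.Tally t
 WHERE t.N <= 30
```

Swap the 30 for any count up to the 11,000 rows the script creates.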
June 27, 2008 at 6:25 pm
dfalso (6/27/2008)
Also, depending on need, you can adapt the above to dynamic SQL should you want to work with input parameters.
Why would you need to do that?
--Jeff Moden
Change is inevitable... Change for the better is not.
June 27, 2008 at 6:28 pm
zxn001 (6/27/2008)
Ok... so basically when you ask it to opperate a command on a column...
THAT is the basis of set-based programming. THAT is the paradigm shift necessary to become a good set-based programmer...
... Stop thinking about what you want to do with a row... think about how you want to affect a column! 😉
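To put that paradigm shift in concrete terms, here's a hedged sketch (the table and column names are made up for illustration) contrasting the two ways of thinking:

```sql
-- Row thinking: WHILE loop or cursor... fetch a row, change it, fetch the next.
-- Column thinking: one statement that affects the whole column at once.
UPDATE dbo.Customer
   SET LastName = UPPER(LastName)
-- The engine identifies the operation once and applies it to every row.
```

Same result, one command instead of one command per row.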
--Jeff Moden
Change is inevitable... Change for the better is not.
June 27, 2008 at 6:34 pm
David Jackson (6/27/2008)
Hats off to Jeff M for teaching us this trick. Edit: I see Matt beat me to it, but he is spot on, forget loops if you can.
Thanks for the nice compliment, David... so far as Matt goes... he's had way tooooo much coffee! He beats everybody on posts he's interested in! 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
June 27, 2008 at 6:42 pm
Jeff Moden (6/27/2008)
dfalso (6/27/2008)
Also, depending on need, you can adapt the above to dynamic SQL should you want to work with input parameters.
Why would you need to do that?
If you wanted the number of numbers evaluated to be @N. You could build your temporary tally table to be dynamically sized. Or, if you're using an existing tally table of maximum number @m < @N, you could dynamically insert as many rows as needed.
June 27, 2008 at 7:27 pm
Oh no... about 99% of everything you need to do can be done with a Tally table of 11,000 rows. If you need more than that, you can create a larger table on the fly... but you don't need dynamic SQL for any of it.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 27, 2008 at 9:38 pm
Not to quibble about it, but you do need dynamic SQL if you need more than 11,000 but don't know how much more. Right? In other words, if your upper maximum is @N where @N > 11000, I think you'd need dynamic SQL to craft the insert join. If you're using syscolumns, say, you'd figure out its rowcount, find out how many rows you're missing, and CEILING the quotient to get the number of cross joins that'd generate the missing rows. But that join statement would be dynamic. Unless you can think of some CTE way to do it non-dynamically. I can't, but then again I'm not thinking that hard about it. Maybe Monday 😀
And to me, I wouldn't presume that 11,000 was enough for anything except days since 1970-ish.
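The arithmetic described above can be sketched in a few lines (the @N and @M values are hypothetical placeholders, not anyone's real numbers):

```sql
-- If the base SELECT yields @N rows and you need @M numbers,
-- the number of copies of that SELECT to cross-join is CEILING(LOG(@M)/LOG(@N)).
DECLARE @N FLOAT, @M FLOAT
SET @N = 500      -- say, the rowcount of syscolumns
SET @M = 1000000  -- numbers needed
SELECT CEILING(LOG(@M)/LOG(@N)) AS CopiesNeeded
-- 500^2 = 250,000 falls short of a million; 500^3 covers it, so 3 copies here.
```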
June 27, 2008 at 10:35 pm
I personally roll with a tally table with a 1M limit, but that's a personal choice.
Otherwise - it's very easy to come up with a numbers table of pretty much any size you need. CTE method coming up.
;with cte1 (N) as (select 1 union select 2), --2
cte2 as (select c1.N from cte1 c1, cte1 c2, cte1 c3, cte1 c4), --16
cte3 as (select c1.N from cte2 c1, cte2 c2, cte2 c3, cte2 c4), --65536
cte4 as (select c1.N from cte3 c1, cte3 c2, cte3 c3, cte3 c4) --1.84467E+19
select row_number() over (order by N) as Num
from CTE4
Thanks to exponential growth - you can pretty much size it to whatever number you need.
Edit: removed an extraneous comma, thanks to eagle-eyed Jeff~
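One hedged tweak to the CTE above: since cte4 nominally defines about 1.84E+19 rows, you'll usually want to cap it with TOP before ROW_NUMBER does any work, something like:

```sql
-- Same CTE stack, capped at the size actually needed.
DECLARE @Rows INT
SET @Rows = 1000

;with cte1 (N) as (select 1 union select 2),                    --2
 cte2 as (select c1.N from cte1 c1, cte1 c2, cte1 c3, cte1 c4), --16
 cte3 as (select c1.N from cte2 c1, cte2 c2, cte2 c3, cte2 c4), --65536
 cte4 as (select c1.N from cte3 c1, cte3 c2, cte3 c3, cte3 c4)  --1.84467E+19
select top (@Rows) row_number() over (order by N) as Num
from cte4
```

With the TOP in place the optimizer has a row goal and stops expanding the cross joins early, rather than trying to enumerate the whole thing.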
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
June 27, 2008 at 10:45 pm
Matt Miller (6/27/2008)
I personally roll with a tally table with a 1M limit, but that's a personal choice.Otherwise - it's very easy to come up with a numbers table of pretty much any size you need. CTE method coming up.
;with cte1 (N) as (select 1 union select 2), --2
cte2 as (select c1.N from cte1 c1, cte1 c2, cte1 c3, cte1 c4), --16
cte3 as (select c1.N from cte2 c1, cte2 c2, cte2 c3, cte2 c4), --65536
cte4 as (select c1.N from cte3 c1, cte3 c2, cte3 c3, cte3 c4), --1.84467E+19
select row_number() over (order by N) as Num
from CTE4
I like my method... it's about as fast and will easily make up to 121 million numbers... if you need more than that, just add 1 more cross-join... but, I submit... if you need more than a 121 million numbers, you're probably doing something wrong...
DECLARE @Top INT
SET @Top = 100000
;WITH cteTally
AS (
SELECT TOP(@Top) ROW_NUMBER() OVER(ORDER BY sc1.Type) AS N
FROM Master.sys.SysColumns sc1,
Master.sys.SysColumns sc2)
SELECT *
FROM cteTally
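To show the kind of work a tally lends itself to, here's a hedged sketch of the classic loop-free string split (it assumes a persisted dbo.Tally like the one built earlier in the thread; the sample string is made up):

```sql
-- Split a comma-delimited string without a loop, using the Tally table.
DECLARE @CSV VARCHAR(8000)
SET @CSV = ',' + 'alpha,bravo,charlie' + ','  -- wrap in delimiters

SELECT SUBSTRING(@CSV, t.N + 1,
                 CHARINDEX(',', @CSV, t.N + 1) - t.N - 1) AS Item
  FROM dbo.Tally t
 WHERE t.N < LEN(@CSV)
   AND SUBSTRING(@CSV, t.N, 1) = ','  -- each comma marks the start of an item
```

Every row of the Tally table probes one character position at once, which is exactly the "affect a column, not a row" idea in action.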
--Jeff Moden
Change is inevitable... Change for the better is not.
June 27, 2008 at 10:48 pm
dfalso (6/27/2008)
Not to quibble about it, but you do need dynamic SQL if you need more than 11,000 but don't know how much more. Right? In other words, if your upper maximum is @N where @N > 11000, I think you'd need dynamic SQL to craft the insert join. If you're using syscolumns, say, you'd figure out the rowcount of it, find out how many rows you're missing, and CEILING the quotient to get the number of crossjoins that'd make the missing rows. But that join statement would be dynamic. Unless you can think of some CTE way to do it non-dynamically. I can't, but then again I'm not thinking that hard about it. Maybe Monday 😀
And to me, I wouldn't presume that 11000 was enough for anything except days since 1970ish.
But you'll quibble anyway, huh? 😉 Look at the code above... no dynamic SQL. It could easily be modified to find your "ceiling", but no dynamic SQL enters the picture.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 27, 2008 at 11:14 pm
dfalso (6/27/2008)
And to me, I wouldn't presume that 11000 was enough for anything except days since 1970ish.
How many times have you personally needed a Tally table larger than 11,000? 😛
--Jeff Moden
Change is inevitable... Change for the better is not.
June 27, 2008 at 11:17 pm
Jeff Moden (6/27/2008)
dfalso (6/27/2008)
Not to quibble about it, but you do need dynamic SQL if you need more than 11,000 but don't know how much more. Right? In other words, if your upper maximum is @N where @N > 11000, I think you'd need dynamic SQL to craft the insert join. If you're using syscolumns, say, you'd figure out the rowcount of it, find out how many rows you're missing, and CEILING the quotient to get the number of crossjoins that'd make the missing rows. But that join statement would be dynamic. Unless you can think of some CTE way to do it non-dynamically. I can't, but then again I'm not thinking that hard about it. Maybe Monday 😀
And to me, I wouldn't presume that 11000 was enough for anything except days since 1970ish.
But you'll quibble anyway, huh? 😉 Look at the code above... no dynamic SQL. It could easily be modified to find your "ceiling", but no dynamic SQL enters the picture.
Yeah, you got me on that 😀 BTW, I meant CEILING in the function sense, as in: if I'm missing @M numbers, and my SELECT yields @N numbers, the number of times I have to CROSS JOIN the SELECT is CEILING(LOG(@M)/LOG(@N)).
Anyway, my basic point is that your solution is only non-dynamic so long as the input parameter is less than the number of records you expect to have to deal with (syscolumns squared in your example above, if I'm reading it correctly). In order to be truly safe, dynamic SQL is needed, unless we could write a CTE with an input variable that governs the amount of recursion (again, on Monday maybe).
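For what it's worth, the "input variable governs the recursion" idea can be sketched without dynamic SQL using a recursive CTE (a sketch only, and the @N value is a placeholder):

```sql
-- A numbers list sized entirely by @N, no dynamic SQL.
DECLARE @N INT
SET @N = 50000

;WITH cteNumbers (N) AS (
    SELECT 1
    UNION ALL
    SELECT N + 1 FROM cteNumbers WHERE N < @N  -- the variable bounds the recursion
)
SELECT N
  FROM cteNumbers
OPTION (MAXRECURSION 0)  -- lift the default 100-level recursion cap
```

Worth noting: recursive CTEs like this tend to be much slower than the cross-join builds shown above, so this is a correctness sketch more than a performance recommendation.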
June 28, 2008 at 10:58 am
Matt Miller (6/27/2008)
I personally roll with a tally table with a 1M limit, but that's a personal choice.Otherwise - it's very easy to come up with a numbers table of pretty much any size you need. CTE method coming up.
;with cte1 (N) as (select 1 union select 2), --2
cte2 as (select c1.N from cte1 c1, cte1 c2, cte1 c3, cte1 c4), --16
cte3 as (select c1.N from cte2 c1, cte2 c2, cte2 c3, cte2 c4), --65536
cte4 as (select c1.N from cte3 c1, cte3 c2, cte3 c3, cte3 c4), --1.84467E+19
select row_number() over (order by N) as Num
from CTE4
Thanks to exponential growth - you can pretty much size it to whatever number you need.
Hey there, ol' friend... try limiting the output of your code to 1000 rows printed to the screen... see what happens...
--Jeff Moden
Change is inevitable... Change for the better is not.
June 28, 2008 at 11:50 am
dfalso (6/27/2008)
In order to be truly safe, dynamic sql is needed, unless we could write a CTE with an input variable that governs the amount of recursion (again, on Monday maybe).
Still, you haven't answered my question... How many times have you personally needed a Tally table larger than 11,000? And have you EVER heard of anyone needing over 121 million rows for a Tally table?
--Jeff Moden
Change is inevitable... Change for the better is not.
Viewing 15 posts - 256 through 270 (of 309 total)