March 3, 2010 at 5:01 am
steve-893342 (3/3/2010)
Yes, Mister Magoo, that really is a very elegant solution indeed. I've had a look at how it compares against my query and it sure is a close one to call.
Re the TRILLION - I think you'll have to give it a lot longer than that!
I get impatient 😀
MM
select geometry::STGeomFromWKB(0x0106000000020000000103000000010000000B0000001000000000000840000000000000003DD8CCCCCCCCCC0840000000000000003DD8CCCCCCCCCC08408014AE47E17AFC3F040000000000104000CDCCCCCCCCEC3F9C999999999913408014AE47E17AFC3F9C99999999991340000000000000003D0000000000001440000000000000003D000000000000144000000000000000400400000000001040000000000000F03F100000000000084000000000000000401000000000000840000000000000003D0103000000010000000B000000000000000000143D000000000000003D009E99999999B93F000000000000003D009E99999999B93F8014AE47E17AFC3F400000000000F03F00CDCCCCCCCCEC3FA06666666666FE3F8014AE47E17AFC3FA06666666666FE3F000000000000003D1800000000000040000000000000003D18000000000000400000000000000040400000000000F03F000000000000F03F000000000000143D0000000000000040000000000000143D000000000000003D, 0);
March 4, 2010 at 1:15 am
I know the feeling
March 4, 2010 at 12:44 pm
Multiple Unpivots
It occurred to me that an interesting variation of the Gianluca Sartori UNPIVOT method would be to create a base table using multiple unpivots. This particular version creates a base table with 1,000 rows using a single SELECT statement with three unpivots. This preliminary version seems to work quite nicely, although I'm sure it lends itself to significant refinement.
WITH cte1 AS
(
SELECT A FROM
(
SELECT 1 AS A1,1 AS A2,1 AS A3,1 AS A4,1 AS A5,1 AS A6,1 AS A7,1 AS A8,1 AS A9,1 AS A10,
1 AS B1,1 AS B2,1 AS B3,1 AS B4,1 AS B5,1 AS B6,1 AS B7,1 AS B8,1 AS B9,1 AS B10,
1 AS C1,1 AS C2,1 AS C3,1 AS C4,1 AS C5,1 AS C6,1 AS C7,1 AS C8,1 AS C9,1 AS C10
) AS Z
UNPIVOT (A FOR O IN (A1,A2,A3,A4,A5,A6,A7,A8,A9,A10)) AS Z
UNPIVOT (B FOR P IN (B1,B2,B3,B4,B5,B6,B7,B8,B9,B10)) AS Z
UNPIVOT (C FOR Q IN (C1,C2,C3,C4,C5,C6,C7,C8,C9,C10)) AS Z
)
,
cte2 AS
(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS row FROM cte1
CROSS JOIN cte1 AS T2
CROSS JOIN cte1 AS T3)
SELECT COUNT_BIG(ALL CASE
WHEN row%15 = 0 THEN 'FIZZBUZZ'
WHEN row%5 = 0 THEN 'BUZZ'
WHEN row%3 = 0 THEN 'FIZZ'
ELSE CAST(row AS VARCHAR(10))
END)
FROM cte2 WHERE row < 1000000001
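As an aside, here is a stripped-down sketch of the row-multiplication trick in isolation (my own illustration, not part of the post above): two chained unpivots over 3 + 3 constant columns return 3 x 3 = 9 rows, which is exactly how the query above turns 30 columns into 10 x 10 x 10 = 1,000 rows.
-- Minimal sketch: each UNPIVOT multiplies the row count by the number
-- of columns in its IN list, so 3 * 3 = 9 rows come back here.
SELECT A, B
FROM (SELECT 1 AS A1, 1 AS A2, 1 AS A3,
             1 AS B1, 1 AS B2, 1 AS B3) AS src
UNPIVOT (A FOR O IN (A1, A2, A3)) AS U1
UNPIVOT (B FOR P IN (B1, B2, B3)) AS U2;
-- returns 9 rows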
March 4, 2010 at 8:48 pm
How did that one perform in comparison to your other scripts on your system?
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
March 5, 2010 at 12:38 am
Really very similar at the 1 billion level
March 5, 2010 at 1:28 am
Cascading Unpivots
Or how about this variation to ratchet things up to the 9 billion level?
WITH cte1 AS
(
SELECT A,B,C,O,P,Q FROM
(
SELECT 1 AS A1,1 AS A2,1 AS A3,1 AS A4,1 AS A5,1 AS A6,1 AS A7,1 AS A8,1 AS A9,1 AS A10,
1 AS B1,1 AS B2,1 AS B3,1 AS B4,1 AS B5,1 AS B6,1 AS B7,1 AS B8,1 AS B9,1 AS B10,
1 AS C1,1 AS C2,1 AS C3,1 AS C4,1 AS C5,1 AS C6,1 AS C7,1 AS C8,1 AS C9,1 AS C10
) AS Z
UNPIVOT (A FOR O IN (A1,A2,A3,A4,A5,A6,A7,A8,A9,A10)) AS Z
UNPIVOT (B FOR P IN (B1,B2,B3,B4,B5,B6,B7,B8,B9,B10)) AS Z
UNPIVOT (C FOR Q IN (C1,C2,C3,C4,C5,C6,C7,C8,C9,C10)) AS Z
)
,
cte2 AS
(
SELECT D FROM
(
SELECT A,B,C,O,P,Q FROM cte1
) AS Z
UNPIVOT (D FOR R IN (A,B,C)) AS Z
UNPIVOT (E FOR S IN (O,P,Q)) AS Z
)
,
cte3 AS
(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS row FROM cte2
CROSS JOIN cte1 AS T2
CROSS JOIN cte1 AS T3)
SELECT COUNT_BIG(ALL CASE
WHEN row%15 = 0 THEN 'FIZZBUZZ'
WHEN row%5 = 0 THEN 'BUZZ'
WHEN row%3 = 0 THEN 'FIZZ'
ELSE CAST(row AS VARCHAR(10))
END)
FROM cte3 WHERE row < 9000000001
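The row arithmetic behind the 9 billion figure, as a quick check (my working, not spelled out in the post): cte1 is 10 x 10 x 10 = 1,000 rows, cte2 unpivots each of those rows twice more into 3 x 3 = 9 rows apiece for 9,000 in total, and cte3 then cross joins that back against cte1 twice.
-- Quick check of the advertised row count (a sketch, not from the post):
-- cte1 = 1,000; cte2 = 1,000 * 9 = 9,000; cte3 = 9,000 * 1,000 * 1,000.
SELECT CAST(9000 AS BIGINT) * 1000 * 1000 AS cte3_rows; -- 9,000,000,000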
March 5, 2010 at 11:54 am
THE TRILLION
Well, the TRILLION did run to completion OK. A little over 69 hours, which ain't bad for a modest-spec VM with only 2GB RAM. And nicely scalable too (almost exactly linearly) from the 100 billion level. Another redeeming feature of this query is that it ran so passively that nobody even knew it was there!
I enclose some data from the sys.dm_exec_query_stats DMV for your perusal
total_elapsed_time 249397409179
total_worker_time 249357439443
total_physical_reads 0
total_logical_reads 390078
with
cte1 as
--(select TOP 10000 1 as col from master..syscolumns)
(select TOP 10000 1 as col from master..tally)
,
cte2 as
(select row_number() over (order by (select 0)) as row from cte1
cross join cte1 AS T2
cross join cte1 AS T3)
SELECT count_big(all case
when row%15 = 0 then 'FIZZBUZZ'
when row%5 = 0 then 'BUZZ'
when row%3 = 0 then 'FIZZ'
else cast(row as varchar(14))
end)
from cte2 where row < 1000000000001
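A side note on those figures (my reading of the DMV, not part of Steve's post): total_elapsed_time and total_worker_time in sys.dm_exec_query_stats are reported in microseconds, so 249,397,409,179 works out to roughly 69.3 hours, which matches the quoted runtime. Something along these lines, assuming the plan is still in cache and that the cached text contains the string being filtered on, is one way to pull such figures:
-- Sketch: cumulative stats for cached plans whose text mentions FIZZBUZZ.
-- The LIKE filter is an assumption about the cached statement text.
SELECT TOP (10)
       qs.total_elapsed_time,  -- microseconds
       qs.total_worker_time,   -- microseconds
       qs.total_physical_reads,
       qs.total_logical_reads,
       qs.execution_count,
       st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%FIZZBUZZ%'
ORDER BY qs.total_elapsed_time DESC;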
March 5, 2010 at 3:56 pm
@steve-2 - congrats - I think you have the record - I don't think anyone else will even attempt THE TRILLION after seeing how long it takes!
MM
select geometry::STGeomFromWKB(0x0106000000020000000103000000010000000B0000001000000000000840000000000000003DD8CCCCCCCCCC0840000000000000003DD8CCCCCCCCCC08408014AE47E17AFC3F040000000000104000CDCCCCCCCCEC3F9C999999999913408014AE47E17AFC3F9C99999999991340000000000000003D0000000000001440000000000000003D000000000000144000000000000000400400000000001040000000000000F03F100000000000084000000000000000401000000000000840000000000000003D0103000000010000000B000000000000000000143D000000000000003D009E99999999B93F000000000000003D009E99999999B93F8014AE47E17AFC3F400000000000F03F00CDCCCCCCCCEC3FA06666666666FE3F8014AE47E17AFC3FA06666666666FE3F000000000000003D1800000000000040000000000000003D18000000000000400000000000000040400000000000F03F000000000000F03F000000000000143D0000000000000040000000000000143D000000000000003D, 0);
March 5, 2010 at 10:46 pm
steve-893342 (3/5/2010)
THE TRILLION
Well, the TRILLION did run to completion OK. A little over 69 hours, which ain't bad for a modest-spec VM with only 2GB RAM. And nicely scalable too (almost exactly linearly) from the 100 billion level. Another redeeming feature of this query is that it ran so passively that nobody even knew it was there!
I enclose some data from the sys.dm_exec_query_stats DMV for your perusal
total_elapsed_time 249397409179
total_worker_time 249357439443
total_physical_reads 0
total_logical_reads 390078
with
cte1 as
--(select TOP 10000 1 as col from master..syscolumns)
(select TOP 10000 1 as col from master..tally)
,
cte2 as
(select row_number() over (order by (select 0)) as row from cte1
cross join cte1 AS T2
cross join cte1 AS T3)
SELECT count_big(all case
when row%15 = 0 then 'FIZZBUZZ'
when row%5 = 0 then 'BUZZ'
when row%3 = 0 then 'FIZZ'
else cast(row as varchar(14))
end)
from cte2 where row < 1000000000001
Heh... freakin' awesome, Steve. Considering how very short and easy the code is, anyone who settles for just a hundred in the future or makes an excuse about lack of scalability due to limited requirements probably won't get the job anymore. Well done.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 6, 2010 at 12:45 am
mister.magoo (3/5/2010)
@steve - congrats - I think you have the record - I don't think anyone else will even attempt THE TRILLION after seeing how long it takes!
If I could be certain that it would not affect a server, I might give it a shot - just for the sake of doing it.
Of course, that won't be anytime soon.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
March 6, 2010 at 1:21 am
Cascading Unpivots to the 100 Billion Level
This seems to be a viable approach too. I adjusted my cascading unpivots query to potentially release over a trillion rows (not that I intend to run THE TRILLION again any time soon, but for comparison purposes). I ran this last night and it took 6.5 hours, which is faster than my previous attempt using the CTE version of the table join, although the cascading unpivots query was run on a slightly faster machine. Certainly this would appear to be a very comparable method.
WITH cte1 AS
(
SELECT A,B,C,O,P,Q FROM
(
SELECT 1 AS A1,1 AS A2,1 AS A3,1 AS A4,1 AS A5,1 AS A6,1 AS A7,1 AS A8,1 AS A9,1 AS A10,1 AS A11,
1 AS B1,1 AS B2,1 AS B3,1 AS B4,1 AS B5,1 AS B6,1 AS B7,1 AS B8,1 AS B9,1 AS B10,1 AS B11,
1 AS C1,1 AS C2,1 AS C3,1 AS C4,1 AS C5,1 AS C6,1 AS C7,1 AS C8,1 AS C9,1 AS C10
) AS Z
UNPIVOT (A FOR O IN (A1,A2,A3,A4,A5,A6,A7,A8,A9,A10,A11)) AS Z
UNPIVOT (B FOR P IN (B1,B2,B3,B4,B5,B6,B7,B8,B9,B10,B11)) AS Z
UNPIVOT (C FOR Q IN (C1,C2,C3,C4,C5,C6,C7,C8,C9,C10)) AS Z
) --1,210 rows
,
cte2 AS
(
SELECT D FROM
(
SELECT A,B,C,O,P,Q FROM cte1
) AS Z
UNPIVOT (D FOR R IN (A,B,C)) AS Z
UNPIVOT (E FOR S IN (O,P,Q)) AS Z
) --10,890 rows (1,210 X 9)
,
cte3 AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS row FROM cte2
CROSS JOIN cte2 AS T2
CROSS JOIN cte2 AS T3
) --1,291,467,969,000 rows (10,890^3) over a trillion rows
SELECT COUNT_BIG(ALL CASE
WHEN row%15 = 0 THEN 'FIZZBUZZ'
WHEN row%5 = 0 THEN 'BUZZ'
WHEN row%3 = 0 THEN 'FIZZ'
ELSE CAST(row AS VARCHAR(14))
END)
FROM cte3 WHERE row < 100000000001 --set limit to 100 billion
March 6, 2010 at 1:26 am
wow
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
March 6, 2010 at 3:39 am
Multiple Unpivots to the 100 Billion Level
This one might go fast too.
In this variation, 6 unpivots are used to create a base table with 1 million rows.
Just one cross join then releases 1 trillion rows.
WITH cte1
AS
(
SELECT A FROM (SELECT 1 AS A1, 2 AS A2, 3 AS A3, 4 AS A4, 5 AS A5, 6 AS A6, 7 AS A7, 8 AS A8, 9 AS A9, 10 AS A10,
1 AS B1, 2 AS B2, 3 AS B3, 4 AS B4, 5 AS B5, 6 AS B6, 7 AS B7, 8 AS B8, 9 AS B9, 10 AS B10,
1 AS C1, 2 AS C2, 3 AS C3, 4 AS C4, 5 AS C5, 6 AS C6, 7 AS C7, 8 AS C8, 9 AS C9, 10 AS C10,
1 AS D1, 2 AS D2, 3 AS D3, 4 AS D4, 5 AS D5, 6 AS D6, 7 AS D7, 8 AS D8, 9 AS D9, 10 AS D10,
1 AS E1, 2 AS E2, 3 AS E3, 4 AS E4, 5 AS E5, 6 AS E6, 7 AS E7, 8 AS E8, 9 AS E9, 10 AS E10,
1 AS F1, 2 AS F2, 3 AS F3, 4 AS F4, 5 AS F5, 6 AS F6, 7 AS F7, 8 AS F8, 9 AS F9, 10 AS F10) AS Z
UNPIVOT (A FOR O IN (A1,A2,A3,A4,A5,A6,A7,A8,A9,A10)) AS Z
UNPIVOT (B FOR P IN (B1,B2,B3,B4,B5,B6,B7,B8,B9,B10)) AS Z
UNPIVOT (C FOR Q IN (C1,C2,C3,C4,C5,C6,C7,C8,C9,C10)) AS Z
UNPIVOT (D FOR R IN (D1,D2,D3,D4,D5,D6,D7,D8,D9,D10)) AS Z
UNPIVOT (E FOR S IN (E1,E2,E3,E4,E5,E6,E7,E8,E9,E10)) AS Z
UNPIVOT (F FOR T IN (F1,F2,F3,F4,F5,F6,F7,F8,F9,F10)) AS Z
) --1 million rows
,
cte2 AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS row FROM cte1
CROSS JOIN cte1 as T2
) --1 trillion rows
SELECT COUNT_BIG(ALL CASE
WHEN row%15 = 0 THEN 'FIZZBUZZ'
WHEN row%5 = 0 THEN 'BUZZ'
WHEN row%3 = 0 THEN 'FIZZ'
ELSE CAST(row AS VARCHAR(14))
END)
FROM cte2 WHERE row < 100000000001 --set limit to 100 billion
March 6, 2010 at 6:28 am
Trillion Row Base Table - Pushing Multiple Unpivots to the Limit
So what do you think, folks? Will this one go fast?
Sorry Jeff, no cross joins!
WITH cte1 AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS row FROM
(
SELECT 1 AS A1,1 AS A2,1 AS A3,1 AS A4,1 AS A5,1 AS A6,1 AS A7,1 AS A8,1 AS A9,1 AS A10,
1 AS A11,1 AS A12,1 AS A13,1 AS A14,1 AS A15,1 AS A16,1 AS A17,1 AS A18,1 AS A19,1 AS A20,
1 AS A21,1 AS A22,1 AS A23,1 AS A24,1 AS A25,1 AS A26,1 AS A27,1 AS A28,1 AS A29,1 AS A30,
1 AS A31,1 AS A32,1 AS B1,1 AS B2,1 AS B3,1 AS B4,1 AS B5,1 AS B6,1 AS B7,1 AS B8,1 AS B9,
1 AS B10,1 AS B11,1 AS B12,1 AS B13,1 AS B14,1 AS B15,1 AS B16,1 AS B17,1 AS B18,1 AS B19,
1 AS B20,1 AS B21,1 AS B22,1 AS B23,1 AS B24,1 AS B25,1 AS B26,1 AS B27,1 AS B28,1 AS B29,
1 AS B30,1 AS B31,1 AS B32,1 AS C1,1 AS C2,1 AS C3,1 AS C4,1 AS C5,1 AS C6,1 AS C7,1 AS C8,
1 AS C9,1 AS C10,1 AS C11,1 AS C12,1 AS C13,1 AS C14,1 AS C15,1 AS C16,1 AS C17,1 AS C18,
1 AS C19,1 AS C20,1 AS C21,1 AS C22,1 AS C23,1 AS C24,1 AS C25,1 AS C26,1 AS C27,1 AS C28,
1 AS C29,1 AS C30,1 AS C31,1 AS C32,1 AS D1,1 AS D2,1 AS D3,1 AS D4,1 AS D5,1 AS D6,1 AS D7,
1 AS D8,1 AS D9,1 AS D10,1 AS D11,1 AS D12,1 AS D13,1 AS D14,1 AS D15,1 AS D16,1 AS D17,
1 AS D18,1 AS D19,1 AS D20,1 AS D21,1 AS D22,1 AS D23,1 AS D24,1 AS D25,1 AS D26,1 AS D27,
1 AS D28,1 AS D29,1 AS D30,1 AS D31,1 AS D32,1 AS E1,1 AS E2,1 AS E3,1 AS E4,1 AS E5,1 AS E6,
1 AS E7,1 AS E8,1 AS E9,1 AS E10,1 AS E11,1 AS E12,1 AS E13,1 AS E14,1 AS E15,1 AS E16,1 AS E17,
1 AS E18,1 AS E19,1 AS E20,1 AS E21,1 AS E22,1 AS E23,1 AS E24,1 AS E25,1 AS E26,1 AS E27,
1 AS E28,1 AS E29,1 AS E30,1 AS E31,1 AS E32,1 AS F1,1 AS F2,1 AS F3,1 AS F4,1 AS F5,1 AS F6,
1 AS F7,1 AS F8,1 AS F9,1 AS F10,1 AS F11,1 AS F12,1 AS F13,1 AS F14,1 AS F15,1 AS F16,1 AS F17,
1 AS F18,1 AS F19,1 AS F20,1 AS F21,1 AS F22,1 AS F23,1 AS F24,1 AS F25,1 AS F26,1 AS F27,
1 AS F28,1 AS F29,1 AS F30,1 AS F31,1 AS F32,1 AS G1,1 AS G2,1 AS G3,1 AS G4,1 AS G5,1 AS G6,
1 AS G7,1 AS G8,1 AS G9,1 AS G10,1 AS G11,1 AS G12,1 AS G13,1 AS G14,1 AS G15,1 AS G16,1 AS G17,
1 AS G18,1 AS G19,1 AS G20,1 AS G21,1 AS G22,1 AS G23,1 AS G24,1 AS G25,1 AS G26,1 AS G27,
1 AS G28,1 AS G29,1 AS G30,1 AS G31,1 AS G32,1 AS H1,1 AS H2,1 AS H3,1 AS H4,1 AS H5,1 AS H6,
1 AS H7,1 AS H8,1 AS H9,1 AS H10,1 AS H11,1 AS H12,1 AS H13,1 AS H14,1 AS H15,1 AS H16,1 AS H17,
1 AS H18,1 AS H19,1 AS H20,1 AS H21,1 AS H22,1 AS H23,1 AS H24,1 AS H25,1 AS H26,1 AS H27,
1 AS H28,1 AS H29,1 AS H30,1 AS H31,1 AS H32
) AS Z
UNPIVOT (A FOR O IN (A1,A2,A3,A4,A5,A6,A7,A8,A9,A10,A11,A12,A13,A14,A15,A16,
A17,A18,A19,A20,A21,A22,A23,A24,A25,A26,A27,A28,A29,A30,A31,A32)) AS Z
UNPIVOT (B FOR P IN (B1,B2,B3,B4,B5,B6,B7,B8,B9,B10,B11,B12,B13,B14,B15,B16,
B17,B18,B19,B20,B21,B22,B23,B24,B25,B26,B27,B28,B29,B30,B31,B32)) AS Z
UNPIVOT (C FOR Q IN (C1,C2,C3,C4,C5,C6,C7,C8,C9,C10,C11,C12,C13,C14,C15,C16,
C17,C18,C19,C20,C21,C22,C23,C24,C25,C26,C27,C28,C29,C30,C31,C32)) AS Z
UNPIVOT (D FOR R IN (D1,D2,D3,D4,D5,D6,D7,D8,D9,D10,D11,D12,D13,D14,D15,D16,
D17,D18,D19,D20,D21,D22,D23,D24,D25,D26,D27,D28,D29,D30,D31,D32)) AS Z
UNPIVOT (E FOR S IN (E1,E2,E3,E4,E5,E6,E7,E8,E9,E10,E11,E12,E13,E14,E15,E16,
E17,E18,E19,E20,E21,E22,E23,E24,E25,E26,E27,E28,E29,E30,E31,E32)) AS Z
UNPIVOT (F FOR T IN (F1,F2,F3,F4,F5,F6,F7,F8,F9,F10,F11,F12,F13,F14,F15,F16,
F17,F18,F19,F20,F21,F22,F23,F24,F25,F26,F27,F28,F29,F30,F31,F32)) AS Z
UNPIVOT (G FOR U IN (G1,G2,G3,G4,G5,G6,G7,G8,G9,G10,G11,G12,G13,G14,G15,G16,
G17,G18,G19,G20,G21,G22,G23,G24,G25,G26,G27,G28,G29,G30,G31,G32)) AS Z
UNPIVOT (H FOR V IN (H1,H2,H3,H4,H5,H6,H7,H8,H9,H10,H11,H12,H13,H14,H15,H16,
H17,H18,H19,H20,H21,H22,H23,H24,H25,H26,H27,H28,H29,H30,H31,H32)) AS Z
) --1,099,511,627,776 rows
SELECT COUNT_BIG(ALL CASE
WHEN row%15 = 0 THEN 'FIZZBUZZ'
WHEN row%5 = 0 THEN 'BUZZ'
WHEN row%3 = 0 THEN 'FIZZ'
ELSE CAST(row AS VARCHAR(14))
END)
FROM cte1 WHERE row < 100000000001 --set limit to 100 billion
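The comment above puts the base table at 1,099,511,627,776 rows; as a quick check (my working, not from the post), eight chained unpivots of 32 columns each multiply out to 32^8 = 2^40:
-- Sketch: confirm 32^8 = 2^40 = 1,099,511,627,776
SELECT CAST(32 AS BIGINT) * 32 * 32 * 32 * 32 * 32 * 32 * 32 AS base_rows;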
March 6, 2010 at 8:42 am
Once again, very interesting. I checked this last one out, and it seems very similar in speed to the other "fastest" ones submitted. Good job on coming up with yet another script.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events