April 8, 2010 at 7:07 am
Gianluca Sartori (4/8/2010)
Looking at the execution plans, it should not be faster.
Heh... I've run into that many, many times. Perhaps an article titled "The execution plan lies". 😛
--Jeff Moden
Change is inevitable... Change for the better is not.
April 8, 2010 at 7:14 am
Jeff Moden (4/8/2010)
Gianluca Sartori (4/8/2010)
Looking at the execution plans, it should not be faster.
Heh... I've run into that many, many times. Perhaps an article titled "The execution plan lies". 😛
Subtitled "almost never". 😉
-- Gianluca Sartori
April 8, 2010 at 9:04 am
Gianluca Sartori (4/8/2010)
Jeff Moden (4/8/2010)
Gianluca Sartori (4/8/2010)
Looking at the execution plans, it should not be faster.
Heh... I've run into that many, many times. Perhaps an article titled "The execution plan lies". 😛
Subtitled "almost never". 😉
Well, at least there is some truth to that.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
April 8, 2010 at 3:06 pm
It would be great to see some detailed analysis of the execution. If you want to write it up, I'd like to publish it.
April 18, 2010 at 10:47 am
Jeff Moden (2/23/2010)
For those that add a triple cross-join... eh... wrong answer because behind the scenes, it will cause explosive growth of one of the log files for big numbers like a Billion (for example). Running a triple cross join to create a Billion rows will grow one of the log files to over 40GB even if it's done in TempDB.
Sorry for reviving an old thread, but I have to ask:
Why on earth would using a triple cross join write anything to disk?
For example:
select count(n) from (
select top (1000000000) n=row_number() over(order by (select null))
from sys.system_columns c1, sys.system_columns c2, sys.system_columns c3
) t
option(maxdop 1)
Why would this ever write to disk?
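If anyone wants to check for themselves, a rough sketch (assuming a quiet test box where nothing else is touching tempdb) is to snapshot tempdb file writes, run the query, and snapshot again:
-- Snapshot of tempdb write activity; run it before and after the COUNT query above.
-- On a quiet box, any difference between the two snapshots is I/O caused by the query.
select file_id, num_of_writes, num_of_bytes_written
from sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL)
If the triple cross join really spilled anything, you would expect num_of_bytes_written to jump noticeably between the two snapshots.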
/SG
April 18, 2010 at 12:48 pm
Stefan_G (4/18/2010)
Jeff Moden (2/23/2010)
For those that add a triple cross-join... eh... wrong answer because behind the scenes, it will cause explosive growth of one of the log files for big numbers like a Billion (for example). Running a triple cross join to create a Billion rows will grow one of the log files to over 40GB even if it's done in TempDB.
Sorry for reviving an old thread, but I have to ask:
Why on earth would using a triple cross join write anything to disk?
For example:
select count(n) from (
select top (1000000000) n=row_number() over(order by (select null))
from sys.system_columns c1, sys.system_columns c2, sys.system_columns c3
) t
option(maxdop 1)
Why would this ever write to disk?
/SG
Simply because SQL Server will use disk and/or memory interchangeably as needed. When it runs into memory pressure, it will flush things to disk. In the case above, if SQL Server doesn't have enough room to materialize the entire triple cross join in memory, it's going to use temp space (i.e. tempdb).
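If you want to watch it happen, a quick sketch (run from a second connection while the big query executes; the session_id below is just a placeholder for the spid you're watching) is to poll the session's tempdb allocations:
-- One row per session; the counts are 8KB tempdb pages.
select session_id,
       internal_objects_alloc_page_count,  -- work tables, sort/hash spills
       user_objects_alloc_page_count       -- temp tables created by the session
from sys.dm_db_session_space_usage
where session_id = 55  -- placeholder: substitute the spid running the query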
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user... All right - what was my emergency again?
April 18, 2010 at 12:59 pm
Matt Miller (#4) (4/18/2010)
Simply because SQL Server will use disk and/or memory interchangeably as needed. When it runs into memory pressure, it will flush things to disk. In the case above, if SQL Server doesn't have enough room to materialize the entire triple cross join in memory, it's going to use temp space (i.e. tempdb).
Nonsense! SQL Server does not materialize the entire triple cross join in memory. It generates the data dynamically as it is needed.
Try the query yourself if you don't believe me. It will use 100% of a single CPU, but it will not cause any noticeable extra memory usage.
If Jeff got 40 GB of log use he must have been doing something strange.
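You can also watch the workspace memory while the query runs - a rough sketch, from a second connection:
-- Queries that need workspace memory (sorts, hashes) show up here with their grants.
-- If the plan really streams the rows, the COUNT query either does not appear at all
-- or shows only a tiny grant.
select session_id, requested_memory_kb, granted_memory_kb, used_memory_kb, max_used_memory_kb
from sys.dm_exec_query_memory_grants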
/SG
April 18, 2010 at 1:07 pm
Stefan_G (4/18/2010)
Matt Miller (#4) (4/18/2010)
Simply because SQL Server will use disk and/or memory interchangeably as needed. When it runs into memory pressure, it will flush things to disk. In the case above, if SQL Server doesn't have enough room to materialize the entire triple cross join in memory, it's going to use temp space (i.e. tempdb).
Nonsense! SQL Server does not materialize the entire triple cross join in memory. It generates the data dynamically as it is needed.
Try the query yourself if you don't believe me. It will use 100% of a single CPU, but it will not cause any noticeable extra memory usage.
If Jeff got 40 GB of log use he must have been doing something strange.
/SG
I have tried the query myself and have also witnessed the huge log growth. It was an isolated environment where the only thing running was the query, and it caused the disks to fill up.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
April 18, 2010 at 1:12 pm
CirquedeSQLeil (4/18/2010)
I have tried the query myself and have also witnessed the huge log growth. It was an isolated environment where the only thing running was the query, and it caused the disks to fill up.
OK, exactly what did the query look like?
My query obviously does not fill up the disks despite the triple-cross join, so there must be something else in your query causing the disk to fill up.
/SG
April 18, 2010 at 1:31 pm
CirquedeSQLeil (4/18/2010)
I have tried the query myself and have also witnessed the huge log growth. It was an isolated environment where the only thing running was the query, and it caused the disks to fill up.
Hmm. After reading what you wrote more carefully, it seems that you actually ran the query I posted above. Is that true?
If that is the case, then there must be something different between your system and mine.
I am running SQL Server 2008 Developer Edition. What is your version?
/SG
April 18, 2010 at 2:04 pm
Stefan_G (4/18/2010)
Nonsense! SQL Server does not materialize the entire triple cross join in memory. It generates the data dynamically as it is needed.
Try the query yourself if you don't believe me. It will use 100% of a single CPU, but it will not cause any noticeable extra memory usage.
If Jeff got 40 GB of log use he must have been doing something strange.
/SG
Heh... nonsense yourself! 😉 The only thing I did was a SELECT/INTO using a triple cross join to create a Billion row table with just one column in it... "N".
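In other words, something shaped roughly like the following... just a sketch from memory for illustration rather than the exact code I ran, and the table name here is made up:
--===== Sketch only: SELECT/INTO with a triple cross join to build a Billion row, single column table
 SELECT TOP (1000000000)
        N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
   INTO dbo.BillionRowTally -- made-up table name
   FROM sys.all_columns ac1
  CROSS JOIN sys.all_columns ac2
  CROSS JOIN sys.all_columns ac3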
--Jeff Moden
Change is inevitable... Change for the better is not.
April 18, 2010 at 2:37 pm
Jeff Moden (4/18/2010)
The only thing I did was a SELECT/INTO using a triple cross join to create a Billion row table with just one column in it... "N".
And did you try creating the same table with a double cross join instead?
The thing I find hard to understand is why a triple join should produce any more log in that situation than a double join.
What was the recovery model of your database?
I just tried to create a billion row table using the following query:
select n=cast(n as int)
into temp
from (
select top (1000000000) n=row_number() over(order by (select null))
from sys.system_columns c1, sys.system_columns c2, sys.system_columns c3
) t
It made my data file grow by over 10 GB, but the log is still just 200 MB.
My database uses the simple recovery model.
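If you want to compare setups, both things are easy to check (sketch only; swap in the name of whatever database you ran the test in):
-- Recovery model of the test database (the name below is a placeholder).
select name, recovery_model_desc
from sys.databases
where name = N'MyTestDB'
-- Current log size and percent used for every database on the instance.
DBCC SQLPERF(LOGSPACE)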
/SG
April 18, 2010 at 3:22 pm
Stefan_G (4/18/2010)
Jeff Moden (4/18/2010)
The only thing I did was a SELECT/INTO using a triple cross join to create a Billion row table with just one column in it... "N".
And did you try creating the same table with a double cross join instead?
The thing I find hard to understand is why a triple join should produce any more log in that situation than a double join.
What was the recovery model of your database?
I just tried to create a billion row table using the following query:
select n=cast(n as int)
into temp
from (
select top (1000000000) n=row_number() over(order by (select null))
from sys.system_columns c1, sys.system_columns c2, sys.system_columns c3
) t
It made my data file grow by over 10 GB, but the log is still just 200 MB.
My database uses the simple recovery model.
/SG
I'm rerunning the tests I did and I'll post the results when I'm done. They take about 50 minutes per test on my poor ol' 8-year-old desktop box, so it'll be a while... the first test has been running for about 31 minutes.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 18, 2010 at 3:43 pm
First test is still running on my dedicated but old desktop after 52 minutes. Log file has grown to 34.5 gig and is still growing. Like I said, I'll post all of my code and results when I'm done... I've got about 4 more tests to run. :sick:
--Jeff Moden
Change is inevitable... Change for the better is not.
April 18, 2010 at 3:50 pm
What is your recovery model?