July 24, 2014 at 11:59 pm
Got this sample which works:
SELECT TOP (5)
ROW_NUMBER() OVER(ORDER BY somefield DESC)
FROM sometable
Now the following can also do the same thing, but what is the difference between the two? This is lifted from http://www.sqlservercentral.com/articles/T-SQL/74118/
WITH
E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), -- 1*10^1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), -- 1*10^2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), -- 1*10^4 or 10,000 rows
E8(N) AS (SELECT 1 FROM E4 a, E4 b) -- 1*10^8 or 100,000,000 rows
SELECT TOP (5) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E8
;
July 25, 2014 at 12:06 am
The only difference is that the CTEs are building a dynamic table that the query runs against. It is a very quick way to generate a lot of rows to see how well a query scales.
__________________________________________________________________________________________________________
How to Post to get the most: http://www.sqlservercentral.com/articles/Best+Practices/61537/
July 25, 2014 at 12:41 am
The second piece is a CTE Tally Table. If you just need a list of 1 through 5, that's usually faster than accessing a random table, particularly since you can strip it down to just E1 for that small of a range.
What you're asking about is an odd setup, and it needs context before anyone can help you figure out which approach is best for a particular issue. If all you need is 1-5 as a joinable table to split rows, the Tally is usually your best approach.
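For illustration (this stripped-down version isn't in the article, it just follows the same pattern), a 1-through-5 list needs nothing more than E1:
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
)
SELECT TOP (5) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N -- no base table is read
FROM E1;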
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
July 26, 2014 at 1:10 am
It is not actually for me; I just want to test whether the second option is really better than the first one. Running them in Profiler, they show almost the same performance, except that the first one has 4 reads compared to 0 reads for the second. Is the difference negligible?
Thanks guys!
July 26, 2014 at 1:24 am
rodeliorodriguez 26453 (7/26/2014)
It is not actually for me; I just want to test whether the second option is really better than the first one. Running them in Profiler, they show almost the same performance, except that the first one has 4 reads compared to 0 reads for the second. Is the difference negligible? Thanks guys!
The difference between 0 and 4 reads is what the second option is all about; it is huge. ;-)
In other words, it's doing nothing versus doing something N times.
😎
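If you want to see those reads outside of Profiler, here is a quick sketch (sometable and somefield are just the placeholder names from the first post):
SET STATISTICS IO ON; -- reports logical/physical reads per table
SELECT TOP (5) ROW_NUMBER() OVER (ORDER BY somefield DESC) FROM sometable; -- reads a real table
-- now run the E1/E2/E4/E8 version from the first post; it reports no table reads at all
SET STATISTICS IO OFF;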
July 26, 2014 at 1:29 am
For the first option I am getting this:
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
(1000 row(s) affected)
Table 'iwItems'. Scan count 1, logical reads 9, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 51 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
And for the second one I am getting this:
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
(1000 row(s) affected)
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 40 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
The elapsed time is not consistent; at times the first one has a lower elapsed time. Why is that?
July 26, 2014 at 2:52 am
Here is a little snippet to play around with; mind you, one cannot blindly read the statistics output. ;-)
😎
USE tempdb;
GO
SET NOCOUNT ON;
DECLARE @SET_SIZE INT = 1000000;
DECLARE @BUCKET INT = 0;
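-- @BUCKET absorbs the selected values so the timings measure row generation, not returning rows to the client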
CREATE TABLE dbo.NON_INDEXED_NUMS (N INT PRIMARY KEY CLUSTERED NOT NULL);
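-- Populate the table with @SET_SIZE sequential integers using a VALUES-based cascading CTE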
;WITH T(N) AS (SELECT N FROM (VALUES (NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL)) AS X(N))
,NUMS(N) AS (SELECT TOP(@SET_SIZE) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N FROM T T1, T T2, T T3, T T4, T T5, T T6, T T7)
INSERT INTO dbo.NON_INDEXED_NUMS(N)
SELECT N FROM NUMS;
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
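-- Test 1: plain scan of the physical numbers table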
SELECT TOP(@SET_SIZE) @BUCKET = N
FROM dbo.NON_INDEXED_NUMS;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
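-- Test 2: ROW_NUMBER() over the physical numbers table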
SELECT TOP(@SET_SIZE) @BUCKET = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM dbo.NON_INDEXED_NUMS;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
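-- Test 3: ROW_NUMBER() over a cross join of sys.columns (reads system tables)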
SELECT TOP(@SET_SIZE) @BUCKET = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.columns C1,sys.columns C2,sys.columns C3;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
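-- Test 4: ROW_NUMBER() over the VALUES-based cascading CTE (no table access)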
;WITH T(N) AS (SELECT N FROM (VALUES (NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL)) AS X(N))
,NUMS(N) AS (SELECT TOP(@SET_SIZE) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N FROM T T1, T T2, T T3, T T4, T T5, T T6, T T7)
SELECT @BUCKET = N FROM NUMS;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
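-- Test 5: ROW_NUMBER() over the UNION ALL cascading CTE from the earlier post (no table access)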
;WITH
E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), -- 1*10^1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), -- 1*10^2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), -- 1*10^4 or 10,000 rows
E8(N) AS (SELECT 1 FROM E4 a, E4 b) -- 1*10^8 or 100,000,000 rows
SELECT TOP (@SET_SIZE) @BUCKET = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E8;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
DROP TABLE dbo.NON_INDEXED_NUMS;
Results (on my mediocre laptop)
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'NON_INDEXED_NUMS'. Scan count 1, logical reads 1615, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 125 ms, elapsed time = 117 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'NON_INDEXED_NUMS'. Scan count 1, logical reads 1615, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 187 ms, elapsed time = 190 ms.
Table 'syscolpars'. Scan count 3, logical reads 23, physical reads 0, read-ahead reads 1, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 125 ms, elapsed time = 133 ms.
SQL Server Execution Times:
CPU time = 140 ms, elapsed time = 140 ms.
SQL Server Execution Times:
CPU time = 141 ms, elapsed time = 137 ms.
July 28, 2014 at 1:39 am
Thanks guys, but just to straighten things out, I am not really a DBA. I know the basic stuff, but digesting the difference in that posted code is out of my league. Can anybody explain to me what it is really trying to tell me?
Thanks for the patience!
July 28, 2014 at 8:49 am
Just wanted to note that you won't see much difference with 5 rows. Anything will run fast with 5 rows. That's why performance tests usually start with at least 1,000 rows and go up to several million, depending on the expected workload.
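For example, using the placeholder names from the first post (and assuming sometable actually holds a million rows), you could compare both versions at a realistic size:
-- Version 1: only works if sometable really has 1,000,000 rows to return
SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY somefield DESC) FROM sometable;
-- Version 2: take the E1/E2/E4/E8 CTE from the first post and change TOP (5) to TOP (1000000);
-- it reads no table at all, so the comparison shows row generation cost rather than I/O.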