March 3, 2017 at 2:14 am
I have a procedure that returns 5000+ records and runs to 500+ lines of code, with a mixture of different tables and a lot of logic. I used a table variable, and the results used to come back in good time. But at a certain point the query became very slow and took 30 minutes to execute. I made many performance changes, but nothing worked. Finally I replaced the table variable with a temp table and executed it, and the results came back in seconds.
From my analysis, when I passed the parameters directly to the procedure and executed it, the table variable version stalled after returning 20+ rows and just kept on running. My question is: how do table variables and temp tables work internally? Why did the table variable version keep running for so long?
Thanks
Vasu
March 3, 2017 at 2:35 am
Table variables aren't great. Regardless of how many rows you INSERT into them, the query planner will always estimate the number of rows in the table as 1. This means the query plan won't allocate the correct resources to run your query, and almost everything will likely end up spilling into tempdb rather than the server doing as much as it can in RAM.
Temporary tables have much better estimates, and thus a better query plan can be generated and better resources devoted to your query.
Someone will be able to give a much better and more detailed answer than I can, but that's a quick summary of why there's such a difference between the two scenarios. The short answer: use temporary tables rather than table variables, especially when dealing with more than one row.
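A minimal sketch of the difference, assuming you run it with the actual execution plan enabled (the names and row counts here are just for illustration):

DECLARE @tv TABLE (n int);    -- table variable: no statistics
CREATE TABLE #tt (n int);     -- temp table: statistics get created

INSERT @tv (n)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

INSERT #tt (n) SELECT n FROM @tv;

SELECT COUNT(*) FROM @tv;   -- estimated rows on the scan: 1
SELECT COUNT(*) FROM #tt;   -- estimated rows on the scan: ~10,000

DROP TABLE #tt;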
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
March 3, 2017 at 3:00 am
Yes, lack of statistics on table variables is likely to lead to inappropriate memory grants and inefficient plan choices. Since SQL Server 2014, the query optimizer estimates the number of rows in a table variable as 100 instead of 1.
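A quick sketch if you want to check it yourself; compare the scan's estimated row count in the actual plan against the real count:

DECLARE @t TABLE (id int);
INSERT @t (id) VALUES (1), (2), (3);
SELECT * FROM @t;   -- the estimated rows on the table scan is the number to look at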
John
March 3, 2017 at 3:10 am
John Mitchell-245523 - Friday, March 3, 2017 3:00 AM
Yes, lack of statistics on table variables is likely to lead to inappropriate memory grants and inefficient plan choices. Since SQL Server 2014, the query optimizer estimates the number of rows in a table variable as 100 instead of 1.
John
Interesting. I did think they had changed it; however, I ran a test on our SQL 16 dev machine before posting the above, and it still showed 1-row estimates in the plan, so I assumed I had "remembered" incorrectly. Odd.
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
March 3, 2017 at 3:14 am
Thom A - Friday, March 3, 2017 3:10 AM
Interesting. I did think they had changed it; however, I ran a test on our SQL 16 dev machine before posting the above, and it still showed 1-row estimates in the plan, so I assumed I had "remembered" incorrectly. Odd.
What compat mode is the database that you tested in?
Compat mode 2012 or before, and you get the old CE.
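A quick way to check and change it (MyDB is just a placeholder for your database name):

SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'MyDB';

-- 110 or lower = legacy CE; 120 and up = new CE (unless overridden by trace flags)
ALTER DATABASE MyDB SET COMPATIBILITY_LEVEL = 120;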
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 3, 2017 at 3:22 am
GilaMonster - Friday, March 3, 2017 3:14 AM
Thom A - Friday, March 3, 2017 3:10 AM
Interesting. I did think they had changed it; however, I ran a test on our SQL 16 dev machine before posting the above, and it still showed 1-row estimates in the plan, so I assumed I had "remembered" incorrectly. Odd.
What compat mode is the database that you tested in?
Compat mode 2012 or before, and you get the old CE.
That would explain it. Thanks 🙂
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
March 3, 2017 at 3:36 am
Thanks, friends, for the good clarification. Many of our procs use table variables; because the result sets are very small, they work fine for now, but they may still slow down later, so I'm converting almost all of them to temp tables.
March 3, 2017 at 5:17 am
John Mitchell-245523 - Friday, March 3, 2017 3:00 AM
Yes, lack of statistics on table variables is likely to lead to inappropriate memory grants and inefficient plan choices. Since SQL Server 2014, the query optimizer estimates the number of rows in a table variable as 100 instead of 1.
John
Funnily enough, on 2014 Express in 120 compat mode I only get 1 from the CE for the table variable. I don't have any other instances of 2014 running, as I've migrated everything to 2016 and vNext.
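One way to confirm which CE actually compiled the statement is the CardinalityEstimationModelVersion attribute in the plan XML; a quick sketch:

SET STATISTICS XML ON;
DECLARE @t TABLE (id int);
SELECT * FROM @t;
SET STATISTICS XML OFF;
-- In the returned plan XML, CardinalityEstimationModelVersion="70" means the
-- legacy CE; "120" or "130" means the new one.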
😎
March 3, 2017 at 5:59 am
Yeah, I get the same on 2014 Developer. And I can't find any authoritative reference for the 100 row thing, either. Hmmmm.
John
March 3, 2017 at 6:11 am
John Mitchell-245523 - Friday, March 3, 2017 5:59 AM
Yeah, I get the same on 2014 Developer. And I can't find any authoritative reference for the 100 row thing, either. Hmmmm.
John
Mmm, I created a 130 database on the dev server, made a bunch of trash data, and still get an estimate of 1.
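Roughly what I ran (the names and row count are arbitrary), on a database at compatibility level 130:

DECLARE @trash TABLE (id int, filler char(100));

INSERT @trash (id, filler)
SELECT TOP (50000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'x'
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

SELECT COUNT(*) FROM @trash;   -- estimated rows on the scan: still 1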
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
March 3, 2017 at 6:19 am
John Mitchell-245523 - Friday, March 3, 2017 5:59 AM
Yeah, I get the same on 2014 Developer. And I can't find any authoritative reference for the 100 row thing, either. Hmmmm.
John
No server wide traceflags in the mix?
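A quick way to list any globally enabled ones:

DBCC TRACESTATUS (-1);

-- TF 2453, for instance, changes how table variable cardinality is handled,
-- so it would skew a test like this.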
😎
March 3, 2017 at 7:01 am
John Mitchell-245523 - Friday, March 3, 2017 3:00 AM
Yes, lack of statistics on table variables is likely to lead to inappropriate memory grants and inefficient plan choices. Since SQL Server 2014, the query optimizer estimates the number of rows in a table variable as 100 instead of 1.
John
The advantage of table variables over temp tables is that (because they don't use statistics) they help maintain a reusable execution plan. As a general rule, table variables are better for OLTP stored procedures where only a small number of rows (maybe 100 or fewer) are returned. However, for more robust stored procedures, something like the data source for a report that will process thousands or millions of rows (which sounds more like what you're working with at the moment), temp tables are better, because SQL Server calculates actual statistics, which improves the performance of joins. There's a sketch of one middle-ground option below.
Also, for some situations it could be useful if the table variable row count estimate were a server-level setting, rather than SQL Server assigning an arbitrary value like 1 or 100. That way the DBA could throttle it up or down as appropriate, in a manner similar to the cost threshold for parallelism setting. However, as far as I know, there is currently no server setting or trace flag for this.
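The middle ground mentioned above, as a sketch (the names are made up): OPTION (RECOMPILE) lets the optimizer see the table variable's actual row count, at the cost of compiling the statement on every execution, and you still don't get column-level statistics:

DECLARE @rows TABLE (id int);
INSERT @rows (id) SELECT object_id FROM sys.objects;

SELECT * FROM @rows
OPTION (RECOMPILE);   -- the estimate now reflects the real row count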
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
March 3, 2017 at 7:07 am
Eirikur Eiriksson - Friday, March 3, 2017 6:19 AM
John Mitchell-245523 - Friday, March 3, 2017 5:59 AM
Yeah, I get the same on 2014 Developer. And I can't find any authoritative reference for the 100 row thing, either. Hmmmm.
John
No server wide traceflags in the mix?
😎
No, none.
March 3, 2017 at 8:05 am
John Mitchell-245523 - Friday, March 3, 2017 7:07 AM
Eirikur Eiriksson - Friday, March 3, 2017 6:19 AM
John Mitchell-245523 - Friday, March 3, 2017 5:59 AM
Yeah, I get the same on 2014 Developer. And I can't find any authoritative reference for the 100 row thing, either. Hmmmm.
John
No server wide traceflags in the mix?
😎
No, none.
Had some time to kill, so I tried every way I could think of to use a table variable, and the CE on 120 compat always returns 1 for the estimate.
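The sort of variants I tried (illustrative; every one of them shows a 1-row estimate on the table variable):

DECLARE @t TABLE (id int PRIMARY KEY);

INSERT @t (id)
SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns;

SELECT * FROM @t;                          -- plain scan
SELECT t.id FROM @t AS t
JOIN sys.objects AS o
  ON o.object_id = t.id;                   -- in a join
SELECT id FROM @t WHERE id > 10;           -- with a predicate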
😎