February 25, 2013 at 6:37 pm
Running totals:
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=93911
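For readers following that link: the efficient modern approach it leads toward is a windowed running total. A minimal sketch (SQL Server 2012+); the table and column names `dbo.Sales`, `SaleDate`, and `Amount` are illustrative, not from the thread:

```sql
-- Running total ordered by date, using a windowed aggregate.
-- The explicit ROWS frame matters: the default RANGE frame uses an
-- on-disk worktable in tempdb, while ROWS runs in memory.
SELECT
    SaleDate,
    Amount,
    SUM(Amount) OVER (ORDER BY SaleDate
                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
FROM dbo.Sales
ORDER BY SaleDate;
```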
_____________
Code for TallyGenerator
February 25, 2013 at 6:45 pm
ScottPletcher (2/25/2013)
But they may need spool space just like any other table or view used multiple times for lookups, etc., yes?
I saw a SQL Server brought down by "no memory" errors because of overly fat CTEs.
It seems they are bound to physical memory only, and are therefore quite dangerous if overused.
_____________
Code for TallyGenerator
February 25, 2013 at 8:15 pm
Sergiy,
Thank you for that example. Since I am programming in VB.Net, that would be an "easy" solution. However, I am using this as an opportunity to increase my SQL knowledge and expertise.
Pat
February 26, 2013 at 3:10 am
ScottPletcher (2/25/2013)
GilaMonster (2/25/2013)
ScottPletcher (2/25/2013)
GilaMonster (2/25/2013)
CTEs don't store results
Never??? What if enough memory does not exist to store the results? Surely the results would spill to disk then, presumably to tempdb ...
They are not temp tables or table variables; they do not store results any more than a plain select statement or normal view does. Talking about a CTE's results is a meaningless concept, as SQL Server inlines them into the query during the parsing and binding phase, just like a subquery or view. The CTE is not first calculated with its results stored somewhere and the rest of the query then run against those results; as such, one cannot talk about a CTE's results as if it were a temp table or table variable.
But they may need spool space just like any other table or view used multiple times for lookups, etc., yes?
If the optimiser decides for some reason to put a spool in the plan (which is not a certainty for a table or view used multiple times).
Even then, the spool does not persist past the end of that query, so the OP's question about managing a CTE's results is meaningless as there are no persistent results to manage.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 26, 2013 at 4:58 am
Gail,
I am still a bit lost so let me paraphrase. A CTE never writes data to the TempDb. But in this instance since the TempDb grows from 8MB to 3.8 GB on the first use of the query, we can assume the optimizer has decided to spool data to the TempDb. After the query ends the data in the TempDb does not persist.
Is that correct? If not, please correct me.
Thanks.
I have a second question regarding this post. It is becoming clear to me that we are just about at the end of this exact question. But this thread has uncovered another problem. The query is optimized differently for each iteration. The two Actual execution plans I posted show different steps, even though all I did was change the input date. I need to understand that process better. I need to have the query optimized so that it executes the same way each time. I do not want the query to take 10 times as long to process 1/3 the amount of data.
Do I continue this thread or do I start a new thread? I think the answer is to start a new thread but I do not want to violate any etiquette rules. So a little advice would be appreciated.
February 26, 2013 at 5:30 am
mpdillon (2/26/2013)
I am still a bit lost so let me paraphrase. A CTE never writes data to the TempDb. But in this instance since the TempDb grows from 8MB to 3.8 GB on the first use of the query, we can assume the optimizer has decided to spool data to the TempDb. After the query ends the data in the TempDb does not persist. Is that correct and if not please correct me.
I never said that a CTE will never write to TempDB. I said it's not a temp table or table variable that stores results. If the query (CTE or no CTE, doesn't matter) ends up spilling some stuff to TempDB (spools, hash, sort, etc.), that will be automatically cleaned up afterwards, and hence you have no need to manage it yourself.
I have a second question regarding this post. It is becoming clear to me that we are about at the end of the topic of this exact question. But this thread has uncovered another problem. The query is optimized differently for each iteration. The two Actual execution plans I posted show different steps, even though all I did was change the input date. I need to understand that process better. I need to have the query optimized so that it executes the same way each time. I do not want the query to take 10 times as long to process 1/3 the amount of data.
I haven't analysed your code in detail, just don't have the time today. Try an OPTION (RECOMPILE) at the end, see if that helps. That trades off compilation time for an exec plan that's freshly compiled and hopefully more accurate in its estimates. You may also need to update the statistics of any table you're inserting into in the course of this exercise.
Also, have a read through the link Sergiy posted; there are optimal ways to do running totals and moving averages, and there are terribly inefficient ways to do them as well. If you're using an inefficient way, you will kill the server through memory, IO, CPU, TempDB, or all of the above.
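To make the two suggestions concrete, here is a sketch of the recompile hint and the statistics update; the table, column, and parameter names (`dbo.YourTable`, `SomeDate`, `@InputDate`) are placeholders, not from the posted query:

```sql
-- Recompile the statement on every execution, so the plan is built
-- for the actual parameter value instead of a cached estimate.
SELECT SomeDate, SUM(Amount) AS Total
FROM dbo.YourTable
WHERE SomeDate >= @InputDate
GROUP BY SomeDate
OPTION (RECOMPILE);

-- Refresh statistics on a table being inserted into during the process,
-- so the optimizer's row-count estimates stay accurate between runs.
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;
```

OPTION (RECOMPILE) trades extra compilation time per execution for a plan tailored to each input date, which is what the differing actual plans in this thread suggest is needed.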
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability