May 6, 2015 at 7:41 am
Interesting. I'll have to do more investigation.
@Gail--I've normally used #temptables to load the objects (e.g. offices, jobsites etc.) to which someone has access. This table would then be joined in other stored procedures, providing an efficient way to do this kind of security. As most people would have a different list, there would be almost no benefit to caching in this case.
May 6, 2015 at 7:45 am
I think it's session related. If a temp table is created in a procedure and that procedure is executed, say, 1,000 times, the table ends up cached in tempdb approximately 100 times, with different IDs but the same structure. When the procedure is executed again, I think SQL Server struggles to determine which of the cached versions of the table structure to use, and that this is probably causing the performance drop over time. I'm still trying to understand this, but it's my best guess at the moment.
May 6, 2015 at 7:53 am
RonKyle (5/6/2015)
@Gail--I've normally used #temptables to load objects (e.g. offices, jobsites etc) to which someone had access. As most people would have a different list, there would be almost no benefit to caching in this case.
Errr, no.
The cached 'table' is a shell. A single IAM and a single (empty) data page. It's not caching the data, it's not even caching the columns iirc. It's caching two allocated pages so that when a new temp table (of any form, with any contents) is needed, the allocation pages (mainly the SGAM) don't need to be updated, SQL can just take the existing shell of a table, link it to the metadata as defined and present it as a fully formed temp table.
It's not a case of 'which cached table to use': they're all stripped of their columns (iirc) and data, so any one will do.
Now it's possible there are too many, it's possible that SQL's being overly aggressive. Is that causing a problem though? If you explicitly alter the temp table in the proc so that it can't be cached, does your performance problem go away?
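For example, creating a named constraint inside the procedure is one documented way to prevent the temp table from being cached (the procedure and table names here are invented for illustration):

```sql
-- Sketch: the named constraint PK_Work disables temp table caching
-- for #Work, so each execution allocates a genuinely new table.
CREATE PROCEDURE dbo.NoCacheDemo
AS
BEGIN
    CREATE TABLE #Work
    (
        id int NOT NULL
            CONSTRAINT PK_Work PRIMARY KEY  -- named constraint: caching disabled
    );

    INSERT #Work (id) VALUES (1);
    SELECT id FROM #Work;
END;
```

If performance improves with caching disabled like this, that points the finger at the cached shells; if it doesn't, the cause lies elsewhere.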
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 6, 2015 at 7:55 am
Thanks for the clarification. Good stuff.
May 6, 2015 at 8:06 am
It's not a case of 'which cached table to use': they're all stripped of their columns (iirc) and data, so any one will do.
Now it's possible there are too many, it's possible that SQL's being overly aggressive. Is that causing a problem though? If you explicitly alter the temp table in the proc so that it can't be cached, does your performance problem go away?
Regarding the first point.
If they are stripped of their columns, how is it that all their metadata is still stored in sys.columns hundreds of times?
Regarding the second point
Freeing all the proc cache improves performance
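The sys.columns observation can be checked with a query along these lines (a sketch; it assumes permission to query tempdb's catalog views, and relies on the fact that cached temp table shells are renamed to an 8-hex-digit form like #A1B2C3D4):

```sql
-- Count the column metadata rows retained for each cached temp table shell.
SELECT  t.name,
        COUNT(c.column_id) AS column_count
FROM    tempdb.sys.tables  AS t
JOIN    tempdb.sys.columns AS c
        ON c.object_id = t.object_id
WHERE   t.name LIKE '#[0-9A-F][0-9A-F][0-9A-F][0-9A-F][0-9A-F][0-9A-F][0-9A-F][0-9A-F]'
GROUP BY t.name;
```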
May 7, 2015 at 6:14 am
If they are stripped of their columns, how is it that all their metadata is still stored in sys.columns hundreds of times?
Not to mention possibly stale statistics retained from earlier incarnations of the temp table, as Paul White's article also mentions.
Freeing all the proc cache improves performance
Other users of your database instance (if any) may beg to differ!
May 7, 2015 at 6:19 am
Other users of your database instance (if any) may beg to differ!
I know it doesn't make sense. However, dropping the proc cache (removing all the cached temp tables) improved performance by about 25%.
May 7, 2015 at 6:34 am
Freeing the proc cache does a hell of a lot more than just dropping the cached tables. You can't say it's all due to the cached temp tables when you've also forced fresh compiles of all plans, possibly with more accurate row estimates. There are too many possible causes coming from a plan cache clear.
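One way to narrow it down is to evict just the one plan rather than the whole cache (a sketch; dbo.MyProc is a placeholder name):

```sql
-- Find the cached plan for a single procedure and free only that plan,
-- leaving the rest of the plan cache (and other users) untouched.
DECLARE @plan_handle varbinary(64);

SELECT TOP (1) @plan_handle = cp.plan_handle
FROM   sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE  st.objectid = OBJECT_ID('dbo.MyProc');

DBCC FREEPROCCACHE (@plan_handle);
```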
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 7, 2015 at 6:42 am
You can't say that it's all due to the cached temp tables
I didn't, I'm still investigating what's causing the problem.
Question: why are the same tables getting cached multiple times? I would have thought that once the structure had been cached, the next time the CREATE TABLE #table statement was executed SQL Server would reuse the cached version. However, in some circumstances it's generating new ones.
May 7, 2015 at 7:25 am
I should get a nickel or something every time I cite that Paul White article, since I seem to be continually going back to it 🙂
Read the article, specifically the section on concurrent executions.
If a stored procedure is executed concurrently, multiple separate cached objects may be created in tempdb. There is one cached plan for the procedure but one cached temporary object per execution context derived from that cached plan. Recall that execution contexts are relatively lightweight instances of a cached plan, populated with execution-specific data such as temporary object ids and parameter values...
The runtime contents of a temporary object (table or variable) are obviously specific to a particular execution, so it makes sense for the cached object to be associated with execution contexts rather than the parent plan. There may also be more than one cached plan for a procedure in cache (for example due to compilations with different SET options) and each parent plan will have its own collection of execution contexts, so there can be one cached tempdb object per execution context per plan.
There does not appear to be a fixed limit on the number of these cached objects; I was able to quickly create 2,000 of them using the test procedures above and Adam Machanic’s SQL Query Stress tool running 200 threads.
(emphasis mine)
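The "one cached temporary object per execution context" point can be observed by counting the 'Executable Plan' entries hanging off each compiled plan (a sketch; dbo.MyProc is a placeholder for the procedure under test):

```sql
-- Count execution contexts per cached plan for one procedure.
-- Each execution context can carry its own cached temp object.
SELECT  cp.plan_handle,
        COUNT(*) AS exec_contexts
FROM    sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_cached_plan_dependent_objects(cp.plan_handle) AS d
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE   st.objectid = OBJECT_ID('dbo.MyProc')
  AND   d.cached_plan_dependent_object_type = 'Executable Plan'
GROUP BY cp.plan_handle;
```

Running this during a concurrent stress test should show the context count (and with it the cached temp object count) climbing with the number of simultaneous executions.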
Cheers!