November 19, 2006 at 9:32 am
Ninja, I agree.
November 19, 2006 at 12:43 pm
That's all you can provide as a proof for your "best practices"?
"I disagree just because I disagree"?
_____________
Code for TallyGenerator
November 19, 2006 at 2:07 pm
First of all, it's not MY best practice but A best practice. I'll keep adhering to it because it's still valid client side, where I spend over 50% of my time. Also, since it has no noticeable impact server side, I see no point in changing my own personal standard, and even less gain in changing currently deployed code. I see and agree with your point, thanks for sharing it, but still no thanks for me.
November 19, 2006 at 3:07 pm
As a matter of fact, it's not A best practice.
You cannot provide a single fact to prove that this practice is better than mine. You even admit it affects server performance a little bit, but not significantly enough to make you change your own personal standard.
So, it's your own personal standard, nothing else.
_____________
Code for TallyGenerator
November 19, 2006 at 5:51 pm
This is an amusing discussion.
I think the position that explicitly dropping temp tables is a best practice is a reasonable one. Why? Because it's a programming pattern that's good to get in the habit of following: what you allocate, you should also free, if the programming language has syntax for it (even if it's not strictly necessary, as in this case). This applies not just in SQL, but in all programming languages.
If you have any experience with resource leaks in managed code, you may know how painful it can be for resource management to be out of your hands. I have tracked down many GDI handle and memory leaks in managed code caused by sloppy programmers not calling Dispose() on their IDisposables. Why don't they do it? Because they think the GC should take care of it. It's a similar position to Sergiy's of maintaining that the server will take care of cleaning up his temp tables. Well, sometimes the server doesn't clean up after you (although in this particular special case it does). The point is, in general, it's a good thing to clean up after yourself if given the means to do so. It's part of being a good programmer citizen.
Another benefit is you don't have to remember when it's necessary to clean up and when it strictly isn't. Just do it always, and you don't have to clutter your head with stuff like this.
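To make the habit concrete, here is a minimal T-SQL sketch of the "free what you allocate" pattern applied to a temp table (the procedure and table names are hypothetical):
CREATE PROCEDURE dbo.usp_CleanupExample
AS
SET NOCOUNT ON
-- allocate: create the scratch table
CREATE TABLE #Scratch (ID INT NOT NULL)
-- ... work with #Scratch here ...
-- free: drop it explicitly in the same scope, even though
-- SQL Server would clean it up when the procedure ends
DROP TABLE #Scratch
SET NOCOUNT OFF
GO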
---------------------------------------
elsasoft.org
November 19, 2006 at 7:56 pm
Yes, really amusing.
Guys decided to show how smart they are, but did not realise whom they were dealing with.
jezemine, sorry, but you are wrong as well.
It's not a reasonable position.
SQL Server allows you to create temp tables as well as declare table variables.
If you read BOL thoroughly enough, you may know that SQL Server treats those 2 types of objects almost identically: it creates the object in tempdb, allocates space for the recordset, keeps it in memory at first, and moves it to physical disk if it no longer fits in memory.
But there is no way to deallocate a table variable. You must rely on the server to clean up and release the resources.
So, the approach of removing every object created in an SP is not universal, and therefore there is no programming pattern in this case.
# tables are somewhere between user tables and table variables, but they are closer to table variables than to ordinary tables: they are limited to their scope, they exist in tempdb rather than in the application db, you may create many tables with the same name simultaneously if you have multiple connections, etc.
So, the programming pattern applied to table variables should be used for # tables rather than the programming pattern of user tables.
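A minimal sketch of the asymmetry (names are arbitrary):
CREATE TABLE #t (ID INT) -- temp table: DROP TABLE #t is valid syntax
DECLARE @t TABLE (ID INT) -- table variable: no DROP or DEALLOCATE exists for it
DROP TABLE #t -- explicit cleanup is possible here
-- DROP TABLE @t -- ...but not here; the server must clean this up
GO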
Is that a reasonable position, in your view?
_____________
Code for TallyGenerator
November 19, 2006 at 9:46 pm
Actually, #tables and ##tables are more like database tables than table variables. Yes, table variables and #tables are both created in tempdb; however, SQL Server does not handle them the same way. Changes to #tables participate in transaction rollbacks, just like changes to normal database tables. Changes to table variables don't.
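A quick way to see that difference (a minimal sketch; run it as a single batch):
CREATE TABLE #t (ID INT)
DECLARE @t TABLE (ID INT)
BEGIN TRAN
INSERT INTO #t VALUES (1)
INSERT INTO @t VALUES (1)
ROLLBACK
SELECT COUNT(*) FROM #t -- 0: the insert was rolled back
SELECT COUNT(*) FROM @t -- 1: the table variable ignores the rollback
DROP TABLE #t
GO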
In addition, yes, you proved that SQL Server cleans up #tables when a procedure ends, but you didn't prove that implicit cleanup is more efficient than explicitly dropping the #tables prior to exiting the stored procedure. Either way, it probably takes SQL Server the same amount of time to drop the #tables.
Those of us who have come from a development background where it is good practice to dispose of the objects created in the course of a routine are going to continue this practice.
And after all the comments made, I don't see anyone changing sides in the debate. This is where we have to agree to disagree. Anything else is arrogance, and no one wins.
November 19, 2006 at 10:16 pm
Sergiy, you are stubborn!
I was brought up on C++, where resource management is something you need to be aware of, and you are trained to clean up what you allocate. So that's really where I got my "clean up after yourself" instinct.
However, a quick search on Google Groups turned up at least one person who claimed to have fixed a tempdb perf issue by explicitly dropping temp tables, and several MVPs who recommend explicitly dropping them. That's good enough for me.
---------------------------------------
elsasoft.org
November 19, 2006 at 10:31 pm
Although I personally agree with Sergiy, this is interesting food for thought.
Cleanup is always a consideration, but it does vary with each platform. Fourth-generation applications are notorious for not having good internal cleanup; .NET proposes to handle cleanup itself, but if you remember when all datatypes were to be declared as variants, you will quickly agree that not all of Microsoft's solutions hold up in the real world. It is (and from my view always will be) necessary to destroy any objects created within front-end applications.
But SQL Server is a different bird. I have my own personal standards (or better, personal coding style), like using CAPS and indentation. I think it is easier to read, but others who work for me have their own style, and it does not affect the efficiency of the program if they do not use my formatting…
I have copied some sections from a Microsoft white paper along with a reference to the website. If you have a small machine and tempdb allocation is an issue, destroying #TempTables "may" have limited benefit – "may". But in general, if you are running up against these issues, sizing tempdb and altering its growth, shrink, and various other settings is a better solution. Otherwise, you will run into other, more serious problems, especially if you intend to grow your application.
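For instance, presizing the tempdb data file might look like this (a sketch; tempdev is the default logical file name, and the sizes are arbitrary):
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 500MB, FILEGROWTH = 50MB)
GO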
As Lynn Pettis pointed out, SQL Server will still have to clean up #TempTables, and as Sergiy pointed out, @TableVariables; it is currently impossible to destroy @TableVariables within a stored procedure. SQL Server will still run its own processes to clean up cache, paging, etc. on both types of tables, regardless of whether the drop is done implicitly or explicitly. [ "You can also use the Version Generation Rate and Version Cleanup Rate performance counters to fine tune your computation. If your Version Cleanup Rate is 0, this implies that there is a long running transaction that is preventing the version store cleanup." ]
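Those counters can also be read from T-SQL (a sketch; the counter names below are as they appear in SQL Server 2005's Transactions object):
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Version Generation rate (KB/s)', 'Version Cleanup rate (KB/s)')
GO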
It seems to me that the issue is more one of efficiency; we have found that many of our programs run more efficiently with @TableVariables, which tend to stay in memory, while other processes work fine using #TempTables. Obviously, testing these different approaches is necessary to determine which is best. In short, it looks to me from this paper that if cleanup of #TempTables is an issue, altering your procedures or increasing the size of your server (to allow for more space in tempdb) may be the real solution. You can judge for yourselves.
Microsoft White Paper - Troubleshooting Performance Problems in SQL Server 2005
Tempdb
Tempdb globally stores both internal and user objects and the temporary tables, objects, and stored procedures that are created during SQL Server operation. There is a single tempdb for each SQL Server instance. It can be a performance and disk space bottleneck.
Tempdb can become overloaded in terms of available space and by excessive DDL/DML operations. This can cause unrelated applications running on the server to slow down or fail.
Some of the common issues with tempdb are as follows:
• Running out of storage space in tempdb.
• Queries that run slowly due to the I/O bottleneck in tempdb. This is covered under I/O Bottlenecks.
• Excessive DDL operations leading to a bottleneck in the system tables.
• Allocation contention.
Before we start diagnosing problems with tempdb, let us first look at how the space in tempdb is used. It can be grouped into four main categories.
User objects - These are explicitly created by user sessions and are tracked in the system catalog.
They include the following:
• Table and index.
• Global temporary table (##t1) and index.
• Local temporary table (#t1) and index.
  • Session scoped.
  • Stored procedure scoped in which it was created.
• Table variable (@t1).
  • Session scoped.
  • Stored procedure scoped in which it was created.
Internal objects - These are statement-scoped objects that are created and destroyed by SQL Server to process queries. These are not tracked in the system catalog.
They include the following:
• Work file (hash join)
• Sort run
• Work table (cursor, spool and temporary large object data type (LOB) storage)
As an optimization, when a work table is dropped, one IAM page and one extent are saved to be used with a new work table.
There are two exceptions: temporary LOB storage is batch scoped, and the cursor worktable is session scoped.
Version Store - This is used for storing row versions. MARS, online index operations, triggers, and snapshot-based isolation levels are based on row versioning. This is new in SQL Server 2005.
Free Space - This represents the disk space that is available in tempdb.
The total space used by tempdb is equal to the User Objects plus the Internal Objects plus the Version Store plus the Free Space.
This free space is the same as the performance counter Free Space in tempdb.
Excessive DDL and allocation operations
Two sources of contention in tempdb can result in the following situations.
• Creating and dropping a large number of temporary tables and table variables can cause contention on metadata. In SQL Server 2005, local temporary tables and table variables are cached to minimize metadata contention. However, the following conditions must be satisfied; otherwise, the table is not cached.
• No named constraints on the table.
• No DDL on the table after the creating statement (for example, CREATE INDEX, and CREATE STATISTICS).
• Typically, most temporary/work tables are heaps; therefore, an insert, delete, or drop operation can cause heavy contention on Page Free Space (PFS) pages. If most of these tables are under 64 KB and use mixed extents for allocation or deallocation, this can put heavy contention on Shared Global Allocation Map (SGAM) pages. SQL Server 2005 caches one data page and one IAM page for local temporary tables to minimize allocation contention. This caching was already done for work tables starting with SQL Server 2000.
Since SGAM and PFS pages occur at fixed intervals in data files, it is easy to find their resource description. For example, 2:1:1 represents the first PFS page in tempdb (database-id = 2, file-id = 1, page-id = 1) and 2:1:3 represents the first SGAM page. SGAM pages occur every 511,232 pages, and each PFS page occurs every 8,088 pages. You can use this to find all other PFS and SGAM pages across all files in tempdb. Any time a task is waiting to acquire a latch on these pages, it will show up in sys.dm_os_waiting_tasks. Since latch waits are transient, you will need to query this view frequently (about once every 10 seconds) and collect the data for analysis later.
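Such a polling query might look like this (illustrative, not part of the white paper; tempdb is database_id 2, so the filter keeps only tempdb pages):
SELECT session_id, wait_type, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
AND resource_description LIKE '2:%'
GO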
Resolution
If the contention in tempdb is due to excessive DDL operations, you will need to look at your application and see if you can minimize the DDL operations. You can try the following suggestions.
• If you use stored procedure scoped temporary tables, consider if these tables can be moved outside of the stored procedure. Otherwise, each execution of the stored procedure will cause a create/drop of the temporary table.
• Look at query plans to see if some plans create a lot of temporary objects, spools, sorts, or worktables. You may need to eliminate some temporary objects. For example, creating an index on a column that is used in an ORDER BY may eliminate the sort.
If the contention is due to the contention in SGAM and PFS pages, you can mitigate it by trying the following:
• Increase the tempdb data files by an equal amount to distribute the workload across all of the disks and files. Ideally, you want as many files as there are CPUs (taking affinity into account).
• Use TF-1118 to eliminate mixed extent allocations.
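For reference, that trace flag can be enabled server-wide like this (a sketch; requires sysadmin, and it is more commonly set as a -T1118 startup parameter):
DBCC TRACEON (1118, -1)
GO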
I wasn't born stupid - I had to study.
November 20, 2006 at 7:02 am
I can't believe my eyes on this one... it may even have a huge impact in a very high-transaction environment. Keep in mind that I have a Celeron 2 GHz / 512 MB of RAM as a test server:
USE master
GO
IF EXISTS (SELECT * FROM dbo.SysObjects WHERE XType = 'P' AND Name = 'DDL_Demo_NoDROP' AND USER_NAME(uid) = 'dbo')
DROP PROCEDURE dbo.DDL_Demo_NoDROP
GO
IF EXISTS (SELECT * FROM dbo.SysObjects WHERE XType = 'P' AND Name = 'DDL_Demo_WithDROP' AND USER_NAME(uid) = 'dbo')
DROP PROCEDURE dbo.DDL_Demo_WithDROP
GO
-- Relies on implicit cleanup: SQL Server drops #Test when the procedure ends
CREATE PROCEDURE dbo.DDL_Demo_NoDROP
AS
SET NOCOUNT ON
CREATE TABLE #Test (ID INT IDENTITY(1,1), Name VARCHAR(500))
SET NOCOUNT OFF
GO
-- Drops #Test explicitly before the procedure ends
CREATE PROCEDURE dbo.DDL_Demo_WithDROP
AS
SET NOCOUNT ON
CREATE TABLE #Test (ID INT IDENTITY(1,1), Name VARCHAR(500))
DROP TABLE #Test
SET NOCOUNT OFF
GO
-- Time 10,000 executions of each procedure
DECLARE @i AS INT
DECLARE @NbRuns AS INT
DECLARE @StartDt AS Datetime
SET @NbRuns = 10000
SET @i = 0
SET @StartDt = GETDATE()
WHILE @i < @NbRuns
BEGIN
EXEC dbo.DDL_Demo_WithDROP
SET @i = @i + 1
END
PRINT 'WITH DROP : ' + CONVERT(VARCHAR(10), DATEDIFF(MS, @StartDt, GETDATE()))
SET @i = 0
SET @StartDt = GETDATE()
WHILE @i < @NbRuns
BEGIN
EXEC dbo.DDL_Demo_NoDROP
SET @i = @i + 1
END
PRINT 'NO DROP : ' + CONVERT(VARCHAR(10), DATEDIFF(MS, @StartDt, GETDATE()))
GO
IF EXISTS (SELECT * FROM dbo.SysObjects WHERE XType = 'P' AND Name = 'DDL_Demo_NoDROP' AND USER_NAME(uid) = 'dbo')
DROP PROCEDURE dbo.DDL_Demo_NoDROP
GO
IF EXISTS (SELECT * FROM dbo.SysObjects WHERE XType = 'P' AND Name = 'DDL_Demo_WithDROP' AND USER_NAME(uid) = 'dbo')
DROP PROCEDURE dbo.DDL_Demo_WithDROP
GO
/*
Results of my last 2 runs
WITH DROP : 80126
NO DROP : 37390
WITH DROP : 62236
NO DROP : 19826
*/
November 20, 2006 at 8:21 am
Not really a good test. On my desktop system the numbers were a lot closer. But what is also interesting is looking at the execution times of each individual run: it starts looking like a coin flip as to which is faster. Need to do more testing though, as creating an empty table and dropping it (or not) is not really testing the system.
WITH DROP : 5156
NO DROP : 4156
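For example, a variant of the test procedure that does some actual work before dropping might look like this (a sketch; the procedure name and row count are arbitrary):
CREATE PROCEDURE dbo.DDL_Demo_WithWork
AS
SET NOCOUNT ON
CREATE TABLE #Test (ID INT IDENTITY(1,1), Name VARCHAR(500))
DECLARE @n INT
SET @n = 0
WHILE @n < 1000
BEGIN
INSERT INTO #Test (Name) VALUES ('row ' + CONVERT(VARCHAR(10), @n))
SET @n = @n + 1
END
DROP TABLE #Test
SET NOCOUNT OFF
GO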
November 20, 2006 at 8:31 am
My goal was not to test the system on DML operations but to test the statement itself. That's why I am using as few statements as possible.
However, I'm glad to see that a real PC will handle this without problem. I mean, that's 1 sec per 10K runs. It's almost not even worth mentioning when you have a real server (quad proc and up). And I would assume that you have only a few temp tables used in the system. There's not enough to be gained to justify a change in any existing code... but it's still nice to know.
November 20, 2006 at 8:33 am
Also, anyone should feel free to come up with a better test than mine... not my area of expertise.
November 20, 2006 at 9:22 am
Ran a few more tests using one of our production servers. Even after adding code to insert 1,000 or 5,000 rows of data into the tables being created and then dropped, I still get numbers like this:
With drop: 296060
No drop: 294000
That is still only about 2 seconds' difference over 10,000 test runs. Again, what is interesting is looking at the individual times: neither way is a clear winner. If you already have code deployed that explicitly drops #tables, it doesn't make sense to go out and change it; and if explicit dropping is your standard, it doesn't make much sense to switch to implicit dropping either.
The same goes if you currently drop the #tables implicitly.
November 20, 2006 at 9:29 am
So the conclusion still stands. There's a hit on explicitly dropping temp tables, but about as much as a fly sneezing on a mammoth from 10 feet away. Do it if it's your standard, but don't feel obliged to adhere to that standard.