January 8, 2008 at 1:27 am
Hi All
I have a developer who has found a way to work around the holy grail of tempdb, by creating his own tables there instead of using # or ## tables.
He insists that his code is faster, because he does not need to wait for a temp table to be created every time he uses it :w00t: !!
Being unable to force him to rewrite his code (the usual situation: it's already written, it would take too long to change, etc.), I have tried to get it working but face a small issue. The account the code runs under is a SQL login. I have granted this account access to tempdb with the db_ddladmin, db_datareader and db_datawriter roles. The problem is that tempdb is, of course, dropped and recreated from the model database at server startup, so all the security goes away.
I could set up the rights on model, but that's not very smart, since it would affect all new DBs, which I of course don't want.
Any ideas, other than getting the poor guy to use SQL Server tempdb the way it's supposed to be used, with # and ## tables?
//SUN
January 8, 2008 at 6:20 am
I think inflicting a severe beating prior to requiring him to rewrite his code is the correct response. But assuming you don't want to risk prison time, you could use a startup stored procedure to set the security (and send a page to this guy every time you have to manipulate the system into a goofy state because of his gross incompetence). Make sure "scan for startup procs" is enabled in the system configuration. Then you can use sp_procoption to mark the procedure as a startup proc.
I still like the beating option though.
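A rough sketch of that approach (the proc name and the login name are made up for illustration; adjust to your own environment):

```sql
-- Hypothetical example: re-grant the developer's login its rights in
-- tempdb every time SQL Server starts, since tempdb is rebuilt from model.
USE master;
GO
CREATE PROCEDURE dbo.GrantTempdbRights
AS
BEGIN
    EXEC ('USE tempdb;
           CREATE USER DevLogin FOR LOGIN DevLogin;
           EXEC sp_addrolemember ''db_ddladmin'',   ''DevLogin'';
           EXEC sp_addrolemember ''db_datareader'', ''DevLogin'';
           EXEC sp_addrolemember ''db_datawriter'', ''DevLogin'';');
END;
GO
-- Enable "scan for startup procs" and mark the proc to run at startup.
EXEC sp_configure 'scan for startup procs', 1;
RECONFIGURE;
EXEC sp_procoption @ProcName = 'dbo.GrantTempdbRights',
                   @OptionName = 'startup', @OptionValue = 'on';
```

The proc has to live in master, since that's the only database startup procs can run from.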
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 8, 2008 at 6:24 am
Grant Fritchey (1/8/2008)
I still like the beating option though.
I second that notion. 😀
And thanks for the info Grant. Never knew that...
-Roy
January 8, 2008 at 6:29 am
The beating is better...
If the code must ever be executed more than once at the same time, it will fail, because the two or more instances of the proc will try to use tables with the same name... kinda like if he used ##.
Force the Developer to comply... get his manager involved... get the CIO involved if necessary. Creating non # tables in TempDb is one of the worst ideas he could have come up with.
Then, start another good practice... code reviews.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 8, 2008 at 10:47 am
Oh yeah, code reviews... and beatings... I forgot about the reviews.
One other option that occurred to me (besides more beatings) was that you could set up the security in the model database, and then it would get automagically created in tempdb (and every other database you create from then on). It's not a good option (beating is), but it's available (as is beating).
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 8, 2008 at 11:48 am
If he has access to create the tables in tempdb, set up a job that runs every hour or so to delete them.
When his stuff starts failing, tell him you have no idea why his tables are disappearing, that he is responsible for the problems, and that the fix is to use temp tables.
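The cleanup step the Agent job runs could be as simple as this sketch (it assumes all his tables land in the dbo schema of tempdb):

```sql
-- Hypothetical cleanup script: drop every user-created permanent table
-- found in tempdb. # and ## tables live in internal schemas and are not
-- returned by this query against tempdb.sys.tables with dbo-schema names.
DECLARE @sql nvarchar(max);
SET @sql = N'';

-- Build one long batch of DROP TABLE statements.
SELECT @sql = @sql + N'DROP TABLE tempdb.dbo.' + QUOTENAME(name) + N'; '
FROM tempdb.sys.tables
WHERE is_ms_shipped = 0;

EXEC (@sql);
```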
January 8, 2008 at 1:01 pm
Michael Valentine Jones (1/8/2008)
If he has access to create the tables in tempdb, set up a job that runs every hour or so to delete them. When his stuff starts failing, tell him you have no idea why his tables are disappearing, he is responsible for the problems, and the fix is to use temp tables.
That's wicked... :D
I prefer the beating over this... 😉
-Roy
January 8, 2008 at 4:54 pm
Heh... I was thinking the same thing... beatings are fun, especially when delivered by code...
Since it's 2k5, I wonder if you could add a DDL trigger to tempdb to sweep table names that don't begin with "#", say, 500 milliseconds after they're created 😛 Don't delete them... just truncate them :hehe:
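Something along these lines, maybe (a sketch only; a DDL trigger fires at CREATE time, so the 500 ms delay would need a separate job, and note the trigger itself gets wiped from tempdb on every restart):

```sql
-- Hypothetical sketch: a database-level DDL trigger in tempdb that
-- reacts whenever a permanent (non-#) table is created there.
USE tempdb;
GO
CREATE TRIGGER trg_NoPermanentTables
ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
    DECLARE @obj nvarchar(128);
    SET @obj = EVENTDATA().value(
        '(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)');
    -- # and ## tables do not fire database DDL triggers, so anything
    -- caught here is a "real" table someone created in tempdb.
    PRINT 'Permanent table created in tempdb: ' + @obj;
    -- You could even ROLLBACK here to block the CREATE outright.
END;
GO
```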
--Jeff Moden
Change is inevitable... Change for the better is not.
January 8, 2008 at 5:12 pm
OK, there was this guy I know, not me, who was unable to get replication working between servers so he came up with some stored procedures that would write to base tables created in tempdb on one server. Then another stored proc would be run to pull data from the tables into a user database. It seemed to work just fine. Why was that such a bad idea?
January 8, 2008 at 5:22 pm
That's not such a bad idea (ad hoc usage to solve a very specific problem)... what is a bad idea is using non-temp-table names for everything you would normally have used a temp table for. If a proc that uses a temp table happens to run more than once simultaneously, no problem. If you create a non-#-prefixed table in tempdb, it can only be created once, and if the proc runs more than once, it will fail on one side or the other (if you're lucky).
Also, everything in TempDb is automatically wiped clean when you reboot a server or restart the service.
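To illustrate the collision (hypothetical table names):

```sql
-- Two sessions can each run this at the same time without error,
-- because a #table is private to the session that created it:
CREATE TABLE #Work (id int);

-- ...whereas only one session can ever succeed at this; the second
-- CREATE fails because the name is already taken in tempdb:
CREATE TABLE tempdb.dbo.Work (id int);
```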
--Jeff Moden
Change is inevitable... Change for the better is not.
January 9, 2008 at 3:41 am
Your developer says it is quicker to create specific tables in tempdb rather than using #tables?
What process was used to test this? It is amazing how different response time can be when testing on a single-user box compared to a busy server. SQL 2005 is optimised to keep #tables in memory as long as possible, which would not be the case for specific tables.
Also, if the table size is likely to be below 1MB, then table variables would always out-perform any other type of table. For larger tables, the limitations of indexing in table variables may mean #tables with appropriate indexes are a better choice.
At the end of the day the developer is misusing SQL Server, and is a long way from best practice. You should definitely escalate this to get the developer to follow best practice. Tell your management they could get a beating from their bosses if they allow something this weird to go live and it causes a problem. What I have done on very rare occasions is to refuse to sign off that the application is fit to go live, but make clear to my boss that I will not escalate further if they sign.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
January 10, 2008 at 12:16 pm
EdVassie (1/9/2008)
Also, if the table size is likely to be below 1MB, then table variables would always out-perform any other type of table.
Why do you say this, Ed? Is it because of the indexing options available to # tables but not table variables? I thought table vars and temp tables both lived in tempdb.
January 10, 2008 at 12:45 pm
I don't know if I'd place a hard value on it like 1 MB, but I suspect he meant that since table variables have no statistics, they're quick to build and load at small sizes. The problem is that, because they have no statistics, if you use them within a join or anywhere else that statistics are going to make a difference to performance, they'll tank, good and hard. However, you're right: temp tables and table variables both live either in memory or on disk within tempdb. It's only the statistics that are the real differentiator.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 10, 2008 at 1:04 pm
And also you can only put a single index on a table var - the primary key. Not great if you would like an index on a non-unique column.
Rule of thumb for me is around 100 rows. Much over that and the cardinality inaccuracies just stuff the optimiser up ten ways from Sunday. (In the absence of stats, it estimates 1 row.)
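The indexing difference in a nutshell (column names are made up):

```sql
-- A table variable: the primary key is the only index you can give it.
DECLARE @t TABLE (
    id   int PRIMARY KEY,
    name varchar(50)
);

-- A temp table starts the same way...
CREATE TABLE #t (
    id   int PRIMARY KEY,
    name varchar(50)
);
-- ...but can take additional, non-unique indexes:
CREATE INDEX IX_t_name ON #t (name);
-- ...and the optimiser keeps statistics on #t, while it
-- assumes 1 row for @t regardless of how full it really is.
```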
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 11, 2008 at 2:33 am
I gave the limit of 1 MB for the type of reasons given by Dave and Grant.
On small tables the optimiser will often find it is cheaper to use a table scan than an index lookup, unless the query exploits the clustered index. The cut-off point at which indexed access is preferred depends on many things, but a reasonable rule of thumb is that a table below 1 MB is more likely to be scanned than poked. This means the lightweight structure of a table variable, compared to a #temp or permanent table, is more likely to give faster results than the other options if it is small.
As GilaMonster points out, you can specify a primary key on a table variable, so if your WHERE clause exploits the PK, then a table variable will remain performant at much larger sizes. However, if the WHERE clause does not contain the PK, then the ability to create other indexes on a large #temp table means it should out-perform a scan on a large table variable.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara