September 24, 2013 at 7:01 am
Hi there,
I'm handling cores of data which will be refreshed in every run.
Which one should I go with for this: a temp table or a permanent table?
September 24, 2013 at 7:05 am
vignesh.ms (9/24/2013)
Im handling cores of data which will refreshed in every run.
Can you elaborate a bit more on your set-up?
What do you mean by "cores" of data?
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP
September 24, 2013 at 7:34 am
Nowhere near enough information to make anything other than a wild guess.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 24, 2013 at 9:31 am
vignesh.ms (9/24/2013)
Hi There, Im handling cores of data which will refreshed in every run.
for this which one I can go with ? temp table or permanent table ?
I always default to a temp table over a table variable until the table variable is required (e.g., when the temp data must survive a transaction rollback, or to avoid recompilation issues). I have found that to be a very low percentage of cases in the real world.
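As a hedged illustration of that default, a minimal T-SQL sketch of staging transient data in a local temp table (all object names here are assumptions for illustration, not from the thread):

```sql
-- Minimal sketch: stage transient data in a local temp table.
-- dbo.SourceOrders and the column names are hypothetical.
CREATE TABLE #Staging
(
    OrderID int           NOT NULL,
    Amount  decimal(18,2) NOT NULL
);

INSERT INTO #Staging (OrderID, Amount)
SELECT OrderID, Amount
FROM dbo.SourceOrders;

-- An index can be added after the load if the workload benefits.
CREATE CLUSTERED INDEX IX_Staging_OrderID ON #Staging (OrderID);

-- ... transform / join against #Staging here ...

DROP TABLE #Staging;  -- optional: dropped automatically when the session ends
```

Unlike a table variable, the temp table has statistics and can be indexed after creation, which is usually why it is the safer default.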
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
September 25, 2013 at 4:34 am
It depends.
If the structure is stable and the same on every import, I prefer to use a permanent "temp" table; that is, define a permanent table in a schema named "temp".
Advantages:
- the schema makes clear that the data is pass-through
- permanent indexes can be defined and persisted in the object definition instead of just in code
- space requirements can be determined and agreed upon
- since pass-through data generates a lot of logging, the table can live in a database using the BULK_LOGGED or SIMPLE recovery model
- the last data run can be kept in case something goes wrong, so a restart is possible without access to the source
- will not max out an incorrectly defined or sized tempdb
Disadvantages:
- requires extra server disk space
- more object maintenance and disk-size management
- more agreements to be reached with the DBA
- some managers see it as a "waste of space", not understanding that the space is required anyway
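Sketched in T-SQL, the permanent "temp"-schema approach described above might look like this (the table, columns, and source are illustrative assumptions):

```sql
-- Hypothetical sketch of a permanent staging table in a "temp" schema.
-- Object names are illustrative, not from the thread.
CREATE SCHEMA temp AUTHORIZATION dbo;
GO
CREATE TABLE temp.ImportStaging
(
    OrderID int           NOT NULL,
    Amount  decimal(18,2) NOT NULL,
    CONSTRAINT PK_ImportStaging PRIMARY KEY CLUSTERED (OrderID)
);
GO
-- Each run: clear the previous pass-through data, then reload.
TRUNCATE TABLE temp.ImportStaging;
INSERT INTO temp.ImportStaging (OrderID, Amount)
SELECT OrderID, Amount
FROM dbo.SourceOrders;   -- assumed source
```

Because the table and its indexes are permanent objects, the definition is versioned with the database rather than buried in procedure code.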
September 26, 2013 at 4:17 am
Knut Boehnert (9/25/2013)
It depends. If the structure is stable and the same at every import I prefer to use a permanent temp table. That is define the permanent table with a schema of "temp".
Advantages:
- schema makes clear that data is pass through
- allows permanent indexes to be defined and persisted in object definition instead of just code definition
- space requirements can be determined and agreed
- as data pass through creates lots of logging the table can exist in a bulk logged or simple database
- last data run can be kept in case something goes wrong so restart possible without source access
- will not max out incorrectly defined / sized tempdb
Disadvantages:
- requires extra server disk space
- more object maintenance and disk size management
- more agreements to be reached with DBA
- some managers see it as "waste of space" not understanding that the space is required anyway
HORRIBLE ADVICE!! There are many things that happen in temp tables that are not logged (which is different from "minimally logged"), leading to many fewer writes than if you were to create a permanent table to store transient data. Also, if more than one process uses that table at a time, you now have blocking potential. Larger backups (both log and data). It could max out your production database. I am sure there is more.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
September 26, 2013 at 4:23 am
TheSQLGuru (9/26/2013)
There are many things that happen in temp tables that are not logged
Temp tables are logged. There are NO unlogged operations in SQL Server other than inserts into the row version store. They're efficiently logged; they generate smaller amounts of log than permanent tables do (because TempDB never needs to roll forward), but they very definitely are logged.
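One hedged way to see this for yourself is to measure the log bytes an open transaction generates in tempdb via `sys.dm_tran_database_transactions` (the row source and sizes below are arbitrary choices for the demo):

```sql
-- Sketch: show that temp-table inserts do generate log records in tempdb.
CREATE TABLE #LogDemo (c char(100) NOT NULL);

BEGIN TRAN;
INSERT INTO #LogDemo (c)
SELECT TOP (1000) 'x'
FROM sys.all_objects;                -- any convenient row source

-- Reports log bytes used by this transaction, per database.
SELECT database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions
WHERE database_id = 2;               -- 2 = tempdb
COMMIT;

DROP TABLE #LogDemo;
```

A nonzero value here confirms the inserts were logged; repeating the test against a permanent table in a user database typically shows a larger figure, consistent with Gail's point about efficient (not absent) logging.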
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 26, 2013 at 12:13 pm
vignesh.ms (9/24/2013)
Hi There, Im handling cores of data which will refreshed in every run.
for this which one I can go with ? temp table or permanent table ?
When you say "cores" do you mean "crores"? That sounds like a lot of data to be refreshed on every run.
Tom
September 26, 2013 at 3:40 pm
GilaMonster (9/26/2013)
TheSQLGuru (9/26/2013)
There are many things that happen in temp tables that are not logged
Temp tables are logged. There are NO unlogged operations in SQL other than inserts into the row version store. They're efficiently logged, they generate smaller amounts of log than permanent tables do (because TempDB never needs to roll forward), but they very definitely are logged.
Sorry, I was lumping internal objects in with my "temp tables" phrasing, and much of that activity is not logged (at least per BOL). But I personally consider not logging the back side of an update/delete to be "not logged".
Latch contention is lower due to better latching control, and, quite importantly for many systems, an automatic checkpoint in tempdb does not flush all the dirty pages down to disk.
I still think it is much better to use a temp table for transient data, outside of some esoteric need.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service