September 4, 2007 at 10:28 am
Hi,
If some process (mainly the maintenance job) causes tempdb to grow exorbitantly, does tempdb come back to its normal size automatically when the process is over? May sound silly... but can anyone throw some light on this?
Thanks and Regards
Sandhya
September 4, 2007 at 10:41 am
You will need to shrink the tempdb manually.
DBCC SHRINKDATABASE (tempdb, target_percent)
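For example, if you wanted tempdb shrunk back with roughly 10 percent free space left in the files, it would look something like this (just an illustration, pick a target that suits your server):

-- shrink tempdb, leaving about 10 percent free space in the files
DBCC SHRINKDATABASE (tempdb, 10)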
Andras
September 4, 2007 at 5:31 pm
If tempdb is growing well beyond what you think it should, check the code for Cursors and improper use of temp tables.
How big is your tempDB MDF getting?
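As a starting point, something like this shows how big the tempdb files currently are (a quick sketch; sysfiles reports size in 8 KB pages, so dividing by 128 gives MB):

-- check the current size of the tempdb data and log files (size is in 8 KB pages)
SELECT name, filename, size/128.0 AS size_mb
FROM tempdb..sysfiles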
--Jeff Moden
Change is inevitable... Change for the better is not.
September 5, 2007 at 6:41 am
If your main problem is growth during optimization/maintenance that uses built-in functionality (DBCC DBREINDEX, DBCC INDEXDEFRAG), then it is quite probable that you can't do anything about it. As far as I can remember, those operations expand the log file more than tempdb, but tempdb is affected as well.
If the growth problem is caused by other (user-written) code, then there may be a way to improve it.
The question is what you consider "too big". A few GB - let's say up to 10% of the production database - is nothing irregular.
Shrinking the database does not help much; it will just expand again. If you can afford the space, let it be and don't shrink it. If you can't, shrink it, but consider increasing storage capacity.
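If you want to see whether the space tempdb has grabbed is actually being used before deciding, a quick check (nothing fancy) is:

-- reports database size plus allocated vs. unallocated space
USE tempdb
EXEC sp_spaceused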
September 5, 2007 at 6:50 am
Just for a point of reference... we set TempDB to be 9 gig on bootup.
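In case anyone wants to do the same, the idea is just to preset the data file size so tempdb comes back up at that size after a restart. Something along these lines (a sketch; 'tempdev' is the usual logical name of the tempdb data file, check yours in sysfiles first):

-- preset the tempdb data file to 9 GB so it starts at that size after every restart
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 9216MB)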
--Jeff Moden
Change is inevitable... Change for the better is not.
September 5, 2007 at 8:53 am
I know I have some developers who write "runaway" queries that will blow out my tempdb until I am out of disk space. How they do it, I don't know, but just FYI: bad user queries can do it in a heartbeat or a few clock cycles.
Brian
September 6, 2007 at 7:02 am
Thank you very much to all of you for making the concept clear!!!!
September 6, 2007 at 6:36 pm
Try to copy a 9 gig file from one folder to another.
How long does it take?
And that's the simplest thing you can do with 9 gigs.
Tempdb does not store any permanent data. Everything is deleted when each connection is closed.
I would feel some kind of anxiety about the system which needs 9 gig for operational data.
Except in the case where that amount of space is used only for some terrible reports performed overnight.
But not everyone has such a luxury as "overnight time".
Bloody globalization!
_____________
Code for TallyGenerator
September 7, 2007 at 9:12 am
Hi,
I am new here and have just started to research a problem I am having with my tempdb. I keep reading about setting the size, shrinking it, and specifying the growth rate. That is all handy dandy. But what about a query that runs now and finishes within 12 seconds with 19,000 rows, and then later will not even finish in 5 minutes? It fills tempdb and takes up 70 GB of disk space.
September 7, 2007 at 6:48 pm
I would feel some kind of anxiety about the system which needs 9 gig for operational data.
You ain't just whistling Dixie there, ol' friend... massive amounts of crap code including cursors, nested cursors, views of views that require full materialization because of the way they were written (crap), triangular joins in some vain attempt to do rowcounts and running totals, crap Java code, crap Business Objects replication and reporting routines, massive unfiltered outer joins because some craphead wants everything to be available "just in case", and a wealth of other crap written into a 3rd party VB app obviously written by performance-challenged neophytes that don't even know how to spell "SQL". Combine that with all the crap reporting/formatting code, exports to "Excel" files that use cursors just to keep a dashed line from showing up that the in-house morons wrote, crap nightly reconciliations, and some double-crap idea that XML would be efficient for making fulfillment letters, and you'll understand my great anxiety. It makes me wanna crap.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 7, 2007 at 6:50 pm
Now, THAT sounds like a cross-join... post the code and let's have a look-see... (provided it's not hundreds of lines long).
--Jeff Moden
Change is inevitable... Change for the better is not.
September 10, 2007 at 6:15 am
Had one just this morning. A single query with a couple of bad joins on a 58 MB database created a 7.5 GB tempdb.
September 10, 2007 at 8:39 am
Thanks for your interest, Jeff, but it is not a cross join. As I said earlier, I can run the query multiple times without any problems and then, without any cause (that I know of), it will not run. One other thing I didn't mention before: the query runs in Decision Stream (for those who don't know it, it is an ETL tool from Cognos). This night's run went without problems. Here is the query anyway... excuse the structure, it is very hard to edit it in here:
SELECT D1.stu_sid, D1.stu_id, Min(D1.Entry_Date) as Entry_Date,
Case MAX(D2.Exit_Date)
When '01/01/2900' then null
Else MAX(D2.Exit_Date) end as Exit_Date
FROM (SELECT stuc.stu_sid, stuc.stu_id,f.date_desc,f.date_value,
Case When F.Date_Desc Like '%Entry%'
Then F.Date_Value
Else Case when stuc.schl_year = sy.school_year
Then cast('01/01/2900' as datetime)
Else cast('01/01/1900' as datetime) End end as Entry_Date
FROM stuc_student_dim_dex1 stuc, ESP_Dates_Fact F, student_dim s, stu_sms_cfg_dex1 sy
WHERE stuc.stu_id = s.stu_id
and s.stu_sid = F.stu_sid
and (F.Date_Value <= S.Eff_End_Dt or S.Eff_End_Dt is Null)
and f.Date_Desc like 'ESOL%') D1,
(SELECT stuc.stu_sid, stuc.stu_id,
Case When F.Date_Desc Like '%With%'
Then F.Date_Value
Else Case When stuc.schl_year = sy.school_year
Then cast('01/01/2900' as datetime)
Else cast('01/01/1900' as datetime) End end as Exit_Date
FROM stuc_student_dim_dex1 stuc, ESP_Dates_Fact F, student_dim s,
stu_sms_cfg_dex1 sy
WHERE stuc.stu_id = s.stu_id
and s.stu_sid = F.stu_sid
and (F.Date_Value <= S.Eff_End_Dt or S.Eff_End_Dt is Null) and s.schl_yr = stuc.schl_year
and f.Date_Desc like 'ESOL%' ) D2
where d1.stu_sid = d2.stu_sid
and (d1.entry_date <> '1900-01-01' and d2.exit_date <> '1900-01-01')
group by D1.stu_sid, D1.stu_id
order by d1.stu_sid
September 10, 2007 at 6:15 pm
Yep... accidental cross join... tell me where the join criteria is for the "sy" table alias...
SELECT D1.stu_sid, D1.stu_id, Min(D1.Entry_Date) as Entry_Date,
Case MAX(D2.Exit_Date)
When '01/01/2900' then null
Else MAX(D2.Exit_Date) end as Exit_Date
FROM
---------------------------------------------------------------------------------------------------------
(SELECT stuc.stu_sid, stuc.stu_id, f.date_desc, f.date_value,
Case When F.Date_Desc Like '%Entry%'
Then F.Date_Value
Else Case when stuc.schl_year = sy.school_year
Then cast('01/01/2900' as datetime)
Else cast('01/01/1900' as datetime) End end as Entry_Date
FROM stuc_student_dim_dex1 stuc, ESP_Dates_Fact F, student_dim s, stu_sms_cfg_dex1 sy
WHERE stuc.stu_id = s.stu_id
and s.stu_sid = F.stu_sid
and (F.Date_Value <= S.Eff_End_Dt or S.Eff_End_Dt is Null)
and f.Date_Desc like 'ESOL%') D1,
---------------------------------------------------------------------------------------------------------
(SELECT stuc.stu_sid, stuc.stu_id,
Case When F.Date_Desc Like '%With%'
Then F.Date_Value
Else Case When stuc.schl_year = sy.school_year
Then cast('01/01/2900' as datetime)
Else cast('01/01/1900' as datetime) End end as Exit_Date
FROM stuc_student_dim_dex1 stuc, ESP_Dates_Fact F, student_dim s, stu_sms_cfg_dex1 sy
WHERE stuc.stu_id = s.stu_id
and s.stu_sid = F.stu_sid
and (F.Date_Value <= S.Eff_End_Dt or S.Eff_End_Dt is Null)
and s.schl_yr = stuc.schl_year
and f.Date_Desc like 'ESOL%') D2
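For anyone following along, here's a stripped-down illustration (hypothetical tables, not the schema above) of why a table sitting in the FROM list with no join predicate behaves like a cross join:

-- hypothetical example: student has 10,000 rows, cfg has 5 rows
-- nothing in the WHERE clause ties cfg to student, so every student row
-- pairs with every cfg row and the result is 50,000 rows
SELECT s.stu_id, cfg.school_year
FROM student s, cfg
WHERE s.stu_id > 0

The "sy" table in both derived tables above is in exactly that position... it shows up in the CASE expressions, but nothing in the WHERE clause joins it to anything.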
--Jeff Moden
Change is inevitable... Change for the better is not.
September 11, 2007 at 1:53 pm
There is only one value in the 'stu_sms_cfg_dex1' table, and that is the current school year. We use it to restrict the student data to the current school year. Instead of hard-coding it and making endless changes every year, we retrieve the year from the current transactional system and put it in there.