December 15, 2003 at 6:47 am
Hi Gurus,
Sql server 2000 on win 2000, Database size 5 GB
I am facing a strange problem with a big insert from source table t2 (about 10 lakh, i.e. 1 million, rows)
to destination table t1 (about 4 lakh rows). Up to yesterday it was taking 15 minutes to run; suddenly today it is taking more than 6 hours to complete. There is no change in indexes or any other structure. When reproducing it on another machine, the problem is the same.
It's a customized data warehouse ETL process, so I can avoid the default locking since it is a single-user process. Right now I am using the default locking; it escalates to an exclusive table lock and takes hours to complete. Proper indexes are in place and are used in the explain plan.
TIA,
Sheilesh
The following is the type of code in the SP body:

sp begin
    BEGIN TRAN
    INSERT INTO t1 SELECT * FROM t2 WHERE cond1
    ...
    INSERT INTO t1 SELECT * FROM t2 WHERE cond2
    ...
    INSERT INTO t1 SELECT * FROM t2 WHERE cond2
    COMMIT TRAN
sp end
Any help will be greatly appreciated
December 15, 2003 at 7:56 am
Is it possible that your DB is autogrowing? Check your database settings: if it is set to autogrow and its current size is near the maximum, try either removing autogrowth or increasing the growth increment to a value near the size of the import.
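As a sketch (the database and file names are placeholders, not from the original post), on SQL Server 2000 you could inspect the file settings and switch to one large fixed growth increment like this:

```sql
-- Inspect current size, maximum size and growth settings of the current DB
EXEC sp_helpfile

-- Hypothetical database/file names: grow in one large fixed step
-- instead of many small increments during the load
ALTER DATABASE MyWarehouse
MODIFY FILE (NAME = MyWarehouse_Data, FILEGROWTH = 500MB)
```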
Obviously, another problem (but that is already covered) might be an index, a reference (foreign key) or a trigger that is acting up.
December 15, 2003 at 8:00 am
Also worth checking any other usage of SQL instance and/or server in general.
What about fragmentation of indexes?
Has the query plan changed? Are your statistics now out of date?
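If the statistics are suspect, one hedged option is to refresh them by hand on the source table before the load (the table name is taken from the post above):

```sql
-- Rebuild distribution statistics with a full scan rather than sampling
UPDATE STATISTICS t2 WITH FULLSCAN
```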
December 15, 2003 at 8:08 am
You are right, autogrow is TRUE in my database. It's a daily upload of data for more than 50 US locations. Should I set a fairly generous maximum size for the database and transaction log and turn autogrow off?
December 15, 2003 at 8:19 am
On the same instance another OLTP testing database is also running, but it is used in the daytime and the ETL runs at night.
A maintenance plan is in place to defragment the indexes; also, indexes are dropped at the time of upload and created again when running the business logic SP to generate summary data.
The query plan is the same. Using auto create and auto update statistics.
December 15, 2003 at 9:23 am
Could it be blocked? Run sp_who2 to verify that.
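A quick way to check for blocking while the insert runs (the spid value here is illustrative):

```sql
-- A non-blank BlkBy column shows which spid is blocking that session
EXEC sp_who2

-- For a suspect spid, see the statement it is running (SQL Server 2000)
DBCC INPUTBUFFER (53)
```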
December 15, 2003 at 9:31 am
You mention you drop all the indexes... including the ones on t2?
I wouldn't turn off AutoGrow until you have examined the number of Database growths in Profiler.
Also look for SP recompilations...
If you are happy doing this, try using the NOLOCK locking hint.
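For a single-user batch, the hints might be sketched like this (t1, t2 and cond1 come from the original post; NOLOCK is safe here only because nothing else writes during the load):

```sql
-- Take one exclusive table lock up front instead of escalating row locks,
-- and read the source table without taking shared locks
INSERT INTO t1 WITH (TABLOCKX)
SELECT * FROM t2 WITH (NOLOCK)
WHERE cond1
```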
December 15, 2003 at 11:03 am
Good suggestions above, one other point. If the database grows during the insert, while it grows, ALL activity stops. That could be the issue. Has the extra long time occurred more than once?
Steve Jones
http://www.sqlservercentral.com/columnists/sjones
The Best of SQL Server Central.com 2002 - http://www.sqlservercentral.com/bestof/
December 15, 2003 at 5:11 pm
Can you tell what the bottleneck is? For example, is SQL Server saturating the CPU, is there no disk idle time, is the system paging too heavily?
If it's IO, did anything change there? We had a huge drop in performance once and found our SAN had gone through an event which disabled the writeback cache which made a huge difference.
Could something have changed in the keys, not because of structure but because maybe something wrapped around, e.g. you used to add to the end but something wrapped around and you're inserting to the beginning now?
December 16, 2003 at 12:49 am
- If you can avoid lock escalation, do so: take a table lock when you start loading.
- You can leave autogrowth active, but if you expect space to be allocated during the load, allocate it yourself in advance.
- How much space is added when autogrowth occurs? That too takes time.
- Do you have disk fragmentation?
- A rebuild of the indexes may be needed. Adjust the FILLFACTOR.
- Examine your clustered index for page splits that occur during the load.
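On SQL Server 2000, a rebuild with an explicit fill factor could be sketched as (the table name is a placeholder):

```sql
-- Rebuild all indexes on t1, leaving 10% free space per page
-- to absorb inserts without page splits
DBCC DBREINDEX ('t1', '', 90)
```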
Johan
Learn to play, play to learn !
Dont drive faster than your guardian angel can fly ...
but keeping both feet on the ground wont get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
December 16, 2003 at 3:24 am
Thanks a lot, friends, for the replies.
Actually it's a daily load process. The activities we are performing are:
1. Truncate all 12 tables.
2. Drop all indexes from all the tables.
3. Shrink the database using DBCC SHRINKDATABASE.
4. Load the database using BULK INSERT commands.
5. Run the business logic SP to generate summary data from this base data.
Everything is fine except that the step 5 SP is drastically slow.
I guess the shrink-database step (step 3) after truncating is wrong; it will definitely autogrow the database on each daily run.
It's a standalone process, so I guess I could go for the READ UNCOMMITTED isolation level, lock the tables exclusively during the inserts, and use NOLOCK in the selects.
Another question: to fetch data that is optional in nature, which is preferable in selects, a column subquery or an outer join?
Please also suggest any other improvements to the above steps.
TIA,
Sheilesh
December 16, 2003 at 5:22 am
Why perform the shrink database if you've truncated the tables and are planning to load?
Build your load data sorted on the table's clustered index and then load.
You could also put the DB in read-only mode if there are no changes after the load and the index creates.
You might also split data filegroups and index filegroups.
Use Profiler to find out where user processes get stuck when using your data. It may be that you've created indexes that are not used and have been replaced by table scans.
If your tables are joined, provide indexes to support the joins, maybe even clustered indexes.
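For example, if the summary SP joins the detail tables on a location/date pair, a supporting index might look like this (the column names are hypothetical):

```sql
-- Hypothetical columns: index the join/filter keys the summary queries use
CREATE INDEX IX_t2_Location_Date ON t2 (LocationID, LoadDate)
```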
Edited by - alzdba on 12/16/2003 05:28:26 AM
Johan
December 16, 2003 at 5:56 am
If I'm understanding correctly that your #5 is the problem, can you determine whether the slowness is in forming the result sets or in the insert? I'm guessing you have one or more things like this:
insert into SummaryTable
select ... from Detailtables
where the select may be complicated. If you have isolated that your slowness is definitely in these steps, and especially if the size of these summary tables is not large compared to the data you loaded, I suspect the shrink/grow issue is not relevant. (If database expansion were making you run slow, one would assume the growth during the larger load would be slower than during the presumably smaller summary-table growth. Of course, if your summary tables are larger than the detail...)
If it is indeed in this insert..select group, then run profiler to determine which of the particular insert..selects are at fault.
Then try running the select separately without the insert and/or look at its query plan. Try to figure out if forming the result set is the issue, or inserting it into the summary table. I'd bet on the select.
Then tune the select and/or try to find out what changed about its query plan.
December 16, 2003 at 6:14 am
Don't shrink! Shrinking before the bulk insert causes two things: one, your DB has to grow back out again to handle the new data; two, disk fragmentation of the data files, causing even more slowdown. One of the first things I always do is grow the database out to a size where I know it's not going to cause a growth, and I never shrink it. Expanding and contracting databases is a huge performance killer.
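Pre-allocating once instead of shrinking and regrowing daily could be sketched as (the sizes and names are illustrative only):

```sql
-- Size the files once for the full daily load, then never shrink
ALTER DATABASE MyWarehouse
MODIFY FILE (NAME = MyWarehouse_Data, SIZE = 6000MB)

ALTER DATABASE MyWarehouse
MODIFY FILE (NAME = MyWarehouse_Log, SIZE = 1000MB)
```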
If you are dropping the indexes, make sure the database is set to the BULK_LOGGED recovery model so you can get the most speed out of the insert as well.
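On SQL Server 2000 the recovery model switch around the load could look like this (the database name is a placeholder):

```sql
-- Minimal logging for BULK INSERT and index creation during the nightly load
ALTER DATABASE MyWarehouse SET RECOVERY BULK_LOGGED

-- ... run the bulk load and index builds here ...

-- Restore full logging (and back up the log) once the load is done
ALTER DATABASE MyWarehouse SET RECOVERY FULL
```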
After that I would look at indexing schemes, join orders that your indexes support, and lastly adding additional filegroups for indexes and/or to separate large tables out.
Just my thoughts.
Wes
December 16, 2003 at 6:53 am
I don't see an index rebuild before you run the business logic SP to generate the summary data. You will mostly get table scans, since the indexes were dropped at step 2.
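In the step list above, that would mean adding an index build between steps 4 and 5, roughly like this (the index definitions are placeholders):

```sql
-- Recreate the indexes the summary SP relies on, after the bulk load
CREATE CLUSTERED INDEX IX_t2_Key ON t2 (KeyCol)
CREATE INDEX IX_t2_Support ON t2 (JoinCol)
```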