September 12, 2009 at 10:33 pm
I copied two databases from a 32-bit SQL 2005 environment to a 64-bit SQL 2005 environment. One proc takes 55 minutes in the old environment and 2 hours in the new one. The new environment is better and faster in every way, but the proc still takes more time. I did rebuild indexes and update stats before running the proc. I found a couple of tables have higher scan counts in the new environment than in the old. Can anyone suggest how I can decrease the scan count on those tables? I do not want to make any change in the code, as the same script takes less time in the old environment. I am planning to manually drop and recreate the indexes for those tables. Does anyone have a better suggestion to tackle this issue? Thanks
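For what it's worth, indexes can usually be rebuilt in place rather than dropped and recreated; a minimal sketch (the table name is a placeholder):

```sql
-- Rebuild every index on the table in place (table name is a placeholder)
ALTER INDEX ALL ON dbo.tablea REBUILD;

-- Refresh statistics on the same table with a full scan
UPDATE STATISTICS dbo.tablea WITH FULLSCAN;
```

An in-place rebuild keeps the index definitions identical between the two environments, which matters when comparing plans.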
September 12, 2009 at 11:25 pm
Any chance of posting some detail?
It would be helpful to know exactly what the procedure contains, the definitions of the objects it references, the number of rows in each, and, best of all, an actual execution plan for the procedure. If you can post the execution plans (zipped) for runs on both databases, that would be awesome.
Otherwise I would just be guessing.
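One way to capture the actual plan for a run, as requested above (the procedure name is a placeholder):

```sql
-- Return the actual execution plan XML alongside the results;
-- save the plan XML as a .sqlplan file for posting
SET STATISTICS XML ON;
EXEC dbo.YourProc;  -- placeholder for the slow procedure
SET STATISTICS XML OFF;
```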
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
September 14, 2009 at 12:16 am
As Paul said we need more details.
Did you run DBCC UPDATEUSAGE and update statistics? You need to check the execution plans on the old server and the new one and note the differences.
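The two checks above can be run like this (a minimal sketch; the argument 0 means the current database):

```sql
-- Correct any page/row count inaccuracies in the current database
DBCC UPDATEUSAGE (0);

-- Update statistics for every user table in the database
EXEC sp_updatestats;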
"Keep Trying"
September 14, 2009 at 1:22 am
iqtedar (9/12/2009)
I copied two databases from a 32-bit SQL 2005 environment to a 64-bit SQL 2005 environment. One proc takes 55 minutes in the old environment and 2 hours in the new one. The new environment is better and faster in every way, but the proc still takes more time. I did rebuild indexes and update stats before running the proc. I found a couple of tables have higher scan counts in the new environment than in the old. Can anyone suggest how I can decrease the scan count on those tables? I do not want to make any change in the code, as the same script takes less time in the old environment. I am planning to manually drop and recreate the indexes for those tables. Does anyone have a better suggestion to tackle this issue? Thanks
Is the size of both databases (32-bit and 64-bit) the same?
Is the index definition the same in both environments?
What does DBCC SHOWCONTIG say? Did you happen to run a comparison?
There are many more fundamental checks you can perform as initial steps.
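The fragmentation comparison mentioned above could be run on both servers like this (the table name is a placeholder; on SQL 2005 the newer DMV is also available):

```sql
-- Classic fragmentation report (deprecated, but available on SQL 2005)
DBCC SHOWCONTIG ('dbo.tablea') WITH TABLERESULTS;

-- DMV alternative on SQL 2005: limited-mode fragmentation scan
SELECT OBJECT_NAME(object_id) AS table_name,
       index_id,
       avg_fragmentation_in_percent,
       page_count
FROM sys.dm_db_index_physical_stats
     (DB_ID(), OBJECT_ID('dbo.tablea'), NULL, NULL, 'LIMITED');
```

Running the same query on both servers and diffing the output makes index-level differences obvious.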
September 14, 2009 at 1:33 am
Without seeing execution plans, it is hard to suggest anything. If you can post more details, it would help. Just out of curiosity, how different is the new hardware from the old? Has anything been changed at the hardware level, like physical block size, and are the service pack and build versions the same?
--------------------------------------------------------------------------------------
Recommended Articles on How to help us help you and solve commonly asked questions
Forum Etiquette: How to post data/code on a forum to get the best help by Jeff Moden
Managing Transaction Logs by Gail Shaw
How to post Performance problems by Gail Shaw
Help, my database is corrupt. Now what? by Gail Shaw
September 14, 2009 at 10:15 am
Thanks everyone for the responses. I will not be able to post the execution plan, since there are around 30 execution plans for the one proc; the proc runs batches of code. I did the following on the db:
i) DBCC FREEPROCCACHE
ii) rebuild indexes and update stats using a maintenance plan
On another 64-bit server it takes less time, so I think it is an issue with memory. But apart from that, I still have one question.
On the new 64-bit server it shows:
Table 'tablea'. Scan count 17, logical reads 24355, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
On the 32-bit server it shows:
Table 'tablea'. Scan count 3, logical reads 24355, physical reads 0, read-ahead reads 8773, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
My only question is why the scan count is lower on 32-bit and higher on 64-bit. The same db is used, and indexes and stats are updated on both. Do I need to worry about that scan count? Thanks
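The figures above come from SET STATISTICS IO; to reproduce the comparison on both servers, the same statement can be wrapped like this (the query is a placeholder):

```sql
-- Emit per-table scan count and read figures for each statement
SET STATISTICS IO ON;
SELECT COUNT(*) FROM dbo.tablea;  -- placeholder for the slow query
SET STATISTICS IO OFF;
```

Note that scan count by itself is not the cost: the logical reads are identical in both outputs above, so the same number of pages is being touched either way. A higher scan count with the same reads often just reflects a different degree of parallelism in the plan.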
September 19, 2009 at 2:51 pm
Some updates on this one: the procedure runs fast after recycling the SQL services, and with every execution after that the exec time increases. Is this an issue with tempdb? I am planning to move tempdb to a SAN drive and then test it. Any suggestions?
September 19, 2009 at 3:13 pm
the procedure runs fast after recycling the SQL services, and with every execution after that the exec time increases. Is this an issue with tempdb?
Yes, it sounds like a tempdb issue.
In this case it is also a good idea to split the tempdb data file into several files of equal size. The number of files should equal your server's CPU count, and their MAX size should be limited. Performance will increase significantly, because tempdb won't need time for space allocation. An additional point is that all the files fill simultaneously, because they are all the same size.
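The split described above can be sketched like this (the file path, sizes, and names are placeholders for illustration):

```sql
-- Add an equally sized tempdb data file with growth disabled
-- (path and sizes are placeholders)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 2GB,
          MAXSIZE = 2GB,
          FILEGROWTH = 0);
-- Repeat for tempdev3 ... tempdevN, one file per CPU core,
-- and resize the original tempdev file to match.
```

The changes take effect after the instance is restarted, since tempdb is recreated at startup.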
September 19, 2009 at 3:44 pm
Thanks for your reply. The server has 16 processors with 32 GB of RAM and has two instances on it, but the second instance has very limited activity for the time being. Can you please suggest how many files I should split tempdb into and what the max size should be? Thanks again.
September 19, 2009 at 3:59 pm
I need to get more details before I give you any recommendations.
Does the server have 16 physical CPUs, or is it 8 dual-cores?
What size SAN disk is going to be dedicated to the tempdb data files?
Are you going to have a separate disk for the tempdb log file?
September 19, 2009 at 4:21 pm
When I look in the properties under the SQL Server instance I see 16 processors. Space is not an issue; I have a couple of drives. If you want, I can keep at least 2 files on each drive. Thanks
September 19, 2009 at 4:44 pm
OK.
I would put the tempdb log file on a separate drive of around 10 GB and restrict the file growth at 9 GB. That way you will have an extra 1 GB in case of emergency to create a secondary log file.
I would split your tempdb data file into 16 files with MAX size restricted at, for example, 2 GB each if you have a 35 GB drive.
That way you will immediately allocate 32 GB and have an extra 3 GB just in case of emergency.
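The sizing above can be applied to the existing files like this (names and the path are placeholders; MODIFY FILE changes one property per statement):

```sql
-- Cap the log file at 9 GB on its own ~10 GB drive
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, MAXSIZE = 9GB);

-- Bring the original data file in line with the 16-file layout
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, MAXSIZE = 2GB);

-- Move the log file to the dedicated drive (path is a placeholder);
-- the move takes effect after the instance restarts
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'L:\TempDB\templog.ldf');
```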
September 19, 2009 at 4:49 pm
Thanks. OK, I will assign 10 GB to the log file and a 2 GB max size to each data file. Now, do you want me to split the data files across different drives, or is just one SAN drive good?
September 19, 2009 at 5:00 pm
Correct.
According to Microsoft best practices, one SAN disk is OK to keep all your tempdb data files on.
September 20, 2009 at 5:32 pm
I did try splitting the tempdb files and putting them on a SAN disk; that didn't help. It only runs fast the first time after a recycle; subsequent runs take longer. Could this be a 64-bit issue?