October 8, 2010 at 4:10 am
[font="Verdana"]We recently migrated a sql 80 database to sql 100.
After applying all the usual recommendations
>DB compatibility level change
>Update usage (DBCC UPDATEUSAGE)
>Rebuilding indexes
>and so on (sketched below)
we have found that when the same query/process is re-run on the same SQL 100 instance, the execution time varies by a factor of 1 to 200. Same task, same system, same environment, yet the execution time is inconsistent. Interestingly, there is no pattern to which re-run is the quick one: it might be the 1st, it might be the 4th ...!
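For reference, a minimal sketch of the post-migration steps listed above (the database name MyDb is a placeholder):

-- Post-migration housekeeping; MyDb is a placeholder name
ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 100;        -- move from 80 to 100
USE MyDb;
DBCC UPDATEUSAGE (0);                                     -- correct page/row counts carried over from the old system
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';    -- rebuild every index (sp_MSforeachtable is undocumented; test first)
EXEC sp_updatestats;                                      -- refresh remaining statistics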
😛
We have tried enabling and disabling the AUTO statistics options, but the response is the same.
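(For reference, these options are toggled per database, roughly as below; MyDb is a placeholder name.)

-- MyDb is a placeholder database name
ALTER DATABASE MyDb SET AUTO_CREATE_STATISTICS ON;        -- or OFF
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS ON;        -- or OFF
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS_ASYNC ON;  -- the asynchronous variant mentioned further down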
Some clues that may bear on the problem:
>Around 10 tables have 0.5+ million rows.
>The DB had poor/out-of-date statistics on the previous system; some were never updated at all.
>The missing-index DMVs list at least 100 indexes (which certainly does not mean the indexes on the old 80 system were not proper); see the query sketch after this list.
>Most queries are ad hoc (sp_executesql is not used).
>The process is mainly for reporting/data-fetch purposes.
>All other DB utilization is zero when the test/process is run.
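For reference, a rough sketch of reading the missing-index DMVs referred to above (the "impact" expression is only an illustrative heuristic, not an official formula):

-- Rough sketch of the missing-index DMVs; column list trimmed
SELECT TOP (25)
       d.statement          AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.avg_total_user_cost * s.avg_user_impact * (s.user_seeks + s.user_scans) AS rough_impact
FROM   sys.dm_db_missing_index_details      AS d
JOIN   sys.dm_db_missing_index_groups       AS g ON g.index_handle = d.index_handle
JOIN   sys.dm_db_missing_index_group_stats  AS s ON s.group_handle = g.index_group_handle
ORDER BY rough_impact DESC;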
....
....
Is there any guru out there who has worked wonders when migrating from SQL 80 to 100?
Many thanks
October 8, 2010 at 9:28 am
How does the hardware of the new server compare to the old one? How much memory? How many drives? Is tempdb on its own drive array, and have you set up multiple files? How many active connections do you have going, and what type of clients (are you using the SQLNCLI10 provider)? 64-bit or 32-bit?
Give us some more information and perhaps I can take an educated guess.
The probability of survival is inversely proportional to the angle of arrival.
October 8, 2010 at 11:52 pm
sturner (10/8/2010)
how does the hardware of this new server compare to the old one? How much memory? How many drives? Is tempdb on its own drive array and have you set up multiple files? How many active connections do you have going on, what type of clients (are you using SQLNCLI10 provider?) 64 bit or 32 bit? Give us some more information perhaps I can take an educated guess.
[font="Verdana"]Sturner, thanks for your time!
1. The new system's hardware is an 8-core processor with 4 GB of RAM. (Also, my observations were made on the new system only, not the old one, so a comparison between old and new isn't really in the picture.)
2. Yes, tempdb is hugely important; it sits in its default installation directory along with the other system databases, while the production DB is on another drive. The tempdb files are not split; they are at the installation defaults.
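(For reference, if we do decide to split tempdb into multiple data files, it is just one ALTER DATABASE ... ADD FILE per file; the paths, names and sizes below are placeholders, not recommendations.)

-- Hypothetical example only: paths, names and sizes are placeholders
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);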
3. The client is a desktop application going through ADO.NET with ad-hoc queries. [But why did you stress this point? Is it important as well? Sorry, I have worked mostly on the back end, which is why your question is French to me.] And it is a 32-bit infrastructure.
4. Active connections are no big deal: at most 50 (it is a desktop application), and during the process/test in question only 1-2 connections would be active.
I have noticed people reporting problems related to trace flags, the AWE setting, and memory setup on newly built servers. Can you explain/discuss why and how these can be a factor? In 90%+ of cases the default settings work well (I guess).
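(For context, the memory settings I am referring to look roughly like this; the values shown are placeholders, not recommendations.)

-- Placeholder values only; not a recommendation
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;                 -- relevant mainly on 32-bit builds with more RAM than ours
EXEC sp_configure 'max server memory (MB)', 3072;   -- leave headroom for the OS
RECONFIGURE;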
We might assume that auto create/update statistics brought the system to a standstill, but no documentation or expert advice says this option can drive the system to that point, especially when asynchronous statistics update/create is available!
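(One thing we can at least verify is how stale the statistics actually are; a quick sketch of checking the last update dates in the current database:)

-- Sketch: last statistics update date per statistic, user tables only
SELECT OBJECT_NAME(s.object_id)              AS table_name,
       s.name                                AS stats_name,
       STATS_DATE(s.object_id, s.stats_id)   AS last_updated
FROM   sys.stats AS s
WHERE  OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY last_updated;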
Tried everything...
But do we need to flush all the caches and start the server afresh? I don't think so ... I doubt the system is still holding old 80-compatibility plans.
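(If we did want to rule that out on a test box, clearing the caches is only a couple of commands; not something to run casually on a busy production server.)

-- Test systems only: clears all cached plans and clean data pages
DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;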
Also, some people suggest query rewriting... !!! But that is rarely practical once a system is already in production.
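(For what it's worth, a rough sketch of what one ad-hoc statement would look like as a parameterised sp_executesql call; the table and column names are made up.)

-- Hypothetical table/columns, for illustration only
EXEC sp_executesql
     N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID',
     N'@CustomerID INT',
     @CustomerID = 42;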
Let's hope to dig deeper into the problem!
Thank you again!