October 13, 2011 at 8:45 am
Hi,
I am using a 50 GB trace file as the workload for a database that is 160 GB in size, on SQL Server 2008 Standard Edition (x64) SP1. I started the analysis 26 hours ago and it is still running, with tuning progress at "4 remaining" and the actions showing "Consuming workload (in progress)" and "Performing analysis (in progress)".
I don't understand why it is taking so long. Is it because of the size of the trace file? Also, the trace file was generated from the production database a week ago; I backed that database up 2 days ago, restored it on a test server, and am running the DTA analysis there.
Please throw some light on this.
Thanks
“If your actions inspire others to dream more, learn more, do more and become more, you are a leader.” -- John Quincy Adams
October 13, 2011 at 8:49 am
DTA is for when you don't know what you are doing (and it's even more dangerous for those people).
This is the correct way to go:
P.S. 50 GB trace file is a little overkill!
October 13, 2011 at 10:06 am
Sure...but how long should it usually take to analyze the workload for such a big trace...
“If your actions inspire others to dream more, learn more, do more and become more, you are a leader.” -- John Quincy Adams
October 13, 2011 at 10:09 am
Sapen (10/13/2011)
Sure...but how long should it usually take to analyze the workload for such a big trace...
A freakishly a$$ long time.
If the server is slow, maybe 1 month.
October 14, 2011 at 12:19 pm
Create roll-over trace files and then run DTA on each of them separately.
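If you go that route, the command-line dta utility can be pointed at each rollover file in turn. A rough sketch (the server, database, paths, and session name here are all placeholders):

dta -S MyServer -E -D MyDatabase -if C:\Traces\Workload_1.trc -s RolloverSession1 -of C:\Traces\Recommendations_1.sql -A 60

-E uses Windows authentication, -if is the input workload file, -s names the tuning session, -of writes the recommendation script, and -A caps the tuning time in minutes.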
October 14, 2011 at 5:02 pm
Ninja's_RGR'us (10/13/2011)
P.S. 50 GB trace file is a hell of an overkill!
This should be handled as a surgical strike, not a shotgun approach.
If you can / want to go with the shotgun approach, just query the missing index DMVs and prune the duplicates (see the query at the end of this post).
Then log to see what's really being used or causing bottlenecks.
IMNSHO DTA is just a waste of time unless you want to specifically tune 1 long proc shotgun style (or a very small list of queries).
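For reference, a sketch of that missing-index query against the standard DMVs (the improvement number is a rough heuristic, not gospel, and these DMVs reset on every instance restart):

-- Missing index suggestions, ordered by a rough impact estimate.
SELECT TOP (25)
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact,
    migs.user_seeks * migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) AS est_improvement
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups AS mig
    ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
    ON mig.index_handle = mid.index_handle
ORDER BY est_improvement DESC;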
October 14, 2011 at 5:16 pm
I stopped the analysis after your reply and traced the db. This time I only traced for an hour, the file is 1 GB, and I started the DTA. In the next hour it's going to be 24 hours of DTA analysis, and the progress is 12% now. I'm running it on a 4-core Windows Server 2008 machine. In the past a 1 GB trace file was analyzed in an hour and a half against the same db.
“If your actions inspire others to dream more, learn more, do more and become more, you are a leader.” -- John Quincy Adams
October 14, 2011 at 5:27 pm
Sapen (10/14/2011)
I stopped the analysis after your reply and traced the db. This time I only traced for an hour, the file is 1 GB, and I started the DTA. In the next hour it's going to be 24 hours of DTA analysis, and the progress is 12% now. I'm running it on a 4-core Windows Server 2008 machine. In the past a 1 GB trace file was analyzed in an hour and a half against the same db.
After 24 hours of work on my end I'd be at least at the 5th round of tuning and I'd have gotten well over 90% of the possible gain.
Some jobs are not better left to computers...
October 14, 2011 at 5:55 pm
Use a server-side trace to capture workloads from key points in the business day and then feed this to DTA; a minimal sketch follows.
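A minimal server-side trace with rollover files (the output path and 1 GB file size here are just placeholders):

DECLARE @TraceID int;
DECLARE @maxfilesize bigint;
SET @maxfilesize = 1024; -- MB per rollover file
-- Option 2 = TRACE_FILE_ROLLOVER; SQL Server appends .trc to the file name.
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\Workload', @maxfilesize, NULL;
-- Add events/columns with sp_trace_setevent here before starting.
EXEC sp_trace_setstatus @TraceID, 1; -- 1 = start; 0 = stop; 2 = close and delete the definition
SELECT @TraceID AS TraceID; -- keep this so you can stop the trace later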
DTA only makes recommendations, and all of these should be tested first to verify them. DTA offers the option to hypothetically apply the changes and re-run the workload to check for improvements.
I prefer to target long-running queries, examine their execution plans, and then look for improvements; something like the query below is a starting point.
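If you take that targeted route, the plan cache is one place to start; a sketch using sys.dm_exec_query_stats (the numbers only cover the time since each plan was cached):

-- Top queries by average elapsed time, with their cached plans.
SELECT TOP (20)
    qs.total_elapsed_time / qs.execution_count / 1000.0 AS avg_elapsed_ms,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
          ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY avg_elapsed_ms DESC;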
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
October 17, 2011 at 12:56 pm
You can use a light-weight trace capture. Don't capture all the events/columns, only the important ones.
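For DTA purposes, RPC:Completed (event 10) and SQL:BatchCompleted (event 12) with a handful of columns are usually enough. A sketch, assuming the @TraceID returned by sp_trace_create in the earlier post; the 500 ms duration filter is just an illustration:

DECLARE @on bit;
SET @on = 1;
-- Event 10 = RPC:Completed, event 12 = SQL:BatchCompleted.
-- Columns: 1 = TextData, 12 = SPID, 13 = Duration, 35 = DatabaseName.
EXEC sp_trace_setevent @TraceID, 10, 1, @on;
EXEC sp_trace_setevent @TraceID, 10, 12, @on;
EXEC sp_trace_setevent @TraceID, 10, 13, @on;
EXEC sp_trace_setevent @TraceID, 10, 35, @on;
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 35, @on;
-- Optional: only capture statements slower than 500 ms
-- (Duration is in microseconds at the server level in 2005+).
DECLARE @dur bigint;
SET @dur = 500000;
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @dur; -- 0 = AND, 4 = greater than or equal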