May 10, 2010 at 1:44 pm
Ninja's_RGR'us (5/10/2010)
Is it possible to take a backup from production, then start a trace on prod, then a couple of hours later do a restore to a test server and replay the trace there?
Yes. You may not get exactly the same behaviour as happened on prod (multiple threads, concurrent queries, etc.), but it should be close enough. Watch the database IDs; that's one thing that gives trouble when replaying a trace against a different instance.
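For example, a quick sanity check before the replay is to compare the database ID on both instances (the database name here is just a placeholder):

    -- Run on production and again on the test server after the restore.
    -- If the numbers differ, the replayed events will point at the wrong database.
    SELECT @@SERVERNAME AS InstanceName,
           DB_ID(N'MyNavDatabase') AS DatabaseID;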
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 10, 2010 at 1:59 pm
GilaMonster (5/10/2010)
Ninja's_RGR'us (5/10/2010)
Is it possible to take a backup from production, then start a trace on prod, then a couple of hours later do a restore to a test server and replay the trace there?
Yes. You may not get exactly the same behaviour as happened on prod (multiple threads, concurrent queries, etc.), but it should be close enough. Watch the database IDs; that's one thing that gives trouble when replaying a trace against a different instance.
What if it's the same ID but a different name?
How do you change the ID of the db once it's live?
May 10, 2010 at 2:12 pm
LMGT4U 🙂
Apparently I'm screwed since 99.9% of the queries are run with Windows authentication... so much for load testing.
Is it safe for me to enable the new transaction isolation level on the db (without changing any code)?
(Yes, I'll test on the preprod server to confirm)... but are there any adverse effects other than performance I should be worried about?
Assuming it's not enabled now, it shouldn't have any effect on the server other than tempdb, right?
Also, assuming I can't replay the file and can't spare 200 employees next Sunday to do load testing :w00t:, I can go with the next best thing... I can do a trial on Saturday, since we have only 25% of the normal weekday load. If that holds up without any increase in CPU, we should be fine.
Now, talking rollback: if I were to go ahead with this plan and find performance problems with the site on Monday morning, even after passing the test on Saturday, would simply changing the setting at the database level roll back all the changes and go back to "normal"?
May 10, 2010 at 2:24 pm
As far as I know, there is no need to change any code; this is set at the database level. Also keep in mind that the isolation level will not take effect until your last open transaction has committed. (This is kind of the equivalent of saying "restart the SQL service". 🙂
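Assuming the setting under discussion is READ_COMMITTED_SNAPSHOT (the database-level option that needs no code change), it is a single statement, and it simply waits until the open transactions clear:

    -- Sketch only; "MyNavDatabase" is a placeholder.
    -- This blocks until no other transactions are open in the database,
    -- which is why a quiet window (or WITH ROLLBACK IMMEDIATE) is needed.
    ALTER DATABASE MyNavDatabase
    SET READ_COMMITTED_SNAPSHOT ON;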
-Roy
May 10, 2010 at 2:33 pm
Roy Ernest (5/10/2010)
As far as I know, there is no need to change any code; this is set at the database level. Also keep in mind that the isolation level will not take effect until your last open transaction has committed. (This is kind of the equivalent of saying "restart the SQL service". 🙂
Not for us... our clients are on the site from 6 am to 8 pm. So it's safe to assume that I'll find a window to set it.
Now is it the same thing for the rollback?
May 10, 2010 at 2:53 pm
Yes, it is the same for the rollback as well. At least from my testing, it was that way.
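Again assuming READ_COMMITTED_SNAPSHOT is the setting in question, the rollback is just the same statement with OFF, and it waits for open transactions in the same way:

    -- Sketch only; same placeholder name and same caveat about open transactions.
    ALTER DATABASE MyNavDatabase
    SET READ_COMMITTED_SNAPSHOT OFF;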
-Roy
May 10, 2010 at 3:57 pm
Ninja's_RGR'us (5/10/2010)
Is it safe for me to enable the new transaction isolation level on the db (without changing any code)?
(Yes, I'll test on the preprod server to confirm)... but are there any adverse effects other than performance I should be worried about?
Assuming it's not enabled now, it shouldn't have any effect on the server other than tempdb, right?
You may want to take a look at the following for application side-effects:
http://sqlblog.com/blogs/hugo_kornelis/archive/tags/Isolation/default.aspx
http://www.devx.com/dbzone/Article/32957/0/page/4
In terms of the server, I don't know of any adverse effects other than those previously discussed.
May 10, 2010 at 4:17 pm
We replayed traces back when we were upgrading from 2K to 2K5. If I remember right, we got around the issues by loading the trace into a table, running an update to change the DB ID and user name, then bringing it back into Profiler for the rerun. We ended up updating the trace so that the replay ran all the statements as the same SQL (not NT) user, so I'm not sure about the password issues on the link you posted, but if all you want is a rough gauge on performance, doing an update like that might work for you. All I was interested in was statements that failed on 2K5 but didn't fail on 2K, and it worked beautifully for that.
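Something along these lines, roughly (the path, database IDs and login name below are made up, so adjust them for your boxes):

    -- Load the trace file into a table
    SELECT *
    INTO   dbo.ReplayTrace
    FROM   fn_trace_gettable(N'C:\Traces\prod_capture.trc', DEFAULT);

    -- Repoint the events at the database ID on the restore target
    UPDATE dbo.ReplayTrace
    SET    DatabaseID = DB_ID(N'MyNavDatabase')   -- ID on the test server
    WHERE  DatabaseID = 7;                        -- ID captured on production

    -- Replay everything as a single SQL login instead of the original NT users
    UPDATE dbo.ReplayTrace
    SET    LoginName  = N'replay_login',
           NTUserName = NULL;

    -- Then open dbo.ReplayTrace as a trace table in Profiler and replay from there.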
Chad
May 10, 2010 at 9:09 pm
Chad Crawford (5/10/2010)
We replayed traces back when we were upgrading from 2K to 2K5. If I remember right, we got around the issues by loading the trace into a table, running an update to change the DB ID and user name, then bringing it back into Profiler for the rerun. We ended up updating the trace so that the replay ran all the statements as the same SQL (not NT) user, so I'm not sure about the password issues on the link you posted, but if all you want is a rough gauge on performance, doing an update like that might work for you. All I was interested in was statements that failed on 2K5 but didn't fail on 2K, and it worked beautifully for that.
Chad
Yeah, that should do it for us. In MS Dynamics Nav all logins are dbo, and the app secures all data access through forms, so it's safe for me to test... especially since I'm mostly interested in performance.
May 10, 2010 at 9:10 pm
matt stockham (5/10/2010)
Ninja's_RGR'us (5/10/2010)
Is it safe for me to enable the new transaction isolation level on the db (without changing any code)?
(Yes, I'll test on the preprod server to confirm)... but are there any adverse effects other than performance I should be worried about?
Assuming it's not enabled now, it shouldn't have any effect on the server other than tempdb, right?
You may want to take a look at the following for application side-effects:
http://sqlblog.com/blogs/hugo_kornelis/archive/tags/Isolation/default.aspx
http://www.devx.com/dbzone/Article/32957/0/page/4
In terms of the server, I don't know of any adverse effects other than those previously discussed.
Thanks, I'll let you know if I have any more questions :w00t:.
May 10, 2010 at 9:11 pm
Oh and I almost forgot.
GO HABS GO
:w00t::-D:hehe:
May 21, 2010 at 8:20 pm
Ninja's_RGR'us (5/10/2010)
Here's the big picture. I'm working as an outside resource for a tire distributor running the ERP MS Dynamics Nav 2009 SP1. My job is to build and maintain a transactional website in ASP.NET C# (VS 2008) to take orders from the clients.
We're on SQL 2008 Standard 64-bit, Windows 2008 64-bit.
When MS Nav needs to start a posting sequence, such as for an order, it starts by locking all the required tables with an UPDLOCK.
Under normal circumstances it works OK. The posting is generally done in under 1 second, even if 400 queries are required for the process :sick:. So in the absolute worst case my queries will take 2 seconds to complete rather than 250 ms.
Now, when the other consulting company comes in to work on and debug the application, the UPDLOCK remains in place for minutes at a time while they manually debug the application (IN PRODUCTION). This causes my web application to constantly time out because it can't access the data. I've seen as many as 150 timeouts in 60 minutes.
Now, I've gone as far as kicking their *** out the door (the server's door), but since I have no authority on the matter, they keep coming back and causing more damage.
I have a somewhat limited understanding of locking and transaction isolation levels. I've read the articles on SSC, but I'm still not convinced there's a way around that problem.
Is there any way, other than NOLOCK if possible, to get around that problem?
We're concerned about dirty reads, but at the same time a dirty read is better than no read at all for that application (this assumes short windows of such reads... like 1-2 hours a month). I was thinking about switching to NOLOCK when I get a timeout, so that I at the very least return some data to the client (which I can revalidate without NOLOCK at the next postback).
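Just to make that fallback concrete, it would look roughly like this (the table and column names are made up, not actual Nav objects):

    -- Normal read: can block (and eventually time out) while the posting
    -- transaction still holds its locks on the table.
    DECLARE @CustomerNo nvarchar(20) = N'C00042';   -- example value

    SELECT TOP (50) [No_], [Status]
    FROM   dbo.WebOrders
    WHERE  [Customer No_] = @CustomerNo;

    -- Fallback after a timeout: dirty read, returns whatever is there right now
    -- and gets revalidated without the hint on the next postback.
    SELECT TOP (50) [No_], [Status]
    FROM   dbo.WebOrders WITH (NOLOCK)
    WHERE  [Customer No_] = @CustomerNo;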
Thank you for the post.
__________________
May 24, 2010 at 2:29 am
We have a principal server S01 and a mirror server S02. Mirroring model is high protection.
The principal also has transactional replication enabled.
If I take down the mirror (let's say someone unplugs it), then my log will grow, as the transactions cannot be hardened at the mirror until it comes back online. I'm aware of this, but I need to know whether the replication will be affected. Can it continue as normal, or is it dependent on the mirror being in sync?
Will pausing mirroring (clicking the pause button) allow replication to continue unaffected? 😛
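For reference, the T-SQL equivalent of the pause button would be something like this (the database name is just an example):

    -- Pause mirroring; note that the log still cannot truncate past the oldest
    -- unsent record while the session is suspended, so it keeps growing.
    ALTER DATABASE MyMirroredDB SET PARTNER SUSPEND;

    -- Resume and let the mirror catch up
    ALTER DATABASE MyMirroredDB SET PARTNER RESUME;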
May 24, 2010 at 6:14 am
Please post new questions in a new thread. Thanks.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability