August 25, 2010 at 3:32 am
Hi,
we are having a performance problem and we are trying to tune the database.
Decisions were made long ago and here is what I found when I got here:
The current system consists of a SQL Server 2005 Enterprise SP3 cluster installation with logical disks residing on a SAN, plus 1000 remote users with SQL Server 2005 Express.
It has 4 merge publications with 1000 subscribers to each of them. We usually have between 10 and 40 concurrent users (the ones with SQL Server Express) replicating between 08:00 and 19:00, Monday to Friday, as they usually work remotely and may be replicating from 500 km away or even more.
Steps we have taken include:
- Checking disks --> they seem to have problems coping with the workload, but they perform quite well otherwise.
- Checking indexes on the server side --> they were causing problems, but they are fixed now.
- Improved tempdb performance by increasing the number of data files.
- Reduced the time for subscriptions to expire.
- ...
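For reference, the tempdb and expiration changes above can be sketched in T-SQL like this (the file path, sizes, and publication name 'MyMergePub' are made-up placeholders; adjust to your environment):

```sql
-- Add an extra tempdb data file (one file per CPU core is a common
-- 2005-era starting point). Path and sizes are placeholders.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\tempdb2.ndf',
          SIZE = 1024MB,
          FILEGROWTH = 256MB);

-- Shorten the subscription retention period (in days) for a merge
-- publication; shorter retention means less change-tracking metadata
-- for the merge agents to wade through.
EXEC sp_changemergepublication
    @publication = N'MyMergePub',
    @property    = N'retention',
    @value       = N'7';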
Any suggestion would be welcome
Thanks,
Gerardo
August 25, 2010 at 1:11 pm
What sort of performance issues are you having exactly?
Is it the actual merge process that's running slowly? Are you using push or pull subscribers? Is the distribution db on the publisher server or separate?
August 26, 2010 at 2:57 am
Hi
we have the distribution database on the same server, and we have performance problems when the merge runs, even when there are no changes; more specifically when there are more than 20 concurrent replicating subscribers, and whenever we do an initial replica.
In fact, our main concern now is how to change anything, as making changes to the tables (published articles) has become impossible because it would require too much time.
(BTW, subscribers are using pull subscriptions and we are using filters.)
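One thing that might help pin down where the time goes: the merge agent session history in the distribution database records how long each phase took and how many changes were actually moved. A sketch (column names are from the 2005 merge agent history tables, worth double-checking on your build):

```sql
-- Run in the distribution database on the publisher.
-- Recent merge agent sessions: duration, upload/download split, and
-- change counts. If "no changes" merges still show long durations,
-- the time is going into metadata evaluation and connection overhead.
SELECT TOP 50
    a.name                                                       AS agent_name,
    s.start_time,
    s.duration                                                   AS duration_sec,
    s.upload_time,
    s.download_time,
    s.upload_inserts + s.upload_updates + s.upload_deletes       AS uploaded_changes,
    s.download_inserts + s.download_updates + s.download_deletes AS downloaded_changes
FROM dbo.MSmerge_sessions AS s
JOIN dbo.MSmerge_agents   AS a ON a.id = s.agent_id
ORDER BY s.start_time DESC;
```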
Thank you
August 26, 2010 at 3:45 am
How are your subscribers connected - over the internet?
How big is your filtered database?
August 26, 2010 at 4:04 am
They connect over the internet, using a VPN, and the quality of the connection varies: some have cable, some use 3G.
One of the tables has about 17M records, but the filtering reduces it to a maximum of 5000.
The other tables have only about 50m records maximum.
Some data is shared, but only about 10%.
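Since you are using filters with that many subscribers, it may be worth checking whether the publications use precomputed partitions, which on SQL Server 2005 can dramatically cut the per-merge cost of evaluating row filters. A sketch ('MyMergePub' is a placeholder; enabling it requires the articles to meet the precomputed-partition rules):

```sql
-- Run in the published database. use_partition_groups shows whether
-- each merge publication uses precomputed partitions.
SELECT name, use_partition_groups
FROM dbo.sysmergepublications;

-- Enable precomputed partitions on an existing publication.
EXEC sp_changemergepublication
    @publication = N'MyMergePub',
    @property    = N'use_partition_groups',
    @value       = N'true';
```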
Thank you
August 26, 2010 at 4:11 am
Not sure if merge allows it, but in transactional replication you can test the latency of replication with a "tracer token". Just googled it, and it isn't supported for merge. Doh.
With such a varying range of connection speeds, I would personally add some warning thresholds on the publication in Replication Monitor.
MSDN has some info within the link below
http://msdn.microsoft.com/en-us/library/ms152768%28v=SQL.90%29.aspx
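For what it's worth, the tracer token call mentioned above looks like this for a transactional publication ('MyTranPub' is a placeholder; as noted, it will fail against a merge publication):

```sql
-- Transactional replication only: posts a tracer token into the log so
-- publisher -> distributor -> subscriber latency can be measured.
EXEC sys.sp_posttracertoken @publication = N'MyTranPub';

-- List the posted tokens to get their IDs...
EXEC sys.sp_helptracertokens @publication = N'MyTranPub';

-- ...then read back the latency history for a given token
-- (the @tracer_id value below is a placeholder).
EXEC sys.sp_helptracertokenhistory
    @publication = N'MyTranPub',
    @tracer_id   = 1;
```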