September 5, 2008 at 6:59 am
I have a query that, when executed on the local server, runs in less than one second.
However, when I fully qualify the table names (server.dbname.schema.table) and run it from another server, using a linked server that points to the server where the data is, it runs in 50-100 seconds. Is it normal for there to be such a drastic difference?
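For reference, the two forms look roughly like this (the server, database, and table names below are placeholders, not the real objects):

-- Run on the server that holds the data: finishes in under a second.
SELECT * FROM DBNAME.dbo.TableName;

-- Same query issued from the other server through the linked server (four-part name);
-- this is the form that takes 50-100 seconds.
SELECT * FROM [PROD-SERVER-NAME].DBNAME.dbo.TableName;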
The Redneck DBA
September 5, 2008 at 7:18 am
Does the query include any joins? Can you post the query?
September 5, 2008 at 8:03 am
I have frequently seen that sort of behaviour.
Setting the "Collation Compatible" option to true for the linked server has helped occasionally. This is what BOL has to say about to it:-
Affects Distributed Query execution against linked servers. If this option is set to true, Microsoft SQL Server assumes that all characters in the linked server are compatible with the local server, with regard to character set and collation sequence (or sort order). This enables SQL Server to send comparisons on character columns to the provider. If this option is not set, SQL Server always evaluates comparisons on character columns locally.
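For what it's worth, the option can be turned on with sp_serveroption; something like this, substituting your own linked server name:

-- Tell SQL Server the linked server's character set/collation matches the local one,
-- so comparisons on character columns can be pushed to the remote server.
EXEC sp_serveroption @server = N'PROD-SERVER-NAME', @optname = 'collation compatible', @optvalue = 'true';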
September 5, 2008 at 12:03 pm
I tried playing with that, but it didn't seem to make a difference.
I finally gave up and made a stored procedure on the server that gets hit, and now I just execute that stored procedure remotely. That solved the issue.
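In case it helps anyone else, the remote call is just the four-part name of the procedure (usp_MyReport is a made-up name standing in for the real one):

-- Requires the linked server's 'rpc out' option to be enabled.
-- All the joins run on the server that owns the data; only the result set comes back.
EXEC [PROD-SERVER-NAME].DBNAME.dbo.usp_MyReport;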
The Redneck DBA
September 22, 2008 at 9:35 am
Yes, there are lots of joins, but with that collation setting shouldn't those all be done remotely and not on the calling server?
I'm still fighting with this thing. I don't really care that it takes forever to run, but I do care that for some reason the SELECT statement I'm running is locking the table being queried.
Has anyone ever seen a remote select query lock a full table before? Anyone know what to do about it?
The Redneck DBA
September 22, 2008 at 9:55 am
Yes, I have seen it lock on a SELECT when I don't think it should. When it has done that, we've used SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED. Sometimes that will help.
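Something along these lines at the top of the batch (server and table names are just placeholders):

-- Read without taking shared locks; dirty reads become possible.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT * FROM [PROD-SERVER-NAME].DBNAME.dbo.TableName;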
EDIT: Be sure you read up on the consequences of using that command.
MCITP, Database Administrator
A hodgepodge of Information Technology and Life
September 22, 2008 at 10:03 am
That's interesting. I would expect that to allow the select statement to run if something else was in the middle of a transaction and that was preventing the select from running, but why would setting that isolation level cause the select itself to not lock a table while it is running? (And for that matter, why is the select statement locking the table to begin with?)
The Redneck DBA
September 22, 2008 at 10:46 am
I had the same problem using linked servers and full table locks. Our solution was to get rid of the linked servers, put the two databases on the same server, and use synonyms. Our next step is to combine the two databases under different schemas. It may sound drastic, but when the application is at a standstill because of blocking, it was our quickest option.
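The synonyms themselves are one line each; roughly like this (object names are placeholders):

-- Both databases now live on the same instance; the synonym stands in for the old
-- linked-server reference, so queries just use the local two-part name.
CREATE SYNONYM dbo.TableName FOR OtherDatabase.dbo.TableName;

SELECT * FROM dbo.TableName;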
September 22, 2008 at 10:50 am
I wish we could do that, but the servers involved here are scattered all over the country and can't really be combined.
The Redneck DBA
September 22, 2008 at 11:01 am
Jason Shadonix (9/22/2008)
(And for that matter, why is the select statement locking the table to begin with?)
I stopped trying to figure that out.
MCITP, Database Administrator
A hodgepodge of Information Technology and Life
September 23, 2008 at 12:24 am
I have also had performance problems over linked servers. One thing I found was that when too much data is retrieved, or too many tables are involved in the query, it becomes slow. So here is what I did: I created views for the queries that involve a lot of tables and return a substantial amount of data. The other option I used was to create a procedure on the local server and execute that procedure over the linked server. The reason for the two options is that the query plans and execution plans stay on the local server, and that makes it faster. I also made sure these tables are optimised, meaning indexes (index defrags) and primary keys, and I played with the various connection providers.
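Roughly what the view option looks like (all object names below are made up):

-- On the server that owns the data: wrap the heavy multi-table join in a view.
CREATE VIEW dbo.vw_BigJoin
AS
SELECT o.OrderID, o.OrderDate, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;
GO

-- From the calling server: the linked-server query only references the one view,
-- so the joins are resolved remotely and only the result rows cross the network.
SELECT * FROM [PROD-SERVER-NAME].DBNAME.dbo.vw_BigJoin;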
Hope this helps.
September 24, 2008 at 9:10 am
Another strange thing... I tried to reproduce it on another server by changing my query from [PROD-SERVER-NAME].DBNAME.dbo.TableName to [DEV-SERVER-NAME].DBNAME.dbo.TableName, and I can't reproduce the locking behavior.
What types of settings (besides the collation compatible one) can I look at that might be causing this?
The Redneck DBA
September 24, 2008 at 9:16 am
Hi,
AFAIK we cannot specify WITH (NOLOCK) when querying linked servers, which means locking hints won't work against remote tables, right? Also, network I/O adds to the cost: SQL Server normally moves data in packets of around 4096 bytes (not sure of the exact figure), so if the result set is large it can take a while. The latency between the two servers will also add to the delay in getting results.
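If you really need the hint, one workaround people use is OPENQUERY, since the pass-through text is executed on the remote server (names below are placeholders):

-- The query text inside OPENQUERY is sent to the linked server and run there,
-- so the NOLOCK hint is applied on that side rather than by the local optimizer.
SELECT *
FROM OPENQUERY([PROD-SERVER-NAME], 'SELECT * FROM DBNAME.dbo.TableName WITH (NOLOCK)');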
Regards,
Sakthi
My Blog -> http://www.sqlserverdba.co.cc
September 24, 2008 at 9:35 am
But even if NOLOCK did work, that solves the opposite problem from the one I have, doesn't it? I would expect NOLOCK to allow the select statement to run even if there were already a lock on the table. But the lock on the table is coming from the select itself, not from another query.
The Redneck DBA
September 24, 2008 at 9:41 am
Did you try the READ UNCOMMITTED that I posted?
MCITP, Database Administrator
A hodgepodge of Information Technology and Life