March 30, 2021 at 2:26 am
There is a SQL statement that accesses the tables of another server. After checking the execution plan, I found that most of the cost is in the Remote Scan and Remote Query operators. How can I improve the performance of the remote scan and remote query? Thanks!
March 30, 2021 at 9:22 am
There isn't really much that you can do.
Are you accessing the data with OPENQUERY? (You might be able to pass some query hints.)
Check the linked server provider options.
Is it possible to change drivers for the linked server?
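To illustrate the OPENQUERY suggestion: with a four-part name, the optimizer may pull the whole remote table across before filtering, whereas OPENQUERY sends the query text to the remote server so only the filtered rows travel over the network. A minimal sketch, assuming a hypothetical linked server REMOTESRV and table SalesDB.dbo.Orders:

```sql
-- Four-part name: the WHERE clause may be applied locally,
-- after the whole remote table has crossed the wire.
SELECT OrderID, OrderDate
FROM REMOTESRV.SalesDB.dbo.Orders
WHERE OrderDate >= '2021-01-01';

-- OPENQUERY: the pass-through text is executed on the remote
-- server itself, so only matching rows are returned.
SELECT *
FROM OPENQUERY(REMOTESRV,
    'SELECT OrderID, OrderDate
     FROM SalesDB.dbo.Orders
     WHERE OrderDate >= ''2021-01-01''');
```

Compare the execution plans of the two forms; the OPENQUERY version should show a single Remote Query operator returning far fewer rows.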
March 30, 2021 at 9:48 am
Remote Scan usually means copying the remote table in its entirety into local memory (tempdb) and then scanning it locally.
If the same object is scanned repeatedly it might make sense to explicitly copy it into a temp table and then use that temp table in the query.
Otherwise - you need to ask the remote DBA if that remote table is indexed and if it's possible to use that indexing in your query in order to minimize the amount of data copied across.
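The "copy it once into a temp table" idea above can be sketched as follows, assuming the same hypothetical linked server REMOTESRV and table SalesDB.dbo.Orders:

```sql
-- Pull the remote data across exactly once...
SELECT OrderID, CustomerID, OrderDate
INTO #RemoteOrders
FROM OPENQUERY(REMOTESRV,
    'SELECT OrderID, CustomerID, OrderDate
     FROM SalesDB.dbo.Orders');

-- ...index it locally for the repeated scans...
CREATE CLUSTERED INDEX IX_RemoteOrders ON #RemoteOrders (OrderID);

-- ...then reference the temp table in the rest of the batch
-- instead of hitting the linked server again.
SELECT c.CustomerName, r.OrderDate
FROM dbo.Customers AS c
JOIN #RemoteOrders AS r ON r.CustomerID = c.CustomerID;
```

This only pays off when the same remote object is read more than once in the batch; for a single scan it just moves the cost around.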
_____________
Code for TallyGenerator
March 30, 2021 at 11:44 am
One aspect of remote queries you can't affect is the data being moved between the two systems. If you move one million rows (just plucking a number out of the air) it takes longer than moving one row. There's no tuning possible on the calling machine that changes this.
So, what can you do? Well, treat the other machine and the other query as if it were a query on your system. Ensure that you're passing filter criteria so it retrieves a smaller data set. Ensure that there are indexes in place on the remote machine that you're able to use when filtering that data. Don't use constructs in the query that prevent index/statistics use. In short, all the standard query tuning mechanisms that you employ locally have to be done on the remote end.
After that, it's just a question of bandwidth. And yeah, I hear it constantly: "But we can't filter that data until we get it back." Oh well, then you're stuck. I'd push back on that concept. Filter as much as humanly possible in the remote query, however you have to do it.
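One way to push a filter to the remote side while still parameterizing it (OPENQUERY only accepts a literal string) is pass-through execution with EXECUTE ... AT. A sketch, again assuming a hypothetical linked server REMOTESRV; note this requires the linked server's RPC Out option to be enabled:

```sql
-- Enable pass-through execution once (requires ALTER permission):
EXEC sp_serveroption 'REMOTESRV', 'rpc out', 'true';

-- The ? placeholder is bound to the parameter and the whole
-- statement runs on the remote server, so the remote optimizer
-- can use its own indexes and only filtered rows come back.
DECLARE @cutoff date = '2021-01-01';
EXEC ('SELECT OrderID, OrderDate
       FROM SalesDB.dbo.Orders
       WHERE OrderDate >= ?;', @cutoff) AT REMOTESRV;
```

Either way, the point stands: the filtering, the indexing, and the statistics all have to be right on the remote end.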
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 30, 2021 at 1:29 pm
Another thing is, what is the allowable latency for the data? Does it actually have to be up-to-the-second accurate, or is it something that you can download once overnight and use locally for the rest of the day?
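If overnight latency is acceptable, the refresh can be as simple as a scheduled job that rebuilds a local snapshot table. A minimal sketch, assuming the hypothetical REMOTESRV linked server and a local table dbo.Orders_Snapshot that mirrors the remote columns (e.g. run nightly from a SQL Server Agent job):

```sql
-- Nightly refresh of a local copy of the remote data.
TRUNCATE TABLE dbo.Orders_Snapshot;

INSERT INTO dbo.Orders_Snapshot (OrderID, CustomerID, OrderDate)
SELECT OrderID, CustomerID, OrderDate
FROM OPENQUERY(REMOTESRV,
    'SELECT OrderID, CustomerID, OrderDate
     FROM SalesDB.dbo.Orders');
```

Daytime queries then read dbo.Orders_Snapshot with ordinary local indexes and never touch the linked server.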
--Jeff Moden
Change is inevitable... Change for the better is not.
March 31, 2021 at 12:30 am
Dear Jo Pattyn, I used a linked server. I changed the provider settings, but there seems to be no change at all. Thanks!
March 31, 2021 at 12:42 am
Thank you Sergiy for your kind help!
March 31, 2021 at 12:58 am
Thanks for your kind and patient help over a long time, Grant Fritchey!
March 31, 2021 at 1:01 am
If the data is too big, do we need to use SQL replication to sync the data to the local server, or is there a better way to sync it? Thanks!