September 12, 2013 at 11:29 am
I have a query that will ultimately return several million records. The query itself is too much for the memory on the particular server it's on; however, when we used the Import/Export Wizard to push the results to a table, it zips right along. Can anyone explain why that is more efficient? The wizard is using the same query as its source, and indexes are on the join fields.
Thanks
Crusty
September 12, 2013 at 1:53 pm
CptCrusty1 (9/12/2013)
I have a query that will ultimately return several million records. The query itself is too much for the memory on the particular server it's on;
I'm not really sure what you mean by "too much for the memory on the particular server it's on". If you are running the query in a window in SSMS then it is using memory; if you are dumping it out to a file or into another table then it is using disk storage.
The fastest way to move a very big data set from one database to another database on a different server is bulk copy (BCP) over the network. Without more information about what you are trying to do, I really can't comment any further.
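To illustrate the "dumping it into another table" option: materializing the result server-side keeps the rows out of the client's memory entirely. This is only a sketch — the table and column names below are hypothetical, not from the original post.

```sql
-- Sketch only: table and column names are hypothetical.
-- SELECT ... INTO creates the target table on the server and writes the
-- rows there, so nothing has to be buffered in the SSMS results grid.
SELECT a.KeyCol, a.Col1, b.Col2
INTO   dbo.StagingResults
FROM   dbo.BigTableA AS a
LEFT JOIN dbo.BigTableB AS b
       ON b.KeyCol = a.KeyCol;
```

For a cross-server move, the same idea applies: land the rows in a table first, then BCP that table over the network rather than streaming the query result to a client tool.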
The probability of survival is inversely proportional to the angle of arrival.
September 12, 2013 at 1:57 pm
I have a query that pulls from 4 tables. Each table is very large, over 1 million records each. I need to combine them with left joins; however, when I add the last table to the query, it runs literally all night, and when I kill it, it reports that the system is out of memory. A colleague tried the Import wizard, set my SQL as the source, pointed it at the destination table, and it worked... Why would it work that way with the same SQL, but not in SSMS?
September 12, 2013 at 2:51 pm
Okay, now I understand what you mean. For one thing, a different query plan could be used in the Import wizard run because the query executes on the remote server. Since the data is pulled over the connection rather than stuffed into memory buffers to be displayed in the SSMS grid, the memory requirement is lower ... it's sort of distributed across two servers.
Remember too that if SQL Server is configured to use most of the RAM on the server, whatever is left is shared by the OS and your applications (i.e., SSMS), so it can easily run out of memory for really huge result sets.
Does each server have the same amount of RAM, and the same amount allocated to SQL Server? Try reducing the amount of memory allocated to SQL Server (to free up some RAM) and try it again.
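If it helps, capping SQL Server's memory is a two-line change. This is a sketch — 4096 MB is an illustrative value, not a recommendation; pick a cap based on what the OS and other processes on your box need.

```sql
-- Sketch: limit SQL Server's memory so the OS and client tools
-- (like SSMS) keep some RAM. The 4096 MB value is illustrative only.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```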
The probability of survival is inversely proportional to the angle of arrival.
September 12, 2013 at 4:21 pm
Each server is a VM with memory allocated by the Sys Admin; I can ask them to bump it up. There are 4 servers on this VM host, with a total of 192 GB of RAM available, and I don't think we're using half of it.
September 13, 2013 at 8:26 am
Yeah, you need to determine:
1) how much memory is available on your server (or VM)
2) what other processes or programs are running on that server
3) the max (and min) amount of memory to allocate to the SQL Server instance, based on 1) and 2) above
4) if you have two instances of SQL Server on the same machine, you need to consider each independently so they don't starve each other or cause paging
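Items 1) and 3) above can be checked from a query window. A minimal sketch, using standard DMVs (available in SQL Server 2008 and later):

```sql
-- Physical memory on the box, and how much is currently free.
SELECT total_physical_memory_kb / 1024     AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb
FROM   sys.dm_os_sys_memory;

-- What this instance is currently configured to use.
SELECT name, value_in_use
FROM   sys.configurations
WHERE  name IN ('max server memory (MB)', 'min server memory (MB)');
```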
The probability of survival is inversely proportional to the angle of arrival.
September 13, 2013 at 1:54 pm
4 tables of between 1M and 2M rows shouldn't take much time to run, except on a very limited server.
There must be something in the query that is causing some undesired side effect: a CROSS JOIN, etc.
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
September 13, 2013 at 2:18 pm
The other 3 tables are left joined to the same field in the first table, and all the join fields have indexes.
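For reference, a sketch of the shape described (table and column names are hypothetical). If any join predicate is missing, or a WHERE clause on a right-side column effectively turns a LEFT JOIN into a cross join, the row count can explode — which would match the all-night runtime:

```sql
-- Sketch of the described query shape: three large tables LEFT JOINed
-- to the same key in the first table. Names are hypothetical.
SELECT t1.KeyCol, t2.ColA, t3.ColB, t4.ColC
FROM   dbo.Table1 AS t1
LEFT JOIN dbo.Table2 AS t2 ON t2.KeyCol = t1.KeyCol
LEFT JOIN dbo.Table3 AS t3 ON t3.KeyCol = t1.KeyCol
LEFT JOIN dbo.Table4 AS t4 ON t4.KeyCol = t1.KeyCol;
```

Also note: if KeyCol is not unique in the joined tables, each join multiplies the row count, so "1 million rows per table" can easily become tens of millions of result rows.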