July 4, 2014 at 11:30 am
Hello,
We have been facing poor performance while inserting records using a MERGE statement in SQL Server 2012 Standard Edition.
We are using the Salesforce data loader to download the data and UPSERT it into the SQL Server database using a JDBC driver.
The RAM is 16 GB and it is a VMware virtual platform with dual 1.87 GHz Intel Xeon(R) E7-4807 processors.
MAXDOP is set to 0 and cost threshold for parallelism to 0.
Any specific pointers on the tuning or configuration side would be useful.
Any help is appreciated.
Thank You.
July 4, 2014 at 12:04 pm
First suggestion: load the data to a staging table first. Then, once the data is local to the database, UPSERT it to its final destination.
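As a rough sketch of that pattern (table, column, and file names here are just placeholders, not the OP's actual schema):

```sql
-- 1. Load the raw feed into a local staging table first
TRUNCATE TABLE dbo.Staging_Account;

BULK INSERT dbo.Staging_Account
FROM 'C:\feeds\account.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2, TABLOCK);

-- 2. Then UPSERT from staging to the final table in one set-based statement
MERGE dbo.Account AS tgt
USING dbo.Staging_Account AS src
    ON tgt.AccountId = src.AccountId
WHEN MATCHED THEN
    UPDATE SET tgt.Name = src.Name, tgt.ModifiedDate = src.ModifiedDate
WHEN NOT MATCHED BY TARGET THEN
    INSERT (AccountId, Name, ModifiedDate)
    VALUES (src.AccountId, src.Name, src.ModifiedDate);
```

The point is that the row-by-row chatter over JDBC is replaced by one bulk load plus one set-based UPSERT.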
July 4, 2014 at 5:14 pm
Pulivarthi Sasidhar (7/4/2014)
Hello,We have been facing poor performance while inserting records using MERGE statment in SQL Server 2012 standard edition.
We are using dataloader of salesforce to download the data and UPSERT into SQL server database [font="Arial Black"]using JDBC driver.[/font]
The RAM is 16 GB and its a VM ware virtual platform with dual 1.87 GHz Intel Xeon(R) E7-4807.
MAX DOP is set up to 0 and cost threshold for parallelism to 0.
Any specific pointers on tunning or configuration aspect would be useful.
Any help is appreciated?
Thank You.
The bolded text above is the likely problem and sending it to a staging table won't actually help the overall scheme of things using this design.
Attach a file with the first ten lines of data (assuming there's nothing private or proprietary in the file) and we might be able to show you a better way.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 4, 2014 at 8:18 pm
Jeff Moden (7/4/2014)
Pulivarthi Sasidhar (7/4/2014)
Hello,We have been facing poor performance while inserting records using MERGE statment in SQL Server 2012 standard edition.
We are using dataloader of salesforce to download the data and UPSERT into SQL server database [font="Arial Black"]using JDBC driver.[/font]
The RAM is 16 GB and its a VM ware virtual platform with dual 1.87 GHz Intel Xeon(R) E7-4807.
MAX DOP is set up to 0 and cost threshold for parallelism to 0.
Any specific pointers on tunning or configuration aspect would be useful.
Any help is appreciated?
Thank You.
The bolded text above is the likely problem and sending it to a staging table won't actually help the overall scheme of things using this design.
Attach a file with the first ten lines of data (assuming there's nothing private or proprietary in the file) and we might be able to show you a better way.
That was part of the reason I suggested loading a staging table first. If you are doing a straight load with no additional processing it may help.
July 4, 2014 at 10:27 pm
1) There are about a bajillion things that could be at play here.
2)
cost threshold for parallelism to 0.
THAT is a WAY WAY WAY bad thing to do!!! One of the times I have most amazed a client was on a BUSY machine that had that setting just like yours. When I changed it from 0 to 20, the CPU on the box made a SIGNIFICANT step-change down immediately and the responsiveness of the box went WAY up!! I leave it to the inquisitive reader to determine why zero is a bad setting for CTFP.
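For anyone following along, the change itself is a short sp_configure call (20 here is just the starting value mentioned above; the right number depends on your workload):

```sql
-- 'cost threshold for parallelism' is in estimated query cost units, not seconds
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 20;
RECONFIGURE;
```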
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
July 5, 2014 at 12:38 am
Lynn Pettis (7/4/2014)
Jeff Moden (7/4/2014)
Pulivarthi Sasidhar (7/4/2014)
Hello,We have been facing poor performance while inserting records using MERGE statment in SQL Server 2012 standard edition.
We are using dataloader of salesforce to download the data and UPSERT into SQL server database [font="Arial Black"]using JDBC driver.[/font]
The RAM is 16 GB and its a VM ware virtual platform with dual 1.87 GHz Intel Xeon(R) E7-4807.
MAX DOP is set up to 0 and cost threshold for parallelism to 0.
Any specific pointers on tunning or configuration aspect would be useful.
Any help is appreciated?
Thank You.
The bolded text above is the likely problem and sending it to a staging table won't actually help the overall scheme of things using this design.
Attach a file with the first ten lines of data (assuming there's nothing private or proprietary in the file) and we might be able to show you a better way.
That was part of the reason I suggested loading a staging table first. If you are doing a straight load with no additional processing it may help.
Understood. Admittedly, it was several years ago, but it took a Java "straight load" more than 15 minutes to load a lousy 1.2 million rows into a staging table with no data cleansing, validation, or anything else. It took BULK INSERT 51 seconds flat and that included datatype/length checking, bad row sequestration, and some other goodies such as validation of column order by field name.
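A minimal sketch of that kind of load (the file path, table, and error-file names are illustrative, not from the original job):

```sql
-- BULK INSERT does datatype/length checking as it loads;
-- MAXERRORS plus ERRORFILE sequesters bad rows instead of failing the whole batch
BULK INSERT dbo.Staging_Account
FROM 'C:\feeds\account.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2,          -- skip the header row
    MAXERRORS       = 100,
    ERRORFILE       = 'C:\feeds\account.err',
    TABLOCK                       -- allows minimal logging under the right recovery model
);
```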
--Jeff Moden
Change is inevitable... Change for the better is not.
July 10, 2014 at 4:14 am
I still have not found a solution.
I tried the following configuration values several times:
Degree of parallelism set to 2
Degree of parallelism set to 1
Degree of parallelism set to 0
Max worker threads set to 512 and 1024
Cost threshold for parallelism set to 5 and 0
and
Properties -> Processors -> deselected the affinity masks and selected NUMA (both CPUs)
Can you please help me get SQL Server to use both CPUs? It is utilising a single CPU only.
Thanks,
SP
July 10, 2014 at 4:30 am
How many rows are we talking about here?
Why are you using a JDBC driver?
What tool are you using?
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP
July 10, 2014 at 6:07 am
I have a staging server which will hold the Salesforce replication data.
Using the data loader, the batch replication starts. We are testing with a single object with 50K records.
When we looked in Task Manager, SQL Server's CPU usage was not more than 50%; rarely it showed 53, 54 or 57%.
My target is for SQL Server to utilise the maximum number of available CPUs (currently I have only 2).
Thanks,
SP
July 11, 2014 at 4:49 am
One of the first things I would check is the "power options" configuration of the server.
It has to be set to High Performance to allow the server to fully use all the cores, thus enabling SQL Server to do the same.
Regards
July 11, 2014 at 10:29 am
Not a solution but something to look into...
MERGE will request a HASH join. Run the following when the query is running to see how much memory it is requesting.
SELECT * FROM sys.dm_exec_requests WHERE command = 'MERGE'
Ensure that Lock Pages in Memory is configured; this is important on a VM.
If you want to experiment with parallelism use OPTION (MAXDOP n).
I normally end up refactoring MERGE statements into INSERT/UPDATE. Consider this as part of your tuning.
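As a rough sketch of that refactor (names are illustrative), the UPDATE-then-INSERT equivalent of a simple MERGE looks like:

```sql
BEGIN TRANSACTION;

-- Update rows that already exist in the target
UPDATE tgt
SET    tgt.Name         = src.Name,
       tgt.ModifiedDate = src.ModifiedDate
FROM   dbo.Account AS tgt
JOIN   dbo.Staging_Account AS src
  ON   src.AccountId = tgt.AccountId;

-- Insert rows that do not
INSERT dbo.Account (AccountId, Name, ModifiedDate)
SELECT src.AccountId, src.Name, src.ModifiedDate
FROM   dbo.Staging_Account AS src
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Account AS tgt
                   WHERE tgt.AccountId = src.AccountId);

COMMIT TRANSACTION;
```

Wrapping both statements in one transaction keeps the two halves consistent with each other, which is the main thing MERGE was giving you for free.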
July 14, 2014 at 2:20 am
Hello All...
Thanks for your valuable inputs. I called a friend of mine who works in performance tuning and explained the complete process to him. He gave me good inputs and it worked for me.
I have now replaced the MERGE with a stored procedure, passing all columns as parameters, and replaced the MERGE with plain SQL statements.
Inside the SP, Option 1:
Created a table variable
Inserted the incoming row into it
Updated matched records on the target table
Deleted matched records from the table variable
Inserted the remaining table-variable records into the target table
Inside the SP, Option 2:
The same MERGE statement
It has given good throughput, and it is consistent.
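If anyone wants to see the shape of Option 1, here is a minimal sketch; the table, column, and parameter names are placeholders, not the actual Salesforce schema:

```sql
CREATE PROCEDURE dbo.Upsert_Account
    @AccountId    varchar(18),
    @Name         varchar(255),
    @ModifiedDate datetime
AS
BEGIN
    SET NOCOUNT ON;

    -- 1. Create a table variable and insert the incoming row
    DECLARE @t TABLE (AccountId varchar(18), Name varchar(255), ModifiedDate datetime);
    INSERT @t VALUES (@AccountId, @Name, @ModifiedDate);

    -- 2. Update matched records on the target table
    UPDATE tgt
    SET    tgt.Name = src.Name, tgt.ModifiedDate = src.ModifiedDate
    FROM   dbo.Account AS tgt
    JOIN   @t AS src ON src.AccountId = tgt.AccountId;

    -- 3. Delete matched records from the table variable
    DELETE src
    FROM   @t AS src
    JOIN   dbo.Account AS tgt ON tgt.AccountId = src.AccountId;

    -- 4. Insert the remaining rows into the target table
    INSERT dbo.Account (AccountId, Name, ModifiedDate)
    SELECT AccountId, Name, ModifiedDate
    FROM   @t;
END;
```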
Why is SQL Server not using multiple cores when the MERGE is called from JDBC, yet it utilises multiple cores with the SP?
Thanks,
SP