February 26, 2010 at 11:29 am
cjones-956169 (2/26/2010)
From the perfmon counters, it looks like you are doing 1 insert per transaction. Transactions/sec: 1921 (for the SQL database)
Would it be possible for you to batch several inserts in a transaction?
Paul Randal has some recent blog posts about how batching can DRAMATICALLY improve throughput (both the tlog writes and, as a result, the inserts themselves). I think Linchi Shea may have done this type of analysis long ago as well. This is what I was speaking about when I mentioned logging as one of the hindrances to insert performance.
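The batching idea above can be sketched in T-SQL. This is a minimal illustration, not the OP's actual schema: the table and column names (dbo.SensorData, ID, ReadingTime, Value) are hypothetical.

```sql
-- Wrapping many inserts in one explicit transaction amortizes the
-- transaction-log flush cost across the whole batch, instead of paying
-- one log flush per single-row autocommit transaction.
BEGIN TRANSACTION;

INSERT INTO dbo.SensorData (ID, ReadingTime, Value)
VALUES (1, '2010-02-26T11:29:00.003', 98.6);

INSERT INTO dbo.SensorData (ID, ReadingTime, Value)
VALUES (1, '2010-02-26T11:29:00.507', 98.7);

-- ... many more rows in the same batch ...

COMMIT TRANSACTION;
```

The same principle applies from a client app: group several hundred inserts per commit rather than committing each row.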
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 26, 2010 at 8:54 pm
Matt Miller already covered batching with the OP on the first page. The OP is looking into it.
-- RBarryYoung, (302)375-0451
blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
February 27, 2010 at 6:10 am
Two small observations:
1. You're passing a unique identifier to the procedure that isn't used
2. The clustered index you have defined is likely causing many page splits
Paul
March 3, 2010 at 11:43 am
SqlBulkCopy throws a primary key violation exception, but there is no primary key issue; I mean, all the records are unique.
All ideas are welcomed.
Thanks.
March 3, 2010 at 11:49 am
Have you tried importing the data into a table that does not have the primary key defined and then adding the PK afterward? I have found that this actually helps find the duplicate records, as SQL Server will tell you where the duplicate record is in this case.
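A sketch of that approach, assuming a hypothetical staging table dbo.SensorData_Staging with the same (ID, ReadingTime) key columns discussed later in the thread:

```sql
-- Bulk load into a heap with no PK, then look for duplicates
-- before adding the constraint.
SELECT ID, ReadingTime, COUNT(*) AS cnt
FROM dbo.SensorData_Staging
GROUP BY ID, ReadingTime
HAVING COUNT(*) > 1;

-- If the query above returns nothing, add the PK; if duplicates
-- exist, this ALTER fails and names the offending key value.
ALTER TABLE dbo.SensorData_Staging
    ADD CONSTRAINT PK_SensorData_Staging
    PRIMARY KEY CLUSTERED (ID, ReadingTime);
```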
If you are getting PK violations, there appears to be duplicate data.
March 3, 2010 at 11:59 am
I will surely try that, but what surprises me is that when I tried a different machine with the same database and the same setup, the record insertion works perfectly.
March 3, 2010 at 1:07 pm
My table has a combined primary key (a composite key of ID and datetime). It seems SqlBulkCopy throws the exception because it is not checking to millisecond precision; it throws the exception if I try to insert a record with the same ID within the same second.
Any idea how I can make the command check the datetime primary key down to the millisecond and not throw the exception? Has anyone tried SqlBulkCopy with a composite key?
Thanks.
March 3, 2010 at 2:04 pm
mansshah (3/3/2010)
My table has a combined primary key (a composite key of ID and datetime). It seems SqlBulkCopy throws the exception because it is not checking to millisecond precision; it throws the exception if I try to insert a record with the same ID within the same second. Any idea how I can make the command check the datetime primary key down to the millisecond and not throw the exception? Has anyone tried SqlBulkCopy with a composite key?
Thanks.
1) If ID is an identity, do you need datetime as well for the PK? Does this potentially allow duplicates?
2) Are you explicitly stating the time down to the millisecond for every inserted row, or using getdate()? If the latter, that is possibly your problem. If so, maybe you can send in each row in its own insert statement, at a huge cost in performance. Also note it is easy to get sub-millisecond activity that will STILL give a PK violation, since datetime precision is 3.33ms IIRC. Not a very good design...??
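The rounding behavior behind that 3.33ms point can be seen directly. The datetime type stores times only in increments of .000, .003, or .007 seconds, so two timestamps that differ at the millisecond level can collide after rounding:

```sql
-- datetime rounds to the nearest .000/.003/.007 increment:
-- .001 rounds down to .000, while .002 rounds up to .003.
SELECT CAST('2010-03-03 13:07:00.001' AS datetime) AS t1,
       CAST('2010-03-03 13:07:00.002' AS datetime) AS t2;
```

Two client-side timestamps within about 3ms of each other can therefore round to the identical stored value and trigger a PK violation.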
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
March 3, 2010 at 3:16 pm
TheSQLGuru (3/3/2010)
mansshah (3/3/2010)
My table has a combined primary key (a composite key of ID and datetime). It seems SqlBulkCopy throws the exception because it is not checking to millisecond precision; it throws the exception if I try to insert a record with the same ID within the same second. Any idea how I can make the command check the datetime primary key down to the millisecond and not throw the exception? Has anyone tried SqlBulkCopy with a composite key?
Thanks.
1) If ID is an identity, do you need datetime as well for the PK? Does this potentially allow duplicates?
2) Are you explicitly stating the time down to the millisecond for every inserted row, or using getdate()? If the latter, that is possibly your problem. If so, maybe you can send in each row in its own insert statement, at a huge cost in performance. Also note it is easy to get sub-millisecond activity that will STILL give a PK violation, since datetime precision is 3.33ms IIRC. Not a very good design...??
1) Yes, I need the datetime along with the ID.
2) I am getting the time from DateTime.Now and just inserting it into the data source. My sensor data may change as often as twice per second, and my app is designed accordingly.
Performance is what I was looking for, so there is no way of going back to row-by-row inserts, as there would be a big queue on my thread because of the slow insertion.
March 3, 2010 at 5:46 pm
What is the precision of DateTime.Now??
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
March 4, 2010 at 9:48 am
TheSQLGuru (3/3/2010)
What is the precision of DateTime.Now??
In milliseconds.
March 4, 2010 at 10:11 am
Simple answer: having a time value in your PK will ALWAYS leave you at risk of PK violations.
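One common workaround for this risk, sketched with the same hypothetical names used above: make an IDENTITY column the primary key so it can never collide, and keep (ID, ReadingTime) as an ordinary index for queries. This is an illustrative design, not the OP's confirmed schema.

```sql
-- The IDENTITY tiebreaker guarantees uniqueness regardless of how
-- coarsely datetime rounds the timestamps.
CREATE TABLE dbo.SensorData
(
    RowID       bigint IDENTITY(1,1) NOT NULL,
    ID          int      NOT NULL,
    ReadingTime datetime NOT NULL,
    Value       float    NOT NULL,
    CONSTRAINT PK_SensorData PRIMARY KEY CLUSTERED (RowID)
);

-- Non-unique index supporting lookups by sensor and time.
CREATE NONCLUSTERED INDEX IX_SensorData_ID_Time
    ON dbo.SensorData (ID, ReadingTime);
```

As a side benefit, the ever-increasing clustered key also avoids the page splits mentioned earlier in the thread.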
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
March 19, 2010 at 6:38 am
I would use SqlBulkCopy() to get a buffer full of data into a temporary table, then I would call a stored procedure that performs the necessary work to move the data from the temp table to the real table using a set-based insert.
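A sketch of that staging pattern, with illustrative object names: SqlBulkCopy targets the staging table, and a procedure then moves the rows set-based, skipping any rows whose key already exists in the real table.

```sql
CREATE PROCEDURE dbo.MoveStagedSensorData
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- Set-based insert; NOT EXISTS filters out rows that would
    -- violate the (ID, ReadingTime) primary key.
    INSERT INTO dbo.SensorData (ID, ReadingTime, Value)
    SELECT s.ID, s.ReadingTime, s.Value
    FROM dbo.SensorData_Staging AS s
    WHERE NOT EXISTS
    (
        SELECT 1
        FROM dbo.SensorData AS d
        WHERE d.ID = s.ID AND d.ReadingTime = s.ReadingTime
    );

    TRUNCATE TABLE dbo.SensorData_Staging;
    COMMIT TRANSACTION;
END;
```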
/sg