October 13, 2011 at 9:04 pm
We have a large table (60 million rows) which receives lots of inserts per second. I noticed that for some reason some of these inserts take about 500-700 milliseconds to execute. This table has foreign key relationships with some other large tables. When I removed those relationships, the performance improved dramatically. Will removing those relationships cause any problem?
Is there any other possible solution for this problem?
Regards
October 13, 2011 at 10:41 pm
From your question, I assume that this is similar to a data warehousing scenario where you have multiple dimensions and a central fact table tied together with relationship constraints. One thing is for sure: do not remove the relationships, since that may allow constraint violations and data integrity is lost as well.
I believe that when inserting, you should start with the table that has the fewest constraints and then work your way to the top. I think an insert into a table that has a foreign key relationship with another table waits until an insert conforming to the constraint has completed on the other table. As this granularity increases, so does the insert time on the primary table.
Try to work it backwards. Start from the least constrained table and then carry on with the inserts to the most constrained table.
Not sure if this is the best way to resolve your issue but for sure is worth a shot.
Cheers !!
October 14, 2011 at 1:02 am
Will removing that relationship cause any problem? Is there any other possible solution for this problem?
Please don’t remove the constraints. Just disable them until your data load finishes, then validate your data against business rules and enable them again.
How to: Disable Foreign Key Constraints with INSERT and UPDATE Statements
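A minimal sketch of that approach; the table name `dbo.Orders` and the constraint name `FK_Orders_Customers` are made up for illustration:

```sql
-- Disable the foreign key before the bulk load
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customers;

-- ... run the data load here ...

-- Re-enable AND revalidate the existing rows.
-- The double CHECK is intentional: WITH CHECK validates existing data,
-- the second CHECK CONSTRAINT turns the constraint back on.
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;
```

Note that re-enabling without WITH CHECK leaves the constraint marked untrusted, so the optimizer cannot rely on it.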
October 14, 2011 at 9:04 am
I agree with "Dev".
Though my suggestion is more of a workaround that avoids disabling the constraints during the data load, I think "Dev"'s suggestion seems to be a right fit too.
You could also consider using an SSIS package and unchecking the "Check constraints" option.
Cheers !!
October 15, 2011 at 11:27 pm
If we are talking about pure INSERT statements of the normal OLTP type (not data warehousing), without any dependent SELECT statements from other tables, we could come up with the enhancements below:
1. Put a nonclustered index on the foreign key column.
2. Make sure there are no large indexes that could impact the performance of all DML statements.
3. It is much preferred for PKs to use numeric data types like int, float, etc., and even more preferred to be an identity column, where a significant DML performance gain can be had.
4. Make sure fragmentation of clustered indexes stays below 30% at most; if that is exceeded, schedule more frequent index rebuild jobs.
5. Implement schema partitioning on an appropriate column (much preferred to be the PK, so as not to lose the storage-aligned concept of partitioning).
6. Make sure the partitioning is built on multiple filegroups whose files can be spread across separate, isolated SAN LUNs or disk arrays.
7. This can significantly enhance parallel I/O processing and the performance of all DML statements as well.
8. Then configure SQL alerts on Page latch waits/sec > 20 and Page latch wait time (msec) > 20 msec on average to further monitor DML performance.
9. All of the above, besides making sure your I/O subsystem performs well, which is another big article (if you need more about it, let me know).
Otherwise, if you have any dependent SELECT statements, please post them so I can figure out the appropriate indexes for them.
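For points 1 and 4 above, a sketch of what that looks like (the table and column names `dbo.Orders` / `CustomerID` are hypothetical):

```sql
-- Point 1: nonclustered index on the foreign key column of the child table
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID);

-- Point 4: check clustered index fragmentation;
-- rebuild when avg_fragmentation_in_percent exceeds ~30%
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'),
                                    NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id;
```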
[font="Arial Black"]Performance Guard[/font]
[font="Courier New"]Just ask me to get 0 sec for all queries[/font]
October 16, 2011 at 12:34 am
Performace Guard (Shehap) (10/15/2011)
5.Implement a schema partitioning on an appropriate field (Much preferred to be PK not to lose storage aligned concept of partitioning )
Partitioning is Enterprise only, and typically not for performance. It's mostly for ease of archiving, data removal and the like.
6.Assure that schema partitioning is built on multiple File groups where their files could be dispensed in different isolated SAN LUNs or disk arrays
That's not a trivial change and should only be done once there is identified IO subsystem contention that can't be fixed by tuning queries or indexes.
Seeing as this is a large number of inserts which (if the partitioning is on an identity clustered key) will all go to the most recent partition, this is probably not the best approach for this particular situation.
7.This basically will enhance significantly I/O parallel processing and all DML statements performance as well.
No, it won't necessarily do that. Unless there is IO contention and said contention has been isolated to this operation, scaling out the IO subsystem will achieve little to nothing.
8.Then configure SQL alerts Page latch waits/sec > 20 and Page latch waits time (msec) = 20 msec as average to monitor more DML performance
Page Latch waits have nothing to do with IO. Where did those 20ms and 20/sec thresholds come from?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 16, 2011 at 12:37 am
Aspet Golestanian Namagerdi (10/13/2011)
Is there any other possible solution for this problem?
Maybe, but the first thing we need to do is identify a cause.
Can you post an execution plan of one of the slower inserts?
Do you have indexes on the foreign key columns?
Can you identify the wait type that these queries incur? (sys.dm_exec_requests or sys.dm_os_waiting_tasks)
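For example, something along these lines shows the current wait types of running requests (run it while the slow inserts are executing; the session_id filter is just a common way to skip system sessions):

```sql
-- What each active request is currently waiting on
SELECT r.session_id, r.status, r.command,
       r.wait_type, r.wait_time, r.last_wait_type
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50;

-- Task-level view, including who is blocking whom
SELECT wt.session_id, wt.wait_type,
       wt.wait_duration_ms, wt.blocking_session_id
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.session_id > 50;
```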
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 16, 2011 at 1:21 am
Thanks for your reply.
All the SQL statements in the SP run very fast (0 ms). In the SP I have two insert statements that also run fast, but sometimes after their execution there is a delay of 10-200 ms. How can I check what this delay is? First I thought there was a page split, but when I checked the indexes I didn't have any fragmentation. Then I thought the delay was because of the existence of the foreign key, which was wrong.
thanks
October 16, 2011 at 4:20 am
GilaMonster... Really, I wonder at your comments..!!
Please read more about schema partitioning, SQL alerts, the benefits of each, and relevant case studies.
[font="Arial Black"]Performance Guard[/font]
[font="Courier New"]Just ask me to get 0 sec for all queries[/font]
October 16, 2011 at 7:22 am
Performace Guard (Shehap) (10/16/2011)
GilaMonster... Really, I wonder at your comments..!! Please read more about schema partitioning, SQL alerts, the benefits of each, and relevant case studies.
Gail certainly doesn't need my help, she's got a sterling reputation in the SQL server community when it comes to performance and disaster recovery.
By your nickname [Performace Guard (Shehap)] (misspelled "Performance", by the way), I can infer you are familiar with performance issues. Just as Gail stated, though, partitioning methodologies are a lot lower on the list of items to review first.
1. The queries themselves,
2. whether their arguments are SARGable,
3. whether there are proper indexes, and
4. whether there are up-to-date statistics are all the first things I personally try to look at when reviewing performance.
The original poster's other thread is much more revealing about the performance problem.
In the OP's other thread, he identified that he's calling RBAR code from C#, which is constructing a stored proc call for each row of data, processing it in a procedure, and returning a pair of values to his application via output variables.
Because he's doing it for lots of rows, he's seeing a performance and I/O issue from the calls back and forth to the server for each row.
If he changes the logic to process the dataset as a set-based operation (which does whatever his proc did row by row), he'll resolve the issue.
The items you identified certainly have merit, but Gail identified the recommended way to review the performance issue, and is certainly starting at the top with the most common issues to review first, which is where you get 90% of the needed performance enhancements.
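To illustrate the set-based rewrite (the proc, table and column names here are hypothetical, since the OP's actual schema is in the other thread):

```sql
-- RBAR pattern: the application calls something like this once per row
-- EXEC dbo.InsertOrder @CustomerID = ..., @Amount = ..., @NewID OUTPUT;

-- Set-based alternative: stage all the rows, then insert them in one statement,
-- returning the generated keys as a single result set instead of output parameters
INSERT INTO dbo.Orders (CustomerID, Amount)
OUTPUT inserted.OrderID, inserted.CustomerID
SELECT s.CustomerID, s.Amount
FROM dbo.OrderStaging AS s;
```

One round trip to the server replaces one round trip per row, which is usually where the time goes in this pattern.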
Lowell
October 16, 2011 at 9:05 am
Aspet Golestanian Namagerdi (10/16/2011)
All the SQL statements in the SP run very fast (0 ms). In the SP I have two insert statements that also run fast, but sometimes after their execution there is a delay of 10-200 ms. How can I check what this delay is? First I thought there was a page split, but when I checked the indexes I didn't have any fragmentation. Then I thought the delay was because of the existence of the foreign key, which was wrong.
Do you have any triggers?
Can you get the things that I asked about earlier?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 16, 2011 at 9:08 am
Hi Lowell..
First..Thanks for your hint ..
Actually, I agree with you on most of the comments.
But what I have to say here is that we work here as a community to help people, like other big communities such as SQL Server Performance, where everyone contributes his capital of recommendations which might match the current case. Cases like this look generic until we have more details about the schema design, server specs, data entity size, I/O subsystem type (SAN or local storage), etc.
Therefore, I tried to enumerate a good checklist of recommendations, ordered by the most important ones, as above.
Again , thanks Lowell for your comments
[font="Arial Black"]Performance Guard[/font]
[font="Courier New"]Just ask me to get 0 sec for all queries[/font]
October 16, 2011 at 9:48 am
GilaMonster (10/16/2011)
Aspet Golestanian Namagerdi (10/16/2011)
All the SQL statements in the SP run very fast (0 ms). In the SP I have two insert statements that also run fast, but sometimes after their execution there is a delay of 10-200 ms. How can I check what this delay is? First I thought there was a page split, but when I checked the indexes I didn't have any fragmentation. Then I thought the delay was because of the existence of the foreign key, which was wrong.
Do you have any triggers?
Can you get the things that I asked about earlier?
Two other things that just came to mind...
Do you have autoshrink enabled?
What are your file autogrow settings?
Are you getting autogrow events? (check the default trace)
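One way to check the default trace for autogrow events (assuming the default trace is enabled, which it is out of the box):

```sql
-- Locate the default trace file
DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

-- Event classes 92 and 93 are Data File Auto Grow and Log File Auto Grow
SELECT t.StartTime,
       t.DatabaseName,
       t.FileName,
       t.Duration / 1000 AS duration_ms,
       e.name            AS event_name
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS e
  ON e.trace_event_id = t.EventClass
WHERE t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;
```

If this shows frequent growth events during the slow inserts, the intermittent 10-200 ms delays may simply be file growth.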
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 16, 2011 at 11:31 am
I copied a backup of the production database to my local machine and ran the SQL process (having a loop, with that insert statement inside the loop), and it ran pretty fast. The only difference between my SQL Server and production is that I have SQL Server 2008 R2 Standard Edition (SP1) and in production we have SQL Server 2008 R2 Enterprise Edition. Is it possible that I need to install SP1 on the production environment?
October 16, 2011 at 12:41 pm
You should always be on the latest service pack, but, best practices aside, it's unlikely to be the cause of the problem. Production has extra load, lots of users.
I've asked for a whole bunch of information. Can you please answer some of my questions?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability