August 10, 2009 at 2:34 am
>I have an app that INSERTs 50 rows per second, from which I measure the insert time.
>Each insert uses its own connection (open connection, insert row, close connection), i.e. 50 connections per second.
>I have users running large date-range queries against the same table as the inserts; when those queries execute, my insert times increase.
I'm trying to understand how SQL Server handles these tasks.
I understand the following:
1) Shared Locks (used when reading data, compatible with other shared or update locks)
2) Update Locks (taken when a shared lock is in place but an exclusive lock will be required for the update)
3) Exclusive Locks (used for INSERT, UPDATE, and DELETE; not compatible with the locks above)
If the above is true, my INSERT connections are taking an EXCLUSIVE lock on my table.
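To check this, I was thinking of catching an insert mid-flight and looking at sys.dm_tran_locks from a second session, roughly like this (just a sketch; dbo.MyTable and its columns are placeholders for my real table, and the session id has to be filled in by hand):

-- Session 1: start an insert and keep the transaction open so its locks stay visible
BEGIN TRANSACTION;
INSERT INTO dbo.MyTable (LoggedAt, Payload)   -- placeholder table/columns
VALUES (GETDATE(), 'test row');
-- (leave this open, check from session 2, then COMMIT)

-- Session 2: list the locks that session is holding
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = 52;   -- replace 52 with session 1's spid
-- I'd expect IX (intent exclusive) locks on the object/page and an X lock on the new
-- KEY or RID, rather than an exclusive lock on the whole table.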
So when a user's read query hits this same table, does it:
a) start, and block my inserts until it completes (this would seem unlikely, as the EXCLUSIVE locks used by INSERTs appear to take "priority" over the SHARED locks used for SELECT/read queries), or
b) my SELECT/reads are "patchy", i.e. they start and stop in between the INSERTs, and the INSERTs take priority because they are EXCLUSIVE.
I guess what I'm trying to understand is what takes priority, or is it simply chronological?
I'm trying to build an efficient INSERT process (I understand batching inserts in one transaction may help; I've put a rough sketch of what I mean below).
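For the batched version I had something like this in mind: one open connection and one transaction per batch, instead of one connection per row (sketch only; dbo.MyTable and its columns are placeholders):

BEGIN TRANSACTION;
INSERT INTO dbo.MyTable (LoggedAt, Payload) VALUES (GETDATE(), 'row 1');
INSERT INTO dbo.MyTable (LoggedAt, Payload) VALUES (GETDATE(), 'row 2');
-- ...the rest of the rows collected over the last second...
COMMIT TRANSACTION;
-- the exclusive key locks are held until COMMIT, but the connection/transaction
-- overhead is paid once per batch instead of 50 times a second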
Thank you.
Scott
(SQL 2005 standard)
August 10, 2009 at 3:39 am
i.e. is this what happens if I have a constant flow of INSERTs and a user runs a read query?
INSERT (Exclusive Lock start/stops)
Read starts
Read paused
INSERT (Exclusive Lock start/stops)
Read continues
Read paused
INSERT (Exclusive Lock start/stops)
Read continues
Read stops
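I was planning to watch the waits while this is going on, to see who is actually queued behind whom, roughly like this (sketch only):

SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
-- if the read shows up waiting with an LCK_M_S wait type, it is queued behind an insert;
-- if an insert shows up waiting (LCK_M_IX / LCK_M_X), it is queued behind the read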