December 9, 2016 at 8:30 pm
More testing showed that simple parameterization alone would yield significant performance improvements, without hints like OPTION(RECOMPILE) that would require code changes.
But when I switched to the SQL statements as captured from production in Profiler (below), which take the form exec sp_executesql N'SELECT DISTINCT TOP 2000 ...', I wasn't getting that performance boost.
If I add OPTION(RECOMPILE), either query form uses the new index, runs multi-threaded, uses the two new filtered statistics, and doesn't time out during compilation. Running the plain SELECT without sp_executesql, I got all of that with just simple parameterization as the database setting (no OPTION(RECOMPILE)).
exec sp_executesql N'SELECT DISTINCT TOP 2000 CollateralGroupRequest02.SERVICE_REQUEST_ID AS PrimaryKey,Client18.SHORT_NAME AS ClientShortName,EeeReason19.DESCRIPTION AS EeeReason,AccountProperty110.MODIFIED_MANUFACTURER_ID AS VIN,Account15.CATEGORY AS AccountType,FollowupEntity111.ENTITY_CODE AS DealerID,FollowupEntity111.NORMALIZED_ENTITY_CODE AS FollowupEntity111_NORMALIZED_ENTITY_CODE60,Account15.FINANCED_DATE AS FinancedDate,Account15.BOOKED_DATE AS BookedDate,AccountProperty110.EXPECTED_TITLING_STATE_ABBR AS ExpectedTitlingState,BusinessUnit112.LONG_NAME AS BusinessUnit,BusinessUnit112.SHORT_NAME AS BusinessUnitId,CollateralGroupRequest02.SERVICE_REQUEST_STATUS AS Status,AccountOwnershipDocSummary16.STATUS AS AccountStatus,AccountOwnershipDocSummary16.BORROWER_FULL_NAMES AS BorrowerFullNames,Account15.CUSTOM_ATTRIBUTE_1 AS AccountNumber,Account15.CUSTOM_ATTRIBUTE_2 AS LoanNumber,Account15.CUSTOM_ATTRIBUTE_3 AS LoanSuffix,Account15.CUSTOM_ATTRIBUTE_4 AS Branch,CollateralGroupRequest02.EEE_DATE AS EEEDate,CollateralGroupRequest02.EEE_CAUSE AS CauseOfEEE,CollateralGroupRequest02.REQUEST_TRANSACTION_TYPE AS RequestType,Jurisdiction113.SHORT_NAME AS TMState,Account15.RECOVERY_STATUS AS RecoveryCode,Account15.USER_DEFINED_1 AS UserDef1,Account15.USER_DEFINED_2 AS UserDef2,Account15.USER_DEFINED_3 AS UserDef3,LienholderStatusCode114.STATUS_CODE AS LienholderStatus,AccountProperty110.VEHICLE_TYPE AS CollateralType,Account15.SUB_CATEGORY AS AccountSubType,(SELECT DerivedTable01_11 FROM (SELECT MIN(ReminderWorkItem02.REMIND_DATE) AS DerivedTable01_11 FROM WORK_QUEUE_ITEM AS ReminderWorkItem02 WHERE ((ReminderWorkItem02.ACCOUNT_ID = Account15.ACCOUNT_ID AND ReminderWorkItem02.BUSINESS_PROCESS_STATUS = @DerivedTable01_BUSINESS_PROCESS_STATUS421 AND ReminderWorkItem02.NAME = @DerivedTable01_NAME622 AND ReminderWorkItem02.SECURED_ORGANIZATIONORGANIZATION_ID = Account15.CLIENT_ID) AND ((ReminderWorkItem02.CONCRETE_TYPE IN ( @DerivedTable01_923))))) AS ScalarQueryTable) AS ReminderDate,(SELECT ExternalUser13_USERNAME13 FROM (SELECT TOP 1 ExternalUser13.USERNAME AS ExternalUser13_USERNAME13,ReminderWorkItem02.REMIND_DATE AS ReminderWorkItem02_REMIND_DATE1 FROM WORK_QUEUE_ITEM AS ReminderWorkItem02 INNER JOIN USR AS ExternalUser13 ON ReminderWorkItem02.ASSIGNED_USER_ID=ExternalUser13.USR_ID WHERE ((ReminderWorkItem02.ACCOUNT_ID = Account15.ACCOUNT_ID AND ReminderWorkItem02.BUSINESS_PROCESS_STATUS = @DerivedTable01_BUSINESS_PROCESS_STATUS524 AND ReminderWorkItem02.NAME = @DerivedTable01_NAME725 AND ReminderWorkItem02.SECURED_ORGANIZATIONORGANIZATION_ID = Account15.CLIENT_ID) AND ((ReminderWorkItem02.CONCRETE_TYPE IN ( @DerivedTable01_1126)))) ORDER BY 2) AS ScalarQueryTable) AS ReminderUser FROM SERVICE_REQUEST AS CollateralGroupRequest02 INNER JOIN (SERVICED_COLLATERAL_GROUP_ITEM AS ServicedCollateralGroupItem13 INNER JOIN (SERVICED_COLLATERAL_GROUP_ITEM AS ServicedAccount14 INNER JOIN (ACCOUNT AS Account15 INNER JOIN ACCOUNT_OWNERSHIP_DOC_SUMMARY AS AccountOwnershipDocSummary16 ON Account15.ACCOUNT_ID=AccountOwnershipDocSummary16.ACCOUNT_ID INNER JOIN PROPERTY AS AccountProperty110 ON Account15.ACCOUNT_ID=AccountProperty110.ACCOUNT_ID LEFT OUTER JOIN LEGAL_ENTITY AS FollowupEntity111 ON Account15.FOLLOWUP_ENTITYLEGAL_ENTITY_ID=FollowupEntity111.LEGAL_ENTITY_ID LEFT OUTER JOIN BUSINESS_UNIT AS BusinessUnit112 ON Account15.BUSINESS_UNIT_ID=BusinessUnit112.BUSINESS_UNIT_ID LEFT OUTER JOIN LIENHOLDER_STATUS_CODE AS LienholderStatusCode114 ON 
Account15.LIENHOLDER_STATUS_CODE_ID=LienholderStatusCode114.LIENHOLDER_STATUS_CODE_ID) ON ServicedAccount14.ACCOUNT_ID=Account15.ACCOUNT_ID) ON ServicedCollateralGroupItem13.SERVICED_COLLATERAL_GROUP_ITEM_ID=ServicedAccount14.SERVICED_COLLATERAL_GROUP_ITEM_ID) ON CollateralGroupRequest02.SERVICE_REQUEST_ID=ServicedCollateralGroupItem13.COLLATERAL_GROUP_REQUESTSERVICE_REQUEST_ID INNER JOIN ORGANIZATION AS Client18 ON CollateralGroupRequest02.CLIENT_ID=Client18.ORGANIZATION_ID LEFT OUTER JOIN EEE_REASON AS EeeReason19 ON CollateralGroupRequest02.EEE_REASON_ID=EeeReason19.EEE_REASON_ID INNER JOIN ORGANIZATION AS Jurisdiction113 ON CollateralGroupRequest02.JURISDICTION_ID=Jurisdiction113.ORGANIZATION_ID WHERE ((CollateralGroupRequest02.CLIENT_ID = @DerivedTable01_CLIENT_ID30 AND Account15.CLIENT_ID = @DerivedTable01_CLIENT_ID61 AND CollateralGroupRequest02.EEE_DATE IS NOT NULL AND ((CollateralGroupRequest02.CONCRETE_TYPE = @DerivedTable01_CONCRETE_TYPE113 AND CollateralGroupRequest02.BUSINESS_PROCESS_STATUS IN ( @DerivedTable01_BUSINESS_PROCESS_STATUS134, @DerivedTable01_BUSINESS_PROCESS_STATUS145) AND AccountOwnershipDocSummary16.STATUS = @DerivedTable01_STATUS166) OR (CollateralGroupRequest02.CONCRETE_TYPE = @DerivedTable01_CONCRETE_TYPE187 AND CollateralGroupRequest02.BUSINESS_PROCESS_STATUS = @DerivedTable01_BUSINESS_PROCESS_STATUS208 AND AccountOwnershipDocSummary16.STATUS = @DerivedTable01_STATUS229) OR (CollateralGroupRequest02.CONCRETE_TYPE = @DerivedTable01_CONCRETE_TYPE2410 AND CollateralGroupRequest02.BUSINESS_PROCESS_STATUS = @DerivedTable01_BUSINESS_PROCESS_STATUS2611) OR (CollateralGroupRequest02.CONCRETE_TYPE = @DerivedTable01_CONCRETE_TYPE2812 AND CollateralGroupRequest02.BUSINESS_PROCESS_STATUS = @DerivedTable01_BUSINESS_PROCESS_STATUS3013 AND AccountOwnershipDocSummary16.STATUS = @DerivedTable01_STATUS3214 OR (CollateralGroupRequest02.SERVICE_REQUEST_STATUS = @DerivedTable01_SERVICE_REQUEST_STATUS3415 AND CollateralGroupRequest02.BUSINESS_PROCESS_STATUS = @DerivedTable01_BUSINESS_PROCESS_STATUS3616 AND AccountOwnershipDocSummary16.STATUS = @DerivedTable01_STATUS3817)) OR (CollateralGroupRequest02.CONCRETE_TYPE = @DerivedTable01_CONCRETE_TYPE4018 AND CollateralGroupRequest02.BUSINESS_PROCESS_STATUS = @DerivedTable01_BUSINESS_PROCESS_STATUS4219)) AND (Account15.CUSTOM_ATTRIBUTE_4 = @DerivedTable01_CUSTOM_ATTRIBUTE_44420))) ORDER BY 20
OPTION
(
--RECOMPILE,-- Used to see the Statistics Output
QUERYTRACEON 3604,-- Redirects the output to SSMS
QUERYTRACEON 9204 -- Returns the Statistics that were used during Cardinality Estimation ("Stats loaded")
)',N'@DerivedTable01_CLIENT_ID30 int,@DerivedTable01_CLIENT_ID61 int,@DerivedTable01_CONCRETE_TYPE113 varchar(8000),@DerivedTable01_BUSINESS_PROCESS_STATUS134 varchar(8000),@DerivedTable01_BUSINESS_PROCESS_STATUS145 varchar(8000),@DerivedTable01_STATUS166 varchar(8000),@DerivedTable01_CONCRETE_TYPE187 varchar(8000),@DerivedTable01_BUSINESS_PROCESS_STATUS208 varchar(8000),@DerivedTable01_STATUS229 varchar(8000),@DerivedTable01_CONCRETE_TYPE2410 varchar(8000),@DerivedTable01_BUSINESS_PROCESS_STATUS2611 varchar(8000),@DerivedTable01_CONCRETE_TYPE2812 varchar(8000),@DerivedTable01_BUSINESS_PROCESS_STATUS3013 varchar(8000),@DerivedTable01_STATUS3214 varchar(8000),@DerivedTable01_SERVICE_REQUEST_STATUS3415 varchar(8000),@DerivedTable01_BUSINESS_PROCESS_STATUS3616 varchar(8000),@DerivedTable01_STATUS3817 varchar(8000),@DerivedTable01_CONCRETE_TYPE4018 varchar(8000),@DerivedTable01_BUSINESS_PROCESS_STATUS4219 varchar(8000),@DerivedTable01_CUSTOM_ATTRIBUTE_44420 varchar(8000),@DerivedTable01_BUSINESS_PROCESS_STATUS421 varchar(8000),@DerivedTable01_NAME622 varchar(8),@DerivedTable01_923 varchar(32),@DerivedTable01_BUSINESS_PROCESS_STATUS524 varchar(8000),@DerivedTable01_NAME725 varchar(8),@DerivedTable01_1126 varchar(32)',@DerivedTable01_CLIENT_ID30=11330,@DerivedTable01_CLIENT_ID61=11330,@DerivedTable01_CONCRETE_TYPE113='Fdi.Po.FollowUpRequest',@DerivedTable01_BUSINESS_PROCESS_STATUS134='Open',@DerivedTable01_BUSINESS_PROCESS_STATUS145='Closed',@DerivedTable01_STATUS166='NO_TITLE_PM',@DerivedTable01_CONCRETE_TYPE187='Fdi.Po.TitleMaintenanceRequest',@DerivedTable01_BUSINESS_PROCESS_STATUS208='Open',@DerivedTable01_STATUS229='TITLE_MAINTENANCE_REQUEST_SENT',@DerivedTable01_CONCRETE_TYPE2410='Fdi.Po.DuplicateTitleRequest',@DerivedTable01_BUSINESS_PROCESS_STATUS2611='Closed',@DerivedTable01_CONCRETE_TYPE2812='Fdi.Po.DirectLendingServiceRequest',@DerivedTable01_BUSINESS_PROCESS_STATUS3013='Open',@DerivedTable01_STATUS3214='NO_TITLE_PM',@DerivedTable01_SERVICE_REQUEST_STATUS3415='DocumentsReturned',@DerivedTable01_BUSINESS_PROCESS_STATUS3616='Closed',@DerivedTable01_STATUS3817='NO_TITLE_PM',@DerivedTable01_CONCRETE_TYPE4018='Fdi.Po.AdHocRequest',@DerivedTable01_BUSINESS_PROCESS_STATUS4219='Open',@DerivedTable01_CUSTOM_ATTRIBUTE_44420='01102',@DerivedTable01_BUSINESS_PROCESS_STATUS421='Open',@DerivedTable01_NAME622='REMINDER',@DerivedTable01_923='Fdi.Workflow.Po.ReminderWorkItem',@DerivedTable01_BUSINESS_PROCESS_STATUS524='Open',@DerivedTable01_NAME725='REMINDER',@DerivedTable01_1126='Fdi.Workflow.Po.ReminderWorkItem'
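For reference, the database-level setting being compared here can be checked and changed as below; YourDb is a placeholder name, and simple parameterization is the default:

-- check whether forced parameterization is on for the current database
SELECT name, is_parameterization_forced
FROM sys.databases
WHERE name = DB_NAME();

-- revert to simple parameterization (the default)
ALTER DATABASE [YourDb] SET PARAMETERIZATION SIMPLE;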
December 10, 2016 at 8:55 am
Creating a plan guide by hand when your SQL statements are this complicated is proving to be difficult, but I found the stored procedure that will create a plan guide from a query plan handle.
That looks like it is much easier, but when the plan cache gets cleared three or four times a month, what happens to your plan guide?
Many of the problematic queries are run by different clients with different parameters, so although there are a lot of different query hints, OPTIMIZE FOR specific parameters, and so forth, I'm thinking that since the top part of the query is always the same, the plan guide would be created for that with OPTION(RECOMPILE), which is proving to be the best-performing option for many of these queries.
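A sketch of that approach, assuming the plan you want is still in cache (the guide name and the search fragment are hypothetical):

-- find the plan handle of the expensive captured query
DECLARE @plan_handle varbinary(64), @offset int;
SELECT TOP (1)
       @plan_handle = qs.plan_handle,
       @offset = qs.statement_start_offset
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%SELECT DISTINCT TOP 2000%'
ORDER BY qs.total_worker_time DESC;

-- turn that cached plan into a plan guide
EXEC sp_create_plan_guide_from_handle
     @name = N'PG_CollateralGroupRequest',   -- hypothetical name
     @plan_handle = @plan_handle,
     @statement_start_offset = @offset;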
December 10, 2016 at 9:00 am
Just read and re-read what I wrote. Okay, so the plan handle is used once to create the plan guide; from that, it gets the SQL statement. Going forward, the plan guide doesn't need that specific plan handle any longer.
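Right: the guide is stored in the database itself and matched on query text from then on, so clearing the plan cache doesn't remove it. It can be verified with:

SELECT name, is_disabled, scope_type_desc, hints
FROM sys.plan_guides;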
December 11, 2016 at 2:08 pm
Indianrock (12/9/2016)
"It’s not using new stuff because of FORCED parameterization because it caches the plan and reuses it. If you keep it SIMPLE, I think it would use the new stats/indexes."
Nope. Cached plans get invalidated by a bunch of things, including index changes and stats updates. Forced/simple parameterisation doesn't change that.
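A rough way to watch that invalidation on your own system (the text filter below is a placeholder for a distinctive fragment of your query):

-- note usecounts climbing while the plan is being reused
SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%SELECT DISTINCT TOP 2000%';
-- after UPDATE STATISTICS (or an index change) on a referenced table,
-- the entry disappears or reappears under a new plan_handle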
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 11, 2016 at 2:14 pm
Alan.B (12/9/2016)
Indianrock (12/9/2016)
I've heard this advice everywhere. Don't do things to change the way the SQL optimizer produces plans etc. "it's smart so don't use plan guides, query hints, force recompile etc etc etc etc"
If you don't know what you are doing and don't understand the SQL Server query optimizer then this advice is pretty good.
Which is precisely why that advice is given. The number of times I've seen query hints or weird query forms at clients and heard "I read it somewhere" or "<developer> says to always use that because it makes things faster" or similar, where people are messing with hints with no idea what they do or when they should be used.
When you know what a hint does and why you're planning on using it, and have tested and it does make things better, then use it.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 11, 2016 at 5:16 pm
There are certainly enough things to get frustrated about -- the advice not to try to influence the optimizer is just one of them. Another is, even today, article after article trying to explain some of these "under the hood" features where A) the example uses a query consisting of 30 characters and B) the examples use stored procedures.
I don't know how much of the industry is using ORMs versus stored procedures.
Anyway, the combination of reverting to simple parameterization, using filtered stats, and some application of OPTION(RECOMPILE) shows that good performance can be achieved on some of these ORM-generated queries that fall victim to huge data skews. Need to do more testing, but in at least one instance, leaving forced parameterization in place didn't yield the gains even with the other tweaks in place. (Change three things and try to determine which one fixed the problem 🙂 )
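For anyone trying the same thing, a hypothetical adaptation of the filtered-statistics piece to the tables in the posted query (the column choices are illustrative only):

-- statistics scoped to one client's rows, so the optimizer
-- sees that client's skew instead of the table-wide average
CREATE STATISTICS st_ServiceRequest_Client11330
ON dbo.SERVICE_REQUEST (CONCRETE_TYPE, BUSINESS_PROCESS_STATUS)
WHERE CLIENT_ID = 11330;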
Never did get a plan guide to be used. Probably not surprising with queries as long and messy as ours.
Oh, the seed about hiring a consultant was planted. Don't know if it will grow yet but the project to convert all databases to SSD is imminent.
December 11, 2016 at 8:18 pm
1) You know what I do when I get called in to a new client that is using an ORM? I go KAAAACCCCHHHIIIIINNNGGGG! 😀 I know there is going to be LOTS of performance tuning work to be done, and most of it won't be easy (especially if they did Code First development).
2) It is a rare ORM system that shouldn't have Optimize for Ad Hoc Workloads on (setting shown in the snippet after this list).
3) In my experience, as yours has been so far, plan guides are essentially useless on a system that generates queries like you have posted.
4) Since you mentioned it, I will throw my hat into the consultant ring for consideration. About 45000 hours invested in just the SQL Server relational engine, mostly as an independent consultant and most of that dealing with performance in some way or another.
5) BTW, you had darn well better test your system UNDER LOAD and on PRODUCTION DATA BEFORE you move to SSDs. You could wind up with a mess of deadlocks otherwise. I have seen it so bad that the system was unusable (and that client had no way to return to the old system they had migrated off of).
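Regarding #2, turning that setting on is a two-line change (instance-wide; once on, a first execution caches only a small stub, and the full plan is cached on reuse):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;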
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
December 12, 2016 at 6:16 am
Yes we do have optimize for ad hoc workloads on. If it was up to me, I'd hire you and Gail Shaw today. I did pass both of your contact info to my IT manager. I believe they are planning on several days of testing a copy of prod in QA with SSDs but will reiterate the need for prod-like load testing.
Since it isn't a perfect world, the plan guide looked like another (possibly temporary) way to get some relief without code changes, when it's going to take months of code changes before the ORM can use query hints, be rewritten in a saner manner, or be bypassed for big queries.
thanks
December 12, 2016 at 6:21 am
Why would SSDs generate deadlocks?
December 12, 2016 at 6:42 am
Indianrock (12/12/2016)
Why would SSDs generate deadlocks?
It doesn't generate deadlocks, just facilitates them.
If your workload is currently IO-bottlenecked (quite common), then the IO latencies are slowing things down. Remove those, which is essentially what moving to SSD will do, and you have queries running faster, more things happening at a time, and more opportunity for things to deadlock.
I've got a current client who upgraded hardware and moved to SSDs and got exactly that, a massive increase in deadlocks, which I'm still fixing 8 months later.
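If you want a baseline before and after the move, the deadlock graphs that the default system_health session already captures can be pulled like this (the ring buffer only holds recent history):

-- extract xml_deadlock_report events from system_health's ring buffer
;WITH ring AS (
    SELECT CAST(st.target_data AS XML) AS target_xml
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions AS s
        ON s.address = st.event_session_address
    WHERE s.name = N'system_health'
      AND st.target_name = N'ring_buffer'
)
SELECT x.ev.value('@timestamp', 'datetime2') AS event_time,
       x.ev.query('(data/value/deadlock)[1]') AS deadlock_graph
FROM ring
CROSS APPLY target_xml.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(ev);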
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 12, 2016 at 8:55 am
GilaMonster (12/12/2016)
Indianrock (12/12/2016)
Why would SSDs generate deadlocks?
It doesn't generate deadlocks, just facilitates them.
If your workload is currently IO-bottlenecked (quite common), then the IO latencies are slowing things down. Remove those, which is essentially what moving to SSD will do, and you have queries running faster, more things happening at a time, and more opportunity for things to deadlock.
I've got a current client who upgraded hardware and moved to SSDs and got exactly that, a massive increase in deadlocks, which I'm still fixing 8 months later.
Exactly. More things "go bump in the night" ... and day!! The thing that REALLY torqued me off about the worst-case scenario was that it was a client of mine I had been with for years as a part-time consultant. They KNEW what I could do for them and they didn't even mention the hardware upgrade. Had they done that I would have told them what was coming. There were some core pieces of their app and logic that they hadn't wanted to bite off on that I knew were time bombs just waiting to be unleashed. That was an exceptionally unpleasant phone call at 0-dark-thirty! :angry:
BTW, if you have EF I believe there is a sledge hammer you can use that can append OPTION (RECOMPILE) to the end of queries. Sadly, to my knowledge it is an all-or-nothing intercept. But it could be a Band-Aid you turn on and off on the fly before and after calling your ugliest report beasts. (Note that I don't know if that on/off thing is actually feasible.)
The most likely solution I can see at this point (with very little information honestly) is to replace some of the monsters with sprocs or perhaps application-created dynamic SQL (guarding against SQL Injection, obviously).
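A minimal sketch of that last option, with names borrowed from the posted query and the filter drastically simplified: the parameters go through sp_executesql instead of being concatenated into the string, which is what keeps injection out.

DECLARE @sql nvarchar(max) = N'
SELECT sr.SERVICE_REQUEST_ID, sr.SERVICE_REQUEST_STATUS
FROM dbo.SERVICE_REQUEST AS sr
WHERE sr.CLIENT_ID = @client_id
  AND sr.EEE_DATE IS NOT NULL
OPTION (RECOMPILE);';   -- hint included only where testing showed it pays off

EXEC sp_executesql @sql,
     N'@client_id int',
     @client_id = 11330;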
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
December 13, 2016 at 8:35 am
Alan.B (12/9/2016)
Indianrock (12/9/2016)
I've heard this advice everywhere. Don't do things to change the way the SQL optimizer produces plans etc. "it's smart so don't use plan guides, query hints, force recompile etc etc etc etc"
If you don't know what you are doing and don't understand the SQL Server query optimizer then this advice is pretty good. I would say, for example, that 90% of the WITH (NOLOCK) table hints I have seen are unnecessary, unhelpful and counter-intuitive.
(NOLOCK) is really worse than you indicate. Having worked with this hint used in calculating transactional stuff, I saw constant glitchy behavior and would have to troubleshoot a database because of entirely random misfiring updates. Index reorgs absolutely become an "offline" operation, because if you have transactions running during index reorgs that were prepared using queries with (NOLOCK), you'd get random crap in your database. I understand the use of (NOLOCK) for troubleshooting and rough approximate queries, but why didn't they name it better, like (APPROXIMATE_NONTRANSACTIONAL) or something, to indicate to the developer the possible ramifications? On the bright side, you get good at learning your database, because any given update can go wrong and you get good at knowing what needs fixing (if possible; sometimes data is just plain gone). Also, end users do get used to "glitchy" computers, but it's sad knowing I'm shoveling crap; such is the nature of third-rate IT support.
I especially loved hearing one c# developer saying that transactional integrity was "ivory tower" stuff not worth the effort in pursuing in the real world.
December 13, 2016 at 10:12 am
patrickmcginnis59 10839 (12/13/2016)
Alan.B (12/9/2016)
Indianrock (12/9/2016)
I've heard this advice everywhere. Don't do things to change the way the SQL optimizer produces plans etc. "it's smart so don't use plan guides, query hints, force recompile etc etc etc etc"
If you don't know what you are doing and don't understand the SQL Server query optimizer then this advice is pretty good. I would say, for example, that 90% of the WITH (NOLOCK) table hints I have seen are unnecessary, unhelpful and counter-intuitive.
(NOLOCK) is really worse than you indicate. Having worked with this hint used in calculating transactional stuff, I saw constant glitchy behavior and would have to troubleshoot a database because of entirely random misfiring updates. Index reorgs absolutely become an "offline" operation, because if you have transactions running during index reorgs that were prepared using queries with (NOLOCK), you'd get random crap in your database. I understand the use of (NOLOCK) for troubleshooting and rough approximate queries, but why didn't they name it better, like (APPROXIMATE_NONTRANSACTIONAL) or something, to indicate to the developer the possible ramifications? On the bright side, you get good at learning your database, because any given update can go wrong and you get good at knowing what needs fixing (if possible; sometimes data is just plain gone). Also, end users do get used to "glitchy" computers, but it's sad knowing I'm shoveling crap; such is the nature of third-rate IT support.
I especially loved hearing one c# developer saying that transactional integrity was "ivory tower" stuff not worth the effort in pursuing in the real world.
A) Regarding index reorg, I don't think you get "random crap in your database" because of it while running NOLOCK SELECTs. You may get committed bad data OUT on said SELECTs though.
B) I'm sorry, but anyone doing database development should have some idea of ACID and how it is achieved by locking in SQL Server, even if they have no idea of the details. So NOLOCK does clearly remove one of the fundamental tenets of transactioning to me. I don't think any other name would be more appropriate.
C) Sounds like that C# developer is used to working in social media or a similar field, where indeed it matters little if one of your 743 friends doesn't get (or loses after commitment) the knowledge that your birthday party location changed. 😀 There are very few lines of business that can accept losing committed data or presenting/storing/creating bad data.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
December 13, 2016 at 10:59 am
Forcing a plan is, in my opinion, a temporary workaround, since a change in data growth patterns may give you less optimal performance after some time. If you know what you are doing then fine; it's good to keep an eye on it long term. And in many cases there are limits to what you can do in terms of performance fixes if the design is bad.
We have a couple of applications using ORMs. They don't have that many issues, since we model the database first.
December 13, 2016 at 11:35 am
TheSQLGuru (12/13/2016)
A) Regarding index reorg, I don't think you get "random crap in your database" because of it while running NOLOCK SELECTs. You may get committed bad data OUT on said SELECTs though.
If you prepare an update based on erroneous data from NOLOCK'ed selects, then the entire routine is suspect in my opinion. I do realize opinions differ on this; I'm sure there's a school of thought that says the update is fine but the select is bad.
-- A COUPLE OF TABLES
CREATE TABLE TEST1 (KEY1 INT, DATA1 INT);
CREATE TABLE TEST2 (KEY1 INT, DATA1 INT);

-- IN FIRST WINDOW, START A TRANSACTION AND LEAVE IT OPEN
BEGIN TRAN;
INSERT INTO TEST1 (KEY1, DATA1) SELECT 1, 1;

-- IN SECOND WINDOW, ACT UPON THE UNCOMMITTED ROW VIA A NOLOCK (DIRTY) READ
INSERT INTO TEST2 (KEY1, DATA1) SELECT KEY1, DATA1 FROM TEST1 WITH (NOLOCK);

-- IN FIRST WINDOW, ROLL BACK THE TRANSACTION
ROLLBACK;

-- VIEW RESULTS: TEST1 IS EMPTY, BUT TEST2 KEPT A ROW THAT WAS NEVER COMMITTED
SELECT * FROM TEST1;
SELECT * FROM TEST2;
I'm sure some folks don't consider the source SELECT to be part of the transaction, but to me it's part of the code that I want to execute properly. That doesn't make me like (NOLOCK) any better.
Even more so if you're compiling data in separate steps to prepare for an update, and your compiling steps have (NOLOCK).
Whether doing update routines in multiple steps is a good idea or not is another question. I think it's possible to do them correctly, but even here I see racy code from experienced posters; in any case, using NOLOCK to gather data for updates is not my favorite programming paradigm.
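For what it's worth, the usual alternative to sprinkling (NOLOCK) around is row versioning: readers see the last committed version instead of dirty pages, without blocking or being blocked. Flipping it needs exclusive access to the database and adds tempdb version-store load, so test first (YourDb is a placeholder):

ALTER DATABASE [YourDb] SET READ_COMMITTED_SNAPSHOT ON;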