December 20, 2010 at 8:27 am
I am trying to get the datediff on consecutive rows. Any help would be appreciated. Ideally I would like to group by cv.idcode.
Query:
SELECT
cv.IDCode AS 'MRN',
sobs.value as 'Pain Scale',
od.recordeddtm as 'Date/Time Documented'
FROM CV3PatientCareDocument pcd (nolock)
INNER JOIN CV3ClientDocument cd (nolock) ON pcd.GUID = cd.PatCareDocGUID
INNER JOIN CV3ClientVisit cv (nolock) ON cd.ClientGUID = cv.ClientGUID
AND cd.ChartGUID = cv.ChartGUID
INNER JOIN CV3ObservationDocument od (nolock) ON cd.GUID = od.OwnerGUID
INNER JOIN CV3ObsCatalogMasterItem ocmi (nolock) ON od.ObsMasterItemGUID = ocmi.GUID
INNER JOIN CV3UserDictionary ud (nolock) ON ocmi.UserDictionaryCode = ud.Code
INNER JOIN CV3Observation AS COBS (nolock) ON od.ObservationGUID = COBS.GUID
LEFT OUTER JOIN SCMObsFSListValues AS SOBS (nolock) ON od.ObservationDocumentGUID = SOBS.ParentGUID
AND SOBS.ClientGUID = cd.ClientGUID AND SOBS.Active = 1
INNER JOIN CV3Client c (nolock) ON cv.ClientGUID = c.GUID
WHERE pcd.Name IN ('01. Vital Signs Flowsheet', '01. Vital Signs ICU Flowsheet', '01. Vital Signs Progressive Care Flowsheet',
'1. SCN Vital Signs')
AND ocmi.description IN ('pain rating', 'Pain Reassessment - on 3 VS FS & short stay & A&I', 'FLACC: Score', 'Pain Rating Restricted - on 3 VS FS & short stay & A&I')
Sample Results:
MRN      Pain Scale       Date/Time Documented
492878   0/10 - no pain   2010-12-19 04:35:00.000
492878   level 3/10       2010-12-19 08:08:00.000
454118   0/10 - no pain   2010-12-19 04:37:00.000
454118   0/10 - no pain   2010-12-19 07:59:00.000
454118   0/10 - no pain   2010-12-19 11:47:00.000
454118   0/10 - no pain   2010-12-19 16:25:00.000
454118   0/10 - no pain   2010-12-19 19:55:00.000
454118   0/10 - no pain   2010-12-19 23:54:00.000
327682   0/10 - no pain   2010-12-19 00:08:00.000
327682   0/10 - no pain   2010-12-19 01:37:00.000
327682   0/10 - no pain   2010-12-19 03:48:00.000
327682   0/10 - no pain   2010-12-19 05:50:00.000
327682   0/10 - no pain   2010-12-19 07:30:00.000
327682   0/10 - no pain   2010-12-19 11:35:00.000
December 20, 2010 at 8:31 am
Based on the sample data, what kind of result do you want to get?
December 20, 2010 at 8:45 am
I would like to see the datediff in minutes. I would like the results to be something like:
mrn pain rating recordeddtm MinutesFromPrev
123456 0/10 2010-12-12 22:00 0
123456 5/10 2010-12-12 23:00 60
664321 7/10 2010-12-11 09:30 0
664321 7/10 2010-12-11 10:15 45
664321 0/10 2010-12-11 10:30 15
Thank you. I hope this helps to clarify.
December 20, 2010 at 10:16 am
Wildcat (12/20/2010)
Based on the sample data, what kind of result do you want to get?
Well, the expected results that you provided are not based on the sample data you provided (the expected results are for MRNs 123456 and 664321, which are not in the sample data). This makes it extremely difficult for us to know what you really want.
You know, the people that help others out here are not paid for doing this - we're all volunteers. You need to provide us with the table definitions (DDL) in the form of CREATE TABLE / CREATE INDEX statements (and where appropriate, include any Foreign Key relationships), sample data in the form of INSERT statements, and then the expected results based off of the sample data provided - this will make it a lot easier for us. Additionally, please provide what you have already tried. You will benefit by getting tested results back that will solve your issue. So, please HELP US HELP YOU! (You can read further information for how to accomplish all of this by reading the first two links in my signature!)
You should also know that there are many people here that just plain won't help you out if you don't post sample code in this format. If it's not in a readily consumable format, where we can just cut-and-paste it in and start running it, then it's not of much use to us.
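For example, a post in that readily consumable format might look something like this (the table name, data types, and values below are made up purely to illustrate; use your real DDL and data):

-- Hypothetical table and sample data, shown only to illustrate the posting format
CREATE TABLE #PainObservation
(
    MRN         INT          NOT NULL,
    PainScale   VARCHAR(50)  NOT NULL,
    RecordedDtm DATETIME     NOT NULL
);

INSERT INTO #PainObservation (MRN, PainScale, RecordedDtm)
SELECT 123456, '0/10', '2010-12-12 22:00' UNION ALL
SELECT 123456, '5/10', '2010-12-12 23:00' UNION ALL
SELECT 664321, '7/10', '2010-12-11 09:30' UNION ALL
SELECT 664321, '7/10', '2010-12-11 10:15' UNION ALL
SELECT 664321, '0/10', '2010-12-11 10:30';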
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
December 20, 2010 at 11:00 am
Why are you using GUIDs? (Answer: Because these are the keys that this particular vendor uses to join these particular tables)
Why are you using aliases to format display headers? (Answer: These will be used in SSRS and I wanted to make the columns more meaningful for the user)
Why are so many data element names vague generic terms? (Answer: they are not vague to me)
Why are so many data element names just plain wrong? (Answer: just because you do not understand what they may be does not make them wrong)
Why are you slapping hints all over the code? (Answer: I use comments to help clarify)
Thank you for your input. You did see that my status is SSC Rookie? Please don't waste any more of your time on this.
December 20, 2010 at 11:54 am
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
-- Number each patient's observations in recorded order, then join each row to the one before it
;WITH Observations (ObservationIndex, MRN, PainScale, DateTimeDocumented) AS
(SELECT ROW_NUMBER() OVER (PARTITION BY CV.IDCode ORDER BY OD.RecordedDtm) AS ObservationIndex,
        CV.IDCode,
        SOBS.Value,
        OD.RecordedDtm
 FROM CV3ClientDocument CD
 INNER JOIN CV3ClientVisit CV ON CD.ClientGUID = CV.ClientGUID
     AND CD.ChartGUID = CV.ChartGUID
 INNER JOIN CV3ObservationDocument OD ON CD.GUID = OD.OwnerGUID
 LEFT OUTER JOIN SCMObsFSListValues SOBS ON OD.ObservationDocumentGUID = SOBS.ParentGUID
     AND SOBS.ClientGUID = CD.ClientGUID
     AND SOBS.Active = 1)
SELECT O.MRN,
       O.PainScale,
       O.DateTimeDocumented,
       -- 0 for a patient's first observation, otherwise minutes since the previous one
       DATEDIFF(mi, ISNULL(PreviousO.DateTimeDocumented, O.DateTimeDocumented), O.DateTimeDocumented) AS MinutesFromPrev
FROM Observations O
LEFT JOIN Observations PreviousO ON O.MRN = PreviousO.MRN
    AND O.ObservationIndex = PreviousO.ObservationIndex + 1
Something like this should work.
You can replace the nolock query hints with a single use of "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED" at the beginning of your script.
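For instance, as a trimmed-down, hypothetical fragment just to show the swap (not your full query):

-- Before: hint repeated on every table
SELECT cv.IDCode
FROM CV3ClientVisit cv (nolock)
INNER JOIN CV3Client c (nolock) ON cv.ClientGUID = c.GUID;

-- After: set the isolation level once for the batch and drop the hints
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT cv.IDCode
FROM CV3ClientVisit cv
INNER JOIN CV3Client c ON cv.ClientGUID = c.GUID;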
I removed some of the tables from your original query since they weren't needed for this result set. There are some other issues you may run into with this code depending on how you have the application configured; I'm not sure how you will deal with cases where the same observation exists on multiple documents.
For your WHERE clause, I would recommend filtering on the GUIDs from dbo.CV3PatientCareDocument and dbo.CV3ObsCatalogMasterItem instead of their names. This will save you a little bit in performance.
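As a rough sketch only (the literal values below are placeholders, not real GUIDs from your system, and this assumes those key values are stable in your environment), the filter in your original query could become something like:

-- Hypothetical GUID values purely for illustration; look up the real ones in your own system
WHERE pcd.GUID IN (111111, 222222)    -- the vital signs flowsheet documents you care about
AND ocmi.GUID IN (333333, 444444)     -- the pain rating observation catalog items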
If you need some additional help post either here or on the client connect website.
December 20, 2010 at 12:22 pm
Charles - Thank you very much. This is exactly what I was looking for.
December 20, 2010 at 12:30 pm
This article might also help you figure out exactly how you want to go about the calculations.
--------------------------------------
When you encounter a problem, if the solution isn't readily evident go back to the start and check your assumptions.
--------------------------------------
It’s unpleasantly like being drunk.
What’s so unpleasant about being drunk?
You ask a glass of water. -- Douglas Adams
December 20, 2010 at 1:42 pm
bhutchin (12/20/2010)
Why are you slapping hints all over the code? (Answer: I use comments to help clarify)
The "hints" he is referring to would be the (nolock) query hints throughout your query (not any code comments you might have). Of all of the questions he brought up (I'm not commenting on the tone of the post, just the contents), this is the one you might need to worry about.
NOLOCK will allow dirty data through, meaning things that haven't been committed or might end up getting rolled back, etc. In other words, it allows a certain amount of "bad data" to sneak through just to let you move through a little faster. This could lead to really, really bad things in many settings with regard to patient care, so you might care to NOT use (NOLOCK).
Again - you know the setting you're dealing with, so it's definitely not written in stone. Just be careful with its use.
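To make the risk concrete, here is a small, hypothetical two-session sketch (made-up table name, for illustration only) of how a NOLOCK read can return data that never gets committed:

-- Session 1: start a change but don't commit it yet
BEGIN TRANSACTION;
UPDATE dbo.DemoPainScore          -- hypothetical table, for illustration only
SET PainScale = '9/10'
WHERE MRN = 123456;
-- ...transaction still open...

-- Session 2: a NOLOCK read sees the uncommitted '9/10'
SELECT MRN, PainScale
FROM dbo.DemoPainScore WITH (NOLOCK);

-- Session 1: the change is rolled back, but Session 2 already reported it
ROLLBACK TRANSACTION;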
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
December 20, 2010 at 1:46 pm
Thank you very much for clarifying this point. It is definitely one that I will remember.
December 20, 2010 at 2:19 pm
Just some more data about the NOLOCK hint: I'm pretty sure it has no effect if you're crossing databases and servers. (I couldn't find the reference that says that, though. I know it doesn't work across servers; I'm not sure about databases on the same server. Perhaps someone else can enlighten me with the reference on that one. :-))
But as I was looking for the reference, I came across this in BOL in the "Hints (Transact-SQL)" entry:
Note:
Support for use of the READUNCOMMITTED and NOLOCK hints in the FROM clause that apply to the target table of an UPDATE or DELETE statement will be removed in a future version of Microsoft SQL Server. Avoid using these hints in this context in new development work, and plan to modify applications that currently use them.
So per Microsoft, there will come a time when the NOLOCK hint won't work. Just something to keep in mind for future development and upgrades.
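For what it's worth, the pattern that note is talking about looks something like this (hypothetical table name, shown only to illustrate):

-- The usage the note warns about: a NOLOCK hint in the FROM clause that
-- applies to the target table of the UPDATE (hypothetical table name)
UPDATE t
SET t.PainScale = '0/10'
FROM dbo.DemoPainScore AS t WITH (NOLOCK)
WHERE t.MRN = 123456;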
-- Kit
December 20, 2010 at 2:47 pm
That is for the DELETE or UPDATE statement, not for the SELECT.
BTW, how could a NOLOCK hint even apply to the target table of a DELETE or UPDATE statement? Makes no sense to me.
December 20, 2010 at 3:12 pm
Wildcat (12/20/2010)
That is for the DELETE or UPDATE statement, not for the SELECT. BTW, how could a NOLOCK hint even apply to the target table of a DELETE or UPDATE statement? Makes no sense to me.
It might erroneously allow you to select something that otherwise wouldn't be picked up, because the data to scan for qualifying records is already in the server cache.
EDIT: I didn't read that closely enough. The TARGET table has no effect: per BOL, the NOLOCK hint is ignored on the target table. That said, it can still cause issues during an UPDATE or DELETE statement if NOLOCK hints are on related tables in the FROM clause.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
December 20, 2010 at 8:27 pm
Matt Miller (#4) (12/20/2010)
Wildcat (12/20/2010)
That is for the DELETE or UPDATE statement, not for the SELECT. BTW, how could a NOLOCK hint even apply to the target table of a DELETE or UPDATE statement? Makes no sense to me.
It might erroneously allow you to select something that otherwise wouldn't be picked up, because the data to scan for qualifying records is already in the server cache.
To add to what Matt said, it can and has allowed duplicated rows to show up in similar queries... one before an update and one after. If you're lucky, it'll give you some kind of error. If you're not lucky, it'll just use the duplicated data, which, of course, is exactly what you never want.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 20, 2010 at 8:37 pm
bhutchin (12/20/2010)
Charles - Thank you very much. This is exactly what I was looking for.
I'm asking for reasons of performance, present or future. How many rows do you have in the tables you're using in these queries and how many of those rows do you expect to return?
While it's real handy, a CTE that is referenced more than once in the outer query is actually executed once for each reference, just like a view. That's why CTEs are frequently referred to as "Inline Views". It can be (not always) a HUGE performance problem, especially when there are a number of joins in the CTE.
My recommendation to folks is usually that if you need to call the CTE more than once, it's usually a good idea to use INTO to populate a Temp Table with the results of the CTE so it only has to run once no matter how many times you need the same result set from the CTE.
Also, depending on the number of rows involved, you could end up with a substantial waiting period when you do a self join on a large number of rows even if you do resolve to use a Temp Table.
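As a minimal sketch of that approach, reworking the CTE from Charles's post into a temp table (same assumptions about the schema as his code):

-- Materialize the numbered observations once...
SELECT ROW_NUMBER() OVER (PARTITION BY CV.IDCode ORDER BY OD.RecordedDtm) AS ObservationIndex,
       CV.IDCode AS MRN,
       SOBS.Value AS PainScale,
       OD.RecordedDtm AS DateTimeDocumented
INTO #Observations
FROM CV3ClientDocument CD
INNER JOIN CV3ClientVisit CV ON CD.ClientGUID = CV.ClientGUID
    AND CD.ChartGUID = CV.ChartGUID
INNER JOIN CV3ObservationDocument OD ON CD.GUID = OD.OwnerGUID
LEFT OUTER JOIN SCMObsFSListValues SOBS ON OD.ObservationDocumentGUID = SOBS.ParentGUID
    AND SOBS.ClientGUID = CD.ClientGUID
    AND SOBS.Active = 1;

-- ...then the self join only touches the temp table instead of re-running all the joins
SELECT O.MRN,
       O.PainScale,
       O.DateTimeDocumented,
       DATEDIFF(mi, ISNULL(P.DateTimeDocumented, O.DateTimeDocumented), O.DateTimeDocumented) AS MinutesFromPrev
FROM #Observations O
LEFT JOIN #Observations P ON O.MRN = P.MRN
    AND O.ObservationIndex = P.ObservationIndex + 1;

DROP TABLE #Observations;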
Post back with the rowcounts and we'll let you know whether the "Running Total" article (with a new safety check) you've been pointed to may be better. To whet your appetite on the prospect, the code from that article will do the "difference" you request in just a couple of seconds on a million rows and, with the new safety check in place that I've not yet included in the article, it IS guaranteed to work or to throw an error without affecting your data.
--Jeff Moden
Change is inevitable... Change for the better is not.