Issue with date field calculations

  • I've got a query that I've been using for quite some time, and it doesn't work consistently.

    What it does is order descending on a Date field while inserting into a keyed table, to get the latest sold date for a particular customer.

    Yet for whatever reason, SQL behind the scenes is seeing 7/26/2005 as a greater date than 9/5/2007, and I've struggled to find out what could be causing this. Every field is a pure datetime field; no calculations are being done other than ordering by Sold_Date descending, with a unique index on phone number to weed out duplicate customers (we do telemarketing, so phone is our key currently).

    Anyone have any hints or thoughts on where to go to resolve this and make the results correct and consistent? (I have other leads which come in just fine, but I also have many like the one I'm mentioning here that are completely out of whack.)

    -Terry

  • the only way I see that happening is if the date columns are being treated as varchar... but you said you were sure the fields were datetime data types...

    man, that's weird.

    can you post the actual query? maybe there's a WHERE clause that uses a parameter, like DatetimeField = @effdt, and @effdt is typed as varchar, so an implicit conversion is happening in the comparison?

    I'm just guessing, but that's a place to start.
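    for example, if dates ever get compared or sorted as strings, the ordering breaks down (just an illustration of the guess above, not your actual tables):

    -- a varchar comparison goes character by character, so
    -- '10/1/2007' sorts BEFORE '9/5/2007' because '1' < '9'
    select case when '10/1/2007' < '9/5/2007'
                then 'string compare puts 10/1/2007 first'
                else 'compared as dates' end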

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. with your description of the problem, we can provide a tested, verifiable solution to your question! asking the question the right way gets you a tested answer the fastest way possible!

  • Here is a sample of the query:

    truncate table [LM_Date1]

    insert into [LM_Date1]
    select
        C.[Cust_ID],
        C.Phone,
        H.[Sold_Date],
        H.[Amt_Sold],
        H.[Amt_Paid],
        H.[Paid_Date]
    from [Lead_Management_Preparation Table] L
    inner join (History H
        inner join Customers C on C.[Cust_ID] = H.[Cust_ID])
        on H.Cust_ID = L.Cust_ID
    where
        --L.Phone = C.Phone
        L.[Sales Type] = '79'
        and H.[Dept_No] = @Dept
    order by H.[Sold_Date] DESC

    This populates my very first table with data, and even here I have an issue. History.Sold_Date is a datetime field, and when I run the query with the exact customer ID I'm testing, it seems to work correctly (I picked 9/5/07). But if I leave it alone and let the regular inner joins bring everything back, it ends up retrieving 7/25/05. For the life of me, it doesn't make sense.

  • just to be sure, can you exec sp_help History and also exec sp_help LM_Date1, so we can confirm that the columns H.Paid_Date and LM_Date1.Paid_Date are both datetime?

    Lowell



  • I ran the sp_helps, and both columns are set to datetime.

  • I bet it's seeing 9/5/2007 as May 9th, 2007. Try 9/20/2007 and see what happens. Also try convert(datetime, '5 September 2007').
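    a quick way to see it (a sketch, assuming the session's DATEFORMAT setting is the culprit):

    -- the same string parses differently depending on DATEFORMAT
    set dateformat mdy
    select convert(datetime, '9/5/2007')   -- September 5th, 2007

    set dateformat dmy
    select convert(datetime, '9/5/2007')   -- May 9th, 2007

    -- an unambiguous literal parses the same either way
    select convert(datetime, '5 September 2007')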

  • Here is what I think is happening, but not sure how to prove it.

    The table being imported into is indexed against duplicates based on the cust_id field.

    What I'm thinking is that even though my query specifically says order by sold_date desc, the engine is still deduplicating on the first sold_date it sees and then disallowing the real one I want (hopefully I'm making sense).

    Is this possible and/or if it is, how do you combat it?

  • Terry Gamble (1/18/2008)

    truncate table [LM_Date1]
    insert into [LM_Date1]
    select
    ...
    Order by H.[Sold_Date] DESC

    Order bys have no real use in an insert statement. The 'order' that the data's stored in the table depends on a whole lot of underlying factors, not necessarily on the order of the recordset at time of insert.
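    If the goal is the latest sale per customer, it's safer to filter for it explicitly than to rely on insert order. A rough sketch using the column names from the query above (untested against the real schema):

    select C.[Cust_ID], C.Phone, H.[Sold_Date], H.[Amt_Sold], H.[Amt_Paid], H.[Paid_Date]
    from Customers C
    inner join History H on H.[Cust_ID] = C.[Cust_ID]
    where H.[Sold_Date] = (select max(H2.[Sold_Date])
                           from History H2
                           where H2.[Cust_ID] = H.[Cust_ID])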

    Are there any selects where you're seeing this behaviour? If so, could you please post a sample.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Terry Gamble (1/18/2008)


    Here is what I think is happening, but not sure how to prove it.

    The table being imported into is indexed against duplicates based on the cust_id field.

    What I'm thinking is that even though my query specifically says order by sold_date desc, the engine is still deduplicating on the first sold_date it sees and then disallowing the real one I want (hopefully I'm making sense).

    Is this possible and/or if it is, how do you combat it?

    If you have an insert with dups and an index on the destination table that prevents dupes, the entire insert will fail. Inserts succeed or fail as a statement, not row by row (unless you're inserting using a cursor).
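    For example (a throwaway demo table, not the real schema):

    -- without IGNORE_DUP_KEY, the whole multi-row insert fails on the
    -- first duplicate and nothing is inserted
    create table #t (id int not null)
    create unique index ux_t on #t (id)

    insert into #t (id)
    select 1 union all select 1   -- error 2601; the statement rolls back

    select count(*) from #t       -- returns 0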

    Gail Shaw
  • No, no... it won't fail if the IGNORE DUPLICATES option is turned on for the unique index. It will just give a warning that at least one duplicate was ignored.
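    A quick demo (another throwaway table), which also shows why the "wrong" date can survive:

    -- with IGNORE_DUP_KEY on, the duplicate is discarded with a warning
    -- and the insert succeeds -- but nothing guarantees WHICH row survives
    create table #dupes (Cust_ID int not null, Sold_Date datetime not null)
    create unique index ux_dupes on #dupes (Cust_ID) with (ignore_dup_key = on)

    insert into #dupes (Cust_ID, Sold_Date)
    select 1, '20050726' union all
    select 1, '20070905'
    -- "Duplicate key was ignored."

    select * from #dupes   -- one row survives, not necessarily the later Sold_Date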

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff,

    Saying it won't fail is actually a bit of a misnomer.

    You're correct that it does end up going through; however, it reports to the client that an "error" occurred. So if you relied on that step to report success in an SSIS job (let's say), it would come back as a failure.

    Although I do believe Gail hit it on the head: the order by has no real bearing, because the engine stores the data according to how the table is structured.

    I ended up rewriting my logic to not rely on the order by to do the work for me, and it seems basically as fast, although I'm breaking some best-practice rules by using temp tables and indexes to speed things up. But since this only runs about once per week on average, I'm guessing I won't get thumped too hard if someone comes in and looks over my work.

  • "Misnomer" would be correct... the client could, and will, indeed pick it up as a "failure" because of the error message, as you said. Of course, that should be handled...

    Loading the table with no key, doing a dupe delete, and then applying the key would also prevent an error from bubbling up to the client. Use of a DISTINCT probably wouldn't do what's needed.
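    Something along these lines (a sketch; the staging-table and column names are assumed):

    -- a keyless staging table stands in for the real import target
    create table #Staging (Cust_ID int not null, Sold_Date datetime not null)
    insert into #Staging select 1, '20050726' union all select 1, '20070905'

    -- delete everything but the latest Sold_Date per Cust_ID
    delete s
    from #Staging s
    where exists (select 1 from #Staging s2
                  where s2.Cust_ID = s.Cust_ID
                    and s2.Sold_Date > s.Sold_Date)

    -- only then apply the unique key; exact Sold_Date ties would still
    -- need a tiebreaker (see the IDENTITY approach below)
    create unique index ux_staging on #Staging (Cust_ID)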

    --Jeff Moden



  • I've tried multiple times, and in multiple ways, to find a way to handle it, but I've never figured out how to actually capture it and programmatically ignore it.

    Granted, I use SQL in my job, but these "fails" are something we know about ahead of time, and I've never had to actually guard against them.

    But if you've got time, how would you write the T-SQL that would ignore this particular "error" (or instead of "ignore", "handle" might be a better term)?

  • Sorry, Terry... I don't know enough about the client known as SSIS to flip a nickel. I'm just assuming (bad thing to do, I guess) that it has some form of error trapping.

    I do most of this type of thing in T-SQL and pretty much like I said...

    Import the data into a keyless table (actually, an IDENTITY column serves as the temporary key, for performance of the dupe checks). Then I clean the data, including removing any dupes. From there, if it's a whole-table import, I'll cascade a "rename" to replace the original table with the new one (possibly dropping the IDENTITY column in the process in favor of a more natural key). If it's a partial-table import, I'll write an "upsert" to merge the data into the original table. The neat thing is, the "cleaning" process guarantees no errors. A similar thing can be done with table-to-table transfers.
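    The dupe-removal step of that might look like this (a sketch; the table "#Import" and the IDENTITY column "RowID" are assumed names):

    -- staging table with an IDENTITY column as the temporary key
    create table #Import (RowID int identity(1,1), Cust_ID int not null, Sold_Date datetime not null)
    insert into #Import (Cust_ID, Sold_Date)
    select 1, '20070905' union all select 1, '20070905'   -- exact dupes

    -- keep only the last-loaded row per Cust_ID; RowID is a cheap,
    -- unambiguous tiebreaker even when the dates tie exactly
    delete i
    from #Import i
    where i.RowID < (select max(i2.RowID)
                     from #Import i2
                     where i2.Cust_ID = i.Cust_ID)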

    --Jeff Moden



  • Yeah, there is supposed to be a way to trap this error in T-SQL and handle it programmatically (I was talking about SSIS for a different aspect of fail/success). Although I've tried to incorporate error handling, for what I do it's usually not really necessary (I'm the only person in the database), and I've never gotten the examples I've read on web pages to work for what I'm trying to accomplish.
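    For reference, the pattern those pages describe looks roughly like this (SQL Server 2005 and up; sketched with a throwaway table, not my actual schema):

    create table #dest (id int not null primary key)
    insert into #dest (id) select 1

    begin try
        insert into #dest (id) select 1   -- duplicate key
    end try
    begin catch
        if error_number() in (2601, 2627)     -- unique index / PK violation
            print 'Duplicate row skipped.'    -- handle it and carry on
        else
            raiserror('Unexpected error during insert', 16, 1)
    end catch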
