record loading

  • As you have been advised before, do not do the DATEADD on the column - instead, convert the dates to values compatible with the source t_stamp and use a WHERE t_stamp BETWEEN value_a AND value_b (a sketch of this is at the end of this post).

    And as mentioned before, use aliases on all tables and columns.

    Finally, why are you using a linked server to link to yet another server? Why not go directly to the final server (REM) in your query?
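
    Something along these lines might work (a sketch only, not the poster's code: it assumes the local server supports DATEDIFF_BIG, and it builds the whole statement dynamically because OPENQUERY only accepts a literal string; srv1 and the table name are taken from the posted query):

    -- Sketch: compute the millisecond boundaries locally, then splice them into the OPENQUERY text
    DECLARE @StartMs bigint = DATEDIFF_BIG(ms, '1969-12-31 20:00:00', '2021-12-09');
    DECLARE @EndMs   bigint = DATEDIFF_BIG(ms, '1969-12-31 20:00:00', '2021-12-11'); -- exclusive upper bound

    DECLARE @sql nvarchar(max) = N'
    select *
    from OPENQUERY(srv1, ''select tagid, floatvalue, t_stamp
                           from ignition.dbo.sqlt_data_1_2021_12
                           where t_stamp >= ' + CONVERT(nvarchar(20), @StartMs) + N'
                             and t_stamp <  ' + CONVERT(nvarchar(20), @EndMs) + N'
                             and floatvalue is not null'')';
    EXEC (@sql);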

  • To keep the query on the remote server "sargable", you shouldn't use any function on its columns in the WHERE, so something more like this (I may not have the exact calc right, although I tried very hard to be accurate; the "<" with the next day is intentional, it's needed here):

    from OPENQUERY(srv1, 'select tagid,floatvalue,DATEADD(s,t_stamp/1000,''1969-12-31 20:00:00'') from rem.ignition.dbo.sqlt_data_1_2021_12
    where t_stamp >= datediff_big(ms, ''1969-12-31 20:00:00'', ''12-09-2021'') AND t_stamp < datediff_big(ms, ''1969-12-31 20:00:00'', ''12-11-2021'') AND floatvalue is not null');

    SQL DBA, SQL Server MVP (07, 08, 09)
    "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." - "Norm", on "Cheers".
    Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".

  • I would modify the query so that I wasn't converting the t_stamp value to a date in the WHERE clause.  I might also move the conversion out of the OPENQUERY and perform it on the results returned from the remote system.

    It looks to me like t_stamp is a unix timestamp - except that you are offsetting that by 4 hours.  So maybe a unix timestamp stored as UTC where you want to filter based on a specific time zone offset (-04:00)?

    If so - then calculate the dates and use the actual Unix timestamps that represent those dates and times in the query, instead of converting the t_stamp column to a datetime.
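
    A rough sketch of that split (only the raw t_stamp is filtered remotely, and the conversion to a date happens locally on the returned rows); the millisecond values are the ones worked out later in this thread for 2021-12-21 through 2021-12-25, and the server/table names come from the posted query:

    select q.tagid,
           q.floatvalue,
           DATEADD(s, q.t_stamp / 1000, '1969-12-31 20:00:00') as DateRecorded  -- conversion done locally
    from OPENQUERY(srv1, 'select tagid, floatvalue, t_stamp
                          from ignition.dbo.sqlt_data_1_2021_12
                          where t_stamp >= 1640044800000
                            and t_stamp <  1640390400000
                            and floatvalue is not null') as q;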

     

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • correct

    It looks to me like t_stamp is a unix timestamp - except that you are offsetting that by 4 hours.  So maybe a unix timestamp stored as UTC where you want to filter based on a specific time zone offset (-04:00)?

  • The rem server is SQL 2012.

    Local is SQL 2016.

    Thanks.

  • And what is the third server?

    You are connecting to SERVER X (where you execute the code) - from this one you do an open query to SRV1 - and that query queries another server (REM).

    So again, why do you have 3 servers in the mix, and why do you not go from SERVER X directly to server REM?

  • Bruin wrote:

    Insert Into Tag_Data_Capture_staging
    (TagName,CaptureValue,DateRecorded)
    select *
    from OPENQUERY(srv1, 'select tagid,floatvalue,DATEADD(s,t_stamp/1000,''1969-12-31 20:00:00'') from rem.ignition.dbo.sqlt_data_1_2021_12
    where DATEADD(s,t_stamp/1000,''1969-12-31 20:00:00'') between ''12-09-2021'' AND ''12-10-2021'' and floatvalue is not null');

    I did the insert to a local table and used criteria to narrow down the data that is retrieved.  Do you see any better way to do the Open Query?

    Thanks for your replies.

    In the OPEN QUERY, you have the following...

    where DATEADD(s,t_stamp/1000,''1969-12-31 20:00:00'')

    That will make the use of an index on the other server impossible because it's not SARG-able code.  The same rules apply on the remote server as they do on a local server when it comes to such things.

    Why are they dividing a timestamp by 1000?  Is it a millisecond-based "UNIX TimeStamp" with a 4 hour offset???  It certainly looks that way.

    The best thing to do would be to convert your two limiting dates into a properly offset Unix TimeStamp so that you can make a SARG-able bit of criteria that might actually stand a chance of using an index seek and effective range scan.

    Also be aware that DATEADD is limited to an INT.  That means that any date (using your offset) after 2038-01-18 23:14:07.000 will fail with an arithmetic overflow (it's referred to as "The 2038 Problem" for the obvious reason).

    If you make the query SARG-able, it will probably run a whole lot faster because there's probably an index on the other server on the timestamp column.
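
    As a rough illustration of that overflow and one way around it (a sketch only; it uses the plain '1970' UTC epoch rather than the 4-hour-shifted one, and assumes t_stamp is a bigint holding milliseconds):

    DECLARE @t_stamp bigint = 2147483648000;  -- one second past the INT limit, i.e. just past 2038-01-19 03:14:07 UTC

    -- This is the call that breaks: @t_stamp / 1000 no longer fits in an INT.
    -- SELECT DATEADD(s, @t_stamp / 1000, '1970');  -- arithmetic overflow

    -- Splitting the offset into whole days plus leftover seconds keeps each DATEADD argument inside INT range.
    SELECT DATEADD(second, @t_stamp / 1000 % 86400,
           DATEADD(day,    @t_stamp / 1000 / 86400, '1970')) AS utc_datetime;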

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • And, I didn't read Jeffrey Williams' post before I posted (I was responding to a post on the first page of this thread).  He's spot on about the Unix TimeStamp thing.

    --Jeff Moden

  • So make a call in the open query to get the int value of t_stamp, then use that as the driver in the query..

    Sorry, no 3rd server (rem) - that was a misstep on my part.

    Just 2 servers, local and srv1.

    Thanks..

  • Then give us the code as it is supposed to run - and are you going to be hardcoding and changing the dates every time it runs? You were already advised and given pointers on how to allow dynamic values for those.

  • Bruin wrote:

    So make a call in the open query to get the int value of t_stamp, then use that as the driver in the query..

    If I'm understanding what you wrote above correctly (it's really hard to tell what you mean), then NO.  Convert the two filter dates to UTC, then convert those to UNIX TimeStamps, and use those timestamps against the t_stamp column so that the query is SARGable.

    --Jeff Moden

  • I tried to use this function and passed in the date from which I want to start pulling information, just as a test so I could verify the results.

    SELECT dbo.UNIX_TIMESTAMP('12-21-2021 00:00:00');

    I plugged the value it returned into my Open Query:

    select *
    from OPENQUERY(srv1, 'select tagid,floatvalue,DATEADD(s,t_stamp/1000,''1969-12-31 20:00:00'') from ignition.dbo.sqlt_data_1_2021_12
    where t_stamp >= ''1640062800''')

    The data pulled back started with 12-1-2021

     

     

     

    ALTER FUNCTION UNIX_TIMESTAMP (@ctimestamp datetime)
    RETURNS integer
    AS
    BEGIN
        RETURN DATEDIFF(second, '1970-01-01', GETUTCDATE()) + DATEDIFF(second, GETDATE(), @ctimestamp);
    END

  • I would recommend reviewing this article: https://www.sqlservercentral.com/articles/convert-unix-timestamps-to-date-times-in-sql-server-part1

    It looks like your calculation for the UNIX timestamp is incorrect.  From that article (@JeffModen):

    --===== Convert a given Date/Time to the "Old" and "New" UNIX Timestamps.
    DECLARE @GivenDateTime DATETIME = '2022-07-19 17:59:30.853'
    ;
    SELECT OldUnixTS=DATEDIFF (ss,'1970',@GivenDateTime) --Always based on Seconds
    ,NewUnixTS=DATEDIFF_BIG(ms,'1970',@GivenDateTime) --Based on Milliseconds, in this case.
    ;

    I am not even sure what your function is trying to do here - or why you would even care about UTC.  The UNIX timestamp is an offset in seconds or milliseconds from a base date and converting to/from the UNIX timestamp really doesn't have anything to do with UTC.

    Jeffrey Williams

  • I used that piece of code but just used DATE:

    ----===== Convert a given Date/Time to the "Old" and "New" UNIX Timestamps.
    DECLARE @GivenDateTime DATE = '2021-12-21';
    SELECT  OldUnixTS = DATEDIFF(ss,'1970',@GivenDateTime)      --Always based on Seconds
           ,NewUnixTS = DATEDIFF_BIG(ms,'1970',@GivenDateTime)  --Based on Milliseconds, in this case.
    ;

    ----===== Convert a given Date/Time to the "Old" and "New" UNIX Timestamps.
    DECLARE @GivenDateTime DATE = '2021-12-25';
    SELECT  OldUnixTS = DATEDIFF(ss,'1970',@GivenDateTime)      --Always based on Seconds
           ,NewUnixTS = DATEDIFF_BIG(ms,'1970',@GivenDateTime)  --Based on Milliseconds, in this case.
    ;

    Then I used those 2 values in my Open Query...

    select *
    from OPENQUERY(jcsql, 'select tagid,floatvalue,DATEADD(s,t_stamp/1000,''1969-12-31 20:00:00'') from ignition.dbo.sqlt_data_1_2021_12
    where (t_stamp >= ''1640044800000'' and t_stamp <= ''1640390400000'') and floatvalue is not null ')

    I saw records prior to 2021-12-21

    Thanks.

  • Not surprising - what dates/times did you see prior to 12/21?  Based on your code, I suspect the dates and times you get back will not line up with the local dates you intended.

    Your conversion from the UNIX timestamp uses 1969-12-31 20:00 as the offset start.  I would first convert the time stamp to its correct date/time and then use AT TIME ZONE to convert to the appropriate local time.  The process that you should follow is:

    1. Get date range in local time
    2. Convert date range from local date/times to UTC date/times
    3. Convert the UTC date/times to UNIX timestamps
    4. Use those timestamps in your open query
    5. Convert the t_stamp column to a datetime

    To include all times within a given date - you should also be using a half-open range check.  That is - to include all of 2022-12-25 you want to calculate the upper-bound timestamp from '2022-12-26 00:00:00.000' local time.  Convert that to UTC - then convert it to a UNIX timestamp and pull everything less than that value (t_stamp >= 'start' and t_stamp < 'end').
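
    A minimal sketch of steps 1 to 3 and step 5 (the time zone name here is an assumption - the thread only establishes that the offset looks like -04:00 - and the boundary dates are simply the ones used earlier; AT TIME ZONE and DATEDIFF_BIG need the SQL 2016 side):

    -- 1) date range in local time (the upper bound is one day past the last wanted date)
    DECLARE @LocalStart datetime2 = '2021-12-21';
    DECLARE @LocalEnd   datetime2 = '2021-12-26';

    -- 2) local -> UTC, and 3) UTC -> UNIX millisecond timestamps
    DECLARE @StartMs bigint = DATEDIFF_BIG(ms, '1970',
            CONVERT(datetime2, @LocalStart AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'UTC'));
    DECLARE @EndMs   bigint = DATEDIFF_BIG(ms, '1970',
            CONVERT(datetime2, @LocalEnd   AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'UTC'));

    -- 4) use @StartMs / @EndMs as the t_stamp >= / < boundaries in the OPENQUERY text

    -- 5) convert a returned t_stamp back: to UTC first, then to local time
    DECLARE @SampleTs bigint = 1640044800000;
    SELECT DATEADD(ms, @SampleTs % 1000, DATEADD(s, @SampleTs / 1000, '1970'))
           AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS local_time;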

     

     

    Jeffrey Williams
