Log shipping and database backups

  • I'm not sure I'd want to sign up for that scenario. It sounds like you'll be managing it to death (your death, that is). Starting to wonder if just restoring backups might be viable. Specifically, restore A's backup on B, apply the indexes, then back up B, make local copies of that B backup on C and D, and then restore those backups on C and D. Timeframe could be an issue there, but I'm not sure managing the scenario you've described will actually be palatable either. Any chance that the reporting load on C and D could be handled by just one rather larger server? You might be stretching the limits of the achievable here.

    Steve (aka sgmunson) 🙂 🙂 🙂
    Rent Servers for Income (picks and shovels strategy)
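
    Translated into commands, that chain might look something like this (the database name, shares and paths here are all invented placeholders, not taken from the actual environment):

    ```sql
    -- On server A: take a full backup to a share the other servers can reach.
    BACKUP DATABASE LiveDB TO DISK = N'\\share\LiveDB.bak' WITH INIT, COMPRESSION;

    -- On server B: restore it, then apply the reporting indexes.
    RESTORE DATABASE LiveDB FROM DISK = N'\\share\LiveDB.bak' WITH REPLACE, RECOVERY;
    -- ...run the reporting-index script here...

    -- Back up the now-indexed copy; copy that file locally to C and D and restore it on each.
    BACKUP DATABASE LiveDB TO DISK = N'\\share\LiveDB_indexed.bak' WITH INIT, COMPRESSION;
    RESTORE DATABASE LiveDB FROM DISK = N'D:\Backups\LiveDB_indexed.bak' WITH REPLACE, RECOVERY;
    ```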

  • The more I think about it, what kind of delay is acceptable in the reporting environment (C & D), relative to the live data on server A? Any chance you could do a one-time backup of Server A onto a secondary database right there on Server A, and then keep that up to date with triggers on the original database? Then you could replicate to C & D from there.

    Steve (aka sgmunson) 🙂 🙂 🙂
    Rent Servers for Income (picks and shovels strategy)

  • sgmunson - Wednesday, May 16, 2018 8:16 AM

    sgmunson - Wednesday, May 16, 2018 8:22 AM

    To respond to these in reverse order...
    The current delay is around a day. We take a snapshot of the mirrored databases (mirrored to yet another server we're trying to get rid of) at midnight each night and use those for our (non-live) reporting.
    Anything more regular would be a bonus.  I mentioned log shipping at 15-minute intervals to keep the log sizes down to palatable levels.
    I like the idea of the secondary databases being on server A and replicating from there; I just don't fancy the thought of writing triggers on 11K tables (yes, that's right, 11 thousand tables).  The databases aren't normalised, which doesn't help matters.

    As for your original points...
    Managing the scenario is (more than likely) going to be painful (and contribute to even more hair loss).
    I suppose we could write scripts to back up the database(s) on A, restore on B (and apply the indexes), and then backup/restore on C & D relatively easily.  It would mean really keeping an eye on the indexes we create so they get added to the indexing script.  I did have another post on here (here) about grabbing current indexes, storing them in a table and running a job to apply them.  If I can get that up and running, that would be a bonus.
    If I can get that working, we can store all the indexes in a table and run a job to just restore them.  It would save keeping them scripted and making sure the script gets updated every time something changes.
    We did originally use server A for all our reporting (using linked servers) but it put server A under too much strain and we were told to think of an alternative (hence mirroring).
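
    The "store the indexes in a table and run a job to apply them" idea could be sketched roughly like this (dbo.ReportingIndexes and its columns are invented names for illustration, not the actual implementation from the other post):

    ```sql
    -- Invented store for the reporting index definitions.
    CREATE TABLE dbo.ReportingIndexes
    (
        TableName  sysname       NOT NULL,
        IndexName  sysname       NOT NULL,
        CreateStmt nvarchar(max) NOT NULL
    );

    -- After each restore on B, a job could simply replay the stored definitions.
    DECLARE @sql nvarchar(max);
    DECLARE idx CURSOR LOCAL FAST_FORWARD FOR
        SELECT CreateStmt FROM dbo.ReportingIndexes;
    OPEN idx;
    FETCH NEXT FROM idx INTO @sql;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC sys.sp_executesql @sql;
        FETCH NEXT FROM idx INTO @sql;
    END;
    CLOSE idx;
    DEALLOCATE idx;
    ```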

  • richardmgreen1 - Wednesday, May 16, 2018 8:51 AM

    Wow, 11 thousand tables?  Are there a significant number of those that never get used?  I can't quite get my head around the idea that you could ever use more than a few hundred of them.

    Steve (aka sgmunson) 🙂 🙂 🙂
    Rent Servers for Income (picks and shovels strategy)

  • sgmunson - Wednesday, May 16, 2018 8:16 AM

    I wouldn't...that's why I was warning about it earlier. I tested something like it and ended up having to write my own polling for some of it. I had to keep playing with the schedules, and the unpredictability of how much data arrived at what times kept either the log shipping or the replication from working well. Multiple schedules didn't work either. Time frames are a huge issue. Like I said before, it was a management nightmare; we scrapped the idea because of the management of it all.

    Sue

  • Hi Steve

    Yeah, we only use a few hundred, but the data is scattered all over.
    I think, for safety's sake, we'd need to set up triggers on all the tables and replicate the lot.
    The vendor has a nasty habit of starting to populate a table without telling anyone (and we find out about it a long time afterwards).

  • Sue_H - Wednesday, May 16, 2018 9:56 AM

    Hi Sue

    Yeah, it looks like we're completely scrapping log shipping in favour of something else (see Steve's post).

  • richardmgreen1 - Thursday, May 17, 2018 1:51 AM

    At least the triggers would take care of the "they didn't tell you about it" situation.  Can't say I'd be looking forward to writing 11,000 of them, but maybe this is something you can script out.  And what are the chances they add a column somewhere?  You may end up needing a DDL trigger to catch those, too...

    Steve (aka sgmunson) 🙂 🙂 🙂
    Rent Servers for Income (picks and shovels strategy)

  • We've got some code to write dynamic triggers, so we'll be using some of that to put a trigger on each table (should be reasonably fast as well).
    We're not writing DDL triggers as yet because, chances are, they only add fields when we ask for them, and we'd have to restart replication anyway.
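
    As a rough illustration, that kind of generator can be driven from the catalog views; the trigger body here is deliberately abbreviated (the real version would build the full MERGE column lists from sys.columns):

    ```sql
    -- Hedged sketch: emit one CREATE TRIGGER statement per user table.
    SELECT 'CREATE TRIGGER trg_' + t.name + '_Data_Transfer'
         + ' ON dbo.' + QUOTENAME(t.name)
         + ' AFTER INSERT, UPDATE, DELETE AS BEGIN'
         + ' /* MERGE into testdbcopy.dbo.' + t.name + ' goes here */'
         + ' END;' AS TriggerStmt
    FROM sys.tables AS t
    WHERE t.is_ms_shipped = 0;
    ```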

  • richardmgreen1 - Thursday, May 17, 2018 2:03 AM

    There was just too much activity for the two processes to be easily coordinated. It would work with low traffic, but not when it was more like production.
    Your vendor does some things that would be concerning, and it sounds like some of it is a bit unpredictable. That's going to make things more challenging.

    Sue

  • It looked complicated, but I couldn't see any way around it.
    Thanks (again) to Steve for a brilliantly simple solution.

    As for unpredictable, that's an understatement.  And as for concerning activity, we keep flagging up ways (again and again) to improve some of their processes, but we get ignored.

  • richardmgreen1 - Thursday, May 17, 2018 8:57 AM

    The day may come when you realize that it might actually be easier (and potentially less expensive) to design your own software from scratch rather than continue to pay this screwball vendor for their low-quality crap.  Eleven thousand tables sounds more like something designed specifically to keep you trapped in their box, so to speak.

    Steve (aka sgmunson) 🙂 🙂 🙂
    Rent Servers for Income (picks and shovels strategy)

  • Hi Steve

    I've already mentioned building our own in-house software (there are enough skills here to do it) but was told it would be too complex and too expensive.
    I work for a hospital in the UK (not sure where anyone else is), so we would have to build something very complicated to encompass everything.

    I still say it's doable (with a better SQL back-end as a bonus), but it would take a lot of time/effort and no-one in higher management seems to want to buy in.

  • richardmgreen1 - Friday, May 18, 2018 2:04 AM

    Yeah, nothing new in regard to management not really understanding the true cost of what they continue to do poorly.  Maybe the bums in management could at least use their political clout to either beat up your vendor or start schmoozing them in such a way that cooperation on making changes actually starts to happen.  It's the kind of thing one OUGHT to be able to expect from management, but it rarely occurs...

    EDIT:  And most times, when a vendor knows they "have you over a barrel", they'll just keep bending you over, so to speak, and there will be no incentive for them to change until someone stands up to the bullying that represents.

    Steve (aka sgmunson) 🙂 🙂 🙂
    Rent Servers for Income (picks and shovels strategy)

  • Hi again 😀

    I've added a few columns to my receiving database as follows:-
    SYSDateLoaded - this is when the data first appeared
    SYSDateLastUpdated - this is the last time it was changed
    SYSIsDeleted - sometimes data gets deleted from the main database instead of being flagged as removed, so we added this to make sure we keep everything.

    The two SYSDate* fields above are for our use, as the vendor's RowUpdateDateTime field should be the correct timestamp for the last time a row was changed but isn't always accurate (it's been known to be 10+ hours in the future).

    That's all gone swimmingly.
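
    For illustration, the extra columns might be added along these lines (the constraint names and defaults shown are an assumption, not taken from the actual build):

    ```sql
    ALTER TABLE dbo.AbsElectronicSigAttestText ADD
        SYSDateLoaded      datetime2 NOT NULL
            CONSTRAINT DF_AbsESAT_Loaded DEFAULT SYSDATETIME(),
        SYSDateLastUpdated datetime2 NULL,
        SYSIsDeleted       bit NOT NULL
            CONSTRAINT DF_AbsESAT_Deleted DEFAULT 0;
    ```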

    I'm dynamically creating the triggers on each table and I've got the code working for most tables.

    However, some tables have a timestamp column (don't know why, but I'm just ignoring it as I won't need it for anything).
    Some tables also have various identity columns that are causing me a headache.
    I'm getting this error trying to create the trigger:-
    Msg 8102, Level 16, State 1, Procedure trg_AbsElectronicSigAttestText_Data_Transfer, Line 9 [Batch Start Line 0]
    Cannot update identity column 'TextID'.

    I've tried the following (so far):-
    SET IDENTITY_INSERT ON as part of the trigger creation on the sending database - that doesn't seem to work, as I'm still getting the error when I try to create the trigger.
    SET IDENTITY_INSERT ON as part of adding the extra columns to the receiving database - that's not working either, as it doesn't seem to "stick".

    This is my code for creating the trigger (on a random table):-
    CREATE TRIGGER trg_AbsElectronicSigAttestText_Data_Transfer
    ON dbo.AbsElectronicSigAttestText
    AFTER UPDATE, INSERT, DELETE
    AS
        BEGIN
            DECLARE @Now    DATETIME2    = GETDATE();

            MERGE testdbcopy.dbo.AbsElectronicSigAttestText AS TARGET
            USING Inserted AS SOURCE
            ON SOURCE.SourceID = TARGET.SourceID
            WHEN NOT MATCHED THEN
                INSERT
                    (
                        SourceID
                        ,VisitID
                        ,AbstractID
                        ,AbsFormID
                        ,FormID
                        ,TextSeqID
                        ,TextID
                        ,TextLine
                        ,RowUpdateDateTime
                    )
                VALUES
                    (
                        SOURCE.SourceID
                        ,SOURCE.VisitID
                        ,SOURCE.AbstractID
                        ,SOURCE.AbsFormID
                        ,SOURCE.FormID
                        ,SOURCE.TextSeqID
                        ,SOURCE.TextID
                        ,SOURCE.TextLine
                        ,SOURCE.RowUpdateDateTime
                    )
            WHEN MATCHED AND(
                                SOURCE.VisitID <> TARGET.VisitID
                        OR        SOURCE.AbstractID <> TARGET.AbstractID
                        OR        SOURCE.AbsFormID <> TARGET.AbsFormID
                        OR        SOURCE.FormID <> TARGET.FormID
                        OR        SOURCE.TextSeqID <> TARGET.TextSeqID
                        OR        SOURCE.TextID <> TARGET.TextID
                        OR        SOURCE.TextLine <> TARGET.TextLine
                            ) THEN
                UPDATE SET
                    TARGET.VisitID = SOURCE.VisitID
                    ,TARGET.AbstractID = SOURCE.AbstractID
                    ,TARGET.AbsFormID = SOURCE.AbsFormID
                    ,TARGET.FormID = SOURCE.FormID
                    ,TARGET.TextSeqID = SOURCE.TextSeqID
                    ,TARGET.TextID = SOURCE.TextID
                    ,TARGET.TextLine = SOURCE.TextLine
                    ,TARGET.SYSDateLastUpdated = @Now;

            WITH DELETES AS
                (
                    SELECT
                        Deleted.SourceID
                    FROM
                        deleted
                    EXCEPT
                    SELECT
                        Inserted.SourceID
                    FROM
                        inserted
                )
            UPDATE
                TARGET
            SET
                TARGET.SYSIsDeleted = 1
                ,TARGET.SYSDateLastUpdated = @Now
            FROM
                testdbcopy.dbo.AbsElectronicSigAttestText TARGET
                INNER JOIN DELETES
                    ON DELETES.SourceID = TARGET.SourceID;
        END;

    If the table doesn't have an IDENTITY field in it, the dynamic code seems to work fine (although I haven't tested it yet).  I've even managed to pick up the primary key columns for use in the MERGE statement.

    I've tried "SET IDENTITY_INSERT AbsElectronicSigAttestText ON" just after the BEGIN statement, but it hasn't worked.

    Anyone have any ideas how I can insert an IDENTITY value as part of the trigger?
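
    For what it's worth, Msg 8102 most likely comes from the WHEN MATCHED branch: an identity column can never be UPDATEd, regardless of IDENTITY_INSERT, so TextID would have to come out of the UPDATE SET list. IDENTITY_INSERT itself only affects INSERTs, is a per-session setting, and must name the table (a three-part name is allowed). An untested sketch, using an invented two-column shape rather than the real table:

    ```sql
    -- Allow explicit values into the identity column (affects INSERTs only,
    -- one table per session at a time).
    SET IDENTITY_INSERT testdbcopy.dbo.AbsElectronicSigAttestText ON;

    INSERT INTO testdbcopy.dbo.AbsElectronicSigAttestText (SourceID, TextID)
    VALUES (1, 42);   -- TextID supplied explicitly

    SET IDENTITY_INSERT testdbcopy.dbo.AbsElectronicSigAttestText OFF;
    ```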
