Backup procedure not reading last modified date

  • I don't think the offline USB backup is paranoia at all; I would call it prudence, if you don't have something better available for storing offline backups. Even before ransomware attacks were common, a fire, hardware problem, etc. could destroy backups stored on the same appliances as the rest of the network, and you needed something to get them off-site and offline. Now the threat of ransomware is very real, and I have seen data loss from clients who had good, tested backups but didn't keep those backups air-gapped from the rest of the network. It can happen to a SOHO organization all the way up to a large enterprise.


  • Even weirder. I just made some manual changes in a table that I confirmed has a primary key. Initially, it showed a change in only one of the tables, from yesterday. I made a change in the one showing Null for the most recent change, and the proper date/time appeared, while the one from yesterday remained. Then I made a change to the one showing yesterday's date, and both showed the proper date/time: today, moments ago. Then I looked at some other things, including waiting for the automated task to attempt a backup. It didn't, and seems to have hung, but looking at the most recent date/time change, one of them has suddenly gone back to Null.

  • pdanes wrote:

    ScottPletcher wrote:

    OK, I finally found notes on the sys view I was trying to remember:

    sys.dm_db_file_space_usage

    This time, that should (actually) help you determine what you need to know (you would likely need to store the previous values to compare to the current ones).

    Great, thank you. I'll have a look in the morning.

    I'm looking at it now, and I do not see anything that would help me deal with this. I see three ID columns and eight page count columns. What should I get from this that would help with my problem? Sorry if I'm being dense, but I don't understand what I can do with this.
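
    For context, one way that view could feed a change-detection check is via its modified_extent_page_count column (available in SQL Server 2016 SP2 and later), which resets on each full backup. This is only a sketch; the #LastCounts snapshot table is a made-up name for illustration:

    ```sql
    -- Snapshot the current counts (e.g. right after a backup).
    SELECT file_id,
           modified_extent_page_count   -- pages in extents modified since the last full backup
    INTO   #LastCounts                  -- hypothetical snapshot table
    FROM   sys.dm_db_file_space_usage;

    -- Later, before the scheduled backup: any difference means data changed.
    SELECT cur.file_id
    FROM   sys.dm_db_file_space_usage AS cur
    JOIN   #LastCounts AS prev
           ON prev.file_id = cur.file_id
    WHERE  cur.modified_extent_page_count <> prev.modified_extent_page_count;
    ```

    In practice the snapshot would live in a permanent table rather than a temp table, so it survives between backup runs.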

  • As another random thought - Are you seeing any pattern to when it blanks out? Like is it good at 8:02 but by 8:15 it is blank? If so, I imagine you have some automated process doing something on it.

    Any chance you have scheduled jobs running that could be doing something goofy on the database? For example, an ETL process that drops the indexes to do the data load and then recreates them once the load completes?

    Or, if this is a third party tool running on the database, any chance it could be doing something crazy like dropping and recreating indexes at app startup or worse - a stored procedure call drops and recreates some indexes?

    I know I set that up on my ETL system as we do a truncate and reload of all ETL related tables during our nightly load and it helped performance a ton.

    Might not hurt to run profiler/extended events to see what's happening. Start profiler, run your script to update data, wait 15 minutes or however long it takes for the field to blank out. Then look at the profiler/extended event logs to see what happened.
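
    A minimal Extended Events session for that kind of check might look like the following sketch (the session name, file name, and database name are made up for illustration):

    ```sql
    -- Capture DDL that could be dropping/recreating objects, plus statement completions.
    CREATE EVENT SESSION [trace_index_ddl] ON SERVER
    ADD EVENT sqlserver.object_altered,
    ADD EVENT sqlserver.object_deleted,
    ADD EVENT sqlserver.sql_statement_completed
        (WHERE sqlserver.database_name = N'YourDb')   -- hypothetical database name
    ADD TARGET package0.event_file (SET filename = N'trace_index_ddl.xel');

    ALTER EVENT SESSION [trace_index_ddl] ON SERVER STATE = START;

    -- ...reproduce the problem, then read the captured events:
    -- SELECT * FROM sys.fn_xe_file_target_read_file('trace_index_ddl*.xel', NULL, NULL, NULL);
    ```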

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Mr. Brian Gale wrote:

    As another random thought - Are you seeing any pattern to when it blanks out? Like is it good at 8:02 but by 8:15 it is blank? If so, I imagine you have some automated process doing something on it.

    Any chance you have scheduled jobs running that could be doing something goofy on the database? For example, an ETL process that drops the indexes to do the data load and then recreates them once the load completes?

    Or, if this is a third party tool running on the database, any chance it could be doing something crazy like dropping and recreating indexes at app startup or worse - a stored procedure call drops and recreates some indexes?

    I know I set that up on my ETL system as we do a truncate and reload of all ETL related tables during our nightly load and it helped performance a ton.

    Might not hurt to run profiler/extended events to see what's happening. Start profiler, run your script to update data, wait 15 minutes or however long it takes for the field to blank out. Then look at the profiler/extended event logs to see what happened.

    There is no third-party stuff in here at all. I personally wrote the entire thing - table design, queries, stored procedures - everything. Bits and pieces of cribbed code from SQL Server discussion forums, of course, but all adapted to my system and none doing anything like rebuilding indexes or spawning jobs. No jobs anywhere in the system. The only thing automated is this backup construction, which is activated from outside SQL Server entirely.

    Just discovered that if I open the table in SSMS for editing, the last date/time change stays as long as I leave the table open. When I close the edit window, the date/time goes to Null, but not for the LAST table closed. That is, if I have the same table open in both databases, and I manually edit both tables, the last change values are correct from both databases. When I close one of the tables, the value from that database goes to Null. When I close the second one, it stays correct. Doesn't matter which one goes first. It always loses the first one closed, and the second one stays correct.
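
    For reference, the kind of last-change lookup being described would match sys.dm_db_index_usage_stats, whose contents are cleared whenever the database is closed or the instance restarts — which fits the symptoms here. A sketch, with a hypothetical table name:

    ```sql
    -- last_user_update is NULL if the table has had no tracked write activity
    -- since the database was last opened; closing the database wipes these stats.
    SELECT MAX(us.last_user_update) AS last_change
    FROM   sys.dm_db_index_usage_stats AS us
    WHERE  us.database_id = DB_ID()
      AND  us.object_id   = OBJECT_ID(N'dbo.YourTable');  -- hypothetical table name
    ```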

  • That sounds like it may be a bug in SSMS then... but oof... using the edit GUI in SSMS? I find that to be clunky and painful to work with. I ALWAYS use scripts for updating or retrieving data. I like my changes to be repeatable. As a thought, are you using the latest and greatest SSMS OR are you using the one that matches your SQL instance version or something different? I tend to try to use the same version of SSMS as my SQL instance if I am doing any GUI-based work as there are some bugs that can pop up (especially around SSIS).

    And your backup process is done outside of SQL? Is there a reason you don't use the SQL Agent to schedule the jobs? I mean, nothing against doing it outside SQL, but doing it inside SQL has some advantages such as built-in logging.

    If you ARE using scripts for it, any chance that you have an open transaction that is being rolled back when you close the tab? Might not hurt to toss the query into an explicit transaction to see if that helps.
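
    That suggestion, sketched out (table and column names are placeholders):

    ```sql
    BEGIN TRANSACTION;

    UPDATE dbo.YourTable          -- hypothetical table
    SET    SomeColumn = SomeColumn
    WHERE  Id = 1;

    -- Committing explicitly rules out an implicit rollback when the tab closes.
    COMMIT TRANSACTION;
    ```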


  • Mr. Brian Gale wrote:

    That sounds like it may be a bug in SSMS then... but oof... using the edit GUI in SSMS? I find that to be clunky and painful to work with. I ALWAYS use scripts for updating or retrieving data. I like my changes to be repeatable. As a thought, are you using the latest and greatest SSMS OR are you using the one that matches your SQL instance version or something different? I tend to try to use the same version of SSMS as my SQL instance if I am doing any GUI-based work as there are some bugs that can pop up (especially around SSIS).

    And your backup process is done outside of SQL? Is there a reason you don't use the SQL Agent to schedule the jobs? I mean, nothing against doing it outside SQL, but doing it inside SQL has some advantages such as built-in logging.

    If you ARE using scripts for it, any chance that you have an open transaction that is being rolled back when you close the tab? Might not hurt to toss the query into an explicit transaction to see if that helps.

    The About header says v 18.7, but the details are this:

    SQL Server Management Studio 15.0.18357.0

    SQL Server Management Objects (SMO) 16.100.44091.28

    Microsoft Analysis Services Client Tools 15.0.19260.0

    Microsoft Data Access Components (MDAC) 10.0.17763.4644

    Microsoft MSXML 3.0 6.0

    Microsoft .NET Framework 4.0.30319.42000

    Operating System 10.0.17763

     

    I'm doing the edits manually, trying to track down why this blasted value is going to Null. Obviously not something I would normally do in a production environment. I'm just making changes to trigger the backup process, but it's not working.

     

    I just tried writing a short bit of T-SQL to make some changes. None of the code has any impact on the change date/time, UNTIL I open the table for manual editing in SSMS. Then the date/time gets updated to reflect the previous change, even if I don't do anything in the edit window.

  • Mr. Brian Gale wrote:

    And your backup process is done outside of SQL? Is there a reason you don't use the SQL Agent to schedule the jobs? I mean, nothing against doing it outside SQL, but doing it inside SQL has some advantages such as built-in logging.

    If you ARE using scripts for it, any chance that you have an open transaction that is being rolled back when you close the tab? Might not hurt to toss the query into an explicit transaction to see if that helps.

    I use VBScript to call a stored procedure, and that SP does the actual backup. I did it that way initially, because I had the free version and that did not allow jobs. I also have stuff written into the VBScript to compress and copy the backup files to various storage places.

    I suppose I could completely abandon my previous constructs: schedule a job to run the SP every 15 minutes, forget the last-change date/time test, create endless backups of no changes, and change the SP to create compressed backups. But that means redoing a great deal of all this. I have never used the job scheduler, and trying to learn and implement an entirely new way of doing something that has been working reliably for many years just seems like it should not be necessary.
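
    For what it's worth, compression can be done inside the stored procedure itself rather than in the VBScript (database name and path below are placeholders; WITH COMPRESSION requires a supporting edition — Standard from SQL Server 2008 R2 onward):

    ```sql
    BACKUP DATABASE [YourDb]                   -- hypothetical database name
    TO DISK = N'D:\Backups\YourDb.bak'         -- hypothetical path
    WITH COMPRESSION, INIT, CHECKSUM;
    ```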

  • I agree that re-doing the backup strategy seems like overkill, especially when it is working. Mind you, as a long term solution and for future supportability, I would be moving things over to SQL jobs if possible. If you go on vacation and the backup DBA has no VBScript experience and the script fails, you REALLY don't want to be getting called back in to debug the problem. Much nicer if you can use the native tools that a DBA SHOULD know so you don't have to think about work when you are not working. Even a very junior DBA SHOULD have some knowledge of how to review the logs of a SQL agent job.

    BUT, like you said, that'll take time and right now your goal is to get things up and running without reinventing the wheel.

    I'd still recommend loading up profiler/extended events, running the update script (even if it updates nothing, e.g. with a "WHERE 1=0" at the end) inside an explicit transaction (just in case it is being rolled back and the index therefore not seen as updated), so you can see what is happening according to profiler/extended events. That'll probably point you in the right direction as to what is happening and hopefully help fix the problem.

    And, I think it was mentioned earlier, but any chance you have "autoclose" or "autoshrink" turned on? I doubt that autoshrink would cause this issue, but I could see autoclose doing it, and best practice is to have both of those options turned off in production instances.
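
    Checking and turning those options off is a one-liner each (the database name below is a placeholder):

    ```sql
    -- See which databases have the options on
    SELECT name, is_auto_close_on, is_auto_shrink_on
    FROM   sys.databases;

    -- Turn them off, per the best-practice recommendation
    ALTER DATABASE [YourDb] SET AUTO_CLOSE  OFF;  -- hypothetical database name
    ALTER DATABASE [YourDb] SET AUTO_SHRINK OFF;
    ```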

     



  • Mr. Brian Gale wrote:

    And, I think it was mentioned earlier, but any chance you have "autoclose" or "autoshrink" turned on? I doubt that autoshrink would cause this issue, but I could see autoclose doing it, and best practice is to have both of those options turned off in production instances.

    GOOD GOD, YES!!! I don't know how I missed it earlier, but both of these had AutoClose turned on. I just turned it off, and the last change value now stays put.

    I have no idea how that could have happened - I most certainly didn't do it. There are four databases here. Two I created manually here, and they both had AutoClose off. But the two that were turned on both arrived here as restored copies of a backup made on the 2008 R2 system, where I don't even see AutoClose as an option. AutoShrink, yes, and that is turned off. I wonder if restoring an older-version backup automatically turns that option on? I'll have to look at that and do some tests with some other imports.

    Many thanks, and thanks also to ScottPletcher, who mentioned it yesterday, but somehow I didn't clue in to go look at it. If you ever make it to Prague, the beers are on me.

  • Great, glad that resolved it.  Autoclose on or taking the db offline were the only things I could think of that would cause that to happen.


  • pdanes wrote:

    GOOD GOD, YES!!! I don't know how I missed it earlier, but both of these had AutoClose turned on. I just turned it off, and the last change value now stays put.

    I have no idea how that could have happened - I most certainly didn't do it.

    If the databases ever ran in SQL Express, that is a default setting that gets carried forward in the database. Or at least it was a default in some previous version of Express.

    Have had fun with that before. Naturally, in my cases it was also hiding junk apps that weren't closing their connections properly, which led to new problems after it was turned off.
