January 5, 2021 at 12:30 am
ScottPletcher wrote: Yes, sorry, as I stated in passing earlier, what I meant was that you "can't" in practice even if you can in theory, because it takes so long to do. Again, I should have been clearer about that. I guess, though, if you have no choice, you have no choice.
As I stated multiple times, after the rebuilds, I immediately switch the recovery model to FULL and take a full backup.
I can do any recovery past that point by restoring that full backup, including point-in-time recovery after it.
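For reference, a minimal sketch of the sequence described above, assuming a hypothetical database named MyBigDb and placeholder backup paths:

-- After the index rebuilds finish under SIMPLE recovery, switch back to FULL:
ALTER DATABASE MyBigDb SET RECOVERY FULL;

-- Take a full backup immediately; this becomes the new base for all later restores:
BACKUP DATABASE MyBigDb
    TO DISK = N'X:\Backups\MyBigDb_post_rebuild.bak'
    WITH CHECKSUM, STATS = 10;

-- Routine log backups resume from here, so point-in-time recovery is possible
-- for anything after this full backup:
BACKUP LOG MyBigDb
    TO DISK = N'X:\Backups\MyBigDb_log_001.trn';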
How do you recover lost data with your method if the differential backup gets corrupted? You can't restore that diff, and it is the only copy of the data in your method.
Besides, that diff is likely to be almost the size of the db anyway. I'd rather just do a full backup. I see too many space and time disadvantages to doing a diff after rebuilds that large (again, I stated this same thing earlier).
I've not seen a backup file so corrupted it can't be restored, at least in modern times (since SQL 2000). Maybe I've just been lucky. We do make two copies of backups on two separate tapes, if that matters here. If you're that worried about backup corruption, include CHECKSUM in the backup.
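For example, a minimal sketch of requesting checksum validation during the backup (database name and path are placeholders):

-- CHECKSUM verifies page checksums as pages are read and adds a checksum over the backup itself:
BACKUP DATABASE MyBigDb
    TO DISK = N'X:\Backups\MyBigDb_full.bak'
    WITH CHECKSUM, STATS = 5;
-- By default a checksum failure stops the backup (STOP_ON_ERROR); CONTINUE_AFTER_ERROR overrides that.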
To me, this seems more about "breaking the rules about the log chain" than about actual processing and recovery of data. I'll stick with the practical approach here when it's much more performant and much easier to recover with than slavishly following the rules.
On a different, but indirectly related, subject, and one most vital for performance: the "rule" about "(always) using an identity value as the clustering index" is a horrible error, the worst and most damaging myth in databases. So, yeah, I don't follow that "rule" either.
I can recover using transaction log backups - because I can go to a *previous* backup file and apply all transaction log backups from that point forward - including a tail-log backup if something drastic has occurred. If we lose those transaction log backups then, of course, we cannot restore. The point is that I don't *need* the DIFF to restore; it just reduces the number of files I need to restore and will be faster than restoring all transaction log backups prior to that DIFF.
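A minimal sketch of that restore path (database name, file paths, and the number of log backups are placeholders):

-- 1. If the database is still accessible, capture the tail of the log first:
BACKUP LOG MyBigDb
    TO DISK = N'X:\Backups\MyBigDb_tail.trn'
    WITH NORECOVERY, NO_TRUNCATE;

-- 2. Restore the earlier full backup without recovering:
RESTORE DATABASE MyBigDb
    FROM DISK = N'X:\Backups\MyBigDb_previous_full.bak'
    WITH NORECOVERY;

-- 3. Apply each log backup in sequence (a DIFF, if usable, just shortens this list):
RESTORE LOG MyBigDb FROM DISK = N'X:\Backups\MyBigDb_log_001.trn' WITH NORECOVERY;
RESTORE LOG MyBigDb FROM DISK = N'X:\Backups\MyBigDb_log_002.trn' WITH NORECOVERY;
-- ...continue through to the tail-log backup, then bring the database online:
RESTORE LOG MyBigDb FROM DISK = N'X:\Backups\MyBigDb_tail.trn' WITH RECOVERY;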
This isn't about how much time it takes to recover - it is about how much data you can recover. As soon as you break the log chain (and this isn't slavishly following the rules) you no longer have the ability to recover across that break. Just because you took a full backup after that does not *guarantee* that backup file is available - or that the database wasn't already corrupted and you backed up that corruption - or that any number of other issues won't occur.
The point is - to me - very simple. Why risk your ability to recover your systems when you have a valid methodology available *that does the same thing*? Use the exact same process but switch to bulk-logged instead of simple, and you get the same benefit (minimally logged index rebuilds) plus the added benefit of transaction log backups to recover across/past that process.
It just doesn't make sense to me to add risk when there are better options with less risk available.
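A minimal sketch of the bulk-logged alternative described above, assuming a hypothetical MyBigDb database and placeholder table and backup paths:

-- Take a log backup right before the maintenance window:
BACKUP LOG MyBigDb TO DISK = N'X:\Backups\MyBigDb_log_before_rebuild.trn';

-- Switch to bulk-logged; index rebuilds are minimally logged here, just as under SIMPLE:
ALTER DATABASE MyBigDb SET RECOVERY BULK_LOGGED;
ALTER INDEX ALL ON dbo.BigTable REBUILD;

-- Switch back to FULL and take a log backup; this backup contains the bulk-logged
-- extents and keeps the log chain unbroken:
ALTER DATABASE MyBigDb SET RECOVERY FULL;
BACKUP LOG MyBigDb TO DISK = N'X:\Backups\MyBigDb_log_after_rebuild.trn';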
You don't get the same benefits. I don't have to back up massive logs and then restore them. You do.
It's unrealistic to try to apply such huge (30GB+) log files (I have some tables with billions of rows and dbs in the 200GB to 500GB range). From a practical standpoint, my method is the only timely recovery available. Yes, in theory you can go back and apply all those logs, including the massively huge ones, but in reality it just takes too long; you'd be recovering for an extraordinarily long time. I don't have that much time available to get back up and running if something happens after the rebuilds that I need to recover from.
>> you no longer have the ability to recover across that break. << Neither do you. You can NEVER recover to a point in time after minimal logging has occurred and before you've done a diff or full backup. You can only recover to a point before that or a point after that, the same as I can.
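As an illustration of that restriction, a hedged sketch of a point-in-time restore attempt (database name, path, and timestamp are placeholders): if the STOPAT time falls inside a log backup that contains minimally logged operations, SQL Server refuses the restore, and you can only recover to the end of that log backup or later.

RESTORE LOG MyBigDb
    FROM DISK = N'X:\Backups\MyBigDb_log_after_rebuild.trn'
    WITH RECOVERY, STOPAT = '2021-01-05T02:30:00';
-- Fails if the STOPAT time falls within the minimally logged window captured by this log backup.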
Corrupt backups are a red herring. A corrupt backup will prevent you from recovering as well. Again, if you have corrupt backups that often: (1) turn on CHECKSUM, and (2) audit your backup system and hardware to find out where the corruption is occurring.
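Two ways to catch that kind of corruption early, sketched with placeholder paths:

-- Verify a backup file (and its embedded checksums) without restoring it:
RESTORE VERIFYONLY
    FROM DISK = N'X:\Backups\MyBigDb_full.bak'
    WITH CHECKSUM;

-- Confirm from msdb history which backups were actually taken WITH CHECKSUM:
SELECT database_name, backup_finish_date, has_backup_checksums
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;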
SQL DBA, SQL Server MVP (07, 08, 09). "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
January 10, 2021 at 9:00 am
From Brent Ozar,
Rebuild: An index ‘rebuild’ creates a fresh, sparkling new structure for the index. If the index is disabled, rebuilding brings it back to life. You can apply a new fillfactor when you rebuild an index. If you cancel a rebuild operation midway, it must roll back (and if it’s being done offline, that can take a while).
https://www.brentozar.com/archive/2013/09/index-maintenance-sql-server-rebuild-reorganize/
https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html -- You may be able to get the process and update it for your use.
Depending on the index fragmentation, it could take anywhere from 2 to 4 hours... I would try it on a test server.
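If you just want to try the plain rebuild with a new fill factor mentioned in the quote above, a minimal sketch (table and index names are placeholders; ONLINE = ON requires Enterprise edition, and Ola Hallengren's scripts wrap this kind of logic with fragmentation thresholds):

ALTER INDEX IX_BigTable_SomeKey ON dbo.BigTable
    REBUILD WITH (FILLFACTOR = 90, ONLINE = ON, SORT_IN_TEMPDB = ON);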
DBASupport
January 12, 2021 at 2:01 am
What’s gonna happen to that “sparkling new structure” after the 1st INSERT statement is executed the next morning? Will it still be as sparkling?
What about the 2nd insert? Will the 1st one have time to finish the work caused by page splits before the 2nd one starts doing the same? Would they deadlock each other?
Having a good rebuild is not the task to focus on. Making rebuilds unnecessary by choosing the correct indexing strategy - that’s the goal to achieve.
_____________
Code for TallyGenerator