December 30, 2016 at 12:26 pm
We have a 3rd-party app, with little control over the code but possibly some control over object locations. It's an OLTP system with several audited tables; the audit records are stored in a separate audit schema and tables. Since these are basically written once and then only read, I was wondering, from a SAN perspective, whether anyone had performance ideas for the write/read pattern of these audit tables, and for separating them from the drives that handle the read/update/write workload.
December 31, 2016 at 12:23 am
How often are the audit tables actually read? I ask because such things usually end up being the biggest things in the database and during an emergency restore, can really slow down the "get back in business quickly" nature of things.
Even if the audit tables aren't in a different schema, you can always move the data to a different filegroup on less expensive, slower drives, like maybe an iSCSI box (like we did).
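A minimal sketch of that move, assuming a hypothetical database (`MyOltpDb`), filegroup, file path, table, and key column — adjust all names to your own environment:

```sql
-- Hypothetical names throughout; sketch only.
ALTER DATABASE MyOltpDb ADD FILEGROUP AuditFG;
ALTER DATABASE MyOltpDb
    ADD FILE (NAME = AuditData1, FILENAME = 'E:\SlowSan\AuditData1.ndf')
    TO FILEGROUP AuditFG;

-- Rebuilding the clustered index with DROP_EXISTING moves the table's
-- data onto the new filegroup in one operation.
CREATE UNIQUE CLUSTERED INDEX CIX_AuditTable
    ON audit.AuditTable (AuditID)
    WITH (DROP_EXISTING = ON)
    ON AuditFG;
```

If the clustered index backs a PRIMARY KEY constraint, you'd drop and re-create the constraint `ON AuditFG` instead of using `DROP_EXISTING`.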
Unless the app executes DDL to change the tables (in which case, you might want to consider getting a new software vendor), does its own index maintenance (ditto my previous suggestion), or takes an "inventory" of all its own stuff to make sure it hasn't been tampered with (ditto again), you can take it to the next level, like I have in one particular instance.
I built an "archive" database, moved the audit tables there (partitioning them by month on the way in), and added synonyms in the original database for the tables I moved. I also built some empty tables with the same structure in the original database that I could repoint the synonyms to if I ever needed to do a quick restore without having to load all of the previous audit data.
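The synonym trick looks something like this (all object names here are hypothetical placeholders for your own):

```sql
-- After the data has been copied to the Archive database,
-- drop the original table and point a synonym at the new home.
DROP TABLE audit.AuditTable;
CREATE SYNONYM audit.AuditTable FOR Archive.dbo.AuditTable;

-- For a "get back in business quickly" restore, repoint the synonym
-- at a local, empty, identically structured table instead:
DROP SYNONYM audit.AuditTable;
CREATE SYNONYM audit.AuditTable FOR audit.AuditTable_Empty;
```

The app keeps referencing the same two-part name either way, which is why this works without touching vendor code.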
The reason I partitioned the audit tables in the "archive" database was so that I could assign one partition per filegroup, with one filegroup per month. With fully aligned indexes, that means I not only don't have to do any index maintenance on the older monthly partitions, but I was also able to set them to READ_ONLY status so that I wouldn't have to back them up again once I packed and sealed them for the month. When I first did it, that cut my nightly full backups down from more than 10 hours (there were 7 years of data that we still have to keep, and it's freakin' huge) to about 6 minutes. Restores have tested out similarly.
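A bare-bones sketch of that setup, with hypothetical function, scheme, and filegroup names and only a few monthly boundaries shown:

```sql
-- One boundary per month (RANGE RIGHT so each boundary date starts a month).
CREATE PARTITION FUNCTION pfAuditMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('2016-11-01', '2016-12-01', '2017-01-01');

-- One filegroup per monthly partition (names are placeholders).
CREATE PARTITION SCHEME psAuditMonthly
AS PARTITION pfAuditMonthly
TO (AuditFG201610, AuditFG201611, AuditFG201612, AuditFG201701);

-- Once a month is packed and sealed, mark its filegroup read-only;
-- it then needs no further index maintenance or nightly backups.
ALTER DATABASE Archive MODIFY FILEGROUP AuditFG201612 READ_ONLY;
```

Note that `MODIFY FILEGROUP ... READ_ONLY` needs exclusive access to the database, so it's typically done during a maintenance window.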
If you don't have the Enterprise Edition of 2014, you can still partition the tables using Partitioned Views. Looking back on it, those probably would have worked better for me, because tables in a read-only status can benefit from slightly different indexing than the current month's partition, which is read/write.
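A partitioned view is just a `UNION ALL` over one table per month, where each table carries a trusted CHECK constraint on the date column so the optimizer can skip the months it doesn't need. A hypothetical sketch:

```sql
-- Each monthly table has a constraint like:
--   CHECK (AuditDate >= '2016-12-01' AND AuditDate < '2017-01-01')
-- and each table can be indexed differently from the others.
CREATE VIEW dbo.AuditTable
AS
SELECT AuditID, AuditDate, AuditData FROM dbo.AuditTable_201611
UNION ALL
SELECT AuditID, AuditDate, AuditData FROM dbo.AuditTable_201612
UNION ALL
SELECT AuditID, AuditDate, AuditData FROM dbo.AuditTable_201701;
```

The view has to be re-created each month to include the new table, but that's easy to automate.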
There are caveats to partitioning either way, but it has seriously streamlined my nightly full backups and reduced my "get back in business" time for restores. It also allows for online piecemeal restores of the "archive" database, should the need ever rear its ugly head.
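A piecemeal restore brings the database partially online right away and fills in the older, read-only filegroups afterward. A hedged sketch with hypothetical backup paths and filegroup names (and note that the *online* variant is an Enterprise Edition feature):

```sql
-- Get back in business: restore PRIMARY and the current month first.
RESTORE DATABASE Archive
    FILEGROUP = 'PRIMARY', FILEGROUP = 'AuditFG201701'
    FROM DISK = 'E:\Backup\Archive_Partial.bak'
    WITH PARTIAL, RECOVERY;

-- Later, while the database is already in use, restore a sealed month.
RESTORE DATABASE Archive
    FILEGROUP = 'AuditFG201612'
    FROM DISK = 'E:\Backup\Archive_FG201612.bak'
    WITH RECOVERY;
```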
--Jeff Moden
Change is inevitable... Change for the better is not.
January 3, 2017 at 6:20 am
No chance of changing the vendor. These are DML triggers, and I do expect the audit tables to get rather large.