February 6, 2017 at 3:25 pm
Hi,
We have a 40 TB database with some big tables and a few indexes that have become too big to handle.
Table Size:
Index Space: 3,044,139 MB
Row count: 7,982,040,000
Data space: 2,344,409 MB
No. of partitions: 700
5 indexes: sum of page_count = 374,558,277
Fill Factor = 90
A rebuild takes days. Any suggestions on how to manage this quickly and on a schedule?
Or is it time for a 3rd party tool?
February 6, 2017 at 3:36 pm
ALIF-662928 - Monday, February 6, 2017 3:25 PM
Every Database needs a DBA 🙂
February 6, 2017 at 3:48 pm
ALIF-662928 - Monday, February 6, 2017 3:25 PM
Stop rebuilding/reorganizing indexes. Rebuild only statistics instead. Seriously... I've not done any index maintenance on my production boxes since 17 Jan 2016 (more than a year ago) and performance actually got better. You DO need to rebuild stats on a regular basis if the stats have had row mods/inserts against them. I know it sounds strange but I took the tip from Brent Ozar and, by golly, it worked.
Heh... and remember, fragmentation isn't considered when the execution plan is being built, and SANs are being hit so often that large physical sequential reads from disk are nearly impossible. You also find that page splits settle down into a kind of "natural Fill Factor" and not as much space is wasted as you might think. Certainly it's less than with a 90% Fill Factor, which is guaranteed to waste at least 10% of the space, at least temporarily, only to still run into full pages and cause splits anyway. Rebuilding indexes is nearly as futile as shrinking a log file.
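In case it helps, here's a minimal sketch of what "rebuild stats instead" can look like. The table name dbo.MyBigTable is just a placeholder, and sys.dm_db_stats_properties needs SQL Server 2008 R2 SP2 / 2012 SP1 or later:
-- Find statistics that have accumulated row modifications since their last update.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;

-- Refresh the statistics on one table (use a sample instead of FULLSCAN if the table is huge).
UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN;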
--Jeff Moden
Change is inevitable... Change for the better is not.
February 6, 2017 at 3:57 pm
Hi Jeff, do you have any articles to support that?
February 6, 2017 at 4:22 pm
ALIF-662928 - Monday, February 6, 2017 3:57 PM
Hi Jeff, do you have any articles to support that?
If I didn't, I'd write one. 🙂 Actually, I'm in the process of writing one, but the following links are what inspired me to give it a try... that, and the fact that we suffered pretty bad blocking every Monday morning, which was always the day after indexes were rebuilt/reorganized, and it hasn't happened since I stopped doing index maintenance.
https://www.youtube.com/watch?v=fD1CZVc6oUk
https://www.youtube.com/watch?v=qxQv0576iAU
https://www.brentozar.com/archive/2012/08/sql-server-index-fragmentation/
https://www.youtube.com/watch?v=fzy8AnF6UQ8
--Jeff Moden
Change is inevitable... Change for the better is not.
February 7, 2017 at 2:18 am
I would:
Buy a 3rd party tool if I could afford it.
If not, I would:
Rebuild one partition at a time (see the sketch below).
Use MAXDOP to make it a little faster.
Use the SORT_IN_TEMPDB option so I can control where the sort happens.
Put historical data in read-only filegroups so that I don't have to go through this again.
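A minimal sketch of that partition-at-a-time rebuild (the index, table, and partition number are placeholders; ONLINE = ON at the partition level needs SQL Server 2014+ Enterprise Edition):
-- Rebuild a single partition, sorting in tempdb and capping parallelism.
ALTER INDEX IX_MyBigTable_SomeColumn
    ON dbo.MyBigTable
    REBUILD PARTITION = 42
    WITH (SORT_IN_TEMPDB = ON, MAXDOP = 4, ONLINE = ON);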
February 7, 2017 at 6:16 am
In support of Jeff, if you're not seeing scans (and on a 3 TB index, I sure hope not), then constantly rebuilding indexes just doesn't make sense. A point lookup, the retrieval of one or only a few rows, is going to access a very limited number of pages regardless of the fragmentation level (OK, if there's only one row per page, that might be a problem, but that doesn't happen in the real world). For an index that size, with a point lookup, you're probably hitting 4-5 pages for each query. Defragging will only reduce space used (and that, not by much), not improve the performance of the system. However, if you're seeing scans of this data, fragmentation becomes a giant issue. But even then, the core of that issue is the scans, not the fragmentation.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
February 7, 2017 at 6:20 am
ALIF-662928 - Monday, February 6, 2017 3:25 PM
Are you absolutely sure you need all those indexes? Have you checked the usage of each one?
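If it helps, a quick way to sanity-check that is sys.dm_db_index_usage_stats (the table name below is a placeholder, and the counters reset when the instance restarts, so weigh them against uptime):
-- Seeks/scans/lookups vs. updates for every index on the table.
SELECT i.name AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates,
       us.last_user_seek, us.last_user_scan
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id = i.object_id
      AND us.index_id = i.index_id
      AND us.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.MyBigTable');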
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
February 7, 2017 at 6:37 am
Of course you should avoid rebuilding indexes. However, if for whatever reason you do need to periodically rebuild indexes on TB-sized tables, then definitely partition the indexes and REBUILD each partition incrementally. For example, if rebuilding indexes on a billion-row table takes 12 hours, then rebuilding it in (10) 100-million-row partitions will take only about 2 - 4 hours. That's the performance improvement I've seen with very large tables partitioned by year/month. Also, when your tables and indexes are partitioned by something like a period, you have the option of rebuilding only the most recent period that's getting all the updates and page splits. The historical partitions are more static and can be left alone.
In addition to partitioning the indexes, consider enabling page compression while you're at it, which will consume maybe 50% less disk and memory and can also result in a rebuild performance improvement of 15% - 25%.
ALTER INDEX ALL ON MyBigTable REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = PAGE);
ALTER INDEX ALL ON MyBigTable REBUILD PARTITION = 2 WITH (DATA_COMPRESSION = PAGE);
-- ...and so on for each remaining partition.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
February 7, 2017 at 11:24 am
As it seems there is no standard approach for this, I'll have to review all the suggestions you guys have recommended and see how we can manage it for the long run. Again, thanks for the suggestions.
I wish Microsoft would build an index-management process into SSMS in later editions for a better way of approaching and handling this.
February 7, 2017 at 12:07 pm
1) There is no 3rd-party tool that I know of that will make index rebuilds go faster.
2) Ola.hallengren.com can and should be used for all maintenance tasks. You can spread specific index activity around the week/month with it.
3) That is a pretty hefty index size compared to the data size. Has someone been using the Database Engine Tuning Advisor?!? Duplicate/overlapping indexes in there??
4) What is your IO throughput benchmarked out to?? I STRONGLY suspect you need more hardware (especially tempdb I bet). My laptop could probably rebuild 3TB of indexes in less than a day...
5) Partition level activities are what you need for this table. Later builds of SQL Server continue to improve partition-level activities.
6) How much RAM does the box have?
7) Have you performed differential file IO stalls and wait stats analyses during index rebuild??
8) Have you used sp_whoisactive to monitor real-time (and differential too) performance metrics during index rebuild?
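For point 7, one simple approach is to snapshot sys.dm_io_virtual_file_stats before and after the rebuild and diff the numbers (the counters are cumulative since the instance started):
-- Cumulative I/O and stall figures per database file.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads, vfs.io_stall_read_ms,
       vfs.num_of_writes, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;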
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 7, 2017 at 5:35 pm
ALIF-662928 - Tuesday, February 7, 2017 11:24 AM
Keep in mind that partitioning is not a tool to increase performance in most cases. Partitioned tables and indexes are frequently slower than their monolithic counterparts.
Also, if you're seeing scans, fragmentation may or may not be an issue. Remember that the optimizer doesn't even consider fragmentation. More likely, it's a bad query or a query that actually does need to do a scan. Unless the page fullness is ridiculously low (<50%) and your query really does need to do a scan, the level of fragmentation won't really hurt a thing, because data on a SAN is going to be in a terribly random order on disk anyway, and the disk heads don't stand much of a chance of servicing only your request for data, so they're going to move around almost as much as if you had a fragmented local disk.
What may be affected is how much memory is required because SQL Server can't actually do anything with data on a disk... it has to be loaded into memory but, once loaded, you're working at memory speeds where fragmentation doesn't really matter.
On the subject of partitioning, if the data in the table is static after a certain period (such as in an audit table, a log table, or perhaps something like a "completed orders" table), then partitioning can pay off in spades for backups, restores, and index maintenance. Consider that the older data (a month or two old?) will never be written to again. Backing up or reindexing such data is like nuking the same spot twice or washing clean clothes. It just doesn't need to happen. If you make 1 file per 1 filegroup per period, you can make the filegroups READ_ONLY and then tell SQL Server not to back up the read-only filegroups. For restores, it would allow you to quickly get back in business by restoring the latest data first, and then you can take your sweet time restoring the legacy filegroups (not to mention allowing for online piecemeal restores if part of a disk goes bad). Obviously, you'll also never have to rebuild indexes on the READ_ONLY partitions again, either.
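As a rough sketch of that (the database, filegroup, and path names are made up, and marking a filegroup READ_ONLY typically needs exclusive access to the database):
-- Freeze a period's filegroup once it will no longer change.
ALTER DATABASE MyBigDB MODIFY FILEGROUP FG_2016_12 READ_ONLY;

-- Then back up only the read/write filegroups going forward.
BACKUP DATABASE MyBigDB READ_WRITE_FILEGROUPS
    TO DISK = N'X:\Backups\MyBigDB_ReadWrite.bak';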
--Jeff Moden
Change is inevitable... Change for the better is not.
February 7, 2017 at 6:41 pm
ALIF-662928 - Monday, February 6, 2017 3:25 PM
A rebuild takes days. Any suggestions on how to manage this quickly and on a schedule?
Or is it time for a 3rd party tool?
Easy.
Do not rebuild, build instead.
Create a new index with exactly the same definition as the one you want to rebuild, but with a different name.
While it's being created queries use the old one.
Wait until it's fully built and then drop the original index.
You can do it as often as you wish - if you feel like your server has nothing better to do.
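A minimal sketch of that build-then-drop approach (all names are placeholders; match the original index's keys, includes, compression, and partition scheme exactly):
-- Build the replacement under a new name while queries keep using the old index.
CREATE NONCLUSTERED INDEX IX_MyBigTable_SomeColumn_v2
    ON dbo.MyBigTable (SomeColumn)
    INCLUDE (OtherColumn)
    WITH (ONLINE = ON, SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE)
    ON ps_MyPartitionScheme (PartitionKeyColumn);

-- Once the new index exists, drop the original.
DROP INDEX IX_MyBigTable_SomeColumn ON dbo.MyBigTable;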
But I'd look at the reason behind those 700 partitions.
There must be something really wrong with the DB design, which makes any index defragmentation effort a nearly useless exercise.
_____________
Code for TallyGenerator
February 8, 2017 at 11:34 am
Hi All,
Thanks for all of your valuable replies,
Here is the server config for those who wanted to know,
Memory: 256 GB
Processor: Intel(R) Xeon(R) CPU E5-2630 @ 2.50 GHz (2 processors)
OS: Windows Server 2012 R2
SQL Server 2014 Enterprise (12.0.4457.0)
Tempdb = 10 files on SSDs
Database Size: 31 TB (we have couple of DBs around 10 TB on the same host apart from this main DB)
DB available space: 3 TB (we did a lot of truncates and swapped a table to a partitioned table, so we left the free space as it is; the data will grow into it eventually)
Tables: 500+
Recovery Model : Simple
Parallelism : 8 (got better performance on the queries with 8)
Backups: SnapBackup Manager (no SQL Server backups, to reduce read/write overhead on the host)
Here is the history of this table, so you know where we are going,
Main table= 25 columns
Rows = 8 Billion+
Size = 3 TB
Index Size = 2.5 TB
Partitions =700
Replication/InMemory = None
The DB has over 500 tables, most of them a few hundred GB. Those are manageable; queries on them are fast, have been analyzed thoroughly, and show no issues.
The indexes in question are for this one big table specifically, which is heavily used by Analytics and is pouring in data like anything.
Initially we did not think it would grow that big, but it has now grown to over 2 TB due to many changes in their application implementation.
We swapped the table from a non-partitioned table to a partitioned one. This helped us and we now get better performance than before.
To get some more performance out of it, we got SSDs and moved tempdb onto them.
Then we moved the latest 1 month of the table's partitions (30 files, partitioned by day) onto one SSD LUN and the older data onto the iSCSI LUN.
After the swap to the partitioned table, we had a lot of changes in the data and the fragmentation got worse. It was fine for almost 6 months, but now it has become of no use. The other DBA who made these indexes used a 100% fill factor.
Now we tried rebuilding one of the 5 non-clustered indexes on this table with ONLINE = ON and a 90% fill factor, and it took 15 hours to complete, which I guess is pretty fast with the ONLINE option.
I did not notice any issues in sp_whoisactive while it was rebuilding the index.
Our next strategy would be to index the most recent partitions instead of all the partitions together, as I got confirmation from the application team that they are mostly interested in the last month. But there will be minor changes in the historic data, so we cannot keep those partitions read-only.
This would be quick and would target only the data they need, and hopefully we will revisit the other, less used partitions after a long time.
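Something like this is what I have in mind (the partition function, table, and index names below are just placeholders for ours, since the table is partitioned by day):
-- Work out which partition holds today's data, then rebuild only that partition.
DECLARE @p int = $PARTITION.pf_MyDailyPartitionFunction(CAST(GETDATE() AS date));

ALTER INDEX IX_MyBigTable_SomeColumn
    ON dbo.MyBigTable
    REBUILD PARTITION = @p
    WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);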
We have a script similar to Ola Hallengren's for the other tables, excluding this big table, which we maintain manually once in a while.
We do drop-and-recreate for indexes on smaller tables, or use the ONLINE = ON option; for the big one we do not have enough storage to create one lavishly 🙂
We are also trying to look at the possibility of using columnstore indexes; we plan to upgrade the host to 2016, get rid of some other indexes, and use Query Store and other features.
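If we go that route after the 2016 upgrade, I'm picturing something like this (the column and partition scheme names below are only placeholders; an updatable nonclustered columnstore index needs SQL Server 2016+, which fits the planned upgrade):
-- A nonclustered columnstore index over the columns Analytics scans, aligned to the partition scheme.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_MyBigTable
    ON dbo.MyBigTable (Col1, Col2, Col3, PartitionKeyCol)
    ON ps_MyPartitionScheme (PartitionKeyCol);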
Again thanks for all the suggestions,
Regards
ALI
!!! Every DB needs a DBA 🙂 !!!
February 8, 2017 at 1:15 pm
Main question - what is the clustered index on the main table?
_____________
Code for TallyGenerator