October 1, 2014 at 10:50 am
Is there a performance limit on the number of indexes per table or database?
With filtered indexes, there appear to be many more opportunities for more finely defined, and therefore smaller, indexes, potentially resulting in many more indexes on a single table.
October 1, 2014 at 10:58 am
Indexing is a HUGE topic, both in breadth and depth. The best answer is that, to be optimal, you put on all the proper indexes that give you the "best" performance for your app, on the whole and/or in part, while not causing other unacceptable issues.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
October 1, 2014 at 11:48 am
Kevin is correct. And the number of indexes that is correct is going to vary greatly based on workload. If it is an active OLTP system, then fewer is usually better; for a DW/reporting database, more is usually better.
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
October 2, 2014 at 9:49 am
It depends on the workload and the type of operations.
In data warehousing, which is basically the type of database I work with, I've seen indexes covering 80% or more of a table's columns, and those are needed. Their impact is minimal or nonexistent due to the specific type of workload we have.
I would suggest checking your critical queries, without ignoring your nightly data loads (if any), and adding your indexes based on that.
Also, don't blindly add indexes based on what Management Studio may suggest, or drop indexes just because a DMV does not show much usage for them. Keep an eye on usage for several days, weeks, or even months, in order to get a proper picture of the query pattern if you have not done that before. Sometimes companies run a monthly or yearly report that, without a specific index, will never finish or will run for hours.
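A minimal sketch of that kind of usage check, using the real sys.dm_db_index_usage_stats DMV (column names are the documented ones; note the counters reset on instance restart, which is why a long observation window matters):

```sql
-- Index usage in the current database: seeks/scans/lookups vs. updates.
-- Indexes with many updates but few reads are candidates to *review*,
-- not to drop blindly -- a monthly or yearly report may still need them.
SELECT  OBJECT_NAME(s.object_id)  AS table_name,
        i.name                    AS index_name,
        s.user_seeks,
        s.user_scans,
        s.user_lookups,
        s.user_updates
FROM    sys.dm_db_index_usage_stats AS s
JOIN    sys.indexes AS i
          ON i.object_id = s.object_id
         AND i.index_id  = s.index_id
WHERE   s.database_id = DB_ID()       -- current database only
ORDER BY s.user_updates DESC;
```

Snapshot this periodically into a table if you want a history that survives restarts.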
October 2, 2014 at 5:44 pm
Just to add to what everyone has said... any index that does not get used is one index too many. This is a huge topic, but here are a few rules to live by:
(1) It's usually a good idea to have a clustered index on every table (technically speaking, clustered indexes are preferable to heaps)
(2) Clustered indexes should be on columns that queries frequently sort or range-scan on.
(3) Monitor your index usage. DMVs are good for that kind of thing. Check out
Performance Tuning with SQL Server Dynamic Management Views
by Tim Ford and Louis Davidson for more about that
(4) If you have a column, say col1, that allows NULLs, and you run queries that filter for "WHERE col1 IS NOT NULL", then consider a filtered index that filters out NULLs
(5) If you use window functions, you should have a POC (Partition, Order, Cover) indexing strategy. Google "POC index sql server itzik ben-gan" for more details. This is huge in my data warehouses.
-- Itzik Ben-Gan 2001
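To make rule (4) concrete, here is a minimal sketch of a filtered index. The table, column, and included-column names (Orders, ShippedDate, CustomerID, TotalDue) are hypothetical, invented for illustration:

```sql
-- Hypothetical Orders table with a nullable ShippedDate column.
-- A filtered index that excludes the NULL rows stays much smaller than a
-- full index and can be used by queries that filter on IS NOT NULL.
CREATE NONCLUSTERED INDEX IX_Orders_ShippedDate_NotNull
    ON dbo.Orders (ShippedDate)
    INCLUDE (CustomerID, TotalDue)      -- covering columns (assumed names)
    WHERE ShippedDate IS NOT NULL;

-- A query shaped like this can seek on the filtered index:
-- SELECT CustomerID, TotalDue
-- FROM   dbo.Orders
-- WHERE  ShippedDate IS NOT NULL
--   AND  ShippedDate >= '20140101';
```

One caveat worth knowing: the optimizer will only match the filtered index when the query predicate provably implies the index's WHERE clause, so parameterized predicates on the filtered column may fall back to other indexes.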