Need help with Index basics

  • Several of our databases lack indexes (some tables don't even have a primary key!). That can't be good for performance. I'm new to performance analysis, so I don't know where to start looking for the likely bottlenecks.

    The most fundamental question, which I can't answer: if I add an index to a column, how do I tell whether the change is an improvement?

    SQL 7/2K has an Index Tuning Wizard, but I tried it once and it said it couldn't help. Is it of any real value? Are any other products worth looking at? Give me some direction.

    Thanks.

    Specifically, I need to tell whether an index change I'm making (adding an index, or changing one from clustered to nonclustered) has any real effect on speed. I can see the problem, but I don't understand how to rectify the situation.

  • I've had mixed results with the tuning wizard. A good start is to get a primary key on each table. Next, index the columns that are foreign keys, since they're used in joins a lot. Then try to identify a good candidate for the clustered index; go with the primary key to start with if you're not sure.
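
    One concrete version of those first two steps (a sketch only; the table and column names here are made up):

        -- Hypothetical tables; substitute your own names.
        -- Add a primary key (SQL Server makes it a clustered index by default).
        ALTER TABLE dbo.Orders
            ADD CONSTRAINT PK_Orders PRIMARY KEY (OrderID)

        -- Index the foreign key column that gets used in joins.
        CREATE INDEX IX_Orders_CustomerID
            ON dbo.Orders (CustomerID)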

    As far as measuring improvements, there are two ways. The low-tech way is to record your server performance (CPU, disk I/O) before the changes, then do it again afterwards. A more granular approach is to capture the SQL statements using Profiler, then pick the worst ones to tune. I definitely recommend changing one thing at a time.
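
    For the low-tech measurement on a single query, one option is the statistics output in Query Analyzer (the query below is hypothetical):

        -- Compare the output before and after the index change.
        SET STATISTICS IO ON     -- logical/physical reads per table
        SET STATISTICS TIME ON   -- CPU and elapsed time

        SELECT OrderID, OrderDate
        FROM dbo.Orders          -- hypothetical table
        WHERE CustomerID = 42

        SET STATISTICS IO OFF
        SET STATISTICS TIME OFF

    Fewer logical reads afterwards is usually a real improvement; elapsed time on its own can be noisy.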

    Andy

  • You can also use Profiler and get the "worst" queries by filtering for those that take 2 sec, 3 sec, or whatever. This is a good place to start.

    I've had decent luck with the tuning wiz, but only if you have a good workload. It provides me with some ideas about which indexes I may want.

    Usually, though, I start with the longest queries and go from there. Tuning is a constant battle as your application and workload change.

    I agree with Andy's recommendations as well.

    Steve Jones

    steve@dkranch.net

  • Bill,

    I've gotten good use out of the Index Analyzer. It has come up with good suggestions, even after I had already added the indexes that I thought were useful.

    But to do that you need test data.

    Lots of test data.

    So much test data that SQL Server cannot store all of it in RAM.
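
    One crude way to manufacture that much data (a sketch; the table name is made up, and you have to seed it with a few rows first) is to keep doubling it:

        -- Seed dbo.TestOrders with a handful of rows before running this.
        -- Each pass doubles the row count; 20 passes multiply it ~1,000,000x.
        DECLARE @i int
        SET @i = 0
        WHILE @i < 20
        BEGIN
            INSERT INTO dbo.TestOrders (CustomerID, OrderDate, Amount)
            SELECT CustomerID, OrderDate, Amount
            FROM dbo.TestOrders
            SET @i = @i + 1
        END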

    It is no good at giving you suggestions for clustered indexes. That is probably because there are no hard and fast rules about them. So far I've found very little information about clustered indexes, but I've spent a bit of time playing with them on 25 GB of data.

    And it works.

    You get better speed on the reports that can utilize the clustered index.
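
    Something like this is what I mean (made-up names; the point is that the clustered index stores the rows in that physical order, so a range report reads one contiguous slice):

        -- Only one clustered index per table; drop any existing one first.
        CREATE CLUSTERED INDEX CIX_Sales_SaleDate
            ON dbo.Sales (SaleDate)

        -- A date-range report now scans consecutive pages:
        SELECT SUM(Amount)
        FROM dbo.Sales
        WHERE SaleDate >= '20020101' AND SaleDate < '20020201'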

    But how do you determine which clustered index is best? I do not know.

    If you find out, please let me know.

    Best regards

    Henrik Staun Poulsen

    Stovi Software

    Denmark

  • Check out this link; it may help:

    http://support.microsoft.com/default.aspx?scid=kb;EN-US;Q110352

  • A couple of people mentioned Profiler, and that's a bread and butter tool here. One thing Profiler will give you when you do a capture is *how* the data is being accessed. You can see which stored procedures and individual queries are being executed. If you take the time to pull some of them out, you can put them in Query Analyzer and look at the Execution Plan. The column(s) used to select the data (in the WHERE clause, for instance) should give you an indication of where to look at putting indexes. Once you put in an index, take another look at the Execution Plan for the same procedures and queries, and see whether the table scans have become index seeks or scans, etc.
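
    If you want a text version of the plan for easy before/after comparison, Query Analyzer can print it without running the query (the query here is hypothetical):

        -- Each SET SHOWPLAN statement must be alone in its batch.
        SET SHOWPLAN_TEXT ON
        GO
        SELECT CustomerID, OrderDate
        FROM dbo.Orders              -- hypothetical table
        WHERE OrderDate > '20020101'
        GO
        SET SHOWPLAN_TEXT OFF
        GO

    Before indexing OrderDate you would expect a Table Scan operator in the output; after, an Index Seek.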

    As for book resources, you might look at the SQL Server 7.0 or 2000 Performance Tuning Guide from MS Press.

    K. Brian Kelley

    bk@warpdrivedesign.org

    http://www.sqlservercentral.com/columnists/bkelley/


  • Hi

    I take the following approach:

    a) Primary keys are mandatory (don't cluster them unless they are cluster-friendly and you will get the maximum benefit from them, since you only get one clustered index per table).

    b) Index your foreign keys.

    Then sit back with Profiler and collect all the SELECT statements running against your TEST or DEV databases (don't run the trace for too long, as it really whacks DB performance), and run it through the Index Tuning Wizard. Look long and hard at the results and reports. The wizard can come up with some interesting findings, e.g. people querying tables without the concatenated indexes being used, because the order of the columns in the join is not the same as in the index.
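
    For example (made-up names), a concatenated index only helps when the query filters or joins on its leading column:

        -- Concatenated index with CustomerID as the leading column.
        CREATE INDEX IX_Orders_Cust_Date
            ON dbo.Orders (CustomerID, OrderDate)

        -- Can seek on the index (predicate matches the leading column):
        SELECT OrderID FROM dbo.Orders WHERE CustomerID = 42

        -- Cannot seek on it (OrderDate is only the trailing column):
        SELECT OrderID FROM dbo.Orders WHERE OrderDate = '20020101'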

    Tuning a large application's SQL is hard at times. I have found Profiler inconsistent in the "duration" column, and because of that, locating poor SQL is difficult. I have a classic example where Profiler was reporting 20 sec (duration) for a large SQL statement, but the query was blisteringly fast and fully tuned!? I believe it's to do with transactions initiated at the client, but I have yet to confirm that.

    Cheers

    Chris


    Chris Kempster
    www.chriskempster.com
    Author of "SQL Server Backup, Recovery & Troubleshooting"
    Author of "SQL Server 2k for the Oracle DBA"

  • Profiler is a good tool for this type of issue. I have had good results capturing the Profiler data to a table and writing custom queries to retrieve the information you're looking for. You can search for specific counters over a certain threshold, such as I/O, CPU, or duration, and return things like the command text being issued. If you can watch the performance degrade, kick Profiler on and off while your server is being hit. The table will grow fast, so keep an eye on it.
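
    For example, assuming the trace was saved to a table called dbo.TraceData (the columns below are the standard ones Profiler writes):

        -- Worst statements above a 2-second threshold.
        -- Duration is in milliseconds in SQL 2000 Profiler.
        SELECT TOP 20 TextData, Duration, CPU, Reads, Writes
        FROM dbo.TraceData
        WHERE Duration > 2000
        ORDER BY Duration DESC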

    pochinej

    Production DBA

  • I do believe Profiler is reporting back not just the retrieval time from the engine's perspective, but the total time, including delivery to the client. We've seen changes in how the client on a particular server (a COM+ server) responds drastically affect the duration times for identical stored procedure calls.

    K. Brian Kelley

    bk@warpdrivedesign.org

    http://www.sqlservercentral.com/columnists/bkelley/


  • 1) Definitely create a primary key.

    2) Have a lot of test data and create a workload.

    3) Run the Index Tuning Wizard against that workload.

    4) Use Profiler and get the timings.

    5) Look at the queries from the business perspective. What kinds of queries do you use often? What fields do you query on very often? Do those queries filter on one field or on several? Then create an index on those fields.

    6) Based on the same business perspective, find the field on which you can create a clustered index.

    7) If needed, you can also create nonclustered indexes for other sets of queries.

    Basically, you need a primary key, and then you should think about the various kinds of queries you run. Create a clustered index for the one you use the most, and nonclustered indexes for the rest.
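
    A minimal sketch of that recipe (all names made up; it assumes the primary key was created nonclustered, since a table can have only one clustered index):

        -- The most frequent query ranges over OrderDate: cluster on it.
        CREATE CLUSTERED INDEX CIX_Orders_OrderDate
            ON dbo.Orders (OrderDate)

        -- Another common query looks up by customer: nonclustered index.
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
            ON dbo.Orders (CustomerID)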

    Give me a fish, you feed me for a day! Teach me to fish, you feed me for a life time.
