All Fields in a Table Under One Unique Key

  • Hi Everyone,

    I'm looking for some ideas and/or feedback here. I created a settings table which holds settings for various automated processes. The reason for one table was so that all settings were in one place (I built a front end which allows a user to view/edit one settings group at a time) vs. creating table after table for each process' settings.

    The table has 32 fields and each record in this table needs to be unique so that the automated processes, which use the settings, function correctly.

    My questions:

    -Is it inefficient for every field to be a part of a unique key? The table is not updated often, but it is queried thousands of times a day.

    -Would it be more efficient to write a script which looks for and identifies duplicate rows?
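
    For reference, a duplicate check like that usually boils down to a single GROUP BY over the columns that are supposed to be unique. A sketch, with hypothetical column names (ProcessName, SettingName, SettingValue) standing in for the real 32:

    ```sql
    -- Sketch only: substitute the table's actual 32 columns for these
    -- hypothetical ones. Any group with a count above 1 is a duplicate set.
    SELECT ProcessName, SettingName, SettingValue,
           COUNT(*) AS DuplicateCount
    FROM dbo.ProcessSettings
    GROUP BY ProcessName, SettingName, SettingValue
    HAVING COUNT(*) > 1;
    ```

    Note that this only detects duplicates after the fact; it doesn't prevent them the way a constraint would.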

    -Any cleaner ideas on how to create a settings table? There isn't a single process which uses every field in the table.

    Thanks much.

    -Dan

  • well this stands out to me:

    There isn't a single process which uses every field in the table.

    that means some, or many, of the columns might be NULL... for me, that alone would disqualify making every field part of a unique constraint. Lots of NULLs don't seem unique to me, only undefined.

    second, I believe a unique constraint, or a PK for that matter, is limited to a max of 16 columns, and the total width of the index key has to be under 900 bytes, so that further disqualifies it as well.
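
    One common workaround for those limits (a sketch only, with hypothetical column names) is to enforce uniqueness through a persisted hash of the row instead of a 32-column key:

    ```sql
    -- Sketch: Col1..Col3 are hypothetical stand-ins for the real 32 columns.
    -- The unique index on the hash rejects duplicate rows at insert time.
    ALTER TABLE dbo.ProcessSettings
        ADD RowHash AS CAST(HASHBYTES('SHA2_256',
            CONCAT(Col1, '|', Col2, '|', Col3)) AS BINARY(32)) PERSISTED;

    CREATE UNIQUE INDEX UX_ProcessSettings_RowHash
        ON dbo.ProcessSettings (RowHash);
    ```

    One caveat: CONCAT treats NULL as an empty string, so the delimiter choice matters to keep different rows from hashing to the same value.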

    I'd think it'd be easier to create a procedure which either adds or updates an existing row with the 32 values, rather than adding a process which looks for duplicates... if the proc is the single point of insert/update, you'd have everything set up how I would expect it.
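
    That single-point-of-entry proc might look roughly like this; a sketch using a hypothetical three-column version of the real 32-column table:

    ```sql
    -- Sketch: one proc owns all inserts/updates, so duplicates can't slip in.
    -- Table and parameter names are hypothetical.
    CREATE OR ALTER PROCEDURE dbo.UpsertProcessSetting
        @ProcessName  varchar(100),
        @SettingName  varchar(100),
        @SettingValue varchar(500)
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Try the update first...
        UPDATE dbo.ProcessSettings
        SET SettingValue = @SettingValue
        WHERE ProcessName = @ProcessName
          AND SettingName = @SettingName;

        -- ...and insert only if no matching row existed.
        IF @@ROWCOUNT = 0
            INSERT INTO dbo.ProcessSettings (ProcessName, SettingName, SettingValue)
            VALUES (@ProcessName, @SettingName, @SettingValue);
    END;
    ```

    With the front end calling only this proc, uniqueness is enforced by the code path rather than by a wide index.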

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement for that table to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!

    If you really have lots of fields not used by every process and a table as wide as that, I think you'd be better off looking at some form of normalization of the data. Then you can make each appropriate set of data unique within its own definition. That's the whole idea of being able to normalize.
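
    One normalized shape (a sketch, with hypothetical table and column names): one row per process and one row per setting, so each process stores only the settings it actually uses, and uniqueness becomes a simple two-column key instead of a 32-column one:

    ```sql
    -- Sketch: hypothetical normalized settings schema.
    CREATE TABLE dbo.Process (
        ProcessID   int IDENTITY PRIMARY KEY,
        ProcessName varchar(100) NOT NULL UNIQUE
    );

    CREATE TABLE dbo.ProcessSetting (
        ProcessID    int NOT NULL REFERENCES dbo.Process (ProcessID),
        SettingName  varchar(100) NOT NULL,
        SettingValue varchar(500) NOT NULL,
        -- "Unique per process" is now just the primary key.
        CONSTRAINT PK_ProcessSetting PRIMARY KEY (ProcessID, SettingName)
    );
    ```

    This also removes the NULL-column problem entirely, since a process that doesn't use a setting simply has no row for it.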

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

