Identity and uniqueIdentifier and custom id

  • Digs (6/15/2009)


    A datetime with milliseconds, that creates a great chance of uniqueness, would it not!

    Condition: you don't have 1000s of inserts in a split second.

    Unless you are talking about the new datetime2 type in SQL Server 2008, datetime is only accurate to about 3 milliseconds (values are rounded to increments of .000, .003 or .007 seconds). And in my experience, it's not enough to guarantee uniqueness (I would describe it as "semi-unique"). All it takes is two transactions coming in at the same time, which does happen.

    So why risk it? I don't see the advantage in using the datetime as a unique key.
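
    A quick way to see that rounding (plain T-SQL; the literal values below are just examples):

    -- datetime values are rounded to increments of .000, .003 or .007 seconds,
    -- so two inserts "a millisecond apart" can end up with the same stored value:
    SELECT CAST('2009-06-15 12:00:00.000' AS datetime) AS first_value,   -- stored as 12:00:00.000
           CAST('2009-06-15 12:00:00.001' AS datetime) AS second_value;  -- also stored as 12:00:00.000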

  • This is what I am going to do... SQL Server 2005

    UserGroupID as Identity, increment of 1, Primary Key

    UserGroupKey as BigInt (filled from a function that produces date/time/ms; it also must be unique)

    NOTE: as far as I can work out, you can only have one identity column per table.

    Why two unique IDs/keys in the table?

    When I rebuild the table, say I need to truncate a table, I can use the key field to re-link identity fields to one another if I have problems. I will use the key field in joining tables as well.

    If I have to rebuild a table, identities are renumbered, so I have a backup with the key fields if something goes wrong...

    Am I crazy, or is there another way to get two unique keys in the same table? :hehe:

    Question: For the field named UserGroupKey, is there any setting where I can make it accept only unique inputs? I know that the primary key does this for UserGroupID, but how can it be done for a non-primary-key field?
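
    For reference, a minimal sketch of the layout being described (only the column names come from the post; the table name and data types are assumptions):

    CREATE TABLE dbo.UserGroup
    (
        UserGroupID  int IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- identity, increment 1, primary key
        UserGroupKey bigint NOT NULL                           -- date/time/ms value; uniqueness not yet enforced
    );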

  • You can only have one declared primary key for a table, which can enforce uniqueness and, by default, build a clustered index that will store the data in order by the primary key.

    But you can define other unique indexes on a table that will enforce uniqueness on other combinations of columns. I see this, in most cases, where some combination of columns is unique, but the table designer has decided to add an identifier of some sort for any number of good (and sometimes not-so-good) reasons.

    The question is: why add the identifier? What problem will it solve? What efficiencies will you gain by using it? What complexity are you introducing by adding it?


    And then again, I might be wrong ...
    David Webb

  • ..."But you can define other unique indexes on a table that will enforce uniqueness on other combinations of columns."..

    How is this done for a single column? i.e. UserGroupKey!

    Thanks for your input..

  • create unique nonclustered index userkey on tablename (userid, datetime_entered) on [PRIMARY]

    If you're going to just slap these two columns together as a bigint, as you have proposed, you might save a little space on the index, but if you actually store the bigint in the row, you'll lose some space there. What will having one bigint column do for you that this index and the two columns you're already storing won't?
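
    For the single-column case asked about above, the same statement works with just that one column (the table and index names below are assumed):

    create unique nonclustered index ux_usergroupkey on dbo.UserGroup (UserGroupKey) on [PRIMARY]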


    And then again, I might be wrong ...
    David Webb

  • David Webb (6/15/2009)


    create unique nonclustered index userkey on tablename (userid, datetime_entered) on [PRIMARY]

    Or:

    alter table tablename add constraint userkey unique (userid, datetime_entered);

    I'm still struggling to see the value of the bigint thing. The minute you want to join on originating tables, you'd be better off having the distinct columns (with their own indexes).


  • The composite column is just asking for trouble. There are reasons that you aren't supposed to store more than one value per row in a single column. Check out the normal forms (Wikipedia had a good article on them last time I looked - subject to change without notice, of course).

    The first problem with it is going to be decomposing it into its parts. Where does the date start and the user ID end? Is it always the fourth digit? If so, do you start user IDs at 000, or at 100? If 000, you need to pad every ID below 100 with leading zeros. That by itself adds overhead to your code. It also means you'd better make sure you will never have more than 1000 users (000-999), because the moment you add user ID 1000 to it, you'll break every piece of the database that assumes the date starts at position 4.

    It's up to you, of course, but I strongly recommend against that whole idea. I would never allow it in any database I have any control over.
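
    A sketch of the kind of position-based parsing such a composite value forces (the value and layout below are purely illustrative):

    -- Hypothetical composite value: 3-digit user ID + yyyymmdd + hhmmss + milliseconds
    DECLARE @k varchar(20);
    SET @k = '042' + '20090616' + '093015' + '123';

    SELECT CAST(SUBSTRING(@k, 1, 3) AS int) AS user_id,     -- assumes user IDs never exceed 3 digits
           SUBSTRING(@k, 4, 8)              AS entry_date;  -- assumes the date always starts at position 4
    -- The moment a 4-digit user ID appears, every offset above (and all existing data) breaks.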

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • To conclude..

    I am going to use the simple Identity key as my main PK on all tables.

    My delete process for a record (in all tables) is like this: a bit field called 'Deleted' will be tagged with 1 (true) via an update process; undeleted will be 0 (false).

    When I wish to rebuild the table (or even truncate it, re-insert into a new table, or whatever I wish to do) to remove the rows flagged '1' above, I was concerned about the risk of losing the uniqueness match between all tables. So I need a uniqueness backup.

    There can only be ONE identity field per table (as I understand it), so I want a unique key backup. That's why I created the UserID + Date+Time+ms key as well. Sure, it's an extra field, but it won't be used unless disaster strikes; I will use the simple identity PK at all times for joins and searches... etc.

    Unless a table can have TWO identity keys, how else can I get backup uniqueness without using a GUID? ;-)
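
    The soft-delete step described above is just an update against that bit column; a minimal sketch (table, column, and key value assumed):

    UPDATE dbo.UserGroup
    SET Deleted = 1            -- flag the row instead of physically removing it
    WHERE UserGroupID = 42;    -- ID chosen purely for illustration; set Deleted = 0 to undelete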

  • You can easily enforce uniqueness with an index, identity or not. I think you're confused about what rebuilding the table means. It doesn't change the identities, or lose uniqueness.

  • Well, don't hold back... how do you rebuild a table, and reload records into a table that has an identity key?

    I guess you just turn identity insert on and off before and after the process... but what if you mess that up... disaster strikes... backups, and then if they fail...

  • I'm not sure you'd have much luck rebuilding a table with an identity column from raw data. Normal practice would be to get the last backup, restore the entire database to a different name, and then copy the data back. I'm not sure what kinds of hoops you'd have to jump through to ensure the identities were re-applied to the same records as the originals in a rebuild-from-raw-data scenario.

    The question remains, why do you want the identity column in the first place? You have a combination of columns that guarantees uniqueness. What problem is the identity column solving?
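
    A rough sketch of that restore-and-copy-back pattern (all database, file, table, and path names here are made up):

    -- Restore the last backup under a different database name
    RESTORE DATABASE MyDb_Restore
        FROM DISK = 'D:\Backups\MyDb.bak'
        WITH MOVE 'MyDb_Data' TO 'D:\Data\MyDb_Restore.mdf',
             MOVE 'MyDb_Log'  TO 'D:\Data\MyDb_Restore.ldf';

    -- Copy the rows back, keeping their original identity values
    SET IDENTITY_INSERT dbo.UserGroup ON;

    INSERT INTO dbo.UserGroup (UserGroupID, UserGroupKey)
    SELECT UserGroupID, UserGroupKey
    FROM MyDb_Restore.dbo.UserGroup;

    SET IDENTITY_INSERT dbo.UserGroup OFF;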


    And then again, I might be wrong ...
    David Webb

  • If you rebuild a table, meaning drop the data and add it back, you get new identities. You can reset the identity, or not.

    If you add the same set of data again, it gets new values. If you turn on IDENTITY_INSERT, and things fail, it rolls back. SQL is not that fragile.
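
    A sketch of both options (table and column names assumed; the values are just examples):

    -- Resetting the identity: the next value handed out will start after the reseed value
    DBCC CHECKIDENT ('dbo.UserGroup', RESEED, 17);

    -- Explicit identity values: only allowed while IDENTITY_INSERT is ON,
    -- and like any other insert, it rolls back if the transaction fails
    SET IDENTITY_INSERT dbo.UserGroup ON;
    INSERT INTO dbo.UserGroup (UserGroupID, UserGroupKey)
    VALUES (17, 20090616093015123);
    SET IDENTITY_INSERT dbo.UserGroup OFF;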

  • Digs (6/16/2009)


    Well, don't hold back... how do you rebuild a table, and reload records into a table that has an identity key?

    I guess you just turn identity insert on and off before and after the process... but what if you mess that up... disaster strikes... backups, and then if they fail...

    I guess I don't understand your question.

    With regard to rebuilding a table and restoring all the identity values to their prior values, usually you would do that by restoring from a backup, or by having an audit log that records all values inserted into the table and any changes made to them. I say "usually", but in fact I've never yet had to rebuild a table that way. I've had to restore backups, but not because I'd somehow lost the identity values in a table.

    I guess if I needed a backup to my backups, that would be an audit log.

    But then the argument goes on infinitely:

    What if I lose all my data?

    Restore from a backup.

    What if the backup is messed up?

    Restore from an audit log.

    What if the audit log is messed up?

    Restore from a backup of the audit log.

    What if the backup of the audit log is messed up?

    Get your prior backup from your offsite storage and use that.

    There's a hole in the bucket, dear Liza, dear Liza, there's a hole in the bucket, dear Liza, a hole.

    (There's a reason that song never ends.)

    At some point, you have to decide that what you've got is good enough for any catastrophe that leaves the database relevant. You can't restore from CERN accidentally creating a negatively charged strange particle and destroying the whole physical universe, but at the same time, if that happens, it's unlikely that restoring your database will matter at that point. Same for an eruption of the Yellowstone Supervolcano if the business your database supports is in North America. Check http://www.exitmundi.nl/ for more ways that your backups may not matter.

    Usually, the point of "good enough" is that you have regularly scheduled backups, and you test them with reasonable frequency. If that's not enough, then you add an audit log on a separate disk subsystem. In either case, you copy the backups to tape, take the tapes to a separate location (in case of fire or similar situations), and rotate them based on retention policies.

    Another common step is log shipping to a remote server, preferably in a different weather pattern (to account for hurricanes and blizzards and such), even better in a different geographic zone (earthquakes), and even better yet on a different continent and not near a sea shore or significant faultline and not in an area known for severe political unrest or major terrorism incidents. (Antarctica is perfect for the political/terrorism aspects, has remote regions that don't get a lot of earthquakes, or volcanic activity and aren't subject to tidal waves, tornadoes, or hurricanes, but I really can't recommend it for a backup data center location - too many psycho penguins).

    Most businesses have their servers on an uninterruptible power supply of one sort or another. Many have generators to keep the servers and SANs and such going for up to several days without power.

    But I really don't see all of that being replaced by having a secondary, backup to the identity column in a table. What am I missing?

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Digs (6/16/2009)


    When I wish to rebuild the table (or even truncate it, re-insert into a new table, or whatever I wish to do) to remove the rows flagged '1' above, I was concerned about the risk of losing the uniqueness match between all tables. So I need a uniqueness backup.

    Here's how I would do it:

    1. Delete all rows that are marked as deleted (so do not truncate the table! Leave the existing data in place.)

    2. Rebuild the clustered index periodically (once a month, in a weekend or slow usage period) to reclaim the space. A sketch of both steps follows below.

    That way there are no issues around having to "restore uniqueness", because it's never lost.
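
    A sketch of those two steps (table and column names assumed; ALTER INDEX is available from SQL Server 2005 on):

    -- 1. Physically remove the rows flagged as deleted
    DELETE FROM dbo.UserGroup
    WHERE Deleted = 1;

    -- 2. Periodically rebuild the clustered index (and the others) to reclaim the space
    ALTER INDEX ALL ON dbo.UserGroup REBUILD;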

  • Ok the last post makes sense...

    .."Rebuild the clustered index periodically "...

    I assume you mean the Primary Key identity field.

    How is this done, rebuild a PK? Do you just remove it then turn it back on??? :w00t:
