Identity field on each table

  • Thanks for the feedback. Sounds very interesting. I am surprised a bit that these weren't simply broken up into monthly or even weekly tables rather than going for just one big table. Another option would have been to use something like DECIMAL(38,0) for the IDENTITY column (serious overkill... a smaller precision at the next "byte break" for DECIMAL would certainly suffice).
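
    Just to illustrate the idea (the table and column names below are made up, not from the actual system):

        CREATE TABLE dbo.AuditLog
        (
            AuditID  DECIMAL(38, 0) IDENTITY(1, 1) NOT NULL PRIMARY KEY, -- 17 bytes per value
            LoggedAt DATETIME       NOT NULL DEFAULT (GETDATE()),
            Payload  VARCHAR(200)   NULL
        );

        -- DECIMAL(38,0) is the "serious overkill" option. Dropping to the next "byte break",
        -- DECIMAL(19,0) stores in 9 bytes and still tops out just above BIGINT's roughly
        -- 9.2 quintillion maximum, so it would certainly suffice.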

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • In my PASS session on statistics, I asked the audience for the largest table anyone had worked with....

    One gentleman had a 100 billion row table. He wouldn't tell me what he was storing in it. Someone from the local usergroup (when I did the same presentation there) claimed a 17 billion row table. Again, no comments on what was in it.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Jeff Moden (2/28/2010)


    I am surprised a bit that these weren't simply broken up into monthly or even weekly tables rather than going for just one big table.

    Well yes, quite.

    As I recall, the database design was very tightly correlated with the Java object design. Those were the days when the answer to everything was Java. There was little interest in doing anything flash in the database. Well, until some genius database guy rewrote the massively RBAR Java aggregation routines as a single set-based procedure that ran in twenty minutes rather than twenty hours. Given that this was a daily task, the improvement was quite well received.
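
    Something along these lines, although the real schema was rather more involved and the names below are invented purely for the example:

        -- The RBAR version pulled detail rows into Java one at a time and summed them there.
        -- The set-based replacement hands the whole day's aggregation to the engine in one pass.
        CREATE PROCEDURE dbo.AggregateDailyUsage
            @ForDate DATE
        AS
        BEGIN
            SET NOCOUNT ON;

            INSERT INTO dbo.DailyUsageSummary (UsageDate, AccountID, TotalUnits, EventCount)
            SELECT @ForDate,
                   d.AccountID,
                   SUM(d.Units),
                   COUNT(*)
            FROM dbo.UsageDetail AS d
            WHERE d.UsageDate = @ForDate
            GROUP BY d.AccountID;
        END;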

  • GilaMonster (2/28/2010)


    One gentleman had a 100 billion row table.

    Did you mention that size isn't everything? 😉

    One would hope that such a monster was at the very least partitioned, and quite probably distributed. In that case, it is stretching the definition of 'table' a little, at least from my perspective.
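
    The sort of thing I have in mind, sketched with invented names and monthly boundaries (a real design would spread the partitions across multiple filegroups rather than putting them all on PRIMARY):

        CREATE PARTITION FUNCTION pfMonthly (DATE)
        AS RANGE RIGHT FOR VALUES ('2010-01-01', '2010-02-01', '2010-03-01');

        CREATE PARTITION SCHEME psMonthly
        AS PARTITION pfMonthly ALL TO ([PRIMARY]);

        CREATE TABLE dbo.BigFact
        (
            RowID     BIGINT IDENTITY(1, 1) NOT NULL,
            EventDate DATE   NOT NULL,
            Measure   INT    NOT NULL,
            CONSTRAINT PK_BigFact PRIMARY KEY CLUSTERED (EventDate, RowID)
        ) ON psMonthly (EventDate);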

    Paul

  • Paul White (2/28/2010)


    GilaMonster (2/28/2010)


    One gentleman had a 100 billion row table.

    Did you mention that size isn't everything? 😉

    Well, I was asking for the largest single table so I could make some points on statistics updates. That size suited me very well, as the points I was making were very clear on a table that large.

    One would hope that such a monster was at the very least partitioned, and quite probably distributed.

    I was talking at the time about single tables and statistics on single tables.
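
    To give a flavour of why size matters here (the table and index names below are invented): at billions of rows the automatic statistics update threshold is, for practical purposes, never reached, so you end up scheduling manual updates, usually sampled:

        -- Sampled update: quick and approximate, suitable for a scheduled job on a huge table.
        UPDATE STATISTICS dbo.BigFact WITH SAMPLE 1 PERCENT;

        -- Full scan on one critical statistic when accuracy matters more than elapsed time.
        UPDATE STATISTICS dbo.BigFact IX_BigFact_EventDate WITH FULLSCAN;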

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
