String

  • Recently we ran out of IDs on one of our systems, so we were asked to switch to alphanumeric. The field is also used on another system, where the column is char(50). I don't think this will cause any issues. Any thoughts?

  • "Switching to alphanumeric" from what?  Int, bigint?  If so, have you used negative numbers yet?  Typically the quick-and-dirty way to stretch values is to use the full range of negative numbers.  If the numbers are truly just internal links, neg shouldn't make any difference.

    SQL DBA, SQL Server MVP ('07, '08, '09)
    "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." - "Norm", on "Cheers"
    Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies."

  • I agree with Scott.  And to add to what he said: if you are currently using INT, I would MUCH rather switch to BIGINT before switching to CHAR(50).  I would also be curious how many rows of data you have in the table and what your identity increment is set to.  Do you have gaps that you could fill, for example?
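
    A quick way to see where the counter stands relative to the row count (the table name below is a placeholder; IDENT_CURRENT, IDENT_SEED, and IDENT_INCR are built-in functions):

    -- Compare the current identity value to the row count to gauge
    -- how much of the range is being eaten by gaps.
    select ident_current('dbo.Orders') as current_identity,
           ident_seed('dbo.Orders')    as seed,
           ident_incr('dbo.Orders')    as increment,
           (select count_big(*) from dbo.Orders) as row_count;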

    Also, if the table is part of an SSIS package, changing datatypes (int to bigint, or to char(50), or any datatype change) will break the SSIS package, and you will need to update it with the proper metadata.  The same MAY apply to integrated systems.  For example, if you have a .NET application that reads that data, expects an INT back, and stores the result as an INT, converting the column to CHAR on the database side is likely going to break the app.  The same likely applies to any reporting tools you have.  Going from int to bigint is less likely to break apps/reports, but that doesn't mean they won't break.

    I would STRONGLY encourage you to make the change on test and then test everything that touches that table: every application, report, package, stored procedure, trigger, etc., to make sure that your change doesn't break anything.
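
    For the database-side part of that inventory, a possible starting point (dbo.Orders is a hypothetical table name; sys.dm_sql_referencing_entities is the built-in DMF):

    -- Lists the procs, views, triggers, and functions that reference
    -- the table.  Apps, SSIS packages, and reports live outside the
    -- database and still have to be checked by hand.
    select referencing_schema_name, referencing_entity_name
    from sys.dm_sql_referencing_entities('dbo.Orders', 'OBJECT');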

    We ran out of INTs once on one table, but it was due to an accident that was easily corrected.  Our ETL process truncated and reloaded the data nightly.  One developer had accidentally used a DELETE instead of a TRUNCATE, so the identity value wasn't being reset with each ETL load.  Things worked fine for a while, then one day the ETL failed and we struggled to find the problem.  Changing the DELETE to a TRUNCATE fixed it, as the table didn't have that many rows in it.  The short-term solution, since it was an "emergency fix", was to reseed to the lowest negative INT value while we investigated the SSIS package to see what was wrong.  Once we found that DELETE, the fix was trivial.
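
    A small repro of that accident (the temp table name is made up): DELETE leaves the identity counter where it was, while TRUNCATE resets it to the original seed.

    drop table if exists #etl_demo;
    create table #etl_demo (
        id int identity(1,1) primary key not null,
        payload varchar(10));

    insert #etl_demo(payload) values ('a'), ('b'), ('c');

    delete from #etl_demo;                    -- counter stays at 3
    insert #etl_demo(payload) values ('d');
    select id from #etl_demo;                 -- returns 4

    truncate table #etl_demo;                 -- counter resets to the seed
    insert #etl_demo(payload) values ('e');
    select id from #etl_demo;                 -- returns 1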


    My opinion: changing from a numeric value to a character value is not a good long-term solution and is likely to introduce problems.  I would much rather investigate archiving some of the data.  If you have over 2 billion rows (the INT limit), I imagine some of that is historical data that isn't used much.  Now, if you DO need to keep it, I would recommend going to BIGINT, which gives you a LOT more values... And if you are overflowing a BIGINT AND can't archive some of the data, I would be curious why you need so much data.  That is a LOT of rows, and it is doubtful that ANYONE would be looking at all of it.
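
    For the int-to-bigint route, a sketch of the mechanics, assuming a hypothetical dbo.Orders table whose identity column is also the primary key (the constraint name is a placeholder too):

    -- ALTER COLUMN can't change a column that is part of a PK or
    -- index, so the constraint comes off first.  This rewrites every
    -- row (a size-of-data operation), so plan for downtime, and any
    -- foreign keys pointing at this column need the same treatment.
    alter table dbo.Orders drop constraint PK_Orders;
    alter table dbo.Orders alter column Id bigint not null;
    alter table dbo.Orders add constraint PK_Orders primary key clustered (Id);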

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Following on from what Mr. Brian Gale wrote about a DELETE causing a massive gap in the IDENTITY property of a table: it happens sometimes, because things which aren't good sometimes happen.  🙂  It's possible to reseed a table to a specific number value.  This is risky and not recommended, of course.  Here's a script which causes a big gap in a table and then "fixes" it by reseeding to a specific value:

    drop table if exists #t;
    go

    create table #t (
        id int identity(1,1) primary key not null,
        rating decimal(18, 2));

    insert #t(rating) values (1);             -- id = 1

    -- manually insert id 3, which also pushes the identity counter to 3
    set identity_insert #t on;
    insert #t(id, rating) values (3, 1);
    set identity_insert #t off;

    insert #t(rating) values (1);             -- id = 4

    delete #t where id = 4;                   -- DELETE does not roll the counter back

    insert #t(rating) values (1);             -- id = 5

    -- now create a massive gap by jumping the counter to 2,000,000
    set identity_insert #t on;
    insert #t(id, rating) values (2000000, 1);
    set identity_insert #t off;

    insert #t(rating) values (1);             -- id = 2000001

    delete #t where id >= 2000000;            -- remove the runaway rows; counter stays put

    -- "fix" the gap by reseeding back down to the last good value
    dbcc checkident (#t, reseed, 5);

    insert #t(rating) values (1);             -- id = 6 (reseed value + increment)

    select * from #t;                         -- rows: 1, 3, 5, 6

    No one shall be able to drive us from the paradise that Cantor created for us.

  • I find it hard to believe that most folks would ever run out of BIGINT values under normal circumstances.  The field on the other database being a VARCHAR(50) is concerning for performance and maybe for space usage there.  Discussions about that are like having a discussion about leaving the toilet seat up... many will die in the process. 😀

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
        Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • No, just no.  This sounds like a bad idea.

    Curious: what datatype has run out of values?

    We have a logging table that does this every few years, but because we keep only the last 30 days of records, it's no big deal to reseed the identity.

    Like Scott said, what about reseeding at the lowest negative number?

    A long time ago, we had a database that was the backend for a VB 3 front end.  This was a mission-critical app, and because it was written by a third party, we had no source code.

    VB 3 integers were 16-bit, so the app would blow up when a new identity value exceeding 32,767 was returned to it.  I found that while the identity was greater than 32,767, there were only 8k records in the database.  I created a new column on the main table, as well as on the child tables.  I populated it with a new sequential number, and then updated the child tables' new columns with these values.  Dropped the constraints, renamed the columns, and re-added the constraints.
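
    A rough sketch of that renumbering, with hypothetical parent/child table and column names throughout:

    -- Step 1: add the replacement columns, then fill the parent's
    -- with a fresh, compact sequence ordered by the old key.
    alter table dbo.Parent add new_id int null;
    alter table dbo.Child  add parent_id_new int null;
    go

    with n as (
        select id, row_number() over (order by id) as rn
        from dbo.Parent)
    update p
       set p.new_id = n.rn
      from dbo.Parent p
      join n on n.id = p.id;

    -- Step 2: carry the new values down to each child table.
    update c
       set c.parent_id_new = p.new_id
      from dbo.Child c
      join dbo.Parent p
        on p.id = c.parent_id;

    -- Step 3: drop the PK/FK constraints (names vary), swap the
    -- columns in via sp_rename, then re-add the constraints on the
    -- renamed columns.
    exec sp_rename 'dbo.Parent.id',     'id_old', 'COLUMN';
    exec sp_rename 'dbo.Parent.new_id', 'id',     'COLUMN';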


    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/
