Changing NVARCHAR to VARCHAR...

  • Arsh - Wednesday, November 1, 2017 3:51 AM

    Eirikur,
    It was an offshore project; I spent time analysing it but am not currently working on it. But there were many tables like this in that database. The NVARCHARs are everywhere, and of unnecessarily longer lengths than required. Wherever a character or alphanumeric column is needed, there's an NVARCHAR. Don't the extra pages for each row get unnecessarily loaded into memory? My idea is to find out whether getting rid of NVARCHAR helps in reducing the pages and hence gaining some performance. How am I wrong in thinking so? I agree the memory has to be at least doubled, but it's a Standard Edition instance. Thank you.

    That would really depend on what your queries are doing. Sure, if you are doing a lot of table scans, you will end up having to pull more pages, as there will be fewer rows per page with the bigger NVARCHAR size (how many fewer really depends on how big the average field is and how many there are per table). On the other hand, if you are mostly returning small numbers of distinct rows, each page probably holds more rows than you actually need, so NVARCHAR vs VARCHAR doesn't matter. And if you are finding that there are a lot of scans, maybe it would be better to address that first.
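
    A quick way to sanity-check both effects, with placeholder object names, is to look at rows per page and at whether the workload is actually scanning:

        -- Leaf-level rows per page and page-space usage (placeholder table name).
        SELECT ips.index_id, ips.page_count, ips.record_count,
               ips.avg_page_space_used_in_percent,
               ips.record_count / NULLIF(ips.page_count, 0) AS avg_rows_per_page
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'DETAILED') AS ips
        WHERE ips.index_level = 0;

        -- Is the workload mostly seeking or scanning this table?
        SELECT i.name, us.user_seeks, us.user_scans, us.user_lookups
        FROM sys.dm_db_index_usage_stats AS us
        JOIN sys.indexes AS i
          ON i.object_id = us.object_id AND i.index_id = us.index_id
        WHERE us.database_id = DB_ID()
          AND us.object_id = OBJECT_ID('dbo.YourTable');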

  • ZZartin - Wednesday, November 1, 2017 1:24 PM

    Arsh - Wednesday, November 1, 2017 3:51 AM

    Eirikur,
    It was an offshore project; I spent time analysing it but am not currently working on it. But there were many tables like this in that database. The NVARCHARs are everywhere, and of unnecessarily longer lengths than required. Wherever a character or alphanumeric column is needed, there's an NVARCHAR. Don't the extra pages for each row get unnecessarily loaded into memory? My idea is to find out whether getting rid of NVARCHAR helps in reducing the pages and hence gaining some performance. How am I wrong in thinking so? I agree the memory has to be at least doubled, but it's a Standard Edition instance. Thank you.

    That would really depend on what your queries are doing. Sure, if you are doing a lot of table scans, you will end up having to pull more pages, as there will be fewer rows per page with the bigger NVARCHAR size (how many fewer really depends on how big the average field is and how many there are per table). On the other hand, if you are mostly returning small numbers of distinct rows, each page probably holds more rows than you actually need, so NVARCHAR vs VARCHAR doesn't matter. And if you are finding that there are a lot of scans, maybe it would be better to address that first.

    We need more information, already asked but no firm reply....yet?
    😎

  • Thank you. I'm sorry, I'm not currently on this product; I worked on it for a few months to help with performance tuning, so I'm not in a position to provide the DDL. That's why I provided approximate numbers, and I'm asking this question in general terms in order to understand. I know all the records are distinct for those reports. 70% of the 36 columns in this particular table are NVARCHAR, and the average occupancy of the data in them per row is only around 50%. Will SQL not be forced to get that 40% empty space into memory too? Thanks.
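
    For what it's worth, that occupancy figure is easy to measure; a sketch with placeholder names (DATALENGTH returns bytes, so an NVARCHAR column reports two bytes per character):

        -- Actual bytes stored vs. the declared maximum, per column.
        SELECT AVG(DATALENGTH(SomeNvarcharColumn))               AS avg_bytes_used,
               MAX(DATALENGTH(SomeNvarcharColumn))               AS max_bytes_used,
               COL_LENGTH('dbo.YourTable', 'SomeNvarcharColumn') AS declared_max_bytes
        FROM dbo.YourTable;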

  • Arsh - Wednesday, November 1, 2017 9:16 PM

    Will SQL not be forced to get that 40% empty space into memory too? Thanks.

    Correct, but that might take you from something like 4 seconds to 3, which is a really poor result for an optimisation effort, and probably less than that, since fetching data is only one part of executing a query.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster - Thursday, November 2, 2017 12:00 AM

    Arsh - Wednesday, November 1, 2017 9:16 PM

    Will SQL not be forced to get that 40% empty space into memory too? Thanks.

    Correct, but that might take you from something like 4 seconds to 3, which is a really poor result for an optimisation effort, and probably less than that, since fetching data is only one part of executing a query.

    I see. Thank you, Gail. So can we conclude that code improvement is the major, if not the only, area to work on for performance gains? If that's the case, what is the role of the Database Admin/Architect in performance tuning?

  • Arsh - Thursday, November 2, 2017 12:31 AM

    GilaMonster - Thursday, November 2, 2017 12:00 AM

    Arsh - Wednesday, November 1, 2017 9:16 PM

    Will SQL not be forced to get that 40% empty space into memory too? Thanks.

    Correct, but that might take you from something like 4 seconds to 3, which is a really poor result for an optimisation effort, and probably less than that, since fetching data is only one part of executing a query.

    I see. Thank you, Gail. So can we conclude that code improvement is the major, if not the only, area to work on for performance gains? If that's the case, what is the role of the Database Admin/Architect in performance tuning?

    Creating database structures that code can actually be tuned against.

  • So that boils down to normalization. Then would it be OK to indiscriminately use NVARCHAR, with longer lengths than needed, for every non-numeric/non-date column, without consideration for usage, occupancy, etc.? Even if Unicode is required, isn't there any significance to the length we define for a column?

    Thank you.

  • Arsh - Thursday, November 2, 2017 12:31 AM

    So can we conclude that code improvement is the major, if not the only, area to work on for performance gains?

    The vast majority of performance tuning is ensuring that queries can use indexes and that there are indexes for queries to use.
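
    As a minimal illustration with hypothetical names: the first query hides the column inside a function, so no index on OrderDate can be seeked; the rewrite expresses the same filter as a range an index can use:

        -- Non-SARGable: the function on the column forces a scan.
        SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2017;

        -- SARGable rewrite of the same filter...
        SELECT OrderID FROM dbo.Orders
        WHERE OrderDate >= '20170101' AND OrderDate < '20180101';

        -- ...and an index for it to use.
        CREATE INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate);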

    If that's the case, what is the role of the Database Admin/Architect in performance tuning?

    Err, doing it?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thank you, Gail. No, I mean I am a DBA, and in many organizations there seems to be ambiguity about what a DB admin should be doing. For them, DBA means DB developer. No matter what one does, one is declared 'underperforming' if one doesn't have advanced coding expertise 😉

  • Arsh - Thursday, November 2, 2017 1:08 PM

    So that boils down to normalization. Then would it be OK to indiscriminately use NVARCHAR, with longer lengths than needed, for every non-numeric/non-date column, without consideration for usage, occupancy, etc.?

    No.  ABSOLUTELY NOT!  If any of us implied that, then apologies.  All data types should be right-typed and right-sized at design time.  I can't speak for anyone else, but when I talked about changing NVARCHAR to VARCHAR possibly not being worth it, that's after the damage has already been done and, depending on the rest of the idiocy heaped upon the database, the conversion could cause other problems which might make it not worth it.  For example, if all the parameters for stored procedures are NVARCHAR and you change the columns to VARCHAR without also changing those parameters, then you've just guaranteed that the procs will always do a full scan while SQL Server implicitly casts the now-VARCHAR columns to NVARCHAR, according to the rules of datatype precedence.
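
    A minimal sketch of that trap, with hypothetical names (the precise impact also depends on collation, but the implicit cast is visible in the plan either way):

        -- The column was changed to VARCHAR, but the parameter stayed NVARCHAR.
        CREATE PROCEDURE dbo.GetCustomerByName
            @Name NVARCHAR(50)
        AS
        SELECT CustomerID, CustomerName
        FROM dbo.Customer
        WHERE CustomerName = @Name;  -- VARCHAR column is implicitly cast to
                                     -- NVARCHAR, defeating an index seek
        GO

        -- The fix: keep the parameter type matched to the column type.
        ALTER PROCEDURE dbo.GetCustomerByName
            @Name VARCHAR(50)
        AS
        SELECT CustomerID, CustomerName
        FROM dbo.Customer
        WHERE CustomerName = @Name;  -- seek-friendly again
        GO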

    Even if Unicode is required, isn't there any significance to the length we define for a column? Thank you.


    Yes.  If you oversize the columns and/or the parameters that you use in stored procedures, then rumor has it that the optimizer can and frequently will over-allocate memory for something that doesn't need the oversized allocation.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Arsh - Thursday, November 2, 2017 7:43 PM

    No matter what one does, one is declared 'underperforming' if one doesn't have advanced coding expertise 😉

    Oh, I agree with that. DBAs absolutely should have advanced T-SQL skills, plus, ideally, good PowerShell skills. Who else is going to teach the developers, identify and tune badly written stored procs, set up monitoring and automation, etc.?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Jeff Moden - Thursday, November 2, 2017 9:23 PM

    Yes.  If you oversize the columns and/or the parameters that you use in stored procedures, then rumor has it that the optimizer can and frequently will over-allocate memory for something that doesn't need the oversized allocation.

    Not really over-allocate memory, but it will have to allocate more memory than it would for correctly sized, correctly typed data types. NVARCHAR(4000) needs 8 KB of memory, because it may contain 8 KB of string; VARCHAR(100) requires significantly less.
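
    One way to watch that happen, assuming a hypothetical table: force a sort on the wide column and look at the grant while the query runs:

        -- Sorting forces a memory grant sized from the declared column width,
        -- not from the bytes actually stored.
        SELECT SomeWideColumn
        FROM dbo.YourTable
        ORDER BY SomeWideColumn
        OPTION (RECOMPILE);

        -- In another session, compare requested vs. granted memory:
        SELECT session_id, requested_memory_kb, granted_memory_kb, used_memory_kb
        FROM sys.dm_exec_query_memory_grants;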

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thank you, Gail. Then what about VARCHAR(4000)? Wouldn't it be a better design to use VARCHAR(n) instead of NVARCHAR(n), as it'll need half the pages loaded into memory (assuming there's only one column in this table and no need for multilingual support)?

  • If you're designing a new table, yes. If you're considering altering an existing table then, as I said earlier:

    GilaMonster - Wednesday, November 1, 2017 1:06 PM

    You might get some minor gains, but unlikely to be particularly good for the amount of effort and work that this change will require.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster - Friday, November 3, 2017 9:13 AM

    Jeff Moden - Thursday, November 2, 2017 9:23 PM

    Yes.  If you oversize the columns and/or the parameters that you use in stored procedures, then rumor has it that the optimizer can and frequently will over-allocate memory for something that doesn't need the oversized allocation.

    Not really over-allocate memory, but it will have to allocate more memory than it would for correctly sized, correctly typed data types. NVARCHAR(4000) needs 8 KB of memory, because it may contain 8 KB of string; VARCHAR(100) requires significantly less.

    You and I are talking about the same thing but with different words.  I was speaking of "over-allocating" with the meaning of just what you said... allocating more memory than necessary, which correctly sized datatypes do help prevent.  I wasn't speaking of using more memory than what's available in the system, just allocating more memory than the task at hand needs.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
