November 1, 2017 at 1:24 pm
Arsh - Wednesday, November 1, 2017 3:51 AM
Eirikur,
It was an offshore project I spent time analysing but am not currently working on. There were many tables like this in that database. The NVARCHARs are everywhere, and of unnecessarily longer lengths than required; wherever a character or alphanumeric column is needed, there's an NVARCHAR. Don't the extra pages for each row get loaded into memory unnecessarily? My idea is to find out whether getting rid of NVARCHAR helps in reducing the pages and hence gaining some performance. How am I wrong in thinking so? I agree memory has to be at least doubled, but it's a Standard edition. Thank you.
That would really depend on what your queries are doing. Sure, if you are doing a lot of table scans you will end up having to pull more pages, as there will be fewer rows per page with the bigger nvarchar size (how many fewer really depends on how big the average field is and how many there are per table). On the other hand, if you are mostly returning small numbers of distinct rows, each page probably has more rows than you actually need, so nvarchar vs varchar doesn't matter. And if you are finding that there are a lot of scans, maybe it would be better to address that first.
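For what it's worth, you can measure how densely the pages are actually packed before assuming anything; a minimal sketch, where dbo.SomeWideTable is a hypothetical stand-in for one of your tables:

-- How full are the pages, and how big is the average row?
SELECT  index_id,
        avg_page_space_used_in_percent,
        avg_record_size_in_bytes,
        page_count
FROM    sys.dm_db_index_physical_stats
            (DB_ID(), OBJECT_ID('dbo.SomeWideTable'), NULL, NULL, 'SAMPLED');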
November 1, 2017 at 1:45 pm
ZZartin - Wednesday, November 1, 2017 1:24 PM
We need more information, already asked but no firm reply....yet?
😎
November 1, 2017 at 9:16 pm
Thank you. I'm sorry, I'm not currently on this product; I worked on it for a few months to help with PT, so I'm not in a position to provide the DDL. That's why I provided the approximate numbers, and I'm asking this question in general terms to understand. I know all the records are distinct for those reports. 70% of the 36 columns in this particular table are nvarchar, and the average occupancy of the data in them per row is only around 50%. Will SQL not be forced to get that 40% of empty space into memory too? Thanks.
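For anyone wanting to put numbers on that kind of occupancy, the declared size can be compared with what the rows actually hold; a rough sketch, with dbo.SomeWideTable and SomeNvarcharCol as hypothetical placeholders:

-- DATALENGTH returns bytes (2 per character for nvarchar)
SELECT  MAX(DATALENGTH(SomeNvarcharCol)) AS MaxBytesUsed,
        AVG(DATALENGTH(SomeNvarcharCol)) AS AvgBytesUsed,
        COUNT(*)                         AS RowsCounted
FROM    dbo.SomeWideTable;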
November 2, 2017 at 12:00 am
Arsh - Wednesday, November 1, 2017 9:16 PM
Will SQL not be forced to get that 40% of empty space into memory too?
Correct, but that might get you from something like 4 seconds to 3, which is a really poor result for an optimisation, and probably less, since fetching data is only one part of executing the query.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 2, 2017 at 12:31 am
GilaMonster - Thursday, November 2, 2017 12:00 AM
I see. Thank you, Gail. So can we conclude that code improvement is the major, if not the only, area to work on for performance gains? If that's the case, what is the role of the database admin/architect in performance tuning?
November 2, 2017 at 10:00 am
Arsh - Thursday, November 2, 2017 12:31 AM
If that's the case, what is the role of the database admin/architect in performance tuning?
Creating database structures that code can actually be tuned against.
November 2, 2017 at 1:08 pm
So that boils down to normalization. Then would it be OK to indiscriminately use nvarchar, with longer lengths than needed, for every non-numeric/date column, without consideration for usage and occupancy, etc.? Even if Unicode is required, isn't there any significance to the length we define for a column?
Thank you.
November 2, 2017 at 3:18 pm
Arsh - Thursday, November 2, 2017 12:31 AM
So can we conclude that code improvement is the major, if not the only, area to work on for performance gains?
The vast majority of performance tuning is ensuring that queries can use indexes and that there are indexes for queries to use.
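As a trivial illustration of "queries that can use indexes" (table, column, and index names hypothetical), the same filter written two ways:

-- Non-sargable: the function wrapped around the column blocks an index seek
SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2017;

-- Sargable: a range on the bare column can seek an index on OrderDate
SELECT OrderID
FROM   dbo.Orders
WHERE  OrderDate >= '20170101'
AND    OrderDate <  '20180101';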
If that's the case, what is the role of the database admin/architect in performance tuning?
Err, doing it?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 2, 2017 at 7:43 pm
Thank you, Gail. No, I mean I am a DBA, and in many organizations there seems to be ambiguity about the role of a DBA, as to what a DB admin must be doing. For them, DBA means DB developer. No matter what one does, one is declared 'underperforming' if one doesn't have advanced coding expertise 😉
November 2, 2017 at 9:23 pm
Arsh - Thursday, November 2, 2017 1:08 PM
So that boils down to normalization. Then would it be OK to indiscriminately use nvarchar, with longer lengths than needed, for every non-numeric/date column, without consideration for usage and occupancy, etc.?
No. ABSOLUTELY NOT! If any of us implied that, then apologies. All data types should be right-typed and right-sized at design time. I can't speak for anyone else, but when I talked about changing NVARCHAR to VARCHAR possibly not being worth it, that's after the damage has already been done and, depending on the rest of the idiocy heaped upon the database, the conversion could cause other problems which might make it not worth it. For example, if all the parameters for stored procedures are NVARCHAR and you change the columns to VARCHAR without also changing those parameters, then you've just guaranteed that those procs will always do a full scan while SQL Server implicitly casts the now-VARCHAR columns to NVARCHAR according to the rules of datatype precedence.
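A minimal sketch of that trap (all object names hypothetical; under a SQL collation the cast on the column side forces the scan):

-- Column changed to VARCHAR, but the proc parameter stays NVARCHAR
CREATE TABLE dbo.Customers
(
    CustomerID   int IDENTITY(1,1) PRIMARY KEY,
    CustomerCode varchar(20) NOT NULL
);
CREATE INDEX IX_Customers_Code ON dbo.Customers (CustomerCode);
GO
CREATE PROCEDURE dbo.GetCustomer @Code nvarchar(20)
AS
    -- Datatype precedence converts CustomerCode to NVARCHAR for the comparison,
    -- so the index on the column can no longer be seeked
    SELECT CustomerID, CustomerCode
    FROM   dbo.Customers
    WHERE  CustomerCode = @Code;
GO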
Even if Unicode is required, isn't there any significance to the length we define for a column?
Yes. If you oversize the columns and/or the parameters that you use in stored procedures, then rumor has it that the optimizer can and frequently will over-allocate memory for something that doesn't need the oversized allocation.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 3, 2017 at 9:09 am
Arsh - Thursday, November 2, 2017 7:43 PM
No matter what one does, one is declared 'underperforming' if one doesn't have advanced coding expertise 😉
Oh, I agree with that. DBAs absolutely should have advanced T-SQL skills, plus, ideally, good PowerShell skills. Who else is going to teach the developers, identify and tune badly written stored procs, set up monitoring and automation, etc.?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 3, 2017 at 9:13 am
Jeff Moden - Thursday, November 2, 2017 9:23 PM
Yes. If you oversize the columns and/or the parameters that you use in stored procedures, then rumor has it that the optimizer can and frequently will over-allocate memory for something that doesn't need the oversized allocation.
Not really over-allocate memory, but it will have to allocate more memory than it would for correctly sized, correctly typed data types. Nvarchar(4000) needs 8 KB of memory, because it may contain 8 KB of string; varchar(100) requires significantly less.
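A quick sketch of how to see that for yourself (compare the memory grant in the actual execution plan of the two sorts; the data is identical, only the declared size differs):

CREATE TABLE #Wide  (val nvarchar(4000));
CREATE TABLE #Right (val varchar(100));

INSERT #Wide  (val)
SELECT TOP (10000) a.name FROM sys.all_columns a CROSS JOIN sys.all_columns b;
INSERT #Right (val)
SELECT TOP (10000) LEFT(a.name, 100) FROM sys.all_columns a CROSS JOIN sys.all_columns b;

-- The sort over nvarchar(4000) requests a far larger grant, because the grant
-- is budgeted from the declared size, not from the bytes actually stored
SELECT val FROM #Wide  ORDER BY val;
SELECT val FROM #Right ORDER BY val;

DROP TABLE #Wide, #Right;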
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 3, 2017 at 10:44 am
Thank you, Gail. Then what about varchar(4000)? Wouldn't it be a better design to use varchar(n) instead of nvarchar(n), as it'll need half the pages loaded into memory (assuming there's only one column in this table and no need for multilingual support)?
November 3, 2017 at 3:10 pm
If you're designing a new table, yes. If you're considering altering an existing table then, as I said earlier:
GilaMonster - Wednesday, November 1, 2017 1:06 PM
You might get some minor gains, but unlikely to be particularly good for the amount of effort and work that this change will require.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 4, 2017 at 5:11 pm
GilaMonster - Friday, November 3, 2017 9:13 AM
Not really over-allocate memory, but it will have to allocate more memory than it would for correctly sized, correctly typed data types.
You and I are talking about the same thing, but with different words. I was speaking of "over-allocating" with the meaning of just what you said: allocating more memory than necessary, which correctly sized datatypes do help prevent. I wasn't speaking of using more memory than what's available in the system, just allocating more memory than the task at hand needs.
--Jeff Moden
Change is inevitable... Change for the better is not.