April 19, 2017 at 2:05 pm
Hi Experts,
I am unable to change the data type from VARCHAR to INT. I would appreciate your help.
CODE:
CASE When
[Worksheet].[Worksheet].[PLC Cd] IS NOT NULL
then [Worksheet].[Worksheet].[PLC Cd]
else
(if ([Pool Number]=0)
then([Worksheet].[Revenue Worksheet].[Account ID])
else (cast([Pool Number],varchar(7))))
END
Account_ID data type: VARCHAR
Pool Number data type: INT
I do not want to use the CAST as shown above. When I use CAST, SQL Server shows a warning that it "may affect CardinalityEstimate in query plan choice".
I would like to convert the Account ID to INT.
The Account ID format is: 500-201-001
I am dealing with 6 million records in these tables, and the output of the CASE expression above is the key used to aggregate the data.
I am using SQL Server 12, and the same query is used in Cognos reporting.
Appreciate your help!
April 19, 2017 at 2:15 pm
What data type is PLC Cd?
It would really help if you gave us more (or all) of the query and the CREATE TABLE statements, with some sample data.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
April 19, 2017 at 2:15 pm
sreesree.96 - Wednesday, April 19, 2017 2:05 PM
A CAST function won't affect cardinality estimates if it's used in the column list.
Post the entire statement and the sqlplan that shows the warning to get some help (if possible, add DDL for tables and indexes). Otherwise, everything would be a shot in the dark.
April 19, 2017 at 2:16 pm
If you can't change the data type, then you need to accept the CAST (and casting the account to INT will give you the same warning). It won't cause cardinality estimation problems as written, even though the plan will carry the warning, unless that expression is in the JOIN or WHERE.
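A minimal illustration of the distinction, using hypothetical table and column names (not the OP's actual schema):
-- CAST in the SELECT list: the plan may carry the warning,
-- but cardinality estimates are unaffected.
SELECT CAST(PoolNumber AS varchar(7)) AS PoolNumberText
FROM dbo.Worksheet; -- hypothetical table
-- CAST wrapped around a column in the WHERE clause: here estimates
-- (and index seeks) really can suffer.
SELECT PoolNumber
FROM dbo.Worksheet
WHERE CAST(PoolNumber AS varchar(7)) = '0';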
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 19, 2017 at 2:17 pm
First, your CASE expression isn't going to run at all as a query against any version of SQL Server: CASE doesn't support having an IF statement embedded within it. You also say you don't want to CAST the Pool Number field to varchar(7), but don't say what data type it currently has.
If you have an Account ID field that is consistently formatted with 3 digits on either side of each dash, then converting that to INT isn't that hard, but you'll have a problem if that format ever changes, or growth forces the use of more digits. You can either CAST or CONVERT the Account ID field by taking the LEFT-most 3 chars, the middle 3 using SUBSTRING, and the RIGHT-most 3 chars, CONCAT those, and then CAST that whole thing as INT. Does that help?
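A minimal sketch of that parsing approach, assuming the value always fits the NNN-NNN-NNN pattern (the table name is hypothetical):
-- '500-201-001' becomes 500201001
SELECT CAST(LEFT(AccountID,3)
          + SUBSTRING(AccountID,5,3)
          + RIGHT(AccountID,3) AS INT) AS AccountIDAsInt
FROM dbo.SomeTable; -- hypothetical table name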
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)
April 20, 2017 at 7:12 am
sgmunson - Wednesday, April 19, 2017 2:17 PM
If the OP is worried about no longer fitting the xxx-xxx-xxx format, and needing to go xxxx-xxx-xxxx, then perhaps he would be better off using REPLACE to strip out the hyphens and then CAST the result of that? Just a thought...
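Something like this sketch, assuming the value contains only digits and hyphens (the table name is hypothetical):
SELECT CAST(REPLACE(AccountID,'-','') AS INT) AS AccountIDAsInt
FROM dbo.SomeTable; -- hypothetical table name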
Thomas Rushton
blog: https://thelonedba.wordpress.com
April 20, 2017 at 10:05 am
ThomasRushton - Thursday, April 20, 2017 7:12 AM
REPLACE can be expensive from a performance perspective when larger numbers of rows are involved. No biggie for a one-time thing, but if you're suddenly talking about a billion rows, I'd stay as far away from REPLACE as I can get.
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)
April 20, 2017 at 1:05 pm
sgmunson - Thursday, April 20, 2017 10:05 AM
Thank you for the post. It helps me!
April 20, 2017 at 1:06 pm
Luis Cazares - Wednesday, April 19, 2017 2:15 PM
Thank you for the suggestion. It helps me!
April 20, 2017 at 3:28 pm
sgmunson - Thursday, April 20, 2017 10:05 AM
Again, I'm not sure how REPLACE got such a bad name except for some of the more complicated collations. Heh... and updating a billion rows of anything is going to take a long time even if you don't use REPLACE.
Of course, at this point, that's only an opinion on my part. Do you have an alternative method or methods that we could test against REPLACE?
--Jeff Moden
Change is inevitable... Change for the better is not.
April 21, 2017 at 10:14 am
Jeff,
Every time I "replace" a REPLACE with something less expensive like LEFT, RIGHT, and/or SUBSTRING, the query I'm working on improves it's performance significantly. Of all the string functions, it appears to be one of the most expensive, and with a default data type of varchar at a length of either 4000 or 8000 (I forget which), its slingin' a lot of bytes around, and bump a record count up into the millions and that starts to seriously add up. It was one of the first things I started to notice about performance, way back in the early 2000's, and I haven't yet seen any change to that behavior over the last 15 to 20 years, pretty much regardless of SQL Server version.
However, that said, I do agree that it's worth testing to try and quantify just how much more expensive it is compared to the LEFT, RIGHT, and SUBSTRING functions, and possibly CHARINDEX and PATINDEX as well. I wish I had the time to devote to it, but that's just not in the cards at the moment.
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)
April 22, 2017 at 11:31 am
sgmunson - Friday, April 21, 2017 10:14 AM
Agreed. There is no question in my mind that you do have to use the correct tool for the correct job.
But using LEFT/SUBSTRING/RIGHT instead of REPLACE won't actually improve performance "significantly" even in the face of 6 million rows. Let's do the test.
First, here's code to generate 6 million rows of test data consisting of a column in the NNN-NNN-NNN format that renders out as a VARCHAR(11).
/**************************************************************************************************
Create and populate a test table.
This isn't a part of the solution. We're just building something to test with.
**************************************************************************************************/
--===== If the test table already exists, drop it to make reruns in SSMS easier.
IF OBJECT_ID('tempdb..#TestTable','U') IS NOT NULL
DROP TABLE #TestTable
;
--===== Create and populate the test table with AccountIDs in the form of NNN-NNN-NNN.
-- We don't need an index here because we're going to scan the whole thing later.
-- AccountID renders out as a VARCHAR(11).
SELECT TOP (6000000) --Not to worry. Only takes about 9 seconds for 6 million rows.
AccountID = RIGHT(ABS(CHECKSUM(NEWID())%1000)+1000,3) + '-'
+ RIGHT(ABS(CHECKSUM(NEWID())%1000)+1000,3) + '-'
+ RIGHT(ABS(CHECKSUM(NEWID())%1000)+1000,3)
INTO #TestTable
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
;
GO
CHECKPOINT;
GO
--===== Display the column name and datatype for the column formed above.
SELECT ColumnName = sc.name, DataType = st.name, sc.max_length
FROM tempdb.sys.columns sc
JOIN tempdb.sys.types st
ON sc.user_type_id = st.user_type_id
WHERE sc.object_id = OBJECT_ID('tempdb..#TestTable','U')
;
GO
Up next, some test code. Details are in the comments.
PRINT '==========================================================================================';
PRINT '===== BaseLine (No Conversions)';
DECLARE @Bitbucket VARCHAR(11);
SET STATISTICS TIME ON;
SELECT @Bitbucket = AccountID
FROM #TestTable;
SET STATISTICS TIME OFF;
GO
PRINT '==========================================================================================';
PRINT '===== REPLACE/CONVERT w/o Collation';
DECLARE @Bitbucket INT;
SET STATISTICS TIME ON;
SELECT @Bitbucket = CONVERT(INT,REPLACE(AccountID,'-',''))
FROM #TestTable;
SET STATISTICS TIME OFF;
GO
PRINT '===== REPLACE/CONVERT w/ Collation';
DECLARE @Bitbucket INT;
SET STATISTICS TIME ON;
SELECT @Bitbucket = CONVERT(INT,REPLACE(AccountID COLLATE Latin1_General_BIN,'-',''))
FROM #TestTable;
SET STATISTICS TIME OFF;
GO
PRINT '==========================================================================================';
PRINT '===== REPLACE/CAST w/o Collation';
DECLARE @Bitbucket INT;
SET STATISTICS TIME ON;
SELECT @Bitbucket = CAST(REPLACE(AccountID,'-','') AS INT)
FROM #TestTable;
SET STATISTICS TIME OFF;
GO
PRINT '===== REPLACE/CAST w/ Collation';
DECLARE @Bitbucket INT;
SET STATISTICS TIME ON;
SELECT @Bitbucket = CAST(REPLACE(AccountID COLLATE Latin1_General_BIN,'-','') AS INT)
FROM #TestTable;
SET STATISTICS TIME OFF;
GO
PRINT '==========================================================================================';
PRINT '===== LEFT/SUBSTRING/RIGHT/CONVERT (Collate won''t help here)';
DECLARE @Bitbucket INT;
SET STATISTICS TIME ON;
SELECT @Bitbucket = CONVERT(INT,LEFT(AccountID,3)+SUBSTRING(AccountID,5,3)+RIGHT(AccountID,3))
FROM #TestTable;
SET STATISTICS TIME OFF;
GO
PRINT '===== LEFT/SUBSTRING/RIGHT/CAST (Collate won''t help here)';
DECLARE @Bitbucket INT;
SET STATISTICS TIME ON;
SELECT @Bitbucket = CAST(LEFT(AccountID,3)+SUBSTRING(AccountID,5,3)+RIGHT(AccountID,3) AS INT)
FROM #TestTable;
SET STATISTICS TIME OFF;
GO
Here's the outcome of that testing. Like I said previously, collation DOES matter when it comes to REPLACE. As for LEFT/SUBSTRING/RIGHT being "significantly" faster than REPLACE, I'm just not seeing it across these 6 million rows, even without the collation and certainly not with it.
==========================================================================================
===== BaseLine (No Conversions)
SQL Server Execution Times:
CPU time = 764 ms, elapsed time = 771 ms.
==========================================================================================
===== REPLACE/CONVERT w/o Collation
SQL Server Execution Times:
CPU time = 3011 ms, elapsed time = 3008 ms.
===== REPLACE/CONVERT w/ Collation
SQL Server Execution Times:
CPU time = 2449 ms, elapsed time = 2460 ms.
==========================================================================================
===== REPLACE/CAST w/o Collation
SQL Server Execution Times:
CPU time = 2902 ms, elapsed time = 2923 ms.
===== REPLACE/CAST w/ Collation
SQL Server Execution Times:
CPU time = 2449 ms, elapsed time = 2440 ms.
==========================================================================================
===== LEFT/SUBSTRING/RIGHT/CONVERT (Collate won't help here)
SQL Server Execution Times:
CPU time = 2340 ms, elapsed time = 2349 ms.
===== LEFT/SUBSTRING/RIGHT/CAST (Collate won't help here)
SQL Server Execution Times:
CPU time = 2356 ms, elapsed time = 2368 ms.
The advantage of REPLACE, in this case, is that if the 9-digit format with intervening dashes ever changes, REPLACE will require no code changes while the string parsing will.
If the format will ALWAYS be NNN-NNN-NNN, then both REPLACE and the string parsing methods are actually the wrong tool. Here's what I'd use instead.
PRINT '==========================================================================================';
PRINT '===== A faster way';
PRINT '==========================================================================================';
PRINT '===== STUFF/CONVERT (Collate won''t help here) =======================';
DECLARE @Bitbucket INT;
SET STATISTICS TIME ON;
SELECT @Bitbucket = CONVERT(INT,STUFF(STUFF(AccountID,8,1,''),4,1,''))
FROM #TestTable;
SET STATISTICS TIME OFF;
GO
PRINT '===== STUFF/CAST (Collate won''t help here) =======================';
DECLARE @Bitbucket INT;
SET STATISTICS TIME ON;
SELECT @Bitbucket = CAST(STUFF(STUFF(AccountID ,8,1,''),4,1,'') AS INT)
FROM #TestTable;
SET STATISTICS TIME OFF;
GO
Here's the outcome of that:
==========================================================================================
===== A faster way
==========================================================================================
===== STUFF/CONVERT (Collate won't help here) =======================
SQL Server Execution Times:
CPU time = 1654 ms, elapsed time = 1741 ms.
===== STUFF/CAST (Collate won't help here) =======================
SQL Server Execution Times:
CPU time = 1638 ms, elapsed time = 1646 ms.
Is THAT "significantly" faster? It depends. It IS 30% faster than the string parsing method (which, btw, is only 4% faster that a REPLACE COLLATE, in this case) BUT... it's still more than twice as slow as using the original column as is and that brings me to the burning questions that no one has yet asked....
WHY? Why does the OP want to change this column from what it is to an INT to begin with? Is the OP aware that any and all leading zeros will be lost? And what is the reason the AccountID was originally stored in the NNN-NNN-NNN format to begin with? Is it possible that the triplets do have some meaning, and should the column actually be changed to 3 small integer columns instead?
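For illustration only, splitting the value into three SMALLINT columns might look like this sketch against the test table built earlier (not the OP's actual schema):
-- Note: leading zeros within each triplet are lost as numbers (e.g. '001' becomes 1).
SELECT Part1 = CAST(LEFT(AccountID,3) AS SMALLINT)
      ,Part2 = CAST(SUBSTRING(AccountID,5,3) AS SMALLINT)
      ,Part3 = CAST(RIGHT(AccountID,3) AS SMALLINT)
FROM #TestTable;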
--Jeff Moden
Change is inevitable... Change for the better is not.
April 22, 2017 at 3:41 pm
Jeff Moden - Saturday, April 22, 2017 11:31 AM
Because he thought it would get rid of the cardinality warning in the query plan that comes from converting a column. Since he has a CASE expression with an int and a varchar, one of them has to be converted to the other.
He was converting the int to varchar and was getting a warning that the CAST may affect cardinality estimates.
However, the warning is given unnecessarily: CASTs in the SELECT clause don't affect cardinality estimates (as was mentioned earlier in the thread), hence the original cast is just fine (as was mentioned earlier in the thread), and even if the OP had changed the query to convert the varchar to int instead of the int to varchar, he would still have got the cardinality estimate warning (as mentioned earlier in the thread).
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 22, 2017 at 4:51 pm
GilaMonster - Saturday, April 22, 2017 3:41 PM
Thanks, Gail. I mostly got that from the original request, and I do appreciate the information on the message that will come up, because I've never seen such a thing from a SELECT list before.
But that brings me back to the original request and my not-so-clear question about it. To clarify: we have an AccountID as a secondary "thing" to be chosen if the primary "thing" is NULL. The message notwithstanding, why does the OP think it's OK to potentially drop leading zeros and some apparently significant dash characters (they wouldn't, well, shouldn't, be stored that way if they weren't), and also to mix data in the same column, even if this turns out to be a report? What is a "PLC Cd", and why are AccountIDs and "PLC Cd"s being mixed in the same column? Report or not, that seems like an accident waiting to happen because of the mixed use of data in a single denormalized column.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 22, 2017 at 5:56 pm
sreesree.96 - Wednesday, April 19, 2017 2:05 PM
Ok... as stated above, I have no idea why you'd want to denormalize a column with this data, even for a report. I'd be very interested in what the final business use of this query will actually be. I'd also be curious why it seems to be OK to put two different types of data (datatypes be damned) in the same column. What is a "PLC Cd", and why are you mixing it with AccountIDs in the same column, never mind a "Pool Number" in the same column?
Since you didn't alias the [Pool Number] column and haven't provided any other info about your tables, I created some data all in one table and then demonstrate below how this might be done using only the implicit conversion of ISNULL, which replaces the first CASE expression altogether. I don't know if it will actually help suppress the message you're getting because I'm not using two tables like you are. I just don't know enough about your tables or what they're being joined on to produce a two-table example that stands a chance of being correct.
/*********************************************************************************************************************
Create and populate a test table.
This isn't a part of the solution. We're just building something to test with.
**********************************************************************************************************************/
--===== If the test table already exists, drop it to make reruns in SSMS easier.
IF OBJECT_ID('tempdb..#TestTable','U') IS NOT NULL
DROP TABLE #TestTable
;
--===== Create and populate the test table with AccountIDs in the form of NNN-NNN-NNN.
-- We don't need an index here because we're going to scan the whole thing later.
-- AccountID renders out as a VARCHAR(11).
SELECT TOP (6000000) --Not to worry. Only takes about 9 seconds for 6 million rows.
AccountID = RIGHT(ABS(CHECKSUM(NEWID())%1000)+1000,3) + '-'
+ RIGHT(ABS(CHECKSUM(NEWID())%1000)+1000,3) + '-'
+ RIGHT(ABS(CHECKSUM(NEWID())%1000)+1000,3)
,[PLC Cd] = CASE
WHEN ABS(CHECKSUM(NEWID())%2) = 0 THEN NULL --Randomly makes about half the entries NULL.
ELSE ABS(CHECKSUM(NEWID())%1000000000)+1 --Swagged an INT up to 9 digits. Don't know the actual range.
END
,[Pool Number] = ABS(CHECKSUM(NEWID())%5) -- 0 to 4
INTO #TestTable
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
;
GO
CHECKPOINT;
GO
--===== Visually verify that everything is working as expected.
SELECT TOP (1000)
AccountID
,[PLC Cd]
,[Pool Number]
,Result = ISNULL([PLC Cd]
,CASE
WHEN [Pool Number] = 0
THEN STUFF(STUFF(AccountID ,8,1,''),4,1,'')
ELSE [Pool Number]
END)
FROM #TestTable
;
GO
--===== Check the performance of the Result.
DECLARE @Bitbucket INT;
SET STATISTICS TIME ON;
SELECT @Bitbucket = ISNULL([PLC Cd]
,CASE
WHEN [Pool Number] = 0
THEN STUFF(STUFF(AccountID ,8,1,''),4,1,'')
ELSE [Pool Number]
END)
FROM #TestTable;
SET STATISTICS TIME OFF;
--Jeff Moden
Change is inevitable... Change for the better is not.