expand field and fill with zeros

  • Jeff Moden wrote:

    Steve Collins wrote:

    Jeff Moden wrote:

    Lordy, no.  That's one of the slowest methods possible.  I'll be back to prove it.  Just don't use FORMAT!

    Jeff, maybe so, maybe so.  It appears the OP's doing things one at a time here tho.  So the penalty for a single case is maybe small.

    That's what I try to teach about.  While the penalty for a single case may be small, it's still a penalty.  If you do it the fast way all the time, you don't have penalties to worry about and you don't have to remember two types of code AND you don't ever have to worry about what happens when someone either uses your small stuff code on something bigger or someone decides to make your small stuff bigger.  You'll also develop the "habit" of "doing it right" all the time.

    Generally I agree and it's good to know the edge cases.  For what I do I admit to having a segmented view of performance issues.  If I'm working inside of a procedure on a row or a few rows and the rows have been located by primary key and the error handling is properly done, then all I care about is convenience and compactness of code.   Moving the decimal point... 8190ms/1,000,000 requests = 0.00819 ms per request.  Having said this, I'm switching going forward 🙂

     

    Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können

  • mbatchelor wrote:

    I am trying to create a row in a table that is created as: CREATE TABLE tax_table (Data char(275)).  Within this table, the "Data" column consists of 4 fields: last_name, first_name, middle_name, and amount_of_quarttax.  These are selected using a SELECT script and stored in a tmp file; then this script is used:

    DECLARE @data varchar(275),
            @last_name varchar(20),
            @first_name varchar(12),
            @middle_name varchar(1),
            @qtramt1 numeric(11,0)

    open tmp

    fetch tmp into
        @last_name, @first_name, @middle_name, @qtramt1

    while @@FETCH_STATUS = 0
    BEGIN
        Select @data = @last_name + @first_name + @middle_name
        Select @data = @data + RIGHT('00000000000' +
               CAST( (SUBSTRING(STR(@qtramt1),1,11)) as SYSNAME), 14)

        INSERT INTO MRB_StateUnemployment
        SELECT @data

        fetch tmp into
            @last_name, @first_name, @middle_name, @qtramt1
    END

    Here are the results:

    Batch               Mille       A0000   2329242

    This what I need:

    Batch               Mille       A00000002329242

    I hope this clears up what I am talking about.

    What's going on with this cursor which isn't really looping because it doesn't call fetch next?
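    Regardless of the looping question, the spaces in the output look like they come from STR(), which right-justifies its result to a default width of 10 characters, so the leading spaces survive the RIGHT() call.  Here's a minimal sketch of what I think is happening, assuming the target field is 14 characters wide:

    --===== Minimal repro: STR() right-justifies to a default width of 10 characters,
    --      so the leading spaces survive the RIGHT() call.  Converting the number
    --      directly avoids them.
    DECLARE @qtramt1 numeric(11,0) = 2329242;

    SELECT SpacePadded = RIGHT('00000000000' + SUBSTRING(STR(@qtramt1),1,11), 14)       -- '0000   2329242'
          ,ZeroPadded  = RIGHT('00000000000000' + CONVERT(varchar(14), @qtramt1), 14);  -- '00000002329242'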

    Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können

  • Steve Collins wrote:

    Jeff Moden wrote:

    Steve Collins wrote:

    Jeff Moden wrote:

    Lordy, no.  That's one of the slowest methods possible.  I'll be back to prove it.  Just don't use FORMAT!

    Jeff, maybe so, maybe so.  It appears the OP's doing things one at a time here tho.  So the penalty for a single case is maybe small.

    That's what I try to teach about.  While the penalty for a single case may be small, it's still a penalty.  If you do it the fast way all the time, you don't have penalties to worry about and you don't have to remember two types of code AND you don't ever have to worry about what happens when someone either uses your small stuff code on something bigger or someone decides to make your small stuff bigger.  You'll also develop the "habit" of "doing it right" all the time.

    Generally I agree and it's good to know the edge cases.  For what I do I admit to having a segmented view of performance issues.  If I'm working inside of a procedure on a row or a few rows and the rows have been located by primary key and the error handling is properly done, then all I care about is convenience and compactness of code.   Moving the decimal point... 8190ms/1,000,000 requests = 0.00819 ms per request.  Having said this, I'm switching going forward 🙂

    Aye... thank you for the feedback.  It sounds like you may have been "converted" 😀 so the following is for others that may not have been.

    Just to tell a bit about my sometimes stark "milliseconds matter" attitude, I've not yet worked in a shop where there wasn't a ton of legacy code already in place.  Generally speaking, there were no "easy" fixes for the performance and resource usage issues in any of the shops.  Sure, in some cases, adding a missing index could occasionally work wonders and so could a minor tweak here and there.  But, for the most part, the performance issues were spread out everywhere in the code in little places like the one this thread is about and, to be sure, a million rows has been "chump change" for more than a decade when it comes to the sizes of the tables I've had to work with.  I look at some of the 2 hour runs and add "timers" to each section of the code only to find out those two hours frequently consist of "7 seconds here, 10 seconds there", etc, etc.  On top of that, there's frequently the "cost of contention", especially in TempDB, and the fact that people end up doing scans on really large tables when they really only need to address a very small part of the table, etc, etc.

    Yes, I can fix the 7-10 second code to run in milliseconds but there's so much of it that it takes years to get to it all, only to find that people are still writing code in the same manner because they keep telling themselves that 7-10 seconds is good because they're working with a million rows.  It's not good... especially in this day and age.  The fact that the code is poorly documented (if at all) doesn't help when it comes to fixing the code, either (in the peer reviews I do, the code can be absolutely perfect but if it's uncommented or poorly commented, I won't let it go to test, never mind production).

    The other thing is that people look at something like the previous tests I've done and scoff at the differences as being rather insignificant, especially when being used for a million rows.  Again, that's a trivial number of rows in today's databases but let's continue to use that number of rows as an example.

    People forget that the tests I did were for just ONE column on ONE table in ONE piece of code.  You DO have to look at the larger picture.  How many columns of how many tables of how many pieces of code will this ONE technique be used in?  The "tolerance buildup" can add up very quickly when it comes to the overall health of the server.  This is what I mean by a "Death of a thousand small cuts".

    For example, let's just say the requirement for this ONE thing had 10 columns that needed such formatting.  A lot of people would, of course, say that you shouldn't do such things in SQL Server, but people frequently don't have the luxury to do otherwise.  And even when they do have to do it in SQL Server, why would they settle for a slower but easier-to-code method?  It makes no sense at all.

    So, let's check things out for 10 columns within SQL Server to meet just ONE requirement on ONE table in ONE piece of code.  Here's the test harness.  Again, the @BitBucket variable is used to prevent display time and the time to store to disk from being included in the test.  All the data ends up living in memory, so we are only measuring the actual performance of the code, save for about 800 milliseconds to read the data from RAM for each run.

    --============================================================================================================
    -- Setup a million row test table.
    --============================================================================================================
    --===== If the test table already exists, drop it to make reruns in SSMS easier.
    SET STATISTICS TIME OFF;
    IF OBJECT_ID('tempdb..#Data','U') IS NOT NULL DROP TABLE #Data
    ;
    --===== Create a test table of random constrained values from 1 to 100,000.
    -- This isn't a part of the solution... we're just building some test data here.
    -- Don't let the million rows scare ya... this takes less than a second on my laptop.
    SELECT Col01 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col02 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col03 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col04 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col05 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col06 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col07 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col08 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col09 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col10 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    INTO #Data
    FROM Util.dbo.fnTally(1,1000000) --Get this from the link in my signature line below
    ;
    GO
    --===== This first test is just to "Prime the pump" and does not meet requirements.
    RAISERROR('--===== BASELINE (Implicit Conversion Only) =====================================',0,0) WITH NOWAIT;
    DECLARE @BitBucket VARCHAR(20); --Used to take disk and display times out of the picture.
    SET STATISTICS TIME ON;
    SELECT @BitBucket = Col01
    ,@BitBucket = Col02
    ,@BitBucket = Col03
    ,@BitBucket = Col04
    ,@BitBucket = Col05
    ,@BitBucket = Col06
    ,@BitBucket = Col07
    ,@BitBucket = Col08
    ,@BitBucket = Col09
    ,@BitBucket = Col10
    FROM #Data;
    SET STATISTICS TIME OFF;
    GO
    --============================================================================================================
    -- Test Multiple Methods (FORMAT d14 vs. String of Zeros methods)
    -- Total runtime of these tests is < 72 Seconds on my LapTop.
    -- Most of the time is because the losing code in this test loses REALLY bad. :D
    --============================================================================================================
    RAISERROR('--===== FORMAT d14 method =====================================================',0,0) WITH NOWAIT;
    DECLARE @BitBucket VARCHAR(20); --Used to take disk and display times out of the picture.
    SET STATISTICS TIME ON;
    SELECT @BitBucket = FORMAT(CAST(Col01 AS BIGINT), 'd14')
    ,@BitBucket = FORMAT(CAST(Col02 AS BIGINT), 'd14')
    ,@BitBucket = FORMAT(CAST(Col03 AS BIGINT), 'd14')
    ,@BitBucket = FORMAT(CAST(Col04 AS BIGINT), 'd14')
    ,@BitBucket = FORMAT(CAST(Col05 AS BIGINT), 'd14')
    ,@BitBucket = FORMAT(CAST(Col06 AS BIGINT), 'd14')
    ,@BitBucket = FORMAT(CAST(Col07 AS BIGINT), 'd14')
    ,@BitBucket = FORMAT(CAST(Col08 AS BIGINT), 'd14')
    ,@BitBucket = FORMAT(CAST(Col09 AS BIGINT), 'd14')
    ,@BitBucket = FORMAT(CAST(Col10 AS BIGINT), 'd14')
    FROM #Data;
    SET STATISTICS TIME OFF;
    GO
    RAISERROR('--===== String of Zeros method =================================================',0,0) WITH NOWAIT;
    DECLARE @BitBucket VARCHAR(20); --Used to take disk and display times out of the picture.
    SET STATISTICS TIME ON;
    SELECT @BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col01),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col02),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col03),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col04),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col05),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col06),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col07),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col08),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col09),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col10),14)
    FROM #Data;
    SET STATISTICS TIME OFF;
    GO

    Here are the results of that run... as you can see, milliseconds suddenly matter.  Imagine if you have dozens of columns on multiple tables in multiple stored procs running dozens or even hundreds of times per day. 😉  It's also important to note that the faster method is actually "sub-linear".  In the single column test, it took 191 ms to do 1 column of a million rows.  If it were linear, it would take 1,910 ms to do 10 columns but it only took 1,188 ms... that's about 38% less time than expected.  The slower method is also slightly sub-linear but that slight difference doesn't matter because it takes over a minute to run now.  Overall, the faster method is more than 57 times faster than the slower method.  Imagine if you did that with all your code? 😀

    (1000000 rows affected)
    --===== BASELINE (Implicit Conversion Only) =====================================
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 812 ms, elapsed time = 802 ms.
    --===== FORMAT d14 method =====================================================

    SQL Server Execution Times:
    CPU time = 66578 ms, elapsed time = 69470 ms.
    --===== String of Zeros method =================================================

    SQL Server Execution Times:
    CPU time = 1172 ms, elapsed time = 1188 ms.
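
    Just as a one-row sanity check for anyone worried that the faster method changes the output, both expressions return the same 14-character string (the value below is just an example; FORMAT needs SQL Server 2012 or later):

    --===== One-row sanity check: both methods return the same zero-padded result.
    DECLARE @SomeValue DECIMAL(11,0) = 12345;

    SELECT FormatMethod = FORMAT(CAST(@SomeValue AS BIGINT), 'd14')                     -- '00000000012345'
          ,ZerosMethod  = RIGHT('00000000000000'+CONVERT(VARCHAR(14),@SomeValue),14);   -- '00000000012345'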

    And, please... for the love of your servers and your customers, stop all the excuses about how such optimizations slow the Developers down.  That's totally lame BS.  Teach your Developers how to do things the right way and stop justifying doing it wrong by saying stupid things like "it takes longer to type 53 characters than it does 36".  The extra second or two to type the additional 17 characters will quite literally save you hours of agony and a whole lot of money hiring a consultant to fix your junk.

    Remember... "MILLISECONDS MATTER!"  😀

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Just one point - not sure it makes a difference in performance but you can avoid this:

    @BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col01),14)

    By using this:

    @BitBucket = RIGHT(CONCAT('00000000000000', Col01), 14)

    Here are the results on my system using CONCAT compared to CONVERT.  If you run this multiple times you should see that sometimes CONCAT is faster and sometimes CONVERT is faster...

    --===== String of Zeros method =================================================

    SQL Server Execution Times:
    CPU time = 1922 ms, elapsed time = 1920 ms.
    --===== String of Zeros method - with CONCAT ===================================

    SQL Server Execution Times:
    CPU time = 1875 ms, elapsed time = 1879 ms.
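
    If it helps to see them side by side, here's a self-contained one-row comparison (the value is just an example).  CONCAT does the implicit conversion to a string, so the explicit CONVERT isn't needed:

    --===== One-row comparison: CONCAT implicitly converts the number to a string.
    DECLARE @Col01 DECIMAL(11,0) = 2329242;

    SELECT WithConvert = RIGHT('00000000000000'+CONVERT(VARCHAR(14),@Col01),14)   -- '00000002329242'
          ,WithConcat  = RIGHT(CONCAT('00000000000000', @Col01), 14);             -- '00000002329242'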

     

     

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

    Jeff, are the results reversed?  It lists the FORMAT d14 method as 802 ms.  Maybe the copy/paste went off?

    Jeff Moden wrote:

    And, please... for the love of your servers and your customers, stop all the excuses about how such optimizations slow the Developers down.  That's totally lame BS.  Teach your Developers how to do things the right way and stop justifying doing it wrong by saying stupid things like "it takes longer to type 53 characters than it does 36".  The extra second or two to type the additional 17 characters will quite literally save you hours of agony and a whole lot of money hiring a consultant to fix your junk.

    Remember... "MILLISECONDS MATTER!"  😀

    For my part I really appreciate the deep analysis into components.  It's vital to make progress in a good way so I think it's essential research.  It NEVER takes longer to do things correctly the first time.  That's what all my years of programming experience have taught me.  Over time in a similar way I've grown to be focused on strictly adhering to normalization rules and using surrogate keys.  Whenever I've veered away from strict adherence it seems to always come back to bite me in the booty.  In the other thread on identity pk's I'm ready to throw down this time 🙂

     

    Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können

  • Jeffrey Williams wrote:

    Just one point - not sure it makes a difference in performance but you can avoid this:

    @BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col01),14)

    By using this:

    @BitBucket = RIGHT(CONCAT('00000000000000', Col01), 14)

    Here are the results on my system using CONCAT compared to CONVERT.  If you run this multiple times you should see that sometimes CONCAT is faster and sometimes CONVERT is faster...

    --===== String of Zeros method =================================================

    SQL Server Execution Times:
    CPU time = 1922 ms, elapsed time = 1920 ms.
    --===== String of Zeros method - with CONCAT ===================================

    SQL Server Execution Times:
    CPU time = 1875 ms, elapsed time = 1879 ms.

    Ah... nicely done.  I'm so used to having to brute force stuff on older systems that I forget about cool stuff like CONCAT.  I'm adding that bad boy to my test harness.  And, yeah, I can confirm that it takes turns winning with the original "String of Zeros" method.  The code with CONCAT is also only 4 characters longer than the "FORMAT d14" method, leaving little excuse to ever use the incredibly slothful FORMAT function.  Thanks, Jeffrey.

    Here's the 10 column test with both "String of Zeros" methods.

    -- https://www.sqlservercentral.com/forums/topic/expand-field-and-fill-with-zeros
    --============================================================================================================
    -- Setup a million row test table.
    --============================================================================================================
    --===== If the test table already exists, drop it to make reruns in SSMS easier.
    SET STATISTICS TIME OFF;
    IF OBJECT_ID('tempdb..#Data','U') IS NOT NULL DROP TABLE #Data
    ;
    --===== Clear the guns. 'DON'T DO THIS ON A PROD BOX!!!'
    -- I just want to make sure that an automatic checkpoint doesn't occur during timed testing.
    CHECKPOINT;
    DBCC DROPCLEANBUFFERS
    ;
    --===== Create a test table of random constrained values from 1 to 100,000.
    -- This isn't a part of the solution... we're just building some test data here.
    -- Don't let the million rows scare ya... this takes less than a second on my laptop.
    SELECT Col01 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col02 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col03 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col04 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col05 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col06 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col07 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col08 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col09 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    ,Col10 = CONVERT(DECIMAL(11,0),ABS(CHECKSUM(NEWID())%100000)+1)
    INTO #Data
    FROM Util.dbo.fnTally(1,1000000) --Get this from the link in my signature line below
    ;
    GO
    --===== This first test is just to "Prime the pump" and does not meet requirements.
    RAISERROR('--===== BASELINE (Implicit Conversion Only) =====================================',0,0) WITH NOWAIT;
    DECLARE @BitBucket VARCHAR(20); --Used to take disk and display times out of the picture.
    SET STATISTICS TIME ON;
    SELECT @BitBucket = Col01
    ,@BitBucket = Col02
    ,@BitBucket = Col03
    ,@BitBucket = Col04
    ,@BitBucket = Col05
    ,@BitBucket = Col06
    ,@BitBucket = Col07
    ,@BitBucket = Col08
    ,@BitBucket = Col09
    ,@BitBucket = Col10
    FROM #Data;
    SET STATISTICS TIME OFF;
    GO
    --============================================================================================================
    -- Test Multiple Methods (String of Zeros w/CONCAT vs. String of Zeros methods)
    -- Both methods are fast, so the total runtime of these tests is only a few seconds on my LapTop.
    --============================================================================================================
    RAISERROR('--===== String of Zeros method with CONCAT =====================================',0,0) WITH NOWAIT;
    SET STATISTICS TIME ON;
    GO
    DECLARE @BitBucket VARCHAR(20); --Used to take disk and display times out of the picture.
    SELECT @BitBucket = RIGHT(CONCAT('00000000000000',Col01),14)
    ,@BitBucket = RIGHT(CONCAT('00000000000000',Col02),14)
    ,@BitBucket = RIGHT(CONCAT('00000000000000',Col03),14)
    ,@BitBucket = RIGHT(CONCAT('00000000000000',Col04),14)
    ,@BitBucket = RIGHT(CONCAT('00000000000000',Col05),14)
    ,@BitBucket = RIGHT(CONCAT('00000000000000',Col06),14)
    ,@BitBucket = RIGHT(CONCAT('00000000000000',Col07),14)
    ,@BitBucket = RIGHT(CONCAT('00000000000000',Col08),14)
    ,@BitBucket = RIGHT(CONCAT('00000000000000',Col09),14)
    ,@BitBucket = RIGHT(CONCAT('00000000000000',Col10),14)
    FROM #Data;
    GO 5
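    --===== Note: a number after GO runs the preceding batch that many times, so the batch above executes 5 times.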
    SET STATISTICS TIME OFF;
    GO
    RAISERROR('--===== String of Zeros method =================================================',0,0) WITH NOWAIT;
    SET STATISTICS TIME ON;
    GO
    DECLARE @BitBucket VARCHAR(20); --Used to take disk and display times out of the picture.
    SELECT @BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col01),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col02),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col03),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col04),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col05),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col06),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col07),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col08),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col09),14)
    ,@BitBucket = RIGHT('00000000000000'+CONVERT(VARCHAR(14),Col10),14)
    FROM #Data;
    GO 5
    SET STATISTICS TIME OFF;
    GO

    Here are the run results... a virtual tie for 5 runs each.

    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    (1000000 rows affected)
    --===== BASELINE (Implicit Conversion Only) =====================================
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 797 ms, elapsed time = 803 ms.
    --===== String of Zeros method with CONCAT =====================================
    Beginning execution loop
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 1172 ms, elapsed time = 1170 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 1187 ms, elapsed time = 1173 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 1157 ms, elapsed time = 1164 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 1187 ms, elapsed time = 1176 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 1156 ms, elapsed time = 1163 ms.
    Batch execution completed 5 times.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.
    --===== String of Zeros method =================================================
    Beginning execution loop
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 1172 ms, elapsed time = 1172 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 1157 ms, elapsed time = 1159 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 1171 ms, elapsed time = 1161 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 1157 ms, elapsed time = 1157 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 1156 ms, elapsed time = 1162 ms.
    Batch execution completed 5 times.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    Thanks again, Jeffrey.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Steve Collins wrote:

    For my part I really appreciate the deep analysis into components.  It's vital to make progress in a good way so I think it's essential research.  It NEVER takes longer to do things correctly the first time.  That's what all my years of programming experience have taught me.  Over time in a similar way I've grown to be focused on strictly adhering to normalization rules and using surrogate keys.  Whenever I've veered away from strict adherence it seems to always come back to bite me in the booty.  In the other thread on identity pk's I'm ready to throw down this time 🙂

    Right there with you.  I wouldn't bother on that other thread, though.  That subject keeps cropping up over and over and over, and some of the participants do too.  It's one of those subjects where everyone is an "expert" and they frequently base their arguments on the same thing you and I and others on this thread could claim... years and even decades of experience.

    And that's the key.  A lot of it is based on "opinion".  I hate "opinion" for such matters.  Opinions are like butt-holes... almost everyone has one and they usually stink. 😀  And, if they don't have one, they usually have really bad breath. 😀 😀 😀

    I'd rather let the code do the talking.  Until someone writes a really good article with well written simple but substantial examples that are clearly and easily demonstrable by anyone that wants to copy the code, talking with people on the subject is annoying at best.  Even if such a well written article were available, people would still fall back on supposedly "expert experience" and argue their opinion(s) without the use of code.


    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • We might even be able to save those 4 extra characters - depending on the source data.  If we know for sure that we have a minimum length of at least 4 characters - then we could shorten the number of zeroes...but it really isn't worth it 😉

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Jeff Moden wrote:

    Right there with you.  I wouldn't bother on that other thread, though.  That subject keeps cropping up over and over and over, and some of the participants do too.  It's one of those subjects where everyone is an "expert" and they frequently base their arguments on the same thing you and I and others on this thread could claim... years and even decades of experience.

    After that other thread I went back to read what everyone on SSC had written about identity primary keys in the past.  In my opinion Grant Fritchey has done a great job correctly describing the advantages of system generated integer primary keys.

    https://www.sqlservercentral.com/forums/topic/database-design-theory-regarding-best-practices-for-querying-tables

    https://www.sqlservercentral.com/forums/topic/index-on-identity-column/page/2

    There are others I can't find now too.

    Jeff Moden wrote:

    I'd rather let the code do the talking.  Until someone writes a really good article with well written simple but substantial examples that are clearly and easily demonstrable by anyone that wants to copy the code, talking with people on the subject is annoying at best.  Even if such a well written article were available, people would still fall back on supposedly "expert experience" and argue their opinion(s) without the use of code.

    Ha, I'm right there with you.  Besides code there's just bullshit.  Partly because I've experienced great frustration with existing articles and tools, I created my open source (Apache 2.0) C# data access library for auto-magically turning SQL Server CRUD procedures into API endpoints.  To demonstrate how the framework works we have to have sample stored procedures which expose various features.  It's basically what you're describing: reference code to demonstrate key handling and error handling.  As of this week it's ready to be criticized 🙂   I'll send you a PM with my SQL code if you're interested.

    Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können

  • Sounds good.  Looking forward to it.

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeffrey Williams wrote:

    We might even be able to save those 4 extra characters - depending on the source data.  If we know for sure that we have a minimum length of at least 4 characters - then we could shorten the number of zeroes...but it really isn't worth it 😉

    Agreed.  On top of that, if you stop and think about it, how many people are actually going to have an 11 digit integer that needs to be LPAD'ed to 14 characters?

    It DID make for an interesting test, though...

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
