October 14, 2020 at 4:38 pm
ScottPletcher wrote: Jeff Moden wrote: It also increases the number of characters to be transmitted by a whopping 25% compared with "YYYYMMDD", and it was specified and designed in an age where 110-300 baud caused stupidity like 2-digit years to save space and transmission time.
It wasn't "stupidity". That's rather a rude and arrogant characterization (reminds me of someone else, just can't put my finger on who :-)). One more byte per date (in packed decimal format (typical at the time), yyyy took one more byte than yy) was significant at the time, particularly in RAM, but also for disk. RAM for even large systems then was measured in MB. Both RAM and disk were extraordinarily more expensive than today. (I'll spare all of us any lecture about punched cards, since that's irrelevant.)
Also, note that smalldatetime survived into the 2000s, again because it saved bytes.
I don't use the term "stupid" or its derivatives very often, so, no... I do realize that sounds arrogant, but at the time it was created, it was "stupidity": it increased the load on transmission systems by 25% for such columns of data at the very same time that people were trying to cut down on byte sizes by using 2-digit years. And the 2-digit year thing keeps ringing in my ears... that was a problem with mortgage calculations 30 years before Y2K happened. Again, it sounds arrogant, but that was another fine bit of stupidity, because most people waited until 2000 to even begin trying to fix things.
Compared to what people do with XML and JSON, it's no longer "stupid", because those formats are the larger mistake. 😀
Hmm, dropping YYYY to YY saved space and resources all around; that was the only reason for it. It was not stupid, because resource availability was much less then. It did not cause issues in future calcs, because longer dates were used when needed in calcs, and indeed in storage when absolutely needed. As we know, millions upon millions of people had mortgages in those days, and some dates in almost every shop, including banks (remember the "ATMs won't work" scare), used YY (vs. YYYY) dates.
As to adding dashes (or other formatting) to dates: yes, that is stupid in transmissions, both today and in the past, since it adds wasted bytes and ambiguity (in the real world at least, if perhaps not in ideal-world "standards"). But that's a completely different issue from using a 2-digit YY.
As to fixing it, no, IT folks fixed things before 2000 because otherwise it would truly have been too late.
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
October 14, 2020 at 5:19 pm
Heh... for a lot of folks, they actually did wait until it broke. 😀 Most everyone else (that I knew at the time) waited until 1999. I made my first conversion for it back in 1980. 😀
You're also incorrect: 2-digit years actually did cause issues back in the '70s for things like mortgages. The supposed fix was the "two-digit year cutoff" that exists in SQL Server even today. I also saw that "fix" incorporated into some of the IMS systems I worked with back in the '80s.
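For anyone who hasn't run into it, here's a quick T-SQL sketch of that cutoff behavior (this assumes the default cutoff of 2049 on your server):

    -- With the default "two digit year cutoff" of 2049, two-digit years 00-49
    -- are interpreted as 2000-2049 and 50-99 as 1950-1999.
    SELECT CONVERT(datetime, '01/01/49', 1) AS Becomes2049,  -- style 1 = mm/dd/yy
           CONVERT(datetime, '01/01/50', 1) AS Becomes1950;

    -- The cutoff itself is a server-level configuration option:
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'two digit year cutoff';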
What's crazy is that a two-digit year took 2 bytes to store as character data, but 2 bytes is also enough to store the number 65535, which would have allowed folks to store full dates up through 2079-06-06 (just like the SMALLDATETIME datatype does) in the same two bytes, as a count of days. Of course, even though the calculations to convert days to dates are relatively simple, vacuum-tube and transistorized CPU time was also expensive, so folks made that tradeoff. They also would probably have made the same mistake MS made in thinking that the year 2000 wasn't a leap year, from not understanding the actual rules for what a leap year is. Excel still has that problem, and the "fix" was to base their calculations on the new base date of 2000-03-01. Lordy, did that ever throw things for a loop for me when they did that. Fortunately, they also left the old way, which I had already compensated for in my code.
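A one-line sanity check of that 2-byte math, as a T-SQL sketch:

    -- The largest unsigned 2-byte value is 65535; counting that many days
    -- forward from 1900-01-01 lands exactly on smalldatetime's upper bound.
    SELECT DATEADD(DAY, 65535, CAST('19000101' AS date)) AS MaxTwoByteDate;  -- 2079-06-06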
Heh... considering how long calculator chips have been available, and the fact that 2 bytes can get you out to 2079 in the same amount of storage as a 2-digit year, you just have to wonder why people didn't act a whole lot sooner. Too many available workarounds might be the reason, like the two-digit year cutoff thing and people's insane need to be able to read text-based transmissions and storage directly.
--Jeff Moden
Change is inevitable... Change for the better is not.
October 14, 2020 at 5:50 pm
Again, as I stated, it took only 1 more byte to store YYYY vs YY. Dates were stored as numeric and not char, also to save space. YYMMDD took 4 bytes as a number (packed decimal) but would of course be 6 bytes as char.
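To make the byte math explicit (packed decimal stores two digits per byte, plus a sign nibble), a trivial sketch:

    -- For even digit counts: digits/2 bytes for the digits, +1 byte to
    -- accommodate the trailing sign nibble.
    SELECT 6/2 + 1 AS PackedBytesYYMMDD,    -- 4 bytes
           8/2 + 1 AS PackedBytesYYYYMMDD;  -- 5 bytes: exactly 1 more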
Dates are often still stored as an integer, in full YYYYMMDD format now, of course. This should not be done any more, but it still is. It made real sense when there was no "date" type in SQL Server, only datetime or smalldatetime (which stores its date portion as a 2-byte day count from 1900-01-01, topping out at 2079-06-06, and saved 4 bytes per value vs. datetime). I still have trouble getting developers to use date rather than smalldatetime, even when they only need to store a date.
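For reference, converting one of those int YYYYMMDD "dates" back to a real date still takes a round trip through character data; a minimal sketch (the value is made up):

    DECLARE @IntDate int = 20201015;  -- hypothetical stored value
    -- style 112 is the ISO yyyymmdd format, so the char(8) form converts cleanly
    SELECT CONVERT(date, CONVERT(char(8), @IntDate), 112) AS RealDate;  -- 2020-10-15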
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
October 15, 2020 at 11:12 am
Jeff's comment about the cut-off for 2 digit years reminded me that the millennium bug was "fixed" at one place I worked by using YY>20 as the cutoff to guess that the century was 19. If they're still using that application, they must have had an interesting time in January this year.
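A hypothetical reconstruction of that "fix" in T-SQL, just to show how it falls over in 2021:

    DECLARE @yy int = 21;  -- a two-digit year arriving in January 2021
    SELECT CASE WHEN @yy > 20 THEN 1900 + @yy
                ELSE 2000 + @yy
           END AS InferredYear;  -- returns 1921... oops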
October 15, 2020 at 2:15 pm
ScottPletcher wrote: Again, as I stated, it took only 1 more byte to store YYYY vs YY. Dates were stored as numeric and not char, also to save space. YYMMDD took 4 bytes as a number (packed decimal) but would of course be 6 bytes as char.
That's actually part of my frustration about the past. Just two bytes is all that's required to cover all dates (not including time, obviously) from 1900 to 2079 by storing the date as the number of days since 19000101. Obviously, that wouldn't help for storing information about history (although the 1 extra byte you're talking about would have covered all dates from 00010101 through 99991231 and more, even though pre-Gregorian dates are still a PITA), but it would have been good enough for business use. And, like I said, the conversion formula just isn't that complicated. It was even included in the owner's manual for the Texas Instruments TI-55 calculator I bought way back in 1977, and I'm pretty sure the formula was available long before then (it inherently includes the ability to do date math like "days between dates", etc.). It even handled the extra leap year that occurs when the year is divisible by 400, which is the part of the leap-year formula that MS and a lot of other companies missed.
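For the curious, here's a sketch of that kind of conversion in T-SQL, routed through a Julian Day Number so the divisible-by-100 and divisible-by-400 rules fall out of the integer arithmetic (the constants are the standard JDN ones; the DATEDIFF column is just a cross-check):

    DECLARE @y int = 2079, @m int = 6, @d int = 6;  -- smalldatetime's last day

    -- Shift the year to start in March; Jan/Feb count as months 13/14 of the
    -- prior year, which pushes leap day to the end of the cycle.
    DECLARE @a int = (14 - @m) / 12;
    DECLARE @y2 int, @m2 int;
    SELECT @y2 = @y + 4800 - @a,
           @m2 = @m + 12 * @a - 3;

    SELECT @d + (153 * @m2 + 2) / 5             -- days in the elapsed months
              + 365 * @y2
              + @y2 / 4 - @y2 / 100 + @y2 / 400 -- the full Gregorian leap rules
              - 32045                           -- = Julian Day Number
              - 2415021                         -- minus the JDN of 1900-01-01
              AS DaysSince19000101,             -- 65535
           DATEDIFF(DAY, '19000101', DATEFROMPARTS(@y, @m, @d)) AS SanityCheck;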
The whole Y2K thing (along with the leap year thing for 2000) should never have happened. It was particularly frustrating for me because all of my stuff had been 4-digit-year capable, including the year-2000 leap year, since 1980. I was the director of MIS for a small telephone company in 1999, and we spent exactly zero hours prepping for it because of that. The frustrating part was that I spent a little over 150 hours answering stupid letters from all the states/PUCs, each requiring the original form questionnaire they sent us to be filled out, explaining in quite some detail how we had prepared (most of the forms were 4 or more pages long). None of the forms allowed attachments or "tape overs".
I also checked with my previous employer of 15 years... none of the stuff I had written for them had any problems with Y2K or the leap year, but they did confirm that the mainframe stuff others had written needed a fair bit of prep work.
And that's why, with no intent of arrogance, I used the word "stupid" about all of this. If a non-degreed non-member of IT could figure that out, why did Y2K ever need to be an issue. Notice that's not actually a question. The reason is pretty apparent 😀
--Jeff Moden
Change is inevitable... Change for the better is not.
October 15, 2020 at 2:20 pm
I kind of get it for stuff that was done 50 years ago or whatever, when people were paying by the byte for storage, so saving a couple of bytes on multiple dates added up. What's infuriating is that people still, to this day, are doing new development using 2-digit years.....
October 15, 2020 at 3:33 pm
ScottPletcher wrote: Again, as I stated, it took only 1 more byte to store YYYY vs YY. Dates were stored as numeric and not char, also to save space. YYMMDD took 4 bytes as a number (packed decimal) but would of course be 6 bytes as char.
That's actually part of my frustration about the past. Just two bytes is all that's required to cover all dates (not including time, obviously) from 1900 to 2079 by storing the date as the number of days since 19000101. Obviously, that wouldn't help for storing information about history (although the 1 extra byte you're talking about would have covered all dates from 00010101 through 99991231 and more, even though pre-Gregorian dates are still a PITA), but it would have been good enough for business use. And, like I said, the conversion formula just isn't that complicated. It was even included in the owner's manual for the Texas Instruments TI-55 calculator I bought way back in 1977, and I'm pretty sure the formula was available long before then (it inherently includes the ability to do date math like "days between dates", etc.). It even handled the extra leap year that occurs when the year is divisible by 400, which is the part of the leap-year formula that MS and a lot of other companies missed.
The whole Y2K thing (along with the leap year thing for 2000) should never have happened. It was particularly frustrating for me because all of my stuff had been 4-digit-year capable, including the year-2000 leap year, since 1980. I was the director of MIS for a small telephone company in 1999, and we spent exactly zero hours prepping for it because of that. The frustrating part was that I spent a little over 150 hours answering stupid letters from all the states/PUCs, each requiring the original form questionnaire they sent us to be filled out, explaining in quite some detail how we had prepared (most of the forms were 4 or more pages long). None of the forms allowed attachments or "tape overs".
I also checked with my previous employer of 15 years... none of the stuff I had written for them had any problems with Y2K or the leap year, but they did confirm that the mainframe stuff others had written needed a fair bit of prep work.
And that's why, with no intent of arrogance, I used the word "stupid" about all of this. If a non-degreed non-member of IT could figure that out, why did Y2K ever need to be an issue. Notice that's not actually a question. The reason is pretty apparent 😀
The overhead of converting a day displacement to a date every time would have been excessive for that time. Besides, binary was just not used to store dates or similar types of data then. [Unix-based systems used a displacement from an epoch, but in seconds since 1970-01-01. And it was a pain to always have to do the calc before you knew what the date was when looking at raw data in the record.]
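To illustrate that "you can't read the raw data" pain, a quick sketch: a Unix-style epoch offset means nothing at a glance until you do the conversion (the offset below is an arbitrary example).

    -- 1602720000 seconds after 1970-01-01 00:00:00 is opaque until converted:
    SELECT DATEADD(SECOND, 1602720000, CAST('19700101' AS datetime)) AS Readable;  -- 2020-10-15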
As for "although the 1 extra byte you're talking about would have covered all dates from 00010101 through 99991231 and more", that's just wrong. There was no 3-byte binary storage, then nor now. In SQL Server one can define smallint (2 bytes) and int (4 bytes), but not a 3-byte binary value. Luckily, with row compression, you can get the same effect from an int.
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
October 15, 2020 at 4:37 pm
As for "although the 1 extra byte you're talking about would have covered all dates from 00010101 through 99991231 and more", that's just wrong. There was no 3-byte binary storage, then nor now. In SQL Server one can define smallint (2 bytes) and int (4 bytes), but not a 3-byte binary value. Luckily, with row compression, you can get the same effect from an int.
One of the more interesting mainframe date formats I've seen used 3-byte storage: 1 byte for the year, with the first 4 bits being the number of decades since 1900 and the last 4 bits the year within that decade, then 1 byte for the month and 1 byte for the day of the month. That gives it a range of 1900-01-01 through 2059-12-31... not the greatest range, but there's no ambiguity like there is with 2-digit years.
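Out of curiosity, here's a sketch of decoding that layout in T-SQL (the sample bytes are made up):

    DECLARE @raw binary(3) = 0xC90C1F;  -- hypothetical stored value

    -- Byte 1: high nibble = decades since 1900, low nibble = year in decade.
    -- Bytes 2 and 3: month and day, stored as plain binary values.
    DECLARE @year  int = 1900
                       + 10 * (CAST(SUBSTRING(@raw, 1, 1) AS int) / 16)
                       +      (CAST(SUBSTRING(@raw, 1, 1) AS int) % 16),
            @month int = CAST(SUBSTRING(@raw, 2, 1) AS int),
            @day   int = CAST(SUBSTRING(@raw, 3, 1) AS int);

    SELECT DATEFROMPARTS(@year, @month, @day) AS DecodedDate;  -- 2029-12-31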