November 8, 2011 at 9:21 am
Gianluca Sartori (11/7/2011)
Totally different topic: this thread asks how to format datetime values in T-SQL. I think this is a very bad habit that needs to be kicked, and I put together an article about it (well, it was meant to be a post for my blog, but I'm sure it would have a larger audience here).
Is anyone interested in taking a look at the draft and giving some feedback?
If you still need someone, I'd be up for this.
---------------------------------------------------------
How best to post your question
How to post performance problems
Tally Table: What it is and how it replaces a loop
"stewsterl 80804 (10/16/2009)I guess when you stop and try to understand the solution provided you not only learn, but save yourself some headaches when you need to make any slight changes."
November 8, 2011 at 12:09 pm
Gianluca Sartori (11/7/2011)
Tom, you made a very good point, and it would be great if you gave me permission to include your considerations in my article.
You certainly have my permission, Gianluca. I think it's something you've thought a lot about yourself, and I doubt that I can add anything new (except to remind you yet again how awful the real world is).
I strongly believe that the ideal you want to promote in your article is good, particularly when combined with practical advice about how to do conversions when they are unfortunately unavoidable. My worry is that the ideal is often not achievable (not for technical reasons usually - it's all political/commercial). In fact I would probably go further (more idealistic) than you, since I firmly believe that the structure of the base tables (and even what base tables exist, let alone what columns they have and what the datatypes of those columns are) should be hidden from the application layer - apps can see read-only views, have write access only through stored procedures, and can't therefore prevent schema development/upgrade by being too expensive to change. In that world, dates and times would indeed be passed across the database interface as character strings, probably ISO 8601 conformant (because datetime and datetime2 are not types available in Cobol, Excel, Java, Fortran, LML, Lisp, C++, SML, and so on), but the storage format would be whatever the DBA determined was best, and could be changed with no impact at all on application logic.
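To make that concrete, here is a minimal sketch of such an interface; the schema, table, and procedure names are hypothetical, purely for illustration. The application only ever sees an ISO 8601 string; the stored type stays the DBA's business and can change without touching application code.
CREATE SCHEMA app;
GO
CREATE TABLE dbo.OrderHeader (
    OrderID  int IDENTITY(1,1) PRIMARY KEY,
    PlacedAt datetime2(0) NOT NULL          -- storage format is the DBA's choice
);
GO
-- Read-only view: the date crosses the interface as an ISO 8601 string (style 126)
CREATE VIEW app.OrderHeader
AS
SELECT OrderID,
       CONVERT(varchar(19), PlacedAt, 126) AS PlacedAt
FROM dbo.OrderHeader;
GO
-- Writes go through a procedure that accepts the ISO 8601 string
CREATE PROCEDURE app.AddOrder @PlacedAt varchar(19)
AS
INSERT INTO dbo.OrderHeader (PlacedAt)
VALUES (CONVERT(datetime2(0), @PlacedAt, 126));
GO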
Indeed, there are good reasons to do the formatting in SQL, but many developers just do it because it comes quicker, not because they are forced by an unfortunate fate. My intent is to stress the negative impacts of storing and passing string representations of dates instead of actual dates but also to enumerate all the built-in conversion methods that I'm aware of, plus a couple of custom T-SQL functions that I wrote for this very purpose.
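For illustration (and just as a sketch, not the article itself), these are the kind of built-in CONVERT styles I mean; the formatting happens only at the presentation edge, while the stored value stays a proper datetime:
DECLARE @d datetime = GETDATE();
SELECT CONVERT(char(8),     @d, 112) AS [yyyymmdd],             -- ISO basic date
       CONVERT(char(10),    @d, 120) AS [yyyy-mm-dd],           -- ODBC canonical, date part only
       CONVERT(varchar(19), @d, 120) AS [yyyy-mm-dd hh:mi:ss],  -- ODBC canonical
       CONVERT(varchar(23), @d, 126) AS [ISO 8601],             -- yyyy-mm-ddThh:mi:ss.mmm
       CONVERT(char(10),    @d, 101) AS [mm/dd/yyyy],           -- US
       CONVERT(char(10),    @d, 103) AS [dd/mm/yyyy];           -- British/French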
In the end, I live in the real world and the main DB I'm working on now contains all the evil you can think of. A date stored as string is nothing compared to the sins I have to fight on a daily basis. 🙂
I worry about the concept of passing actual dates as opposed to strings; it's too language and hardware dependent. If I want to transfer a datetime across a network to another machine, different hardware, different OS, app written in different language, what format do I use? It pretty much has to be a character string, not some efficient binary representation of a date and time.
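For what it's worth, the ISO 8601 form with the 'T' separator is the one string format SQL Server reads the same way whatever the session's language or DATEFORMAT setting, which is what makes it a sensible wire format. A quick illustration:
SET LANGUAGE british;  -- dmy date order
SELECT CAST('2011-11-08T14:30:00' AS datetime) AS iso8601_always_8_november,
       CAST('08/11/2011'          AS datetime) AS depends_on_settings;  -- 8 Nov here, 11 Aug under us_english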
Incidentally, my original comment was really triggered by this post, not the one I quoted, but I thought it wouldn't be appropriate to confuse the OP and show disagreement in that thread (and anyway, Jack had already given the OP a good answer there), so I didn't reply there; then I saw the post I quoted, and while I thought the article a good idea I thought it might also be useful to remind you to take account of real-world constraints when preparing the article.
After all, you can be right even if you say the same things as Celko. 😛
As long as you don't say them the same way as he does. And don't keep banging on about irrelevancies as well as the useful stuff - although what some take as irrelevancies others won't - for example, I approve of his attacks on the use of Hungarian naming (table names beginning with tbl, etcetera) while others disagree with him and still others think it's irrelevant.
Tom
November 8, 2011 at 2:19 pm
Roy Ernest (11/8/2011)
Hey everyone, I am hijacking the thread with an "Off Topic". I am planning a presentation for my user group, "DBA MythBusters" (courtesy Paul Randal). His idea is brilliant. It is a fun-filled session. I would like to do one like that for our user group. If anybody has suggestions on what myth to bust, please, please let me know. The audience is 100 to 200 level.
Here are a couple of normalisation myths; probably more verbose than I should be, and it's a bit of a rant.
To me the worst myth is "The database should have nothing to do with business logic", when the whole point of designing schemata with (primary and foreign and unique) key constraints and domain (column type, non-nullability, and check) constraints and doing normalisation is to ensure that the very structure of the schemata and their constraints enforces business rules, to make it easier to write code which will not result in data that is invalid according to business rules, and to make it very difficult (if not impossible) to write code that will allow data in the database to violate those business rules.
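A tiny, made-up example of what that looks like in practice - the names are hypothetical, but the point is that the constraints refuse bad data no matter which application tries to write it:
CREATE TABLE dbo.Customer (
    CustomerID int          NOT NULL PRIMARY KEY,
    Email      varchar(254) NOT NULL UNIQUE            -- business rule: one account per e-mail address
);
CREATE TABLE dbo.CustomerOrder (
    OrderID    int  NOT NULL PRIMARY KEY,
    CustomerID int  NOT NULL REFERENCES dbo.Customer,  -- business rule: no orphan orders
    OrderDate  date NOT NULL,
    Quantity   int  NOT NULL CHECK (Quantity > 0)      -- business rule: no zero or negative quantities
);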
The next bad normalisation myth is that normalisation always increases the amount of storage required - a database in 3NF will require more storage than the same data expressed in an unnormalised (not even 2NF) way; in fact normalisation almost always reduces the storage required - and each step from 1NF to 5NF reduces database size; denormalisation is sometimes needed to gain performance in an effectively read-only database (typical BI or DWH usage) but that is not a size issue; certain 3NF schemata have no 4NF equivalents which enforce all the business rules enforced in the 3NF version so it is sometimes necessary to keep some tables below 4NF in order to preserve enforcement of required referential integrity, but this is extremely rare and even when it happens doesn't usually cause much size increase.
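Again a made-up sketch to show the storage point: repeating, say, 160 bytes of customer name and address on each of a million order rows costs on the order of 160 million bytes, whereas the normalised version stores those details once per customer and each order carries only a 4-byte key.
-- Unnormalised: customer details repeated on every order row
CREATE TABLE dbo.SaleFlat (
    SaleID     int          NOT NULL PRIMARY KEY,
    ClientName varchar(40)  NOT NULL,
    ClientAddr varchar(120) NOT NULL,
    SaleDate   date         NOT NULL
);
-- Normalised: details stored once per client; each sale carries just the key
CREATE TABLE dbo.Client (
    ClientID   int          NOT NULL PRIMARY KEY,
    ClientName varchar(40)  NOT NULL,
    ClientAddr varchar(120) NOT NULL
);
CREATE TABLE dbo.Sale (
    SaleID   int  NOT NULL PRIMARY KEY,
    ClientID int  NOT NULL REFERENCES dbo.Client,
    SaleDate date NOT NULL
);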
The third myth is that normalisation is difficult to understand, intellectually challenging. It isn't - it's just that it's pretty rare that anyone describes it in everyday terms instead of in academic jargon.
A fourth normalisation myth that is very damaging is propagated by the happy band of anti-null fundamentalists. They teach that all normalisation is rendered impossible as soon as you have a nullable column in your table. This is extremely damaging because naive people believe them, and although they have to work with rdbms products that allow nulls and soon discover that nulls are essential for some of their work they decide not to attempt normalisation, because they've been told that for their null-permitting systems it is impossible. This leads them to produce incredibly bad schemata that leave the database wide open to application errors that ought to have been prevented by key constraints. Disliking nulls is very sensible (I hate the things myself, except when I really need them), pretending they are not needed is stupid, and claiming they make normalisation impossible is irresponsible vandalism.
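One more made-up sketch, since the null claim sounds plausible until you look at it: the table below has a nullable column, yet every non-key column still depends on the key, the whole key, and nothing but the key, so it is in BCNF exactly as it would be without the NULL.
CREATE TABLE dbo.Employee (
    EmployeeID int         NOT NULL PRIMARY KEY,
    Surname    varchar(50) NOT NULL,
    MiddleName varchar(50) NULL,      -- unknown or absent for some people; normalisation is unaffected
    HireDate   date        NOT NULL
);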
Tom
November 8, 2011 at 2:26 pm
L' Eomot Inversé (11/8/2011)
Roy Ernest (11/8/2011)
Hey everyone, I am hijacking the thread with an "Off Topic". I am planning a presentation for my user group, "DBA MythBusters" (courtesy Paul Randal). His idea is brilliant. It is a fun-filled session. I would like to do one like that for our user group. If anybody has suggestions on what myth to bust, please, please let me know. The audience is 100 to 200 level.
Here are a couple of normalisation myths; probably more verbose than I should be, and it's a bit of a rant.
To me the worst myth is "The database should have nothing to do with business logic", when the whole point of designing schemata with (primary and foreign and unique) key constraints and domain (column type, non-nullability, and check) constraints and doing normalisation is to ensure that the very structure of the schemata and their constraints enforces business rules, to make it easier to write code which will not result in data that is invalid according to business rules, and to make it very difficult (if not impossible) to write code that will allow data in the database to violate those business rules.
The next bad normalisation myth is that normalisation always increases the amount of storage required - a database in 3NF will require more storage than the same data expressed in an unnormalised (not even 2NF) way; in fact normalisation almost always reduces the storage required - and each step from 1NF to 5NF reduces database size; denormalisation is sometimes needed to gain performance in an effectively read-only database (typical BI or DWH usage) but that is not a size issue; certain 3NF schemata have no 4NF equivalents which enforce all the business rules enforced in the 3NF version so it is sometimes necessary to keep some tables below 4NF in order to preserve enforcement of required referential integrity, but this is extremely rare and even when it happens doesn't usually cause much size increase.
The third myth is that normalisation is difficult to understand, intellectually challenging. It isn't - it's just that it's pretty rare that anyone describes it in everyday terms instead of in academic jargon.
A fourth normalisation myth that is very damaging is propagated by the happy band of anti-null fundamentalists. They teach that all normalisation is rendered impossible as soon as you have a nullable column in your table. This is extremely damaging because naive people believe them, and although they have to work with rdbms products that allow nulls and soon discover that nulls are essential for some of their work they decide not to attempt normalisation, because they've been told that for their null-permitting systems it is impossible. This leads them to produce incredibly bad schemata that leave the database wide open to application errors that ought to have been prevented by key constraints. Disliking nulls is very sensible (I hate the things myself, except when I really need them), pretending they are not needed is stupid, and claiming they make normalisation impossible is irresponsible vandalism.
Copy-and-paste that into an article. Maybe edit a slight bit. I'll add it to my bookmarks that I refer people to on a routine basis. (Like Jeff's running totals, or Gail's blog article on Catch All queries; I send people to those two just about every week. This wouldn't be that frequent, but it would sure be at least every month!)
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
November 8, 2011 at 2:37 pm
BOOya
November 8, 2011 at 2:41 pm
Ninja's_RGR'us (11/8/2011)
BOOya
Impressive
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
November 8, 2011 at 2:47 pm
I give it about a month and a half to two months before I pass Steve for top place overall.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 8, 2011 at 2:51 pm
GilaMonster (11/8/2011)
I give it about a month and a half to two months before I pass Steve for top place overall.
Taking a break? Just curious why it will take so long to pass him overall.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
November 8, 2011 at 2:55 pm
SQLRNNR (11/8/2011)
GilaMonster (11/8/2011)
I give it about a month and a half to two months before I pass Steve for top place overall.
Taking a break? Just curious why it will take so long to pass him overall.
There's a 700 point difference. According to the front page I'm posting about 550 a month at the moment.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 8, 2011 at 3:26 pm
I'm sure you'll pass me. I am down to about 150 posts a month, too busy with other stuff and most questions are being handled.
November 8, 2011 at 4:12 pm
L' Eomot Inversé (11/8/2011)
Roy Ernest (11/8/2011)
Hey everyone, I am hijacking the thread with an "Off Topic". I am planning a presentation for my user group, "DBA MythBusters" (courtesy Paul Randal). His idea is brilliant. It is a fun-filled session. I would like to do one like that for our user group. If anybody has suggestions on what myth to bust, please, please let me know. The audience is 100 to 200 level.
Here are a couple of normalisation myths; probably more verbose than I should be, and it's a bit of a rant.
To me the worst myth is "The database should have nothing to do with business logic", when the whole point of designing schemata with (primary and foreign and unique) key constraints and domain (column type, non-nullability, and check) constraints and doing normalisation is to ensure that the very structure of the schemata and their constraints enforces business rules, to make it easier to write code which will not result in data that is invalid according to business rules, and to make it very difficult (if not impossible) to write code that will allow data in the database to violate those business rules.
The next bad normalisation myth is that normalisation always increases the amount of storage required - a database in 3NF will require more storage than the same data expressed in an unnormalised (not even 2NF) way; in fact normalisation almost always reduces the storage required - and each step from 1NF to 5NF reduces database size; denormalisation is sometimes needed to gain performance in an effectively read-only database (typical BI or DWH usage) but that is not a size issue; certain 3NF schemata have no 4NF equivalents which enforce all the business rules enforced in the 3NF version so it is sometimes necessary to keep some tables below 4NF in order to preserve enforcement of required referential integrity, but this is extremely rare and even when it happens doesn't usually cause much size increase.
The third myth is that normalisation is difficult to understand, intellectually challenging. It isn't - it's just that it's pretty rare that anyone describes it in everyday terms instead of in academic jargon.
A fourth normalisation myth that is very damaging is propagated by the happy band of anti-null fundamentalists. They teach that all normalisation is rendered impossible as soon as you have a nullable column in your table. This is extremely damaging because naive people believe them, and although they have to work with rdbms products that allow nulls and soon discover that nulls are essential for some of their work they decide not to attempt normalisation, because they've been told that for their null-permitting systems it is impossible. This leads them to produce incredibly bad schemata that leave the database wide open to application errors that ought to have been prevented by key constraints. Disliking nulls is very sensible (I hate the things myself, except when I really need them), pretending they are not needed is stupid, and claiming they make normalisation impossible is irresponsible vandalism.
How incidental that you are talking about Normalization tonight, when I just posted this on the discussion forum at Stanford. Just in case you can't get to it: they are skipping 1/2/3NF in their Introduction to Databases course altogether and going straight to BCNF and 4NF because, according to the Prof, 1NF/2NF aren't discussed much anymore. LOL! I don't agree with it. Tom, I put a link in there to your 1NF article on SSC and pointed out to follow the discussion that ensued. Let's see if I get any responses from higher up.
Edit: Added some stuff for clarity.
November 8, 2011 at 4:16 pm
GSquared (11/8/2011)
Jan Van der Eecken (11/8/2011)
GSquared (11/8/2011)
Roy Ernest (11/8/2011)
Hey everyone, I am hijacking the thread with an "Off Topic". I am planning a presentation for my user group, "DBA MythBusters" (courtesy Paul Randal). His idea is brilliant. It is a fun-filled session. I would like to do one like that for our user group. If anybody has suggestions on what myth to bust, please, please let me know. The audience is 100 to 200 level.
Just make sure to use the joke from The Muppet Movie, and you'll be just fine.
For the uninitiated (yeah, that would be me), what's the joke from the Muppet Movie?
There's a repeating joke in The Muppet Movie where someone will say, "Myth! Myth!", and an actress with a lisp will respond "Yeth?" I think the first use of it is Kermit telling someone, about getting warts from a frog, "That's a myth! Myth!" and (Carol Kane?) pops up, "Yeth?" It's in the scene where he meets Fozzie.
(I saw the movie when it first came out. How bad is it that I remember that joke?)
LOL, that would be a great joke to use in a talk about Myths! I never saw the movie, just the TV series when I was a kid in the 70s. And then it was dubbed in German, so the "Myth" wouldn't have come across as intended, I guess :hehe:
November 8, 2011 at 4:25 pm
How can BCNF be sensibly discussed if the earlier normal forms aren't known?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 8, 2011 at 11:37 pm
L' Eomot Inversé (11/8/2011)
The next bad normalisation myth is that normalisation always increases the amount of storage required - a database in 3NF will require more storage than the same data expressed in an unnormalised (not even 2NF) way
That is a strange myth. How can a technique that reduces redundancy lead to more storage? :blink:
L' Eomot Inversé (11/8/2011)
The third myth is that normalisation is difficult to understand, intellectually challenging. It isn't - it's just that it's pretty rare that anyone describes it in everyday terms instead of in academic jargon.
That is a myth? I still have trouble understanding it fully. But that is maybe because it was explained to me in academic jargon by a professor old enough to have probably taught Codd himself.
Luckily I'm a BI guy, all I do is denormalize 😀
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP