May 31, 2012 at 9:55 am
jfogel (5/31/2012)
patrickmcginnis59 (5/31/2012)
Grant Fritchey (5/31/2012)
TravisDBA (5/31/2012)
Sorry about that. I was trying to make the point that "generic" database designs aren't really possible due to the nature of the underlying infrastructure. Clearly failed.
I'm a little confused too, Grant. Who really uses "generic" database designs nowadays anyway? Particularly when the underlying database structure is constantly changing in a fast-paced environment, at least in mine. This seems like something that is already very apparent to most DBAs, IMHO. So, Oracle and SQL Server are fundamentally different? Uhhhhh... yes, they are... no kidding! I'm with Chris; I'm not sure what point the message was trying to convey that we didn't already know. :-D
The message may have been unclear, again apologies, but you're wrong that this is common knowledge. Tons and tons of developers and plenty of DBAs still believe in not hooking yourself to a single vendor. I interviewed someone just a little over a year ago who lectured me on my technical questions because they were too "Microsoft & SQL Server specific." His argument was that only lowest common denominator SQL should be used so that it's easier to port between database systems. People really do still believe this.
I'm one of those, I guess. I spend a bit of effort comparing different servers' features just to stay in tune with how things are handled between different systems. I've dealt with a few systems that also try for a measure of independence from database vendors, even to the extent of having in-house programming languages, so there are clearly some pretty smart folks out there who believe there are reasons for doing so, which you dismissed without comment. It's an interesting topic that maybe deserves a better writeup than you are able to deliver. Oh well, this is a pretty vendor-specific forum, so I guess it is to be expected.
True. We sell our software, not a particular vendor's database. Our app had to play nice with Oracle or SQL Server to give the buyer a choice. At the time Oracle was the major player and SQL Server was at 6.5 and not really trusted. Then things changed, and the majority of clients chose the SQL Server back end, mainly because of price. Clustered index or heap, our app doesn't care, as long as the index exists; if it does not, the app creates it using vendor-specific syntax. Our app did not try to use, say, ANSI-only SQL; the "mode" is detected and either Oracle- or SQL Server-specific code is executed within the executables. For reports we had two sets of procedures, one with Oracle syntax and one with SQL Server syntax, so we did not rely on one set of code to be all things to all people. Was that a lot of work? Yes, but it avoided much of what is at the root of this topic.
But that would be the correct approach. Your code does different things in different environments rather than trying to do one thing in all environments. It makes perfect sense. I'm not anti-PLATFORMX. After years & years in this industry, I've just found that programming to the platform, whatever platform, works and programming to some generic & somewhat amorphous standard doesn't.
Heck, it's even the issue I've seen with ORM tools. Many of the failed implementations of ORM have attempted to force objects into the database rather than focus on that "M" part of ORM. It's programming to a standard, not to the platform. And the funny thing is, the ORM tools I've worked with are perfectly capable of working with whatever platform we're talking about. But the problem is knowledge and approach, not software, tools or platform.
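As an illustration of the mode-detection approach jfogel describes above, here is a minimal sketch, assuming a Java/JDBC client; the connection details, table, and column names are hypothetical placeholders, not anything from the actual product:

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DialectAwareReport {

    public static void main(String[] args) throws Exception {
        // Hypothetical: URL, user, and password come from the application's configuration.
        try (Connection conn = DriverManager.getConnection(args[0], args[1], args[2])) {
            // Detect the "mode" once, from the driver's metadata, instead of
            // writing lowest-common-denominator SQL everywhere.
            DatabaseMetaData meta = conn.getMetaData();
            boolean isSqlServer = meta.getDatabaseProductName().toLowerCase().contains("sql server");

            // Each back end gets syntax written for it: TOP vs. ROWNUM here;
            // in a real system, separate vendor-specific stored procedures.
            String sql = isSqlServer
                ? "SELECT TOP 10 OrderId, OrderDate FROM dbo.Orders ORDER BY OrderDate DESC"
                : "SELECT OrderId, OrderDate FROM "
                  + "(SELECT OrderId, OrderDate FROM Orders ORDER BY OrderDate DESC) "
                  + "WHERE ROWNUM <= 10";

            try (PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("OrderId") + "\t" + rs.getDate("OrderDate"));
                }
            }
        }
    }
}
```

The branch happens once, on detected metadata, and each branch is free to use the syntax its platform handles best.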
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 31, 2012 at 9:58 am
jfogel (5/31/2012)
Our app did not try to use say ANSI only SQL, the "mode" is detected and either Oracle or SQL Server specific code is executed within executables. For reports we had two sets of procedures, one with Oracle syntax and one with SQL Server so we did not rely on one set of code to be all things to all people. Was that a lot of work? Yes but it avoided much of what is the root of this topic.
I wish JD Edwards had done this. They sent the same SQL through to all platforms (Oracle/SQL Server/DB2) and performance was poor all around.
I think it would be easy to have separate DALs. Developers might not like it, but that's why you hire a separate DBA to write the code for each platform.
May 31, 2012 at 9:59 am
I guess I take this issue a bit to heart. I've had what amounted to arguments with people who claim no app can support this or that DB and still be efficient. Even after explaining that the app reads parameters to know the back end and acts accordingly, they still refuse to see the positives.
Cheers
May 31, 2012 at 10:01 am
Your Name Here (5/31/2012)
. . . I haven't had a ton of experience with multi-platform apps ("Oracle, SQL, MySQL, our app will work on any platform") but the few I have worked with were pretty much epic fail. Yeah, "it worked" sort of like using a hammer "works" to put in screws.
It may (and should) work if you design a well-thought-through abstract facade that each implementation inherits, with code specific to the respective database (see the sketch below).
Grant, I read this editorial before my second coffee and I could not believe my eyes: "Grant? Saying this?", until I finally got it.
Thanks for making my morning! 🙂
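As a rough sketch of that abstract-facade idea, assuming Java and entirely hypothetical class, table, and column names, the shape might look like this, with the vendor-specific SQL confined to each subclass:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// The abstract facade: the rest of the application only ever sees this type.
public abstract class CustomerDao {
    protected final Connection conn;

    protected CustomerDao(Connection conn) {
        this.conn = conn;
    }

    // Each platform-specific subclass supplies its own SQL text.
    protected abstract String recentCustomersSql();

    public List<String> recentCustomerNames(int limit) throws SQLException {
        List<String> names = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(recentCustomersSql())) {
            ps.setInt(1, limit);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    names.add(rs.getString("CustomerName"));
                }
            }
        }
        return names;
    }
}

// SQL Server implementation: free to use TOP, the dbo schema, T-SQL procedures, etc.
class SqlServerCustomerDao extends CustomerDao {
    SqlServerCustomerDao(Connection conn) { super(conn); }

    @Override
    protected String recentCustomersSql() {
        return "SELECT TOP (?) CustomerName FROM dbo.Customers ORDER BY CreatedDate DESC";
    }
}

// Oracle implementation: free to use ROWNUM, PL/SQL packages, hints, etc.
class OracleCustomerDao extends CustomerDao {
    OracleCustomerDao(Connection conn) { super(conn); }

    @Override
    protected String recentCustomersSql() {
        return "SELECT CustomerName FROM "
             + "(SELECT CustomerName FROM Customers ORDER BY CreatedDate DESC) "
             + "WHERE ROWNUM <= ?";
    }
}
```

A small factory that inspects the connection's metadata, as in the earlier sketch, hands back the right subclass, so nothing above the facade ever branches on vendor.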
May 31, 2012 at 10:03 am
Steve Jones - SSC Editor (5/31/2012)
jfogel (5/31/2012)
Our app did not try to use say ANSI only SQL, the "mode" is detected and either Oracle or SQL Server specific code is executed within executables. For reports we had two sets of procedures, one with Oracle syntax and one with SQL Server so we did not rely on one set of code to be all things to all people. Was that a lot of work? Yes but it avoided much of what is the root of this topic.
I wish JD Edwards had done this. They sent the same SQL through to all platforms (Oracle/SQL Server/DB2) and performance was poor all around.
I think it would be easy to have separate DALs. Developers might not like it, but that's why you hire a separate DBA to write the code for each platform.
Or, in my case, some developers only want to know SQL Server and I get charged with translating everything to Oracle. It isn't all bad, though, because as a result I can port code very quickly, in either direction.
Cheers
May 31, 2012 at 10:07 am
Can someone, anyone, get the Microsoft SQL team to talk to the Microsoft Dynamics Great Plains team?
Can you say database best practices?
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
May 31, 2012 at 10:30 am
Coming late to this, but a couple of thoughts, if I may?
I work in the Business Intelligence area, and deal with all sorts of source systems. Getting data out in a reasonably performant way is important, as is processing the overnight loads (sure, there are many variants, but this is still the most common way to go).
Mostly, I don't care about performance. I only care if (say) my overnight run time is a significant proportion (>20%?) of my available overnight window. By way of comparison, a District General Hospital's records take about 6-7 minutes to run, and a world-class supermarket's 100,000,000 transactions take about 5 hours against a 120-terabyte warehouse. In the latter case, I keep an eye on things. In the former case, I don't. On the gripping hand, there are extremely few applications in the extremely large class, but many bread-and-butter situations.
What's important to me is that my SQL Server guy's time is not wasted pursuing performance. For bread-and-butter, that means keeping the focus on meeting business needs, not playing with performance tweaks. There is no cost-benefit return in making the 6 minute batch run twice as fast, so that the overnight window has improved from 6 hours 52 minutes to 6 hours 57 minutes.
The focus needs to be on business benefit, not techie fun.
That said, reporting performance is critically important. But there's a different skill set required here. All reporting out of the RDBMS is overnight batch, and who cares how long it takes, as long as it is in the business users' inboxes before 7. Ad-hoc reporting is a different kettle of fish. That's done out of the data warehouse . . . which in the Microsoft world, means an Analysis Services hypercube. Of course, indexing is irrelevant to that.
My central point is that worrying about common-denominator code is perhaps not where the attention needs to be focused. If it's fast enough, that's good enough. The only cardinal sin regarding performance is not observing the rule: never keep the user waiting!
Cheers, Donna
May 31, 2012 at 10:47 am
donnapatriciakelly (5/31/2012)
Coming late to this, but a couple of thoughts, if I may?
I work in the Business Intelligence area, and deal with all sorts of source systems. Getting data out in a reasonably performant way is important, as is processing the overnight loads (sure, there are many variants, but this is still the most common way to go).
Mostly, I don't care about performance. I only care if (say) my overnight run time is a significant proportion (>20%?) of my available overnight window. By way of comparison, a District General's Hospital records take about 6-7 minutes to run, and a world-class supermarket's 100,000,000 transactions take about 5 hours against a 120 Terabyte warehouse. In the latter case, I keep an eye on things. In the former case, I don't. On the gripping hand, there are extremely few applications in the extremely large class, but many bread-and-butter situations.
What's important to me is that my SQL Server guy's time is not wasted pursuing performance. For bread-and-butter, that means keeping the focus on meeting business needs, not playing with performance tweaks. There is no cost-benefit return in making the 6 minute batch run twice as fast, so that the overnight window has improved from 6 hours 52 minutes to 6 hours 57 minutes.
The focus needs to be on business benefit, not techie fun.
That said, reporting performance is critically important. But there's a different skill set required here. All reporting out of the RDBMS is overnight batch, and who cares how long it takes, as long as it is in the business users' inboxes before 7. Ad-hoc reporting is a different kettle of fish. That's done out of the data warehouse . . . which in the Microsoft world, means an Analysis Services hypercube. Of course, indexing is irrelevant to that.
My central point is that worrying about common denominator code is perhaps not where the attention need be focused. If it's fast enough, that's good enough. The only cardinal sin regarding performance is not observing the rule never keep the user waiting!
Cheers, Donna
I'm going to have to take slight exception to this: "If it's fast enough, that's good enough." Maybe it is fast enough in an environment with a few thousand rows, but what happens when the volume increases from a few thousand to several hundred thousand rows? If this isn't taken into account during development, it may have to be dealt with later, when the application isn't performing as it needs to.
May 31, 2012 at 11:01 am
donnapatriciakelly (5/31/2012)
Coming late to this, but a couple of thoughts, if I may?
I work in the Business Intelligence area, and deal with all sorts of source systems. Getting data out in a reasonably performant way is important, as is processing the overnight loads (sure, there are many variants, but this is still the most common way to go).
Mostly, I don't care about performance. I only care if (say) my overnight run time is a significant proportion (>20%?) of my available overnight window. By way of comparison, a District General's Hospital records take about 6-7 minutes to run, and a world-class supermarket's 100,000,000 transactions take about 5 hours against a 120 Terabyte warehouse. In the latter case, I keep an eye on things. In the former case, I don't. On the gripping hand, there are extremely few applications in the extremely large class, but many bread-and-butter situations.
What's important to me is that my SQL Server guy's time is not wasted pursuing performance. For bread-and-butter, that means keeping the focus on meeting business needs, not playing with performance tweaks. There is no cost-benefit return in making the 6 minute batch run twice as fast, so that the overnight window has improved from 6 hours 52 minutes to 6 hours 57 minutes.
The focus needs to be on business benefit, not techie fun.
That said, reporting performance is critically important. But there's a different skill set required here. All reporting out of the RDBMS is overnight batch, and who cares how long it takes, as long as it is in the business users' inboxes before 7. Ad-hoc reporting is a different kettle of fish. That's done out of the data warehouse . . . which in the Microsoft world, means an Analysis Services hypercube. Of course, indexing is irrelevant to that.
My central point is that worrying about common denominator code is perhaps not where the attention need be focused. If it's fast enough, that's good enough. The only cardinal sin regarding performance is not observing the rule never keep the user waiting!
Cheers, Donna
I'm with you. Fast enough is fast enough.
But, I remember when a friend of mine (wish I could say I did this) spotted a problematic query in one of the overnight load processes. He did some tweaks and something that took 6 hours was cut down to 3 minutes. It's the kind of win you dream about finding. And the solution was to take advantage of a particular set of functionality offered by SQL Server while the original load process had been written by an Oracle DBA (a very capable and good Oracle DBA, don't misunderstand my message) who used an approach common to both platforms.
Tweaking to get a 5% improvement in speed? Yeah, that's probably crazy. But that doesn't mean that any and all tweaks should be ignored.
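Grant gives no detail about what the actual rewrite was, so purely as a hypothetical illustration of the kind of platform-specific change being described, here is a sketch with made-up tables and a Java/JDBC harness: a portable row-at-a-time approach versus a single set-based statement using T-SQL's vendor-specific UPDATE ... FROM.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class OvernightLoadStep {

    // Lowest-common-denominator style: pull rows back and update them one at a time.
    // It runs on any platform, but pays a round trip and a statement execution per row.
    static void applyPricesRowByRow(Connection conn) throws SQLException {
        String select = "SELECT ProductId, ListPrice FROM StagingPrices";
        String update = "UPDATE Products SET ListPrice = ? WHERE ProductId = ?";
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(select);
             PreparedStatement ps = conn.prepareStatement(update)) {
            while (rs.next()) {
                ps.setBigDecimal(1, rs.getBigDecimal("ListPrice"));
                ps.setLong(2, rs.getLong("ProductId"));
                ps.executeUpdate();
            }
        }
    }

    // SQL Server specific rewrite: one set-based statement using T-SQL's UPDATE ... FROM join,
    // letting the engine do the whole thing in a single pass instead of millions of round trips.
    static void applyPricesSetBased(Connection conn) throws SQLException {
        String tsql =
            "UPDATE p SET p.ListPrice = s.ListPrice "
          + "FROM dbo.Products AS p "
          + "JOIN dbo.StagingPrices AS s ON s.ProductId = p.ProductId";
        try (Statement st = conn.createStatement()) {
            st.executeUpdate(tsql);
        }
    }
}
```

Oracle has its own way to express the same set-based update (MERGE, or an updatable join view), which is exactly the argument for writing to each platform rather than to the common subset.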
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 31, 2012 at 11:06 am
Grant Fritchey (5/31/2012)
donnapatriciakelly (5/31/2012)
Coming late to this, but a couple of thoughts, if I may?
I work in the Business Intelligence area, and deal with all sorts of source systems. Getting data out in a reasonably performant way is important, as is processing the overnight loads (sure, there are many variants, but this is still the most common way to go).
Mostly, I don't care about performance. I only care if (say) my overnight run time is a significant proportion (>20%?) of my available overnight window. By way of comparison, a District General's Hospital records take about 6-7 minutes to run, and a world-class supermarket's 100,000,000 transactions take about 5 hours against a 120 Terabyte warehouse. In the latter case, I keep an eye on things. In the former case, I don't. On the gripping hand, there are extremely few applications in the extremely large class, but many bread-and-butter situations.
What's important to me is that my SQL Server guy's time is not wasted pursuing performance. For bread-and-butter, that means keeping the focus on meeting business needs, not playing with performance tweaks. There is no cost-benefit return in making the 6 minute batch run twice as fast, so that the overnight window has improved from 6 hours 52 minutes to 6 hours 57 minutes.
The focus needs to be on business benefit, not techie fun.
That said, reporting performance is critically important. But there's a different skill set required here. All reporting out of the RDBMS is overnight batch, and who cares how long it takes, as long as it is in the business users' inboxes before 7. Ad-hoc reporting is a different kettle of fish. That's done out of the data warehouse . . . which in the Microsoft world, means an Analysis Services hypercube. Of course, indexing is irrelevant to that.
My central point is that worrying about common denominator code is perhaps not where the attention need be focused. If it's fast enough, that's good enough. The only cardinal sin regarding performance is not observing the rule never keep the user waiting!
Cheers, Donna
I'm with you. Fast enough is fast enough.
But, I remember when a friend of mine (wish I could say I did this) spotted a problematic query in one of the overnight load processes. He did some tweaks and something that took 6 hours was cut down to 3 minutes. It's the kind of win you dream about finding. And the solution was to take advantage of a particular set of functionality offered by SQL Server while the original load process had been written by an Oracle DBA (a very capable and good Oracle DBA, don't misunderstand my message) who used an approach common to both platforms.
Tweaking to get a 5% improvement in speed.. Yeah, that's probably crazy. But that doesn't mean that any or all tweaks should be ignored.
I will agree that chasing 5% improvements may be crazy, but it also has to be kept in perspective. A 5% improvement in a routine called a few times a day may not be that important, but a 5% improvement in a routine called thousands of times an hour may be a different matter.
May 31, 2012 at 11:16 am
Grant Fritchey (5/31/2012)
I'm with you. Fast enough is fast enough.
But, I remember when a friend of mine (wish I could say I did this) spotted a problematic query in one of the overnight load processes. He did some tweaks and something that took 6 hours was cut down to 3 minutes. It's the kind of win you dream about finding. And the solution was to take advantage of a particular set of functionality offered by SQL Server while the original load process had been written by an Oracle DBA (a very capable and good Oracle DBA, don't misunderstand my message) who used an approach common to both platforms.
Tweaking to get a 5% improvement in speed.. Yeah, that's probably crazy. But that doesn't mean that any or all tweaks should be ignored.
So what takes 6 hours in Oracle that takes 3 minutes in SQL Server?
May 31, 2012 at 11:18 am
As Depeche Mode sang - everything counts in large amounts.
"So what takes 6 hours in Oracle that takes 3 minutes in SQL server? "
I'd like to know too and I'd like to know what was done on the Oracle side (if still in place) to cure the six hour time of a task.
Cheers
May 31, 2012 at 11:34 am
patrickmcginnis59 (5/31/2012)
Grant Fritchey (5/31/2012)
I'm with you. Fast enough is fast enough.
But, I remember when a friend of mine (wish I could say I did this) spotted a problematic query in one of the overnight load processes. He did some tweaks and something that took 6 hours was cut down to 3 minutes. It's the kind of win you dream about finding. And the solution was to take advantage of a particular set of functionality offered by SQL Server while the original load process had been written by an Oracle DBA (a very capable and good Oracle DBA, don't misunderstand my message) who used an approach common to both platforms.
Tweaking to get a 5% improvement in speed.. Yeah, that's probably crazy. But that doesn't mean that any or all tweaks should be ignored.
So what takes 6 hours in Oracle that takes 3 minutes in SQL server?
Lordy no. Oh dear. I sure hope that was a joke.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 31, 2012 at 11:52 am
Grant Fritchey (5/31/2012)
patrickmcginnis59 (5/31/2012)
Grant Fritchey (5/31/2012)
I'm with you. Fast enough is fast enough.
But, I remember when a friend of mine (wish I could say I did this) spotted a problematic query in one of the overnight load processes. He did some tweaks and something that took 6 hours was cut down to 3 minutes. It's the kind of win you dream about finding. And the solution was to take advantage of a particular set of functionality offered by SQL Server while the original load process had been written by an Oracle DBA (a very capable and good Oracle DBA, don't misunderstand my message) who used an approach common to both platforms.
Tweaking to get a 5% improvement in speed.. Yeah, that's probably crazy. But that doesn't mean that any or all tweaks should be ignored.
So what takes 6 hours in Oracle that takes 3 minutes in SQL server?
Lordy no. Oh dear. I sure hope that was a joke.
It's not a joke. You're posting that he cut a query from 6 hours to 3 minutes based on a Microsoft feature. Now, I can accept that there's an equivalent Oracle feature that would have also cut it down to 3 minutes, and I can readily accept that there are material differences in implementations, or that possibly an Oracle implementer simply lacked the training to implement on SQL Server. What I'm curious about is the difference between 6 hours and 3 minutes, and what stopped the Oracle guy from finding out for himself how to cut the time?
For example, did the Oracle guy believe the process would also take 6 hours on an Oracle server?
May 31, 2012 at 11:59 am
patrickmcginnis59 (5/31/2012)
Grant Fritchey (5/31/2012)
I'm with you. Fast enough is fast enough.
But, I remember when a friend of mine (wish I could say I did this) spotted a problematic query in one of the overnight load processes. He did some tweaks and something that took 6 hours was cut down to 3 minutes. It's the kind of win you dream about finding. And the solution was to take advantage of a particular set of functionality offered by SQL Server while the original load process had been written by an Oracle DBA (a very capable and good Oracle DBA, don't misunderstand my message) who used an approach common to both platforms.
Tweaking to get a 5% improvement in speed.. Yeah, that's probably crazy. But that doesn't mean that any or all tweaks should be ignored.
So what takes 6 hours in Oracle that takes 3 minutes in SQL server?
That isn't what he said. He said that a query written by an Oracle guy, using Oracle-ish syntax on a SQL Server box, took 6 hours, and that when it was rewritten by a guy savvy in SQL Server syntax it realized dramatic gains.