Speed or Value?

  • Comments posted to this topic are about the item Speed or Value?

  • I believe everyone knows what I'm going to say... good code can save a "slow" machine. I've seen too many people go down the primrose path of buying more expensive machines and spending a lot of time setting up the new server and transferring pot wads of data to it, all to speed up their code, only to be grossly disappointed because the code is so bad. RBAR is still RBAR even on a fast machine.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
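
    To make that concrete, here is a rough T-SQL sketch of the same operation done RBAR and then set-based; the table and column names (dbo.Orders, OrderTotal, TaxAmount) are made up purely for illustration:

    -- Hypothetical example: update a tax column for every row.
    -- RBAR: a cursor touches one row at a time.
    DECLARE @OrderID int;
    DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT OrderID FROM dbo.Orders;
    OPEN cur;
    FETCH NEXT FROM cur INTO @OrderID;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE dbo.Orders
           SET TaxAmount = OrderTotal * 0.08
         WHERE OrderID = @OrderID;
        FETCH NEXT FROM cur INTO @OrderID;
    END;
    CLOSE cur;
    DEALLOCATE cur;

    -- Set-based: one statement operates on the whole column at once.
    UPDATE dbo.Orders
       SET TaxAmount = OrderTotal * 0.08;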

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Interesting post; P/E is very important here as well.

    And as Jeff Moden says, good code can save a slow machine; that is true. We are, however, moving more and more to a VM cluster running on several server machines with a SAN. That gives flexibility and in the longer term can save cash, because it's easier to manage and applications can get extra resources when needed. Making that transition does not, however, mean we can be sloppy with our programming.

  • What strikes me, whenever I get a quote for new server hardware, is how insignificant the price of the CPU(s) and Memory is, compared to the whole package. Once I have paid the relatively fixed price for SCSI hard drives, RAID controller, redundant PSUs, etc, etc, then factored in the OS, backup software, anti-virus, and then added on the 3-year support contract for the hardware.... the direct cost of CPU and Memory is negligible. And when we are talking about every-day performance of SQL Server, these are the 2 factors that really count.

    Price/performance is always the predominant factor, but I'm surprised at how little the final price determines what processing power you are buying.

    I qualify this by saying I am talking about standard x86/x64 hardware; I know that with more specialised hardware, the cost rapidly jumps upwards.

  • Surely this is an unanswerable question. No performance comes without cost, so price cannot be removed from the equation altogether. However, even performance at a good nominal value could be irrelevant if you have a surplus. Surely the only thing that matters is how much it costs to achieve the stated objective vs the value realised from achieving said objective.

    And the cost of achieving the objective would, of course, include development time for streamlining bad code (thanks, Jeff) as well as any more obvious hardware prices.

    Semper in excretia, suus solum profundum variat

  • This is a great topic! Buying what is really needed is often overlooked. Really, one of the big reasons that VM solutions are making such a strong play (aside from availability) is that they reduce a lot of the wasted server utilization. Wasn't "Consolidation" the big buzzword for the past couple of years?

    So, I agree with Jeff: code more efficiently. I also believe that we need to spec to the project requirements. Part of that obviously implies good requirements, which may indeed require us to push a bit in order to get them. I know from working on a couple of pretty significant DR solutions that there can be a real tendency to overbuy hardware and solutions, and understanding the true requirements is the only way to overcome that.

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • Well, I believe every manager would ask for more performance, but there are always budget constraints. If you can get better performance through better coding while still using the same equipment, so much the better.

  • I've never had the opportunity to work in an environment where I've had to squeeze 10 ms out of my code to make a drastic difference. I've always maintained systems with fairly low connection volumes and low transactions per second. That said, I've mostly worked in the small business segment, squeezing life out of old hardware and praying that the next day another hard drive, memory stick, or whatnot didn't wink out of existence. So to me it's really been all about price/performance. How much do we need to spend to get the required availability or performance? And in the same breath, how much more do we spend now to make sure this server will last 5-7 years until the next replacement cycle?

    To help us help you, read this. For better help with performance problems, please read this.

  • I thought the whole reason for SQL Server's success was its superior Price/Performance over Oracle. Not just licensing but development costs as well. Does it need to be high on the TPC benchmarks? Doesn't matter to me one bit since I've never worked with anything remotely close to those requirements.

  • Ian Massi (10/3/2008)


    I thought the whole reason for SQL Server's success was its superior Price/Performance over Oracle. Not just licensing but development costs as well.

    Perhaps a more accurate statement might be that SQL Server has enjoyed a better price/performance ratio in certain situations, and that those situations are precisely where a large number of potential buyers have calculated their need to lie.

    I suspect, as DavidB suggests, that a more accurate calculation of what was actually needed could have displayed a distinct shift in buying patterns (lump several SQL Server requirements into one Oracle justification, or downsize a SQL Server requirement into something less "industrial"). However, Microsoft have done precisely what they ought; they've accurately assessed what their customers think they want, then pitched the products there, instead of trying to convince their customers to think differently. Net result is one of the most successful software companies in the world, so fair play to them.

    Semper in excretia, suus solum profundum variat

  • Maybe I'm straying off topic a bit, but my opinion is that it's really about reliability and robustness per price per performance. If it were just about price per performance, everyone would be using MySQL on Linux running on computers that some fanboy built from spare parts, but in reality that model only fits a minority of the market.

    Some people like to compare Oracle to SQL Server, but I think those two products still target different audiences. With 2005 and 2008, though, SQL Server is getting closer to the Oracle target market, and they probably do overlap a little, but in general I think they are still separate. Oracle has also been trying to push its target markets upward with its emphasis on RAC and "grid" computing, so it's not like SQL Server is going to suddenly overtake Oracle's market.

  • To me the only thing that matters is "Effectiveness". If the most effective way of storing and manipulating data is in cuneiform on clay tablets, fine.

    Usually "most effective" implies some form price/performance analysis, but not always. It may mean that I have to do things in a certain manner no matter the cost.

    In other instances it means that no matter the cost, I will never find a more effective solution than the cheapest one.

  • Sometimes money is of little consequence, you've got to get that performance. But when we buy, we always look at bang for the buck. It's when VARs are able to bypass IT and sell directly to managers that things get all out of whack and you end up oversold or grossly underprovisioned.

    But briefly getting back to sports, that's what I love about the Yankees and the Lakers: the highest (or nearly so) payroll franchises in their respective sports, yet they don't bring home the championship every year. Jerry Colangelo decided to buy a baseball championship for the Arizona Diamondbacks, and he bought a team that could do it. It took 2 years, but in 2001, they were the champions. And following the 2001 season, the team was largely dismantled because he couldn't afford to continue to pay for those players. Curt Schilling went to Boston, and shortly thereafter they won the championship.

    Money doesn't always equal a win, but it gives you a better shot at it.

    -----
    [font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]

  • I think price/performance is the most important factor from a business point of view.

    As with some other posters, I have not been involved in the kind of consistent huge loads that TPC benchmarks test. So when deciding on hardware for a project or a shared SQL Server, cost is the primary constraint. I tend to go for the most obvious things: disk redundancy, loads of memory, and as many CPUs/cores as can be had within the low budget. On the configuration side, I physically separate indexes, data, logs, tempdb, and backups as much as possible given the number of disk arrays purchased.
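
    As a rough illustration of that kind of physical separation (the database name, drive letters, and file paths below are made up; the real layout depends on the arrays available):

    -- Hypothetical sketch: place data, indexes, and log on separate drives/arrays.
    CREATE DATABASE SalesDemo
    ON PRIMARY
        (NAME = SalesDemo_data,    FILENAME = 'D:\SQLData\SalesDemo_data.mdf'),
    FILEGROUP [Indexes]
        (NAME = SalesDemo_indexes, FILENAME = 'E:\SQLIndexes\SalesDemo_idx.ndf')
    LOG ON
        (NAME = SalesDemo_log,     FILENAME = 'F:\SQLLogs\SalesDemo_log.ldf');

    -- Nonclustered indexes can then be created on the separate index filegroup, e.g.:
    -- CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID) ON [Indexes];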

    On the software side, it is all a bit blurrier, as not all code has been written by me and there is nearly always legacy or generated code that cannot be optimized, because that would result in non-billable time spent (costs). This is, I think, where my experience (and that of most SQL Server technical users) differs most from the pure DBA experience. The room to optimize at no extra cost is minimal for us, as there is always a huge shortage of time, with the next project ready to start.

    Even newly developed code is not immune to common pitfalls, simply because many if not most programmers are not that deep into SQL and the underlying architecture, and they already need to know/master 4 to 5 different languages just to get started.

    Then there is the fact that real-world requirements are more dynamic than can be expressed in a fixed set of stored procedures that can be optimized over and over until they are perfect. An example of this is data grids and the reporting parts of interactive applications that generate their own SQL. Even though I have always pushed to keep SQL code in the database as much as possible, that has in practice seldom happened, for practical/organisational reasons.

    To top it off:

    These days even junior developers 'draw' their own tables with Management Studio and essentially do the physical data modeling without ever seeing any underlying SQL table creation code, let alone having an overview of the indexes and such. Obviously this is plain wrong from a design point of view, but it is by far the most common way applications are made out in the real world by many small companies (and even medium-sized ones I know).
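
    For what it's worth, the table creation code the designer hides away is not that intimidating. A made-up example of the kind of DDL that gets generated behind the scenes:

    -- Hypothetical example: what 'drawing' a table in Management Studio boils down to.
    CREATE TABLE dbo.Customer
    (
        CustomerID   int IDENTITY(1,1) NOT NULL,
        CustomerName nvarchar(100)     NOT NULL,
        CreatedOn    datetime          NOT NULL
            CONSTRAINT DF_Customer_CreatedOn DEFAULT (GETDATE()),
        CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerID)
    );

    -- The supporting indexes become just as visible once you script them out.
    CREATE NONCLUSTERED INDEX IX_Customer_CustomerName
        ON dbo.Customer (CustomerName);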

    It is a door opened by Microsoft when they added visual aids to the DBA tools. And really, that is why SQL Server is used so often: anyone can make something work without needing an expensive expert to do it for them. If it works reliably, performs sufficiently, and is cheap, then the rest is, in all honesty, quickly a non-issue. In the end cost is what matters, and people's time costs more than slightly better hardware.

    And in my experience, the only time SQL Server gets a hard time is when I/O is stressed by scheduled backups running in the background for prolonged periods. Usually at 3 AM, which is of no consequence for most uses! New features of SQL Server 2008 might improve things, since you can manage resource use in that version, making all tasks more reliable.
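
    For anyone curious, here is a minimal sketch of the SQL Server 2008 Resource Governor feature I am alluding to; all of the names are made up, and note that it throttles CPU and memory per workload group rather than I/O directly:

    -- Hypothetical sketch (run in master, SQL Server 2008): cap CPU for background work.
    CREATE RESOURCE POOL BackgroundPool WITH (MAX_CPU_PERCENT = 20);
    CREATE WORKLOAD GROUP BackgroundGroup USING BackgroundPool;
    GO

    -- Classifier function: route a specific (made-up) login into the background group.
    CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        DECLARE @grp sysname;
        SET @grp = N'default';
        IF SUSER_SNAME() = N'maintenance_user'
            SET @grp = N'BackgroundGroup';
        RETURN @grp;
    END;
    GO

    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;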

    Don't get me wrong, I do spend a lot of time optimising code and modeling in my projects, but for the majority of code the extra effort is just not that cost-effective. Developers with no feel for this at all will simply not do it, and in most cases it still runs acceptably for them! As long as there is room for optimisation (and there usually is plenty), scaling will be dealt with when it becomes necessary.

    Why am I telling you all this? Just to describe one reality in which SQL Server is used a lot, and which seems to be significantly different from what others have posted! And I fear that this reality is far more prevalent than the pure-DBA one!

    One more note:

    For software developers there is more that can be done than optimizing the DB side of things. Consider caching frequently requested and processed data. This can immensely offload the strain on a database when application load increases, and it can handle scaling where the DB might not, due to inefficient modeling/SQL.

  • I watch Top Gear. One of my favorite programs on the BBC. Clarkson is always complaining about cars with a limited top speed. Going 240 on anything but a closed track will only get you a longer jail term. Yes, there are still places in the U.S. where you can get jailed for speed, and I don't mean the drug.

    Then, more on topic, I have customers that run their entire business on SQL Server Express. They already had the machine, OS, and AV. We showed them how to back it up. We even worked with them to develop a cmd file to make the process easy, and we used our tiny scheduler to automate it. (Can't run scheduled jobs in Express. 😎 Command-line interface to the rescue.) They had purchased a few days of on-site configuration and support services anyway. What is their cost for SQL? Say it with me, friends: ZERO. The performance is good enough for them. When they do outgrow that, we upgrade them to the full version and license it per server/processor.
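
    For the curious, a minimal sketch of the kind of statement such a cmd file could run; the instance name, database name, and paths below are made up:

    -- Hypothetical sketch: the backup an Express cmd file might run on a schedule.
    -- A .cmd file could invoke it with something like:
    --   sqlcmd -S .\SQLEXPRESS -E -i C:\Scripts\BackupShopDb.sql
    -- scheduled through the Windows Task Scheduler, since Express has no SQL Agent.
    BACKUP DATABASE ShopDb
    TO DISK = 'C:\Backups\ShopDb.bak'
    WITH INIT, CHECKSUM;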

    ATB, Charles Kincaid
