Test Before Deciding

  • Comments posted to this topic are about the item "Test Before Deciding"

  • This kind of goes without saying, don't you think? I test everything before I put it into production. That's why I have a QA DB server. :-D

    "Technology is a weird thing. It brings you great gifts with one hand, and it stabs you in the back with the other. ...:-D"

  • One of the things the shop I work in has missed is staying ahead of clients in testing different load levels. We made an architectural decision with a recent overhaul of our EHR that pulls all of a patient's data to the client and caches it for faster response time after loading a chart. Well, this has led to very slow chart load times, to the point that it's a major performance concern as chart sizes have grown. It's crucial to know not just what's going to happen under current data conditions, but what will happen under future data conditions.
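
    To make this concrete, here's a rough sketch of the kind of volume test that could have caught it up front. All table and column names here are invented; the idea is just to double a copy of the data until it matches a projected future chart size, then time the exact query the client runs when it loads a chart:

        -- Hypothetical schema: dbo.PatientChart stands in for whatever holds chart data.
        SELECT pc.PatientId, pc.DocumentDate, pc.DocumentText
        INTO   dbo.PatientChart_VolumeTest
        FROM   dbo.PatientChart AS pc;

        -- Double the rows a few times to simulate projected growth (4 doublings = 16x).
        DECLARE @i int = 0;
        WHILE @i < 4
        BEGIN
            INSERT INTO dbo.PatientChart_VolumeTest (PatientId, DocumentDate, DocumentText)
            SELECT PatientId, DocumentDate, DocumentText
            FROM   dbo.PatientChart_VolumeTest;
            SET @i = @i + 1;
        END;

        -- Time the "pull the whole chart" query at the future volume.
        -- (Recreate whatever indexes production has on the copy before timing.)
        DECLARE @PatientId int = 42;  -- any representative patient
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;
        SELECT * FROM dbo.PatientChart_VolumeTest WHERE PatientId = @PatientId;
        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;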

  • I think this is the most frustrating part of performance tuning. On a theoretical level an index can make perfect sense, while in practice it can have devastating effects, to the extent that the same indexing strategy that helps a database in one setting hinders it in another. I have found that a certain amount of 'art' must combine with the 'science' to make performance tuning most successful.

    The science must happen at design time, since the testing scenarios will apply to the test data and the way the application is designed to function. Then, when it is rolled out to a customer environment, some customization is often needed depending on volumes. This is where the art/science approach combines; a rough sketch of the kind of before-and-after measurement I mean follows at the end of this post. Some folks get it. Others don't. Unfortunately, you have to be down the rabbit hole before you can find out.

    Of course, the other side of that is that customers always want to hear 'Yes, absolutely, 100% confidence that it will turn your system around,' when in fact it is rarely feasible on a large-scale system to adequately test some of the minor changes that may have drastic effects.
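
    The 'science' part can be as simple as measuring a representative workload call before and after the candidate index, against realistic volumes. A minimal sketch (the table, index, and procedure names are invented):

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        -- Representative workload call, before the candidate index exists.
        EXEC dbo.LoadCustomerOrders @CustomerId = 42;

        -- The index that makes perfect sense on paper.
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Candidate
            ON dbo.Orders (CustomerId) INCLUDE (OrderDate, Total);

        -- Same call, after: compare logical reads and CPU, not theory.
        EXEC dbo.LoadCustomerOrders @CustomerId = 42;

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;

        -- The hidden cost: is the index ever read, or only maintained by writes?
        SELECT ius.index_id, ius.user_seeks, ius.user_scans, ius.user_updates
        FROM   sys.dm_db_index_usage_stats AS ius
        WHERE  ius.database_id = DB_ID()
          AND  ius.object_id   = OBJECT_ID('dbo.Orders');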

  • Truly drastic hardware additions and purely single-threaded assembly code optimization are about the only things that are almost guaranteed performance improvements... and even they can expose concurrency and timing flaws that were previously hidden!

    For all software updates, benchmarking is the key; check your main cases, edge cases, and so on with a full dataset... and, if it's critical, with the rest of the system under load as well. This is, without a doubt, expensive and difficult.

    In general, try the top two or three ways you think it'll work, and see; sometimes, for a variety of reasons, a method that "should" be slower is, in fact, faster.
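
    For instance, here are two formulations of the same question ("latest order per customer", with invented table names) that can rank either way depending on data distribution and indexes; timing both against the full dataset is the only way to know which wins on your system:

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        -- Candidate 1: correlated subquery.
        SELECT o.CustomerId, o.OrderDate
        FROM   dbo.Orders AS o
        WHERE  o.OrderDate = (SELECT MAX(o2.OrderDate)
                              FROM   dbo.Orders AS o2
                              WHERE  o2.CustomerId = o.CustomerId);

        -- Candidate 2: window function.
        SELECT x.CustomerId, x.OrderDate
        FROM  (SELECT CustomerId, OrderDate,
                      ROW_NUMBER() OVER (PARTITION BY CustomerId
                                         ORDER BY OrderDate DESC) AS rn
               FROM   dbo.Orders) AS x
        WHERE  x.rn = 1;

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;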

  • I have two analogies that I have found extremely useful when working on performance issues with non-technical folks.

    Databases are like legal pads. You can either buy one at a time, or you can buy them in bulk so you have them on hand. You can also leave a few lines free on each page in case you want to add something later. Then, if you need to, you can take all of the pages out and reorganize them, as sketched at the end of this post.

    Performance tuning is like peeling layers off an onion. You can identify all the layers you want to remove; however, if you peel them all at once you may get undesired results. Peel one layer off and re-evaluate. You may decide that you don't want to do any more because it has caused you great pain, or you may find that there are extra layers.
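
    For the DBAs in the room, the knobs behind the legal-pad analogy are real (the object names below are invented): leaving a few lines free on each page is FILLFACTOR, buying pads in bulk is pre-sizing the data file, and reorganizing the pages is an index rebuild:

        -- Leave ~10% of each page free for later inserts ("a few lines free").
        CREATE NONCLUSTERED INDEX IX_Notes_PatientId
            ON dbo.Notes (PatientId)
            WITH (FILLFACTOR = 90);

        -- Buy pads in bulk: pre-size the data file instead of growing on demand.
        ALTER DATABASE MyDatabase
            MODIFY FILE (NAME = MyDatabase_Data, SIZE = 50GB);

        -- Take all the pages out and reorganize them.
        ALTER INDEX IX_Notes_PatientId ON dbo.Notes REBUILD;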

  • For a fine-tuned, high-performance system I would recommend a complete performance review after each service pack. It is time-consuming and therefore expensive, but it prevents unpleasant surprises.

  • Revenant (3/21/2011)


    For a fine-tuned, high-performance system I would recommend a complete performance review after each service pack. It is time-consuming and therefore expensive, but it prevents unpleasant surprises.

    Do you see that big a difference after SPs? Typically I haven't noticed them much in the past, often applying them just for support reasons in case we do have issues.

  • My major push is an annual perf review for my customers. It allows for a baseline and much quicker fixing when an issue arises after a change is made. I would love to see one done after every database code change, but I don't see any reason to do one after each SQL Server Service Pack or hotfix unless it was applied to fix a perf-related issue you are experiencing.
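
    The baseline doesn't have to be elaborate, either. Even a dated snapshot of the cumulative wait stats gives next year's review something to diff against. A minimal sketch (the baseline table name is just a convention I made up):

        -- Snapshot cumulative wait stats into a dated baseline table.
        -- SELECT ... INTO creates the table; use a new name per capture.
        SELECT  GETDATE()               AS CapturedAt,
                ws.wait_type,
                ws.waiting_tasks_count,
                ws.wait_time_ms,
                ws.signal_wait_time_ms
        INTO    dbo.WaitStatsBaseline_20110321
        FROM    sys.dm_os_wait_stats AS ws
        WHERE   ws.wait_time_ms > 0;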

  • Steve Jones - SSC Editor (3/21/2011)


    Revenant (3/21/2011)


    For a fine-tuned, high-performance system I would recommend a complete performance review after each service pack. It is time-consuming and therefore expensive, but it prevents unpleasant surprises.

    Do you see that big a difference after SPs? Typically I haven't noticed them much in the past, often applying them just for support reasons in case we do have issues.

    I have not seen much difference in SQL Server itself, but 2008 SP1 needed tuning of the Win2k8 servers.

  • It always amazes me that we can meet for hours arguing about design issues that could be determined in about twenty minutes of testing and monitoring on a development server.

  • J Thaddeus Klopcic (3/22/2011)


    It always amazes me that we can meet for hours arguing about design issues that could be determined in about twenty minutes of testing and monitoring on a development server.

    Exactly, J, and what amazes me even more, as I stated previously, is that this should kind of go without saying. I mean, testing and monitoring systems should be a no-brainer, even for people with just a high school education. This is not something that should require a meeting, or even an article, to discover, or to remind all of us of its importance, IMHO. :-D

    "Technology is a weird thing. It brings you great gifts with one hand, and it stabs you in the back with the other. ...:-D"
