Are the posted questions getting worse?

  • Jeff Moden (1/25/2014)


    Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

    The Bread'n'butter database that we have at work all fits in memory and yet we have some serious performance problems with some of the legacy code because of logical reads. Some recent fixes were to relatively simple code that did (and this is not a typo) 10 to 16 TRILLION reads per 8 hour period. And it wasn't just one piece of code, either. Some of the code was high hit front end code and some of it was batch code. I have to tell you that even memory has a problem with multiple pieces of code that consume 10 to 16 Trillion reads each even when it is spread out over an 8 hour period.

    Adding more memory isn't going to fix that kind of code for performance or throughput.

    It would be interesting to see the $ value associated with your code fixes.

    Adding more memory is often much more costly than people expect.

    After several rounds of the server guys having to toss out memory chips just to add capacity, I became part of the process.

    The initial configuration wasn't always the cheapest way to start, but it gave us options for making it to the end of the lease that were cheaper overall.

    A real eye-opener for those on a tight budget running smaller machines (2-socket, 32 GB RAM): price out 4- or 8-socket machines with over 64 GB of RAM.

    We had a consolidation project where those in charge didn't have a good handle on this.

    Projected savings took a hit.

  • Greg Edwards-268690 (1/26/2014)


    Jeff Moden (1/25/2014)


    Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

    The Bread'n'butter database that we have at work all fits in memory and yet we have some serious performance problems with some of the legacy code because of logical reads. Some recent fixes were to relatively simple code that did (and this is not a typo) 10 to 16 TRILLION reads per 8 hour period. And it wasn't just one piece of code, either. Some of the code was high hit front end code and some of it was batch code. I have to tell you that even memory has a problem with multiple pieces of code that consume 10 to 16 Trillion reads each even when it is spread out over an 8 hour period.

    Adding more memory isn't going to fix that kind of code for performance or throughput.

    It would be interesting to see the $ value associated with your code fixes.

    Adding more memory is often much more costly than people expect.

    After several rounds of the server guys having to toss out memory chips just to add capacity, I became part of the process.

    The initial configuration wasn't always the cheapest way to start, but it gave us options for making it to the end of the lease that were cheaper overall.

    A real eye-opener for those on a tight budget running smaller machines (2-socket, 32 GB RAM): price out 4- or 8-socket machines with over 64 GB of RAM.

    We had a consolidation project where those in charge didn't have a good handle on this.

    Projected savings took a hit.

    In my/our case, the decision went the other way: to meet the performance criteria, we got a 64-core, 384 GB server. We could implement predictive caching, and we may still have to, but the new server was in place and running in three weeks, while designing and implementing caching would take months.

  • hisakimatama (1/24/2014)


    Actually, I thought I faintly remembered reading something similar at some point; after doing some digging, it turns out it was (incorrectly) attributed to Brent Ozar's site in my head:

    Blog Entry

    There's a snippet in there saying that, if you can fit the database into memory, the I/O need overall should be reduced (paraphrasing). Somehow or another, that got twisted into the "it's best to fit the database into RAM" idea in my head. Not too big of a logical leap to make; perhaps others have made similarly misconstrued assumptions.

    I remember seeing a session by Brent Ozar on Channel 9 (I think it was TechEd North America) where he does say, for OLTP databases, to just load the whole thing into memory. He then adds that this of course doesn't work for data warehouses.

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • Jeff Moden (1/25/2014)


    Some recent fixes were to relatively simple code that did (and this is not a typo) 10 to 16 TRILLION reads per 8 hour period.

    Wow...by my estimates that's half a billion reads per *second*! :blink: I dread to think what sort of monster machine you'd need to provide that sort of throughput even for logical reads.

    I recommend tasking whoever wrote that to produce a spreadsheet program on a ZX Spectrum 48K--hopefully that'll teach them the advantages of optimisation. 🙂
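
    For reference, the arithmetic behind that estimate, as a throwaway snippet using the figures from Jeff's post:

    -- 10 to 16 trillion logical reads over an 8-hour (28,800-second) window
    -- works out to roughly 350 to 550 million reads per second.
    SELECT
        CAST(10000000000000.0 / (8 * 3600) AS bigint) AS reads_per_sec_low,    -- ~347 million
        CAST(16000000000000.0 / (8 * 3600) AS bigint) AS reads_per_sec_high;   -- ~556 million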

  • Jeff Moden (1/25/2014)


    Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

    /Me slamming shoe on desk

    GOOD CODE ALWAYS IS REQUIRED!

    😀

  • jasona.work (1/27/2014)


    Jeff Moden (1/25/2014)


    Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

    /Me slamming shoe on desk

    GOOD CODE ALWAYS IS REQUIRED!

    😀

    /

    /

    /

    /

    think you missed a few.

    Stay warm everyone.

    So much warmer in Alaska....

  • Jeff Moden (1/25/2014)


    Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

    The Bread'n'butter database that we have at work all fits in memory and yet we have some serious performance problems with some of the legacy code because of logical reads. Some recent fixes were to relatively simple code that did (and this is not a typo) 10 to 16 TRILLION reads per 8 hour period. And it wasn't just one piece of code, either. Some of the code was high hit front end code and some of it was batch code. I have to tell you that even memory has a problem with multiple pieces of code that consume 10 to 16 Trillion reads each even when it is spread out over an 8 hour period.

    Adding more memory isn't going to fix that kind of code for performance or throughput.

    We have similar issues where I work. The database doesn't necessarily fit entirely in memory, but when doing tuning I regularly see queries that do more logical reads, with no physical reads, than there are pages in the table(s) being accessed. Then we went and bought Ignite to see what needed tuning (after I had said there weren't wait issues), and all Ignite did was show us that we weren't taxing the server (CPU and memory stats were all good) and that we weren't waiting (wait stats were all good). It was just what I said at the beginning: bad practices that lead to bad plans with reads orders of magnitude bigger than we need. We could have added memory, faster CPUs, and faster storage and not seen much, if any, gain. We tuned the queries, adopted better maintenance practices, and got consistent performance.
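
    If anyone wants to hunt for those offenders on their own box, a minimal sketch against the standard plan-cache DMVs is below; the TOP and the ordering are just illustrative, nothing here is specific to any tool:

    -- Top cached statements by logical reads, with average reads per execution.
    SELECT TOP (20)
        qs.total_logical_reads,
        qs.execution_count,
        qs.total_logical_reads / NULLIF(qs.execution_count, 0) AS avg_logical_reads,
        qs.total_worker_time / 1000 AS total_cpu_ms,
        SUBSTRING(st.text,
                  (qs.statement_start_offset / 2) + 1,
                  ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                    END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;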

  • jasona.work (1/27/2014)


    Jeff Moden (1/25/2014)


    Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

    /Me slamming shoe on desk

    GOOD CODE ALWAYS IS REQUIRED!

    😀

    ... as well as good project management that won't sign off on unrealistic performance SLAs.

  • Revenant (1/27/2014)


    jasona.work (1/27/2014)


    Jeff Moden (1/25/2014)


    Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

    /Me slamming shoe on desk

    GOOD CODE ALWAYS IS REQUIRED!

    😀

    ... as well as good project management that won't sign off on unrealistic performance SLAs.

    you mean they can't get results for the query before they submit it?

    or return the data they need, although they don't know what they want yet?

    that's unrealistic.... I must have a bad attitude that needs adjustment.

  • Greg Edwards-268690 (1/27/2014)


    Revenant (1/27/2014)


    jasona.work (1/27/2014)


    Jeff Moden (1/25/2014)


    Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

    /Me slamming shoe on desk

    GOOD CODE ALWAYS IS REQUIRED!

    😀

    ... as well as good project management that won't sign off on unrealistic performance SLAs.

    you mean they can't get results for the query before they submit it?

    or return the data they need, although they don't know what they want yet?

    that's unrealistic.... I must have a bad attitude that needs adjustment.

    There are a lot of bad attitudes in the world. They want their data delivered before they know what they want and it should have already been formatted yesterday the way they're going to want it tomorrow. It should also predict the future and already allow for any change based on nothing but their own whim. All this without DBCC TIMEWARP. Oh yeah...and it should be free as well.

  • Ed Wagner (1/27/2014)


    Greg Edwards-268690 (1/27/2014)


    Revenant (1/27/2014)


    jasona.work (1/27/2014)


    Jeff Moden (1/25/2014)


    Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

    /Me slamming shoe on desk

    GOOD CODE ALWAYS IS REQUIRED!

    😀

    ... as well as good project management that won't sign off on unrealistic performance SLAs.

    you mean they can't get results for the query before they submit it?

    or return the data they need, although they don't know what they want yet?

    that's unrealistic.... I must have a bad attitude that needs adjustment.

    There are a lot of bad attitudes in the world. They want their data delivered before they know what they want and it should have already been formatted yesterday the way they're going to want it tomorrow. It should also predict the future and already allow for any change based on nothing but their own whim. All this without DBCC TIMEWARP. Oh yeah...and it should be free as well.

    In my case, the requirement is to serve a half-million-row, custom-filtered catalog converted to Excel in less than 30 seconds, while reading the base rowset alone takes at least 27 seconds...

  • Revenant (1/27/2014)


    Ed Wagner (1/27/2014)


    Greg Edwards-268690 (1/27/2014)


    Revenant (1/27/2014)


    jasona.work (1/27/2014)


    Jeff Moden (1/25/2014)


    Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

    /Me slamming shoe on desk

    GOOD CODE ALWAYS IS REQUIRED!

    😀

    ... as well as good project management that won't sign off on unrealistic performance SLAs.

    you mean they can't get results for the query before they submit it?

    or return the data they need, although they don't know what they want yet?

    that's unrealistic.... I must have a bad attitude that needs adjustment.

    There are a lot of bad attitudes in the world. They want their data delivered before they know what they want and it should have already been formatted yesterday the way they're going to want it tomorrow. It should also predict the future and already allow for any change based on nothing but their own whim. All this without DBCC TIMEWARP. Oh yeah...and it should be free as well.

    In my case, the requirement is to serve a half-million-row, custom-filtered catalog converted to Excel in less than 30 seconds, while reading the base rowset alone takes at least 27 seconds...

    Lucky you - if reading the data goes well, you still have some extra time to do everything else!

    I've seen both kinds of people out there - those that wait, understanding the challenge, and will do something else while waiting.

    And the other kind - don't think I need to go any further.

  • Jeff Moden (1/25/2014)


    The Bread'n'butter database that we have at work all fits in memory and yet we have some serious performance problems with some of the legacy code because of logical reads. Some recent fixes were to relatively simple code that did (and this is not a typo) 10 to 16 TRILLION reads per 8 hour period. And it wasn't just one piece of code, either. Some of the code was high hit front end code and some of it was batch code.

    Although I'm a great believer in throwing hardware at a problem when that's the cheapest solution, any time I see anywhere near that much logical IO I'm going to want to throw development effort at it first. Just putting it all in store wouldn't help unless the store and processor architecture were such that the physical store access rate could be achieved, which would require some very interesting (and expensive) hardware - I'm not sure where the big boys are on that stuff now, but perhaps solving it with storage as opposed to software design would require significant new hardware development.

    Adding more memory isn't going to fix that kind of code for performance or throughput.

    Dead right. And I suspect that development effort would be required on schema design as well as on (SQL) code when fixing that sort of stuff.

    Tom
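
    For anyone who wants to see that effect on a table that is already fully cached, the standard measurement settings are enough; this is only a sketch, and the table name is a placeholder:

    -- With the table already in the buffer pool there will be zero physical reads,
    -- but the logical reads, and the CPU time they burn, still show up in the output.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT COUNT(*)              -- placeholder query
    FROM dbo.SomeBigTable;       -- hypothetical table name

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;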

  • L' Eomot InversΓ© (1/27/2014)


    Jeff Moden (1/25/2014)


    The Bread'n'butter database that we have at work all fits in memory and yet we have some serious performance problems with some of the legacy code because of logical reads. Some recent fixes were to relatively simple code that did (and this is not a typo) 10 to 16 TRILLION reads per 8 hour period. And it wasn't just one piece of code, either. Some of the code was high hit front end code and some of it was batch code.

    Although I'm a great believer in throwing hardware at a problem when that's the cheapest solution, any time I see anywhere near that much logical IO I'm going to want to throw development effort at it first. Just putting it all in store wouldn't help unless the store and processor architecture were such that the physical store access rate could be achieved, which would require some very interesting (and expensive) hardware - I'm not sure where the big boys are on that stuff now, but perhaps solving it with storage as opposed to software design would require significant new hardware development.

    Adding more memory isn't going to fix that kind of code for performance or throughput.

    Dead right. And I suspect that development effort would be required on schema design as well as on (SQL) code when fixing that sort of stuff.

    Microsoft has a relatively new office in Edina. I was lucky enough to get a tour when they first opened.

    They 'showcase' some of the MPP technology, and for some customers, will let them set it up with their own data as a proof of concept.

    Rather impressive what can be done by distributing the work across scores of machines.

    Very different from what most are used to - both from a hardware side and how you architect to take advantage of this.

    I'm sure you would find it very interesting should you ever have the chance to visit one of these.

  • Greg Edwards-268690 (1/27/2014)


    L' Eomot InversΓ© (1/27/2014)


    Jeff Moden (1/25/2014)


    The Bread'n'butter database that we have at work all fits in memory and yet we have some serious performance problems with some of the legacy code because of logical reads. Some recent fixes were to relatively simple code that did (and this is not a typo) 10 to 16 TRILLION reads per 8 hour period. And it wasn't just one piece of code, either. Some of the code was high hit front end code and some of it was batch code.

    Although I'm a great believer in throwing hardware at a problem when that's the cheapest solution, any time I see anywhere near that much logical IO I'm going to want to throw development effort at it first. Just putting it all in store wouldn't help unless the store and processor architecture were such that the physical store access rate could be achieved, which would require some very interesting (and expensive) hardware - I'm not sure where the big boys are on that stuff now, but perhaps solving it with storage as opposed to software design would require significant new hardware development.

    Adding more memory isn't going to fix that kind of code for performance or throughput.

    Dead right. And I suspect that development effort would be required on schema design as well as on (SQL) code when fixing that sort of stuff.

    Microsoft has a relatively new office in Edina. I was lucky enough to get a tour when they first opened.

    They 'showcase' some of the MPP technology, and for some customers, will let them set it up with their own data as a proof of concept.

    Rather impressive what can be done by distributing the work across scores of machines.

    Very different from what most are used to - both from a hardware side and how you architect to take advantage of this.

    I'm sure you would find it very interesting should you ever have the chance to visit one of these.

    I wish MSFT would introduce a T-SQL construct like this:

    RUN ON THREAD a
        (some T-SQL)
    RUN ON THREAD b
        (another T-SQL)
    AWAIT a AND b
        (runs after both a and b finish);

    This is the DEFAULT in Cosmos, for crying out loud! Why not in T-SQL, developed by the same division?

    (You could replace 'ON THREAD a' with 'PARALLEL' or something similar. A rough way to fake this with today's tools is sketched below.)

    edits: posted before I finished (a key combination?)
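
    Until something like that exists, about the closest you can get to a fork/join in plain T-SQL is kicking off pre-created SQL Agent jobs and polling until they finish. A rough sketch only; the job names below are made up, and both jobs would have to exist already:

    -- Fire two SQL Agent jobs "in parallel" (RUN ON THREAD a / b).
    EXEC msdb.dbo.sp_start_job @job_name = N'LoadCustomers';   -- hypothetical job
    EXEC msdb.dbo.sp_start_job @job_name = N'LoadOrders';      -- hypothetical job

    -- Crude "AWAIT a AND b": poll Agent activity until neither job from the
    -- current Agent session is still outstanding.
    WHILE EXISTS (
        SELECT 1
        FROM msdb.dbo.sysjobactivity AS ja
        JOIN msdb.dbo.sysjobs AS j
          ON j.job_id = ja.job_id
        WHERE j.name IN (N'LoadCustomers', N'LoadOrders')
          AND ja.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
          AND ja.run_requested_date IS NOT NULL
          AND ja.stop_execution_date IS NULL
    )
    BEGIN
        WAITFOR DELAY '00:00:05';   -- check every five seconds
    END;

    -- (runs after both a and b finish)
    PRINT 'Both branches done.';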
