Are the posted questions getting worse?

  • Stefan Krzywicki (4/20/2011)


    CirquedeSQLeil (4/20/2011)


    GilaMonster (4/20/2011)


    LutzM (4/20/2011)


Is there any other method than using the mouse pointer? There are so many words on the screen. How else can you keep track of where the next word to look up in the dictionary is? :-)

    Finger?

    laser pointer

    Finglonger! (Futurama)

I just cursed at it ... So must be the cursor :w00t:

    Johan

    Learn to play, play to learn !

Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere :w00t:

    - How to post Performance Problems
- How to post data/code to get the best help

    - How to prevent a sore throat after hours of presenting ppt

press F1 for solution, press shift+F1 for urgent solution :-D

    Need a bit of Powershell? How about this

Who am I? Sometimes this is me but most of the time this is me

  • ALZDBA (4/20/2011)


    Stefan Krzywicki (4/20/2011)


    CirquedeSQLeil (4/20/2011)


    GilaMonster (4/20/2011)


    LutzM (4/20/2011)


Is there any other method than using the mouse pointer? There are so many words on the screen. How else can you keep track of where the next word to look up in the dictionary is? :-)

    Finger?

    laser pointer

    Finglonger! (Futurama)

I just cursed at it ... So must be the cursor :w00t:

    That's only good for short or cursory statements.

    --------------------------------------
    When you encounter a problem, if the solution isn't readily evident go back to the start and check your assumptions.
    --------------------------------------
    It’s unpleasantly like being drunk.
    What’s so unpleasant about being drunk?
    You ask a glass of water. -- Douglas Adams

  • Grant Fritchey (4/20/2011)


    Stefan Krzywicki (4/20/2011)


    opc.three (4/20/2011)


    Stefan Krzywicki (4/20/2011)


    I went to the latest SQL Server Saturday in Boston and had a great time. I'm mentioning it now because I've been using a little trick I learned in one of the sessions and it reminds me how valuable these events are. Sure I learned a lot of theory and concepts, but day to day I find myself using this little SSMS throwaway tip.

Under your table name, click the Columns folder and drag it into your query window, and you get a complete list of all your columns. It's a time saver and it makes me less likely to use * even in quick queries I don't intend to use more than once.

    Sure, it seems a ridiculous, little thing to rave about, but when you learn something you use every day, that changes the way you code, you find yourself thanking the person that you learned it from every day.

    SQL Server Saturdays are awesome and I'd never have known about them if it wasn't for SQLServerCentral.com

Thanks for sharing that. While we are on the topic, I figured I would share too :-)

    The drag-and-drop feature you mentioned between the Object Explorer and a Query Window works on all proc and function Parameter tree items as well.

    I don't get paid from RedGate, but I have been using SQL Prompt for years and they have it figured out. If I type SELECT * FROM dbo.Table; and then put my cursor immediately after the * and hit Tab it expands the * to the set of column names from dbo.Table. The expansion works on queries with JOINs too.

You don't get paid by RedGate, but I hope you're at least getting kick-backs from Grant! :-)

    I can't comment on the kick-back program as it's currently configured. Suffice to say, if you've received a check, you earned it.

    I'm on the QA team, err, I signed up for the Beta Tester Program...does that count?

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • GilaMonster (4/20/2011)


    LutzM (4/20/2011)


Is there any other method than using the mouse pointer? There are so many words on the screen. How else can you keep track of where the next word to look up in the dictionary is? :-)

    Finger?

    I tried. But the screen looked messy after a while. Especially if there's pizza involved.

    ...

    Wait a minute: I just figured I could use a slice of pizza to cover a whole section :w00t::-D



    Lutz
    A pessimist is an optimist with experience.

How to get fast answers to your question
How to post performance related questions
Links for Tally Table, Cross Tabs and Dynamic Cross Tabs, Delimited Split Function

  • You know, this is why I'm never quite sure if I should write tech articles. No, I'm not looking for help per se, just bringing a personally amusing moment.

    So I'm trucking along on this seek many vs. scan once mechanic I'm trying to nail down one way or the other in terms of optimization. I've got my different density tables built, some sample items, things are going about how I'm expecting, in my mini-experiments. Then I start trying to describe the results of some real world queries, and I run headfirst into these two queries producing the same row results:

    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table 'PriceHistoryTable_25000_NoBuf'. Scan count 1, logical reads 2637, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table 'Items25000'. Scan count 1, logical reads 179, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    SQL Server Execution Times:

    CPU time = 1388 ms, elapsed time = 3434 ms.

    Table 'Worktable'. Scan count 25000, logical reads 1918172, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table 'PriceHistoryTable_25000_NoBuf'. Scan count 1, logical reads 2637, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table 'Items25000'. Scan count 1, logical reads 179, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    SQL Server Execution Times:

    CPU time = 2995 ms, elapsed time = 3261 ms.

They're MAXDOP 1'd so I don't have to work around parallelism components for the first draft. So, now I'm back to the drawing board trying to figure out why reading 1.9 million times more is faster than the other query. :-)

So for the rest of you tech article writers/bloggers: instead of helping me on this, what the heck do you do when you go sailing headfirst into things of this nature? :-D
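As an editorial aside: output like the two runs quoted above is easier to compare once parsed. A minimal sketch in Python, assuming the standard SET STATISTICS IO message format; the function and pattern names are illustrative, not from the post:

```python
import re

# Matches the standard SET STATISTICS IO output lines, e.g.
# "Table 'Worktable'. Scan count 25000, logical reads 1918172, ..."
STATS_IO_RE = re.compile(
    r"Table '(?P<table>[^']+)'\. "
    r"Scan count (?P<scans>\d+), logical reads (?P<reads>\d+)"
)

def parse_stats_io(text):
    """Return {table: (scan count, logical reads)} from STATISTICS IO output."""
    return {
        m["table"]: (int(m["scans"]), int(m["reads"]))
        for m in STATS_IO_RE.finditer(text)
    }
```

Running both captures through something like this and diffing the dictionaries makes differences such as the 25,000 worktable scans jump out immediately.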


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • Craig Farrell (4/20/2011)


So, now I'm back to the drawing board trying to figure out why reading 1.9 million times more is faster than the other query. :-)

    It's not. It's 1.6 seconds slower.

Don't look at the elapsed time. That includes network transfer and display time, lock waits, etc., and it can vary dramatically, especially if you're working on a desktop that's doing other things. Look at the CPU time: that's the amount of work done, the amount of time the query sat on the processor(s). Unless the query runs in parallel, that's also the minimum time the query can take.

Also, 0.2 seconds in 3 seconds is less than a 10% difference. Is that significant? More to the point, is it consistent? There's a reason why, on my blog, I always run queries multiple times (even if I don't show that), take either the average or a result from the middle of the range, look at the spread before drawing conclusions, and focus mainly on the CPU time (hmmm, maybe that in itself is worth a post/article).

If the range of elapsed time for those is ±0.6 seconds, then that 0.2 is meaningless, as it's lost in the error range.

    Got any background in statistics?
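Gail's rule of thumb above (compare the means, then check whether the gap clears the run-to-run spread) can be sketched numerically. A minimal illustration in Python; the function name and the timing samples are hypothetical, loosely echoing the CPU times quoted earlier in the thread:

```python
from statistics import mean, pstdev

def significant_difference(cpu_a_ms, cpu_b_ms):
    """Difference of mean CPU times, and whether that difference
    exceeds the combined spread of the two samples. If it does not,
    the gap is lost in the error range, as described above."""
    diff = mean(cpu_a_ms) - mean(cpu_b_ms)
    spread = pstdev(cpu_a_ms) + pstdev(cpu_b_ms)
    return diff, abs(diff) > spread

# Hypothetical repeated runs echoing the two queries' CPU times:
slow = [2995, 3010, 2980, 3005, 2990]
fast = [1388, 1402, 1395, 1410, 1380]
```

Here a 1.6-second gap dwarfs the run-to-run spread, so it's a real difference; a 0.2-second gap against a ±0.6-second spread would not be.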

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Stefan Krzywicki (4/20/2011)


Under your table name, click the Columns folder and drag it into your query window, and you get a complete list of all your columns. It's a time saver and it makes me less likely to use * even in quick queries I don't intend to use more than once.

    :w00t: That's awesome!

Another neat little trick is selecting code with ALT pressed. It lets you select a rectangular block of code, so you can easily align it with something else.

    (maybe you all know this already :-))

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • Grant Fritchey (4/20/2011)


    Brandie Tarvin (4/20/2011)


    Grant, what is that avatar you're using now? I can't figure it out.

    An overflowing glass of water. At least I'm assuming it's water. It might be vodka. That might help to explain things too.

Is that what it is? I thought at first sight it was some robot with duct tape...

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • GilaMonster (4/20/2011)


    It's not. It's 1.6 seconds slower.

Check your PM; I don't want to start a tech conversation in the thread. However, yes, I have a reasonable acquaintance with statistics. :-)


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • Craig Farrell (4/21/2011)


    GilaMonster (4/20/2011)


    It's not. It's 1.6 seconds slower.

Check your PM; I don't want to start a tech conversation in the thread. However, yes, I have a reasonable acquaintance with statistics. :-)

Am suitably slapped down. ;-)

    Too often I see people trying to make a big deal out of a 10ms difference in a 15 second query.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Personally, I learn a lot by being a wallflower in these kinds of conversations, so if it's not in the Thread but you are having it elsewhere, can we get a link? (And Gail, you should totally write that article)

    ---------------------------------------------------------
How best to post your question
How to post performance problems
Tally Table: What it is and how it replaces a loop

stewsterl 80804 (10/16/2009): "I guess when you stop and try to understand the solution provided, you not only learn, but save yourself some headaches when you need to make any slight changes."

  • jcrawf02 (4/21/2011)


    Personally, I learn a lot by being a wallflower in these kinds of conversations, so if it's not in the Thread but you are having it elsewhere, can we get a link? (And Gail, you should totally write that article)

Via PM, and there's nothing much to see other than Craig (with, I imagine, increasing levels of exasperation) going 'I know that', 'I did that', 'I checked that', etc. :-)

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (4/21/2011)


    jcrawf02 (4/21/2011)


    Personally, I learn a lot by being a wallflower in these kinds of conversations, so if it's not in the Thread but you are having it elsewhere, can we get a link? (And Gail, you should totally write that article)

Via PM, and there's nothing much to see other than Craig (with, I imagine, increasing levels of exasperation) going 'I know that', 'I did that', 'I checked that', etc. :-)

Nah, I would have to say it would be more along the lines of: here's my reasoning, here's why I made the choices I did, and here's what numbers I remembered off the top of my head... now I'm happy I'm doing this privately, as I cringe waiting for Gail to point out the billion places I was wrong in my reasoning. :-)

I've learned not to disagree with Gail unless I'm pretty damned sure of what I'm talking about. That doesn't mean I'm CORRECT, just that I'm sure I THINK I am. *wiggles eyebrows* :-D


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • jcrawf02 (4/21/2011)


    Personally, I learn a lot by being a wallflower in these kinds of conversations, so if it's not in the Thread but you are having it elsewhere, can we get a link? (And Gail, you should totally write that article)

To answer this in as short a piece as possible: I'm using multiple runs (10 per iteration), with the first run removed to allow for cache loading. I find and remove outliers that fall more than 50% away from the average, confirmed across two iterations, unless there are two or more instances of outliers at that 'other' range. That's human detection; you can't really let the machine do it without writing more code than what you were testing in the first place. :-) The one I've found was most likely a network hiccup in data returns.

    From there, I'm taking the average difference to determine significance of change and to determine consistency of the issue.

I also use total time because a portion of this process's wait time is disk I/O wait, because of the size of the data. The system in question is relatively quiet, so for most purposes it's considered dedicated to me. Since the dataset returns are exact matches, only network hiccups would/should change the time of return of the data. This can be confirmed via Tools > Options by simply not sending yourself the returns, which would probably increase the significance of the average. However, I still want an end-to-end test because my purpose is 'time to user', not 'impact on machine'.

    Gail and I are on the same page, she just wanted to make sure I wasn't shooting myself in the foot... which is always appreciated. :w00t:
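Craig's protocol above (drop the warm-up run, discard outliers more than 50% from the average, then average the rest) can be sketched like this; the function name, threshold constant, and sample timings are illustrative, not from the post:

```python
from statistics import mean

OUTLIER_BAND = 0.5  # discard runs more than 50% away from the average

def clean_average(timings_ms):
    """Average one iteration of timing runs: drop the first run
    (it only loads the cache), then remove outliers that fall
    outside OUTLIER_BAND of the average of the remaining runs."""
    runs = timings_ms[1:]              # first run warms the cache
    avg = mean(runs)
    kept = [t for t in runs if abs(t - avg) <= OUTLIER_BAND * avg]
    return mean(kept)

# One cold run, eight steady runs, and one network hiccup:
times = [5200, 3400, 3450, 3380, 3420, 9800, 3390, 3410, 3440, 3370]
```

The human check Craig mentions still matters: if two or more runs cluster at the 'other' range, they may be a second legitimate mode rather than noise, and discarding them blindly would hide it.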


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • Oh. Em. Gee.

    Are you guys having a deep technical discussion on some type of performance testing? Shouldn't that be a thread somewhere else besides The Thread?
