Subroutines in TSQL

  • Telarian (3/3/2011)


    There can be any number of reasons to call a bit of code repeatedly. If that code is very simple, fits easily into a UDF, and might even be useful elsewhere, great, put it in a UDF. But if that code operates on lots of existing data or variables that you are using within the current stored procedure, you will have to pass all of that to the new procedure you create to substitute for a subroutine. That can be pretty frustrating. As for using GOTO, that's just a disaster waiting to happen when it doesn't follow the path you expected it to. GOTOs are definitely not a good replacement for a subroutine structure.

    All IMHO of course.

    Oh, be careful now. "It Depends". 🙂 Scalar and Multi-statement Table Valued Functions (mTVF) can very quickly turn code into some pretty nasty, performance-challenged RBAR, especially if they refer to a table. Think of them as a hidden cursor.

    Inline Table Valued Functions (iTVF), however, can be VERY useful and are calculated in the execution plan as if they were a "programmable view". Oddly enough, they also work very effectively to return single values like a Scalar function would when used with CROSS APPLY (thanks to Paul White for that nice little trick).
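
    A minimal sketch of that CROSS APPLY trick; every object name below (dbo.Orders, dbo.Customers, dbo.GetOrderTotal) is hypothetical, not from the thread:

    ```sql
    -- Hypothetical inline TVF; all object names are illustrative.
    CREATE FUNCTION dbo.GetOrderTotal (@CustomerID INT)
    RETURNS TABLE
    AS
    RETURN
        SELECT SUM(o.Amount) AS OrderTotal
        FROM dbo.Orders AS o
        WHERE o.CustomerID = @CustomerID;
    GO

    -- With CROSS APPLY, the iTVF returns one value per outer row, like a
    -- scalar function would, but the optimizer expands it inline in the
    -- execution plan instead of invoking it row by row.
    SELECT c.CustomerID, c.Name, t.OrderTotal
    FROM dbo.Customers AS c
    CROSS APPLY dbo.GetOrderTotal(c.CustomerID) AS t;
    ```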

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • This seems fitting given the circumstances:

    http://xkcd.com/292/

    :hehe:

  • HowardW (3/3/2011)


    This seems fitting given the circumstances:

    http://xkcd.com/292/

    :hehe:

    Perfect!

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

    Depending on the circumstances, a view can replace the functionality of a scalar or table-valued function and do so more efficiently. I once developed a data warehouse and reporting application that would create views depending on which reporting configurations the user chose. For example, the column configuration and joins would change, or it would determine which specific tables were combined via UNION in the result set (this was a SQL2K database using one table per data load and partitioned views). So there was a database set aside to contain scores of user-defined views that could be re-used.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (3/3/2011)


    Depending on the circumstance, a view can replace the functionality of a scalar or table valued function and do so more efficiently.

    In 2005 or later, I would prefer an in-line TVF (aka parameterized view). Just as efficient (inline TVFs are expanded into the query exactly as views are) and iTVFs can take parameters too.

  • SQLkiwi (3/3/2011)


    Eric M Russell (3/3/2011)


    Depending on the circumstance, a view can replace the functionality of a scalar or table valued function and do so more efficiently.

    In 2005 or later, I would prefer an in-line TVF (aka parameterized view). Just as efficient (inline TVFs are expanded into the query exactly as views are) and iTVFs can take parameters too.

    It's true that the SQL SELECT in an inline TVF is folded into the execution plan basically the same as a view and would yield the same performance in most scenarios. Whenever I encounter a multi-statement TVF, I typically rewrite it as a view. I like seeing joins on columns rather than function calls with parameters, and I typically stick with old-school, ANSI-SQL-ish techniques unless a newer option provides some practical benefit or becomes the new de facto standard way of doing it. An indexed view can provide a performance advantage over an inline TVF in those situations where it's practical to index a view.
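
    As a sketch of the indexed-view option (object names are hypothetical; an indexed view requires SCHEMABINDING, two-part names, and a unique clustered index, and an aggregate indexed view must include COUNT_BIG(*)):

    ```sql
    CREATE VIEW dbo.vSalesByProduct
    WITH SCHEMABINDING
    AS
        SELECT ProductID,
               COUNT_BIG(*)   AS RowCnt,      -- required in aggregate indexed views
               SUM(LineTotal) AS TotalSales   -- LineTotal assumed NOT NULL here
        FROM dbo.SalesDetail
        GROUP BY ProductID;
    GO

    -- The unique clustered index is what actually materializes the view.
    CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
        ON dbo.vSalesByProduct (ProductID);
    ```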

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Jeff Moden (3/3/2011)


    Oh, be careful now. "It Depends". 🙂 Scalar and Multi-statement Table Valued Functions (mTVF) can very quickly turn code into some pretty nasty, performance-challenged RBAR, especially if they refer to a table. Think of them as a hidden cursor.

    Inline Table Valued Functions (iTVF), however, can be VERY useful and are calculated in the execution plan as if they were a "programmable view". Oddly enough, they also work very effectively to return single values like a Scalar function would when used with CROSS APPLY (thanks to Paul White for that nice little trick).

    Don't get me wrong, I'm not saying there aren't usually better ways to do things. But sometimes you end up in less than ideal places and at those times a sub would be good.

    When I need to join a result set from a remote server, such as over a linked server connection, neither a view nor a TVF is beneficial. I'll typically execute a parameterized pass-through query (using EXEC .. AT...), insert the result into a temporary table, and then join that result with the local select statement. Sometimes using an intermediary table is even the best way to join two complex local result sets.
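
    A minimal sketch of that pass-through pattern, with a hypothetical linked server name (RemoteServer) and illustrative tables:

    ```sql
    CREATE TABLE #RemoteOrders (OrderID INT, CustomerID INT, Amount MONEY);

    -- Parameterized pass-through query: the WHERE clause runs on the remote
    -- server, and only the filtered rows come back across the wire.
    INSERT INTO #RemoteOrders (OrderID, CustomerID, Amount)
    EXEC ('SELECT OrderID, CustomerID, Amount
           FROM dbo.Orders
           WHERE OrderDate >= ?', '20110101') AT RemoteServer;

    -- Join the staged remote rows with local data.
    SELECT r.OrderID, r.Amount, c.Name
    FROM #RemoteOrders AS r
    JOIN dbo.Customers AS c
      ON c.CustomerID = r.CustomerID;
    ```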

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (3/3/2011)


    It's true that the SQL SELECT in an inline TVF is folded into the execution plan basically the same as a view and would yield the same performance in most scenarios.

    Yes, but you can't pass parameters to a view.

    Whenever I encounter a multi-statement TVF, I typically rewrite it as a view.

    Or as an in-line TVF 😉 I wouldn't miss multi-statement TVFs much if they disappeared from the language tomorrow. There are some useful edge cases, though.

    An indexed view can provide a performance advantage over an inline-TVF for those situations where it's practical to index a view.

    Naturally.

  • Eric M Russell (3/3/2011)


    When I need to join a result set from a remote server, such as over a linked server connection, neither a view nor a TVF is beneficial. I'll typically execute a parameterized pass-through query (using EXEC .. AT...), insert the result into a temporary table, and then join that result with the local select statement. Sometimes using an intermediary table is even the best way to join two complex local result sets.

    I've found that if you insert from a remote server into a local table variable, it avoids the DTC overhead, or at least some of it. Huge performance increase, but not safe for use with volatile data that might change during the local use.
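
    A sketch of the table-variable staging trick (linked server and object names are hypothetical):

    ```sql
    DECLARE @Remote TABLE (CustomerID INT, Amount MONEY);

    -- Pulling the remote rows into a table variable can keep the statement
    -- from being promoted to a distributed (MSDTC) transaction.
    INSERT INTO @Remote (CustomerID, Amount)
    SELECT CustomerID, Amount
    FROM RemoteServer.SalesDB.dbo.Orders;

    -- Local work then joins against the snapshot, so changes on the remote
    -- side after the insert won't be seen (the volatility caveat above).
    SELECT r.CustomerID, r.Amount, c.Name
    FROM @Remote AS r
    JOIN dbo.Customers AS c
      ON c.CustomerID = r.CustomerID;
    ```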

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GSquared (3/3/2011)


    I've found that if you insert from a remote server into a local table variable, it avoids the DTC overhead, or at least some of it. Huge performance increase, but not safe for use with volatile data that might change during the local use.

    Another option, on 2008, is to use sp_serveroption to set remote proc transaction promotion off.
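
    For reference, that option can be set as follows (the server name is a placeholder; the 'remote proc transaction promotion' option of sp_serveroption is available from SQL Server 2008):

    ```sql
    -- Stops linked server calls from being promoted to a distributed
    -- (MSDTC) transaction; 'RemoteServer' is a hypothetical name.
    EXEC sp_serveroption
         @server   = N'RemoteServer',
         @optname  = 'remote proc transaction promotion',
         @optvalue = 'false';
    ```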

  • Eric M Russell (3/3/2011)


    When I need to join a result set from a remote server, such as over a linked server connection, neither a view nor a TVF is beneficial. I'll typically execute a parameterized pass-through query (using EXEC .. AT...), insert the result into a temporary table, and then join that result with the local select statement.

    As always, it depends. Sometimes the optimizer can successfully remote parts of a combined query to achieve better performance. There are a number of ways to optimize local/remote queries, and fetching the entire remote set to a local table can be a good fall-back option in some cases. I would almost always let the optimizer have a go first though.

  • SQLkiwi (3/3/2011)


    GSquared (3/3/2011)


    I've found that if you insert from a remote server into a local table variable, it avoids the DTC overhead, or at least some of it. Huge performance increase, but not safe for use with volatile data that might change during the local use.

    Another option, on 2008, is to use sp_serveroption to set remote proc transaction promotion off.

    Yep, but I haven't had a chance to try that in anything but proof-of-concept environments yet.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GSquared (3/3/2011)


    Yep, but I haven't had a chance to try that in anything but proof-of-concept environments yet.

    Understood.

  • SQLkiwi (3/3/2011)


    Eric M Russell (3/3/2011)


    When I need to join a result set from a remote server, such as over a linked server connection, neither a view nor a TVF is beneficial. I'll typically execute a parameterized pass-through query (using EXEC .. AT...), insert the result into a temporary table, and then join that result with the local select statement.

    As always, it depends. Sometimes the optimizer can successfully remote parts of a combined query to achieve better performance. There are a number of ways to optimize local/remote queries, and fetching the entire remote set to a local table can be a good fall-back option in some cases. I would almost always let the optimizer have a go first though.

    My most recent involvement with querying remote data involves joining huge result sets originating in a 3rd-party Oracle 11g database with reference data in a SQL Server 2008 datamart, and then inserting the result into the same SQL Server datamart for later use by the reporting team. The SQL Server DTC and Oracle don't share much, and early experiments with four-part named queries resulted in entire table scans being pulled across the wire. When I EXEC a pass-through query to Oracle, I'm only getting a small DTC overhead; the remaining 1 - 30 minutes of processing time occurs entirely on Oracle, plus however much time it takes to bring the final result set back across the wire. Joining the temporary result set with the reference tables is trivial at that point.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

Viewing 15 posts - 46 through 60 (of 96 total)
