Are the posted questions getting worse?

  • And for a change of pace, I am trying to help my daughter with her database class.  What makes it fun is trying to remember my Oracle days (all one year of them).

    And I am sure I am giving her advanced Oracle stuff.

  • Sean Lange - Thursday, November 2, 2017 10:37 AM

    Not really sure why, but the Wikipedia page for Sargable has been deleted. This was my go-to quick reference to direct others to. Anybody know any good resources that give a decent explanation, without going too deep into indexes, and are useful for semi-technical people?

    I did find the content using the Wayback Machine.

    In relational databases, a condition (or predicate) in a query is said to be sargable if the DBMS engine can take advantage of an index to speed up the execution of the query. The term is derived from a contraction of Search ARGument ABLE.

    A query failing to be sargable is known as a non-sargable query and typically has a negative effect on query time, so one of the steps in query optimization is to convert it to be sargable. The effect is similar to searching for a specific term in a book that has no index, beginning at page one each time, instead of jumping to a list of specific pages identified in an index.

    The typical situation that makes a SQL query non-sargable is including in the WHERE clause a function operating on a column value. The WHERE clause is not the only clause where sargability matters; it can also have an effect on ORDER BY, GROUP BY and HAVING clauses. The SELECT clause, on the other hand, can contain non-sargable expressions without adversely affecting performance.

    • Sargable operators: =, >, <, >=, <=, BETWEEN, LIKE, IS [NOT] NULL, EXISTS
    • Sargable operators that rarely improve performance: <>, IN, OR, NOT IN, NOT EXISTS, NOT LIKE

    IN and EXISTS absolutely do improve performance, often.

    Also, there's no such thing as a SARGable expression in the SELECT. SARG means search argument; the SELECT is not searching.
    Also, it's predicates that are or are not SARGable, not expressions.
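
    To make the difference concrete, here's a minimal sketch (the table and column names are invented for illustration):

    -- Non-sargable: wrapping the column in a function hides it from any index
    -- on OrderDate, so the engine has to evaluate YEAR() row by row (a scan).
    SELECT OrderID
    FROM dbo.Orders
    WHERE YEAR(OrderDate) = 2017;

    -- Sargable: the column is left bare and the range goes on the other side
    -- of the comparison, so an index on OrderDate can be used for a seek.
    SELECT OrderID
    FROM dbo.Orders
    WHERE OrderDate >= '20170101'
      AND OrderDate <  '20180101';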

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster - Thursday, November 2, 2017 3:04 PM

    Sean Lange - Thursday, November 2, 2017 10:37 AM

    Not really sure why, but the Wikipedia page for Sargable has been deleted. This was my go-to quick reference to direct others to. Anybody know any good resources that give a decent explanation, without going too deep into indexes, and are useful for semi-technical people?

    I did find the content using the Wayback Machine.

    In relational databases, a condition (or predicate) in a query is said to be sargable if the DBMS engine can take advantage of an index to speed up the execution of the query. The term is derived from a contraction of Search ARGument ABLE.

    A query failing to be sargable is known as a non-sargable query and typically has a negative effect on query time, so one of the steps in query optimization is to convert it to be sargable. The effect is similar to searching for a specific term in a book that has no index, beginning at page one each time, instead of jumping to a list of specific pages identified in an index.

    The typical situation that makes a SQL query non-sargable is including in the WHERE clause a function operating on a column value. The WHERE clause is not the only clause where sargability matters; it can also have an effect on ORDER BY, GROUP BY and HAVING clauses. The SELECT clause, on the other hand, can contain non-sargable expressions without adversely affecting performance.

    • Sargable operators: =, >, <, >=, <=, BETWEEN, LIKE, IS [NOT] NULL, EXISTS
    • Sargable operators that rarely improve performance: <>, IN, OR, NOT IN, NOT EXISTS, NOT LIKE

    IN and EXISTS absolutely do improve performance, often.

    Also, there's no such thing as a SARGable expression in the SELECT. SARG means search argument; the SELECT is not searching.
    Also, it's predicates that are or are not SARGable, not expressions.

    Agreed. Again, this is the wiki page, not something I penned. It is not even close to perfect, but it does give a high-level explanation.

    _______________________________________________________________

    Need help? Help us help you.

    Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.

    Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.

    Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
    Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
    Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
    Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/

  • Sean Lange - Thursday, November 2, 2017 10:52 AM

    Thom A - Thursday, November 2, 2017 10:47 AM

    Sean Lange - Thursday, November 2, 2017 10:37 AM

    Not really sure why, but the Wikipedia page for Sargable has been deleted. This was my go-to quick reference to direct others to. Anybody know any good resources that give a decent explanation, without going too deep into indexes, and are useful for semi-technical people?

    I did find the content using the Wayback Machine.

    Apparently "Notability of term called into question." https://en.wikipedia.org/w/index.php?title=Special%3ALog&type=delete&user=&page=sargable&year=&month=-1&tagfilter=&subtype=

    You can ask for it to be reviewed for undeletion here: https://en.wikipedia.org/wiki/Wikipedia:Requests_for_undeletion

    Notability of term. That is a strange way of wording it. Wonder who was questioning it. Most people in the industry know that term quite well. Sometimes the things on wiki are truly strange.

    If you think it should not have been deleted, you can raise an undelete request.  The trouble with the deletion mechanism is that if someone proposes deletion and during the following 7 days no-one objects, then anyone (except the person who originated the deletion proposal) can delete the page (and 7 days is rather a short period for articles that are essentially definitions of technical terms, like the sargable article).  The good thing about the process is that the page content is preserved when the page is deleted, so undeletion is possible.

    The "sargable" page was not a very good page - it stated that sargableness is relevant in WHERE, ORDER BY, GROUP BY and HAVING clauses and omits to mention that it is relevant also in FROM clauses (in the ON <search_condition>  subcomponents of  any <joined_table>  FROM clause's <table_source> components) that have ON subcomponents, so if you ask for it to be undeleted and the request succeeds maybe you can edit the page to fix that once it's been restored.  And that's in addition to what Gail's comment pointed out.

    Tom

  • Most days, it's just Murphy's Law. But, every now and then, the stars align and things work out just fine.

    Yesterday, the devs finally got around to testing a proc that I had re-written 6 months ago.
    Classic catch-all proc behind a search screen.
    The original proc takes 6-8 seconds to run, it's executed hundreds of times an hour, and scans millions of rows.  

    Of course, they came running, saying it wasn't working. When a date range was used, the new one only returned 2 rows; the old one returned 300 with the test data they were using.
    Lots of head scratching over the logic later, I finally started looking at the actual data being returned.

    The old one has a wonderful scalar function that strips the time off of a datetime field. And it returns an nvarchar.
    The parameter was also an nvarchar, so it was doing a string comparison, not a date comparison.

    So the old code was never right, and when I dug into the help desk complaints, there were 148 tickets created over the past year related to this.

    The devs then figured out that the UI didn't force the user to pick at least one of the 12 search parameters; running with none returns about 3.2 million rows, which is exactly what the end users do nearly every time it runs. So I added a test to return only the top 100 rows when no parameters are supplied.
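
    For what it's worth, the core of the fix looks roughly like this (a sketch; the real proc, parameters and column names are different):

    DECLARE @StartDate date = '20170101',
            @EndDate   date = '20170131';

    -- Old pattern (the bug): a scalar UDF strips the time and returns nvarchar,
    -- so the "dates" end up being compared as strings:
    --   WHERE dbo.fn_StripTime(o.CreatedDate) BETWEEN @StartDate AND @EndDate

    -- New pattern: compare the datetime column directly as a half-open range,
    -- with TOP as the guard for the "no parameters supplied" case.
    SELECT TOP (100) o.OrderID, o.CreatedDate
    FROM dbo.Orders AS o
    WHERE o.CreatedDate >= @StartDate
      AND o.CreatedDate <  DATEADD(DAY, 1, @EndDate);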

    Well, at least for a day, the DBA is a hero!

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Sean Lange - Thursday, November 2, 2017 12:47 PM

    UGH!!! I was forced to use READ UNCOMMITTED today on about 30-40 procedures. This is for custom work that extends a third-party ERP system. We have been seeing an increasing number of deadlocks when these procedures execute. Every one of these is nothing more than select statements. But of course the deadlock graphs are showing that inserts and deletes from the ERP are all using read uncommitted. And even the developer documentation suggests that you should use read uncommitted for everything. Who decided this was the right system again??? Certainly not me. Ed Wagner knows full well how horrific this system is, since he has had to work with this menace in the past.
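
    (For anyone following along, the change amounts to roughly this pattern; the procedure and table names here are invented:)

    CREATE OR ALTER PROCEDURE dbo.usp_GetErpOrders
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Dirty reads are accepted so these SELECTs stop taking the shared
        -- locks that were deadlocking with the ERP's inserts and deletes.
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        SELECT OrderID, OrderDate, Status
        FROM dbo.ErpOrders;
    END;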

    Yes, my friend, I do remember.  I still don't envy you having to work with it.  We didn't have that problem with the incarnation of it I had to work with, but it was some years ago.  I had hoped that with all the product renames, company purchases and changes, it would have fixed the problems it had, but it sounds to me like all it did was pick up new ones.

    I can answer your question about who decided it was a good system.  It was the person who was sold a piece of gold, only to find out after the purchase that it was a piece of ____ and that it contains no gold.

  • Michael L John - Friday, November 3, 2017 8:09 AM

    Most days, it's just Murphy's Law. But, every now and then, the stars align and things work out just fine.

    Yesterday, the devs finally got around to testing a proc that I had re-written 6 months ago.
    Classic catch-all proc behind a search screen.
    The original proc takes 6-8 seconds to run, it's executed hundreds of times an hour, and scans millions of rows.  

    Of course, they came running, saying it wasn't working. When a date range was used, the new one only returned 2 rows; the old one returned 300 with the test data they were using.
    Lots of head scratching over the logic later, I finally started looking at the actual data being returned.

    The old one has a wonderful scalar function that strips the time off of a datetime field. And it returns an nvarchar.
    The parameter was also an nvarchar, so it was doing a string comparison, not a date comparison.

    So the old code was never right, and when I dug into the help desk complaints, there were 148 tickets created over the past year related to this.

    The devs then figured out that the UI didn't force the user to pick at least one of the 12 search parameters; running with none returns about 3.2 million rows, which is exactly what the end users do nearly every time it runs. So I added a test to return only the top 100 rows when no parameters are supplied.

    Well, at least for a day, the DBA is a hero!

    Nice.  Please tell me you fixed the implicit cast against the datetime column and got rid of the scalar function.  That's got to be a huge boost right there.

  • Ed Wagner - Friday, November 3, 2017 10:20 AM

    Sean Lange - Thursday, November 2, 2017 12:47 PM

    UGH!!! I was forced to use READ UNCOMMITTED today on about 30-40 procedures. This is for custom work that extends a third-party ERP system. We have been seeing an increasing number of deadlocks when these procedures execute. Every one of these is nothing more than select statements. But of course the deadlock graphs are showing that inserts and deletes from the ERP are all using read uncommitted. And even the developer documentation suggests that you should use read uncommitted for everything. Who decided this was the right system again??? Certainly not me. Ed Wagner knows full well how horrific this system is, since he has had to work with this menace in the past.

    Yes, my friend, I do remember.  I still don't envy you having to work with it.  We didn't have that problem with the incarnation of it I had to work with, but it was some years ago.  I had hoped that with all the product renames, company purchases and changes, it would have fixed the problems it had, but it sounds to me like all it did was pick up new ones.

    I can answer your question about who decided it was a good system.  It was the person who was sold a piece of gold, only to find out after the purchase that it was a piece of ____ and that it contains no gold.

    LOL no kidding. They went with this one because it so closely resembles our legacy system. Keep in mind that we wanted something different because the one we had was not working well enough for us. Add to that, both our legacy system and the new system have a common ancestor in an IBM application that was released somewhere around 1965 (not exaggerating). As such, many aspects of moving to this system allowed our company to continue working the same way we were in the past (which of course they all said they wanted to change). And it was the "cheapest" of the ERPs we evaluated. It turns out that the actual cost is probably a lot more than some of the others because everything is a la carte instead of buying the whole system. So we get to pay lots for each module (most of them are required for any normal business), and we get to pay through the nose for consulting work. But hey, at least the new system mostly (dis)functions just like the old one, so we can continue doing what we do.

    For the most part the work I am doing is extending other applications to consume data from this behemoth and not working with it directly. I have had to write a ton of procs and views but nothing to do with anything else in there. Everything else is contained in external databases which are far more sane.

    _______________________________________________________________

    Need help? Help us help you.

    Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.

    Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.

    Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
    Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
    Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
    Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/

  • Ed Wagner - Friday, November 3, 2017 10:22 AM

    Nice.  Please tell me you fixed the implicit cast against the datetime column and got rid of the scalar function.  That's got to be a huge boost right there.

    The implicit cast of a string against a datetime column shouldn't be a big deal, but removing the scalar function is. Hopefully, we'll be able to use scalar UDFs in the future without problems, according to today's email from Brent.

    Joe’s Big Deal: the vNext query processor gets even better. Joe, Kevin Farlee, and friends are working on the following improvements:

    • Table variable deferred compilation – so instead of getting crappy row estimates, they'll get updated row estimates, much like 2017's interleaved execution of MSTVFs.

    • Batch mode for row store – in 2017, to get batch mode execution, you have to play tricks like joining an empty columnstore table to your query (sketched below). vNext will consider batch mode even if there are no columnstore indexes involved.

    • Scalar UDF inlining – so they'll perform like inline table-valued functions, and won't cause the calling queries to go single-threaded.
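
    The empty-columnstore trick mentioned in the batch mode bullet looks roughly like this (a sketch with made-up object names, not something from Brent's mail):

    -- A dummy table whose only job is to carry a columnstore index.
    CREATE TABLE dbo.BatchModeTrick (Dummy int NULL);
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_BatchModeTrick
        ON dbo.BatchModeTrick (Dummy);

    -- Left-joining it on a condition that's never true changes nothing in the
    -- results, but the columnstore index lets the optimizer consider batch
    -- mode operators for the aggregation on SQL Server 2016/2017.
    SELECT s.CustomerID, SUM(s.Amount) AS TotalAmount
    FROM dbo.Sales AS s
    LEFT JOIN dbo.BatchModeTrick AS bmt ON 1 = 0
    GROUP BY s.CustomerID;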

    Luis C.
    General Disclaimer:
    Are you seriously taking the advice and code from someone from the internet without testing it? Do you at least understand it? Or can it easily kill your server?

    How to post data/code on a forum to get the best help: Option 1 / Option 2
  • Sean Lange - Friday, November 3, 2017 10:29 AM

    Ed Wagner - Friday, November 3, 2017 10:20 AM

    Sean Lange - Thursday, November 2, 2017 12:47 PM

    UGH!!! I was forced to use READ UNCOMMITTED today on about 30-40 procedures. This is for custom work that extends a third-party ERP system. We have been seeing an increasing number of deadlocks when these procedures execute. Every one of these is nothing more than select statements. But of course the deadlock graphs are showing that inserts and deletes from the ERP are all using read uncommitted. And even the developer documentation suggests that you should use read uncommitted for everything. Who decided this was the right system again??? Certainly not me. Ed Wagner knows full well how horrific this system is, since he has had to work with this menace in the past.

    Yes, my friend, I do remember.  I still don't envy you having to work with it.  We didn't have that problem with the incarnation of it I had to work with, but it was some years ago.  I had hoped that with all the product renames, company purchases and changes, it would have fixed the problems it had, but it sounds to me like all it did was pick up new ones.

    I can answer your question about who decided it was a good system.  It was the person who was sold a piece of gold, only to find out after the purchase that it was a piece of ____ and that it contains no gold.

    LOL no kidding. They went with this one because it so closely resembles our legacy system. Keep in mind that we wanted something different because the one we had was not working well enough for us. Add to that, both our legacy system and the new system have a common ancestor in an IBM application that was released somewhere around 1965 (not exaggerating). As such, many aspects of moving to this system allowed our company to continue working the same way we were in the past (which of course they all said they wanted to change). And it was the "cheapest" of the ERPs we evaluated. It turns out that the actual cost is probably a lot more than some of the others because everything is a la carte instead of buying the whole system. So we get to pay lots for each module (most of them are required for any normal business), and we get to pay through the nose for consulting work. But hey, at least the new system mostly (dis)functions just like the old one, so we can continue doing what we do.

    For the most part the work I am doing is extending other applications to consume data from this behemoth and not working with it directly. I have had to write a ton of procs and views but nothing to do with anything else in there. Everything else is contained in external databases which are far more sane.

    Heh - It sounds so familiar.  Here's one for you.  We needed to be able to know the state of the manufacturing schedule.  The BOM and MRP runs would (usually) run overnight.  From that point, I was to write a set of Oracle packages that would query the tables to produce a highly attractive report via a web browser.  No native coding was required and it wouldn't involve the application server, so it would run with the speed of Oracle.  It would cover everyone from the top brass to the plant managers.

    The problem was that some queries would take half an hour to run, and that was on the low side.  The tables themselves were impervious to adjustment (indexing, etc.) because every time a developer did anything in the "development environment", it would drop and rebuild the table the way it saw it.  The indexes defined in the app were used to rebuild the table, but they weren't what you or I would call indexes.  The bottom line was that it wasn't performant enough to be used.

    Enter the warehouse.  Nothing had to be live because the BOM and MRP jobs took so long and had to be run overnight.  I was part of a team that built a procedure to take the required data from the production tables after the BOM and MRP runs were finished and build some snapshot tables for reporting.  It took a while, but was tolerable because it ran overnight.  As the system was used and the data grew, the weaknesses in the warehouse population were exposed and the build took longer and longer.  Once it took longer than 24 hours, it had passed the point of no return, because it needed to start again before the previous run had finished.

    Enter the rewrite.  I started from scratch on the warehouse build procedure.  I tweaked everything, and I do mean everything.  I created some of the tables differently and changed some data types.  This was the first time I tested dropping indexes, truncating, populating and then creating new indexes, versus truncate-and-repopulate, versus inserting only what was new.  I normalized some stuff that was denormalized in the originals.  In short, I redesigned the warehouse significantly from its original structure.  The performance difference was incredible.  I took a process that ran longer than 30 hours at its worst down to about 20 minutes.  I learned a tremendous amount about performance tuning through the whole exercise.  It was difficult to do at the time, especially because I was so green, but I can look back on it and say it made me better.
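
    In T-SQL-ish shorthand (the original was Oracle, and these table and index names are invented), the load patterns I was comparing look roughly like this:

    -- Pattern A: drop indexes, truncate, reload in bulk, then rebuild indexes.
    DROP INDEX IX_WhsSnapshot_PartNo ON dbo.WhsSnapshot;
    TRUNCATE TABLE dbo.WhsSnapshot;
    INSERT INTO dbo.WhsSnapshot (PartNo, PlantCode, QtyOnHand)
    SELECT PartNo, PlantCode, QtyOnHand
    FROM dbo.ProductionInventory;
    CREATE INDEX IX_WhsSnapshot_PartNo ON dbo.WhsSnapshot (PartNo);

    -- Pattern B: keep the table and its indexes, and insert only what's new.
    INSERT INTO dbo.WhsSnapshot (PartNo, PlantCode, QtyOnHand)
    SELECT p.PartNo, p.PlantCode, p.QtyOnHand
    FROM dbo.ProductionInventory AS p
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.WhsSnapshot AS w
                      WHERE w.PartNo = p.PartNo
                        AND w.PlantCode = p.PlantCode);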

  • Luis Cazares - Friday, November 3, 2017 10:40 AM

    Joe’s Big Deal: the vNext query processor gets even better. Joe, Kevin Farlee, and friends are working on the following improvements:

    • Table variable deferred compilation – so instead of getting crappy row estimates, they'll get updated row estimates, much like 2017's interleaved execution of MSTVFs.

    • Batch mode for row store – in 2017, to get batch mode execution, you have to play tricks like joining an empty columnstore table to your query. vNext will consider batch mode even if there are no columnstore indexes involved.

    • Scalar UDF inlining – so they'll perform like inline table-valued functions, and won't cause the calling queries to go single-threaded.

    The scalar UDF inlining looks very promising.  Then again, a lot of SQL 2017 looks great.  I'll believe it when I test it. 😉

  • Ed Wagner - Friday, November 3, 2017 10:22 AM

    Michael L John - Friday, November 3, 2017 8:09 AM

    Most days, it's just Murphy's Law. But, every now and then, the stars align and things work out just fine.

    Yesterday, the devs finally got around to testing a proc that I had re-written 6 months ago.
    Classic catch-all proc behind a search screen.
    The original proc takes 6-8 seconds to run, it's executed hundreds of times an hour, and scans millions of rows.  

    Of course, they came running, saying it wasn't working. When a date range was used, the new one only returned 2 rows; the old one returned 300 with the test data they were using.
    Lots of head scratching over the logic later, I finally started looking at the actual data being returned.

    The old one has a wonderful scalar function that strips the time off of a datetime field. And it returns an nvarchar.
    The parameter was also an nvarchar, so it was doing a string comparison, not a date comparison.

    So the old code was never right, and when I dug into the help desk complaints, there were 148 tickets created over the past year related to this.

    The devs then figured out that the UI didn't force the user to pick at least one of the 12 search parameters; running with none returns about 3.2 million rows, which is exactly what the end users do nearly every time it runs. So I added a test to return only the top 100 rows when no parameters are supplied.

    Well, at least for a day, the DBA is a hero!

    Nice.  Please tell me you fixed the implicit cast against the datetime column and got rid of the scalar function.  That's got to be a huge boost right there.

    No, I left them in there.  The devs were complaining that the query was running too fast.  😀

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Ed Wagner - Friday, November 3, 2017 10:54 AM

    Sean Lange - Friday, November 3, 2017 10:29 AM

    Ed Wagner - Friday, November 3, 2017 10:20 AM

    Sean Lange - Thursday, November 2, 2017 12:47 PM

    UGH!!! I was forced to use READ UNCOMMITTED today on about 30-40 procedures. This is for custom work that extends a third-party ERP system. We have been seeing an increasing number of deadlocks when these procedures execute. Every one of these is nothing more than select statements. But of course the deadlock graphs are showing that inserts and deletes from the ERP are all using read uncommitted. And even the developer documentation suggests that you should use read uncommitted for everything. Who decided this was the right system again??? Certainly not me. Ed Wagner knows full well how horrific this system is, since he has had to work with this menace in the past.

    Yes, my friend, I do remember.  I still don't envy you having to work with it.  We didn't have that problem with the incarnation of it I had to work with, but it was some years ago.  I had hoped that with all the product renames, company purchases and changes, it would have fixed the problems it had, but it sounds to me like all it did was pick up new ones.

    I can answer your question about who decided it was a good system.  It was the person who was sold a piece of gold, only to find out after the purchase that it was a piece of ____ and that it contains no gold.

    LOL no kidding. They went with this one because it so closely resembles our legacy system. Keep in mind that we wanted something different because the one we had was not working well enough for us. Add to that, both our legacy system and the new system have a common ancestor in an IBM application that was released somewhere around 1965 (not exaggerating). As such, many aspects of moving to this system allowed our company to continue working the same way we were in the past (which of course they all said they wanted to change). And it was the "cheapest" of the ERPs we evaluated. It turns out that the actual cost is probably a lot more than some of the others because everything is a la carte instead of buying the whole system. So we get to pay lots for each module (most of them are required for any normal business), and we get to pay through the nose for consulting work. But hey, at least the new system mostly (dis)functions just like the old one, so we can continue doing what we do.

    For the most part the work I am doing is extending other applications to consume data from this behemoth and not working with it directly. I have had to write a ton of procs and views but nothing to do with anything else in there. Everything else is contained in external databases which are far more sane.

    Heh - It sounds so familiar.  Here's one for you.  We needed to be able to know the state of the manufacturing schedule.  The BOM and MRP runs would (usually) run overnight.  From that point, I was to write a set of Oracle packages that would query the tables to produce a highly attractive report via a web browser.  No native coding was required and it wouldn't involve the application server, so it would run with the speed of Oracle.  It would cover everyone from the top brass to the plant managers.

    The problem was that some queries would take half an hour to run, and that was on the low side.  The tables themselves were impervious to adjustment (indexing, etc.) because every time a developer did anything in the "development environment", it would drop and rebuild the table the way it saw it.  The indexes defined in the app were used to rebuild the table, but they weren't what you or I would call indexes.  The bottom line was that it wasn't performant enough to be used.

    Enter the warehouse.  Nothing had to be live because the BOM and MRP jobs took so long and had to be run overnight.  I was part of a team that built a procedure to take the required data from the production tables after the BOM and MRP runs were finished and build some snapshot tables for reporting.  It took a while, but was tolerable because it ran overnight.  As the system was used and the data grew, the weaknesses in the warehouse population were exposed and the build took longer and longer.  Once it took longer than 24 hours, it had passed the point of no return, because it needed to start again before the previous run had finished.

    Enter the rewrite.  I started from scratch on the warehouse build procedure.  I tweaked everything, and I do mean everything.  I created some of the tables differently and changed some data types.  This was the first time I tested dropping indexes, truncating, populating and then creating new indexes, versus truncate-and-repopulate, versus inserting only what was new.  I normalized some stuff that was denormalized in the originals.  In short, I redesigned the warehouse significantly from its original structure.  The performance difference was incredible.  I took a process that ran longer than 30 hours at its worst down to about 20 minutes.  I learned a tremendous amount about performance tuning through the whole exercise.  It was difficult to do at the time, especially because I was so green, but I can look back on it and say it made me better.

    Awesome. Sounds like both you and the company gained a lot from that experience. Fairly early in my tenure here I did something similar, although not as big a savings. We had a nightly process that moved a mountain of data, and the same kind of thing started happening: it wasn't finished yet when other processes needed (and assumed) the data was finished and complete. I took a process that ran several hours down to just a couple of minutes. The major difference on that one was that everything was done via massive nested cursors. I modified the thing into set-based logic and it just screamed. It took many people quite a few weeks to trust the data. They were skeptical because it didn't take very long. 🙂
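
    As a toy illustration of the kind of rewrite (nothing like the real process, and the table names are invented):

    -- Cursor-style: one row at a time, one statement per row.
    DECLARE @PartNo varchar(20), @Qty int;
    DECLARE part_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT PartNo, Qty FROM dbo.StagingInventory;
    OPEN part_cur;
    FETCH NEXT FROM part_cur INTO @PartNo, @Qty;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE dbo.Inventory SET QtyOnHand = @Qty WHERE PartNo = @PartNo;
        FETCH NEXT FROM part_cur INTO @PartNo, @Qty;
    END;
    CLOSE part_cur;
    DEALLOCATE part_cur;

    -- Set-based: a single UPDATE over the join does the same work in one pass.
    UPDATE i
    SET i.QtyOnHand = s.Qty
    FROM dbo.Inventory AS i
    JOIN dbo.StagingInventory AS s
        ON s.PartNo = i.PartNo;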

    _______________________________________________________________

    Need help? Help us help you.

    Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.

    Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.

    Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
    Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
    Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
    Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/

  • Ed Wagner - Friday, November 3, 2017 10:57 AM

    Luis Cazares - Friday, November 3, 2017 10:40 AM

    Joe’s Big Deal: the vNext query processor gets even better. Joe, Kevin Farlee, and friends are working on the following improvements:

    • Table variable deferred compilation – so instead of getting crappy row estimates, they'll get updated row estimates, much like 2017's interleaved execution of MSTVFs.

    • Batch mode for row store – in 2017, to get batch mode execution, you have to play tricks like joining an empty columnstore table to your query. vNext will consider batch mode even if there are no columnstore indexes involved.

    • Scalar UDF inlining – so they'll perform like inline table-valued functions, and won't cause the calling queries to go single-threaded.

    The scalar UDF inlining looks very promising.  Then again, a lot of SQL 2017 looks great.  I'll believe it when I test it. 😉

    Inline scalar UDFs aren't in SQL 2017. That's a future version.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thom A - Monday, October 30, 2017 11:15 AM

    Steve Jones - SSC Editor - Monday, October 30, 2017 9:10 AM

    Do you need DHCP? I tend to aim for static IPs for my VMs to be sure I know where I'm connecting.

    Yes, it'll be a static IP, but it's not in the same IP range. It's using a bridge with a different subnet, which doesn't talk to the outside network and vice versa (apart from the internet).

    Ended up starting from scratch again. I'd repurposed the PC from a Media Centre to a home development PC, so I think it was just lingering set-up stuff from there, and the bridge wouldn't create. Made an image of my old SSD (as it's only 60GB) and then installed Ubuntu 17.10 instead (it was 16.04.3 LTS). Containers worked straight out of the box on DHCP, and I then configured them as static in the router based on their MAC addresses.

    It was good, however, to see that running 2 instances on a single Linux machine (Developer and Express) wasn't actually that hard in the end.

    On a different note, I miss Unity... 😛

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk
