May 12, 2021 at 7:30 am
SQL Server 2008 is out of support now (which was the reason we upgraded all of our SQL Servers), but your question is valid for SQL Server 2016.
For a few servers which have a specific function, say a reporting server running SSRS, upgrading to a newer version is worthwhile. A server for Tabular Models running a newer version of SQL Server also makes sense (assuming there is a need for it, of course).
Once I've tried out temporal tables on a production system, I'll give you an answer to that. They certainly look good on my test system, but achieving a realistic load on it is hard.
May 12, 2021 at 11:14 am
Here is a little challenge.
Look back at 10 or 15 years of new features in or around SQL Server that you've learned or heard about.
What share of those features really made a difference? Something which would really justify upgrading a well-designed system based on, say, SQL Server 2008?
Query Store
Columnstore
Extended Events
New Cardinality Estimation Engine
Adaptive Joins
Availability Groups
I can keep going. There's a lot of really serious improvements over what was there in 2008. Useful. Supported. Good stuff that will have a positive impact after an upgrade.
Have there been dogs? Oh yes. In-Memory tables. But you get the bad with the good in just about anything on earth.
And I didn't even bring up Azure, AWS or any other cloud technology, which are also net goods, depending, mixed in with some dogs.
Keeping up with all the change is absolutely a bear, yes. However, I stand behind my support for change and improvement and movement into the new things... that work. And yes, there's labor involved in separating the wheat from the chaff. However, isn't that our job?
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 12, 2021 at 6:54 pm
The new CE thing killed us. We had to turn it off.
With that in mind, Adaptive Joins has me scared to death.
I won't get into the other stuff on that list except to say that some of it comes at too high a price for use. YMMV.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 14, 2021 at 10:55 am
Age is just a number and learning keeps the mind active. To demonstrate that: my number has reached the mid-sixties, and I'm getting into learning Azure to create a data store/data warehouse, and also DevOps. I took this job on payroll rather than day-rate contracting for a few reasons, including the opportunity to learn Azure; you can't contract on your existing skillset forever, since the roles require more and more up-to-date knowledge.
Features of SQL Server that have made a difference over the years have to be topped by the CTE, for which I openly admit an addiction; then there are temporal tables for slowly changing dimensions and the MERGE statement. And probably many more which we use without thinking, "when did that arrive?"
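For anyone who hasn't tried them yet, a system-versioned temporal table for a slowly changing dimension is only a couple of extra columns and a clause. A minimal sketch, with made-up table and column names:

-- Hypothetical dimension table; names are illustrative only (SQL Server 2016+).
CREATE TABLE dbo.DimCustomer
(
    CustomerID   int            NOT NULL PRIMARY KEY CLUSTERED,
    CustomerName nvarchar(100)  NOT NULL,
    Region       nvarchar(50)   NOT NULL,
    ValidFrom    datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo      datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.DimCustomer_History));

-- Point-in-time query: what did this customer row look like at the start of the year?
SELECT CustomerID, CustomerName, Region
FROM dbo.DimCustomer
    FOR SYSTEM_TIME AS OF '2021-01-01T00:00:00'
WHERE CustomerID = 42;

The engine keeps the history table up to date on every UPDATE and DELETE, which is what makes it attractive for slowly changing dimensions without hand-rolled versioning code.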
Just logging onto a server over the internet and working remotely from home was a dream when I started in IT. Now it's the norm and a change I definitely appreciate. I'm happy to keep learning and changing.
May 17, 2021 at 11:55 am
Age is just a number ...
You'll need to explain that in detail to my knees. Ha!
However, yeah, total agreement.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 20, 2021 at 2:09 am
SQL Server 2008 is out of support now (which was the reason we upgraded all of our SQL Servers), but your question is valid for SQL Server 2016.
For a few servers which have a specific function, say a reporting server running SSRS, upgrading to a newer version is worthwhile. A server for Tabular Models running a newer version of SQL Server also makes sense (assuming there is a need for it, of course).
Once I've tried out temporal tables on a production system, I'll give you an answer to that. They certainly look good on my test system, but achieving a realistic load on it is hard.
In 2 decades of working with SQL Server I received 0 (zero) support from Microsoft for any of their products. So, it makes very little difference to me whether a product is in or out of support.
Continuous updates just indicate that the new products are not any more stable, or any less vulnerable to attacks, than the old ones. To me it just shows that attempts to fix them are pretty hopeless.
_____________
Code for TallyGenerator
May 20, 2021 at 2:13 am
Sergiy wrote: Here is a little challenge.
Look back at 10 or 15 years of new features in or around SQL Server that you've learned or heard about.
What share of those features really made a difference? Something which would really justify upgrading a well-designed system based on, say, SQL Server 2008?
Query Store
Columnstore
Extended Events
New Cardinality Estimation Engine
Adaptive Joins
Availability Groups
I can keep going. There's a lot of really serious improvements over what was there in 2008. Useful. Supported. Good stuff that will have a positive impact after an upgrade.
Have there been dogs? Oh yes. In-Memory tables. But you get the bad with the good in just about anything on earth.
And I didn't even bring up Azure, AWS or any other cloud technology, which are also net goods, depending, mixed in with some dogs.
Keeping up with all the change is absolutely a bear, yes. However, I stand behind my support for change and improvement and movement into the new things... that work. And yes, there's labor involved in separating the wheat from the chaff. However, isn't that our job?
Well, the list is not that long, apparently. No reason to complain about an overwhelming torrent of new things to learn.
Let's see what's in this list and how worthwhile those things are.
_____________
Code for TallyGenerator
May 20, 2021 at 2:38 am
Query Store
The feature is supposed to let a DBA choose an earlier-used plan when the new plan chosen by the optimizer is, well, not so optimal.
But why did the optimizer even need to come up with a new plan? What made it abandon the old one which was used before?
Something has changed. Most likely statistics, indexing, or some configuration settings, which made the engine believe that the old plan is no good anymore.
The new one it came up with turned out to be even worse - there might be several reasons for that: time pressure, lack of resources, a cr..py cardinality estimator, etc. But does that make the old plan the perfect choice? Well, maybe yes, maybe not - we don't know, because we need to go and manually do the engine's work: check the available indexes, analyse the stats, evaluate the available resources and costs, and then make the decision.
So, by introducing the Query Store the vendor is openly admitting that they've done a bad job on the optimizer and passed its work on to the admins. And the best those admins can do is manually pick one of several obsolete plans, none of which is likely the best choice for the current query in the current environment.
Would it encourage me to upgrade to a new version that has such a feature?
No. It would rather serve as a repellent.
P.S. I actually did manage to force my own preferred plans back in the days of SQL Server 2000, by using query hints.
It was better than the Query Store, because I was able to force any plan I chose, not only the ones the optimizer had ever generated.
But some folks on this very forum explained to me the dangers of doing that in an ever-changing data environment. And then I read some theory about the reasons for SQL being a declarative language, with an engine that is an interpreter rather than a compiler.
Because of that I've learned not to force my own plans, except perhaps for some really rare exceptional cases. Which makes the value of Query Store (for me) rather negative.
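For the record, the Query Store way of forcing a plan boils down to a couple of catalog views and one procedure; the query_id/plan_id values below are placeholders. The old-school alternative mentioned above is a USE PLAN or other OPTION hint on the query itself.

-- Find the query and its captured plans in Query Store (SQL Server 2016+).
SELECT q.query_id, p.plan_id, q.query_hash, p.last_execution_time
FROM sys.query_store_query AS q
JOIN sys.query_store_plan  AS p ON p.query_id = q.query_id
WHERE q.query_id = 1234;          -- placeholder query_id

-- Force one of the plans the optimizer has previously generated.
EXEC sys.sp_query_store_force_plan @query_id = 1234, @plan_id = 5678;

-- And to undo it later:
EXEC sys.sp_query_store_unforce_plan @query_id = 1234, @plan_id = 5678;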
_____________
Code for TallyGenerator
May 20, 2021 at 6:51 am
Columnstore Index.
I don't want to discuss the troubles with the initial versions of columnstore, which put so many developers off using it. That's all water under the bridge. Let's talk about columnstore as a concept, implemented ideally.
First, columnstore has a very limited scope. It's only effective for aggregation queries on fact tables with enormous amounts of repeating values and no filtering applied.
Those are exactly the kind of queries performed by so-called "big data" tools. It goes like this: you empty the table, repopulate it with the "current" data set and run analytical queries on that data set, which contains all the data you need and only the data you need to analyse.
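To make the scenario concrete, here is a minimal sketch of that pattern with invented table and column names: a fact table with a clustered columnstore index and the kind of scan-everything aggregation it is built for.

-- Hypothetical fact table; names are illustrative only.
CREATE TABLE dbo.FactSales
(
    SaleDate  date          NOT NULL,
    ProductID int           NOT NULL,
    RegionID  int           NOT NULL,
    Quantity  int           NOT NULL,
    Amount    decimal(18,2) NOT NULL
);

CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- The kind of query columnstore is designed for: wide aggregation over the whole table.
SELECT ProductID, SUM(Amount) AS TotalAmount, SUM(Quantity) AS TotalQty
FROM dbo.FactSales
GROUP BY ProductID;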
Here comes the problem.
Columnstore is extremely expensive in terms of updating. MS suggests using it only when no more than 10% of the data set is changed (added, deleted or updated). That says to me that 90% of the work is done while the columnstore is populated, and that it consumes a lot more resources than row-store processing. That hidden cost is usually left out of the picture by those who promote columnstore; if we're trying to be fair, we need to compare the overall cost of the two indexing methods, including the daily load of data changes.
I usually avoid having "current" data sets by using clever indexing strategies, which allow me to filter out the "current" data with an effective WHERE clause. It saves a whole lot of IO and makes the data available for analysis as soon as it arrives.
But columnstore won't work with filtering. Well, it would work, but provide no advantage compared to a row-store. And it would still have all the overhead of data population.
Even in the situations where columnstore shows its best performance, it still is not really needed.
Another problem: columnstore makes certain queries effective, and all other ones almost impossible. If you need to add some lookup data, or group by some additional parameter, you'd better rebuild the columnstore index or create a new one. Which, again, will take a lot of time and resources.
In pre-columnstore (early columnstore) times I developed a technique, which I used on tables where only a small part of the source data changed between consecutive report invocations. I cached aggregations for chunks of the data (usually per week, or per day) in a staging table, and had a trigger on the base table save the boundaries of the periods affected by data changes. Reports run from the pre-aggregated data in that cache table: when it's time to run a report, the procedure updates (creates, deletes) the affected parts of the staging table and then runs the report (roughly sketched below).
If less than 10% of the data changed, the process takes almost 10 times less time and resources compared to a normal query on a row-store - just about the best promise given by columnstore.
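A rough sketch of that pre-aggregation cache, reusing the hypothetical dbo.FactSales from the earlier sketch; all names are invented, and the trigger that records which days were touched is omitted for brevity.

-- Staging table of daily pre-aggregated results.
CREATE TABLE dbo.SalesDailyCache
(
    SaleDate    date          NOT NULL PRIMARY KEY,
    TotalAmount decimal(18,2) NOT NULL,
    TotalQty    int           NOT NULL
);

-- Populated by a trigger on dbo.FactSales: days touched since the last refresh.
CREATE TABLE dbo.SalesDirtyDays (SaleDate date NOT NULL PRIMARY KEY);

-- At report time: re-aggregate only the affected days, then read from the cache.
DELETE c
FROM dbo.SalesDailyCache AS c
JOIN dbo.SalesDirtyDays  AS d ON d.SaleDate = c.SaleDate;

INSERT dbo.SalesDailyCache (SaleDate, TotalAmount, TotalQty)
SELECT f.SaleDate, SUM(f.Amount), SUM(f.Quantity)
FROM dbo.FactSales AS f
JOIN dbo.SalesDirtyDays AS d ON d.SaleDate = f.SaleDate
GROUP BY f.SaleDate;

TRUNCATE TABLE dbo.SalesDirtyDays;

SELECT SaleDate, TotalAmount, TotalQty
FROM dbo.SalesDailyCache
ORDER BY SaleDate;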
So, considering I did that on an SQL 2008 system - what would be my incentive to upgrade to one of the next versions?
What would be my return on the investment of studying columnstore?
_____________
Code for TallyGenerator
May 20, 2021 at 7:29 am
Extended events.
Don't think there is much to discuss here.
They replaced the convenient, user-friendly Profiler with a scripting tool for the sake of adding several new types of events to trace - it served the vendor's development team more than it served the users.
Again, as with Query Store, they passed their job (building a user-friendly interface, in this case) on to the unfortunate users.
And still, when it comes to the basic task of quickly catching a rogue query (which makes up >90% of all use cases), Extended Events is no match for Profiler in terms of convenience.
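For comparison, catching slow statements with Extended Events looks roughly like this; the session name and the 5-second threshold are arbitrary choices for the sketch.

-- Minimal XE session to catch slow ad hoc batches and RPC calls.
CREATE EVENT SESSION RogueQueries ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.sql_text, sqlserver.client_app_name, sqlserver.username)
     WHERE duration > 5000000),            -- duration is in microseconds
ADD EVENT sqlserver.rpc_completed
    (ACTION (sqlserver.sql_text, sqlserver.client_app_name, sqlserver.username)
     WHERE duration > 5000000)
ADD TARGET package0.event_file (SET filename = N'RogueQueries.xel');

ALTER EVENT SESSION RogueQueries ON SERVER STATE = START;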
_____________
Code for TallyGenerator
May 20, 2021 at 7:45 am
New Cardinality Estimation Engine
Another unfortunate product of MS's attempt to jump on the Big Data bandwagon.
It's good for massive aggregations, but really sucks when it comes to fine selection queries with highly selective filtering conditions.
The best testament to the advances provided by the engine is the ease with which it can be disabled. Initially it was a trace flag, and now, when it has become probably the most popular flag ever, they've added a database-scoped configuration option to turn it off.
That's pretty much all you need to learn about this feature.
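For the record, the knobs referred to above look roughly like this; the table in the query is a placeholder.

-- Per-query: fall back to the legacy estimator with a hint (SQL Server 2016 SP1+).
SELECT COUNT(*)
FROM dbo.SomeTable
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

-- Older alternative: trace flag 9481 on the query.
-- OPTION (QUERYTRACEON 9481);

-- Per-database (SQL Server 2016+):
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;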
_____________
Code for TallyGenerator
May 20, 2021 at 8:03 am
Adaptive Joins
Attempting to change execution plans on the fly after execution has started seems quite dodgy.
It still does not eliminate the effects of wrong estimations, but it comes at a significant cost of duplicating source data sets to make the data available for the different execution streams.
Almost every article describing this new feature ends with a paragraph about the ways to disable it - isn't that the best testament to the confidence in its usefulness?
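And, true to form, those disable switches look roughly like this; the tables in the query are placeholders.

-- Per-query (SQL Server 2017+):
SELECT COUNT(*)
FROM dbo.SomeFactTable AS f
JOIN dbo.SomeDimTable  AS d ON d.ID = f.DimID
OPTION (USE HINT ('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));

-- Per-database:
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ADAPTIVE_JOINS = OFF;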
_____________
Code for TallyGenerator
May 20, 2021 at 8:37 am
Availability Groups
Nice feature in theory.
In reality it's less reliable than the databases it's supposed to protect.
Maybe in the future they will bring it up to standard, but so far it reminds me of automatic replication. A nice and easy thing to set up and maintain, but if you have serious contractual obligations about the integrity and availability of your data, you're better off creating your own workaround rather than relying on the MS-shipped feature.
_____________
Code for TallyGenerator
May 20, 2021 at 12:18 pm
Heh... I absolutely love it, Sergiy. +1 billion!
--Jeff Moden
Change is inevitable... Change for the better is not.