September 19, 2014 at 1:27 pm
My company would certainly benefit from upgrading to 2014 at this point; not so much because of new features (though in-memory could potentially be useful for one table I know of off the top of my head), but because it would give us a business excuse to upgrade our Windows Server version to Enterprise. We're using 32GB of our 40GB of RAM on the production server, so we're basically wasting 8GB because of the version limitation.
However, we can't upgrade our SQL Server, and we probably never will, because the vendors that made the software using the server specify that we need to stay on 2008 R2 SP1. Yep, specific down to the service pack! :-P. No idea why. They don't have any bits of code that wouldn't work on future versions, besides a bunch of NTEXT fields that could easily be converted, but it's their mandate. Along with NOLOCK everywhere. Definitely not the most well-founded suggestion, but so it goes.
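For reference, converting one of those NTEXT fields is usually a two-step job; table and column names below are made up for illustration:

```sql
-- NTEXT is deprecated; NVARCHAR(MAX) is the drop-in replacement
-- (hypothetical table/column names).
ALTER TABLE dbo.SomeVendorTable
    ALTER COLUMN Notes NVARCHAR(MAX) NULL;

-- Optional: rewriting the column moves existing values out of the
-- old text-pointer storage so they can be stored in-row where they fit.
UPDATE dbo.SomeVendorTable
    SET Notes = Notes;
```

Of course, with a vendor mandate in place, even a change that small is off the table.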
- 😀
September 19, 2014 at 1:35 pm
Andrew Kernodle (9/19/2014)
My company would certainly benefit from upgrading to 2014 at this point; not so much because of new features (though in-memory could potentially be useful for one table I know of off the top of my head), but because it would give us a business excuse to upgrade our Windows Server version to Enterprise. We're using 32GB of our 40GB of RAM on the production server, so we're basically wasting 8GB because of the version limitation. However, we can't upgrade our SQL Server, and we probably never will, because the vendors that made the software using the server specify that we need to stay on 2008 R2 SP1. Yep, specific down to the service pack! :-P. No idea why. They don't have any bits of code that wouldn't work on future versions, besides a bunch of NTEXT fields that could easily be converted, but it's their mandate. Along with NOLOCK everywhere. Definitely not the most well-founded suggestion, but so it goes.
Sadly this is not an uncommon situation; it brings up quite a few questions on regulatory compliance now that standard support for 2K8R2 has been discontinued.
😎
September 19, 2014 at 4:00 pm
From the article:
We often install the latest version for new instances, often because that's the only version we can buy at the time. However we don't seem to upgrade older instances quickly. This week I'm wondering why.
Just one person's humble opinion and nearly personal thoughts on the subject...
There's just one reason: "regression testing", and it's actually a whole lot more expensive than a lot of folks think just to get that type of testing done.
Microsoft has seen fit to not only deprecate and discontinue a lot of things but they also do things that don't make sense in a lot of cases. Because of that, you have to do some pretty massive testing to make sure things don't break. That, in turn, means that you have to bring a new server up because there's normally not enough room on one server to have two copies of the larger databases and you'd be a fool to try to do an "in-place" upgrade because "going back" isn't that easy and I've also heard of too many in-place-upgrade-horror stories. There's also not much time for downtime and that's another reason for not doing an in-place upgrade.
For example, remember the great migration from 2000 to 2005? First (IMHO), the RTM of 2005 wasn't worth a hoot and wasn't even worth looking at until after SP1. Then there are things like what they did with ORDER BY... that broke a lot of code. Yeah, I know what they said and I didn't get caught by it but I (and, apparently, a lot of others) think that an ORDER BY should be a set delimiter and it should bloody well work.
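The ORDER BY change is the classic example: a pattern that appeared to work in 2000 silently stopped ordering results in 2005+. A minimal sketch (view and table names are made up):

```sql
-- Pre-2005 habit: relying on TOP 100 PERCENT ... ORDER BY inside a view.
-- The 2005+ optimizer is free to ignore it, so rows can come back in any order.
CREATE VIEW dbo.vOrders AS
SELECT TOP 100 PERCENT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate;   -- not a guarantee of result order
GO

-- The reliable pattern: order at the point of consumption.
SELECT OrderID, OrderDate
FROM dbo.vOrders
ORDER BY OrderDate;
```

Code that leaned on the old behavior "worked" right up until the upgrade, which is exactly the kind of thing regression testing has to catch.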
And remember all of the complaints that came about because they decided to change what the optimizer did in 2005? Code that used to work just fine and with good performance suddenly didn't work so well and required anything from minor tuning to a major rewrite. Similar problems existed when 2008 and 2008 R2 came out.
How about all the fun that DTS users had? Yeeee-haaa!!!
And, it's going to get a whole lot worse. Remember that there's still the "promise" that you won't be able to change settings concerning NULLs, etc.
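As an illustration of the kind of code that change will break, the deprecated NULL-comparison setting can be demonstrated in a couple of lines (behavior as documented for SET ANSI_NULLS, which future versions are slated to force to ON):

```sql
-- Deprecated: future versions will always behave as if ANSI_NULLS is ON.
SET ANSI_NULLS OFF;
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END;  -- 'equal' with OFF

SET ANSI_NULLS ON;
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END;  -- 'not equal' (standard)
```

Any legacy code that relies on the OFF behavior will have to be found and rewritten before that "promise" is kept.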
When MS stops breaking code with their releases, that's when I think you might see an increase in upgrades. Maybe not even then because a lot of good folks correctly have the attitude of "If it ain't broke, don't fix it" or "With upgrades, you take the chance of jumping from fat to fire".
--Jeff Moden
Change is inevitable... Change for the better is not.
September 19, 2014 at 6:17 pm
In my case we have 30+ instances spread across different servers. Then each instance has 75+ databases. So our production DBA team would have to make sure that QA does all the testing and then 2-5 instances work off of a master instance's SQL agent.
It will take forever to plan and institute it. And our application touts a 97% uptime as part of the sales package.
----------------
Jim P.
A little bit of this and a little byte of that can cause bloatware.
September 20, 2014 at 1:34 am
There is also the death or bongo answer to the question of scalability.
- If you do have a scalability problem, could it be run on a NOSQL stack?
There are also circular arguments regarding new features: we design systems to cope without them because we haven't got them; since the systems can be designed to cope without them, why demand them?
I know someone who will not upgrade from SQL2000 because he loves the ease and simplicity of DTS and regards SSIS as the spawn of Satan.
An improvement in tooling surrounding SQL Server would be a driver towards adoption, particularly if that tooling could support other databases and platforms. Take a look at AquaFold's Aqua Data Studio: it supports the proprietary features of many databases, SSH connections, forward/reverse engineering of database schemas with an inbuilt ERD tool, etc.
September 20, 2014 at 7:50 am
We're getting more serious about the NoSQL talks for those same reasons, David. Other reasons would be cheaper analytical solutions for both data mining and reporting versus 2008 R2. Then when it comes to data warehousing, Microsoft themselves are already depending on NoSQL at the core of their infrastructure. So it seems all paths lead to NoSQL for anything compute-heavy where scale is needed.
September 20, 2014 at 9:30 am
The new licensing costs.
We just re-upped our EA and the increased costs now have managers wanting us to look into Postgres.
I understand why they did it, but maybe they should have ramped up costs more slowly?
September 21, 2014 at 9:27 am
Microsoft DocumentDB looks interesting but is only available in Azure. I'd love to know where the graph database outlined in the Trinity white paper is going to go.
I have some qualms about the scalability claims of NOSQL vs RDBMS. There is a scale point beyond which you cannot go with SQL Server; it is fundamental to the balance of compromises that have to be made to make an RDBMS work. NOSQL allows a choice to discard or downgrade certain features in order to upweight others. Consider performance as an example: to obtain it we can discard schema enforcement, durability, consistency and a whole raft of other things.
I have a number of concerns with this:
- Is there sufficient understanding of what is being traded to gain performance and alleged flexibility?
- Is there a realistic sizing and capacity evaluation? Incumbent Tech A is 100x better than we need, Tech B is 1000x better than we need. Why go through the headache and heartache of adopting Tech B?
- Let us suppose you swap out an RDBMS for a document store. Swapping two technologies in a system means that the business has what they had before and not necessarily a step forward...yet. It is a promise of jam tomorrow.
I am not sure how many performance issues are down to the DB platform and how many are down to coding problems. Will a tech swap end up enabling a "throw hardware at it" mentality to performance issues?
If I was a business startup I absolutely would try NOSQL first, either Couchbase or MongoDB for document store, REDIS for session state, Elastic Search as part of the ELK stack for log handling and search, NEO4J for graph capability.
In an existing enterprise I think certain NOSQL solutions offer compelling cases for their use. I'm not convinced that an argument for wholesale or majority abandonment of RDBMS stacks up. In an enterprise situation data has to flow between systems. Quite a few NOSQL solutions seem focussed on serving one particular system and not so much on system to system integration.
September 21, 2014 at 9:39 am
David.Poole (9/21/2014)
Microsoft DocumentDB looks interesting but is only available in Azure. I'd love to know where the graph database outlined in the Trinity white paper is going to go. I have some qualms about the scalability capability of NOSQL vs RDBMS. There is a scale point beyond which you cannot go with SQL Server. It is fundamental to the balance of the compromises that have to be made to make an RDBMS work. NOSQL allows a choice to be made to choose to discard or downgrade certain features in order to upweight others. Consider performance as an example, to obtain it we can discard schema enforcement, durability, consistency and a whole raft of other things.
I have a number of concerns with this
- Is there sufficient understanding in what is being traded to gain performance and alleged flexibility
- Is there a realistic sizing and capacity evaluation. Incumbent Tech A is 100x better than we need, Tech B is 1000x better than we need. Why go through the headache and heartache of adopting Tech B?
- Let us suppose you swap out a RDBMS for a document store. Swapping two technologies in a system means that the business has what they had before and not necessarily a step forward....yet. It is a promise of jam tomorrow
I am not sure how many performance issues are down to the DB platform and how many are down to coding problems. Will a tech swap end up enabling a "throw hardware at it" mentality to performance issues?
If I was a business startup I absolutely would try NOSQL first, either Couchbase or MongoDB for document store, REDIS for session state, Elastic Search as part of the ELK stack for log handling and search, NEO4J for graph capability.
In an existing enterprise I think certain NOSQL solutions offer compelling cases for their use. I'm not convinced that an argument for wholesale or majority abandonment of RDBMS stacks up. In an enterprise situation data has to flow between systems. Quite a few NOSQL solutions seem focussed on serving one particular system and not so much on system to system integration.
Could NOSQL be the problem demanding a formal specification solution?
Gaz
-- Stop your grinnin' and drop your linen...they're everywhere!!!
September 21, 2014 at 6:03 pm
Last time I had to worry about such things, there were two powerful reasons, either of which would on its own have been enough to ensure that we didn't upgrade the SQL version. Those were the same reasons that many others have mentioned.
The main reason was the cost of regression testing (including the cost of the learning process for the support team). I certainly didn't trust Microsoft to provide a new release that didn't break our software: we found critical security updates doing that (rarely), and service packs too (as often as there was a service pack, whether for the OS or for SQL Server), so why should I believe that a new release wouldn't introduce problems? This was potentially made worse by the possible need to upgrade the OS when SQL Server was upgraded, as our most complex application software was dependent on many Windows features other than SQL Server.

In addition, some regression testing would have to be done on customer sites because we didn't necessarily have hardware available in-house identical to what was installed on customer sites. We had to interface to customers' equipment that used manufacturer-specific protocols and required our stuff to be certified as suitable for interfacing, and for some equipment that certification had to be carried out on the other manufacturers' sites and wouldn't include regression testing with our applications. Most of the customers would want to conduct an acceptance trial on any significant upgrade such as a new OS or SQL Server version, and since they were providing a 24/7 service to their own customers they generally weren't willing to take time out for that (it was hard enough to get time to install critical security fixes).
The second reason was the cost of SQL Server. We were using Standard Edition, so it was not as bad as it might have been; but the accountants and marketeers had included the SQL licensing cost up-front in the contracts with the customers, and contracts might run for 3, 5, or 10 years. If we wanted to upgrade we could either wait until the current contract ended, persuade the customer to pay, or find the cost in-house somehow; the last was impossible: the burn rate was hardly acceptable without that expense, and a lot of my time was spent devising ways to reduce it or to prevent it rising when any of our costs increased. Since our software needs big servers to provide the required service for customers with very large numbers of end-users, the new per-core licensing model means that upgrades will be an even bigger problem in future.
Tom
September 22, 2014 at 3:09 am
Will nobody think of the developers? 🙂
Seriously, if humankind had adopted "If it ain't broke, don't fix it" we'd still be living in caves and drinking out of puddles. Requirements change! Expectations rise! New ways of thinking emerge!
In each release of SQL Server, Microsoft has the opportunity to introduce innovations, typically ones that a mass of developers have been crying out for. How I fumed when our DBAs refused to install 2005, with its greatly-enhanced XML support, leaving my colleagues to write a complicated sequence of 18 related SQL objects whose output I still had to convert to XML. Every change to improve or debug this code set inflicted horrible pain (on me, not the DBAs). After upgrading to 2005, these objects were replaced with a single stored procedure and a user-defined function that outputted the XML directly. I could even implement some changes given to me over the phone as I listened and typed into the test/development environment and tested the results: "Yep, that worked" I could say before putting the phone down. Goddamit!
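To give a flavor of the kind of simplification 2005's XML support made possible, a single FOR XML PATH query can emit shaped XML directly (table and column names here are invented, not the actual system I worked on):

```sql
-- One query replacing a chain of objects that built XML by hand
-- (illustrative schema: dbo.Orders and dbo.OrderItems are made up).
SELECT o.OrderID   AS '@id',          -- becomes an attribute on <Order>
       o.OrderDate AS 'Date',         -- becomes a <Date> element
       (SELECT i.ProductID AS '@product',
               i.Quantity  AS '@qty'
        FROM dbo.OrderItems i
        WHERE i.OrderID = o.OrderID
        FOR XML PATH('Item'), TYPE)   -- nested <Item> elements, typed as xml
FROM dbo.Orders o
FOR XML PATH('Order'), ROOT('Orders');
```

Compared with assembling that document through a pipeline of views and string concatenation, the difference in maintainability speaks for itself.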
And yes, every new server had the latest SQL Server on it, but the old ones were not upgraded. So I had learnt all the new features (reading SQL Server 2005/2008 Bibles amongst other sources) but then had to mentally regress, applying solutions written for a later version of T-SQL to an earlier version, or versions.
Even between 2008 and 2008 R2 there were huge improvements in Reporting Services.
Anyway, whenever I hear a DBA saying something like "I see no compelling reason to upgrade" my eyes turn red, smoke issues from my nostrils and I reach for that large, heavy latest version of the SQL Server Bible I've been reading for the last month, stuffed with bookmarks on new DML improvements and…
September 22, 2014 at 3:51 am
Trouble is that too many developers either haven't engaged in, or haven't been engaged in, an upgrade analysis beyond what has sounded like "we want new toys". All too often the argument fails to be made on a business-justification or cost basis.
This is not always the case.
Gaz
-- Stop your grinnin' and drop your linen...they're everywhere!!!
September 22, 2014 at 4:50 am
Tavis Reddick (9/22/2014)
Anyway, whenever I hear a DBA saying something like "I see no compelling reason to upgrade" my eyes turn red, smoke issues from my nostrils and I reach for that large, heavy latest version of the SQL Server Bible I've been reading for the last month, stuffed with bookmarks on new DML improvements and…
And whenever I see a developer throwing a tantrum and screeching "I want I want I want" because he's seen expensive new toys in the shop window, I feel sad that some developers never learn. Most of my background is R&D, but although I too like new toys I have also had a responsibility to help ensure the survival of the company which employed me, which sometimes has meant that new toys had to be delayed. Of course every developer shares that responsibility, although learning to recognise it tends to be part of the transition from a junior developer to a senior one. For really senior developers that responsibility can at times take up more of their time than actually doing or managing or leading development does, which can be very frustrating, but if developers don't accept that responsibility they are in my view either silly children who need to grow up, people who are not aware of the real world, or irresponsible nitwits.
Tom
September 22, 2014 at 5:03 am
Tavis Reddick (9/22/2014)
Will nobody think of the developers? 🙂 Seriously, if humankind had adopted "If it ain't broke, don't fix it" we'd still be living in caves and drinking out of puddles. Requirements change! Expectations rise! New ways of thinking emerge!
In each release of SQL Server, Microsoft has the opportunity to introduce innovations, typically ones that a mass of developers have been crying out for. How I fumed when our DBAs refused to install 2005, with its greatly-enhanced XML support, leaving my colleagues to write a complicated sequence of 18 related SQL objects whose output I still had to convert to XML. Every change to improve or debug this code set inflicted horrible pain (on me, not the DBAs). After upgrading to 2005, these objects were replaced with a single stored procedure and a user-defined function that outputted the XML directly. I could even implement some changes given to me over the phone as I listened and typed into the test/development environment and tested the results: "Yep, that worked" I could say before putting the phone down. Goddamit!
And yes, every new server had the latest SQL Server on it, but the old ones were not upgraded. So I had learnt all the new features (reading SQL Server 2005/2008 Bibles amongst other sources) but then had to mentally regress to apply solutions in a later version of T-SQL to an earlier version, or versions.
Even between 2008 and 2008 R2 there were huge improvement in Reporting Services.
Anyway, whenever I hear a DBA saying something like "I see no compelling reason to upgrade" my eyes turn red, smoke issues from my nostrils and I reach for that large, heavy latest version of the SQL Server Bible I've been reading for the last month, stuffed with bookmarks on new DML improvements and…
For the most part... most part, mind you... the bulk of our SQL Server installs since we hit SQL2008R2 will not benefit from anything new within SQL Server. Most are small, department-based purchased apps. They run just fine as they are... and most ran fine under SQL2000, quite honestly. Other than going from 32-bit to 64-bit and 2008R2 backup compression, our two Help Desk apps ran fine in SQL2000. Small, basic, out-of-the-box apps. They are on 2008R2 and I see no benefit to going to 2012 or 2014. It just works, and works well.
September 22, 2014 at 5:19 am
TomThomson (9/22/2014)
Tavis Reddick (9/22/2014)
Anyway, whenever I hear a DBA saying something like "I see no compelling reason to upgrade" my eyes turn red, smoke issues from my nostrils and I reach for that large, heavy latest version of the SQL Server Bible I've been reading for the last month, stuffed with bookmarks on new DML improvements and…

And whenever I see a developer throwing a tantrum and screeching "I want I want I want" because he's seen expensive new toys in the shop window I feel sad that some developers never learn. Most of my background is R&D, but although I too like new toys I have also had a responsibility to help ensure the survival of the company which employed me, which sometimes has meant that new toys had to be delayed.
My overall comments had a serious theme but an exaggerated style (intended to balance what I saw as a rather one-sided thread full of (legitimate) DBA concerns.
I have heard people who call themselves developers talking about "new toys". I would not use that term, and I think it is unnecessarily derogatory and provocative in certain situations.
SQL Server development has been driven by customer demands, the changing world outside databases (the world they are supposed to model: for example by geographical data types) and even the SQL standard itself. Serious developers will be aware of this process and even part of it.
T-SQL is, in many respects, a wonderful declarative language, but there is always scope for improvement and always new problems to solve. Code management is such a huge issue for some organizations that even what may seem relatively minor improvements can reduce complexity by orders of magnitude (I gave an example). The movement towards elegance in coding is not something just to be appreciated for its abstract beauty, but for real-world gains in productivity, performance and efficiency.
I grant completely that not all developers work this way, and these issues are really not relevant for a lot of organizations who do not change their own code, but respect is due to the deep thinking on solving classes of problems that has gone into the vast amount of improvements made in SQL Server over the years, even if many are only of significance to a minority.