November 19, 2024 at 11:14 pm
First time here... hoping I can find some direction and answers so I can devise a proper solution. I have novice, basic SQL experience gathered over the years as the "jack of all trades" IT support person, not a DB admin by any means.
Issue:
After a vendor application upgrade and a SQL Server upgrade from 2014 to 2016, we cannot process claims in our healthcare management system. The same process that took a few minutes before the upgrade now takes an hour or more and rarely completes. It usually fails or locks up the desktop application for hours, or returns an error in the application: "Incorrect token value:'PARAMS' <>." The vendor says this is "always" a resource issue. The application and the SQL Server database are installed on the same machine and have been for 7 years (specs and other info below).

When I view the Windows Resource Monitor after the failing process has been queued, there is no large load being put on the hardware that I can see. Screenshots of Resource Monitor taken while the failing process is running are attached. I cannot verify the resource issue to be the problem; maybe I am missing something. We did not have performance issues other than a little sluggishness in the user experience prior to this upgrade.

I am thinking that the SQL upgrade is the issue and have been investigating whether the immediate performance problem could be due to my upgrade from SQL 2014 to SQL 2016 and the change in compatibility mode from 2012 to 2016. (I have not yet verified that I changed the compatibility mode, but my guess is that I may have.)
Question:
If my SQL 2014 instance was running the database in 2012 compatibility mode, could the upgrade to SQL 2016, and the database moving to 2016 compatibility mode along with it, be what is slowing down the many queries the failing process runs, and therefore the issue I am now dealing with?
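(I have not verified the compatibility level yet. From what I have read, something like this should show the current setting; HealthDB is just a stand-in for our actual database name.)

    -- Show the compatibility level of the application database
    -- (HealthDB is a stand-in name, not our real one).
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = N'HealthDB';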
Server Specs:
Dell PE T430
2 - Xeon E5-2630 2.4GHz
64GB RAM
PERC H730 Controller 1GB cache
2 - 300GB SAS 6Gbps 15k RPM RAID 1
3 - 300GB SAS 6Gbps 15k RPM RAID 5 (with hot spare)
Windows Server 2016 Version 1607 Build 14393.7515
SQL 2016 SP3 (13.0.6455.2)
Main application DB size is ~70GB
A little background....
We have a vendor-provided healthcare application that uses SQL Server as its database engine. We are self-hosted on our own Dell T430 server that is about 7 years old. We will be moving to a new cloud-based system next year (we were supposed to move last year, but that vendor had issues, so a new vendor is being selected for next year). Buying new hardware is therefore not an option; migrating to a VMware server we already have may be an option if the hardware needs an upgrade, but that is not the desired path.

In October we had to upgrade the vendor application from 24.1 to 24.3 for some fixes and regulatory updates, and in doing so the vendor required us to move from SQL 2014 to SQL 2016. This is when the problem began with certain processes in our application, such as the billing run that generates claims for patient visits. That process runs many queries to compile all the information for every patient who qualifies for billing in a given period. If we run one patient for 3 days, it works. Selecting 2 patients for a larger number of days (but less than a month) ran for over an hour and locked up the desktop application. Prior to the upgrade, this processing of bills/claims took less than 5 or 10 minutes most times.
November 20, 2024 at 1:32 am
If my SQL 2014 instance was running the database in 2012 compatibility mode, could the upgrade to SQL 2016, and the database moving to 2016 compatibility mode along with it, be what is slowing down the many queries the failing process runs, and therefore the issue I am now dealing with?
Yes. There was a change to the Cardinality Estimator in 2014 that crushed a lot of people's code, including at the place where I was working. Fortunately, they created a trace flag to force the entire system to use the old "Legacy Cardinality Estimator".
Please look for 9481 in the trace flag column in the table at the following link for more information.
We set the trace flag in a global fashion for 2016 by making it part of the SQL Server startup parameters. We have since upgraded to 2017 and then to 2022 after that, and we still needed this trace flag for most of our databases. And, yes... 2022 (which has the same performance problems as 2019, and then some) wouldn't even run our code in the same day (seriously well-deserved sarcasm, because our well-behaved overnight jobs suddenly bled into 10 AM to noon the next day instead of finishing well before 8). On the 2 big databases where most of the activity was, we had to set the databases to 2017 compatibility AND have the Legacy Cardinality Estimator enabled to stop the severe bleeding. The code still takes a lot longer than it did before (and we have a LOT of it), but at least it usually finishes before 10 now.
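Just as a sketch of the moving parts (YourDB is a placeholder for a database name, not anything specific):

    -- Enable trace flag 9481 (legacy CE instance-wide) until the next restart.
    DBCC TRACEON (9481, -1);

    -- To make it survive restarts, add -T9481 as a startup parameter
    -- using SQL Server Configuration Manager.

    -- Pin a database's compatibility level (140 = 2017; 130 would be 2016).
    ALTER DATABASE [YourDB] SET COMPATIBILITY_LEVEL = 140;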
--Jeff Moden
Change is inevitable... Change for the better is not.
November 20, 2024 at 9:08 pm
Thank you! I was reading about the CE possibly being the problem. I did check some of the queries and they were individually set to CE version 70. I am not sure if all of them are, though. So I am going to try setting the CE globally to version 70 (legacy) and see if it improves results.
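From what I can tell in the docs, the database-wide switch and a per-query hint would look something like this on our 2016 SP3 instance (MainDB and dbo.Claims are stand-ins, not our actual names):

    -- Whole database: use the legacy (version 70) cardinality estimator.
    USE [MainDB];
    ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

    -- Single query (2016 SP1 and later): force the legacy CE for one statement.
    SELECT COUNT(*)
    FROM dbo.Claims
    OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));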
November 21, 2024 at 1:18 am
Thank you! I was reading about the CE possibly being the problem. I did check some of the queries and they were individually set to CE version 70. I am not sure if all of them are, though. So I am going to try setting the CE globally to version 70 (legacy) and see if it improves results.
Awesome. The next thing is to tell us ...
https://learn.microsoft.com/en-us/lifecycle/products/sql-server-2016
There will be other things to check but those are the usual biggies along with the Legacy Cardinality Estimator.
--Jeff Moden
Change is inevitable... Change for the better is not.