July 22, 2013 at 11:26 am
I am using Idera's SQL Diagnostic Manager, and it is throwing an alert saying a session has a high CPU (ms) count.
The session has been connected for over 11 hours; however, it isn't really "doing anything" (no wait time, no wait types, no resource consumption, and no blocking caused by the session).
We have a 3rd-party application which has a database on that server. The connection is from that application back to its "metadata database": every time an action is performed by or against the application, it has to query that metadata database to determine how to respond.
From what I can tell, the application just leaves that session open because an action is performed every few seconds or minutes.
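For what it's worth, one way to confirm the session really is idle between requests is to look at it in sys.dm_exec_sessions. Note that cpu_time there is cumulative over the session's life, so a connection that has been up for 11 hours running small queries every few seconds can legitimately show a large CPU (ms) number even while it sleeps. A minimal sketch, assuming a Python client with pyodbc; the connection string and session id are hypothetical:
[code="python"]
import pyodbc

# Hypothetical connection string -- adjust driver, server, and credentials.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=master;Trusted_Connection=yes")

def describe_session(session_id):
    """Show whether a session is idle and how much CPU it has accumulated."""
    conn = pyodbc.connect(CONN_STR)
    try:
        row = conn.cursor().execute(
            """
            SELECT s.status,                 -- 'sleeping' = connected but idle
                   s.cpu_time,               -- cumulative ms over the session's life
                   s.login_time,
                   s.last_request_end_time   -- when it last finished doing work
            FROM sys.dm_exec_sessions AS s
            WHERE s.session_id = ?
            """, session_id).fetchone()
        if row is None:
            print("Session %d not found" % session_id)
        else:
            print("status=%s  cpu_time=%d ms  logged in %s  last request ended %s"
                  % (row.status, row.cpu_time, row.login_time,
                     row.last_request_end_time))
    finally:
        conn.close()

describe_session(55)  # hypothetical session id taken from the alert
[/code]
If the status is "sleeping" and last_request_end_time keeps moving forward, the session is just an open connection between short requests, not a runaway query.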
Is there something "inherently" wrong with an application having a semi-perpetual connection like that?
(If there is a blog post or article which covers this, I would be most appreciative of a link to it so I can better educate myself.)
Thanks!
July 23, 2013 at 3:44 pm
Today, and it has been this way for a decade or more, applications typically connect, run a query, disconnect, and rely on connection pooling in the client API to make reconnection quick. That is, the API hangs on to the physical connection for about 60 seconds, and if there is no reconnection in that period, the API physically disconnects from SQL Server.
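For illustration, here is a minimal sketch of that connect-run-disconnect pattern, assuming a Python client with pyodbc (ADO.NET and other APIs behave similarly, with their own pool timeouts); the connection string and table name are hypothetical:
[code="python"]
import pyodbc

# ODBC connection pooling must be configured before the first connection
# is opened; it is on by default in pyodbc.
pyodbc.pooling = True

# Hypothetical connection string -- adjust for your environment.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=MetaDataDB;Trusted_Connection=yes")

def lookup_action(action_name):
    """Connect, run one query, disconnect -- the pool makes reconnects cheap."""
    conn = pyodbc.connect(CONN_STR)
    try:
        # dbo.ActionConfig is a hypothetical table standing in for the
        # 3rd-party application's metadata lookups.
        row = conn.cursor().execute(
            "SELECT response FROM dbo.ActionConfig WHERE action_name = ?",
            action_name).fetchone()
        return row.response if row else None
    finally:
        # Logically disconnect; the driver manager keeps the physical
        # connection warm for a while in case we reconnect soon.
        conn.close()
[/code]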
However, occasionally you run into an application that opens a connection and keeps it open for the lifetime of the application. I may have written one or two myself. It seems that this application's connection back to the metadata database is of this old school. Right or wrong? Since I don't know anything about this application or the design decision behind it, I am not one to judge. I do know that in the system I work with, we have components that stay connected.
[font="Times New Roman"]Erland Sommarskog, SQL Server MVP, www.sommarskog.se[/font]