Closing pooled connections

  • We have an application that uses MSDE on each user's machine and attaches and detaches databases so they can be passed around streamed inside another file. This is the transition version of an application moving from a proprietary store to a database. Anyway, when we attempt to detach, we get an error that the database is still in use, because of connection pooling. We have a couple of workarounds, but both have performance implications.

    The first workaround is to loop through all the SQL processes (using DMO) and kill any that are running against the database we want to detach. This is quite slow. I'm not sure what the source of the delay is, but it takes at least 30 seconds, which is a lot to add to a file save.

    The second workaround is to disable connection pooling in the connection strings used in the application (see the sketch below). If we do that, we don't find any processes running against our database and can detach without delay. But not using pooling makes some processing a little slower (not much; we can live with that if we have to).

    Is there a quick way to get SQL Server to close pooled connections that are not currently in use?

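    A minimal sketch of the second workaround, assuming the .NET SqlClient provider (the instance and database names below are placeholders):

        Server=(local)\MyInstance;Database=MyAppDb;Integrated Security=SSPI;Pooling=false

    With Pooling=false, closing the connection closes the underlying physical connection, so nothing is left holding the database open when the detach runs.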

  • Turning off connection pooling seems like a good answer. Or maybe just close your connection as soon as the user is done with db access, increasing the chances that the pooled connection will have timed out (after about a minute, I think?) by the time you do your save. You could also try doing a select against sysprocesses to get the spids and kill them using ADO instead of DMO (see the sketch after this post) - that might shave a second or two off the process, though I haven't timed it. You would just be saving the object creation overhead.

    Could you explain why you're detaching and putting it inside another file?

    Andy

    http://www.sqlservercentral.com/columnists/awarren/
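
    A minimal sketch of the sysprocesses approach, assuming the database is named MyAppDb (a placeholder) and the connection running the batch has rights to issue KILL (processadmin or sysadmin):

        DECLARE @spid int, @sql varchar(20)
        DECLARE @targets TABLE (spid int)

        -- Collect the spids currently connected to the target database
        INSERT INTO @targets (spid)
        SELECT spid
        FROM master..sysprocesses
        WHERE dbid = DB_ID('MyAppDb')      -- placeholder database name
          AND spid <> @@SPID               -- leave the connection running this batch alone

        -- KILL does not accept a variable, so build each statement dynamically
        DECLARE kill_cursor CURSOR LOCAL FAST_FORWARD FOR
            SELECT spid FROM @targets
        OPEN kill_cursor
        FETCH NEXT FROM kill_cursor INTO @spid
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sql = 'KILL ' + CAST(@spid AS varchar(10))
            EXEC (@sql)
            FETCH NEXT FROM kill_cursor INTO @spid
        END
        CLOSE kill_cursor
        DEALLOCATE kill_cursor

    Sent as a single batch over an ADO command, this should avoid the DMO object creation overhead, and the detach can follow immediately afterward.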

  • >>Could you explain why you're detaching and putting it inside another file?

    It's just for one version that we will do this. Our application is closing in on 8 years and its 6th version. It was first developed using OLE structured storage; the scope of the project was smaller back then. But even now, one of our requirements is support for multiple disconnected users who trade files by email, CD-RW, IR or other means. The field users have notebooks and may have a team working at various client locations.

    Anyway, we are converting to SQL, but converting everything at once would take too long, for political reasons as much as anything else; the new features requested in this version are being coded in .NET using SQL (MSDE), but most of the existing functionality is still going to use the old storage format. For housekeeping reasons, a single file has always been a requirement. So we could either pack all the binary data into a database, or pack the database into the binary data. We picked the latter; it was easier to add an extension to the existing codebase that unpacked the database than a wrapper around it to unpack the binary data.

    Other than the speed issue, it's working quite well. Unfortunately, one of the selling points for converting to a database is speed. Once we get everything converted, it will be faster. In the meantime, adding a noticeable delay to file saves would not look good.

    -Andy
