SQL Server process growing memory usage

  • I am very worried about the SQL Server 2012 Express instance on which my database is running: the SQL Server process's memory usage is growing steadily (1.5 GB after only 2 days of operation).

    The database consists of seven tables, each with a bigint identity primary key and at least one non-unique index with included columns to serve the majority of incoming queries.

    An external application calls some stored procedures via Microsoft OLE DB. Each procedure does some calculations using intermediate temporary tables and/or table variables and finally performs an upsert (UPDATE ... IF @@ROWCOUNT = 0 INSERT ...) - I never DROP those temporary tables explicitly. The call rate is about 100 calls every 5 seconds, and I have seen that the DLL used by the external application opens a connection to SQL Server, makes the call, and then closes the connection for each and every call.
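
    For reference, the upsert in each procedure has the shape below; the table and column names here (dbo.Readings, SensorId, Value) are made up, and the real procedures are more complex:

        -- Simplified sketch of the UPDATE ... IF @@ROWCOUNT = 0 INSERT pattern
        UPDATE dbo.Readings
        SET    Value = @Value
        WHERE  SensorId = @SensorId;

        IF @@ROWCOUNT = 0
            INSERT INTO dbo.Readings (SensorId, Value)
            VALUES (@SensorId, @Value);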

    The database files are organized in a single filegroup, and the recovery model is set to SIMPLE.

    Some questions to diagnose the problem:

    1. is that steadily growing memory usage normal?

    2. did I make any mistake in the database design which could lead to this behaviour (no explicit temp-table drop, filegroup organization, etc.)?

    3. can SQL Server manage such a stored procedure call rate (100 calls every 5 seconds, i.e. 100 upserts every 5 seconds, in addition to the intermediate calculations)?

    4. does the continuous "open connection / call SP / close connection" pattern disturb SQL Server?

    5. is it possible to diagnose what is causing such memory usage? Perhaps queues of waiting requests? (I ran sp_who2, but I didn't see a large number of orphaned connections from the external application - see also the query sketch after this list.)

    6. if I restrict the amount of memory SQL Server is allowed to use, might I sooner or later run into trouble?
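
    Regarding question 5, the only extra check I have tried so far is looking at which memory clerks hold the memory. I'm not sure this is the right approach, so corrections are welcome (my guess is that MEMORYCLERK_SQLBUFFERPOOL dominating would mean it's mostly just the data cache):

        -- Top memory consumers grouped by clerk type (pages_kb exists in SQL Server 2012+)
        SELECT TOP (10)
               type,
               SUM(pages_kb) / 1024 AS memory_mb
        FROM   sys.dm_os_memory_clerks
        GROUP  BY type
        ORDER  BY memory_mb DESC;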

    Thank you all in advance!

  • SQL Server will eventually use all the memory allocated to it. It never really deallocates memory; it just ages the oldest cached data out as it needs to cache more recent data.

    To find the optimum level, you'd need to set a maximum and then monitor the Buffer Cache Hit Ratio, or just monitor that your application's performance remains acceptable over a period of time.
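
    If you want to try that, something along these lines should do it; the 512 MB cap is just an example value to tune for your workload:

        -- Cap SQL Server's memory (value in MB); 'max server memory' is an advanced option
        EXEC sys.sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sys.sp_configure 'max server memory (MB)', 512;
        RECONFIGURE;

        -- Then keep an eye on the buffer cache hit ratio over time
        SELECT CAST(a.cntr_value AS float) / b.cntr_value * 100.0 AS [Buffer cache hit ratio %]
        FROM   sys.dm_os_performance_counters AS a
        JOIN   sys.dm_os_performance_counters AS b
               ON a.object_name = b.object_name
        WHERE  a.counter_name = 'Buffer cache hit ratio'
          AND  b.counter_name = 'Buffer cache hit ratio base';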

    In terms of your transaction rate, of course SQL Server can handle those sorts of rates, but it depends on the size of the transactions and the scale of the hardware.

    20 transactions per second sounds quite a lot for non-pooled connections, though; you'll be adding quite a lot of overhead by doing one login/logout per transaction. Why are you doing that rather than running multiple transactions over a single connection or connection pool?

  • Express has a 1GB memory limit, so are you talking about memory as in RAM, or memory as in storage?

  • Thank you all for your answers:

    HowardW (11/20/2012)


    20 transactions per second sounds quite a lot for non-pooled connections, though; you'll be adding quite a lot of overhead by doing one login/logout per transaction. Why are you doing that rather than running multiple transactions over a single connection or connection pool?

    Unfortunately I'm not coding the application which is the client of my database: another company is doing that and, as far as I know, they historically use a DLL based on MS OLE DB whose behaviour is as described, i.e. "open connection / do the job / close connection". If it were my responsibility, I would have opened the connection once and then done all the jobs over that connection - there are no parallel jobs, they happen one after the other. Do you think that the growing memory usage could depend on the pattern used by the DLL?

    anthony.green (11/20/2012)


    Express has a 1GB memory limit, so are you talking about memory as in RAM, or memory as in storage?

    I'm talking about RAM.
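
    To be sure I'm measuring the right thing, this is the kind of query I plan to use to see what the process actually holds in RAM (assuming sys.dm_os_process_memory is the right DMV for that; my understanding, which may be wrong, is that the Express 1GB cap applies to the buffer pool, so the whole process can sit somewhat above it):

        -- Physical and committed memory of the SQL Server process
        SELECT physical_memory_in_use_kb / 1024          AS physical_mb,
               virtual_address_space_committed_kb / 1024 AS committed_mb
        FROM   sys.dm_os_process_memory;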

    Regards
