Insufficient Memory

  • I have a query which contains two large IN clauses - one with up to 40 items, the other with up to 270 items. I use an IN clause because I find it more efficient than using table joins.
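
    For reference, the query is shaped roughly like this (table and column names are made up here for illustration):

    SELECT o.OrderID, o.OrderDate
    FROM Orders o
    WHERE o.CustomerID IN (1, 2, 3)         -- up to 40 items
      AND o.ProductID IN (101, 102, 103)    -- up to 270 items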

    The problem I had this morning was that the query kept failing with error 701: Insufficient memory.

    Overnight, there were several hundred of these queries which executed successfully and a few which didn't. This morning, when I reran some of the overnight queries which had completed OK, I got the Insufficient Memory error message. There was minimal load on the server this morning (CPU usage < 5%) so there was nothing else grabbing memory.

    Why would a query which completed OK start failing with insufficient memory 6 hours later?

    I have 'fixed' this by recycling the service, but I guess it will happen again sometime. The server is dedicated to SQL Server with 2 GB of RAM.

    Any ideas?

    Jeremy

  • I'm not sure I agree with your statement about IN being more efficient than JOIN.

    SQL Server treats an IN statement as an OR, and frequently an OR will stop SQL Server from using an index. A simple join, supported by the relevant indexes, should perform well enough, especially with large data volumes.
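
    As a rough sketch (table names invented for illustration), the two forms look like this:

    -- IN version: effectively CustomerID = 1 OR CustomerID = 2 OR ...
    SELECT o.OrderID
    FROM Orders o
    WHERE o.CustomerID IN (1, 2, 3)

    -- Join version: the values live in their own indexed table
    SELECT o.OrderID
    FROM Orders o
    INNER JOIN CustomerFilter f ON f.CustomerID = o.CustomerID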

    Have you looked at the execution plan for your query? It may be that a change in the data has caused SQL Server to generate a different execution plan.

    If the plan is using hash joins or sorts, then that will increase the amount of memory the query uses.
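
    One way to check, if you are running the query from Query Analyzer, is a textual showplan:

    SET SHOWPLAN_TEXT ON
    GO
    -- your query here; with showplan on it is compiled but not executed
    SELECT o.OrderID FROM Orders o WHERE o.CustomerID IN (1, 2, 3)
    GO
    SET SHOWPLAN_TEXT OFF
    GO
    -- Look for Hash Match or Sort operators in the output;
    -- those are the steps that need memory grants.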

  • You say CPU usage was < 5%, but what was the memory usage at that point, and besides SQL Server, what was the heaviest user of memory? Many people say a server is dedicated to SQL Server, but they forget things such as Terminal Services or other applications running on the box. Some other things to check:

    - Make sure you are up to date on SQL Server service packs, as there have been some known issues with memory leaks (memory gets used but never freed).
    - Check the server's setup. For example, if running Windows 2000, right-click My Computer and choose Properties, go to the Advanced tab, and press the Performance Options button. Make sure it is set to Background Services as opposed to Applications, as Windows does some memory tweaking there.
    - Check drive fragmentation; drive I/O can be a factor in memory issues.
    - Check for network bottlenecks. If the network is suffering a bottleneck across the NIC, data will be held in memory, which can cause buffer errors and ultimately memory errors.
    - Finally, check the number of connections open on the server. Each takes approximately 50 KB of memory, and a rogue process that opens connections without freeing them properly may be hitting you there in some way (these can also cause memory leaks even on patched servers) - see the sketch after this list for a quick way to count them.
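
    A quick way to count open connections (a sketch against the SQL 2000 system tables):

    -- Count current connections per login and host
    SELECT loginame, hostname, COUNT(*) AS connections
    FROM master..sysprocesses
    GROUP BY loginame, hostname
    ORDER BY connections DESC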

    Also, ianscarlett, as for this:

    quote:


    I'm not sure I agree with your statement about IN being more efficient than JOIN.


    It is generally more efficient, as the query optimizer makes determinations you don't even realize. In testing, small lists will generally be treated as ORs; however, at some threshold the query optimizer will actually convert the IN statement to a temporary table and do an INNER JOIN itself. So creating the temporary table and placing the values in it yourself may actually be an unnecessary performance hit. That is not to say you will never find a situation where the opposite holds, but generally it is the case.
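
    The manual equivalent of that internal conversion would look something like this (names invented for illustration):

    -- Manual version of what the optimizer can do internally:
    CREATE TABLE #filter (id INT PRIMARY KEY)
    INSERT INTO #filter (id) VALUES (1)
    INSERT INTO #filter (id) VALUES (2)
    -- ... one INSERT per value in the IN list ...

    SELECT o.OrderID
    FROM Orders o
    INNER JOIN #filter f ON f.id = o.CustomerID

    DROP TABLE #filter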

    Edited by - antares686 on 04/09/2003 04:49:45 AM

  • IN clause vs join: I converted the IN clause to a multi-table join, and I was surprised that the table join performed significantly better than the IN clause. When I first started this application I was using SQL 6.5 on a very basic server, but when we upgraded to SQL 7 (RAID array, more memory, faster processor) the performance died - the reason being the poor performance of the table joins. We are now on SQL 2000 and it seems to work OK, so I'll try joins.

    Memory usage - there was only about 10 MB of RAM left. The two biggest users were SQL Server and SQL Server Agent.

    The other software we run on the box is NetShield (standard on all our NT servers) and pcAnywhere - the server is not in a manned computer room, and we try to do as much as possible remotely. The other processes running on the server are backup software and the normal server-related processes, e.g. RAID manager software etc.

    We are running SQL Server Standard with SP3 on NT 4 (Service Pack 5 or 6, not sure which).

    I monitor the memory usage through Performance Monitor and it only goes up - if I run a big query the usage goes up but never comes down. As there is nothing else to grab memory, I have never considered this a problem. I run another server with SQL Server and other applications, and there the memory allocated to SQL Server does change as other apps need memory. Both servers are set up with dynamic memory.

    I'm not all that technical, but I have wondered why it continues to grab more memory rather than reusing what it has already got. Any ideas?

    Jeremy

  • quote:


    I'm not all that technical, but I have wondered why it continues to grab more memory rather than reusing what it has already got. Any ideas?


    This is not uncommon, and if there is a memory leak in SQL Server's address space it can be hard to detect. However, SQL Server should never take over the box's memory to the extent of stopping the OS from functioning. I would consider setting the max memory to leave at least 128 MB of memory for the OS and see what happens.
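
    On your 2 GB box, that would look something like this (1920 MB here is just the example figure):

    -- Cap SQL Server's memory, leaving ~128 MB for the OS on a 2 GB box
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    EXEC sp_configure 'max server memory (MB)', 1920
    RECONFIGURE
    GO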

  • Can you give me some tips as to where to look for memory leaks?

    In my app, I create a lot of temporary tables, dynamic SQL and cursors. I may not have tidied these up completely after each procedure, so would these be a source of leaks? I noticed that even when I submitted the query through QA, the memory usage increased but didn't decrease afterwards.
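
    For reference, the sort of cleanup I mean looks like this (a sketch with made-up names):

    -- Cursors: close AND deallocate when finished
    DECLARE c CURSOR FOR SELECT OrderID FROM Orders
    OPEN c
    -- ... FETCH loop here ...
    CLOSE c
    DEALLOCATE c

    -- Temp tables: drop explicitly rather than waiting for the
    -- connection to close
    CREATE TABLE #work (id INT)
    -- ... use #work ...
    DROP TABLE #work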

    Jeremy
