Error: Insufficient memory available... Does it help to add the /3GB switch in boot.ini when the box has only 2GB of RAM?

  • 1.

    I'm running a time- and memory-consuming operation on SQL Server 2000 (bulk copy from a big XML file). The box is Windows 2000 Advanced Server with 2GB RAM. SQL Server 2000 has SP4.

    I'm getting the following error:

    "Error: 17803, Severity: 20, State: 12

    2006-06-02 18:20:30.13 spid53 Insufficient memory available.."

    I read several forums and learnt that adding the /3GB switch in boot.ini will make SQL Server recognize more than 2GB.

    My question here is: since the box has only 2GB, does it make sense to add the /3GB switch? Does it help? If not, what else would be the solution?

    2.

    I have another box which has the same settings (in fact less RAM, ~1.5GB). This box performs exactly the same operations as the above one, but finishes gracefully without giving me any error.

    Any clue why the above one errors out and not this one?

    Appreciate your help on these.

    Thanks

    Jai

  • No, it makes absolutely no sense to add the /3GB switch if your box only has 2GB in total. In fact, it could make things worse: the /3GB switch works by restricting the OS to just 1GB of addressable memory instead of the normal 2GB it would have, so that on a 4GB machine a user process can address 3GB. I'm not sure whether this has any practical effect on a 2GB machine, but since you can't give 3GB to SQL Server anyway, I'd leave the switch well alone. In addition, if you are not using SQL 2000 Enterprise Edition, you can't use more than 2GB in any case.
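    For reference, on machines where it does apply, the switch is simply appended to the OS boot entry in boot.ini (the disk/partition path below is illustrative, not taken from the poster's server):

    ```ini
    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINNT

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Advanced Server" /fastdetect /3GB
    ```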

    Many, many factors affect memory usage, and it's very difficult to speculate on why one machine succeeds with apparently less memory than one which fails. Clearly, there are other processes with a higher memory demand running on your bigger server, but how is SQL Server itself configured? Data cache size? Number of users? Identical database structure - e.g. is one server perhaps table-scanning and taking out more locks than the other? Number of CPUs - is one server using parallel processing and the other not? I could go on...

    To answer your question about alternative solutions if allocating 3GB makes no sense: again, "it depends", I'm afraid! Adding more memory is never a bad thing (you can never have too much), so if your server will accept more, this could be a quick and comparatively cheap solution. However, don't overlook other equally important issues, such as query performance tuning. SQL queries that are heavy on resources (those which perform multiple table scans, those which have to perform large sorts, those which join many tables together) all make heavy use of memory.

    Try to optimize these to eliminate sorts where possible (can you put a clustered or a covering index on the desired columns so that they are already sorted?). Try to eliminate large table scans - use suitable indexes, ensure that FK constraints are indexed at the child end, etc. Try to avoid large numbers of tables in a join - consider denormalising certain tables if you frequently perform queries that require 5, 6 or more tables to be joined. Don't forget that if a member of a join is actually a view, the view can itself join many tables together, which adds to the complexity of the overall query.
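    As a sketch of the indexing advice above (the table and column names are hypothetical):

    ```sql
    -- Hypothetical example: rows are physically stored in (OrderDate)
    -- order, so ORDER BY OrderDate needs no separate sort step.
    CREATE CLUSTERED INDEX IX_Orders_OrderDate
        ON dbo.Orders (OrderDate)

    -- Index the FK at the child end so joins back to the parent
    -- table can seek instead of scanning the child table.
    CREATE INDEX IX_OrderLines_OrderID
        ON dbo.OrderLines (OrderID)
    ```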

  • You might want to try batching your input.

    Are you using sp_xml_preparedocument? If so, this can give memory issues in certain conditions.
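    For anyone who is using it: sp_xml_preparedocument holds the parsed document in SQL Server's memory until it is explicitly released, so it should always be paired with sp_xml_removedocument (the XML content and column names below are made up for illustration):

    ```sql
    DECLARE @idoc int
    DECLARE @xml varchar(1000)
    SET @xml = '<root><row id="1" name="alpha"/><row id="2" name="beta"/></root>'

    -- Parse the document; the handle pins memory until removed.
    EXEC sp_xml_preparedocument @idoc OUTPUT, @xml

    -- Shred the XML attribute-centrically (flag 1) into a rowset.
    SELECT id, name
    FROM OPENXML(@idoc, '/root/row', 1)
         WITH (id int, name varchar(50))

    -- Release the parsed document's memory.
    EXEC sp_xml_removedocument @idoc
    ```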

    To be honest, using Advanced Server with 2GB RAM is a tad wasteful - you pay extra for Advanced Server because it can access more RAM - so you might like to add some RAM to your server.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Philip,

    Thanks a lot. I appreciate your help.

    Both my boxes are dedicated to running SQL Server and have 2 CPUs, with parallel processing enabled on both (though I'm not sure if it is actually being used!). I couldn't find anything to differentiate the two boxes.

    I tried to narrow down the problem based on your thoughts. I suppressed the bulk copy for the time being and I'm running everything else (the other costly operations). These execute successfully without any issues, even though they take longer. Performance is a little slow here since the table is huge and huge numbers of insertions/updates are happening. Would a clustered index help here? Even if it is slow, it works! Now I'm planning to handle the bulk copies separately. Now that I have isolated the bulk copy from the other operations, I'm hoping this goes well too. Let me know your thoughts.

    Thanks again

    Jai

  • Thanks, Colin.

    I'm not using sp_xml_preparedocument. I'm using SQLXML 3.0 bulk copy.

    I have requested 1GB more RAM so that I can make use of the /3GB switch. I'm waiting for this.

    Thanks again

    Jai

  • Jai,

    Even adding 1GB more memory isn't really going to help much. Currently the machine only has 2GB, so the most you can really allocate to SQL Server is about 1.7GB without starving the OS.

    If you add an extra 1GB, taking the total memory to 3GB, you still can't take SQL Server above 2GB, because you are only using Standard Edition (I believe - you didn't say you were using Enterprise Edition?).

    If you have Enterprise Edition SQL 2000, then you can allocate more than 2GB if you use the /3GB switch. However, this is at the expense of the operating system. If you do actually have Enterprise Edition, I really wouldn't take it much past 2.5GB on a 3GB machine, so you're only going to gain 700MB or so over what you have already.

    Is an extra 1 Gb really all that you can add? It's more than enough if you're using Standard Edition, but if you are using Enterprise then I'd go for at least 8Gb.
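    The cap discussed above is set with sp_configure; the 2560 MB value below just illustrates the "don't take it much past 2.5GB on a 3GB machine" advice and should be adjusted to your own box:

    ```sql
    -- Make the advanced options (including max server memory) visible.
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE

    -- Cap SQL Server's memory at ~2.5GB (illustrative value only).
    EXEC sp_configure 'max server memory (MB)', 2560
    RECONFIGURE
    ```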

    Regarding Bulk Insert (or BCP): the fastest way to bulk insert is to have NO indexes at all on the table, and to have the database in either the SIMPLE or BULK_LOGGED recovery model. If there is any index on the table, every insert will be fully logged. Ideally, bulk copy into a specific database containing a holding table: put that database in BULK_LOGGED mode, and have no indexes on the holding table. Once you've imported the data, you can insert it into the real table using a join.

    Presumably, on your table, you can't drop the index, as the table sounds too big to rebuild it again. Of all the indexes, a clustered index can be one of the most obtrusive, unless your raw data arrives already sorted in clustered index order. If it does not, the inserts will land randomly throughout the table, probably causing lots of page splits.

    If you're inserting lots of data, then much better to have a dedicated holding table in its own database, and update the real table afterwards.
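    The holding-table pattern described above can be sketched like this (the database, table, and file names are all hypothetical):

    ```sql
    -- Staging database uses minimal logging for bulk operations.
    ALTER DATABASE StagingDB SET RECOVERY BULK_LOGGED

    -- Load into an index-free holding table.
    BULK INSERT StagingDB.dbo.OrdersStage
    FROM 'C:\import\orders.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK)

    -- Then move the rows into the real table with one set-based
    -- insert, e.g. skipping any rows that already exist.
    INSERT INTO ProdDB.dbo.Orders (OrderID, OrderDate, Amount)
    SELECT s.OrderID, s.OrderDate, s.Amount
    FROM StagingDB.dbo.OrdersStage AS s
    LEFT JOIN ProdDB.dbo.Orders AS o ON o.OrderID = s.OrderID
    WHERE o.OrderID IS NULL
    ```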
