February 19, 2010 at 6:50 am
It's probably a good time to come back to this thread and let you know how our issue panned out. We thought that the -g flag had solved the problem, but in fact this just extended our uptime from two to five weeks.
As Paul has just hinted at, the problem was a particular release of a legacy extended stored procedure DLL. We only realised this when a couple of months had passed since it was updated, with no memory issues in that time. A quick rollback on one of our test systems and the problem was back.
We're using CLR on all new work, and hopefully we'll find some time to go back over the old ones and update those too.
Regards
Jon
February 19, 2010 at 6:54 am
Thanks for the great feedback there Jon. CLR is definitely a superior option to extended stored procedures!
Paul
February 19, 2010 at 9:46 am
Paul White (2/19/2010)
Thanks for the great feedback there Jon. CLR is definitely a superior option to extended stored procedures!
Paul
But in order to do things that are done in many XPs one has to have 'unsafe' CLR, which gets you back to square 1.2. 🙂 And if you are on 32-bit, CLR eats up the same MemToLeave that XPs do, IIRC.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 19, 2010 at 3:35 pm
TheSQLGuru (2/19/2010)
But in order to do things that are done in many XPs one has to have 'unsafe' CLR, which gets you back square 1.2. 🙂
Even CLR assemblies marked with the UNSAFE permission set are subject to hosting controls and extensive protections. CLR assemblies are always a better choice than an XP - especially considering that XPs are deprecated. There is a large section of Books Online dedicated to this topic - the interested reader could do worse than start by looking at the CLR Hosted Environment topic.
TheSQLGuru (2/19/2010)
And if you are on 32 bit, CLR eats up the same memtoleave that XPs do IIRC.
I would state that differently. When a CLR routine requests memory, it goes through the usual SQL Server allocation routines. It does happen to end up being allocated from the multi-page allocator (non-buffer pool), but SQL Server has control. XPs run in SQL Server's process space and can directly allocate memory without reference to SQL Server. This is inherently less safe, and one of the reasons XPs are deprecated.
The limited address space of 32-bit SQL Server means that it is probably a poor choice for many systems. Personally, I very much look forward to SQL Server becoming a 64-bit-only application...:-)
Paul
April 28, 2010 at 4:35 pm
Hello,
I have a similar problem (first post). I'm converting databases, importing some 2 GB files via BULK INSERT and doing some operations on those 2 GB tables. I'm using SQL Server 2008 x64 on Windows 7.
The same error occurs once 5.8 GB of the 6 GB of RAM is in use:
There is insufficient system memory in resource pool 'internal' to run this query.
and also
[...] The error is printed in terse mode because there was error during formatting. Tracing, ETW
I resolved this issue by limiting SQL Server's memory. The default value seems far beyond the memory I had on my PC (6 GB). I tried 2048 MB for Maximum Server Memory (server properties) and then everything worked fine (even if it runs slower).
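For anyone who prefers to make the same change in T-SQL rather than through the Server Properties dialog, a minimal sketch (2048 MB is just the value that worked here; pick a cap that suits your machine):

```sql
-- Limit how much memory SQL Server's buffer pool will take.
-- 'max server memory' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;
```

The change takes effect immediately, no restart needed.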
PoP
May 24, 2010 at 6:45 am
Looks like a side-by-side install issue known with the SQL 2000 tools, which has been fixed for SQL 2008 R2. The issue is that for sql80 there are registry keys missing for either Version or Language. What the fix tried to do was populate those fields based on other existing sql80 features detected by discovery. The workaround is to go through the sql80 features and copy the registry keys for Version and/or Language from existing features to the ones that do not have them. We could not determine why those keys are missing.
🙂
Thanks
DBATAG
Website : http://sqlserver-training.com
Email : dbatag@sqlServer-Training.com
July 1, 2010 at 12:45 pm
I cannot find where you made this change, as I am having the same problem. Can you guide me to where the setting is located so I can try this as well?
Steve
July 2, 2010 at 3:11 am
The -g parameter is a start-up flag. You can set it by running SQL Server Configuration Manager (in the Configuration Tools sub-menu) and selecting SQL Server Services in the left panel. Your database instances will be listed on the right as SQL Server (instance name).
Double-click the instance you want to configure and select the Advanced tab. Scroll down to the Startup Parameters and expand the entry by clicking the down-arrow to the right. You will see a string of path assignments for the master database and error log files, each separated by a semi-colon. You may have some -T trace flags here too.
Either way, append the string ;-g384 directly onto the end, with no spaces anywhere, then restart the database instance.
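To illustrate, the Startup Parameters entry is a single semi-colon-separated string; the paths below are hypothetical examples for a default 2008 instance (yours will differ), and the only change is the ;-g384 appended at the end:

```
-dC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG;-lC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\mastlog.ldf;-g384
```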
Good luck, but as I've suggested before, this probably just hides an issue with a leaky .dll - we can still provoke ours into memory pressure if we absolutely hammer it under test conditions. Unfortunately, the current economics mean we have to add more stuff to bring in revenue, rather than fix annoying issues like this :rolleyes:
Cheers
Jon
July 20, 2010 at 5:06 am
Thank you for your solution!
(First of all, sorry for my English.)
We had been suffering from the same "insufficient system memory" error and others (unexpected and severe errors) for more than a week, with no idea how to solve it. We had to restart SQL Server (2008) four or five times a day, and changes to the memory settings did not seem to improve things at all.
But the other day we tried the -g startup flag (-g384) as you specified, and everything is OK now. Our database triggers have been running without any problems for almost two weeks.
I'm very grateful to have consulted this forum and read your response. Thank you very much again, Jon!
Elena (from Asturias-Spain)
August 17, 2010 at 4:03 am
Hi Andy,
Is your problem solved? I am also having a similar issue: the website is showing a 503 Service Unavailable error, which we suspect is because of the DB.
September 10, 2010 at 2:14 pm
Andy,
Is your problem solved?
October 8, 2010 at 9:58 am
Josef,
I've got the same problem as you described. Have you found a solution?
Thank you, Val
February 14, 2011 at 12:39 am
The 'internal' pool is internal to SQL Server; it can use all the available memory. You should run DBCC MEMORYSTATUS and paste the output here.
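For reference, that's a one-liner, and on 2005/2008 the memory clerk DMV gives a queryable summary of much the same information (the column names below are the pre-2012 ones, single_pages_kb / multi_pages_kb):

```sql
-- Full dump of memory state, per clerk and per resource pool:
DBCC MEMORYSTATUS;

-- Or a sortable summary of where the memory is going:
SELECT type,
       SUM(single_pages_kb + multi_pages_kb) AS total_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY total_kb DESC;
```

Run it while the error is occurring if you can, since memory state changes quickly.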
February 17, 2011 at 9:00 am
Hi Josep:
Is the server 64-bit? I was facing an out-of-memory issue on SQL Server 2008 64-bit. Unlike the 32-bit engine, the 64-bit engine does not reserve a portion of VAS when it starts; that portion, also known as MemToLeave, is used for executing some requests (extended procedures, COM objects / OLE Automation calls, linked servers, OLE DB providers, SQL CLR). In a 64-bit environment, if you don't limit the maximum amount of memory used by SQL Server, the server can run out of memory, because the buffer pool can consume all the memory available for user mode.
You will be unable to make new connections to the engine when this issue arises, and eventually the service will stop or restart.
Hope this could help!
Alejandro
IT Specialist - MCITP DBA SQL 2008
July 18, 2011 at 1:46 am
Hi,
We have exactly the same problem with our SQL server.
We have SQL 2008 32bit on Windows Server 2003 32bit
We have 32 GB RAM, using PAE, and we dedicated 28 GB of RAM to MSSQL.
We are not using the -g384 switch in the startup parameters.
Do you have any idea how to fix this error?
Viewing 15 posts - 16 through 30 (of 38 total)