October 28, 2009 at 10:23 am
Hi Wes
Wesley Brown (10/28/2009)
I didn't see anyone mention
Thanks for the link. The advised hotfix didn't work in our case. The fix was included in SP1 (and again in SP3), but it seems to work for only a few people. Sadly not for us.
It also went away when we moved to 64 bit on another server.
That was one of the first things I suggested to the DBAs. They're worried about x64 since they haven't tested it yet. :pinch:
Greets
Flo
October 28, 2009 at 10:52 am
The DBA brought up a good idea. We'll switch the EAV database (which should cause the most load on the server) to another server tonight and monitor the error occurrence on both servers. If the error continues on only one server, we can concentrate our research on the processes working with that database.
October 29, 2009 at 5:04 am
Hey Flo,
I think you might want to try Wesley's idea. Think about it like tempdb: if memory is shrinking and growing with the load within the range you specified, it might need to grow faster than it can allocate, and you'll get the error. We set our min and max memory to the same value so that our servers start with all memory allocated. This works because our servers aren't sharing resources with any other apps.
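For reference, setting min and max server memory to the same value is done through sp_configure; the 16384 MB below is only a placeholder, not a recommendation for your box:

-- Both memory settings are advanced options, so expose them first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Pin min and max server memory to the same value (placeholder: 16384 MB)
-- so the buffer pool neither grows nor shrinks under load
EXEC sp_configure 'min server memory (MB)', 16384;
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;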
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 29, 2009 at 5:29 am
Hi Grant
I'll try changing the min/max memory allocation to the same value. Do you know if we have to restart the service after changing it?
Some new information
My guess is that it's related to the huge workload in PAE/AWE mode on an x86 OS. I'll speak with them again about a quick switch to x64. (I know "quick" is never the best solution - but don't forget, I'm a developer :-D. My "quick" means something within the next three or four weeks.)
Greets
Flo
October 29, 2009 at 5:42 am
I'm pretty sure that one can be done on the fly and doesn't require a restart (but I could be wrong). I'd have to look it up to be 100% sure.
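One way to check is to compare value and value_in_use in sys.configurations right after RECONFIGURE; if they match, the new setting took effect without a service restart (a minimal check, nothing version-specific):

-- is_dynamic = 1 means the option takes effect on RECONFIGURE, without a restart
SELECT name, value, value_in_use, is_dynamic
FROM sys.configurations
WHERE name IN (N'min server memory (MB)', N'max server memory (MB)');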
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 29, 2009 at 9:14 am
Hi Grant
Sorry for the late response.
It seems the problem no longer occurs since we split the databases across different servers. We'll change the memory allocation tomorrow.
The data-center team will start with the x64 evaluation, and we'll hopefully switch within the next few weeks.
Thanks for all your help!
Flo
November 25, 2009 at 9:22 am
Hello,
I may have also run into this problem.
I have a simple SQL 2008 database table and a query that simply inserts 70,000 rows into it.
Table design is along the following lines:
CREATE TABLE [dbo].[yyyzzzqqq] (
    [Low] [int] NOT NULL,
    [High] [int] NOT NULL,
    [Info] [char](10) NOT NULL,
    [Info2] [char](10) NOT NULL,
    CONSTRAINT [PK_yyyzzzqqq] PRIMARY KEY CLUSTERED
    (
        [Low],
        [High]
    ) ON [PRIMARY]
) ON [PRIMARY]
The 70,000 inserts will fail at a random point with the error message
"There is insufficient system memory in resource pool 'internal' to run this query."
This problem did not occur when the same queries were executed on a SQL 2000 version of the database.
Digging deeper, the SQL 2008 Error log reports:
2009-11-25 15:34:01.56 spid51 Failed allocate pages: FAIL_PAGE_ALLOCATION 1
2009-11-25 15:34:01.57 spid51 There is insufficient system memory in resource pool 'internal' to run this query.
Googling this, I found the following:
http://support.microsoft.com/kb/2000380
MS writes:
"The same TSQL Batch might get parsed, compiled and execute fine in SQL Server 2000 but fails with the above mentioned errors in SQL Server 2005 and SQL Server 2008. This behavior is by design. The T-SQL parser that shipped with SQL Server 2005 (and also included in later versions like SQL Server 2008) has been completely redesigned to accommodate new additions to T-SQL language and the product itself. Before execution of any batch T-SQL parser has to allocate memory for all the T-SQL statements to be lodaded and parsed. A large batch with thousands of inserts can therefore cumulatively consume a lot of SQL Server memory. To hold all the parse trees, the new parser consumes more memory than it used to in the older versions of the product like SQL Server 2000. This coupled with the fact that the newer versions of SQL Server (2005 and above) introduced an array of new memory consumers inside the product, results in the 701 error messages reported even for the batches that used to run successfully in the older versions of product like SQL Server 2000. You may encounter the above errors for this specific issue more predominantly on 32 bit (x86) platforms than on 64 bit platforms.
MS further writes:
"Rewrite the large batch so that it will have multiple smaller batches."
I have broken up the inserts into batches of 10,000 rows and the problem has disappeared.
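For anyone hitting the same thing, the reworked script looks roughly like this; the values are just stand-ins for whatever your generator produces:

-- Instead of one batch holding all 70,000 INSERT statements, emit a GO
-- after every 10,000 rows so the parser only has to build the parse tree
-- for one smaller batch at a time.
INSERT INTO [dbo].[yyyzzzqqq] ([Low], [High], [Info], [Info2]) VALUES (1, 100, 'a', 'b');
INSERT INTO [dbo].[yyyzzzqqq] ([Low], [High], [Info], [Info2]) VALUES (101, 200, 'a', 'b');
-- ... roughly 10,000 statements ...
GO
INSERT INTO [dbo].[yyyzzzqqq] ([Low], [High], [Info], [Info2]) VALUES (1000001, 1000100, 'a', 'b');
-- ... the next 10,000 statements ...
GO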
In my opinion, this is a bug in SQL 2008 as opposed to a design difference.
November 26, 2009 at 9:55 am
Glad you have an (interim) fix by moving the database, Florian. Two questions though:
1) Why -g1024? I have never seen anyone use more than 512 there, and most can get by with just 384.
2) Did you ever run DBCC MEMORYSTATUS and/or check out the various ring buffers? (A quick sketch of both is below.)
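For reference, checking both would look something like this; the resource-monitor ring buffer is just one of several types worth looking at:

-- Snapshot of every memory clerk and pool at the moment of the error
DBCC MEMORYSTATUS;

-- Recent low/high memory notifications recorded by the resource monitor
SELECT [timestamp], CAST(record AS xml) AS record_xml
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR'
ORDER BY [timestamp] DESC;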
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
November 28, 2009 at 12:43 am