March 30, 2012 at 10:15 pm
Windows Server 2003 Enterprise SP2, 32-bit
16 GB RAM
SQL Server 2005 Enterprise, 32-bit
AWE enabled
sp_configure 'locks' = 0 (dynamic lock allocation)
min server memory = 12 GB
max server memory = 12 GB
DB size = 1.6 TB
Free space inside DB = 700 GB
Simple recovery model
Transactional replication
Disk space available = 160 GB
I know shrinking is not good practice.
When I try to shrink the MDF file to release the 700 GB of free space, the command fails with an error after 3 to 4 minutes. I have disabled all running jobs, restarted the service, brought the database into single-user mode, grown the MDF by 10 MB, and tried the shrink again; it still fails. I also tried shrinking in small chunks of about 10 MB, with no luck. I turned on a trace and found error 1204 (the instance cannot obtain a lock resource).
Please advise which parameters I should capture to prove that there is not enough memory to handle the shrink operation.
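For reference, the incremental shrink I attempted was of this form (the logical file name and target size here are placeholders, not my actual values):

```sql
USE MyBigDB;
GO
-- Shrink toward a target size in MB, in small steps rather than all at once.
-- Check sys.database_files for the real logical file name.
DBCC SHRINKFILE (N'MyBigDB_Data', 1638400);
GO
```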
-------------
Update :
I captured some performance counters and DBCC MEMORYSTATUS output in the attached Excel sheet. I don't understand why DBCC SHRINKFILE fails when target memory is still available.
April 2, 2012 at 1:54 pm
Have you tried using NOTRUNCATE to move all pages towards the beginning of the file?
Try this:
DBCC SHRINKDATABASE(database_name, NOTRUNCATE);
Then try shrinking the data file again.
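A file-level variant of the same idea (logical file name is a placeholder) would be:

```sql
-- Consolidate used pages toward the front of the file; no space is released yet
DBCC SHRINKFILE (N'MyBigDB_Data', NOTRUNCATE);
-- Then release the now-empty space at the end of the file back to the OS
DBCC SHRINKFILE (N'MyBigDB_Data', TRUNCATEONLY);
```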
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
April 2, 2012 at 9:33 pm
I have tried both DBCC SHRINKDATABASE and DBCC SHRINKFILE, with and without NOTRUNCATE. No luck, still the same error.
Continuing my troubleshooting, I restarted the SQL service, ran the DBCC SHRINKFILE command again, and ran the statement below to get the number of locks. The shrink still failed.
SELECT request_session_id, COUNT(*) AS num_locks
FROM sys.dm_tran_locks
GROUP BY request_session_id
ORDER BY COUNT(*) DESC;
request_session_id  num_locks
56                  3576656
Every time, after I restart the SQL service, I start the DBCC SHRINKFILE and capture lock memory, target memory, and total memory.
Lock memory rises from 0.3 MB and stops at about 800 MB; it never goes over 900 MB. The same goes for total memory: no matter how many times I restart and rerun the same DBCC SHRINKFILE, it never reaches 3 GB. Is there an upper limit on lock memory, perhaps some percentage of the non-AWE total memory?
              KB        MB            GB
Lock memory   904408    883.210937    0.862510681
Target memory 12582912  12288.000000  12.000000000
Total memory  2356992   2301.750000   2.247802734
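The figures above can be pulled straight from the performance-counter DMV; the query I use is roughly:

```sql
-- Memory Manager counters; cntr_value is reported in KB
SELECT counter_name, cntr_value AS kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Lock Memory (KB)',
                       'Target Server Memory (KB)',
                       'Total Server Memory (KB)');
```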
April 2, 2012 at 10:03 pm
I read that lock memory will never exceed 40% of total SQL Server memory. I see you have AWE enabled; however, could it be that, since you are on a 32-bit OS, SQL Server will only allocate lock memory from your VAS, i.e. within the first 2 GB of memory available? 819 MB is 40% of 2 GB.
Do you have the /3GB switch enabled? If not, and you are free to experiment, try adding it to boot.ini to gain an extra GB of VAS (requires a reboot) and try again.
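For reference, a boot.ini entry with the switch added would look something like this (the ARC path and description are examples from a typical install, not your actual file):

```ini
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003 Enterprise" /fastdetect /PAE /3GB
```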
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
April 3, 2012 at 12:57 am
I made a slight change to the min and max memory settings: I set both min server memory and max server memory to 12 GB.
This time I can see the AWE allocated buffer pool memory change to 12 GB, but lock memory dropped to 553.75 MB. Target and total memory now show the same figure. The shrink still fails, though.
                           virtual_memory_  virtual_memory_  AWE Allocated
                           reserved_kb      committed_kb     kb
MEMORYCLERK_SQLBUFFERPOOL  1,645,208        126,356          12,582,912
MEMORYCLERK_SQLSTORENG     1,728            1,728            0
MEMORYCLERK_SQLUTILITIES   7,112            7,112            0
OBJECTSTORE_LOCK_MANAGER   4,096            4,096            0
              KB        MB            GB
Lock memory   567048    553.757812    0.540779113
Target memory 12582912  12288.000000  12.000000000
Total memory  12582912  12288.000000  12.000000000
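The memory change was made with sp_configure, roughly like this (values are in MB):

```sql
-- Pin min and max server memory to the same 12 GB value
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 12288; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288; RECONFIGURE;
```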
I also checked sys.dm_tran_locks and got the result below:
select request_session_id, count(*)
from sys.dm_tran_locks
group by request_session_id
order by count(*) desc
-- 65  4992824
-- 65  6714659
-- 65  8481323
April 3, 2012 at 8:35 am
Lock memory is allocated from the buffer pool, but it is still suspicious that you cannot see lock memory rising above ~800MB. Are you running the 3GB switch?
What are the results of this query on your system:
SELECT CAST(physical_memory_in_bytes / (1024.0 * 1024.0 * 1024.0) AS DECIMAL(20, 2)) AS physical_memory_in_GB,
CAST(virtual_memory_in_bytes / (1024.0 * 1024 * 1024) AS DECIMAL(20, 2)) AS VAS_GB,
CAST((bpool_committed * 8) / (1024.0 * 1024.0) AS DECIMAL(20, 2)) AS buffer_pool_committed_memory_in_GB,
CAST((bpool_commit_target * 8) / (1024.0 * 1024.0) AS DECIMAL(20, 2)) AS buffer_pool_target_memory_in_GB,
(
SELECT CAST(CAST(value_in_use AS INT) / 1024.0 AS DECIMAL(20, 2))
FROM sys.configurations
WHERE name = 'min server memory (MB)'
) AS min_server_memory_GB,
(
SELECT CAST(CAST(value_in_use AS INT) / 1024.0 AS DECIMAL(20, 2))
FROM sys.configurations
WHERE name = 'max server memory (MB)'
) AS max_server_memory_GB,
(
SELECT value_in_use
FROM sys.configurations
WHERE name = 'awe enabled'
) AS [awe_enabled]
FROM sys.dm_os_sys_info;
You could also be running in to issues with replication. Have you thought about disabling replication before trying the shrink?
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
April 3, 2012 at 10:24 am
I have not implemented /3GB; only /PAE and AWE are configured.
Query Output :
physical_memory_in_GB               15.99
VAS_GB                              2.00
buffer_pool_committed_memory_in_GB  12.00
buffer_pool_target_memory_in_GB     12.00
min_server_memory_GB                12.00
max_server_memory_GB                12.00
awe_enabled                         1
When running DBCC SHRINKFILE, I disable all the SQL Agent jobs, including the log reader, distribution, and snapshot agents, and set the database to single-user mode. I am not sure whether dropping all the published articles would make any difference.
April 3, 2012 at 10:37 am
Do you have another server where you can restore the database and try shrinking the files there?
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
April 3, 2012 at 10:45 am
I will try to locate a 64-bit SQL Server with 2 TB of disk space available so I can restore the database there. It will take some time.
Is there a way to trace VAS usage while I am running DBCC SHRINKFILE? I want to collect some figures to prove that it fails due to the VAS limitation.
April 3, 2012 at 10:53 am
This will show you all allocations against VAS:
SELECT type,
SUM(virtual_memory_committed_kb) AS virtual_memory_committed_kb,
SUM(multi_pages_kb) AS multi_pages_kb
FROM sys.dm_os_memory_clerks
WHERE virtual_memory_committed_kb > 0
OR multi_pages_kb > 0
GROUP BY type;
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
April 3, 2012 at 11:04 am
So, should I say that when running DBCC SHRINKFILE, if the sum of virtual memory committed across all clerk types grows larger than 2 GB, the error should be generated?
Why do I need the sum of the multi-page allocator for each clerk type?
Can I trace the sum of virtual memory committed across all clerk types through SQL Profiler?
April 3, 2012 at 11:25 am
stanley wong-314201 (4/3/2012)
So, should I say that when running DBCC SHRINKFILE, if the sum of virtual memory committed across all clerk types grows larger than 2 GB, the error should be generated?
I would not expect SQL Server to be able to reach 2GB, as the VAS is shared across all OS processes.
Why do I need the sum of the multi-page allocator for each clerk type?
These are allocations by clerks that usually use the buffer pool, but because an allocation is larger than 8 KB it has to be made outside the buffer pool, which consumes VAS. An example is the space needed for a very large query plan.
Can I trace the sum of virtual memory committed across all clerk types through SQL Profiler?
I do not think you can see that using Profiler. You may be able to trace the total via PerfMon, but I am not sure which counter that would be.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato