February 18, 2013 at 4:54 am
Hi,
I have some confusion, so I need some help.
I am using the statement below in SQL Server to reduce fragmentation:
ALTER INDEX PK_AccountCodeMaster_1 ON AccountCodeMaster REBUILD;
but in the output I always get a value of "avg_page_space_used_in_percent" less than 10.
Can you please suggest how I can increase this value, and guide me as to why SQL Server does not automatically raise it above 10 during the rebuild?
Is there a problem in my understanding?
-----------------------------------------------------------
Table_Name : CanvasClaimsRuralMarkup
Index_Name : PK_CanvasClaimsRuralMarkup
avg_fragmentation_in_percent : 0
avg_page_space_used_in_percent : 7.042253521
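For reference, figures like these typically come from sys.dm_db_index_physical_stats. A minimal sketch of such a query, assuming the table name from the output above (note that avg_page_space_used_in_percent is only populated in SAMPLED or DETAILED mode, not LIMITED):

```sql
-- Check fragmentation and page fullness for one table's indexes.
SELECT OBJECT_NAME(ips.object_id)  AS Table_Name,
       i.name                      AS Index_Name,
       ips.avg_fragmentation_in_percent,
       ips.avg_page_space_used_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.AccountCodeMaster'),
         NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;
```

The page_count column, also returned here, turns out to be the key to interpreting the low page-space figure, as the replies below show.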
Regards,
Sachin.
February 18, 2013 at 5:36 am
How big is the table? How many pages?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 18, 2013 at 6:55 am
Hi Gail,
By mistake I gave you the wrong table.
I am using the statement below in SQL Server to reduce fragmentation:
ALTER INDEX PK_JG_UserLogin ON UserLogins REBUILD;
In the output I get the values below (before and after the rebuild):
Table_Name : UserLogins
Index_Name : PK_JG_UserLogin
avg_fragmentation_in_percent : 80
avg_page_space_used_in_percent : 84.20311342
Table_Name : UserLogins
Index_Name : PK_JG_UserLogin
avg_fragmentation_in_percent : 0
avg_page_space_used_in_percent : 1.198418582
As per my understanding, after defragmentation I would expect values like:
Table_Name : UserLogins
Index_Name : PK_JG_UserLogin
avg_fragmentation_in_percent : 0
avg_page_space_used_in_percent : 98.20311342
Table_Name : UserLogins
Index_Name : PK_JG_UserLogin
avg_fragmentation_in_percent : 0
avg_page_space_used_in_percent : 98.198418582
DBCC SHOWCONTIG (UserLogins);
------------------------
DBCC SHOWCONTIG scanning 'UserLogins' table...
Table: 'UserLogins' (515532920); index ID: 0, database ID: 5
TABLE level scan performed.
- Pages Scanned................................: 30
- Extents Scanned..............................: 7
- Extent Switches..............................: 6
- Avg. Pages per Extent........................: 4.3
- Scan Density [Best Count:Actual Count].......: 57.14% [4:7]
- Extent Scan Fragmentation ...................: 71.43%
- Avg. Bytes Free per Page.....................: 1762.1
- Avg. Page Density (full).....................: 78.23%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
----------------------------------------------------------------
ObjectID : 515532920
IndexID : 2
PercentFragment : 0
TotalFrags : 1
PagesPerFrag : 1
NumPages : 1
So, would you please suggest why SQL Server does not defragment as per my understanding? I have read an article on this.
As per the article, a value of "avg_fragmentation_in_percent" below 10 is good, and a value of "avg_page_space_used_in_percent" above 95 is good.
If there is any problem in my understanding, please guide me.
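As an aside, on a multi-page index the page fullness achieved by a rebuild is governed by the fill factor. A sketch, using the index name from this post (FILLFACTOR = 100 is the assumption here, not something stated in the thread):

```sql
-- Rebuild packing pages as full as possible (fill factor 100).
-- On a multi-page index this pushes avg_page_space_used_in_percent
-- toward 100; on a one-page index the metric simply reflects how
-- full that single page is, and a rebuild cannot change it.
ALTER INDEX PK_JG_UserLogin ON dbo.UserLogins
REBUILD WITH (FILLFACTOR = 100);
```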
Regards,
Sachin.
February 18, 2013 at 7:26 am
dsachu (2/18/2013)
So, would you please suggest why SQL Server does not defragment as per my understanding? I have read an article on this.
Because the table's far too small for you to care about fragmentation much. The index in question is a single page.
ObjectID : 515532920
IndexID : 2
PercentFragment : 0
TotalFrags : 1
PagesPerFrag : 1
NumPages : 1
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 18, 2013 at 7:42 am
So when the number of pages is small, fragmentation does not follow the rule. Right?
Regards,
Sachin.
February 18, 2013 at 7:56 am
When the number of pages is low, you probably don't want to worry about fragmentation. Besides, you can't defragment a 1 page index.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
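The advice above (ignore fragmentation on tiny indexes) is commonly implemented by filtering on page count before deciding what to rebuild. A sketch using a conventional 1000-page threshold; the threshold and the 30% fragmentation cut-off are rules of thumb, not figures from this thread:

```sql
-- List only indexes large enough for fragmentation to matter.
SELECT OBJECT_NAME(ips.object_id)  AS Table_Name,
       i.name                      AS Index_Name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count > 1000                      -- skip small indexes
  AND ips.avg_fragmentation_in_percent > 30;    -- rebuild candidates
```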
June 10, 2016 at 4:55 am
Hi
We have a table with a LOB column (varchar(MAX)) which we update to NULL after a period of time.
It holds historical, date-sensitive archive/operational data used for tracking and processing.
Row count is in the millions; the table is 15 columns wide (only 1 is a LOB).
DBCC SHOWCONTIG scanning 'EmailQueue' table...
Table: 'EmailQueue' (244195920); index ID: 1, database ID: 32
TABLE level scan performed.
- Pages Scanned................................: 3485182
- Extents Scanned..............................: 437322
- Extent Switches..............................: 438993
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.24% [435648:438994]
- Logical Scan Fragmentation ..................: 0.44%
- Extent Scan Fragmentation ...................: 2.23%
- Avg. Bytes Free per Page.....................: 7347.2
- Avg. Page Density (full).....................: 9.23%
The table is huge, but mostly empty...
5 of the fields are defined as varchar(4000) and, as far as I understand, they only consume space based on their actual content. Is this impacting the free space? [These fields are not affected by the nulling process.]
Essentially, the bulk of the allocated space belongs to the column that is nulled out, but that free space is not reused by subsequent inserts. The table just grows and grows... any ideas?
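Space freed by nulling LOB values is often only reclaimed by an explicit reorganize with LOB compaction. A sketch; the index name PK_EmailQueue is an assumption (the SHOWCONTIG output shows index ID 1, i.e. a clustered index, so substitute its actual name):

```sql
-- Compact LOB pages left behind after the varchar(MAX) values are nulled.
-- LOB_COMPACTION defaults to ON for REORGANIZE, but stating it makes
-- the intent explicit.
ALTER INDEX PK_EmailQueue ON dbo.EmailQueue
REORGANIZE WITH (LOB_COMPACTION = ON);
```

An ALTER INDEX ... REBUILD would also rewrite the in-row pages (addressing the 9.23% average page density shown above), at the cost of a heavier operation on a multi-million-row table.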