June 6, 2024 at 11:55 am
Hi all.
I have been working in a SQL environment that is really not in good shape, due to legacy systems mixed with new systems, so I need to ask a bit of a stupid question. I have run a few different SQL scripts that recommend creating indexes, including Brent Ozar's First Responder Kit; mostly I use the First Responder Kit scripts. Here is where I am a bit stuck and not sure what to do.
Normally I create an index with no more than 5 columns. This time I am getting different recommendations: these are recommending adding between 15 and 30 columns, and the avg estimated cost is between 65% and 100%. Screenshot attached.
Has anyone added an index with so many columns?
June 6, 2024 at 4:10 pm
Scripts that give index recommendations are often a good starting point, but you need to understand what is going on behind the scenes. I generally deal with OLTP databases, so I would be very careful with any index with more than five columns. Maybe 15 columns would be okay for an OLAP database. It is usually worth spending the time to get the clustered index correct. You may want to read something like:
Expert Performance Indexing in Azure SQL and SQL Server 2022
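To make that concrete (purely illustrative, with made-up table and column names, not anything from your screenshot): when the DMVs suggest a very wide index, the usual reshaping is to keep only the selective, frequently filtered columns as keys and push the rest into INCLUDE, something like:
/* Hypothetical names, for illustration only.
   Suppose the missing-index DMVs suggest a 20+ column index such as:
       CREATE INDEX IX_Suggested ON dbo.Orders (CustomerID, OrderDate, Status, Region, ...);
   A narrower alternative keeps a short key and covers the query via INCLUDE: */
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
    ON dbo.Orders (CustomerID, OrderDate)   -- selective equality/inequality columns only
    INCLUDE (Status, Region, TotalDue);     -- columns needed only to cover the SELECT list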
June 6, 2024 at 5:00 pm
You can't really accurately consider one index in isolation. You need to look at all indexes on the table AND at all uses of the table.
sys.indexes will give you the first; the missing_index and index_usage DMVs will give you the second.
This script will provide the minimum info needed. Be sure to specify the specific table name you want to review indexes on. Note: the script adjusts some system settings, but it puts them back to their original values if it modifies them.
Please post the two result sets the script returns.
/* Capture system settings that will be changed so that they can be reset to their original values at the end of this script. */
DECLARE @deadlock_priority smallint
DECLARE @transaction_isolation_level smallint
SELECT @deadlock_priority = deadlock_priority, @transaction_isolation_level = transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID
SET DEADLOCK_PRIORITY -8; /* "tell" SQL that if this task somehow gets into a deadlock, cancel THIS task, NOT any other one */
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
--**********************************************************************************************************************
SET NOCOUNT ON;
DECLARE @filegroup_name nvarchar(128)
DECLARE @include_system_tables bit
DECLARE @list_filegroup_and_drive_summary bit
DECLARE @list_missing_indexes bit
DECLARE @list_missing_indexes_summary bit
DECLARE @list_total_size bit
DECLARE @max_compression int
DECLARE @order_by smallint /* -2=size DESC; 1=table_name ASC; 2=size ASC; */
DECLARE @table_name_pattern sysname
DECLARE @table_name_exclude_pattern sysname = '#%'
SET @list_missing_indexes = 1 --NOTE: can take some time, set to 0 if you don't want to wait.
SET @list_missing_indexes_summary = 0 /* not available unless you uncomment the code for it, which requires the DelimitedSplit8K function */
SET @order_by = -2 /* -2=size DESC; 1=table_name ASC; 2=size ASC; */
SET @list_total_size = 1
SET @filegroup_name = '%'
SET @table_name_pattern = '%'
IF @include_system_tables IS NULL
SET @include_system_tables = CASE WHEN DB_NAME() IN ('master', 'msdb', 'tempdb') THEN 1 ELSE 0 END
SET @list_filegroup_and_drive_summary = 0
DECLARE @debug smallint
DECLARE @format_counts smallint -- 1 = comma-formatted; 2/3 = K (thousands) / M (millions) suffix with 0 / 1 decimal places
DECLARE @include_schema_in_table_names bit
DECLARE @sql_startup_date datetime
DECLARE @total decimal(19, 3)
SELECT @sql_startup_date = create_date
FROM sys.databases WITH (NOLOCK)
WHERE name = 'tempdb'
SET @include_schema_in_table_names = 1
SET @format_counts = 3
SET @debug = 0
IF OBJECT_ID('tempdb.dbo.#index_missing') IS NOT NULL
DROP TABLE dbo.#index_missing
IF OBJECT_ID('tempdb.dbo.#index_operational_stats') IS NOT NULL
DROP TABLE dbo.#index_operational_stats
IF OBJECT_ID('tempdb.dbo.#index_specs') IS NOT NULL
DROP TABLE dbo.#index_specs
IF OBJECT_ID('tempdb.dbo.#index_usage') IS NOT NULL
DROP TABLE dbo.#index_usage
SELECT *
INTO #index_operational_stats
FROM sys.dm_db_index_operational_stats ( DB_ID(), NULL, NULL, NULL )
CREATE TABLE dbo.#index_specs (
object_id int NOT NULL,
index_id int NOT NULL,
min_compression int NULL,
max_compression int NULL,
drive char(1) NULL,
alloc_mb decimal(9, 1) NOT NULL,
alloc_gb AS CAST(alloc_mb / 1024.0 AS decimal(9, 3)),
used_mb decimal(9, 1) NOT NULL,
used_gb AS CAST(used_mb / 1024.0 AS decimal(9, 3)),
rows bigint NULL,
table_mb decimal(9, 1) NULL,
table_gb AS CAST(table_mb / 1024.0 AS decimal(9, 3)),
size_rank int NULL,
approx_max_data_width bigint NULL,
max_days_active int,
UNIQUE CLUSTERED ( object_id, index_id )
) --SELECT * FROM #index_specs
--**********************************************************************************************************************
PRINT 'Started @ ' + CONVERT(varchar(30), GETDATE(), 120)
DECLARE @is_compression_available bit
DECLARE @sql varchar(max)
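/* Compression details are collected only when the edition is Developer or Enterprise, or the reported product version is 14.x or later (SQL Server 2017+). */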
IF (CAST(SERVERPROPERTY('Edition') AS varchar(40)) NOT LIKE '%Developer%' AND
CAST(SERVERPROPERTY('Edition') AS varchar(40)) NOT LIKE '%Enterprise%')
AND (CAST(SERVERPROPERTY('ProductVersion') AS varchar(30)) NOT LIKE '1[456789]%.%')
SET @is_compression_available = 0
ELSE
SET @is_compression_available = 1
SET @sql = '
INSERT INTO #index_specs ( object_id, index_id,' +
CASE WHEN @is_compression_available = 0 THEN '' ELSE '
min_compression, max_compression,' END + '
alloc_mb, used_mb, rows )
SELECT
base_size.object_id,
base_size.index_id, ' +
CASE WHEN @is_compression_available = 0 THEN '' ELSE '
base_size.min_compression,
base_size.max_compression,' END + '
(base_size.total_pages + ISNULL(internal_size.total_pages, 0)) / 128.0 AS alloc_mb,
(base_size.used_pages + ISNULL(internal_size.used_pages, 0)) / 128.0 AS used_mb,
base_size.row_count AS rows
FROM (
SELECT
dps.object_id,
dps.index_id, ' +
CASE WHEN @is_compression_available = 0 THEN '' ELSE '
MIN(p.data_compression) AS min_compression,
MAX(p.data_compression) AS max_compression,' END + '
SUM(dps.reserved_page_count) AS total_pages,
SUM(dps.used_page_count) AS used_pages,
SUM(CASE WHEN dps.index_id IN (0, 1) THEN dps.row_count ELSE 0 END) AS row_count
FROM sys.dm_db_partition_stats dps ' +
CASE WHEN @is_compression_available = 0 THEN '' ELSE '
INNER JOIN sys.partitions p WITH (NOLOCK) ON
p.partition_id = dps.partition_id ' END + '
--WHERE dps.object_id > 100
WHERE OBJECT_NAME(dps.object_id) LIKE ''' + @table_name_pattern + ''' AND
OBJECT_NAME(dps.object_id) NOT LIKE ''' + @table_name_exclude_pattern + '''
GROUP BY
dps.object_id,
dps.index_id
) AS base_size
LEFT OUTER JOIN (
SELECT
it.parent_id,
SUM(dps.reserved_page_count) AS total_pages,
SUM(dps.used_page_count) AS used_pages
FROM sys.internal_tables it WITH (NOLOCK)
INNER JOIN sys.dm_db_partition_stats dps WITH (NOLOCK) ON
dps.object_id = it.parent_id
WHERE it.internal_type IN ( ''202'', ''204'', ''211'', ''212'', ''213'', ''214'', ''215'', ''216'' )
GROUP BY
it.parent_id
) AS internal_size ON base_size.index_id IN (0, 1) AND internal_size.parent_id = base_size.object_id
'
IF @debug >= 1
PRINT @sql
EXEC(@sql)
--**********************************************************************************************************************
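/* Approximate each index's maximum data width: its own columns plus, for nonclustered indexes, the clustered key columns carried implicitly (MAX-length types counted as 16 bytes). Then rank tables by total allocated size. */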
UPDATE [is]
SET approx_max_data_width = index_cols.approx_max_data_width
FROM #index_specs [is]
INNER JOIN (
SELECT index_col_ids.object_id, index_col_ids.index_id,
SUM(CASE WHEN c.max_length = -1 THEN 16 ELSE c.max_length END) AS approx_max_data_width
FROM (
SELECT ic.object_id, ic.index_id, ic.column_id
--,object_name(ic.object_id)
FROM sys.index_columns ic
WHERE
ic.object_id > 100
UNION
SELECT i_nonclus.object_id, i_nonclus.index_id, ic_clus.column_id
--,object_name(i_nonclus.object_id)
FROM sys.indexes i_nonclus
CROSS APPLY (
SELECT ic_clus2.column_id
--,object_name(ic_clus2.object_id),ic_clus2.key_ordinal
FROM sys.index_columns ic_clus2
WHERE
ic_clus2.object_id = i_nonclus.object_id AND
ic_clus2.index_id = 1 AND
ic_clus2.key_ordinal > 0 --technically superfluous, since clus index can't have include'd cols anyway
) AS ic_clus
WHERE
i_nonclus.object_id > 100 AND
i_nonclus.index_id > 1
) AS index_col_ids
INNER JOIN sys.columns c ON c.object_id = index_col_ids.object_id AND c.column_id = index_col_ids.column_id
GROUP BY index_col_ids.object_id, index_col_ids.index_id
) AS index_cols ON index_cols.object_id = [is].object_id AND index_cols.index_id = [is].index_id
UPDATE ispec
SET table_mb = ispec_ranking.table_mb,
size_rank = ispec_ranking.size_rank
FROM #index_specs ispec
INNER JOIN (
SELECT *, ROW_NUMBER() OVER(ORDER BY table_mb DESC, rows DESC, OBJECT_NAME(object_id)) AS size_rank
FROM (
SELECT object_id, SUM(alloc_mb) AS table_mb, MAX(rows) AS rows
FROM #index_specs
GROUP BY object_id
) AS ispec_allocs
) AS ispec_ranking ON
ispec_ranking.object_id = ispec.object_id
--**********************************************************************************************************************
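/* List missing-index recommendations for the current database, joined to table size/rank info and formatted row counts. */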
IF @list_missing_indexes = 1
BEGIN
SELECT
IDENTITY(int, 1, 1) AS ident,
DB_NAME(mid.database_id) AS Db_Name,
CONVERT(varchar(10), GETDATE(), 120) AS capture_date,
ispec.size_rank, ispec.table_mb,
CASE WHEN @format_counts = 1 THEN REPLACE(CONVERT(varchar(20), CAST(dps.row_count AS money), 1), '.00', '')
WHEN @format_counts = 2 THEN CAST(CAST(dps.row_count * 1.0 / CASE ca1.row_count_suffix
WHEN 'M' THEN 1000000 WHEN 'K' THEN 1000 ELSE 1 END AS int) AS varchar(20)) + ca1.row_count_suffix
WHEN @format_counts = 3 THEN CAST(CAST(dps.row_count * 1.0 / CASE ca1.row_count_suffix
WHEN 'M' THEN 1000000 WHEN 'K' THEN 1000 ELSE 1 END AS decimal(14, 1)) AS varchar(20)) + ca1.row_count_suffix
ELSE CAST(dps.row_count AS varchar(20)) END AS row_count,
CASE WHEN @include_schema_in_table_names = 1 THEN OBJECT_SCHEMA_NAME(mid.object_id /*, mid.database_id*/) + '.'
ELSE '' END + OBJECT_NAME(mid.object_id /*, mid.database_id*/) AS Table_Name,
mid.equality_columns, mid.inequality_columns,
LEN(mid.included_columns) - LEN(REPLACE(mid.included_columns, ',', '')) + 1 AS incl_col_count,
mid.included_columns,
user_seeks, user_scans, NULL AS max_days_active, /*cj1.max_days_active,*/ unique_compiles,
last_user_seek, last_user_scan,
CAST(avg_total_user_cost AS decimal(9, 3)) AS avg_total_user_cost,
CAST(avg_user_impact AS decimal(9, 3)) AS [avg_user_impact%],
system_seeks, system_scans, last_system_seek, last_system_scan,
CAST(avg_total_system_cost AS decimal(9, 3)) AS avg_total_system_cost,
CAST(avg_system_impact AS decimal(9, 3)) AS [avg_system_impact%],
mid.statement, mid.object_id, mid.index_handle
INTO #index_missing
FROM sys.dm_db_missing_index_details mid WITH (NOLOCK)
INNER JOIN sys.indexes i ON i.object_id = mid.object_id AND i.index_id IN (0, 1) AND i.data_space_id <= 32767
LEFT OUTER JOIN sys.dm_db_missing_index_groups mig WITH (NOLOCK) ON
mig.index_handle = mid.index_handle
LEFT OUTER JOIN sys.dm_db_missing_index_group_stats migs WITH (NOLOCK) ON
migs.group_handle = mig.index_group_handle
LEFT OUTER JOIN sys.dm_db_partition_stats dps WITH (NOLOCK) ON
dps.object_id = mid.object_id AND
dps.index_id IN (0, 1)
OUTER APPLY (
SELECT CASE WHEN dps.row_count >= 1000000 THEN 'M' WHEN dps.row_count >= 1000 THEN 'K' ELSE '' END AS row_count_suffix
) AS ca1
OUTER APPLY (
SELECT ispec.table_mb, ispec.size_rank
FROM dbo.#index_specs ispec
WHERE
ispec.object_id = mid.object_id AND
ispec.index_id IN (0, 1)
) AS ispec
--order by
--DB_NAME, Table_Name, equality_columns
WHERE
1 = 1
AND mid.database_id = DB_ID()
AND OBJECT_NAME(mid.object_id) LIKE @table_name_pattern
ORDER BY
--avg_total_user_cost * (user_seeks + user_scans) DESC,
Db_Name,
CASE WHEN @order_by IN (-2, 2) THEN ispec.size_rank * -SIGN(@order_by) ELSE 0 END,
Table_Name,
equality_columns, inequality_columns,
user_seeks DESC
SELECT *
FROM #index_missing
ORDER BY ident
/*
IF @list_missing_indexes_summary = 1
BEGIN
SELECT
derived.Size_Rank, derived.table_mb,
derived.Table_Name, derived.Equality_Column, derived.Equality#, derived.User_Seeks,
ISNULL((SELECT SUM(user_seeks)
FROM #index_missing im2
OUTER APPLY dbo.DelimitedSplit8K (inequality_columns, ',') ds
WHERE im2.Size_Rank = derived.Size_Rank AND
LTRIM(RTRIM(ds.Item)) = derived.Equality_Column
), 0) AS Inequality_Seeks,
derived.User_Scans, derived.Last_User_Seek, derived.Last_User_Scan,
derived.Max_Days_Active, derived.Avg_Total_User_Cost, derived.Approx_Total_Cost
FROM (
SELECT
Size_Rank, MAX(table_mb) AS table_mb, Table_Name, LTRIM(RTRIM(ds.Item)) AS Equality_Column,
SUM(user_seeks) AS User_Seeks, SUM(user_scans) AS User_Scans,
MAX(last_user_seek) AS Last_User_Seek, MAX(last_user_scan) AS Last_User_Scan,
MIN(max_days_active) AS Max_Days_Active,
MAX(avg_total_user_cost) AS Avg_Total_User_Cost,
(SUM(user_seeks) + SUM(user_scans)) * MAX(avg_total_user_cost) AS Approx_Total_Cost,
MAX(ds.ItemNumber) AS Equality#
FROM #index_missing
CROSS APPLY dbo.DelimitedSplit8K (equality_columns, ',') ds
WHERE equality_columns IS NOT NULL
GROUP BY size_rank, Table_Name, LTRIM(RTRIM(ds.Item))
) AS derived
ORDER BY Size_Rank, Table_Name, Approx_Total_Cost DESC
END --IF
*/END --IF
PRINT 'Index Usage Stats @ ' + CONVERT(varchar(30), GETDATE(), 120)
--**********************************************************************************************************************
-- list index usage stats (seeks, scans, etc.)
SELECT
IDENTITY(int, 1, 1) AS ident,
DB_NAME() AS db_name,
ispec.size_rank, ispec.alloc_mb - ispec.used_mb AS unused_mb,
CASE WHEN i.data_space_id <= 32767 THEN FILEGROUP_NAME(i.data_space_id) ELSE '{CS}' END AS main_fg_name,
CAST(NULL AS int) AS filler,
CASE WHEN @include_schema_in_table_names = 1 THEN OBJECT_SCHEMA_NAME(i.object_id /*, DB_ID()*/) + '.'
ELSE '' END + OBJECT_NAME(i.object_id /*, i.database_id*/) AS Table_Name,
key_cols AS key_cols, nonkey_cols AS nonkey_cols,
LEN(nonkey_cols) - LEN(REPLACE(nonkey_cols, ',', '')) + 1 AS nonkey_count,
CAST(NULL AS varchar(100)) AS filler2,
ius.user_seeks, ius.user_scans, --ius.user_seeks + ius.user_scans AS total_reads,
ius.user_lookups, ius.user_updates,
CASE WHEN @format_counts = 1 THEN REPLACE(CONVERT(varchar(20), CAST(dps.row_count AS money), 1), '.00', '')
WHEN @format_counts = 2 THEN CAST(CAST(dps.row_count * 1.0 / CASE ca1.row_count_suffix
WHEN 'M' THEN 1000000 WHEN 'K' THEN 1000 ELSE 1 END AS int) AS varchar(20)) + ca1.row_count_suffix
WHEN @format_counts = 3 THEN CAST(CAST(dps.row_count * 1.0 / CASE ca1.row_count_suffix
WHEN 'M' THEN 1000000 WHEN 'K' THEN 1000 ELSE 1 END AS decimal(14, 1)) AS varchar(20)) + ca1.row_count_suffix
ELSE CAST(dps.row_count AS varchar(20)) END AS row_count,
ispec.alloc_gb AS index_gb, ispec.table_gb,
SUBSTRING('NY', CAST(i.is_primary_key AS int) + CAST(i.is_unique_constraint AS int) + 1, 1) +
CASE WHEN i.is_unique = CAST(i.is_primary_key AS int) + CAST(i.is_unique_constraint AS int) THEN ''
ELSE '.' + SUBSTRING('NY', CAST(i.is_unique AS int) + 1, 1) END AS [Uniq?],
REPLACE(i.name, oa1.table_name, '~') AS index_name,
i.index_id,
ispec.approx_max_data_width AS [data_width],
CAST(CAST(ispec.used_mb AS float) * 1024.0 * 1024.0 / NULLIF(dps.row_count, 0) AS int) AS cmptd_row_size,
CASE
WHEN ispec.max_compression IS NULL THEN '(Not applicable)'
WHEN ispec.max_compression = 2 THEN 'Page'
WHEN ispec.max_compression = 1 THEN 'Row'
WHEN ispec.max_compression = 0 THEN ''
ELSE '(Unknown)' END AS max_compression,
i.fill_factor,
dios.leaf_delete_count + dios.leaf_insert_count + dios.leaf_update_count as leaf_mod_count,
dios.range_scan_count, dios.singleton_lookup_count,
DATEDIFF(DAY, STATS_DATE ( i.object_id , i.index_id ), GETDATE()) AS stats_days_old,
DATEDIFF(DAY, CASE
WHEN o.create_date > @sql_startup_date AND @sql_startup_date > o.modify_date THEN o.create_date
WHEN o.create_date > @sql_startup_date AND o.modify_date > @sql_startup_date THEN o.modify_date
ELSE @sql_startup_date END, GETDATE()) AS max_days_active,
dios.row_lock_count, dios.row_lock_wait_in_ms,
dios.page_lock_count, dios.page_lock_wait_in_ms,
ius.last_user_seek, ius.last_user_scan,
ius.last_user_lookup, ius.last_user_update,
fk.Reference_Count AS fk_ref_count,
ius2.row_num,
ius.system_seeks, ius.system_scans, ius.system_lookups, ius.system_updates,
ius.last_system_seek, ius.last_system_scan, ius.last_system_lookup, ius.last_system_update,
GETDATE() AS capture_date
INTO #index_usage
FROM sys.indexes i WITH (NOLOCK)
INNER JOIN sys.objects o WITH (NOLOCK) ON
o.object_id = i.object_id
INNER JOIN dbo.#index_specs ispec ON
ispec.object_id = i.object_id AND
ispec.index_id = i.index_id
OUTER APPLY (
SELECT CASE WHEN EXISTS(SELECT 1 FROM #index_specs [is] WHERE [is].object_id = i.object_id AND [is].index_id = 1)
THEN 1 ELSE 0 END AS has_clustered_index
) AS cj2
OUTER APPLY (
SELECT STUFF((
SELECT
', ' + COL_NAME(ic.object_id, ic.column_id)
FROM sys.index_columns ic WITH (NOLOCK)
WHERE
ic.key_ordinal > 0 AND
ic.object_id = i.object_id AND
ic.index_id = i.index_id
ORDER BY
ic.key_ordinal
FOR XML PATH('')
), 1, 2, '')
) AS key_cols (key_cols)
OUTER APPLY (
SELECT STUFF((
SELECT
', ' + COL_NAME(ic.object_id, ic.column_id)
FROM sys.index_columns ic WITH (NOLOCK)
WHERE
ic.key_ordinal = 0 AND
ic.object_id = i.object_id AND
ic.index_id = i.index_id
ORDER BY
COL_NAME(ic.object_id, ic.column_id)
FOR XML PATH('')
), 1, 2, '')
) AS nonkey_cols (nonkey_cols)
LEFT OUTER JOIN sys.dm_db_partition_stats dps WITH (NOLOCK) ON
dps.object_id = i.object_id AND
dps.index_id = i.index_id
LEFT OUTER JOIN sys.dm_db_index_usage_stats ius WITH (NOLOCK) ON
ius.database_id = DB_ID() AND
ius.object_id = i.object_id AND
ius.index_id = i.index_id
LEFT OUTER JOIN (
SELECT
database_id, object_id, MAX(user_scans) AS user_scans,
ROW_NUMBER() OVER (ORDER BY MAX(user_scans) DESC) AS row_num --user_scans|user_seeks+user_scans
FROM sys.dm_db_index_usage_stats WITH (NOLOCK)
WHERE
database_id = DB_ID()
--AND index_id > 0
GROUP BY
database_id, object_id
) AS ius2 ON
ius2.database_id = DB_ID() AND
ius2.object_id = i.object_id
LEFT OUTER JOIN (
SELECT
referenced_object_id, COUNT(*) AS Reference_Count
FROM sys.foreign_keys WITH (NOLOCK)
WHERE
is_disabled = 0
GROUP BY
referenced_object_id
) AS fk ON
fk.referenced_object_id = i.object_id
LEFT OUTER JOIN #index_operational_stats dios ON
dios.object_id = i.object_id AND
dios.index_id = i.index_id
OUTER APPLY (
SELECT OBJECT_NAME(i.object_id/*, DB_ID()*/) AS table_name
) AS oa1
OUTER APPLY (
SELECT CASE WHEN dps.row_count >= 1000000 THEN 'M' WHEN dps.row_count >= 1000 THEN 'K' ELSE '' END AS row_count_suffix
) AS ca1
WHERE
--i.object_id > 100 AND
(i.is_disabled = 0 OR @order_by IN (-1, 1)) AND
i.is_hypothetical = 0 AND
i.data_space_id <= 32767 AND
--i.type IN (0, 1, 2) AND
o.type NOT IN ( 'IF', 'IT', 'TF', 'TT' ) AND
(
o.name LIKE @table_name_pattern AND
o.name NOT LIKE 'dtprop%' AND
o.name NOT LIKE 'filestream[_]' AND
o.name NOT LIKE 'MSpeer%' AND
o.name NOT LIKE 'MSpub%' AND
--o.name NOT LIKE 'tmp[_]%' AND
--o.name NOT LIKE 'queue[_]%' AND
--(DB_NAME() IN ('master', 'msdb') OR o.name NOT LIKE 'sys%')
(@include_system_tables = 1 OR o.name NOT LIKE 'sys%')
--AND o.name NOT LIKE 'tmp%'
) AND
(@filegroup_name IS NULL OR CASE WHEN i.data_space_id <= 32767 THEN FILEGROUP_NAME(i.data_space_id) ELSE '{CS}' END LIKE @filegroup_name) AND
(@max_compression IS NULL OR ispec.max_compression <= @max_compression)
ORDER BY
--cj2.has_clustered_index, ispec.size_rank, --heaps first, by size
db_name,
--i.index_id,
--ius.user_seeks - ius.user_scans,
CASE WHEN @order_by IN (-2, 2) THEN ispec.size_rank * -SIGN(@order_by) ELSE 0 END,
--ius.user_scans DESC,
--ius2.row_num, --user_scans&|user_seeks
table_name,
-- list clustered index first, if any, then other index(es)
CASE WHEN i.index_id IN (0, 1) THEN 1 ELSE 2 END,
key_cols
OPTION (MAXDOP 3, RECOMPILE)
SELECT *
FROM #index_usage
ORDER BY ident
IF @list_total_size = 1
SELECT SUM(index_gb) AS Total_Size_GB
FROM #index_usage
IF @list_filegroup_and_drive_summary = 1
SELECT
LEFT(df.physical_name, 1) AS drive,
FILEGROUP_NAME(au_totals.data_space_id) AS filegroup_name,
au_totals.total_mb AS total_fg_mb,
au_totals.used_mb AS used_fg_mb,
au_totals.total_mb - au_totals.used_mb AS free_fg_mb,
CAST(df.size / 128.0 AS decimal(9, 3)) AS file_size_mb
FROM (
SELECT
au.data_space_id,
CAST(SUM(au.total_pages) / 128.0 AS decimal(9, 3)) AS total_mb,
CAST(SUM(au.used_pages) / 128.0 AS decimal(9, 3)) AS used_mb
FROM sys.allocation_units au
INNER JOIN sys.filegroups fg ON
fg.data_space_id = au.data_space_id
GROUP BY au.data_space_id WITH ROLLUP
) AS au_totals
INNER JOIN sys.database_files df ON
df.data_space_id = au_totals.data_space_id
ORDER BY filegroup_name, drive
PRINT 'Ended @ ' + CONVERT(varchar(30), GETDATE(), 120)
--**********************************************************************************************************************
/* Reset settings to their original values. */
SET DEADLOCK_PRIORITY @deadlock_priority
IF @transaction_isolation_level = 1
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
ELSE
IF @transaction_isolation_level = 2
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
ELSE
IF @transaction_isolation_level = 3
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
ELSE
IF @transaction_isolation_level = 4
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
ELSE
IF @transaction_isolation_level = 5
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
/*end of script*/
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
June 6, 2024 at 7:00 pm
Might just be me, but the only time I'll look at any missing index recommendations is when there's a slow query that could really use some help. Adding indexes is not a performance panacea and should not be treated as such.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 7, 2024 at 6:44 pm
I look at missing index stats quite frequently. Very often I don't add or update an index based on them, but I still review the stats just in case. Sometimes a true need for a new index does come up. More often you just need to add an INCLUDEd column to an existing index (see the sketch below). And, as noted, often you can just ignore the stats. But at least you did the due diligence of a review.
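For what it's worth, a minimal sketch of what adding an INCLUDEd column to an existing index looks like (hypothetical index, table, and column names); CREATE INDEX ... WITH (DROP_EXISTING = ON) rebuilds the index with the extra column in one step rather than dropping and recreating it separately:
/* Hypothetical names, for illustration only. */
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)        -- same key columns as the existing index
    INCLUDE (OrderDate, Status)       -- new INCLUDEd column(s) added here
    WITH (DROP_EXISTING = ON);        -- rebuild in place instead of DROP INDEX + CREATE INDEX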
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
June 10, 2024 at 5:17 am
Thank you to everyone who replied. Great community!
July 17, 2024 at 5:34 am
After long hours, days, and weeks, the system is back to a healthy state. The biggest problem was bad developer code that had to be changed. It was killing the system and pulling far more data than was required. But I loved how they blamed it on the SQL environment.
July 18, 2024 at 4:24 am
emuller1982 wrote: After long hours, days, and weeks, the system is back to a healthy state. The biggest problem was bad developer code that had to be changed. It was killing the system and pulling far more data than was required. But I loved how they blamed it on the SQL environment.
It's common for that type of blame to happen, especially since they just don't know. Blame the database because that's where the problem is showing symptoms. As you found out, the real problem is the code they wrote. Hopefully, that turned into several mentor moments to help them write better code in the future.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 18, 2024 at 7:45 am
emuller1982 wrote: After long hours, days, and weeks, the system is back to a healthy state. The biggest problem was bad developer code that had to be changed. It was killing the system and pulling far more data than was required. But I loved how they blamed it on the SQL environment.
It's common for that type of blame to happen, especially since they just don't know. Blame the database because that's where the problem is showing symptoms. As you found out, the real problem is the code they wrote. Hopefully, that turned into several mentor moments to help them write better code in the future.
Strange they didn't blame the network 😉
Johan
Learn to play, play to learn!
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere
July 19, 2024 at 7:49 pm
Jeff Moden wrote: emuller1982 wrote: After long hours, days, and weeks, the system is back to a healthy state. The biggest problem was bad developer code that had to be changed. It was killing the system and pulling far more data than was required. But I loved how they blamed it on the SQL environment.
It's common for that type of blame to happen, especially since they just don't know. Blame the database because that's where the problem is showing symptoms. As you found out, the real problem is the code they wrote. Hopefully, that turned into several mentor moments to help them write better code in the future.
Strange they didn't blame the network 😉
Funny how that works... Seems like developers blame the database and DBAs blame the network. Report writers blame anyone they can.
For me, I simply ask "Who wrote this code?" as I'm loading up the pork chop cannon.
--Jeff Moden
Change is inevitable... Change for the better is not.