June 27, 2018 at 8:26 am
Hi:
So I found this great script here on SQL Server Central that lets me query my database to see the last time a table was accessed. However, I'd like to automate this process - perhaps create a table that the results can be loaded into. I haven't decided whether I'd want to append the data or overwrite it. I want to capture this data on a daily basis. Does anyone know the best way to accomplish this? Your help is greatly appreciated. The script is below:
WITH LastActivity (ObjectID, LastAction) AS
(
SELECT object_id AS TableName,
last_user_seek as LastAction
FROM sys.dm_db_index_usage_stats u
WHERE database_id = db_id(db_name())
UNION
SELECT object_id AS TableName,
last_user_scan as LastAction
FROM sys.dm_db_index_usage_stats u
WHERE database_id = db_id(db_name())
UNION
SELECT object_id AS TableName,
last_user_lookup as LastAction
FROM sys.dm_db_index_usage_stats u
WHERE database_id = db_id(db_name())
)
SELECT OBJECT_NAME(so.object_id) AS TableName,
MAX(la.LastAction) as LastSelect,
CASE WHEN so.type = 'U' THEN 'Table (user-defined)'
WHEN so.type = 'V' THEN 'View'
END AS Table_View
,CASE WHEN st.create_date IS NULL
THEN sv.create_date
ELSE st.create_date
END AS create_date
,CASE WHEN st.modify_date IS NULL
THEN sv.modify_date
ELSE st.modify_date
END AS modify_date
FROM sys.objects so
LEFT JOIN LastActivity la
on so.object_id = la.ObjectID
LEFT JOIN sys.tables st
on so.object_id = st.object_id
LEFT JOIN sys.views sv
on so.object_id = sv.object_id
WHERE so.type in ('V','U')
AND so.object_id > 100
GROUP BY OBJECT_NAME(so.object_id)
, so.type
,st.create_date
,st.modify_date
,sv.create_date
,sv.modify_date
ORDER BY OBJECT_NAME(so.object_id)
June 27, 2018 at 9:20 am
I think the script can be improved. Specifically, you're reading the same DMV three times when you can do a simple unpivot (via CROSS APPLY with a VALUES clause) instead. Also, combining sys.tables and sys.views with a UNION ALL will probably be better than using two separate CASE expressions.
WITH LastActivity (ObjectID, LastAction) AS
(
SELECT
[object_id] AS TableName,
MAX(LastAction) AS LastAction
FROM sys.dm_db_index_usage_stats u
CROSS APPLY ( VALUES(last_user_seek), (last_user_scan), (last_user_lookup) ) la(LastAction)
WHERE database_id = db_id(db_name())
GROUP BY [object_id]
)
, Tbl AS
(
SELECT
[object_id],
[type],
st.create_date,
st.modify_date
FROM sys.tables st
UNION ALL
SELECT
[object_id],
[type],
sv.create_date,
sv.modify_date
FROM sys.views sv
)
SELECT
OBJECT_NAME(so.object_id) AS TableName,
la.LastAction AS LastSelect,
CASE
WHEN so.type = 'U' THEN 'Table (user-defined)'
WHEN so.type = 'V' THEN 'View'
END AS Table_View
, st.create_date
, st.modify_date
FROM sys.objects so
LEFT JOIN LastActivity la
on so.object_id = la.ObjectID
LEFT JOIN Tbl st
on so.object_id = st.object_id
AND so.[type] = st.[type]
WHERE so.type in ('V','U')
AND so.[object_id] > 100
ORDER BY OBJECT_NAME(so.[object_id])
J. Drew Allen
Business Intelligence Analyst
Philadelphia, PA
June 27, 2018 at 9:27 am
drew.allen - Wednesday, June 27, 2018 9:20 AM
Thank you. Your help is appreciated. I'd like to load the results of this query into a table on a daily basis, and I'd like to automate that process. What would be the best way to accomplish this?
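For reference, a minimal sketch of what that daily capture could look like, assuming a hypothetical logging table named dbo.TableUsageLog; the INSERT can then be scheduled as a daily SQL Server Agent job step:
-- Hypothetical logging table for the daily snapshots (append model).
CREATE TABLE dbo.TableUsageLog
(
    SampleDate datetime2(0) NOT NULL DEFAULT SYSDATETIME(),
    TableName  sysname      NULL,
    LastSelect datetime     NULL
);
GO
-- Daily capture: append the latest read activity per object in this database.
INSERT INTO dbo.TableUsageLog (TableName, LastSelect)
SELECT OBJECT_NAME(u.[object_id]) AS TableName,
       MAX(la.LastAction)         AS LastSelect
FROM sys.dm_db_index_usage_stats u
CROSS APPLY ( VALUES (u.last_user_seek), (u.last_user_scan), (u.last_user_lookup) ) la(LastAction)
WHERE u.database_id = DB_ID()
GROUP BY u.[object_id];
Overwriting instead of appending would just be a matter of running TRUNCATE TABLE dbo.TableUsageLog before the INSERT.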
June 27, 2018 at 10:04 am
EMtwo - Wednesday, June 27, 2018 9:27 AM
Out of serious concern for where all of this may be headed, what is the actual end goal? I suspect it's to identify tables as candidates for being dropped. Be VERY aware that the information in sys.dm_db_index_usage_stats is emptied on any restart of the SQL Server service, and you may have tables that go unused for months at a time but are still critical.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 27, 2018 at 10:51 am
Jeff Moden - Wednesday, June 27, 2018 10:04 AM
Hmm... thanks for pointing this out - that is exactly my end goal. So my question now is: what would be the best way to accomplish it? Do you know of a better approach to identifying tables to drop?
I will say that, in spite of this information, I would still like to know how to automate this process - I'll appreciate any knowledge you're willing to share...
Thanks.
June 27, 2018 at 11:01 am
EMtwo - Wednesday, June 27, 2018 10:51 AM
What type of tables are you looking at dropping? If the database supports an application, you should be working with the developers who support that application regarding which tables it needs to work as designed. If the tables support some other process, again, you should be working with the people using that process. Just dropping tables that don't appear to be used isn't the best way to endear yourself to your customers (the ones using the databases you support).
June 27, 2018 at 1:20 pm
Lynn Pettis - Wednesday, June 27, 2018 11:00 AM
I totally agree - we are in the process of migrating and upgrading our current environment. I am working with developers and others to identify which tables are needed; however, there are databases (and tables) residing on legacy systems inherited from previous admins, and it is not certain what is being used and what is not...
June 27, 2018 at 10:39 pm
EMtwo - Wednesday, June 27, 2018 1:20 PM
You can use sys.dm_db_index_usage_stats in the manner you describe, provided you take into consideration a couple of facts that you already know and one that you may not.
1. sys.dm_db_index_usage_stats is reset to empty after any restart of the SQL Server service (you already know this; see the sketch after this list).
2. Key tables may exist to support rarely used "one off" work or scheduled-but-infrequent work such as month-end or yearly reports. (Again, you already know this.)
3. There's a third consideration that you may not be aware of, and that is audit tables. You may go years doing inserts on such tables without ever doing any kind of SELECT, and yet they're critical to the system. Hopefully, all of these tables have something in common, like being the target of an audit trigger or having the word "audit" (or some other common identifier) in the table name.
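On point 1, one way to make restarts visible in whatever you collect is to record the instance start time with each sample; sys.dm_os_sys_info exposes it:
-- A change in sqlserver_start_time between samples means the usage
-- stats were emptied, so a gap in the activity dates is meaningless.
SELECT SYSDATETIME()            AS SampleDate,
       osi.sqlserver_start_time
FROM sys.dm_os_sys_info osi;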
If I had to do such a thing as what you're attempting, I'd collect, once each day for at least a month: the database_id, object_id, and index_id; the latest date from the combination of last_user_seek, last_user_scan, and last_user_lookup as a single column; the last_user_update; and the current sample date and time. If you wanted to limit how large that table could become, you could do "UPSERTs" so that each unique combination of database_id, object_id, and index_id is updated with the latest dates, along with a count of the days on which those dates changed. This will also help you identify seemingly unused indexes even when the table itself cannot be dropped.
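Something like this hypothetical MERGE is what I mean by the "UPSERT" (all table and column names here are just placeholders):
-- Hypothetical tracking table: one row per index, updated once a day.
CREATE TABLE dbo.IndexUsageTracking
(
    database_id int          NOT NULL,
    [object_id] int          NOT NULL,
    index_id    int          NOT NULL,
    LastRead    datetime     NULL,              -- latest of last_user_seek/scan/lookup
    LastWrite   datetime     NULL,              -- last_user_update
    ChangeDays  int          NOT NULL DEFAULT 0, -- days on which either date moved
    LastSampled datetime2(0) NOT NULL,
    CONSTRAINT PK_IndexUsageTracking PRIMARY KEY (database_id, [object_id], index_id)
);
GO
-- Daily UPSERT: keep only the latest dates plus a count of the days that changed.
MERGE dbo.IndexUsageTracking AS tgt
USING
(
    SELECT u.database_id, u.[object_id], u.index_id,
           MAX(la.d)                AS LastRead,
           MAX(u.last_user_update)  AS LastWrite
    FROM sys.dm_db_index_usage_stats u
    CROSS APPLY ( VALUES (u.last_user_seek), (u.last_user_scan), (u.last_user_lookup) ) la(d)
    GROUP BY u.database_id, u.[object_id], u.index_id
) AS src
    ON  tgt.database_id = src.database_id
    AND tgt.[object_id] = src.[object_id]
    AND tgt.index_id    = src.index_id
WHEN MATCHED THEN
    UPDATE SET
        tgt.ChangeDays  = tgt.ChangeDays
                        + CASE WHEN ISNULL(src.LastRead,  '19000101') <> ISNULL(tgt.LastRead,  '19000101')
                                 OR ISNULL(src.LastWrite, '19000101') <> ISNULL(tgt.LastWrite, '19000101')
                               THEN 1 ELSE 0 END,
        tgt.LastRead    = src.LastRead,
        tgt.LastWrite   = src.LastWrite,
        tgt.LastSampled = SYSDATETIME()
WHEN NOT MATCHED THEN
    INSERT (database_id, [object_id], index_id, LastRead, LastWrite, ChangeDays, LastSampled)
    VALUES (src.database_id, src.[object_id], src.index_id, src.LastRead, src.LastWrite, 0, SYSDATETIME());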
After at least one full monthly cycle has completed, you could then rename all the tables that haven't been used to see if anything breaks. In a similar fashion, you could disable any non-unique nonclustered indexes on tables that are being "used" but whose indexes have only suffered updates, with the same thought in mind. Obviously, you wouldn't want to rename any tables that are the targets of such things as triggers unless you're sure the data the trigger collects is no longer necessary.
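Disabling keeps the index definition in place while dropping its data, so it's easy to undo. A placeholder example:
-- Disable a suspect nonclustered index (metadata stays, the B-tree is dropped).
ALTER INDEX IX_SomeTable_SomeColumn ON dbo.SomeTable DISABLE;
-- If something breaks, a rebuild re-enables it.
ALTER INDEX IX_SomeTable_SomeColumn ON dbo.SomeTable REBUILD;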
For renaming, just append something like _DELETE_CANDIDATE to the original table name so that it's painfully obvious the table has been renamed and why. Of course, the renaming should be based on the lack of usage, which hopefully means those tables never showed up in your sampling table.
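The rename itself is a one-liner with sp_rename (the table name here is a placeholder); note that the new name is given without a schema prefix:
-- Rename a never-sampled table so its candidate status is obvious.
EXEC sp_rename @objname = N'dbo.StagingOrders',
               @newname = N'StagingOrders_DELETE_CANDIDATE';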
Then wait for another monthly cycle to complete to see if the renaming broke anything. If you're fortunate, the next monthly cycle will also close out a quarterly or yearly cycle.
On "D-DAY" (drop day), make sue that your backups are in good shape in case you need to do a restore to recover a table or index. If such things live in HUGE databases, you might want to copy the tables to be deleted to a new temporary database as well as any indexes, triggers, etc to a real but temporary new database just to make it so you can bring a table back online quickly if you need to do so without doing a restore of a huge database. I'd backup that temporary database separately and keep that backup in long term storage until the end of the year cycle has come and gone.
Drop the tables marked for deletion and drop the indexes you disabled.
If nothing blows up (and it shouldn't at this point), you're probably golden and can drop that temporary database as well, making sure you don't delete its backup history in the process. You might need that backup at the end of the year.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 28, 2018 at 6:58 am
Jeff Moden - Wednesday, June 27, 2018 10:39 PM
Thank you!