In a recent article for SSG, I discussed how to use Extended Events to function in a Profiler-like fashion. You can read about that here. I recommend reading that article first because it lays some of the groundwork for this article.
Within Management Studio, from a right-click context menu, there is an option to run a report called “Schema Changes History”. The name of this report suggests that there is some means to find when a change occurred in the schema, what the change was, and who made it. It does say “Schema Changes History” after all.
If you run the report, you would be able to see a report similar to the following.
This looks like it could be a pretty useful report. If I drill into the collapsed items, I can get better information. But, how exactly is this report producing this kind of data? I don’t have a database audit running to trap this information. I also don’t have SSDT-BI installed, so I can’t export the report and check the report definition.
I could run a trace, refresh the report data, and see what pops up there. I would really hate to use Profiler, though, and I really don’t want to go through the hassle of creating a server-side trace just to figure out the source data definitions. Then the lightbulb moment occurs: I already have an XE session that can trace SQL queries just like Profiler. Why not use a better tool to do the job and satisfy my curiosity at the same time?
So, go read that article I wrote for SSG to get the XE session definition and follow along a little better. With that XEvent session running, I can refresh the report data and start to evaluate what is happening in order to produce the Audit report.
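If you don’t have that session handy, a minimal sketch along these lines would suffice for this exercise. The session and file names here are my own illustrative assumptions, not the exact definition from the SSG article:

```sql
-- A minimal XEvent session to capture completed batches, similar in spirit
-- to the Profiler-like session from the SSG article. Session and file names
-- are assumptions for illustration only.
CREATE EVENT SESSION [TraceSQLBatches] ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.client_app_name, sqlserver.username, sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'TraceSQLBatches.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO
ALTER EVENT SESSION [TraceSQLBatches] ON SERVER STATE = START;
```

With the session started, watching the live data (or querying the event_file target) while refreshing the report will expose the queries SSMS issues behind the scenes.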
By this point, you have probably thought to yourself that the source of the data must be the default trace. If you thought that, then you are correct. It is a good presumption based on knowledge of what kind of data is tracked in the default trace. But I really wanted to know for certain. As I perused the XEvent session data, I noticed several queries similar to the following in the results.
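As a quick sanity check, you can confirm where the default trace lives on your own instance before digging into the captured queries:

```sql
-- Locate the default trace file; is_default = 1 flags the system default trace.
SELECT id, path, max_size, is_default
FROM sys.traces
WHERE is_default = 1;
```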
Yes, I am using the GUI (*shrug*) in this image to explore the session data. That is just for display friendliness. But as you look through those queries, you start to see a few patterns and some obvious signs that the source of the data is the default trace. Here is a cleaner look at the code being used to get the dataset.
DECLARE @curr_tracefilename VARCHAR(256)
	, @indx INT
	, @base_tracefilename VARCHAR(256);

SELECT @curr_tracefilename = path
FROM sys.traces
WHERE is_default = 1;

SET @curr_tracefilename = REVERSE(@curr_tracefilename);
SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
SET @curr_tracefilename = REVERSE(@curr_tracefilename);
SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + 'log.trc';

SELECT ObjectName AS obj_name
	, ObjectID
	, DatabaseName
	, StartTime AS start_time
	, EventClass AS event_class
	, EventSubClass
	, ObjectType AS object_type
	, ServerName
	, LoginName
	, NTUserName
	, ApplicationName
	, CONVERT(VARCHAR(128), 'temp') AS DDL_Operation
INTO #temp_trace
FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
WHERE EventClass IN (46, 47, 164)
	AND EventSubclass = 0
	AND DatabaseID = DB_ID();

UPDATE #temp_trace SET DDL_Operation = 'CREATE' WHERE event_class = 46;
UPDATE #temp_trace SET DDL_Operation = 'DROP' WHERE event_class = 47;
UPDATE #temp_trace SET DDL_Operation = 'ALTER' WHERE event_class = 164;

DECLARE @d1 DATETIME
	, @diff DATETIME;

SELECT @d1 = MIN(start_time) FROM #temp_trace;
SET @diff = DATEDIFF(hh, @d1, GETDATE());

SELECT @diff AS difference
	, @d1 AS date
	, object_type AS obj_type_desc
	, (DENSE_RANK() OVER (ORDER BY obj_name, object_type)) % 2 AS l1
	, (DENSE_RANK() OVER (ORDER BY obj_name, object_type, start_time)) % 2 AS l2
	, *
FROM #temp_trace
WHERE object_type NOT IN (21587) -- don't bother with auto-statistics as it generates too much noise
ORDER BY start_time DESC;

DROP TABLE #temp_trace;
Now, I must confess that I altered it a bit to make sure it was doing what I thought. Just a little thing like swapping the table variable, whose definition was missing, for a quick insert into a temp table. But the reality is that this is pretty much the source code of the report. It is not the prettiest code, but it works.
Given the lack of prettiness and the seemingly overly complex route to the end result, I decided to “tune” it up a little bit.
DECLARE @DBName sysname = 'AdventureWorks2014'
	, @d1 DATETIME
	, @diff INT;

SELECT ObjectName
	, ObjectID
	, DatabaseName
	, StartTime
	, EventClass
	, EventSubClass
	, ObjectType
	, ServerName
	, LoginName
	, NTUserName
	, ApplicationName
	, CASE EventClass
		WHEN 46 THEN 'CREATE'
		WHEN 47 THEN 'DROP'
		WHEN 164 THEN 'ALTER'
	  END AS DDLOperation
INTO #temp_trace
FROM sys.fn_trace_gettable(CONVERT(VARCHAR(150),
		( SELECT REVERSE(SUBSTRING(REVERSE(path), CHARINDEX('\', REVERSE(path)), 256)) + 'log.trc'
		  FROM sys.traces
		  WHERE is_default = 1 )), DEFAULT) T
WHERE EventClass IN (46, 47, 164)
	AND EventSubclass = 0
	AND ObjectType <> 21587 -- don't bother with auto-statistics as it generates too much noise
	AND DatabaseName = @DBName;

SELECT @d1 = MIN(StartTime) FROM #temp_trace;
SET @diff = DATEDIFF(hh, @d1, GETDATE());

SELECT @diff AS HrsSinceFirstChange
	, @d1 AS FirstChangeDate
	, sv.name AS obj_type_desc
	, tt.ObjectType
	, tt.DDLOperation
	, tt.DatabaseName, tt.ObjectName, tt.StartTime
	, tt.EventClass, tt.EventSubClass
	, tt.ServerName, tt.LoginName, tt.NTUserName
	, tt.ApplicationName
	, (DENSE_RANK() OVER (ORDER BY ObjectName, ObjectType)) % 2 AS l1
	, (DENSE_RANK() OVER (ORDER BY ObjectName, ObjectType, StartTime)) % 2 AS l2
FROM #temp_trace tt
	INNER JOIN sys.trace_events AS te
		ON tt.EventClass = te.trace_event_id
	INNER JOIN sys.trace_subclass_values tsv
		ON tt.EventClass = tsv.trace_event_id
		AND tt.ObjectType = tsv.subclass_value
	INNER JOIN master.dbo.spt_values sv
		ON tsv.subclass_value = sv.number
		AND sv.type = 'EOD'
ORDER BY StartTime DESC;

DROP TABLE #temp_trace;
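If you want to verify what those event class IDs actually represent, sys.trace_events spells it out:

```sql
-- Confirm that event classes 46, 47, and 164 are the object DDL events
-- used by the report query above.
SELECT trace_event_id, name
FROM sys.trace_events
WHERE trace_event_id IN (46, 47, 164);
-- 46 = Object:Created, 47 = Object:Deleted, 164 = Object:Altered
```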
Now I have something that is easier to read and maintain (in my opinion) and works well. It can also be dropped easily enough into an RDL should you wish to create a report different from the canned report in SSMS. Or just continue to use it from within Management Studio and check the results without the overhead of the report presentation.