January 24, 2018 at 11:37 am
Dear all,
I have the table below in my Data Warehouse.
Its only function is to store the execution times of my stored procedures.
So, every time a stored procedure is executed, a row is inserted into this table.
There are no updates.
Table definition below:
Column          Type          Allow Nulls
ExecutionID     bigint        No
DatabaseName    varchar(100)  No
SchemaName      varchar(100)  No
SPName          varchar(100)  No
InvokingLogin   varchar(200)  Yes
ExecutionLogin  varchar(200)  Yes
LoadID          int           Yes
StartDateTime   datetime2(7)  No
EndDateTime     datetime2(7)  Yes
Question:
On average, about 5K rows are inserted into this table daily.
I have some reports that query this table so that we can understand how the databases are behaving (the reports are opened once per day).
I believe this table is a good candidate for data compression at the page level, as it already has 3,000,000 rows and is still growing. And because it has no updates, page compression looks like a good fit.
But what about indexing? Would you create some nonclustered indexes on this table to help the reports, knowing that this is an insert-only table? Or would you just use data compression?
Currently the table has only one clustered index, on the ExecutionID column.
The queries inside the report filter on the database and date columns, so I am wondering whether, by helping the report a bit, I would lose performance on the inserts.
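Something like this hypothetical nonclustered index is what I had in mind (just a sketch; the exact columns would depend on the report's predicates):
CREATE NONCLUSTERED INDEX IX_SPExecutions_Start_Database
    ON [Logging].[SPExecutions] (StartDateTime, DatabaseName)
    INCLUDE (SchemaName, SPName, EndDateTime);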
Can you please help me with your views on this?
Thank you
January 24, 2018 at 12:02 pm
5,000 inserts a day is a pretty light load. If your reports are having performance issues, then yes, adding indexes is a good idea. Unless you have some kind of massive space issue or some super-fast insert time requirement, I can't imagine why there would be a problem adding some indexes for your reports; they're a pretty basic feature of SQL Server.
January 24, 2018 at 12:48 pm
Before you add any indexes, examine the code that's having the performance issue and make sure that the code is SARGable and will actually be able to make use of indexes. That also means checking for datatype mismatches where an entire column must be scanned to be evaluated, which would override the use of an index as a seek and, at best, would result in a scan of the index.
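For example (a contrived sketch against your logging table, not your actual report code):
-- Non-SARGable: wrapping the column in a function hides it from any index on StartDateTime
WHERE CONVERT(date, StartDateTime) = @ReportDate

-- SARGable: the column is left bare, so an index on StartDateTime can be used for a seek
WHERE StartDateTime >= @ReportDate
  AND StartDateTime <  DATEADD(DAY, 1, @ReportDate)

-- Datatype mismatch: if @DbName is NVARCHAR and DatabaseName is VARCHAR, the column
-- may be implicitly converted row by row, which can also kill the seek
WHERE DatabaseName = @DbName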
--Jeff Moden
Change is inevitable... Change for the better is not.
January 24, 2018 at 1:03 pm
Something tells me that you would benefit more by having the clustered index on StartDateTime. I don't see how or why ExecutionID can be relevant to your queries.
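Something along these lines (a sketch only; you'd first have to drop or adjust whatever constraint currently owns the clustered index on ExecutionID):
CREATE UNIQUE CLUSTERED INDEX CIX_SPExecutions_StartDateTime
    ON [Logging].[SPExecutions] (StartDateTime, ExecutionID);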
January 24, 2018 at 1:04 pm
Jeff Moden - Wednesday, January 24, 2018 12:48 PM
Before you add any indexes, examine the code that's having the performance issue and make sure that the code is SARGable and will actually be able to make use of indexes. That also means checking for datatype mismatches where an entire column must be scanned to be evaluated, which would override the use of an index as a seek and, at best, would result in a scan of the index.
Hello, what do you mean by SARGable? How can a datatype cause that? Can you please help me understand?
Thank you
January 24, 2018 at 1:06 pm
Luis Cazares - Wednesday, January 24, 2018 1:03 PM
Something tells me that you would benefit more by having the clustered index on StartDateTime. I don't see how or why ExecutionID can be relevant to your queries.
I will post the code here, as it is not a long one, so that I can hear back from you.
January 24, 2018 at 1:11 pm
One thing it seems we all agree on: as this is purely an insert and delete table, data compression at the page level will be helpful, as it may reduce I/O, correct?
January 24, 2018 at 1:14 pm
river1 - Wednesday, January 24, 2018 1:04 PM
Jeff Moden - Wednesday, January 24, 2018 12:48 PM
Before you add any indexes, examine the code that's having the performance issue and make sure that the code is SARGable and will actually be able to make use of indexes. That also means checking for datatype mismatches where an entire column must be scanned to be evaluated, which would override the use of an index as a seek and, at best, would result in a scan of the index.
Hello, what do you mean by SARGable? How can a datatype cause that? Can you please help me understand?
Thank you
There's plenty of information about that topic; you just need to Google it. Here's one example: https://www.sqlinthewild.co.za/index.php/2016/09/13/what-is-a-sargable-predicate/
January 24, 2018 at 1:19 pm
river1 - Wednesday, January 24, 2018 1:11 PM
One thing it seems we all agree on: as this is purely an insert and delete table, data compression at the page level will be helpful, as it may reduce I/O, correct?
It will help in terms of I/O, but it will cost some CPU. Your benefit will depend on your resources.
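You can estimate the savings before you commit to it (a sketch, assuming the table sits in the Logging schema as posted):
-- Estimate how much space PAGE compression would save
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = 'Logging',
     @object_name      = 'SPExecutions',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- If the numbers look good, rebuild the table with page compression
ALTER TABLE [Logging].[SPExecutions] REBUILD WITH (DATA_COMPRESSION = PAGE);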
January 24, 2018 at 2:42 pm
river1 - Wednesday, January 24, 2018 1:11 PM
One thing it seems we all agree on: as this is purely an insert and delete table, data compression at the page level will be helpful, as it may reduce I/O, correct?
Heh... hold the phone.... ONLY insert and delete? No SELECTs????
--Jeff Moden
Change is inevitable... Change for the better is not.
January 24, 2018 at 11:15 pm
Jeff Moden - Wednesday, January 24, 2018 2:42 PM
river1 - Wednesday, January 24, 2018 1:11 PM
One thing it seems we all agree on: as this is purely an insert and delete table, data compression at the page level will be helpful, as it may reduce I/O, correct?
Heh... hold the phone.... ONLY insert and delete? No SELECTs????
The SELECT is only done by this SSRS report, once per day.
January 24, 2018 at 11:16 pm
Luis Cazares - Wednesday, January 24, 2018 1:19 PM
river1 - Wednesday, January 24, 2018 1:11 PM
One thing it seems we all agree on: as this is purely an insert and delete table, data compression at the page level will be helpful, as it may reduce I/O, correct?
It will help in terms of I/O, but it will cost some CPU. Your benefit will depend on your resources.
If it only has inserts and deletes, I don't think it can hurt CPU performance much. At least this is what I read in some posts.
January 25, 2018 at 2:39 am
Luis Cazares - Wednesday, January 24, 2018 1:14 PM
river1 - Wednesday, January 24, 2018 1:04 PM
Jeff Moden - Wednesday, January 24, 2018 12:48 PM
Before you add any indexes, examine the code that's having the performance issue and make sure that the code is SARGable and will actually be able to make use of indexes. That also means checking for datatype mismatches where an entire column must be scanned to be evaluated, which would override the use of an index as a seek and, at best, would result in a scan of the index.
Hello, what do you mean by SARGable? How can a datatype cause that? Can you please help me understand?
Thank you
There's plenty of information about that topic; you just need to Google it. Here's one example: https://www.sqlinthewild.co.za/index.php/2016/09/13/what-is-a-sargable-predicate/
After checking your link, it seems that my code is not SARGable:
EXEC ('
    SELECT DISTINCT
        (REPLACE(REPLACE(SchemaName, ''['', ''''), '']'', '''') + '' - '' + REPLACE(REPLACE(spname, ''['', ''''), '']'', '''')) AS SPNAME
    FROM ' + @pDB + '.[Logging].[SPExecutions]
    WHERE CONVERT(date, startdatetime, 103) >= CONVERT(date, ''' + @StartDate + ''', 103)
      AND CONVERT(date, enddatetime, 103) <= CONVERT(date, ''' + @EndDate + ''', 103)
    ORDER BY
        (REPLACE(REPLACE(SchemaName, ''['', ''''), '']'', '''') + '' - '' + REPLACE(REPLACE(spname, ''['', ''''), '']'', ''''))
')
because it uses a CONVERT function when comparing the dates, correct?
January 25, 2018 at 2:52 am
Now the code is without converts on the columns, like this:
EXEC ('
    SELECT DISTINCT
        (REPLACE(REPLACE(SchemaName, ''['', ''''), '']'', '''') + '' - '' + REPLACE(REPLACE(spname, ''['', ''''), '']'', '''')) AS SPNAME
    FROM ' + @pDB + '.[Logging].[SPExecutions]
    WHERE startdatetime >= CONVERT(date, ''' + @StartDate + ''', 103)
      AND enddatetime <= CONVERT(date, ''' + @EndDate + ''', 103)
    ORDER BY
        (REPLACE(REPLACE(SchemaName, ''['', ''''), '']'', '''') + '' - '' + REPLACE(REPLACE(spname, ''['', ''''), '']'', ''''))
')
So it is SARGable. I think 🙂
January 25, 2018 at 8:56 pm
Maybe this would make the code easier to read and prevent any SQL injection.
DECLARE @sql NVARCHAR(MAX);

SELECT @sql = N'SELECT DISTINCT
        PARSENAME(SchemaName, 1) + '' - '' + PARSENAME(spname, 1) AS SPNAME
    FROM ' + QUOTENAME(name) + N'.[Logging].[SPExecutions]
    WHERE startdatetime >= @StartDate
      AND enddatetime <= @EndDate
    ORDER BY SPNAME;'
FROM sys.databases
WHERE name = PARSENAME(@pDB, 1);

EXECUTE sys.sp_executesql @sql, N'@StartDate date, @EndDate date', @StartDate, @EndDate;
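And if the report still struggles after that, a narrow nonclustered index would let that WHERE clause seek instead of scan (a sketch only; adjust it to the predicates you actually use):
CREATE NONCLUSTERED INDEX IX_SPExecutions_StartDateTime
    ON [Logging].[SPExecutions] (startdatetime)
    INCLUDE (enddatetime, SchemaName, spname);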