October 19, 2015 at 1:08 am
Hi folks.
I know you'll say "you have to experiment", but I need your opinions. Here are the key points of the project:
It's a credit card transaction project, meaning once a record is inserted it is never updated.
There are 1 million existing records, and there will be maybe 5 transactions every second.
Since it's a credit card project, I need to track transaction volumes for each customer.
That means I'd have to run a SUM statement every time, and it must rely on real data.
Of course that causes deadlocks and takes too much time.
So I wrote a stored procedure for this one big table. It handles the delete, insert and update cases and
adds INSERTED.amount (or subtracts DELETED.amount) in a summary table.
But it adds/subtracts multiple records:
one for the customer, one for the bank, 2 for chargebacks, 2 for refunds... basically it inserts/updates almost 20 records.
For every year/month there is one such 20-record block.
Each record holds one variable:
Year Month CustomerID Type Value
------------------------------------------------
2015 1 10 Refund_Count 5
2015 1 10 Refund_Amount 200
2015 2 10 Refund_Count 3
2015 2 10 Refund_Amount 120
..................
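The stored-procedure logic described above might look roughly like this as a trigger. This is a minimal sketch, not the poster's actual code: the table and column names (dbo.Transactions, dbo.MonthlySummary, TranDate, TranType, Amount) are my assumptions, and only the Refund_Amount variable is shown out of the ~20.

```sql
-- Sketch only: assumed tables dbo.Transactions and dbo.MonthlySummary.
-- Folds the net effect of each insert/update/delete into the monthly totals.
CREATE TRIGGER trg_Transactions_Summary
ON dbo.Transactions
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Net change per customer/month: add INSERTED rows, subtract DELETED rows.
    WITH delta AS (
        SELECT YEAR(TranDate) AS Yr, MONTH(TranDate) AS Mo,
               CustomerID, 'Refund_Amount' AS [Type], SUM(Amount) AS Value
        FROM inserted
        WHERE TranType = 'Refund'
        GROUP BY YEAR(TranDate), MONTH(TranDate), CustomerID
        UNION ALL
        SELECT YEAR(TranDate), MONTH(TranDate),
               CustomerID, 'Refund_Amount', -SUM(Amount)
        FROM deleted
        WHERE TranType = 'Refund'
        GROUP BY YEAR(TranDate), MONTH(TranDate), CustomerID
    )
    MERGE dbo.MonthlySummary AS s
    USING (SELECT Yr, Mo, CustomerID, [Type], SUM(Value) AS Value
           FROM delta GROUP BY Yr, Mo, CustomerID, [Type]) AS d
      ON  s.[Year] = d.Yr AND s.[Month] = d.Mo
      AND s.CustomerID = d.CustomerID AND s.[Type] = d.[Type]
    WHEN MATCHED THEN
        UPDATE SET s.Value = s.Value + d.Value
    WHEN NOT MATCHED THEN
        INSERT ([Year], [Month], CustomerID, [Type], Value)
        VALUES (d.Yr, d.Mo, d.CustomerID, d.[Type], d.Value);
END;
```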
It works perfectly, but last week I started to look into indexed views.
This stored procedure's functionality can be achieved with 20 different indexed views,
and at query time I'd combine them with UNION ALL.
In code I can arrange the rest; that's no problem for me.
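One of those ~20 indexed views could be sketched like this (assumed names again; note that SQL Server requires WITH SCHEMABINDING and a COUNT_BIG(*) column in an aggregated indexed view, and the SUMmed column must be non-nullable):

```sql
-- Sketch of one indexed view out of ~20 (assumed object names).
CREATE VIEW dbo.v_MonthlyRefunds
WITH SCHEMABINDING
AS
SELECT YEAR(TranDate)  AS [Year],
       MONTH(TranDate) AS [Month],
       CustomerID,
       SUM(Amount)     AS Refund_Amount,   -- Amount assumed NOT NULL
       COUNT_BIG(*)    AS Refund_Count     -- mandatory in a grouped indexed view
FROM dbo.Transactions
WHERE TranType = 'Refund'
GROUP BY YEAR(TranDate), MONTH(TranDate), CustomerID;
GO
-- The unique clustered index is what actually materializes the view.
CREATE UNIQUE CLUSTERED INDEX ix_v_MonthlyRefunds
    ON dbo.v_MonthlyRefunds ([Year], [Month], CustomerID);
```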
My question is:
Since an indexed view keeps separate physical data, it reads from the source and writes to the database.
For 20 different indexed views it must write something 20 times.
My SP does the same thing.
Logic says there's no difference.
Neither solution causes deadlocks, and since they both query a small separate table they are both fast,
because I don't run SUM anymore; I only read a field.
Would you get rid of the SPs and replace them with indexed views?
Or would you say "there will be no performance difference, so leave it as it is"?
Or would you say "in the future a .... kind of problem can be faced, so use ...."?
Do you have suggestions based on experience?
best.
October 20, 2015 at 5:53 am
Hi,
In my experience, indexed views are much more efficient than the SPs, because they create their own physical data structure. The best part is that, apart from the one clustered index, you can also add multiple nonclustered indexes on top of the view. Just keep in mind that it is a schema-bound view (WITH SCHEMABINDING), so DDL that would change the referenced columns of the base tables is blocked.
The negative part of using them is that they are harder to maintain, you may need to update statistics after every load/update (auto-update can be enabled at the database level), and the storage they consume will keep growing over time.
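One practical detail worth adding: on non-Enterprise editions of SQL Server the optimizer will not use the view's index unless you ask for it explicitly. Assuming a hypothetical indexed view dbo.v_MonthlyRefunds, a read of the materialized totals would look like:

```sql
-- NOEXPAND forces use of the view's own index (required on
-- non-Enterprise editions; Enterprise can match it automatically).
SELECT [Year], [Month], CustomerID, Refund_Amount
FROM dbo.v_MonthlyRefunds WITH (NOEXPAND)
WHERE CustomerID = 10 AND [Year] = 2015;
```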
BR
Arun
October 21, 2015 at 8:44 am
The fact that the current design consists of a giant table doing nothing but inserts, and another table where you have to do twenty writes, concerns me quite a bit more than whether stored procedures or materialized views are the way to go (and by the way, that's comparing apples to hammers to a degree).
What about storing this data in a normalized structure that allows for queries against it instead of having to do all this odd data manipulation? This is a relational storage engine we're talking about. Proper relational storage enhances performance.
After that, whether you use materialized views or your procedures to break apart this data unfortunately comes down to testing. Sorry. But it's really hard to say, without testing, whether the 20 extra inserts you'll have to do as part of 20 materialized views from the one insert to your master table are worse than 20 inserts/updates to a single second table through your stored procedures. However, having 20 separate storage locations for this data within the views is likely to help performance, since you won't be facing the same types of row and page contention that you will with your single super-table approach. So the secondary aspect of this process, queries against the data, might be radically helped by the materialized views.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 21, 2015 at 9:53 am
Hi,
Actually it's not a "big" table with many columns; there are 20 columns. The problem is that I need to keep SUMs up to date all the time, but I only need monthly SUMs, and there are many, many rows for each day.
The second table only holds monthly totals. There are 20 different total types, so if there are 10 customers there will be 10 x 20 = 200 records for each month, which is nothing for SQL Server. So we are not actually talking about monster tables. The first table ("big", as you call it) is partitioned, with one file per month.
I had 2 different calculation needs: one for customers, one for banks. Both calculations write monthly totals to a different table. I used indexed views for the banks and left stored procedures for the customers.
Yesterday my test program finished adding 1,100,000 records. As far as I can see there's no performance problem, but there's also no gain. Actually I'm surprised, but I'm very happy 🙂 At least I eliminated some SP usage, so everything is simpler now, and the execution plans are much simpler too.
I think I'll leave my application that way; I won't convert the SPs that calculate customer SUMs to indexed views.
My conclusion: if you need to get 10 or 20 SUMs over a table, indexed view performance will be almost the same as using SPs to update totals in another table. My experiments show that if you need one, two or maybe 4 SUMs or COUNTs, then an indexed view will give you more performance.
I know this is not a solid rule and every programming case is different... that is mine.
best regards my friends.
thank you.