Estimating compute cost per type of transaction

  • Hello,

    I am creating a program that counts the Read, Insert, Update, and Delete statements that exist in code.

    Since we know:

    • The type of transaction,
    • the number of parameters passed,
    • the size of the table,

    I thought it would be good to provide a computed cost for each program/function.

    That is to say, a read operation on a table with 50 fields costs, say, 1,

    an update operation on the same table for all 50 fields costs, say, 2.5, and so on.

    I think we all agree that an insert does not have the same cost as a read, nor an update the same cost as a read or a delete, and so on.

    Are there any studies regarding this kind of capacity planning? Everybody talks about the number of transactions that SQL Server can handle, etc., but is there a plain-vanilla study along these lines?

    Thank you all!

     

  • I don't know of one.  And I don't see the value of it.  First, if the statement appears in a loop, you'd have to multiply the cost.  Second, not all column updates are equal.  Changing 10 non-NULL bit values involves vastly less overhead than significantly lengthening a varchar column, for example.

    You can use the dynamic management function sys.dm_db_index_operational_stats to determine the number of modifications (DELETEs / INSERTs / UPDATEs) to given table(s)/index(es) in a time frame, including the time (in ms) required.  You can then assign whatever weight you want to each of the values it returns.
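
    A minimal sketch of that idea, assuming the current database; the weights (1 / 1.5 / 2) are arbitrary placeholders, not measured values:

    -- Leaf-level modification counts per index, with an arbitrary weighting applied.
    SELECT  OBJECT_NAME(ios.object_id)       AS table_name,
            ios.index_id,
            ios.leaf_insert_count,
            ios.leaf_update_count,
            ios.leaf_delete_count,
            ios.leaf_insert_count * 1.0
          + ios.leaf_update_count * 1.5
          + ios.leaf_delete_count * 2.0      AS weighted_modifications
    FROM    sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
    ORDER BY weighted_modifications DESC;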

    SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".

  • Thank you, Scott. Indeed, I completely agree; it is an almost impossible task. However, there ought to be studies about such costs.

    I was told that IBM research had produced such a document for DB2/IBMi but I have failed to find it.

    As you mentioned, they even took the data types into account.

     

  • I agree with ScottPletcher here that there is little value in that sort of assessment, and there are a LOT of things that can impact it.  If your table has 255 indexes on it (for example), your insert performance is going to be drastically different from inserting into a heap.  The same goes for a SELECT, especially if you are ordering, filtering, or joining the data.

    If you want to assign a weight to an operation, you need to factor in a lot of things.  Even doing a simple SELECT TOP(1) without an ORDER BY (so you are getting a semi-random row, but there is no SORT operation) will have a different weight on 2 different tables.  Pulling the maximum number of bytes for a row vs the minimum would have different weights.  What I mean is, pulling 1 byte per row is a DRASTICALLY different weight than pulling 8 KB per row (or more once you start working with LOBs).

    While I agree that a SELECT and an INSERT will have different "weights" in most cases, I disagree that an insert is 2.5 times the "weight" of a SELECT.  You MAY think a SELECT will always have a smaller weight than an INSERT, but make a HEAP with a million rows and no nonclustered indexes on it.  Doing an INSERT will definitely have less weight than a SELECT with a WHERE clause or an ORDER BY.
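
    A rough, hypothetical repro of that heap scenario (the table name and row count are invented for illustration): on a heap with no indexes, the single-row INSERT touches one page, while the filtered SELECT has to scan the whole table.

    -- Build a million-row heap with no indexes.
    CREATE TABLE dbo.WeightDemoHeap (Id INT NOT NULL, Payload VARCHAR(200) NOT NULL);

    INSERT INTO dbo.WeightDemoHeap (Id, Payload)
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
           REPLICATE('x', 200)
    FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Cheap: a single-row append to the heap.
    INSERT INTO dbo.WeightDemoHeap (Id, Payload) VALUES (1000001, 'one more row');

    -- Expensive: no index to help, so this is a full scan of ~1,000,000 rows.
    SELECT Id, Payload FROM dbo.WeightDemoHeap WHERE Id = 123456;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;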

    On the opposite side of the spectrum, with a good clustered index you could probably have a system where the SELECT and INSERT weights are similar, and there may even be cases where they are identical.

    That is a very specific and uncommon example, but there is no reason it couldn't exist in a real-world scenario.  And if you have a system that does mostly writes to a database (such as a logging or auditing system), you are going to have a different configuration than a mostly-read system (such as a data warehouse).  Giving a weight to the operation doesn't tell you much if you don't know the whole picture, and trying to build the whole picture just to get a weight per operation is going to be a slow, painful, and error-prone process.  And when MS updates the cardinality estimator, it could completely throw off the weights too, and something you calculated to be lightweight turns out to be slow.

    The other fun part about doing "capacity planning" is that it doesn't rely ONLY on SQL Server.  Your hardware also needs to be sufficient for the workload.

    I, like ScottPletcher, do not see the benefit in knowing an arbitrary weight assigned to an operation.  And knowing the number of parameters given to an operation doesn't really help determine the weight.  Inserting a single value into a 1-column table MAY sound like a lightweight operation, but what if that column is a LOB and you are inserting 2 GB of data into it?  Inserting into a 50-column table where each column is a bit data type WILL have a lower weight than a 2 GB insert.
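
    Two hypothetical table definitions (names invented) that make the contrast concrete:

    -- 50 BIT columns pack into only a handful of bytes per row.
    CREATE TABLE dbo.FiftyBitFlags
    (
        Flag01 BIT, Flag02 BIT, Flag03 BIT, -- ... continue through Flag49
        Flag50 BIT
    );

    -- A single LOB column can hold up to 2 GB in one value.
    CREATE TABLE dbo.OneLobColumn
    (
        Payload VARBINARY(MAX)
    );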

    And then the storage side of things makes this challenging too.  If you are inserting 10 MB into a table stored on SSD, it will be faster than inserting 5 MB of data into a table stored on a 10,000 RPM HDD.  You may think HDDs are not used anymore, but I know where I work, our SAN has a mix of SSDs and HDDs.  HDDs are cheaper, and if the data isn't accessed much, the SAN pushes it out to the slow disks.  If the data is accessed frequently, then it sits on the SSDs.  And as I am not the SAN admin and have no visibility into that configuration, I don't know which data will be sitting on SSD or HDD.

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Hi Brian,

    Thank you for the great response! Look, I think you just specified how one can build a decent benchmarking mechanism for actual DB performance.

    I did not claim that the cost of an insert is 2.5 times that of a select; I just gave an exaggerated example to point out that the transaction metrics we get are not at all on a level playing field.

    As you correctly pointed out, even the data types of the fields play a role.

    So why not create a standard benchmark, i.e. tables with, say, 3 int fields, 20 varchars of length x, 10 decimals, etc., and do a comparative run?
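
    Something along these lines, where the column counts come from the post but the table name, lengths, and precisions are placeholder assumptions:

    -- A minimal sketch of a "standard benchmark" table.
    CREATE TABLE dbo.BenchmarkBaseline
    (
        IntCol1  INT,  IntCol2  INT,  IntCol3  INT,
        VarCol01 VARCHAR(50), VarCol02 VARCHAR(50),      -- ... 20 varchar columns in total
        DecCol01 DECIMAL(18, 4), DecCol02 DECIMAL(18, 4) -- ... 10 decimal columns in total
    );

    -- Comparative run: time the same batch of operations on each target platform.
    DECLARE @start DATETIME2 = SYSUTCDATETIME();

    INSERT INTO dbo.BenchmarkBaseline (IntCol1, IntCol2, IntCol3, VarCol01, VarCol02, DecCol01, DecCol02)
    VALUES (1, 2, 3, 'a', 'b', 1.0, 2.0);

    SELECT DATEDIFF(MICROSECOND, @start, SYSUTCDATETIME()) AS insert_microseconds;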

    You may wonder why I am asking. Well, here you will find the answer:

    https://www.linkedin.com/posts/dimitrisv_complianceprofessionals-lineage-impactassessment-activity-6769634643546308608-Xjup

    To make a long story short, we automatically analyse LANSA. It is a low-code language/dev environment originating in the era of mainframes, the IBM i series, COBOL, RPG and the like. Great product, btw. Here is the deal: LANSA can be ported to the IBM i series, MS SQL Server, Oracle or MySQL... you take your pick.

    But how can you take your pick if you do not know the actual performance, and thus the real compute cost, of each implementation? So then other factors come into play: politics, service, in-house knowledge vs. the cost of onboarding, etc.

    I am keen to know because we catalogue ALL the statements of ALL the functions in the codebase. So I know how many inserts/updates/deletes each function performs (and, obviously, against which data types).

    Enter the cloud; there, again, the billing is not transparent.

    What else is there to say? If you like, you can join me in the webinar and we can touch on this issue as well.

    Again, many thanks for your thorough response.

    Best regards and keep safe, lads!

    Dimitris

  • It is an interesting concept, but it is still incredibly tricky to calculate.  Let's say I partition my table and throw all of the "old" data onto slower disk and keep the new data on fast disk.  Now my SELECTs are going to be inconsistent, since the cost depends on which partition (and therefore which disk) the data lives on.

    With the cloud, you remove most of the hardware from the calculations, but there are still a TON of factors you need to consider when looking at it.  And even with all of those factors, having a metric that says stored procedure A has a "high cost" based on these values might have very little meaning if that stored procedure is run 1 time per year (such as an archiving process).

    I can't speak for all DBAs, but for me, I have a rough idea of the hardware requirements for my workloads, and the biggest constraint for me is getting enough disk space.  My IT department provides me with the hardware (my input is limited there), and my role is more to keep queries running efficiently. I can monitor query performance and look at my "slow running queries" through various reporting tools or even the built-in stuff in SQL Server.  And which queries run slow can change after updating statistics, updating SQL Server, or changing parameters (the parameter-sniffing problem).

    For me, at this time, I don't think your tools would be beneficial.  I can get compute costs from the execution plan, and I'll know how good (or bad) my query is.  When I'm doing performance tuning, my preference is to look at what is actually running slow and focus on that.
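
    For example, one flavour of that "built-in stuff" is the plan-cache statistics; a minimal sketch (only queries whose plans are still cached will show up):

    -- Top cached statements by average elapsed time.
    SELECT TOP (20)
            qs.execution_count,
            qs.total_elapsed_time / qs.execution_count / 1000.0 AS avg_elapsed_ms,
            qs.total_logical_reads / qs.execution_count         AS avg_logical_reads,
            SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                      ((CASE qs.statement_end_offset
                            WHEN -1 THEN DATALENGTH(st.text)
                            ELSE qs.statement_end_offset
                        END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM    sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY avg_elapsed_ms DESC;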

    With all of the different factors that go into the performance of a query, I think that any automated tool running on a large database is going to take exceptionally long to run.  Imagine having to look at each and every individual object in a database (tables, columns, procedures, functions, triggers, indexes, statistics, etc.), pull all of that into memory, and then start number-crunching on it... It would be insane to try to figure that out.  And even if you had a "base" table on which you based your metrics, what if mine was lighter than that?  Plus NULL vs NOT NULL would factor in, as well as key constraints... there are just so many factors, and after all that number crunching to see which object or operation has the highest "cost", what if the query optimizer handles some of that pretty well anyway?

    And making that base table for your benchmarking is tricky too: inserting a single row into a table that has 255 columns and a row size just under 8 KB is going to happen pretty darn fast (depending on indexes).  I think you would have a hard time just documenting what the "base table" is and how the base benchmark numbers are calculated.  There are just so many factors that it is challenging to build up any metrics like that which would be meaningful.

    Let's say, for example, I have a stored procedure that inserts into 10 different tables that each have 10 columns of the INT data type.  Would all of these inserts have the same cost?  They may perform differently based on indexes, disk, network, calculated columns, default values, etc.  There is just so much you need to look at to make that guess.  Even SQL Server needs a little bit of time to build the execution plan to give you estimates of your query cost.

    I mean, maybe this information is useful to somebody, but I know my current hardware, I know my current bottlenecks, and I know (and monitor) my slow queries.  Having a tool guess which queries will be slow just doesn't sound too beneficial to me.

     

    Just my 2 cents.

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • I think you nailed the biggest issue here.

    It is the compartmentalization!

    You, as a DBA, say: I have my kit (chosen by someone else), I have the code (written by someone else), and I am responsible for fine-tuning my server to perform optimally for the given hardware and code.

    What we are trying to do is provide a common reference for both Devs and DevOps, so they can figure out exactly what is going wrong and fix it.

    Right now, you have no say in the code that runs. The devs, on the other hand, may not care if they create a table with 600 columns (yes, I have seen those!) that you then have to make run optimally, avoid locks on, etc.

    With such a tool, we hope to bring you both to the same table.

    If all else fails, then the tool itself (LANSA) should optimise. It used to do that even at a time when the concept of indexes was not taken for granted: LANSA creates "views" based on the keys that you use in your queries, and it has done so since the late '80s / early '90s.

    And yes indeed, we now have the capacity and capability to create metrics for each and every table, with their different data types, etc.

    And yes indeed, even in the case of locks you should be able to know why they are happening (right now you just observe when), and possibly explain to the devs why this is so and how they can optimise their functions and processes to help avoid them.
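
    On the SQL Server side, a minimal sketch of where "observing the why" could start: who is blocking whom, and on which statement.

    -- Currently blocked requests and the statement they are waiting on.
    SELECT  r.session_id,
            r.blocking_session_id,
            r.wait_type,
            r.wait_time      AS wait_time_ms,
            t.text           AS blocked_statement
    FROM    sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE   r.blocking_session_id <> 0;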

    Anyway, please allow me to remain optimistic about it.

    With this discussion I got carried away and forgot to post my original question, regarding a bug I found in SQL Server 2019 (and possibly in earlier versions).

  • Dimitri,

    I want to apologize for sounding pessimistic about this.  I do see where the tool could be beneficial.  Where I work, it wouldn't be, but like you said, when you have developers building inefficient objects, it is sometimes hard as a DBA to explain to them why their design sucks.  I started as a C# developer and moved over to SQL, and for quite a while I had trouble thinking in set-based operations rather than row-based ones; now I run into snags in C# when I am thinking in set-based rather than row-based terms.

    Where I work, the developers are the DBAs, and ALL SQL code needs to be run past a DBA prior to going live.  If new tables need to be created, it is rare that we don't have a meeting with the DBA team to come up with a good table design.  This way we can ensure that all objects are well normalized and make sense for the expected workload, and we can estimate what future reports on the data would look like and try to design around that too.

    A table with 600 columns is OK in some systems, but SQL Server would definitely be unhappy with that.

    One thing I fear with a tool like yours is that a developer makes a 600-column table, sees your tool saying "this is a bad idea", and so instead makes 600 two-column tables, and your tool says "cost is low, so you did a good job".  OR you could end up with coding standards built around your tool, where all stored procedures and tables and such need to have a score under 10, so you end up with an application that, to do a single insert, calls 10 different low-cost stored procedures because of some logic that needs to be done before doing the insert (9 logic and sanity-check stored procedures followed by 1 to do the insert).

    Now these are probably going to be edge cases, and I can see your tool being used by the developer team to argue their code and design isn't that bad, or by a DBA team to provide some proof to the developers that their code and design is bad.

    I am interested to see where the tool goes and what the cost of such a tool would be, but I think trying to get those base metrics is going to be tricky.  As hardware and software improve, a "bad" metric by today's standards may be fast enough in 5 years.  Let's say that inserting 1 row into a 50-column table is your base metric of 1, so inserting 100 rows would be a cost of 100 (roughly).  And let's say that a base metric of 1 is 1 second of actual time on your system, so 100 rows is 100 seconds.  Five years from now, that 1 second may be 0.01 seconds, and inserting 100 rows in 1 second is where you need your base metric to be to provide useful information to the end user.

    What I am saying in that last part is that hardware, as well as software, is going to matter when you calculate the cost.  Giving something an arbitrary number of 1 or 2.5 doesn't really tell you much except that stored procedure A is less efficient than B. People like seeing relatable metrics such as time or I/O.

    Another thing you would need to watch for is loops designed without an exit (while (true) or while (1=1)), as these could cause your tool to get stuck in an infinite loop trying to calculate things.  I've written those before for processes that should start and then run forever (until reboot), OR processes that don't have a good exit point, so I want them to retry forever and break out manually (with a goto, for example) when a certain condition is met.  I do this both in C# and in SQL.  When I do, I make sure it is well documented though.
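
    A contrived T-SQL sketch of that pattern (the table name and exit condition are invented); a static analyser has no row count or iteration count to reason about here:

    WHILE (1 = 1)
    BEGIN
        -- ... do the recurring work here ...

        IF EXISTS (SELECT 1 FROM dbo.StopRequests)   -- hypothetical exit condition
            BREAK;                                   -- manual break out of the "infinite" loop

        WAITFOR DELAY '00:00:05';                    -- throttle the polling
    END;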

    Good luck with the tool and I am curious now about the bug you found in SQL Server 2019!

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Cheers Brian!

    Thanks for all the lovely input. Truth be told, we created our tool for documentation and compliance purposes, but it has become so bloody good (apologies for the lack of modesty here 🙂) that we thought, well, we can also help them get a measure of the load for each and every process.

    Btw, you are blessed in the sense that you get to discuss and review the code prior to deployment. In many environments, devs and ops/DevOps do not even know each other.

    As for the bug, I did write a message here and in the MS forums, and we shall see. I will run it as a package (I have them stored anyway from Visual Studio) and see if it behaves differently.

    I do not even know if this bug is due to the high performance of my kit (it has an over 11 GB/s sequential read rate).

    https://pcpartpicker.com/b/dT7TwP

     

  • If you're directly designing "tables", you don't have a good design process and you are not doing proper normalization.  That's because normalization is a logical process, not a physical one.

    What does it mean that SQL Server would be "unhappy" with 600 columns?  And why would that be so?  If in some design situation you really did need 600 attributes for one entity, you'd most likely split them into separate tables, but one table could still be possible.  Either way, why would SQL Server care?  It's just a tool that does what you tell it to do.

    >> As hardware and software improve, a "bad" metric by today's standards may be fast enough in 5 years. <<

    Quite, and that is exactly why the design process is split into logical and physical designs.  It's also possible that during the logical design you don't know for sure what the ultimate DBMS will be.  That was the case for some projects at International Paper: Oracle, SQL Server, Access and Notes (gack!) were all possible destinations.

    SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".

  • I agree with you, ScottPletcher... I was just using examples (poorly) and bad terminology.  By "unhappy" I just meant that performance would likely not be great; it is more that the end users will be unhappy than SQL Server.  600 columns means you are putting in a lot of data per insert, and it is likely that selects on that table are not going to be pulling all 600 columns in most use cases.  We have some SSRS reports with 50+ columns on them, and most end users remove columns after exporting to Excel.  Different sets of columns for different users, but 50 columns is a LOT for a report.

    If I saw a table with 600 columns, I would assume it is some reporting or analytics table (OLAP workload).  It MIGHT not be; there may be a good use case for a 600-column table in an OLTP system, I just can't think of one.  And if I did find a 600-column table in an OLTP system, I would be working with the developer(s) to redesign it to be more normalized.  I can't think of a good example of a well-normalized 600-column table.

    And, for me, even if the 600-column table were "required" for some odd reason, I would probably look at normalizing it and then having a view that spits back the 600 columns.
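
    A toy sketch of that approach (table and column names invented): keep the storage normalized and expose the wide shape through a view.

    CREATE TABLE dbo.CustomerCore    (CustomerId INT PRIMARY KEY, Name VARCHAR(100));
    CREATE TABLE dbo.CustomerDetails (CustomerId INT PRIMARY KEY
                                          REFERENCES dbo.CustomerCore (CustomerId),
                                      Attribute001 VARCHAR(50),
                                      Attribute002 VARCHAR(50)); -- ... and so on, split across tables
    GO
    CREATE VIEW dbo.CustomerWide
    AS
    SELECT  c.CustomerId, c.Name, d.Attribute001, d.Attribute002   -- the "600 columns" the consumer sees
    FROM    dbo.CustomerCore AS c
    JOIN    dbo.CustomerDetails AS d ON d.CustomerId = c.CustomerId;
    GO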

     

    Our process for "designing tables" at my workplace is to first determine what sort of data needs to be captured for the application/process and determine if it is "new" data or if we can reuse existing tables.  For example, if a new tool relies on Serial Number data, I am not going to build up a brand new serial number table UNLESS the new tool needs a new serial number set.  I am going to reuse the serial number table we have already as it is ready to go and links over to other related tables nicely (MO, PO, model number, etc).  Next we see which of those NEW pieces of data are related and should be in the same table and then figure out more details about each of the types of data we want to capture and how it relates up.  Once we have that designed, we re-evaluate the design to see if it makes sense the way we did it.  Sometimes once we have it all on the whiteboard we realize that we need to normalize it better and sometimes we decide that it needs to be denormalized a bit as we over-normalized it for what it will be used for.

    Where I work, if we are talking about putting data into a database that we control, it is going into SQL Server.  That is the only one we are licensed for, and it is what our (small) team is skilled at.  If we had a large DBA team, we might look at other database platforms, but we are a team of 3 who are DBAs AND developers (we do it all!).

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Agreed overall.  I will say, I wouldn't object to providing the user with a view (a "table", perhaps, to them) that had 600 columns, if that is what they needed.  But behind the scenes it could be more than one physical table, perhaps quite a few.  On the other hand, it could be just one table, if the total row length fell in a range appropriate for a single table and they tended to use a lot of the columns every time they read it.

    I prefer to only ever make views available to end users, never the actual tables.  It's so much easier to make data-structure adjustments later that way.  MS now does exactly that: they provide system views for DBAs and keep the actual base tables unreferenced by outsiders, no doubt for the same reasons.

    SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
