Monitoring whether something has been deleted in a DB

  • Hello,

    I have a long-term task that came up a year ago, and only now do I have a little free time to start working on it. I will explain the task briefly, and then what I have done so far.

    Some time ago, before I even worked here, a colleague of mine using the web application deleted half of a database. She thought she was working on the test database, and only found out seven months later that she had been working on production. The deletions were part of Retention Management/Legal Hold, which was supposed to be tested at the time. She got an error message in an email and assumed the deletion hadn't worked, because the test DB wasn't touched at all. Since the databases on our server are not used often (they are historical systems that are queried literally once in a while), nobody noticed that anything had been deleted until those seven months later, when we found that the error actually said something had interrupted the process, yet it had still managed to delete about 50% of the tables.

    By that point I had joined, and I took on the task of creating a "monitoring" solution on top of all the types of monitoring that already exist; something different, but simple. The absolutely basic idea was to create a script that counts all rows of all tables in each database. Then we manually compare the numbers with the previous run. If the numbers call for any further action, we do it manually: check the databases, check each table's rows, compare, and find what has changed. (A quick reminder: these DBs are not changed often; most of them will stay unchanged for years and then be deleted, and some of them have complicated Retention Management (deleting rows/tables), which complicates the monitoring.)

    As a technical guy, basically one of the few people in this department with real IT knowledge, I don't think this is the right way to do it, and it is not as easy as everyone around here thinks. However, it seems my word and my recommendations don't carry much weight on this matter, so it has to be done.

    As someone with just one year of experience with SQL and databases, my first question is: is there any existing way to check/monitor databases on a weekly basis for whether something was deleted or not, and so on? I assume the answer is no, because deleting and adding data and tables is normal activity, so why would anyone build an advanced application to monitor it, report on it, keep statistics, and so on.

    However, as I mentioned, our databases won't get any new records or tables at the business level, but Retention Management creates additional tables that it uses for statistics/logs about what was deleted and when. That undermines the whole idea of counting rows: we delete business data while new statistics tables are created, which makes it really difficult to calculate correctly and to run automatically.

    If you are interested, I can post more about the structure; for example, all new Retention Management tables end with _RET, which might help with changing the script to work better. I could probably even read those tables to see what was deleted and use the data and statistics inside.

    Here is what I have now - simple: count all rows of all tables in a database into one huge number. Can someone give me some tips and ideas on how to start building this into a better solution, eventually making it automatic? My idea is to create a separate Monitoring database where I can keep multiple tables, store more data, build some statistics, and do additional calculations. The main concern of my bosses (many bosses) is that these numbers don't tell them anything; in their opinion this monitoring does "nothing". They even want a solution that checks not only whether the row counts are correct, but whether any value inside the DB has changed. I was able to make a good argument that this is not a good idea, so for now they agree it's overkill.

    DECLARE @results TABLE (
        DatabaseName  SYSNAME,
        TotalRowCount BIGINT
    );

    DECLARE @database_names TABLE (
        Name SYSNAME
    );

    -- Add the databases you want to check to the @database_names table
    INSERT INTO @database_names (Name)
    VALUES ('DB1'), ('DB2'), ('DB3'), ('DB4'), ('DB5'), ('DB6'), ('DB7'), ('DB8'), ('DB9'), ('DB10'),
           ('DB11'), ('DB12'), ('DB13'), ('DB14'), ('DB15'), ('DB16'), ('DB17'), ('DB18'), ('DB19');

    DECLARE @database_name SYSNAME;
    DECLARE @sql_statement NVARCHAR(MAX);

    DECLARE cur_databases CURSOR FOR
        SELECT Name
        FROM @database_names;

    OPEN cur_databases;

    FETCH NEXT FROM cur_databases INTO @database_name;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            -- Sum the row counts of all user tables in the current database.
            -- sys.partitions row counts are metadata (cheap), which is good enough for this purpose.
            SET @sql_statement = N'
                SELECT
                    @DatabaseName AS DatabaseName,
                    SUM(p.rows)   AS TotalRowCount
                FROM ' + QUOTENAME(@database_name) + N'.sys.tables t
                INNER JOIN ' + QUOTENAME(@database_name) + N'.sys.partitions p
                        ON t.object_id = p.object_id
                WHERE t.is_ms_shipped = 0
                  AND p.index_id <= 1   -- heap (0) or clustered index (1) only, so rows are not double counted
                  AND t.type = ''U'';
            ';

            INSERT INTO @results
            EXEC sp_executesql @sql_statement, N'@DatabaseName SYSNAME', @DatabaseName = @database_name;
        END TRY
        BEGIN CATCH
            -- Display the error message and carry on with the next database
            PRINT 'An error occurred while processing database ' + QUOTENAME(@database_name) + '. Error: ' + ERROR_MESSAGE();
        END CATCH

        FETCH NEXT FROM cur_databases INTO @database_name;
    END

    CLOSE cur_databases;
    DEALLOCATE cur_databases;

    SELECT * FROM @results;
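
    Where I would like to take this is the separate Monitoring database mentioned above, with one row per table per run, so two runs can be compared automatically instead of comparing one huge number by hand. A rough sketch of what I have in mind (the Monitoring database and all object names are just placeholders, nothing that exists yet):

    -- One-time setup in the (planned) Monitoring database
    CREATE TABLE dbo.TableRowCounts (
        CaptureDate  DATE    NOT NULL,
        DatabaseName SYSNAME NOT NULL,
        SchemaName   SYSNAME NOT NULL,
        TableName    SYSNAME NOT NULL,
        [RowCount]   BIGINT  NOT NULL,
        CONSTRAINT PK_TableRowCounts PRIMARY KEY (CaptureDate, DatabaseName, SchemaName, TableName)
    );

    -- The cursor loop above would then insert one row per table (GROUP BY schema and table name)
    -- instead of one total per database.

    -- Compare the latest run with the run before it.
    -- Negative Delta = rows disappeared; CurrentRows = NULL means the whole table disappeared.
    DECLARE @latest   DATE = (SELECT MAX(CaptureDate) FROM dbo.TableRowCounts);
    DECLARE @previous DATE = (SELECT MAX(CaptureDate) FROM dbo.TableRowCounts WHERE CaptureDate < @latest);

    WITH Curr AS (
        SELECT DatabaseName, SchemaName, TableName, [RowCount]
        FROM dbo.TableRowCounts
        WHERE CaptureDate = @latest
    ), Prev AS (
        SELECT DatabaseName, SchemaName, TableName, [RowCount]
        FROM dbo.TableRowCounts
        WHERE CaptureDate = @previous
    )
    SELECT
        COALESCE(c.DatabaseName, p.DatabaseName) AS DatabaseName,
        COALESCE(c.SchemaName,   p.SchemaName)   AS SchemaName,
        COALESCE(c.TableName,    p.TableName)    AS TableName,
        p.[RowCount] AS PreviousRows,
        c.[RowCount] AS CurrentRows,
        ISNULL(c.[RowCount], 0) - ISNULL(p.[RowCount], 0) AS Delta
    FROM Prev AS p
    FULL JOIN Curr AS c
           ON c.DatabaseName = p.DatabaseName
          AND c.SchemaName   = p.SchemaName
          AND c.TableName    = p.TableName
    WHERE ISNULL(c.[RowCount], 0) <> ISNULL(p.[RowCount], 0)   -- show changes only
    ORDER BY Delta;

    The _RET tables could then either be excluded from the comparison or reported in a separate section, so they don't hide real deletions.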

    Maybe to make it clear: the purpose is to have tables (Excel or a PowerPoint presentation) with results presentable to people without much IT knowledge, as proof that nothing was deleted by mistake. I expect a million silly questions, which I have to be able to answer... Ideally everything runs automatically and we receive emails with the results. In my view, what I have, plus a little analysis on our side, is enough; we have already made many other changes to prevent such an accidental deletion.

    It would be a very interesting project in terms of building something and learning new things; on the other hand, it seems like unnecessary overkill.

    Currently we are running SQL Server 2019; we will upgrade to 2022, but I believe that won't change anything for this task, or at most very little.

    I apologize for the long post; I just wanted to make sure I explained everything clearly enough.

  • CDC (change data capture) will track changes on a table. It can be messy if you have a lot of data that changes, but from how you describe it, changes happen very slowly in normal operation. So you could set up CDC and use something like SSIS (SQL Server Integration Services) to parse the data. I've never used CDC, but I know it is an option we looked at a while back before we decided on a different approach for our problem, as CDC was just extra overhead for what we needed.
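
    Since I've never run it myself, treat this as a sketch straight from the documentation rather than tested code (schema and table names are only examples):

    -- Run in the database you want to track (needs sysadmin, and SQL Server Agent must be running)
    EXEC sys.sp_cdc_enable_db;

    -- Enable capture for one table; repeat for each table you care about
    EXEC sys.sp_cdc_enable_table
         @source_schema = N'dbo',
         @source_name   = N'SomeBusinessTable',  -- example table name
         @role_name     = NULL;                  -- NULL = no gating role on the change data

    -- Changes land in cdc.dbo_SomeBusinessTable_CT and can be read with the generated
    -- function cdc.fn_cdc_get_all_changes_dbo_SomeBusinessTable(@from_lsn, @to_lsn, N'all').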

    Another thing you could do is have an SSRS (SQL Server Reporting Services) report that pulls the data out and sends the report out on a schedule. That way you have a history of how the row counts changed.
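
    If SSRS isn't set up, a SQL Agent job that calls Database Mail can do something similar; a minimal sketch, assuming Database Mail is already configured (the profile name, recipient, and history table are placeholders):

    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = N'MonitoringProfile',        -- placeholder mail profile
         @recipients   = N'dba-team@example.com',     -- placeholder recipient
         @subject      = N'Weekly row count report',
         @query        = N'SELECT DatabaseName, TotalRowCount
                           FROM Monitoring.dbo.WeeklyRowCounts',  -- wherever the counts end up stored
         @attach_query_result_as_file = 1;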

    What would probably be a good enough option is either:

    A - a trigger on the table so that when a delete happens, it logs the deleted rows in another table, sends an email alert, or whatever you need (rough sketch below)

    B - modify the stored procedure so DELETE operations first move the data to an archive table, then do the delete, then send an alert (if needed)

    Both methods could handle INSERT and UPDATE operations too. Row counts are not very reliable for watching data changes: if someone deletes 10 rows and adds 10 rows, the net change is 0 and your approach will show no changes.

    Plus, like you said about the RET tables, those are growing.
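
    A rough sketch of option A, assuming an audit table you create yourself (all object names here are just examples):

    -- Audit table that keeps a copy of whatever gets deleted
    CREATE TABLE dbo.SomeBusinessTable_DeleteAudit (
        AuditId    INT IDENTITY(1,1) PRIMARY KEY,
        DeletedAt  DATETIME2     NOT NULL DEFAULT SYSDATETIME(),
        DeletedBy  SYSNAME       NOT NULL DEFAULT SUSER_SNAME(),
        Id         INT           NOT NULL,       -- key of the deleted row
        SomeColumn NVARCHAR(100) NULL            -- whatever other columns you want to keep
    );
    GO

    CREATE TRIGGER trg_SomeBusinessTable_Delete
    ON dbo.SomeBusinessTable
    AFTER DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Copy every deleted row into the audit table
        INSERT INTO dbo.SomeBusinessTable_DeleteAudit (Id, SomeColumn)
        SELECT d.Id, d.SomeColumn
        FROM deleted AS d;
    END;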

    Now, one thing in your post that mildly concerns me is creating new tables. Are you creating new tables from the app? If so, that feels like it MAY be bad design. I find that once I set up the tables for an application I am building, I only need to add new tables IF I am adding features. New tables for an application are a rare thing and are usually planned out so we can handle things like "fake delete" (a delete bit column to mark a row as deleted rather than doing an actual delete, so we have history AND quick recovery for accidental deletes). Plus, I only delete tables (drop tables) when I am retiring the app. Creating and dropping objects in a database is not something I would just let an app do whenever it wants.

    In the end, the best solution really depends on what you want and need to capture. Personally, I like limiting what can and can't be done at the application layer and by user. Application security. Developers should not need to modify production data, so developers only get read access to prod. DBAs get read/write to prod. No data loss due to a developer clicking the wrong button that way. Worst case, they blow up test. But that's just me.

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are any negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Obvious question: why does this person have the ability to do this in prod in the first place? Kill this and you resolve 99.9% of the issues.

    Put delete triggers on the tables and either block the delete from running or log it.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry Whittle wrote:

    Obvious question: why does this person have the ability to do this in prod in the first place? Kill this and you resolve 99.9% of the issues.

    Put delete triggers on the tables and either block the delete from running or log it.

    A thousand "Likes" on that first suggestion.

    You can also add an INSTEAD OF DELETE trigger to simply not execute a DELETE.
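
    Something like this, with the table name being just an example; the DELETE simply never happens and the caller gets an error telling them so:

    CREATE TRIGGER trg_SomeBusinessTable_BlockDelete
    ON dbo.SomeBusinessTable
    INSTEAD OF DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Do NOT re-issue the delete; just tell the caller it was blocked.
        RAISERROR('Deletes on this table are blocked. Contact the DBA team.', 16, 1);
    END;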

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Perry Whittle wrote:

    Obvious question: why does this person have the ability to do this in prod in the first place? Kill this and you resolve 99.9% of the issues.

    Put delete triggers on the tables and either block the delete from running or log it.

    Truth is, before I joined this team and put some order in place, there were 5 people here. One of them thinks of himself as some kind of IT guru; he is now in a higher position as IT Solution Architect for 20 departments, but the truth is that he only talks and talks during meetings, and that's all he does. What he is actually saying... I'd rather not comment on that.

    So, four and a half years ago this person suggested that the PROD and test servers should have the same username and password. Also (read carefully) the test application server had a direct connection to the production SQL server, and they were testing sometimes on the PROD DB and sometimes on the TEST DB. That same person had them running a five-server landscape when only two servers were actually used; then I decided that we need only three, and his words two years ago were: "soon you will realize that you need those two and you will create them again"... Well, two years later it still does not seem that we will need them.

    Mr. Brian Gale wrote:

    Now, one thing in your post that mildly concerns me is creating new tables. Are you creating new tables from the app? If so, that feels like it MAY be bad design. I find that once I set up the tables for an application I am building, I only need to add new tables IF I am adding features. New tables for an application are a rare thing and are usually planned out so we can handle things like "fake delete" (a delete bit column to mark a row as deleted rather than doing an actual delete, so we have history AND quick recovery for accidental deletes). Plus, I only delete tables (drop tables) when I am retiring the app. Creating and dropping objects in a database is not something I would just let an app do whenever it wants.

    First of all, thank you for the great answer!

    Now, the real problem is that as a corporation we cannot develop all of our applications ourselves. Very often we work with other firms and use their software, and sometimes we are not happy with some technical parts, like this one, but unfortunately we cannot change them, so we at least need to learn to work with them. Some of the things you suggested are already provided by this firm and this application, like reports of what was deleted and so on, but my bosses work on the assumption that those reports are wrong, and they want extra security (by the way, we have support, and if something is wrong we contact this firm... not enough for them). Basically I am supposed to recreate what already exists, only because of an unfortunate human mistake, of course with some extra functions/features; that's why it's called Monitoring.

    The _RET tables contain data about how much is about to be deleted and how much has already been deleted. This could be used for my goal; the only problem is that not every database has the same retention rule, and some of them don't have _RET tables at all, because the whole database is supposed to be deleted at once after some time. These tables are part of the most complicated Retention Management, where we delete data at the row (record) level based on some basic rules (date, type, etc.).

  • Mr. Brian Gale wrote:

    Now, one thing in your post that mildly concerns me is creating new tables. Are you creating new tables from the app? If so, that feels like it MAY be bad design. I find that once I set up the tables for an application I am building, I only need to add new tables IF I am adding features. New tables for an application are a rare thing and are usually planned out so we can handle things like "fake delete" (a delete bit column to mark a row as deleted rather than doing an actual delete, so we have history AND quick recovery for accidental deletes). Plus, I only delete tables (drop tables) when I am retiring the app. Creating and dropping objects in a database is not something I would just let an app do whenever it wants.

    Thank you very much for the good answer!

    Unfortunately, we are a corporation and we cannot develop all of our applications ourselves, so we often buy licenses for applications from other firms. That's the case here, and we don't have much power to change how the application works; we can present our concerns, but that's all. They are not going to change the whole architecture of an application they have been building for years just to address our concerns, that's for sure.

    Since I've been here, I have already changed lots of things. One of them is making the DBs read-only as much as possible; only a few carefully considered cases are not read-only, and we know the risks there; they are calculated risks and of course monitored.

    What you explained is basically how this application already works (it has an implemented check, a report when something is deleted, a table full of "statistics", and so on), but in the eyes of my bosses (many bosses), because the problem with the deletion came from the application (even though the real problem was human error all along, a series of human mistakes leading to this one), we need our own solution to monitor the databases. All the restrictions and changes I recommended and implemented weren't enough for my bosses. Even the script I provided is not enough for them. An additional part of the script counts the rows for each table separately, and if there is a difference between runs, I run the next script, where I check which tables have different numbers and decide whether there's a problem or not.

    My whole idea is to automate this process. Logically I have it in my mind, but I have the feeling that there are many flaws in my plan; that's why I opened this topic.

    Perry Whittle wrote:

    Obvious question: why does this person have the ability to do this in prod in the first place? Kill this and you resolve 99.9% of the issues.

    1. There is one person here in the firm (he thinks of himself as some kind of IT guru) who was responsible for this four and a half years ago. He is the one who decided that the PROD and test SQL servers should have the same username and password for everything, and also that the application server should use the PROD databases as well, not only the test SQL server. I am still figuring out how this person still works here; everyone actually thinks he is an amazing part of the firm, and they have given him even more responsibilities. The best thing I did so far was to cut him out of my projects. He just talks and talks and talks but actually says nothing (in my opinion; my bosses with zero IT knowledge seem to be happy with him). In my eyes he is the main person responsible for the two huge problems I am fixing now, one of which is the one discussed here.
    2. I know that kills 99% of the issues, but as I mentioned already, try explaining that to someone without IT knowledge. I did my best, many times, and yet they say that's great but they still don't feel secure and want this monitoring implemented.

    Thank you once again.

  • Martass wrote:

    [...] he is the one who decided that the PROD and test SQL servers should have the same username and password for everything, and also that the application server should use the PROD databases as well, not only the test SQL server. [...]

    Having the same user names and passwords on production and test is stupid beyond belief and violates any number of basic system controls. If you are a public company, that probably violates SOX rules as well. If actual legal hold data was deleted, your company can be in big trouble with the courts.

    Contact your internal audit staff and let them know there is a big controls issue with the user id and passwords.

  • Ross McMicken wrote:

    Having the same user names and passwords on production and test is stupid beyond belief and violates any number of basic system controls. If you are a public company, that probably violates SOX rules as well. If actual legal hold data was deleted, your company can be in big trouble with the courts.

    Contact your internal audit staff and let them know there is a big controls issue with the user id and passwords.

    Don't worry, that's already fixed. It was the first thing I did.

    Martin

  • Why use SQL auth and not Windows auth?

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry Whittle wrote:

    Why use SQL auth and not Windows auth?

    We are using SQL auth for the applications and Windows auth for SSMS.
