Some stupid Questions before I go into battle

  • Hi all

    I was looking for some ammo for a meeting I will be having with our so-called DBA.  We have a SQL 7.0 database server with a 60 GB database on it.  It is a Dell 4600 with dual 1.8 GHz Xeons, 4 GB of memory, and a 300 GB RAID array with six 15K disks.  The system is a dog: it has about 25 users connected, mostly entering orders, with a small handful running reports.  I have been given the job of finding out why it is so slow.  After looking at the database, table, and index structure, some things seem odd to me.

    1.  Not one table has a primary key or clustered index

    2.  There are no constraints

    3.  The database is 60 GB, but only a quarter of it is data; the rest is indexes

    4.  The stored procedures are huge, and they are doing column conversions, calculations, string manipulation.  Everything you can imagine.

    5.  There are triggers everywhere.

    This totally goes against everything I have read.  I just want some human verification that I have read and understood everything right.

  • I am not a guru, but PKs are essential to the design of a database; otherwise, how do you know whether you already have that record in any particular table? If there is no primary key, I can add as many duplicate records as I like, and who or what is going to stop me?

    Second, do the tables have any kind of relationship? I imagine they don't, just from reading that they don't have PKs.

    Constraints based on PKs are the most important ones for avoiding data duplication. As for size, the machine you described is more than enough to hold that database and more.

    With the stored procs, the problem is not how many there are or how complicated they are; the problem resides in the database design itself, with no table relationships and no PKs.

    That said, I would go to the meeting and suggest that the database needs to be normalized. This recommendation implies PKs, FKs, relationships, etc. How does the designer know if an index has already been created? How fast can a search run without any specific way to look for the data? The only thing is, don't go to war! PEACE!!

    Maybe the DBA has another school of thought (a hierarchical database rather than a relational one), and that makes a huge difference. Good luck anyway!!

  • Which version of SQL Server do you run, Standard or Enterprise?

    1.  Not one table has a primary key or clustered index

    Not every table must have a primary key or clustered index. But you really need to find out which queries run slowly and whether they use proper indexes.

    2.  There are no constraints

    Constraints define rules regarding the values allowed in columns and are the standard mechanism for enforcing integrity. Using constraints is preferred to using triggers, rules, and defaults. The query optimizer also uses constraint definitions to build high-performance query execution plans.
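    As a hedged sketch of the point above (the table and column names here are hypothetical, not from the original poster's schema), declaring a primary key, a foreign key, and a CHECK rule as constraints looks like this:

    ```sql
    -- Hypothetical order-entry tables; names are illustrative only.
    CREATE TABLE Customers (
        CustomerID  int         NOT NULL,
        Name        varchar(50) NOT NULL,
        CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerID)
    )

    CREATE TABLE Orders (
        OrderID     int NOT NULL,
        CustomerID  int NOT NULL,
        Quantity    int NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID),
        -- The FK enforces integrity and gives the optimizer join information.
        CONSTRAINT FK_Orders_Customers FOREIGN KEY (CustomerID)
            REFERENCES Customers (CustomerID),
        -- A CHECK constraint replaces a validation trigger for simple rules.
        CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0)
    )
    ```

    Declared this way, the rules are enforced on every insert and update without any trigger code firing.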

    3.  The database is 60 GB, but only a quarter of it is data; the rest is indexes

    SQL Server needs more time and resources to maintain the indexes. Over-indexing will slow inserts, deletes, and updates.
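    A rough way to see how heavily indexed each table is on SQL 7.0 is to count entries in the sysindexes system table (a sketch only; sysindexes also carries rows for auto-created statistics, so treat the counts as approximate):

    ```sql
    -- Approximate index count per table (SQL Server 7.0 system tables).
    -- indid 1 = clustered index; 2-254 = nonclustered indexes and statistics.
    select object_name(id) as tablename,
           count(*)        as index_entries
    from   sysindexes
    where  indid between 1 and 254
    group by object_name(id)
    order by count(*) desc
    ```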

    4.  The stored procedures are huge, and they are doing column conversions, calculations, string manipulation.  Everything you can imagine.

    How complex your stored procedures are depends on the business requirements. But you should avoid stored procedure recompiles and minimize the use of temp tables and cursors inside stored procedures.

    5.  There are triggers everywhere.

    Triggers are most useful when the features supported by constraints cannot meet the functional needs of the application.

    You also have to identify whether blocking and deadlocks are happening, which really decrease database performance.
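    A quick sketch for spotting blocking on SQL 7.0, using the sysprocesses system table:

    ```sql
    -- Sessions that are currently blocked, and which spid is blocking them.
    select spid,
           blocked      as blocking_spid,
           waittime,
           lastwaittype,
           cmd
    from   master..sysprocesses
    where  blocked <> 0
    ```

    If the same blocking_spid keeps showing up during order entry, trace what that session is running; long trigger executions are a common culprit.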

  • A good start would be to run the MS Best Practices Analyzer; you will get very detailed info!

    Please don't go to war.  Try to handle it calmly so that you get the cooperation of everyone.

    HTH


    * Noel

  • I've seen many a commercial product implemented without PKs and RI (presumably to keep the product a trade secret!). The poor DBA then gets the blame for the "thing" running slow! However, things to try are rebuilding the indexes and running UPDATE STATISTICS at a relatively "quiet" time (or request a downtime on the database concerned). Good luck, and don't blow your top.
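    A sketch of that maintenance pass using SQL 7.0-era commands (Cust_Bio is one of the poster's tables; substitute your own, and pick a fill factor that suits your insert rate):

    ```sql
    -- Rebuild all indexes on one table; '' means every index, 90 = fill factor.
    DBCC DBREINDEX ('dbo.Cust_Bio', '', 90)

    -- Refresh the optimizer's statistics for that table...
    UPDATE STATISTICS dbo.Cust_Bio

    -- ...or refresh out-of-date statistics for every table in the database.
    EXEC sp_updatestats
    ```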

  • You should be kind and polite and suggest the DBA read a book on the fundamentals of relational databases.

    It looks like a long way to go, and you won't get far without teaming up.

     

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • If the database has lots of indexes and triggers, it suggests that referential integrity is probably handled by the triggers. Not having clustered indexes is probably a bad idea, as you will not be able to remove leaf-level fragmentation from your heap tables.

    I suggest you run DBCC SHOWCONTIG to check how fragmented the data structures are, then look at the indexes. Whilst constraints and DRI set within the database are always preferable, sometimes application design hampers their use.

    I'd suggest you check the data structures and what the triggers actually do. Without wishing to start another war <grin>, it doesn't sound as if you've actually analysed your database?
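    On SQL 7.0, DBCC SHOWCONTIG takes an object id rather than a table name, so the check suggested above looks something like this (using one of the poster's tables as the example):

    ```sql
    -- Fragmentation report for one table (SQL 7.0 syntax takes an object id).
    declare @id int
    set @id = object_id('dbo.Cust_Bio')
    DBCC SHOWCONTIG (@id)
    -- In the output, watch "Scan Density" (closer to 100% is better) and
    -- "Extent Scan Fragmentation" (lower is better, especially for heaps).
    ```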

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Thanks, guys, for all your recommendations.  I have pretty much narrowed the problem down to excessive indexes that are slowing the updates and inserts to a crawl, which then has a snowball effect because the locks generated block the reports that are running.  Also, the huge stored procedures that run as jobs pretty much pin the CPU, and the system comes to a halt.  As far as index fragmentation goes, there is none; since they are not clustered indexes, SHOWCONTIG shows all the indexes to be OK.  Update statistics was a problem that I repaired a couple of days ago, after I saw the SQL and stored procedures forcing the indexes with hints because SQL would not use them otherwise.  I am stuck between hardware and software on this one; his fix for every slowdown is to throw up another log shipping server, which to me just adds more burden to the boxes.  Thanks for all your help, and if you have any other ideas, please post them.  At this point the only thing I can recommend is a total review of the database layout and structure.  I was planning to move this system to an n-tier approach in the future, moving these large stored procedures into EXEs or DLLs running on application servers.

    Anyway, I could probably fill the forum with my complaints, so I will leave it at that.

     

    Thanks

    Jim

  • Jim,

    "...it has about 25 users connect mostly entering in orders..." and "Not one table has a primary key or clustered index" would be my FIRST point of research.

    Use Profiler to trace ("take a reading") during a heavy usage (slow performance) period for 5 to ? minutes depending on how much information it captures and run Index Tuning Wizard against the trace file. Also, look for the transaction(s) that have the longest duration and compare to any index recommendations from Index Tuning Wizard. This will give you a place to start without even breaking a sweat.

    I'm a mainframe programmer forced into a DBA role. This little tip (and a lot of self-instruction reading) resulted in an 87% performance increase in a 3rd-party app, the company footing the bill for SQL Server Admin & SQL Server Implement. certification courses $$$, and a nice feather in my cap!!!

  • Thanks for your input. I have been profiling the server for about a month now, at different times and for different lengths of time; no indexes are recommended, and as far as transactions go, it is the triggers that are taking the most time.  I have included an example of one of our triggers, for deleting a record:

    CREATE TRIGGER Trig_UpdateLog_Deletes_BI ON dbo.Cust_Bio
    FOR DELETE
    AS
    return
     if (app_name() not in ('Web Lookup', 'Order Capture System'))
      RETURN

     declare @tablename varchar(50)
     declare @tablecode varchar(20)
     declare @sqltext varchar(5000)
     declare @masterfield varchar(100)

     set @tablename = 'Cust_Bio'
     set @tablecode = 'BI'
     set nocount on

     select * into #deleted from deleted

     Declare TheFields Cursor static For
      Select sysnamefield, sysfieldid
      from system_alias
      where sysnametable = @tablename and
            ynlogdeletes = 'Y' and
            sysnametable <> sysnamefield
     open TheFields

     if (@@CURSOR_ROWS > 0)
     begin
      declare @columnname varchar(100)
      declare @sysfieldid int
      declare @UpdateLogID int

      set @sqltext = ''

      Fetch next from TheFields into @columnname, @sysfieldid
      while (@@fetch_status = 0)
      begin
       if (@sqltext = '')
        set @sqltext = ' ''['' + isnull(convert(varchar, ' + @columnname + '), '''') + '']'' '
       else
        set @sqltext = @sqltext + ' + ''-['' + isnull(convert(varchar, ' + @columnname + '), '''') + '']'''
       Fetch next from TheFields into @columnname, @sysfieldid
      end

      declare @temp2 varchar(100)
      declare @temp3 varchar(100)
      declare @temp4 varchar(100)

      exec proc_getnewupdatelogid @UpdateLogID OUTPUT
      set @temp2 = convert(varchar, getdate())
      set @temp3 = user
      set @temp4 = convert(varchar, @UpdateLogID)

      execute (
        'insert into cust_updatelog
          (chapter, account, systableid, syslinkrow, sysidfield,
           sysnamefield, codetrantype, valuefieldtran, valuefieldprev,
           idbatchnumber, sysupdatelogid, iduser, datelastupdated)
         select
          #deleted.chapter,
          #deleted.account,
          ''' + @tablecode + ''',
          #deleted.sequence,
          null,
          ''RECORD'',
          ''DEL'',
          left(' + @sqltext + ', 50),
          null,
          0,
          ''' + @temp4 + ''',
          ''' + @temp3 + ''',
          ''' + @temp2 + '''
         from #deleted')
     end

     close TheFields
     deallocate TheFields

     select @masterfield = sysnamefield
     from system_alias
     where sysnametable = @tablename and
           ynmasterfield = 'Y'

     if (not @masterfield is null)
     begin
      exec proc_getnewupdatelogid @UpdateLogID OUTPUT
      set @temp4 = convert(varchar, @UpdateLogID)

      execute (
        'insert into cust_updatelog
          (chapter, account, systableid, syslinkrow, sysidfield,
           sysnamefield, codetrantype, valuefieldtran, valuefieldprev,
           idbatchnumber, sysupdatelogid, iduser, datelastupdated)
         select
          #deleted.chapter,
          #deleted.account,
          ''' + @tablecode + ''',
          #deleted.sequence,
          null,
          ''' + @masterfield + ''',
          ''DEL'',
          isnull(convert(varchar, ' + @masterfield + '), ''''),
          null,
          0,
          ''' + @temp4 + ''',
          ''' + @temp3 + ''',
          ''' + @temp2 + '''
         from #deleted')
     end

    Most of our triggers are used to provide logging of who did what.

     

     

     

  • Jim,

    Though I don't pretend to be a SQL code guru and haven't put too much thought into someone else's code (others might clarify/elaborate on these questions/observations):

    1. "select * into #deleted from deleted" - copying the deleted pseudo-table into a second temporary #deleted table in TempDB seems wasteful of resources.

    2. Probably no impact, but the TempDB table #deleted was not explicitly dropped. How big is TempDB? It should be relatively small for OLTP systems, larger for reporting.

    3. "exec proc_getnewupdatelogid @UpdateLogID OUTPUT" gets called twice to retrieve the same information. Declare a variable once, call the stored procedure once to retrieve the value, and use the variable twice.

    4. If all the other triggers use proc_getnewupdatelogid repeatedly, start by optimizing (maybe set a clustered index if applicable) that procedure since this modification would have a "ripple" effect throughout the other triggers.

    Like I said. Research is up to you. Just some quick "first glance" observations.
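    Point 3 above can be sketched like this (a hypothetical simplification of the trigger's tail, assuming both inserts really can share one log id; note the #deleted copy is harder to remove, because the deleted pseudo-table is not visible inside the dynamic EXECUTE()):

    ```sql
    -- Fetch one new log id and reuse it, instead of calling the proc twice.
    declare @UpdateLogID int
    declare @logid varchar(100)

    exec proc_getnewupdatelogid @UpdateLogID OUTPUT
    set @logid = convert(varchar, @UpdateLogID)

    -- ...then concatenate @logid into BOTH dynamic INSERT strings,
    -- rather than calling proc_getnewupdatelogid again before the second one.
    ```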

  • I am no code guru myself; that's why I think I have a much larger problem than just the SQL database itself.

  • Though PC departments like to spend, spend, spend ("need more this", "need more that", "need a bigger other"), I've found problems are usually more software-related than hardware-related (unless you've just added significant load to already existing systems).

     

  • Sorry for not including this previously.

    Our "engineers" were telling me just that: old PC technology, need to upgrade, get faster drives & more memory $$$. One covering index created for the query solved the problems, and the price was $0.

  • Temp tables, cursors, and ad hoc EXECUTE in a trigger!  No wonder you're having problems.  Better check the procs it calls as well.

    RandyHelpdesk: Perhaps I'm not the only one that does not know what you are doing. 😉
