Performance question

  • Tempdb on the C: drive... how big is the drive itself?

    As a best practice, place tempdb on a disk with high throughput, such as a RAID 10 array.

    You can move tempdb to the new location by following the steps in this article:

    http://support.microsoft.com/kb/224071

    It requires a restart of the SQL Server service. For tempdb you don't have to copy the files to the new location; they are recreated there on startup.
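    The KB article boils down to a couple of ALTER DATABASE statements. A minimal sketch, assuming the default logical file names (tempdev/templog) and a hypothetical E: drive as the target:

```sql
-- Point tempdb's files at the new drive (paths are placeholders).
USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLLogs\templog.ldf');
GO
-- Restart the SQL Server service; tempdb is recreated at the new
-- location, and the old files can then be deleted by hand.
```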

    What RAID level it is depends on the system you have. Is it a rack mount or a tower? What make is it (HP, Dell, etc.) and what is its model? While this info will not tell you the RAID level, it can certainly eliminate some possibilities. Can you not boot into the SCSI/RAID BIOS and look?

    Thanks!!

    The_SQL_DBA
    MCTS

    "Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."

  • DBA (1/8/2009)


    Hi, don't worry, ask as many questions as you want. Yes, it is on the simple recovery model, and I have both my DBs set to autogrow by 10%, unrestricted.

    If I understand it correctly, the DB is in the simple recovery model, right? If so, then you don't need transaction log backups.
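    One way to confirm which recovery model each database is actually in (sys.databases is available on SQL Server 2005 and later):

```sql
-- Quick check of each database's recovery model:
SELECT name, recovery_model_desc
FROM sys.databases;
-- SIMPLE means the log truncates on checkpoint, so log backups are
-- neither needed nor possible; FULL requires regular log backups
-- to keep the transaction log from growing.
```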

    Thanks!!

    The_SQL_DBA
    MCTS

    "Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."

  • The C: drive is 19.5 GB, and I have 2 GB of free space left.

    My tempdb is 5083 MB, with 4846 MB available.

    Both my DBs are used for reads, updates, and inserts.

    My computer is an Intel(R) Xeon(TM) CPU, 2.8 GHz, with 4 GB of RAM.

    I also see this in Computer Management for my disks:

    PERC LD 3PERCRAIDSCSIDiskDrive

    Does that help you?
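    That device string identifies a Dell PERC controller. If Dell OpenManage Server Administrator happens to be installed (an assumption, though it ships with most PowerEdge servers), its omreport CLI can show the virtual disk layout without rebooting into the RAID BIOS:

```shell
# Sketch: list PERC virtual disks; the output's Layout field
# shows the RAID level (e.g. RAID-1, RAID-5, RAID-10).
omreport storage vdisk
```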

  • Micheal Young (1/8/2009)


    Looking at Nitin's reply reminded me of a few coding things that can hinder performance (I'm basing this on my own experience).

    1. If there are queries that use the AliasName = Whatever format instead of Whatever AS AliasName (excluding the setting of variables), this can slow things down, especially if the query is accessing large tables.

    Example (Bad Idea):

    [font="Courier New"]

    SELECT FieldA = colA

    , FieldB = COALESCE(colB, colB2, '')

    , FieldC = colC

    FROM MyTable

    [/font]

    Example (Good Idea):

    [font="Courier New"]

    SELECT colA AS FieldA

    , COALESCE(colB, colB2, '') AS FieldB

    , colC AS FieldC

    FROM MyTable

    [/font]

    Got any proof? I'm just not seeing that particular claim on a million-row table...

    [font="Courier New"] SELECT FieldA = RowNum

    , FieldB = COALESCE(SomeLetters2, SomeCsv, '')

    , FieldC = SomeInt

    FROM dbo.JBMTest

    WHERE SomeLetters2 = 'JM'

    SQL Server parse and compile time:

    CPU time = 0 ms, elapsed time = 25 ms.

    SQL Server parse and compile time:

    CPU time = 0 ms, elapsed time = 1 ms.

    (14615 row(s) affected)

    SQL Server Execution Times:

    CPU time = 2578 ms, elapsed time = 3241 ms.

    -----------------------------------------------------------------

    SELECT RowNum AS FieldA

    , COALESCE(SomeLetters2, SomeCsv, '') AS FieldB

    , SomeInt AS FieldC

    FROM dbo.JBMTest

    WHERE SomeLetters2 = 'JM'

    SQL Server parse and compile time:

    CPU time = 0 ms, elapsed time = 1 ms.

    SQL Server parse and compile time:

    CPU time = 0 ms, elapsed time = 1 ms.

    (14615 row(s) affected)

    SQL Server Execution Times:

    CPU time = 2578 ms, elapsed time = 3506 ms.[/font]

    2. If you're using a lot of temp tables in your queries, they take time to create in tempdb. I try to avoid them if possible and use CTEs instead, since they're more memory based. Plus you don't have to worry about dropping them.

    Temp tables start out in memory just like anything else. Have you got any Microsoft documentation that states CTEs are more memory based? And you don't have to worry about dropping temp tables, because they're session-scoped. Global temp tables? Maybe... just not regular temp tables. Dunno about connection pooling and how that would affect temp tables.

    http://support.microsoft.com/default.aspx?scid=kb;en-us;305977&Product=sql2k

    3. Using unnecessary ORDER BY statements will cause slow-downs. I've seen situations where people create temp tables with ORDER BY statements (usually more than one) and then write the resulting query that references the temp tables and again specify an ORDER BY. In these cases, the intermediate ORDER BYs are not necessary.
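    A minimal sketch of that point, using hypothetical table and column names. Rows in a table have no guaranteed order, so sorting on the way *into* a temp table is wasted work; only the final result set needs an ORDER BY:

```sql
-- Stage the rows without sorting them (no ORDER BY here):
SELECT colA, colB
INTO #Stage
FROM dbo.MyTable
WHERE colB > 0;

-- Sort once, at the point the results are actually returned:
SELECT colA, colB
FROM #Stage
ORDER BY colA;
```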

    That, I'll agree with. 🙂

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • As always, mileage will vary depending on your hardware.

    Some rules of thumb:

    - have your logs and data on different sets of spindles

    - if you can't, then at least use different volumes (each volume gets its own SQL Server I/O thread)

    - have as much cache as you can in your controller, whether local or SAN

    - grant the SQL instance as much RAM as you can, and set both min and max server memory

    - set Lock Pages in Memory for the service account running the SQL Server Database Engine

    - keep your tempdb files separate from your application databases, with data and log files separated too

    - have as many tempdb data files as you do physical processors/cores; don't count the logical processors provided by Hyper-Threading

    - don't keep anything on a disk that holds a pagefile - by default, your system disk (C:)

    - separate your OS, application, and data disks (data and logs)
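    Two of the items above can be sketched in T-SQL. The file path, sizes, and memory values below are placeholders, not recommendations for this server:

```sql
-- One tempdb data file per physical core: repeat ADD FILE with
-- tempdev3, tempdev4, ... until the count matches your cores.
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf',
     SIZE = 1024MB, FILEGROWTH = 256MB);

-- Pin min and max server memory (values in MB; leave headroom
-- for the OS and any other services on the box):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 1024;
EXEC sp_configure 'max server memory (MB)', 3072;
RECONFIGURE;
```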

    --
    Andrew Hatfield

  • Hello, by running this query:

    select * from sys.dm_os_performance_counters
    where counter_name = 'Buffer cache hit ratio'

    I get:

    cntr_value cntr_type

    2660 537003264

    How do I know if this is good or bad?

    Also, I read that 64-bit SQL Server always uses all available RAM by default; is that true? (I only have 100 MB of available memory.)
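    The raw cntr_value is not meaningful on its own: 'Buffer cache hit ratio' is a ratio-type counter and has to be divided by its companion base counter to get a percentage. A sketch of that calculation:

```sql
-- Join the ratio counter to its base counter and compute a percentage.
SELECT 100.0 * a.cntr_value / NULLIF(b.cntr_value, 0)
           AS BufferCacheHitRatioPct
FROM sys.dm_os_performance_counters a
JOIN sys.dm_os_performance_counters b
  ON  b.object_name  = a.object_name
  AND b.counter_name = 'Buffer cache hit ratio base'
WHERE a.counter_name = 'Buffer cache hit ratio';
-- Sustained values well above ~95% are generally considered healthy
-- for OLTP workloads, though the counter is only a rough indicator.
```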

    thank you ALL for all your help!!
