April 18, 2007 at 12:34 pm
Hey guys,
Long-time reader via the emails I get from this forum, first-time poster.
I have a few questions that someone knowledgeable will be able to answer:
Some background: I bought http://www.dell.com/content/products/productdetails.aspx/pvaul_md3000?c=us&cs=555&l=en&s=biz for my 2-node failover cluster.
I used to run a PowerVault 220S maxed out with 15K 36GB SCSI drives.
The new unit's slots are likewise filled to the max, with 15K 36GB SAS drives.
I have two RAID 10 arrays and one RAID 5.
Prior to buying this, I bought two books to help me study: "Microsoft Server 2000 Performance Tuning Technical Reference" and "SQL 2000 Unleashed".
Okay, here are the facts. I've upgraded the nodes and beefed them up to 8 GB of RAM with AWE enabled, which made a big difference in speed compared to before I came aboard. But there's one thing I just don't care for: slowness. So I bought the above item, with the intention of splitting the database between filegroups. Sometimes it takes forever for my employees to pull up customer accounts (forever meaning 5 to 20 seconds; rarely longer). I know what's causing it: we have an IVR/Predictive Dialer with 250 phone lines cranking away and 150 employees working with our homegrown software, all connected to that one database. After doing some Profiler work, the most consistent entry in the top 10 longest queries is the Customer Search feature.
The owner of the company wanted some bells and whistles with the customer search feature, so the SQL code is a bit fat. We've fine-tuned it many times, and we're at the point where we've tested and trimmed everything that wasn't needed while keeping everything the owner wanted. Our database is designed like so:
customer_master -- Customer name, address.
customer_phone_info -- Home phone, work phone, cell phone, relative's phone, other phone.
customer_acct_info -- Holds misc information about the customer and details on the customer's account.
I basically want to move those tables into their own filegroup. In theory I should see an increase in performance.
My questions are:
1. When I make the filegroup and move the tables into the designated file, are new records going to be saved in the new file or the old file?
2. We schedule our backup jobs for the database. Should I need to restore the backup, will it restore the filegroup as well automatically?
3. Did I waste money on this purchase by not doing enough homework on increasing performance first?
April 18, 2007 at 1:01 pm
1. When I make the filegroup and move the tables into the designated file, are new records going to be saved in the new file or the old file?
They should be saved in the new file, as long as the new filegroup has a different filegroup name and the table has been moved onto it.
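For reference, here's a minimal sketch of the usual way to move a table on SQL Server 2000; the database name, file path, size, and filegroup name below are all made up:

-- 1. Create the new filegroup and add a physical file to it
ALTER DATABASE CustomerDB ADD FILEGROUP CustomerData

ALTER DATABASE CustomerDB
ADD FILE (NAME = CustomerData1, FILENAME = 'E:\SQLData\CustomerData1.ndf', SIZE = 10GB)
TO FILEGROUP CustomerData

-- 2. Rebuild the clustered index on the new filegroup; because the table
--    and its clustered index are inseparable, this moves the table (and
--    all future inserts) along with it
CREATE UNIQUE CLUSTERED INDEX [x_customer_acct]
ON [dbo].[customer_master]([customer_acct])
WITH FILLFACTOR = 90, DROP_EXISTING
ON CustomerData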
2. We schedule our backup jobs for the database. Should I need to restore the backup, will it restore the filegroup as well automatically?
Depends on how you define your backups. If you back up the whole database, then yes, it should, although I would always suggest testing your first backup to be sure you have configured it to give you the desired results.
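For example (database and path names made up), a plain full backup picks up every file in every filegroup, and you can also back a filegroup up by itself:

-- Full backup: includes all files in all filegroups
BACKUP DATABASE CustomerDB TO DISK = 'E:\Backups\CustomerDB_Full.bak'

-- Optional: back up a single filegroup on its own
BACKUP DATABASE CustomerDB FILEGROUP = 'CustomerData'
TO DISK = 'E:\Backups\CustomerDB_CustomerData.bak'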
3. Did I waste money on this purchase by not doing enough homework on increasing performance first?
I don't know, but you may find that the current server, even with the changes you want to make, still doesn't perform any better if the issue is memory, the network card, or some other piece of hardware. It could also be a design issue you haven't accounted for. I would suggest walking through the steps that seem slow and accounting for anything that might be slow specifically. Consider whether the table(s) are large and whether some of the data could be archived and deleted from the primary table that queries hit. Many other things might help; there's just no definitive answer unless you can determine the exact reason. That said, 20 seconds under that kind of load isn't bad, and the underlying hard drive I/O bandwidth could well be your bottleneck.
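On the archiving idea, the usual pattern is something like the following; the archive table, status column, and cutoff here are all hypothetical:

-- Move long-inactive accounts to an archive table, then delete them
-- from the live table (hypothetical schema; test before running live)
BEGIN TRANSACTION
INSERT INTO customer_master_archive
SELECT * FROM customer_master
WHERE account_status = 'CLOSED'
  AND last_activity_date < DATEADD(yy, -2, GETDATE())

DELETE FROM customer_master
WHERE account_status = 'CLOSED'
  AND last_activity_date < DATEADD(yy, -2, GETDATE())
COMMIT TRANSACTION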
April 18, 2007 at 1:05 pm
I've run performance counters, and I know that on the old PowerVault 220S my hard drives were bottlenecks. I'm hoping to fix that with this new upgrade.
The only other problem my database has, from what I've read, is that my cache hit ratio is 61%.
April 18, 2007 at 1:32 pm
How do the performance counters stack up on the new server? Do the disks show signs of increased performance?
A buffer cache hit ratio of 61% for a prolonged period would indicate a memory bottleneck.
How much of the 8 GB is allocated to SQL Server? Any other applications running on the box?
Aside from that, what state are the indexes in? Are the statistics up to date? I'd check these and also run the queries to view their execution plans.
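On SQL 2000 you can read those counters straight from sysperfinfo, e.g.:

-- The hit ratio is reported as two counters: divide 'Buffer cache hit
-- ratio' by 'Buffer cache hit ratio base' (and multiply by 100)
SELECT counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name = 'SQLServer:Buffer Manager'
  AND counter_name IN ('Buffer cache hit ratio',
                       'Buffer cache hit ratio base',
                       'Page life expectancy')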
April 18, 2007 at 1:39 pm
I take back the comment about buffer cache hit ratio and memory..... I was reading things that don't exist in your posting!
The Cache Hit Ratio counter indicates how often cached query plans are being reused rather than recompiled. Are all of your application queries in stored procedures? Do you force a lot of recompiles?
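One way to see what's being reused on SQL 2000 is syscacheobjects; a pile of ad hoc entries with usecounts of 1 suggests plans that are compiled once and thrown away:

-- Plan cache contents; high usecounts = good reuse
SELECT cacheobjtype, objtype, usecounts, LEFT(sql, 100) AS sql_text
FROM master.dbo.syscacheobjects
ORDER BY usecounts DESC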
April 19, 2007 at 8:19 am
1) I doubt very much that your customer search stuff is as optimized as you think it is, and that there isn't a good bit more performance to be gained there, WITHOUT filegroups. I recommend looking at lock waits, blocking, and wait stats (a quick sketch follows below). Internal fragmentation?
2) If you DO go to filegroups, put your customer-related DATA on one filegroup and your customer-related INDEXES on a separate filegroup. This will allow for much better asynchronous reads/seeks.
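For the checks in #1, SQL 2000 gives you a few quick tools:

-- Cumulative wait statistics since the counters were last cleared
DBCC SQLPERF(WAITSTATS)

-- Current sessions; a value in the BlkBy column means that SPID is blocked
EXEC sp_who2

-- Lock detail for the whole server
EXEC sp_lock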
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
April 19, 2007 at 10:31 am
After viewing some performance counters, our cache hit ratio is now at 71%.
The way the database is designed is:
customer_acct_info
customer_master
customer_phone_info
There are various other tables like the above, but they are not used as often as these three. All tables have an account number field in common. Like:
SELECT customer_acct_info.customer_acct, home_area_code, home_phone
FROM
customer_acct_info, customer_phone_info
WHERE
customer_acct_info.customer_acct = customer_phone_info.customer_acct
The account field in all our tables has one major index in common:
CREATE UNIQUE CLUSTERED INDEX [x_customer_acct] ON [dbo].[customer_acct_info]([customer_acct]) WITH FILLFACTOR = 90
CREATE UNIQUE CLUSTERED INDEX [x_customer_acct] ON [dbo].[customer_phone_info]([customer_acct]) WITH FILLFACTOR = 90
etc.
Each individual table also has indexes on the fields that are used for searching, like:
CREATE INDEX [x_home_area_code] ON [dbo].[customer_phone_info]([home_area_code]) WITH FILLFACTOR = 90 ON [PRIMARY]
CREATE INDEX [x_home_phone] ON [dbo].[customer_phone_info]([home_phone]) WITH FILLFACTOR = 90 ON [PRIMARY]
CREATE INDEX [x_work_area_code] ON [dbo].[customer_phone_info]([work_area_code]) WITH FILLFACTOR = 90 ON [PRIMARY]
CREATE INDEX [x_work_phone] ON [dbo].[customer_phone_info]([work_phone]) WITH FILLFACTOR = 90 ON [PRIMARY]
CREATE INDEX [x_cell_area_code] ON [dbo].[customer_phone_info]([cell_area_code]) WITH FILLFACTOR = 90 ON [PRIMARY]
CREATE INDEX [x_cell_phone] ON [dbo].[customer_phone_info]([cell_phone]) WITH FILLFACTOR = 90 ON [PRIMARY]
CREATE INDEX [x_relative_area_code] ON [dbo].[customer_phone_info]([relative_area_code]) WITH FILLFACTOR = 90 ON [PRIMARY]
CREATE INDEX [x_relative_phone] ON [dbo].[customer_phone_info]([relative_phone]) WITH FILLFACTOR = 90 ON [PRIMARY]
CREATE INDEX [x_other_area_code] ON [dbo].[customer_phone_info]([other_area_code]) WITH FILLFACTOR = 90 ON [PRIMARY]
CREATE INDEX [x_other_phone] ON [dbo].[customer_phone_info]([other_phone]) WITH FILLFACTOR = 90 ON [PRIMARY]
Could you give more detail on your #2 solution? I'm not a pro at filegroups; I'm currently setting up a test server to try filegroups before going live with a change. I get what you're saying; I'm just unsure of how to go about doing it. You're saying to move all the data to one filegroup and put the indexes in a fresh one. How do you split a table between two filegroups based on indexing? Or am I not getting a grasp of what you're saying?
April 19, 2007 at 11:36 am
1) A Cache Hit Ratio of 71 is REALLY bad. My guess is that either you have WAY too much data for your server's RAM size or (more likely) you are doing table scans, and that is flushing useful data out of cache. Have you checked the Page Life Expectancy counter? Come to think of it, the CHR being so low could simply be due to index pages being brought into RAM, since you have a LOT of indexes and you seem to be doing searches on a number of them.
2) What datatype is your customer_acct field? I sure hope it isn't something like varchar(20), since the clustered index field(s) get carried along on EVERY non-clustered index and would force all those indexes to take lots of extra pages.
3) Are you doing many table scans?
4) Be aware that having sooooo many indexes carries a significant performance overhead for inserts/updates/deletes. They can also lead to read blocking as they are maintained . . .
5) . . . UNLESS you have NOLOCK hints on all of your search queries. You DO, right?!?!
6) Rewrite your joins to be ANSI-compliant. SQL Server will stop supporting your current join syntax at some point in the future.
7) Using Books Online, you can find out how to place data and indexes in their own filegroups. Doing this improves I/O performance because both can be read/written at the same time by the OS.
8) Do you regularly update the statistics? Did you check for internal fragmentation of your clustered and non-clustered indexes? See DBCC SHOWCONTIG in BOL. (A combined sketch for #5-#8 follows.)
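To make #5-#8 concrete, here's a sketch against the tables you posted; the [INDEXES] filegroup name is made up, and NOLOCK allows dirty reads, so use it knowingly:

-- #6 (with #5): the same join rewritten ANSI-style, with NOLOCK hints
SELECT a.customer_acct, p.home_area_code, p.home_phone
FROM customer_acct_info a WITH (NOLOCK)
INNER JOIN customer_phone_info p WITH (NOLOCK)
    ON a.customer_acct = p.customer_acct

-- #7: rebuild a search index onto its own filegroup
CREATE INDEX [x_home_phone]
ON [dbo].[customer_phone_info]([home_phone])
WITH FILLFACTOR = 90, DROP_EXISTING
ON [INDEXES]

-- #8: fragmentation check (watch scan density and logical fragmentation),
--     then refresh statistics
DBCC SHOWCONTIG ('customer_phone_info') WITH ALL_INDEXES
UPDATE STATISTICS customer_phone_info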
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
April 20, 2007 at 9:27 am
A table exists in only one filegroup. The table and the clustered index are inseparable, so creating a clustered index on a different filegroup will move the entire table to that filegroup.
An index also exists in only one filegroup, but it doesn't have to be the same filegroup as its table. If you create multiple filegroups with the file(s) for each filegroup on a separate physical drive, you can spread out the I/O involved in joining the tables. This requires a lot of micromanagement, but it can improve performance.
Standard backup and restore commands will back up all filegroups and restore data to the correct filegroups. You have the option of backing up and restoring specific filegroups individually, but standard backup maintenance plans should not require any changes just because you have added filegroups.