January 25, 2005 at 8:57 am
Currently one of my system databases averages 3,000,000 IO transactions per day. The client accesses the database through a web browser, and the application code is mediocre - it does not take advantage of indexes, etc.
1. Is 3 million IO transactions per day a lot?
2. Are there any recommended or suggested tuning scripts or hardware adjustments that should be made?
The current hardware is well under-spec'd: dual 930 MHz P3 processors, 1 GB RAM, mirrored drives. The server is a 1U unit that can hold only two drives, so each "mirror" has two partitions, one for the OS and another for the SQL Server program files and database files. I am putting together a project to upgrade the hardware, and any recommended suggestions would be great.
Thanks
January 25, 2005 at 4:07 pm
1. Is 3 million IO transactions per day a lot?
It depends on the hardware. I am assuming you are not including reads, because if you are, that number means little on its own. What is the size of the DB?
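If you are not sure of the size, sp_spaceused will report it (run with no arguments for the whole database; the table name below is just a placeholder):

-- current database size and space used
EXEC sp_spaceused
-- or for a single table:
EXEC sp_spaceused 'dbo.Orders'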
2. Are there any recommended or suggested tuning scripts or hardware adjustments that should be made?
- Separating the data and log files will give you the first boost (see the sketch after this list).
- When you have a very write-intensive application, the RAID options (or a SAN, if you have the money) you use are critical.
- The queries' ability to exploit the right indexes becomes essential in a large database.
- 1 GB of RAM sounds like very little for a large DB.
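A minimal sketch of the data/log split on SQL Server 2000, assuming a database named MyDatabase and new drive letters D: and E: (all placeholders): detach the database, move the .ldf at the OS level, then re-attach.

-- show where the files currently live (run in the database in question)
EXEC sp_helpfile

-- detach, move MyDatabase_log.ldf to the new drive at the OS level, then re-attach
EXEC sp_detach_db 'MyDatabase'
EXEC sp_attach_db 'MyDatabase',
    'D:\SQLData\MyDatabase.mdf',
    'E:\SQLLogs\MyDatabase_log.ldf'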
For more info, this is a place to start - though it was written for SQL 7, most of it still applies to 2000.
HTH
* Noel
January 26, 2005 at 6:26 am
I suggest you first take a look at the database design and normalize it. Then recheck the application and correct for known performance issues. Then spec out what you expect to occur on the system as far as reads/writes, total size of data, and the number and size of indexes.
Beyond that, look at every area where you can get better performance.
If you really have 3 million write transactions a day, you might consider a table without a clustered index (a heap), and if writes will far outnumber reads, you might opt for the cost of RAID 10 over RAID 5.
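For example, a write-heavy log-style table can be left as a heap by declaring its primary key nonclustered (the table below is hypothetical):

CREATE TABLE dbo.ActivityLog (
    LogID int IDENTITY(1,1) NOT NULL,
    ActivityDate datetime NOT NULL DEFAULT GETDATE(),
    Detail varchar(500) NULL,
    -- NONCLUSTERED keeps the table a heap, so inserts avoid clustered-index maintenance
    CONSTRAINT PK_ActivityLog PRIMARY KEY NONCLUSTERED (LogID)
)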
You get better performance when you can place tempdb on a drive separate from the data and log files. You get further gains when you can place the logs on a drive separate from the data files. And you can get read performance gains when you place indexes in a separate filegroup on a separate drive from the table itself (not counting the clustered index, as it is part of the table); a sketch of that last idea follows.
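A rough sketch of the index filegroup approach, with the database name, drive letter, file size, and index all assumed for illustration:

-- add a filegroup backed by a file on a separate physical drive
ALTER DATABASE MyDatabase ADD FILEGROUP INDEXES
ALTER DATABASE MyDatabase
    ADD FILE (NAME = 'MyDatabase_ix',
              FILENAME = 'F:\SQLIndexes\MyDatabase_ix.ndf',
              SIZE = 500MB)
    TO FILEGROUP INDEXES

-- build a nonclustered index on that filegroup
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    ON INDEXES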
You just need to be really descriptive about the amount and type of work you will be dealing with; it might make sense to use multiple servers with replication, or even server clusters.
There are just so many options you can look at, but you need to start with the design to fix any known issues and make the database more manageable for fine tuning later. Then spec out what you think the user interaction will be: load sizes, number of transactions, index utilization for queries, etc. And then look at hardware: drives, array types, layout, RAM needs, CPU needs, and whatever else will be on the same box if anything besides SQL Server will be running.
Even with the best hardware, poor design will be your biggest enemy for performance.