Moving to new servers

  • We are moving our databases from some virtual servers to physical servers. There are numerous differences beyond virtual vs. physical boxes, but what we want to do is ensure we have adequate resources on the new servers.

    I was thinking of using SQL Profiler & Perfmon while restoring database transaction logs on the new server. Will this give me a clue regarding the overall performance of the new server?

    I have 4-5 databases; can I restore the transaction logs on all of them at the same time for an even better picture?
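
    For example, something along these lines is what I had in mind for each log restore (database and file names here are just placeholders), run while Perfmon and Profiler are capturing:

        -- Minimal sketch: apply a log backup to a database restored WITH NORECOVERY,
        -- so further log backups can still be applied afterwards.
        RESTORE LOG Sales
        FROM DISK = N'D:\Backups\Sales_log_001.trn'
        WITH NORECOVERY,   -- keep the database in the restoring state
             STATS = 10;   -- progress message every 10 percent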

    Any suggestions are welcome.

    Thanks

  • I guess it will give some clue, but maybe not enough, unless these logs are large. A better test may be restoring the full database backups. Run a few of those in parallel and then watch how saturated the I/O stack gets. Then again, if this does not reflect your workload, the test is not that useful.
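
    Something like this in a few parallel sessions would do (the database and file names are just an example), while you watch counters such as Avg. Disk sec/Read and Avg. Disk sec/Write in Perfmon:

        -- Sketch of a full restore with hypothetical file names; run several
        -- copies from separate sessions to load the I/O stack.
        RESTORE DATABASE SalesCopy1
        FROM DISK = N'D:\Backups\Sales.bak'
        WITH MOVE N'Sales_Data' TO N'E:\Data\SalesCopy1.mdf',
             MOVE N'Sales_Log'  TO N'F:\Log\SalesCopy1.ldf',
             STATS = 5;   -- progress message every 5 percent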

    So the servers are physical. Where are the disks? Directly attached or on a SAN?

    Erland Sommarskog, SQL Server MVP, www.sommarskog.se

  • You could try SQLIO or SQLIOSIM from MS (http://www.microsoft.com/en-us/download/details.aspx?id=20163) to test the I/O throughput on the servers; there may be better tools out there, though.

    I also find the PAL tool (http://pal.codeplex.com/) very useful, even if the report is huge. You would need to have some "live"-like activity on the server to get accurate results.

  • Thanks for your replies.

    The disks on the new servers are directly attached - not on a SAN. The current configuration has them on a SAN.

    I'll try monitoring while restoring the full backups.

    I've seen the PAL tool before & will check on it again.

    I agree that the best solution is a test that best represents a live environment; one of our developers may have to become involved for that to happen.

  • The SQLIO utilities and PAL are a good way to evaluate the performance.

    You can also look at capturing a trace from your production system and then replaying it in your new environment; we do this regularly in my organisation, using a utility called OSTRESS.

    In summary, we capture the trace in production (a server-side trace, not Profiler) and produce stats from it such as the following (a sketch of the aggregation query follows the list):

    1. Procedures called

    2. Number of times each procedure is called

    3. MIN \ MAX \ AVERAGE Duration for the procedure

    4. MIN \ MAX \ AVERAGE CPU for the procedure

    5. MIN \ MAX \ AVERAGE Reads for the procedure

    6. MIN \ MAX \ AVERAGE Writes for the procedure
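
    As a rough illustration, those stats can be produced from the trace file with an aggregation along these lines (the trace file path is a placeholder, and Duration is reported in microseconds on SQL 2005 and later):

        -- Sketch: summarise a server-side trace file by procedure name.
        SELECT  ObjectName,
                COUNT(*)      AS Executions,
                MIN(Duration) AS MinDuration, MAX(Duration) AS MaxDuration, AVG(Duration) AS AvgDuration,
                MIN(CPU)      AS MinCPU,      MAX(CPU)      AS MaxCPU,      AVG(CPU)      AS AvgCPU,
                MIN(Reads)    AS MinReads,    MAX(Reads)    AS MaxReads,    AVG(Reads)    AS AvgReads,
                MIN(Writes)   AS MinWrites,   MAX(Writes)   AS MaxWrites,   AVG(Writes)   AS AvgWrites
        FROM    sys.fn_trace_gettable(N'D:\Traces\prod_capture.trc', DEFAULT)
        WHERE   ObjectName IS NOT NULL   -- keep procedure-level events only
        GROUP BY ObjectName
        ORDER BY SUM(Duration) DESC;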

    Then replay the trace using OSTRESS on the other server and trace it at the same time, capturing the execution. Whilst the replay is in progress we also capture performance metrics using Perfmon so we can provide stats on those as well.

    We then summarise the results of that trace and compare them. This allows us to quantify whether the production workload performs better or worse in the other environment.

    We also test running the replay from multiple hosts to try to max out the server and see how it will cope with an increased workload.

    MCITP SQL 2005, MCSA SQL 2012

  • Thanks for the suggestion. I had not heard of this utility; I will research it.

    Thanks

  • You should run a complete suite of SQLIO tests on the current and new servers and compare the results: sequential read, sequential write, random read, and random write at a variety of block sizes, such as 4, 8, 16, 32, 64, and 128 KB. Use large test files, such as 50 GB, to overcome the impact of controller caching. Run with a variety of worker processes to make sure you can push the average response time above 40 ms and see the true limit.

    Another good IO test is to restore your production database to the new server, run DBCC CHECKDB, and compare that to how long it takes to run on the current server. It should be faster, and if it isn't, you need to start looking at the disk IO.
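
    A simple way to time it the same way on both servers (the database name is just an example):

        -- Sketch: run CHECKDB and report the elapsed time in seconds.
        DECLARE @start datetime;
        SET @start = GETDATE();

        DBCC CHECKDB (N'Sales') WITH NO_INFOMSGS, ALL_ERRORMSGS;

        SELECT DATEDIFF(SECOND, @start, GETDATE()) AS CheckDbSeconds;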
