January 12, 2009 at 7:35 am
We have a daily process that transfers a database from one SQL Server 2000 to another SQL Server 2000 via a linked server. The transfer takes approximately 10-15 minutes. The properties of the database show a data space of 50 MB. This is a database that was uploaded from an MS Access database. The Access database is only 400 KB in size.
The two SQL Servers are in two different site locations and are connected via a T1 line over a WAN connection.
The application manager wants to know if there is anything that can be done with SQL Server to speed up the data transfer.
Thanks.
January 12, 2009 at 8:11 am
From 400 KB to 50 MB is quite a big expansion. How did it get imported at such a size? I take it that Access is your main collection point. Do you drop and recreate the database from Access, or do you import only the changes? What is the script used to move the data between servers?
January 12, 2009 at 8:25 am
Conan,
I used SQL Server Management Studio 2005 to import the Access table to a SQL Server 2000 database. This is a one-time import.
There is a DTS package that transforms and moves the data. That package used to connect to the Access database at the same site and only took about 1 minute to execute. Now that the data is in SQL Server at a different site, the DTS package takes approximately 10-15 minutes to execute.
January 12, 2009 at 10:43 am
The DTS package is part of the load and accounts for some of the time; it is not the most efficient way to move data between sites. There is still not enough information here to make any real guess at what is costing you so much time, other than inefficiencies in the technology you are using. If you let SQL Server determine its own table structure, that would explain some of the bloat: it may be transferring unwieldy amounts of text instead of the more streamlined data types that exist in Access. Also, how does SQL 2005 fit into this picture?
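To see where that 50 MB actually lives, one option is to report per-table space usage in the imported database. This is a sketch only: `sp_MSforeachtable` is an undocumented (but long-available) procedure on SQL Server 2000, and you would run it in the database in question.

```sql
-- Run inside the imported database on SQL Server 2000.
-- Reports reserved and used space for every user table,
-- which shows whether one bloated table explains the 50 MB.
EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?''';
```

If one or two tables dominate, that is where to look for oversized columns or a bad import mapping.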
January 13, 2009 at 5:42 am
Patrick,
Some questions and possible alternatives:
Is it possible to first copy / transfer the Access DB file (.MDB) to the target SQL Server (or to a file share it can access) and then run the DTS load process locally on that server, rather than over a linked server? This alternate process might be faster than the linked server approach: the file copy for a 1 MB or smaller MDB file may cost less than the linked server overhead you are experiencing.
When the target SQL Server DB was created by SSMS in the one-time import process, how was the DB size chosen? Could it be that SSMS chose a default size larger than you needed (and not all of it is being used yet)? Is the 50 MB DB size you mentioned the DB size allocated or the DB size in use? SSMS can display both DB size values under database properties.
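To answer the allocated-versus-used question directly, `sp_spaceused` with no arguments reports both figures for the current database. The database name below is a placeholder for your imported database.

```sql
-- "database_size" is the total allocated size (data + log);
-- "unallocated space" is the portion allocated but not yet used.
USE YourImportedDB;  -- placeholder: substitute your database name
EXEC sp_spaceused;
```

If most of the 50 MB shows up as unallocated space, the size came from a generous default file size at creation time rather than from the data itself.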
Are the database tables and columns created by the one-time import process using the desired data types and sizes, or could they have used default column types and sizes that are larger than needed and consuming added DB size? For example: using CHAR(255) instead of VARCHAR(50). Use of larger columns in the target DB may contribute to larger DB sizes and longer DTS run times.
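One way to spot columns like that is to query the `INFORMATION_SCHEMA.COLUMNS` view for wide fixed-length character columns, which imports sometimes create by default. This is a sketch; adjust the width threshold to what your data actually needs.

```sql
-- Find fixed-width character columns of 255 characters or more,
-- a common symptom of an import that used default column sizes.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('char', 'nchar')
  AND CHARACTER_MAXIMUM_LENGTH >= 255
ORDER BY TABLE_NAME, COLUMN_NAME;
```

A fixed CHAR(255) column stores 255 bytes per row regardless of content, so converting such columns to an appropriately sized VARCHAR can shrink both the database and the DTS transfer time.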
Scott R.