January 22, 2023 at 5:17 pm
Hi friends,
I'm migrating data from Oracle to SQL Server 2019 using SSMA for Oracle. I faced a lot of hiccups in the process; the data is finally migrating, but it is very slow (the source DB is 500 GB). The migration is at 80%, but we are running out of memory on the target server.
There are a few big tables halfway through the data load. Is there a way to resume the data migration from where it stopped? I'm aware of using the extended data migration settings to prevent SSMA from truncating the tables, but I wanted to check whether we can append data to tables without duplicating rows (on tables without unique keys).
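For example, this is the kind of workaround I have in mind (just a rough sketch; the table and column names are made up). The idea is to stage the remaining source rows first, then append only the rows that are not already in the target. EXCEPT compares entire rows, so it doesn't need a unique key, although it can't preserve legitimate duplicate rows from the source:

    -- T-SQL sketch, assuming the remaining rows were loaded into a staging table
    INSERT INTO dbo.BigTable (col1, col2, col3)
    SELECT col1, col2, col3
    FROM dbo.BigTable_stage
    EXCEPT
    SELECT col1, col2, col3
    FROM dbo.BigTable;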
Please help. Thanks in advance!
January 23, 2023 at 4:47 pm
Here's an idea. Create multiple views that break the data in the large tables into smaller pieces. I think you can map these views to the destination table.
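Roughly like this on the Oracle side (just a sketch; the table and column names are invented). Each view covers one slice of the big table, and each view becomes a smaller unit of work for the migration:

    CREATE VIEW big_table_part1 AS
      SELECT * FROM big_table WHERE id < 10000000;
    CREATE VIEW big_table_part2 AS
      SELECT * FROM big_table WHERE id >= 10000000 AND id < 20000000;
    CREATE VIEW big_table_part3 AS
      SELECT * FROM big_table WHERE id >= 20000000;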
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
February 1, 2023 at 2:40 pm
I had the same problem years ago, and I developed a very specific application for it.
The current version of the application only copies tables from SQL Server to SQL Server, but originally it copied from Oracle to SQL Server.
After the migration, the program was modified to copy data from production to staging or development, since it was an ongoing development project and part of the data had to be copied constantly.
The idea is to copy tables in parallel so that the total copy time is reduced as much as possible.
The application drops the constraints and indexes, copies the data, and then recreates the constraints and indexes. The real bottleneck is recreating the indexes. On small tables there is usually no problem, but on large tables you can choose not to drop the indexes that back primary keys; that way, even if the data copy takes longer, you save time when rebuilding the indexes.
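On the SQL Server side, doing the equivalent by hand looks roughly like this (a sketch; the index and table names are invented). Note that you should not disable the clustered index / primary key, because a disabled clustered index makes the whole table unreadable:

    -- T-SQL sketch: disable nonclustered indexes before the load
    ALTER INDEX IX_BigTable_Col2 ON dbo.BigTable DISABLE;
    ALTER INDEX IX_BigTable_Col3 ON dbo.BigTable DISABLE;

    -- ... bulk load the data here ...

    -- rebuild once, after all rows are in
    ALTER INDEX IX_BigTable_Col2 ON dbo.BigTable REBUILD;
    ALTER INDEX IX_BigTable_Col3 ON dbo.BigTable REBUILD;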
Large tables can be segmented with conditions placed in a WHERE clause. However, those conditions should match some index; otherwise each segment may require a full scan of the table, and there could be problems.
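For example, you can compute balanced segment boundaries from the data itself and then use them in the WHERE clause of each parallel copy (a sketch with invented names; it assumes an index on id and costs one scan of the table):

    -- split the table into 8 ranges with roughly the same number of rows
    SELECT bucket, MIN(id) AS from_id, MAX(id) AS to_id
    FROM (
        SELECT id, NTILE(8) OVER (ORDER BY id) AS bucket
        FROM big_table
    ) t
    GROUP BY bucket
    ORDER BY bucket;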
When time allows, I will try to release a new version where the source is Oracle.
If you want ideas, you can get them from:
https://www.inforcustom.duckdns.org/en-us/documentacion/insertbulkbestia/ibbintroduccion
Sorry for my English; it's from Google Translate.