How to Move an Oracle Database the Old Way – “Quickly” Pt 1

Let’s say you’re moving an Oracle database from a legacy RISC server to Linux on Intel architecture, and you’ve been asked to improve the performance of the application along with the migration. That part is easy: the move itself will likely accelerate the application simply by hosting it on a faster platform. But you know you can get more from the database by using the migration as an opportunity to reorganize its structure. There is also a limited outage window in which to execute the migration. To reorganize the database you’ve chosen to use the Oracle export and import tools (exp/imp), or Data Pump (expdp/impdp) if you’re moving from a newer Oracle release. These tools, you know, will give you the best opportunity to reorganize the database.

So, how are you going to do this on the target server? Let’s go through the steps:

  1. Configure the hardware, memory, BIOS, operating system (sysctl.conf?), storage (HBA Driver anyone?), and network configuration.
  2. Install Oracle Database Enterprise Edition and create a sample database.
  3. Document the tablespaces and data file sizes of the production database (query the data dictionary).
  4. Start the tablespace creation process on the new database to replicate the tablespaces of the production database. (See below for a speed up idea.)
  5. While these are running:
    1. Export the source database from a quiesced Q/A database. (ROWS=N)
    2. Create an INDEX creation script by importing the export file you just made with the parameter INDEXFILE=‘filename’ (FULL=Y)
    3. Edit the ‘filename’ file so that each INDEX is built in Parallel Query mode
    4. Generate a file of the constraints for each user
      • Edit this file and make a copy, one to disable the constraints, the other to enable the constraints
  6. Configure a private network (the fastest available) between the source server and the target server. Allow for remote procedure calls.
  7. Create a series of scripts to export the source database table by table, starting with the largest table first. Bundle the smaller tables into separate export files. Create import scripts that import the same tables as the export scripts (ROWS=Y, INDEXES=N, CONSTRAINTS=N); you’re just moving the data.
  8. Put these all into a shell script where they are called all at the same time. Be sure to have the output from each script sent to a file.
  9. Run an import to pre-create the various OBJECTS or structures of the application in the database but without ROWs, INDEXes, or CONSTRAINTS.
  10. Start the data migration.
    1. Shut down the application and start the legacy database in restricted mode (STARTUP RESTRICT) so only the migration can connect
    2. Disable the TRIGGERS in the target database
    3. Fire off the script to export the data from the legacy database to a named pipe and import into the target from the same named pipe
    4. Once the row data has been migrated, start the script to create the INDEXes in the target database
    5. Run the script to ENABLE the CONSTRAINTS in the target database
    6. Run the script to ENABLE the TRIGGERS
  11. Audit the database to ascertain that all OBJECTS in the Legacy database are in the target database and that each TABLE has the same number of ROWS in both databases.
  12. Open the new database to production (or mirror this with the legacy database).
  13. Disable the private network used for the migration of the data.
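The constraint scripts in step 5.4 can be generated mechanically rather than edited by hand. Here is a minimal sketch, assuming you have spooled owner/table/constraint triples from DBA_CONSTRAINTS into a flat file; the schema and constraint names below are made-up examples:

```shell
#!/bin/sh
# Sketch of step 5.4: build a DISABLE script and a mirror ENABLE script
# from a flat list of constraints. The input would normally be spooled
# from DBA_CONSTRAINTS; the names here are placeholders.
cat <<'EOF' > constraints.lst
APP ORDERS ORDERS_PK
APP ORDER_ITEMS ORDER_ITEMS_FK1
EOF

awk '{ printf "ALTER TABLE %s.%s DISABLE CONSTRAINT %s;\n", $1, $2, $3 }' \
    constraints.lst > disable_constraints.sql

# The enable copy is the same file with one word changed.
sed 's/ DISABLE CONSTRAINT / ENABLE CONSTRAINT /' \
    disable_constraints.sql > enable_constraints.sql

cat disable_constraints.sql enable_constraints.sql
```

Disable foreign keys before primary keys when you run the result; generating from a single list keeps the two scripts guaranteed mirrors of each other.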
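The heart of step 10.3 is a named pipe connecting the export and the import so the dump file never lands on disk. Below is a minimal, runnable sketch of the pattern, with cat and printf standing in for the Oracle binaries so it can be tried anywhere; the comment shows roughly what the real exp/imp pair would look like (credentials and table names are placeholders):

```shell
#!/bin/sh
# Sketch of step 10.3: stream an export straight into an import through
# a named pipe. `cat` and `printf` stand in for exp/imp here; the real
# pair would look something like:
#   imp system/pw file=/tmp/mig_pipe fromuser=APP touser=APP \
#       rows=y indexes=n constraints=n commit=y &
#   exp system/pw file=/tmp/mig_pipe tables=BIG_TABLE rows=y
PIPE=/tmp/mig_pipe
rm -f "$PIPE"
mkfifo "$PIPE"

# Start the reader (the imp side) first, in the background.
cat "$PIPE" > /tmp/mig_out.dat &
READER=$!

# The writer (the exp side) feeds the pipe; it blocks until consumed.
printf 'table data\n' > "$PIPE"

wait "$READER"
cat /tmp/mig_out.dat
rm -f "$PIPE"
```

Always launch the reading side first; a writer opening a FIFO with no reader simply blocks, which is also why each table’s exp/imp pair needs its own pipe.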

First create a database with just the minimal tablespaces (SYSTEM, SYSAUX, UNDO/ROLLBACK, TEMP, USERS, etc.), but make each tablespace a size optimal for the application. Then create the application tablespaces, laid out on storage the way you envisioned initially. To get the list of the application-specific tablespaces and their sizes, use the following script:

Col tablespace_name for a45
Col size_in_bytes for 999,999,999,999
Spool tbs_size.out
Select tablespace_name, sum(bytes) as size_in_bytes
  From dba_data_files
 Group by tablespace_name
 Order by tablespace_name;
Spool off

For these custom tablespaces there is a trick to build them in parallel. While you can’t CREATE one tablespace in parallel with another, you can add data files to existing tablespaces in parallel. For example, create each tablespace with a small initial data file, say 100MB. Then do an ADD DATAFILE for each tablespace at a respectable size. You can run as many of the ADD DATAFILE DDL commands in parallel as your server and storage can handle. (This activity will also give you a good opportunity to measure the maximum sustained I/O to the storage.)
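One way to script the parallel ADD DATAFILE trick is to generate one small SQL script per tablespace, then fire each off in its own background sqlplus session. A sketch, in which the tablespace names, data file path, and 20G size are all placeholders you would replace:

```shell
#!/bin/sh
# Sketch: one ALTER TABLESPACE ... ADD DATAFILE script per tablespace,
# each runnable in its own background sqlplus session so the file
# creation happens in parallel. Names, paths, and sizes are examples.
for TS in APP_DATA APP_INDEX APP_LOB; do
  cat <<EOF > add_${TS}.sql
ALTER TABLESPACE ${TS}
  ADD DATAFILE '/u02/oradata/PROD/${TS}_02.dbf' SIZE 20G;
EXIT;
EOF
  # In the real run, launch each one in the background:
  # nohup sqlplus -s / as sysdba @add_${TS}.sql > add_${TS}.log 2>&1 &
done
ls add_*.sql
```

Watching the elapsed time of each background session as you raise the degree of parallelism is the I/O measurement opportunity mentioned above.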

While the data files are being added is a good time to generate the INDEX creation DDL. To get the CREATE INDEX text, run import with the INDEXFILE option. Edit the DDL to put the INDEXes in the tablespaces you want them in, with EXTENT sizes that are optimal. The same file also holds the CREATE TABLE statements, commented out with REM; uncomment those and run the script to create the empty tables. Now you have completed the space allocation for the tables and eliminated this time-consuming process from the migration schedule.
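Both edits to the INDEXFILE output, uncommenting the REM’d CREATE TABLE DDL and tagging each index for a parallel build, can be done with sed. A sketch against a toy two-line INDEXFILE (real files are much longer, and you should review the result by hand; the degree of 8 is an assumed value):

```shell
#!/bin/sh
# Sketch: post-process an imp INDEXFILE. imp writes the CREATE TABLE
# DDL commented out with REM and the CREATE INDEX statements live.
# A toy sample stands in for the real spooled file here.
cat <<'EOF' > app_indexfile.sql
REM  CREATE TABLE "APP"."ORDERS" ("ORDER_ID" NUMBER) ;
CREATE INDEX "APP"."ORDERS_IDX" ON "ORDERS" ("ORDER_ID") ;
EOF

# 1) Uncomment the table DDL so it pre-creates the empty tables.
# 2) Ask for a parallel, no-redo index build.
sed -e 's/^REM  //' \
    -e 's/^\(CREATE INDEX .*\) ;$/\1 PARALLEL 8 NOLOGGING ;/' \
    app_indexfile.sql > app_precreate.sql

cat app_precreate.sql
```

Remember to set the indexes back to NOPARALLEL (and LOGGING, if appropriate) after the migration, or the optimizer will keep assuming parallel plans against them.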

In part two I will continue the steps to get this move complete, including a snippet of a Bash shell script to run the export/import processes in parallel.

(I'm just giving you one thread in the sample; you'll have to duplicate the edited line for your particular circumstances.)