
Oracle 11g Data Pump EXCLUDE Parameter Caveat When the FULL=Y Parameter is Used

During the course of my work, I migrate many databases from one host to another.  Occasionally, DBAs must create backup tables when diagnosing an issue or implementing a change, and more often than not these objects never get dropped.  Usually this is no problem, except when you need to move a database from one host to another and you do not want to move these old backup objects.  In my latest migration, I found over 300GB of backup tables, which would certainly lengthen the time needed to complete the data pump.  After obtaining approval that these tables did not need to be migrated, I set out to determine how to exclude them from the data pump.  At first, I thought I had it all figured out: export everything and exclude only the objects I did not want to bring over.  At a high level, the steps looked like this:

  1. Create a work table in the database which contains the list of excluded tables.
  2. Load the reference data which will be queried by data pump for the exclude list.
  3. Create the data pump parameter file.
  4. Execute the data pump export.
Exclusion table DDL (this simplifies the parameter file):
CREATE TABLE GGATE.EXCLUDE_TABLE (
TABLE_NAME VARCHAR2(30),
OBSOLETE_FLG VARCHAR2(1)
)
TABLESPACE USERS;

INSERT INTO GGATE.EXCLUDE_TABLE VALUES ('TEMP_900008090_08122014', 'Y');
COMMIT;
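
Before running the export, it can be worth sanity-checking how much space the excluded tables actually consume.  A minimal example, assuming you can query DBA_SEGMENTS (add an owner filter if the table names are not unique across schemas):

SELECT ROUND(SUM(s.bytes)/1024/1024/1024, 1) AS excluded_gb
  FROM dba_segments s
 WHERE s.segment_name IN (SELECT table_name
                            FROM ggate.exclude_table
                           WHERE obsolete_flg = 'Y');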

Data pump parameter file:

USERID='/ as sysdba'
DUMPFILE=EXP_ASM_DIR:METADATA_EXP.DAT LOGFILE=EXP_LOG_DIR:METADATA_EXP.LOG
FULL=Y
CONTENT=METADATA_ONLY
REUSE_DUMPFILES=Y
JOB_NAME=EXP_METADATA_JOB
EXCLUDE=STATISTICS
EXCLUDE=TABLE:"IN (SELECT TABLE_NAME FROM GGATE.EXCLUDE_TABLE WHERE OBSOLETE_FLG='Y')"
CLUSTER=N
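
Once the parameter file is saved (I will call it metadata_exp.par here purely for illustration), step 4 is simply:

$ expdp parfile=metadata_exp.par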

You will have all objects except for the few tables which you excluded.  Right?  Well, almost.  You have everything you asked for, except that the data pump left out ALL of the following object paths:

DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER

After many hours of research, I found that this is a bug documented in Oracle Doc ID 1491557.1, and it affects nearly all versions of 11g.  As a workaround, you can do one of the following:

  1. Do another export of metadata without exclusions to get these object paths, then import them one by one for each schema (see the sketch after this list).
  2. Do a FULL=Y export with no exclusions and add the EXCLUDE parameters to the impdp parameter file like this:
    USERID='/ as sysdba'
    DUMPFILE=IMP_ASM_DIR:METADATA_EXP.DAT
    LOGFILE=IMP_LOG_DIR:METADATA_IMP.LOG
    CONTENT=METADATA_ONLY
    JOB_NAME=IMP_METADATA_JOB
    CLUSTER=N
    EXCLUDE=TABLE:"IN (SELECT TABLE_NAME FROM GGATE.EXCLUDE_TABLE)"
    EXCLUDE=STATISTICS
    
  3. Apply patch 14095143 and the EXCLUDE will work properly.
  4. Upgrade to 11.2.0.4.
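
For option 1, the idea is to take a second metadata-only export without the EXCLUDE and then import only the object paths that were skipped.  A rough sketch of such an impdp parameter file, with the dump/log file names and schema name as placeholders:

USERID='/ as sysdba'
DUMPFILE=IMP_ASM_DIR:METADATA_FULL_EXP.DAT
LOGFILE=IMP_LOG_DIR:METADATA_OBJ_IMP.LOG
CONTENT=METADATA_ONLY
SCHEMAS=SCOTT
INCLUDE=INDEX
INCLUDE=CONSTRAINT
INCLUDE=REF_CONSTRAINT
INCLUDE=TRIGGER
CLUSTER=N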

Have fun!

To see exactly where the data pump paths were left out, feel free to check out a better representation here:
datapump export compare

Set Up and Use of ACFS for GoldenGate

Recently, I found myself in a situation where I needed a Linux mount point with sufficient space for GoldenGate binaries and trail files.  I’ve used the Oracle Database File System (DBFS) option in the past, although I never really was a big fan of it, since it creates additional database objects and, in my opinion, unnecessary additional database I/O as well as additional redo and RMAN activity.  Based on this, I decided to explore the use of Oracle ASM Cluster File System (ACFS) for this use case.  At first glance, it seemed much faster to set up and was available on all nodes by default, which would also allow GoldenGate to fail over to other nodes.  In addition, ACFS does not require the database to be up, so the file system can also be used for other purposes.  If you are using this mount solely for GoldenGate, make sure you follow the best practices document, which is updated periodically (Oracle GoldenGate Best Practice: NFS Mount options for use with GoldenGate (Doc ID 1232303.1)).

***  Follow these steps at your own risk and always test for your use case prior to using them in a production setting.

Requirements:

  • Root user access
  • Sufficient ASM Space
  • Separate ASM Diskgroup (Optional)
  • Latest Oracle Grid Infrastructure and Database Patchset

Configuration:

Verify that ACFS/ADVM modules are present in memory (on each node):

 $ lsmod | grep oracle

If the modules are not present, the command will return something similar to:
oracleasm              53591  1

If the modules are present, the command will return something similar to:
oracleacfs 3308260 0
oracleadvm 508030 0
oracleoks 506741 2 oracleacfs,oracleadvm
oracleasm 53591 1

If the modules are not present or you would like to ensure that the latest version is loaded, run the following before proceeding (as the root user):

$ . oraenv

ORACLE_SID = [CDBRAC1] ? +ASM

The Oracle base remains unchanged with value /u01/app/oracle

# $GRID_HOME/bin/acfsroot install

Reboot the node if the modules were already present and you are reloading them.

Start and enable the ACFS modules on each node:

As the root user:

# $GRID_HOME/bin/acfsload start
ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed

If running Grid Infrastructure, enable the driver modules in clusterware (only on one node as the root user):

# $GRID_HOME/bin/acfsroot enable
ACFS-9376: Adding ADVM/ACFS drivers resource succeeded.
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'orcl-rac1'
CRS-2676: Start of 'ora.drivers.acfs' on 'orcl-rac1' succeeded
ACFS-9380: Starting ADVM/ACFS drivers resource succeeded.
ACFS-9368: Adding ACFS registry resource succeeded.
CRS-2672: Attempting to start 'ora.registry.acfs' on 'orcl-rac2'
CRS-2672: Attempting to start 'ora.registry.acfs' on 'orcl-rac1'
CRS-2676: Start of 'ora.registry.acfs' on 'orcl-rac2' succeeded
CRS-2676: Start of 'ora.registry.acfs' on 'orcl-rac1' succeeded
ACFS-9372: Starting ACFS registry resource succeeded.

Once installation is complete and the mount is registered with clusterware, these modules will be loaded automatically.
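
If you want to confirm that the driver and registry resources are in place, one quick (if crude) check is to list the clusterware resources and look for the ACFS entries shown above, for example:

# $GRID_HOME/bin/crsctl stat res -t | grep -i acfs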

If you like, you can double-check the driver state using the acfsdriverstate executable:
usage: acfsdriverstate [-orahome <ORACLE_HOME>] <installed | loaded | version | supported> [-s]
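
For example, to check whether the drivers are supported on the running kernel and are currently loaded (the exact output text varies by version, so it is omitted here):

# $GRID_HOME/bin/acfsdriverstate supported
# $GRID_HOME/bin/acfsdriverstate loaded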

As oracle user, create an ASM volume for ACFS (run only on one node):

Source in the grid environment.

$ . oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle

Create the volume using the volcreate command.
You can use an existing disk group or create a separate one to house ACFS.

$ asmcmd
ASMCMD> volcreate -G DATA -s 1G ACFSVOL1
ASMCMD> volinfo --all
Diskgroup Name: DATA

Volume Name: ACFSVOL1
Volume Device: /dev/asm/acfsvol1-370
State: ENABLED
Size (MB): 1024
Resize Unit (MB): 64
Redundancy: UNPROT
Stripe Columns: 8
Stripe Width (K): 1024
Usage:
Mountpath:

As oracle user, create the filesystem on the volume which was just created:

$ /sbin/mkfs -t acfs /dev/asm/acfsvol1-370

mkfs.acfs: version = 12.1.0.2.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/acfsvol1-370
mkfs.acfs: volume size = 1073741824 ( 1.00 GB )
mkfs.acfs: Format complete.

As root, create an empty directory which will house the file system:

# mkdir -p /acfsmounts/acfsvol1
# chown root:oinstall /acfsmounts
# chmod 770 /acfsmounts
# chown -R oracle:oinstall /acfsmounts/acfsvol1
# chmod 775 /acfsmounts/acfsvol1

As root, set up the file system to be auto-mounted by clusterware:

In a RAC 11g environment, use acfsutil (srvctl may be supported but was not tested; the -u option allows the oracle user to administer the mount):

# . /usr/local/bin/oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# /sbin/acfsutil registry -a /dev/asm/acfsvol1-370 /acfsmounts/acfsvol1 -t "ACFS General Purpose Mount" -u oracle

In a RAC 12c GI environment, register it with clusterware using the following commands (the -user option allows the oracle user to administer the mount):

# . /usr/local/bin/oraenv
ORACLE_SID = [CDBRAC1] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
# srvctl add volume -volume ACFSVOL1 -diskgroup DATA -device /dev/asm/acfsvol1-370
# srvctl add filesystem -device /dev/asm/acfsvol1-370 -path /acfsmounts/acfsvol1 -diskgroup DATA -user oracle -fstype ACFS -description "ACFS General Purpose Mount"

At this point the mount should be ready for read/write and will be automatically mounted by clusterware.
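
To verify, you can check the clusterware resource (in the 12c GI case registered with srvctl) and the mount itself, using the device and mount point created above:

# srvctl status filesystem -device /dev/asm/acfsvol1-370
$ df -h /acfsmounts/acfsvol1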

Administration of the ACFS mount:

If you need to resize the mount once it has been created, use acfsutil (since you granted control to the oracle user, this command can also be executed by the oracle user):

$ acfsutil size 25G /acfsmounts/acfsvol1

To manually start or stop the file system through clusterware:

$ srvctl start filesystem -device /dev/asm/acfsvol1-370
$ srvctl stop filesystem -device /dev/asm/acfsvol1-370
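
To check the current size and free space of the mounted file system afterwards (for example, to confirm the resize took effect), acfsutil can report on it:

$ /sbin/acfsutil info fs /acfsmounts/acfsvol1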