Why Aren’t My Exadata Database Queries Performing Smart Scans?


Recently, while performing prerequisite checks on an Exadata in preparation for the normal quarterly full stack patch, I came across a section of the Exachk report with the heading “Database should not be in DST upgrade state”. Upon investigation I found that to be true:

select name, value$ from sys.props$ where name like '%DST%';

NAME			       VALUE$
------------------------------ ------------------------------
DST_UPGRADE_STATE	       UPGRADE
DST_PRIMARY_TT_VERSION	       32
DST_SECONDARY_TT_VERSION       18

I took this as an action item to correct during the patching cycle and continued with the normal checks, including a quick look through recent AWR reports. It was then that I found something interesting in the “Exadata” section of the report:

As you can see, nothing is being offloaded, and in the “Passthru Reasons” section “timezone” is the culprit. I then went to look at a query or two, and sure enough, that confirmed it:
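
If you want to check an individual statement yourself, compare its offload-eligible bytes against what actually came back from the cells in V$SQL. A minimal sketch (substitute a SQL_ID of your own):

select sql_id,
       io_cell_offload_eligible_bytes,
       io_cell_offload_returned_bytes,
       io_interconnect_bytes
  from v$sql
 where sql_id = '&sql_id';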


So sure enough, a stuck timezone file upgrade can keep the very feature you bought an Exadata for from working! The date offloading stopped also coincides with when an upgrade to 19c occurred. A quick sanity check of MOS revealed the following note:

Exadata: Database Performance Degrades when Database is in Timezone Upgrade Mode (Doc ID 1583297.1)

So now, how to fix it? The note says to try the following commands, but unfortunately they did not work:

alter session set events '30090 trace name context forever, level 32';
exec dbms_dst.unload_secondary;
BEGIN dbms_dst.unload_secondary; END;

*
ERROR at line 1:
ORA-56938: no secondary time zone data file being loaded by on-demand or a datapump job
ORA-06512: at "SYS.DBMS_DST", line 1969
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DST", line 1906
ORA-06512: at line 1

So at this point we could either try the timezone upgrade again, or try to force the timezone upgrade window to close. The following MOS note discusses the timezone upgrade process:

Updating the RDBMS DST version in 12c Release 1 (12.1.0.1 ) and above using DBMS_DST (Doc ID 1509653.1)

Due to time constraints, I decided to try forcing the upgrade window to close (all other indicators showed that the upgrade had been successful on every other object), and we immediately found the reason the upgrade had failed in the first place…. an invalid and UNUSED version of APEX!!!

alter session set "_with_subquery"=materialize;
alter session set "_simple_view_merging"=TRUE;
set serveroutput on
VAR numfail number
BEGIN
DBMS_DST.UPGRADE_DATABASE(:numfail,
parallel => TRUE,
log_errors => TRUE,
log_errors_table => 'SYS.DST$ERROR_TABLE',
log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
error_on_overlap_time => FALSE,
error_on_nonexisting_time => FALSE);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
END;
/
VAR fail number
BEGIN
DBMS_DST.END_UPGRADE(:fail);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
END;
/
SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;
Table list: "APEX_040200"."WWV_FLOW_FEEDBACK"
BEGIN
*
ERROR at line 1:
ORA-04098: trigger 'APEX_040200.WWV_FLOW_FEEDBACK_T1' is invalid and failed re-validation
ORA-06512: at "SYS.DBMS_DST", line 1131
ORA-06512: at "SYS.DBMS_DST", line 611
ORA-06512: at "SYS.DBMS_DST", line 209
ORA-06512: at "SYS.DBMS_DST", line 858
ORA-06512: at "SYS.DBMS_DST", line 596
ORA-06512: at "SYS.DBMS_DST", line 1123
ORA-06512: at line 2

After following the normal process to remove those old versions of APEX (I am a believer in removing it entirely if you aren’t using it), I was then able to close the timezone upgrade window with no errors:

select name, value$ from sys.props$ where name like '%DST%';

NAME			       VALUE$
------------------------------ ------------------------------
DST_UPGRADE_STATE	       UPGRADE
DST_PRIMARY_TT_VERSION	       32
DST_SECONDARY_TT_VERSION       18

alter session set "_with_subquery"=materialize;

Session altered.

alter session set "_simple_view_merging"=TRUE;

Session altered.

set serveroutput on
VAR numfail number
BEGIN
DBMS_DST.UPGRADE_DATABASE(:numfail,
parallel => TRUE,
log_errors => TRUE,
log_errors_table => 'SYS.DST$ERROR_TABLE',
log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
error_on_overlap_time => FALSE,
error_on_nonexisting_time => FALSE);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
END;
/
Table list: "APEX_200200"."WWV_FLOW_WORKSHEET_NOTIFY"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_WEB_SRC_MODULES"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_ACTIVITY_LOG2$"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_DEBUG_MESSAGES"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_DEBUG_MESSAGES2"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_WEBSOURCE_SYNC_LOG1$"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_WEBSOURCE_SYNC_LOG2$"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_AUTOMATION_LOG1$"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_AUTOMATION_LOG2$"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_AUTOMATION_MSG_LOG1$"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_AUTOMATION_MSG_LOG2$"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_FEEDBACK"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_FEEDBACK_FOLLOWUP"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_PKG_APP_INSTALL_LOG"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_ACTIVITY_LOG1$"
Number of failures: 0
Table list: "APEX_200200"."APEX$ARCHIVE_HEADER"
Number of failures: 0
Table list: "APEX_200200"."APEX$ARCHIVE_CONTENTS"
Number of failures: 0
Table list: "APEX_200200"."APEX$ARCHIVE_HISTORY"
Number of failures: 0
Table list: "APEX_200200"."APEX$ARCHIVE_LOG"
Number of failures: 0
Table list: "APEX_200200"."APEX$ARCHIVE_PREF"
Number of failures: 0
Table list: "APEX_200200"."WWV_FLOW_AUTOMATIONS"
Number of failures: 0
Failures:0

PL/SQL procedure successfully completed.

VAR fail number
BEGIN
DBMS_DST.END_UPGRADE(:fail);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
END;
/
An upgrade window has been successfully ended.
Failures:0

PL/SQL procedure successfully completed.

select name, value$ from sys.props$ where name like '%DST%';

NAME			       VALUE$
------------------------------ ------------------------------
DST_UPGRADE_STATE	       NONE
DST_PRIMARY_TT_VERSION	       32
DST_SECONDARY_TT_VERSION       0

After making this change, you can see that the “Passthru Reasons” section is no longer populated and all cell offloads are occurring again:

Confirming via AWR also shows the query now getting proper offload:

So the moral of the story: when performing an upgrade, take extra care to check for invalid objects along the way, and don’t proceed until they are properly resolved! Also, consider removing database features if you are not using them. It lessens the chance of outliers causing problems, and your upgrade will actually complete faster!
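
As a quick illustration of the kind of check that would have caught this earlier, something along these lines (a minimal sketch) lists invalid objects by owner before you proceed:

select owner, object_type, count(*) invalid_count
  from dba_objects
 where status = 'INVALID'
 group by owner, object_type
 order by owner, object_type;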

Enjoy!

Using Python Along with the Oracle GoldenGate Big Data Adapter Integrated Heartbeat

In my previous blog post, https://stborden.wordpress.com/2021/08/27/enabling-and-use-of-the-oracle-goldengate-big-data-adapter-integrated-heartbeat/, I showed how to enable the heartbeat for the GoldenGate Big Data Adapter. In this post, we will extend this facility using python to read this data and provide lag information for the replication path over different time periods.

Python is a perfect language to iterate over the json generated by the integrated heartbeat. For ease, I have posted the script in the code block below, but it is also available on my GitHub site:

https://github.com/shane-borden/goldengate-scripts/blob/main/ogg_big_data_heartbeat_report.py


#!/bin/python3
# Python program to read ogg heartbeat history json file

import json
import time
import datetime
import os
import glob
import sys, getopt

def main(argv):
  # Initialize Variables
  vLagJsonDir = ''
  try:
    opts, args = getopt.getopt(argv,"h:j:",["jsondir="])
    if len(opts) == 0:
      print('Script Usage: ogg_big_data_heartbeat_report.py -j ')
      sys.exit(1)
  except getopt.error as err:
    print('Script Usage: ogg_big_data_heartbeat_report.py -j ')
    sys.exit(2)
  for opt, arg in opts:
    if opt == '-h':
      print('Script Usage: ogg_big_data_heartbeat_report.py -j ')
      sys.exit()
    #elif opt in ("-j", "--jsondir"):
    elif opt == '-j':
      vLagJsonDir = arg
    elif opt == '--jsondir':
      vLagJsonDir = arg

  vTotLag = 0
  vTotJsonRecords = 0
  vTotLag_1hour = 0
  vTotJsonRecords_1hour = 0
  vTotLag_4hour = 0
  vTotJsonRecords_4hour = 0
  vTotLag_8hour = 0
  vTotJsonRecords_8hour = 0
  vTotLag_24hour = 0
  vTotJsonRecords_24hour = 0
  now = time.mktime(datetime.datetime.now().timetuple())
  if vLagJsonDir == "":
    vLagJsonDir = "/u01/app/oracle/product/oggBd/19.1/gg_1/dirtmp/"
    print('JSON Dir defaulted to: ' + str(vLagJsonDir))
  else:
    print('JSON Dir is: ' + str(vLagJsonDir))
  lag_records = []
  heartbeat_timestamp_records = []
  replication_path_records = []

  # Opening JSON file
  for filename in glob.glob(vLagJsonDir + '/*-hb-[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9].json'):
    #print(os.path.join(vLagJsonDir + "/", filename))
    f = open(os.path.join(vLagJsonDir + "/", filename))

    # returns JSON object as
    # a dictionary
    data = json.load(f)

    # Iterating through the json
    # list
    for i in data['records']:
      vIncomingTs = time.mktime(datetime.datetime.strptime(i['incomingHeartbeatTs'][:-3],"%Y-%m-%d %H:%M:%S.%f").timetuple())
      vOutgoingTs = time.mktime(datetime.datetime.strptime(i['outgoingReplicatTs'][:-3],"%Y-%m-%d %H:%M:%S.%f").timetuple())
      vIncomingHeartbeatTs = datetime.datetime.strptime(i['incomingHeartbeatTs'][:-3],"%Y-%m-%d %H:%M:%S.%f").strftime('%Y-%m-%d %H:%M')
      heartbeat_timestamp_records.append(vIncomingHeartbeatTs)
      #print(str(now - vOutgoingTs))
      if (now - vOutgoingTs) <= 3600:
        vTotLag_1hour = vTotLag_1hour + (vOutgoingTs - vIncomingTs)
        vTotJsonRecords_1hour = (vTotJsonRecords_1hour + 1)
        #print(i['incomingExtract'] + " => " + i['incomingRoutingPath'] + " => " + i['incomingReplicat'] + " | " + vIncomingHeartbeatTs + " | " + str(vOutgoingTs - vIncomingTs))
        replication_path_records.append(i['incomingExtract'] + " => " + i['incomingRoutingPath'] + " => " + i['incomingReplicat'])
      elif (now - vOutgoingTs) <= 14400:
        vTotLag_4hour = vTotLag_4hour + (vOutgoingTs - vIncomingTs)
        vTotJsonRecords_4hour = (vTotJsonRecords_4hour + 1)
      elif (now - vOutgoingTs) <= 28800:
        vTotLag_8hour = vTotLag_8hour + (vOutgoingTs - vIncomingTs)
        vTotJsonRecords_8hour = (vTotJsonRecords_8hour + 1)
      elif (now - vOutgoingTs) <= 86400:
        vTotLag_24hour = vTotLag_24hour + (vOutgoingTs - vIncomingTs)
        vTotJsonRecords_24hour = (vTotJsonRecords_24hour + 1)
      # every record contributes to the overall dataset totals
      vTotLag = vTotLag + (vOutgoingTs - vIncomingTs)
      vTotJsonRecords = (vTotJsonRecords + 1)
    f.close()

  # span of the dataset in days, derived from the oldest and newest heartbeat timestamps seen
  vOldestTs = time.mktime(datetime.datetime.strptime(min(heartbeat_timestamp_records),"%Y-%m-%d %H:%M").timetuple())
  vNewestTs = time.mktime(datetime.datetime.strptime(max(heartbeat_timestamp_records),"%Y-%m-%d %H:%M").timetuple())
  vTimeDiff = round((vNewestTs - vOldestTs) / 86400, 1)

  print("\nReplication Paths:")
  for vPath in sorted(set(replication_path_records)):
    print(vPath)
  print("\nCombined Lag Data for Replication Paths:\n")
  if vTotJsonRecords_1hour > 0:
    print("Average Lag over the past hour: " + str(vTotLag_1hour // vTotJsonRecords_1hour) + " seconds")
  if vTotJsonRecords_4hour > 0:
    print("Average Lag over the past 4 hours: " + str(vTotLag_4hour // vTotJsonRecords_4hour) + " seconds")
  if vTotJsonRecords_8hour > 0:
    print("Average Lag over the past 8 hours: " + str(vTotLag_8hour // vTotJsonRecords_8hour) + " seconds")
  if vTotJsonRecords_24hour > 0:
    print("Average Lag over the past 24 hours: " + str(vTotLag_24hour // vTotJsonRecords_24hour) + " seconds")
  print("Average Lag over the dataset (" + str(vTimeDiff) + " Days): " + str(vTotLag // vTotJsonRecords) + " seconds")

if __name__ == "__main__":
  if len(sys.argv) < 2:
    print('Script Usage: ogg_big_data_heartbeat_report.py -j ')
    sys.exit(2)
  else:
    main(sys.argv[1:])

Now, executing the script is quite easy. Just tell the script where the JSON files are and it will aggregate the results over the past hour, 4 hours, 8 hours, 24 hours, and the entire dataset:

oracle@orcl-vbox2:/home/oracle $ ogg_big_data_heartbeat_report.py -j /u01/app/oracle/product/oggBd/19.1/gg_1/dirtmp
JSON Dir is: /u01/app/oracle/product/oggBd/19.1/gg_1/dirtmp

Replication Paths:
E_HR_CL => P_HR_CL => RKFJSCON

Combined Lag Data for Replication Paths:

Average Lag over the past hour: 6.0 seconds
Average Lag over the past 4 hours: 6.0 seconds
Average Lag over the past 8 hours: 6.0 seconds
Average Lag over the past 24 hours: 6.0 seconds
Average Lag over the dataset (18.0 Days): 443446.0 seconds

I’ve only had my VM up for the past day or so, so the data for the entire dataset is skewed as there was a several day period where the VM was down.

You can extend this even further if you like, executing it on a regular basis from OEM or any other scheduling tool you might have so that historical data can be kept. Enjoy!

Enabling and Use of the Oracle GoldenGate Big Data Adapter Integrated Heartbeat

I have always found the integrated heartbeat in GoldenGate very useful for quickly identifying replication trends and potential slowness in the replication path. Luckily, version 19c extends this functionality to the Big Data Adapter. My goal with this blog post is to show how easy it is to enable the heartbeat and to demonstrate a Python script that utilizes the data the heartbeat produces.

First, enable the heartbeat on the source just as you normally would. I won’t cover how to do that in this post, but for GoldenGate 19c, the instructions can be found here:

https://docs.oracle.com/en/middleware/goldengate/core/19.1/gclir/add-heartbeattable.html#GUID-126E30A2-DC7A-4C93-93EC-0EB8BA7C13CB

Enabling the HeartBeat for the Big Data Adapter

Luckily, enabling the heartbeat for the Big Data Adapter is just as easy as in the normal GoldenGate CLI. The official reference is here:

https://docs.oracle.com/en/middleware/goldengate/big-data/19.1/gbdin/automatic-heartbeat-bigdata.html#GUID-56DF810A-A417-45DB-A3ED-2D902B800778

For expedience, I will show the basic usage of the command here. After logging into GGSCI, enabling the target heartbeat is as easy as:

Oracle GoldenGate for Big Data
Version 19.1.0.0.8 (Build 002)

Oracle GoldenGate Command Interpreter
Version 19.1.0.0.210228 OGGCORE_19.1.0.0.0OGGBP_PLATFORMS_210212.1752.2
Linux, x64, 64bit (optimized), Generic  on Mar 18 2021 07:13:36
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2021, Oracle and/or its affiliates. All rights reserved.



GGSCI (orcl-vbox2.localdomain.com) 1> ADD HEARTBEATTABLE
Heartbeat table is now enabled.
HEARTBEAT configuration file dirprm/heartbeat.properties
heartbeat.enabled=true
heartbeat.frequency=60
heartbeat.retention_time=30
heartbeat.purge.frequency=1
heartbeat.db.name=BigData

Now that the heartbeat has been enabled, where does the data go? The heartbeat information is written (by default) under your GOLDENGATE_HOME/dirtmp directory in the form of JSON files:

RKFJSCON-hb.json – Data for Last Heartbeat

The last heartbeat recorded looks like this:

{
"localDatabase" : "BigData",
"heartbeatTimestamp" : "2021-08-27 13:17:14.149808000",
"remoteDatabase" : "VBOXNCDB",
"incomingExtract" : "E_HR_CL",
"incomingRoutingPath" : "P_HR_CL",
"incomingReplicat" : "RKFJSCON",
"incomingHeartbeatTs" : "2021-08-27 13:17:14.149808000",
"incomingExtractTs" : "2021-08-27 13:17:16.822345000",
"incomingRoutingTs" : "2021-08-27 13:17:17.275030000",
"incomingReplicatTs" : "2021-08-27 13:17:20.411229000",
"outgoingReplicat" : "RKFJSCON",
"outgoingReplicatTs" : "2021-08-27 13:17:20.412000000"
}


Then, every time a new heartbeat arrives, the previous heartbeat is appended to a historical file (one file per day), which is retained up to the retention period, after which the data is deleted:

{
"records" : [ {
"localDatabase" : "BigData",
"heartbeatTimestamp" : "2021-08-27 00:00:14.110627000",
"remoteDatabase" : "VBOXNCDB",
"incomingExtract" : "E_HR_CL",
"incomingRoutingPath" : "P_HR_CL",
"incomingReplicat" : "RKFJSCON",
"incomingHeartbeatTs" : "2021-08-27 00:00:14.110627000",
"incomingExtractTs" : "2021-08-27 00:00:16.213547000",
"incomingRoutingTs" : "2021-08-27 00:00:17.371348000",
"incomingReplicatTs" : "2021-08-27 00:00:20.147079000",
"outgoingReplicat" : "RKFJSCON",
"outgoingReplicatTs" : "2021-08-27 00:00:20.170000000"
}, {
More json records
}, {
"localDatabase" : "BigData",
"heartbeatTimestamp" : "2021-08-27 13:21:14.178962000",
"remoteDatabase" : "VBOXNCDB",
"incomingExtract" : "E_HR_CL",
"incomingRoutingPath" : "P_HR_CL",
"incomingReplicat" : "RKFJSCON",
"incomingHeartbeatTs" : "2021-08-27 13:21:14.178962000",
"incomingExtractTs" : "2021-08-27 13:21:16.602223000",
"incomingRoutingTs" : "2021-08-27 13:21:17.675216000",
"incomingReplicatTs" : "2021-08-27 13:21:21.481484000",
"outgoingReplicat" : "RKFJSCON",
"outgoingReplicatTs" : "2021-08-27 13:21:21.481000000"
} ]
}


So as you can see, the heartbeat data is now in JSON format, ready for you to write alerts, OEM extensions, or whatever else you see fit. Check out my next blog post on how to report on this data using a simple Python script!

Migration of a Single PDB to a Different Endian Platform – Is it Possible?

Seeing that I do lots of migrations throughout the course of my job, it’s always good to explore some of the newer features so that you are familiar with them and feel good about putting them into your toolbag for future use. One thing I could see being useful is the ability to migrate a single PDB from one platform to another. Upon researching the possibilities, I found that migrating a PDB and converting its endianness really wasn’t documented. What was documented was a process to migrate a PDB from one platform to another when the endianness is the same on both platforms.

Given that, I wanted to see if the process for converting an entire container using transportable tablespaces could be applied to a single PDB. Unfortunately there is downtime required, but here is what I found:

Create Backup of Source PDB

Determine which tablespaces are impacted for a given user, check for any violations which may occur due to the transport, set those tablespaces to read-only, and then invoke RMAN by connecting to the PDB and take a backup:

SQL> select * from v$transportable_platform;
PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT      CON_ID
----------- ---------------------------------------- -------------- ----------
          2 Solaris[tm] OE (64-bit)                  Big                     0

SQL> exec SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'HR_TEST', incl_constraints => TRUE);

PL/SQL procedure successfully completed.

SQL> SELECT * FROM transport_set_violations;

no rows selected


export NLS_DATE_FORMAT='DD-MON-YY HH24:MI:SS'
rman target sys@xttpdb

RMAN> alter tablespace hr_test read only;

using target database control file instead of recovery catalog
Statement processed

RMAN> alter tablespace users read only;

Statement processed

RUN
{
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK1 DEVICE TYPE DISK;
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK2 DEVICE TYPE DISK;
BACKUP TO PLATFORM 'Linux x86 64-bit' AS COMPRESSED BACKUPSET FILESPERSET 1
   FORMAT '/export/home/oracle/backups/%d_xplat_bs%s_piece%p_%T_%U.bkp'
   DATAPUMP FORMAT '/export/home/oracle/backups/transport_tblsp_hr_test_meta.dmp'
   TABLESPACE HR_TEST, USERS;
RELEASE CHANNEL XPLATFORM_BKUP_DISK1;
RELEASE CHANNEL XPLATFORM_BKUP_DISK2;
}

allocated channel: XPLATFORM_BKUP_DISK1
channel XPLATFORM_BKUP_DISK1: SID=752 instance=xttsun1 device type=DISK

allocated channel: XPLATFORM_BKUP_DISK2
channel XPLATFORM_BKUP_DISK2: SID=745 instance=xttsun1 device type=DISK

Starting backup at 29-OCT-20
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYS"."TRANSPORT_EXP_XTTSUN_aahC":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/COMMENT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
   EXPDP> Master table "SYS"."TRANSPORT_EXP_XTTSUN_aahC" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TRANSPORT_EXP_XTTSUN_aahC is:
   EXPDP>   /u01/app/oracle/product/12.2.0/dbhome_1/dbs/backup_tts_XTTSUN_66977.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace HR_TEST:
   EXPDP>   +DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.302.1054902701
   EXPDP> Datafiles required for transportable tablespace USERS:
   EXPDP>   +DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.435.1054891719
   EXPDP> Job "SYS"."TRANSPORT_EXP_XTTSUN_aahC" successfully completed at Thu Oct 29 13:14:42 2020 elapsed 0 00:02:17
Export completed

channel XPLATFORM_BKUP_DISK1: starting compressed full datafile backup set
channel XPLATFORM_BKUP_DISK1: specifying datafile(s) in backup set
input datafile file number=00018 name=+DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.302.1054902701
channel XPLATFORM_BKUP_DISK1: starting piece 1 at 29-OCT-20
channel XPLATFORM_BKUP_DISK2: starting compressed full datafile backup set
channel XPLATFORM_BKUP_DISK2: specifying datafile(s) in backup set
input datafile file number=00017 name=+DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.435.1054891719
channel XPLATFORM_BKUP_DISK2: starting piece 1 at 29-OCT-20
channel XPLATFORM_BKUP_DISK2: finished piece 1 at 29-OCT-20
piece handle=/export/home/oracle/backups/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp tag=TAG20201029T131156 comment=NONE
channel XPLATFORM_BKUP_DISK2: backup set complete, elapsed time: 00:00:03
channel XPLATFORM_BKUP_DISK1: finished piece 1 at 29-OCT-20
piece handle=/export/home/oracle/backups/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp tag=TAG20201029T131156 comment=NONE
channel XPLATFORM_BKUP_DISK1: backup set complete, elapsed time: 00:00:35
channel XPLATFORM_BKUP_DISK1: starting compressed full datafile backup set
input Data Pump dump file=/u01/app/oracle/product/12.2.0/dbhome_1/dbs/backup_tts_XTTSUN_66977.dmp
channel XPLATFORM_BKUP_DISK1: starting piece 1 at 29-OCT-20
channel XPLATFORM_BKUP_DISK1: finished piece 1 at 29-OCT-20
piece handle=/export/home/oracle/backups/transport_tblsp_hr_test_meta.dmp tag=TAG20201029T131156 comment=NONE
channel XPLATFORM_BKUP_DISK1: backup set complete, elapsed time: 00:00:01
Finished backup at 29-OCT-20

released channel: XPLATFORM_BKUP_DISK1

released channel: XPLATFORM_BKUP_DISK2

RMAN> exit

Copy Backup Files and Metadata File to New Platform

Using normal methods such as scp, copy the files generated by RMAN to the new platform. One caveat is that, prior to 19c, the Data Pump file for the metadata is placed in $ORACLE_HOME/dbs, so you must make sure that you have sufficient space there for the process to complete successfully.

Restore / Convert Endian on New Platform

The last step in the process is the setup of the new PDB, creation of the user in the PDB, and then using RMAN to transport the tablespaces. The general process can be completed by executing the following steps:

  • Create empty pluggable database and create target user in the new PDB:
SQL> select * from v$transportable_platform;
PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT      CON_ID
----------- ---------------------------------------- -------------- ----------
         13 Linux x86 64-bit                         Little                  0

SQL> CREATE PLUGGABLE DATABASE xttpdb ADMIN USER pdb_admin IDENTIFIED BY Welcome123 CREATE_FILE_DEST='+DATA';

Pluggable database created.

SQL> CREATE USER "HR" IDENTIFIED BY VALUES 'S:904B950378D93630E729A1A3051796718B7511BA10F5273F2F031E5CF76D;T:22D35135BC49EEAF5F4C896F26E2E8802D869C6451F39241BB4ACB76A38B7396E1C516DC78F0AFB1D3A96ABB761543673ABCF540CAC36927564F892FC148B436934AD38AD6D35C1EB8F3BB55F70BB02A'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP"
PASSWORD EXPIRE
ACCOUNT LOCK;

SQL> GRANT "RESOURCE" TO "HR";

Grant succeeded.
SQL> GRANT CREATE SESSION TO "HR";

Grant succeeded.
SQL> GRANT ALTER SESSION TO "HR";

Grant succeeded.
SQL> GRANT UNLIMITED TABLESPACE TO "HR";

Grant succeeded.
SQL> GRANT CREATE SYNONYM TO "HR";

Grant succeeded.
SQL> GRANT CREATE VIEW TO "HR";

Grant succeeded.
SQL> GRANT CREATE SEQUENCE TO "HR";

Grant succeeded.
SQL> GRANT CREATE DATABASE LINK TO "HR";

Grant succeeded.
SQL> GRANT EXECUTE ON "SYS"."DBMS_STATS" TO "HR";

Grant succeeded.
SQL> ALTER USER "HR" DEFAULT ROLE ALL;

Grant succeeded.

  • Invoke RMAN and connect to the target PDB:

export NLS_DATE_FORMAT='DD-MON-YY HH24:MI:SS'
rman target sys@xttpdb

RUN
{
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK1 DEVICE TYPE DISK;
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK2 DEVICE TYPE DISK;
RESTORE
   ALL FOREIGN DATAFILES TO NEW
   FROM BACKUPSET '/archive/xttlnux/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp'
   BACKUPSET '/archive/xttlnux/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp'
   DUMP FILE FROM BACKUPSET '/archive/xttlnux/transport_tblsp_hr_test_meta.dmp';
RELEASE CHANNEL XPLATFORM_BKUP_DISK1;
RELEASE CHANNEL XPLATFORM_BKUP_DISK2;
}

using target database control file instead of recovery catalog
allocated channel: XPLATFORM_BKUP_DISK1
channel XPLATFORM_BKUP_DISK1: SID=512 instance=xttlnux2 device type=DISK

allocated channel: XPLATFORM_BKUP_DISK2
channel XPLATFORM_BKUP_DISK2: SID=615 instance=xttlnux1 device type=DISK

Starting restore at 29-OCT-20

channel XPLATFORM_BKUP_DISK1: starting datafile backup set restore
channel XPLATFORM_BKUP_DISK1: specifying datafile(s) to restore from backup set
channel XPLATFORM_BKUP_DISK1: restoring all foreign files in backup piece
channel XPLATFORM_BKUP_DISK1: reading from backup piece /archive/xttlnux/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK2: starting datafile backup set restore
channel XPLATFORM_BKUP_DISK2: specifying datafile(s) to restore from backup set
channel XPLATFORM_BKUP_DISK2: restoring all foreign files in backup piece
channel XPLATFORM_BKUP_DISK2: reading from backup piece /archive/xttlnux/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK1: restoring foreign file 17 to +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.929.1055079821
channel XPLATFORM_BKUP_DISK2: restoring foreign file 18 to +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.928.1055079877
channel XPLATFORM_BKUP_DISK1: foreign piece handle=/archive/xttlnux/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK1: restored backup piece 1
channel XPLATFORM_BKUP_DISK1: restore complete, elapsed time: 00:00:02
channel XPLATFORM_BKUP_DISK2: foreign piece handle=/archive/xttlnux/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK2: restored backup piece 1
channel XPLATFORM_BKUP_DISK2: restore complete, elapsed time: 00:00:26
channel XPLATFORM_BKUP_DISK1: starting datafile backup set restore
channel XPLATFORM_BKUP_DISK1: specifying datafile(s) to restore from backup set
channel XPLATFORM_BKUP_DISK1: restoring Data Pump dump file to /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/backup_tts_XTTLNUX_32652.dmp
channel XPLATFORM_BKUP_DISK1: reading from backup piece /archive/xttlnux/transport_tblsp_hr_test_meta.dmp
channel XPLATFORM_BKUP_DISK1: foreign piece handle=/archive/xttlnux/transport_tblsp_hr_test_meta.dmp
channel XPLATFORM_BKUP_DISK1: restored backup piece 1
channel XPLATFORM_BKUP_DISK1: restore complete, elapsed time: 00:00:00

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_XTTLNUX_CFfw" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_XTTLNUX_CFfw":
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/COMMENT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> ORA-39082: Object type TRIGGER:"HR"."SECURE_EMPLOYEES" created with compilation warnings

ORA-39082: Object type TRIGGER:"HR"."UPDATE_JOB_HISTORY" created with compilation warnings

   IMPDP> Job "SYS"."TSPITR_IMP_XTTLNUX_CFfw" completed with 2 error(s) at Thu Oct 29 13:44:46 2020 elapsed 0 00:00:32
Import completed

Finished restore at 29-OCT-20

released channel: XPLATFORM_BKUP_DISK1

released channel: XPLATFORM_BKUP_DISK2

RMAN> exit

At this point, the tablespaces have been converted and plugged into the target PDB. After a couple of small clean-up steps (shown below), all objects are ready for use!
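
Transported tablespaces are plugged in read-only, and the import flagged two triggers with compilation warnings (ORA-39082), so the clean-up amounts to something like the following, using the tablespace and trigger names from this example:

alter tablespace hr_test read write;
alter tablespace users read write;

-- recompile the triggers reported by ORA-39082 during the import
alter trigger "HR"."SECURE_EMPLOYEES" compile;
alter trigger "HR"."UPDATE_JOB_HISTORY" compile;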

Interesting Items of Note

After performing the above conversion and migration without any issues, and as I normally do, I started to poke around to see what “really” happened. When I did, I found something quite interesting: looking at “v$datafile”, I saw that while RMAN did assign OMF filenames at restore time, it placed those files in a totally different directory than all of the other files in the PDB.

SQL> select file#, name from v$datafile;

     FILE# NAME
---------- ----------------------------------------------------------------------------
	29 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/system.932.1055165099
	30 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/sysaux.927.1055165099
	31 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undotbs1.931.1055165099
	32 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undo_2.925.1055165403
	33 +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.919.1055165973
	34 +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.922.1055166029

As you may have noticed, there are two GUIDs present in ASM for the same PDB, “B2A8DA5A8E2A777EE0540010E05D13EA” and “B2E8839B69346990E05302EC090A3057”. Looking at the source system, it appears that despite specifying “TO NEW” in my RMAN RUN block, the restore still carried over the GUID directory from the old system. Of course there is nothing wrong with this other than the files not being in the same directory and potentially causing an administration headache, but if you would like to fix it, normal methods apply, one of which is “alter database move datafile ….”. The results can be seen below:

SQL> alter database move datafile 33;

Database altered.

Elapsed: 00:00:08.40	
SQL> alter database move datafile 34;

Database altered.

Elapsed: 00:00:00.15
SQL> select file#, name from v$datafile;

     FILE# NAME
---------- -----------------------------------------------------------------------------
	29 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/system.932.1055165099
	30 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/sysaux.927.1055165099
	31 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undotbs1.931.1055165099
	32 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undo_2.925.1055165403
	33 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/hr_test.957.1055166369
	34 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/users.919.1055166403

6 rows selected.

As you can see, all of the files now reside within ASM under the same GUID and are still OMF. Hopefully this will help you decide whether this is an appropriate migration method to get you to a PDB on a different platform!

Upgrading Oracle using autoupgrade.jar and Migration From NON-CDB to CDB Architecture

Over the years, I have done many migrations and many upgrades to Oracle databases. Throughout the versions, that methodology has varied slightly and in all cases has been painstakingly manual until now. Oracle’s “autoupgrade.jar” changes all of that.

“autoupgrade.jar” takes all of the following tasks you used to do manually and automates most of them:

  • prechecks
  • prefixups
  • upgrade
  • migration from non-cdb to cdb
  • postfixups

Periodically, Oracle will make updates and changes to the tool. The latest version and notes for AutoUpgrade can be obtained from MOS note 2485457.1 – AutoUpgrade Tool. Prior to your own testing, ensure that you have reviewed this note and downloaded the latest version. Also, Mike Dietrich has an entire blog dedicated to upgrades and the tool itself:

https://mikedietrichde.com/2019/04/29/the-new-autoupgrade-utility-in-oracle-19c/

Creating a configuration file:

The basis of this tool is a configuration file containing parameters that describe the “source” and “target” information. You can generate a sample configuration file as a starting point by issuing the following command:

java -jar ./autoupgrade.jar -create_sample_file config

A full explanation of all the parameters allowed within a configuration file is detailed in the documentation:

https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/autoupgrade-utility-configuration-files.html#GUID-6AF95954-DAD4-47E2-9318-5EB35631F83D

For example, if you are upgrading from non-cdb to non-cdb, the contents may contain the following parameters:

upg1.dbname=testdb
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.3.0/dbhome_1
upg1.sid=testdb1
upg1.log_dir=/home/oracle/upg_logs/testdb1
upg1.upgrade_node=orcl-vbox2
upg1.target_version=19
upg1.run_utlrp=yes
upg1.timezone_upg=yes

In cases where you are upgrading from non-cdb to cdb, there are a few additional parameters:

upg1.dbname=testdb
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.3.0/dbhome_1
upg1.sid=testdb1
upg1.log_dir=/home/oracle/upg_logs/testdb1
upg1.upgrade_node=orcl-vbox2
upg1.target_version=19
upg1.run_utlrp=yes
upg1.timezone_upg=yes
upg1.target_cdb=vboxdb1
upg1.target_pdb_copy_option=file_name_convert=('/u02/oradata/TESTDB/datafile/o1_mf_example_hrfz1tct_.dbf','+DATA','/u02/oradata/TESTDB/datafile/o1_mf_sysaux_hrfyw505_.dbf','+DATA','/u02/oradata/TESTDB/datafile/o1_mf_system_hrfyw480_.dbf','+DATA','/u02/oradata/TESTDB/datafile/o1_mf_temp_hrfz1omf_.tmp','+DATA','/u02/oradata/TESTDB/datafile/o1_mf_undotbs1_hrfyw5f9_.dbf','+DATA','/u02/oradata/TESTDB/datafile/o1_mf_users_hrfyw5hh_.dbf','+DATA')
upg1.target_pdb_name=testdb

Notice that when also converting to cdb, at a minimum the following parameters are required:

  • target_cdb
  • target_pdb_name
  • target_pdb_copy_option (optional, if not specified, but the previous two parameters are, then the PDB is created with the NOCOPY option)

One thing to keep in mind is that when you utilize the “target_pdb_copy_option=file_name_convert” parameter, you must list every datafile, much like when cloning a database. Unfortunately, at the time of this writing, there is no option such as “SET NEWNAME FOR DATABASE” like the one that exists within the RMAN DUPLICATE process.

To help facilitate populating this parameter, this SQL may help:

SELECT
    LISTAGG(
        chr(39)||file_name||chr(39)||','||chr(39)||'+DATA'||chr(39)||','
    ) WITHIN GROUP(
    ORDER BY
        file_id
    ) AS file_name_convert
FROM
    dba_data_files
GROUP BY
    file_name
union
SELECT
    LISTAGG(
        chr(39)||file_name||chr(39)||','||chr(39)||'+DATA'||chr(39)||','
    ) WITHIN GROUP(
    ORDER BY
        file_id
    ) AS file_name_convert
FROM
    dba_temp_files
GROUP BY
    file_name;
FILE_NAME_CONVERT
'/u02/oradata/TESTDB/datafile/o1_mf_example_hrfz1tct_.dbf','+DATA',
'/u02/oradata/TESTDB/datafile/o1_mf_sysaux_hrfyw505_.dbf','+DATA',
'/u02/oradata/TESTDB/datafile/o1_mf_system_hrfyw480_.dbf','+DATA',
'/u02/oradata/TESTDB/datafile/o1_mf_temp_hrfz1omf_.tmp','+DATA',
'/u02/oradata/TESTDB/datafile/o1_mf_undotbs1_hrfyw5f9_.dbf','+DATA',
'/u02/oradata/TESTDB/datafile/o1_mf_users_hrfyw5hh_.dbf','+DATA',

You can then plug this output into the “target_pdb_copy_option” parameter as shown in the sample configuration file above.

*** Update on 10/15/2020
Shortly after publishing this post, I was made aware that there is also an option to leverage the target database “db_create_file_dest” parameter to perform the conversion to the correct OMF location. If you wish to use this methodology instead of listing every file in the “target_pdb_copy_option=file_name_convert=” parameter, populate the parameter as follows:

target_pdb_copy_option=file_name_convert=NONE

Executing pre-checks:

Now that the configuration file is ready to be used, you can execute the pre-checks and then either decide to fix them yourself or allow the tool to fix them for you. In either case, you should always re-execute the pre-checks after fixing them to ensure that everything has been addressed. To begin the pre-check process, execute the following command:

java -jar autoupgrade.jar -config testdb_autoupgrade_config.cfg -mode analyze

This will analyze the source database and create a log file showing all of the items that need to be fixed and whether the tool can fix them automatically or they need to be fixed manually. The log file is created in the directory specified by the “log_dir” parameter within the configuration file, under a subdirectory that corresponds to the task number of the “analyze” task.

Executing fixups:

Once the pre-checks are completed, the tool can then be executed to apply the fixups it is able to handle automatically. You can then execute the pre-checks one more time to ensure that all issues have been addressed. To execute the fixups, run the following command:

java -jar autoupgrade.jar -config testdb_autoupgrade_config.cfg -mode fixups

Deploying the upgrade:

Once all of the fixups are completed, you are ready to upgrade. The upgrade is quite easy using the tool:

java -jar autoupgrade.jar -config testdb_autoupgrade_config.cfg -mode deploy

Finalizing the upgrade:

As with any upgrade, paying attention to the log files is very important. That being said, here are a few items I noticed when testing on my own server:

  • oratab is not modified if you are migrating from non-cdb to cdb
  • clusterware is not updated if you are migrating from non-cdb to cdb
  • if migrating from non-cdb to cdb, an existing cdb database must be specified or one must be pre-created
  • be sure to check for any Guaranteed Restore Points that might be created
  • check for the proper setting of the ‘compatible’ parameter and advance it at the appropriate time (a quick check for both of the last two items is sketched below)
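
A minimal sketch of those two checks, run against the upgraded database:

-- any guaranteed restore points left behind by the upgrade?
select name, scn, time, guarantee_flashback_database
  from v$restore_point;

-- current setting of the compatible parameter
select name, value
  from v$parameter
 where name = 'compatible';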

Also, the tool does so much more, and this is just the tip of the iceberg. I plan to investigate more of these options such as automated patching and fleet patching in subsequent posts.

Keeping SQLDeveloper JVM Settings Persistent After Upgrade

Sometimes, based on the size or performance of your system, the JVM settings in SQL Developer / Data Modeler need to be adjusted in the “product.conf” file once SQL Developer is installed. A few of the most commonly tuned parameters are:

  • AddVMOption -Xms[memory size]
  • AddVMOption -Xmx[memory size]
  • SetJavaHome [full path to the Java Installation to use]

Unfortunately, when you upgrade to a newer version of SQL Developer, these settings are not always carried forward to the new version. This is particularly important in a scenario such as an RDP server where many users, each with their own product.conf, leverage common binaries.

Described below is an alternative that keeps all users on a common product.conf. The solution consists of three easy steps and, once completed, will check for commonly changed settings each time SQL Developer is launched.


First, start by constructing a “masterSqldevProduct.conf” based upon the “product.conf” that is created under each user’s “AppData/Roaming/sqldeveloper” directory. It is in this file that you can modify any settings that you would like to keep persistent. Once updated, save the file in a location accessible by all users on the system:

##############################################################################
#
# The format of this file is:
#
# Directive  Value
#
# with one or more spaces between the directive and the value. This file
# can be in either UNIX or DOS format for end of line terminators. Use UNIX
# style '/' path separators, although on Windows some directives, such as
# SetJavaHome, can take '\' path separators.
#
##############################################################################

#
# By default, the product launcher will search for a JDK to use, and if none
# can be found, it will ask for the location of a JDK and store its location
# in this file. If a particular JDK should be used instead, uncomment the
# line below and set the path to your preferred JDK.
#
# SetJavaHome /path/jdk

#
# Specify the initial size, in bytes, of the memory allocation pool. This
# value must be a multiple of 1024 greater than 1MB. Append the letter k
# or K to indicate kilobytes, or m or M to indicate megabytes, or g or G
# to indicate gigabytes. The default value is chosen at runtime based on
# the system configuration.
# Examples:  -Xms6291456
#            -Xms6144k
#            -Xms6m
#
# You can specify one value for any JDK using AddVMOption, OR you can specify
# separate values for 32-bit and 64-bit JDK's.
#
AddVMOption -Xms768m
# Add32VMOption -Xms128m
# Add64VMOption -Xms384m

#
# Specify the maximum size, in bytes, of the memory allocation pool. This
# value must be a multiple of 1024 greater than 2MB. Append the letter k
# or K to indicate kilobytes, or m or M to indicate megabytes, or g or G
# to indicate gigabytes. The default value is chosen at runtime based on
# the system configuration.
# Examples:  -Xmx83886080
#            -Xmx81920k
#            -Xmx80m
# On Solaris 7 and Solaris 8 SPARC platforms, the upper limit for this value
# is approximately 4000m minus overhead amounts. On Solaris 2.6 and x86
# platforms, the upper limit is approximately 2000m minus overhead amounts.
# On Linux platforms, the upper limit is approximately 2000m minus overhead
# amounts.
#
# If you are getting the 'Low Memory Warning' Message Dialog while running
# the product, please increase the -Xmx value below from the default 800M to
# something greater, like 1024M or 1250M.  If after increasing the value,
# the product is no longer starting up because it fails to create a virtual
# machine, then please reduce the modified -Xmx value, or use a 64bit JDK
# which allows for very very large value for -Xmx.
#
# You can specify one value for any JDK using AddVMOption, OR you can specify
# separate values for 32-bit and 64-bit JDK's.
#
AddVMOption -Xmx2048m
# Add32VMOption -Xmx800m
# Add64VMOption -Xmx1024m

After constructing a master “product.conf”, you can then create a PowerShell script that will be referenced by the Windows shortcut used to launch SQL Developer. In this case the PowerShell script is called “launchSqldeveloperCustomConfig.ps1”. Ensure that this script is also saved in a location accessible by all users on the system:

$MasterConfFileLocation = "D:\Apps\sqlDeveloperConfig\masterSqldevProduct.conf"
$sqldevdir = @(Get-ChildItem -Path $HOME\AppData\Roaming\sqldeveloper\*\product.conf -Recurse -Force | % { $_.FullName })
foreach ($element in $sqldevdir) {
  If ((Get-FileHash $MasterConfFileLocation).Hash -ne (Get-FileHash $element).Hash) 
  {
    echo "Copying $GoldConfFileLocation to file : $element"
    Copy-Item $MasterConfFileLocation -Destination $element
  } Else {
    echo "File : $element already set"
  }   
}

Start-Process -FilePath "D:\Apps\oracle\product\18.0.0\client_1\sqldeveloper\sqldeveloper\bin\sqldeveloper64W.exe"

Customize the above script as needed to reflect the file locations on your system.


The last part of this customization is to create a shortcut that refers to this PowerShell script. Upon invocation, the product.conf will be checked and overwritten if necessary. This shortcut should be created on the desktop for all users:

Target:
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -ExecutionPolicy Bypass -Command "& 'D:\Apps\sqlDeveloperConfig\launchSqldeveloperCustomConfig.ps1'"

Start in:
D:\Apps\oracle\product\18.0.0\client_1\sqldeveloper\sqldeveloper\bin


With these easy steps, any changes you make to customize the JVM or other product-level options will be persistent across upgrades of SQL Developer.

Using FLEX ASM Disk Groups to Regain Storage Space

I recently came across an issue where a client was using more space than they anticipated in one of their Exadata Cloud Service instances within OCI. Unlike an on-premises Exadata, when that machine is delivered in OCI you do not have the ability to specify the redundancy of the disk groups, as they are always delivered with the disk groups set to HIGH redundancy. This level of redundancy is perfect for Production or DR type environments, but in a Test environment it may sometimes be worthwhile to free up space by reducing the redundancy. This option also doesn’t require the full tear-down of a disk group to rebuild it with the redundancy of your choice. Certainly a savings of downtime and man-hours.

Specific to this problem, a FLEX ASM disk group allows you to specify the redundancy to use at the file-type level within a given file group. There are many other settings that can be adjusted, but none of them are potentially as useful as the redundancy option. To make things easier, a file group is automatically created within ASM for each container and pluggable database. This granularity allows a ton of flexibility should you find this feature useful.

There are several other blogs out there that discuss this, however, none that I came across showed space numbers or the file group properties.

Now to demonstrate how it works, I will start with a simple 19c database that contains the sample HR data and some other sample data consuming about 8GB. (For the purposes of this test, the disk group has already been converted to FLEX.) At HIGH redundancy the space used is as follows:

State    Type  Rebal  Sector  Block  AU       Total_MB  Free_MB  Name
MOUNTED  FLEX  N      512     4096   4194304     92148    63352  DATA/
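
A summary along those lines can also be pulled directly from V$ASM_DISKGROUP (a minimal sketch; asmcmd lsdg shows similar information):

select name, state, type, sector_size, block_size,
       allocation_unit_size, total_mb, free_mb
  from v$asm_diskgroup;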

Further information about the individual data files and their redundancy property can be seen by querying the files within the specified file group:

sqlplus / as sysasm
select file_number,round(bytes/1024/1024) file_size, round(space/1024/1024) disk_size,type,redundancy from v$asm_file where filegroup_number = 3;
FILE_NUMBER  FILE_SIZE_MB  DISK_SIZE_MB  TYPE      REDUNDANCY
        275           100           324  DATAFILE  HIGH
        276           270           828  DATAFILE  HIGH
        277           340          1044  DATAFILE  HIGH
        278          1513          4585  TEMPFILE  HIGH
        279             5            24  DATAFILE  HIGH
        280          5120         15384  DATAFILE  HIGH

Now, we can convert the datafiles in the disk group to MIRROR (NORMAL) redundancy. In order to see the full results, an ASM rebalance operation will need to occur, which could take some time:

sqlplus / as sysasm
alter diskgroup DATA  modify filegroup VBOXDB1_PDB1  set 'datafile.redundancy' = 'mirror';

Now, once the rebalance finishes, we can query the size of the data files and the disk group:

State    Type  Rebal  Sector  Block  AU       Total_MB  Free_MB  Name
MOUNTED  FLEX  N      512     4096   4194304     92148    69204  DATA/

FILE_NUMBER  FILE_SIZE_MB  DISK_SIZE_MB  TYPE      REDUNDANCY
        275           100           220  DATAFILE  MIRROR
        276           270           556  DATAFILE  MIRROR
        277           340           700  DATAFILE  MIRROR
        278          1513          4585  TEMPFILE  HIGH
        279             5            16  DATAFILE  MIRROR
        280          5120         10260  DATAFILE  MIRROR

As you can see, only the data files were affected, for a space savings within the disk group of approximately 5.7GB.

Of course there are potential availability drawbacks to changing disk group redundancy; however, if you weigh the pros and cons for your situation, the ASM FLEX disk group may be a good alternative for your environment.

Remember the Recyclebin

When poking around a system, make sure to look at dba_recyclebin. Oftentimes this functionality within Oracle seems to be overlooked and forgotten about, partly because the “recyclebin” parameter defaults to “on” in Oracle 19c. You can also manage this parameter at the session level.

https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/RECYCLEBIN.html#GUID-B6F1AA5B-DF1E-4EAF-BDFD-E70EFBAD997A
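
For example, to toggle it just for your own session (a quick illustration):

alter session set recyclebin = off;
alter session set recyclebin = on;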

Should a table be dropped by mistake, it can often be recovered, so long as the “CAN_UNDROP” column for that object shows the value “YES” in dba_recyclebin, by issuing the following:

select owner, object_name, original_name, can_undrop from dba_recyclebin;
flashback table hr.emp to before drop;

It’s good functionality. The recyclebin also helps when your application must drop / create / recreate objects on a frequent basis, because the drop is much faster when the segments do not have to be physically dropped right away.

The area that most forget about is “purging” the recyclebin. As a matter of fact, I recently saw a system that had 5M objects in the recyclebin totaling several hundred GB of space, most of which was several years old. That’s several hundred GB of space being backed up every day for objects which have been dropped!
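
For reference, the manual clean-up is a single statement (the database-wide variant requires SYSDBA or an equivalent privilege):

-- purge only your own schema's dropped objects
purge recyclebin;

-- purge every dropped object in the database
purge dba_recyclebin;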

Generally, purging the recyclebin is a manual operation. However, there is a way to implement a dbms_scheduler job which will do this on a regularly scheduled basis.

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
	job_name => 'WEEKLY_PURGE_RECYCLEBIN',
	job_type => 'PLSQL_BLOCK',
	job_action => 'DECLARE
		V_PRE_RECYCLEBIN_COUNT PLS_INTEGER := 0;
		V_PRE_RECYCLEBIN_SIZE NUMBER(38) :=0;
		V_POST_RECYCLEBIN_COUNT PLS_INTEGER := 0;
        V_SQL_STATEMENT VARCHAR2(500);
        V_DAYS_TO_PURGE PLS_INTEGER := 7;
        V_STALE_DICT_STATS PLS_INTEGER := 0;
        err_code NUMBER;
        err_msg VARCHAR2(500);
        CURSOR purge_bin_cursor IS select owner, object_name, original_name from dba_recyclebin where type=''TABLE'' and to_date(droptime,''YYYY-MM-DD:HH24:MI:SS'') < sysdate - V_DAYS_TO_PURGE;
		BEGIN
			select round(nvl(sum(space * 8192),0)/1024/1024,2) into V_PRE_RECYCLEBIN_SIZE from dba_recyclebin;
			select count(*) into V_PRE_RECYCLEBIN_COUNT from dba_recyclebin where to_date(droptime,''YYYY-MM-DD:HH24:MI:SS'') < sysdate - V_DAYS_TO_PURGE;
			IF V_PRE_RECYCLEBIN_COUNT > 0
			THEN
				DBMS_OUTPUT.PUT_LINE(''Purging ''||V_PRE_RECYCLEBIN_COUNT||'' objects, reclaiming ''||V_PRE_RECYCLEBIN_SIZE||'' MB from the recyclebin'');
				FOR purge_rec in purge_bin_cursor
				LOOP
                	V_SQL_STATEMENT := ''purge table ''||purge_rec.owner||''."''||purge_rec.object_name||''"'';
                	DBMS_OUTPUT.PUT_LINE(''Executing: ''||V_SQL_STATEMENT||'' for original object name ''||purge_rec.original_name);
					EXECUTE IMMEDIATE V_SQL_STATEMENT;
				END LOOP;
				select count(*) into V_POST_RECYCLEBIN_COUNT from dba_recyclebin;
				DBMS_OUTPUT.PUT_LINE(''Recyclebin has ''||V_POST_RECYCLEBIN_COUNT||'' objects remaining in the recyclebin'');
				select count(*) into V_STALE_DICT_STATS from dba_tab_statistics where owner = ''SYS'' AND TABLE_NAME = ''OBJ$'' and stale_stats = ''YES'';
				IF V_STALE_DICT_STATS = 1
				THEN
					DBMS_OUTPUT.PUT_LINE(''Executing Dictionary Statistics'');
					DBMS_STATS.GATHER_DICTIONARY_STATS;
				ELSE
					DBMS_OUTPUT.PUT_LINE(''Dictionary Statistics not stale'');
				END IF;
			ELSE
				DBMS_OUTPUT.PUT_LINE(''Nothing to purge from the recyclebin'');
			END IF;
		EXCEPTION
			WHEN OTHERS THEN
		    err_code := SQLCODE;
      		err_msg := SUBSTR(SQLERRM, 1, 500);
      		DBMS_OUTPUT.PUT_LINE(''An error was encountered - ''||SQLCODE||'' -ERROR- ''||SQLERRM);
		END;',
	start_date => sysdate,
	repeat_interval => 'FREQ=MONTHLY;BYDAY=1SAT;BYHOUR=18',
	end_date => NULL,
	enabled => TRUE,
	auto_drop => FALSE,
	comments => 'This job purges the recyclebin on the first Saturday of the Month at 18:00'
);
END;
/

As implemented above, this job will run on the first Saturday of each month and purge everything in the recyclebin that was dropped more than 7 days earlier. No longer will you have to worry about the recyclebin containing too many un-purged objects!

Enjoy!

Ephemeral Ports and Oracle RAC Interconnect

Building upon my previous post “Ulimit nofiles Unlimited. Why not?”, ensuring that there are enough ephemeral ports for interconnect traffic is important. As a matter of fact, it’s part of the prerequisite checks for a RAC installation. But what are they and why are they important? For the purposes of this post I won’t deal with excessive interconnect traffic; instead I will deal with latency caused by a lack of ephemeral ports.

Simply explained, ephemeral ports are the short-lived transport protocol ports used for Internet Protocol (IP) communications. Oracle RAC interconnect traffic specifically utilizes the User Datagram Protocol (UDP), with a port assigned on the client end of a client-server communication to a well-known port on a server. What that means is that when a client initiates a request, it chooses a random port from the ephemeral port range and expects the response at that port only. Since interconnect traffic is usually short-lived and frequent, there need to be enough ports available to ensure timely communication. If there are not enough ports available for the needed interconnect traffic, interconnect latency can occur.

In this case, while performing a healthcheck on a system, we immediately noticed an issue with the interconnect, manifesting itself in AWR as long ping times between nodes, as seen in dba_hist_interconnect_pings:

ping-stats-4hr-before

In my experience, in a system that has the interconnect tuned well and not a ton of cross instance traffic, these should be in the 5ms or less range.
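
If you want to pull the raw numbers behind that AWR section yourself, they live in DBA_HIST_INTERCONNECT_PINGS; a rough sketch (join to DBA_HIST_SNAPSHOT for the snapshot times):

-- counts and cumulative wait figures for the 8K ping messages between instances
select snap_id, instance_number, target_instance,
       cnt_8k, wait_8k
  from dba_hist_interconnect_pings
 order by snap_id, instance_number, target_instance;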

So we started to check all of the usual suspects such as NIC and DNS issues. One thing we did find was that the private addresses were missing from /etc/hosts, so that could contribute some of the latency, right? While adding those helped smooth out the spikes, it didn’t address the entire issue. So, as part of my problem-solving process, I next started poking around with the “netstat” command to see if anything turned up there, and most certainly something did:

udpNoPorts-Example-pre

udpNoPorts… What’s that and why is it so high? That led me to check the prerequisites and ultimately the ephemeral port ranges. Sure enough, they were set incorrectly, so the system was waiting on available ports to send interconnect traffic, causing additional latency.

Oracle specifies the following settings:

"tcp_smallest_anon_port"  = 9000
"udp_smallest_anon_port" = 9000
"tcp_largest_anon_port"  = 65500
"udp_largest_anon_port" = 65500

In this case they were set to:

"tcp_smallest_anon_port" = 32768
"udp_smallest_anon_port" = 32768
"tcp_largest_anon_port" = 60000
"udp_largest_anon_port" = 60000

After correcting the settings, we were able to cut the latency in half:

ping-stats-4hr-after

Interconnect post fix

Now, we still have some work to do because the interconnect latency is still higher than desired, but we are now likely dealing with the way the application connects to the database and is not RAC-aware, rather than an actual hardware or network issue.

Incremental improvement is always welcomed!

 

Ulimit nofiles Unlimited. Why not?

I was going back through all of my notes of various things I have encountered, and one thing I have been seeing a lot of lately is ulimit settings for various installations not following the recommended defaults. In just about every case, when I queried the team that either made or requested the setting, the standard answer was “I just didn’t want to have to worry about it, so we set it to unlimited”. As with anything, there are reasons why certain settings are recommended, and this post seeks to show you why as it relates to the ulimit setting “nofiles”.

For example, in the WebLogic documentation (as of 12.2.1.4), the ulimit requirements specifically state that the “/etc/security/limits.conf” should contain:

* soft nofile 4096
* hard nofile 65536

Interestingly enough, the Oracle Database guide (as of 19c) states that the soft nofile limit should be at least 1024 and the hard nofile limit should be “at least” 65536 for both the grid and oracle users, which is different from WebLogic. So as you can see, one size doesn’t fit all.

One area where I saw this become important was during the boot sequence of WebLogic / Tuxedo. We had an issue where sometimes a server would boot and sometimes it wouldn’t. At the time, the best we could tell was that it depended on how busy the CPU was during the boot sequence, and that led us to truss the “tmboot” process. With the help of @AlexFatkulin, what we found was very interesting. We saw this message recurring over and over:

…..
25029:  close(5870682)                                  Err#9 EBADF
25029:  close(5870683)                                  Err#9 EBADF
25029:  close(5870684)                                  Err#9 EBADF
25029:  close(5870685)                                  Err#9 EBADF
25029:  close(5870686)                                  Err#9 EBADF
25029:  close(5870687)                                  Err#9 EBADF
25029:  close(5870688)                                  Err#9 EBADF
…..

This message relates directly to the closing of “open files”. But wait a minute, why are there open file descriptors if the application isn’t up? Part of what “tmboot” was doing during startup was trying to close all possible file descriptors, regardless of whether each descriptor was open or not. So if ulimit -n was set to “unlimited”, that resulted in 2147483647 possible file descriptors. The boot code was then looping, calling close() from 1 to 2147483647, which was taking a very long time and resulting in a practically infinite loop. As a corrective action, we set the limit to the recommended defaults, and guess what? The server booted every single time.

It looks like setting the ulimit hard “nofiles” value to unlimited for WebLogic / Tuxedo exposed a bad coding practice: the code does not track which file descriptors it opened and instead just tries to close every possible descriptor up to the limit.

Bottom line?  Always start with the System Recommendations and go from there.  Don’t set things to UNLIMITED and think it’s a way to not have to worry about your system.  It could expose the bad coding practices of others.