
Adjusting Available CPU Threads in SuperCluster Local Zones Online

Lately, I have been working on the Oracle SuperCluster platform. After working with Linux for the past several years, it was quite refreshing to get back to an OS that so many of us have worked on. As part of our local zone layout, we had a requirement to allocate a different number of M7 CPU threads to each zone. While researching the best way to do this, I found varying information, so I thought I would blog about the approach that worked best in this situation.

In this case, CPU thread control was set up using resource pools. Solaris Resource Pools are described here:

Oracle Solaris Resource Pools

By default, the resource pool does not restrict access or control scheduling. By modifying the pool configuration and assigning specific threads to the pset associated with each zone, you allocate those threads to the local zones.

Here’s how:

First, let’s display the pool layout. Since we only need to look at thread allocation (the command actually outputs a ton of data), I will limit the output to only what is relevant.

Find the pool configurations you want to affect. The pset resource pertains directly to CPU threads, so that is what we will look for:

#  poolcfg -dc info | egrep 'pset |pset.size|pset.min|pset.max'

        pset pset_[host name]_id_25289
                uint    pset.min 32
                uint    pset.max 32
                uint    pset.size 32
        pset pset_[host name]_id_25223
                uint    pset.min 64
                uint    pset.max 64
                uint    pset.size 64
        pset pset_[host name]_id_25287
                uint    pset.min 64
                uint    pset.max 64
                uint    pset.size 64
        pset pset_[host name]_id_25224
                uint    pset.min 32
                uint    pset.max 32
                uint    pset.size 32
        pset pset_default
                uint    pset.min 1
                uint    pset.max 65536
                uint    pset.size 64

In this case, we can see that of the 256 CPU threads available to this global domain, 32 have been allocated to the first local zone, 64 each to the next two, and 32 to the last, leaving 64 in the default pool, available to the global zone.
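
As a quick sanity check, you can confirm the total online thread count from the global domain with psrinfo (a hedged one-liner; psrinfo prints one line per virtual CPU, so the count will reflect your own configuration):

# psrinfo | wc -l
     256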

If you would like to see the file which details the complete set of rules for the resource pool, you can look here:

/etc/pooladm.conf

Before making any modifications, it is best to ensure that the latest configuration has been saved. To do so, run this command from the global domain:

# pooladm -s

Once this has been done, you can proceed with the reallocation. In this example, I will grow one pool by taking CPU threads from the default pool. Using “-d” operates directly on the kernel state, so use it with caution; on a running system, I would reallocate in small chunks, which gives the operating system time to adapt to the changing CPU configuration. Here we will add 8 threads to a local zone which already has 32:

# poolcfg -dc 'modify pset pset_[host name]_id_25289 ( uint pset.min = 40 ; uint pset.max = 40)'

At this point, because “-d” was used, the change has been applied to the in-kernel (dynamic) configuration, but it has not yet been saved to the configuration file (/etc/pooladm.conf). To persist the change and commit it to the system, save and then commit the configuration:

# pooladm -s

# pooladm -c

Once this change is done, we can inspect the configuration by running the same command shown above. Notice the changes below:

#  poolcfg -dc info | egrep 'pset |pset.size|pset.min|pset.max'

        pset pset_[host name]_id_25289
                uint    pset.min 40
                uint    pset.max 40
                uint    pset.size 40
        pset pset_[host name]_id_25223
                uint    pset.min 64
                uint    pset.max 64
                uint    pset.size 64
        pset pset_[host name]_id_25287
                uint    pset.min 64
                uint    pset.max 64
                uint    pset.size 64
        pset pset_[host name]_id_25224
                uint    pset.min 32
                uint    pset.max 32
                uint    pset.size 32
        pset pset_default
                uint    pset.min 1
                uint    pset.max 65536
                uint    pset.size 56

If you need to transfer CPU threads from one pool to another, you can do so by executing the following command:

# poolcfg -dc 'transfer 8 from pset pset_default to pset_[host name]_id_25289'

Or, if you want to transfer a specific CPU thread:

# poolcfg -dc 'transfer to pset pset_[host name]_id_25289 ( cpu 5 )'
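
If you want to see which CPU IDs currently belong to a pool before moving a specific one, the same info subcommand can be scoped to a single pset. A hedged example:

# poolcfg -dc 'info pset pset_default'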

The rest of the steps remain the same. In the next post I will show you how to verify the additional CPU in each local zone.


Improper Use of the Oracle ‘Rownum’ Pseudocolumn

The other day I found myself needing to explain to some developers why their use-case of the Oracle ‘rownum’ pseudocolumn was yielding a result in one database instance, but a completely different result in another.

In this situation, the correct result is the maximum value of the column; however, this query was also occasionally returning the minimum value of the column instead. How could this happen? The answer lies in using the ‘rownum’ pseudocolumn correctly. Of course there are other (probably better) ways to write this query without the use of ‘rownum’, but I’m not here to debate that right now….

** Note the tables in the query have been changed to protect the innocent.

select column_a from (select column_a,rownum rowid0 from schema.table order by column_a desc ) aa where aa.rowid0 =1;

The Oracle documentation states that which result you get depends on how Oracle accessed the rows in the query. Your results can vary based on many factors (e.g., the order in which the data was inserted into the table, or whether there is an index on the table and how that index is used). For further information, see the documentation here:

https://docs.oracle.com/cd/B28359_01/server.111/b28286/pseudocolumns009.htm#SQLRF00255

For further explanation, let’s explore the explain plans encountered in each system:

Correct Result:


--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                               |       |       |     1 (100)|          |
|*  1 |  VIEW                        |                               |  1257 | 32682 |     1   (0)| 00:00:01 |
|   2 |   COUNT                      |                               |       |       |            |          |
|   3 |    INDEX FULL SCAN DESCENDING| SCHEMA_TABLE_PK               |  1257 |  6285 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("AA"."ROWID0"=1)


22 rows selected.

Incorrect Result:


---------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                               |       |       |     4 (100)|          |
|*  1 |  VIEW                   |                               |  1257 | 32682 |     4  (25)| 00:00:01 |
|   2 |   SORT ORDER BY         |                               |  1257 |  6285 |     4  (25)| 00:00:01 |
|   3 |    COUNT                |                               |       |       |            |          |
|   4 |     INDEX FAST FULL SCAN| SCHEMA_TABLE_PK               |  1257 |  6285 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("AA"."ROWID0"=1)


24 rows selected.

As you can see, the major difference here is that the two systems did not choose the same access path to return the data. One plan utilized an ‘INDEX FULL SCAN DESCENDING’ access path, while the other utilized an ‘INDEX FAST FULL SCAN’ access path.

Is this really that different? Turns out it is.

Ask Tom explained the reason very concisely (Ask TOM, “Difference between Full Index Scans and Fast Full Index Scans”). They state:

“An index fast full scan reads the ENTIRE index, unsorted, as it exists on disk. It is basically using the index as a “skinny” version of the table. The query in question would only be accessing attributes in the index (we are not using the index as a way to get to the table, we are using the index INSTEAD of the table) We use multiblock IO and read all of the leaf, branch and the root block. We ignore the branch and root blocks and just process the (unordered) data on the leaf blocks.

An index full scan is when we read the index a block at a time – from start to finish. We’ll read the root block, navigate down the left hand side of the index (or right if we are doing a descending full scan) and then when we hit the leaf block – we’ll read across the entire bottom of the index – a block at a time – in sorted order. We use single block IO, not multiblock IO for this operation.”

Well, there you have it: that is why the results differ. How can we keep this from occurring in the future? The answer is to use the ‘rownum’ pseudocolumn correctly. Remember, rownum is not a real column; to get the right results, it needs to be assigned after the data is in the sorted order that you want, so write the query so that ‘rownum’ is applied after the sort. Using the same query above, let’s rewrite it in such a way that it achieves the desired result:

select column_a from (select column_a,rownum from (select column_a from schema.table order by column_a desc)) where rownum = 1;

See the steps now?

  1. Retrieve data in sorted order
  2. Apply the ‘rownum’ pseudocolumn
  3. Filter for the desired value in the list

If you must use the ‘rownum’ pseudocolumn, writing your query in this manner will ensure that you always get the same result.
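
As an aside, when the requirement is simply “the largest value,” you can avoid ‘rownum’ entirely. Two hedged alternatives against the same (sanitized) table, the second requiring 12c or later:

select max(column_a) from schema.table;

select column_a from schema.table order by column_a desc fetch first 1 row only;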

Enjoy!

Oracle Native Network Encryption

With all of the security concerns out there and data being more important than ever, it might also be time to consider encrypting your data connections, even within your own data center. If you are utilizing the cloud, there should be no question that some form of encryption should be used. In terms of what Oracle provides, you have two options: Native Encryption and SSL/TLS encryption. As of the time of this writing, both options are free to use and are no longer part of the Advanced Security Option. In this post, I will discuss the setup and use of Native Encryption, with SSL/TLS to come later.

Native network encryption provided by the Oracle client is by far the easiest to set up; by the same token, it is also the easiest to bypass. That said, there are ways to set it up such that those risks are mitigated. Because of those risks, Native encryption is a great solution within a private data center, but not in a public or hybrid cloud scenario. SSL/TLS would be the option to pursue in a public or hybrid cloud scenario, and I plan to discuss that in a future post.

Set Up:

Setup of Native encryption is pretty straightforward, especially for OCI “Thick” connections and any other method that utilizes the sqlnet.ora file. In cases where that file is not utilized, there is some additional setup, which I will discuss as well.

First, it is important to understand the parameters which Native encryption uses. Luckily there are only a few, but there are many possible combinations, and those combinations and their results are detailed here:

Version 12.x (OCI Thick):
https://docs.oracle.com/database/121/DBSEG/asoconfg.htm#DBSEG020

Version 12.x (JDBC Thin):
https://docs.oracle.com/database/121/DBSEG/asojbdc.htm#DBSEG9609
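
In short, each side declares one of four levels (REJECTED, ACCEPTED, REQUESTED, or REQUIRED) for both encryption and crypto-checksumming, and the connection negotiates the outcome from that pair; the only combination that refuses the connection outright is REQUIRED on one side with REJECTED on the other.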

By default, both sides of any client connection are configured to ‘ACCEPT’ an encrypted connection. Because of this, you only have to configure one side or the other, but for safety, I would recommend configuring both sides.

In 11.2, there are fewer options in terms of encryption and checksum algorithms, so for simplicity, I will just illustrate a 12.x ‘THICK’ client connection to an 11.2.0.4 database.

To enable this option within the ‘THICK’ client:

# sqlnet.ora Network Configuration File: /u01/app/oracle/product/12.2.0/client_1/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

SQLNET.ENCRYPTION_CLIENT = REQUIRED
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)

SQLNET.CRYPTO_CHECKSUM_CLIENT = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA1)

If you are utilizing JDBC ‘thin’ connections, then you can also set the properties within the java code itself:

prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_ENCRYPTION_LEVEL,level);
prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_ENCRYPTION_TYPES,algorithm);
prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_CHECKSUM_LEVEL,level);
prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_CHECKSUM_TYPES, algorithm);
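
For completeness, here is a minimal, hedged sketch of a thin-driver connection using those properties; the URL, credentials, and class name are placeholders, and the parenthesized type-list format follows Oracle’s JDBC documentation examples:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;
import oracle.jdbc.OracleConnection;
import oracle.jdbc.pool.OracleDataSource;

public class NativeEncryptionTest {
    public static void main(String[] args) throws Exception {
        Properties prop = new Properties();
        // Mirror the sqlnet.ora settings shown above
        prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_ENCRYPTION_LEVEL, "REQUIRED");
        prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_ENCRYPTION_TYPES, "( AES256 )");
        prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_CHECKSUM_LEVEL, "REQUIRED");
        prop.setProperty(OracleConnection.CONNECTION_PROPERTY_THIN_NET_CHECKSUM_TYPES, "( SHA1 )");

        OracleDataSource ods = new OracleDataSource();
        ods.setURL("jdbc:oracle:thin:@//dbhost:1521/orcl"); // hypothetical host/service
        ods.setUser("scott");                               // hypothetical credentials
        ods.setPassword("tiger");
        ods.setConnectionProperties(prop);

        // Run the same validation query used later in this post
        try (Connection conn = ods.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "select network_service_banner from v$session_connect_info"
                 + " where sid in (select distinct sid from v$mystat)")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // expect AES256 / SHA1 adapter lines
            }
        }
    }
}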

And edit the sqlnet.ora on the server:

# sqlnet.ora Network Configuration File: /u01/app/11.2.0.4/grid/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /u01/app/oracle

SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)

SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1)

Validation:

There are a few ways to validate that encryption is actually taking place. The easiest is to execute the following SQL upon login to the database:

If no encryption is occurring, then the banner will look like this:

SQL> select network_service_banner from v$session_connect_info
  2  where sid in (select distinct sid from v$mystat);

NETWORK_SERVICE_BANNER
-----------------------
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Oracle Advanced Security: encryption service for Linux: Version 11.2.0.4.0 - Production
Oracle Advanced Security: crypto-checksumming service for Linux: Version 11.2.0.4.0 - Production

If encryption is happening, then the banner will return additional data:

SQL> select network_service_banner from v$session_connect_info
  2  where sid in (select distinct sid from v$mystat);

NETWORK_SERVICE_BANNER
-----------------------
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Oracle Advanced Security: encryption service for Linux: Version 11.2.0.4.0 - Production
Oracle Advanced Security: AES256 encryption service adapter for Linux: Version 11.2.0.4.0 - Product
Oracle Advanced Security: crypto-checksumming service for Linux: Version 11.2.0.4.0 - Production
Oracle Advanced Security: SHA1 crypto-checksumming service adapter

Notice the 2 additional lines in the banner when encryption is occurring:
Oracle Advanced Security: AES256 encryption service adapter for Linux: Version 11.2.0.4.0 – Product
Oracle Advanced Security: SHA1 crypto-checksumming service adapter

So the database indicates that encryption is happening, but what is actually happening on the wire? To determine that, we can either use a product like Wireshark or trace the connection to the listener. To do the latter, enable the following parameters in the sqlnet.ora on the client:

DIAG_ADR_ENABLED=OFF
TRACE_DIRECTORY_CLIENT=/home/oracle/trace
TRACE_FILE_CLIENT=nettrace
TRACE_LEVEL_CLIENT=16

And in the trace file, you will see an entry similar to the following:

(3310995200) [24-APR-2017 10:19:21:077] na_tns:         Encryption is active, using AES256
(3310995200) [24-APR-2017 10:19:21:077] na_tns:         Crypto-checksumming is active, using SHA1

So as you can see, the setup of Native encryption is quite easy. As with any additional feature, performance could be compromised, so make sure you test all combinations thoroughly in order to determine what works best in your environment. Enjoy!

Local Listener vs. Remote Listener vs. Listener Networks

Introduction:

Often, when it comes to the database, you may see separate networks configured for the following types of traffic:

  • Backups
  • Management
  • Client

Recently, one of the configurations that I was a part of took it a step further than that and had a few additional networks configured:

  • Data Guard
  • Private Non-Routed Network

One additional requirement was that a SCAN listener be present for each one of these networks. I wasn’t given the opportunity to set this up, so we had to trust that the other party set all of the correct parameters. No big deal, right?

The Problem:

Once all of the networks were configured and SCAN listeners were in place for each one, connectivity was very erratic: clients would connect at times, and at other times they would not.

It wasn’t until we used a packet analyzer (Wireshark) that we really saw what was going on. Upon investigation, a colleague found that the SCAN listener would occasionally return the wrong VIP to the connecting client. The good news was that it was the SAME wrong VIP each time. But why was it doing this? The culprit ended up being incorrect or missing entries in the following parameters:

  • REMOTE_LISTENER
  • LOCAL_LISTENER
  • LISTENER_NETWORKS

The Oracle documentation on this was not a ton of help either.

The Solution:

We found that an entry for every local listener was present in the LOCAL_LISTENER parameter, every SCAN listener was present in the REMOTE_LISTENER parameter, and the LISTENER_NETWORKS parameter was blank. As it turns out, LOCAL_LISTENER and REMOTE_LISTENER should contain entries for the listeners present on the first network ONLY.

Incorrect Parameters:

local_listener='(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=client-vip.example.com)(PORT=1534))(ADDRESS=(PROTOCOL=TCP)(HOST=vlan-vip.example.com)(PORT=1534)))'
remote_listener='client-scan:1534','vlan-scan:1534'
listener_networks=''

The LISTENER_NETWORKS parameter is responsible for registration of listeners for ALL other networks.

Correct Parameters:

local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=client-vip.example.com)(PORT=1534))'
remote_listener='client-scan:1534'
listener_networks='((NAME=netVLAN)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vlan-vip.example.com)(PORT=1534))))(REMOTE_LISTENER=vlan-scan:1534))'
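
For reference, these are typically set with ALTER SYSTEM on a running system. A hedged sketch (the instance SID is a placeholder; note that LOCAL_LISTENER is set per instance, since each node has its own VIP):

alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=client-vip.example.com)(PORT=1534))' scope=both sid='orcl1';
alter system set remote_listener='client-scan:1534' scope=both sid='*';
alter system set listener_networks='((NAME=netVLAN)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vlan-vip.example.com)(PORT=1534))))(REMOTE_LISTENER=vlan-scan:1534))' scope=both sid='*';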

Once these changes were made, the intermittent connection issues were gone and Wireshark confirmed that the listeners were returning the correct VIP for the network being requested.

GaOUG Oracle Tech Day 2016

On March 3, 2016, I’ll be at the AMA Executive Conference Centers for GaOUG’s Tech Day 2016. This Oracle user conference promises to be packed with some of the best content the industry has to offer. If you will be in the Atlanta area and use Oracle products, consider attending the event!

Specifically, I will be presenting:

Presentation Name:
Application High Availability and Upgrades Using Oracle GoldenGate

Abstract:
This presentation will discuss the techniques and methods used to deploy a highly available active / active configuration using Oracle GoldenGate. Specifically, discussion will cover how to deploy GoldenGate in a standard configuration, how to utilize the built-in Conflict Detection and Resolution (CDR) functionality, and the methodologies available for deploying application or database architecture changes within this configuration. Focus will also be given to how DBAs must involve other IT resources to achieve a successful deployment.

Click here for more information or to register for GaOUG’s Tech Day.

I appreciate your support and look forward to seeing you on March 3. If you have any questions, please feel free to contact me anytime.

Data Pump May Cause Exponential Growth in USER_HISTORY$ Records

Data Pump is a common method for moving data from one schema to another. Oftentimes, the easiest way to do this is a full schema export followed by a full schema import with the appropriate ‘remap’ options. Usually during a job like this the target user already exists, and it was this situation that uncovered an unexpected behavior in Data Pump. In our case, a weekly data copy job had run for 37 weeks when it suddenly started taking noticeably longer and longer, to the point where it would run for days and never finish.

Upon investigation, we found that the job was initially hanging on the import during this step:

Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY

Because we didn’t really know what was going on and we needed the job to finish, we excluded “PASSWORD_HISTORY” from the import. Then, just a few short weeks later, the job was exhibiting the same behavior on the export. Clearly something more was going on. After taking some time to analyze ASH reports, it was clear that the process was getting hung while processing the USER_HISTORY$ table.

What is the USER_HISTORY$ table?

As it turns out, this table stores the actual password history for each user in the database.

SQL> desc user_history$
 Name                           Null?    Type
 ------------------------------ -------- ----------------
 USER#                          NOT NULL NUMBER
 PASSWORD                                VARCHAR2(30)
 PASSWORD_DATE                           DATE

This table is never purged, so each time a user’s password is changed, a row is written to it. In our case, what began as a single password-history row turned into millions of rows over the weeks the job had been running, and Data Pump was exacerbating the issue. With each export / import, and because we were alternating between two schemas, one row became two, two became three, three became five, and so on, until by week 37 the table had almost 15 million rows.
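
If you do the math, alternating between two schemas makes the counts grow like a Fibonacci sequence, since each import adds the source schema’s current row count on top of the target’s: 1, 2, 3, 5, 8, and so on. The 36th Fibonacci number is 14,930,352, which lines up with the almost 15 million rows we found around week 37.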

Demonstration:

To demonstrate the issue, we will use very simple parameter files and export / import the same schema:
expdp-HR2.par:

USERID='/ as sysdba'
DIRECTORY=EXP_DIR
DUMPFILE=EXP_DIR:hr_exp.dat
LOGFILE=EXP_DIR:hr_exp.log
SCHEMAS=HR2
CONTENT=ALL
COMPRESSION=ALL
REUSE_DUMPFILES=Y
JOB_NAME=EXPHR_JOB

impdp-HR2.par:

USERID='/ as sysdba'
DIRECTORY=EXP_DIR
DUMPFILE=hr_exp.dat
LOGFILE=hr_imp.log
TABLE_EXISTS_ACTION=REPLACE

We should initially check the user_history$ table to determine the baseline for a particular user:

SQL> select name, count(*)
2 from user_history$, user$
3 where user_history$.user# = user$.user#
4 and name like 'HR%'
5 group by name order by name;

NAME  COUNT(*)
----- ---------------
HR    2
HR2   6

We will then run an export and import using the parameter files above and then re-run the query:

SQL> select name,count(*)
 2 from user_history$ a, user$ b
 3 where a.user#=b.user#
 4 and name like 'HR%'
 5 group by name order by 1;

NAME  COUNT(*)
----- ---------------
HR    2
HR2   12

And sure enough, the password history rows for HR2 have doubled just from running the import one time! The import simply adds the six exported history rows on top of the six already present; nothing deduplicates them.

Solution:

Luckily, we have a few solutions available to us, all with minimal impact. Oracle has made a patch available (16811897) for versions 11.2.0.3 and above, you can upgrade to 12.1.0.2, or you can exclude ‘PASSWORD_HISTORY’ from the export / import process, as shown below.
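
A minimal sketch of the exclusion approach, for example, is a single extra line added to each of the parameter files shown earlier:

EXCLUDE=PASSWORD_HISTORY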

Of course if you are running into this issue, make sure you test thoroughly as results may vary!

Understand Integrated Replicat Performance using the GGSCI STATS Command

With GoldenGate 12c, Integrated Replicat has become the preferred option for applying transactions (Classic and Coordinated are the others). In order to get the most efficiency out of an Integrated Replicat, it is important to understand which types of operations cannot be applied by the integrated apply server. Operations which cannot be applied in “integrated” mode are applied in “direct” mode instead.

So what is the difference between Integrated and Direct?

“Integrated” applies transactions via Logical Change Records (LCRs), whereas “direct” applies transactions as SQL statements via OCI. “Direct” also requires that transactions be applied serially, whereas “integrated” transactions can be applied in a parallel, coordinated fashion.

There are some limitations to applying changes via an LCR. The following items can only be applied in “direct” mode:

  • DDL operations
  • Sequence operations
  • SQLEXEC parameter within a TABLE or MAP parameter
  • EVENTACTIONS processing
  • UDTs (user-defined types) – note that if the extract uses USENATIVEOBJSUPPORT to capture the UDT, Integrated Replicat will apply it with the inbound server; otherwise it will be handled by Replicat directly

Understanding the types of operations going on inside your database should be one of the first steps in setting up a GoldenGate environment and heavy use of “direct” transactions within an Integrated Replicat will likely reduce the performance of the replicat.

Integrated Replicat Statistics Explained:

  • Total transactions – Total transactions processed by replicat
  • Redirected – Number of transactions for which replicat is redirected to classic mode
  • DDL operations – Number of DDL operations processed in direct mode
  • Stored procedures – Number of Stored procedures in direct mode
  • Datatype functionality – Number of user defined data types processed in direct mode
  • Event actions – Number of event actions processed in direct mode
  • Direct transactions ratio – Percentage of transactions that the Integrated Replicat converted to direct mode in order to apply them

The direct transactions ratio is computed by either:

Direct transactions ratio = (Redirected/Total transactions)*100
(or)
Direct transactions ratio = ((DDL operations + Stored procedures + Datatype functionality + Event actions) / Total transactions)*100

The lower the direct transactions ratio, the better the performance. This is one thing that should be considered when changing to Integrated Replicat: if the ratio is high, it could be better to use Classic Replicat to improve performance.

Example of Integrated Replicat Statistics:

GGSCI (orcl12c-rac1.localdomain) 4> stats R_TST_R1 totalsonly *.*

Sending STATS request to REPLICAT R_TST_R1 ...

Start of Statistics at 2015-09-14 19:34:26.
 Integrated Replicat Statistics:

Total transactions 11829288.00
 Redirected 0.00
 DDL operations 0.00
 Stored procedures 0.00
 Datatype functionality 295977.00
 Event actions 0.00
 Direct transactions ratio 2.50%
.........


 GGSCI (orcl12c-rac1.localdomain) 2> stats R_TST_R2 totalsonly *.*

Sending STATS request to REPLICAT R_TST_R2 ...

Start of Statistics at 2015-09-14 19:11:48.
 Integrated Replicat Statistics:

Total transactions 2781917.00
 Redirected 0.00
 DDL operations 0.00
 Stored procedures 0.00
 Datatype functionality 733578.00
 Event actions 0.00
 Direct transactions ratio 26.37%
.........

As you can see in the second example, the “Direct transactions ratio” is much higher. Because I know this system, I know that this higher ratio is entirely attributable to the replication of sequence objects. Given that, a few options to lower the direct transactions ratio and improve performance might include discontinuing sequence replication or splitting the sequences into their own ‘classic’ mode replicat. Of course, if current performance is adequate, you could do nothing. I would probably consider changing to Classic Replicat when the “Direct transactions ratio” approached 50%.
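
For the record, the ratios are easy to verify from the counters above: 295,977 / 11,829,288 × 100 ≈ 2.50% for R_TST_R1, and 733,578 / 2,781,917 × 100 ≈ 26.37% for R_TST_R2, so in both cases the ratio is driven entirely by the “Datatype functionality” counter.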

No matter the situation, it is important to understand the statistics which GoldenGate is recording on your behalf. It may lend good insight into what is going on in your environment.