With the wider adoption of Oracle container databases and the impending de-support of non-container databases coming with Oracle Database 20c, it is well worth considering how this will impact your standby databases and Disaster Recovery (DR) environments.
The good news is that, overall, the principle of standby databases has not changed significantly, but there are a few key points that need to be understood to manage standby databases in the multitenant architecture effectively. There are also a few new features that will help with standby database management.
First, let's consider that all pluggable databases (PDBs) in a container share the same instance components:
- Background Processes
- Shared and Process memory (SGA/PGA)
- Online and archive redo logs
So, bearing those in mind, how do we protect a pluggable database (PDB) with a Standby database? For the most part, as we did before with a few key clarifications:
- The database role is defined at the container level; individual pluggable databases (PDBs) do not have their own roles.
- The database role is changed by issuing DDL at the container level; all PDBs change at the same time.
- ALTER DATABASE RECOVER MANAGED STANDBY functions only in the root container; it is not allowed in a PDB.
- The ALTER PLUGGABLE DATABASE [OPEN|CLOSE] SQL statement is supported on the standby, provided you have already opened the root container.
- The ALTER PLUGGABLE DATABASE RECOVER statement is not supported on the standby. (Standby recovery is always at the CDB level.)
- A standby container database does not need to contain standby copies of every PDB in the primary container.
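A quick way to see the first two points in practice: the role lives in V$DATABASE in the root, and standby recovery DDL is rejected inside a PDB. A sketch (PDB1 is an assumed PDB name):

```sql
-- In the root container (CDB$ROOT): one role for the whole CDB
SELECT db_unique_name, database_role FROM v$database;

-- Inside a PDB, standby recovery DDL is not allowed
ALTER SESSION SET CONTAINER = PDB1;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
-- ORA-65040: operation not allowed from within a pluggable database
```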
Create Standby Container
Let's walk through a quick example. Once we have our primary container set up with its PDBs, we can start thinking about setting up a standby container database.
With multitenant, the standby is set up at the container level, and all PDBs are then protected if you choose them to be. So first we need to set up the CDB with a standby, much as we did with a non-CDB:
- Enable Force Logging Mode
- Configure the Primary Database to Receive Redo Data
- Set Primary Database Initialization Parameters (log_archive_dest_2 etc)
- Enable Archiving
- Create a Backup Copy of the Primary Database Data Files
- Create a Control File for the Standby Database
- Create a Parameter File for the Standby Database
- Copy Files from the Primary System to the Standby System
- Set Up the Environment to Support the Standby Database
- Copy password file to standby Oracle Home
- Configure and start new Container listener
- Update tnsnames on both nodes & verify connectivity:
From Primary node: sqlplus sys@CNTDB2 as sysdba
From Standby node: sqlplus sys@CNTDB1 as sysdba
- Start the Physical Standby Database (nomount)
- Restore the backup
rman target /
restore standby controlfile from '/oracle/app/oracle/recovery_area/CNTDB1/CNTDB1_standby.ctl';
alter database mount;
- Start Standby log apply
sqlplus / as sysdba
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
- Verify the Physical Standby Database Is Performing Properly
SELECT ROLE, THREAD#, SEQUENCE#, ACTION FROM V$DATAGUARD_PROCESS;
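For reference, the tnsnames.ora entries behind the connectivity test above might look like the following (host names and ports are hypothetical; adjust for your environment):

```
CNTDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = CNTDB1)))

CNTDB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = CNTDB2)))
```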
Verify PDB Status
Now that we have a standby container database, let's check on the PDBs:
set lines 100
column name format a20

SQL> SELECT db_unique_name, open_mode, database_role FROM v$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
CNTDB2                         MOUNTED              PHYSICAL STANDBY

SQL> SELECT name, open_mode FROM v$pdbs;

NAME                 OPEN_MODE
-------------------- ----------
PDB$SEED             READ ONLY
PDB1                 MOUNTED
Note that the PDB name matches the primary PDB name (PDB1).
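While connected to the standby root, transport and apply lag can also be checked. These are standard Data Guard views, unchanged by multitenant:

```sql
-- Run on the standby; lag is reported for the whole CDB
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```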
Adding additional PDBs
Because the role is set at the container level, we cannot add a standalone PDB to the standby container: every standby PDB must have an associated primary PDB in the primary CDB. When we create a primary PDB, we can choose at that point whether or not it is protected by a standby. A standby copy can also be added or removed outside of this operation, as indeed we did above.
Adding a new PDB to the primary container with a standby (seed)
This can be done in one step, either by creating a new database from the seed or by cloning an existing PDB.
SQL> CREATE PLUGGABLE DATABASE TESTPDB2 ADMIN USER PDMADMIN IDENTIFIED BY ******* STANDBYS=('CNTDB2');
This is great: it creates a new PDB (based on the seed database) and also creates the corresponding standby PDB in the standby container.
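The STANDBYS clause can equally be used to exclude a PDB from standby protection altogether; a sketch (TESTPDB4, the admin user, and the password placeholder are hypothetical):

```sql
-- This PDB will exist only in the primary CDB; no standby copy is created
CREATE PLUGGABLE DATABASE TESTPDB4 ADMIN USER PDBADMIN
  IDENTIFIED BY password STANDBYS=NONE;
```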
Adding a new PDB to the primary container with a standby (Clone)
SQL> CREATE PLUGGABLE DATABASE TESTPDB3 FROM PDB1 STANDBYS=('CNTDB2');
Again, we get a new primary and standby PDB in one statement. Very cool!
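On the standby side, V$PDBS can confirm which PDBs are actually being recovered, via the RECOVERY_STATUS column (available since 12.2):

```sql
-- Run in the standby root container; excluded PDBs show DISABLED
SELECT name, recovery_status FROM v$pdbs;
```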
Switchover between Primary and Standby
We do not do this very often, but when building a new database it is a good idea to switch the roles over to ensure that archive logs can be sent and applied between the DR and primary sites. As mentioned earlier, this is done at the container database level, and all PDBs are switched.
Verify that the primary database can be switched to the standby role, then on the current primary:
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
NEW PRIMARY (CNTDB2)
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE OPEN;
ALTER PLUGGABLE DATABASE ALL OPEN;
Switch back using the same process as above, with the roles reversed.
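The readiness check mentioned above is the usual Data Guard one, run on the primary before issuing the switchover:

```sql
-- Expect TO STANDBY (or SESSIONS ACTIVE) before proceeding
SELECT switchover_status FROM v$database;
```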
Convert to snapshot standby
Snapshot standby is a really great feature of Data Guard and can be used for reporting databases, DR tests, destructive production testing, etc., so let's make sure it still works in 19c! Again, this is done at the container level, with all PDBs affected.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;
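Converting back to a physical standby later follows the documented reverse path, restarting in mount between steps:

```sql
-- On the snapshot standby, once testing is complete:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
-- Restart once more in mount, then resume redo apply
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```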
So, all in all, yes, it works largely the same way, except that it is all done at the container level, so all PDBs are converted to snapshot standby at the same time.
Key Differences in 19c
- The process of flashing back a physical standby to a point in time that was captured on the primary is simplified by automatically replicating restore points from primary to the standby. See Replicating Restore Points from Primary to Standby.
- When flashback or point-in-time recovery is performed on the primary database, a standby that is in mounted mode can automatically follow the same recovery procedure performed on the primary.
- DML operations can be performed on Active Data Guard standby instances by being transparently redirected to and run on the primary database.
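As a sketch of the first point, a restore point created on the primary in 19c is automatically replicated to a mounted standby, where it appears with a _PRIMARY suffix (BEFORE_CHANGE is an assumed restore point name):

```sql
-- On the primary
CREATE RESTORE POINT before_change;

-- On the standby, the replicated copy is visible as BEFORE_CHANGE_PRIMARY
SELECT name, replicated FROM v$restore_point;
```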
Changes in Oracle Database Release 18c
These are the changes in Oracle Data Guard Concepts and Administration for Oracle Database Release 18c, Version 18.1.
- The database buffer cache state is now maintained on an Oracle Active Data Guard standby during a role change.
- Global temporary tables can now be dynamically created on an Oracle Active Data Guard standby database.
- A new view V$DATAGUARD_PROCESS (which replaces V$MANAGED_STANDBY) provides information that can be queried to verify that redo is being transmitted from the primary database and applied to the standby database.
- Database nologging has been extended with two new modes: Standby Nologging for Load Performance and Standby Nologging for Data Availability.
- A standby database can now be refreshed over the network using one RMAN command, RECOVER STANDBY DATABASE.
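The one-command refresh mentioned above is run from RMAN connected to the standby, pulling changed blocks over the network from the primary's service (here CNTDB1 is the TNS alias for the primary, as configured earlier):

```
rman target sys@CNTDB2
RECOVER STANDBY DATABASE FROM SERVICE CNTDB1;
```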