21 Jun 2016

Tips for cloning Oracle E-Business Suite

Cloning an Oracle eBusiness Suite environment from one instance to another is one of the most common tasks required of an Oracle Apps DBA.

The basic process is documented in Oracle Note Cloning Oracle Applications Release 12 with Rapid Clone (Doc ID 406982.1), but the actual steps required can be surprisingly different from system to system, depending on factors such as the size of the database; the backup method being used; whether there is direct connectivity between the source and target environments; whether any snapshot technologies are available at storage level; what filesystem is in use; and so on.

Nevertheless, there are some common tips and tasks that apply to all systems, which are not often documented by Oracle but which an experienced Apps DBA will always tend to include.

The list below is hardly exhaustive, but provides a flavour of the sort of thing a good Apps DBA should be doing as a standard for your business!

Run Pre-Clone as a nightly scheduled task

The first step of any clone is to run “perl adpreclone.pl” on the database tier and each application tier to prepare the source system for cloning.

If you are therefore planning on performing a clone based on last night’s backup, you will have had to manually run pre-clone at some point before the backup started. But what if the clone request has come through as an emergency and you didn’t know this had to be done?

One option is to run pre-clone on your source environment and then take a fresh backup of the appsutil directories on the database tier and each application tier; however, this adds an unnecessary manual step to the clone.

Instead, schedule a job to automatically run pre-clone on each tier on the source environment as a nightly job, either as a standalone task or as part of the backup itself.

You can then be confident that any backup you use as the source of your clone will already have had pre-clone run against it.

The script to do this can be very simple. On UNIX, something like the following would do the job nicely (with some appropriate logging around it):

# Source the applications environment for this instance
. <APPL_TOP>/APPS<SID>_<server>.env
cd $INST_TOP/admin/scripts
# Prepare the application tier for cloning
perl adpreclone.pl appsTier

This can then be scheduled to run in the crontab.
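
The equivalent on the database tier is run from $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME> as "perl adpreclone.pl dbTier". As a minimal sketch, assuming the wrapper scripts are saved under /home/applmgr/scripts and /home/oracle/scripts (hypothetical paths) and that a 02:00 start falls before your backup window, the crontab entries might look like this:

# Application tier: run pre-clone every night at 02:00, before the backup starts
0 2 * * * /home/applmgr/scripts/run_preclone_appstier.sh > /home/applmgr/logs/preclone_appstier.log 2>&1

# Database tier: run pre-clone every night at 02:00, before the backup starts
0 2 * * * /home/oracle/scripts/run_preclone_dbtier.sh > /home/oracle/logs/preclone_dbtier.log 2>&1

Adjust the paths, OS users and schedule to suit your own backup timings.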

Use a Physical Standby database as the clone source

If you have a Data Guard-managed DR database, chances are that it will be on the same server, or at the very least in the same datacentre, as your non-production environments.

You can therefore save considerable time and network bandwidth by cloning from DR to non-prod rather than from Prod to non-prod. You can temporarily shut down your DR environment and perform a simple cold copy of the datafiles to their new location, or use a backup of your DR environment as the clone source.

Check and correct soft links

Once you back up, copy and extract the database and application tier filesystems to the target environment, there will be a number of soft links which potentially point to production file paths that are not valid locations on the target.

On UNIX, the following command can be run to list all soft links which are present in the current directory and downwards:

find . -type l -ls -exec ls -lrt {} \;

Typically, you will find some in your CUST_TOP/bin directory if your business has developed bespoke shell scripts called via concurrent requests. These will vary from site to site, although there are always some to be found in the standard eBusiness Suite filesystem as well.

Usually the application will work perfectly well with these pointing to invalid paths; however, it can become an issue when you try to patch or relink the technology stack components on the target environment. So, I always try to correct these as a standard post-clone step.

On eBusiness Suite 12.1 application tier:

cd <10.1.2_ORACLE_HOME>
cd bin
ln -sf <10.1.2_ORACLE_HOME>/webcache/bin/webcachectl webcachectl
ln -sf <10.1.2_ORACLE_HOME>/nls/lbuilder/lbuilder lbuilder
cd ../webcache/examples
ln -sf <10.1.2_ORACLE_HOME>/webcache/docs/readme.examples.html readme.examples.html
cd ../../sysman/webapps/emd/WEB-INF/lib
ln -sf <10.1.2_ORACLE_HOME>/sysman/jlib/log4j-core.jar log4j-core.jar

cd <10.1.3_ORACLE_HOME>
cd bin
ln -sf <10.1.3_ORACLE_HOME>/nls/lbuilder/lbuilder lbuilder

Database tier:

cd <ORACLE_HOME>
cd lib
ln -sf <ORACLE_HOME>/lib/libclntsh.so libclntsh.so.10.1
cd ../bin
ln -sf <ORACLE_HOME>/nls/lbuilder/lbuilder lbuilder

Correct paths for oraInventories

Chances are the $ORACLE_HOME/oraInst.loc file in each Oracle Home on the extracted filesystem will point to a location which is not valid for the target environment.

The clone process will still work fine and the application will come up normally, but you may find that opatch complains about inventory problems if you later try to apply a database or application server patch to the cloned environment.

My trick to avoid this is to always include a step that corrects the paths to the oraInventories before you run adcfgclone.pl.

On the database tier:
vi <ORACLE_HOME>/oraInst.loc

inventory_loc=<ORACLE_HOME>/oraInventory
inst_group=dba

mkdir -p <ORACLE_HOME>/oraInventory

On the application tier:
vi <10.1.3_ORACLE_HOME>/oraInst.loc

inventory_loc=<10.1.3_ORACLE_HOME>/oraInventory
inst_group=dba

vi <10.1.2_ORACLE_HOME>/oraInst.loc

inventory_loc=<10.1.2_ORACLE_HOME>/oraInventory
inst_group=dba

mkdir -p <10.1.3_ORACLE_HOME>/oraInventory
mkdir -p <10.1.2_ORACLE_HOME>/oraInventory
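
Once the clone is complete, a quick way to confirm that each inventory is usable is to run opatch lsinventory from each Oracle Home (a sketch; source the relevant environment for each home first):

$ORACLE_HOME/OPatch/opatch lsinventory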

Update paths in DBA_DIRECTORIES

There will almost certainly be a number of standard and custom directories defined on your source environment and these may point to paths that are not valid for the target instance.

As a post-clone step, always check to see if there are any that should be updated:

SELECT directory_name, directory_path
FROM dba_directories;

CREATE OR REPLACE DIRECTORY <name> AS '<new_path>';
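
If your production paths share a recognisable element, a filtered version of the same query can narrow the list. A sketch, assuming here that production paths contain the string 'prod' (adjust the filter to suit your naming conventions):

SELECT directory_name, directory_path
FROM dba_directories
WHERE lower(directory_path) LIKE '%prod%';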

Update paths in UTL_FILE_DIR

Your application may have additional, custom paths defined in UTL_FILE_DIR and these paths may not be valid for the target instance.

As a post-clone step, always check to see if there are any that should be updated.

In addition, by default the first entry in UTL_FILE_DIR will be /usr/tmp on a UNIX system, and the application will use this to create temporary files for concurrent requests (the $APPLPTMP directory). This can lead to file permission problems if more than one environment is hosted on the same server and all of them are using the same location. As a best practice in that situation, I would always change the first entry in UTL_FILE_DIR to /usr/tmp/<TARGET_ENV> and set $APPLPTMP on the application tier to this directory (making sure to create the directory first, of course!)
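
A sketch of the checks involved. On the target database, confirm the current setting:

SELECT value FROM v$parameter WHERE name = 'utl_file_dir';

Then create the per-environment temporary directory on the relevant servers (the path here is only an example):

mkdir -p /usr/tmp/<TARGET_ENV>

$APPLPTMP itself comes from the application tier context file (the s_applptmp variable in a 12.1 context file), so update that value and re-run AutoConfig rather than editing the environment files directly.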

Check and Update Profile Options

Most standard Oracle profile options will be corrected on the target environment when AutoConfig is run. However, there are usually some that don’t get updated, plus whatever custom profile options you may have defined, which have to be updated manually.

The following script can be run to quickly list all profile options which may point to production paths, names or ports and which may need to be corrected:

select decode(pov.level_id, '10001', 'SITE',
                            '10002', 'APPLICATION',
                            '10003', 'RESPONSIBILITY',
                            '10004', 'USER',
                            '10005', 'SERVER',
                            '10006', 'ORGANIZATION',
                            '10007', 'SERVRESP',
                            'Undefined') set_at_level,
       decode(pov.level_id, '10002', fa.application_short_name,
                            '10003', frt.responsibility_name,
                            '10004', fu.user_name,
                            '10005', fn.node_name,
                            '10006', org.name,
                            '10007', 'Serv/resp',
                            pov.level_value) assigned_to,
       po.profile_option_name,
       pot.user_profile_option_name,
       pov.profile_option_value,
       po.profile_option_id
from   applsys.fnd_profile_options po,
       applsys.fnd_profile_options_tl pot,
       applsys.fnd_profile_option_values pov,
       applsys.fnd_responsibility_tl frt,
       applsys.fnd_user fu,
       applsys.fnd_application fa,
       applsys.fnd_nodes fn,
       hr_operating_units org
where  po.profile_option_id = pov.profile_option_id
and    po.profile_option_name = pot.profile_option_name
and    pot.language = userenv('LANG')
and    pov.level_value = frt.responsibility_id (+)
and    pov.level_value = fu.user_id (+)
and    pov.level_value = fa.application_id (+)
and    pov.profile_option_value like '%prod%'
and    fn.node_id (+) = pov.level_value
and    org.organization_id (+) = pov.level_value
order by 3, 1, 2;

Change the '%prod%' filter to whatever is appropriate for your environment, and run the script a few times to check for the source database name, server names and ports.

Once you have identified that some profile options need to be changed, this can either be done in the front end or, more quickly, via a script; the script approach also saves having to start the application, change the profile options and then bounce the application again.

This script will look something like this:

DECLARE
  value1 BOOLEAN;
  value2 BOOLEAN;
BEGIN
  -- fnd_profile.save returns TRUE if the value was saved successfully
  value1 := fnd_profile.save('<PROFILE_OPTION1>', '<VALUE>', 'SITE');
  value2 := fnd_profile.save('<PROFILE_OPTION2>', '<VALUE>', 'SITE');
END;
/
commit;

I would also generally recommend always changing these profile options:

  • Site Name – set it to the name of the target environment, and the date on which it was cloned so that users can easily see this information every time they log in.
  • Java Color Scheme – set this to something other than blue so that it is immediately obvious when you are in the production system and when you are not; this helps to avoid users making changes in live when they thought they were in non-prod!
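
Both of these can be set with fnd_profile.save as in the script above; the internal profile option names can be looked up from their user-visible names first. A sketch, assuming the standard user-visible names 'Site Name' and 'Java Color Scheme':

SELECT profile_option_name, user_profile_option_name
FROM applsys.fnd_profile_options_tl
WHERE user_profile_option_name IN ('Site Name', 'Java Color Scheme')
AND language = userenv('LANG');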

Reduce ADR Retention Policies

At database level (11gR1 and later), alert logs, trace files and dump files are stored in the Automatic Diagnostic Repository (ADR) and are then purged from the system at defined intervals.

It is not usually necessary to retain these files on a non-production environment for as long as on the live system, so disk space can be saved by reducing the ADR retention policies on the non-production environments.

Diagnostic data purging in ADR is controlled by two attributes:

  1. SHORTP_POLICY, which controls the automatic purging of short-lived files such as trace and dump files. It is expressed in hours and defaults to 720 hours (30 days).
  2. LONGP_POLICY, which controls the automatic purging of long-lived files such as incidents. It is expressed in hours and defaults to 8,760 hours (1 year).

I usually reduce these to 15 days (360 hours) and 90 days (2,160 hours) respectively with the ADRCI command line utility, as follows:

adrci
adrci> show homes
ADR Homes:
diag/rdbms/<env>/<ENV>
diag/tnslsnr/<server>/<env>
adrci> set homepath diag/rdbms/<env>/<ENV>
adrci> set control (SHORTP_POLICY = 360)
adrci> set control (LONGP_POLICY = 2160)
adrci> set homepath diag/tnslsnr/<server>/<env>
adrci> set control (SHORTP_POLICY = 360)
adrci> set control (LONGP_POLICY = 2160)
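
The new values can then be confirmed for each home with the show control command:

adrci> show control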

Check Database Memory allocations

By default you will get a 1GB SGA and 1GB PGA in a cloned environment.

This might be sufficient, but for a pre-prod environment you should try to match production values; I also generally see better stability if I provide an SGA of at least 2GB.
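
As a sketch, assuming an spfile-managed instance and purely illustrative sizes (match these to production, or to the capacity of the target server):

ALTER SYSTEM SET sga_target = 4G SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = SPFILE;

The database then needs a restart for the new values to take effect; if the instance uses a pfile instead, make the equivalent change in the init<SID>.ora.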

Workflow Mailer

Always define a test/override email address on the cloned environment to prevent spurious emails being sent out to users, employees and suppliers; this ensures that any emails that are sent out all go to a ‘bucket’ email address instead.

This can be set in a 12.1 environment by running script $FND_TOP/sql/afsvcpup.sql

Also check and correct the Workflow-related section of the AutoConfig $CONTEXT_FILE (the section tagged <!-- Workflow -->) so that the values for SMTP server, IMAP server, usernames and Reply To address are valid for the target environment.
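
A quick way to eyeball these values is to grep the context file; a sketch (the exact variable names vary between releases, so the pattern below simply matches on the obvious keywords):

grep -iE 'smtp|imap|replyto' $CONTEXT_FILE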

If you are using an IMAP server for Workflow inbound processing, define a separate mailbox for each environment in your estate. This prevents locking issues with too many Workflow Mailers trying to poll the same mailbox at the same time, or emails valid for one environment being moved to the DISCARD folder by the Mailer for another environment.

Change Passwords

As good practice always change the passwords in your cloned environment so that they are something different from the source environment, especially when cloning from production.

Typically, change the following passwords:

  • SYS (via an ALTER USER command)
  • SYSTEM (via an ALTER USER command)
  • APPS (via FNDCPASS)
  • Schema passwords (via FNDCPASS with the ALLORACLE option)
  • SYSADMIN password (via FNDCPASS)
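
As a sketch, with all passwords shown as placeholders. The database accounts are changed in SQL*Plus as SYSDBA:

ALTER USER sys IDENTIFIED BY <new_sys_password>;
ALTER USER system IDENTIFIED BY <new_system_password>;

The application accounts are changed with FNDCPASS from the application tier, with the environment sourced and the application services stopped:

FNDCPASS apps/<apps_pwd> 0 Y system/<system_pwd> SYSTEM APPLSYS <new_apps_pwd>
FNDCPASS apps/<apps_pwd> 0 Y system/<system_pwd> ALLORACLE <new_schema_pwd>
FNDCPASS apps/<apps_pwd> 0 Y system/<system_pwd> USER SYSADMIN <new_sysadmin_pwd>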

Reduce Concurrent Manager Workshifts

The workshifts defined for each of your Concurrent Managers will have been optimised for the production environment, but this may not be appropriate for a cloned, non-production environment, which may be running on a server with fewer hardware resources and which will never see the same workload as the live environment.

In consultation with the expected users of the environment (developers, testers, etc.), see whether any Concurrent Managers can have their workshifts reduced to lessen the impact on the server.

Cancel non-required scheduled concurrent requests

There may be a number of overnight batch jobs scheduled to run on the live environment. Are all of these also required on the non-production systems? If not, again in consultation with the expected users of the target environment, see whether any pending scheduled requests can be cancelled on the cloned environment to reduce overnight server utilisation on the non-production servers.
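
The pending scheduled requests can be listed with a query along these lines (a sketch; phase_code 'P' is Pending):

SELECT fcr.request_id,
       fcpt.user_concurrent_program_name,
       fcr.requested_start_date,
       fcr.resubmit_interval,
       fcr.resubmit_interval_unit_code
FROM   apps.fnd_concurrent_requests fcr,
       apps.fnd_concurrent_programs_tl fcpt
WHERE  fcr.concurrent_program_id = fcpt.concurrent_program_id
AND    fcr.program_application_id = fcpt.application_id
AND    fcpt.language = userenv('LANG')
AND    fcr.phase_code = 'P'
ORDER BY fcr.requested_start_date;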

Switch to NOARCHIVELOG mode

There are a couple of questions to ask yourself before doing this:

  • Are you regularly backing up your non-production environments?
  • In the event of a failure, would you want to restore your non-prod environments from backup or would you just re-clone them from live?

If you have no requirement to back up and restore your non-prod environments, then it makes no sense for them to be in ARCHIVELOG mode; reduce server disk space utilisation by putting the target environment into NOARCHIVELOG mode as a post-clone step.
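
The switch itself is straightforward; a sketch, run as SYSDBA (the final ARCHIVE LOG LIST simply confirms the new mode):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST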

Reduce JVMs

Your production server will have been optimised to run with a certain number of parallel JVMs in the oacore, forms and oafm groups, as appropriate for the expected user workload and hardware capacity of the production servers.

These values will be inherited by the cloned environment but will probably be over-powered for a development environment, so reduce them in the $CONTEXT_FILE (e.g. s_oacore_nprocs) and re-run AutoConfig, to eliminate unnecessary CPU and memory consumption on the non-production server.
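
As a sketch, assuming a 12.1 application tier (s_oacore_nprocs comes from the context file as above; s_forms_nprocs and s_oafm_nprocs are assumed here to be the equivalent variables for the other groups, so verify the names in your own release):

# check the current JVM counts
grep -E 's_oacore_nprocs|s_forms_nprocs|s_oafm_nprocs' $CONTEXT_FILE

# after editing the values in the context file, re-run AutoConfig on the application tier
cd $INST_TOP/admin/scripts
./adautocfg.sh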

Check any CUST_TOP variables

If you have a CUSTOM_TOP defined in a custom environment file in your APPL_TOP, then make sure that the file exists on the target environment with the correct name and points to the correct paths and directories.
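
A quick sanity check, as a sketch (assuming any stale references would still contain the source SID; <SOURCE_SID> is a placeholder):

grep -il <SOURCE_SID> $APPL_TOP/*.env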

Continual improvement

Always look for ways to improve or speed up the clone process on each iteration:

  • Are there any manual steps that could be achieved quicker via a script?
  • Are there changes that could be made to the $CONTEXT_FILE to prevent having to perform manual updates?
  • Is there a more sensible way to copy the backups across from the source to the target?

Document, document, document!

And finally the most important tip of all – clearly document the clone process in detail and keep the document up to date!

Always write the document such that another DBA would be able to follow the process for the first time; create the sort of document that you would like if you were having to perform a clone on a new system.

Kevin

Kevin Behan

Oracle Applications DBA

Kevin is an Oracle Applications DBA with over 15 years’ experience in Oracle. Kevin is responsible for supporting numerous databases and applications across Claremont’s customer base.
