Starting with AD-TXK Delta 7, Oracle has enabled the 'dualfs' option, in which the RUN and PATCH file systems are cloned simultaneously. It is recommended to bring your system to AD-TXK Delta 7 to enable use of the 'dualfs' option.
There are various methods of cloning, as listed below:
1. Standard Cloning
2. Advanced Cloning
   a. Refreshing a Target System
   b. Cloning a Multi-Node System
   c. Adding a New Application Tier Node to an Existing System
   d. Deleting an Application Tier Node
   e. Cloning an Oracle RAC System
   f. Adding/Deleting a Node from an Existing Oracle RAC Cluster
   g. Cloning the Database Separately
   h. Application Tier Fast Cloning
We will proceed with the concept and detailed steps to clone the SOURCE system to the TARGET system, which covers point 2(b).
Cloning a Multi-Node System
The major steps involved in cloning a multi-node system are as follows:
1. Clone the database tier node.
2. Clone the primary application tier node from the Source
Run Edition File System to the Target Run Edition File System using the
'dualfs' option.
3. Add further application nodes to the secondary
application tier Run Edition File System.
This is achieved by cloning the primary application tier
node from the Target Run Edition File System to the secondary application tier
node in the Target Run Edition File System using the 'dualfs' option.
Note:
Step 3 can be repeated until all desired nodes are added to
the Target System.
Note:
The 'dualfs' option will clone the Run and Patch file
systems in a single operation.
Note:
Make sure Secure Shell (SSH) is set up on all of your application tier nodes, as the adop utility uses ssh to handle multiple nodes.
For steps 1 and 2 above, please refer to the following URL:
https://maharshitrivedidba.blogspot.com/2021/05/cloning-standalone-oracle-apps-r122x.html
Once the database and the PRIMARY application tier nodes are configured, proceed with the steps below:
1. Run adpreclone.pl on both the Run and Patch Edition File
Systems in the primary application tier node of the E-Business Suite instance.
Note:
Before performing this step, ensure the AdminServer on both
the Run Edition File System and the Patch Edition File System is running. The
AdminServer on the Patch Edition File System can be shut down after completion
of adpreclone.pl.
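The adpreclone.pl invocation itself is not shown above. The dry-run sketch below simply prints the commands to run on the primary node for each edition; /u02/r122 is an example base path, and adpreclone.pl will prompt for the APPS and WebLogic passwords:

```shell
# Dry-run sketch: prints the adpreclone.pl commands for each file system.
# /u02/r122 is an example base directory; adjust for your environment.
preclone_cmd() {
  edition="$1"   # "run" or "patch"
  echo ". /u02/r122/EBSapps.env $edition && cd \$COMMON_TOP/clone/bin && perl adpreclone.pl appsTier"
}
preclone_cmd run
preclone_cmd patch
```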
2. Copy the Run Edition File System to the Target secondary
node.
Note:
The secondary node must be on a different host.
If the primary node's Run Edition File System is in fs1,
then the secondary node's Run Edition File System should also be in fs1. If the
primary node's Run Edition File System is in fs2, then the secondary node's Run
Edition File System should also be in fs2.
In Release 12.2, the base directory location needs to be set
to the same value on all application tier nodes. This should be taken care of
while copying the Run and Patch Edition File Systems to the Target
secondary node.
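One way to honor the same-base-directory requirement is to copy to an identical path on the target. A sketch, assuming rsync over ssh is available (the tool choice, hostname, and paths are illustrative; scp or a tar pipeline work equally well):

```shell
# Sketch: the base directory must be identical on source and target.
# SRC_BASE and TGT_HOST are example values.
SRC_BASE=/u02/r122
TGT_HOST=appnode2
# Copy fs1 (the run edition here) to the SAME path on the target node:
copy_cmd="rsync -az $SRC_BASE/fs1/ $TGT_HOST:$SRC_BASE/fs1/"
echo "$copy_cmd"
```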
3. Configure both Run and Patch Edition File Systems in the
Target secondary node.
Note:
Before performing these steps, ensure the AdminServer on
both the Run Edition File System and the Patch Edition File System is running.
This is required for adcfgclone.pl to properly register the new node on the
Oracle E-Business Suite instance.
Important:
Ensure that the Oracle Homes of the Run or Patch Edition
file system of the new node being added are not already registered in the
Oracle inventory.
If any of them are registered, follow the steps below. These same steps also need to be performed before restarting adcfgclone if a previous Rapid Clone execution failed during adcfgclone.
On the application tier:
1. De-register the Oracle Homes on both Run Edition and
Patch Edition file system:
Verify if Oracle Inventory contains the following Oracle
Home entries for the Run Edition or Patch Edition file system:
<FMW_HOME>/oracle_common
<FMW_HOME>/webtier
<FMW_HOME>/Oracle_EBS-app1
If any of the above Oracle Home entries are already
registered in Oracle Inventory, you can run the following command to
de-register or detach that Oracle Home:
$ ./runInstaller -detachhome \
  ORACLE_HOME=<Oracle Home Location> [-invPtrLoc <s_invPtrLoc>]
Here,
-invPtrLoc argument needs to be specified only if an 'EBS
Installation Central' inventory is being used.
s_invPtrLoc is the context variable that stores the
inventory pointer location.
For example:
$ cd /u02/r122/fs1/FMW_Home/oracle_common/oui/bin
$ ./runInstaller -detachhome \
ORACLE_HOME=/s0/r122/at1/FMW_Home/oracle_common
2. If the FMW_HOME directory structure exists, delete it as
follows:
$ rm -rf <FMW_HOME>
Note:
Check the value of the context variable
s_wls_admin_console_access_nodes on the primary node. Ensure that all hosts
listed in the context variable are resolvable from the new application tier
node.
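A quick way to verify this is to grep the variable from the context file on the primary node (for example, grep s_wls_admin_console_access_nodes $CONTEXT_FILE) and then test each listed host from the new node. A minimal sketch using getent; localhost is used only so the example is runnable:

```shell
# Sketch: verify each host resolves from the new application tier node.
check_resolvable() {
  for h in "$@"; do
    if getent hosts "$h" >/dev/null 2>&1; then
      echo "resolvable: $h"
    else
      echo "NOT resolvable: $h"
    fi
  done
}
check_resolvable localhost
```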
Log in as the applmgr user to the Run Edition File System
in the Target node and run adcfgclone.pl to clone both the Run and Patch file
systems:
Note:
Ensure that you enter the correct password for
"WebLogic AdminServer." Although Rapid Clone will not be able to
validate the password at this stage, it will fail later if the password is
incorrect.
Running adcfgclone.pl in the interactive mode
Run the following command:
$ cd <COMMON_TOP>/clone/bin
$ perl adcfgclone.pl appsTier dualfs
When asked the following questions:
"Do you want to add a node (yes/no)", answer 'yes'.
"Do you want to startup the Application Services for <TWO_TASK>? (y/n)", answer 'y' if the application services are to be started; otherwise, answer 'n'.
Note:
Ensure that the port pool provided for the Run Edition File
System is different from the port pool of the Patch Edition File System of the
primary node. Otherwise, it will result in errors during execution of fs_clone.
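This check is simple enough to script. The helper below is hypothetical; the pool values would normally be read from s_port_pool in the run and patch context files:

```shell
# Sketch: fail early if the run and patch port pools collide.
check_port_pools() {
  if [ "$1" = "$2" ]; then
    echo "ERROR: run and patch port pools clash ($1)"
    return 1
  fi
  echo "OK: run pool $1, patch pool $2"
}
check_port_pools 10 15
```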
As mentioned earlier, the role (run or patch) of the two file systems is not static; they switch every time a cutover phase completes. Hence, refer to the environment variables $RUN_BASE and $PATCH_BASE to determine the Run Edition File System and Patch Edition File System respectively.
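In practice this means never hardcoding fs1 or fs2 in scripts. The values below are examples only; in a live session they are set by sourcing EBSapps.env:

```shell
# Example values only; sourcing EBSapps.env sets these in a real session.
RUN_BASE=/u02/r122/fs2
PATCH_BASE=/u02/r122/fs1
echo "Run edition file system:   $RUN_BASE"
echo "Patch edition file system: $PATCH_BASE"
```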
Different logs are created for Run Edition and Patch Edition
node addition. The log files are created in the following directories in the
Run Edition File System:
<INST_TOP>/admin/log/clone/run
<INST_TOP>/admin/log/clone/patch
Register the new topology from the newly added
application tier node.
If OHS is enabled on the newly added node, perform the
following steps on that node:
Source the Run Edition File System.
The OHS configuration files mod_wl_ohs.conf and apps.conf will contain entries for the managed servers from all application tier nodes. If any of these managed servers should not be part of the cluster configuration on the current node, run the following command to delete their details from the OHS configuration files mod_wl_ohs.conf and apps.conf on the current node:
$ perl <FND_TOP>/patch/115/bin/txkSetAppsConf.pl \
-contextfile=<CONTEXT_FILE> \
-configoption=removeMS \
-oacore=<host>.<domain>:<port> \
-oafm=<host>.<domain>:<port> \
-forms=<host>.<domain>:<port> \
-formsc4ws=<host>.<domain>:<port> \
-ekanban=<host>.<domain>:<port> \
-accessgate=<host>.<domain>:<port> \
-yms=<host>.<domain>:<port>
where
The argument contextfile accepts the complete path to the
context file.
The arguments oacore, oafm, forms, formsc4ws, ekanban,
accessgate and yms accept a comma-separated list of managed server details in
the following format:
<host>.<domain>:<port>
host, domain and port are the hostname, domain and port of
the managed server whose reference is to be deleted.
For example, to delete references of managed servers
oacore_server1 on 'testserver1' and forms_server2 on host 'testserver2' on the
domain 'example.com' with ports 7201 and 7601 respectively, the following
command should be run:
$ perl <FND_TOP>/patch/115/bin/txkSetAppsConf.pl \
  -contextfile=<CONTEXT_FILE> \
  -configoption=removeMS \
  -oacore=testserver1.example.com:7201 \
  -forms=testserver2.example.com:7601
Perform the following steps on the other application tier
nodes participating in the same cluster where this node is added:
Source the Run Edition File System.
If any of the managed servers from the newly added node are
desired to be part of the cluster configuration on the current node, run the
following command to add details of these managed servers into the OHS
configuration files mod_wl_ohs.conf and apps.conf on the current node:
$ perl <FND_TOP>/patch/115/bin/txkSetAppsConf.pl \
-contextfile=<CONTEXT_FILE> \
-configoption=addMS \
-oacore=<host>.<domain>:<port> \
-oafm=<host>.<domain>:<port> \
-forms=<host>.<domain>:<port> \
-formsc4ws=<host>.<domain>:<port>
where
The argument contextfile accepts the complete path to the
context file.
The arguments oacore, oafm, forms, formsc4ws accept a
comma-separated list of managed server details in the following format:
<host>.<domain>:<port>
host and domain are the hostname and domain name of the
newly added node
port is the port of the new managed server whose reference
is to be added
For example, if the newly added node on host 'testserver1'
and domain 'example.com' contains managed servers oacore_server3 and
oafm_server3 with ports 7205 and 7605 respectively, the following command
should be run:
$ perl <FND_TOP>/patch/115/bin/txkSetAppsConf.pl \
  -contextfile=<CONTEXT_FILE> \
  -configoption=addMS \
  -oacore=testserver1.example.com:7205 \
  -oafm=testserver1.example.com:7605
On all application tier nodes, perform the following
steps to register the newly added application tier node with the application
tier TNS Listener (FNDFS listener) on each node:
Source the Run Edition file system.
Run AutoConfig.
$ sh <ADMIN_SCRIPTS_HOME>/adautocfg.sh
Reload the application tier TNS Listener (FNDFS listener) as
follows:
$ lsnrctl reload APPS_<TWO_TASK>
If the Node Manager service is up on the Patch Edition File
System of the newly added application tier node, shut it down as follows:
$ sh <ADMIN_SCRIPTS_HOME>/adnodemgrctl.sh stop
Shut down the Admin Server and the Node Manager on the Patch
Edition File System of the primary node as follows:
$ <ADMIN_SCRIPTS_HOME>/adadminsrvctl.sh stop
$ <ADMIN_SCRIPTS_HOME>/adnodemgrctl.sh stop
Perform the following steps on all database tier nodes to
add the newly added node to the Access Control List.
Source the database environment file.
Run AutoConfig as follows:
$ <RDBMS_OH>/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh
Reload the Database listener as follows:
$ lsnrctl reload <ORACLE_SID>
After adding new nodes, refer to My Oracle Support Knowledge
Document 1375686.1, Using Load-Balancers with Oracle E-Business Suite Release
12.2, for details on how to set up load balancing.
Note:
If SQL*Net access security is enabled in the existing
system, you first need to authorize the new node to access the database through
SQL*Net. Refer to the Oracle Applications Manager online help for instructions
on how to accomplish this.
If addition of the node is being attempted after a failed execution of adcfgclone.pl, first delete the node using the steps below:
Deleting an Application Tier Node:
Note:
These steps are ONLY for deletion of secondary application
tier nodes. The primary application tier node cannot be deleted using these
steps.
The following steps should be performed to delete a
secondary application tier node from an existing Oracle E-Business Suite
Release 12.2 multi-node instance. The steps described in this section are
applicable to secondary application tier nodes with shared and non-shared file
systems.
Note:
If OHS is enabled only on the secondary node being deleted, OHS must first be enabled on another node before the deletion is started.
Before performing the following steps, ensure that the
WebLogic administration server is running from both the Run and Patch Edition
File Systems of the primary application tier node.
Note:
Do not perform a 'delete node' operation when a patching
cycle is active.
1. Delete the run file system and patch file system
configuration of the secondary application tier node.
Note:
Deletion of nodes may need to be performed in two scenarios: when the node is still accessible (users are able to log in to it), and in the rarer case when the node is not accessible (for example, the node is no longer physically present and/or login to it is not possible).
In the first scenario, all information related to the node
has to be cleaned up altogether (including removal of references from the
WebLogic domain and EBS topology as well as removal of the file system and inventory
references). Case 1 describes the steps to be performed in this scenario.
In the second scenario, references to the node can be
cleaned up only from the WebLogic domain and the EBS topology. The steps for
deletion of secondary nodes in this scenario are described in Case 2 below.
Please note that this option was introduced in R12.TXK.C.DELTA.6 and is not available in earlier releases.
In order to keep the E-Business Suite instance in a
consistent state, steps should be followed strictly based on the scenario.
Case 1:
If the secondary node to be deleted is accessible, perform
the following steps:
Login to the secondary node to be deleted.
Source the run file system.
Ensure that all application tier services from the run and
patch file system for the node to be deleted are shut down.
Execute the ebs-delete-node option of the adProvisionEBS.pl
script as follows:
$ perl <AD_TOP>/patch/115/bin/adProvisionEBS.pl ebs-delete-node \
  -contextfile=<CONTEXT_FILE> \
  -logfile=<LOG_FILE>
This will delete the managed servers, OHS instances and Node
Manager on the current node from the run file system WebLogic domain.
If the node is a non-shared node, verify and remove the
following Oracle Home entries of both the Run and Patch file systems from the Oracle
Inventory:
<FMW_HOME>/webtier
<FMW_HOME>/oracle_common
<FMW_HOME>/Oracle_EBS-app1
If any of the above Oracle Home entries are already
registered in Oracle Inventory, you can run the following command to
de-register or detach that Oracle Home:
$ cd <Oracle Home>/oui/bin
$ ./runInstaller -detachhome \
  ORACLE_HOME=<Oracle Home Location> [-invPtrLoc <s_invPtrLoc>]
Here,
-invPtrLoc argument needs to be specified only if the 'EBS
installation central' inventory is being used.
s_invPtrLoc is the context variable that stores the
inventory pointer location.
For example:
$ cd /u02/r122/fs1/FMW_Home/oracle_common/oui/bin
$ ./runInstaller -detachhome \
ORACLE_HOME=/u02/r122/fs1/FMW_Home/oracle_common -silent
Note:
The Oracle Home <FMW_HOME>/webtier should be de-registered
from the Oracle Inventory before trying to remove the Oracle Home
<FMW_HOME>/oracle_common.
If the node is a non-shared node, verify and remove the
following Oracle Home entry from the Oracle Inventory:
<OracleAS Tools 10.1.2 ORACLE_HOME>
If the above Oracle Home entry is registered in Oracle
Inventory, you can run the following command to de-register the Oracle Home:
$ ./runInstaller -removeHome ORACLE_HOME=<s_tools_oh> -silent [-invPtrLoc <s_invPtrLoc>]
Here,
-invPtrLoc argument needs to be specified only if the 'EBS
installation central' inventory is being used.
s_invPtrLoc is the context variable that stores the
inventory pointer location.
For example:
$ cd /u02/r122/fs1/EBSapps/10.1.2/oui/bin
$ ./runInstaller -removeHome \
ORACLE_HOME=/u02/r122/fs1/EBSapps/10.1.2 -silent
Case 2:
If the secondary node to be deleted is not accessible,
perform the following steps to remove the node from the FND topology and the
EBS domain:
Login to the primary node.
Source the run file system.
Execute the ebs-delete-node option of the adProvisionEBS.pl
script as follows:
$ perl <AD_TOP>/patch/115/bin/adProvisionEBS.pl ebs-delete-node \
  -contextfile=<CONTEXT_FILE> \
  -hostname=<HOSTNAME OF NODE TO BE DELETED> \
  -logfile=<LOG_FILE>
This will delete all information corresponding to the specified node (managed servers, OHS instances, and Node Manager) from both the run and patch file system WebLogic domains. In addition, it will remove the node from the list of active nodes registered in the topology.
Note:
Since the steps in Case 2 perform only a partial cleanup of the node, they should be performed only when the node to be deleted is not accessible. In all other scenarios, delete nodes by following the steps in Case 1; otherwise, the E-Business Suite instance may be left in an inconsistent state.
Sync the OHS configuration on the other nodes to remove
references of the deleted node.
Perform the following steps on the other nodes:
Source the run file system.
If any of the managed servers from the deleted node are part
of the cluster configuration defined on the current node, run the following
command to delete details of these managed servers from the OHS configuration
files mod_wl_ohs.conf and apps.conf on the current node:
$ perl <FND_TOP>/patch/115/bin/txkSetAppsConf.pl \
-contextfile=<CONTEXT_FILE> \
-configoption=removeMS \
-oacore=<host>.<domain>:<port> \
-oafm=<host>.<domain>:<port> \
-forms=<host>.<domain>:<port> \
-formsc4ws=<host>.<domain>:<port> \
-ekanban=<host>.<domain>:<port> \
-accessgate=<host>.<domain>:<port> \
-yms=<host>.<domain>:<port>
where
The argument contextfile accepts the complete path to the
context file.
The arguments oacore, oafm, forms, formsc4ws, ekanban,
accessgate and yms accept a comma-separated list of managed server details in
the following format:
<host>.<domain>:<port>
host, domain and port are the hostname, domain and port of
the managed server whose reference is to be deleted.
For example, if the deleted node was on host 'testserver1'
and domain 'example.com' and it contained managed servers oacore_server3 and
oafm_server3 with ports 7205 and 7605 respectively, the following command
should be run to remove references to these managed servers:
$ perl <FND_TOP>/patch/115/bin/txkSetAppsConf.pl \
  -contextfile=<CONTEXT_FILE> \
  -configoption=removeMS \
  -oacore=testserver1.example.com:7205 \
  -oafm=testserver1.example.com:7605
Run AutoConfig on the database tier.
Log into the database tier node.
Run AutoConfig on the database tier by running the following
command:
$ <RDBMS_OH>/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh
Bounce the database listener by running the following
command:
$ sh <RDBMS_OH>/appsutil/scripts/<CONTEXT_NAME>/addlnctl.sh stop <ORACLE_SID>
$ sh <RDBMS_OH>/appsutil/scripts/<CONTEXT_NAME>/addlnctl.sh start <ORACLE_SID>
Notes:
Executing the ebs-delete-node option as described above will not delete the contents of the file system. This must be done manually once the node deletion steps have completed successfully.
For a non-shared node, the following directories can be deleted:
<NE_BASE>
<RUN_BASE>
<PATCH_BASE>
EBSapps.env
For a shared file system node, only the <INST_TOP>
directory of the Run Edition File System and the Patch Edition File System
should be deleted.
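The manual cleanup for a non-shared node can be scripted as below. The paths here are throwaway stand-ins created solely so the sketch is runnable; on a real node you would substitute the actual <NE_BASE>, <RUN_BASE>, and <PATCH_BASE> values and also remove the EBSapps.env file under the base directory:

```shell
# Sketch: manual file system cleanup after a successful ebs-delete-node.
# These are throwaway stand-in directories, NOT real EBS paths.
NE_BASE=/tmp/demo_ne_base
RUN_BASE=/tmp/demo_fs1
PATCH_BASE=/tmp/demo_fs2
mkdir -p "$NE_BASE" "$RUN_BASE" "$PATCH_BASE"
rm -rf "$NE_BASE" "$RUN_BASE" "$PATCH_BASE"
echo "cleanup done"
```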
On all OHS-enabled nodes, the patch file system OHS configuration will automatically be synced during the next adop prepare phase.
Reference:
Cloning Oracle E-Business Suite Release 12.2 with Rapid Clone (Doc ID 1383621.1)