Shutdown and Start sequence of Oracle RAC:
STOP ORACLE RAC:
1. emctl stop dbconsole
2. srvctl stop listener -n racnode1
3. srvctl stop database -d RACDB
4. srvctl stop asm -n racnode1 -f
5. srvctl stop asm -n racnode2 -f
6. srvctl stop nodeapps -n racnode1 -f
7. crsctl stop crs

START ORACLE RAC:
1. crsctl start crs
2. crsctl start res ora.crsd -init
3. srvctl start nodeapps -n racnode1
4. srvctl start nodeapps -n racnode2
5. srvctl start asm -n racnode1
6. srvctl start asm -n racnode2
7. srvctl start database -d RACDB
8. srvctl start listener -n racnode1
9. emctl start dbconsole
To start and stop Oracle Clusterware (run as the superuser):
[root@node1 ~]# crsctl stop crs
[root@node1 ~]# crsctl start crs

To start and stop Oracle cluster resources running on all nodes:
[root@node1 ~]# crsctl stop cluster -all
[root@node1 ~]# crsctl start cluster -all

To check the current status of a cluster:
[oracle@node1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

To check the current status of CRS:
[oracle@node1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
To display the status of cluster resources:
[oracle@node1 ~]$ crsctl stat res -t
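You can also check a single resource rather than the full list; the resource name below is illustrative (database resources are usually named ora.<db_name>.db):
[oracle@node1 ~]$ crsctl stat res ora.RACDB.db -t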
To check the version of Oracle Clusterware:
[oracle@node1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [node1] is [11.2.0.4.0]
[oracle@node1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]
[oracle@node1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]

To check the current status of the OHASD (Oracle High Availability Services) daemon:
[oracle@node1 ~]$ crsctl check has
CRS-4638: Oracle High Availability Services is online

Forcefully deleting a resource:
[oracle@node1 ~]$ crsctl delete resource testresource -f

Enabling and disabling CRS daemons (run as the superuser):
[root@node1 ~]# crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@node1 ~]# crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
To list the cluster nodes known to Oracle CRS:
[oracle@node1 ~]$ olsnodes
node1
node2

To print the node name with its node number:
[oracle@node1 ~]$ olsnodes -n
node1 1
node2 2

To print the private interconnect address for the local node:
[oracle@node1 ~]$ olsnodes -l -p
node1 192.168.1.101

To print the virtual IP address with the node name:
[oracle@node1 ~]$ olsnodes -i
node1 node1-vip
node2 node2-vip
[oracle@node1 ~]$ olsnodes -i node1
node1 node1-vip

To print information for the local node:
[oracle@node1 ~]$ olsnodes -l
node1

To print node status (active or inactive):
[oracle@node1 ~]$ olsnodes -s
node1 Active
node2 Active
[oracle@node1 ~]$ olsnodes -l -s
node1 Active

To print node type (pinned or unpinned):
[oracle@node1 ~]$ olsnodes -t
node1 Unpinned
node2 Unpinned
[oracle@node1 ~]$ olsnodes -l -t
node1 Unpinned

To print the clusterware name:
[oracle@node1 ~]$ olsnodes -c
rac-scan
To display the global public and global cluster_interconnect interfaces:
[oracle@node1 ~]$ oifcfg getif
eth0 192.168.100.0 global public
eth1 192.168.1.0 global cluster_interconnect
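As a quick sanity check, you can also list the network interfaces oifcfg can see on the local node (the output below is illustrative and will differ per system):
[oracle@node1 ~]$ oifcfg iflist
eth0 192.168.100.0
eth1 192.168.1.0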
To display the databases registered in the repository:
[oracle@gpp4 ~]$ srvctl config database
TESTRACDB

To display the configuration details of a database:
[oracle@TEST4 ~]$ srvctl config database -d TESTRACDB
Database unique name: TESTRACDB
Database name: TESTRACDB
Oracle home: /home/oracle/product/11.2.0/db_home1
Oracle user: oracle
Spfile: +DATA/TESTRACDB/spfileTESTRACDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: TESTRACDB
Database instances: TESTRACDB1,TESTRACDB2
Disk Groups: DATA,ARCH
Mount point paths:
Services: SRV_TESTRACDB
Type: RAC
Database is administrator managed

To change the management policy of the database from automatic to manual:
[oracle@TEST4 ~]$ srvctl modify database -d TESTRACDB -y MANUAL

To change the startup option of the database from open to mount:
[oracle@TEST4 ~]$ srvctl modify database -d TESTRACDB -s mount

To start the RAC listener:
[oracle@TEST4 ~]$ srvctl start listener

To display the status of the database:
[oracle@TEST4 ~]$ srvctl status database -d TESTRACDB
Instance TESTRACDB1 is running on node TEST4
Instance TESTRACDB2 is running on node TEST5

To display the status of the services running in the database:
[oracle@TEST4 ~]$ srvctl status service -d TESTRACDB
Service SRV_TESTRACDB is running on instance(s) TESTRACDB1,TESTRACDB2
To check the nodeapps running on a node:
[oracle@TEST4 ~]$ srvctl status nodeapps
VIP TEST4-vip is enabled
VIP TEST4-vip is running on node: TEST4
VIP TEST5-vip is enabled
VIP TEST5-vip is running on node: TEST5
Network is enabled
Network is running on node: TEST4
Network is running on node: TEST5
GSD is enabled
GSD is not running on node: TEST4
GSD is not running on node: TEST5
ONS is enabled
ONS daemon is running on node: TEST4
ONS daemon is running on node: TEST5
[oracle@TEST4 ~]$ srvctl status nodeapps -n TEST4
VIP TEST4-vip is enabled
VIP TEST4-vip is running on node: TEST4
Network is enabled
Network is running on node: TEST4
GSD is enabled
GSD is not running on node: TEST4
ONS is enabled
ONS daemon is running on node: TEST4
To start all instances associated with a database (this command also starts the services and listeners on each node):
[oracle@TEST4 ~]$ srvctl start database -d TESTRACDB

To shut down all instances and services (listeners are not stopped):
[oracle@TEST4 ~]$ srvctl stop database -d TESTRACDB

You can use the -o option to specify startup/shutdown options.

To shut down the database with the immediate option:
$ srvctl stop database -d TESTRACDB -o immediate

To start all instances with the force option:
$ srvctl start database -d TESTRACDB -o force

To perform a normal shutdown of a single instance:
$ srvctl stop instance -d TESTRACDB -i TESTRACDB1

To start or stop the ASM instance on the racnode1 cluster node:
[oracle@TEST4 ~]$ srvctl start asm -n racnode1
[oracle@TEST4 ~]$ srvctl stop asm -n racnode1
To display the current configuration of the SCAN VIPs:
[oracle@test4 ~]$ srvctl config scan
SCAN name: vmtestdb.exo.local, Network: 1/192.168.5.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /vmtestdb.exo.local/192.168.5.100
SCAN VIP name: scan2, IP: /vmtestdb.exo.local/192.168.5.101
SCAN VIP name: scan3, IP: /vmtestdb.exo.local/192.168.5.102

To refresh the SCAN VIPs with new IP addresses from DNS:
[oracle@test4 ~]$ srvctl modify scan -n your-scan-name.example.com

To stop or start the SCAN listeners and the SCAN VIP resources:
[oracle@test4 ~]$ srvctl stop scan_listener
[oracle@test4 ~]$ srvctl start scan_listener
[oracle@test4 ~]$ srvctl stop scan
[oracle@test4 ~]$ srvctl start scan

To display the status of the SCAN VIPs and SCAN listeners:
[oracle@test4 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node test4
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node test5
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node test5
[oracle@test4 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node test4
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node test5
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node test5

To add/remove/modify the SCAN:
[oracle@test4 ~]$ srvctl add scan -n your-scan
[oracle@test4 ~]$ srvctl remove scan
[oracle@test4 ~]$ srvctl modify scan -n new-scan

To add/remove the SCAN listener:
[oracle@test4 ~]$ srvctl add scan_listener
[oracle@test4 ~]$ srvctl remove scan_listener

To modify the SCAN listener port:
srvctl modify scan_listener -p <port_number>
(this reflects the change in the current SCAN listener configuration only)
To start the ASM instance in mount state:
ASMCMD> startup --mount

To shut down the ASM instance immediately (the database instances must be shut down before the ASM instance is shut down):
ASMCMD> shutdown --immediate

Use the lsop command in ASMCMD to list current ASM operations:
ASMCMD> lsop
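ASMCMD offers other quick checks as well; for example, lsdg lists the mounted disk groups with their state and free space (output varies per system):
ASMCMD> lsdg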
To perform a quick health check of the OCR:
[oracle@test4 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows:
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3304
Available space (kbytes) : 258816
ID : 1555543155
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : +OCR
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user

To dump the content of the OCR file into an XML file:
[oracle@test4 ~]$ ocrdump testdump.xml -xml
To add or relocate the OCR mirror file to a specified location:
[oracle@test4 ~]$ ocrconfig -replace ocrmirror '+TESTDG'
[oracle@test4 ~]$ ocrconfig -replace +CURRENTOCRDG -replacement +NEWOCRDG

To relocate the existing OCR file:
[oracle@test4 ~]$ ocrconfig -replace ocr '+TESTDG'

To add a mirror disk group for the OCR:
[oracle@test4 ~]$ ocrconfig -add +TESTDG

To remove an OCR mirror:
$ ocrconfig -delete +TESTDG

To remove the OCR or the OCR mirror:
[oracle@test4 ~]$ ocrconfig -replace ocr
[oracle@test4 ~]$ ocrconfig -replace ocrmirror
To list the OCR backups:
[oracle@test4 ~]$ ocrconfig -showbackup
test5 2016/04/16 17:30:29 /home/oracle/app/11.2.0/grid/cdata/vmtestdb/backup00.ocr
test5 2016/04/16 13:30:29 /home/oracle/app/11.2.0/grid/cdata/vmtestdb/backup01.ocr
test5 2016/04/16 09:30:28 /home/oracle/app/11.2.0/grid/cdata/vmtestdb/backup02.ocr
test5 2016/04/15 13:30:26 /home/oracle/app/11.2.0/grid/cdata/vmtestdb/day.ocr
test5 2016/04/08 09:30:03 /home/oracle/app/11.2.0/grid/cdata/vmtestdb/week.ocr
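If the OCR ever has to be restored from one of these automatic backups, the general approach (run as root, with the Clusterware stack stopped on all nodes) looks like the sketch below; the backup path is taken from the listing above and is illustrative:
[root@test5 ~]# ocrconfig -restore /home/oracle/app/11.2.0/grid/cdata/vmtestdb/backup00.ocr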
To perform an OCR backup manually:
[root@testdb1 ~]# ocrconfig -manualbackup
testdb1 2016/04/16 17:31:42 /votedisk/backup_20160416_173142.ocr 0

To change the OCR autobackup directory:
[root@testdb1 ~]# ocrconfig -backuploc /backups/ocr
To verify the integrity of the OCR on all cluster nodes:
[oracle@node1]$ cluvfy comp ocr -n all -verbose
NOTE: This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Verification of OCR integrity was successful.
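cluvfy can run broader checks as well; for example, a post-installation check of the Clusterware stack on all nodes is sketched below (options may vary by version):
[oracle@node1]$ cluvfy stage -post crsinst -n all -verbose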
********************************************************
Startup and shutdown of RAC databases:

=> Procedure to stop a RAC database:

Follow the steps below to stop the RAC databases.
1. Shut down any processes in the Oracle home on each node that might be accessing a database. For example, if an agent is installed and configured to monitor the database, stop the agent using the command below:
$ emctl stop agent
Before you stop the agent, don't forget to set a blackout on the targets, depending on the requirements. For example, if it is a database-level outage, set a blackout on the database and its listeners, or set a blackout with the nodeLevel option for a server-level outage; otherwise your alert mailbox will get bombarded with false alerts. Example blackout commands are shown below.
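A minimal blackout sketch with the EM agent (the blackout and target names are illustrative; the target type depends on how the target was discovered in EM):
$ emctl start blackout DB_PATCHING RACDB:rac_database
(database-level blackout)
$ emctl start blackout HOST_PATCHING -nodeLevel
(node-level blackout for all targets on this host)
$ emctl stop blackout DB_PATCHING
(remove the blackout once the maintenance is complete)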
2. Shut down all Oracle RAC instances on all nodes using the command below:
$ srvctl stop database -d db_name
This command stops all instances and services associated with the database. Remember that listeners are not stopped, because they might be serving other database instances running on the same machines (see the note below).
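If you do need to stop a node's listener as well (for example, for node-level maintenance), it can be stopped separately; the node name below is illustrative:
$ srvctl stop listener -n racnode1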
3. Shut down all ASM instances on all nodes:
$ srvctl stop asm -n node
This step has to be repeated once for each node.

4. Stop all node applications on all nodes:
$ srvctl stop nodeapps -n node
This step has to be repeated once for each node. This command stops the nodeapps processes (VIP, GSD, listener and ONS daemon).

5. Shut down the Oracle Clusterware (CRS) processes:
# crsctl stop crs
This command has to be repeated once for each node, and it should be executed as the root user only.
=> Procedure to start a RAC database:

Usually, CRS and the databases are started automatically when the server starts (by default CRS is enabled). However, there are some cases where we have to disable CRS before a server shutdown. For example, if the server needs to be rebooted multiple times to apply OS patches, in such cases we have to disable CRS to ensure the file systems do not get corrupted. When CRS is disabled, CRS and the databases will not start automatically, and we have to start CRS and the databases manually. A typical disable/enable sequence is sketched below.
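A minimal sketch of that disable/enable flow (run as root on each node; the reboot command depends on your platform):
# crsctl disable crs
(reboot the server as many times as the OS patching requires)
# crsctl enable crs
# crsctl start crs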
Follow the steps below to bring up the cluster and the databases.
1. Start the Oracle Clusterware (CRS) process:
# crsctl start crs
NOTE: This command has to be repeated once for each node, and it should be executed as the root user only.

2. Start all node applications on all nodes:
$ srvctl start nodeapps -n node
This step has to be repeated once for each node. This command starts the nodeapps processes (VIP, GSD, listener and ONS daemon).

3. Start all ASM instances on all nodes:
$ srvctl start asm -n node
This step has to be repeated once for each node.

4. Start all Oracle RAC instances on all nodes using the command below:
$ srvctl start database -d db_name
This command starts all instances and services associated with the database.

5. Start the Oracle agent (if any) and stop the blackout if you had set one before bringing the database down:
$ emctl start agent
=> Starting and stopping instances and RAC databases with srvctl and SQL:

==================
A. srvctl
==================

1. Check database status with srvctl:
[oracle@node1 ~]$ srvctl status database -d node
Instance node1 is running on node node1
Instance node2 is running on node node2

2. Stop the instances and RAC database with srvctl:
[oracle@node1 ~]$ srvctl stop database -d node

3. Check database status with srvctl:
[oracle@node1 ~]$ srvctl status database -d node
Instance node1 is not running on node node1
Instance node2 is not running on node node2

4. Start the instances and RAC database with srvctl:
[oracle@node1 ~]$ srvctl start database -d node

5. Check database status with srvctl:
[oracle@node1 ~]$ srvctl status database -d node
Instance node1 is running on node node1
Instance node2 is running on node node2

6. Stop a single instance with srvctl (not all instances):
[oracle@node1 ~]$ srvctl stop instance -d node -i node2

7. Check database status with srvctl:
[oracle@node1 ~]$ srvctl status database -d node
Instance node1 is running on node node1
Instance node2 is not running on node node2

8. Start a single instance with srvctl (not all instances):
[oracle@node1 ~]$ srvctl start instance -d node -i node2

9. Check database status with srvctl:
[oracle@node1 ~]$ srvctl status database -d node
Instance node1 is running on node node1
Instance node2 is running on node node2
==================
B. SQL commands
==================

1. Check database status with SQL:
[oracle@node1 ~]$ sqlplus system/oracle0@node1
SQL> column instance_name format a10
SQL> column host_name format a10
SQL> column archiver format a10
SQL> column status format a10
SQL> select instance_name, host_name, archiver, thread#, status from gv$instance;

INSTANCE_N HOST_NAME  ARCHIVER   THREAD#    STATUS
---------- ---------- ---------- ---------- ----------
node1      node1      STARTED    1          OPEN
node2      node2      STARTED    2          OPEN

SQL> exit
2. Stop an instance with SQL (not all instances; in this example, node1):
[oracle@node1 ~]$ sqlplus sys/oracle0@node1 as sysdba
SQL> shutdown;
SQL> exit
[oracle@node1 ~]$

3. Check database status with SQL:
[oracle@node1 ~]$ sqlplus system/oracle0@node2
SQL> column instance_name format a10
SQL> column host_name format a10
SQL> column archiver format a10
SQL> column status format a10
SQL> select instance_name, host_name, archiver, thread#, status from gv$instance;

INSTANCE_N HOST_NAME  ARCHIVER   THREAD#    STATUS
---------- ---------- ---------- ---------- ----------
node2      node2      STARTED    2          OPEN

SQL> exit
[oracle@node1 ~]$
4. Start an instance with SQL (not all instances; in this example, node1):
[oracle@node1 ~]$ sqlplus sys/oracle0@node1 as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Tue Sep 23 13:46:08 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ERROR:
ORA-12521: TNS:listener does not currently know of instance requested in connect descriptor
Enter user-name: / as sysdba
Connected to an idle instance.
SQL> startup;
ORACLE instance started.
Total System Global Area 853716992 bytes
Fixed Size 1303244 bytes
Variable Size 557845812 bytes
Database Buffers 289406976 bytes
Redo Buffers 5160960 bytes
Database mounted.
Database opened.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
[oracle@node1 ~]$
=========================================
NOTE: because the instance is down, connecting through the listener fails with
ORA-12521: TNS:listener does not currently know of instance requested in connect descriptor
so connect locally on the node instead, using: sqlplus / as sysdba
=========================================
5. Check database status with SQL:
[oracle@node1 ~]$ sqlplus system/oracle0@node2
SQL> column instance_name format a10
SQL> column host_name format a10
SQL> column archiver format a10
SQL> column status format a10
SQL> select instance_name, host_name, archiver, thread#, status from gv$instance;

INSTANCE_N HOST_NAME  ARCHIVER   THREAD#    STATUS
---------- ---------- ---------- ---------- ----------
node2      node2      STARTED    2          OPEN
node1      node1      STARTED    1          OPEN
=> Shutdown RAC Database:

You need to shut down the database instances on each node. You can use either Oracle Enterprise Manager or SRVCTL to shut down the instances. If you are using EM Grid Control, set a blackout in Grid Control for the processes you intend to shut down, so that the records for these processes indicate that the shutdown was planned.

1. Use the command below to stop Enterprise Manager/Grid Control:
$ emctl stop dbconsole

2. Use the command below to shut down all Oracle RAC instances on all nodes:
$ srvctl stop database -d db_name

3. If you want to stop specific database instances, use the command below:
$ srvctl stop instance -d db_name -i instance_name

4. Shut down the Oracle ASM instances:
Once the database is stopped, proceed with the ASM instance shutdown. Use the command below to shut down the ASM instance on each node:
$ srvctl stop asm -n node

5. Shut down the node applications:
Use the command below to shut down the node apps on all RAC nodes:
$ srvctl stop nodeapps -n node
6. Shut down Oracle Clusterware:
You need to shut down Oracle Clusterware (CRS) as root; run the command below on each node in the cluster:
# crsctl stop crs
NOTE: Please note that the command above stops Oracle High Availability Services (OHAS) and the Clusterware stack in a single command.
From 11g R2, you can do this in two steps:
A. Stop the Clusterware stack on the local node:
# crsctl stop cluster
B. Stop the Clusterware stack on all nodes in the cluster:
# crsctl stop cluster -all
Where:
-all  Stops the Clusterware stack on all nodes
-n    Stops the Clusterware stack on the specified node(s)
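For example, to stop the stack on just one remote node (the node name is illustrative):
# crsctl stop cluster -n node2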
7. Stop the Oracle High Availability Services daemon on each node in the cluster:
# crsctl stop has

8. Check the status of the cluster:
Once all processes have stopped, run the command below to check the status of the CRSD, CSSD and EVMD processes:
# crsctl check crs
If you see that any process failed to stop, you can also use the force option to terminate the processes unconditionally:
# crsctl stop crs -f
=> Start processes in Oracle RAC:

Follow the reverse sequence to start all processes in Oracle RAC:
# crsctl start crs
$ srvctl start nodeapps -n node
$ srvctl start asm -n node
$ srvctl start database -d db_name

If you come across any issues during startup or shutdown, check the Oracle Clusterware component log files.
=> Oracle Clusterware Log Directory Structure:

CRS_HOME/log/hostname/crsd/ - The log files for the CRS daemon
CRS_HOME/log/hostname/cssd/ - The log files for the CSS daemon
CRS_HOME/log/hostname/evmd/ - The log files for the EVM daemon
CRS_HOME/log/hostname/client/ - The log files for the Oracle Cluster Registry (OCR)
CRS_HOME/log/hostname/racg/ - The log files for the Oracle RAC high availability component
CRS_HOME/log/hostname/alert.log - The alert log for Clusterware issues
NOTE: CRS_HOME is the directory in which the Oracle Clusterware software was installed, and hostname is the name of the node.
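For a quick look at recent Clusterware problems, you can tail the alert log for the node; the exact file name varies by version (often alert<hostname>.log), so the path below is illustrative:
$ tail -100 $CRS_HOME/log/node1/alertnode1.log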
=> Checking CRS Status:
[oracle@node1]</home/oracle> crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[oracle@node2]</home/oracle> crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

=> Checking Node Status:
[oracle@node1]</home/oracle> srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP 192.168.100.101 is enabled
VIP 192.168.100.101 is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2
[oracle@node2]</home/oracle> srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP 192.168.100.101 is enabled
VIP 192.168.100.101 is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2
=> Oracle High Availability Services:

Disable/Enable Oracle HAS:
Use the "crsctl enable/disable has" commands to enable or disable automatic startup of the Oracle High Availability Services stack when the server boots.

We can see the current setting for automatic startup of the Oracle High Availability Services stack at boot time:
[root@node1] # crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.
or
[root@node1] # cat /etc/oracle/scls_scr/node1/root/ohasdstr
enable
So, as you can see, our current setting is enable.

To disable automatic startup of the Oracle High Availability Services:
[root@node1] # crsctl disable has
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@node1] # crsctl config has
CRS-4621: Oracle High Availability Services autostart is disabled.
# cat /etc/oracle/scls_scr/node1/root/ohasdstr
disable

To enable it again:
[root@node1] # crsctl enable has
CRS-4622: Oracle High Availability Services autostart is enabled.
Check the new setting:
[root@node1] # crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@node1] # cat /etc/oracle/scls_scr/node1/root/ohasdstr
enable
=> Stop the Oracle Clusterware stack:

We can use the commands below, as the root user:
crsctl stop crs or crsctl stop has
[root@node1] # crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.test01.db' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'node1'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'node1'
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.node1.vip' on 'node2'
CRS-2677: Stop of 'ora.scan2.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'node2'
CRS-2677: Stop of 'ora.scan3.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'node2'
CRS-2676: Start of 'ora.node1.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.test01.db' on 'node1' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'node2'
CRS-2676: Start of 'ora.scan3.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'node2'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'node2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2673: Attempting to stop 'ora.eons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2677: Stop of 'ora.eons' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'node1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'node1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
=> Start the Oracle Clusterware stack:

We can use the commands below, as the root user:
crsctl start crs or crsctl start has
[root@node1] # crsctl start crs
CRS-4123: Oracle High Availability Services has been started.