Oracle Database 12c – DBA survival BLOG

A PDB is cloned while in read-write, Data Guard loose its marbles (12.1.0.2, ORA-19729)


UPDATE: please check my more recent post about this problem and the information I’ve got at the Oracle Demo Grounds during OOW14: http://www.ludovicocaldara.net/dba/demo-grounds-clone-pdb-rw/

I feel the strong need to blog about this very recent problem because I’ve spent a lot of time debugging it… especially because there’s no information about this error on MOS.

Introduction
For a lab, I have prepared two RAC Container databases in physical stand-by.
Real-time query is configured (real-time apply, standby in read-only mode).

Following the doc, http://docs.oracle.com/database/121/SQLRF/statements_6010.htm#CCHDFDDG, I’ve cloned one local pluggable database to a new PDB and, because Active Data Guard is active, I was expecting the PDB to be created on the standby and its files copied without problems.
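For reference, the clone on the primary was a plain statement along these lines (the PDB names are the ones that appear later in this post; this is a sketch, not the exact command from my lab):

-- on the primary CDB; with Active Data Guard the standby is expected to copy the new PDB's files automatically
create pluggable database LUDO from MAAZ;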

BUT! I had forgotten to put my source PDB in read-only mode on the primary and, strangely:

  • The pluggable database was created on the primary WITHOUT PROBLEMS (even though the documentation explicitly states that the source needs to be read-only)
  • The recovery process on the standby stopped with an error.

Recovery copied files for tablespace SYSTEM
Recovery successfully copied file +DATA/CDBGVA/01B838F74693443FE053334EA8C03527/DATAFILE/system.437.856805523 from +DATA/CDBGVA/01B431F9BDF51AB7E053334EA8C06877/DATAFILE/system.435.856802413
MRP0: Background Media Recovery terminated with error 1274
Thu Aug 28 17:32:05 2014
Errors in file /u01/app/oracle/diag/rdbms/cdbgva/CDBGVA_1/trace/CDBGVA_1_mrp0_13949.trc:
ORA-01274: cannot add data file that was originally created as '+DATA/CDBATL/01B838F74693443FE053334EA8C03527/DATAFILE/system.477.856805517'
ORA-19729: File 22 is not the initial version of the plugged in datafile
Thu Aug 28 17:32:05 2014

 

Now, the primary had all its datafiles (the new PDB has con_id 4):

CON_ID NAME
---------- ----------------------------------------------------------------------------------------------------
1 +DATA/CDB/DATAFILE/system.283.854626623
1 +DATA/CDB/DATAFILE/undotbs1.290.854627639
1 +DATA/CDB/DATAFILE/users.291.854627695
1 +DATA/CDB/DATAFILE/undotbs2.287.854627063
1 +DATA/CDB/DATAFILE/sysaux.285.854626879
2 +DATA/CDB/FFBCECBB503D606BE043334EA8C019B7/DATAFILE/sysaux.286.854627011
2 +DATA/CDB/FFBCECBB503D606BE043334EA8C019B7/DATAFILE/system.284.854626785
3 +DATA/CDBATL/00B29F47A2D71CC2E053334EA8C03B13/DATAFILE/sysaux.390.855681795
3 +DATA/CDBATL/00B29F47A2D71CC2E053334EA8C03B13/DATAFILE/system.389.855681795
4 +DATA/CDBATL/01B431F9BDF51AB7E053334EA8C06877/DATAFILE/sysaux.459.856788061
4 +DATA/CDBATL/01B431F9BDF51AB7E053334EA8C06877/DATAFILE/system.458.856788061

 

and the standby was missing the datafiles of the new PDB:

1* select con_id, name from v$datafile order by 1

CON_ID NAME
---------- ----------------------------------------------------------------------------------------------------
1 +DATA/CDBGVA/DATAFILE/system.319.855054997
1 +DATA/CDBGVA/DATAFILE/undotbs2.283.855055141
1 +DATA/CDBGVA/DATAFILE/users.285.855055149
1 +DATA/CDBGVA/DATAFILE/undotbs1.284.855055145
1 +DATA/CDBGVA/DATAFILE/sysaux.281.855055061
2 +DATA/CDBGVA/FFBCECBB503D606BE043334EA8C019B7/DATAFILE/sysaux.282.855055127
2 +DATA/CDBGVA/FFBCECBB503D606BE043334EA8C019B7/DATAFILE/system.280.855055053
3 +DATA/CDBGVA/00B29F47A2D71CC2E053334EA8C03B13/DATAFILE/sysaux.363.855681865
3 +DATA/CDBGVA/00B29F47A2D71CC2E053334EA8C03B13/DATAFILE/system.362.855681863

 

But on the standby database, the PDB somehow did exist.

16:20:58 SYS@CDBGVA_1> select name from v$pdbs;

NAME
------------------------------
PDB$SEED
MAAZ
LUDO

 

I tried to play around a little, and finally decided to disable recovery for the PDB (a feature new in 12.1.0.2).
But to disable the recovery I needed to connect to the PDB, and the PDB was somehow “nonexistent”:

16:21:35 SYS@CDBGVA_1> alter session set container=LUDO;
ERROR:
ORA-65011: Pluggable database LUDO does not exist.

16:21:39 SYS@CDBGVA_1> select name, open_mode from v$pdbs;

NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
MAAZ MOUNTED
LUDO MOUNTED

 

So I tried to drop it but, of course, the standby was read-only and I could not drop the PDB:

16:22:01 SYS@CDBGVA_1> drop pluggable database ludo;
drop pluggable database ludo
*
ERROR at line 1:
ORA-16000: database or pluggable database open for read-only access

 

Then I shut down the standby, but one instance hung and I had to do a shutdown abort (I don’t know whether it was related to my original problem).

# [ oracle@racb02:/u01/app/oracle/diag/rdbms/cdbgva/CDBGVA_1/trace [16:22:45] [12.1.0.2.0 EE SID=CDBGVA_1] 1 ] #
# srvctl stop database -d CDBGVA -o immediate
[HANGS]

 

After mounting the standby again, the PDB was accessible:

SQL*Plus: Release 12.1.0.2.0 Production on Thu Aug 28 16:30:19 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

16:30:19 SYS@CDBGVA_1> alter session set container=LUDO;

Session altered.

 

So I’ve been able to disable the recovery:

16:31:19 SYS@CDBGVA_1> alter pluggable database ludo disable recovery;

Pluggable database altered.

 

Then, on the primary, I took a fresh backup of the involved datafiles:

RMAN> backup as copy datafile 16,17 format '/tmp/%f.dbf';

Starting backup at 28-AUG-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=88 instance=CDBATL_2 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00017 name=+DATA/CDBATL/01B431F9BDF51AB7E053334EA8C06877/DATAFILE/sysaux.459.856788061
output file name=/tmp/17.dbf tag=TAG20140828T163251 RECID=4 STAMP=856801976
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00016 name=+DATA/CDBATL/01B431F9BDF51AB7E053334EA8C06877/DATAFILE/system.458.856788061
output file name=/tmp/16.dbf tag=TAG20140828T163251 RECID=5 STAMP=856801981
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 28-AUG-14

Starting Control File and SPFILE Autobackup at 28-AUG-14
piece handle=+DATA/CDBATL/AUTOBACKUP/2014_08_28/s_856801982.471.856801983 comment=NONE
Finished Control File and SPFILE Autobackup at 28-AUG-14

 

then I copied the files over to the standby host and cataloged the copies in the standby controlfile:

RMAN> catalog start with '/tmp/1';

searching for all files that match the pattern /tmp/1

List of Files Unknown to the Database
=====================================
File Name: /tmp/17.dbf
File Name: /tmp/16.dbf

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /tmp/17.dbf
File Name: /tmp/16.dbf

 

but the restore was impossible, because the controlfile did not know about these datafiles!

16:38:48 SYS@CDBGVA_1> select file# from v$datafile;

FILE#
----------
1
2
3
4
5
6
7
10
11

RMAN> run {
2> set newname for datafile 16 to new;
3> set newname for datafile 17 to new;
4> restore datafile 16,17;
5> }

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 28-AUG-14
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=64 instance=CDBGVA_1 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 08/28/2014 16:37:02
RMAN-20201: datafile not found in the recovery catalog
RMAN-06010: error while looking up datafile: 17

RMAN> exit

 

So I RESTARTED the recovery for a few seconds: because the PDB had recovery disabled, the recovery process added the datafiles to the standby controlfile and set them offline.

16:38:08 SYS@CDBGVA_1> alter database recover managed standby database ;
alter database recover managed standby database
*
ERROR at line 1:
ORA-16043: Redo apply has been canceled.
ORA-01013: user requested cancel of current operation

16:38:48 SYS@CDBGVA_1> select file# from v$datafile;

FILE#
----------
1
2
3
4
5
6
7
10
11
16
17

 

Then I’ve been able to restore the datafiles :-)

RMAN> run {
2> set newname for datafile 16 to new;
3> set newname for datafile 17 to new;
4> restore datafile 16,17;
5> }

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 28-AUG-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=33 instance=CDBGVA_1 device type=DISK

channel ORA_DISK_1: restoring datafile 00016
input datafile copy RECID=21 STAMP=856802136 file name=/tmp/16.dbf
destination for restore of datafile 00016: +DATA
channel ORA_DISK_1: copied datafile copy of datafile 00016
output file name=+DATA/CDBGVA/01B431F9BDF51AB7E053334EA8C06877/DATAFILE/system.435.856802413 RECID=22 STAMP=856802416
channel ORA_DISK_1: restoring datafile 00017
input datafile copy RECID=20 STAMP=856802136 file name=/tmp/17.dbf
destination for restore of datafile 00017: +DATA
channel ORA_DISK_1: copied datafile copy of datafile 00017
output file name=+DATA/CDBGVA/01B431F9BDF51AB7E053334EA8C06877/DATAFILE/sysaux.355.856802417 RECID=23 STAMP=856802421
Finished restore at 28-AUG-14

RMAN>

RMAN> switch datafile 16, 17 to copy;

datafile 16 switched to datafile copy "+DATA/CDBGVA/01B431F9BDF51AB7E053334EA8C06877/DATAFILE/system.435.856802413"
datafile 17 switched to datafile copy "+DATA/CDBGVA/01B431F9BDF51AB7E053334EA8C06877/DATAFILE/sysaux.355.856802417"

RMAN>

 

Finally, I re-enabled recovery for the PDB and restarted the apply process.

16:41:14 SYS@CDBGVA_1> alter session set container=LUDO;

Session altered.

16:41:19 SYS@CDBGVA_1> alter pluggable database ludo enable recovery;

Pluggable database altered.

 

Lesson learned: if you want to clone a PDB, never, ever forget to put your source PDB in read-only mode first, or you’ll have to deal with the consequences!! :-)
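For the record, putting the source PDB read-only on a RAC primary before the clone takes just two statements (a sketch, using the MAAZ PDB that appears in this post):

alter pluggable database MAAZ close immediate instances=all;
alter pluggable database MAAZ open read only instances=all;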


RAC Attack 12c in Switzerland, it’s a wrap!


Last Wednesday, September 17th, we held the first RAC Attack in Switzerland (as far as I know!). I have to say it was a complete success, like every other RAC Attack I’ve been involved in.


This time I’ve been particularly happy and proud because I organized it almost entirely on my own. Trivadis, my employer, kindly sponsored everything: the venue (the new, cool Trivadis offices in Geneva), the T-shirts (I did the design, very similar to the one I designed for Collaborate 14), beers and pizza!

For beer lovers, we had the good “Blanche des Neiges” from Belgium, plus “La Helles” and “La Rossa” from the San Martino Brewery in Ticino (the Italian-speaking region of Switzerland). People appreciated them :-)


We had 4 top-class Ninjas and 10 people actively installing Oracle RAC (plus a famous blogger who joined for networking); sadly, two people cancelled at the last minute. For the very first time, all the participants had downloaded the Oracle software in advance. When they registered I reminded them twice that the software was necessary, because we cannot provide it due to legal constraints.


 

People running the lab on Windows laptops reported problems with VirtualBox 4.3.16 (4.3.14 was skipped entirely because of known problems). So everyone had to fall back to version 4.3.12 (the last stable release, IMO).

The best praise I got was the presence of a Senior DBA coming from Nanterre: 550 km (more than 5 hours by public transport, door to door) and an overnight stay just for this event, can you believe it? :-)

This makes me think seriously about the real need for organizing this kind of event around the world.


 

Of course, we had a photo session with a lot of jumps 😉 We could not miss this RAC Attack tradition!

We wrapped everything up around 10:30 pm, after a bit more than 5 hours. We enjoyed it a lot and had a good time together chatting about Oracle RAC and about our work in general.


Thank you again to all participants!! :-)

 

 

Oracle Active Data Guard 12c: Far Sync Instance, Real-Time Cascade Standby, and Other Goodies


Here you can find the content related to my second presentation at Oracle Open World 2014.

 Slides

Demo video1: Real-Time Cascade

Demo video2: Far Sync Instance

Demo 1 Script

clear

echo "#### CURRENT CONFIGURATION: CLASSIC DATA GUARD 2 DATABASES ####"

dgmgrl -echo sys/manager <<EOF
show configuration
EOF
read -p ""


echo "#### ADDING DATABASE REP ####"
dgmgrl -echo sys/manager <<EOF
add database 'REP' as connect identifier is 'REP';
EOF
read -p ""

echo "#### NEW CONFIGURATION ####"
dgmgrl -echo sys/manager <<EOF
show configuration
EOF
read -p ""


echo
echo "#### EDIT REDOROUTES ####"
echo
dgmgrl -echo sys/manager <<EOF
edit database 'PROD' set property redoroutes='(PROD:DR)'
EOF
read -p ""

dgmgrl -echo sys/manager <<EOF
edit database 'REP' set property redoroutes='(REP:DR)' 
EOF
read -p ""

dgmgrl -echo sys/manager <<EOF
edit database 'DR' set property redoroutes='(PROD:REP)(REP:PROD)(DR:REP,PROD)'
EOF
read -p ""

dgmgrl -echo sys/manager <<EOF
edit database 'REP' set property 'NetTimeout'=15;
edit database 'REP' set property 'ReopenSecs'=5;
EOF
read -p ""
echo

echo "#### ENABLE DATABASE REP ####"
dgmgrl -echo sys/manager <<EOF
enable database 'REP'
EOF
read -p ""

echo "#### NEW CONFIGURATION ####"
dgmgrl -echo sys/manager <<EOF
show configuration 
EOF
echo
echo
echo "#### IS IT WORKING?? ####"
read -p ""

echo "#### ENABLE REL-TIME CASCADE ####"
dgmgrl -echo sys/manager <<EOF
edit database 'DR' set property redoroutes='(PROD:REP ASYNC)(REP:PROD ASYNC)(DR:REP,PROD)'
EOF
read -p ""


echo "#### NEW CONFIGURATION ####"
dgmgrl -echo sys/manager <<EOF
show configuration 
EOF
echo
echo "#### NOTICE THE NEW BEHAVIOR ####"
read -p ""

echo "#### SWITCHOVER TO REP ####"
dgmgrl -echo sys/manager <<EOF
switchover to 'REP'
EOF
read -p ""

echo "#### NEW CONFIGURATION ####"
dgmgrl -echo sys/manager <<EOF
show configuration 
EOF
read -p ""


echo "#### SWITCHOVER TO PROD ####"
dgmgrl -echo sys/manager <<EOF
switchover to 'PROD'
EOF

 

Demo 2 script

clear

echo "#### CURRENT CONFIGURATION: 3 CASCADE STANDBY DATABASES ####"

dgmgrl -echo sys/manager <<EOF
show configuration
EOF
read -p ""


echo "#### CREATE CONTROLFILE ####"
sqlplus "/ as sysdba" @create_fs_ctl.sql 
read -p ""

echo "#### COPY FAR SYNC CONTROLFILE TO FAR SYNC HOSTS ####"
scp /tmp/control01.ctl oracle@o12f01:/u01/app/oracle/oradata/PRODFS/controlfile/control01.ctl

scp /tmp/control01.ctl oracle@o12f02:/u01/app/oracle/oradata/DRFS/controlfile/control01.ctl

read -p "#### START FS INSTANCES, CLEAR STANDBY LOGS, THEN CONTINUE HERE ####"



echo "#### ADD FAR_SYNC INSTANCED ####"
dgmgrl -echo sys/manager <<EOF
add far_sync 'PRODFS' as connect identifier is 'PRODFS_DG';
EOF
read -p ""
dgmgrl -echo sys/manager <<EOF
add far_sync 'DRFS' as connect identifier is 'DRFS_DG';
EOF
read -p ""

dgmgrl -echo sys/manager <<EOF
edit far_sync 'PRODFS' set property 'NetTimeout'=15;
edit far_sync 'PRODFS' set property 'ReopenSecs'=5;
edit far_sync 'DRFS' set property 'NetTimeout'=15;
edit far_sync 'DRFS' set property 'ReopenSecs'=5;
EOF
read -p ""
echo

echo "#### NEW CONFIGURATION ####"
dgmgrl -echo sys/manager <<EOF
show configuration 
EOF
read -p ""


echo "#### EDIT REDOROUTES ####"
dgmgrl -echo sys/manager <<EOF
edit database 'PROD' set property redoroutes='(PROD:PRODFS SYNC)'
EOF
read -p ""

dgmgrl -echo sys/manager <<EOF
edit far_sync 'PRODFS' set property redoroutes='(PROD:DR ASYNC)'
EOF
read -p ""

dgmgrl -echo sys/manager <<EOF
edit database 'DR' set property redoroutes='(PROD:REP ASYNC)(REP:DRFS ASYNC)(DR:REP SYNC, DRFS SYNC)' 
EOF
read -p ""

dgmgrl -echo sys/manager <<EOF
edit database 'REP' set property redoroutes='(REP:DR SYNC)'
EOF
read -p ""

dgmgrl -echo sys/manager <<EOF
edit far_sync 'DRFS' set property redoroutes='(DR:PROD ASYNC)(REP:PROD ASYNC)'
EOF
read -p ""

echo
echo "#### ENABLE FAR_SYNCS ####"
dgmgrl -echo sys/manager <<EOF
enable far_sync 'PRODFS'
EOF
read -p ""
dgmgrl -echo sys/manager <<EOF
enable far_sync 'DRFS'
EOF
read -p ""

echo "#### NEW CONFIGURATION ####"
dgmgrl -echo sys/manager <<EOF
show configuration 
EOF
echo
echo "#### IT MAY TAKE SOME MINUTES BEFORE EVERYTHING START WORKING ####"
read -p ""

dgmgrl -echo sys/manager <<EOF
show configuration 
EOF

For the demo I’ve used 5 machines running 3 database instances and 2 Far Sync instances. I cannot provide the documentation for creating the demo environment, but the scripts may be useful to understand how the demo works.

Cheers

Ludo

Oracle RAC, Oracle Data Guard, and Pluggable Databases: When MAA Meets Oracle Multitenant (OOW14)


Here you can find the material related to my session at Oracle Open World 2014. I’m sorry I’m late in publishing them, but I challenge you to find spare time during Oracle Open World! It’s the busiest week of the year! (Hard Work, Hard Play)

 Slides

 Demo 1 video

Demo 2 video

Demo 1 script

clear

function pause () {
	echo
	read -p "$*"
	echo
}

tnsping cdbatl

pause "next... status db"
clear
echo \$ srvctl status database -db CDBATL

srvctl status database -db CDBATL

pause "next... status pdb"

clear

sqlplus sys/racattack@cdbatl as sysdba <<EOF
	set echo on
	select INST_ID, CON_ID, name, OPEN_MODE
	 from gv\$pdbs
	  where con_id!=2
	  order by name, inst_id;
	exit
EOF

pause "next... add singleton service"

clear

###  add service MAAZAPP SINGLETON
cmd="srvctl add service -db CDBATL -service  maazapp -serverpool CDBPOOL -cardinality singleton -role primary -failovertype select -failovermethod basic -policy automatic -failoverdelay 2 -failoverretry 180 -pdb maaz"
echo \$ $cmd
eval $cmd

pause "next... start service"
clear

cmd="srvctl start service -db CDBATL -service maazapp -instance CDBATL_1"
echo \$ $cmd
eval $cmd

pause "next... status pdb"

clear

sqlplus sys/racattack@cdbatl as sysdba <<EOF
	set echo on
	select INST_ID, CON_ID, name, OPEN_MODE
	 from gv\$pdbs
	  where con_id!=2
	  order by name, inst_id;
	exit
EOF


cmd="srvctl status database -db cdbatl"
echo
echo \$ $cmd
eval $cmd

cmd="srvctl status service -service maazapp -db cdbatl"
echo
echo \$ $cmd
output=`$cmd`
echo $output

current=`echo $output | awk '{print $NF}'`
if [ $current == "raca01" ] ; then
	target="raca02"
else
	target="raca01"
fi


pause "pause... please launch demo1_client.sh"
pause "next... relocate service from $current to $target"

clear

cmd="srvctl relocate service -db CDBATL -service maazapp -currentnode $current -targetnode $target"
echo \$ $cmd
eval $cmd


pause "next... status pdb"

clear

sqlplus sys/racattack@cdbatl as sysdba <<EOF
	set echo on
	select INST_ID, CON_ID, name, OPEN_MODE
	 from gv\$pdbs
	  where con_id!=2
	  order by name, inst_id;
	exit
EOF

pause "next... close pdb immediate on old inst"

clear

sqlplus sys/racattack@cdbatl as sysdba <<EOF
	set echo on
 alter pluggable database maaz close immediate instances=('CDBATL_1');
	exit
EOF

pause "next... status pdb"

clear

sqlplus sys/racattack@cdbatl as sysdba <<EOF
	set echo on
	select INST_ID, CON_ID, name, OPEN_MODE
	 from gv\$pdbs
	  where con_id!=2
	  order by name, inst_id;
	exit
EOF

pause "next... modify service to uniform"

clear

cmd="srvctl modify service -db CDBATL -service maazapp -cardinality uniform"
echo \$ $cmd
eval $cmd

pause "next... status pdb"
clear
sqlplus sys/racattack@cdbatl as sysdba <<EOF
	set echo on
	select INST_ID, CON_ID, name, OPEN_MODE
	 from gv\$pdbs
	  where con_id!=2
	  order by name, inst_id;
	exit
EOF

echo
cmd="srvctl status service -service maazapp -db cdbatl"
echo \$ $cmd
eval $cmd 

exit

 

Demo 2 script

 

txtblk='\e[0;30m' # Black - Regular
txtred='\e[0;31m' # Red
txtgrn='\e[0;32m' # Green
txtrst='\e[0m'    # Text Reset

function echop () {
	echo
	echo -e "${txtgrn}$*${txtrst}"
}

function echos () {
	echo
	echo -e "${txtred}$*${txtrst}"
}

function pause() {
	echo
	read -p "$*"
	echo
}

clear

echop "Status of the PRIMARY DATABASE"
sqlplus sys/racattack@cdbatl as sysdba <<EOF
	set echo on
	select db_unique_name, database_role from v\$database;
	select inst_id, con_id, name,open_mode from gv\$pdbs where con_id!=2 order by con_id, inst_id;
	exit
EOF

pause "next... standby status"
clear
echos "Status of the STANDBY DATABASE"

sqlplus sys/racattack@cdbgva as sysdba <<EOF
	set echo on
	select db_unique_name, database_role from v\$database;
	select open_mode from v\$database;
	select inst_id, con_id, name,open_mode from gv\$pdbs where con_id!=2 order by con_id, inst_id;
	exit
EOF



pause "next... dgmgrl status"
clear
echos "Data Guard configuration and status of the STANDBY database"
dgmgrl <<EOF
connect sys/racattack
show configuration;
show database 'CDBGVA';
exit
EOF


pause "please do tail -f on the apply instance"
pause "next... create new pdb ludo on primary "
clear

echop "Create new pluggable database on the primary: "
echop "create pluggable database ludo admin user ludoadmin identified by ludoadmin;"
sqlplus sys/racattack@cdbatl as sysdba <<EOF
	set echo on
	create pluggable database ludo admin user ludoadmin identified by ludoadmin;

	select inst_id, con_id, name,open_mode from gv\$pdbs where con_id!=2 order by con_id, inst_id;
	exit
EOF

pause "next... create service for primary on both clusters"
clear

echop "Create service for primary ROLE on the primary cluster (CDBATL) via SSH"

cmd="srvctl add service -db CDBATL -service  ludoapp -serverpool CDBPOOL -cardinality singleton -role primary -failovertype select -failovermethod basic -policy automatic -failoverdelay 1 -failoverretry 180 -pdb ludo"
echo "\$ ssh raca01 $cmd"
ssh raca01 ". /home/oracle/.bash_profile ; $cmd"

echos "Create service for primary ROLE on the standby cluster (CDBGVA)"

cmd="srvctl add service -db CDBGVA -service  ludoapp -serverpool CDBPOOL -cardinality singleton -role primary -failovertype select -failovermethod basic -policy automatic -failoverdelay 1 -failoverretry 180 -pdb ludo"
echo "\$ $cmd"
eval $cmd


pause "next... start service on primary"
clear
echop "Starting service on the primary via SSH"
cmd="srvctl start service -db CDBATL -service  ludoapp"
echo "\$ ssh raca01 $cmd"
ssh raca01 ". /home/oracle/.bash_profile ; $cmd"



pause "next... create read only service for physical standby on both clusters"
clear

echop "Creating temporarily the readonly service for PRIMARY ROLE on the primary cluster (CDBATL) via SSH"
cmd="srvctl add service -db CDBATL -service  ludoread -serverpool CDBPOOL -cardinality singleton -role primary -failovertype select -failovermethod basic -policy automatic -failoverdelay 1 -failoverretry 180 -pdb ludo"
echo "\$ ssh raca01 $cmd"
ssh raca01 ". /home/oracle/.bash_profile ; $cmd"

echop "Starting the readonly service for PRIMARY ROLE on the primary cluster (CDBATL) via SSH"
cmd="srvctl start service -db CDBATL -service  ludoread"
echo "\$ ssh raca01 $cmd"
ssh raca01 ". /home/oracle/.bash_profile ; $cmd"

echop "Modifying the readonly service from PRIMARY ROLE to PHYSICAL STANDBY on the primary cluster (CDBATL) via SSH"
cmd="srvctl modify service -db CDBATL -service  ludoread -role physical_standby -pdb ludo"
echo "\$ ssh raca01 $cmd"
ssh raca01 ". /home/oracle/.bash_profile ; $cmd"


echos "Creating the readonly service for PHYSICAL STANDBY on the standby cluster (CDBGVA)"
cmd="srvctl add service -db CDBGVA -service  ludoread -serverpool CDBPOOL -cardinality singleton -role physical_standby -failovertype select -failovermethod basic -policy automatic -failoverdelay 1 -failoverretry 180 -pdb ludo"
echo "\$ $cmd"
eval $cmd

pause "next... start read only service"

clear
echos "Starting the readonly service on the standby cluster"

cmd="srvctl start service -db CDBGVA -service  ludoread"
echo "\$ $cmd"
eval $cmd


pause "next... standby status"
clear

echos "Standby status"
sqlplus sys/racattack@cdbgva as sysdba <<EOF
	select db_unique_name, database_role from v\$database;
	select open_mode from v\$database;
	select inst_id, con_id, name,open_mode from gv\$pdbs where con_id!=2 order by con_id, inst_id;
	exit
EOF

pause "please connect to the RW service"

pause "next... dgmgrl status and validate"
clear

echos "Validate Standby database"

dgmgrl <<EOF
connect sys/racattack
show configuration;
validate database 'CDBGVA';
exit
EOF

pause "next... switchover to CDBGVA"
clear

echos "Switchover to CDBGVA! (it takes a while)"
dgmgrl <<EOF
connect sys/racattack
switchover to 'CDBGVA';
exit
EOF

There’s one slide describing the procedure for cloning a PDB using the standbys clause. Oracle released a note while I was preparing my slides (one month ago) and I wasn’t aware of it, so you may also check out this note on MOS:

Making Use of the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1)

UPDATE: I’ve blogged about it in a more recent post: Tales from the Demo Grounds part 2: cloning a PDB with ASM and Data Guard (no ADG)

UPDATE 2: I’ve written another blog post about these topics: Cloning a PDB with ASM and Data Guard (no ADG) without network transfer

Cheers!

 

Ludovico

Tales from Demo Grounds part 1: Clone PDBs while open READ-WRITE


DISCLAIMER: I got this information by chatting with Oracle developers at the Demo Grounds. The functionality is not documented yet and Oracle may change it at its sole discretion. Please refer to the documentation if/when it gets updated 😉

In one of my previous posts named “A PDB is cloned while in read-write, Data Guard loose its marbles (12.1.0.2, ORA-19729)” I’ve blogged about a weird behaviour:

The documentation states that you can create a pluggable database from another one only if the source PDB is open read-only.

Indeed, if I try to clone it when the source PDB is MOUNTED, I get error ORA-65036:

15:44:24 SYS@CDBATL_1> select inst_id, name, open_mode from gv$pdbs where name='MAAZ';

INST_ID NAME OPEN_MODE
---------- ------------------------------ ----------
1 MAAZ MOUNTED
2 MAAZ MOUNTED

15:53:43 SYS@CDBATL_1> create pluggable database ludo from maaz;
create pluggable database ludo from maaz
*
ERROR at line 1:
ORA-65036: pluggable database MAAZ not open in required mode

The weird behavior is that if you do it while the source is in read-write mode, it works, starting from release 12.1.0.2 (onward?).
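For comparison, this is a sketch of the undocumented behavior (same PDB names as above): with the source open read-write instead of mounted or read-only, the clone goes through on 12.1.0.2:

-- reopen the source PDB read-write on all instances
alter pluggable database maaz open read write instances=all;

-- on 12.1.0.2 this completes even though MAAZ is not read-only (undocumented behavior)
create pluggable database ludo from maaz;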

I questioned the developers at the Demo Grounds and they confirmed that:

  • With 12.1.0.2, they had initially planned to disclose this functionality (cloning PDBs in READ-WRITE).
  • They had problems making it work with an Active Data Guard environment (a-ha! so my post was not completely wrong).
  • In the end they released it as an undocumented feature.
  • In the next release “they will fix it, maybe” and document it.
  • The process of cloning the PDB freezes the transactions on the source anyway.

I hope this update helps clarify both the behavior and my previous post about this problem! :-)

Cheers

Ludo

Tales from the Demo Grounds part 2: cloning a PDB with ASM and Data Guard (no ADG)


In my #OOW14 presentation about MAA and Multitenant, more precisely at slide #59, “PDB Creation from other PDB without ADG*”, I list a few commands that you can use to achieve a “correct” Pluggable Database clone in case you’re not using Active Data Guard.

What’s the problem with cloning a PDB in a MAA environment without ADG? If you’ve attended my session you should know the answer…

If you read the book “Data Guard Concepts and Administration 12c Release 1 (12.1)“, paragraph 3.5 Creating a PDB in a Primary Database, you’ll see that:

If you plan to create a PDB as a clone from a different PDB, then copy the data files that belong to the source PDB over to the standby database. (This step is not necessary in an Active Data Guard environment because the data files are copied automatically when the PDB is created on the standby database.)

But because there’s a good chance (99%?) that in a MAA environment you’re using ASM, this step is not so simple: with OMF you cannot copy the datafiles exactly where you want, and the recovery process expects the files to be where the controlfile says they should be.

So, if you clone the PDB, the recovery process on the standby doesn’t find the datafiles at the correct location, so it stops and will not start again until you fix things manually. That’s why Oracle has implemented the new syntax “STANDBYS=NONE”, which disables recovery on the standby for a specific PDB: it lets you disable the recovery temporarily for that PDB while the recovery process continues to apply logs on the remaining PDBs. (Note, however, that this feature is not intended as a generic solution for having non-replicated PDBs. The recommended solution in that case is having two distinct CDBs, one protected by Data Guard, the other not.)
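For clarity, the clause is just an extra keyword on the clone statement (the PDB names DEST and SRC are placeholders, as in the procedure below):

SQL> create pluggable database DEST from SRC standbys=none;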

With ADG, when you clone the PDB on the primary, ADG takes care of the following steps on the standby, no matter whether it is on ASM or on a filesystem:

  1. recover up to the point where the file# is registered in the controlfile
  2. copy the datafiles from the source DB ON THE STANDBY DATABASE (so no copy over the network)
  3. rename the datafile in the controlfile
  4. continue with the recovery

If you don’t have ADG and you’re on ASM, the Oracle documentation says nothing in enough detail to let you solve the problem. So in August I worked out the “easy” solution that I’ve also included in my slides (#59 and #60):

  1. SQL> create pluggable database DEST from SRC standbys=none;
  2. RMAN> backup as copy pluggable database DEST format '/tmp/dest%f.dbf';
  3. $ scp /tmp/dest*.dbf remote:/tmp
  4. RMAN> catalog start with '/tmp/dest'
  5. RMAN> set newname for pluggable database DEST to new;
  6. RMAN> restore pluggable database DEST;
  7. RMAN> switch pluggable database DEST to copy;
  8. DGMGRL> edit database 'STBY' set state='APPLY-OFF';
  9. SQL> alter pluggable database DEST enable recovery;
  10. DGMGRL> edit database 'STBY' set state='APPLY-ON';

Once at #OOW14, after endless conversations at the Demo Grounds, I discovered that Oracle had worked out the very same solution, requiring network transfer, and that it has been documented in a new note.

Making Use of the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1)

This note is very informative and I recommend reading it carefully!

What changes (for the better) in comparison with my first solution is that Oracle suggests using the new feature “restore from service”:

RMAN> run{
2> set newname for pluggable database DEST to new;
3> restore pluggable database DEST from service prim;
4> switch datafile all;
5> }

I questioned the developers at the Demo Grounds about the need for a network transfer (I had the chance to speak directly with the developer who wrote this piece of code!! :-)) and they said that this is the only solution they had worked out. So, if you have a huge PDB to clone, the network transfer from the primary to the standby may severely impact your Data Guard environment and/or your whole infrastructure for the duration of the transfer.

Of course, I have a complex, undocumented solution; I hope I’ll find the time to document it, so stay tuned if you’re curious! :-)

Cloning a PDB with ASM and Data Guard (no ADG) without network transfer


OK, if you’re reading this post, you may also want to read the previous one, which explains a bit more about the problem.

In short, if you have a CDB running on ASM in a MAA architecture and you do not have Active Data Guard, when you clone a PDB you have to “copy” the datafiles somehow to the standby. The only solution offered by Oracle (in a MOS note, not in the documentation) is to restore the PDB from the primary to the standby site, thus transferring it over the network. But if you have a huge PDB this is a bad solution because it impacts your network connectivity. (Note: ending up with a huge PDB, IMHO, can only be caused by bad consolidation. I do not recommend consolidating huge databases on Multitenant.)

So I’ve worked out another solution; it still has many defects and is barely viable, but it’s technically interesting because it lets you discover a little more about Multitenant and Data Guard.

The three options

At the primary site, the process is always the same: Oracle copies the datafiles of the source, and it modifies the headers so that they can be used by the new PDB (so it changes CON_ID, DBID, FILE#, and so on).

On the standby site, by contrast, what happens depends on the option you choose:

Option 1: Active Data Guard

If you have ADG, ADG itself takes care of copying the datafiles on the standby site, from the source standby PDB to the destination standby PDB. Once the copy is done, MRP0 continues the recovery. The modification of the header block of the destination PDB is done by MRP0 immediately after the copy (at least, this is my understanding).


Option 2: No Active Data Guard, but STANDBYS=none

In this case, the copy on the standby site doesn’t happen, and the recovery process just adds the entries for the new datafiles in the controlfile, with status OFFLINE and name UNKNOWNxxx. However, the source files cannot be copied anymore, because the MRP0 process expects to have a copy of the destination datafile, not the source datafile. Also, any attempt to restore datafile 28 (in this example) will give an error, because it does not belong to the destination PDB. So the only option is to restore the destination PDB from the primary.

Option 3: No Active Data Guard, no STANDBYS=none

This is the case I actually want to explain. Without the clause STANDBYS=none, the MRP0 process expects to change the header of the new datafile, but because the file does not exist yet, the recovery process dies.
We can then copy it manually from the source standby PDB and restart the recovery process, which will change the header. This needs to be repeated for each datafile (that’s why it’s not a viable solution, right now).


Let’s try it together:

The Environment

Primary

08:13:08 SYS@CDBATL_2> select db_unique_name, instance_name from v$database, gv$instance;

DB_UNIQUE_NAME                 INSTANCE_NAME
------------------------------ ----------------
CDBATL                         CDBATL_2
CDBATL                         CDBATL_1

Standby

07:35:56 SYS@CDBGVA_2> select db_unique_name, instance_name from v$database, gv$instance;

DB_UNIQUE_NAME                 INSTANCE_NAME
------------------------------ ----------------
CDBGVA                         CDBGVA_1
CDBGVA                         CDBGVA_2

The current user PDB (any resemblance to real people is purely coincidental 😉 #haveUSeenMaaz):

08:14:31 SYS@CDBATL_2> select open_mode, name from gv$pdbs where name='MAAZ';

OPEN_MODE  NAME
---------- ------------------------------
OPEN       MAAZ
OPEN       MAAZ

Cloning the PDB on the primary

First, make sure that the source PDB is open read-only

08:45:54 SYS@CDBATL_2> alter pluggable database maaz close immediate instances=all;

Pluggable database altered.

08:46:20 SYS@CDBATL_2> alter pluggable database maaz open read only instances=all;

Pluggable database altered.

08:46:32 SYS@CDBATL_2> select open_mode, name from gv$pdbs where name='MAAZ' ;

OPEN_MODE  NAME
---------- ------------------------------
READ ONLY  MAAZ
READ ONLY  MAAZ

Then, clone the PDB on the primary without the clause STANDBYS=NONE:

08:46:41 SYS@CDBATL_2> create pluggable database LUDO from MAAZ;

Pluggable database created.

Review the clone on the Standby

At this point, on the standby, the alert log shows that the SYSTEM datafile is missing, and the recovery process stops.

Mon Dec 15 17:46:11 2014
Recovery created pluggable database LUDO
Mon Dec 15 17:46:11 2014
Errors in file /u01/app/oracle/diag/rdbms/cdbgva/CDBGVA_2/trace/CDBGVA_2_mrp0_16464.trc:
ORA-01565: error in identifying file '+DATA'
ORA-17503: ksfdopn:2 Failed to open file +DATA
ORA-15045: ASM file name '+DATA' is not in reference form
Recovery was unable to create the file as:
'+DATA'
MRP0: Background Media Recovery terminated with error 1274
Mon Dec 15 17:46:11 2014
Errors in file /u01/app/oracle/diag/rdbms/cdbgva/CDBGVA_2/trace/CDBGVA_2_mrp0_16464.trc:
ORA-01274: cannot add data file that was originally created as '+DATA/CDBATL/0A4A0048D5321597E053334EA8C0E40A/DATAFILE/system.825.866396765'
Mon Dec 15 17:46:11 2014
Managed Standby Recovery not using Real Time Apply
Mon Dec 15 17:46:11 2014
Recovery interrupted!
Recovery stopped due to failure in applying recovery marker (opcode 17.34).
Datafiles are recovered to a consistent state at change 10433175 but controlfile could be ahead of datafiles.
Mon Dec 15 17:46:11 2014
Errors in file /u01/app/oracle/diag/rdbms/cdbgva/CDBGVA_2/trace/CDBGVA_2_mrp0_16464.trc:
ORA-01274: cannot add data file that was originally created as '+DATA/CDBATL/0A4A0048D5321597E053334EA8C0E40A/DATAFILE/system.825.866396765'
Mon Dec 15 17:46:11 2014
MRP0: Background Media Recovery process shutdown (CDBGVA_2)

One remarkable thing is that, in the standby controlfile, ONLY THE SYSTEM DATAFILE exists:

18:02:50 SYS@CDBGVA_2> select con_id from v$pdbs where name='LUDO';

    CON_ID
----------
         4

18:03:10 SYS@CDBGVA_2> select name from v$datafile where con_id=4;

NAME
---------------------------------------------------------------------------
+DATA/CDBATL/0A4A0048D5321597E053334EA8C0E40A/DATAFILE/system.825.866396765

We need to fix the datafiles one by one, but most of the steps can be done once for all the datafiles.

Copy the source PDB from the standby

What do we need to do? Well, the recovery process is stopped, so we can safely copy the datafiles of the source PDB from the standby site, because they have not moved yet. (Meanwhile, we can put the primary source PDB back in read-write mode.)

-- on primary
08:58:07 SYS@CDBATL_2> alter pluggable database maaz close immediate instances=all;

Pluggable database altered.

08:58:15 SYS@CDBATL_2> alter pluggable database maaz open read write instances=all;

Pluggable database altered.

Copy the datafiles:

## on the standby:
RMAN> backup as copy pluggable database MAAZ;

Starting backup at 15-DEC-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=58 instance=CDBGVA_2 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00029 name=+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/sysaux.463.857404625
output file name=+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/sysaux.863.866397043 tag=TAG20141215T175041 RECID=54 STAMP=866397046
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00028 name=+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/system.283.857404623
output file name=+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/system.864.866397049 tag=TAG20141215T175041 RECID=55 STAMP=866397051
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 15-DEC-14

Starting Control File and SPFILE Autobackup at 15-DEC-14
piece handle=+DATA/CDBGVA/AUTOBACKUP/2014_12_15/s_866396771.865.866397053 comment=NONE
Finished Control File and SPFILE Autobackup at 15-DEC-14

Do the magic

Now comes the interesting part: we need to assign the datafile copies of the MAAZ PDB to LUDO.

Sadly, OMF creates the copies in the wrong location (it’s a copy, so they are created in the same location as the source PDB).

We cannot just uncatalog and recatalog the copies, because they will ALWAYS be assigned to the source PDB. Nor can we use RMAN, because it will never associate the datafile copies with the new PDB. We need to rename the files manually.

RMAN> list datafilecopy all;

List of Datafile Copies
=======================

Key File S Completion Time Ckp SCN Ckp Time
------- ---- - --------------- ---------- ---------------
55 28 A 15-DEC-14 10295232 14-DEC-14
 Name: +DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/system.864.86639709
 Tag: TAG20141215T175041

54 29 A 15-DEC-14 10295232 14-DEC-14
 Name: +DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/sysaux.863.86639703
 Tag: TAG20141215T175041


RMAN> select name, guid from v$pdbs;

NAME       GUID
---------- --------------------------------
PDB$SEED   FFBCECBB503D606BE043334EA8C019B7
MAAZ       0243BF7B39D4440AE053334EA8C0E471
LUDO       0A4A0048D5321597E053334EA8C0E40A

It’s better to uncatalog the datafile copies first, so we keep the catalog clean:

RMAN> change datafilecopy '+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/system.864.866397049' uncatalog;

uncataloged datafile copy
datafile copy file name=+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/system.864.866397049 RECID=55 STAMP=866397051
Uncataloged 1 objects


RMAN> change datafilecopy '+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/sysaux.863.866397043' uncatalog;

uncataloged datafile copy
datafile copy file name=+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/sysaux.863.866397043 RECID=54 STAMP=866397046
Uncataloged 1 objects

Then, because we cannot rename files on a standby database with standby file management set to AUTO, we need to set it temporarily to MANUAL.

10:24:21 SYS@CDBGVA_2> alter database rename file '+DATA/CDBATL/0A4A0048D5321597E053334EA8C0E40A/DATAFILE/system.825.866396765' to '+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/system.864.866397049';
alter database rename file '+DATA/CDBATL/0A4A0048D5321597E053334EA8C0E40A/DATAFILE/system.825.866396765' to '+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/system.864.866397049'
*
ERROR at line 1:
ORA-01275: Operation RENAME is not allowed if standby file management is automatic.

10:27:49 SYS@CDBGVA_2> select name, ispdb_modifiable from v$parameter where name like 'standby%';

NAME                                                         ISPDB
------------------------------------------------------------ -----
standby_archive_dest                                         FALSE
standby_file_management                                      FALSE

standby_file_management is not PDB modifiable, so we need to do it for the whole CDB.

10:31:42 SYS@CDBGVA_2> alter system set standby_file_management=manual;

System altered.

18:05:04 SYS@CDBGVA_2> alter database rename file '+DATA/CDBATL/0A4A0048D5321597E053334EA8C0E40A/DATAFILE/system.825.866396765' to '+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/system.864.866397049';

Database altered.

Then we need to set standby_file_management back to AUTO or the recovery will not start:

10:34:24 SYS@CDBGVA_2> alter system set standby_file_management=auto;
System altered.

We can now restart the recovery.

The recovery process will:
  • change the new datafile by modifying the header for the new PDB
  • create the entry for the second datafile in the controlfile
  • crash again because the datafile is missing

18:11:30 SYS@CDBGVA_2> alter database recover managed standby database;
alter database recover managed standby database
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-01111: name for data file 61 is unknown - rename to correct file
ORA-01110: data file 61: '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/UNNAMED00061'
ORA-01157: cannot identify/lock data file 61 - see DBWR trace file
ORA-01111: name for data file 61 is unknown - rename to correct file
ORA-01110: data file 61: '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/UNNAMED00061'


18:11:33 SYS@CDBGVA_2> select name from v$datafile where con_id=4;

NAME
---------------------------------------------------------------------------
+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/system.864.866397049
/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/UNNAMED00061

We already have the SYSAUX datafile, right? So we can rename it as well:

18:14:21 SYS@CDBGVA_2> alter system set standby_file_management=manual;

System altered.

18:14:29 SYS@CDBGVA_2> alter database rename file '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/UNNAMED00061' to '+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/sysaux.863.866397043';

Database altered.

18:14:31 SYS@CDBGVA_2> alter system set standby_file_management=auto;

System altered.

18:14:35 SYS@CDBGVA_2> alter database recover managed standby database;

This time all the datafiles have been copied (no user datafiles in this example) and the recovery process continues!! :-) So we can hit ^C and restart it in the background.

18:14:35 SYS@CDBGVA_2> alter database recover managed standby database;
alter database recover managed standby database
*
ERROR at line 1:
ORA-16043: Redo apply has been canceled.
ORA-01013: user requested cancel of current operation

 

18:18:10 SYS@CDBGVA_2> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

Database altered.

18:18:19 SYS@CDBGVA_2>

The Data Guard configuration reflects the success of this operation.

Do we miss anything?

Of course we do!! The datafiles of the new PDB reside in the wrong ASM path. We need to fix them!

18:23:07 SYS@CDBGVA_2> alter database recover managed standby database cancel;

Database altered.

RMAN> backup as copy pluggable database ludo;

Starting backup at 15-DEC-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=60 instance=CDBGVA_2 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00061 name=+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/sysaux.863.866397043
output file name=+DATA/CDBGVA/0A4A0048D5321597E053334EA8C0E40A/DATAFILE/sysaux.866.866398933 tag=TAG20141215T182213 RECID=56 STAMP=866398937
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00060 name=+DATA/CDBGVA/0243BF7B39D4440AE053334EA8C0E471/DATAFILE/system.864.866397049
output file name=+DATA/CDBGVA/0A4A0048D5321597E053334EA8C0E40A/DATAFILE/system.867.866398941 tag=TAG20141215T182213 RECID=57 STAMP=866398943
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 15-DEC-14

Starting Control File and SPFILE Autobackup at 15-DEC-14
piece handle=+DATA/CDBGVA/AUTOBACKUP/2014_12_15/s_866398689.868.866398945 comment=NONE
Finished Control File and SPFILE Autobackup at 15-DEC-14

RMAN> switch pluggable database ludo to copy;

using target database control file instead of recovery catalog
datafile 60 switched to datafile copy "+DATA/CDBGVA/0A4A0048D5321597E053334EA8C0E40A/DATAFILE/system.867.866398941"
datafile 61 switched to datafile copy "+DATA/CDBGVA/0A4A0048D5321597E053334EA8C0E40A/DATAFILE/sysaux.866.866398933"

18:23:54 SYS@CDBGVA_2> select name from v$datafile where con_id=4;

NAME
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+DATA/CDBGVA/0A4A0048D5321597E053334EA8C0E40A/DATAFILE/system.867.866398941
+DATA/CDBGVA/0A4A0048D5321597E053334EA8C0E40A/DATAFILE/sysaux.866.866398933

 

I know there’s no practical use for this procedure, but it helps a lot in understanding how Multitenant has been implemented.

I expect some improvements in 12.2!!

Cheers

Ludo

 

My Collaborate 14 articles about Active Data Guard 12c and Policy Managed Databases


After almost a year, I’ve decided to publish these articles on my Slideshare account. You may have already seen them in the IOUG Collaborate 14 conference content or in the SOUG Newsletter 2014/4. Nothing really new, but I hope you’ll still enjoy them.


Cheers

Ludo


It’s time to Collaborate again!!


In a little more than a couple of weeks, the great Collaborate conference will start again.

My agenda will be quite packed again, as speaker, panelist and workshop organizer:

  • 08/04/2015, 3:15 pm - 4:15 pm: Oracle RAC, Data Guard, and Pluggable Databases: When MAA Meets Oracle Multitenant (IOUG Collaborate 15, Las Vegas NV)
  • 08/04/2015, 4:30 pm - 5:30 pm: Panel: Nothing to BLOG About - Think Again (IOUG Collaborate 15, Las Vegas NV)
  • 12/04/2015, 9:00 am - 4:00 pm: RAC Attack! 12c (IOUG Collaborate 15, Las Vegas NV)
  • 15/04/2015, 5:30 pm - 6:00 pm: IOUG RAC SIG Meeting (IOUG Collaborate 15, Las Vegas NV)

 

RAC Attack! 12c

This technical workshop and networking event (never forget it’s a project created several years ago thanks to an intuition of Jeremy Schneider) keeps proving to be one of the best, longest-living projects in the Oracle Community. It certainly boosted my Community involvement, up to becoming an Oracle ACE. This year I’m coordinating the organization of the workshop, so it’s a double satisfaction and it will certainly be a lot of fun again. Did I say that it’s already fully booked? I’ve already blogged about it (and about what the lucky participants will get) here.

 

Oracle RAC, Data Guard, and Pluggable Databases: When MAA Meets Oracle Multitenant 

One of my favorite presentations: I’ve already given it at OOW14 and UKOUG Tech14, but it’s still a very new topic for most people, even the most experienced DBAs. You’ll learn how Multitenant, RAC and Data Guard work together. Expect colorful architecture diagrams and a live demo! You can read more about it in this post.

 

Panel: Nothing to BLOG About – Think Again

My friend Michael Abbey (Pythian) invited me to participate in his panel about blogging. It’s my first time as a panelist, so I’m very excited!

 

IOUG RAC SIG Meeting

Missing this great networking event is not an option! I’m organizing this session as a RAC SIG board member (thanks to the IOUG for this opportunity!). We’ll focus on the role of Real Application Clusters in the private cloud and in infrastructure optimization. We’ll have many special guests, including Oracle RAC PM Markus Michalewicz, Oracle QoS PM Mark Scardina and Oracle ASM PM James Williams.

Can you ever miss it???

 

A good Trivadis representative!!


This year I’m not going to Las Vegas alone. My Trivadis colleague Markus Flechtner, one of the most expert RAC technologists I have had the chance to know, will also come and present a session about RAC diagnostics:

615: RAC Clinics - Starring Dr. ORACHK, Dr. CHM and Dr. TFA

Mon. April 13 | 9:15 AM – 10:15 AM | Room Palm D

If you speak German you can follow his nice blog: http://oracle.markusflechtner.de/

Looking forward to meeting you there!

Ludovico

SQL Plan Directives: they’re always good… except when they’re bad!


The new Oracle 12c optimizer adaptive features are just great and work well out of the box in most cases.

Recently, however, I experienced my very first problem with SQL Plan Directives while migrating a database to 12c, so I would like to share it.

Disclaimer 1: this is a specific problem that I found on ONE system. My solution may not fit your environment; don’t use it if you are not sure about what you’re doing!

Disclaimer 2: although I had this problem with a single SPD, I like the adaptive features and I encourage you to use them!!

Problem: a query takes a sub-second in 11gR2, in 12c it takes 12 seconds or more.

--11gR2
SQL> select * from APPUSER.V_TAB_PROP where TAB_ID = 842300;

...

48 rows selected.

Elapsed: 00:00:00.71

 
--12c
SQL> select * from APPUSER.V_TAB_PROP where TAB_ID = 842300;

...

48 rows selected.

Elapsed: 00:00:12.74

V_TAB_PROP is a very simple view. It just selects a central table “TAB” and then takes different properties by joining  a property table “TAB_PROP”.

To do that, it does 11 joins on the same property table.

create view ... as
select ...
from TAB li
left join
(select v.TAB_PROP_ID, v.PROP_VAL as c89 from  TAB_PROP v where v.PROP_ID = 89) v89 on li.TAB_PROP_ID = v89.TAB_PROP_ID
left join                          
(select v.TAB_PROP_ID, v.PROP_VAL as c88 from  TAB_PROP v where v.PROP_ID = 88) v88 on li.TAB_PROP_ID = v88.TAB_PROP_ID
left join                          
(select v.TAB_PROP_ID, v.PROP_VAL as c90 from  TAB_PROP v where v.PROP_ID = 90) v90 on li.TAB_PROP_ID = v90.TAB_PROP_ID
left join                          
(select v.TAB_PROP_ID, v.PROP_VAL as c82 from  TAB_PROP v where v.PROP_ID = 82) v82 on li.TAB_PROP_ID = v82.TAB_PROP_ID
left join                          
(select v.TAB_PROP_ID, v.PROP_VAL as c84 from  TAB_PROP v where v.PROP_ID = 84) v84 on li.TAB_PROP_ID = v84.TAB_PROP_ID
left join                          
(select v.TAB_PROP_ID, v.PROP_VAL as c93 from  TAB_PROP v where v.PROP_ID = 93) v93 on li.TAB_PROP_ID = v93.TAB_PROP_ID
left join                          
(select v.TAB_PROP_ID, v.PROP_VAL as c79 from  TAB_PROP v where v.PROP_ID = 79) v79 on li.TAB_PROP_ID = v79.TAB_PROP_ID
left join                          
(select v.TAB_PROP_ID, v.PROP_VAL as c81 from  TAB_PROP v where v.PROP_ID = 81) v81 on li.TAB_PROP_ID = v81.TAB_PROP_ID
left join                          
(select v.TAB_PROP_ID, v.PROP_VAL as c96 from  TAB_PROP v where v.PROP_ID = 96) v96 on li.TAB_PROP_ID = v96.TAB_PROP_ID
left join                          
(select v.TAB_PROP_ID, v.PROP_VAL as c95 from  TAB_PROP v where v.PROP_ID = 95) v95 on li.TAB_PROP_ID = v95.TAB_PROP_ID
left join                          
(select v.TAB_PROP_ID, v.PROP_VAL as c94 from  TAB_PROP v where v.PROP_ID = 94) v94 on li.TAB_PROP_ID = v94.TAB_PROP_ID
;

On the property table, TAB_PROP_ID and PROP_ID are unique (they compose the primary key), so nested loops and index unique scans are the best way to get this data.
The table is 1500 MB and the index 1000 MB.
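To make the plans easier to follow, this is roughly the shape of the property table and of its primary key, reconstructed from the view definition and the predicates in the plans below (column data types are my assumption):

create table TAB_PROP (
  TAB_PROP_ID number,
  PROP_ID     number,
  PROP_VAL    varchar2(100),
  constraint PK_TAB_PROP primary key (TAB_PROP_ID, PROP_ID)
);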

This was the plan in 11g:

----------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |                 |       |       |  3401 (100)|          |
|   1 |  NESTED LOOPS OUTER                    |                 |  1009 |   218K|  3401   (1)| 00:00:41 |
|   2 |   NESTED LOOPS OUTER                   |                 |   615 |   123K|  2171   (1)| 00:00:27 |
|   3 |    NESTED LOOPS OUTER                  |                 |   390 | 73320 |  1391   (1)| 00:00:17 |
|   4 |     NESTED LOOPS OUTER                 |                 |   248 | 42408 |   894   (0)| 00:00:11 |
|   5 |      NESTED LOOPS OUTER                |                 |   160 | 24640 |   574   (0)| 00:00:07 |
|   6 |       NESTED LOOPS OUTER               |                 |   104 | 14248 |   366   (0)| 00:00:05 |
|   7 |        NESTED LOOPS OUTER              |                 |    68 |  8160 |   230   (0)| 00:00:03 |
|   8 |         NESTED LOOPS OUTER             |                 |    44 |  4532 |   142   (0)| 00:00:02 |
|   9 |          NESTED LOOPS OUTER            |                 |    29 |  2494 |    84   (0)| 00:00:02 |
|  10 |           NESTED LOOPS OUTER           |                 |    19 |  1311 |    46   (0)| 00:00:01 |
|  11 |            NESTED LOOPS OUTER          |                 |    13 |   676 |    20   (0)| 00:00:01 |
|  12 |             TABLE ACCESS BY INDEX ROWID| TAB             |     8 |   280 |     4   (0)| 00:00:01 |
|* 13 |              INDEX RANGE SCAN          | FK_TAB_PROP     |     8 |       |     3   (0)| 00:00:01 |
|  14 |             TABLE ACCESS BY INDEX ROWID| TAB_PROP        |     1 |    17 |     2   (0)| 00:00:01 |
|* 15 |              INDEX UNIQUE SCAN         | PK_TAB_PROP     |     1 |       |     1   (0)| 00:00:01 |
|  16 |            TABLE ACCESS BY INDEX ROWID | TAB_PROP        |     2 |    34 |     2   (0)| 00:00:01 |
|* 17 |             INDEX UNIQUE SCAN          | PK_TAB_PROP     |     1 |       |     1   (0)| 00:00:01 |
|  18 |           TABLE ACCESS BY INDEX ROWID  | TAB_PROP        |     2 |    34 |     2   (0)| 00:00:01 |
|* 19 |            INDEX UNIQUE SCAN           | PK_TAB_PROP     |     1 |       |     1   (0)| 00:00:01 |
|  20 |          TABLE ACCESS BY INDEX ROWID   | TAB_PROP        |     2 |    34 |     2   (0)| 00:00:01 |
|* 21 |           INDEX UNIQUE SCAN            | PK_TAB_PROP     |     1 |       |     1   (0)| 00:00:01 |
|  22 |         TABLE ACCESS BY INDEX ROWID    | TAB_PROP        |     2 |    34 |     2   (0)| 00:00:01 |
|* 23 |          INDEX UNIQUE SCAN             | PK_TAB_PROP     |     1 |       |     1   (0)| 00:00:01 |
|  24 |        TABLE ACCESS BY INDEX ROWID     | TAB_PROP        |     2 |    34 |     2   (0)| 00:00:01 |
|* 25 |         INDEX UNIQUE SCAN              | PK_TAB_PROP     |     1 |       |     1   (0)| 00:00:01 |
|  26 |       TABLE ACCESS BY INDEX ROWID      | TAB_PROP        |     2 |    34 |     2   (0)| 00:00:01 |
|* 27 |        INDEX UNIQUE SCAN               | PK_TAB_PROP     |     1 |       |     1   (0)| 00:00:01 |
|  28 |      TABLE ACCESS BY INDEX ROWID       | TAB_PROP        |     2 |    34 |     2   (0)| 00:00:01 |
|* 29 |       INDEX UNIQUE SCAN                | PK_TAB_PROP     |     1 |       |     1   (0)| 00:00:01 |
|  30 |     TABLE ACCESS BY INDEX ROWID        | TAB_PROP        |     2 |    34 |     2   (0)| 00:00:01 |
|* 31 |      INDEX UNIQUE SCAN                 | PK_TAB_PROP     |     1 |       |     1   (0)| 00:00:01 |
|  32 |    TABLE ACCESS BY INDEX ROWID         | TAB_PROP        |     2 |    34 |     2   (0)| 00:00:01 |
|* 33 |     INDEX UNIQUE SCAN                  | PK_TAB_PROP     |     1 |       |     1   (0)| 00:00:01 |
|  34 |   TABLE ACCESS BY INDEX ROWID          | TAB_PROP        |     2 |    34 |     2   (0)| 00:00:01 |
|* 35 |    INDEX UNIQUE SCAN                   | PK_TAB_PROP     |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
  13 - access("LI"."TAB_ID"=842300)
  15 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=94)
  17 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=93)
  19 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=79)
  21 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=96)
  23 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=84)
  25 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=95)
  27 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=82)
  29 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=81)
  31 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=88)
  33 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=89)
  35 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=90)

In 12c, the plan switches to adaptive, and half of the joins are converted to hash joins / full table scans:

---------------------------------------------------------------------------------------------------------------------
|   Id  | Operation                                                    | Name            | Starts | E-Rows | A-Rows |
---------------------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT                                             |                 |      1 |        |     48 |
|  *  1 |  HASH JOIN RIGHT OUTER                                       |                 |      1 |    829K|     48 |
|  *  2 |   TABLE ACCESS FULL                                          | TAB_PROP        |      1 |   2486K|   2486K|
|  *  3 |   HASH JOIN OUTER                                            |                 |      1 |    539K|     48 |
|-    4 |    NESTED LOOPS OUTER                                        |                 |      1 |    539K|     48 |
|-    5 |     STATISTICS COLLECTOR                                     |                 |      1 |        |     48 |
|  *  6 |      HASH JOIN OUTER                                         |                 |      1 |    350K|     48 |
|-    7 |       NESTED LOOPS OUTER                                     |                 |      1 |    350K|     48 |
|-    8 |        STATISTICS COLLECTOR                                  |                 |      1 |        |     48 |
|  *  9 |         HASH JOIN OUTER                                      |                 |      1 |    228K|     48 |
|-   10 |          NESTED LOOPS OUTER                                  |                 |      1 |    228K|     48 |
|-   11 |           STATISTICS COLLECTOR                               |                 |      1 |        |     48 |
|  * 12 |            HASH JOIN OUTER                                   |                 |      1 |    148K|     48 |
|-   13 |             NESTED LOOPS OUTER                               |                 |      1 |    148K|     48 |
|-   14 |              STATISTICS COLLECTOR                            |                 |      1 |        |     48 |
|  * 15 |               HASH JOIN OUTER                                |                 |      1 |  96510 |     48 |
|-   16 |                NESTED LOOPS OUTER                            |                 |      1 |  96510 |     48 |
|-   17 |                 STATISTICS COLLECTOR                         |                 |      1 |        |     48 |
|  * 18 |                  HASH JOIN OUTER                             |                 |      1 |  62771 |     48 |
|-   19 |                   NESTED LOOPS OUTER                         |                 |      1 |  62771 |     48 |
|-   20 |                    STATISTICS COLLECTOR                      |                 |      1 |        |     48 |
|- * 21 |                     HASH JOIN OUTER                          |                 |      1 |  40827 |     48 |
|    22 |                      NESTED LOOPS OUTER                      |                 |      1 |  40827 |     48 |
|-   23 |                       STATISTICS COLLECTOR                   |                 |      1 |        |     48 |
|- * 24 |                        HASH JOIN OUTER                       |                 |      1 |  26554 |     48 |
|    25 |                         NESTED LOOPS OUTER                   |                 |      1 |  26554 |     48 |
|-   26 |                          STATISTICS COLLECTOR                |                 |      1 |        |     48 |
|- * 27 |                           HASH JOIN OUTER                    |                 |      1 |  17271 |     48 |
|    28 |                            NESTED LOOPS OUTER                |                 |      1 |  17271 |     48 |
|-   29 |                             STATISTICS COLLECTOR             |                 |      1 |        |     48 |
|- * 30 |                              HASH JOIN OUTER                 |                 |      1 |  11305 |     48 |
|    31 |                               NESTED LOOPS OUTER             |                 |      1 |  11305 |     48 |
|-   32 |                                STATISTICS COLLECTOR          |                 |      1 |        |     48 |
|    33 |                                TABLE ACCESS BY INDEX ROWID BATCHED| TAB             |      1 |      9 |     48 |
|  * 34 |                                  INDEX RANGE SCAN            | FK_TAB_PROP     |      1 |      9 |     48 |
|    35 |                                TABLE ACCESS BY INDEX ROWID   | TAB_PROP        |     48 |   1326 |     48 |
|  * 36 |                                 INDEX UNIQUE SCAN            | PK_TAB_PROP     |     48 |      1 |     48 |
|- * 37 |                               TABLE ACCESS FULL              | TAB_PROP        |      0 |   1326 |      0 |
|    38 |                             TABLE ACCESS BY INDEX ROWID      | TAB_PROP        |     48 |      2 |     48 |
|  * 39 |                              INDEX UNIQUE SCAN               | PK_TAB_PROP     |     48 |      1 |     48 |
|- * 40 |                            TABLE ACCESS FULL                 | TAB_PROP        |      0 |      2 |      0 |
|    41 |                          TABLE ACCESS BY INDEX ROWID         | TAB_PROP        |     48 |      2 |     48 |
|  * 42 |                           INDEX UNIQUE SCAN                  | PK_TAB_PROP     |     48 |      1 |     48 |
|- * 43 |                         TABLE ACCESS FULL                    | TAB_PROP        |      0 |      2 |      0 |
|    44 |                       TABLE ACCESS BY INDEX ROWID            | TAB_PROP        |     48 |      2 |     48 |
|  * 45 |                        INDEX UNIQUE SCAN                     | PK_TAB_PROP     |     48 |      1 |     48 |
|- * 46 |                      TABLE ACCESS FULL                       | TAB_PROP        |      0 |      2 |      0 |
|-   47 |                    TABLE ACCESS BY INDEX ROWID               | TAB_PROP        |      0 |      2 |      0 |
|- * 48 |                     INDEX UNIQUE SCAN                        | PK_TAB_PROP     |      0 |        |      0 |
|  * 49 |                   TABLE ACCESS FULL                          | TAB_PROP        |      1 |   2486K|   2486K|
|-   50 |                 TABLE ACCESS BY INDEX ROWID                  | TAB_PROP        |      0 |      2 |      0 |
|- * 51 |                  INDEX UNIQUE SCAN                           | PK_TAB_PROP     |      0 |        |      0 |
|  * 52 |                TABLE ACCESS FULL                             | TAB_PROP        |      1 |   2486K|   2486K|
|-   53 |              TABLE ACCESS BY INDEX ROWID                     | TAB_PROP        |      0 |      2 |      0 |
|- * 54 |               INDEX UNIQUE SCAN                              | PK_TAB_PROP     |      0 |        |      0 |
|  * 55 |             TABLE ACCESS FULL                                | TAB_PROP        |      1 |   2486K|   2486K|
|-   56 |           TABLE ACCESS BY INDEX ROWID                        | TAB_PROP        |      0 |      2 |      0 |
|- * 57 |            INDEX UNIQUE SCAN                                 | PK_TAB_PROP     |      0 |        |      0 |
|  * 58 |          TABLE ACCESS FULL                                   | TAB_PROP        |      1 |   2486K|   2486K|
|-   59 |        TABLE ACCESS BY INDEX ROWID                           | TAB_PROP        |      0 |      2 |      0 |
|- * 60 |         INDEX UNIQUE SCAN                                    | PK_TAB_PROP     |      0 |        |      0 |
|  * 61 |       TABLE ACCESS FULL                                      | TAB_PROP        |      1 |   2486K|   2486K|
|-   62 |     TABLE ACCESS BY INDEX ROWID                              | TAB_PROP        |      0 |      2 |      0 |
|- * 63 |      INDEX UNIQUE SCAN                                       | PK_TAB_PROP     |      0 |        |      0 |
|  * 64 |    TABLE ACCESS FULL                                         | TAB_PROP        |      1 |   2486K|   2486K|
---------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID")
   2 - filter("V"."PROP_ID"=84)
   3 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID")
   6 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID")
   9 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID")
  12 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID")
  15 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID")
  18 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID")
  21 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID")
  24 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID")
  27 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID")
  30 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID")
  34 - access("LI"."TAB_ID"=842300)
  36 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=94)
  37 - filter("V"."PROP_ID"=94)
  39 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=89)
  40 - filter("V"."PROP_ID"=89)
  42 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=93)
  43 - filter("V"."PROP_ID"=93)
  45 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=90)
  46 - filter("V"."PROP_ID"=90)
  48 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=79)
  49 - filter("V"."PROP_ID"=79)
  51 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=81)
  52 - filter("V"."PROP_ID"=81)
  54 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=96)
  55 - filter("V"."PROP_ID"=96)
  57 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=95)
  58 - filter("V"."PROP_ID"=95)
  60 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=88)
  61 - filter("V"."PROP_ID"=88)
  63 - access("LI"."TAB_PROP_ID"="V"."TAB_PROP_ID" AND "V"."PROP_ID"=82)
  64 - filter("V"."PROP_ID"=82)
 
Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - this is an adaptive plan (rows marked '-' are inactive)
   - 1 Sql Plan Directive used for this statement

However, the inflection point is never reached. The execution keeps the default plan, which has half of the joins as hash joins and the other half as nested loops.

The problem in this case is the SQL Plan Directive. Why?

There are too many distinct values for TAB_ID and the data is very skewed.
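A quick way to verify this is to look at the column statistics (a minimal sketch; owner, table and column names as used in the example above):

-- number of distinct values and histogram type for the filter column
select num_distinct, num_buckets, histogram
from   dba_tab_col_statistics
where  owner='APPUSER' and table_name='TAB' and column_name='TAB_ID';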

-- without adaptive features
SQL> alter session set optimizer_adaptive_features=false ;
 
Session altered.
 
SQL> select * from APPUSER.V_TAB_PROP where TAB_ID = 842300;
...
48 rows selected.

Elapsed: 00:00:00.23


-- with adaptive features
SQL> alter session set optimizer_adaptive_features=true;
 
Session altered.
SQL> select * from APPUSER.V_TAB_PROP where TAB_ID = 842300;
...
48 rows selected.
 
Elapsed: 00:00:13.84

The histogram on that column is OK and it always leads to the correct plan (with the adaptive features disabled).
But there are still some “minor” misestimates, and the optimizer sometimes decides to create a SQL Plan directive:

SQL> select DIRECTIVE_ID, TYPE, ENABLED, REASON, NOTES from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where object_name='TAB_PROP');

        DIRECTIVE_ID TYPE             ENA REASON                             
NOTES
-------------------- ---------------- --- ------------------------------------ --------------------------------------------------------------------------------
5347794880142580861 DYNAMIC_SAMPLING YES JOIN CARDINALITY MISESTIMATE        
<spd_note><internal_state>PERMANENT</internal_state><redundant>NO</redundant><spd_text>{F(APPUSER.TAB) - F(APPUSER.TAB_PROP)}</spd_text></spd_note>
 
5473412518742433352 DYNAMIC_SAMPLING YES JOIN CARDINALITY MISESTIMATE        
<spd_note><internal_state>HAS_STATS</internal_state><redundant>NO</redundant><spd_text>{(APPUSER.TAB) - F(APPUSER.TAB) - F(APPUSER.TAB_PROP)}</spd_text></spd_note>
 
14420228120434685523 DYNAMIC_SAMPLING YES JOIN CARDINALITY MISESTIMATE        
<spd_note><internal_state>HAS_STATS</internal_state><redundant>NO</redundant><spd_text>{F(APPUSER.CHAMP) - (APPUSER.TAB) - F(APPUSER.TAB) - F(APPUSER.TAB_PROP)}</spd_text></spd_note>

The Directive instructs the optimizer to do dynamic sampling, but with such a big and skewed table this is not OK, so the dynamic sampling result is worse than using the histogram. I can check it by simplifying the query to just one join:

-- with dynamic sampling/sql plan directive:
-------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Starts | E-Rows | A-Rows |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |      1 |        |     48 |
|   1 |  NESTED LOOPS OUTER                  |                 |      1 |  11305 |     48 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| TAB             |      1 |      9 |     48 |
|*  3 |    INDEX RANGE SCAN                  | FK_TAB_PROP     |      1 |      9 |     48 |
|   4 |   TABLE ACCESS BY INDEX ROWID        | TAB_PROP        |     48 |   1326 |     48 |
|*  5 |    INDEX UNIQUE SCAN                 | PK_TAB_PROP     |     48 |      1 |     48 |
-------------------------------------------------------------------------------------------
 
-- without dynamic sampling
-------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Starts | E-Rows | A-Rows |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |      1 |        |     48 |
|   1 |  NESTED LOOPS OUTER                  |                 |      1 |     13 |     48 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| TAB             |      1 |      9 |     48 |
|*  3 |    INDEX RANGE SCAN                  | FK_TAB_PROP     |      1 |      9 |     48 |
|   4 |   TABLE ACCESS BY INDEX ROWID        | TAB_PROP        |     48 |      2 |     48 |
|*  5 |    INDEX UNIQUE SCAN                 | PK_TAB_PROP     |     48 |      1 |     48 |
-------------------------------------------------------------------------------------------

What’s the fix?

I’ve tried to drop the directive first, but it reappears as soon as there are new misestimates.
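For the record, dropping a directive is a one-liner (a minimal sketch, using one of the directive IDs shown above; the optimizer will simply create a new one at the next misestimate):

-- flush pending directives to SYSAUX first, then drop by ID
exec DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE
exec DBMS_SPD.DROP_SQL_PLAN_DIRECTIVE(5347794880142580861)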
The best solution in my case has been to disable the directive, an operation that can be done easily with the DBMS_SPD package:

BEGIN
  FOR rec in (select d.directive_id as did from dba_sql_plan_directives d join dba_sql_plan_dir_objects o on
    (d.directive_id=o.directive_id) where o.owner='APPUSER' and o.object_name in ('TAB','TAB_PROP'))
  LOOP
    DBMS_SPD.ALTER_SQL_PLAN_DIRECTIVE ( rec.did, 'ENABLED','NO');
  END LOOP;
END;
/

I did this on a QAS environment.
Because the production system has not been migrated to 12c yet, it is wise to import these disabled directives into production before the optimizer creates and enables them there.

-- export from the source
SET SERVEROUTPUT ON
DECLARE
  my_list  DBMS_SPD.OBJECTTAB := DBMS_SPD.ObjectTab();
  dir_cnt  NUMBER;
BEGIN
  DBMS_SPD.CREATE_STGTAB_DIRECTIVE  (table_name => 'TAB_PROP_DIRECTIVES', table_owner=> 'SYSTEM' );
  my_list.extend(2);
 
  -- TAB table
  my_list(1).owner := 'APPUSER';
  my_list(1).object_name := 'TAB';
  my_list(1).object_type := 'TABLE';
  -- TAB_PROP table
  my_list(2).owner := 'APPUSER';
  my_list(2).object_name := 'TAB_PROP';
  my_list(2).object_type := 'TABLE';
 
  dir_cnt :=
   DBMS_SPD.PACK_STGTAB_DIRECTIVE(table_name => 'TAB_PROP_DIRECTIVES', table_owner=> 'SYSTEM', obj_list => my_list);
   DBMS_OUTPUT.PUT_LINE('dir_cnt = ' || dir_cnt);
END;
/

expdp directory=data_pump_dir dumpfile=TAB_PROP_DIRECTIVES.dmp logfile=expdp_VAL_LIG_DIRECTIVES.log tables=system.TAB_PROP_DIRECTIVES

-- import into the destination
impdp directory=data_pump_dir dumpfile=TAB_PROP_DIRECTIVES.dmp logfile=impdp_VAL_LIG_DIRECTIVES.log

SELECT DBMS_SPD.UNPACK_STGTAB_DIRECTIVE(table_name => 'TAB_PROP_DIRECTIVES', table_owner=> 'SYSTEM') FROM DUAL;

Of course, the directives can’t be created for objects that do not exist: the import has to be done after the objects have been migrated to the 12c database.

Because SQL Plan Directives are tied to specific objects and not to specific queries, they can fix many statements at once, but in cases like this one, they can compromise several statements!

Monitoring the creation of new directives is an important task, as it may indicate misestimates or a lack of statistics on one side, or execution plan changes on the other.
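A simple way to do that is to query the directives and the objects they refer to on a regular basis, for example (a minimal sketch):

-- directives created or modified in the last 7 days, with their referenced objects
select d.directive_id, d.type, d.state, d.enabled, d.reason,
       d.created, d.last_modified, o.owner, o.object_name
from   dba_sql_plan_directives d
join   dba_sql_plan_dir_objects o on (d.directive_id = o.directive_id)
where  d.created > sysdate-7 or d.last_modified > sysdate-7
order  by d.created;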

Standard Edition and Standard Edition One are dead. Welcome Standard Edition 2 (Two)


Disclaimer: nothing you’re reading here is official, nor is it confirmed by Oracle. Don’t jump to conclusions or take any action before the licensing documents are updated with information about this new Edition.

 

The news came today (July 3rd, 2015).

After many years of existence, Standard Edition and Standard Edition One will no longer be part of the Oracle Database Edition portfolio.

The short history

Standard Edition has long been the “stepbrother” of Enterprise Edition, with fewer features and no options, but cheaper than EE. I can’t remember when SE was first released; it was before the 2000s, I guess.

In 2003, Oracle released 10gR1. Many new features were released for EE only, but:

– RAC was included as part of Standard Edition

– Standard Edition One was released, with an even lower price and “almost” the same features as Standard Edition.

For a few years, customers have had the possibility of getting huge savings (with many compromises) by choosing the cheaper editions.

SE One: just two sockets, but with today’s 18-core processors, the possibility of running Oracle on 36 cores (or more?) for less than 12k in licenses.

SE: up to four sockets and the possibility to run on either a 72-core server or a RAC composed of a total of 72 cores (max 4 nodes), for less than the price of a 4-core Enterprise Edition deployment.

In 2014, for the first time, Oracle released a new Database version (12.1.0.2) where Standard Edition and SE One were not available (not immediately, at least).

For months, customers asked: “When will the Oracle 12.1.0.2 SE be available?”

Now the big announcement: SE and SE One will no longer exist. With 12.1.0.2, there’s a new Edition: Oracle Database Standard Edition 2.

You can read the MOS Note that introduces it here: Oracle Database 12c Standard Edition 2 (12.1.0.2) (Doc ID 2027072.1)

That means a lot of things.

– SE One will no longer exist

– SE is replaced by SE Two, which has a limit of 2 sockets

– SE Two will (maybe) be a mix of the two other editions in terms of features

– SE Two will still include the RAC feature

– Customers with SE on 4-socket nodes (or clusters) will need to migrate to 2-socket nodes (or clusters)

– Customers with SE One should definitely be prepared to spend some money to upgrade to SE Two

It’s not known whether SE Two will be cheaper than SE or not, but my guess is that the price may fall anywhere between $10k and $25k per socket if they keep the per-socket licensing model.

As soon as the new Price List is available, everything will be clear. But for now, I think that SE and SE One customers should expect (a lot of) changes in Oracle licensing.

My feedback after upgrading EM12c 12.1.0.3 to 12.1.0.5


Today I’ve upgraded EM12c for a customer from the second-last version (12.1.0.3) to the last one (12.1.0.5) and the EM Repository from 11.2.0.3 to 12.1.0.2.

The upgrade path was not very easy: EM 12.1.0.3 is not compatible with a 12.1.0.2 repository, and EM 12.1.0.5 requires a mandatory patch for the repository if it is 11.2.0.3 (or an upgrade to 11.2.0.4).

So I’ve done:

  • upgrade of the repository from 11.2.0.3 (in Data Guard configuration) to 11.2.0.4
  • upgrade of the EM from 12.1.0.3 to 12.1.0.5
  • upgrade of the repository from 11.2.0.4 to 12.1.0.2 (in Data Guard configuration), from Solaris to Linux

 

In my case, I was particularly concerned about my customer’s EM topology:

  • two OMS in load balancing
  • console secured with a custom SSL certificate
  • a good amount of targets (more than 800 total targets, more than 500 targets with status)
  • a lot of jobs and custom reports
  • a big, shared central software library
  • many other small customizations: auth, groups, metrics, templates…

I will not bore you with the actual execution steps; every installation may differ, and I strongly recommend reading the upgrade documentation (I know, it’s HUGE :-( ).

Just to resume, the upgrade guide is here: https://docs.oracle.com/cd/E24628_01/upgrade.121/e22625/toc.htm

In my case, I had to read carefully chapters 3, 4, 5 and 6, and appendixes G and K.

By following every step carefully, I had no problems at all and at the end everything was working correctly: all the targets up, the load balancing working in SSL as expected, the jobs restarted and ran successfully…

It was impressive to see how many operations the OUI performed without raising a single error!

OK, it’s not just a click-Next-Next-Next installation; there are a lot of steps to do manually before and afterwards, but still… a very good impression.

It took a little more than one hour to upgrade the first OMS (this also upgrades the EM repository) and a little less than 20 minutes to upgrade the second one.

Allow a couple of hours for checking everything beforehand, staging the binaries, taking backups/snapshots, creating restore points… and one hour more for upgrading the central agents and cleaning up the old installations.

About upgrading/moving the repository, check this good post by Maaz Anjum: MIGRATE ENTERPRISE MANAGER 12.1.0.4.0 TO A PDB FROM A NON-CDB. Even if you don’t plan to do it, it’s worth a read.

HTH

Ludo

How to avoid ORA-02153 when creating database links on 11.2.0.4 (the unsupported way)


Disclaimer (wow, most of my recent posts start with a disclaimer, I guess it’s bad): this post explains an UNSUPPORTED workaround for a restriction enforced by recent Oracle security checks. You should never use it in production! Forewarned is forearmed.

Before Oracle Database 11.2.0.4, it was possible to create a database link using the following syntax:

create database link XX connect to YY 
identified by values 'DEA2G0D1A57B0071057A11DA7A' using 'ZZZ';

It was possible to get the password hash either by selecting dbms_metadata.get_ddl for the database link or by querying the link$ table directly.
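For example (a sketch, reusing the SCOTT/REMOTEDB link that appears later in this post):

-- full DDL, including the IDENTIFIED BY VALUES clause
select dbms_metadata.get_ddl('DB_LINK','REMOTEDB','SCOTT') from dual;

-- or the hash straight from the base table
select name, passwordx from sys.link$ where name='REMOTEDB';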

Starting with Oracle 11.2.0.4, Oracle enforces a check that prevents the use of such syntax: every newly created database link must have its password explicitly set.
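Trying the old syntax on 11.2.0.4 now fails with the error in the title (a sketch, reusing the hash from the example above):

SQL> create database link XX connect to YY
  2  identified by values 'DEA2G0D1A57B0071057A11DA7A' using 'ZZZ';
...
ORA-02153: invalid VALUES password string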

This is clearly stated in the MOS note:

ORA-02153: Invalid VALUES Password String When Creating a Database Link Using BY VALUES With Obfuscated Password After Upgrade To 11.2.0.4 (Doc ID 1905221.1)

This is seen as a security enhancement. In my opinion, it also forces you to specify clear-text passwords somewhere in the scripts that create the db links. (You do not create the db links by hand in SQL*Plus every time you need one. Do you?)

The only exception is when using expdp/impdp. If you expdp a schema, the dumpfile contains the password hash and the statement needed to recreate the database link (… identified by values ‘:1′), but Oracle only allows impdp to use such a statement.

So, simple workaround: create the database links in a dev/staging environment, export them using expdp and then provide your DBA with the dumpfile so he/she can import it and create the db links. Right? Not always.
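When it does apply, that workaround boils down to a couple of commands (a minimal sketch; schema, file names and credentials are placeholders):

-- on the source (dev/staging): export only the database links of the schema
expdp system directory=data_pump_dir dumpfile=scott_dblinks.dmp logfile=expdp_scott_dblinks.log schemas=SCOTT include=DB_LINK

-- on the target: import them, password hashes included
impdp system directory=data_pump_dir dumpfile=scott_dblinks.dmp logfile=impdp_scott_dblinks.log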

There is one case where you really need the old syntax.

  • You don’t know the password

AND

  • You MUST change the database link name.

As you may know, there is no way to change a database link name (even through impdp: there is no remap_dblink or anything like that).

E.g., you need to keep the db link and intend to use it for a check BUT you want to prevent the application from using it with the old name.

Because I believe there is no problem that my Trivadis colleagues cannot solve, I’ve checked internally. A colleague came up with a dead simple (and unsupported) solution:

Insert into (or update) sys.link$, then flush the shared pool.

SQL> select * from dba_db_links;

OWNER                          DB_LINK              USERNAME                       HOST       CREATED
------------------------------ -------------------- ------------------------------ ---------- ---------
SCOTT                          REMOTEDB             SCOTT                           remotedb   10-APR-15

SQL> select * from sys.link$;
    OWNER# NAME                 CTIME     HOST       USERID     PASSWORD                             FLAG AUTHUSR
---------- -------------------- --------- ---------- ---------- ------------------------------ ---------- ------------------------------
AUTHPWD
------------------------------
PASSWORDX
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
AUTHPWDX
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
         0 REMOTEDB             10-APR-15 remotedb   SCOTT                                              2

061D009E40A5981668DFEE1C710CF68E20B1A4DEE898857B2C3C458C3DEA042675E6CC98CC8D7B72C2F21314D94872D32882BECDE0594B3A525E342B8958BDF37ACE0DE3CE0A4D153AF41EEAF8391A9D84924521C45BA79FF2A2
CEA78709E3BD7775DB9B79A2B4D2F742472B7B5733E142CBCBA2A73511B81F3840611737351

SQL> insert into sys.link$ (OWNER#, NAME, CTIME, HOST, USERID, PASSWORD, FLAG, AUTHUSR, AUTHPWD, PASSWORDX, AUTHPWDX)
  2  select OWNER#, 'NEWDBLINK', CTIME, HOST, USERID, PASSWORD, FLAG, AUTHUSR, AUTHPWD, PASSWORDX, AUTHPWDX 
  3  from sys.link$ where name='REMOTEDB';

1 row created.

SQL> commit;

Commit complete.

SQL> alter system flush shared_pool;

System altered.

SQL> select * from dba_db_links;

OWNER                          DB_LINK              USERNAME                       HOST       CREATED
------------------------------ -------------------- ------------------------------ ---------- ---------
SCOTT                          REMOTEDB             SCOTT                          remotedb   10-APR-15
SCOTT                          NEWDBLINK            SCOTT                          remotedb   10-APR-15

Remember, use it at your own risk (or don’t use it at all) 😉

HTH

Ludovico

Another successful RAC Attack in Geneva!


Last week I hosted the second Swiss RAC Attack workshop at the Trivadis offices in Geneva. It was a great success, with 21 total participants: 5 Ninjas, 4 alumni and 14 people actively installing or playing with RAC 12c on their laptops.

Last year I was surprised by a participant coming from Nanterre. This year two people came directly from Moscow, just for the workshop!

We had good pizza and special beers: Chimay, Vedett, Duvel, Andechs…

Last but not least, our friend Marc Fielding was visiting Switzerland last week, so he took the opportunity to join us and make the workshop even more interesting! 😀

Looking forward to organizing it again in a year! Thank you guys :-)

Ludovico

Grid Infrastructure 12c: Recovering the GRID Disk Group and recreating the GIMR


Losing the Disk Group that contains OCR and voting files has always been a challenge. It requires you to take regular backups of OCR, spfile and diskgroup metadata.

Since Oracle 12cR1, there are a few additional components you must take care of:

– The ASM password file (if you have Flex ASM it can be quite critical)

– The Grid Infrastructure Management Repository

Why is the ASM password file important? Well, you can read this good blog post from my colleague Robert Bialek: http://blog.trivadis.com/b/robertbialek/archive/2014/10/26/are-you-using-oracle-12c-flex-asm-if-yes-do-you-have-asm-password-file-backup.aspx

So the problem here is not whether you should back them up or not, but how you can restore them quickly.

Assumptions: you back up regularly:

ASM parameter file:

SQL> create pfile='/backup/spfileASM.ora' from spfile;

File created.

Oracle Cluster Registry:

grid@tvdrach01:~/ [+ASM1] sudo $ORACLE_HOME/bin/ocrconfig -manualbackup
tvdrach03 2015/09/21 14:30:39 /u01/app/grid/12.1.0.2/cdata/tvdrac-cluster/backup_20150921_143039.ocr 0

ASM Diskgroup Metadata:

ASMCMD [+] > md_backup GRID.dg -G GRID
Disk group metadata to be backed up: GRID
Current alias directory path: _MGMTDB/DATAFILE
Current alias directory path: _MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE
Current alias directory path: tvdrac-cluster
Current alias directory path: _MGMTDB/FD9AC0F7C36E4438E043B6A9E80A24D5/DATAFILE
Current alias directory path: _MGMTDB/FD9AC0F7C36E4438E043B6A9E80A24D5
Current alias directory path: ASM/PASSWORD
Current alias directory path: _MGMTDB/TEMPFILE
Current alias directory path: tvdrac-cluster/ASMPARAMETERFILE
Current alias directory path: _MGMTDB/20BC39F0F36C18F4E0533358A8C058F7/TEMPFILE
Current alias directory path: _MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815
Current alias directory path: _MGMTDB/20BC2691871B0B14E0533358A8C01AC6
Current alias directory path: _MGMTDB/ONLINELOG
Current alias directory path: _MGMTDB
Current alias directory path: ASM
Current alias directory path: tvdrac-cluster/OCRFILE
Current alias directory path: _MGMTDB/20BC39F0F36C18F4E0533358A8C058F7
Current alias directory path: _MGMTDB/20BC2691871B0B14E0533358A8C01AC6/TEMPFILE
Current alias directory path: _MGMTDB/CONTROLFILE
Current alias directory path: _MGMTDB/PARAMETERFILE

ASM password file:

ASMCMD [+GRID] > pwcopy +GRID/orapwASM /backup/
copying +GRID/orapwASM -> /backup/orapwASM

What about the GIMR?

According to the MOS Note: FAQ: 12c Grid Infrastructure Management Repository (GIMR) (Doc ID 1568402.1), there is no such need for the moment.

Weird, huh? For the moment the -MGMTDB contains just the Cluster Health Monitor repository, but expect to see its importance increase with the next versions of Oracle Grid Infrastructure.

If you REALLY want to back it up (it is not fundamental, but it is not a bad idea after all), you can do it.

The -MGMTDB is in NOARCHIVELOG mode by default. You need to either put it in ARCHIVELOG mode (and set a recovery area, etc.) or back it up while it is mounted.
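In this post I take the mounted-backup route; if you prefer ARCHIVELOG mode instead, a minimal sketch could be (size and destination are just placeholders):

SQL> alter system set db_recovery_file_dest_size=10G scope=both;
SQL> alter system set db_recovery_file_dest='+DATA' scope=both;
SQL> -- restart the -MGMTDB in MOUNT state (e.g. srvctl stop/start mgmtdb), then:
SQL> alter database archivelog;
SQL> alter database open;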

Because the Cluster Health Monitor (ora.crf) depends on it, you have to stop it beforehand:

grid@tvdrach01:~/ [-MGMTDB] crsctl stop resource ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'tvdrach01'
CRS-2677: Stop of 'ora.crf' on 'tvdrach01' succeeded

Then you can operate with -MGMTDB:

grid@tvdrach01:~/ [-MGMTDB] srvctl stop mgmtdb -stopoption IMMEDIATE
grid@tvdrach01:~/ [-MGMTDB] srvctl start mgmtdb -startoption MOUNT

grid@tvdrach01:~/ [-MGMTDB]

grid@tvdrach02:~/ [-MGMTDB] rman

Recovery Manager: Release 12.1.0.2.0 - Production on Sun Sep 27 17:59:55 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: _MGMTDB (DBID=1095800268, not open)

RMAN> backup as compressed backupset database format '+DATA';

Starting backup at 27-SEP-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00011 name=+GRID/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/sysmgmtdata.269.891526555
input datafile file number=00007 name=+GRID/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/system.270.891526555
input datafile file number=00008 name=+GRID/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/sysaux.271.891526555
input datafile file number=00010 name=+GRID/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/sysgridhomedata.272.891526555
input datafile file number=00012 name=+GRID/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/sysmgmtdatadb.273.891526555
input datafile file number=00009 name=+GRID/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/users.274.891526555
channel ORA_DISK_1: starting piece 1 at 27-SEP-15
channel ORA_DISK_1: finished piece 1 at 27-SEP-15
piece handle=+DATA/_MGMTDB/20BC39F0F36C18F4E0533358A8C058F7/BACKUPSET/2015_09_27/nnndf0_tag20150927t180016_0.256.891540019 tag=TAG20150927T180016 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+GRID/_MGMTDB/DATAFILE/system.258.891526155
input datafile file number=00003 name=+GRID/_MGMTDB/DATAFILE/sysaux.257.891526135
input datafile file number=00004 name=+GRID/_MGMTDB/DATAFILE/undotbs1.259.891526181
channel ORA_DISK_1: starting piece 1 at 27-SEP-15
channel ORA_DISK_1: finished piece 1 at 27-SEP-15
piece handle=+DATA/_MGMTDB/BACKUPSET/2015_09_27/nnndf0_tag20150927t180016_0.257.891540043 tag=TAG20150927T180016 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=+GRID/_MGMTDB/FD9AC0F7C36E4438E043B6A9E80A24D5/DATAFILE/system.265.891526233
input datafile file number=00006 name=+GRID/_MGMTDB/FD9AC0F7C36E4438E043B6A9E80A24D5/DATAFILE/sysaux.266.891526233
channel ORA_DISK_1: starting piece 1 at 27-SEP-15
channel ORA_DISK_1: finished piece 1 at 27-SEP-15
piece handle=+DATA/_MGMTDB/20BC2691871B0B14E0533358A8C01AC6/BACKUPSET/2015_09_27/nnndf0_tag20150927t180016_0.258.891540069 tag=TAG20150927T180016 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 27-SEP-15

Starting Control File and SPFILE Autobackup at 27-SEP-15
piece handle=/u01/app/grid/12.1.0.2/dbs/c-1095800268-20150927-00 comment=NONE
Finished Control File and SPFILE Autobackup at 27-SEP-15

RMAN> alter database open;

Statement processed

RMAN>

Now, imagine that you lose the GRID disk group (nowadays, with the ASM Filter Driver, it’s harder to corrupt a device by mistake, but let’s assume that you do):

root@tvdrach01:~/ [-MGMTDB] dd if=/dev/zero of=/dev/asm-disk1 bs=1M count=128
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 0.360653 s, 372 MB/s

The cluster will not start anymore: you need to disable CRS, reboot, and start it in exclusive mode:
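The disable and reboot part can be as simple as this (a minimal sketch; run as root on every node):

root@tvdrach01:~/ [-MGMTDB] crsctl disable crs    # repeat on every node
root@tvdrach01:~/ [-MGMTDB] reboot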

root@tvdrach01:~/ [-MGMTDB] crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'tvdrach01'
CRS-2672: Attempting to start 'ora.mdnsd' on 'tvdrach01'
CRS-2676: Start of 'ora.mdnsd' on 'tvdrach01' succeeded
CRS-2676: Start of 'ora.evmd' on 'tvdrach01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'tvdrach01'
CRS-2676: Start of 'ora.gpnpd' on 'tvdrach01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'tvdrach01'
CRS-2672: Attempting to start 'ora.gipcd' on 'tvdrach01'
CRS-2676: Start of 'ora.cssdmonitor' on 'tvdrach01' succeeded
CRS-2676: Start of 'ora.gipcd' on 'tvdrach01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'tvdrach01'
CRS-2672: Attempting to start 'ora.diskmon' on 'tvdrach01'
CRS-2676: Start of 'ora.diskmon' on 'tvdrach01' succeeded
CRS-2676: Start of 'ora.cssd' on 'tvdrach01' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'tvdrach01'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'tvdrach01'
CRS-2672: Attempting to start 'ora.ctssd' on 'tvdrach01'
CRS-2676: Start of 'ora.ctssd' on 'tvdrach01' succeeded
CRS-2676: Start of 'ora.drivers.acfs' on 'tvdrach01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'tvdrach01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'tvdrach01'
CRS-2676: Start of 'ora.asm' on 'tvdrach01' succeeded
root@tvdrach01:~/ [-MGMTDB]

 

Then you can recreate the GRID disk group and restore everything inside it:

SQL> alter system set asm_diskstring='/dev/asm*';

System altered.

SQL> create diskgroup GRID  external redundancy disk '/dev/asm-disk1' attribute 'COMPATIBLE.ADVM'='12.1.0.0.0', 'COMPATIBLE.ASM'='12.1.0.0.0';

Diskgroup created.

SQL> create spfile='+GRID' from pfile='/backup/spfileASM.ora';

File created.

SQL> 

root@tvdrach01:~/ [+ASM1] ocrconfig -restore /u01/app/grid/12.1.0.2/cdata/tvdrac-cluster/backup_20150927_174702.ocr
root@tvdrach01:~/ [+ASM1]

grid@tvdrach01:~/ [+ASM1] crsctl replace votedisk '+GRID'
Successful addition of voting disk a375f4bdb7854f8fbf7a92cd880fba60.
Successfully replaced voting disk group with +GRID.
CRS-4266: Voting file(s) successfully replaced


root@tvdrach01:~/ [+ASM1]  crsctl stop crs -f
...
root@tvdrach01:~/ [+ASM1]  crsctl start crs
...


ASMCMD [+] >  pwcopy --asm /backup/orapwASM +GRID/orapwASM
copying /backup/orapwASM -> +GRID/orapwASM

Finally, the last missing component: the GIMR.

You can recreate it or restore it (if you backed it up at some point in time).

Let’s see how to recreate it:

grid@tvdrach03:~/ [-MGMTDB] srvctl disable mgmtdb
grid@tvdrach03:~/ [-MGMTDB] srvctl remove mgmtdb
Remove the database _mgmtdb? (y/[n]) y
grid@tvdrach01:~/ [+ASM1] dbca -silent -createDatabase -sid -MGMTDB \
> -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc \
> -gdbName _mgmtdb -storageType ASM -diskGroupName +GRID \
> -datafileJarLocation $ORACLE_HOME/assistants/dbca/templates -characterset AL32UTF8 \
> -autoGeneratePasswords -skipUserTemplateCheck
Cleaning up failed steps
5% complete
Registering database with Oracle Grid Infrastructure
11% complete
Copying database files
12% complete
14% complete
21% complete
27% complete
34% complete
41% complete
44% complete
Creating and starting Oracle instance
46% complete
51% complete
52% complete
53% complete
58% complete
62% complete
63% complete
66% complete
Completing Database Creation
70% complete
80% complete
90% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb0.log" for further details.
grid@tvdrach01:~/ [+ASM1] dbca -silent -createPluggableDatabase -sourceDB -MGMTDB \
>  -pdbName tvdrac_cluster -createPDBFrom RMANBACKUP \
>  -PDBBackUpfile $ORACLE_HOME/assistants/dbca/templates/mgmtseed_pdb.dfb \
>  -PDBMetadataFile $ORACLE_HOME/assistants/dbca/templates/mgmtseed_pdb.xml \
>  -createAsClone true -internalSkipGIHomeCheck
Creating Pluggable Database
Creating Pluggable Database
4% complete
12% complete
21% complete
38% complete
55% complete
85% complete
Completing Pluggable Database Creation
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/tvdrac_cluster/_mgmtdb.log" for further details.
grid@tvdrach01:~/ [+ASM1] srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node tvdrach01

grid@tvdrach01:~/ [+ASM1] sudo $ORACLE_HOME/bin/crsctl modify res ora.crf -attr ENABLED=1 -init
grid@tvdrach01:~/ [+ASM1] crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'tvdrach01'
CRS-2676: Start of 'ora.crf' on 'tvdrach01' succeeded
grid@tvdrach01:~/ [+ASM1]

Conclusion

Recovering from a lost Disk Group / Cluster is not rocket science. Just practice it every now and then. If you do not have a test RAC, you can build your lab on your laptop using the RAC Attack instructions. If you want to test all the scenarios, the RAC SIG webcast: Oracle 11g Clusterware failure scenarios with practical demonstrations by Kamran Agayev is the best starting point, IMHO. Just keep in mind that Flex ASM and the GIMR add more complexity.

HTH

Ludovico


Querying the dba_hist_sys_time_model to get historical data


This quick post is mainly for myself… I will certainly use it for reference in the future.

Debugging problems due to adaptive dynamic sampling, and adaptive features in general, sometimes requires getting historical data about, e.g., parse time.

In order to get this information you may need to query the view DBA_HIST_SYS_TIME_MODEL (take care: it requires the Diagnostic Pack license!).

You can use this query as an example.

with h as (
select s.snap_id, s.BEGIN_INTERVAL_TIME,
        --s.END_INTERVAL_TIME,
        g.STAT_ID,
        g.stat_name,
        nvl(
          decode(
            greatest(
              VALUE,
              nvl(lag(VALUE) over (partition by s.dbid, s.instance_number, g.stat_name order by s.snap_id),0)
             ),
            VALUE,
            VALUE - lag(VALUE)
               over (partition by s.dbid,
                                    s.instance_number,
                                    g.stat_name
                    order by s.snap_id
                ),
            VALUE
           ),
           0
        ) VALUE
from DBA_HIST_SNAPSHOT s,
    DBA_HIST_SYS_TIME_MODEL g,
    v$instance i
where s.SNAP_ID=g.SNAP_ID
and s.BEGIN_INTERVAL_TIME >=
    trunc(to_timestamp(nvl('&startdate',to_char(sysdate,'YYYYMMDD')),'YYYYMMDD'))
and s.BEGIN_INTERVAL_TIME <=
    trunc(to_timestamp(nvl('&enddate',to_char(sysdate,'YYYYMMDD')),'YYYYMMDD')+1)
and s.instance_number=i.instance_number
and s.instance_number=g.instance_number
)
select p.begin_interval_time, p.value as "parse time elapsed", t.value as "DB time",
round(p.value/t.value,2)*100 as "parse pct", par.value as opt_adapt_feat
from h p, h t , dba_hist_parameter par
where p.snap_id=t.snap_id
and p.snap_id=par.snap_id
and p.stat_name='parse time elapsed'
and t.stat_name='DB time'
and par.parameter_name='optimizer_adaptive_features'
and t.value>0
order by p.begin_interval_time
/
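The two substitution variables expect dates in YYYYMMDD format; a possible way to run it from SQL*Plus (the script name is just an example):

SQL> define startdate=20151023
SQL> define enddate=20151023
SQL> @hist_time_model.sql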

 

In this specific example, it shows the “parse time elapsed”, the “DB time” and the parse/DB time percentage, along with the value of the parameter “optimizer_adaptive_features“. You can use it to check whether changing the parameters related to adaptive dynamic sampling improves the parse time or not.

The output will be something like this:

BEGIN_INTERVAL_TIME    	  parse time elapsed     DB time  parse pct OPT_ADAPT_FEAT
-------------- ---------- ------------------ ----------- ---------- ----------
23-OCT-15 03.00.36.569 AM       3235792   	57030479      	5.67 TRUE
23-OCT-15 03.30.38.712 AM       3438093   	60262996       	5.71 TRUE
23-OCT-15 04.00.40.709 AM       4622998   	69813760       	6.62 TRUE
23-OCT-15 04.30.42.776 AM       4590463   	56441202       	8.13 TRUE
23-OCT-15 05.00.44.735 AM      13772357        113741371      	12.11 TRUE
23-OCT-15 05.30.46.722 AM       3448944   	49807800       	6.92 TRUE
23-OCT-15 06.00.48.664 AM       4792886   	54235691       	8.84 TRUE
23-OCT-15 06.30.50.713 AM       8527305   	58775613      	14.51 TRUE
23-OCT-15 07.00.52.667 AM       8518273   	75248056      	11.32 TRUE
23-OCT-15 07.30.54.622 AM       9800048  	17381081       1.07 TRUE
23-OCT-15 08.00.56.609 AM       6986551       1629027583      .43 TRUE
23-OCT-15 08.30.58.568 AM       8414695       2493025822      .34 TRUE
23-OCT-15 09.00.00.457 AM      13648260       2412333113      .57 TRUE
23-OCT-15 09.30.02.384 AM      15186610       4635080356      .33 TRUE
23-OCT-15 10.00.04.298 AM      23465769  	39080849       3.17 FALSE
23-OCT-15 10.30.06.421 AM      12152991       2654461964      .46 FALSE
23-OCT-15 11.00.08.444 AM      24901111        549936076       4.53 FALSE
23-OCT-15 11.30.10.485 AM       8080236        354568317       2.28 FALSE
23-OCT-15 12.00.12.453 PM       4291839   	91028268       	4.71 FALSE
23-OCT-15 12.30.14.430 PM       3675163        177312397       2.07 FALSE
23-OCT-15 01.00.16.468 PM       9184841        231138367       3.97 FALSE
23-OCT-15 01.30.18.438 PM       8132397        162607229       5 FALSE
23-OCT-15 02.00.20.707 PM      13375709        210251458       6.36 FALSE
23-OCT-15 02.30.23.740 PM      10116413        285114368       3.55 FALSE
23-OCT-15 03.00.25.699 PM       8067777        123864339       6.51 FALSE
23-OCT-15 03.30.27.641 PM       5787931        110621767       5.23 FALSE

HTH

Ludo

Get information about Cursor Sharing for a SQL_ID


Yesterday I ran into a weird problem with Adaptive Cursor Sharing. I’m not sure yet about the root cause, but it seems to be related to cursor sharing histograms. Hopefully one day I will blog about what I’ve learnt from this experience.

To better monitor the problem on that specific query, I’ve prepared this script (tested on 12.1.0.2):

COLUMN Shareable HEADING 'S|H|A|R|E|A|B|L|E'
COLUMN "Bind-Aware" HEADING 'B|I|N|D| |A|W|A|R|E'
COLUMN Sensitive HEADING 'S|E|N|S|I|T|I|V|E'
COLUMN Reoptimizable HEADING 'R|E|O|P|T|I|M|I|Z|A|B|L|E'
BREAK on child_number on Execs on "Gets/Exec" on "Ela/Exec" on "Sensitive" on "Shareable" on "Bind-Aware" on bucket0 on bucket1 on bucket2 on cnt on "Reoptimizable" on is_resolved_adaptive_plan

select * from (select *
  from (
select 
s.child_number,
  s.plan_hash_value,
  executions as Execs, 
  round(buffer_gets/executions) as "Gets/Exec",
  round(elapsed_time/executions) as "Ela/Exec",
  is_bind_sensitive as "Sensitive",
  is_shareable as "Shareable",
  is_bind_aware as "Bind-Aware",
  to_char(h.bucket_id) as bucket, h.count as cnt,
  is_reoptimizable as "Reoptimizable",
  is_resolved_adaptive_plan,
  "UNBOUND_CURSOR",  "SQL_TYPE_MISMATCH",  "OPTIMIZER_MISMATCH",
  "OUTLINE_MISMATCH", "STATS_ROW_MISMATCH", "LITERAL_MISMATCH",
  "FORCE_HARD_PARSE", "EXPLAIN_PLAN_CURSOR", "BUFFERED_DML_MISMATCH",
  "PDML_ENV_MISMATCH", "INST_DRTLD_MISMATCH", "SLAVE_QC_MISMATCH",
  "TYPECHECK_MISMATCH", "AUTH_CHECK_MISMATCH", "BIND_MISMATCH",
  "DESCRIBE_MISMATCH", "LANGUAGE_MISMATCH", "TRANSLATION_MISMATCH",
  "BIND_EQUIV_FAILURE", "INSUFF_PRIVS", "INSUFF_PRIVS_REM",
  "REMOTE_TRANS_MISMATCH", "LOGMINER_SESSION_MISMATCH", "INCOMP_LTRL_MISMATCH",
  "OVERLAP_TIME_MISMATCH", "EDITION_MISMATCH", "MV_QUERY_GEN_MISMATCH",
  "USER_BIND_PEEK_MISMATCH", "TYPCHK_DEP_MISMATCH", "NO_TRIGGER_MISMATCH",
  "FLASHBACK_CURSOR", "ANYDATA_TRANSFORMATION", "PDDL_ENV_MISMATCH",
  "TOP_LEVEL_RPI_CURSOR", "DIFFERENT_LONG_LENGTH", "LOGICAL_STANDBY_APPLY",
  "DIFF_CALL_DURN", "BIND_UACS_DIFF", "PLSQL_CMP_SWITCHS_DIFF",
  "CURSOR_PARTS_MISMATCH", "STB_OBJECT_MISMATCH", "CROSSEDITION_TRIGGER_MISMATCH",
  "PQ_SLAVE_MISMATCH", "TOP_LEVEL_DDL_MISMATCH", "MULTI_PX_MISMATCH",
  "BIND_PEEKED_PQ_MISMATCH", "MV_REWRITE_MISMATCH", "ROLL_INVALID_MISMATCH",
  "OPTIMIZER_MODE_MISMATCH", "PX_MISMATCH", "MV_STALEOBJ_MISMATCH",
  "FLASHBACK_TABLE_MISMATCH", "LITREP_COMP_MISMATCH", "PLSQL_DEBUG",
  "LOAD_OPTIMIZER_STATS", "ACL_MISMATCH", "FLASHBACK_ARCHIVE_MISMATCH",
  "LOCK_USER_SCHEMA_FAILED", "REMOTE_MAPPING_MISMATCH", "LOAD_RUNTIME_HEAP_FAILED",
  "HASH_MATCH_FAILED", "PURGED_CURSOR", "BIND_LENGTH_UPGRADEABLE",
  "USE_FEEDBACK_STATS"
from v$sql s
  join v$sql_cs_histogram h
    on (s.sql_id=h.sql_id and
	s.child_number=h.child_number and
	s.con_id=h.con_id
	)
  join v$sql_shared_cursor shc
    on (shc.sql_id=h.sql_id and 
	shc.child_number=h.child_number and
	s.con_id=shc.con_id
	)
	where s.sql_id='&sql_id'
)
pivot (sum(cnt) for (bucket) IN ('0' AS Bucket0,'1' AS Bucket1,'2' AS Bucket2))
)
unpivot (result FOR reason_type IN ("UNBOUND_CURSOR",
  "SQL_TYPE_MISMATCH", "OPTIMIZER_MISMATCH",
  "OUTLINE_MISMATCH", "STATS_ROW_MISMATCH", "LITERAL_MISMATCH",
  "FORCE_HARD_PARSE", "EXPLAIN_PLAN_CURSOR", "BUFFERED_DML_MISMATCH",
  "PDML_ENV_MISMATCH", "INST_DRTLD_MISMATCH", "SLAVE_QC_MISMATCH",
  "TYPECHECK_MISMATCH", "AUTH_CHECK_MISMATCH", "BIND_MISMATCH",
  "DESCRIBE_MISMATCH", "LANGUAGE_MISMATCH", "TRANSLATION_MISMATCH",
  "BIND_EQUIV_FAILURE", "INSUFF_PRIVS", "INSUFF_PRIVS_REM",
  "REMOTE_TRANS_MISMATCH", "LOGMINER_SESSION_MISMATCH", "INCOMP_LTRL_MISMATCH",
  "OVERLAP_TIME_MISMATCH", "EDITION_MISMATCH", "MV_QUERY_GEN_MISMATCH",
  "USER_BIND_PEEK_MISMATCH", "TYPCHK_DEP_MISMATCH", "NO_TRIGGER_MISMATCH",
  "FLASHBACK_CURSOR", "ANYDATA_TRANSFORMATION", "PDDL_ENV_MISMATCH",
  "TOP_LEVEL_RPI_CURSOR", "DIFFERENT_LONG_LENGTH", "LOGICAL_STANDBY_APPLY",
  "DIFF_CALL_DURN", "BIND_UACS_DIFF", "PLSQL_CMP_SWITCHS_DIFF",
  "CURSOR_PARTS_MISMATCH", "STB_OBJECT_MISMATCH", "CROSSEDITION_TRIGGER_MISMATCH",
  "PQ_SLAVE_MISMATCH", "TOP_LEVEL_DDL_MISMATCH", "MULTI_PX_MISMATCH",
  "BIND_PEEKED_PQ_MISMATCH", "MV_REWRITE_MISMATCH", "ROLL_INVALID_MISMATCH",
  "OPTIMIZER_MODE_MISMATCH", "PX_MISMATCH", "MV_STALEOBJ_MISMATCH",
  "FLASHBACK_TABLE_MISMATCH", "LITREP_COMP_MISMATCH", "PLSQL_DEBUG",
  "LOAD_OPTIMIZER_STATS", "ACL_MISMATCH", "FLASHBACK_ARCHIVE_MISMATCH",
  "LOCK_USER_SCHEMA_FAILED", "REMOTE_MAPPING_MISMATCH", "LOAD_RUNTIME_HEAP_FAILED",
  "HASH_MATCH_FAILED", "PURGED_CURSOR", "BIND_LENGTH_UPGRADEABLE",
  "USE_FEEDBACK_STATS"))
where result='Y'
order by child_number;

The result looks something like this (in my case there are 26 child cursors):

R
                                                                    E
                                                                    O
                                                                  B P
                                                              S S I T
                                                              E H N I
                                                              N A D M
                                                              S R   I
                                                              I E A Z
                                                              T A W A
                                                              I B A B
                                                              V L R L
CHILD_NUMBER PLAN_HASH_VALUE      EXECS  Gets/Exec   Ela/Exec E E E E I    BUCKET0    BUCKET1    BUCKET2 REASON_TYPE                   R
------------ --------------- ---------- ---------- ---------- - - - - - ---------- ---------- ---------- ----------------------------- -
           0      2293695281        455       2466      14464 Y Y Y N            0        455          0 ROLL_INVALID_MISMATCH         Y
                  2293695281                                                                             BIND_EQUIV_FAILURE            Y
           1      1690560038         99      13943     103012 Y Y Y N            0         99          0 ROLL_INVALID_MISMATCH         Y
                  1690560038                                                                             BIND_EQUIV_FAILURE            Y
           2      3815006743        541      43090     230245 Y Y Y N            0        541          0 BIND_EQUIV_FAILURE            Y
                  3815006743                                                                             ROLL_INVALID_MISMATCH         Y
           3      1483632464        251       4111      18940 Y Y Y N           49        202          0 ROLL_INVALID_MISMATCH         Y
                  1483632464                                                                             BIND_EQUIV_FAILURE            Y
           4      3815006743       1152      42632     220730 Y Y Y N            0       1000          0 BIND_EQUIV_FAILURE            Y
                  3815006743                                                                             ROLL_INVALID_MISMATCH         Y
           5      3922835573        150      39252     184176 Y Y Y N            0        150          0 ROLL_INVALID_MISMATCH         Y
                  3922835573                                                                             BIND_EQUIV_FAILURE            Y
           6       767857637          3       4731     124707 Y Y Y N            0          3          0 ROLL_INVALID_MISMATCH         Y
                   767857637                                                                             BIND_EQUIV_FAILURE            Y
           7       767857637         11       4739      71119 Y Y Y N            0         11          0 BIND_EQUIV_FAILURE            Y
           8      2800467281          1        307     249727 Y Y Y N            0          1          0 BIND_EQUIV_FAILURE            Y
           9      3123241890        536       2982      14428 Y Y Y N            6        530          0 ROLL_INVALID_MISMATCH         Y
                  3123241890                                                                             BIND_EQUIV_FAILURE            Y
          10      3125518635         17        315      16492 Y Y Y N           16          1          0 ROLL_INVALID_MISMATCH         Y
                  3125518635                                                                             BIND_EQUIV_FAILURE            Y
          11      2184442252        130       4686      40188 Y Y Y N            0        130          0 ROLL_INVALID_MISMATCH         Y
                  2184442252                                                                             BIND_EQUIV_FAILURE            Y
          12      3815006743        553      42765     231391 Y Y Y N            0        553          0 ROLL_INVALID_MISMATCH         Y
                  3815006743                                                                             BIND_EQUIV_FAILURE            Y
          13      1166983254         47      14193     111256 Y Y Y N            0         47          0 BIND_EQUIV_FAILURE            Y
                  1166983254                                                                             ROLL_INVALID_MISMATCH         Y
          14      2307602173          2         38      45922 Y Y Y N            2          0          0 BIND_EQUIV_FAILURE            Y
                  2307602173                                                                             ROLL_INVALID_MISMATCH         Y
          15       767857637         11       4304      59617 Y Y Y N            0         11          0 BIND_EQUIV_FAILURE            Y
                   767857637                                                                             ROLL_INVALID_MISMATCH         Y
          16      3108045525          2      34591     176749 Y N N N            1          1          0 ROLL_INVALID_MISMATCH         Y
                  3108045525                                                                             LOAD_OPTIMIZER_STATS          Y
                  3108045525                                                                             BIND_EQUIV_FAILURE            Y
          17      3108045525          6       1794      33335 Y Y Y N            4          2          0 BIND_EQUIV_FAILURE            Y
                  3108045525                                                                             ROLL_INVALID_MISMATCH         Y
          18      2440443365        470       2009      13361 Y Y Y N            0        470          0 ROLL_INVALID_MISMATCH         Y
                  2440443365                                                                             BIND_EQUIV_FAILURE            Y
          19      4079924956         15       2032      19773 Y Y Y N            8          7          0 ROLL_INVALID_MISMATCH         Y
                  4079924956                                                                             BIND_EQUIV_FAILURE            Y
          20       777919270         32       2675      18260 Y Y Y N           11         21          0 BIND_EQUIV_FAILURE            Y
                   777919270                                                                             ROLL_INVALID_MISMATCH         Y
          21      1428146033         63      13929     111116 Y Y Y N            0         63          0 ROLL_INVALID_MISMATCH         Y
                  1428146033                                                                             BIND_EQUIV_FAILURE            Y
          22      3815006743        218      43673     234642 Y Y Y N            0        218          0 BIND_EQUIV_FAILURE            Y
                  3815006743                                                                             ROLL_INVALID_MISMATCH         Y
          23       277802667          1         62      99268 Y Y Y N            1          0          0 BIND_EQUIV_FAILURE            Y
                   277802667                                                                             ROLL_INVALID_MISMATCH         Y
          24      3898025231          3       2364     111231 Y Y Y N            0          3          0 BIND_EQUIV_FAILURE            Y
                  3898025231                                                                             ROLL_INVALID_MISMATCH         Y
          25       767857637          2       6495     169363 Y Y Y N            0          2          0 ROLL_INVALID_MISMATCH         Y
                   767857637                                                                             BIND_EQUIV_FAILURE            Y
          26      3690167092        100       2998      20138 Y Y Y N            0        100          0 BIND_EQUIV_FAILURE            Y
                  3690167092                                                                             ROLL_INVALID_MISMATCH         Y

It’s a quick way to get the relevant information in a single result.

Of course, if you need deeper details, you should consider something more powerful like SQLd360 from Mauro Pagano.

Credits: I’ve got the unpivot idea (and copied that part of the code) from this post by Timur Akhmadeev.
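
If you want to play with the same idea, the unpivot part alone looks roughly like this. It is a minimal sketch, not Timur’s exact code nor the full query that produced the output above: it just turns a few of the Y/N flag columns of V$SQL_SHARED_CURSOR into rows, one per non-sharing reason. The SQL_ID and the "/ as sysdba" connection are placeholders, adapt them to your case.

SQL_ID=abcdefghijklm   # <-- placeholder, replace with your SQL_ID
sqlplus -S / as sysdba <<EOF
select child_number, reason
from   v\$sql_shared_cursor
unpivot (flag for reason in (roll_invalid_mismatch, bind_equiv_failure, load_optimizer_stats))
where  sql_id = '$SQL_ID'
and    flag = 'Y'
order  by child_number;
EOF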

Ludo

Migrating Oracle RAC from SuSE to OEL (or RHEL) live

I have a customer that needs to migrate its Oracle RAC cluster from SuSE to OEL.

I know, I know, there is a paper from Dell and Oracle named:

How Dell Migrated from SUSE Linux to Oracle Linux

That explains how Dell migrated its many RAC clusters from SuSE to OEL. The problem is that they used a different strategy:

– backup the configuration of the nodes
– then, for each node, one at a time:
– stop the node
– reinstall the OS
– restore the configuration and the Oracle binaries
– relink
– restart

What I want to achieve instead is:
– add one OEL node to the SuSE cluster as a new node
– remove one SuSE node from the now-mixed cluster
– install/restore/relink the RDBMS software (RAC) on the new node
– move the RAC instances to the new node (taking care to NOT run more than the number of licensed nodes/CPUs at any time)
– repeat (for the remaining nodes)

because the customer will also migrate to new hardware.

In order to test this migration path, I’ve set up a SINGLE NODE cluster (if it works for one node, it will work for two or more).

oracle@sles01:~> crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       sles01                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       sles01                   STABLE
ora.asm
               ONLINE  ONLINE       sles01                   Started,STABLE
ora.net1.network
               ONLINE  ONLINE       sles01                   STABLE
ora.ons
               ONLINE  ONLINE       sles01                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       sles01                   STABLE
ora.cvu
      1        ONLINE  ONLINE       sles01                   STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       sles01                   STABLE
ora.sles01.vip
      1        ONLINE  ONLINE       sles01                   STABLE
--------------------------------------------------------------------------------
oracle@sles01:~> cat /etc/issue

Welcome to SUSE Linux Enterprise Server 11 SP4  (x86_64) - Kernel \r (\l).

I have to set up the new node addition carefully, much as I would for a traditional node addition:

  • Add new ip addresses (public, private, vip) to the DNS/hosts
  • Install the new OEL server
  • Keep the same user and groups (uid, gid, etc)
  • Verify the network connectivity and setup SSH equivalence
  • Check that the multicast connection is ok
  • Add the storage, configure persistent naming (udev) and verify that the disks (major, minor, names) are the very same (a quick check sketch follows this list)
  • The network cards also must be the very same
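
Two of these points are easy to verify with a couple of commands before even attempting the node addition. This is just a quick sketch under the assumptions of this lab (host names sles01/rhel01; the device path is only an example, adapt it to your udev/ASM setup):

# 1) the oracle user must have the same numeric uid/gid on both nodes
for n in sles01 rhel01; do
  echo "== $n =="
  ssh $n "id oracle"
done

# 2) the ASM candidate disks must show up with the same names, owner and
#    permissions on the new node (the path below is just an example)
for n in sles01 rhel01; do
  echo "== $n =="
  ssh $n "ls -l /dev/oracleasm/*"
done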

Once the new host is ready, cluvfy stage -pre nodeadd will likely fail due to:

  • Kernel release mismatch
  • Package mismatch

Here’s an example of output:

oracle@sles01:~> cluvfy stage -pre nodeadd -n rhel01

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "sles01"


Checking user equivalence...
User equivalence check passed for user "oracle"
Package existence check passed for "cvuqdisk"

Checking CRS integrity...

CRS integrity check passed

Clusterware version consistency passed.

Checking shared resources...

Checking CRS home location...
Location check passed for: "/u01/app/12.1.0/grid"
Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.56.0"
Node connectivity passed for subnet "192.168.56.0" with node(s) sles01,rhel01
TCP connectivity check passed for subnet "192.168.56.0"


Check: Node connectivity using interfaces on subnet "172.16.100.0"
Node connectivity passed for subnet "172.16.100.0" with node(s) rhel01,sles01
TCP connectivity check passed for subnet "172.16.100.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "172.16.100.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "172.16.100.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "172.16.100.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "sles01:/usr,sles01:/var,sles01:/etc,sles01:/u01/app/12.1.0/grid,sles01:/sbin,sles01:/tmp"
Free disk space check passed for "rhel01:/usr,rhel01:/var,rhel01:/etc,rhel01:/u01/app/12.1.0/grid,rhel01:/sbin,rhel01:/tmp"
Check for multiple users with UID value 1101 passed
User existence check passed for "oracle"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed

WARNING:
PRVF-7524 : Kernel version is not consistent across all the nodes.
Kernel version = "3.0.101-63-default" found on nodes: sles01.
Kernel version = "3.8.13-16.2.1.el6uek.x86_64" found on nodes: rhel01.
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "libaio"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "glibc"
Package existence check passed for "glibc-devel"
Package existence check passed for "ksh"
Package existence check passed for "libaio-devel"
Package existence check failed for "libstdc++33"
Check failed on nodes:
        rhel01
Package existence check failed for "libstdc++43-devel"
Check failed on nodes:
        rhel01
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check failed for "libstdc++46"
Check failed on nodes:
        rhel01
Package existence check failed for "libgcc46"
Check failed on nodes:
        rhel01
Package existence check passed for "sysstat"
Package existence check failed for "libcap1"
Check failed on nodes:
        rhel01
Package existence check failed for "nfs-kernel-server"
Check failed on nodes:
        rhel01
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed
Group existence check passed for "asmadmin"
Group existence check passed for "asmoper"
Group existence check passed for "asmdba"

Checking ASMLib configuration.
Check for ASMLib configuration passed.

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed


User "oracle" is not part of "root" group. Check passed
Checking integrity of file "/etc/resolv.conf" across nodes

"domain" and "search" entries do not coexist in any  "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: sles01,rhel01

Check for integrity of file "/etc/resolv.conf" failed


Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Pre-check for node addition was unsuccessful on all the nodes.

So the problem is not whether the check succeeds (it will not), but what fails.
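
Most of the failed package checks are just SuSE package names that do not exist on EL. Assuming the new node is OEL6 (verify the exact package names for your release), the closest functional equivalents I install are roughly these; the remaining SuSE-only names can be ignored:

# sketch for the new OEL6 node: covers libstdc++33 (libstdc++.so.5),
# libcap1 (libcap.so.1) and the NFS server; libstdc++43-devel, libstdc++46
# and libgcc46 are SuSE names whose EL counterparts already passed the check
yum install -y compat-libstdc++-33 compat-libcap1 nfs-utils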

Solving all the problems not related to the SuSE/OEL difference is crucial, because addNode.sh will fail with the same errors. Then I run it with the -ignorePrereq and -ignoreSysPrereqs switches. Let’s see how it works:

oracle@sles01:/u01/app/12.1.0/grid/addnode> ./addnode.sh -silent "CLUSTER_NEW_NODES={rhel01}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rhel01-vip}" -ignorePrereq -ignoreSysPrereqs
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 27479 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2032 MB    Passed

Prepare Configuration in progress.

Prepare Configuration successful.
..................................................   9% Done.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2015-11-09_09-57-16PM.log

Instantiate files in progress.

Instantiate files successful.
..................................................   15% Done.

Copying files to node in progress.

Copying files to node successful.
..................................................   79% Done.

Saving cluster inventory in progress.
..................................................   87% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/12.1.0/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[rhel01]
Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[rhel01]

The scripts can be executed in parallel on all the nodes. If there are any policy managed databases managed by cluster, proceed with the addnode procedure without executing the root.sh script. Ensure that root.sh script is executed after all the policy managed databases managed by clusterware are extended to the new nodes.
..........
Update Inventory in progress.
..................................................   100% Done.

Update Inventory successful.
Successfully Setup Software.

Then, as instructed by addNode.sh, I run root.sh and expect it to work:

[oracle@rhel01 install]$ sudo /u01/app/12.1.0/grid/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2015/11/09 23:18:42 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
2015/11/09 23:19:08 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel01'
CRS-2672: Attempting to start 'ora.evmd' on 'rhel01'
CRS-2676: Start of 'ora.mdnsd' on 'rhel01' succeeded
CRS-2676: Start of 'ora.evmd' on 'rhel01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel01'
CRS-2676: Start of 'ora.gpnpd' on 'rhel01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rhel01'
CRS-2676: Start of 'ora.gipcd' on 'rhel01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel01'
CRS-2676: Start of 'ora.cssdmonitor' on 'rhel01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rhel01'
CRS-2672: Attempting to start 'ora.diskmon' on 'rhel01'
CRS-2676: Start of 'ora.diskmon' on 'rhel01' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rhel01'
CRS-2676: Start of 'ora.cssd' on 'rhel01' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rhel01'
CRS-2672: Attempting to start 'ora.ctssd' on 'rhel01'
CRS-2676: Start of 'ora.ctssd' on 'rhel01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rhel01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel01'
CRS-2676: Start of 'ora.asm' on 'rhel01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rhel01'
CRS-2676: Start of 'ora.storage' on 'rhel01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rhel01'
CRS-2676: Start of 'ora.crsd' on 'rhel01' succeeded
CRS-6017: Processing resource auto-start for servers: rhel01
CRS-2672: Attempting to start 'ora.ons' on 'rhel01'
CRS-2676: Start of 'ora.ons' on 'rhel01' succeeded
CRS-6016: Resource auto-start has completed for server rhel01
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2015/11/09 23:22:06 CLSRSC-343: Successfully started Oracle clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
2015/11/09 23:22:23 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Bingo! Let’s check if everything is up and running:

[oracle@rhel01 ~]$ /u01/app/12.1.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rhel01                   STABLE
               ONLINE  ONLINE       sles01                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rhel01                   STABLE
               ONLINE  ONLINE       sles01                   STABLE
ora.asm
               ONLINE  ONLINE       rhel01                   Started,STABLE
               ONLINE  ONLINE       sles01                   Started,STABLE
ora.net1.network
               ONLINE  ONLINE       rhel01                   STABLE
               ONLINE  ONLINE       sles01                   STABLE
ora.ons
               ONLINE  ONLINE       rhel01                   STABLE
               ONLINE  ONLINE       sles01                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       sles01                   STABLE
ora.cvu
      1        ONLINE  ONLINE       sles01                   STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.rhel01.vip
      1        ONLINE  ONLINE       rhel01                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       sles01                   STABLE
ora.sles01.vip
      1        ONLINE  ONLINE       sles01                   STABLE
--------------------------------------------------------------------------------

[oracle@rhel01 ~]$ olsnodes -s
sles01  Active
rhel01  Active

[oracle@rhel01 ~]$ ssh rhel01 uname -r
3.8.13-16.2.1.el6uek.x86_64
[oracle@rhel01 ~]$ ssh sles01 uname -r
3.0.101-63-default

[oracle@rhel01 ~]$ ssh rhel01 cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)
[oracle@rhel01 ~]$ ssh sles01 cat /etc/issue
Welcome to SUSE Linux Enterprise Server 11 SP4  (x86_64) - Kernel \r (\l).

So yes, it works, but remember that it’s not a supported long-term configuration.

In my case I expect to migrate the whole cluster from SLES to OEL in one day.
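
The piece not shown in this single-node test is the removal of the old SuSE node once its instances have been relocated. This is only a sketch of the standard 12c node-deletion steps under the assumptions of this lab (GI home /u01/app/12.1.0/grid, nodes sles01/rhel01); check the documentation for your exact release before running it:

# on the node being removed (sles01), if you want to clean up its local
# Grid Infrastructure configuration:
#   /u01/app/12.1.0/grid/deinstall/deinstall -local

# on a surviving node (rhel01), as root: check the node is unpinned,
# then delete it from the cluster
olsnodes -s -t
crsctl delete node -n sles01

# still on a surviving node, as the GI software owner: update the inventory
/u01/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList \
  ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={rhel01}" CRS=TRUE -silent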

NOTE: using OEL6 as the new target is easy because the interface names do not change. OEL7 introduces a new interface naming scheme, so if you need to migrate without cluster downtime you need to set up the new OEL7 nodes with the old-style names, following this post: http://ask.xmodulo.com/change-network-interface-name-centos7.html

Otherwise, you need to configure a new interface name for the cluster with oifcfg.
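
A sketch of that oifcfg change, assuming ens192/ens224 are the new OEL7 interface names and eth0/eth1 the old ones (all four are just examples), with the subnets used in this lab:

# check what the cluster currently knows
oifcfg getif
# register the new OEL7-style names for the same subnets; the old eth0/eth1
# definitions stay in place until no node uses them anymore
oifcfg setif -global ens192/192.168.56.0:public
oifcfg setif -global ens224/172.16.100.0:cluster_interconnect
# once the last node using the old names has left the cluster:
oifcfg delif -global eth0/192.168.56.0
oifcfg delif -global eth1/172.16.100.0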

HTH

Ludovico

Oracle Database on ACFS: a perfect marriage?

This presentation has had a very poor score in the selections for conferences (no OOW, no DOAG, no UKOUG), but people liked it very much at the Paris Oracle Meetup. Database on ACFS is mainstream now, thanks to the new ODA releases, so having some knowledge about why and how you should (or should not) run databases on ACFS is definitely worth a read.

Comments are, as always, very appreciated :-)

Ludo

Oracle Active Data Guard and Global Data Services in Action!

In a few days I will give a presentation about Global Data Services at UKOUG Tech15; it will be the first time that I present this session.

I usually like to give the link to the material to my audience, so here we go:

Credits

I have to give special credit to my colleague Robert Bialek. I got a late confirmation for this session and my slide deck was not ready at all, so I have used a big part of his original work. Most of the content included in the slides has been created by Robert, not me. (Thank you for your help! :-))

Slides

Demo recording

Demo script

clear

function db {
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
}

function gsm {
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/gsmhome_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
}

db

echo "#### CURRENT CONFIGURATION: CLASSIC DATA GUARD, 3 DATABASES ####"
dgmgrl -echo sys/password1@oltp_de <<EOF
show configuration
EOF
echo "next: GSM config"
read -p ""

gsm
echo "#### GSM CONFIGURATION ####"
echo "GDS COMMAND:
config"
gdsctl <<EOF
connect gsm_admin/password1@gsm1
config
exit
EOF
echo "next: ADD GDSPOOL"
read -p ""


echo "#### ADD GDSPOOL ####"
echo "GDS COMMAND:
add gdspool -gdspool sales"
gdsctl <<EOF
connect gsm_admin/password1@gsm1
add gdspool -gdspool sales
exit
EOF
echo "next: ADD BROKERCONFIG"
read -p ""


echo "#### ADD BROKERCONFIG ####"
echo "GDS COMMAND:
add brokerconfig -connect gsm02.trivadistraining.com:1521/oltp_de -pwd password1 -gdspool sales -region germany"
gdsctl <<EOF
connect gsm_admin/password1@gsm1
add brokerconfig -connect gsm02.trivadistraining.com:1521/oltp_de -pwd password1 -gdspool sales -region germany
exit
EOF
echo "next: config databases"
read -p ""


echo "#### CONFIG DATABASES ####"
echo "GDS COMMAND:
config database"
gdsctl <<EOF
connect gsm_admin/password1@gsm1
config database
exit
EOF
echo "next: modify databases"
read -p ""

echo "#### MODIFY DATABASES ####"
echo "GDS COMMAND: 
modify database -database oltp_ch1 -region switzerland
modify database -database oltp_ch2 -region switzerland
"
gdsctl <<EOF
connect gsm_admin/password1@gsm1
modify database -database oltp_ch1 -region switzerland
modify database -database oltp_ch2 -region switzerland
config database
exit
EOF
echo "next: add service read/write"
read -p ""


echo "#### ADD SERVICE R/W ####"
echo "GDS COMMAND: 
add service -gdspool sales -service gsales_rw -role primary -preferred_all -failovertype SELECT -failovermethod BASIC -failoverretry 5 -failoverdelay 3 -locality LOCAL_ONLY -region_failover
start service -service gsales_rw
services"
gdsctl <<EOF
connect gsm_admin/password1@gsm1
add service -gdspool sales -service gsales_rw -role primary -preferred_all -failovertype SELECT -failovermethod BASIC -failoverretry 5 -failoverdelay 3 -locality LOCAL_ONLY -region_failover
start service -service gsales_rw
services
exit
EOF
echo "next: ADD SERVICE R/O"
read -p ""

echo "#### ADD SERVICE R/O ####"
echo "GDS COMMAND: 
add service -gdspool sales -service gsales_ro -role PHYSICAL_STANDBY -failover_primary -lag 20 -preferred_all -failovertype SELECT -failovermethod BASIC -failoverretry 5 -failoverdelay 3 -locality LOCAL_ONLY -region_failover
start service -service gsales_ro
services
"
gdsctl <<EOF
connect gsm_admin/password1@gsm1
add service -gdspool sales -service gsales_ro -role PHYSICAL_STANDBY -failover_primary -lag 20 -preferred_all -failovertype SELECT -failovermethod BASIC -failoverretry 5 -failoverdelay 3 -locality LOCAL_ONLY -region_failover
start service -service gsales_ro
services
exit
EOF
echo "next: stop apply ch1 (run cli_ro_short.sh first)"
read -p ""

db
echo "#### STOP APPLY DATA GUARD ON OLTP_CH1 ####"
dgmgrl -echo sys/password1@oltp_de <<EOF
edit database oltp_ch1 set state='apply-off';
EOF
echo "next: gds services"
read -p ""


gsm
echo "#### GDS SERVICES ####"
echo "GDS COMMAND: 
services
"
gdsctl <<EOF
connect gsm_admin/password1@gsm1
services
exit
EOF
echo "next: stop apply ch2 (run cli_ro_short.sh first)"
read -p ""

db
echo "#### STOP APPLY DATA GUARD ON OLTP_CH2 ####"
dgmgrl -echo sys/password1@oltp_de <<EOF
edit database oltp_ch2 set state='apply-off';
EOF
echo "next: gds services"
read -p ""

gsm
echo "#### GDS SERVICES ####"
echo "GDS COMMAND: 
services
"
gdsctl <<EOF
connect gsm_admin/password1@gsm1
services
exit
EOF
echo "next: gds services"
read -p ""

gsm
echo "#### GDS SERVICES ####"
echo "GDS COMMAND: 
services
"
gdsctl <<EOF
connect gsm_admin/password1@gsm1
services
exit
EOF
echo "next: start apply ch1  and ch2"
read -p ""

db
echo "#### START APPLY DATA GUARD ON OLTP_CH1 and OLTP_CH2 ####"
dgmgrl -echo sys/password1@oltp_de <<EOF
edit database oltp_ch1 set state='apply-on';
EOF
echo "sleeping 5"
sleep 5
dgmgrl -echo sys/password1@oltp_de <<EOF
edit database oltp_ch2 set state='apply-on';
EOF
echo "next: gds services"
read -p ""

gsm
echo "#### GDS SERVICES ####"
echo "GDS COMMAND: 
services
"
gdsctl <<EOF
connect gsm_admin/password1@gsm1
services
exit
EOF
echo "next: gds services"
read -p ""

gsm
echo "#### GDS SERVICES ####"
echo "GDS COMMAND: 
services
"
gdsctl <<EOF
connect gsm_admin/password1@gsm1
services
exit
EOF
echo "next: switchover to CH1 (run cli_ro_long.sh and cli_rw_long.sh first)"
read -p ""


db
echo "#### VALIDATE DATABASE OLTP_CH1 ####"
dgmgrl -echo sys/password1@oltp_de <<EOF
validate database oltp_ch1;
EOF
echo "next: switchover"
read -p ""
echo "#### SWITCHOVER TO OLTP_CH1 ####"
dgmgrl -echo sys/password1@oltp_de <<EOF
switchover to oltp_ch1;
EOF
echo "next: gds services"
read -p ""

And the script to revert the demo:

clear

function db {
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
}

function gsm {
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/gsmhome_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
}

db
dgmgrl -echo sys/password1@oltp_de <<EOF
switchover to oltp_de;
EOF

gsm
echo "#### STOP and DELETE SERVICE, REMOVE BROKERCONFIG, REMOVE POOL ####"
 
gdsctl <<EOF
connect gsm_admin/password1@gsm1
stop service -service gsales_ro
stop service -service gsales_rw
remove service -service gsales_ro
remove service -service gsales_rw
remove brokerconfig
remove gdspool -gdspool sales
config
exit
EOF

db
dgmgrl -echo sys/password1@oltp_de <<EOF
show configuration
EOF

echo "DEMO reverted."
read -p ""

Cheers

Ludovico

 
