CloneDB – Quick ‘n Easy

CloneDB Feature

With version 11.2.0.2 (yes, the patchset) Oracle has quietly introduced a new way to duplicate databases. CloneDB makes it possible to build several databases on top of a single RMAN image copy. The foundation is Direct NFS support in the database, introduced with version 11.1.0.6, which lets an Oracle database run on NFS storage through a dedicated driver (dNFS). CloneDB implements a new I/O layer on top of this: read access is served from the RMAN backup, while write access creates the clone's own files. From the database's point of view it looks like completely normal data files are in use, so neither an administrator nor an application can tell that a clone is involved.

Benefit of Copy-On-Write

CloneDB is easy to set up: a new clone is created in minutes and consumes almost no space, so the size of the source database hardly matters. This is achieved with copy-on-write, a method already used by file systems such as ZFS and btrfs. The idea behind copy-on-write (c.o.w.) is very simple: a copy is only materialized physically once it is changed. As long as the copy is unchanged, it is sufficient to store the original just once. At first the copy is only virtual; its own data blocks are created only when something changes.
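The space behaviour of such virtual copies is easy to observe on Linux with sparse files, which is also how the c.o.w. files later in this article stay so small. A quick sketch (the file name is made up for the demonstration):

```shell
# Create a file with a large apparent size but (almost) no allocated
# blocks; the file name is made up for this demonstration.
dd if=/dev/zero of=/tmp/cow_demo.dbf bs=1M count=0 seek=1024 2>/dev/null

ls -l /tmp/cow_demo.dbf     # apparent size: 1 GB
du -k /tmp/cow_demo.dbf     # allocated: (close to) 0 KB

# Write 1 MB somewhere in the middle; only that range gets allocated.
dd if=/dev/zero of=/tmp/cow_demo.dbf bs=1M count=1 seek=512 conv=notrunc 2>/dev/null
du -k /tmp/cow_demo.dbf     # allocated: far less than the apparent size
```

The same principle applies to the clone: blocks are only allocated in the c.o.w. files once they differ from the backup.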

In our case this method is realized with an RMAN image copy, stored either locally or on NFS storage. This image copy is our read-only source, and an NFS share holds the referencing c.o.w. files. Together they form a complete clone database, although this fact is transparent to the database itself and to the applications accessing it. So far, so good. Up to this point we have not saved a single byte and have probably spent more time than with our reliable RMAN DUPLICATE. The trick shows up the moment you decide to build a second clone: we can simply reference the existing RMAN image copy a second time, a third time for a third clone, and so on. We do not have to repeat all the work two, three or four times, and we do not have to transfer gigabytes or terabytes of data over the network again and again. Theoretically it is enough to create the image once and serve it from then on. Theoretically, because every image of a database becomes outdated at some point.

There are a few more points to consider. The more data blocks we change in our test or development environment, the more storage is needed for the c.o.w. files; in the extreme case we end up with a fully changed copy of the database on our NFS storage. Another point is the backup method: if image copies are not part of your daily backup routine, you create additional overhead. As a last downer, performance testing on a clone is not convincing. Yes, you can run it, but you are working on different storage technology, and resolving the references between backup and c.o.w. files produces extra I/O.

Everybody who can live with these restrictions, whose environment has no problem with NFS shares, and who still sees a clear benefit on the bottom line should follow the step-by-step instructions below to build their own database clone environment with Oracle 11.2.0.2.

Step-by-Step to the clone database

CloneDB is documented in My Oracle Support under “Clone your dNFS Production for testing (Doc ID 1210656.1)”. That document also provides a Perl script called “clonedb.pl”, which is needed to build a clone. Meanwhile there is also a section in the Administrator's Guide, “Cloning a Database with CloneDB”, which contains an important hint: “The CloneDB feature is supported starting with Oracle Database 11g Release 2 (11.2.0.3).” In this walkthrough I work with Oracle Linux and 11.2.0.4, so that is not a problem.

NFS Server Setup

On the NFS server we have to create a directory for our c.o.w. files.

# mkdir -p /u01/nfshare/clonedb/CLONE

Export the directory via an entry in /etc/exports (and reload the export table afterwards, e.g. with exportfs -ra).

/u01/nfshare/clonedb/CLONE         *(rw,sync,no_wdelay,insecure,insecure_locks,no_root_squash)

Backup Production

Create an image copy backup of the database. The copy can be stored locally or on NFS storage.

$ rman target /

BACKUP AS COPY creates a byte-for-byte copy of the database.

RMAN> configure controlfile autobackup off;
RMAN> sql 'alter database begin backup';
RMAN> backup as copy database format '/u03/orabackup/prod/%U';
RMAN> sql 'alter database end backup';

Parameter file of the production

Create a PFILE from the production database's SPFILE.

$ sqlplus / as sysdba
SQL> CREATE PFILE='/u03/orabackup/prod/initCLONE.ora' FROM SPFILE;

In this PFILE, change all references to the original SID and database name to match the future clone. Since database version 11.2.0.3 the parameter “clonedb” has to be set to “true” in the PFILE of our clone.

*.clonedb=true

CloneDB Server Setup

Link in the Direct NFS module.

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk dnfs_on

Create all directories needed for mounting the NFS share and for starting the instance.

# su - oracle
$ mkdir -p $ORACLE_BASE/oradata/CLONE
$ mkdir -p $ORACLE_BASE/fast_recovery_area/CLONE
$ mkdir -p $ORACLE_BASE/admin/CLONE/adump
$ mkdir -p $ORACLE_BASE/admin/CLONE/dpdump

Add the NFS mount to /etc/fstab so it is mounted automatically after a reboot.

nas1:/u01/nfshare/clonedb/CLONE /u01/app/oracle/oradata/CLONE  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0

Mount the NFS directory. Ownership and permissions for the “oracle” user should be checked.

# mount /u01/app/oracle/oradata/CLONE

Cloning

Set environment variables.

$ export ORACLE_SID=CLONE
$ export MASTER_COPY_DIR=/u03/orabackup/prod
$ export CLONE_FILE_CREATE_DEST=/u01/app/oracle/oradata/CLONE
$ export CLONEDB_NAME=CLONE

Execute the Perl script clonedb.pl. You can find this script either at $ORACLE_HOME/rdbms/install/clonedb.pl or as an attachment to My Oracle Support Doc ID 1210656.1.

$ perl clonedb.pl /u01/app/oracle/product/11.2.0.4/db_1/dbs/initCLONE.ora crtdb.sql dbren.sql

Driven by the parameter file, this script generates two further SQL scripts. The first one, “crtdb.sql”, merely executes a CREATE CONTROLFILE and provides the basis for the second. The magic script “dbren.sql” then links our backed-up data files to the copy-on-write data files with DBMS_DNFS.CLONEDB_RENAMEFILE.
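To illustrate what the generated script does: dbren.sql essentially contains one DBMS_DNFS.CLONEDB_RENAMEFILE call per data file, roughly along these lines (the file names follow the example in this article; check them against your generated script):

```sql
BEGIN
  -- first argument:  data file inside the read-only RMAN image copy
  -- second argument: sparse c.o.w. file on the NFS share
  DBMS_DNFS.CLONEDB_RENAMEFILE(
    '/u03/orabackup/prod/data_D-PROD_I-4197193115_TS-USERS_FNO-4_0dom14tl',
    '/u01/app/oracle/oradata/CLONE/ora_data_CLONE4.dbf');
END;
/
```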

$ sqlplus / as sysdba
Connected to an idle instance.
SQL> @crtdb.sql
SQL> @dbren.sql

If everything runs fine you get … sorry … an error message. No good deed goes unpunished. But chin up: the only problem is that the non-existent temp tablespace of our clone could not be dropped and recreated, so we have to add a tempfile ourselves.

SQL> alter tablespace TEMP add tempfile '/u01/app/oracle/oradata/CLONE/temp01.dbf' size 200M;

Tablespace altered.

When the scripts have completed and the TEMP error has been corrected, you have a working clone. From the database's perspective everything is fine; apart from the fact that it is called CLONE, there is no hint that we are looking at a cloned database.

SQL> SELECT t.name AS tbs,
            d.name AS dbf,
            d.status
     FROM   v$datafile d
     JOIN   v$tablespace t ON t.ts# = d.ts#
     ORDER BY t.name;

TBS        DBF                                                STATUS
---------------------------------------------------------------------------
EXAMPLE    /u01/app/oracle/oradata/CLONE/ora_data_CLONE0.dbf  ONLINE
SYSAUX     /u01/app/oracle/oradata/CLONE/ora_data_CLONE1.dbf  ONLINE
SYSTEM     /u01/app/oracle/oradata/CLONE/ora_data_CLONE2.dbf  SYSTEM
UNDOTBS1   /u01/app/oracle/oradata/CLONE/ora_data_CLONE3.dbf  ONLINE
USERS      /u01/app/oracle/oradata/CLONE/ora_data_CLONE4.dbf  ONLINE

At the OS level you can see the difference: the copy-on-write location contains only the changed blocks, not a full copy of the data files.

$ du -k /u03/orabackup/prod/*
162452	/u03/orabackup/prod/data_D-PROD_I-4197193115_TS-EXAMPLE_FNO-5_0eom14qg
614412	/u03/orabackup/prod/data_D-PROD_I-4197193115_TS-SYSAUX_FNO-2_0bom14rf
716812	/u03/orabackup/prod/data_D-PROD_I-4197193115_TS-SYSTEM_FNO-1_0aom14po
312332	/u03/orabackup/prod/data_D-PROD_I-4197193115_TS-UNDOTBS1_FNO-3_0com14ss
5128	/u03/orabackup/prod/data_D-PROD_I-4197193115_TS-USERS_FNO-4_0dom14tl

$ du -k /u01/app/oracle/oradata/CLONE/*.dbf
20	/u01/app/oracle/oradata/CLONE/ora_data_CLONE0.dbf
55	/u01/app/oracle/oradata/CLONE/ora_data_CLONE1.dbf
89	/u01/app/oracle/oradata/CLONE/ora_data_CLONE2.dbf
201	/u01/app/oracle/oradata/CLONE/ora_data_CLONE3.dbf
18	/u01/app/oracle/oradata/CLONE/ora_data_CLONE4.dbf
9851	/u01/app/oracle/oradata/CLONE/CLONE_ctl.dbf

A little hint at the end: with a hot backup you may have to recover the database.

SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;

Done. Our first clone is ready and testing can start. And what do we have to do for the next clone? Everything again? Of course not. We only have to create a second NFS directory, create a second clone instance on the target server, and repeat the cloning step. Once you are familiar with these steps, you can provide new clones in minutes.
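Sketched as commands, the per-clone work for a hypothetical second clone named CLONE2 could look like this (the directory names are assumptions; BASE stands in for the NFS mount point, which in this article is /u01/app/oracle/oradata/CLONE):

```shell
# BASE stands in for the NFS mount point; the names below are
# assumptions for a hypothetical second clone called CLONE2.
BASE=${BASE:-/tmp/clonedb_demo}

# Separate directory for the second clone's c.o.w. files,
# below the existing mount point.
mkdir -p "$BASE/clone2"

# Same master image copy, new destination and SID.
export ORACLE_SID=CLONE2
export MASTER_COPY_DIR=/u03/orabackup/prod
export CLONE_FILE_CREATE_DEST="$BASE/clone2"
export CLONEDB_NAME=CLONE2

# Then rerun the script with a PFILE prepared for CLONE2:
# perl clonedb.pl $ORACLE_HOME/dbs/initCLONE2.ora crtdb.sql dbren.sql
```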

With several clones we can now run several test scenarios. While on clone 1 we test version 1 of our new application, on clone 2 we test version 2, which hopefully is much better. On clone 3 we can finally run the recovery scenario we have been planning for a long time. And since we dropped clone 4, we provide clone 5, because someone forgot the WHERE clause during development. Shit happens…

 

9 comments on “CloneDB – Quick ‘n Easy”

Question – have you actually created two clone DBs from the same source on the same server? It sounds as if you need an NFS mount for each CloneDB – correct? I am trying this and getting the dreaded ORA-17513: dNFS package call failed. Just looking for guidance.

You’re exactly right. You need an NFS mount to use CloneDB… that should be the reason for your error message.

I understand that you need an NFS mount, but can you re-use the same NFS mount for the second clone on the same server?

Now I get your problem… You should create subfolders for the individual clones below the mount point; that is easier than creating a mount for every single clone, though that would also be possible. In this example CLONE_FILE_CREATE_DEST would be /u01/app/oracle/oradata/CLONE/clone1_subfolder, clone2_subfolder and so on.

Hi, does anybody know how to drop a clone created with this method?

The cloned database acts like any other database, so you can simply drop it if you want, and all files of the cloned database will be dropped as well. The original data files will not be touched – if that was your question.

Almost 6 years ago since last post. Let’s see if someone still listens…

I followed the procedure described here, or similar ones found elsewhere. My clones work – until I shut down the clone DB. After starting it up again, the changes made on the clone are gone, i.e. I am back at the state the clone was created in. Any idea why the state is not persisted, or how I can analyze the situation further? I checked the data file consistency using RMAN.

Hi Michael,
I will try to test the behaviour in the next few weeks and let you know the outcome. Maybe we should try with the new 23c developer release.

