The system was built to test some tools, namely the new Oracle Cloud Control 12c, Toad for Oracle and Spotlight (Quest Software) with their RAC components, and the HLMM (Herrmann & Lenz Monitoring Module). This was not about “certified” configurations or about setting up a production environment, but about creating a RAC database with minimum effort.
Software used
- VMware ESXi 5.0
- Oracle Enterprise Linux 6 Update 1 (OEL 6U1)
- Oracle Database and Grid Infrastructure 11.2.0.3
VMware ESXi itself was simply used as the shared storage. Read on to see how that works.
First of all, the virtual machine Asterix (4GB RAM, 2 network adapters and a 40GB disk) is set up and Oracle Enterprise Linux 6 Update 1 is installed (30GB would probably have been more than enough, but you should leave some room for later updates and the Enterprise Cloud Control agents). Then this virtual machine is “cloned” as Obelix.
Configuration of the shared storage
Now two additional virtual disks (20GB each) are created on Asterix. Pay attention to the following:
- The disks must be created as “thick provision eager zeroed”. Otherwise the SCSI controller will complain later that sharing is not possible.
- You must choose a separate SCSI controller => SCSI (1:0) and SCSI (1:1).
- The snapshot mode must be “independent persistent”.
- When the first of the new virtual disks is defined, a new SCSI controller appears. Its bus sharing then has to be set to “virtual”. (The description makes clear that this allows several virtual machines on the same server to use the virtual disk.)
- Now the disks are created, which takes some time, as they have to be allocated and zeroed out completely.
The same is done on Obelix, but with SCSI (2:0) and SCSI (2:1). So afterwards there are two 20GB disks on each machine that can be shared.
What remains is to make each machine’s virtual disks known to the other machine (of course, all disks could have been created on one machine and then just added to the other, but it was faster this way).
For Obelix this means that two virtual disks are now added, but not via “create new virtual disk” but via “use existing virtual disk”; here the disks asterix_1.vmdk and asterix_2.vmdk are selected. It is again important to choose a separate SCSI controller (here SCSI (1:0) and SCSI (1:1)) and to set the SCSI bus sharing to “virtual”.
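If you want to check the result at file level, the settings end up in the virtual machine’s .vmx file. The following is only a rough sketch (controller numbers, device type and file paths depend on your configuration and are not taken from the installation described here):

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "asterix_1.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "asterix_2.vmdk"
scsi1:1.mode = "independent-persistent"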
If you did everything correctly, both virtual machines should now boot cleanly, and with “fdisk -l” you can see the disks sdb, sdc, sdd and sde on both machines.
Installation and configuration of ASM
Before ASM can be configured, the “owner” of the ASM disks must be defined. Although it is possible to install all Oracle components under a single “oracle” owner, it is actually better to separate the grid infrastructure from the database. That is why two Linux users (oracle and oragrid) and three groups (oinstall, dba and griddba) are created.
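A minimal sketch of the corresponding commands, run as root on both nodes (the group assignments follow common practice with oinstall as the primary group and are an assumption, not a fixed requirement):

# groupadd oinstall
# groupadd dba
# groupadd griddba
# useradd -g oinstall -G dba oracle
# useradd -g oinstall -G griddba oragrid
# passwd oracle
# passwd oragrid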
There are two methods for configuring ASM: either with asmlib or with Linux udev rules. Since I have always had good experiences with asmlib in the past, I stayed with it. So I downloaded the corresponding package from OTN and installed it (rpm -Uvh oracleasmlib-2.0.4-1.el5.x86_64.rpm). The ASM support package must be installed, too; you can find it on the OEL 6U1 DVD (oracleasm-support-2.1.5-1.el6.x86_64.rpm).
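For the record, the two installations (with the package versions mentioned above) on both nodes:

# rpm -Uvh oracleasm-support-2.1.5-1.el6.x86_64.rpm
# rpm -Uvh oracleasmlib-2.0.4-1.el5.x86_64.rpm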
Now the partitions are created on one computer (!) with fdisk (one partition for each disk), so that fdisk -l later gives this result:
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        5222    41430016   8e  Linux LVM

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x54a3d8c5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2610    20964793+  83  Linux
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        2610    20964793+  83  Linux
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        2610    20964793+  83  Linux
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        2610    20964793+  83  Linux
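For completeness, a minimal sketch of the interactive fdisk dialog for the first disk (repeated analogously for sdc, sdd and sde; the defaults are accepted for the start and end cylinder):

# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1): <Enter>
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610): <Enter>
Command (m for help): w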
Next ASM is configured on both systems:
# service oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oragrid
Default group to own the driver interface []: griddba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [n]: y
...
Now the ASM disks can be created (on one system only, e.g. Asterix):
# service oracleasm createdisk ASM1 /dev/sdb1
# service oracleasm createdisk ASM2 /dev/sdc1
# service oracleasm createdisk ASM3 /dev/sdd1
# service oracleasm createdisk ASM4 /dev/sde1
On the other system the ASM configuration is imported:
# service oracleasm scandisks
Afterwards the ASM disks show up on both systems and can be used for the grid infrastructure.
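A quick check on both nodes (listdisks simply lists the labels created above):

# service oracleasm listdisks
ASM1
ASM2
ASM3
ASM4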
Installation of the grid infrastructure
The installation of the grid infrastructure is basically very simple. It is very helpful that since version 11.2.0 Oracle allows (or rather requires) an “out of place” upgrade with patch sets. That means the patch set 11.2.0.3 can be installed as a regular release; previously, a base release had to be installed first.
The grid infrastructure is located in the third part of the patch set (p10404530_112030_Linux-x86-64_3of7.zip).
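The archive only needs to be unpacked on one node; the installer later copies the software to the second node itself. For example:

$ unzip p10404530_112030_Linux-x86-64_3of7.zip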
Before starting the Universal Installer, some network configuration has to be done. The easiest way, and the one Oracle recommends, is to use a name server, because only then can several SCAN (single client access name) addresses be defined. As my system is a test configuration, as I already mentioned, I did not go to that trouble, but simply made the corresponding entries in the hosts file (a sketch follows the list below). That means:
- One “public” IP address each (asterix and obelix)
- One “private” IP address each (asterix-priv and obelix-priv)
- One “virtual” IP address each (asterix-vip and obelix-vip)
- One SCAN IP address for the entire cluster (gallier)
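The resulting /etc/hosts might look like this on both nodes; the IP addresses are placeholders and only illustrate that the private interconnect uses a separate subnet:

# public
192.168.56.101   asterix
192.168.56.102   obelix
# private
10.0.0.101       asterix-priv
10.0.0.102       obelix-priv
# virtual
192.168.56.111   asterix-vip
192.168.56.112   obelix-vip
# SCAN
192.168.56.121   gallier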
Both systems must also be able to communicate with each other without any password prompts. Therefore SSH has to be set up accordingly for “oragrid” and “oracle”:
Asterix:
$ ssh-keygen -t rsa

Obelix:
$ ssh-keygen -t rsa
$ cd .ssh
$ cp id_rsa.pub authorized_keys
$ scp authorized_keys asterix:.ssh

Asterix:
$ cd .ssh
$ cat id_rsa.pub >> authorized_keys
$ scp authorized_keys obelix:.ssh
$ ssh asterix date
$ ssh obelix date

Obelix:
$ ssh asterix date
$ ssh obelix date
Now there should not be any password prompts between the systems anymore, and the installation of the grid infrastructure can be performed.
The installation under the user “oragrid” goes very smoothly as long as you keep track of which network is “public” and which one is “private”. Since Oracle 11g Release 2 the cluster registry files can also be placed in ASM, so no separate disks are required. If you choose that option, the “candidate disks” for ASM are shown next and the files can be created. The password of the ASM user must have a certain complexity (“manager” is no longer accepted as a password).
At the end, root.sh has to be run as usual. Make sure to run the scripts one after the other and not at the same time, so some waiting is involved.
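Just to make the order explicit, a short sketch (the grid home path is only an example):

# On asterix, as root:
/u01/app/11.2.0/grid/root.sh
# Wait until the script has finished completely, then on obelix, as root:
/u01/app/11.2.0/grid/root.sh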
The same applies to the installation of the database software (under the “oracle” user). If you made a mistake earlier, or if the installation of the grid infrastructure did not finish correctly, the “Grid Infrastructure Option” step is not shown when the Universal Installer starts. There you can choose between a single instance installation, a RAC installation and a RAC single node installation.
Conclusion
The installation of Oracle RAC in this environment went very smoothly and is sufficient for testing purposes. Maybe some of you are wondering why I did not mention the kernel parameters. The reason is simple: since Oracle 11g Release 2 the installation largely takes care of that itself. If requirements are not met that can be fixed by simple adjustments of limits or kernel parameters (e.g. sysctl.conf), a script “runfixup.sh” is created for each computer in the folder /tmp/CVU_. You just run it (as the root user, of course) and afterwards the parameters are set correctly.
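For reference, these are the kinds of kernel parameters such a fixup script touches in /etc/sysctl.conf; the values below are only illustrative and not taken from my systems:

# typical Oracle-related kernel settings (illustrative values)
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
# kernel.shmmax and kernel.shmall depend on the amount of RAM
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

A subsequent “sysctl -p” activates the settings without a reboot.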
As far as the required packages are concerned, some manual work is still needed. But apart from the already described asmlib, all required packages are on the OEL 6U1 DVD.
I would be very interested in feedback on this blog post and am happy to share more details about my tests with Oracle RAC.