
Build an 11gR2 RAC Cluster in VirtualBox in 1 Hour Using OVM Templates

After reviewing my blog post about running EBS OVM templates in VirtualBox, two of my teammates suggested that I work on something with potentially broader appeal. Their basic message was: "This is really cool for us EBS nerds, but what about the Core DBAs?" So how does "11gR2 RAC in an hour" sound? :-) In this post, I'll demonstrate how to deploy the pre-built Oracle VM templates to create a two-node 11gR2 RAC cluster in Oracle VirtualBox. UPDATE, 13 Feb 2014: If you would like to try this procedure with more up-to-date 11.2.0.4 templates, Gareth Roberts has provided excellent notes on some of the differences in this comment. Thanks, Gareth!

Why do this?

There are already several high-quality "How to run RAC on your workstation" HOW-TO's out there, including the well-known RAC Attack (by Pythian's own Jeremy Schneider, and others) and Tim Hall's super-straightforward article on ORACLE-BASE. Does the internet really need another screenshot-heavy blog post about installing Oracle RAC? Maybe not, but I'm doing it anyway, because:
  • The OVM templates come with the software pre-installed/patched, and scripts that configure the networking, Grid Infrastructure, and database for you. Less fiddling around reduces the possibility of error, and you still have a RAC cluster at the end!
  • I claimed in my earlier blog post that it should be possible to convert other OVM templates, so it seemed like a good idea to actually test that claim.
  • I wanted an excuse to play around a bit more with the command-line interface to VirtualBox.
Some readers might point out that installing and configuring the software yourself is a good way to learn how things work, and that breaking and fixing things along the way helps you learn even more. I actually agree with that sentiment in general, since I'm a "learn by failing, er, doing" kind of guy. However, Oracle is selling a line of high-end products that are supposed to take all of the hard work out of configuring RAC, so why shouldn't we have a bit of fun?

Ingredients

You will need:
  1. RAM. Lots of RAM. The OVM template docs specify 2GB *per RAC node*, and that is probably on the small side for any serious work. If you want to do anything else with your workstation while this is running, you'll need at least 6GB of RAM on the host machine. This is less resource-intensive than building your own OVM server, but it is not a lightweight endeavor.
  2. 80-100GB of disk space, depending on how you size your ASM disks.
  3. A recent version of VirtualBox. An old one might do, but I didn't test on an old version. :)
  4. DNS service for the SCAN addresses. You might be able to get away without it, but I can't guarantee that the Oracle-supplied cluster build scripts will work if you try to fake it. Tim Hall has a great post on a minimal DNS setup for SCAN, or you can use dnsmasq to convert your local hosts file into a DNS service. I opted for dnsmasq; it's pretty cool (there's a minimal sketch after this list).
  5. A Linux install ISO image (or physical CD, if you're into that sort of thing). I used Oracle Enterprise Linux 5, Update 6, but any relatively recent OEL or RHEL install image should do the job here.
  6. An understanding of some basic Linux systems administration tasks.
  7. Familiarity with configuring storage and network options in VirtualBox.
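
If you go the dnsmasq route, very little configuration is needed. Here's a minimal sketch of the idea; the option names are real dnsmasq settings, but treat the file as illustrative rather than a tested config, and check the dnsmasq docs for the right file locations on your platform:

[plain gutter="0"]
# /etc/dnsmasq.conf -- minimal sketch. By default, dnsmasq answers DNS
# queries from the contents of /etc/hosts, so the RAC host entries shown
# later in this post are served automatically, round-robin included.
listen-address=127.0.0.1   # only answer queries from this machine
domain-needed              # don't forward plain (dotless) names upstream
bogus-priv                 # don't forward reverse lookups for private IPs

# Then point the host's resolver at itself in /etc/resolv.conf:
# nameserver 127.0.0.1
[/plain]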

Important notes and thank-you's:

Nothing you're about to read in this post is supported by anyone. Not me, not Pythian, and certainly not Oracle. If you're thinking about using the techniques described here for any sort of production or QA deployment, please stop and question your sanity. Then call a few colleagues over to your desk and ask them to question your sanity.

Please be mindful of your licensing and support status before working with these templates. Content from Oracle's Software Delivery Cloud is subject to far more restrictive licensing than the more-familiar OTN development license. (Thanks to Don Seiler (@dtseiler) for reminding me of this.)

So far, this is just a proof of concept. I haven't done extensive work to validate the RAC cluster I built from these instructions. There may be resource limitations in this system that I have not yet discovered, or more artifacts specific to the Oracle VM template that could be removed. As Darth Vader once said: "Don't be too proud of this technological terror you've constructed." :-)

As always, I'm "standing on the shoulders of giants" to make this post happen. Huge thanks to Tim Hall (aka ORACLE-BASE) for his concise HOW-TO documents that served as a springboard for this project, to the creators of dnsmasq for the easy local DNS option, to the clever folks at Oracle who built the VM cluster deployment script, and to my Pythian teammates and a handful of Twitter followers for encouraging me to blog about this.

HOW-TO: The short version

The basic steps are as follows, with details in the next section.
  1. Set up your local DNS with IP addresses for both nodes in your future RAC cluster.
  2. Download the "Oracle RAC 11.2.0.1.4 for x86_64 (64 bit) with Oracle Linux 5.5" OVM templates from the Oracle Software Delivery Cloud, and unzip (and then untar!) the files.
  3. Create a single VM in VirtualBox to be the first node in the RAC cluster.
  4. Convert the OVM disk image files to VDI format and attach them to your VM.
  5. Boot the VM in rescue mode from a Linux install ISO, install a non-Xen version of the kernel, and make some config file adjustments.
  6. Clone the VM to create the second node of the RAC cluster.
  7. Create shared disks and attach to both VMs.
  8. Boot both VMs and run the cluster configuration script.
  9. Start playing with your new VirtualBox RAC cluster! (Or, watch your workstation swap itself to death, if you didn't heed my "lots of RAM" warning, above.)
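
If you'd like a preview of the VirtualBox command-line work coming in steps 6 and 7, here's a minimal sketch of how the clone and the shared ASM disks can be handled entirely with VBoxManage. The disk name, size, and SATA port below are placeholder choices of mine, not values from the template docs:

[plain gutter="0"]
# Step 6: clone the first node (Thing1) to create the second (Thing2)
VBoxManage clonevm Thing1 --name Thing2 --register

# Step 7: shared disks for ASM must be fixed-size, and attached to BOTH
# VMs with --mtype shareable (size and port here are illustrative)
VBoxManage createhd --filename asm1.vdi --size 10240 --format VDI --variant Fixed
VBoxManage storageattach Thing1 --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium asm1.vdi --mtype shareable
VBoxManage storageattach Thing2 --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium asm1.vdi --mtype shareable
[/plain]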

HOW-TO: The long version

  1. Set up DNS entries for your RAC cluster: Complete details on DNS setup are beyond the scope of this post; instead, I've provided external references above to point you in a good direction. Here are the IPs and hostnames that I will be using in my example deployment. I'm using two separate host-only networks (vboxnet0 and vboxnet1) for the public and private interfaces, and the subnets (192.168.56.x and 192.168.57.x) were chosen automatically for me by VirtualBox. I try to keep things simple. :)

[plain gutter="0"]
#RAC stuff
#Pub
192.168.56.11 thing1.local.org thing1
192.168.56.12 thing2.local.org thing2
#Priv
192.168.57.11 thing1-priv.local.org thing1-priv
192.168.57.12 thing2-priv.local.org thing2-priv
#VIP
192.168.56.21 thing1-vip.local.org thing1-vip
192.168.56.22 thing2-vip.local.org thing2-vip
#SCAN
192.168.56.31 clu-scan.local.org clu-scan
192.168.56.32 clu-scan.local.org clu-scan
192.168.56.33 clu-scan.local.org clu-scan
[/plain]

You should run tests to make sure that the new IPs resolve to the expected hostnames on your host machine. In particular, it's a good idea to check whether your SCAN IPs are round-robining:

[plain gutter="0" highlight="1,12,24,33,45"]
zathras:OVMRACTempl jpiwowar$ nslookup clu-scan
Server:     127.0.0.1
Address:    127.0.0.1#53

Name:   clu-scan
Address: 192.168.56.31
Name:   clu-scan
Address: 192.168.56.32
Name:   clu-scan
Address: 192.168.56.33

zathras:OVMRACTempl jpiwowar$ dig clu-scan

; <<>> DiG 9.7.6-P1 <<>> clu-scan
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43681
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;clu-scan.			IN	A

;; ANSWER SECTION:
clu-scan.		0	IN	A	192.168.56.32
clu-scan.		0	IN	A	192.168.56.33
clu-scan.		0	IN	A	192.168.56.31

;; Query time: 2 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Thu Dec 20 21:28:19 2012
;; MSG SIZE  rcvd: 74

zathras:OVMRACTempl jpiwowar$ dig clu-scan

; <<>> DiG 9.7.6-P1 <<>> clu-scan
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15944
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;clu-scan.			IN	A

;; ANSWER SECTION:
clu-scan.		0	IN	A	192.168.56.33
clu-scan.		0	IN	A	192.168.56.31
clu-scan.		0	IN	A	192.168.56.32

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Thu Dec 20 21:28:27 2012
;; MSG SIZE  rcvd: 74
[/plain]
  2. Download the 11.2.0.1 11gR2 EL5.5 templates: Connect to Oracle's Software Delivery Cloud and download the files listed under Oracle VM Templates for Oracle RAC 11gR2 Media Pack for x86_64 (64 bit). You'll need the two files for "Oracle RAC 11.2.0.1.4 for x86_64 (64 bit) with Oracle Linux 5.5" (V25916-01.zip and V25917-01.zip). I also recommend clicking the "View Digest" button near the top of the download page and running md5sum on each of the downloaded zip files to make sure that the checksums match that list; a quick check looks something like the example below.
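
The placeholder hashes below stand in for the real values shown by "View Digest"; compare what md5sum prints against that page:

[plain gutter="0"]
$ md5sum V25916-01.zip V25917-01.zip
<digest-from-download-page>  V25916-01.zip
<digest-from-download-page>  V25917-01.zip
[/plain]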
  3. Extract the templates:
    • Unzip the two files you just downloaded (V25916-01.zip and V25917-01.zip). You'll get two .tgz files, OVM_EL5U5_X86_64_11201RAC_PVM-1of2.tgz and OVM_EL5U5_X86_64_11201RAC_PVM-2of2.tgz
    • Unpack the two zipped tar files (tar zxpf OVM_EL5U5_X86_64_11201RAC_PVM*.tgz). This will create a directory called OVM_EL5U5_X86_64_11201RAC_PVM, and that's where we'll be doing all of our work. The whole extraction sequence is summarized below.
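
Putting those two bullets together, the extraction sequence looks like this (run from your VM staging directory):

[plain gutter="0"]
$ unzip V25916-01.zip
$ unzip V25917-01.zip
$ tar zxpf OVM_EL5U5_X86_64_11201RAC_PVM-1of2.tgz
$ tar zxpf OVM_EL5U5_X86_64_11201RAC_PVM-2of2.tgz
$ cd OVM_EL5U5_X86_64_11201RAC_PVM
[/plain]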
  4. Convert the OVM disk images to VDI format: Open a command/terminal window and use the VBoxManage utility to convert the raw disk images (.img) in OVM_EL5U5_X86_64_11201RAC_PVM to .vdi files. This utility is installed with VirtualBox; you may need to find it first and add it to your path (location varies by host platform). Timings listed in the examples below are provided to set expectations of how long you'll need to wait for the conversion to complete. Note: I'm running VirtualBox on OS X, and the installer dropped VBoxManage into /usr/bin for me, so it's already in my path. Presumably, you'll find a similar situation in Linux. If you're on Windows and haven't customized your install, you should be able to find VBoxManage.exe in Program Files/Oracle/VirtualBox.

[plain gutter="0" highlight="1-2,9"]
zathras:OVMRACTempl jpiwowar$ mkdir OVM_EL5U5_X86_64_11201RAC_PVM/Thing1
zathras:OVMRACTempl jpiwowar$ time VBoxManage convertfromraw OVM_EL5U5_X86_64_11201RAC_PVM/System.img OVM_EL5U5_X86_64_11201RAC_PVM/Thing1/RacRoot.vdi
Converting from raw image file="OVM_EL5U5_X86_64_11201RAC_PVM/System.img" to file="OVM_EL5U5_X86_64_11201RAC_PVM/Thing1/RacRoot.vdi"...
Creating dynamic image with size 13316728320 bytes (12700MB)...

real	5m12.042s
user	0m6.336s
sys	0m12.783s
zathras:OVMRACTempl jpiwowar$ time VBoxManage convertfromraw OVM_EL5U5_X86_64_11201RAC_PVM/Oracle11201RAC_x86_64-xvdb.img OVM_EL5U5_X86_64_11201RAC_PVM/Thing1/RacORCL.vdi
Converting from raw image file="OVM_EL5U5_X86_64_11201RAC_PVM/Oracle11201RAC_x86_64-xvdb.img" to file="OVM_EL5U5_X86_64_11201RAC_PVM/Thing1/RacORCL.vdi"...
Creating dynamic image with size 17179869184 bytes (16384MB)...

real	9m1.932s
user	0m7.424s
sys	0m20.825s
[/plain]
  5. Create a VM for the first node of your RAC cluster: The VM will need to be configured as follows:
    • OS: Oracle Linux (64-bit)
    • Three (3) NICs: The first two attached to separate Host-only networks (vboxnet0 and vboxnet1), and the third configured to use NAT.
    • One (1) CPU
    • 2GB of RAM
    • Device boot order: CD-ROM, then Hard Disk
    • Storage: Attach the two .vdi files to the SATA controller (root disk first), and attach the Linux install ISO to the virtual DVD drive
    Rather than just present a screenshot of the configuration, I'll give you a listing of the VBoxManage showvminfo command for my first VM (Thing1):

[plain gutter="0" highlight="3,9,16,20-21,47-48,52-54,58-64"]
Name:            Thing1
Groups:          /
Guest OS:        Oracle (64 bit)
UUID:            f7108f32-190f-431e-8310-19179ce73909
Config file:     /Users/jpiwowar/VMs/OVMRACTempl/OVM_EL5U5_X86_64_11201RAC_PVM/Thing1/Thing1.vbox
Snapshot folder: /Users/jpiwowar/VMs/OVMRACTempl/OVM_EL5U5_X86_64_11201RAC_PVM/Thing1/Snapshots
Log folder:      /Users/jpiwowar/VMs/OVMRACTempl/OVM_EL5U5_X86_64_11201RAC_PVM/Thing1/Logs
Hardware UUID:   f7108f32-190f-431e-8310-19179ce73909
Memory size:     2048MB
Page Fusion:     off
VRAM size:       8MB
CPU exec cap:    100%
HPET:            off
Chipset:         piix3
Firmware:        BIOS
Number of CPUs:  1
Synthetic Cpu:   off
CPUID overrides: None
Boot menu mode:  message and menu
Boot Device (1): DVD
Boot Device (2): HardDisk
Boot Device (3): Not Assigned
Boot Device (4): Not Assigned
ACPI:            on
IOAPIC:          on
PAE:             on
Time offset:     0ms
RTC:             local time
Hardw. virt.ext: on
Hardw. virt.ext exclusive: off
Nested Paging:   on
Large Pages:     on
VT-x VPID:       on
State:           powered off (since 2012-12-19T07:57:37.202000000)
Monitor count:   1
3D Acceleration: off
2D Video Acceleration: off
Teleporter Enabled: off
Teleporter Port: 0
Teleporter Address:
Teleporter Password:
Tracing Enabled: off
Allow Tracing to Access VM: off
Tracing Configuration:
Autostart Enabled: off
Autostart Delay: 0
Storage Controller Name (0):            IDE Controller
Storage Controller Type (0):            PIIX4
Storage Controller Instance Number (0): 0
Storage Controller Max Port Count (0):  2
Storage Controller Port Count (0):      2
Storage Controller Bootable (0):        on
Storage Controller Name (1):            SATA Controller
Storage Controller Type (1):            IntelAhci
Storage Controller Instance Number (1): 0
Storage Controller Max Port Count (1):  30
Storage Controller Port Count (1):      10
Storage Controller Bootable (1):        on
IDE (0, 0): /Users/jpiwowar/Downloads/Enterprise-R5-U6-Server-x86_64-dvd.iso (UUID: 43f3022e-fc22-44d0-bc86-8d82e3732d09)
SATA (0, 0): /Users/jpiwowar/VMs/OVMRACTempl/OVM_EL5U5_X86_64_11201RAC_PVM/Thing1/RacRoot.vdi (UUID: d3894bf3-aa74-4d61-b2f7-86b30f1a61db)
SATA (1, 0): /Users/jpiwowar/VMs/OVMRACTempl/OVM_EL5U5_X86_64_11201RAC_PVM/Thing1/RacORCL.vdi (UUID: 22565328-fe92-4467-b0a1-98d0bb71879d)
NIC 1: MAC: 08002768FFC0, Attachment: Host-only Interface 'vboxnet0', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 2: MAC: 0800274C8D14, Attachment: Host-only Interface 'vboxnet1', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 3: MAC: 08002758D099, Attachment: NAT, Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 3 Settings:  MTU: 0, Socket (send: 64, receive: 64), TCP Window (send:64, receive: 64)
NIC 4:           disabled
NIC 5:           disabled
NIC 6:           disabled
NIC 7:           disabled
NIC 8:           disabled
Pointing Device: PS/2 Mouse
Keyboard Device: PS/2 Keyboard
UART 1:          disabled
UART 2:          disabled
LPT 1:           disabled
[/plain]
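
If you'd rather script the VM creation than click through the GUI, a rough VBoxManage equivalent of the configuration above might look like the sketch below. I built mine through the GUI, so treat this as an untested outline; the medium paths are abbreviated placeholders for the .vdi files converted earlier and your install ISO:

[plain gutter="0"]
VBoxManage createvm --name Thing1 --ostype Oracle_64 --register
VBoxManage modifyvm Thing1 --memory 2048 --cpus 1 --boot1 dvd --boot2 disk \
  --nic1 hostonly --hostonlyadapter1 vboxnet0 \
  --nic2 hostonly --hostonlyadapter2 vboxnet1 \
  --nic3 nat
VBoxManage storagectl Thing1 --name "SATA Controller" --add sata
VBoxManage storageattach Thing1 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium /path/to/Thing1/RacRoot.vdi
VBoxManage storageattach Thing1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium /path/to/Thing1/RacORCL.vdi
VBoxManage storagectl Thing1 --name "IDE Controller" --add ide
VBoxManage storageattach Thing1 --storagectl "IDE Controller" --port 0 --device 0 --type dvddrive --medium /path/to/Enterprise-R5-U6-Server-x86_64-dvd.iso
[/plain]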
  6. Boot the new VM (Thing1) in rescue mode from the install CD: Enter "linux rescue" at the boot: prompt to enter rescue mode. Select the keyboard and language preferences that suit you, and enable two network interfaces: eth0 and eth2 (for now, just select "Use IPv4" and "DHCP" when configuring each). There is no need to enable eth1, since only one of the host-only interfaces needs to be active for this exercise. After setting up the network interfaces, progress through the menus ("Continue" and "OK" in my case) until you get to a Linux prompt, and switch to the root volume as instructed:

# chroot /mnt/sysimage

Optional step: Start the sshd service and connect to the VM from your host via ssh, instead of performing the next few steps from the console of the VM. Use 'ifconfig eth0' to find the IP address to use. (Note: The root password for both VMs is 'ovsroot'.)

# service sshd start
  7. Update a few configuration files: The kernel modules loaded to support the Xen kernel are not going to work with the non-Xen kernel, so we need to update modprobe.conf to reference drivers for the virtual hardware that VirtualBox presents:

[plain gutter="0" highlight="4,5,6"]
[root@localhost ~]# vi /etc/modprobe.conf
"/etc/modprobe.conf" 3L, 77C written
[root@localhost ~]# cat /etc/modprobe.conf
alias eth0 e1000
alias scsi_hostadapter ata_piix
alias scsi_hostadapter ahci
[/plain]

Prevent the server from repeatedly trying to spawn a console on a non-existent OVM server:

[plain gutter="0" highlight="1,12"]
[root@localhost ~]# perl -pi.orig -e 's/^(co)/#\1/' /etc/inittab
[root@localhost ~]# tail /etc/inittab
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6

# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon

# Run a getty on the virtual console
#co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
[/plain]

Remove the link to the init script that builds the VM template. We don't want that happening at boot time:

[plain gutter="0" highlight="1"]
[root@localhost ~]# rm /etc/rc3.d/S99oraclevm-template
rm: remove symbolic link `/etc/rc3.d/S99oraclevm-template'? yes
[/plain]

Update /etc/fstab and the cluster configuration scripts so that device names refer to sd* devices instead of the Xen xvd* devices (including a few disks we haven't configured yet; that's coming):

[plain gutter="0" highlight="1,3"]
[root@localhost ~]# perl -pi.orig -e 's/xvd/sd/g' /etc/fstab
[root@localhost ~]# cd /u01/racovm
[root@localhost racovm]# perl -pi.orig -e 's/xvd/sd/g' params.ini netconfig.ini diskconfig.sh
[/plain]
  8. Install a new kernel and modify grub.conf: This VM is configured with a Xen version of the Oracle Linux 5.5 kernel, so we need to grab a "vanilla" version of that kernel. We'll use the Oracle public yum server to accomplish this; that's why we configured and activated the NAT interface. Since you've set up your host to act as a DNS server already, you should not need to add a nameserver entry to resolv.conf. In my case, the VM was able to resolve the address for public-yum.oracle.com without any further configuration changes. If you have issues, try replacing the "nameserver" line in /etc/resolv.conf with "nameserver 8.8.8.8":

[plain gutter="0" highlight="1,4"]
[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost yum.repos.d]# cat /etc/resolv.conf
nameserver 10.0.4.2    --This worked for me; if it doesn't for you, try 8.8.8.8
[root@localhost yum.repos.d]# wget https://public-yum.oracle.com/public-yum-el5.repo
--2012-12-18 15:59:56--  https://public-yum.oracle.com/public-yum-el5.repo
Resolving public-yum.oracle.com... 141.146.44.34
Connecting to public-yum.oracle.com|141.146.44.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3974 (3.9K) [text/plain]
Saving to: `public-yum-el5.repo'

100%[==========================================================================================>] 3,974       --.-K/s   in 0s
[/plain]

Update the public-yum-el5.repo file and set enabled=0 for every source except el5_u5_base:

[plain gutter="0" highlight="1,4"]
[root@localhost yum.repos.d]# vi public-yum-el5.repo
"public-yum-el5.repo" 111L, 3974C written
[root@localhost yum.repos.d]# grep -B5 'enabled=1' public-yum-el5.repo | grep ']'
[el5_u5_base]
[/plain]

Install the OEL5.5 kernel and kernel-devel packages from the Oracle public yum server. We'll need kernel-devel to install the VirtualBox guest additions later.
[plain gutter="0" highlight="1"]
[root@localhost yum.repos.d]# yum install kernel-2.6.18-194.el5 kernel-devel-2.6.18-194.el5
Loaded plugins: security
el5_u5_base                                                      | 1.1 kB     00:00
el5_u5_base/primary                                              | 1.1 MB     00:02
el5_u5_base                                                                 4372/4372
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package kernel.x86_64 0:2.6.18-194.el5 set to be installed
---> Package kernel-devel.x86_64 0:2.6.18-194.el5 set to be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                  Arch               Version                      Repository               Size
====================================================================================================================================
Installing:
 kernel                   x86_64             2.6.18-194.el5               el5_u5_base              20 M
 kernel-devel             x86_64             2.6.18-194.el5               el5_u5_base             5.5 M

Transaction Summary
====================================================================================================================================
Install       2 Package(s)
Upgrade       0 Package(s)

Total download size: 25 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): kernel-devel-2.6.18-194.el5.x86_64.rpm                                        | 5.5 MB     00:20
(2/2): kernel-2.6.18-194.el5.x86_64.rpm                                              |  20 MB     01:03
------------------------------------------------------------------------------------------------------------------------------------
Total                                                                       266 kB/s |  25 MB     01:37
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : kernel                                                                       1/2
  Installing     : kernel-devel                                                                 2/2

Installed:
  kernel.x86_64 0:2.6.18-194.el5                kernel-devel.x86_64 0:2.6.18-194.el5

Complete!
[/plain]

Create an initrd for the new kernel (installing the kernel RPM should also have added a new stanza to grub.conf; we'll verify that next):

[plain gutter="0" highlight="1,62,75-78"]
[root@localhost yum.repos.d]# mkinitrd -v -f /boot/initrd-2.6.18-194.el5.img 2.6.18-194.el5
Creating initramfs
Looking for deps of module ehci-hcd
Looking for deps of module ohci-hcd
Looking for deps of module uhci-hcd
Looking for deps of module ext3: jbd
Looking for deps of module jbd
Found root device sda2 for LABEL=/
Looking for driver for device sda2
Looking for deps of module pci:v00008086d00002829sv00000000sd00000000bc01sc06i01: scsi_mod libata ahci scsi_mod libata ahci
Looking for deps of module scsi_mod
Looking for deps of module sd_mod: scsi_mod
Looking for deps of module libata: scsi_mod
Looking for deps of module ahci: scsi_mod libata
Looking for driver for device sda3
Looking for deps of module pci:v00008086d00002829sv00000000sd00000000bc01sc06i01: scsi_mod libata ahci scsi_mod libata ahci
Looking for deps of module ata_piix: scsi_mod libata
Looking for deps of module ide-disk
Looking for deps of module dm-mem-cache
Looking for deps of module dm-region_hash: dm-mod dm-log
Looking for deps of module dm-mod
Looking for deps of module dm-log: dm-mod
Looking for deps of module dm-message
Looking for deps of module dm-raid45: dm-message dm-mod dm-mem-cache dm-log dm-region_hash
Using modules:  /lib/modules/2.6.18-194.el5/kernel/drivers/usb/host/ehci-hcd.ko /lib/modules/2.6.18-194.el5/kernel/drivers/usb/host/ohci-hcd.ko /lib/modules/2.6.18-194.el5/kernel/drivers/usb/host/uhci-hcd.ko /lib/modules/2.6.18-194.el5/kernel/fs/jbd/jbd.ko /lib/modules/2.6.18-194.el5/kernel/fs/ext3/ext3.ko /lib/modules/2.6.18-194.el5/kernel/drivers/scsi/scsi_mod.ko /lib/modules/2.6.18-194.el5/kernel/drivers/scsi/sd_mod.ko /lib/modules/2.6.18-194.el5/kernel/drivers/ata/libata.ko /lib/modules/2.6.18-194.el5/kernel/drivers/ata/ahci.ko /lib/modules/2.6.18-194.el5/kernel/drivers/ata/ata_piix.ko /lib/modules/2.6.18-194.el5/kernel/drivers/md/dm-mem-cache.ko /lib/modules/2.6.18-194.el5/kernel/drivers/md/dm-mod.ko /lib/modules/2.6.18-194.el5/kernel/drivers/md/dm-log.ko /lib/modules/2.6.18-194.el5/kernel/drivers/md/dm-region_hash.ko /lib/modules/2.6.18-194.el5/kernel/drivers/md/dm-message.ko /lib/modules/2.6.18-194.el5/kernel/drivers/md/dm-raid45.ko
/sbin/nash -> /tmp/initrd.Tu1724/bin/nash
/sbin/insmod.static -> /tmp/initrd.Tu1724/bin/insmod
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/usb/host/ehci-hcd.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/ehci-hcd.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/usb/host/ohci-hcd.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/ohci-hcd.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/usb/host/uhci-hcd.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/uhci-hcd.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/fs/jbd/jbd.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/jbd.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/fs/ext3/ext3.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/ext3.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/scsi/scsi_mod.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/scsi_mod.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/scsi/sd_mod.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/sd_mod.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/ata/libata.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/libata.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/ata/ahci.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/ahci.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/ata/ata_piix.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/ata_piix.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/md/dm-mem-cache.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/dm-mem-cache.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/md/dm-mod.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/dm-mod.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/md/dm-log.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/dm-log.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/md/dm-region_hash.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/dm-region_hash.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/md/dm-message.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/dm-message.ko' [elf64-x86-64]
copy from `/lib/modules/2.6.18-194.el5/kernel/drivers/md/dm-raid45.ko' [elf64-x86-64] to `/tmp/initrd.Tu1724/lib/dm-raid45.ko' [elf64-x86-64]
/sbin/dmraid.static -> /tmp/initrd.Tu1724/bin/dmraid
/sbin/kpartx.static -> /tmp/initrd.Tu1724/bin/kpartx
Adding module ehci-hcd
Adding module ohci-hcd
Adding module uhci-hcd
Adding module jbd
Adding module ext3
Adding module scsi_mod
Adding module sd_mod
Adding module libata
Adding module ahci
Adding module ata_piix
Adding module dm-mem-cache
Adding module dm-mod
Adding module dm-log
Adding module dm-region_hash
Adding module dm-message
Adding module dm-raid45
[root@localhost yum.repos.d]# head -30 /boot/grub/grub.conf
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/xvda2
#          initrd /initrd-version.img
#boot=/dev/xvda
timeout=9
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Enterprise Linux Enterprise Linux Server (2.6.18-194.el5)
	root (hd0,0)
	kernel /vmlinuz-2.6.18-194.el5 ro root=LABEL=/ numa=off
	initrd /initrd-2.6.18-194.el5.img
title Enterprise Linux Enterprise Linux Server (2.6.18-194.0.0.0.3.el5xen)
	root (hd0,0)
	kernel /vmlinuz-2.6.18-194.0.0.0.3.el5xen ro root=LABEL=/ numa=off
	initrd /initrd-2.6.18-194.0.0.0.3.el5xen.img
[/plain]

Finally, add "divider=10" to the boot parameters in grub.conf to improve VM performance. This is often recommended as a way to reduce host CPU utilization when a VM is idle, but it also improves overall guest performance. When I tried my first run-through of this process without this parameter, the cluster configuration script bogged down terribly and failed midway through creating the database.

[plain gutter="0" highlight="1,17"]
[root@localhost yum.repos.d]# perl -pi.orig -e 's/(numa=off)/\1 divider=10/' /boot/grub/grub.conf
[root@localhost yum.repos.d]# head -30 /boot/grub/grub.conf
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/xvda2
#          initrd /initrd-version.img
#boot=/dev/xvda
timeout=9
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Enterprise Linux Enterprise Linux Server (2.6.18-194.el5)
	root (hd0,0)
	kernel /vmlinuz-2.6.18-194.el5 ro root=LABEL=/ numa=off divider=10
	initrd /initrd-2.6.18-194.el5.img
title Enterprise Linux Enterprise Linux Server (2.6.18-194.0.0.0.3.el5xen)
	root (hd0,0)
	kernel /vmlinuz-2.6.18-194.0.0.0.3.el5xen ro root=LABEL=/ numa=off divider=10
	initrd /initrd-2.6.18-194.0.0.0.3.el5xen.img
[/plain]
