Expand Elastic Configuration on Oracle Exadata - Part 2

In Part 1 of this blog series, we looked at how to reimage a vanilla Exadata compute or storage node with the required ISO image. In this post, we will look at how to integrate the new storage servers into an existing Exadata cluster.

Part 2: Expand the Exadata cluster

A prerequisite to expanding an Exadata cluster is running OEDA and assigning the hostnames and IP addresses the new servers will use. I have already run OEDA for this blog post and copied the generated XML files to Exadata compute node 1, where I will run the next steps.
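If you are following along, copying the generated XML from the machine where OEDA was run to compute node 1 can be as simple as the following sketch (the target directory is the OEDA install directory used later in this post):

# Run from the workstation where OEDA was executed; My_Company-exadb.xml is the generated configuration file
scp My_Company-exadb.xml root@exadb01:/u01/onecommand/linux-x64/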

Step 1: Set up the network

The vanilla Exadata compute or storage nodes usually have IP addresses in the 172.16.X.X subnet assigned to the eth0 interface. Depending on how your VLANs and routing are configured, you may not be able to reach this subnet from your network. In that case, log in through the serial console via the ILOM and add an alias IP on the eth0 interface using an address from your actual eth0 subnet so the server becomes reachable. Then add a ListenAddress entry for this IP to /etc/ssh/sshd_config and restart sshd so you can log in to the host over this IP address.
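If the serial console is needed, it is reachable through the ILOM's SSH interface. A minimal sketch, assuming the ILOM hostname follows the usual <hostname>-ilom convention:

# Connect to the ILOM (hostname is an assumption) and start the host serial console
ssh root@node12-ilom
#   -> start /SP/console     (run at the ILOM prompt; exit the console later with ESC followed by '(' )

Once on the console, add the alias IP and the ListenAddress entry: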

[root@node12 ~]# ifconfig eth0:1 192.168.0.34 netmask 255.255.252.0 up
[root@node12 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a8:69:8c:11:97:58 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/22 brd 172.16.13.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.0.34/22 brd 192.168.3.255 scope global eth0:1
       valid_lft forever preferred_lft forever

[root@node12 ~]# echo "ListenAddress 192.168.0.34" >> /etc/ssh/sshd_config
[root@node12 ~]# systemctl restart sshd

The next step is to run the applyElasticConfig.sh script provided in the OEDA package to configure the hosts being added to the cluster. As a prerequisite, you must set up passwordless SSH from the host where the script will run to the new Exadata hosts. Once done, edit the properties/es.properties file in your OEDA install directory and set the ROCEELASTICNODEIPRANGE parameter to the IP range covering the addresses assigned in the previous step.
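For the passwordless SSH prerequisite, one straightforward approach is to push a key to each new host over the alias IPs assigned in Step 1 (the IP list below matches this example):

# Generate a key pair if one does not already exist, then copy it to each new node
ssh-keygen -t rsa
for ip in 192.168.0.34 192.168.0.35 192.168.0.36; do
  ssh-copy-id root@${ip}
done

With the keys in place, set ROCEELASTICNODEIPRANGE and run the script: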

[root@exadb01 linux-x64]# grep ROCEELASTICNODEIPRANGE properties/es.properties
ROCEELASTICNODEIPRANGE=192.168.0.34:192.168.0.39 
[root@exadb01 linux-x64]# ./applyElasticConfig.sh -cf ./My_Company-exadb.xml
 Applying Elastic Config...
 Discovering pingable nodes in IP Range of 192.168.0.34 - 192.168.0.39.....
 Found 3 pingable hosts..[192.168.0.34,192.168.0.35, 192.168.0.36]
 Validating Hostnames..
 Discovering ILOM IP Addresses..
 Getting uLocations...
 Getting Mac Addressess using eth0...
 Getting uLocations...
 Mapping Machines with local hostnames..
 Mapping Machines with uLocations..
 Checking if Marker file exists..
 Updating machines with Mac Address for 3 valid machines.
 Creating preconf..
 Writing host-specific preconf files..
 Writing host specific file /u01/onecommand/linux-x64/WorkDir/exacel04_preconf.csv for exacel04 ....
 Preconf file copied to exacel04 as /var/log/exadatatmp/firstconf/exacel04_preconf.csv
 Writing host specific file /u01/onecommand/linux-x64/WorkDir/exacel05_preconf.csv for exacel05 ....
 Preconf file copied to exacel05 as /var/log/exadatatmp/firstconf/exacel05_preconf.csv
 Writing host specific file /u01/onecommand/linux-x64/WorkDir/exacel06_preconf.csv for exacel06 ....
 Preconf file copied to exacel06 as /var/log/exadatatmp/firstconf/exacel06_preconf.csv
 Running Elastic Configuration on exacel04.mycompany.com
 Running Elastic Configuration on exacel05.mycompany.com
 Running Elastic Configuration on exacel06.mycompany.com
 Completed Applying Elastic Config...
 Ending applyElasticConfig
[root@exadb01 linux-x64]#

The applyElasticConfig.sh script will apply the hostnames and IP addresses from the OEDA XML file to the new compute/cell nodes.
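Before moving on, a quick sanity check (assuming the new hostnames now resolve from compute node 1) is to confirm that each cell has picked up its assigned name and admin IP:

# Verify hostname and eth0 address on each new cell
for h in exacel04 exacel05 exacel06; do
  ssh root@${h} 'hostname; ip addr show eth0 | grep "inet "'
done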

The following steps detail adding new storage servers to an existing Exadata cluster.

Step 2: Run calibration

Run calibration on new cells to benchmark the performance of the disks.

[root@exadb01 ~]# dcli -g ~/new_cell_group -l root cellcli -e calibrate force;
192.168.0.25: Calibration will take a few minutes...
192.168.0.25: Aggregate random read throughput across all flash disk LUNs: 56431 MBPS
192.168.0.25: Aggregate random read IOs per second (IOPS) across all flash disk LUNs: 1668595
192.168.0.25: Calibrating flash disks (read only, note that writes will be significantly slower) ...
192.168.0.25: LUN 1_0 on drive [FLASH_1_2,FLASH_1_1] random read throughput: 14,175.00 MBPS, and 694092 IOPS
192.168.0.25: LUN 2_0 on drive [FLASH_2_2,FLASH_2_1] random read throughput: 14,178.00 MBPS, and 643495 IOPS
192.168.0.25: LUN 4_0 on drive [FLASH_4_2,FLASH_4_1] random read throughput: 14,176.00 MBPS, and 638538 IOPS
192.168.0.25: LUN 5_0 on drive [FLASH_5_2,FLASH_5_1] random read throughput: 14,229.00 MBPS, and 694577 IOPS
192.168.0.25: LUN 6_0 on drive [FLASH_6_2,FLASH_6_1] random read throughput: 14,198.00 MBPS, and 687977 IOPS
192.168.0.25: LUN 7_0 on drive [FLASH_7_2,FLASH_7_1] random read throughput: 14,167.00 MBPS, and 642601 IOPS
192.168.0.25: LUN 8_0 on drive [FLASH_8_2,FLASH_8_1] random read throughput: 14,185.00 MBPS, and 648842 IOPS
192.168.0.25: LUN 9_0 on drive [FLASH_9_2,FLASH_9_1] random read throughput: 14,204.00 MBPS, and 642252 IOPS
192.168.0.25: CALIBRATE results are within an acceptable range.
192.168.0.25: Calibration has finished.
192.168.0.26: Calibration will take a few minutes...
..
..

Step 3: Make the new cells available to the cluster

Next, add the new cells to the file /etc/oracle/cell/network-config/cellip.ora on all compute nodes.
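One way to append the new entries on every database server is shown below; the compute node list is an example (substitute the hosts from your dbs_group), and the interconnect IPs are the ones used by the three new cells in this environment.

# Append the new cell entries to cellip.ora on each compute node
for h in exadb01 exadb02; do
  ssh root@${h} 'cat >> /etc/oracle/cell/network-config/cellip.ora' <<'EOF'
cell="192.168.12.20;192.168.12.21"
cell="192.168.12.22;192.168.12.23"
cell="192.168.12.24;192.168.12.25"
EOF
done

Afterwards, the file on every compute node should list the new cells alongside the existing ones: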

[root@exadb01 network-config]# dcli -g ~/dbs_group -l root cat /etc/oracle/cell/network-config/cellip.ora
192.168.0.2: cell="192.168.12.10;192.168.12.11"
192.168.0.2: cell="192.168.12.12;192.168.12.13"
192.168.0.2: cell="192.168.12.14;192.168.12.15"
192.168.0.2: cell="192.168.12.16;192.168.12.17"
192.168.0.2: cell="192.168.12.18;192.168.12.19"
192.168.0.2: cell="192.168.12.20;192.168.12.21"
192.168.0.2: cell="192.168.12.22;192.168.12.23"
192.168.0.2: cell="192.168.12.24;192.168.12.25"
192.168.0.3: cell="192.168.12.10;192.168.12.11"
..
..

Step 4: Provision disks for ASM

Next, create grid disks on the new cells. In my example, the new storage servers have the Extreme Flash configuration, which contains only flash disks.

[root@exacel04 ~]# cellcli -e create griddisk all flashdisk prefix=NEW_DATA, size=5660G
GridDisk NEW_DATA_FD_00_exacel04 successfully created
GridDisk NEW_DATA_FD_01_exacel04 successfully created
GridDisk NEW_DATA_FD_02_exacel04 successfully created
GridDisk NEW_DATA_FD_03_exacel04 successfully created
GridDisk NEW_DATA_FD_04_exacel04 successfully created
GridDisk NEW_DATA_FD_05_exacel04 successfully created
GridDisk NEW_DATA_FD_06_exacel04 successfully created
GridDisk NEW_DATA_FD_07_exacel04 successfully created
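Alternatively, the same command can be run against all three new cells in one pass with dcli, reusing the new_cell_group file from the calibration step:

# Create the grid disks on every new cell at once
dcli -g ~/new_cell_group -l root "cellcli -e create griddisk all flashdisk prefix=NEW_DATA, size=5660G"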

Modify the ASM disk string on the ASM instance to include the new grid disks.

SQL> alter system set asm_diskstring='o/*/DATA_*','o/*/RECO_*','/dev/exadata_quorum/*','o/*/NEW_DATA_*' sid='*';

The new disks should show up as candidate disks in the v$asm_disk view.

SQL> select path, state, mount_status, header_status from v$asm_disk where path like '%NEW_DATA%' order by 2;

PATH                                                    STATE    MOUNT_S HEADER_STATU
------------------------------------------------------- -------- ------- ------------
o/192.168.12.20;192.168.12.21/NEW_DATA_FD_00_exacel04   NORMAL   CLOSED  CANDIDATE
o/192.168.12.20;192.168.12.21/NEW_DATA_FD_01_exacel04   NORMAL   CLOSED  CANDIDATE
o/192.168.12.20;192.168.12.21/NEW_DATA_FD_02_exacel04   NORMAL   CLOSED  CANDIDATE
o/192.168.12.20;192.168.12.21/NEW_DATA_FD_03_exacel04   NORMAL   CLOSED  CANDIDATE
o/192.168.12.20;192.168.12.21/NEW_DATA_FD_04_exacel04   NORMAL   CLOSED  CANDIDATE
o/192.168.12.20;192.168.12.21/NEW_DATA_FD_05_exacel04   NORMAL   CLOSED  CANDIDATE
o/192.168.12.20;192.168.12.21/NEW_DATA_FD_06_exacel04   NORMAL   CLOSED  CANDIDATE
o/192.168.12.20;192.168.12.21/NEW_DATA_FD_07_exacel04   NORMAL   CLOSED  CANDIDATE
o/192.168.12.22;192.168.12.23/NEW_DATA_FD_00_exacel05   NORMAL   CLOSED  CANDIDATE
o/192.168.12.22;192.168.12.23/NEW_DATA_FD_01_exacel05   NORMAL   CLOSED  CANDIDATE
..
..

Step 5: Create a disk group

Create a new disk group to consume the new disks. The command shown here creates a high redundancy disk group with three failure groups, each containing the full set of grid disks from one of the new storage servers.

SQL> CREATE DISKGROUP NEW_DATA HIGH REDUNDANCY
FAILGROUP exacel04 DISK 'o/192.168.12.20;192.168.12.21/NEW_DATA*'
FAILGROUP exacel05 DISK 'o/192.168.12.22;192.168.12.23/NEW_DATA*'
FAILGROUP exacel06 DISK 'o/192.168.12.24;192.168.12.25/NEW_DATA*'
ATTRIBUTE 'content.type' = 'data',
'au_size' = '4M',
'cell.smart_scan_capable'='TRUE',
'compatible.rdbms'='11.2.0.4',
'compatible.asm'='19.0.0.0.0';

Diskgroup created.
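Keep in mind that CREATE DISKGROUP mounts the disk group only on the ASM instance where the statement was issued. One way to mount it on the remaining instances is via srvctl, run as the Grid Infrastructure owner (alternatively, run ALTER DISKGROUP NEW_DATA MOUNT on each of the other ASM instances):

# Mount NEW_DATA on the remaining ASM instances
srvctl start diskgroup -diskgroup NEW_DATA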

Voila! Your newly provisioned storage is ready to be consumed by your Oracle databases. These steps will help you the next time you want to expand your Exadata cluster. See you again in another blog post!
