
Are There Performance Penalties Running Oracle ASM on NFS?

In this blog post, I will share my SLOB test results and conclusions on Oracle database I/O performance when data files are placed directly on NFS versus on an ASM disk group located on NFS.

You may ask: “Why would anyone consider placing ASM files on NFS? Is it even possible at all?” There are legitimate reasons for doing so, and Oracle supports it. You can find more information in the following blog post: Reasons for using ASM on NFS.
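
For reference, here is a minimal sketch of what that setup looks like. ASM disks on NFS are just large zero-filled files on the NFS mount; the paths, sizes and disk group name below are illustrative, not the exact ones from my tests.

# ASM disks on NFS are plain files; carve one out on the NFS mount.
# All paths and sizes here are illustrative.
dd if=/dev/zero of=/nfs/asmdisks/disk01 bs=1M count=1024 oflag=direct
chown grid:asmadmin /nfs/asmdisks/disk01
chmod 660 /nfs/asmdisks/disk01

# Point the ASM instance at the files and build a disk group on top.
sqlplus -s / as sysasm <<'SQL'
ALTER SYSTEM SET asm_diskstring = '/nfs/asmdisks/*';
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/nfs/asmdisks/disk01';
SQL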

I’ve provided some additional details on how I executed the tests below. Please do not hesitate to ask for more details. At this stage, I just want to mention that I used a modified version of my favorite testing tool (SLOB from Kevin Closson): SOS – SLOB on steroids.

You may also want to have a look at my dNFS presentation, since it covers the comparison between kNFS and dNFS. You can find it here.
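
In case you want to repeat the comparison yourself: switching between kNFS and dNFS is just a matter of relinking the Oracle binary, and dNFS is pointed at a mount through oranfstab. The export and mount paths below are illustrative, not the ones from my tests.

# Enable the Direct NFS client by relinking the Oracle binary
# (use dnfs_off to fall back to kernel NFS).
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

# Illustrative oranfstab entry for a loopback-mounted volume.
cat > $ORACLE_HOME/dbs/oranfstab <<'EOF'
server: localhost
path: 127.0.0.1
export: /exports/oradata
mount: /oradata
EOF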

There are no visible penalties

Placing data files directly on NFS and running a SLOB test with 22 readers gave me an average response time of 0.53 ms in the dNFS configuration and 1.73 ms in the kernelized NFS (kNFS) test.
The ASM setup on the same NFS configuration returned 0.49 ms and 1.74 ms respectively.
Based on the test results, there is no clear performance penalty for running Oracle ASM on NFS compared with placing data files directly on NFS. I consider the discrepancy of less than 10% between the dNFS tests negligible.
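
The averages quoted above are simply Time(s) divided by Waits for the “db file sequential read” event in the AWR excerpts below. A quick sanity check of the arithmetic:

# Average single-block read latency = wait time / number of waits,
# using the numbers from the four AWR excerpts below.
awk 'BEGIN {
    printf "kNFS        : %.2f ms\n", 1370 / 791093  * 1000
    printf "dNFS        : %.2f ms\n", 1229 / 2326535 * 1000
    printf "ASM on kNFS : %.2f ms\n", 1283 / 737114  * 1000
    printf "ASM on dNFS : %.2f ms\n", 1162 / 2379107 * 1000
}'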

Testing results

The following are the bits of the AWR reports that I found relevant; the full reports are available here. In the Load Profile lines, the columns are Per Second and Per Transaction (plus Per Exec and Per Call on the DB Time and DB CPU lines); in the wait event lines, they are Waits, Time(s), Avg wait (ms), % DB time and Wait Class.

Just kNFS

 Elapsed:                1.09 (mins)
      DB Time(s):               21.4              116.9       0.36       4.94
       DB CPU(s):                0.8                4.4       0.01       0.19
   Logical reads:           12,140.1           66,458.8
   Block changes:               41.8              228.8
  Physical reads:           12,042.2           65,923.3
db file sequential read             791,093       1,370      2   97.6 User I/O
DB CPU                                               53           3.8
awr_0w_22r.20121023_201639.txt
Tue Oct 23 20:16:40 EDT 2012

Just dNFS

real    1m13.117s
user    0m0.576s
sys     0m1.281s
   Elapsed:                1.04 (mins)
      DB Time(s):               21.3              110.7       0.13       4.68
       DB CPU(s):                5.0               26.0       0.03       1.10
   Logical reads:           37,408.2          194,450.9
   Block changes:               33.3              173.0
  Physical reads:           37,298.0          193,878.0
db file sequential read           2,326,535       1,229      1   92.5 User I/O
DB CPU                                              312          23.5
awr_0w_22r.20121023_203540.txt
Tue Oct 23 20:35:40 EDT 2012

ASM on kNFS

real    1m13.052s
user    0m0.606s
sys     0m1.259s
   Elapsed:                1.04 (mins)
      DB Time(s):               21.3              111.1       0.36       4.69
       DB CPU(s):                1.2                6.2       0.02       0.26
   Logical reads:           11,883.9           61,944.8
   Block changes:               43.7              227.8
  Physical reads:           11,786.3           61,435.9
db file sequential read             737,114       1,283      2   96.3 User I/O
DB CPU                                               74           5.6
awr_0w_22r.20121104_030233.txt
Sun Nov  4 03:02:34 EST 2012

ASM on dNFS

real    1m13.743s
user    0m0.633s
sys     0m1.241s
   Elapsed:                1.05 (mins)
      DB Time(s):               21.2              111.8       0.13       4.71
       DB CPU(s):                6.5               34.2       0.04       1.44
   Logical reads:           37,754.5          198,865.5
   Block changes:               43.9              231.3
  Physical reads:           37,639.5          198,259.8
db file sequential read           2,379,107       1,162      0   86.7 User I/O
DB CPU                                              411          30.6
awr_0w_22r.20121104_025602.txt

Details on how I executed the test

    • Both the NFS server and the Oracle database were located on the same host.
    • The volume holding the data files for all tests was created on a Linux RAM disk, to exclude the impact of slow HDDs.
    • The NFS volume was mounted via the loopback address (127.0.0.1), to exclude the impact of any slow network component on the test results (see the sketch after this list).
    • The tests exercised “db file sequential read”, i.e. random single-block reads.
    • The host ran on Oracle VM.
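
For those who want to reproduce a comparable environment, here is a rough sketch of the setup described above. The sizes, export path and mount options are assumptions on my side, not the exact values I used.

# Back the NFS export with a RAM-backed filesystem to take slow HDDs
# out of the picture (tmpfs used here for illustration).
mkdir -p /exports/oradata
mount -t tmpfs -o size=4g tmpfs /exports/oradata

# Export it over NFS from the same host.
echo '/exports/oradata 127.0.0.1(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# Mount it back over the loopback address so the network stays out of
# the measurement; mount options follow Oracle's usual NFS guidelines.
mkdir -p /oradata
mount -t nfs -o rw,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,actimeo=0 \
    127.0.0.1:/exports/oradata /oradata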

After reading the above points, you would probably say: “Hey Yury, your tests are far from a real-world workload.” And I totally agree. But I didn’t want to test a real-life load; in fact, it is quite difficult to get close to real-life workloads in any testing. The only goal I had was to see if an additional IO layer (ASM) would make any difference in terms of IO performance. From my perspective, this setup may be one of the best for that purpose, since it eliminates a lot of components that could influence the testing results. The only concern on my side as of now is the fact that I ran an Oracle VM-aware kernel, which may have an impact on the kNFS vs dNFS comparison. However, I think we are fine for comparing NFS with ASM on NFS.

For those of you who are used to SLOB & SOS, here is the command I used to run the tests. I’ve also published my SOS scripts here – SLOB on steroids.

v_r=22; v_w=0; v_c=60
date; time bash -x runit.sh ${v_w} ${v_r} ${v_c} 2>&1 > ./runit.sh_${v_w}_${v_r}c1t.`date +%Y%m%d_%H%M%S`.log ; \
egrep "Elapsed:  |Logical reads:   |Redo size:   |Block changes:    |Physical reads:|DB Time\(s\):|DB CPU\(s\):" `ls -trp awr_* | tail -1` ; \
egrep "db file sequential read|DB CPU" `ls -trp awr_* | tail -1` | head -3 | tail -2 ; \
ls -trp awr_* | tail -1 ; date
