
Oracle 11gR2 Grid Infrastructure -- Memory Footprint

Upgrading to 11g Release 2 Grid Infrastructure? You probably want to read on...

Oracle 11g Release 2 Grid Infrastructure has been dramatically redesigned compared to the 10g and 11gR1 Clusterware. Along with an impressive set of new features, Grid Infrastructure also uses much more memory. While RAM is rather inexpensive these days, this does pose an inconvenience in some scenarios -- particularly for the sandbox-type installations I use all the time for my own tests and demonstrations. For production upgrades, you need to be aware of, and plan for, the increased memory usage.

I've been able to easily run a two-node 10g RAC cluster on my MacBook with 4 GB of RAM, allocating less than 1 GB of RAM to each virtual machine. That was even enough for a mini database instance with a very small memory footprint. Oracle 11g Release 1 was pretty much the same, except that the database instance itself required a bit more memory -- one node could still fit within 1 GB of RAM.

In 11gR2, bare-bone Grid Infrastructure processes alone consume 10+ times more memory (11.2.0.1 on 32-bit Linux, to be precise):

[code]
[gorby@cheese1 ~]$ ps -eo pid,%mem,rss,user,cmd --sort=rsz --cols 100 | grep -e '^ *PID' -e grid -e ohasd | grep -v grep
  PID %MEM    RSS USER     CMD
 3614  0.0   1080 root     /bin/sh /etc/init.d/init.ohasd run
 4322  0.2   3368 oracle   /nfs/11.2.0/grid/opmn/bin/ons -d
 4323  0.4   5164 oracle   /nfs/11.2.0/grid/opmn/bin/ons -d
 4117  0.6   7860 root     /nfs/11.2.0/grid/bin/oclskd.bin
 3830  0.6   8788 oracle   /nfs/11.2.0/grid/bin/gipcd.bin
 5048  0.7   8992 oracle   /nfs/11.2.0/grid/bin/tnslsnr LISTENER -inherit
 4167  0.7  10052 oracle   /nfs/11.2.0/grid/bin/evmlogger.bin -o /nfs/11.2.0/grid/evm/log/evmlogger.i
 3969  0.9  12412 oracle   /nfs/11.2.0/grid/bin/diskmon.bin -d -f
 3860  0.9  12736 oracle   /nfs/11.2.0/grid/bin/mdnsd.bin
 4067  1.1  14648 root     /nfs/11.2.0/grid/bin/octssd.bin reboot
 5016  1.2  15860 root     /nfs/11.2.0/grid/bin/orarootagent.bin
 3956  1.3  16964 root     /nfs/11.2.0/grid/bin/orarootagent.bin
 4292  1.4  17984 oracle   /nfs/11.2.0/grid/bin/oraagent.bin
 3874  1.5  20112 oracle   /nfs/11.2.0/grid/bin/gpnpd.bin
 3817  1.5  20300 oracle   /nfs/11.2.0/grid/bin/oraagent.bin
 4083  1.8  23700 oracle   /nfs/11.2.0/grid/bin/evmd.bin
 4372  2.4  31548 oracle   /nfs/11.2.0/grid/jdk/jre//bin/java -Doracle.supercluster.cluster.server=eo
 3564  3.2  41532 root     /nfs/11.2.0/grid/bin/ohasd.bin reboot
 4081  3.5  44932 root     /nfs/11.2.0/grid/bin/crsd.bin reboot
 3906 18.6 239428 root     /nfs/11.2.0/grid/bin/cssdagent
 3887 18.6 239444 root     /nfs/11.2.0/grid/bin/cssdmonitor
 3924 20.1 258564 oracle   /nfs/11.2.0/grid/bin/ocssd.bin
[/code]

The RSS column above gives the amount of resident memory in KB for the processes related to Grid Infrastructure. As you can clearly see, the processes of the CSS components alone consume well above 700MB! In total, we can account for about 1 GB.
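To put a number on that total, you can sum the RSS column directly. A quick one-liner (my own addition, not from the original listing; it uses the same grep filters as above):

[code]
# Total the resident set sizes (KB) of the Grid Infrastructure processes
# and report the result in MB.
ps -eo rss,cmd | grep -e grid -e ohasd | grep -v grep \
    | awk '{ total += $1 } END { printf "total RSS: %d MB\n", total/1024 }'
[/code]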
(Those calculations are flawed -- see below.)

Compare that with 10g (10.2.0.3 on 32-bit Linux) -- the bare-bone Clusterware processes consume only about 60MB:

[code]
[oracle@lh1 ~]$ ps -eo pid,%mem,rss,user,cmd --sort=rsz --cols 100 | grep -e '^ *PID' -e nfs -e crs -e css -e evm | grep -v grep
  PID %MEM    RSS USER     CMD
 6524  0.0    348 oracle   /nfs1/oracle/oracle/product/10.2.0/crs/opmn/bin/ons -d
 4892  0.1    992 oracle   /bin/sh -c cd /nfs1/oracle/oracle/product/10.2.0/crs/log/lh1/cssd/oclsomon;
 3262  0.1   1072 root     /bin/sh /etc/init.d/init.evmd run
 3506  0.1   1100 root     /bin/sh /etc/init.d/init.crsd run
 4575  0.1   1116 root     /bin/su -l oracle -c sh -c 'ulimit -c unlimited; cd /nfs1/oracle/oracle/pro
 4890  0.1   1120 root     /bin/su -l oracle -c /bin/sh -c 'cd /nfs1/oracle/oracle/product/10.2.0/crs/
 4664  0.1   1180 root     /bin/sh /etc/init.d/init.cssd oclsomon
 3263  0.1   1188 root     /bin/sh /etc/init.d/init.cssd fatal
 4677  0.1   1188 root     /bin/sh /etc/init.d/init.cssd daemon
 6525  0.5   4792 oracle   /nfs1/oracle/oracle/product/10.2.0/crs/opmn/bin/ons -d
 4922  0.6   5224 oracle   /nfs1/oracle/oracle/product/10.2.0/crs/bin/oclsomon.bin
 5915  0.7   6280 oracle   /nfs1/oracle/oracle/product/10.2.0/crs/bin/evmlogger.bin -o /nfs1/oracle/or
 4576  1.1   9312 oracle   /nfs1/oracle/oracle/product/10.2.0/crs/bin/evmd.bin
 5018  1.1   9428 oracle   /nfs1/oracle/oracle/product/10.2.0/crs/bin/ocssd.bin
 4606  2.0  16712 root     /nfs1/oracle/oracle/product/10.2.0/crs/bin/crsd.bin reboot
[/code]

The memory usage above is a bit overstated -- some shared memory is accounted for multiple times. I could use the smaps interface to get better per-process statistics. For example, you can see that three of the "top offenders" (the CSS binaries) have about 40MB of shared libraries each:

[code]
[root@cheese1 ~]# ./smaps.pl 3924 | head
VMSIZE:   258576 kb
RSS:      258564 kb total
           39164 kb shared
            5180 kb private clean
          214220 kb private dirty
PRIVATE MAPPINGS
    vmsize  rss clean  rss dirty file
  15052 kb       0 kb   15052 kb
  12016 kb       0 kb   12016 kb
  11184 kb       0 kb   11184 kb
[root@cheese1 ~]# ./smaps.pl 3887 | head
VMSIZE:   239456 kb
RSS:      239444 kb total
           40096 kb shared
            6200 kb private clean
          193148 kb private dirty
PRIVATE MAPPINGS
    vmsize  rss clean  rss dirty file
  14624 kb       0 kb   14624 kb
  10240 kb       0 kb   10240 kb
  10240 kb       0 kb   10240 kb
[root@cheese1 ~]# ./smaps.pl 3906 | head
VMSIZE:   239440 kb
RSS:      239428 kb total
           40096 kb shared
            6200 kb private clean
          193132 kb private dirty
PRIVATE MAPPINGS
    vmsize  rss clean  rss dirty file
  14624 kb       0 kb   14624 kb
  10240 kb       0 kb   10240 kb
  10240 kb       0 kb   10240 kb
[root@cheese1 ~]#
[/code]
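The smaps.pl script itself isn't reproduced in this post. As a rough stand-in, here is a minimal sketch (my own approximation, not the original script) that sums the per-mapping counters exposed through /proc/<pid>/smaps:

[code]
#!/bin/sh
# smaps-summary.sh -- approximate the smaps.pl totals above by summing the
# Rss/Shared/Private counters of every mapping in /proc/<pid>/smaps.
# Usage (as root): ./smaps-summary.sh 3924
awk '
    /^Rss:/           { rss    += $2 }
    /^Shared_Clean:/  { shared += $2 }
    /^Shared_Dirty:/  { shared += $2 }
    /^Private_Clean:/ { pclean += $2 }
    /^Private_Dirty:/ { pdirty += $2 }
    END {
        printf "RSS:    %8d kb total\n",         rss
        printf "        %8d kb shared\n",        shared
        printf "        %8d kb private clean\n", pclean
        printf "        %8d kb private dirty\n", pdirty
    }
' "/proc/$1/smaps"
[/code]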
One way to get a practical number is to check system memory usage with and without Grid Infrastructure running -- the difference is about 750MB (compare the "free" column of the "-/+ buffers/cache" row before and after):

[code]
[root@cheese1 ~]# free
             total       used       free     shared    buffers     cached
Mem:       1283040    1131584     151456          0      18504     295668
-/+ buffers/cache:     817412     465628
Swap:       655328         76     655252
[root@cheese1 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'cheese1'
...
CRS-4133: Oracle High Availability Services has been stopped.
[root@cheese1 ~]# free
             total       used       free     shared    buffers     cached
Mem:       1283040     397144     885896          0      18640     316632
-/+ buffers/cache:      61872    1221168
Swap:       655328         76     655252
[root@cheese1 ~]# ps -eo pid,%mem,rss,user,cmd --sort=rsz --cols 100 | grep -e '^ *PID' -e grid -e ohasd | grep -v grep
  PID %MEM    RSS USER     CMD
 3614  0.0   1084 root     /bin/sh /etc/init.d/init.ohasd run
[/code]

I don't have an 11gR1 test cluster handy, so I can't verify this 100%, but Oracle 11g Release 1 Clusterware is not much different from 10g, so its memory usage must be similar.

The lesson is that if you are upgrading your Oracle RAC cluster to 11gR2 from 10g or 11gR1, you have to account for an additional ~700MB of memory for Grid Infrastructure alone on each node. Note that this doesn't take into account the higher memory usage of the database instances themselves.
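If you want to capture the same before/after numbers on your own nodes while planning an upgrade, the check is easy to script. A minimal sketch (a hypothetical helper of mine; run as root, it assumes crsctl is on the PATH, and it stops the entire stack on the node, so use it only during a maintenance window):

[code]
#!/bin/sh
# grid-footprint.sh -- measure memory released by stopping Grid Infrastructure.
# Reads the "free" column of the "-/+ buffers/cache" row before and after.
before=$(free | awk '/buffers\/cache/ { print $4 }')
crsctl stop crs                 # stop the full Grid Infrastructure stack
after=$(free | awk '/buffers\/cache/ { print $4 }')
echo "Grid Infrastructure footprint: $(( (after - before) / 1024 )) MB"
crsctl start crs                # bring the stack back up
[/code]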
