
Why is Oracle Ignoring My Memory Parameters?

I was recently asked by a colleague to look at why some memory parameters they had set were not being adhered to. Having done so, I thought a blog post was worthwhile on the subject. This is my first one (ever) so be gentle. :)

On the database in question (version 11.2.0.2), both JAVA_POOL_SIZE and LARGE_POOL_SIZE had been set to 256MB and 128MB respectively. This was clear from the spfile. However, the values were seemingly ignored when we started the database, and we ended up with the below values, taken from the alert log start-up sequence:

large_pool_size          = 768M
java_pool_size           = 768M
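One quick way to confirm this kind of discrepancy, without trawling the alert log, is to compare the spfile values against the values the instance is actually running with. A minimal sketch (assumes the instance was started from an spfile; the pool names are just the ones discussed in this post):

-- Compare requested (spfile) values with running values
SELECT p.name,
       s.value AS spfile_value,
       p.value AS running_value
FROM   v$parameter p, v$spparameter s
WHERE  s.name = p.name
AND    p.name IN ('java_pool_size', 'large_pool_size', 'streams_pool_size');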

Why?

This is due to two factors:

  • The SGA granule size, which on this system is 134217728 (128MB). This is calculated at start-up based on the total SGA size. The value is visible through v$sgainfo or the hidden parameter “_ksmg_granule_size”.
  • The number of sub-pools in the SGA. For this we can look at the hidden parameter “_kghdsidx_count”, which is set to 6 on our system. This value is again calculated at instance start-up and is based on several factors, the most notable being CPU_COUNT: the more CPUs you have, the more sub-pools you get (to a maximum of 7). In my example, we have a CPU_COUNT of 24, giving 1 sub-pool for every 4 CPUs. There are other factors involved in calculating the number of sub-pools, and I have seen situations where this hidden parameter (by default) doesn’t seem to accurately reflect the number of pools in use; I am still experimenting with this. If you do set “_kghdsidx_count” manually, that value will be used (but I would not recommend doing this without Oracle Support involvement). A quick query to check both of these values is sketched below this list.

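To check both inputs on your own system, the granule size is reported by v$sgainfo, and the hidden parameters can be read from the x$ksppi/x$ksppcv fixed tables (which means connecting as SYS). A rough sketch:

-- Granule size as reported by the instance
SELECT name, bytes FROM v$sgainfo WHERE name = 'Granule Size';

-- Hidden parameters: the x$ tables are only visible to SYS
SELECT i.ksppinm AS parameter, v.ksppstvl AS value
FROM   x$ksppi i, x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm IN ('_ksmg_granule_size', '_kghdsidx_count');
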
Now using the above information, we can see the calculation involved:

GranuleSize*#SubPools: 134217728*6 = 805306368 = 768M

We therefore end up with a memory overhead of 1152M (512M extra for the java pool plus 640M extra for the large pool). On large systems like this, it’s probably not too big a deal, I hear you say. Well, this time it may have contributed to an issue: we couldn’t fit the SGA into the available huge pages. From the alert log:

***********************************************************
Huge Pages allocation failed (free: 4288 required: 4737)
Allocation will continue with default/smaller page size
***********************************************************

This could obviously have a big knock-on effect on performance with a large SGA (much documented; I won’t go into that here).

Also worth noting: If you explicitly set either JAVA_POOL_SIZE or LARGE_POOL_SIZE to zero when using manual memory management, it does seem to be honored in the tests I have done.
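If you want to repeat that test, the zero settings can simply be written to the spfile before a restart, along these lines (a sketch, not a recommendation; a zero java pool is only sensible if nothing in the database uses Java):

-- Explicitly disable the pools; takes effect at the next instance start-up
ALTER SYSTEM SET java_pool_size  = 0 SCOPE=SPFILE;
ALTER SYSTEM SET large_pool_size = 0 SCOPE=SPFILE;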

The story doesn’t stop there though. STREAMS_POOL_SIZE was set explicitly to 67108864 (64M), but on start-up it ended up as 128M (i.e. one SGA granule). So the sub-pools do not seem to affect STREAMS_POOL_SIZE in the same manner, even with manual memory management. It is documented that rounding up to one granule will occur (but that is the case for all SGA areas).
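An easy way to see what each pool actually settled at after start-up (including the granule rounding on the streams pool) is v$sgainfo. A quick sketch:

-- Actual pool sizes after start-up, in MB
SELECT name, bytes/1024/1024 AS mb
FROM   v$sgainfo
WHERE  name IN ('Java Pool Size', 'Large Pool Size', 'Streams Pool Size', 'Granule Size');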

Automatic Memory Management

This database was using manual memory management. If you switch to using ASMM with SGA_TARGET (I haven’t tested behavior with MEMORY_TARGET), then the defaults seem to change again.

Some experiments on my own 11.2.0.3 database show that with SGA_TARGET set, JAVA_POOL_SIZE and LARGE_POOL_SIZE both default to the size of one SGA granule and do not seem to be impacted by the number of sub-pools when ASMM is in use, even if you explicitly set them to zero. Again, STREAMS_POOL_SIZE has to be different and will default to zero if you do not set any value; if you make use of Streams components, memory will be transferred to it dynamically. If you set a minimum value for any of these pools, it will be rounded up to the nearest granule automatically.
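For experiments like these, v$sga_dynamic_components is a convenient view to watch under ASMM: it shows the current size of each auto-tuned component, the minimum it has shrunk to, and any user-specified floor. A sketch of the sort of query that shows this (the lower-case component names are the ones the view uses):

SELECT component,
       current_size/1024/1024        AS current_mb,
       min_size/1024/1024            AS min_mb,
       user_specified_size/1024/1024 AS user_mb
FROM   v$sga_dynamic_components
WHERE  component IN ('java pool', 'large pool', 'streams pool');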

Worst Case…

In summary, when using manual memory management, remember that some of your memory pools could end up being much larger than you expect. This is particularly the case on systems with a very large SGA and CPU_COUNT. At 11gR2 you can end up with a granule size of 512M and a maximum of 7 sub-pools. All of this means that the minimum size for each of the java and large pools is a whopping 3584M (512M × 7), for a combined total of 7168M. This is worth being aware of, I think, particularly since most people I speak with tend to use manual memory management for systems with a very large SGA. I have deliberately left out SHARED_POOL_SIZE because the calculations there get even messier. Another time maybe…

I would be very interested in other people’s findings on this. For instance, have you tried setting any of the hidden parameters manually to reduce the minimum sizes, and why did you need to do this?

Oracle Support References:

How To Determine Granule Size [ID 947152.1]
How To Determine The Default Number Of Subpools Allocated During Startup [ID 455179.1]
