Turns on/off Deferred Page Space Allocation (DPSA) policy. Default: 1; Range: 0 or 1; a value of 1 means DPSA is on. May be useful to turn off DPSA policy if you are concerned about page-space overcommitment. Having the value on reduces paging space requirements. Specifies the number of memory frames per bucket. The page-replacement algorithm divides real memory into buckets of frames. On systems with multiple memory pools, the lrubucket parameter is per memory pool. Default: 131072 frames; Range: 1 to total_number_of_memory_frames. Tuning this parameter is not recommended on most systems. Instead of scanning every frame in the system looking for a free frame, the page replacement algorithm scans through the contents of a bucket and scans the same bucket for the second pass before going on to the next bucket. Specifies the number of frames on the free list at which page-stealing is to stop. Default: MIN (# of memory pages/128, 128); Range: 16 to 204800. Observe free-list-size changes with vmstat n. If vmstat n shows free-list size frequently driven below minfree by application demands, increase maxfree to reduce calls to replenish the free list. Generally, keep maxfree - minfree equal to or less than 100. Setting the value too high causes page replacement to run for a longer period of time. The value must be at least 8 greater than minfree. Specifies the point above which the page-stealing algorithm steals only file pages. Default: (Total_RAM - 4 MB)*0.8 or ((# of total memory frames)-1024)*0.8; Range: 1 to 100. Monitor disk I/O with iostat n. This value is expressed as a percentage of the total real-memory page frames in the system. Reducing this value may reduce or eliminate page replacement of working storage pages caused by a high number of file page accesses. Increasing this value may help NFS servers that are mostly read-only. For example, if some files are known to be read repetitively, and I/O rates do not decrease with time from startup, maxperm may be too low. Specifies the maximum percentage of real memory that can be pinned. Default: 80 percent; Range: 1 to 99. Change if memory cannot be pinned even though free memory is available. If this value is changed, the new value should ensure that at least 4 MB of real memory will be left unpinned for use by the kernel. The maxpin value must be greater than 1 and less than 100. The value under maxpin is converted to a percentage at the end of the output of the vmo -a command. Change this parameter only in extreme situations, such as maximum-load benchmarking. Enables the VMM support for local memory allocation. On systems where it is supported, this parameter can be used to instruct VMM to allocate memory frames in the same MCM that the executing thread is running in, if possible. 
However local memory allocation itself is not enabled by default and has to be requested on a per application basis. See Performance guide for more details. Default: 1 (on); Range: 0 or 1. If controlled, repeatable benchmarks are showing unexpected variability in execution time, enabling memory_affinity may be beneficial for performance. Changes the number of memory pools that will be configured at system boot time. Default: MAX (Number of CPUs/8, RAM in GB/16) but not more than the number of CPUs and not less than 1. Changes are not allowed on UP kernels. Specifies the minimum number of frames on the free list at which the VMM starts to steal pages to replenish the free list. Default: maxfree - 8; Range: 8 to 204800. Page replacement occurs when the number of free frames reaches minfree. If processes are being delayed by page stealing, increase minfree to improve response time. The difference between minfree and maxfree should always be equal to or greater than maxpgahead. Specifies the point below which the page-stealer will steal file or computational pages regardless of repaging rates. Default: (Total_RAM -4 MB)*0.2 or ((number of total memory frames) - 1024)*0.2; Range: 1 to 100. Monitor disk I/O with iostat n. Can be useful to decrease this parameter if large number of file pages in memory is causing working storage pages to be replaced. On the other hand, if some files are known to be read repetitively, and I/O rates do not decrease with time from startup, minperm may be too low. User IDs lower than this value will be exempt from getting killed due to low page-space conditions. Default: 0 (off); Range: Any positive integer. System out of paging space and system administrator's processes are getting killed. Set to 1 in order to protect specific user ID processes from getting killed due to low page space or ensure there is sufficient paging space available. Specifies the number of free paging-space pages at which the operating system begins killing processes. Default: MAX (64, number_of_paging_space_pages/128). The npskill value must be greater than zero and less than the total number of paging space pages on the system. Specifies the number of free paging-space pages at which the operating system begins sending the SIGDANGER signal to processes. Default: MAX (512,4*npskill). The value of npswarn must be greater than zero and less than the total number of paging space pages on the system. Increase the value if you experience processes being killed because of low paging space. Reserve special data segment IDs for use by processes executed with the environment variable DATA_SEG_SPECIAL=Y. These data segments will be assigned so that the hardware page table entries for pages within these segments will be better distributed in the cache to reduce cache collisions and therefore improve performance. Default: 0 (off) This is advisory. As many will be reserved as possible up to the requested number. Running vmo -a after reboot will display the actual number reserved. The correct number to reserve depends on the number of processes run simultaneously with DATA_SEG_SPECIAL=Y and the number of data segments used by each of these processes. Turns on or off page coloring in the VMM. Default: 0 (off); Range: 0 or 1. This parameter is useful for some applications that run on machines that have a direct mapped cache. Modify the interval between the special data segment IDs reserved with num_spec_dataseg. Default: 512; Range: 1 to any positive integer. 
Generally, for processes executed with DATA_SEG_SPECIAL=Y, the more pages of the data segment they all access, the higher this value should be to optimize performance. Values that are too high, however, will limit the number of special segment IDs that can be reserved. The performance impact is highly dependent on the hardware architecture as well as the application behavior and different values may be optimal for different architectures and different applications. If set to 1, the maxperm value will be a hard limit on how much of RAM can be used as a persistent file cache. Default: 0 (off); Range 0 or 1. Useful when excessive page-outs to page space are caused by too many file pages in RAM. Set to 1 in order to make the maxperm value a hard limit (use in conjunction with the tuning of the maxperm parameter). If set to 1, allows pinning of shared memory segments. Default: 0 (off); Range 0 or 1. Change when there is too much overhead in pinning or unpinning of AIO buffers from shared memory segments. Useful only if the application also sets the SHM_PIN flag when doing a shmget() call and if doing async I/O from shared memory segments. Specifies the number of large pages to reserve for implementing with the shmget() system call with the SHM_LGPAGE flag. Default: 0; Range: 0 - number of pages. lgpg_size must also be used in addition to this option. The application has to be modified to specify the SHM_LGPAGE flag when calling shmget(). This will improve performance in the case where there are many TLB misses and large amounts of memory is being accessed. Specifies the size in bytes of the hardware-supported large pages used for the implementation for the shmget() system call with the SHM_LGPAGE flag. Default: 0; Possible Values: 0 or 268435456. The bosboot -a command must be run after making the change. lgpg_regions must be set to a non-zero value in addition to this parameter. The application has to be modified to specify the SHM_LGPAGE flag when calling shmget(). This will improve performance in the case where there are many TLB misses and large amounts of memory is being accessed. Specifies maximum percentage of RAM that can be used for caching client pages. Same as maxperm. Default: 80; Range: 1 to 100%. If J2 file pages or NFS pages are causing working storage pages to get paged out, maxclient can be reduced. Decrease the value of maxclient if paging out to paging space is occurring due to too many J2 client pages or NFS client pages in memory. Increasing the value can allow more J2 or NFS client pages to be in memory before page replacement starts. Specifies the number of real memory page sets per memory pool, with a default of 2. A 'bosboot' and reboot must be done to make any change effective. On non-LPAR machines, the hardware page frame table (PFT) is completely software controlled and its size is based on the amount of memory being used. The default is to have 4 PTEs (PFT entries) for each frame of memory (sz=(M/4096)*4*16 where size of PTE is 16 bytes). The size can be scaled up or down via htabscale. The default value is -1 (PTE to frame ratio of 4:1). Each decrement of htabscale reduces the PFT size by half. Each increment of htabscale doubles the PFT size. If set to 0, a heuristic will be used, when tearing down an mmap region, to determine when to avoid locking the source mmapped segment. This is a scalability tradeoff, controlled by relalias_percentage, possibly costing more compute time used. 
If set to 1, the source segment lock is avoided whenever possible, regardless of the value of relalias_percentage. The Default value is 0. If force_relalias_lite is set to 0, then this specifies the factor used in the heuristic to decide whether to avoid locking the source mmapped segment or not. This is used when tearing down an mmapped region and is a scalability statement, where avoiding the lock may help system throughput, but, in some cases, at the cost of more compute time used. If the number of pages being unmapped is less than this value divided by 100 and multiplied by the total number of pages in memory in the source mmapped segment, then the source lock will be avoided. A value of 0 for relalias_percentage, with force_relalias_lite also set to 0, will cause the source segment lock to always be taken. The Default value is 0. Effective values for relalias_percentage will vary by workload, however, a suggested value is: 200. Enables or Disables freeing of paging space disk blocks at pagein of Deferred Page Space Allocation Policy pages. Default: 1, enables freeing of paging space disk blocks when the number of system free paging space blocks is below npsrpgmin, and continues until above npsrpgmax. Range: 0, disables freeing of paging space disk blocks on pagein. 2, always enables freeing of paging space disk blocks on pagein, regardless of thresholds. Enables or Disables freeing paging space disk blocks of Deferred Page Space Allocation Policy pages on read accesses to them. Default: 0, free paging space disk blocks only on pagein of pages that are being modified. Range: 1, free paging space disk blocks on pagein of a page being modified or accessed (read). Specifies the number of free paging space blocks at which the Operating System starts freeing disk blocks on pagein of Deferred Page Space Allocation Policy pages. Default: MAX(768, npswarn+(npswarn/2)). Range: 0 to total number of paging space blocks in the system. Specifies the number of free paging space blocks at which the Operating System stops freeing disk blocks on pagein of Deferred Page Space Allocation Policy pages. Default: MAX(1024, npswarn*2). Range: 0 to total number of paging space blocks in the system. Enables or Disables freeing of paging space disk blocks from pages in memory for Deferred Page Space Allocation Policy pages. Default: 0, disables scrubbing completely. Range: 1, enables scrubbing of in memory paging space disk blocks when the number of system free paging space blocks is below npsscrubmin, and continues until above npsscrubmax. Enables or Disables freeing paging space disk blocks of Deferred Page Space Allocation Policy pages in memory that are not modified. Default: 0, free paging space disk blocks only for modified pages in memory. Range: 1, free paging space disk blocks for modified or unmodified pages. Specifies the number of free paging space blocks at which the Operating System starts Scrubbing in memory pages to free disk blocks from Deferred Page Space Allocation Policy pages. Default: MAX(768, npsrpgmin). Range: 0 to total number of paging space blocks in the system. Specifies the number of free paging space blocks at which the Operating System stops Scrubbing in memory pages to free disk blocks from Deferred Page Space Allocation Policy pages. Default: MAX(1024, npsrpgmax). Range: 0 to total number of paging space blocks in the system. Specifies the point at which a new pta segment will be allocated. This parameter does not exist in the 64-bit kernel. Default: 50; Range: 0-99. 
The system would crash from a DSI (abend code 300) with a stack trace similar to the following: findsrval64() shmforkws64() shmforkws() procdup() kforkx() kfork(). Dump investigation would show that the pta segment is full for the page that generated the page fault. Tuning the pta balancing threshold lower will cause new pta segments to be allocated earlier, thereby reducing the chance that a pta segment will fill up and crash the system. If possible, a better solution would be to move to the 64-bit kernel, which does not have this potential problem. Specifies what the staggering is that will be applied to the data section of a large-page data executable with LDR_CNTRL=DATA_START_STAGGER=Y. That is, the nth large-page data process exec'd on a given MCM has its data section start at offset (n * data_stagger_interval * PSIZE) % LGPSIZE. If kernel_heap_psize is set to 16777216 (16M), then at least large_page_heap_size bytes of the kernel heap will be backed by large pages. The kernel will allocate large pages for the kernel heap in multiples of a segment. When soft_min_lgpgs_vmpool is non-zero, large pages will not be allocated from a vmpool that has fewer than soft_min_lgpgs_vmpool % of its large pages free. If all vmpools have less than soft_min_lgpgs_vmpool % of their large pages free, allocations will occur as normal. Determines whether to keep track of dirty file pages with scb_modlist. Special values: -2: Never keep track of modified pages. This provides the same behavior as on a pre-5.3 system. -1: Keep track of all modified pages by always calling vcs_movep_excp for all file-write calls. Other values >= 0: Keep track of all dirty pages in a file if the number of frames in memory at 'full sync time' is greater than or equal to vm_modlist_threshold. For small file writes, the maintenance of scb_modlist is performed by copying the data at base level and then making sure the frame is on the modlist. Note: This parameter can be modified at any time, changing the behavior of a running system. In general, a new value will not be seen until the next 'full sync' for the file. A 'full sync' occurs when the VW_FULLSYNC flag is used or all pages in the file (from 0 to maxvpn) are written to disk. Specifies the maximum percentage of real memory that can be pinned. Default: 80 percent; Range: 1 to 99. Change if memory cannot be pinned even though free memory is available. If this value is changed, the new value should ensure that at least 4 MB of real memory will be left unpinned for use by the kernel. The vmo command converts maxpin% to the corresponding maxpin absolute value, which is the value used by the kernel. Change this parameter only in extreme situations, such as maximum-load benchmarking. This Dynamic parameter will have its nextboot value written into the boot image if a bosboot command is issued. Specifies the memory allocation policy to use for fork()'d processes. Default: 1; Possible values: 0 or 1. When set to 0, a copy-on-reference policy will be used for fork()'d processes' data. When set to 1, fork()'d processes' data will use a copy-on-write policy, which may result in a smaller memory footprint for some workloads. Specifies the size in bytes of the hardware-supported large pages used for the implementation for the shmget() system call with the SHM_LGPAGE flag. 
Default: 0; Possible Values: 0, 16777216 or 268435456, depending on the architecture. This parameter is of type Bosboot on pre-Power4 systems. On Power4 systems and greater, this parameter is Dynamic but will have its nextboot value written into the boot image if a bosboot command is issued. lgpg_regions must be set to a non-zero value in addition to this parameter. The application has to be modified to specify the SHM_LGPAGE flag when calling shmget(). This will improve performance in the case where there are many TLB misses and large amounts of memory is being accessed. Enables or Disables freeing of paging space disk blocks at pagein of Deferred Page Space Allocation Policy pages. Default: 2, always enables freeing of paging space disk blocks on pagein, regardless of thresholds. Note that read accesses are only processed if rpgclean is 1. By default, only write accesses are always processed. Range: 0, disables freeing of paging space disk blocks on pagein. 1, enables freeing of paging space disk blocks when the number of system free paging space blocks is below npsrpgmin, and continues until above npsrpgmax. This works in conjunction with rpgclean, just like #2 above. 3, a combination of #2 (writes only), and #1 above. Disk blocks are always freed on write accesses. For read accesses, these disk blocks are freed when the number of system free paging space blocks is below npsrpgmin, and continues until above npsrpgmax. This honors rpgclean as well, so for reads to get processed here, rpgclean must be set to 1. Specifies the action to change the system behavior in relation to process termination during low paging space conditions. Default: 1; Range: 1 or 2. A value of 1 indicates current behavior of process termination on low paging space. A value of 2 indicates a new behavior where processes with a SIGDANGER handler will be killed, if no other processes were found earlier to recover from the low paging space condition. Specifies the page size, in bytes, to back MBUF pages with. Default: 4096. Valid sizes: 4096, 16777216. If set to 1, the maxclient value will be a hard limit on how much of RAM can be used as a client file cache. Default: 1 (on); Range 0 or 1. Set to 0 in order to make the maxclient value a soft limit if client pages are being paged out when there are sufficient free pages. (use in conjunction with the tuning of the maxperm and maxclient parameters). Determines the ratio of CPUs per mempool. For every cpu_scale_memp CPUs, at least one mempool will be created. Default: 8; Range 1 to 128 (Max # of CPUs). Can be reduced to reduce contention on the mempools. (use in conjunction with the tuning of the maxperm parameter). Selects between different Virtual Memory Page Replacement Policies. If set to 1, the Workload Manager scans pages from lists by Class. Default: 1 (on); Range 0 or 1. When set to 0, the Workload Manager scans pages by Physical Address. This is the original behavior. Specifies the default memory placement policy for application text. It applies only to text of the main executable and not to its dependencies. Text placement can be set to first-touch (1) or round-robin across the system (2). Specifies the default memory placement policies for the program stack. Stack placement can be set to first-touch (1) or round-robin across the system (2). Specifies the default memory placement policy for data. Data refers to: data of the main executable (initialized data, BSS), heap, shared library data and data of object modules loaded at run-time. 
Data placement can be set to first-touch (1) or round-robin across the system (2). Specifies the default memory placement policy for named shared memory. Named shared memory refers to working storage memory, created via shmget() or shm_open(), which is associated with a name (or key) that allows more than one process to access it simultaneously. Default placement of named shared memory can be set to first-touch (1) or round-robin across the system (2). Specifies the default memory placement policy for anonymous shared memory. Anonymous shared memory refers to working storage memory, created via shmget() or mmap(), that can be accessed only by the creating process or its descendants. This memory is not associated with a name (or key). Default placement of anonymous shared memory can be set to first-touch (1) or round-robin across the system (2). Specifies the default memory placement policy for files that are mapped into the address space of a process (such as through shmat() and mmap()). Default placement of memory mapped files can be set to first-touch (1) or round-robin across the system (2). Specifies the default memory placement policy for unmapped file access, such as through read()/write(). Default placement of unmapped file access can be set to first-touch (1) or round-robin across the system (2). Enables the VMM support for local memory allocation. On systems where it is supported, this parameter can be used to instruct VMM to allocate memory frames in the same MCM that the executing thread is running in, if possible. However local memory allocation itself is not enabled by default and has to be requested on a per application basis. See Performance guide for more details. Default: 1 (on); Range: 0 or 1. If controlled, repeatable benchmarks are showing unexpected variability in execution time, enabling memory_affinity may be beneficial for performance. Specifies the number of frames on the free list at which the VMM starts to steal pages to replenish the free list. Default: 960; Range: 8 to 204800. Page replacement occurs when the number of free frames reaches minfree. If processes are being delayed by page stealing, increase minfree to improve response time. The difference between maxfree and minfree should be of the order of maxpgahead, and no less than 8. Specifies the number of frames on the free list at which page-stealing is to stop. Default: 1088; Range: 8 to 204800. Observe free-list-size changes with vmstat n. If vmstat n shows free-list size frequently driven below minfree by application demands, increase maxfree to reduce calls to replenish the free list. Setting the value too high causes page replacement to run for a longer period of time. The difference between maxfree and minfree should be of the order of maxpgahead, and no less than 8. Determines the interval, in milliseconds, at which LRU page replacement daemons poll for off-level interrupts. Default: 10 milliseconds. Possible values: 0 through 60,000 (1 minute). A LRU page replacement daemon blocks low priority interrupts while running on a CPU. If this option is enabled, LRU page replacement daemons will process pending interrupts at the designated interval. On a heavily loaded system with large amounts of I/O, enabling this option can improve I/O throughput since I/O interrupts don't have to wait for LRU page replacement daemons to finish their processing. This parameter has been deprecated. The cpu_scale_memp parameter can be used instead to influence the number of mempools being used. 
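As a concrete illustration of the minfree/maxfree guidance above, the following is a minimal command sketch, not a recommendation: the threshold values are illustrative, and the gap between them is kept at 128 frames so that it stays on the order of maxpgahead.

    # Watch the free list (the fre column) under load
    vmstat 2 5
    # Display the current free-list thresholds
    vmo -o minfree -o maxfree
    # Raise both thresholds together, keeping the maxfree - minfree gap at 128;
    # -p applies the change now and preserves it across reboots
    vmo -p -o minfree=1024 -o maxfree=1152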
Specifies what the staggering is that will be applied to the data section of a large-page data executable with LDR_CNTRL=DATA_START_STAGGER=Y. I.e. the nth large-page data process exec'd on a given MCM has its data section start at offset (n * data_stagger_interval * PSIZE) %% LGPSIZE. This tunable is only meaningful when large pages are used. Specifies the size in bytes of the hardware-supported large pages used for the implementation for the shmget() system call with the SHM_LGPAGE flag. Default: 0; Possible Values: 0 or 16777216. Supported on systems from Power4 onwards, this parameter is Dynamic but will have its nextboot value written into the boot image if a bosboot command is issued. lgpg_regions must be set to a non-zero value in addition to this parameter. The application has to be modified to specify the SHM_LGPAGE flag when calling shmget(). This will improve performance in the case where there are many TLB misses and large amounts of memory is being accessed. Selects Virtual Memory Page Replacement policy. Default: 0 (off); Range 0 or 1. When set to 0, pages are scanned by Physical Address. When set to 1, page steal lists are used and, if the Workload Manager is enabled, pages are scanned from lists by Class. Total number of paging-space blocks. Specifies the number of large pages to reserve for implementing with the shmget() system call with the SHM_LGPAGE flag. Default: 0; Range: 0 - number of pages. lgpg_size must also be used in addition to this option. The application has to be modified to specify the SHM_LGPAGE flag when calling shmget(). This will improve performance in the case where there are many TLB misses and large amounts of memory is being accessed. This Dynamic parameter will have its nextboot value written into the boot image if a bosboot command is issued. When kernel_heap_psize is set to 16M, this tunable specifies the maximum amount of the kernel heap to try to back with 16M pages. After the kernel heap grows beyond this amount and 16M is the selected kernel_heap_psize, 4K pages will be used for the kernel heap. Default: 0; Range: 0 to MAXINT64. If this tunable is set to 0 and kernel_heap_psize is 16MB, the entire kernel heap should be backed with 16MB pages. This tunable should only be used in very special environments where only a portion of the kernel heap needs to be backed with 16M pages. Specifies the default page size to use for the kernel heap. This is an advisory setting and is only valid on the 64-bit kernel. Default: 4096; Range: 4096, 65536 or 16777216. Support for 64K pages is provided by POWER5+ and later machines and used when vmm_mpsize_support is also enabled. 16M pages, provided by POWER4 and later machines, should only be used for the kernel heap under high performance environments. This tunable toggles AIX 64 bit kernel multiple page size support for the extra page sizes provided by POWER5+ and later machines. This has no effect on legacy support of 4K or large pages and on machines with processors which do not support extra page sizes. Default: 1, AIX will take advantage of the extra page sizes supported by a processor. Range: 0 or 1. When set to 0, the only page sizes AIX will recognize are 4K and a system's large page size. Specifies the size in bytes of the hardware-supported large pages used for the implementation for the shmget() system call with the SHM_LGPAGE flag. Default: 0; Possible Values: 0 or 16777216. Supported on systems from Power4 onwards. 
Although this parameter is Dynamic on DLPAR-capable systems, its nextboot value is written into the boot image when a bosboot command is issued so that the setting is optimally restored at reboot. lgpg_regions must be set to a non-zero value in addition to this parameter. The application has to be modified to specify the SHM_LGPAGE flag when calling shmget(). This will improve performance in the case where there are many TLB misses and large amounts of memory is being accessed. Specifies the number of large pages to reserve for implementing with the shmget() system call with the SHM_LGPAGE flag. Default: 0; Range: 0 - number of pages. lgpg_size must also be used in addition to this option. The application has to be modified to specify the SHM_LGPAGE flag when calling shmget(). This will improve performance in the case where there are many TLB misses and large amounts of memory is being accessed. Although this parameter is Dynamic on DLPAR-capable systems, its nextboot value is written into the boot image when a bosboot command is issued so that the setting is optimally restored at reboot. Specifies the page size, in bytes, used to back MBUF pages. This is an advisory setting and is only valid on the 64-bit kernel. Default: 0. Valid sizes: 0, 4096, 65536 and 16777216. When set to 0, AIX will set the page size to the system's preferred value: 64K on POWER5+ and later machines when vmm_mpsize_support is also enabled, else 4K. 16M pages, provided by POWER4 and later machines, should only be used under high performance environments. Specifies the page size backing the kernel segment. This setting is only valid on the 64-bit kernel, Power4 and later. Default: 0, meaning the kernel will determine the best page size. Possible values: 0, 4096, or 16777216. When the kernel segment is backed with 16M pages roughly 240MB additional pinned memory will be consumed, but performance is increased. Specifies the point below which the page-stealer will steal file or computational pages regardless of repaging rates. Default: 3; Range: 1 to 100. Monitor disk I/O with iostat n. Can be useful to decrease this parameter if large number of file pages in memory is causing working storage pages to be replaced. On the other hand, if some files are known to be read repetitively, and I/O rates do not decrease with time from startup, minperm may be too low. Specifies the point above which the page-stealing algorithm steals only file pages. Default: 90; Range: 1 to 100. Monitor disk I/O with iostat n. This value is expressed as a percentage of the total real-memory page frames in the system. Reducing this value may reduce or eliminate page replacement of working storage pages caused by high number of file page accesses. Increasing this value may help NFS servers that are mostly read-only. For example, if some files are known to be read repetitively, and I/O rates do not decrease with time from startup, maxperm may be too low. Specifies maximum percentage of RAM that can be used for caching client pages. Same as maxperm. Default: 90; Range: 1 to 100%%. If J2 file pages or NFS pages are causing working storage pages to get paged out, maxclient can be reduced. Decrease the value of maxclient if paging out to paging space is occurring due to too many J2 client pages or NFS client pages in memory. Increasing the value can allow more J2 or NFS client pages to be in memory before page replacement starts. Specifies the page size backing the kernel segment. This setting is only valid on the 64-bit kernel, Power4 and later. 
Default: 0, meaning the kernel will determine the best page size. Possible values: 0, 4096, 65536, or 16777216. When the kernel segment is backed with 16M pages roughly 240MB additional pinned memory will be consumed, but performance is increased. Specifies the number of memory frames per bucket. The page-replacement algorithm divides real memory into buckets of frames. On systems with multiple memory pools, the lrubucket parameter is per memory pool. Default: 131072 frames; Range: 1 to total_number_of_memory_frames (with an imposed maximum of 4194304 representing 16GB, to avoid probable LRU performance degradation). Tuning this parameter is not recommended on most systems. Instead of scanning every frame in the system looking for a free frame, the page replacement algorithm scans through the contents of a bucket and scans the same bucket for the second pass before going on to the next bucket. Determines the ratio of CPUs per-mempool. For every cpu_scale_memp CPUs, at least one mempool will be created. Can be reduced to reduce contention on the mempools. (use in conjunction with the tuning of the maxperm parameter). Specifies what the staggering is that will be applied to the data section of a large-page data executable with LDR_CNTRL=DATA_START_STAGGER=Y. For example, the nth large-page data process exec'd on a given MCM has its data section start at offset (n * data_stagger_interval * PSIZE) %% LGPSIZE. This tunable is only meaningful when large pages are used. Turns on/off Deferred Page Space Allocation (DPSA) policy. A value of 1 means DPSA is on. May be useful to turn off DPSA policy if you are concerned about page-space overcommitment. Having the value on reduces paging space requirements. If set to 0, a heuristic will be used, when tearing down an mmap region, to determine when to avoid locking the source mmapped segment. This is a scalability tradeoff, controlled by relalias_percentage, possibly costing more compute time used. If set to 1, the source segment lock is avoided whenever possible, regardless of the value of relalias_percentage. Specifies the number of real memory page sets per memory pool. A 'bosboot' and reboot must be done to make any change effective. On non-lpar machines, the hardware page frame table (PFT) is completely software controlled and its size is based on the amount of memory being used. The default is to have 4 PTE's (PFT entries) for each frame of memory (sz=(M/4096)*4*16 where size of PTE is 16 bytes). The size can be scaled up or down via htabscale. The value of -1 indicates a PTE to frame ratio of 4:1. Each decrement of htabscale reduces the PFT size in half. Each increment of htabscale doubles the PFT size. Specifies the default page size to use for the kernel heap. This is an advisory setting. Support for 64K pages is provided by POWER5+ and later machines and used when vmm_mpsize_support is also enabled. 16M pages, provided by POWER4 and later machines, should only be used for the kernel heap under high performance environments. A value of 0 indicates that the kernel will use the preferred default value of 64K, if that page size is supported, else 4K pages. Specifies the page size backing the kernel segment. This setting is only valid on Power4 and later. A value of 0 indicates the kernel will determine the best page size. When the kernel segment is backed with 16M pages roughly 240MB additional pinned memory will be consumed, but performance is increased. 
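Several of the tunables described above (the page-set count per memory pool, htabscale, the kernel page-size settings) only take effect after the boot image is rebuilt and the system rebooted. A minimal sketch of that workflow, using kernel_heap_psize as the named example; the 64K value assumes POWER5+ hardware with vmm_mpsize_support enabled, and -r changes only the nextboot value in any case.

    # Set the nextboot value of a boot-time tunable
    vmo -r -o kernel_heap_psize=65536
    # Rebuild the boot image so the nextboot value is picked up, then reboot
    bosboot -a
    shutdown -Fr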
When kernel_heap_psize is set to 16M, this tunable specifies the maximum amount of the kernel heap to try to back with 16M pages. After the kernel heap grows beyond this amount and 16M is the selected kernel_heap_psize, 4K pages will be used for the kernel heap. If this tunable is set to 0 and kernel_heap_psize is 16MB, the entire kernel heap should be backed with 16MB pages. This tunable should only be used in very special environments where only a portion of the kernel heap needs to be backed with 16M pages. Specifies the number of large pages to reserve for implementing with the shmget() system call with the SHM_LGPAGE flag. lgpg_size must also be used in addition to this option. The application has to be modified to specify the SHM_LGPAGE flag when calling shmget(). This will improve performance in the case where there are many TLB misses and large amounts of memory is being accessed. Although this parameter is Dynamic on DLPAR-capable systems, its nextboot value is written into the boot image when a bosboot command is issued so that the setting is optimally restored at reboot. Specifies the size in bytes of the hardware-supported large pages used for the implementation for the shmget() system call with the SHM_LGPAGE flag. Supported on systems from Power4 onwards. Although this parameter is Dynamic on DLPAR-capable systems, its nextboot value is written into the boot image when a bosboot command is issued so that the setting is optimally restored at reboot. lgpg_regions must be set to a non-zero value in addition to this parameter. The application has to be modified to specify the SHM_LGPAGE flag when calling shmget(). This will improve performance in the case where there are many TLB misses and large amounts of memory is being accessed. Specifies the action to change the system behavior in relation to process termination during low paging space conditions. A value of 1 indicates current behavior of process termination on low paging space. A value of 2 indicates a new behavior where processes with SIGDANGER handler will be killed, if no other processes were found earlier to recover from low paging space condition. Determines the interval, in milliseconds, at which LRU page replacement daemons poll for off-level interrupts. The maximum value of 60,000 corresponds to 1 minute. A LRU page replacement daemon blocks low priority interrupts while running on a CPU. If this option is enabled, LRU page replacement daemons will process pending interrupts at the designated interval. On a heavily loaded system with large amounts of I/O, enabling this option can improve I/O throughput since I/O interrupts don't have to wait for LRU page replacement daemons to finish their processing. Specifies the number of memory frames per bucket. The page-replacement algorithm divides real memory into buckets of frames. On systems with multiple memory pools, the lrubucket parameter is per memory pool. The maximum value corresponds to the total number of memory frames, with an imposed maximum of 4194304 representing 16GB, to avoid probable LRU performance degradation. Tuning this parameter is not recommended on most systems. Instead of scanning every frame in the system looking for a free frame, the page replacement algorithm scans through the contents of a bucket and scans the same bucket for the second pass before going on to the next bucket. Specifies maximum percentage of RAM that can be used for caching client pages. Same as maxperm. 
If J2 file pages or NFS pages are causing working storage pages to get paged out, maxclient can be reduced. Decrease the value of maxclient if paging out to paging space is occurring due to too many J2 client pages or NFS client pages in memory. Increasing the value can allow more J2 or NFS client pages to be in memory before page replacement starts. Specifies the number of frames on the free list at which page-stealing is to stop. Observe free-list-size changes with vmstat n. If vmstat n shows free-list size frequently driven below minfree by application demands, increase maxfree to reduce calls to replenish the free list. Setting the value too high causes page replacement to run for a longer period of time. The difference between maxfree and minfree should be of the order of maxpgahead, and no less than 8. Specifies the point above which the page-stealing algorithm steals only file pages. Monitor disk I/O with iostat n. This value is expressed as a percentage of the total real-memory page frames in the system. Reducing this value may reduce or eliminate page replacement of working storage pages caused by high number of file page accesses. Increasing this value may help NFS servers that are mostly read-only. For example, if some files are known to be read repetitively, and I/O rates do not decrease with time from startup, maxperm may be too low. Specifies the maximum percentage of real memory that can be pinned. Change if cannot pin memory, although free memory is available. If this value is changed, the new value should ensure that at least 4 MB of real memory will be left unpinned for use by the kernel. The vmo command converts maxpin%% to the corresponding maxpin absolute value, which is the value used by the kernel. Change this parameter only in extreme situations, such as maximum-load benchmarking. This Dynamic parameter will have its nextboot value written into the boot image if a bosboot command is issued. Enables the VMM support for local memory allocation. On systems where it is supported, this parameter can be used to instruct VMM to allocate memory frames in the same MCM that the executing thread is running in, if possible. However local memory allocation itself is not enabled by default and has to be requested on a per application basis. See Performance guide for more details. A value of 1 indicates on. If controlled, repeatable benchmarks are showing unexpected variability in execution time, enabling memory_affinity may be beneficial for performance. Number of valid memory frames. N/A Specifies the page size, in bytes, used to back MBUF pages. This is an advisory setting. When set to 0, the operating system will set the page size to the system's preferred value: 64K on POWER5+ and later machines when vmm_mpsize_support is also enabled, else 4K. 16M pages, provided by POWER4 and later machines, should only be used under high performance environments. Specifies the default memory placement policy for data. Data refers to : data of the main executable (initialized data, BSS), heap, shared library data and data of object modules loaded at run-time. Data placement can be set to first-touch (value of 1) or round-robin across the system (value of 2). Specifies the default memory placement policy for files that are mapped into the address space of a process (such as through shmat() and mmap()). Default placement of memory mapped files can be set to first-touch (value of 1) or round-robin across the system (value of 2). 
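For the memory placement policies described above (1 = first-touch, 2 = round-robin), a small sketch of changing the defaults; the tunable names memplace_data and memplace_stack are the standard vmo names for these policies on recent AIX levels and are not given in the text itself, and the choice to change data and stack is only an illustration.

    # Show the current placement defaults
    vmo -a | grep memplace
    # Use round-robin placement for process data and stack
    vmo -p -o memplace_data=2 -o memplace_stack=2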
Specifies the default memory placement policy for anonymous shared memory. Anonymous shared memory refers to working storage memory, created via shmget() or mmap(), that can be accessed only by the creating process or its descendants. This memory is not associated with a name (or key). Default placement of anonymous shared memory can be set to first-touch (value of 1) or round-robin across the system (value of 2). Specifies the default memory placement policy for named shared memory. Named shared memory refers to working storage memory, created via shmget() or shm_open(), which is associated with a name (or key) that allows more than one process to access it simultaneously. Default placement of named shared memory can be set to first-touch (value of 1) or round-robin across the system (value of 2). Specifies the default memory placement policies for the program stack. Stack placement can be set to first-touch (value of 1) or round-robin across the system (value of 2). Specifies the default memory placement policy for application text. This applies only to text of the main executable and not to its dependencies. Text placement can be set to first-touch (value of 1) or round-robin across the system (value of 2). Specifies the default memory placement policy for unmapped file access, such as through read()/write(). Default placement of unmapped file access can be set to first-touch (value of 1) or round-robin across the system (value of 2). Specifies the number of frames on the free list at which the VMM starts to steal pages to replenish the free list. Page replacement occurs when the number of free frames reaches minfree. If processes are being delayed by page stealing, increase minfree to improve response time. The difference between maxfree and minfree should be of the order of maxpgahead, and no less than 8. Specifies the point below which the page-stealer will steal file or computational pages regardless of repaging rates. Monitor disk I/O with iostat n. Can be useful to decrease this parameter if large number of file pages in memory is causing working storage pages to be replaced. On the other hand, if some files are known to be read repetitively, and I/O rates do not decrease with time from startup, minperm may be too low. User IDs lower than this value will be exempt from getting killed due to low page-space conditions. A value of 0 indicates off. Useful when system is out of paging space and system administrator's processes are getting killed. Either set this tunable to 1 in order to protect specific user ID processes from getting killed due to low page space or ensure there is sufficient paging space available. Specifies the number of free paging-space pages at which the operating system begins killing processes. The default value is the maximum of 64 and (number of paging space pages)/128. The npskill value must be greater than zero and less than the total number of paging space pages on the system. Specifies the number of free paging space blocks at which the Operating System stops freeing disk blocks on pagein of Deferred Page Space Allocation Policy pages. The default value is the maximum of 1024 and (npswarn*2). The maximum value is the total number of paging space blocks in the system. Specifies the number of free paging space blocks at which the Operating System starts freeing disk blocks on pagein of Deferred Page Space Allocation Policy pages. The default value is the maximum of 768 and (npswarn+(npswarn/2)). 
The maximum value is the total number of paging space blocks in the system. Specifies the number of free paging space blocks at which the Operating System stops Scrubbing in memory pages to free disk blocks from Deferred Page Space Allocation Policy pages. The default value is the maximum of 1024 and npsrpgmax. The maximum value is the total number of paging space blocks in the system. Specifies the number of free paging space blocks at which the Operating System starts Scrubbing in memory pages to free disk blocks from Deferred Page Space Allocation Policy pages. The default value is the maximum of 768 and npsrpgmin. The maximum value is the total number of paging space blocks in the system. Specifies the number of free paging-space pages at which the operating system begins sending the SIGDANGER signal to processes. The default value is the maximum of 512 and (4*npskill). The value of npswarn must be greater than zero and less than the total number of paging space pages on the system. Increase the value if you experience processes being killed because of low paging space. Reserve special data segment IDs for use by processes executed with the environment variable DATA_SEG_SPECIAL=Y. These data segments will be assigned so that the hardware page table entries for pages within these segments will be better distributed in the cache to reduce cache collisions and therefore improve performance. A value of 0 indicates off. This is advisory. As many will be reserved as possible up to the requested number. Running vmo -a after reboot will display the actual number reserved. The correct number to reserve depends on the number of processes run simultaneously with DATA_SEG_SPECIAL=Y and the number of data segments used by each of these processes. Total number of paging-space blocks. N/A Selects Virtual Memory Page Replacement policy. A value of 0 indicates off. When set to 0, pages are scanned by Physical Address. When set to 1, page steal lists are used and, if the Workload Manager is enabled, pages are scanned from lists by Class. Number of pages available for pinning N/A If force_relalias_lite is set to 0, then this specifies the factor used in the heuristic to decide whether to avoid locking the source mmapped segment or not. This is used when tearing down an mmapped region and is a scalability statement, where avoiding the lock may help system throughput, but, in some cases, at the cost of more compute time used. If the number of pages being unmapped is less than this value divided by 100 and multiplied by the total number of pages in memory in the source mmapped segment, then the source lock will be avoided. A value of 0 for relalias_percentage, with force_relalias_lite also set to 0, will cause the source segment lock to always be taken. Effective values for relalias_percentage will vary by workload, however, a suggested value is: 200. Enables or Disables freeing paging space disk blocks of Deferred Page Space Allocation Policy pages on read accesses to them. A value of 0 indicates free paging space disk blocks only on pagein of pages that are being modified. A value of 1 indicates free paging space disk blocks on pagein of a page being modified or accessed (read). Enables or Disables freeing of paging space disk blocks at pagein of Deferred Page Space Allocation Policy pages. A value of 2 always enables freeing of paging space disk blocks on pagein, regardless of thresholds. Note that read accesses are only processed if rpgclean is 1. By default, only write accesses are always processed. 
A value of 0 disables freeing of paging space disk blocks on pagein. A value of 1 enables freeing of paging space disk blocks when the number of system free paging space blocks is below npsrpgmin, and continues until above npsrpgmax. This works in conjunction with rpgclean, just like #2 above. A value of 3 indicates a combination of #2 (writes only), and #1 above. Disk blocks are always freed on write accesses. For read accesses, these disk blocks are freed when the number of system free paging space blocks is below npsrpgmin, and continues until above npsrpgmax. This honors rpgclean as well, so for reads to get processed here, rpgclean must be set to 1. Enables or Disables freeing of paging space disk blocks from pages in memory for Deferred Page Space Allocation Policy pages. A value of 0 disables scrubbing completely. A value of 1 enables scrubbing of in-memory paging space disk blocks when the number of system free paging space blocks is below npsscrubmin, and continues until above npsscrubmax. Enables or Disables freeing paging space disk blocks of Deferred Page Space Allocation Policy pages in memory that are not modified. A value of 0 indicates free paging space disk blocks only for modified pages in memory. A value of 1 indicates free paging space disk blocks for modified or unmodified pages. When soft_min_lgpgs_vmpool is non-zero, large pages will not be allocated from a vmpool that has fewer than soft_min_lgpgs_vmpool % of its large pages free. If all vmpools have less than soft_min_lgpgs_vmpool % of their large pages free, allocations will occur as normal. Modify the interval between the special data segment IDs reserved with num_spec_dataseg. Generally, for processes executed with DATA_SEG_SPECIAL=Y, the more pages of the data segment they all access, the higher this value should be to optimize performance. Values that are too high, however, will limit the number of special segment IDs that can be reserved. The performance impact is highly dependent on the hardware architecture as well as the application behavior and different values may be optimal for different architectures and different applications. If set to 1, the maxclient value will be a hard limit on how much of RAM can be used as a client file cache. A value of 1 indicates on. Set to 0 in order to make the maxclient value a soft limit if client pages are being paged out when there are sufficient free pages. (use in conjunction with the tuning of the maxperm and maxclient parameters). If set to 1, the maxperm value will be a hard limit on how much of RAM can be used as a persistent file cache. A value of 0 indicates off. Useful when excessive page-outs to page space are caused by too many file pages in RAM. Set to 1 in order to make the maxperm value a hard limit (use in conjunction with the tuning of the maxperm parameter). If set to 1, allows pinning of shared memory segments. A value of 0 indicates off. Change when there is too much overhead in pinning or unpinning of AIO buffers from shared memory segments. Useful only if the application also sets the SHM_PIN flag when doing a shmget() call and if doing async I/O from shared memory segments. Determines whether to keep track of dirty file pages with scb_modlist. A value of -2 indicates never keep track of modified pages. This provides the same behavior as on a pre-5.3 system. A value of -1 indicates keep track of all modified pages by always calling vcs_movep_excp for all file-write calls. 
Other values >= 0 indicate that all dirty pages in a file are tracked if the number of frames in memory at 'full sync time' is greater than or equal to vm_modlist_threshold. For small file writes, the maintenance of scb_modlist is performed by copying the data at base level and then making sure the frame is on the modlist. Note: This parameter can be modified at any time, changing the behavior of a running system. In general, a new value will not be seen until the next 'full sync' for the file. A 'full sync' occurs when the VW_FULLSYNC flag is used or all pages in the file (from 0 to maxvpn) are written to disk. Specifies the memory allocation policy to use for fork()'d processes. When set to 0, a copy-on-reference policy will be used for fork()'d processes' data. When set to 1, fork()'d processes' data will use a copy-on-write policy, which may result in a smaller memory footprint for some workloads. This tunable specifies the operating system's multiple page size support for the extra page sizes provided by POWER5+ and later machines. This has no effect on legacy support of 4K or large pages and on machines with processors that do not support extra page sizes. A value of 1 indicates the operating system will take advantage of the extra page sizes supported by a processor. A value of 2 indicates the operating system will take advantage of the capability of using multiple page sizes per segment. When set to 0, the only page sizes the operating system will recognize are 4K and a system's large page size. A value of -1 indicates that the operating system will choose an optimal policy based on current hardware and software configuration. This tunable controls the default aggressiveness of page size promotion. Its value is an abstract aggressiveness weighting which is treated by the operating system as the inverse of the page promotion threshold. A value of 0 for the vmm_default_pspa setting is equivalent to a page promotion threshold of 100%, that is, a memory range must have 100% real memory occupancy in order to be promoted. A value of 100 for the vmm_default_pspa setting is equivalent to a page promotion threshold of 0%, that is, a memory range should be promoted immediately on first reference to memory in the range. A value of -1 for the vmm_default_pspa setting is equivalent to a page promotion threshold of -1, that is, never do page promotion for a memory range. Page size promotion thresholds are only considered at segment creation time. Thus, changing vmm_default_pspa will only affect the page size promotion thresholds for segments created after the tunable is adjusted. Selects whether non-pageable page sizes (16M, 16G) are included in the WLM realmem and virtmem counts. If 1 is selected, then non-pageable page sizes are included in the realmem and virtmem limits count. If 0 is selected, then only pageable page sizes (4K, 64K) are included in the realmem and virtmem counts. This value can only be changed when WLM Memory Accounting is off, or the change will fail. When this tunable is set to 0, WLM virtual and real memory limits will only apply to pageable pages consumed by a WLM class. Because heavy use of pageable pages is what causes paging on a system, a value of 0 provides more granular control over how much a WLM class pages when non-pageable pages are in use. This tunable should only be adjusted when WLM real or virtual memory limits are being used on a system configured with non-pageable pages. 
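Since vmm_default_pspa is described above as the inverse of the page-promotion threshold, a rough mapping and an illustrative setting might look like the following sketch; the value 50 is an example, not a recommendation, and a changed value only affects segments created afterwards.

    # Rough mapping: threshold ~= 100 - vmm_default_pspa
    #   0   -> promote only at 100% real-memory occupancy of the range
    #   100 -> promote on first reference to the range
    #   -1  -> never promote
    vmo -p -o vmm_default_pspa=50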
Determines the timeout interval, in milliseconds, to wait for page size management daemons to make forward progress before LRU page replacement is started. When page size management is working to increase the number of page frames of a particular page size, LRU page replacement is delayed for that page size for up to this amount of time. On a heavily loaded system, increasing this tunable can give the page size management daemons more time to create more page frames before LRU is started. This tunable toggles the loaning behavior when shared memory mode is enabled. When the tunable is set to 0, loaning is disabled. When set to 1, loaning of file cache is enabled. When set to 2, loaning of any type of data is enabled. In response to low memory in the VRM pool, the VMM will free memory and loan it to the hypervisor. Specifies the memory allocation policy to use for shared library mappings. When set to 0, a copy-on-reference policy will be used for shared library mappings. When set to 1, shlib mappings will use a copy-on-write policy, which may result in a smaller memory footprint for some workloads, but also a longer execution time for some workloads. This tunable toggles the loaning behavior when shared memory mode is enabled. When the tunable is set to 0, loaning is disabled. When set to 1, loaning of file cache is enabled. When set to 2, loaning of any type of data is enabled. In response to low memory in the AMS pool, the VMM will free memory and loan it to the hypervisor. Specifies the default memory placement policy for data. Data refers to : data of the main executable (initialized data, BSS), heap, shared library data and data of object modules loaded at run-time. Data placement can be set to first-touch (value of 1), round-robin across the system (value of 2) or auto-affinitized (value of 0), where the system decides the best placement for the memory. Specifies the default memory placement policy for files that are mapped into the address space of a process (such as through shmat() and mmap()). Default placement of memory mapped files can be set to first-touch (value of 1) or round-robin across the system (value of 2) or auto-affinitized (value of 0), where the system decides the best placement for the memory. Specifies the default memory placement policy for anonymous shared memory. Anonymous shared memory refers to working storage memory, created via shmget() or mmap(), that can be accessed only by the creating process or its descendants. This memory is not associated with a name (or key). Default placement of anonymous shared memory can be set to first-touch (value of 1) or round-robin across the system (value of 2) or auto-affinitized (value of 0), where the system decides the best placement for the memory. Specifies the default memory placement policy for named shared memory. Named shared memory refers to working storage memory, created via shmget() or shm_open(), which is associated with a name (or key) that allows more than one process to access it simultaneously. Default placement of named shared memory can be set to first-touch (value of 1) or round-robin across the system (value of 2) or auto-affinitized (value of 0), where the system decides the best placement for the memory. Specifies the default memory placement policies for the program stack. Stack placement can be set to first-touch (value of 1) or round-robin across the system (value of 2) or auto-affinitized (value of 0), where the system decides the best placement for the memory. 
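On levels that support the auto-affinitized setting described above, letting the system choose placement is simply a matter of setting the policy value to 0. A minimal sketch; the shared-memory tunable names used here (memplace_shm_named, memplace_shm_anonymous) are the standard vmo names for these policies and are not given in the text itself, and the choice of this pair is only an illustration.

    # 0 = auto-affinitized: the system decides the best placement
    vmo -p -o memplace_shm_named=0 -o memplace_shm_anonymous=0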
Specifies the default memory placement policy for application text. This applies only to text of the main executable and not to its dependencies. Text placement can be set to first-touch (value of 1), round-robin across the system (value of 2), or auto-affinitized (value of 0), where the system decides the best placement for the memory.
Specifies the default memory placement policy for unmapped file access, such as through read()/write(). Default placement of unmapped file access can be set to first-touch (value of 1), round-robin across the system (value of 2), or auto-affinitized (value of 0), where the system decides the best placement for the memory (see the sketch after this group of descriptions).
Defines the minimum size of the uncompressed pool. If the compressed memory pool grows too large, there may not be enough space in memory to house uncompressed memory, which can slow down application performance due to excessive use of the compressed memory pool. Increase this value to limit the size of the compressed memory pool and make more uncompressed pages available.
Specifies the amount of free memory in a compressed memory pool free list at which the VMM will grow the compressed pool. If processes are being delayed waiting for compressed memory to become available, increase ame_minfree_mem to improve response time. Note that this must be at least 64 KB less than ame_maxfree_mem.
Specifies the average amount of free memory in a compressed memory pool free list at which the VMM will shrink the compressed pool. Excessive shrink and grow operations can occur if the compressed memory pool size tends to change significantly. This can occur if a workload's working set size frequently changes. Increase this tunable to raise the threshold at which the VMM will shrink a compressed memory pool and thus reduce the number of overall shrink and grow operations.
Set to either 0 or 1 for either a system-wide expanded memory view or true memory view, respectively. If applications require a true view of memory, change this tunable.
Determines the ratio of CPUs per compressed memory pool. For every ame_cpus_per_pool CPUs, at least one compressed memory pool will be created. Lower ratios can be used to reduce contention on compressed memory pools. This ratio is not the only factor used to determine the number of compressed memory pools (the amount of memory and its layout are also considered), so certain changes to this ratio may not result in any change to the number of compressed memory pools.
Specifies the percentage of process private memory allocations that are affinitized by default. This tunable limits the default amount of affinitized memory allocated for process private memory. Affinitizing private memory may improve process performance. However, too much affinitized memory in a vmpool can cause paging and impact overall system performance.
Specifies the percentage of memory allocation requests with affinity attachments that are affinitized. This tunable limits the amount of affinitized memory allocated for memory allocations that have affinity attachments. Affinitizing such memory allocations may improve performance. However, too much affinitized memory in a vmpool can cause paging and impact overall system performance.
Specifies the percentage difference of affinitized memory allocations in a vmpool relative to the average across all vmpools. Affinitized memory allocations in a vmpool are converted to balanced allocations if the affinitized percentage difference between the vmpool and the average across all vmpools is greater than this limit.
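An illustrative sketch of the two file access paths the placement tunables distinguish: unmapped access through read() (governed by the unmapped-file placement policy above) versus mapped access through mmap() (governed by the mapped-file placement policy described earlier). The file name is hypothetical and error handling is minimal.

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        struct stat st;

        int fd = open("/tmp/example.dat", O_RDONLY);   /* hypothetical file */
        if (fd == -1)
            return 1;

        /* Unmapped access: data is read through the file cache and copied
         * into the caller's buffer. */
        (void)read(fd, buf, sizeof(buf));

        /* Mapped access: the file's pages are mapped into the process
         * address space and referenced directly. */
        if (fstat(fd, &st) == 0 && st.st_size > 0) {
            void *map = mmap(NULL, (size_t)st.st_size, PROT_READ,
                             MAP_SHARED, fd, 0);
            if (map != MAP_FAILED) {
                volatile char first = *(char *)map;
                (void)first;
                munmap(map, (size_t)st.st_size);
            }
        }
        close(fd);
        return 0;
    }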
Specifies the weighting factor for the balanced computational storage allocation algorithm. Larger values increase the aggressiveness of the algorithm to balance computational memory across vmpools.
Limits the percentage of CPU capacity that can be used for automatic background reaffinitization. A value of zero disables background reaffinitization.
Allows the Enhanced Affinity support to be disabled on Power 7 and later processors. The Enhanced Affinity support is enabled by default on Power 7 and later processors. Enhanced Affinity cannot be enabled on Power 6 and earlier processors.
If processes are being delayed waiting for compressed memory to become available, increase ame_minfree_mem to improve response time. Note that this must be at least 257 KB less than ame_maxfree_mem.
Turns on/off Large Segment Aliasing (LSA) effective address allocation policies. A value of 1 indicates LSA policies are on. A value of 0 indicates LSA policies are off. These policies can assist in reducing SLB misses in applications with large shared memory regions.
This tunable is only valid if esid_allocator is set to 1. Sets the threshold number of 256MB segments at which a shared memory region is promoted to use a 1TB alias. Lower values will aggressively promote shared memory regions to use a 1TB alias. This can lower SLB misses but can also increase PTE misses if many shared memory regions that are not used across processes are aliased.
This tunable is only valid if esid_allocator is set to 1. In this environment, shared memory regions that did not qualify for shared alias promotion are grouped together into 1TB regions. If the number of 256MB segments in a group of shared memory regions in a 1TB region of the application's address space exceeds this threshold, they are promoted to use an unshared 1TB alias. In applications where there are many shared memory attaches and detaches, lower values of this threshold can result in increased PTE misses. Applications which only detach shared memory regions at exit may see benefit from lower values of this threshold.
Select the kernel memory locking mode. Kernel locking prevents paging out kernel data. This improves system performance in many cases. If set to 0, kernel locking is disabled. If set to 1, kernel locking is enabled automatically if the Active Memory Expansion (AME) feature is also enabled. In this mode, only a subset of kernel memory is locked. If set to 2, kernel locking is enabled regardless of AME and all kernel data is eligible for locking. If set to 3, only the kernel stacks of processes are locked in memory. Enabling kernel locking has the most positive impact on performance of systems that do paging but not enough to page out kernel data, or on systems that do not do paging activity at all. Note that 1, 2, and 3 are only advisory. If a system runs low on free memory and performs extensive paging activity, kernel locking is rendered ineffective by paging out kernel data. Kernel locking only impacts pageable page sizes in the system.
Turns on the VTIOL off-level iodone handler. Enables iodones to be off-loaded to a kproc in some cases, rather than being processed inline within the interrupt context. This will free up the processor more quickly from the interrupt handler in situations where there is a high I/O load. Mode 1 - run with default scheduling options. Mode 2 - force the VTIOL threads to run in the global run queue.
Controls the total number of VTIOL threads to start. Scales the number of VTIOL threads to start at boot time.
Input to this tunable is in units of hundredths of a percent. Minimally, 1 VTIOL thread will be started if the percentage entered is too small.
Controls the number of iodone queues allocated per VTIOL thread. Scales the number of iodone queues allocated to each VTIOL thread at boot time. A larger number of queues helps improve performance because it will reduce lock contention overhead in handling the iodone request, with the trade-off of increased memory usage.
Threshold of outstanding queued iodone requests that controls the number of active threads. As the average number of outstanding iodone requests remaining to be processed rises by a multiple of this tunable's value, a new VTIOL thread will be changed to the active state.
Minimum number of VTIOL threads that will be kept active. Regardless of the number of outstanding iodone requests observed, this many VTIOL threads will remain in the active state.
Interval in milliseconds that the VTIOL queues will be scanned for outstanding iodone requests. A running average of the number of iodone requests will be computed every vtiol_avg_ms milliseconds. This average affects the number of VTIOL threads kept in the active state.
Sets the policy that determines whether an iodone associated with a page-in is processed inline or queued to a VTIOL thread. Mode 0 - all iodones are processed inline in the interrupt handler. Mode 1 - all iodones are queued to a VTIOL kproc. Mode 2 - an iodone is queued to a VTIOL thread if no other thread is waiting on the I/O to complete, and if lock contention would be increased by processing the iodone in the interrupt context. Mode 3 - an iodone is queued to a VTIOL thread if no other thread is waiting on the I/O to complete. Mode 4 - an iodone is queued to a VTIOL thread if lock contention would be increased by processing the iodone in the interrupt context.
Sets the policy that determines whether an iodone associated with a page-out is processed inline or queued to a VTIOL thread. Mode 0 - all iodones are processed inline in the interrupt handler. Mode 1 - all iodones are queued to a VTIOL kproc. Mode 2 - an iodone is queued to a VTIOL thread if no other thread is waiting on the I/O to complete, and if lock contention would be increased by processing the iodone in the interrupt context. Mode 3 - an iodone is queued to a VTIOL thread if no other thread is waiting on the I/O to complete. Mode 4 - an iodone is queued to a VTIOL thread if lock contention would be increased by processing the iodone in the interrupt context.
This tunable, when set, allows the creation of unshared aliases. When set, this tunable allows unshared aliases to be created as long as esid_allocator is set, dependent on the settings of shm_1tb_unshared. When unset, no unshared aliases can be created.
Enables (1) or disables (0) base-level locking in munmap(). When enabled, this causes munmap() to perform locking on mapped objects with interrupts fully enabled. This can significantly improve overall system throughput at exec() or exit() time, when the same mapped object is shared across many processes.
Specifies the number of pages to unmap in one critical section for mmap mappings. This tunable controls the number of pages that the VMM will unmap at a time. This only applies to mmap and EXTSHM mappings. A smaller number can improve real-time performance of a system, whereas a larger number may improve throughput.
The value of this tunable takes effect only if relalias_lockmode is enabled.
Limits the number of pages examined in one file sync operation disabled critical section call. This tunable can be used to improve system responsiveness when syncing large files. The system decides the number of pages to process in one file sync disabled critical section call when the value is 0. Specifying this tunable too small can decrease system performance by causing repeated critical section calls to sync a file.
Enables (1) or disables (0) reloading mmap translation faults without the source object locked. This may reduce lock contention when multiple processes are sharing, via mmap, the same memory object. The object may be a MAP_ANONYMOUS object that is mapped using MAP_SHARED, or an Extended Shared Memory segment.
Enables (1) or disables (0) the use of multiple semaphore ID lists. This feature is disabled by default. When enabled, it allows the semaphore code to hash the semundo structures based on semaphore ID instead of process ID.
Preallocates PVLIST entries at boot time to the given percentage of the maximum number of entries. This may improve the performance of the system at boot time or in low memory situations on systems configured with many CPUs. The PVLIST may shrink below this setting in response to a dynamic reconfiguration event such as a memory remove.
Forces hard allocations of dynamic PVLIST entries. This will avoid using the DR subsystem when dynamically allocating new PVLIST entries. This may improve the performance of the system at boot time or in low memory situations on systems configured with many CPUs.
Enables (positive value) or disables (0) the use of multiple semaphore locks per semaphore set. This feature is disabled by default. When enabled, it allows the semaphore code to use multiple locks for semaphore operations on a semaphore set. The value must be a power of 2 up to 64.
Specifies the number of consecutive semaphore numbers (semnums) that map to the same semaphore set lock. This variable is effective only when using multiple locks per semaphore set (see num_locks_per_semid). The default value is 1, which means that consecutive semnums use different locks (for instance, semnum 0 uses lock 0, semnum 1 uses lock 1, and so on). A positive value N (power of 2 between 2 and 64) means that semnums 0 through N-1 use lock 0, N through 2N-1 use lock 1, and so on (a sketch of this mapping follows this group of descriptions).
Set to 0 for a memory pool numperm% scope. Set to 1 for a global numperm% scope. Setting the tunable to 1 can reduce working storage paging when memory pools have unbalanced file cache percentages.
Limits the number of pages examined in one thread pager I/O disabled critical section call. Setting this to zero removes/disables the limit. This is specified in integral units of 4K pages.
Limits the number of pages examined in one thread pager I/O page invalidate critical section call. Setting this to zero removes/disables the limit. This is specified in integral units of 4K pages.
Specifies if hardware acceleration of compression and decompression should be used. Set to 0 to not use hardware acceleration. Set to 1 to use hardware acceleration if it is available.
Active Memory Expansion is currently enabled. Active Memory Expansion with multiple page sizes may result in a loss of performance.
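A minimal sketch, not kernel code, that models the semnum-to-lock mapping documented above for the semnum range (N) and num_locks_per_semid. The modulo wrap once semnum/N exceeds the number of locks is an assumption added for completeness; the kernel's actual behavior beyond the documented examples may differ.

    /* Hypothetical helper: which of the per-set locks a given semaphore
     * number would use, per the documented mapping. */
    static int sem_lock_index(int semnum, int semnum_range, int num_locks_per_semid)
    {
        return (semnum / semnum_range) % num_locks_per_semid;
    }

    /* Example with semnum_range = 4 and num_locks_per_semid = 8:
     *   semnums  0..3  -> lock 0
     *   semnums  4..7  -> lock 1
     *   ...
     *   semnums 28..31 -> lock 7, after which the mapping is assumed to wrap. */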
Specifies how to perform LRU page stealing if needed when promoting 16MB MPSS pages. A value of 0 allows LRU to steal 64K MPSS reserved pages and file cache pages. A value of 1 allows LRU to steal 64K MPSS reserved pages but no file cache pages. A value of 2 prevents LRU page stealing for a 16MB MPSS promotion.
Specifies if TLB invalidates are processed in batch mode. Enables (1) or disables (0) batch processing of TLB invalidates.
Adjusts the types of memory counted in the WLM real memory limits. Setting to 0 (the default) counts only working storage segment pages; setting to 1 counts working storage plus all file segment pages; setting to 2 counts working storage plus client file segment pages.
Kernel locking prevents paging out kernel data. This improves system performance in many cases. If set to -1, the system chooses the best mode among 0, 1, 2, and 3 based on the current configuration (e.g. AME vs. no AME). If set to 0, kernel locking is disabled. If set to 1, kernel locking is enabled automatically if the Active Memory Expansion (AME) feature is also enabled. In this mode, only a subset of kernel memory is locked. If set to 2, kernel locking is enabled regardless of AME and all kernel data is eligible for locking. If set to 3, only the kernel stacks of processes are locked in memory. Enabling kernel locking has the most positive impact on performance of systems that do paging but not enough to page out kernel data, or on systems that do not do paging activity at all. Note that 1, 2, and 3 are only advisory. If a system runs low on free memory and performs extensive paging activity, kernel locking is rendered ineffective by paging out kernel data. Kernel locking only impacts pageable page sizes in the system.
Specifies the number of base-level locks to use if relalias_lockmode is enabled. Specifying more locks reduces lock contention at the expense of additional memory usage. The default value of 0 will cause AIX to allocate 8 locks per CPU.
Enables (1 or 2) or disables (0) base-level locking in munmap(). When set to 1 or 2, this causes munmap() to perform locking on mapped objects with interrupts fully enabled. This can significantly improve overall system throughput at exec() or exit() time, when the same mapped object is shared across many processes. When set to 1, the source object's disabled lock is then used to serialize with critical sections operating on it as well. When set to 2, source page granular locking is used to serialize with critical sections operating on the same source object, thus reducing contention between munmap() and those critical sections.
Enables (1 or 2) or disables (0) MMAP translation bitmaps. MMAP translation bitmaps can reduce MMAP page translation creation and deletion times. When set to 0, MMAP translation bitmaps are not used. When set to 1, mappings of 256KB and smaller use translation bitmaps. When set to 2, all mappings use translation bitmaps. Changes to this tunable only affect new mappings created after the tunable change.
This tunable has no effect on support for 4K and large (16M) page sizes and on machines that do not support multiple page sizes. A value of 0 specifies 4K and explicitly requested large (16M) page sizes are supported. A value of 1 specifies 4K, 64K, and explicitly requested 16M and 16G page sizes are supported, but no dynamic variable page size support. A value of 2 is similar to 1 with dynamic variable page size support up to 64K. A value of 3 is similar to 2 with dynamic variable page size support up to 16M.
A value of -1 indicates the operating system will choose an optimal page size policy based on hardware and software configuration.
Limits the number of simultaneous fork, exec, and exit operations. Setting a positive number limits the number of simultaneous fork, exec, and exit operations to the specified number. Set to 0 to allow unlimited simultaneous fork, exec, and exit operations.
Specifies the number of entries in the memory semaphore (msem) hash table. Specifying more entries reduces lock contention at the expense of additional memory usage. A value of 0 will cause AIX to use a legacy size hash table and hashing algorithm.
This tunable no longer functions. The DPSA policy may not be disabled. The AIX kernel will ignore any settings of this tunable.
This may reduce lock contention when multiple processes are sharing, via mmap, the same memory object. The object may be a MAP_ANONYMOUS object or real-time Shared Memory that is mapped using MAP_SHARED, or an Extended Shared Memory segment. The default value of -1 allows the operating system to decide to enable or disable this.
Specifies whether the segment lookaside buffer management code will use class hints for SLB and ERAT translation invalidate operations. A value of 1 (the default) will use the class hint performance feature, while a value of 0 will disable that feature when the system detects a state where that is advisable.
If supported, this enables or disables low-level use of H_BLOCK_REMOVE by the virtual memory manager. H_BLOCK_REMOVE reduces the time it takes to remove the translations for a contiguous block of virtual pages. A value of 1 enables this feature for deleting dynamic variable page size translations. A value of 2 enables this feature for deleting fixed page size translations. A value of 3 enables it for both. A value of 0 disables it for both.
If supported, this enables or disables general use of H_BLOCK_REMOVE to remove translations when deleting page frames or unmapping MMAP areas. H_BLOCK_REMOVE reduces the time it takes to remove the translations for a contiguous block of virtual pages. A value of 1 enables this feature when deleting page frames. A value of 2 enables it when unmapping MMAP areas. A value of 3 enables it for both. A value of 0 disables them both.
This introduces an upper limit on the maximum amount of uninterrupted CPU utilization by the memory affinity optimizer. If the optimizer does not relinquish the CPU voluntarily within this number of seconds, then it will be forced to sleep for a short quantum. Setting this value to 0 disables this limit and allows the optimizer to attempt to complete its work as quickly as possible. A value greater than 0 specifies this limit in seconds. This number should be low enough to avoid latency issues for threads with near real-time requirements, but a lower number means the optimizer may move more pages than are necessary.
Must be set to 1; may not be changed. When set to 1, page steal lists are used and, if the Workload Manager is enabled, pages are scanned from lists by class.
This sets the zeroing mode for large pages in shared memory regions. A value of 1 sets serial mode. A value of 2 sets synchronous parallel mode. A value of 3 sets asynchronous parallel mode. In serial mode, the pages are zeroed by the thread deleting the shared memory region, one page at a time. In synchronous parallel mode, the pages are zeroed in parallel by multiple worker threads. The thread deleting the shared memory region waits for the worker threads to complete.
In asynchronous parallel mode, the pages are zeroed in parallel by multiple worker threads. The zeroing operation occurs in the background while the thread deleting the shared memory region continues on.
This sets the number of worker threads per online SRAD for zeroing large pages in shared memory regions. This number is valid only when the page zeroing mode, as set by the pgz_mode tunable, is synchronous parallel or asynchronous parallel mode. A larger number of worker threads potentially increases the number of large pages that can be zeroed simultaneously, but also increases CPU utilization. The actual value used may differ if the requested number is greater than the number of physical CPUs in the SRAD.
This sets the zeroing mode for large pages made dynamically by the vmo command. A value of 1 sets serial mode. A value of 2 sets synchronous parallel mode. A value of 3 sets asynchronous parallel mode. In serial mode, the pages are zeroed by the thread running the vmo command, one page at a time. In synchronous parallel mode, the pages are zeroed in parallel by multiple worker threads. The thread running the vmo command waits for the worker threads to complete. In asynchronous parallel mode, the pages are zeroed in parallel by multiple worker threads. The zeroing operation occurs in the background while the thread running the vmo command continues on.
The maxclient value will be a limit on how much of RAM can be used as a client file cache. This must be set to 1, which indicates on. This tunable is now deprecated and may not be changed.
This tunable no longer functions. This must be set to 0. This tunable is now deprecated and may not be changed.
This sets the memory stripe size used by the VMM. A value of -1 lets AIX choose the best stripe size based on the system configuration. A value of 16M sets the stripe size to 16MB. A value of 32M sets the stripe size to 32MB. A value of 64M sets the stripe size to 64MB. A value of 128M sets the stripe size to 128MB.
This tunable will enable all supported page sizes in an AME environment for Power8 or later machines with 64K accelerator support. A value of 0 for this tunable will give legacy behavior in an AME environment where only 4K and 16M page sizes are enabled. A value of 1 will enable all supported page sizes in an AME environment. This tunable can only be changed on Power8 or later machines with 64K accelerator support. Changing this tunable on prior systems will result in an error message.
Turns on the VTIOLR iodone re-prioritization handler. Enables iodones to be re-prioritized by the VTIOLR threads. Mode 0 - Off. Mode 1 - run with default scheduling options. Mode 2 - force the VTIOLR threads to run in the global run queue.
Controls the ratio of VTIOL to VTIOLR threads.
Time elapsed between scans of the VTIOL queues. This tunable is in units of microseconds.
VTIOLR bufstruct re-prioritization threshold. Controls the threshold at which a bufstruct is eligible for re-prioritization. Any bufstructs that have been queued for longer than this threshold will be eligible. This tunable is in units of microseconds.
Controls the read re-prioritization mode. Mode 0 - Read re-prioritization is off. Mode 1 - Read re-prioritization is on and will re-prioritize any reads that meet the criteria defined by the VMO tunables. Mode 2 - Read re-prioritization is on and will re-prioritize any reads that meet the criteria defined by the VMO tunables if doing so will not result in added lock contention.
Controls the write re-prioritization mode. Mode 0 - Write re-prioritization is off. Mode 1 - Write re-prioritization is on and will re-prioritize any writes that meet the criteria defined by the VMO tunables. Mode 2 - Write re-prioritization is on and will re-prioritize any writes that meet the criteria defined by the VMO tunables if doing so will not result in added lock contention.
Controls the maximum size read that can be re-prioritized. Reads that are less than or equal to this size are eligible for re-prioritization. Setting this to 0 turns off the size check. This tunable is in units of bytes.
Controls the maximum size write that can be re-prioritized. Writes that are less than or equal to this size are eligible for re-prioritization. Setting this to 0 turns off the size check. This tunable is in units of bytes.
On systems that support both software- and hardware-managed segment lookaside buffer modes (SSLB vs HSLB) in the virtual memory manager (VMM), this allows explicitly selecting between the two modes. The POWER processor includes a cache of effective to virtual segment ID translations that is populated either by software (the kernel VMM) or by hardware (processors that support HSLB via in-memory segment tables). The default value of -1 allows the VMM to choose which mode to use based on the processor and any other system factors that may impact the choice. A value of 0 will force the VMM to use SSLB mode. A value of 1 will force the VMM to use HSLB mode if supported by the processor.
This sets the boot-time zeroing mode for large pages. A value of 1 sets serial mode. A value of 2 sets synchronous parallel mode. A value of 3 sets asynchronous parallel mode. In serial mode, the pages are zeroed one page at a time. In synchronous parallel mode, the pages are zeroed in parallel by multiple worker threads. The zeroing daemon waits for the current zeroing workload to be done before queuing more pages. In asynchronous parallel mode, the pages are zeroed in parallel by multiple worker threads. The zeroing daemon continues queuing more pages to the worker threads while the current workload is being zeroed.
A value of -1 lets AIX choose the best stripe size based on the system configuration. A value of 16M sets the stripe size to 16MB. A value of 32M sets the stripe size to 32MB. A value of 64M sets the stripe size to 64MB. A value of 128M sets the stripe size to 128MB. A value of 256M sets the stripe size to 256MB.
Specifies whether trusted shared library areas are used. 0 means that trusted shared library areas are not used. 1 means that trusted shared library areas are used for trusted programs, that is, programs that are setuid-root, setgid-system, or setgid-security. You can also specify 32 or 64 to enable trusted shared library areas for 32-bit trusted programs only or 64-bit trusted programs only.
Specifies whether ASLR (address space layout randomization) is used. 0 means that ASLR is disabled or is controlled by a restricted tunable. 1 means that randomization is used for shared library areas. 2 means that randomization is used for shared library areas and allowed for marked programs.
Controls the way ASLR is used. Specifies ASLR properties.
Specifies ASLR properties for 32-bit programs. 0 means that ASLR is disabled for 32-bit programs or is controlled by a restricted tunable. 1 means that ASLR is allowed for marked 32-bit programs. You can also specify one or more of the characters t, d, and s to allow randomization for the main program text, main program data, or stack of marked 32-bit programs.
Specifies ASLR properties for 64-bit programs. 0 means that ASLR is disabled for 64-bit programs or is controlled by a restricted tunable. 1 means that ASLR is allowed for marked 64-bit programs. You can also specify one or more of the characters t, d, s, m, and p to allow randomization for the main program text, main program data, stack, shmat() or mmap() addresses, or privately-loaded libraries of marked 64-bit programs.
Specifies additional ASLR properties for 32-bit programs.
Specifies additional ASLR properties for 64-bit programs.
If set, this tunable will assert the LPAR when paging space is completely exhausted. A value of 0 for this tunable will give legacy behavior where processes will be killed when free paging space reaches the npskill threshold. The vmo tunable low_ps_handling will influence which process is selected to be killed. The SIGDANGER signal will be sent to processes when the free paging space reaches danger levels. A value of 1 for this tunable will assert the LPAR when paging space is completely exhausted. Processes will not be killed when free paging space reaches npskill levels. The SIGDANGER signal mechanism is suppressed and processes are not notified when free paging space reaches danger levels.
Specifies the number of PTA segments to instantiate during boot. A value of 0 lets AIX choose the best number based on the system configuration. Any other valid value is the number of segments AIX will try to instantiate. This is an advisory tunable; AIX may instantiate more segments if needed. This tunable is only valid if ptalockcntrl is 1.
Specifies the number of PTA segments to have in a PTA bucket. This is the number of PTA segments to a single PTA allocation lock. A lower number means more locks available to the system. This tunable is only valid if ptalockcntrl is 1.
PTA lock scaling. Enables (1) or disables (0) PTA lock scaling. This is the number of PTA segments to a single PTA allocation lock. A lower number means more locks available to the system. A value of 0 lets AIX choose the best value. This tunable is only valid if ptalockcntrl is 1.