Specifies the maximum number of pages to be read ahead when processing a sequentially accessed file on Enhanced JFS. Default: 8. The difference between minfree and maxfree should always be equal to or greater than j2_maxPageReadAhead. If run time decreases with a higher j2_maxPageReadAhead value, observe other applications to ensure that their performance has not deteriorated.

Specifies a threshold for random writes to accumulate in RAM before subsequent pages are flushed to disk by the Enhanced JFS's write-behind algorithm. The random write-behind threshold is on a per-file basis. Default: 0. Useful if too many pages are flushed out by syncd.

Specifies the minimum number of pages to be read ahead when processing a sequentially accessed file on Enhanced JFS. Default: 2. Useful to increase if there are lots of large sequential accesses. Observe other applications to ensure that their performance has not deteriorated. A value of 0 may be useful if the I/O pattern is purely random.

Specifies the number of file system bufstructs for Enhanced JFS. Default: 512. File system must be remounted. Change when, using vmstat -v, the value of xpagerbufwaitcnt increases quickly. If the kernel must wait for a free bufstruct, it puts the process on a wait list before the start I/O is issued and will wake it up once a bufstruct has become available. May be appropriate to increase if striped logical volumes or disk arrays are being used.

Specifies the number of pages per cluster processed by Enhanced JFS's write-behind algorithm. Default: 8. Useful to increase if there is a need to keep more pages in RAM before scheduling them for I/O when the I/O pattern is sequential. May be appropriate to increase if striped logical volumes or disk arrays are being used.

Specifies the distance apart (in clusters) that writes have to exceed in order for them to be considered as random by the Enhanced JFS's random write-behind algorithm. Default: 0. Useful to increase if there is a need to keep more pages in RAM before scheduling them for I/O when the I/O pattern is random and random write-behind is enabled (j2_maxRandomWrite).

Number of (4 KB) pages in the block-I/O buffer cache. Default: 20; Range: 20 to 1000. If the sar -b command shows breads or bwrites with %rcache and %wcache being low, you might want to tune this parameter. This parameter normally has little performance effect on systems where ordinary I/O does not use the block-I/O buffer cache.

Specifies the maximum number of pages to be read ahead when processing a sequentially accessed file. Default: 8 (the value should be a power of two and should be greater than or equal to minpgahead); Range: 0 to 4096. Observe the elapsed execution time of critical sequential-I/O-dependent applications with the time command. Because of limitations in the kernel, do not exceed 512 as the maximum value used. The difference between minfree and maxfree should always be equal to or greater than maxpgahead. If execution time decreases with a higher maxpgahead, observe other applications to ensure that their performance has not deteriorated.
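As an illustration only (not part of the original text), the following shell fragment shows how these read-ahead settings might be examined and adjusted with the AIX ioo and vmo commands; the values are examples, and the minfree/maxfree figures are assumptions chosen so that maxfree minus minfree stays at or above the read-ahead maximum, as recommended above.

    # Display the current JFS2 and JFS read-ahead maximums and the free-list bounds.
    ioo -o j2_maxPageReadAhead -o maxpgahead
    vmo -o minfree -o maxfree

    # Example: raise JFS2 read-ahead for a large sequential-read workload.
    # Assumes minfree=960; maxfree is raised so that maxfree - minfree >= 128.
    vmo -o maxfree=1088
    ioo -o j2_maxPageReadAhead=128

After such a change, re-run the sequentially bound application under the time command and confirm that other workloads have not slowed down.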
Specifies the maximum number of pending I/Os to a file. Default: 0 (no checking); Range: 0 to n (n should be a multiple of 4, plus 1). If foreground response time sometimes deteriorates when programs with large amounts of sequential disk output are running, sequential output may need to be paced. Set maxpout to 33 and minpout to 16. If sequential performance deteriorates unacceptably, increase one or both. If foreground performance is still unacceptable, decrease both.

Specifies a threshold (in 4 KB pages) for random writes to accumulate in RAM before subsequent pages are flushed to disk by the write-behind algorithm. The random write-behind threshold is on a per-file basis. Default: 0; Range: 0 to largest_file_size_in_pages. Change if vmstat n shows page-out and I/O wait peaks at regular intervals (usually when the sync daemon is writing pages to disk). Useful to set this value to 1 or higher if too much I/O occurs when syncd runs. The default is to have random writes stay in RAM until a sync operation. Setting maxrandwrt ensures these writes get flushed to disk before the sync operation has to occur. However, this could degrade performance because the file is then being flushed each time. Tune this option to favor interactive response time over throughput. After the threshold is reached, all subsequent pages are then immediately flushed to disk. The pages up to the threshold value stay in RAM until a sync operation. A value of 0 disables random write-behind.

Specifies the number of pages with which sequential read-ahead starts. Default: 2; Range: 0 to 4096 (should be a power of two). Observe the elapsed execution time of critical sequential-I/O-dependent applications with the time command. Useful to increase if there are lots of large sequential accesses. Observe other applications to ensure that their performance has not deteriorated. A value of 0 may be useful if the I/O pattern is purely random.

Specifies the point at which programs that have reached maxpout can resume writing to the file. Default: 0 (no checking); Range: 0 to n (n should be a multiple of 4 and should be at least 4 less than maxpout). If foreground response time sometimes deteriorates when programs with large amounts of sequential disk output are running, sequential output may need to be paced. Set maxpout to 33 and minpout to 16. If sequential performance deteriorates unacceptably, increase one or both. If foreground performance is still unacceptable, decrease both.

Specifies the number of 16 KB clusters processed by the sequential write-behind algorithm of the VMM. Default: 1; Range: 0 to any positive integer. Useful to increase if there is a need to keep more pages in RAM before scheduling them for I/O when the I/O pattern is sequential. May be appropriate to increase if striped logical volumes or disk arrays are being used.

Specifies the number of file system bufstructs. Default: 93 (value is dependent on the size of the bufstruct). File system must be remounted. If the VMM must wait for a free bufstruct, it puts the process on the VMM wait list before the start I/O is issued and will wake it up once a bufstruct has become available. May be appropriate to increase if striped logical volumes or disk arrays are being used.
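As a hedged sketch (not from the original text), a bufstruct shortage of the kind described above might be checked and addressed as follows; the vmstat -v counter name, the numfsbufs value, and the /data mount point are illustrative assumptions.

    # Look for I/Os that blocked because no filesystem bufstruct was free.
    vmstat -v | grep "filesystem I/Os blocked with no fsbuf"

    # If the count keeps climbing, raise the bufstruct count (example value)
    # and remount the filesystem so the new value takes effect.
    ioo -o numfsbufs=372
    umount /data && mount /data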
Specifies the number of pages that should be deleted in one chunk from RAM when a file is deleted. Default: largest file size in pages; Range: 1 to largest_file_size_in_pages. Useful for real-time applications that experience sluggish response time while files are being deleted; tuning this option is only useful for real-time applications. If real-time response is critical, adjusting this option may improve response time by spreading the removal of file pages from RAM more evenly over a workload.

If set, this option causes a sync() to flush all I/O to a file without holding the i-node lock, and then use the i-node lock to do the commit. Default: 0 (off); Range: 0 or 1. I/O to a file is blocked when the syncd daemon is running. The default value of 0 means that the i-node lock is held while all dirty pages of a file are flushed.

Specifies the total number of pbufs that the LVM uses. Default: 64 or more depending on the number of disks that have open logical volumes (default value is 64 + 16 for each additional physical disk with an open logical volume). The new value cannot be reduced because memory is pinned. Change when the value of hd_pendqblked in /dev/kmem is non-zero, which indicates that LVM had to wait for pbufs. Useful to increase if there is a substantial number of simultaneous I/Os and the value of hd_pendqblked is non-zero. Use ioo -a to see hd_pendqblked.

Specifies the number of LVM buffers for raw physical I/Os. Default: 9; Range: 1 to 64. Useful when applications performing large writes to striped raw logical volumes are not obtaining the desired throughput rate. LVM splits large raw I/Os into multiple buffers of 128 KB apiece. The default value of 9 means that I/Os of about 1 MB can be processed without waiting for more buffers. If a system is configured to have striped raw logical volumes and is doing writes greater than 1.125 MB, increasing this value may help the throughput of the application. If performing raw I/Os larger than 1 MB, it might be useful to increase this value.

Controls whether JFS uses clustered reads on all files. In general this option is not needed, but it may benefit certain workloads that have relatively random read access patterns.

Controls whether JFS uses a shared lock when reading from a file. If this option is turned off, two processes cannot disrupt each other's read. Certain workloads may benefit from this.

Specifies the total number of pbufs that the LVM uses. Default: 64 or more depending on the number of disks that have open logical volumes (default value is 64 + 16 for each additional physical disk with an open logical volume). The new value cannot be reduced because memory is pinned. Change when the value of hd_pendqblked in /dev/kmem is non-zero, which indicates that LVM had to wait for pbufs. Useful to increase if there is a substantial number of simultaneous I/Os and the value of hd_pendqblked is non-zero. Use vmstat -v to see hd_pendqblked.

Controls the amount of memory Enhanced JFS will use for the inode cache. It does not explicitly indicate the amount that will be used, but is instead a scaling factor; it is used in combination with the size of the main memory to determine the maximum memory usage for the inode cache. The valid values for this tunable are between one and one thousand, inclusive.

Controls the amount of memory Enhanced JFS will use for the metadata cache. It does not explicitly indicate the amount that will be used, but is instead a scaling factor; it is used in combination with the size of the main memory to determine the maximum memory usage for the metadata cache. The valid values for this tunable are between one and one thousand, inclusive.
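For illustration only (not taken from the original text), the inode and metadata cache scaling factors just described could be inspected and reduced on a memory-constrained system as follows; the value 200 is an assumed example within the documented 1-1000 range.

    # Show the current scaling factors for the JFS2 inode and metadata caches.
    ioo -o j2_inodeCacheSize -o j2_metadataCacheSize

    # Scale both caches back (example value) to leave more memory for applications.
    ioo -o j2_inodeCacheSize=200 -o j2_metadataCacheSize=200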
Specifies the minimum number of pbufs per PV that the LVM uses. This is a global value that applies to all VGs on the system. The lvmo command can also be used to set a different value for a particular VG. In this case, the higher of the two values is used for this particular VG. Default: 256 on 32-bit kernels, 512 on 64-bit kernels. Change when the value of "pending disk I/Os blocked with no pbuf" (as displayed by vmstat -v) is non-zero, which indicates that LVM had to wait for pbufs. Useful to increase if there is a substantial number of simultaneous I/Os and the value of hd_pendqblked is non-zero.

Specifies the number of file system bufstructs for Enhanced JFS. Default: 512. File system must be remounted. This tunable only specifies the number of bufstructs that start on the paging device. JFS2 will allocate more dynamically. Ideally, this value should not be tuned, and instead j2_dynamicBufferPreallocation should be tuned. However, it may be appropriate to change this value if, when using vmstat -v, the value of xpagerbufwaitcnt increases quickly (and continues increasing) and j2_dynamicBufferPreallocation tuning has already been attempted. If the kernel must wait for a free bufstruct, it puts the process on a wait list before the start I/O is issued and will wake it up once a bufstruct has become available. May be appropriate to increase if striped logical volumes or disk arrays are being used.

Specifies the number of 16 KB chunks to preallocate when the filesystem is running low on bufstructs. Default: 16 (256 KB worth). Filesystem must be remounted. The bufstructs for JFS2 are now dynamic; the number of buffers that start on the paging device is controlled by j2_nBufferPerPagerDevice, but buffers are allocated and destroyed dynamically past this initial value. If the value of xpagerbufwaitcnt (from vmstat) increases, j2_dynamicBufferPreallocation should be increased for that filesystem, as the I/O load on the filesystem may be exceeding the speed of preallocation. A value of 0 will disable dynamic buffer allocation completely.

Specifies the maximum LTG size, in pages, that JFS2 will gather into a single bufstruct. Defaults to 512, or a 2 megabyte LTG in a single bufstruct. Filesystem must be remounted. This tunable has no effect on the 32-bit kernel due to heap constraints. On the 64-bit kernel, this value is the maximum size of the gather list of pages that can be collected into a single bufstruct. The actual size of the gather list depends on the LTG size of the filesystem; this tunable only specifies a maximum size that JFS2 will use to construct the bufstructs. If kernel heap exhaustion occurs due to the size of JFS2 bufstructs, it may help to lower this tunable.

Turns on or off the flag that crashes the system when Enhanced JFS corruption occurs. Default: 0 (off); Range: 0 or 1. The default value of 0 means that the system does not crash when Enhanced JFS corruption occurs.

Specifies the number of pages that should be deleted in one chunk from RAM when a file is deleted. Default: 65536; Range: 1 to largest_file_size_in_pages. Useful for real-time applications that experience sluggish response time while files are being deleted; tuning this option is only useful for real-time applications. If real-time response is critical, adjusting this option may improve response time by spreading the removal of file pages from RAM more evenly over a workload.
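Purely as a sketch (not from the original text), a real-time system that stalls during large-file deletion might limit the per-chunk page removal described above; the value shown is an assumed example.

    # Free file pages from RAM in smaller chunks so large-file deletions
    # interfere less with real-time response (example value).
    ioo -o pd_npages=4096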
Specifies the number of file system bufstructs for Enhanced JFS. Default: 512. File system must be remounted. This tunable only specifies the number of bufstructs that start on the paging device. Enhanced JFS will allocate more dynamically. Ideally, this value should not be tuned, and instead j2_dynamicBufferPreallocation should be tuned. However, it may be appropriate to change this value if, when using vmstat -v, the number of "external pager filesystem I/Os blocked with no fsbuf" increases quickly (and continues increasing) and j2_dynamicBufferPreallocation tuning has already been attempted. If the kernel must wait for a free bufstruct, it puts the process on a wait list before the start I/O is issued and will wake it up once a bufstruct has become available. May be appropriate to increase if striped logical volumes or disk arrays are being used.

Specifies the number of pages per cluster processed by Enhanced JFS's write-behind algorithm. Default: 32. Useful to increase if there is a need to keep more pages in RAM before scheduling them for I/O when the I/O pattern is sequential. May be appropriate to increase if striped logical volumes or disk arrays are being used.

Specifies the number of 16 KB chunks to preallocate when the filesystem is running low on bufstructs. Default: 16 (256 KB worth). Filesystem must be remounted. The bufstructs for Enhanced JFS are now dynamic; the number of buffers that start on the paging device is controlled by j2_nBufferPerPagerDevice, but buffers are allocated and destroyed dynamically past this initial value. If the number of "external pager filesystem I/Os blocked with no fsbuf" (from vmstat -v) increases, j2_dynamicBufferPreallocation should be increased for that filesystem, as the I/O load on the filesystem may be exceeding the speed of preallocation. A value of 0 will disable dynamic buffer allocation completely.

Specifies the maximum LTG (Logical Track Group) size, in pages, that Enhanced JFS will gather into a single bufstruct. Defaults to 512, or a 2 megabyte LTG in a single bufstruct. Filesystem must be remounted. This tunable does not apply to the 32-bit kernel due to heap constraints. On the 64-bit kernel, this value is the maximum size of the gather list of pages that can be collected into a single bufstruct. The actual size of the gather list depends on the LTG size of the filesystem, which can be obtained with the lsvg command. This tunable only specifies a maximum size that Enhanced JFS will use to construct the bufstructs. If kernel heap exhaustion occurs due to the size of Enhanced JFS bufstructs, it may help to lower this tunable. This tunable should always be raised incrementally since jumping to a large value could exhaust the kernel heap and make the system crash.

Specifies the minimum number of pbufs per PV that the LVM uses. This is a global value that applies to all VGs on the system. The lvmo command can also be used to set a different value for a particular VG. In this case, the higher of the two values is used for this particular VG. Default: 256 on 32-bit kernels, 512 on 64-bit kernels. Useful to increase if there is a substantial number of simultaneous I/Os and if the number of "pending disk I/Os blocked with no pbuf" in vmstat -v is non-zero, which indicates that LVM had to wait for pbufs. An example is sketched after the next description.

Specifies the number of file system bufstructs. Default: 186 on 32-bit kernels, 196 on 64-bit kernels. File system must be remounted. If the VMM must wait for a free bufstruct, it puts the process on the VMM wait list before the start I/O is issued and will wake it up once a bufstruct has become available. May be appropriate to increase if striped logical volumes or disk arrays are being used.
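As an illustrative sketch (not part of the original text), pbuf starvation and the global versus per-VG settings described above might be handled like this; the volume group name datavg and the pbuf counts are assumptions.

    # Check whether LVM has had to wait for pbufs.
    vmstat -v | grep "pending disk I/Os blocked with no pbuf"

    # Raise the global per-PV minimum (example value), ...
    ioo -o pv_min_pbuf=1024

    # ... or raise pbufs for one busy volume group only; the higher of the
    # global and per-VG values wins for that VG.
    lvmo -v datavg -o pv_pbuf_count=2048
    lvmo -a -v datavg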
If j2_syncModifiedMapped is set to 1, files which are modified through a mapping (shmat, mmap) are synced via the sync command or the sync daemon. If set to 0, these files are skipped by the sync daemon and the sync command and must be synced using fsync. Default: 1; Range: 0 or 1.

When the number of free pages in a mempool drops below this threshold, pageahead will be linearly scaled back, avoiding pre-paging memory that would then need to be forced back out when the LRU daemon runs. Default: 0 (no pageahead scaling); Range: 0 to 204800. Useful to increase if the system is unable to meet the memory demands under a heavy read workload.

Specifies the number of 16 KB slabs to preallocate when the filesystem is running low on bufstructs. Default: 16 (256 KB worth). Filesystem does not need remounting. The bufstructs for Enhanced JFS are now dynamic; the number of buffers that start on the paging device is controlled by j2_nBufferPerPagerDevice, but buffers are allocated and destroyed dynamically past this initial value. If the number of "external pager filesystem I/Os blocked with no fsbuf" (from vmstat -v) increases, j2_dynamicBufferPreallocation should be increased for that filesystem, as the I/O load on the filesystem may be exceeding the speed of preallocation. A value of 0 will disable dynamic buffer allocation completely.

When the number of free pages in a mempool drops below this threshold, pageahead will be linearly scaled back, avoiding pre-paging memory that would then need to be forced back out when the LRU daemon runs. Default: 0 (no pageahead scaling); Range: 0 to 4/5 of memory. Useful to increase if the system is unable to meet the memory demands under a heavy read workload.

If j2_atimeUpdateSymlink is set to 1, then the access time of JFS2 symbolic links is updated on readlink. The default is 0, meaning that the access time of JFS2 symbolic links is not updated on readlink. There is a performance penalty associated with turning j2_atimeUpdateSymlink on, so this tunable should not be changed unless there is a real need for it. SUSv3 does not require that access time be updated on readlink; however, JFS and many non-AIX platforms do update the access time on readlink. This tunable is provided for compatibility with JFS and other UNIX-conformant systems.

Specifies the number of iterations of syncd that run before syncd attempts to write in-memory metadata pages to disk to move the LogSyncPoint ahead. Default: 1; Range: 0 to 4096. A value of 0 specifies that only user data is forced to disk by syncd. Periodically writing filesystem metadata to disk leads to faster recovery time in the case of system reboot and preserves more filesystem data in the event that a full fsck is required when the log is not available. Typically, syncd is configured to run every 60 seconds, and a more aggressive value such as 1 or 2 improves recovery time, but at a cost absorbed at run time on a potentially busy system. A less aggressive value such as 60 would mean that a metadata flush is attempted once an hour.

If j2_atimeUpdateSymlink is set to 1, then the access time of JFS2 symbolic links is updated on readlink. A value of 0 indicates that the access time of JFS2 symbolic links is not updated on readlink. There is a performance penalty associated with turning j2_atimeUpdateSymlink on, so this tunable should not be changed unless there is a real need for it. SUSv3 does not require that access time be updated on readlink; however, JFS and many other platforms do update the access time on readlink. This tunable is provided for compatibility with JFS and other UNIX-conformant systems.
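As a hedged illustration (not in the original text), enabling the symbolic-link access-time behavior described above is a single tunable change, and it can be reverted to the default afterwards.

    # Enable atime updates on readlink for JFS2 symbolic links (only if an
    # application actually depends on it; the text above notes a performance cost).
    ioo -o j2_atimeUpdateSymlink=1

    # Return the tunable to its default value.
    ioo -d j2_atimeUpdateSymlink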
Specifies the number of 16 KB slabs to preallocate when the filesystem is running low on bufstructs. A value of 16 represents 256 KB. Filesystem does not need remounting. The bufstructs for Enhanced JFS are now dynamic; the number of buffers that start on the paging device is controlled by j2_nBufferPerPagerDevice, but buffers are allocated and destroyed dynamically past this initial value. If the number of "external pager filesystem I/Os blocked with no fsbuf" (from vmstat -v) increases, j2_dynamicBufferPreallocation should be increased for that filesystem, as the I/O load on the filesystem may be exceeding the speed of preallocation. A value of 0 will disable dynamic buffer allocation completely.

Controls the amount of memory Enhanced JFS will use for the inode cache. The value does not explicitly indicate the amount that will be used, but is instead a scaling factor; it is used in combination with the size of the main memory to determine the maximum memory usage for the inode cache.

Specifies the maximum number of pages to be read ahead when processing a sequentially accessed file on Enhanced JFS. The difference between minfree and maxfree should always be equal to or greater than j2_maxPageReadAhead. If run time decreases with a higher j2_maxPageReadAhead value, observe other applications to ensure that their performance has not deteriorated.

Specifies a threshold for random writes to accumulate in RAM before subsequent pages are flushed to disk by the Enhanced JFS's write-behind algorithm. The random write-behind threshold is on a per-file basis. Useful if too many pages are flushed out by syncd.

Specifies the maximum LTG (Logical Track Group) size, in pages, that Enhanced JFS will gather into a single bufstruct. A value of 512 represents a 2 megabyte LTG in a single bufstruct. Filesystem must be remounted. This tunable value is the maximum size of the gather list of pages that can be collected into a single bufstruct. The actual size of the gather list depends on the LTG size of the filesystem, which can be obtained with the lsvg command. This tunable only specifies a maximum size that Enhanced JFS will use to construct the bufstructs. If kernel heap exhaustion occurs due to the size of Enhanced JFS bufstructs, it may help to lower this tunable. This tunable should always be raised incrementally since jumping to a large value could exhaust the kernel heap and make the system crash.

Controls the amount of memory Enhanced JFS will use for the metadata cache. The value does not explicitly indicate the amount that will be used, but is instead a scaling factor; it is used in combination with the size of the main memory to determine the maximum memory usage for the metadata cache.

Specifies the minimum number of pages to be read ahead when processing a sequentially accessed file on Enhanced JFS. Useful to increase if there are lots of large sequential accesses. Observe other applications to ensure that their performance has not deteriorated. A value of 0 may be useful if the I/O pattern is purely random.

Specifies the number of file system bufstructs for Enhanced JFS. File system must be remounted. This tunable only specifies the number of bufstructs that start on the paging device. Enhanced JFS will allocate more dynamically. Ideally, this value should not be tuned, and instead j2_dynamicBufferPreallocation should be tuned.
However, it may be appropriate to change this value if, when using vmstat -v, the number of "external pager filesystem I/Os blocked with no fsbuf" increases quickly (and continues increasing) and j2_dynamicBufferPreallocation tuning has already been attempted. If the kernel must wait for a free bufstruct, it puts the process on a wait list before the start I/O is issued and will wake it up once a bufstruct has become available. May be appropriate to increase if striped logical volumes or disk arrays are being used.

Specifies the number of pages per cluster processed by Enhanced JFS's write-behind algorithm. Useful to increase if there is a need to keep more pages in RAM before scheduling them for I/O when the I/O pattern is sequential. May be appropriate to increase if striped logical volumes or disk arrays are being used.

Specifies the distance apart (in clusters) that writes have to exceed in order for them to be considered as random by the Enhanced JFS's random write-behind algorithm. Useful to increase if there is a need to keep more pages in RAM before scheduling them for I/O when the I/O pattern is random and random write-behind is enabled (j2_maxRandomWrite).

Turns on or off the flag that crashes the system when Enhanced JFS corruption occurs. A value of 0 indicates off, meaning the system does not crash when Enhanced JFS corruption occurs.

Specifies the number of iterations of syncd that run before syncd attempts to write in-memory metadata pages to disk to move the LogSyncPoint ahead. A value of 0 specifies that only user data is forced to disk by syncd. Periodically writing filesystem metadata to disk leads to faster recovery time in the case of system reboot and preserves more filesystem data in the event that a full fsck is required when the log is not available. Typically, syncd is configured to run every 60 seconds, and a more aggressive value such as 1 or 2 improves recovery time, but at a cost absorbed at run time on a potentially busy system. A less aggressive value such as 60 would mean that a metadata flush is attempted once an hour.

Specifies whether files which are modified through a mapping (shmat, mmap) are synced via the sync command or the sync daemon. If j2_syncModifiedMapped is set to 1, files which are modified through a mapping (shmat, mmap) are synced via the sync command or the sync daemon. If set to 0, these files are skipped by the sync daemon and the sync command and must be synced using fsync.

Controls whether JFS uses clustered reads on all files. In general this option is not needed, but it may benefit certain workloads that have relatively random read access patterns.

Controls whether JFS uses a shared lock when reading from a file. If this option is turned off, two processes cannot disrupt each other's read. Certain workloads may benefit from this.

Specifies the number of LVM buffers for raw physical I/Os. Useful when applications performing large writes to striped raw logical volumes are not obtaining the desired throughput rate. LVM splits large raw I/Os into multiple buffers of 128 KB apiece. A value of 9 means that about 1 MB of I/O can be processed without waiting for more buffers. If a system is configured to have striped raw logical volumes and is doing writes greater than 1.125 MB, increasing this value may help the throughput of the application. If performing raw I/Os larger than 1 MB, it might be useful to increase this value.
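As an illustrative sketch (not from the original text), the lvm_bufcnt increase suggested above for large raw I/Os might look like this; 16 is an assumed example (16 buffers x 128 KB covers roughly 2 MB per request).

    # Allow more 128 KB LVM buffers per raw I/O to striped raw logical volumes.
    ioo -o lvm_bufcnt=16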
Specifies the maximum number of pages to be read ahead when processing a sequentially accessed file. The value must be a power of two and should be greater than or equal to minpgahead. Observe the elapsed execution time of critical sequential-I/O-dependent applications with the time command. Because of limitations in the kernel, do not exceed 512 as the maximum value used. The difference between minfree and maxfree should always be equal to or greater than maxpgahead. If execution time decreases with a higher maxpgahead, observe other applications to ensure that their performance has not deteriorated.

Specifies a threshold (in 4 KB pages) for random writes to accumulate in RAM before subsequent pages are flushed to disk by the write-behind algorithm. The random write-behind threshold is on a per-file basis. The maximum value indicates the largest file size, in pages. Change if vmstat n shows page-out and I/O wait peaks at regular intervals (usually when the sync daemon is writing pages to disk). Useful to set this value to 1 or higher if too much I/O occurs when syncd runs. A value of 0 disables random write-behind and indicates that random writes stay in RAM until a sync operation. Setting maxrandwrt ensures these writes get flushed to disk before the sync operation has to occur. However, this could degrade performance because the file is then being flushed each time. Tune this option to favor interactive response time over throughput. After the threshold is reached, all subsequent pages are then immediately flushed to disk. The pages up to the threshold value stay in RAM until a sync operation.

Specifies the number of pages with which sequential read-ahead starts. The value must be a power of two. Observe the elapsed execution time of critical sequential-I/O-dependent applications with the time command. Useful to increase if there are lots of large sequential accesses. Observe other applications to ensure that their performance has not deteriorated. A value of 0 may be useful if the I/O pattern is purely random.

Specifies the number of 16 KB clusters processed by the sequential write-behind algorithm of the VMM. Useful to increase if there is a need to keep more pages in RAM before scheduling them for I/O when the I/O pattern is sequential. May be appropriate to increase if striped logical volumes or disk arrays are being used.

Specifies the number of file system bufstructs. File system must be remounted. If the VMM must wait for a free bufstruct, it puts the process on the VMM wait list before the start I/O is issued and will wake it up once a bufstruct has become available. May be appropriate to increase if striped logical volumes or disk arrays are being used.

Specifies the number of pages that should be deleted in one chunk from RAM when a file is deleted. The maximum value indicates the largest file size, in pages. Useful for real-time applications that experience sluggish response time while files are being deleted; tuning this option is only useful for real-time applications. If real-time response is critical, adjusting this option may improve response time by spreading the removal of file pages from RAM more evenly over a workload.

When the number of free pages in a mempool drops below this threshold, pageahead will be linearly scaled back, avoiding pre-paging memory that would then need to be forced back out when the LRU daemon runs. A value of 0 indicates no pageahead scaling. The maximum value represents 4/5 of memory. Useful to increase if the system is unable to meet the memory demands under a heavy read workload.

Specifies the minimum number of pbufs per PV that the LVM uses.
This is a global value that applies to all VGs on the system. The lvmo command can also be used to set a different value for a particular VG. In this case, the higher of the two values is used for this particular VG. Useful to increase if there is a substantial number of simultaneous I/Os and if the number of "pending disk I/Os blocked with no pbuf" in vmstat -v is non-zero, which indicates that LVM had to wait for pbufs.

If set, this option causes a sync() to flush all I/O to a file without holding the i-node lock, and then use the i-node lock to do the commit. A value of 0 (off) indicates that the i-node lock is held while all dirty pages of a file are flushed. I/O to a file is blocked when the syncd daemon is running.

Specifies the current state of the fast path and how asynchronous I/O requests are managed. When enabled (set to 1), AIO initiates I/O requests directly to LVM or disk via the corresponding strategy routine, thereby avoiding calls to the file system or Virtual Memory Manager (VMM). When disabled, the I/O requests are routed to the servers and are serviced via the slow path, through file system code. If asynchronous I/O is working over a file system, the state of the fast path does not matter because it will pass its requests to file system calls that depend on VMM caching. If asynchronous I/O is using a raw logical volume with the fast path enabled, requests are sent directly to the Logical Volume Manager (LVM). File system calls are not made and requests do not pass through the Virtual Memory Manager (VMM). If asynchronous I/O is using a raw logical volume with the fast path disabled, it treats the raw logical volume as a special file and passes its requests through the file system and VMM caching code.

Specifies how I/O requests for files opened with CIO (Concurrent I/O) mode in a JFS2 filesystem are managed. When enabled (set to 1), AIO initiates I/O requests directly to LVM or disk via the corresponding strategy routine. When disabled, the I/O requests are routed to the servers and are serviced via the slow path.

Controls the process priority of the AIO servers (kernel processes dedicated to asynchronous I/O). Values less than 40 (PUSER) assign a higher scheduling priority to the servers than normal user applications. Concurrency is enhanced by making this number slightly less than the value of PUSER. It cannot be made lower than the value of PRI_SCHED. PUSER and PRI_SCHED are defined in the /usr/include/sys/pri.h file.

Specifies whether threads can suspend execution on an AIO request initiated by another thread. When enabled (set to 1), a thread can suspend execution on an AIO request initiated by another thread. This is a legacy feature, not available to POSIX AIO.

Specifies the sampling rate at which the freepool in_use count is sampled. The value of 86,400 represents the number of seconds in a day. By setting both sample_rate and samples_per_cycle to large values, the decay of freepools can be effectively inhibited.

Specifies the number of sample_rate samples in a cycle. A cycle is the rate at which the freepools decay. By setting both sample_rate and samples_per_cycle to large values, the decay of freepools can be effectively inhibited.
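As a hedged aside (not in the original text), the asynchronous I/O tunables described above can be reviewed and adjusted as follows on releases where they are managed through ioo (AIX 6.1 and later is assumed); the server counts are illustrative values for an AIO-heavy workload.

    # List the asynchronous I/O tunables and see how many AIO server
    # kernel processes are currently running.
    ioo -a | grep aio
    ps -k | grep aio

    # Raise the per-CPU server limits for the slow path (example values).
    ioo -o aio_minservers=5 -o aio_maxservers=60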
Specifies the maximum number of asynchronous I/O requests that can be outstanding at one time. The specified number includes I/O requests that are in progress as well as those that are waiting in queues to be initiated. The maximum number of asynchronous I/O requests cannot be less than the value of AIO_MAX, as defined in the /usr/include/sys/limits.h file, but can be greater. It would be appropriate for a system with a high volume of asynchronous I/O to have a maximum number of asynchronous I/O requests larger than AIO_MAX.

Specifies the maximum number of AIO servers (kernel processes dedicated to asynchronous I/O processing) allowed to service slow-path I/O requests. This is a per-CPU value. The value of maxservers cannot be less than minservers. There can never be more than this many asynchronous I/O requests in progress at one time, so this number limits the possible I/O concurrency.

Specifies the minimum number of AIO servers (kernel processes dedicated to asynchronous I/O processing) that remain active to process slow-path I/O requests. This is a per-CPU value. The value of minservers cannot be greater than maxservers. When the kernel extension is loaded, no AIO servers are created regardless of the current or default settings. This allows a minimal AIO footprint on systems where AIO is never used. As I/O requests are initiated, AIO servers are created to service them until the maximum allowed by maxservers has been reached. Once the minservers value has been exceeded, the number of servers does not fall below minservers.

Specifies how long an AIO server will sleep without servicing an I/O request. When this time limit has been exceeded, the server will exit, unless doing so would cause the number of available servers to fall below minservers. In this case, the server goes back to sleep. The time the server sleeps in this rare case is the larger of the times specified for the current and default values for server_inactivity. This is a rare case and indicates that there may be an imbalance between the number of available servers and the amount of I/O.

Indicates whether the AIO kernel extension has been used and pinned. A value of 1 indicates that the AIO kernel extension has been used and pinned.

Sets the maximum number of modified pages of a file that will be written to disk by the sync system call in a single operation. When running an application that uses file system caching and does large numbers of random writes, it may be necessary to adjust this setting to avoid lengthy delays during sync operations.

Sets the maximum number of times that the sync system call will use j2_syncPageCount to limit pages written before increasing that count to allow progress on the sync operation. This tunable should be set when j2_syncPageCount is set and should be increased if the effect of the j2_syncPageCount change is not sufficient.

Specifies whether the computational segment bit is removed from an executable file segment when the file is unused. If j2_unmarkComp is set to 1, files are unmarked as computational when the final close is done. This can result in executables needing to be paged in more often on execution, but can relieve pressure on memory if many executables are terminated and never run again.

Turns DMA memory protection from the hypervisor on or off. A value of 1 indicates that DMA memory protection is turned on; a value of 0 indicates that it is turned off.

Sets the behavior for recovery from JFS2 write errors. This tunable should be set to 1 for JFS2 to recover from temporary storage subsystem failures, or to 0 for JFS2 to remain in degraded mode after a storage subsystem failure.

Controls distributed iodone processing.
Iodone processing is by default executed on the same processor that received the interrupt. However, if a processor stays disabled for a long time, the iodone may take a while to be processed. This tunable enables a feature that auto-detects iodones that have been pending for a while and allows other processors to process them. Only iodones that are defined as 'migratable' can be moved to another CPU for processing. A value of 1 indicates that distributed iodone processing is disabled and a value of 0 enables it.

Indicates whether processors of a core should also check siblings for pending iodones. A value of 1 indicates that sibling processor checking of iodones is enabled.

Indicates whether all iodones are considered migratable with the exception of funneled ones. A value of 1 indicates that all iodones (except for funneled ones) are considered migratable. A value of 0 indicates that the only migratable iodones are those explicitly marked so.

Indicates whether migratable iodones are to be traced by AIX system trace. A value of 1 indicates that migratable iodones will be traced by AIX system trace.

Indicates how timestamp changes are reflected for devices. A value of 0 is the standards-compliant behavior, 1 indicates that access and update times are not changed, and 2 indicates lazy access/update changes.

Sets the maximum size of PCI DMA windows (in gigabytes). The minimum size is 2 GB and the maximum is unlimited; setting the 2 GB size will enable a TCE mirror.

Sets the timeout value for reporting delays in processing a sync operation. When this number of seconds is exceeded before a sync operation completes, a warning will be posted to the kernel syslog file.

Specifies the number of seconds to wait between calls to sync a JFS2 file system. This value supersedes the value specified on the syncd command line. The value specified for this control is the number of seconds between iterations of the sync processing. That is, the sync daemon will wait 'j2_syncByVFS' seconds between initiating calls to syncvfs for JFS2 file systems. This overrides the time specified on the syncd command line.

Sets the number of threads to be used for JFS2 sync operations. The sync daemon will start sync in parallel for the number of filesystems set by j2_syncConcurrency. This control is only effective when j2_syncByVFS is non-zero.

Sets the time to wait for temporarily unavailable PCI devices. This tunable specifies the amount of time the OS will wait before declaring a PCI device permanently unavailable after an EEH event causes it to be declared temporarily unavailable.

Sets the number of metadata buffers a file is allowed to modify before the file is automatically synced. This tunable can be adjusted when a file is being synced too often due to changes in its block allocation.

Sets the maximum range of megabytes of data in a mapped file that can be written during sync processing. This tunable can be used to limit the impact of sync processing for mapped files. It is only effective when j2_syncModifiedMapped is 1.

This tunable allows you to enable or disable the support for Logical Block Provisioning (thin provisioning) in the AIX operating system. When disabled, AIX will not attempt to release unused blocks from a thin-provisioned disk. A value of 0 disables Logical Block Provisioning (LBP) support. A value of 1 enables LBP and is the default value.

Defines the size for the pool of pre-allocated buffers that are used for LBP support.
By default, each buffer has a size of 512 bytes. This controls the maximum number of unmap requests that can be processed by the disk driver at any given time. The buffer pool is a system-wide resource pool. On any thin-provisioned disk, only one unmap request can be active at a time. The default value for this parameter is 64 buffers; for example, 64 buffers x 512 bytes = 32 KB of total pinned memory.

Defines the size of each buffer in the LBP buffer pool. The default is 512 bytes. It can be changed to 4096 (4 KB), in which case blocks can be released for disks that support a 4 KB block size. The value for this tunable should be the same as the largest block size supported by any disk attached to the AIX system.

The value does not explicitly indicate the amount that will be used, but is instead a scaling factor; it is used in combination with the size of the main memory to determine the maximum memory usage for the inode cache. The default value for this tunable was changed in AIX 7.1, but systems with smaller main memory sizes and large numbers of concurrent users or open files may perform better with the older default value of 400.

The value does not explicitly indicate the amount that will be used, but is instead a scaling factor; it is used in combination with the size of the main memory to determine the maximum memory usage for the inode cache. The default value for this tunable was changed in AIX 7.1, but systems with smaller main memory sizes and large numbers of concurrent users or open files may perform better with the older default value of 400.

Enables zombie garbage collection. This tunable should be set to 1 for JFS2 to use the zombie GC kproc to asynchronously remove data for large files, and to 0 otherwise.

This tunable allows you to enable or disable a feature that attempts to recover paths that were in a failed state at the time of disk closing. The recovery is attempted periodically after the disk is closed until the path recovers. Disks that are already closed when this tunable is set to 1 do not have their failed paths recovered. Open and close the disk (such as with lsmpio -o -l hdiskX) to initiate closed path recovery on already closed disks. This feature is supported by the AIX default PCMs. A value of 1 enables the 'closed path recovery' feature. A value of 0 disables it and is the default value.

Specifies the number of blocks that are aggregated in memory before being written out to disk. A larger value means that more data blocks remain in memory before being written out. This causes fewer overall writes to disk, and less fragmentation.

Enables AIO affinitization. This tunable should be set to 7 to turn on full affinitization optimizations and to 0 to disable all tunable affinitization optimizations.
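As a closing illustration (not part of the original text), all of the tunables described in this section are managed with the same small set of commands; the tunable names used below appear earlier in the text, and the chosen value is only an example.

    # List every I/O tunable with its current, default, and reboot values.
    ioo -L

    # Show the help text for one tunable before changing it.
    ioo -h j2_maxPageReadAhead

    # Change a tunable now and make the change persist across reboots.
    ioo -p -o j2_dynamicBufferPreallocation=32

    # Per-volume-group pbuf settings are handled by lvmo rather than ioo.
    lvmo -a -v rootvg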