Checks whether an NFS request originated from a privileged port. A value of 0 disables the port checking done by the NFS server. A value of 1 directs the NFS server to check the port of incoming NFS requests. This is a configuration decision with minimal performance consequences.

Turns the generation of checksums on NFS UDP packets on or off. Make sure this value is set to on in any network where packet corruption might occur. Slight performance gains can be realized by turning it off, but at the expense of an increased chance of data corruption.

Sets the queue size of the NFS server UDP socket. Increase the nfs_socketsize value when netstat reports packets dropped due to full socket buffers for UDP and increasing the number of nfsd daemons has not helped.

Sets the queue size of the NFS TCP socket, specified in bytes. The TCP socket is used for buffering NFS RPC packets on send and receive. This option reserves, but does not allocate, memory for use by the socket's send and receive buffers. It may be appropriate to tune this parameter if poor sequential read or write performance between an NFS server and client is observed and both of the following conditions exist: (1) a large (32 KB or greater) RPC size is being used, and (2) communication between the server and client is over a network link using a large (9000-byte or greater) MTU. Do not set the nfs_tcp_socketsize value to less than 60,000. The default value should be adequate for the vast majority of environments; it allows enough space to (1) buffer incoming data without limiting the TCP window size, and (2) buffer outgoing data without limiting the speed at which NFS can write data to the socket.
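The two limits on this option can be sketched as a small check. This is a toy illustration (not AIX code, and the function name is hypothetical), combining the 60,000-byte floor above with the sb_max ceiling noted for this option:

```python
# Toy sketch (not AIX code): checks a proposed nfs_tcp_socketsize value
# against the documented constraints -- at least 60,000 bytes, and below
# the sb_max value that the no command controls.

MIN_TCP_SOCKETSIZE = 60_000  # documented floor for nfs_tcp_socketsize

def tcp_socketsize_ok(proposed: int, sb_max: int) -> bool:
    """Return True if the proposed size honors both documented limits."""
    return MIN_TCP_SOCKETSIZE <= proposed < sb_max

print(tcp_socketsize_ok(262_144, 1_048_576))    # True
print(tcp_socketsize_ok(50_000, 1_048_576))     # False: below the 60,000 floor
print(tcp_socketsize_ok(2_000_000, 1_048_576))  # False: not below sb_max
```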
The value of the nfs_tcp_socketsize option must be less than the sb_max option, which can be manipulated with the no command.

Sets the minimum size of write requests for which write gathering is done. Tuning nfs_gather_threshold may be appropriate if either of the following situations is observed: (1) delays in responding to RPC requests, particularly where the client is exclusively doing non-sequential writes or the files being written are written with file locks held on the client, or (2) clients writing with write sizes smaller than 4096 while write gathering is not working. To disable write gathering, change nfs_gather_threshold to a value greater than the largest possible write; for AIX Version 4 running NFS Version 2 that value is 8192, so setting the threshold to 8193 disables write gathering. Use this for scenario (1). If write gathering is being bypassed because of a small write size, say 1024, as in scenario (2), lower the threshold so that the smaller writes are gathered; for example, set it to 1024.

Checks for duplicate NFS messages. This option is used to avoid displaying duplicate NFS messages. Tuning this parameter does not affect performance.

Specifies the number of entries to store in the NFS server's duplicate cache for the UDP network transport. The duplicate cache size cannot be decreased. Increase the duplicate cache size for servers that have a high throughput capability. The duplicate cache allows the server to respond correctly to NFS client retransmissions; if the server flushes this cache before the client is able to retransmit, the server may respond incorrectly. Therefore, if the server can process 1000 operations before a client retransmits, the duplicate cache size must be increased. Calculate the number of NFS operations being received per second at the NFS server and multiply this by 4.
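The sizing rule just described (operations per second times four) can be sketched as follows; this is an illustration only, and the function name is hypothetical:

```python
# Toy sketch (not AIX code) of the documented sizing rule: measure the NFS
# operation rate at the server and multiply by 4.

def duplicate_cache_size(ops_per_second: int) -> int:
    return ops_per_second * 4

# A server handling 1000 NFS operations per second needs a cache of 4000
# entries so that client retransmissions still find their original replies.
print(duplicate_cache_size(1000))  # 4000
```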
The result is a duplicate cache size that should be sufficient to allow correct responses from the NFS server. The operations affected by the duplicate cache are setattr(), write(), create(), remove(), rename(), link(), symlink(), mkdir(), and rmdir().

Specifies the number of entries to store in the NFS server's duplicate cache for the TCP network transport. The duplicate cache size cannot be decreased. Increase the duplicate cache size for servers that have a high throughput capability. The duplicate cache allows the server to respond correctly to NFS client retransmissions; if the server flushes this cache before the client is able to retransmit, the server may respond incorrectly. Therefore, if the server can process 1000 operations before a client retransmits, the duplicate cache size must be increased. Calculate the number of NFS operations being received per second at the NFS server and multiply this by 4; the result is a duplicate cache size that should be sufficient to allow correct responses. The operations affected by the duplicate cache are setattr(), write(), create(), remove(), rename(), link(), symlink(), mkdir(), and rmdir().

Sets the base priority of nfsd daemons. By default, the nfsd daemons run with a floating process priority, so their priority changes as they accumulate CPU time. This parameter can be used to set a static priority for the nfsd daemons. A value of 0 selects the floating priority (the default); other values within the acceptable range are used as the priority of the nfsd daemon when an NFS request is received at the server. This option can be used if the NFS server is overloading the system (lowering the priority to make the nfsd daemons less favored), or if you want the nfsd daemons to be among the most favored processes on the server.
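How the base-priority setting is interpreted can be sketched as below. This is a toy illustration (not AIX code, names hypothetical); the kernel's actual priority bookkeeping is more involved:

```python
# Toy sketch (not AIX code): 0 keeps the default floating priority, while any
# other accepted value is used as a static priority for the nfsd daemons.

def nfsd_priority(base_priority: int, current_floating: int) -> int:
    return current_floating if base_priority == 0 else base_priority

print(nfsd_priority(0, 65))   # 65: floating priority stays in effect
print(nfsd_priority(60, 65))  # 60: static priority overrides
```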
Use caution when setting this parameter, because it can render the system almost unusable by other processes. This can occur if the NFS server is very busy, essentially locking other processes out of run time on the server.

Specifies whether the NFS client should use a dynamic retransmission algorithm to decide when to resend NFS requests to the server. If this function is turned on, the timeo parameter is used only for the first retransmission. With this parameter set to 1, the NFS client attempts to adjust its timeout behavior based on past NFS server response, allowing a floating timeout value along with adjustment of the transfer sizes used. All of this is based on a cumulative history of the NFS server's response time. In most cases, this parameter does not need to be adjusted. In the instances where the straightforward timeout behavior is desired for the NFS client, set the value to 0 before mounting file systems.

Specifies the number of NFS file pages that are scheduled to be written back to the server through the VMM at one time. This I/O scheduling control occurs on close of a file and when the system invokes the syncd daemon. When an application writes a large file to an NFS-mounted file system, the file data is written to the NFS server when the file is closed. In some cases, the resources consumed in writing that file to the server may prevent other NFS file I/O from occurring. This parameter limits the number of 4 KB pages written to the server at one time to the value of nfs_iopace_pages. The NFS client schedules nfs_iopace_pages pages for writing to the server and then waits for these to complete before scheduling the next batch of pages. The default value is usually sufficient for most environments. Decrease the value if there is heavy contention for NFS client resources; if contention is low, the value can be increased.
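The batching behavior just described can be sketched as follows; this is a toy model (not AIX code, names hypothetical) of how pages are grouped, with each batch completing before the next is scheduled:

```python
# Toy sketch (not AIX code): pages are flushed to the server
# nfs_iopace_pages at a time; the client waits for each batch to complete
# before scheduling the next one.

def writeback_batches(total_pages: int, iopace_pages: int) -> list[list[int]]:
    return [list(range(start, min(start + iopace_pages, total_pages)))
            for start in range(0, total_pages, iopace_pages)]

# A 100-page (400 KB) file with nfs_iopace_pages=32 is flushed in 4 batches,
# the last containing the remaining 4 pages.
batches = writeback_batches(100, 32)
print(len(batches), len(batches[-1]))  # 4 4
```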
If nfs_iopace_pages=0, the number of pages written by the syncd daemon at one time is calculated using a heuristic intended to optimize performance and prevent exhaustion of resources that might prevent other NFS file I/O from occurring.

Specifies the maximum number of NFS server threads created to service incoming NFS requests. The NFS server is multithreaded: server threads are created as demand increases, and they exit when they become idle. This allows the server to adapt to the needs of the NFS clients. The nfs_max_threads parameter is the maximum number of threads that can be created. In general, setting the maximum very high does not detract from overall system performance, because the NFS server creates threads only as needed; however, this assumes that NFS serving is the machine's primary purpose. If the system is to be shared with other activities, the maximum number of threads may need to be set low. The maximum can also be specified as a parameter to the nfsd daemon.

Specifies the use of nonreserved IP port numbers. A value of 0 causes a nonreserved IP port number to be used when the NFS client communicates with the NFS server.

Allows the NFS server to be very aggressive about reading ahead in a file. The NFS server can only respond to the specific NFS read request from the NFS client; however, it can also read the data in the file that immediately follows the current request. This is normally referred to as read-ahead, and the NFS server does it by default. Turning the option off may be useful in cases where server memory is low and too much disk-to-memory activity is occurring. With the nfs_server_clread option enabled, the NFS server becomes very aggressive about doing read-ahead for the NFS client. If the value is 1, aggressive read-ahead is done; if the value is 0, normal system default read-ahead methods are used.
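The cluster-aligned read this entry describes can be sketched as a toy calculation (not AIX code, names hypothetical); the 128 KB cluster size is the LVM logical track group size noted for this mechanism:

```python
# Toy sketch (not AIX code): when the aggressive mechanism is active, the
# server reads the whole 128 KB cluster containing the requested offset,
# which is why out-of-order requests do not defeat the read-ahead.

CLUSTER = 128 * 1024  # LVM logical track group size

def cluster_read_range(offset: int) -> tuple[int, int]:
    start = (offset // CLUSTER) * CLUSTER
    return start, start + CLUSTER

# Requests at offsets 10,000 and 200,000 map to their enclosing clusters.
print(cluster_read_range(10_000))   # (0, 131072)
print(cluster_read_range(200_000))  # (131072, 262144)
```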
'Normal' system read-ahead is controlled by the VMM (for JFS file systems) and by JFS2 (for JFS2 file systems). The more aggressive top-half read-ahead enabled by the nfs_server_clread option is less susceptible to read-ahead breaking down because of out-of-order requests, which are typical in the NFS server case. When the mechanism is activated, it reads an entire cluster (128 KB, the LVM logical track group size) at a time.

Enables very large TCP window size negotiation (greater than 65535 bytes) between systems. If the TCP transport is used between NFS client and server, and both systems support it, this allows them to negotiate a TCP window size in a way that lets more data be 'in flight' between the client and server, which increases the throughput potential of the connection. Unlike the rfc1323 option of the no command, this affects only NFS, not other applications on the system. A value of 0 disables the option; a value of 1 enables it. If the no command parameter rfc1323 is already set, this NFS option does not need to be set.

Allows the system administrator to control the NFS RPC sizes at the server. This is useful when all clients need changes to the read/write sizes and it is impractical to change the clients. The default is to use the values specified by the client mount. This may be required to reduce the V3 read/write sizes when the mounts cannot be manipulated directly on the clients, in particular during NIM installations on networks where packets are dropped at the default read/write sizes; in that case, set the maximum size to a smaller size that works on the network. It can also be useful where network devices are dropping packets and a generic change is desired for communications with the server.

Allows the system administrator to control the NFS RPC sizes at the server. This is useful when all clients need changes to the read/write sizes and it is impractical to change the clients.
The default is to use the values specified by the client mount. This may be required to reduce the V3 read/write sizes when the mounts cannot be manipulated directly on the clients, in particular during NIM installations on networks where packets are dropped at the default read/write sizes; in that case, set the maximum size to a smaller size that works on the network. It can also be useful where network devices are dropping packets and a generic change is desired for communications with the server.

Specifies that the NFS server adhere to signal handling requirements for blocked locks for the UNIX 95/98 test suites. A value of 1 turns nfs_allow_all_signals on; a value of 0 turns it off.

Sets or displays the number of tables for memory pools used by the biods for NFS Version 2 mounts. For values greater than 1, this option must be set before NFS mounts are performed.

Sets or displays the number of tables for memory pools used by the biods for NFS Version 3 mounts. For values greater than 1, this option must be set before NFS mounts are performed.

Sets or displays the number of initial free memory buffers used for each NFS Version 2 paging device table (pdt) created after the first table. The very first pdt has a set value of 1000 or 10000, depending on system memory. This initial value is also the default value of each newly created pdt. For values other than the default, this option must be set before NFS mounts are performed.

Sets or displays the number of initial free memory buffers used for each NFS Version 3 paging device table (pdt) created after the first table. The very first pdt has a set value of 1000 or 10000, depending on system memory. This initial value is also the default value of each newly created pdt. For values other than the default, this option must be set before NFS mounts are performed.

Sets the timeout, in seconds, for a DES credential. A value of 0 disables DES credential timeouts.
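The first-pdt sizing can be sketched as a toy selection (not AIX code). The documentation says only "1000 or 10000 depending on system memory" and does not give the cutoff, so the 1 GB threshold below is a placeholder assumption, and the names are hypothetical:

```python
# Toy sketch (not AIX code): the very first paging device table gets 1000 or
# 10000 free buffers depending on system memory. The 1 GB cutoff used here
# is a PLACEHOLDER assumption -- the real threshold is not documented above.

def initial_pdt_bufs(system_memory_bytes: int) -> int:
    ONE_GB = 1 << 30  # placeholder cutoff, not from the documentation
    return 10_000 if system_memory_bytes >= ONE_GB else 1_000

print(initial_pdt_bufs(512 * 1024 * 1024))  # 1000
print(initial_pdt_bufs(4 * (1 << 30)))      # 10000
```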
Determines whether READDIRPLUS calls are supported by the server. A value of 0 disables READDIRPLUS processing.

Sets the level of debugging for rpc.lockd.

Sets the level of debugging for rpc.statd.

Sets the maximum number of threads for rpc.statd. The rpc.statd daemon is multithreaded so that it can reestablish connections with remote machines concurrently. The rpc.statd threads are created as demand increases, usually because rpc.statd is trying to reestablish a connection with a machine that it cannot contact. When the rpc.statd threads become idle, they exit. The statd_max_threads parameter is the maximum number of threads that can be created.

Specifies the timeout period, in seconds, after which an NFSv4 client operation fails over to a replica provided by the NFSv4 server. If a value of 0 is specified, the timeout value is the TCP timeout value multiplied by 4. Values from 1 to 4 are reserved; if a value from 1 to 4 is specified, the NFSv4 client treats it as 0. NFSv4 allows a client to fail over to another replica server if the main server is not responding. This value determines how long a client waits for a response from the server before it switches all NFSv4 requests for that fsid to another replica server.

Determines whether the NFS Version 4 client and server check string data for UTF-8 correctness. A value of 0 disables the UTF-8 checking. A value of 1 enables the UTF-8 checking.

Sets or displays the number of tables for memory pools used by the biods for NFS Version 4 mounts. For values greater than 1, this option must be set before NFS mounts are performed.

Sets or displays the number of initial free memory buffers used for each NFS Version 4 paging device table (pdt) created after the first table. The very first pdt has a set value of 1000 or 10000, depending on system memory. This initial value is also the default value of each newly created pdt.
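The interpretation of the NFSv4 failover timeout setting described above can be sketched as follows (a toy illustration, not AIX code; the function name and the sample TCP timeout value are hypothetical):

```python
# Toy sketch (not AIX code): 0 means four times the TCP timeout, values 1-4
# are reserved and treated as 0, and larger values are used as given, in
# seconds.

def effective_failover_timeout(setting: int, tcp_timeout: int) -> int:
    if 0 <= setting <= 4:  # 0 itself, plus the reserved values 1-4
        return tcp_timeout * 4
    return setting

print(effective_failover_timeout(0, 15))    # 60
print(effective_failover_timeout(3, 15))    # 60: reserved, treated as 0
print(effective_failover_timeout(120, 15))  # 120
```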
For values other than the default, this option must be set before NFS mounts are performed.

Determines whether the NFS Version 4 server issues read delegations for open files. A value of 0 disables delegation granting. A value of 1 enables delegation granting.

When the rbr (release-behind-on-read) mount option is not specified, this value can enable release-behind-on-read behavior once a file is read beyond the trigger offset (specified in MB). Sequentially read files then remain cached in memory, as space allows, up to the trigger offset, and pages beyond the trigger are released as the file continues to be read sequentially. A value of -1 disables this feature, a value of 0 allows the system to choose an appropriate value, and a positive value is used as the trigger. This option is ignored when the rbr mount option is used.

Determines whether the NFS Version 4 client accepts delegations for open files. A value of 0 disables delegations. A value of 1 enables delegations.

Determines whether the NFS Version 4 server should avoid sending an NFS4ERR_DELAY response. If the NFS clients in use pause applications for long periods when they encounter an NFS4ERR_DELAY response from the server, setting this option makes the server attempt to process the delay itself, which may avoid the pauses seen by the application.

Sets the priority at which NFS mount hang messages are logged to syslog. Possible values are 1 through 7: 1 = LOG_ALERT, 2 = LOG_CRIT, 3 = LOG_ERR, 4 = LOG_WARNING, 5 = LOG_NOTICE, 6 = LOG_INFO, 7 = LOG_DEBUG. The default is 6.

Enables or disables GSS window size checking. A value of 0 disables GSS window size checking. A value of 1 enables it.

Enables or disables the NFS client-side global lock window improvement. A value of 0 disables the improvement; a value of 1 enables it. NOTE: In very rare cases, rm -r $DIR may fail if this tunable is enabled.
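The numeric settings for the mount-hang message priority map to syslog levels as listed above; a small lookup (an illustration, not AIX code) makes the mapping explicit:

```python
# Small lookup (not AIX code) for the documented mapping from the numeric
# setting (1-7) to the syslog priority used for NFS mount hang messages;
# 6 (LOG_INFO) is the default.

MOUNT_HANG_PRIORITY = {
    1: "LOG_ALERT", 2: "LOG_CRIT", 3: "LOG_ERR", 4: "LOG_WARNING",
    5: "LOG_NOTICE", 6: "LOG_INFO", 7: "LOG_DEBUG",
}

print(MOUNT_HANG_PRIORITY[6])  # LOG_INFO
```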