.\" $NetBSD: 2.t,v 1.2 1998/01/09 06:55:37 perry Exp $
.\"
.\" Copyright (c) 1993
.\"	The Regents of the University of California.  All rights reserved.
.\"
.\" This document is derived from software contributed to Berkeley by
.\" Rick Macklem at The University of Guelph.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the University nor the names of its contributors
.\"    may be used to endorse or promote products derived from this software
.\"    without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\"	@(#)2.t	8.1 (Berkeley) 6/8/93
.\"
.sh 1 "Not Quite NFS, Crash Tolerant Cache Consistency for NFS"
.pp
Not Quite NFS (NQNFS) is an NFS-like protocol designed to maintain full cache
consistency between clients in a crash tolerant manner.
It is an adaptation of the NFS protocol such that the server supports both NFS
and NQNFS clients while maintaining full consistency between the server and
NQNFS clients.
This section borrows heavily from work done on Spritely-NFS [Srinivasan89],
but uses Leases [Gray89] to avoid the need to recover server state information
after a crash.
The reader is strongly encouraged to read these references before
trying to grasp the material presented here.
.pp
The protocol maintains cache consistency by using a somewhat
Sprite [Nelson88] like protocol,
but is based on short term leases\** instead of hard state information
about open files.
.(f
\** A lease is a ticket permitting an activity that is
valid until some expiry time.
.)f
The basic principle is that the protocol will disable client caching of a
file whenever that file is write shared\**.
.(f
\** Write sharing occurs when at least one client is modifying a file while
other client(s) are reading the file.
.)f
Whenever a client wishes to cache data for a file it must hold a valid lease.
There are three types of leases: read caching, write caching and non-caching.
The latter type requires that all file operations be done synchronously with
the server.
A read caching lease allows for client data caching, but no file modifications
may be done.
A write caching lease allows for client caching of writes,
but requires that all writes be pushed to the server when the lease expires.
If a client has dirty buffers\**
.(f
\** Cached write data is not yet pushed (written) to the server.
.)f
when a write cache lease has almost expired, it will attempt to
extend the lease but is required to push the dirty buffers if extension fails.
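.pp
As a rough illustration of the client side bookkeeping described above,
the following C sketch shows one way a client might decide whether cached
data may be used and when dirty buffers must be pushed.
All of the names (nqlease, nq_getlease(), nq_pushdirty()) are invented for
this example and are not taken from the actual implementation.
.(l
#include <stdbool.h>
#include <time.h>

/* Hypothetical lease types, mirroring the three kinds described above. */
enum nq_ltype { NQL_NONCACHING, NQL_READ, NQL_WRITE };

struct nqlease {
    enum nq_ltype type;
    time_t        expiry;   /* local (conservative) expiry time */
    bool          dirty;    /* client holds unwritten data */
};

/* Assumed helpers: issue a GetLease RPC and push dirty buffers. */
extern bool nq_getlease(struct nqlease *, enum nq_ltype);
extern void nq_pushdirty(struct nqlease *);

/*
 * Called before the client uses cached data or buffers a write.
 * Returns true if caching may proceed under a valid lease.
 * (Relies on the ordering of the enum above.)
 */
static bool
nq_cache_ok(struct nqlease *lp, enum nq_ltype want)
{
    time_t now = time(NULL);

    if (lp->type >= want && now < lp->expiry)
        return true;            /* lease still covers this use */
    if (nq_getlease(lp, want))  /* try to get or extend the lease */
        return true;
    if (lp->dirty)              /* extension failed: write back */
        nq_pushdirty(lp);
    return false;               /* fall back to synchronous RPCs */
}
.)l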
A client gets leases by either doing a \fBGetLease RPC\fR or by piggybacking
a \fBGetLease Request\fR onto another RPC. Piggybacking is supported for the
frequent RPCs Getattr, Setattr, Lookup, Readlink, Read, Write and Readdir
in an effort to minimize the number of \fBGetLease RPCs\fR required.
All leases are at the granularity of a file, since all NFS RPCs operate on
individual files and NFS has no intrinsic notion of a file hierarchy.
Directories, symbolic links and file attributes may be read cached but
are not write cached.
The exception here is the attribute file_size, which is updated during cached
writing on the client to reflect a growing file.
.pp
It is the server's responsibility to ensure that consistency is maintained
among the NQNFS clients by disabling client caching whenever a server file
operation would cause inconsistencies.
The possibility of inconsistencies occurs whenever a client has
a write caching lease and any other client,
or local operations on the server,
tries to access the file or when
a modify operation is attempted on a file being read cached by client(s).
At this time, the server sends an \fBeviction notice\fR to all clients holding
the lease and then waits for lease termination.
Lease termination occurs when a \fBvacated the premises\fR message has been
received from all the clients that have signed the lease or when the lease
expires via a timeout.
The message pair \fBeviction notice\fR and \fBvacated the premises\fR roughly
corresponds to a Sprite server\(->client callback, but is not implemented as an
actual RPC, to avoid the server waiting indefinitely for a reply from a dead
client.
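.pp
The server side check described above can be sketched as follows.
The structure and function names are hypothetical; the sketch is not drawn
from the 4.4BSD sources.
.(l
#include <stdbool.h>
#include <time.h>

enum nq_ltype { NQL_NONCACHING, NQL_READ, NQL_WRITE };

/* Hypothetical per-file lease record kept by the server. */
struct srvlease {
    enum nq_ltype type;
    int           nclients;   /* clients that have signed the lease */
    int           nvacated;   /* vacated-the-premises messages seen */
    time_t        expiry;
};

/* Assumed helpers. */
extern void nq_send_eviction(struct srvlease *);
extern void nq_wait_for_event(void);

/* Would the requested operation conflict with the outstanding lease? */
static bool
nq_conflicts(const struct srvlease *lp, bool modify)
{
    if (lp->type == NQL_WRITE)
        return true;        /* any other access could see stale data */
    return modify && lp->type == NQL_READ;
}

/* Disable caching before the server performs a conflicting operation. */
static void
nq_make_consistent(struct srvlease *lp, bool modify)
{
    if (!nq_conflicts(lp, modify))
        return;
    nq_send_eviction(lp);
    while (lp->nvacated < lp->nclients && time(NULL) < lp->expiry)
        nq_wait_for_event();    /* vacated message or timeout */
    lp->type = NQL_NONCACHING;
}
.)l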
.pp
Server consistency checking can be viewed as issuing intrinsic leases for a
file operation for the duration of the operation only. For example, the
\fBCreate RPC\fR will get an intrinsic write lease on the directory in which
the file is being created, disabling client read caches for that directory.
.pp
By relegating this responsibility to the server, consistency between the
server and NQNFS clients is maintained when NFS clients are modifying the
file system as well.\**
.(f
\** The NFS clients will continue to be \fIapproximately\fR consistent with
the server.
.)f
.pp
The leases are issued as time intervals to avoid the requirement of time of day
clock synchronization. There are three important time constants known to
the server. The \fBmaximum_lease_term\fR sets an upper bound on lease duration.
The \fBclock_skew\fR is added to all lease terms on the server to correct for
differing clock speeds between the client and server, and \fBwrite_slack\fR is
the number of seconds the server is willing to wait for a client with
an expired write caching lease to push dirty writes.
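.pp
To make the arithmetic concrete, the following C sketch shows how a server
might compute the expiry time recorded when a lease is issued.
The constant values are invented for illustration; they are not the
settings of any particular implementation.
.(l
#include <time.h>

/* Illustrative values only; real servers choose their own settings. */
#define MAXIMUM_LEASE_TERM  30  /* upper bound on lease duration (s) */
#define CLOCK_SKEW           3  /* allowance for differing clocks (s) */
#define WRITE_SLACK         15  /* grace period for dirty writes (s) */

/*
 * The server clips the requested duration to the maximum term and pads
 * the expiry it records with clock_skew, so a client whose clock runs
 * a little fast still sees its lease honoured.
 */
static time_t
lease_expiry(time_t now, int requested_duration)
{
    int term = requested_duration;

    if (term > MAXIMUM_LEASE_TERM)
        term = MAXIMUM_LEASE_TERM;
    return now + term + CLOCK_SKEW;
}

/*
 * A write caching lease is not finally discarded until write_slack
 * seconds after it expires, giving the client time to push dirty data.
 */
static time_t
write_lease_discard_time(time_t expiry)
{
    return expiry + WRITE_SLACK;
}
.)l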
.pp
The server maintains a \fBmodify_revision\fR number for each file. It is
defined as an unsigned quadword integer that is never zero and that must
increase whenever the corresponding file is modified on the server.
This value is used
by the client to determine whether or not cached data for the file is
stale.
Generating this value is easier said than done. The current implementation
uses the following technique, which is believed to be adequate.
The high order longword is stored in the ufs inode and is initialized to one
when an inode is first allocated.
The low order longword is stored in main memory only and is initialized to
zero when an inode is read in from disk.
When the file is modified for the first time within a given second of
wall clock time, the high order longword is incremented by one and
the low order longword reset to zero.
For subsequent modifications within the same second of wall clock
time, the low order longword is incremented. If the low order longword wraps
around to zero, the high order longword is incremented again.
Since the high order longword only increments once per second and the inode
is pushed to disk frequently during file modification, this implies
0 \(<= Current\(miDisk \(<= 5.
When the inode is read in from disk, 10
is added to the high order longword, which ensures that the quadword
is greater than any value it could have had before a crash.
This introduces apparent modifications every time the inode falls out of
the LRU inode cache, but this should only reduce the client caching performance
by a (hopefully) small margin.
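.pp
The description above translates almost directly into code.
The following C sketch shows the update rule for the two longwords; the
field and function names are invented and are not those of the 4.4BSD ufs
sources.
.(l
#include <stdint.h>
#include <time.h>

/*
 * Hypothetical per-file revision state.  The high longword would live
 * in the on-disk inode; the low longword exists only in memory.
 */
struct modrev {
    uint32_t hi;        /* stored in the ufs inode */
    uint32_t lo;        /* main memory only */
    time_t   last_mod;  /* second of the last modification */
};

/* Called when an inode is first allocated. */
static void
modrev_init(struct modrev *mr)
{
    mr->hi = 1;
    mr->lo = 0;
    mr->last_mod = 0;
}

/*
 * Called when an inode is read in from disk: jump well past any value
 * the revision could have reached before a crash.
 */
static void
modrev_reload(struct modrev *mr)
{
    mr->hi += 10;
    mr->lo = 0;
}

/* Called on every file modification. */
static void
modrev_bump(struct modrev *mr)
{
    time_t now = time(NULL);

    if (now != mr->last_mod) {
        mr->hi++;           /* first change in this second */
        mr->lo = 0;
        mr->last_mod = now;
    } else if (++mr->lo == 0) {
        mr->hi++;           /* low longword wrapped around */
    }
}

/* The 64 bit modify revision handed out in RPC replies. */
static uint64_t
modrev_value(const struct modrev *mr)
{
    return ((uint64_t)mr->hi << 32) | mr->lo;
}
.)l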
.sh 2 "Crash Recovery and other Failure Scenarios"
.pp
The server must maintain the state of all the current leases held by clients.
The nice thing about short term leases is that, maximum_lease_term seconds
after the server stops issuing leases, there are no current leases left.
As such, server crash recovery does not require any state recovery. After
rebooting, the server refuses to service any RPCs except for writes until
write_slack seconds after the last lease would have expired\**.
.(f
\** The last lease expiry time may be safely estimated as
"boottime+maximum_lease_term+clock_skew" for machines that cannot store
it in nonvolatile RAM.
.)f
By then, the server would not have any outstanding leases to recover the
state of and the clients have had at least write_slack seconds to push dirty
writes to the server and get the server sync'd up to date. After this, the
server simply services requests in a manner similar to NFS.
In an effort to minimize the effect of "recovery storms" [Baker91],
the server replies \fBtry_again_later\fR to the RPCs it is not
yet ready to service.
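.pp
A minimal sketch of the recovery window test, assuming the boot time
estimate given in the footnote above, follows; the names and constant
values are illustrative only.
.(l
#include <stdbool.h>
#include <time.h>

#define MAXIMUM_LEASE_TERM  30  /* illustrative values, as before */
#define CLOCK_SKEW           3
#define WRITE_SLACK         15

/*
 * After a reboot, non-write RPCs are answered with try_again_later
 * until every lease issued before the crash must have expired and
 * clients have had write_slack seconds to push their dirty writes.
 */
static bool
server_ready(time_t boottime, time_t now, bool is_write_rpc)
{
    time_t last_expiry = boottime + MAXIMUM_LEASE_TERM + CLOCK_SKEW;

    if (is_write_rpc)
        return true;        /* writes are always accepted */
    return now >= last_expiry + WRITE_SLACK;
}
.)l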
.pp
After a client crashes, the server may have to wait for a lease to time out
before servicing a request if write sharing of a file with a cachable lease
on the client is about to occur.
As for the client, it simply starts up getting any leases it now needs. Any
outstanding leases for that client on the server prior to the crash will
either be renewed or expire via timeout.
.pp
Certain network partitioning failures are more problematic. If a client to
server network connection is severed just before a write caching lease expires,
the client cannot push the dirty writes to the server. After the lease expires
on the server, the server permits other clients to access the file with the
potential of getting stale data. Unfortunately I believe this failure scenario
is intrinsic to any delayed write caching scheme unless the server is required
to wait \fBforever\fR for a client to regain contact\**.
.(f
\** Gray and Cheriton avoid this problem by using a \fBwrite through\fR policy.
.)f
Since the write caching lease has expired on the client,
it will sync up with the
server as soon as the network connection has been re-established.
.pp
There is another failure condition that can occur when the server is congested.
The worst case scenario would have the client pushing dirty writes to the server
but a large request queue on the server delays these writes for more than
\fBwrite_slack\fR seconds. It is hoped that a congestion control scheme using
the \fBtry_again_later\fR RPC reply after booting combined with
the following lease termination rule for write caching leases
can minimize the risk of this occurrence.
A write caching lease is only terminated on the server when there have
been no writes to the file and the server has not been overloaded during
the previous write_slack seconds. "The server has not been overloaded"
is approximated by a test for sleeping nfsd(s) at the end of the write_slack
period.
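.pp
That termination rule can be sketched as follows; the test for sleeping
nfsd(s) is reduced to a boolean parameter and all names are illustrative.
.(l
#include <stdbool.h>
#include <time.h>

#define WRITE_SLACK 15  /* illustrative value (seconds) */

/*
 * An expired write caching lease is only discarded once write_slack
 * seconds have passed with no writes to the file and with the server
 * unloaded (approximated here by "some nfsd has been sleeping, idle").
 */
static bool
may_terminate_write_lease(time_t now, time_t last_write,
    time_t last_overload, bool nfsd_idle)
{
    if (!nfsd_idle)
        return false;
    return now - last_write >= WRITE_SLACK &&
        now - last_overload >= WRITE_SLACK;
}
.)l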
.sh 2 "Server Disk Full"
.pp
There is a serious unresolved problem for delayed write caching with respect to
server disk space allocation.
When the disk on the file server is full, delayed write RPCs can fail
due to "out of space".
For NFS, this occurrence results in an error return from the close system
call on the file, since the dirty blocks are pushed on close.
Processes writing important files can check for this error return
to ensure that the file was written successfully.
For NQNFS, the dirty blocks are not pushed on close and as such the client
may not attempt the write RPC until after the process has done the close,
which implies no error return from the close.
For the current prototype,
the only solution is to modify programs writing important
file(s) to call fsync and check for an error return from it instead of close.
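.pp
For example, a program that must know whether its data reached the server
might be changed along the following lines; this is a minimal sketch using
only standard system calls, not code from any particular utility.
.(l
#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
    const char buf[] = "important data";
    int fd;

    fd = open("important.file", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1)
        err(1, "open");
    if (write(fd, buf, sizeof(buf) - 1) == -1)
        err(1, "write");

    /*
     * Under NQNFS a failed delayed write (e.g. "out of space") may
     * not be reported by close(), so force the dirty blocks out and
     * check fsync's return value instead.
     */
    if (fsync(fd) == -1)
        err(1, "fsync");
    if (close(fd) == -1)
        err(1, "close");
    return 0;
}
.)l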
.sh 2 "Protocol Details"
.pp
The protocol specification is identical to that of NFS [Sun89] except for
the following changes.
Program Number 300105
Readdir_and_Lookup RPC
struct readdirlookargs {
nqnfs_fattr entry_attrib;
union readdirlookres switch (stat status) {
NQNFSPROC_READDIRLOOK(readdirlookargs) = 18;
Reads entries in a directory in a manner analogous to the NFSPROC_READDIR RPC
in NFS, but returns the file handle and attributes of each entry as well.
This allows the attribute and lookup caches to be primed.
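.pp
As an illustration of how a client could use such a reply, the sketch below
walks the returned entries and loads the name and attribute caches.
The entry layout and the cache entry points are assumptions made for the
example; the real layout is given by the protocol specification.
.(l
#include <stddef.h>

/* Assumed shapes, for illustration only. */
struct nqnfs_fattr { long va_size; /* other attributes elided */ };
struct fhandle { char data[32]; };

struct direntry {
    char              *name;
    struct fhandle     fh;
    struct nqnfs_fattr attr;
    struct direntry   *next;
};

/* Hypothetical client cache hooks. */
extern void namecache_enter(const char *name, const struct fhandle *fh);
extern void attrcache_enter(const struct fhandle *fh,
    const struct nqnfs_fattr *attr);

/*
 * Prime the lookup and attribute caches from a Readdir_and_Lookup
 * reply, so that subsequent Lookup and Getattr calls can be answered
 * locally while a read caching lease is held.
 */
static void
prime_caches(struct direntry *entries)
{
    for (struct direntry *e = entries; e != NULL; e = e->next) {
        namecache_enter(e->name, &e->fh);
        attrcache_enter(&e->fh, &e->attr);
    }
}
.)l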
struct getleaseargs {
union getleaseres switch (stat status) {
nqnfs_fattr attributes;
NQNFSPROC_GETLEASE(getleaseargs) = 19;
Gets a lease for "file" valid for "duration" seconds from when the lease
was issued on the server\**.
.(f
\** To be safe, the client may only assume that the lease is valid
for ``duration'' seconds from when the RPC request was sent to the server.
.)f
The lease permits client caching if "cachable" is true.
The modify revision level and attributes for the file are also returned.
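.pp
The advice in the footnote, measuring the lease from the time the request
was sent, can be captured in a few lines of C; the RPC stub and structure
names here are assumptions for illustration.
.(l
#include <time.h>

/* Hypothetical reply carrying the granted lease duration in seconds. */
struct getlease_reply { int duration; int cachable; };

/* Assumed RPC stub; returns 0 on success. */
extern int nqnfs_getlease_rpc(const char *path, int wanted_duration,
    struct getlease_reply *reply);

/*
 * Issue a GetLease RPC and compute a conservative local expiry time:
 * the duration is counted from when the request was sent, not from
 * when the server issued the lease, so clock skew and network delay
 * can only make the client drop the lease early, never too late.
 */
static time_t
getlease_conservative(const char *path, int wanted,
    struct getlease_reply *r)
{
    time_t sent = time(NULL);

    if (nqnfs_getlease_rpc(path, wanted, r) != 0 || !r->cachable)
        return 0;           /* no usable lease */
    return sent + r->duration;
}
.)l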
NQNFSPROC_EVICTED (fhandle) = 21;
This message is sent from the server to the client. When the client receives
the message, it should flush data associated with the file represented by
"fhandle" from its caches and then send the \fBVacated Message\fR back to
the server. Flushing includes pushing any dirty writes via write RPCs.
NQNFSPROC_VACATED (fhandle) = 20;
This message is sent from the client to the server in response to the
\fBEviction Message\fR. See above.
NQNFSPROC_ACCESS(accessargs) = 22;
The access RPC does permission checking on the server for the given type
of access required by the client for the file.
Use of this RPC avoids accessibility problems caused by client->server uid
mapping.
Piggybacked Get Lease Request
The piggybacked get lease request is functionally equivalent to the Get Lease
RPC except that it is attached to one of the other NQNFS RPC requests as follows.
A getleaserequest is prepended to all of the request arguments for NQNFS
and a getleaserequestres is inserted in all NFS result structures just after
the "stat" field only if "stat == NFS_OK".
union getleaserequest switch (cachetype type) {
union getleaserequestres switch (cachetype type) {
The get lease request applies to the file that the attached RPC operates on
and the file attributes remain in the same location as for the NFS RPC reply
structure.
Three additional "stat" values
Three additional values have been added to the enumerated type "stat".
The "expired" value indicates that a lease has expired.
The "try_again_later"
value is returned by the server when it wishes the client to retry the
RPC request after a short delay. It is used during crash recovery (Section 2)
and may also be useful for server congestion control.
The "authentication error" value is returned for kerberized mount points to
indicate that there is no cached authentication mapping and a Kerberos ticket
for the principal is required.
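.pp
One plausible client side reaction to these status values is sketched
below; the numeric codes and the retry delay are illustrative and are not
the values assigned by the protocol specification.
.(l
#include <unistd.h>

/* Illustrative status codes only. */
enum nq_stat { NQ_OK = 0, NQ_EXPIRED = 500, NQ_TRY_AGAIN_LATER = 501 };

extern enum nq_stat nq_send_rpc(const void *args);  /* assumed stub */
extern void nq_invalidate_cache(void);

/*
 * Retry an RPC while the server answers try_again_later (e.g. during
 * crash recovery), and drop cached data when told the lease expired.
 */
static enum nq_stat
nq_call(const void *args)
{
    enum nq_stat st;

    for (;;) {
        st = nq_send_rpc(args);
        if (st == NQ_TRY_AGAIN_LATER) {
            sleep(1);               /* back off briefly, then retry */
            continue;
        }
        if (st == NQ_EXPIRED)
            nq_invalidate_cache();  /* cached data may be stale */
        return st;
    }
}
.)l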
Type of lease requested. NQLNONE is used to indicate no piggybacked lease
request.
typedef unsigned hyper modifyrev;
The "modifyrev" is an unsigned quadword integer value that is never zero
and increases every time the corresponding file is modified on the server.
unsigned nano_seconds;
For NQNFS, times are handled at nanosecond resolution instead of microsecond
resolution.
unsigned hyper bytes;
The nqnfs_fattr structure is modified from the NFS fattr so that it stores
the file size as a 64 bit quantity and the storage occupied as a 64 bit number
of bytes. It also has fields added for the 4.4BSD va_flags and va_gen fields
as well as the file's modify rev level.
The nqnfs_sattr structure is modified from the NFS sattr structure in the
same manner as fattr.
The arguments to several of the NFS RPCs have been modified as well. Mostly,
these are minor changes to use 64 bit file offsets or similar. The modified
argument structures follow.
struct lookup_diropargs {
union lookup_diropres switch (stat status) {
union getleaserequestres lookup_lease;
nqnfs_fattr attributes;
The additional "duration" argument tells the server to get a lease for the
name being looked up if it is non-zero, and the lease is specified
in "lookup_lease".
struct nqnfs_readargs {
unsigned hyper offset;
struct nqnfs_writeargs {
unsigned hyper offset;
The "append" argument is true for append-only write operations.
Get Filesystem Attributes RPC
union nqnfs_statfsres switch (stat status) {
The "files" field is the number of files in the file system and the "files_free"
field is the number of additional files that can be created.
.pp
The configuration and tuning of an NFS environment tends to be a bit of a
mystic art, but hopefully this paper along with the man pages and other
reading will be helpful. Good Luck.