.\" $NetBSD: 3.t,v 1.3 2003/08/07 10:30:47 agc Exp $
.\"
.\" Copyright (c) 1985 The Regents of the University of California.
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the University nor the names of its contributors
.\"    may be used to endorse or promote products derived from this software
.\"    without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\"	@(#)3.t	5.1 (Berkeley) 4/17/91
.\"
.ds RH Results of our observations
.NH
Results of our observations
.PP
When 4.2BSD was first installed on several large timesharing systems
the degradation in performance was significant.
Informal measurements showed 4.2BSD providing 80% of the throughput
of 4.1BSD (based on load averages observed under a normal timesharing load).
Many of the initial problems found were because of programs that were
not part of 4.1BSD.  Using the techniques described in the previous
section and standard process profiling, several problems were identified.
Later work concentrated on the operation of the kernel itself.
In this section we discuss the problems uncovered; in the next
section we describe the changes made to the system.
.NH 2
User programs
.NH 3
Mail system
.PP
The mail system was the first culprit identified as a major
contributor to the degradation in system performance.
At Lucasfilm the mail system is heavily used
on one machine, a VAX-11/780 with eight megabytes of memory.\**
.FS
\** During part of these observations the machine had only four
megabytes of memory.
.FE
Message traffic is usually between users on the same machine and ranges from
person-to-person telephone messages to per-organization distribution
lists.  After conversion to 4.2BSD, it was
immediately noticed that mail to distribution lists of 20 or more people
caused the system load to jump by anywhere from 3 to 6 points.
The number of processes spawned by the \fIsendmail\fP program and
the messages sent from \fIsendmail\fP to the system logging
process, \fIsyslog\fP, generated significant load both from their
execution and their interference with basic system operation.  The
number of context switches and disk transfers often doubled while
\fIsendmail\fP operated; the system call rate jumped dramatically.
System accounting information consistently
showed \fIsendmail\fP as the top CPU user on the system.
.NH 3
Network servers
.PP
The network services provided in 4.2BSD add new capabilities to the system,
but are not without cost.  The system uses one daemon process to accept
requests for each network service provided.  The presence of many
such daemons increases the numbers of active processes and files,
and requires a larger configuration to support the same number of users.
The overhead of the routing and status updates can consume
several percent of the CPU.
Remote logins and shells incur more overhead
than their local equivalents.
For example, a remote login uses three processes and a
pseudo-terminal handler in addition to the local hardware terminal
handler.  When using a screen editor, sending and echoing a single
character involves four processes on two machines.
The additional processes, context switching, network traffic, and
terminal handler overhead can roughly triple the load presented by one
local terminal user.
.PP
To measure the costs of various functions in the kernel,
a profiling system was run for a 17 hour
period on one of our general timesharing machines.
While this is not as reproducible as a synthetic workload,
it certainly represents a realistic test.
This test was run on several occasions over a three month period.
Despite the long period of time that elapsed
between the test runs, the shape of the profiles,
as measured by the number of times each system call
entry point was called, was remarkably similar.
.PP
These profiles turned up several bottlenecks that are
discussed in the next section.
Several of these were new to 4.2BSD,
but most were caused by overloading of mechanisms
which worked acceptably well in previous BSD systems.
The general conclusion from our measurements was that
the ratio of user to system time had increased from
45% system / 55% user in 4.1BSD to 57% system / 43% user
in 4.2BSD.
.NH 2
Micro-operation benchmarks
.PP
To compare certain basic system operations
between 4.1BSD and 4.2BSD a suite of benchmark
programs was constructed and run on a VAX-11/750 with 4.5 megabytes
of physical memory and two disks on a MASSBUS controller.
Tests were run with the machine operating in single user mode
under both 4.1BSD and 4.2BSD.  Paging was localized to the drive
where the root file system was located.
.PP
The benchmark programs were modeled after the Kashtan benchmarks
[Kashtan80], with identical sources compiled under each system.
The programs and their intended purpose are described briefly
before the presentation of the results.  The benchmark scripts
were run twice with the results shown as the average of
the two runs.
The source code for each program and the shell scripts used during
the benchmarks are included in the Appendix.
.PP
The set of tests shown in Table 1 was concerned with
system operations other than paging.  The intent of most
benchmarks is clear.  The result of running \fIsignocsw\fP is
deducted from the \fIcsw\fP benchmark to calculate the context
switch overhead.  The \fIexec\fP tests use two different jobs to gauge
the cost of overlaying a larger program with a smaller one
and vice versa.  The
``null job'' and ``big job'' differ solely in the size of their data
segments, 1 kilobyte versus 256 kilobytes.  In both cases the
text segment of the parent is larger than that of the child.\**
.FS
\** These tests should also have measured the cost of expanding the
text segment; unfortunately time did not permit running additional tests.
.FE
All programs were compiled into the default load format that causes
the text segment to be demand paged out of the file system and shared
by multiple processes.
.TS
center box;
l l.
syscall	perform 100,000 \fIgetpid\fP system calls
csw	perform 10,000 context switches using signals
signocsw	send 10,000 signals to yourself
pipeself4	send 10,000 4-byte messages to yourself
pipeself512	send 10,000 512-byte messages to yourself
pipediscard4	send 10,000 4-byte messages to child who discards
pipediscard512	send 10,000 512-byte messages to child who discards
pipeback4	exchange 10,000 4-byte messages with child
pipeback512	exchange 10,000 512-byte messages with child
forks0	fork-exit-wait 1,000 times
forks1k	sbrk(1024), fault page, fork-exit-wait 1,000 times
forks100k	sbrk(102400), fault pages, fork-exit-wait 1,000 times
vforks0	vfork-exit-wait 1,000 times
vforks1k	sbrk(1024), fault page, vfork-exit-wait 1,000 times
vforks100k	sbrk(102400), fault pages, vfork-exit-wait 1,000 times
execs0null	fork-exec ``null job''-exit-wait 1,000 times
execs0null (1K env)	execs0null above, with 1K environment added
execs1knull	sbrk(1024), fault page, fork-exec ``null job''-exit-wait 1,000 times
execs1knull (1K env)	execs1knull above, with 1K environment added
execs100knull	sbrk(102400), fault pages, fork-exec ``null job''-exit-wait 1,000 times
vexecs0null	vfork-exec ``null job''-exit-wait 1,000 times
vexecs1knull	sbrk(1024), fault page, vfork-exec ``null job''-exit-wait 1,000 times
vexecs100knull	sbrk(102400), fault pages, vfork-exec ``null job''-exit-wait 1,000 times
execs0big	fork-exec ``big job''-exit-wait 1,000 times
execs1kbig	sbrk(1024), fault page, fork-exec ``big job''-exit-wait 1,000 times
execs100kbig	sbrk(102400), fault pages, fork-exec ``big job''-exit-wait 1,000 times
vexecs0big	vfork-exec ``big job''-exit-wait 1,000 times
vexecs1kbig	sbrk(1024), fault page, vfork-exec ``big job''-exit-wait 1,000 times
vexecs100kbig	sbrk(102400), fault pages, vfork-exec ``big job''-exit-wait 1,000 times
.TE
.ce
Table 1. Kernel Benchmark programs.
.PP
The results of these tests are shown in Table 2.  If the 4.1BSD results
are scaled to reflect their being run on a VAX-11/750, they
correspond closely to those found in [Joy80].\**
.FS
\** We assume that a VAX-11/750 runs at 60% of the speed of a VAX-11/780
(not considering floating point operations).
.FE
.TS
center box;
c || c s s || c s s || c s s
c || c s s || c s s || c s s
c || c | c | c || c | c | c || c | c | c
l || n | n | n || n | n | n || n | n | n.
Berkeley Software Distribution UNIX Systems
_
Test	Elapsed Time	User Time	System Time
\^	4.1	4.2	4.3	4.1	4.2	4.3	4.1	4.2	4.3
=
syscall	28.0	29.0	23.0	4.5	5.3	3.5	23.9	23.7	20.4
csw	45.0	60.0	45.0	3.5	4.3	3.3	19.5	25.4	19.0
signocsw	16.5	23.0	16.0	1.9	3.0	1.1	14.6	20.1	15.2
pipeself4	21.5	29.0	26.0	1.1	1.1	0.8	20.1	28.0	25.6
pipeself512	47.5	59.0	55.0	1.2	1.2	1.0	46.1	58.3	54.2
pipediscard4	32.0	42.0	36.0	3.2	3.7	3.0	15.5	18.8	15.6
pipediscard512	61.0	76.0	69.0	3.1	2.1	2.0	29.7	36.4	33.2
pipeback4	57.0	75.0	66.0	2.9	3.2	3.3	25.1	34.2	29.7
pipeback512	110.0	138.0	125.0	3.1	3.4	2.2	52.2	65.7	57.7
forks0	37.5	41.0	22.0	0.5	0.3	0.3	34.5	37.6	21.5
forks1k	40.0	43.0	22.0	0.4	0.3	0.3	36.0	38.8	21.6
forks100k	217.5	223.0	176.0	0.7	0.6	0.4	214.3	218.4	175.2
vforks0	34.5	37.0	22.0	0.5	0.6	0.5	27.3	28.5	17.9
vforks1k	35.0	37.0	22.0	0.6	0.8	0.5	27.2	28.6	17.9
vforks100k	35.0	37.0	22.0	0.6	0.8	0.6	27.6	28.9	17.9
execs0null	97.5	92.0	66.0	3.8	2.4	0.6	68.7	82.5	48.6
execs0null (1K env)	197.0	229.0	75.0	4.1	2.6	0.9	167.8	212.3	62.6
execs1knull	99.0	100.0	66.0	4.1	1.9	0.6	70.5	86.8	48.7
execs1knull (1K env)	199.0	230.0	75.0	4.2	2.6	0.7	170.4	214.9	62.7
execs100knull	283.5	278.0	216.0	4.8	2.8	1.1	251.9	269.3	202.0
vexecs0null	100.0	92.0	66.0	5.1	2.7	1.1	63.7	76.8	45.1
vexecs1knull	100.0	91.0	66.0	5.2	2.8	1.1	63.2	77.1	45.1
vexecs100knull	100.0	92.0	66.0	5.1	3.0	1.1	64.0	77.7	45.6
execs0big	129.0	201.0	101.0	4.0	3.0	1.0	102.6	153.5	92.7
execs1kbig	130.0	202.0	101.0	3.7	3.0	1.0	104.7	155.5	93.0
execs100kbig	318.0	385.0	263.0	4.8	3.1	1.1	286.6	339.1	247.9
vexecs0big	128.0	200.0	101.0	4.6	3.5	1.6	98.5	149.6	90.4
vexecs1kbig	125.0	200.0	101.0	4.7	3.5	1.3	98.9	149.3	88.6
vexecs100kbig	126.0	200.0	101.0	4.2	3.4	1.3	99.5	151.0	89.0
.TE
.ce
Table 2. Kernel Benchmark results (all times in seconds).
.PP
In studying the measurements we found that the basic system call
and context switch overhead did not change significantly
between 4.1BSD and 4.2BSD.  The \fIsignocsw\fP results were caused by
the changes to the \fIsignal\fP interface, resulting
in an additional subroutine invocation for each call, not
to mention additional complexity in the system's implementation.
.PP
The times for the use of pipes are significantly higher under
4.2BSD because of their implementation on top of the interprocess
communication facilities.  Under 4.1BSD pipes were implemented
without the complexity of the socket data structures and with
simpler code.  Further, while not obviously a factor here,
4.2BSD pipes have less system buffer space provided them than
4.1BSD pipes.
.PP
The \fIexec\fP tests shown in Table 2 were performed with 34 bytes of
environment information under 4.1BSD and 40 bytes under 4.2BSD.
To figure the cost of passing data through the environment,
the execs0null and execs1knull tests were rerun with
1065 additional bytes of data.  The results are shown in Table 3.
.TS
center box;
c || c s || c s || c s
c || c s || c s || c s
c || c | c || c | c || c | c
l || n | n || n | n || n | n.
Test	Real	User	System
\^	4.1	4.2	4.1	4.2	4.1	4.2
=
execs0null	197.0	229.0	4.1	2.6	167.8	212.3
execs1knull	199.0	230.0	4.2	2.6	170.4	214.9
.TE
.ce
Table 3. Benchmark results with ``large'' environment (all times in seconds).
.PP
These results show that passing argument data is significantly
slower than under 4.1BSD: 121 \(*ms/byte versus 93 \(*ms/byte.  Even using
this factor to adjust the basic overhead of an \fIexec\fP system
call, this facility is more costly under 4.2BSD than under 4.1BSD.
.NH 2
Path name translation
.PP
The single most expensive function performed by the kernel
is path name translation.
This has been true in almost every UNIX kernel [Mosher80];
we find that our general time sharing systems do about
500,000 name translations per day.
.PP
Name translations became more expensive in 4.2BSD for several reasons.
The single most expensive addition was the symbolic link.
Symbolic links
have the effect of increasing the average number of components
in path names to be translated.
As an insidious example,
consider the system manager that decides to change /tmp
to be a symbolic link to /usr/tmp.
A name such as /tmp/tmp1234 that previously required two component
translations
now requires four component translations plus the cost of reading
the contents of the symbolic link.
.PP
The new directory format also changes the characteristics of
name translation.
The more complex format requires more computation to determine
where to place new entries in a directory.
Conversely, the additional information allows the system to only
look at active entries when searching;
hence searches of directories that had once grown large
but currently have few active entries are checked quickly.
The new format also stores the length of each name so that
costly string comparisons are only done on names that are the
same length as the name being sought.
.PP
The net effect of the changes is that the average time to
translate a path name in 4.2BSD is 24.2 milliseconds,
representing 40% of the time processing system calls,
that is, 19% of the total cycles in the kernel,
or 11% of all cycles executed on the machine.
The times are shown in Table 4.  We have no comparable times
for \fInamei\fP under 4.1 though they are certain to
be significantly less.
.TS
center box;
l c c
l n n.
part	time	% of kernel
_
self	14.3 ms/call	11.3%
child	9.9 ms/call	7.9%
_
total	24.2 ms/call	19.2%
.TE
.ce
Table 4. Call times for \fInamei\fP in 4.2BSD.
.NH 2
Clock processing
.PP
Nearly 25% of the time spent in the kernel is spent in the clock
processing routines.
(This is a clear indication that to avoid sampling bias when profiling the
kernel with our tools
we need to drive them from an independent clock.)
These routines are responsible for implementing timeouts,
scheduling the processor,
maintaining kernel statistics,
and tending various hardware operations such as
draining the terminal input silos.
Only minimal work is done in the hardware clock interrupt
routine (at high priority); the rest is performed (at a lower priority)
in a software interrupt handler scheduled by the hardware interrupt
handler.
In the worst case, with a clock rate of 100 Hz
and with every hardware interrupt scheduling a software
interrupt, the processor must field 200 interrupts per second.
The overhead of simply trapping and returning
is 3% of the machine cycles;
figuring out that there is nothing to do
requires an additional 2%.
.NH 2
Terminal multiplexors
.PP
The terminal multiplexors supported by 4.2BSD have programmable receiver
silos that may be used in two ways.
With the silo disabled, each character received causes an interrupt
to the processor.
Enabling the receiver silo allows the silo to fill before
generating an interrupt, allowing multiple characters to be read
for each interrupt.
At low rates of input, received characters will not be processed
for some time unless the silo is emptied periodically.
The 4.2BSD kernel uses the input silos of each terminal multiplexor,
and empties each silo on each clock interrupt.
This allows high input rates without the cost of per-character interrupts
while assuring low latency.
However, as character input rates on most machines are usually
low (about 25 characters per second),
this can result in excessive overhead.
At the current clock rate of 100 Hz, a machine with 5 terminal multiplexors
configured makes 500 calls to the receiver interrupt routines per second.
In addition, to achieve acceptable input latency
for flow control, each clock interrupt must schedule
a software interrupt to run the silo draining routines.\**
.FS
\** It is not possible to check the input silos at
the time of the actual clock interrupt without modifying the terminal
line disciplines, as the input queues may not be in a consistent state \**.
.FE
.FS
\** This implies that the worst case estimate for clock processing
is the basic overhead for clock processing.
.FE
.NH 2
Process table management
.PP
In 4.2BSD there are numerous places in the kernel where a linear search
of the process table is performed:
.IP \(bu 3
in \fIexit\fP to locate and wakeup a process's parent;
.IP \(bu 3
in \fIwait\fP when searching for \fB\s-2ZOMBIE\s+2\fP and
\fB\s-2STOPPED\s+2\fP processes;
.IP \(bu 3
in \fIfork\fP when allocating a new process table slot and
counting the number of processes already created by a user;
.IP \(bu 3
in \fInewproc\fP, to verify
that a process id assigned to a new process is not currently
in use;
.IP \(bu 3
in \fIkill\fP and \fIgsignal\fP to locate all processes to
which a signal should be delivered;
.IP \(bu 3
in \fIschedcpu\fP when adjusting the process priorities every
second; and
.IP \(bu 3
in \fIsched\fP when locating a process to swap out and/or swap
in.
.LP
These linear searches can incur significant overhead.  The rule
for calculating the size of the process table is:
.DS
nproc = 20 + 8 * maxusers
.DE
that means a 48 user system will have a 404 slot process table.
With the addition of network services in 4.2BSD, as many as a dozen
server processes may be maintained simply to await incoming requests.
These servers are normally created at boot time which causes them
to be allocated slots near the beginning of the process table.  This
means that process table searches under 4.2BSD are likely to take
significantly longer than under 4.1BSD.  System profiling shows
that as much as 20% of the time spent in the kernel on a loaded
system (a VAX-11/780) can be spent in \fIschedcpu\fP and, on average,
5-10% of the kernel time is spent in \fIschedcpu\fP.
The other searches of the proc table are similarly affected.
This shows the system can no longer tolerate using linear searches of
the process table.
.NH 2
File system buffer cache
.PP
The trace facilities described in section 2.3 were used
to gather statistics on the performance of the buffer cache.
We were interested in measuring the effectiveness of the
cache and the read-ahead policies.
With the file system block size in 4.2BSD four to
eight times that of a 4.1BSD file system, we were concerned
that large amounts of read-ahead might be performed without
being used.  Also, we were interested in seeing if the
rules used to size the buffer cache at boot time were severely
affecting the overall cache operation.
.PP
The tracing package was run over a three hour period during
a peak mid-afternoon period on a VAX 11/780 with four megabytes
of physical memory.
This resulted in a buffer cache containing 400 kilobytes of memory
spread among 50 to 200 buffers
(the actual number of buffers depends on the size mix of
disk blocks being read at any given time).
The pertinent configuration information is shown in Table 5.
.TS
center box;
l l l l.
Controller	Drive	Device	File System
_
DEC MASSBUS	DEC RP06	hp0d	/usr
Emulex SC780	Fujitsu Eagle	hp1a	/usr/spool/news
\^	Fujitsu Eagle	hp2a	/tmp
.TE
.ce
Table 5. Active file systems during buffer cache tests.
.PP
During the test period the load average ranged from 2 to 13
with an average of 5.
The system had no idle time, 43% user time, and 57% system time.
The system averaged 90 interrupts per second
(excluding the system clock interrupts),
220 system calls per second,
and 50 context switches per second (40 voluntary, 10 involuntary).
.PP
The active virtual memory (the sum of the address space sizes of
all jobs that have run in the previous twenty seconds)
over the period ranged from 2 to 6 megabytes with an average
of 4 megabytes.
There was no swapping, though the page daemon was inspecting
about 25 pages per second.
.PP
On average 250 requests to read disk blocks were initiated
per second.
These include read requests for file blocks made by user
programs as well as requests initiated by the system.
System reads include requests for indexing information to determine
where a file's next data block resides,
file system layout maps to allocate new data blocks,
and requests for directory contents needed to do path name translations.
.PP
On average, an 85% cache hit rate was observed for read requests.
Thus only 37 disk reads were initiated per second.
In addition, 5 read-ahead requests were made each second
filling about 20% of the buffer pool.
Despite the policies to rapidly reuse read-ahead buffers
that remain unclaimed, more than 90% of the read-ahead
buffers were used.
.PP
These measurements showed that the buffer cache was working
effectively.  Independent tests have also shown that the size
of the buffer cache may be reduced significantly on memory-poor
systems without severe effects;
we have not yet tested this hypothesis [Shannon83].
.NH 2
Network subsystem
.PP
The overhead associated with the
network facilities found in 4.2BSD is often
difficult to gauge without profiling the system.
This is because most input processing is performed
in modules scheduled with software interrupts.
As a result, the system time spent performing protocol
processing is rarely attributed to the processes that
really receive the data.  Since the protocols supported
by 4.2BSD can involve significant overhead this was a serious
concern.  Results from a profiled kernel show an average
of 5% of the system time is spent
performing network input and timer processing in our environment
(a 3Mb/s Ethernet with most traffic using TCP).
This figure can vary significantly depending on
the network hardware used, the average message
size, and whether packet reassembly is required at the network
layer.  On one machine we profiled over a 17 hour
period (our gateway to the ARPANET)
206,000 input messages accounted for 2.4% of the system time,
while another 0.6% of the system time was spent performing
protocol timer processing.
This machine was configured with an ACC LH/DH IMP interface
and a DMA 3Mb/s Ethernet controller.
.PP
The performance of TCP over slower long-haul networks
was degraded substantially by two problems.
The first problem was a bug that prevented round-trip timing measurements
from being made, thus increasing retransmissions unnecessarily.
The second was a problem with the maximum segment size chosen by TCP:
it was well-tuned for Ethernet, but poorly chosen for
the ARPANET, where it causes packet fragmentation.  (The maximum
segment size was actually negotiated upwards to a value that
resulted in excessive fragmentation.)
.PP
When benchmarked in Ethernet environments the main memory buffer management
of the network subsystem presented some performance anomalies.
The overhead of processing small ``mbufs'' severely affected throughput for a
substantial range of message sizes.
In spite of the fact that most system utilities made use of the
throughput-optimal 1024 byte size, user processes faced large degradations
for some arbitrary sizes.  This was especially true for TCP/IP
transmissions [Cabrera84, Cabrera85].
.NH 2
Virtual memory subsystem
.PP
We ran a set of tests intended to exercise the virtual
memory system under both 4.1BSD and 4.2BSD.
The tests are described in Table 6.
The test programs dynamically allocated
a 7.3 Megabyte array (using \fIsbrk\fP\|(2)) then referenced
pages in the array either sequentially, in a purely random
fashion, or such that the distance between
successive pages accessed was randomly selected from a Gaussian
distribution.  In the last case, successive runs were made with
increasing standard deviations.
.TS
center box;
l l.
seqpage	sequentially touch pages, 10 iterations
seqpage-v	as above, but first make \fIvadvise\fP\|(2) call
randpage	touch random page 30,000 times
randpage-v	as above, but first make \fIvadvise\fP call
gausspage.1	30,000 Gaussian accesses, standard deviation of 1
gausspage.10	as above, standard deviation of 10
gausspage.30	as above, standard deviation of 30
gausspage.40	as above, standard deviation of 40
gausspage.50	as above, standard deviation of 50
gausspage.60	as above, standard deviation of 60
gausspage.80	as above, standard deviation of 80
gausspage.inf	as above, standard deviation of 10,000
.TE
.ce
Table 6. Paging benchmark programs.
.PP
The results in Table 7 show how the additional
memory requirements
of 4.2BSD can generate more work for the paging system.
Under 4.1BSD
the system used 0.5 of the 4.5 megabytes of physical memory
on the test machine;
under 4.2BSD it used nearly 1 megabyte of physical memory.\**
.FS
\** The 4.1BSD system used for testing was really a 4.1a
system
with networking facilities and code to support
remote file access.  The
4.2BSD system also included the remote file access code.
Since both
systems would be larger than similarly configured ``vanilla''
4.1BSD or 4.2BSD systems, we consider our conclusions to still be valid.
.FE
This resulted in more page faults and, hence, more system time.
To establish a common ground on which to compare the paging
routines of each system, we instead compare the average page fault
service times for those test runs that had a statistically significant
number of random page faults.  These figures, shown in Table 8, show
no significant difference between the two systems in
the area of page fault servicing.  We currently have
no explanation for the results of the sequential
paging tests.
.TS
center box;
l || c s || c s || c s || c s
l || c s || c s || c s || c s
l || c | c || c | c || c | c || c | c
l || n | n || n | n || n | n || n | n.
Test	Real	User	System	Page Faults
\^	4.1	4.2	4.1	4.2	4.1	4.2	4.1	4.2
=
seqpage	959	1126	16.7	12.8	197.0	213.0	17132	17113
seqpage-v	579	812	3.8	5.3	216.0	237.7	8394	8351
randpage	571	569	6.7	7.6	64.0	77.2	8085	9776
randpage-v	572	562	6.1	7.3	62.2	77.5	8126	9852
gausspage.1	25	24	23.6	23.8	0.8	0.8	8	8
gausspage.10	26	26	22.7	23.0	3.2	3.6	2	2
gausspage.30	34	33	25.0	24.8	8.6	8.9	2	2
gausspage.40	42	81	23.9	25.0	11.5	13.6	3	260
gausspage.50	113	175	24.2	26.2	19.6	26.3	784	1851
gausspage.60	191	234	27.6	26.7	27.4	36.0	2067	3177
gausspage.80	312	329	28.0	27.9	41.5	52.0	3933	5105
gausspage.inf	619	621	82.9	85.6	68.3	81.5	8046	9650
.TE
.ce
Table 7. Paging benchmark results (all times in seconds).
.TS
center box;
c || c s || c s
c || c | c || c | c
l || n | n || n | n.
Test	Page Faults	PFST
\^	4.1	4.2	4.1	4.2
=
randpage	8085	9776	791	789
randpage-v	8126	9852	765	786
gausspage.inf	8046	9650	848	844
.TE
.ce
Table 8. Page fault service times (all times in microseconds).