<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
<HTML VERSION="2.0">
<HEAD>
<!-- -->
<!-- WEBMAGIC VERSION NUMBER="2.0.1" -->
<!-- WEBMAGIC TRANSLATION NAME="ServerRoot" SRC="/var/www/htdocs/" DST="/" -->
<!-- WEBMAGIC TRANSLATION NAME="ProjectRoot" SRC="./" DST="" -->
<TITLE>WebStone FAQ</TITLE>
</HEAD>
<BODY>
<P><!-- Changed by: Michael Blakeley, 9-Nov-1995 --></P>
<H1><IMG SRC="webstone.gif" WIDTH="534" HEIGHT="174" SGI_SETWIDTH SGI_SETHEIGHT SGI_FULLPATH="/disk6/WebStone-2.0/doc/webstone.gif"></H1>
<CENTER><H1 ALIGN="CENTER">WebStone</H1>
</CENTER><CENTER><H2 ALIGN="CENTER">Frequently Asked Questions, with Answers</H2>
</CENTER><CENTER><ADDRESS ALIGN="CENTER"><A HREF="mailto:schan@engr.sgi.com">Stephen Chan, schan@engr.sgi.com</A></ADDRESS>
</CENTER><CENTER><ADDRESS ALIGN="CENTER"><A HREF="http://www.sgi.com/Products/WebFORCE/">WebFORCE</A> Technical Marketing, <A HREF="http://www.sgi.com">Silicon Graphics</A></ADDRESS>
</CENTER><CENTER><ADDRESS ALIGN="CENTER">Last revised: 9 November 1995</ADDRESS>
</CENTER><HR>
<P><STRONG>This document answers frequently-asked questions about WebStone.</STRONG> </P>
<UL>
<LI><A HREF="#meta-FAQ">Meta-FAQ</A>: What is this document? Where can I get a copy?
<LI><A HREF="#diff">What is the difference between WebStone 1.1 and WebStone 2.0?</A>
<LI><A HREF="#compare1.1&amp;2">Can I compare WebStone 1.1 and WebStone 2.0 numbers against each other?</A>
<LI><A HREF="#what-is">What is WebStone?</A>
<LI><A HREF="#webperf">What about Webperf?</A>
<LI><A HREF="#what-does">What does WebStone do?</A>
<LI><A HREF="#_wmh2_815967937"><STRONG>Feature Enhancements in WebStone 2.0</STRONG></A>
<LI><A HREF="#does-not">What doesn't WebStone do?</A>
<LI><A HREF="#obtaining">Where can I get WebStone?</A>
<LI><A HREF="#running">How do I run WebStone?</A>
<UL>
<LI>Experimental GUI
</UL>
<LI><A HREF="#common-problems">Common problems when running WebStone</A>
<UL>
<LI><A HREF="#swap-space">Out of swap space</A>
<LI><A HREF="#timing-info">Error reading timing info</A>
</UL>
<LI><A HREF="#interpreting">What do the results mean?</A>
<LI><A HREF="#majordomo">I'm still having problems. Where can I get help?</A>
<LI><A HREF="#legal">Legal issues</A>
</UL>
<P>If you have comments about this document, please forward them to the <A HREF="mailto:mblakele@engr.sgi.com">author</A>. </P>
<HR>
<H2><A NAME="meta-FAQ">Meta-FAQ: What is this document? Where can I get a copy?</A></H2>
<P>This is a list of answers to Frequently Asked Questions (FAQ) about WebStone.
The latest copy is always available at <A HREF="http://www.sgi.com/Products/WebFORCE/WebStone">http://www.sgi.com/Products/WebFORCE/WebStone/</A> and via the WebStone mailing list. The FAQ is periodically posted to the <A HREF="#majordomo">WebStone mailing list</A>, and to the USENET newsgroup <A HREF="news:comp.benchmarks">comp.benchmarks</A>. </P>
<HR>
<H2><A NAME="diff">What is the difference between WebStone 1.1 and WebStone 2.0?</A></H2>
<P>WebStone 2.0 is a rewrite of the WebStone 1.1 code. Significant changes
have been made to the code, the fileset, and the run rules. Many bugs
were eliminated, support for additional platforms was added, and many
new features were introduced. WebStone 1.1 and WebStone 2.0 numbers
cannot be compared, since so much has changed. In general, WebStone 1.1
will give higher connections/second values, but lower throughput numbers,
than WebStone 2.0.</P>
<HR>
<H2><A NAME="compare1.1&amp;2">Can I compare WebStone 1.1 and WebStone 2.0 numbers against each other?</A></H2>
<P>Absolutely NOT! WebStone 1.1 numbers are based on a different fileset, as
well as an older version of the benchmarking software. The WebStone 1.1
fileset had a smaller average filesize, so its connections-per-second
numbers will tend to be higher (all other things being equal). The WebStone
2.0 fileset is based on observations of several real-world sites, and the
distribution of the filesizes found there. This fileset is also similar to
the fileset chosen by the SPEC committee for their benchmark.</P>
<P>While it is possible to convert the 1.1 fileset to a 2.0 format and then
test it, the resulting numbers will not be the same, because the underlying
software used to perform the testing has changed. WebStone 1.1 was also
heavily abused because of its lack of run rules and reporting rules. It
is recommended that everyone move to WebStone 2.0.</P>
<HR>
<H2><A NAME="what-is">What is WebStone?</A></H2>
<P>WebStone is a highly-configurable client-server benchmark for HTTP servers. </P>
<P>The original WebStone benchmark was released in March, 1995. The original
white paper describing this benchmark is available from <A HREF="http://www.sgi.com/Products/WebFORCE/WebStone">http://www.sgi.com/Products/WebFORCE/WebStone/</A>. </P>
<P>WebStone is not a proprietary benchmark - it is an open benchmark. The source
code is freely available, and anyone can examine it. By design, WebStone
does not unfairly favor SGI, Netscape, or any other company - it is simply
a performance measurement tool. </P>
<HR>
<H2><A NAME="webperf">What about Webperf?</A></H2>
<P>A SPEC SFS working group is presently adapting SPEC SFS to Web server benchmarking.
SGI's WebStone team is part of this working group, and we fully support
the effort. WebStone is available to meet immediate Web benchmarking
needs - not to confuse the public.</P>
<P>Basically, if you like WebStone, use it. When SPEC releases Webperf, check
it out.</P>
<HR>
<H2><A NAME="what-does">What does WebStone do?</A></H2>
<P>WebStone makes a user-configurable number of HTTP 1.0 GET requests for specific
pages on a Web server. Any Web server can be tested, and any HTML content
can be used. </P>
<P>WebStone measures the throughput and latency of each HTTP transfer. By default,
only statistical data are returned, but the user may optionally request
data for each and every transaction. WebStone also reports transaction failures,
which translate into those little &quot;Connection Refused&quot; alerts in the real
world.</P>
<HR>
<H2><A NAME="_wmh2_815967937">Feature Enhancements in WebStone 2.0</A></H2>
<P>WebStone 2.0 includes support for testing proxy servers, as well as more
flexible handling of URLs that enables WebStone to test a wide variety of
content types. The code has also been significantly rewritten so that it
is more robust and portable.</P>
<HR>
<H2><A NAME="does-not">What doesn't WebStone do?</A></H2>
<P>WebStone does not yet do any of the following (listed roughly in order of
planned implementation): </P>
<UL>
<LI>POST transactions, widely used for CGI-bin scripts
</UL>
<P>If you have additional requests for WebStone functionality, contact the <A HREF="#majordomo">WebStone mailing list</A>. </P>
<HR>
<H2><A NAME="obtaining">Where can I get WebStone?</A></H2>
<P>The latest copy of WebStone, and of this FAQ, is available at <A HREF="http://www.sgi.com/Products/WebFORCE/WebStone">http://www.sgi.com/Products/WebFORCE/WebStone</A>. </P>
<HR>
<H2><A NAME="running">How do I run WebStone?</A></H2>
<P>WebStone includes a README file which may answer some of your questions.
However, here's a brief overview. </P>
<OL>
<LI><A HREF="#test-bed">Set up your test-bed</A>
<LI><A HREF="#loading-webstone">Load WebStone onto your webmaster </A>
<LI><A HREF="#edit-runbench">Edit <CODE>testbed</CODE></A>
<LI><A HREF="#file-list">Write a file list</A>
<LI><A HREF="#start-benchmark">Start the benchmark</A>
<LI><A HREF="#collect-results">Collect the results</A>
</OL>
<H3>WebStone now has an experimental GUI!</H3>
<P>To try the GUI, make sure you have a Web browser, and run <CODE>./webstone -gui</CODE> from the WebStone base directory. You don't need to hand-edit the <CODE>testbed</CODE> file anymore, but you still need to edit <CODE>filelist</CODE> if you want to change the workload. This may not be necessary, since we've
distributed two real-world workload models with WebStone. </P>
<P>These are the steps to follow to run the GUI: </P>
<OL>
<LI><A HREF="#test-bed">Set up your test-bed</A>
<LI><A HREF="#loading-webstone">Load WebStone onto your webmaster </A>
<LI><CODE>./configure</CODE>
<LI><CODE>./webstone -gui</CODE>
</OL>
<P>If the GUI appears to hang, you can kill stray WebStone processes with <CODE>./webstone -kill</CODE>. </P>
<H3><A NAME="test-bed">Setting up your test bed</A></H3>
<P>Your test bed should include, at minimum, two machines and a network. The
first machine is your Web server - it can be any HTTP 1.0-compliant server.
As far as WebStone is concerned, it's a black box. </P>
<P>You'll also need a webmaster and one or more webclients. These should be
Unix hosts, since WebStone hasn't been tested on any non-Unix operating
systems (feel free to port it, if you like). The webmaster and the webclient
may be the same machine, if desired: we've run up to 120 webclients and
the webmaster on a single 32MB Indy. </P>
<P>You must establish a trust relationship between your webmaster and webclients.
Each webclient must be set up so that the webmaster can use <CODE>rexec</CODE> to execute WebStone on the client. This can be done with a guest account.
It's also helpful if root can <CODE>rexec</CODE> and <CODE>rcp</CODE> to the webclients, and even to the web server. This requires editing the <CODE>/.rhosts</CODE> and <CODE>/etc/hosts.equiv</CODE> files. Here's an example: </P>
<P><CODE>/.rhosts</CODE> (on each webclient) </P>
<PRE>
webmaster root
</PRE>
<P><CODE>/etc/hosts.equiv</CODE> (on each webclient) </P>
<PRE>
webmaster
</PRE>
<P>To make best use of WebStone, your webmaster should be equipped with a C
compiler, Perl, awk, and a Web browser. A data analysis program such as
GnuPlot may also come in handy. </P>
<P>Connect the webclients, the webmaster, and the web server to a common network.
To check your setup, load a browser on one of the webclients, and make sure
it can connect to the Web server. </P>
<H3><A NAME="loading-webstone">Loading WebStone</A></H3>
<P>Copy the WebStone distribution onto your webmaster. If your webmaster isn't
an SGI IRIX 5.3 machine, you'll have to make the binaries. Type <KBD>make</KBD> from the WebStone directory - this creates the following binaries: </P>
<PRE>
webmaster
webclient
</PRE>
<P>Common porting errors: </P>
<UL>
<LI>If you want to use gcc instead of cc, change the CC variable in <CODE>src/Makefile</CODE>.
<LI>Many System V-based Unix implementations (such as Solaris 2.x) will need <CODE>LIBS = -lsocket -lnsl</CODE> in <CODE>src/Makefile</CODE>.
<LI>Some users may also need to comment out the definition of <CODE>rexec</CODE> in <CODE>webmaster.c</CODE>.
</UL>
<P>If you encounter other errors, please contact the <A HREF="#majordomo">WebStone mailing list</A>. </P>
<P>Type <CODE>make install</CODE> to put the binaries in the <CODE>bin</CODE> directory. </P>
<P>When you run WebStone, the <CODE>distribute</CODE> script automatically copies the <CODE>webclient</CODE> binary to the other client systems. If you're running diverse clients (e.g.,
a couple of Suns, a couple of BSD hosts), you'll want to comment the <CODE>distribute</CODE> script out of <CODE>bin/runbench</CODE>, and distribute host-specific versions of <CODE>webclient</CODE> by hand. </P>
<H3><A NAME="edit-runbench">Edit <CODE>testbed</CODE></A></H3>
<P>If you use the <CODE>webstone</CODE> script to automate WebStone, you'll want to edit the <CODE>conf/testbed</CODE> script. The <CODE>testbed</CODE> script contains several configurable parameters that WebStone relies on.
Here is an example: </P>
<PRE>
### BENCHMARK PARAMETERS -- EDIT THESE AS REQUIRED
ITERATIONS=&quot;3&quot;
MINCLIENTS=&quot;8&quot;
MAXCLIENTS=&quot;128&quot;
CLIENTINCR=&quot;8&quot;
TIMEPERRUN=&quot;30&quot;

### SERVER PARAMETERS -- EDIT AS REQUIRED
#PROXY=
SERVER=&quot;www&quot;
PORTNO=80
SERVERINFO=hinv
OSTUNINGFILES=&quot;/var/sysgen/master.d/bsd&quot;
WEBSERVERDIR=&quot;/usr/ns-home&quot;
WEBDOCDIR=&quot;$WEBSERVERDIR/docs&quot;
WEBSERVERTUNINGFILES=&quot;$WEBSERVERDIR/httpd-80/config/magnus.conf $WEBSERVERDIR/httpd-80/config/obj.conf&quot;

# WE NEED AN ACCOUNT WITH A FIXED PASSWORD, SO WE CAN REXEC
# THE WEBSTONE CLIENTS
CLIENTS=&quot;webstone1 webstone2 webstone3 webstone4 webstone5&quot;
CLIENTACCOUNT=guest
CLIENTPASSWORD=guest
CLIENTINFO=hinv
TMPDIR=/tmp
</PRE>
<P>Briefly, the first set of parameters means that the WebStone benchmark will
run from 8 clients to 128 clients, in increments of 8. Each increment will
run for 30 minutes, and the whole test will be repeated three times. This
test suite would take roughly 24 hours to complete. </P>
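<P>The arithmetic behind that 24-hour estimate can be sketched in a few lines of Python (an illustration only; the variable names mirror the testbed parameters above, and WebStone itself does no such calculation for you): </P>

```python
# Run-time arithmetic implied by the sample testbed parameters.
MINCLIENTS = 8
MAXCLIENTS = 128
CLIENTINCR = 8
ITERATIONS = 3
TIMEPERRUN = 30  # minutes per run

# One run per client-count increment: 8, 16, ..., 128 -> 16 increments.
increments = (MAXCLIENTS - MINCLIENTS) // CLIENTINCR + 1
total_minutes = increments * ITERATIONS * TIMEPERRUN
print(increments, total_minutes / 60)  # 16 24.0
```

So 16 increments, repeated 3 times at 30 minutes each, is 1440 minutes, or 24 hours of test time.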
<P>Why multiple iterations? The WebStone benchmark is a stochastic process,
so there will be variation from run to run, especially if your test file
sets have large files or if you approach overloading the server. Three iterations
is about the minimum you should run, just to see whether there is variation and
to gauge its amount. The <TT>TIMEPERRUN</TT> needs to be long enough to establish a steady state and allow it to dominate
the run. 30 minutes seems to be enough if the sizes of the files are small.
You may want to run the benchmark longer per run to minimize variation if
the files are large. </P>
<P>The second set of parameters means that we will test a server called &quot;www&quot;
at port 80 (note that the port number may be changed to accommodate proxy
servers or multiple servers on the same host). We will use five clients.
Also, we specify the location of a system tuning file (on Sun Solaris, one
could use /etc/system), and web server tuning files (specified for Netscape).
These files will be copied into the <CODE>runs</CODE> subdirectories for later reference. </P>
<P>Finally, we specify the WebStone account on the clients. Here, we use the
guest account, with a fixed password: guest. </P>
<H3><A NAME="file-list">Write a file list</A></H3>
<P>The basic WebStone tests expect a set of files to reside on the server to
be retrieved by the <TT>webstone</TT> client programs. The file list tells WebStone which files to retrieve. </P>
<P>It's possible to use an arbitrary set of fixed-length files for WebStone.
Although these files have the <TT>.html</TT> extension, they are used to represent files of many types. Basically, we
treat &quot;bits-as-bits&quot;. You can use the programs in the <TT>genfileset</TT> subdirectory to create the needed set of files, and copy them onto your
server: </P>
<PRE>
./webstone -genfiles
</PRE>
<P>The sample file list shipped with WebStone uses the files created by genfiles: </P>
<PRE>
# Sample filelist, abstracted from access logs
/file500.html 350 #500
/file5k.html 500 #5125
/file50k.html 140 #51250
/file500k.html 9 #512500
/file5m.html 1 #5248000
</PRE>
<P>This filelist consists of 5 different files. The number following the filename
is the weight of that file in the distribution. All the weights are summed,
and the frequency of each file is the weight of that file divided by
the total weight.</P>
<P>For example, in this fileset the weights add up to 1000. So the file500.html
page will occur 350 out of every 1000 requests, and file5m.html will occur once
every 1000 requests. </P>
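<P>The weighting scheme can be sketched in a few lines of Python (a hedged illustration of the idea; WebStone's own client implements the weighted selection in C): </P>

```python
import random

# Filelist entries as (URI, weight), matching the sample filelist.
filelist = [
    ("/file500.html", 350),
    ("/file5k.html", 500),
    ("/file50k.html", 140),
    ("/file500k.html", 9),
    ("/file5m.html", 1),
]

total = sum(w for _, w in filelist)            # 1000
freqs = {uri: w / total for uri, w in filelist}
print(freqs["/file500.html"])                  # 0.35: 350 of every 1000 requests

# Picking the next file to request, in proportion to its weight:
uri = random.choices([u for u, _ in filelist],
                     weights=[w for _, w in filelist])[0]
```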
<P>Note that each URI should be changed to a full URI when testing proxy servers.
For example, if the proxy server is called proxy, but the actual server
which stores the files is called seltzer1, you could use the following filelist:</P>
<PRE>
# Sample filelist, abstracted from access logs
http://seltzer1.sgi.com/file500.html 350 #500
http://seltzer1.sgi.com/file5k.html 500 #5125
http://seltzer1.sgi.com/file50k.html 140 #51250
http://seltzer1.sgi.com/file500k.html 9 #512500
http://seltzer1.sgi.com/file5m.html 1 #5248000
</PRE>
<P>This URI is the one which is passed to the proxy server, which in turn uses
it to fetch the file from seltzer1.sgi.com. Notice that the particular files
and the distribution are identical to the previous filelist. The other change
needed for testing proxy servers is to add an entry
&quot;PROXY=proxy&quot; to the testbed file and to specify the port where the proxy
server listens for requests.</P>
<P>Wherever possible, use the same pages for WebStone that you will use in
the real world. This means that you'll have a harder time comparing your
results with published results, but your results will more accurately reflect <STRONG>your</STRONG> situation. </P>
<H3><A NAME="start-benchmark">Start the benchmark</A></H3>
<P>Type <CODE>./webstone</CODE>. </P>
<P>The results of each run will be saved in a directory called <TT>runs</TT>. Note that the runbench script attempts to collect configuration information
about your clients and server, such as netstat results. You
may see some error messages if your clients don't have netstat or other
utilities. </P>
<H3><A NAME="collect-results">Collect the results</A></H3>
<P>The WebStone summary statistics generated by <TT>webmaster</TT> are saved by <TT>runbench</TT> in a date-stamped subdirectory of the <TT>runs</TT> directory in the current directory, for example: </P>
<PRE>
runs/950804_2304/run
</PRE>
<P>The script wscollect is provided as a tool for collecting the results of
all of the runs and generating a tab-delimited file with all of the results.
This file can be read into a spreadsheet or read by other analysis programs. </P>
<PRE>
wscollect runs &gt; runs.tabs
</PRE>
<P>An additional script called <TT>tabs2html</TT> will take a tab-delimited file and produce an HTML 3.0-style table of the
results: </P>
<PRE>
tabs2html runs.tabs &gt; runs.html
</PRE>
<HR>
<H2><A NAME="common-problems">Common problems when running WebStone</A></H2>
<H3><A NAME="swap-space">Out of swap space</A></H3>
<P>It's fairly common for the Web server under test to run out of swap space.
As a rule of thumb, make sure that you have swap space equal to the number
of server processes times the size of the largest test file. </P>
<P>For instance, if you're testing a 10MB file on a Netscape server with 64
processes, you'll need to have at least 640MB of swap space. <CITE>N.B.</CITE>: On SGI IRIX 5.x, you can substitute large amounts of <EM>virtual swap space</EM>, since Netscape doesn't actually use all the space it asks for. </P>
<P>See your operating system-specific administration guide for details on adding
and configuring swap space. </P>
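<P>For planning purposes, the rule of thumb above is just a multiplication; a minimal Python sketch (the function name is ours, not part of WebStone): </P>

```python
# Rule-of-thumb swap sizing: server processes x largest test file.
def swap_needed_mb(server_processes, largest_file_mb):
    """Minimum swap space, in MB, suggested by the FAQ's rule of thumb."""
    return server_processes * largest_file_mb

print(swap_needed_mb(64, 10))  # 640 -- the 640MB example above
```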
<H3><A NAME="timing-info">Error reading timing info</A></H3>
<P><STRONG>Question</STRONG>: </P>
<P>Running: </P>
<PRE>
webmaster -w webmaster -p 9990 -u flist -f config
</PRE>
<P>on jan.near.net </P>
<P>outputs: </P>
<PRE>
Waiting for READY from 6 clients
All READYs received
Sending GO to all clients
All clients started at Tue Aug 8 11:57:30 1995
Waiting for clients completion
Reading results
.Error second reading timing info from one of the clients:
Interrupted system call
web child 1 did not respond. 3456 bytes read
.Error second reading timing info from one of the clients:
Interrupted system call
web child 0 did not respond. 3456 bytes read
</PRE>
<P>What does the second reading of timing info contain? What might cause the second
read to fail while the first passes? </P>
<P><STRONG>Answer</STRONG>: </P>
<P>It's most likely that one of the WebStone clients died before it could report
results to the webmaster. We've squashed many circumstances in which this
happens, but bugs continue to appear, especially on systems we haven't tested. </P>
<P>We can't do much for this kind of problem without debugging traces. Edit <CODE>testbed</CODE>, and set the <CODE>DEBUG</CODE> parameter to <CODE>DEBUG=-d</CODE>, so that debugging info will be written to files named /tmp/webstone-debug.&lt;PID&gt;. </P>
<P>If you can replicate this problem with debugging turned on, please let us
know. We'd love to examine the traces. </P>
<P>Another possible source of problems with reading timing info is when a page
in the filelist did not get read by a client, but the webmaster was expecting
to find it. This can happen when the test time, number of clients, and filelist
distribution are set up so that a file which gets read infrequently does
not get read at all before the test period ends. This will be ironed out
in a later release of WebStone.</P>
<HR>
<H2><A NAME="interpreting">What do the results mean?</A></H2>
<P>WebStone primarily measures throughput (bytes/second) and latency (time
to complete a request). WebStone also reports pages/minute, connection rate
averages, and other numbers. Some of these may help you to sanity-check
the throughput measurements. </P>
<P>Two types of throughput are measured: aggregate and per-client. Both are
averaged over the entire test time and the entire client base. Aggregate
throughput is simply total bytes (body + header) transferred throughout
the test, divided by the total test time. Per-client throughput divides
aggregate throughput by the number of clients. </P>
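<P>The two throughput definitions can be sketched as follows (the byte and client counts here are hypothetical, not WebStone output): </P>

```python
# Aggregate and per-client throughput as defined above.
total_bytes = 5_400_000_000   # body + header bytes moved during the test
test_seconds = 30 * 60        # a 30-minute run
num_clients = 8

aggregate = total_bytes / test_seconds    # bytes/second across all clients
per_client = aggregate / num_clients      # average bytes/second per client
print(aggregate, per_client)  # 3000000.0 375000.0
```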
<P>Two types of latency are reported: connection latency and request latency.
For each metric, the mean time is provided, as well as the standard deviation
of all data, plus the minimum and maximum times. Connection latency reflects
the time taken to establish a connection, while request latency reflects
the time to complete the data transfer once the connection has been established. </P>
<P>User-perceived latency will include the sum of connection and request latencies,
plus any network latency due to WAN connections, routers, modems, etc. </P>
<P>WebStone also reports a metric called <EM>Little's Ls</EM>. <EM>Ls</EM> is derived from Little's Law, and reflects how much time is spent by the
server on request processing, rather than overhead and errors. <EM>Ls</EM> is also an indirect indicator of the average number of connections which
the web server has open at any particular instant. This number should stay
very close to the number of clients; if it doesn't, some clients are being denied
access to the server at any given time.</P>
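<P>Little's Law itself says that the average number of requests in a system equals the arrival rate times the mean time each request spends in the system (L = &#955; &#215; W). A small sketch with hypothetical numbers (not the exact formula WebStone's reporting code uses): </P>

```python
# Little's Law: L = lambda x W. Applied to a web server, the average
# number of in-flight connections equals the completed-connection rate
# times the mean time each connection is open. Numbers are hypothetical.
arrival_rate = 32.0    # completed connections per second
mean_latency = 0.25    # mean connection + request latency, in seconds

Ls = arrival_rate * mean_latency
print(Ls)  # 8.0 -- close to the client count of an 8-client run, as expected
```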
<P>If you load your Web servers high enough, you'll begin to see errors in
the results. That's fine (at least as far as WebStone is concerned). It
just means that your server is heavily loaded, and some clients aren't being
serviced before they time out. In fact, the number of errors at a given
load can be an excellent indicator of how your server will perform under
extremely heavy loads. </P>
<HR>
373 <HR>
374 <H2><A NAME="majordomo">I'm still having problems. Where can I get help?</A></H2>
375 <P>Subscribe to the WebStone mailing list! Send a message to <A HREF="mailto:majordomo@engr.sgi.com">majordomo@engr.sgi.com</A> - the subject doesn't matter, but the content should be: </P>
376 <PRE>
377 subscribe webstone
378 </PRE>
379 <P>You should receive a message shortly, confirming that you've been added
380 to the mailing list. You can send to the whole list at <A HREF="mailto:webstone@engr.sgi.com">webstone@engr.sgi.com</A> - the authors of WebStone read the list, and they'll do their best to help.
381 Other list members may also be able to help. </P>
382 <P>If you have access to USENET News, you can also read and post to <A HREF="news:comp.benchmarks">comp.benchmarks</A>. As with any newsgroup, read the FAQ before posting! </P>
383 <P>There's also a mailing list devoted to the performance limits of the HTTP
384 protocol. You can subscribe by sending e-mail to <A HREF="mailto:www-speed-request@tipper.oit.unc.edu">www-speed-request@tipper.oit.unc.edu</A> with the text </P>
385 <PRE>
386 subscribe &lt;your-email-address&gt;
387 </PRE>
<HR>
<H2><A NAME="legal">Legal Stuff</A></H2>
<P>This file and all files contained in the WebStone distribution are copyright
&#169; 1995, 1996 Silicon Graphics, Inc. </P>
<P>This software is provided without support and without any obligation on
the part of Silicon Graphics, Inc. to assist in its use, correction, modification
or enhancement. There is no guarantee that this software will be included
in future software releases, and it probably will not be included. </P>
<P>THIS SOFTWARE IS PROVIDED &quot;AS IS&quot; WITH NO WARRANTIES OF ANY KIND INCLUDING
THE WARRANTIES OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
OR ARISING FROM A COURSE OF DEALING, USAGE OR TRADE PRACTICE. </P>
<P>In no event will Silicon Graphics, Inc. be liable for any lost revenue or
profits or other special, indirect and consequential damages, even if Silicon
Graphics, Inc. has been advised of the possibility of such damages. </P>
</BODY>
</HTML>