AUTHOR: Bill Maltby <lfsbill@wlmcs.com>

SYNOPSIS: Boot any OS from any hard drive.

Many users have platforms capable of booting from any hard drive. This
hint is intended to make it easier for users to take advantage of this
capability with Lilo and LFS.

PREREQUISITES: Lilo version with "disk=" sections and "bios=" clause.

As is usual for me, I began explaining everything in the world while
writing this. But I came to my senses and wrote a (hopefully) more
palatable version (i.e. shorter) that I hope meets the needs of users
across a wide range of experience levels. Constructive suggestions are
welcome.

Also welcome are contributions regarding things strange to me, like
devfs, and any improvements in the text and scripts.

I have checked, and this hint *is* shorter than the Lilo README,
appearances to the contrary notwithstanding. :)
VIII.  MY CONFIGURATION
       B) Minimal fall-back, hda (W98 drive) default boot.
       C) Better fall-back, hda boot, hdb Linux root.
       D) Better fall-back, hdb default boot, bios= changes.
X.     TESTING AND GOTCHAS
XI.    A PARTIALLY GENERALIZED BOOTBLOCK INSTALL
XIII.  SOME MORE READING
My three primary goals for this process are:
a) faster reboot/recovery via an enhanced "fall-back" capability;
b) reduced risk of "downtime" during upgrades;
c) more convenience, by being able to boot from any hard drive in a
   node to any OS installed on any drive in that node.

Objective a) is supported by configuring a secondary drive that is
boot-ready and has a root file system containing everything needed to
continue service in "fall-back" mode if the primary drive is
unavailable due to corruption or hardware problems. It is kept up to
date via cron-invoked drive-to-drive copies of selected portions of my
primary drive.
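A minimal sketch of such a cron-driven copy (my own illustration here -
the function name and paths are hypothetical, not taken from this
hint's scripts):

```shell
# mirror_dirs SRC DST DIR...
# Copy each named directory tree from the primary root SRC onto the
# fall-back root DST, preserving ownership, permissions, times and
# symlinks (cp -a). Assumes DST is already mounted read-write.
mirror_dirs() {
    src=$1; dst=$2; shift 2
    for dir in "$@"; do
        mkdir -p "$dst/$dir"
        # "/." copies the directory's contents, not the directory itself
        cp -a "$src/$dir/." "$dst/$dir/"
    done
}
```

A small script calling, say, `mirror_dirs / /mnt/hdd7 etc home root`
can then be run from a nightly crontab entry.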
Objective b) is supported by designating a drive/partition as my
"sandbox" and using it for making bootable images that can be tested
"end-to-end" with no hardware changes. For testing, I can boot it by
just changing my BIOS boot sequence specification. If I trash that
drive, a BIOS setup change gets me booting my primary drive again.

Objective c) is supported by having every drive bootable as the
"primary" drive (from the BIOS point of view) and configuring the boot
loader to boot any of the valid OSs on any drive that will support
this.

This is implemented by making use of the ability of the BIOS to boot
any hard drive, and of the ability of system loaders like Lilo to
support that capability in an OS-independent fashion.

Recent BIOSs (ca. mid-1990s and later) have the ability to boot from
any hard disk in the node. Use of secondary drives as root file
systems is common practice, but I have not often seen this enhanced
boot ability exploited for the benefits it can provide.

Because the cost per gigabyte of storage is now commonly below $1.00
(US), it is feasible to consider secondary hard drives as reasonable
alternatives in recovery scenarios. When compared to the costs (in
time, money and lost availability) of traditional recovery activities,
a bootable secondary drive may be quite attractive. It is even more
attractive when you consider that the drive can also be used in daily
operations, with some safeguards preventing loss of its recovery
utility.
Common practice, when upgrading a node or building hard drive images
destined for other nodes, has required hardware reconfiguration
(jumper changes, removal of the primary drive or its cabling) in order
to adequately test "end-to-end". This increases the risk of something
going wrong and slows progress.

In my past life (before BIOSs had these capabilities) I provided
similar capabilities with a simple home-made switch attached to the
case that would swap drive assignments by changing the jumpering of
the drives. Since this was in the days when drives were much less
reliable than now, there were several occasions when I looked like a
hero when full service was restored by simply flipping a switch. Now
you can buy those switches ready-made. Better (maybe) is the fact that
you can accomplish almost the same function through the combination of
BIOS, boot loader and adequate planning and implementation.

Here is where I cut a lot of verbiage. Apply your knowledge of your
situation to see how you benefit. I'll just give some "one-liners"
here.

Workstation users who run different OSs on separate drives can recover
more quickly when the primary boot drive/partition fails or gets
corrupted. Boot floppy usage may become a "last resort" scenario.

Administrators responsible for high-availability systems on the cheap
can recover much more quickly than with typical alternative processes,
and might continue to provide service, possibly at reduced levels,
even when the primary drive has failed. This "buys time" while
awaiting the
Developers/experimenters who must engage in constant upgrades and
testing of their OS, as many (B)LFSers do, can reduce the risk of
catastrophe by keeping the primary boot stuff "sacrosanct" and
untouched by development activities. If you trash your development
drive to the point that it is not bootable or recoverable, change the
BIOS setting and reboot. You are back in business (hope you backed up
your development stuff).

When compiling for another node onto a drive that will be inserted in
the target node, you can configure the boot process to boot in that
node as any of the BIOS-bootable drives. You may not even need to
change jumpers, and should not need to remove any existing drive if a
"slot" is available on one of the IDE cables.

Lastly, all benefit because you have the full set of tools available
to recover the failed stuff. No more working in the restricted
boot-diskette environment (unless you get *really* inattentive).
As with any reasonably complex procedure, there are many opportunities
for mistakes and unexpected circumstances to foil your best efforts
against disaster. It is very important that you plan for and test any
recovery procedures that might be needed to restore service as quickly
as possible. A multi-tier plan that covers everything from minor
damage to a complete disaster should be in place and tested. Be sure
that your recovery media is reliable and available.

If you use devfs(d), you will need to make the necessary adjustments
on your own (for now) - I use a static /dev directory as provided by
(B)LFS. Contributions from devfs users are appreciated.

As I've seen posted somewhere, "If you break it, you get to keep the
pieces". I have *no* responsibility whatsoever for your results.

John McGinn - who posted to LFS development when he first discovered
and used the "bios=" Lilo parameter while creating a boot drive
destined for another box. That post may have saved me a lot of
research and additional experimentation.

The LFS and BLFS projects, and all those who constructively
participate, for the on-going education provided to me.
a) Determine the capabilities of the BIOS for any nodes involved in
   the process you will undertake. Be sure that they will support your
   intentions. As an example, one BIOS I have will allow selection of
   a non-primary drive for boot, but then will not allow another HD to
   boot if that drive fails to boot. Another BIOS may allow
   "fail-over" to other drives to be specified. This may be important
   if you are planning to use this hint's techniques to provide
   automatic "fail-over". You may need to temporarily add drives to a
   node to make this determination, as a BIOS may not allow selection
   unless it detects a drive.
b) Educate yourself. If you are somewhat new to these processes,
   reviewing and understanding some of the documents in the references
   at the end of this hint will make your efforts more likely to
   succeed.
c) Have a recovery plan, in place and tested, that is appropriate to
   your situation, as mentioned above.
d) A working GNU/Linux host node (preferably a (B)LFS one).
e) A hardware and drive partition configuration that will support your
   goals (placement of user data, mounting of various directories and
   so forth).
VIII. MY CONFIGURATION
A small private network with nodes having various CPUs (486 DX2/66,
AMD 5x86, Cyrix 6x86, AMD K6-III and others), various memory
configurations, some BIOSs that support secondary-drive boot and some
that may not (I don't know yet), and various OSs (recent CVS (B)LFS,
RH 6.2, RH 6.0, W95, W98). All nodes have multiple HDs.

I use Lilo. You may be able to use Grub, or another loader. I have not
tried any others, for lack of interest. As I figured, the benefits to
the list expected from the switch to Grub haven't (apparently, based
on list traffic) materialized. It seems the "weaknesses" in Lilo
traveled with the users and also affect Grub.

The platform upon which this procedure was developed is a workstation
configured as follows.

PC CHIPS M-571 mainboard (updated BIOS), AMD K6-III 380MHz, 256MB
SDRAM, 4GB primary drive with W98 (seldom used) /dev/hda, 20GB 10K RPM
drive with "pure" LFS pre-cvs (my normal boot drive) /dev/hdb, CD-RW
as /dev/hdc, 40GB 7800 RPM utility drive (my "fall-back") /dev/hdd.
The OS is a Pure LFS installation based on Ryan Oliver's 2.2.8
scripts, slightly modified. Tested with lilo versions 22.2 and 22.5
(which needs nasm installed). I've not tested the new master boot or
other new parameters available with 22.5, but the setup I used for
22.2 worked unchanged with 22.5.

My normal boot is /dev/hdb, first fall-back is /dev/hdd, second is
/dev/hda (W98) and last is my boot diskette. You need to adjust the
examples to account for the normal boot drive (primary drive from the
BIOS point of view) in your configuration.
My mount configuration (with non-significant entries edited out) is:

/dev/hdb7 on / type ext2 (rw)              # Normal root
/dev/hdb1 on /boot type ext2 (rw)          # Normal boot
/dev/hda1 on /mnt/hda1 type vfat (rw)      # Fall-back boot 2
/dev/hdd7 on /mnt/hdd7 type ext2 (rw)      # Fall-back root 1
/dev/hdd1 on /mnt/hdd7/boot type ext2 (rw) # Fall-back boot 1

Although I have separate /boot partitions, this is not mandatory. But
it does allow additional security, because I can leave /boot unmounted
(or mounted read-only) so it is less likely to be damaged. The
aggravation comes when you run Lilo and forget to (re)mount it
read-write. *sigh*

Of note above is the /mnt/hda1 listing. It is a 4GB drive dedicated to
W98, so there is no room for a separate boot/root LFS partition. That
is handled by configuring the Linux kernel to support vfat and
creating a directory in W98, /liloboot, with a copy of the needed
parts of my normal boot directory (actually, it has everything,
because I haven't taken the time yet to clean it up). The important
things are those needed when installing the Lilo boot blocks (kernel
image, boot.b, etc.) and the things needed at boot time. See the Lilo
man pages and README for a list of them.

WARNING! The normal /boot/boot.b is a symlink to boot-menu.b or some
equivalent. The Win* OSs I've used don't support symlinks, so you must
copy the target of the symlink into liloboot under the name boot.b.
Like so:

cp -avL /boot/boot.b /mnt/hda1/liloboot # Dereferences, copies target
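To see the dereferencing in action without touching real boot files,
here is a throwaway demonstration (the temporary paths and file
contents are illustrative only):

```shell
# A symlinked boot.b, copied with -L, lands as a regular file - which
# is what the vfat /liloboot directory requires.
demo=$(mktemp -d)
echo "pretend loader code" > "$demo/boot-menu.b"
ln -s boot-menu.b "$demo/boot.b"          # the usual /boot arrangement
cp -avL "$demo/boot.b" "$demo/liloboot.b" # copy is a plain file
```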
You must adjust these examples to fit your configuration and
intentions. I use each drive as a backup to the other, as needed. That
is, if /dev/hdb fails, I use /dev/hdd as my primary drive and /dev/hdb
becomes the "fail-over" drive until I have reason to switch them
again.

I run Lilo boot installs to all three of my hard drives. This way,
even if two of my drives fail, I can still boot *something* from the
remaining drive. I even have a minimal RH 6.0 that I can boot for
recovery purposes. I also keep a recovery diskette (see the BLFS book,
chapter 3) as a "last resort" tool. But I doubt I will need it again
(I had to use it while testing these procedures - more on that later).
A) General. I have structured my /etc/lilo components to reduce the
chance of accidentally running the wrong configuration. For this
reason, I have no file named "/etc/lilo.conf", and I have a directory
structure like this.

LiloBootInst/a-boot - for setting up hda as the default boot drive
LiloBootInst/b-boot - " " " hdb " " " " "
LiloBootInst/d-boot - " " " hdd " " " " "

In each of those sub-directories are files named similar to this.

?-boot.PR0?-test     - shell script to install Lilo boot blocks on hd?
?-boot.PR0?-test.out - output from script a-boot.PR01-test
?-boot.PR01          - conf for W98/LFS hdb boot
?-boot.PR02          - conf for W98/LFS hdb/LFS hdd/LFS boot

The contents of script a-boot.PR01-test are

lilo -v3 -C /etc/LiloBootInst/a-boot/a-boot.PR01 \
    -t &> a-boot.PR01-test.out

Until you remove the "-t" on the second line, no update will be done
by Lilo. But you will get output that shows what Lilo sees and what
decisions Lilo makes. When you are satisfied with that, remove the
"-t" and run it again.

All the ?-boot.PR0? scripts are the same but for the changing of "1"
to "2" and "a-" to "b-" or "d-", depending on which drive is being set
up. To run the script(s) as root (after assuring execute permissions
are set):

cd /etc/LiloBootInst/?-boot # ?=a or b or d
./?-boot.PR0n-test          # ?=a or b or d, n=1 or 2

It is important to keep in mind the difference between the terms
"boot" and "root". Boot means the components used by the BIOS to find
the loaders and begin getting the operating system going. Root is the
Linux file system that will be mounted once the kernel has been
loaded. It may be on the same partition as the "boot" components or on
a different one. Don't get confused.

In all the examples below, the "bios=" in the "disk=" sections refers
to the BIOS device assignments *at_boot_time*, not what they are now.
Don't get confused by thinking of what the device assignments (0x80,
0x81, ...) are now, in your current boot environment. For example, if
I boot from hdb, its assignment is 0x80 and hda is 0x81. But if I boot
from hda, those assignments are reversed. The important thing to
remember is the assignments at boot time.
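As a quick hypothetical illustration (not one of my actual files,
which follow): if the BIOS is set to boot hdb, the "disk=" sections
would say

```
# At boot time hdb is the BIOS boot drive, so it is device 0x80;
# hda "shuffles down" to 0x81 and hdd stays at 0x82.
disk=/dev/hdb
    bios=0x80
disk=/dev/hda
    bios=0x81
disk=/dev/hdd
    bios=0x82
```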
B) Minimal fall-back, hda (W98 drive) default boot.
The contents of a-boot.PR01 (edited to remove unimportant things, and
annotated) are as follows. Don't forget to add any additional things
you need/want, like "linear" or "lba32", append="pci=biosirq",
"prompt", "delay=100", "timeout=100" and so on.

# In the "disk=" sections, "bios=" reflects what the BIOS device
# assignment (0x80, 0x81, ...) *will* be when BIOS IDE0 (hda, "C:"
# for you MSers) is the default boot. Though not needed here, they
# are shown for later comparison. Why do I show the geometry here?
# Because I want Lilo to do its matches on the *real* device
# geometry, *not* the (possibly) phony geometry contained in the
# partition tables of the master boot block.

# To reduce error by the "loose nut behind the wheel", add a custom
# menu title that shows the boot drive and menu being used. Also,
# note the custom "map=" parameter. This allows me to back up all
# boot-related information from any drive without stepping on the
# "map" file from other bootable disk setups. Note that all
# components are gotten from the liloboot directory. That way, if
# hdb has failed due merely to loss of its /boot directory, I can
# recover that directory from the W98 directory. Also, Lilo will
# reference the needed components in the liloboot directory *at boot
# time*. So if hdb is not bootable, I can still boot anything else in
# the system that is still good.
# IMPORTANT! If you do *anything* that may change physical locations
# on the drive, RERUN LILO. Lilo is OS independent. It works on
# physical locations. If you defrag, replace a kernel image or copy
# some files in/out of the directory, locations may change (usually
# do). RERUN LILO so it can update the hda1map with the new
# information.
menu-title="A Drive Boot Menu PR01"

map=/mnt/hda1/liloboot/hda1map

install=/mnt/hda1/liloboot/boot.b

# Default is W98 because it is first in the file and no "-D" on the
# command line or "default=" in the configuration file was given.
# Note that the "loader=" is required for W98 because it is not a
# Linux system. Also, we get it from the liloboot drive, as we do for
# the

loader=/mnt/hda1/liloboot/chain.b

image=/mnt/hda1/liloboot/K6-20030530

C) Better fall-back, hda boot, hdb Linux root.
The contents of a-boot.PR02 (edited to remove some things) are here.
Since we just add another drive, and change the default OS we run,
I've not annotated it. Just take note that we've added the "default="
parameter and the menu name has changed. As before, don't forget to
add any additional things you need.

menu-title="A Drive Boot Menu PR02"

map=/mnt/hda1/liloboot/hda1map
install=/mnt/hda1/liloboot/boot.b

default=D7-K6-20030530

loader=/mnt/hda1/liloboot/chain.b

image=/mnt/hda1/liloboot/K6-20030530

image=/mnt/hda1/liloboot/K6-20030530
D) Better fall-back, hdb default boot, bios= changes.
As before, unrelated stuff is removed. The below is from b-boot.PR02,
the equivalent of a-boot.PR02. The significant changes are in the
"bios=" values in the "disk=" sections and the new statements in the
"other=" section. As before, a changed boot menu title and a
"default=" parameter are used. Annotations have been inserted.

# Since the BIOS setup has been changed to default-boot IDE1 (hdb),
# the device assignments (0x80, 0x81, ...) are different than they
# were when booting from IDE0. It's a "shuffle down" process. So if
# the boot drive is IDE2, it becomes device 0x80, hda becomes 0x81
# and hdb becomes 0x82. Here we are using IDE1 as the default boot,
# so only the hda and hdb "bios=" parameters are affected. Hdd is
# still 0x82.

# Now we get all components from hdb, because it is considered the
# "reliable" drive. We could still get things from hda, but we are
# booting hdb and hda may have failed. If hdb is good, all its
# components should be considered more reliable than those on a
# potentially failed drive (we don't know if the drives are good yet,
# but the BIOS may "fail over" under certain failure conditions).

menu-title="B Drive Boot Menu PR02"

default=B7-K6-20030530

# Look at the "map-drive=" parameters. Since at boot time hdb is now
# device 0x80 and hda is 0x81, we need to switch them if we want to
# run W98. These statements reassign hdb to 0x81 and hda to 0x80
# before loading the W98 stuff, so it will still work. But we still
# get chain.b from the (currently) known-good drive, hdb. If you have
# shown more than two drives, don't try to "map-drive" more than two
# in this section. It will make the results unbootable. You can
# (apparently) re-map only two devices. If you have DOS partitions
# scattered among your drives, this may have nasty implications when
# you boot from other than your normal drive - I don't know.

image=/boot/K6-20030530

image=/boot/K6-20030530
I also have hdd boot examples, but they are just extensions of what
you've seen already. Things to keep in mind are listed here.

1) Use the "menu-title=" parameter to help protect against
   inadvertently booting the wrong drive. Of course, you must look at
   the menu title for it to do any good.

2) The "bios=" always refers to the BIOS device assignments
   *at_boot_time*, not what the assignments are now.

3) The "bios=" parameters change as you change the BIOS boot drive
   sequence(s) in BIOS setup. They "shuffle down". If you have four
   drives and you set the BIOS to boot IDE3 (normally 0x83), your
   device assignments will become like these.

   hdd = 0x80, hda = 0x81, hdb = 0x82, hdc = 0x83

4) Things which are not bootable are not included in the device
   assignment series 0x80, 0x81, ... So if you have a CD as master
   on the second IDE channel (hdc), it will not affect the device
   assignments with which we are concerned now. You will have 0x80
   and 0x81, but 0x82 will be assigned to hdd, not 0x83.

   NOTE! I haven't yet tested with a bootable CD installed in the CD
   drive. I do know that "El Torito" spec bootable CDs get defined as
   the floppy drive. I have not investigated other boot formats yet.

5) Store all needed components, and the references to them used at
   boot time, on the drive the BIOS will successfully boot. That way,
   if the OS or partition you are targeting is bad, you can still
   reference other OSs and partitions from the drive that is still
   good. If your normal boot drive fails, or its components get
   corrupted, you can change the BIOS boot device and boot from the
   other good drives.
X. TESTING AND GOTCHAS
Make sure that all components needed by the BIOS at boot time are
located where your BIOS can find them (below cylinder 1024 for some
old BIOSs). Lilo uses the BIOS to read several of the early components
(see "SOME MORE READING" below).

The most important testing tools are various types of backups and
recovery tools that allow recovery from disasters. Have them
available.

The next most important testing tool is Lilo itself. The -v and -t
parameters get information about what Lilo will do and the decisions
it makes. Examine the output from a Lilo run with the -t parameter
(and -v) before doing anything that will actually update your boot
blocks. Be especially alert that you have specified the correct
drive(s) and the correct locations of components needed at boot time.
It is very easy to copy configurations and forget to change drive
specifications, a "bios=" parameter or other parameters.

Copy *at least* the boot blocks of any drive you *might* affect,
intentionally or accidentally. I usually just get the whole first
track, like this. Change the ? to the drive specifier and the 63 to
whatever suits.

dd if=/dev/hd? bs=512 count=63 of=/hd?_1st_track

Then if the worst happens, I can recover the original by

dd of=/dev/hd? if=/hd?_1st_track
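The save/restore round trip can be rehearsed safely against a scratch
file instead of a real drive (the file names and contents here are
illustrative only):

```shell
# Make a small stand-in "drive", save its start with dd, corrupt it,
# then put the saved copy back.
img=$(mktemp)
printf 'MBR and partition table stand-in' > "$img"
dd if="$img" bs=512 count=63 of="$img.1st_track" 2>/dev/null
printf 'garbage' > "$img"                     # simulate the disaster
dd if="$img.1st_track" of="$img" 2>/dev/null  # recover the original
```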
Watch out for the LFS book bootscripts. They are brain-damaged with
respect to handling detected file system error conditions. For
example, when running from hdb with an hdd7 mount required by your
normal boot, if you test by wiping the partition information on hdd,
you will no longer be able to access hdd, and you can't boot into the
root on hdb because the missing hdd partitions will cause the LFS
bootscripts to shut down your machine. Whip out the old recovery
diskette (it had better be very good). Don't ask how I know. Someday
I'll submit a fix and see what happens.

That problem can be circumvented by adding "noauto" to all the fstab
entries that reference the drive to be "crash tested". If you forget
and get burned by that "gotcha", you can still copy back the saved
first blocks (you *did* copy them as shown above, didn't you?) and not
lose anything, provided you have a bootable recovery floppy.
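A hypothetical fstab fragment for such a test run - the hdd entries
get "noauto" so a wrecked hdd cannot stop the hdb boot (partition
numbers here are from my layout; adjust to yours):

```
# /etc/fstab (illustrative): hdd is about to be "crash tested", so
# don't mount or fsck it automatically at boot.
/dev/hdd7  /mnt/hdd7       ext2  noauto  0 0
/dev/hdd1  /mnt/hdd7/boot  ext2  noauto  0 0
```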
Rerun Lilo any time you change anything in a /boot directory (Linux),
defrag a W* partition containing boot components, or copy new
components into/out of the directory containing the boot components.
Remember that Lilo, being OS independent, is physical-location
dependent at boot time. These locations are resolved when the Lilo
boot blocks are installed (this is *not* the same as when the Lilo
package is installed), and any change in physical location *after* the
boot blocks are installed will cause a boot failure of some kind
(depending on what has been relocated).

Note that there is no boot flag (partition activation) on my non-M$
drives. If Lilo is installed into the master boot block, activating a
partition may provide all sorts of confusion and opportunities for
some fun. It won't always cause problems, but I have seen certain
situations where it did. It works fine on my W98 drive (which does
have Lilo in the master boot record). IIRC, the problem was with two
partitions active at the same time (an older version of Lilo, or LVM
related? - I don't recall). The BIOS expects the master boot record to
take care of situations when no partition is "active".

Don't forget to modify the /etc/fstab on the non-primary boot drive so
that it will correctly access things when it is booted/rooted. For
example, on my hdd root, I change the hdb7 (root file system) entry to
/mnt/hdb7 and change the hdd7 (/mnt/hdd7) entry to be /. Per a post to
LFS sometime back (search if you need to know who - I'm too lazy to
look it up, but I thank them anyway), /rootfs can be used to eliminate
one of the alterations. I haven't tested this yet.

Depending on your configuration of root/boot stuff, it may be safer to
run Lilo in a chroot environment. Use the -r parameter (man lilo) for
this. This is most useful when you've installed a whole new LFS onto
mounted partition(s) and want all components to be referenced relative
to the mounted components. Remember, at boot time the pathnames do not
come into play; Lilo has resolved them to physical locations. If you
change the contents of the (relative) /boot in any way, it is safest
to RERUN LILO.
XI. A PARTIALLY GENERALIZED BOOTBLOCK INSTALL

This was tested using lilo 22.{2,5} on a "pure" (B)LFS system,
*before* it was integrated into this hint and the text was changed
accordingly. So please test carefully and let me know about any
discrepancies in these instructions that may have crept in during this
process.

In the download section of the LFS hints, there is a tarball named
boot_any_hd.tar.bz2 that contains some supporting prototype scripts
that you can modify and use to ease maintenance of your multi-boot
configuration. Its contents will make a directory called boot_any_hd
containing:

LiloBootInst : Directory to be moved to /etc/LiloBootInst
a) lilo.conf-proto : Prototype file used by LBI.sh
b) LBI.conf        : File for *some* local customization
c) LBI.sh          : Script to help you install boot blocks

After making any changes to the files in this directory, copy the
whole directory to /etc and chown ownership and group to root:root on
everything, including the directory.

a) File lilo.conf-proto is a "prototype" file that is used by LBI.sh
to automatically "do the right thing" when installing lilo boot blocks
for a multi-drive boot configuration. Adjust everything to fit your
configuration. It is important that the "bios=" values "match" the
"disk=/dev/hd..." declarations. So "disk=/dev/hda" needs "bios=$HDA",
"disk=/dev/hdb" needs "bios=$HDB" and so on. Also, be sure to adjust
the "sectors", "heads" and "cylinders" parameters to match your drive
geometries. These three should really be added into the LBI.conf file,
with supporting code added to LBI.sh. Maybe in a future version.
Adjust the "prompt", "delay", "timeout" and "lba32" if needed.

In the image sections, adjust the image names (leave the $TBBOOT
there) and the "label=" values to suit yourself. Remove or change the
"default=" parameter - your choice.

The lines beginning with "###### lfs20030504" are markers that the
shell script uses to tag the start/end of generated code in the output
files. When you run it again, it removes the lines inside those tags
from the generated file before adding the new versions in again. The
"BEG" and "END" lines must match.
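The tag-and-regenerate idea can be sketched like this. The marker
prefix matches the hint's; the "BEG"/"END" suffixes and the function
itself are my illustration, not the actual LBI.sh code:

```shell
# Remove the old tagged span from FILE, markers included, then append
# a fresh span containing the contents of NEWBODY.
regen_section() {
    file=$1; newbody=$2
    sed '/^###### lfs20030504 BEG/,/^###### lfs20030504 END/d' \
        "$file" > "$file.tmp"
    {
        echo '###### lfs20030504 BEG'
        cat "$newbody"
        echo '###### lfs20030504 END'
    } >> "$file.tmp"
    mv "$file.tmp" "$file"
}
```

The sed address range /BEG/,/END/ is why the two marker lines must
match: an unpaired "BEG" would delete everything to the end of file.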
There are other adjustments you will want to make. Use your common
sense and the "man lilo.conf" pages for additional guidance.

b) File LBI.conf is a local configuration file. Adjust it for your
configuration. This really needs more added to it also, along with
supporting code in LBI.sh. Probably the only changes will be to the
"DISKS=" parameter, and the "HDPFX" and "CONSOLE=" parameters if you
use devfs.

c) Script LBI.sh helps get things right during the grind of daily
operations. It depends on being able to find the prototype file
discussed in a) above and the configuration file mentioned in b).

To run it, cd to /etc/LiloBootInst and run this command, with the
substitutions noted in the text following.

./LBI.sh {-t|-i} -d X -p N [ -D <default-image-label> ]

The "-t" means test only - don't actually install, but do everything
else. If you give it "-i", the boot blocks will be installed. The "X"
should be replaced with the drive this run is to affect: a for hda, b
for hdb, etc. The "N" is replaced by the partition number containing
the root file system.

If you provide "-D" and the label of an image, it will be used as the
default boot image, overriding any "default=" within the file. If
there is no "default=" and no "-D ..." is provided, the script will
generate a "-D ..." for you and tell you that it is doing so. This
will be the same default as lilo would take.
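The option handling just described might look roughly like this (a
sketch only; the real LBI.sh surely differs in its details):

```shell
# Parse {-t|-i} -d X -p N [-D label], in the style LBI.sh's usage
# line describes, into a few variables.
parse_lbi_args() {
    MODE= DRIVE= PART= DEFAULT=
    OPTIND=1
    while getopts tid:p:D: opt "$@"; do
        case $opt in
            t) MODE=test ;;
            i) MODE=install ;;
            d) DRIVE=$OPTARG ;;     # a, b, d, ...
            p) PART=$OPTARG ;;      # root partition number
            D) DEFAULT=$OPTARG ;;   # default image label (optional)
            *) return 1 ;;
        esac
    done
    # -t or -i, the drive letter and the partition are all required.
    [ -n "$MODE" ] && [ -n "$DRIVE" ] && [ -n "$PART" ]
}
```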
For first runs, be sure to use the "-t" (test) parameter. It will
process the prototype file and leave the results in a file named
*-boot.conf. This is the lilo.conf file that will be referenced by a
shell script that is also created, *-boot.sh. You can examine and
adjust either of these files as needed.

A file *-boot.out will also be created, using a lilo debug level of
-v3, showing the information lilo garnered and the decisions it made.
When this looks good using the "-t" parameter, you can then remove the
"-t" and run the *-boot.sh script, or rerun LBI.sh and give it the
"-i" parameter.

If you provide the "-i" (install) parameter, the script will warn you
that you are going to install the boot blocks and ask if you want to
do it. Either way, both "-t" and "-i" leave the *-boot.conf, *-boot.sh
and *-boot.out files there for you to examine.

The created script has a "less" at the end that will page through the
corresponding *-boot.out file created when the *-boot.sh script is
run. The output log has a copy of the lilo command that was run.

In my real Lilo configuration files, I have removed everything that I
can specify on the Lilo command line. That, combined with a little
shell scripting, allows me to have a single configuration file that
works for installing boot blocks on all three of my drives. The last
thing I need to add to it is the adjustment of the "bios=" parameters.

I have a rudimentary script that generates a find with a prune command
that backs up just my root partition, regardless of what else is
mounted. With automatic cron invocation at ungodly wee hours of the
morning, I'm assured of having a reasonably current root backup that I
can boot and root if needed. BTW, it automatically runs Lilo for the
backup drive after the backup (not completed yet). If the normal
boot/root drive gets corrupted, but is still mechanically/electrically
sound, I can just boot into the backup and do a find|cpio operation or
a cp -pDr (?) to recover the normal root after doing any *e2fs*
operations needed.

Since I have, essentially, three copies of my /boot directory, I can
recover a corrupted boot directory from any of them and RERUN Lilo
(very important to remember to do this).
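The effect of the generated prune list described above can be
approximated with find's -xdev, which refuses to cross mount points.
This is a sketch of my own, not the actual script, and the example
paths are illustrative:

```shell
# Emit the list of files that live on the root partition itself;
# anything mounted below it is skipped because -xdev stops find at
# file system boundaries - the effect the generated -prune
# expressions achieve per mount point.
root_file_list() {
    src=$1
    ( cd "$src" && find . -xdev -type f -print )
}

# The list would then feed cpio for the actual copy, e.g.
#   root_file_list / | cpio -o > /mnt/hdd7/root_backup.cpio
```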
I have yet to implement the reduced-service capability of my main
server. But I have tested the BIOS fail-over capability, and it works
*if* the BIOS cannot detect a valid boot block (the signature must be
missing). Unfortunately, if it sees a valid boot signature and loads
what it thinks is a good loader, and that loader fails (corrupted boot
information, kernel or root file system), then no fail-over will
occur. I have worked on IBM servers that have custom BIOS and hardware
monitoring and a timer that we were able to use to get around this. If
you have one of those, there is more that can be done.

XIII. SOME MORE READING
See the man pages for lilo and lilo.conf.
Read the README in /usr/src/Lilo*.
Go to The Linux Documentation Project site (http://www.tldp.org) and
look at the various things having to do with booting.

All the Lilo error codes are documented in the Lilo README. However,
they don't necessarily increase your understanding by themselves.
Additional research in that (and other) documents may be required.

Post to blfs-support. But if you have problems and the nature of your
post indicates that you have not made a good-faith effort to read the
stuff I mentioned above, I will not reply. Some others on the list
may.

This is an early version. I gratefully accept constructive criticism
posted publicly to blfs-support.

ACKNOWLEDGEMENTS: John McGinn - see "Thanks" in the hint.
0.9.5 2003-09-08 - Convert to new LFS hints format requirements.
                 - Integrate the semi-automated lilo.conf processing.
0.9.4 2003-07-03 - Testing completed with lilo-22.5 OK. Update text.
0.9.3 2003-06-27 - Correct typo, minor phrasing changes.
0.9.2 2003-06-14 - Add warning about symlink in liloboot dir in Win*
                   partition. Forgot to mention in earlier versions.
0.9.1 2003-06-14 - Discovered that Lilo 22.2 doesn't support swapping
                   more than two device assignments in the "other="
                   section of a configuration file. Add a note about
                   it.
                 - Miscellaneous typographical error fixes and a few
                   wording changes to increase clarity, hopefully. No
0.9   2003-06-12 - Initial release