On Tue, Mar 07, 2006 at 11:36:45AM -0500, Lennart Sorensen wrote:
> The ability to add to logical volumes without worrying about where
> they start and end makes it much more flexible than partitions
> ever were.
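(For anyone reading along: the flexibility being described looks roughly like this. This is a hedged sketch, not commands from the thread; the device name is made up, and the volume group name is borrowed from the lvs output later in this mail.)

```shell
# Hypothetical sketch: one physical volume, one volume group, LVs carved
# out of it. Unlike partitions, the LVs need no start/end planning --
# LVM allocates extents wherever they fit in the group.
pvcreate /dev/hda2                 # mark the partition as an LVM physical volume
vgcreate wisq_root /dev/hda2       # create a volume group on it
lvcreate -L 5G  -n usr wisq_root   # carve out logical volumes by size alone
lvcreate -L 10G -n var wisq_root
```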
Right, and that's what I meant -- my dividing it up into multiple
partitions was just a leftover habit from the old way of managing disk
space. It didn't really make much sense.

> Hmm, I have never used the ssh option myself.

It's pretty handy. Aside from letting you wander off-site and continue
the install, it also lets you do it from a fully-operational system --
multiple terminal windows, copy and paste, etc.

> > It did try to load the 'floppy' driver three times, once as part
> > of the CD detection process and twice by prompt later. All failed
> > due to lack of a floppy drive. Also not a big deal.
>
> Yeah I know. Not sure what can be done about that.

Perhaps just maintain a list of modules that failed to load. The
followup prompts (checklists of modules to load) could offer these
modules (like floppy) as an option, but unchecked, meaning you can just
dismiss the window in its default state without trying to load it
again. (Presumably, failing to check anything will also skip the
"specify parameters for modules?" dialog.)

> The sarge installer uses devfs. Newer installers use udev and hence
> a different device naming system.

Ah. I did notice that at the installer shell, 'fdisk -l' on some of my
installs reported things in /dev/hd* style, some in /dev/ide/* style.
That was probably due to trying out various 2.6 and 2.4 install options
until I settled on one.

> > Deleting it and recreating the volume group (wanted a different
> > name) reported 230.09 GB, which I guess is GiB? All the LVM stuff
> > seems to be in GiB, while the partitioner seems to be GB.
>
> Ehm, yeah probably in GiB.

Yeah. "df -H" reports e.g. my "200 GB" /home as 212G, and everything
else as being similarly larger than I specified. (I used the 'G' suffix
for all my installer input, which seemed to be interpreted as gigabytes
by the partitioner, but gibibytes by LVM setup.)

> Personally I like swap inside LVM. That allows me to resize it
> later too if I decide to.
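(A quick sanity check on the unit mismatch, for anyone curious -- this is just shell arithmetic, not output from the system in question. The small remaining gap between 214G and the reported 212G would be filesystem overhead.)

```shell
# If the installer's "200G" was taken as 200 GiB by LVM:
bytes=$((200 * 1024 * 1024 * 1024))
echo "$bytes bytes"               # 214748364800 bytes

# df -H reports in powers of 1000, so the same volume shows as roughly:
echo "$((bytes / 1000000000)) GB" # 214 GB (a little less once
                                  # filesystem overhead is subtracted)
```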
> Don't even need an extended partition if
> all you have is / and LVM partitions.

I was under the impression there were certain issues installing swap in
LVM, but I've since looked these up, and they were specific situations
(like suspend-to-disk) that aren't applicable in my case. I picked the
conservative option at the time. But if there are no issues, then sure,
that makes sense.

> I also run /tmp as tmpfs so I don't bother with a partition for
> that either.

Ah, excellent idea. This also very much justifies swap in LVM, since
you now have to take the amount of tmpfs space allocated into account
when considering how much swap to allocate.

> > LV   VG        Attr   LSize  Origin Snap% Move Copy%
> > [various lines snipped]
> > usr  wisq_root -wi-ao  5.00G
> > var  wisq_root -wi-ao 10.00G
>
> Looks fairly reasonable. I think /usr is too small given that is
> where all debian packages install to, unless you don't intend to
> install a lot. I might also think of a larger /var unless you
> aren't planning to run databases, proxies, etc.

Good points all. I did leave about 9 GiB free in unallocated LVM space
to extend these if needed, but I may as well put that to use and resize
later as needed. I'm thinking I'll double each of the above.

(This system will be the root node of a cluster, so there will be
plenty of other disks, but this disk is the newest, largest, and
quietest, and resides on the fastest machine with the most RAM. So I
realise now that I may as well sacrifice mass /home storage for system
operations space, and leave the slower disks & machines for all the
data hoarding.)

Thanks very much for all your suggestions; they certainly made the
entire installation report process worthwhile. :) I'll do another
install later to take them into account, but I don't foresee running
into any additional issues.

One last thing: I'll also be testing out the OpenSSI software and
seeing how it fares on the 2.6 kernel, but I may need to also change to
a 2.4 on my next install.
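(For the record, doubling those LVs later would look something like the sketch below. The LV paths assume the wisq_root names from the lvs output above and ext2/ext3 filesystems; depending on kernel and tools, resizing may require the filesystem to be unmounted first. The tmpfs fstab line is a common idiom, with an arbitrary illustrative size.)

```shell
# Grow each LV out of the free extents, then grow its filesystem to match.
# resize2fs is for ext2/ext3; other filesystems have their own resize tools.
lvextend -L 10G /dev/wisq_root/usr
resize2fs /dev/wisq_root/usr
lvextend -L 20G /dev/wisq_root/var
resize2fs /dev/wisq_root/var

# /tmp as tmpfs is a one-line /etc/fstab entry (size here is illustrative):
# tmpfs  /tmp  tmpfs  defaults,size=512m  0  0
```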
I recall that (experimentally) booting a 2.6 install with a Debian 2.4 kernel resulted in a "udev requires 2.6" message. What will I end up with for /dev if I do a 2.4 install? Old-style static, or devfs? Just wondering what to expect.