Kyle McDonald wrote:
> michael schuster wrote:
>> Charles Soto wrote:
>>> On 6/27/08 8:55 AM, "Mark J Musante" <[EMAIL PROTECTED]> wrote:
>>>> On Fri, 27 Jun 2008, wan_jm wrote:
>>>>> the procedure is as follows:
>>>>> 1. mkdir /tank
>>>>> 2. touch /tank/a
>>>>> 3. zpool create tank c0d0p3
>>>>>    this command gives the following error message:
>>>>>    cannot mount '/tank': directory is not empty
>>>>> 4. reboot.
>>>>> After that, the OS can only be logged into from the console. Is this a bug?
>>>>
>>>> No, I would not consider that a bug.
>>>
>>> Why?
>>
>> well ... why would it be a bug?
>>
>> zfs is just making sure that it's not accidentally "hiding" anything by
>> mounting something on a non-empty mountpoint; as you probably know,
>> anything in a directory is invisible if that directory is used as a
>> mountpoint for another filesystem.
>
> Yes, but that is the opposite of decades of UNIX behavior, so it's not
> surprising that it's unexpected for many people.
>
>> zfs cannot know whether the mountpoint contains rubbish or whether the
>> mountpoint property is incorrect, so the only sensible thing to do is
>> to not mount an FS if the mountpoint is non-empty.
>>
>> to quote Renaud:
>>
>>> This is expected behavior. filesystem/local is supposed to mount all
>>> ZFS filesystems. If it fails, then filesystem/local goes into maintenance
>>> and network/inetd cannot start.
>
> Shouldn't the other services really only depend on system filesystems
> being mounted? Or possibly on all filesystems in the 'root' ZFS pool?
>
> I consider it a bug if my machine doesn't boot up because one single,
> non-system and non-mandatory, FS has an issue and doesn't mount. The
> rest of the machine should still boot and function fine.
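[For readers following along: the emptiness check being discussed can be sketched in plain shell. This is only an illustration of the test ZFS effectively performs before mounting, not its actual implementation; the path /tmp/demo_tank is hypothetical.]

```shell
#!/bin/sh
# Reproduce the pre-existing-file situation from step 1-2 of the report.
mountpoint=/tmp/demo_tank
mkdir -p "$mountpoint"
touch "$mountpoint/a"

# The check ZFS effectively makes: refuse to mount over a non-empty
# directory, since mounting would hide whatever is already there.
if [ -n "$(ls -A "$mountpoint")" ]; then
    echo "cannot mount '$mountpoint': directory is not empty"
else
    echo "mountpoint is empty, safe to mount"
fi
```

Running this prints the same "directory is not empty" complaint the original poster saw from `zpool create`.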
I think Kyle might be onto something here. With ZFS it is so easy to create
file systems that one could expect many people to do so. In the past it was
difficult and required planning, so people tended to be more careful about
mount points. In this new world, we don't really have a way to mark which
(ZFS) file systems are critical during boot (AFAICT).

However, if we already know that a file system create failed in this manner,
we could set the "canmount" property to false. This bothers me a little,
because such an error would then be propagated as another potential latent
fault. OTOH, as currently implemented, it is a different, and IMHO more
impactful, latent fault.

Thoughts?
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
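[A possible recovery sequence along the lines Richard describes, assuming a pool named 'tank' whose root filesystem is the one failing to mount; these commands are shown for illustration on a Solaris/OpenSolaris console session, not a tested procedure:]

```shell
# Stop the failing dataset from blocking filesystem/local at boot.
zfs set canmount=off tank

# Clear the maintenance state so dependent services (e.g. network/inetd)
# can start again.
svcadm clear svc:/system/filesystem/local:default

# Later, fix the mountpoint contents and re-enable the dataset.
rm /tank/a
zfs set canmount=on tank
zfs mount tank
```

Setting canmount=off automatically (on create failure) would encode the error in pool state, which is the latent-fault trade-off Richard is weighing above.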