On 11 Oct 2008, at 12:07, Danny Braniss wrote:

On Sat, Oct 11, 2008 at 12:35:16PM +0200, Danny Braniss wrote:
On Fri, 10 Oct 2008 08:42:49 -0700
Jeremy Chadwick <[EMAIL PROTECTED]> wrote:

On Fri, Oct 10, 2008 at 11:29:52AM -0400, Mike Meyer wrote:
On Fri, 10 Oct 2008 07:41:11 -0700
Jeremy Chadwick <[EMAIL PROTECTED]> wrote:

On Fri, Oct 10, 2008 at 03:53:38PM +0300, Evren Yurtesen wrote:
Mike Meyer wrote:
On Fri, 10 Oct 2008 02:34:28 +0300
[EMAIL PROTECTED] wrote:

Quoting "Oliver Fromme" <[EMAIL PROTECTED]>:

These features are readily available right now on FreeBSD.
You don't have to code anything.
Well with 2 downsides,

Once you actually try and implement these solutions, you'll see that
your "downsides" are largely figments of your imagination.

So if it is my imagination, how can I actually convert UFS to ZFS easily? Everybody seems to say that this is easy and that is easy.

It's not that easy. I really don't know why people are telling you it
is.

Maybe because it is? Of course, it *does* require a little prior
planning, but anyone with more than a few months' experience as a
sysadmin should be able to deal with it without too much hassle.

Converting some filesystems is easier than others; /home (if you
created one), for example, is generally easy:

1) ZFS fs is called foo/home, mounted as /mnt
2) fstat, ensure nothing is using /home -- if something is, shut it
  down or kill it
3) rsync or cpdup /home files to /mnt
4) umount /home
5) zfs set mountpoint=/home foo/home
6) Restart said processes or daemons

"See! It's like I said! EASY!" You can do this with /var as well.
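For the record, the six steps above might look something like this as
actual commands. This is only a sketch: the pool name "foo" comes from
the example, and it assumes rsync is installed (cpdup works too).

```shell
# Sketch of steps 1-5 above; pool name "foo" is the example's, not yours.
zfs create foo/home                   # step 1: create the new fs ...
zfs set mountpoint=/mnt foo/home      # ... parked on /mnt for now
fstat -f /home                        # step 2: list processes using /home
rsync -aH /home/ /mnt/                # step 3: copy the data across
umount /home                          # step 4: drop the old UFS mount
zfs set mountpoint=/home foo/home     # step 5: swing ZFS into place
# step 6: restart whatever you shut down in step 2
```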

Yup. Of course, if you've done it that way, you're not thinking ahead,
because:

Now try /usr. Hope you've got /rescue available, because once /usr/lib and /usr/libexec disappear, you're in trouble. Good luck doing this in
multi-user, too.

Oops. You F'ed up. If you'd done a little planning, you would have realized that / and /usr would be a bit of extra trouble, and planned
accordingly.

And finally, the root fs. Whoever says "this is easy" is kidding
themselves; it's a pain.

Um, no, it wasn't. Of course, I've been doing this long enough to have a system set up to make this kind of thing easy. My system disk is on
a mirror, and I do system upgrades by breaking the mirror and
upgrading one disk, making everything work, then putting the mirror
back together. And moving to zfs on root is a lot like a system
upgrade:

1) Break the mirror (mirrors actually, as I mirrored file systems).
2) Repartition the unused drive into /boot, swap & data
3) Build zfs & /boot according to the instructions on ZFSOnRoot
  wiki, just copying /boot and / at this point.
4) Boot the zfs disk in single user mode.
5) If 4 fails, boot back to the ufs disk so you're operational while you contemplate what went wrong, then repeat step 3. Otherwise, go
  on to step 6.
6) Create zfs file systems as appropriate (given that zfs file
  systems are cheap, and have lots of cool features that ufs
  file systems don't have, you probably want to create more than
  you had before, doing things like putting SQL servers on their
  own file system with an appropriate block size, etc., but you'll
  want to have figured all this out before starting step 1).
7) Copy data from the ufs file systems to their new homes,
  not forgetting to take them out of /etc/fstab.
8) Reboot on the zfs disk.
9) Test until you're happy that everything is working properly,
and be prepared to reboot on the ufs disk if something is broken.
10) Reformat the ufs disk to match the zfs one. Gmirror /boot,
   add the data partition to the zfs pool so it's mirrored, and
   you should have already been using swap.
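In command terms, the early steps might run roughly as below. Every
name here is a placeholder: "gm0" for the gmirror, "ad1" for the freed
disk, "tank" for the pool, and the partition letters are one possible
layout, so adapt all of them to your own setup.

```shell
# Hypothetical sketch of the mirror-break migration; gm0, ad1, tank
# and the partition letters are placeholder names.
gmirror remove gm0 ad1          # step 1: break the mirror, freeing ad1
fdisk -BI ad1                   # step 2: one bootable slice covering the disk
bsdlabel -wB ad1s1              # write a standard label; then `bsdlabel -e ad1s1`
                                # to carve out /boot (a), swap (b), data (d)
zpool create tank ad1s1d        # step 3: build the pool on the data partition
zfs create tank/root            # ... then copy /boot and / per the ZFSOnRoot wiki
# steps 4-5: boot the zfs disk single-user; fall back to the ufs disk on failure
for fs in usr var home; do      # step 6: zfs file systems are cheap, make several
    zfs create tank/$fs
done
# step 7: copy the data over (rsync/cpdup) and prune /etc/fstab to match
```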

This is 10 steps to your "easy" 6, but two of the extra steps are
testing you didn't include, and one is a failure-recovery step that
shouldn't be necessary. So: one more step than your easy process.

Of course, the part you seem to be (intentionally?) forgetting: most people are not using gmirror. There is no 2nd disk. They have one disk with a series of UFS2 filesystems, and they want to upgrade. That's how I read Evren's "how do I do this? You say it's easy..." comment, and I
think his viewpoint is very reasonable.

Granted, most people don't think about system upgrades when they build
a system, so they wind up having to do extra work. In particular,
Evren is talking about spending thousands of dollars on proprietary
software, not to mention the cost of the server that all this data is going to flow to, for a backup solution. Compared to that, the cost of
a few spare disks and the work to install them are trivial.

Yeah, this isn't something you do on a whim. On the other hand, it's not something that any competent sysadmin would consider a pain. For a good senior admin, it's a lot easier than doing an OS upgrade from
source, which should be the next step up from trivial.
I guess you have a very different definition of "easy".  :-)

Given that mine is based on years of working with the kinds of backup
solutions that Evren is asking for: ones that an enterprise deploys
for backing up a data center, the answer may well be "yes".

The above procedure, in no way, shape, or form, would be classified as
"easy" by the user (or even junior sysadmin) community, I can assure you
of that.

I never said it would be easy for a user. Then again, your average
user doesn't do backups, and wouldn't know a continuous backup
solution from a credit default swap. We're not talking about ghosting
a disk partition for a backup, we're talking about enterprise-level
backup solutions for data centers. People deploying those kinds of
solutions tend to have multiple senior sysadmins around.

I wouldn't expect a junior admin to call it easy. At least, not the
first two or three times. If they still have problems with it after
that, they should find a new career path, as they aren't ever going to
advance beyond junior.

I'll also throw this in the mix: the fact that we are *expecting* users to know how to do this is unreasonable. It's even *more* rude to expect

Um, is anyone expecting users to do this? I'm not. ZFS is still marked
as "experimental" in FreeBSD. That means that, among other things,
it's not really well-supported by the installer, etc.  Nuts, as of
January of this year, there wasn't an operating system on the planet
that would install and boot from ZFS.

I'm willing to jump through some hoops to get ZFS's advantages. Those
happen to include some things that go a long way toward solving Evren's
problems, so it was suggested as the basis for such a solution (not by
me, mind you). Having done the conversion, and found it easy, I
responded when he asked how hard it was.

But I'd never recommend this for your average user - which pretty much
excludes anyone contemplating continuous backup solutions.

that mid-level or senior SAs have to do
it "the hard way".  Why?  I'll
explain:

I'm an SA of 16+ years. I'm quite familiar with PBR/MBR, general disk partitioning, sectors vs. blocks, slices, filesystems, and whatever else. You want me to do it by hand, say, with bsdlabel -e? Fine, I
will -- but I will not be happy about it.  I have the knowledge, I
know how to do it, so why must the process continue to be a PITA and
waste my time?

Did I ever mention bsdlabel? But in any case, ZFS makes pretty much
*all* that crap obsolete. You still have to deal with getting a boot
loader installed, but after that, you never have to worry about
partitioning, blocks, sectors, or slices again - until you go to an
operating system that doesn't have ZFS.

so can FreeBSD boot off a ZFS root? in stable? current? ...

boot0 doesn't apply here; it cares about what's at sector 0 on the
disk, not filesystems.

boot2/loader does not speak ZFS -- this is why you need the /boot UFS2
partition.  This is an annoyance.

For the final "stage/step", vfs.root.mountfrom="zfs:mypool/root" in
loader.conf will cause FreeBSD to mount the root filesystem from ZFS.
This works fine.
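For reference, the relevant lines in /boot/loader.conf (living on that
small UFS /boot partition) would be something like the following, with
"mypool/root" being the example pool/fs name from above:

```
zfs_load="YES"                          # load zfs.ko at boot
vfs.root.mountfrom="zfs:mypool/root"    # mount root from the pool
```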

so the answer is:
        yes, if you have only one disk.
        no, if you have ZFS over many disks

because I see no advantage in the springboard solution where ZFS is used to
cover several disks.

I'm asking because I want to deploy some ZFS fileservers soon, and so
far the solution is either PXE boot, or keeping one disk UFS (or booting
off a USB stick). Today's /(root+usr) is somewhere between 0.5 and 1 GB
(kernel+debug+src), and is read-only, so dedicating one disk to UFS
seems a pity.

ZFS boot is coming. Currently it's part of pjd's Perforce branch and supports disks, mirrors, and collections of disks or mirrors, with the only restriction being that enough drives in the pool must be accessible from the BIOS (i.e. at least one element of a mirror must be seen by the BIOS).

Currently we don't support booting from raidz or raidz2 pools, but there is no fundamental reason why that can't be added - someone just needs to implement the code to understand the raidz layout.


_______________________________________________
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
