[zfs-discuss] mv between ZFSs on same zpool

2008-06-21 Thread Yaniv Aknin
Hi, Obviously, moving ('renaming') files between ZFSs on the same zpool is just like a move between any other two filesystems, requiring a full copy of the data and deletion of the old file. I was wondering if there is (and why there isn't) an optimization inside ZFS, such that copy between ZFS
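
A quick way to see the behaviour Yaniv describes (a minimal sketch; the pool and dataset names are hypothetical): a rename within one dataset is a metadata-only operation, while a move across datasets in the same pool degenerates to copy-plus-unlink.

  # zfs create tank/a ; zfs create tank/b
  # mkfile 1g /tank/a/big
  # time mv /tank/a/big /tank/a/big2    # same filesystem: instant rename
  # time mv /tank/a/big2 /tank/b/big    # different dataset: full data copy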

Re: [zfs-discuss] Confusion with snapshot send-receive

2008-06-21 Thread James C. McPherson
Andrius wrote: > Boyd Adamson wrote: >> Andrius <[EMAIL PROTECTED]> writes: >> >>> Hi, >>> there is a small confusion with send receive. >>> >>> zfs andrius/sounds was snapshotted @421 and should be copied to the new >>> zpool beta that is on an external USB disk. >>> After >>> /usr/sbin/zfs send andrius/[EMA

Re: [zfs-discuss] ?: 1/2 Billion files in ZFS

2008-06-21 Thread Luke Scharf
Orvar Korvar wrote: > Ouch, that seems slow. Do you think ZFS is still the best solution for this > workload, or would for instance Veritas do better? > Maybe this workload would be more appropriate for Postgres or Oracle? How big are the files, how much does their size vary, and how struc

Re: [zfs-discuss] zpool "i/o error"

2008-06-21 Thread Victor Pajor
Another thing

config:

        zfs         FAULTED  corrupted data
          raidz1    ONLINE
            c1t1d0  ONLINE
            c7t0d0  UNAVAIL  corrupted data
            c7t1d0  UNAVAIL  corrupted data

c7t0d0 & c7t1d0 don't exist, which is normal: they are now c2t0d0 & c2t1d0.

AVAILABLE DISK SELE

Re: [zfs-discuss] zpool "i/o error"

2008-06-21 Thread Victor Pajor
Thank you for your fast reply. You were right. There is something else wrong.

# zpool import
  pool: zfs
    id: 3801622416844369872
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be act
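
Typical next steps for a pool in this state (a sketch only; the thread's actual resolution is truncated above, and the pool name and id come from Victor's output):

  # zpool import                         # rescan /dev/dsk for importable pools
  # zpool import -d /dev/dsk zfs         # search an explicit device directory by name
  # zpool import -f 3801622416844369872  # force-import by numeric id

Since the disks moved from the c7 controller to c2, a fresh scan lets ZFS find the pool members under their new device names.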

Re: [zfs-discuss] Confusion with snapshot send-receive

2008-06-21 Thread Andrius
Boyd Adamson wrote: Andrius <[EMAIL PROTECTED]> writes: Hi, there is a small confusion with send receive. zfs andrius/sounds was snapshotted @421 and should be copied to the new zpool beta that is on an external USB disk. After /usr/sbin/zfs send andrius/sounds@421 | ssh host1 /usr/sbin/zfs recv b

Re: [zfs-discuss] zpool "i/o error"

2008-06-21 Thread Richard Elling
Victor Pajor wrote: > System description: > 1 root UFS with Solaris 10U5 x86 > 1 raidz pool with 3 disks: (c6t1d0s0, c7t0d0s0, c7t1d0s0) > > Description: > Just before the death of my motherboard, I had installed OpenSolaris 2008.05 > x86. > Why, you ask? Because I needed to test that it was t

Re: [zfs-discuss] Confusion with snapshot send-receive

2008-06-21 Thread Mattias Pantzare
2008/6/21 Andrius <[EMAIL PROTECTED]>: > Hi, > there is a small confusion with send receive. > > zfs andrius/sounds was snapshotted @421 and should be copied to the new zpool > beta that is on an external USB disk. > After > /usr/sbin/zfs send andrius/sounds@421 | ssh host1 /usr/sbin/zfs recv > beta >

Re: [zfs-discuss] Confusion with snapshot send-receive

2008-06-21 Thread Boyd Adamson
Andrius <[EMAIL PROTECTED]> writes: > Hi, > there is a small confusion with send receive. > > zfs andrius/sounds was snapshotted @421 and should be copied to the new > zpool beta that is on an external USB disk. > After > /usr/sbin/zfs send andrius/sounds@421 | ssh host1 /usr/sbin/zfs recv > beta > o

[zfs-discuss] Confusion with snapshot send-receive

2008-06-21 Thread Andrius
Hi, there is a small confusion with send receive. zfs andrius/sounds was snapshotted @421 and should be copied to the new zpool beta that is on an external USB disk. After /usr/sbin/zfs send andrius/sounds@421 | ssh host1 /usr/sbin/zfs recv beta or /usr/sbin/zfs send andrius/sounds@421 | ssh
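
The usual fix for this pattern (a sketch built from the commands quoted above; the exact error Andrius hit is truncated in the replies): receiving into the top-level dataset beta fails because that dataset already exists, so receive into a child dataset, or let -d recreate the source path under the target pool. And since beta is a local USB pool, the ssh hop is unnecessary:

  /usr/sbin/zfs send andrius/sounds@421 | /usr/sbin/zfs recv beta/sounds

  # or, keeping the source layout under the target pool:
  /usr/sbin/zfs send andrius/sounds@421 | /usr/sbin/zfs recv -d beta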

Re: [zfs-discuss] memory hog

2008-06-21 Thread Peter Tribble
On Sat, Jun 21, 2008 at 8:29 PM, Orvar Korvar <[EMAIL PROTECTED]> wrote: > For the server Enterprise target, memory is secondary? Running a company > well, and RAM cost is secondary? For the Enterprise target market, RAM > shouldn't be an issue. > > For the consumer market, RAM should be an issue.

Re: [zfs-discuss] ?: 1/2 Billion files in ZFS

2008-06-21 Thread Peter Tribble
On Sat, Jun 21, 2008 at 8:13 PM, Orvar Korvar <[EMAIL PROTECTED]> wrote: > Ouch, that seems slow. Do you think ZFS is still the best solution for this > workload, or would for instance Veritas do better? I suspect that raidz specifically (rather than zfs in general) isn't very good for small ran
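
Rough arithmetic behind that suspicion (my sketch, not from the truncated reply): raidz checksums and stripes each block across the whole vdev, so every small random read must touch all the data disks, and the vdev as a whole delivers roughly the random-read IOPS of a single disk:

  random-read IOPS of an N-disk raidz vdev  ~  IOPS of one disk
  e.g. 3 disks at ~100 IOPS each still give one raidz1 vdev only ~100 IOPS

Mirrored vdevs, by contrast, can satisfy independent reads from each side of the mirror.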

Re: [zfs-discuss] memory hog

2008-06-21 Thread Orvar Korvar
For the server Enterprise target, memory is secondary? Running a company well, and RAM cost is secondary? For the Enterprise target market, RAM shouldn't be an issue. For the consumer market, RAM should be an issue. But ZFS is not targeted at the consumer market. Yet? ZFS is still being polished fo

Re: [zfs-discuss] ?: 1/2 Billion files in ZFS

2008-06-21 Thread Orvar Korvar
Ouch, that seems slow. Do you think ZFS is still the best solution for this workload, or would for instance Veritas do better?

Re: [zfs-discuss] How to identify zpool version

2008-06-21 Thread Orvar Korvar
Ok, so when I am reinstalling from build 68 to build 91ish, I can upgrade my ZFS raid. Then I have to upgrade both the zpool and the zfs??? Should I upgrade the zpool first, and then zfs? Is the order of the upgrade important?
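
A sketch of the usual sequence (assuming a pool named tank; the pool format is upgraded first, then the datasets in it):

  # zpool upgrade -v      # list the pool versions this build supports
  # zpool upgrade tank    # upgrade the pool's on-disk format
  # zfs upgrade -r tank   # then upgrade the filesystems recursively

Note that after an upgrade the pool can no longer be imported by the older build.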

Re: [zfs-discuss] Sparc rpool mirror failed

2008-06-21 Thread Maurice Castro
The answer is that an SMI label is required in which the first slice covers the whole disk. A detailed process is described at: http://www.castro.aus.net/~maurice/opensolaris/zfsbootmirror.html Please note that there may still be other issues, i.e. bug 6680633, but at least I can now add
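
A sketch of the steps that note implies for a SPARC root mirror (hypothetical device names c1t0d0/c1t1d0; the SMI label with slice 0 spanning the disk is the requirement Maurice describes):

  # format -e c1t1d0                        # write an SMI label, s0 covering the disk
  # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
  # zpool attach rpool c1t0d0s0 c1t1d0s0
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
      /dev/rdsk/c1t1d0s0                    # SPARC bootblock on the new half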