Hi,
Obviously, moving ('renaming') files between ZFS filesystems in the same zpool
is just like a move between any other two filesystems, requiring a full copy of
the data and deletion of the old file.
I was wondering whether there is (and if not, why there isn't) an optimization
inside ZFS, such that a copy between ZFS
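For illustration, a minimal sketch of the difference, assuming two hypothetical
datasets tank/data and tank/scratch on the same pool:

  # rename within a single dataset: an atomic metadata operation,
  # no data blocks are copied
  mv /tank/data/a.iso /tank/data/b.iso

  # "move" across two datasets on the same pool: mv falls back to
  # copying the file into the target dataset and unlinking the original
  mv /tank/data/a.iso /tank/scratch/a.iso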
Andrius wrote:
> Boyd Adamson wrote:
>> Andrius <[EMAIL PROTECTED]> writes:
>>
>>> Hi,
>>> there is a small confusion with send/receive.
>>>
>>> zfs andrius/sounds was snapshotted @421 and should be copied to a new
>>> zpool beta that is on an external USB disk.
>>> After
>>> /usr/sbin/zfs send andrius/[EMA
Orvar Korvar wrote:
> Ouch, that seems slow. Do you think ZFS is still the best solution for this
> workload, or would, for instance, Veritas do better?
>
Maybe this workload would be more appropriate for Postgres or Oracle?
How big are the files, how much does their size vary, and how struc
Another thing

config:

        zfs         FAULTED   corrupted data
          raidz1    ONLINE
            c1t1d0  ONLINE
            c7t0d0  UNAVAIL   corrupted data
            c7t1d0  UNAVAIL   corrupted data

c7t0d0 & c7t1d0 don't exist; that is normal, they are actually c2t0d0 & c2t1d0.

AVAILABLE DISK SELE
Thank you for your fast reply.
You were right. There is something else wrong.
# zpool import
  pool: zfs
    id: 3801622416844369872
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be act
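A hedged sketch of what could be tried next, assuming the devices merely moved
to new controller paths (c7* became c2*); both commands exist in stock Solaris,
but whether they help here is a guess:

  # rescan /dev/dsk and list importable pools under their current paths
  zpool import -d /dev/dsk

  # import by the numeric pool id shown in the listing above
  zpool import 3801622416844369872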
Boyd Adamson wrote:
> Andrius <[EMAIL PROTECTED]> writes:
>> Hi,
>> there is a small confusion with send/receive.
>>
>> zfs andrius/sounds was snapshotted @421 and should be copied to a new
>> zpool beta that is on an external USB disk.
>> After
>> /usr/sbin/zfs send andrius/[EMAIL PROTECTED] | ssh host1 /usr/sbin/zfs recv b
Victor Pajor wrote:
> System description:
> 1 root UFS with Solaris 10U5 x86
> 1 raidz pool with 3 disks: (c6t1d0s0, c7t0d0s0, c7t1d0s0)
>
> Description:
> Just before my motherboard died, I had installed OpenSolaris 2008.05
> x86.
> Why, you ask? Because I needed to test that it was t
2008/6/21 Andrius <[EMAIL PROTECTED]>:
> Hi,
> there is a small confusion with send/receive.
>
> zfs andrius/sounds was snapshotted @421 and should be copied to a new zpool
> beta that is on an external USB disk.
> After
> /usr/sbin/zfs send andrius/[EMAIL PROTECTED] | ssh host1 /usr/sbin/zfs recv
> beta
>
Andrius <[EMAIL PROTECTED]> writes:
> Hi,
> there is a small confusion with send/receive.
>
> zfs andrius/sounds was snapshotted @421 and should be copied to a new
> zpool beta that is on an external USB disk.
> After
> /usr/sbin/zfs send andrius/[EMAIL PROTECTED] | ssh host1 /usr/sbin/zfs recv
> beta
> o
Hi,
there is a small confusion with send/receive.
zfs andrius/sounds was snapshotted @421 and should be copied to a new zpool
beta that is on an external USB disk.
After
/usr/sbin/zfs send andrius/[EMAIL PROTECTED] | ssh host1 /usr/sbin/zfs recv beta
or
usr/sbin/zfs send andrius/[EMAIL PROTECTED] | ssh
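If the failure is that the destination dataset already exists, a hedged guess
(assuming the snapshot masked by the archive is andrius/sounds@421, and the
target name beta/sounds is only an example): receive into a new child dataset
under the beta pool instead of into the pool's root dataset, or let recv -d
derive the name from the sent snapshot:

  # receive into an explicit new dataset under beta
  /usr/sbin/zfs send andrius/sounds@421 | ssh host1 /usr/sbin/zfs recv beta/sounds

  # or let recv recreate the source dataset name under beta
  /usr/sbin/zfs send andrius/sounds@421 | ssh host1 /usr/sbin/zfs recv -d beta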
On Sat, Jun 21, 2008 at 8:29 PM, Orvar Korvar
<[EMAIL PROTECTED]> wrote:
> For the server Enterprise target, memory is secondary? Running a company
> well, and RAM cost is secondary? For the Enterprise target market, RAM
> shouldn't be an issue.
>
> For the consumer market, RAM should be an issue.
On Sat, Jun 21, 2008 at 8:13 PM, Orvar Korvar
<[EMAIL PROTECTED]> wrote:
> Ouch, that seems slow. Do you think ZFS is still the best solution for this
> workload, or would, for instance, Veritas do better?
I suspect that raidz specifically (rather than zfs in general) isn't very good
for small ran
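If the workload really is dominated by small random reads, one hedged
alternative (disk names are hypothetical) is a pool of striped mirrors rather
than raidz, since a mirror can satisfy each small read from a single disk
instead of involving the whole raidz stripe:

  # two two-way mirrors striped together; device names are made up
  zpool create tank mirror c1t0d0 c1t1d0 mirror c2t0d0 c2t1d0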
For the server Enterprise target, memory is secondary? Running a company well,
and RAM cost is secondary? For the Enterprise target market, RAM shouldn't be an
issue.
For the consumer market, RAM should be an issue. But ZFS is not targeted at the
consumer market. Yet? ZFS is still being polished fo
Ouch, that seems slow. Do you think ZFS is still the best solution for this
workload, or would, for instance, Veritas do better?
OK, so when I am reinstalling from build 68 to build 91ish, I can upgrade my
ZFS RAID. Then I have to upgrade both the zpool and the zfs? Should I upgrade
the zpool first, and then the zfs? Is the order of the upgrade important?
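As far as I know the pool format is upgraded before the filesystem versions; a
minimal sketch, assuming your target build ships both subcommands:

  # upgrade the on-disk pool format for all pools
  zpool upgrade -a

  # then upgrade the filesystem (dataset) versions
  zfs upgrade -a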
The answer is that an SMI label is required in which the first slice
covers the whole disk. A detailed process is described at:
http://www.castro.aus.net/~maurice/opensolaris/zfsbootmirror.html
Please note that there may still be other issues (e.g. bug 6680633), but at
least I can now add
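For anyone following along, a rough sketch of the relabel-and-attach steps,
with hypothetical disk and pool names (c1t0d0 is the existing boot disk,
c1t1d0 the new mirror, rpool the root pool); treat it as an outline and see
the page above for the full procedure:

  # put an SMI label on the new disk; in format's menus choose an SMI
  # label and make slice 0 cover the whole disk
  format -e c1t1d0

  # copy the slice table from the existing boot disk to the new one
  prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2

  # attach the new slice as a mirror of the root pool device
  zpool attach rpool c1t0d0s0 c1t1d0s0

  # on x86, also install GRUB on the new disk
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0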