I tried to buy another drive today (750GB or 1TB) to swap out c3t1d0 (750GB)
but could not find one quickly. So, as a temporary measure, I was thinking of
using my 1.5TB disk instead, since it is free to reuse at the moment (it is
currently attached to a sil3114 controller - c6d1p0).
Would it be ok to do a
Darren J Moffat writes:
> Kjetil Torgrim Homme wrote:
>
>> I don't know how tightly interwoven the dedup hash tree and the block
>> pointer hash tree are, or if it is at all possible to disentangle them.
>
> At the moment I'd say very interwoven by design.
>
>> conceptually it doesn't seem impossibl
Anil writes:
> If you have another partition with enough space, you could technically
> just do:
>
> mv src /some/other/place
> mv /some/other/place src
>
> Anyone see a problem with that? Might be the best way to get it
> de-duped.
I get uneasy whenever I see mv(1) used to move directory trees
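(a cross-filesystem mv is really a copy followed by a delete, so a failure
partway through can leave a half-moved tree). A more cautious way to rewrite
the data so it picks up dedup - purely a sketch, with hypothetical paths - is
to copy, verify, and only then remove:

  cp -rp src /some/other/place/src      # first copy off to the other partition
  diff -r src /some/other/place/src     # make sure the copy is complete
  rm -rf src                            # only now remove the original
  cp -rp /some/other/place/src src      # the copy back is what gets deduped
  diff -r /some/other/place/src src     # verify again before cleaning up
  rm -rf /some/other/place/src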
Ah!
Ok, I will give this a try tonight! Thanks.
Cindy,
Thanks for the info and fixing the web site.
I'm still confused about why there are two different things (zpool and zfs) that
need to be upgraded. For example, is there any reason I would want to upgrade
the zpool but NOT upgrade the zfs?
Thanks,
Doug
Hi Doug,
Some features are provided at the pool level and some are provided at the
file system level, so we have two upgrade paths.
I believe the fs versions were originally created to support ZFS
compatibility with other OSes, but I'm not so clear on this.
I can't think of any rea
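(You can see what each level provides with the -v option; these list the
supported pool and file system versions and the features each adds:)

  zpool upgrade -v     # lists ZFS pool versions and their features
  zfs upgrade -v       # lists ZFS file system versions and their features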
$ zpool create dpool mirror c1t2d0 c1t3d0
$ zfs set mountpoint=none dpool
$ zfs create -o mountpoint=/export/zones dpool/zones
On Solaris 10 Update 8, when creating a zone with zonecfg, setting the
zonepath to "/export/zones/test1", and then installing with zoneadm install, the
zfs zonepath file s
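(For reference, the setup boils down to roughly this; the zone name here is
just taken from the zonepath:)

  zonecfg -z test1 'create; set zonepath=/export/zones/test1; verify; commit'
  zoneadm -z test1 install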
Hi Tim,
I looked up the sil3114 controller and I found this CR:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6813171
sil3114 sata controller not supported
If you can see this disk with format, then I guess I'm less uneasy, but
due to the hardware support issue, you might try to cr
On Dec 17, 2009, at 9:21 PM, Richard Elling wrote:
> On Dec 17, 2009, at 9:04 PM, stuart anderson wrote:
>>
>> As a specific example of 2 devices with dramatically different performance
>> for sub-4k transfers, has anyone done any ZFS benchmarks between the X25E and
>> the F20 that they can share?
>
A bug is being filed on this by Sun. A senior Sun engineer was able to
replicate the problem, and the only workaround they suggested was to
temporarily mount the parent filesystem on the pool. This applies to Sol 10
Update 8; not sure about anything else.
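A rough sketch of that workaround, reusing the dpool layout from earlier in
the thread (the dataset and zone names are assumptions, not from the bug
report):

  zfs set mountpoint=/dpool dpool      # temporarily give the parent a mountpoint
  zoneadm -z test1 install             # install the zone while the parent is mounted
  zfs set mountpoint=none dpool        # put it back once the install completes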
On Dec 18, 2009, at 9:40 AM, Stuart Anderson wrote:
On Dec 17, 2009, at 9:21 PM, Richard Elling wrote:
On Dec 17, 2009, at 9:04 PM, stuart anderson wrote:
As a specific example of 2 devices with dramatically different
performance for sub-4k transfers has anyone done any ZFS
benchmarks be
I am seeing this issue posted a lot in the forums:
A zpool add/replace command is run, for example:
zpool add archive spare c2t0d2
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t1d7s0 is part of active ZFS pool archive. Please see zpool(1M).
(-f just says: the
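Before reaching for -f, one way to double-check whether the device really is
in use (using the pool and device names from the example above):

  zpool status archive                 # is the device already listed in the pool?
  zdb -l /dev/dsk/c2t1d7s0             # dump any ZFS labels still present on the slice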
Hi Cindy,
I had similar concerns however I wasn't aware of that bug. Before I bought
this controller I had read a number of people saying that they had problems and
then other people saying didn't have problems with the sil3114. I was
originally after a sil3124 (SATAII) but given my future dr
> "d" == Doug writes:
d> is there any reason I would want to upgrade the zpool and NOT
d> upgrade the zfs?
In theory (and hope), zfs send streams depend only on the ZFS version being
sent, not on the kernel build or zpool version. In practice I doubt
it's perfectly true across every sing
Hi Tim,
The p* devices represent the larger Solaris fdisk container, so a possible
scenario is that someone could create a pool that includes a p0 device, which
might point to the same blocks as another partition inside that container that
is also included in the pool.
This w
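To illustrate with made-up names (just a sketch of the risk, not your setup):

  # risky: c6d1p0 is the whole fdisk partition and c6d1s0 lives inside it,
  # so the two names can refer to overlapping blocks
  zpool create tank c6d1p0 c6d1s0
  # safer: hand ZFS the whole disk and let it label it
  zpool create tank c6d1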
there's actually no device c6d1 in /dev/dsk, only:
t...@opensolaris:/dev/dsk$ ls -l c6d1*
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1p0 ->
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:q
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1p1 ->
../../devices/p...@0,0/pci10de,5..
should I use slice 2 instead of p0:
Part      Tag     Flag   Cylinders     Size      Blocks
  0   unassigned   wm    0             0         (0/0/0)   0
  1   unassigned   wm    0             0         (0/0/0)   0
  2   backup       wu    0 - 60796
I had referred to this blog entry:
http://blogs.sun.com/observatory/entry/which_disk_devices_to_use
hmm ok, the replace with the existing drive still in place wasn't the best
option... it's replacing, but very slowly, as it's reading from that suspect disk:
pool: storage
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly i
On snv_129, a zfs upgrade (*not* a zpool upgrade) from version 3 to version 4
caused the desktop to freeze - no response to keyboard or mouse events, and the
clock stopped updating.
ermine% uname -a
SunOS ermine 5.11 snv_129 i86pc i386 i86pc
ermine% zpool upgrade
This system is currently running ZFS pool
Slow and steady wins the race?
I ended up doing a zpool remove of c6d1p0. This stopped the replace and
removed c6d1p0, leaving the array doing a scrub, which by my rough calculations
was going to take around 12 months and increasing!
So I shut the box down, disconnected the SATA cable fro
Ok, I have started my import after using -k on my kernel line (I did a test
dump with this method first, just to make sure it works ok, and it does).
I have also added the following to my /etc/system file and rebooted:
set snooping=1
(As I understand it, snooping=1 arms the kernel deadman timer, so a hard hang
forces a panic and a crash dump instead of just sitting there.)
According to this page:
http://developers.sun.com/solaris/
> I've taken to creating an unmounted empty filesystem with a
> reservation to prevent the zpool from filling up. It gives you
> behavior similar to ufs's reserved blocks.
So ... Something like this?
zpool create -m /path/to/mountpoint myzpool c1t0d0
and then... Assuming it's a 500G disk ...
zfs create -V 50G /path/to/mountpoint/unused
zfs create /path/to/mountpoint/importantdata
On Fri, Dec 18, 2009 at 7:44 PM, Edward Ned Harvey wrote:
> So ... Something like this?
>
> zpool create -m /path/to/mountpoint myzpool c1t0d0
>
> and then... Assuming it's a 500G disk ...
> zfs create -V 50G /path/to/mountpoint/unused
> zfs create /path/to/mountpoint/importantdata
Once you've c
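For reference, a minimal sketch of the original suggestion (an unmounted,
empty filesystem carrying a reservation; the pool, dataset names, and sizes
here are only examples):

  zpool create myzpool c1t0d0
  zfs create -o mountpoint=none -o reservation=25G myzpool/reserved
  zfs create myzpool/importantdata

If the pool ever fills up, freeing space is then just a matter of shrinking
the cushion:

  zfs set reservation=10G myzpool/reserved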
Stacy Maydew wrote:
The commands "zpool list" and "zpool get dedup" both show a ratio of 1.10.
So thanks for that answer. I'm a bit confused, though: if dedup is applied
per zfs filesystem, not per zpool, why can I only see the dedup ratio on a
per-pool basis rather than for each zfs filesystem?
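For what it's worth (the pool and filesystem names below are made up): the
dedup property is set per dataset, but the deduplication table and the
resulting ratio are tracked per pool, so the ratio only shows up at the pool
level:

  zpool get dedupratio tank            # pool-wide ratio, same figure zpool list shows
  zfs get dedup tank/somefs            # whether dedup is enabled on a dataset,
                                       # not a per-filesystem ratio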