On 12 Nov 2009, at 19:54, "David Dyer-Bennet" wrote:
On Thu, November 12, 2009 13:36, Edward Ned Harvey wrote:
I built a fileserver on solaris 10u6 (10/08) intending to back it up to
another server via zfs send | ssh othermachine 'zfs receive'
However, the new server is too new for 10u6 (10/08) and requires a later
version of solaris ...
On Nov 12, 2009, at 1:36 PM, Frank Middleton wrote:
Got some out-of-curiosity questions for the gurus if they
have time to answer:
Isn't dedupe in some ways the antithesis of setting copies > 1?
We go to a lot of trouble to create redundancy (n-way mirroring,
raidz-n, copies=n, etc) to make things as robust as possible and
then we reduce redundancy ...
> *snip*
> I hope that's clear.
Yes, perfectly clear, and very helpful. Thank you very much.
Previous behavior was hard to predict. :-)
It worked for a while, then a bug prevented it from working, so you
had to export/import the pool to see the expanded space.
The export/import step was a temporary workaround until the autoexpand
feature integrated.
cs
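For reference, a minimal sketch of both approaches, assuming a pool named
tank and a replacement disk c1t2d0 (both names are placeholders):

  # on builds that have the autoexpand property:
  zpool set autoexpand=on tank
  zpool online -e tank c1t2d0      # expand an already-replaced disk

  # on older builds, the temporary workaround described above:
  zpool export tank
  zpool import tank
  zpool list tank                  # capacity should now reflect the larger disks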
On 11/12/09 15:23, Tim Co...
On Thu, Nov 12, 2009 at 4:05 PM, Cindy Swearingen
wrote:
> Hi Tim,
>
> In a pool with mixed disk sizes, ZFS can use only the amount of disk
> space that is equal to the smallest disk and spares aren't included in
> pool size until they are used.
>
> In your RAIDZ-2 pool, this is equivalent to 10 500 GB disks, which
> should be about 5 TBs.
Travis Tabbal wrote:
I'm running nv126 XvM right now. I haven't tried it without XvM.
Without XvM we do not see these issues. We're running the VMs
through NFS now (using ESXi)...
Interesting. It sounds like it might be an XvM-specific bug. I'm glad I
mentioned that in my bug report to Sun.
Hi Tim,
In a pool with mixed disk sizes, ZFS can use only the amount of disk
space that is equal to the smallest disk and spares aren't included in
pool size until they are used.
In your RAIDZ-2 pool, this is equivalent to 10 500 GB disks, which
should be about 5 TBs.
I think you are running a ...
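To make that arithmetic concrete with the drive mix described elsewhere in
this thread (a 12-disk raidz2 vdev of 7x500GB + 4x1TB + 1x1.5TB, plus one
1.5TB hot spare):

  12 members x 500 GB (smallest disk)  ~ 6 TB of raw vdev space
  minus 2 members' worth of parity     ~ 10 x 500 GB = 5 TB usable
  hot spare                            not counted until it is actually used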
Got some out-of-curiosity questions for the gurus if they
have time to answer:
Isn't dedupe in some ways the antithesis of setting copies > 1?
We go to a lot of trouble to create redundancy (n-way mirroring,
raidz-n, copies=n, etc) to make things as robust as possible and
then we reduce redundancy ...
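For what it's worth, the two properties are set independently; a minimal
sketch, assuming a hypothetical dataset tank/data on a build that includes
the dedup integration:

  zfs set copies=2 tank/data       # keep two ditto copies of each block
  zfs set dedup=on tank/data       # store identical blocks once, reference-counted
  zfs get copies,dedup tank/data   # confirm both settings

Whether those two goals pull in opposite directions is exactly the question
being asked here.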
On Wed, 11 Nov 2009, David Magda wrote:
There seem to be 'secure erase' methods available for some SSDs:
Unless the hardware and firmware of these devices have been inspected
and validated by a certified third party which is well-versed in such
analysis, I would not trust such devices with s...
So I've finally finished swapping out my old 300GB drives. The end result
is one large raidz2 pool. 10+2 with one hot spare.
The drives are:
7x500GB
4x1TB
2x1.5TB
One of the 1.5TB drives is the hot spare. zpool list is still showing a capacity of
3.25TB (the 1TB drives replaced 300GB drives). I've tried ...
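Two quick checks worth running here (the pool name is a placeholder); if
everything has resilvered and the size still hasn't changed, the
export/import workaround or the autoexpand property discussed elsewhere in
this thread should apply:

  zpool status tank   # confirm every replacement has finished resilvering
  zpool list tank     # see whether SIZE has picked up the larger disks yet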
David Dyer-Bennet wrote:
On Thu, November 12, 2009 13:36, Edward Ned Harvey wrote:
I built a fileserver on solaris 10u6 (10/08) intending to back it up to
another server via zfs send | ssh othermachine 'zfs receive'
However, the new server is too new for 10u6 (10/08) and requires a later
version of solaris ...
On Thu, November 12, 2009 13:36, Edward Ned Harvey wrote:
> I built a fileserver on solaris 10u6 (10/08) intending to back it up to
> another server via zfs send | ssh othermachine 'zfs receive'
>
> However, the new server is too new for 10u6 (10/08) and requires a later
> version of solaris. Presently available is 10u8 (10/09) ...
I built a fileserver on solaris 10u6 (10/08) intending to back it up to
another server via zfs send | ssh othermachine 'zfs receive'
However, the new server is too new for 10u6 (10/08) and requires a later
version of solaris. Presently available is 10u8 (10/09).
Is it crazy for me to try the s...
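For what it's worth, a minimal sketch of that pipeline; 'othermachine' comes
from the message above, while the pool, dataset, and snapshot names are
placeholders. The compatibility question mostly comes down to stream
versions: in general a stream produced on an older release can be received
by a newer one, while the reverse is not guaranteed.

  # one-time full send of a snapshot
  zfs snapshot tank/data@backup1
  zfs send tank/data@backup1 | ssh othermachine 'zfs receive -d backuppool'

  # later, incremental sends between successive snapshots
  zfs snapshot tank/data@backup2
  zfs send -i tank/data@backup1 tank/data@backup2 | ssh othermachine 'zfs receive -d backuppool'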
> > I'm running nv126 XvM right now. I haven't tried it
> > without XvM.
>
> Without XvM we do not see these issues. We're running
> the VMs through NFS now (using ESXi)...
Interesting. It sounds like it might be an XvM-specific bug. I'm glad I
mentioned that in my bug report to Sun. Hopefully ...
> Have you tried wrapping your disks inside LVM
> metadevices and then used those for your ZFS pool?
I have not tried that. I could try it with my spare disks, I suppose. I avoided
LVM as it didn't seem to offer me anything that ZFS/ZPOOL didn't.
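If that experiment is worth trying, a rough sketch using Solaris Volume
Manager metadevices; the disk and metadevice names are placeholders, and SVM
needs state database replicas before metainit will work:

  metadb -a -f c2t3d0s7            # create initial state database replicas on a spare slice
  metainit d10 1 1 c2t4d0s0        # one-way concat over a spare disk slice
  metainit d11 1 1 c2t5d0s0
  zpool create testpool mirror /dev/md/dsk/d10 /dev/md/dsk/d11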
> What type of disks are you using?
I'm using SATA disks with SAS-SATA breakout cables. I've tried different cables,
as I have a couple of spares.
mpt0 has 4x1.5TB Samsung "Green" drives.
mpt1 has 4x400GB Seagate 7200 RPM drives.
I get errors from both adapters. Each adapter has an unused SAS cha...
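A few generic Solaris places to look when chasing errors like these (nothing
here is specific to this particular hardware):

  iostat -En                    # per-device soft/hard/transport error counters
  fmdump -eV | grep -i mpt      # FMA error telemetry reported against the mpt driver
  cfgadm -al                    # attachment points, to see what each adapter detects
  grep mpt /var/adm/messages    # timeouts and resets logged by the driver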
I submitted a bug on this issue; it looks like you can reference other bugs
when you submit one, so everyone having this issue could link mine and
submit their own hardware config. It sounds like it's widespread, though, so I'm
not sure if that would help or hinder. I'd hate to bury the ...
On Wed, Nov 11, 2009 at 10:25 PM, James C. McPherson
wrote:
>
> The first step towards "acknowledging" that there is a problem
> is you logging a bug in bugs.opensolaris.org. If you don't, we
> don't know that there might be a problem outside of the ones
> that we identify.
>
I apologize if I of...
Miles Nordin wrote:
"djm" == Darren J Moffat writes:
>> encrypted blocks is much better, even though
>> encrypted blocks may be subject to freeze-spray attack if the
>> whole computer is compromised
the idea of crypto deletion is to use many keys to encrypt the drive,
and enc...
I was just looking to see if it is a known problem before I submit it as a bug.
What would be the best category to submit the bug under? I am not sure if it is
a driver/kernel issue. I would be more than glad to help. One of the machines is
a test environment and I can run any dumps/debug versions ...
> "djm" == Darren J Moffat writes:
>> encrypted blocks is much better, even though
>> encrypted blocks may be subject to freeze-spray attack if the
>> whole computer is compromised
the idea of crypto deletion is to use many keys to encrypt the drive,
and encrypt keys with oth...
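As a rough illustration of that key-hierarchy idea (a generic sketch with
OpenSSL, not how any particular product implements it; every file name here
is hypothetical):

  openssl rand -hex 32 > master.key
  openssl rand -hex 32 > data.key

  # bulk data is encrypted under the data key
  openssl enc -aes-256-cbc -in disk.img -out disk.img.enc -pass file:data.key

  # the data key is kept only in wrapped (encrypted) form
  openssl enc -aes-256-cbc -in data.key -out data.key.wrapped -pass file:master.key
  rm data.key

  # "crypto deletion": destroying the tiny master key makes the bulk
  # ciphertext unrecoverable, without having to scrub the whole drive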
What type of disks are you using?
Have you tried wrapping your disks inside LVM metadevices and then used those
for your ZFS pool?
> I'm running nv126 XvM right now. I haven't tried it
> without XvM.
Without XvM we do not see these issues. We're running the VMs through NFS now
(using ESXi)...