This is one of the greatest annoyances of ZFS. I don't really understand why
a zvol's space cannot be accurately enumerated from top to bottom of the tree
in 'df' output, etc. Why does a zvol divorce the space it uses from the root of
the volume?
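As a concrete illustration (a sketch, with a hypothetical zvol named tank/vol1),
'zfs list' can report the zvol's allocation, but 'df' only enumerates mounted
filesystems, so the zvol never shows up there; its usage only shrinks "avail":

  zfs list -o name,used,refer,volsize tank/vol1   # the zvol's space is visible here
  df -h /tank                                     # no line here corresponds to the zvol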
Gregg Wonderly
Have you tried importing the pool with that drive completely unplugged? Which
HBA are you using? How many of these disks share an HBA, and how many are on
separate ones?
Gregg Wonderly
On Jan 8, 2013, at 12:05 PM, John Giannandrea wrote:
>
> I seem to have managed to end up with a pool that is confused
I am running an up-to-date version of OpenIndiana b151a7.
>
> Thank you,
>
> Jerry
>
>
>
>
> On 10/26/12 10:02 AM, Gregg Wonderly wrote:
>> I've been using this card
>>
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816117157
>>
>>
you wouldn't have to figure
out how to do the reboot shuffle. Instead, you could just shuffle the symlinks.
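A minimal sketch of what I mean (hypothetical dataset and link names):

  ln -s /pool/build-a /export/current    # consumers only ever reference /export/current
  rm /export/current; ln -s /pool/build-b /export/current    # switch datasets, no reboot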
Gregg Wonderly
On Nov 9, 2012, at 10:47 AM, Jim Klimov wrote:
> There are times when ZFS options can not be applied at the moment,
> i.e. changing desired mountpoints of active file
I've been using this card
http://www.newegg.com/Product/Product.aspx?Item=N82E16816117157
for my Solaris/OpenIndiana installations because it has 8 ports. One of the
issues that this card seems to have is that certain failures can cause
secondary problems in other drives on the same SA
What is the error message you are seeing on the "replace"? This sounds like a
slice size/placement problem, but clearly, prtvtoc seems to think that
everything is the same. Are you certain that you did prtvtoc on the correct
drive, and not one of the active disks by mistake?
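For reference, the usual way to clone the label (hypothetical device names, with
s2 being the conventional whole-disk slice):

  prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2

It's worth running the prtvtoc half by itself on both devices and diffing the
output, just to rule out having read the wrong disk.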
Gregg
On Aug 28, 2012, at 6:01 AM, Murray Cullen wrote:
> I've copied an old home directory from an install of OS 134 to the data pool
> on my OI install. OpenSolaris apparently had wine installed, as I now have a
> link to / in my data pool. I've tried everything I can think of to remove
> this link
still complaining about a missing device.
The older OS and ZFS version may in fact misbehave because some error
condition is not being correctly handled.
Gregg Wonderly
On Aug 2, 2012, at 4:49 PM, Richard Elling wrote:
>
> On Aug 1, 2012, at 12:21 AM, Suresh Kumar wrote:
>
>
ability...
>
> "copies" might be on the same disk. So it's not guaranteed to help if you
> have a disk failure.
I thought I understood that copies would not be on the same disk. I guess I
need to go read up on this again.
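For anyone following along, copies is just a dataset property (hypothetical
pool/dataset name):

  zfs set copies=2 tank/important

My understanding now is that ZFS spreads the ditto copies across vdevs when it
can, but on a single-disk pool both copies necessarily live on that one disk,
which is Richard's point.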
Gregg Wonderly
a GUI was present or
not, it should log the data to syslog.
That would make ZFS much nicer to use, so that admins could always take
action on multiple pools and devices without being burdened by the constant
problem of failing devices locking them out of system administration
activities.
On Jul 11, 2012, at 12:06 PM, Sašo Kiselkov wrote:
>> I say, in fact that the total number of unique patterns that can exist on
>> any pool is small, compared to the total, illustrating that I understand how
>> the key space for the algorithm is small when looking at a ZFS pool, and
>> thus co
On Jul 11, 2012, at 11:02 AM, Sašo Kiselkov wrote:
> On 07/11/2012 05:58 PM, Gregg Wonderly wrote:
>> You're entirely sure that there could never be two different blocks that can
>> hash to the same value and have different content?
>>
>> Wow, can you just send me the
You're entirely sure that there could never be two different blocks that can
hash to the same value and have different content?
Wow, can you just send me the cash now and we'll call it even?
Gregg
On Jul 11, 2012, at 9:59 AM, Sašo Kiselkov wrote:
> On 07/11/2012 04:56 PM, Gregg Wonderly wrote:
I'm just suggesting that the time frame in which 256 bits or 512 bits becomes
less safe is closing faster than one might actually think, because social
elements of the internet allow a lot more effort to be focused on a single
"problem" than one might consider.
Gregg
magnitude,
would you be okay with that? What assurances would you need to be content
using my ZFS pool?
Gregg Wonderly
On Jul 11, 2012, at 9:43 AM, Sašo Kiselkov wrote:
> On 07/11/2012 04:30 PM, Gregg Wonderly wrote:
>> This is exactly the issue for me. It's vital to always have verify on
Yes, but from the other angle, the number of unique 128K blocks that you can
store on your ZFS pool is actually finitely small compared to the total
space of possible blocks. So the number of patterns you actually need to
consider is bounded by the physical limits of the universe.
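Rough numbers, just to make the scale concrete: a 128K block has
2^(131072*8) = 2^1048576 possible contents, while even a 1PB pool can hold at
most about 2^50 / 2^17 = 2^33 distinct 128K blocks. The population of blocks
any pool will ever hold is vanishingly small next to the space of possible
blocks.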
Gregg Wonderly
On Jul 11, 2012, at
" is not necessary? That just seems ridiculous
to propose.
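(For reference, verification is just part of the dedup property setting, e.g.,
on a hypothetical pool:

  zfs set dedup=sha256,verify tank

so every hash match is confirmed with a byte-for-byte comparison before the
blocks are shared.)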
Gregg Wonderly
On Jul 11, 2012, at 9:22 AM, Bob Friesenhahn wrote:
> On Wed, 11 Jul 2012, Sašo Kiselkov wrote:
>> the hash isn't used for security purposes. We only need something that's
>> fast and has
some thought into how to approach
the problem, and then some time to do the computations.
Huge space, but still finite…
Gregg Wonderly
On Jul 11, 2012, at 9:13 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org]
Reasoning based on the theory of the algorithms for a "random" number of bits
is just silly. Where's the real data that tells us what we need to know?
Gregg Wonderly
On Jul 11, 2012, at 9:02 AM, Sašo Kiselkov wrote:
> On 07/11/2012 03:57 PM, Gregg Wonderly wrote:
>> Since there i
Where's the win?
Gregg Wonderly
On Jul 11, 2012, at 5:56 AM, Sašo Kiselkov wrote:
> On 07/11/2012 12:24 PM, Justin Stringfellow wrote:
>>> Suppose you find a weakness in a specific hash algorithm; you use this
>>> to create hash collisions, and now imagine you store the hash collis
On Jun 16, 2012, at 10:13 AM, Scott Aitken wrote:
> On Sat, Jun 16, 2012 at 09:58:40AM -0500, Gregg Wonderly wrote:
>>
>> On Jun 16, 2012, at 9:49 AM, Scott Aitken wrote:
>>
>>> On Sat, Jun 16, 2012 at 09:09:53AM -0500, Gregg Wonderly wrote:
>>>> Use
On Jun 16, 2012, at 9:49 AM, Scott Aitken wrote:
> On Sat, Jun 16, 2012 at 09:09:53AM -0500, Gregg Wonderly wrote:
>> Use 'dd' to replicate as much of lofi/2 as you can onto another device, and
>> then
>> cable that into place?
>>
>> It looks like
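A hedged sketch of the dd pass Gregg suggests above (device names hypothetical,
lofi/2 from Scott's setup):

  dd if=/dev/lofi/2 of=/dev/dsk/c2t1d0s0 bs=1024k conv=noerror,sync

conv=noerror,sync keeps dd running past read errors and pads unreadable blocks
with zeros, so you recover as much of the device image as the media allows.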
into the pool perhaps?
Gregg Wonderly
On 6/16/2012 2:02 AM, Scott Aitken wrote:
On Sat, Jun 16, 2012 at 08:54:05AM +0200, Stefan Ring wrote:
when you say remove the device, I assume you mean simply make it unavailable
for import (I can't remove it from the vdev).
Yes, that's what I mean.
degraded performance and/or data
loss much more often.
Gregg Wonderly
On 1/24/2012 9:50 AM, Stefan Ring wrote:
After having read this mailing list for a little while, I get the
impression that there are at least some people who regularly
experience on-disk corruption that ZFS should be able to report a
high, it's not "simple" to pick up a couple more spares to have on hand.
For my root pool, I had no remaining 250GB disks of the kind I've been using
for root. So I put in one of my 1.5TB spares for the moment, until I decide
whether or not to order a new small drive.
On Mon,
that can't be corrected
by re-reading enough times.
It looks like you've started mirroring some of the drives. That's really what
you should be doing for the other, non-mirrored drives as well.
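Attaching a mirror half to an existing single-disk vdev is one command
(hypothetical names):

  zpool attach tank c1t2d0 c1t3d0

where c1t2d0 is the existing disk and c1t3d0 becomes its mirror.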
Gregg Wonderly
single large partition. The attached
mirror doesn't have to be the same size as the first component.
On Thu, Dec 15, 2011 at 11:27 PM, Gregg Wonderly <gregg...@gmail.com> wrote:
Cindy, will it ever be possible to just have attach mirror the surfaces,
including the
administration.
I'm very nervous when I have a simplex filesystem sitting there, and when a disk
has "died", I'm doubly nervous that the other half is going to fall over.
I'm not trying to be hard-nosed about this, I'm just trying to share my angst
and frustration.
Cindy, will it ever be possible to just have "attach" mirror the surfaces,
including the partition tables? I spent an hour today trying to get a new
mirror onto my root pool. There was a 250GB disk that failed, and I only had a
1.5TB handy as a replacement. prtvtoc ... | fmthard does not work in this case.
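The sequence that usually works for a larger replacement root disk (a sketch,
hypothetical device names) is to give the new disk an SMI label with slice 0
spanning it (via format), then attach the slice and reinstall the boot blocks:

  zpool attach rpool c0t0d0s0 c0t1d0s0
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

rather than trying to clone the old 250GB label verbatim with prtvtoc | fmthard.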
But by now I just want to know how it might be done from a
shell prompt.
rm ./-c ./-O ./-k
And many versions of getopt support the use of -- as the "end of options"
indicator so that you can do
rm -- -c -O -k
to remove those as well.
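And for the link to / itself: rm operates on the symlink, never on its target,
so something like rm ./badlink (hypothetical name) removes only the link and
cannot touch / at all.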
Gregg Wonderly
On 11/10/2011 7:42 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of darkblue
1 * XEON 5606
1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1TB hard disks
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis
work perfectly to allow pools to have disk sets
removed. It would provide the needed basic mechanism of just moving stuff
around to eliminate the use of a particular part of the pool that you wanted to
remove.
Gregg Wonderly
I've been building a few 6-disk boxes for VirtualBox servers, and I am also
surveying how I will add more disks as these boxes need it. Looking around on
the HCL, I see the Lycom PE-103 is supported. That's just 2 more disks; I'm
typically going to want to add a raid-z w/spare to my zpools, so
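Adding that raid-z vdev plus a spare later would look something like this
(hypothetical devices):

  zpool add tank raidz c3t0d0 c3t1d0 c3t2d0
  zpool add tank spare c3t3d0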