But that's exactly the problem, Richard: AFAIK.
Can you state, absolutely and categorically, that there is no failure mode out
there (caused by hardware faults, or bad drivers) that will lock a drive up
for hours? You can't, obviously, which is why we keep saying that ZFS should
have this kind of
Scara Maccai wrote:
>> In the worst case, the device would be selectable, but not responding
>> to data requests, which would lead through the device retry logic and
>> can take minutes.
>
> That's what I didn't know: that a driver could take minutes (hours???) to
> decide that a device is not working anymore.
> In the worst case, the device would be selectable, but not responding
> to data requests, which would lead through the device retry logic and
> can take minutes.
That's what I didn't know: that a driver could take minutes (hours???) to
decide that a device is not working anymore.
Now it come
On Mon, Nov 24, 2008 at 7:54 PM, Richard Catlin
<[EMAIL PROTECTED]> wrote:
> I am new to OpenSolaris and ZFS.
>
> I created a new filesystem under an existing filesystem for a user
> Exists: /rpool/export/home/user01
> zfs create rpool/export/home/user01/fs1
>
> As root, I can add a file to fs1, b
I am new to OpenSolaris and ZFS.
I created a new filesystem under an existing filesystem for a user
Exists: /rpool/export/home/user01
zfs create rpool/export/home/user01/fs1
As root, I can add a file to fs1, but as user01, I don't have permission.
How do I give user01 permission? Can I lim
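A rough sketch of the usual fix, assuming the dataset mounts at the path
implied by its name: the mountpoint is created owned by root, so user01
cannot write to it. The permission set and quota below are illustrative only.
  # give user01 ownership of the new filesystem's mountpoint
  # (path assumed from the dataset name; check with `zfs get mountpoint`)
  chown user01 /rpool/export/home/user01/fs1
  # optionally delegate ZFS administration of the dataset to user01
  zfs allow user01 create,mount,snapshot rpool/export/home/user01/fs1
  # if the follow-up is about limiting space, a quota would do it
  zfs set quota=10G rpool/export/home/user01/fs1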
Toby Thain wrote:
> On 24-Nov-08, at 3:49 PM, Miles Nordin wrote:
>
>>> "tt" == Toby Thain <[EMAIL PROTECTED]> writes:
>>>
>> tt> Why would it be assumed to be a bug in Solaris? Seems more
>> tt> likely on balance to be a problem in the error reporting path
>>
On Nov 24, 2008, at 17:32, Tim wrote:
> On Mon, Nov 24, 2008 at 2:22 PM, Ahmed Kamal <[EMAIL PROTECTED]> wrote:
>
>> Not sure if this is the best place to ask, but do Sun's new Amber Road
>> storage boxes have any kind of integration with ESX? Most importantly,
>> quiescing the VMs
On Mon, Nov 24, 2008 at 2:22 PM, Ahmed Kamal <[EMAIL PROTECTED]> wrote:
> Hi,
> Not sure if this is the best place to ask, but do Sun's new Amber Road
> storage boxes have any kind of integration with ESX? Most importantly,
> quiescing the VMs, before snapshotting the zvols, and/or some level of
On 24-Nov-08, at 3:49 PM, Miles Nordin wrote:
>> "tt" == Toby Thain <[EMAIL PROTECTED]> writes:
>
> tt> Why would it be assumed to be a bug in Solaris? Seems more
> tt> likely on balance to be a problem in the error reporting path
> tt> or a controller/firmware weakness.
>
> It's
On Mon, Nov 24, 2008 at 4:04 PM, marko b <[EMAIL PROTECTED]> wrote:
> Darren,
>
> Perhaps I misspoke when I said that it wasn't about cost. It is _partially_
> about cost.
>
> Monetary cost of drives isn't a major concern at about $110-150 each.
> Loss of efficiency from mirroring (50%) or raidz1 (25%),
Darren,
Perhaps I misspoke when I said that it wasn't about cost. It is _partially_
about cost.
Monetary cost of drives isn't a major concern at about $110-150 each.
Loss of efficiency from mirroring (50%) or raidz1 (25%) is a concern.
Expense of SATA bays, either in a single chassis or an external c
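For the efficiency figures, assuming the 25% refers to a 4-disk raidz1: usable
capacity of an N-disk raidz1 is (N-1)/N, so 3/4 = 75% usable and 25% lost to
parity, while a 2-way mirror always loses 50%; wider raidz1 stripes lose less,
e.g. about 14% (1/7) with seven disks.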
To be honest, I haven't considered the ease-of-use aspects of listing file
systems and/or snapshots, simply that the way it is now is preferable (to me)
to how it used to be. But perhaps you could see what others think.
Yes, I think when a system is evolving it can be confusing to see cases wh
> "tt" == Toby Thain <[EMAIL PROTECTED]> writes:
tt> Why would it be assumed to be a bug in Solaris? Seems more
tt> likely on balance to be a problem in the error reporting path
tt> or a controller/firmware weakness.
It's not really an assumption. It's been discussed in here a l
Somehow I have an issue replacing my disk.
[20:09:29] [EMAIL PROTECTED]: /root > zpool status mypooladas
  pool: mypooladas
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach
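A sketch of the usual recovery path for that status, assuming the device can
be reattached or swapped (the device names below are placeholders, not taken
from the real pool):
  # if the device was only temporarily unavailable, bring it back online
  zpool online mypooladas c0t1d0
  # if the disk is actually dead, replace it with a new one
  zpool replace mypooladas c0t1d0 c0t2d0
  # then watch the resilver progress
  zpool status -v mypooladas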
Hi,
Not sure if this is the best place to ask, but do Sun's new Amber Road
storage boxes have any kind of integration with ESX? Most importantly,
quiescing the VMs, before snapshotting the zvols, and/or some level of
management integration through either the web UI or ESX's console? If there's
nothin
Are there any performance penalties incurred by mixing vdevs? Say you start
with a raidz1 of three 500 GB disks, then over time you add a mirror of two
1 TB disks.
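For reference, a sketch of the layout being described (device names are
placeholders). ZFS dynamically stripes new writes across all top-level vdevs,
so the practical effect is uneven performance and redundancy across the pool
rather than a fixed penalty:
  # original pool: one 3-disk raidz1 vdev
  zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0
  # later: add a 2-disk mirror as a second top-level vdev
  zpool add tank mirror c1t0d0 c1t1d0
  zpool status tank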
"C. Bergström" wrote:
> Will Murnane wrote:
> > On Mon, Nov 24, 2008 at 10:40, Scara Maccai <[EMAIL PROTECTED]> wrote:
> >
> >> Still don't understand why even the one on
> http://www.opensolaris.com/, "ZFS - A Smashing Hit", doesn't
> show the app running in the moment the HD is smashed... we
> if a disk vanishes like a sledgehammer hit it, ZFS will wait on the
> device driver to decide it's dead.
OK I see it.
> That said, there have been several threads about wanting configurable
> device timeouts handled at the ZFS level rather than the device driver
> level.
Uh, so I can
Will Murnane wrote:
> On Mon, Nov 24, 2008 at 10:40, Scara Maccai <[EMAIL PROTECTED]> wrote:
>
>> Still don't understand why even the one on http://www.opensolaris.com/, "ZFS
>> – A Smashing Hit", doesn't show the app running at the moment the HD is
>> smashed... weird...
>>
Sorry this is
Luke Lonergan wrote:
>> Actually, it does seem to work quite well when you use a read-optimized
>> SSD for the L2ARC. In that case, "random" read workloads have very fast
>> access, once the cache is warm.
>
> One would expect so, yes. But the usefulness of this is limited to the ca
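For context, attaching a read-optimized SSD as an L2ARC device is a single
command (pool and device names are placeholders):
  # add an SSD as a level-2 ARC (read cache) device to the pool
  zpool add tank cache c2t0d0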
marko b wrote:
> At this point, this IS an academic exercise. I've tried to outline the
> motivations/justifications for wanting this particular functionality.
>
> I believe my architectural "why not?" and "is it possible?" question is
> sufficiently valid.
>
> It's not about disk cost. It's abou
On Mon, Nov 24, 2008 at 10:40, Scara Maccai <[EMAIL PROTECTED]> wrote:
> Still don't understand why even the one on http://www.opensolaris.com/, "ZFS
> – A Smashing Hit", doesn't show the app running at the moment the HD is
> smashed... weird...
ZFS is primarily about protecting your data: correc
On 24-Nov-08, at 10:40 AM, Scara Maccai wrote:
>> Why would it be assumed to be a bug in Solaris? Seems more likely on
>> balance to be a problem in the error reporting path or a
>> controller/firmware weakness.
>
> Weird: they would use a controller/firmware that doesn't work? Bad
> cal
Nigel,
I have sent you an email with the output that you were looking for.
Once a solution has been found, I'll post it here so everyone can see.
Tano
I have a pool on a USB stick that has become "stuck" again. Any ZFS
command on that pool will hang.
Is there any worthwhile debugging information I can collect before
rebooting the box (which might not help - the pool was stuck before I
rebooted and it's still stuck now)?
--
Ian.
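A hedged sketch of what is commonly collected before rebooting in this
situation; none of it is guaranteed to help, and the PID below is a
placeholder:
  # user-level stack of the hung zfs/zpool process
  pstack 1234
  # kernel thread stacks, in case the hang is below ZFS
  echo "::threadlist -v" | mdb -k
  # FMA error telemetry and per-device error counters for the USB device
  fmdump -eV
  iostat -En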
Andrew Gabriel wrote:
> Ian Collins wrote:
>> I've just finished a small application to couple zfs_send and
>> zfs_receive through a socket to remove ssh from the equation and the
>> speed up is better than 2x. I have a small (140K) buffer on the sending
>> side to ensure the minimum number of sen
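The application itself isn't shown in the thread, but the idea (taking ssh out
of the pipeline) can be sketched with a plain TCP pipe; nc is only a stand-in
for the custom program, its option syntax varies between implementations, and
the host, port and dataset names are placeholders:
  # receiving host: listen on a port and feed the stream to zfs receive
  nc -l 9090 | zfs receive -F tank/backup/fs
  # sending host: stream the snapshot over the socket
  zfs send tank/fs@snap1 | nc recvhost 9090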
On Mon, Nov 24, 2008 at 11:41 AM, marko b <[EMAIL PROTECTED]> wrote:
> At this point, this IS an academic exercise. I've tried to outline the
> motivations/justifications for wanting this particular functionality.
>
> I believe my architectural "why not?" and "is it possible?" question is
> suffic
Al Tobey wrote:
> Rsync can update in-place. From rsync(1):
> --inplace update destination files in-place
>
Whee! This now works (for me). I've been using an older rsync, where this
option didn't work properly on ZFS.
It looks like this was fixed on new
At this point, this IS an academic exercise. I've tried to outline the
motivations/justifications for wanting this particular functionality.
I believe my architectural "why not?" and "is it possible?" question is
sufficiently valid.
It's not about disk cost. It's about being able to grow the pool
Rsync can update in-place. From rsync(1):
--inplace update destination files in-place
On Mon, Nov 24, 2008 at 08:43:18AM -0800, Erik Trimble wrote:
> I _really_ wish rsync had an option to "copy in place" or something like
> that, where the updates are made directly to the file, rather than a
> temp copy.
Isn't this what --inplace does?
--
albert chin ([EMAIL PROTECTED])
On Mon, 24 Nov 2008, Erik Trimble wrote:
>
> One note here for ZFS users:
>
> On ZFS (or any other COW filesystem), rsync unfortunately does NOT do the
> "Right Thing" when syncing an existing file. From ZFS's standpoint, the most
> efficient way would be merely to rewrite the changed blocks, th
Bob Friesenhahn wrote:
> On Mon, 24 Nov 2008, BJ Quinn wrote:
>
>> Here's an idea - I understand that I need rsync on both sides if I
>> want to minimize network traffic. What if I don't care about that -
>> the entire file can come over the network, but I specifically only
>> want rsync t
On Mon, 24 Nov 2008, BJ Quinn wrote:
> Here's an idea - I understand that I need rsync on both sides if I
> want to minimize network traffic. What if I don't care about that -
> the entire file can come over the network, but I specifically only
> want rsync to write the changed blocks to disk.
> Why would it be assumed to be a bug in Solaris? Seems more likely on
> balance to be a problem in the error reporting path or a
> controller/firmware weakness.
Weird: they would use a controller/firmware that doesn't work? Bad call...
> I'm pretty sure the first 2 versions of this demo
Here's an idea - I understand that I need rsync on both sides if I want to
minimize network traffic. What if I don't care about that - the entire file
can come over the network, but I specifically only want rsync to write the
changed blocks to disk. Does rsync offer a mode like that?
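For what it's worth, the closest rsync gets is sketched below: with --inplace
it writes directly into the destination file and skips blocks that already
match in place, and --no-whole-file forces the delta algorithm even for local
copies, where rsync would otherwise rewrite whole files (paths and host are
placeholders):
  rsync -a --inplace --no-whole-file /data/source/ backuphost:/data/dest/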
On 23-Nov-08, at 12:21 PM, Scara Maccai wrote:
> I watched both the YouTube video
>
> http://www.youtube.com/watch?v=CN6iDzesEs0
>
> and the one on http://www.opensolaris.com/, "ZFS – A Smashing Hit".
>
> In the first one it is obvious that the app stops working when they
> smash the drives; they
On Sun, 23 Nov 2008 17:02:57 -0600
Tim <[EMAIL PROTECTED]> wrote:
> On Sun, Nov 23, 2008 at 4:55 PM, James C. McPherson
> <[EMAIL PROTECTED]> wrote:
>
> > On Sun, 23 Nov 2008 06:13:51 -0800 (PST)
> > Ross <[EMAIL PROTECTED]> wrote:
> >
> > > I'd also like to know how easy it is to identify dri
On Sat, 22 Nov 2008 10:42:51 -0800 (PST)
Asa Durkee <[EMAIL PROTECTED]> wrote:
> My Supermicro H8DA3-2's onboard 1068E SAS chip isn't recognized in
> OpenSolaris, and I'd like to keep this particular system "all
> Supermicro," so the L8i it is. I know there have been issues with
> Supermicro-brand
Yeah, it's not really 'easy', but certainly better than nothing.
I would imagine it would be possible to write a script that could link all
three drive identifiers, and from there it would be relatively simple to
create a chart adding the physical location too.
I agree with Tim that we really need
Kam wrote:
> Posted for my friend Marko:
>
> I've been reading up on ZFS with the idea of building a home NAS.
>
> My ideal home NAS would have:
>
> - high performance via striping
> - fault tolerance with selective use of the ZFS copies attribute
> - cheap by getting the most efficient space util
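A minimal sketch of the striping-plus-selective-copies idea from that list
(disk names are placeholders; note that copies=2 gives ditto blocks within the
pool, not protection against losing a whole disk of a non-redundant stripe):
  # striped pool: maximum space and performance, no vdev-level redundancy
  zpool create tank c0t0d0 c0t1d0 c0t2d0
  # keep two copies of every block only for the data that matters
  zfs create tank/photos
  zfs set copies=2 tank/photos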
Ian Collins wrote:
> I've just finished a small application to couple zfs_send and
> zfs_receive through a socket to remove ssh from the equation and the
> speed up is better than 2x. I have a small (140K) buffer on the sending
> side to ensure the minimum number of sent packets
>
> The times I g
Andrew Gabriel wrote:
> Ian Collins wrote:
>>
>> I don't see the 5 second bursty behaviour described in the bug
>> report. It's more like 5 second interval gaps in the network traffic
>> while the data is written to disk.
>
> That is exactly the issue. When the zfs recv data has been written,