Peter,
Are you sure your customer is not hitting this:
6456939 sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which
calls biowait() and deadlocks/hangs the host
I have a fix that you could have your customer try.
Thanks,
George
Peter Wilk wrote:
IHAC that is asking the following. Any thoughts would be appreciated.
On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote:
> Yes, ZFS uses this command very frequently. However, it only does this
> if the whole disk is under the control of ZFS, I believe; so a
> workaround could be to use slices rather than whole disks when
> creating a ZFS pool on a buggy
On 8/21/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote:
I haven't done measurements of this in years, but... I'll wager that compression is memory bound, not CPU bound, for today's servers. A system with low latency and high bandwidth memory will perform well (UltraSPARC-T1). Threading may not h
Yes, ZFS uses this command very frequently. However, it only does this if the
whole disk is under the control of ZFS, I believe; so a workaround could be to
use slices rather than whole disks when creating a ZFS pool on a buggy device.
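If Anton's belief above holds (ZFS only issues the frequent SYNCHRONIZE CACHE requests when it controls the whole disk), a minimal sketch of that workaround looks like this; the device names are invented, the zpool create command itself is standard:

  # whole-disk pool: ZFS takes over the whole device
  zpool create tank c2t0d0 c2t1d0
  # suggested workaround: build the pool from slices instead (hypothetical s0 slices)
  zpool create tank c2t0d0s0 c2t1d0s0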
Robert Milkowski wrote:
Hello zfs-discuss,
I've got many disks in a JBOD (>100) and while doing tests there
are a lot of destroyed pools. Then some disks are re-used to be part
of new pools. Now if I do zpool import -D I can see a lot of destroyed
pools in a state such that I can't import them anyway
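For reference, a short sketch of the import path being discussed (the pool name oldpool is invented; the flags are standard zpool import options):

  zpool import -D             # list destroyed pools that still look importable
  zpool import -D -f oldpool  # attempt to re-import one, if enough devices survive

Once the underlying disks have been reused in new pools, many of the listed entries are exactly the unrecoverable clutter described above.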
Hi,
I work on a support team for the Sun StorEdge 6920 and have a
question about the use of the SCSI sync cache command in Solaris
and ZFS. We have a bug in our 6920 software that exposes us to a
memory leak when we receive the SCSI sync cache command:
6456312 - SCSI Synchronize Cache Command is
Constantin Gonzalez wrote:
Hi,
my ZFS pool for my home server is a bit unusual:
pool: pelotillehue
state: ONLINE
scrub: scrub completed with 0 errors on Mon Aug 21 06:10:13 2006
config:
NAME          STATE   READ WRITE CKSUM
pelotillehue  ONLINE     0     0     0
  m
Mike Gerdts wrote:
not an expert, but most if not all compression is integer-based, and
I don't think floating point is supported inside the kernel anyway, so
it has to be integer-based.
Not too long ago Roch said "compression runs in the context of a
single thread per
pool", which makes me wor
The current behavior depends on the implementation of the driver and
support for hotplug events. When a drive is yanked, one of two things
can happen:
- I/Os will fail, and any attempt to re-open the device will result in
failure.
- I/Os will fail, but the device can continue to be opened by
I agree with you, but only 50%. Mirroring will only mask the problem
and will delay the filesystem corruption.
(Depending on how ZFS responds to data corruption. Does it go back and
recheck the blocks later, or just mark them bad?)
The problem lies somewhere in hardware, but certainly not in the disks.
IHAC that is asking the following. Any thoughts would be appreciated.
Take two drives, zpool them to make a mirror.
Remove a drive - and the server HANGS. Power off and reboot the server,
and everything comes up cleanly.
Take the same two drives (still Solaris 10). Install Veritas Volume
Manager (4.1).
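A minimal sketch of the reproduction being described, with invented device names:

  zpool create tank mirror c1t0d0 c1t1d0
  # physically pull one of the two drives, then generate some pool I/O:
  dd if=/dev/zero of=/tank/testfile bs=128k count=100
  # reported result: the server hangs until it is power-cycled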
I've a few questions:
- Does 'zpool iostat' report numbers from the top of the ZFS stack or at the
bottom? I've correlated the zpool iostat numbers with the system iostat numbers
and they match up. This tells me the numbers are from the 'bottom' of the ZFS
stack, right? Having said that, it'd be
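For anyone wanting to repeat that correlation check, a short sketch (the pool name tank is invented; both commands are standard):

  zpool iostat -v tank 5   # per-pool and per-vdev ops/bandwidth, 5-second intervals
  iostat -xnz 5            # per-device numbers from the Solaris I/O framework

If the per-vdev figures track the per-device figures, the statistics are evidently being gathered near the physical I/O layer, which is the reasoning in the question above.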
> Matthew Ahrens wrote:
> > On Thu, Aug 17, 2006 at 02:53:09PM +0200, Robert Milkowski wrote:
> >> Hello zfs-discuss,
> >>
> >> Is someone actually working on it? Or any other algorithms?
> >> Any dates?
> >
> > Not that I know of. Any volunteers? :-)
> >
> > (Actually, I think that an RLE c
Hi Ricardo,
Never mind my previous email.
I think what happened is that a new set of Solaris Express man pages
was downloaded over the weekend for SX 8/06, and this breaks the
links on the opensolaris...zfs page.
Noel, thanks for fixing them. I'll set a reminder to fix these for
every Solari
On Mon, Aug 21, 2006 at 12:21:44PM +0200, Roch wrote:
>
> We might need something to 'destroy' those properties, locally and
> recursively?
This is what this piece of text is for:
"Inheriting a property which is not set in any parent is equivalent to
clearing the property, as there is no defaul
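A hedged sketch of what that sentence means in practice for user properties (the dataset tank/home and the property name local:backup are invented; set, get and inherit are the standard zfs subcommands, assuming the syntax from the proposal):

  zfs set local:backup=yes tank/home      # attach a user-defined property
  zfs get local:backup tank/home          # shows the locally set value
  zfs inherit local:backup tank/home      # no parent sets it, so this clears it
  zfs inherit -r local:backup tank/home   # same, recursively for descendants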
Eric Schrock writes:
> Following up on a string of related proposals, here is another draft
> proposal for user-defined properties. As usual, all feedback and
> comments are welcome.
>
> The prototype is finished, and I would expect the code to be integrated
> sometime within the next mon
Hi Robert,
Maybe this RFE would help alleviate your problem:
6417135 need generic way to dissociate disk or slice from its
filesystem
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6417135
-r
Robert Milkowski writes:
> Hello zfs-discuss,
>
> I've got many y
Hello zfs-discuss,
Looks like I can't get the pool ID once a pool is imported.
IMHO zpool show should display it also.
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
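For context, a hedged illustration of where the numeric pool ID is visible today (the pool name and the ID value are invented; the commands are standard):

  zpool export tank
  zpool import                        # lists each importable pool with pool:, id: and state: fields
  zpool import 6789012345678901234    # the numeric ID can be used in place of the name

Once the pool is imported again, that ID is no longer surfaced by the everyday commands, which is the gap being pointed out.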
Hello zfs-discuss,
I've got many disks in a JBOD (>100) and while doing tests there
are a lot of destroyed pools. Then some disks are re-used to be part
of new pools. Now if I do zpool import -D I can see a lot of destroyed
pools in a state such that I can't import them anyway (like only two disks
Hi,
my ZFS pool for my home server is a bit unusual:
pool: pelotillehue
state: ONLINE
scrub: scrub completed with 0 errors on Mon Aug 21 06:10:13 2006
config:
NAME          STATE   READ WRITE CKSUM
pelotillehue  ONLINE     0     0     0
  mirror      ONLINE     0