Hi,
I'm trying to set up some fault management for ZFS. My first idea would
be to use devd(8). It looks like PC-BSD already has some examples for
this [1]. Is there any documentation on which subsystems and types
exist for ZFS?
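For reference, a devd rule for ZFS events generally looks like the sketch below. The event type string, the script path, and the `$pool`/`$vdev_guid` payload variables are assumptions based on what the kernel appears to post for ZFS events, not something I have documentation for:

```
# Hypothetical /usr/local/etc/devd/zfs-notify.conf sketch -- untested.
# "ZFS" is the system name the kernel uses for these events; the type
# and the action script are illustrative guesses.
notify 10 {
        match "system"  "ZFS";
        match "type"    "resource.fs.zfs.removed";
        action "/usr/local/sbin/zfs-fault.sh '$pool' '$vdev_guid'";
};
```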
Regards,
Leon
[1] http://lists.freebsd.org/pipermail/freebsd-stable
Hi,
running 9-STABLE from two weeks ago, I'm having a problem where ZFS is
not recognizing a failing SATA disk on an LSI SAS2x36 expander. The
gnop(8) device in the zpool status output is there for testing
purposes; ZFS fails those just fine. What could I do to check whether
the SCSI sense code actually makes sense
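One way to cross-check what the disk itself reports is to query it directly. The device name da3 below is hypothetical, and smartctl comes from the sysutils/smartmontools port (SATA disks behind a SAS expander may need `-d sat`):

```
# List CAM-attached devices behind the expander
camcontrol devlist

# Ask the suspect disk for its SMART attributes and error log
# (da3 is an example name; adjust to the failing disk)
smartctl -a /dev/da3
```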
On Tue, Jul 26, 2011 at 12:09:25PM +0200, Kurt Jaeger wrote:
> Hi!
>
> > > What kind of SATA 6g 4-port non-RAID controller is currently suggested
> > > for use in 8/9 setups with large RAM (64G) setups with ZFS ?
>
> > SuperMicro AOC-USAS2-L8i
> >
> > PCIe controller with 2 multi-lane connectors
On Mon, Jul 25, 2011 at 01:42:07PM -0700, Freddie Cash wrote:
> On Mon, Jul 25, 2011 at 2:34 AM, Kurt Jaeger wrote:
>
> > What kind of SATA 6g 4-port non-RAID controller is currently suggested
> > for use in 8/9 setups with large RAM (64G) setups with ZFS ?
> >
>
> SuperMicro AOC-USAS2-L8i
>
>
Hi,
I'm interested in stories about how you handle ZFS drive failures,
especially regarding automatic replacement and hot spares. There has
already been a lot of discussion about this. If anyone can share some
scripts, and how to plug them into devd, that would be great.
http://www.freebsd.org/cgi/q
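As a rough sketch of what a devd-triggered replacement script might do: the pool name `tank`, the spare `da9`, and the script name are invented for illustration, and the `zpool status` column layout (device name in field 1, state in field 2) is an assumption:

```shell
#!/bin/sh
# zfs-replace-sketch.sh -- illustrative only, not a tested production script.

# Pick FAULTED/UNAVAIL vdev names out of `zpool status`-style output
# read from stdin; prints one device name per line.
list_failed_vdevs() {
    awk '$2 == "FAULTED" || $2 == "UNAVAIL" { print $1 }'
}

# Example wiring (commented out -- pool and device names are hypothetical):
#   zpool status tank | list_failed_vdevs | while read dev; do
#       zpool replace tank "$dev" da9    # da9 = designated hot spare
#   done
```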
On Fri, Apr 15, 2011 at 06:28:07PM +0300, George Kontostanos wrote:
> I was wondering if ZFS v28 is going to be MFC to 8-Stable or not.
Is there a recent patch
9k jumbo clusters in use (current/cache/total/max)
What's a reasonable amount to set kern.ipc.nmbjumbo9 to, and is there
any form of auto-tuning? (I have absolutely no load on this machine
and mbufs are higher than the default pool size.)
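For what it's worth, this is how one might inspect and raise the 9k cluster limit; the value 38400 is an arbitrary illustration, not a recommendation:

```
# Show current 9k jumbo cluster usage (current/cache/total/max columns)
netstat -m | grep '9k jumbo clusters'

# Raise the runtime limit; 38400 is only an illustrative value
sysctl kern.ipc.nmbjumbo9=38400

# To make it persistent across reboots, the same knob can go into
# /etc/sysctl.conf:
#   kern.ipc.nmbjumbo9=38400
```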
Thanks to all,
Leon
> > On Thu, Apr 14, 2011 at 6:05 AM, Leon
dev.ix.0.queue3.rxd_head: 1396
dev.ix.0.queue3.rxd_tail: 1395
dev.ix.0.queue3.rx_packets: 1396
dev.ix.0.queue3.rx_bytes: 145665
dev.ix.0.queue3.lro_queued: 0
dev.ix.0.queue3.lro_flushed: 0
> On Thu, Apr 14, 2011 at 4:18 PM, Leon Meßner
> wrote:
> > On Thu, Apr 14, 2011 at 03:44:23PM +0200, K. Macy wrote
calls to protocol drain routines
> On Thu, Apr 14, 2011 at 3:05 PM, Leon Meßner
> wrote:
> > Hi,
> >
> > I tried setting the MTU on one of my ixgbe(4) Intel NICs to support
> > jumbo frames. This is on a box with RELENG_8 from today.
> >
> > # ifconfig ix0 mtu 9198
Hi,
I tried setting the MTU on one of my ixgbe(4) Intel NICs to support
jumbo frames. This is on a box with RELENG_8 from today.
# ifconfig ix0 mtu 9198
I then get the following error:
# tail -n 1 /var/log/messages
Apr 14 12:48:43 siloneu kernel: ix0: Could not setup receive structures
I alre
On Wed, Nov 24, 2010 at 09:28:13PM -0500, Paul Mather wrote:
> On Nov 24, 2010, at 7:06 PM, Xin LI wrote:
>
> > On Wed, Nov 24, 2010 at 7:01 AM, Paul Mather
> > wrote:
> >> Thanks for the link! I'm not sure whether "Fixed arcmsr driver prevent
> >> arcsas support for Areca SAS HBA ARC13x0" in
Hi,
I hope this is not the wrong list to ask. Didn't get any answers on
-questions.
When you try to do the following inside a nullfs-mounted directory,
where the nullfs origin is itself mounted via NFS, you get an error:
# foo
# tail -f foo&
# rm -f foo
tail: foo: Stale NFS file handle
# fg