Hey all,
So I have a couple of storage boxes (NexentaCore & Illumian) and have
been playing with some DTrace scripts to monitor NFS usage. Initially I
ran into the (seemingly common) problem of basically everything showing
up as '<unknown>', and then after some searching online I found a
workaround w
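(For context, the kind of one-liner involved looks roughly like this; it
uses the illumos nfsv3 provider, the aggregation keys are just an example,
and file paths that aren't in the DNLC will still report "<unknown>":)

  # count NFSv3 reads/writes per client address and file path
  dtrace -n 'nfsv3:::op-read-start,nfsv3:::op-write-start
      { @ops[args[0]->ci_remote, args[1]->noi_curpath] = count(); }'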
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Schweiss, Chip
>
> How can I determine for sure that my ZIL is my bottleneck? If it is the
> bottleneck, is it possible to keep adding mirrored pairs of SSDs to the ZIL to
> make it faster? O
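(One quick way to see whether the log device is the choke point, with the
pool name invented here, is zpool iostat's per-vdev view, watching for the
log vdev staying busy while the data vdevs idle:)

  # per-vdev I/O at one-second intervals; the log vdev is listed separately
  zpool iostat -v tank 1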
2012-10-03 16:04, Fajar A. Nugraha wrote:
On Ubuntu + zfsonlinux + root/boot on zfs, the boot script helper is
"smart" enough to try all available device nodes, so it wouldn't
matter if the dev path/id/name changed. But ONLY if there's no
zpool.cache in the initramfs.
Not sure how easy it would
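(On Ubuntu that would presumably come down to something like the
following, with paths as in the stock zfsonlinux packaging:)

  # drop the cached pool config and rebuild the initramfs without it
  rm /etc/zfs/zpool.cache
  update-initramfs -u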
To answer your questions more directly, zilstat is what I used to check
what the ZIL was doing:
http://www.richardelling.com/Home/scripts-and-programs-1/zilstat
While I have added a mirrored log device, I haven't tried adding multiple
sets of mirrored log devices, but I think it should work. I bel
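(A sketch of both steps, assuming zilstat takes iostat-style
interval/count arguments and with pool/device names made up:)

  # sample ZIL traffic once a second, ten samples
  ./zilstat.ksh 1 10

  # add a second mirrored log pair; writes should spread across log vdevs
  zpool add tank log mirror c4t2d0 c4t3d0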
I found something similar happening when writing over NFS (at significantly
lower throughput than available on the system directly), specifically that
effectively all data, even asynchronous writes, were being written to the
ZIL, which I eventually traced (with help from Richard Elling and others o
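(If the slog turns out to be absorbing bulk writes it doesn't need to,
the per-dataset knobs to look at are sync and logbias; the dataset name
here is invented:)

  # see how the dataset is currently configured
  zfs get sync,logbias tank/export

  # steer large synchronous streams to the main pool instead of the slog
  zfs set logbias=throughput tank/export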
I'm in the planning stages of a rather large ZFS system to house
approximately 1 PB of data.
I have only one system with SSDs for L2ARC and ZIL. The ZIL seems to be
the bottleneck for large bursts of data being written. I can't confirm
this for sure, but when throwing enough data at my st
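(One crude way to confirm it, on a throwaway test dataset only since it
discards sync-write guarantees, is to disable the ZIL temporarily and
re-run the same burst; names invented:)

  # DANGEROUS outside a test: sync writes are acked before reaching stable storage
  zfs set sync=disabled tank/test
  # (re-run the write burst here; if throughput jumps, the ZIL was the limit)
  zfs inherit sync tank/test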
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> it doesn't work right - it turns out, iSCSI
> devices (and I presume SAS devices) are not removable storage. That
> means, if the device goes offline and comes back onlin
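(In that situation, once the target is reachable again the pool won't
pick the device back up by itself; it has to be nudged manually, with
pool/device names hypothetical here:)

  # tell ZFS the device is back, then clear the accumulated error counts
  zpool online tank c2t1d0
  zpool clear tank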
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ariel T. Glenn
>
> I have the same issue as described by Ned in his email. I had a zfs
> recv going that deadlocked against a zfs list; after a day of leaving
> them hung I finally had to hard
I have the same issue as described by Ned in his email. I had a zfs
recv going that deadlocked against a zfs list; after a day of leaving
them hung I finally had to hard reset the box (shutdown wouldn't, since
it couldn't terminate the processes). When it came back up, I wanted to
zfs destroy tha
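(For what it's worth, after a reset like that an aborted receive usually
leaves a partially received snapshot or dataset behind that has to be
removed before retrying the send/recv; names invented:)

  # remove the partial target so the next zfs recv can start clean
  zfs destroy tank/backup/data@incoming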
On Wed, Oct 3, 2012 at 5:43 PM, Jim Klimov wrote:
> 2012-10-03 14:40, Ray Arachelian wrote:
>
>> On 10/03/2012 05:54 AM, Jim Klimov wrote:
>>>
>>> Hello all,
>>>
>>> It was often asked and discussed on the list about "how to
>>> change rpool HDDs from AHCI to IDE mode" and back, with the
>>> mo
2012-10-03 14:40, Ray Arachelian wrote:
On 10/03/2012 05:54 AM, Jim Klimov wrote:
Hello all,
It was often asked and discussed on the list about "how to
change rpool HDDs from AHCI to IDE mode" and back, with the
modern routine involving reconfiguration of the BIOS, bootup
from separate live
On 10/03/2012 05:54 AM, Jim Klimov wrote:
> Hello all,
>
> It was often asked and discussed on the list about "how to
> change rpool HDDs from AHCI to IDE mode" and back, with the
> modern routine involving reconfiguration of the BIOS, bootup
> from separate live media, simple import and export o
Hello all,
It was often asked and discussed on the list about "how to
change rpool HDDs from AHCI to IDE mode" and back, with the
modern routine involving reconfiguration of the BIOS, bootup
from separate live media, simple import and export of the
rpool, and bootup from the rpool. The document
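(i.e. the usual dance from the live media boils down to something like
this, with the pool name as on most installs:)

  # from the live environment, after flipping the controller mode in the BIOS
  zpool import -f rpool
  zpool export rpool
  # then reboot from the rpool as usual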