Hi All,
I've run into a massive performance problem after upgrading to Solaris 11
Express from oSol 134.
Previously the server was performing a batch write every 10-15 seconds and the
client servers (connected via NFS and iSCSI) had very low wait times. Now I'm
seeing constant writes to the ar
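One way to watch that change in write cadence from the server side (a sketch, not from the original post; MirrorPool is the pool name that appears in the zfs get output further down, and the 5-second interval is arbitrary):
# zpool iostat -v MirrorPool 5     (per-vdev write ops and bandwidth, sampled every 5 s)
# iostat -xn 5                     (per-device service and wait times as seen by the OS)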
Matthew Anderson wrote:
Hi All,
I've run into a massive performance problem after upgrading to Solaris 11
Express from oSol 134.
Previously the server was performing a batch write every 10-15 seconds and the
client servers (connected via NFS and iSCSI) had very low wait times. Now I'm
seeing
NAME             PROPERTY  VALUE     SOURCE
MirrorPool       sync      disabled  local
MirrorPool/CCIT  sync      disabled  local
MirrorPool/EX01  sync      disabled  inherited from MirrorPool
MirrorPool/EX02  sync      disabled  inherited from MirrorPool
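For reference, a listing in that form is what a recursive property query produces, presumably something like (the command itself is assumed, not shown in the original message):
# zfs get -r sync MirrorPool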
> Sync was disabled on the main pool and then left to inherit to everything
> else. The reason for disabling this in the first place was to fix bad NFS
> write performance (even with a ZIL on an X25-E SSD it was under 1MB/s).
> I've also tried setting the logbias to throughput and latency but t
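The settings described above would have been put in place with commands along these lines (a sketch using the dataset names from the listing; the values shown are the ones mentioned in the thread, not recommendations):
# zfs set sync=disabled MirrorPool              (children inherit unless set locally)
# zfs set logbias=throughput MirrorPool/EX01    (or logbias=latency, per the test above)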
On 27 April, 2011 - Matthew Anderson sent me these 3,2K bytes:
> Hi All,
>
> I've run into a massive performance problem after upgrading to Solaris 11
> Express from oSol 134.
>
> Previously the server was performing a batch write every 10-15 seconds and
> the client servers (connected via NFS
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Lamp Zy
>
> One of my drives failed in Raidz2 with two hot spares:
>
What zpool & zfs versions are you using? What OS version?
Are all the drives precisely the same size (same make/model number)?
On 04/26/2011 01:25 AM, Nikola M. wrote:
On 04/26/11 01:56 AM, Lamp Zy wrote:
Hi,
One of my drives failed in Raidz2 with two hot spares:
What are the zpool/zfs versions? (Run zpool upgrade and zfs upgrade, then press Ctrl+C.)
The latest zpool/zfs versions available, by numerical designation, in all
OpenSolaris base
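A non-destructive way to check those versions (querying the properties directly avoids even starting an upgrade; "tank" below is a placeholder pool name, not from the thread):
# zpool upgrade              (reports the pool version the system supports and any older pools)
# zpool get version tank
# zfs get -r version tank
# cat /etc/release           (identifies the OS build, e.g. Solaris 11 Express vs. snv_134)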
On Wed, Apr 27, 2011 at 12:51 PM, Lamp Zy wrote:
> Any ideas how to identify which drive is the one that failed so I can
> replace it?
Try the following:
# fmdump -eV
# fmadm faulty
-B
--
Brandon High : bh...@freaks.com
On Wed, Apr 27, 2011 at 3:51 PM, Lamp Zy wrote:
> Great. So, now how do I identify which drive out of the 24 in the storage
> unit is the one that failed?
>
> I looked on the Internet for help but the problem is that this drive
> completely disappeared. Even "format" and "iostat -En" show only 23
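One way to narrow down a drive that has vanished from the OS entirely (a sketch; "tank" is a placeholder pool name) is to compare what ZFS still expects against what the system currently enumerates:
# zpool status -v tank         (shows the device name ZFS recorded for the missing disk)
# echo | format                (lists the 23 disks that still respond)
# cfgadm -al | grep -i disk    (attachment points; an unconfigured or empty slot stands out)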
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Erik Trimble
>
> (BTW, is there any way to get a measurement of the number of blocks consumed
> per zpool? Per vdev? Per zfs filesystem?) *snip*.
>
>
> you need to use zdb to see what the curr
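For the block-count question, zdb can walk the pool and report block statistics (this traverses all metadata, so it can take a long time on a large pool; "tank" is a placeholder pool name):
# zdb -bb tank        (block counts and sizes, broken down by object type)
# zdb -DD tank        (dedup-table histograms, if dedup is enabled)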
On 27 April, 2011 - Edward Ned Harvey sent me these 0,6K bytes:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Erik Trimble
> >
> > (BTW, is there any way to get a measurement of the number of blocks consumed
> > per zpool? Per vdev? Per
On 4/27/11 4:00 AM, Markus Kovero wrote:
Sync was disabled on the main pool and then left to inherit to everything else.
The reason for disabling this in the first place was to fix bad NFS write
performance (even with a ZIL on an X25-E SSD it was under 1MB/s).
I've also tried setting the log
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Neil Perrin
>
> No, that's not true. The DDT is just like any other ZFS metadata and can be
> split over the ARC,
> cache device (L2ARC) and the main pool devices. An infrequently referenced
>
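In practice that means a cache device can be added so more of the DDT stays resident outside main memory (a sketch; pool and device names are placeholders, not from the thread):
# zpool add tank cache c5t0d0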
On Apr 27, 2011, at 9:26 PM, Edward Ned Harvey
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Neil Perrin
>>
>> No, that's not true. The DDT is just like any other ZFS metadata and can be
>> split over the ARC,
>> cache device
OK, I just re-looked at a couple of things, and here's what I /think/ are
the correct numbers.
A single entry in the DDT is defined in the struct "ddt_entry":
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/sys/ddt.h#108
I just checked, and the current size of thi
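A back-of-the-envelope way to turn that entry size into a RAM estimate (a sketch: the entry counts come from zdb, and the 376 below is only an illustrative stand-in for whatever sizeof (ddt_entry_t) actually works out to):
# zdb -D tank                     (per-table counts of unique and duplicate entries)
# echo '8000000 * 376' | bc       (e.g. ~8M total entries x an assumed in-core entry size, in bytes)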