On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert wrote:
> Do you realise that losing a single disk in that pool could pretty much
> render the whole thing busted?
Ah - didn't pick up on that one until someone here pointed it out -
all my disks are mirrored, however some of them are mirrored on the
Hi,
On Sun, Dec 18, 2011 at 22:38, Matt Breitbach wrote:
> I'd look at iostat -En. It will give you a good breakdown of disks that
> have seen errors. I've also spotted failing disks just by watching an
> iostat -nxz and looking for the one that's spending more %busy than the rest
> of them, or exhibiting longer than normal service times.
Hi Craig,
On Sun, Dec 18, 2011 at 22:33, Craig Morgan wrote:
> Try fmdump -e and then fmdump -eV, it could be a pathological disk just this
> side of failure doing heavy retries that is dragging the pool down.
Thanks for the hint - didn't know about fmdump. Nothing in the log
since 13 Dec, though.
I'd look at iostat -En. It will give you a good breakdown of disks that
have seen errors. I've also spotted failing disks just by watching an
iostat -nxz and looking for the one that's spending more %busy than the rest
of them, or exhibiting longer than normal service times.
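For reference, that boils down to something like the following (purely
illustrative; adjust the sampling interval to taste):

  # iostat -En | grep -i errors    (per-device soft/hard/transport error counters)
  # iostat -nxz 5                  (extended per-device stats every 5 seconds,
                                    zero-activity devices suppressed)

A drive sitting at a noticeably higher %b or asvc_t than its neighbours, or
steadily accumulating errors in the -En output, is the one to look at first.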
-Matt
Try fmdump -e and then fmdump -eV, it could be a pathological disk just this
side of failure doing heavy retries that is dragging the pool down.
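In case it's useful, a minimal sketch of that (nothing here modifies
anything):

  # fmdump -e            (one line per error telemetry event, oldest first)
  # fmdump -eV | less    (full detail per event, including the device path)

A burst of ereport.io.* or ereport.fs.zfs.* events all naming the same
device would fit the retrying-disk theory.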
Craig
--
Craig Morgan
On 18 Dec 2011, at 16:23, Jan-Aage Frydenbø-Bruvoll wrote:
> Hi,
>
> On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert wrote:
Hi,
On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert wrote:
> I know some others may already have pointed this out - but I can't see it
> and not say something...
>
> Do you realise that losing a single disk in that pool could pretty much
> render the whole thing busted?
>
> At least for me - the
I know some others may already have pointed this out - but I can't see
it and not say something...
Do you realise that losing a single disk in that pool could pretty much
render the whole thing busted?
At least for me - the rate at which _I_ seem to lose disks, it would be
worth considering
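If it is worth it, converting the bare disks into mirrors can be done in
place - a rough sketch, using the pool name from elsewhere in the thread
and entirely hypothetical device names:

  # zpool status pool3                   (spot the vdevs that are single disks)
  # zpool attach pool3 c3t10d0 c3t11d0   (c3t10d0 = existing lone disk,
                                          c3t11d0 = newly added disk)

Each attach kicks off a resilver, after which that vdev shows up as a
two-way mirror.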
Do note that, though Frank is correct, you have to be a little careful
about what might happen should you drop your original disk and only
the large mirror half is left... ;)
On 12/16/11 07:09 PM, Frank Cusack wrote:
You can just do fdisk to create a single large partition. The
attached mi
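For completeness, the usual way to get that single whole-disk partition on
x86 is (device name purely an example):

  # fdisk -B /dev/rdsk/c1t1d0p0    (one Solaris partition spanning the disk)

For a root pool the disk additionally needs an SMI label and a slice 0
before it can be attached, as covered elsewhere in the thread.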
Hi,
On Sun, Dec 18, 2011 at 22:00, Fajar A. Nugraha wrote:
> From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
> (or at least Google's cache of it, since it seems to be inaccessible
> now):
>
> "
> Keep pool space under 80% utilization to maintain pool performance.
> Cur
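Checking where each pool sits against that threshold is a one-liner (pool
names as per this thread):

  # zpool list pool1 pool2 pool3

The CAP column is the figure the guide is talking about; anything in the
high 80s or 90s is well into the territory where finding free space starts
to cost noticeably more.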
On Mon, Dec 19, 2011 at 12:40 AM, Jan-Aage Frydenbø-Bruvoll
wrote:
> Hi,
>
> On Sun, Dec 18, 2011 at 16:41, Fajar A. Nugraha wrote:
>> Is the pool over 80% full? Do you have dedup enabled (even if it was
>> turned off later, see "zpool history")?
>
> The pool stands at 86%, but that has not changed in any way that
> corresponds chronologically with the sudden drop in performance
2011-12-17 21:59, Steve Gonczi wrote:
Coincidentally, I am pretty sure entry 0 of these meta dnode objects is
never used,
so the block with the checksum error never comes into play.
Steve
I wonder if this is indeed true - it seems so, because the pool
seems to work regardless of the seemingly
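One way to poke at that without risking anything (the pool name below is
just a placeholder):

  # zpool status -v tank     (shows the damaged object, e.g. <metadata>:<0x0>)
  # zpool clear tank         (reset the error counters)
  # zpool scrub tank         (re-read everything and see whether the same
                              object gets flagged again)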
Hi,
On Sun, Dec 18, 2011 at 16:41, Fajar A. Nugraha wrote:
> Is the pool over 80% full? Do you have dedup enabled (even if it was
> turned off later, see "zpool history")?
The pool stands at 86%, but that has not changed in any way that
corresponds chronologically with the sudden drop in performance
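For what it's worth, the dedup side of Fajar's question is quick to rule in
or out (pool name as per the thread):

  # zpool history pool3 | grep -i dedup    (was dedup ever switched on?)
  # zpool get dedupratio pool3             (1.00x means no deduped blocks remain)
  # zfs get -r dedup pool3                 (datasets where it is still enabled)

Even with dedup=off now, blocks written while it was on keep the dedup table
alive, which is the scenario the "zpool history" check is after.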
On Sun, Dec 18, 2011 at 10:46 PM, Jan-Aage Frydenbø-Bruvoll
wrote:
> The affected pool does indeed have a mix of straight disks and
> mirrored disks (due to running out of vdevs on the controller),
> however it has to be added that the performance of the affected pool
> was excellent until around
Hi,
On Sun, Dec 18, 2011 at 15:13, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D."
wrote:
> what is the output of zpool status for pool1 and pool2?
> it seems that you have a mixed configuration in pool3, with both plain disks and mirrors
The other two pools show very similar outputs:
root@stor:~# zpool status pool1
pool: pool1
what is the output of zpool status for pool1 and pool2?
it seems that you have a mixed configuration in pool3, with both plain disks and mirrors
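To make the comparison easier, per-vdev statistics on the slow pool next to
the status output usually tell the story:

  # zpool status -v pool1 pool2 pool3
  # zpool iostat -v pool3 5          (per-vdev ops/bandwidth every 5 seconds)

One vdev doing all the work (or next to none) while its peers are idle
points at a single misbehaving disk rather than a pool-wide issue.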
On 12/18/2011 9:53 AM, Jan-Aage Frydenbø-Bruvoll wrote:
Dear List,
I have a storage server running OpenIndiana with a number of storage
pools on it. All the pools' disks come off the same controller, and
all pools are backed by SSD-based l2arc and ZIL.
Dear List,
I have a storage server running OpenIndiana with a number of storage
pools on it. All the pools' disks come off the same controller, and
all pools are backed by SSD-based l2arc and ZIL. Performance is
excellent on all pools but one, and I am struggling greatly to figure
out what is wrong
On Sun, Dec 18, 2011 at 07:24:27PM +0700, Fajar A. Nugraha wrote:
> On Sun, Dec 18, 2011 at 6:52 PM, Pawel Jakub Dawidek wrote:
> > BTW. Can you, Cindy, or someone else reveal why one cannot boot from
> > RAIDZ on Solaris? Is this because Solaris is using GRUB and RAIDZ code
> > would have to be licensed under GPL as the rest of the boot code?
On Sun, Dec 18, 2011 at 6:52 PM, Pawel Jakub Dawidek wrote:
> BTW. Can you, Cindy, or someone else reveal why one cannot boot from
> RAIDZ on Solaris? Is this because Solaris is using GRUB and RAIDZ code
> would have to be licensed under GPL as the rest of the boot code?
>
> I'm asking, because I
On Thu, Dec 15, 2011 at 04:39:07PM -0700, Cindy Swearingen wrote:
> Hi Anon,
>
> The disk that you attach to the root pool will need an SMI label
> and a slice 0.
>
> The syntax to attach a disk to create a mirrored root pool
> is like this, for example:
>
> # zpool attach rpool c1t0d0s0 c1t1d0s0
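For the record, the surrounding steps on x86 OpenIndiana/Solaris look
roughly like this sketch (device names carried over from the example above;
on SPARC the last step is installboot rather than installgrub):

  # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
                                    (copy the SMI slice layout to the new disk)
  # zpool attach rpool c1t0d0s0 c1t1d0s0
  # zpool status rpool              (let the resilver finish before rebooting)
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0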
Does anyone know where I can still find the SUNWsmbs and SUNWsmbskr
packages for the Sparc version of OpenSolaris? I wanted to experiment with
ZFS/CIFS on my Sparc server but the ZFS share command fails with:
zfs set sharesmb=on tank1/windows
cannot share 'tank1/windows': smb ad
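If the packages do turn up, the remaining steps are short - a sketch
assuming an IPS-based release (on SXCE it would be pkgadd from the install
media instead), and a reboot is needed after the kernel module goes in:

  # pkg install SUNWsmbs SUNWsmbskr    (CIFS service userland + kernel module)
  # svcadm enable -r smb/server        (-r pulls in the service dependencies)
  # zfs set sharesmb=on tank1/windows

That failure is typically just the smb/server service not being installed
or online yet.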