Alexander Lesle wrote:
> And what is your suggestion for scrubbing a mirror pool?
> Once per month, every 2 weeks, or every week?
There isn't just one answer.
For a pool with redundancy, you need to do a scrub just before the
redundancy is lost, so you can be reasonably sure the remaining data is good.
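Since you can't know when that will be, in practice you just pick an interval and automate it. A minimal example, assuming root's crontab and a data pool named "tank" (the pool name is a placeholder):

  # scrub pool 'tank' every Sunday at 02:00
  0 2 * * 0 /usr/sbin/zpool scrub tank

The scrub runs online, so the main cost is I/O contention while it is running.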
Hiya,
I am using the S11E Live CD to install. The installer wouldn't let me select 2 disks
for a mirrored rpool, so I did this post-install using this guide:
http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html
Before I go ahead and continue building my server (zpools) I want to make sure
the above guide is correct for S11E?
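For anyone comparing notes, the sequence in guides like that one is roughly the following; the device names c8t0d0s0/c8t1d0s0 are placeholders, and the installgrub step is the x86 variant (SPARC uses installboot):

  # copy the first disk's partition table to the second disk
  prtvtoc /dev/rdsk/c8t0d0s0 | fmthard -s - /dev/rdsk/c8t1d0s0
  # attach the second disk, turning rpool into a two-way mirror
  zpool attach rpool c8t0d0s0 c8t1d0s0
  # install a boot loader on the new half
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t1d0s0
  # wait for the resilver to finish before testing boots
  zpool status rpool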
On Aug 6, 2011, at 9:56 AM, Roy Sigurd Karlsbakk wrote:
>> In my experience, SATA drives behind SAS expanders just don't work.
>> They "fail" in the manner you
>> describe, sooner or later. Use SAS and be happy.
>
> Funny thing is Hitachi and Seagate drives work stably, whereas WD drives tend to fail.
>
http://download.oracle.com/docs/cd/E19963-01/html/821-1448/gjtuk.html#gjtui
Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D
On Aug 8, 2011, at 5:15, Lanky Doodle wrote:
> Hiya,
>
> I am using the S11E Live CD to install. The installer wouldn't let me select 2
> disks for a mirrored rpool, so I did this post-install using this guide:
> http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html
>
> Before I go ahead and continue building my server (zpools) I want to make
> sure the above guide is correct for S11E?
You should simply boot from each disk, while the other disk is offline, to make sure the system can boot from either half of the mirror.
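One way to run that test without pulling cables, assuming the second half is c8t1d0s0 (a placeholder name):

  # take one half offline, then reboot from the remaining disk
  zpool offline rpool c8t1d0s0
  init 6
  # after verifying the boot, bring it back and let it resilver
  zpool online rpool c8t1d0s0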
Is there a list of zpool versions for development builds?
I found:
http://blogs.oracle.com/stw/entry/zfs_zpool_and_file_system
where it says Solaris 11 Express is zpool version 31, but my
system has BEs back to build 139 and I have not done a zpool upgrade
since installing this system but it
John Martin wrote:
> Is there a list of zpool versions for development builds?
> I found:
> http://blogs.oracle.com/stw/entry/zfs_zpool_and_file_system
> where it says Solaris 11 Express is zpool version 31, but my
> system has BEs back to build 139 and I have not done a zpool upgrade
> since installing this system but it
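Whatever a published list says, both sides can be checked directly on the running system:

  # versions this build's ZFS software supports
  zpool upgrade -v
  # version the pool itself is at
  zpool get version rpool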
Is it possible to recover the rpool with only a tar/star archive of the root
filesystem? I have used the zfs send/receive method and that works without a
problem.
What I am trying to do is recreate the rpool and underlying zfs filesystems
(rpool/ROOT, rpool/s10_uXX, rpool/dump, rpool/swap).
On 08/9/11 07:53 AM, marvin curlee wrote:
> Is it possible to recover the rpool with only a tar/star archive of the root
> filesystem? I have used the zfs send/receive method and that works without a
> problem.
> What I am trying to do is recreate the rpool and underlying zfs filesystems
> (rpool/ROOT, rpool/s10_uXX, rpool/dump, rpool/swap).
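A rough sketch of the recreate-and-restore sequence, reusing the dataset names from your post; the disk name, zvol sizes, mount handling, and the archive path /backup/root.tar are all stand-ins that have to match your original system, and x86 would use installgrub instead of installboot:

  # recreate the pool and dataset layout
  zpool create -f rpool c0t0d0s0
  zfs create rpool/ROOT
  zfs create rpool/ROOT/s10_uXX
  zfs create -V 2g rpool/dump
  zfs create -V 2g rpool/swap
  # unpack the tar archive of the root filesystem into the new BE
  cd /rpool/ROOT/s10_uXX && tar xf /backup/root.tar
  # point the pool at the restored BE and make the disk bootable (SPARC)
  zpool set bootfs=rpool/ROOT/s10_uXX rpool
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0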
On Sat, Aug 06, 2011 at 07:45:31PM +0200, Roy Sigurd Karlsbakk wrote:
> > Might this be the SATA drives taking too long to reallocate bad
> > sectors? This is a common problem "desktop" drives have, they will
> > stop and basically focus on reallocating the bad sector as long as it
> > takes, which can be longer than the controller is willing to wait.
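Where the drive firmware supports SCT ERC, smartmontools can query and cap that recovery time (the device path is a placeholder; the values are deciseconds, so 70 = 7.0 seconds):

  # query the current error-recovery setting
  smartctl -l scterc /dev/rdsk/c8t3d0
  # cap read/write recovery at 7s so the drive answers before it gets dropped
  smartctl -l scterc,70,70 /dev/rdsk/c8t3d0

Many drives forget this setting across a power cycle, so it has to be reapplied at boot.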
On 2011-Aug-08 17:12:15 +0800, Andrew Gabriel wrote:
>periodic scrubs to cater for this case. I do a scrub via cron once a
>week on my home system. Having almost completely filled the pool, this
>was taking about 24 hours. However, now that I've replaced the disks and
>done a send/recv of the data
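For anyone timing their own pools, progress and elapsed time are visible while a scrub runs (pool name is a placeholder):

  # shows scrub progress, and the total time once complete
  zpool status tank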
On Mon, Aug 01, 2011 at 01:25:35PM +1000, Daniel Carosone wrote:
> To be clear, the system I was working on the other day is now running
> with a normal ashift=9 pool, on a mirror of WD 2TB EARX. Not quite
> what I was hoping for, but hopefully it will be OK; I won't have much
> chance to mess with it.
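For the record, one common way to confirm what a pool ended up with (pool name is a placeholder; ashift=9 means 512-byte alignment, ashift=12 means 4K):

  # print the cached pool config and pull out the alignment shift
  zdb -C tank | grep ashift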