On Dec 26, 2010, at 5:33 AM, Jackson Wang wrote:
> Dear Richard,
> Thanks for your reply.
>
> Actually there is NO other disk/controller fault in this system. An
> engineer from Nexenta, Andrew, just added the line
> "allow-bus-device-reset=0" to /kernel/drv/sd.conf on the NexentaStor system…
Do you have SSDs in it? Which ones, and are there any errors on those?
On 26 Dec 2010 13:35, "Jackson Wang" wrote:
> Dear Richard,
> Thanks for your reply.
>
> Actually there is NO other disk/controller fault in this system. An
> engineer from Nexenta, Andrew, just added the line
> "allow-bus-device-reset=0"…
Dear Richard,
Thanks for your reply.
Actually there is NO other disk/controller fault in this system. An
engineer from Nexenta, Andrew, just added the line
"allow-bus-device-reset=0" to /kernel/drv/sd.conf on the NexentaStor system,
and then the resilver speed went up. Before the parameter a…
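For reference, a sketch of the change described above; verify the exact
syntax against sd.conf on the target build before applying it:

  # /kernel/drv/sd.conf: tell the sd driver not to issue SCSI bus/device
  # resets during error recovery (the resets were stalling the resilver)
  allow-bus-device-reset=0;

  # re-read the sd driver configuration (a reboot also works)
  update_drv -f sd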
On Dec 21, 2010, at 8:18 AM, Jackson Wang wrote:
> Dear Richard,
> Dear Richard,
> I am a Nexenta user, and now I am hitting the same problem of a resilver
> taking too long. I tried the suggestion from the link in your post, "zfs
> set resilver_speed=10% pool_name", but Nexenta does not have that property…
Dear Richard,
How can I get the important ZFS fixes onto NexentaStor? My current version of
NexentaStor is v3.0.4 Enterprise.
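A sketch of one possible upgrade path, assuming the apt-based packaging that
Nexenta systems use; the commands below are assumptions, and the supported
procedure for v3.0.4 Enterprise should come from Nexenta's own upgrade notes:

  apt-get update      # refresh the Nexenta package index (assumed tooling)
  apt-clone upgrade   # upgrade into a new boot environment, keeping rollback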
Dear Richard,
I am a Nexenta user, and now I am hitting the same problem of a resilver
taking too long. I tried the suggestion from the link in your post, "zfs
set resilver_speed=10% pool_name", but Nexenta does not have a
resilver_speed property. How can I solve this issue on Nexenta?
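There is indeed no resilver_speed property on these builds. A sketch of the
knobs that did exist in 2010-era OpenSolaris/Nexenta kernels; the tunable
names are assumptions for that vintage, so verify them on the running system:

  # inspect the current throttle values via the kernel debugger
  echo zfs_resilver_delay/D | mdb -k
  echo zfs_resilver_min_time_ms/D | mdb -k

  # to favor resilver I/O over application I/O, e.g. in /etc/system
  # (takes effect after a reboot):
  #   set zfs:zfs_resilver_delay = 0
  #   set zfs:zfs_resilver_min_time_ms = 3000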
Err...I meant Nexenta Core.
-J
On Mon, Sep 27, 2010 at 12:02 PM, Jason J. W. Williams <
jasonjwwilli...@gmail.com> wrote:
> 134 it is. This is an OpenSolaris rig that's going to be replaced within
> the next 60 days, so I just need to get it to something that won't throw
> false checksum errors…
134 it is. This is an OpenSolaris rig that's going to be replaced within the
next 60 days, so I just need to get it to something that won't throw false
checksum errors like the 120-123 builds do and has decent rebuild times.
Future boxes will be NexentaStor.
Thank you guys. :)
-J
On Sun, Sep 26…
On Sep 26, 2010, at 1:16 PM, Roy Sigurd Karlsbakk wrote:
>>> Upgrading is definitely an option. What is the current snv favorite
>>> for ZFS stability? I apologize; with all the Oracle/Sun changes I
>>> haven't been paying as close attention to bug reports on zfs-discuss
>>> as I used to.
>>
>> OpenIndiana b147 is the latest binary release…
> > Upgrading is definitely an option. What is the current snv favorite
> > for ZFS stability? I apologize; with all the Oracle/Sun changes I
> > haven't been paying as close attention to bug reports on zfs-discuss
> > as I used to.
>
> OpenIndiana b147 is the latest binary release, but it is also in…
On Sep 26, 2010, at 11:03 AM, Jason J. W. Williams wrote:
> Upgrading is definitely an option. What is the current snv favorite for ZFS
> stability? I apologize; with all the Oracle/Sun changes I haven't been paying
> as close attention to bug reports on zfs-discuss as I used to.
OpenIndiana b147…
Upgrading is definitely an option. What is the current snv favorite for ZFS
stability? I apologize; with all the Oracle/Sun changes I haven't been paying
as close attention to bug reports on zfs-discuss as I used to.
-J
Sent via iPhone
On Sep 26, 2010, at 10:22, Roy…
On Sun, 26 Sep 2010, Edward Ned Harvey wrote:
27G on a 6-disk raidz2 means approx 6.75G per disk. Ideally, the
disk could write 7G = 56 Gbit in a couple of minutes if it were all
sequential with no other activity on the system. So you're right to
suspect something is suboptimal, but the root cause…
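Spelling out that arithmetic (it assumes a 6-disk raidz2 stores data on 4 of
the 6 disks, the other two holding parity):

  27 GB / 4 data disks ≈ 6.75 GB rewritten onto the replaced disk
  ~7 GB x 8 bits/byte = 56 Gbit; at ~500 Mbit/s sequential, about 2 minutes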
- Original Message -
I just witnessed a resilver that took 4h for 27 GB of data. Setup is 3x raid-z2
stripes with 6 disks per raid-z2. Disks are 500 GB in size. No checksum errors.
It seems like an exorbitantly long time. The other 5 disks in the stripe with
the replaced disk were at…
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jason J. W. Williams
>
> I just witnessed a resilver that took 4h for 27 GB of data. Setup is 3x
> raid-z2 stripes with 6 disks per raid-z2. Disks are 500 GB in size. No
> checksum errors.
27G on a 6-disk raidz2 means approx 6.75G per disk…
I just witnessed a resilver that took 4h for 27 GB of data. Setup is 3x raid-z2
stripes with 6 disks per raid-z2. Disks are 500 GB in size. No checksum errors.
It seems like an exorbitantly long time. The other 5 disks in the stripe with
the replaced disk were at 90% busy and ~150 IO/s each during…
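For anyone watching the same thing, a sketch with the standard tools ("tank"
is a placeholder pool name):

  zpool status -v tank   # resilver progress, rate, and time-to-go estimate
  iostat -xn 5           # per-device busy (%b) and r/s, w/s in 5s intervals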