On 2011-Jul-05 21:03:50 +0800, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Orvar Korvar
...
>> I suspect the problem is because I changed to AHCI.
>
> This is normal, no matter what OS you have. It's the hard
Reading through this page
(http://dlc.sun.com/osol/docs/content/ZFSADMIN/gbbwl.html), it seems like all I
need to do is 'rm' the file. The problem is finding it in the first place. Near
the bottom of this page it says:
"
If the damage is within a file data block, then the file can safely be rem
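For reference, 'zpool status -v' prints the paths of any files with
permanent errors, so the find-and-remove sequence should look something
like this (the pool name 'tank' and the file path are placeholders):

  # zpool status -v tank
  ...
  errors: Permanent errors have been detected in the following files:
          /tank/path/to/damaged-file
  # rm /tank/path/to/damaged-file
  # zpool clear tank
  # zpool scrub tank     # re-check that the error is really gone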
Thanks for your help,
I did a zpool clear and now this happens:
pool: tank
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
I have already formatted one disk, so I cannot try this anymore.
(But importing the zpool named "rpool" and then exporting it again was
successful. I can now use the disk as usual. This did not work on the
other disk, so I formatted it.)
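(For the record, the cycle that worked on the first disk was just a plain
import and export; 'rpool' here is the pool name as shown by the listing:)

  # zpool import          # lists pools available for import
  # zpool import rpool
  # zpool export rpool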
2011-07-05 19:21, Paul Kraus wrote:
While I agree that you should not change the controller mode with data
behind it, I did do that on a Supermicro system running OpenSuSE and
Linux LVM mirrors with no issues. I suspect that is because Linux loads
the AHCI drivers in the "mini-root" (to use a Sun term).
On Tue, Jul 5, 2011 at 12:54 PM, Lanky Doodle wrote:
> OK, I have finally settled on hardware;
> 2x LSI SAS3081E-R controllers
Beware that this controller does not support drives larger than 2TB.
--
Trond Michelsen
On Tue, Jul 5, 2011 at 9:11 AM, Fajar A. Nugraha wrote:
> On Tue, Jul 5, 2011 at 8:03 PM, Edward Ned Harvey wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Orvar Korvar
>>>
>>> Here is my problem:
>>> I have a 1.5TB disk with OpenSolaris (b134, b151a) using non-AHCI.
On Tue, 5 Jul 2011, Lanky Doodle wrote:
I am still undecided as to how to group the disks. I have read
elsewhere that raid-z1 is best suited to either 3 or 5 disks and
raid-z2 is better suited to 6 or 10 disks - is there any truth in
this, although I think this was in reference to 4K sector drives.
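For what it's worth, the usual reasoning behind those widths (my
assumption, not something stated above): with a power-of-two number of
data disks, a 128 KiB record divides evenly across the stripe. raidz1
with 5 disks leaves 4 data disks, and 128 KiB / 4 = 32 KiB per disk, an
exact multiple of a 4 KiB sector; raidz2 with 6 or 10 disks leaves 4 or
8 data disks, which divides just as cleanly.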
2011-07-05 17:11, Fajar A. Nugraha wrote:
On Tue, Jul 5, 2011 at 8:03 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Orvar Korvar
Here is my problem:
I have a 1.5TB disk with OpenSolaris (b134, b151a) using non-AHCI.
Ok, so I switch back and then I have my data back?
But it does not work, because while I was switched over I tried a "zpool
import", which messed up the drive. Then I switched back, but my data is
still not accessible.
Earlier, when I switched, I did not do a "zpool import", and when I switched
Hello,
Are you certain that after the outage your disks are indeed accessible?
* What does BIOS say?
* Are there any errors reported in "dmesg" output or the
"/var/adm/messages" file?
* Does the "format" command return in a timely manner?
** Can you access and print the disk labels in the "format" command?
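For example, on a stock Solaris install (standard log locations assumed):

  # dmesg | tail
  # tail -100 /var/adm/messages
  # echo | format          # lists all disks non-interactively; should return promptly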
On Tue, Jul 5, 2011 at 7:47 AM, Lanky Doodle wrote:
> Thanks.
>
> I ruled out the SAS2008 controller as my motherboard is only PCIe 1.0 so
> would not have been able to make the most of the difference in increased
> bandwidth.
Only PCIe 1.0? What chipset is that based on? Might be worthwhile to
On Tue, Jul 5, 2011 at 8:03 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Orvar Korvar
>>
>> Here is my problem:
>> I have a 1.5TB disk with OpenSolaris (b134, b151a) using non-AHCI.
>> I then changed to AHCI in BIOS, which results in severe problems: I
>> cannot boot t
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Orvar Korvar
>
> Here is my problem:
> I have a 1.5TB disk with OpenSolaris (b134, b151a) using non-AHCI.
> I then changed to AHCI in BIOS, which results in severe problems: I
> cannot boot t
On Tue, Jul 5, 2011 at 6:54 AM, Lanky Doodle wrote:
> OK, I have finally settled on hardware;
>
> 2x LSI SAS3081E-R controllers
> 2x Seagate Momentus 5400.6 rpool disks
> 15x Hitachi 5K3000 'data' disks
>
> I am still undecided as to how to group the disks. I have read elsewhere that
> raid-z1 is best suited to either 3 or 5 disks and raid-z2 is better suited
> to 6 or 10 disks - is there any truth in this, although I think this was in
> reference to 4K sector drives.
Ok, so now I have no idea what to do. The scrub is not working either. The pool
is only 3x 1.5TB drives, so it should not take this long. Does anyone know what
I should do next?
pool: tank
state: DEGRADED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
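If the scrub seems stuck, you can at least check its progress and stop it
(standard zpool syntax; 'tank' as in the output above):

  # zpool status tank      # shows "scrub in progress" with a percent complete
  # zpool scrub -s tank    # stops the running scrub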
Hey guys,
I had a zfs system in raidz1 that was working until there was a power outage
and now I'm getting this:
pool: tank
state: DEGRADED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
see
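Assuming the disks are visible again after the outage, the suggested
recovery sequence would be ('tank' as in the output above):

  # zpool clear tank
  # zpool status -v tank   # check whether errors remain
  # zpool scrub tank       # verify the whole pool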
Thanks.
I ruled out the SAS2008 controller as my motherboard is only PCIe 1.0, so I
would not have been able to make the most of the increased bandwidth.
I can't see myself upgrading every few months (my current WHZ build has lasted
over 4 years without a single change) so by the tim
The LSI2008 chipset is supported and works very well.
I would actually use 2 vdevs, 8 disks in each, and configure each vdev as
raidz2. Maybe use one hot spare.
I also have personal, subjective reasons: I like the number 8 in computers.
7 is an ugly number. Everything is b
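As a sketch of that layout (the c#t#d# device names are placeholders;
substitute whatever 'format' reports on your system):

  # zpool create tank \
      raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      spare c2t0d0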
OK, I have finally settled on hardware;
2x LSI SAS3081E-R controllers
2x Seagate Momentus 5400.6 rpool disks
15x Hitachi 5K3000 'data' disks
I am still undecided as to how to group the disks. I have read elsewhere that
raid-z1 is best suited to either 3 or 5 disks and raid-z2 is better suited
to 6 or 10 disks - is there any truth in this, although I think this was in
reference to 4K sector drives.