Good news, great to hear you got your data back.
Victor is a legend, I for one am very glad he's been around to fix these kinds
of problems. I imagine he's looking forward to a well-earned rest once the
automatic recovery tools become available :)
Yes, Victor is amazing. He has also helped us to recover a lot of data we
did not have backed up. I am forever grateful for his skills and
willingness to help!
On Fri, Oct 9, 2009 at 4:58 AM, Ross wrote:
> Good news, great to hear you got your data back.
>
> Victor is a legend, I for one am very glad he
I've got a mail machine here that I built using ZFS boot/root. It's been
having some major I/O performance problems, which I posted once before... but
that post seems to have disappeared.
Now I've managed to obtain another identical machine, and I've built it in the
same way as the original.
On 09 October, 2009 - Brandon Hume sent me these 2,0K bytes:
> I've got a mail machine here that I built using ZFS boot/root. It's
> been having some major I/O performance problems, which I posted once
> before... but that post seems to have disappeared.
>
> Now I've managed to obtain another id
I think the raid card is a re-branded LSI SCSI raid. I have an LSI 21320-4x and
am having the same problem with ZFS.
Do you have a BBU on the card? You may want to disable cache flush and the ZIL
and see how it works. I tried passthrough and the result was basically the same.
I gave up on tuning this card with ZFS.
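For reference, on the OpenSolaris/Solaris builds of that era the usual
(diagnostic-only) way to try that was, if I have the tunable names right,
either at runtime:

  echo zfs_nocacheflush/W0t1 | mdb -kw
  echo zil_disable/W0t1 | mdb -kw

or persistently via /etc/system:

  set zfs:zfs_nocacheflush = 1
  set zfs:zil_disable = 1

Bear in mind that disabling the ZIL can lose the last few seconds of
synchronous writes after a crash, so treat it as a test, not a fix.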
> >> CFs designed for the professional photography market have better
> >> specifications than CFs designed for the consumer market.
> >>
> >
> > CF is pretty cheap, you can pick up 16GB-32GB from $80-$200 depending on
> > brand/quality. Assuming they do incorporate wear leveling, and
Before you do a dd test, try first to do:
echo zfs_vdev_max_pending/W0t1 | mdb -kw
and let us know if it helped or not.
iostat -xnz 1
output while you are doing the dd would also help.
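If it makes no difference, you can check the current value with
echo zfs_vdev_max_pending/D | mdb -k and write the old number back the same way
(it was 35 on the builds I have seen, but check yours first). For the dd run
itself, something along these lines is usually enough, assuming a hypothetical
pool mounted at /tank and a file size larger than RAM so the ARC doesn't hide
the disks:

  dd if=/dev/zero of=/tank/ddtest bs=1024k count=8192

with the iostat running in another terminal at the same time.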
--
Robert Milkowski
http://milek.blogspot.com
Hi All,
It's been a while since I touched zfs. Is the below still the case with zfs and
a hardware raid array? Do we still need to provide two LUNs from the hardware
raid and then zfs mirror those two LUNs?
http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid
Thanks,
Shawn
Shawn Joy wrote:
> Hi All,
> It's been a while since I touched zfs. Is the below still the case with zfs
> and a hardware raid array? Do we still need to provide two LUNs from the
> hardware raid and then zfs mirror those two LUNs?
> http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid
Need, no
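For the record, the layout the FAQ is talking about is just a plain ZFS mirror
across the two array LUNs, e.g. (hypothetical device names):

  zpool create tank mirror c2t0d0 c3t0d0

That way ZFS has a second copy of every block to repair from when a checksum
error turns up.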
> If you don't give ZFS any redundancy, you risk losing your pool if there is
> data corruption.
Is this the same risk for data corruption as UFS on hardware-based LUNs?
If we present one LUN to ZFS and choose not to ZFS mirror or do a raidz pool of
that LUN, is ZFS able to handle disk or raid co
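(Side note rather than a full answer: on a single LUN, ZFS still detects
corruption through its checksums, it just has nothing to repair from. One
partial mitigation people mention is ditto blocks for data, e.g.

  zfs set copies=2 tank/data

but that only helps with isolated bad blocks, not with losing the whole LUN or
the controller.)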
Shawn Joy wrote:
> Ian Collins wrote:
>> Shawn Joy wrote:
>>> Hi All,
>>> It's been a while since I touched zfs. Is the below still the case
>>> with zfs and a hardware raid array? Do we still need to provide two
>>> LUNs from the hardware raid and then zfs mirror those two LUNs?
>>> http://www.opensolaris.org/os/commun
GigE wasn't giving me the performance I had hoped for, so I sprang for some
10GbE cards. So what am I doing wrong?
My setup is a Dell 2950 without a raid controller, just a SAS6 card. The setup
is as follows:
mirror rpool (boot) SAS 10K
raidz SSD 467 GB on 3 Samsung 256 MLC SSDs (220MB/s each
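(For context, the SSD data pool described above would have been created with
something along the lines of, using hypothetical device names:

  zpool create tank raidz c0t2d0 c0t3d0 c0t4d0

i.e. a single three-disk raidz vdev.)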
ZFS no longer has the issue where loss of a single device (even
intermittently) causes pool corruption. That's been fixed.
That is, there used to be an issue in this scenario:
(1) zpool constructed from a single LUN on a SAN device
(2) SAN experiences temporary outage, while ZFS host remains ru
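(How the pool reacts when that single LUN disappears is governed by its
failmode property nowadays; assuming a pool called tank:

  zpool get failmode tank
  zpool set failmode=continue tank

where the possible values are wait, continue and panic, with wait being the
default.)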
Hi! I bought x4270 servers for a (write-heavy) mail server and am waiting for
delivery. They have two Intel X25-E SSDs (for the ZIL) and HDDs. The x4270
servers have a hardware RAID card based on Adaptec's RAID 5805 adapter, which
has 256MB of BBWC. The SSD has a write cache and the RAID card also has BBWC.
When set write-
On Fri, Oct 9 at 22:51, tak ar wrote:
> When the answer is no, should I disable the SSD's write cache? I
> think a disabled write cache reduces the usable lifetime of the
> SSD, because wear-leveling on the SSD is not applied.
I don't think their wear leveling requires the write cache to be enabled.
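If you do end up toggling it, the per-drive write cache on Solaris can usually
be flipped from format's expert mode, along the lines of:

  format -e
  (pick the SSD, then: cache -> write_cache -> disable)

assuming the controller presents the drive directly rather than hiding it
behind a RAID volume.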
--
Eric D