Victor Latushkin wrote:
Erik Trimble wrote:
ZFS no longer has the issue where loss of a single device (even
intermittently) causes pool corruption. That's been fixed.
Erik, it does not help at all when you talk about some issue
being fixed but do not provide the corresponding CR number.
> You really do need ECC RAM, but for the naysayers:
> http://www.cs.toronto.edu/%7Ebianca/papers/sigmetrics09.pdf
There are people who still question that? Really?
From section 3.2, Errors per DIMM, in that paper:
"The mean number of correctable errors per DIMM are more comparable,
I'm not sure if this is a bug or something. I tried researching it but came
up dry; it's hard to come up with the right keywords. Anyway, we have been
using OSOL 2008.11 as a file server just fine, following instructions very
similar to this
http://blog.scottlowe.org/2006/08/15/solaris-10-and-active-dire
You really do need ECC RAM, but for the naysayers:
http://www.cs.toronto.edu/%7Ebianca/papers/sigmetrics09.pdf
-- richard
On Oct 10, 2009, at 4:11 PM, Bob Friesenhahn wrote:
On Sat, 10 Oct 2009, tak ar wrote:
I think IOPS is important for a mail server, so the ZIL is useful.
The server has 48GB RAM and two X25-E (32GB) devices, mirrored with ZFS
or hardware, for the ZIL (slog). I understand the ZIL needs half of the RAM.
There is a dif
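A minimal sketch of attaching those two X25-Es as a mirrored slog to an
existing pool, assuming hypothetical device names (c1t2d0 and c1t3d0 are
placeholders, not devices from this thread):

  # attach a mirrored log (slog) vdev to the pool
  zpool add tank log mirror c1t2d0 c1t3d0
  # confirm the logs section of the pool shows the mirror
  zpool status tank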
Hua wrote:
I understand that usually a zfs filesystem needs to be created inside a zpool
to store files/data.
However, a quick test shows that I can actually put files directly inside a
mounted zpool without creating any zfs filesystem.
After
zpool create -f tank c0d1
I actually can copy/delete any files into /tank.
On Sat, 10 Oct 2009, tak ar wrote:
Use the BBWC to maintain high IOPS when the X25-E's write cache is disabled?
It should certainly help. Note that in this case your relatively
small battery-backed memory is accepting writes for both the X25-E and
for the disk storage, so the BBWC memory becomes
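For reference, and only as a sketch: on Solaris the per-drive write cache can
often be inspected and toggled with format in expert mode, though whether this
works for the X25-E depends on the driver and on whether the RAID controller
exposes the disk directly:

  format -e
  (select the X25-E from the disk list)
  cache
  write_cache
  display
  disable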
On Sun, Oct 11, 2009 at 01:00, Hua wrote:
> I understand that usually a zfs filesystem needs to be created inside a zpool
> to store files/data.
>
> However, a quick test shows that I can actually put files directly inside a
> mounted zpool without creating any zfs filesystem.
>
> After
> zpool create -f tank c0d1
>
> I
I understand that usually a zfs filesystem needs to be created inside a zpool
to store files/data.
However, a quick test shows that I can actually put files directly inside a
mounted zpool without creating any zfs filesystem.
After
zpool create -f tank c0d1
I actually can copy/delete any files into /tank. I can also c
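A quick illustration of why this works (same commands as above, layout
assumed): zpool create also creates a root dataset named after the pool and
mounts it at /tank, which is why files can be copied there directly; child
filesystems are optional:

  zpool create -f tank c0d1
  zfs list tank            # the root dataset "tank" is mounted at /tank
  zfs create tank/data     # an optional child filesystem, mounted at /tank/data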
Hi! Thanks for the reply.
> The BBWC is much more useful than the write cache on the X25-E since
> the X25-E's write cache is volatile and therefore may cause harm to
> your data. According to reports I have seen, the X25-E write IOPS
> reduces by a factor of five when its write cache is disabled.
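As a rough worked number, using an assumed figure rather than anything from
this thread: if an X25-E sustains on the order of 3,000 random write IOPS with
its cache enabled, a factor-of-five reduction would leave roughly 600 IOPS with
the cache disabled.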
Erik Trimble wrote:
ZFS no longer has the issue where loss of a single device (even
intermittently) causes pool corruption. That's been fixed.
Erik, it does not help at all when you talk about some issue
being fixed but do not provide the corresponding CR number. It does not
allow intere
On Oct 10, 2009, at 01:26, Erik Trimble wrote:
That is, there used to be an issue in this scenario:
(1) zpool constructed from a single LUN on a SAN device
(2) SAN experiences temporary outage, while ZFS host remains running.
(3) zpool is permanently corrupted, even if no I/O occurred during the
outage.
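For anyone hitting a transient outage like this today, a minimal
check-and-recover sketch (the pool name "tank" is a placeholder):

  zpool status -x      # lists pools that are degraded or have errors
  zpool clear tank     # clears transient error counters once the LUN is back
  zpool scrub tank     # re-verifies checksums across the whole pool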
As just a home tinkerer with small needs... I've already run into
situations where I've created a zfs fs for some purpose... and months
later forgotten what it was for or supposed to do or hold.
I may recognize the files and directories... but have forgotten why
it's in this particular fs as opposed
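One way to leave yourself a note, offered only as a suggestion: ZFS user
properties can hold a free-form description on each dataset (the property name
and dataset below are made up):

  zfs set info:purpose="photo imports, safe to prune after backup" tank/photos
  zfs get info:purpose tank/photos
  zfs get -r info:purpose tank      # show the notes for every dataset in the pool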
On Oct 10, 2009, at 8:19 AM, Erik Trimble wrote:
Shawn Joy wrote:
>If you don't give ZFS any redundancy, you risk losing your pool if
there is data corruption.
Is this the same risk for data corruption as UFS on hardware-based
LUNs?
It's a tradeoff. ZFS has more issues with loss of connectivity
Thank you so much for the detail. The 10GbE is attached to a 10GbE port on a
VMware ESX server. I am trying to use NFS for VMware. When I bought the SSDs
I was after low seek time, not necessarily total bandwidth. I can add devices
over time to get the bandwidth up. I am puzzled why even my
On Fri, 9 Oct 2009, tak ar wrote:
Hi! I bought x4270 servers for a (write-heavy) mail server, and am
waiting for delivery. They have two Intel X25-E SSDs (for the ZIL) and
HDDs. The x4270 servers have a hardware RAID card based on Adaptec's RAID
5805 adapter, which has 256MB of BBWC.
SSD has write cache and RAI
On Fri, 9 Oct 2009, Derek Anderson wrote:
I created an NFS filesystem for VMware by using: zfs create
SSD/vmware. I had to set permissions for VMware (anon=0), but that's
it. Below is what zpool iostat reads:
File copy 10GbE to SSD -> 40M max
My clients here do better than that over gigabit
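A minimal sketch of the sharing step being described, with the exact option
string assumed beyond anon=0:

  zfs create SSD/vmware
  zfs set sharenfs='rw,anon=0' SSD/vmware
  zfs get sharenfs SSD/vmware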
Shawn Joy wrote:
>If you don't give ZFS any redundancy, you risk losing your pool if
there is data corruption.
Is this the same risk for data corruption as UFS on hardware-based LUNs?
It's a tradeoff. ZFS has more issues with loss of connectivity to the
underlying LUN than UFS, while UFS has
>If you don't give ZFS any redundancy, you risk losing your pool if
there is data corruption.
Is this the same risk for data corruption as UFS on hardware-based LUNs?
If we present one LUN to ZFS and choose not to ZFS mirror or do a raidz
pool of that LUN, is ZFS able to handle disk or raid co
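Not an answer from the thread, but a related knob for a single-LUN pool:
copies=2 makes ZFS store two copies of each block so it can repair the
corruption it detects, at the cost of roughly half the usable space; it does
not protect against losing the LUN itself (pool and device names are
placeholders):

  zpool create tank c0t0d0    # single LUN, no ZFS-level mirror or raidz
  zfs set copies=2 tank
  zfs get copies tank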