It seems we are hitting a limit with zfs send/receive over a network
link (10Gb/s). We see peak values of up to 150MB/s, but on average
only about 40-50MB/s are replicated. This is far from the bandwidth that
a 10Gb link can offer.
Is it possible that ZFS is giving replication a too
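One common way to check whether the transport, rather than ZFS itself, is the
bottleneck is to put a buffer between send and receive, so the sender is not
stalled every time the receiver blocks. A rough sketch, assuming a snapshot
tank/data@snap, a target pool named backup, and mbuffer installed on both hosts:

  # plain pipe over ssh - send and receive run in lock-step
  zfs send tank/data@snap | ssh recvhost "zfs receive backup/data"

  # buffered variant - mbuffer smooths out bursts on both ends
  zfs send tank/data@snap | mbuffer -s 128k -m 1G | \
      ssh recvhost "mbuffer -s 128k -m 1G | zfs receive backup/data"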
Bob Friesenhahn wrote:
Striping across two large raidz2s is not ideal for multi-user use. You
are getting the equivalent of two disks' worth of IOPS, which does not
go very far. More, smaller raidz vdevs or mirror vdevs would be
better. Also, make sure that you have plenty of RAM installed.
F
I've read the same blog entry, and was also thinking about ZFS...
Pillar Data Systems is also answering the call
http://blog.pillardata.com/pillar_data_blog/2008/08/blog-i-love-a-p.html
BTW: Would transparent compression be considered cheating? :-)
> Leave the default recordsize. With 128K recordsize,
> files smaller than
If I turn zfs compression on, does the recordsize influence the compressratio
in any way?
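As far as I understand it, compression is applied per record, so the recordsize
bounds how much data the compressor sees at once (and compressed records are
rounded up to whole sectors). A quick way to compare, assuming a pool named tank
and made-up dataset names:

  zfs create -o compression=on -o recordsize=128k tank/rs128
  zfs create -o compression=on -o recordsize=8k tank/rs8
  # copy the same files into both datasets, then compare:
  zfs get compressratio tank/rs128 tank/rs8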
Adam Leventhal wrote:
> Yes. The Sun Storage 7000 Series uses the same ZFS that's in OpenSolaris
> today. A pool created on the appliance could potentially be imported on an
> OpenSolaris system; that is, of course, not explicitly supported in the
> service contract.
>
Would be interesting to he
Ulrich Graef wrote:
> You need not wade through your paper...
> ECC theory tells us that you need a minimum distance of 3
> to correct one error in a codeword, ergo neither RAID-5 nor RAID-6
> is enough: you need RAID-2 (which nobody uses today).
>
> Raid-Controllers today take advantage of the fa
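For reference, the bound Ulrich is citing is the usual minimum-distance rule:
correcting t errors needs d_min >= 2t + 1, so correcting a single error (t = 1)
indeed requires a minimum distance of 3.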
Hi
Updated from snv 101 to 105 today. I wanted to do a zfs send/receive to a
new zpool, forgetting that the new pool was a newer version.
zfs send timed out after a while, but it was impossible to kill the receive
process.
Shouldn't the zfs receive command just fail with a "wrong vers
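As a side note, the versions involved can be checked up front before starting a
long send; pool names below are placeholders:

  zpool get version oldpool newpool   # compare the on-disk pool versions
  zpool upgrade -v                    # list the versions this build supports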
Henri Meddox wrote:
> Hi Folks,
> call me a learner ;-)
>
> I've got a crazy problem with "zpool list" and the size of my pool:
>
> created "zpool create raidz2 hdd1 hdd2 hdd3" - each hdd is about 1GB.
>
> zpool list shows me a size of 2.95GB - shouldn't this be only 1GB?
>
> After creating a file
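If I remember correctly, zpool list reports the raw capacity of all devices,
parity included, while zfs list shows the space that is actually usable after
redundancy. Something like this shows the difference (pool and device names are
made up):

  zpool create tank raidz2 c1t1d0 c1t2d0 c1t3d0
  zpool list tank   # ~3GB: the sum of all three disks, parity included
  zfs list tank     # roughly one disk's worth usable with raidz2 on 3 disks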
Mika Borner wrote:
>
> You're lucky. Ben just wrote about it :-)
>
> http://www.cuddletech.com/blog/pivot/entry.php?id=1013
>
Oops, should have read your message completely :-) Anyway you can
"learn" something from it...
Andras Spitzer wrote:
Is it worth moving the redundancy from the SAN array layer to the ZFS layer?
(Configuring redundancy on both layers sounds like a waste to me.) There are
certain advantages to having redundancy configured on the array (beyond the
protection against simple disk failure).
Hi
Now that Solaris 10 06/06 is finally downloadable I have some questions
about ZFS.
-We have a big storage system supporting RAID5 and RAID1. At the moment,
we only use RAID5 (for non-Solaris systems as well). We are thinking
about using ZFS on those LUNs instead of UFS. As ZFS on Hardware RAID5
>The vdev can handle dynamic LUN growth, but the underlying VTOC or
>EFI label may need to be zeroed and reapplied if you set up the initial
>vdev on a slice. If you introduced the entire disk to the pool you
>should be fine, but I believe you'll still need to offline/online the pool.
Fin
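For what it's worth, a rough sketch of growing a pool after the LUN behind it
has been resized; whether this works automatically depends on the ZFS build
(autoexpand and 'zpool online -e' only exist in newer bits), and the pool and
device names are placeholders:

  zpool set autoexpand=on tank    # newer builds: expand automatically on reopen
  zpool online -e tank c2t0d0     # explicitly ask ZFS to use the grown LUN
  # on older builds an export/import of the pool may be needed instead:
  zpool export tank && zpool import tank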
>I'm a little confused by the first poster's message as well, but you
>lose some benefits of ZFS if you don't create your pools with either
>RAID1 or RAIDZ, such as data corruption detection. The array isn't
>going to detect that because all it knows about are blocks.
That's the dilemma, the arra
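One middle ground, if the array keeps its RAID and ZFS only sees a single LUN,
is to let ZFS store extra copies of the data so it can repair blocks that fail
the checksum; a sketch with a made-up dataset name (copies only applies to data
written after the property is set):

  zfs set copies=2 tank/important   # two copies of each block, even on one LUN
  zpool scrub tank                  # verify checksums and repair from the copy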
>but there may not be filesystem space for double the data.
>Sounds like there is a need for a zfs-defragment-file utility perhaps?
>Or if you want to be politically cagey about the naming choice, perhaps,
>zfs-seq-read-optimize-file ? :-)
For data warehouse and streaming applications a
"seq-read-om
>RAID5 is not a "nice" feature when it breaks.
Let me correct myself... RAID5 is a "nice" feature for systems without
ZFS...
>Are huge write caches really an advantage? Or are you talking about
>huge write caches with non-volatile storage?
Yes, you are right. The huge cache is needed mostly beca
>given that zfs always does copy-on-write for any updates, it's not
>clear why this would necessarily degrade performance...
Writing should be no problem, as it is serialized... but when both
database instances are reading a lot of different blocks at the same
time, the spindles might "heat up".
Hi
We are thinking about moving away from our magneto-optical based archive system
(WORM technology). At the moment, we use a volume manager which virtualizes
the WORMs in the jukebox and presents them as UFS filesystems. The volume
manager automatically does asynchronous replication to an id
Hi
We have the following scenario/problem:
Our zpool resides on a single LUN on a Hitachi Storage Array. We are
thinking about making a physical clone of the zpool with the ShadowImage
functionality.
ShadowImage takes a snapshot of the LUN, and copies all the blocks to a
new LUN (physical copy). In
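One thing to keep in mind with a block-level clone like this: the copy carries
the same pool name and GUID as the original, so the usual approach is to present
it to another host (or import it only while the original is exported) and rename
it on import; a rough sketch with a placeholder numeric id:

  zpool import                          # list importable pools and their ids
  zpool import -f 1234567890 tankcopy   # import the clone by id under a new name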
Here's an interesting read about forthcoming Oracle 11g file system
performance. Sadly, there is no information about how this works.
It will be interesting to compare it with ZFS performance, as soon as ZFS is
tuned for databases.
"Speed and performance will be the hallmark of the 11g, s
> Unfortunately, the T1000 only has a
> single drive bay (!) which makes it impossible to
> follow our normal practice of mirroring the root file
You can replace the existing 3.5" disk with two 2.5" disks (quite cheap)
//Mika
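With a second disk in place, and assuming a ZFS root pool (rpool) rather than
UFS/SVM, attaching the mirror is roughly the following; device names and the
slice are placeholders, and the boot block still has to be put on the new disk
(SPARC syntax shown):

  zpool attach rpool c0t0d0s0 c0t1d0s0
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0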