> I'm building my new storage server, all the parts should come in this week...
How did it turn out? Did 8x1TB drives seem to be the correct number, or a
couple too many? (Based on the assumption that you did not run out of space;
I mean solely from a performance / 'ZFS usability' standpoint.)
> Hi guys, I am about to reshape my data pool and am wondering what
> performance difference I can expect from the new config vs. the old.
>
> The old config is a pool with a single vdev of 8 disks in raidz2.
> The new pool config is 2 vdevs of 7-disk raidz2 in a single pool.
>
> I understand it should
> I'm building my new storage server, all the parts should come in this week.
> ...
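For concreteness, a sketch of roughly what the two layouts look like at
creation time (the pool and device names below are made up; the post does not
give them):

    # old: one 8-disk raidz2 vdev
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
        c1t4d0 c1t5d0 c1t6d0 c1t7d0

    # new: two 7-disk raidz2 vdevs striped in a single pool
    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

Broadly, random IOPS scale with the number of top-level vdevs, so the two-vdev
pool should roughly double random-I/O performance over the single vdev, at the
cost of two extra disks spent on parity.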
Another answer is here:
http://eonstorage.blogspot.com/2010/03/whats-best-pool-to-build-with-3-or-4.html
Rob
> I wanted to build a small backup (maybe also NAS) server using
That is a common question, and one I am trying to get answered (along with a
few others) here:
http://www.opensolaris.org/jive/thread.jspa?threadID=102368&tstart=0
Rob
> I'm currently planning on running FreeBSD with ZFS, but I wanted to
> double-check how much memory I'd need for it to be stable. The ZFS
> wiki currently says you can go as low as 1 GB, but recommends 2 GB;
> however, elsewhere I've seen someone claim that you need at least 4 GB.
> ...
> How a
References:
Thread: ZFS effective short-stroking and connection to thin provisioning?
http://opensolaris.org/jive/thread.jspa?threadID=127608
Confused about consumer drives and zfs can someone help?
http://opensolaris.org/jive/thread.jspa?threadID=132253
Recommended RAM for ZFS on various platforms
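If RAM really is tight, the usual FreeBSD-side mitigation is to cap the ARC in
/boot/loader.conf. A minimal sketch, with illustrative values only (tune them
to your machine):

    # /boot/loader.conf -- cap the ZFS ARC on a low-memory box
    vfs.zfs.arc_max="512M"
    # FreeBSD 7.x-era setups also commonly enlarged the kernel address space
    vm.kmem_size="1024M"
    vm.kmem_size_max="1024M"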
> On Sun, Jul 6, 2008 at 3:46 PM, Ross [EMAIL PROTECTED] wrote:
> For your second one I'm less sure what's going on:
> ... The problem is that a two-disk raid-z makes no sense.
> Traditionally, this level of RAID needs a minimum of three disks to work.
> I suspect ZFS may be interpreting raid-z a
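For what it's worth, zpool accepts both of the commands below, but only the
second is a sensible two-disk layout; a two-disk raid-z gives you roughly
mirror capacity with parity overhead on top (device names hypothetical):

    # accepted, but odd: raid-z across two disks
    zpool create test raidz c1t0d0 c1t1d0

    # the conventional two-disk configuration
    zpool create test mirror c1t0d0 c1t1d0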
> On July 14, 2008 7:49:58 PM -0500 Bob Friesenhahn
> <[EMAIL PROTECTED]> wrote:
> > With ZFS and modern CPUs, the parity calculation is
> surely in the noise to the point of being unmeasurable.
>
> I would agree with that. The parity calculation has *never* been a
> factor in and of itself. T
> Robert Milkowski wrote:
> During Christmas I managed to add my own compression to zfs - it was
> quite easy.
Great to see innovation, but unless your personal compression method is somehow
better (very fast with excellent compression), would it not be a better idea to
use an existing (lea
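For comparison, enabling the stock compression on an existing dataset is a
one-liner; a quick sketch (the dataset name tank/data is made up):

    # turn on the built-in compression and check what it achieves
    zfs set compression=lzjb tank/data
    zfs get compression,compressratio tank/data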
> I got overzealous with snapshot creation. Every 5 mins is a bad idea. Way too
> many.
> What's the easiest way to delete the empty ones?
> zfs list takes FOREVER
You might enjoy reading:
ZFS snapshot massacre
http://blogs.sun.com/chrisg/entry/zfs_snapshot_massacre.
(Yes, the "." is part of the URL.)
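A rough sketch of the cleanup that post describes, assuming "empty" means a
snapshot whose USED column is 0; inspect the list before piping it into
destroy:

    # -H drops the header and tab-separates, which scripts nicely
    zfs list -H -t snapshot -o name,used | \
        awk '$2 == "0" {print $1}' | xargs -n 1 zfs destroy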
> Peter Tribble wrote:
>> On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark wrote:
>> I have eight 10 GB drives.
>> ...
>> I have 6 remaining 10 GB drives and I desire to
>> "raid" 3 of them and "mirror" them to the other 3 to
>> give me raid s
> Solaris will allow you to do this, but you'll need to use SVM instead of ZFS.
>
> Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
> -- richard
Or run Linux ...
Richard, the ZFS Best Practices Guide says not to:
"Do not use the same disk or slice in both an SVM and ZFS configuration."
> Hi All,
> Is there any hope for deduplication on ZFS?
> Mertol Ozyoney
> Storage Practice - Sales Manager
> Sun Microsystems
> Email [EMAIL PROTECTED]
There is always hope.
Seriously though, looking at
http://en.wikipedia.org/wiki/Comparison_of_revision_control_software there are
a lot of choi
> Though possible, I don't think we would classify it as a best practice.
> -- richard
Looking at http://opensolaris.org/os/community/volume_manager/ I see:
"Supports RAID-0, RAID-1, RAID-5", "Root mirroring" and "Seamless upgrades and
live upgrades" (that would go nicely with my ZFS root mirror
> On Tue, 22 Jul 2008, Miles Nordin wrote:
> > scrubs making pools uselessly slow? Or should it be scrub-like so
> > that already-written filesystems can be thrown into the dedup bag and
> > slowly squeezed, or so that dedup can run slowly during the business
> > day over data written quickly at n
There may be some work being done to fix this:
zpool should support raidz of mirrors
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6485689
Discussed in this thread:
Mirrored Raidz (Posted: Oct 19, 2006 9:02 PM)
http://opensolaris.org/jive/thread.jspa?threadID=15854&tstart=0
Thi
Bump.
Some of the threads on this were last posted to over a year ago. I checked
6485689 and it is not fixed yet; is there any work being done in this area?
Thanks,
Rob
> There may be some work being done to fix this:
>
> zpool should support raidz of mirrors
> http://bugs.opensolaris.org/bugda
I am new to SX:CE (Solaris 11) and ZFS but I think I found a bug.
I have eight 10GB drives.
When I installed SX:CE (snv_91) I chose "3" ("Solaris Interactive Text (Desktop
Session)") and the installer found all my drives, but I told it to use only two,
giving me a 10 GB mirrored rpool.
Immediat
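For anyone trying to reproduce this, the layout the installer actually built
can be checked with:

    zpool status rpool
    zpool list rpool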
> Peter Tribble wrote:
> Because what you've created is a pool containing two
> components:
> - a 3-drive raidz
> - a 3-drive mirror
> concatenated together.
>
OK. It seems odd that ZFS would allow that (would people want that
configuration instead of what I am attempting to do?).
> I think that w
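A sketch of how a pool ends up concatenated like that: zpool normally refuses
to add a vdev whose replication level differs from the pool's, but -f
overrides the check (device names hypothetical):

    # start with a 3-drive raidz vdev
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
    # adding a mirror vdev reports a mismatched replication level
    # unless forced
    zpool add -f tank mirror c1t3d0 c1t4d0 c1t5d0

The two top-level vdevs are then striped, not mirrored, which is exactly the
"concatenated together" behaviour Peter describes.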