After some quick experimenting, I determined that it is in fact a single
raidz pool with all 47 devices. Apparently something was either done
wrong or miscommunicated in the process.
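For illustration, the difference comes down to how the devices were grouped
at pool creation time. A rough sketch with made-up device names (far fewer
than 47, and not the actual config):

   # one wide raidz: a single top-level vdev, all disks in one parity group
   zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

   # versus several smaller raidz vdevs striped together in one pool
   zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 \
                     raidz c1t3d0 c1t4d0 c1t5d0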
Sorry for the bandwidth.
- Matt
Matt Ingenthron wrote:
Hi all,
Sorry for the newbie question, but I
mainly) I need along with something
reliable.
- Matt
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Client Solutions, Systems Practice
http://blogs.sun.com/mingenthron/
email: [EMAIL PROTECTED] Phone: 310-242-6439
MB/s, the best run for ZFS was approx 58 MB/s. Not a
huge difference for sure, but enough to make you think about switching.
This was single stream over a 10GE link. (x4600 mounting vols from an x4500)
Matt
Bill Moore wrote:
On Thu, Nov 23, 2006 at 03:37:33PM +0100, Roch - PAE wrote:
Al H
his with
customers in the past, it can be quite a challenge.
Consider yourself lucky that zfs is catching/correcting things!
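For anyone following along, the per-device error counters (and a way to force
a full verification pass) look roughly like this; the pool name is just a
placeholder:

   # READ/WRITE/CKSUM counters per vdev, plus any affected files
   zpool status -v mypool

   # walk all data, verifying checksums and repairing where redundancy allows
   zpool scrub mypool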
- Matt
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Global Systems Practice
http://blogs.sun.com/mingenthron/
email: [EMAIL PROTECTED] Phone: 310-242-6439
subcommand. In your case, you can run,
for instance: "zpool status mypool".
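A few related subcommands, for reference (same placeholder pool name):

   zpool status mypool        # health, vdev layout, error counters
   zpool list mypool          # capacity and usage summary
   zpool iostat -v mypool 5   # per-vdev I/O statistics, sampled every 5 seconds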
Good luck,
- Matt
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Client Solutions, Systems Practice
http://blogs.sun.com/mingenthron/
email: [EMAIL PROTECTED] Phone: 310-242-6439
as everything else to read/write
to disks on a SAN (i.e. the ssd driver and friends)-- it's just smarter
about it. :)
Regards,
- Matt
With respect to expanding a filesystem, that's available today, with
the limitation that you can't expand a raidz group itself.
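That is, the pool grows by adding whole top-level vdevs rather than by
widening an existing raidz group; a hedged sketch with placeholder device
names:

   # supported: add another raidz vdev; the pool's capacity grows and new
   # writes are striped across the old and new vdevs
   zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

   # not supported (at the time of writing): adding a single disk to widen
   # an existing raidz group in place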
Regards,
- Matt
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Client Solutions, Systems Practice
http://blogs.sun.com/mingenthron/
g from.
In fact, CoolStack may be a well-tested, stable build for you to use
alongside ZFS. You can email me directly with any issues you run into
with it and I'll get them to the right group of people.
Hope that helps,
- Matt
Sanjeev Bagewadi wrote:
Jason,
Apologies.. I missed
e cache to the slowest drive's
performance)?
Thanks!
Matt
the answer to this question,
but what is the best way to determine how large my pool's l2arc working set
is (i.e. how much l2arc is in use)?
Matt
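One way to check this on Solaris/illumos-style systems is through the ARC
kstats; a sketch, assuming the usual arcstats names and a placeholder pool:

   # bytes of data currently stored in the L2ARC (and its header overhead
   # kept in the ARC)
   kstat -p zfs:0:arcstats:l2_size
   kstat -p zfs:0:arcstats:l2_hdr_size

   # per-device view: the ALLOC column for the cache device(s)
   zpool iostat -v mypool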
Excellent, thanks to you both. I knew of both those methods and wanted
to make sure I wasn't missing something!
On Wed, Sep 26, 2012 at 11:21 AM, Dan Swartzendruber wrote:
> On 9/26/2012 11:18 AM, Matt Van Mater wrote:
>
> If the added device is slower, you will experience
These defaults might be changed in some future release of Illumos, but I
haven't seen any specifics saying that the idea is nearing fruition in
release XYZ.
Matt
On Wed, Dec 5, 2012 at 10:26 AM, Jim Klimov wrote:
> On 2012-11-29 10:56, Jim Klimov wrote:
>
>> For example, I might w
>
>
> At present, I do not see async write QoS as being interesting. That leaves
> sync writes and reads as the managed I/O. Unfortunately, with HDDs, the
> variance in response time >> queue management time, so the results are less
> useful than the case with SSDs. Control theory works, once a
>
>
>
> I'm unclear on the best way to warm data... do you mean to simply run `dd
> if=/volumes/myvol/data of=/dev/null`? I have always been under the
> impression that ARC/L2ARC rate-limits how much data can be added to the
> cache per interval (I can't remember the interval). Is this not the
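On the rate-limiting point: the L2ARC feed thread does cap how much it writes
per pass (the illumos tunables are along the lines of l2arc_write_max and
l2arc_feed_secs), so a single sequential read won't necessarily land
everything in L2ARC right away. A crude way to warm the data and then watch
the fill rate, reusing the path from the quote above and a placeholder pool:

   # read the data once to pull it through the ARC
   dd if=/volumes/myvol/data of=/dev/null bs=1024k

   # watch L2ARC size grow; the per-interval increase reflects the feed cap
   kstat -p zfs:0:arcstats:l2_size 5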