On Fri, Feb 12, 2010 at 02:25:51PM -0800, TMB wrote:
> I have a similar question, I put together a cheapo RAID with four 1TB WD
> Black (7200) SATAs, in a 3TB RAIDZ1, and I added a 64GB OCZ Vertex SSD, with
> slice 0 (5GB) for ZIL and the rest of the SSD for cache:
> # zpool status dpool
> pool: dpool
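A layout like the one described above could be built with commands along these lines. This is a hedged sketch: the device names (c0t0d0 etc.) and slice numbers are hypothetical placeholders, not taken from the original message.

```shell
# Hypothetical device names; substitute your own (check with `format`).
# Four 1TB disks in a raidz1 pool:
zpool create dpool raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0

# Add SSD slice 0 (~5GB) as a separate intent log (ZIL) device:
zpool add dpool log c1t0d0s0

# Add the remaining SSD slice as an L2ARC cache device:
zpool add dpool cache c1t0d0s1

# Verify the resulting layout:
zpool status dpool
```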
G'Day,
On Sat, Feb 13, 2010 at 09:02:58AM +1100, Daniel Carosone wrote:
> On Fri, Feb 12, 2010 at 11:26:33AM -0800, Richard Elling wrote:
> > Mathing around a bit, for a 300 GB L2ARC (apologies for the tab separation):
> > size (GB) 300
> > size (sectors) 585937500
eloaded.
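The sizing arithmetic quoted above can be checked directly: 300 GB here is decimal (10^9 bytes), divided into 512-byte sectors.

```shell
# 300 GB (decimal) expressed as 512-byte sectors:
echo $(( 300 * 1000 * 1000 * 1000 / 512 ))   # -> 585937500
```

This matches the 585937500 sectors in the table above.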
> -- richard
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
Brendan Gregg, Sun Microsystems Fishworks. http://blogs.sun.com/brendan
hat aren't in OpenSolaris yet, but will be soon.
Brendan
--
Brendan Gregg, Sun Microsystems Fishworks. http://blogs.sun.com/brendan
G'Day Ben,
ARC visibility is important; did you see Neel's arcstat?:
http://www.solarisinternals.com/wiki/index.php/Arcstat
Try -x for various sizes, and -v for definitions.
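Following the hint above, typical invocations would look like the following (flags as named in the message; the interval argument is a common arcstat convention, assumed here):

```shell
# Extended output, including various ARC sizes, sampled every second:
arcstat -x 1

# Print definitions of the available fields:
arcstat -v
```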
On Thu, Aug 21, 2008 at 10:23:24AM -0700, Ben Rockwood wrote:
> Its a starting point anyway. The key is to try
G'Day,
On Wed, Jul 30, 2008 at 01:24:22PM -0400, Alastair Neil wrote:
>
>Thanks very much that's exactly what I needed to hear :)
>On Wed, Jul 30, 2008 at 12:47 PM, Richard Elling
><[EMAIL PROTECTED]> wrote:
>
>Alastair Neil wrote:
>
> I've been reading about the work using
On Wed, Jul 23, 2008 at 03:20:47PM -0700, Brendan Gregg - Sun Microsystems wrote:
> G'Day Jeff,
>
> On Tue, Jul 22, 2008 at 02:45:13PM -0400, Jeff Taylor wrote:
> > When will L2ARC be available in Solaris 10?
>
> There are no current plans to back port;
Sorry - I shou
G'Day Jeff,
On Tue, Jul 22, 2008 at 02:45:13PM -0400, Jeff Taylor wrote:
> When will L2ARC be available in Solaris 10?
There are no current plans to back port; if we were to, I think it would be
ideal (or maybe a requirement) to sync up zpool features:
VER DESCRIPTION
--- -
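The VER/DESCRIPTION table being quoted above is the on-disk format version list; it can be printed with:

```shell
# List the zpool on-disk format versions this release supports,
# with a one-line description of each:
zpool upgrade -v
```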
G'Day Anil,
On Wed, Jun 18, 2008 at 07:37:38PM -0700, Anil Jangity wrote:
> Why is it that the read operations are 0 but the read bandwidth is >0?
> What is iostat [not] accounting for? Is it the metadata reads? (Is it
> possible to determine what kind of metadata reads these are?)
This coul
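One way to look at the ops-versus-bandwidth discrepancy is per-vdev statistics. A hedged sketch, with a placeholder pool name:

```shell
# Per-vdev read/write operations and bandwidth, sampled every second.
# "mypool" is a hypothetical pool name; comparing the ops column against
# the bandwidth column can hint at aggregated or metadata-driven I/O.
zpool iostat -v mypool 1
```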
G'Day,
On Sat, Mar 01, 2008 at 08:58:53PM -0800, Bill Shannon wrote:
> Roch Bourbonnais wrote:
> >>> this came up sometime last year .. io:::start won't work since ZFS
> >>> doesn't call bdev_strategy() directly .. you'll want to use something
> >>> more like zfs_read:entry, zfs_write:entry and zf
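The zfs_read:entry / zfs_write:entry probes named above can be used from the command line. A minimal sketch, assuming the fbt provider on Solaris (requires root):

```shell
# Count ZFS read/write entry points by calling process name.
# Uses the zfs_read/zfs_write probes mentioned above via the fbt provider.
dtrace -n 'fbt::zfs_read:entry,fbt::zfs_write:entry { @[execname, probefunc] = count(); }'
```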
G'Day Jon,
On Thu, Feb 14, 2008 at 02:54:58PM -0800, Jonathan Loran wrote:
>
G'Day Jon,
For disk layer metrics, you could try Disk/iopending from the DTraceToolkit
to check how saturated the disks become with requests (which answers that
question with much higher definition than iostat). I'd also run disktime.d,
which should be in the next DTraceToolkit release (it's pret
G'Day Luke,
On Thu, Nov 29, 2007 at 08:18:09AM -0800, Luke Schwab wrote:
> Hi,
>
> The question is a ZFS performance question in regard to SAN traffic.
>
> We are trying to benchmark ZFS vs VxFS file systems, and I get the following
> performance results.
>
> Test Setup:
> Solaris 10: 11/06