Hi Bob
> I don't know what the request pattern from filebench looks like, but it seems
> like your ZEUS RAM devices are not keeping up, or
> else many requests are bypassing the ZEUS RAM devices.
>
> Note that very large synchronous writes will bypass your ZEUS RAM device and
> go directly to a l
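One quick way to tell which of the two it is would be to watch per-vdev activity while filebench runs. A minimal sketch, assuming a pool named tank (substitute your own pool name) with the ZEUS RAM devices configured as log vdevs:

    # Per-vdev ops/bandwidth every 5 seconds; the "logs" section shows
    # how much traffic actually lands on the ZEUS RAM devices.
    zpool iostat -v tank 5

If the log devices stay nearly idle while the data vdevs are busy, the synchronous writes are most likely bypassing them.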
On Thu, 18 Aug 2011, Thomas Nau wrote:
Tim
the client is identical to the server but with no SAS drives attached.
Also, right now only one 1 Gbit Intel NIC is available.
I don't know what the request pattern from filebench looks like, but it
seems like your ZEUS RAM devices are not keeping up, or else many requests
are bypassing the ZEUS RAM devices.
Tim
the client is identical to the server but with no SAS drives attached.
Also, right now only one 1 Gbit Intel NIC is available.
Thomas
On 18.08.2011 at 17:49, Tim Cook wrote:
> What are the specs on the client?
>
> On Aug 18, 2011 10:28 AM, "Thomas Nau" wrote:
> > Dear all.
> > We finally got all the parts for our new fileserver following several
> > recommendations we got on this list.
What are the specs on the client?
On Aug 18, 2011 10:28 AM, "Thomas Nau" wrote:
> Dear all.
> We finally got all the parts for our new fileserver following several
> recommendations we got on this list. We use:
>
> Dell R715, 96GB RAM, dual 8-core Opterons
> 1 10GE Intel dual-port NIC
> 2 LSI 920
Robert,
> I believe it's not solved yet, but you may want to try with
> the latest Nevada and see if there's a difference.
It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express
post build 47, I think.
- Luke
Roch,
On 11/2/06 12:51 AM, "Roch - PAE" <[EMAIL PROTECTED]> wrote:
> This one is not yet fixed:
> 6415647 Sequential writing is jumping
Yep - I mistook this one for another problem with drive firmware on
pre-revenue units. Since Robert has a customer-release X4500, it doesn't
have the firmware
How much memory is in the V210?
UFS will recycle its own pages while creating large files. ZFS, working
against a large heap of free memory, will cache the data (why not?). The
problem is that ZFS does not know when to stop. During the subsequent
memory/cache reclaim, ZFS is potentially not
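One way to watch this happening is to track the ARC size while the large file is being written, and to cap it if it is squeezing out everything else. A rough sketch for Solaris 10 / Nevada; the 1 GB cap is only an example value, and both the arcstats kstat and the zfs_arc_max tunable are assumptions about what your particular build exposes:

    # Print the ARC size (in bytes) every 5 seconds during the run.
    kstat -p zfs:0:arcstats:size 5

    # On builds that support it, cap the ARC by adding this line to
    # /etc/system and rebooting (0x40000000 bytes = 1 GB):
    #   set zfs:zfs_arc_max = 0x40000000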
Luke Lonergan writes:
> Robert,
>
> > I believe it's not solved yet, but you may want to try with
> > the latest Nevada and see if there's a difference.
>
> It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express
> post build 47, I think.
>
> - Luke
>
This one is not yet fixed:
6415647 Sequential writing is jumping
Jay Grogan wrote:
Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems.
Command run: mkfile -v 6gb /ufs/tmpfile
Test 1 UFS mounted LUN (2m2.373s)
Test 2 UFS mounted LUN with directio option (5m31.802s)
Test 3 ZFS LUN (Single LUN in a pool) (3m13.126s)
Sun Fire V120
1 Qlogic 2
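For anyone wanting to repeat the comparison, a rough sketch of the three runs follows; the device names, mount point, and pool name are placeholders, and each run should start with a freshly created file system (or at least with the previous tmpfile removed):

    # Test 1: plain UFS on the LUN
    mount -F ufs /dev/dsk/c1t0d0s0 /ufs
    time mkfile -v 6gb /ufs/tmpfile
    rm /ufs/tmpfile

    # Test 2: UFS with directio
    umount /ufs
    mount -F ufs -o forcedirectio /dev/dsk/c1t0d0s0 /ufs
    time mkfile -v 6gb /ufs/tmpfile

    # Test 3: ZFS pool built on a single LUN
    zpool create tank c1t1d0
    time mkfile -v 6gb /tank/tmpfile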
Hello Jay,
Tuesday, October 31, 2006, 3:31:54 AM, you wrote:
JG> Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems.
JG> Command run: mkfile -v 6gb /ufs/tmpfile
JG> Test 1 UFS mounted LUN (2m2.373s)
JG> Test 2 UFS mounted LUN with directio option (5m31.802s)
JG> Test 3 ZFS LUN (Single LUN in a pool) (3m13.126s)
On Oct 30, 2006, at 10:45 PM, David Dyer-Bennet wrote:
Also, stacking it on top of an existing RAID setup is kinda missing
the entire point!
Everyone keeps saying this, but I don't think it is missing the point
at all. Checksumming and all the other goodies still work fine and
you can ru
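As a small illustration of that point: ZFS layered on a single hardware-RAID LUN still checksums every block, so silent corruption can at least be detected (even if a single-vdev pool cannot always repair it). The device and pool names below are made up:

    # Build a pool on the one LUN exported by the RAID controller.
    zpool create tank c2t0d0

    # Later, re-read every block and verify it against its checksum.
    zpool scrub tank
    zpool status -v tank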
On 10/30/06, Jay Grogan <[EMAIL PROTECTED]> wrote:
Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems.
Command run: mkfile -v 6gb /ufs/tmpfile
Test 1 UFS mounted LUN (2m2.373s)
Test 2 UFS mounted LUN with directio option (5m31.802s)
Test 3 ZFS LUN (Single LUN in a pool) (3m13.126s)