Hey there, Bob!
Looks like you and Akhilesh (thanks, Akhilesh!) are driving at a similar,
very valid point. I'm currently using the default recordsize (128K) on all
of the ZFS pools (those of the iSCSI target nodes and the aggregate pool on
the head node).
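For reference, a rough sketch of how to double-check that (dataset names here
are just placeholders; note that zvols backing iSCSI LUNs use volblocksize,
which is fixed at creation, rather than recordsize):

    # Filesystems: recordsize can be inspected and changed at any time
    zfs get recordsize headpool/data
    zfs set recordsize=64K headpool/data
    # Zvols backing iSCSI LUNs: volblocksize is only settable at creation
    zfs get volblocksize targetpool/lun0
    zfs create -V 1T -b 64K targetpool/lun1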
I should've mentioned that I'm not sure whether this is a problem with
iscsitarget or ZFS. I'd greatly appreciate it if this gets moved to the
proper list.
Well, I'm just about out of ideas on what might be wrong...
Quick history:
I installed OS 2008.05 when it was SNV_86 to try out ZFS with VMWare. Found out
that multilun's
> So this is where I stand. I'd like to ask zfs-discuss if they've seen any
> ZIL/Replay style bugs associated with u3/u5 x86? Again, I'm confident in my
> hardware, and /var/adm/messages is showing no warnings/errors.
Are you absolutely sure the hardware is OK? Is there another disk you can
Erast Benson wrote:
> James, all serious ZFS bug fixes back-ported to b85 as well as marvell
> and other sata drivers. Not everything is possible to back-port of
> course, but I would say all critical things are there. This includes ZFS
> ARC optimization patches, for example.
Excellent!
James
-
Well, I haven't solved everything yet, but I do feel better now that I realize
that it was setting mountpoint=none that caused the zfs send/recv to hang.
Allowing the default mountpoint setting fixed that problem. I'm now trying
with mountpoint=legacy, because I'd really rather leave it unmounted.
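A minimal sketch of that kind of workflow (host and dataset names are
placeholders, and piping over ssh is just for illustration): zfs receive -u
avoids mounting the received dataset at all, and the mountpoint can then be
switched to legacy afterwards.

    # Stream a snapshot to the receiver without mounting it there
    zfs send tank/data@snap | ssh backuphost zfs receive -u backup/data
    # Hand mount control over to legacy tools (or simply leave it unmounted)
    ssh backuphost zfs set mountpoint=legacy backup/data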
Hello
The idea introduced by Chris Greer of using servers as solid-state disks has
kept my brain busy these last few days. Perhaps it makes sense to put L2ARC
devices in memory as well, to increase the in-memory part of a database
beyond the capacity of a single server. I already wrote about
the
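As a rough illustration of the idea (names, sizes, and the iscsitadm syntax
of older Solaris builds are assumptions here): carve a ramdisk out of a donor
server's memory, export it over iSCSI, and add it to the database host's pool
as an L2ARC cache device.

    # On the donor server: create an 8 GB ramdisk and export it over iSCSI
    ramdiskadm -a l2arcram 8g
    iscsitadm create target -b /dev/ramdisk/l2arcram l2arc-ram
    # On the database host, once the LUN shows up (device name is a placeholder):
    zpool add dbpool cache c9t1d0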
On Tue, Oct 14, 2008 at 12:31 AM, Gray Carper <[EMAIL PROTECTED]> wrote:
> Hey, all!
>
> We've recently used six x4500 Thumpers, all publishing ~28TB iSCSI targets
> over IP-multipathed 10Gb Ethernet, to build a ~150TB ZFS pool on an x4200
> head node. In trying to discover optimal ZFS pool const
Nick Smith wrote:
> Dear all,
>
> Background:
>
> I have a ZFS volume with the incorrect volume blocksize for the filesystem
> (NTFS) that it is supporting.
>
> This volume contains important data that is proving impossible to copy using
> the Windows XP Xen HVM that "owns" the data.
>
> The dispari
James, all serious ZFS bug fixes back-ported to b85 as well as marvell
and other sata drivers. Not everything is possible to back-port of
course, but I would say all critical things are there. This includes ZFS
ARC optimization patches, for example.
On Tue, 2008-10-14 at 22:33 +1000, James C. McPh
On Tue, 14 Oct 2008, Gray Carper wrote:
>
> So, how concerned should we be about the low scores here and there?
> Any suggestions on how to improve our configuration? And how excited
> should we be about the 8GB tests? ;>
The level of concern should depend on how you expect your storage pool
to
Dear all,
Background:
I have a ZFS volume with the incorrect volume blocksize for the filesystem
(NTFS) that it is supporting.
This volume contains important data that is proving impossible to copy using
the Windows XP Xen HVM that "owns" the data.
The disparity in volume blocksize (current set
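Since volblocksize can only be set when a zvol is created, the usual way out
is to create a second zvol with the desired block size and copy the NTFS data
across inside the guest. A sketch, with placeholder names and sizes:

    # New zvol with a 4K block size to match the NTFS cluster size
    zfs create -V 200G -b 4K tank/ntfs_new
    # Attach tank/ntfs_new to the Windows XP HVM as a second disk,
    # then copy the data over from inside Windows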
Just a random spectator here, but I think artifacts you're seeing here are not
due to file size, but rather due to record size.
What is the ZFS record size?
On a personal note, I wouldn't do non-concurrent (?) benchmarks. They are at
best useless and at worst misleading for ZFS
- Akhilesh.
--
Howdy!
Sounds good. We'll upgrade to 1.1 (b101) as soon as it is released, re-run
our battery of tests, and see where we stand.
Thanks!
-Gray
On Tue, Oct 14, 2008 at 8:47 PM, James C. McPherson <[EMAIL PROTECTED]> wrote:
> Gray Carper wrote:
>
>> Hello again! (And hellos to Erast, who has been
Gray Carper wrote:
> Hello again! (And hellos to Erast, who has been a huge help to me many,
> many times! :>)
>
> As I understand it, Nexenta 1.1 should be released in a matter of weeks
> and it'll be based on build 101. We are waiting for that with bated
> breath, since it includes some very
Hello again! (And hellos to Erast, who has been a huge help to me many, many
times! :>)
As I understand it, Nexenta 1.1 should be released in a matter of weeks and
it'll be based on build 101. We are waiting for that with bated breath,
since it includes some very important Active Directory integr
Gray Carper wrote:
> Hey there, James!
>
> We're actually running NexentaStor v1.0.8, which is based on b85. We
> haven't done any tuning ourselves, but I suppose it is possible that
> Nexenta did. If there's something specific you'd like me to look for,
> I'd be happy to.
Hi Gray,
So build 85
Hey there, James!
We're actually running NexentaStor v1.0.8, which is based on b85. We haven't
done any tuning ourselves, but I suppose it is possible that Nexenta did. If
there's something specific you have in mind, I'd be happy to look for it.
Thanks!
-Gray
On Tue, Oct 14, 2008 at 8:10 PM, Jam
Gray Carper wrote:
> Hey, all!
>
> We've recently used six x4500 Thumpers, all publishing ~28TB iSCSI
> targets over IP-multipathed 10Gb Ethernet, to build a ~150TB ZFS pool on
> an x4200 head node. In trying to discover optimal ZFS pool construction
> settings, we've run a number of iozone tests,
For the sake of completeness, in the end I simply created links in
/dev/rdsk for c1t0d0sX to point to my disk and was able to reactivate
the current BE.
The shroud of mystery hasn't lifted, though, because when I did eventually
reboot, I performed a reconfigure (boot -r) and the format and cfgadm
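For anyone hitting the same thing: the /dev/rdsk entries are just symlinks
into the /devices tree, so they can be recreated by hand or regenerated with
devfsadm (the /devices path below is only a placeholder):

    # Recreate a missing slice link by hand
    ln -s ../../devices/pci@0,0/pci1022,7458@2/pci11ab,11ab@1/disk@0,0:a,raw /dev/rdsk/c1t0d0s0
    # Or let devfsadm rebuild and clean up the /dev links
    devfsadm -Cv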
Carsten Aulbert wrote:
> Hi again,
>
> Thomas Maier-Komor wrote:
>> Carsten Aulbert wrote:
>>> Hi Thomas,
>> I don't know socat or what benefit it gives you, but have you tried
>> using mbuffer to send and receive directly (options -I and -O)?
>
> I thought we tried that in the past and with
Hi again,
Thomas Maier-Komor wrote:
> Carsten Aulbert wrote:
>> Hi Thomas,
> I don't know socat or what benefit it gives you, but have you tried
> using mbuffer to send and receive directly (options -I and -O)?
I thought we tried that in the past and with socat it seemed faster, but
I just made
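For comparison, the direct mbuffer-to-mbuffer variant looks roughly like this
(port, buffer sizes, and dataset names are placeholders):

    # Receiving host: listen on a TCP port and feed the stream into zfs receive
    mbuffer -I 9090 -s 128k -m 1G | zfs receive tank/replica
    # Sending host: pipe zfs send straight into mbuffer's network output
    zfs send tank/data@snap | mbuffer -O recvhost:9090 -s 128k -m 1G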
Hey, all!
We've recently used six x4500 Thumpers, all publishing ~28TB iSCSI targets over
IP-multipathed 10Gb Ethernet, to build a ~150TB ZFS pool on an x4200 head node.
In trying to discover optimal ZFS pool construction settings, we've run a
number of iozone tests, so I thought I'd share them
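For context, the runs were of the usual iozone throughput form, along these
lines (file sizes, thread counts, and paths are placeholders, not the exact
invocation used):

    # Sequential write/read plus random I/O, 4 threads, 8 GB per file,
    # 128k records, fsync and close included in the timing
    iozone -i 0 -i 1 -i 2 -r 128k -s 8g -t 4 -e -c \
           -F /tank/t1 /tank/t2 /tank/t3 /tank/t4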
Carsten Aulbert wrote:
> Hi Thomas,
>
> Thomas Maier-Komor wrote:
>
>> Carsten,
>>
>> the summary looks like you are using mbuffer. Can you elaborate on what
>> options you are passing to mbuffer? Maybe changing the blocksize to be
>> consistent with the recordsize of the zpool could improve pe
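A quick sketch of what that suggestion amounts to (pool and host names are
placeholders): read the pool's recordsize and pass the same value to
mbuffer's -s option.

    zfs get -H -o value recordsize tank          # e.g. 128K
    zfs send tank/data@snap | mbuffer -s 128k -m 1G -O recvhost:9090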