On Oct 21, 2010, at 5:26 PM, Erik Trimble wrote:
> On Thu, 2010-10-21 at 17:09 -0700, Richard Elling wrote:
>> On Oct 21, 2010, at 6:19 AM, Eff Norwood wrote:
>>> Let me frame this in the context specifically of VMWare ESXi 4.x. If I
>>> create a zvol and give it to ESXi via iSCSI our experience
On Thu, 2010-10-21 at 17:09 -0700, Richard Elling wrote:
> On Oct 21, 2010, at 6:19 AM, Eff Norwood wrote:
> > Let me frame this in the context specifically of VMWare ESXi 4.x. If I
> > create a zvol and give it to ESXi via iSCSI our experience has been that it
> > is very fast and guest response
On Oct 21, 2010, at 6:19 AM, Eff Norwood wrote:
> Let me frame this in the context specifically of VMWare ESXi 4.x. If I create
> a zvol and give it to ESXi via iSCSI our experience has been that it is very
> fast and guest response is excellent. If we use NFS without a zil (we use
> DDRdrive X1
There is nothing in here that requires zfs-confidential.
Cross-posted to zfs-discuss.
On Oct 21, 2010, at 3:37 PM, Jim Nissen wrote:
> Cross-posting.
>
> Original Message
> Subject: Performance problems due to smaller ZFS recordsize
> Date: Thu, 21 Oct 2010 14:00:42 -0500
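For readers following along: recordsize is a per-dataset property, and a minimal
sketch of inspecting and tuning it (the dataset name below is hypothetical, and
the change only affects blocks written afterwards) would be:
  # check the current recordsize of the dataset
  zfs get recordsize tank/vmstore
  # match it to the workload's dominant I/O size, e.g. 8K for small random I/O
  zfs set recordsize=8k tank/vmstore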
On Thu, Oct 21, 2010 at 12:06 AM, Peter Jeremy wrote:
> On 2010-Oct-21 01:28:46 +0800, David Dyer-Bennet wrote:
>>On Wed, October 20, 2010 04:24, Tuomas Leikola wrote:
>>
>>> I wished for a more aggressive write balancer but that may be too much
>>> to ask for.
>>
>>I don't think it can be too mu
On 21/10/2010 18:59, Maurice Volaski wrote:
> Does the write cache referred to above refer to the "Writeback Cache"
> property listed by stmfadm list-lu -v (when a zvol is a target) or
> is that some other cache and if it is, how does it interact with the
> first one?
Yes it does, that basically
Does the write cache referred to above refer to the "Writeback Cache" property
listed by stmfadm list-lu -v (when a zvol is a target) or is that some other
cache and if it is, how does it interact with the first one?
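For illustration, a minimal sketch of inspecting and toggling that property
through COMSTAR (the LU GUID below is a placeholder, and wcd stands for
"write cache disabled"):
  # list logical units with their properties, including "Writeback Cache"
  stmfadm list-lu -v
  # disable the write cache on one LU (GUID is a placeholder)
  stmfadm modify-lu -p wcd=true 600144F0000000000000000000000001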
Could you show us the output of 'iostat -En', please?
On 21 Oct 2010 13:31, "Harry Putnam" wrote:
Ian Collins writes:
> On 10/21/10 03:47 PM, Harry Putnam wrote:
>> build 133
>> zpool version 22
>>
>> I'm getting:
>>
>> zpool status:
>> NAME          STATE     READ WRITE CKSUM
>> z3
Hi Harry,
Generally, you need to use 'zpool clear' to clear the pool errors, but I
can't reproduce the removed files reappearing in 'zpool status' on my own
system when I corrupt data, so I'm not sure this will help. Some other,
larger problem is going on here...
Did any hardware changes lead up to th
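A minimal sketch of that sequence, using the pool name from the status output
above:
  # reset the error counters and the persistent error list
  zpool clear z3
  # re-verify everything on disk; errors that persist will show up again
  zpool scrub z3
  zpool status -v z3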
Oops... answering myself here... ;)
On 21.10.10 14:08, Stephan Budach wrote:
Hi,
my current pool looks like this:
config:
NAME STATE READ WRITE CKSUM
obelixData ONLINE 0 0 0
c4t21D023038FA8d0 ONLINE 0
Yes, ZVOLs do use the ZIL, if the write cache has been disabled on the zvol
by the DKIOCSETWCE ioctl or the sync property is set to always.
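As a sketch of the second case (the volume name is hypothetical), forcing the
sync policy from the ZFS side so that zvol writes go through the ZIL:
  # force every write to the volume to be synchronous (handled via the ZIL)
  zfs set sync=always tank/vol01
  zfs get sync tank/vol01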
--
Darren J Moffat
Let me frame this in the context specifically of VMware ESXi 4.x. If I create a
zvol and give it to ESXi via iSCSI, our experience has been that it is very fast
and guest response is excellent. If we use NFS without a ZIL (we use the DDRdrive
X1, which is awesome), because VMware uses sync (Stable = FSYNC) wri
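For context, a rough sketch of the zvol-over-iSCSI path described above, using
COMSTAR (the pool, volume name and size are made up, and the target/view setup
is abbreviated):
  # carve out a volume and register it as a SCSI logical unit
  zfs create -V 500g tank/esx-lun0
  stmfadm create-lu /dev/zvol/rdsk/tank/esx-lun0
  # expose the LU and create an iSCSI target for ESXi to discover
  stmfadm add-view <GUID printed by create-lu>
  itadm create-target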
Ian Collins writes:
> On 10/21/10 03:47 PM, Harry Putnam wrote:
>> build 133
>> zpool version 22
>>
>> I'm getting:
>>
>> zpool status:
>> NAME          STATE     READ WRITE CKSUM
>> z3            DEGRADED     0     0   167
>>   mirror-0    DEGRADED     0     0   334
>>
Hi,
my current pool looks like this:
config:
NAME                 STATE   READ WRITE CKSUM
obelixData           ONLINE     0     0     0
  c4t21D023038FA8d0  ONLINE     0     0     0
  c4t21D02305FF42d0  ONLINE     0     0     0
where c4t