Hi list,
> If you're running solaris proper, you better mirror
> your
> > ZIL log device.
...
> I plan to get to test this as well, won't be until
> late next week though.
Running OSOL nv130. I powered off the machine, removed the F20 and powered back on.
The machine boots OK and comes up "normally" wi
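For reference, mirroring an existing slog comes down to a zpool attach; a minimal sketch, assuming a pool called tank and two placeholder log devices:

  # attach a second device to the current log device, turning it into a mirror
  zpool attach tank c1t0d0 c1t1d0
  # verify that the "logs" section of the pool now shows a mirror vdev
  zpool status tank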
Hi Roch,
> Can you try 4 concurrent tar to four different ZFS
> filesystems (same pool).
Hmmm, you're on to something here:
http://www.science.uva.nl/~jeroen/zil_compared_e1000_iostat_iops_svc_t_10sec_interval.pdf
In short: when using two exported file systems, total time goes down to around
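For anyone wanting to try the same thing, a rough sketch of such a concurrent-untar test (pool, filesystem names and the tarball path are just placeholders):

  # four filesystems in the same pool
  for i in 1 2 3 4; do zfs create tank/test$i; done

  # untar the same archive into each filesystem concurrently
  for i in 1 2 3 4; do
      ( cd /tank/test$i && tar xf /var/tmp/eclipse.tar ) &
  done
  wait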
Hi Al,
> Have you tried the DDRdrive from Christopher George
> ?
> Looks to me like a much better fit for your application than the F20?
>
> It would not hurt to check it out. Looks to me like
> you need a product with low *latency* - and a RAM based cache
> would be a much better performer than
> It doesn't have to be F20. You could use the Intel
> X25 for example.
The MLC-based disks are bound to be too slow (we tested with an OCZ Vertex
Turbo). So you're stuck with the X25-E (which Sun stopped supporting for some
reason). I believe most "normal" SSDs do have some sort of cache and
Hi Casper,
> :-)
Nice to see that your stream still reaches just as far :-)
>I'm happy to see that it is now the default and I hope this will cause the
>Linux NFS client implementation to be faster for conforming NFS servers.
The interesting thing is that apparently the defaults on Solaris and Linux are ch
Hi Richard,
>For this case, what is the average latency to the F20?
I'm not giving the average since I only performed a single run here (still need
to get autopilot set up :) ). However, here is a graph of iostat IOPS/svc_t
sampled at 10-second intervals during a run of untarring an eclipse tarball 4
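The sampling itself is nothing special; roughly this (the 10-second interval being the only parameter of note):

  # extended device statistics every 10 seconds; svc_t is the column plotted
  iostat -x 10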
Hi Karsten,
> But is this mode of operation *really* safe?
As far as I can tell it is.
- The F20 uses some form of power backup that should provide power to the
interface card long enough to get the cache onto solid state in case of power
failure.
- Recollecting from earlier threads here; in
>The write cache is _not_ being disabled. The write cache is being marked
>as non-volatile.
Of course you're right :) Please filter my postings with a "sed 's/write
cache/write cache flush/g'" ;)
>BTW, why is a Sun/Oracle branded product not properly respecting the NV
>bit in the cache flush com
>Oh, one more comment. If you don't mirror your ZIL, and your unmirrored SSD
>goes bad, you lose your whole pool. Or at least suffer data corruption.
Hmmm, I thought that in that case ZFS reverts to the "regular on-disk" ZIL?
With kind regards,
Jeroen
>If you are going to trick the system into thinking a volatile cache is
>nonvolatile, you
>might as well disable the ZIL -- the data corruption potential is the same.
I'm sorry? I believe the F20 has a supercap or the like? The advice on:
http://wikis.sun.com/display/Performance/Tuning+ZFS+for+t
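For context, "tricking the system" the way that guide describes would be the global cache-flush tunable; just a sketch of what that page talks about, not something we have set:

  # /etc/system -- stop ZFS from issuing cache flush commands altogether;
  # only defensible when every device in the pool really has non-volatile cache
  set zfs:zfs_nocacheflush = 1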
ilstat shows only one vmod
and were capped in a layer above the ZIL? Can't rule out networking just yet,
but my gut tells me we're not network bound here. That leaves the ZFS ZPL/VFS
layer?
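One thing I may try next is a quick DTrace quantization around zfs_write to see whether the time really sits in the ZPL/VFS path (assuming the fbt probes are available on this build):

  dtrace -n '
  fbt::zfs_write:entry { self->ts = timestamp; }
  fbt::zfs_write:return /self->ts/ {
      @["zfs_write latency (us)"] = quantize((timestamp - self->ts) / 1000);
      self->ts = 0;
  }'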
I'm very open to suggestions on how to proceed... :)
With kind regards,
Jeroen
his out when they arrive (due somewhere in February).
With kind regards,
Jeroen
- --
Jeroen Roodhart
IT Consultant
University of Amsterdam
j.r.roodh...@uva.nl Informatiseringscentrum
Tel. 020 525 7203
- --
See http://www.science.uva.nl/~jeroen for openPGP public key
- --
Jeroen Roodhart
IT Consultant
University of Amsterdam
j.r.roodh...@uva.nl Informatiseringscentrum
Tel. 020 525 7203
- --
See http://www.science.uva.nl/~jeroen for openPGP public key
Jeroen Roodhart wrote:
>> Questions: 1. Client wsize?
>
> We usually set these to 342768 but this was tested with CentOS
> defaults: 8192 (we're doing this over NFSv3)
I stand corrected here. Looking at /proc/mounts I see we ar
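For completeness, checking the effective client options is just this (nothing assumed beyond a Linux NFS client):

  # show the options the kernel actually negotiated, including rsize/wsize
  grep nfs /proc/mounts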
ok at our iozone
data and spotted glaring mistakes, we would definitely appreciate your
comments.
Thanks for your help,
With kind regards,
Jeroen
- --
Jeroen Roodhart
IT Consultant
University of Amsterdam
j.r.roodh...@uva.nl Informatiseringscentrum
"better assurance level
but for random-IO significant performance hits" doesn't seem too wrong
to me. In the first case you still have the ZFS guarantees once data
is "on disk"...
Thanks in advance for your insights,
With kind regards,
Jeroen
- --
Jeroen Roodhart
>How did your migration to ESXi go? Are you using it on the same hardware or
>did you just switch that server to an NFS server and run the VMs on another
>box?
The latter; we run these VMs over NFS anyway and had ESXi boxes under test
already. We were already separating "data" exports from "VM"
> I'm running nv126 XvM right now. I haven't tried it
> without XvM.
Without XvM we do not see these issues. We're running the VMs through NFS now
(using ESXi)...
We see the same issue on an X4540 Thor system with 500G disks:
lots of:
...
Nov 3 16:41:46 uva.nl scsi: [ID 107833 kern.warning] WARNING:
/p...@3c,0/pci10de,3...@f/pci1000,1...@0 (mpt5):
Nov 3 16:41:46 encore.science.uva.nl Disconnected command timeout for Target
7
...
This system is run
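In case anyone wants to compare notes, these warnings end up in the usual syslog location; a quick count is simply:

  # count mpt command-timeout warnings in the current messages file
  grep -c "Disconnected command timeout" /var/adm/messages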