obviously the latter is only really an option for
systems with a huge amount of ram.
or: am i doing something wrong?
milosz
On Mon, Dec 19, 2011 at 8:02 AM, Jim Klimov wrote:
> 2011-12-15 22:44, milosz wrote:
>
>>> There are a few metaslab-related tunables that ca
thanks, bill. i killed an old filesystem. also forgot about
arc_meta_limit. kicked it up to 4gb from 2gb. things are back to
normal.
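for anyone hitting the same thing: the bump was via the usual /etc/system tunable (the value shown is just the 4gb i picked; it takes a reboot to stick):

* cap zfs arc metadata at 4gb (0x100000000 bytes)
set zfs:zfs_arc_meta_limit = 0x100000000

you can watch the current limit and usage without rebooting with kstat -p zfs:0:arcstats | grep arc_meta.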
On Thu, Dec 15, 2011 at 1:06 PM, Bill Sommerfeld
wrote:
> On 12/15/11 09:35, milosz wrote:
>>
>> hi all,
>>
>> suddenly ran into a
was a multi-terabyte pool with dedup=on and constant writes
(goes away once you turn off dedup). no dedup anywhere on this zpool,
though. arc usage is normal (total ram is 12gb, max is set to 11gb,
current usage is 8gb). pool is an 8-disk raidz2.
any ideas? pretty stumped.
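for reference, the checks behind the statements above were roughly these (pool name "tank" is a stand-in):

zpool get dedupratio tank      # stays at 1.00x if dedup was never used on the pool
zfs get -r dedup tank          # confirm no dataset has dedup enabled
kstat -p zfs:0:arcstats:size   # current arc size in bytes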
> - - the VM will be mostly few IO systems :
> - -- WS2003 with Trend Officescan, WSUS (for 300 XP) and RDP
> - -- Solaris10 with SRSS 4.2 (Sunray server)
>
> (File and DB servers won't move in a nearby future to VM+SAN)
>
> I thought -but could be wrong- that those systems could afford a high
> latency.
> Within the thread there are instructions for using iometer to load test your
> storage. You should test out your solution before going live, and compare
> what you get with what you need. Just because striping 3 mirrors *will* give
> you more performance than raidz2 doesn't always mean that is what you need.
> First 2 disks: hardware mirror of 146GB with Sol10 & a UFS filesystem on it.
> The other 6 will be used as a raidz2 ZFS volume of 535G, with
> compression and shareiscsi=on.
> I'm going to CHAP protect it soon...
you're not going to get the random read & write performance you need
for a vm backend
is this a direct write to a zfs filesystem or is it some kind of zvol export?
anyway, sounds similar to this:
http://opensolaris.org/jive/thread.jspa?threadID=105702&tstart=0
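(for reference, the setup quoted above would be built with something along these lines; disk and dataset names are made up:)

zpool create tank raidz2 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
zfs create -V 535g tank/vmstore       # zvol that backs the iscsi lun
zfs set compression=on tank/vmstore
zfs set shareiscsi=on tank/vmstore    # legacy sol10 iscsi target sharing

a stripe of three mirrors over the same six disks (zpool create tank mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0 mirror c0t6d0 c0t7d0) would give much better random iops for a vm backend, at the cost of capacity.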
On Tue, Jun 23, 2009 at 7:14 PM, Bob
Friesenhahn wrote:
> It has been quite some time (about a year) since I did testing
thank you, caspar.
to sum up here (seems to have been a lot of confusion in this thread):
the efi vs. smi thing that richard and a few other people have talked
about is not the issue at the heart of this. this:
> 32 bit Solaris can use at most 2^31 as disk address; a disk block is
> 512 bytes, so the limit works out to 2^31 * 512 bytes = 2^40 bytes = 1 TiB.
yeah i pretty much agree with you on this. the fact that no one has
brought this up before is a pretty good indication of the demand.
there are about 1000 things i'd rather see fixed/improved than max
disk size on a 32bit platform.
On Tue, Jun 16, 2009 at 5:55 PM, Neal Pollack wrote:
> On 06/16/0
yeah, i get a nice clean zfs error message about disk size limits when
i try to add the disk.
On Tue, Jun 16, 2009 at 4:26 PM, roland wrote:
>>the only problems i've run into are: slow (duh) and will not
>>take disks that are bigger than 1tb
>
> do you think that 1tb limit is due to 32bit solaris
wow, so that's only been a recognized problem since this past april?
i've been seeing it for a -long- time. i think i first reported it
back in december. are people actively working on it?
On Tue, Jun 16, 2009 at 10:24 AM, Marcelo Leal wrote:
> Hello all,
> I'm trying to understand the ZFS IO scheduler
one of my disaster recovery servers has been running on 32bit hardware
(ancient northwood chip) for about a year. the only problems i've run into
are: slow (duh) and will not take disks that are bigger than 1tb. that is
kind of a bummer and means i'll have to switch to a 64bit base soon.
everything else has been fine.
deleting the lu's via sbdadm solved this. still wondering if there is
some reliable way to figure out what is using the zvol, though =)
On Wed, May 20, 2009 at 6:32 PM, milosz wrote:
> -bash-3.2# zpool export exchbk
> cannot remove device links for 'exchbk/exchbk-2': dataset is busy
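(concretely, the cleanup was something like the following, using the lu name from the stmfadm listing below; offline first, then delete:)

stmfadm offline-lu 600144F0EAC009004A0A4F410001   # stop serving the lu
sbdadm delete-lu 600144F0EAC009004A0A4F410001     # drop the sbd binding to the zvol
zpool export exchbk                               # now succeeds

as for figuring out what's holding a zvol: sbdadm list-lu at least shows the backing data file for each lu, which covers the comstar case.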
-bash-3.2# zpool export exchbk
cannot remove device links for 'exchbk/exchbk-2': dataset is busy
this is a zvol used for a comstar iscsi backend:
-bash-3.2# stmfadm list-lu -v
LU Name: 600144F0EAC009004A0A4F410001
Operational Status: Offline
Provider Name : sbd
Alias
google "evil tuning guide" and you will find it. you can throw a "zfs" into
the query too, or not.
zfs will basically use as much ram as it can. see section 2.2, "limiting
arc cache"
On Mon, May 11, 2009 at 11:16 AM, Ross Schaulis wrote:
>
> (Please reply to me directly as I am not on the ZFS
with pass-through disks on areca controllers you have to set the lun id (i
believe) using the volume command. when you issue a volume info your disk
id's should look like this (if you want solaris to see the disks):
0/1/0
0/2/0
0/3/0
0/4/0
etc.
the middle part there (again, i think that's supposed to be the lun id).
sorry, that 60% statement was misleading... i will VERY OCCASIONALLY get a
spike to 60%, but i'm averaging more like 15%, with the throughput often
dropping to zero for several seconds at a time.
that iperf test more or less demonstrates it isn't a network problem, no?
also i have been using mi
iperf test coming out fine, actually...
iperf -s -w 64k
iperf -c -w 64k -t 900 -i 5
[ ID] Interval Transfer Bandwidth
[  5]  0.0-899.9 sec  81.1 GBytes  774 Mbits/sec
totally steady. i could probably implement some tweaks to improve it, but if i
were getting a steady 77% of gigabit on the real workload i wouldn't be complaining.
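(sanity check on those numbers: 81.1 GBytes in iperf's binary units is about 87.1e9 bytes, which over 899.9 seconds works out to roughly 774 Mbit/s, i.e. about 77% of gigabit line rate, so the report is internally consistent.)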
thanks for your responses, guys...
the nagle's tweak is the first thing i did, actually.
not sure what the network limiting factors could be here... there's no switch,
jumbo frames are on... maybe it's the e1000g driver? it's been wonky since build 94
or so. even during the write bursts i'm only ge
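(the nagle tweak in question is presumably the usual solaris one; it takes effect immediately but doesn't survive a reboot unless you script it:

ndd -set /dev/tcp tcp_naglim_def 1   # 1-byte nagle limit effectively disables nagle
)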
my apologies... 11s, 12s, and 13s represent the number of seconds in a
read/write period, not disks. so, 11 seconds into a period, %b suddenly jumps
to 100 after having been 0 for the first 10.
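(those %b figures come from plain old iostat, something like:

iostat -x 1   # %b = percent of time the device was busy, svc_t = average service time in ms
)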
compression is off across the board.
svc_t is only maxed during the periods of heavy write activity (2-3 seconds
every 10 or so seconds)... otherwise disks are basically idling.
target zpool (target zpool is 5x bigger than source zpool).
anyone got any ideas? point me in the right direction?
thanks,
milosz