On Dec 28, 2009, at 1:40 PM, Brad wrote:
"This doesn't make sense to me. You've got 32 GB, why not use it?
Artificially limiting the memory use to 20 GB seems like a waste of
good money."
I'm having a hard time convincing the DBAs to increase the size of
the SGA to 20GB because their philosophy is, no matter what,
eventually you'll have to hit disk to pick up data that's not stored
in cache (ARC or L2ARC).
On Mon, Dec 28, 2009 at 5:46 PM, James Dickens wrote:
Here is what I see before prefetch_disable is set. I'm currently moving (mv
/tank/games /tank/fs1 /tank/fs2) 0.5 GB and larger files from a deduped pool
to another... file copy seems fine, but deletes kill performance. b130 OSOL
/dev
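For context, on OpenSolaris builds of that era prefetch could be switched off through the zfs_prefetch_disable kernel tunable. A minimal sketch, assuming mdb and root access; the /etc/system line is the persistent form:

```shell
# Show the current value (0 = prefetch enabled, 1 = disabled).
echo "zfs_prefetch_disable/D" | mdb -k

# Disable prefetch at runtime (takes effect immediately, lost on reboot).
echo "zfs_prefetch_disable/W0t1" | mdb -kw

# Persistent form: add this line to /etc/system and reboot.
# set zfs:zfs_prefetch_disable = 1
```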
On Dec 28, 2009, at 2:01 PM, Morten-Christian Bernson wrote:
I tried changing cables, nothing changed. Then I saw something
about smartmontools on the net, and I installed that. It reported
that c2t4d0 had indeed reported a SMART error. Why didn't Solaris
detect this?
In your OP, I see
On Mon, 28 Dec 2009, Brad wrote:
I'm having a hard time convincing the DBAs to increase the size of
the SGA to 20GB because their philosophy is, no matter what,
eventually you'll have to hit disk to pick up data that's not stored
in cache (ARC or L2ARC). The typical database server in our
environment
I tried changing cables, nothing changed. Then I saw something about
smartmontools on the net, and I installed that. It reported that c2t4d0 had
indeed reported a SMART error. Why didn't Solaris detect this?
I then went ahead and offlined c2t4d0, after which the pool performed much more
li
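A sketch of how the failing drive could be confirmed and taken out of service; the pool name tank and the device paths are illustrative, not values from the thread:

```shell
# Ask the drive for its SMART health status and error log.
smartctl -H -l error /dev/rdsk/c2t4d0s0

# Tell ZFS to stop issuing I/O to the suspect disk.
zpool offline tank c2t4d0

# The pool should now report DEGRADED with c2t4d0 shown as OFFLINE.
zpool status tank
```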
On Dec 28, 2009, at 12:40 PM, Brad wrote:
"Try an SGA more like 20-25 GB. Remember, the database can cache more
effectively than any file system underneath. The best I/O is the I/O
you don't have to make."
We'll be turning up the SGA size from 4GB to 16GB.
The arc size will be set from 8GB to 4GB.
"Try an SGA more like 20-25 GB. Remember, the database can cache more
effectively than any file system underneath. The best I/O is the I/O
you don't have to make."
We'll be turning up the SGA size from 4GB to 16GB.
The arc size will be set from 8GB to 4GB.
"This can be a red herring. Judging by t
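If the ARC is to be capped at 4GB as described, the usual mechanism on Solaris is an /etc/system tunable; a sketch, with the value computed as 4 * 2^30 bytes:

```
* /etc/system fragment: cap the ZFS ARC at 4 GB (value in bytes).
* A reboot is required for this to take effect.
set zfs:zfs_arc_max = 4294967296
```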
Alas, even moving the file out of the way and rebooting the box (to guarantee
state) didn't work:
-bash-4.0# zpool import -nfFX hds1
-bash-4.0# echo $?
1
Do you need to be able to read all the labels for each disk in the array in
order to recover?
From zdb -l on one of the disks:
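For background: ZFS writes four copies of the vdev label on each disk (two at the start, two at the end), and zdb -l prints whichever of them it can read. A sketch, with an illustrative device path:

```shell
# Dump all four labels from one disk; each readable label prints the
# pool GUID, txg, and vdev tree, and unreadable ones report a failure.
zdb -l /dev/dsk/c0t0d0s0
```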
On Dec 28, 2009, at 00:59, Tim Cook wrote:
On Sun, Dec 27, 2009 at 1:38 PM, Roch Bourbonnais wrote:
On Dec 26, 2009, at 04:47, Tim Cook wrote:
On Fri, Dec 25, 2009 at 11:57 AM, Saso Kiselkov wrote:
I've started porting a video streaming
On Dec 28, 2009, at 4:53 AM, Morten-Christian Bernson wrote:
The best place to start looking at disk-related
performance problems
is iostat.
Slow disks will show high service times. There are
many options, but I
usually use something like:
iostat -zxcnPT d 1
Ignore the first line. Look at the service times.
On Dec 27, 2009, at 10:21 PM, Joe Little wrote:
I've had this happen to me too. I found some dtrace scripts at the
time that showed that the file system was spending too much time
finding available 128k blocks or the like, as I was near full on each
disk, even though combined I still had 140GB left
Hi Brad, comments below...
On Dec 27, 2009, at 10:24 PM, Brad wrote:
Richard - the l2arc is c1t13d0. What tools can be used to show the
l2arc stats?
raidz1      2.68T   580G    543    453   4.22M  3.70M
  c1t1d0        -      -    258    102    689K   358K
  c1t2d0        -      -    256    10
On Sun, 27 Dec 2009, Tim Cook wrote:
C'mon, saying "all I did was change code and firmware" isn't a valid
comparison at all. Ignoring that, I'm still referring to multiple
streams which create random I/O to the backend disk.
I do agree with you that this is a problematic scenario. The issue
Thank you for the advice. After trying flowadm, the situation improved
somewhat, but I'm still getting occasional packet overflows (10-100
packets about every 10-15 minutes). This is somewhat unnerving, because
I don't know how to track it down.
Here ar
Hi. Try adding a flow for the traffic you want prioritized. I noticed that
OpenSolaris tends to drop network connectivity without priority flows defined;
I believe this is a feature of Crossbow itself. flowadm is your
friend, that is.
I found this particularly annoying if you monitor s
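A sketch of defining a priority flow with flowadm; the link name e1000g0 and the NFS port are placeholders, not values from the thread:

```shell
# Define a flow over link e1000g0 matching TCP traffic to port 2049.
flowadm add-flow -l e1000g0 -a transport=tcp,local_port=2049 nfs-flow

# Raise the flow's scheduling priority.
flowadm set-flowprop -p priority=high nfs-flow

# Inspect the configured flow and its properties.
flowadm show-flow
flowadm show-flowprop nfs-flow
```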
I progressed with testing a bit further and found that I was hitting
another scheduling bottleneck - the network. While the write burst was
running and ZFS was committing data to disk, the server was dropping
incoming UDP packets ("netstat -s | grep udp
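One way to quantify the drops described above is to poll the UDP counters; udpInOverflows is the Solaris counter for packets dropped because the socket buffer was full. A sketch:

```shell
# Print the UDP overflow counter once a second; a rising number while
# the ZFS commit runs means inbound packets are being dropped.
while :; do
    netstat -s -P udp | grep -i overflow
    sleep 1
done
```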
> The best place to start looking at disk-related
> performance problems
> is iostat.
> Slow disks will show high service times. There are
> many options, but I
> usually use something like:
> iostat -zxcnPT d 1
>
> Ignore the first line. Look at the service times.
> They should be
>
Prefetching at the file and device level has been disabled, yielding good
results so far. We've lowered the number of concurrent I/Os from 35 to 1,
causing the service times to go even lower (1 -> 8ms) but inflating actv (.4 ->
2ms).
I've followed your recommendation in setting primarycache to
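The concurrent-I/O change described above corresponds to the zfs_vdev_max_pending tunable (default 35 on builds of that era). A sketch of inspecting and lowering it:

```shell
# Show the current per-vdev I/O queue depth.
echo "zfs_vdev_max_pending/D" | mdb -k

# Lower it to 1 at runtime.
echo "zfs_vdev_max_pending/W0t1" | mdb -kw

# Persistent form: add to /etc/system and reboot.
# set zfs:zfs_vdev_max_pending = 1
```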