So I noticed this during a scrub:
scrub in progress for 307445734561825855h10m, 89.55% done,
307445734561825859h41m to go
Which comes to 35+ trillion years. This makes ZFS the most enduring technology
ever!
Not really a bug--my clock was reset during the scrub. Just thought it was
amusing a
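For the curious, the arithmetic does hold up; a quick sanity check with bc, figuring 24 * 365.25 hours per year:

  $ echo '307445734561825859 / (24 * 365.25)' | bc -l

comes out around 3.5e13 years, i.e. a bit over 35 trillion.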
Right, I realized it was Magny not Mangy, but I thought it was related to the
race track or to racing, not a town.
I completely agree with you on codenames; the Linux distro codenames irk me -
hey guys, it might be easy for you to keep track of which release is "Bushy
Beaver" or "Itchy Ibis" or "Ma
On 10/30/2010 7:07 PM, zfs user wrote:
I did it deliberately - how dumb are these product managers that they name
products with weird names and not expect them to be abused? On the other hand,
if you do a search for mangy cours you'll find a bunch of hits where it is
clearly a misspelling on serious tech articles, postings, etc.
"
If you take a look at http://www.brendangregg.com/cachekit.html you will see
some DTrace yumminess which should let you tell...
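If you'd rather poke at the counters directly than grab the whole kit, cpustat can sample the CPU's cache counters. A rough sketch - the pic event names below are placeholders, not real Magny-Cours event names; run cpustat -h first to see what your silicon actually exposes:

  # list the performance counter events this CPU offers
  cpustat -h
  # then sample L2 hit/miss style counters once a second for 30 seconds
  # (pic0/pic1 names are illustrative placeholders)
  cpustat -c pic0=L2_hits,pic1=L2_misses 1 30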
---
W. A. Khushil Dep - khushil@gmail.com - 07905374843
Visit my blog at http://www.khushil.com/
On 30 October 2010 15:49, Eugen Leitl wrote:
On Sat, Oct 30, 2010 at 02:10:49PM -0700, zfs user wrote:
> 1 Mangy-Cours CPU
^
Dunno whether deliberate or a malapropism, but I love it.
We had the same issue with a 24-core box a while ago. Check your L2 cache
hits and misses. Sometimes more cores does not mean more performance - DTrace
is your friend!
On 30 Oct 2010 14:12, "zfs user" wrote:
Here is a total guess - but what if it has to do with zfs processing running
on one CPU having to talk to the memory "owned" by a different CPU? I don't
know if many people are running fully populated boxes like you are, so maybe
it is something people are not seeing due to not having huge amoun
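If cross-socket memory traffic is the suspicion, the locality-group tools are worth a look. A rough sketch, assuming lgrpinfo, plgrp and pmap are present on your build (the shell's own PID is used here just as a runnable stand-in for whatever process you're testing with):

  lgrpinfo            # the machine's locality (NUMA) group layout
  plgrp $$            # home lgroup of a process (the shell, as a stand-in)
  pmap -L $$ | head   # which lgroup backs each mapping of that process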
So maybe a next step is to run zilstat, arcstat, iostat -xe?? (I forget what
people like to use for these params), and zpool iostat -v in four terminal
windows while running the same test, and try to see what is spiking when that
high-load period occurs.
Not sure if there is a better version than this:
h
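Roughly what those four windows might look like; zilstat and arcstat are the community scripts, so the exact invocation depends on which versions you grabbed, and "tank" below is just a stand-in for your pool name:

  ./zilstat.ksh 1          # window 1: ZIL commit activity
  ./arcstat.pl 1           # window 2: ARC hit/miss rates
  iostat -xe 1             # window 3: per-device service times and errors
  zpool iostat -v tank 1   # window 4: per-vdev bandwidth and IOPS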
It wasn't a completely full volume, so I wasn't getting the classic 'no space'
issue.
What I did end up doing was booting OpenIndiana (build 147), which seemed to
have more success clearing up the space. I also set up some scripts to clear
out space more slowly. Deleting a 4GB file would take 1-2 min
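A minimal sketch of that kind of throttled-delete script, assuming the files to purge live under a hypothetical /tank/scratch:

  #!/bin/sh
  # remove one file at a time and pause, so the pool can absorb the frees
  for f in /tank/scratch/*; do
      rm "$f"
      sleep 10
  done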
I owe you all an update...
We found a clear pattern we can now recreate at will. Whenever we read/write
the pool, it gives the expected throughput and IOPS for a while, but at some
point it slows down to a crawl, nothing responds, and it pretty much "hangs"
for a few seconds, and then things go