I'm doing a little research study on ZFS benchmarking and performance
profiling. Like most people, I've had my favorite methods, but I'm
re-evaluating my choices and trying to be a bit more scientific than I
have been in the past.
To that end, I'm curious if folks wouldn't mind sharing their work on
the sub
On 4/21/10 2:15 AM, Robert Milkowski wrote:
> I haven't heard from you in a while! Good to see you here again :)
>
> Sorry for stating obvious but at the end of a day it depends on what
> your goals are.
> Are you interested in micro-benchmarks and comparison to other file
> systems?
>
> I think th
On 5/7/10 9:38 PM, Giovanni wrote:
> Hi guys,
>
> I have a quick question, I am playing around with ZFS and here's what I did.
>
> I created a storage pool with several drives. I unplugged 3 out of 5 drives
> from the array, currently:
>
> NAME        STATE     READ WRITE CKSUM
> gpool
On 5/8/10 3:07 PM, Tony wrote:
> Let's say I have two servers, both running OpenSolaris with ZFS. I basically
> want to be able to create a filesystem where the two servers have a common
> volume, that is mirrored between the two. Meaning, each server keeps an
> identical, real time backup of the
The drive (c7t2d0) is bad and should be replaced. The second drive
(c7t5d0) is either bad or going bad. This is exactly the kind of
problem that can force a Thumper to its knees: ZFS performance is
horrific, and as soon as you drop the bad disks things magically return to
normal.
My first recommend
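A rough sketch of the kind of check I mean (nothing exotic, just the stock error counters; flags may vary a little by build):
iostat -En      # per-device soft/hard/transport error counts; the bad disks stand out
fmdump -e       # timeline of FMA ereports; look for repeated disk ereports on the same device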
On 8/13/10 9:02 PM, "C. Bergström" wrote:
> Erast wrote:
>>
>>
>> On 08/13/2010 01:39 PM, Tim Cook wrote:
>>> http://www.theregister.co.uk/2010/08/13/opensolaris_is_dead/
>>>
>>> I'm a bit surprised at this development... Oracle really just doesn't
>>> get it. The part that's most disturbing to m
On 8/14/10 1:12 PM, Frank Cusack wrote:
>
> Wow, what leads you guys to even imagine that S11 wouldn't contain
> comstar, etc.? *Of course* it will contain most of the bits that
> are current today in OpenSolaris.
That's a very good question actually. I would think that COMSTAR would
stay becau
If you're still having issues, go into the BIOS and disable C-States if you
haven't already. They're responsible for most of the problems with 11th Gen
PowerEdge servers.
zfs list is mighty slow on systems with a large number of objects, but there is
no foreseeable plan that I'm aware of to solve that "problem".
Nevertheless, you need to do a zfs list, so do it once and work from
that.
zfs list > /tmp/zfs.out
for i in `grep mydataset@ /tmp/zfs.out`;
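A slightly fuller sketch of that approach (the action in the loop body is just a placeholder):
zfs list > /tmp/zfs.out
for i in `grep mydataset@ /tmp/zfs.out | awk '{print $1}'`; do
    echo "working on $i"     # e.g. zfs destroy "$i"
done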
Would someone "in the know" be willing to write up (preferably as a blog post)
definitive definitions/explanations of all the arcstats provided via kstat? I'm
struggling with proper interpretation of certain values, namely "p",
"memory_throttle_count", and the mru/mfu+ghost hit vs demand/prefetch hit
co
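For reference, the raw counters themselves are easy to pull while waiting on a proper writeup:
kstat -p zfs:0:arcstats                                          # all ARC statistics as name=value pairs
kstat -p zfs:0:arcstats:p zfs:0:arcstats:memory_throttle_count   # just the ones in question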
Thanks, not as much as I was hoping for but still extremely helpful.
Can you, or others have a look at this: http://cuddletech.com/arc_summary.html
This is a Perl script that uses kstats to produce a report such as the
following:
System Memory:
Physical RAM: 32759 MB
Free Me
It's a starting point anyway. The key is to try to draw useful conclusions
from the info to answer the torrent of "why is my ARC 30GB???"
There are several things I'm not sure whether I'm properly
interpreting, such as:
* As you state, the anon pages. Even the comment in code is, to
New version is available (v0.2):
* Fixes divide by zero.
* Includes tuning from /etc/system in the output.
* If prefetch is disabled, I explicitly say so.
* Accounts for the jacked anon count. Still needs improvement here.
* Added friendly explanations for MRU/MFU & ghost list counts.
Page and examp
I've been struggling to fully understand why disk space seems to vanish. I've
dug through bits of code and reviewed all the mails on the subject that I can
find, but I still don't have a proper understanding of what's going on.
I did a test with a local zpool on snv_97... zfs list, zpool list,
No takers? :)
benr.
Is there some hidden way to coax zdb into not just displaying data based on a
given DVA but rather to dump it in raw usable form?
I've got a pool with large amounts of corruption. Several directories are
toast and I get "I/O Error" when trying to enter or read the directory...
however I can re
Ya, I agree that we need some additional data and testing. The iostat
data in itself doesn't suggest to me that the process (dd) is slow but
rather that most of the data is being retrieved elsewhere (ARC). An
fsstat would be useful to correlate with the iostat data.
One thing that also comes to
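Something along these lines would let the two be lined up (interval and count are arbitrary):
fsstat zfs 1 30     # logical ops and bytes at the filesystem layer
iostat -xn 1 30     # physical I/O per device; logical reads that don't show up here are coming from the ARC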
Robert Milkowski wrote:
CLSNL> but if I click, say E, it has F's contents, F has Gs contents, and no
CLSNL> mail has D's contents that I can see. But the list in the mail
CLSNL> client list view is correct.
I don't believe it's a problem with the nfs/zfs server.
Please try with a simple DTrace script
I've been playing with replication of a ZFS Zpool using the recently released
AVS. I'm pleased with things, but just replicating the data is only part of
the problem. The big question is: can I have a zpool open in 2 places?
What I really want is a Zpool on node1 open and writable (productio
Is there an existing RFE for, what I'll wrongly call, "recursively visible
snapshots"? That is, .zfs in directories other than the dataset root.
Frankly, I don't need it available in all directories, although it'd be nice,
but I do have a need for making it visible one dir down from the dataset
Jim Dunham wrote:
Robert,
Hello Ben,
Monday, February 5, 2007, 9:17:01 AM, you wrote:
BR> I've been playing with replication of a ZFS Zpool using the
BR> recently released AVS. I'm pleased with things, but just
BR> replicating the data is only part of the problem. The big
BR> question is: ca
Robert Milkowski wrote:
I haven't tried it but what if you mounted ro via loopback into a zone
/zones/myzone01/root/.zfs is loop mounted in RO to /zones/myzone01/.zfs
That is so wrong. ;)
Besides just being evil, I doubt it'd work. And if it does, it probably
shouldn't.
Darren J Moffat wrote:
Ben Rockwood wrote:
Robert Milkowski wrote:
I haven't tried it but what if you mounted ro via loopback into a zone
/zones/myzone01/root/.zfs is loop mounted in RO to /zones/myzone01/.zfs
That is so wrong. ;)
Besides just being evil, I doubt
Peter Schuller wrote:
Hello,
with the advent of clones and snapshots, one will of course start
creating them. Which also means destroying them.
Am I the only one who is *extremely* nervous about doing "zfs destroy
some/[EMAIL PROTECTED]"?
This goes both manually and automatically in a script. I
Diego Righi wrote:
Hi all, I just built a new ZFS server for home and, being a long-time and avid
reader of this forum, I'm going to post my config specs and my benchmarks,
hoping this could be of some help to others :)
http://www.sickness.it/zfspr0nserver.jpg
http://www.sickness.it/zfspr0nser
May 25 23:32:59 summer unix: [ID 836849 kern.notice]
May 25 23:32:59 summer ^Mpanic[cpu1]/thread=1bf2e740:
May 25 23:32:59 summer genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf
Page fault) rp=ff00232c3a80 addr=490 occurred in module "unix" due to a
NULL pointer dereference
May
George wrote:
> I have set up an iSCSI ZFS target that seems to connect properly from
> the Microsoft Windows initiator in that I can see the volume in MMC
> Disk Management.
>
>
> When I shift over to Mac OS X Tiger with globalSAN iSCSI, I am able to
> set up the Targets with the target name
Dick Davies wrote:
> On 04/10/2007, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
>
>
>> Client A
>> - import pool make couple-o-changes
>>
>> Client B
>> - import pool -f (heh)
>>
>
>
>> Oct 4 15:03:12 fozzie ^Mpanic[cpu0]/thread=ff0002b51c80:
>> Oct 4 15:03:12 fozzie genunix: [
Dale Ghent wrote:
> ...and eventually in a read-write capacity:
>
> http://www.macrumors.com/2007/10/04/apple-seeds-zfs-read-write-
> developer-preview-1-1-for-leopard/
>
> Apple has seeded version 1.1 of ZFS (Zettabyte File System) for Mac
> OS X to Developers this week. The preview updates a p
I've run across an odd issue with ZFS quotas. This is an snv_43 system with
several zones/zfs datasets, but only one is affected. The dataset shows 10GB
used, 12GB referenced, but when counting the files it only has 6.7GB of data:
zones/ABC            10.8G  26.2G  12.0G  /zones/ABC
zones/[EMAIL PROTECTED]
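A guess at the usual first step here (dataset name taken from the listing above): check how much space is pinned by snapshots versus the live filesystem:
zfs list -t snapshot -r zones/ABC
zfs get -r used,referenced,quota zones/ABC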
Today, suddenly, without any apparent reason that I can find, I'm
getting panics during zpool import. The system panicked earlier today
and has been suffering since. This is snv_43 on a Thumper. Here's the
stack:
panic[cpu0]/thread=99adbac0: assertion failed: ss != NULL, file:
../..
I made a really stupid mistake... while having trouble removing a hot spare
marked as failed, I was trying several ways to put it back in a good
state. One thing I tried was 'zpool add pool c5t3d0'... but I forgot
to use the proper syntax, "zpool add pool spare c5t3d0".
Now I'm in a bind. I've got
Eric Schrock wrote:
> There's really no way to recover from this, since we don't have device
> removal. However, I'm surprised that no warning was given. There are at
> least two things that should have happened:
>
> 1. zpool(1M) should have warned you that the redundancy level you were
>atte
Robert Milkowski wrote:
> If you can't re-create the pool (+ backup & restore your data) I would
> recommend waiting for device removal in ZFS, and in the meantime I would
> attach another drive to it so you've got a mirrored configuration, and
> remove them once there's device removal. Since you're alread
the following to /etc/system:
set sata:sata_max_queue_depth = 0x1
If you don't, life will be highly unpleasant and you'll believe that disks are
failing everywhere when in fact they are not.
benr.
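For what it's worth, one way to confirm the tunable actually took after the reboot (assumes the sata module is loaded; the value prints in hex and should read 1):
echo 'sata_max_queue_depth/X' | mdb -k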
Ben Rockwood wrote:
> Today, suddenly, without any apparent reason that I can find, I'
Can someone please clarify the ability to utilize ACLs over NFSv3 from a ZFS
share? I can "getfacl" but I can't "setfacl". I can't find any documentation
in this regard. My suspicion is that ZFS shares must be NFSv4 in order to
utilize ACLs, but I'm hoping this isn't the case.
Can anyon
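For context, on the ZFS side the ACLs are NFSv4-style and are manipulated locally with chmod/ls rather than setfacl; a rough illustration (path and user are made up):
ls -v /tank/share/file.txt                                            # show the NFSv4-style ACL
chmod A+user:someuser:read_data/execute:allow /tank/share/file.txt    # add an entry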
I've run into an odd problem which I lovingly refer to as a "black hole
directory".
On a Thumper used for mail stores we've found that finds take an exceptionally
long time to run. There are directories that have as many as 400,000 files, which I
immediately considered the culprit. However, und
Hello,
I'm curious if anyone would mind sharing their experiences with zvols. I
recently started using a zvol as an iSCSI backend and was surprised by the
performance I was getting. Further testing revealed that it wasn't an iSCSI
performance issue but a zvol issue. Testing on a SATA disk l
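The sort of quick comparison I mean, with pool, volume, and sizes purely illustrative (and assuming compression is off so the zeros aren't compressed away):
dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=128k count=8192   # raw zvol write path
dd if=/dev/zero of=/tank/testfile bs=128k count=8192                # plain file on the same pool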
I was really hoping for some option other than ZIL_DISABLE, but finally gave up
the fight. Some people suggested NFSv4 helping over NFSv3 but it didn't... at
least not enough to matter.
ZIL_DISABLE was the solution, sadly. I'm running B43/X86 and hoping to get up
to 48 or so soonish (I BFU'd
I've got a Thumper doing nothing but serving NFS. It's using B43 with
zil_disabled. The system is being consumed in waves, but by what I don't know.
Notice vmstat:
3 0 0 25693580 2586268 0 0 0 0 0 0 0 0 0 0 0 926 91 703 0 25 75
21 0 0 25693580 2586268 0 0 0 0 0 0 0 0 0 13
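A generic way to catch what's eating the system during one of those waves (kernel profile for 30 seconds, top 20 stacks):
dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-30s { trunc(@, 20); exit(0); }'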
eric kustarz wrote:
So I'm guessing there are lots of files being created over NFS in one
particular dataset?
We should figure out how many creates/second you are doing over NFS (I
should have put a timeout on the script). Here's a real simple one
(from your snoop it looked like you're only do
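Something in the same spirit (not the original script; fbt on the kernel-side create entry point, so the probe name may differ by build):
dtrace -n 'fbt::zfs_create:entry { @ = count(); } tick-1s { printa("creates/sec: %@d\n", @); clear(@); }'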
Spencer Shepler wrote:
Good to hear that you have figured out what is happening, Ben.
For future reference, there are two commands that you may want to
make use of in observing the behavior of the NFS server and individual
filesystems.
There is the trusty nfsstat command. In this case, you wo
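For example (the second command just zeroes the counters, and needs root, so you can sample a rate):
nfsstat -s -n        # server-side NFS operation counts, including creates
nfsstat -z           # reset counters, then re-run nfsstat -s -n after a minute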
Bill Moore wrote:
On Fri, Dec 08, 2006 at 12:15:27AM -0800, Ben Rockwood wrote:
Clearly ZFS file creation is just amazingly heavy even with the ZIL
disabled. If creating 4,000 files in a minute squashes four 2.6GHz Opteron
cores, we're in big trouble in the longer term. In the meantime I
Robert Milkowski wrote:
Hello eric,
Saturday, December 9, 2006, 7:07:49 PM, you wrote:
ek> Jim Mauro wrote:
Could be NFS synchronous semantics on file create (followed by
repeated flushing of the write cache). What kind of storage are you
using (feel free to send privately if you need to)
Stuart Glenn wrote:
A little back story: I have a Norco DS-1220, a 12-bay SATA box. It is
connected to eSATA (SiI3124) via PCI-X; two drives are straight
connections, then the other two ports go to 5x multipliers within the
box. My needs/hopes for this were using 12 500GB drives and ZFS to make a
v
Andrew Summers wrote:
> So, I've read the Wikipedia article and have done a lot of research on Google
> about it, but it just doesn't make sense to me. Correct me if I'm wrong, but you
> can take a simple 5/10/20 GB drive or whatever size and turn it into
> exabytes of storage space?
>
> If that is n
Brad Plecs wrote:
I had a user report extreme slowness on a ZFS filesystem mounted over NFS over the weekend.
After some extensive testing, the extreme slowness appears to only occur when a ZFS filesystem is mounted over NFS.
One example is doing a 'gtar xzvf php-5.2.0.tar.gz'... over NFS onto