On 7/28/2010 4:22 PM, Karol wrote:
I appear to be getting between 2-9MB/s reads from individual disks in my zpool
as shown in iostat -v
I expect upwards of 100MB/s per disk, or at least aggregate performance on par
with the number of disks that I have.
My configuration is as follows:
Two Quad
Hi,
while playing with ZFS ACLs I have noticed strange chmod behavior: it
duplicates some ACLs. Is it a bug or a feature :) ?
For example scenario:
#ls -dv ./2
drwxr-xr-x 2 root root 2 Jul 29 11:22 2
0:owner@::deny
1:owner@:list_directory/read_data/add_file/write_data/a
Hi,
Is there a way to see which files have been deduped, so I can copy them again
an un-dedupe them?
Unfortunately, that's not easy (I've tried it :) ).
The issue is that the dedup table (which knows which blocks have been deduped)
doesn't know about files.
And if you pull block pointers fo
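Even though the DDT doesn't map back to files, the table itself can be inspected. A hedged sketch, assuming a pool named tank (the name is illustrative): zdb can print the dedup table histogram, where entries with a refcount above 1 are the blocks that were actually deduped.

```shell
# Print the DDT histogram for the pool; "refcnt > 1" rows are deduped blocks:
zdb -DD tank

# Simulate dedup to estimate the achievable ratio without enabling it:
zdb -S tank
```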
Hmmm, that's odd. I have a number of VMs running on NFS (hosted on ESX, rather
than Xen) with no problems at all. I did add a SLOG device to get performance
up to a reasonable level, but it's been running flawlessly for a few months
now. Previously I was using iSCSI for most of the connections,
On Jul 28, 2010, at 3:11 PM, sol wrote:
> A partial workaround was to turn off access time on the share and to mount
> with
> noatime,actimeo=60
>
> But that's not perfect because when left alone the VM got into a "stuck"
> state.
> I've never seen that state before when the VM was hosted on
Hi Eric - thanks for your reply.
Yes, zpool iostat -v
I've re-configured the setup into two pools for a test:
1st pool: 8 disk stripe vdev
2nd pool: 8 disk stripe vdev
The SSDs are currently not in the pool since I am not even reaching what the
spinning rust is capable of - I believe I have a de
Sorry - I said the 2 iostats were run at the same time - the second was run
after the first during the same file copy operation.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris
> Update to my own post. Further tests more
> consistently resulted in closer to 150MB/s.
>
> When I took one disk offline, it was just shy of
> 100MB/s on the single disk. There is both an obvious
> improvement with the mirror, and a trade-off (perhaps
> the latter is controller related?).
>
>
>Hi r2ch
>The operations column shows about 370 operations for read - per spindle
>(Between 400-900 for writes)
>How should I be measuring iops?
It seems to me, then, that your spindles are going about as fast as they can
and you're just moving small block sizes.
There are lots of ways to test for
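A quick sanity check using the numbers quoted above (~370 read ops per spindle) and an assumed small I/O size of 8 KB shows why the observed 2-9MB/s is consistent with IOPS-bound disks. The 8 KB figure is my assumption, not from the thread:

```shell
# ~370 ops/s per spindle comes from the thread; the 8 KB I/O size is
# an assumed value to illustrate the arithmetic.
iops=370
io_size_kb=8
kb_per_s=$((iops * io_size_kb))
echo "${kb_per_s} KB/s"   # 2960 KB/s, i.e. roughly 3 MB/s per spindle
```

In other words, hundreds of small random I/Os per second lands squarely in the low-single-digit MB/s range per disk.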
Hello,
TRIM support has just been committed into OpenSolaris:
http://mail.opensolaris.org/pipermail/onnv-notify/2010-July/012674.html
Via:
http://www.c0t0d0s0.org/archives/6792-SATA-TRIM-support-in-Opensolaris.html
Zpool upgrade on this system went fine, but zfs upgrade failed:
# zfs upgrade -a
cannot unmount '/space/direct': Device busy
cannot unmount '/space/dcc': Device busy
cannot unmount '/space/direct': Device busy
cannot unmount '/space/imap': Device busy
cannot unmount '/space
Hi Gary,
This should just work without having to do anything.
Looks like a bug but I haven't seen this problem before.
Anything unusual about the mount points for the file systems
identified below?
Thanks,
Cindy
On 07/29/10 07:07, Gary Mills wrote:
Zpool upgrade on this system went fine, bu
This sounds very similar to another post last month.
http://opensolaris.org/jive/thread.jspa?messageID=487453
The trouble appears to be below ZFS, so you might try asking on the
storage-discuss forum.
-- richard
On Jul 28, 2010, at 5:23 PM, Karol wrote:
> I appear to be getting between 2-9MB/s
I'm trying to understand how snapshots work in terms of how I can use them for
recovering and/or duplicating virtual machines, and how I should set up my file
system.
I want to use OpenSolaris as a storage platform with NFS/ZFS for some
development VMs; that is, the VMs use the OpenSolaris box
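A minimal sketch of the snapshot workflow being asked about, with hypothetical dataset names (tank/vms is an assumption, not from the post): snapshots are cheap copy-on-write checkpoints, and clones turn a snapshot into an independently writable copy.

```shell
# Take a point-in-time snapshot of the VM dataset (cheap, copy-on-write):
zfs snapshot tank/vms@before-upgrade

# Recover: roll the dataset back to that snapshot:
zfs rollback tank/vms@before-upgrade

# Duplicate: clone the snapshot into a new writable dataset for a second VM:
zfs clone tank/vms@before-upgrade tank/vms-dev2
```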
Yes I noticed that thread a while back and have been doing a great deal of
testing with various scsi_vhci options.
I am disappointed that the thread hasn't moved further, since I also suspect
that it is mpt-sas, multipath, or expander related.
I was able to get aggregate writes up t
On Jul 28, 2010, at 4:11 PM, Robert Milkowski wrote:
>
> fyi
This covers the case where an exported pool has lost its log.
zpool export
[log disk or all disks in a mirrored log disappear]
zpool import -- currently fails, missing top-level vdev
The following cases are alre
Which Solaris release is this and are you using /usr/bin/ls and
/usr/bin/chmod?
Thanks,
Cindy
On 07/29/10 02:44, . . wrote:
Hi ,
while playing with ZFS ACLs I have noticed strange chmod behavior: it
duplicates some ACLs. Is it a bug or a feature :) ?
For example scenario:
#ls -dv ./2
drwxr
Hi Gary,
I found a similar zfs upgrade failure with the device busy error, which
I believe was caused by a file system mounted under another file system.
If this is the cause, I will file a bug or find an existing one.
The workaround is to unmount the nested file systems and upgrade them
indivi
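Cindy's workaround, sketched with the dataset names from Gary's error output (the nesting itself is an assumption about his layout): unmount the nested file systems first, then upgrade them individually instead of using -a.

```shell
# Suppose space/direct is mounted under /space, which blocks the unmount:
zfs unmount space/direct

# Upgrade the datasets one at a time rather than with "zfs upgrade -a":
zfs upgrade space
zfs upgrade space/direct

# Remount the nested file system afterwards:
zfs mount space/direct
</imports>
```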
On Jul 29, 2010, at 9:57 AM, Carol wrote:
> Yes I noticed that thread a while back and have been doing a great deal of
> testing with various scsi_vhci options.
> I am disappointed that the thread hasn't moved further since I also suspect
> that it is related to mpt-sas or multipath or expande
Actually, writes faster than reads are typical for a Copy-on-Write FS (or Write
Anywhere). I usually describe it like this.
CoW in ZFS works like when you come home after a long day and you just want to
go to bed. You take off one piece of clothing after another and drop it on the
floor just where
Hey Nix,
I think I see the problem now.
If you want to review the interaction of setting an explicit ACL and
using the chmod 755 command on 2, you need this command:
# ls -dv 2
What you have is this command:
# ls -dv
(I have no idea what's going on with the parent dir ACL.)
I tested your sy
On Thu, Jul 29, 2010 at 11:50 AM, Mark wrote:
> I'm trying to understand how snapshots work in terms of how I can use them
> for recovering and/or duplicating virtual machines, and how I should set up
> my file system.
>
> I want to use OpenSolaris as a storage platform with NFS/ZFS for some
> de
On Thu, Jul 29, 2010 at 12:00:08PM -0600, Cindy Swearingen wrote:
> Hi Gary,
>
> I found a similar zfs upgrade failure with the device busy error, which
> I believe was caused by a file system mounted under another file system.
>
> If this is the cause, I will file a bug or find an existing one.
On Tue, Jul 20, 2010 at 9:48 AM, Hernan Freschi wrote:
> Is there a way to see which files are using dedup? Or should I just
> copy everything to a new ZFS?
Using 'zfs send' to copy the datasets will work and preserve other
metadata that copying will lose.
-B
--
Brandon High : bh...@freaks.co
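Brandon's suggestion, as a hedged sketch (pool and dataset names are made up): a plain zfs send does not carry dataset properties, so if the destination inherits dedup=off, every block is rewritten un-deduped while snapshots within the stream are preserved.

```shell
# Snapshot the deduped dataset:
zfs snapshot tank/data@undedup

# Make sure the receiving side will not dedup incoming blocks:
zfs set dedup=off tank

# Replicate; the copy inherits dedup=off, so its blocks are written un-deduped:
zfs send tank/data@undedup | zfs receive tank/data-copy
```

Note that setting dedup=off on an existing dataset only affects new writes; it is the re-copy via send/receive that actually un-dedupes the data.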
I moved one hard drive from a pool to a different controller, and now it isn't
recognized as part of the pool.
This is the pool:
NAME         STATE     READ WRITE CKSUM
video        DEGRADED     0     0     0
  raidz1-0   DEGRADED     0     0     0
    c13t0d0  UN
On Thu, Jul 29, 2010 at 10:26:14PM +0200, Pawel Jakub Dawidek wrote:
> On Thu, Jul 29, 2010 at 12:00:08PM -0600, Cindy Swearingen wrote:
> >
> > I found a similar zfs upgrade failure with the device busy error, which
> > I believe was caused by a file system mounted under another file system.
> >
Hi Robert -
I tried all of your suggestions but unfortunately my performance did not
improve.
I tested single disk performance and I get 120-140MBps read/write to a single
disk. As soon as I add an additional disk (mirror, stripe, raidz) , my
performance drops significantly.
I'm using 8Gbit F