more below...
On Nov 24, 2009, at 9:29 AM, Paul Kraus wrote:
On Tue, Nov 24, 2009 at 11:03 AM, Richard Elling
wrote:
Try disabling prefetch.
Just tried it... no change in random read (still 17-18 MB/sec for a
single thread), but sequential read performance dropped from about 200
MB/sec. to 100 MB/sec. (as expected).
Those are great, but they're about testing the zfs software. There's a small
amount of overlap, in that these injections include trying to simulate the
hoped-for system response (e.g., EIO) to various physical scenarios, so it's
worth looking at for scenario suggestions.
However, for most of us
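If the injections referred to above are the zinject(1M)-style ones used for ZFS
testing, a rough sketch of exercising an EIO scenario might look like this (the
pool and device names are invented):

    # inject read EIO errors on one device of the pool 'tank'
    zinject -d c1t2d0 -e io -T read tank
    # list active injection handlers, then remove them all
    zinject
    zinject -c all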
Using an HP DL360 G5 with an HP Smart Array P400i controller. Created 2
mirrored (hardware) RAID volumes. During setup I installed Solaris onto a ZFS
partition on one of the mirrored volumes, and used the second mirrored volume
to create another ZFS pool.
I patched the OS, rebooted, then ran z
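A rough sketch of that last step, assuming the second P400i volume shows up as
a single LUN (the device name here is invented):

    # create a pool on the second hardware-mirrored volume
    zpool create tank c0t1d0
    zpool status tank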
> you can fetch the "cr_txg" (cr for creation) for a
> snapshot using zdb,
yes, but this is hardly an appropriate interface. zdb is also likely to cause
disk activity because it looks at many things other than the specific item in
question.
> but the very creation of a snapshot requires a new
>
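For reference, the cr_txg being discussed appears in zdb's dataset listing; a
rough sketch (pool, snapshot name, and all numbers invented):

    zdb -d tank | grep home@yesterday
    Dataset tank/home@yesterday [ZPL], ID 143, cr_txg 48212, 1.21G, 54210 objects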
glidic anthony wrote:
> I have a solution using zfs set sharenfs=rw,nosuid zpool, but I would
> prefer to use the sharemgr command.
Then your preference is wrong. ZFS filesystems are not shared this way.
Read up on ZFS and NFS.
--
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
+http://nagual.nl/ | SunOS 10u7 05
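For the record, sharenfs is normally set on the ZFS datasets themselves and
inherited by their children, rather than managed through sharemgr; a rough
sketch (pool and filesystem names invented):

    # share every filesystem under tank/export over NFS
    zfs set sharenfs=rw,nosuid tank/export
    # children such as tank/export/home inherit the property
    zfs get -r sharenfs tank/export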
On Nov 24, 2009, at 11:31 AM, Mike Gerdts wrote:
On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling
wrote:
Good question! Additional thoughts below...
On Nov 24, 2009, at 6:37 AM, Mike Gerdts wrote:
Suppose I have a storage server that runs ZFS, presumably providing
file (NFS) and/or block (i
Hi all,
I want to share a folder under which I have mounted many ZFS filesystems.
But when I mount this share I have access to that folder, but not to the ZFS
filesystems beneath it.
If anyone has a solution other than making one share per ZFS filesystem, that
would be great.
I have a solution using zfs set sharenfs=rw,nosuid zpool, but I would prefer
to use the sharemgr command.
On Nov 23, 2009, at 11:41 AM, Richard Elling wrote:
On Nov 23, 2009, at 9:44 AM, sundeep dhall wrote:
All,
I have a test environment with 4 internal disks and RAIDZ option.
Q) How do I simulate a sudden 1-disk failure to validate that zfs /
raidz handles things well without data errors?
NB
Lustre is coming in a year(?). It will then use ZFS
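On the question above about simulating a sudden 1-disk failure: physically
pulling a drive is the most realistic test, but a rough software-only sketch
could be (pool and device names invented):

    # take one disk out from under the raidz while I/O is running
    zpool offline -t tank c1t3d0
    zpool status -x tank
    # later, bring it back and verify there are no data errors
    zpool online tank c1t3d0
    zpool scrub tank
    zpool status -v tank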
On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling
wrote:
> Good question! Additional thoughts below...
>
> On Nov 24, 2009, at 6:37 AM, Mike Gerdts wrote:
>
>> Suppose I have a storage server that runs ZFS, presumably providing
>> file (NFS) and/or block (iSCSI, FC) services to other machines that
Thank you to all who've provided data about this. I've updated
the bugs mentioned earlier and I believe we can now make progress
on diagnosis.
The new synopsis (should show up on b.o.o tomorrow) is as follows:
6894775 mpt's msi support is suboptimal with xVM
James C. McPherson
--
Senior Ke
> Travis Tabbal wrote:
> > I have a possible workaround. Mark Johnson has been emailing me today
> > about this issue and he proposed the following:
> >
> >> You can try adding the following to /etc/system, then rebooting...
> >> set xpv_psm:xen_support_msi = -1
>
> I am also running
> On Nov 23, 2009, at 7:28 PM, Travis Tabbal wrote:
>
> > I have a possible workaround. Mark Johnson has been emailing me today
> > about this issue and he proposed the following:
> >
> >> You can try adding the following to /etc/system, then rebooting...
> >> set xpv_psm:xen_support_msi = -1
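A rough sketch of applying that workaround (the mdb check at the end is only a
suggested way to confirm the setting and is not from the thread):

    echo 'set xpv_psm:xen_support_msi = -1' >> /etc/system
    reboot
    # after the reboot, confirm the tunable took effect
    echo 'xpv_psm`xen_support_msi/D' | mdb -k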
On Tue, Nov 24, 2009 at 11:03 AM, Richard Elling
wrote:
> Try disabling prefetch.
Just tried it... no change in random read (still 17-18 MB/sec for a
single thread), but sequential read performance dropped from about 200
MB/sec. to 100 MB/sec. (as expected). Test case is a 3 GB file
accessed in
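For reference, prefetch on builds of that vintage is usually disabled through
the zfs_prefetch_disable tunable; a rough sketch (the exact mechanism used
here wasn't stated in the thread):

    # live, without a reboot
    echo zfs_prefetch_disable/W0t1 | mdb -kw
    # or persistently, as a line in /etc/system, followed by a reboot
    set zfs:zfs_prefetch_disable = 1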
Miles Nordin wrote:
"lz" == Len Zaifman writes:
lz> So I now have 2 disk paths and two network paths as opposed to
lz> only one in the 7310 cluster.
You're configuring all your failover on the client, so the HA stuff is
stateless wrt the server? sounds like the smart w
Try disabling prefetch.
-- richard
On Nov 24, 2009, at 6:45 AM, Paul Kraus wrote:
I know there have been a bunch of discussions of various ZFS
performance issues, but I did not see anything specifically on this.
In testing a new configuration of an SE-3511 (SATA) array, I ran into
an int
Good question! Additional thoughts below...
On Nov 24, 2009, at 6:37 AM, Mike Gerdts wrote:
Suppose I have a storage server that runs ZFS, presumably providing
file (NFS) and/or block (iSCSI, FC) services to other machines that
are running Solaris. Some of the use will be for LDoms and zones[
I know there have been a bunch of discussions of various ZFS
performance issues, but I did not see anything specifically on this.
In testing a new configuration of an SE-3511 (SATA) array, I ran into
an interesting ZFS performance issue. I do not believe that this is
creating a major issue f
Suppose I have a storage server that runs ZFS, presumably providing
file (NFS) and/or block (iSCSI, FC) services to other machines that
are running Solaris. Some of the use will be for LDoms and zones[1],
which would create zpools on top of zfs (fs or zvol). I have concerns
about variable block s
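Purely to illustrate the block-size knobs at issue here (dataset names and
sizes are invented):

    # a zvol to back an LDom guest disk, with a fixed 8K volume block size
    zfs create -V 20g -o volblocksize=8k tank/ldom1-disk0
    # a filesystem for zone roots, with a matching record size
    zfs create -o recordsize=8k tank/zones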
Daniel Carosone writes:
>> I don't think it is easy to do, the txg counter is on
>> a pool level,
>> [..]
>> it would help when the entire pool is idle, though.
>
> .. which is exactly the scenario in question: when the disks are
> likely to be spun down already (or to spin down soon without furt
Hi Darren,
Could you post the -D part of the man pages? I have no access to a
system (yet) with the latest man pages.
http://docs.sun.com/app/docs/doc/819-2240/zfs-1m
has not been updated yet.
Regards
Peter
Darren J Moffat wrote:
Steven Sim wrote:
Hello;
Dedup on ZFS is an abs
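For context, dedup in those builds is a per-dataset property; a rough sketch of
enabling it and checking the result (names invented, and this may not be the
-D material Peter was asking about):

    zfs set dedup=on tank/data
    zpool get dedupratio tank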