Manoj Joseph wrote:
Simon wrote:
So, does this mean this is an Oracle bug? Or is it impossible (or inappropriate)
to use ZFS/SVM volumes to create Oracle data files; should we instead use a
zfs or ufs filesystem to do this?
Oracle can use SVM volumes to hold its data. Unless I am mistaken, it
should be able to use zvols as well.
Tony Galway wrote:
I had previously undertaken a benchmark that pits “out of box”
performance of UFS via SVM, VxFS and ZFS but was waylaid due to some
outstanding availability issues in ZFS. These have been taken care of,
and I am once again undertaking this challenge on behalf of my
custome
Frank Cusack wrote:
On April 16, 2007 10:24:04 AM +0200 Selim Daoud
<[EMAIL PROTECTED]> wrote:
hi all,
when doing several zfs snapshots of a given fs, there are dependencies
between snapshots that complicate the management of snapshots.
Is there a plan to ease these dependencies, so we can reach
Simon wrote:
So, does this mean this is an Oracle bug? Or is it impossible (or inappropriate)
to use ZFS/SVM volumes to create Oracle data files; should we instead use a
zfs or ufs filesystem to do this?
Oracle can use SVM volumes to hold its data. Unless I am mistaken, it
should be able to use zvols as well.
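For anyone wanting to try that, a minimal sketch of putting an Oracle datafile on a
zvol might look like the following (pool, size and tablespace names here are made up
for illustration, not taken from Simon's setup):

  # create an 8 GB volume; it appears as a character device under /dev/zvol/rdsk
  zfs create -V 8g mypool/oradata
  ls -lL /dev/zvol/rdsk/mypool/oradata
  # then, in SQL*Plus, point a tablespace at the raw device, e.g.:
  #   CREATE TABLESPACE data01 DATAFILE '/dev/zvol/rdsk/mypool/oradata' SIZE 8000M;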
On 16/04/07, Krzys <[EMAIL PROTECTED]> wrote:
Ah, ok, not a problem. Do you know, Cindy, when the next Solaris Update is going to be
released by Sun? Yes, I am running U3 at this moment.
Summer is what I last read (July?).
--
"Less is only more where more is no good." --Frank Lloyd Wright
Shawn Walk
Hello Tony,
Monday, April 16, 2007, 7:10:41 PM, you wrote:
>
I had previously undertaken a benchmark that pits “out of box” performance of UFS via SVM, VxFS and ZFS but was waylaid due to some outstanding availability issues in ZFS. These have been taken care of, and I am once again unde
Ah, perfect then... Thank you so much for letting me know...
Regards,
Chris
On Tue, 17 Apr 2007, Robert Milkowski wrote:
Hello Krzys,
Sunday, April 15, 2007, 4:53:43 AM, you wrote:
K> Strange thing, I did try to do zfs send/receive using zfs.
K> On the from host I did the following:
K>
Ah, ok, not a problem. Do you know, Cindy, when the next Solaris Update is going to be
released by Sun? Yes, I am running U3 at this moment.
Regards,
Chris
On Mon, 16 Apr 2007, [EMAIL PROTECTED] wrote:
Chris,
Looks like you're not running a Solaris release that contains
the zfs receive -F option.
Adrian, you can take a look at pNFS:
http://opensolaris.org/os/community/os_user_groups/frosug/pNFS/FROSUG-pNFS.pdf
Project homepage:
http://opensolaris.org/os/project/nfsv41/
Rayson
On 4/16/07, Jason A. Hoffman <[EMAIL PROTECTED]> wrote:
On Apr 16, 2007, at 3:24 PM, Adrian Thompson wrote:
Hi!
I am very new to ZFS (never installed it), and I have a small
question.
Is it possible with ZFS to merge multiple machines with NFS into
one ZFS filesystem so they look like one storage device?
As I'm typing this I feel like a fool, but I'll ask anyway. :-)
Hello Krzys,
Sunday, April 15, 2007, 4:53:43 AM, you wrote:
K> Strange thing, I did try to do zfs send/receive using zfs.
K> On the from host I did the following:
K> bash-3.00# zfs send mypool/zones/[EMAIL PROTECTED] | ssh 10.0.2.79 zfs
receive
K> mypool/zones/[EMAIL PROTECTED]
K> Password:
K
Chris,
Looks like you're not running a Solaris release that contains
the zfs receive -F option. This option is in the current Solaris community
release, build 48.
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6f1?a=view#gdsup
Otherwise, you'll have to wait until an upcoming Solaris 10 release.
C
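On a build that does have -F, it simply forces the destination to be rolled back before
the stream is applied; a small sketch, with hypothetical dataset and snapshot names in
place of the obscured ones above:

  zfs send -i mypool/zones/fs1@snap1 mypool/zones/fs1@snap2 | \
      ssh 10.0.2.79 zfs receive -F mypool/zones/fs1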
On Mon, Apr 16, 2007 at 05:13:37PM -0500, [EMAIL PROTECTED] wrote:
>
> Why it was considered a valid data column in its current state is
> anyone's guess.
>
This column is precise and valid. It represents the amount of space
uniquely referenced by the snapshot, and therefore the amount of space
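In other words, the column answers "how much would I get back if I destroyed just this
snapshot". A quick way to see it per snapshot, with hypothetical dataset names, is roughly:

  zfs list -t snapshot -o name,used,referenced
  # USED  - space referenced only by that snapshot (freed if it alone is destroyed)
  # REFER - everything the snapshot can see, most of it shared with the live filesystem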
Hi!
I am very new to ZFS (never installed it), and I have a small question.
Is it possible with ZFS to merge multiple machines with NFS into one ZFS
filesystem so they look like one storage device?
As I'm typing this I feel like a fool, but I'll ask anyway. :-)
Thanks!
-=//-\drian Thompson
[18:19:00] [EMAIL PROTECTED]: /root > zfs send -i mypool/[EMAIL PROTECTED] mypool/[EMAIL PROTECTED] |
zfs receive -F mypool2/[EMAIL PROTECTED]
invalid option 'F'
usage:
receive [-vn] <filesystem|volume|snapshot>
receive [-vn] -d <filesystem>
For the property list, run: zfs set|get
It does not seem to work unless I am
[EMAIL PROTECTED] wrote on 04/16/2007 04:57:43 PM:
> one pool is a mirror on 300gb drives and the other is raidz1 on 7 x
> 143gb drives.
>
> I did make a clone of my zfs file systems with their snaps and something is
> not right, sizes do not match... anyway here is what I have:
>
> [17:50:32] [
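One thing that often makes the totals look different between a mirror pool and a raidz1
pool is simply the accounting: zpool list reports raw capacity (parity counted on raidz),
while zfs list reports usable space. Comparing both views, with the pool names from the
post above, may explain part of the gap:

  zpool list mypool mypool2    # raw space, includes raidz parity
  zfs list -r mypool2          # usable space as the filesystems see it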
On 4/17/07, Krzys <[EMAIL PROTECTED]> wrote:
and when I did try to run that last command I got the following error:
[16:26:00] [EMAIL PROTECTED]: /root > zfs send -i mypool/[EMAIL PROTECTED]
mypool/[EMAIL PROTECTED] |
zfs receive mypool2/[EMAIL PROTECTED]
cannot receive: destination has been mo
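Where receive -F is not available (as on the U3 box earlier in the thread), the usual way
around "destination has been modified" is to roll the receiving dataset back to the snapshot
the increment is based on before receiving, and optionally keep it read-only; a sketch with
hypothetical names:

  zfs rollback -r mypool2/zones/fs1@snap1
  zfs send -i mypool/zones/fs1@snap1 mypool/zones/fs1@snap2 | zfs receive mypool2/zones/fs1
  zfs set readonly=on mypool2/zones/fs1   # avoid accidental changes before the next increment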
ok, here is what I have:
[17:53:35] [EMAIL PROTECTED]: /root > zpool status -v
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t
Hello folks, I have a question and a small problem... I did try to replicate my
zfs with all the snaps, so I ran a few commands:
time zfs send mypool/[EMAIL PROTECTED] | zfs receive mypool2/[EMAIL PROTECTED]
real    6h35m12.34s
user    0m0.00s
sys     29m32.28s
zfs send -i mypool/[EMAIL PR
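Spelled out with hypothetical snapshot names (the real ones are obscured above), the
sequence being attempted is roughly:

  zfs snapshot mypool/fs1@snap1
  zfs send mypool/fs1@snap1 | zfs receive mypool2/fs1                       # full copy
  zfs snapshot mypool/fs1@snap2
  zfs send -i mypool/fs1@snap1 mypool/fs1@snap2 | zfs receive mypool2/fs1   # incremental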
Why are you using software-based RAID 5/RAIDZ for the tests? I didn't think
this was a common setup in cases where file system performance was the primary
consideration.
The volume is 7+1. I have created the volume using both the default (DRL) as
well as 'nolog' to turn it off, both with similar performance. Henk had a look
over my data and noticed that the Veritas test seems to be almost entirely
using the file system cache; on his advice, I will retest
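If the goal is to compare the filesystems rather than their caches, one common approach is
to bypass the page cache where the filesystem supports it (ZFS has no direct I/O mount
option, so its ARC stays in play either way); a sketch, with device and mount point names
made up:

  mount -F ufs -o forcedirectio /dev/md/dsk/d100 /bench/ufs
  mount -F vxfs -o mincache=direct,convosync=direct /dev/vx/dsk/testdg/vol01 /bench/vxfs

Another way to take the cache out of the picture is simply to use a working set well larger
than the machine's RAM.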
On April 16, 2007 10:51:41 AM -0700 Frank Cusack <[EMAIL PROTECTED]>
wrote:
On April 16, 2007 1:10:41 PM -0400 Tony Galway <[EMAIL PROTECTED]>
wrote:
I had previously undertaken a benchmark that pits "out of box"
performance
...
The test hardware is a T2000 connected to a 12 disk SE3510 (prese
On 4/16/07, Frank Cusack <[EMAIL PROTECTED]> wrote:
but there is another article somewhere
about tuning for the T2000, related to PCI on the T2000, i.e. it is
T2000-specific.
This one??
http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for
Rayson
-frank
On April 16, 2007 1:10:41 PM -0400 Tony Galway <[EMAIL PROTECTED]> wrote:
I had previously undertaken a benchmark that pits "out of box" performance
...
The test hardware is a T2000 connected to a 12 disk SE3510 (presenting as
...
Now to my problem - Performance! Given the test as defined ab
On April 16, 2007 10:24:04 AM +0200 Selim Daoud <[EMAIL PROTECTED]>
wrote:
hi all,
when doing several zfs snapshots of a given fs, there are dependencies
between snapshots that complicate the management of snapshots.
Is there a plan to ease these dependencies, so we can reach snapshot
functionalit
Did you measure CPU utilization by any chance during the tests?
It's a T2000, and the CPU cores are quite slow on this box, hence they might be a
bottleneck.
Just a guess.
On Mon, 2007-04-16 at 13:10 -0400, Tony Galway wrote:
> I had previously undertaken a benchmark that pits “out of box”
> performance of UFS
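Per-core utilization is easy enough to capture alongside the runs; something like the
following (output file names are arbitrary) would show whether the T2000 cores are the
bottleneck:

  mpstat 5 > mpstat.out &        # per-CPU usr/sys/idle every 5 seconds
  iostat -xnz 5 > iostat.out &   # per-device service times over the same window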
"Paul Fisher" <[EMAIL PROTECTED]> wrote:
> Is there any reason that the CDDL dictates, or that Sun would object,
> to zfs being made available as an independently distributed Linux kernel
> module? In other words, if I made an Nvidia-like distribution available,
> would that be OK from the OpenSo
Nicolas Williams <[EMAIL PROTECTED]> wrote:
> Sigh. We have devolved. Every thread on OpenSolaris discuss lists
> seems to devolve into a license discussion.
It is funny to see that in our case, the technical problems (those caused
by the fact that Linux implements a different VFS interface layer
hi all,
when doing several zfs snapshots of a given fs, there are dependencies
between snapshots that complicate the management of snapshots.
Is there a plan to ease these dependencies, so we can reach snapshot
functionalities that are offered in other products such as Compellent
(http://www.compe
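One kind of snapshot dependency in ZFS, possibly the sort being described, involves clones:
the snapshot a clone was created from cannot be destroyed until the clone is destroyed or
promoted. A small illustration with made-up names:

  zfs snapshot mypool/fs@s1
  zfs clone mypool/fs@s1 mypool/fsclone
  zfs destroy mypool/fs@s1      # refused: the snapshot has a dependent clone
  zfs promote mypool/fsclone    # the snapshot now belongs to the clone (mypool/fsclone@s1)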
Hi Roman,
from the provided data I suppose that you are running an unpatched Solaris 10
Update 3.
Since the fault address is 0xc4, and in zio_create we mostly manipulate
zio_t structures, 0xc4 most likely corresponds to the io_child member
of the zio_t structure. If my assumption about Solaris updat
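For reference, the sort of mapping being described, turning a fault offset such as 0xc4 into
a structure member, can be checked with mdb against a matching kernel (assuming mdb and the
kernel CTF data are available), roughly:

  echo '::offsetof zio_t io_child' | mdb -k
  # or list all member offsets and look for the one near 0xc4:
  echo '0::print -at zio_t' | mdb -k | grep io_child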