Patrick Bachmann:
Hey Bill,
Bill Sommerfeld wrote:
Overly wide raidz groups seem to be an unfenced hole that people new to
ZFS fall into on a regular basis.
The man page warns against this but that doesn't seem to be sufficient.
Given that zfs has relatively few such traps, perhaps large rai
zfs automatically mounts locally attached disks (export/import aside). Does
it do this for iscsi? I guess my question is, does the solaris iscsi
initiator provide the same kind of device permanence as for local drives?
thanks
-frank
Frank Cusack wrote:
Patrick Bachmann:
Hey Bill,
Bill Sommerfeld wrote:
Overly wide raidz groups seem to be an unfenced hole that people new to
ZFS fall into on a regular basis.
The man page warns against this but that doesn't seem to be sufficient.
Given that zfs has relatively few such tra
I've had people mention that WAFL does indeed support clones of snapshots.
Is this a "what version of WAFL" problem?
Darren
On 7/28/06, Darren Reed <[EMAIL PROTECTED]> wrote:
I've had people mention that WAFL does indeed support clones of snapshots.
Is this a "what version of WAFL" problem?
apparently so, but it is rather new from the impression given from
this site:
http://www.tournament.org.il/run/index.php?/arc
On 7/28/06, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
> I have a SAS array with a zfs pool on it. zfs automatically searches for
> and mounts the zfs pool I've created there. I want to attach another
> host to this array, but it doesn't have any provision for zones or the
> like. (Like you would
Hello Fred,
Friday, July 28, 2006, 12:37:22 AM, you wrote:
FZ> Hi Robert,
FZ> The fix for 6424554 is being backported to S10 and will be available in
FZ> S10U3, later this year.
I know that already - I was rather asking if a patch containing the
fix will be available BEFORE U3 and if yes then w
Hello Matty,
Thursday, July 27, 2006, 7:53:34 PM, you wrote:
M> Are there any known issues with patching zones that are installed on a ZFS
M> file system? Does smpatch and company work ok with this configuration?
Right now I have such configurations and have been using smpatch
without any probl
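(For context, a rough sketch of the kind of configuration Robert describes; the pool, dataset and zone names below are made up, and this is the layout whose support status is debated later in the thread:)

  # Hypothetical example: zone root (zonepath) on a ZFS filesystem
  zfs create tank/zones
  zfs create tank/zones/myzone
  chmod 700 /tank/zones/myzone
  zonecfg -z myzone 'create; set zonepath=/tank/zones/myzone; commit'
  zoneadm -z myzone install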
Hi there
Is it fair to compare the 2 solutions using Solaris 10 U2 and a commercial
database (SAP SD scenario).
The cache on the HW raid helps, and the CPU load is less... but the solution
costs more and you _might_ not need the performance of the HW RAID.
Has anybody with access to these unit
>
> * follow-up question from customer
>
>
> Yes, using the c#t#d# disks work, but anyone using fibre-channel storage
> on something like IBM Shark or EMC Clariion will want multiple paths to
> disk using either IBMsdd, EMCpower or Solaris native MPIO. Does ZFS
> work wit
Hey Frank,
Frank Cusack wrote:
Patrick Bachmann:
IMHO it is sufficient to just document this best-practice.
I disagree. The documentation has to AT LEAST state that more than 9
disks gives poor performance. I did read that raidz should use 3-9 disks
in the docs but it doesn't say WHY, so of
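(To make the 3-9 disk guidance concrete, here is a rough sketch, with made-up disk names, of the discouraged single wide raidz group versus the same disks split into two narrower raidz vdevs in one pool:)

  # Discouraged: one 12-wide raidz vdev
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0

  # Preferred: two 6-wide raidz vdevs in the same pool
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0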
Jeff Bonwick wrote:
If one host failed I want to be able to do a manual mount on the other host.
Multiple hosts writing to the same pool won't work, but you could indeed
have two pools, one for each host, in a dual active-passive arrangement.
That is, you dual-attach the storage with host A tal
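(A minimal sketch of that manual, active-passive failover; the pool names are placeholders:)

  # Normal operation: host A imports poolA, host B imports poolB.
  # Planned switchover of poolA from host A to host B:
  zpool export poolA        # on host A
  zpool import poolA        # on host B

  # If host A dies before it can export, host B forces the import:
  zpool import -f poolA     # on host B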
On Thu, Jul 27, 2006 at 07:25:55PM -0700, Matthew Ahrens wrote:
> On Thu, Jul 27, 2006 at 08:17:03PM -0500, Malahat Qureshi wrote:
> > Is there any way to boot off a zfs disk ("work around")?
>
> Yes, see
> http://blogs.sun.com/roller/page/tabriz?entry=are_you_ready_to_rumble
I followed those d
On Fri, Jul 28, 2006 at 02:14:50PM +0200, Patrick Bachmann wrote:
> systems config? There are a lot of things you know better off-hand
> about your system, otherwise you need to do some benchmarking, which
> ZFS would have to do too, if it was to give you the best performing
> config.
How hard
Danger Will Robinson...
Jeff Victor wrote:
Jeff Bonwick wrote:
If one host failed I want to be able to do a manual mount on the
other host.
Multiple hosts writing to the same pool won't work, but you could indeed
have two pools, one for each host, in a dual active-passive arrangement.
That is
Brian Hechinger wrote:
On Fri, Jul 28, 2006 at 02:14:50PM +0200, Patrick Bachmann wrote:
systems config? There are a lot of things you know better off-hand
about your system, otherwise you need to do some benchmarking, which
ZFS would have to do too, if it was to give you the best performing
c
Richard Elling wrote:
Danger Will Robinson...
Jeff Victor wrote:
Jeff Bonwick wrote:
Multiple hosts writing to the same pool won't work, but you could indeed
have two pools, one for each host, in a dual active-passive arrangement.
That is, you dual-attach the storage with host A talking to p
On Fri, 28 Jul 2006, Louwtjie Burger wrote:
reformatted
> Hi there
>
> Is it fair to compare the 2 solutions using Solaris 10 U2 and a
> commercial database (SAP SD scenario).
>
> The cache on the HW raid helps, and the CPU load is less... but the
> solution costs more and you _might_ no
Can someone explain to me what the 'volinit' and 'volfini' options to zfs do ? It's not obvious from the source code and these
options are undocumented.
Thanks,
John
--
John Cecere
Sun Microsystems
732-302-3922 / [EMAIL PROTECTED]
Brian Hechinger wrote:
On Thu, Jul 27, 2006 at 07:25:55PM -0700, Matthew Ahrens wrote:
On Thu, Jul 27, 2006 at 08:17:03PM -0500, Malahat Qureshi wrote:
Is there any way to boot off a zfs disk ("work around")?
Yes, see
http://blogs.sun.com/roller/page/tabriz?entry=are_you_ready_to_rumble
On Fri, Jul 28, 2006 at 12:43:37AM -0700, Frank Cusack wrote:
> zfs automatically mounts locally attached disks (export/import aside). Does
> it do this for iscsi? I guess my question is, does the solaris iscsi
> initiator provide the same kind of device permanence as for local drives?
No, not c
On Fri, Jul 28, 2006 at 10:52:50AM -0400, John Cecere wrote:
> Can someone explain to me what the 'volinit' and 'volfini' options to zfs
> do ? It's not obvious from the source code and these options are
> undocumented.
These are unstable private interfaces which create and destroy the
/dev/zvol
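(For context, and without touching the private interfaces: the device nodes in question appear under /dev/zvol when an emulated volume is created. The pool and volume names below are made up:)

  # Create a 1 GB zvol; its block and raw device nodes show up under
  # /dev/zvol/dsk and /dev/zvol/rdsk respectively.
  zfs create -V 1g tank/vol1
  ls -l /dev/zvol/dsk/tank/vol1 /dev/zvol/rdsk/tank/vol1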
On July 28, 2006 11:42:28 AM +0200 Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Matty,
Thursday, July 27, 2006, 7:53:34 PM, you wrote:
M> Are there any known issues with patching zones that are installed on a ZFS
M> file system? Does smpatch and company work ok with this configuration?
R
On July 28, 2006 2:14:50 PM +0200 Patrick Bachmann <[EMAIL PROTECTED]> wrote:
Richard already pointed out that you should split the devices into a number of vdevs and not pools.
I missed that. I guess I also didn't know what a vdev is, guess I know
even less about this "zfs thing" than I thou
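(Roughly, and with made-up disk names, the distinction is between one pool whose space is striped across several vdevs and several independent pools:)

  # One pool, two raidz vdevs: a single pool of space, striped across vdevs.
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 raidz c1t3d0 c1t4d0 c1t5d0

  # Two pools: two separate, independently managed pools of space.
  zpool create tank1 raidz c1t0d0 c1t1d0 c1t2d0
  zpool create tank2 raidz c1t3d0 c1t4d0 c1t5d0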
On July 28, 2006 9:09:58 AM -0400 Brian Hechinger <[EMAIL PROTECTED]> wrote:
On Fri, Jul 28, 2006 at 02:14:50PM +0200, Patrick Bachmann wrote:
systems config? There are a lot of things you know better off-hand
about your system, otherwise you need to do some benchmarking, which
ZFS would have to
On Fri, Jul 28, 2006 at 09:29:42AM -0700, Eric Schrock wrote:
> On Fri, Jul 28, 2006 at 12:43:37AM -0700, Frank Cusack wrote:
> > zfs automatically mounts locally attached disks (export/import aside). Does
> > it do this for iscsi? I guess my question is, does the solaris iscsi
> > initiator prov
On July 28, 2006 3:31:51 AM -0700 Louwtjie Burger <[EMAIL PROTECTED]> wrote:
Hi there
Is it fair to compare the 2 solutions using Solaris 10 U2 and a commercial database (SAP SD scenario).
The cache on the HW raid helps, and the CPU load is less... but the solution costs more and you _might_
Right now I have such configurations and have been using smpatch
without any problems so far.
I thought I read somewhere (zones guide?) that putting the zone root fs
on zfs was unsupported.
You've missed the earlier part of this thread. Yes, it's
unsupported, but the question was asked "Does
Frank Cusack wrote:
On July 28, 2006 3:31:51 AM -0700 Louwtjie Burger <[EMAIL PROTECTED]> wrote:
Hi there
Is it fair to compare the 2 solutions using Solaris 10 U2 and a commercial database (SAP SD scenario).
The cache on the HW raid helps, and the CPU load is less... but the solution costs
Richard Elling wrote:
How hard would it be to write a tool like that? Something along the
lines of:
zpool bench raidz disk1 disk2 ... diskN
Let ZFS figure out the best way to set up your disks for you and tell
you how it should be laid out (and even offer a "just do it" flag that
will let it
Robert,
The patches will be available sometime late September. This may be a
week or so before s10u3 actually releases.
Thanks,
George
Robert Milkowski wrote:
Hello eric,
Thursday, July 27, 2006, 4:34:16 AM, you wrote:
ek> Robert Milkowski wrote:
Hello George,
Wednesday, July 26, 2006,
Richard Elling wrote:
Danger Will Robinson...
Jeff Victor wrote:
Jeff Bonwick wrote:
If one host failed I want to be able to do a manual mount on the
other host.
Multiple hosts writing to the same pool won't work, but you could
indeed
have two pools, one for each host, in a dual active-
On Fri, Jul 28, 2006 at 09:47:48AM -0600, Lori Alt wrote:
>
> While the official release of zfs-boot won't be out
> until Update 4 at least, we're working right now on
> getting enough pieces available through OpenSolaris
> so that users can put together a boot CD/DVD/image
> that will directly in
Brian Hechinger wrote:
On Fri, Jul 28, 2006 at 09:47:48AM -0600, Lori Alt wrote:
While the official release of zfs-boot won't be out
until Update 4 at least, we're working right now on
getting enough pieces available through OpenSolaris
so that users can put together a boot CD/DVD/image
that wi
Joseph Mocker wrote:
Richard Elling wrote:
The problem is that there are at least 3 knobs to turn (space, RAS, and
performance) and they all interact with each other.
Good point. then how about something more like
zpool bench raidz favor space disk1 ... diskN
zpool bench raidz favor per
Hello Jeff,
Friday, July 28, 2006, 4:21:42 PM, you wrote:
JV> Now that I've gone and read the zpool man page :-[ it seems that only
JV> whole disks can be exported/imported.
No, it's not that way.
If you create a pool from slices you'll be able to import/export only
those slices. So if you w
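(A small sketch of the point, with made-up device names: export and import operate on the pool, and only the slices belonging to that pool are involved:)

  # Pool built on slices rather than whole disks
  zpool create tank c1t0d0s0 c1t1d0s0

  # Export/import moves the pool as a unit; only the slices that belong to
  # it are touched, the rest of each disk is left alone.
  zpool export tank
  zpool import tank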
On Jun 21, 2006, at 11:05, Anton B. Rang wrote:
My guess from reading between the lines of the Samsung/Microsoft
press release is that there is a mechanism for the operating system
to "pin" particular blocks into the cache (e.g. to speed boot) and
the rest of the cache is used for write
Hello Lori,
Friday, July 28, 2006, 6:50:55 PM, you wrote:
>>> Right now I have such configurations and have been using smpatch
>>> without any problems so far.
>>
>> I thought I read somewhere (zones guide?) that putting the zone root fs
>> on zfs was unsupported.
LA> You've missed the earlier
On Thu, Jul 27, 2006 at 11:46:30AM -0700, Richard Elling wrote:
> >>I don't have visibility of the Explorer development sites at the
> >>moment, but I believe that the last publicly available Explorer I
> >>looked at (v5.4) still didn't gather any ZFS related info, which would
> >>scare me mig