This question is both for the ZFS forum and the Zones forum.
I have a global zone with a pool (mapool).
I have two zones, z1 and z2.
I want to pass a dataset (mapool/fs1) from z1 to z2.
Solution 1:
mapool/fs1 is mounted under /thing in the global zone (legacy mountpoint), and I
configure a lofs mount on z1 and
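For reference, a minimal zonecfg sketch of the lofs approach (assuming mapool/fs1 is set to mountpoint=legacy and mounted at /thing in the global zone). The same fs resource would be added to both zones, so z1 and z2 see the same files simultaneously:

    # zonecfg -z z1
    zonecfg:z1> add fs
    zonecfg:z1:fs> set dir=/thing
    zonecfg:z1:fs> set special=/thing
    zonecfg:z1:fs> set type=lofs
    zonecfg:z1:fs> end
    zonecfg:z1> commit
    zonecfg:z1> exit
    (repeat the same fs resource for z2 with "zonecfg -z z2")

Note that the alternative, "add dataset", delegates a dataset to a single zone only, so it cannot give both z1 and z2 the same dataset at the same time.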
After swapping some hardware and rebooting:
SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Tue Nov 14 21:37:55 PST 2006
PLATFORM: SUNW,Sun-Fire-T1000, CSN: -, HOSTNAME:
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 60b31acc-0de8-c1f3-84ec-935574615804
DESC: A ZFS pool fail
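The usual next steps for a ZFS-8000-CS event (sketched here for reference; not part of the original report) are to see which pool is affected and to pull the full fault record by its event ID:

    # zpool status -x
    # fmdump -v -u 60b31acc-0de8-c1f3-84ec-935574615804

What to do after that depends on what zpool status reports about the missing or relocated devices.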
On November 14, 2006 7:57:52 PM -0800 listman <[EMAIL PROTECTED]> wrote:
hi all, i'm considering using ZFS for a Perforce server where the
repository might have the following characteristics
Number of branches 68
Number of changes 85,987
Total number of files
(at head
Thank you all for the very quick and informative responses. If it
happens again, I will try to get a core out of it.
Chris
hi all, i'm considering using ZFS for a Perforce server where the repository might have the following characteristics:
Number of branches: 68
Number of changes: 85,987
Total number of files (at head revision): 2,675,545
Total number of users: 36
Total number of clients: 3,219
Perforce depot size
Chris,
To force a panic on an x86 system using GRUB, you'll first need to boot kmdb. This can be accomplished by adding the 'kmdb' option
to the multiboot line in menu.lst; a sketch of the resulting line follows the steps below. Rather than hacking your menu.lst:
- power your machine on
- arrow to the OS you want to boot in GRUB
- type 'e'
- arrow
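For illustration, the edited GRUB entry from those steps might end up looking roughly like this (the title and paths here are just an example; the key part is the appended kmdb option):

    title Solaris Nevada snv_51
    kernel /platform/i86pc/multiboot kmdb
    module /platform/i86pc/boot_archive

Once kmdb is loaded, F1-A on a local console (or a break over a serial line) drops the machine into the debugger.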
Hm.
If the system is hung, it's unlikely that a reboot -d will help.
You want to be booting into kmdb, then using the F1-a interrupt sequence
then dumping using $<systemdump
http://docs.sun.com/app/docs/doc/817-1985/6mhm8o5p3?a=view
Forcing a crashdump on x86 boxes:
http://docs.sun.com/app/docs/doc/817-1985
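Spelled out, the sequence above amounts to something like this once kmdb is loaded (the prompt is illustrative, and dumps must be configured; see dumpadm):

    (press F1-A on the console, or send a break over the serial line)
    [0]> $<systemdump

$<systemdump panics the machine and writes a crash dump, which savecore picks up on the next boot.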
Hi, Chris,
You may force a panic by "reboot -d".
Thanks,
Sean
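A quick sketch of that route, plus where the dump ends up (assuming savecore is enabled and the default dump directory is in use):

    # reboot -d                     # panic the system and take a crash dump
    # ls /var/crash/`hostname`      # after the reboot: unix.N and vmcore.N
    # dumpadm                       # view or change dump/savecore configuration

As noted elsewhere in the thread, this only helps if the system is still responsive; a hard hang calls for the kmdb route instead.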
On Tue, Nov 14, 2006 at 09:11:58PM -0600, Chris Csanady wrote:
> I have experienced two hangs so far with snv_51. I was running snv_46
> until recently, and it was rock solid, as were earlier builds.
>
> Is there a way for me to forc
I have experienced two hangs so far with snv_51. I was running snv_46
until recently, and it was rock solid, as were earlier builds.
Is there a way for me to force a panic? It is an x86 machine, with
only a serial console.
Chris
Richard Elling - PAE wrote:
Torrey McMahon wrote:
Robert Milkowski wrote:
Hello Torrey,
Friday, November 10, 2006, 11:31:31 PM, you wrote:
[SNIP]
A tunable in the form of a pool property, with a default of 100%.
On the other hand, maybe the simple algorithm Veritas has used is good
enough - a simple delay be
On 14 November, 2006 - oab sent me these 1,0K bytes:
> Hi All,
> How would I do the following in ZFS? I have four arrays connected to
> an E6900. Each array is connected to a separate IB board on the back
> of the server. Each array is presenting 4 disks.
>
> c2t40d0 c3t40d0 c4t40d0 c
> Seems that "break" is a more obvious thing to do with
> mirrors; does this
> allow me to peel off one bit of a three-way mirror?
>
> Casper
I would think that this makes sense, and splitting off one side of a two-way
mirror is more the edge case (though emphatically required/desired).
Rainer
Well, I haven't overwritten the disk, in the hopes that I can get the data
back. So, how do I go about copying or otherwise repairing the vdevs?
Rainer
Hi All,
How would I do the following in ZFS? I have four arrays connected to an
E6900. Each array is connected to a separate IB board on the back of the
server. Each array is presenting 4 disks.
c2t40d0 c3t40d0 c4t40d0 c5t40d0
c2t40d1 c3t40d1 c4t40d1 c5t40d1
c2t40d2 c3t40d2 c4t40d2 c5t40d2
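The question is cut off above, but purely as an illustration: one common layout is one raidz vdev per LUN number, with each vdev taking one disk from each of the four arrays, so that losing an entire array (or IB board) costs each vdev only one disk. The pool name and the choice of raidz are assumptions here:

    # zpool create tank \
        raidz c2t40d0 c3t40d0 c4t40d0 c5t40d0 \
        raidz c2t40d1 c3t40d1 c4t40d1 c5t40d1 \
        raidz c2t40d2 c3t40d2 c4t40d2 c5t40d2

If the arrays present a fourth LUN each, it would be added as another raidz line in the same pattern; two-way or four-way mirrors across arrays are the other obvious option.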
Bill Sommerfeld wrote:
On Tue, 2006-11-14 at 17:10 +0100, Robert Milkowski wrote:
BS>zpool fork -p poolname -n newpoolname [devname ...]
BS> (just a concept... )
Could you please create an RFE for it and give us the id?
I would immediately add a call record to it :)
6493559 need way to c
>Actually, we have considered this. On both SPARC and x86, there will be
>a way to specify the root file system (i.e., the bootable dataset) to be
>booted,
>at either the GRUB prompt (for x86) or the OBP prompt (for SPARC).
>If no root file system is specified, the current default 'bootfs' speci
On 11/14/06, Rainer Heilke <[EMAIL PROTECTED]> wrote:
This makes sense for the most part (and yes, I think it should be done by the
file system, not a manual grovelling through vdev labels).
I agree, this should be done with a new command, as has been
suggested. However,
what I was suggesting
[EMAIL PROTECTED] wrote:
On 11/11/06, Bart Smaalders <[EMAIL PROTECTED]> wrote:
It would seem useful to separate the user's data from the system's data
to prevent problems with losing mail, log file data, etc, when either
changing boot environments or pivoting root boot environments.
On 13 November, 2006 - Eric Kustarz sent me these 2,4K bytes:
> Tomas Ögren wrote:
> >On 13 November, 2006 - Sanjeev Bagewadi sent me these 7,1K bytes:
> >>Regarding the huge number of reads, I am sure you have already tried
> >>disabling the VDEV prefetch.
> >>If not, it is worth a try.
> >That
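For anyone who wants to try the suggestion quoted above: the vdev-level prefetch (vdev cache) is controlled by kernel tunables rather than a pool property, so disabling it generally means an /etc/system entry and a reboot. This is only a sketch, assuming the tunable is present in your build; check the current ZFS tuning documentation before relying on it:

    * in /etc/system: disable the ZFS vdev cache (device-level prefetch)
    set zfs:zfs_vdev_cache_size = 0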
This makes sense for the most part (and yes, I think it should be done by the
file system, not a manual grovelling through vdev labels).
The one difference I would make is that it should not fail if the pool
_requires_ a scrub (but yes, if a scrub is in progress...). I worry about this
requirem
Hello Bill,
Tuesday, November 14, 2006, 2:31:11 PM, you wrote:
BS> On Tue, 2006-11-14 at 03:50 -0600, Chris Csanady wrote:
>> After examining the source, it clearly wipes the vdev label during a detach.
>> I suppose it does this so that the machine can't get confused at a later
>> date.
>> It wo
>On 11/11/06, Bart Smaalders <[EMAIL PROTECTED]> wrote:
>> It would seem useful to separate the user's data from the system's data
>> to prevent problems with losing mail, log file data, etc, when either
>> changing boot environments or pivoting root boot environments.
>
>I'll be more concerned ab
Neither clear nor scrub cleans up the errors on the pool. I've done this about a
dozen times in the past several days, without success.
Rainer
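For readers following along, these are the commands being referred to (the pool name is a placeholder); "zpool status -v" is also worth running, since it lists the objects behind any persistent errors:

    # zpool clear tank          # reset the pool's error counters
    # zpool scrub tank          # re-read and verify everything in the pool
    # zpool status -v tank      # show error counts and permanent errors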
>>zpool fork -p poolname -n newpoolname [devname ...]
>>
>>Create the new exported pool "newpoolname" from poolname by detaching
>>one side from each mirrored vdev, starting with the
>>device names listed on the command line. Fails if the pool does not
>>consist exclusively of
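Purely for illustration (this is a proposed command, not something that exists in current bits), usage of the concept above might look like this, with hypothetical device names:

    # zpool fork -p tank -n tankcopy c1t1d0 c2t1d0    # detach these sides into a new pool
    # zpool import tankcopy                           # the forked pool starts out exported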
On Tue, 2006-11-14 at 03:50 -0600, Chris Csanady wrote:
> After examining the source, it clearly wipes the vdev label during a detach.
> I suppose it does this so that the machine can't get confused at a later date.
> It would be nice if the detach simply renamed something, rather than
> destroying
On Tue, 2006-11-14 at 17:10 +0100, Robert Milkowski wrote:
> BS>zpool fork -p poolname -n newpoolname [devname ...]
> BS> (just a concept... )
>
> Could you please create an RFE for it and give us the id?
> I would immediately add a call record to it :)
6493559 need way to clone a pool by spli
On 11/14/06, Jeremy Teo <[EMAIL PROTECTED]> wrote:
I'm more inclined to "split" instead of "fork". ;)
I prefer "split" too since that's what most of the storage guys are
using for mirrors. Still, we are not making any progress on helping
Rainer out of his predicaments.
--
Just me,
Wire ...
On 11/11/06, Bart Smaalders <[EMAIL PROTECTED]> wrote:
It would seem useful to separate the user's data from the system's data
to prevent problems with losing mail, log file data, etc, when either
changing boot environments or pivoting root boot environments.
I'll be more concerned about the co
> BOOTING AND ACCESSING 6 SATA DRIVES USING AHCI
>
> I have installed b48 running 64-bit successfully on
> this machine using dual-core Intel Woodcrest
> processors. The hardware supports up to 6 SATA II
> drives. I have installed 6 Western Digital Raptor
> drives. Using Parallel ATA mode I can on
On 11/14/06, Bill Sommerfeld <[EMAIL PROTECTED]> wrote:
On Tue, 2006-11-14 at 03:50 -0600, Chris Csanady wrote:
> After examining the source, it clearly wipes the vdev label during a detach.
> I suppose it does this so that the machine can't get confused at a later date.
> It would be nice if the
On 11/14/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Rainer,
Tuesday, November 14, 2006, 4:43:32 AM, you wrote:
RH> Sorry for the delay...
RH> No, it doesn't. The format command shows the drive, but zpool
RH> import does not find any pools. I've also used the detached bad
RH> SATA dr