I would like to reduce the size of a zpool, but I am unsure whether it is possible or, if it is, how to go about doing so safely. The set-up:
1) 75GB SATA internal drive 1
/dev/dsk/c1d0s0 is in use for live upgrade /.
/dev/dsk/c1d0s1 is currently used by swap.
/dev/dsk/c1d0s3 is currently mounted on
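For what it's worth, a zpool cannot be shrunk in place; the usual workaround is to migrate the data into a new, smaller pool with zfs send/recv and then destroy the original. A hedged sketch (the pool names `tank` and `tank2` and the snapshot name are hypothetical, and `zfs send -R` is only available on newer builds):

```
# Assumption: a new, smaller pool 'tank2' has already been created.
zfs snapshot -r tank@migrate                      # recursive snapshot of the whole pool
zfs send -R tank@migrate | zfs recv -F -d tank2   # replicate all datasets into the new pool
# Only after verifying the copy in tank2:
zpool destroy tank
```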
I think I have run into this bug, 6560174, with a firewire drive. Running build 64a, I formatted the entire external firewire drive as a zpool. It worked fine with smaller zfs send operations. A zfs send of a 10GB filesystem then seemed to hang (though zfs send gives no notice of progress).
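As an aside, since zfs send itself prints nothing, one way to distinguish a hang from slow progress is to pipe the stream through a byte counter. A sketch, assuming a hypothetical snapshot `tank/fs@snap` and that GNU dd (or pv, if installed) is available:

```
# Route the stream through dd so there is something to interrogate.
zfs send tank/fs@snap | dd bs=1024k | zfs recv backup/fs

# In another terminal, GNU dd prints records/bytes copied so far on SIGUSR1:
pkill -USR1 dd
```

If the byte count stops advancing between signals, the transfer has genuinely stalled rather than just being quiet.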
> And 6560174 might be a duplicate of 6445725
I see what you mean. Unfortunately there does not appear to be a work-around.
It is beginning to sound as if firewire drives are not a safe option for
backup. That is unfortunate when you have an Ultra 20 with only 2 disks.
Is there a way to destroy the pool?
> Nope, no work-around.
OK. Then I have 3 questions:
1) How do I destroy the pool that was on the firewire drive? (So that zfs stops
complaining about it)
2) How can I reformat the firewire drive? Does this need to be done on a
non-Solaris OS?
3) Can your code diffs be integrated into the O
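On the first two questions, a possible approach (hedged; whether it works depends on the state of the sbp2/scsa1394 driver, and the device name below is hypothetical) is to force-destroy the pool and, if that fails, overwrite the ZFS labels so zfs forgets the device:

```
zpool destroy -f fitz      # force-destroy, if the pool can still be opened

# If the pool cannot be opened at all, zeroing the label areas makes
# zfs stop complaining. ZFS keeps two labels at the start of the device
# and two at the end; this only wipes the front pair.
dd if=/dev/zero of=/dev/rdsk/c3t0d0s0 bs=1024k count=10

format -e                  # then relabel/repartition the drive from Solaris
```

Reformatting should not require a non-Solaris OS; format(1M) with the -e (expert) flag can relabel the drive, assuming the driver can still talk to it.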
Thank you very much for this input. I eventually upgraded to snv_69 and did the
ON build of 69 with your patch. I copied the patched kernels over and have now
re-imported the defunct pool. The pool is working after a quick 'resilvering'.
Thanks very much!
This message posted from opensolaris.org
This bug is still not integrated? To upgrade to a community release I still
have to patch and compile the kernel? How can this bug fix be integrated with
the code?
Regarding the patches for this bug (6445725): I applied them to a nightly build
of snv_77, where they now appear to reside in usr/src/uts/intel/sbp2/debug64
rather than obj64 after a successful build. I then copied them over the kernels
in the community release of b77.
usr/src/uts/intel/scsa1394/
boot back into Solaris 10.
They are all related to the NVIDIA driver, gfxp, from what I remember from two
weeks ago. I am on an Ultra 20.
thanks,
aric
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
figure out how to remount the second drive c2d0s7 which is formatted
as ZFS. I have not created any ZFS filesystems on the c1d0 root disk yet, if
that matters.
Thanks for any assistance!
aric
I figured this out; it was way too simple:
zpool import fitz
thanks,
On Thursday, September 14, 2006, at 12:16PM, Aric Gregson <[EMAIL PROTECTED]>
wrote:
>I was running solaris 10 6/06 with latest kernel patch on ultra 20 (x86) with
>two internal disks, the root with the OS (c1d0
hat I should have just imported
it once in the new BE. How can I now solve this issue? (BTW, attempting
to boot back into s10u2, the original BE, results in a kernel panic, so
I cannot go back).
thanks,
aric
--On September 21, 2006 10:01:28 AM -0700 Haik Aftandilian
<[EMAIL PROTECTED]>
ot currently
mounted
zpool import --> no pools available to import
zpool import -d /fitz --> no pools available to import
thanks,
aric
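One thing worth checking here (an observation, not a confirmed diagnosis): zpool import -d expects a directory containing device nodes, not the pool's old mountpoint, and it defaults to /dev/dsk. If /fitz was where the pool used to be mounted, a sketch of what may work instead:

```
zpool import                    # scans /dev/dsk by default and lists importable pools
zpool import -d /dev/dsk fitz   # same scan, with the pool named explicitly
zpool import -D                 # also lists pools that were destroyed, if any
```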
I believe I am experiencing a similar, but more severe issue and I do
not know how to resolve it. I use