Nope. I get "no pools available to import". I think that detaching the drive
cleared any pool information/headers on the drive, which is why I can't figure
out a way to get the data/pool back.
There is some new data on the SATA drives, but I've also kept a copy of it
elsewhere. I don't mind los
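For reference, these are the sorts of import attempts usually worth trying in
this situation (the pool name and device paths below are only placeholders):

    # Scan attached devices for importable pools, including destroyed ones
    zpool import
    zpool import -D

    # Point the search at a specific device directory
    zpool import -d /dev/dsk

    # Force an import by name (or by the numeric pool GUID, if one is listed)
    zpool import -f tank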
Rainer Heilke wrote:
Greetings, all.
I put myself into a bit of a predicament, and I'm hoping there's a way out.
I had a drive (EIDE) in a ZFS mirror die on me. Not a big deal, right? Well, I
bought two SATA drives to build a new mirror. Since they were about the same
size (I wanted bigger dr
So, from the deafening silence, am I to assume there's no way to tell ZFS that
the EIDE drive was a zpool, and pull it into a new pool in a manner that I can
(once again) see the data that's on the drive? :-(
Rainer
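It may also be worth checking whether the ZFS labels on the old EIDE disk are
really gone before giving up; a quick check, with a placeholder device path:

    # Print the ZFS vdev labels; if pool/vdev GUIDs still show up,
    # the on-disk pool metadata has not actually been wiped.
    zdb -l /dev/dsk/c0d0s0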
Lori Alt wrote:
Torrey McMahon wrote:
Jason King wrote:
Anxiously anticipating the ability to boot off zfs, I know there's
been some talk about leveraging some of the snapshotting/cloning
features in conjunction with upgrades and patches.
What I am really hoping for is the ability to clone /
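The root filesystem itself will have to wait for ZFS boot, but the basic
snapshot/clone pattern already works for ordinary datasets today; a minimal
sketch with made-up dataset names:

    # Checkpoint a dataset before patching and keep a writable clone around
    zfs snapshot tank/zone1@pre-patch
    zfs clone tank/zone1@pre-patch tank/zone1-fallback

    # If the patch misbehaves, roll the live dataset back instead
    zfs rollback tank/zone1@pre-patch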
This is CR: 4894692 caching data in heap inflates crash dump
I have a fix which I am testing now. It still needs review from
Matt/Mark before it's eligible for putback, though.
-j
On Fri, Nov 10, 2006 at 02:40:40PM -0800, Thomas Maier-Komor wrote:
> Hi,
>
> I'm not sure if this is the right f
You're right. A bug has already been raised for this:
4894692 caching data in heap inflates crash dump
Thomas Maier-Komor wrote On 11/10/06 15:40,:
Hi,
I'm not sure if this is the right forum, but I guess this topic will be bounced
in the right direction from here.
With ZFS using as much
On Fri, 2006-11-10 at 14:40 -0800, Thomas Maier-Komor wrote:
> Might it be possible to add an extension that would make it possible,
> to support dumping without the whole ZFS cache? I guess this would
> make kernel live dumps smaller again, as they used to be...
It's just a bug:
4894692 caching
Hi,
I'm not sure if this is the right forum, but I guess this topic will be bounced
in the right direction from here.
With ZFS using as much physical memory as it can get, dumps and livedumps via
'savecore -L' are huge in size. I just tested it on my workstation and got a
1.8G vmcore file,
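Until that fix integrates, one possible workaround is to cap the ARC so less
cached data is sitting in the heap at dump time; this assumes a build where
zfs_arc_max is tunable, and the value below is only an example:

    # /etc/system -- limit the ZFS ARC to 512 MB (value in bytes)
    set zfs:zfs_arc_max = 0x20000000

    # After a reboot, take another live dump and compare sizes
    savecore -L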
Robert Milkowski wrote:
Also, a scrub can consume all the CPU power on smaller and older machines, and
that's not always what I would like.
REP> The big question, though, is "10% of what?" User CPU? iops?
AH> Probably N% of I/O Ops/Second would work well.
Or if 100% means full speed, then 1
LingBo Tang wrote:
Hi all,
Like inotify on Linux, is there a similar mechanism in Solaris for ZFS?
I think this functionality would be helpful for a desktop search engine.
I know one engineer at Sun is working on a "file event monitor", which
will provide some information about file events, but is not for
search
flama wrote:
Hi people, is it possible to detach a device from a striped ZFS pool without
destroying the pool? ZFS is similar to doms in Tru64, which has a way to
detach a device from a stripe and reallocate the space of the datasets
onto the free disks.
No. Currently ZFS can only replace or add disks. It is not yet
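In other words, growth only goes one way for now; a sketch with placeholder
pool and device names:

    # A disk can be added to the stripe...
    zpool add tank c2t0d0

    # ...or swapped for another disk, but there is no way to remove a
    # data disk from a striped pool yet.
    zpool replace tank c1t0d0 c3t0d0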
>I'd appreciate it if only people who have made changes to the ZFS
>codebase found in opensolaris respond further to this thread.
Well. I haven't made changes, but I can read code.
When replacing a device, ZFS internally takes the device being replaced and
creates a mirror between the old and n
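In command terms (device names are placeholders), that temporary mirror is
visible while the resilver runs:

    # Start the replacement; ZFS builds an implicit mirror of old + new
    zpool replace tank c1t0d0 c2t0d0

    # The status output shows a "replacing" vdev until the resilver
    # completes, at which point the old disk is detached automatically.
    zpool status tank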
>The reason I want to use up the inodes is that we need to test the
>behavior when both blocks and inodes are used up. If we only
>fill up the blocks, creating an empty file still succeeds.
Pretty much the only way to tell if you've used up all the space available for
file nodes is to actua
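A brute-force sketch of that test, assuming a scratch directory on the
filesystem under test:

    # Keep creating empty files until the filesystem refuses; on UFS this
    # exhausts inodes, on ZFS it only stops when space runs out.
    cd /testfs/scratch
    i=0
    while touch "f$i" 2>/dev/null; do
        i=$((i + 1))
    done
    echo "stopped after $i files"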
Hello Al,
Friday, November 10, 2006, 2:21:38 PM, you wrote:
AH> On Thu, 9 Nov 2006, Robert Milkowski wrote:
>> Hello Richard,
>>
>> Tuesday, November 7, 2006, 5:19:07 PM, you wrote:
>>
>> REP> Robert Milkowski wrote:
>> >> Saturday, November 4, 2006, 12:46:05 AM, you wrote:
>> >> REP> Incidental
On Thu, 2006-11-09 at 21:19 -0800, Erblichs wrote:
> Bill, Sommerfeld, Sorry,
>
> However, I am trying to explain what I think is
> happening on your system and why I consider this
> normal.
I'm not interested in speculation. Please do not respond to this
message.
> To c
On Thu, 9 Nov 2006, Erblichs wrote:
>
> Bill, Sommerfeld, Sorry,
>
> However, I am trying to explain what I think is
> happening on your system and why I consider this
> normal.
>
> Most of the reads/FS "replace" are normally
> at the block l
On 10 November, 2006 - Sanjeev Bagewadi sent me these 3,5K bytes:
> Comments in line...
>
> Neil Perrin wrote:
>
> 1. DNLC-through-ZFS doesn't seem to listen to ncsize.
>
> The filesystem currently has ~550k inodes and large portions of it are
> frequently looked over with rsync
On Thu, 9 Nov 2006, Robert Milkowski wrote:
> Hello Richard,
>
> Tuesday, November 7, 2006, 5:19:07 PM, you wrote:
>
> REP> Robert Milkowski wrote:
> >> Saturday, November 4, 2006, 12:46:05 AM, you wrote:
> >> REP> Incidentally, since ZFS schedules the resync iops itself, then it can
> >> REP> rea
Comments in line...
Neil Perrin wrote:
1. DNLC-through-ZFS doesn't seem to listen to ncsize.
The filesystem currently has ~550k inodes and large portions of it are
frequently looked over with rsync (over NFS). mdb said ncsize was about
68k and vmstat -s said we had a hit rate of ~30%, so I se
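For reference, this is roughly the checking and tuning being described (the
ncsize value is only an example, and raising it only helps if the ZFS-side
DNLC actually honours it, which is the open question here):

    # Current DNLC size and lookup statistics
    echo ncsize/D | mdb -k
    vmstat -s | grep 'name lookups'

    # Candidate /etc/system tuning; takes effect after a reboot
    set ncsize=262144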
On 10 November, 2006 - John Cui sent me these 1,6K bytes:
> Thanks to Anton, Robert, and Mark for your replies. Your answers verified
> my observation. ;-)
>
> The reason I want to use up the inodes is that we need to test the
> behavior when both blocks and inodes are used up. If we only fill
Richard!
ZFS fans,
Recalling our conversation about hot-plug and hot-swap terminology and use,
I'm afraid to say that CR 6483250 has been closed as will-not-fix. No
explanation was given. If you feel strongly about this, please open another
CR and pile on.
*Change Request ID*: 6483250
*Synop