Hello,
while testing some code changes, I managed to trip an assertion failure while
doing a zfs create.
My zpool is now invulnerable to destruction. :(
bash-3.00# zpool destroy -f test_undo
internal error: unexpected error 0 at line 298 of ../common/libzfs_dataset.c
bash-3.00# zpool status
pool: test_undo
Jeremy Teo wrote:
How can I destroy this pool so I can use the disk for a new pool?
Thanks! :)
dd if=/dev/zero of=/dev/dsk/c0d1s1
dd if=/dev/zero of=/dev/dsk/c0d1s0
that should do it.
--
Darren J Moffat
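A rough sketch of reusing the slice after that wipe; the test_new pool name is
made up, and later ZFS releases also grew a zpool labelclear subcommand for the
wipe step:

# sketch: check whether the old pool still shows up anywhere, then
# build a fresh pool on the slice (-f overrides leftover on-disk state)
zpool status
zpool create -f test_new c0d1s0
zpool status test_new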
On Thu, 2006-05-18 at 22:05 +0800, Jeremy Teo wrote:
> My zpool is now invulnerable to destruction. :(
Nifty - does that mean your disk is also invulnerable to hardware
errors? [ as in, your typical superhero who gets endowed with special
abilities due to a failed radiation experiment ;-) ]
Sorry to revive such an old thread.. but I'm struggling here.
I really want to use zfs. Fssnap, SVM, etc all have drawbacks. But I work for a
University, where everyone has a quota. I'd literally have to create > 10K
partitions. Is that really your intention? Of course, backups become a huge
pain.
On Thu, May 18, 2006 at 11:42:58AM -0700, Charlie wrote:
> Sorry to revive such an old thread.. but I'm struggling here.
>
> I really want to use zfs. Fssnap, SVM, etc all have drawbacks. But I
> work for a University, where everyone has a quota. I'd literally have
> to create > 10K partitions. Is that really your intention?
> Why can't we just have user quotas in zfs? :)
+1 to that. I support a couple environments with group/user quotas that cannot
move to ZFS since they serve brain-dead apps that read/write from a single
directory.
I also agree that using even a few hundred mountpoints is more tedious than
using one.
Eric Schrock wrote:
> On Thu, May 18, 2006 at 11:42:58AM -0700, Charlie wrote:
>> to create > 10K partitions. Is that really your intention?
>
> Yes. You'd group them all under a single filesystem in the hierarchy,
> allowing you to manage NFS share options, compression, and more from
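A rough sketch of the hierarchy Eric is describing, assuming a pool called
tank and made-up user names; shared settings sit on the parent and are
inherited, so each user filesystem only needs its quota:

zpool create tank c1t0d0
zfs create tank/home
zfs set sharenfs=on tank/home
zfs set compression=on tank/home

# per-user filesystems inherit the NFS and compression settings
for u in alice bob carol; do
    zfs create tank/home/$u
    zfs set quota=5G tank/home/$u
done

At 10K users the loop simply runs over the account list; the share and
compression settings still live in one place on the parent.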
On Thu, 2006-05-18 at 12:12 -0700, Eric Schrock wrote:
> On Thu, May 18, 2006 at 11:42:58AM -0700, Charlie wrote:
> > Sorry to revive such an old thread.. but I'm struggling here.
> >
> > I really want to use zfs. Fssnap, SVM, etc all have drawbacks. But I
> > work for a University, where everyone has a quota.
On Thu, May 18, 2006 at 02:23:55PM -0600, Gregory Shaw wrote:
> I'd agree except for backups. If the pools are going to grow beyond a
> reasonable-to-backup and reasonable-to-restore threshold (measured by
> the backup window), it would be practical to break it into smaller
> pools.
Speaking of backups
On Thu, May 18, 2006 at 12:46:28PM -0700, Charlie wrote:
> Traditional (amanda). I'm not seeing a way to dump zfs file systems to
> tape without resorting to 'zfs send' being piped through gtar or
> something. Even then, the only thing I could restore was an entire file
> system. (We frequently restore individual files.)
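For comparison, a sketch of the send/receive route under discussion, using the
same invented dataset names and a made-up backup path; it also shows the
restriction Charlie is running into, since the stream only comes back as a
whole filesystem:

zfs snapshot tank/home/alice@monday
zfs send tank/home/alice@monday > /backup/alice-monday.zfs

# restoring: the stream is received as an entire filesystem,
# not as individual files
zfs create tank/restore
zfs receive tank/restore/alice < /backup/alice-monday.zfs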
On 5/18/06, Gregory Shaw <[EMAIL PROTECTED]> wrote:
On Thu, 2006-05-18 at 12:12 -0700, Eric Schrock wrote:
> On Thu, May 18, 2006 at 11:42:58AM -0700, Charlie wrote:
> > Sorry to revive such an old thread.. but I'm struggling here.
> >
> > I really want to use zfs. Fssnap, SVM, etc all have drawbacks.
On Thu, May 18, 2006 at 12:46:28PM -0700, Charlie wrote:
> Eric Schrock wrote:
> > Using traditional tools or ZFS send/receive?
>
> Traditional (amanda). I'm not seeing a way to dump zfs file systems to
> tape without resorting to 'zfs send' being piped through gtar or
> something. Even then, the only thing I could restore was an entire file
> system.
Bill Moore wrote:
> On Thu, May 18, 2006 at 12:46:28PM -0700, Charlie wrote:
>> Eric Schrock wrote:
>>> Using traditional tools or ZFS send/receive?
>> Traditional (amanda). I'm not seeing a way to dump zfs file systems to
>> tape without resorting to 'zfs send' being piped through gtar or
>> something.
On Thu, 2006-05-18 at 16:43 -0500, James Dickens wrote:
> On 5/18/06, Gregory Shaw <[EMAIL PROTECTED]> wrote:
> > On Thu, 2006-05-18 at 12:12 -0700, Eric Schrock wrote:
> > > On Thu, May 18, 2006 at 11:42:58AM -0700, Charlie wrote:
> > > > Sorry to revive such an old thread.. but I'm struggling here.
On the topic of ZFS snapshots:
does the snapshot just capture the changed _blocks_, or does it
effectively copy the entire file if any block has changed?
That is, assuming that the snapshot (destination) stays inside the same
pool space.
-Erik
On Thu, May 18, 2006 at 03:41:13PM -0700, Erik Trimble wrote:
> On the topic of ZFS snapshots:
>
> does the snapshot just capture the changed _blocks_, or does it
> effectively copy the entire file if any block has changed?
Incremental sends capture changed blocks.
Snapshots capture all of the blocks as they were when the snapshot was
taken, but unchanged blocks are shared with the live filesystem rather
than copied.
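A small experiment (same invented names as above, only a sketch) makes the
distinction concrete: rewrite one 128K block of an existing large file after a
snapshot and only about that much space gets charged to the snapshot, and an
incremental send then carries just the changed blocks:

zfs snapshot tank/home/alice@before

# overwrite a single 128K block in place in an existing large file
dd if=/dev/urandom of=/tank/home/alice/bigfile \
    bs=128k count=1 conv=notrunc

# USED for @before grows by roughly one block, not the whole file
zfs list -t snapshot -o name,used,referenced

# an incremental send between two snapshots carries only the changed blocks
zfs snapshot tank/home/alice@after
zfs send -i tank/home/alice@before tank/home/alice@after \
    > /backup/alice-incr.zfs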
Just piqued my interest on this one -
How would we enforce quotas of sorts in large filesystems that are
shared? I can see times when I might want lots of users to use the same
directory (and thus, same filesystem) but still want to limit the amount
of space each user can consume.
Thoughts?
Nat
Hello Roch,
Monday, May 15, 2006, 3:23:14 PM, you wrote:
RBPE> The question put forth is whether the ZFS 128K blocksize is sufficient
RBPE> to saturate a regular disk. There is a great body of evidence showing
RBPE> that bigger write sizes and a matching large FS clustersize lead
RBPE> to
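Not part of the quoted mail, but the knob in question is the per-dataset
recordsize property, which caps the file block size ZFS will use (dataset
names here are assumed):

zfs get recordsize tank/home            # defaults to 128K
zfs set recordsize=8K tank/home/db      # e.g. to match a database page size
# the new value only applies to files written after the change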
On Fri, 2006-05-19 at 10:18 +1000, Nathan Kroenert wrote:
> Just piqued my interest on this one -
>
> How would we enforce quotas of sorts in large filesystems that are
> shared? I can see times when I might want lots of users to use the same
> directory (and thus, same filesystem) but still want to limit the amount
> of space each user can consume.
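This thread predates them, but ZFS later gained per-user and per-group quota
properties that address exactly this shared-filesystem case; a sketch with
made-up names, valid only on much newer pool versions:

zfs set userquota@alice=2G tank/shared
zfs set groupquota@students=50G tank/shared
zfs userspace tank/shared     # per-user usage against those quotas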
Since it's not exactly clear what you did with SVM I am assuming the
following:
You had a file system on top of the mirror and there was some I/O
occurring to the mirror. The *only* time SVM puts a device into
maintenance is when we receive an EIO from the underlying device. So,
in case
On Thu, May 18, 2006 at 11:40:53PM -0600, Sanjay Nadkarni wrote:
> Since it's not exactly clear what you did with SVM I am assuming the
> following:
>
> You had a file system on top of the mirror and there was some I/O
> occurring to the mirror. The *only* time, SVM puts a device into
> maintenance is when we receive an EIO from the underlying device.
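On the SVM side, a hedged sketch of the usual check-and-recover sequence when
a submirror component has gone into maintenance after such an EIO (the
metadevice and slice names are invented):

metastat d10                   # failed component reports "Needs maintenance"
metadb                         # sanity-check the state database replicas
metareplace -e d10 c0t1d0s0    # re-enable the component; SVM resyncs it
                               # from the healthy side of the mirror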