On Mon, May 12, 2008 at 06:44:39PM +0200, Ralf Bertling wrote:
> ...you should be able to "simulate" a scrub on the latest data by using
>   zfs send > /dev/null
> Since the primary purpose is to verify latent bugs and to have zfs
> auto-correct them, simply reading all data would be sufficient
I have a test bed S10U5 system running under vmware ESX that has a weird
problem.
I have a single virtual disk, with some slices allocated as UFS filesystems
for the operating system, and s7 as a ZFS pool.
Whenever I reboot, the pool fails to open:
May 8 17:32:30 niblet fmd: [ID 441519 daemon.e
>From my understanding, when you delete all the snapshots that reference the
>files that have already been deleted from the file system(s), then all the
>space will be returned to the pool.
So try deleting the snapshots that you no longer need. Obviously, be sure that
you don't need any files r
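As a rough illustration of the above (dataset and snapshot names are invented
for the example):

  # list snapshots and the space each one is holding
  zfs list -t snapshot -o name,used,referenced

  # destroy a snapshot that is no longer needed; blocks that only it
  # still referenced are returned to the pool
  zfs destroy tank/share@2008-05-01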
This is a common problem that we run into and perhaps there's a good
explanation of why it can't be done. Often, there will be a large set of data,
say 200GB or so that gets written to a ZFS share, snapshotted and then deleted
a few days later. As I'm sure you know, none of the space is returned
Christine Tran wrote:
> Hi,
>
> If I delegate a dataset to a zone, and inside the zone the zone admin
> sets an attribute on that dataset, where is that data kept? More to the
> point, at what level is that data kept? In the zone? Or on the pool,
> with the zone having privilege to modify that
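For what it's worth, a minimal sketch of the setup being asked about (the zone
and dataset names are made up); as far as I understand, properties set this way
are stored with the dataset in the pool itself, not in the zone's own
configuration:

  # global zone: delegate a dataset to the zone
  zonecfg -z testzone "add dataset; set name=tank/zones/testzone/data; end"

  # inside the zone: the zone administrator can now set properties on it
  zfs set compression=on tank/zones/testzone/data
  zfs get compression tank/zones/testzone/data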
sean walmsley wrote:
> Some additional information: I should have noted that the client could not
> see the thumper1 shares via the automounter.
>
> I've played around with this setup a bit more and it appears that I can
> manually mount both filesystems (e.g. on /tmp/troot and /tmp/tpool), so th
Some additional information: I should have noted that the client could not see
the thumper1 shares via the automounter.
I've played around with this setup a bit more and it appears that I can
manually mount both filesystems (e.g. on /tmp/troot and /tmp/tpool), so the ZFS
and UFS volumes are bei
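For reference, the sort of manual mounts being described, plus a quick way to
see what the server is actually sharing (the export paths below are guesses):

  # manual NFS mounts that work when the automounter does not
  mount -F nfs thumper1:/troot /tmp/troot
  mount -F nfs thumper1:/tpool /tmp/tpool

  # list what thumper1 is exporting
  showmount -e thumper1
  dfshares thumper1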
Hi all,
until the scrub problem (http://bugs.opensolaris.org/view_bug.do?bug_id=6343667)
is fixed, you should be able to "simulate" a scrub on the latest data by using
zfs send > /dev/null
Since the primary purpose is to verify latent bugs and to have zfs
auto-correct them, simply reading all data would be sufficient.
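A sketch of what that looks like in practice (pool and snapshot names are just
examples). zfs send has to read and checksum-verify every block the snapshot
references, so bad blocks should be repaired from redundancy much as a scrub
would do, though only for data reachable from that snapshot:

  zfs snapshot tank/data@scrubcheck
  zfs send tank/data@scrubcheck > /dev/null
  zfs destroy tank/data@scrubcheck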
Andy Lubel wrote:
> Paul B. Henson wrote:
>> On Thu, 8 May 2008, Mark Shellenbaum wrote:
>>> we already have the ability to allow users to create/destroy snapshots
>>> over NFS. Look at the ZFS delegated administration model. If all you
>>> want is snapshot creation/destruction then you w
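A sketch of the delegation being referred to (user and dataset names are
examples):

  # let a user create and destroy snapshots of their filesystem
  zfs allow webuser snapshot,destroy,mount pool/home/webuser

  # show what has been delegated
  zfs allow pool/home/webuser

With those permissions in place, I believe the user can also create and destroy
snapshots over NFS by running mkdir and rmdir in the filesystem's .zfs/snapshot
directory.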
I just upgraded to Sol 10 U5 and I was hoping that gzip compression would be
there, but when I run zpool upgrade it only shows version 4:
[10:05:36] [EMAIL PROTECTED]: /export/home > zpool upgrade
This system is currently running ZFS version 4.
Do you know when Version 5 will be included in Solaris 10? are
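For reference, a sketch of how to check this and upgrade once bits that support
version 5 are installed (pool and dataset names are invented):

  # show the pool's on-disk version and the versions the installed software supports
  zpool upgrade -v

  # gzip compression came in with pool version 5; once the software supports it:
  zpool upgrade tank
  zfs set compression=gzip tank/data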
Yeah, it's a *very* old bug. The main reason we put our ZFS rollout on hold
was concerns over reliability with such an old (and imo critical) bug still
present in the system.