hey mike/cindy,
i've gone ahead and filed a zfs rfe on this functionality:
6915127 need full support for zfs pools on files
implementing this rfe is a requirement for supporting encapsulated
zones on shared storage.
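just to make the use case concrete, this is the sort of thing we want
to be fully supported (file and pool names are made up):

  # back a pool with a plain file instead of a disk
  mkfile 1g /export/zonepool.img
  zpool create zonepool /export/zonepool.img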
ed
On Thu, Jan 07, 2010 at 03:26:17PM -0700, Cindy Swearingen wrote:
> H
hey richard,
so i just got a bunch of zfs checksum errors after replacing some
mirrored disks on my desktop (u27). i originally blamed the new disks,
until i saw this thread, at which point i started digging in bugster. i
found the following related bugs (i'm not sure which one adam was
referring
hey anil,
given that things work, i'd recommend leaving them alone.
if you really insist on cleaning things up aesthetically,
then you'll need to do multiple zfs operations and you'll need to
shut down the zones.
assuming you haven't cloned any zones (because if you did, that
complicates things
hey all,
so recently i wrote some zones code to manage zones on zfs datasets.
the code i wrote did things like rename snapshots and promote
filesystems. while doing this work, i found a few zfs behaviours that,
if changed, could greatly simplify my work.
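to give a concrete (if made up) example of the kind of sequence my
code does:

  # rename a snapshot out of the way, then promote a clone of it
  zfs rename rpool/zones/z1@SUNWzone1 rpool/zones/z1@pre_clone
  zfs promote rpool/zones/z2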
the primary issue i hit was that when ren
On Thu, Apr 23, 2009 at 09:59:33AM -0600, Matthew Ahrens wrote:
> Ed,
>
> "zfs destroy [-r] -p" sounds great.
>
> I'm not a big fan of the "-t template". Do you have conflicting snapshot
> names due to the way your (zones) software works, or are you concerned
> about sysadmins creating these confl
On Thu, Apr 23, 2009 at 11:31:07AM -0500, Nicolas Williams wrote:
> On Thu, Apr 23, 2009 at 09:59:33AM -0600, Matthew Ahrens wrote:
> > "zfs destroy [-r] -p" sounds great.
> >
> > I'm not a big fan of the "-t template". Do you have conflicting snapshot
> > names due to the way your (zones) softwar
hey all,
in both nevada and opensolaris, the zones infrastructure tries to
leverage zfs wherever possible. we take advantage of snapshotting and
cloning for things like zone cloning and zone BE management. because of
this, we've recently run into multiple scenarios where a zoneadm
uninstall fai
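for background, when a zone lives on its own dataset, zone cloning is
basically a snapshot plus a clone; roughly (dataset names are made up):

  # snapshot the source zone's dataset, then clone it for the new zone
  zfs snapshot rpool/zones/z1@SUNWzone1
  zfs clone rpool/zones/z1@SUNWzone1 rpool/zones/z2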
On Sun, Oct 14, 2007 at 09:37:42PM -0700, Matthew Ahrens wrote:
> Edward Pilatowicz wrote:
> >hey all,
> >so i'm trying to mirror the contents of one zpool to another
> >using zfs send / receive while maintaining all snapshots and clones.
>
> You will enjoy th
hey all,
so i'm trying to mirror the contents of one zpool to another
using zfs send / receive while maintaining all snapshots and clones.
essentially i'm taking a recursive snapshot. then i'm mirroring
the oldest snapshots first and working my way forward. to deal
with clones i have a hack that
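the basic loop is roughly this (pool, filesystem, and snapshot names
are made up):

  # recursively snapshot the source pool
  zfs snapshot -r tank@mirror1
  # full send of the oldest snapshot, then walk forward incrementally
  zfs send tank/fs@old | zfs receive backup/fs
  zfs send -i tank/fs@old tank/fs@mirror1 | zfs receive backup/fs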
hey swetha,
i don't think there is any easy answer for you here.
i'd recommend watching all device operations (open, read, write, ioctl,
strategy, prop_op, etc) that happen to the ramdisk device when you don't
use your layered driver, and then again when you do. then you could
compare the two to
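dtrace is probably the easiest way to do that; something like this
would give you a first cut (i'm assuming the driver's module name is
'ramdisk'):

  # count every function entered in the ramdisk driver
  dtrace -n 'fbt:ramdisk::entry { @[probefunc] = count(); }'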
i've seen this ldi_get_size() failure before and it usually occurs on
drivers that don't implement their prop_op(9E) entry point correctly
or that don't implement the dynamic [Nn]blocks/[Ss]ize properties correctly.
what does your layered driver do in its prop_op(9E) entry point?
also, what driver
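as a quick sanity check, prtconf should show whether the node is
exporting these properties (device path is made up, and i haven't
verified this for every driver):

  # look for Size/Nblocks among the device's properties
  prtconf -v /dev/rdsk/c0t0d0s2 | grep -i -e size -e nblocks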
On Wed, Dec 06, 2006 at 07:28:53AM -0700, Jim Davis wrote:
> We have two aging Netapp filers and can't afford to buy new Netapp gear,
> so we've been looking with a lot of interest at building NFS fileservers
> running ZFS as a possible future approach. Two issues have come up in the
> discussion
ound made available?
>
> Thanks again!
>
>
> Dave Radden
> x74861
>
> ---
>
> Edward Pilatowicz wrote On 10/31/06 18:53,:
>
> >if you're running solaris 10 or an early nevada build then it's
> >possible you're hitting this bug (which i fixed in build 35)
if you're running solaris 10 or an early nevada build then it's
possible you're hitting this bug (which i fixed in build 35):
4976415 devfsadmd for zones could be smarter when major numbers change
if you're running a recent nevada build then this could be a new issue.
so what version of sola
zfs depends on ldi_get_size(), which depends on the device being
accessed exporting one of the properties below. i guess the
devices generated by IBMsdd and/or EMCpower don't
generate these properties.
ed
On Wed, Jul 26, 2006 at 01:53:31PM -0700, Eric Schrock wrote:
> On Wed, Jul 26, 200
zfs should work fine with disks under the control of solaris mpxio.
i don't know about any of the other multipathing solutions.
if you're trying to use a device that's controlled by another
multipathing solution, you might want to try specifying the full
path to the device, ex:
zpool creat
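for instance, with an emcpower device that would look something like
this (device name is made up):

  zpool create tank /dev/dsk/emcpower0c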
rent
filesystem".)
ed
On Wed, May 10, 2006 at 11:05:14AM -0700, Matthew Ahrens wrote:
> On Wed, May 10, 2006 at 09:10:10AM -0700, Edward Pilatowicz wrote:
> > out of curiosity, how are properties handled?
>
> I think you're confusing[*] the "clone origin filesyste
out of curiosity, how are properties handled?
for example if you have a fs with compression disabled, you snapshot
it, you clone it, and you enable compression on the clone, and then
you promote the clone. will compression be enabled on the new parent?
and what about other clones that have prope
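to make the question concrete (dataset names are made up):

  zfs snapshot tank/a@s1
  zfs clone tank/a@s1 tank/b
  zfs set compression=on tank/b
  zfs promote tank/b
  # which value does the promoted parent report here?
  zfs get compression tank/a tank/b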
On Wed, May 03, 2006 at 03:05:25PM -0700, Eric Schrock wrote:
> On Wed, May 03, 2006 at 02:47:57PM -0700, eric kustarz wrote:
> > Jason Schroeder wrote:
> >
> > >eric kustarz wrote:
> > >
> > >>The following case is about to go to PSARC. Comments are welcome.
> > >>
> > >>eric
> > >>
> > >To piggy