Thanks to everyone for their help! Yes, dtrace did help, and I found that in my
layered driver, the prop_op entry point had an error in setting the [Ss]ize
dynamic property, and apparently that's what ZFS looks for, not just Nblocks!
What took me so long in getting to this error was that the drive
Hi Shweta,
The first thing to do is look for all kernel functions that return that errno
(25, I think) during your test.
dtrace -n 'fbt:::return/arg1 == 25/{@[probefunc] = count()}'
More verbose but also useful:
dtrace -n 'fbt:::return/arg1 == 25/{@[stack(20)] = count()}'
It's a cat
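For example, either of the one-liners above can be run directly against the
failing command; a sketch, reusing the pool and device names that appear later
in this thread:

dtrace -n 'fbt:::return/arg1 == 25/{@[probefunc] = count()}' \
    -c 'zpool create adsl-pool /dev/layerzfsminor1'

The aggregation prints when the zpool command exits.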
Hey Swetha,
I don't think there is any easy answer for you here.
I'd recommend watching all device operations (open, read, write, ioctl,
strategy, prop_op, etc.) that happen to the ramdisk device when you don't
use your layered driver, and then again when you do. Then you could
compare the two to
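One way to do that comparison is to count every driver entry point hit during
the create. A sketch with DTrace's fbt provider, assuming the underlying module
is named "ramdisk" and the layered module is "zfs_lyr" (per the rest of the
thread), and reusing the pool/device names used elsewhere here:

dtrace -n 'fbt:ramdisk::entry,fbt:zfs_lyr::entry{@[probemod, probefunc] = count()}' \
    -c 'zpool create adsl-pool /dev/layerzfsminor1'

Run it once against the raw ramdisk device and once through the layered device;
diffing the two aggregations shows which operations (e.g. prop_op) never reach
the underlying driver.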
I explored this a bit and found that the ldi_ioctl in my layered driver does
fail, but it fails with an "inappropriate ioctl for device" error, which the
underlying ramdisk driver's ioctl returns. So it doesn't seem like that's an
issue at all (since I know the storage pool creation is successful
With what Edward suggested, I got rid of the ldi_get_size() error by defining
the prop_op entry point appropriately.
However, the zpool create still fails - with zio_wait() returning 22 (EINVAL).
bash-3.00# dtrace -n 'fbt::ldi_get_size:entry{self->t=1;}
fbt::ldi_get_size:entry/self->t/{}
fbt::ldi_get_s
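To chase that remaining EINVAL (22), the errno-hunting one-liner suggested
earlier in the thread can be retargeted at 22; a sketch, reusing the pool and
device names from the thread:

dtrace -n 'fbt:::return/arg1 == 22/{@[stack(20)] = count()}' \
    -c 'zpool create adsl-pool /dev/layerzfsminor1'

Stacks that pass through zio_wait() or the vdev open path should show where ZFS
first rejects the device.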
Try 'trace((int)arg1);' -- 4294967295 is the unsigned representation of -1.
Adam
On Mon, May 14, 2007 at 09:57:23AM -0700, Shweta Krishnan wrote:
> Thanks Eric and Manoj.
>
> Here's what ldi_get_size() returns:
> bash-3.00# dtrace -n 'fbt::ldi_get_size:return{trace(arg1);}' -c 'zpool
> create a
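Applied to the one-liner quoted above, that suggestion becomes (a sketch,
reusing the full command from elsewhere in the thread):

dtrace -n 'fbt::ldi_get_size:return{trace((int)arg1);}' \
    -c 'zpool create adsl-pool /dev/layerzfsminor1'

which should print -1 (DDI_FAILURE) rather than 4294967295 when ldi_get_size()
fails.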
Thanks Edward.
Currently my layered driver does not implement the prop_op(9E) entry point - I
didn't realize this was necessary since my layered driver worked fine without
it when used over UFS.
My layered driver sits above a ramdisk driver.
I realized the same problem that you've mentioned whe
I've seen this ldi_get_size() failure before, and it usually occurs on
drivers that don't implement their prop_op(9E) entry point correctly
or that don't implement the dynamic [Nn]blocks/[Ss]ize property correctly.
What does your layered driver do in its prop_op(9E) entry point?
Also, what driver
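One way to check whether the layered driver's prop_op is even reached is to
watch what the layered module does while ldi_get_size() is on the stack. A
sketch, assuming the layered module is named zfs_lyr as elsewhere in the
thread:

dtrace -n '
    fbt::ldi_get_size:entry { self->in = 1; }
    fbt:zfs_lyr::entry /self->in/ { trace(probefunc); }
    fbt::ldi_get_size:return /self->in/ { trace((int)arg1); self->in = 0; }
' -c 'zpool create adsl-pool /dev/layerzfsminor1'

If no prop_op call from the layered module shows up between entry and return,
the dynamic [Nn]blocks/[Ss]ize lookup is likely failing and ldi_get_size()
returns DDI_FAILURE (-1).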
Thanks Eric and Manoj.
Here's what ldi_get_size() returns:
bash-3.00# dtrace -n 'fbt::ldi_get_size:return{trace(arg1);}' -c 'zpool create adsl-pool /dev/layerzfsminor1'
dtrace: description 'fbt::ldi_get_size:return' matched 1 probe
cannot create 'adsl-pool': invalid argument for this pool operat
On Mon, May 14, 2007 at 11:55:28AM -0500, Swetha Krishnan wrote:
> Thanks Eric and Manoj.
>
> Here's what ldi_get_size() returns:
> bash-3.00# dtrace -n 'fbt::ldi_get_size:return{trace(arg1);}' -c 'zpool create adsl-pool /dev/layerzfsminor1'
> dtrace: description 'fbt::ldi_get_size:return' mat
This is likely because ldi_get_size() is failing for your device. We've
seen this before on 3rd party devices, and have been meaning to create a
special errno (instead of EINVAL) to give a more helpful message in this
case.
- Eric
On Sun, May 13, 2007 at 11:54:45PM -0700, Shweta Krishnan wrote:
Shweta Krishnan wrote:
I ran zpool with truss, and here is the system call trace. (again, zfs_lyr is
the layered driver I am trying to use to talk to the ramdisk driver).
When I compared it to a successful zpool creation, the culprit is the last
failing ioctl
i.e. ioctl(3, ZFS_IOC_CREATE_POOL,
I ran zpool with truss, and here is the system call trace. (again, zfs_lyr is
the layered driver I am trying to use to talk to the ramdisk driver).
When I compared it to a successful zpool creation, the culprit is the last
failing ioctl
i.e. ioctl(3, ZFS_IOC_CREATE_POOL, )
I tried looking at th
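For reference, one way to capture just the ioctl traffic when reproducing this
(a sketch; /tmp/zpool.truss is only an example output file, and the pool and
device names are reused from the thread):

truss -f -t ioctl -o /tmp/zpool.truss zpool create adsl-pool /dev/layerzfsminor1

Comparing that file against a trace of a create that succeeds (e.g. directly on
the ramdisk device) makes the first failing ioctl and its errno easy to spot.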