I need to get the inode number on ZFS and I am not able to find how to get it in
the kernel at the VFS layer.
I have a vnode pointer and I am doing VTOZ to get the znode, but printing z_id
from the znode pointer
gives me deadbeef (uninitialized). Can somebody point me to how to get that?
I looked at the zfs_getattr code and it doe
Stefano Pini wrote:
> Hi guys,
> we are proposing to a customer a couple of X4500s (24 TB) used as NAS
> (i.e. NFS servers).
> Both servers will contain the same files and should be accessed by
> different clients at the same time (i.e. they should both be active)
> So we need to guarantee that both
Sure. This operation can be done with whole disks too. The disk
(new_device) should be the same size or larger than the existing disk
(device).
You can review some examples here:
http://docs.sun.com/app/docs/doc/817-2271/gcfhe?a=view
If the disks are of unequal size, then some disk space will b
On Jun 17, 2008, at 1:13 PM, dick hoogendijk wrote:
> This is about slices. Can this be done for a whole disk too? And if
> yes, do these disks have to be exactly the same size?
Indeed, it can be used on an entire disk.
Examples:
zpool create mypool c1t0d0
zpool attach mypool c1t0d0 c2t
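The second example above is cut off; for reference, a minimal sketch of the same
sequence with hypothetical device names (c1t0d0 existing, c2t0d0 new, neither
taken from the thread), followed by a status check to watch the resilver:
zpool attach mypool c1t0d0 c2t0d0
zpool status mypool
Once the resilver finishes, mypool is a two-way mirror of the two whole disks.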
On Tue, 17 Jun 2008 17:36:47 +0100
"Enda O'Connor ( Sun Micro Systems Ireland)" <[EMAIL PROTECTED]>
wrote:
> zpool attach [-f] pool device new_device
>
> Attaches new_device to an existing zpool device. The existing
> device cannot be part of a raidz configuration. If device is not
> currently pa
Hi Dale,
It worked. The prtvtoc label was already set right.
# zpool attach export c2t0d0s5 c2t2d0s5
# zpool status
pool: export
state: ONLINE
scrub: resilver completed with 0 errors on Tue Jun 17 09:36:12 2008
config:
NAME      STATE     READ WRITE CKSUM
export    ONLINE
Hi
Use zpool attach
from
http://docs.sun.com/app/docs/doc/819-2240/zpool-1m
zpool attach [-f] pool device new_device
Attaches new_device to an existing zpool device. The existing device
cannot be part
of a raidz configuration. If device is not currently part of a mirrored
configuration,
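The excerpt above is cut off; the gist (paraphrased from the zpool(1M) man page)
is that attach turns a non-mirrored device into a two-way mirror, or extends an
existing mirror by one more device. A hedged sketch, where the second device name
is a placeholder not taken from the thread:
# zpool attach export c2t0d0s5 c2t2d0s5    (single slice becomes a two-way mirror)
# zpool attach export c2t0d0s5 c3t0d0s5    (two-way mirror becomes three-way)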
On Jun 17, 2008, at 12:23 PM, Srinivas Chadalavada wrote:
> :root # zpool create export mirror c2t0d0s5 c2t0d0s5
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/dsk/c2t0d0s5 is part of active ZFS pool export. Please see
> zpool(1M).
(I presume that you meant to
Hi Dan,
I filed a bug 6715550 to fix this issue.
Thanks for reporting it--
Cindy
Dan Reiland wrote:
>>Yeah. The command line works fine. Thought it to be a
>>bit curious that there was an issue with the HTTP
>>interface. It's low priority I guess because it
>>doesn't impact the functionality re
That fixed it on my SunBlade 2500 with NV90. Thank you very much. This has been
happening on and off for many builds. The fix goes in my "How To" document.
--ron
Hi All,
I had a slice with a ZFS file system which I want to mirror. I
followed the procedure mentioned in the admin guide and I am getting this
error. Can you tell me what I did wrong?
root # zpool list
NAME     SIZE     USED     AVAIL     CAP     HEALTH     ALTROOT
export
Ok, all done :)
Check out the details in my blog. Yesterday's post covered some of the stuff I
mentioned in this thread. Today's post covers assembly/construction.
Tomorrow's post will cover "soft" tweaks (installation, configuration,
troubleshooting).
http://blog.flowbuzz.com/search/label/
On Tue, Jun 17, 2008 at 8:42 AM, Volker A. Brandt <[EMAIL PROTECTED]> wrote:
> > > I have a quite old machine with an AMD Athlon 900MHz with 640Mb of RAM
> > > serving up NFS, WebDAV locally to my house and running my webserver
> (Apache)
> > > in a Zone. For me performance is perfectly acceptabl
> > I have a quite old machine with an AMD Athlon 900MHz with 640Mb of RAM
> > serving up NFS, WebDAV locally to my house and running my webserver (Apache)
> > in a Zone. For me performance is perfectly acceptable, but this isn't an
> > interactive desktop. Not only is performance acceptable when
On Tue, Jun 17, 2008 at 5:33 AM, Darren J Moffat <[EMAIL PROTECTED]>
wrote:
> Tim wrote:
>
>> I guess I find it ridiculous you're complaining about ram when I can
>> purchase 4gb for under 50 dollars on a desktop.
>>
>
> For many people around the world US$50 is a very significant amount of
> mone
Hello all,
In a "traditional" filesystem, we have a few filesystems, but with ZFS, we can
have thousands..
The question is: "There is a command or procedure to remake the filesystems,
in a recovery from backup scenario"?
I mean, imagine that i have a ZFS pool with 1,000 filesystems, and for "s
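One possible approach, as a sketch only: assuming the backup was taken from a
recursive snapshot and the installed ZFS version supports zfs send -R, a
replication stream recreates every descendant filesystem and its properties on
receive, so the thousand filesystems do not have to be recreated by hand. The
pool, snapshot, and file names below are placeholders:
# zfs snapshot -r tank@backup
# zfs send -R tank@backup > /backup/tank.stream    (or pipe to tape/remote host)
# zfs receive -dF newpool < /backup/tank.stream    (rebuilds the filesystem tree)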
Interesting, we'll try that.
Our server with the problem has been boxed now, so I'll check the solution when
it gets on site.
Thanks ahead, anyway ;)
Thanks for your reply. I did go ahead and get the LSI based sata system. I will
be putting the machine together along with my first ever open solaris system
this coming week so I'm sure I'll be poking my head in here a lot!
I really appreciate your quick response!
Tim wrote:
> I guess I find it ridiculous you're complaining about ram when I can
> purchase 4gb for under 50 dollars on a desktop.
For many people around the world US$50 is a very significant amount of
money. That also assumes they have the money to buy (or have already
done so) a motherboard t
Hi all, Check this out:
I had created a zpool:
zpool create storage mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0 mirror c0t6d0
c2t0d0 mirror c2t1d0 c2t2d0 mirror c2t3d0 c2t4d0 mirror c2t5d0 c2t6d0
[ok no problem]
Then when I want to add a log device so I run:
zpool add storage log c0t1d0
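The rest of that message is cut off above; for reference, a hedged sketch of the
standard forms for adding a dedicated log device (device names are placeholders,
and a mirrored log is the alternative to a single one), plus a status check:
zpool add storage log c0t1d0
zpool add storage log mirror c0t1d0 c1t1d0    (mirrored-log variant instead)
zpool status storage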
Raw Device Mapping is a feature of ESX 2.5 and above which allows a guest OS to
have access to a LUN on a Fibre Channel or iSCSI SAN.
See http://www.vmware.com/pdf/esx25_rawdevicemapping.pdf for more details.
You may be able to do something similar with the raw disks under workstation
see http://www.vmwar
Hello Erik,
Monday, June 16, 2008, 9:45:13 AM, you wrote:
ET> One thing I should mention on this is that I've had _very_ bad
ET> experience with using single-LUN ZFS filesystems over FC.
ET> that is, using an external SAN box to create a single LUN, export that
ET> LUN to a FC-connected host,