Re: [zfs-discuss] dataset is busy when doing snapshot

2012-05-20 Thread Anil Jangity
Yup, that's exactly what I did last night: zoned=off, mountpoint=/some/place, mount, unmount, mountpoint=legacy, zoned=on. Thanks! On May 20, 2012, at 3:09 AM, Jim Klimov wrote: > 2012-05-20 8:18, Anil Jangity wrote: >> What causes these messages? >> >> cannot create sna
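
For reference, that sequence written out as commands (a sketch; the dataset name is taken from the original report, substitute your own):
  zfs set zoned=off zones/rani/ROOT/zbe-2      # let the global zone manage it
  zfs set mountpoint=/some/place zones/rani/ROOT/zbe-2
  zfs mount zones/rani/ROOT/zbe-2              # a mount/unmount cycle clears the busy state
  zfs unmount zones/rani/ROOT/zbe-2
  zfs set mountpoint=legacy zones/rani/ROOT/zbe-2
  zfs set zoned=on zones/rani/ROOT/zbe-2       # hand it back to the zone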

[zfs-discuss] dataset is busy when doing snapshot

2012-05-19 Thread Anil Jangity
What causes these messages?
cannot create snapshot 'zones/rani/ROOT/zbe-2@migration': dataset is busy
There are zones living in the zones pool, but none of them are running or mounted.
root@:~# zfs get -r mounted,zoned,mountpoint zones/rani
NAME  PROPERTY  VALUE

[zfs-discuss] 2.5" to 3.5" bracket for SSD

2012-01-14 Thread Anil Jangity
I have a couple of Sun/Oracle X2270 boxes and am planning to get some 2.5" Intel 320 SSDs for the rpool. Do you happen to know what kind of bracket is required to fit the 2.5" SSDs into the 3.5" slots? Thanks

[zfs-discuss] zfs upgrade

2011-01-29 Thread Anil
rather not kill the zfs process. Just curious if this is going to take a while or not. Anil
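
For anyone searching later, the commands involved (a sketch; 'tank' is a placeholder pool name):
  zpool upgrade -v     # list supported pool versions
  zpool upgrade tank   # upgrade the pool's on-disk version
  zfs upgrade -r tank  # upgrade all filesystems beneath tank
Both upgrades are metadata-only and are usually quick.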

Re: [zfs-discuss] ZFS with STK raid card w battery

2010-10-24 Thread Anil
zfs stats with the logical volume vs just plain zfs mirrors, and see. Just wondering if these controllers have any other utility for this. On Sat, Oct 23, 2010 at 10:06 PM, Erik Trimble wrote: > On 10/23/2010 8:22 PM, Anil wrote: >> >> We have Sun STK RAID cards in our x4170 s

[zfs-discuss] ZFS with STK raid card w battery

2010-10-23 Thread Anil
We have Sun STK RAID cards in our X4170 servers. These are battery-backed with 256MB of cache. What is the recommended ZFS configuration for these cards? Right now, I have created a one-to-one logical volume to disk mapping on the RAID card (one disk == one volume on the RAID card). Then, I mirror them u
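
A sketch of that layout on the ZFS side (device names hypothetical; each cXtYd0 is a single-disk logical volume exported by the card):
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0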

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-13 Thread Anil Gulecha
g to?  The > last commit to illumos-gate was 6 days ago and you're already not even > keeping it in sync..  Can you even build it yet and if so where's the > binaries? > The project is a couple weeks old. There are already webrevs for the 145 and 146 merges, and another one f

Re: [zfs-discuss] ZFS on Ubuntu

2010-08-05 Thread Anil Gulecha
Or Nexenta :) http://www.nexenta.org ~Anil On Thu, Aug 5, 2010 at 5:15 PM, Tuco wrote: >> That said, if you need ZFS right now, it's either >> FreeBSD or OpenSolaris > > Or Debian GNU/kFreeBSD ;-) > > http://tucobsd.blogspot.com/2010/08/apt-get-install-zfsutils.ht

Re: [zfs-discuss] carrying on [was: Legality and the future of zfs...]

2010-07-19 Thread Anil Gulecha
On Mon, Jul 19, 2010 at 3:31 PM, Pasi Kärkkäinen wrote: > > Upcoming Ubuntu 10.10 will use BTRFS as a default. > Though there was some discussion around this, I don't think the above is a given. The Ubuntu devs would look at the status of the project and decide closer to the relea

[zfs-discuss] NexentaStor Community edition 3.0.3 released

2010-06-15 Thread Anil Gulecha
If you are a storage solution provider, we invite you to join our growing social network at http://people.nexenta.com. -- Thanks Anil Gulecha Community Leader http://www.nexentastor.org

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-15 Thread Anil Gulecha
ins NexentaStor enterprise edition (nexenta.com) = Costs $$ - NCP underneath + closed UI + enterprise plugins ($$) ~Anil

[zfs-discuss] SSD sale on newegg

2010-04-06 Thread Anil
Seems like a nice sale on Newegg for SSD devices. Talk about choices. What are the latest recommendations for a log device? http://bit.ly/aL1dne
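
For context, attaching one as a log device is a one-liner (a sketch; pool and device names hypothetical):
  zpool add tank log c2t0d0
  zpool add tank log mirror c2t0d0 c2t1d0   # mirrored slog, the safer choice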

[zfs-discuss] NexentaStor Community edition 3.0 released

2010-03-25 Thread Anil Gulecha
been obsoleted.) If you are a storage solution provider, we invite you to join our growing social network at http://people.nexenta.com. -- Thanks Anil Gulecha Community Leader http://www.nexentastor.org

Re: [zfs-discuss] future of OpenSolaris

2010-02-23 Thread Anil Gulecha
/Tracker: http://www.nexenta.org/projects/nexenta-gate Thanks, Anil

[zfs-discuss] NexentaStor 2.2.1 Developer Edition Released

2010-01-13 Thread Anil Gulecha
/wiki/DeveloperEdition Summary of recent changes is on freshmeat at http://freshmeat.net/projects/nexentastor/ A complete list of projects (14 and growing) is at http://www.nexentastor.org/projects Nightly images are available at http://ftp.nexentastor.org/nightly/ Regards -- Anil Gulecha Community

Re: [zfs-discuss] HW raid vs ZFS

2010-01-11 Thread Anil
> ZFS will definitely benefit from battery backed RAM > on the controller > as long as the controller immediately acknowledges > cache flushes > (rather than waiting for battery-protected data to > flush to the I am a little confused by this. Do we not want the controller to ignore these cache

[zfs-discuss] HW raid vs ZFS

2010-01-11 Thread Anil
I am sure this is not the first discussion related to this... apologies for the duplication. What is the recommended way to make use of a hardware RAID controller/HBA along with ZFS? Does it make sense to do RAID5 in hardware and then RAIDZ in software, or just stick to ZFS RAIDZ and connect
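
The usual list advice is the latter: export each disk individually (JBOD or one-disk volumes) and let ZFS handle the redundancy. A sketch (device names hypothetical):
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0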

Re: [zfs-discuss] Disks and caches

2010-01-07 Thread Anil
I *am* talking about situations where physical RAM is used up. So the SSD could definitely be touched quite a bit when used as an rpool, for pages in/out.

Re: [zfs-discuss] Disks and caches

2010-01-07 Thread Anil
Also... there is talk about using those cheap disks for rpool. Isn't rpool also prone to a lot of writes, specifically when /tmp is on the SSD? What's the real reason for making those cheap SSDs an rpool rather than an L2ARC? Basically, is everyone saying that SSDs without NVRAM/capacitors/batt
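
For comparison, using such a disk as L2ARC instead is just (a sketch; names hypothetical):
  zpool add tank cache c2t0d0   # L2ARC contents are disposable, so losing a cache device is harmless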

[zfs-discuss] Disks and caches

2010-01-07 Thread Anil
After spending some time reading up on this whole deal with SSDs with "caches" and how they are prone to data loss during power failures, I need some clarification... When you guys say "write cache", do you really just mean the onboard cache (for both reads AND writes)? Or is there a separate

Re: [zfs-discuss] dedup existing data

2009-12-17 Thread Anil
If you have another partition with enough space, you could technically just do: mv src /some/other/place; mv /some/other/place src. Anyone see a problem with that? Might be the best way to get it de-duped.
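
The same idea spelled out (a sketch; paths hypothetical; note that mv across filesystems is a copy-then-delete, so verify the copy before anything is removed):
  zfs set dedup=on tank/data
  mv /tank/data/src /tank/scratch/   # rewrite the blocks elsewhere
  mv /tank/scratch/src /tank/data/   # rewrite them back, now deduped
Only writes made after dedup=on enter the dedup table, which is why the data has to be rewritten at all.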

[zfs-discuss] NexentaStor 2.2.0 Developer Edition Released

2009-11-23 Thread Anil Gulecha
s deduplication support. Stay tuned! Regards -- Anil Gulecha Community Lead, NexentaStor.org

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Anil
I haven't tried this, but this must be very easy with dtrace. How come no one has mentioned it yet? :) You would have to monitor some specific syscalls...
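
Something like this classic one-liner would be a starting point (a sketch; it traces every open(2) system-wide, so you'd filter on your pool's mountpoint):
  dtrace -n 'syscall::open*:entry { printf("%s %s\n", execname, copyinstr(arg0)); }'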

Re: [zfs-discuss] zfs cksum calculation

2009-09-11 Thread P. Anil Kumar
Hi, Thanks for the prompt response. I tried using digest with sha256 to calculate the uberblock checksum. Now digest gives me a 65-char output, while zdb -uuu pool-name gives me only a 49-char output. How can this be accounted for? I'm trying to understand how the checksum is calculated and dis
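
If it helps: digest(1) prints the 256-bit hash as 64 hex digits (65 characters counting the newline), e.g.
  digest -a sha256 /tmp/uberblock.bin
while zdb prints checksums as four 64-bit words in hex, colon-separated, with leading zeros dropped — so the two lengths won't line up even for the same value. (That's my reading of the zdb output format; worth verifying against the source.)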

[zfs-discuss] zfs cksum calculation

2009-09-10 Thread P. Anil Kumar
Hi, I've compiled /export/testws/usr/src/lib/crypt_modules/sha256/test.c and tried to use it to calculate the checksum of the uberblock. I did this because the sha256 executable that comes with Solaris is not giving me the correct values for the uberblock. (The output is 64 chars whereas the zfs output is on

[zfs-discuss] NexentaStor.org and Open Source components

2009-09-09 Thread Anil Gulecha
a complete forge environment with Mercurial repositories, bug tracking, wiki, file hosting and other features. We welcome developers and the user community to participate and extend the storage appliance via our open Storage Appliance API (SA_API) and plugin API. Website: www.nexentastor.org Than

Re: [zfs-discuss] ub_guid_sum and vdev guids

2009-09-01 Thread P. Anil Kumar
14408718082181993222 + 4867536591080553814 - 2^64 + 4015976099930560107 = 4845486699483555527; there was an overflow in between that I overlooked. pak

[zfs-discuss] changing guid of vdev

2009-08-30 Thread P. Anil Kumar
Hi, I added a vdev (file) to the zpool and then, using hexedit, modified the guid of the vdev in all four labels. I also calculated the new ub_guid_sum and updated all uberblock guid_sum values. Now, when I try to import this modified file into the zpool, it says the device is offline...and

Re: [zfs-discuss] zfs kernel compilation issue

2009-08-30 Thread P. Anil Kumar
I just added -xarch=amd64 to Makefile.master and could then compile the driver without any issues. Regards, pak.

Re: [zfs-discuss] zfs kernel compilation issue

2009-08-30 Thread P. Anil Kumar
Hi,
bash-3.2# isainfo
amd64 i386
The above output shows amd64 is available. But how can I now overcome the compilation failure issue? Regards, pak

[zfs-discuss] zfs kernel compilation issue

2009-08-28 Thread P. Anil Kumar
I'm trying to compile the zfs kernel module on the following machine:
bash-3.2# uname -a
SunOS solaris-b119-44 5.11 snv_119 i86pc i386 i86pc
I set the env properly using bldenv -d ./opensolaris.sh.
bash-3.2# pwd
/export/testws/usr/src/uts
bash-3.2# dmake
dmake: defaulting to parallel mode. See the man page

[zfs-discuss] ub_guid_sum and vdev guids

2009-08-28 Thread P. Anil Kumar
I have a zfs pool named 'ppool' with two vdevs (files) file1 and file2 in it. zdb -l /pak/file1 output:
version=16
name='ppool'
state=0
txg=3080
pool_guid=14408718082181993222
hostid=8884850
hostname='solaris-b119-44'
top_guid=4867536591080553814
guid=4867536591080553

[zfs-discuss] cleaning up cloned zones

2009-07-29 Thread Anil
I created a couple of zones. I have a zone path like this:
r...@vps1:~# zfs list -r zones/cars
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zones/fans          1.22G  3.78G    22K  /zones/fans
zones/fans/ROOT     1.22G  3.78G    19K  legacy
zones/fans/ROOT/zbe 1.22G  3.78G  1.22G  legacy

[zfs-discuss] deduplication

2009-07-11 Thread Anil
When it comes out, how will it work? Does it work at the pool level or at the zfs filesystem level? If I create a zpool called 'zones' and then create several zones underneath it, could I expect to see a lot of disk space savings if I enable dedup on the pool? Just curious as to what's coming a
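
From what has been published so far: dedup is a per-dataset property, but the dedup table itself is pool-wide, so identical blocks across zones in one pool should dedup against each other. A sketch (names from the question):
  zfs set dedup=on zones   # inherited by every zone dataset beneath it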

Re: [zfs-discuss] eon or nexentacore or opensolaris

2009-05-24 Thread Anil Gulecha
On Sun, May 24, 2009 at 2:32 PM, Bogdan M. Maryniuk wrote: > On Sat, May 23, 2009 at 5:11 PM, Anil Gulecha wrote: >> Hi Bogdan, >> >> Which particular packages were these? RC3 is quite stable, and all >> server packages are solid. If you do face issues with a particula

Re: [zfs-discuss] eon or nexentacore or opensolaris

2009-05-23 Thread Anil Gulecha
> be a great distribution with excellent package management and very > convenient to use. Hi Bogdan, Which particular packages were these? RC3 is quite stable, and all server packages are solid. If you do face issues with a particular one, we'd appreciate a bug report. All information on t

[zfs-discuss] COMSTAR/zfs integration for faster SCSI

2009-03-02 Thread Anil Gulecha
Hi, Nexenta CP and NexentaStor have integrated COMSTAR with ZFS, which provides a 2-3x performance gain over the userland SCSI target daemon. I've blogged in more detail at http://www.gulecha.org/2009/03/03/nexenta-iscsi-with-comstarzfs-integration/ Cheers, Anil http://www.gulecha.org
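
For anyone wanting to try it, the basic COMSTAR flow over a zvol looks roughly like this (a sketch; names and size are hypothetical):
  zfs create -V 10g tank/lun0
  sbdadm create-lu /dev/zvol/rdsk/tank/lun0
  stmfadm add-view <LU-GUID-printed-by-sbdadm>
  itadm create-target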

Re: [zfs-discuss] scrub never finishes

2008-07-13 Thread Anil Jangity
Oh, my hunch was right. Yup, I do have an hourly snapshot going. I'll take it out and see. Thanks! Bob Friesenhahn wrote: > On Sun, 13 Jul 2008, Anil Jangity wrote: > > >> On one of the pools, I started a scrub. It never finishes. At one time, >> I saw it go up to
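
On builds of that era, a newly created snapshot restarts an in-progress scrub, so an hourly snapshot job means the scrub never completes. A quick way to check the correlation (a sketch):
  zfs list -t snapshot -o name,creation -s creation | tail -3
  zpool status data2 | grep scrub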

[zfs-discuss] scrub never finishes

2008-07-13 Thread Anil Jangity
On one of the pools, I started a scrub. It never finishes. At one point I saw it reach about 70%, and a little later when I ran zpool status it was back at 5% and starting over. What is going on? Here is the pool layout:
pool: data2
state: ONLINE
scrub: scrub in progress, 35.25% d

[zfs-discuss] global zone snapshots

2008-06-24 Thread Anil Jangity
Is it possible to give a zone access to the snapshots of a global-zone dataset (through lofs, perhaps)? I recall that you can't just delegate a snapshot dataset into a zone yet, but was wondering if there is some lofs magic I can do? Thanks
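
Untested, but the lofs route might look like this (a sketch; zone/pool names hypothetical, and the .zfs directory must be visible):
  zfs set snapdir=visible data
  zonecfg -z myzone
    add fs
    set dir=/gzsnaps
    set special=/data/.zfs/snapshot
    set type=lofs
    add options ro
    end
    commit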

Re: [zfs-discuss] zpool iostat

2008-06-18 Thread Anil Jangity
't see any reason why I should consider that. I would like to proceed with doing a raidz with double parity; please give me some feedback. Thanks, Anil
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----

Re: [zfs-discuss] zpool iostat

2008-06-18 Thread Anil Jangity
A three-way mirror and three disks in a double-parity array will get you the same usable space and the same level of redundancy. The only difference is that the RAIDZ2 will consume a lot more CPU cycles calculating parity for no good cause. In this case,
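
Concretely, the two layouts being compared (device names hypothetical):
  zpool create tank mirror c0t0d0 c0t1d0 c0t2d0   # 3-way mirror
  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0   # same space, same redundancy
The mirror should also read faster, since ZFS can spread reads across all three sides.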

[zfs-discuss] zpool iostat

2008-06-18 Thread Anil Jangity
double parity, please give me some feedback. Thanks, Anil
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.67G      2     19  52.3K   198K
data2       58.2G  9.83G      3

Re: [zfs-discuss] ZFS layout recommendations

2007-11-21 Thread Anil Jangity
Thanks James/John! That link specifically mentions "new Solaris 10 release", so I am assuming that means going from, say, u4 to Solaris 10 u5, and that shouldn't cause a problem when doing plain patchadds (w/o Live Upgrade). If so, then I am fine with those warnings and can use zfs with zones' path

[zfs-discuss] ZFS layout recommendations

2007-11-21 Thread Anil Jangity
I have a pool called "data". I have zones configured in that pool. The zonepath is /data/zone1/fs. (/data/zone1 itself is not used for anything else, by anyone, and has no other data.) There are no datasets being delegated to this zone. I want to create a snapshot that I would want to make avail
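
If the goal is to see those snapshots from inside the zone, one untested sketch (names from above) is to snapshot the dataset and lofs-mount the snapshot directory into the zone's root:
  zfs snapshot data/zone1/fs@backup
  # target directory must already exist inside the zone
  mount -F lofs -o ro /data/zone1/fs/.zfs/snapshot/backup /data/zone1/fs/root/mnt/backup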