Yup, that's exactly what I did last night...
zoned=off
mountpoint=/some/place
mount
unmount
mountpoint=legacy
zoned=on
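Spelled out as commands, that sequence was roughly the following (dataset name
taken from the error below, scratch mountpoint made up):

  zfs set zoned=off zones/rani/ROOT/zbe-2
  zfs set mountpoint=/mnt/zbe-2 zones/rani/ROOT/zbe-2
  zfs mount zones/rani/ROOT/zbe-2
  # ... do whatever needed the dataset mounted ...
  zfs unmount zones/rani/ROOT/zbe-2
  zfs set mountpoint=legacy zones/rani/ROOT/zbe-2
  zfs set zoned=on zones/rani/ROOT/zbe-2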
Thanks!
On May 20, 2012, at 3:09 AM, Jim Klimov wrote:
> 2012-05-20 8:18, Anil Jangity wrote:
>> What causes these messages?
>>
>> cannot create snapshot 'zones/rani/ROOT/zbe-2@migration': dataset is busy
What causes these messages?
cannot create snapshot 'zones/rani/ROOT/zbe-2@migration': dataset is busy
There are zones living in the zones pool, but none of the zones are running or
mounted.
root@:~# zfs get -r mounted,zoned,mountpoint zones/rani
NAME                    PROPERTY    VALUE
I have a couple of Sun/Oracle x2270 boxes and am planning to get some 2.5"
Intel 320 SSDs for the rpool.
Do you happen to know what kind of bracket is required to get the 2.5" SSD to
fit into the 3.5" slots?
Thanks
I'd rather not kill the zfs process. Just curious if this is going to take
a while or not.
Anil
zfs stats with the logical volume vs
just plain zfs mirrors, and see. Just wondering if these controllers
had any other utility for this.
On Sat, Oct 23, 2010 at 10:06 PM, Erik Trimble wrote:
> On 10/23/2010 8:22 PM, Anil wrote:
>>
>> We have Sun STK RAID cards in our x4170 servers.
We have Sun STK RAID cards in our x4170 servers. These are battery-backed
with 256MB of cache.
What is the recommended ZFS configuration for these cards?
Right now, I have created a one-to-one logical-volume-to-disk mapping on the
RAID card (one disk == one volume on the RAID card). Then, I mirror them in
ZFS.
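That is, something along these lines (device names made up; each cXtYdZ is one
single-disk volume exported by the card):

  zpool create tank \
      mirror c1t0d0 c1t1d0 \
      mirror c1t2d0 c1t3d0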
> g to? The
> last commit to illumos-gate was 6 days ago and you're already not even
> keeping it in sync.. Can you even build it yet and if so where's the
> binaries?
>
The project is a couple of weeks old. There are already webrevs for the 145
and 146 merges, and another one f
Or Nexenta :)
http://www.nexenta.org
~Anil
On Thu, Aug 5, 2010 at 5:15 PM, Tuco wrote:
>> That said, if you need ZFS right now, it's either
>> FreeBSD or OpenSolaris
>
> Or Debian GNU/kFreeBSD ;-)
>
> http://tucobsd.blogspot.com/2010/08/apt-get-install-zfsutils.ht
On Mon, Jul 19, 2010 at 3:31 PM, Pasi Kärkkäinen wrote:
>
> Upcoming Ubuntu 10.10 will use BTRFS as a default.
>
Though there was some discussion around this, I don't think the above
is a given. The ubuntu devs would look at the status of the project,
and decide closer to the relea
If you are a storage solution provider, we invite you to join our growing
social network at http://people.nexenta.com.
--
Thanks
Anil Gulecha
Community Leader
http://www.nexentastor.org
ins
NexentaStor enterprise edition (nexenta.com) = Costs $$
- NCP underneath + closed UI + enterprise plugins ($$)
~Anil
Seems like a nice sale on Newegg for SSD devices. Talk about choices. What are
the latest recommendations for a log device?
http://bit.ly/aL1dne
been obsoleted.)
If you are a storage solution provider, we invite you to join our
growing social network at http://people.nexenta.com.
--
Thanks
Anil Gulecha
Community Leader
http://www.nexentastor.org
/Tracker: http://www.nexenta.org/projects/nexenta-gate
Thanks,
Anil
/wiki/DeveloperEdition
Summary of recent changes is on freshmeat at
http://freshmeat.net/projects/nexentastor/
A complete list of projects (14 and growing) is at
http://www.nexentastor.org/projects
Nightly images are available at
http://ftp.nexentastor.org/nightly/
Regards
--
Anil Gulecha
Community
> ZFS will definitely benefit from battery backed RAM on the controller
> as long as the controller immediately acknowledges cache flushes
> (rather than waiting for battery-protected data to flush to the
I am a little confused by this. Do we not want the controller to ignore these
cache flushes?
I am sure this is not the first discussion related to this... apologies for the
duplication.
What is the recommended way to make use of a hardware RAID controller/HBA along
with ZFS?
Does it make sense to do RAID5 in hardware and then RAIDZ in software, or just
stick to ZFS RAIDZ and connect the disks directly?
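For the plain-RAIDZ option I mean something like this (disk names hypothetical,
each disk passed through by the controller as-is):

  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0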
I *am* talking about situations where physical RAM is used up. So definitely
the SSD could be touched quite a bit when used as an rpool, for paging in/out.
Also...
There is talk about using those cheap disks for the rpool. Isn't the rpool also
prone to a lot of writes, specifically when /tmp is on the SSD?
What's the real reason for making those cheap SSDs an rpool rather than an
L2ARC?
Basically, is everyone saying that SSDs without NVRAM/capacitors/batteries
After spending some time reading up on this whole deal with SSDs with "caches"
and how they are prone to data loss during power failures, I need some
clarification...
When you guys say "write cache", do you just really mean the onboard cache
(for both reads AND writes)? Or is there a separate
If you have another partition with enough space, you could technically just do:
mv src /some/other/place
mv /some/other/place src
Anyone see a problem with that? Might be the best way to get it de-duped.
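If someone tries it, the pool-wide effect should show up afterwards in the
dedup ratio, e.g. (pool name is a placeholder):

  zpool get dedupratio tank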
s deduplication support. Stay tuned!
Regards
--
Anil Gulecha
Community Lead, NexentaStor.org
I haven't tried this, but this must be very easy with dtrace. How come no one
mentioned it yet? :) You would have to monitor some specific syscalls...
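Just as a sketch of the idea (which probes you actually want depends on what
you are trying to catch), something like this counts write(2) calls per
process:

  dtrace -n 'syscall::write:entry { @[execname] = count(); }'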
Hi,
Thanks for the prompt response.
I tried using digest with sha256 to calculate the uberblock checksum. Now,
digest gives me a 65-character output, while zdb -uuu pool-name gives me only a
49-character output.
How can this be accounted for?
I'm trying to understand how the checksum is calculated and dis
Hi,
I've compiled /export/testws/usr/src/lib/crypt_modules/sha256/test.c and tried
to use it to calculate the checksum of the uberblock. I did this because the
sha256 executable that comes with Solaris is not giving me the correct values
for the uberblock. (The output is 64 chars whereas the zfs output is on
a complete forge environment with mercurial
repositories, bug tracking, wiki, file hosting and other features.
We welcome developers and the user community to participate and extend
the storage appliance via our open Storage Appliance API (SA_API) and
plugin API.
Website: www.nexentastor.org
Than
14408718082181993222 + 4867536591080553814 - 2^64 + 4015976099930560107 =
4845486699483555527
There was an overflow in between that I overlooked.
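For anyone checking the math, the wrap-around sum can be reproduced with bc
using the same guid values:

  $ echo '(14408718082181993222 + 4867536591080553814 + 4015976099930560107) % 2^64' | bc
  4845486699483555527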
pak
Hi,
I added a vdev (file) to the zpool and then, using hexedit, modified the guid
of the vdev in all four labels. I also calculated the new ub_guid_sum and
updated all the uberblock guid_sum values. Now, when I try to import this
modified file into the zpool, it says the device is offline... and
I just added -xarch=amd64 in Makefile.master and then could compile the driver
without any issues.
Regards,
pak.
Hi,
bash-3.2# isainfo
amd64 i386
The above output shows amd64 is available. But how can I now overcome the
compilation failure issue?
Regards,
pak
I'm trying to compile the zfs kernel on the following machine:
bash-3.2# uname -a
SunOS solaris-b119-44 5.11 snv_119 i86pc i386 i86pc
I set the env properly using bldenv -d ./opensolaris.sh.
bash-3.2# pwd
/export/testws/usr/src/uts
bash-3.2# dmake
dmake: defaulting to parallel mode.
See the man page
I've a zfs pool named 'ppool' with two vdevs (files), file1 and file2, in it.
zdb -l /pak/file1 output:
version=16
name='ppool'
state=0
txg=3080
pool_guid=14408718082181993222
hostid=8884850
hostname='solaris-b119-44'
top_guid=4867536591080553814
guid=4867536591080553814
I created a couple of zones. I have a zone path like this:
root@vps1:~# zfs list -r zones/fans
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zones/fans          1.22G  3.78G    22K  /zones/fans
zones/fans/ROOT     1.22G  3.78G    19K  legacy
zones/fans/ROOT/zbe 1.22G  3.78G  1.22G  legacy
When it comes out, how will it work?
Does it work at the pool level or a zfs file system level? If I create a zpool
called 'zones' and then I create several zones underneath that, could I expect
to see a lot of disk space savings if I enable dedup on the pool?
Just curious as to what's coming a
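From what has been described so far, dedup is expected to land as a regular
dataset property that child datasets inherit, so enabling it for everything
under the pool would just be, for example:

  zfs set dedup=on zones    # 'zones' = the pool from the example above

with the dedup table itself kept per pool.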
On Sun, May 24, 2009 at 2:32 PM, Bogdan M. Maryniuk wrote:
> On Sat, May 23, 2009 at 5:11 PM, Anil Gulecha wrote:
>> Hi Bogdan,
>>
>> Which particular packages were these? RC3 is quite stable, and all
>> server packages are solid. If you do face issues with a particula
> be a great distribution with excellent package management and very
> convenient to use.
Hi Bogdan,
Which particular packages were these? RC3 is quite stable, and all
server packages are solid. If you do face issues with a particular
one, we'd appreciate a bug report. All information on t
Hi,
Nexenta CP and NexentaStor have integrated COMSTAR with ZFS, which
provides a 2-3x performance gain over the userland SCSI target daemon. I've
blogged about it in more detail at
http://www.gulecha.org/2009/03/03/nexenta-iscsi-with-comstarzfs-integration/
Cheers,
Anil
http://www.gulecha.org
Oh, my hunch was right. Yup, I do have an hourly snapshot going. I'll
take it out and see.
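If the hourly snapshots come from the zfs auto-snapshot SMF service, pausing
just that schedule would be something along these lines (the exact FMRI depends
on how they are scheduled):

  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:hourly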
Thanks!
Bob Friesenhahn wrote:
> On Sun, 13 Jul 2008, Anil Jangity wrote:
>
>
>> On one of the pools, I started a scrub. It never finishes. At one time,
>> I saw it go up to
On one of the pools, I started a scrub. It never finishes. At one time,
I saw it go up to like 70%, and then a little bit later when I ran the pool
status, it was back at 5% and starting again.
What is going on? Here is the pool layout:
pool: data2
state: ONLINE
scrub: scrub in progress, 35.25% d
Is it possible to give a zone access to the snapshots of a global-zone dataset
(through lofs, perhaps)? I recall that you can't just delegate a snapshot
dataset into a zone yet, but I was wondering if there is some lofs magic I can
do?
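One thing that might work is an ordinary lofs mount of the snapshot directory
configured via zonecfg (zone name and paths below are placeholders):

  zonecfg -z zone1
  zonecfg:zone1> add fs
  zonecfg:zone1:fs> set dir=/gz-snaps
  zonecfg:zone1:fs> set special=/data/somefs/.zfs/snapshot
  zonecfg:zone1:fs> set type=lofs
  zonecfg:zone1:fs> add options ro
  zonecfg:zone1:fs> end
  zonecfg:zone1> commit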
Thanks
I don't see any reason why I should consider that. I would like to
proceed with doing a raidz with double parity, please give me some feedback.
Thanks,
Anil
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
A three-way mirror and three disks in a double parity array are going to get you
the same usable space. They are going to get you the same level of redundancy.
The only difference is that the RAIDZ2 is going to consume a lot more CPU cycles
calculating parity for no good cause.
In this case,
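In zpool terms, the two layouts being compared are simply (disk names
hypothetical, three disks either way):

  zpool create tank mirror c0t0d0 c0t1d0 c0t2d0   # three-way mirror
  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0   # double parity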
double parity, please give me some feedback.
Thanks,
Anil
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.67G      2     19  52.3K   198K
data2       58.2G  9.83G      3
Thanks James/John!
That link specifically mentions "new Solaris 10 release", so I am assuming that
means going from like u4 to Sol 10 u5, and that shouldn't cause a problem when
doing plain patchadd's (w/o live upgrade). If so, then I am fine with those
warnings and can use zfs with zones' path
I have a pool called "data".
I have zones configured in that pool. The zonepath is /data/zone1/fs.
(/data/zone1 itself is not used for anything else, by anyone, and has no other
data.) There are no datasets being delegated to this zone.
I want to create a snapshot that I would want to make avail