Re: [zfs-discuss] ZFS ARC cache issue

2010-06-07 Thread Nicolas Dorfsman


When I looked for references on the ARC freeing algorithm, I found some 
code that frees ARC buffers when memory is under pressure.

Nice... but what counts as "memory under pressure" in kernel terms?


Jumping from C code to blogs to docs... I went back to basics:

- lotsfree
- fastscan


IMHO, lotsfree (the greater of 1/64th of physical memory or 512 KB) is 
stupid when you're using ZFS.
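
For reference, a minimal sketch of how those thresholds can be inspected on a 
live system (Solaris, run as root; values are in pages):

  # Page-scanner thresholds via the kernel debugger:
  echo "lotsfree/E" | mdb -k
  echo "fastscan/E" | mdb -k
  # lotsfree is also exported through kstat:
  kstat -p unix:0:system_pages:lotsfree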



Nicolas
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zones-discuss] Solaris 8/9 branded zones on ZFS root?

2009-02-25 Thread Nicolas Dorfsman


On 25 Feb 2009 at 23:12, Timothy Kennedy wrote:


Rich Teer wrote:


I have a situation where I need to consolidate a few servers running
Solaris 9 and 8.  If the application doesn't run natively on Solaris
10 or Nevada, I was thinking of using Solaris 9 or 8 branded zones.
My intent would be for the global zone to use ZFS boot/root; would I
be correct in thinking that this will be OK for the branded zones?


That's correct.  I have some Solaris 8 zones running under cluster
control, where the zonepath is on ZFS, and they're doing just fine.
Nothing special had to be done.
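
For illustration, a minimal sketch of that setup, with hypothetical zone and
dataset names:

  # A dedicated ZFS dataset as the zonepath (must end up mode 700):
  zfs create -o mountpoint=/zones/s8zone rpool/zones/s8zone
  chmod 700 /zones/s8zone
  # Configure a solaris8-branded zone on top of it:
  zonecfg -z s8zone 'create -t SUNWsolaris8; set zonepath=/zones/s8zone; commit'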



Which ACL model is used, then?



Nico
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zones-discuss] Solaris 8/9 branded zones on ZFS root?

2009-02-26 Thread Nicolas Dorfsman


On 26 Feb 2009 at 15:47, Timothy Kennedy wrote:


Timothy Kennedy wrote:

Nicolas Dorfsman wrote:


   Which ACL model is used, then?


From: System Administration Guide: Solaris 8 Containers
( http://docs.sun.com/app/docs/doc/820-2914/gfjbk?a=view )


Using ZFS

Although the zone cannot use a delegated ZFS dataset, the zone can  
reside on a ZFS file system. You can add a ZFS file system to share  
with the global zone through the zonecfg fs resource. See Step 7 in  
How to Configure a solaris8 Branded Zone.


Note that the setfacl and getfacl commands cannot be used with ZFS.  
When a cpio or a tar archive with ACLs set on the files is unpacked,  
the archive will receive warnings about not being able to set the  
ACLs, although the files will be unpacked successfully. These  
commands can be used with UFS.


Thanks, Timothy!
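
So inside ZFS the old setfacl/getfacl interface is simply gone; for reference,
a minimal sketch of the NFSv4-style commands ZFS uses instead (hypothetical
file and user names):

  # Show a file's ACL in verbose NFSv4 form:
  ls -V /tank/data/report.txt
  # Grant a user read access through chmod's ACL syntax:
  chmod A+user:alice:read_data:allow /tank/data/report.txt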


Nico


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush

2007-11-27 Thread Nicolas Dorfsman
Hi,

   I read some articles on solarisinternals.com, like the "ZFS Evil Tuning 
Guide" at http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide .  
It clearly suggests disabling the cache flush on arrays with NVRAM: 
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH . 

It seems to be the only serious article on the net about this subject.

Could someone here weigh in on this tuning suggestion?
My customer is running an HDS SAN array with Oracle on ZFS, and I'd like to be 
clear in my own mind.
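
For reference, the tuning the guide describes is a single /etc/system line — a 
sketch, assuming a build recent enough to have the tunable, and only meant for 
LUNs that sit entirely behind battery-backed cache:

  * /etc/system: tell ZFS not to issue SCSI cache-flush commands.
  set zfs:zfs_nocacheflush = 1

It takes effect after a reboot.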
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Computer usable output for zpool commands

2008-02-01 Thread Nicolas Dorfsman
Hi,

I wrote a Hobbit script around the lunmap/hbamap commands to monitor SAN health.
I'd like to add detail about what is hosted on those LUNs.

With SVM, metastat -p is helpful.

With ZFS, the zpool status output is awful for scripting.

Is there a utility somewhere that shows zpool information in a scriptable 
format?
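
So far the closest I've come is the -H "scripted mode" flag — a minimal sketch, 
assuming your build has it — but it only covers list output, not status detail:

  # Header-less, tab-separated output, friendly to awk/cut:
  zpool list -H -o name,size,health
  zfs list -H -o name,used,mountpoint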

Nico
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Changes during zpool import

2006-09-01 Thread Nicolas Dorfsman
An extract from man zpool:

zpool import [-d dir] [-D] [-f] [-o opts] [-R root]  pool  |
 id [newpool]

 Imports a specific pool. A pool can be identified by its
 name or the numeric identifier. If newpool is specified,
 the pool is imported using the name newpool.  Otherwise,
 it is imported with the same name as its exported name.

 If a device is removed from  a  system  without  running
 "zpool  export" first, the device appears as potentially
 active. It cannot be determined if  this  was  a  failed
 export,  or  whether  the  device  is really in use from
 another host. To import a pool in  this  state,  the  -f
 option is required.

 -d dir   Searches for devices or files in  dir.  The  -d
  option can be specified multiple times.

 -D   Imports destroyed pool. The -f option  is  also
  required.

 -f   Forces import, even if the pool appears  to  be
  potentially active.

 -o opts  Comma-separated list of mount  options  to  use
  when  mounting  datasets  within  the pool. See
  zfs(1M) for a description of dataset properties
  and mount options.

 -R root  Imports pool(s) with an alternate root. See the
  "Alternate Root Pools" section.[/I]


So, you could import it with '-R' and then modify its properties.
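
A minimal sketch of that sequence, with a hypothetical pool name:

  # Import under an alternate root so nothing mounts over live paths:
  zpool import -R /a -f tank
  # Adjust whatever property needs changing, e.g. a mountpoint:
  zfs set mountpoint=/export/data tank/data
  # Export again; the alternate root is not persistent.
  zpool export tank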
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS with expanding LUNs

2006-09-01 Thread Nicolas Dorfsman
It's a really interesting question.

I used to have this issue on UFS, and the answer was "use VxVM or ZFS".  Mmmm.

Does it really work with ZFS?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Recommendation ZFS on StorEdge 3320

2006-09-07 Thread Nicolas Dorfsman
> The hard part is getting a set of simple
> requirements. As you go into 
> more complex data center environments you get hit
> with older Solaris 
> revs, other OSs, SOX compliance issues, etc. etc.
> etc. The world where 
> most of us seem to be playing with ZFS is on the
> lower end of the 
> complexity scale. Sure, throw your desktop some fast
> SATA drives. No 
> problem. Oh wait, you've got ten Oracle DBs on three
> E25Ks that need to 
> be backed up every other blue moon ...

  Another factor is CPU usage.

  Does anybody really know what the effects of an intensive CPU workload on 
ZFS performance would be, and conversely, what the CPU cost of ZFS RAID 
computation does to an intensive CPU workload?

  I heard a story about a customer complaining about his high-end server's 
performance; when an engineer came on site... and discovered beautiful SVM 
RAID-5 volumes, the solution was almost found.

 Nicolas
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS on production servers with SLA

2006-09-08 Thread Nicolas Dorfsman
Hi,

I'm currently doing some tests on an SF15K domain with Solaris 10 
installed.
The goal is to convince my customer to use Solaris 10 for this domain AND 
to establish a list of recommendations.

What to put inside the ZFS perimeter is a real issue for me.
For now, I'm waiting for fresh information from the backup software 
vendor about ZFS support.  Missing ZFS ACL support could be annoying.

Regarding the "system partitions" (/var, /opt, all mirrored + an alternate 
disk), what would YOUR recommendations be?  ZFS or not?


TIA


Nicolas
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs share=".foo-internal.bar.edu" on multipleinterfaces?

2006-09-12 Thread Nicolas Dorfsman
> I have a Sun x4200 with 4x gigabit ethernet NICs.  I
> have several of 
> them configured with distinct IP addresses on an
> internal (10.0.0.0) 
> network.

[off topic]
Why are you using distinct IP addresses instead of IPMP?
[/off]
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Snapshots and backing store

2006-09-13 Thread Nicolas Dorfsman
Hi,

  There's something really bizarre in the ZFS snapshot specs: "Uses no separate 
backing store."

  Hmm... if I want to share one physical volume somewhere in my SAN as THE 
snapshot backing store... that's impossible to do!   Really bad.

  Is there any chance of having a "backing-store-file" option in a future 
release?

  Along the same lines, it would be great to have some sort of property for 
adding a disk/LUN/physical space to a pool, reserved solely for the backing 
store.  For now, the only way I see to keep users from consuming my 
backing-store space is to set quotas.
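
A sketch of that quota workaround, with hypothetical dataset names — cap what 
users can consume so snapshot blocks always have headroom in the pool:

  # Limit how much the users' tree can allocate:
  zfs set quota=80G tank/users
  # Or guarantee space to a dataset with a reservation:
  zfs set reservation=20G tank/reserved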

Nico
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Nicolas Dorfsman
Well.

> ZFS isn't copy-on-write in the same way that things
> like ufssnap are.
> ufssnap is copy-on-write in that when you write
> something, it copies out
> the old data and writes it somewhere else (the
> backing store).  ZFS doesn't
> need to do this - it simply writes the new data to a
> new location, and
> leaves the old data where it is. If that old data is
> needed for a snapshot
> then it's left unchanged, if it's not then it's
> freed.

  We need to think of ZFS as ZFS, and not as just another filesystem!  I mean, 
the whole concept is different.

  So. What could be the best architecture?
  With UFS, I used to have separate metadevices/LUNs for each application.
  With ZFS, I thought it would be nice to use a separate pool for each 
application.
  But that means multiplying snapshot backing stores, OR dynamically 
removing/adding that space/LUN to whichever pool needs a backup.  Knowing that 
I can't serialize backups, my only option is to multiply the reservations for 
backing stores.  Ugh!
  Another option would be to create a single pool and put all applications in 
it... I don't see that as a solution.

   Any suggestions?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Nicolas Dorfsman
> If you want to copy your filesystems (or snapshots)
> to other disks, you
> can use 'zfs send' to send them to a different pool
> (which may even be 
> on a different machine!).

Oh no!  That means copying the whole filesystem.  The goal here is definitely 
to snapshot the filesystem and then back up the snapshot.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Nicolas Dorfsman
Hi Matt,

> > So. What could be the best architecture ?
> 
> What is the problem?

I/O profile isolation versus snapshot backing-store 'reservation' optimisation.


> > With UFS, I used to have separate metadevices/LUNs for each
> > application. With ZFS, I thought it would be nice to use a separate
> > pool for each application.
> 
> Ick.  It would be much better to have one pool, and a
> separate
> filesystem for each application.

Including performance considerations?
For instance, if I have two Oracle databases with two I/O profiles 
(transactional versus batch)... which would be best:

1) Two pools, each one on two LUNs. Each LUN distributed on n trays.
2) One pool on one LUN. This LUN distributed on 2 x n trays.
3) One pool striped on two LUNs. Each LUN distributed on n trays.


> > But, it means multiply snapshot backing-store OR dynamically
> >  remove/add this space/LUN to pool where we need to  do backups.

> I don't understand this statement.  What problem are
> you trying to solve?
>  If you want to do backups, simply take a snapshot, then point
> your backup program at it. 

With one pool, no problem.

With n pools, my problem is the space used by the snapshots.  With the COW 
method of UFS snapshots, I can put all the backing stores on one single volume.  
With ZFS snapshots, that's conceptually impossible.
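
For completeness, the backup flow that suggestion implies in the one-pool 
case — a minimal sketch with hypothetical names:

  # Create a read-only, point-in-time view:
  zfs snapshot tank/app@nightly
  # Point the backup tool at the snapshot's path under .zfs:
  tar cf /backup/app-nightly.tar /tank/app/.zfs/snapshot/nightly
  # Drop the snapshot once the backup completes:
  zfs destroy tank/app@nightly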
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Veritas NetBackup Support for ZFS

2006-09-22 Thread Nicolas Dorfsman
> I am using Netbackup 6.0 MP3 on several ZFS systems
> just fine.  I
> think that NBU won't back up some exotic ACLs of ZFS,
> but if you
> are using ZFS like other filesystems (UFS, etc) then there aren't  any issues.

  Hmm.  ACLs are not so "exotic".

  This IS a really BIG issue.  If you are using ACLs, even POSIX ones, moving 
production to ZFS filesystems means losing all ACLs in backups.

   In other words, if you're only using thirty-year-old UNIX permissions, no 
problem.

   If I had to give a list of complaints about ZFS, that one would be first on 
my list!   Sun SHOULD put pressure on the backup software vendors (or send 
them some engineers) to support ZFS.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Why root zone can't be on ZFS for upgrade ?

2006-12-21 Thread Nicolas Dorfsman
Hi,

Something is unclear in the "Solaris containers" and "Solaris ZFS" docs.

Two extracts:

http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qsm?q=zone&a=view
"Consider the following interactions when working with ZFS on a system with 
Solaris zones installed:

A ZFS file system that is added to a non-global zone must have its mountpoint 
property set to legacy.

A ZFS file system cannot serve as zone root because of issues with the Solaris 
upgrade process. Do not include any system-related software that is accessed by 
the patch or upgrade process in a ZFS file system that is delegated to a 
non-global zone."
http://docs.sun.com/app/docs/doc/817-1592/6mhahuop2?a=view
"4. Set the zone path, /export/home/my-zone in this procedure.
   zonecfg:my-zone> set zonepath=/export/home/my-zone
Do not place the zonepath on ZFS for this release."

I can't understand why the upgrade process needs the non-global zone root to be 
on anything other than ZFS.  Is it that the boot CD-ROM can't mount ZFS volumes?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Why root zone can't be on ZFS for upgrade ?

2006-12-21 Thread Nicolas Dorfsman
Jeff wrote:
> The installation software does not yet understand
> ZFS, and is not able to 
> upgrade a Solaris 10 system with a ZFS root file
> system.  Further, it is not 
> able to upgrade a Solaris 10 system with a non-global
> zone that has a ZFS file 
> system as its zonepath.

  Thanks, Jeff.

  Any idea when the installation software will be able to "see" ZFS volumes?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss