Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Mike Gerdts
On Tue, Aug 11, 2009 at 9:39 AM, Ed Spencer wrote: > We back up 2 filesystems on Tuesday, 2 filesystems on Thursday, and 2 on Saturday. We back up to disk and then clone to tape. Our backup people can only handle doing 2 filesystems per night. > Creating more filesystems to increase the parallelism
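Splitting the mail store across more datasets, as discussed above, is just a matter of creating extra filesystems; a minimal sketch, with hypothetical pool and dataset names:
# zfs create mail/store01
# zfs create mail/store02
Each dataset can then be walked by a separate backup stream in parallel.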

Re: [zfs-discuss] SSD (SLC) for cache...

2009-08-11 Thread David Magda
On Aug 11, 2009, at 17:07, Marcelo Leal wrote: My question is about SSDs, and the differences between using SLC for Readzillas instead of MLC. Sun uses MLC for Readzillas in its 7000 series. I would think that if SLCs (which are generally more expensive) were really needed, they would be

Re: [zfs-discuss] surprisingly poor performance

2009-08-11 Thread roland
> SSDs with capacitor-backed write caches seem to be fastest. How do you distinguish them from SSDs without one? I never saw this explicitly mentioned in the specs.

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Ed Spencer
On Tue, 2009-08-11 at 14:56, Scott Lawson wrote: > Also, is atime on? Turning atime off may make a big difference for you. It certainly does for Sun Messaging Server. Maybe worth doing and reposting the results? Yes. All these results were attained with atime=off. We made that change on all the
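For reference, atime is an ordinary per-dataset property; a minimal sketch, dataset name hypothetical:
# zfs set atime=off mail/store01
# zfs get atime mail/store01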

Re: [zfs-discuss] NFS load balancing / was: ZFS, ESX , and NFS. oh my!

2009-08-11 Thread roland
> I tried making my nfs mount at a higher zvol level. But I cannot traverse to the sub-zvols from this mount. I really wonder when someone will come up with a little patch which implements a crossmnt option for the Solaris nfsd (like the one that exists for the Linux nfsd). OK, even if it's a hack - if it works it
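For reference, the Linux feature being alluded to is the crossmnt export flag, which lets clients descend into filesystems mounted below the export point; a sketch of an /etc/exports line, path and network hypothetical:
/export  192.168.1.0/24(rw,crossmnt)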

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Louis-Frédéric Feuillette
On Tue, 2009-08-11 at 08:04 -0700, Richard Elling wrote: > On Aug 11, 2009, at 7:39 AM, Ed Spencer wrote: > > I suspect that if we 'rsync' one of these filesystems to a second server/pool that we would also see a performance increase equal to what we see on the development server. (I

Re: [zfs-discuss] raidz

2009-08-11 Thread C. Bergström
glidic anthony wrote: Thanks, but if it's experimental I prefer not to use it. My server is used as an NFS share for an ESXi host, so I prefer it to be stable. But I think the best way is to add another HDD for the install and make my raidz with these 3 disks. Do you really consider OpenSolaris production

Re: [zfs-discuss] SSD (SLC) for cache...

2009-08-11 Thread Marcelo Leal
Hello David... Thanks for your answer, but I was not talking about buying disks... I think you misunderstood my email (or my bad English), but I know the performance improvements when using a cache device. My question is about SSDs, and the differences between using SLC for Readzillas instead of MLC. Thanks

Re: [zfs-discuss] raidz

2009-08-11 Thread glidic anthony
Thanks, but if it's experimental I prefer not to use it. My server is used as an NFS share for an ESXi host, so I prefer it to be stable. But I think the best way is to add another HDD for the install and make my raidz with these 3 disks.

Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-11 Thread A Darren Dunham
On Tue, Aug 11, 2009 at 09:35:53AM -0700, Sascha wrote:
> Then creating a zpool:
> zpool create -m /zones/huhctmp huhctmppool c6t6001438002A5435A0001005Ad0
> zpool list
> NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> huhctmppool  59.5G  103K  59.5G   0%
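As an aside: on builds that support it, a pool can pick up a grown LUN via the autoexpand property or an explicit expanding online; a hedged sketch, assuming the LUN has already been resized on the array:
# zpool set autoexpand=on huhctmppool
# zpool online -e huhctmppool c6t6001438002A5435A0001005Ad0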

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Richard Elling
On Aug 11, 2009, at 1:21 PM, Ed Spencer wrote: Concurrency/Parallelism testing. I have 6 different filesystems populated with email data on our mail development server. I rebooted the server before beginning the tests. The server is a T2000 (sun4v) machine, so it's ideally suited for this type of

Re: [zfs-discuss] raidz

2009-08-11 Thread C. Bergström
glidic anthony wrote: Hi, I have only 3 HDDs and I want to make a raidz. But I think it's not possible to install OpenSolaris on a raidz system. So what is the solution? Maybe to create a slice on the first disk (where OpenSolaris was installed) and create a raidz with this slice and the two other disks

[zfs-discuss] raidz

2009-08-11 Thread glidic anthony
Hi, I have only 3 HDDs and I want to make a raidz. But I think it's not possible to install OpenSolaris on a raidz system. So what is the solution? Maybe to create a slice on the first disk (where OpenSolaris was installed) and create a raidz with this slice and the two other disks? I tried that but when
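A sketch of the layout being proposed, assuming the OS occupies slice 0 of the first disk and a spare slice (here s7, hypothetical) was carved out for data:
# zpool create tank raidz c0t0d0s7 c0t1d0 c0t2d0
Note the raidz is then limited by the size of its smallest member, i.e. the slice.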

[zfs-discuss] request: Prepare your os.org content for migration to XWiki

2009-08-11 Thread Michelle Olson
Hi all, In six short weeks (Sept. 14th) all content on opensolaris.org will be migrated to XWiki. To prepare your web pages for this migration, please check the first test migration here: http://hub.opensolaris.org/bin/view/Main/ If you find any problems, please refer to the tips and tricks

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Ed Spencer
Concurrency/Parallelism testing. I have 6 different filesystems populated with email data on our mail development server. I rebooted the server before beginning the tests. The server is a T2000 (sun4v) machine, so it's ideally suited for this type of testing. The test was to tar (to /dev/null) each of
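The test described boils down to something like the following, first one filesystem at a time and then all six at once (paths hypothetical):
# time tar cf /dev/null /mail/fs1
# for fs in fs1 fs2 fs3 fs4 fs5 fs6; do tar cf /dev/null /mail/$fs & done; wait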

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Scott Lawson
Richard Elling wrote: On Aug 11, 2009, at 7:39 AM, Ed Spencer wrote: On Tue, 2009-08-11 at 07:58, Alex Lam S.L. wrote: At first glance, your production server's numbers are looking fairly similar to the "small file workload" results of your development server. I thought you were saying that

[zfs-discuss] Build 119 CIFS / Unix File Permission Oddity

2009-08-11 Thread Michael Sichler
Background: Have a test server running SXCE Build 119 configured as a CIFS server in Domain Mode. Build 119 was required because in our test environment we have a Windows 2008 Domain and the DCs have SP2 installed. I joined the server to the domain, created a ZFS file system and shared it out.
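The setup described is roughly along these lines (names hypothetical, and the exact domain-join and idmap steps vary by build):
# zfs create -o casesensitivity=mixed tank/cifs
# zfs set sharesmb=name=cifs tank/cifs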

Re: [zfs-discuss] Can the new consumer NAS devices run OpenSolaris?

2009-08-11 Thread roland
> Re-surfacing an old thread. I was wondering myself if there are any home-use commercial NAS devices with ZFS. I did find that there is the Thecus 7700. But it appears to come with Linux and use ZFS via FUSE, which I (perhaps unjustly) don't feel comfortable with :) No, you justly feel uncomfortable

Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-11 Thread Sascha
Hi Darren, I tried exactly the same, but it doesn't seem to work. First the size of the disk:
echo | format -e | grep -i 05A
17. c6t6001438002A5435A0001005Ad0 /scsi_vhci/s...@g6001438002a5435a0001005a
Then creating a zpool:
zpool create -m /zones/

Re: [zfs-discuss] new logbias property

2009-08-11 Thread Eric Schrock
On 08/11/09 06:03, Darren J Moffat wrote: I thought so too initially, then I changed my mind and I like it the way it is. The reason is that describing the intent allows changing the implementation while keeping the meaning. It is the intent that matters to the administrator, not the implementation

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread David Magda
On Tue, August 11, 2009 10:39, Ed Spencer wrote: > I suspect that if we 'rsync' one of these filesystems to a second server/pool that we would also see a performance increase equal to what we see on the development server. (I don't know how zfs send and receive Rsync has to traverse the entire
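For comparison, zfs send/receive replicates at the block level rather than walking the directory tree; a minimal sketch, names hypothetical:
# zfs snapshot mail/store01@copy1
# zfs send mail/store01@copy1 | ssh otherhost zfs receive tank/store01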

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Richard Elling
On Aug 11, 2009, at 7:39 AM, Ed Spencer wrote: On Tue, 2009-08-11 at 07:58, Alex Lam S.L. wrote: At first glance, your production server's numbers are looking fairly similar to the "small file workload" results of your development server. I thought you were saying that the development server

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Ed Spencer
On Tue, 2009-08-11 at 07:58, Alex Lam S.L. wrote: > At first glance, your production server's numbers are looking fairly similar to the "small file workload" results of your development server. > I thought you were saying that the development server has faster performance? The development

Re: [zfs-discuss] SSD (SLC) for cache...

2009-08-11 Thread David Magda
On Tue, August 11, 2009 09:24, Marcelo Leal wrote: > Many companies (including Sun) have hardware with support only for SLC... As I need both, I just want to hear your experiences using SLC SSDs for ZFS cache. One point is cost, but I want to know if the performance is much different, because

Re: [zfs-discuss] new logbias property

2009-08-11 Thread Darren J Moffat
przemol...@poczta.fm wrote: On Tue, Aug 11, 2009 at 05:42:31AM -0700, Robert Milkowski wrote: Hi, I like the new feature but I was thinking that maybe the keywords being used should be different? Currently it is: # zfs set logbias=latency {dataset} # zfs set logbias=throughput {dataset} Maybe

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Mike Gerdts
On Tue, Aug 11, 2009 at 7:33 AM, Ed Spencer wrote: > I've come up with a better name for the concept of file and directory fragmentation, which is "Filesystem Entropy". Where, over time, an active and volatile filesystem moves from an organized state to a disorganized state, resulting in backup

[zfs-discuss] SSD (SLC) for cache...

2009-08-11 Thread Marcelo Leal
Hello there... Many companies (including Sun) have hardware with support only for SLC... As I need both, I just want to hear your experiences using SLC SSDs for ZFS cache. One point is cost, but I want to know if the performance is much different, because the two are created specifically to p
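Whichever flash type wins out, attaching it as a read cache device is the same one-liner either way (pool and device names hypothetical):
# zpool add tank cache c2t0d0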

Re: [zfs-discuss] new logbias property

2009-08-11 Thread przemolicc
On Tue, Aug 11, 2009 at 05:42:31AM -0700, Robert Milkowski wrote: > Hi, I like the new feature but I was thinking that maybe the keywords being used should be different? > Currently it is: > # zfs set logbias=latency {dataset} > # zfs set logbias=throughput {dataset} > Maybe it would

Re: [zfs-discuss] new logbias property

2009-08-11 Thread Darren J Moffat
Robert Milkowski wrote: Hi, I like the new feature but I was thinking that maybe the keywords being used should be different? Currently it is: # zfs set logbias=latency {dataset} # zfs set logbias=throughput {dataset} Maybe it would be clearer this way: # zfs set logdest=dedicated {dataset}

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Alex Lam S.L.
At first glance, your production server's numbers are looking fairly similar to the "small file workload" results of your development server. I thought you were saying that the development server has faster performance? Alex. On Tue, Aug 11, 2009 at 1:33 PM, Ed Spencer wrote: > I've come up w

[zfs-discuss] new logbias property

2009-08-11 Thread Robert Milkowski
Hi, I like the new feature but I was thinking that maybe the keywords being used should be different? Currently it is: # zfs set logbias=latency {dataset} # zfs set logbias=throughput {dataset} Maybe it would be clearer this way: # zfs set logdest=dedicated {dataset} # zfs set logdest=pool {dataset}
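For context, the property as shipped is used like this (dataset name hypothetical):
# zfs set logbias=throughput tank/db
# zfs get logbias tank/db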

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Ed Spencer
I've come up with a better name for the concept of file and directory fragmentation, which is "Filesystem Entropy". Where, over time, an active and volatile filesystem moves from an organized state to a disorganized state, resulting in backup difficulties. Here are some stats which illustrate the

Re: [zfs-discuss] Recovering from ZFS command lock up after yanking a non-redundant drive?

2009-08-11 Thread Ross
... which sounds very similar to issues I've raised many times. ZFS should have the ability to double-check what a drive is doing, and speculatively time out a device that appears to be failing in order to maintain pool performance. If a single drive in a redundant pool can be seen to be responsible

Re: [zfs-discuss] Live resize/grow of iscsi shared ZVOL

2009-08-11 Thread Fajar A. Nugraha
On Tue, Aug 11, 2009 at 4:14 PM, Martin Wheatley wrote: > Did anyone reply to this question? > We have the same issue and our Windows admins don't see why the iSCSI target should be disconnected when the underlying storage is extended Is there any iscsi target that can be extended without disconnecting
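Growing the zvol itself is a one-line property change; whether the initiator sees the new size without reconnecting is the open question here. A sketch, names and sizes hypothetical:
# zfs set volsize=200G tank/iscsivol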

Re: [zfs-discuss] Recovering from ZFS command lock up after yanking a non-redundant drive?

2009-08-11 Thread Sanjeev
Hi Chris, On Sun, Aug 09, 2009 at 05:53:12PM -0700, Chris Baker wrote: > OK - had a chance to do more testing over the weekend. Firstly some extra data: > Moving the mirror to both drives on ICH10R ports, and on sudden disk power-off the mirror faulted cleanly to the remaining drive, no problem

Re: [zfs-discuss] Live resize/grow of iscsi shared ZVOL

2009-08-11 Thread Martin Wheatley
Did anyone reply to this question? We have the same issue and our Windows admins don't see why the iSCSI target should be disconnected when the underlying storage is extended

Re: [zfs-discuss] Adding a single disk to RAIDZ pool

2009-08-11 Thread Thomas Burgess
This sounds right. There's also the problem of adding non-redundant vdev types to redundant ones... but yeah, it kind of defeats the purpose. The thing people seem to miss is that ZFS comes with a price: that price is you need to plan your pool AND your expansion ahead of time. If you want to grow pools w
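The planned-expansion route alluded to is to grow a pool by adding a whole new redundant vdev rather than a single disk; a sketch, device names hypothetical:
# zpool add tank raidz c3t0d0 c3t1d0 c3t2d0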