Re: [zfs-discuss] SSDs with a SCSI SCA interface?

2010-02-24 Thread Brandon High
On Thu, Dec 3, 2009 at 11:06 PM, Erik Trimble wrote: > I need either: > > (a) a SSD with an Ultra160/320 parallel interface (I can always find an > interface adapter, so I'm not particular about whether it's a 68-pin or SCA) > > (b)  a  SAS or SATA to UltraSCSI adapter (preferably with a SCA inter

Re: [zfs-discuss] zfs sequential read performance

2010-02-24 Thread Robert Milkowski
On 24/02/2010 02:21, v wrote: Hi, Thanks for the reply. So the problem of sequential reads after random writes does exist in zfs. I wonder if it is a real problem, ie, for example causing longer backup times, and will it be addressed in future? Once the famous bp rewriter is integrated and a de

[zfs-discuss] Recommendations required for home file server config

2010-02-24 Thread li...@di.cx
Hi all, (Posting via email due to my forum account being marked inactive!) So I'm re-visiting OpenSolaris and ZFS. Context is my old file server died (Windows 2008 with 4 drives on an Intel Matrix ICH9 controller in RAID 5) and I'm moving into a new house and want to build something new and flas

[zfs-discuss] opensolaris COMSTAR io stats

2010-02-24 Thread Bruno Sousa
Hi all, Using the "old" way of sharing volumes over iscsi in zfs (zfs set shareiscsi=on) I can see I/O stats per iscsi volume by running the command iscsitadm show stats -I 1 volume. However I couldn't find something similar in the new framework, COMSTAR. Probably I'm missing something, so if anyone has some ti

[zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Steve
I would like to ask a question regarding ZFS performance overhead when having hundreds of millions of files. We have a storage solution where one of the datasets has a folder containing about 400 million files and folders (very small 1K files). What kind of overhead do we get from this kind of t

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-02-24 Thread Lutz Schumann
I fully agree. This needs fixing. I can think of so many situations, where device names change in OpenSolaris (especially with movable pools). This problem can lead to serious data corruption. Besides persistent L2ARC (which is much more difficult I would say) - Making L2ARC also rely on label

Re: [zfs-discuss] zfs sequential read performance

2010-02-24 Thread Edward Ned Harvey
> I wonder if it is a real problem, ie, for example cause longer backup > time, will it be addressed in future? It doesn't cause longer backup time, as long as you're doing a "zfs send | zfs receive" But it could cause longer backup time if you're using something like tar. The only way to "solve
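A minimal sketch of the send/receive approach Edward describes; the pool, dataset, and snapshot names here are illustrative, not from the thread:

```shell
# Full backup: snapshot the source and stream the whole snapshot to
# another pool (the stream could equally be piped over ssh to a remote host).
zfs snapshot tank/data@mon
zfs send tank/data@mon | zfs receive backup/data

# Later backups: send only the blocks changed since the previous snapshot.
zfs snapshot tank/data@tue
zfs send -i tank/data@mon tank/data@tue | zfs receive backup/data
```

Because send/receive streams blocks rather than walking the directory tree file by file, random-write fragmentation hurts it far less than a file-level tool like tar.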

Re: [zfs-discuss] Import zpool from FreeBSD in OpenSolaris

2010-02-24 Thread lists
On Wed, Feb 24, 2010 at 3:31 AM, Ethan wrote: > On Tue, Feb 23, 2010 at 21:22, Bob Friesenhahn > wrote: >> Just a couple of days ago there was discussion of importing disks from >> Linux FUSE zfs.  The import was successful.  The same methods used >> (directory containing symbolic links to desire

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Peter Eriksson
> What kind of overhead do we get from this kind of thing? Overheadache... [i](Thanks to Kronberg for the answer)[/i]

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Kjetil Torgrim Homme
Steve writes: > I would like to ask a question regarding ZFS performance overhead when > having hundreds of millions of files > > We have a storage solution, where one of the datasets has a folder > containing about 400 million files and folders (very small 1K files) > > What kind of overhead do

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Steve
Hi Kjetil. Actually we are using hardware RAID5 on this setup... so Solaris only sees a single device... The overhead I was thinking of was more in the pointer structures... (bearing in mind this is a 128-bit file system), I would guess that memory requirements would be HUGE for all these fil

Re: [zfs-discuss] Adding a zfs mirror drive to rpool - new drive formats to one cylinder less

2010-02-24 Thread David Dyer-Bennet
On Tue, February 23, 2010 17:58, tomwaters wrote: > Thanks for that. > > It seems strange though that the two disks, which are from the same > manufacturer, same model, same firmware and similar batch/serial's behave > differently. I've found that the ways of writing labels and partitions in Sola

Re: [zfs-discuss] zfs sequential read performance

2010-02-24 Thread Edward Ned Harvey
> Once the famous bp rewriter is integrated and a defrag functionality > built on top of it you will be able to re-arrange your data again so it > is sequential again. Then again, this would also rearrange your data to be sequential again: cp -p somefile somefile.tmp ; mv -f somefile.tmp somefile

Re: [zfs-discuss] Import zpool from FreeBSD in OpenSolaris

2010-02-24 Thread Ethan
On Wed, Feb 24, 2010 at 08:12, wrote: > On Wed, Feb 24, 2010 at 3:31 AM, Ethan wrote: > > On Tue, Feb 23, 2010 at 21:22, Bob Friesenhahn > > wrote: > >> Just a couple of days ago there was discussion of importing disks from > >> Linux FUSE zfs. The import was successful. The same methods used

Re: [zfs-discuss] SSDs with a SCSI SCA interface?

2010-02-24 Thread Al Hopper
On Tue, Feb 23, 2010 at 2:09 PM, Erik Trimble wrote: > Al Hopper wrote: >> >> >> On Fri, Dec 4, 2009 at 1:06 AM, Erik Trimble > > wrote: >> >>    Hey folks. >> >>    I've looked around quite a bit, and I can't find something like this: >> >>    I have a bunch of older

[zfs-discuss] Interrupt sharing

2010-02-24 Thread David Dyer-Bennet
On Tue, February 23, 2010 17:20, Chris Ridd wrote: > To see what interrupts are being shared: > > # echo "::interrupts -d" | mdb -k > > Running intrstat might also be interesting. This just caught my attention. I'm not the original poster, but this sparked something I've been wanting to know ab

[zfs-discuss] snv_133 - high cpu - update

2010-02-24 Thread Bruno Sousa
Hi all, I still didn't find the problem but it seems to be related with interrupt sharing between the onboard network cards (broadcom) and the intel 10gbE PCI-e card. Running a simple iperf from a linux box to my zfs box, if I use bnx2 or bnx3 I get performance over 100 mbs, but if I use bnx0, bnx1

Re: [zfs-discuss] Import zpool from FreeBSD in OpenSolaris

2010-02-24 Thread Mark J Musante
On Tue, 23 Feb 2010, patrik wrote: I want to import my zpools from FreeBSD 8.0 in OpenSolaris 2009.06.
secure          UNAVAIL  insufficient replicas
  raidz1        UNAVAIL  insufficient replicas
    c8t1d0p0    ONLINE
    c8t2d0s2    ONLINE
    c8t3d0s8    UNAVAIL
    c

Re: [zfs-discuss] snv_133 - high cpu

2010-02-24 Thread Bart Smaalders
On 02/23/10 15:20, Chris Ridd wrote: On 23 Feb 2010, at 19:53, Bruno Sousa wrote: The system becomes really slow during the data copy over the network, but if I copy data between 2 pools on the box I don't notice that issue, so probably I may be hitting some sort of interrupt conflict in the networ

Re: [zfs-discuss] snv_133 - high cpu

2010-02-24 Thread Andy Bowers
Hi Bart, yep, I got Bruno to run a kernel profile lockstat... it does look like the mpt issue.. andy
Count indv cuml rcnt     nsec Hottest CPU+PIL  Caller
 2861   7%  55% 0.00

Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-24 Thread Troy Campbell
http://www.oracle.com/technology/community/sun-oracle-community-continuity.html Half way down it says: Will Oracle support Java and OpenSolaris User Groups, as Sun has? Yes, Oracle will indeed enthusiastically support the Java User Groups, OpenSolaris User Groups, and other Sun-related user gro

Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-24 Thread Marc Nicholas
On Wed, Feb 24, 2010 at 2:02 PM, Troy Campbell wrote: > > http://www.oracle.com/technology/community/sun-oracle-community-continuity.html > > Half way down it says: > Will Oracle support Java and OpenSolaris User Groups, as Sun has? > > Yes, Oracle will indeed enthusiastically support the Java Use

[zfs-discuss] disks in zpool gone at the same time

2010-02-24 Thread Evgueni Martynov
Hi, Yesterday I got all the disks in my two zpools disconnected. They are not real disks - LUNs from a StorageTek 2530 array. What could that be - a failing LSI card or the mpt driver in 2009.06? After reboot I got four disks in FAILED state - zpool clear fixed things with resilvering. Here is how it sta

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Bob Friesenhahn
On Wed, 24 Feb 2010, Steve wrote: The overhead I was thinking of was more in the pointer structures... (bearing in mind this is a 128 bit file system), I would guess that memory requirements would be HUGE for all these files...otherwise arc is gonna struggle, and paging system is going mental

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Tomas Ögren
On 24 February, 2010 - Bob Friesenhahn sent me these 1,0K bytes: > On Wed, 24 Feb 2010, Steve wrote: >> >> The overhead I was thinking of was more in the pointer structures... >> (bearing in mind this is a 128 bit file system), I would guess that >> memory requirements would be HUGE for all th

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Nicolas Williams
On Wed, Feb 24, 2010 at 02:09:42PM -0600, Bob Friesenhahn wrote: > I have a directory here containing a million files and it has not > caused any strain for zfs at all although it can cause considerable > stress on applications. The biggest problem is always the apps. For example, ls by default

Re: [zfs-discuss] snv_133 - high cpu

2010-02-24 Thread Bruno Sousa
Yes I'm using the mpt driver. In total this system has 3 HBA's, 1 internal (Dell perc), and 2 Sun non-raid HBA's. I'm also using multipath, but if I disable multipath I have pretty much the same results.. Bruno On 24-2-2010 19:42, Andy Bowers wrote: > Hi Bart, > yep, I got Bruno to run a

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Toby Thain
On 24-Feb-10, at 3:38 PM, Tomas Ögren wrote: On 24 February, 2010 - Bob Friesenhahn sent me these 1,0K bytes: On Wed, 24 Feb 2010, Steve wrote: The overhead I was thinking of was more in the pointer structures... (bearing in mind this is a 128 bit file system), I would guess that memory req

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Eric D. Mudama
On Wed, Feb 24 at 14:09, Bob Friesenhahn wrote: 400 million tiny files is quite a lot and I would hate to use anything but mirrors with so many tiny files. And at 400 million, you're in the realm of needing mirrors of SSDs, with their fast random reads. Even at the 500+ IOPS of good SAS drives

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Steve
It was never the intention that this storage system should be used in this way... And I am now clearing a lot of this stuff out... These are very static files, rarely used... so traversing them in any way is a rare occasion... What has happened is that reading and writing large files which are

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Andrey Kuzmin
On Wed, Feb 24, 2010 at 11:09 PM, Bob Friesenhahn wrote: > On Wed, 24 Feb 2010, Steve wrote: >> >> The overhead I was thinking of was more in the pointer structures... >> (bearing in mind this is a 128 bit file system), I would guess that memory >> requirements would be HUGE for all these files...

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Steve
That's not the issue here, as they are spread out in a folder structure based on an integer split into hex blocks... 00/00/00/01 etc... But the number of pointers involved with all these files and directories (which are files) must have an impact on a system with limited RAM? There is 4GB RAM

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Bob Friesenhahn
On Wed, 24 Feb 2010, Steve wrote: What has happened is that reading and writing large files which are unrelated to these ones has become appallingly slow... So I was wondering if just the presence of so many files was in some way putting a lot of stress on the pool, even if these files aren't

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Richard Elling
On Feb 24, 2010, at 1:17 PM, Steve wrote: > It was never the intention that this storage system should be used in this > way... > > And I am now clearning alot of this stuff out.. > > This is very static files, and is rarely used... so traversing it any way is > a rare occasion... > > What ha

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Andrey Kuzmin
On Thu, Feb 25, 2010 at 12:26 AM, Steve wrote: > that's not the issue here, as they are spread out in a folder structure based > on an integer split into hex blocks...  00/00/00/01 etc... > > but the number of pointers involved with all these files, and directories > (which are files) > must have

[zfs-discuss] How to know the recordsize of a file

2010-02-24 Thread Jesus Cea
I would like to know the blocksize of a particular file. I know the blocksize for a particular file is decided at creation time, as a function of the write sizes done and the recordsize property of the dataset. How can I access that information? Some zdb

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Andrey Kuzmin
On Thu, Feb 25, 2010 at 12:34 AM, Andrey Kuzmin wrote: > On Thu, Feb 25, 2010 at 12:26 AM, Steve wrote: >> that's not the issue here, as they are spread out in a folder structure based >> on an integer split into hex blocks...  00/00/00/01 etc... >> >> but the number of pointers involved with all

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Steve
Well I am deleting most of them anyway... they are not needed anymore... Will deletion solve the problem... or do I need to do something more to defrag the file system? I have understood that defrag will not be available until this block rewrite thing is done?

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Robert Milkowski
On 24/02/2010 21:31, Bob Friesenhahn wrote: On Wed, 24 Feb 2010, Steve wrote: What has happened is that reading and writing large files which are unrelated to these ones has become appallingly slow... So I was wondering if just the presence of so many files was in some way putting alot of s

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Bob Friesenhahn
On Wed, 24 Feb 2010, Robert Milkowski wrote: except for one bug which has been fixed which had to do with consuming lots of CPU to find a free block I don't think you are right. You don't have to set recordsize to smaller value for small files. Recordsize property sets a maximum allowed record

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread David Dyer-Bennet
On Wed, February 24, 2010 14:39, Nicolas Williams wrote: > On Wed, Feb 24, 2010 at 02:09:42PM -0600, Bob Friesenhahn wrote: >> I have a directory here containing a million files and it has not >> caused any strain for zfs at all although it can cause considerable >> stress on applications. > > The

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Nicolas Williams
On Wed, Feb 24, 2010 at 03:31:51PM -0600, Bob Friesenhahn wrote: > With millions of such tiny files, it makes sense to put the small > files in a separate zfs filesystem which has its recordsize property > set to a size not much larger than the size of the files. This should > reduce waste, res
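A sketch of that suggestion; the dataset name and the 8K record size are illustrative assumptions, to be matched to the actual file sizes:

```shell
# Dedicated filesystem for the tiny files, with recordsize capped near
# their typical size so each small file doesn't occupy an oversized record.
zfs create -o recordsize=8K tank/smallfiles
zfs get recordsize tank/smallfiles
```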

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Adam Serediuk
I manage several systems with near a billion objects (largest is currently 800M) on each and also discovered slowness over time. This is on X4540 systems with average file sizes being ~5KB. In our environment the following readily sped up performance significantly: Do not use RAID-Z. Use as

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Adam Serediuk
Also you will need to ensure that atime is turned off for the ZFS volume(s) in question as well as in any client-side NFS mount settings. There are a number of client-side NFS tuning parameters that can be applied if you are using NFS clients with this system. Attribute caches, atime, diratime,
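For example (the dataset name and client mount options below are illustrative):

```shell
# Server side: stop updating access times on every read.
zfs set atime=off tank/smallfiles

# Linux NFS client side: suppress atime/diratime updates and lengthen
# the attribute-cache timeout to cut metadata round trips.
mount -t nfs -o noatime,nodiratime,actimeo=60 server:/tank/smallfiles /mnt
```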

Re: [zfs-discuss] How to know the recordsize of a file

2010-02-24 Thread Robert Milkowski
On 24/02/2010 21:35, Jesus Cea wrote: I would like to know the blocksize of a particular file. I know the blocksize for a particular file is decided at creation time, as a function of the write sizes done and the recordsize property of the dataset. How
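One way to inspect it, assuming the zdb syntax of OpenSolaris-era builds (the path and object number below are hypothetical):

```shell
# The file's object number is its inode number...
ls -i /tank/fs/file.bin        # e.g. prints "8 /tank/fs/file.bin"
# ...and zdb's verbose object dump reports the data block size (dblk)
# actually in use for that object.
zdb -ddddd tank/fs 8
```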

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Robert Milkowski
On 24/02/2010 21:54, Bob Friesenhahn wrote: On Wed, 24 Feb 2010, Robert Milkowski wrote: except for one bug which has been fixed which had to do with consuming lots of CPU to find a free block I don't think you are right. You don't have to set recordsize to smaller value for small files. Reco

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Robert Milkowski
On 24/02/2010 21:40, Steve wrote: Well I am deleting most of them anyway... they are not needed anymore... Will deletion solve the problem... or do I need to do something more to defrag the file system? I have understood that defrag will not be available until this block rewrite thing is don

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread David Dyer-Bennet
On 2/24/2010 4:11 PM, Stefan Walk wrote: On 24 Feb 2010, at 22:57, David Dyer-Bennet wrote: On Wed, February 24, 2010 14:39, Nicolas Williams wrote: On Wed, Feb 24, 2010 at 02:09:42PM -0600, Bob Friesenhahn wrote: I have a directory here containing a million files and it has not caused any

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Kjetil Torgrim Homme
"David Dyer-Bennet" writes: > Which is bad enough if you say "ls". And there's no option to say > "don't sort" that I know of, either. /bin/ls -f "/bin/ls" makes sure an alias for "ls" to "ls -F" or similar doesn't cause extra work. you can also write "\ls -f" to ignore a potential alias. wi

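The unsorted-listing trick from the previous message, sketched with ordinary coreutils (the demo directory is hypothetical):

```shell
# -f disables sorting (and implies -a), so entries stream back in the
# order readdir() returns them; ls no longer has to hold and sort the
# whole directory before printing anything.
mkdir -p /tmp/lsdemo
touch /tmp/lsdemo/c /tmp/lsdemo/a /tmp/lsdemo/b
\ls -f /tmp/lsdemo    # leading backslash bypasses any shell alias for ls
```
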
Re: [zfs-discuss] Adding a zfs mirror drive to rpool - new drive formats to one cylinder less

2010-02-24 Thread tomwaters
Thanks David. Re. the starting cylinder, it was more that on c8t0d0 the partition started at zero and on c8t1d0 it started at 1. ie.
c8t0d0:
Partition  Status  Type      Start  End    Length  %
1          Active  Solaris2  0      30401  30402   100
c8t1d0:
Partition  Status  Type

[zfs-discuss] Oops, ran zfs destroy after renaming a folder and deleted my file system.

2010-02-24 Thread tomwaters
Ok, I know NOW that I should have used zfs rename...but just for the record, and to give you folks a laugh, this is the mistake I made... I created a zfs file system, cloud/movies and shared it. I then filled it with movies and music. I then decided to rename it, so I used rename in the Gnome to

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Jason King
Could also try /usr/gnu/bin/ls -U. I'm working on improving the memory profile of /bin/ls (as it gets somewhat excessive when dealing with large directories), which as a side effect should also help with this. Currently /bin/ls allocates a structure for every file, and doesn't output anything unt

Re: [zfs-discuss] Oops, ran zfs destroy after renaming a folder and deleted my file system.

2010-02-24 Thread Ed Jobs
On Thursday 25 of February 2010 03:46, tomwaters wrote: > Ok, I know NOW that I should have used zfs rename...but just for the > record, and to give you folks a laugh, this is the mistake I made... > > I created a zfs file system, cloud/movies and shared it. > I then filled it with movies and musi

Re: [zfs-discuss] snv_133 - high cpu

2010-02-24 Thread Bart Smaalders
On 02/24/10 12:57, Bruno Sousa wrote: Yes I'm using the mpt driver. In total this system has 3 HBA's, 1 internal (Dell perc), and 2 Sun non-raid HBA's. I'm also using multipath, but if I disable multipath I have pretty much the same results.. Bruno From what I understand, the fix is expected

Re: [zfs-discuss] Oops, ran zfs destroy after renaming a folder and deleted my file system.

2010-02-24 Thread tomwaters
Well, both I guess... I thought the dataset name was based upon the file system... so I was assuming that if I renamed the zfs filesystem (with zfs rename) it would also rename the dataset... ie... #zfs create tank/fred gives... NAME USED AVAIL REFER MOUNTPOINT tank/fr
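The ZFS-level rename the poster was looking for, following the tank/fred example from the message:

```shell
zfs create tank/fred
zfs rename tank/fred tank/barney   # renames the dataset and moves its default mountpoint
zfs list -r tank                   # now lists tank/barney, not tank/fred
```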

Re: [zfs-discuss] Oops, ran zfs destroy after renaming a folder and deleted my file system.

2010-02-24 Thread David Dyer-Bennet
On 2/24/2010 7:46 PM, tomwaters wrote: Ok, I know NOW that I should have used zfs rename...but just for the record, and to give you folks a laugh, this is the mistake I made... I created a zfs file system, cloud/movies and shared it. I then filled it with movies and music. I then decided to ren

[zfs-discuss] Moving dataset to another zpool but same mount?

2010-02-24 Thread Gregory Gee
I need to move a dataset to another zpool, but I need to keep the same mount point. I have a zpool called files and datasets called mail, home and VM. files files/home files/mail files/VM I want to move the files/VM to another zpool, but keep the same mount point. What would be the right step
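One common approach, sketched with an illustrative target pool (tank) and snapshot name; verify the copy before destroying the source:

```shell
zfs snapshot files/VM@move
zfs send files/VM@move | zfs receive tank/VM
zfs set mountpoint=none files/VM        # release the old mount point
zfs set mountpoint=/files/VM tank/VM    # reuse the same path on the new pool
# once satisfied the data is intact: zfs destroy -r files/VM
```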

[zfs-discuss] raidz2 array FAULTED with only 1 drive down

2010-02-24 Thread Kocha
I recently had a hard drive die on my 6 drive raidz2 array (4+2). Unfortunately, once the dead drive didn't register anymore, Linux decided to rearrange all of the drive names such that zfs couldn't figure out which drives went where. After much hair pulling, I gave up on Linux and w

Re: [zfs-discuss] snv_133 - high cpu

2010-02-24 Thread Bruno Sousa
Hi, Until it's fixed, should the 132 build be used instead of the 133? Bruno On 25-2-2010 3:22, Bart Smaalders wrote: > On 02/24/10 12:57, Bruno Sousa wrote: >> Yes I'm using the mpt driver. In total this system has 3 HBA's, 1 >> internal (Dell perc), and 2 Sun non-raid HBA's. >> I'm also using m