Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-06 Thread Christo Kutrovsky
Eric, thanks for clarifying. Could you confirm the release for #1 ? As "today" can be misleading depending on the user. Is there a schedule/target for #2 ? And just to confirm the alternative to turn off the ZIL globally is the equivalent to always throwing away some commited data on a crash/r

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-06 Thread Eric Schrock
On Feb 6, 2010, at 10:18 PM, Christo Kutrovsky wrote:
> Me too, I would like to know the answer.
>
> I am considering Gigabyte's i-RAM for ZIL, but I don't want to worry what happens if the battery dies after a system crash.

There are two different things here:

1. Opening a pool with a mis

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-06 Thread Christo Kutrovsky
Me too, I would like to know the answer. I am considering Gigabyte's i-RAM for ZIL, but I don't want to worry what happens if the battery dies after a system crash.

[zfs-discuss] acl's and new dirs

2010-02-06 Thread Thomas Burgess
I've got a strange issue; if this is covered elsewhere, I apologize in advance for my newbness. I've got a couple of ZFS filesystems shared over CIFS and NFS, and I've managed to get ACLs working the way I want, provided things are accessed via CIFS and NFS. If I create a new dir via CIFS or NFS then the ac
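
In case it helps anyone searching the archives later, the knobs that usually matter here are the aclinherit property and the inheritance flags on the ACEs themselves. A sketch, with an assumed dataset name tank/share:

    # keep inherited ACEs intact on newly created files/dirs
    zfs set aclinherit=passthrough tank/share

    # grant a group access to the dir and everything created below it
    # (file_inherit/dir_inherit make the ACE propagate to new entries)
    chmod A+group:staff:read_data/write_data/execute:file_inherit/dir_inherit:allow /tank/share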

Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Rob Logan
> I like the original Phenom X3 or X4

We all agree RAM is the key to happiness. The debate is what offers the most ECC RAM for the least $. I failed to realize the AM3 CPUs accepted unbuffered ECC DDR3-1333 like Lynnfield. To use Intel's 6 slots vs AMD's 4 slots, one must use registered ECC. So t

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-06 Thread Maurice Volaski
For those who've been suffering this problem and who have non-Sun JBODs, could you please let me know what model of JBOD and cables (including length thereof) you have in your configuration. For those of you who have been running xVM without MSI support, could you please confirm whether the devic
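
For anyone who wants to test the MSI theory directly, the workaround that was circulating is sketched below; the tunable name is my recollection, so verify it against the bug report for your build:

    # /etc/system -- stop the mpt driver from using MSI/MSI-X,
    # then reboot and see whether the timeouts persist
    set mpt:mpt_enable_msi = 0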

Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Erik Trimble
Bob Friesenhahn wrote: On Fri, 5 Feb 2010, Rob Logan wrote: Intel's RAM is faster because it needs to be. I'm confused how AMD's dual channel, two way interleaved 128-bit DDR2-667 into an on-cpu controller is faster than Intel's Lynnfield dual channel, Rank and Channel interleaved DDR3-1333 in

Re: [zfs-discuss] Recover ZFS Array after OS Crash?

2010-02-06 Thread Cesare
Hi Cindy, thanks for the hint. Nice feature, though as far as I can see it is not yet implemented in Solaris 10 (I have Update 8 on my production systems for now). Right? Do you have a roadmap for when this will happen? By the way, since my zpool is mirrored, I'll try to follow the first part of Mark's blog to have a second

Re: [zfs-discuss] list new files/activity monitor

2010-02-06 Thread Nilsen, Vidar
"Kjetil Torgrim Homme" writes: >yes, File Events Notification (FEN) > > http://blogs.sun.com/praks/entry/file_events_notification > >you access this through the event port API. > > http://developers.sun.com/solaris/articles/event_completion.html > >gnome-vfs uses FEN, but unfortunately gnomevfs-m

Re: [zfs-discuss] list new files/activity monitor

2010-02-06 Thread Kjetil Torgrim Homme
"Nilsen, Vidar" writes: > And once an hour I run a script that checks for new dirs last 60 > minutes matching some criteria, and outputs the path to an > IRC-channel. Where we can see if someone else has added new stuff. > > Method used is “find –mmin -60”, which gets horrible slow when more > da

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-06 Thread Frank Cusack
On 2/6/10 4:51 PM +0100 Kjetil Torgrim Homme wrote: the pricing does look strange, and I think it would be better to raise the price of the enclosure (which is silly cheap when empty IMHO) and reduce the drive prices somewhat. but that's just psychology, and doesn't really matter for total cost.

Re: [zfs-discuss] ZFS send/recv checksum transmission

2010-02-06 Thread grarpamp
>> Well, ok, and in my limited knowhow... zfs set checksum=sha256 only covers user scribbled data [POSIX file metadata, file contents, directory structure, ZVOL blocks] and not necessarily any zfs filesystem internals.
>
> metadata is fletcher4 except for the uberblocks which are self-
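
To make the scope of that concrete: the checksum property is per-dataset and only governs user data blocks written after the change; a sketch with an assumed dataset name:

    # only newly written blocks get the new checksum;
    # metadata stays fletcher4 regardless
    zfs set checksum=sha256 tank/fs
    zfs get checksum tank/fs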

Re: [zfs-discuss] most of my space is gone

2010-02-06 Thread David E. Anderson
how can I delete obsolete BEs if I have run out of space and have to boot from LiveCD?

On Sat, Feb 6, 2010 at 9:33 AM, Bill Sommerfeld wrote:
> On 02/06/10 08:38, Frank Middleton wrote:
>> AFAIK there is no way to get around this. You can set a flag so that pkg tries to empty /var/pkg/downl
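
One hedged recipe from the LiveCD (pool and BE names are placeholders; make very sure the BE you destroy is not the one you boot from):

    # import the root pool under an alternate root
    pfexec zpool import -f -R /a rpool

    # see which boot environment datasets exist, then drop an old one
    zfs list -r rpool/ROOT
    pfexec zfs destroy -r rpool/ROOT/old-be   # placeholder BE name

    # export cleanly before rebooting into the installed system
    pfexec zpool export rpool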

Re: [zfs-discuss] most of my space is gone

2010-02-06 Thread Frank Middleton
On 02/ 6/10 11:50 AM, Thorsten Hirsch wrote: Uhmm... well, no, but there might be something left over. When I was doing an image-update last time, my / ran out of space. I couldn't even beadm destroy any old boot environment, because beadm told me that there's no space left. So what I did was "z

Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Bob Friesenhahn
On Fri, 5 Feb 2010, Rob Logan wrote: Intel's RAM is faster because it needs to be. I'm confused how AMD's dual channel, two way interleaved 128-bit DDR2-667 into an on-cpu controller is faster than Intel's Lynnfield dual channel, Rank and Channel interleaved DDR3-1333 into an on-cpu controller.

Re: [zfs-discuss] most of my space is gone

2010-02-06 Thread Bill Sommerfeld
On 02/06/10 08:38, Frank Middleton wrote: AFAIK there is no way to get around this. You can set a flag so that pkg tries to empty /var/pkg/downloads, but even though it looks empty, it won't actually become empty until you delete the snapshots, and IIRC you still have to manually delete the conte
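
If memory serves, the flag in question is an IPS image property; a sketch, hedged since the property name has moved around between builds:

    # ask pkg(1) to drop downloaded content once it is installed
    pfexec pkg set-property flush-content-cache-on-success True

    # the space only really comes back once snapshots holding the
    # old blocks are gone
    zfs list -t snapshot -r rpool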

Re: [zfs-discuss] ZFS send/recv checksum transmission

2010-02-06 Thread Richard Elling
On Feb 5, 2010, at 10:50 PM, grarpamp wrote:
>>> Perhaps I meant to say that the box itself [cpu/ram/bus/nic/io, except disk] is assumed to handle data with integrity. So say netcat is used as transport, zfs is using sha256 on disk, but only fletcher4 over the wire with send/recv
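
If the wire is the worry, one crude belt-and-braces approach is to fingerprint the stream itself on both ends; a sketch with invented host/dataset names (nc flags vary between implementations):

    # sender: capture, fingerprint, ship
    zfs send tank/fs@snap > /var/tmp/stream
    digest -a sha256 /var/tmp/stream
    nc receiver 9000 < /var/tmp/stream

    # receiver: catch, compare fingerprints, then apply
    nc -l 9000 > /var/tmp/stream
    digest -a sha256 /var/tmp/stream
    zfs recv tank/fs < /var/tmp/stream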

Re: [zfs-discuss] Recover ZFS Array after OS Crash?

2010-02-06 Thread Cindy Swearingen
Hi Cesare, If you want another way to replicate pools, you might be interested in the zpool split feature that Mark Musante integrated recently. You can read about it here: http://blogs.sun.com/mmusante/entry/seven_years_of_good_luck Cindy
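
The short version, with illustrative pool names (see Mark's post for the real details):

    # detach one half of each mirror into a new, separate pool
    zpool split tank tank2

    # the new pool comes out exported; import it to use it
    zpool import tank2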

Re: [zfs-discuss] list new files/activity monitor

2010-02-06 Thread Joerg Schilling
"Nilsen, Vidar" wrote: > Method used is "find -mmin -60", which gets horrible slow when more data > is added. This is a questionable "feature" from GNU find. A standard compliant extension with this ad more features is: find -mtime -1h See also sfind which is in: ftp://ftp.berlios.de/pub/sch

Re: [zfs-discuss] most of my space is gone

2010-02-06 Thread Thorsten Hirsch
Uhmm... well, no, but there might be something left over. When I was doing an image-update last time, my / ran out of space. I couldn't even beadm destroy any old boot environment, because beadm told me that there's no space left. So what I did was "zfs destroy /rpool/ROOT/opensolaris-6". After

Re: [zfs-discuss] most of my space is gone

2010-02-06 Thread Frank Middleton
On 02/ 6/10 11:21 AM, Thorsten Hirsch wrote: I wonder where ~10G have gone. All the subdirs in / use ~4.5G only (that might be the size of REFER in opensolaris-7), and my $HOME uses 38.5M, that's correct. But since rpool has a size of >15G there must be more than 10G somewhere. Do you have an

[zfs-discuss] most of my space is gone

2010-02-06 Thread Thorsten Hirsch
zpool tells me the following details for my rpool:

    SIZE=16G  ALLOC=14.5G  FREE=1.36G  CAP=91%

and zfs tells me these stats:

    rpool                     USED=15.1G  AVAIL=505M  REFER=83K  MOUNTPOINT=/rpool
    rpool/ROOT                USED=14.3G  AVAIL=505M  REFER=21K  MOUNTPOINT=legacy
    rpool/ROOT/opensolaris-7  USED=14.3G  AVAIL=505M  REFER=
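
A quick way to see where space like that hides (usually snapshots), assuming a build recent enough to have the space columns:

    # break USED down into snapshots, children, reservations, ...
    zfs list -r -o space rpool

    # and list the snapshots themselves
    zfs list -t snapshot -r rpool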

Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Edward Ned Harvey
> b> (4) Hold backups from windows machines, mac (time machine), linux.
>
> for time machine you will probably find yourself using COMSTAR and the GlobalSAN iSCSI initiator because Time Machine does not seem willing to work over NFS.

Otherwise, for Macs you should definitely us
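
In rough strokes, the COMSTAR side looks like the sketch below; the volume size and names are invented, and the GUID comes from sbdadm's output:

    # back the Time Machine LUN with a zvol
    zfs create -V 200g rpool/tmvol

    # enable COMSTAR and publish the zvol as a logical unit
    svcadm enable stmf
    sbdadm create-lu /dev/zvol/rdsk/rpool/tmvol
    stmfadm add-view 600144f0...        # GUID printed by sbdadm

    # bring up an iSCSI target for the GlobalSAN initiator to log in to
    svcadm enable -r svc:/network/iscsi/target:default
    itadm create-target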

[zfs-discuss] list new files/activity monitor

2010-02-06 Thread Nilsen, Vidar
Hi, I have a fileserver at home where my household stores common data, like downloaded content, pictures, etc. And once an hour I run a script that checks for new dirs from the last 60 minutes matching some criteria, and outputs the path to an IRC channel, where we can see if someone else has added n
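
The job itself is roughly the following (the path and the IRC hand-off are placeholders):

    #!/bin/sh
    # run hourly from cron: 0 * * * * /usr/local/bin/new-dirs.sh
    # report directories changed in the last hour (GNU find syntax)
    find /tank/data -type d -mmin -60 | while read d; do
        echo "new: $d"        # replace with your IRC bot hand-off
    done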

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-06 Thread Kjetil Torgrim Homme
Frank Cusack writes:
> On 2/4/10 8:00 AM +0100 Tomas Ögren wrote:
>> The "find -newer blah" suggested in other posts won't catch newer files with an old timestamp (which could happen for various reasons, like being copied with kept timestamps from somewhere else).
>
> good point. that is d

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-06 Thread Kjetil Torgrim Homme
matthew patton writes:
> true. but I buy a Ferrari for the engine and bodywork and chassis engineering. It is totally criminal what Sun/EMC/Dell/Netapp do charging customers 10x the open-market rate for standard drives. A RE3/4 or NS drive is the same damn thing no matter if I buy it from

Re: [zfs-discuss] 3ware 9650 SE

2010-02-06 Thread Kjetil Torgrim Homme
Alexandre MOREL writes:
> It's been a few days now that I've been trying to get a 9650SE 3ware controller working on OpenSolaris, and I found the following problem: the tw driver seems to work; I can see my controller with 3ware's tw_cli. I can see that 2 drives are created with the controller, but when

Re: [zfs-discuss] Recover ZFS Array after OS Crash?

2010-02-06 Thread Fajar A. Nugraha
On Sat, Feb 6, 2010 at 1:32 AM, J wrote:
> saves me hundreds on HW-based RAID controllers ^_^

... which you might need to fork over to buy additional memory or a faster CPU :P Don't get me wrong, zfs is awesome, but to do what it does it needs more CPU power and RAM (and possibly SSD) compared to other file
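
The recovery itself is pleasantly boring; a sketch with an assumed pool name:

    # after reinstalling the OS and reattaching the old disks:
    zpool import            # scan devices for importable pools
    zpool import -f tank    # -f since the pool was never exported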

Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Robert Milkowski
On 06/02/2010 02:38, Ross Walker wrote: On Feb 5, 2010, at 10:49 AM, Robert Milkowski wrote: Actually, there is. One difference is that when writing to a raid-z{1|2} pool compared to a raid-10 pool you should get better throughput if at least 4 drives are used. Basically it is due to the fact

Re: [zfs-discuss] Recover ZFS Array after OS Crash?

2010-02-06 Thread Cesare
On Fri, Feb 5, 2010 at 5:39 PM, A Darren Dunham wrote:
> Just install a new OS, attach the disks, and do a 'zfs import' to find the importable pools.

The same behaviour applies when moving the same ZFS pool to another host (with the same or a later ZFS version). I use this feature sometimes to