Re: [zfs-discuss] Performance drop during scrub?

2010-05-03 Thread Tonmaus
Adding the perspective that scrub could consume my hard disks' life may sound like a really good point for avoiding scrub on my system as far as possible, and thus avoiding the performance issues with scrub in the first place. I just don't buy this. Sorry. It's too

Re: [zfs-discuss] Performance drop during scrub?

2010-05-02 Thread Tonmaus
th the ZFS designers to implement a scrub function in ZFS and the author of the Best Practices to recommend performing this function frequently. I am hearing that you are coming to a different conclusion, and I would be interested in learning what could possibly be so open to interpretation in this. Regards,

Re: [zfs-discuss] Performance drop during scrub?

2010-04-29 Thread Tonmaus
nsumer weekly), not on redundancy level or pool configuration. Obviously, the issue under discussion affects all imaginable configurations, though; it may only vary in degree. Recommending not to use scrub doesn't even qualify as a workaround, in my view. Regards, Tonmaus

Re: [zfs-discuss] Performance drop during scrub?

2010-04-28 Thread Tonmaus
on the next service window might help. The advantage: such an option would be usable on any hardware. Regards, Tonmaus

Re: [zfs-discuss] Performance drop during scrub?

2010-04-28 Thread Tonmaus
epts in my opinion. Regards, Tonmaus

Re: [zfs-discuss] Help:Is zfs-fuse's performance is not good

2010-04-25 Thread Tonmaus
I wonder if this is the right place to ask, as the Filesystem in Userspace implementation is a separate project. On Solaris, ZFS runs in the kernel. FUSE implementations are slow, no doubt; the same goes for other FUSE implementations, such as the one for NTFS. Regards, Tonmaus

Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Tonmaus
that, if we are having a dispute about netiquette, that highlights the potential substance of the topic more than anything else. Regards, Tonmaus

Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Tonmaus
e case increases the smell of FUD. -Tonmaus

Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Tonmaus
Why don't you just fix the apparently broken link to your source, then? Regards, Tonmaus

Re: [zfs-discuss] Setting up ZFS on AHCI disks

2010-04-16 Thread Tonmaus
ng to them partition-wise? Regards, Tonmaus

Re: [zfs-discuss] Setting up ZFS on AHCI disks

2010-04-15 Thread Tonmaus
Hi, are the drives properly configured in cfgadm? Cheers, Tonmaus
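For reference, a quick way to check is to list the attachment points and make sure each disk shows up as connected/configured; something along these lines (the attachment point name is just an example):

  # cfgadm -al
  # cfgadm -c configure sata0/3

The first lists all attachment points and their state; the second configures a port that shows up as unconfigured.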

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-15 Thread Tonmaus
too expensive given the performance under ZFS, so I swapped it for a pair of LSIs against a full refund. Regards, Tonmaus

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-15 Thread Tonmaus
chnical" stability as you put it before, is basically the same for Dev and Release builds both from phenomenon and consequence perspective in a OpenSolaris environment. Regards, Tonmaus -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-15 Thread Tonmaus
ny restrictions for LP add-in cards, let alone bays, bays, bays... Tonmaus

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Tonmaus
> > On Wed, April 14, 2010 08:52, Tonmaus wrote: > > safe to say: 2009.06 (b111) is unusable for the purpose, and CIFS is dead in this build. > That's strange; I run it every day (my home Windows "My Documents" folder and all my pho

Re: [zfs-discuss] casesensitivity mixed and CIFS

2010-04-14 Thread Tonmaus
was b130 also the version that created the data set? -Tonmaus

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Tonmaus
safe to say: 2009.06 (b111) is unusable for the purpose, and CIFS is dead in this build. I am using b133, but I am not sure if this is the best choice. I'd like to hear from others as well. -Tonmaus

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-12 Thread Tonmaus
Upgrading the firmware is a good idea, as there are other issues with Areca controllers that have only been solved recently; e.g., 1.42 is probably still affected by a problem with SCSI labels that may cause trouble importing a pool. -Tonmaus

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-12 Thread Tonmaus
without additional corruption messages from the Areca panel. I am not sure if this relates to the rest of your problem, though. Regards, Tonmaus

Re: [zfs-discuss] Areca ARC-1680 on OpenSolaris 2009.06?

2010-04-11 Thread Tonmaus
the 106x. Regards, Tonmaus

Re: [zfs-discuss] Areca ARC-1680 on OpenSolaris 2009.06?

2010-04-10 Thread Tonmaus
expanders: quite a few of the Areca cards actually have expander chips on board. I don't know about the 1680 specifically, though. Cheers, Tonmaus

Re: [zfs-discuss] Areca ARC-1680 on OpenSolaris 2009.06?

2010-04-09 Thread Tonmaus
Hi David, why not just use a couple of SAS expanders? Regards, Tonmaus

Re: [zfs-discuss] How to destroy iscsi dataset?

2010-03-31 Thread Tonmaus
heers, Tonmaus

Re: [zfs-discuss] What about this status report

2010-03-29 Thread Tonmaus
Both are driver modules for storage adapters. Their properties can be reviewed in the documentation: ahci: http://docs.sun.com/app/docs/doc/816-5177/ahci-7d?a=view mpt: http://docs.sun.com/app/docs/doc/816-5177/mpt-7d?a=view ahci has a man entry on b133 as well. Cheers, Tonmaus
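In case it helps, one way to see which of the two drivers a controller is actually bound to is to look at the device tree and the loaded modules; a rough sketch (output varies per system):

  # prtconf -D | grep -i -e ahci -e mpt
  # modinfo | grep -i -e ahci -e mpt

The first shows the driver bound to each device node, the second confirms the modules are loaded.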

Re: [zfs-discuss] What about this status report

2010-03-28 Thread Tonmaus
Yes. Basically working here. All fine under ahci, some problems under mpt (smartctl says that the WD1002FBYS won't allow storing SMART events, which I think is probably nonsense). Regards, Tonmaus

Re: [zfs-discuss] Usage of hot spares and hardware allocation capabilities.

2010-03-20 Thread Tonmaus
pull the drive you will have to unconfigure it in cfgadm as an additional step. If you don't observe that, you can blow things up. Moreover, according to the specs I know, ahci(7D) will not support power save. Bottom line: you will have to find out. As far as the "warning" is concerned: mi
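A minimal sketch of that extra step, assuming a hypothetical attachment point sata0/3:

  # cfgadm -c unconfigure sata0/3
  (pull and replace the drive)
  # cfgadm -c configure sata0/3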

Re: [zfs-discuss] Usage of hot spares and hardware allocation capabilities.

2010-03-20 Thread Tonmaus
urnkey solution that offers ZFS, such as NexentaStor. A popular approach is to follow along the lines of what is being used by Sun, a prominent example being the LSI 106x SAS HBAs in "IT" mode. Regards, Tonmaus

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-19 Thread Tonmaus
n that mpt will support NCQ is mainly based on the marketing information provided by LSI that these controllers offer NCQ support with SATA drives. How (by which tool) do I get to this "actv" parameter? Regards, Tonmaus
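For what it's worth, actv is a column in the extended iostat output; with NCQ working you would expect it to climb above 1 per device under concurrent load:

  # iostat -xn 1

and then watch the actv column for the disks in question.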

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-18 Thread Tonmaus
hci). Disks are SATA-2. The plan was that this combo would have NCQ support. On the other hand, do you know if there is a method to verify whether it is functioning? Best regards, Tonmaus

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-18 Thread Tonmaus
ty itself is not verified. Aha. Well, my understanding was that a scrub basically means reading all data and comparing it with the parities, which means that these have to be re-computed. Is that correct? Regards, Tonmaus

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-17 Thread Tonmaus
You will need at least double failure resilience for such a pool. Doing that with mirrors, ending up with approx. 600 TB gross to provide 200 TB net capacity, is definitely NOT an option. Regards, Tonmaus
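To spell out the arithmetic behind that: surviving two failures with plain mirrors means 3-way mirrors, so roughly 200 TB net x 3 copies = 600 TB gross, whereas a raidz2 vdev spends only two disks per vdev on parity.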

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-16 Thread Tonmaus
ng to say is that the CPU may become the bottleneck for I/O in the case of parity-secured stripe sets. Mirrors and simple stripe sets have almost zero impact on CPU. Those are at least my observations so far. Moreover, x86 processors are not optimized for that kind of work as much as, e.g., an Areca controller with

Re: [zfs-discuss] Posible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Tonmaus
, Tonmaus

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-16 Thread Tonmaus
e typical operation of my system, btw. It can easily saturate the dual 1000 Mbit NICs for iSCSI and CIFS services. I am slightly reluctant to buy a second L5410 just to provide more headroom during maintenance operations, as the device will otherwise be idle, consuming power. Regards, Tonmau

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-16 Thread Tonmaus
y believe that the scrub function is more meaningful if it can be applied in a variety of implementations. I think, however, that the insight that there seem to be no specific scrub management functions is transferable from a commodity implementation to an enterprise configuration. Regards,

Re: [zfs-discuss] Posible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Tonmaus
t is certainly an unwarranted facilitation of Kryder's law for very large storage devices. Regards, Tonmaus

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-16 Thread Tonmaus
including zfs scrub in the picture. From what I have learned here, it rather looks as if there will be an extra challenge, if not a problem, for the system integrator. That's unfortunate. Regards, Tonmaus

Re: [zfs-discuss] Posible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Tonmaus
' is right on, at "10T" available. Duh! I completely forgot about this. Thanks for the heads-up. Tonmaus

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Tonmaus
cenario is rather one to be avoided. Regards, Tonmaus

Re: [zfs-discuss] Posible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Tonmaus
,98T 110M /daten I am counting 11 disks of 1 TB each in a raidz2 pool. This is 11 TB gross capacity and 9 TB net. zpool is, however, stating 10 TB and zfs is stating 8 TB. The difference between net and gross is correct, but where is the capacity of the 11th disk going? Regards, Tonmaus
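A rough calculation suggests this is units plus parity rather than a lost disk: zpool list counts all 11 disks in binary units (11 x 10^12 bytes is about 10.0 TiB, shown as "10T"), while zfs list shows net capacity after raidz2 parity (9 x 10^12 bytes is about 8.2 TiB, shown as roughly "8T").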

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-14 Thread Tonmaus
ters is another. So, is there a lever for scrub I/O priority, or not? Is there a possibility to pause a scrub and resume it later? Regards, Tonmaus
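As far as I can tell, the only lever exposed today is stopping a scrub outright and starting it again later (it then starts from the beginning rather than resuming), e.g. with a placeholder pool name:

  # zpool scrub -s tank
  # zpool scrub tank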

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-14 Thread Tonmaus
ond glance. Regards, Tonmaus

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-14 Thread Tonmaus
Hi Richard, these are:
- 11x WD1002FBYS (7200 rpm SATA drives) in one raidz2 group
- 4 GB RAM
- 1 CPU (L5410)
- snv_133 (where the current array was created as well)
Regards, Tonmaus

[zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-13 Thread Tonmaus
activity so that trade-offs can be made to maintain availability and performance of the pool. Does anybody know how this is done? Thanks in advance for any hints. Regards, Tonmaus

Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-12 Thread Tonmaus
> On Mar 11, 2010, at 10:02 PM, Tonmaus wrote: > All of the other potential disk controllers line up ahead of it. For example, you will see controller numbers assigned for your CD, floppy, USB, SD, CF etc. -- richard Hi Richard, thanks for the explanation. Actually,

Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-11 Thread Tonmaus
merated? I have two LSI controllers; one is "c10", the other "c11". Why can't controllers count from 1? Regards, Tonmaus

Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-11 Thread Tonmaus
> I'd really like to understand what the OS does with respect to ECC. In information technology, ECC (Error Correction Code; the Wikipedia article is worth reading) normally protects point-to-point "channels". Hence, this is entirely a "hardware" thing here. Regards,

Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-11 Thread Tonmaus
far as I understand, it's just a good idea to have ECC RAM once you are talking about a certain amount of data that will inevitably go through a certain path. Servers controlling PB of data are certainly a case for ECC memory, in my view. -Tonmaus

Re: [zfs-discuss] Fishworks 2010Q1 and dedup bug?

2010-03-05 Thread Tonmaus
probably isn't on the same code level as the current dev build. Tonmaus

Re: [zfs-discuss] Question about multiple RAIDZ vdevs using slices on the same disk

2010-03-05 Thread Tonmaus
ive the whole JBOD has to be resilvered. But what will the interactions be between fixing the JBOD in SVM and resilvering in ZFS? Regards, Tonmaus

Re: [zfs-discuss] Fishworks 2010Q1 and dedup bug?

2010-03-05 Thread Tonmaus
. After this successful test I am planning to use dedup productively soon. Regards, Tonmaus

Re: [zfs-discuss] Question about multiple RAIDZ vdevs using slices on the same disk

2010-03-04 Thread Tonmaus
ce will drop dramatically. Regards, Tonmaus

Re: [zfs-discuss] Question about multiple RAIDZ vdevs using slices on the same disk

2010-03-03 Thread Tonmaus
capacity. The drawback is that the per-vdev redundancy has a price in capacity. I hope I am correct - I am a newbie like you. Regards, Tonmaus

Re: [zfs-discuss] zpool status output confusing

2010-02-18 Thread Tonmaus
ch as writing IT firmware over the IR type in order to get all drives hooked up correctly, but that's another greenhorn story.) Best, Tonmaus

Re: [zfs-discuss] Disk Issues

2010-02-15 Thread Tonmaus
Hi, If I may - you mentioned that you use ICH10 over ahci. As far as I know ICH10 is not officially supported by the ahci module. I have also tried myself on various ICH10 systems without success. OSOL wouldn't even install on pre-130 builds, and I haven't tried since. Regards

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-04 Thread Tonmaus
impact that would have if you use them as vdevs of the same pool. Cheers, Tonmaus

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-04 Thread Tonmaus
Hi Arnaud, which type of controller is this? Regards, Tonmaus

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-04 Thread Tonmaus
will my controller really work in a PCIe 1.1 slot?) and 4k clusters are certainly only prominent examples. It's probably truer than ever to fall back to established technologies in such times, including biting the bullet of a cost premium on occasion. Best regards, Tonmaus

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-03 Thread Tonmaus
s" success reports here, if it were. That all rather points to singular issues with firmware bugs or similar than to a systematic issue, doesn't it? Cheers, Tonmaus -- This message posted from opensolaris.org ___ zfs-discuss mail

Re: [zfs-discuss] zpool status output confusing

2010-02-02 Thread Tonmaus
Hi James, am I right to understand that, in a nutshell, the problem is that if page 80/83 information is present but corrupt/inaccurate/forged (name it as you want), ZFS will not get down to the GUID? Regards, Tonmaus

Re: [zfs-discuss] zpool status output confusing

2010-02-02 Thread Tonmaus
Thanks. That fixed it. Tonmaus

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-02 Thread Tonmaus
Hi Simon, I am running 5 WD20EADS in a raidz1 + spare on an ahci controller without any problems I could relate to TLER or head parking. Cheers, Tonmaus

Re: [zfs-discuss] zpool status output confusing

2010-02-02 Thread Tonmaus
Good morning Cindy, > Hi, > Testing how ZFS reacts to a failed disk can be difficult to anticipate because some systems don't react well when you remove a disk. I am in the process of finding that out for my systems. That's why I am doing these tests. > On an x4500, for example, you h

Re: [zfs-discuss] zpool status output confusing

2010-02-02 Thread Tonmaus
If I run # zdb -l /dev/dsk/c#t#d# the result is "failed to unpack label" for any disk attached to ahci or arcmsr controllers. Cheers, Tonmaus
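One thing that may be worth trying: pointing zdb at the slice that actually carries the label instead of the whole-disk node, e.g. (device name is only an example):

  # zdb -l /dev/dsk/c7t0d0s0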

Re: [zfs-discuss] zpool status output confusing

2010-02-01 Thread Tonmaus
Hi again, > Follow recommended practices for replacing devices in a live pool. Fair enough. On the other hand, I guess it has become clear that the pool went offline as part of the procedure. That was partly because I am not sure about the hotplug capabilities of the controller, partly because I wante

Re: [zfs-discuss] zpool status output confusing

2010-02-01 Thread Tonmaus
nabled, then physically swapping out an active disk in the pool with a spare disk that is also connected to the pool, without using zpool replace, is a good approach. Does this still apply if I did a clean export before the swap? Regards, Tonmaus
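For comparison, the documented path would look something like this rather than a raw physical swap (device names are hypothetical):

  # zpool replace tank c1t9d0 c1t12d0
  # zpool status tank

The first command tells ZFS to move onto the new disk; the second lets you watch the resilver.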

Re: [zfs-discuss] zpool status output confusing

2010-02-01 Thread Tonmaus
t0 - t11 are attached to the system. The odd thing still is: t9 was a member of the pool - where is it? And: I thought a spare could only be 'online' in a pool or 'available', not both at the same time. Does it make more sense now? Regards, Tonmaus

[zfs-discuss] zpool status output confusing

2010-02-01 Thread Tonmaus
itch. As you see, scrub is running for peace of mind... Ideas? TIA. Cheers, Tonmaus

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-28 Thread Tonmaus
lent. That's encouraging. I am planning a similar configuration, with WD RE3 1 TB disks though. Regards, Tonmaus

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-28 Thread Tonmaus
path features, that is, using port multipliers or SAS expanders. Avoiding these, one should be fine. I am quite a newbie though, just judging from what I read here. Regards, Tonmaus

Re: [zfs-discuss] zfs destroy hangs machine if snapshot exists- workaround found

2010-01-27 Thread Tonmaus
OL. I had a 1.4 TB ZVOL on the same pool that also wasn't easy to kill. It hung the machine as well - but only once: it was gone after a forced reboot. Regards, Tonmaus