Adding the perspective that scrub could consume my hard disks' life may sound
like a really good reason to avoid scrubbing my system as far as possible, and
thus to avoid experiencing the performance issues of running scrub in the
first place.
I just don't buy this, sorry. It's too [...] the ZFS designers to implement a
scrub function in ZFS and the author of the Best Practices guide to recommend
performing this function frequently. I hear you are coming to a different
conclusion, and I would be interested in learning what could possibly be so
open to interpretation in this.
Regards,
nsumer weekly), not on redundancy level or pool
configuration. Obviously, the issue under discussion affects all imaginable
configurations, though; it may only vary in degree.
Recommending not to use scrub doesn't even qualify as a workaround, in my
view.
Regards,
Tonmaus
on the next service window might help. The advantage: such an option would be
usable on any hardware.
Regards,
Tonmaus
epts in my opinion.
Regards,
Tonmaus
I wonder if this is the right place to ask, as the Filesystem in Userspace
implementation is a separate project. In Solaris, ZFS runs in the kernel. FUSE
implementations are slow, no doubt; the same goes for other FUSE filesystems,
such as the one for NTFS.
Regards,
Tonmaus
that, if we are having a dispute about netiquette, that highlights the
potential substance of the topic more than anything else.
Regards,
Tonmaus
e case
increases the smell of FUD.
-Tonmaus
Why don't you just fix the apparently broken link to your source, then?
Regards,
Tonmaus
ng to them
partition-wise?
Regards,
Tonmaus
Hi,
are the drives properly configured in cfgadm?
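A quick check, for example (the attachment point names it prints are
system-specific):

# cfgadm -al

should list each disk's attachment point as "connected configured ok"; anything
showing up unconfigured would need a cfgadm -c configure <ap_id> first.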
Cheers,
Tonmaus
too
expensive given the performance under ZFS, so I returned it for a full refund
and got a pair of LSIs instead.
Regards,
Tonmaus
chnical" stability as you put it before, is basically the same for Dev and
Release builds both from phenomenon and consequence perspective in a
OpenSolaris environment.
Regards,
Tonmaus
ny restrictions for LP add-in cards, let alone bays, bays,
bays...
Tonmaus
>
> On Wed, April 14, 2010 08:52, Tonmaus wrote:
> > safe to say: 2009.06 (b111) is unusable for the
> > purpose, and CIFS is dead
> > in this build.
>
> That's strange; I run it every day (my home Windows
> "My Documents" folder
> and all my pho
Was b130 also the version that created the dataset?
-Tonmaus
safe to say: 2009.06 (b111) is unusable for the purpose, and CIFS is dead in
this build.
I am using b133, but I am not sure if this is the best choice. I'd like to hear
from others as well.
-Tonmaus
Upgrading the firmware is a good idea, as there are other issues with Areca
controllers that have only been solved recently; e.g., 1.42 is probably still
affected by a problem with SCSI labels that may cause trouble when importing a
pool.
-Tonmaus
without additional corruption messages from the Areca panel. I am not sure if
this relates to the rest of your problem, though.
Regards,
Tonmaus
the 106x.
Regards,
Tonmaus
expanders: quite a few of the Areca cards actually
have expander chips on board. I don't know about the 1680 specifically, though.
Cheers,
Tonmaus
Hi David,
why not just use a couple of SAS expanders?
Regards,
Tonmaus
heers,
Tonmaus
Both are driver modules for storage adapters.
Their properties can be reviewed in the documentation:
ahci: http://docs.sun.com/app/docs/doc/816-5177/ahci-7d?a=view
mpt: http://docs.sun.com/app/docs/doc/816-5177/mpt-7d?a=view
ahci has a man page on b133 as well.
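One way to check which of the two drivers actually binds a given controller is
to look at the device tree (grep pattern and output will vary per system):

# prtconf -D | egrep -i 'ahci|mpt'

the matching lines carry a "driver name: ..." field. The man pages are also
reachable with man -s 7d ahci and man -s 7d mpt.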
cheers,
Tonmaus
Yes. Basically working here. All fine under ahci, some problems under mpt
(smartctl says the WD1002FBYS won't allow storing SMART events, which I think
is probably nonsense).
Regards,
Tonmaus
pull the drive, you will have to unconfigure it in cfgadm as an additional
step. If you don't observe that, you can blow things up.
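For what it's worth, the extra step is just (the attachment point "sata1/3" is
a placeholder; cfgadm -al shows the real ones):

# cfgadm -al
# cfgadm -c unconfigure sata1/3

and a corresponding cfgadm -c configure once the replacement is in.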
Moreover, according to the specs I know, ahci(7D) will not support power save.
Bottom line: you will have to find out.
As far as the "warning" is concerned: mi
urnkey solution that offers ZFS, such as
NexentaStor.
A popular approach is to follow along the rails of what is being used by Sun, a
prominent example being the LSI 106x SAS HBAs in "IT" mode.
Regards,
Tonmaus
n that mpt will support NCQ
is mainly based on the marketing information provided by LSI that these
controllers offer NCQ support with SATA drives. How (by which tool) do I get to
this "actv" parameter?
Regards,
Tonmaus
hci). Disks are SATA-2. The
plan was that this combo would have NCQ support. On the other hand, do you know
whether there is a method to verify that it is functioning?
Best regards,
Tonmaus
ty itself is not verified.
Aha. Well, my understanding was that a scrub basically means reading all data
and comparing it with the parities, which means that these have to be
recomputed. Is that correct?
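(Either way, kicking one off and watching its progress is simply, with "tank"
as a placeholder pool name:

# zpool scrub tank
# zpool status -v tank

the status output shows the scrub progress and any checksum errors found.)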
Regards,
Tonmaus
You will need at least double-failure resilience for such a
pool. If one were to do that with mirrors, ending up with approx. 600 TB gross
to provide 200 TB net capacity is definitely NOT an option.
Regards,
Tonmaus
ng to say is that the CPU may become the bottleneck for I/O in the case
of parity-secured stripe sets. Mirrors and simple stripe sets have almost zero
impact on the CPU; those are at least my observations so far. Moreover, x86
processors are not optimized for that kind of work as much as, e.g., an Areca
controller with
,
Tonmaus
e typical operation of my system,
btw. It can easily saturate the dual 1000 Mbit NICs for iSCSI and CIFS
services. I am slightly reluctant to buy a second L5410 just to provide more
headroom during maintenance operations, as the device would otherwise be idle,
consuming power.
Regards,
Tonmau
y believe that the scrub
function is more meaningful if it can be applied in a variety of
implementations.
I think, however, that the insight that there seem to be no specific scrub
management functions is transferable from a commodity implementation to an
enterprise configuration.
Regards,
t is certainly an unwarranted facilitation of Kryder's law for
very large storage devices.
Regards,
Tonmaus
including zfs scrub in the picture. From what I have learned here, it rather
looks as if there will be an extra challenge, if not outright a problem, for
the system integrator. That's unfortunate.
Regards,
Tonmaus
' is right on, at "10T" available.
Duh! I completely forgot about this. Thanks for the heads-up.
Tonmaus
cenario is rather one to be avoided.
Regards,
Tonmaus
,98T 110M /daten
I am counting 11 disks of 1 TB each in a raidz2 pool. That is 11 TB gross
capacity and 9 TB net. zpool is, however, stating 10 TB and zfs is stating 8 TB.
The difference between net and gross is correct, but where is the capacity of
the 11th disk going?
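(One sanity check on the units, in case zpool and zfs are reporting binary
terabytes while the drives are specified in decimal ones:

# echo 'scale=2; 11*10^12/2^40' | bc
10.00
# echo 'scale=2; 9*10^12/2^40' | bc
8.18

i.e. 11 decimal TB is about 10 binary TB, and 9 decimal TB about 8.2.)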
Regards,
Tonmaus
ters is another. So, is there a
lever for scrub I/O priority, or not? Is there a possibility to pause a scrub
pass and resume it?
Regards,
Tonmaus
ond glance.
Regards,
Tonmaus
Hi Richard,
these are
- 11x WD1002fbys (7200rpm SATA drives) in 1 raidz2 group
- 4 GB RAM
- 1 CPU L5410
- snv_133 (where the current array was created as well)
Regards,
Tonmaus
activity so that trade-offs
can be made to maintain availability and performance of the pool. Does anybody
know how this is done?
Thanks in advance for any hints,
Regards,
Tonmaus
> On Mar 11, 2010, at 10:02 PM, Tonmaus wrote:
> All of the other potential disk controllers line up
> ahead of it. For example,
> you will see controller numbers assigned for your CD,
> floppy, USB, SD, CF etc.
> -- richard
Hi Richard,
thanks for the explanation. Actually,
merated? I have two LSI controllers; one is "c10",
the other "c11". Why can't controllers count from 1?
Regards,
Tonmaus
> I'd really like to understand what OS does with
> respect to ECC.
In information technology, ECC (Error Correction Code; the Wikipedia article
is worth reading) normally protects point-to-point "channels". Hence, this is
entirely a "hardware" thing here.
Regards,
far as I understand, it's just a good idea to have ECC RAM once you are talking
about a certain amount of data that will inevitably go through a certain path.
Servers controlling PB of data are certainly a case for ECC memory, in my view.
probably isn't on the same code level as the
current dev build.
Tonmaus
ive the whole JBOD has to be resilvered. But what will the
interactions be between fixing the JBOD in SVM and resilvering in ZFS?
Regards,
Tonmaus
.
After this successful test I am planning to use dedup productively soon.
Regards,
Tonmaus
ce will drop dramatically.
Regards,
tonmaus
capacity. The drawback is that the per-vdev redundancy has a price in capacity.
I hope I am correct; I am a newbie like you.
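(A toy illustration of that price, assuming 12 disks of 1 TB each, numbers
purely illustrative:

  one 12-disk raidz2 vdev:  10 of 12 disks hold data  ->  ~10 TB usable
  two 6-disk raidz2 vdevs:   2 x 4 = 8 of 12 for data ->   ~8 TB usable

so the extra per-vdev redundancy costs roughly two disks' worth of space.)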
Regards,
Tonmaus
ch as flashing IT firmware over the IR type in order to
get all drives hooked up correctly, but that's another greenhorn story.)
Best,
Tonmaus
Hi,
If I may: you mentioned that you use ICH10 over ahci. As far as I know, ICH10
is not officially supported by the ahci module. I have also tried it myself on
various ICH10 systems without success; OSOL wouldn't even install on pre-130
builds, and I haven't tried since.
Regards
impact that would have if you use
them as vdevs of the same pool.
Cheers,
Tonmaus
Hi Arnaud,
which type of controller is this?
Regards,
Tonmaus
will my controller really work in a PCIe 1.1 slot?) and 4k clusters are
certainly only prominent examples. It's probably more true than ever to fall
back on established technologies in such times, including biting the bullet
of a cost premium on occasion.
Best regards
Tonmaus
s" success reports here, if it
were. That all rather points to singular issues with firmware bugs or similar
than to a systematic issue, doesn't it?
Cheers,
Tonmaus
Hi James,
am I right to understand that, in a nutshell, the problem is that if page 80/83
information is present but corrupt/inaccurate/forged (call it what you want),
ZFS will not get down to the GUID?
regards,
Tonmaus
Thanks. That fixed it.
Tonmaus
Hi Simon,
I am running 5 WD20EADS drives in a raidz1+spare setup on an ahci controller
without any problems I could relate to TLER or head parking.
Cheers,
Tonmaus
Good morning Cindy,
> Hi,
>
> Testing how ZFS reacts to a failed disk can be
> difficult to anticipate
> because some systems don't react well when you remove
> a disk.
I am in the process of finding that out for my systems. That's why I am doing
these tests.
> On an
> x4500, for example, you h
If I run
# zdb -l /dev/dsk/c#t#d#
the result is "failed to unpack label" for any disk attached to the ahci or
arcmsr controllers.
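(One variant that may be worth a try, since ZFS puts its labels inside slice 0
of an EFI-labelled whole disk rather than at the very start of the device; the
device name is just a placeholder:

# zdb -l /dev/dsk/c5t0d0s0

I am not sure whether that covers the arcmsr case, though.)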
Cheers,
Tonmaus
--
This message posted from opensolaris.org
___
zfs-discuss mailin
Hi again,
> Follow recommended practices for replacing devices in
> a live pool.
Fair enough. On the other hand, I guess it has become clear that the pool went
offline as part of the procedure. That was partly because I am not sure about
the hotplug capabilities of the controller, partly because I wante
nabled, then physically
> swapping out an
> active disk in the pool with a spare disk that is is
> also connected to
> the pool without using zpool replace is a good
> approach.
Does this still apply if I did a clean export before the swap?
Regards,
Tonmaus
t0 - t11 are attached
to the system. The odd thing still is: t9 was a member of the pool - where is
it? And: I thought a spare could only be 'online' in any pool or 'available',
not both at the same time.
Does it make more sense now?
Regards,
Tonmaus
itch. As you see, scrub is running for peace of mind...
Ideas? TIA.
Cheers,
Tonmaus
lent. That's encouraging. I am planning a similar configuration, with WD
RE3 1 TB disks though.
Regards,
Tonmaus
path features, that is, using port multipliers or SAS expanders. Avoiding
these, one should be fine.
I am quite a newbie, though, just judging from what I read here.
Regards,
Tonmaus
OL. I had a 1.4
TB ZVOL on the same pool that also wasn't easy to kill. It hung the machine as
well, but only once: it was gone after a forced reboot.
Regards,
Tonmaus